There’s never been any real research done on the ability of Australian Pool Lifeguards to detect a drowning person – which is a little surprising considering it’s central to the role.
How long do you think it would take to see someone in trouble? Everyone thinks they’re pretty good at the game. If someone asked me, I’d probably say “not long”. And yet the reality could be quite different.
So how soon is fast enough, how long is too slow and where on the scale do we sit in Australia?
From a practical viewpoint, fast enough is any amount of time that results in the person making a full recovery. A drowning that results in permanent injury or death is obviously too slow.
In real life the goal posts move around a bit too. I have a video that shows a child struggling on the surface for one minute and ten seconds. Another video (courtesy of Poseidon) shows an adult male, having what is probably a cardiac arrest, on the surface for only three seconds and no struggle.
In the first event the child goes under water with blood oxygen levels that are already dangerously low, whereas the man in the second event goes to the bottom with blood that is fully oxygenated. He could have three or four minutes before he suffers hypoxic injury. The irony is that in the first event there is plenty of opportunity to detect the child on the surface. In the second event the window of opportunity is very, very narrow, and detection on the bottom is even more challenging.
So what is the best case scenario?
In an attempt to set a standard, Ellis &amp; Associates (USA) developed the 10/20 Protection Rule in the mid-1980s. I’d never heard it mentioned in Australia until this season (2015-16). At least one organisation I know of has begun using Ellis &amp; Associates methods to train its lifeguards.
The 10/20 Rule stipulates that if a lifeguard can recognise a drowning person within 10 seconds and then get to them within a further 20 seconds, the chances of injury should be minimal. This means scanning your zone every 10 seconds, so that if you begin to drown just as I scan past you, I’ll still see you within the next 10-second scan. Then, assuming I’m no further from you than the distance I can travel in 20 seconds, you’ll have been in trouble for no longer than 30 seconds all up.
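The worst-case arithmetic behind the rule can be sketched in a few lines. This is my own illustration of the scan-interval-plus-response-time model, not an official Ellis &amp; Associates calculation, and the function name is hypothetical:

```python
# Worst-case time a swimmer spends in trouble under the 10/20 Rule.
# Hypothetical illustration of the arithmetic, not an Ellis & Associates tool.

def worst_case_time_in_trouble(scan_interval_s: int, response_time_s: int) -> int:
    """If a swimmer starts to drown just after the lifeguard's gaze passes
    them, detection can take up to one full scan interval, and reaching
    them can take up to the full response time."""
    return scan_interval_s + response_time_s

# 10-second scan interval plus 20-second response time.
print(worst_case_time_in_trouble(10, 20))  # prints 30
```

The same arithmetic shows why a slower scan pattern degrades quickly: a 30-second scan with the same 20-second response leaves a swimmer in trouble for up to 50 seconds.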
For a good deal of time the 10/20 Rule seemed to work well, particularly for zoned, intensive scanning. The 10/20 Rule set a clear and measurable benchmark. Random, unannounced dummy drops could be done with both detection and recovery times recorded.
The wheels fell off the wagon in 2001 when research commissioned by Poseidon Technologies and carried out by Ellis & Associates at 500 of their client centres revealed that the average detection time for a lifeguard was seventy-four seconds. This was clearly not what they were hoping for.
It seemed that while lifeguards may always be looking, they weren’t always seeing. The great work by Daniel Simons &amp; Christopher Chabris, of Invisible Gorilla fame, has also made us think differently about this. There are many factors that can stop a lifeguard seeing a manikin within 10 seconds. Even when they can, how long they can maintain that level of vigilance is another question.
Since then Ellis &amp; Associates have developed and refined additional training they call Vigilance Awareness Training (VAT). As a consequence, average detection times have been steadily dropping:
- 2002: 56 seconds.
- 2003: 24.7 seconds (600 tests).
- 2006: 18.9 seconds (1,000 tests).
So is the 10/20 Rule humanly achievable? After more than 30 years of refining techniques, shaving those last 8.9 seconds off is going to be tough.
Is the 10/20 Rule still a good benchmark? In terms of drowning, yes; it provides a better than fair chance of preventing injury. The 10/20 Rule is the Holy Grail and a useful management tool for developing zones. But it can’t accurately be described as a standard against which we can measure a lifeguard’s performance. It feels more at home amongst Special Forces lifeguards than the more common foot-soldier lifeguards who make up the bulk of our workforce.
In the US the 10/20 Rule has also become a bit of a double-edged sword. Because a lifeguard didn’t scan their zone in 10 seconds, or didn’t reach the person in 20 seconds, it can be easy to conclude the lifeguard wasn’t doing their job; not necessarily so. Nonetheless, the 10/20 Rule has been used against lifeguards and organisations by lawyers for complainants.
Like I said earlier, the 10/20 Rule works best where lifeguards are allocated to zones. In Australia we often use a more roving style of supervision that sees lifeguards walk around a pool in a way that reflects the conditions, the crowd and the behaviours they are seeing.
So why wouldn’t we do more zoned, intensive scanning in Australia? I’m sure we’d love to. One of the main reasons we don’t is cost; wages for lifeguards in Australia are around three times what they are in countries like the USA.
Nor do we have the rates of drowning or litigation that would drive the improvements intensive scanning would bring. In Australia, rightly or wrongly, we tend to focus on the preventative and lifesaving ends of the risk management equation, and not as much on the detection component that is bookended by the two. Maybe we’re still under the illusion that lifeguards won’t miss anything.
At this point in time our drowning rate could be giving us a false impression that we’re doing ok. And sometimes we are. I’ve seen a few videos recently of great saves in Australian pools; again mostly in intensive scanning situations.
The challenge for the Australian aquatic industry into the future will be whether we can maintain, let alone improve, our current drowning rate (fatal and non-fatal) in public swimming pools in the face of an ageing and increasingly diverse population. A starting point would be to report annual fatal drowning data for public facilities separately in the RLSSA National (fatal) Drowning Report, as occurs in New Zealand.
So… who’s feeling brave? Who’s up for their lifeguards to be tested under random, unannounced conditions?