A little more on risk analysis (a very interesting subject, particularly when your life is at stake).
A good example of the self-deception and flawed thinking that can arise in risk assessment came from the aftermath of the Challenger explosion in 1986. Richard Feynman, a Nobel laureate in physics and member of the Rogers Commission into the disaster, found that there was a widespread belief within NASA that the shuttle was 'five nines' reliable per flight, i.e. a 99.999 percent chance of a safe flight per mission. Interestingly, it emerged that this belief was only held by some NASA managers. All of the lower-level engineers that Feynman spoke to thought that the actual reliability was a lot lower than that. Most were reluctant to come up with a figure, but one of the astronauts themselves thought that it was something like 1 in 200 per flight (a figure roughly vindicated by the actual flight history since then).
Incidentally, Feynman was never able to find out where this five nines figure actually came from, but apparently there were some references in very early design documents stating that a 'design goal' should be five nines reliability, 'similar to airliners'. No subsequent study or analysis ever supported that figure. It just seemed to seep through higher-level NASA management by osmosis (similar to what I think happens in sport flying communities, such as skydiving, or PG).
So here you had a situation in which upper-level managers (and therefore the general public and politicians) were operating on the assumption (and an assumption it was) that the Shuttle had the reliability of an airliner. And of course, 'operating on the assumption' meant making day-to-day flight planning decisions, e.g. if it's airliner-safe, we can safely fly a civilian on the Shuttle (the Teacher-In-Space project). Those closer to the reality, particularly the astronauts themselves, thought that the Shuttle was a 'dangerous rocket system' (a quote from the Challenger commander, Dick Scobee). I also seem to recall Dick Scobee saying somewhere something like "you know that someday one of these things is going to explode". As a postscript, I read somewhere that, some years back, NASA revised its assessment of the Shuttle's reliability to 1 in 150 (interestingly, in line with what the engineers and astronauts had been estimating all along).
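Just to put those two estimates side by side, here's a quick back-of-the-envelope sketch (the 100-flight run is purely illustrative, not the actual Shuttle manifest):

```python
# Illustrative only: compare the managers' 'five nines' estimate with the
# engineers' rough 1-in-200 figure over a hypothetical run of 100 missions.
p_managers = 1 / 100_000   # 'five nines': 99.999% chance of a safe flight
p_engineers = 1 / 200      # the rough engineer/astronaut estimate

flights = 100  # purely illustrative flight count
for label, p in [("managers (1 in 100,000)", p_managers),
                 ("engineers (1 in 200)", p_engineers)]:
    p_loss = 1 - (1 - p) ** flights  # chance of at least one loss in the run
    print(f"{label}: P(at least one loss in {flights} flights) = {p_loss:.3f}")
# Roughly 0.001 versus 0.394 -- not a subtle difference in what you'd plan around.
```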
I see a lot of parallels between what happened with Challenger and what's going on now in the PG community: namely, a wide disparity between the community mythology and the actual reality of the risk levels involved.
And here's a question for you, Rick: is it possible to get some idea of the comparative risk between HG and PG? I know this is a pretty daunting task, given the variables and the poor reporting. But one (accepted) way of tabulating risk in this kind of situation is to look at death and injury rates per participant per year. In PG/HG, injury rates would, I imagine, be pretty difficult to tabulate, but I think we can get a much better idea about fatalities, as Rick has been doing. What we are missing, in order to make a reasonable comparison between HG and PG, are the participation rates.
From all that I've read and analysed, in Australia, the US and elsewhere, the HG fatality rate is about 1 in 1,000 per participant per year. This figure comes up in a variety of reports and studies, and also lines up (roughly) with anecdotal evidence. Rick, would it be possible to estimate this figure for PG? Starting with your fatality stats, we would just need the participation rates over the same period. I don't know whether that should be USHPA membership or some other number: that's the difficult part, I suppose.
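For what it's worth, the arithmetic I'm asking for is trivial once we have the two inputs. Here's a sketch with made-up placeholder numbers (they're not real PG data, just there to show the calculation):

```python
# A back-of-the-envelope sketch of the calculation I'm suggesting.
# All numbers below are placeholders, NOT real data -- they would come from
# Rick's fatality counts and whatever participation figure we settle on
# (USHPA membership or otherwise).
fatalities_per_year = 5    # hypothetical annual PG fatalities
participants = 5_000       # hypothetical number of active PG pilots

rate = fatalities_per_year / participants
print(f"{rate:.5f} deaths per participant per year "
      f"(about 1 in {round(1 / rate):,})")
# With these made-up numbers: 0.00100, i.e. about 1 in 1,000.
```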
Knowing this figure would reveal some very interesting things. The 1 in 1,000 per participant per year figure (for HG) is similar to that of sailplane flying, skydiving and even scuba diving. This suggests to me that, at that rate, you are down past equipment issues and are looking at raw human nature. What I mean by this is that if you assemble 1,000 people and get them to do something risky (but with quantifiable risk), you would end up with around one person who'll kill themselves regardless, perhaps because they are foolhardy, have a Jehovah complex or are otherwise self-deluded. In other words, I think that human nature alone will kill roughly 1 in 1,000 of us per year.
If, on the other hand, the figure turns out to be, say, ten times higher, then that would indicate the presence of other unknowns or factors, e.g. design and equipment flaws, or a random risk element somewhere (e.g. atmospheric turbulence + design flaw = death). I think something like this was going on in the early days of HG, later eliminated by design improvements, better training, better culture, etc. Maybe this is what's going on now in the PG community.
So, does anyone know of any respectable report/study that estimates this figure, i.e. the PG fatality rate per participant per year? Or can we estimate this ourselves, particularly working off Rick's data? If we know this value and can make it known to prospective PG pilots, it might be a way of giving them some reasonable risk-assessment data before they make their fateful decision to take up PGs. For example, I think most rational people (the irrational ones are a lost cause) would lean towards HG over PG if they knew that their chance of dying was, say, ten times lower.
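And to show why the number would matter to a prospective pilot, here's a rough sketch of how a 'ten times higher' rate would play out over a flying career. The PG rate and the 20-year span are assumptions for illustration only, not measured figures:

```python
# A sketch of how a prospective pilot might weigh two annual fatality rates
# over a flying career. The HG rate is the 1-in-1,000 figure discussed above;
# the PG rate is the hypothetical 'ten times higher' case, NOT a measured value;
# the 20-year career length is also just an assumption.
hg_rate = 1 / 1_000
pg_rate = 10 * hg_rate

years = 20
for label, annual in [("HG (1 in 1,000/yr)", hg_rate),
                      ("PG (hypothetical 1 in 100/yr)", pg_rate)]:
    p_die = 1 - (1 - annual) ** years  # chance of dying at some point in the career
    print(f"{label}: roughly {p_die:.1%} over {years} years")
# About 2% versus about 18% with these assumptions -- the kind of comparison a
# rational newcomer could actually use.
```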
One last word on probabilities and statistics. Many people have a flawed understanding of probability (which helps with the whole self-deception thing in aviation). Mike Meier has written effectively about this ("Why Can't We Get a Handle On This Safety Thing", https://www.willswing.com/articles/Article.asp?reqArticleName=HandleOnSafety), but the essence is that to get the probability of two unconnected events happening together, we multiply their probabilities. For example, if the chance of there being a bomb on an airliner is one in a million, then the chance of there being two (again, unconnected) bombs on the same plane is one in a million times one in a million, i.e. one in a trillion.
This is why, whenever I fly on an airliner, I carry a bomb on board myself, thereby pushing the odds of there being another bomb out to one in a trillion! Obviously, this logic is flawed, but exactly where is the flaw? (I'll leave that as an exercise for the reader.)
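For anyone who wants the multiplication rule spelled out, here's a tiny sketch using Meier's one-in-a-million figure (with a comment hinting at where my 'own bomb' logic falls over):

```python
# The multiplication rule from Meier's example, spelled out. Note that it only
# applies to genuinely independent, random events -- which is also a hint as to
# where the 'carry my own bomb' trick falls apart.
p_one_bomb = 1 / 1_000_000
p_two_independent_bombs = p_one_bomb * p_one_bomb

print(f"P(one bomb)              = 1 in {round(1 / p_one_bomb):,}")
print(f"P(two independent bombs) = 1 in {round(1 / p_two_independent_bombs):,}")
# 1 in 1,000,000 versus 1 in 1,000,000,000,000 (one in a trillion).
```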
Now take all the complexities of real-life aviation, with all its variables, and see how any real calculation or understanding of probability becomes much more difficult. But the biggest risk-assessment mistake that I see is the old "I've done this safely (i.e. not died) X times before", for whatever choice of "X" you like. People use this argument every day to 'prove' that what they do is safe (particularly in the PG community when they attack Rick's fatality data, or the PDMC analysis). But the flaw in this, as pointed out by Mike Meier's excellent article and by the Challenger accident, is that doing something safely (i.e. not dying) 100 times in a row only shows that the real underlying probability of death is unlikely to be much worse than the rough order of 1 in 100 per attempt. Probability theory even says that it could be worse than that and you've just had a 'lucky run'.
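To put a number on that, here's a sketch using the standard statistical 'rule of three' (a textbook approximation, nothing derived from HG/PG data) of what 100 safe flights does and doesn't tell you:

```python
# What '100 safe flights' actually tells you. The rule of three
# (95% upper bound on risk ~ 3/N after N event-free trials) is a standard
# statistical shortcut, not anything derived from HG/PG data.
N = 100  # flights survived without incident

upper_bound = 3 / N
print(f"After {N} safe flights, 95% upper bound on per-flight risk: "
      f"about 1 in {round(1 / upper_bound)}")

# And the 'lucky run' point: a genuinely risky activity can still produce
# a long clean streak.
for p in (1 / 100, 1 / 50, 1 / 20):
    p_streak = (1 - p) ** N
    print(f"If the true risk were 1 in {round(1 / p)}, "
          f"P(surviving {N} flights anyway) = {p_streak:.1%}")
```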
So we can't judge our real 'underlying' safety based on how many times we've flown without dying, other than as a rough bound. A full understanding has to be based on other factors, such as a reasoned risk assessment, rational thinking and experimentation (the HGMA certification program comes to mind). I don't know how much of that is going on in the PG world, but from what I've seen (by observation; I don't fly PGs) it seems to be fueled largely by bravado and Space Shuttle-like ignorance, and little else (a bit like the early days of HG). Maybe knowing the true probability of dying flying your PG would be a good start in the right direction.
P.S. Sorry about the very long post, but it's a fascinating topic and one that I'm passionate about (obviously).