My comments about the absence of acceptable evidence for alpha wave
therapy led me to ask myself what _would_ be adequate evidence. Here's what I came up with:
1. A reasonable therapeutic goal for the therapy.
Any programme which starts with grandiose claims that a therapy can do
everything from curing bad breath to restoring brain-damaged individuals
to normalcy is clearly not a scientific exercise. The starting premise
should be the investigation of a small number of carefully-defined and
reasonable claims. Otherwise it's nutcake therapy not worth considering.
2. Careful diagnosis of the disorder in the subject population.
There should be clear criteria which are used to establish that the
subjects studied do have the disorder in question. The diagnosis should be
carried out by independent experts unconnected with the study.
3. Valid measures should be used.
Anecdotal evidence won't do. Specific quantitative measures should be
used. These can include self-report data (as collected, for example, using
a questionnaire with ratings scales for various symptoms), direct measures
of behaviour, objective indices of socially-relevant activities (e.g. how
many go back to work), and physiological measures. But what is
particularly important is that these measures be quantitative, meaningful
(valid) and reliable. In particular, when ratings of improvement are used,
it must be shown that the results for different judges rating
independently show high agreement. It is also important that the ratings
be done blindly (without knowledge of which individual received which treatment).
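To make the reliability requirement concrete, here is a minimal sketch (the ratings are invented for illustration, not drawn from any real study) of how agreement between two independent judges can be quantified with Cohen's kappa, which corrects raw agreement for chance:

```python
# Sketch: inter-rater agreement via Cohen's kappa for two judges who
# independently rate each subject as "improved" or "same".
# All ratings below are made-up illustrative data.

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters, corrected for chance agreement."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    # Observed agreement: fraction of subjects both raters scored alike.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal frequencies.
    p_expected = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
    )
    return (p_observed - p_expected) / (1 - p_expected)

a = ["improved", "improved", "same", "improved", "same", "same", "improved", "same"]
b = ["improved", "improved", "same", "same",     "same", "same", "improved", "same"]
print(round(cohens_kappa(a, b), 2))  # prints 0.75
```

A kappa near 1 means the judges agree far beyond chance; a kappa near 0 means their "agreement" is no better than guessing, and the ratings should not be trusted.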
4. Establishment of an equally-credible control group.
Placebo effects being what they are, some proportion of people always do
better just because they believe in the therapy and the therapist. In
addition, the passage of time alone can do wonders. These non-specific
effects of treatment must be ruled out if one wants to make _specific_
claims for a particular therapy, as most proponents of new therapies want to do.
One way to do this is to devise a believable but fake alternative
treatment. This is not easy to do, and runs into ethical problems (is it
acceptable to fool people with a therapy we know is nonsense?). The
approach I like, which is rarely used, is to compare the therapy under
investigation with the best current treatment for the problem. After all,
what we really want to know is if this new therapy is better than what we
have, and putting the two into head-to-head competition will accomplish this.
5. Random assignment of subjects to control or experimental group.
This is absolutely essential if one wants to draw causal inferences from
the study (and this is what it's all about, isn't it?).
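Random assignment itself is trivial to implement, which makes its omission all the harder to excuse. A minimal sketch (subject IDs and group labels are hypothetical): shuffle the recruited pool and split it, so that neither experimenter preference nor subject self-selection determines who gets the therapy:

```python
# Sketch: random assignment of a recruited subject pool to treatment
# and control groups. Subject IDs here are hypothetical.
import random

def randomly_assign(subjects, seed=None):
    """Shuffle the pool and split it in half; the seed is only for
    reproducibility of the example."""
    pool = list(subjects)
    random.Random(seed).shuffle(pool)
    half = len(pool) // 2
    return {"treatment": pool[:half], "control": pool[half:]}

groups = randomly_assign(range(20), seed=42)
print(len(groups["treatment"]), len(groups["control"]))  # prints: 10 10
```

Any scheme in which the clinician picks who gets the new therapy, or subjects volunteer for it, lets pre-existing differences between the groups masquerade as treatment effects.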
6. Use of an appropriate statistical technique to evaluate results.
Too many studies massage the data in various questionable ways that make
the conclusions doubtful. An old saying I like is "If you torture data
long enough, it'll confess". Moreover, the results should not only be
statistically significant; they must also be clinically significant.
This means they should benefit enough people, in a way that makes a
meaningful difference to their lives.
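The distinction between the two kinds of significance can be made concrete with an effect size. A significance test answers whether a group difference is likely real; a standardized effect size such as Cohen's d answers whether it is big enough to matter. A sketch with invented symptom scores (lower = better):

```python
# Sketch: Cohen's d, the standardized difference between group means.
# The symptom scores below are invented for illustration only.
from statistics import mean, stdev

def cohens_d(treated, control):
    """Mean difference in units of the pooled standard deviation."""
    n1, n2 = len(treated), len(control)
    s1, s2 = stdev(treated), stdev(control)
    pooled = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treated) - mean(control)) / pooled

treated = [12, 10, 9, 11, 8, 10, 9, 11]    # scores after therapy
control = [14, 13, 15, 12, 14, 13, 15, 14]  # scores after control condition
d = cohens_d(treated, control)  # negative: treated group scored lower (better)
```

With a large enough sample, even a d of 0.05 will reach statistical significance, yet no patient would notice the difference; conventionally, effects below about 0.2 are considered trivially small.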
7. For positive results, provide a long follow-up.
It is not so difficult to get positive results in the short-term, but
what happens six months, one year, or two years down the road? A
convincing study will have a lengthy follow-up with careful quantitative
assessment to determine whether the therapy can produce lasting effects.
8. The findings should be reported in a respected peer-reviewed journal.
This provides the assurance of quality-control through the opinions of
other scientists that the work is acceptable as science. Publication also
makes the work public, where it can be subjected to further error-checking.
A study which paid attention to all of the above and reported positive
results would be deserving of serious attention. Heck, if I had the
disorder, I'd be willing to undergo the therapy myself. But it's difficult
and expensive to do a study right, and doing it right often means that you
find out that it doesn't work. So it's no wonder that clinicians instead
prefer to take short-cuts such as those exemplified by the methods and
findings presented by Dr. Ochs on his web page (www.flexyx.com), where
there is little indication of attention to the principles I've listed.
I don't mean to single Dr. Ochs's work out. Certainly there are many
similar examples elsewhere. Let's just not confuse whatever it is with science.
(Flames welcome, but please do it in public.)
Stephen Black, Ph.D. tel: (819) 822-9600 ext 2470
Department of Psychology fax: (819) 822-9661
Bishop's University e-mail: sblack at ubishops.ca
J1M 1Z7 Bishop's Department of Psychology web page at:
"I'm a scientist. Certainty is a big word for me."
-from the movie "Volcano"