culty where I found so much junk that
I'm overwhelmed. First I started out by investigating various ideas of
mysticism, and mystic experiences. I went into isolation tanks and got many
hours of hallucinations, so I know something about that. Then I went to
Esalen, which is a hotbed of this kind of thought (it's a wonderful place;
you should go visit there). Then I became overwhelmed. I didn't realize how
much there was.
At Esalen there are some large baths fed by hot springs situated on a
ledge about thirty feet above the ocean. One of my most pleasurable
experiences has been to sit in one of those baths and watch the waves
crashing onto the rocky shore below, to gaze into the clear blue sky above,
and to study a beautiful nude as she quietly appears and settles into the
bath with me.
One time I sat down in a bath where there was a beautiful girl sitting
with a guy who didn't seem to know her. Right away I began thinking, "Gee!
How am I gonna get started talking to this beautiful nude babe?"
I'm trying to figure out what to say, when the guy says to her, "I'm,
uh, studying massage. Could I practice on you?"
"Sure," she says. They get out of the bath and she lies down on a
massage table nearby.
I think to myself, "What a nifty line! I can never think of anything
like that!" He starts to rub her big toe. "I think I feel it," he says. "I
feel a kind of dent -- is that the pituitary?"
I blurt out, "You're a helluva long way from the pituitary, man!"
They looked at me, horrified -- I had blown my cover -- and said, "It's
reflexology!"
I quickly closed my eyes and appeared to be meditating.
That's just an example of the kind of things that overwhelm me. I also
looked into extrasensory perception and PSI phenomena, and the latest craze
there was Uri Geller, a man who is supposed to be able to bend keys by
rubbing them with his finger. So I went to his hotel room, on his
invitation, to see a demonstration of both mindreading and bending keys. He
didn't do any mindreading that succeeded; nobody can read my mind, I guess.
And my boy held a key and Geller rubbed it, and nothing happened. Then he
told us it works better under water, and so you can picture all of us
standing in the bathroom with the water turned on and the key under it, and
him rubbing the key with his finger. Nothing happened. So I was unable to
investigate that phenomenon.
But then I began to think, what else is there that we believe? (And I
thought then about the witch doctors, and how easy it would have been to
check on them by noticing that nothing really worked.) So I found things
that even more people believe, such as that we have some knowledge of how to
educate. There are big schools of reading methods and mathematics methods,
and so forth, but if you notice, you'll see the reading scores keep going
down -- or hardly going up -- in spite of the fact that we continually use
these same people to improve the methods. There's a witch doctor remedy that
doesn't work. It ought to be looked into; how do they know that their method
should work? Another example is how to treat criminals. We obviously have
made no progress -- lots of theory, but no progress -- in decreasing the
amount of crime by the method that we use to handle criminals.
Yet these things are said to be scientific. We study them. And I think
ordinary people with commonsense ideas are intimidated by this
pseudoscience. A teacher who has some good idea of how to teach her children
to read is forced by the school system to do it some other way -- or is even
fooled by the school system into thinking that her method is not necessarily
a good one. Or a parent of bad boys, after disciplining them in one way or
another, feels guilty for the rest of her life because she didn't do "the
right thing," according to the experts.
So we really ought to look into theories that don't work, and science
that isn't science.
I think the educational and psychological studies I mentioned are
examples of what I would like to call cargo cult science. In the South Seas
there is a cargo cult of people. During the war they saw airplanes land with
lots of good materials, and they want the same thing to happen now. So
they've arranged to make things like runways, to put fires along the sides
of the runways, to make a wooden hut for a man to sit in, with two wooden
pieces on his head like headphones and bars of bamboo sticking out like
antennas -- he's the controller -- and they wait for the airplanes to land.
They're doing everything right. The form is perfect. It looks exactly the
way it looked before. But it doesn't work. No airplanes land. So I call
these things cargo cult science, because they follow all the apparent
precepts and forms of scientific investigation, but they're missing
something essential, because the planes don't land.
Now it behooves me, of course, to tell you what they're missing. But it
would be just about as difficult to explain to the South Sea Islanders how
they have to arrange things so that they get some wealth in their system. It
is not something simple like telling them how to improve the shapes of the
earphones. But there is one feature I notice that is generally missing in
cargo cult science. That is the idea that we all hope you have learned in
studying science in school -- we never explicitly say what this is, but just
hope that you catch on by all the examples of scientific investigation. It
is interesting, therefore, to bring it out now and speak of it explicitly.
It's a kind of scientific integrity, a principle of scientific thought that
corresponds to a kind of utter honesty -- a kind of leaning over backwards.
For example, if you're doing an experiment, you should report everything
that you think might make it invalid -- not only what you think is right
about it: other causes that could possibly explain your results; and things
you thought of that you've eliminated by some other experiment, and how they
worked -- to make sure the other fellow can tell they have been eliminated.
Details that could throw doubt on your interpretation must be given, if
you know them. You must do the best you can -- if you know anything at all
wrong, or possibly wrong -- to explain it. If you make a theory, for
example, and advertise it, or put it out, then you must also put down all
the facts that disagree with it, as well as those that agree with it. There
is also a more subtle problem. When you have put a lot of ideas together to
make an elaborate theory, you want to make sure, when explaining what it
fits, that those things it fits are not just the things that gave you the
idea for the theory; but that the finished theory makes something else come
out right, in addition.
In summary, the idea is to try to give all of the information to help
others to judge the value of your contribution; not just the information
that leads to judgment in one particular direction or another.
The easiest way to explain this idea is to contrast it, for example,
with advertising. Last night I heard that Wesson oil doesn't soak through
food. Well, that's true. It's not dishonest; but the thing I'm talking about
is not just a matter of not being dishonest, it's a matter of scientific
integrity, which is another level. The fact that should be added to that
advertising statement is that no oils soak through food, if operated at a
certain temperature. If operated at another temperature, they all will --
including Wesson oil. So it's the implication which has been conveyed, not
the fact, which is true, and the difference is what we have to deal with.
We've learned from experience that the truth will come out. Other
experimenters will repeat your experiment and find out whether you were
wrong or right. Nature's phenomena will agree or they'll disagree with your
theory. And, although you may gain some temporary fame and excitement, you
will not gain a good reputation as a scientist if you haven't tried to be
very careful in this kind of work. And it's this type of integrity, this
kind of care not to fool yourself, that is missing to a large extent in much
of the research in cargo cult science.
A great deal of their difficulty is, of course, the difficulty of the
subject and the inapplicability of the scientific method to the subject.
Nevertheless, it should be remarked that this is not the only difficulty.
That's why the planes don't land -- but they don't land.
We have learned a lot from experience about how to handle some of the
ways we fool ourselves. One example: Millikan measured the charge on an
electron by an experiment with falling oil drops, and got an answer which we
now know not to be quite right. It's a little bit off, because he had the
incorrect value for the viscosity of air. It's interesting to look at the
history of measurements of the charge of the electron, after Millikan. If
you plot them as a function of time, you find that one is a little bigger
than Millikan's, and the next one's a little bit bigger than that, and the
next one's a little bit bigger than that, until finally they settle down to
a number which is higher.
Why didn't they discover that the new number was higher right away?
It's a thing that scientists are ashamed of -- this history -- because it's
apparent that people did things like this: When they got a number that was
too high above Millikan's, they thought something must be wrong -- and they
would look for and find a reason why something might be wrong. When they got
a number closer to Millikan's value they didn't look so hard. And so they
eliminated the numbers that were too far off, and did other things like
that. We've learned those tricks nowadays, and now we don't have that kind
of a disease.
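
The selection effect described above is easy to see in a toy simulation. The sketch below (Python; every number in it is invented, not Millikan's data, and TRUE_VALUE, TOLERANCE, and the noise level are arbitrary assumptions) keeps a new measurement only when it lies close to the currently accepted value. The accepted value then creeps toward the truth in small steps instead of jumping there, which is just the pattern in the plotted history of the electron charge.

    import random

    # Toy simulation of the bias described above; every number here is invented.
    random.seed(0)

    TRUE_VALUE = 1.602   # the "true" value, in arbitrary units
    accepted = 1.591     # the first accepted value, a little too low
    TOLERANCE = 0.004    # results farther than this from the accepted value
                         # get "re-examined" and quietly set aside

    history = [accepted]
    for _ in range(200):
        measurement = random.gauss(TRUE_VALUE, 0.006)  # honest scatter about the truth
        # The bias: a result is kept only if it does not stray too far from
        # what is already believed; otherwise a "reason it went wrong" is found.
        if abs(measurement - accepted) <= TOLERANCE:
            accepted = measurement
            history.append(accepted)

    # The accepted value creeps upward in small steps instead of jumping to the truth.
    print(" -> ".join(f"{v:.4f}" for v in history[:8]), "...", f"{history[-1]:.4f}")
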
But this long history of learning how to not fool ourselves -- of
having utter scientific integrity -- is, I'm sorry to say, something that we
haven't specifically included in any particular course that I know of. We
just hope you've caught on by osmosis.
The first principle is that you must not fool yourself -- and you are
the easiest person to fool. So you have to be very careful about that. After
you've not fooled yourself, it's easy not to fool other scientists. You just
have to be honest in a conventional way after that.
I would like to add something that's not essential to the science, but
something I kind of believe, which is that you should not fool the layman
when you're talking as a scientist. I am not trying to tell you what to do
about cheating on your wife, or fooling your girlfriend, or something like
that, when you're not trying to be a scientist, but just trying to be an
ordinary human being. We'll leave those problems up to you and your rabbi.
I'm talking about a specific, extra type of integrity that is not lying, but
bending over backwards to show how you're maybe wrong, that you ought to
have when acting as a scientist. And this is our responsibility as
scientists, certainly to other scientists, and I think to laymen.
For example, I was a little surprised when I was talking to a friend
who was going to go on the radio. He does work on cosmology and astronomy,
and he wondered how he would explain what the applications of this work
were. "Well," I said, "there aren't any." He said, "Yes, but then we won't
get support for more research of this kind." I think that's kind of
dishonest. If you're representing yourself as a scientist, then you should
explain to the layman what you're doing -- and if they don't want to support
you under those circumstances, then that's their decision.
One example of the principle is this: If you've made up your mind to
test a theory, or you want to explain some idea, you should always decide to
publish it whichever way it comes out. If we only publish results of a
certain kind, we can make the argument look good. We must publish both kinds
of results.
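
The point about publishing both kinds of results can be illustrated with a toy calculation (Python; all numbers invented). Here the true effect is exactly zero, but averaging only the "encouraging" runs makes it look as though there were a real effect.

    import random
    import statistics

    # Toy calculation, all numbers invented: the true effect is exactly zero.
    random.seed(1)
    results = [random.gauss(0.0, 1.0) for _ in range(1000)]

    # Publishing only the "encouraging" outcomes manufactures an apparent effect.
    published = [r for r in results if r > 1.0]

    print(f"mean of all results:       {statistics.mean(results):+.2f}")   # close to 0
    print(f"mean of published results: {statistics.mean(published):+.2f}") # around +1.5
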
I say that's also important in giving certain types of government
advice. Supposing a senator asked you for advice about whether drilling a
hole should be done in his state; and you decide it would be better in some
other state. If you don't publish such a result, it seems to me you're not
giving scientific advice. You're being used. If your answer happens to come
out in the direction the government or the politicians like, they can use it
as an argument in their favor; if it comes out the other way, they don't
publish it at all. That's not giving scientific advice.
Other kinds of errors are more characteristic of poor science. When I
was at Cornell, I often talked to the people in the psychology department.
One of the students told me she wanted to do an experiment that went
something like this -- it had been found by others that under certain
circumstances, X, rats did something, A. She was curious as to whether, if
she changed the circumstances to Y, they would still do A. So her proposal
was to do the experiment under circumstances Y and see if they still did A.
I explained to her that it was necessary first to repeat in her
laboratory the experiment of the other person -- to do it under condition X
to see if she could also get result A, and then change to Y and see if A
changed. Then she would know that the real difference was the thing she
thought she had under control.
She was very delighted with this new idea, and went to her professor.
And his reply was, no, you cannot do that, because the experiment has
already been done and you would be wasting time. This was in about 1947 or
so, and it seems to have been the general policy then to not try to repeat
psychological experiments, but only to change the conditions and see what
happens.
Nowadays there's a certain danger of the same thing happening, even in
the famous field of physics. I was shocked to hear of an experiment done at
the big accelerator at the National Accelerator Laboratory, where a person
used deuterium. In order to compare his heavy hydrogen results to what might
happen with light hydrogen, he had to use data from someone else's
experiment on light hydrogen, which was done on different apparatus. When
asked why, he said it was because he couldn't get time on the program
(because there's so little time and it's such expensive apparatus) to do the
experiment with light hydrogen on this apparatus because there wouldn't be
any new result. And so the men in charge of programs at NAL are so anxious
for new results, in order to get more money to keep the thing going for
public relations purposes, they are destroying -- possibly -- the value of
the experiments themselves, which is the whole purpose of the thing. It is
often hard for the experimenters there to complete their work as their
scientific integrity demands.
Not all experiments in psychology are of this type, however. For
example, there have been many experiments running rats through all kinds of
mazes, and so on -- with little clear result. But in 1937 a man named Young
did a very interesting one. He had a long corridor with doors all along one
side where the rats came in, and doors along the other side where the food
was. He wanted to see if he could train the rats to go in at the third door
down from wherever he started them off. No. The rats went immediately to the
door where the food had been the time before.
The question was, how did the rats know, because the corridor was so
beautifully built and so uniform, that this was the same door as before?
Obviously there was something about the door that was different from the
other doors. So he painted the doors very carefully, arranging the textures
on the faces of the doors exactly the same. Still the rats could tell. Then
he thought maybe the rats were smelling the food, so he used chemicals to
change the smell after each run. Still the rats could tell. Then he realized
the rats might be able to tell by seeing the lights and the arrangement in
the laboratory like any commonsense person. So he covered the corridor, and
still the rats could tell.
He finally found that they could tell by the way the floor sounded when
they ran over it. And he could only fix that by putting his corridor in
sand. So he covered one after another of all possible clues and finally was
able to fool the rats so that they had to learn to go in the third door. If
he relaxed any of his conditions, the rats could tell.
Now, from a scientific standpoint, that is an A-number-one experiment.
That is the experiment that makes rat-running experiments sensible, because
it uncovers the clues that the rat is really using -- not what you think
it's using. And that is the experiment that tells exactly what conditions
you have to use in order to be careful and control everything in an
experiment with rat-running.
I looked into the subsequent history of this research. The next
experiment, and the one after that, never referred to Mr. Young. They never
used any of his criteria of putting the corridor on sand, or being very
careful. They just went right on running rats in the same old way, and paid
no attention to the great discoveries of Mr. Young, and his papers are not
referred to, because he didn't discover anything about the rats. In fact, he
discovered all the things you have to do to discover something about rats.
But not paying attention to experiments like that is a characteristic of
cargo cult science.
Another example is the ESP experiments of Mr. Rhine, and other people.
As various people have made criticisms -- and they themselves have made
criticisms of their own experiments -- they improve the techniques so that
the effects are smaller, and smaller, and smaller until they gradually
disappear. All the parapsychologists are looking for some experiment that
can be repeated -- that you can do again and get the same effect --
statistically, even. They run a million rats -- no, it's people this time --
they do a lot of things and get a certain statistical effect. Next time they
try it they don't get it any more. And now you find a man saying that it is
an irrelevant demand to expect a repeatable experiment. This is science?
This man also speaks about a new institution, in a talk in which he was
resigning as Director of the Institute of Parapsychology. And, in telling
people what to do next, he says that one of the things they have to do is be
sure they only train students who have shown their ability to get PSI
results to an acceptable extent -- not to waste their time on those
ambitious and interested students who get only chance results. It is very
dangerous to have such a policy in teaching -- to teach students only how to
get certain results, rather than how to do an experiment with scientific
integrity.
So I have just one wish for you -- the good luck to be somewhere where
you are free to maintain the kind of integrity I have described, and where
you do not feel forced by a need to maintain your position in the
organization or financial support, or so on, to lose your integrity. May you
have that freedom.