I've begun to rethink the way we teach students to engage in scientific research. I was trained, as a chemist, to use the classic scientific method: Devise a testable hypothesis, and then design an experiment to see whether the hypothesis is correct. And I was told that this method is equally valid for the social sciences. I've changed my mind: I no longer believe this is the best way to do science. I have three reasons for this change of mind.
First, and probably most importantly, I've learned that one often needs simply to sit and observe and learn about one's subject before even attempting to devise a testable hypothesis. What are the physical capacities of the subject? What is the social and ecological structure in which it lives? Does some anecdotal evidence suggest the form that the hypothesis should take? Few granting agencies are willing to provide support for this step, but it is critical to the scientific process, particularly for truly innovative research. Often, a proposal to gain observational experience is dismissed as a "fishing expedition"…but how can one devise a workable hypothesis to test without first acquiring basic knowledge of the system, and how better to obtain such basic knowledge than to observe the system without any preconceived notions?
Second, I've learned that truly interesting questions often can't be reduced to a simple testable hypothesis, at least not without becoming somewhat absurd. "Can a parrot label objects?" can be framed as a testable hypothesis, but it isn't very interesting on its own…what is interesting, for example, is how that labeling compares to the behavior of a young child, what type of training enables such learning and what type is useless, how far such labeling can transfer across exemplars, and…. Well, you get the picture: the exciting part is the series of interrelated questions that arise and expand almost indefinitely.
Third, I've learned that the scientific community's emphasis on hypothesis-based research leads too many scientists to devise experiments to prove, rather than test, their hypotheses. Many journal submissions lack any discussion of alternative, competing hypotheses: Researchers don't seem to realize that collecting data consistent with their original hypothesis doesn't mean the hypothesis is unconditionally true. Or they buy into the fallacy that absence of evidence for something is necessarily evidence of its absence.
I'm all for rigor in scientific research — but let's emphasize the gathering of knowledge rather than the proving of a point.