1.3 Hypotheses and Theories
The use of hypotheses is one of the key distinguishing features of quantitative research. Rather than making things up as they go along, scientists develop a hypothesis ahead of time and design a study to test this hypothesis. (Qualitative research, in contrast, often starts by gathering information and ends with a hypothesis for future inquiries.) This section covers the process of turning rough ideas about the world into testable hypotheses. We discuss the primary sources of hypotheses as well as several criteria for evaluating them.
Sources of Research Ideas
Every study starts with an idea that researchers frame as a question. But where do all of these great ideas come from in the first place? Students are often nervous about starting a career in research for fear that they might not be able to come up with great ideas to test. In reality, though, ideas are easy to come by, if a person knows where to look. The following material suggests some handy sources for developing research ideas.
Real-World Problems
Nazi Lieutenant Colonel Adolf Eichmann’s claims during his trial that he was just “following orders” throughout the Holocaust inspired Stanley Milgram to conduct a groundbreaking study about obedience to authority.
A great deal of research in psychology and other social sciences is motivated by a desire to understand—or even solve—a problem in the world. This process involves asking a big question about some phenomenon and then trying to think of answers based on psychological mechanisms.
In 1961, Adolf Eichmann was on trial in Jerusalem for his role in orchestrating the Holocaust. Eichmann’s repeated statements that he was only “following orders” caught the attention of Stanley Milgram, a young social psychologist who had just earned a Ph.D. from Harvard University and who began to wonder about the limits of this phenomenon. To understand the power of obedience, Milgram designed a well-known series of experiments that asked participants to help with a study of “punishment and learning.” The protocol required them to deliver shocks to another participant—actually an accomplice of the experimenter—every time he got an answer wrong. Milgram discovered that two-thirds of participants would obey the experimenter’s commands to deliver dangerous levels of shocks, even after the victim of these shocks appeared to lose consciousness. These results revealed a frightening tendency in ordinary people to obey authority. We will return to this experiment in our discussion of ethics later in the chapter.
Reconciliation and Synthesis
Ideas can also spring from resolving conflicts between existing ideas. The process of resolving an apparent conflict involves both reconciliation, or finding common ground among the ideas, and synthesis, or merging all the pieces into a new explanation. In the late 1980s, psychologists Jennifer Crocker and Brenda Major noticed an apparent conflict in the prejudice literature. Based on everything then known about the development of self-esteem, members of racial and ethnic minority groups would have been expected to have lower-than-average self-esteem because of the prejudice they faced. However, study after study demonstrated that African-American college students, in particular, had self-esteem equal to or higher than that of European-American students. Crocker and Major (1989) offered a new theory to resolve this conflict, suggesting that the existence of prejudice actually grants access to a number of “self-protective strategies.” For example, minority group members can blame prejudice when they receive negative feedback, making the feedback much less personal and therefore less damaging to self-esteem. The results of this synthesis were published in a 1989 review paper, which many people credit with launching an entire research area on the targets of prejudice.
Learning From Failure
Kevin Dunbar, a professor at Dartmouth College, has spent much of his career studying the research process. That is, he interviews scientists and sits in on lab meetings in order to document how people actually do research in the trenches. In a 2009 interview with Jonah Lehrer, Dunbar reported the shocking statistic that approximately 50 to 75% of research results are unexpected. Even though scientists plan their experiments carefully and use established techniques, the data are surprising more often than not. But even more surprising was the tendency of most researchers to discard the data if they did not fit their hypothesis. “These weren’t sloppy people,” Dunbar commented. “They were working in some of the finest labs in the world. But experiments rarely tell us what we think they’re going to tell us. That’s the dirty secret of science.” The trick, then, is knowing what to do with data that make a particular study seem like a failure (Lehrer, 2009).
According to Dunbar, the secret to turning failure into opportunity is twofold: First, question assumptions about why the study feels like a failure in the first place. Perhaps the data contradict the hypothesis but can be explained by a new one, or perhaps the data suggest a dramatic shift in perspective. Second, seek new and diverse perspectives to help in interpreting the results. Perhaps a cognitive psychologist can shed light on reactions to prejudice. Alternatively, perhaps an anthropologist knows what to make of the surprising results of a study on aggression. Some of the best and most fruitful research ideas have sprung from combining perspectives from different disciplines. Sometimes, all that a strange dataset needs is a fresh set of eyes.
Research: Thinking Critically
The Psychology Behind Pricing
Throughout this textbook, we will use short articles about research results as a way to illustrate key points in the text. Follow the link below to an article by William Poundstone, a bestselling author and expert on the psychology of pricing decisions. In this article, Poundstone discusses the peculiar appeal of prices ending in the number “9” and reviews recent research on this appeal by a pair of consumer psychology researchers. As you read the article, consider what you have learned so far about the research process, and then respond to the questions below.
Think About It:
1. What hypothesis are Coulter and Coulter trying to test? Try to state this as succinctly as possible.
2. How was “perception of discounts” operationalized in their studies?
3. How were the key variables measured?
4. How do Coulter and Coulter explain their findings? Are there plausible alternative explanations?
5. Are these studies primarily aimed at description, explanation, prediction, or change? Explain.
From Ideas to Hypotheses
Once a researcher develops a research question, the next step is to translate that question into a testable hypothesis—the first step in the HOME method. Broadly speaking, hypotheses are developed in one of two ways: bottom-up or top-down. This section explores these options in more detail.
Bottom-Up—From Observation to Hypothesis
Research hypotheses are often based on observations about the world around us. For example, people may notice the following tendencies as they observe those around them:
· Teenagers do a lot of reckless things when their friends do them.
· Close friends and couples tend to dress alike.
· Everyone faces the front of the elevator.
· Church attendees sit and stand at the same time.
Based on this set of four observations, we could develop a general hypothesis about human behavior: People have a tendency to go along with the crowd and conform to group behaviors. This process of developing a general statement from a set of specific observations is called induction, and it is perhaps best understood as a “bottom-up” approach. In this case, we have developed our hypothesis about conformity from the ground up, based on observing behavioral tendencies.
The process of induction is a very common and useful way to generate hypotheses. Most notably, this process serves as a great source of ideas that are based in real-world phenomena. Induction also helps us to think about the limits of an observed phenomenon. For example, we might observe the same set of conforming behaviors and speculate whether people will also conform in dangerous situations. What if smoke started pouring into a room and no one else reacted? Would people act on their survival instinct or conform to the group and stay put? Social psychologists Bibb Latané and John Darley (1969) conducted just such an experiment with groups of college undergraduates. Participants were asked to sit in a classroom and complete a survey. Meanwhile, the experimenters piped in smoke (actually dry ice) through the air vents. They hypothesized—and found—that the pressure to conform was stronger than the instinct to flee from a potential fire.
Top-Down—From Theory to Hypothesis
The other approach to developing research hypotheses is to work down from a bigger idea. The term for such a big idea is theory, which refers to a collection of ideas used to explain the connections among variables and phenomena. For example, the theory of evolution organizes knowledge about how species have developed and changed over time. One piece of this theory claims that human life originated in Africa and then spread to other parts of the planet. This idea in and of itself, however, is too big to test in a single study. Instead, researchers move from the “top down” and develop a specific hypothesis from a more general theory, a process known as deduction.
The biggest advantage of developing hypotheses through deduction is the ease of placing the study—and its results—in the larger context of related research. Because the hypotheses represent a specific test of a general theory, results can be combined with other research that tested the theory in different ways. In the evolution example, a researcher might hypothesize that the fossils from human ancestors found in Africa would be older than those found in other parts of the world. If this hypothesis were supported, it would be consistent with the overall theory about human life originating in Africa. And as more and more researchers develop and test their own hypotheses about the origins of life, our cumulative knowledge about evolution continues to grow.
Table 1.3 presents a comparison of these two sources of research hypotheses, showcasing their relative advantages and disadvantages.
Table 1.3 Comparing sources of hypotheses
Deduction | Induction
“Top-down,” from theory to hypothesis | “Bottom-up,” from observation to hypothesis
Easy to interpret findings | Can be hard to interpret without prior research
Helps science build and grow | Helps understanding of the real world
Might miss out on new perspectives | Great way to discover new ideas
Evaluating Theories
While experiments are designed to test one hypothesis at a time, the overall progress in a field is measured by the strength and success of its theories. If we think of hypotheses as individual combat missions on the battlefield, then theories are the overall battle plan. So, how do researchers know whether their theories are any good? Next, we cover four criteria that are useful in evaluating theories.
Explains the Past; Predicts the Future
One of the most important requirements for a theory is that it be consistent with existing knowledge. If a physicist theorized that everything on earth should float off into space, that theory would conflict with millennia’s worth of evidence showing that gravity exists. Similarly, if a psychologist argued that people learn better through punishment than through rewards, that theory would conflict with several decades of research on learning and reinforcement. A new theory should offer a new perspective and a new way of thinking about familiar concepts, but it cannot be so creative that it clashes with what scientists already know. On a related note, a theory also has to lead to accurate predictions about the future, meaning that it has to stand up to empirical tests. There are usually multiple ways to explain existing knowledge, but not all of them will be supported as researchers test their assumptions in new circumstances. At the end of the day, the best theory is the one that best explains both past and future data.
Testable and Falsifiable
The theory of evolution is falsifiable, meaning that it could be disproved under the right conditions, such as the discovery of fossil evidence contradicting the theory.
Second, a theory needs to be stated in such a way that it leads to testable predictions. More specifically, a theory should be subject to a standard of falsifiability, meaning that the right set of conditions could prove it wrong (Popper, 1959). Calling something “falsifiable” does not mean it is false, only that if it were false, demonstrating its falsehood would be possible. The Darwinian theory of evolution offers an example of this criterion. One of the primary components of evolutionary theory is the idea that species change and evolve from common ancestors over time in response to changing conditions. So far, all evidence from the fossil record has supported this theory—older variants of species always appear in deeper fossil layers. If conflicting evidence ever were to appear, however, it would deal a serious blow to the theory. The biologist J. B. S. Haldane was once asked what kind of evidence could possibly disprove the theory of natural selection, to which he replied, “fossil rabbits in the Pre-Cambrian era”—that is, a modern version of a mammal buried in a much older fossil layer (Ridley, 2004).
Research: Thinking Critically
Intelligence, Politics, and Religion
Follow the link below to an article by Daniela Perdomo, a staff writer and editor for Alternet. In this article, Perdomo reviews the controversy over a recent scientific study claiming that liberals and atheists are more intelligent. As you read the article, consider what you have learned so far about the research process, and then respond to the questions below.
Think About It:
1. What general theory is Kanazawa trying to test? How does the theory differ from his specific hypothesis?
2. How did Kanazawa operationalize liberalism and intelligence in his research? Are there problems with the way these constructs were operationalized? Explain.
3. What were Kanazawa’s main findings? How is the strength of this evidence influenced by his research methods?
4. Why do you think this research is controversial? If Kanazawa’s methodology were more rigorous, would it still be controversial?
Parsimonious
Third, a theory should strive to be parsimonious, or as simple and concise as possible without sacrificing completeness. (Or, as Einstein [1934] famously quipped during a lecture at Oxford: “Everything should be made as simple as possible, but no simpler” [p. 165].) One helpful way to think about this criterion is in terms of efficiency. Theories need to spell out their components in a way that represents everything important but does not add so much detail that they become hard to understand. This means that theories can lack parsimony either because they are too complicated or because they are too simple.
At one end of this spectrum, Figure 1.1 presents a theoretical model of the causes of malnutrition (Cheah et al., n.d.). This theory does a superb job of summarizing all of the predictors of child malnutrition across multiple levels of analysis. The theory’s potential problem, though, is that it may be too complicated to test.
Figure 1.1: Predictors of malnutrition
Figure 1.1 presents a theoretical model of the causes of malnutrition.
At the other end of the spectrum, Figure 1.2 shows the overall theoretical perspective behind behaviorism. In the early part of the 20th century, the behaviorist school of psychology argued that everything organisms do could be represented in behavioral terms, without any need to invoke the concept of a “mind.” The overarching theory looked something like Figure 1.2, with the “black box” in the middle representing mental processes. However, the cognitive revolution of the 1960s eventually displaced this theory, as it became clear that behaviorism was too simple. To strike an ideal balance, then, a researcher constructs a theory that includes all the necessary pieces and nothing unnecessary.
Figure 1.2: The behaviorist model
Figure 1.2 presents the overall theoretical perspective behind behaviorism. The “black box” in the middle represents mental processes.
Promotes Research
Finally, science is a cumulative field, which means that a theory is really only as good as the research it generates. To state it more bluntly: A theory is essentially useless if no one follows up on it with more research. Thus, one of the best bases for evaluating a theory is whether it encourages new hypotheses. Consider the following example, drawn from real research in social psychology. Since the early 1980s, Bill Swann and his colleagues have argued that people prefer consistent feedback to positive feedback, meaning that they would rather hear things that confirm what they think of themselves. One provocative hypothesis arising from this theory proposes that people with low self-esteem are more comfortable with a romantic partner who thinks less of them than with one who thinks well of them. This hypothesis has been tested and supported many times in a variety of contexts, and it continues to draw people in because it offers a compelling explanation for why some people stay in bad relationships—a sadly familiar phenomenon. (For a review of this research, see Swann, Rentfrow, & Guinn, 2005.)
The Cycle of Science
Figure 1.3: The cycle of science
Now, let us take a step back and look at the big picture. We have covered the processes of developing and evaluating both broad theories and specific hypotheses. Of course, none of these pieces occurs in isolation; science is an ongoing process of updating and revising our views based on what the data show. This overall process of quantitative research works something like the cycle depicted in Figure 1.3. Researchers start with either an overall theory or a set of observations about how concepts relate to one another and use this to generate specific, testable, and falsifiable hypotheses. These hypotheses then form the basis for research studies, which generate empirical data. Based on these data, we may have reason to suspect the overall theory needs to be refined or revised. And, so, we develop a new hypothesis, collect some new data, and either confirm or fail to confirm our suspicion. The process does not end there, however: other researchers may see our theory from a new perspective and develop their own hypotheses, which lead to their own data and possibly to a further revision of the theory. The scientific approach may strike some as a slow and roundabout way to solve problems, but it is the most objective one available.
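For readers who think in code, the cycle can be summarized as a loop. The following is a minimal Python sketch, offered purely as an illustration: the "theory" about a coin, the numbers, and every function name are invented for this example, and the sketch captures only the structure of the cycle in Figure 1.3, not real research practice.

```python
import random

def run_study(n_observations=100):
    """Stand-in for a research study: observe a (secretly biased) coin.

    In this toy example, the coin actually lands heads 70% of the time,
    so a theory claiming a fair coin will be contradicted by the data.
    """
    return sum(random.random() < 0.7 for _ in range(n_observations))

# Initial theory (hypothetical): "the coin is fair."
theory = {"heads_rate": 0.5}

for study_number in range(1, 4):
    # Deduce a specific, falsifiable hypothesis from the current theory.
    predicted_rate = theory["heads_rate"]

    # Collect empirical data.
    observed_rate = run_study() / 100

    # Compare prediction and data; a large gap counts as disconfirming.
    if abs(observed_rate - predicted_rate) < 0.05:
        print(f"Study {study_number}: data consistent ({observed_rate:.2f})")
    else:
        print(f"Study {study_number}: data inconsistent ({observed_rate:.2f}); revising theory")
        theory["heads_rate"] = observed_rate  # revise, then test again
```

With a biased coin, the first pass typically fails to support the initial theory, the theory is revised, and later passes come out consistent: the same revise-and-retest rhythm described above.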
Consider an example of how this cycle works in real life. In the 1960s, social psychologists were beginning to study the ways that people explain the behavior of others (e.g., when someone cuts me off in traffic, I tend to assume he is a jerk). One early theory, called “correspondent inference theory,” argued that people would come up with these explanations in a rational way. For example, if we read a persuasive essay but then learn that the author was assigned a position on the topic, we should refrain from drawing any conclusions about the writer’s actual position. However, research findings demonstrated just the opposite. In a landmark 1967 study, participants actually ignored information about whether authors had chosen their own position on the issue, assuming instead that whatever they wrote reflected their true opinions (Jones & Harris, 1967). In response to these data (and similar findings from other studies), correspondent inference theory was gradually revised to incorporate what was termed the “fundamental attribution error”: people tend to ignore situational influence and assume that all behavior simply reflects the person’s own disposition. The study’s authors developed a theory, came up with a specific hypothesis, and collected some empirical data to test it. But because the data ran counter to the theory, the theory was ultimately revised to account for the empirical evidence. In this particular case, the cycle of research on the fundamental attribution error continues to this day, over 50 years later.
Proof and Disproof
While on the subject of adjusting theories, think about the notions of “proof” and “disproof.” Because science is a cumulative field, decisions about the validity of a theory are ultimately made based on results of several studies from several research laboratories. This means that a single research study has rather limited implications for an overall theory. This also means that a researcher must use the concepts of proof and disproof in the correct way. We will elaborate on this as we move through the course, but for now we can rely on two very simple rules:
1. If the data from one study are consistent with our hypothesis, we support the hypothesis rather than “prove” it. In fact, research almost never proves a theory, but statistical tests can at least suggest how confident we should be in that support (see the sketch following these rules).
2. If the data from one study are not consistent with our hypothesis, we fail to support the hypothesis. As the course will discuss, many factors can cause a study to fail; these are often a result of flaws in the design rather than flaws in the overall theory.
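To make rule 1 (and, by contrast, rule 2) concrete, here is a minimal Python sketch using SciPy's independent-samples t-test. Everything in it is hypothetical: the group names, means, sample sizes, and the conventional .05 cutoff are chosen only for illustration, and a small p-value licenses the phrase "the data support the hypothesis," never "the theory is proven."

```python
import numpy as np
from scipy import stats

# Hypothetical study: does a treatment group score higher than a control
# group? Simulated data stand in for real measurements.
rng = np.random.default_rng(seed=42)
treatment = rng.normal(loc=5.5, scale=1.0, size=30)  # invented scores
control = rng.normal(loc=5.0, scale=1.0, size=30)

# Independent-samples t-test: how surprising would this difference be if
# the two groups actually came from the same population?
t_stat, p_value = stats.ttest_ind(treatment, control)

if p_value < 0.05:  # conventional cutoff, not a magic number
    print(f"p = {p_value:.3f}: the data support the hypothesis")
else:
    print(f"p = {p_value:.3f}: the data fail to support the hypothesis")
```

Note that the second branch maps onto rule 2: a nonsignificant result means the study failed to support the hypothesis, not that the theory has been disproved.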