Two years ago, 14-year-old Nathan Zohner, a student at Eagle Rock Junior High in Idaho Falls, announced on the Internet that he had circulated a petition demanding strict control of a chemical known as dihydrogen monoxide. This substance, he wrote, causes excessive sweating and vomiting, can be lethal if accidentally inhaled, contributes to erosion, and has been found in the tumors of cancer patients. The student asked 50 people whether they supported the ban. Forty-three said yes, six were undecided, and only one knew that dihydrogen monoxide was… water.
While embracing a ban on H2O seems more foolish than dangerous, this anecdote shows how quickly people embrace some kinds of ideas without subjecting them to critical scrutiny. The human propensity to accept ideas at face value—no matter how illogical—is the fertile soil in which pseudoscience grows. Beliefs in UFOs, astrology, extrasensory perception, palm reading, crystal therapy, and guardian angels do not meet scientific criteria for rational plausibility (such as experimental reproducibility or Karl Popper’s criterion of falsifiability) and generally rely on anecdote rather than hard evidence, even though they may borrow scientific-sounding terms or rationales; all such concepts can safely be described as pseudoscience. Why do people embrace irrational belief systems even after repeated disconfirmation by scientists?
It is easy to dismiss these ideas as amusing and eccentric, but in some situations they pose concrete dangers to individuals; they occasionally even affect society. Former First Lady Nancy Reagan revealed in her autobiography that she employed a psychic for seven years to schedule dates for important meetings; more recently, Hillary Rodham Clinton admitted to having imaginary conversations with Eleanor Roosevelt on the advice of New Age guru Jean Houston. These public figures are hardly alone in seeking answers from the stars and soothsayers; the persistence and popularity of such beliefs reflect the many perceived benefits in pseudoscience. Psychologists agree that all belief systems — astrology, Objectivism, religion — ease anxiety about the human condition, and provide the illusion of security, predictability, control, and hope in an otherwise chaotic world.
Scott Lilienfeld, assistant professor of psychology at Emory University and consulting editor at the Skeptical Inquirer, identifies two major catalysts for the prevalence of pseudoscientific beliefs: the information explosion (often a misinformation explosion) and the low level of scientific literacy in the general population. He cites poll data indicating that only 7 percent of the population can answer basic scientific questions like “What is DNA?” or “What is a molecule?” And when science cannot provide answers, or when people refuse to accept a scientific explanation (such as when fertility treatments don’t work), pseudoscience often provides highly individualized explanations. “People believe in things like astrology because it works for them better than anything else,” says Herbert Gans, the Robert S. Lynd professor of sociology at Columbia. “Your own system is the most efficient one, whether it’s a guardian angel, a rabbit’s foot, or a God watching over you. And if it doesn’t work, there’s always an excuse for it.”
Another reason people find pseudoscience plausible is a cognitive ability to “see” relationships that don’t exist. “We have an adaptive reflex to make sense of the world, and there is a strong motivation to do this,” says Lilienfeld. “We need this ability, because the world is such a complex and chaotic place, but sometimes it can backfire.” This outgrowth of our normal capacity for pattern recognition accounts for the “face on Mars” (a group of rocks that allegedly resembles a face) or the belief that a full moon causes an increase in the crime rate. When people believe in something strongly — whether it is an image on Mars or a causal interpretation of a chronological association — they are unlikely to let it go, even if it has been repeatedly discounted.
In some cases, contradictory evidence can even strengthen the belief. As Leon Festinger and colleagues discussed in When Prophecy Fails, holding two contradictory beliefs leads to cognitive dissonance, a state few minds find tolerable. A believer may then selectively reinterpret data, reinforcing one of the beliefs regardless of the strength of the contradictory case. Festinger infiltrated a doomsday cult whose members were convinced the earth was going to blow up; when the date passed and the earth didn’t explode, the cult attributed the planet’s survival to the power of their prayers. “When people can’t reconcile scientific data with their own beliefs, they minimize one of them—science—and escape into mysticism, which is more reliable to them,” says Dr. Jeffrey Schaler, adjunct professor of psychology at American University.
Belief systems tend to respond to challenges according to this pattern, says Lilienfeld. When researching a cherished belief or coming across information about it, a person may process the data as if wearing blinders, registering only the affirming information. The malleability of memory compounds this effect. “Once you have a belief, the way you look at evidence changes,” says Tory Higgins, chair of the psychology department at Columbia, whose research specialty is mechanisms of cognition. “When you search your memory, you are more likely to retrieve information that will support it and avoid exposure to information that will disconfirm it. If you fail to avoid it, you attack the validity and credibility of the source, or categorize it as an exception.”
Dr. Robert Glick, head of the Columbia Center for Psychoanalytic Training and Research, calls belief systems “societal pain relievers.” “People will recruit anything from their environment that will ensure and protect their safety,” he says. “It gives you a sense that you’re not alone, and helps ease feelings of being powerless.” Power—whether an increase in a person’s perceived power or an abdication of it—is a major component of pseudoscience, and Glick explains people’s relations to power in Freudian terms. He describes belief systems as a metaphoric representation of our parents, providing a release from authority and responsibility. “People have a built-in predilection that wishes for assistance and support. This is an extension of childhood, where there were always people around us who control our life. Beliefs like astrology and even religion are a projection that there are forces in the heavens that are like your parents.”
While it may be fun to read horoscopes in the newspaper, can real harm come from believing strongly in pseudoscience? Lilienfeld advises citizens to consider how pseudosciences pose concrete threats by weakening critical thinking and minimizing a person’s sense of control and responsibility. For individuals, this phenomenon can translate into thousands of dollars wasted on quack remedies—not to mention the medical danger to patients who forgo more reliable treatments. The risks extend to the societal level. “We need to be able to sift through the information overload we’re presented with each day and make sound judgments on everything from advertising to voting for politicians,” Lilienfeld says.
Gans offers a more forgiving point of view. “If someone believes strongly in something like guardian angels, and they’re not in a mental hospital, and we haven’t provided a better answer, why not?” says Gans. “But if you’re just sitting inside your house all day and say, ‘Well, my guardian angel is going to take care of everything,’ then that’s bad.” And perhaps not too far from supporting a ban on dihydrogen monoxide.
Is the “beautiful is good” stereotype accurate? Do beautiful people indeed have desirable traits?
For centuries, those who considered themselves serious scientists thought so when they sought to identify physical traits (shifty eyes, a weak chin) that would predict criminal behavior.
Or, on the other hand, was Leo Tolstoy correct when he wrote that it’s “a strange illusion … to suppose that beauty is goodness”?
There is some truth to the stereotype. Attractive children and young adults are somewhat more relaxed, outgoing, and socially polished (Feingold, 1992b; Langlois & others, 2000). William Goldman and Philip Lewis (1977) demonstrated this by having 60 University of Georgia men call and talk for five minutes with each of three women students. Afterward, the men and women rated their most attractive unseen telephone partners as somewhat more socially skillful and likable.
Physically attractive individuals tend also to be more popular, more outgoing, and more gender typed—more traditionally masculine if male, more feminine if female (Langlois & others, 1996).
These small average differences between attractive and unattractive people probably result from self-fulfilling prophecies. Attractive people are valued and favored, so many develop more social self-confidence.
By that analysis, what’s crucial to your social skill is not how you look but how people treat you and how you feel about yourself.
Have you ever been a member of a group that got virtually nothing accomplished? If so, you may have been a victim of social loafing, a phenomenon in which people slack off in groups (Latané, Williams, & Harkins, 1979; North, Linley, & Hargreaves, 2000). As a consequence of social loafing, the whole is less than the sum of its parts. Some psychologists believe that social loafing is a variant of bystander nonintervention.
That’s because social loafing appears to be due in part to diffusion of responsibility: People working in groups typically feel less responsible for the outcome of a project than they do when working alone. As a result, they don’t invest as much effort.
Psychologists have demonstrated social loafing in numerous experiments. In one, a researcher placed blindfolds and headphones on six participants and asked them to clap or yell as loudly as possible. When participants thought they were making noises as part of a group, they were less loud than when they thought they were making noises alone (Williams, Harkins, & Latané, 1981). Cheerleaders also cheer less loudly when they believe they’re part of a group than when they believe they’re alone (Hardy & Latané, 1986). Investigators have also identified social loafing effects in studies of rope-pulling (the “tug-of-war” game), navigating mazes, identifying radar signals, and evaluating job candidates (Karau & Williams, 1995).
Like many other social psychological phenomena, social loafing may be influenced by cultural factors. People in individualistic countries, like the United States, are more prone to social loafing than people in collectivist countries, like China, probably because people in the latter countries feel more responsible for the outcomes of group successes or failures (Earley, 1989).
One of the best antidotes to social loafing is to ensure that each person in the group is identifiable, for example, by guaranteeing that managers and bosses can evaluate each individual’s performance. By doing so, we can help “diffuse” the diffusion of responsibility that often arises in groups.
from: Psychology - From Inquiry to Understanding
It is common to find couples, families, or teams in which one person always asks another member about a certain kind of memory, while the reverse happens for a different kind. For example, a mother might always consult her son about computers and technical difficulties, while the father might always consult the mother about the family’s plans for the month. This kind of “shared memory” is called transactive memory: the group organizes itself so that memory is shared around in an efficient manner. Typically, each member specializes in a certain field, and the other members need only remember that that person is the expert. Instead of memorizing every field yourself, you simply remember who the expert in each field is. It is much like learning where the reference text is rather than learning its contents.
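The “remember who knows it, not the thing itself” idea above can be pictured as a simple lookup structure. The following is a minimal, purely illustrative sketch, not a psychological model: each member stores detailed knowledge only for their own specialty, and the group keeps a lightweight directory of who the expert is for each domain. All names, domains, and facts here are invented examples.

```python
# Transactive memory sketched as indirection: a cheap directory
# (domain -> expert) replaces every member memorizing every domain.

class Member:
    def __init__(self, name, specialty, knowledge):
        self.name = name
        self.specialty = specialty
        self.knowledge = knowledge  # detailed facts, held only by the specialist

class Group:
    def __init__(self, members):
        self.members = members
        # The shared "directory": remembering this mapping is far cheaper
        # than every member learning every domain's contents.
        self.directory = {m.specialty: m for m in members}

    def recall(self, domain, question):
        """Route the question to whoever specializes in that domain."""
        expert = self.directory.get(domain)
        if expert is None:
            return None  # nobody in the group covers this domain
        return expert.knowledge.get(question)

# Invented example family:
family = Group([
    Member("son", "technology", {"wifi password": "hunter2"}),
    Member("mother", "schedule", {"dentist": "Tuesday 3pm"}),
])

print(family.recall("technology", "wifi password"))  # the son answers
print(family.recall("schedule", "dentist"))          # the mother answers
```

The design choice mirrors the reference-text analogy: the directory is the card catalog, and each member’s `knowledge` is the book on the shelf.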
Although it may look like dependence, transactive memory is an extremely useful tool in tight-knit groups such as a couple or a small team. By having members specialize in particular domains of knowledge, the group expands its capacity to acquire knowledge and to innovate. Transactive memory makes a group more efficient and effective at learning and retrieving knowledge, improving decision making and overall outcomes. This is achieved through the division of responsibilities that comes with specialization, the shorter time needed to find the appropriate knowledge (everyone knows the “guy” or “gal” to go to), and the teammates’ shared understanding of the interpersonal relations in the team. Everyone knows exactly whom to go to for a given domain of knowledge, and also understands each member’s strengths and weaknesses, allowing for well-coordinated interactions. Because of this, transactive memory works only in groups of limited size, with the maximum being similar to Dunbar’s number, the “Monkeysphere” of roughly 150 people.
Many studies demonstrate the effectiveness of transactive memory. Couples, for instance, have been found to recall information much better together than when each is paired with a stranger. In the modern technological era, transactive memory has expanded to the internet: studies show that people are more likely to remember where information can be found (such as on Wikipedia) than the information itself. Given the ease of access to the internet and to large databases containing the information we need, it is sometimes far more efficient to learn how to find these sources than to rote-learn the information.