The psychology of lying

National Geographic has a great article on the psychology of lying. Lying is a “deeply ingrained human trait” and children learn to lie at a very young age.

Ironically, even though we all lie quite frequently, we are really bad at detecting when others are lying.

That human beings should universally possess a talent for deceiving one another shouldn’t surprise us. Researchers speculate that lying as a behavior arose not long after the emergence of language. The ability to manipulate others without using physical force likely conferred an advantage in the competition for resources and mates, akin to the evolution of deceptive strategies in the animal kingdom, such as camouflage. “Lying is so easy compared to other ways of gaining power,” notes Sissela Bok, an ethicist at Harvard University who’s one of the most prominent thinkers on the subject. “It’s much easier to lie in order to get somebody’s money or wealth than to hit them over the head or rob a bank.”

Vision without seeing

Do you ever get the feeling you are being watched? Have you ever seen something out of the corner of your eye without knowing what it was? It turns out that a lot of visual processing happens without us actually seeing anything.

Once information leaves our eyes it travels to at least 10 distinct brain areas, each with their own specialised functions. Many of us have heard of the visual cortex, a large region at the back of the brain which gets most attention from neuroscientists. The visual cortex supports our conscious vision, processing colour and fine detail to help produce the rich impression of the world we enjoy. But other parts of our brain are also processing different pieces of information, and these can be working away even when we don’t – or can’t – consciously perceive something.

Source: What triggers that feeling of being watched? – Mind Hacks

Ethical review for usability studies

The role of ethics review boards (aka Institutional Review Boards or IRBs) has long been discussed when considering social science, human factors, or usability studies. How much review is appropriate when the behaviours involved are limited to things people do in everyday life (e.g., trying a new computer program or completing a questionnaire)? Is the level of review done for medical experiments appropriate when it comes to usability research? This debate has flared up again with some recent rule changes in the US.

If you took Psychology 101 in college, you probably had to enroll in an experiment to fulfill a course requirement or to get extra credit. Students are the usual subjects in social science research — made to play games, fill out questionnaires, look at pictures and otherwise provide data points for their professors’ investigations into human behavior, cognition and perception.

But who gets to decide whether the experimental protocol — what subjects are asked to do and disclose — is appropriate and ethical?

Source: Some Social Scientists Are Tired of Asking for Permission – The New York Times

Understanding false beliefs

There have been many discussions recently about belief in false facts. This phenomenon has shown up both in politics and in science.

In order to understand why people persist in believing things that are demonstrably false, you have to consider the psychology.

…[C]onsider whether the following statements are true or false:

– We only use ten percent of our brains.
– We lose most of our body heat through our heads.
– If you swallow chewing gum, it will stay in your system for seven years.
– Cracking your knuckles will give you arthritis.

If you answered “true” to any of these, you’re guilty of believing falsehoods.

Read more at:
Why Do People Believe Things that Aren’t True?

The economics of the Loch Ness monster

Just in time for tourist season, there has been another sighting…

Study after study has shown there is nothing in the loch that resembles a monster. But the locals desperately wish to keep the legend alive. I would love to visit Loch Ness for the mythical feel and historical context, but mostly for the geology and beauty. There is no monster in the loch except for the one that has to be fed by tourist cash.

Money monster of Loch Ness must be fed

Replicating Milgram: People are still willing to obey authority and inflict pain

A 2009 article from Cognitive Daily described a replication of Milgram’s classic obedience to authority study.

In that study, participants played the role of “teachers” asked to give electric shocks to “students” during what appeared to be a learning experiment. The key finding was that people were surprisingly willing to obey and give increasing levels of electric shock.

Nobody thought the study could ever be repeated because of ethical concerns, but now it has been done.

And the results? People are still willing to obey.

Would we still obey? The first replication of Milgram’s work in over 30 years

Seventy percent were willing to continue past the 150-volt mark, prompting the experimenter to halt the study. This result was statistically indistinguishable from Milgram’s.
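To see what “statistically indistinguishable” means here, the usual quick check is a two-proportion z-test. The sketch below uses only Python’s standard library; the sample sizes and Milgram’s roughly 65% baseline obedience rate are assumptions chosen for illustration, not figures taken from the replication paper:

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Two-proportion z-test: returns (z, two-sided p-value)."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Illustrative comparison: ~65% of 40 in an original-style sample
# vs. 70% of 70 in a replication-style sample
z, p = two_proportion_z(0.65, 40, 0.70, 70)
print(f"z = {z:.2f}, p = {p:.2f}")  # p is far above 0.05: no detectable difference
```

With samples this small, even a five-point gap in obedience rates is well within what chance alone would produce, which is what “indistinguishable” is claiming.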

Now there has been a more recent replication from researchers in Poland:

Just like Milgram, and other replication attempts in the US and elsewhere, the team found the majority (90 per cent in this case) of “teachers” were willing to continue to the highest shock level, even after hearing screams of pain from the “learner”.

Brainstorming really is dumb

The dreaded memo has come around again – management has called another brainstorming session. They are getting people together to solve a problem or, more commonly, to discuss future trends or challenges for the organization.

You cringe when remembering the previous awkward sessions, when too many flip charts were filled with half-baked “ideas.” But the managers seemed to be pleased that “people were involved.” If you are an introvert, you remember the sessions as downright painful. And those flip charts that were so important that day – they were soon forgotten as people returned to their usual work.

But everyone says they had a fun day, so we know the next memo will come around soon enough.

Brainstorming has been described as the placebo in the management medicine kit – everyone believes that it works despite clear evidence that it does not. A recent Fast Company article has declared, “brainstorming is dumb.” Brainstorming continues to be popular even though major studies have shown that positive results are rare, at best, and viable alternatives are readily available.

Getting people together to propose new ideas seems, on the surface, like a good thing. However, as the Fast Company article describes, “just because you throw people together doesn’t mean wonderful things happen.” Others have been more critical, describing brainstorming as nothing more than executive entertainment.

So, what does the research say, and what are the alternatives to brainstorming?

To dig into this, we first have to review where brainstorming came from, and how it was supposed to work. Alex Osborn, an advertising executive, made brainstorming popular — most notably in his 1953 book Applied Imagination. Brainstorming was described as a wonderful group creativity technique that could be used to generate great ideas or solve hard problems. The term came from the idea of storming a problem like a group of commandos, with each “stormer” attacking the same objective.

Osborn was quite detailed in describing how brainstorming should be done. He established instructions for brainstorming sessions, many of which are now ignored. Osborn said that brainstorming sessions should:

  • Involve groups of about 12 people, or perhaps fewer
  • Avoid judging ideas
  • Strive for idea quantity, because quantity leads to quality
  • Promote divergent thinking and wild ideas
  • Welcome combinations and improvements on ideas

Perhaps the most overlooked principles in Osborn’s technique were:

  • Carefully choose the topic to be amenable to idea generation and as specific as possible
  • Provide pre-session training on the technique
  • Provide background information and orientation to the problem before the session

Osborn claimed that brainstorming, conducted with these principles, produced 44% more ideas than people working alone. With this evidence, and the attractiveness of the sessions, brainstorming took off and was widely adopted in many organizations.

Almost immediately, there were criticisms. Soon after Osborn’s book was published, researchers reported that, contrary to the doctrine, individuals often performed better than groups. Promoters of the technique argued, however, that those studies used questionable designs — one widely quoted study compared people working in groups versus individuals working alone, but all participants received brainstorming instructions. So, they argued, this was not a fair test of the technique.

Questions about how to properly test brainstorming remain today. Should we compare:

  • Brainstorming groups vs. brainstorming individuals
  • Brainstorming groups vs. other groups
  • Number of ideas vs. quality of ideas
  • Variety of ideas vs. novelty of ideas
  • With facilitator vs. without facilitator
  • Pre-existing groups vs. random groups

With so many open questions, it should be no surprise that studies that test brainstorming have come up with inconsistent results. But very often, the results are negative, suggesting that brainstorming may not actually work as advertised.

A review by Scott Isaksen examined 90 different studies and found that brainstorming was often not effective. The review also showed that many of the principles proposed by Osborn were not followed in the studies, leading to questions about the validity of the tests. Other studies have often found that individuals produced more, better ideas. Researchers have even proposed that working in groups could actually inhibit the creative process, perhaps due to nervousness, free riding by some participants, and “group think” where teams fixate on a few early ideas.

With the tests of brainstorming showing mixed results, people have started to consider modifications to the original Osborn principles. Some have argued that the lack of criticism, something that Osborn thought was crucial for allowing creativity to flourish, might actually be a problem. Their studies showed that adding debate and criticism to the brainstorming sessions actually led to more ideas.

Other people have started to question the role of the group interaction. With recent tech developments, researchers have started to examine electronic brainstorming, where people work at computers or online. When online, people can work anonymously while still participating in groups, perhaps using a shared editing space or chat service. Working electronically might reduce some of the inhibiting social factors of group work, such as nervousness or introverted personalities.

Another derivation of the original brainstorming technique is brainwriting where, instead of sharing ideas out loud in a group session, people write their ideas down and pass them around. Others can read the ideas and perhaps build on them, while continuing to work on their own ideas. The Fast Company article suggests that brainwriting, done best when writing both alone and in groups, can be a lot less dumb.

So, when the memo comes around again, propose an alternative. Try brainwriting or electronic brainstorming. Groups should exchange ideas when working on a problem, but they should also work alone, probably before they work in groups. The 1950s-style, awkwardly social, too-many-flip-charts sessions are not the best way to come up with good ideas.

Suggested Reading

Brainstorming Is Dumb.

Groupthink: The brainstorming myth.

10 New Rules For Brainstorming Without Alienating Introverts.

The psychology of anti-vaccine beliefs

Here is an interesting article on understanding the psychology of anti-vaccine beliefs.

The anti-vaccine conspiracy theory holds that vaccines cause a long list of ills. This is taken as a given, an article of faith. Everything else necessarily flows from that premise. If vaccines cause disease, then the pharmaceutical industry must know it. They have done the research. One does not have to be a conspiracy theorist to assume that corporations are hiding inconvenient information to protect their profits.

But then the narrative necessarily gets darker. Not only must pharmaceutical executives know that vaccines are causing harm, it must also be true that the medical profession knows as well. Who do you think is conducting that research? They review the data, and they make recommendations for treatment. The government must be involved as well, because they regulate vaccines. The Centers for Disease Control and Prevention (CDC) reviews the published science and makes recommendations for the vaccine schedule. So they must be in on it.

Source: The Anti-Vaccine Narrative Just Gets Darker – Science-Based Medicine

Risk perceptions

An interesting introduction to a new book by Geoffrey C. Kabat on the psychology of risk perceptions.

…we have been encouraged to worry about deadly toxins in baby bottles, food, and cosmetics; carcinogenic radiation from power lines and cell phones; and harm from vaccines and genetically modified foods… When looked at even the least bit critically, many of the scares that get high-profile attention turn out to be based on weak or erroneous findings that were hardly ready for prime time.

Can almanacs really predict the weather?

One headline says that the recent winter storms in Vancouver were predicted “almost exactly”. Another headline, 4 days later, says that almanac predictions for the east coast of the U.S. were entirely wrong because springlike conditions occurred when one to two feet of snow had been predicted.

What is going on here? Can we trust an almanac to tell us what the weather will be like, as much as a year in advance?

It turns out that there are many almanacs, with some competing head-to-head. In the U.S., The Old Farmer’s Almanac started in 1792, while the younger rival Farmers’ Almanac started publishing in 1818. These almanacs have had a long time to perfect their prediction methods and get things right. But, alas, time has not helped.

A recent article by Dr. Karen Stollznow in Skeptic Magazine provides a nice summary of the history and techniques used in making weather predictions:

The Farmers’ Almanac forecaster, who is known only by the mysterious pseudonym Caleb Weatherbee, uses a “top secret mathematical and astronomical formula, that relies on sunspot activity, tidal action, planetary position and many other factors.” These methods seem to be the “11 herbs and spices” of weather forecasting.

The other almanac uses a similar, secret formula. But how well do they work?

A study conducted in 1981 found that the Old Farmer’s Almanac was no better than chance at predicting the weather for 32 locations around the US. A careful reading of the predictions also shows that they are vague, generalized, and strongly tied to the seasonal averages. Much like astrology, almanacs are relying on the Barnum effect, where people will consider general statements to be very accurate if they believe the statements are tailored just for them (or their local weather).
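As a rough illustration of what “no better than chance” means, consider a binomial check against a coin-flip baseline. The hit count and the 50/50 baseline below are assumptions for illustration, not numbers from the 1981 study:

```python
import math

def binom_p_at_least(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): chance of k or more hits by luck alone."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Suppose an almanac calls 19 of 32 locations "correctly" against a 50/50 baseline.
p = binom_p_at_least(19, 32)
print(f"P(>=19 hits by luck) = {p:.2f}")  # ~0.19: comfortably within pure chance
```

Even a seemingly respectable 19-out-of-32 record happens by luck almost one time in five, which is why vague, season-hugging forecasts can look impressive without beating chance.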

Almanacs do offer more than weather forecasts, and that might explain some of their appeal:

Almanacs offer an awkward mix of science and superstition. They present factual astronomical information about moon phases, alongside spurious astrological claims. They still offer handy hints, gardening tips and recipes for comfort food, and teach you how to clean the toilet with Coca-Cola and keep fleas away from your dog naturally. Sticking to their roots of prediction, they provide the “best days” to cut hair to increase growth, to quit smoking, apply for a loan or shop for clothes.

So far, nobody can predict the weather far in advance, and certainly not an almanac. But maybe it can tell me when to get my hair cut.