It makes sense that people with different personality traits would have measurable differences in their brains, right? The short answer is “yes.” The long answer is “probably,” to a certain extent, but it’s hard to be sure because brains turn out to be pretty complicated.
Many news outlets that publish articles about science for a general audience tend to oversimplify and overstate the results of the studies they discuss. Even worse, some scientists do the same when they publish in academic journals. There are numerous checks and balances in the world of science and academia that try to protect against such occurrences, but none of them is 100% foolproof (like most things in life).
I’ve written here before about the possible neurological bases for extraversion and introversion, and I tried to make several things clear in that article:
- Correlation does not equal causation! Just because two or more things are associated with each other does not mean that one necessarily causes the other.
- Most personality traits exist on a spectrum. One introvert may display more introverted tendencies than another.
- There is natural variation among individuals in brain structure and function, and that variation is likely not entirely due to personality differences. My fingers probably look slightly different from yours, but chances are that our fingers work just about the same. The same goes for brains. One of the things scientists are still working to figure out is just how much difference in the brain is significant.
So it really annoys me when journalists or scientists disregard these principles. I recently read an article on Psychology Today written by a scientist who shares my annoyance. In the article, Dr. Michael L. Anderson talks about the dangers of making science easier at the cost of it being good science. One important observation he makes is that “data never speak for themselves. They are always placed inside an interpretive frame, and when that frame is inadequate, no interpretation can be valid.” He is mainly focused on the problems of oversimplification and overstatement on the part of scientists, rather than the media, so that is where I will begin my discussion.
Anderson uses one specific scientific article to highlight the problems that he sees in the world of personality neuroscience. That article is “Testing Predictions from Personality Neuroscience: Brain Structure and the Big Five” by DeYoung et al. (2010). I am only using this study as an example; there are plenty of published studies out there that rest on even more questionable methods, and I wish DeYoung and his colleagues no ill will, but this particular study is one that I thought would interest Truity readers.
In the study, the researchers gave 117 healthy participants a personality inventory that measured Extraversion, Neuroticism, Agreeableness, Openness/Intellect, and Conscientiousness. The participants’ brains were then imaged with MRI. So far, so good. But the problems come with the analysis and interpretation of the data.
The first major issue is that 116 of the participants were compared to a single “reference subject,” selected on the basis of being “near the group average for personality traits and having a good structural image scan and extraction” (DeYoung et al., 2010). There were controls in place for the participants’ age, sex, and total brain size, but given the litany of factors that affect brain development, basing an entire study on how the other 116 participants differed from one person seems a tad short-sighted. Indeed, in Dr. Anderson’s article, he mentions that, according to a colleague of his who is a specialist in individual differences, it is “unusual to norm one’s measures to a single individual.” Would using a normal or average range have been a better option?
The second problem I had with the study was its tendency to treat each brain structure as if it had a single function. For example, the researchers found that conscientiousness was associated with greater volume in the lateral prefrontal cortex (LPFC), which was consistent with their prediction that this area would be larger in more conscientious participants due to its involvement in planning and following complex rules. However, the LPFC also plays a role in spatial information processing, language processing, and other aspects of cognitive behavior. Isn’t it possible that someone’s larger-than-average LPFC could be due to his or her superior language processing abilities rather than to his or her conscientiousness?
Finally, all of the findings of this study hinge on the idea that the volume of a brain region is directly correlated with the strength of its functionality. Except, of course, when they claim that the two are inversely correlated—allow me to explain. For most of the paper, the authors go with the idea that the bigger a brain region (e.g., the LPFC), the greater its ability to carry out its function (e.g., planning and following rules). But then, the authors remark at one point that “a smaller-than-average volume of a given structure might indicate increased efficiency, or that the structure is streamlined to perform a particular function or set of functions” (DeYoung et al., 2010). How, then, can they connect a person’s conscientiousness to the large size of his or her LPFC? If the strength of a function can be associated with either a large or a small brain region, then isn’t the whole study potentially contradictory?
In spite of Dr. Anderson’s critique and my own, DeYoung and his colleagues do at least one thing very well: they consistently emphasize that the volume of a given region and the strength of a personality trait are only associated. They never make the mistake of allowing correlation to imply causation.
This detail that the DeYoung article got right is a detail that journalists and other people writing for general audiences often get wrong, in all areas of science. You’ve seen the headlines:
- “People who meditate are more aware of their unconscious brains.” Yes, but is it because meditation leads to awareness or because people who have a greater awareness of their unconscious minds are more likely to meditate?
- “Bacon and other processed meats can cause cancer, experts say.” You remember this, right? The internet exploded with articles about the World Health Organization classifying processed meat as a group 1 carcinogen, just like tobacco. Thankfully, there were other articles to clear things up for all the bacon lovers: “Bacon Causes Cancer? Sort of. Not Really. Ish.” (That title is much more representative of the results of many scientific studies.)
- “Need a ‘sweeter’ way to lose weight? Eat chocolates!” If your headline offers people an easy and yummy way to lose weight, they’re going to click on it. This headline is a perfect example of “clickbait.” In this case, the study in question was bad and was published in a less-than-thorough journal, so the journalists who wrote this kind of headline—and there were many, many terrible puns out there—clearly didn’t do their research. But even when the studies themselves are sound, too often the media outlets twist or simplify them to make for more click-worthy headlines and sound-bites.
There are things you can do when you’re reading about science—personality-related or otherwise—to avoid falling victim to overhyped or just plain bad research findings. First, if you are reading, listening to, or watching a journalist’s take on a study, go to the source: find the original study whenever possible. Once you’ve found it, your next step is to make sure the journal it was published in is peer-reviewed (meaning that other scientists evaluated the researchers’ findings, asked questions, and made them clarify or even redo parts of their experiment before publication). Finally, even if the study appeared in a peer-reviewed journal, you still have to use your own judgment and b.s.-detection skills to decide whether or not to trust it.
Remember that, as the saying goes, if it seems too good to be true, it probably is.