Today I Learned about Moral Foundations Theory, and how it can be used to convince someone on the opposing side of an argument to support your viewpoint. Moral Foundations Theory essentially posits that there are several foundations of “intuitive ethics” that people draw on, with each foundation forming part of the ideological edifice of one’s sense of morality. Individuals, cultures, and communities construct narratives and virtues around these ethical foundations. The five foundations are 1) Care/Harm 2) Fairness/Cheating 3) Loyalty/Betrayal 4) Authority/Subversion 5) Sanctity/Degradation, with a potential sixth being Liberty/Oppression; there’s more information about each of these on the website.
The researchers who developed the theory briefly discuss the manifestation of some of these ethics in American politics:
Much of our present research involves applying the theory to political “cultures” such as those of liberals and conservatives. The current American culture war, we have found, can be seen as arising from the fact that liberals try to create a morality relying primarily on the Care/harm foundation, with additional support from the Fairness/cheating and Liberty/oppression foundations. Conservatives, especially religious conservatives, use all six foundations, including Loyalty/betrayal, Authority/subversion, and Sanctity/degradation. The culture war in the 1990s and early 2000s centered on the legitimacy of these latter three foundations. In 2009, with the rise of the Tea Party, the culture war shifted away from social issues such as abortion and homosexuality, and became more about differing conceptions of fairness (equality vs. proportionality) and liberty (is government the oppressor or defender?).
In an article in The Atlantic, Olga Khazan writes about this theory in the context of political persuasion as well, describing how the best way to convince someone to agree with, or at the very least understand, your argument is to frame it through a moral foundation that they themselves rely on.
The assumption people often make about morality is that it’s universal and objective in all cases, and that therefore anyone whose morality differs from their own must be fundamentally wrong. The reality is much more complex. We don’t take into consideration the multiplicity of factors that can inform someone else’s morality, such as social upbringing, economic forces, cultural norms, religious views, and more. This isn’t to make a case for moral relativism, but to say that the human experience is far more complicated than we give it credit for.
In the article, Khazan shares results from a research experiment that sought to explore the effects of moral “reframing” on the political views of a sample population:
As part of the same study, which they published last year in Personality and Social Psychology Bulletin, Feinberg and Willer tried to see if this type of “moral reframing” would be more effective. Previously, they had found that conservatives were more likely to endorse environmental protections when researchers activated their concerns about purity, rather than the more liberal concern about “harm”: A picture of a forest covered in rotting garbage, in other words, performed better with Republicans than a forest of tree stumps. This time, the researchers tested four different hot-button political issues, each time trying to reframe it in terms of the values that the Moral Foundations Theory tells us are more important for the opposite political side. Again, for liberals that’s “harm and fairness (e.g. benevolence, nurturance, equality, social justice),” and for conservatives, “group loyalty, authority, and purity (e.g., patriotism, traditionalism, strictness, religious sanctity).”
The experiment was performed in a variety of ways to test for reliability, and the results proved consistent each time. The practical applications of these studies could have profound implications for broader public discourse. For one, in a country deeply divided along racial, economic, geographic, and political fault lines, a dialogue-oriented approach to engaging in politics could help bridge those gaps and bring otherwise divergent communities to a common understanding.
The key takeaway of the article is here:
“We tend to view our moral values as universal,” Feinberg told me. That “there are no other values but ours, and people who don’t share our values are simply immoral. Yet, in order to use moral reframing you need to recognize that the other side has different values, know what those values are, understand them well enough to be able to understand the moral perspective of the other side, and be willing to use those values as part of a political argument.”
Khazan acknowledges that this is no easy thing to do. Politics is deeply personal, and for that reason, empathizing with a view diametrically opposed to our own can prove immensely difficult. She also notes that for this approach to work, both sides must be mutually willing to hear each other out, and both must be rational thinkers and actors. However, given Donald Trump’s unprecedented assault on the veracity of media, politicians, and democratic institutions, as well as his cultivation of a climate that engenders mendacity, appealing to the moral foundations of a Trump supporter may well be futile.
She ends with one last point, namely that ad hominem attacks do not work. When you attack someone for holding a certain view, it may cause them to defend themselves more staunchly and entrench their stance. This wouldn’t be productive for either side. Of course, in the Trump era, avoiding ad hominems entirely may be the greatest challenge of all.