Category Archives: Today I Learned

Short entries about what I learned on the day of writing them. The topics can range from history to science, politics to philosophy, or even personal reflections and anecdotes that I feel are worth writing about. Ideally, I will try to learn something new every day.

Today I Learned – 2/7/2017

Today I Learned about the Moral Foundations Theory, and how this theory can be used to convince someone on the opposing side of an argument to support your viewpoint. The Moral Foundations Theory essentially posits that there are several foundations of “intuitive ethics” that people can have, and that these foundations together form the edifice of one’s sense of morality. Individuals, cultures, and communities construct narratives and virtues around these ethical foundations. The five foundations are 1) Care/Harm 2) Fairness/Cheating 3) Loyalty/Betrayal 4) Authority/Subversion 5) Sanctity/Degradation, with a potential sixth being Liberty/Oppression; there’s more information about each of these on the website.

The researchers who developed the theory briefly discuss the manifestation of some of these ethics in American politics:

Much of our present research involves applying the theory to political “cultures” such as those of liberals and conservatives. The current American culture war, we have found, can be seen as arising from the fact that liberals try to create a morality relying primarily on the Care/harm foundation, with additional support from the Fairness/cheating and Liberty/oppression foundations. Conservatives, especially religious conservatives, use all six foundations, including Loyalty/betrayal, Authority/subversion, and Sanctity/degradation. The culture war in the 1990s and early 2000s centered on the legitimacy of these latter three foundations. In 2009, with the rise of the Tea Party, the culture war shifted away from social issues such as abortion and homosexuality, and became more about differing conceptions of fairness (equality vs. proportionality) and liberty (is government the oppressor or defender?).

In an article for The Atlantic, Olga Khazan writes about this theory in the context of political persuasion as well, describing how the best way to convince someone to agree with, or at the very least understand, your argument is to frame it through a moral foundation that they themselves hold.

The assumption people often make about morality is that it’s universal and objective in all cases, and that therefore anyone whose morality differs from our own must be fundamentally wrong. However, the reality is much more complex. We don’t take into consideration the multiplicity of factors that can inform someone else’s morality, such as social upbringing, economic forces, cultural norms, religious views, and more. This isn’t to make a case for moral relativism, but to say that the human experience is far more complicated than we give it credit for.

In the article, Khazan shares results from a research experiment that sought to explore the effects of moral “reframing” on the political views of a sample population:

As part of the same study, which they published last year in Personality and Social Psychology Bulletin, Feinberg and Willer tried to see if this type of “moral reframing” would be more effective. Previously, they had found that conservatives were more likely to endorse environmental protections when researchers activated their concerns about purity, rather than the more liberal concern about “harm”: A picture of a forest covered in rotting garbage, in other words, performed better with Republicans than a forest of tree stumps. This time, the researchers tested four different hot-button political issues, each time trying to reframe it in terms of the values that the Moral Foundations Theory tells us are more important for the opposite political side. Again, for liberals that’s “harm and fairness (e.g. benevolence, nurturance, equality, social justice),” and for conservatives, “group loyalty, authority, and purity (e.g., patriotism, traditionalism, strictness, religious sanctity).”

The experiment was performed in a variety of ways to test for reliability, and the results proved consistent each time. The practical applications of these studies could have profound implications for broader public discourse. For one, in a country deeply divided along racial, economic, geographic, and political fault lines, a dialogue-oriented approach to engaging in politics could help bridge those gaps and bring otherwise divergent communities to a common understanding.

The key takeaway of the article is here:

“We tend to view our moral values as universal,” Feinberg told me. That “there are no other values but ours, and people who don’t share our values are simply immoral. Yet, in order to use moral reframing you need to recognize that the other side has different values, know what those values are, understand them well enough to be able to understand the moral perspective of the other side, and be willing to use those values as part of a political argument.”

Khazan acknowledges that this is no easy thing to do. Politics is deeply personal, and for that reason, the challenge of empathizing with a view diametrically opposed to our own can prove immensely difficult. She also mentions that for this approach to work, both sides must be mutually willing to hear each other out, and both must be rational thinkers and actors. However, given Donald Trump’s unprecedented assault on the veracity of media, politicians, and democratic institutions, as well as his cultivation of a climate that engenders mendacity, appealing to the moral foundations of a Trump supporter may well be futile.

She ends with one last point, namely that ad-hominem attacks do not work. Attacking someone for holding a certain view may cause them to staunchly defend themselves and entrench their stances, which is productive for neither side. Of course, in the Trump era, avoiding ad-hominem attacks entirely may be the greatest challenge of all.


Today I Learned – 12/22/2016

Today I Learned a little bit about the difference between civilizational conservatism and ideological conservatism, thanks to Peter Beinart’s recent article in The Atlantic, where he discusses the shift in American conservative foreign-policy thinking as it relates to Russia and Islam/Muslims.

During the cold war, conservatives were united in their opposition to Russia, but Beinart argues that underneath this opposition were divergent worldviews whose appreciable differences have manifested themselves only in today’s unique political climate. He states that two types of conservatives existed during the cold war: “civilizational,” i.e., those who saw America’s struggle against Russia through the prism of religion, and “ideological,” i.e., those who saw this struggle through the lens of governance.

To understand this shift, it’s worth distinguishing two different strains of conservative foreign-policy thinking during the cold war. Civilizational conservatives like Jerry Falwell and Pat Buchanan saw the cold war as a struggle between two countries defined primarily by their view of God: The Judeo-Christian United States versus the atheistic Soviet Union. Ideological conservatives like Paul Wolfowitz and Elliott Abrams, by contrast, saw the cold war as a conflict between two countries defined primarily by their view of government: the liberty-loving United States versus the totalitarian USSR.

He mentions that a third group also existed, though this group’s outlook hasn’t shifted significantly at today’s political juncture.

(A third group, composed of realists like Henry Kissinger and George Kennan, saw the cold war as a traditional great power conflict between two countries defined primarily by their geopolitical heft.)

I found Beinart’s analysis especially trenchant because, as a theoretical framework, it helped me understand today’s conservative schism a little better. For one, I no longer see it as a paradox that some conservatives are kowtowing to Russia, but as an action consistent with deeply rooted beliefs. This, of course, does not make it any better.

The collapse of the Soviet Union brought an end to the convergence between these two classes of conservatives, and the resulting cleavage showed that each side could take an opposing view of the same conflict depending on the circumstance. Beinart gives the example of Serbia:

In the 1990s, after the Soviet Union collapsed, ideological conservatives and civilizational conservatives parted ways. The clearest example was the former Yugoslavia. In the 1990s, Serbs brutalized the largely Muslim breakaway republic of Bosnia. Ideological conservatives like Robert Kagan urged NATO to intervene in the name of human rights. Cultural conservatives like Buchanan wondered why the U.S. was going to war to defend Muslims against Christians. Ideological conservatives saw Russia, Serbia’s traditional ally, as defending tyranny and ethnic cleansing. Cultural conservatives saw Russia as defending Christendom.

After 9/11, the two sides converged once again and much of the distinction became blurred. George W. Bush and much of the Republican party during and after his time maintained a politics of ideological conservatism. But the events precipitated by 9/11, including the ‘War on Terror’, arguably incubated the civilizational narrative because it conveniently aligned with the ideological narrative of the war being inherently just and good. Despite the divergent worldviews of civilizational and ideological conservatives, they dovetailed well in a post-9/11 world.

So while the Bush administration and conventional Republicans like Mitt Romney and John McCain vocally advocated an ideological conservatism and distanced themselves from impugning Islam as a religion, their concomitant alliance with civilizational conservatives on the war (perhaps tacitly) allowed the latter’s ideas to gain traction within the conscience of the GOP’s base.

And thus, civilizational conservatism has now recrudesced under a Trump administration.

Trump’s approbatory remarks toward Putin and the GOP’s generally softening stance toward Russia have once again laid bare the difference between the two conservative camps:

Ideological conservatives loathe Putin because he represents an authoritarian challenge to the American-backed order in Europe and the Middle East. But many civilizational conservatives, who once opposed the Soviet Union because of its atheism, now view Putin’s Russia as Christianity’s front line against the new civilizational enemy: Islam.

The resurgence of a civilizational conservatism, perhaps in its most staunch, untrammeled, and vicious form, sets the stage for an internecine conflict within the Republican party whose winners may decide not only the future of conservatism, but of the country as a whole, and perhaps the world.

Will the GOP define Americanism as the defense of a set of universal principles or as the defense of a racial and religious heritage? The answer won’t only help determine how well liberal democracy fares overseas. It will help determine how well it fares at home.


Today I Learned – 11/3/2016

Today I Learned a little bit about how American Jewish leaders grapple with their religious legal tradition in light of an American cultural tradition, and how they engage their congregations on the topic, with Halloween as a starting point. In an October 2013 article for Tablet Magazine, Rabbi Regina Sandler-Phillips shares an anecdote about an intriguing question she was asked by a young student named Adam: “What is your position on zombies?”

It’s a strange question, she admits, but one worth exploring.

The first passage that caught my interest, compelling me to reread it, was this one:

What “Judaism Says” about Halloween, for example, varies as much by personal focus as by denominational persuasion. More traditionally observant communities uphold clearer dress and dietary codes, but mixed religious symbolism, consumer excess, and supervision of children’s behavior are concerns across the Jewish denominational spectrum.

Whatever their ambivalence, Jewish leaders of less insular worldviews generally conclude that the pagan/Christian elements of Halloween have been secularized beyond recognition, and that participation in this one day out of the year will not cause irreversible damage to Jewish identity. Many recommend a proactive clarification of parameters—at least to ensure that basic health and safety needs are met and ideally to highlight teachable moments for instilling Jewish values. Purim is often upheld as a superior Jewish costume-and-candy alternative.

Almost immediately, my mind was inundated with images of debates within the American Muslim community surrounding the same issue. Some of these debates are erudite and tactful, with participants meticulously weighing all the relevant factors before forming their opinions; others, often initiated by overzealous youth looking more to demonize than to discuss, lack both scholarly sophistication and basic manners. For this latter group, the argument often begins with “Islam says!” and ends with pompous chest-thumping wanting in constructive, substantive conclusions. Thankfully, however, this group is a minority of a minority, and as American Muslims continue to better navigate their legal and theological traditions in the United States, I believe a much stronger, intellectually robust, and spiritually vibrant community will emerge.

I greatly appreciated seeing the parallels between the Jewish and Muslim communities here, and was pleasantly surprised to learn that even though the Jewish community is far more institutionally established and socially integrated into the American fabric, it still faces questions like these.

The writer goes back to the question of what “Judaism Says,” and more specifically to the “rationalistic” approach to that question. That approach, she argues, though worthy of its own merit, has had an unfortunate but inevitable consequence: it limits our capacity to exercise our natural imaginative faculties. That capacity is left unused, and popular culture fills the void. She goes on to suggest that our intrinsic creative powers can surpass our “rationalistic” efforts, without necessarily diminishing the importance of either one.

The rationalistic “Judaism Says” with which I came of age negated centuries of Jewish afterlife traditions. This negation combined the ancient religious hostility to necromancy, the modern secular insistence on scientific measurement, and the ethical imperative to focus on our responsibilities here and now. Sadly, the rationalistic approach has left us mostly blind to our own denial, with diminished capacities for acceptance of inherent mysteries. Popular culture fills the void with a relentless onslaught of imaginary fates worse than death. The creative powers of fear and fantasy far outstrip our ethical here-and-now efforts to visit the sick, honor the dead, console the bereaved, and communicate our own final wishes to those we love.

When I read the above passage, I interpreted it to mean that the author was arguing for a complementary or hybrid approach to life and death: one that integrates our (rationalistic) “here-and-now” efforts to heal the world with our creative powers that imagine good works as transcendent of worldly results. She goes on:

“Here is my position on zombies,” I told Adam. “I would like a tiny fraction of all the money, time, and resources spent on zombies and vampires and ‘the undead’ to be spent on coming to terms with real death and what it means for our lives.”

Adam nodded, and the dinner table conversation turned to other topics. I was content to have planted a seed of possibility for future cultivation.

Again, this evokes memories of both past and present conversations in the Muslim community. Indeed, the secularization of Halloween and its rituals has effectively erased any semblance of religious significance from the holiday, but as a cultural holiday, is it possible to make it something more, something positive for our spirituality? This writer says it is: the time spent on Halloween, zombies, and vampires, or at least part of that time, can also be used as a moment of reflection. As the writer suggests, it may be only a “tiny fraction,” but some period of time nonetheless. As I read through this article, I wondered: how and when can I apply this principle further? When are there times in my life that give me the opportunity to turn a mundane, cultural, and ephemeral event into something transcendental, religious, and lasting?

A moment of reflection.

Last thought: Both the manner in which she responds (constructively, with hope, optimism, and love) and the content of what she says (accounting for both the “rationalistic” approach and the “creative” approach) speak volumes to her exemplary character as a religious leader. I hope to develop my personal spiritual practice and my relationships with people in such a way that I am able to do the same.


Today I Learned – 10/20/16

Today I Learned a little bit about the psychology of victim-blaming. This is an issue that I’ve thought about quite deeply, especially given that, as a social work major in undergrad, I had the privilege of studying the topic holistically and at an academic level. In some ways this article was a review of the theoretical concepts I was taught, but it was also a reminder that these concepts don’t just exist in theory. The culture of victim-blaming is deeply embedded in society and its institutions, and manifests itself throughout our personal and political discourse. While the realist in me doesn’t believe we can completely eradicate victim-blaming from society, I do believe it is imperative that we be conscious of it. Acknowledging one’s own implicit biases is the first step toward undoing them.

There were five main points that I believe were the strongest takeaways from this article:

  1. Victim-blaming takes many forms, both subtle and overt, and most people have participated in it to a certain degree without realizing.

    “Any time someone defaults to questioning what a victim could have done differently to prevent a crime, he or she is participating, to some degree, in the culture of victim-blaming.


    Not everyone who engages in victim-blaming explicitly accuses someone of failing to prevent what happened to them. In fact, in its more understated forms, people may not always realize they’re doing it. Something as simple as hearing about a crime and thinking you would have been more careful had you been in the victim’s shoes is a mild form of victim-blaming.”

  2. The practice of victim-blaming may have a lot to do with what’s known as the “just world hypothesis”: the idea that the world is an inherently just place, and that whatever happens to us is a consequence of our own doing. This mindset tends to be more pervasive among Americans, who are inculcated with the mythology of the “American Dream” as a utopian ideal that anyone can attain:

    “In other cultures, where sometimes because of war or poverty or maybe sometimes even just because of a strong thread of fatalism in the culture, it’s a lot better recognized that sometimes bad things happen to good people,” she says. “But as a general rule, Americans have a hard time with the idea that bad things happen to good people.”

  3. According to research done by two psychology scholars, Laura Niemi and Liane Young, a person’s moral and ethical values may be indicative of the degree to which they engage in victim-blaming. The two sets of values they defined were “binding values,” which focus more on preserving the collective, and “individualizing values,” which focus more on preventing harm to the individual:

    Their research, which involved 994 participants and four separate studies, led to several significant findings. First, they noted that moral values play a large role in determining the likelihood that someone will engage in victim-blaming behaviors, such as rating the victim as “contaminated” rather than “injured,” and thus stigmatizing that person more for having been the victim of a crime. Niemi and Young identified two primary sets of moral values: binding values and individualizing values. While everyone has a mix of the two, people who exhibit stronger binding values tend to favor protecting a group or the interests of a team as a whole, whereas people who exhibit stronger individualizing values are more focused on fairness and preventing harm to an individual.


    Unsurprisingly, participants who exhibited stronger binding values were more likely to assign responsibility for the crime to the victim or suggest actions the victim could have taken to change the outcome. People who exhibited stronger individualizing values tended to do the opposite.

  4. Their research also indicated that the likelihood of blaming the victim may increase when the victim is presented as the subject of the sentence. There is a vast difference between saying “Mike was abusive to Lisa” and “Lisa was abused by Mike.” In the former, the perpetrator is the actor targeting the victim, making it clear who is responsible, whereas in the latter, the victim is the one acted upon, leaving open the question of why. To me, this may be the most practical lesson to take away from the article, as it may very well affect how our listeners perceive us when we share stories involving a perpetrator and a victim:

    Niemi and Young manipulated the sentence structure in the vignettes, changing who was the subject of the majority of sentences, the victim or the perpetrator. Some groups were given vignettes with the victim in the subject position (e.g. “Lisa was approached by Dan”) and others were given vignettes with the perpetrator in the subject position (e.g. “Dan approached Lisa”).

    When the perpetrator was the subject of the sentence, participants’ “ratings of victim blame and victim responsibility went down significantly,” Niemi says. “And when we asked them explicitly how could this outcome have been different and then we just gave them an empty text box and they could fill in whatever they wanted, their actual references to victim’s actions—things like, ‘Oh, she could have called a cab’—they decreased. So they actually had a harder time coming up with things that victims could have done and were focusing less on the victim’s behavior in general. That suggests that how we present these cases in text can change how people think about victims.”

  5. The misguided premise that perpetrators must be abnormal or uniquely eccentric individuals makes it difficult to recognize their normalcy, the fact that they are everyday people just like us. Good people, even people we love, can do wrong, and it is important to refocus our cognitive framework to accept this reality, because doing so brings us closer to undoing our biases about how we perceive perpetrators and victims:

    Niemi explains that it can be hard, especially for the loved ones of perpetrators, to reconcile the fact that someone they know so well and see as such a good person could commit a crime that they see as monstrous. In some cases, this might lead to over-empathizing with perpetrators and focusing on their other achievements or attributes, as with coverage of the Stanford rape case, in which Brock Turner was sometimes described as a star swimmer instead of as an accused rapist. This is another kind of defense mechanism, one that leads those close to perpetrators to either deny or diminish their crime in order to avoid dealing with the difficult cognitive process of accepting that they were capable of such a thing.

    No matter what we want to believe, the world is not a just place. And it takes some difficult cognitive work to accept both that bad things sometimes happen to good people, and that seemingly normal people sometimes do bad things.

Today I Learned – 10/19/16

Today I Learned how to respond virtuously upon being made aware of displeasing words that someone may have said about me, thanks to Harvard law professor Lawrence Lessig (who, unbeknownst to me, also ran for president at one point, but that’s a different topic).

In recent weeks, WikiLeaks has been releasing an avalanche of Hillary Clinton’s private emails, including correspondence between her close aides. In one email thread, two political aides, John Podesta and Neera Tanden, express their contempt for Lessig, describing him with some unpleasant words like “pompous” and “smug[ness].” Thanks to WikiLeaks, what was intended as a personal and private exchange between two high-profile individuals was suddenly open to public scrutiny.

Now, given the disappointingly low standards of today’s public discourse and the tendency of an otherwise sane media to scrutinize and dramatize every peccadillo of public figures, it wouldn’t be far-fetched to assume that this exchange would become the next short-lived scandal to take over our headlines. Perhaps Lessig, a strong advocate for campaign finance reform, would use it as an opportunity to take a political jab at Clinton?

The news of this exchange eventually reached Lessig’s attention, and to the surprise of many, he responded in a manner so extraordinarily composed and refreshingly professional that it deserves to be recognized as a lesson for us all. Here’s what he said:

I’m a big believer in leaks for the public interest. That’s why I support Snowden, and why I believe the President should pardon him. But I can’t for the life of me see the public good in a leak like this — at least one that reveals no crime or violation of any important public policy.

We all deserve privacy. The burdens of public service are insane enough without the perpetual threat that every thought shared with a friend becomes Twitter fodder.

Neera has only ever served in the public (and public interest) sector. Her work has always and only been devoted to advancing her vision of the public good. It is not right that she should bear the burden of this sort of breach.

To me, this response was not only principled and virtuous; it also showed a kind of class rarely seen in public discourse. Lessig showed us the importance of overlooking privately expressed personal remarks in favor of a commitment to the greater public good, whilst affirming the right of others to express those remarks, however unpleasant they may be. He begins his statement by proclaiming his support for leaks that serve the public interest, and follows by shunning the kind of leak (like this one) that brings no public benefit. He then affirms his universal support for the privacy rights of every individual, and reminds us that those working in public service already have enough at stake without having their every private statement scrutinized.

Finally, he concludes by doing the contrary of what many of us would expect; instead of retaliating against Neera, or even just dismissing her, he praises her good work in public service and scorns the notion that she should be attacked for her privately expressed thoughts.

I am not a politician or a public figure, but this definitely led me to reflect upon similar instances in my own life, whether with family, friends, or coworkers, and how I responded. How did I respond when I was made aware of disparaging comments made about me in private? What was my attitude toward the person who made those comments? What was my attitude toward the individual who made me aware of it in the first place? I think the lessons imparted by Lessig’s response are well worth pondering and serve as a great reminder of how to manage our everyday relationships with people and the public.


Today I Learned – 10/14/16

(Note: Though technically it is the 15th, I am using the 14th as the date because this is a late, unpublished reflection from the night of the 14th.)

Today I Learned about the subtle but profound distinction between “wrongfulness” and “harmfulness,” and I owe it to this piece in The New Yorker. The piece features a clip of writer Malcolm Gladwell’s speech at The New Yorker Festival, where he shares a story from his childhood to elucidate this distinction: he and his brother sneaked into a neighbor’s cornfield and built a fort, which infuriated his father, not because of any harm these actions caused the neighbor (he states the harm amounted to only about $1 in damage), but because it was the wrong thing to do regardless of the harm (or lack thereof).

Wrongfulness in itself, he argues, does not take into account whether actual harm (in the act of wronging) was done. Harmfulness, therefore, is distinct from wrongfulness. To declare something as wrong is to express a definitive ethical and/or moral disapproval about that action; however, to declare something as harmful is to say that it was more than just wrong, but that it also resulted in some form of damage. He says that today, all of our moral judgments (I would take it further to include legal judgments) are made by calculating harmfulness, not wrongfulness.

The original reason he made this distinction was to share his views on the ongoing debates regarding the name of Princeton University’s Woodrow Wilson School of Public and International Affairs. He states that he is on the side of the student activists who oppose the school being named after an “unrepentant racist,” but that the premise upon which the students make their argument, i.e., that the school’s name causes them harm, is simply not plausible. He suggests that a more effective strategy would be to state plainly that it is wrong to name the school after Woodrow Wilson, rather than attempt to implicate harm.

I found this argument very intriguing because it goes to the root of a question that college campuses (and perhaps the general public) have been grappling with in recent years — the question of justice: How do we best measure it and practice it?

With a heightened awareness of this country’s troubling history of racism, sexism, homophobia, and xenophobia, coupled with the democratization of people’s voices thanks (partly) to social media, we’ve been exposed to alternative perspectives from otherwise marginalized groups (women; ethnic, racial, and religious minorities; the LGBTQ community; etc.), forcing us to reexamine issues that we’ve taken for granted. Wilson was a political pioneer in many ways, but he was also an avowed racist. How do we reconcile these two facts? How do we evaluate harm in its historical context in contrast to how we evaluate harm today? Is Gladwell right in his critique of the student activists at Princeton? I do not have the answers to these questions, but I think they are all worth exploring. And I think that regardless of the tactical choices of the student activists, and irrespective of the ostensible implausibility of their premise regarding harm, they deserve to be listened to.

Malcolm Gladwell’s writings have had an enormous impact on me. I’m currently reading his book ‘Outliers,’ the third installment of a tripartite series (the first two being The Tipping Point and Blink) that looks at how extraordinarily successful people in various fields reach their level of success, and how people come to exhibit unique idiosyncrasies. He shows the reader that this has more to do with environment, timing, and circumstance than plain old practice and hard work. I plan to write a review of it sometime in the future, so I’ll go into detail then. In the meantime, I am glad I was exposed to this particular article, which taught me to reexamine the ways in which I view wrongfulness and harmfulness.

– Asad