Our Dysfunctional Discourse Is A Defense Mechanism Against Information Overload

Ashley Hodgson is the Frank Gery Associate Professor of Economics at St. Olaf College and a member of the Institute’s Director’s Council. Here she describes the unintended and often harmful effects of social media in our lives. All views expressed here are her own.

By Ashley Hodgson

The Dysfunctional Relationship

The national discourse is not just polarized; it is dysfunctional.  The best definition of dysfunction that I have heard is: Something is dysfunctional when applying effort makes the problem worse.  When people try to expose themselves to views outside of their social media bubble, it not only fails to work; it often entrenches them further.  For example, in one study, people moved farther toward the extremes of their own side of the political spectrum when algorithms were programmed to show them social media content from the other side.

Some of the most pressing issues our society faces relate to our inability to communicate across difference: the rise in anxiety and loneliness, the decline in trust in institutions, and the compromise-resistant gridlock in Congress.  Social media trains our brains in faulty cognitive habits that contribute to the dysfunction of cross-viewpoint conversation.  Exposure to ideas from the other side of the political spectrum is necessary but insufficient for creating real understanding.  Such exposure must take place away from social media feeds, in a context that is careful to avoid these bad habits.  However, off-platform discussion is also insufficient for creating understanding, because some of the bad cognitive habits formed on social media show up in our classrooms, at our lunch tables, and in our committee conference rooms.

The Algorithms

The content we see online is not curated purely to serve us what we enjoy.  The algorithms curate our content based on their own goals.  Social media managers instruct these algorithms to maximize time on platform, engagement, clicks on advertisements, and other metrics aimed at increasing shareholder value.  Moreover, no human being is actually overseeing the methods the algorithms use to achieve these objectives.  Explainable AI, a field within computer science, is working on ways to get algorithms to explain their methods, but this task is more challenging than it seems.

How do you get people to engage with the platforms?  Undoubtedly by tapping into the deepest drivers of human motivation.  This includes the fight-or-flight response, our emotional wiring, our insecurities, and our interpretive structure.  The most efficient way to tap into fight-or-flight is by evoking a sense of threat.  This could be physical threat, social threat, economic threat, or reputational threat.  Under the thumb of anxiety, we take interest in every update that social media has to offer.

But how could algorithms evoke threat in people who are clicking a mouse from the safety of their bedrooms?  What tools can the algorithm employ?  Through our social media feeds, the algorithm can influence our perception of patterns and of the reality inside other people’s heads.  With a near-infinite amount of content generated by real people, social media algorithms can mix and match content to create perceptions that answer your most pressing questions in the way that heightens the urgency to consume more.  A few carefully chosen incidents can be cycled through our news feeds in a way that magnifies them and makes them seem like representative cases of “people like them.”  It is hard to argue with such perceptions because they are drawn from real experiences.  What the social media feed forms is not so much a concrete set of beliefs as a set of intuitions.  And intuitions are harder to scrutinize, but they shape the way we feel and react.  They shape the way we process future information.  They shape our lens and our perception of each other.  They determine what is salient to us.  Does that group of people hate you?  Do they wish you harm?  Do they want to take away the ideas, opportunities for expression, and ways of being that are dear to you?  If you want to evoke the fight-or-flight response, the answer should be “yes.”

None of this should minimize people’s worries, however.  Just because the algorithms have incentives to create a sense of threat does not mean that there is nothing real to fear.  Quite the reverse.  There is a self-fulfilling prophecy to the threat-inducing nature of algorithms.  Specifically, when you feel threat from a certain group of people, you may take personal or political action to neutralize them, becoming an actual threat to them.  The group you fear then begins to feel threatened by you and the cycle spirals.

In which case, it is worth considering the cognitive habits that the algorithms may be training in our brains…

Cognitive Habit 1: The Label-and-Dismiss Instinct

None of us could fully process all of the information that is flying at us at 90 mph in the online world.  We need some way of discerning where to spend our content-consumption time.  This is particularly difficult given that each piece of information has been carefully crafted to highlight the urgency of attending to it right now.  To handle this, we develop ways of quickly dismissing information nuggets; otherwise the online space would overwhelm us.  It is to both our advantage and the algorithm’s advantage for us to have a psychologically satisfying way of dismissing urgent messaging.

There is nothing unethical about treating online content in this way.  The problem occurs when we treat people this way.  When people share what is on their minds, it is often the very same content we dismissed as a pesky headline; that content worked to capture their attention.  The result is predictable.  One conversation partner throws out a nugget of something they have been thinking about, based on extensive content they have read online.  The friend responds by dismissing it, just as they did when it flashed through their own social media feed.  But they do not dismiss it casually.  They dismiss it with all of the authority they use when dismissing every other urgent piece of clickbait they encounter so regularly.  They dismiss it with a comment that sounds like, “I already know about that, and it’s nothing,” a quick remark that signals they have not really read or thought about the issues troubling their friend.

Encountering a label-and-dismiss attitude quickly teaches us not to press further and not to try to correct misunderstandings.  If we try to clarify our position, we often run into a stubborn caricature of what someone with our position would say…

Cognitive Habit 2: The Stubborn Caricature  

The “label” part of label-and-dismiss is not just a quick slap-on label.  It is a well-built caricature.  When we try to present information and arguments as we understand them, the other person cannot hear us.  Instead, they hear the arguments that their caricature would present.  It can feel like a loss of dignity to compete with such a contempt-deserving caricature, and to have someone we care about project those traits onto us rather than actually listening to what we are really saying.  We might, at first, try to anticipate and pre-manage the stereotypes by knocking them out before introducing a new topic of potential disagreement.  But then we run into the infinite preamble problem.  “Before I speak, I want to be clear: I don’t think X or Y or Z.”  We discover a bottomless list of misunderstandings that prevent us from ever even getting started.

That’s when we revert to an attitude of “Why even try to engage across difference?”  To avoid being pegged into people’s stereotyped holes, we more and more frequently avoid engaging and sharing our perspectives unless we already know that the other person agrees with us.  And, of course, giving up on even trying allows our caricatures to calcify further.

Cognitive Habit 3: The No-Feedback “Discussion”

Our misperceptions of each other also grow because asynchronous online conversation allows no space for the back-and-forth that corrects misperceptions.  As a result, people develop the habit of assuming that they understand the other.  They plow forward with their objections, critiques, and dismissals without room in the discourse for, “You have my viewpoint way off base.”  When responses take hours or days, as they do online, we have lost too much engagement to listen carefully to corrections of our misunderstandings.  We leave the conversation triumphant and entrenched in our misunderstanding of the other, and we are no longer invested in the topic when feedback finally arrives.

Rational Cognitive Habits and The Platform’s Delight 

In some ways, there is a rationality to this.  The anxieties that our social media feeds are able to conjure in us are already as much as we can handle.  Taking on the hard-pressing concerns that a friend brings from their own social media feeds might tip us past the edge of what we can handle into overwhelm.  We need ways of keeping new anxieties at bay.  The problem is that all of this leaves us more vulnerable to algorithms trying to incite a sense of threat…

There are three ways that all of this feeds into the algorithm’s objectives.  First, each of the cognitive habits above can make in-person conversation feel more isolating and less satisfying.  We may respond by moving our social lives online, where the sorts of frustrations common in face-to-face interaction can be managed by clicking away often enough until the algorithms learn to do the clicking away on our behalf.  From an economic perspective, the question of “Where do you spend your social time: in person or on social media?” is akin to “Where do you spend your hotel-staying time: Hilton or Marriott?”  Face-to-face social lives are a direct competitor to the social media platforms trying to maximize time on platform.  When we respond to these dilemmas by shifting our social lives online, we play perfectly into the algorithm’s objectives.

Second, the above cognitive habits create a drama of misunderstanding that sucks us into the platforms.  Nothing irks people so much as feeling misunderstood.  It evokes a visceral impulse to manage the misunderstanding, ideally before it reverberates further across the web.  Where do you go to manage your reputation?  Exactly where the social media algorithms want you to go.  It’s another win for the “time on platform” algorithm, so we should not be surprised that the online space has discovered ways to magnify misunderstanding.

Finally, these cognitive habits heighten a sense of contempt between groups with different viewpoints.  One definition of contempt is the belief that the other is morally inferior, intellectually inferior, or beneath consideration.  If we need to dismiss information that is constantly cloaked in urgency, our modes of dismissal are likely to sound contemptuous.  Our caricatures of the other will also almost certainly sound contemptuous.

Worse yet, sometimes people will use real arguments on their side of an issue as weapons to dismiss the good points of the other side.  When this happens over and over in both directions, the good points each side makes can begin to feel like weapons in and of themselves.  In this way, legitimate arguments devolve into statements of contempt, at least as they are heard by the other side.  The result is a dysfunctional conversation.


Our ills around polarization, misinformation, and inter-group hostility will not be solved by more information and more exposure to other viewpoints.  The solution will need to combine such exposure with more authentic interactions where we are intentional about setting aside some of the cognitive habits discussed here.  A starting point is to be willing to engage with the person as a person, not as a vector of information.

One final warning seems necessary.  The very tools I describe here are too often used as weapons for dismissing others.  When we accuse others of being under the influence of manipulation and brainwashing without acknowledging the same possibility in ourselves, we play into the unhealthy dynamic we are trying to extinguish.  To undertake conversations across difference, we need enough humility to acknowledge that we have blind spots, too.  We also need enough self-awareness to monitor the fight-or-flight reactions and jabbed insecurities that these conversations evoke, to the algorithm’s benefit.  Blaming the algorithms for a conversation partner’s prickliness is only useful when it makes you take things less personally and more generously.