How to Design Better Social Media

On designing social tools for society

Tobias Rose-Stockwell
14 min read · Apr 13, 2018

A version published on Quartz can be found here.

Facebook’s old internal motto: Move fast and break things

Data exploits affecting millions, election hacking, the death of newspapers, weaponized propaganda, troll armies, deepening polarization and the shaky future of democracy itself. It seems we’re presented daily with a laundry list of dystopian consequences linked back to our collective overuse of social media.

People are crying out for change, but there's a missing piece of the conversation: despite the emerging awareness of many of the terrible side effects of these platforms, we are dependent on them.

They’ve become our local news channels, our emergency communication systems, our town squares, and the primary windows into the lives of our loved ones and governments. They are a critical part of how we rally around shared causes and engage with our politics.

How do we reconcile their toxicity with their utility?

This shouldn’t be such a painful choice. Over the last year I have been speaking with academics, designers and technologists studying the key flaws of these platforms — and cataloging testable fixes. My efforts have focused on the nested problems of polarization, dehumanization, and outrage, three of the most dangerous byproducts of these tools.

Below, I’ll explore what might be changed by these companies now — not tomorrow — to make these platforms better for humanity.

What’s good for capturing human attention is often bad for humans

Our feeds can make us angry for a reason.

Imagine that you’re walking down the street, and you hear a fight break out. It’s loud and aggressive, and people are yelling about something. You’ll likely stop momentarily to see what’s going on — it’s in your nature.

If you personally know one of the people fighting, you will probably immediately pick a side — you might even get involved. At minimum, you will pay attention.

This is what social media does to us regularly: it encourages us to observe conflicts and pick sides on topics about which we would otherwise have few opinions.

At its core, it is an opinion-serving machine. And on social media, not all opinions are served equally.

Most of our content feeds and timelines are no longer sorted chronologically. The decision about which content to show us is instead based on how likely we are to engage with it.

Instagram, Facebook, Twitter, YouTube, and others have moved away from chronological sorting to help users process more information and to keep them on-site

Emotional reactions like outrage are strong indicators of engagement. With even the most basic engagement-sorting algorithm, this kind of divisive content will be shown first, because it captures more attention than other types of content.
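
To make the mechanics concrete, here is a minimal sketch of engagement-first ranking. It assumes a hypothetical predicted_engagement score supplied by an upstream model; the post names and numbers are made up.

```python
# A minimal sketch of engagement-first feed ranking (not any platform's
# actual code). Each post carries a hypothetical predicted_engagement
# score produced by an upstream model.

from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    predicted_engagement: float  # e.g. modeled clicks, comments, shares


def rank_feed(posts: list[Post]) -> list[Post]:
    """Sort the feed purely by predicted engagement, highest first."""
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)


feed = rank_feed([
    Post("calm-update", 0.12),
    Post("outraged-rant", 0.48),  # divisive content tends to score higher
    Post("family-photo", 0.20),
])
print([p.post_id for p in feed])  # the outraged post rises to the top
```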

A hypothetical distribution based on interviews with platform engineers

This content acts as a trigger for our own emotional reactions. And when we respond, we regularly push our emotions out to the rest of the world.

This is a simplified model of how we share on social media:

Middle step skipped when users simply Retweet or Share content

With angry content, this has allowed for what we might call outrage cascades: viral explosions of moral judgment and disgust. These have come to dominate our feeds and our conversations, and are becoming a prominent part of the cultural zeitgeist.

Outrage triggers are often shared, triggering others and creating outrage cascades.

Moral Outrage = Virality

William J. Brady, a researcher at NYU, recently found a pattern in viral social media posts. Studying a massive data set of hundreds of thousands of tweets, he found that posts using moral and emotional language receive a roughly 20% boost in retweets for every moral-emotional keyword used.

Conservative Example Tweet:
“Gay marriage is a diabolical, evil lie aimed at destroying our nation”
- @overpasses4america

Liberal Example Tweet:
“New Mormon Policy Bans Children Of Same-Sex Parents — this church wants to punish children? Are you kidding me?!? Shame”
- @martina

Each of these tweets incorporates language that is morally charged, and condemning of others. They incite a deep emotional response, and are more likely to be seen & shared by people who agree with them, providing a measurable boost to virality and engagement.

This has become a hidden incentive for people to share divisive, outrageous, and emotional content online.
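
To see how quickly this compounds, here is a back-of-the-envelope sketch that treats each moral-emotional word as a roughly 20% multiplier on expected retweets, per Brady's finding. The word list and baseline are illustrative placeholders, not his actual dictionary or data.

```python
# Rough illustration of the ~20% per-word boost. The word list and the
# baseline retweet count are hypothetical placeholders.

MORAL_EMOTIONAL_WORDS = {"evil", "shame", "disgrace", "attack", "destroy", "punish"}


def expected_retweets(text: str, baseline: float = 100.0, boost: float = 0.20) -> float:
    """Estimate reach as baseline * (1 + boost) ** (moral-emotional word count)."""
    words = {w.strip(".,!?\"'").lower() for w in text.split()}
    hits = len(words & MORAL_EMOTIONAL_WORDS)
    return baseline * (1.0 + boost) ** hits


print(expected_retweets("New policy announced today"))                     # ~100
print(expected_retweets("This evil policy will destroy and punish kids"))  # ~173
```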

This doesn’t just apply to our personal posts. It likely applies to any content we share on social media — comments, memes, videos, articles. It has incubated an ecosystem of moral outrage that is utilized by content creators everywhere, including news organizations, simply because it works.

Facebook, Twitter, YouTube, and others prioritize this type of content, because it is what we click on, hover over, and respond to. It is the hidden pathway to audience engagement. When trying to capture attention, our anger, fear and disgust are a signal in the noise.

Through the dominance of these tools — in our media, our conversations, and our lives — we’ve watched our common discourse turn ugly, divisive, and increasingly polarizing.

What can we do about it?

These are four design changes we might consider for improving the way we share online:

Give Humanizing Prompts

Nudge people in the right direction with specific prompts before they post

Molly Crockett at Yale’s Crockett Lab has suggested that our inability to physically see the emotional reactions of others might encourage negative behavior on social media. Online we literally can’t see the suffering of others, and this makes us more willing to be unkind.

Nicholas Christakis (also at Yale) has shown that real-life social networks can be influenced by simple AI to help improve group behavior and outcomes. We can think about incorporating prompts like that here.

These are several interventions that we can place immediately after people post, allowing us to nudge users towards kindness and better behavior.

Research on perceptual dehumanization has shown that we might be significantly more punitive towards people in digital environments. Increasing empathetic responses like this might help increase the perception of our digital avatars as being human. (This research is fascinating: something as simple as the vertical/horizontal orientation of our faces in our profile picture can determine whether others consider us worthy of punishment.)

Your anger is unlikely to be heard by the other side. A better way of thinking about this: the more outrageous your tweet is, the less likely the other side is to see it. This is based on Brady's research showing that tweets with emotional and moral language tend to stay within the ideological networks they start in.

For people who genuinely want to connect with other audiences, this might give them pause and help them reframe. Though this may not deter the majority of people who post inflammatory content, some might reconsider their language. The prompt can also offer basic guidance on how to make the post more accessible to other audiences.

We do angry things that we often later regret. Having a moment to pause, review, and undo content flagged as hurtful might reduce the likelihood of sharing it in our worst moments.

Note: All of the above prompts might also be used with linked content, like articles, as long as that content is indexed and flagged correctly.

Disagreeing with people is hard. Disagreeing with people in public is even harder. There are tremendous social pressures at play when we write comments on people’s posts in public: we’re on display, fighting about an idea with a crowd watching us. Giving a private reply — taking it to direct message by default — might encourage people to open sidebars to have conversations with less external pressure.
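
Wired together, the prompts above might look something like the sketch below. It assumes hypothetical hurtful and outrage scores from an upstream classifier, and the thresholds are placeholders to be tested.

```python
# A sketch of routing a draft post to one of the humanizing prompts.
# hurtful_score and outrage_score are assumed to come from an upstream
# classifier; the thresholds are placeholder values.

def choose_prompt(hurtful_score: float, outrage_score: float,
                  is_public_reply: bool) -> str | None:
    """Return a nudge to show the author, or None to publish without one."""
    if hurtful_score > 0.8:
        # Pause-and-review: give the author a moment to reconsider.
        return "This post may hurt someone. Review it before sharing?"
    if outrage_score > 0.7:
        # Reach reframing: outraged language rarely crosses ideological lines.
        return "Posts like this are rarely seen by people who disagree. Reword it to reach them?"
    if is_public_reply and outrage_score > 0.4:
        # Default disagreements to a private sidebar instead of a public fight.
        return "Disagree? Consider sending this as a private reply instead."
    return None


print(choose_prompt(hurtful_score=0.1, outrage_score=0.5, is_public_reply=True))
```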

This is all great, but how do we find these posts in the first place? Isn’t that tricky? Yes it is. Let’s get into the weeds for a moment.

Picking Out Unhealthy Content with Better Metrics

Flagging and filtering specific content types we would prefer to see less of

This is a lot harder than it sounds. We need metrics to train algorithms and humans to find the stuff we don’t want. If we intervene with the incorrect set of metrics, we can easily end up in some really dark places.

Algorithms are representations of human intelligence — and just like any human creation, they can inherit and amplify our perspectives and flaws. This is known as algorithmic bias, and it can manifest in stock market failures, systemic loan discrimination, unfair teacher evaluations, and even in racially unjust jail sentences.

Facebook already trains its news feed algorithm around the metric of what is “meaningful” to its users. The problem with this metric is that many strong human reactions — like moral outrage, disgust, and anger — are broadly considered meaningful.

To do this right, we need better metrics.

By correctly measuring the type of content that users don’t want to see more of, we can begin to give users a menu of choices that accurately represent their preferences, not just what they will click on.

Social media is built for the present self

Here are three possible candidates for content we might prefer to have less of: outraged, toxic, and regrettable. They are starting points.

Critical note: Flagging content that induces specific negative responses immediately treads into free-speech territory. Who is to say platforms should suppress people's voices just because certain people find them offensive? Isn't that censorship? This is a huge, critical question, and one that I will address. Read on.

These metrics don't cover the important work being done on information fidelity: false news, disinformation, and state-sponsored propaganda. They could, however, help reduce the spread of such content.

Once these content types are identified, we can train supervised AI to flag them in the future.
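
As a rough illustration, a first-pass flagger could be trained on user-labeled examples with off-the-shelf tools. The labels and example posts below are hypothetical; a real training set would need to be large and labeled by a representative sample of users.

```python
# A minimal sketch of a supervised flagger trained on user-labeled posts.
# The examples and labels are hypothetical stand-ins for real training data.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Label 1 = content users said they would prefer to see less of.
texts = [
    "You are a disgrace and everyone knows it",
    "Lovely hike with the family this weekend",
    "These people are evil and should be punished",
    "Sharing my notes from today's lecture",
]
labels = [1, 0, 1, 0]

flagger = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
flagger.fit(texts, labels)

# Probability that a new post belongs to the unwanted category.
print(flagger.predict_proba(["What a shameful, disgusting take"])[0][1])
```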

Regrettable Content

Regret is one of the most common emotions we feel after spending significant time on social media. This is largely due to the kind of content we click on despite ourselves, knowing we shouldn't.

This is based on the concept of present bias: the natural human tendency to give stronger weight to payoffs closer to the present when weighing trade-offs between two future moments.

Measuring present bias is tricky, but a start might be to present a simple, unobtrusive feedback prompt to users after they consume a set of content. This will require some thoughtful design choices to balance frequency with flow.
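
One way to sketch that check-in is a rate-limited sampler that occasionally asks whether the last stretch of browsing felt well spent. The interval and sampling rate below are placeholder values to be tuned against flow.

```python
# A sketch of an unobtrusive regret check-in. Prompts only fire every
# `every_n` consumed items, and even then only some of the time.

import random


class RegretSampler:
    def __init__(self, every_n: int = 50, sample_rate: float = 0.2):
        self.every_n = every_n          # how many items between possible prompts
        self.sample_rate = sample_rate  # chance of prompting at each opportunity
        self.seen = 0

    def should_prompt(self) -> bool:
        """Return True occasionally, once the user has consumed enough items."""
        self.seen += 1
        if self.seen % self.every_n != 0:
            return False
        return random.random() < self.sample_rate


sampler = RegretSampler()
for _ in range(200):
    if sampler.should_prompt():
        print("Was that last stretch of browsing time well spent?")
```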

Toxic Content

Toxicity can be determined by asking people to rate posts on a scale from Toxic to Healthy, with toxic defined as “a rude, disrespectful, or unreasonable post that is likely to make you leave a discussion.” This is the model used by Perspective API, a project of Jigsaw (part of Google), whose earliest version of the tool was fairly flawed. A critical part of making this work is ensuring these terms are defined by a diverse and representative sample of users.
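
For illustration, here is a minimal sketch of scoring a post with Perspective API's TOXICITY attribute. The API key is a placeholder, and the request and response fields follow Jigsaw's published documentation for the comments:analyze endpoint.

```python
# A sketch of fetching a toxicity score from Perspective API.
# API_KEY is a placeholder; you would need your own key from Jigsaw.

import requests

API_KEY = "YOUR_API_KEY"
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")


def toxicity_score(text: str) -> float:
    """Return a score from 0.0 (healthy) to 1.0 (toxic) for a post."""
    body = {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=body, timeout=10)
    response.raise_for_status()
    return response.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]


print(toxicity_score("You are an idiot and nobody wants you here"))
```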

Outraged Content

This is content that uses moral, emotional, and other-condemning language. We can define it by indexing posts against a moral foundations dictionary, initially developed by Jonathan Haidt and Jesse Graham, combined with other dictionaries. This definition could be broadened and honed to include ‘other-condemning’ and hyper-polarizing language.
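
A crude version of this indexing might look like the sketch below, which scores a post by the share of its words found in a combined dictionary. The word lists are tiny illustrative samples, not the actual moral foundations dictionary.

```python
# A sketch of dictionary-based outrage flagging. The word lists are small
# illustrative samples; a real system would index the full dictionaries.

MORAL_FOUNDATIONS = {"evil", "sin", "betray", "unjust", "impure", "defile"}
OTHER_CONDEMNING = {"shame", "disgrace", "pathetic", "despicable"}
COMBINED = MORAL_FOUNDATIONS | OTHER_CONDEMNING


def outrage_score(text: str) -> float:
    """Fraction of words in the post that appear in the combined dictionary."""
    words = [w.strip(".,!?\"'").lower() for w in text.split()]
    if not words:
        return 0.0
    return sum(1 for w in words if w in COMBINED) / len(words)


def is_outraged(text: str, threshold: float = 0.1) -> bool:
    return outrage_score(text) >= threshold


print(is_outraged("Shame on these evil, despicable people"))  # True
```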

Filter Unhealthy Content by Default

After users publish negative posts, filter how that content is served

All of the content we share on social media already goes through an engagement filter. Facebook, Twitter and others push content up or down our feeds depending on how likely it is to capture attention (or how much you pay them to promote it). The problem is that certain types of content — outraged, sensationalized, clickbait, etc. — naturally hack our attention in unhealthy ways.

Unhealthy content can be proportionally down-weighted to account for its natural virality. Of course social media companies already do this regularly to account for abusive content, piracy, spam, and thousands of other variables. They could also do the same for other types of triggering content. This would proportionally reduce the prominence of outrage-inducing, toxic, and regrettable posts and give them more equal footing with other types of content.
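
In practice, the down-weighting could be as simple as discounting a post's engagement score in proportion to how unhealthy it scores, as in the sketch below. The penalty weight and the scores are placeholders that would need to be tested.

```python
# A sketch of proportional down-weighting. Each post is assumed to already
# carry engagement and unhealthy-content scores from upstream models.

from dataclasses import dataclass


@dataclass
class ScoredPost:
    post_id: str
    engagement: float  # predicted engagement, higher ranks higher
    outrage: float     # 0..1 scores from the classifiers described above
    toxicity: float
    regret: float


def adjusted_rank_score(p: ScoredPost, penalty: float = 0.5) -> float:
    """Discount engagement in proportion to the post's worst unhealthy score."""
    unhealthy = max(p.outrage, p.toxicity, p.regret)
    return p.engagement * (1.0 - penalty * unhealthy)


posts = [
    ScoredPost("outraged-rant", engagement=0.48, outrage=0.9, toxicity=0.6, regret=0.7),
    ScoredPost("family-photo", engagement=0.20, outrage=0.0, toxicity=0.0, regret=0.1),
]
posts.sort(key=adjusted_rank_score, reverse=True)
print([adjusted_rank_score(p) for p in posts])  # the gap between posts narrows
```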

Note: This treads further into free-speech territory, and raises questions about filter bubbles and echo chambers. I will address both of these below.

Give Users Feed Control

Providing users with control of their own algorithmic content

During the 2016 election, when streams of political outrage and vitriol were the primary products of our social media feeds, this seemed like a bad idea. Wouldn’t users just cocoon themselves inside their own misinformation-filled realities? Wouldn’t this just increase political isolation? But as efforts to identify false news, misinformation, and propaganda have gained steam, the issue underpinning our reliance on black-box algorithms has remained. What’s missing is transparency about why we see what we see on social media.

The antidote to this is to give users access to the editorial processes that determine which content we see. From the most basic reverse-chronological order to highly curated feeds, this process could be opened up.

How this might be done: Dashboards

Gobo’s dashboard with several initial metrics.

Gobo, a project out of the MIT Media Lab, began this process by developing an open aggregator of social feeds as an example of what a dashboard might look like. It allows users to filter their content by things like politics, rudeness, and virality. These metrics are a decent starting point for thinking through what an open feed might look like.

Another analogy: Building blocks, or recipes

This might look like a set of composable building blocks, or recipes, for exploring new perspectives and the depths of our feeds. These recipes could be discussed, reconfigured, and shared on their own.
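
One way to picture a recipe is as a small, shareable configuration object that filters and sorts the feed, as in the sketch below. The field names are hypothetical and not Gobo's actual schema.

```python
# A sketch of a user-editable feed "recipe": filters plus a sort order.
# Field names and thresholds are hypothetical.

from dataclasses import dataclass


@dataclass
class RankedPost:
    post_id: str
    timestamp: float   # unix time
    engagement: float
    outrage: float
    toxicity: float


@dataclass
class FeedRecipe:
    name: str
    max_outrage: float = 1.0        # 1.0 means "show everything"
    max_toxicity: float = 1.0
    sort_by: str = "chronological"  # or "engagement"

    def apply(self, posts: list[RankedPost]) -> list[RankedPost]:
        kept = [p for p in posts
                if p.outrage <= self.max_outrage and p.toxicity <= self.max_toxicity]
        if self.sort_by == "engagement":
            return sorted(kept, key=lambda p: p.engagement, reverse=True)
        return sorted(kept, key=lambda p: p.timestamp, reverse=True)


calm_mornings = FeedRecipe("calm mornings", max_outrage=0.3, max_toxicity=0.3)
```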

This process of designing feed control for users would open up a crucial dialog about what constitutes a healthy information diet, something that is currently obscured by the platforms' proprietary feeds. Forcing users to decide might also encourage them to learn more about the kinds of unhealthy triggers they are regularly served.

The Big Question

Are these interventions another form of censorship? If we reduce the visibility of people online who say things we disagree with, isn’t that suppression of speech?

This is a huge issue.

It’s hard to overstate how important and influential these platforms are to society at this moment in time. Some of the biggest, most significant social changes in recent years have come from activism catalyzed by moral outrage shared through social media. Many of these cultural and political movements would not have been possible without them: #ArabSpring, #TeaParty, #BlackLivesMatter, #MeToo. What would happen if these voices were suppressed?

But this is the critical problem: social media already suppresses our voices.

The way we are currently served content is not a neutral, unbiased process. It is not inherently democratic, fairly apportioned, or constitutionally protected. These tools already promote or bury content with a proprietary algorithm over which we have no say.

They are not censored for political partisanship. They are censored for our engagement — to keep us attached and connected to these products, and to serve us ads. The day our chronological feeds became proprietary sorting mechanisms was the day these platforms ceased to be neutral.

The more critical these tools become to public discourse, free speech, and democracy, the more problematic it is for these algorithms to be obscured from us.

And this is ultimately the point — by accepting the importance of these tools but asking for none of the control, we are giving up our ability to determine the type of conversation we have as a society.

Three final things:

  1. These are all meant to be testable interventions.
  2. They are designed to be inherently non-partisan.
  3. They will all likely result in short-term reductions in engagement and ad revenue, but possible long-term gains in public perception, platform health, and user well-being.

These are not perfect solutions. They are starting points which can be explored more thoroughly. Part of the goal of this effort is to advance a conversation about these tools grounded in testable outcomes. With that in mind, I will update this article from time to time when there is more evidence and research available.

I hope that thoughtfully picking apart, testing, and critiquing these designs might lead us toward something that is ultimately a better alternative to the divisive, toxic, and deeply unhealthy digital sphere we mutually inhabit.

I just published my first book on these topics, chock-full of solutions and mental models for understanding this enormous shift to our media landscape. It’s called Outrage Machine, and it’s available wherever books are sold:

References

  • On Filter Bubbles & Echo Chambers: Recent research suggests that regular exposure to opposing political perspectives might actually contribute to polarization. This is a strong refutation of the argument that filter bubbles/echo chambers cause polarization. My read is that if we want to reduce systemic political division, we might instead need to reduce the prominence of specific kinds of strongly partisan/triggering content — not necessarily broaden our exposure.
  • Molly Crockett has shown that digital environments provide us with more opportunities for being outraged than any other medium, and has suggested that the benefits of sharing our outrage online are greatly increased. Nature Article
  • William Brady has shown that tweets with moral/emotional keywords receive a 20% boost for each word used. This has become a hidden incentive for people to share divisive, outrageous, and emotionally-charged content online. Article & abstract
  • Katrina Fincher is studying how we dehumanize others in digital environments. Article & abstract
  • Nicholas Christakis has shown that real-life social networks can be manipulated by “dumb” AI to help improve group behavior and outcomes. Video outlining his recent work

Special thanks to William Brady at NYU, Molly Crockett of Yale, Katrina Fincher at Columbia, and the Center for Humane Tech for insights and research used in this article. If you’d like to keep in touch, sign up for my infrequent updates or follow me on Twitter here.
