This week a violent mob mounted the biggest attack on the Capitol, the seat of American democracy, in more than 200 years, driven by the false belief that the presidential election had been stolen. The chief author of that claim was President Donald Trump, but the mob’s readiness to believe it was in large part a product of the attention economy that modern technology has created.
News feeds on Facebook and Twitter run on a business model that commodifies the attention of billions of people every day, sorting tweets, posts, and groups to surface whatever gets the most engagement (clicks, views, and shares), which in practice means whatever provokes the strongest emotional reactions. These attention-commodifying platforms have warped the collective psyche, leading to narrower and crazier views of the world.
YouTube’s recommendation algorithms, which drive 70% of daily watch time for billions of people, “suggest” what are meant to be similar videos but in fact steer viewers toward more extreme, more negative, or more conspiratorial content, because that is what keeps them on their screens longer. For years, YouTube recommended “thinspiration” (anorexia-promoting videos) to teen girls who had watched videos about dieting. And when people watched science videos about NASA’s moon landing, YouTube recommended videos about the flat-Earth conspiracy theory; it did this hundreds of millions of times. News feeds and recommendation systems like these have created a downward spiral of negativity and paranoia, slowly decoupling billions of people’s perception of reality from reality itself.
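To see how little machinery this incentive requires, consider a deliberately simplified sketch of engagement-based ranking. It illustrates the logic described above, not any platform’s actual code; every field name and weight here is hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Post:
        title: str
        clicks: int
        views: int
        shares: int
        emotional_intensity: float  # hypothetical proxy score, 0.0 to 1.0

    def engagement(post: Post) -> float:
        # Score each item by the signals the business model rewards.
        # Note what is absent: nothing here asks whether the post is true.
        return post.clicks + post.views + 2 * post.shares + 100 * post.emotional_intensity

    def rank_feed(posts: list[Post]) -> list[Post]:
        # The feed simply shows the highest-scoring items first, so the
        # most emotionally provocative content rises to the top.
        return sorted(posts, key=engagement, reverse=True)

Accuracy is not a variable this kind of objective function can see, so a system optimized this way will keep surfacing whatever holds attention, true or not.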
Seeing reality clearly and truthfully is fundamental to our capacity to do anything. By monetizing and commodifying attention, we’ve sold away our ability to see problems and enact collective solutions. This isn’t new: almost any time we allow the life-support systems of our planet or society to be commodified, breakdowns follow. When you commodify politics with AI-optimized microtargeted ads, you remove integrity from politics. When you commodify food, you lose touch with the life cycle that makes agriculture sustainable. When you commodify education into digital feeds of content, you lose the interrelatedness of human development, trust, care, and teacherly authority. When you commodify love by turning people into playing cards on Tinder, you sever the complex dance involved in forging new relationships. And when you commodify communication into chunks of posts and comment threads on Facebook, you remove context, nuance, and respect. In all these cases, extractive systems slowly erode the foundations of a healthy society and a healthy planet.
Shifting systems to protect attention
E.O. Wilson, the famed biologist, proposed that humans should run only half the Earth, and that the rest should be left alone. Imagine something similar for the attention economy. We can and should say that we want to protect human attention, even if that sacrifices a portion of the profits of Apple, Google, Facebook, and other large technology corporations.
Ad blockers on digital devices are an interesting example of what could become a structural shift in the digital world. Are ad blockers a human right? If everybody could block ads on Facebook, Google, and the rest of the web, much of the internet could no longer fund itself, and the advertising economy would lose massive amounts of revenue. Does that outcome negate the right? Is your attention a right? Do you own it? Should we put a price on it? Selling human organs or enslaved people can meet a demand and generate profit, but we have decided that such things do not belong in the marketplace. Like human beings and their organs, should human attention be something money can’t buy?
The covid-19 pandemic, the Black Lives Matter movement, and climate change and other ecological crises have made more and more people aware of how broken our economic and social systems are. But we are not getting to the roots of these interconnected crises. We keep falling for interventions that feel like the right answer but are in fact traps that quietly maintain the status quo. Slightly better police practices and body cameras do not prevent police misconduct. Buying a Prius or a Tesla does not meaningfully bring down the level of carbon in the atmosphere. Replacing plastic straws with biodegradable ones is not going to save the oceans. Instagram’s move to hide the number of “likes” will not transform teenagers’ mental health while the service remains predicated on constant social comparison and the systematic hijacking of the human drive for connection. We need much deeper systemic reform. We need to shift institutions to serve the public interest in ways that are commensurate with the nature and scale of the challenges we face.
At the Center for Humane Technology, one thing we did was convince Apple, Google, and Facebook to adopt, at least in part, the mission of “Time Well Spent,” even when it cut against their economic interests. This was a movement we launched through broad public-awareness campaigns and advocacy, and it gained traction with technology designers, concerned parents, and students. It called for changing the digital world’s incentives from a race for “time spent” on screens and apps into a “race to the top” to help people spend their time well. It has led to real change for billions of people. Apple, for example, introduced its “Screen Time” features with iOS 12 in 2018; they now ship on every iPhone and iPad. Besides showing users how much time they spend on their phones, Screen Time offers a dashboard of parental controls and app time limits that lets parents see how much time their kids are spending online, and on what. Google launched its similar Digital Wellbeing initiative around the same time, including further features we had suggested, such as making it easier to unplug before bed and limit notifications. Along the same lines, YouTube introduced “Take a break” notifications.
These changes show that companies are willing to make sacrifices, even ones measured in billions of dollars. Nonetheless, we have not yet changed the core logic of these corporations. For a company to do something against its economic interest is one thing; doing something against the DNA of its purpose and goals is a different thing altogether.
Working toward collective action
We need deep, systemic reform that will shift technology corporations to serving the public interest first and foremost. We have to think bigger about how much systemic change might be possible, and how to harness the collective will of the people.
Recently at the Center for Humane Technology, we interviewed Christiana Figueres, executive secretary of the United Nations Framework Convention on Climate Change from 2010 to 2016, for our podcast Your Undivided Attention. She was responsible for the “collaborative diplomacy” that led to the Paris Agreement, and we learned how she managed, against all odds, to get 195 different countries to make shared, good-faith commitments to addressing climate change. Figueres initially didn’t believe such an agreement was possible, but she realized that bringing the Paris conference to a successful conclusion meant she herself would have to change: she had to genuinely believe the countries could commit to climate action before she could focus on getting them to believe it too. Where earlier international climate negotiations had failed, Figueres’s efforts brought nations together to agree on financing, new technologies, and other tools to keep global temperature rise below 2 °C, and ideally below 1.5 °C.
In the case of the tech industry, we have a head start: we don’t need to convince hundreds of countries or millions of people. Fewer than 10 people run the 21st century’s most powerful digital infrastructure, the so-called FAANG companies: Facebook, Amazon, Apple, Netflix, and Google (now part of Alphabet). If those individuals got together and agreed that maximizing shareholder profit was no longer the common aim, the digital infrastructure could be different. If Christiana Figueres could bring about consensus among 195 nations, we can at least entertain the possibility of doing it with 10 tech CEOs.
A new economics of humane technology
Several economic principles need to shift in order for technology to align with humanity and the planet. One of these is the growth paradigm: you simply cannot run a logic of infinite growth on a finite substrate. The drive for infinite economic growth is driving a planetary ecological crisis. For tech companies, pursuing the infinite growth of extracted human attention leads to a similar crisis of global consciousness and social well-being. We need to shift to a post-growth attention economy that places mental health and well-being at the center of our desired outcomes.
A small hint of this shift is taking place in countries including New Zealand and Scotland, where organizations such as the Wellbeing Economy Alliance are working to move from an economy judged by gross domestic product (GDP) to one organized around well-being. Leaders there are asking how well-being can inform public understanding of policies and political choices, guide decisions, and become a new foundation for economic thinking and practice.
Another shift toward more humane technology requires a broader array of stakeholders who can create accountability for the long-term social impact of the industry’s actions. Right now, large technology companies can make money by selling thinner and thinner “fake” slices of attention: fake clicks from fake sources of news, sold to fake advertisers. These companies make money even if what a link or article leads to is egregiously wrong and propagates misinformation. This opportunism debases the information ecology by destroying our capacity to trust sources of knowledge or share beliefs about what is true, which in turn destroys our capacity for good decision-making. The result is polarization, misinformation, and the breakdown of democratic citizenship. We need to create mechanisms that incentivize participants in the digital world to consider longer time frames and the broader impact their actions have on society.
Human will plays an important role here. Apple’s App Store revenue-distribution model acts as the central bank, the Federal Reserve, of the attention economy. What if the leaders behind it simply chose to distribute revenue to app makers based not on whose users bought the most virtual goods or spent the most time in an app, but on which apps best cooperated with the other apps on the phone to help people live by their values?
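To make the contrast concrete, here is a toy sketch of the two allocation rules side by side. The “values alignment” score is purely hypothetical, a placeholder for a measure that does not exist today and would still have to be invented and agreed upon.

    def allocate_by_attention(hours_used: dict[str, float], revenue: float) -> dict[str, float]:
        # Today's logic: an app's share of revenue tracks its share of
        # captured attention (or of virtual goods sold).
        total = sum(hours_used.values())
        return {app: revenue * hours / total for app, hours in hours_used.items()}

    def allocate_by_values_alignment(alignment: dict[str, float], revenue: float) -> dict[str, float]:
        # The alternative: an app's share tracks a (hypothetical) score for
        # how well it cooperates with other apps to help people live by
        # their values, rather than how much time it extracts from them.
        total = sum(alignment.values())
        return {app: revenue * score / total for app, score in alignment.items()}

The mechanics of redistribution are trivial; the hard part, and the place where human will enters, is choosing what the score measures.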
Ultimately it comes down to setting the right rules. It is difficult for any one actor to optimize for well-being and alignment with society’s values while other players are still competing for finite resources and power. Without rules and guard rails, the most ruthless actors win. That’s why legislation and policies are necessary, along with the collective will of the people to enact them. The greater meta-crisis is that the democratic processes for creating guard rails move far more slowly than the technology they are meant to govern. Technology will continue to advance faster than 20th-century democratic institutions can understand its harms. The technology sector itself needs to come together, collaboratively, and find ways to operate so that shared societal goals are placed above hyper-competition and profit maximization.
Finally, we need to recognize the massive asymmetric power that technology companies hold over individuals and society. They know us better than we know ourselves. Any such asymmetric power structure must follow the fiduciary, or “duty of care,” model exemplified by a good teacher, therapist, doctor, or care worker: it must work in the service of those with less power, not run a business model based on extraction. Upgraded business models for technology need to be generative: they need to treat us as the customer rather than the product, and to align with our most deeply held values and our humanity.
Toward being human
E.O. Wilson has said, “The problem with humanity is that we have Paleolithic emotions, medieval institutions, and godlike technology.” We need to embrace our Paleolithic emotions in all their fixed weaknesses and vulnerabilities. We need to upgrade our institutions to incorporate more wisdom, prudence, and love. And we need to slow down the development of a godlike technology whose powers exceed our capacity to steer the ship we are all on.
The realm of what is possible continues to expand, but it is expanding alongside global crises of exponential scale that demand better information, leadership, and action. Rather than accepting a race to the bottom that downgrades and divides us, we can together create a technology landscape that enables a race to the top: one that supports our interconnection, civility, and deep brilliance. Change, I believe, is humanly possible.
Tristan Harris is cofounder and president of the Center for Humane Technology. This essay is an adapted excerpt from The New Possible: Visions of Our World beyond Crisis, to be published on January 26, 2021, by Cascade Books.