In the second of two exclusive interviews, Technology Review’s Editor-in-Chief Gideon Lichfield sat down with Parag Agrawal, Twitter’s Chief Technology Officer, to discuss the rise of misinformation on the social media platform. Agrawal discusses some of the measures the company has taken to fight back, while admitting Twitter is trying to thread a needle: mitigating the harm caused by false content without becoming an arbiter of truth. This conversation is from the EmTech MIT virtual conference and has been edited for clarity.
For more coverage on this topic, check out this week’s episode of Deep Tech and our tech policy coverage.
Credits:
This episode from EmTech MIT was produced by Jennifer Strong and Emma Cillekens, with special thanks to Brian Bryson and Benji Rosen. We’re edited by Michael Reilly and Gideon Lichfield.
Transcript:
Strong: Hey everybody, it’s Jennifer Strong, back with part two of our conversation about misinformation and social media. If Facebook is a meeting place where you go to find your community, and YouTube a concert hall or backstage for something you’re a fan of, then Twitter is a bit like the public square where you go to find out what’s being said about something. But what responsibility do these platforms have as these conversations unfold? Twitter has said one of its responsibilities is to “ensure the public conversation is healthy.” What does that mean, and how do you measure that?
It’s a question we put to Twitter’s Chief Technology Officer Parag Agrawal. Here he is, in conversation with Tech Review’s editor-in-chief Gideon Lichfield. It was taped at our EmTech conference and has been edited for length and clarity.
Lichfield: A couple of years ago, you started talking about a project to develop metrics that would measure what a healthy public conversation is. I haven’t seen very much about it since then. So what’s going on with that? How do you measure this?
Agrawal: Two years ago, working with some folks at the MIT Media Lab and inspired by their thinking, we set out on a project to work with academics outside of the company, to see if we could define a few simple metrics or measurements to indicate the health of the public conversation. What we realized in working with experts from many places is that it’s very, very challenging to boil down the nuances and intricacies of what we consider a healthy public conversation into a few simple-to-understand, easy-to-measure metrics that you can put your faith in. And this conversation has informed a change in our approach.
What’s changed is whether or not we are prescriptive in trying to boil things down to a few numbers. But what’s remained is us realizing that we need to work with academic researchers outside of Twitter, and share more of our data in an open-ended setting where they’re able to use it to do research, to advance various fields. And there are a bunch of API-related products that we’ll be shipping in the coming months. One of the things that conversation directly led to was that in April, as we saw COVID emerge, we created an endpoint for COVID-related conversation that academic researchers could have access to. We’ve seen researchers across some 20 countries access it.
So in some sense, I’m glad that we set out on that journey. And I still hold out hope that this open-ended approach, and our collaboration with academics, will ultimately lead us to understand public conversation, and healthy public conversation, well enough to be able to boil the measurement down to a few metrics. But I’m also excited about all the other avenues of research this approach opens up for us.
Lichfield: Do you have a sense of what an example of such a metric would look like?
Agrawal: So when we set out to talk about this, we hypothesized a few metrics around questions like: do people share a sense of reality? Do people have diverse perspectives, and can they be exposed to diverse perspectives? We thought about whether the conversation is civil, right? So, conceptually these are all properties we desire in a healthy public conversation. The challenge lies in being able to measure them in a way that is able to evolve as the conversation evolves, in a way that is reliable and can stand the test of time, because the conversation two years ago was very different from the conversation today. The challenges two years ago, as we understood them, are very different today. And that’s where some of the challenge lies: our understanding of what a healthy public conversation means is still too emergent for us to be able to boil it down into these simple metrics.
Lichfield: Let’s talk a little bit about some of the things you’ve done over the last couple of years. I mean, there’s been a lot of attention, obviously, on the decisions to flag some of Donald Trump’s tweets. But I’m thinking of the more systematic work that you’ve been doing over the last couple of years against misinformation. Can you summarize the main points of what you’ve been doing?
Agrawal: Our approach to it isn’t to try to identify or flag all potential misinformation. Our approach is rooted in trying to avoid specific harm that misleading information can cause. We’ve focused our approach on harm that can be done with misinformation around COVID-19, which has to do with public health, where a few people being misinformed can lead to implications for everyone. Similarly, we’ve focused on misinformation around what we call civic integrity, which is about people having the ability to know how to participate in elections.
So an example, just to make this clear, around civic integrity: we care about and we take action on content which might misinform people by saying, for instance, that you should vote on November 5th, when election day is November 3rd. And we do not try to determine what’s true or false when someone takes a policy position, or when someone says the sky is purple or blue, or red for that matter. Our approach to misinformation is also not one that’s focused on taking content down as the only measure, which is the regime we all have operated in for many years. It’s an increasingly nuanced approach with a range of interventions, where we think about whether or not certain content should be amplified without context, or whether it’s our responsibility to provide some context so that people can see a bunch of information, but also have the ability and ease to discover all the conversation and context around it, to inform themselves about what they choose to believe in.
Lichfield: How do you evaluate whether something is harmful without also trying to figure out whether it’s true? With COVID specifically, for example?
Agrawal: That’s a great question, and I think in some cases you rely on credible sources to provide that context. So you don’t always have to determine if something is true or false. If there’s potential for harm, we choose not to flag something as true or false, but to add a link to credible sources, or to additional conversation around that topic, to provide people context around the piece of content so that they can be better informed, even as the data, understanding, and knowledge are evolving. And public conversation is critical to that evolution. We saw people learn through Twitter, because of the way they got informed. And experts have conversations through Twitter to advance the state of our understanding around this disease as well.
Lichfield: People have been warning about QAnon for years. You started taking down QAnon accounts in July. What took you so long? Why did you… what changed in your thinking?
Agrawal: The way we think about QAnon, or thought about QAnon, is through a coordinated manipulation policy that we’ve had for a while. The way it works is we work with civil society and human rights groups across the globe to try to understand which groups, which organizations, or what kind of activity rises to a level of harm that requires action from us. In hindsight, I wish we’d acted sooner, but since we understood the threat well, by working with these groups, we took action. Our actions involved decreasing amplification of this content and flagging this content in a way that led to a very rapid decrease, by over 50%, in the reach QAnon and related content got on the platform. And since then, we’ve seen sustained decreases as a result of this move.
Lichfield: I’m getting quite a few questions from the audience, which are kind of all asking the same thing. And they’re basically asking, well, I’ll read them: Who gets to decide what is misinformation? Can you give a clear, clinical definition of misinformation? Does something have to have malicious intent to be misinformation? How do you know if your credible sources are truthful? What’s measuring the credibility of those sources? And someone is even saying, “I’ve seen misinformation in the so-called credible sources.” So how do you define that phrase?
Agrawal: I think that’s the existential question of our times. Defining misinformation is really, really hard. As we learn through time, our understanding of truth also evolves. We attempt not to adjudicate truth; we focus on potential for harm. And when we say we lean on credible sources, we also lean on all the conversation on the platform that gets to talk about these credible sources and point out potential gaps, as a result of which the credible sources also evolve their thinking or what they talk about.
So, we focus way less on what’s true and what’s false. We focus way more on the potential for harm as a result of certain content being amplified on the platform without appropriate context. And context is oftentimes just additional conversation that provides a different point of view on a topic, so that people can see the breadth of the conversation on our platform and outside, and make their own determinations in a world where we’re all learning together.
Lichfield: Do you apply a different standard to things that come from world leaders?
Agrawal: We do have a policy around content in the public interest; it’s in our policy framework. So yes, we do apply different standards. And this is based on the understanding and the knowledge that there’s certain content from elected officials that is important for the public to see and hear, and that all of that content is not only on Twitter. It is in newsrooms, it is in press conferences, but oftentimes the source content is on Twitter. The public interest policy exists to make sure that the source content is accessible. We do, however, flag very clearly for everyone when such content violates any of our policies. We take the bold step of flagging it and labeling it so that people have the appropriate context that this is indeed an example of a violation, and can look at that content in light of that understanding.
Lichfield: If you take President Trump, there was a Cornell study that measured that 38% of COVID misinformation mentions him. They called him the single largest driver of misinformation around COVID. You flagged some of his tweets, but there’s a lot that he puts out that doesn’t quite rise to the strict definition of misinformation, and yet misleads people about the nature of the pandemic. So doesn’t this exception for public officials undermine the whole strategy?
Agrawal: Every public official has access to multiple ways of reaching people. Twitter is one of them. We exist in a large ecosystem. Our approach of labeling content actually allows us to flag content at the source that might potentially harm people, and also to provide people additional context and additional conversation around it. So a lot of these studies, and I’m not familiar with the one you cited, are actually broader than Twitter. And if they are about Twitter, they talk about reach and impressions without talking about people also being exposed to other bits of information around the topic. Now, we don’t get to decide what people choose to believe, but we do get to showcase content and a diversity of points of view on any topic, so that people can make their own determinations.
Lichfield: That sounds a little bit like you’re trying to say, well, it’s not just our fault. It’s everybody’s fault. And therefore there’s not much we can do about it.
Agrawal: I don’t believe I’m saying that. What I’m saying is that the topics of misinformation have always existed in society. We are now a critical part of the fabric of public conversation, and that’s our role in the world. These are not topics we get to extricate ourselves from. These are topics that are relevant today and will remain relevant in five years. I don’t live under the illusion that we can do something that magically makes the misleading information problem go away. We don’t have that kind of power or control. And honestly, I would not want that power or control. But we do have the privilege of listening to people, of having a diverse set of people on our platform expressing a diverse set of points of view on the things that really matter to everyone, and of being able to showcase them with the right context so that society can learn from each other and move forward.
Lichfield: When you talk about letting people see content and draw their own conclusions or come to their own opinions, that’s the kind of language that is associated with, I think, the way that social media platforms traditionally presented themselves: ‘We’re just a neutral space, people come and use us, we don’t try to adjudicate.’ And it seems a little bit at odds with what you were saying earlier about wanting to promote a healthy public conversation, which clearly involves a lot of value judgments about what is healthy. So how are you reconciling those two?
Agrawal: Oh, I’m not saying that we are a neutral party to this whole conversation. As I said, we’re a critical part of the fabric of public conversation. And you wouldn’t want us adjudicating what’s true or what is false in the world. Honestly, we cannot do that globally, in all the countries we work in, across all the cultures and all the nuances that exist. We do, however, have the privilege of having everyone on the platform, of being able to change things to give people more control, and of being able to steer the conversation in a way that is more receptive and allows more voices to be heard, and for all of us to be better informed.
Lichfield: One of the things that some observers say you could do that would make a big difference would be to abolish the trending topics feature, because that is where a lot of misinformation ends up getting surfaced. Things like the QAnon hashtag “save the children,” or there was a conspiracy theory about Hillary Clinton staffers rigging the Iowa caucus. Sometimes things like that make their way into trending topics, and then they have a big influence. What do you think about that?
Agrawal: I don’t know if you saw it, but just this week we made a change to how trends and trending topics work on the platform. And one of the things we did is that we’re now going to show context on everything that trends, so that people are better informed as they see what people are talking about.
Strong: We’re going to take a short break – but first… I want to suggest another show I think you’ll like. Brave New Planet weighs the pros and cons of a wide range of powerful innovations in science and tech. Dr. Eric Lander, who directs the Broad Institute of MIT and Harvard, explores hard questions like:
Lander: Should we alter the Earth’s atmosphere to prevent climate change? And, can truth and democracy survive the impact of deepfakes?
Strong: Brave New Planet is from Pushkin Industries. You can find it wherever you get your podcasts. We’ll be back right after this.
[Advertisement]
Strong: Welcome back to a special episode of In Machines We Trust. This is a conversation between Twitter’s Chief Technology Officer Parag Agrawal and Tech Review’s editor-in-chief Gideon Lichfield. If you want more on this topic, including our analysis, please check out the show notes or visit us at Technology Review dot com.
Lichfield: The election obviously is very close. And I think a lot of people are asking what is going to happen, particularly on election day, as reports start to come in from the polls. There’s worry that some politicians are going to be spreading rumors of violence or vote rigging or other problems, which in turn could spark demonstrations and violence. And so that’s something that all of the social platforms are going to need to react to very quickly, in real time. What will you be doing?
Agrawal: We’ve worked through elections in many countries over the last four years: India, Brazil, large democracies. We learned through each of them, and we’ve been doing work over the years to be better prepared for what’s to come. Last year we made a policy change to ban all political advertising on Twitter, in anticipation of its potential to do harm. And we wanted our attention to be focused not on advertising, but on the public conversation that’s happening organically, to be able to protect it and improve it, especially as it relates to conversations around the elections.
We did a bunch of work on technology to get better at detecting and understanding state-backed bad actors and their attempts to manipulate elections, and we’ve been very transparent about this. We’ve made public releases of hundreds of such operations from over 10 nations, with tens of thousands of accounts each and terabytes of data, which allow people outside the company to analyze it and understand the patterns of manipulation at play. And we’ve made product changes to encourage more consideration and thoughtfulness in how people share content and how people amplify content.
So, we’ve done a bunch of this work in preparation and through learnings along the way. To get to your question about election night: we’ve also strengthened our civic integrity policies to not allow anyone, any candidate across all races, to claim an election win when a winner has not been declared. We also have strict measures in place to avoid incitements of violence. And we have a team ready, which will work 24/7, to put us in an agile state.
That being said, we’ve done a bunch of work to anticipate what could happen, but one thing we know for sure is that what’s likely to happen is not something we’ve exactly anticipated. So what’s going to be important for us on that night and beyond, and even leading up to that time, is to be prepared, to be agile, to respond to the feedback we’re getting on the platform, to respond to the conversation we’re seeing on and off the platform, and to try to do our best to serve the public conversation in this important time in this country.
Lichfield: Someone in the audience asked something that I don’t think you would agree to, which was, they said, should Facebook and Twitter be shut down for three days before the election? But maybe a more modest version of that would be: is there some kind of content that you think should be shut down right before an election?
Agrawal: Just this week, one of the prominent changes that’s worth talking about in some detail is that we prompted people to have more consideration, more thought, when they retweet. So instead of being able to easily just retweet content without additional commentary, we now default people into adding a comment when they retweet. And this is for two reasons: one, to add additional consideration when you retweet and amplify certain content, and two, to have content be shared with more context about what you think about it, so that people understand why you’re sharing it and what the context around the conversation is. We also made the trends change which I described earlier. These are changes which are meant to make the conversation on Twitter more thoughtful.
That being said, Twitter is going to be a very, very powerful tool during the time of elections for people to understand what’s happening, for people to get really important information. We have labels on all candidates. We have information on the platform about how they can vote. We have real-time feedback coming from people all over the country, telling people what’s happening on the ground. And all of this is important information for everyone in this country to be aware of at that time. It’s a moment where each of us is looking for information, and our platform serves a particularly important role on that day.
Lichfield: You’re caught in a bit of a hard place, as somebody in the audience is also pointing out: you’re trying to combat misinformation, but you also want to protect free speech as a core value, and also, in the U.S., as the First Amendment. How do you balance those two?
Agrawal: Our role is not to be bound by the First Amendment, but our role is to serve a healthy public conversation, and our moves are reflective of things that we believe lead to a healthier public conversation. The kinds of things that we do about this focus less on thinking about free speech and more on thinking about how the times have changed. One of the changes today is that speech is easy on the internet. Most people can speak. Where our role is particularly emphasized is in who can be heard. The scarce commodity today is attention. There’s a lot of content out there, a lot of tweets out there; not all of it gets attention, some subset of it gets attention. And so increasingly our role is moving towards how we recommend content, and that is a struggle that we’re working through: how we make sure the recommendation systems that we’re building, and how we direct people’s attention, lead to a healthy public conversation that is most participatory.
Lichfield: Well, we are out of time, but thank you for a really interesting insight into how you think about these very complicated issues.
Agrawal: Thank you, Gideon, for having me.
[Music]
Strong: If you’d like to hear our newsroom’s analysis of this topic and the election… I’ve dropped a link in our show notes. I hope you’ll check it out. This episode from EmTech was produced by me and by Emma Cillekens, with special thanks to Brian Bryson and Benji Rosen. We’re edited by Michael Reilly and Gideon Lichfield. As always, thanks for listening. I’m Jennifer Strong.
[TR ID]