Air Date 11/9/2021
[00:00:00] JAY TOMLINSON - HOST, BEST OF THE LEFT: Welcome to this episode of the award-winning Best of the Left Podcast, in which we shall take a look at the Facebook business model that is driving hate, lies, violence, extremism, and the breakdown of democracy around the world, as well as some of the regulatory ideas geared toward reining it in and mitigating the damage.
Clips today are from Democracy Now!, Your Undivided Attention, the Brian Lehrer Show, TyskySour, with additional members-only clips from Your Undivided Attention and On the Media.
The Facebook Papers: Docs Reveal Tech Giant's Complicity in Hate, Lies & Violence Around the World Part 1 - Democracy Now! - Air Date 10-26-21
[00:00:32] AMY GOODMAN - HOST, DEMOCRACY NOW!: Thousands of pages of internal documents leaked by former Facebook product manager turned whistleblower Frances Haugen are now the basis of a damning series of reports called the Facebook Papers being published this week. They show how the company’s choices prioritized profits over safety and how it tried to hide its own research from investors and the public. A new story published this morning by the Associated Press finds the Facebook documents show the platform ignored some of its own researchers’ suggestions for addressing vaccine misinformation.
Facebook Chief Executive Mark Zuckerberg pushed back Monday during an earnings call with investors, calling the published reports “coordinated efforts to selectively use leaked documents to paint a false picture of our company,” but fallout from the Facebook Papers continues to generate political support for increased regulation. After testifying before Congress earlier this month, Haugen testified on Monday for more than two hours before the British Parliament, as the United Kingdom and the European Union are both planning to introduce new digital and consumer protection measures. She spoke about the social harm generated when Facebook’s platform is used to spread hate speech and incite violence without adequate content moderation in local languages, such as in Ethiopia, which is now engulfed by a civil war.
[00:01:54] FRANCES HAUGEN: I think it is a grave danger to democracy and societies around the world to omit societal harm. A core part of why I came forward was I looked at the consequences of choices Facebook was making, and I looked at things like the Global South, and I believe situations like Ethiopia are just part of the opening chapters of a novel that is going to be horrific to read. We have to care about societal harm, not just for the Global South but for our own societies, because, like I said before, when an oil spill happens, it doesn’t make it harder for us to regulate oil companies, but right now Facebook is closing the door on us being able to act. We have a slight window of time to regain people's control over AI. We have to take advantage of this moment.
[00:02:37] AMY GOODMAN - HOST, DEMOCRACY NOW!: Frances Haugen’s testimony comes as the news outlet The Verge reports Facebook plans to change its company name this week to reflect its transition from social media company to being a, “metaverse” company. Facebook has already announced plans to hire thousands of engineers to work in Europe on the metaverse. In the coming weeks, Haugen is scheduled to meet with officials in France, Germany and the European Union.
For more, we’re joined by Ramesh Srinivasan, professor of information studies at the University of California, Los Angeles, UCLA, where he also directs the Digital Cultures Lab. Welcome back to Democracy Now! It’s great to have you with us, Professor. Can you talk about the significance of what has been released so far in the Facebook Papers?
[00:03:25] RAMESH SRINIVASAN: It’s great to be with you, Amy, and great to join you this morning. I think that what Frances Haugen has done is blow the whistle on Facebook’s complicity and its internal knowledge of a number of problematic issues that many of us have been alleging they were engaging in for several years at this point. What Facebook has essentially done, and she has exposed that they were aware of, is play with our emotions, play with our psychologies, play with our anxieties. Those are the raw materials that fuel Facebook’s attempts to be a digital empire. And more generally, this is an example of how this new form of digital capitalism, which I believe Facebook is trailblazing, is one that is playing with our intimate emotions on every single level.
These revelations are extremely important. Right now the timing is very, very important for action to be taken to rein Facebook in, but also, more generally, to understand that this is the general face of big tech that we’re seeing more and more exposed in front of us.
[00:04:32] JUAN GONZALEZ - CO-HOST, DEMOCRACY NOW!: And, Professor, when Frances Haugen talks about greater people control over AI, what does that mean? What would that look like, and especially in view of the other issue that she raised, which is the disparate impact of these social media platforms on the Global South, on less developed countries, where Facebook is pouring far less resources into moderating its content?
[00:04:58] RAMESH SRINIVASAN: Yeah, both are extremely important questions, Juan. I mean, first, to the point about people’s relationship to AI, we tend to talk about AI these days as if it’s some sort of leviathan, but what we’re actually talking about when we’re discussing AI are the mechanisms by which various types of technology companies— mind you, which are the wealthiest companies in the history of the world, all of which are leveraging our lives, our public lives, our lives as citizens, even the internet that we paid for—it’s basically the mechanism by which they’re constantly surveilling us, they’re constantly transacting our attention and our data, and they’re playing with our psychological systems—our psychology, actually—in multiple ways. They’re getting us into fight-or-flight mode to trigger and activate our attention and our arousal, to keep us locked in, and to therefore behaviorally manipulate us. They’re also feeding us into groups that are increasingly hateful and [divisive]. We saw that with January 6th, for example, and the mechanism, the means by which they’re able to do that is by playing again with our emotions, taking us to these groups so we can recognize, falsely, that we are not alone in having particular viewpoints, that there are others with more hardcore viewpoints.
So, now if you look at those mechanisms of manipulation, of behavioral manipulation, which are actually quite common across big tech, but very specifically egregious in the case of Facebook, now let’s look at that when it comes to the Global South. The so-called Global South—we’re talking about the African continent, South America, South Asia, Southeast Asia and so on—represent the vast majority of Facebook’s users, because here we’re talking about users not just of Facebook the technology, but also platforms like WhatsApp and Instagram as well. Now, in many of these countries, there is not necessarily a strong independent media or a strong journalistic press to rebut or to even actually provide some sort of countervisibility to all the hate and lies that Facebook prioritizes, because we know—and Frances Haugen has confirmed this—that Facebook’s algorithm prioritizes divisive, polarizing and hateful content because of algorithmic and computational predictions that it will capture our attention, it will lock in our sympathetic nervous system.
So, if we look at these other countries in the world, we can see consistently, including right now at this very moment, when it comes to the Tigray people of Ethiopia, how hateful content is being prioritized. Facebook’s algorithms tend to work very well with demagogue leaders who spew hateful content, and they tend to rebut the voices, as has always been the case over the last several years, of minorities, of Indigenous peoples, of Muslims and so on. So this has real-life effects in terms of fomenting actual violence against people who are the most vulnerable in our world. And so, this represents a profound, not negligence by Facebook, but they recognize they can get away with it. Simply hire a few exploited content moderators in places like the Philippines, who are encountering PTSD, and then, basically let the game play out in these countries in the world, which represent the vast majority, again, of Facebook’s users.
[00:08:21] JUAN GONZALEZ - CO-HOST, DEMOCRACY NOW!: And what is the impact of all of this on the democratic process? Because, in reality, are we facing the possibility that, through the algorithms of companies like Facebook, we are actually subverting the very process of people being able to democratically choose their leadership and government policies, as a result of their being inundated with misinformation and hate?
[00:08:50] RAMESH SRINIVASAN: That is absolutely what’s occurring. Imagine if billions of us—and we’re talking about approximately 3.5 billion people around the world with some access to and engagement with Facebook technologies—recognize that in many parts of the world Facebook is actually the internet, and WhatsApp is actually the cellphone network. What this means is we are turning to Facebook to be our media company, our gateway to the world, and our space of democratic communication, when it is anything but that. Imagine 3.5 billion people, all with their own screens, that are locking them into a world that is not the public sphere in any sense at all, but is actually based upon polarization, amplification of hate, and, more than anything, the sweet spot, the oil of the new economy, which is people’s addiction and their attention. And there’s nothing that gets our attention more than being put into fight or flight, getting outraged, getting angry, getting just agonized. And that is exactly what Facebook is doing, because we’re all being presented with completely different realities algorithmically on our screens, and those realities are not based on some open sense of dialogue or compassion or tolerance, but on algorithmic extremism, based on all of this data that’s being gathered about us, none of which we have any clue about.
Facebook Whistleblower Frances Haugen in Conversation Part 1 - Your Undivided Attention - Air Date 10-18-21
[00:10:14] FRANCES HAUGEN: Lots of dynamics on Facebook are driven by extreme users. And by extreme, I don't mean that their positions are extreme. I mean the intensity of their usage of the platform is. For example, someone who is a 99th percentile user in terms of the number of pieces of content they produce -- maybe that's posts, or maybe it's comments -- might produce 10 times as many pieces of content as a 95th percentile user. And a 99.99th percentile user might produce a hundred times as much content as a 99th percentile user, let alone a 95th percentile user.
And so you can imagine a system, or a change, where you just came in and said, okay, we're going to figure out what the 99th percentile is for number of comments. Let's cap the number of comments you can give in a day at that number. So that might be 30 comments a day. I don't know the exact number. But when they went and did that analysis in the case of COVID, even if you ignore the content of the comments, you just say, we're going to cap people at whatever the 99th percentile is right now, that ends up having a huge impact on COVID misinformation, because a very, very small number of people are hyper-sharers or hyper-commenters. And there's a lot of power in just saying, hey, let's make space for more people to talk. And it turns out on average people are pretty cool.
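As an editorial aside, the percentile cap Haugen describes is simple enough to sketch. The numbers below are entirely synthetic (a hypothetical heavy-tailed distribution of daily comment counts), not Facebook data; the point is just that capping everyone at the 99th percentile removes a disproportionate share of total volume because the tail is so extreme:

```python
import numpy as np

# Hypothetical daily comment counts per user (heavy-tailed, like real usage).
rng = np.random.default_rng(0)
comments = rng.zipf(a=2.0, size=100_000)

# Cap everyone at the 99th-percentile comment count.
cap = int(np.percentile(comments, 99))
capped = np.minimum(comments, cap)

share_removed = 1 - capped.sum() / comments.sum()
print(f"99th-percentile cap: {cap} comments/day")
print(f"Share of total comment volume removed: {share_removed:.1%}")
```

Note that the cap is content-blind, which is exactly Haugen's point: it throttles hyper-commenters without anyone having to judge what they are saying.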
[00:11:33] TRISTAN HARRIS - HOST, YOUR UNDIVIDED ATTENTION: Yeah. Renée DiResta, one of our previous guests, talks about this as "the asymmetry of passion." But what you're talking about is the hyper-asymmetry of passion, where you have a small number of people who are posting, you know, all over the place. I think in some of your work you talked about the invites, and that there are certain people who invite many, many more people to groups, and that that's also a different issue. Do you want to talk about some of those other asymmetries? The way I think of it is, you know, in complexity theory, the notion of scale: there are certain things that are at greater scales than others, and we can pay attention to the outliers. And how do we control some of the extreme usage that's more dangerous?
[00:12:08] FRANCES HAUGEN: The example that you gave there around invites. I've discussed before the idea that Facebook should have to publish what all of its rate limits are. A rate limit is, let's say we sat down and said, how many people should someone be allowed to invite to a group in any given day or any given week? How many people should they be allowed to invite overall in that same time period? Because the reality is that some people are trying to weaponize the platform, and most people aren't. You can imagine coming in and saying, okay, you can invite a thousand people a week to groups.
The current limits are set so high that the documents show that there was a person they found who had invited 300,000 people to QAnon-related groups. And one of the things that's scary about that is that Facebook has a feature in place, such that if you are invited to a group, Facebook will inject content from that group into your feed for, I think it's 30 days. And if you engage with any of it, they'll consider that like a ghost follow. So when that person went and invited 300,000 people to QAnon groups, now, all those people started having their feeds flooded with QAnon content.
[00:13:10] TRISTAN HARRIS - HOST, YOUR UNDIVIDED ATTENTION: Wait, so this is important. You're saying that if someone gets invited to a group, they don't even accept the invite. They're not saying yes, I would like to join your QAnon group. You're saying suddenly by just the invitation alone, their feed gets flooded with QAnon posts. And then if they engage at all, it kind of auto joins them in some way?
[00:13:26] FRANCES HAUGEN: Yes. It's this question of, you know, Facebook knows that groups are a valuable conduit for people to connect on Facebook, and that sometimes people get invited to groups and they either don't notice the invitation or maybe they don't really understand that they have to accept it. And so Facebook's idea is that, instead of waiting for someone to accept a group invitation, you might inject content from that group into their feed for a period of time. And if they engage with that, then you should assume that they want to continue to receive that content.
This becomes problematic when people get invited to really large groups. Let's say you have a group that has half a million members and produces 500 pieces of content a day. If you have engagement-based ranking, an algorithm that prioritizes divisive, polarizing, hateful content, and there are 500 posts a day going into that group, you might be in a situation where Facebook has to figure out which two or three posts of those 500 should go into your newsfeed.
And if those biases exist in your engagement-based ranking, it means you're going to keep having this kind of forcing function that gives mass distribution to extreme content.
When you combine that with the fact that Facebook will start auto injecting content if you get invited, it's a perfect storm. Because it means that someone who's really motivated, they have that asymmetrical passion, can add huge numbers of people to their group every day, and then be able to force a stream of extreme content into their feeds.
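The published rate limits Haugen proposes could be enforced with a standard sliding-window limiter. The sketch below is generic, not Facebook's actual implementation, and the 1,000-invites-per-week figure is just her illustrative number:

```python
from collections import defaultdict, deque
import time

class InviteRateLimiter:
    """Per-user sliding-window limit: at most `limit` invites per `window` seconds."""

    def __init__(self, limit=1000, window=7 * 24 * 3600):
        self.limit = limit
        self.window = window
        self.sent = defaultdict(deque)  # user_id -> timestamps of accepted invites

    def allow(self, user_id, now=None):
        now = time.time() if now is None else now
        q = self.sent[user_id]
        # Drop invites that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False  # over the cap: reject this invite
        q.append(now)
        return True

# Tiny limits so the behavior is visible: 3 invites per 60 seconds.
limiter = InviteRateLimiter(limit=3, window=60)
results = [limiter.allow("spammer", now=t) for t in range(5)]
print(results)  # the fourth and fifth invites are rejected
```

A cap like this would have stopped the documented case of one account inviting 300,000 people to QAnon-related groups long before it reached that scale.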
[00:14:53] TRISTAN HARRIS - HOST, YOUR UNDIVIDED ATTENTION: And if I recall correctly, wasn't the reason that Facebook leaned more on Facebook groups because regular user engagement was going down? That regular users are posting less?
[00:15:03] FRANCES HAUGEN: Facebook noticed that people who were members of groups had a higher retention rate on the platform. And part of that is because they get exposed to more content. Groups are generally wonderful. People love coming together. And to be clear, I'm not saying that groups are bad. I'm just saying that the way you would design groups without algorithmic feeds, right, without having computers choose what to focus on, is at a much more human scale, right? So you'd have things that looked like Discord servers. Things where people have a single conversation, and if it gets too noisy, they start opening smaller rooms, they focus on other topics. And so I think there's a real advantage to having more of a human-oriented design strategy instead of having an AI strategy.
Most people are not aware of how Facebook builds the systems that pick out the content that goes into your newsfeed. Facebook goes and takes the actions of millions and millions and millions of people, and says, okay, we have this information about what people were interested in in the past, we have information about the content that we could show them, we're going to try to make these predictions and see how accurate we are. And the systems are "trained" by looking at those millions and millions of people's actions.
But the reality is that not all of those people interact with Facebook the same amount. So if the behavior of someone who looks at a thousand posts every day is different from the behavior of someone who looks at, say, 50 a day, the person who looks at a thousand a day has 20 times the impact on the algorithm as the person who looks at 50 stories a day.
And what's interesting is Facebook knows that some of the people who consume the most misinformation have gone through life experiences recently that make them more vulnerable. Maybe they were recently widowed. Maybe they were recently divorced. Maybe they moved to a new city. Maybe they're getting depressed. Those people end up influencing the algorithm to an outsized degree compared to the average user.
And you know, there's a lot of these weird feedback cycles where as people get more depressed, they might more compulsively use the platform. The idea that their actions could then feed back and influence an average user of the platform is crazy. And you could imagine doing things like coming in and capping how much impact any given user could contribute to the overall ranking for everyone else. And that might also help rein in some of these impacts.
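Haugen's last suggestion, capping how much any single user's behavior can feed the ranking model, can be illustrated with toy numbers. Everything here is hypothetical: one hyper-user producing 20x the events of each typical user, with a much higher rate of engagement with "extreme" content, mirroring the disparity she describes:

```python
import numpy as np

# Toy engagement log: (user_id, engaged_with_extreme_content).
rng = np.random.default_rng(1)
events = [(0, rng.random() < 0.9) for _ in range(1000)]       # hyper-user: ~90% extreme
for uid in range(1, 21):
    events += [(uid, rng.random() < 0.1) for _ in range(50)]  # typical users: ~10%

def extreme_rate(events, per_user_cap=None):
    """Fraction of training events that are 'extreme', optionally keeping
    at most `per_user_cap` events from any single user."""
    counts = {}
    kept = []
    for uid, extreme in events:
        counts[uid] = counts.get(uid, 0) + 1
        if per_user_cap is None or counts[uid] <= per_user_cap:
            kept.append(extreme)
    return sum(kept) / len(kept)

print(f"Uncapped training signal: {extreme_rate(events):.0%} extreme")
print(f"Capped at 50 events/user: {extreme_rate(events, per_user_cap=50):.0%} extreme")
```

With the cap, the hyper-user counts no more than anyone else, and the training signal shifts back toward what the average user actually does.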
Hate and Lies On Facebook They're Even Worse In India Part 1 - Brian Lehrer A Daily Politics Podcast - Air Date 10-27-21
[00:17:31] BRIAN LEHRER - HOST, BRIAN LEHRER SHOW: Sheera, thanks for coming on today, welcome back to WNYC.
[00:17:34] SHEERA FRENKEL: Thank you so much for having me.
[00:17:36] BRIAN LEHRER - HOST, BRIAN LEHRER SHOW: And I want to walk you through some of the stories that you tell in these articles. I think the listeners will find it really interesting to go point by point through a couple of them. So let's begin with your article Internal alarm, public shrugs: Facebook’s employees dissect its election role. It begins with a Facebook employee who opened an experimental account in 2019, and she quickly discovered something alarming. Want to start with that story?
[00:18:04] SHEERA FRENKEL: Sure. So this Facebook employee is a long-term researcher who likes to run these kinds of experiments: she joins Facebook as a new user and then sees where Facebook's algorithms take her. In this case, she wanted to be a new user who was a Christian woman based somewhere in the United States, and that was essentially all the information she gave Facebook. And what she found was that within just a matter of weeks Facebook's algorithms were pushing her towards extreme content. And by that I mean fringe conspiracy theories like QAnon, militia groups, the type of stuff that Facebook says it bans and doesn't want to push people towards. That's what it was giving this person who had just joined the platform.
[00:18:44] BRIAN LEHRER - HOST, BRIAN LEHRER SHOW: And then you go on to another Facebook employee raising an alarm on November 5th of last year; that first one was 2019. November 5th of last year, three days after Election Day, what did that employee see?
[00:18:58] SHEERA FRENKEL: So what they start to see is that the amount of election misinformation is just overwhelming, and that the safeguards the company said it put in place, the "break glass" measures, as Facebook was calling them, were just not working. They weren't effective. And I think it was interesting for us as reporters to see that here we are, in the middle of incredibly important elections in the United States, and this employee is sounding the alarm very early on to say, something's going wrong here. Our measures are not working.
[00:19:27] BRIAN LEHRER - HOST, BRIAN LEHRER SHOW: Let's keep going. Just four days later, November 9th of last year, yet another Facebook employee, one of their data scientists, reported up the chain that an alarming 10% of all US views of political material on Facebook were posts that claimed the election results were fraudulent. So what did Facebook do with any of that information?
[00:19:50] SHEERA FRENKEL: What did they do? They didn't do much outside of their plan. They did enact some additional break glass measures that they had prepared, and there was a lot of brow furrowing and employees asking one another why their company was being used in that way, but no, you don't see them go into crisis mode. You don't see them taking some of the more extreme steps that those same researchers were proposing to them.
[00:20:15] BRIAN LEHRER - HOST, BRIAN LEHRER SHOW: One more story before we dig into what Facebook could have done, according to its employees. From your article In India, Facebook Grapples With an Amplified Version of Its Problems. That article also starts with a Facebook researcher setting up an account just to see what would happen, but this time in Kerala, India, and with a really simple and interesting game plan: follow all the recommendations generated by Facebook's algorithms and see what happened. So what happened?
[00:20:45] SHEERA FRENKEL: And this is actually, we don't note it in our articles, but this is the same researcher. The same researcher who set up the profile of a young American woman goes and runs the same experiment in India. She sets up this account, and again, with very little information, she just says that she's based in Kerala, India, and she wants to see what Facebook tells her to do. What groups does it tell her to join? What pages does it tell her to follow? And within a matter of weeks, she is inundated with violent content, hate speech, misinformation.
This is a time where India and Pakistan are actually—tensions are running very, very high. There's been a suicide attack along the border there, and what she finds is just by following the recommendations that Facebook surfaces for her, she's driven to believe all these conspiracies about the suicide bombing that happened along the India-Pakistan border, and she could see how people on the platform would be fed incredibly divisive and polarizing content if they were just brand new to Facebook and joining the platform, looking for information.
[00:21:48] BRIAN LEHRER - HOST, BRIAN LEHRER SHOW: Sheera, you note that India is Facebook's largest market, and I know India has a billion people, second largest country in the world by population, but some people might still be surprised to hear that Facebook is so prevalent there. Why India?
[00:22:02] SHEERA FRENKEL: Well, Facebook was really aggressive about entering the market in India. They made these deals with local telecom carriers, which made it pretty much free to get Facebook. If you were getting a phone, you would have a version of the Facebook app on your phone, which was great for messaging friends, for doing business. It became incredibly prevalent because it was so inexpensive. And this is a country where you often have to pay for data on your phone, and it can be very prohibitive for the average person to have to pay for all that data. So if you were given a cheap alternative, you can see how people were drawn to it.
Facebook also did a lot of marketing in India. It really wanted to be in this country. And I think it's interesting now looking and reading their internal documents about how much they struggled to secure the elections in India, how complicated they found India's voting system. This is a voting process that lasts for over a month. It is a country which has over 22 official languages. This is not an easy environment to regulate content in or to monitor content in, and yet Facebook pushed very aggressively into that market and then found itself struggling and only able to monitor content in I think 3 of the 22 languages, or 4 rather, if you include English.
And so you see a company in love with the idea of doing business in India, in love with the idea of this billion-person market that it can potentially capture, but then, at least as these documents show, not really able to keep up with the sheer volume of what it's taking on.
[00:23:29] BRIAN LEHRER - HOST, BRIAN LEHRER SHOW: As we debate the extent of Facebook's role in disinformation and allowing disinformation in this country, and allowing this country to be torn apart over falsehoods about the election and vaccines and other things, how is this affecting politics or other aspects of life in India that you were just describing over there?
[00:23:53] SHEERA FRENKEL: We see time and again that the kind of conspiracies that we worry about here in the United States are amplified tenfold when they reach other countries. And when I say that, I'm really thinking specifically about the pandemic, about the start of the pandemic and what COVID was and how it spread, and then all the early narratives about the vaccine that anti-vaccine activists wanted to spread. I was doing some research for this India story that I reported for the [New York] Times, and I was just astounded at the anti-vaccine content that I found being spread in India, to Facebook's user base there.
And I think a big reason for that is that Facebook, and this is another startling figure that came out of these documents, dedicates 87% of its misinformation budget to combating misinformation here in the United States. And by that, that's an umbrella that covers hate speech and conspiracies and everything else. So 87% of that budget goes to the United States. 13% goes to the entire rest of the world. And let's put that figure in context. Facebook has more people using its platform in India than in the United States. Just one country. Just one country in the rest of the world. So you can see how here in America they might struggle, but they are somewhat effective in taking down conspiracies about the vaccine, allegations that the vaccine is ineffective and all the rest of that stuff. How can they possibly be effective in the rest of the world when they're spending just 13% of their budget on it?
Zuckerberg's Metaverse - TyskySour - Air Date 10-29-21
[00:25:16] MICHAEL WALKER - HOST, TYSKYSOUR: Let's now look at the political context for this rebrand. I mentioned in my intro that Facebook has been subject to leaks showing it knowingly endangered the mental health of users. The source of that information was former Facebook product manager turned whistleblower Frances Haugen. Earlier this month, she gave testimony to the US Senate.
[00:25:35] FRANCES HAUGEN: I believe Facebook's products harm children, stoke division, and weaken our democracy. The company's leadership knows how to make Facebook and Instagram safer but won't make the necessary changes because they have put their astronomical profits before people. Almost no one outside of Facebook knows what happens inside of Facebook. The company intentionally hides vital information from the US government and from governments around the world.
The algorithms are very smart in the sense that they latch on to things that people want to continue to engage with. And unfortunately, in the case of teen girls and things like self-harm, they develop these feedback cycles where children are using Instagram to self-soothe, but then are exposed to more and more content that makes them hate themselves. The choices being made inside of Facebook are disastrous for our children, for our public safety, for our privacy, and for our democracy.
[00:26:27] MICHAEL WALKER - HOST, TYSKYSOUR: That testimony described how damaging being attached to a Facebook-controlled screen can be, both for people's mental health and for a cohesive society. We might then wonder whether upgrading that screen to an entire metaverse is a good idea. Luckily for those who do have their doubts, former Deputy Prime Minister Nick Clegg was on hand to provide reassurance.
[00:26:49] MARK ZUCKERBERG: Hey Nick.
[00:26:49] NICK CLEGG: Hey Mark. I hope I'm not interrupting. You got a sec?

[00:26:53] MARK ZUCKERBERG: I think Oppy's still in the virtual forest, but I always have time for you. What's going on?
[00:26:56] NICK CLEGG: Look, I just love the presentation so far, it's such visionary stuff, but as you mentioned early on with all big technological advances, there are inevitably going to be all sorts of challenges and uncertainties. And I know you've talked about this a bit already, but people want to know how are we going to do all this in a responsible way, and especially that we play our part in helping to keep people safe and protect their privacy online.
[00:27:21] MARK ZUCKERBERG: Right. This is incredibly important.
[00:27:23] NICK CLEGG: The way I look at it is that in the past, the speed that new technologies emerge sometimes left policymakers and regulators playing catch up. So on the one hand companies get accused of charging ahead too quickly, and on the other tech people feel that progress can't afford to wait for the slower pace of regulation. And I really think that it doesn't have to be the case this time round, because we have years until the metaverse we envision is fully realized. So this is the start of the journey, not the end.
[00:27:53] MARK ZUCKERBERG: Like I said earlier, interoperability, open standards, privacy and safety need to be built into the metaverse from day one. And with all the novel technologies that are being developed, everyone who's building for the metaverse should be focused on building responsibly from the beginning. This is one of the lessons that I've internalized from the last five years. It's that you really want to emphasize these principles from the start.
[00:28:17] MICHAEL WALKER - HOST, TYSKYSOUR: That was one of the most stilted conversations ever committed to video. All of this just makes me feel like I do not want to enter the metaverse, but that's just me. Is this just going to be all the bad bits of Facebook, intensified because you're actually in that world?
[00:28:32] AARON BASTANI: I think the great counterpoint to Facebook is probably Uber, who had an approach of: we don't care about the regulators. Fuck you. We're going to disrupt. We're going to break things up. There's nothing you can do about it. And actually elected governments around the world, and cities and regions and nation states, have said, actually, no, we can. And you can see there's clearly been a change of tack from Facebook after that, and they're engaging with the criticisms of the metaverse before it's even arrived.
Yes, there are huge problems with Facebook, there are huge problems with social media companies, which are now being increasingly well documented. We knew this for a long time, Michael, and the reason why nothing happened isn't just that the companies themselves are bad; it's that politicians decided not to regulate them properly. Politicians decided the market always knows best, that technology and innovation can't be stopped by a big state, so it's better if we just get out of the way. And the idea that there could be massive downsides to doing that was generally ignored, I think, until the last few years.
And a good comparison here, I think, is wind turbines: onshore wind, which is much cheaper than offshore wind. You don't see many onshore wind turbines because politicians and certain people in the media made it a political hot potato. So they didn't really happen. They've not really been built. And then when it comes to things like Facebook and Instagram, "we can't get involved." So I think, yes, of course you want to blame Zuckerberg and Facebook to an extent, and of course the liberals, they love doing that, but I think that takes a little bit of heat and accountability and scrutiny away from people who also matter, Michael, and that's elected politicians. That's state regulators.
We should have been having a proper conversation around this and Facebook five or ten years ago. We didn't, because the political class was in bed with them, particularly in this country. So of course I don't like Nick Clegg, I don't like Mark Zuckerberg, but it also reflects a complete failure of regulation, which was like this capitalist realism thing, which has only really disintegrated since 2016: China's ascendant, Trump wins in 2016, Brexit. I think politicians and the political elite have taken a step back and said, actually, maybe we do sometimes need to do stuff. Maybe the market isn't always right, because that means people like Donald Trump become president and me and my mates don't get to run things anymore.
The Facebook Papers Docs Reveal Tech Giants Complicity in Hate, Lies & Violence Around the World Part 2 - Democracy Now! - Air Date 10-26-21
[00:30:43] AMY GOODMAN - HOST, DEMOCRACY NOW!: So, let's talk about examples. You're talking about the privatization of the global commons, because this is where so many people, even with the digital divide, communicate. The U.S. is 9% of Facebook's global population of users, so about 90% are outside the United States, yet 90% of the resources going into dealing with the hate, Facebook is putting into the United States. And even here, look at what happens. Didn't Facebook set up a fake profile of a young woman who said she supported Trump? Immediately, and this is Facebook's own test, a fictional person, she was inundated with requests to join QAnon, with hate, and then we see what happened on January 6th. This is where Facebook has poured in all of the so-called protections.
Talk about its relation to January 6th, and then talk globally, where they’re almost putting nothing in other languages, for example, in Vietnam.
[00:31:54] RAMESH SRINIVASAN: That's such an important example, Amy. Thank you for bringing it up. Basically, our mechanisms of engaging with the wider world, even in our country, as you point out, even here in the United States, are all based upon routing us down these opaque algorithmic rabbit holes that make us more and more extreme, where content that is more extreme is often suggested to us, often, as you alluded to, through the medium of a Facebook group. So, any sort of group that has hateful speech associated with it, that expresses outrage, that will activate our emotions, because those are the raw materials of digital capitalism: our emotions and our anxieties and our feelings. That's exactly where they want to take us, because then that basically suggests to us, Amy, that you're not alone. There are other people with viewpoints not necessarily even just like your own, but even more amplified, more radical. So, that ends up, as we saw, generating great amounts of violence and this brutal insurrection.
So, Facebook likes to claim that they are just supporting free speech, that there are some bad actors gaming their platform, when in fact their platform is designed for bad actors to leverage, because they have a highly symbiotic relationship. As we've spoken about in the past here on Democracy Now!, the former president was perfectly symbiotic in his relationship with Facebook. They were very, very good bedfellows. And we see that with demagogues around the world. So, now when we want to talk about the Global South, we can recognize that Facebook can basically say, "Hey, you know, we have different languages on our platform. Sure, we are talking to a few people in countries like Ethiopia. We talk to a few people in Myanmar, and never mind that they were basically responsible in many ways for fomenting a genocide against the Rohingya. You know, we are talking to people in the Modi government in India, which has said many demagogic things against Muslim minorities," and the Facebook Papers that Frances Haugen has brought out have revealed this as well. This actually works very well for them.
Here in the United States, where we have strong enough or strongish independent media, thanks to Democracy Now!, we are able to challenge Facebook in the public sphere to some extent. But in other countries in the world, that doesn’t necessarily exist to the same extent. And Facebook can basically say, “Hey, we can just leverage the lives, the daily lives, 24/7/365, of billions of you and basically do whatever we want.” And they don’t really have to do much about it. They don’t have to actually take really any real steps to resolve the harms that they’re causing to many people around the world.
[00:34:40] JUAN GONZALEZ - CO-HOST, DEMOCRACY NOW!: And, Professor, you mentioned India, the second-largest population in the world, and the Facebook Papers revealed that some of Facebook's managers had done a test of an average young adult in India, whose account quickly became flooded with Hindu nationalist propaganda and anti-Muslim hate. And one of the staffers who was monitoring this account said, quote, "I've seen more images of dead people in the past 3 weeks than I've seen in my entire life total." Could you talk about the impact of the lack of accountability of Facebook in terms of what its platform is doing in a country like India?
[00:35:21] RAMESH SRINIVASAN: That hits home personally for me as someone of South Indian descent. I have also been part of WhatsApp groups with various friends or relatives in India that tend to spiral, starting from a stereotype, for example, of a Muslim minority or an Indigenous minority and then quickly amplifying. And I think, in a country like India, which is, in a sense, the world's largest democracy, we see major threats to that democracy aligned with the Hindu nationalist government and also with the growth of Hindu nationalism, which, of course, as has always been the case, vilifies and goes after the Muslim minority, which is just absurd and just makes my stomach churn.
And what you just laid out, Juan, is exactly how the Facebook playbook works. You create an account. You browse around, I mean, we don't really browse anymore these days, but you befriend various people, you look at various pages, and you quickly get suggested content that takes you down extremist rabbit holes, because, we all know, sadly, when we see pictures of dead people, when we see car crashes, when we see fires, we pay attention, because it activates our sympathetic nervous system. Facebook recognizes the easiest way it can lock people in, so it can monetize and manipulate them, is by feeding them that type of content. And that's what we've seen in India.
[00:36:46] AMY GOODMAN - HOST, DEMOCRACY NOW!: So, finally, we just have 30 seconds, Ramesh, but I wanted to ask you, you have Haugen now testifying before the British Parliament, going through Europe, very significant because they’re much more likely to regulate, which could be a model for the United States. Talk about the significance of this.
[00:37:02] RAMESH SRINIVASAN: I think it's fantastic that the European Union and the UK want to take aggressive measures. Here in the United States, we have Section 230, which shields companies like Facebook from some of the liability for spreading misinformation and hate. We need to actually shift to think about a rights-based framework. What are the rights we all have as people, as citizens, as peoples who are part of a democracy, in a digital world? We need to deal with the algorithms associated with Facebook. There should be audit and oversight and disclosure. And more than anything, people need their privacy to be protected, because we know, again and again, as we've discussed, vulnerable people get harmed. Real violence occurs against vulnerable peoples, and our society polarizes and splinters, if we don't do anything about this.
[00:37:46] AMY GOODMAN - HOST, DEMOCRACY NOW!: Should Facebook, or Meta, perhaps that's what it's going to be called after the announcement on Thursday, simply be broken up, like Big Oil a century ago?
[00:37:56] RAMESH SRINIVASAN: I think we may want to consider that Facebook is a public utility as well, and we should regulate it accordingly. But more than anything, we need to force them to give up power and put power in the hands of independent journalists, people in the human rights space, and, more than anything, treat all of us, who they are using, constantly using, as people who have sovereignty and rights. And they owe us true disclosure around what they know about us, and we should be opted out of surveillance capitalism as a default.
Hate and Lies On Facebook: They're Even Worse In India Part 2 - Brian Lehrer: A Daily Politics Podcast - Air Date 10-27-21
[00:38:24] BRIAN LEHRER - HOST, BRIAN LEHRER SHOW: Your articles frame an uncertainty, I think it's fair to say, over: whether to think Facebook tried to contain disinformation, at least about the election in this country, but it's really hard to do when you have 3 billion individual users; versus, Facebook didn't really try to contain the disinformation, because it was profiting from the extra user engagement that all the election activity was bringing, and user engagement is at the heart of their business model.
But you say the documents that you've obtained show that Facebook's own employees believe that the company could have done more. So what do these employees believe it could have done?
[00:39:05] SHEERA FRENKEL: Right. So these employees that we look at in these documents are an interesting group, because a lot of them work for Facebook's Civic Integrity Unit. This is their specialty. This is what they were hired to do. And, of those, quite a few have a background in research; some of them are trained as data scientists.
So they're looking at this purely from the point of view of: given the tools Facebook has at its disposal, what could it do to make things better? And I think what they show, time and time again, is, you know, "Here's a step we can take."
For instance, on vaccine misinformation, they find that if you disable all comments on posts related to the vaccine, you'll reduce vaccine misinformation, because Facebook does not seem to have the ability to adequately monitor content in its comments.
And yet, they're told by executives, "Well, no, we can't do that because the moment we do that, we get less engagement on Facebook. People are less interested in spending time on our platform, fewer eyeballs mean, you know, less ad revenue; it means less data on people."
And so, the company executives, who are thinking about this as a business, are coming at it from one point of view, while this Civic Integrity team is really just thinking about, I think, the health of democracy and the health of society.
I think one thing that was telling for me, when I was working on the book, was that one of Mark Zuckerberg's earliest speechwriters tells the story of how he used to end company meetings by saying, "Company over country." And I just wonder, you know, he has been the founder of that company, the chief executive, he is completely in control. And if that is the mentality going in, "company over country," you can see how so many of these decisions would be made downwind of that.
[00:40:43] BRIAN LEHRER - HOST, BRIAN LEHRER SHOW: Now, on that Vox headline I cited in the intro, "Wall Street doesn't care about the Facebook leaks, Mark Zuckerberg does," the Wall Street part of that is that Facebook's stock does not seem to be taking a hit because of the Facebook Papers. But the article includes a clip of Zuckerberg on Monday, largely blaming the media and critics with an agenda, rather than the company's own policies, for the hits its reputation is taking.
[00:41:12] MARK ZUCKERBERG: Good faith criticism helps us get better. But my view is that what we're seeing, is a coordinated effort to selectively use leaked documents to paint a false picture of our company. The reality is that we have an open culture, where we encourage discussion and research about our work, so we can make progress on many complex issues that are not specific to just us.
[00:41:37] BRIAN LEHRER - HOST, BRIAN LEHRER SHOW: Mark Zuckerberg had a public presentation of quarterly earnings on Monday. Sheera Frenkel is still with us from the New York Times.
Sheera, do you give Zuckerberg's argument there any credence? Is there a complexity that maybe some of the reporting, not yours, but some of the reporting in other news organizations, is missing in favor of an overly simplistic narrative of Facebook coddling disinformation for financial gain?
[00:42:04] SHEERA FRENKEL: I... I'll be honest, I haven't been able to read all the articles that have come out, because there's been dozens and dozens of articles.
I think that, you know, of the articles I have read in the Washington Post, Wall Street Journal, and AP, I have seen some really fantastic reporting that has delved into these documents.
Is it complete? Does it have the 360-degree view that, I imagine, Mark Zuckerberg has? No. And I imagine it's not black and white, because in all the reporting we've done, we haven't seen any examples where Mark Zuckerberg, or Sheryl Sandberg, or anybody else at the top of the company makes a decision out of malice. I think you could say they're not evil people. They're not sitting there, you know, like the villain from Austin Powers, I forget the character's name, twiddling their thumbs and saying, "How do we destroy the world?" That's not what they're doing as a company. And I think they do take some measures that are advised by these researchers.
However, you know, I think he's trying to portray it in a certain way. I think you can also look at these documents and say, if they're given a one through five option, where five is the most effective, but loses them people, and one is the least effective, but doesn't lose them any users, they're not going to choose the most effective, and risk losing those users. They're not going to risk hitting their bottom line. That is not how they're operating as a company.
[00:43:18] BRIAN LEHRER - HOST, BRIAN LEHRER SHOW: Right. The USA Today headline I cited in the intro says, "Instagram's dangerous to children unite liberal and conservative lawmakers who agree on little else."
So, assuming that common ground, how much is Washington divided on the main Facebook platform along the lines of, you know, Democrats say "Facebook, doesn't stop disinformation enough," and Republicans say their politics of the right are singled out for censorship?
[00:43:51] SHEERA FRENKEL: You know, for a very long time, Democrats and Republicans have been divided along the very lines that you just described. It's become a free speech argument. And as we know, America cares very much about the First Amendment. It is a very complicated argument.
And this is actually a really great place for Facebook to have people divided on, right? If we're busy talking about the First Amendment, and what should Facebook monitor, and what specific pieces of content are appropriate to remove, versus those that aren't, you can see how you would get bogged down in that for decades, really.
But I think what we're starting to see now, and what I certainly saw in the last Senate hearing, was members across the aisle looking at this in a different way. They're not asking the question, "Should you be allowed to say something on Facebook?" They're asking the question, "Should Facebook be promoting it?"
If you want to share a conspiracy, and say that the Earth is flat, for instance, fine. Say that. Say the Earth is flat. But should Facebook push people into groups that try to convince them that the Earth is flat? That is where senators are now focusing their attention. And I think that's something that Republicans and Democrats can both get behind, because we're talking about Facebook's decisions as a company: promoting things, pushing things to people.
I sometimes think of this as the equivalent of, you know, soda companies: should Pepsi Cola, whatever, be allowed to sell their product? Sure. But should they place them in the hallways of schools, where elementary kids are encouraged to buy them? Probably not, right? That's probably something we've moved against.
We've said, "It's not a great idea to put them right there, accessible to small kids."
So we're focusing on the decisions Facebook makes as a business, and what it recommends, rather than the decisions of individuals, and what they post.
Facebook Whistleblower Frances Haugen in Conversation Part 2 - Your Undivided Attention - Air Date 10-18-21
[00:45:28] FRANCES HAUGEN: So I think one of the things that I found very shocking about what's in the documents is there are multiple examples of people external to Facebook clueing in on patterns that we were seeing inside of Facebook. So researchers inside of Facebook saw things like: the angrier the comment threads on a post, the more clicks go out of Facebook, back to a publisher.
The publishers were writing in and saying, Hey, our most popular content on Facebook is some of the content we're most ashamed of. Right? It's inflammatory, it's divisive. It plays on stereotypes. Political parties were coming to Facebook and saying, Hey, we noticed you changed the algorithm. It used to be that we could share out something like a white paper on our agricultural policy, and people would still get to read it. Only now when we do that, all we get is crickets, right? It doesn't really work anymore. Because in engagement-based ranking those polarizing extreme divisive pieces of content are the ones that win.
I think that's one of these interesting things. The reason I feel so strongly about chronological ranking, ordering by time, is that everyone can understand what order-by-time is. And even Facebook doesn't really understand how the newsfeed works. And I just think it's safer for society for us to say, hey, let's have a thing prioritizing our attention that we all understand, instead of a system that not even the experts in the world understand.
[00:46:50] TRISTAN HARRIS - HOST, YOUR UNDIVIDED ATTENTION: Of course, for that to work in a game-theoretic way, among the apps, TikTok versus Twitter versus Instagram versus Facebook, the one that chooses a chronological feed won't get as much engagement as the ones that rank by what's really good at getting your attention. So going chronological runs into a kind of game-theoretic multipolar trap: you would need everyone to go to a chronological feed at the same time.
And what I think that's pointing to is not necessarily the chronological feed itself; you said it yourself, that's just a start. What you're really talking about is that everyone should understand why the things that are coming to them are coming to them. And it shouldn't be based on an automated system that prioritizes the things that make society not work. I mean, the way I think about it now is that Facebook's business model is basically making sure you can never have a Thanksgiving dinner where you understand anybody else at the table. Because their business model is polarizing society, so that everyone gets their own personalized view of what was most dividing, et cetera. And the goal here can't just be a nicer, more enjoyable Facebook. It's got to be: well, Facebook is operating the information system, or all these systems are operating the information, that goes into an open society. And the open society's ability to govern is based on synthesis and constructiveness and perspective seeking and perspective synthesis, and saying, okay, what are we actually going to do about our biggest problems? And the biggest problem I see in an engagement-based ranking system is that by rewarding more extreme, polarizing population bases or political bases, it means that, as you said, politicians and political leaders have to cater to a more extreme base. Which means that their unique selling proposition to their constituents is never agreeing with the other side, which means that democracy grinds to a halt.
And that's what I mean: Facebook's business model is making sure you can't show up at the dinner table at Thanksgiving and have a conversation, and making sure that you're always going to lose faith in your democracy. And those two things are incompatible with democracy working. And that's the kind of thing that makes people say, hey, I don't even want this democracy anymore. I want authoritarianism. So either I want China, or I want to elect some kind of strongman who's just going to smash the glass and break through this thing so we can actually have real governance that's delivering results, at least in some direction, as opposed to constant gridlock.
[00:48:57] AZA RASKIN - CO-HOST, YOUR UNDIVIDED ATTENTION: Which means that we're talking about not just Facebook, but a business model more generally. And as you're pointing out, Tristan, that means the solution can't be applied only to Facebook. It has to be applied to the entire industry at once.
[00:49:11] FRANCES HAUGEN: Yeah. I think it's the thing where we're going to have to have government oversight, and have them step in and say, hey, Section 230 right now gives immunity for content that is supplied by users. Right? So if platforms aren't the ones creating content, then they're not responsible for the content that gets created. But platforms are responsible for the choices they make in designing their algorithms. And I think exempting those algorithmic choices from 230, and forcing platforms to publish enough data that people could hold them accountable, is an interesting strategy for pushing more platforms towards chronological ranking. Because the reality is, if people can choose between an addiction-based, growth-hacked, engagement-ranking-based feed, or one that is time-based, they're always going to pick the one that's engagement based. 'Cause it is stickier. It does make you consume more content. But at the same time, it also makes people depressed. It also causes eating disorders in kids. There are real consequences to these systems.
And I just think in the end, if you actually talked to people and you said, do you want computers to choose what you focus on, or do you want to choose what you focus on, I think from a personal sovereignty perspective, we should all want to have control over what we focus on, not have computers tell us. Especially Facebook's computers.
[00:50:27] TRISTAN HARRIS - HOST, YOUR UNDIVIDED ATTENTION: I'm curious Frances, as you think about governance for Facebook to take this action or all the other actions that are sort of like this, the safer for society at the cost of a little bit of profit, what is the institutional governance, the process by which we can get there? Because there's going to be this example. And then there's going to be another 10 examples and then another a hundred going down. So it's like designing products is a moving process. What is the moving process for governance that sort of matches the knowledge required and the speed required to work on products like this?
[00:51:05] FRANCES HAUGEN: We need to have a conversation about how do we make sure that Facebook isn't the only one grading its homework. Facebook has established a pattern where even when asked very important, direct questions, questions like, is Facebook safe for our kids? Facebook has outright lied to official groups like Congress. We need to have a process of having privacy protected data. So there's ways of obscuring the data such that people's privacy won't be harmed. And Facebook has been emphasizing a false choice, that we can either have oversight or we can have privacy. And that is a false choice. We need to be working collaboratively with researchers to develop obfuscation techniques that can be privacy sensitive, but also give meaningful data.
So the very, very bare minimum is we need to have enough data that someone other than Facebook can choose what questions to ask about Facebook.
The second is we need to have something like a regulatory body that can actually force Facebook to make changes when those independent researchers identify problems. Because right now, until the incentives change, nothing is going to change at Facebook. And Facebook has shown that on its own, it will not change.
Facebook Whistleblower Frances Haugen in Conversation Part 3 - Your Undivided Attention - Air Date 10-18-21
[00:52:12] TRISTAN HARRIS - HOST, YOUR UNDIVIDED ATTENTION: What's wild about Frances Haugen's whistleblowing is that our very conversations about it are falling prey to precisely the forces that she's whistleblowing about. For example, do you have an opinion about Frances? Did you read that opinion on social media, on Facebook or on Twitter?
There are stories going viral right now that she's some kind of operative or a fake or phony whistleblower, or she was secretly colluding with the government to drive up more censorship of speech online. But these stories are actually going viral because of the very forces that Frances is talking about.
But the amazing thing about Frances is, she still believes change is possible. And that's why she blew the whistle: in order to help Facebook make the changes that so many people outside the company, and so many people on the inside, want Facebook to make.
[00:53:05] AZA RASKIN - CO-HOST, YOUR UNDIVIDED ATTENTION: Frances, I am so excited to have you here on Your Undivided Attention. And I just have to start by asking, how are you? And when you decided to blow the whistle, did you realize that what you were planning to leak would become the biggest exposé in the history of the company?
[00:53:22] FRANCES HAUGEN: I'm doing okay. The months between when I left Facebook and when the stories began to come out were much harder than the last week, because I accept the consequences of my actions. I don't think I did anything wrong. I took a great personal risk because I cared about what I thought the consequences were, right? I thought kids' lives were at risk, that Facebook was responsible for ethnic violence in other countries.
But the thing that then motivated me to act was a fear that the ethnic violence in Myanmar and in Ethiopia was just the beginning. And it's really scary, right? You look down the road, and, at my low point, which was New Year's Eve 2019 into 2020, I literally had a panic attack, because it had been so many months of learning more and more and more depressing, scary things, and feeling like there weren't resources inside the company to actually make the level of difference fast enough to respect those lives.
And Facebook keeps saying this: we invest more in integrity than anyone else in the world.
Well, yeah, but you also have voluntarily chosen to go into some of the most fragile places in the world. And to take on even more responsibility, you've chosen to subsidize the internet for the people in those vulnerable places, but only if they use Facebook.
And so I think there is a certain level of obligation that comes from the idea that if you save someone's life, you're responsible for it, right? If Facebook is going to take on the internet experience for hundreds of millions of people around the world, people for whom, because Facebook subsidized their access, a free and open internet didn't develop in their language, it feels like Facebook has a higher obligation of safety for those people, given that it has made it harder for alternatives to emerge.
[00:55:09] TRISTAN HARRIS - HOST, YOUR UNDIVIDED ATTENTION: Maybe that's a good place to dive in because you worked on the civic integrity team, which was looking at a lot of what I think you called at-risk countries, or maybe Facebook has that internal terminology. And I think, we've spoken a little bit on this podcast about that.
But my sense is that you were looking at a lot of really dark things that were happening around the world, and they were a lot darker than maybe what's happening in the US, though I have a sense that we get a flavor of that, as we've talked about a lot in this podcast. Do you want to talk a little bit about what are the things that you saw that you were worried not enough other people were seeing?
I'll just say one last thing, which is, you realize that the world didn't understand the information you were looking at, because you were part of a team of maybe 20 people inside civic integrity who actually knew this. And it had big consequences, and the rest of the world didn't know that. And I just relate to that so much: there's this sort of internal truth of certain areas of the tech industry that are bound up with these problems, and the rest of the world doesn't understand it.
I would just love to know: how do we go ahead and equalize some of that asymmetry, so people understand what you saw happening?
[00:56:11] FRANCES HAUGEN: On the question of even inside the United States versus outside the United States: I joined Facebook because I had lived with the consequences of what misinformation could do in the United States. And I showed up and I learned almost immediately that what we see in the United States is the sanitized, healthiest version of Facebook; most languages in the world don't have basic integrity systems from Facebook, because Facebook has chosen to write these AI systems that make its platform safer in only a handful of languages.
And I learned that in places in the world where people are freshly coming on the internet, there's lots of norms that we take for granted as people who live in a society that's had the internet for, you know, 30 years at this point. For example, the idea that people put fake stuff up on the internet, that idea is not actually a norm in other places to the same extent that it is here.
You had people with master's degrees in India saying -- these are educated people -- saying, why would someone go to the trouble of putting something fake on the internet? That sounds like a lot of work. And once you start realizing that people's experience of the internet is so different depending on where you are, that someone who is becoming literate to use Facebook for the first time in a place like Myanmar, their experience of the internet in a situation like that is just very, very different from what we're seeing today.
And we're about to face a real, real big change as a global civilization: as Starlink -- that's SpaceX's internet service -- expands, I bet Facebook gets another billion users over the next five years. And a lot of those people will have the internet for the first time. They may become literate in order to use Facebook. And given the level of protections I saw Facebook was giving to people in the most vulnerable places in the world, I genuinely think that there is a shockingly large number of lives on the line.
[00:58:06] TRISTAN HARRIS - HOST, YOUR UNDIVIDED ATTENTION: I mean, from a business perspective, it also costs resources to support each one of these other languages. And there's a growth curve here: these new countries are coming online, these new languages are coming online, and they're all languages Facebook hasn't supported before, which means they cost a lot more than adding a million more users in a language it already has.
So do you want to talk a little about the economies of scale that emerged from this issue?
[00:58:31] FRANCES HAUGEN: The economies of scale issue is really essential to the safety problem at Facebook. Let's imagine Facebook is trying to roll out to new countries and new languages. They would do exactly what Facebook has done. They started out in English; they got lots and lots of English-speaking users. They started rolling out to major Western European languages. They moved out into the next largest markets. And as they continue to grow, they keep going into these communities that have fewer and fewer language speakers.
Programming a new language for safety -- building out these integrity and safety systems -- costs the same amount in a language that has 10 million speakers as in one like English that has a billion speakers. As you roll into these new languages, the average amount of revenue you get gets smaller and smaller, because each one of these new languages has fewer speakers. And on average, the speakers of those languages are less affluent than the ones that are already on the platform.
And so given that there is a fixed cost for each one of these new languages, it's just not economical for Facebook to build out the level of safety that currently is available in the United States and these other places. And people are paying for that with their lives.
[00:59:39] AZA RASKIN - CO-HOST, YOUR UNDIVIDED ATTENTION: What this brings to mind is Sophie Zhang, who is another Facebook whistleblower. And to quote her, she said, "I've found multiple blatant attempts by foreign national governments to abuse our platform on vast scales to mislead their own citizenry, and caused international news on multiple occasions. I have personally made decisions that affected national presidents without oversight, and taken action to enforce against so many prominent politicians globally that I've lost count."
And I believe you've talked, Frances, about the triage process. I can't remember whether you said it was 80% or two thirds of the cases when you were on the espionage team that you just had to ignore.
[01:00:22] FRANCES HAUGEN: I worked on counter-espionage under the threat intelligence org as the last job I did at Facebook. And our team was small enough that, of the cases we knew about, we were only able to actively work on about a third of them. About a third of them, we would occasionally check in on. And a third of them were basically in the icebox. Maybe occasionally someone would take a look, but they weren't able to be worked at the level of the third of cases that did get support. And the part that I think is most concerning is we intentionally did not invest in proactive detection mechanisms, because we couldn't support the cases that we had to start with.
And so the metaphor that I think of for this, is we were dealing with the outer skin of an onion. Only we didn't know if the size of the onion was a baseball or a beach ball. Because we intentionally did not invest in systems to figure out how many candidate cases there were, because we couldn't work on the ones that we already had.
So there's this question of, like, why is that happening? Facebook makes $40 billion in profit a year, right? That's not the revenues. That's profit. Surely Facebook could afford to have 50 people on that team instead of having under 10, maybe six, people on that team. I think there's a real thing here: Facebook has a long tradition of thinking of itself as a scrappy startup, and there is a lack of appropriate baselining for what level of investment should be happening. And so Facebook has made a number of statements about how there are hard trade-offs involved, and if the answers were easy, these problems would be solved.
And actually, there are lots of solutions that have been identified by Facebook's own employees. The challenge here is that Facebook doesn't want to invest three, four, or five times as many people in solving these problems. And because there is no oversight, we, the public, never get a chance to weigh in on what level of investment should be happening, even though our safety and our lives are on the line.
The History of Tomorrow - On the Media - Air Date 11-5-21
[01:02:14] BROOKE GLADSTONE - HOST, ON THE MEDIA: The new name is "Meta," and the new vision is the "Metaverse," which already exists, in limited forms, on other platforms, including one offered by the blockbuster online game Fortnite.
Many tech watchers have observed that if Zuckerberg wants to succeed, he'll have to cooperate with those other platforms, which Facebook is not good at.
Ethan Zuckerman, founder of the Initiative for Digital Infrastructure, wrote that, "Really, Meta's vision is about distracting us from the world it helped break."
Zuckerberg seeks to build an all-inclusive, virtual-reality-powered, commercial, cultural, and social space, where cool things happen that you don't want to miss! And where do many, maybe most, of the coolest ideas come from? Science fiction, or speculative fiction -- SF, if you will.
Take Elon Musk, pursuing space exploration and neural links to the internet. He just loves "The Hitchhiker's Guide to the Galaxy," which really seems to grok our Mr. Musk.
[01:03:19] DOUGLAS ADAMS: And for these extremely rich merchants, life eventually became rather dull. And it seemed that none of the worlds they settled on was entirely satisfactory. Either the climate wasn't quite right in the later part of the afternoon, or the day was half an hour too long, or the sea was just the wrong shade of pink. And thus were created the conditions for a staggering new form of industry: custom-made luxury planet building.
[01:03:48] BROOKE GLADSTONE - HOST, ON THE MEDIA: Did you know Amazon's Jeff Bezos considered naming what would become his gargantuan retail disruptor "Make It So"?
But what happens when a great literary genre -- yes, great! -- is misread? What might these tech moguls have learned if they'd only read slower, or better?
Starting with the novel that introduced the word "Metaverse": Neal Stephenson's "Snow Crash."
[01:04:15] ANNALEE NEWITZ: So, the "Metaverse" is both a, kind of, immersive world that you can dive into, or it can be accessed through goggles that give you an augmented reality.
[01:04:27] BROOKE GLADSTONE - HOST, ON THE MEDIA: Annalee Newitz is a science fiction author, and science journalist, and co-hosts the Hugo Award-winning podcast, "Our Opinions Are Correct."
[01:04:34] ANNALEE NEWITZ: The part that they seem to have forgotten, is that the story around the metaverse in "Snow Crash" is that, access to it is controlled by a crazy magnate named L. Bob Rife-- whose name is clearly a reference to L. Ron Hubbard-- and he controls a cable TV network. So to access any of the goodies in there, you have to go through this one corporation. Ultimately, Rife decides that the best way to maintain his control is to release this weird, semi-magical virus, that's both a computer virus and a human virus, that causes people's brains to shut down. They crash, and they are no longer able to speak or understand language.
[01:05:23] BROOKE GLADSTONE - HOST, ON THE MEDIA: Does Zuckerberg not see how he would be cast in such a scenario?
[01:05:28] ANNALEE NEWITZ: I mean, that's certainly what a lot of people have been suggesting. It's about Zuckerberg, kind of, misreading his place in the story.
I also think that it's about not being able to think about where Facebook fits into this larger social picture. It's interesting to see this discussion about who Zuckerberg is in the story of "Snow Crash" at the same time that the U.S. government is asking, "What is Facebook in the context of our nation?" Trying to hold Facebook to account for everything, from suicidal ideation among young people, to the attack on the U.S. government in January...
Two parallel conversations: one of which is about "What is Facebook doing to us right now, in the United States and in other places?" And also, "What does Facebook mean to the progression of humanity?" -- which is the bigger science fictional question that is raised by thinking through "Snow Crash."
[01:06:31] BROOKE GLADSTONE - HOST, ON THE MEDIA: In 1998, the philosopher Richard Rorty said that novels like "Snow Crash" -- I mean, that one specifically! -- were writings of "rueful acquiescence in the end of American hope." Inspiring, right?
[01:06:49] JILL LEPORE: Yeah, that's what you really want to build your future out of.
[01:06:52] BROOKE GLADSTONE - HOST, ON THE MEDIA: That's Harvard historian and New Yorker writer Jill Lepore.
I wondered if, as an educator, she saw reading comprehension as the issue.
[01:07:00] JILL LEPORE: They read the books for the gadgets. It's like reading Playboy for the articles. I don't know. It's a weird thing. And that's why they don't read Octavia Butler, and Margaret Atwood, or Ursula K. Le Guin. You know, it's one thing to read Neal Stephenson for the gadgets and ignore the social world. It's another thing to read Margaret Atwood for the gadgets and ignore the social world. You just pretty much can't do it. Right? So it's a very selective and willful reading.
It's an abdication of what reading is, and it's an abdication of historicism. If you were, you know, taking a class in science fiction, you'd be learning a lot about the writers, and the world that they lived in, and the nature of the critique they were offering.
These guys as boys read all these novels that were about the building of worlds, right? They're all world-building novels. And now, they're the only people on the planet who are rich enough to actually build worlds.
[01:07:50] BROOKE GLADSTONE - HOST, ON THE MEDIA: Lepore is talking about the tech moguls raised on fiction written during the mid-last century, by mid-last-century men, creating worlds suggested by, and imagined in, a time and place very different from our own.
The tech wizards, she says, used an X-Acto knife to extract the gadgets from these books, irrespective of the authors' context.
[01:08:13] JILL LEPORE: Asimov and Heinlein were writing from this perspective of a 1950s "Mad Men" era culture of, "swaggering, white male solves everything with his fancy new computer."
We don't live in that world anymore. And, in fact, science fiction has so long ago moved on from all of this stuff. You know, there's a whole world of Afrofuturism, and post-colonial fiction, and feminist science fiction. These guys never cite that stuff!
They say, "Well, we're really big science fiction fans!" -- but they read books that were published, generally, in 1952. Like, their vision of the future is unbelievably obsolete and antique.
It's not that the people who are not excited about the "Metaverse" are Luddites and backward-looking; it's that the "Metaverse" is Luddite and backward-looking.
You know, science fiction is not fundamentally about the future. It really is always about the present, or about the past. So to read it as a manifesto for the future is to begin by misreading.
Final comments on Facebook's treatment of nonprofits and the actions you can take to call for regulation
[01:09:13] JAY TOMLINSON - HOST, BEST OF THE LEFT: We've just heard clips today, starting with Democracy Now, discussing the Facebook Papers and the fundamental threat Facebook poses to public discourse and democracy;
Your Undivided Attention explained the mechanisms driving extremism, particularly including group invites;
The Brian Lehrer Show looked at what was revealed by internal Facebook experiments that showed how extremism and conspiracy theories get promoted, and the total lack of investment in civic integrity outside the U.S.;
TyskySour pointed the finger less at Facebook and more at the failure to regulate Facebook;
Democracy Now discussed more of the impact of Facebook positioning themselves as the global commons, while being unable to manage the burden;
The Brian Lehrer show explored the fundamental conflict within Facebook and proposals of reforms and regulation;
and Your Undivided Attention discussed even more suggested regulations.
That's what everyone heard, but members also heard bonus clips, including more from Your Undivided Attention, with a focus on Facebook insufficiently investing in harm mitigation around the world, and their internal civic integrity measures;
and On the Media, which analyzed the science fiction inspirations for big tech companies.
To hear that and have all of our bonus content delivered seamlessly into your podcast feed, sign up to support the show at bestoftheleft.com/support, or request a financial hardship membership, because we don't make a lack of funds a barrier to hearing more information. Every request is granted, no questions asked.
And now, I just want to finish up with a few thoughts on Facebook generally, but also, in particular, their treatment of nonprofits.
I have it on good authority, from an anonymous source, a few of the ways that Facebook's interactions with nonprofit organizations try to, sort of, mask their fundamental internal conflict -- how Facebook plays like it's being helpful to nonprofits with good missions, like it's on your side and trying to help you, when really, not so much.
So the problem for nonprofits starts at the same place as everyone else. They have content that they want to get in front of their community, and they cannot get it in front of their community without paying for ads. That's the fundamental problem of being a content producer, trying to be seen on Facebook.
What Facebook does, in response-- they've sort of started this internal community help center for nonprofits, where you can, sort of, join and get insider tips, and strategies, and, sort of, a leg up with this direct contact with Facebook.
And you would think that they might start offering free money to boost your content. Like, "Look, if the way our system works is you have to pay to play, maybe it doesn't make sense to charge nonprofits for that. Maybe you get free money each month."
Google AdWords does have that system; Facebook does not. What Facebook does instead, particularly when communicating directly with these nonprofits, is gaslight them. They claim that all you need to do to be seen more is have better content. Just invest more time and energy, and probably money, in creating better content to put on Facebook, and then it will be seen more organically.
This is not true. Or, to the extent that it is true, the effect is incredibly marginal. The way to be seen on Facebook is to pay Facebook money. And, even then, the returns will be incredibly minuscule.
So, as I said, this is basically gaslighting. There is no reach that you can get on Facebook without handing over lots and lots of money.
So, the other thing they have done to try to make themselves look like good guys is to emphasize that they don't charge transaction fees for nonprofits who want to receive donations directly through Facebook. So, if you get small-dollar donations, it might save you 6%, or somewhere in that neighborhood, as opposed to using a third-party service.
Okay, great. 6% on your donations. That sounds nice. However, the problem is, if you have no reach, you'll get no donations. So saving 6% on the incredibly small number of donations that come in, because your reach is so low, doesn't make that much of a difference.
So there's wide sentiment across NGOs that they would much prefer a boost in their reach, even if they then had to pay the transaction fees. If they had to pick one, they'd say, "Look, give us better reach. We will generate the donations, and then we'll pay the fees. It'll be fine," because they would come out ahead anyway.
And then lastly, the last, sort of, fig leaf that Facebook puts up to try to make it look like they are helping nonprofits is to form partnerships with third-party companies that do design work, and then give these nonprofits, sort of, you know, free or discounted access to make better content. Which goes back to point one: emphasizing, over and over again, "You just have to invest time, energy, money, and we'll just try to save you a few dollars around the edges. But if you want anyone to see this content that you're creating... yeah, we're sorry, the only way you can really do that is by handing over money, in the form of paying for your post to be boosted," et cetera.
It's not complicated. And, as we've heard on the show today, there's probably internal conflict. I wouldn't be surprised if the people who were actually running that nonprofit help center community really wanted to help. But they're in an impossible position because of the business model of Facebook. Just like the people in the civic integrity wing, who make genuine proposals to genuinely improve Facebook, and then run smack into a wall when the executives realize it's going to hurt the bottom line.
So, the last thing that this made me think about is how incredibly similar this is to the fundamental conflict within, sort of, the liberal elite wing of philanthropy; people who, again, genuinely want to make the world a better place. They see the problems, they understand the problems are there. They want to fix them. And they just think, "Is there any way that we can fix those problems without it impacting my power or wealth in any way? Can I just maintain my level of elite status and wealth and then help other people? Can we do that?"
No. No, you can't. That's... We've tried that. It hasn't been going well.
Facebook is in that same bind. They keep thinking, "Can we just maintain our business model exactly as it is, and still find ways to help people?"
No, not really. You can't. It's a fundamental conflict. You cannot do both. And the solution has to come from the outside. There simply has to be regulation, because it is inconceivable that Facebook will ever make the necessary changes from the inside, of their own volition. It's not going to happen.
So, in the meantime, as we wait for that regulation to come about, there are a couple of things you can do. There is a mass user strike coming up from November 10th through the 13th: you simply log off from Instagram, Facebook, and WhatsApp, and then, collectively, stay off those platforms for at least that four-day period. Hopefully making a point, hopefully putting a little dent in their income. It's not going to fix everything, but it's not going to do any harm either.
And, besides, you might actually enjoy the break, and extend it a little bit longer, and it could be good for your own mental health, even if it doesn't take down Facebook in one fell swoop.
You can learn more about that at thefacebooklogout.com.
Additionally, the American Economic Liberties Project is pushing for the Department of Justice, the Securities and Exchange Commission, and the Federal Trade Commission, to open criminal investigations into Zuckerberg and other Facebook executives.
The group says that, thanks to whistleblower Frances Haugen, we now have smoking-gun proof that Facebook privately deceived investors, regulators, and clients in building their tech empire, chronically violating the law, knowingly and intentionally.
So you can sign a petition for this action, or, better yet, you can demand that your legislators publicly call on these agencies to open investigations based on this new damning evidence. There's a link to the petition, which includes language you can use to send to your members of Congress, in the show notes.
So, that is going to be it for today. As always, keep the comments coming in at 202-999-3991, or by emailing me at [email protected]. Thanks, everyone, for listening.
Thanks to Deon Clark and Erin Clayton for their research work for the show and participation in our bonus episodes.
Thanks to the Monosyllabic Transcriptionist Trio, Ben, Ken, and Scott, for their volunteer work helping put our transcripts together.
Thanks to Amanda Hoffman for all of her work on our social media outlets, activism segments, graphic designing, web mastering, and bonus show co-hosting.
And thanks to those who support the show by becoming a member or purchasing gift memberships at bestoftheleft.com/support, or from right inside the Apple Podcasts app. Membership is how you get instant access to our incredibly good bonus episodes, in addition to there being extra content and no ads in all of our regular episodes.
So, coming to you from far outside the conventional wisdom of Washington, DC, my name is Jay, and this has been the Best of the Left podcast, coming to you twice weekly, thanks entirely to the members and donors to the show, from bestoftheleft.com.