#1514 Creating a Digital World of our Worst Habits (Transcript)

Air Date 9/17/2022

Audio-Synced Transcript

JAY TOMLINSON - HOST, BEST OF THE LEFT: Welcome to this episode of the award-winning Best of the Left podcast in which we shall take a look at the way terrible patterns of the past -- well, past and present -- like colonialism, racism, propaganda, feudalism, and abuse of corporate monopoly power are recreating and re-entrenching themselves in the digital world.

Clips today are from The Majority Report, Wisecrack, This Machine Kills, Your Undivided Attention, The Arts of Travel, and Future Hindsight, with additional members-only clips from Future Tense and This Machine Kills.

And stay tuned to the end for my list of recommendations for how to use the internet without it making you want to kill yourself.

Rise Of Digital Oligarchy w/ Jillian York - The Majority Report w/ Sam Seder - Air Date 8-4-22

JILLIAN YORK: I think a lot of the early internet thinkers were libertarians, self-statedly so. But really what I've seen is that a lot of these companies pay a lot of lip service to freedom of expression, to human rights. I mean, we've seen this with Meta, Facebook's latest human rights [00:01:00] report, which is pretty shallow.

But ultimately these companies are doing what's best for their bottom line, and not what's best for their users or for rights.

EMMA VIGELAND - HOST, THE MAJORITY REPORT: Right. And you mentioned Meta and Facebook there. I think that was a large focus of your book. Would you say that Facebook is kind of the worst offender in terms of the structure of that company, and what do you find about it to be so problematic?

JILLIAN YORK: Yes. I mean, I think it's fair to say that all of the major tech platforms, social media platforms, have problems with their policies and practices, but Facebook, or Meta, really is one of the worst offenders. And part of it is the structure of the board and the company. The fact that Zuckerberg himself has a majority share in the company. But also just the fact that they put a strong focus on the United States and on issues there, but they really marginalize the rest of the world and the rest of their users, even though those users make up the majority. And that's really what gets me with Facebook: they're not even really trying to fix the myriad problems that advocates have brought to them for the past decade or so.

EMMA VIGELAND - HOST, THE MAJORITY REPORT: And what does that look [00:02:00] like when it's tilting the scales in favor of wealthier countries? Tracing over the lines of colonialism and exploitation and just printing it onto their algorithm to a degree? How does that manifest itself in reality?

JILLIAN YORK: Sure. I mean, there's so many examples and marginalized people everywhere, including in the US, are often silenced by these platforms.

But to give one clear concrete example, we've seen a lot of exceptionalism when it comes to the way that Ukrainians are being treated on these platforms. They're given a lot more leeway, including violent and extremist groups in Ukraine that are fighting back against Russia. They're given a lot more leeway to share violent content and praise of even neo-Nazi organizations. Whereas Syrians, on the other side of that, another population that's been in conflict for more than a decade now, they're absolutely silenced by these platforms, human rights documentation is removed by the platform. So even things that could be used in, say, war crimes tribunals. And so that just kind of demonstrates the [00:03:00] imbalance and the sort of disproportionate leeway given to certain groups, and, like you said, along colonialist lines.

EMMA VIGELAND - HOST, THE MAJORITY REPORT: Would you trace that more to power imbalances? Or is it really just -- well, it's probably one and the same, but there is of course racism involved. But that's obscured, I guess, because people think it's tech, it's an algorithm. There's no way for it to have a racist infrastructure or one that exacerbates already problematic and existing power dynamics.

What's your response to that?

JILLIAN YORK: Sure. I mean, yeah, I think it is a little bit of everything. But really I think what we have to keep in mind is that there's humans behind the algorithms. Humans are the ones making the policies, deciding -- and this has been explicit with Meta's policies -- that Ukraine is given exceptions to their hate speech policy. It's humans who are building the algorithms, baking bias into the ways in which these algorithms, say, decide what is or is not hate speech and what should or should not be taken [00:04:00] down. So to use a fairly innocuous example -- I hope it's okay to say this -- the word "dyke," which is a word that is a slur, but is reclaimed by queer communities. That's a word that algorithms often remove, even when it's being used positively or self-referentially. And so you can see how that would play out along other controversial terms. That's something that's programmed by people who either lack understanding of, or deprecate, the secondary meaning of a term.

EMMA VIGELAND - HOST, THE MAJORITY REPORT: And is it possible -- well, I'll ask this a little bit more straightforwardly -- are there any other major tech companies that are doing it better? Like is TikTok doing it better? Is Twitter doing it better? Twitter seems -- and I don't know anything, but just from my usage -- it seems to be the least problematic in that way. But I even hesitate to say that because of course it has its own massive problems as well.

JILLIAN YORK: Yeah. I mean, all of these companies have problems. I would say that Twitter, Twitter's been embroiled in some controversy over the past few months, and I'm not [00:05:00] very impressed with their CEO. But at the same time, I do know some folks on their policy team and I see the effort that they've put into getting geopolitics right at least. They do get a lot of other things wrong. So I don't wanna give them too much credit.

If I were to point to a company that I think gets more things right than most, I would say Reddit, actually. And of course they've certainly had controversies, over things like child sexual abuse imagery in the past, as well as, more recently, over The_Donald. But they've put in place rather innovative methods, such as quarantining a certain community so that it can still exist. There's expression there, but new people can't join, so it limits the ability for those communities to go viral. They also have individuals volunteer to moderate their own subreddits, which gives people sort of a sense of empowerment over the community that they're a part of. And I think that's really important. I'll also say Reddit gets higher ratings than all of the other companies when it comes to transparency.

EMMA VIGELAND - HOST, THE MAJORITY REPORT: That's interesting. So Silicon Valley, I feel like -- and a lot of the libertarians and right wingers who are now using it as an [00:06:00] opportunity to feel aggrieved -- have conflated First Amendment violations with censorship on these platforms. And you argue -- and I don't wanna misrepresent your argument -- but I believe you argue that censorship is good in some instances on some of these platforms. Can you talk about that false equivalency and what it upholds, and why your position is what it is?

JILLIAN YORK: Yeah, absolutely. I mean, so the First Amendment protects us from state interference. So companies, legally speaking, can do pretty much whatever they want when it comes to curating their own platforms. Now my argument is really that censorship is a value-neutral term. It's not a legal term. It's not synonymous with the First Amendment. And so the way that I see it is that there's censorship that we agree with and censorship that we don't agree with. And of course "we," you know, varies by community, by country, by person. And so for example, I think it's fair to say that the majority of us agree that child sexual abuse imagery absolutely should be censored, and that's okay. We should just be [00:07:00] honest about what we're doing there. I think when it comes to terrorist or extremist content, for me that's much more up for discussion, because there is some value for society in being able to see into a violent conflict. If we just let YouTube erase all of that imagery, then we're not necessarily going to know what's really happening on the ground in Syria, in Ukraine, in Yemen. And so that's an area where I would say that it's censorship, and censorship that I don't always necessarily agree with.

And so I guess what I'm saying with companies is that, while I'm not in favor of censorship per se, I think that there are some things that they should limit for the greater good. And I think that we should be looking towards international human rights frameworks to determine what those things are.

How Social Media Profits Off Your Anger - Wisecrack - Air Date 8-26-22

MICHAEL BURNS - HOST, WISECRACK: Before we get into internet rage, let's talk about why anger feels so good. Here's what happens in your brain when you see a post about how comedians don't feel safe doing comedy anymore because of the slap. At the first sign of stress, oxygen and glucose flood out of the prefrontal cortex, where our rational thinking lives, and the amygdala takes over.

Now, the [00:08:00] amygdala is the part of the brain associated with emotion. And when it teams up with the hypothalamus to get stress hormones pumping outta your adrenal glands, look out world. In other words, your brain, and therefore your body, is fired up and ready to fight, or flee, but probably fight.

Now, anger gives you the same adrenaline rush that thrill seeking does. So maybe next time you feel pissed, just go skydiving instead. Now this reactive aggression activates a reward network of dopamine in the brain. It's the same one that activates when we see something cute or funny, like, uh, like a dog riding a surfboard. And it's tough to outrun dopamine. Once we get a little bit of it, we're conditioned to automatically want more of it.

Being angry creates a feedback loop that's too neurologically enticing to ignore. Now anger has deep roots in our psyche, playing a complex role in how our emotional processing skills develop as children. And it's not a purely negative emotion either. It can give the illusion of control over situations and even positively [00:09:00] influence decision making.

Expressing anger, whether you're yelling or slamming doors, can also promote higher levels of wellbeing and lower stress, though it does make you a pain to be around. Seriously, no one likes you when you're angry, even if you think that's like a part of your personality or my family's always been like that. Nobody likes it. Take it from me, I had to work through it in therapy.

On the flip side, suppressing your anger can lead to irritability, guilt, and decreased life satisfaction. So we get that anger can feel good, but why is being angry right here on the internet so specifically pleasing, more so than, say, screaming at your barista for giving you the wrong milk?

SCREAMING CUSTOMER: I asked quietly [unintelligible, with sound of hands slapping countertop quickly].

ANOTHER CUSTOMER: Hey, get the f** out lady.

MICHAEL BURNS - HOST, WISECRACK: Well, as scholar Ryan Martin explains, when people express their anger online, "they want to hear that others share it because they feel they're vindicated and a little less lonely and isolated in their belief". It can give a sense of catharsis, however brief.

There's a downside though. Martin says, "previous [00:10:00] research shows that people who vent end up being angrier down the road". But sharing your feelings online can also have one huge benefit: anonymity. As a study by psychologist Kimberly M. Christopherson found, online anonymity serves three purposes: recovery, catharsis, and autonomy.

Recovery is the sense of relaxation after actively contemplating your situation. Catharsis is an emotional purge. And autonomy refers to the chance to try on potentially socially unacceptable forms of behavior without any repercussions. So when you sh**post on Reddit, you leave feeling revived, emotionally recharged, and like you're in complete control of your own actions, social conventions be damned.

So getting angry isn't always necessarily unhealthy. But is online anger uniquely harmful, or at least counterproductive? To find out, let's look at a hypothetical form of discourse on, say, Twitter, the website that's been ruining my life since 2009.

Some of the most provocative, widely shared social media posts are [00:11:00] typically known as hot takes. The term hot take comes from sports journalism around 2012 when increasingly controversial opinion columns and shows captivated the online masses.

STEPHEN A. SMITH: Let me say this straight up and down. I think Kyrie Irving should retire.

MICHAEL BURNS - HOST, WISECRACK: As scholar Glenn Fuller writes, "the hot take is a form of discursive commentary native to the post-broadcast networked and global communications industry. Hot takes capitalize on the selective para-editorial practice of social media users and their cultural tastes". That is to say, hot takes rely on the hyper-connected nature of social media and on our tendency to be less than strict with our editorial demands for what we create or share there. Also hot takes was a short lived Wisecrack format that did not lead to enough online rage to warrant continued production, but you should go watch the episode on haircuts. It's the good one. And it's very fun.

Hot takes often stem from anger about a dominant mode of thinking in society. Whether it's as weighty as politics or as seemingly superficial as Ashton Kutcher saying his children rarely bathe.

ASHTON KUTCHER: Now here's the thing. If you [00:12:00] can see the dirt on him, clean him. Otherwise there's no point.

MICHAEL BURNS - HOST, WISECRACK: And hot takes can feel good, especially when your followers applaud you, but you're not always gonna be right. The internet gives us space to wildly overestimate our knowledge and get annoyed when we see content that we think is wrong. And our penchant for hot takes on things we know little about is an example of the Dunning-Kruger effect. That's the social phenomenon where the least competent or well-versed of us overestimate our understanding of a particular issue.

Like some of you, when you tell me I don't know things about philosophy in the comments. You're being a little Dunning-Kruger-ish there.

But even when hot takes are delivered by underqualified people, they're often delivered with profound certainty.

TUCKER CARLSON: If you were to assemble a list, a hierarchy of concerns, of problems this country faces, where would white supremacy be on the list? Right up there with Russia probably. It's actually not a real problem in America.

MICHAEL BURNS - HOST, WISECRACK: An Arizona State University study found that, compared to traditional media, Tweets tend to [00:13:00] exhibit higher levels of certainty and lower levels of tentativeness. And the internet is built to reward quick, impulsive, hot takes, as long as they seem confident.

UNIDENTIFIED SPEAKER: And I'm taking my kids to Disney and we had to wait to get on a ride cuz a bunch of f***in 37-year-olds are all like getting emotional to get on the teacups or whatever. I'm just a little pissed. I'm like, Hey, you had your shot at childhood.

MICHAEL BURNS - HOST, WISECRACK: When a hot take-filled thread gains enough attention, it's often followed by what's known as a quote tweet pile-on, i.e. when lots of people retweet it with thoughts ranging from snarky to "go swim in dog sh**." The pile-on aspect is often fueled by moral outrage against a perceived transgression, which a Stanford University meta study found is increasingly common in contemporary public discourse.

Paradoxically, that same study found that a pile on is often seen as bullying and can actually increase sympathy for the original poster. Some of that may be because, as a study in Connecticut Law Review puts it, "moral outrage also seems to lead people to engage in sophistry or bad arguments, partly because one component of outrage is anger, [00:14:00] which impairs judgment and decision making".

So, in your outrage, you're probably just making sh**y arguments. What's more, you might be engaging in what the internet calls virtue signaling and scholars Justin Tosi and Brandon Warmke call grandstanding. Essentially, these terms mean "saying something in the public sphere for the purpose of impressing others with your moral qualities".

When grandstanding, you'll often feel angry or excited. As they write, "the goal is to receive a general form of admiration or respect for being on the side of the angels". Now it's natural to assume we are morally good. According to behavioral scientists Nadav Klein and Nicholas Epley, "few biases in human judgment are easier to demonstrate than self-righteousness, the tendency to believe one is more moral than others". Grandstanding is annoying. But more perniciously, it's also often a pretext for genuine vitriol. Tosi and Warmke write that people use moral talk to "humiliate, intimidate, and threaten people they dislike, impress their friends, feel better about themselves, and make [00:15:00] people less suspicious of their own misconduct".

The prevalence of moral grandstanding may be part of the reason why scholars William J. Brady and M.J. Crockett argue that moral outrage online is not ultimately effective at galvanizing social change. What's more, they conclude that it may also lead to dehumanization of the enemy, which can even contribute to offline violence.

That's because it's much easier to get angry at someone than to understand them. As political economist Will Davies argues, "if mutual recognition is necessarily slow, then diversion through fury and hostility is extremely fast". Now the goal of every Twitter post is to receive attention. Thus, the popularity of hot takes. And yet the more viral a post becomes, Davies argues, the greater the likelihood it'll be misinterpreted, furthering outrage.

He argues, "the pursuit of attention is fundamentally at odds with the pursuit of mutual understanding". In this way, on Twitter, he says, "misunderstanding and misrepresentation becomes the normal mode of social exchange, [00:16:00] making discourse feel like violence". Speaking of violence, after being blitzed with Tweets ranging from valid and coherent to, you know, death threats, the next step of the Twitter hot take cycle happens. The double down.

It's common to see folks with the worst possible takes refuse to yield one inch. You could attribute this to a psychological phenomenon called belief perseverance, or conceptual conservatism. According to a Stanford study, "beliefs are remarkably resilient in the face of empirical challenges that seem logically devastating". And in fact, beliefs can actually grow stronger when confronted with opposing information. Scholar Leah Savion argues that "belief perseverance, clinging to explicitly discredited beliefs, is ubiquitous to the point of serving as the ultimate evidence of the feebleness of our mind". And if empirical evidence can do little to persuade us to change our minds, angry Tweets are even less likely to.

The double down is the ultimate proof [00:17:00] that all the angry discourse has done little to change anybody's opinion. And then the next day this whole pattern repeats itself. Thus, as Davies argues, Twitter is a machine for increasing the overall levels of anger in the world. And yet just every day I, I get up and I, and I go into that world and I treat it like it's real. And it's, it's been destroying my brain. Who would I be without Twitter? I wanna meet that guy.

Using AI to Say the Word - This Machine Kills - Air Date 9-2-22

EDWARD ONGWESO JR. - HOST, THIS MACHINE KILLS: We're talking about this startup --

JATHAN SADOWSKI - HOST, THIS MACHINE KILLS: Sanas. S A N A S.

EDWARD ONGWESO JR. - HOST, THIS MACHINE KILLS: -- which offers an amazing product, as Jathan was alluding to. It allows you to turn your voice into a different -- it translates your accent. And right now that means to a white American-sounding accent. But they claim it's gonna be any accent to any accent. But right now it's a white American accent. And, um, one thing I really loved in writing about it -- I wrote about it after SFGate wrote about it, and then the Guardian wrote about it, and everyone was trying to get their own, like, juicy quotes from the founders -- because they're just, like, unperturbed and [00:18:00] undisturbed about the idea that their thing could do anything wrong. They don't think that there's anything to worry about. They don't think that there's a problem. I remember in the Guardian, they had a quote where the guy was like, yes, this is wrong, but a lot of things exist in the world.

JATHAN SADOWSKI - HOST, THIS MACHINE KILLS: Why? [laughs]

EDWARD ONGWESO JR. - HOST, THIS MACHINE KILLS: The full quote was like, "Yes, this is wrong, and we should not have existed at all. But a lot of things exist in the world. Like why does makeup exist? Why can't people accept the way they are? Is it wrong, the way the world is? Absolutely. But do we then let agents suffer? I build this technology for the agents because I don't want him or her to go through what I went through."

JEREME BROWN: What, calling tech support and getting someone with an Indian accent and getting frustrated because he couldn't understand them?

EDWARD ONGWESO JR. - HOST, THIS MACHINE KILLS: Yeah. I mean, that's really what it comes down to.

JATHAN SADOWSKI - HOST, THIS MACHINE KILLS: Yes.

JEREME BROWN: Was the whole impetus for him creating this?

EDWARD ONGWESO JR. - HOST, THIS MACHINE KILLS: This? Yeah, it was created because he was like, look, like people were incredibly racist to me when I was on the phone and it was because I didn't sound like [00:19:00] them. So I created this thing that makes you sound like them. And now you won't get discriminated against, and now they'll be nice to you. But it's like, Hmm, there's a few issues with this theory, right?

The surveillance that the workers go through. The intense analytics that enforce higher and higher performance standards, right? Like the idea that, okay, we already put you under crushing pressure and surveillance. Now we can turn your voice white. We're gonna make you have even higher productivity standards. Like the idea that's not gonna be present? When I talked with them, they were resistant to that idea and it's kind of wild. It's like call center workers are there because they're a first line of defense for angry customers, right? The way that the work works, it's kind of structured similar to content moderation. It's similar to all the sort of invisible labor that powers digital systems. It's the most traumatic stuff, the most difficult stuff, the most important stuff often. And the idea that you can change all of this by making workers sound [00:20:00] white, that just like reinforces the racism and does nothing to address it. So what's the point?

JATHAN SADOWSKI - HOST, THIS MACHINE KILLS: Yeah. I mean, there's so much to get into here because it's such a funny archetype of such an old, 10-years-ago startup, right? And like startup mentality. I mean, it's classic Morozov-style technological solutionism -- but they're actually identifying a real problem in the world. Yes, there is racism, and people that work in call centers are subjected to all kinds of verbal abuse and actual racism and things from people that they work with, their bosses. But also of course, people calling in, right? It's true, you know, it's really awful. And it also leads to these really racist, but plausibly deniable, kinds of claims around "our call centers are people that are not on the other side of the world" -- like companies will do that -- "we employ people right here, at home," which is all just very racist and [00:21:00] nationalist. Yes.

So they've identified a real problem, but their solution to it is so, so not a solution. So absurd. It's boss tech, right? That's what they've given and dressed up in the language of worker empowerment. It's really quite absurd.

But also on the other hand, the founders are such funny guys. I can't tell if they are just extremely clever at marketing. All these people are writing articles about it: you, SFGate, the Guardian, whatever. They definitely went through a cycle of being tweeted about and stuff like that. I jumped in on that, and I saw it because other people were tweeting and dunking on it. And on one hand, you can't pay for this kind of publicity; giving actually wild-ass quotes on the record to different publications is a really smart way to get quoted a lot in a piece about the technology.

So I can't tell if they're just really, really clever at marketing their thing, or if they are extremely [00:22:00] naive, like actual true believers. It might be a little bit of both. But you quoted from the Guardian the founder being like, yes, this is wrong and our company shouldn't exist, but we shouldn't have makeup either, right? Until we can change the beauty standards, we gotta do what we can to fit into the world; until we can change structural racism, we gotta do what we can to make people's lives better. That contrasts very hilariously and directly with what he said in the SFGate article, where the founder of the company was quoted saying, quote, "We don't foresee anything bad coming out of this. In fact, I'll take the opposite approach and just say, this is a GDP-shifting product. This will bring millions of jobs to the Philippines, millions of jobs to India, millions of jobs to places that otherwise wouldn't be allowed to enter that conversation."

I mean, I do wish more tech founders would explicitly come out and be like, we [00:23:00] do not foresee anything bad coming out of this. It'd be way easier to make them eat those words later.

EDWARD ONGWESO JR. - HOST, THIS MACHINE KILLS: In one of the articles, they said, look, this is a GDP growth engine. This is gonna create so many jobs and enrich so many people, and then proceeded to give an argument that sounded like what some people might say in defense of sweatshops, right? Look, somebody's gotta do the work, and better them than anyone else.

And it's also interesting the way that they wanna scale it up. They talked about it in the interviews, they wanna scale it up to business-to-business enterprises, they wanna scale it up to healthcare. They wanna expand it to more places where people's accents get in the way, legitimately do get in the way, but also again, because of racism, and have no desire to affect what causes the racism.

What's the point? Someone might say, well, what do you wanna start up to solve racism? No, I want a startup that doesn't affirm it. If we're gonna suffer startups, maybe don't [00:24:00] make one where you're basically going, "Look, I get it. Sometimes when you listen to a person with an Indian voice, your blood boils. Don't worry, we have a product for you. It's gonna make them sound just like you. And then you can yell at them for other perfectly legitimate reasons."

Bonus – Addressing the TikTok Threat - Your Undivided Attention - Air Date 9-8-22

TRISTAN HARRIS - HOST, YOUR UNDIVIDED ATTENTION: If you didn't know, TikTok recently surpassed Google and Facebook as the most popular site on the internet in 2021, and is expected to reach more than 1.8 billion users by the end of 2022. So imagine the analogy: that the US didn't just allow the Soviet Union to run 13 hours a day of children's TV programming in the US, but we allowed the Soviet Union to run 1 billion TV sets in the entire Western world, except they had an artificial intelligence that could perfectly tune what propaganda each person in the US or Western world, across a billion TV sets, would see.

Now before we go any further, we should make very clear: TikTok is not run by China. TikTok is the flagship app of a company called ByteDance, [00:25:00] which is headquartered in China. So ByteDance and China are two distinct entities with different motives, but sometimes those motives come into conflict. And the Chinese government does sometimes force its tech companies' hands. The CEOs of Chinese tech companies have notoriously been abducted on several occasions. So the Chinese government does not control TikTok, but it has massive influence over it.

Now, congressional activity against TikTok is picking up. Recently the commissioner of the Federal Communications Commission, Brendan Carr, wrote a public letter to Apple and Google asking them to remove TikTok from their app stores, citing a recent BuzzFeed News report that Chinese ByteDance staff had accessed US TikTok user data on multiple occasions. And then last month, in July, in a more powerful move, bipartisan leaders on the Senate Intelligence Committee asked the Federal Trade Commission to investigate TikTok's data practices and corporate governance over concerns that they pose privacy and security risks for Americans. The request was signed by Senators [00:26:00] Mark Warner and Marco Rubio.

Meanwhile, TikTok is starting to go on the defensive. For example, it recently announced its commitment to election integrity, and that it's creating an election center to be a hub for authoritative election information. So congressional activity is picking up, and TikTok's response is also picking up.

AZA RASKIN - HOST, YOUR UNDIVIDED ATTENTION: So Tristan, let's talk about: what are the harms? I think the two obvious ones are, of course, surveillance and data gathering, and that was the target of the recent Biden executive order on protecting Americans' sensitive data from foreign adversaries. Just so listeners know what kind of surveillance we're talking about, there was a very alarming revelation in August by security and privacy researcher Felix Krause. What he discovered is that TikTok is running code that tracks and captures every single keystroke when you're using their in-app browser. So that means any search term, your password, credit card information -- it's all [00:27:00] being tracked by TikTok when you're using the browser built into the app.

Now, TikTok admits it has this code, but says it's using it for debugging and troubleshooting, which is sort of like when a CEO says that they're resigning to spend more time with their family. They say they're not tracking users' online habits, but here's the question: how do we ever know? Do you want to talk about the other ones?
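[To make the mechanism above concrete, here is a minimal, hypothetical JavaScript sketch of the pattern Krause described -- not TikTok's actual code. An in-app browser controls the pages it loads, so the host app can inject a script that subscribes to every key event and forwards the keystrokes back to the native layer. The fake document object below just stands in for a real page so the example is self-contained.]

```javascript
// Simulated stand-in for a web page's document (no real DOM needed here).
function makeFakeDocument() {
  const listeners = {};
  return {
    addEventListener(type, fn) {
      listeners[type] = listeners[type] || [];
      listeners[type].push(fn);
    },
    // Simulate the browser firing an event (e.g., the user pressing a key).
    dispatch(type, event) {
      (listeners[type] || []).forEach((fn) => fn(event));
    },
  };
}

// What an injected tracker could look like: capture each keystroke and hand
// it to a callback standing in for the host app's native message bridge.
function installKeyLogger(doc, sendToApp) {
  doc.addEventListener("keydown", (e) => sendToApp({ key: e.key }));
}

// Demo: simulate a user typing a password into the in-app browser.
const doc = makeFakeDocument();
const captured = [];
installKeyLogger(doc, (msg) => captured.push(msg.key));
for (const key of ["h", "u", "n", "t", "e", "r", "2"]) {
  doc.dispatch("keydown", { key });
}
// Every keystroke typed into the page is now visible to the host app.
console.log(captured.join(""));
```

[The point of the sketch is Aza's question exactly: once the app owns the browser, nothing in the page's own behavior reveals whether such a listener is installed, so users can't verify the "debugging only" claim from the outside.]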

TRISTAN HARRIS - HOST, YOUR UNDIVIDED ATTENTION: So I think a lot of people look at TikTok, and the US government has basically said, "Let's focus our attention on the data that it gathers on US citizens." It's all about the data. What if they know a user's location? What if they know the location you're accessing the app from and they can figure out your address? What if they know the videos or times of day that you post? What if they know which videos you're browsing late at night? These are the kinds of things that get our concern, but I actually think the TikTok threat is so much bigger than that, because I can actually manipulate, per person, the information that rises to the top in everyone's newsfeeds.

Now, we've actually seen this before. In 2014, it was [00:28:00] exposed that Facebook did experiments where its users were shown happier or sadder content, and then it found that it actually shifted the content that those users shared. And TikTok could do the same thing, but instead of happier or sadder content, it could actually shift to pro-China content or anti-Taiwan content in the event that they were to, say, start a war with Taiwan.

Think about it this way. We saw that Russia invaded Ukraine, and when they did that, while they had propaganda channels online like Sputnik and RT, Russia Today, those were discrete propaganda channels. RT and Sputnik didn't influence all of Facebook, all of Twitter, all of YouTube, all of Instagram, all of the platforms, to shape what people thought. I mean, Putin didn't influence all those platforms. But if China were to be invading Taiwan tomorrow, they could take the most popular information app in the world, called TikTok, and selectively amplify Western voices who said, "Well, Taiwan was always a part of China. There's really no problem here. Look at all the things that the US did and all these wars that didn't go anywhere." And they wouldn't necessarily be wrong in some of the things they'd be calling [00:29:00] out, but they would be engaging not in propaganda but in what our friend Renée DiResta calls ampliganda, or what we sometimes call amplifaganda, which is the ability to selectively amplify and influence people's attitudes by focusing their attention on the things that you want them to focus on, like a magician.

And when you just think about the amount of power and control, especially because Taiwan, for those who are not as aware, is home to TSMC, the Taiwan Semiconductor Manufacturing Company, which makes basically all the chips that are in every single product: cars, televisions, microphones, computers, cell phones. If China were to invade Taiwan and take over the semiconductor manufacturer for the whole world, this would be a massive, massive problem, and this is the kind of thing on which China could influence people's opinion.

Now, we've also talked on this podcast about the ability to influence and manipulate language. We talked about polling. We had Republican political pollster Frank Luntz on this program, and Frank Luntz is famous for doing dial testing. You can test people's sentiments on various topics. So if I [00:30:00] say "the Affordable Care Act" versus if I call it "Obamacare," I can get different reactions out of people. And he did that in a room where he would actually say the words and then watch what people's responses were. Well, if I'm TikTok, I can do dial testing at scale. I can do that in every voting district in my number one geopolitical adversary's country, and I can actually see: what do they think about various topics? Which way is it trending? I can focus my attention on the swing states. I could do more dial testing than Frank Luntz could have ever dreamed of, and if I do that at scale and I can see how things are trending, and then I selectively amplify what people are seeing, I can turn up and down the dials and potentially choose the next president of the United States.
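Mechanically, "dial testing at scale" is just an automated A/B test run inside a ranking system: serve competing framings of the same topic, measure engagement, and amplify whichever framing wins. A minimal sketch in Python, with the framings and engagement probabilities invented purely for illustration:

```python
import random

random.seed(0)  # deterministic for the example

# Two hypothetical framings of the same policy, with invented
# per-impression engagement probabilities for a given audience.
framings = {
    "the Affordable Care Act": 0.62,
    "Obamacare": 0.48,
}

def run_dial_test(framings, impressions=10_000):
    """Serve each framing `impressions` times, record the observed
    engagement rate, and return framings sorted best-first."""
    observed = {}
    for text, true_rate in framings.items():
        engaged = sum(random.random() < true_rate for _ in range(impressions))
        observed[text] = engaged / impressions
    return sorted(observed.items(), key=lambda kv: kv[1], reverse=True)

ranked = run_dial_test(framings)
winner, rate = ranked[0]
print(winner)  # the framing a feed would then selectively amplify
```

Run per region or demographic, the same loop becomes the district-by-district sentiment map Harris describes; the only ingredients are impressions and engagement telemetry, which any large feed already collects.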

Now, a lot of this might sound like a conspiracy theory, or xenophobic, or arbitrarily picking out China when lots of other countries are doing various things, but I think we actually have to look at the nature of this threat. Now, when we looked earlier at Huawei, for those who don't know, Huawei built the cell phone infrastructure [00:31:00] for 5G networks. So they were actually building out 5G cell towers all across the world, and Huawei was found to have backdoors to the Chinese government. And within the last couple years, India has banned about 200 Chinese apps, because they accurately assessed the threat, given that India is actually involved in a rivalry with China. So they banned apps like WeChat, UC Browser, SHAREit, Baidu Map. And up to a third of TikTok's global users, up until that time, were actually based in India. So this was a big move.

Now, granted, the Modi government may have ulterior motives here as well. It can use national security as an excuse to ban various apps and even Twitter posts, and the Indian Supreme Court is reviewing many of these cases because the national security threat hasn't been made clear. Still, we do see the Indian government taking action against Chinese apps. So this has been done before. We did it with Huawei. We've done it in India. Why wouldn't we do it with TikTok?

ASA RASKIN - HOST, YOUR UNDIVIDED ATTENTION: In the same way that [00:32:00] Huawei would enable backdoor access to all the information of our country, TikTok is sort of like cultural infrastructure. It gives you access not only to the data, but direct access to influence the minds, information, and attention of first our youth culture and then the entirety of our culture.

TRISTAN HARRIS - HOST, YOUR UNDIVIDED ATTENTION: And not to mention influencing the values of who we want to be when we grow up. We mentioned the survey of what kids in the US and Gen Z most want to be when they grow up. The number one most aspired-to career is an influencer. And in China, I think in this particular survey, it was an astronaut or a scientist. And keep in mind that inside of China domestically, they regulate TikTok to actually feature educational content. So as you're scrolling, instead of getting influencer videos and all of that, you actually get patriotism videos, science experiments you can do at home, museum exhibits, Chinese history, things like that.

And domestically for kids under the age of 14, they limit their use to 40 minutes a day. They also have [00:33:00] opening hours and closing hours so that at 10:00 PM it's lights out for the entire country. All of TikTok goes dark and no kids under 14 can use it anymore, and then at six in the morning it opens up again, because they realize that TikTok might be the opiate of the masses, and they don't want to opiate their own kids. Meanwhile, they ship the unregulated version of TikTok to the rest of the world that maximizes influencer culture and narcissism, et cetera. So it's like feeding their own population spinach while shipping opium to the rest of the world. And you could argue that's the West's fault. The West should be regulating TikTok to say, "Well, what kind of influence do we want? If we don't want an influencer culture, we should actually pass laws that feature educational material or bridge-building content that actually shows people where they agree in a democracy." But so far we're not doing those things.

Edward Ongweso Jr: Peter Thiel & The Post-Capitalism of Tech's Far Right - The Arts of Travel - Air Date 6-27-21

MATTHEW DAGHER-MARGOSIAN - HOST, THE ARTS OF TRAVEL: Knowing the long-term strategic vision, um, of someone like a Peter Thiel, who has used individuals as unlikely as a Hulk Hogan to advance his [00:34:00] agenda. When we look at something like an Uber or Lyft or a DoorDash, and we connect them to their backers who, like Thiel, and, uh, you can fill in the names of others, are neo-feudalists. They wanna return to feudalism with themselves as sort of kings. Do you see them as perhaps using these companies as a vehicle to institute these changes in capitalism? I know you write about regulatory entrepreneurship. Could you explain what that is, and then connect that to this more macabre idea I'm floating, that Uber is allowed to be unprofitable because it's not in fact a company; it's instead a spear, a weapon, that people like Thiel can use to build out a neo-feudal, uh, United States?

EDWARD ONGWESO JR. - HOST, THIS MACHINE KILLS: You know, my thesis, my senior thesis in college, was on Uber, and a huge chunk of it, you know, was an argument that Uber [00:35:00] should be understood as an accelerationist vector for capital accumulation, in that, to the investors, to the savvy investors, I guess, right? To the people who forked over the most capital, and the people who have forced the company to do something like its IPO, um, to realize a return on capital. It doesn't matter to them whether or not the company is profitable. It matters if the company's value can be realized, and there are multiple ways the value can be realized. You can realize it in the market with a literal return on your investment, and you can do that by good news, favorable financials, regulatory and legal successes; all that's nice, but the real value is in permanently changing regulations, uh, so that what was illegal for you is now legal and something that you can turn into a new line of business [00:36:00] activity. It was illegal at one point for Uber to operate in most of the cities in the United States, right? Locking it out of a potential market, locking it out of potential revenue. By changing the law, it now has access to new markets, riders, right? And by continuing to change the law, it can expand its margins. It can reduce the amount of money it has to pay workers. It can reduce the amount of money it has to cover for their health insurance. It can reduce the amount of money it has to contribute to any sort of safety net that it might otherwise have to, if they were employees. And, you know, all of that is fine and dandy for Uber, that increases its value, but the real value is that it's not simply doing this for Uber. It's doing this for other companies in an industry and a set of investments that these people are also involved in, right? 
And that Uber, like you said, ends up being the tip [00:37:00] of the spear for new attempts to realize outsized returns in an age of, you know, falling profit, in an age of near-zero interest rates where you're not getting returns on your bonds. If you want a return on investment, there are all sorts of bullshit red tape you have to deal with, things like a minimum wage and health insurance and labor laws and safety regulations, um, and, you know, limits on pollution, all this bullshit. But if you can fund, as a small investment, specific enterprises that will eviscerate the rules and the regulations in one field or another, that might reduce a barrier to profit, a barrier to realized value for an investment. And that just helps everybody else out, right? In an age of historically low return on capital, that is a good way to accumulate more. I think it is then [00:38:00] important to, uh, you know, I've written about them being regulatory entrepreneurs because of that, right? And one of the reasons why I focus on the unprofitability is hopefully that people, you know, notice that there's no market logic for this. And if that's the case, then why the hell do so many drivers have to suffer? Why do people have to starve? Why do people have to sleep in their cars? Because they're not doing it for the market. They're not doing it for your benefit. They're not doing it for my benefit. They're doing it so that a small cadre of investors are able to realize returns elsewhere. Not even in that company; if they did it in that company, that would be nice. But in reality, it's gonna help them somewhere else that they're invested in, because it all vents the gig economy. And that is immoral. It's unconscionable. It's disgusting. And I hope, I try to at least infuse the writing with the sense that, um, it is very clear what's going on here, right? 
And [00:39:00] they're not redoing it. They're never going to make a profit. Even the regulatory entrepreneurship will not yield them a profit. Prop 22 saved them from employee classification, but it introduces new costs. Now they have to pay a little bit of a stipend for health insurance; they have to contribute to a stipend so that a driver can get access to the Medicare... I mean, to the ACA plan in California, right? Um, they have to contribute to, uh, accident coverage, right? So now their costs have increased a little bit, pushing them a little bit further away from profitability, and they're gonna have to make up for that by increasing their prices, which will push some of their riders out, again pushing them further from profitability. But again, that's not really the point, right? For them the point is regulatory entrepreneurship and realizing returns elsewhere for investors, which is immoral, again. You know, why should this system be allowed to exist?

MATTHEW DAGHER-MARGOSIAN - HOST, THE ARTS OF TRAVEL: It just feels like we're living in a dream world of capitalism where you're screaming, [00:40:00] you know, "These are unprofitable! These are unprofitable!" but their stocks keep going up. Where a Rick and Morty tweet from Elon Musk can affect, you know, hundreds of millions of dollars in profit.

Break Up Monopolies: Zephyr Teachout - Future Hindsight - Air Date 8-11-22

ZEPHYR TEACHOUT: We are experiencing a transformation of work right now, and people often talk about it in terms of the gig economy, and often think about it as technologically predetermined, you know, if you are going to have technology, then we're moving to a gig economy. But I actually use this term chickenization, which I'll explain, because I think it's really important to understand that these are not techniques of technology, these are techniques of power. These are kind of old feudal techniques, old anti-democratic techniques. And although your eyes may glaze over and you may feel a little intimidated around the idea of, say, regulating the [00:41:00] gig economy, I think most people feel like, okay, I may not be a farmer, but I understand farming. So let's look at what's happening in farming, and you see a microcosm of what's happening in the workplace across the country and across the world. But I'm very much focused on the US here.

The term chickenization is actually a term that I got from the great book by Christopher Leonard, The Meat Racket. And it's a term that the pork and beef industries use to describe what is happening to them, how pork and beef are becoming chickenized, which is to say they're adopting this really terrifying business model. Here's the business model: a chicken farmer needs one thing basically, well, a few things, but one thing is essential, life or death, which is, uh, the ability to get their chickens to a grocery store so somebody will buy them. And because of really significant changes in antitrust law that happened around the 1980s, [00:42:00] the chicken farmer no longer faces a whole suite of options of different distributors which they could go to, to get their chicken to market. Instead, the industry, like so many industries, has been totally consolidated.

So there's basically three, four chicken distributors, think Tyson, Perdue, Pilgrim's, and they've divided up the country, uh, regionally. So a chicken farmer in one area of North Carolina will have to use Tyson to get their chickens to market. Well, Tyson then uses this incredible power to exercise all kinds of forms of control over the chicken farmer. They look independent, they have their own chicken house. It looks like they're a small business person, but in fact, Tyson says, Yeah, well, you can do whatever you want, but if you don't use our feed, we're not taking your chicken to market. If you don't use our eggs, we're not taking your chicken to market. If you [00:43:00] don't use our consultants, our particular specifications of how to build your chicken house, basically exercising control without taking responsibility. And so all the chicken farmers do all that. And then Tyson says, And you have to sign an arbitration contract, so if we get into a conflict later you can't sue us in open court, and we get to collect all kinds of data from your farm and spy on you, and you can't talk to your neighbors. You also have to sign a contract that seals your lips. You can't find out how much your neighbors are getting paid, and you're gonna get paid different amounts every month. So the farmer is then in a position of rational paranoia. If he or she gets paid a different amount one month, is it because they spoke out against this system? There are a lot of farmers, um, I spoke to some, and it's been widely reported, who have reported [00:44:00] retaliation when they have spoken up against their distributors. Is it because of the weather? Is it because they're subject to an experiment, maybe Tyson is giving 50 farmers one kind of feed and another 200 farmers another kind, and suddenly their profits are plummeting? They're making poverty wages, and I spoke to chicken farmers. 
And one of the things that really comes through is the level of depression and almost debilitating rage that farmers feel when you are subject to this arbitrary power but can't see through it. In fact, the suicide rates are very high.

So that's the story of what's happening in chicken farming, but you may have already heard the echoes in here. That's the story of what's happening to Uber drivers. Uber drivers also look independent, and there's these wonderful fights to correctly classify them as employees, although that won't solve all the problems, but they are [00:45:00] paid different amounts, experimented on, required to sign arbitration clauses, in this black box. There's also very high levels of depression. And this is the same posture that Amazon sellers face in relationship to Amazon. Over 2 million sellers who depend on Amazon, life or death, for their businesses, subject to Amazon's experimentation, spying, and extraction. And something that is very front of mind right now (I wrote this book before the pandemic) is the way in which restaurants, who are facing a devastating pandemic-related crisis, also have the same relationship to delivery apps, because just as a chicken farmer needs Tyson to get to market, a restaurant requires, uh, GrubHub, Seamless, the delivery apps, to stay alive. If 10, 15% of restaurant [00:46:00] revenues depend on delivery, you can't survive if one of these platforms kicks you off, and the platforms have the capacity to charge enormous rates and extract data.

So it's not just gig work. This is a feudal form of government that is spreading across all these different industries. And I first wanted to press you, but second wanna empower you, in reading this book, because the good news is, once you see this not as a technological feat, but as an old monopoly business model that keeps rearing its head every 30 or 40 years, you actually can feel a lot more power over it. 'Cause we can ban these kinds of structures, and we have in the past.

MILA ATMOS - HOST, FUTURE HINDSIGHT: Yeah, actually, I think that's really the number one takeaway: this is just an old model dressed up with "new technology" that enables the monopolists to take advantage of workers, everyday people, like they always have, or that is [00:47:00] their propensity to do. I mean, it's the kind of thing where I think the logic of businesses is to want to maximize profits rather than simply seek profit, and it's a different thing to seek profit than to maximize profits and impose this regime on us. But I think maybe a good question here is to talk about how we arrived here, because, like you said, the antitrust movement used to be really healthy and very ambitious. And then in the eighties, it stopped. What happened then? And how do we find ourselves where we are today as a result of what happened in the eighties?

ZEPHYR TEACHOUT: Yeah, it was a real transformational moment. And I think people know that when they hear about Reagan, you know, that Reagan came in with a not-so-masked White nostalgia, anti-civil rights agenda, a promise to "return America" to a 1950s [00:48:00] America. So he came in with an agenda that was very much about race. It was also very much about deregulation. And at the heart of that deregulatory agenda was antitrust. What you see when you look at the profiles of Reagan's wrecking crew that he brought in from California is that they talked about antitrust. This wasn't some side issue. The agenda was: do something about these terrible civil rights laws, and gut antitrust. And actually, on the flip side, you'd see senators like Senator Phil Hart, one of the key architects of the Voting Rights Act of '65, who had two passions: antitrust and civil rights. And he saw them as deeply connected.

So Reagan and Reagan's team also saw them as deeply connected, and came in, appointed hundreds of judges, put in regulators who didn't believe in the regulation. But it wasn't just non-enforcement. It was actually an ideological transformation, and it shows the [00:49:00] power of ideas, which is both very dangerous and also, again, hopeful. 'Cause when you believe in the power of ideas, you know that things can change, even if power looks pretty stuck right now. So the new idea that Reagan brought in is an idea that was popularized by Bork, the Supreme Court nominee who didn't make it, not the only one, but Bork had really pushed the idea that antitrust laws were, uh, tools of abuse, and the only real purpose of antitrust laws was to protect consumer prices. And what Bork was taking on is a much longer tradition, a tradition that goes back not just to the Sherman Act of the late 19th century, but actually to the founding of our country and before, where you see, uh, seeds in corporate law of concerns about excess corporate power becoming a form of government.

The old understanding, pre-1980, is that you need strong antitrust [00:50:00] laws as a democracy protection. And this is actually how I come in. I'm a democracy activist. I write about corruption in my scholarship. I've written about structures to protect democracy. And before 1980, we widely understood that antitrust was important the way that campaign finance was important, that if you wanna protect democracy you need strong antitrust laws. And that's something that Bork and Reagan's team totally rejected.

Now that was a pretty terrible little era there, but you might think, okay, well, when Democrats got back in charge, Democrats in the opposition party would constantly be raising this issue and fighting to break up big companies, fighting to overturn bad court decisions when you saw Reagan judges making bad decisions. But no. Instead we actually saw Clinton, Bush (not as surprisingly), Obama, leadership in the Democratic Party, basically ignoring antitrust as a serious area of concern. By [00:51:00] the way, people understand this: we've done polling, and there's overwhelming support for more antitrust enforcement, for trust busting 2.0, for anti-monopoly work. People hate corporate monopolies. And it's actually one of those areas where the people are way ahead of politicians in understanding the power structures in this country.

The Digital Self, Web3 and reclaiming your online identity - Future Tense - Air Date 7-23-22

PHIL REED: When we're in the real world, we offer a version of ourselves which fits into a particular context. All of these ways of being are dependent on the feedback that we get from others. So we try to fit in, we try to be part of a social group, to be cohesive, or most people try that, and in part that's shaped up by the others around us.

In the digital world, there's much less direct shaping by others of our own behavior. In the digital world, we can be however we want to be. [00:52:00] And people may take exception to that, but the feedback we get is somewhat more remote. So social feedback in the digital world doesn't tend to have that immediacy, that very strong impact on people's behavior, and that's one of the big differences that you find between real world behavior and digital world behavior.

ANTHONY FUNNELL - HOST, FUTURE TENSE: Does that mean that shaping an online identity, the construction involved in that is easier than shaping your identity, your persona in the physical world?

PHIL REED: It is easier to the extent that other people aren't dismissing you. So if you want to change the way you are, then, in the real world, what happens is that all of the people who know you and know you well will probably quite rightly start giving you funny looks. If you start behaving very, very differently to the way that you have up to that point, they'll say, "Are you just putting [00:53:00] it on? Why are you behaving like this? This isn't you." In the digital world, it's harder for people to do that, and when they do do it, it has much less immediate impact on the person creating this persona.

So in some ways it is easier for somebody just to reinvent themselves digitally than it is in the real world. Of course it's limited by, I suppose, the extent of their imagination, their previous experiences, what they're carrying with them, but other people have less of a role.

ANTHONY FUNNELL - HOST, FUTURE TENSE: And that can be both good and bad, says Professor Reed.

PHIL REED: There's a good reason why we're sensitive to social feedback. In some ways I don't wanna sound like JLo, but it keeps us real. It keeps us almost healthy to have reality checks from other people. If it's possible to spiral off into a persona which [00:54:00] is completely divorced from reality and actually completely divorced from yourself, then that person can actually experience some psychological problems, a disconnect between the real self that they have and this digital self.

ANTHONY FUNNELL - HOST, FUTURE TENSE: Now one focus of Phil Reed's research has been on gender stereotypes and how the traditional notions of masculine and feminine behavior and demeanor, well, how they play out in the online environment.

PHIL REED: We would think, in the real world, that self-presentation certainly used to follow fairly gender-stereotypical roles. So males tended to be a little bit more aggressive in their presentation of themselves. Females tended to be a little bit more appeasing in their presentation. In the digital world we don't really [00:55:00] see that. If anything, the research is suggesting that females tend to be somewhat more aggressive, certainly, than they are in the real world, and increasingly as aggressive, if not more aggressive, online than males. So I think what we're seeing is that all of those theories we had about males and females, the way they are and the way they may have evolved, are probably not true.

What we're seeing is an adaptation to the current environment, so that when a lot of the societal constraints are removed, males and females are acting in non-gender-stereotypical ways. We're seeing, in the digital world, the power of the environment to shape up behaviors. It's much more contextually driven, much more driven by what you can do, what you're allowed to get [00:56:00] away with (and that can be good and bad), rather than by any inherent limitations of the person.

ANTHONY FUNNELL - HOST, FUTURE TENSE: And is there anything to suggest that our digital persona has an influence on our physical persona or at least one of our personas that we adopt in the physical world?

PHIL REED: The evidence is very sparse about the impacts of digital persona in the real world. I think when people are not true to the way they really are, and they try too hard to present a false version of themselves, then that can be very damaging to them.

We're all using digital technology much more than we used to, and in part that's the way the world has gone, and in part that's been forced upon us by the last two years of lockdown and pandemic, although that's brought some changes, perhaps for the better, in the way that we've used [00:57:00] digital technology.

We are one person, whether we're online or in the real world. We have a physical being. We have a system which is more or less healthy, and that doesn't change online or in the real world. So there are things that we carry between these two worlds. What we don't want is to be very, very different across different contexts. That way we start to get mental health issues. We start to get almost a form of a multiple personality, which is never enormously healthy for an individual.

So we are one person, we've got to remember that. We won't change magically when we go online. We can't be somebody different. We are who we are. The harder we have to try, the more strain that places on our being, our self, and the more likely we are to experience mental health [00:58:00] issues.

Refusing the Everyday Fascism of Artificial Intelligence (ft. Dan McQuillan) - This Machine Kills - Air Date 8-25-22

JATHAN SADOWSKI - HOST, THIS MACHINE KILLS: You start off right from the beginning saying that these kinds of superintelligent AI apocalypses are not actually the answer to the "what resistance, what AI, what harms?" questions. So, let's start there.

DAN MCQUILLAN: I do think those things are an incredible diversion and actually, as I expound on a bit in the book, extremely motivated, motivated by an unconscious allegiance, perhaps, sometimes, to some very old ideas about the superiority of intelligences that run across the whole history of colonialism. So really that stuff is very toxic in its own right, from its very roots. My interest was not engaging in any of that or in any of the sci-fi debates, because my political understanding would be that, actually, a lot of the political harm is happening right now, in the intensification and amplification of systems that are already there, that are already imposed on the people with the least capacity to fight back most of the time. In the bureaucracies, in, as [00:59:00] you would say, the housing projects, in the areas where minorities live in our cities, whatever it is.

There are already many systems which are quite excruciating in their latent cruelty, and adding the capacity of AI to these systems really amps up a lot of the tendencies which are already present, both technical, institutional, and to some extent psychological, that allow these things to happen in the first place, and, to my mind, makes them even more weaponized, more dangerous. That has its origins, I suppose, in my commitment to a sort of 1970s community politics, really. This is the area of primary concern, because this is the actual frontline of society, and I see AI having a real effect there right now, nothing to do with some projections of a sci-fi future.

JATHAN SADOWSKI - HOST, THIS MACHINE KILLS: That idea of an acceleration and amplification of already existing things, it's definitely a theme and an analytical approach that runs through my work as well. I think it's a really powerful antidote, not only to the discourse by the [01:00:00] boosters and advocates in Silicon Valley, or the people like the Bostroms and the William MacAskills, or even Elon Musk, who want to be doomsayers about a very specific kind of thing, but even to critics. This is what Lee Vinsel calls criti-hype, where it gets really easy to pay a lot of attention to the really fantastical stuff at the expense of the mundane. And not mundane because it doesn't matter, but mundane because it's things that already exist; it's normal, like we don't need to pay attention to that.

You spend a fair amount of time in the book, I think rightfully, arguing that this is where the real politics, but also the real material consequences and computational operations, of AI are happening. Let's get into that a little bit. How is AI a political technology? What kind of politics is it, and what kind of technology is it?

DAN MCQUILLAN: Sure. Just to finish off on the point you were making, I suppose one of my starting points would be [01:01:00] the kind of worries you're talking about, and also the more everyday statements about AI, which I find almost equally irritating. These are the ones, which I'm sure you're over-familiar with, where a critique of AI will start off by saying, "Well, AI, of course, has many things to offer to healthcare/insurance/driving, whatever it is, you know, but here are the few problems I've identified."

All of these critiques really are based on an assumed platform of normality, stability, and liberal order: the assumption that that's what we are living in, and that what we should be concerned about are perturbations of those things. If you have a perspective, as I do, that, actually, the world we're living in right now is extremely disordered and dangerous and damaging to far too many people, you would start from a different perspective. But I also start from the perspective of somebody who works in computer science, in a computing department, which of course is an advantage to some extent. And one of my other concerns, or, let's put it more positively, one of the other things I was trying to address, was this: [01:02:00] I was reading a lot of early critiques of machine learning and AI by people who I really like and respect, who are social scientists or even journalists, and they're getting a lot of things right and, I think, hitting a lot of things on the head, but sometimes veering completely off the mark, simply because there wasn't any real familiarity, let's say, with how this stuff actually operates at a granular level, what it really does inside.

My interest, my method, I suppose, is to try to read across the levels simultaneously. Whether you think of it as a stack or a set of resonances or whatever, my interest and my understanding of the historical, material, political effects of things is when there's some kind of synergy or some kind of resonance between the affordances of a tool and the political or social situation it's occurring in.

So, that's a very long-winded way of saying what I'm looking for. I look at AI and say, "what's this thing actually doing inside? What is machine learning actually doing? What are the operations of minimizing a loss function? What is backpropagation actually doing?", and then [01:03:00] trying to read upwards from that, if you like, and saying, what is that likely to deterministically amplify? Under the circumstances where we are looking at the particular distribution of power and decision making that already exists in society, what would this way of doing things exaggerate?

So, my understanding of AI at a very basic level is actually pretty simplistic. I mean, it's extremely complicated, it's mathematically sophisticated, it's computationally intensive, but essentially it's a statistical approximation of a function. It's taking a lot of input data, it's taking some exemplary output, and it's asking a computer to match a function. And that's great -- it's all very clever, and I actually quite like technology, I'm a bit nerdy, so, you know, that's all very interesting. But looking at the nuts and bolts of how that works, for me, means also looking at the resonances and the consequences, the likely knock-on effects, of putting that into operation in the world.

It's one thing studying it as a [01:04:00] form of computer science, but this stuff doesn't stay in the box. It connects outwards and has unintended, perhaps, knock-on effects in the real world. So if we're talking about ranking -- ranking and ordering things are abstract mathematical concepts, but they're also very real, subjective social experiences that can have very real material consequences. Or decision boundaries: a decision boundary -- are you in or are you out? Are you one of us or are you one of them? A decision boundary can be a very abstract mathematical concept, but the same kind of thing applied in the real world carries whatever authority it does carry. That's one of the things about these kinds of methods: what kind of authority do they carry? If they gain authority as ways of knowing and ways of doing, then those things will gain a life of their own.

So that's my interest in the detail of AI: looking at, okay, what is it really doing as a material technology? What is it doing on a data level? What's it doing on an algorithmic level? And what is that likely to act as a resonance chamber for when it gets into the [01:05:00] world?

Final comments on how to fix the internet for yourself

JAY TOMLINSON - HOST, BEST OF THE LEFT: We've just heard clips today, starting with The Majority Report looking at digital platforms that are defining life online. Wisecrack dove into how anger works and how it's monetized online. This Machine Kills critiqued a system designed to change a person's accent as a workaround for racism. Your Undivided Attention looked at the privacy propaganda and national security threats of TikTok. The Arts of Travel explained the neo-feudal business models of the gig economy. And Future Hindsight spoke with Zephyr Teachout about the need to reregulate and break up tech giants.

That's what everyone heard, but members also heard bonus clips from Future Tense looking at both the benefits and dangers of being able to experiment with new identities in digital spaces. And This Machine Kills compared the danger of an AI-fueled robot war to the harms being created by advanced technology right [01:06:00] now.

To hear that and have all of our bonus content delivered seamlessly to the new members-only podcast feed that you'll receive, sign up to support the show at BestoftheLeft.com/support, or shoot me an email requesting a financial hardship membership, because we don't let a lack of funds stand in the way of hearing more information.

And now to wrap up, as I said, I have some thoughts on how to use the internet in a way that makes it much less frustrating and terrible for your mental health and otherwise. The first thing to note and sort of understand about the modern internet is that it's literally addictive, literally -- and so it needs to be treated as such. It's like a drug or alcohol abuse scenario.

So if you feel the negative effects of internet addiction, social media addiction, anything along those lines, even a little bit -- maybe you don't have an extreme case, maybe you have a mild case -- but it's hard to find a person these days who doesn't [01:07:00] have a case of internet addiction of some form. So I pulled about half a dozen listicles on how to break social media addiction, and then pulled together all the best pieces of advice from all those different articles.

So I'm gonna go down a quick list. Save this section of the podcast to listen to later and make all these changes in your digital world.

So the first is to use any features of your operating system that allow you to set up things like app time limits and periods of downtime.

And now, look, we all know that if you are an adult and you're setting your own time limits, then you can just work around that. You can put in the passcode and reactivate it. Sure, yes, we know. But the strategy is about creating speed bumps for yourself. You know, you're not gonna put your phone in a locked safe that you don't have the code for, right? But if you create [01:08:00] enough speed bumps that make it just a little bit less likely in each instance that you're going to gravitate towards the part of the internet you're addicted to, over time you will likely see impacts. So using those operating system features is just one part of that strategy.

Another one people recommend all the time: delete the apps. That doesn't mean delete your account. I mean, sure, delete your account if you can pull that off, please go ahead. Otherwise you can delete the apps, which just makes it a little bit less convenient. You can still go to Facebook.com or Twitter.com or whatever, but if the apps aren't there, it's a little bit less convenient.

But for some people that is not an option, it's literally not an option. You need those apps for something, often for work or whatever. So another option is just move them off your home screen so that you have to find them intentionally [01:09:00] rather than tapping on them mindlessly, because they're staring you in the face on your home screen.

Another good one is to turn off all notifications from anything that's not a human. That was sort of the baseline recommendation up until about a year ago, but I've really been liking the scheduled delivery option that Apple implemented about a year ago, which I use for all my non-human notifications that I still want to receive. Like, you have a library book ready to check out -- I get a push notification for that, but it doesn't come to me randomly in the middle of the day; it comes in the scheduled, collected group of notifications.

Now within a given app or platform, Facebook, Twitter, anywhere where you can follow or subscribe to content creators or outlets, obviously a good piece of advice is to unfollow and unsubscribe [01:10:00] from any sources that don't genuinely add value to your life. That way when you're on those platforms, the average quality of the things you see will end up being much higher.

Another one that gets recommended a lot, but that people have a lot of trouble with, is keeping the phone out of the bedroom, so that it's not the last thing you're looking at at night, nor the first thing you're looking at in the morning. People for obvious reasons have a lot of trouble with this one, usually because they use their phone as their alarm clock. Of course you can kick it old school and use a physical alarm clock. My favorite thing of the technology world in the last several years was the realization that, if you have some sort of wrist-based device -- a smartwatch or a Fitbit or whatever -- you can use that as an alarm clock, and personally, I find it to be great. I've always hated audible alarm clocks, you know, loud noises or the radio or whatever clicking on in the morning to wake me up. Not [01:11:00] to mention, if someone sleeping in the same room as you doesn't need to get up at the same time, having a silent alarm that just taps you on the wrist was a game changer. I really enjoy that feature; for the instances when I need an alarm, I 100% go with a watch instead of an audio alarm.

This next one is super customizable to whatever your personal interests are, but I appreciated that one of those listicles included it, because it recognized that social media use usually comes from triggers: we're bored or anxious, or something in our lives makes us fidget and reach for a phone to look at. And you should understand that even if you are trying to get away from internet usage or social media, those trigger events are still gonna be there. We're still gonna get bored. We're still gonna get [01:12:00] anxious. And so, replacing the phone or the internet with something else -- maybe drawing or writing or sending a text message to a friend or something along those lines -- instead of getting sucked down an algorithmically-driven rabbit hole is a way to scratch that itch, recognizing that we need to scratch that itch with something, but doing it in a better, healthier way.

And then along those lines, this was a totally unique piece of advice that I had not seen anywhere else before. This section of the listicle says to think about why you'd like to be on social media. Literally ask yourself that question. And it goes on to explain that everything we do in life is about intention. So why do you want to do something? What will it bring you? And this is all about understanding the mental processes behind addiction. [01:13:00]

And it reminded me of the "five whys" technique. You start by asking a question about a problem, such as, "Why do you use social media too much?", and you answer that question to yourself. Then you ask and answer the question "Why?" five times, going deeper and deeper with each asking of the question, getting to a deeper level of understanding of why this thing is happening -- why am I doing what I'm doing? It's a great mental exercise just to get to deeper roots of problems than we usually do.

But then finally, sometimes you need to fight fire with fire, in which case there are apps and browser extensions that can be installed to help manually enforce limited internet and social media use. For Chrome, I think the [01:14:00] News Feed Eradicator extension is probably the most popular. It works on a bunch of sites -- Facebook and Twitter and YouTube and Reddit -- and is, I think, customizable to some degree. But basically it makes it so that when you go to those sites, you can still use them by searching. You can search for what you want to find, but you won't be fed automatic information through an algorithm. And if you want to be, you can turn it off and set a timer for "give me one minute, five minutes," however long, to just use the site as normal.

So that's News Feed Eradicator for Chrome, and Safari has similar stuff. There's one called Be Timeful and another called Intentional Blocker, and those are both all-in-one social web blockers with built-in timers. Just for YouTube, I think there's a free one called Focus on Safari. And there's a bunch of different ones on Chrome -- I didn't test them all, but [01:15:00] you can search around and find stuff to, for instance, turn off the recommendations on YouTube, so that you can watch the videos that you intentionally want to watch and not get sucked down the rabbit hole of recommendations.

And then the last bit from any of those listicles was a piece of advice that I've never had the chance to try, but they suggested trying accountability apps, and these are to help build habits -- not having necessarily to do with internet use, just building habits, period, any kind of habit you wanna build. So there are these accountability apps. One article literally just said, do a search for accountability apps and look around at the options. And as I said, I haven't had time to test them, so I don't have recommendations, but the advice is out there and there are apps built to help instill new habits by working with psychology to trick you into doing the thing you actively want to do anyway. So those are out there.

And then absolutely [01:16:00] lastly -- and this is separate from social media and internet addiction; this is just because the internet is so full of garbage that I find it nearly unusable these days -- there are browser extensions that do great things and have improved my life. There are extensions that will automatically reject the cookie requests that every single website pops up asking for your permission, right? So you can automatically reject those, or accept them if you want, but have that be done automatically by an extension so you don't have to do it yourself manually. You can block all kinds of ads -- I mean, this is not new, ad blockers are not new -- even including YouTube pre-roll and mid-roll ads. And then another one is activating dark mode. Operating systems have dark mode built into them, but when you're looking at a browser, that generally applies just to the frame of the browser. So there are dark mode extensions that will actually convert the websites themselves, so [01:17:00] that the black text on a white background inverts, and that cuts way, way down on the bright light streaming into your face after sunset.

So as I said, all of those extensions have very much improved my internet experience, and I'm sure that you'd be able to find versions for yourself based on your own particular setup, if any of that appeals to you.

As always, keep the comments coming in at 202-999-3991, or by emailing me at [email protected].

That is gonna be it for today. Thanks to everyone for listening. Thanks to Deon Clark and Erin Clayton for their research work for the show and participation in our bonus episodes. Thanks to the Monosyllabic Transcriptionist Trio, Ben, Ken, and Brian for their volunteer work, helping put our transcripts together. And thanks to Amanda Hoffman for all of her work on our social media outlets, activism segments, graphic designing, web mastering, and bonus show cohosting. And thanks to those who support the show by becoming a member or purchasing gift [01:18:00] memberships at BestoftheLeft.com/support, through our Patreon page, or from right inside the Apple podcast app. Membership is how you get instant access to our incredibly good bonus episodes, in addition to there being extra content and no ads in all of our regular episodes, all through your regular podcast player. And if you wanna continue the discussion, join our Best of the Left Discord community to discuss the show or the news or anything else you like; links to join are in the show notes.

So coming to you from far outside the conventional wisdom of Washington, DC, my name is Jay!, and this has been the Best of the Left podcast coming to you twice weekly, thanks entirely to the members and donors to the show from BestoftheLeft.com.

Sign up for activism updates