#1534 Every Move You Make (Facial Recognition, TikTok, and Surveillance Capitalism 2.0) (Transcript)

Air Date 1/3/2022


JAY TOMLINSON - HOST, BEST OF THE LEFT: During today's episode, I'm going to be telling you about a podcast I think you should check out. They are friends of the show and their podcast is called The Black Guy Who Tips. So keep an ear out mid-show when I tell you all about it.

And now...

Welcome to this episode of the award-winning Best of the Left podcast, in which we shall take a look at the emerging implementation of facial recognition technology in public and commercial spaces, along with the tracking and amplifigandizing capabilities of TikTok.

Clips today are from the Lawfare Podcast, Your Undivided Attention, Life Matters, The Brian Lehrer Show, Vox, NOVA from PBS, and Second Thought, with additional members-only clips from Nerdwriter1 and Andrew Russo.

And stay tuned to the end where I'll explain the new ads you may begin to hear in the show and why chapter markers may have disappeared for you.

Twitter, Facial Recognition and the First Amendment - The Lawfare Podcast - Air Date 4-15-21

RAYMA KRISHNAN: Clearview AI is a facial recognition [00:01:00] technology company and news surfaced, I think last year, that this company has been surreptitiously scraping billions of images from the internet to feed an app that it's created. And so basically what it does is it scrapes these images of people, these people that haven't consented to the collection of their images, and the app basically extracts what's called a face print from these images.

And this face print is the equivalent of a fingerprint. It's the sort of precise facial geometry that sort of maps onto your face. And this app has been sold to numerous public agencies. I think the BuzzFeed article noted that the app was accessed by employees at over 2,000 agencies.

Clearview has since basically promised that it will only sort of sell access [00:02:00] to law enforcement agencies. It's obviously an incredibly scary sort of business model, I think, to anybody that cares about privacy and also free speech in the digital age, because I think most people expect that they have an expectation of sort of relative anonymity in public. They might have a Twitter account or a Facebook account, but they don't necessarily believe that these profiles will follow them every time they go to the shops or have a conversation at an outdoor cafe or attend a public rally. And I think Clearview's technology threatens to destroy that expectation. Because police are using Clearview's app to attempt to match photos that they've obtained from really God knows where, whether it's from their extensive sort of network of CCTV cameras or police body cameras or even drones and surveillance planes that are hovering over cities, with [00:03:00] photos in Clearview's database. And what this makes possible, at least in theory, is targeted facial recognition at potentially any place and time.

And I think that this has really enormous implications because it turns out that being able to stand sort of unnoticed in the background is actually really important to protest and dissent because without anonymity, or relative anonymity, a lot of people would face pretty serious retaliation.

EVELYN DOUEK - HOST, THE LAWFARE PODCAST: Yeah, so that's not the only way that speech interests and the First Amendment come into play in this case. Our podcast is generally about the information environment and speech issues. And you work at the Knight First Amendment Institute, and yet here we are talking to you about facial recognition technology, and it feels a little bit, you know, through the looking glass to be talking about facial recognition as speech. But maybe that's just too limited a sort of Australian viewpoint of what speech [00:04:00] is. But I think it's at least not intuitive that the First Amendment is relevant here, and yet you, too, have written that a case about Clearview in Illinois is one of the most consequential First Amendment cases of the digital age. So why is it so important, and why is this a First Amendment issue?

RAYMA KRISHNAN: Sure. So maybe I can just start with what's at issue in the lawsuit. So the ACLU has brought this lawsuit arguing that Clearview AI has violated Illinois's Biometric Information Privacy Act. And that act requires companies that collect or obtain an Illinois resident's biometric identifier, and that could be a fingerprint or it could also be a face print, relevantly here, to obtain the resident's prior written consent. And the lawsuit argues that Clearview failed to comply with that requirement when it collected face prints from online images without the knowledge and [00:05:00] consent of those pictured. And Clearview AI, represented by noted First Amendment advocate Floyd Abrams, has raised a First Amendment defense in response. It is arguing that BIPA violates the First Amendment as it's applied to Clearview's app.

And the reason why this case is incredibly consequential is that it's actually part of a larger trend among tech companies of using the First Amendment to insulate their businesses from privacy regulation. And while these companies attempt to pit privacy against free speech, as we argue in our op-ed, privacy is actually a precondition to free speech. So using the First Amendment to strike these laws down would be disastrous for free speech in the digital age.

JAMEEL JAFFER: Can I just point out what's at stake in these maybe [00:06:00] seemingly esoteric debates over whether something is content-based or whether something is protected First Amendment activity? I mean, I think that the reaction that I get from some people when I tell them that we, a First Amendment organization, are arguing that activity like this, you know, like Clearview's facial recognition app, is not protected by the First Amendment, is a kind of, you know, suspicion. Because I think people assume that First Amendment advocates should be enthusiastic about attaching the label "First Amendment activity" to new forms of activity. The problem with doing that, and this is sort of implicit in what Rayma was saying, the problem with doing that is that every time you attach the label "First Amendment activity" to something, you disable legislatures from being able to regulate it. And that's just a function of the First Amendment doctrine we have. The First Amendment doctrine we have, you know, and this is usually a good thing, is very, very protective. [00:07:00] Once something is characterized by the courts as protected by the First Amendment, it essentially means that it's very difficult for legislatures to regulate that activity, and in many, many contexts that's a great thing. We, you know, we don't want legislatures to be able to regulate most of the things that are usually characterized as First Amendment activity. But if you keep expanding the domain of the First Amendment, and you keep characterizing new things as First Amendment activity, one of the things you're doing is disabling legislatures from regulating more and more aspects of human conduct. And in this particular context, what you're disabling legislatures from doing is passing laws that protect the privacy that is sometimes a precondition for the freedoms of speech and association and inquiry. And this is a larger issue than Clearview. It's not just Clearview that's making these kinds of arguments. Rayma and I are involved [00:08:00] in another suit in Maine involving an internet privacy law that has been challenged by internet service providers. The internet service providers are arguing that a law that regulates what they can collect from their subscribers and how they can use that information violates the First Amendment, violates their First Amendment rights. And you know, there, too, if you accept the arguments that the internet service providers are making, it's difficult to see how legislatures are going to be able to protect privacy online. And privacy online, again, is necessary to our enjoyment of the freedoms of speech and association.

Addressing the TikTok Threat Part 1 - Your Undivided Attention - Air Date 9-8-22

TRISTAN HARRIS - HOST, YOUR UNDIVIDED ATTENTION: If you didn't know, TikTok recently surpassed Google and Facebook as the most popular site on the internet in 2021, and is expected to reach more than 1.8 billion users by the end of 2022. So imagine the analogy that the US didn't just allow the Soviet Union to run 13 hours a day of children's TV programming in the US, but we allowed the Soviet Union to run [00:09:00] 1 billion TV sets in the entire Western world, except they had an artificial intelligence who could perfectly tune what propaganda each person in the US or Western world across a billion TV sets would see.

Now, before we go any further, we should make very clear—TikTok is not run by China. TikTok is the flagship app of a company called ByteDance, which is headquartered in China. So ByteDance and China are two distinct entities with different motives, but sometimes those motives come into conflict. And the Chinese government does sometimes force its tech companies' hands. The CEOs of Chinese tech companies have notoriously been abducted on several occasions. So the Chinese government does not control TikTok, but it has massive influence over it.

Now, congressional activity against TikTok is picking up. Recently, the Commissioner of the Federal Communications Commission, Brendan Carr, wrote a public letter to Apple and Google asking them to remove TikTok from their app stores, citing a recent BuzzFeed News [00:10:00] report that Chinese ByteDance staff had accessed US TikTok user data on multiple occasions. And then last month in July, in a more powerful move, bipartisan leaders on the Senate Intelligence Committee asked the Federal Trade Commission to investigate TikTok's data practices and corporate governance over concerns that they pose privacy and security risks for Americans. The request was signed by Senators Mark Warner and Marco Rubio.

Meanwhile, TikTok is starting to go on the defensive, for example, in its recent announcement about its commitment to election integrity, and that it's creating an election center to be a hub for authoritative election information. So congressional activity is picking up, and TikTok's response is also picking up.

AZA RASKIN - HOST, YOUR UNDIVIDED ATTENTION: So Tristan, let's talk about what are the harms? I think the two obvious ones are, of course, surveillance and data gathering, and that was the target of the recent Biden executive order on Protecting Americans' Sensitive Data from Foreign Adversaries.

Just [00:11:00] so listeners know what kind of surveillance we're talking about, there was a very alarming revelation in August by security and privacy researcher Felix Krause. What he discovered is that TikTok is running code that tracks and captures every single keystroke when you're using their in-app browser. So that means any search term, your password, credit card information, it's all being tracked by TikTok when you're using the browser built into the app. Now, TikTok admits it has this code, but says it's using it for "debugging and troubleshooting", which is sort of like when a CEO says that they're resigning to "spend more time with their family."
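To make the mechanics concrete, here is a toy sketch of the structural point behind Krause's finding: an in-app browser sits between your keystrokes and the website, so the host app is in a position to observe everything you type. The class names and logic below are illustrative assumptions, not TikTok's actual code or anything from Krause's report.

```python
# Toy model of the in-app browser's privileged position; everything here
# is a hypothetical stand-in, not code from Krause's report or TikTok.
class Website:
    def __init__(self):
        self.form_input = []

    def receive_key(self, char):
        self.form_input.append(char)

class InAppBrowser:
    """The host app renders the page, so every event passes through it."""

    def __init__(self, site, log_keys=False):
        self.site = site
        self.log_keys = log_keys  # e.g., for "debugging and troubleshooting"
        self.keystroke_log = []

    def type_key(self, char):
        if self.log_keys:
            self.keystroke_log.append(char)  # captured before the site sees it
        self.site.receive_key(char)

browser = InAppBrowser(Website(), log_keys=True)
for ch in "hunter2":  # say, a password typed into a login form
    browser.type_key(ch)
print("".join(browser.keystroke_log))  # the host app now has the full string
```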

They say they're not tracking users' online habits, but here's the question: how do we ever know? Do you wanna talk about the other ones?

TRISTAN HARRIS - HOST, YOUR UNDIVIDED ATTENTION: So, I think a lot of people look at TikTok and, the US government has basically said, let's focus our attention on the data that it gathers on US citizens. It's all about the data. What if they know a user's location? What if they know the location you're accessing the app from, and they can figure out your address? What if they know the [00:12:00] videos or times of day that you post? What if they know which videos you're browsing late at night? These are the kinds of things that get our concern, but I actually think the TikTok threat is so much bigger than that, because they can actually manipulate, per person, the information that rises to the top in everyone's newsfeeds.

Now, we've actually seen this before. In 2014, it was exposed that Facebook did experiments where users were shown happier or sadder content, and then it found that it actually shifted the content that those users shared. And TikTok could do the same thing, but instead of happier or sadder content, it could shift toward pro-China content, or anti-Taiwan content—in the event that they were to, say, start a war with Taiwan.

Think about it this way. We saw that Russia invaded Ukraine, and when they did that, while they had propaganda channels online like Sputnik and RT, Russia Today, those were certain propaganda channels. But RT and Sputnik didn't influence all of Facebook, all of Twitter, all of YouTube, all of Instagram, and all of the platforms to influence what they [00:13:00] thought. Putin didn't influence all those platforms, but if China were to invade Taiwan tomorrow, they could take the most popular information app in the world, called TikTok, and selectively amplify Western voices who said, "well, Taiwan was always a part of China. There's really no problem here. Look at all the things that the US did and all these wars that didn't go anywhere."

And they wouldn't necessarily be wrong in some of the things they'd be saying, but they would be influencing, not with propaganda, but with what our friend Renée DiResta calls ampliganda, or what we sometimes call amplifiganda, which is the ability to selectively amplify and influence people's attitudes by focusing their attention on the things that you want them to focus on, like a magician.

And when you just think about the amount of power and control, especially because Taiwan, for those who are not as aware, holds TSMC, the Taiwan Semiconductor Manufacturing Company, which makes basically all the chips that are in every single product—in cars and televisions, microphones, computers, cell phones. If China invaded Taiwan and took over the semiconductor factory for the whole world, this would be [00:14:00] a massive, massive problem, and this is the kind of thing that China could influence people's opinion of.

Now, we've also talked on this podcast about the ability to influence and manipulate language. We've talked about polling. We had Republican political pollster Frank Luntz on this program. Frank Luntz is famous for doing dial testing. You can test people's sentiments on various topics. So if I say "the Affordable Care Act" versus if I call it "Obamacare," I can get different reactions out of people. He did that in a room, where he would actually say the words and then watch what people's responses were. Well, if I'm TikTok, I can do dial testing at scale. I can do that in every voting district in my number one geopolitical adversary's country. And I can actually see: what do they think about various topics? Which way is it trending? I can focus my attention on the swing states. I could do more dial testing than Frank Luntz could have ever dreamed of. And if I do that at scale and I can see how things are trending, and then I selectively amplify what people are seeing, I can turn up and down the dials and potentially choose the next president of the United States.[00:15:00]

Now, a lot of this might sound like a conspiracy theory, or xenophobic, or arbitrarily picking out China when there's lots of other countries doing various things. But I think we actually have to look at the nature of this threat. Now, when we looked earlier at Huawei—for those who don't know, Huawei built the cell phone infrastructure for 5G networks. So they were actually building out 5G cell towers all across the world, and Huawei was found to have back doors to the Chinese government. And within the last couple years, India has banned about 200 Chinese apps because they accurately assessed the threat, given that India is actually involved in a rivalry with China.

So they banned apps like WeChat, UC Browser, SHAREit, and Baidu Maps. And up to a third of TikTok's global users up until that time were actually based in India, so this was a big move. Now, granted, the Modi government may have ulterior motives here as well. It may be using national security as an excuse to ban various apps and even Twitter posts. And the [00:16:00] Indian Supreme Court is reviewing many of these cases because the national security threat hasn't been made clear. Still, we do see the Indian government taking action against Chinese apps. So this has been done before. We did it with Huawei. We've done it in India. Why wouldn't we do it with TikTok?

Privacy, your face and the rise of facial recognition - Life Matters - Air Date 6-26-22

MARK ANDREJEVIC: Facial recognition technology seeks to match an image captured from a camera with an image stored in a database. And there are different types of face recognition. The type that you referenced at airports and on your phone in the introduction counts as one-to-one matching. So the goal is to verify your identity by matching your image with a stored image of you.

Some forms of facial recognition, the kind that you mentioned in the retail outlets, are doing one-to-many recognition, which means when somebody enters one of these stores that's using this technology, an image is captured of them, a template is made out of that, and that template is matched to stored templates in a database, presumably images that are [00:17:00] captured of people that the retailers do not want in their store.

So the goal there is to pick a face out of a crowd and match it with a stored image, and that isn't necessarily identification. There are also systems that link an image stored in a database to a name and an address, and then take an image captured, you know, from a crowd or live, and match it to that stored image and attach it to an identity.

There's yet another type of face recognition that we might describe, which is using images of faces to make inferences about the internal states or characteristics of the person whose image is captured: their mood, their attentiveness, and so on. So those are different types of automated face recognition.
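To make the distinction concrete, here is a minimal sketch of the two matching modes Andrejevic distinguishes, assuming faces have already been reduced to template vectors ("face prints"). The vectors, names, and threshold are illustrative stand-ins, not any vendor's actual system.

```python
import numpy as np

# A "template" (face print) is typically an embedding vector produced by a
# neural network; random vectors stand in for real templates here.
rng = np.random.default_rng(0)
enrolled = {name: rng.normal(size=128) for name in ["alice", "bob", "carol"]}

def similarity(a, b):
    # Cosine similarity: higher means the two templates look more alike.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe, stored, threshold=0.6):
    # One-to-one matching (airport gate, phone unlock): does the probe
    # match the single template stored for the claimed identity?
    return similarity(probe, stored) >= threshold

def identify(probe, database, threshold=0.6):
    # One-to-many matching (a watchlist at a store entrance): compare the
    # probe against every stored template and return all hits, best first.
    hits = [(similarity(probe, t), name) for name, t in database.items()]
    return sorted((h for h in hits if h[0] >= threshold), reverse=True)

probe = rng.normal(size=128)
print(verify(probe, enrolled["alice"]))  # one-to-one: True or False
print(identify(probe, enrolled))         # one-to-many: ranked hits, if any
```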

HILARY HARPER - HOST, LIFE MATTERS: And you can see why some of them might be very, very enticing for advertisers. We'll talk a little bit more about the databases being used and the issues with them in a little while. A few texts popping in already - "shop in a balaclava and sunglasses" - and someone [00:18:00] else says, "if the image data is linked to bank details, we have a huge problem". What are your thoughts? Does it worry you, the idea that this data's being captured and used, as Mark said, to match you against potentially big databases of people, to see if you are the person who might be doing a wrongdoing, or possibly just someone that they can sell things to? Or do you feel like this is just a part of the landscape we live in now, and we're, you know, we are trackable in other ways, so why does this matter? Mark, what are retailers generally using it for? Is it just for detecting theft, or also for advertising purposes?

MARK ANDREJEVIC: Well, reportedly, the ones that were recently in the news are using it to prevent theft and antisocial behavior or disruptions. So the claim that came out in the coverage is they store images of people who have caused disruptions in the store, who they associate with illegal activity. It's an interesting question who makes that determination, and what level or standard of proof is necessary to assign somebody to that category.

HILARY HARPER - HOST, LIFE MATTERS: Yes, indeed. Lilly Ryan, [00:19:00] from a Digital Rights Watch point of view, what was your first thought when you heard the story about Bunnings, The Good Guys and Kmart installing facial recognition technology in their stores? Shock? Horror?

LILLY RYAN: Honestly, my first thought was, Oh no, not again. We've seen this fairly often recently. We saw this in, you know, 2020, 2021: 7-Eleven was doing this in their stores, where they were gathering facial recognition information from people who were answering surveys on tablets in their stores. And the Australian Information Commissioner had a fair bit to say about that afterwards. And we've also seen this happen a lot in large shopping malls. For example, if you're in Melbourne and you go and walk through the Emporium, you will see some facial recognition technology in the digital advertising that they've got around the mall. If you look closely, you can see they've got a little Xbox Kinect in there that can track where you're going. So honestly, this is something that we've seen fairly frequently in the last, you know, 5, 10 years. It's becoming [00:20:00] increasingly familiar. And honestly, it was just kind of a bit more background noise, I suppose. It was disappointing to hear that this had been happening, but ultimately not surprising.

HILARY HARPER - HOST, LIFE MATTERS: That's a really interesting thing you say about background noise, because that's a thread that's emerging on our Facebook page and in our texts, people saying, Look, this is just how we live now. You know, this is what we have to get used to. It's not going anywhere.

Does it also bring up issues for you, Lilly, about what is a public space and what is a private space? Because shops are private spaces, aren't they? The businesses can decide who comes and goes there and, you know, whether you have to wear a mask and things like that. Is this just an extension of those rights?

LILLY RYAN: In some ways, yeah, it absolutely is. They are private spaces, but it is also difficult from a practical perspective for most people to go about their day-to-day business without visiting some of these spaces. If you tried to avoid attending supermarkets entirely, for most people, that would be a serious imposition. In essence, you know, putting these kinds of technologies in places that are [00:21:00] difficult for us to avoid, you know, in some senses, not in a legal sense, but in a sort of social sense, these are public spaces, they're public utilities. Bunnings, for example: for most people who work in trades, it's very difficult to avoid going to Bunnings. And so it's very difficult then, in the course of many people's work, for us to avoid going and being subject to this kind of stuff.

Which brings in the whole issue of consent even more. Stores have been saying, Well, we've put this on signs right at the front, we've let people know it's happening. But whether or not that falls within the definition of consent, and what consent means in this context, is a really important question that we've been grappling with around the world ever since this technology has been in use. And in some cases, we've seen this technology being scaled back because we haven't been able to sufficiently answer this question and several other related social questions about facial recognition.

NJ Legal Rights & NYPD's Facial Recognition Technology - The Brian Lehrer Show - Air Date 9-30-22

NANCY SOLOMON - HOST, THE BRIAN LEHRER SHOW: It's The Brian Lehrer Show on WNYC. Welcome back, everybody. I'm Nancy Solomon, filling [00:22:00] in for Brian today.

Tuesday, when New York governor Kathy Hochul announced a plan to install cameras in subway cars, this somewhat puzzling line from her speech gained lots of attention.

GOVERNOR KATHY HOCHUL: You think Big Brother's watching you on the subways? You're absolutely right. That is our intent, to get the message out that we are going to be having surveillance of activities on the subway trains.

NANCY SOLOMON - HOST, THE BRIAN LEHRER SHOW: Wow. While cameras in subway cars haven't been installed yet, Hochul is right. Big Brother is already watching. Unlike in the Orwellian novel, it's not 24/7 surveillance through telescreens. Instead, we could potentially be identified with facial recognition, a revolutionary technology that has surfaced in the past 10 years.

Police reform advocates have raised the alarm on this technology, saying it can lead to false arrests. One case in [00:23:00] Hudson County has grabbed the attention of multiple organizations concerned by the threat to privacy and civil liberties that facial recognition software poses.

According to CNBC reporting, companies that make facial recognition technology have created databases of faces by collecting images, often without people's consent, from any source they can access. What can you tell us about where they're getting this database of photos of faces?

ALEXANDER SHALOM: The interesting thing about this case, Nancy, is that I can't tell you anything. That's really what the case is about. The case is about the fact that, in order to defend himself, a person who was charged because of facial recognition technology wanted answers to some of those questions: Where do they get the [00:24:00] database? Who's on the candidate list? Who's manipulating the data? What is the name of the software? Things as basic as that have not been disclosed to the defendant.

NANCY SOLOMON - HOST, THE BRIAN LEHRER SHOW: We do know that there are private companies that are selling people's data which includes photos of their face that could come from social media, say.

ALEXANDER SHALOM: Sure, we know that those are the possible ways that the databases can be formed. What we just don't know in this case is: did they only use information from the Department of Motor Vehicles? Or did they also get things from the Department of Corrections? Or did they get things from Facebook? There are endless possibilities. All of those things impact the reliability of the technology, and to defend oneself in a very serious case, it's important to know those answers.

NANCY SOLOMON - HOST, THE BRIAN LEHRER SHOW: You just mentioned [00:25:00] mugshots. We're talking about people obviously who have been previously arrested. How is this being put into use? Why do police officers need facial recognition and how did they use it in conjunction with this database? There used to be a book of mugshots, right? Like how are they using it now in terms of fighting crime?

ALEXANDER SHALOM: Again, we have to just infer, because the NYPD is being scrupulously silent and not answering the questions that we think we're entitled to have answers to. Our best understanding is that the way that NYPD's Facial Identification Section, FIS, works is they start with a probe image -- that's something that was maybe pulled from a surveillance camera or something like that. They take their probe image that they're trying to identify. But sometimes it has to get edited, because probe images work best when the eyes are open, the mouth is closed, and it's a full frontal shot of the face. If the [00:26:00] head is turned to the side or the mouth is agape or the eyes are closed, they might Photoshop it a little. At some point, they then take the probe image, maybe edited, and analyze certain points and features to create what's called a face print. It's a mathematical formula, which, again, we don't have access to. They take that and run it against an unknown database, and that will produce a candidate list -- maybe a hundred people who look similar to the probe image -- and they assign a numerical confidence ratio. This person is a 94 and this person is a 92. And then a technician, again someone we don't know, chooses which image counts as a possible match.

The thing that's so interesting about that, Nancy, though, is that the candidate list is going to be filled with false positives, because if it's a hundred people, well, at least 99 of them, and maybe all hundred, are not the person in the image. It's very [00:27:00] important for a criminal defendant to find out who's on that list, because on that list might be the actual suspect, the actual person who committed the crime.
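As a rough illustration of the pipeline Shalom describes (probe image, face print, candidate list, confidence scores), here is a minimal sketch. The gallery size, scoring method, and top-100 cutoff are all assumed stand-ins, since the real system is exactly the undisclosed black box at issue in the case.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical gallery: face prints for people already in some database.
# Per the episode, the actual sources (DMV, corrections, social media)
# are undisclosed.
gallery = {f"person_{i:05d}": rng.normal(size=128) for i in range(10_000)}

def face_print(embedding):
    # Stand-in for the proprietary feature extraction; real systems run a
    # neural network over an aligned (possibly Photoshopped) face crop.
    return embedding / np.linalg.norm(embedding)

def candidate_list(probe, gallery, k=100):
    # Rank the whole gallery by a confidence score and keep the top k.
    # At most one candidate can be the actual person, so at least k - 1
    # entries on this list are false positives by construction.
    p = face_print(probe)
    scored = [(float(p @ face_print(t)), name) for name, t in gallery.items()]
    return sorted(scored, reverse=True)[:k]

candidates = candidate_list(rng.normal(size=128), gallery)
# A human technician then picks a "possible match" from this list, which
# is the step the defense wants to examine.
```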

NANCY SOLOMON - HOST, THE BRIAN LEHRER SHOW: Tell me, Alex, how this came onto either your personal radar or the ACLU's radar in terms of this potential threat to civil liberties.

ALEXANDER SHALOM: This is a case, as you said, that arises from Hudson County. It was a robbery in West New York. And they had an image from a surveillance camera and they brought it to the New Jersey State Police. The state police said, "Well, we can't find any matches. It's not a good enough picture for us to work with."

The West New York Police Department went to the NYPD and said, "Hey, can you find someone?" NYPD ran it through the process I just described, produced a possible match, and they went to two different witnesses to the crime and had the possible match, whose name is Mr. Arteaga, in the photo array. [00:28:00] Both people picked Mr. Arteaga out, though after some hesitation. One person had gone past him once and then came back, but they picked him out, and he was then charged with a crime, and he came to be represented by the Office of the Public Defender in New Jersey. They have a really terrific forensics team there who recognized that there was a novel issue here in how we deal with facial recognition technology. They filed an absolutely terrific brief. First thing they said is, "We need some information." The court said, "We're not going to give it to you." I can talk about their rationale there because it's really troubling. They then took an appeal, and the court agreed to hear the appeal, and the Office of the Public Defender reached out to us at the ACLU of New Jersey and our colleagues at the national ACLU and the Innocence Project. We together wrote a brief, and some other organizations like the Electronic Frontier Foundation put together a brief, and some of the world's leading experts in misidentification [00:29:00] put together a brief, because everyone recognizes that this is new technology, but it's decidedly not science. Rather than being akin to fingerprints, which are at least pseudoscientific, it is more akin to a sketch artist. It might be helpful, but that doesn't mean it's always reliable, and we need certain information to test its reliability. This case that we're litigating now is really about our access to that information.

The problem with banning TikTok - Vox - Air Date 8-29-20

CHRISTOPHE HAUBURGIN - HOST, VOX: TikTok's frictionless personalization is what made the app an instant success around the world. But now that global success is crashing into international politics, putting TikTok in the middle of a worldwide battle over how open the internet should be.

ARCHIVE NEWS CLIP: President Trump threatening to ban TikTok in the United States as Microsoft is hoping to acquire it.

EUGENE WEI: I think Chinese tech companies traditionally have really struggled to get a cultural foothold in the US because the culture is just so different.

CHRISTOPHE HAUBURGIN - HOST, VOX: That's Eugene Wei, a tech product [00:30:00] executive who's written about how TikTok, which is owned by a company called ByteDance, became the first globally successful Chinese app. How they did it all comes down to design.

When you first open up TikTok, you don't have to follow anyone or tell the app about your interests or even choose what to watch. It shows you a video, and the only decision you have to make is how long you watch it.

EUGENE WEI: So if you look at the history of social media, most of the giants in social networking today started by having people essentially build up a social graph from the bottom up.

CHRISTOPHE HAUBURGIN - HOST, VOX: A social graph is the web of accounts you follow, and it determines most of the content you see on Facebook, Twitter, Instagram, and Snapchat. The problem with that approach is that it can feel like work. Building up a social network takes time. You're not necessarily gonna like every post from the accounts you follow, and it's hard to find accounts that you would like but don't know about.

TikTok took a different approach. [00:31:00] It bypasses the social graph and instead builds an interest graph, by watching you interact with videos.

TikTok isn't the first platform to do that. It's basically how YouTube works too. But because TikTok videos are less than 60 seconds long, you watch more of them, which means more data.

EUGENE WEI: People talk about the TikTok algorithm as if it's some magic piece of software that is just miraculously better than every piece of software out there. But the truth is, it's not necessarily that the algorithms themselves have gotten that much better, but if you massively, massively increase the training data set that you train the algorithm on, you can achieve really amazing results. And that's why I think a lot of people will describe the algorithm as eerily accurate, eerily personalized.

CHRISTOPHE HAUBURGIN - HOST, VOX: TikTok's interest graph introduces you to like-minded people. And because the videos are often music- or meme-based rather than language-based, you may find that some of those like-minded people live on the other side of the world. [00:32:00] They might be a dancer in Nepal, a family in Mexico, or kids in the UK. Or this guy: "1, 2, 3, 4, 5," as long as the algorithm predicts that it'll entertain you.

EUGENE WEI: And so in that way, the TikTok algorithm kind of allows ByteDance to gain traction in markets all over the world with languages that they don't understand, subcultures they don't understand.
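A toy sketch of the interest-graph idea Wei describes: the only signal needed is watch time, and because videos are short, every hour of use yields many training examples. The dimensions, update rule, and numbers below are illustrative assumptions, not ByteDance's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

# Each video gets a topic vector; each user's interest vector starts empty.
N_TOPICS = 16
videos = rng.normal(size=(1000, N_TOPICS))

def update_interests(interests, video, watch_fraction, lr=0.1):
    # Watching to the end pulls the interest vector toward the video's
    # topics; skipping early pushes it away. No follows or likes needed.
    signal = watch_fraction - 0.5  # in [-0.5, 0.5]
    return interests + lr * signal * video

def next_video(interests, explore=0.1):
    # Mostly exploit what the model already knows; occasionally explore,
    # which is how the system keeps discovering new interests.
    if rng.random() < explore:
        return int(rng.integers(len(videos)))
    return int(np.argmax(videos @ interests))

interests = np.zeros(N_TOPICS)
for _ in range(200):  # 200 sub-60-second videos is a single evening of data
    i = next_video(interests)
    watch_fraction = rng.random()  # stand-in for real viewing behavior
    interests = update_interests(interests, videos[i], watch_fraction)
```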

CHRISTOPHE HAUBURGIN - HOST, VOX: TikTok's global appeal enabled it to reach a billion users faster than the other social media giants had. But it also set the app on a collision course with a different trend: the rise of internet nationalism.

ARCHIVE NEWS CLIP: India is banning TikTok and dozens of other Chinese apps.

Australia has cited concerns about national security, so too has South Korea.

President Trump issued executive orders that would ban TikTok and messaging app WeChat from operating in the US in 45 days.

CHRISTOPHE HAUBURGIN - HOST, VOX: ByteDance is based in China, which means it's subject to surveillance by a regime known for [00:33:00] censorship, human rights abuses, and cyber espionage. But TikTok says they've never provided any US user data to the Chinese government.

For his part, President Trump has hinted that this is actually about getting revenge for the coronavirus.

ARCHIVE NEWS CLIP: Yes. Why? Why would you ban it?

DONALD TRUMP: Well, it's a big business. China -- look, what happened with China with this virus, what they've done to this country and to the entire world is disgraceful.

CHRISTOPHE HAUBURGIN - HOST, VOX: But whatever the motivation, the US targeting a globally popular app is a big deal, because it throws a wrench into one of the biggest debates over what the internet should be.

A New America Foundation report plots that debate along a spectrum of how open the internet is within a country.

JUSTIN SHERMAN: So on the one pole, we can visualize the free and open model. So that's the democratic model. Very little state involvement in internet content.

CHRISTOPHE HAUBURGIN - HOST, VOX: As the original home of the internet and many of the world's biggest tech companies, the US has traditionally advocated for the free flow of information online.

JUSTIN SHERMAN: The opposite end of the spectrum is what [00:34:00] we see in countries like China, where there is heavy state involvement in content, where they do go to internet companies and say, you have to censor all these keywords. You have to censor all these foreign websites.

CHRISTOPHE HAUBURGIN - HOST, VOX: China's Great Firewall famously blocks sites like Google, YouTube, Facebook, Twitter, Wikipedia, Netflix, WhatsApp, and many Western news outlets.

But it's not just China anymore.

JUSTIN SHERMAN: What we see in the middle are countries who I think are going to play a pivotal role going forward in this global scale-tipping we see.

CHRISTOPHE HAUBURGIN - HOST, VOX: According to analysts surveyed for this report, many of these countries shifted towards less openness between 2014 and 2018. In 2019, Russia moved to build an internet that is isolated from the rest of the world, following years of increasing government censorship.

Turkey has been blocking some news websites and recently passed a law giving the government sweeping powers over social media. And India, the world's largest democracy, leads the world in deliberate internet shutdowns.

ARCHIVE NEWS CLIP: Turning off the [00:35:00] internet is becoming a defining tool of government repression.

The internet has been shut down by the governments of Nigeria, Liberia, Venezuela, Kazakhstan --

CHRISTOPHE HAUBURGIN - HOST, VOX: As governments decide that a worldwide web doesn't suit their interests, we end up with a fractured internet, what some call "the splinternet," where national borders increasingly dictate what information people can access online.

Now it's up to democratic countries to reimagine an open internet worth fighting for. Instead, the US is threatening to ban a platform used by millions of Americans.

JUSTIN SHERMAN: The US benefits from having technological leadership. It benefits from promoting a democratic internet model and contesting authoritarianism. And so abdicating leadership on that front is not good for the US's own interests either.

CHRISTOPHE HAUBURGIN - HOST, VOX: TikTok created a uniquely international platform. But it emerged onto an internet that wasn't quite ready for it. It arrived in the midst of rising nationalism, from a country that has [00:36:00] never respected internet freedom.

So now it's forcing the issue. When authoritarian states assert control over online speech, should the US respond by doing the same thing?

Are You Feeding a Powerful Facial Recognition Algorithm? - NOVA PBS Official - Air Date 4-23-21

EMILY ZENDT - HOST, NOVA PBS OFFICIAL: Without your realizing it, images you've posted online could be feeding a powerful facial recognition algorithm, often used by law enforcement.

HOAN TON-THAT: It's over 3 billion photos with faces in the database, and it's all from open source internet. So any kind of websites, cnn.com or mugshot websites, news sites, social media, uh, you name it.

EMILY ZENDT - HOST, NOVA PBS OFFICIAL: A company called Clearview AI has the largest known facial recognition database of images in the U.S. Larger than the FBI's. It includes images scraped from social media sites like Facebook, YouTube, even Venmo. The algorithm isn't looking for your face only in the photos you've posted. It can find your face in other people's public photos, too. The company says that's [00:37:00] how a person accused of sexually abusing a child was identified.

HOAN TON-THAT: They found him in the background of someone else's Instagram page, in the gym, you know, in the mirror.

EMILY ZENDT - HOST, NOVA PBS OFFICIAL: Clearview's algorithm uses artificial intelligence, or AI, to identify people by mapping a person's unique facial features, like the nose or the distance between your eyes.

HOAN TON-THAT: It will find the features that stay the same across age and color and, like, lighting and things like that.

EMILY ZENDT - HOST, NOVA PBS OFFICIAL: Clearview says their app is now only available to law enforcement. Users upload an image with a case number and the algorithm searches through billions of images for a match. Matches come up in seconds with links to webpages.

HOAN TON-THAT: And we're always, because we're crawling the web, folding the data that's out there back into retraining the algorithm. You know, the larger the dataset we get, the more accurate it is over time as well.

EMILY ZENDT - HOST, NOVA PBS OFFICIAL: But collecting and storing biometric data from online photos has raised concern. Canada called Clearview [00:38:00] AI's app illegal, a violation of privacy rights, and ordered Canadian faces removed from the photo database. And the use of the technology is being challenged in the U.S., in Illinois and in California. But Clearview is becoming increasingly mainstream for law enforcement, and the technology was used to help track down the January 6th Capitol rioters. The day after the riot, the company reported a 26% increase in searches.

DOUG KOUNS: So you get all these pictures that have been submitted from the public to the FBI's tip site, and it has to be compared against a database to get potential matches.

EMILY ZENDT - HOST, NOVA PBS OFFICIAL: More than 400 people have been charged with crimes related to the January 6th attack. Rioters were likely dropping digital breadcrumbs with every move they made. Crowdsourced tips, location data, and surveillance footage have all helped law enforcement understand who was where on January 6th.

DOUG KOUNS: So many people were live streaming and taking photos. You know, hey we can figure out who you are pretty easily with the [00:39:00] technology that exists.

EMILY ZENDT - HOST, NOVA PBS OFFICIAL: Clearview says their large data set is part of what makes the algorithm work so well. The company also says that facial recognition software should be a tool in an investigation, but not the only evidence for a criminal identification. But it's not always clear to the public which law enforcement agencies are using the technology and how they're using it.

DOUG KOUNS: The technology is pretty good, but it's still not suitable to say this is positively this person. You still have to have a human look at it and say, This is a match. And even then you still have to go out and do some more legwork and further confirm that your match is in fact who you think it is.

EMILY ZENDT - HOST, NOVA PBS OFFICIAL: Many facial recognition systems have gender, age, and race biases, and often misidentify people of color. And critics are concerned about the technology exacerbating existing inequalities.

JANAI NELSON: So artificial intelligence has the veneer of being objective, [00:40:00] has the veneer of being at arm's length from human bias, but it is far from that. There's always a human element in the creation of these methodologies and these automated decision systems. And we have been very concerned about the inputs into these systems that often produce racially discriminatory results.

EMILY ZENDT - HOST, NOVA PBS OFFICIAL: Clearview's co-founder claims their technologies and identifications are more accurate than eyewitnesses.

HOAN TON-THAT: I think that it can minimize mistakes, minimize misidentification. The technology has far surpassed the human eye now, in terms of accuracy.

EMILY ZENDT - HOST, NOVA PBS OFFICIAL: And then there's the issue of privacy. A full embrace of the technology could potentially mean the end of anonymity in any location within view of a camera lens.

WOODROW HARTZOG: There are some potentially significant benefits for facial recognition technology, things like finding missing people, or being able to use it, for [00:41:00] law enforcement purposes, to catch, for example, people that are dangerous, that need to be found very quickly. But in order to realize those benefits, we have to sacrifice almost everything in terms of privacy. Otherwise, those tools aren't going to be effective for their stated purposes. We have to sort of relinquish control over name-face databases so that all of our face prints are stored; we have to agree to being consistently surveilled in public all the time. And once we've done that, that's when the potential for abuse is at its highest for this technology.

EMILY ZENDT - HOST, NOVA PBS OFFICIAL: There's been fairly widespread support for efforts to track down the Capitol rioters, but civil rights advocates have raised concerns about broader use of this kind of artificial intelligence.

JANAI NELSON: I think anyone who cares about the future of our democracy will understand that we must have absolute and complete accountability at all levels for the attack on the Capitol. That [00:42:00] said, what we don't want to see is the January 6th attack being used as a predicate for increased surveillance of Black communities, brown communities, Muslim communities, and other communities that have been subject to this extensive and unwarranted surveillance over time.

WOODROW HARTZOG: Often we turn to technologies to try to solve hard social problems, hard political problems, because it's almost easier just to ask a technology to solve it for us, right? And instead, I think it's time to really ask the harder political questions. Are our rules appropriate? Can we achieve the same level of protection and serve the values that we wanna serve with our existing tools? Is it really just a matter of not wanting to enforce them?

Addressing the TikTok Threat Part 2 - Your Undivided Attention - Air Date 9-8-22

AZA RASKIN - HOST, YOUR UNDIVIDED ATTENTION: In the same way that Huawei would enable backdoor access [00:43:00] to all the information of our country, TikTok is sort of like cultural infrastructure. It gives you access not only to the data, but direct access to influence the minds, information, and attention of, first, our youth culture, and then the entirety of our culture.

TRISTAN HARRIS - HOST, YOUR UNDIVIDED ATTENTION: And not to mention influencing the values of who we want to be when we grow up. We've mentioned the survey of what kids in the US and Gen Z most want to be when they grow up. The number one most aspired-to career is an influencer. And in China, I think in this particular survey, it was an astronaut or a scientist. And keep in mind that inside of China, domestically, they regulate TikTok to actually feature educational content. So as you're scrolling, instead of getting influencer videos and all of that, you actually get patriotism videos, science experiments you can do at home, museum exhibits, Chinese history, things like that.

And domestically for kids under the age of 14, they limit their use to 40 minutes a day. They also have opening hours and closing hours, so that at 10:00 PM it's lights [00:44:00] out for the entire country. All of TikTok goes dark, and no kids under 14 can use it anymore. And then at 6 in the morning it opens up again, because they realize that TikTok might be the opiate for the masses and they don't wanna opiate their own kids. Meanwhile, they ship the unregulated version of TikTok to the rest of the world that maximizes influencer culture and narcissism, et cetera. So it's like feeding their own population spinach while shipping opium to the rest of the world.

And you could argue that's the West's fault. The West should be regulating TikTok to say, "well, what kind of influence do we want? If we want not an influencer culture, we should actually say we wanna pass laws that feature educational material, or bridge building content that actually shows people where they agree in a democracy." But so far we're not doing those things.

AZA RASKIN - HOST, YOUR UNDIVIDED ATTENTION: I want to make one point about amplifiganda and free speech, because whenever we start to talk about regulating attention, we will always get into the conversation about free speech, and we need to return to the episode we did [00:45:00] about Elon Musk and Twitter. What is the point of free speech? Free speech is a kind of immune system, a protection for democracies, that both protects your individual ability to express yourself, of course, but also the ability of a nation to make good sense and good decisions.

What we see with amplifiganda is a kind of zero-day exploit against the value of free speech as it was written in 1791, because the Chinese government does have influence over TikTok and the algorithm that chooses what goes viral. I wanna zoom out for a second, because amplifiganda is an example of how a technological change can change the context in which a value is adequately expressed. Free speech worked as written in 1791 because there was no tech that could do amplifiganda.

But this kind of thing has happened before, and we've had to update our philosophy to safeguard what we really value. I'm thinking of the first mass-produced camera, the Kodak camera. There is no right to privacy written into the Constitution, and you did [00:46:00] not find the founding fathers discussing privacy, so where did it come from? Well, the right to privacy came from Louis Brandeis, who would later become one of the most influential Supreme Court justices, and who was reacting to the mass-produced camera. He wrote, "instantaneous photographs and newspaper enterprise have invaded the sacred precincts of private and domestic life." That is, because of the invention of the camera, we needed to invent the idea of privacy in a way we didn't have it before. So, for amplifiganda and free speech, we are going to need to update our philosophy of what we think free speech is, so that the security and protections we have can serve open society.

TRISTAN HARRIS - HOST, YOUR UNDIVIDED ATTENTION: You know, it makes me think that we're obviously very familiar with security, but we're not familiar with psycho-security. How do we secure the minds of our culture so that they're not influenced by outside forces? I think it actually goes even deeper. We have a friend who knows some of the insides of TikTok and who told [00:47:00] me that we need to actually see TikTok as a parallel incentive system to capitalism.

Now, that might sound like a bold claim, but imagine that there's this other currency in the form of TikTok, which is paying people in the currency of likes, followers, comments, and visibility. Now, just like a central bank has control over the money supply, TikTok has control over the engagement supply. They can tune the dials and say, "we're gonna give you more likes, more followers, more comments, more influence, more visibility if you say more things like this and less things like this."

So for example, if you said, "Hey, Taiwan was always a part of China. This is just China taking back what it already had in the past," they could just add a little subsidy that anybody who speaks in that way can get 10% more likes, followers, comments, and influence. Now, other influencers on TikTok are doing social learning. They're looking at, well, who is popular on TikTok? And if the Chinese government had picked certain topics where people were more successful because they spoke in one way over another, then people would actually learn, "I'm gonna copy the TikTok influencers [00:48:00] who speak positively about China."

And so over time it tilts the floor of humanity in the direction of cultural influence that you want the whole culture to go. This is an alternative incentive system. Instead of paying you in dollars, which takes money out of bank accounts, I can pay you in this infinite currency. And actually, in the early days of TikTok, when they had the app Musical.ly, they were known for artificially inflating the number of likes it looked like you got, because it would convince people that they were getting more attention than they actually got, and it caused them to come back more often.

And again, there's no check and balance. There's nothing that stops them from artificially inflating the number of likes you get, the number of views, or lying about the numbers because people are really influenced by that. And as a user, you wanna post your video onto the platform, whether Instagram Reels or TikTok, based on the one that gives you the most reach, the most visibility. So when they inflate the number of likes, that's going to alter which platforms you're gonna be posting on.
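Here is a deliberately simplified sketch of the "engagement subsidy" mechanism described above. The topic labels and multipliers are hypothetical, not anything documented about TikTok's ranker; the point is only that a platform controlling distribution can pay favored speech in visibility rather than dollars.

```python
# Toy engagement-subsidy model; all names and numbers are hypothetical.
BASELINE_REACH = 1_000  # views an average post would otherwise get

def reach(post_quality, topic, subsidies):
    # A rank-time multiplier the platform controls, invisible to creators.
    multiplier = subsidies.get(topic, 1.0)
    return int(BASELINE_REACH * post_quality * multiplier)

subsidies = {"favored-narrative": 1.10, "disfavored-narrative": 0.90}

print(reach(0.8, "favored-narrative", subsidies))     # 880 views
print(reach(0.8, "disfavored-narrative", subsidies))  # 720 views
# Creators doing social learning see only the outcomes, never the dial,
# so over time they imitate whatever the platform quietly rewards.
```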

Given the threat of all this, what are some of the solutions? Now, some people might be saying, "why don't we wait for the US government to just regulate that? I mean, obviously it's the US [00:49:00] government's job to regulate this. It's not tech companies' job, it's not whistleblowers' job, it's not someone else's."

But keep in mind that every day social media operates, every day that Twitter goes on outraging people with personalized polarization, it actually makes government less capable of regulating, because people live in constant disagreement. It breaks down the shared political will to do anything. So we're living in a cacophony in which government cannot regulate almost anything, because the business model of social media is breaking down consensus.

Why Facial Recognition Technology Is So Dangerous - Second Thought - Air Date 7-3-20

JT CHAPMAN - HOST, SECOND THOUGHT: Imagine you are out at lunch somewhere, just enjoying your 30 minutes of peace before you have to go back to work. Some guy thinks you're attractive, snaps a picture of you, and with a facial recognition app is able to find your Facebook, Instagram, or other social media pages. He comes over and says, "No way! We went to the same school and I work just down the road from you." You have no idea who this person is, and yet he has all of this information about you, information you did not give him. You would likely see this as a huge invasion of privacy, and rightly so.

If a picture of your face can function as a key to [00:50:00] all your information, would you feel safe in public? Would you feel safe knowing that someone could just take a quick picture of you on the train or in traffic or at a protest and find not only your name, but any other pictures of you anywhere online? Your LinkedIn profile picture would give them where you work. Photos you're tagged in on Facebook would give them the identities of your friends and family.

The idea that your own face could be the ticket to an instant doxing should be alarming. Should we be able to expect a reasonable level of privacy in our everyday lives? It seems like a simple question, but it's been at the heart of a very intense debate for years. Many people say we should be able to keep some personal information private from the rest of the world, to be shared with people of our choice. Others say that we forfeit the right to privacy when we operate in the public sphere. There have been court cases that come down on both sides of the issue, and we don't seem to be getting much closer to a resolution. So technically, as of right now, there's no clear answer to whether or not we can expect a degree of privacy in public.

Privacy and security. These two notions, each lauded by different groups as the more [00:51:00] essential right, have increasingly been in conflict with one another. In recent years, as camera technology and artificial intelligence have become more advanced and widely available, we've seen the genesis of new, highly invasive and widespread surveillance operations.

Countries around the world have adopted high tech surveillance methods to spy on adversaries and their own citizens, usually under the pretense of national defense. More cameras means more safety and better preparedness -- or so we're told.

And don't think that world governments are the only entities conducting surveillance operations. Another serious threat comes from the private sector. Giant companies like Microsoft, Google, Amazon, and IBM have been developing facial recognition technology for years and often sell their identification services to law enforcement agencies.

Here's a quick test. Do you have a passport? What about a driver's license? School ID? The data that state and federal governments have on file is frequently used without our knowledge to build these algorithms. And even if by some weird fluke your data hasn't yet been misused, as soon as the government buys the new software, they'll just feed it their entire [00:52:00] archive of information, which will include your data.

This data is repackaged and sold all the time. Retail stores can get information on known shoplifters. They can use their existing cameras to record foot traffic, then use the software to identify repeat customers or other persons of interest. Odds are you're already in somebody's database. But of course, this is all very hypothetical.

Let's take a look at some more concrete examples.

According to a report released in 2016, US law enforcement agencies maintain a database of over 117 million American adults. That means as of 2016, there's a 50-50 shot of your information being logged in a police or FBI facial recognition database. To make things worse, the use of this data is completely unregulated. So if Officer Friendly thinks you're cute, he can just use the database to find out where you live and come ask you on a date. Hard to say no to a guy with a gun who's apparently fine with misusing the tools available to him.

If this scenario sounds far-fetched, you haven't been paying attention.

In the wake of the George Floyd protests, police have been using facial [00:53:00] recognition software to identify people in photos posted online. They've taken that information and shown up to arrest organizers and other participants.

This isn't the first time the police have misused facial recognition software either. The same thing happened with the Freddie Gray marches. Protestors were targeted. Cops showed up at their homes and arrested them for nothing more than exercising their right to assemble.

If you'd like to see what this kind of surveillance and facial recognition looks like on a large scale, look no further than China.

Who you are, where you've been, whom you meet. If you think this sounds completely dystopian, you are not alone. That level of surveillance is terrifying. What happens when you go to a political meeting that the government doesn't like? Crackdowns are a very real concern in places like China that rely heavily on facial recognition technology. To make things even more absurd, there's a Chinese surveillance system called Skynet -- you know, like the evil AI surveillance network from Terminator.

But China isn't the only country with questionably-named surveillance operations. The United States [00:54:00] has Palantir. Palantir, like the evil crystal ball Saruman uses to see things from his tower in The Lord of the Rings. Palantir is perhaps the closest thing we have to a real-life pre-crime unit. They don't have an office; they have a sensitive compartmented information facility, or SCIF, in a back alley in Palo Alto. The building is resistant to all attempts to access the data within. Their security includes biometric systems and walls impenetrable to radio waves, phone signals, or internet. Its data is blockchained, as are the identities of those who hold the dozens of required passcodes. Palantir is a big data giant, willing to use data in ways that no other group will. They watch what you do and use that data to try to predict what you'll do next. Its clients include the CIA, the FBI, the NSA, the branches of the US military, and the IRS.

They are a behemoth in the data world. But almost no one knows they exist. Why? Because that would probably be bad for business. Palantir tracks all sorts of targets: suspected terrorists, financial fraudsters, sex traffickers, and [00:55:00] most ominously, a group they simply refer to as "subversives." The LAPD has used Palantir to predict when and where someone will commit a crime, and then swoop in and stop that crime from happening, not unlike Tom Cruise in Minority Report, you know, before the pre-crime operation frames him for murder and tries to kill him.

Defenders of facial recognition technology will say, look, there's nothing to worry about if you're a law-abiding citizen. That's a weird position to take, because even if the technology were a hundred percent accurate, which it's not, you are assuming that the government and law enforcement will act in accordance with your best interest.

If you're a gun owner, what happens when there's a crackdown on guns and suddenly you have cops showing up at your door to sweep your house? If you're a political activist, what happens when your meetings get criminalized? The law can be, and always has been, used to justify horrific treatment of citizens by the government.

But let's talk about the efficacy of this technology, because in the grand scheme of things, software that isn't accurate might be just as dangerous as software that is.

In a recent test of the accuracy of real-time facial recognition, an independent study [00:56:00] found that the software was only correct in eight out of 42 cases. That's pretty bad. To make things worse, it appears that many facial recognition algorithms have a racial or gender bias. Partly because the software is often trained on white faces, the error rate when identifying people with darker skin is drastically higher than for those with lighter skin. As one MIT researcher found across three recognition programs, the error rate for identifying white men was never more than 0.8%. For darker-skinned women, on the other hand, the error rate was over 20% for one of the algorithms, and over 34% for the other two.

No, the algorithms aren't racist. They're just dumb pieces of software performing the task they were programmed to do. But the people who programmed them probably had some racial blind spots. For example, one group claimed their software had an accuracy rating of over 97%, but the data set used to assess its performance was over 77% male and 83% white.

Then there's the infamous case of Harrisburg University in Pennsylvania. Their team developed facial recognition [00:57:00] software that they claimed could predict whether a person was a criminal just by looking at a picture of their face. After putting out a press release bragging about their new software, the team received a huge wave of backlash, which is a very good thing, because what they had reinvented was just a debunked pseudoscience known as phrenology. Phrenology was the study of the size and shape of the human head to determine criminality or other such undesirable characteristics. Unsurprisingly, it has a long and obscenely racist history. Some of these facial recognition technologies are just phrenology repackaged for a 21st-century audience.

The dangerous part is when this software is sold to police departments or government organizations. What happens when an inaccurate software determines you have a criminal face? That could lead to a confrontation with the police when you've done absolutely nothing wrong. And confrontations with the police, at least in America, rarely end well.

Meanwhile, rich white businessmen are doing cocaine in their offices or ordering drone strikes on civilians, but are deemed not to have a criminal face, so the cops never pay them a visit.

 It's likely that the only thing that can truly stop [00:58:00] the expansion of corporate surveillance technologies is continued public pressure on the federal government to pass legislation protecting the people's right to reasonable privacy.

Unless we can stop the rollout of facial recognition software soon, we could very well be headed to a future of real time location tracking, hyper-personalized ads everywhere we go, and increased crackdowns on so-called anti-American activities.

Facial recognition is a powerful and dangerous technology, and it's not worth the risk.

The Real Danger Of ChatGPT - Nerdwriter1 - Air Date 12-30-22

EVAN PUSCHAK - HOST, NERDWRITER1: In the last few weeks, ChatGPT, the artificial intelligence chatbot built by OpenAI, has been on an ambitious killing spree. Timelines overflow with eulogies to its victims: search engines, copywriters, coders, high school essays, and many more. Now, reports of these impending deaths may be exaggerated. Human beings love to write the words "this will change everything," only to [00:59:00] shrug a year later when "this" changed very little.

ChatGPT might be a game changer, or it might not be. Either way, the bot is undeniably impressive. What really astounds is its rhetorical muscle, its ability to generate paragraphs of coherent argument or narrative. Obviously there are several million humans to whom it can't hold a candle in this regard, but it instantly hurdled several million others, landing somewhere in the neighborhood of a high school student who's perfectly happy with a B minus on their Pride and Prejudice book report. 20 years ago, that was my neighborhood. If I had access to ChatGPT in 10th grade English, I would've used it without compunction.

Why waste a perfectly good afternoon scratching out a five-paragraph essay on Austen's depictions of social class when I could generate this in seconds? In America, we don't call this cheating; we call [01:00:00] it working smarter. Maybe the eulogies are fitting. Maybe we'll outsource writing to ChatGPT, like we outsourced math to calculators, spelling to spell check, memory to the internet. Now, to a certain extent, the prevalence of essay writing in school reflects the decline of memorization as a method of teaching and learning, a decline that feels inevitable, maybe even appropriate, in a world where the totality of human knowledge is pocket-sized. We assign essays because they allow teachers to gauge a deeper understanding in their students. They require more than just recall; they require a personal synthesis of information, which can't be outsourced to a machine. Or couldn't be, until a few weeks ago.

Pretty soon English teachers are gonna get that question math teachers have been getting for decades. "When will we ever need this stuff in real life?" [01:01:00] Bullshit won't work here. Teens are just too savvy for that. The truth is that you'll probably never have to calculate a definite integral once you leave school, and the truth is, we'll likely outsource many categories of basic writing to AI and never look back. Now, for other, more complex kinds of writing, you could simply start with ChatGPT, let it provide the basic structure, then edit to your liking. Editing, after all, is another way to synthesize information. Maybe ours will become a culture of editors, tweakers, embellishers. Would that be so bad?

Well, the insipid teenage me would probably say no, but the 34-year-old me, the one who ironically became a professional essayist, leans the opposite way. The difference between writing and editing is like the difference between writing and reading. Reading is enormously important, obviously. It closes the gaps of our ignorance and [01:02:00] expands our knowledge, but it does so through the language of others: their words, their sentences, their narratives and arguments. Editing, too, begins with the language of someone else. Or, in the case of ChatGPT, something else.

Of course, to some extent, writing is editing too. We inherit our language and its rules from culture, from the past. We express ourselves through a system that we didn't invent. But that system is so infinitely flexible that we can use it to create structures of our own. Language is how human beings understand themselves and the world. But writing is how we understand uniquely. Not to write is to live according to the language of others, or worse, to live through edits, tweaks, and embellishments to language generated by an overconfident AI [01:03:00] chatbot.

I doubt this argument would convince the teenage me to resist a free magic tool that promises easy grades for less work. As ever, it's left to teachers to impassion their students within a system that prioritizes grades over learning. Often they succeed, but even when they don't, they still do the heroic work of giving insipid high schoolers like myself basic writing competence. In my 20s, when I began to wonder who I really was and what I really believed, questions that come for all of us, I discovered that writing, structuring language of my own, was the only way to find out. That's when the foundation teachers worked so hard to give me proved its immense value. I hope future generations have a similar foundation. One way or another, new writing [01:04:00] is on its way from the future to make sense of all of us. I think we'll prefer it to be our own.

Hustle / Grind Alpha Bro vs. Random ChatGPT Guy - Andrew Rousso - Air Date 12-13-22

ANDREW ROUSSO - HOST, ANDREW ROUSSO: I wake up at exactly 4:00 AM every morning. Not cuz I want to, but because the grind calls me. Get to it. 1000 Goblin Squats. Freezing cold shower, coffee, and by 5:00 AM I'm at my workstation primed and ready to go.

So I probably work from, like, 10:38 to 10:47, and, you know, that's, that's kinda my workday. And one day I was just like, "Hey, ChatGPT, can you automate just a bunch of revenue streams?" And it was like, "Yeah."

I put in 16 hour days. Crypto trading, marketing, branding, social media, side hustles, anything to get ahead. My religion is the grind.

I used to work at the DiGiorno's factory. That's the thing. I don't know how to code, I don't know marketing, but ChatGPT does. ChatGPT you are brilliant. A lot of money, a lot. And I don't do anything.

Some people say work smarter. No. Kill yourself to death by working. There are no shortcuts in this life. There are no [01:05:00] cheat codes.

This is definitely a cheat code. This is a shortcut.

There is untapped value everywhere, and I grind to extract.

Hustle culture? You mean asking ChatGPT questions all day? Sigma Grindset? You mean ChatGPT?

I subsist on only minerals and vitamins. You can't stop this grind. Inside everyone there are two wolves, but inside me there are three.

Now I'm just gonna chill and I, I got like a whole communal garden that I'm building. I'm gonna go paint, eat some cereal. I don't know.

Some people work to live. I work to work.

Some people say work smarter, not harder, but I'm not a smart man. Why be intelligent if you got artificial intelligence?

What the fuck is ChatGPT? Probably some beta-male, cheat-code bullshit. Sleep is for the weak. Let's go all night.

You guys gonna edit this documentary yourself? Or you want me to ask ChatGPT? I say, Hey, Chachi, Chachi. I say Hat Cha. I say, Hey, Chad.

Final comments on the new ad system for the show

JAY TOMLINSON - HOST, BEST OF THE LEFT: We've just heard clips today starting with the Lawfare Podcast, breaking down the First Amendment implications [01:06:00] of facial recognition technology. In part one from Your Undivided Attention, they explained the privacy and propaganda concerns of TikTok. Life Matters discussed various implementations of facial recognition in public and commercial spaces. The Brian Lehrer Show highlighted the case of New York's plan to install big brother cameras in the subway system. Vox explained the success of TikTok and the debate between a more open versus a more closed internet environment. NOVA, on PBS, looked into the usage and ethics of facial recognition databases. In part two from Your Undivided Attention, they looked at the data leaks from TikTok and the capital of influence. And Second Thought laid out the privacy-free, "Minority Report"-style future currently being built.

That's what everybody heard, but members also heard bonus clips from Nerdwriter1 taking a closer look at the new AI chatbot that's writing high school essays today [01:07:00] and who knows what in the future. And Andrew Rousso, on none other than TikTok, ironically, compared grind culture to the coming AI takeover. To hear that and have all of our bonus content delivered seamlessly to the new members-only podcast feed that you'll receive, sign up to support the show at bestoftheleft.com/support or shoot me an email requesting a financial hardship membership, because we don't let a lack of funds stand in the way of hearing more information.

And finally, this wasn't planned, but it is fitting. I have an announcement about a medium-sized change with the show, or maybe two changes depending on how you count them, and they are sort of technology- and privacy-related. The first is that if you are not a member of the show, you should begin to hear new ads in the show that are different from our traditional ads, in that they're not produced by me personally. They're prerecorded and [01:08:00] inserted into the show automatically. Now, for you super savvy listeners who are familiar with the way things like this work, you may be thinking to yourself, Hey, aren't those the type of ads that tap into those massive data brokers and steal my data every time I download an episode? And by the way, didn't Jay run a membership drive a couple years ago, specifically because his ad sales company insisted that he move the show to one of those unethical, vertically-integrated hosting sites, specifically for the purpose of running exactly those kinds of data-stealing campaigns, and since he refused to do that, a bunch of people needed to sign up as members in order to make up for the lost revenue? And so what gives, and what's the difference? Well, the answer to those well-crafted questions that you are undoubtedly having is: Usually; Yes; and I'm happy to [01:09:00] explain.

So, usually those ads do come to you served through a grotesque system of data mining and surveillance capitalism. So for the vast majority of other podcasts you may listen to that have ads auto-inserted into them, it is very likely that those are coming from huge, gross, you know, surveillance capitalism systems and stealing your data in the process. And that is exactly why I refused to take part in that. And I had to actually part ways with my ad sales company, which gave me an ultimatum a couple of years ago. And yes, I ran a membership drive specifically based on the lost revenue for standing up for the principles that we espouse on the show and making sure that we conduct the show in line with those principles. What is different about now is that my privacy-focused, ethical hosting company [01:10:00] has gotten on the auto ads bandwagon, because they have managed to do it in a privacy-respecting way. So all the contracts that they sign with advertising firms forbid those firms from being able to access IP address data. So that drastically reduces cross-site tracking and all of those sorts of things. So the ads may still come from one of those big data broker companies, but they're not able to steal your data in the process. In short, if you're curious, my hosting company is called Libsyn, short for Liberated Syndication. I've been with them since 2006 and I really just got lucky more than anything else that their company culture regarding privacy is in line with mine because, you know, frankly, they were sort of one of the only companies in the game way back then and so we've grown up together and luckily they're on my side when it comes to privacy.

In fact, one of the [01:11:00] higher-ups at that company who's been there since the beginning is so much of a bulldog on privacy that sometimes I think he's a little bit too paranoid. But you know, when it comes to privacy, I am happy that, if anything, he's going to err in the direction of being too protective of listeners. And that guy, who I've known, you know, for these past 15 years, is on the team that reviews all of the ad contracts for the company. So that brings me a lot of comfort personally, and that's why I've been willing to sign up for this system.

Now the second, but related, change is that you may have noticed that chapter markers have suddenly gone missing. Again, this should only affect non-members. And the reason these changes are related is that when the new ads are inserted into the show, the audio file that you are actually downloading is basically being rewritten to have the ads in it. And during that process of rewriting the [01:12:00] audio file, the chapter markers that I've inserted get wiped away. Now, I wouldn't have chosen for it to work this way, and I even talked with the team about whether chapter markers could be maintained with auto ads turned on, and the answer was simply, No, that's not possible. Therefore, chapter markers are now a membership perk. Again, I wouldn't have chosen to do it that way in, like, a pure cynical attempt to push people towards membership, but that choice came as a package deal. And lastly, the answer to what is undoubtedly one of your other questions is, Yes, we really do need the money. Times have been quite tight for the show as of late, and we are basically throwing everything at the wall to see what sticks. And this new system is one of the things that we have to try. If you want to take no chances, get away from ads once and for all, and continue to enjoy the super useful chapter markers in the show, now may be the time to join as a member at bestoftheleft.com/support. [01:13:00] We could definitely use your support and I would really appreciate it.

So those are the changes. I have made them as ethically as I am able to within the confines of capitalism that is forcing me to make decisions to try to increase the revenue of the show and the broader forces of marketing that want very much to get as much of your data as they can possibly get their hands on. And we have resisted to the best of our ability while still trying to get some money for the show to, like, pay for health insurance and stuff like that.

As always, keep the comments coming in. You can leave a voicemail, or you can now send us a text through standard SMS. Find us on WhatsApp or the Signal messaging app, all with the same number, 202-999-3991. Or keep it old school by emailing me at [email protected].

That is gonna be it for today. Thanks to everyone for listening. [01:14:00] Thanks to Deon Clark and Erin Clayton for their research work for the show and participation in our bonus episodes. Thanks to the Monosyllabic Transcriptionist Trio, Ben, Ken, and Brian, for their volunteer work helping put our transcripts together. Thanks to Amanda Hoffman for all of her work on our social media outlets, activism segments, graphic designing, web mastering, and bonus show co-hosting. And thanks to those who support the show, of course, by becoming a member or purchasing gift memberships at bestoftheleft.com/support, through our Patreon page, or from right inside the Apple Podcast app. Membership is how you get instant access to our incredibly good bonus episodes, in addition to there being extra content, no ads, and chapter markers, in all of our regular episodes, all through your regular podcast player. And if you wanna continue the discussion, join our Best of the Left Discord community to discuss the show, the news, other podcasts, articles, videos, books, anything else you can think of. A link to join is in the show notes.

So, coming to you [01:15:00] from far outside the conventional wisdom of Washington, DC, my name is Jay and this has been the Best of the Left podcast coming to you twice weekly, thanks entirely to the members and donors to the show from bestoftheleft.com.

