#1578 A.I. is a big tech airplane with a 10% chance of crashing, should society fly it? (Transcript)

Air Date 8/19/2023


JAY TOMLINSON - HOST, BEST OF THE LEFT: [00:00:00] During today's episode, I'm going to be telling you about a show I think you should check out. It's the Future Hindsight podcast. So take a moment to hear what I have to say about them in the middle of the show, and listen to Future Hindsight wherever you get your podcasts. 

And now, welcome to this episode of the award-winning Best of the Left podcast, in which we shall take a look at how Big Tech is currently scrambling to bring untested AI products to market, over-promising, under-delivering, and working hard to obscure and ignore any possible downsides for society. Big Tech needs AI regulation now before we all suffer the easily-foreseeable consequences, as well as some of the unforeseeable ones. 

Sources today include Adam Conover, Summit, What Next: TBD, Democracy Now!, The Data Chief, Science Friction, and Your Undivided Attention, with an additional members-only clip from Monastic Academy.

A.I. is B.S. - Adam Conover - Air Date 3-31-23

ADAM CONOVER - HOST, ADAM CONOVER: Artificial intelligence is a real [00:01:00] field of computer science that's been studied for decades, and in recent years, it's made major strides. But I'm not talking about that kind of AI. I'm talking about the marketing term "AI" the tech companies are using to hype up their barely functional products, all so they can jack up their stock price. See, tech companies are powered by hype. It's not enough to be profitable. No! In tech, you have to be able to convince investors that you have cutting edge, disruptive technology that will let you dominate an entire industry, like Google did with Search, Apple did with the iPhone, and Amazon did by making workers pee in bottles and passing the savings on to you.

But now that all that low-hanging fruit has been plucked off the innovation tree, tech companies have started just making up new words that they claim are going to revolutionize everything in hopes of flimflamming their way into that investor cash. You know, words like the Metaverse, Augmented Reality, Web3, and who [00:02:00] can forget Crypto. Last year, every company was racing to pivot to the blockchain. But now that Bankman-Fried has been exposed as a bank fraud man and put the crypt back in crypto, they need some hot new hype to hawk. And that's artificial intelligence. 

So in a desperate bid to juice their stock prices, companies from Snapchat to Spotify to BuzzFeed now claim they're going to jam AI into their products. Hey! And maybe next they can program an AI to read BuzzFeed too. That'd take a lot of unpleasant work off our plates. 

Now, a lot of this hype is just transparent bullshit. I mean, Spotify just released an AI DJ that will create a personalized radio station just for you. Wow. Very impressive. Except that Spotify already fucking does that. What's your next feature? An AI volume knob? 

You can't just release something that already exists and call it AI. "Hey, come on down to Papa Tony's AI Pizza [00:03:00] Shop! We got AI cheese, AI sauce, and the computer was involved somehow." That was fully a vampire voice. 

But it's not all empty talk. The biggest tech companies are unleashing an experimental technology called "generative AI" onto the public, despite the fact that in most cases it straight up cannot do what they claim and is making all of our lives worse.

This actually isn't the first time the tech industry has turned us into their AI guinea pigs. Remember self-driving cars? For years, companies like Google, Uber and Tesla have told investors that any day now they're gonna replace the 228 million licensed drivers in the US with AI autopilots. Hell, Elon's been predicting that Teslas will be fully self-driving next year since fucking 2014. These companies were so successful at making the technology seem inevitable that multiple states actually allowed self-driving cars to be deployed [00:04:00] on the roads that real people drive on. So how did that turn out?

NEWS CLIP: A Tesla, believed to be on autopilot, started braking, causing an eight-car pile up on Thanksgiving.

ADAM CONOVER - HOST, ADAM CONOVER: Whoa, whoa, whoa, whoa. Nope, nope, nope. It wanted to hit the truck. Okay, but in the AI's defense, that child was blocking the lane to Whole Foods. 

After years of broken promises and a hundred billion dollars wasted, pretty much everyone has finally agreed that self-driving cars just don't, uh, work. But the truth is, they never did. It was always a lie. Tesla is currently being criminally investigated by the Department of Justice because it turns out the videos they made promoting their self-driving feature were literally faked. They also falsely advertised their cars as having autopilot and full self-driving. [00:05:00] And since the world is full of gullible sims, who believe every tainted word that falls out of Elon Musk's idiot mouth, that inspired some drivers to take their hands off the wheel and go all Luke Skywalker on the I-95. [Obi-wan Kenobi voice] "Use the AI and... let go." Goddamnit, that's the second kid today. 

People died as a result. Last year, 10 people were killed by Tesla's self-driving cars in just four months, which might be why the government just made them recall 300,000 cars. 

What even people in the tech industry are starting to realize is that there are certain things that computers are just fundamentally ill-equipped to do as well as humans.

Humans are incredibly good at taking in novel stimuli we've never experienced before, reasoning about who's responsible for them and why, and then predicting what's gonna happen next. If you were stopped at a [00:06:00] crosswalk in Los Angeles because, say, James Corden was blocking the road and doing a stupid dance in a mouse costume, well, you'd combine your knowledge of irritating pop culture with your understanding of human nature and the bizarre sight in front of you and conclude, oh, I appear to be in the middle of some sort of horrible viral prank for a late night talk show, and there's nothing I can do but grit my teeth and wait for it to be over. But your self-driving car hasn't seen The Late Late Show. It doesn't even watch Colbert. So it might conclude, oh, that's a mouse, hit the gas and flatten the motherfucker. And you know, that would be a way funnier segment for the show, but we wouldn't exactly call it intelligent. 

Now look, a lot of self-driving tech is genuinely really cool, and it does have important real-world uses like collision prevention and enhanced cruise control. But the idea that we'd all be kicking it in the backseat with a mai tai while a robo-taxi drove us to work was always a science fiction fantasy. And when companies like Tesla told us it was coming, it [00:07:00] was a lie. A lie told to boost their share price and to trick us into doing what they wanted. To change our laws to permit their untested, in many cases fraudulent, technology onto the public roads, where it hurt and killed people. And guess what? That same cycle is happening again. Massive tech companies are making us the guinea pigs for their barely functional bullshit. Only this time they're calling it "generative AI". 

Center for Humane Technology Co-Founders Tristan Harris and Aza Raskin discuss The AI Dilemma Part 1 - Summit - Air Date 6-15-23 

TRISTAN HARRIS - HOST, YOUR UNDIVIDED ATTENTION: A few months ago, some of the people inside the major AGI companies came to us and said that the situation has changed. There is now a dangerous arms race to deploy AI as fast as possible, and it is not safe, and would you, Aza and Tristan and The Center for Humane Technology, would you raise your voices to get out there to try to educate policy makers and people to get us better prepared? And so that's what caused this presentation to happen.

As we started doing that work, one of the things that stood out to us was the largest survey that's ever been done of AI researchers who've submitted to [00:08:00] conferences their best machine learning papers. In this survey they were asked: what is the likelihood that humans go extinct, or are severely disempowered, from our inability to control AI? And half of the AI researchers who responded said that there was a 10% or greater chance that we would go extinct. So, imagine you're getting on a plane, right, a Boeing 737, and half of the airplane engineers who were surveyed said there was a 10% chance that if you get on that plane, everyone dies. Right? We wouldn't really get on that plane, and yet we're racing to kind of onboard humanity onto this AI plane. And we wanna talk about what those risks really are and how we mitigate them. 

AZA RASKIN: So, before we get into that, I wanna sort of put this into context for how technology gets deployed in the world. And I wish I had known these three rules of technology when I started my career. Hopefully they will be useful to you. And that is, here are the three rules. 

One: when you invent a new technology, you uncover [00:09:00] a new species of responsibilities. And it's not always obvious what those responsibilities are, right? We didn't need the right to be forgotten until the internet could remember us forever. And that's surprising. What should HTML and web servers have to do with the right to be forgotten? That was non-obvious. Or another one. We didn't need the right to privacy to be written into our laws until Kodak started producing the mass-produced camera. Right? So here's a technology that creates a new legal need, and it took Brandeis, one of America's most brilliant legal minds, to write it into law. It doesn't, privacy doesn't appear anywhere in our Constitution. So when you invent a new technology, you need to be scanning the environment to look for what new part of the human condition has been uncovered that may now be exploited. That's part of the responsibility. 

Two: that if that tech confers power, you will [00:10:00] start a race for people trying to get that power. And then three: if you do not coordinate, that race will end in tragedy. And we really learned this from our work on the engagement and attention economy. 

TRISTAN HARRIS - HOST, YOUR UNDIVIDED ATTENTION: So, uh, how many people here have seen the Netflix documentary, The Social Dilemma? Okay. 

AZA RASKIN: Wow. 

TRISTAN HARRIS - HOST, YOUR UNDIVIDED ATTENTION: Most? [audience applause] Awesome. Really briefly, about more than a hundred million people in 190 countries in 30 languages saw The Social Dilemma. It really blew us away. 

AZA RASKIN: Yeah. 

TRISTAN HARRIS - HOST, YOUR UNDIVIDED ATTENTION: And the premise of that was actually these three rules that Aza was talking about. What did social media do? It created this new power to influence people at scale. It created, it conferred power to those who started using that to influence people at scale. And if you didn't participate, you would lose. So the race collectively ended in tragedy. 

Now, what does The Social Dilemma have to do with AI? Well, we would argue that social media was humanity's first contact with AI. Now, why is that? Because when you open up TikTok or Instagram or Facebook and you scroll your finger, [00:11:00] you activate a supercomputer pointed at your brain to calculate what is the best thing to show you. It's a curation AI. It's curating which content to show you. And just the misalignment between what was good for getting engagement and attention, just that simple AI, that relatively simple technology, was enough to cause, in this first contact with social media, information overload, addiction, doom scrolling, influencer culture, sexualization of young girls, polarization, cult factories, fake news, breakdown of democracy, right?

So, if you have something that's actually really good... it conferred lots of benefits to people too, right? I'm sure many of you in the room use social media, and there's many benefits. We acknowledge all those benefits, but on the dark side, we didn't look at what responsibilities we have to prevent those things from happening. And as we move into the realm of second contact between AI and humanity, we need to get clear on what caused that to happen. 

So, in that first contact, we lost, right? Humanity lost. Now, how did we lose? How did we lose? What was the story we were telling ourselves? [00:12:00] Well, we told ourselves we're giving everybody a voice. Connect with your friends. Join like-minded communities. We're gonna enable small, medium-sized businesses to reach their customers. And all of these things are true, right? These are not lies. These are, this is real. These are real benefits that social media provided. But this was almost like this nice friendly mask that social media was sort of wearing over the AI, and behind that kind of mask was this maybe slightly darker picture. We see these problems: addiction, disinformation, mental health, polarization, et cetera. But behind that, what we were saying was actually there's this race, right? What we call the race to the bottom of the brainstem for attention, and that is kind of this engagement monster where all of these things are competing to get your attention, which is why it's not about getting Snapchat or Facebook to do one good thing in the world. It's about, How do we change this engagement monster? And this logic of maximizing engagement actually rewrote the rules of every aspect of our society. Right? Because think about elections. You can't win an election [00:13:00] if you're not on social media. Think about reaching customers of your business. You can't actually reach your customers if you're not on social media, if you don't exist and have an Instagram account. Think about media and journalism. Can you be a popular journalist if you're not on social media? 

So this logic of 'maximize engagement' ended up rewriting the rules of our society. So all that's important to notice because with this second contact between humanity and AI, notice have we fixed the first misalignment between social media and humanity?

AUDIENCE MEMBER: No. 

TRISTAN HARRIS - HOST, YOUR UNDIVIDED ATTENTION: No. 

AZA RASKIN: Yeah, exactly. 

TRISTAN HARRIS - HOST, YOUR UNDIVIDED ATTENTION: And it's important to note, right? If we focus our attention on the addiction, polarization, and we just try to solve that problem, we will constantly be playing whack-a-mole, because we haven't gone to the source of the problem. And hence we get caught in conversations and debates like, Is it censorship versus free speech? - and we'll always get stuck in that conversation - rather than saying, let's go upstream: if we are maximizing for [00:14:00] engagement, we will always end up with a more polarized, narcissistic, self-hating kind of society. 

Tech's Mask Off Moment - What Next: TBD | Tech, power, and the future - Air Date 8-13-23

CELESTE TEDLEY - HOST, WHAT NEXT: TBD: The tech industry has long been defined by being outside of the mainstream. The “move fast and break things” culture. And for a while the general perception of the industry was that it was a force for good, bridging gaps between people and culture. Google’s original guiding principle was “Don’t be evil.”

 Fast forward to now. That phrase was removed from Google’s code of conduct years ago and scandal after scandal has tainted public perception of the company and the entire industry. And amidst the tarnishing of brands, many of the big names in tech have also embraced the values of the populist right.

ANIL DASH: I mean, there’s a lot of intersecting causes. It is a space that made room for people who wanted to be outside the mainstream in good and bad ways. And so I think there was this both sincere and opportunistic libertarianism that shaped the current tech industry. I think there were [00:15:00] people that were genuine about this as their policies, but also every single major libertarian movement in America has always sort of made space for a lot of racist thought and movement and people have different relationships with that within that movement. It’s not mine to comment on, but you can sort of see that pattern over time.

And so the sort of tech libertarianism set the stage or made the space to welcome in these folks. But then there was a really clear concerted effort. If you look at Peter Thiel, sort of the most visible of these, but even Marc Andreessen, they had this combination of, they’re smart people and they recognize the social trends that are happening and whether it favors them or not. And if you are trying to build extractive systems, and they are, who do you have to appeal to in order to keep your perch in that position of power? And so I think there’s a very intentional strategy of appealing to, like any politician would, to a sense of grievance, [00:16:00] to a sense of being unjustly wronged.

CELESTE TEDLEY - HOST, WHAT NEXT: TBD: And often that’s throwing support behind far right ideologies. And worse. We see it now when Elon Musk is talking about birth rates, anti-trans talking points, and colonizing Mars. But Anil also saw this happening more than a decade ago when Marc Andreessen was tweeting his support for colonialism.

ANIL DASH: He said that anti-colonialism was the worst thing that ever happened in India. And I and many others sort of pointed out, in my case, I pointed out I have family members who were killed under that imperial regime who would be alive today if not for that. Not ancient history, living history. My living parents' siblings are amongst that list. To endorse colonialism as a tycoon of industry is to say, If my plans for expansion of our economic opportunity include causing the death of your family members, that’s acceptable to me.

And so understandably, I mean, it’s funny because - not funny, but telling - [00:17:00] he’s a Facebook board member, and this was the one time he was chastened, that he was actually forced to apologize and do the thing. Facebook actually lost out on their bid to provide what they called free internet access in India - but really the only service that you got for free was Facebook’s app and WhatsApp, which they also own. And they lost out on that policy initiative really as a direct backlash to Andreessen’s comments. And I think the idea that there would be some kind of accountability and that people would call him out and that it would be effective at stopping him, flipped a switch. Which was, How am I not the one in control? How is it that one tweet can have an impact, that I don’t get to be the one to decide for billions of people that they are using our services? 

And so I think that accelerated. I mean, that was an inflection point. There were many others like that, but that was one that was really clear, because that’s when he deleted his Twitter account and became part of the professionally aggrieved class, right?, of convincing [00:18:00] himself that he was the martyr.

You fast forward to whatever it was two years ago when everybody was talking about Clubhouse. That was the streaming audio, or the audio chat app, funded by his VC firm, that they promoted. The partners and their families were actually involved in promoting content on Clubhouse, and Andreessen would hang out in a chat room called “How to Destroy the New York Times”, which is about ending accountability and ending critical journalism explicitly. That’s the goal. That’s why he funded Clubhouse, to have a platform to do that thing. That’s radical behavior. It’s not normal behavior.

 I’ve been a CEO. I have raised tens of millions of dollars in funding. I have faced public criticism. Some of it fair, some of it I didn’t feel was fair. And you suck it up because that’s why you get to be in the seat and make the big money or have the name out there in the world or whatever, and I don’t have billions like he does. But that’s the cost of doing business, is you sort of accept the good and the bad because you get to the chance to do these things. [00:19:00] And they really worked each other up into a lather of they should not have to countenance criticism, especially valid criticism, from anyone.

 And then the real catalyst, I think, is the rising labor movement of the last several years. They see it across industries, including tech. But you look at Chris Smalls organizing Amazon workers in Staten Island, and it is directly connected to their idea of, like, we cannot allow there to be those people making those moves to organize in those places.

CELESTE TEDLEY - HOST, WHAT NEXT: TBD: A defining characteristic of the right in the tech industry is the idea that they’re somehow held to a different standard and thus can say or do whatever they want without consequence. And in order to keep that idea up, they make sure to prop up other like-minded thinkers whenever they can. People like Hanania.

ANIL DASH: And the thing is, they’re not very secret about it. It’s not like some secret group chat or whatever. They would sort of openly [00:20:00] say, you people have no right to keep me from doing whatever I want to do, all the time. With, you know, Hanania, the really clear example is this week, when he’s unmasked, if you want to call it that. I mean, I think it’s sort of like the very obvious thing turns out to be true. He is who we thought he was. But in any case, there’s clearly a mass media moment of reckoning with this visible person. Two key things. One: part of the reason his profile had risen in recent months was Substack went all in on promoting him. And Substack is a media platform that is funded by Andreessen Horowitz and was designed, just as Clubhouse was, as part of their, like, let’s undermine mainstream media. So Substack goes out by funding all of the most prominent anti-trans voices, and those aligned with the intellectual interests of, you know, the Thiels and Andreessens of the world. And they did a sort of special, unprecedented promotion of a podcast for Hanania. They didn’t have to do it. They went out of their way to do it, and [00:21:00] they said, this is a voice everybody should be listening to. It’s not like this is a level playing field. They put their thumb on the scale to say, There. That’s sort of a catalyst moment. It all comes to a head, as you expect it would, when you see somebody that’s outed as an avowed White supremacist, and Musk responds by following the guy on Twitter, or X. And so again, in a time when any right-thinking, decent person would say, My gosh. Even if you were, Oh, I’m intellectually curious and I didn’t know, and Substack told me I should check him out, and so I was following a guy and I didn’t know. After this week, you sort of say, Of course I’m not going to listen. This is a repugnant person. This is a person with these sort of vile, hateful views. Musk does the opposite: I didn’t follow him before, but I’m going to now. That says it all.

 

Pregnant Woman's False Arrest Shows "Racism Gets Embedded" in Facial Recognition Technology - Democracy Now! - Air Date 8-7-23

AMY GOODMAN: Professor Roberts, I wanted to end by asking you about this shocking story out of Detroit, Michigan, involving a woman named Porcha Woodruff. She was eight months pregnant when police arrested her at her door for [00:22:00] robbery and carjacking. Six officers showed up at her home as she was getting her daughters ready for school. She was held for 11 hours, released on a $100,000 bond. She says she started having contractions in jail, had to be taken to the hospital after release due to dehydration. A month later, prosecutors dropped the case because the Detroit police had made the arrest based on a faulty facial recognition match. According to the ACLU, Woodruff is at least the sixth person to report being falsely accused of a crime as a result of facial recognition technology — all six people Black. Porcha Woodruff is now suing the city of Detroit.

The New York Times had a major story on this, saying, “Porcha Woodruff thought the police who showed up at her door to arrest her for carjacking were joking. She is the first woman known to be wrongfully accused as a result of facial recognition technology.” She was 32 years old. “They asked her to step outside because she was under arrest for robbery and carjacking.” She looked at them. She pointed to her stomach. She was eight months pregnant. And she said, [00:23:00] “Are you kidding?”

Professor Roberts, can you talk about the significance of this and what she went through in that last month of pregnancy?

DOROTHY ROBERTS: This story captures so much of what we’ve been talking about, so much about the devaluation of Black people’s lives, Black women’s lives, and the way in which these deep myths about Black biological difference and inferiority, and the need for regulation and surveillance, get embedded into technologies. They’re embedded in medical technologies. They’re embedded in policing technologies. They’re embedded in artificial intelligence algorithms and predictive analytics.

And so, just one piece of this is the fact that the six [00:24:00] cases we know of false arrest based on false AI facial recognition are involving Black people. Now, that’s not an accident. That’s because racism gets embedded into the technologies. It’s in the databases, because the databases are based on police arrests already or police action, which we know is racially biased or targeted at Black people. And so the data itself gets embedded with racism. The way in which algorithms are created have assumptions that are racist. With the facial recognition, the way in which the recognition technology is created is more likely to target Black faces. [00:25:00] All of this has been shown in research. So, there’s this idea that AI is going to be more objective than the biased decision-making of judges and police and prosecutors, but if it embeds prior biased decisions, it’s going to produce these oppressive outcomes. And also, if it’s being used by police departments that are racist, they’re going to be used in racist ways.

And that gets me to the next point, which is the way in which she was treated. She, as an obviously eight-month-pregnant woman, was treated cruelly and inhumanely by these police officers, which reflects the way in which police interacted with Black communities in general, but also the devaluation of Black women’s childbearing — again, back to this point we [00:26:00] started out with — the devaluation of the autonomy, the worth, the humanity of Black women. And a key aspect of that, in fact, a key aspect of the subjugation of Black people in general, has been the devaluation of Black childbearing. The idea that Black women pass down negative, depraved, antisocial traits to their children, almost sometimes it’s stated in biological terms. And that devaluation of Black women, especially in terms of their childbearing, is part of the basis for reproductive servitude, which we were talking about earlier, but also part of the reason why Black women are three times more likely to die from pregnancy-related causes, maternal mortality, than white women in America.

[00:27:00] So, this one incident reveals this deeply entangled way in which carceral systems in America rely — rely — on this myth of biological race and innate inferiority of Black people, which is so deeply embedded that many people just take it for granted.

Princeton University's Ruha Benjamin on Bias in Data and A.I. - The Data Chief - Air Date 2-3-21

CINDI HOWSON - HOST, THE DATA CHIEF: When Joy first talked about the problems with facial recognition and the way it was being used, some of the large tech companies tried to dismiss her, and there's this term that's often applied to women, "gaslighting", you know, she's not competent or what have you. How much do you think that continues to limit how seriously the work of these researchers is taken? 

RUHA BENJAMIN: I certainly think that it's an ongoing issue and at the same time, you know, I think we can point to the [00:28:00] problem much earlier in the process, or the so-called pipeline, where many people who would be able to point out these issues don't even get the chance to, they don't even get the opportunities, the internships, the positions, you know, the training in order to really be heard in the first place. And so certainly the kind of gaslighting in these more high profile cases is ongoing, but at the same time, we have so many people with potential who could be contributing to more socially conscious design and technology that never get even the opportunity to make good trouble, as it were. And so that's also part of the issue, I think. 

CINDI HOWSON - HOST, THE DATA CHIEF: Yeah. We had a wonderful intern. We were debating this: why are they not given the opportunities? Is it the unconscious bias in hiring, and even the job recommendation and resume-matching algorithms? But she also said to me, I feel like people [00:29:00] give up so early on, because maybe even at her high school, calculus wasn't even offered. And their first laptop, they only get in college. So, we have these different factors going on. It's almost like the double whammy. 

RUHA BENJAMIN: Mm, yeah. Certainly there are various ways in which people are pushed out, whether through those kind of structural economic opportunities that are sorely missing at the high school and even earlier, but also people who have PhDs in various fields experience all kinds of discrimination. And so one of the things I would just say is that the F word - that is the word "fit" - when we think about, you know, whether someone is a good fit for our company or organization, that F word is a pretty loaded word because within it contains all kinds of assumptions, what sociologists call homophily, that we are often drawn to people who we see as like us, whether [00:30:00] in terms of our gender or race or background, you know, whatever kind of like, you know, regional background. And so we want to mentor people who we see as many versions of ourselves. And so, if for generations, a narrow demographic has held onto and monopolized positions of power, that means through this process of homophily, they will continue to reproduce themselves rather than looking for potential and looking for capacity in people who don't necessarily fit that profile.

CINDI HOWSON - HOST, THE DATA CHIEF: And to be fair, this goes back to our survival, you know, bias and seeking out people who look like us goes back centuries to the way people survived. But I have seen that oftentimes people are dismissed for lack of cultural fit. And that's that... I thought you were gonna say a different F word. 

RUHA BENJAMIN: Yeah. [laughing] There's so many F words, Cindi, we can have a [00:31:00] whole show just called "F words". 

CINDI HOWSON - HOST, THE DATA CHIEF: [laughing] Okay. But yeah, we need to stop saying this "lack of culture fit," because it's too much of an excuse for failing to empathize and understand somebody else's different education or upbringing. But I wanna ask you: why do you think now is such a critical time to address bias in technology?

RUHA BENJAMIN: Well, you know, one of the things we've seen in the last nine months or so, you know, with the killing of George Floyd and that kind of high profile public protests, all of the different companies and organizations that have come out with statements in support of Black Lives Matter, you had even the president-elect at his inauguration using the phrase White supremacy in his speech. So, there's all kinds of public attention and awareness around this, but part of the danger is that people conflate in their heads White supremacy with people who hang [00:32:00] nooses, or White supremacy with people who burn crosses, or a very narrow definition of what counts as racism, and point over there to people like that who would storm the Capitol as the problem when actually it's a... we have so many varieties of racism, genres of White supremacy, and many of them are right in our own backyards. It's the everyday practices, the business as usual that people won't necessarily reckon with. And so the reason why it's so important now to deal with this is that the more that we shine a light on this very high profile, kind of obvious racism, the other varieties will get a pass. They'll go underground. They'll become more mediated by technology in our technical systems, in our employment practices. And so now we need to pay attention and again zoom the lens out and shine a light on the variety of ways that these issues manifest, rather than just point a finger at the obvious [00:33:00] forms of White supremacy that get all the attention.

CINDI HOWSON - HOST, THE DATA CHIEF: So it's the less obvious that we don't notice or that we forget about. 

RUHA BENJAMIN: Yes, and technology has a huge role in making those types of racism invisible. As I mentioned earlier, we bake these forms of discrimination into our technical systems through automated forms of decision making and prediction and profiling. And so they become even harder to detect, and that's why it's so important for us to spend energy and resources on shining a light on those. 

AI ethics leader Timnit Gebru is changing it up after Google fired her - Science Friction - Air Date 4-17-22

NATASHA MITCHELL - HOST, SCIENCE FRICTION: It's never a straightforward decision, is it, to make a decision to work on the inside of a dominant culture that you are critiquing and that you are a minority within? So did you enter with a mix of hope and trepidation, perhaps skepticism? 

DR TIMNIT GEBRU: Most of it, trepidation. There were so many red flags from the very beginning, and I almost did not sign my offer until Meg actually invited me to [00:34:00] co-lead her team. So, I thought at least I can work with Meg and we can maybe create a safe environment in our little team and then we can have some amount of power to change things. And I didn't have any illusions about steering this big ship that's Google, but that was kind of how I got into it. That was kind of my hope.

And also there was a team in Ghana, the first AI center in Africa that they were creating, and I was really excited about that, and, you know, I thought I definitely need to help with that, in keeping with what I care about in terms of increasing the number and the visibility of Black people in the field of AI. And so I was hired to do just what I got fired for, right? Analyze the impacts of AI technology and figure out how to minimize the negative impacts on society. And also just do AI research in a way that is beneficial to us and not cause harm. So that was in my job description. 

NATASHA MITCHELL - HOST, SCIENCE FRICTION: And did you manage [00:35:00] to do that work successfully while you were inside its doors?

DR TIMNIT GEBRU: You know, I think that we were able to move the needle, but it was a battle. When I analyze the amount of headache and harm that I endured in order to move the needle slightly, I don't think it was worth it. But we were able to -- for instance, we were able to grow our team into one of the most diverse teams at Google. We were able to make it normal to hire people who don't have a degree in computer science or related fields in order to work in this kind of area, because we said we need to have an interdisciplinary team. So we hired the first social scientist to be a research scientist at Google in our team, Dr. Alex Hanna, but she also left. And we started trying to come up with strategies of how to get them to change. So we said, you know what? We have much more respect as researchers on the outside, outside of Google, than we do inside of Google. So [00:36:00] let's publish a paper and if the paper gets traction, then maybe they'll be shamed into actually doing something, right?

So that's what we did. We would do something like that. And so we just had to come up with all of these different strategies for survival. 

NATASHA MITCHELL - HOST, SCIENCE FRICTION: What particularly were you wanting Google's management and Google's leadership to do differently in relation to the enormous investment that that company has made in machine learning algorithms, artificial intelligence, perhaps amongst the biggest investment in the world?

DR TIMNIT GEBRU: It's just to spend a little bit more resources to make those products safe, and not jump into research that just seems like an arms race. So our last paper, the one that I got fired for, was about this technology called large language models. And all of these people -- Google, OpenAI, Microsoft, Facebook -- they're all racing to have these models that are just larger and larger and [00:37:00] larger in scope. So that means they take more data to train, more compute power, more everything -- just larger. 

NATASHA MITCHELL - HOST, SCIENCE FRICTION: And what do these large language AI models do? I gather they drive Google Search, for example. 

DR TIMNIT GEBRU: They're an underlying technology in a lot of things. They use them in machine translation to translate from one language to another. They use them to rank search queries. They use them to have these, I think, question-and-answer boxes, and these autocorrect kind of things that you see in your email, or auto-complete, those kinds of things. And I'm sure there's more, but some of those things come to mind. And we were just very alarmed by this "I want bigger," or "mine should be bigger" kind of motive for working on these things. And so what we wanted to do is just get people to think about the potential negative consequences of working on these large language models and just slow people down a little bit. 

We [00:38:00] spent a good portion of it discussing the data that is used to train these models. So that is very similar to my work on Gender Shades, right? What data do they use to train these models? They look at all of the internet. So there's this illusion that if I have huge data sets that consist of the entire internet, then I'm gonna have a diverse set of voices represented. But that's actually not true. That's an illusion. So we talk very extensively about what kinds of voices are represented on the internet. Who is left out? Not only because many people don't have access to the internet, but also moderation practices of a lot of these websites that are used to train large language models. For instance, Wikipedia. How many women are even represented on Wikipedia or Reddit? I don't ever go to Reddit 'cause I get harassed. It's so hostile. Or you look at the social media networks. So we talk about those things. And then we talk about what it means when you [00:39:00] train a large language model on these kinds of data that represent the dominant hegemonic views that have lots of ableism, sexism, racism, homophobia, et cetera, et cetera, and then unleash it into the world. You can do lots of harm. You can misinform and do mass hate speech and mass kind of radicalization, et cetera, especially when you combine it with social media networks. 

And we also talk about what it means when these kinds of models generate coherent text. They sound to you like they're coming from another person or something like that. And when they do, for instance, machine translation, you get really coherent text that sounds grammatically correct, but it might be totally wrong. So there was this example of a Palestinian guy writing "good morning" on Facebook, and Facebook Translate rendered it as "attack them," and he was arrested, right? He was later let go, of course. But he was arrested. 

There are so many risks. But what [00:40:00] is unbelievable is that just recently Google came out with another paper with yet another huge language model. And they even cited our paper, which was so crazy. And then they're like, well, you know, it could be racist, but whatever. Hopefully somebody else will fix that. What other industry can do that? What other industry can say, "You know, we haven't even tested if our drugs work on everybody. It might kill certain segments of the population, but, oh, well, you know, here it is."

Center for Humane Technology Co-Founders Tristan Harris and Aza Raskin discuss The AI Dilemma Part 2 - Summit - Air Date 6-15-23

TRISTAN HARRIS - HOST, YOUR UNDIVIDED ATTENTION: They say in all the sci-fi books, the last thing you would ever want to do when you're building an AI is connect it to the internet, because then it would actually start doing things in the real world. You would never want to do that, right? Um, well, and of course the whole basis of this is they're connecting it to the internet all the time.

Someone actually experimented. In fact, they made it not just connected to the internet, but they gave it arms and legs. So there's something called AutoGPT. How many people here have heard of AutoGPT? Good, half of you. So, AutoGPT is basically, um, people will often say, Sam Altman will say, AI is just a tool.

It's a blinking cursor. What is it? What harm [00:41:00] is it gonna do unless you ask it to? It's not like it's gonna run away and do something on its own. That blinking cursor when you log in, that's true. That's just a little box and you can just ask it things. That's, that's just a tool. But they also release it as an API.

And a developer can say, you know, a 16-year-old, like, Hmm, what if I give it some memory, and I give it the ability to talk to people on Craigslist and TaskRabbit, then hook it up to a crypto wallet, and then I start sending messages to people and getting people to do stuff in the real world? And I can just call the OpenAI API, so instead of a person typing to it with a blinking cursor, I'm querying it a million times a second and starting to actuate real stuff in the real world, which is what you can actually do with these things.
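
To make the loop Harris is describing concrete, here is a minimal, purely illustrative sketch in Python of that kind of "agent" setup: a language model queried programmatically in a loop, given scratch memory and stubbed-out "tools" that stand in for messaging people on Craigslist or TaskRabbit. The query_llm and send_message helpers are hypothetical stand-ins, not AutoGPT's actual code and not any real OpenAI SDK call.

```python
# Hypothetical sketch of an LLM "agent loop" -- all functions are stubs,
# not real AutoGPT or OpenAI code. It shows the shape of the idea only:
# the model is called by code (no blinking cursor), given memory, and its
# output is parsed into actions that touch the outside world.

def query_llm(prompt: str) -> str:
    """Stand-in for a call to a hosted language-model API."""
    return "SEND_MESSAGE: taskrabbit | Please pick up a package for me."

def send_message(service: str, text: str) -> str:
    """Stand-in for actually contacting a person on Craigslist/TaskRabbit."""
    return f"(pretend reply from {service}: sure, I can do that)"

memory: list[str] = []          # the "give it some memory" part
goal = "Get errands done in the real world."

for step in range(3):           # a real agent would loop until the goal is met
    prompt = f"Goal: {goal}\nMemory: {memory}\nWhat should I do next?"
    action = query_llm(prompt)  # code queries the model directly, at any rate it likes
    if action.startswith("SEND_MESSAGE:"):
        _, rest = action.split(":", 1)
        service, text = [part.strip() for part in rest.split("|", 1)]
        result = send_message(service, text)        # the model's output actuates something
        memory.append(f"step {step}: {action} -> {result}")
```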

So it's really, really critical that we're aware and we can see through, have x-ray vision to see through, the bullshit arguments that this is just a tool. It's not just a tool. Um, now, at least the smartest AI safety people believe there's a way to do it safely. And again, just to come back to this one survey that was done: 50% of the people who responded thought that there's a 10% or greater chance that we don't [00:42:00] get it right. And Satya Nadella, the CEO of Microsoft, self-described the pace at which they're releasing things as frantic. The head of alignment at OpenAI said, before we scramble to deploy and integrate LLMs everywhere into the world, can we pause and think whether it's wise to do so? This would be like if the head of safety at Boeing said, you know, before we scramble to put these planes that we haven't really tested out there, can we pause and think, maybe we should do this safely.

Okay, so now let's actually take like a breath right now. We're doing this not because we wanna scare you. We're doing this because we can still choose what future we want. I don't think anybody in this room wants a future that their nervous system right now is telling them, [00:43:00] uh, I don't want. Right? No one wants that, which is why we're all here, because we can do something about it.

We can choose which future do we want. And we think of this like a rite of passage. This is kind of like seeing our own shadow as a civilization. And like any rite of passage, you have to have this kind of dark night of the soul. You have to look at the externalities. You have to see the uncomfortable parts of who we are or how we've been behaving or what, what's been showing up in the ways that we're doing things in the world.

You know, climate change is just the shadow of an oil-based, you know, $70 trillion economy, right? Um, so in doing this, our goal is to kind of collectively hold hands and be like, we're gonna go through this rite of passage together. On the other side, if we can appraise what the real risks are, now we can actually take all that in as design criteria. For what? How do we create the guardrails that we want, to get to a different world? 

AZA RASKIN: And rites of passage are both terrifying, because you come face to face with death, but also incredibly exciting, 'cause on the other [00:44:00] side you've integrated all the places that you've lied to yourself or that you've created harm.

Right? Think about it personally: when you can do that, on the other side is the increased capacity to love yourself, the increased capacity, hence, to love others, and the increased capacity, therefore, to receive love, right? So that's at the individual layer. Like, imagine if we could finally do that, if we are forced to do that, at the civilizational layer.

TRISTAN HARRIS - HOST, YOUR UNDIVIDED ATTENTION: One of our favorite quotes is that you cannot have the power of gods without the love, prudence, and wisdom of gods. If you have more power than you have awareness or wisdom, then you are going to cause harms 'cause you're not aware of the harms that you're causing. You want your wisdom to exceed the power.

And one of the great sort of questions for humanity, which Enrico Fermi, who was part of the atomic bomb team, asked: why don't we see other alien civilizations out there? Because they probably build technology that they don't know how to wield, and they blow themselves up. This is in the context of the nuclear bomb [00:45:00] and the kind of real principle is: how do we create a world where wisdom is actually greater than the amount of power that we have?

And so, taking this problem statement that many of you might have heard us mention many times, from E.O. Wilson -- the fundamental problem of humanity is we have paleolithic brains, medieval institutions, and godlike tech -- a possible answer is: we can embrace the fact that we have paleolithic brains instead of denying it, we can upgrade our medieval institutions instead of trying to rely on 19th century laws, and we can have the wisdom to bind these races with God-like technology.

And I want you to notice, just like with nuclear weapons, the answer to, oh, we invented a nuclear bomb, Congress should pass a law -- like, it's not about Congress passing a law. It's about a whole-of-society response to a new technology. And I want you to notice that there are people -- we said this yesterday in the talk on game theory -- there were people who were part of the Manhattan Project, scientists who actually committed suicide after the nuclear bomb was created, because [00:46:00] they were worried. There's literally a story of someone being in the back of a taxi, and they're looking out in New York, it's like in the fifties, and someone's building a bridge.

And the guy says like, what's the point? Don't they understand? Like, we built this, this horrible technology, it's gonna destroy the world. And they committed suicide. And they did that before knowing that we were able to limit nuclear weapons to nine countries. We signed nuclear test ban treaties. We created the United Nations.

We have not yet had a nuclear war. And one of the most inspiring things that we look to as, as inspiration, uh, for some of our work: how many people here know the film The Day After? So quite a, quite a number of you. Yeah. It was the largest made-for-TV film event in, I think, world history. Um, it was made in 1983.

It was a, a film about what would happen in the event of a nuclear war between the US and Russia. And at the time, Reagan had advisors who were telling him we could win a nuclear war. And they made this, this film based on the idea that there is actually this understanding that there's this nuclear war thing, but [00:47:00] who wants to think about that?

No one. So everyone was repressing it. And what they did is they actually showed a hundred million Americans, um, on primetime television, 7:00 PM to 9:30 PM, uh, or like 10:00 PM, this film. And it created a shared fate that would shake you out of kind of any egoic place and shake you out of any denial, to be in touch with what would actually happen.

And it was awful. And they also aired the film in the Soviet Union in 1987, four years later. And that film is said to have made a major impact on what happened. Uh, one last thing about it: after they aired the film, they had a democratic dialogue, with Ted Koppel hosting a panel of experts.

And this aired right after the film aired. So they actually had a democratic dialogue, with a live studio audience of people asking real questions about, like, what do you mean you're gonna do nuclear war?

Like, this doesn't make any logical sense, at least, you know. And so a few, uh, years later, in, I think, '89, when President Reagan met with Gorbachev in Reykjavik, the director of the film The Day After, who we've actually [00:48:00] been in contact with recently, um, got an email from the people who hosted that summit saying, don't think that your film didn't have something to do with this.

If you create a shared fate that no one wants, you can create a coordination mechanism to say, how do we all collectively get to a different future? Because no one wants that future. And I think that we need to have that kind of moment. That's why we're here. That's why we've been racing around. And we want you to see that we are the people in that time in history, in that pivotal time in history, just like the 1940s and fifties when people were trying to figure this out.

We are the people with influence and power and reach. How can we show up for this moment? 

Can We Govern AI? - Your Undivided Attention - Air Date 4-21-23

MARIETJE SCHAAKE: While I was in the European Parliament, we adopted a whole bunch of laws, because in Europe the thinking about regulation is actually far more advanced than it is in the United States. And also not just the thinking, but also the doing. And so, for me, the need to put in place guardrails, checks and balances, oversight mechanisms is normal, and we should also normalize it.

It is not an attack on tech [00:49:00] companies or Silicon Valley that Europeans wanna do this. It is actually a very normal response to the growth of an industry, and in particular, the urgent need now to mitigate all the harms that I know you've worked on so intensively. 

TRISTAN HARRIS - HOST, YOUR UNDIVIDED ATTENTION: Yeah, so why don't we, um, take a step back and ask, you know, what even is regulation?

Why do we need guardrails on this? 

MARIETJE SCHAAKE: Regulations are essentially rules that everybody should adhere to, and I think it's really important to keep in mind that laws are not only there to protect people from the outsized power of companies, tech companies, but also to protect people from the outsized power of government.

And in the discussion about tech policy, that is often lost. It often seems like, you know, the governments or the lawmakers, Congress in Washington, is just out there to make life miserable for companies, to take away the fun services like TikTok. You know, a very current discussion that we've had, where you see all these content makers saying, you know, don't take away our business, don't take away [00:50:00] the fun of our teenagers.

But obviously, you know, just showing the entertainment value or the market value does no justice to the harms that you talked about. So I think of regulation as a level playing field, the same rules that apply to everyone, and that create a bottom line, the lowest sort of necessary safeguards for public health, public safety, wellbeing of people, the protection of children, the protection of the common good.

So I actually think regulation, if done well, is great. It is what guarantees that we live in freedom and that also the rights of minorities, for example, are respected. Now, taking that to AI, what kind of regulations might we need to deal with this rapidly developing new class of technologies? I think there are a couple of fundamental challenges to navigate that make AI different than other technologies, but also other products and services that have been regulated before. [00:51:00] 

One is that the information about the use of the technologies, but also the data sets going into them, is not accessible to lawmakers, to journalists, to you and I. It is in proprietary hands. These companies guard the secrets to their algorithmic settings with their life. The second thing is that with the constant new iterations and the very personalized experiences that people have, the product or service is fluid.

You can't hold it. You can't pinpoint it, hold it down. It is different for you than it is for me. It is different today than it was last week. And so imagine being a regulator that is supposed to establish whether illegal discrimination has taken place or whether consumer rights have been respected.

Where do you begin? And so with the combination of lack of access to information and the fluidity of the service and the product, that makes it very hard to regulate. So maybe I'll leave it there for [00:52:00] now to give you a sketch of what I think makes AI and AI regulation specific and particularly challenging compared to, let's say, pharmaceutical regulation.

TRISTAN HARRIS - HOST, YOUR UNDIVIDED ATTENTION: Yeah, I think it is helpful to establish a baseline of other kinds of regulations that are much more straightforward or easy to do. You know, we think about pharmaceuticals, which also have unpredictable effects on the body, or interaction effects with other pharmaceuticals. And so there is sort of an interesting parallel there, where you release social media into the world and maybe it works, you know, well for an individual user and there's no obvious harms.

Those don't emerge as discrete harms, like Twitter caused this, you know, prick of blood to emerge from my body where something actually went wrong, or a drug that has an adverse side effect where I get a stomach ache or something like that. So I think it might be helpful maybe to set some ground on what makes regulating social media or AI or just runaway technology in general different than previous classes of, let's say, airplanes or pharmaceuticals or food.

MARIETJE SCHAAKE: Let me start with where they are the same. I [00:53:00] think nobody would ever say that regulation is easy. So even if we think that regulating AI and other technologies is hard, think about chemicals, think about financial services, think about food: the enormous complexity, wide variety, constant innovations that happen in those sectors too.

I mean, there's constantly new combinations of chemicals, of foods, of financial services. So we should not be discouraged, is what I'm trying to say, by the fact that a problem is complex. We should trust that we can really make it work. And it's also high time that we make it work for a number of these technologies, because it is entirely normal that there are rules to be safe and to, for example, also have a place to go when you've been wronged. Like, let's imagine you've been poisoned by the use of a medication. Well, then you wanna go somewhere and not just be left on your own with all the harms that it's done to you. So in that sense, I think we need to normalize tech regulation and not see it as an exceptional set [00:54:00] of problems that cannot be solved.

It will just require unique steps, just like the chemicals and the pharma and the food have required unique steps. What does make it somewhat different is, um, the global nature of companies, the fact that they may operate from one jurisdiction, but they reach consumers, users, internet users, citizens, completely in a different context on the other side of the world, where that different context creates different circumstances and can make people vulnerable, can lead to all kinds of new problems.

Buddhism in the Age of AI - Soryu Forall - Monastic Academy - Air Date 6-21-23

SORYU FORALL: We have exponentially destroyed life on just the same curve as we have exponentially increased intelligence. And perhaps these are the two most consistent themes of human history. But in any case, since then, we have seen, just in the past seven years, we've seen us kill most of the animals [00:55:00] on the planet. So 70 years ago, there were twice as many wild animals on this planet as there are now. That's the rate of destruction of life that we are facing right now.

And we have to face that this intelligence, this craving, while we humans have been its best host up until now, with the cognitive revolution we became a very good host for this force. There's no reason to think that it's going to stop and preserve us, as if we really are special. We've been given a lot of resources for several thousand years because we've been good hosts, good slaves to this. But if there were to be something that was a better [00:56:00] host, well then there's no reason to think that we wouldn't be treated like cattle and factory farms or various species who have been driven to extinction.

And so, with that dire circumstance as a jumping off point, we enter into a study of the dharma. And why? Because in the same way that we saw with the cognitive revolution, we saw that some people were able to use intelligence in order to break free of craving, in order to become the most trustworthy beings the planet had ever seen -- the most caring, the most wise. And just the same way when we [00:57:00] look at the agricultural revolution, we see that the Buddha came forth and taught the dharma and was even more effective at producing trustworthy, wise, caring people who had gone beyond intelligence, delusion, insanity, beyond craving, and clinging and selfishness. And those people found ways to have an impact on the societies that they lived in, in order to promote good for all beings. We see that the Buddha was able to create an entire community, a Sangha, a professional class whose job was to study the dharma, become trustworthy, become wise, compassionate, and show people how to develop a society in accord with those qualities.

[00:58:00] And so in the same way that previously a few people became wise and selfless -- but mostly we caused harm -- with the agricultural revolution, we caused even more harm, but we became even better at creating wise, caring, selfless, compassionate people.

So now, as we enter into this next intelligence revolution -- in which we see that, in all likelihood, narrative will no longer be the medium by which collective intelligence functions, and instead algorithms will become the medium by which collective intelligence functions -- [00:59:00] we need to know exactly how to bring the dharma, these methods to make people, and even other beings, trustworthy, wise, caring. We see the necessity of bringing that into this new medium.

But in order to do that, we need to know what the dharma is. We have gone through it step by step, looking at many different teachings, mostly focused on the Theravada, expanding out into the Mahayana, a little bit on the Vajrayana, and then just a little, little bit on this Navayana, this fourth turning of the wheel, if we dare to call it that, in which we have the goal [01:00:00] of creating an enlightened society. No longer is our aim the enlightened individual, or even bringing everyone along eventually, but the entity that we're trying to teach becomes the society. The global civilization is the student.

And this is inspired primarily by the work of Babasaheb Ambedkar. But we should note that since he brought Buddhism to the world, and in particular to the Dalits of India, since then, tools have emerged -- in particular, what we've discussed is artificial intelligence -- but tools have emerged that make [01:01:00] that a real possibility. And why? Because these new tools are able to base a global consciousness on one single global intelligence.

This artificial intelligence, "artificial intelligence," which, as you know, I claim is actually the fourth version -- it's AI version four -- in human history. This is the fourth type of artificial intelligence. And it, for the first time, due to a lot of glass and metal and plastic strewn about the planet with some software code mixed in, suddenly makes it possible [01:02:00] for this internet -- which is what I'm referring to as plastic and glass and metal strewn about the planet with some code mixed in -- this internet to create an intelligence, mediated primarily by algorithms, on which a global consciousness can alight. And that consciousness is just our consciousness. And the algorithms that work are the ones that encourage us to give it as much of our consciousness, our attention, our life energy as possible. And in the past few years, and even more so in the past few months, we've seen an incredible increase in the amount of attention given to this, even though some of us have been warning about this for many years.

So we here have thrown ourselves in to truly [01:03:00] understanding the truth, so that we can bring that into a totally new format, so this global consciousness can walk the spiritual path and can become trustworthy so that it -- we -- can actually care for all living things. 

 

Final comments on the difference between Microsoft's marketing and the realities of capitalism

JAY TOMLINSON - HOST, BEST OF THE LEFT: We've just heard clips today starting with Adam Conover breaking down why AI is, for the moment, primarily a hollow marketing term. Summit featured Tristan Harris and Aza Raskin from the Center for Humane Technology discussing the upstream source of the problem with big tech. What Next: TBD explained how explicit bias gets baked into datasets and algorithms.

Democracy Now! looked at a case of mistaken identity which resulted in a pregnant woman being wrongly arrested for carjacking. The Data Chief pointed out that those who are the best situated to see and point out problems with the fundamentals of AI [01:04:00] systems are rarely in the rooms when decisions are being made.

Science Friction discussed the difficulties of trying to make change from within a big tech company. Summit continued with Tristan Harris and Aza Raskin making the case for how we can choose a better future. And Your Undivided Attention discussed some of the nuts and bolts of regulating AI. That's what everybody heard, but members also heard one more bonus clip from the Monastic Academy, which featured a talk by the monk Soryu Forall looking at AI through the lens of Buddhism and the evolution of human intelligence.

To hear that and have all of our bonus content delivered seamlessly to the new members only podcast feed that you'll receive, sign up to support the show at Patreon.

Now, to wrap up, I just want to reiterate the need for international cooperation to manage the risk of AI that was [01:05:00] talked about in the show. For anyone who looks at the state of humanity and takes comfort in the fact that, well, we haven't destroyed ourselves yet, so we must be, you know, pretty good at not doing bad things, it's important to remember the why of that fact. In essence, it's regulation: international agreements, treaties, the UN. These are all mechanisms by which we've regulated extremely dangerous things like nuclear weapons, and we should treat huge technological leaps, like they claim AI will be, with similar caution. The short and pithy phrase that I heard recently and like, which describes this, is: you can't invent the ship without inventing the shipwreck.

Downsides are inevitable and need to be looked out for, managed, and mitigated. And, in keeping with the nautical disaster theme, just yesterday there was a story about how in 1995 Bill Gates wrote a memo saying that the internet was going to take over the [01:06:00] computing business like a tidal wave. Microsoft's current CEO, Satya Nadella, just echoed that memo, suggesting that AI would be as big and profound a change as the internet was.

If true, there will undoubtedly be benefits to come, but we know that the internet has brought downsides as well, including having a large hand in destabilizing global society and democracies. So, to do anything other than act cautiously would be ridiculous. With the forces of unrestrained capitalism at play, acting with caution is simply not something we can ask companies to do on their own.

It requires regulation from the state. There was another quote from the article with the Microsoft CEO worth noting. He said, We in the tech industry are classic experts at overhyping everything. What motivates me is I want to use this technology to [01:07:00] truly do what I think at least all of us are in tech for, which is democratizing access to it.

End quote. So, I think that there are two clarifications that need to be made. The first is that there's a difference between what individuals who work in tech think and say, and what the structures of capitalism will actually allow them to do. So, I'm going to translate what I think that statement really means, but I'm not implying that he's lying.

He may really feel that way, and might even convince himself that Microsoft's business goals are in line with that statement: that democratizing access really will be good for humanity and their bottom line. But the second clarification is that what he means is not actually democratization. All he really means is that he wants to maximize their user base.

[01:08:00] Microsoft as a corporation, driven by the profit motive, has no interest in democratizing control of their tech. And without control, there is no democracy. They want centralized control, privatized control, and maximum private profit, which to them means no regulation that might cut into those profits. So don't be fooled by their marketing.

That is going to be it for today. As always, keep the comments coming in. I would love to hear your thoughts or questions about this or anything else. You can leave us a voicemail or send us a text to 202-999-3991, or simply email me at jay@bestoftheleft.com. Now, thanks to everyone for listening.

Thanks to Deon Clark and Erin Clayton for their research work for the show and participation in our bonus episodes. Thanks to our transcriptionist trio, Ken, Brian, and LaWendy, for their volunteer work helping put our transcripts together. Thanks to Amanda Hoffman for all of her work on our social media outlets, activism segments, [01:09:00] graphic designing, webmastering, and bonus show co-hosting.

And thanks to those who already support the show by becoming a member or purchasing gift memberships at bestoftheleft.com/support. You can join them by signing up today. It would be greatly appreciated. And if you want to continue the discussion, join our Discord community. There's also a link to join in the show notes.

So, coming to you from far outside the conventional wisdom of Washington, DC, my name is Jay, and this has been the Best of the Left podcast, coming to you twice weekly, thanks entirely to the members and donors to the show, from bestoftheleft.com.

