#1612 New Tech and the New Luddite Movement (Transcript)

Air Date 2/20/2024

JAY TOMLINSON - HOST, BEST OF THE LEFT: [00:00:00] Welcome to this episode of the award-winning Best of the Left podcast, in which we take a look at why "Luddite" should never have become the epithet that it is, as Luddites were never afraid of or opposed to technological advancement. They only opposed the exploitation of workers and the degradation of society that came with the unfair distribution of the benefits of the targeted technology, which is echoed in the debate over AI and its impact on the future of work today. Sources today include Shift, Left Reckoning, TRT World, jstoobs on TikTok, Factually!, torres, and The Majority Report, with additional members-only clips from Factually! and Alice Cappelle.

The New Luddites - SHIFT - Air Date 2-14-24

BRIAN MERCHANT: The Luddites were a band of rebels, basically; cloth workers in the beginning stages of the industrial revolution who, after trying [00:01:00] various peaceful measures for a long, long time to make sure that their working lives and their identities and their trades were protected from the march of what we would refer to today as automation, found themselves with their backs against the wall. Tech companies of the day -- entrepreneurs of the day -- were using machines that automated their work, machines that did their jobs worse, shoddier and more cheaply. So the cloth workers changed course and they started fighting, and they started doing the thing that would come to define them: smashing the machines that were taking their jobs.

They organized around this fictitious, apocryphal figure, Ned Ludd, who was probably just completely made up, or may have been an apprentice cloth worker who wasn't working as fast as his master wanted him to. So his master had him whipped and he, enraged, took a giant hammer and smashed the machine that he had been working on, then fled into Sherwood Forest like Robin Hood before [00:02:00] him. They were both in Nottingham and Nottinghamshire, and the legend grew. He's Ned Ludd. So the people who followed in his footsteps smashing the machinery of oppression were the Luddites. And they organized themselves into a big guerrilla force that could oppose the forces of industrialization, the elites and the British crown all at once.

JENNIFER STRONG - HOST, SHIFT: The word basically has had a negative connotation ever since. But you wrote an interesting piece for The Atlantic. The term is suddenly in vogue.

BRIAN MERCHANT: Yeah. Yeah. It's being reclaimed by people who are increasingly turning to this history as a little bit of a contextualizing source, because a lot of the things that happened 200 years ago -- the Luddites rose up in 1811, so about 210 years ago -- the conditions are very similar in a lot of ways to what's happening today with AI and gig work, where big companies, powerful people and rich people with a lot of access to resources are using technologies in a certain way. They're adopting [00:03:00] AI in ways that might threaten writers, artists, ordinary workers. The gig app companies are turning to industries that had been organized a certain way for a long time and basically saying, we can do this with a peer-to-peer platform now, where we take a cut of it, but also cut out the part where you get benefits. And it started with Uber and it started with Lyft, and now this gig work model is moving on to bigger sectors of the economy, like health care and so forth. And all the while, workers are feeling like it's pushing them into more precarious situations.

So a lot of people are recognizing these forces and the similarities to the original Luddites, and they're saying, Hey, the Luddites were not dummies. The Luddites, in fact, used technology in their lives every day for hundreds of years. If you're a cloth worker, if you're a weaver using a hand loom in a cottage, or if you're a stocking frame worker, using it to knit goods, or if you're working a cloth finishing device, a gig mill -- all of these [00:04:00] things were technologies that they had really firsthand knowledge of how to use, and they were opposing these changes, not because they hated the technology, but because they hated how it was being used against them. 

And so we find ourselves in a similar situation today, where artists are saying, Hey, wait a minute. I've talked to many of them in my work as a journalist who are seeing up to 50 percent of their work dry up, because they used to draw illustrations for a company that can now do that with Midjourney; or a copywriter who used to work for a corporation that can now use ChatGPT to get some approximation of their work.

And so these already precarious jobs are drying up a little bit, in a way that is very contested because -- just like the cloth workers 200 years ago, who complained about the quality and the methods of their work being stolen by the machines, and all the labor that they'd put into building the reputation of England's cloth industry, which the machine owners were capitalizing on, [00:05:00] automating large parts of it, using children to run the machines for less cost and then putting out an inferior product -- well, artists today say, Hey, all of this stuff has been stolen from work we've put on the internet. It's been vacuumed up into these systems that a tech company is going to profit off of to churn out an inferior product. And it's going to not only deprive us of a chance to make a living selling our work, but it's also going to drive down the market in general, the prices for this stuff, 'cause now we're competing with an automated system.

So the similarities are so numerous, and the Luddites, I think, are increasingly seen as sympathetic and not reactionary or dumb, because really that mischaracterization is the result of a propaganda campaign, to put it bluntly, by the Crown, which had a real interest in discrediting the Luddites, because when they first rose up they were really popular. People would join them in the streets, people from other trades who weren't under an obvious threat of being automated away in their workplaces -- [00:06:00] the steel workers, the coal workers, the shoemakers, hatters -- they would all come out and they'd join the Luddites. And even some of the authorities at the time would just let them attack the shops. Part of that is because the Luddites were very tactical and, at first especially, they would only destroy the machines that were automating their work. They would leave all the other machinery that didn't disrupt the social contract in place, and they would write threatening letters explaining why they were doing what they were doing. They'd give the factory owners an opportunity to take down the automating machines, and they would be blunt. They said, You have erased 300 of our brothers' jobs. Take down the machines or you'll get a visit from Ned Ludd. That's how they would write their letters and go about their campaign. And if they complied, Ned Ludd's army wouldn't show up. But if they didn't, if they kept the machines up, then they'd either sneak through the window or hold up the overseer at gunpoint, and they'd take that hammer to those machines.

And it was twofold. It was a symbolic tactic, saying "These are the machines being used by the rich to get even richer. If we destroy them, we're also dealing a [00:07:00] blow to these forces that are making society more unequal, less just." And so people could get behind that. 

They were also very tactical. You destroy the machine that can do the work that has caused you to lose a job.

Economies were much less complex. We didn't have globalization to the same degree. It was: your town was probably a cloth-producing town if you were in the Luddite sphere of influence. If you smash the machine in the factory that's doing your job, then they can't use it to take your job anymore.

So, it did serve both purposes. And they were really popular. So people were cheering for them. So the state had to come in and say, Look at these people, they're destroying their own industry, they're dummies, they're against progress. They're fighting against technology and advancement in general. They're deluded. That was the favorite word: they were deluded. And they would always suggest that they were under the influence of some malcontented leader, because common people at the time couldn't be trusted to act of their own accord or to understand what was bad for them.

Being a Luddite Is Good, Actually ft. Jathan Sadowski - Left Reckoning - Air Date 5-29-21

JATHAN SADOWSKI: Luddism was a glorious moment of [00:08:00] solidarity and collective action by workers. The reason that, on TMK, Ed and I are really trying to bring back Luddism is because we think there's a lot of lessons to be learned in terms of how we think of tech criticism -- going back to the beginning of our conversation -- understanding tech criticism as something that is fundamentally adversarial, something that should be dangerous to the interests of capital, to the material conditions of capital, while raising up the material conditions of workers.

And at the same time, understanding that the reason why the good name of Ned Ludd has been dragged through the mud is because of the capitalists, really, saying: this is actually threatening to us; we need to assassinate the character of this.

Actually, the Luddite uprising was one of the first instances of capital asking the state to bring in the army to suppress workers. And they did. [00:09:00] The army came in and killed so-called Luddites and made Luddism a treasonous act, because it was so threatening to the interest of capital.

MATT LECH - HOST, LEFT RECKONING: It's a lesson about activism too, because there's this way that activism is defined: you either have nonviolence or you have terrorism. And sabotage is something different between those two, and it's, I think, very threatening to the interest of capital. So we shouldn't be surprised at this reaction to Ludd.

JATHAN SADOWSKI: I could go off on this and the linkages to sabotage, because I think, fast forward a hundred years later from the Luddites, and you've got the IWW, the Industrial Workers of the World, and you've got people like Elizabeth Gurley Flynn writing a brilliant pamphlet in defense of sabotage, saying that, like the strike, sabotage is a necessary tool in the workers' arsenal against capital. Like the strike, we should not moralize about [00:10:00] sabotage, we should not look down upon or question the motivations of workers who engage in sabotage, but instead understand why they do what they do and how we might support what they do.

And that is in the same way, I think, what we see of Luddism is it is really about understanding and asking really critical questions. Does this technology contribute to social welfare? Does it contribute to socially beneficial ends? I've called it like the Marie Kondo of techno politics. You hold up this technology, you ask those questions, and if the answer is no, then you throw it in the trash. And we need to get more comfortable with understanding technology as something that is not only political, not only human made, but it's something that therefore can be unmade, can be deconstructed and dismantled by people for good ends. Just because something exists doesn't mean it deserves to exist. 

And I think that's the myth of determinism that we are [00:11:00] sold: that all we need is innovation -- more stuff on top of more stuff on top of more stuff. I think we need to start asking more questions about why we have this stuff and if it deserves to exist.

DAVID GRISCOM - HOST, LEFT RECKONING: I think that's on point. And a lot of the way that Luddism is popularly understood is as people who have severe fears of technology, or people who don't understand technology, when in fact the picture that you're painting right here is of people who very much understand technology and what it's doing to them and to their community.

And it's something that should be replicated much more because, especially if you read tech journalism -- I know there's a lot of great folks out there who do good tech journalism, but a lot of it is just, as you were saying earlier, repeating press releases, acting as if these things have no effect on a material reality, but are just expressions of us uncovering the truth of the being, the truth of the world, uncovering the tech that is just always out there instead of a very specific process that is bringing about a particular end. 

And I think it's important for us to [00:12:00] understand, not just as workers, but also in politics, because one thing that was so frustrating, for example, is Andrew Yang's campaign in 2020. The way that he talked about unemployment was: this is just the consequence of the wheels of history moving in a certain way, because technology and robots and artificial intelligence are just reaching a point, and there's nothing we can do to stop it. But we'll just put a bandaid on the bottom, right? It was very attractive to some people. Working people especially were attracted to it, because they're seeing, oh, my job is becoming more automated, there is more surveillance than I've ever experienced. And it's a great tool for the wealthy and people like Bezos who say, yeah, this is just the natural order of things, rather than, no, the technology is being developed in a certain way because you have been historically put onto this lower rung -- the working class in this country has been devastated for decades and decades. So technology is being used to brutalize you and to turn you more and more into a machine.

Again, these things are not just coming out of nowhere. 

JATHAN SADOWSKI: A lot of the coverage around [00:13:00] Amazon over the last year or so, I think, also really shows that the conditions that the Luddites were originally reacting against sound a whole lot like an Amazon warehouse, right?

And it's the fact that these things in capitalism continue to replay themselves. These same relationships continue to replay themselves, but in ways that just are ever intensifying. And I think that the reaction to them also demands an equal and opposite reaction, right? One that is increasingly intensifying to meet capital where it is trying to meet us.

Why this top AI guru thinks we might be in extinction level trouble | The InnerView - TRT World - Air Date 1-22-24

IMRAN GARDA - HOST, THE INNERVIEW: You're sounding the alarm. Geoffrey Hinton, seen as the founder or father or godfather of AI, he's sounding the alarm and has distanced himself from a lot of his previous statements. Others in the mainstream are coming out, heavily credentialed people who are the real deal when it comes to AI. They're saying we need guardrails. We need regulation. We need to be careful. Maybe we should stop everything. [00:14:00] Yet OpenAI, Microsoft, DeepMind -- these are companies, but then you have governments investing in this. Everybody's still rushing forward, hurtling towards a possible doom. Why are they still doing it despite these very legitimate and strong warnings? Is it only about the bottom line and money and competition, or is there more to it?

CONNOR LEAHY: This is a great question, and I really like how you phrased it -- you said they were "rushing towards" -- because this is really the correct way of looking at this. It's not that it is not possible to do this well. It is not that it's not possible to build safe AI. I think this is possible. It's just really hard. It takes time. It's the same way that it's much easier to build a nuclear reactor that melts down than to build a nuclear reactor that is stable. Like, of course, this is just hard. So, you need time, and you need resources to do this.

But unfortunately, we're in a situation right now where, at least here in the UK, [00:15:00] there is currently more regulation on selling a sandwich to the public than on developing potentially lethal technology that could kill every human on Earth. This is true. This is the current case. And a lot of this is because of slowness -- governments are slow, people don't want to act, and there are vested interests. Pushing AI further makes you a lot of money. It gets you famous on Twitter. Look at how these people are rock stars. Someone like Sam Altman is a rock star on Twitter. People love these people. They're like, Oh yeah, they're bringing the future. They're making big money, so they must be good.

But like, I mean, it's just not that simple. Unfortunately, we're in a territory where we all agree, somewhere in the future, there's a precipice which we will fall down if we continue. We don't know where it is. Maybe it's far away, maybe it's very close. And my opinion is, if you don't know where it is, you should stop. While other people, [00:16:00] who, you know, gain money, power, or just ideological points... like, a lot of these people, it's very important to understand, do this because they truly believe, like a religion, they believe in transhumanism, in the glorious future where AI will love us, and so on. So there's many reasons. But, I mean, yeah, a cynical take is just I could be making a lot more money right now if I was just pushing AI. I could get a lot more money than I have right now. 

IMRAN GARDA - HOST, THE INNERVIEW: How do we do anything about this without just deciding to cut the undersea internet cables and blow up the satellites in space and just start again? How do you actually, because this is a technical problem, and it's also a moral and ethical problem. So, where do you even begin right now, or is it too late? 

CONNOR LEAHY: So, the weirdest thing about the world to me right now, as someone who's deep into this, is that things are going very, very bad. We have, you know, crazy, [00:17:00] you know, just corporations with zero oversight just plowing billions of dollars into going as fast as possible with no oversight, with no accountability, which is about as bad as it could be. But somehow we haven't yet lost. It's not yet over. It could have been over. There's many things where it could be over tomorrow. But it's not yet. There is still hope. There is still hope. I don't know if there's going to be hope in a couple of years or even in one year, but there currently still is hope.

IMRAN GARDA - HOST, THE INNERVIEW: Wait, hold on. One year? I mean, that's... come on, man! I mean, we're probably going to put out this interview like a couple of weeks after we record it. A few months will pass. We could all be dead by the time this gets 10,000 views. I mean, just explain this timeline. One year. Why one year? Why is it going so fast that even one year would be too far ahead? Explain that. 

CONNOR LEAHY: I'm not saying one year is, like, guaranteed by any means. I think it's unlikely, but it's not impossible. And this is important to understand: [00:18:00] AI and computer technology is an exponential. It's like COVID. This is like saying, in February, you know, 'a million COVID infections! That's impossible! That can't happen in six months!', and it absolutely did. This is kind of how AI is as well. Exponentials look slow. They look like, you know, one infected, two infected, four infected -- that's not so bad. But then you have 10,000, 20,000, 40,000, you know, 100,000, within a single week. And this is how this technology works as well. There's something called Moore's Law, which is not really a law, it's more like an observation: every two years, our computers get (there are some details, but) about twice as powerful. So that's an exponential. And it's not just that our computers are getting more powerful; our software is getting better, our AIs are getting better, our data is getting better, more money is coming into this field. We are on an [00:19:00] exponential. This is why things can go so fast. So while, you know, it would be weird if we would all be dead in one year, it is physically possible. You can't rule it out if we continue on this path.
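
To make the doubling arithmetic concrete: a minimal, purely illustrative sketch (not from the interview) of a quantity that doubles every fixed period, using the rough two-year Moore's Law figure Leahy cites. The function name is invented for this example.

```python
# Illustrative only: growth of a quantity that doubles every fixed period,
# the rough "Moore's Law" arithmetic described above.

def growth_factor(years: float, doubling_period_years: float = 2.0) -> float:
    """How many times larger a doubling quantity becomes after `years`."""
    return 2 ** (years / doubling_period_years)

for years in (2, 4, 10, 20):
    print(f"after {years:>2} years: {growth_factor(years):,.0f}x")

# after  2 years: 2x
# after  4 years: 4x
# after 10 years: 32x
# after 20 years: 1,024x
```

The point falls straight out of the arithmetic: the early steps (2x, 4x) look unremarkable, but the same rule, left running, produces a thousandfold change in twenty years.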

IMRAN GARDA - HOST, THE INNERVIEW: The powerful people who can do something about this, especially when it comes to regulation, when you saw those congressmen speaking to Sam Altman, they didn't seem to know what the hell they were talking about. So how frustrating is it for you that the people who can make a difference have zero clue about what's really going on?

And more important than that, they didn't seem to want to actually know. They had weird questions that made no sense. And, uh, so you're thinking, Okay, these guys are in charge. I mean, no wonder the AI is gonna come and wipe us all out. Maybe, maybe we deserve it.

CONNOR LEAHY: Well, I wouldn't go that far. But, um, this used to annoy me a lot. This used to be extremely frustrating. Um, but I've come to peace with it to a large degree, because the thing that I've really found is that [00:20:00] understanding the world is hard. Understanding complex topics and technology is hard, not just because they're complicated, but also because people have lives. And this is okay. This is normal. People have families. They have responsibilities. There's a lot of things people have to deal with, and I don't shame people for this. You know, like, I have turkey with my family every Thanksgiving or whatever, and, you know, my aunts and uncles, look, they have their own lives going on. They maybe don't really have time, you know, to listen to me give them a rant about it, so I don't.

So, I have a lot of love and a lot of compassion for the fact that things are hard. This, of course, doesn't mean that solves the problem. But I'm just trying to say that it is, of course, frustrating to some degree that there are no adults in the room. This is how I would see it. There is sometimes a belief that somewhere there is someone who knows what's going on. There's an adult who's got it all under control, you know, someone in the government. They've got this under control. And as someone who's tried to find that person, I can tell you [00:21:00] this person does not exist.

The truth is, the fact that anything works at all in the world is kind of a miracle. It's kind of amazing that anything works at all with how chaotic everything is. But the truth is that there are quite a lot of people who want the world to be good. You know? They might not have the right information. They might be confused, they might be getting lobbied by various people with bad intentions, but most people want their families to live and have a good life. Most people don't want bad things to happen. Most people want other people to be happy and safe. And luckily for us, most normal people -- so not elites, not necessarily politicians or technologists, but normal people -- do have the right intuition around AI, where they see, like, Wow, that seems really scary. Let's be careful with this. And this is what gives me hope.

So when I think about politicians not being in charge, I think this is now our responsibility as citizens of the world: [00:22:00] we have to take this into our own hands. We can't wait for people to save us.

This is not good - jstoobs (TikTok) - Air Date 2-16-24

MEGAN CRUZ - HOST, THE BROAD PERSPECTIVE: I'm becoming increasingly convinced that we are headed towards an artistic apocalypse. I know that sounds dramatic, but this technology should scare absolutely everyone. In case you hadn't heard, yesterday OpenAI announced their newest technology. They're calling it Sora. It is a text-to-video engine that allows people to use word prompts to create photorealistic video.

Every single video in this thread is 100 percent AI generated. And it absolutely cannot be stressed enough that every single AI generator that exists is trained off of existing art. It is trained off of art and words and writing that they often don't obtain permission from the original artists to use before they feed them into their generators to learn how to recreate their work, essentially.

But before I get into all of the many ways that this is objectively horrifying from a human and artistic standpoint, I want you to think about all of the immediate real world applications of this technology. Think about the way this technology could be used in a surveillance state. Think about the way this technology could be used in a court of law with a potentially corrupt [00:23:00] legal system and law enforcement system.

This technology absolutely will be used to inflict trauma and humiliation by way of things like AI deepfake porn or anything else that you can think of that is humiliating or degrading. The worst things you can think of that could be used with this technology, I guarantee you will be used with this technology.

And the ease of use and accessibility means that children will inevitably have access to this as well. There are a million terrifying applications for this. I cannot even imagine how one would protect themselves from scams or identity theft with technology like this. Of course, there'll be rampant political propaganda, AI deepfakes. Something absolutely everyone needs to consider is the way that this technology could be used to discredit the validity of things like genocide that are happening in the world right now. If you think Holocaust deniers are bad now, imagine when technology like this is normalized and people can see real footage of human suffering being inflicted by, say, an authoritarian government, and say, Well, who even knows if that's real? Or worse, governments using this technology to create artificial evidence of atrocities they claim happened but have no evidence of.

Like, this is [00:24:00] absolutely terrifying in terms of the misinformation that is possible. And I personally find it absolutely disgusting that this technology barely exists -- it's in its infancy -- and the very first application that people want to use it for in this world is to eliminate artists.

I've seen so many people say, Oh, well, it'll never really be able to replace humans. We'll always know the difference. And the thing is, no, we won't. The way that this technology has advanced in a single year is absolutely astounding. Any deficiency you can think of right now that you can say, Oh, well, it can't do this thing that a human can do. It will learn way faster than you think it will. Artists have been underpaid and undervalued since forever, and it's only gotten worse with the rise of film as the most prolific art form of the modern age.

I think film is great, but it is inextricably tied to capitalism. Art and entertainment are kind of indiscernible from one another sometimes because of the world we live in and the fact that film is art, but film is also a capitalist endeavor meant to make money. Art is born out of passion, and it inspires passion. I just don't understand why that would ever be something that we would want to fucking streamline. People don't go into [00:25:00] art to become rich and famous. I'm sure it's nice when that does happen, but most artists feel a burning need to create. And that comes from needing to connect with people, needing to make sense of the cruel and chaotic senselessness of our existence, to find meaning in this fucking world. And I'm so sorry. But being good at putting sentences together to throw into a generator that's going to spit art out on the other side is not what that is. 

This is the thing about automation and the way AI is going to eventually be used in all industries, is that it is fundamentally stripping us of our reasons to be alive. Like, there are studies that show that once people retire, they die earlier. Work, even busy work, even monotonous bullshit jobs that we hate, give us a reason to get out of bed in the morning and live. And I am all for finding a way to use this technology to improve people's lives, to make it so they have to work less and live more, but that is not the direction that this is going.

And the scariest thing about all this to me -- not just as someone who identifies as an artist, not just as someone who really believes that art is one of, if not the most meaningful [00:26:00] things you can devote your life to, but just as a human who has existed in this world that has become increasingly more isolated, increasingly more digital, increasingly more performative -- is that I look around and I see the average person does not give a fuck about any of this. Because for over a decade now, we've had this conditional programming, this dopamine hit after dopamine hit; this artificial identity that we have to construct for ourselves to perform on the internet all the time is becoming increasingly our real identity, and people don't care about the real fucking world.

All of this is so strategically designed to stimulate the pleasure triggers in our brain to slowly turn us into these perfect, docile consumers who don't need real world comforts because we have virtual fun. Like, it's fine that we're all getting poorer and nobody can afford anything because we don't actually need real money to do most of the things we want to do online. 

Whether or not you believe it's intentional, this is what's happening. We're being stripped of the things that make us human. Our sense of community has crumbled as we've become more isolated in this digital space and now our sense of artistic expression is being replaced by the literal click of a button. It is so dystopian.

The ACTUAL Danger of A.I. with Gary Marcus Part 1 - Factually! - Air Date 7-2-23

ADAM CONOVER - HOST, FACTUALLY!: Please walk us through your other [00:27:00] proposals for regulating AI. 

GARY MARCUS: So, the next thing would be global AI governance. I think we need to coordinate what we're doing across the globe, which is actually in the interest of the companies themselves.

You know, large language models are expensive to train and you don't want to have 195 countries, 195 sets of rules, requiring 195 bits of violence to the environment because each of these is so expensive and so energetically costly. So you want coordination for that reason. The companies don't really want completely different regimes in each place. And ultimately, as things do get more powerful, we want to make sure that all of this stuff is under control. And so I think we need some coordination around that. 

Next thing I would suggest is something like the FDA, if you're going to deploy AI at large scale. So it's one thing if you want to do research in your own lab, but if you're gonna roll something out to 100 million people, you should make sure that the benefits actually outweigh the risks. And independent scientists should be part of that and they should be able [00:28:00] to say, well, you've made this application, but there's this risk and you haven't done enough to address it. Or, you know, you've said there's this benefit, but we look at your measures and they're not very solid. Can you go back and do some more? So there should be a little bit of negotiation until things are really solid. 

Another thing we should have is auditing after things come out, to make sure, for example, that systems are not being used in biased ways. So, like, are large language models being used to make job decisions? And if they are, are they discriminating? We need to know that.

ADAM CONOVER - HOST, FACTUALLY!: But, uh, now all of these regulations sound great to me. They sound important -- having an FDA-style agency, et cetera. Uh, that sounds like a great thing to do when you've got a technology that's causing problems. The history of that sort of regulation in the United States is that when you have a new field, that field desperately resists regulation with every fiber of its being. And it isn't until there are real, massive harms -- people dying in the streets from tainted food -- that we get, you know, food regulation, [00:29:00] instituted by Teddy Roosevelt. I told that story on my Netflix show, The G Word. Um, it requires generally like wholesale death and devastation before we start regulating these things. Do you feel that there's any prospect in the near term for the kind of regulations that you're talking about? Or are we going to have a lot of harms first?

GARY MARCUS: It's difficult to say. I mean, when I gave the Senate testimony, there was actually real strong bipartisan recognition that we do need to move quickly, that government moved too slowly on social media, didn't really get to the right place. And so, there's some recognition that there's a need to do something now. Whether that gets us over the hump, I don't know. 

Part of my thinking is, figure it out now what we need to do, and even if it doesn't pass, we'll have it ready, so if there is a sort of 9/11 moment, some massive, you know, AI induced cybercrime or something like that, we'll be there. We'll know what to do. And so I don't think we should waste time right now being defeatist. I think we should figure out what is in the best interest [00:30:00] of the nation and really of the globe and be as prepared as possible, whether it passes now or later. 

ADAM CONOVER - HOST, FACTUALLY!: I agree that we should do as much as possible. I'm just a little bit concerned about the amount of power wielded by the tech industry. You know that this is one of the most profitable industries in America. So it's very easy for those CEOs to go and get a meeting with Joe Biden, whatever they want. And it's harder for folks such as yourself or some of the other academics we've had on the show to have those conversations, but I agree that we need to have those conversations.

GARY MARCUS: I'll say this. I'm in a little bit of a special category, especially after the Senate testimony. But right now it's actually very easy for me to get meetings. I met, um, well, I guess I shouldn't be too explicit, but I'm able to talk to whoever I need to talk to in Washington and Europe and so forth right now. So, people in power right now are recognizing that they don't entirely trust the big tech companies, that they do need some outside voices. And for whatever reason, I right now am in that position and they're taking me very seriously. If I say I'm going to be in [00:31:00] Washington, could you meet next week? People say yes. And in fact, I was just in Washington, met a lot of very high ranking people. And then I got on the airplane and then some other high ranking people are like, when are you coming back? I think just by coincidence.

But, you know, people noticed the testimony that I gave, wanna solve this problem, like, they're sincere in wanting to solve it. There's a problem that not everybody agrees about what to do and everybody's trying to take credit for having the one true solution. And like, in some ways it's an embarrassment of riches, everybody's trying to help. In some ways there's a coordination problem. I would say that more than any time I've ever seen before, the government is reaching out to at least some of us who are experts in the field, trying to say, you know, What would you do in this circumstance? So I give them some credit for that.

The Left Luddites and the AI Accelerationists - torres - Air Date 5-15-23

TORRES - HOST, TORRES: Visions of the future are varied, and for as much as I'd like to believe that the future will be as rosy as these authors do, I find it hard to believe. Take for example the scandalous finding that 40 percent of jobs will be lost to AI. [00:32:00] These findings have been moderated by more measured studies, like a 2016 OECD study that found that less than 10 percent of jobs were likely to be automated. The study was more robust than the previous one for a variety of reasons, and more importantly, it wasn't funded by the companies that are creating AI technology and want to sell you on it. Seriously, if we were to listen to the CEOs, ChatGPT might as well be digital gold. But even then, 10 percent is still a lot of jobs.

The answer to the question of whether AI advancements will lead to job loss is, undeniably, yes. You won't find one serious person saying otherwise, but there's something we're missing here. Author Aaron Benanav centers his analysis on one primary question: why are we so obsessed with technologically driven job loss? There's a recurring hype surrounding automation theory, one that's been happening since at least the 1800s, but frankly, I wouldn't be surprised if we found a manuscript by a caveman afraid that the invention of fire was going to cost him his [00:33:00] role as hunter.

Benanav argues that the cyclical nature of automation discourse has less to do with technology itself and more to do with the nature of capitalist society. Taking its periodicity into account, automation theory may be described as a spontaneous discourse of capitalist society that, for a mixture of structural and contingent reasons, reappears in those societies time and time again as a way of thinking through their limits.

What summons the automation discourse periodically into being is a deep anxiety about the functioning of the labor market. There are simply too few jobs for too many people. Why is the market unable to provide jobs for so many of the workers who need them? Proponents of the automation discourse explain this problem of a low demand for labor in terms of runaway technological change. But this is misguided. In short, there's a fundamental problem in the labor market that's prompting these fears in the first place.

As [00:34:00] we discussed, a whole lot of jobs people used to do a hundred years ago no longer exist. But this isn't new. Automation is a constant feature in the history of capitalism. What is new, relatively speaking, is that global capitalism is now failing to provide jobs for the people who need them. And those of us who find them are often underemployed, doing jobs we're way too qualified for. There are higher spikes of unemployment, inequality is only getting higher; something has gone wrong. Demand for labor is low. Automation theorists would argue, "Yeah, no shit. That's because of automation, baby. That's what we've been telling you. Robots took our jobs and they're only gonna keep doing it." But Benanav argues we'd be wrong to chalk it up to simple automation, because if you look at the numbers, there's a deep economic rot at the center of this.

Let's look at manufacturing, an industry that's already seen automation hit it in a big way. With the industry already cybernetically enhanced, we would expect productivity and output to have skyrocketed, right? But this isn't the case. In fact, recent figures [00:35:00] show the manufacturing industry diminishing, growing at a sluggish pace that doesn't compare with the post-WWII golden age. It's a classic crisis of overproduction and overcapacity. Demand for goods has stagnated compared to our ability to produce them, leading to a wave of deindustrialization. And manufacturing is only one such industry.

Across the board, economic growth has stagnated. Some would argue that this is inevitable if we're using the economy after World War II as the baseline. The global economy was booming after the war; expecting it to stay like that, well, it's not a fair comparison. If we instead compare it to pre-World War I levels, things are much more similar. But here's the kicker. As Benanav explains, in that period, large sections of the population still lived in the countryside and produced much of what they needed to live. Yet, in spite of the much more limited sphere in which labor markets were active and in which industrialization took place, this era was marked by a persistently low [00:36:00] demand for labor, making for employment insecurity, rising inequality, and tumultuous social movements aimed at transforming economic relations.

In this respect, the world of today does look like this era. The difference is that today, a much larger share of the world's population depends on finding work in labor markets in order to live. Considering how you can't just grow food in your backyard like you used to a hundred years ago, this development is unsettling.

Benanav admits that technological progress does play a factor here, but it's secondary to the primary issue of a stagnant capitalist engine that can't fuel enough economic growth to keep people employed. The difference today versus a hundred years ago is that the vast majority of the planet is now part of this wage labor system. If this stagnation continues, it's likely to make the employment insecurity, rising inequality, and social movements of the past century look like child's play. The problem is capitalism, not AI [00:37:00] or automation.

Luddites Show Us The Politics Of Technology | Brian Merchant - The Majority Report w/ Sam Seder - Air Date 11-21-23

SAM SEDER - HOST, THE MAJORITY REPORT: We should note that literally a couple hours ago it was announced that the UAW came to at least a tentative agreement with GM, after announcing, I guess it was yesterday or over the weekend, an agreement with Stellantis. This is on the heels of an agreement with Ford. And it seems like one of the most successful strikes, one of the most successful sets of union demands, in modern history -- for sure in the past 50 years, it seems. A 25 percent pay increase is the top-line figure, over the course of a three-, four-, or five-year contract, depending on all the details; we're going to get a little bit more. But how much of that type of unionism in particular -- one that is really aggressive and more democratic... The UAW were under a consent decree. That brought about this administration of Shawn Fain, which [00:38:00] feels far more democratic, both in structure and processes, but also just in disposition. He is much more in tune with the membership, it feels like from the outside, than we've seen in the past. How much of that is a descendant of the Luddite movement?

BRIAN MERCHANT: Yeah. So when the Luddites were rising up, one of the reasons they had to rise up, I didn't mention, was that it was illegal to form a union. There were laws on the books called the Combination Acts. So if you tried to collectively bargain and say, Hey, we're all agreed on this, we won't work for less than this much, you could be thrown in prison. So part of the outgrowth of the Luddite movement was the reform effort that was really spurred by some of the folks that I follow in the book, like Gravener Henson, who was a Luddite himself but was also interested in pulling the levers of reform. He really fought to the bitter end, ultimately with some success, to get those Combination Acts repealed, and we saw the [00:39:00] beginnings of the union movement rise.

But there's a really good lesson from the Luddites in that: being militant can work. You don't need to actually smash machines, but the industrialists and the elites of the day were terrified of the Luddites. A lot of them gave in and met demands, because the Luddites had power and they were popular. And we've seen, as you mentioned with Shawn Fain, that the previous leadership of some of our unions had not wanted to mix it up too much. They had not wanted to push against the companies. It had gotten pretty slack.

So I think we're seeing this more: it's not militant, but it's a lot more confrontational; they're leaning into their power a lot more.

And I would also point out, one of the big things was that the companies were trying to say we have new technologies, right? Where you're going to be working with batteries and electric cars, and that's not as hard to produce. So we need to pay you less. And one of the things the union did was stand up and say, absolutely [00:40:00] not. This is still labor. This is still very labor intensive and skilled work. Just because it's a new technology does not give you the right to say that you should be paid less or take more work off the table. 

Same thing with the WGA. I would say that's another modern example of a very successful Luddite-tinged strike, because they saw the studios saying, we want to be able to use AI to write scripts, and then maybe we'll let you rewrite them for a lesser fee. And they drew a red line.

And I argue, and I think I did argue in one of my columns, that that's Luddism in the modern day. You don't need a hammer, you just need to reject what you know is going to be an exploitative use of technology. Because they knew the studios were not going to write a whole movie with AI; they were just going to write a blueprint, bring it to them and say, okay, you can get a rewrite fee for this, but we'll own the rights, you don't get residuals, you don't get all this. And it was mostly a way to try to break labor power, to try to degrade conditions. And they drew that red line in the WGA and they said no. They said, absolutely not. If somebody is going to use AI, we're going to have control [00:41:00] over how it's going to be used. The studio can't make that decision; we'll make that decision. And amazingly, they won that. They won the right to control that part of the labor process. So that's a huge victory, and I think one that is extremely inspiring, because we're going to see a wave of these fights coming down the line.

SAM SEDER - HOST, THE MAJORITY REPORT: Do you think that the legacy of the Luddites, or the lessons that come out of there, are that tactic of militancy that they had? Or did they represent a new way of understanding the benefits -- the increase in productivity -- and who gets a piece of that, who shares in that so-called benefit: whether the sharing of that benefit goes to all the parties, the constituencies involved in that factory or whatever it is, that production line, or just one narrow beneficiary? Or is it both?

BRIAN MERCHANT: Yeah, I think it's a little bit of both, but I think it's more the latter. I think it's saying [00:42:00] that, in fact, you should be encouraged to question how technology is going to be used in your workplace, in your life, in your daily routine. Who is a given technology going to serve? Is it going to serve you, the worker? Or is it going to serve your boss at your expense? And it's giving people license again, I think, to ask that.

This is really important in this era, where for so long we've been taught that progress is equal to technology, that Silicon Valley is the bringer of all of these great technological gifts, and to question or to resist them was unthinkable for so long. We've seen some of that change with the tech clash and so forth, but there's still a lot of people who are very resistant to even say, Wait a minute, this seems like an awfully raw deal. And now, thanks to the writers and to a number of the other folks who are pushing back on this right now, we're seeing that facade start to crack.

So I think the Luddites [00:43:00] have given us a good example and an important example to look at the way that it's being deployed in society or even in our specific workplaces and to question it. And it's okay to question it. It's okay to be a Luddite. And in fact, there's great power in being a Luddite.

BONUS The ACTUAL Danger of A.I. with Gary Marcus Part 2 - Factually! - Air Date 7-2-23

ADAM CONOVER - HOST, FACTUALLY!: You do have a view on what regulations you feel that we actually need around AI. 

GARY MARCUS: I do. 

ADAM CONOVER - HOST, FACTUALLY!: So let's talk about what a few of those might be.

GARY MARCUS: So I have suggestions from the top level, the macro level, all the way down. I don't know how much time you want to go into it, but I'll start with: I think that the US and other countries similarly need a central agency or a cabinet-level position or something like that -- a secretary of AI with supporting infrastructure -- whose full-time job it is to look at all the ramifications of this, because they are so vast.

And because even though we have existing agencies that can help, none of the existing agencies were really designed in the AI era. And there are all kinds of cases that [00:44:00] slip through: what do you do about wholesale misinformation as opposed to retail misinformation? Like, if some foreign actor makes a billion pieces of misinformation a day, maybe you have to rethink how we address that.

And so we definitely need somebody whose responsibility this is, somebody who lives and breathes AI and follows all of this stuff. We don't want the Senate to have to make different rules when GPT-5 comes out versus GPT-4, and again for GPT-6. That's not what they're there for.

ADAM CONOVER - HOST, FACTUALLY!: So we need a regulatory agency similar to the EPA or another agency where, when facts on the ground change, that agency can issue new regulatory rules without having to go through Congress, which is how we regulate. We've got the FAA, we've got NHTSA for highway safety, et cetera.

GARY MARCUS: We obviously need this for AI. It's obvious to me, it's probably obvious to you; not everybody in Washington agrees. People will tell you it's very hard to stand up a new agency, which is true. There are complications, it is not trivial, but we need it. So that's one thing I would say.

ADAM CONOVER - HOST, FACTUALLY!: Do you have any concern -- let me just ask you, Gary, about that first -- because agencies of that type in the [00:45:00] past have become captured by the groups they regulate. If you look at the FAA and the Boeing 737 Max, that really falls at the feet of the FAA having lax regulation. You can look at other agencies that have that problem. And why is that? It's because you have the revolving door.

GARY MARCUS: My understanding -- I'm not an expert, but my non-expert understanding -- is that they got tricked on that one. They got told this is not really a new vehicle, and it really was; they were told there were not fundamental changes. I think the general answer to that question is you have to have independents, mostly scientists, outside scientists, who can raise their hand and say, no, they're telling you that this is just the same airplane, but they've gutted all of these systems and replaced them. And we need to understand these new systems. They're nice on paper, but we need data to see if this is actually going to work. We need, for example, to understand how the pilots are going to respond to these new systems, which are, in principle, mathematically correct, but if they fool the pilots, then you're going to have all kinds of mayhem. And we need to look into that. And so you have to have independents.

[00:46:00] What you don't want is regulatory capture, where the companies being regulated -- we already talked about this -- are the ones who are making the rules. And so Boeing shifted things and framed things in a way that suited their purpose, but didn't suit the public's purpose.

ADAM CONOVER - HOST, FACTUALLY!: Yeah, that's my concern: that we stand up this agency, and then 10 years from now the person running it is like Sam Altman's brother or whatever, because he has the power to get his buddy appointed to run the thing. And that's been the case with agencies in the past, especially when an administration changes. But that's just a question of good government -- a problem of good government that exists for any field.

GARY MARCUS: And it's a serious problem, it's not to be ignored, but I think we have to face it. So my second recommendation I actually just talked about, which is that scientists have to be involved in this process. We just cannot leave it to the companies and the governments alone. And the governments have been running around putting out press releases and doing photo ops with the leaders of the companies without having scientists in the room, or without prominently displaying the scientists that are there, and that turns my stomach every time I see it. They did that in the UK, they've done that in the US, where they roll out some top government [00:47:00] official and they have OpenAI and DeepMind CEOs or things like that, and you have to have scientists there to send the message that this is not just the my-brother-in-law-running-the-organization kind of thing that you just talked about.

ADAM CONOVER - HOST, FACTUALLY!: Not only do you need to have scientists there, it would probably be better not to have the companies that you are seeking to regulate in the halls of power. If the point is to regulate the use of AI and regulate these companies, then you probably shouldn't welcome them all to the White House for a big summit where you do what they say, right?

GARY MARCUS: You actually do need them in the room. They have a lot of knowledge about what's practical and where things are. They should have a voice, and they're affected, and we don't want to regulate our way into losing the AI race with China. There are lots of reasons to have the companies in the room, but it has to be in moderation, with other voices too. We just can't trust them for the whole deal.

BONUS The anti-tech movement is back. - Alice Cappelle - Air Date 6-15-22

ALICE CAPPELLE - HOST, ALICE CAPPELLE: Recent anti-tech sentiment echoes the skepticism around Web3, cryptocurrencies, the metaverse. The numerous TV reports and articles on data exploitation, online [00:48:00] surveillance, and big tech monopolies have succeeded in making the majority of people, across all ages and social classes, dubious of big tech.

Cryptocurrencies, NFTs appeared for many as the solution to fight against the lack of transparency of big tech. But the language used -- blockchain, smart contracts, etc. -- the scammy practices, its shortcomings, the volatility, and the massive online backlash it received, really reduced its potential and scope of influence.

Web3 and the crypto world sought to establish themselves as the only alternative to big tech's hold on society and all the problems it brought. Bella Hadid's latest NFT ad is a good example of that. Bella talks about a private society, a new global nation built on peace, love, compassion, to escape the imperfections of our world.

As someone commented, private society? For who? New meta nation? For who? Everyone wants sustainability, compassion, peace, and love. This is terrifying. I found it terrifying too, not gonna lie. [00:49:00] Everybody wants sustainability, compassion, peace, and love, and that can be achieved outside of technology.

The idea that progress should be the aim of every nation stretches back to the Enlightenment, where scientific discoveries, the democratization of knowledge and literacy, meant that people could see society advance quickly in their lifetime.

This obsession with progress translated into a new economic model, capitalism, into greater liberties for individuals, new forms of government like democracy, greater power to parliaments. 

Technological progress went hand in hand with social progress, and we can argue that it's still the case. Technological advances have enabled people to live longer, healthier lives. They have facilitated manual labor. They have allowed us to come together and internationalize social movements through hashtags.

Technologies aren't inherently bad. That's not the point I'm making here. I won't be a romantic here. I won't talk to you about indigenous communities who live in harmony with nature as an argument against our technology-obsessed societies. I think this argument is used way too often by people who [00:50:00] refuse to give them the technological tools that they deserve and they demand. I mean, they are doing great! Let them do their own thing with nature while we continue to pollute over here. I've seen this argument presented by right wing people, but also by people who claim to be progressive.

I'm not saying that we shouldn't embrace those communities and listen to what they have to say. I think the direction the climate movement is taking, a.k.a. including more indigenous people, is absolutely necessary. Their knowledge of nature is so precious. What I find more questionable is that we romanticize, almost fetishize, a sustainable way of living that only a tiny little fraction of us would be willing to embrace. And we use that argument to continue to isolate them from modern technologies. We all have a right to access modern technologies: drinking water, medical services, efficient transportation in case those services aren't local.

What seems problematic to me is the reward system we have built around those technologies, that is fed by the ideology of constant progress, but also by our economic system, to the point where [00:51:00] it has become quite hard to define what progress really is.

Think about this. In the West, a new revolutionary cancer treatment gets the same media coverage as a new iPhone. But are we really talking about the same kind of progress? As Tom Nicholas argued in his video on the fake futurism of Elon Musk, the ideology of never-ending progress keeps people hopeful that the future can be bright, that humanity will always find a solution to the problems it faces.

But who is really benefiting from that sort of technological progress? By that I mean high-speed tunnels for cars that seat five people max, rockets to see the Earth from space, or Bella Hadid's meta community. These are toys for the wealthy who ultimately want to leave this planet.

On the other hand, vaccine patents were kept private by Western companies, and people like Bill Gates claimed that we shouldn't share patents with non-Western countries, that they should instead wait until the West could produce vaccines and send them over. Of course, those countries came last on the list, and they suffered economic and social consequences as a result.

I really want to argue that the [00:52:00] belief in constant progress is an ideology, a.k.a. an idea that has been repeated so many times that it appears to be the truth. And one way to show it is to look at how it has trickled down to us, how we have internalized it.

Let's take the phrase "be the best version of yourself." That's a good example. Becoming your best version means that you need to improve a little bit every day. The body and mind are perceived as a machine that needs to be improved, yes, every day, month, year, to increase its performance. The individual who seeks to become the best version of themselves will work on their physique, mental health, strength, intelligence, using scientific data to figure out what's the best way to achieve their goals.

In fact, science-based methods of training at the gym or of memorizing for an exam are now super appealing. Let's imagine that we make two videos with the same advice, but one is titled "My 5 Tips to Lose Weight," and the other one, "5 Scientifically Proven Ways of Losing 10 Pounds in a Month." Guess which one is gonna get the most engagement?

Smartwatches tell you how many hours you slept last night, [00:53:00] how deep the sleep was, encouraging you to improve your stats. They also calculate how many steps you took per day and how many calories you burned. My past experience of tracking steps and calories completely changed my relationship with things that, in my opinion, should be intuitive: eating, working out, being active. A workout session was only good if I had reached the right number of calories burned. A meal was only good if I had met all the macronutrient targets.

But anyway, let's close that parenthesis. To conclude, and to connect everything we've said with ideas I hear more and more in left-wing circles, especially in France at the moment: there's the right to intimacy, in the sense that we should be able to turn it all off, to be left alone. We are constantly invaded by lights, sounds, and notifications, and it's not always our fault. Being stimulated has become the norm, and it prevents us from having time to just think.

I'm not saying that we should distance ourselves from the outside world, from politics, or any of that. You know I'm not like that. What I'm saying is [00:54:00] that the constant flow of stimulation puts us in a state of paralysis. We're numbly being drawn back and forth by the waves of information, of so-called progress, without reflecting on them.

I'll end with this quote I found in the book Psychopolitics, written by Byung-Chul Han. It's [French philosopher Gilles] Deleuze talking, and he says, "It's not a problem of getting people to express themselves, but of providing little gaps of solitude and silence in which they might eventually find something to say. Repressive forces don't stop people from expressing themselves, but rather force them to express themselves. What a relief to have nothing to say." 

Final comments on the fork in the road and a look at our options

JAY TOMLINSON - HOST, BEST OF THE LEFT: We've just heard clips today, starting with SHIFT, laying out an overview of who the Luddites were. Left Reckoning discussed the middle ground between peaceful and violent protest. TRT World explained some of the potential dangers of AI. jstoobs on TikTok described the cultural dystopia of AI video generation. Factually! discussed how government is attempting to regulate tech. [00:55:00] torres looked at the problem of capitalism and AI. And The Majority Report discussed the Luddites as a labor movement. 

That's what everybody heard, but members also heard bonus clips from Factually! discussing the process of setting up regulation for AI, and from Alice Cappelle looking at who benefits from big tech and who can opt out. To hear that and have all of our bonus content delivered seamlessly to the new members-only podcast feed that you'll receive, sign up to support the show at bestoftheleft.com/support, or shoot me an email requesting a financial hardship membership, because we don't let a lack of funds stand in the way of hearing more information.

Now to wrap up, I have an excerpt from a New Yorker piece on Luddites and the book Blood in the Machine, the end of which may repeat some things that have already been said, but it really sums things up pretty well: "The tragedy of the Luddites is not the fact that they failed to stop industrialization, so much as the way in which they [00:56:00] failed. In the end, Parliament sided decisively with the entrepreneurs. Blood in the Machine suggests that although the forces of mechanization can feel beyond our control, the way society responds to such changes is not. Regulation of the textile industry could have protected the Luddite workers before they resorted to destruction. In the era of AI, we have another opportunity to decide whether automation will create advantages for all, or whether its benefits will flow only to the business owners and investors looking to reduce their payrolls. One 1812 letter from the Luddites described their mission as fighting against 'all machinery hurtful to commonality.' That remains a strong standard by which to judge technological gains."

So, fundamentally, this fork in the road we are standing in front of is about who the government, or society more broadly, [00:57:00] is going to back. And I must say it's an interesting time to try to guess what the government might do in this area, given that the right seems to be leaning, you know, even if only slightly, a bit anti-corporate these days. Not for the same reasons that I am or you might be, but this could be one of those cases where we get back to the world of politics making strange bedfellows. We've actually been so hyperpolarized for so long now that that doesn't happen much anymore, but the efforts to rein in or break up big tech could be one of the first big ones in a while.

The second thing I want to highlight is the conclusion drawn in an episode of a show called Things Fell Apart, which tries to trace the origins of our current culture wars. In the episode about managing online speech, or maybe about our tendency to not think we need to manage online [00:58:00] speech until really, really forced to, they talk about the first time anyone was ever shamed for something they posted online. Back during the proto-internet, an antisemitic joke was posted, and it sparked a debate about whether to moderate such things or just let them run free. Initially, after much deliberation, it was decided that it was important to do some sort of content moderation for the sake of a healthy online discourse. However, that stance was immediately attacked from a more libertarian perspective that would ultimately win out and set the tone for Silicon Valley.

A Scottish Jewish joke - Things Fell Apart - Air Date 1-25-22

JON RONSON - HOST, THINGS FELL APART: John McCarthy was horrified at the thought of speech codes becoming the norm online. So he published a ferocious riposte to the ban, calling John Sack an "underling, who had spent those weeks not deliberating, but gurgling". He launched an [00:59:00] online petition too, one of the very first in internet history, gathering a hundred signatures from faculty. Then, as now, the power of the online petition was formidable. The ban on Brad's joke page was quickly reversed. John McCarthy's winning argument, John Sack says, had boiled down to: "We're really exploring the leading edge of computing here. Let's keep exploring it. Don't try and cut it off. We need to discover the boundaries of free speech by essentially running into them or crossing them." And that's the internet we have all lived in for the decades that followed. A libertarian engineer's utopia, where free speech thrived unencumbered, with no regard for the dangers it might cause society. And by dangers, I mean not only offensive speech, but fake news, too. And [01:00:00] because unencumbered free speech leads to conflict, which keeps people online longer than harmony does, it's a profitable ideology for the tech companies. It's epitomised best by how Twitter's UK general manager described the site in 2012 as the "free speech wing of the free speech party". "The interesting thing about Twitter is it's sort of Silicon Valley native, so maybe it all does tie back to the libertarian bent in the engineering culture".

JAY TOMLINSON - HOST, BEST OF THE LEFT: Now, maybe it's obvious, but I play this because I see it as another inflection point in the evolution of the relationship between technology and society. So it's clear that these kinds of moments are really important to think through with a long view in mind. What I would argue is that to side with the capitalists and big tech during this AI inflection point, you know, essentially co-signing the [01:01:00] idea that AI can and will replace massive numbers of jobs and that the benefits of those advances should go exclusively to the capitalist class, will ultimately bring about a destructive wealth stratification that really only has a chance of bringing mass misery.

However, there's also a left-wing, socialist vision of a techno future of full automation, where the fruits of those advances are shared across society and people are freed from long hours at bullshit jobs. Maybe only a handful of hours a day, and only a couple or a few days a week, would even be spent working, leaving people free to live their non-work lives to the fullest. That is a possibility, certainly better than the alternative. But it does also come with the danger of taking work away from millions of people who derive their inner sense of purpose from the work they do, leading to a massive mental health crisis [01:02:00] even if their economic needs are taken care of. Not to mention the way that AI is tending to tackle art as well. Some neo-Luddites hasten to remind us that, quite aside from work, the creation of art is also one of the greatest sources of meaning for people, and if AI sort of swamps the art scene as well, then that could have effects similar to taking away people's work.

So it really strikes me as a choice between economic hyper-stratification and economic abundance for all, but with the danger of there being too little of what gives life purpose to people. Now, of course, given that stress levels are at all-time highs, brought about by overwork and a general sense of time poverty, I suppose bringing down work hours and days should start to create improvements for people before it goes too far in the other direction. But all of these things are concerns to keep our eye [01:03:00] on.

That is going to be it for today. As always, keep the comments coming in. I would love to hear your thoughts or questions about this or anything else. You can leave us a voicemail, send us a text at 202-999-3991, or simply email me at [email protected]. Thanks to everyone for listening. Thanks to Deon Clark and Erin Clayton for their research work for the show and participation in our bonus episodes. Thanks to our Transcriptionist Trio, Ken, Brian, and Ben, for their volunteer work helping put our transcripts together. Thanks to Amanda Hoffman for all of her work on our social media outlets, activism segments, graphic designing, web mastering, and bonus show co-hosting. And thanks to those who already support the show by becoming a member or purchasing gift memberships. You can join them today by signing up at bestoftheleft.com/support, through our Patreon page, or from right inside the Apple Podcasts app. Membership is how you get instant access to our incredibly good and very often funny bonus [01:04:00] episodes, in addition to there being extra content, no ads, and chapter markers in all of our regular episodes, all through your regular podcast player. You'll find that link in the show notes, along with a link to join our Discord community, where you can also continue the discussion.

So, coming to you from far outside the conventional wisdom of Washington, DC, my name is Jay, and this has been the Best of the Left podcast, coming to you twice weekly, thanks entirely to the members and donors to the show from bestoftheleft.com.

