#1655 A Pivotal Moment for Big Tech, Both Old and New: Google Search, the A.I. Boom, Antitrust, and Regulation (Transcript)

Air Date 9/13/2024


JAY TOMLINSON - HOST, BEST OF THE LEFT: [00:00:00] Welcome to this episode of the award-winning Best of the Left Podcast. 

We've been living through the modern equivalent of the oil boom, back when poking a hole in the ground would make one of the world's most valuable substances, upon which entire economies would be built, simply bubble out of the ground. But in our case, it's not oil—it's data. Now, we are at a pivotal moment as the old, unchallenged master of data, Google, has been found guilty of illegal anti-competitive behavior, at the same time as generative AI companies are in a new, desperate rush to stake claims on every last piece of data they can find. 

Sources providing our Top Takes in under an hour today include Andrewism, Your Undivided Attention, the 80,000 Hours Podcast, POLITICO Tech, The Hartman Report, and The Socialist Program. Then, in the additional Deeper Dives half of the show, there'll be more in four sections. Section [00:01:00] A, the threat. Section B, big tech lobbying. Section C, regulation. And section D, thinking through solutions.

The New Colonialism of Big Tech - Andrewism - Air Date 9-3-24

 

ANDREW SAGE - HOST, ANDREWISM: Colonialism never ended, it simply evolved. Historically, colonialism involved the capture of land and labor. Between 1800 and 1875, on average, 215,000 square kilometres were added as colonies every year, or approximately one Guyana. Between 1875 and 1945, 620,000 square kilometres were added as colonies each year, or approximately one Ukraine.

By 1945, one in three people lived under colonial rule. Despite the waves of independence in the decades following World War II, we are still dealing with the consequences of colonialism. It has taken on many forms over its time span, from exploitation colonialism to settler colonialism. In their book Data Grab: The New Colonialism of [00:02:00] Big Tech and How to Fight Back, which you absolutely should read, sociologists Ulises Mejias and Nick Couldry explore one of colonialism's latest forms.

Data colonialism captures not land, but data, and it has quietly grown to threaten new dimensions of our lives and futures. 

Just to be clear, we're not saying that today's digital wars are equivalent to the brutality of colonial life in previous centuries. That would be absurd. Land and data are two very different types of assets. Rather, the framework of colonialism makes it possible for us to understand our current digital lives and power relations with the corporations that define them, including how those power relations came to be and continue to exist. 

Data colonialism is a social order in which the continuous extraction of data from our lives generates massive wealth for the few and suffering for the many on a global scale. This process is extensive in its appropriation of human life, as it captures and monetizes nearly everything about [00:03:00] the way we live, move, consume, and converse, with worrying implications for education, healthcare, housing, agriculture, policing, and more. With 3 in 8 people using Facebook and 1 in 8 people using TikTok, with 329 million terabytes of data harvested per day and projections for 2025 estimating 181 zettabytes of data being gathered, the conquistadors of the cloud are pillaging nearly everywhere and everyone.

This isn't to condemn the mere concept of collecting data. That would be as absurd as condemning the telegraph for the role it played in British colonialism, or condemning modern medicine because many of its earliest breakthroughs were appropriated from indigenous peoples. Data's not bad in and of itself. We need data about the world around us, how it affects us, and how we affect it, so that we can understand and change for the better. 

The issue is really how data is extracted: from what, from whom, and on what terms. When data is merely a tool for generating profit in the hands of corporations, that's when we have to stand up and challenge those terms and [00:04:00] conditions.

Data colonialism is not separate from other forms of colonialism. It is an evolution, a continuation of their compounding effects. By understanding the meaning and consequences of data colonialism, along with the civilizing mission used to justify it, we can determine ways to resist its rule and decolonize data for a better tomorrow.

Like other forms of colonialism, data colonialism is deeply intertwined with capitalism. Far from being separate stages in some teleological process, capitalism cannot be understood without its connection to colonialism. The wealth generated in the colonies financed the factories, it enriched and empowered Europe's proto-capitalists, and it innovated methods of rational management that would be taken from the plantation ground to the factory floor.

The divisions of our capitalist world cannot be properly contextualized without a colonial framework, and by understanding that connection, we can recognize the resemblance between the land grabs of the past and the data grabs of today. Colonialism and capitalism are continuously mutating and [00:05:00] adapting, though their core mission remains the same. Of course, colonialism will look different today, because the kind of violence it established in the first place set up the social relations that enable it to continue through less overt and more symbolic forms of violence. As Mejias and Couldry put it, "Your dispossession, your loss of control over the data that affects you, and the impact that this has on your ability to control the terms on which you work, get loans, educate your children, and so on, may be no less absolute." 

But no violence is needed to persuade you to click the box that says "I agree to the terms and conditions" before installing an app. That click alone, by virtue of the vast legal and practical infrastructure of capitalist social relations, is enough to plunge us into endless spirals of data extraction. In other words, today's forms of extraction are almost frictionless, although that doesn't mean their long-term repercussions are entirely non-violent. 

Despite being an ardent decolonialist, one of the video game genres I used to enjoy the most was the 4X subgenre of strategy games. I spent many [00:06:00] hours of my youth playing Civilization V and later Civilization VI. A few years ago, I quite enjoyed playing Humankind. I was never particularly good at these games, mind you, but they were certainly designed to help you grasp and internalize the 4Xs of colonialism: explore, expand, exploit, and exterminate. Historical colonizers explored to find places to control; expanded their holdings by force, appropriating labour and resources; exploited the colonies for all the wealth they could squeeze; and exterminated any opposition through direct violence, or indirectly, through the suffocation of social and economic alternatives to colonial life.

Data colonizers also explore, expand, exploit, and exterminate. Rather than exploring land, data colonizers, with the rise of internet use, establish and explore data territories, also known as platforms, like Google, Amazon, and Facebook, where interactions can be mediated and harvested. Within such territories, the innocuously-named "cookies" have [00:07:00] become one of the most powerful means of capturing massive amounts of data about internet users.

Data colonizers further seek to expand computers into every interaction and expand the platforms and connections between platforms to gather even more data. Everything from your phone to your vacuum to your fridge to your doorbell to your watch can now gather data about your habits, interactions, opinions, and spaces. Data grabs are taking place in the data territories of agriculture, education, health, and especially work. John Deere tracks its tractors. Google Classroom and other edutech services are expanding into more classrooms and doing who knows what with the data. Fitbit information is being fed to insurance companies. Surveillance, while always being part of capitalist management, has expanded significantly within workplaces, punishing desperate gig workers with lower wages, keeping warehouse workers scanning continuously, and tracking every office worker's keystrokes.

Of course, simply having the data is not enough. Data colonizers must exploit that data. Google's vast data territory is among the [00:08:00] most lucrative sites for exploitation. They have "pioneered" -- colonizing language very much intended -- new ways to sell ads using previously untapped swathes of data. Data colonizers convert data into wealth and power through targeted advertising, user manipulation, and predictive, often discriminatory algorithms.

Finally, data colonizers exterminate, not through physical violence, but symbolic and systemic violence, by eradicating alternative ways of thinking and being, and by creating monopolies that are so powerful that they shape the course of genocides and health crises.

Tech's Big Money Campaign is Getting Pushback with Margaret O'Mara and Brody Mullins - Your Undivided Attention - Air Date 8-26-24

 

TRISTAN HARRIS - HOST, YOUR UNDIVIDED ATTENTION: Okay, so Brody, I want to set up this conversation with a bit of historical context. There's so much in the press about lobbying and dark money, and people might forget or don't realize that lobbying actually has a long history in American politics, right?

BRODY MULLINS: Yeah, lobbying's been around for a number of years. In fact one of the things I found interesting in researching for my book is that the Founding Fathers envisioned a day in which there would [00:09:00] be lobbying. They called lobbyists "factions", and they thought there'd be a pro-industry faction and a pro-worker or consumer faction. But they thought those factions would be about the same size and strength. They'd battle each other to an equilibrium to create laws and regulations that both sides supported. The problem that we've had that we document in our book is in the last 50 years, companies have gotten so powerful and are spending so much money in Washington that they've really outflanked, outgunned, outspent the consumer side, so that right now in Washington, big companies, particularly the tech companies, have all the power and influence over shaping our legislation, and the consumers, the rest of us, the little guy, have no influence. 

TRISTAN HARRIS - HOST, YOUR UNDIVIDED ATTENTION: What changed over the last century in lobbying? 

BRODY MULLINS: Yeah, basically from the New Deal to the Great Society, companies actually had very little influence in Washington. Companies did not spend much money trying to influence public policy. Business was good, profits were high, companies cared about their employees. And everything changed in the 1970s. In the 1970s, the economy cratered with [00:10:00] stagflation. We had inflation. Oil prices, gas prices quadrupled. And that really dragged the economy into the tank.

And what business people did is they looked around and said, Hey, what's the problem? What's going on with our profits? Why is our business not doing well? And they saw that the incredible growth of the federal government over the last 50, 60, 70 years had created so many rules and regulations that they were required to comply with and spend money complying with.

As a result, companies for the first time in the 1970s started investing in Washington. And when I say investing, I mean hiring lobbyists, making campaign donations. And from that period until now, corporate America has been incredibly powerful in Washington, more powerful, as I say, than any other interest group in town.

TRISTAN HARRIS - HOST, YOUR UNDIVIDED ATTENTION: I believe it's a quote from your book that from 1967 to 2007, the number of registered lobbyists in Washington exploded from some five or six dozen to nearly 15,000. Is that right? 

BRODY MULLINS: Absolutely. 

TRISTAN HARRIS - HOST, YOUR UNDIVIDED ATTENTION: Margaret, your thoughts? 

MARGARET O'MARA: Yeah. I think that, just as Brody said, lobbying has in some form been around since the founding of the [00:11:00] Republic. There have always been people trying to persuade the legislature and the president to do their bidding. 

The other thing that was happening in the 1970s, or at the beginning of the 70s, was that big business was not very popular. If you go to a college campus, where students are mobilizing against the Vietnam War, they're also mobilizing against big business and defense contractors and any part of the establishment. And so part of this was also trying to make business great again and bring it back into favor, as American enterprise was core to the American project. 

So, yes, there's active lobbying on particular pieces of legislation. But there's also broader PR that is maybe Washington-focused or policy-focused, but spills out into something that everyone notices and sees, that the public image of a company or an industry is something that plays a big role. And certainly that's played a big role in the story of tech. 

TRISTAN HARRIS - HOST, YOUR UNDIVIDED ATTENTION: Yeah, I think that's really important, actually. It's so easy to look at lobbying and just synonymize it with pure greed or pure influence peddling. And I think that's what's confusing about lobbying is it both is an [00:12:00] influence peddling game and it's also a public relations game.

BRODY MULLINS: Yeah, to go back to make one point again, a good detail here is that in the 1970s, or right before the 1970s, companies had so little influence in Washington that General Motors found itself in a fight with Ralph Nader. You know, Ralph Nader was an individual consumer advocate who took on General Motors, the General Motors as in, "What's good for GM is good for the country," and Ralph Nader beat them on auto safety regulations. And what that shows, one, is how much influence consumer groups and Ralph Nader had in that period, but also how little influence companies had. General Motors got beat by a consumer group, and everything switched after that. General Motors and other companies realized they needed to get in the game.

TRISTAN HARRIS - HOST, YOUR UNDIVIDED ATTENTION: Yeah, and Margaret, what was Silicon Valley's relationship with D.C. and lawmakers during this period? 

MARGARET O'MARA: Yeah. In some ways, Silicon Valley benefited from the broader sentiment against big business or old economy business in the 70s and even into the 80s, not really a great time for the US [00:13:00] economy. This is part of the reason Ronald Reagan was elected, promising "morning in America" and a really fundamental turnaround and which also included business deregulation. 

But for tech companies, while Republicans were, certainly in the Reagan era, very clear champions of business and of a more deregulated business climate, the tech companies were something that both Republicans and Democrats could get behind. This is the beginning of a more centrist Democratic Party with centrist leaders like eventually Bill Clinton and Al Gore, who were elected in 1992, but also many others in Congress of their generation. They're trying to signal that, hey, we care about the American economy too, and about business flourishing as well.

 There was a real embrace of this tech industry. And they didn't have to work very hard, initially, to have organized lobbying efforts, because lawmakers loved them. They thought they were great. 

AZA RASKIN - CO-HOST, YOUR UNDIVIDED ATTENTION: Yeah, and Tristan and I were both in Silicon Valley in the aughts, which is a little later than what you're talking about. But I remember being there and there was this mentality that said, you really don't want to get caught up in the traditional games, that all we needed to do was to [00:14:00] build things and the government was too stuck, or too captured, or otherwise too corrupt to deal with it. One shouldn't play that game. You should just build great things. And then, Margaret, what was it that convinced Silicon Valley that they really had to start paying attention to this lobbying game? 

MARGARET O'MARA: The first moment that starts the mobilization is actually one that didn't happen in Silicon Valley itself, and one that Silicon Valley interests were cheering, which is the US government's antitrust lawsuit against Microsoft, which happened in the late 1990s, and it ended with a decision that ruled that the company had to be broken up, and for various technical reasons it didn't actually have to do that, but it did have to effectively slow its roll significantly. And that moment was, for Microsoft as a tech company, a real watershed, moving it from being pretty inattentive to anything that was going on in Washington at all; they were really, truly heads down. Bill Gates famously, when the FTC brought a motion against Microsoft for predatory [00:15:00] monopolistic behavior in 1994, Gates's famous reaction to that was like, Ah, the worst thing that could happen to me in Washington is I fall down the steps of the FTC and die, or something like that. It was kind of this old-style Bill Gates kind of brash, like, I just don't care about what they do. This has no bearing on our business. 

And what the DOJ lawsuit showed Microsoft, and then in turn showed the Valley later, they realized they can't blow off regulators, that antitrust is a real threat, and this is a constituency that needs to be worked with, and they can't just take that support for granted. 

Before the DOJ lawsuit, Microsoft's entire Washington lobbying operation was one guy working out of their Bethesda, suburban Maryland, sales office, and it was a drive between suburban Maryland and Capitol Hill, and he was it. And after the lawsuit, and after the DOJ decision, Microsoft starts building a fundamentally different operation, now perhaps one of the largest, most sophisticated, and most successful [00:16:00] lobbying operations in D.C.

Nathan Calvin on California's AI bill SB 1047 and its potential to shape US AI policy - 80,000 Hours Podcast - Air Date 8-29-24

 

LUIS RODRIGUEZ - HOST, 80,000 HOURS PODCAST: I basically want to dive right into SB1047. Can you start by saying what kinds of risks from AI the bill is trying to address? 

NATHAN CALVIN: I think it's very much trying to pick up where the Biden executive order left off. And so I think there are three categories of risks that the EO talks about in terms of risk from chemical, biological, radiological, and nuclear weapons and ways that AI could kind of exacerbate those risks or kind of allow folks who were previously not able to weaponize those technologies to do so.

And then another one is very severe cyber attacks on critical infrastructure. And then another one is AI systems that are just autonomously causing different types of havoc and evading human control in [00:17:00] different ways. 

Yeah, so those are the three categories of risk that the Biden executive order lays out. And I think that this is very similarly trying to take on those risks. 

LUIS RODRIGUEZ - HOST, 80,000 HOURS PODCAST: What can you say about how the bill came to be, including any involvement you've personally had in it? 

NATHAN CALVIN: I think that Senator Wiener got interested in these issues himself just from talking with a variety of folks in SF who were thinking about these risks. And I think for people who have spent time at SF get-togethers this is a thing that people are just talking about a lot and thinking about a lot, and it's something that he got interested in and really taken with. So, yeah, then he put out the intent bill and then was looking for organizations to help make that into a reality and make it into full detailed legislation. And as part of that process got in touch with us—the Center for AI Safety Action [00:18:00] Fund—as well as Economic Security California Action, and then Encode Justice, and we really worked on putting additional technical meat on the bones of some of those kinds of high level intentions that they laid out, and working really closely with the Senator's legislative director and the Senator himself. I think there are some bill authors among representatives who, you know, defer a lot to staff and other folks they're working with. But I think Senator Wiener was just, like, very deeply in the details and wanting to make sure that he understood what we were doing and agreed with the approach. And it [has] really been a pleasure to work with him and his office, given the amount of involvement and interest he's taken in the policy.

LUIS RODRIGUEZ - HOST, 80,000 HOURS PODCAST: Cool. Okay. So, in just incredibly simple terms, what does the bill say? 

NATHAN CALVIN: Yeah, I think the way that [00:19:00] I'd most straightforwardly describe the bill is, you know, there have been a lot of voluntary commitments that the AI companies have themselves agreed to, things like the White House voluntary commitments. There were also some additional voluntary commitments made in Seoul, facilitated by the UK AI Safety Institute, and, you know, they say a lot of things around testing for serious risks, taking cybersecurity seriously, thinking about these things. What I really view this bill as doing is taking those voluntary commitments and actually instantiating them into law, and saying that this is not something that you just get to decide whether you want to do, but something where there are actually going to be legal consequences if you're not doing these things that really seem very sensible and good for the public. 

LUIS RODRIGUEZ - HOST, 80,000 HOURS PODCAST: Hey listeners, a quick interruption. So, to give ourselves more time to chat through objections to the bill, misunderstandings about it, and so on, Nathan and I [00:20:00] didn't dive any deeper into the details of the bill during our actual interview.

So, I wanted to jump in and give a few more concrete details about what's actually in the bill as of August 23rd. So, first, it's worth emphasizing that all of the provisions of the bill only apply to models that require 100 million dollars or more in compute to train, or that take an open source model that is that big to start with and then fine-tune it with another $10 million worth of additional compute.

At the moment, there are no models that meet these requirements, so the bill doesn't apply to any currently existing models. But for future models that would be covered by the bill, the bill creates a few key requirements. So, first, developers are required to create a comprehensive safety and security plan, which ensures that their models do not pose an unreasonable risk of causing or significantly enabling critical harm. Critical harm is defined in the bill as mass [00:21:00] casualties or incidents resulting in $500 million or more in damages. 

That safety and security plan has to be able to explain how the developer is going to take reasonable care to guard against cybersecurity attacks to make sure that the model can't be stolen; how it would be able to shut down all copies of the model under their control if there were an emergency; and how the developer would test that the model can't itself cause critical harm. And the developer then has to be able to publish the results of those safety tests. 

And finally, that plan has to commit to building in the appropriate kind of guardrails that would make sure that users can't use the model in harmful ways. In addition, developers of these advanced models are required to undergo an annual audit. If a developer violates these rules, and their model, in fact, causes critical harm itself, or is used by a person or group to cause critical harm, the developer can [00:22:00] be held liable for that harm and fined by the attorney general. For fine-tuned models that involve $10 million or more in expenditure, the fine-tuner bears responsibility for all of these things. For those spending less, the original developer holds responsibility.

Finally, the bill creates protections for whistleblowers. So, in other words, employees of AI companies who report noncompliance will be protected from retaliation. There are a few other bits and pieces in the bill, but those were the things that struck me as most important.
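To put the coverage thresholds and liability chain just described in one place, here is a minimal sketch in Python. It assumes the numbers are exactly as summarized above (the $100 million training threshold, the $10 million fine-tuning threshold, and the $500 million critical-harm figure); the function names and inputs are hypothetical illustrations, not anything drawn from the bill's actual text.

```python
# A minimal sketch of the coverage and responsibility rules as summarized above.
# These constants and functions are illustrative only, not legal text.

COVERED_TRAINING_COST = 100_000_000   # $100M+ in compute to train
COVERED_FINETUNE_COST = 10_000_000    # $10M+ in compute to fine-tune a covered model
CRITICAL_HARM_DAMAGES = 500_000_000   # mass casualties or $500M+ in damages

def is_covered(training_compute_cost: float,
               finetune_compute_cost: float = 0.0,
               base_model_covered: bool = False) -> bool:
    """Covered if the model cost $100M+ to train, or if it fine-tunes an
    already-covered model with $10M+ of additional compute."""
    if training_compute_cost >= COVERED_TRAINING_COST:
        return True
    return base_model_covered and finetune_compute_cost >= COVERED_FINETUNE_COST

def responsible_party(finetune_compute_cost: float) -> str:
    """Per the summary above: fine-tuners spending $10M+ take on the obligations;
    below that, responsibility stays with the original developer."""
    if finetune_compute_cost >= COVERED_FINETUNE_COST:
        return "fine-tuner"
    return "original developer"

# Example: no current model clears the $100M threshold, so nothing is covered today.
print(is_covered(training_compute_cost=50_000_000))                           # False
print(is_covered(training_compute_cost=20_000_000,
                 finetune_compute_cost=15_000_000, base_model_covered=True))  # True
print(responsible_party(finetune_compute_cost=2_000_000))                     # "original developer"
```

The point of the sketch is just the decision logic: coverage hinges on compute spend, and responsibility shifts to a fine-tuner only above the $10 million mark.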

This Moment in AI: How We Got Here and Where We're Going - Your Undivided Attention - Air Date 8-12-24

 

AZA RASKIN - CO-HOST, YOUR UNDIVIDED ATTENTION: One of the weird things about wandering around the Bay Area is the phrase, can you feel the AGI? That is the people that are closest... I know, right? 

SASHA FAGAN - PRODUCER, YOUR UNDIVIDED ATTENTION: Seriously? 

TRISTAN HARRIS - HOST, YOUR UNDIVIDED ATTENTION: Feel the AGI. There's t-shirts with it. 

SASHA FAGAN - PRODUCER, YOUR UNDIVIDED ATTENTION: There's t-shirts with it on? 

AZA RASKIN - CO-HOST, YOUR UNDIVIDED ATTENTION: I've walked into dinners and the first thing that somebody said to me is like, [00:23:00] you're feeling the AGI. He looked at my face. I was really concerned. I actually hadn't been sleeping because when you metabolize how quickly everything is scaling up and the complete inadequacy of our current government or governance to handle it, it honestly makes it hard for me to sleep sometimes and I walked in, he looked at my face, and he's like, Ah, you're feeling the AGI, aren't you?

SASHA FAGAN - PRODUCER, YOUR UNDIVIDED ATTENTION: This is AGI as in artificial general intelligence, which some people outside of the Bay Area don't ever think that we're actually going to get to. So you're talking about something which is, you know, it's just normal in the Bay Area to be working towards that and thinking about it. 

AZA RASKIN - CO-HOST, YOUR UNDIVIDED ATTENTION: And I should be really clear here, because there is debate (inside of both the academic community and the labs) about whether the current technology—you know, these transformer-based large language models—will get us to something that can replace most human beings on most economic tasks, which is the sort of, [00:24:00] like, the version of AGI, the definition, that I like to use. And the people that believe that scale is all that we need say, Look, if we just keep growing and we sort of project out the graph of how smart the systems have been—four years ago it was sort of at the level of a preschooler; GPT-4, the level of a smart high schooler; the next models coming out, maybe they'll be at PhD level. You just project that out and by 2026-2027 they will be at the level of the smartest human beings, and perhaps even smarter; there's nothing that stops them from getting smarter. And there are other people that say, Hey, actually, large language models aren't everything that we're going to need. They don't do things like long-term planning. We're one more breakthrough away from something that can really just be a drop-in human replacement. Either one of these two camps: you either don't need any more breakthroughs, or you're just one breakthrough away. We're very, very close. At least that's the talking side of Silicon Valley.

TRISTAN HARRIS - HOST, YOUR UNDIVIDED ATTENTION: You [00:25:00] know, if you talk to different people in Silicon Valley, you really do get different answers, and it really feels confusing sometimes. And I think the point that Aza was making is that whether it is slightly longer, like closer to, I don't know, five to seven years, versus, you know, one to two years, that's still not a lot of time to prepare for that.

And when, you know, artificial general intelligence-level AI emerges, you'll want to have major interventions way before that. You won't want to be starting to figure out how to regulate it after that occurs. You want to do it before. And I think that was the main mission of the AI Dilemma, was how do we make sure that we set the right incentives in motion before entanglement, before it gets entrenched in our society. You only have one period before a new technology gets entangled, and that's right now. 

SASHA FAGAN - PRODUCER, YOUR UNDIVIDED ATTENTION: Yeah. I mean, it's hard sitting all the way over here in the suburbs of Sydney, Australia. And I do have a sense from my perspective that there's been a little bit of hype, you know. Some of the fear about AI hasn't [00:26:00] translated. I mean, it hasn't transformed my job yet. My kids aren't really using it at school. And when I try to use it, honestly, I find it a little bit crappy and not really worth my while. So, how do you sort of take that further and convince someone like me to really care? And what's the future that I'm imagining, I guess, even for my job five or 10 years into the future?

TRISTAN HARRIS - HOST, YOUR UNDIVIDED ATTENTION: I think one thing that's important to distinguish is how fast AI capabilities are coming versus how fast AI will be diffused or integrated into society. I think diffusion or integration can take longer, and I think the capabilities are coming fast. So, I think people look at the fact that the entire economy hasn't been disrupted so quickly as, you know, creating more skepticism around the AI hype. I think certainly with regard to how quickly this transformation can take place, that level of skepticism is warranted. But I do think that we have to pay attention to the raw capabilities. If you click around and find the corner of Twitter where people are [00:27:00] publishing the latest papers in AI capabilities, you will be humbled very quickly by how fast progress is moving. 

AZA RASKIN - CO-HOST, YOUR UNDIVIDED ATTENTION: I think it's also important to note there is going to be hype. Every technology goes through a hype cycle where people get over excited. 

SASHA FAGAN - PRODUCER, YOUR UNDIVIDED ATTENTION: And we're seeing that now, right? People are saying OpenAI is supposed to be potentially losing $5 billion this year. You know, there's a bit of a feel of, is there a kind of crypto crash coming, you know, with the energy around AI at the moment? 

AZA RASKIN - CO-HOST, YOUR UNDIVIDED ATTENTION: Right, exactly. So, and that happens with every technology. So, that is true. And also true is the raw capabilities that the models have and the amount of investment into the, essentially, data centers and compute centers that companies are making now. So, you know, Microsoft is building right now a hundred billion dollar computer super center, essentially. 

SASHA FAGAN - PRODUCER, YOUR UNDIVIDED ATTENTION: Okay, I do want to move on now to questions around data because there's been a huge amount of reporting recently about how large language [00:28:00] models are just super hungry for human generated data and they're potentially running out of things to hoover up and ingest. And there's been predictions that we might even hit a data wall by 2028. How is this going to affect the development of AI? 

AZA RASKIN - CO-HOST, YOUR UNDIVIDED ATTENTION: I mean, it's a real and interesting question, right? Like, if you've used all of the data that's easily available on the internet, what happens after that? Well, a couple of things happen after that. One, and we're seeing this, is that all the companies are racing for proprietary data sets, sitting inside of financial institutions, sitting inside of academic institutions is a lot of data that is just not available on the open internet. So, it's not exactly the case that we've just run out of data, like the AI companies may have run out of easily accessible open data. 

SASHA FAGAN - PRODUCER, YOUR UNDIVIDED ATTENTION: Free data.

AZA RASKIN - CO-HOST, YOUR UNDIVIDED ATTENTION: Free data. The second thing is that there are a lot of data sources that require translations. That is, there's a lot of television and [00:29:00] movies, YouTube videos, and it takes processing power to convert those into, say, text. But that's why OpenAI created Whisper and these other systems. There's a big push in the next models to make them multimodal, that is not just speaking language, but also generating images, also understanding videos, understanding robotic movements. And it is the case with GPT-4 scale models that as they were made multimodal, they didn't seem to be getting that much smarter. But the theory is that's because they just weren't big enough. They couldn't hold enough of every one of these modalities at the same time. So there's some big open questions there. 

But when we talk to people on the inside (and these are not the folks like the Sam Altmans or the Darios, who have an incentive to say that the models are just going to keep scaling and getting better), what we've heard is that they are figuring out clever ways of getting over the data wall, and that the scaling does seem [00:30:00] to be progressing. We can't, of course, independently verify that, but I'm inclined to believe them. 

Newsom's AI dilemma: To sign or not to sign - POLITICO Tech - Air Date 9-6-24

 

STEVEN OVERLY - HOST, POLITICO TECH: So the California legislature has passed a hotly debated AI safety bill, just as the session comes to a close. What made it into the final version of the bill? 

JEREMY WHITE: This bill would require the largest artificial intelligence models, so-called 'frontier models', to undergo safety testing before they are released onto the market, essentially ensuring that they don't pose a risk for catastrophic harms like bio attacks and that type of thing.

Companies would essentially be exposed to civil penalties if the state finds that they are not doing their due diligence on these models that they're seeking to release into the market. And the mechanism for enforcing this became a central point of contention with this bill. The message from a lot of the tech company foes all along was, rather than apply liability at the front end before we release these [00:31:00] models, punish us if harm occurs. If our models go out into the world and they wreak havoc, then yes, we deserve some accountability. And the response from proponents has been, you wouldn't wait to regulate nuclear energy until you had a Chernobyl. And essentially that we need to be proactive and preemptive about preventing these harms.

STEVEN OVERLY - HOST, POLITICO TECH: Got it. And so with all of that debate, the legislature still passed it. And that bill now needs to be signed by Governor Gavin Newsom. And there's a lot of pressure on him to veto it. Do we know what he's expected to do? 

JEREMY WHITE: The governor has, as he generally tends to do with legislation, not said specifically where he's leaning. I do think, given the governor's history of being close to the tech industry, and his record of having rejected some bills that passed the legislature that the technology industry did not like, such as regulation last year on autonomous vehicles, I think the even money is on him being more likely to veto this than to sign it. That said, the [00:32:00] governor has, again, been pretty diligent about not giving a clear indication either way, beyond saying that he believes that artificial intelligence, while it merits regulation, is an important industry and one that helps California maintain its competitive edge. 

STEVEN OVERLY - HOST, POLITICO TECH: Right. Well, and that's always the tension there in California when it comes to regulating tech. And as you said, Newsom, in particular, has relationships with these tech companies going back many, many years and through many, many different roles that he's had. What are companies saying now that the bill is heading to his desk? 

JEREMY WHITE: I think all along, there has been a hope among people in the industry that the governor would be an ally and a backstop on this one. That certainly isn't to say that they didn't try to stop it or substantially amend it in the legislature, but I do think there's been this dynamic for a while in which groups see that when it comes to regulating tech, they're more likely to have an ally in the governor often than in a majority of democratic legislators. People are continuing to bend his ear, [00:33:00] [unintelligible] the people warning that if he doesn't sign this and there's some sort of catastrophe that's on him; to people warning you're gonna be the one who's responsible for sort of killing the golden goose that, not just California, but San Francisco, a city that he was mayor of, don't forget, is looking to drive its economic engine.

STEVEN OVERLY - HOST, POLITICO TECH: Some other pushback has also come from Washington, which, you know, Washington has not passed any meaningful AI safety legislation of its own, and yet we have seen California lawmakers like Nancy Pelosi, Ro Khanna, Zoe Lofgren, all kind of come out against this California bill. How is that message being interpreted there?

JEREMY WHITE: Certainly the fact that these lawmakers represent the Bay Area, Silicon Valley, and adjacent districts, I think it is a strong signal from them that they have heard from folks in their districts and in this industry. In Congresswoman Lofgren's case, she had her staff talk to people and make a recommendation that this is going to be bad for innovation. This is [00:34:00] going to hurt these businesses that again, are major economic players in this area. I think Congresswoman Pelosi's intervention was also read in some quarters as sort of putting a marker in a succession fight. The state senator carrying this bill, Scott Wiener, is known to be all but certain to run for Nancy Pelosi's seat when she leaves. Some people saw this as perhaps the speaker emerita creating some space for her daughter, Christine, in that race. That aside, I think clearly these members of Congress represent a lot of the executives and workers and headquarters of these companies, and so they are channeling some of those very significant industry concerns.

STEVEN OVERLY - HOST, POLITICO TECH: Got it. So, some policy behind it, but also it sounds like politics being played as well, which is not, I guess, unexpected. 

You know, we have seen this dynamic, though, where Washington fails to regulate or fails to act in tech and so the California legislature kind of steps in to regulate. We saw that with [00:35:00] privacy, you know, kids online safety. Is that dynamic involved here as well? Have we heard from any federal lawmakers that they don't want to be preempted again by California on tech regulation? 

JEREMY WHITE: That is absolutely a dynamic in play. I've heard over and over again from California lawmakers, whether it's Scott Wiener or one of the many others doing AI bills that they felt they had to act because it was clear to them Congress was not going to. 

There has been some back and forth between Sacramento and Washington on this. When I spoke to Congresswoman Lofgren, she told me that's nonsense, we have been working on it. And she said there are some areas in which she thinks there's an appropriate role for California to move ahead, things like data privacy and clean car regulations. But on this matter, which she cast as a matter of national security and importance, she was adamant that this is Congress's turf and they should be the ones to move first.

So, there has definitely been some tension between Democrats in different levels of government on this one. 

At Last Big Tech's Free Ride May Be Over - The Hartmann Report - Air Date 9-4-24

 

THOM HARTMANN - HOST, THE HARTMANN REPORT: [00:36:00] I have spoken on many occasions here and also written over at Hartmann Report about back in the day, back in the late 70s, early 80s and through the mid 90s, when Nigel Peacock and I were running, and Sue Nethercote was in another area, but, you know, we were all working with CompuServe forums. Nigel and I ran some 30 of them. We had the IBM forum, we had the Macintosh forum, we had the ADHD forum, we had the JFK assassination forum, we had the international trade forum, we had a bunch of them. And, you know, the two of us and about a dozen other people that worked with us were paid—specifically I was paid and I shared that revenue with all of them—we were paid to monitor the forums on CompuServe. Because this was all before 1996. And CompuServe, at that [00:37:00] time, was the internet, actually, up until the mid 90s. AOL and CompuServe were pretty much all there was. And they were viewed legally the same as the New York Times, essentially. They were a publisher or, like a bookstore, a distributor of content. Now, the content was being created by individuals, you know, people who were participating. But, just like if you were to write a letter to the New York Times threatening to kill the president, or send the New York Times a photograph of, you know, somebody being murdered or somebody being tortured or raped or something, and they published it, they could be held responsible for that. The New York Times could be held responsible for it. And if a bookstore was selling, you know, for example, child pornography, they could be held responsible for that. 

And so as a consequence of that, because CompuServe and AOL were [00:38:00] viewed as bookstores or as, you know, publishers, they had to hire people like Nigel and me to run and police these forums. And we made a good living doing it, by the way. I mean, you know, it was not inconsequential amount of money. And then in 1996, Congress got together and said, you know, we really want to turn this internet thing into something. We think it has great potential and we want to encourage companies to jump in. And so, we're going to pass a law—it's called Section 230 of the Decency and whatever it is Act, which is a subset of the Telecommunications Act of 1996—we're going to pass Section 230, which says that these publishers, you know, AOL, CompuServe, and then what came after 1996 was Facebook and Twitter and everything else, that they no longer have liability, they no longer have responsibility for what they publish. So, if somebody puts [00:39:00] child pornography on their site, or somebody puts, you know, a call to murder the president on their site or whatever, they can remove it if they want, and they probably should, just as good business practices, but we're not going to punish them, we're not going to prosecute them, we're not going to fine them if they don't. So, you can have the Wild West. 

And it succeeded in jump-starting the Internet. Between 1996 and 2005, the Internet went from basically, you know, AOL and CompuServe, which was small and limited, to just exploding, worldwide. And I have been saying for some time that Section 230's time is past. That you could argue that it was useful to have there for five or ten years, but these big companies no longer need it. They're multi-billion dollar companies. I mean, Mark Zuckerberg is the richest millennial on earth. He has, I mean, he's worth, you know, hundreds of [00:40:00] millions of dollars. I don't know his exact net worth. He's worth a pile of money. And he can afford to pay somebody to monitor what's going on on Facebook. Just like CompuServe used to pay Nigel and me. I mean, CompuServe, you know, Facebook is, I mean, some of these companies are showing like 40 percent profit margins. They're spinning off billions of dollars in profits every single month. So, you know, if they have to hire a small army of content moderators, and/or change their algorithm to make sure that the kind of stuff that they're pushing out isn't getting pushed out, they can afford to do that. And they should be doing that, both morally and under the law, except that Section 230 says they don't have to do it. So, they don't, they just take the money. 

Well, things got really bad for a family, the family of 10-year-old Nylah Anderson. [00:41:00] And this was on TikTok. And TikTok has an algorithm that decides what to push to people. And little ten-year-old Nylah got a blackout challenge pushed to her. It's where you hang yourself and then try to save yourself just before you black out; you cut off the blood to your brain. And Nylah died. She hung herself as a result of this thing that TikTok had actually sent to her. She did not follow this person. She did not solicit this. She did not ask for it. She received it, and she did it, and she's dead. And so her family sued TikTok. They argued in court that TikTok knew that such videos were causing kids to get into tragic accidents, yet their algorithm targeted children nonetheless. They sued under Pennsylvania state law for product liability, negligence and wrongful death. [00:42:00] And this court, it's been through a couple of courts, and then it finally went to the Third Circuit, the Third Federal Circuit of the Appeals Court, and three judges, two of them Trump appointees, one of them an Obama appointee, wrote, this is what one of the judges wrote: "Today, Section 230 rides in to rescue corporations from virtually any claim loosely related to content posted by a third party, no matter the cause of action and whatever the provider's actions". And they basically said, you know, we're not going to let this happen anymore. They blew up these provisions of Section 230. 

Now, this is just a major rollback to Section 230. They said, because "TikTok's algorithm", I'm quoting now from the decision, "TikTok's algorithm curates and [00:43:00] recommends a tailored compilation of videos for users' FYP", that's, you know, a homepage or whatever they call it, "based on a variety of factors, including the user's age and other demographics, online interaction, other metadata. It becomes TikTok's own speech". In other words, if somebody were to simply post some terrible thing on TikTok and only the people who follow that person saw it, that would be one thing. But because TikTok has this algorithm, and they're not unique in this, of course, this is true of all the social media sites, they have this algorithm that decides which posts to push out to people who haven't asked for them, this court ruled that this is not the speech of the person who posted it on TikTok, it has become the speech of TikTok itself. And TikTok is responsible for this. They are liable for this. 

Now, oddly enough, Clarence Thomas agrees with this. [00:44:00] Proof that a broken clock is right twice a day. Back in 2022, he wrote, "The reason for this use and misuse of Section 230 is simple: advertising money. In particular, the kind of advertising facilitated by large swaths of personal data depends on Section 230 immunity. Otherwise, dominant platforms would have to spend large amounts on content moderation". He goes on to say, actually this is Matt Stoller writing about what Clarence Thomas is saying. Matt Stoller goes on to say, "He pointed out that Facebook refused to do anything to stop the use of its services by human traffickers", now this is a quote from Clarence Thomas, "because doing so would cost the company users and the advertising revenue those users generate".

Where AI Isn't a Four-Letter Word: China Builds Robots to Aid Workers - The Socialist Program - Air Date 9-4-24

 

RICHARD WOLFF: Imagine we have an enterprise, a workplace, with a hundred workers, and they make shirts, let's call it, for lack of a [00:45:00] better one. They make shirts. These hundred workers make shirts. And along comes a new invention, whatever it is, automation of one kind or another, and it is now possible to get the same number of shirts coming off the production lines every day or every week as you used to but you no longer need 100 workers. Fifty workers can do it because the new machine, the new technology, the new software, whatever it is, allows those 50 workers to be doubly productive compared to what they used to be, and so the employer, whoever that might be, fires half of the workers because they don't need them to produce the same number of shirts. 

Now, here, follow the example, the simple arithmetic. If you're producing the same number of shirts, you're [00:46:00] getting the same revenue. Let's assume, simply, the price is the same. Whatever you got for shirts before, you get for shirts now. You make the same number of shirts the hundred workers made in the old days, only now you need just fifty workers to make them. Okay, if the price is the same and you're producing the same amount of shirts, you're gonna get the same revenue. But the employer claps his hands together because he may be getting the same revenue, but he has fired half of his workforce. He has saved on labor costs, as those people put it. Half of the revenue he got that he used to have to pay to a worker, he keeps for himself. So whatever his profits were before, his profits now are much higher because he's keeping, for himself, what he used to have to pay to the workers. No wonder he [00:47:00] will spend the money to get that machine installed that will make his workers more productive, because he's going to end up with more profits. 

That's the story. That's the way it's carried in the textbook. That's the way it actually works. Notice in this story, nobody seems to be worried about the 50 workers who got fired. What happened to them? What happened to their husbands, their wives, their children, the elderly who depended on them? What happened to the stores in the community that depended on these people having money to spend for their groceries, for their clothing, for their amusements? All of that damage done by technical progress, we're not supposed to think about. And that's not because it's bad news. It's because it highlights that technology is installed [00:48:00] if and when and to the extent that it is profitable, not for any other reason. 

And so let me now conclude my little example by giving you the other reason. What could have happened in this shirt producing enterprise is something completely different. The people there could have utilized the new technology in an altogether different way. And it's really very simple. Here's what they could have done. They could have said to the 100 workers, stay right where you are. You are not going anywhere. We are going to have you come here and produce the shirts the way you always did. However, we're going to cut the labor day, the working day, from eight hours a day to four hours a day. And why? Well, it's very [00:49:00] simple. In four hours, with these new machines we're going to get, you are twice as productive as you used to be. The company will produce the same number of shirts with you working half time as we did before. We'll sell them, we'll assume the same revenue comes in, and we will pay you as we always did, but you will have to do only half a day's work, five days a week. In other words, the technology frees up human labor. The technology helps everybody have half as much time to work for the same income they got before.
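Wolff's arithmetic can be written out as a short sketch. The dollar figures below are placeholders purely for illustration, not numbers from the broadcast; the only assumptions carried over are his own (same output, same price, and therefore the same revenue in both scenarios).

```python
# A small arithmetic sketch of the two scenarios Wolff describes, using his own
# simplifying assumptions (same shirt output, same price, therefore same revenue).
# The specific dollar figures are placeholders purely for illustration.

workers = 100
weekly_wage = 1_000        # hypothetical pay per worker per week
weekly_revenue = 200_000   # hypothetical revenue, unchanged in both scenarios

# Before the new machine: 100 workers working full eight-hour days.
profit_before = weekly_revenue - workers * weekly_wage                # 100,000

# Scenario 1 (the capitalist road): productivity doubles, so the employer
# fires half the workforce; the saved wages show up as extra profit.
profit_after_layoffs = weekly_revenue - (workers // 2) * weekly_wage  # 150,000

# Scenario 2 (the worker co-op): keep all 100 workers, cut the working day
# from eight hours to four, and pay the same wage as before. Revenue and
# profit stay where they were; the productivity gain is taken as free time.
profit_coop = weekly_revenue - workers * weekly_wage                  # 100,000
hours_per_day_coop = 8 / 2                                            # a four-hour day for everyone

print(profit_before, profit_after_layoffs, profit_coop, hours_per_day_coop)
```

The comparison makes the point of the example concrete: the productivity gain is the same in both cases; what differs is whether it is taken as extra profit for the employer or as free time for the whole workforce.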

Is that possible? Of course it is. Has that been done in history? Yes, it has. You know who would do that, [00:50:00] who has done it? A worker cooperative, because they are workers who together decide what to do about new technology. A worker co-op has the workers being their own boss, so they make the decision. And of course, this decision is a no-brainer, because for the workers as a whole, the hundred workers, it would take them exactly one second to choose between half of them losing their job while the others continue, versus all of them getting half time off for the same salary they got before. That's easy. Which one of those is better? Well, if you're a democrat, with a small d, you would obviously favor the second one. Why? Because a hundred workers [00:51:00] getting half a day off every day from now on is serving the majority, whereas firing half the workers so that the employer can make a bigger profit, well, that's serving the minority with more profit at the expense of one half of the majority. No democratic decision making would ever end up that way. 

And now let me simply apply this to the story about the Chinese robots. China calls itself 'socialism with Chinese characteristics'. Well, this is a very old problem. And what the robots enable the Chinese to do is to make a really big decision. Are you going to go down the capitalist road, sacrificing workers to make more profits for the [00:52:00] employer, whether that employer is a private individual or a company on the one hand, or a state operated and owned enterprise on the other? Or are you going to use the exploding technology in China to give people a quality of life that the rest of the world has only dreamed of? Put people on half time. Imagine with me, if the Chinese choose, and it's an open question, which way they're going to go, but if they choose to do what the worker co-op would do, to utilize the new technology, the robots, that they are already the number one producer of in the world—the Chinese are, they're also the number one market for robots already—but if they were to choose to really do this, to make use of robots on a massive [00:53:00] scale, and not just robots to produce shirts and ice cream cones and all the rest of it, but robots to produce the robots so that we really don't need people to do hard drudgery labor, eight hours a day, five out of seven days a week, you know, what we're all used to. Then, the struggle between China and the West will be won—not by a war, not in the old ways, not by saber rattling against each other, not by tariff wars or trade wars, all of it—the war will be won because the whole world will watch while Chinese workers work fewer and fewer hours per day while earning the same amount of money, and the struggle between systems will be resolved that way, and no [00:54:00] war will be tolerated by either side. It'll be a no brainer which way to go. 

Tech's Big Money Campaign is Getting Pushback with Margaret O'Mara and Brody Mullins Part 2 - Your Undivided Attention - Air Date 8-26-24

 

TRISTAN HARRIS - HOST, YOUR UNDIVIDED ATTENTION: So in this conversation, we've diagnosed a bunch of problems. You know, we've diagnosed that there's a complexity gap: technology's moving faster than, you know, the law. And when technology companies see Ted Stevens say the internet is a series of tubes, that's not them just advocating for their position; that's realizing that there's a lack of understanding in government. And we want to preserve the kind of lobbying that's educational, right? But we don't want the kind of lobbying where there's, let's say, a thousand-to-one difference in the amount of resources that companies and private interests can deploy compared to that which might be good for people. So, when you think about this perspective, what are the kinds of mechanisms or interventions that would lead us to a more humane world with a better balance of power? Brody? 

BRODY MULLINS: Uh, that's a tough one. You know, I feel like reporters are really good at pointing out the problems, but not very good at coming up with solutions. [00:55:00] But in what we've talked about, you know, these companies, as we've said, are spending far more money to get far more influence than regular Americans. But at the end of the day, regular Americans do have the power. They have the votes. They're the ones who send members of Congress to Congress in the first place. The problem is that, you know, most consumers and Americans are not mobilized and organized. There's not one big organization that's pulling people together. But if there was, if the American people can come together and talk to their members of Congress in an organic way, you know, similar to the shutdown the internet day that Google and the tech industry organized, if there was an organic movement like that, the American people would have far more power than corporate America. It's just that they're disorganized right now. 

MARGARET O'MARA: Yeah, I think for so long Silicon Valley, or the tech industry, and DC have kind of seen one another through a glass darkly, not quite understood and appreciated the role of the other in the broader project in which all are engaged. [00:56:00] Silicon Valley, you know, has its origins in government spending and defense spending during the Cold War. Government policy towards higher education, research and development, as well as spending on tech and buying tech things and encouraging the development of them, has been foundational to the Valley from the Manhattan Project to today. And that's something that isn't fully appreciated, and I think it kind of drives some of the antiregulatory feeling in the Valley, when we move beyond the C-suite of these biggest companies; it's kind of this feeling like, Oh, if you regulate us, this innovation machine is going to stop. And actually the longer history shows that is not the case, that there has been a real robust government role that has encouraged the growth of the Valley. 

So, I think that's one thing. I think the other thing, Tristan, you point out this growing imbalance in expertise and resources, which is, I think, a reflection of the dismantling of expertise from within the government at the federal level, where you have, [00:57:00] you know, agencies like the FTC that are kind of operating on a shoestring and tin cans between them, basically with very little expertise, and where, particularly in the last 15 years, there's been this giant sucking sound that has drawn expertise from academia and from government towards industry because the paycheck is just too good.

So, we have this real severe imbalance. So I think part of it is Washington or the public sector, the public building up its capacity to be good partners, be good regulatory partners, and to understand how the tech works and to do smart regulation that may well cut into profits, but actually will ultimately benefit the consumers and market competition, which is the point of the whole business.

TRISTAN HARRIS - HOST, YOUR UNDIVIDED ATTENTION: Well said. 

MARGARET O'MARA: And actually, when you look over time, you see kind of a swing towards less regulation, more regulation. You see, you know, change happens slowly, then happens all at once. And so the kind of political dysfunction of Capitol Hill will not be [00:58:00] forever, if history is any guide. History doesn't repeat itself, nothing's inevitable, but we generally have some good proof points. And also, if you again go back to the early 20th century and kind of the extraordinarily concentrated wealth and power, what dismantled that, and it took a long time, but it involved government, it involved citizens mobilizing together in interest groups of their own, lobbying groups of their own, and the voters, the voters voting, and voting for pro-regulatory policies and lawmakers, and gradually things do shift. 

Note from the Editor on a possible future for humanity

 

JAY TOMLINSON - HOST, BEST OF THE LEFT: We've just heard clips starting with Andrewism describing the elements of neocolonialism inherent in big data. Your Undivided Attention described the dawn of the big tech lobbying era. The 80,000 Hours Podcast looked at the proposed regulation in California. Your Undivided Attention described the difficulty of balancing AI growth and safety. Politico Tech dove deeper on proposed [00:59:00] legislation. The Hartmann Report discussed the lawsuit against TikTok that put the danger of Section 230 in stark relief. The Socialist Program spoke with Professor Richard Wolff about the options for using technological advancement to relieve people from the drudgery of work. And Your Undivided Attention explained the need for public mobilization to demand regulation of Silicon Valley. 

And those were just the Top Takes. There's a lot more in the Deeper Dives section, but first, a reminder that this show is supported by members who get access to bonus episodes, featuring the production crew here, discussing all manner of important and interesting topics. To support our work and have those bonus episodes delivered seamlessly to the new members-only podcast feed that you'll receive, sign up to support the show at bestoftheleft.com/support (there's a link in the show notes), through our Patreon page, or from right inside the Apple podcast app. If regular membership isn't in the cards for you, shoot me an email requesting a financial hardship membership, because we don't let a lack of funds stand in [01:00:00] the way of hearing more information. 

Now, before we continue to the Deeper Dives half, I have just a couple of notes to add. The first is a great reference that I don't think is mentioned in the show today. It's regarding the phenomenon of training AI models on any data they can get their hands on, inevitably leading to the models ingesting data that was itself created by other AI models, leading to the degradation of the AI generally. One metaphor that gets used is mad cow disease, caused by feeding dead cows to other cows. Bad practice. Don't do that. Another metaphor is inbreeding and the genetic defects that can come from it. Stemming from the inbreeding idea, and this is my favorite reference, one writer coined the term "Habsburg AI", which is a wonderfully deep cut to the old royal family of Austria that's famous for having deteriorated [01:01:00] genetically due to generations of inbreeding. So, I enjoyed that. There's also something extra poetic about referring to a family dating back to the 11th century to describe potential problems with AI. So, nicely done. 

Secondly, Yuval Noah Harari wrote a piece in The Guardian that makes some fine points. It's titled "Never summon a power you can't control: Yuval Noah Harari on how AI could threaten democracy and divide the world". So, you know, nothing too heavy. It's a breezy 18-minute read if you want to check it out. He starts with a couple of old stories meant to warn humanity away from harnessing power that it can't control. The first, a Greek myth, took this pretty literally, as it was about a mortal attempting to harness the chariot of the sun and drive it across the sky, with predictably disastrous consequences. The second story is a lot more whimsical, thanks [01:02:00] to Walt Disney and Fantasia: The Sorcerer's Apprentice, in which Mickey Mouse, unsatisfied doing menial work, conjures magic to have a broom do the work for him, only to have the situation get wildly out of control. So, Harari points out that these stories don't have any suggestions for how to get yourself out of a predicament like this, other than to have, like, a god or a magician on hand to set things right. So, the real lesson is: just don't do that. Don't get yourself in that situation. 

Toward the end of the piece, he turns to game theory to describe the degree of danger we may be in. After describing the mutually assured destruction dynamic of the nuclear age, he points out that those same dynamics do not apply to cyber warfare. "Cyber weapons can bring down a country's electric grid, but they can also be used to destroy a secret research facility, jam an enemy [01:03:00] sensor, inflame a political scandal, manipulate elections, or hack a single smartphone, and they can do all that stealthily. They don't announce their presence with a mushroom cloud and a storm of fire, nor do they leave a visible trail from launchpad to target. Consequently, at times it is hard to know if an attack even occurred or who launched it. The temptation to start a limited cyber war is therefore big and so is the temptation to escalate it". 

So, it makes carrying out a first strike a little bit more tempting, and then he points out, "Even worse, if one side thinks it has such an opportunity, the temptation to launch a first strike could become irresistible, because one never knows how long the window of opportunity will remain open. Game theory posits that the most dangerous situation in an arms race is when one side feels it has an advantage, but that this [01:04:00] advantage is slipping away". 

Now, earlier in the piece, he describes how the data systems of the world, previously thought of (even if this wasn't exactly accurate) as basically one interconnected web, would begin to Balkanize as different nations begin using data protectionism as a way of jockeying for power on the international stage. This could end up leading to a very siloed digital experience of the world, and of reality, for all of the people in the world, driving people farther apart without them necessarily even knowing it. So he concludes, "The division of the world into rival digital empires dovetails with the political vision of many leaders who believe that the world is a jungle, that the relative peace of recent decades has been an [01:05:00] illusion, and that the only real choice is whether to play the part of predator or prey. Given such a choice, most leaders would prefer to go down in history as predators and add their names to the grim list of conquerors that unfortunate pupils are condemned to memorize for their history exams. These leaders should be reminded, however, that there is a new alpha predator in the jungle. If humanity doesn't find a way to cooperate and protect our shared interests, we will all be easy prey to AI". 

Sort of makes me think of all those people back in the sixties, watching the Jetsons and following the space race, who just couldn't wait for the future to arrive. Well, here we are.

SECTION A: THE THREAT

JAY TOMLINSON - HOST, BEST OF THE LEFT: And now we'll continue to dive deeper on four topics. Next up, Section A: More on the Threat. Section B: Big Tech Lobbying. Section C: [01:06:00] Regulation. And Section D: Thinking Through Solutions.

The New Colonialism of Big Tech Part 2 - Andrewism - Air Date 9-3-24

 

ANDREW SAGE - HOST, ANDREWISM: Data colonialism shares six distinct similarities with colonialism's past and present.

First, it is also founded on the appropriation of resources, with the shared mindset regarding that appropriation that the resources are cheap and unbound from ethical or environmental considerations. The spice must float. Historical colonialism's focus was on appropriating land, as the savages weren't using it properly, and labour, as the savages were predestined to servitude.

But data colonialism is focused on appropriating human life in the form of data, as clearly, every detail of our lives exists to maximize shareholder value. In any case, it's free real estate. And unlike land, data is a non-rival good, so it's ripe for exploitation by multiple parties at once. The second similarity: the appropriation serves to build a new social and economic order that benefits the colonizer, [01:07:00] whether Britain or Big Tech.

The default position is now to extract data from whatever people do, no matter how trivial. Platforms and apps organized around the collection and exploitation of data are now the near inescapable infrastructure of daily life. Third, colonialism continues to be a private state partnership. It was never solely the domain of Church and Crown.

Chartered companies and enterprises have always played a role. Today, various players in the data extraction game form what Mejias and Couldry call the social quantification sector. Opaque and utterly unaccountable companies like Palantir quietly work hand in hand with governments to maintain smart borders and predictive policing that terrorize vulnerable populations.

The more famous Big Five of Google, Apple, Meta, Microsoft, and Amazon are just as collaborative with the state and carry on colonial legacies of dispossession and injustice. Data harvesters and data aggregators, large and small, have coalesced into a parallel of the old colonial [01:08:00] administrations, as a few thousand coders, designers, managers, and marketers control the lives of billions.

Fourth, both forms of colonialism devastate the physical environment. Historical colonialism set the precedent for the natural world being viewed as cheap and ripe for large scale extraction, while today's data colonialism continues to devour precious minerals, energy, and water to sustain itself, expanding its data centers across lands in the global North and South while expelling metric tons of carbon with every Amazon package delivered, Bitcoin verified, or satellite launched.

Fifth, all forms of colonialism generate deep inequalities between colonizers and colonized, exploiters and exploited. On its face, data colonialism may not seem as physically violent as historical colonialism, but it certainly creates asymmetric data relations that deepen existing inequalities of class, race, gender, and more that affect people's ability to live, and it relies on the continued exploitation of historically colonized people to mine its much needed materials, thus [01:09:00] enabling systemic violence.

Furthermore, these data relations are absent of physical limitations in size, connectivity, depth, and transferability, thus opening up new forms of colonial power and control. Sixth, historical colonialism was justified with the civilizing mission, or white man's burden, of evangelizing Christianity and asserting racial, scientific, and economic superiority.

These days, such narratives have been thoroughly discredited, so data colonizers have turned to new justifications. Data colonizers speak of ushering in the inevitable progress of a new machine age or fourth industrial revolution and extol the convenience and connection that their extraction enables.

When you put it that way, why would anyone oppose convenience and connection? When cloud storage and WhatsApp groups make it easier than ever to save and share, why resist? This is precisely how big tech wants us to see things. They're not calling out the [01:10:00] costs of our convenience. They're not bringing attention to the asymmetric relations that our data empowers.

They won't raise the alarm on the new forms of exploitation that their extraction brings, especially when it impacts the workers, or contractors, and not the end users. They'll sell you convenient solutions to the problems they create, if you can afford that convenience, of course, and you'll have to accept it, because you're not able to opt out without significant consequences once these platforms have accrued enough power.

When they become THE social operating system, you kinda have to click I agree. But hey, at least it's convenient, right? Don't even get me started on the convenience offered to the Global South, when a combination of these countries' weak infrastructure and tech corporations' massive resources has enabled the continuation of our dependence on the Global North.

How can we ever gain our independence and truly decolonize when we're reliant on the external provision of WhatsApp, to facilitate our day to day existence? We should really be asking if the only form that convenience can take just so happens to coincide with [01:11:00] the extractive ambitions of tech conglomerates.

These companies also love to sell us on connection, but do we as a social species need social media companies to connect when we've been connected on small and large scales for tens of thousands of years? Obviously, as a writer who has chosen to distribute my work on social media, I can recognize that such platforms offer some value.

I know that they've empowered political mobilization, for better and for worse, but they also make it easier to surveil and suppress dissent. While echo chambers aren't nearly as common as is commonly believed, since being exposed to dissenting opinions is what keeps people hooked online, radicalization has certainly proliferated thanks to the profits-over-facts model of social media.

Fractures have long existed in our society, but social media certainly enhances those fractures. Meta wants us to believe that it represents the inevitable progress of human connection. But I'm sorry, how could a safe, global community ever be created from the exploitative, profit-driven model of Meta?

Why [01:12:00] should we accept their implicit claim that continuous data extraction is necessary for the human community to flourish? Don't we deserve to connect in ways that aren't dictated by their business model and disconnected from reality? But perhaps I'm asking too many questions. Maybe I just need to connect my toilet to the grid so that Amazon has a continuous reading of my stool samples for targeted probiotic advertising.

Maybe I should digitally bind every inch of my home to the global data colony. Maybe I should just hush up and plug into the Internet of Things. I've been told that my data, in combination with that of countless others, has enabled the development of artificial intelligence, which as we all know is way smarter than any of us.

Or is it? As it turns out, the hype train of AI is simply a parade in praise of an overglorified, pattern-recognizing parrot that replicates the racial and gendered biases of its massive dataset and still needs to be taught and corrected constantly. AI serves as a convenient cop-out for folks who don't want to challenge inequality and would rather give it a neutral face while relying on marginalized [01:13:00] folks globally to actually teach the computer what to do.

Even the efforts to counteract these concerns with AI ethics boards fall flat, because their ethics codes are uselessly abstract, isolated from the levers of power, and thus utterly toothless. Particularly when these boards get their checks cut by the very same corporations they're supposed to be regulating.

There may be some real scientific value in AI for sure, but much of it is just marketing and party tricks. It might get really good at detecting cancer, but it shouldn't be clogging the internet with SEO optimized slop, and it certainly shouldn't be deciding the fate of real people. Whether it's Europe bringing progress and salvation to the savages, or Facebook graciously running internet infrastructure in over 30 African countries, colonialism often excuses itself with virtuous, civilizing missions that serve to justify or erase the reality of their exploitation.

Alternatively, following the shock doctrine, data colonizers use crises like the pandemic as an excuse to expand the territories of data extraction. In any [01:14:00] case, they need these alibis to distract us from the truth and capture the social imagination so fully that we can't even consider that there are alternative means of convenience and connection.

Just click I agree. Or don't. Maybe it's time to unaccept these terms and conditions. It won't be easy. Colonialisms past and present love to make us feel as though their power is incontestable. There's a lot of deception, exploitation, and coercion that gets us to accept this way of the world. But that doesn't mean we're completely helpless.

With the mental health impacts of big tech, the ongoing loss of workers' rights, the ever-growing authority of algorithms, the manipulation of populations for commercial and political purposes, the rising threat of disinformation and hate speech, and the decimation of environments by data centres, the threat of data colonialism seems insurmountable.

Yet data colonialism can be resisted, once we identify our shared interests, build concrete solidarity, and develop our understanding of these issues.

This Moment in AI How We Got Here and Where Were Going Part 2 - Your Undivided Attention - Air Date 8-12-24

 

SASHA FAGAN - PRODUCER, YOUR UNDIVIDED ATTENTION: [01:15:00] Some companies are turning to AI generated content to fill that void. This is what they call synthetic data.

What are the risks of feeding AI generated content back into the models? 

AZA RASKIN - CO-HOST, YOUR UNDIVIDED ATTENTION: Right. Generally, when people talk about the concerns of synthetic data, what they're talking about is sort of these models getting high off their own exhaust, which is that if the models are putting out hallucinations and they're trained on those hallucinations, you end up in this sort of like downward spiral where the models keep getting worse.

And in fact, this is a concern. Uh, last year, Sam Altman said that one out of every thousand words that humanity was generating was generated by ChatGPT. Right. That's incredible. That is absolutely incredible. Incredibly concerning, right? Because that shows that, um, not too far into the future, there will be more text generated by AI and AI models, more cognitive labor done by machines than by humans.[01:16:00] 

So that's, in and of itself, scary. If models put out AI-generated content and are then trained on that content, you might get the sort of downward spiral effect. That's the concern people have. But when they talk about training on synthetic data, that concern does not apply, because they are making data specifically for the purposes of passing benchmarks, and they create data that is specifically good at making the models better.

So that's a different thing than sort of getting high on your own exhaust. 

SASHA FAGAN - PRODUCER, YOUR UNDIVIDED ATTENTION: Right. But it leaves us in a culture where we're surrounded, or have surround sound, of synthetically created data, or non-human-created data; potentially it's non-human-created information around us.

AZA RASKIN - CO-HOST, YOUR UNDIVIDED ATTENTION: And this is how you can get to, without needing to invoke anything sci-fi or anything AGI, how you can get to humans losing control.

Because this is really the social media story told again, which is, everyone says, like, when an AI starts to, like, control humanity, just pull the plug, [01:17:00] but there is an AI in social media, it's the thing that's choosing what human beings see, that's already, like, downgrading our democracies, all the things we normally say, um, and we haven't pulled the plug because it's become integral to the value of our economy and our stock market.

When AIs start to compete, say, in generating content in the attention economy, they will have seen everything on the internet, everything on Twitter. They will be able to make posts and images and songs and videos that are more engaging than anything that humans create. And because they are more engaging, they'll become more viral.

They will out-compete the things that are sort of bespoke, human-made. You will be a fool if you don't use those for your ends. And now, you know, essentially, the things that AI is generating will become the dominant form of our culture. That's another way of saying humans lost control. 

TRISTAN HARRIS - HOST, YOUR UNDIVIDED ATTENTION: And to be clear, Aza's not saying [01:18:00] that the media or images or art generated by AI are better from a values perspective than the things that humans make.

What he's saying is they are more effective at playing the attention economy game that social media has set up to be played, because they're trained on what works best and they can simply out-compete humans at that game. And they're already doing that. 

SASHA FAGAN - PRODUCER, YOUR UNDIVIDED ATTENTION: It's terrifying. Um, we'll, we'll still have art galleries in places that are offline though, that don't have um, AI generated content.

It'll, 

AZA RASKIN - CO-HOST, YOUR UNDIVIDED ATTENTION: it'll be art, artisanal art. 

SASHA FAGAN - PRODUCER, YOUR UNDIVIDED ATTENTION: Yeah. Artisanal art. Yeah.

SECTION B: BIG TECH LOBBYING

JAY TOMLINSON - HOST, BEST OF THE LEFT: Now entering Section B: Big Tech Lobbying.

Nathan Calvin on Californias AI bill SB 1047 and its potential to shape US AI policy Part 2 - 80,000 Hours Podcast - Air Date 8-29-24

 

LUIS RODRIGUEZ - HOST, 80,000 HOURS PODCAST: So we'll come back to more about what specifically is in the bill, uh, in a little bit, but I actually want to talk about kind of the proponents and the critics of the bill because it's become so incredibly controversial over the last few months and even just last week that I want to kind of look at that right off the bat.

So I guess, [01:19:00] who supports the bill? Who's in favor? 

NATHAN CALVIN: There's a, uh, a really wide variety of, of supporters. I think some of the most high-profile ones have been Geoffrey Hinton and Yoshua Bengio and Stuart Russell and Lawrence Lessig, kind of some of these, uh, you know, scientific and academic luminaries of the field.

I think there's also just a wide variety of, of different nonprofit and, uh, startups and different organizations that are supportive of it. Uh, SEIU, one of the largest unions in the United States, is supportive of the bill. There are also some AI startups, including, uh, Imbue and, uh, Notion, that are both in support of the legislation, and a wide variety of others, you know, like the Latino Community Foundation.

Like, there's just a lot of different kind of civil society and non profit orgs who have formally supported the bill and say that this is important. 

LUIS RODRIGUEZ - HOST, 80,000 HOURS PODCAST: I think from memory, like, the vast majority, or maybe it's like three quarters of Californians, also in a poll really support the bill, which [01:20:00] quite surprised me.

I don't think of basically any legislation ever having that much support. And probably that's wrong, but it still seems, it still seems just intuitively high to me. But yeah, let's talk about some of the opponents. Um, I guess naively, it's hard for me to understand why this bill has become so controversial.

Yeah, in particular, because my impression is that nearly all of the big AI companies have already adopted some version of this kind of exact set of policies internally. And you can correct me if I'm wrong there. But yeah, who are the bill's big opponents? 

NATHAN CALVIN: Yeah. So I think maybe the loudest opponent has been Andreessen Horowitz, um, A16Z and some of their, their general partners have come out just, um, really, really strongly against the, the bill.

Um, 

LUIS RODRIGUEZ - HOST, 80,000 HOURS PODCAST: And just in case anyone's not familiar, they're like maybe the biggest investor ever, or, or at least in, in these technologies. 

NATHAN CALVIN: Yeah. Yeah. I think that [01:21:00] they're in their category of VC firm and they're probably different ways of defining it. I think they're the largest, you know, I'm sure you could put it in different ways such that they're lower on that list or something, but they're extremely large venture capital, um, firm.

So I think there's a mix of different opponents. I think that's definitely one really significant one. I think there are also folks like Yann LeCun, who has called kind of a lot of the risk that the bill is considering, you know, science fiction and things like that. I think there has also just been, kind of more quietly, a lot of the kind of normal big tech interests, you know, things like Google and, uh, you know, TechNet, like the trade associations that really kind of advocate on behalf of, of companies

in legislative bodies have also been quite strongly against the bill. I think we've also seen some folks in, in Congress weigh in and, you know, most recently and notably, uh, Nancy Pelosi, which is a little bit painful to me as someone who's a fan of her [01:22:00] and then has a, you know, a ton of respect for her and everything that she's accomplished.

And, you know, I can talk a little bit about that specifically as well, but yeah, there's a mix of, of different folks who have, who have come out against the bill. And I think they have some overlapping and some different reasons. And I agree that I'm a bit surprised by just how controversial and strong the reactions have been, given how, like, relatively modest the legislation I think actually is, and kind of how much it has been amended over the course of the process. And even as it's been amended to address different issues, it feels like the intensity of the opposition has kind of increased in volume rather than decreased.

LUIS RODRIGUEZ - HOST, 80,000 HOURS PODCAST: I actually am curious about the Nancy Pelosi thing. Did she have particular criticisms? What was the opposition she voiced? 

NATHAN CALVIN: I think it's a, a mix of things. I mean, I, I do think that she, she talked about the letter that, um, Fei-Fei Li wrote in opposition of the bill and [01:23:00] cited that. I do think that that letter has one part that just is false.

Like, talking about how the shutdown requirements of the bill apply to open source models, when they're specifically exempted from those requirements. I think that the other sense of it is just, you know, I think they're pointing to some of these existing kind of processes and convenings that are happening federally, and just, you know, saying that it's, you know, too early to really, like, instantiate these more specifically in law, and that this is something that the federal government should do rather than having states like California move forward with it.

And I think our response is really that California has done similar things on data privacy and, uh, on green energy and lots of other things where Congress has been stalled and they've taken action. And I think we do this similarly, and obviously they have a, have a difference of opinion there. But I do think that if we wait for Congress to act itself, we might be waiting a very long time.

Tech's Big Money Campaign is Getting Pushback with Margaret O'Mara and Brody Mullins Part 3 - Your Undivided Attention - Air Date 8-26-24

 

TRISTAN HARRIS - HOST, YOUR UNDIVIDED ATTENTION: [01:24:00] So one of the things that we think a lot about at the Center for Humane Technology, I mean, we have so many obvious issues with social media degrading the quality of discourse, causing addiction, doomscrolling, skyrocketing mental health issues for, for youth, teen suicides. And we know the cause of it. We know that it's driven by this engagement based business model, the monetization of our attention.

And so given the sort of obviousness of this, one of the things that we've noticed is that you see tech companies saying, we're for regulation, we definitely need regulation, and they'll say that publicly. And then behind the scenes, they'll do every tactic possible to kind of block things. Um, I was at Senator Schumer's, uh, AI Insight Forum in front of all the CEOs, you know, Jensen Huang and Eric Schmidt and Bill Gates and everybody was there in one room, Zuckerberg, Elon.

And Schumer opened the meeting by having people raise their hand if they agreed that the federal government should regulate AI. Literally every single one of the CEOs' hands went up. And yet, the next day, all of their policy teams went to work saying, well, yeah, but not these kinds of regulations. We've seen Meta come out publicly in [01:25:00] favor of Section 230 reform, for example, and other social media companies who support kids' online safety.

So I'm just curious, how are you seeing the companies evolve their strategies in this sort of backroom opposition? 

BRODY MULLINS: Yeah, you know, it's a fascinating area because, uh, unfortunately, you know, Congress is just so, they're, they're so ill-equipped to pass any law on any topic at this point. And I think the tech companies and the AI companies are taking advantage of that.

I mean, Facebook has realized Congress is dysfunctional. They're not going to pass a law. So let's just say we support it and say, hey, you know, go for it. They basically challenged Congress to regulate them and Congress can't get its act together. 

MARGARET O'MARA: Yeah, and this is not the first time in American history this has happened, you know, where, where industries say, Oh, yeah, regulate us.

Um, but also, you know, it's a good reminder too that Silicon Valley is never one thing; there are many Silicon Valleys, right? Every company and every part of the tech world has its own, um, policy priorities, and they may not be in sync. You know, if you go back to the 1980s, the chip makers and the [01:26:00] PC makers didn't have policies in sync with one another.

Chip makers wanted to retain their market advantage. The PC makers wanted to have really cheap chips from Japan, so they didn't care if the market was flooded. Um, and we see the same thing playing out now. And, and yes, I think Brody's point about the level of dysfunction, um, this again was, was pertinent in the Gilded Age.

It's one reason we didn't have much regulation, business regulation, coming out of the late 19th century either, when you are able to play on those partisan differences and the fact that the two parties have different ways, different means towards the same end, or have different priorities even within something like social media regulation or privacy regulation.

And so where the lead has been taken, or where regulation has come, it has come from other geographies, notably from Europe. 

TRISTAN HARRIS - HOST, YOUR UNDIVIDED ATTENTION: So I, we hear this a lot, obviously, that Congress is dysfunctional, it's never going to pass anything. I just want to add to that picture that there are deliberate ways that companies will sow division about an issue so that it [01:27:00] prevents action from being taken.

The example that I'm most familiar with is Facebook turning the argument about what's wrong with Facebook into a question about whether it's free speech or censorship, because they know that that philosophical question literally will never resolve. There is no conversation that will ever say the answer is clearly one side or the other.

And by doing so, they distracted people's attention from their core business model, which is monetizing maximum engagement and attention, which is what's driving the amplification of polarizing content, moral outrage, et cetera. And so I'm curious if you have reactions to that, that one of the further strategies companies are developing is finding ways not to, uh, to sit back, but actually frame debates, actively use communication to, um, stall by using a false dichotomy.

MARGARET O'MARA: Well, these, these are companies that are very good at being, uh, very persuasive, and they have very persuasive tools at their disposal. And yeah, that's right. Sort of changing the conversation is a, is a key, uh, a key tactic here. It's not something the tech industry invented. And the, the tech industry [01:28:00] has, has always positioned itself, for a very long time, as a different kind of business, kind of a higher, kinder, gentler capitalism, um, don't-be-evil capitalism.

Right? And that has been part of its great appeal. Um, and, and it's genuine. I think it's earnest. It comes from a, from a genuine place. It has a history. There's a reason behind it. But at the end of the day, these are companies, these are, you know, a C-suite that's accountable to its shareholders. These are publicly traded companies, they're accountable to their investors, they're accountable.

So they aren't that different from any of the other lobbying industries in Washington. Wouldn't you agree, Brody? 

BRODY MULLINS: Yeah, absolutely. And, you know, this could be a good point to talk a little bit about how lobbying has changed also. You know, these are not companies that are hiring, uh, connected lobbyists to go up to Capitol Hill and try to get a member of Congress to support them.

They're running basically presidential campaigns on behalf of their issues. And one of the first things that you want in a good presidential campaign or a good national campaign is a good, easy-to-understand motto or slogan. And you know, that's why these companies seem to have these, these good arguments.

I mean, [01:29:00] back to the SOPA-PIPA fight that we talked about earlier, the 2012 shut-down-the-internet day. Um, you know, the companies' slogan was: these bills will kill the Internet. SOPA, PIPA will kill the Internet. That absolutely was not true. But it galvanized Americans. All of a sudden, Americans who don't pay attention to Washington, don't pay attention to policy, who certainly couldn't tell you what PIPA or SOPA stood for, you know, were saying, what?

You're going to shut down, you're going to kill the Internet? You can't do that. And they were calling and saying, don't shut down the Internet. Um, you know, I mean, that's a tactic that, that, that was being used even before then, but it's certainly something that tech companies have gotten better at now. 

TRISTAN HARRIS - HOST, YOUR UNDIVIDED ATTENTION: And just to, well, just to slow you down for a second, because when you're saying they're running presidential campaigns, I think what you mean is that, like, a presidential campaign is a nationwide thing that takes hundreds of millions of dollars to sway public opinion, and I hear you saying that each of these campaigns about certain regulations or about certain things are, these aren't subtle things, these are multi-hundred-million-dollar campaigns sweeping the entire nation. Is that right?

BRODY MULLINS: Absolutely. Uh, and what these companies [01:30:00] do, particularly when they're in a big legislative or policy fight, is sort of set up, uh, legislative war rooms, and they run these presidential campaigns not to elect an individual, but for a public policy issue. Um, so they have pollsters and they have grassroots organizers and they have poll-tested messages and, you know, television ads.

Um, I mean, one of the reasons that some of these, uh, antitrust bills got killed in the Senate is that the tech companies went out to key, uh, states and ran ads saying, you know, don't let these bills pass. And that scared senators who thought that, uh, the tech industry could turn those ads against them in their reelection bids.

Um, so, uh, yeah, I mean, these, these tactics and campaigns and strategies are way more sophisticated than they used to be and, and much more like a presidential campaign than what most people think a lobbying campaign is about. 

TRISTAN HARRIS - HOST, YOUR UNDIVIDED ATTENTION: I mean, that's wild to me. Even as an industry insider, it's wild to think about.

And I think when you, when you think about lobbying, you think about backroom deals, you think about, oh, you scratch my back, I scratch yours, you [01:31:00] pass this law. Not a hundred million dollar coordinated multi year influence campaign across, you know, I mean, it's just the scale is, is just unbelievable. 

MARGARET O'MARA: And this is a story of money.

I mean, this is reflecting that these companies, the industry and its largest companies have just piles and piles and piles of money. It's money they're throwing into building AI and they're throwing into these public policy campaigns. I mean, we think about the industries that are the biggest Washington lobbyists, um, by spend.

Um, they also happen to be the most profitable: pharma, oil and gas, and now tech.

Nathan Calvin on Californias AI bill SB 1047 and its potential to shape US AI policy Part 3 - 80,000 Hours Podcast - Air Date 8-29-24

 

LUIS RODRIGUEZ - HOST, 80,000 HOURS PODCAST: Yeah, actually, can you give more context on that? Anthropic submitted a letter that basically said they'd support the bill if it was amended in particular ways. Is that right? 

NATHAN CALVIN: Yeah, and I think one important clarification is that I think some people interpreted 'support if amended' to imply that they are currently opposed.

Uh, that's like not technically what it [01:32:00] was. They were currently neutral and they were saying that if you do these things, we will support. 

LUIS RODRIGUEZ - HOST, 80,000 HOURS PODCAST: We will actively support it. Okay. That is reassuring to me. I did, I did interpret it as, uh, we, we oppose it at the moment. 

NATHAN CALVIN: Yeah. Yeah. And again, there's some vagueness in it.

Yes. In this instance, that was not, not what was happening. Um, and I still think these are large companies who I think have some of the incentives that large companies do, and you know, I think Anthropic is a company that is taking these things really seriously, and I think is pioneering some of these measures, but I also think that they're, they're still a large company and are going to deal with some of the incentive issues that large companies have.

Um, and yeah, I, I really, you know, I think it's a little bit unfortunate, I think, how some of their engagement was interpreted in terms of opposition. And I think they do deserve some credit, I think, coming to the table here in a way that I think was, was actually helpful. But I think, you know, stepping back from Anthropic specifically and kind of thinking about folks who are opposing this, it's not like Anthropic is in any way like lobbying [01:33:00] against the, the bill, but there are other ones that certainly are.

And to some degree it's, it's not surprising. And it's a thing that, you know, I think we've seen before. And it's worth remembering, you know, like Zuckerberg in his testimony in front of Congress, you know, said, like, Oh, I want to be regulated. And, you know, it's a thing that you, you hear from lots of folks where they say, I want to be regulated, but then what they really mean is: I want it regulated in the exact way I want.

Basically, I want you to mandate for the rest of the industry what I am doing now, and I want to just, like, self-certify that what I'm doing now is happening. And that's it. That, that is, I think, often what this really means. And so there's some way in which it's, like, easy for them to support regulation in the abstract, but when they kind of look at it, and, and again, like, I think there's some aspect here, I think even within these companies, of folks who care about safety, I think there's a reaction that says, you know, I understand this much better than the government does.

I kind of trust my own judgment about, you know, how to manage these trade-offs and what is [01:34:00] safe, kind of better than, than some, some bureaucrat. And, you know, really, it's ultimately good for me to just kind of make that decision. And there are, like, parts of that view that, that, you know, like, I guess I can understand how someone comes to, but I just think that it ends up in a really dysfunctional place.

You know, it's worth saying like, I am quite enthusiastic about AI and think that has like genuinely a ton of promise and is super cool. And part of the reason I work in this space is because like, I find it extremely cool and interesting and amazing. And just think that like, Some of these things are just some of the most remarkable things that humans have created and it is amazing.

And I think there is just a thing here, which is that this is a collective action problem, where you have this goal of safety and investing more and kind of, you know, making this technology act in our, in our interests, versus, like, trying to make as much money as possible and release things as quickly as possible. And, left to their own devices,

companies are going to [01:35:00] choose the latter. And I do think that you need government to actually come in and say that you have to take these things seriously, and that that's necessary. And I think that if we do wait until a really horrific catastrophe does happen, I think you might be quite likely to get regulation that is actually a lot less nuanced and deferential than what this bill is.

And so I think there's some level where they are being self-interested in a way that, you know, that that was not really a surprise to me, but I think maybe the thing that I feel more strongly about is that, like, I think they are not actually doing a good job of evaluating their long-term self-interest.

I think they are really focused on, like, how do I get this stuff off my back for the near future and get to do whatever I want, and are not really thinking about what this is going to mean for them in the longer term. And I, I think that that has been a little bit disheartening to me. Um, I guess, like, one, one last thing I'll say here is I, I do think that there is a [01:36:00] really significant sentiment

among parts of the opposition that it's not really just that this bill itself is, is that bad or extreme. When you really, like, drill into it, like, again, it kind of feels like one of those things where you, like, read it and it's like, this is the thing that everyone is screaming about? It's just, like, I think it's, like, a pretty modest bill, um, in a lot of ways. But I think part of what they are thinking is that, like, this is, you know, the first step to shutting down AI development, or kind of that, like, if California does this, then lots of other states are going to do it.

And that kind of, we need to, like, really slam the door shut on model-level regulation, or else, you know, they're just going to keep going. And I think that that is, like, a lot of what the sentiment here is. It's, like, less about, in some ways, the details of this specific bill and more about the sense that, like, they want this to stop here, and that they're worried that if they, like, give an inch, there will continue to be other things in the future. And I don't think that is going to be tolerable [01:37:00] to the public in the long run. And I think it's a bad choice, but I think that is the, the calculus that they are making.

SECTION C: REGULATION

JAY TOMLINSON - HOST, BEST OF THE LEFT: Next up, Section C: Regulation.

The DOJ beat Google in court. Now what - POLITICO Tech - Air Date 8-19-24

 

STEVEN OVERLY - HOST, POLITICO TECH: Can you set the stage for us a little bit, Doha, and tell us why this ruling is so important? 

DOHA MEKKI: Sure, so the Justice Department has been enforcing the antitrust laws on behalf of the United States for a very long time, more than a century, and there are certain cases that are just synonymous with antitrust enforcement: um, Standard Oil,

AT&T, Microsoft, and now we have a fourth, which is United States versus Google. And the reason this is a really big deal is this is the most important case about the Internet since the invention of the Internet, and it's not very often that you get dense, meaty [01:38:00] opinions from federal courts. This one happened to be 277 pages, clearly outlining how a company can use its dominance to illegally maintain its monopoly power.

And the last time we did this was actually United States versus Microsoft, which is a case that was filed in 1998. And as I'm sure we'll get a chance to talk about, there are a lot of really important ways in which United States versus Google, which is about Google's power in Internet search as all of us know it today, and certain advertising markets that it uses to monetize its search functions, really rhymes with Microsoft, and you see it up and down the opinion.

STEVEN OVERLY - HOST, POLITICO TECH: Well, we will get into that. It has been reported that breaking up Google is now a next step under consideration. I know this litigation is ongoing and you are limited in how much you can talk about it, but we do have to ask, you know, is that something actually on the table? 

DOHA MEKKI: You correctly predicted that that is [01:39:00] not something that I can talk about because this is live litigation.

What I can say is that there is a process. Um, what the court did last week was hand down what is called a liability opinion, right? It found that Google is in fact a monopolist and that it had violated Section 2. And the next step is to, um, work with the court to, um, figure out a process for what a remedy looks like.

And so the last thing I would want to do is get out ahead of the court. 

JOSH SISCO: So, I mean, that is where the rubber meets the road on this case: what happens next. Now we have to see how Google's business will change. Whatever ends up happening, if it doesn't force meaningful change, is this case, is this all for naught?

DOHA MEKKI: So I have to challenge the premise. The liability decision means a lot. And as public enforcers, um, we attach significant meaning to the [01:40:00] liability phase. And that's because that is when the public gets a full accounting of what we thought the problem was. It's when the public gets to hear from witnesses that get to tell the story of not only how, but potentially why, Google maintained

its monopoly power, in ways that were ultimately found to violate the law. And so there's a lot of power and public accountability. And of course, we are very gratified that the court agreed with us that Google did, in fact, violate Section 2 of the Sherman Act and is a monopolist. I think it's too soon to say exactly

what a remedy might look like. Again, that is up to the court, and we look forward to our role in helping to inform that, but I would not undercount or understate the power of a decision like this to transform not only how Google conducts itself, but how these markets may evolve in the [01:41:00] future. And I think that, without talking about this specific case, I think you can look to examples from other Section 2 cases, like Standard Oil, like AT&T, like Microsoft, to understand the power of a case like this, right?

A monopolization case to affect innovation going forward.

JOSH SISCO: You brought up Microsoft and there's a long history there that we can't really get into all those details, but this was the last time that the government took on a company of this stature. The government tried, came close, ultimately didn't break up Microsoft, but you went through this whole very prolonged convoluted remedy proceeding there.

Um, how was that informing what you guys are going to do now? 

DOHA MEKKI: So I think there are potentially a few lessons to draw, again, [01:42:00] without speaking about U.S. v. Google specifically. It's, it's good to be a good student of history about Microsoft. And so you might recall that when the U.S. v. Microsoft case was filed, the government did consider

breakups, right? Those were on the table. And what ultimately changed was decisions by new leaders. Uh, specifically, Charles James became the assistant attorney general, um, and ultimately made a decision, uh, to, uh, work out what are essentially behavioral remedies with Microsoft, as opposed to a breakup. And so, um, you know, I, I can't sort of comment on that decision, but we can learn really important aspects of the Microsoft decree that, again, many people will tell you were effective in making sure that Microsoft could not continue to abuse its monopoly power.

There is a monitor. There is a technical committee. There were affirmative and negative [01:43:00] obligations on Microsoft, um, in terms of how it engaged in these markets. And I think that there are, um, very obvious ways in which it was successful, right? It, um, ushered in different browsers. Um, companies like Google were able to offer search engines.

Um, and I, again, I think nobody would dispute that those were good things. I think what does become hotly contested is how much markets might have changed on their own, absent intervention, versus the efficacy of the actual decree.

Newsoms AI dilemma To sign or not to sign Part 2 - POLITICO Tech - Air Date 9-6-24

 

STEVEN OVERLY - HOST, POLITICO TECH: You know, this isn't the only AI bill in California. Lawmakers introduced more than 60 of them this session. What other bills passed? 

JEREMY WHITE: So to an extent that I think surprised some observers, a lot of the major bills actually did not make it to the governor's desk.

I'm thinking, for example, of a bill to outlaw automated decision-making systems that display bias in [01:44:00] choices around things like housing and hiring, and a bill to watermark or identify AI-generated content. Um, so there were, there were certainly some big-ticket items that did not make it to the governor's desk.

I think that's a reflection of the industry's, um, engagement on this one. There are a couple I'm watching, however, uh, dealing with elections, one of which would require companies to take down deepfakes when they're flagged, another which would criminalize people who intentionally share misleading deepfakes in a campaign context.

The governor responded a few weeks back to Elon Musk sharing a deepfake of Kamala Harris by saying he would sign a bill outlawing what Elon Musk had done. Not a lot of detail about what bill the governor was talking about, either from the governor's office or from lawmakers, but the governor certainly signaled that he intended to do something on the sort of election interference and misinformation front.

STEVEN OVERLY - HOST, POLITICO TECH: The California [01:45:00] legislature has such an interesting relationship with Silicon Valley, because, you know, tech drives a huge part of the state's economy, and yet California regulators, like, tend to be quite heavy-handed with the industry. How does AI kind of fit into that dynamic? 

JEREMY WHITE: I would say that in recent years we have seen a shift in the dynamic in Sacramento, where lawmakers have been increasingly willing

to regulate these industries, to say, look, these might be economic drivers, but we have to think about the societal impact. You've seen that with the gig companies like Uber, you've seen that with the social media companies like Meta, and now you're seeing it with AI. The consistent message from these lawmakers is: we don't want to stop this industry.

We see that there are many benefits. We want to regulate it responsibly. And they see a cautionary tale in areas like social media, where there's a widespread consensus that it got out of control before [01:46:00] lawmakers had the ability to regulate it. I think it's notable that the state senator carrying this major safety bill,

Scott Wiener, represents San Francisco and has certainly seen a lot of people, including people who have supported him politically, opposing this bill. And so that dynamic has been there for a while. And I do think that tension between lawmakers wanting to regulate these society-transforming technologies and lawmakers seeing that there are, um, real economic benefits and a lot of political clout with these companies.

Um, I, I think there's a real collision there and it's, it's a needle that they're, they're always trying to thread. And I would just add that, again, this is one where there is a widespread perception that Gavin Newsom falls a little more on the side of the economic benefits, not to mention the tax revenue that these industries bring.

STEVEN OVERLY - HOST, POLITICO TECH: Right. What is the significance, you think, of all of this going forward? Obviously, it will depend on whether Newsom signs the bill or not. But what impact do you ultimately think this could have? 

JEREMY WHITE: That's a great [01:47:00] question. I think part of why these bills are so contested is that everyone recognizes if California does something here, it's essentially setting a standard for the country.

On the other hand, I think if Gavin Newsom vetoes it, it'll be interesting to see to what extent that motivates Washington to get more into this. On the other hand, I have no doubt that Scott Wiener, um, who's a pretty dogged legislator, is going to try again, even if this one gets vetoed. And so it'll be interesting to see if, uh, the governor's decision here resolves that tension between Capitol Hill and Sacramento or ramps it up.

The DOJ beat Google in court. Now what Part 2 - POLITICO Tech - Air Date 8-19-24

 

JOSH SISCO: So I wanted to sort of broaden out a little bit here. Uh, you guys have a number of other cases. The FTC has a number of other cases against large tech companies. You have another case coming up against Google. How are you sort of thinking about the impact of this case going forward on, on your other matters?

DOHA MEKKI: So I think it validates the approach. [01:48:00] We worked very hard to put on a trial that was clear eyed and persuasive about market realities. Um, one of the things that makes antitrust kind of hard to understand for ordinary people, even the policy wonks and, um, folks who are really comfortable with technical stuff here in DC is that it seems very econometrically focused.

It's technocratic and it's difficult to understand. But here's a product that almost all of us use, and by explaining to the court, with people who have real experience trying to bring these products to market, in many cases Google's own executives, we were able to be more persuasive and kind of marry up the goals of the law with how these markets actually function.

And I think that that's something that you will see in a lot of our cases. You know, you mentioned the Google ad tech case. That is a separate [01:49:00] litigation. That trial is starting in a courthouse in the Eastern District of Virginia, um, on September 9th, but remember that that case is about digital advertising technologies, right?

So that case is about, um, how Google owns a lot of the infrastructure that advertisers and publishers rely on to show you what's called open web display ads. And that's, that's different from the products that were at issue here. Um, but again, without, um, prejudging that, recognizing that it's a, uh, live litigation, I think

people should expect a very similar approach, which is to be experts on how these markets actually function and to, um, do the best job we can possibly muster to tell persuasive stories about what monopoly power looks like, what it feels like, what it sounds like, but also how the effects of it [01:50:00] really reverberate for ordinary people.

And so when I think about a case like Google Search, we told a story about all of the innovation that we really lost out on, and how the markets could have been more vibrant but for some of the conduct that we saw. And so, to put this in real terms for some of your listeners, you know, imagine a world in which we had, um, five or ten different search engines.

Maybe some would compete on privacy, right? Some would be particularly good for, um, I don't know, people who have a particular interest. Um, there has been really interesting writing about how the ability to conduct internet searches empowers women to make decisions about their lives, including in the context of reproductive freedom and choices about their bodies.

And so again, [01:51:00] that restriction of consumer choice is a really important value in antitrust. And when companies resort to illegal means to maintain that power and maintain that control, it really limits our ability to make decisions about how we want to live our lives. 

JOSH SISCO: You've been at the division for about 10 years. I think that's maybe a little less than I've been on this beat. When I first started covering this, it was a fairly sleepy, technocratic area of the law.

Uh, it didn't get anywhere near the attention that it has now, and that has changed over the last four or five years. And so I'm wondering, like, what do you think are the biggest differences in the administration's approach to antitrust, and how has that shift been for you? 

DOHA MEKKI: Yeah. So this is my third administration.

And so I've been really pleased to see bipartisan interest in antitrust. But I'm also not surprised to see it. I think there are many, uh, [01:52:00] really smart people who have tried to unpack why antitrust is having a resurgence, or why the public is more interested in antitrust, and I think there's an explanation that I've often found really compelling.

After the financial crisis, there was, you know, the two-tiered recovery. Um, there were concerns about wage stagnation. Um, there was concern about the hollowing out of the middle of the country. And I think that brought questions about political economy kind of to the fore, and antitrust is not a great tool for answering all of those questions.

But it does speak to things like economic coercion and the power of corporations over citizens. And what happened, I think, is that there was more research and more scholarship that really reoriented all of us with the roots of antitrust and the concerns the founders may have had about corporations that [01:53:00] wield their power in ways that hurt citizens.

We're putting ourselves, you know, at bird's-eye level with the corporate executives and market participants that are making decisions, and trying to understand markets as they are, and then syncing that up with the facts. And so I think that's been the change. Um, I think there are ways in which we've been very successful in telling those stories, but no doubt there's more for us to do.

Um, and we're always learning about how markets actually work, um, and ways that corporate conduct may be hurting people and hurting innovation.

SECTION D: THINKING THROUGH SOLUTIONS

JAY TOMLINSON - HOST, BEST OF THE LEFT: And finally, Section D: Thinking Through Solutions.

This Moment in AI: How We Got Here and Where We're Going Part 3 - Your Undivided Attention - Air Date 8-12-24

 

SASHA FAGAN - PRODUCER, YOUR UNDIVIDED ATTENTION: Yeah, well, that's a really good segue into what I wanted to talk about next, actually, which is that the work that CHT has been doing on AI is really on a continuum with the work that the organization first started to do on social media. And you [01:54:00] know, I think that's something people don't always understand very well, so I'd love for you to have a go at explaining that.

TRISTAN HARRIS - HOST, YOUR UNDIVIDED ATTENTION: Yeah. The key thing that connects our work on social media to AI is the focus on how good intentions with technology aren't enough. It's about how the incentives that are driving how that technology gets rolled out or designed or, you know, adopted lead to worlds that are not the ones that we want.

You know, a joke that I remember making, Aza, when we were at AI for Good was, imagine you go back 15 years and we went to a conference called Social Media for Good. I could totally imagine that conference. In fact, I think I almost went to some of those conferences back in the day.

Everyone was so excited about the opportunities that social media presented, me included. I remember hearing Biz Stone, the co-founder of Twitter, on the radio in 2009, talking about someone sending a tweet in Kenya and getting retweeted twice, and suddenly everybody in the United States saw it within 15 seconds.

And it's like, that's amazing. That's so powerful. And who's not intoxicated by that? And [01:55:00] those good use cases are still true. The question was, is that enough to get to the good world where technology is, you know, net synergistically improving the overall state and health of the society? And the challenge is that it is gonna keep providing these good examples, but the incentives underneath social media were going to drive systemic harm or systemic weakening of society.

Shortening of attention spans, more division, less of an information commons driven by truth, and more the incentives of clickbait, uh, the outrage economy, so on and so forth. And so the same thing here. Here we are 15 years later, we're at the UN AI for Good Conference. It's not about the good things AI can do, it's about: are we incentivizing AI to systemically roll out in a way that's strengthening societies?

That's the question. 

AZA RASKIN - CO-HOST, YOUR UNDIVIDED ATTENTION: It's worth pausing there, because it's not like we are anti-AI or anti-technology, right? Like, it's not that we are placing attention on just the bad things AI can do. [01:56:00] It's not about us saying, like, let's look at all the catastrophic risks or the existential risks.

That's not the vantage point we take. The vantage point we take is: what are the fragilities in our society that we are going to expose with new technology, that are going to undermine our ability to have all those incredible benefits? That is the place we have to point our attention to, the place we have a responsibility to point our attention to. And I wish there were more conferences that weren't just AI for Good, but AI for, you know, making sure that things continue.

TRISTAN HARRIS - HOST, YOUR UNDIVIDED ATTENTION: Just one metaphor to add on top of that, one that I've liked using recently and that you've mentioned a few times, is this Jenga metaphor. Like, you know, we all want a taller and more amazing building of benefits that AI can get us. But imagine two ways of getting to that building. One way is we build that taller and taller building by pulling out more and more blocks from the bottom.

So we get cool AI [01:57:00] art that we love, but by creating deepfakes that undermine people's understanding of what's true and what's real in society. We get new cancer drugs, but by also creating AI that can speak the language of biology and enable all sorts of new biological threats at the same time. So we are clearly acknowledging the tower is getting taller and more impressive exponentially faster every year, because of the pace of scaling and compute and all the forces we're talking about.

But isn't there a different way to build that tower than to keep pulling out more and more blocks from the bottom? That's the essence of the change that we're trying to make in the world. 

AZA RASKIN - CO-HOST, YOUR UNDIVIDED ATTENTION: And this is why, just to tie it back to something you said before, half-lighting is so dangerous: because half-lighting says, I'm only going to look at the blocks I placed on the top, but I'm going to ignore that I'm doing it by pulling a block out from the bottom.

That's right, exactly. 

SASHA FAGAN - PRODUCER, YOUR UNDIVIDED ATTENTION: Okay, so what are some solutions to these problems? What kind of policies can we bring in [01:58:00] on a national level? 

AZA RASKIN - CO-HOST, YOUR UNDIVIDED ATTENTION: Yeah, there are efforts underway to work on a sort of more general federal liability, coming out of product law, for AI. And I just wanted to have a call out to our very talented policy team at CHT, uh, you know, our leaders there, Casey Mock and Camille Carlton. They're often more behind the scenes, but you'll be able to listen to them in one of our upcoming episodes to talk about specific AI policy ideas around liability.

And another just sort of very common-sense solution, and we can tie this back to the Jenga metaphor, is how much money, how much investment, should be going into upgrading our governance. So we can say that at least, you know, 15 to 25 percent of every dollar, of the trillions of dollars going into making AI more capable, should go into upgrading our ability to govern [01:59:00] and steer AI, as well as the defenses for our society.

Right now, we are nowhere near that level. 

SASHA FAGAN - PRODUCER, YOUR UNDIVIDED ATTENTION: Yeah. But who makes the decision about what should be spent on safety? I mean, is that something that happens on a federal level? Is that something that happens on an international level? Or do we trust the companies to make those decisions for themselves?

TRISTAN HARRIS - HOST, YOUR UNDIVIDED ATTENTION: you don't, you can't trust the companies to make decisions for themselves because then it becomes an arms race for who can hide their costs better and spend the least amount on it, which is exactly what's happening.

It's a race to the bottom. As soon as someone says, I'm not gonna spend any money on safety, suddenly I'm gonna spend the extra money on GPUs and going faster and having a bigger, more impressive AI model, so I can get even more investment money. That's how they win the race. And so it has to be something that's binding all the actors together.

We don't have international laws that can make that happen for everyone, but you can at least start nationally and use that to set international norms that globally we should be putting 25 percent of those budgets into it.

SASHA FAGAN - PRODUCER, YOUR UNDIVIDED ATTENTION: So this conversation, like a lot of the conversations we have on the show, can [02:00:00] feel a little bit disempowering, because it can be hard to get a sense of progress on these issues. But there have actually been some big wins for the movement, and I'd love to get your guys' thoughts on these, especially on the social media side.

TRISTAN HARRIS - HOST, YOUR UNDIVIDED ATTENTION: Yeah, um, there's actually a lot of progress being made on some of the other issues that CHT has worked on, including that the Surgeon General of the United States, Vivek Murthy, actually issued a call for a warning label on social media. And while that might seem, like, kind of empty, or like, what is that really going to do, if you look back to the history of big tobacco, the Surgeon General's warning was a key part of establishing new social norms that cigarettes and tobacco were harmful.

And I think that we need that set of social norms for social media. You know, another thing that happened is, you know, this group Mothers Against Media Addiction, which we talked about the need for a couple years ago. Uh, Julie Scelfo has been leading the charge, and that has led to, you know, in-person protests in front of Meta's campus in New York and other places.

And I believe Julie and MAMA were actually present in New York when they did the ban on infinite [02:01:00] scrolling recently in New York State. There have been 23 state legislatures that have passed social media reform laws, and the Kids Online Safety Act just passed the United States Senate, which is a landmark achievement.

I don't think anything has gotten this far in tech regulation in a very long time. And President Biden said he'll sign it if it comes across his desk, and that would be amazing. You know, this would create a duty of care for minors that use the platforms, which would mean that the platforms are required to take reasonable measures to reform design for better outcomes.

It doesn't regulate how minors search on the platform, um, which deals with the concern that it would have a chilling effect on free speech, especially on issues for LGBTQ minors. So this is, I think, progress to celebrate.

Credits

JAY TOMLINSON - HOST, BEST OF THE LEFT: That's going to be it for today. As always, keep the comments coming in. I would love to hear your thoughts or questions about today's topic or anything else. You can leave a voicemail or send us a text at (202) 999-3991, or simply email me at [email protected]. The additional sections of the show included clips from Andrewism, Your Undivided Attention, [02:02:00] the 80,000 Hours Podcast, and POLITICO Tech. Further details are in the show notes. 

Thanks to everyone for listening. Thanks to Deon Clark and Erin Clayton for their research work for the show and participation in our bonus episodes. Thanks to our Transcriptionist Quartet, Ken, Brian, Ben, and Andrew, for their volunteer work helping put our transcripts together. Thanks to Amanda Hoffman for all of her work behind the scenes and her bonus show co-hosting. And thanks to those who already support the show by becoming a member or purchasing gift memberships. You can join them by signing up today at bestoftheleft.com/support, through our Patreon page, or from right inside the Apple podcast app. Membership is how you get instant access to our incredibly good and often funny weekly bonus episodes, in addition to there being no ads and chapter markers in all of our regular episodes, all through your regular podcast player. And you'll find that link in the show notes, along with a link to join our Discord community, where you can also continue the discussion. 

So, coming to you from far outside the conventional wisdom of Washington, DC, my [02:03:00] name is Jay, and this has been the Best of the Left podcast, coming to you twice weekly, thanks entirely to the members and donors to the show, from bestoftheleft.com.

 

