Air Date 6/28/2024
JAY TOMLINSON - HOST, BEST OF THE LEFT: [00:00:00] Welcome to this episode of the award-winning Best of the Left podcast.
AI, like all technologies, won't be all good or all bad. In fact, my favorite understanding of emerging technologies is that they often bring simultaneous utopia and dystopia-- though many tend to focus on the benefits, while only discovering the drawbacks later.
Sources providing our Top Takes today include Vox, Linus Tech Tips, TED Talks, Global Dispatches, the FiveThirtyEight Politics Podcast, DW News, Your Undivided Attention, and Tina Huang.
Then in the additional Deeper Dive half of the show, we'll explore more current uses of AI, potential uses of AI and the ethics we need to consider, and regulating AI.
Interesting note before we begin: The first clip you'll hear today is from Vox, in which they explain the hidden ubiquity of AI. And as it happens, the video itself is an example of that, [00:01:00] as it was sponsored by Microsoft's AI Copilot program, and also Vox just signed a deal with OpenAI to repurpose their human-written journalism to train OpenAI's models. And part of that deal is for Vox to gain access to OpenAI's tech to develop their own strategies of how they want to incorporate AI into Vox. Classic.
We’re already using AI more than we realize - Vox - Air Date 2-28-24
CHRISTOPHE HAUBERSIN - HOST, VOX: Imagine a day like this: You do some exercise with a smartwatch, put on a suggested playlist, go to a friend's house and ring their camera doorbell, browse recommended shows on Netflix, check your spam folder for an email you've been waiting for, and when you can't find it, talk to a customer support chatbot. Each of those things is made possible by technologies that fall under the umbrella of artificial intelligence.
But when a Pew survey asked Americans to identify whether each of those used AI or not, they only got it right about 60 percent of the time.
ALEC TYSON: Some of these applications of AI have become fairly ubiquitous. They almost exist in the [00:02:00] background and it's not terribly apparent to those folks that the tools or services they are using are powered by this technology.
CHRISTOPHE HAUBERSIN - HOST, VOX: That's Alec Tyson, one of the researchers behind that Pew study. When Tyson and his team asked respondents how often they think they use AI, almost half didn't think they regularly interact with it at all. Some of them might be right. But most probably just don't know it.
ALEC TYSON: We know about 85 percent of US adults are online every day, multiple times a day. Some folks are online almost all the time. This suggests a bit of a gap where there seem to be some folks who really must be interacting with AI, but it's not very salient to them. They don't perceive it.
CHRISTOPHE HAUBERSIN - HOST, VOX: So, why does that gap exist? Part of the problem is that the term "artificial intelligence" has been used to refer to a lot of different things.
KAREN HAO: Artificial intelligence is totally this giant umbrella term that has now become a kitchen sink of everything.
CHRISTOPHE HAUBERSIN - HOST, VOX: That's Karen Hao. She's a reporter who covers artificial intelligence and [00:03:00] society.
KAREN HAO: In the past, there were distinct disciplines about which aspect of the human brain do we want to recreate? Do we want to recreate the vision part? Do we want to recreate our ability to hear, our ability to write and speak?
CHRISTOPHE HAUBERSIN - HOST, VOX: Giving a machine the ability to see became the field of computer vision. Giving a machine the ability to write and speak became the field of natural language processing. But on their own, these tasks still required a machine to be programmed. If we wanted machines to recognize spam emails, we had to explicitly program them to look out for specific things, like poor spelling and urgent phrasing. That meant the tools weren't very adaptable to complex situations.
But that all changed when we started recreating the brain's ability to learn. This became the subfield of machine learning, where computers are trained on massive amounts of data so that instead of needing to hand code rules about what to see or speak or write, the computers can develop rules on their own. With machine learning, a computer could learn to recognize new spam [00:04:00] emails by reviewing thousands of existing emails that humans have labeled as spam. The machine recognizes patterns in this structured data and creates its own rules to help identify those patterns. When that training data hasn't been structured and labeled by humans, that method is called "deep learning." Most of the time people talk about AI now, they're not talking about the whole field, but specifically these two methods.
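For readers who want to see the idea in code, here is a minimal, hypothetical sketch of the supervised machine learning described above: a spam classifier that derives its own statistical rules from a handful of human-labeled emails instead of relying on hand-coded keyword checks. The tiny dataset and the scikit-learn pipeline are illustrative assumptions, not anything referenced in the clip.

```python
# Minimal sketch of supervised spam classification (illustrative only).
# Instead of hand-coding rules ("flag urgent phrasing"), the model infers
# its own statistical rules from human-labeled examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny, made-up training set: emails that humans have already labeled.
emails = [
    "URGENT!! Claim your free prize now",
    "Act now, limited time offer, click here",
    "Meeting moved to 3pm, see agenda attached",
    "Can you review the draft report before Friday?",
]
labels = ["spam", "spam", "not_spam", "not_spam"]

# Bag-of-words features plus Naive Bayes: the model learns which words
# co-occur with the "spam" label rather than being told what to look for.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

# A new, unseen email is classified using the learned patterns.
print(model.predict(["Free offer, claim your prize today"])[0])  # likely "spam"
```

In a real system the training set would be millions of labeled messages rather than four, but the principle is the one the clip describes: the rules come from the data, not from a programmer.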
Improvements in computing power, together with the massive amounts of data generated on the internet, made possible a whole new generation of technologies that leveraged machine learning. And existing ones swapped out their algorithms for machine learning too.
KAREN HAO: A lot of the "how" in the back has been swapped into AI over time, because people have realized, oh wait, we can actually get an even better performance of this product if we just swap our original algorithm, our original code out for a deep learning model.
CHRISTOPHE HAUBERSIN - HOST, VOX: Now machine learning and deep learning models power recommendations for shows, music, videos, [00:05:00] products, and advertisements. They determine the ranking of items every time we browse search results or social media feeds. They recognize images, like faces to unlock phones or use filters, and the handwriting on remote deposit checks. They recognize speech in transcription, voice assistants, and voice-enabled TV remotes. And they predict text in autocomplete and autocorrect.
But AI is seeping into more than that.
KAREN HAO: There has been this tendency over the last 10 plus years where people have started putting AI into absolutely everything.
CHRISTOPHE HAUBERSIN - HOST, VOX: Machine learning algorithms are already being used to decide which political ads we see, which jobs we qualify for, and whether we qualify for loans or government benefits, and often carry the same biases as the human decisions that preceded them.
KAREN HAO: Are you actually automating the poor decision making that happened in the past and just bringing it into the future? If you're going to use historical data to predict what's going to happen in the future, you're just going to end up with a future that looks like the past. [00:06:00]
CHRISTOPHE HAUBERSIN - HOST, VOX: And that's part of the reason why it matters to close that gap between those who knowingly interact with AI every day and those who don't quite know it yet.
ALEC TYSON: Awareness needs to grow for folks to be able to participate in some of these conversations about the moral and ethical boundaries: what AI should be used for and what it shouldn't be used for.
AI is a Lie. - Linus Tech Tips - Air Date 6-13-24
LINUS SEBASTIAN - HOST, LINUS TECH TIPS: The classic definition of AI is probably best illustrated with fictional examples. It's what you see in sci-fi creations like Commander Data, HAL 9000, and GLaDOS. These are computers or machines that demonstrate a capacity for reason, however naive, twisted, or alien it might seem to us meatbags.
Now, you'd be forgiven for thinking that that's still the definition of AI. A lot of people seem to think that it is. But in reality, the meaning of words is ever shifting, and we would now refer to these characters as having AGI, or Artificial General Intelligence. What you're referring to as AI, then, is in [00:07:00] fact Narrow AI, or as I've taken to calling it, ANI. ANI is not a general intelligence unto itself, but rather another component of a fully-functioning system made useful by specialized algorithms and data processing utilities forming a complete artificial intelligence system.
Didn't think I could make that point in the style of Richard Stallman's famous interjection? Well, I could--haha! But I also didn't have to. That previous paragraph was actually written by GPT-4 Omni. And this is exactly the sort of thing that modern AI does very well. And that's because most of the time when we hear the term AI, we're actually referring to machine learning, a subset of AI involving algorithms that can analyze patterns in data. They get trained on things like text, multimedia, or even just raw number outputs, and using this training data, they identify patterns through statistical [00:08:00] probability. They can be further trained through reinforcement learning, then, by rewarding correct outputs and punishing incorrect outputs--kind of like training a hamster.
The results allow these algorithms to summarize, predict, or even generate something seemingly new. And in many cases, they are so impressive that a good machine learning system can be indistinguishable from classic AI or AGI.
Well then Linus, if it looks like an AI and it quacks like an AI, what's the difference?
Well, artificial narrow intelligence is limited to specialized tasks. GPT-4 Omni, specifically, is a large language model, which means that it is trained to understand and generate natural language, like the words I'm speaking now. It's basically an autocomplete on steroids. What sets it apart from your phone's keyboard, though, is that it can also process information based on patterns that are learned during training, including definitions, [00:09:00] mathematical formulae, and so on and so forth. That makes it capable of generating unique output that wasn't part of its training data. GPT has traditionally been incapable of image, video, or audio generation. There are other types of generative models, like Sora, Suno, or DALL-E, that feature their own specific talents, but most of them are incapable of operating outside of their specific niche, and all of them are limited by their training data in a similar manner.
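To make the "autocomplete on steroids" framing concrete, here is a deliberately toy, hypothetical sketch of next-token prediction: it counts which word tends to follow which in a made-up training text and "generates" by repeatedly picking the most likely continuation. Actual large language models learn these statistics with neural networks over enormous corpora, so treat this only as an illustration of the underlying idea.

```python
# Toy "autocomplete on steroids": predict the next word from counted patterns.
# Real LLMs learn far richer statistics with neural networks; this sketch
# just counts word pairs (bigrams) in a tiny made-up corpus.
from collections import Counter, defaultdict

training_text = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug the dog chased the cat"
)

# Count how often each word follows each other word.
follow_counts = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follow_counts[current][nxt] += 1

def predict_next(word):
    """Return the most likely next word seen during 'training'."""
    candidates = follow_counts.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

# "Generate" a short continuation by repeatedly autocompleting.
word, output = "the", ["the"]
for _ in range(4):
    word = predict_next(word)
    if word is None:
        break
    output.append(word)

print(" ".join(output))  # e.g. "the cat sat on the"
```

Notice that the output can only ever recombine patterns present in the training text, which is exactly the limitation the next paragraph picks up on.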
And because they are limited by their training data, in many cases, the answers that they give resemble their training data, which, if you're an artist or a photographer and your work gets added to a model, is probably not your idea of fair use, much less a good time. Worse, when generative models are faced with a concept that they don't understand, or they simply run out of tokens, they can begin to hallucinate. That is to say, they just make things up as they go. Which is why sometimes you get eldritch abominations like these.
With [00:10:00] that said, these limitations don't mean that machine learning AI is a dead end. It's been deployed very effectively for diagnosing diseases and in other highly complex scenarios where the data is dense and the conclusions require interpretation.
These specialized models are extremely useful. They're just also extremely not new. Simple neural networks have been in use for decades for things ranging from handwriting recognition to web traffic analysis. And yes, even video game AI and chatbots. The main difference is that they run much faster on modern hardware.
If I had to distill down what artificial narrow intelligence really means then, I would say it's like having a thousand monkeys at a thousand typewriters with a thousand pieces of reference material for what the outputs are supposed to look like. With enough trial and error then, they do arrive at a point where they're likely to spit out a correct or at least correct enough solution. Then, we [00:11:00] take all those monkeys and we take a snapshot of the model state and we start feeding it inputs for both fun and profit.
What ANI is to a brain, then, is kind of what a single app is to a computer. It's a building block, it's something your brain is capable of, but it's just one of its many, many functions.
Shifting gears a bit, then, what would artificial general intelligence look like? Well, it would need to be able to handle everything we've talked about so far, just like your brain can take some past experiences and turn them into a new creation. But again, like your own brain, it would need to be able to run many of these models concurrently and continuously train and iterate on them rather than relying on fixed snapshots. Only then would an AGI have the ability to truly learn and adapt to new things, bringing it closer to that classical definition of AI, and really blur the lines between machine learning and machine consciousness.[00:12:00]
The problem is, even if we had software that sophisticated, we are nowhere close to being able to run an AGI, even on a modern supercomputer, let alone on your AI smartphone.
But, all right Linus, you still haven't explained why any of this is even a problem. I mean, free range meat is just marketing bollocks too, so who cares?
Well, truthfully, in most cases, I don't. I mean, Cooler Master's AI thermal paste snafu: I was never bothered by it, because I never expected my paste to be sentient anyway. But, there are situations where this kind of marketing can have an impact on user safety, and therefore does matter. Let's talk about Tesla.
Mr. Musk has said, among other things, that any vehicle from 2019 onward will be able to reach full autonomy. And he's certainly put out some impressive demos, both canned and even in the form of public beta software that you really can use. And that's really cool. [00:13:00] But unfortunately, it isn't much more than that.
You see, to operate a vehicle safely, it's not enough to be trained with images of painted lines and traffic cones, stop signs, pedestrians, and vehicle telemetry data. It's not even enough to be trained to predict the likely maneuvers of nearby vehicles and life forms. On the road, anything can happen, and by definition, by its very definition, ANI is not capable of handling an edge case that it has never seen before. Even if it was, by the way, I have some really bad news for you Tesla owners out there: Hardware 3.0 has about 144 TOPS, or trillion operations per second, worth of processing power. For context, Windows 11 Recall, a feature that does little more than take screenshots and analyze your PC usage for search, asks for 40 TOPS.
Now to be clear, TOPS is not a be-all, end-all measure of performance, and there is no way that [00:14:00] Microsoft has optimized the code for Recall nearly as much as Tesla has for full self driving. But this should still illustrate the point that Tesla either did, or should have known, that a vehicle with the AI capabilities of a family of iPhone 15 Pro users would never achieve that kind of real-time contextual awareness that's required for complex situations like operating a motor vehicle, and they misrepresented its capabilities in order to sell more software that was never going to leave beta.
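As a rough back-of-the-envelope check on those numbers, here is the arithmetic behind the comparison, using the 144 TOPS and 40 TOPS figures from the clip plus an assumed rating of roughly 35 TOPS for the iPhone 15 Pro's neural engine (Apple's published figure for the A17 Pro; treat it as an assumption here rather than something stated in the clip).

```python
# Back-of-the-envelope TOPS comparison (illustrative; sources noted above).
tesla_hw3_tops = 144          # Tesla Hardware 3.0, per the clip
recall_requirement_tops = 40  # Windows 11 Recall's stated requirement, per the clip
iphone_15_pro_tops = 35       # assumed rating for the A17 Pro neural engine

print(tesla_hw3_tops / recall_requirement_tops)  # ~3.6x Recall's requirement
print(tesla_hw3_tops / iphone_15_pro_tops)       # ~4.1, i.e. a "family" of iPhone 15 Pros
```

None of this says anything about software efficiency, which is the caveat raised above; it just shows how modest the raw headroom is.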
That is going to be a doozy of a class action. And it's a common story that has led to this current mess where fuzzy definitions and impossible promises have turned AI into this meaningless buzzword, like all the rest of them. All of them refer to legitimate, useful technologies, some of which have really come to fruition. But their meanings have become diluted with overuse. And it means that when computer cognition finally happens, we're gonna have to call it something completely [00:15:00] different in order to differentiate it from all of the marketing wank.
What Is an AI Anyway? | Mustafa Suleyman - TED - Air Date 4-22-24
MUSTAFA SULEYMAN: Imagine if everybody had a personalized tutor in their pocket and access to low-cost medical advice, a lawyer and a doctor, a business strategist and coach--all in your pocket, 24 hours a day. But things really start to change when they develop what I call AQ: their actions quotient. This is their ability to actually get stuff done in the digital and physical world. And before long, it won't just be people that have AIs. Strange as it may sound, every organization from small business to nonprofit to national government, each will have their own. Every town, building and object will be represented by a unique interactive persona.
And these won't just be mechanistic assistants. They'll be companions, confidants, colleagues, friends, and partners as varied and unique as we all are. At this point, [00:16:00] AIs will convincingly imitate humans at most tasks.
And we'll feel this at the most intimate of scales: An AI organizing a community get-together for an elderly neighbor. A sympathetic expert helping you make sense of a difficult diagnosis. But we'll also feel it at the largest scales: Accelerating scientific discovery. Autonomous cars on the roads. Drones in the skies. They'll both order the takeout and run the power station. They'll interact with us, and of course, with each other. They'll speak every language, take in every pattern of sense data, sights, sounds, streams and streams of information, far surpassing what any one of us could consume in a thousand lifetimes.
So what is this? What are these AIs?
If we are to prioritize safety above all else, to ensure that this new wave [00:17:00] always serves and amplifies humanity, then we need to find the right metaphors for what this might become.
For years, we in the AI community, and I specifically, have had a tendency to refer to this as just tools. But that doesn't really capture what's actually happening here.
AIs are clearly more dynamic, more ambiguous, more integrated and more emergent than mere tools, which are entirely subject to human control. So to contain this wave, to put human agency at its center, and to mitigate the inevitable unintended consequences that are likely to arise, we should start to think about them as we might a new kind of digital species.
Now, it's just an analogy. It's not a literal description, and it's not perfect. For a start, they clearly aren't biological in any traditional sense. But just pause for a moment and really think about what they already do. [00:18:00] They communicate in our languages. They see what we see. They consume unimaginably large amounts of information. They have memory. They have personality. They have creativity. They can even reason to some extent and formulate rudimentary plans. They can act autonomously if we allow them. And they do all this at levels of sophistication that are far beyond anything that we've ever known from a mere tool.
And so saying AI is mainly about the math or the code is like saying we humans are mainly about carbon and water. It's true, but it completely misses the point.
And yes, I get it. This is a super arresting thought. But I honestly think this frame helps sharpen our focus on the critical issues. What are the risks? [00:19:00] What are the boundaries that we need to impose? What kind of AI do we want to build, or allow to be built?
This is a story that's still unfolding. Nothing should be accepted as a given. We all must choose what we create, what AIs we bring into the world--or not.
These are the questions for all of us here today, and all of us alive at this moment. For me, the benefits of this technology are stunningly obvious, and they inspire my life's work every single day. But quite frankly, they'll speak for themselves. Over the years, I've never shied away from highlighting risks and talking about downsides. Thinking in this way helps us focus on the huge challenges that lie ahead for all of us. But let's be clear: there is no path to progress where we leave technology behind. The prize for [00:20:00] all of civilization is immense. We need solutions in healthcare, in education, to our climate crisis. And if AI delivers just a fraction of its potential, the next decade is going to be the most productive in human history.
Here's another way to think about it: In the past, unlocking economic growth often came with huge downsides. The economy expanded as people discovered new continents and opened up new frontiers. But they colonized populations at the same time. We built factories, but they were grim and dangerous places to work. We struck oil, but we polluted the planet.
Now, because we are still designing and building AI, we have the potential and opportunity to do it better, radically better. And today we're not discovering a new continent and plundering its resources; we're building one from [00:21:00] scratch. Sometimes people say that data or chips are the 21st century's new oil, but that's totally the wrong image. AI is to the mind what nuclear fusion is to energy: limitless, abundant, world-changing.
And AI really is different. That means we have to think about it creatively and honestly. We have to push our analogies and our metaphors to the very limits to be able to grapple with what's coming. Because this is not just another invention. AI is itself an infinite inventor. And yes, this is exciting and promising and concerning and intriguing all at once. To be quite honest, it's pretty surreal. But step back, see it on the long view of glacial time, and these really are the very most appropriate metaphors that we have today. Since the beginning of life on earth, we've been [00:22:00] evolving, changing, and then creating everything around us in our human world today. And AI isn't something outside of this story. In fact, it's the very opposite. It's the whole of everything that we have created, distilled down into something that we can all interact with and benefit from. It's a reflection of humanity across time. And in this sense, it isn't a new species at all. This is where the metaphors end.
Here's what I'll tell Caspian next time he asks: AI isn't separate. AI isn't even, in some senses, new. AI is us. It's all of us. And this is perhaps the most promising and vital thing of all that even a six-year-old can get a sense for.
As we build out AI, we can and must reflect all that is good, all that we love, all that is special about humanity: our [00:23:00] empathy, our kindness, our curiosity and our creativity. This, I would argue, is the greatest challenge of the 21st century--but also the most wonderful, inspiring and hopeful opportunity for all of us.
How to Limit the Threat of "Killer Robots" and Autonomous Weapons That Are Changing Warfare - Global Dispatches -- World News That Matters - Air Date 3-13-24
MARK LEON GOLDBERG - HOST, GLOBAL DISPATCHES: Can you make this real for listeners who, again, might have a hard time wrapping their brains around what a battlefield use of AI drones 10 years from now might look like, and also why that might be a problem. AI, it's super intelligent, right? It should be able to distinguish combatant from noncombatant.
PAUL SCHARRE: Yeah. I think in the near term, the uses will be probably isolated and the effects in the battlefield will probably not be massive. There is this time disconnect between what's happening out in the civilian space with AI and militaries, because it just takes militaries a while to adopt the technology. In the longer run, I think the odds are good that artificial intelligence and autonomy will [00:24:00] transform the battlefield in very profound ways. That may take some time, 10-15 years, maybe several decades, but one could certainly envision a future where there are lots of weapons that are operating autonomously, that are searching and attacking targets on their own. They're still built by humans and designed by humans and launched by humans, but once sent onto the battlefield, they have quite a degree of freedom that they don't have today, and have some measure of ability to operate intelligently.
One of the concerns that people have raised is that these systems might get it wrong. I think anyone that's interacted with the computer knows that they make mistakes, and these could have very severe life and death consequences. Another concern is that this leads to a sort of slippery slope towards militaries maybe being more liberal or less concerned about civilian casualties and civilian harm. And you could have situations where militaries delegate that to the autonomy [00:25:00] and say, "the algorithm is handling that," but the consequences for civilian harm could be quite severe, and we could see a lot of civilian casualties.
MARK LEON GOLDBERG - HOST, GLOBAL DISPATCHES: So in 2022, the United States declared that it would always retain a "human in the loop" for decisions to use nuclear weapons. I know the UK adopted a similar policy as well. Russia and China have not. Can you just explain the dangers of combining artificial intelligence and autonomy with the use of nuclear weapons? It seems obvious, but what are the scenarios that security experts like yourselves are particularly concerned about?
PAUL SCHARRE: The risks do seem obvious. I think there's been maybe more than one science fiction movie about the risks of plugging AI into nuclear weapons. So I think it's notable that the US and UK have made this statement. A couple of things are worth pointing out. One is that it's not actually the case that there's no use of AI or autonomy or automation in nuclear operations. In [00:26:00] fact, this is another area where various forms of automation have been used for decades, dating back to the Cold War, but humans are very firmly in control of nuclear launch decisions.
As we see more AI being adopted, I think the value here is having a clear and unambiguous statement that humans will always be in control of any decisions relating to nuclear use. So what are the real risks here? I don't think it's that someone plugs ChatGPT into a nuclear weapon; no one's proposing that. But we have seen, for example, Russia and, before them, the Soviet Union design and build systems that I think would have a degree of automation and risk that many in the US and other defense circles might be quite uncomfortable with.
One is the Perimeter system that the Soviets built, which Russian defense officials have said is still operational today; that's a semi-automated "dead hand" system. So there's still a human in the loop, but the way that it's designed to work is that, once [00:27:00] activated, if there were a first strike that wiped out Soviet leadership, this automated system would have mechanisms to automatically detect that and then pass launch authority to a relatively junior officer sitting in a bunker. There's still a person there, but that is certainly risky, and I think concerning when you think about nuclear stability to have those kinds of automated procedures in place.
And then, more recently, there's an uncrewed undersea vehicle, a robotic undersea vehicle that Russia is building called Poseidon or Status-6, that is reportedly nuclear armed—nuclear powered actually, with a nuclear reactor—and would be designed to carry out a nuclear strike. Again, I don't think the risk here is that the robot would decide one day to put itself to sea, but that you could imagine robotic systems or drones that have nuclear weapons on board that get lost, that go astray, that escalate tensions, or even lose a nuclear weapon, [00:28:00] all of which would be very troubling.
MARK LEON GOLDBERG - HOST, GLOBAL DISPATCHES: So what would be some sensible regulations that would limit if not prohibit fully autonomous weapons?
PAUL SCHARRE: I think one of the challenges right now is that a lot of the debate internationally (countries have been coming together since 2014 through the UN Convention on Certain Conventional Weapons, the CCW) has been painted in this kind of black-or-white distinction: either we could have a preemptive, legally-binding treaty that bans autonomous weapons, or we have nothing. And we just proceed where we are today, which is we have the laws of war and they would apply to autonomous weapons, but nothing specific that's different.
And I think that both of those are options. There's, I think, downsides to doing nothing, and I don't think politically, really, that a comprehensive preemptive ban on autonomous weapons is likely, given where we are with the technology in the international sphere.
So I think there are a couple of different regulatory approaches that are also worth [00:29:00] considering. One would be a broad principle in international law about the role of human decision making. We never had this before; we never needed it. But it's not a bad idea to have a broad principle, like the principles of proportionality and distinction, to set a broad concept of the role humans need to play in the use of force.
I think there could be room for a more narrowly targeted ban on anti-personnel autonomous weapons, ones that would target people. Those have some unique challenges. Certainly on the nuclear side, I think that's another area where some unique rules might make sense that are specific to nuclear weapons: nuclear powers agreeing to have a human in the loop.
I think some steps on improving the reliability of AI-enabled systems through better testing and evaluation would be useful to make sure that we reduce the amount of accidents or the risk of accidents. And then, it might be worth countries considering rules of the road for how drones [00:30:00] operate in contested areas as they have increasing amounts of autonomy, to avoid potentially damaging incidents where we might see air or maritime drones interacting with one another and causing potentially dangerous and unwanted incidents.
MARK LEON GOLDBERG - HOST, GLOBAL DISPATCHES: Like an autonomous American drone confronting an autonomous Russian drone, and that's somehow escalating.
PAUL SCHARRE: Exactly. Or even a drone encountering a crewed vessel somewhere else from a potentially competitor nation, and the autonomy takes some action. It does whatever it was programmed to do, which might have seemed like a good idea at the time it was programmed, but one of the really important distinctions between machines and humans is that machines just don't have the ability to see the bigger picture, to understand the broader context.
So you can give a human direction like, "hey, listen, you always have the right to defend yourself, but don't start a war." You can tell a human that, and they may not know exactly what that [00:31:00] means in an instant ahead of time, but they can take that sort of broad guidance of, "okay, this is the broader context I'm in. We don't want to escalate things if we don't need to," and they can use their best judgment in some of these tricky environments where we see militaries operate, in contested areas in the Middle East and the Black Sea and the South China Sea and elsewhere. You can't tell that to a machine. It's just going to do whatever it was programmed to do, and that might not be what you wanted in the moment.
How Much AI Regulation Is The Right Amount? - FiveThirtyEight Politics Podcast - Air Date 6-13-24
GALEN DRUKE - HOST, FIVETHIRTYEIGHT POLITICS: According to an Elon University poll, 54 percent of Americans describe their feelings towards AI with the word "cautious," and 70 percent of Americans believe that AI could significantly impact elections through the generation of fake information, videos, and audio.
I think there has been a lot of attention paid to the potential impacts of AI on this election. In fact, a little less than a year ago, we did an episode on this podcast that was titled something like "The First AI Election." So far, I think the general sense has been that the [00:32:00] anticipated, or maybe feared, impact of AI on the election has not been borne out. Obviously, as you cited, there was the case during the New Hampshire primary, but this election has not thus far looked very different as a result of AI.
Would you agree with that? Do you think that that would maybe be like coming to conclusions too soon? What is your take on that?
GREGORY ALLEN: It's definitely coming to conclusions way too soon. Let me give you a few data points that strike me as really interesting. Folks might remember in the 2016 election, the Russian intelligence services were involved in creating a lot of disinformation-based content, and that was coming out of the Internet Research Agency, if memory serves. It's definitely the IRA out of Russia. And that had hundreds of people working in an office in Russia, and every day they're waking up, they're clocking in, and they're cranking out deceptive information content. But there's a problem, which is most of them don't speak great [00:33:00] English. So a lot of the stuff that they're writing has the common hallmarks of writing when English is your second language and Russian is your native language.
Well, just recently, OpenAI announced that they have detected both Russian and Chinese intelligence services using their platform to generate disinformation in advance of the election with a politically motivated intent. And I think what's really interesting there is that OpenAI/ChatGPT does not make grammatical mistakes, and OpenAI/ChatGPT does not require you to hire hundreds and hundreds of people.
And what we've seen in the text domain, which was already achievable before, that same sort of automatically generated, highly customized synthetic media, more audience-targeted and audience-calibrated, we can now bring to audio, video, and images at massive scale and capacity.
And I think there's two scenarios to think about here. Number one is just massive [00:34:00] scale. What percentage of 4chan today is disinformation that, to some greater or lesser extent, has its origins in potentially foreign content created by AI? I don't know. I don't think a reliable survey has been done, or really could be done, on that topic at the present time. That's a sort of scale-based attack. The other attack that I would really be concerned about is just an incredibly precise, perfectly timed attack.
GALEN DRUKE - HOST, FIVETHIRTYEIGHT POLITICS: Like the October surprise p tape or n word tape, or Biden falling down or appearing to have a stroke or whatever—in the 2016 or 2020 election, people would have a stronger sense of whether or not it was real, but today, whether it's real or not, people will just not know.
GREGORY ALLEN: Yes, exactly. The right information, the right media at the right time can really be the hinge moment in really important moments in history. And my question then becomes, "could something actually make an impact on the [00:35:00] U.S. election?" As a starting hypothesis, I would say yes, it definitely could, and we should be taking steps now to make that chance go down.
GALEN DRUKE - HOST, FIVETHIRTYEIGHT POLITICS: So essentially, even if we don't have the blockbuster use of AI that people might be afraid of, such as a deepfake in October, there could be effects of AI that are a lot less sexy, which is just the kind of information that's being spread on the Internet amongst people using social media or whatnot. But also, given the nature of our election cycle and October surprises, it could be far too soon to come to any conclusions about the impact of AI on this election.
GREGORY ALLEN: Yeah, just because something bad hasn't happened yet doesn't mean something couldn't happen. If, the year before the Three Mile Island nuclear disaster, you had said, "there's never been a nuclear safety disaster, that means we'll always be safe," you'd be an idiot. And I think the same thing strikes me as the truth about election interference with AI. I don't know. It would be wrong of me to say that I know 100 percent that AI election interference [00:36:00] will be a big problem and a big phenomenon this year. But I do feel like I know that it could be a big phenomenon and a big problem. So I think that's enough to justify our taking steps to mitigate that risk.
GALEN DRUKE - HOST, FIVETHIRTYEIGHT POLITICS: As you mentioned, it seems like the most immediate focus in Congress would be AI uses related to the upcoming election. But beyond that, is it clear that there is the political will to regulate AI in different ways when it comes to copyright, as you've mentioned? Or what's mentioned in this roadmap, for example, is a privacy bill that will, of course, affect AI. Is there bipartisan support for those things? What comes next after we've addressed the upcoming election?
GREGORY ALLEN: I think privacy is going to be really tough to pass at the federal level. At the state level, I think this is happening. It's already happened in some states. I also think at the international level, ChatGPT was briefly banned in Italy for noncompliance [00:37:00] with GDPR, the existing big European privacy regulations.
I mentioned that because all of these companies operate both in the United States and Europe, and usually when they're forced to comply with European regulation, they just do that worldwide, because it's simpler than trying to calibrate what they do based on different jurisdictions. So that's, I think, the story on privacy. I think that's a really tough one.
Intellectual property. I think it's a really rough political debate. There's very entrenched special interests on both sides, but one of them might win. I think we'll probably have that fight in 2025, would be my guess.
Here you have to separate the two types of AI systems that you might want to regulate. Historically, when we've been talking about AI, when the EU AI Act was first drafted, they did not have ChatGPT on their minds. The first draft of the EU AI Act predates ChatGPT. And the reason why I mentioned that is before large language models, most AI systems were [00:38:00] application specific. If you have an AI system that is a computer vision, image recognition system, if you give it a bunch of pictures of cats, it's going to be good at recognizing cats, it's not going to be good at recognizing military aircraft or tanks or something like that. Historically, AI systems are very application specific.
What's interesting about ChatGPT and the other large language models is that they're not application specific. It will give you medical advice. It will give you legal advice. It will give you entrepreneurship advice. It will give you life coaching or psychotherapy types of advice. And so you have these individual systems that are so diverse in the number of applications that they can do that you might want to regulate those as an entity.
In the case of the EU AI Act, for example, they separate the sector-specific regulations, which is the low-risk, high-risk, unacceptable-risk risk pyramid, and that's based on what the AI system is doing. But then they also have this set of regulations around what they [00:39:00] call general-purpose AI systems that pose a systemic risk, and that's just regulating the technology because of its capabilities.
Here's what's interesting, I think, there and in the United States and elsewhere. The regulations, at least in the legislation, mostly say, "thou shalt follow the standard." And by the way: standards coming soon, we promise. That's what's so interesting: they've actually mandated the development of standards, and then they've mandated the following of those standards. So right now there is no existing standard for what constitutes the responsible development and the responsible operation of a super general purpose, super capable AI system like ChatGPT or Claude or Gemini, but those are coming.
And I think that's what the U.S. also kind of has to wrestle with: do we only want to continue this existing paradigm of application-specific regulation? [00:40:00] Or do we also want to regulate based on the technology overall? So far, all we've done in the latter case is mandate some transparency and reporting requirements.
How AI causes serious environmental problems (but might also provide solutions) | DW Business - DW News - Air Date 4-29-24
EMILY LESHNER - HOST, DW BUSINESS: aedifion is capitalizing on AI's ability to read and analyze data in a sliver of the time it would take the world's best researchers to do the same.
This speed is what makes AI so valuable to researchers and scientists looking for solutions to the climate crisis. Scientists are now using AI to map Antarctic icebergs 10,000 times faster than humans, to track deforestation in real time, to better predict weather patterns, and to suggest more efficient waste management systems. There's no doubt AI has the potential to do good things for the climate, but not everything about it is a gift to the environment.
Take this hum, for example, which residents in Chandler, Arizona hear 24/7. It's the sound of a data center processing [00:41:00] the billions of requests it gets throughout the day. Think of AI as the brain and data centers as the body that supports the brain to work. There are more than 8,000 data centers in the world. According to the International Energy Agency, data center energy consumption is expected to double by 2026 compared to what it was in 2022.
JESSE DODGE: When I started doing AI research a decade ago, I could run most of the AI systems I was using on my laptop.
EMILY LESHNER - HOST, DW BUSINESS: This is Jesse Dodge. He's an AI research scientist.
JESSE DODGE: But today we're using supercomputers. Some of the large AI systems that people are familiar with, like the chatbots or the image generation systems, those run on really large supercomputers and consume a potentially very large amount of electricity.
EMILY LESHNER - HOST, DW BUSINESS: These very large amounts of electricity produce very large amounts of heat, and also that hum you just heard. To keep the data centers from overheating, they must be cooled down. [00:42:00] And this is usually done in one of two ways: using air conditioning or water, and lots of it. Let's say I engage in a 15-question conversation with ChatGPT over how I could be more environmentally conscious. Experts calculate I would be consuming about a half liter of fresh water. And this is where AI can be a little problematic.
JESSE DODGE: Access to clean water is competing with local uses for it.
EMILY LESHNER - HOST, DW BUSINESS: This is what got Google into a bit of hot water soon after it announced plans to build a $200 million data center in the working-class neighborhood of Cerrillos in Chile.
SEBASTIÁN LEHUEDÉ: We all use Google search and other Google tools. So initially the neighbors were quite happy that Google had chosen this area for building their data center.
EMILY LESHNER - HOST, DW BUSINESS: This is Sebastián Lehuedé, an AI ethics and society lecturer at King's College London.
SEBASTIÁN LEHUEDÉ: They saw it as synonymous with progress, [00:43:00] development, a new pole of innovation in the area.
EMILY LESHNER - HOST, DW BUSINESS: But once they took a look at Google's environmental impact report for the data center, they were startled by what they learned.
SEBASTIÁN LEHUEDÉ: They found out at some point that this Google data center was going to use 168 liters of water per second in an area facing drought.
EMILY LESHNER - HOST, DW BUSINESS: A drought that is now in its 15th year and caused elected officials to ration water in the capital of Santiago. But after fierce protests from the community, the permit was put on hold. A local environmental court told Google it needs to modify how it plans to cool its servers. And Google's plans for a data center in Uruguay also faced pushback when locals learned how much water it would consume.
And water isn't the only natural resource that AI requires. It needs a lot of electricity too. And most of that electricity still comes from burning fossil fuels, which releases the greenhouse gas emissions causing climate change. [00:44:00] Training a single AI model produces more than five times the amount of carbon dioxide emissions generated by a car over its lifetime. That includes the emissions to manufacture the car and its fuel consumption once it leaves the factory. It's an astounding amount. Training an AI model and then ensuring its continued existence through large data centers is a massive drain on natural resources, and it also drives up what researchers call embodied carbon.
JESSE DODGE: So that's going to be the amount of carbon it took to build the hardware. Just starting by mining the rare earth minerals that go into the GPUs, shipping that across the world to then be manufactured into a GPU, and then shipping that GPU to its final destination at a data center. That does incur a really large environmental impact.
EMILY LESHNER - HOST, DW BUSINESS: It's this impact that companies like Microsoft are trying to take into account as they set climate goals. Microsoft says it's aiming to be [00:45:00] carbon negative by 2030: not just neutral, but negative. And one way it's hoping to get there is through Bolivia. More than 9,000 kilometers from Microsoft headquarters in Redmond, Washington, is a biochar facility operated by ExoMAD Green. It turns forestry waste into something that's called biochar, which is essentially charcoal.
ExoMAD Green will produce the biochar, containing carbon dioxide, and bury it underground, where it can enrich the soil and keep CO2 from getting into the atmosphere. Microsoft has bought 32,000 tons of carbon dioxide removal credits, but that's a tiny fraction of its overall annual emissions. We don't know how much of that will grow with AI.
That's one way that Microsoft can continue to expand its AI operation and data centers while saying it's still on path to being carbon negative. But is that effective enough, or is it just a [00:46:00] form of corporate greenwashing?
JESSE DODGE: If we do something like buy carbon offsets, that doesn't negate the action that we took.
EMILY LESHNER - HOST, DW BUSINESS: Meaning that doesn't undo the carbon emissions that we've produced.
JESSE DODGE: These two things don't cancel each other out.
EMILY LESHNER - HOST, DW BUSINESS: As AI advances, governments and regulatory bodies are trying their best to keep up. This year, new AI rules passed by the European Parliament will go into effect, impacting businesses like aedifion. The EU AI Act does reference the impact of AI on the environment. It asks that AI systems be developed and used in a sustainable and environmentally friendly manner, though it doesn't really spell out what that means. Chile's AI laws, which were drafted before the EU rules but aren't nearly as comprehensive, don't address environmental impact either.
SEBASTIÁN LEHUEDÉ: I think what's concerning is not only that this is not being addressed enough, as it should be, but also that the voice of the people affected by it is not considered. So [00:47:00] even if you look at research, the press, quite often they report on the environmental impact, but they don't report on how the situation can affect the livelihood or the wellbeing of the communities participating within the value chain of artificial intelligence.
EMILY LESHNER - HOST, DW BUSINESS: If operating a data center for your AI model is the reason why a community doesn't have access to drinking water, is that still sustainable?
SEBASTIÁN LEHUEDÉ: We need those voices to participate as well in the governance of AI. So if the UN, for example, is coming up with new regulation, it would be great to be able to hear those communities as well, because those communities, they're not against technology or AI, but what they will say is that if we want AI, it has to be built in dialogue with local communities.
Why Are Migrants Becoming AI Test Subjects? With Petra Molnar - Your Undivided Attention - Air Date 6-20-24
AZA RASKIN - HOST, YOUR UNDIVIDED ATTENTION: I really want to know about like, well, how does the technology diffuse? Like, what's the path? What are warning signs, if at all, of it going from the border to a broader society? Where have you seen that happen? I think people seeing that path, if there is that path, is really important [00:48:00] for understanding why we might want to get ahead of it now.
PETRA MOLNAR: Yeah, for sure. And, you know, when I get asked this question, I always think about how best to answer it because I do think it's important to keep the kind of context specific to the border sometimes, because it is this kind of high risk laboratory that really impacts vulnerable people. But at the end of the day, it doesn't just stop at the border.
And that's a trend that I've been noticing the last few years for sure. So, if we go back to the robodogs that were announced by the Department of Homeland Security for border purposes in 2022, just last year, I think it was May, the New York City Police Department proudly unveiled that they're going to be rolling out robodogs on the streets of New York City.
And one was even painted with black spots on it, like a Dalmatian. So, again, very proud of its kind of like surveillance tech focus. And I should say, the robodogs were piloted earlier in New York and in Honolulu during the COVID-19 pandemic for surveillance on the streets, and then, after public outcry, surprise, surprise, were pulled.
So again, [00:49:00] the border tech stuff doesn't just stay at the border, but it then starts proliferating into other spaces of public life. And, you know, we've seen similar technology like drones and different types of cell phone tracking be deployed against protesters and even things like sports stadium surveillance. There is some work being done in the European Union on some of the technologies that are deployed for border enforcement and for criminal justice purposes, also then being turned on, you know, people who are enjoying a football game or a soccer game, for example.
I think that's the interesting thing with tech, right? It might be developed for one thing and then repurposed for a second purpose and sold to a third purpose. And it just kind of flows in these ways that are difficult but important to track.
AZA RASKIN - HOST, YOUR UNDIVIDED ATTENTION: Yeah, there's sort of a version of 'build it, they will come'. It's like build it and it will be used. You know, one of the other things we picked up from your book is you talked about a policy I'd never heard of called CODIS, which you say moves the US closer towards construction of a discriminatory genetic [00:50:00] panopticon, a kind of dystopian tool of genetic surveillance that could potentially encompass everyone within the United States, including ordinary citizens, when they've not been convicted or even suspected of criminal conduct. Can you talk a little bit more about that?
PETRA MOLNAR: Yeah, that's the other kind of element of this dystopia. The fact that, you know, your body becomes a border in a way, not only just with biometrics, but also with DNA collection. And there's been different pilot projects kind of rolled out over the years. Again, how is that possible, right? Like, have we agreed to this as people who are crossing borders? The fact that states are now considering collecting DNA for border enforcement is very dystopic. Because I think that's ultimately what it is about, the fact that each of these incursions is moving the so called Overton window further and further.
You know, we're talking, at first it's biometrics, then it's robodogs, then it's DNA. What is it going to be next, right? And I don't mean to just fear monger or kind of [00:51:00] future-predict or anything; this is based on years of work across different borders and seeing the appetite for a level of technological incursion that I don't think it's going to stop anytime soon.
AZA RASKIN - HOST, YOUR UNDIVIDED ATTENTION: Where have there been examples in the world where things have gone the other way around? Where it's not just a temporary, like, public outcry and robodogs get taken back, but, like, something really significant has happened, where a surveillance technology from the border gets rolled back because it really doesn't fit a country's values.
PETRA MOLNAR: I will say we are catching this at a really crucial moment because there are conversations about, well, how do we regulate some of this? Like, do we put some red lines under some of this technology? And there were some really, really inspiring conversations being had at the European Union level, for example, because it went through this really long protracted process of putting together an AI act, basically, the first regional attempt to regulate AI. And even though in the end it didn't go as far as it, I think, should on border technologies, there were [00:52:00] conversations about, for example, a ban on predictive analytics used for border interdictions or pushback operations or using individualized risk assessments and things like that.
I think traction on these issues can be gained by kind of extrapolating from the border and making citizens also worry about biometric mass surveillance and surveillance in public space and things like that, and finding kind of moments of solidarity among different groups that are equally impacted by this. And that is where the conversation seems to be moving: less "now we're fact-finding and showing all these kinds of egregious human rights abuses," which are still happening, and more "what do we then do about it together, collectively?"
TRISTAN HARRIS - HOST, YOUR UNDIVIDED ATTENTION: It seems like one of the ways to motivate public action to regulate this is to show how, you know, what starts at the border to deal with "the other" and the immigration that's coming into the country, then later can get turned around to be used on our own citizens. And in your book, you actually have [talked] about how the global push to strengthen borders has gone hand in hand with the rise in [00:53:00] far-right politics, to root out the other. And you talk about examples of far-right governments who turn around and use the same technology tested at their border on their own citizens to start strengthening their regime. And you give examples, I think in Kenya, Israel, Greece. Could you just elaborate on some of the examples? Because I think if people know where this goes, then it motivates how do we get ahead of this more?
PETRA MOLNAR: Yeah, I think it's important to bring it back to political context, because all around the world we're seeing the rise of anti-migrant far-right groups and parties making incursions into, you know, the political space. Sometimes in small ways and sometimes in major ways, and, you know, I think it's an open question what's going to happen in the United States this year, right, with the election that you guys have coming up.
What I've seen, for example, in Greece is that parties that are very anti-migration normalize the need to bring in surveillance technology at the border and test it out in refugee camps, for example, and then say, okay, well, we're going to be using similar things by the police on the streets of Athens, for example. [00:54:00] You know, in Kenya, similar things: the normalization of data extraction for the purposes of digital ID is then used and weaponized against groups that already face marginalization, like Somali Kenyans, the Nubian community, and smaller groups like that.
So, again, I think the fact that there is this kind of global turn to the right and more of a fear-based kind of response to migration, motivates more technology. And you again see this kind of in the incursion of the private sector, kind of normalizing some of these really sharp interventions and say, Oh, well, you know what, we have your solution here. You are worried about migration and "the other", let's bring in this project. And then, Oh, lo and behold, you can actually use it on, you know, protesters that you don't like or sports stadium fans who are too rowdy and groups like that as well.
TRISTAN HARRIS - HOST, YOUR UNDIVIDED ATTENTION: Okay, so we just talked about Kenya and Greece, in the context of other governments, but what about Israel? What's their role in all this? Are they using these technologies at their borders?
PETRA MOLNAR: Yeah, for sure. I [00:55:00] mean, Israel is definitely a nucleus in everything that we're talking about today. And I also felt compelled to go to the occupied West Bank for the book because it's really the epicenter of so much of the technology that is then exported for border enforcement in the EU and at the US-Mexico border, right? But what is really troubling in how Israel has been developing and deploying technology is that Palestine has become the ultimate testing ground, a laboratory, if you will. Surveillance technology is tested on Palestinians, both in the West Bank and in the Gaza Strip, and then sold to governments around the world for border enforcement. And all of these projects that are normalized in these situations then can get exported out into other jurisdictions.
Big Tech AI Is A Lie - Tina Huang - Air Date 4-30-24
TINA HUANG - HOST, TINA HUANG: Back in 2023, Google came up with this very impressive demo of Gemini.
AI DEMO: Tell me what you see. I see a squiggly line. What about now? It looks like a bird to me. What if I add [00:56:00] this? The bird is swimming in the water. It has a long neck and beak. It is a duck.
TINA HUANG - HOST, TINA HUANG: Really impressive, right? But unfortunately, it came to light that this demo is a bit deceptive, but actually still pretty impressive. Which is why, when Gemini Ultra came out in 2024, I was really excited to try it out. I tried replicating the results of the Gemini demo by prompting Gemini with the same prompts and seeing what came out of it.
And, drumroll please, this.
AI DEMO: The image you sent me now appears to show a drawing of a snake on a piece of white paper. The snake is long and slender with a small head and forked tongue.
TINA HUANG - HOST, TINA HUANG: Right, definitely a snake. Now in a more recent conference, they came up with these new autonomous agents. Super fancy, really, really impressive.
GOOGLE SPOKESPERSONS: Gemini for Google Workspace is our AI powered agent to quickly identify threats. You can use Gemini in security operations. We're seeing really interesting creative agents.
TINA HUANG - HOST, TINA HUANG: But when I excitedly tried to replicate this with their Vertex AI builder:
TH: Can you help me log my pet [00:57:00] into this system?
AI: I can help you with that. Can you please provide me with your pet's name and ID?
TH: My pet is a goldfish and does not have an ID.
AI: What is its name?
TH: My goldfish name is Bobby.
AI: I'm sorry I'm having trouble creating a new pet profile for your goldfish. There is an error loading the agent.
I was again sorely disappointed. Seems to be a trend.
Now, combine this together with a pop up message from Gemini, which states, "Your conversations are processed by human reviewers to improve the technologies powering Gemini apps". I don't know, one could maybe even start hypothesizing that maybe Google is faking its AI capabilities?
And of course, they are not the only ones. Amazon has these Just Walk Out stores where you're supposed to be able to go inside, grab anything that you need, and then when you walk out there is facial recognition technology and AI, of course, that's able to charge your credit card automatically, or so they say. According to The [00:58:00] Information, there were in fact over a thousand associates in India who were watching the cameras and labeling footage of shoppers: "An employee who worked on the technology said that actual humans, albeit distant and invisible ones based in India, reviewed about 70 percent of sales made in the cashierless shops as of mid-2022".
It's just insanity, the amount of lies that we as a public are supposed to be able to tolerate. Like, seriously, what the f***. These companies are becoming more and more reckless, to the point that they're just blatantly disregarding, say, the safety of the general public.
For example, one of the biggest ironies is the fact that Sam Altman's whole, like, ousting ordeal last year as the CEO of OpenAI was linked to concerns over AI safety. Yep, remember 2015? OpenAI was an AI safety research company. Seriously, I think this is really just crossing the line here, all for the sake of gathering more investor money. And it's actually insulting to feed the public this continuous lie about how they're still doing everything for the public, for the future of humanity, for [00:59:00] everybody.
Trickle-down economics is defined as a theory that tax breaks and benefits for corporations and the wealthy will trickle down and eventually benefit everybody. Like this. Filling the cup of the top wealthy people would eventually trickle down to all of us.
Sam Altman, of course, is a big proponent of this. In his 2021 essay called "Moore's Law [for] Everything", he lays out what he sees as the three key consequences of the AI revolution: 1) "this revolution will create phenomenal wealth. The price of many kinds of labor, which drives the cost of goods and services, will fall towards zero once sufficiently powerful AI 'joins the workforce'".
And 2), "the world will change so rapidly and dramatically that an equally drastic change in policy will be needed to distribute this wealth and enable more people to pursue the life they want".
And 3), "if we get both of these right, we can improve the standard of living for people more than we ever have before". The way that he proposes we should offset the job loss to the common folk is to have UBI, universal basic income, funded from [01:00:00] corporate and property taxes alone.
Yeah, I don't know about that. Since when have the rich ever wanted to give away their wealth and pay more taxes? We now know that so much of the philanthropy that these really wealthy people do is also for the sake of tax breaks. So, sorry to break that illusion, if you still have that illusion. I'm not an expert here, and that's like a whole other can of worms, but I mean, I'm not going to be holding my breath on that one.
In reality, trickle-down economics works more like this. The cup of the wealthiest and the most powerful just keeps getting bigger. Case in point, the recklessness that we've seen in these big tech companies trying to get more investor money: they're not exactly focusing on trickling it down to the rest of us and really actually helping us, you know, benefit society.
I mean, no wonder the employees in these companies, when you start asking them questions, they end up getting pretty uncomfortable. But hey, please let me clarify here: I am not attacking these employees from big tech [01:01:00] companies. I mean, that would also make me hypocritical, because I worked at Meta, a big tech company. I also know many people that work at these big tech companies, and I don't think that they're bad people who are willingly contributing towards this mess.
It gets really complex because, as I'll talk about later, AI very much has this ability to better humanity, and the people working on these technologies can clearly see that. But what leadership says that they're doing versus what they're actually doing just doesn't line up.
So, before you click off this video full of doom, I want to show you that there is hope in AI doing tremendous good in this world. Actually, a lot of hope because there is no clear winner of AI right now. There is no company that has a monopoly.
MOVIE CLIP: Truly open means open to everyone.
TINA HUANG - HOST, TINA HUANG: Introducing the counter movement of closed source proprietary big tech technology: the open source movement. Open source refers to a type of software whose source code is made available to the public and can be modified and shared by anyone. It's built on principles of collaboration, transparency, and [01:02:00] community-oriented development.
It's basically the opposite of big tech AI. This movement's been around for quite some time now, and there have been really big successes that we've seen from the open source community. For example, Red Hat, founded in 1993, became huge in supporting professional enterprise-level Linux distributions. The Apache Software Foundation, founded in 1999, is also responsible for a lot of the open source software that is the foundation of the internet and many web technologies today. MySQL, PostgreSQL: anybody that uses databases is probably familiar with these open source databases. GitHub really brought together the open source community. Coding languages like Python and JavaScript are all open source, and many of you use them today.
These are just a few examples. What makes me really happy now is that the open source community has really stepped up the game in this whole AI situation. If you just scroll through Hugging Face, which itself is an open source collaborative ML/AI platform, you'll see lots and lots of open source AI models. And people are developing open source consumer products as well. We [01:03:00] have the 01 Light, which is a voice interface for your home computer. It's open source and allows developers to build on top in order to create their own unique agents.
ChatDev is another very interesting open source agent initiative. It's a collection of intelligent autonomous agents that work together to form a software company. For example, a CEO agent, a CTO agent, a programmer agent, tester, etc, etc. These agents work together in order to accomplish a task that the user sets out. I really recommend that you play around with it. It's super easy to use and really cool how it works.
Anyways, there is this open source AI push, and given the financial viability that has already been proven in other open source projects, many investors are also willing to invest in open source projects and companies. As individuals, you watching this video as well, I hope, will start thinking more about contributing towards open source, whether that be volunteering, contributing code, using more open source products, or even building your own businesses and startups, which, by the way, is probably a lot easier than you think, especially [01:04:00] if you use AI to help you out.
Hey, at least check out the free HubSpot resource book. I think with open source, we'll be able to make a big step forward towards the realignment of AI innovation and development with the benefit of humanity as a whole.
Final comments on the nature of misalignment between AI and human wellbeing
JAY TOMLINSON - HOST, BEST OF THE LEFT: We've just heard clips today, starting with Vox, explaining that we're already using AI more than we realize. Linus Tech Tips explained the changing definitions of AI. The CEO of Microsoft's AI division gave a TED Talk, which laid out a very rosy vision of the potential future of AI. Global Dispatches, in contrast, described how AI can and will be used by militaries on the battlefield, up to and including the automating of nuclear weapons. The FiveThirtyEight Politics Podcast discussed the threat of AI to elections and the need for regulation. DW News explained the relatively hidden water usage of AI. Your Undivided Attention highlighted the use of AI technologies for surveillance. And Tina Huang critiqued big tech's [01:05:00] tendency to overpromise and underdeliver on AI projects, all in the pursuit of investor money.
And those were just the Top Takes; there's a lot more in the Deeper Dive section. But first, a reminder that this show is supported by members who get access to bonus episodes, featuring the production crew here discussing all manner of important and interesting topics, often trying to make each other laugh in the process.
To support all of our work and have those bonus episodes delivered seamlessly to the new members-only podcast feed that you'll receive, sign up to support the show at BestOfTheLeft.Com/Support (there's a link in the show notes), through our Patreon page, if you like, or from right inside the Apple Podcast app. If regular membership isn't in the cards for you, shoot me an email requesting a financial hardship membership, because we don't let a lack of funds stand in the way of hearing more information.
And now, before we continue on to the Deeper Dives half of the show, I just wanted to talk a bit more about the deals being made between journalism companies and AI developers.
I mentioned at the top of the [01:06:00] show that Vox had signed a deal, which was reported on the same day, I think, as the Atlantic signing something similar. Both publications wrote articles actually critical of the move by their own parent companies, which is always fun to see. But the article from Vox laid out an old thought experiment and breathed new life into it, with additional analysis that I thought was worth sharing.
So the article is titled, "This article is OpenAI training data," and it starts with a quick description of the old paperclip maximizer thought experiment. "Imagine an artificial general intelligence, one essentially limitless in its power and its intelligence. This AGI is programmed by its creators with the goal of producing paperclips. Because the AGI is super intelligent, it quickly learns how to make paperclips out of anything. And because the AGI is super intelligent, it can anticipate and foil any attempt to stop it and will do so because its one [01:07:00] directive is to make more paperclips. Should we attempt to turn the AGI off, it will fight back, because it can't make more paperclips if it's turned off, and it will win because it is super intelligent. The final result: the entire galaxy, including you, me and everyone we know, has either been destroyed or been transformed into paperclips."
Now there's a good chance that you've heard that one before, but it's worth repeating because it reminds us of the nature of maximization. When people or corporations or AIs attempt to maximize for one thing, there will always be trade-offs, sometimes extreme trade-offs. In the case of the paperclip concept, the thought experiment is teaching that great care must be taken when programming an AI system that will be an efficient maximizer by its nature.
But the lesson, just as the nature of maximization itself, can be extrapolated out into other realms. I would certainly put capitalism on that list. [01:08:00] Capitalism is designed to maximize wealth, which isn't inherently evil, just like figuring out the most efficient way to make paperclips isn't inherently evil. It's the trade-offs that end up tripping us up. Think runaway climate change.
Now ideally, our economic system would be in perfect alignment with creating human wellbeing, human flourishing, human happiness, or, you know, if we were exceptionally enlightened, we would understand our own wellbeing to be inextricably linked with all other aspects of nature, living and inert, and we would want to align our economy with a sort of whole-earth wellbeing. Instead, our economics is only concerned with financial wealth, which is only useful to humans, and is at best an approximate stand-in for wellbeing, but it is certainly not synonymous with wellbeing.
Similarly, there are businesses--this podcast included, frankly--that exist within an economic system that forces [01:09:00] their multiple priorities to be at least slightly misaligned. For instance, producing the best possible journalism and making the most amount of money are certainly not in alignment. Even way before the age of internet news and New York Times games like Wordle driving revenue, it was known that sports coverage and scandal brought in the funds newspapers needed to subsidize the hard reporting efforts that cannot support themselves.
And all of this brings us back to journalistic companies striking deals with tech companies, because they need the money.
Back to the article. Quote: "I've seen our industry pin our hopes on search engine optimization, on the pivot to video, and back again. On Facebook and social media traffic. I can remember Apple coming to my offices at Time Magazine in 2010, promising us that the iPad would save the magazine business. [01:10:00] It did not. Each time we are promised a fruitful collaboration with tech platforms that can benefit both sides. And each time it ultimately doesn't work out, because the interests of those tech platforms do not align, and have never fully aligned, with those of the media." End quote.
But what I would like to point out is that in all of the time people have been worrying about the rise of AI and the dangerous potential for a small misalignment of intentions to result in a disaster like that of the paperclip thought experiment, precious few have thought to turn their gaze to the companies directing the development of AI systems. As we know, for-profit companies' incentives are not aligned with the long-term benefit of the planet and everything on it; far from it.
So how will they design AI with the best alignment of incentives? And moreover, how can they even go through the process of developing that AI [01:11:00] without the trade-offs of the process being catastrophic in the same way that runaway climate change was the trade-off for all of the benefits we gained from fossil fuels?
Back to the article. Quote: "AIs aren't the only maximizers. So are the companies that make AIs, from OpenAI to Microsoft, to Google, to Meta, companies in the AI business are engaged in a brutal race for data, for compute power, for human talent, for market share, and ultimately, for profits. Those goals are their paperclips. And what they are doing now, as hundreds of billions of dollars flow into the AI industry, is everything they can to maximize them." End quote.
And as those companies maximize their profits, their goal will be to extract as much value out of the raw data and human talent as they can, so that their AIs are as capable as possible, so that they can maximize the revenue they generate. For [01:12:00] example, Google is including AI responses to questions that now disincentivize users from clicking through to source material. So Google still earns ad dollars from your search, but the humans who wrote the source material that the AI drew on for its answers will earn less, due to getting less and less traffic to their sites.
So the article wraps up, quote: "If you can't connect to an audience with your content, let alone get paid for it, the imperative for producing more work dissolves. It won't just be news. The endless web itself could stop growing. Bad for all of us, including the AI companies. What happens if, while relentlessly trying to hoover up every possible bit of data that could be used to train their models, AI companies destroy the very reasons for humans to make more data. Surely they can foresee that possibility. Surely they wouldn't be so single-minded as to destroy the [01:13:00] raw material they depend on. Yet just as the AI and the paperclip thought experiment relentlessly pursues its single goal, so do the AI companies of today. Until they've reduced the news, the web, and everyone who was once part of it to little more than paperclips."
SECTION A: MORE CURRENT USES OF AI
JAY TOMLINSON - HOST, BEST OF THE LEFT: And now we'll continue with deeper dives on three topics.
Next up, Section A: More current uses of AI. Section B: Potential uses of AI and the ethics we need to consider. And Section C: Regulating AI.
Landlords Using Shady Algorithm To Raise Rents | Judd Legum - The Majority Report w/ Sam Seder - Air Date 6-15-24
SAM SEDER - HOST, THE MAJORITY REPORT: Tell us about RealPage, and then give us the timeline of what's been happening on that.
JUDD LEGUM: Yeah. And this was a story that was really broken open by ProPublica a couple of years ago, about RealPage, which is a software program that is used by many corporate landlords, particularly any large building with a bunch of units.
That's really what it's optimized for. And the [01:14:00] landlords feed in all of the information, well beyond what you could get on Zillow or publicly available information: vacancy rates, what they're actually charging, all the fees, everything that there is. And then this program spits back out a recommended rent for that unit.
But what's insidious about this process is that essentially so many corporate landlords are using it that they know they don't need to go underneath that recommendation, because the building around the corner is also using RealPage and is also going to be using these prices. And what they found is, since the corporate landlords have adopted this in large numbers, the rents have gone up and up and up and up.
So that's essentially how the system works: it's [01:15:00] effectively collusion via software, where they're not sitting in a smoke-filled room fixing the prices of rent in a given area, whether it's Atlanta or Seattle or wherever it is, but they're doing so via this software algorithm.
SAM SEDER - HOST, THE MAJORITY REPORT: And I gotta say, like, I didn't remember this element of the story until I read your piece on it. But if there's also sort of a mafia quality to this, where they say, look, if you're going to use this software, you cannot undercut the price that we give you, because then you're screwing up everything.
JUDD LEGUM: Yeah.
SAM SEDER - HOST, THE MAJORITY REPORT: And that seems to be a big giveaway.
JUDD LEGUM: Yes. They have people who are monitoring it, who are making sure you're in compliance with their recommendations. And actually, if you price your apartments too low too many times, you'll get kicked off the system. [01:16:00] And so we actually have learned a lot more in the last two years since that ProPublica story came out, because there's been a series of class action lawsuits.
There's also been lawsuits filed by the attorneys general in D.C. and elsewhere too. So that process has started to reveal even more information about how the whole system works.
SAM SEDER - HOST, THE MAJORITY REPORT: And you write that deploying RealPage software in one case in Houston resulted in pushing people out with higher rents, but ultimately increased revenue by $10 million.
So they're making a ton of money off of this, the landlords. But I mean, that's the beauty of price fixing, right? It's like, I know I can sustain this higher-than-market price if everybody sustains this higher-than-market price.
JUDD LEGUM: Yeah, and it's [01:17:00] essentially gotten rid of negotiation. It used to be you could go to rent an apartment,
they tell you, here's the price, $3,000 a month, whatever it is, and you go into the rental office and you say, you know, I'd really like to pay $2,900 a month. Now, part of that is there's a housing shortage. But the other part of it is RealPage has made it clear that you are not to negotiate these prices. And the corporate landlords can feel confident, because before they might know, well, if I won't give these people $100 or $200 off, they're just going to go around the corner to another nice building and those people will do it, and I'll be left with an empty unit.
But they know if those people go two blocks down, they're going to run into the exact same pricing scheme and the exact same reluctance to negotiate under any circumstances. So that's really what's driving the prices up. There used to be a saying: get heads in beds. You know, when you ran these big buildings, the idea was [01:18:00] keep them full. But RealPage just kind of overturned that philosophy, and now they're really holding the line on prices, even if they have to keep a couple of units empty for a little bit.
SAM SEDER - HOST, THE MAJORITY REPORT: You cite in one of those lawsuits, this one in Arizona, that 70 percent of multifamily apartment units listed in the Phoenix metropolitan area are owned, operated, or managed by companies that have contracted with RealPage.
A lawsuit in D.C.: 60 percent of large multifamily buildings, 50 units or more, set prices using RealPage software. These numbers, I don't know, I mean, they may have gone up since then. And we don't really know. Can we look at Boston, at New York, at Dallas, Chicago?
I mean, do we have a sense of just how ubiquitous this RealPage software is, or is it only piecemeal information?
JUDD LEGUM: Well, it's really piecemeal at this point, [01:19:00] because when these suits are filed, they can do discovery, they can get information about what's going on. There are some ways to sort of see, and people have tried to collect data, but we don't have a full sense.
We don't know what the full scope is. And by the way, there was a competing software company that had somewhat of a different approach to how it advised these corporate landlords to manage their properties. It was purchased by RealPage. So the whole purpose of it, and they even say this in their marketing materials, is that you can charge
more than the market price would bear otherwise, which is a pretty clear indication that you're doing something to subvert the actual competitive market. And, you know, this is in the context of a housing shortage, so the prices would be going up anyway, [01:20:00] but the level of price increases that we've seen, especially in major metropolitan areas, has far exceeded
even the inflation that we've seen in some other areas.
SAM SEDER - HOST, THE MAJORITY REPORT: And, like you say in the piece too, Jeffrey Roper, who created the RealPage algorithm, explained that, quote, if you have idiots undervaluing it, undervaluing, in other words, undercutting, what they have all sort of agreed they're going to charge,
it costs the whole system. I mean, they've sort of given away the game at that point, right? Because the whole system that they're describing, all these different landlords, they shouldn't be in a system. I mean, if it was up to me, maybe we would just nationalize some of these places, or have the city or state take over, and then it would be a system. But there shouldn't be a private cartel, essentially, [01:21:00] saying what undervaluing is at that point.
JUDD LEGUM: Yeah, and it works on so many different levels, because one of the ways that you used to be able to get a good deal on an apartment is people move in and out randomly. So, at certain times, there might be a flood of apartments that become available just out of random chance. If you come in during that time, you might be able to get a good deal because all the buildings are competing with each other.
But in addition to setting the prices at a high level and keeping them moving up and up and up through the software, all the different buildings make sure that there aren't too many units available at any given time. They'll hold them back so that it's always an artificially constrained market, which is pretty classic: if you're going to collude and price fix, that's what you might want to do.
Lavender & Where's Daddy: How Israel Used AI to Form Kill Lists & Bomb Palestinians in Their Homes - Democracy Now! - Air Date 4-5-24
YUVAL ABRAHAM: Lavender was designed by the military. Its purpose was, when it was being designed, to mark the low-ranking operatives in the Hamas and Islamic Jihad military wings. That was the intention, because, you know, Israel estimates that there are between 30,000 to 40,000 Hamas operatives, and it’s a very, very large number. And they understood that the only way for them to mark these people is by relying on artificial intelligence. And that was the intention.
Now, what sources told me is that after October 7th, the military basically made a decision that all of these tens of thousands of people are now people that [01:30:00] could potentially be bombed inside their houses, meaning not only killing them but everybody who’s in the building — the children, the families. And they understood that in order to try to attempt to do that, they are going to have to rely on this AI machine called Lavender with very minimal human supervision. I mean, one source said that he felt he was acting as a rubber stamp on the machine’s decisions.
Now, what Lavender does is it scans information on probably 90% of the population of Gaza. So we’re talking about, you know, more than a million people. And it gives each individual a rating between one to 100, a rating that is an expression of the likelihood that the machine thinks, based on a list of small features — and we can get to that later — that that individual is a member of the Hamas or Islamic Jihad military wings. Sources told me that [01:31:00] the military knew, because they checked — they took a random sampling and checked one by one — the military knew that approximately 10% of the people that the machine was marking to be killed were not Hamas militants. They were not — some of them had a loose connection to Hamas. Others had completely no connection to Hamas. I mean, one source said how the machine would bring people who had the exact same name and nickname as a Hamas operative, or people who had similar communication profiles. Like, these could be civil defense workers, police officers in Gaza. And they implemented, again, minimal supervision on the machine. One source said that he spent 20 seconds per target before authorizing the bombing of the alleged low-ranking Hamas militant — often it also could have been a civilian — killing those people inside their houses.
And I think this, the reliance on artificial intelligence here to mark those targets, and basically the [01:32:00] deadly way in which the officers spoke about how they were using the machine, could very well be part of the reason why in the first, you know, six weeks after October 7th, like one of the main characteristics of the policies that were in place were entire Palestinian families being wiped out inside their houses. I mean, if you look at U.N. statistics, more than 50% of the casualties, more than 6,000 people at that time, came from a smaller group of families. It’s an expression of, you know, the family unit being destroyed. And I think that machine and the way it was used led to that.
AMY GOODMAN - HOST, DEMOCRACY NOW!: You talk about the choosing of targets, and you talk about the so-called high-value targets, Hamas commanders, and then the lower-level fighters. And as you said, many of them, in the end, it wasn’t either. But [01:33:00] explain the buildings that were targeted and the bombs that were used to target them.
YUVAL ABRAHAM: Yeah, yeah. It’s a good question. So, what sources told me is that during those first weeks after October, for the low-ranking militants in Hamas, many of whom were marked by Lavender, so we can say “alleged militants” that were marked by the machine, they had a predetermined, what they call, “collateral damage degree.” And this means that the military’s international law departments told these intelligence officers that for each low-ranking target that Lavender marks, when bombing that target, they are allowed to kill — one source said the number was up to 20 civilians, again, for any Hamas operative, regardless of rank, regardless of importance, regardless of age. One source said that there were also minors being marked — not many of them, but he said that was a possibility, that [01:34:00] there was no age limit. Another source said that the limit was up to 15 civilians for the low-ranking militants. The sources said that for senior commanders of Hamas — so it could be, you know, commanders of brigades or divisions or battalions — the numbers were, for the first time in the IDF’s history, in the triple digits, according to sources.
So, for example, Ayman Nofal, who was the Hamas commander of the Central Brigade, a source that took part in the strike against that person said that the military authorized to kill alongside that person 300 Palestinian civilians. And we’ve spoken at +972 and Local Call with Palestinians who were witnesses of that strike, and they speak about, you know, four quite large residential buildings being bombed on that day, you know, entire apartments filled with families being bombed and killed. And [01:35:00] that source told me that this is not, you know, some mistake, like the amount of civilians, of this 300 civilians, it was known beforehand to the Israeli military. And sources described that to me, and they said that — I mean, one source said that during those weeks at the beginning, effectively, the principle of proportionality, as they call it under international law, quote, “did not exist.”
AMY GOODMAN - HOST, DEMOCRACY NOW!: So, there’s two programs. There’s Lavender, and there’s Where’s Daddy? How did they even know where these men were, innocent or not?
YUVAL ABRAHAM: Yeah, so, the way the system was designed is, there is this concept, in general, in systems of mass surveillance called linking. When you want to automate these systems, you want to be able to very quickly — you know, you get, for example, an ID of a person, [01:36:00] and you want to have a computer be very quickly able to link that ID to other stuff. And what sources told me is that since everybody in Gaza has a home, has a house — or at least that was the case in the past — the system was designed to be able to automatically link between individuals and houses. And in the majority of cases, these households that are linked to the individuals that Lavender is marking as low-ranking militants are not places where there is active military action taking place, according to sources. Yet the way the system was designed, and programs like Where’s Daddy?, which were designed to search for these low-ranking militants when they enter houses — specifically, it sends an alert to the intelligence officers when these AI-marked suspects enter their houses. The system [01:37:00] was designed in a way that allowed the Israeli military to carry out massive strikes against Palestinians, sometimes militants, sometimes alleged militants, who we don’t know, when they were in these spaces in these houses.
And the sources said — you know, CNN reported in December that 45% of the munitions, according to U.S. intelligence assessments, that Israel dropped on Gaza were unguided, so-called dumb bombs, that have, you know, a larger damage to civilians. They destroy the entire structure. And sources said that for these low-ranking operatives in Hamas, they were only using the dumb munitions, meaning they were collapsing the houses on everybody inside. And when you ask intelligence officers why, one explanation they give is that these people were, quote, “unimportant.” They were not important enough, from a military perspective, that the Israeli army would, one source [01:38:00] said, waste expensive munitions, meaning more guided floor bombs that could have maybe taken just a particular floor in the building.
And to me, that was very striking, because, you know, you’re dropping a bomb on a house and killing entire families, yet the target that you are aiming to assassinate by doing so is not considered important enough to, quote, “waste” an expensive bomb on. And I think it’s a very rare reflection of sort of the way — you know, the way the Israeli military measures the value of Palestinian lives in relation to expected military gain, which is the principle of proportionality. And I think one thing that was very, very clear from all the sources that I spoke with is that, you know, this was — [01:39:00] they said it was psychologically shocking even for them. That’s the combination between Lavender and Where’s Daddy? The Lavender lists are fed into Where’s Daddy? And these systems track the suspects and wait for the moments that they enter houses, usually family houses or households where no military action takes place, according to several sources who did this, who spoke to me about this. And these houses are bombed using unguided missiles. This was a main characteristic of the Israeli policy in Gaza, at least for the first weeks.
The AI Revolution is Rotten to the Core - Jimmy McGee - Air Date 9-15-23
JIMMY MCGEE - HOST, JIMMY MCGEE: There are hundreds of schemes for machine learning at this point, but neural networks are the most popular, and most of the concepts behind neural networks apply to everything else in the field, too.
Neural networks are simplistic models of brains: networks of neurons. A very low resolution picture of our brains is that we take in some stimulus, say the [01:40:00] light bouncing off a painting, then something happens in the brain, then we feel an emotion or sensation. There's an input, an output, and something in the middle.
And that's exactly how every beginner course describes a neural network. A layer of input nodes, one or more hidden layers, then a layer of output nodes. Hidden layer is kind of a misnomer though. The hidden layers themselves aren't a black box, and tweaking them is a big part of developing a neural network.
The data that these hidden layers produce usually isn't meaningful to us, though. It's only used by the network, and that's probably where the "hidden" comes in. Explainability, figuring out why an AI makes decisions the way it does, is a big problem in machine learning, because AI doesn't follow a line of reasoning the way a person would.
Nodes are another abstract concept. There's no obvious correspondence between a node and what the computer is actually doing. Really, node just means function. And in machine learning, nodes are usually taking a vector, doing something to it, [01:41:00] and passing it on to the next layer. We could have each hidden node add up the values of everything connected to it, for example.
The secret sauce is in weighing the inputs. In real brains, some connections between neurons are stronger than others. There are still lots of questions about the human brain. But the idea is that these connections and their strength affect our thoughts and actions somehow. Since an artificial neural network is just functions sending and receiving numbers, you can multiply each of these by some factor to make it bigger or smaller, more or less influential.
The network on screen is a toy example, but neural networks are always made to achieve some goal. Let's say our input is a picture of a letter, where the brightness of each pixel is an input, and the output is a guess for what letter it is. This is just a big pile of math. It's cool that we can make a machine with billions of adjustment knobs, but we're not going to do all that work by hand.
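To make that structure concrete, here is a minimal sketch in Python, assuming NumPy; it is not from the video, and the sizes (784 pixel brightnesses in, 32 hidden nodes, 26 letter scores out) and names are illustrative. Each hidden node is just a function: weigh everything connected to it, add it up, squash the result.
import numpy as np
def layer(inputs, weights, biases):
    # each node: weighted sum of all incoming values, plus a bias, then a squashing step
    return np.maximum(0.0, inputs @ weights + biases)  # ReLU keeps the sketch simple
rng = np.random.default_rng(0)
x = rng.random(784)                          # brightness of each pixel in a letter image
w1 = rng.standard_normal((784, 32)) * 0.01   # input -> hidden weights (the "knobs")
b1 = np.zeros(32)
w2 = rng.standard_normal((32, 26)) * 0.01    # hidden -> output weights
b2 = np.zeros(26)
hidden = layer(x, w1, b1)                    # the hidden layer's values
scores = hidden @ w2 + b2                    # one untrained score per letter, A through Z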
Thankfully, neural networks can be trained to adjust their own parameters. If I have a [01:42:00] photo of a letter that I know is a B, then I can compare that to what the network guesses. This pair of image and text is a piece of training data, and big networks will go through millions or billions of them. Weights are usually randomized at first, so the network will probably say the letter is an A, X, P, and C all at once.
But we can calculate how wrong the network is, and use this to adjust the weights, so it gets a little bit better every time. If it's 20 percent confident that the letter B is the letter A, then we need to reduce the weights that influence that guess. Gradient descent is the piece of statistical magic that made the AI revolution possible.
If you're on a hill, the gradient where you're standing is an arrow, or a vector, pointing in the steepest direction. Following this gradient is the fastest way to climb the hill. Going backwards from the gradient is the fastest way to go down the hill. An error function is like a hill that represents how wrong each of our weights is.
So if [01:43:00] you take the gradient and go backwards, the network will slowly move toward zero error. In my example, it gets better at guessing letters. It's a pretty goddamn cool idea, but there are no miracles here. The concept I just described was laid out in a paper from 1958 as a theory for how the human brain works.
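As a rough illustration of that training loop, and not anything taken from the video, here is a sketch in Python of gradient descent on a single-layer letter classifier, again assuming NumPy; the data is random stand-in data, and the sizes and learning rate are made up. The loop measures how wrong the guesses are, computes the gradient of that error, and then steps the weights against it, downhill.
import numpy as np
rng = np.random.default_rng(0)
n_pixels, n_letters, n_examples = 64, 26, 500
X = rng.random((n_examples, n_pixels))             # stand-in "images"
y = rng.integers(0, n_letters, size=n_examples)    # stand-in labels ("this one is a B")
W = rng.standard_normal((n_pixels, n_letters)) * 0.01  # weights start out random
b = np.zeros(n_letters)
learning_rate = 0.5                                # how big a step to take down the hill
for step in range(200):
    scores = X @ W + b
    scores -= scores.max(axis=1, keepdims=True)    # keep the exponentials numerically stable
    probs = np.exp(scores)
    probs /= probs.sum(axis=1, keepdims=True)      # confidence in each letter
    # the error: how confidently wrong the network is (cross-entropy)
    loss = -np.log(probs[np.arange(n_examples), y]).mean()
    if step % 50 == 0:
        print(f"step {step}: error {loss:.3f}")    # the error shrinks as we descend
    # gradient of that error with respect to the weights...
    d_scores = probs
    d_scores[np.arange(n_examples), y] -= 1.0
    d_scores /= n_examples
    dW = X.T @ d_scores
    db = d_scores.sum(axis=0)
    # ...and a step *against* the gradient, downhill on the error surface
    W -= learning_rate * dW
    b -= learning_rate * db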
So, it's not exactly revolutionary, but layers of interconnected nodes still make up the structure of all those headline grabbing AI systems we see today. You would hope that that example network, the letter classifier, would learn to recognize patterns in the strokes of letters, or at least do something intelligible.
But the weights that a neural network comes up with look just about random, and a lot of the architecture behind today's machine learning systems is based on somebody trying something new that happens to work. Sometimes, you can say that they make the whole gradient descent process more efficient.
But with current setups, there's never going to be some obvious improvement reflected in the hidden data [01:44:00] itself. You're never going to get a line of reasoning from AI. Famously, you can use these things to generate media, like images and music, from a text description.
Google's Deep Dream was one of the first generative models that made headlines. It started as a network for classifying images, but they were able to sort of run the system in reverse, and have it hallucinate nightmarish faces in existing pictures. The original model was made for an image recognition contest that ImageNet ran in 2014.
ImageNet made a dataset of just under 15 million images, which it doesn't own the licenses for. With new technology, the line between research and commerce is growing blurrier. And big companies often use this fact to just manifest destiny whenever they want and make us live with the consequences. Scraping millions of images and sticking them in a public dataset is a huge ethical question mark, even in an [01:45:00] academic context.
But once an economy springs up around these datasets, they're hard to get rid of. This is a lesson we've learned over and over. Companies rush to market with leaded gas or asbestos insulation, and by the time we understand what they've done, entire swaths of the planet have brain damage and lung cancer.
Google mastered this principle with AdSense, a surveillance system that probably knows your heart rate and body temperature right now. Google's data harvesting operation became a load-bearing piece of the internet before the public understood digital privacy, and now we can't get rid of it. ImageNet popularized scraping the internet for training data, and the project has all the same problems that we're dealing with now.
It's very biased. They stole all the pictures, and they use questionable labor practices to label them all.
Amazon's Mechanical Turk bills itself as a micro task marketplace, a place for simple, short jobs that still require a human to complete them. I wanted to join the program as a [01:46:00] worker, but Amazon didn't bother approving or denying my request. The site is apparently so bad that workers have to use a bunch of extra scripts to actually do their jobs.
And you can't get any decent work there until you've done hundreds or thousands of human intelligence tasks, also known as HITs. A platform like that was a perfect fit for the ImageNet project, and they used it to label early versions of the dataset back in 2008 or 9. They gave workers a set of pictures and some objects to identify.
Workers would mark each picture if it contained the target object. If that sounds familiar, it's exactly like solving a CAPTCHA. In fact, we've all been helping Google train its neural networks for years. These companies have a very dubious concept of consent, and we'll see a lot more of that later. You literally have to help train an AI to access many websites.
At least ImageNet paid the Turkers. But with that said, Mechanical Turk's workforce does skew toward people with no other [01:47:00] options. Oscar Schwartz, writing for IEEE Spectrum, rightly identified that mTurk is designed to make human labor invisible. Jeff Bezos called them artificial artificial intelligence, and Turkers are described offhandedly as a horde, in an article that I read creaming itself over ImageNet.
Turkers were earning a median $2 per hour in 2018, and the situation hasn't really changed in the years following. These people are invisible, poor, and very easy to exploit. Mechanical Turk is slavery as a service, but it was also the first of a new breed. Turkers are generalists, but the AI revolution needed specialists.
Appen is one of many companies specifically selling data labeling for machine learning. Their crowdsourced labor came mostly from Kenya and the Philippines at first, but when Venezuela's economy collapsed, they started snapping up jobless refugees. A journalist for MIT Technology Review profiled a [01:48:00] Venezuelan Appen worker.
And the situation seems pretty dire. Workers have no line of communication with the company, they have to be constantly at their computers ready to accept tasks, and like Mechanical Turk, the site barely works. Appen can afford to push people as hard as they want, because there's a huge labor supply and the workers have nowhere else to go.
They congregate in discords and write scripts to make things tolerable. Because its workers are contractors, Appen pays out like a slot machine. Some tasks offer pennies, some don't even work, and some will offer hundreds of dollars, a relative fortune. I think a good rule of thumb is that any company that has to write a slavery policy is probably up to something.
SECTION B: POTENTIAL USES OF AI AND THE ETHICS WE NEED TO CONSIDER
JAY TOMLINSON - HOST, BEST OF THE LEFT: Now entering section B: potential uses of AI and the ethics we need to consider.
Reimagining A.I. | John Wild - Planet: Critical - Air Date 5-16-24
JOHN WILD: Tsiolkovsky studied the physics of his time, et cetera, and he developed some of the first practical designs for space [01:49:00] rockets and the equations required for space travel.
And he did this in 1896. These kinds of developments in the technology of space travel emerged from following Fedorov, realizing that if you ended death and resurrected the dead, then the planet would get overrun quite quickly. So it becomes necessary to leave the cradle of the Earth.
Does that make sense in the logic?
RACHEL DONALD - HOST, PLANET: CRITICAL: As logic? Sure.
JOHN WILD: The reason this becomes interesting is because Tsiolkovsky is basically the founder of the Russian space program, the former Soviet space program. His rocket designs are, not exactly the same, but are [01:50:00] the forefathers of our current rocket designs. So you've got this link between these quite fascinating and crazy
futurist imaginaries linked with technology, which ultimately developed the US space program. But how does this link with Silicon Valley? Well, if you look at, say, Ray Kurzweil. Ray Kurzweil is kind of the prophet for Google's AI program; I think he's probably the chief engineer.
But he also believes in moving towards immortality. He wanted to be the first person to kind of end death. So there's, like, a lot of these ideas that came from cosmism that have been translated directly into the kind of AI tech circles [01:51:00] which circulate. So Kurzweil is a serious technical player within the AI world, particularly in Google.
And this idea of extending life or eradicating death is part of the discourse which circulates within this community. That would be the kind of groupings which call themselves extropians, extropianism; I'm not sure how you say it properly, but these ideas link directly to actual technical production.
So, things like the Fitbit and the quantified self movement, the idea of monitoring your health and maximizing health, which you must have come across, because that's part of the tech scene.
RACHEL DONALD - HOST, PLANET: CRITICAL: Human optimization.
JOHN WILD: Exactly. This human optimization comes out of this attempt to extend [01:52:00] life and eradicate death.
So you can see how the kind of cosmism is kind of like part of it. Kind of plagiarized really right into these kind of like tech ideas, which then like find themselves being sold on Amazon as Fitbits or various other optimization technologies. Uh, Kurzweil himself. In an interview in a film called, I Human, declared that one of his driving force for developing artificial intelligence, and you've got to remember that this is a chief engineer, is to resurrect his own father.
RACHEL DONALD - HOST, PLANET: CRITICAL: Oh my god.
JOHN WILD: So you've got Fedorov repeating himself right at the top of the Google development chain.
RACHEL DONALD - HOST, PLANET: CRITICAL: Oh, God.
JOHN WILD: And taking a slight side move here: when we talk about artificial [01:53:00] intelligence in tech circles, it gets broken down into three different areas. The first one's narrow artificial intelligence, which is what we have at the moment, which is mainly what we call machine learning.
So it's narrow in that it can do very intelligent activities, such as playing Go or chess or predicting text, but in a very narrow domain.
The next, like the day-to-day of a company like OpenAI, is the development of artificial general intelligence. Now, artificial general intelligence is, in Google AI terms, kind of the equivalent of human intelligence. So it's this ability to [01:54:00] abstract and apply intelligence to multiple domains. So it's wider. But there's this idea of a general intelligence, which is what people are striving for in artificial general intelligence.
When you look at what a general intelligence is, that's actually rooted in the statistician Charles Spearman and the idea of the g factor. But Charles Spearman was a eugenicist, and his reason for developing this ranking of general intelligence was to rank human intelligence for selective breeding, et cetera.
So you've got this drive for artificial general intelligence. But when you actually work out what general intelligence is, I mean, Spearman developed this to support his colonial policies, et cetera,
trying to prove that perhaps other humans were less intelligent for various reasons. [01:55:00] So you've got this kind of hierarchical drive within artificial intelligence for basically a superhuman, or an intelligence which is beyond human in that kind of way. And just to link it back to the cosmist kind of ideas.
You see that the idea of colonizing the solar system, or spreading intelligence to the solar system, is a core concept within AI development circles. I mean, it's also the reason why tech billionaires are building their own spaceships. If you think of SpaceX and Blue Origin, they're all influenced by these imaginaries. And I'm sure there are probably a lot of people saying I'm exaggerating at this point, but I just want to give you a couple of quotes. This is from Jürgen Schmidhuber, who developed the natural language [01:56:00] model which is used in Apple's Siri and Amazon's Alexa. This is his understanding of what he's doing. He says: I'm not a very human-centric person. I think I'm a little stepping stone in the evolution of the universe towards a higher complexity. It is clear to me that I am not the crown of creation, and that humankind as a whole is not the crown of creation.
But we are setting the stage for something bigger than us, that transcends us, and will go out there in a way where humans cannot follow and transform the whole universe, or at least the regional universe. So I find the beauty and awe in seeing myself as a part of this much grander theme.
How will AI change the world? - TED-Ed - Air Date 12-6-22
STUART RUSSELL: There's a big difference between asking a human to do something and giving that as the objective to an AI system. When you ask a human to fetch you a cup of coffee, you don't mean this should be their life's mission [01:57:00] and nothing else in the universe matters. Even if they have to kill everybody else in Starbucks to get you the coffee before it closes, they should do that.
No, that's not what you mean. You mean all the other things that we mutually care about, they should factor into your behavior as well. And the problem with the way we build AI systems now is we give them a fixed objective, right? Algorithms require us to specify everything in the objective. And if you say, you know, can we fix the acidification of the oceans?
Yeah, you could have a catalytic reaction that does that extremely efficiently, but, you know, consumes a quarter of the oxygen in the atmosphere, which would apparently cause us to die fairly slowly and unpleasantly over the course of several hours. So how do we avoid this problem, right? You might say, okay, well, just be more careful about specifying the objective, right? Don't forget the atmospheric oxygen. And then of course, some side effect of the reaction in the ocean poisons all the fish. Okay, well, I meant don't kill the fish either. And then, well, what about the seaweed? Okay, don't do anything that's going to cause all the seaweed to die, [01:58:00] and on and on and on.
Right. And the reason that we don't have to do that with humans is that humans often know that they don't know all the things that we care about. If you ask a human to get you a cup of coffee, and you happen to be in the Hotel George V in Paris, where the coffee is, I think, 13 euros a cup, it's entirely reasonable to come back and say, well, it's 13 euros, are you sure you want it, or I could go next door and get one.
And it's a perfectly normal thing for a person to do, right? To ask, you know, I'm gonna repaint your house, is it okay if I take off the drain pipes and then put them back? We don't think of this as a terribly sophisticated capability, but AI systems don't have it because the way we build them now, they have to know the full objective.
If we build systems that know that they don't know what the objective is, then they start to exhibit these behaviors, like asking permission before getting rid of all the oxygen in the atmosphere. In all these senses, control over the AI system comes from the machine's [01:59:00] uncertainty about what the true objective is.
It's when you build machines that believe with certainty that they have the objective that you get a sort of psychopathic behavior, and I think we see the same thing in humans. What happens when general-purpose AI hits the real economy? How do things change? Can we adapt? This is a very old point.
Amazingly, Aristotle actually has a passage where he says, look, if we had fully automated weaving machines and plectrums that could pluck the lyre and produce music without any humans, then we wouldn't need any workers. That idea, which I think Keynes called technological unemployment in 1930, is very obvious to people, right?
They think, yeah, of course, if the machine does the work, then I'm going to be unemployed. If you think about the warehouses that companies are currently operating for e-commerce, they are half automated. The way it works is that instead of the old warehouses, where you've got tons of stuff piled up all over the place and the humans go [02:00:00] and rummage around and then bring it back and send it off, there's a robot who goes and gets the shelving unit that contains the thing that you need, but the human has to pick the object up out of the bin or off the shelf, because that's still too difficult. But, you know, at the same time, if you make a robot that is accurate enough to be able to pick pretty much any object, and there's a very wide variety of objects that you can buy, that would, at a stroke, eliminate three or four million jobs.
There's an interesting story that E. M. Forster wrote where everyone is entirely machine dependent. The story is really about the fact that if you hand over the management of your civilization to machines, you then lose the incentive to understand it yourself or to teach the next generation how to understand it.
And you can see WALL-E actually as a modern version, where everyone is enfeebled and infantilized by the machine, and that hasn't been possible up to now, right? We put a lot of our civilization into books, but the books [02:01:00] can't run it for us. And so we always have to teach the next generation. If you work it out, it's about a trillion person-years of teaching and learning and an unbroken chain that goes back tens of thousands of generations.
What happens if that chain breaks? And I think that's something we have to understand as AI moves forward. The actual date of arrival of general-purpose AI, you're not going to be able to pinpoint it, right? It isn't a single day. It's also not the case that it's all or nothing. The impact is going to be increasing, so with every advance in AI, it significantly expands the range of tasks it can do.
So, in that sense, I think most experts say by the end of the century, we're very, very likely to have general-purpose AI. The median is something around 2045. I'm a little more on the conservative side. I think the problem is harder than we think. I like what John McCarthy, who was sort of one of the founders of AI, said when he was asked this question: well, somewhere between five and 500 years, and we're going to need, I think, several Einsteins to [02:02:00] make it happen.
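Russell's point above, that a machine which knows it does not know the true objective will defer to a human rather than blindly optimize, can be illustrated with a small toy sketch. To be clear, this is only an illustration: the class names, numbers, and decision rule below are assumptions made up for this sketch, not Russell's formulation or any real system's behavior.

```python
# A toy sketch of "objective uncertainty": an agent that is certain of its
# objective just optimizes it, while an agent that knows it is uncertain about
# the true objective asks a human before taking a risky action.
# Everything here (names, numbers, the decision rule) is an illustrative
# assumption, not a real system's API.

from dataclasses import dataclass
from typing import List


@dataclass
class ObjectiveHypothesis:
    """One guess about what the human actually cares about."""
    name: str
    probability: float     # the agent's belief that this guess is the true objective
    action_utility: float  # how good the proposed action is under this guess


def decide(hypotheses: List[ObjectiveHypothesis], plausibility: float = 0.05) -> str:
    """Return 'act', 'ask', or 'abstain' for a single proposed action."""
    plausible = [h for h in hypotheses if h.probability >= plausibility]
    if all(h.action_utility >= 0 for h in plausible):
        return "act"      # every plausible objective approves, so acting is safe
    if any(h.action_utility >= 0 for h in plausible):
        return "ask"      # plausible objectives disagree, so defer to the human
    return "abstain"      # every plausible objective disapproves


# The ocean-acidification example: great under the stated objective,
# catastrophic under a plausible unstated one, so the agent asks first.
plan = [
    ObjectiveHypothesis("reduce ocean acidity", probability=0.7, action_utility=10.0),
    ObjectiveHypothesis("keep the atmosphere breathable", probability=0.3, action_utility=-100.0),
]
print(decide(plan))  # -> "ask"
```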
AI and the future of humanity | Yuval Noah Harari at the Frontiers Forum - Yuval Noah Harari - Air Date 5-14-23
YUVAL NOAH HARARI: I guess everybody here is already aware of some of the most fundamental abilities of the new AI tools--abilities like writing text, drawing images, composing music and writing code. But there are many additional capabilities that are emerging, like deep faking people's voices and images, like drafting bills, finding weaknesses both in computer code and also in legal contracts, and in legal agreements. But perhaps most importantly, the new AI tools are gaining the ability to develop deep and intimate relationships with human beings.
Each of these abilities deserves an entire discussion. And it is difficult for us to understand their full [02:03:00] implications. So, let's make it simple. When we take all of these abilities together as a package, they boil down to one very, very big thing: the ability to manipulate and to generate language, whether with words, or images, or sounds.
The most important aspect of the current phase of the ongoing AI revolution is that AI is gaining mastery of language at a level that surpasses the average human ability. And by gaining mastery of language, AI is seizing the master key, unlocking the doors of all our institutions, from banks to temples. Because language is the tool that we use [02:04:00] to give instructions to our bank and also to inspire heavenly visions in our minds. Another way to think of it is that AI has just hacked the operating system of human civilization.
The operating system of every human culture in history has always been language. In the beginning was the word. We use language to create mythology and laws, to create gods and money, to create art and science, to create friendships and nations.
For example, human rights are not a biological reality. They are not inscribed in our DNA. Human rights is something that we created with language by telling stories and writing laws.
Gods are also [02:05:00] not a biological or physical reality. Gods, too, is something that we humans have created with language by telling legends and writing scriptures.
Money is not a biological or physical reality. Banknotes are just worthless pieces of paper, and at present more than 90 percent of the money in the world is not even banknotes; it's just electronic information in computers passing from here to there. What gives money of any kind value is only the stories that people like bankers and finance ministers and cryptocurrency gurus tell us about money. Sam Bankman-Fried, Elizabeth Holmes, and Bernie Madoff didn't create much of real value, but, unfortunately, they were all [02:06:00] extremely capable storytellers.
Now, what would it mean for human beings to live in a world where perhaps most of the stories, melodies, images, laws, policies, and tools are shaped by a non-human, alien intelligence, which knows how to exploit, with superhuman efficiency, the weaknesses, biases, and addictions of the human mind, and also knows how to form deep and even intimate relationships with human beings?
That's the big question. Already today, in games like chess, no human can hope to beat a computer. What if the same thing happens in art, in politics, economics, and even in religion? When people think about [02:07:00] ChatGPT and the other new AI tools, they are often drawn to examples like kids using ChatGPT to write their school essays. What will happen to the school system when kids write essays with ChatGPT? Horrible.
But this kind of question misses the big picture. Forget about the school essays. Instead, think, for example, about the next US presidential race in 2024, and try to imagine the impact of the new AI tools that can mass produce political manifestos, fake news stories, and even holy scriptures for new cults.
In recent years, the politically influential QAnon cult has formed around anonymous online texts known as Qdrops. Now, followers of this cult, who now number in the millions in the US and the rest [02:08:00] of the world, collected, revered, and interpreted these Qdrops as some kind of new scripture, as a sacred text.
Now, to the best of our knowledge, all previous Qdrops were composed by human beings, and bots only helped to disseminate these texts online. But in the future, we might see the first cults and religions in history whose revered texts were written by a nonhuman intelligence. And of course, religions throughout history claimed that their holy books were written by a nonhuman intelligence. This was never true before. This could become true very, very quickly, with far-reaching consequences.
Now, on a more prosaic level, we might soon find ourselves conducting lengthy online discussions [02:09:00] about abortion, or about climate change, or about the Russian invasion of Ukraine, with entities that we think are fellow human beings, but are actually AI bots.
Now the catch is that it's utterly useless--it's pointless--for us to waste our time trying to convince an AI bot to change its political views. But the longer we spend talking with the bot, the better it gets to know us and understand how to hone its messages in order to shift our political views or our economic views or anything else.
Through its mastery of language, AI, as I said, could also form intimate relationships with people and use the power of intimacy to influence our opinions and worldview.
Now, there is no indication that AI has, [02:10:00] as I said, any consciousness, any feelings of its own. But in order to create fake intimacy with human beings, AI doesn't need feelings of its own; it only needs to be able to inspire feelings in us, to get us to be attached to it.
Now, in June 2022, there was a famous incident when the Google engineer Blake Lemoine publicly claimed that the AI chatbot LaMDA, on which he was working, had become sentient. This very controversial claim cost him his job. He was fired. Now, the most interesting thing about this episode wasn't Lemoine's claim, which was most probably false. The really interesting thing was his willingness to risk and ultimately lose his very lucrative job for the sake of the AI chatbot that he thought he [02:11:00] was protecting. If AI can influence people to risk and lose their jobs, what else can it induce us to do?
SECTION C: REGULATING AI
JAY TOMLINSON - HOST, BEST OF THE LEFT: And finally section C: regulating AI.
Current, former OpenAI employees warn company not doing enough to control dangers of AI - PBS NewsHour - Air Date 6-5-24
GEOFF BENNETT - HOST, PBS NEWSHOUR: So tell us more about who is behind this open letter and what specifically they're asking for.
BOBBY ALLYN: Yes, it's a number of current and former OpenAI employees.
I actually spoke to one of them just today. And what they're saying is really loud and clear: they think OpenAI is too aggressively pursuing profits and market share, and that it is not focused on responsibly developing A.I. products.
And, remember, this is really important, Geoff, because OpenAI started as a nonprofit research lab. Its aim when it was founded was to develop A.I. products differently than, say, Meta or Microsoft or Amazon, which are these huge publicly traded companies that are competing with one [02:12:00] another, right?
OpenAI was supposed to be a nonprofit answer to big tech. And these employees say, look, it looks like you're operating just like big tech. You're pushing out products too quickly and society just isn't ready for them.
GEOFF BENNETT - HOST, PBS NEWSHOUR: The letter lays out a number of risks and warnings, including — quote — "the loss of control of autonomous A.I. systems, potentially resulting in human extinction."
Human extinction. What do these folks know that we don't?
[Laughter]
And how seriously should we take this concern?
BOBBY ALLYN: It sounds pretty dire, doesn't it?
And it goes back to this kind of nerdy phrase that A.I. researchers like citing known as P[doom], P meaning what's the probability and doom being — well, we know what doom means. And they like bringing this up because the theory is, if A.I. gets really smart, if it becomes super intelligent and can exceed the skills and brainpower of humanity, maybe one day it will turn on us.
Now, again, this is kind of a theoretical academic exercise at this point, that these sort of [02:13:00] killer robots would be marching around cities and at war with humanity. I don't think we're anywhere near that. But they are underscoring this, because, look, that's sort of a hypothetical risk.
But we're seeing real risks play out every single day, whether it's the rise of deepfakes, whether it's A.I. being used to impersonate people, whether it's A.I. being used to supercharge dangerous misinformation around the Web. There are real risks that, according to these former employees, OpenAI doesn't care enough about and isn't doing much to mitigate.
GEOFF BENNETT - HOST, PBS NEWSHOUR: Well, in other OpenAI news, the media world seems to be split over whether to partner with the company.
The company recently announced paid deals with the Associated Press, "The Atlantic," Vox Media, which allows them to gain access to these media outlets' content to help train their A.I. models. Meantime, you got The New York Times suing OpenAI over copyright infringement.
How do you see this all shaking out and what are the arguments on both sides of this debate over whether [02:14:00] to actually work with OpenAI?
BOBBY ALLYN: Yes, OpenAI has publishers by the scruff of their neck.
OpenAI systems were trained on the corpus of the entire Internet, and that includes every large broadcaster and newspaper you can think of. And there, as you mentioned, are two camps emerging now. In the one camp are the publishers who say, you know what, let's strike licensing deals, let's try to bring some revenue in, let's play nice with OpenAI, because we have no choice. This is the future. OpenAI is going ruthlessly towards this direction. Let's try to make some money here.
And then you have newspapers like The New York Times who are in the other camp and have chosen the other direction, which is: no, no, no, OpenAI. You took all of our articles without consent, without payment. Now you're making lots of money off of the knowledge and reporting and original work that goes into, say, a New York Times article. We don't want to strike a licensing deal with you. In fact, your systems are based on material that was stolen [02:15:00] from us, so you owe us a lot of money and we do not want to play nice.
So, the way it's really going to shake out, I think, is, you know, some publishers are striking these deals. Others will join The New York Times' crusade to go after OpenAI. But it's a really, really interesting time, because, no matter what, they have this material, right, Geoff?
I mean, ChatGPT, every time you ask it a question, it is spitting out answers that are based in part on New York Times' articles, Associated Press articles, NPR articles, you name it. So that's just the future. So the question is, do you strike a deal or do you take them to court? And we're just seeing different sort of strategies here.
Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn - Your Undivided Attention - Air Date 6-7-24
TRISTAN HARRIS - HOST, YOUR UNDIVIDED ATTENTION: To try to ground this for listeners: you're taking a big risk here with your colleagues at OpenAI, and you're coming out and saying, we need a right to whistleblow about important things that could be going wrong here.
So far what you've shared is sort of more of a technical description of the box and how do we interpret the neurons in the box and what they're doing. Why does this matter for safety? What's at stake if we don't get this right?
WILLIAM SAUNDERS: I think, you know, suppose we've taken this box and it [02:16:00] does the task, and then let's say we want to take every company in the world and integrate this box into every one of them, where this box can be used to answer customer queries or process information.
And let's suppose the box is very good at giving advice to people. So now maybe CEOs and politicians are getting advice, and then maybe, as things progress into the future, maybe this box is generally regarded as being as smart or smarter than most humans and able to do most jobs better than most humans.
And so now we've got this box that nobody knows exactly how it works, and nobody knows sort of how it might behave in novel circumstances. And there are some specific circumstances where, like, the box might do something that's different and possibly malicious. And again, this box is as smart or smarter than humans.
It's right in OpenAI's charter that this is what OpenAI and other companies are aiming for, [02:17:00] right? And so, you know, maybe the world rewards AIs that try to gather more power for themselves. If you give an AI a bunch of money and it goes out and makes more money, then you give it even more money and power and you make more copies of this AI, and this might reward AI systems that really care more about getting as much money and power in the world as they can, without any sense of ethics or what is right or wrong.
And so then, suppose you have a bunch of these questionably ethical AI boxes integrated deeply into your society, advising politicians and CEOs. This is the kind of world where you could imagine, gradually or suddenly, you wake up one day and humans are no longer really in control of society. And, you know, maybe they can run subtle mass persuasion to convince people to vote the way they want.
And so, it's very unclear how rapidly this kind of transition would happen. I think there's a broad range of possibilities. But some of these are [02:18:00] on timescales where it would be very hard for people to realize what's going on. This is the kind of scenario that keeps me up at night, that has driven my research.
You want some way to learn if the AI system is giving you bad information. But, we are already in this world today.
AZA RASKIN - HOST, YOUR UNDIVIDED ATTENTION: I think what we've established is a couple of things. One is that, William, you're right there at the frontier of the techniques for understanding how AI models work and how to make them safe. And what I'm hearing you say is there are sort of two major kinds of risks, although you said there are even more.
One of them is that if AI systems are more effective at doing certain kinds of decision-making than us, then obviously people are going to use them and replace human beings in the decision-making. If an AI can write an email that's more effective at getting sales or getting responses than I am, then obviously I'm sort of a sucker if I don't use [02:19:00] the AI to help me write that email. And then if we don't understand how they work, something might happen, and now we've integrated them everywhere, and that's really scary. That's sort of risk number one. And then risk number two is that we don't know their capabilities. I remember, you know, GPT-3 was shipped to at least tens of millions of people before anyone realized that it could do research-grade chemistry, or that GPT-4 had been shipped to 100 million people before people realized it actually did pretty well at theory of mind, that is, being able to strategically model what somebody else's mind is thinking and change its behavior accordingly.
And those are the kinds of behaviors we'd really like to know about before it gets shipped, and that's in part what interpretability is all about: making sure that there aren't hidden capabilities underneath the hood. And it actually leads me to sort of a very personal question for you, which is, if you've been thinking about all of this stuff, why did you want to work at OpenAI in the first place?
WILLIAM SAUNDERS: So, one point to clarify: interpretability is certainly not the only way to do this, and there's a lot of other [02:20:00] research into trying to figure out what the dangerous capabilities are, and even trying to predict them. But it is still in a place where nobody, including people at OpenAI, knows what the next frontier model will be capable of doing when they start out training it, or even when they have it.
But yeah, the reasoning for working at OpenAI came down to: I wanted to do the most useful, cutting-edge research. And both of the research projects that I talked about were using the current state of the art within OpenAI. The way that the world is set up, there's a lot more friction and difficulty if you're outside of one of these companies.
So if you're in a more independent organization, you have to wait until a model is released into the world before you can work on it. You have to access it through an API. And there's only a limited set of things that you can do. And so, the best place to be is within one of these AI labs.
And that comes with some strings attached. [02:21:00] What kinds of strings? Well, while you're working at a lab, you have to worry about whether, if you communicate something publicly, it will be something that someone at the company will be unhappy with. In the back of your mind, it is always a possibility that you could be fired.
And then also, there's a bunch of subtle social pressure. You don't want to annoy your co-workers, the people you have to see every day. You don't want to criticize the work that they're doing. Again, the work is usually good, but the decision to ship, the decision to say, we've done enough work, we're prepared to put this out into the world, I think is a very tricky decision.
Credits
JAY TOMLINSON - HOST, BEST OF THE LEFT: That's going to be it for today. As always, keep the comments coming in. I would love to hear your thoughts or questions about today's topic or anything else. You can leave a voicemail or send us a text at 202-999-3991, or simply email me at [email protected]. The additional section of the show included clips from The [02:22:00] Majority Report, Democracy Now!, Jimmy McGee, Planet Critical, TED-Ed, Yuval Noah Harari, the PBS NewsHour, and Your Undivided Attention. Further details are in the show notes.
Thanks to everyone for listening. Thanks to Deon Clark and Erin Clayton for their research work for the show and participation in our bonus episodes. Thanks to our Transcriptionist Quartet, Ken, Brian, Ben, and Andrew, for their volunteer work helping put our transcripts together. And thanks to Amanda Hoffman for all of her work behind the scenes and her bonus show co-hosting. And thanks to those who already support the show by becoming a member or purchasing gift memberships. You can join them by signing up today at bestoftheleft.com/support, through our Patreon page, or from right inside the Apple podcast app. Membership is how you get instant access to our incredibly good and often funny weekly bonus episodes, in addition to there being no ads and chapter markers in all of our regular episodes, all through your regular podcast player. You'll find that link in the [02:23:00] show notes, along with a link to join our Discord community, where you can also continue the discussion.
So, coming to you from far outside the conventional wisdom of Washington DC, my name is Jay, and this has been the Best of the Left podcast, coming to you twice weekly, thanks entirely to the members and donors to the show, from bestoftheleft.com.