Air Date 5/28/2024
JAY TOMLINSON - HOST, BEST OF THE LEFT: [00:00:00] Welcome to this episode of the award-winning Best of the Left podcast. Now, everyone knows the rule about not meeting your heroes because they'll so often disappoint you. Well, today, we look at the most disappointing yet most idolized false heroes of our day: the titans of tech and the zany hijinks they've been getting up to recently. Sources providing our top takes today include Jacob Ward, More Perfect Union, Decoder, Internet Today, Today, Explained, ColdFusion, There Are No Girls on the Internet, and Zoe Bee. Then in the additional sections half of the show, we'll dive deeper into "That word you keep using", in which tech bros misunderstand the world; "SiliCON Valley, Emphasis on the CON"; "Microsoft the Destroyer"; and "Thank You for Your Service", in which people doing good work get fired.
When a tech company like OpenAI doesn’t get the dark message at the heart of science fiction - Jacob Ward - Air Date 5-21-24
JACOB WARD - HOST, JACOB WARD: A couple of quick thoughts on Scarlett Johansson and her threatening legal action against OpenAI. For anyone who doesn't know, [00:01:00] she says that she was approached back in September by Sam Altman saying, Will you please voice our new chatbot? And she said, No, thank you. And then two days before the demo, she says she was approached again, directly by him, and asked to reconsider. And before she had a chance to respond, they went ahead and debuted this new voice that sounds eerily like her, right? And it is a reference of course, to the 2013 film Her by Spike Jonze and, you know, Sam Altman even says "Her" in a tweet that he put out around the time of the demo.
So, a couple of things. First, the utter railroading of normal inputs, you know, when you don't get permission for a thing, you do it anyway, is a classic... it's a hallmark of tech folks who think about democratic inputs only up to the point that they get in their way. That's been my experience, but two—and this is the big one I want to [00:02:00] talk about—is the lack of imagination and, in some ways, lack of understanding of the point of the art you are copying, in this case. So, the 2013 film Spike Jonze directed is about a utopian, technologically harmonious landscape. People living in New York and Shanghai in this new tech world that seems quite nice. But the commentary at the heart of the film, this is what Spike Jonze says, is that it's about how human beings could not connect with each other even under those kinds of circumstances, or maybe even especially under those kinds of circumstances. Like, it's supposed to be not an embracing of that kind of technology, but using the technology to reveal something really broken in us, right?
And so the co-opting of "Her", of the character from Her, is like, it reminds me of sales managers using Alec Baldwin's [00:03:00] incredibly traumatizing speech in Glengarry Glen Ross: "What's my name? My name is fuck you". You know, uh, "I wear a Rolex on my watch and you'll be taking the bus home". Sales managers use that to like exhort their employees to do a better job. I've heard story after story of people getting that, being shown that video as part of a sales training. When in fact, that movie, David Mamet's script is all about critiquing the horror and the emptiness of that life and how having a terrible boss in a sales environment is the worst kind of sort of capitalist doom, right?
And so this feels so similar to me that you would use this character, who's supposed to typify the emptiness at the heart of humanity, that tech is trying and failing to fill, that you would use that for your product is just like...[sigh].
So, anyway. It's not going to hurt their business prospects, right? They're still making a [00:04:00] tremendous amount of money and they're going to make a tremendous amount of money and they're going to plow forward. I am interested to see the way that this changes the public relations strategy of OpenAI and their reputation in the world. Because, when this beloved actress goes hard at them, I think that's going to change their perception a little bit. But just the lack of imagination really galls me.
How Peter Thiel Got Rich | The Class Room ft. Second Thought - More Perfect Union - Air Date 11-10-22
JT CHAPMAN - HOST, MORE PERFECT UNION: Thiel's good grades and familial wealth earned him a spot at Stanford University, for undergrad and law school, where he got into his first venture, an alternative student newspaper aimed at conservatives, the Stanford Review.
The publication was Thiel's response to what he perceived as a takeover by the "politically correct." It seemed designed to offend, calling the school's sexual assault regulations too strict, excoriating diversity initiatives, and attacking anything that questioned Western culture.
The Review was the beginning of Thiel's later network. Many of the students who wrote for or staffed [00:05:00] the far-right newspaper would end up as Thiel's future business partners.
Stanford is also where Thiel was introduced to philosopher and professor René Girard, who influenced Thiel's worldview. Girard wrote about how humans intently imitate each other, and how that holds society back. He specifically pointed to humans' competitive nature holding back scientific and technological progress. Thiel really connected with this viewpoint, and it fueled his belief that monopolies are actually a good thing.
PETER THIEL: If you're a startup, you want to get to a monopoly. You're starting a new company, you want to get to a monopoly.
JT CHAPMAN - HOST, MORE PERFECT UNION: Before graduating, Thiel wrote one last op-ed for The Review, where he said that the PC alternative to greed is not personal fulfillment or happiness, but anger at and envy of people who are doing something more worthwhile. So, what was more worthwhile to Thiel? The money business. Peter joined up with a few young engineers building a new way to send payment digitally--pretty revolutionary in the late 90s. The company started as Confinity, a play on infinite confidence. It briefly became [00:06:00] X.com, as partner Elon Musk insisted, but eventually became PayPal. Staffing up, Thiel recruited some of his friends from the Stanford Review. The anarcho-capitalist views of that contingent were essential in the founding of PayPal, he explained at Libertopia 2010.
PETER THIEL: The initial founding vision was that we were going to use technology to change the whole world and basically overturn the monetary system of the world. We could never win an election on getting certain things because we were in such a small minority. But maybe you could actually unilaterally change the world without having to constantly convince people and beg people and plead with people who are never gonna agree with you, through technological means.
And this is where I think technology is this incredible alternative to politics.
JT CHAPMAN - HOST, MORE PERFECT UNION: You might think of PayPal today as a harmless mechanism for buying vintage movie posters on eBay. But the real goal was to completely destroy the global order of currency.
PETER THIEL: Well, we need to take over the world. We can't slow down now.
JT CHAPMAN - HOST, MORE PERFECT UNION: In a PayPal All Hands meeting in 2001, [00:07:00] Thiel told staff, "the ability to move money fluidly and the erosion of the nation-state are closely related," as they were building a system to move money fluidly. But just a few months later, Thiel took the money and ran. PayPal went public with an IPO.
PETER THIEL: We were the first company in the US to file after 9/11.
JT CHAPMAN - HOST, MORE PERFECT UNION: Shortly after, PayPal sold to eBay. Thiel's 3.7 percent stake in the company was worth $57 million. What happens when you give a guy who wants to remake the world into one that follows his own twisted political vision $57 million? Well, it's not great. Look at his investment in Patri Friedman, a young Google engineer, pickup artist blogger, amateur model, and grandson of Milton Friedman. Which, don't get us started on Milton Friedman. But Patri had a big idea: build artificial islands at sea to house lawless libertarian utopias. Peter Thiel got wind of this and offered Friedman $500,000 to quit his job at Google and get started on the project. Thiel [00:08:00] truly saw starting new nations as the same as starting companies.
Really, he said it.
PETER THIEL: Just like there's room for starting new companies, because not all existing companies solve all the problems we need to solve, I think there is also, there should also be some room for trying to start new countries, new governments.
JT CHAPMAN - HOST, MORE PERFECT UNION: But starting countries is difficult.
AUDIENCE MEMBER: What if you start over in a new country, some African country with a few billion dollars and build up, build it from the ground up?
PETER THIEL: We've looked at this, we've looked at all these possibilities. I think the basic challenges are that it's not that easy to get the country. You might have, it's, you might not want to be stuck with the people you already have. And then, actually, the basic infrastructure may actually cost quite a bit more. You want to do something that works much more incrementally and organically.
JT CHAPMAN - HOST, MORE PERFECT UNION: Friedman eventually left the Seasteading Institute and Thiel's involvement seemed over. But let's look at the last part of that quote: "do something that works much more incrementally and organically."
After giving up on starting a brand new country, Thiel set [00:09:00] about refashioning the country he already lived in. This is how Peter Thiel used the venture capital mindset to seize political power. Presumably to the chagrin of Thiel's friends at Libertopia, he immediately got involved with the CIA. His next company was Palantir, a surveillance and data tech outfit. And seed funding came from In-Q-Tel, a nonprofit venture capital firm dedicated to funding projects that would be helpful to the CIA.
The firm isn't officially run by the CIA, but there is a revolving door of staff between the two. And the firm is colloquially referred to as the CIA's private equity firm. Palantir eventually did help the CIA, and the FBI, and the CDC. And a host of other governmental organizations that would have gotten Thiel booed right out of Libertopia.
But to Thiel, it didn't matter that he didn't live in some anarcho-capitalist utopia, because he was building his own using his enormous wealth. Thiel exploited systems within the existing libertarian-but-only-for-billionaires system, like his tax trick. [00:10:00] ProPublica revealed in 2021 that much of Thiel's wealth is held in a Roth IRA, a type of tax-free investment fund meant for retirement. The amount you're allowed to contribute is capped at a few thousand dollars a year. But in 1999, Thiel turned two thousand dollars he had in his account into PayPal stock, an investment which paid off. When Thiel was the first large investor in Facebook, that half a million dollar angel investment, immortalized by this guy who looks nothing like Thiel in The Social Network, was just a restructuring of his tax-free retirement fund.
He can eventually withdraw the over $5 billion in the account tax-free. The average IRA has 0.00008% of that. Thiel's Libertopia friends have to pay high taxes, but Thiel won't on a large portion of his wealth.
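To put rough numbers on that tax trick, here is a minimal back-of-the-envelope sketch in Python. The $2,000 contribution and the roughly $5 billion balance come from the reporting described above; the 37% comparison tax rate is an assumed placeholder for illustration, not a claim about what anyone would actually owe.

# Back-of-the-envelope illustration of the Roth IRA mechanics described above.
# The $2,000 and ~$5 billion figures come from the ProPublica reporting cited
# in the clip; the 37% comparison rate is an assumed placeholder.

starting_contribution = 2_000          # 1999 Roth IRA contribution, per the reporting
ending_balance = 5_000_000_000         # approximate balance cited in the clip

growth_multiple = ending_balance / starting_contribution
print(f"Growth multiple: {growth_multiple:,.0f}x")

# Qualified Roth withdrawals are tax-free, so the full balance is kept.
roth_after_tax = ending_balance

# In an ordinary taxable account, the gain would be taxed on the way out.
assumed_tax_rate = 0.37                # placeholder rate, illustration only
taxable_after_tax = starting_contribution + (ending_balance - starting_contribution) * (1 - assumed_tax_rate)

print(f"Roth (tax-free) withdrawal:   ${roth_after_tax:,.0f}")
print(f"Same gain, taxable account:   ${taxable_after_tax:,.0f}")
print(f"Tax avoided (assumed rate):   ${roth_after_tax - taxable_after_tax:,.0f}")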
Then there's litigation financing. The ultra rich can actually gamble on court cases. They fund legal fees for a lawsuit, then take a percentage of the winnings if they pick the right side. It's completely legal. [00:11:00] And Thiel used it to silence free speech. After Gawker, an online news and blogging outlet, outed Thiel as gay, he set his sights on destroying them. When Gawker posted a shadily-acquired sex tape of wrestler Hulk Hogan, Thiel bankrolled Hogan's lawsuit against the publication. Gawker was bankrupted, and Thiel made a profit.
Thiel uses his inordinate wealth and investment principles to get richer, to destroy the free speech of others, and to live in his own libertarian paradise.
Another big investment area is in ideas, pretty chilling ones. Let's look at the Dark Enlightenment Movement, which Quartz calls "an obscure neofascist philosophy" and media researcher David Golumbia calls "the worship of corporate power to the extent that corporate power becomes the only power in the world." One of the movement's loudest voices is blogger Curtis Yarvin. Thiel has invested heavily in Yarvin startups, basically funding a big portion of the Dark Enlightenment movement. And it's obvious the movement mirrors Thiel's beliefs: complete corporate [00:12:00] control.
Google's Sundar Pichai on AI-powered search and the future of the web - Decoder with Nilay Patel - Air Date 5-20-24
NILAY PATEL - HOST, DECODER: Can I put this into practice by showing you a search? I actually just did this search. It is the search for best Chromebook. As you know, I once bought my mother a Chromebook Pixel. It's one of my favorite tech purchases of all time. So this is a search for best Chromebook. I'm going to hit generate at the top. It's going to generate the answer. And then I'm going to do something terrifying, which is I'm going to hand my phone to the CEO of Google. This is my personal phone.
SUNDAR PICHAI: Yeah.
NILAY PATEL - HOST, DECODER: Don't dig through it.
So you look at that and it's the same generation that I have seen earlier. I asked it for best Chromebook and it says, here's some stuff you might think of. And then you scroll and it's some Chromebooks, it doesn't say whether they're the best Chromebooks. And then it's a bunch of headlines. Some of it's from like Verge headlines. It was like, here's some best Chromebooks. That feels like the exact kind of thing that an AI-generated search could answer in a better way. Like, do you think that's a good experience today? Is that a waypoint or is that the destination?
SUNDAR PICHAI: I think, look, you're showing me a query in which we didn't automatically generate the AI.
NILAY PATEL - HOST, DECODER: Well, there was a button that said, do you want to do it?
SUNDAR PICHAI: But, let me push back, right? There's an important differentiation, right? There's a reason we are [00:13:00] giving a view without the generated AI overview. And as a user, you're initiating an action, right? So we are respecting the user intent there. And when I scroll it, I see Chromebooks. I also see a whole set of links, which I can go to, which tell me all the ways you can think about Chromebooks.
NILAY PATEL - HOST, DECODER: Yeah.
SUNDAR PICHAI: I see a lot of links. So we both didn't show an AI overview in this case. As a user, you're generating the follow up question.
I think it's right that we respect the user intent.
NILAY PATEL - HOST, DECODER: Yeah.
SUNDAR PICHAI: If you don't do that, right, people will go somewhere else too, right? I think so. I, you know, so.
NILAY PATEL - HOST, DECODER: I'm saying the answer to the question. I did not write, what is the best Chromebook? I just wrote best Chromebook. The answer, the thing that identifies itself as an answer, is not on that page.
And the leap from I had to push the button, to Google pushes the button for me and then says what it believes to be the answer, is very small. And I'm wondering if you think a page like that today is the destination of the search experience, or if this is a waypoint, and you can see a [00:14:00] future, better version of that experience?
SUNDAR PICHAI: I'll give you your phone back. I'm tempted to check email right now out of habit.
Look, I think the direction of how these things will go, it's tough to fully predict. You know, users keep evolving. It's a more dynamic moment than ever. We are testing all of this. This is a case where we didn't trigger the AI overview because we felt like our AI overview is not necessarily the first experience we want to provide for that query, because what's underlying is maybe a better first look for the user.
NILAY PATEL - HOST, DECODER: Yeah.
SUNDAR PICHAI: Right. And those are all quality trade offs we are making. But if the user is asking for a summary, we are summarizing and giving links. I think that seems like a reasonable direction to me.
NILAY PATEL - HOST, DECODER: I'll show you another one where it did expand automatically. This one, I only have screenshots for.
So this is Dave Lee from Bloomberg. He did a search. He got an AI overview, and he just searched for JetBlue Mint Lounge, SFO. And it just says the answer, which I think is fine. And that's the answer. If you swipe one over, I cannot believe I'm letting the CEO of Google swipe on my camera roll, but if you swipe one [00:15:00] over, you see where it pulled from. You see the site it pulled from. It is a word-for-word rewrite of that site. This is the thing I'm getting at, right? The AI-generated preview of that answer, if you just look at where it came from, it is almost the same sentence that exists on the source of it. And that to me, that's what I mean. At some point, the better experience is the AI preview, and it's just the thing that exists on all the sites underneath it. It's the same information.
SUNDAR PICHAI: Look, the thing with Search, we handle billions of queries, you can absolutely find a query and hand it to me and say, could we have done better on that query? Yes, for sure. But when I look across, in many cases, part of what is making people respond positively to AI overviews is the summary we are providing clearly adds value, helps them look at things they may not have otherwise thought about. If you aren't adding value at that level, I think people notice it over time. And I think that's a bar you're trying to meet. [00:16:00] And, our data would show over 25 years, if you aren't doing something which users find valuable or enjoyable, they let us know, right away. Over and over again, we see that. And through this transition, everything is the opposite. It's one of the biggest quality improvements we are driving in our product.
People are valuing this experience. So there's a general presumption that people don't know what they are doing, which I disagree with strongly. People who use Google are savvy. They understand. And I can give plenty of examples where I've used AI overviews as a user. Oh, this is giving context. Or, maybe there are dimensions I didn't even think of in my original query. How do I expand upon it and look at it? Yeah.
NILAY PATEL - HOST, DECODER: You've made oblique mention to OpenAI a few times, I think.
SUNDAR PICHAI: I actually haven't, I think--
NILAY PATEL - HOST, DECODER: you keep saying others, there's one other big competitor that is, I think a little more--
SUNDAR PICHAI: You're putting words in my mouth, but that's okay.
NILAY PATEL - HOST, DECODER: Yeah. Okay. Well, I would say, I saw OpenAI's [00:17:00] demo the other day of GPT-4o, Omni. It looked a lot like the demos you gave at I/O, this idea of multimodal search, the idea that you have this character you can talk to, you had Gems, which was the same kind of idea. It feels like there's a race to get to kind of the same outcome for a search-like experience or an agent-like experience. Do you feel the pressure from that competition?
SUNDAR PICHAI: Well, I mean, this is no different from Siri and Alexa and we worked in the industry, I think when you're working in the technology industry, I think there is relentless innovation. We felt a few years ago, all of us building voice assistants, you could have asked the same version of this question, right? And, what was Alexa trying to do and what was Siri trying to do? So I think it's a natural extension of that. I think you have a new technology now. And it's evolving rapidly. I felt like it was a good week for technology. There was a lot of innovation I felt on Monday and Tuesday and so on. That's how I feel.
And I think it's going to be that way for a while. I'd rather have it that way. You'd rather be in a [00:18:00] place where the underlying technology is evolving, which means you can radically improve your experiences which you're putting out. I'd rather have that anytime than a static phase in which you feel like you're not able to move forward fast.
I think a lot of us have had this vision for what a powerful assistant can be. But we were held back by the underlying technology not being able to serve that goal. I think we have a technology which is better able to serve that. That's why you're seeing the progress again.
So I think that's exciting. To me, I look at it and say we can actually make Google Assistant a whole lot better. You're seeing visions of that with Project Astra. It's incredibly magical to me when I use it. I'm very excited by it.
NILAY PATEL - HOST, DECODER: It just brings me back to the first question I asked, language versus intelligence. To make these products, I think you need a core level of intelligence. Do you have in your head a measure of this is when it's going to be good enough? Or I can trust this? On all of your demo slides and all of OpenAI's demo [00:19:00] slides, there's a disclaimer that says, check this info.
And to me, it's ready when you don't need that anymore. You didn't have "check this info" at the bottom of the 10 blue links. You don't have check this info at the bottom of featured snippets necessarily.
SUNDAR PICHAI: Right. You're getting at a deeper point where hallucination is still an unsolved problem, right? In some ways, it's an inherent feature. It's what makes these models very creative. It's why it can immediately write a poem about Thomas Jefferson in the style of Nilay. It can do that, right? It's incredibly creative.
But LLMs aren't necessarily the best approach to always get at factuality, which is part of why I feel excited about Search, because in Search, we are bringing LLMs in a way, but we are grounding it with all the work we do in Search and layering it with enough context, I think we can deliver a better experience from that perspective.
I think the reason you're seeing those disclaimers is because of the inherent nature, right? There are still times [00:20:00] it's going to get it wrong. But I don't think I would look at that and underestimate how useful it can be at the same time. I think that would be a wrong way to think about it.
Google Lens is a good example. When we did Google Lens first, when we put it out, it didn't recognize all objects well. But the curve year on year has been pretty dramatic, and users are using it more and more. We get billions of queries now. We've had billions of queries now with Google Lens. It's because the underlying image recognition, paired with our knowledge entity understanding has dramatically expanded over time.
So I would view it as a continuum. And I think, again, I go back to this saying, users vote with their feet, right? Fewer people used Lens in the first year. We also didn't put it everywhere. Because we realized the limitations of the product.
NILAY PATEL - HOST, DECODER: When you talk to the DeepMind Google brain team, is there on the roadmap a solution to the hallucination problem?
SUNDAR PICHAI: It's Google DeepMind, but are we making progress? Yes, we [00:21:00] are. We have definitely made progress, when we look at metrics on factuality year on year. So we're all making it better. But it's not solved.
Are there interesting ideas and approaches which they are working on? Yes. But time will tell. But I would view it as LLMs are an aspect of AI. We're working on AI in a much broader way. But it's an area where I think we're all working definitely to drive more progress.
Elon's Reputation is Hurting Tesla - TechNewsDay - Internet Today - Air Date 4-4-24
ELIOT, HOST, TECHNEWSDAY: A lot of the appeal that Tesla cars have had for a while among consumers is increasingly at odds with the fact that the man who owns Tesla is a total jackass. Ten years ago, Teslas were among the few all-electric vehicles available on the market, and Tesla CEO Elon Musk was a super genius who was going to save the world.
Fast forward to more recent years though, and there's a lot more options for electric cars out there, while Tesla hasn't had a major model redesign in about half a decade. Instead, apparently focusing all of its design [00:22:00] efforts on the stupidest car ever. Just the dumbest thing you've ever seen in your life.
RICKY, CO-HOST, TECHNEWSDAY: Saw another one on the road yesterday and gave it a very enthusiastic thumbs down out the window. And I saw him see my thumb. And I know it hurt. Because he's the one spending the money.
ELIOT, HOST, TECHNEWSDAY: Elon! People are giving me a thumbs down on my car!
RICKY, CO-HOST, TECHNEWSDAY: Can you somehow block them from the freeway, Elon?
ELIOT, HOST, TECHNEWSDAY: And yeah, meanwhile, Elon Musk's public persona, and the public perception of him, has steadily drifted from real-life Tony Stark to "what would happen if Howard Hughes and Henry Ford had a baby with all of their worst traits and also a crippling addiction to social media and ketamine?"
RICKY, CO-HOST, TECHNEWSDAY: Now unlike most titans of industry who mostly avoid the spotlight, and for good reason, Elon Musk has gone out of his way to not only provide a clear look into his mind via social media, he's purchased a popular social media platform and reshaped it in his image. An image that a lot of people find incredibly off-putting. But is it bad for business? On the Twitter side, yes, [00:23:00] obviously. But what about Tesla? We've heard people online and in real life talk about Elon's dumb bullshit affecting their car shopping preferences for a while now. But now, we finally have the data. Here's Reuters just this week. "The ranks of would-be Tesla buyers in the United States are shrinking, according to a survey by market intelligence firm Caliber, which attributed the drop in part to CEO Elon Musk's polarizing persona. While Tesla continued to post strong sales growth last year, helped by aggressive price cuts, the electric vehicle maker is expected to report weak quarterly sales as early as Tuesday."
ELIOT, HOST, TECHNEWSDAY: And yeah, side note, so Reuters published this on Monday, and that quarterly sales prediction, it proved to be accurate.
RICKY, CO-HOST, TECHNEWSDAY: Uh oh!
ELIOT, HOST, TECHNEWSDAY: Here's the Washington Post on Tuesday. "The delivery numbers reported Tuesday come as Tesla faces soft demand for electric vehicles, high interest rates, a string of lawsuits against its technology, and controversy surrounding its chief executive, Elon Musk. Musk had warned during a January [00:24:00] earnings call that Tesla would experience a 'notably lower growth rate' this year as the company invests in a next generation vehicle it plans to start building in 2025. Tesla said it delivered 387,000 vehicles to customers in the first quarter, down 20 percent from the previous quarter and down more than 8 percent year-over-year. Ahead of Tuesday's report, Wall Street analysts generally expected Tesla to report 443,000 deliveries for the quarter, according to Wedbush Securities Analyst Dan Ives. Tesla shares fell 4.9 percent on Tuesday."
RICKY, CO-HOST, TECHNEWSDAY: So yeah, Musk said straight up, this would be a bad quarter. So Wall Street analysts tamped down expectations, and the numbers still somehow managed to be even worse than those lowered expectations.
Anyways, back to that Reuters article: "Caliber's consideration score for Tesla, provided exclusively to Reuters, fell to 31 percent in February, less than half of its high of 70 percent in November 2021, when it started tracking consumer interest in the brand. [00:25:00] Tesla's consideration score fell 8 percentage points from January alone, even as Caliber's scores for Mercedes, BMW, and Audi, which produce gas as well as EV models, inched up during that same period, reaching 44 to 47 percent. Caliber cited strong associations between Tesla's reputation and that of Musk for the scores.
"'It's very likely that Musk himself is contributing to the reputational downfall,' Caliber CEO Shahar Silberschatz told Reuters, saying his company's survey shows 83 percent of Americans connect Musk with Tesla."
That's what happens when you become the face of your big company, and--
ELIOT, HOST, TECHNEWSDAY: This is what happens when you refuse to take our patented advice,
RICKY, CO-HOST, TECHNEWSDAY: to simply shut the fuck up!
And no, he went and did the other thing. He opened the fuck up. He bought a social media platform and continued to post.
ELIOT, HOST, TECHNEWSDAY: Yeah. It is wild, like this was--
RICKY, CO-HOST, TECHNEWSDAY: And he said everyone has to read my posts, and MrBeast's posts. You have to. You will see that MrBeast video about being locked in a [00:26:00] grocery store 25 fucking times this week. I don't care who you are! Not MrBeast's fault. Elon Musk, clearly with his foot on the scale.
ELIOT, HOST, TECHNEWSDAY: Yeah, I mean, people associate Elon with Tesla, which at one point was a great, it was a great asset. Wow. Not only are these cars cool, but Elon's pretty cool too.
RICKY, CO-HOST, TECHNEWSDAY: Yeah. I can drive a car just like that guy.
ELIOT, HOST, TECHNEWSDAY: Yeah. He's going to make us all live on Mars. Yeah. And it's gonna be awesome, and he's gonna save the earth, and
RICKY, CO-HOST, TECHNEWSDAY: And he's building solar panels into roof tiles!
ELIOT, HOST, TECHNEWSDAY: Yeah, we're all gonna have roofs that look like roofs, but they're solar roofs. And he's gonna save those children trapped in that cave, with a giant, bullet-shaped, rigid submarine.
RICKY, CO-HOST, TECHNEWSDAY: Yep, and he's gonna dig a big tunnel.
ELIOT, HOST, TECHNEWSDAY: And we're all gonna be getting in tunnels, and getting around real fast. Bye bye traffic!
RICKY, CO-HOST, TECHNEWSDAY: But yes, as you can imagine, being that popular and the face of a company so large and so reliant on your image and marketing of it could end up being detrimental when you inevitably turn into an alt-right asshole.
Crypto’s crown prince in court - Today, Explained - Air Date 10-3-23
SEAN RAMESWARAM - HOST, TODAY, EXPLAINED: Where did Sam [00:27:00] Bankman-Fried fit into that world?
ZEKE FAUX: Sam was a schlubby guy. His uniform was cotton shorts, an FTX t-shirt, and then really messy, curly hair. He acted like he had no respect for the traditional institutions, whether that was Washington or the venture capital world or Wall Street, and yet all the people in these various worlds were obsessed with him and competing to hand him billions of dollars. You know, the U.S. Senate was inviting him to DC.
CORY BOOKER: So, Mr. Bankman-Fried, I'm going to interrupt you because I've only got 30 seconds left, and I'm offended that you have a much more glorious afro than I once had. Um, uh, so really quick...
ZEKE FAUX: So, he created this image that he was the guy who understood it all, kind of like the only honest guy in crypto, if you can believe it.
ARCHIVE NEWS CLIP: He worked at a Wall Street trading firm called Jane Street. And it's a very [00:28:00] successful trading firm and his pedigree and background at Jane Street is part of what helped him get to the level that he got to.
Well, what SBF did was he operated under this philosophy of effective altruism. It basically says you make money to give away money.
ZEKE FAUX: Sam made his first money in crypto with this one weird trick. Back in 2017, Bitcoin, on a Japanese Bitcoin exchange where Japanese people traded, cost $11,000. And in the United States you could buy one for $10,000. So, this is something that's unheard of in mainstream finance, but in theory, you could buy one Bitcoin on an American app, zap it over to a Japanese app, and make a thousand bucks right there. So, not only did he do it, but he immediately figured out how to borrow tens of millions of dollars to do as much of this as possible. And within a few [00:29:00] weeks, he had exploited this arbitrage to the tune of something like 20 million bucks in profit. This profit seeded his crypto trading hedge fund, which he named Alameda Research. Sam picked the name Alameda Research because it sounds innocuous. Banks at the time did not want to be involved in crypto.
SAM BANKMAN-FRIED: We just knew that was going to be a thing. And that if we named our company, like, Shitcoin Day Traders, Inc., like, they'd probably just reject us. But, I mean, no one doesn't like research.
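The arithmetic of that Japan arbitrage is simple enough to sketch. Below is a minimal illustration in Python: only the $10,000 US price and $11,000 Japanese price come from Zeke's description above; the capital amounts are made-up examples, and the real-world frictions that made this hard in practice (exchange fees, transfer delays, yen conversion, withdrawal limits) are ignored.

# Minimal sketch of the US-to-Japan Bitcoin arbitrage described above.
# Only the $10,000 and $11,000 prices come from the clip; the capital
# amounts are illustrative, and fees, transfer delays, currency conversion,
# and withdrawal limits are all ignored.

def arbitrage_profit(capital_usd: float,
                     buy_price: float = 10_000.0,   # US exchange price, per the clip
                     sell_price: float = 11_000.0   # Japanese exchange price, per the clip
                     ) -> float:
    """Gross profit from buying coins at buy_price and selling them at sell_price."""
    coins = capital_usd / buy_price
    return coins * (sell_price - buy_price)

# Each $10 million round trip nets roughly $1 million gross, so repeating the
# loop with borrowed money gets into the ~$20 million range the clip mentions.
for capital in (10_000, 1_000_000, 10_000_000):
    print(f"${capital:>12,}: gross profit ${arbitrage_profit(capital):>12,.0f}")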
ZEKE FAUX: Alameda Research was a hedge fund that traded all kinds of cryptocurrencies and, in theory, exploited, you know, cool arbitrages like this Japanese one. After a couple years of doing that, he, as he tells the story, realized that many of the crypto exchanges where Alameda did business were pretty subpar compared to the ones [00:30:00] that he was familiar with from his time on Wall Street. And that's when he decided to start FTX.
REPORTER: Why create an exchange when there were already so many big global players out there?
SAM BANKMAN-FRIED: Yeah, I mean, the basic answer is that we didn't think any of them had nailed it.
ZEKE FAUX: FTX, which was a crypto exchange, which basically just means it's an app where you can trade all these crypto coins similar to E*TRADE or Robinhood or something like that. His app wasn't even the most popular one, but so many people were trading crypto that venture capitalists had valued FTX at 32 billion dollars.
REPORTER 2: Today, your valuation is?
SAM BANKMAN-FRIED: It's, uh, 32 billion internationally and, uh, 8 in the U.S.
REPORTER 2: How old is your company?
SAM BANKMAN-FRIED: About two and a half years.
REPORTER 2: Two and a half years. Okay, let's talk about...
SEAN RAMESWARAM - HOST, TODAY, EXPLAINED: When do we start to see cracks?
ZEKE FAUX: FTX's downfall [00:31:00] began with a sarcastic tweet. One of Sam's lieutenants had written something nice on Twitter about Sam's biggest rival, "CZ", the head of Binance, which was the biggest crypto exchange. Under this nice post, Sam wrote sarcastically,
TWEET VOICEOVER: Excited to see him repping the industry in DC going forward. Uh, he is still allowed to go to DC, right?
ZEKE FAUX: The joke is that he's an international fugitive, which is not entirely untrue, but also not a very nice thing to joke about on Twitter. A couple weeks after this tweet an article came out in the crypto news site CoinDesk that was kind of confusing, but it revealed that Sam's hedge fund Alameda owned quite a lot of a token called FTT, which was essentially stock [00:32:00] in Sam's exchange FTX. Then, also on Twitter, Sam's rival "CZ" tweeted that he would be selling off his FTT tokens. He wrote,
TWEET VOICEOVER: We won't pretend to make love after divorce. We're not against anyone, but we won't support people who lobby against other industry players behind their backs.
ZEKE FAUX: I mean, this wouldn't necessarily seem like a big deal that, you know, a rival company is selling its stock in your company, but it kind of set off a run on FTX where other people who owned FTT tokens started to sell them too. And as the price went down, it made people start to worry about the stability of FTX, and investors who had sent money to FTX to use it to bet on other cryptocurrencies started taking their money out. In theory, this shouldn't be a problem. If people have sent money to [00:33:00] FTX to gamble with, then FTX should have no problem giving the money back. Sam went on Twitter and told people,
TWEET VOICEOVER: FTX is fine. Assets are fine.
ZEKE FAUX: But it turned out FTX did not have the money that it needed to repay clients and after more and more tried to ask for their money back, eventually it was revealed that FTX did not have this money. In fact, eight billion dollars had somehow disappeared and FTX had to file for bankruptcy.
SEAN RAMESWARAM - HOST, TODAY, EXPLAINED: Where was all that money?
ZEKE FAUX: It turned out that when you sent a thousand bucks to FTX to buy some "doggie coin", or dogecoin, and then you saw in the app, you know, that you now owned, you know, 2,000 dogecoins, in fact, what was really going on is that that thousand dollars that you had sent in was being lent to Sam's hedge fund, Alameda [00:34:00] Research, which was taking it to other exchanges to make all sorts of crazy bets.
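To make the mechanism Zeke describes a bit more concrete, here is a deliberately simplified toy ledger in Python. Every figure in it is invented for illustration; it is not a reconstruction of FTX's actual books, just a sketch of why an exchange that lends out customer deposits looks fine until withdrawals start.

# Toy illustration of the mechanism described above: an exchange that quietly
# lends customer deposits to an affiliated fund looks solvent until customers
# ask for their money back. All figures are invented, not FTX's real numbers.

customer_deposits = 1_000 * 1_000       # 1,000 customers deposit $1,000 each
shown_in_app = customer_deposits        # every customer still "sees" a full balance

lent_to_hedge_fund = 800_000            # quietly lent out for other bets (assumed)
cash_on_hand = customer_deposits - lent_to_hedge_fund

print(f"Owed to customers:  ${shown_in_app:,}")
print(f"Cash actually held: ${cash_on_hand:,}")

# A run: customers ask for 40% of their money back at once (assumed share).
withdrawal_requests = int(customer_deposits * 0.40)
shortfall = max(0, withdrawal_requests - cash_on_hand)
print(f"Withdrawals requested: ${withdrawal_requests:,}")
print(f"Shortfall: ${shortfall:,}")     # anything above zero means customers can't be repaid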
SEAN RAMESWARAM - HOST, TODAY, EXPLAINED: Is that legal, Zeke?
ZEKE FAUX: No. After FTX declared bankruptcy, I contacted Sam and said, I'd like to talk about what had happened and hear his side of it. So, I flew down to the Bahamas and we spent 11 hours, basically trying to answer that question.
His argument is that many of the people who traded on FTX were these hedge funds, like Alameda, and that part of the deal was that FTX would give them loans that were secured by assets. And since this is a crypto world, the security was not gold or real estate or something like that. It was random coins. So, his explanation was that the borrowing was permitted, the customers should have been [00:35:00] aware of it, and that he did not realize how out of hand it had gotten.
SAM BANKMAN-FRIED: Some part of it was just literal distraction. I really should have spent some time each day taking a step back and saying, What are the most important things here? Right? And, like, how do I have oversight of those and make sure that I'm not losing track of those? And frankly, I did a pretty incomplete job at that. I spent a lot...
ZEKE FAUX: The idea that he would just not count his money to the point that eight billion dollars could just go missing without him knowing, it just seemed really implausible to me.
SEAN RAMESWARAM - HOST, TODAY, EXPLAINED: It sounds like he was trying to tell you a story. What do you think the real explanation was in that moment?
ZEKE FAUX: The amazing thing is that Caroline Ellison, the CEO of Alameda, she had actually told her version of the story to all of the employees in a way that I found to be more credible. [00:36:00] In November 2022, while FTX was in this financial distress, there was a moment where it looked like Sam's rival "CZ" was actually going to bail out FTX and buy it. So, for a couple of days there, the people at FTX kind of thought they were in the clear. And during that time, Caroline Ellison called a meeting of all her employees at Alameda. And at this meeting, she essentially confessed. She said to all the employees, Hey, I'm really sorry, but Alameda has taken out all these loans from FTX and we invested it in illiquid, which means hard to sell, things. And like, that's why we're in this trouble. But good news is, you know, "CZ" is bailing us out and hopefully the customers can get all their money back from him. And the employees were just like floored and they said, Wait, who knew that Alameda [00:37:00] was borrowing the customer funds for its crazy crypto bets? And Caroline said, me, Sam, and then two other top lieutenants. And then one of the employees said, Well, who decided to do this? And she said, um, Sam, I guess.
The prosecutors who are now trying Sam have a recording of this meeting. So, this is Caroline who thinks that no one will ever find out and they're in the clear, in the moment, admitting to the crime. Which I think is, uh, pretty strong evidence.
The Entire OpenAI Chaos Explained - ColdFusion - Air Date 11-27-23
DAGOGO ALTRAIDE - HOST, COLDFUSION: On the 22nd of November, only five days after he was fired, OpenAI announced that they'd reached an agreement with Sam, and they had a new board, too. Sam posted on his X account that he was excited to return to OpenAI and continue the strong partnership with Microsoft.
Greg Brockman also came back into the fold, announcing his return with a picture. [00:38:00] Except for Adam D'Angelo, the old board members had all left. They were replaced by Bret Taylor, the former co-CEO of Salesforce, and Larry Summers, the former Treasury Secretary. Emmett Shear, who was the interim CEO for just 72 hours, seemed to be happy with the outcome, judging from his tweet.
So, just as abruptly as it started, the five day long saga ended with Sam Altman back at the wheel. Now, the major question is, why did the board fire their CEO in the first place? The answer is complicated and murky. There is no official explanation, only rumours and speculation so far. But, based on some reports, we can piece together some possible factors.
Please keep in mind that this is just the situation at the time of writing. The board claimed that they had some disagreements with Sam about how the company was run, and also that Sam wasn't always truthful to them. This seems like a bit of a weak reason to fire a CEO who was negotiating a deal to sell [00:39:00] shares to investors at a whopping $86 billion valuation.
That should be a big achievement for any company, but OpenAI is not a typical company. It's a bit different to the other tech giants out there. In a nutshell, OpenAI was founded in 2015 as a non-profit with a mission to create artificial intelligence that would benefit humanity. At its formation, it had a celebrity team of founders, including Elon Musk. Musk would leave in 2018 due to a conflict of interest. Since then, Sam Altman has been leading the firm.
He established a for-profit arm that raised billions from Microsoft. The main reason was to fund the expensive research and development for their AI models. Sam Altman was in charge of the for-profit section. However, the whole firm was set up in such a way that the non-profit faction had the ultimate power and it was controlled by the board members.
This odd structure left Sam and Microsoft at the mercy of the board, which was skeptical of corporate expansion. Besides Sam [00:40:00] Altman and Greg Brockman, other board members included Ilya Sutskever. We've already mentioned him quite a few times now—he is a prominent researcher in the AI field and is very vocal about AI safety.
Then there's Adam D'Angelo, a former Facebook executive and co-founder of Quora. There were other notable names on the board. It would seem like there's an ethos struggle within the company—does OpenAI go all out and try to make as much money as possible? Or do they stick to their core value of making AI that will benefit humanity?
Sam has a knack for spotting trends, though he's been working on some other side projects that were beyond the reach of OpenAI's safety-conscious board. One project that raised some eyebrows was Worldcoin. It was a crypto venture that used eyeball-scanning technology, and was marketed as a potential solution for AI-induced job losses—a stepping stone to universal basic income.
He was also toying with the idea of launching his own AI chip-making venture to reduce the over-reliance on NVIDIA. He reached out to sovereign wealth funds in the Middle East for a [00:41:00] potential investment in the realm of tens of billions of dollars. Additionally, he pitched SoftBank Group on another multi-billion dollar investment, this time in a company he planned to start with former Apple design maestro, Jony Ive.
The focus was AI-oriented hardware. These projects were seen as distractions by some of the board members. They wanted their CEO to focus on OpenAI and its core mission. To escalate matters even more, Sam found himself in conflict with Sutskever, who in July formed a new team within the company dedicated to controlling future "superintelligent AI systems." The dispute reached its boiling point in October when, according to a source familiar with the relationship, Altman made a move to reduce Sutskever's role in the company.
Fast forward to November 6th. It was the day that OpenAI hosted its first developer conference in San Francisco. Sam Altman made several announcements regarding customized versions of ChatGPT. It's going to enable users to make task-specific chatbots. These custom GPTs might operate [00:42:00] independently in the future. That's a major red flag for safety concerns.
And the last reason—a possible AGI breakthrough. According to Reuters, an additional concern may have been simmering within the company. The report suggests that some staff researchers penned an internal letter to the board, cautioning about the discovery of an advanced AI with the potential to pose a threat to humanity.
These researchers flagged the potential danger of this new model in their letter, but did not specify the exact safety concerns. There has been no official statement from OpenAI regarding these letters. But they did acknowledge a project called Q*.
ANDREW CHANG: Because first, I need to be real with you. It is very hard to know right now what Q* actually is.
We know from Reuters reporting that, according to their sources, it may be some kind of powerful artificial intelligence discovery at OpenAI. The company behind ChatGPT and that there are fears it is so powerful it could [00:43:00] threaten humanity. That sounds really dramatic, but this discovery was apparently alarming enough that at some point after a group of OpenAI researchers took their concern to the board—like, "Oh my god, are you all aware of what this company is working on?"—the CEO, Sam Altman, was fired.
DAGOGO ALTRAIDE - HOST, COLDFUSION: Now it gets a little murky here, but some believe that this project could be the highly anticipated AGI, or Artificial General Intelligence, which is capable of outperforming humans in any economically viable task.
ILYA SUTSKEVER: The day will come when the digital brains that live inside our computers will become as good, and even better, than our own biological brains. We call such an AI an AGI—Artificial General Intelligence.
ARCHIVE NEWS CLIP: It was a step towards Artificial General Intelligence. I know it sounds complicated, but simply put, it's artificial intelligence that is more powerful than humans. Now, OpenAI staffers believe that this could threaten humanity.[00:44:00]
So some of them wrote a letter to the board. This could also be the reason for the firing of Sam Altman.
Scarlett Johansson’s Open AI voice fight shows the need for consent in tech - There Are No Girls on the Internet - Air Date 5-21-24
BRIDGET TODD - HOST, THERE ARE NO GIRLS ON THE INTERNET: Full disclosure, I was already working on putting together an episode rewatching Spike Jonze's 2013 movie, Her, starring Scarlett Johansson's voice as an AI assistant. I really wanted to compare and contrast what the movie thought AI integration with our life would be like and what it actually has been like 10 years later. I'm really excited that the movie Her is part of the public conversation right now because it's one of my favorite movies.
If you haven't seen it, I don't want to give too much away, but Scarlett Johansson is the voice of Joaquin Phoenix's AI software. The movie imagines a future where AI is less like Siri and more like a real human. People in the Her universe fall in love with AI. They have friendships and real meaningful relationships with AI, and that's partly because AI sounds like a real human person who speaks to you and behaves like a person would, not like a robotic voice.
And as I was [00:45:00] preparing for that episode, the whole thing with Scarlett Johansson really blew up. And the more I thought about it, honestly, the madder I got. Last night I was getting ready for bed and I was sort of angrily brushing my teeth, and I found myself thinking about this yet again. And the kind of chorus in my mind that I kept saying over and over to myself was that these tech guys just think they own whatever woman they want.
Because to me, this is not even really about Scarlett Johansson, it is about what happens when consent in technology is violated again and again and again. And how it erodes the trust that we should be able to count on being at the center of our tech experiences. And how it reinforces that the most powerful companies in our world, who are shaping our collective futures, consistently demonstrate that they cannot be trusted to simply respect people, especially when those people are women.
Okay, so here's what's going on. OpenAI, the company that makes ChatGPT and a major player in the AI space, has [00:46:00] been flirting with integrating voice technology into ChatGPT since around last year. But last week, OpenAI finally revealed a new conversational interface for ChatGPT that they called Sky. Yep, just like a lot of voice technology, Sky has the voice of a woman. But Sky also has a voice that is really similar to the one that Scarlett Johansson used to play the AI assistant called Samantha in the movie Her. But then, OpenAI suddenly disabled this feature over the weekend. Grand opening, grand closing.
And this comes after OpenAI's head, Sam Altman, who you might remember we made an episode about, he was fired for something, we don't totally know what, but it seemed to be related to his lack of honesty, and then he was rehired and is now basically doing whatever the hell he wants. Well, Sam Altman was talking up this integration and comparing it to the movie Her and talking about how we'd finally have AI that felt like a real human that you could be friends with, which is a plot line right out of the movie, which spoiler alert, I do [00:47:00] think that some of these tech geniuses might actually be low key misunderstanding the takeaway from the movie. But anyway...
So, shutting down this new voice technology after Sam Altman was driving so much anticipation about it, everybody, myself included, was like, what is going on, what's the story there? So then on Monday, we get the real tea, which is that Scarlett Johansson told Wired in a statement that OpenAI actually reached out to her to ask her to be the actual voice of their new conversational interface, and she declined, twice, and that OpenAI basically just used her voice anyway, or at least a voice that sounds a lot like her voice. And OpenAI's Sam Altman even tweeted a reference to her work in the movie Her when announcing that new ChatGPT voice interface. So there isn't really a ton of plausible deniability on his part even.
Okay, so this is what Sky, OpenAI's, not Scarlett Johansson's, voice integration sounds like.
SKY CHAT BOT: I [00:48:00] don't have a personal name since I'm just a computer program created by OpenAI, but you can call me assistant. What's your name?
BRIDGET TODD - HOST, THERE ARE NO GIRLS ON THE INTERNET: And here is Scarlett Johansson as the voice of the AI, Samantha, from the movie Her.
HER MOVIE CLIP: Well, right when you asked me if I had a name, I thought, yeah, he's right, I do need a name. But I wanted to pick a good one, so I read a book called How to Name Your Baby, and out of 180,000 names, that's the one I like the best.
Wait, you read a whole book in the second that I asked you what your name was?
In two one hundredths of a second, actually.
BRIDGET TODD - HOST, THERE ARE NO GIRLS ON THE INTERNET: It sounds pretty similar to me, and ScarJo agrees. Here's what she told Wired in a statement.
"Last September, I received an offer from Sam Altman, who wanted to hire me to voice the current ChatGPT 4.0 system. He told me that he felt, by my voicing the system, I could bridge the gap between tech companies and creatives, and help consumers to feel comfortable with the seismic shift concerning humans and AI.
He said he felt my voice would be comforting to people. After much [00:49:00] consideration, and for personal reasons, I declined the offer. Nine months later, my friends, family, and the general public all noted how much the newest system named Sky sounded like me. When I heard the release demo, I was shocked and angered and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference. Mr. Altman even insinuated that the similarity was intentional, tweeting a single word, Her, a reference to the film in which I voiced a chat system, Samantha, who forms an intimate relationship with a human.
Two days before the ChatGPT 4.0 demo was released, Mr. Altman contacted my agent, asking me to reconsider. Before we could connect, the system was out there. As a result of their actions, I was forced to hire legal counsel, who wrote two letters to Mr. Altman and OpenAI, setting out what they had done, and asking them to detail the exact process by which they created the Sky Voice. Consequently, [00:50:00] OpenAI reluctantly agreed to take the Sky Voice down.
In a time where we are all grappling with deepfakes and the protection of our own likeness, our own work, our own identities, I believe these are questions that deserve absolute clarity. I look forward to resolution in the form of transparency and the passage of appropriate legislation to help ensure that individual rights are protected."
So I really applaud Johansson here, and I think this is the first time that there has been a legal dispute over a sound alike that is, as far as we know, not AI generated. And I think it could set a precedent for this kind of thing going forward, especially for voice actors and creative professionals who can't afford lawyer's fees or a big lawsuit if their likeness or voice is used this way without their consent.
Her statement is also just a good reminder that Johansson has been here before. She is one of the most targeted celebrity figures for AI deepfaked images. So, finding out that OpenAI actually asked Scarlett Johansson to work on this twice, and when she said no, they just found a [00:51:00] sneaky workaround to do it anyway, enrages me. It enrages me as a voice professional, it enrages me as a creative, and it enrages me as a woman. You know, when I say on the show that the exploitation of women is baked into technology in a lot of ways from the ground up, that these are features and not bugs, this is a great example of what I mean. It matters that a company like OpenAI would build their anticipated voice system in a way that has the exploitation of a woman baked into its earliest foundation. And this is not happenstance. It colors how they see women and other marginalized people as just available to take from in service of them making money to create their vision, a vision that by design ignores and exploits us. Like, don't these people understand that no means no?
I should say that OpenAI says that they did not actually steal her voice, but I also want to say that I 100 percent do not believe them at all. Here's OpenAI's statement.
"We support the creative [00:52:00] community and worked closely with the voice acting industry to ensure we took the right steps to cast ChatGPT's voices. Each actor receives compensation above top of market rates, and this will continue for as long as their voices are used in our products. We believe that AI voices should not deliberately mimic a celebrity's distinct voice. Sky's voice is not an imitation of Scarlett Johansson, but belongs to a different professional actress using her own natural speaking voice. To protect their privacy, we cannot share the names of our voice talents."
So, here's my opinion about what's actually going on. I believe that they probably did work with a human voice actor and they probably intentionally picked a voice actor that sounded a lot like Scarlett Johansson. And I think they had this person ready to go, whether or not Scarlett Johansson agreed to do this or not. I don't think they really cared about actually having Scarlett Johansson's permission, and they were going to either use this sound-alike or use Scarlett Johansson's real voice. Because in addition to his single-word "Her" tweet, Sam Altman, [00:53:00] the head of OpenAI, also said that the new AI voice technology "feels like AI from the movies." OpenAI's chief technology officer, Mira Murati, said that that was all a coincidence.
But even still, it's like they want to have it both ways. They obviously want us, the public, to be associating their new technology with the AI in the movie Her, and they're clearly trying to capitalize on that for this rollout, but they want to have all of that and benefit from all of that without actually having the consent from the real human woman behind the voice in the movie that they're referencing. As Bethenny Frankel might put it, it is a cheater brand.
Fascism and the Failure of Imagination - Zoe Bee - Air Date 5-9-24
ZOE BEE - HOST, ZOE BEE: Imagination doesn't make money, or at least it's not guaranteed to make money. Imagination means creativity, and creativity means risk. When you make something new, you don't know if it's going to be any good or if it'll even work in the first place; it's inherently risky. So you have to do this balancing act of being creative enough that [00:54:00] your ideas can sell, while not being so creative that your ideas are so new that they're scary. You can't have too much imagination. You need just enough imagination to come up with new stories, new iterations on old ideas, new sequels for established properties, but the way those stories are told, the underlying structures of them, have to stay the same. You're free to come up with new stories, as long as those stories fit within a three-act structure, are written in standard English, and meet industry standards of form and style. Even in business, where people say they want innovation, what they really mean is innovation within certain boundaries. Imagination is for making small adjustments to pre-existing, safe bets. We see this in education, too. When we teach students about the world, we teach them how to exist within it, not how to change it.
We teach them the status quo, how to solve equations, and how to write research papers, and how the government works. [00:55:00] We don't teach them to ask if those equations are the only way to solve the problem, or why research papers have to be written like that, or whether the government should work that way. We're not training students to imagine new ways of doing things, we're training them to do what an authority figure tells them to.
The premier example of this kind of education can actually be seen in everyone's favorite propaganda outfit, PragerU. Like I've talked about in another video, PragerU's lesson plans are bad. I think that this is all indicative of just how vapid PragerU's view of schooling is. Their lessons don't include real activities or real discussions because they simply cannot imagine a lesson that isn't someone standing at the front of a classroom talking at a bunch of children who are silently sitting at their desks with a worksheet in front of them. They say that they want to change the future of education in America, but what they're providing with these lesson plans isn't some kind of [00:56:00] educational revolution, it's boring.
Now they actually just recently came out with their first real class, something that is actually being used for real academic credit in New Hampshire schools, and I was thinking about doing a little video on it because when I was looking into it earlier, it looks so bad, and I think that it is just endlessly fascinating how PragerU sees education. So let me know in the comments if you want me to take a look at their econ 101 class so you don't have to.
But, anyway, all of these lessons, even their craftery art videos, all just come down to students following directions and doing things the right way. There's no room for imagination or critical thinking or creativity, because they don't care about that. They don't care about kids asking questions or expressing themselves or daring to do things differently. They care about kids doing what they're told.
But there's still a deeper question here. [00:57:00] Why do schools and businesses not value imaginative thinking? It's like they're afraid of imagination, like they can't risk people trying new things. But risk is only a bad thing if whatever you're holding onto, whatever the status quo is, is so valuable that you just cannot possibly chance losing it. And that's interesting, right? Like, we're all so invested in keeping things how they are that now we see any suggestion to change things as a threat. You can do whatever you want within the system we've given you, but don't you dare try to come up with a new system. Imagine things that fit within this box, don't you dare try to imagine a new box, or especially no box at all. The only value imagination has is to support our structures, not create new ones.
But what we need to remember is that these structures that we're living within [00:58:00] are all made up. To quote Ruha Benjamin in her book Imagination: A Manifesto: "Imagination does not just animate sci-fi-inspired scientific endeavors or explicitly creative pursuits like Broadway musicals, viral TikTok dances, and Jean-Michel Basquiat's paintings. Imagination is also embedded in the more mundane things that govern our lives, like money, laws, and grades." Everything was imagined. Language, taxes, marriage, borders, democracy. They're all just ideas that we either collectively agreed to believe, or that powerful enough people forced us to. It's like roleplay. It's that suspension of disbelief.
Like, we all know that money is just pieces of paper, but we've all agreed to the social contract that says this piece of paper has value. That's what people mean when they talk about something being a social construct. These things are only as real as we all decide they are. [00:59:00] And that's all well and good, except, as Benjamin goes on to say, lest we forget, designing cruel, oppressive structures involves imagination, too.
Early on in the making of this video, I was planning to title it Fascists Have No Imagination, but that's not quite right, is it? It's not that fascists don't have an imagination, clearly they do, it's the fascist imagination that allows people to imagine children as killers, imagine entire ethnic groups as vermin, and imagine themselves as the rightful rulers. Fascists have an imagination. It's just that their imagination sucks. Their imagination is cruel, rigid, and static, and this cruel, rigid, static imagination is what I'm calling the fascist unimaginary, and it lies at the heart of bigotry.
The fascist imagination puts people in boxes based on arbitrary traits, and then refuses to imagine that they could ever leave that [01:00:00] box. White supremacy cannot imagine Black philosophers. Patriarchy cannot imagine women leaders. Cisheteronormativity cannot imagine trans people existing. The fascist unimaginary shows up in our art, too.
We've been imaginative enough to invent dwarves and mermaids, but we couldn't possibly imagine Black dwarves or mermaids. We can imagine magic wielding TTRPG characters, but not ones in wheelchairs. We can imagine post apocalyptic sci fi super soldiers, but not ones who are trans. So, we do have an imagination, we've imagined so many things, all these systems and concepts and fictions, we just can't imagine any further. And that kinda doesn't make sense, right? How can we be imaginative enough to live in this world of concepts, but not imaginative enough to create new ones? What is it about the fascist unimaginary that has gotten us so stuck?
Note from the Editor on the misplaced excitement in private corporations pushing big technological advancements
JAY TOMLINSON - HOST, BEST OF THE LEFT: We've just heard [01:01:00] clips starting with Jacob Ward, discussing the lack of imagination among the tech elite who don't understand the value of the art and culture they're crushing. More Perfect Union told the story of Peter Thiel. Decoder spoke with and challenged the CEO of Google about the enshittification of their search. Internet Today discussed the influence of Elon Musk's nosediving reputation. Today, Explained, recorded last fall before his conviction, discussed Sam Bankman-Fried. ColdFusion looked into the chaos at OpenAI when Sam Altman got fired and then rehired. There Are No Girls on the Internet explained the case of OpenAI getting an AI voice that sounds like Scarlett Johansson after she turned them down. And Zoe Bee described the terrible imaginations of fascists.
And that's just the top takes; there's lots more in the deeper dive section. But first, a reminder that this show is supported by members who get access to bonus episodes, featuring the production crew here [01:02:00] discussing all manner of important and interesting topics, often trying to make each other laugh in the process. To support all of our work and have these bonus episodes delivered seamlessly to the new members-only podcast feed that you'll receive, sign up to support the show at bestoftheleft.com/support—there's a link in the show notes—through our Patreon page, or from right inside the Apple Podcast app. And if regular membership isn't in the cards for you, shoot me an email requesting a financial hardship membership, because we don't let a lack of funds stand in the way of hearing more information.
Now, before we continue on to the deeper dives half of the show, I have just a few thoughts. I was thinking about the confluence of techno-optimism and the current state of hyper-capitalized private companies driving technological advances in ways that are, honestly, somewhat utopian and unmistakably dystopian at the same time. And I wondered to myself, how did we get here, in the big picture? And I thought about the last great age of [01:03:00] nationalistic pride in science, driven by the Cold War and the Space Race, and realized that the last time people cared this much about science and advancement, it was basically socialized.
The GI Bill paid for a bunch of people to go to college, and even for people who paid for it themselves, the cost of higher education was extremely low. So that gave a stepping stone to everyone who was interested in the sciences to get into them. And the big projects themselves that people could join up with were also being carried out by the government, most notably the Apollo program. So I was glad to see I wasn't the only one making this connection: while reading a new piece in The Atlantic about the Scarlett Johansson kerfuffle, the writer quotes another piece from 10 months ago about Sam Altman in which it is explicitly recognized that democratic control over scientific advances is not an option in our current society.
" As with other grand projects of the [01:04:00] 20th century, the voting public had a voice in both the aims and the execution of the Apollo missions. Altman made it clear that we're no longer in that world. Rather than waiting around for it to return or devoting his energies to making sure that it does, he is going full throttle forward in our present reality."
So, what I would argue, without getting too derailed into the current state of late neoliberal economics that has poisoned society for the last 40 years and sapped our ability for collective action through government because we've idolized corporations and the individuals who run them, but without getting too far down that road, is this: the excitement that people feel for the current age of technological advancement, for those who do feel it (I don't put myself in that category), is being filtered [01:05:00] through the historic analogy of the Space Race era, which was democratically controlled, and from which many of the benefits were also democratically shared.
For instance, inventions famously associated with the government's space program, like Velcro, weren't patented for private enrichment but were made public, owned by all of us collectively. So this sort of "where's my flying car" kind of nostalgia for an age of technological advancement that we hope might finally be upon us after feeling like we've been robbed of it, because "if we went to the moon in the '60s, we should be a lot further ahead than we are now," this sort of nostalgia and excitement is, I think, leading people to root for the success of the private companies who are currently ascendant, in the same way that Americans and Soviets would have rooted for the success of their respective space programs.
But it [01:06:00] won't just be a small degree of difference between the outcomes of a successful government venture and a successful corporate venture. There's a growing recognition that corporations function as de facto feudal, dictatorial fiefdoms that just happen to exist within the context of democratically run countries. So the difference between corporations racing to remake the world with artificial general intelligence, or to populate the galaxy, rather than governments having control of such projects, along with the mandate to work for the good of all the people, won't just be the difference between whose logo is slapped on the side of the project. It will be the difference between continued democracy as we know it and corporate feudalism and technocracy.
DEEPER DIVE A: THAT WORD YOU KEEP USING…
JAY TOMLINSON - HOST, BEST OF THE LEFT: And now we'll continue with deeper dives on four topics. Next up, Section A: That Word You Keep Using; Section B: SiliCON Valley, Emphasis on the CON; Section C: Microsoft the [01:07:00] Destroyer; and Section D: Thank You for Your Service.
How Tech Bros Get Sci-Fi Wrong - Wisecrack - Air Date 3-7-22
MICHAEL BURNS - HOST, WISECRACK: The TLDR of Foundation is: one big brain man is so good at math that he can accurately predict the future. He discovers that the galactic empire is doomed, so he takes a bunch of the best scientists to another planet to build a new empire.
A scientific meritocracy. The series is rooted in that old chestnut, American exceptionalism, which, as your 7th grade social studies teacher told you, basically just means America is uniquely great. This notion especially took off after World War II, when America established its dominance. In its early chapters, Foundation seems to endorse this view.
But Asimov wrote the Foundation series over a long period, with a nearly three-decade gap in the middle, and the core ideas he communicated changed to reflect the times. By the end of the series, Foundation had challenged the post-war ideas at its core, exploring the follies of imperialism and exceptionalism. Basically, the protagonist's initial goal of single-handedly starting this new [01:08:00] world is sharply critiqued.
In fact, Asimov made it well known that he based the overall arc of the series on the history of the decline and fall of the Roman Empire. But the big boys of Silicon Valley also have some more mainstream sci-fi interests. Take Star Trek. We probably don't need to explain this classic to you, with its utopian Federation of Planets and its exploration of complex social and philosophical issues.
It's a series defined by its idealism and its understanding of justice and equality. People who worked with Steve Jobs named his ability to push you to do seemingly impossible things his "reality distortion field," a term straight out of Trek. And real-life Scrooge McDuck Jeff Bezos has been outspoken about the influence that the show has had on him.
He almost named Amazon MakeItSo.com, after Picard's famous catchphrase, "Make it so." And the complex voice-controlled computers of the Enterprise were a direct inspiration for Alexa. Perhaps most tellingly, Bezos ended his high school valedictorian speech with a Star Trek quote. He [01:09:00] said, "Space. The final frontier.
Meet me there." Bezos also tacitly forced William Shatner to look at his childhood Trek fan art. Billionaires: they're just as embarrassing as us. So, we've established the enormous influence sci-fi has had on Silicon Valley. And hey, in some ways, knowing that these people enjoy the genre is kinda cool. And when some of the most powerful people in the world are sci-fi nerds, it's going to have an impact on the culture.
But even if they share the interests of the hoi polloi, their lives are so radically different from ours that they may be taking away something totally different from our shared sci-fi faves. Let's go back to Snow Crash, which is set in a real-world dystopia. Everyone lives in burbclaves, which are described as "a city-state with its own constitution, a border, laws, cops, everything."
Everything within these burbclaves has been corporatized, including the law, meaning that big corporations can, say, legally kill you for minor criminal infractions. So if [01:10:00] you work for, say, Amazon, you would likely live in the Amazon burbclave. In the world of Snow Crash, corporations basically have full control over government and society.
Not that this could ever happen in our world. And this has forced those seeking freedom into the metaverse, a place free from corporate control. It's essentially an anarchist technocracy, and the only boundaries are set by the technology you have access to. This is where companies like Meta miss the point.
Because even if other companies contribute to the Metaverse, Meta is always going to have more control. If they host the servers, if their infrastructure is what it's built upon, then the Metaverse will never be free from corporate oversight. Meta isn't going to allow anyone within the coding shops to mess around with the source code, because the company has invested time, money, and brain power into its design.
The metaverse of Zuckerberg's dream is one that's been divided up into multiple corporate-controlled walled gardens, each with their own terms of service and currencies. [01:11:00] This ironically resembles the hellish corporate reality of Snow Crash more than its anarchist digital metaverse. Here, a tech billionaire with the unique ability to turn a sci-fi vision into reality seems to have missed the critique at the heart of that fictional vision. And this isn't the only example of a tech bro missing some vital points from their favorite content. Take Foundation. When Musk talks about escaping climate change by moving humanity to Mars, a feat that only he is aware of, brave, or willing enough to undertake, it's easy to see the connection.
Musk gets to step into the role of the big brain man who anticipates our doom and works as a private entity to solve the problem. But in being the man who built and therefore owns and operates his proposed Martian colony, Musk also takes the role of the head of the meritocracy. He contributed the most, so he's in charge.
However, in painfully ironic news, that's the very concept that the Foundation series explicitly [01:12:00] disavows. But in Musk's fantasy, he is the exceptional man with both an idea and the scientific know-how to realize it. Every launch that SpaceX undertakes has the underlying goal of building towards Mars colonies.
But a one-man-led colonization mission in Foundation isn't the solution. It's a recipe for social collapse. So not exactly something the average tech billionaire should be looking to imitate, right? But we do have one tech bro whose role model is, objectively, a pretty good guy and a true bald king. Jeff Bezos is such a big Trekkie that he says he idolizes Enterprise Captain Jean-Luc Picard.
Which is interesting because Picard is defined by his strong sense of ethics and morals, arguably even more than other captains in the series. Whereas Kirk is brash and headstrong, Picard is measured and deliberate. Kirk will make mistakes and do the right thing eventually, while Picard will agonize over the decision [01:13:00] to avoid making the mistake in the first place.
Now, we're not claiming that Bezos doesn't consider all his options before making a decision. You don't get to be a big-time capitalist baron without thinking things through. But there is a disconnect between the example that Picard sets and Bezos's public-facing actions. Here are a couple of Picard quotes to show what we mean.
In First Contact he says,
STAR TREK CLIP: The acquisition of wealth is no longer the driving force in our lives. We work to better ourselves and the rest of humanity.
MICHAEL BURNS - HOST, WISECRACK: In Next Generation he says,
STAR TREK CLIP: The first time any man's freedom is trodden on, we're all damaged.
MICHAEL BURNS - HOST, WISECRACK: And also in Next Generation,
STAR TREK CLIP: I have to weigh the good of the many against the needs of the individual and try to balance them as realistically as possible.
MICHAEL BURNS - HOST, WISECRACK: These aren't outliers. They all demonstrate [01:14:00] Picard's fundamental character. He's all about altruism and bettering mankind at large. In contrast, Jeff Bezos has a literally unspendable amount of personal wealth, which he keeps adding to.
JEFF BEZOS: It's fine being the second wealthiest person in the world. That actually...
MICHAEL BURNS - HOST, WISECRACK: And Amazon has roundly been exposed for exploiting workers. Regardless of your stance on those issues, and whether or not you agree with Picard, Bezos's actions are clearly contradictory to the fictional man he idolizes. If Picard were real and he met Bezos, it's clear he'd bar his application to the Federation.
So, who cares if Musk, Zuckerberg, and Bezos are big ol' sci-fi nerds with poor critical reading skills? Honestly? Everybody should, due to their outsized influence on government and society. This implication is especially stark when you compare the wealth of these heads of big tech to, say, national space agencies like NASA, which are vastly underfunded.
So our forays into the galaxy are yoked into a billionaire [01:15:00] space race that's enmeshed with the egos of the bros involved. What's more, big tech has a ridiculous amount of money to spend on lobbying, tailoring legislation to best suit their needs, arguably at the expense of everybody else. Zuckerberg and Meta are already trying to colonize the internet in developing nations, transforming it into some VR-driven, hyper-monetized corpo-land that would fundamentally change how we interact with one another.
Digital platforms have become a key aspect of our social interactions, and the people in charge of them have relative freedom to do whatever they want. Zuckerberg and the metaverse. Musk and his Mars colonies. Bezos and... wearing cowboy hats. They're all billionaire passion projects that either are or have the potential to fundamentally change reality for all of us, in ways both tangible and conceptual.
Driven by their love for, but perhaps not coherent understanding of, sci fi, these platforms are shaping the world we live in and what we imagine the future could look like. With [01:16:00] so much at stake perhaps it's no wonder that William Gibson, one of the most prophetic sci fi authors of all time, once said that he has explicitly decided not to write certain ideas into his books because he worried that they'd be misunderstood and possibly imitated.
How Socialism Built Silicon Valley (To Defeat Socialism) - The Majority Report w/ Sam Seder - Air Date 8-11-19
MARGARET O'MARA: We're at a moment of, um, you know, people aren't feeling particularly great about the tech industry.
So it's so much so that I find myself, um, I try to, you know, remind some critics of, well, you know, we are carrying around super computers in our pockets. Like there have been some upsides to this whole operation, but there's, but you're right. There's always been a government presence and it's both in the kind of Keynesian, you know, pour money into the system of the, you know, pre 1970s period and, and very much, you know, around defense and, and, and aerospace.
And then after the seventies, the government is still there. You know, what's the other way the American government, the state, uh, aids, uh, enterprise and individuals? It's through the tax [01:17:00] system. So you have, you know, tax breaks for, um, capital gains taxes that benefit venture capitalists and other investors.
You have, um, you know, tax breaks, some of which are targeted towards the electronics industry or science-based industry, um, that again are very generative, but also show that there's effectively a federal, you know, that the government's giving a boost, um, to these industries, but doing it in a way that allows for enough creativity, iteration, and, you know, private enterprise to flourish, um, and people to try and fail, uh, and also does it in such an indirect way that oftentimes the people who are the beneficiaries of it feel like they did it all on their own.
SAM SEDER - HOST, THE MAJORITY REPORT: Right. That seems to me to be highly problematic because when we get, when we go out, when California, let's say, or when, when the federal government gives a capital gains break, uh, to these companies or any type of tax breaks, what they're really [01:18:00] doing is saying, we're giving you free services. Right. It's because it's not just we're, we're, we're giving you a tax break.
It's like, we're providing you all the things that you need in terms of an environment in which to create this stuff. We're just not going to charge you for it. And so it's, we're giving you services in kind really as an investment. Rather than a tax break, we're just giving you free services. And the other thing that strikes me, too, about why California succeeded, and it goes to show that I, that I think from a, from a statutory standpoint, um, this provision where it, um, where it did not, where, where it did not allow for non compete clauses and contract law.
Will you explain that? Because that, I think, is huge. And those types of questions are, you know, still very much, um, highly relevant across American society. And it's a [01:19:00] very good indication. We talk about, uh, the government stepping into the field of the so-called freedom of contract. This was really huge, it seems to me, in terms of the development of Silicon Valley. Will you walk us through that?
MARGARET O'MARA: Yeah. So non-compete clauses essentially are, you know, things that are appended to employment agreements saying you cannot leave our employ and then go to a direct competitor.
You can't, you know, particularly in a knowledge sector where the people and the ideas are the raw materials, it's really limiting that free movement of people and ideas across companies. California does not allow those, and this is an interesting thing, you know. I'm in Seattle, and Washington State, for example, does allow non-competes.
And you can even see this kind of reflected in the two ecosystems of these two tech hubs. Um, which is, you know, Seattle has long been, it's changing now, but it was kind of dominated by one company at a time. You know, first Boeing, then Microsoft, now Amazon. Um, although [01:20:00] that's changing quite a bit.
But down in the valley, you have this perpetual job-hopping, and you have people moving from one firm to another starting from the 1950s on, and with that, they're sharing ideas, they're creating this network, which is really critical. You know, going back to your why-California question, or why the valley: one thing the valley has that Boston does not is this. It grows in isolation. It's a very sort of specialized economy, very tight, small and tightly networked. Everyone knows each other. Everyone, you know, their kids play Little League together. They go and drink beer after work together. They work together, not at just one company, but multiple companies.
And then they go on to become venture capitalists that fund the next generation of companies, and on and on. And non-competes, this legal environment, contractual environment, is very important. There are also so many other California-specific things that are feeding in, you know, the Pat Brown era investment in [01:21:00] social infrastructure, broadly defined: everything from public schools to higher education, kind of the Clark Kerr era of higher education, the expansion of public higher ed in California during the 50s and 60s, the building of roads, the building of public infrastructure of all kinds. It enables this path of tremendous upward mobility for so many people who happened to be there at the time, where you see someone like Steve Jobs, who comes from a family where his dad had a high school education, and he's a product of California public schools.
But heck, there was a computer lab in his high school in the late sixties. Um, and this is, you know, he's able to kind of get on this incredible escalator in part because of this public investment, which of course changes dramatically after the late 1970s.
SAM SEDER - HOST, THE MAJORITY REPORT: Well, I, uh, before we get to how that public investment changes, um, you write [01:22:00] that the, uh, homogeneity of Silicon Valley was both its greatest strength and its greatest weakness. Um, explain that for us.
MARGARET O'MARA: Yeah, well the, you know, I, I refer to the valley as the, Entrepreneurial Galapagos. It grows up. I mean, it's first of all, it starts off very, it's very remote. It's rural.
You know, it's far, far away from Wall Street and Washington. The national papers don't report on Silicon Valley, unless they're putting it in air quotes, until like the early 1980s. It's, you know, it's very kind of off to the side of the main action, and that allows these very distinctive species to develop in isolation of sorts, um, not only tech companies and engineering-focused businesses, but law firms and venture capital firms and marketing firms that are all devoted to, uh, to bringing up these companies.
And there's a very specific personal [01:23:00] dimension to all of it, because the model of venture-backed startups is you find a person with a promising idea, and oftentimes they're a young man in his early 20s who has no business experience. Um, he comes with an engineering degree.
He's never really run anything. And so you need this whole kind of concert of services and firms that are helping you, mentoring you in many different ways. And when you're making a bet on an untested person, you're often going with, okay, this person graduated from Stanford's master's in whatever.
That's a recognizable, you know, that produces good people. Um, this person, um, you know, used to work at this other company or knows someone I know. And so there's a lot of hiring and investing based on existing social networks and ties. So, when you're starting from a place, [01:24:00] a world that is entirely male and almost entirely white in the 1960s, the world of engineering, a world in which, you know, a department chair could sort of decide they weren't going to allow women in their classes.
There were very, very few technical women in, um, in that world then, and those that were kind of learned on the job, despite obstacles. And so you have this very homogenous pool that you're picking from, and then it becomes a multi-generational phenomenon. I mean, the real magic of Silicon Valley is time: that you have multiple generations of people making it big in a company they founded or being part of a very successful enterprise.
Then they become the investors of the next generation. They're picking the winners. They're looking at the younger, you know, 20-somethings who are coming up with big ideas and saying, all right, I'm going to invest in this person. And oftentimes it's investing in the man, investing in the person, as much as the idea.
And so [01:25:00] this is how, you know, when you're, again, going with what you know, it's very hard to let some new voices and new people in the room.
Peter Thiel And His Dorky Little Goons – Some More News - Air Date 11-2-22
CODY JOHNSTON - HOST, SOME MORE NEWS: In a piece for Vanity Fair, journalist James Pogue details the inner workings of the new right.
The latest evolution of the alt right, post left, neo reactionary movement that seeks to distinguish itself from both the Reaganism of the 80s and Bush era conservatism. As I alluded to before, the New Right is not a unified ideology. As Pogue explains, members come from a wildly diverse set of political backgrounds, from monarchists to Marxists to the literal Unabomber, who some New Rightists call Uncle Ted.
But the idea at its center, the core tenet that makes the New Right a movement at all, is the underlying belief that individualist liberal ideology, increasingly bureaucratic governments, and big tech are all combining into a world that is at once tyrannical, chaotic, and devoid of the systems of value and morality that give human life richness and meaning.
Pogue points to [01:26:00] Curtis Yarvin, friend of Peter Thiel, an ex-programmer and blogger who goes by the online name Mencius Moldbug. Many people in the new right will ironically call him Lord Yarvin, because apparently this guy cannot get enough Star Wars names. Menstrual Mold Grub is credited as a co-founder and/or prominent voice for the Neo-Reactionary, or Dark Enlightenment, movement.
It's basically a bunch of alt-right Silicon Valley bros that, like Thiel, believe that democracy is not compatible with freedom and want to replace it with a vague techno-monarchy. Sounds kind of familiar, right? Yarvin once put out a blog post called The Case Against Democracy with a listing of "red pills" that he considered to be truths because, and this can't be understated, these guys are dorks.
They're also pretty f'ing fashy and border on white nationalists. Yarvin himself has blogged that although he doesn't consider himself a white nationalist, he recognizes that many of his readers are, and said he would also [01:27:00] read and link to white nationalist content. Which sure sounds like something a white nationalist would say.
By one account, Yarvin once gave a speech where he defended Hitler's decision to invade other countries, calling it... Here's another blog where he doesn't seem to understand why people hate the Nazis the most, which includes the quote, "On the other hand," and then there's more, and like, do I, do I need to finish that quote?
But, but, okay, to be fair and balanced, his other hand is pointing out that the Soviet Union also did mass murders. Okay. Yes, both things can be bad. Of course, this isn't a video on Lord Yarnish Dildo Bug, the totally-not-racist who uses liberal democratic hypocrisy to get you to sign up for fash-adjacent neo-feudalism, but it's important to talk about him because he's a man deep in Peter Thiel's circle.
Thiel has funded Yarvin's startup, and they will [01:28:00] correspond to discuss the politicians that Thiel backs. Yarvin and the New Right, like many far-right groups, believe in a fundamental conspiracy where the people at the top hold so much power, it strips agency and freedom away from everyday people. He calls it the Cathedral, because nerd, and part of his belief is that there's no single entity running the show.
In fact, he believes hardly anyone who participates in it believes that it's an organized system at all. Instead, it self perpetuates by rewarding media that goes after threats against the established order, like nationalists, libertarians, and anti vaxxers. Social media, according to Yarvin, only accelerates this cycle, since the best way to get clicks from someone is generally to reaffirm their worldview, which in Yarvin's eyes reaffirms the cathedral.
In other words, he's describing, like, capitalism and the status quo, or social norms, something that has existed for as long as society has existed. But since he's a tech bro at heart, he's decided that this is a brand new thing he's invented and is now giving it a silly name. The [01:29:00] solution, in the eyes of Yarvin and the New Right, is for a big strong boy to take power back from the cathedral and replace the whole system with a regime structured top-down like a startup.
But it's like, super not a dictatorship, you guys, that's not how it's gonna turn out. In that aforementioned speech where he defended Hitler or whatever, Yarvin also gave his solution to reboot the government. As a first step towards the goal, Yarvin advocates for retiring all government employees, or RAGE, a super chill and not at all scary acronym, and replacing them with what he calls a national CEO.
These days, Yarvin is super duper careful not to use the word dictator, not because he doesn't think we need one, but rather because the optics around that word are a tad bit bad. To quote Yarvin, if you're going to have a monarchy, "it has to be a monarchy of everyone," which, when you think about the definition of monarchy for more than a second, is a completely nonsense statement.
That's like saying you want to have water, but only if it's a DRY water. [01:30:00] And that's how we get back around to Thiel, who, as we have mentioned, also seems to think a country should be structured like a corporation: autocratically, from the top down, under a single ruling figure. Whether Yarvin and the New Right have influenced Thiel away from his purely libertarian roots, or Thiel has come to these positions on his own, it doesn't really matter.
Because either way, both Thiel and Yarvin believe this stuff. And Thiel is willing to pour his money into supporting people and causes that further these beliefs.
Before 2016, Thiel had dipped his toe into conservative politics, donating around 3 million to Ron Paul's campaign in 2012, and another 2 million to Ted Cruz the same year.
But as we've already discussed, Thiel's politics, which mirror much more closely Yarvin and the New Right's beliefs, were never all that in line with mainstream Republican values. So when everyone's favorite loud boy Donald J. Trump came along in 2016, saying and doing things that had previously been considered unsayable and undoable, Thiel saw a chance to [01:31:00] finally get out of the stasis of the status quo.
And so he jumped on it, donating $1.25 million to Trump's 2016 campaign and speaking in support of Trump at the Republican National Convention.
PETER THIEL: I'm not a politician, but neither is Donald. He is a builder, and it's time to rebuild America.
CODY JOHNSTON - HOST, SOME MORE NEWS: Oh, he's just like a regular guy. I was honestly expecting some kind of, like, dark mist or something.
A guy hooked up to a bunch of tubes, maybe. After Trump won, Thiel was appointed to the President-Elect's transition team. And Trump reportedly told Thiel he was "a very special guy." Which is, like, kind of a weird thing to say to a grown adult, but whatever. It's Trump; that's how he talks. Despite this early strong start to their friendship, things didn't stay quite smooth for long.
In 2018, the New York Times quotes Thiel as saying "there are all these ways that things have fallen short," pointing to his hopes that Trump would end the era of stupid [01:32:00] wars, rebuild the country, and move us past the culture wars. In this sense, Thiel is completely correct, in that Trump did not do any of those things.
Though why Thiel thought he would do those things in the first place is beyond me. In 2020, Thiel notably and intentionally stayed on the sidelines, avoiding endorsing Trump or making any major political donations at all. Whether he did this because he still had major issues with the way Trump had handled his first term, or simply because he thought Trump wasn't going to win, is anybody's guess.
But regardless, Thiel stayed out of the 2020 presidential election and thereby avoided having any personal stake in the ensuing debate about whether or not the election was rigged. You know, that debate that's somehow still going on almost two years later. But now, as we approach that two-year mark, Thiel is stepping out of the shadows once more, like the spooky venture capitalist that he is, to financially back 16 conservative [01:33:00] candidates for the House and Senate in the 2022 midterm elections.
Many of these candidates, who include J. D. Vance, Blake Masters, Eric Schmitt, Kevin McCarthy, and Ted "Internet Creep" Cruz himself, have embraced the pervasive lie that Donald J. Trump won the 2020 election. While Thiel hasn't said publicly what he personally believes about those election results, it doesn't really matter.
It should be clear by this point that Trump was just a sweaty tool in the Thiel toolbox (Thiel box! Teal box!) to achieve his own political ends. And these new candidates are more of the same.
DEEPER DIVE B: SiliCON VALLEY, emphasis on the CON
JAY TOMLINSON - HOST, BEST OF THE LEFT: Now entering Deeper Dive B: SiliCON Valley, emphasis on the CON.
Google Search Is Died, Enron Musk - The Daily Zeitgeist - Air Date 5-14-24
In 2019, there was a bloke called Ben Gomes, who was the head of Google Search. Ben Gomes had been at Google since 1999, so basically the beginning; he worked directly with Sergey and Larry.
There are tons of articles about him where everything he talks about, he's talking like a Renaissance painter. He's like, I believe in the connectivity between [01:34:00] data. Like, he's so romantic about it. So on February 5th, 2019, he gets, through a connection of events, something called a code yellow, which is an internal Google thing that says there is a problem that's significant.
There are higher codes, but they're extremely rare. Code yellow itself is actually pretty rare. So what happened was this code yellow was the revenue and ad side of Google saying Google search, you are not making us enough money. You need to make us more money. And also, and this is very important, the amount of queries going into Google is not growing enough.
Now, little side note for you: queries, in this case, is referring to the amount of times that people search. Now, if you think about it for just a second, is that necessarily connected to how good Google is? Not necessarily. In fact, if there are less queries, maybe Google's better. Yeah. Maybe they found what they were looking for.
Right. Yes. Yeah. Which does not work for Google. Right. So Google is [01:35:00] then in the midst of this code yellow, and between Ben Gomes and some other guys, there's a conversation where he says, Hey guys, I feel like Google is getting too close to the money. Google seems to only care about growth. After about a month, they resolved the code yellow, and there's a big email thread, and there's a ton of emails that I'm just leaving out, but I'm summarizing as quick as possible.
There's also, on the sidelines, this guy called Jerry Dischler, who was one of these noxious VP types who was kind of like, yeah, guys, we need to make more queries and we need to make more money. So could you just do that? Right. So the code yellow comes to an end, and it turns out that the guy behind it is a guy called Prabhakar Raghavan.
Who was then the head of advertising, the head of ads on Google. And Ben Gomes sends out a thing to a bunch of people who are all congratulating each other, saying we got through this, great job everyone. Prabhakar got a response saying, Yeah, actually, engineering did that. You didn't do it. He didn't do anything.
Wow. So these emails came out through the Department of Justice's antitrust hearing. I realize [01:36:00] this is a lot of history. In 2020, Prabhakar becomes head of Google Search. So he takes over Google Search from the idealist guy, Ben Gomes. From the idealist who worked on Google Search from the beginning. So he came in, he was mad at Ben Gomes.
And basically pushed him out. And also, to be clear, this queries metric is insane. Having more queries means nothing. And in fact, these emails kind of detail that. He takes over in 2020. Now, if you really think about it, Google started to get really bad in like 2019, 2020, and has got significantly worse constantly since 2020. Toward the end of, well, mid-2019, they added this to mobile, but they put it fully onto desktop as well. In 2020, they made the change to make it harder to tell when something is an ad on Google now. Yeah, I definitely noticed that change. They made a bunch of changes to make Google worse. It used to be pretty easy. There was like a [01:37:00] background.
It seemed pretty clear that they had an internal discussion and were like, well, we don't want the product, we don't want to be, like, actively tricking people. They, it was funnier than that. They were just like, yeah, we need to see the numbers go up, please. Make number go higher now. Line go up now. Yep.
But yeah, like, during the two thousands, there was a balance of, we need this to be a product that people want to use and we need to make money off of ads. But they've hit a point where they don't really give a shit if it's a product that people want to use. It seems like it's that.
And also within these emails, and again, this is from the Department of Justice's suit against Google for monopoly. So, hey, what monopoly could they have? And what's really stark about it is what Mr. Raghavan's previous job was. [01:38:00] So can you think of, what would be the worst job previously held by someone running Google Search?
Just think about it for a second. You might not get it, but just, just think. What is the worst company he could have worked for that isn't, like, I don't know... So different? Yeah, a different conflict. Because, like, one of the worst would be Google Ads. Mr. Raghavan ran search at Yahoo, 2005 to 2012. In that period, they went from, I think, like a 33 percent market share versus Google's 36 percent, to literally doing a deal where Bing would power Yahoo. Yeah. Yeah. Let me just fact-check you real quick.
Let me go Yahoo that. Nope. Never been said. That's never been said by anyone. It's crazy, because you read this thing, you read this story, and you read the emails. And I was writing it, and I was like, is this someone messing with me? But this is ridiculous, right?
Because the emails are so grim. There's one with this guy, this engineer called Shashi Thakur, [01:39:00] who's like, can we tell Sundar Pichai about this and stop this? Dude, that's the CEO of Google, and his former job was McKinsey. Yeah. Yeah. They're on the right side of a lot of things. I was going to say: bread prices, Oxycontin.
Yeah. Yeah. Yeah. And it's wild because. You read this story and you're like, it couldn't be this obvious, could it? And the timeline is just perfect. And I will, I will actually say something, I'm previewing something I'm working on. The only time I've ever seen worse than this story is in my next newsletter about Facebook.
Well, you don't have emails in this chain where someone is like, yeah, actually, it's good. The product sucks, right? I actually like this. This is good. I've got documents where there's someone writing, yeah, here are the changes we've made at Facebook to increase engagement that made Facebook worse, right?
Worse for the user. And guess what? COO of Facebook, [01:40:00] Sheryl Sandberg, until 2022: McKinsey. Yeah. No kidding. The people that run Facebook right now: all product managers, all growth people. This is, tying it back to Google, the people in charge are management consultants, ads people, revenue people.
They're not the people who build anything.
Silicon Valley Deserves Your Anger - Tech Won't Save Us - Air Date 3-14-24
ED ZITRON: A few weeks ago, I wrote a piece where it was saying why everyone has kind of turned on tech. And long and short of it is, tech made a ton of promises in the 2010s and they delivered on a lot of them.
They were like, your files will be wherever you need them. You won't have to use physical media; you'll be able to stream stuff. And they delivered! Genuinely. I know that, like, streaming is extremely questionable as an industry at times, and cloud computing is not great in many ways, but for the large part, tech actually delivered stuff.
There were cool things that people could use. The jumps between iPhones were significant. There were new laptops that were just that bit much faster. It felt exciting. And then around the mid-2010s, so around when Waymo first announced one of their first [01:41:00] robot car things, when Elon Musk started talking a lot about Autopilot, well, that was around when the Apple Watch came out as well.
It's almost like the tech industry got frozen in amber. There were new things, but things stopped being exciting, and all of the things that we were told were coming, robots, AI, automated companions, all of these things, never showed up. And I think as we speak right now, we're kind of seeing what happens when you fail to deliver but you still get rich, because tech has existed with these massive multipliers and made all of these people obscenely rich, and people have kind of accepted it because they said, well, it's how we got iPhones.
It's how we got cloud computing, it's how we got all these things. Tech hasn't done anything like that for a while, and their big magical trick is AI. Which is doing what, exactly? What is AI doing for a consumer today, much more than Siri was doing five, six, seven years ago? It isn't. On top of that, it's so expensive, but everyone's putting all this money into this very expensive, [01:42:00] not super useful tech.
It just feels like, especially off the back of, what are we, less than a year since the SVB thing happened? I'm just a bit worried. It could be okay. Things as a business operator in this space feel better, but I don't know. I am very worried about this push of AI, not just for the fake job thing, but also because, I don't know.
It costs too much and it does nothing.
PARIS MARX - HOST, TECH WON'T SAVE US: Yeah, I think that's really well put. And I think there's a lot in there for us to dig into through the course of this conversation. And we'll return to the AI stuff a little bit later, but it stands out to me that, and of course, when you say SVB, that's Silicon Valley Bank and the events that happened there, for listeners who might not remember, but we're in this kind of AI push, this moment where this is all, you know, supposedly going to change everything.
And it's interesting to me that you point back to the mid-2010s as this moment when a lot of stuff really significantly changed, in the sense that they started making these huge promises and they were not able to fulfill those promises. You know, the stuff that we actually got [01:43:00] from these companies were not, you know, these big steps forward, but were, like, kind of incremental steps as things also seem to be getting worse and worse and worse.
And we can talk about that as well. But the mid 2010s was also kind of the last AI boom moment, right? When AI was going to replace all the jobs and all this kind of stuff.
ED ZITRON: I had a client in 2016 telling people, oh, it will ingest all your knowledge and be able to answer questions in a chatbot. 2016!
What's different?
PARIS MARX - HOST, TECH WON'T SAVE US: It's funny too, because like there was an article I read the other day that was about Apple and how it was like behind in the generative AI thing. And it was like, this is such a big threat to Apple's business model because everyone's going to be using these generative AI models. And it's going to make the voice assistant so much better.
And Apple has had so much trouble with voice assistants. And it's like, nobody liked the voice assistants the first time around, like, It never caught on. Like, Amazon has largely like disinvested from it. Sure, Lex is still around, but like, it's not a focus like it was a few years ago, [01:44:00] because it was just another one of these things that they pushed out and like, didn't really catch on with people.
And the generative AI moment isn't going to change that.
ED ZITRON: And actually, there is one thing that I left out that's very important as well. 2021 was a watershed moment for tech, in the worst possible way. It was the first time I think tech has really just tried to lie. Metaverse and crypto, what did they do?
They didn't do much. They did, however, make a bunch of guys really rich, but it's the first time I saw just like an outright con on consumers where it was like, Hey, metaverse is the future. You've got to be part of this. What is it? It's a thing you've already really seen, but we would like you to claim this is the future.
Cryptocurrency. What does this do? Nothing! It might make you rich, but it probably won't. So you have this two year period where 2021 seemed like everything was going to be okay. Everyone was going to get rich. Biden administration. Everyone's doing well. Money was frothing around. And no one really knew because I don't think most people know macroeconomics, myself [01:45:00] included.
So a lot of people didn't see it coming when interest rates screwed everyone, and everyone at once realized, oh, wow, the tech ecosystem was built on a form of financial con, kind of. Venture capital wasn't held accountable. And so now they're trying to push through AI, and it's like, well, no, we saw the metaverse, that didn't do anything.
We saw crypto that didn't do anything. Tell us what this will do. It's the same McKinsey freaks telling us that this is the future.
PARIS MARX - HOST, TECH WON'T SAVE US: And, you know, the crypto and metaverse moment was like, okay, there are these visions, there are these ideas, but like, there's nothing tangible here that's really making anything better.
It's just, like, how can we extract more money from people by forcing them into these features or making them believe it for a little while? Whereas in the past, sure, they, you know, lied about a ton of things and over-promised about a ton of things, but there was often something tangible there that they believed, you know, could actually be followed through on.
And in part, it makes me think back to, like, the dot-com boom moment, where you also had a lot of these companies that really had [01:46:00] no foundation to them but were riding this wave up, as there was just all of this money that was flooding into the space. And it seemed like there was also a moment of that in those kind of pandemic years, when there was a bunch of money flowing and it had to go somewhere.
So here's all these scams and cons to absorb it, right?
ED ZITRON: Yeah. And I think another part of it, and I've written about this a great deal, is that generally before this, and you know, it's not perfect, I know there are plenty of examples where they didn't, but these companies found ways to grow that somewhat benefited the customer. They didn't always totally do it in the nicest way, but it kind of benefited them; there was a way of looking at it where you go, okay, they're making stuff for people and people will use it. Right now, I think people are waking up to the fact that tech companies are willing to make their things worse to make more money. And I think that they are more aware of it than they've ever been. And I know I'm somewhat self-serving in that this is my Rot Economy thesis.
However, I do think that there is coming a time when people are going to realize, why are these tech companies worth 20 [01:47:00] billion when they make things worse? Why is it that Microsoft is worth, what, three trillion dollars, and they're just flooding money into this system, ChatGPT, and Copilot, and all these things?
They can't even explain why you'd use it. The Super Bowl commercial for Copilot, it was so weird. It was like, oh yeah, do the code for my 3D open-world game. Give me Mike's Trucks, a logo for Mike's Trucks. Which just doesn't work in practice. It's so weird. They spent 7 million on this commercial, and you'd watch it and you're like, not even they can come up with a reason why you need generative AI. That's crazy, man. I don't remember another time in this industry when I was just like, oh yeah, no one, like, no one can tell you why they're selling it.
They're just like, oh, it's the future and you should buy it today. Please use it. Please use it now. We need you to use this so that the markets think we're growing at 10 to 20 percent every quarter and so that Satya Nadella can get, he must make at least 30, 50 million. Sundar Pichai from Google gets 280 million or something or 220 million in 2022.
[01:48:00] These guys are insanely rich, but they're creating nothing. I'm all over the place because this stuff is making me a little crazy. I'm not going to lie.
PARIS MARX - HOST, TECH WON'T SAVE US: One of the pieces you wrote recently that I read in preparation for this interview was really going into that, right? How a lot of these companies we have been seeing it like slowly more and more over time.
Like, if you think about Facebook, like, making their product worse so that it could extract more money from people and get more data off of people to feed into, you know, these broader kind of considerations that they had around making more money off of what everyone's doing on their platform. People are talking about it a lot now, but I think you've still seen it for the past number of years: the Google search engine getting worse as it's become more, you know, oriented toward the needs of advertisers versus the actual users who are using it. And, like, again and again, whether it's social media platforms or other things that happen online, there has been this slow degradation of the quality of these things, because there's this need to extract more and more profit from it.
And the options for where you're going to make that profit have [01:49:00] become fewer as, you know, the real growth in this industry and the real innovation has tapered off. And so now it just becomes really extracting as much as possible from what is already there, even if that means the experience and the actual quality of the product has to decline in the process.
DEEPER DIVE C: MICROSOFT THE DESTROYER
JAY TOMLINSON - HOST, BEST OF THE LEFT: You've reached Deeper Dive C: Microsoft, the destroyer.
The One Where Microsoft Admits Game Studios Are F_cked (The Jimquisition) - Jim Sterling - Air Date 5-20-24
JIM STERLING - HOST, JIM STERLING: Studio closures. They occur when a game studio is failing. They occur when a game studio is succeeding. Like with the mass layoffs game publishers routinely indulge in, publishers shutting down studios are motivated far less by the studio's performance and infinitely more by a perpetual drive to cut costs and please shareholders, all in the name of perpetuating the ridiculous myth that a company can get richer and bigger literally forever.
The myth of perpetual growth. Like I have reminded viewers for a while now, publishers axe people's jobs and shut down developers regardless of success, and sometimes because of it. Since the better a publisher does, the more pressure there is to [01:50:00] cut those costs, so it looks like they're making more money.
It's an unsustainable system, which is why the industry is such a tumultuous and unstable fucking mess all the time. And if you're one of the five people still watching this shit, you'll know that already. So, why are we talking about it this time? Well, we've got some absolute piss drivel excuses from a Microsoft executive to laugh at.
Ha ha ha ha ha ha ha ha ha ha ha. Microsoft has been among the very worst. In fact, it may very well be the worst instigator of studio shutdowns and mass layoffs. Responsible for thousands of ruined lives in the name of pleasing a handful of already incredibly wealthy people. All these harmful, maliciously callous, cost cutting methods, at a time when the company is spending obscene, offensive amounts of cash to buy other companies in its attempt to create a de facto monopoly in the game industry.
Most disgustingly, those huge purchases of Bethesda and Activision have been a direct [01:51:00] cause of layoffs and closures, with Microsoft most recently treating itself to the shuttering of Bethesda subsidiaries Arkane Austin, Alpha Dog Games, and, perhaps most shocking of all, Tango Gameworks. So what does Xbox leader Sarah Bond have to say for her company's habit of consuming other companies and upending thousands of careers in the process, and continuing to engage in mass layoffs?
Rather than cutting the executive class's extreme salaries and bonuses, or at the very least, maybe not spending literal billions to buy shiny new toys. Well, it turns out she has very little of worth to say at all. As if to unintentionally suggest that executives truly have no compelling arguments for their despicable behavior, and completely fail to justify it when they try and contrive some.
"The last year or so in video games, largely, the industry's been flat," whimpered Bond in an interview that has since blown up for how pathetic it is. "And even in 2023, we saw just some tremendous releases, tremendously [01:52:00] groundbreaking games, but still, the growth didn't follow all of that." There it is. That word. Growth.
I love that, within a couple of sentences, Bond essentially backs up everything I've said about corporate motivations for years. Perpetual growth, at any cost, so long as the cost isn't felt by anyone but the employees on the ground level. It's not enough to make money, it's not enough to be successful, you have to keep growing, no matter how big and powerful you already are.
Ah, the slightest drop and the investors panic and pull out. It's fucking nonsense. It's funny though, this interview with Bond has been criticized for being weak, but if anything it's making an incredibly strong case. It just so happens to be making my case. Anyway. Sorry, let's let Bond continue talking about the growth not being there.
"A lot of that's related to our need to bring new players in and make gaming more accessible, but all of that has been happening at the same time that the cost associated with making these beautiful AAA [01:53:00] blockbuster games is going up, and the time it takes to make them is going up." Oh, and there that is.
One of the all time classic fucking excuses squirted out by people so rich they could fund an entire game's development and still be rich. Games are just too expensive to make! Oh, oh, pity the poor multi billion dollar business that can always find money to give management million dollar bonuses just for sitting on their fucking arses.
Pity the poor companies boasting of record revenue and raking in billions off the back of predatory in-game economies. Please. And pity the poor companies to whom the expense of making games is just happening at them. It's not a decision they're making. They're completely powerless. They're just the ones with complete and total fucking power.
Once again, a massive corporation pleads poverty and tries to claim it can't afford to do the thing it's in the literal business of doing. I've heard it too many times. I'd heard it too many times in 2014, let alone 2024. These are tired, [01:54:00] overplayed excuses applied to every shitty business decision, as companies who make and sell games have spent over a decade trying to tell us they can't afford to make and sell games, while boasting to their shareholders about how much money they're pulling in by making and selling games.
The fact Bond is trotting out by far the laziest and most worn-out argument really only says one thing: Microsoft thinks you're a gullible fucking moron who can't remember the last dozen things they've used this argument for. At some point, games have to make money! As Naughty Dog once famously said, while enjoying a lucrative sponsorship deal between Uncharted and Subway.
Fucking disingenuous. On the subject of the studio shutdowns, the Xbox president was no less offensive than she'd already been. "It's always extraordinarily hard when you have to make decisions like that," she lied. "When we looked at those fundamental industry trends, we feel a deep responsibility to ensure that the games we make, the devices we [01:55:00] build, the services that we offer, are there through moments even when the industry isn't growing, when you're going through a time of transition. The news we announced earlier in the week is an outcome of that, in our commitment to make sure that the business is healthy for the long term." What? No, seriously, what the fuck was that? State of your corporate spiel, mate.
That was just words, just a bunch of buzzwords thrown together with waffling, obfuscating vaguery. Borderline word salad, even. Like, this was the bit I was most looking forward to tearing apart, the excuse for the closures, but what? There's nothing to tear apart, she fucking said nothing! But anyway. She continues talking bollocks with all the commitment of a terminal bellend.
Tango Gameworks last year saw massive critical acclaim and enjoyed commercial success with Hi Fi Rush. A game many would put forth as one of the best made in years. But as we've already seen, success doesn't count [01:56:00] for much in an industry designed only to reward those who are already wealthy.
JIM STERLING - HOST, JIM STERLING: Tango's closure shocked the gaming community, but sadly, I can't say I was surprised. Disgusted, yes. But I'm not surprised by any studio's closure, no matter if they've just launched a complete stinker, or released one of the most popular games of the year. Does Sarah Bond have any thoughts on that? No, no, hang on, scratch that.
Does Sarah Bond have fucking useful thoughts on that? You already know the answer. "You know, one of the things I really love about the games industry is that it's a creative art form, and it means that the situation and what success is for each game and studio is also really unique," said the executive literally trying to re-litigate the definition of success.
"There's no one-size-fits-all to it for us, and so we look at each studio, each game team, and we look at a whole variety of factors when we're faced with sort of making decisions and trade-offs like that, but it all comes back to our long-term commitment to the games we create, the devices [01:57:00] we build, the services, and ensuring that we're setting ourselves up to be able to deliver on the problems that we're facing."
You'll notice she talked a lot of shit about all the hard decisions and all the different criteria, but didn't actually explain any of it. Didn't explain what the criteria was for Tango's shutdown. What did success mean for Hi Fi Rush? What did Hi Fi Rush have to do in order to save Tango Gameworks?
Sarah Bond, once again, said nothing. But in doing so, said a damn lot. What she's said, what she's admitted to, is what I've been fucking saying for ages. Success don't matter. It won't save you. If you release a failed game, you'll be shut down. If you release a successful game, you'll be shut down.
Because once you've made the claim that success is a vaguely defined idea that means different, unique things depending on whatever it is the publisher feels like doing at the time, you've told your subsidiaries, your [01:58:00] employees, one thing: you are damned. Damned if you win. Damned if you lose. Damned if you do, and if you don't.
Damned the second you signed the bottom line and sold your studio to a merciless, callous corporation that defines success not by success, but by whatever saves them the most money in the short term. Fuck what Bond said about long-term thinking. Corporations aren't built for that. They're built for whatever gets them lurching to the next financial quarter with the illusion that it's making all the money in the world and still somehow finding change behind the couch cushions.
Microsoft's New AI Will Turn Your Computer Into a Privacy NIGHTMARE - Zaid Tabani - Air Date 5-23-24
ZAID TABANI - HOST, ZAID TABANI: For the last few months, Microsoft has been doubling, tripling, quadrupling down on the AI revolution in any way it can. Its integration of ChatGPT into Bing, the 10 billion investment they made into OpenAI back in 2023. Also, Microsoft is entitled to up to 49 percent of the for-profit arm of OpenAI.
That doesn't mean they own it, but they're, they're good friends. And listen, I don't need to tell you already [01:59:00] that the speed of how all of this has developed has raised a lot of concerns for a lot of people. There are a plethora of critiques of AI, as well as support for AI being a tool that will enhance humans' lives.
You know the debate well, but it all got muddy this week when Microsoft announced a new feature for Windows 11 called Recall. Recall is a new feature coming to Windows 11 that's going to be included in Microsoft's Copilot+ suite of AI tools, and essentially what the feature does is it has the computer take constant snapshots of what you're doing at the time on your PC, what programs you have open, what websites you're browsing, and it stores them in an archive to essentially create a history of what your activity has been on your computer, that's searchable to you. So if there's a website you saw a few months ago and you want to go back to it, you can immediately look it up. And when you find the snapshot, Recall is supposed to be able to recreate that state of your computer at any time. Think of it as a super advanced version of System Restore, or at least that's what I think Microsoft thought they were [02:00:00] pitching to everybody.
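To make the mechanism being described a little more concrete, here is a minimal, hypothetical sketch in Python of the general idea of a local, searchable activity archive: capture a snapshot, index it, search it later. This is not Microsoft's code or API; the capture step is a made-up placeholder, the file and function names are invented for illustration, and it assumes a Python build whose bundled SQLite includes the FTS5 full-text module. As the clip notes, the real feature reportedly captures actual screenshots and stores them encrypted on the device.

import sqlite3
import time

# Local database file; in the scheme described above, nothing leaves the machine.
db = sqlite3.connect("activity_archive.db")
db.execute(
    "CREATE VIRTUAL TABLE IF NOT EXISTS snapshots "
    "USING fts5(captured_at, app, content)"  # FTS5 provides full-text search over the archive
)

def capture_snapshot():
    # Stand-in for "take a snapshot of what's on screen." A real system would
    # grab the screen, extract the visible text, and note the active application.
    return ("web browser", "comparing prices for a standing desk")

def record():
    # Append one snapshot to the local, searchable history.
    app, content = capture_snapshot()
    db.execute(
        "INSERT INTO snapshots VALUES (?, ?, ?)",
        (time.strftime("%Y-%m-%d %H:%M:%S"), app, content),
    )
    db.commit()

def search(query):
    # "What was that site I was looking at the other day?" -- full-text search.
    return db.execute(
        "SELECT captured_at, app, content FROM snapshots WHERE snapshots MATCH ?",
        (query,),
    ).fetchall()

record()
print(search("standing desk"))

The sketch is only meant to show why a searchable history of everything on your screen is both convenient and, as the host goes on to argue, a tempting target: the entire record sits in one local archive.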
That being said, I think taking snapshots of everyone's PC every few seconds and storing them in an archive is a terrifying privacy nightmare, which is exactly how it came off. Because yes, if you've ever had a Windows PC, or honestly any computer, there are times where your computer just starts acting weird, and you hope to God that it has a state it was in before.
Or sometimes something you found on your PC or a website, you don't know how to get back to, and you wish you could just, oh, what was it I was looking at the other day? All of that is useful, until it's not: when it's terrifying and falls into the hands of bad-faith actors. And listen, this immediately sparked backlash.
Tons of people were like, what the fuck? This seems like an insane privacy nightmare. Not even talking about the bloatware you'll get on laptops in general, or what devices are sending information. Microsoft responded to all this by saying that, listen, all of the snapshots that are taken by Recall will only be stored on the person's local PC. Microsoft's not going to keep any of those in the cloud. [02:01:00] In fact, none of those even reach the cloud or Microsoft at all. They're all encrypted and stored locally on your PC at all times. And you can tell the program to delete certain things, to not monitor certain apps, and IT professionals can disable Recall altogether.
Although I will say that after this backlash, I'm pretty sure Microsoft is going to bake that feature in, in some way, shape, and/or form, just to cover their bases. But listen, even if Recall doesn't share those snapshots with other users, even if Microsoft has no access to them and is not allowed to see those snapshots or use them for any targeted advertising, which is a good thing, we still live in 2024, where there are tons of data breaches all of the time, where information privacy is a huge debate, and where our machines are asking us stuff like, hey, do you want us to share diagnostics about the machine? Is, is this part of diagnostics? This isn't the 50s anymore, where you only have to worry about criminals breaking in and stealing things from your safe.
You have to worry about data thieves; that is a serious [02:02:00] issue in 2024. People get shit stolen from them all the time. It is scary. And listen, I'm not gonna give you a "scared straight" so you look over your shoulder for some creepy guy with a laptop outside of your house. I'm just saying, having a history of what websites you've been to and what stuff you're looking at on your computer in an archive can be scary. And even if it's encrypted, that doesn't mean it's uncrackable. And those are just bad-faith actors. Think about governments. And I don't want us to get too tinfoil hat here, but the U.S. and multiple other governments have dozens of surveillance programs for searching or accessing your computer and interpreting that information how they will, in a very imperfect political environment.
And look, this is not the only major AI innovation that's come out in the last, like, two, three weeks. Google has recently integrated AI into their search engine in general, and that has yielded very mixed results. I can tell you from using it for a little bit: yeah, it's actually sped up a few things, but at the same time, it's told users that they [02:03:00] should drink pee. And a lot of times it feels like it's scrubbing information from other sites and just presenting it, which Google already did, and already had a bunch of controversies for. But also, the AI is pulling information from multiple different sites to create its overall answer, which kind of hurts the sites you're searching for, doesn't it?
And oh, you think that's the only bad AI news? Guess what? We have more bad AI news. Scarlett Johansson's voice was stolen. I'm sorry, not stolen. Imitated. I'm sorry, not imitated. Strongly hinted at being imitated or stolen by the creator of ChatGPT, after a meeting with Scarlett Johansson where she was like, no, I don't want you to use my voice, and then, after it launched, the creator going, hey, do you, are you sure? Can we, can we please? And now the voice has been taken down, because look, Scarlett's like, I already starred in Her, I don't want to make it a reality, do you not see the fucking point of that movie? And she wants an investigation into it. This is what I'm trying to make the video about. There are obviously serious issues with AI in terms of content theft, [02:04:00] job displacement and disruption, privacy, and the ability for people to function.
Like, look, there are a ton of debates about that going on. And like I said, there are many people who tout AI's positives, which are positive. There are benefits. Here's the issue, regardless: it's going too fast, which is reckless. People tend to constantly send me things about AI music when it happens, right?
If you don't know, music has been dipping its toes into AI for fucking years now. You remember the Drake diss track where he used Tupac's voice on the AI? That's been available before ChatGPT. That's been available for a while now. There was a plugin that lets you sound like Kendrick or like A$AP. I remember those plugins coming out, but every time somebody sends me something from Suno or anything like that, I'm not necessarily scared, because that algorithm is trying to do something that's fundamentally not how music works. The value benefits of what it could be aren't necessarily compatible with the industry it's trying to replace. So I'm not necessarily worried about [02:05:00] that.
Even though there are some concerns, like if you, if you extrapolate it, and I'm trying to be open minded, that's not what worries me. What worries me is people doubling and tripling and quadrupling down on this technology because they think it's the gold rush, and making stupid mistakes that fuck a lot of people over and create a more dangerous precedent for the world, where none of the benefits of AI exist, people lose jobs, or information, or privacy, or vocations, or things are stolen, and all of a sudden things are way more unstable, far more reckless, with no guardrails.
And if you know what this sounds like, it sounds like fucking NFTs. And ironically, a lot of the proponents of AI are NFT bros. And listen, I don't want to compare them entirely, because these are two different things, but it's very clear that the culture from NFTs of just, just go, just rocket off, right, is pervading into AI. Which is significant. It's not fake. It's very [02:06:00] real. All of the dangers and the benefits of it are absolutely real. But when you are given, like, a magic car, and it can do everything, it can cook your breakfast, it can fly, it can do everything, and you just step on the gas and just go, look at how fast I can go!
You're gonna fucking crash! Because you haven't learned this car yet. You don't know what the dangers of it are. You don't know what the benefits of it are.
DEEPER DIVE D: THANK YOU FOR YOUR SERVICE
JAY TOMLINSON - HOST, BEST OF THE LEFT: And now deeper dive D thank you for your service.
What’s Going on With Tesla Superchargers? - Waveform: The MKBHD Podcast - Air Date 5-9-24
MARQUES BROWNLEE - HOST, WAVEFORM: Tesla laid off, they just blew up their entire supercharger team. It's just gone. Yeah. Specifically, it seems like Elon just did it one day.
DAVID IMEL - HOST, WAVEFORM: Wait, they shut down the superchargers?
MARQUES BROWNLEE - HOST, WAVEFORM: No, no, they laid off the entire Supercharger team. Now, for those unfamiliar, Tesla Superchargers are a pretty important part of Tesla, right?
Um, for when we talk about electric cars, almost every time we talk about Tesla's advantage, we talk about the Supercharger network, which is: if you need to charge this car on a road trip somewhere, the only reliable, [02:07:00] consistently reliable way to do that, from our own experience for years, has been the Supercharger network. To the point where, one by one over the past few months, basically every EV maker that ships cars to America is going to be switching their port or shipping adapters to the NACS port to use the Tesla Supercharger network, because it's such a big selling point here. So, all that being said, Tesla very suddenly laying off the entire Supercharger team, which includes all of the permitting, all of the site maintenance, all of the, uh, future planning, and basically just shutting down growth of the Supercharger network, like, nipping it right on, on that day.
ANDREW MANAGANELLI - HOST, WAVEFORM: Over 500 people.
MARQUES BROWNLEE - HOST, WAVEFORM: Yeah. Seems like a pretty bold move.
ANDREW MANAGANELLI - HOST, WAVEFORM: Yeah,
MARQUES BROWNLEE - HOST, WAVEFORM: um, I think everyone sort of assumes that it's for cost cutting and, you know, Tesla stock price is doing one thing and we want their, you know, profits to do the other thing. And so, you know, there's a lot going [02:08:00] on and pretty crazy ruthless business decisions get made by this guy all the time.
So, yeah, the supercharger team is the latest victim. Completely gone. Uh, I don't know. How do we? I feel like this is going to go.
DAVID IMEL - HOST, WAVEFORM: Yeah. It's kind of wild, because like you said, everyone is moving to NACS, and that layoff included Rebecca Tinucci, who was the senior director of EV charging and was the person that pretty much, like, handled all of the deals, convincing all of the other companies to switch to NACS.
MARQUES BROWNLEE - HOST, WAVEFORM: Yeah.
DAVID IMEL - HOST, WAVEFORM: So they kind of finished doing that and then now the entire team is out. And there were a bunch of employees that literally said they were breaking ground on new locations. And then they just got the news and all of a sudden they just have to stop.
MARQUES BROWNLEE - HOST, WAVEFORM: Yeah. So it seems like at some point we're going to feel the effects of it.
And I think it will be, well, okay. So there was an Elon tweet about how, Oh, we're stopping growth and we're focusing on just maintenance and uptime. Yeah.
ANDREW MANAGANELLI - HOST, WAVEFORM: I was just gonna say, like, my initial thought on this, and I don't follow stock prices, I don't know anything about [02:09:00] business, but when I first saw this, it felt like, oh, we built this crazy supercharging system.
Yeah. Yeah. Everybody loves it. It's the most reliable. Everybody's going to it now. Thanks for building it. Later! Like, yeah, it's easier to maintain something that you already built instead of continuing to build it, but it seems like there's more to it. Yeah.
DAVID IMEL - HOST, WAVEFORM: Well, except for the fact that the amount of cars using it is about to like five X.
ANDREW MANAGANELLI - HOST, WAVEFORM: Yes. I don't think it's a good decision, but my first thought was like, it feels like they did it all. And I feel like we see that in tech a lot. It's like, Oh, thanks for building this awesome software or something like that. Interesting. Good luck building something somewhere else. We have what we need now.
DAVID IMEL - HOST, WAVEFORM: The peace of mind, knowing that you're always going to have a supercharger nearby is so important to like ownership of an EV. Yeah.
MARQUES BROWNLEE - HOST, WAVEFORM: Sorry to interrupt. There's a lot of upcoming locations on the map where people are like, I can't wait for that one because then I can finally do this route. I can finally do this drive.
Right now it looks like those aren't going to happen. Uh huh. So, yeah, that's, that's a bummer for those.
DAVID IMEL - HOST, WAVEFORM: Yeah. He tweeted that Tesla still plans to grow the Supercharger network, just [02:10:00] at a slower pace for new locations, and more focus on a hundred percent uptime and expansion of existing locations. So I don't know if that means just adding more stalls to existing locations.
Yeah. The uptime is already pretty dang good. Like they've had very good uptime historically. Yeah.
ANDREW MANAGANELLI - HOST, WAVEFORM: I could see focusing on uptime. Like you said, if it's going to be so many more people, maybe there's a higher opportunity of the uptime not being as good, and focus on that. But even if that is the focus, firing 500 people and slowing the growth of the network? Because no matter how good the network is, if we're talking about gas cars, the network looks like crap.
MARQUES BROWNLEE - HOST, WAVEFORM: Yeah. That's the, that's the two sides of that coin. It's like, when you compare the Tesla Supercharger network to gas cars, it seems like they still have a lot of growing to do, but on the other side, when you compare the Supercharger network to everyone else, no longer needing to build up their network because they all just signed on to do NACS.
It's like, yeah, we won. No one else is going to come close to the Tesla supercharger network. No need to build it up anymore. Yeah. [02:11:00]
ANDREW MANAGANELLI - HOST, WAVEFORM: It, it feels like when we're talking about, like, even just Tesla or Rivian in-vehicle software, you're like, this is the best in-vehicle software of any car, and you're like, how does it compare to an iPad?
It's like, well, it's terrible.
DAVID IMEL - HOST, WAVEFORM: Yeah.
ANDREW MANAGANELLI - HOST, WAVEFORM: So like, yeah, there's still so much room for growth here.
When a tech company says it’s disbanding its standalone safety/ethics/alignment group - Jacob Ward - Air Date 5-19-24
JACOB WARD - HOST, JACOB WARD: I've been mostly not thinking about the tech business at all lately. Honestly, since I left NBC, I've been mostly just hanging with my kids and surfing. And that's been fantastic.
But, as I occasionally check into the news, I get more and more alarmed. And one of the things that's really alarming me right now is, OpenAI, creator of ChatGPT and the rest, which is rapidly releasing frightening new products --amazing new products, but frightening as well--has just lost the two executives in charge of what's called their super alignment team. In the AI world, super alignment is this notion that we need to--and I think no one would disagree with this--bring AI into alignment with human values before it gets out of [02:12:00] control. Well, the two executives in charge of that lofty mission have left OpenAI. Ilya Sutskever, and this guy, Jan--I've never said his name out loud, so I don't know how to say it, I only know how to spell it--Leike? Jan Leike? Lika? Anyway, Jan has gone off on X, formerly Twitter, saying that his team has been sailing against headwinds for some time inside the organization, that shiny new products have been pushing his team's concerns aside, the priorities of the organization he doesn't agree with anymore, and he says it's time to go. He says that safety is not being prioritized the way he wants it to be. He very notably pointed out a lack of computing resources made available to his team, and this is the big thing because that's what's expensive about doing AI is running these huge processing centers that do the compute to make it possible to train AI and run all of your requests for [02:13:00] cat videos and knock knock jokes.
And so he and Ilya Sutskever are out. And OpenAI for its part says they're disbanding the super alignment team. And they're instead going to integrate their work and the safety considerations into the broader landscape of OpenAI. We have seen this before. Other big companies have disbanded their standalone teams and spread them through an organization, or that's what they claim.
But what we've seen over and over again is that when you set up an adversarial think tank whose job it is to fight with the executives and try and push for different priorities than just pushing out product quarter after quarter, eventually that adversarial relationship reaches a breaking point. And in a shareholder-driven environment, I think it's just very hard to stay true to your lofty ideals about trying to support something like a super alignment team, even though the purpose of that team is [02:14:00] to keep your product from ruining the world. Literally, that's what this job is all about, in theory.
Sam Altman has replied to this departed executive also on X and basically says, I agree it's something to worry about and we're going to keep working on it.
And so anyway, I'm going to go back to surfing.
Closing Credits
JAY TOMLINSON - HOST, BEST OF THE LEFT: That's going to be it for today. As always, keep the comments coming in. I would love to hear your thoughts or questions about today's topic or anything else. You can leave a voicemail or send us a text at 202-999-3991, or simply email me at [email protected]. The additional sections of the show included clips from WiseCrack, The Majority Report, Some More News, The Daily Zeitgeist, Tech Won't Save Us, Jim Sterling, Zaid Tabani, Waveform: The MKBHD Podcast, and Jacob Ward. Further details are in the show notes.
Thanks to everyone for listening. Thanks to Deon Clark and Erin Clayton for their research work for the show and participation [02:15:00] in our bonus episodes. Thanks to our Transcriptionist Quartet, Ken, Brian, Ben, and Andrew, for their volunteer work helping put our transcripts together. Thanks to Amanda Hoffman for all of her work behind the scenes and her bonus show co-hosting. And thanks to those who already support the show by becoming a member or purchasing gift memberships. You can join them by signing up today at bestoftheleft.com/support, through our Patreon page, or from right inside the Apple podcast app. Membership is how you get instant access to our impressively good and often funny weekly bonus episodes, in addition to there being no ads and chapter markers in all of our regular episodes, all through your regular podcast player. You'll find that link in the show notes, along with a link to join our Discord community, where you can also continue the discussion.
So, coming to you from far outside the conventional wisdom of Washington DC, my name is Jay, and this has been the Best of the Left podcast, coming to you twice weekly, thanks entirely to the members and donors to the show, from bestoftheleft.com.