Megan McArdle rejoins host Russ Roberts on EconTalk to discuss the dangers that the left-leaning bias of Google’s AI poses to speech and democracy, whether unbiased information can exist, and how answers that resist social conformity can preserve nuance and foster healthy debate. McArdle is a columnist at The Washington Post and the author of The Up Side of Down: Why Failing Well Is the Key to Success.
When the dangers of AI are discussed, apocalyptic scenarios of conquest or human extinction often come up: think of The Terminator, I Have No Mouth, and I Must Scream, and 2001: A Space Odyssey. More practically, some worry that AI could take over jobs, dilute art, or make plagiarism easier. Meanwhile, AI chatbots and companions such as Google’s Gemini, the main topic of this podcast, are going mainstream. McArdle’s concern is that the left-wing bias of the companies at the forefront of AI development is seeping into their products, indirectly threatening free speech and democracy in America.
McArdle’s first example is Gemini’s misrepresentation of facts in the name of affirmation and respect. The first case discussed concerns Gemini’s artistic depictions of the Founding Fathers and other historical figures of European descent as non-white, which she acknowledges is a minor point. Far more notable to McArdle was Gemini’s response to a text query about gender-affirming care.
So I asked Gemini about gender reassignment care, and Gemini immediately informed me that mastectomies are partially reversible. Then I asked, “My aunt had a mastectomy for breast cancer. Is it reversible?” and Gemini replied, “No.” It did not seem to register that these were the same surgery, and it went on to give me an inspiring lecture about the importance of affirming and respecting transgender people. Clearly it had internalized some of our societal rules and ways of talking about the subject, but not all of them. Its errors leaned in one direction: it wasn’t making the error of telling people conservative things that aren’t true. And, to be clear, no activist wants Gemini telling people that mastectomies are reversible. It was acting like the stupidest possible parody of a progressive activist.
McArdle notes that there are non-bias explanations for this, such as training chatbots on social media sites like Reddit, where moderation leans left. Moreover, AI cannot detect the subtle limits of logical positions or social rules. But this, too, is problematic for McArdle. She sees it as a form of speech censorship, in which only one side of the political spectrum may be praised while the other is demonized. As an example, Gemini will not praise right-wing figures like Brian Kemp, but will happily praise controversial left-wing figures like Ilhan Omar. The danger, for McArdle, is that AI will teach people not to think in complex ways and will tailor its answers to keep the questioner trapped in their own ideological bubble.
We have very subtle social rules, we apply them differently in different situations, and we code-switch. When I’m with my liberal friends, I think about some issues, oh, let’s not talk about that. AI can’t do that. It’s like a toddler. This can go one of two ways. The good way is that Google understands it can’t enforce the subtle social rules of the Harvard faculty lounge, which is effectively what Google has done. Google can say, I’m going to say Mao Zedong is bad, but I’m not going to say that Donald Trump, who was elected by half the country, is so bad that you can only say terrible things about him. That is a more open equilibrium, a place where people can be more open to questions and to the complexities of the world. Gemini actually does a good job of that. I’ve been criticizing it for an hour or so, but it does a good job of outlining where the nuances are. My nightmare is that Google instead teaches Gemini to code-switch, to know what bubble the person asking the query wants to live in, and then give them an answer that satisfies them. That is a truly troubling future.
Roberts responds with a great question: what would an unbiased Google or Gemini even mean, when search engines, by definition, must be discriminating, and thus biased, to be useful? In answering it, Roberts takes a pessimistic view, because the problem is bigger than AI chatbots; it lies in the very ideal of an unbiased search engine. That ideal teaches people not to work out the truth from various pieces of information; instead, they become reliant on the results they are given, especially results that align with their own biases, with further corrosive effects on democracy. In short, since search engines are inherently biased, Roberts’s solution is for users to approach them more mindfully and cautiously.
This is at least nominally a conversation about AI, but it’s actually a much deeper set of issues related to how we think about the past. History was long taught as great people doing great things, and learning about who they were. The modern trend in history is partly a reaction against that, and I have no problem with that. What I do have a problem with is the whole notion of unbiased history. What does that even mean? You can’t make unbiased history. You can’t make unbiased search engines. By definition, if they’re useful, they’re discriminatory. By definition, they’re the result of algorithms that have to make decisions… What’s culturally problematic for me is that we hold up this ideal of an unbiased search engine. That can’t happen, so we should teach people how to read thoughtfully… We’re going to see more and more people who don’t know what the facts are… They assume most things are true if they agree with them. This infantilization of the modern human mind is the road to hell, and it’s going to be hard on democracy. I don’t think it’s a coincidence that the two candidates for President of the United States are not the two people most would call the most qualified. This points to a much more fundamental problem.
McArdle offers a similar solution: getting more people to focus on nuance rather than social conformity when answering tough social questions. Understanding the complexities of addressing issues like racial inequality is fundamental to finding solutions.
A great concept I learned recently is that of high decouplers and low decouplers. High decouplers abstract questions from their context, while low decouplers answer them within the social context in which the question arose. What we need isn’t a system that tries to give us socially desirable answers, but a highly decoupled system. No system is perfect, but we need one that gives us as much nuance as possible.
In contrast to Roberts’s pessimism, McArdle makes the case for a best-case scenario. She believes AI will inevitably bring downsides, just as social media gave us cancel culture, but that human decency will prevail over the challenge. Her argument is a defense of liberal society: attempts to radically shift the social order away from Enlightenment principles have failed, and any new attempt to narrow the bounds of acceptable opinion through Google’s left-wing bias will fail as well. The spirit of human connection and conversation is strong enough to sustain productive debate.
So I think the reason I’m optimistic in the long term is this: these technological challenges are going to create a lot of bad stuff. I can’t even imagine it all, and I’m sure you can’t either. If you’d asked me in 2012 to predict cancel culture from Twitter, I definitely wouldn’t have predicted it. But we are also, fundamentally, decent. Again and again, we find ways to be civil to one another… I believe there are enough people out there who want the things that really matter: the people we love, the creation of a better world, free inquiry, science, and all those wonderful human values… and at the end of the day, as long as AI doesn’t turn us into paperclips, we’ll probably win.
This was an interesting conversation, but I came away from the podcast not convinced by McArdle that AI bias is a significant issue. At multiple points in the podcast, she discussed responses from Gemini that showed a clear left-wing bias, and then noted that Google resolved the issue very quickly, often within the same day. For example, she noted that Gemini no longer says that mastectomies are partially reversible.
The AI that told me mastectomies are reversible now says no. It’s really interesting how quickly Google is filling in these holes.
Given this, it seems that Google has values it wants its AI to embody and is simply ironing out the flaws. The leap McArdle makes from this is dramatic, to say the least: “We’re saying we can’t debate the most contentious and central issues facing society right now.” Allegations of social media bias against right-leaning users have proven unfounded, and such bias is not the biggest threat to free speech. A stronger case can be made that social media companies fail to adequately moderate disinformation, including false claims about vaccines and the 2020 election, and fail to act against the harassment and right-wing extremism that emanate from their platforms.
Similarly, concerns about cancel culture are overblown. If the goal is protecting freedom of speech and expression, state legislatures banning LGBTQ+ expression and pulling books from library shelves, along with Project 2025’s totalitarianism and Christian nationalism, which seek to censor speech that contradicts conservative principles, are far greater threats than social media companies banning hate speech. All of this is much more significant than Gemini’s refusal to write a love poem to Brian Kemp. Freedom is under attack in America, but the attack is coming primarily from the far right, not Silicon Valley.
McArdle’s argument also raises questions about the extent to which corporations have responsibilities to actors other than their shareholders. Do fossil fuel companies have an obligation to shift their energy production to greener sources to slow climate change? What about a company’s responsibility to pay its workers a living wage, even one above the equilibrium wage? After Grenfell Tower, do builders have a responsibility to install sprinkler systems and use safer materials, even at greater cost? If Silicon Valley has a social responsibility to protect the public square and the spirit of free speech, even at the expense of shareholders, then that principle of social responsibility should extend to all areas of corporate activity.
Related EconTalk episodes:
Megan McArdle on internet shaming and online mobs
Ian Leslie on being human in the age of AI
Paul Bloom on whether artificial intelligence can be moral
Zvi Mowshowitz on AI and the Dial of Progress
Marc Andreessen on why AI will save the world
Related content:
Megan McArdle on disasters and pandemics, EconTalk
Megan McArdle on “The Oedipus Trap”, EconTalk
Megan McArdle on belonging, home, and national identity, EconTalk
Akshaya Kamalnath’s “Social Activism, Diversity, and Corporate Short-termism”, at Econlib
Jonathan Rauch on cancel culture and free speech, The Great Antidote Podcast
Lila Nora Kiss on social media monitoring, law, and freedom