Why should we ‘fight like hell’ against Big AI?


Listen to the Ideas episode (54:00): Empire of AI: Tech journalist Karen Hao

Karen Hao is a technology journalist who once worked as an engineer in Silicon Valley.

She is now a vocal critic of its AI giants and their global race to create artificial general intelligence and “scale at all costs”. She says they have created a worldwide system of power that absorbs data and intellectual property, crushes human rights, and comes at huge environmental costs.

“Fundamentally, we need to separate AI from empire. We want the benefits of AI. We absolutely cannot have it at the expense of our democracy,” Hao said in a public lecture at the University of Toronto’s Schwartz Reisman Institute for Technology and Society in March.

“People should fight like hell to make sure it’s not taken away,” she told host Nahlah Ayed in an interview.

Hao’s book, Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, examines the global impact of Big AI, from the implications of its rapid deployment to the erosion of democratic norms, and explores how we can responsibly rethink, design and deliver more purpose-driven AI systems in the future.

As she told the audience in her lecture, “Let’s dream of new forms of AI that can bring us new benefits without extraordinary costs.”

Here is an excerpt from Karen Hao’s lecture:

We need to stop thinking of (AI) companies as just businesses providing products and services. These are new forms of empire, consolidating economic and political power in historic amounts, reshaping our Earth and our geopolitics, and transforming our education systems and our future careers.

Why do I use the word “empire”? Well, AI empires operate exactly like the empires of old. These are the four parallels that I draw in my book.

First, they lay claim to resources that are not their own. This includes data of individuals, intellectual property of artists, writers and creators.

Second, they exploit extraordinary amounts of labour. This refers not only to the workers who contribute significantly to wealth creation for these companies and rarely see any commensurate value in return, but also to the workers whose jobs are automated once the technology is deployed in the world. That is a deliberate design choice: these companies choose to build their AI into labour-automating systems.

Third, they hold a monopoly on knowledge production.

What we have seen over the past decade is that the AI industry has become the primary employer and funder of AI researchers. This means they now have the ability not only to set the agenda for AI research, but also to censor and control inconvenient truths.

And so what we as the public understand about the limitations and capabilities of these AI models is filtered through the lens of what the empire wants us to know. You can imagine that if most of the world’s climate scientists were funded by fossil fuel companies, we would not have a clear picture of the climate crisis.

Fourth and finally, these companies justify their actions with moral and existential imperatives. They present themselves as a good empire on a civilizing mission to bring progress and modernity to all of humanity. They argue that if we give them access to all the data, all the resources and all the labour, they will be able to bring us a utopia, something akin to heaven. And if they are defeated by an evil empire instead, humanity goes to hell.

So the question is: is this really necessary? Do we really need empires to develop AI? Do we need empires to benefit from this technology? To answer this question, let’s consider an analogy.

Part of the challenge of talking about AI today is the complete lack of specificity in the term “artificial intelligence.” It’s like the word “transportation”: you could be talking about a bicycle or a rocket. Clearly these are different forms of transportation, designed to serve different purposes, with different cost-benefit trade-offs. AI is the same. It is an umbrella term covering many different types of technology.

So when we ask how we should benefit from AI, we need to be quite specific: which AI technologies do we want more of? Which should we have less of? And how do we redesign and continue to improve existing systems, as well as design new forms of AI, so that the benefits outweigh the disadvantages?

I argue that the types of AI systems that dominate our headlines and our imagination today, large-scale, general-purpose systems like ChatGPT, represent the worst possible trade-off in our portfolio of existing AI technologies. This is the version of AI that Silicon Valley wants us to adopt, because it is the version that enables their empire-building, and it imposes extraordinary costs on large parts of society.

If we really want AI to be more broadly beneficial, we urgently need to move away from this approach to other alternatives.

*Excerpts edited for clarity and length. This episode was produced by Lisa Godfrey.

Download the Ideas podcast to listen to this episode.
