Ben Holliday

Notes on flattening the earth

Over the past month, I’ve been reading some of the arguments back and forth between different AI sceptics and tech commentators.

A recent post on the Tech Bubble newsletter by Edward Ongweso Jr is one such example. In it, he responds to what he describes as the limits of current debates about AI: The phony comforts of useful idiots. "Useful idiots" is his label for those promoting a shallow, optimistic view of AI, which he sees as distracting from the technology's negative societal impact today.

Ongweso Jr. argues that the real danger of AI is not a future “existential risk,” but the immediate damage being done by already-deployed AI systems – whether that’s in places like criminal justice systems or welfare.

I've now heard this point made several times: people are treating "AI threats" as threats from AI agency (which is still science fiction), rather than as threats to human agency and wellbeing in the here and now – from how Big Tech is productising technology like LLMs in harmful and extractive ways. The latter is in the UK news almost every day.

There's a lot covered in that post, but the framing I found most useful is the Big Tech premise of growing powerful enough to bend the market, and even society, to your will.

This makes me think back to Amazon’s market domination as an early internet-era business. The goal is that you get so big and powerful that everything else eventually conforms to you. It’s a type of extreme disruption. You flatten the earth. You flatten everything in your way.

As explained in the post:

“Uber’s size wasn’t an indicator of whether it would usher in a sustainable and profitable industry. But Uber used its size as a signal to attract the capital necessary to reshape labour markets, public transit systems, regulatory frameworks, consumer behaviour, and urban governance into forms more hospitable for a business model that was previously illegal, unprofitable, unsustainable, and untenable.”

“Similarly, [Ed] Zitron’s close analysis of OpenAI’s finances suggests that—barring a series of increasingly unlikely maneuvers — OpenAI (and other genAI firms) will have to go the Uber route to survive: use Smaugian hoards of capital to realise legal and political reforms that force markets, consumers, competitors, clients, and governments into forms that can accommodate previously illegal, unprofitable, unsustainable, and untenable business models.”

This is a debate about the AI bubble and tech financing as a way to eventually win whole markets. Returning to how we think about AI threats: CEOs like Sam Altman spend their time talking about the wonderful things AI will do, but at the same time, they distract us. Claims about AGI or superintelligence draw attention away from the present-day problems created by how Big Tech is trying to extract growth and profits.

Frank Chimero's writing is also useful here: Beyond the Machine, where he says:

“The fact that OpenAI is encouraging everyone to fantasise about hardware with the Jony Ive announcement tells me the software side may not have enough headroom for the profits they need. Meanwhile, small, local models are good enough for a surprising number of use cases, while being cheaper, more private, and energy efficient.”

What will be left over is the important question. The end of an AI bubble won’t be the end of AI.

Last month, the BBC’s Technology Editor Zoe Kleinman – writing about the AI bubble – explained how, with any ‘correction’, the tech itself isn’t going away: “Just like the internet survived the dot-com crash, and crypto has survived its many crypto winters, I’m sure AI is here to stay.” In the same post, Kleinman then shared how she is seeing many great, and more local, stories about AI tools developed for good: to help those in care homes, in schools, people with hearing problems, bringing history to life, etc.

My bets are also on smaller, local models that work in the ways described above: effective, but within their own deliberate limitations. More open use cases and patterns will increasingly become available through day-to-day work, especially in public-facing organisations.

To be able to work in this way is part of what’s increasingly being referred to as treating AI as ‘normal’ technology – see the latest work from Arvind Narayanan and Sayash Kapoor.

Building on Narayanan and Kapoor's previous work (see my notes on AI Snake Oil): it's the tech industry that has made it difficult to discuss AI developments critically. It has trained us to use the term AI without a clear definition or understanding of what it is. This has allowed the type of business practices and distractions described here.

While the significant role of technology in our lives is inevitable, the business models and resulting societal changes driven by Big Tech do not have to be.

This is my blog where I’ve been writing for 20 years. You can follow all of my posts by subscribing to this RSS feed. You can also find me on Bluesky and LinkedIn.