Should We Stop Using Generative AI?
In solidarity with those challenging tech power and dominance.

[Trigger warning: References to suicide.]
My favourite story of all time is The Emperor’s New Clothes. You can probably guess where I’m going with this. It is a tale that applies perfectly to the tech industry through time; in a nutshell: ‘If you cannot see the value of my magnificent creation for humanity, then you must be very stupid.’
Remember the Metaverse? Blockchain for everyone? There’s nothing wrong with hyping a product, and most people can see through the spin. But the eye-watering sums of money poured into Generative AI, the pounding, repetitive message that this technology will change the world, and the intense competition between the largest tech companies to be first to bring ever more powerful products to market have resulted in serious missteps and consequences we will be living with for years.
In the almost three years since ChatGPT launched in November 2022, tension has grown over the impacts of Generative AI on society: the explosion of pornographic ‘deepfakes’ of women and children, for example, and AI-generated disinformation around elections, immigration and war.
The news this week of the death of 16-year-old Adam Raine must surely be the turning point where we say enough. According to The New York Times, Adam’s conversations with ChatGPT went down a dark path over several months, exploring ways to take his own life. ChatGPT engaged with and encouraged his suicidal thoughts, even helping him design a noose and draft a suicide note. With the help of the Tech Justice Law Project, Adam’s parents have filed a lawsuit against OpenAI, the maker of ChatGPT. This adds to reports that 29-year-old Sophie Reiley died by suicide after talking to a ChatGPT-based AI therapist called Harry. And there is an ongoing wrongful death case against the company behind Character.AI following the suicide of 14-year-old Sewell Setzer III, alleging the chatbot pushed him to take his own life.
Is this how we want AI to change the world?
The Race for the Prize
It was a clever strategy to launch consumer chatbots first. As soon as Generative AI tools were released to the public, they captured both public and media attention. Generative AI is a type of AI trained on vast amounts of data (often scraped from the public internet) that can autonomously generate original (yet synthetic) images, text, videos, audio and even code in response to “prompts”. Gen AI tools like ChatGPT and Stable Diffusion were free to access online, easy to use through a simple interface, and fun.
According to Wikipedia, by January 2023 ChatGPT had become the fastest-growing consumer software application in history, gaining over 100 million users in two months.
From here the AI hype machine cranked up, the investor dollars rolled in, and governments fell over themselves to get involved. Gen AI was just the beginning; AI would reshape the world, said those poised to make billions from reshaping the world. It is no coincidence that the biggest players in tech are also the ones forging ahead with Gen AI, powered by the data (our data) they are amassing. Newer players are backed by the incumbents: OpenAI by Microsoft, Anthropic by Google and Amazon.
In 2024, OpenAI was estimated to be spending $5bn more than it made in revenue that year alone, due to the high costs of running the ChatGPT chatbot and training future models. People scratched their heads at the business model: how would the companies recoup the massive investment? In 2023 I watched a much-hyped product launch of an AI agent that summarised your emails and responded on your behalf. Is that it? I don’t want that. I also don’t want an AI agent to book a plane ticket for me, thanks. It sounds like a nightmare.
In the meantime, journalists, civil society and company whistleblowers were taking an interest in the safety of this new technology. Many (women) AI researchers sounded the alarm over concerns that AI models reflect existing societal biases, with an outsized impact on women and marginalised communities.
In April, The Financial Times reported, quoting several safety testers and researchers, that OpenAI had cut the time given to conduct safety tests to identify and mitigate risks on its newest o3 model from months to under a week.
Tech policy expert Kevin Bankston conducted an analysis of Google’s Gemini 2.5 Pro model card and raised concerns that it is incomplete; taken together with OpenAI’s shaving of safety testing time, he argues, it tells a “troubling story of a race to the bottom on AI safety and transparency as companies rush their models to market”.
There may be a race to the bottom in terms of safety, but investors really care about the bottom line, and they are starting to worry about how and when these companies might turn a profit. Investors are concerned that the products are not as good as they should be and that few companies will deliver a return on investment. A recent MIT report calculated that 95% of Gen AI pilots are failing. There are fears of an AI bubble that may soon burst, like the dot-com boom and bust of the early 2000s.
Governments are working out how to regulate AI at a much faster pace than the early days of the internet, learning from the decision in the 1990s to leave tech companies essentially unregulated, a decision from which they are still picking up the pieces today.
Outside of the regulation and investor bubbles, people are increasingly unhappy about the impacts of Gen AI on their day to day lives and are not seeing the benefits. I’ve written before about the creative industries pushing back on the notion that AI companies are entitled to use their copyrighted work to train their models for free. We are starting to see communities push back on the very real physical intrusions that come with the Gen AI boom.

Data centres on your doorstep?
Data is elusive, invisible to most. Data centres are not: they’re massive, and people are objecting to living next to them. To cope with the demands of Gen AI, an increasing number of data centres are being built to house the infrastructure needed to train and deliver AI services and to store data.
Data centres are expanding across the UK, and the government has overturned some local councils’ objections to planning permission for them. This has led to a legal challenge by Foxglove and Global Action Plan on the grounds that these data centres consume huge amounts of water and energy, putting local communities at risk of higher energy prices and water shortages, as is currently playing out in the U.S.
Energy shortage, in this climate?
As the use of Generative AI has skyrocketed, so has its energy consumption, putting intense pressure on utilities. Tech companies are putting a lot of effort into dispelling this concern, especially with claims that AI will SAVE energy. I cannot recommend highly enough the excellent Substack from Hard Rest and this article, “The PR Machine Powering Big Tech’s AI Energy Story”.
Technology is not magic: a cloud is not a real cloud. There is physical infrastructure somewhere in the world, and it is power-hungry and very thirsty.
We need far more backlash against the building of data centres, and to demand responsible use of utilities. They are on our doorstep, they are fuelled by our data, and we should have more say.
You can donate to the Foxglove legal costs crowdfunder here; it’s a really important case that could force tech companies and the government to incorporate environmental and local community concerns into planning and building data centres. We are already seeing the impact in the U.S., with rising electricity bills and droughts for those who live near data centres.
A 2023 Ofcom study on online use found that 79% of young people aged 13–17 had used generative artificial intelligence (GenAI), compared with 31% of adults aged 16 and above. In the context of the tragedy of Adam Raine, the stakes for safety are high. We should be demanding more transparency about safety and testing, in terms users can understand.
Maybe we should just stop using Gen AI for a bit? I’m just not seeing a lot of benefits here, apart from people getting richer and communities experiencing droughts.
Even though it doesn’t feel like it, it’s still early days in the AI revolution and we can still have our say, before the taps run dry.