AI has made something strange and powerful suddenly feel normal: software that exists for one moment, one person, one task.
You can now ask an AI to build a dashboard for a meeting, a tiny CRM for a side project, a game for your child, a custom calculator, a Notion clone, a Salesforce clone, a trumpet simulator, or a useless brain-rot video that exists only to be watched once and forgotten.
TECHNICALLY, THIS IS AMAZING. BUT THE MORE IMPORTANT QUESTION IS NO LONGER “CAN WE?” IT IS “SHOULD WE?”
Because every prompt now has a hidden material life. It looks weightless on screen, but somewhere else it is compute, electricity, cooling, water, chips, data centres, grid pressure, and carbon intensity. The interface says “generate”. The physical world says “consume”. That gap is going to matter.
1/ FROM SINGLE-USE PLASTIC TO SINGLE-USE SOFTWARE
Single-use plastic did not become controversial because plastic was useless. Plastic is one of the most useful materials humans have ever made. The problem was not plastic itself. The problem was using a material designed to last hundreds of years for objects designed to last minutes.
A straw. A bag. A wrapper. A cup. The moral failure was not invention. It was a mismatch.
We took something durable, energy-intensive, and industrially complex, and applied it to throwaway moments. That is why the backlash eventually came. AI may create a similar mismatch. A powerful model that can reason, code, simulate, design, animate, search, compose, and automate is an extraordinary tool. But when we use it to answer questions we already know how to answer, generate software we do not need, or create disposable content that decays within minutes, we are doing the digital equivalent of wrapping a banana in plastic.
THE PROBLEM IS NOT AI. THE PROBLEM IS SINGLE-USE INTELLIGENCE.
2/ THE FAST FASHION OF COMPUTE
Fast fashion created a culture where clothing became cheap enough to treat as temporary. The result was not just more clothes. It was a new psychology of consumption: buy, wear once, discard, repeat.
AI is creating a similar dynamic for software and media. Software used to be expensive enough that we had to think before building it. Teams scoped things. Designers mapped workflows. Engineers chose libraries. Product managers asked whether a feature deserved to exist. Now the friction has collapsed.
That collapse is exciting. It means a child can build a game, a founder can prototype a product, a teacher can make a custom learning tool, and a small business can automate something that used to require a software team. But it also means we can casually generate things that previously would have required deliberation: a whole internal tool, a mini-SaaS, a clone of a product that already exists, ten variations of a video no one needed, or a custom app for a workflow that could have been solved with a spreadsheet.
Fast fashion did not just make clothes cheaper. It made our relationship with clothes more careless. A £5 T-shirt changed not only what people bought, but how often they bought, how quickly they discarded, and how little they expected a product to matter.
AI could do the same to software. Not because software fills landfills in the same visible way, but because it fills invisible infrastructure: data centres, GPUs, grids, storage, logs, agents, cloud services, and background processes. A disposable app may not leave a shirt in a landfill, but it can leave compute debt.
3/ THE HIDDEN COST OF “JUST ASKING”
The difficult thing about AI consumption is that a single query does not feel expensive. In many cases, it is not. Epoch AI estimates that a typical GPT-4o text query consumes roughly 0.3 watt-hours, assuming around 500 output tokens. It also notes that the figure can rise with long inputs, long outputs, or reasoning-heavy use cases. A 10,000-token input may raise the cost to around 2.5 watt-hours, while a 100,000-token input could approach 40 watt-hours.
That number is abstract, so let’s make it human. A simple AI query at 0.3 Wh is roughly equivalent to running a 10-watt LED bulb for under two minutes. It is tiny compared with making a cup of coffee. One estimate puts the electricity required to prepare a single cup of coffee with an electric kettle at around 70 Wh, meaning one coffee could equal the electricity for roughly 230 simple AI text queries.
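The comparisons above are easy to check with a few lines of arithmetic, using the figures quoted in this section (0.3 Wh per simple query, roughly 70 Wh per kettle-boiled coffee, a 10-watt LED bulb):

```python
# Back-of-envelope check of the energy comparisons above.
QUERY_WH = 0.3   # Epoch AI's estimate for a simple GPT-4o text query
COFFEE_WH = 70   # one estimate for boiling water for a single coffee
LED_WATTS = 10   # a typical LED bulb

# How long would a 10 W bulb run on one query's worth of energy?
led_minutes = QUERY_WH / LED_WATTS * 60
print(f"One query ≈ a 10 W LED for {led_minutes:.1f} minutes")  # ≈ 1.8 minutes

# How many simple queries fit into the energy budget of one coffee?
queries_per_coffee = COFFEE_WH / QUERY_WH
print(f"One coffee ≈ {queries_per_coffee:.0f} simple queries")  # ≈ 233
```

The inputs are the estimates cited above, not measurements, so the outputs inherit their uncertainty.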
So the argument is not that every prompt is an environmental disaster. It clearly is not. The issue is what happens when the behaviour scales.
The average US household uses around 10,791 kWh of electricity per year, or about 29.5 kWh per day. At 0.3 Wh per query, it would take almost 100,000 simple text queries to equal one day of electricity use for an average US home. That sounds reassuring until you remember that AI is not one person asking one question. It is hundreds of millions of people, inside billions of product interactions, often without even realising a model has been called. It is autocomplete, summarisation, search, chat, image generation, video generation, agents, copilots, customer service, code generation, moderation, recommendations, and disposable content.
The real cost is not the individual prompt. The real cost is the culture of unlimited prompts. A billion simple text queries at 0.3 Wh each would be around 300 MWh of electricity. More complex reasoning, coding, image, video, and agentic workflows can be much heavier because they involve more tokens, more steps, more retries, more tool calls, and sometimes multiple models working in sequence.
This is where the moral shape of the problem changes. A simple query is small, but a billion simple queries are infrastructure. A disposable AI video is entertainment, but a billion disposable AI videos are an energy system. A one-off tool is useful, but a culture of rebuilding everything from scratch is waste.
The International Energy Agency estimates that data centres consumed around 415 TWh of electricity in 2024, or about 1.5% of global electricity consumption, and projects this could more than double to around 945 TWh by 2030, with accelerated AI servers as a major driver of growth.
The interface makes all of these things feel the same. A tiny answer, a full app, a synthetic video, a bulk automation, and a multi-step agent all arrive through the same innocent-looking box.
TYPE. CLICK. GENERATE. BUT THE PHYSICAL WORLD SEES THE DIFFERENCE.
4/ THE BIGGER WASTE EXAMPLES
The problem becomes clearer when you move from a single query to a larger AI task. The numbers below are not exact measurements of every possible workflow, because providers do not consistently disclose model-level energy use, and the cost depends heavily on the model, hardware, context length, number of retries, generated tokens, tool calls, image or video model used, and whether the output is merely generated once or then stored, hosted, served, and iterated on. But even conservative ranges show the shape of the issue.
A simple question might consume around 0.3 Wh. A long document analysis could consume 2.5 Wh to 40 Wh, depending on context length. A coding session that uses an AI agent for dozens or hundreds of steps can quickly move from “one query” into “a small workload”. If a user asks an AI to generate a small internal tool and the system makes 100 model calls, that could be roughly 30 Wh at the simple-query level, before counting long context, code execution, package installation, browser testing, screenshots, deployment, image assets, database setup, and retries. If the workflow behaves more like a long-input coding agent, the number could be far higher.
NOW TAKE THE NOTION CLONE EXAMPLE.
A serious “build me Notion” task is not one prompt. It is hundreds or thousands of micro-decisions: documents, blocks, permissions, comments, databases, drag-and-drop, sync, search, authentication, sharing, version history, templates, mobile responsiveness, and real-time collaboration. Even a toy Notion clone might involve dozens of model calls, repeated code generation, debugging loops, UI revisions, database schema changes, and deployment attempts. If that process involves 200 lightweight calls, the raw text inference alone might be in the region of 60 Wh, roughly the electricity of making one cup of coffee. If it involves 500 heavier calls averaging 2.5 Wh because of long code context, that becomes 1.25 kWh, which is closer to running a 50-watt laptop for a full day. If it involves repeated agentic work across large codebases and long context windows, it can climb further.
But the bigger waste is not only the one-time generation. It is what happens afterwards. A generated Notion clone may need hosting, databases, logs, backups, security patches, storage, monitoring, analytics, and ongoing AI-assisted maintenance. It may also create organisational waste: another tool to onboard, another surface for data leakage, another internal system people forget how to use, another thing that breaks when the person who generated it moves on.
That is why “we rebuilt Notion” is such a perfect symbol of the moment. The compute used to create the clone may or may not be large in isolation. The deeper question is whether the act of rebuilding was necessary at all.
A one-minute brain-rot video is similar, but in a more culturally obvious way. Public, reliable numbers for AI video generation energy use are still scarce, and this lack of transparency is part of the problem. Hugging Face’s AI Energy Score exists because the industry needs comparable energy ratings for AI models and tasks, and its methodology is designed to expose energy consumption and efficiency differences across models.
What we can say safely is that video generation is not a single text query. It is many frames, high-dimensional media generation, often multiple attempts, often upscaling, often audio, often captions, often editing, and then storage and distribution. A one-minute synthetic video at 24 frames per second contains 1,440 frames. Even if a system does not generate each frame independently, the task is plainly closer to a high-compute media workload than to answering “what is 12 x 14?”
The waste compounds because brain-rot is rarely generated once. The whole genre depends on volume: generate ten hooks, five scripts, twenty thumbnails, multiple voiceovers, several versions of the video, then post, test, discard, repeat. The visible output is a one-minute clip. The invisible process may be dozens of prompts, several media generations, and an entire platform supply chain of storage, recommendation, moderation, playback, and analytics.
SO THE QUESTION IS NOT: IS THIS ONE OUTPUT INDIVIDUALLY CATASTROPHIC?
THE BETTER QUESTION IS: WHAT HAPPENS WHEN THE INTERFACE MAKES MILLIONS OF THESE OUTPUTS FEEL FREE?
5/ THE NOTION CLONE PROBLEM
THERE IS A NEW FLEX IN TECHNOLOGY: WE REBUILT X WITH AI.
We rebuilt Notion. We rebuilt Salesforce. We rebuilt Figma. We rebuilt a video editor. We rebuilt a CRM. We rebuilt a database. We rebuilt a trumpet.
And increasingly, this is not a joke. It is possible. AI has made single-use software real. A founder can generate an internal dashboard in an afternoon. A child can make a playable game from a prompt. A teacher can create a bespoke classroom tool. A small business can automate a workflow that would previously have needed an agency, a developer, or a SaaS subscription.
That is genuinely powerful. It democratises software creation in the same way desktop publishing democratised design, YouTube democratised broadcasting, and Shopify democratised commerce. But every democratising technology eventually meets the same question: when does abundance become waste?
Because sometimes the correct answer is not to rebuild Notion. It is to use Notion better. Sometimes the answer is not to remake Salesforce. It is to simplify the sales process. Sometimes the answer is not to generate a custom CRM from scratch. It is to use a spreadsheet, Airtable, HubSpot, or the tool the team already has.
The danger is that AI makes rebuilding feel more productive than reusing. It turns every small frustration into a potential software project. The button is in the wrong place? Rebuild the app. The workflow has three annoying steps? Generate a new tool. The existing product is 80% right? Make your own.
That sounds efficient, but it can become the opposite. Every generated tool still needs maintenance, security, hosting, permissions, data handling, edge-case management, documentation, and eventual replacement. A one-off app may be cheap to create, but expensive to live with. The cost is not only environmental. It is organisational. It creates more surfaces, more systems, more decisions, and more things to forget.
This is the software equivalent of fast fashion. The shirt is cheap, so you buy it. The app is cheap, so you generate it. The hidden cost comes later. That does not mean single-use software is bad. Some of it will be extraordinary. Temporary dashboards, emergency tools, learning aids, prototypes, accessibility helpers, creative instruments, simulations, and highly specific workflows are exactly where AI-generated software can shine.
But the new discipline will be knowing the difference between I need this because it creates real leverage and I am rebuilding this because the friction to do so has disappeared. In the old world, the cost of building software forced us to ask whether it mattered. In the new world, the interface may need to ask that question for us.
6/ BRAIN ROT HAS A SUPPLY CHAIN
The ugliest version of this is AI-generated junk media. The internet already had a content pollution problem. AI adds industrial-scale production to it: brain-rot videos, fake trailers, synthetic influencers, spammy explainers, automated slop channels, endlessly regenerated memes. Each item may be cheap, but the system is not free.
The exact energy cost of AI video generation is still hard to pin down publicly because providers do not consistently disclose model-level energy use. But we know the direction of travel. Text is relatively cheap. Long reasoning is more expensive. Image and video generation are generally more compute-heavy. Agentic workflows that call models repeatedly are more expensive still.
Recent research on AI image generation found that energy use can vary dramatically by model, with up to a 46x difference between the models studied, and that resolution changes can increase consumption by between 1.3x and 4.7x, depending on the model. That matters because video is effectively a sequence of generated visual content, often with additional layers of audio, editing, and serving infrastructure.
The interface hides this. A button that says “generate video” feels the same as a button that says “generate sentence”. It should not.
And the deeper problem is not only the electricity used to generate the thing. It is the entire consumption pattern around it. The generated video needs to be stored, served, recommended, watched, measured, moderated, copied, remixed, and replaced by the next generated thing. One brain-rot video is a tiny object. A global feed of infinite disposable synthetic media is not tiny at all.
This is where AI changes the economics of content pollution. Before AI, making low-quality content still required some human time. Now, the cost of production moves closer to zero. When production costs collapse, volume explodes. When volume explodes, platforms become polluted. When platforms become polluted, ranking systems work harder, moderation systems work harder, users search harder, and trust decays.
So the waste is not only environmental. It is cognitive. It fills feeds, search results, childhoods, and attention. A throwaway video can still have a supply chain. The only reason we do not see it is because the factory is hidden behind the button.
7/ THE INTERFACE IS THE INTERVENTION
The next big movement in AI interfaces may not be about making AI more powerful. It may be about making users more conscious.
Not in a preachy way. Not with guilt. Not with climate-shaming. But with simple, well-designed friction at the right moments.
Good interface design has always shaped behaviour. Supermarkets learned where to place sweets. Fast fashion apps learned how to make browsing endless. Social networks learned how to make posting frictionless. Ride-hailing apps learned how to make a car feel like a button. AI products will learn the same thing.
The question is whether they use that power only to increase consumption, or whether they use it to guide more thoughtful use.
One simple intervention is asking: do you need this now? Some AI tasks are urgent. Most are not. If a user asks for a large batch job, a long video render, a bulk rewrite, or a non-urgent agentic task, the interface could offer “run now”, “run later when compute is cheaper or cleaner”, or “run overnight”.
Another intervention is model choice, but expressed in human language. Not every question needs the biggest brain in the building. A simple equation, spelling correction, formatting task, or short summary should not automatically require a frontier model. The interface could simply say “simple task detected, using lightweight mode”, or “this may need deeper reasoning, switch to advanced mode?”
The user should not need to think in terms of parameter counts, context windows, token budgets, or inference hardware. The product should route the task appropriately.
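A sketch of what that routing might look like. The model names are hypothetical and the heuristic is deliberately crude; a real product would use a trained classifier rather than keyword checks:

```python
# Hypothetical task router: send light tasks to a small model,
# heavy ones to a frontier model. Names and thresholds are made up.
LIGHT_MODEL = "small-fast"      # hypothetical lightweight model
HEAVY_MODEL = "frontier-large"  # hypothetical frontier model

# Surface hints that a prompt probably needs deeper reasoning.
HEAVY_HINTS = ("prove", "design", "build", "analyse", "strategy")

def route(prompt: str) -> str:
    """Pick a model tier from crude surface features of the prompt."""
    if len(prompt) > 500 or any(w in prompt.lower() for w in HEAVY_HINTS):
        return HEAVY_MODEL
    return LIGHT_MODEL

print(route("Fix the spelling in: recieve"))        # small-fast
print(route("Design and build a CRM for my team"))  # frontier-large
```

The point is not the heuristic itself but where it lives: in the product, so the user never has to choose a model.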
A third intervention is reuse before rebuilding. When a user asks AI to generate a full software product, the interface could ask: do you want to build this from scratch, or start from an existing tool or template? For many cases, the greener and more efficient answer is not new software. It is reuse. This is where AI could behave less like a magical factory and more like a responsible product strategist.
A fourth intervention is a consumption label. If someone asks for ten videos, fifty image variations, or a multi-agent research task, the interface could show a simple estimate: light compute, medium compute, or heavy compute. Not precise down to the watt. Just visible enough to change behaviour.
We already accept this kind of framing elsewhere. Cars have fuel efficiency labels. Appliances have energy ratings. Food has nutrition labels. Flights show emissions estimates. Search results label ads. Social platforms label edited images. AI could label compute intensity.
THE GOAL WOULD NOT BE TO MAKE PEOPLE FEEL BAD. IT WOULD BE TO MAKE THE INVISIBLE VISIBLE.
8/ THE “THINK FIRST” BUTTON
There is another kind of interface intervention that matters, especially for children and education. Sometimes the best answer is not instant. A child asking AI for the answer to a maths problem may not need the answer immediately. They may need a hint. They may need to try. They may need the machine to slow down rather than speed up.
For simple questions, an AI interface could offer “show me the answer”, “give me a hint”, or “help me work it out”. That small choice changes the relationship. It turns AI from a vending machine into a tutor. It also reduces unnecessary compute because not every task needs a full generated response, a long chain of reasoning, or a polished final output.
This matters for adults too. A founder asking for a strategy may benefit from being challenged before receiving a 2,000-word plan. A designer asking for ten concepts may benefit from being asked which constraint matters most. A developer asking AI to build a whole product may benefit from first being shown three existing templates.
The best AI interfaces will not only answer. They will sometimes pause, compress, challenge, or ask whether the user wants the shortcut or the skill. That is not friction for its own sake. It is friction that protects attention, learning, energy, and judgement.
9/ THE NEW DESIGN PRINCIPLE: APPROPRIATE COMPUTE
The principle we need is not anti-AI. It is appropriate compute.
Use AI when it expands human capacity. Use AI when it helps someone learn, make, understand, prototype, imagine, or access something they could not otherwise do. Use AI when the value of the output justifies the cost of generation.
But do not use frontier intelligence as the default interface for every tiny action. Do not rebuild a company’s software stack just because the prompt worked. Do not generate disposable media endlessly because the marginal cost feels invisible. Do not replace every moment of thought with a model call.
This is not only an environmental argument. It is a product argument, an efficiency argument, and a cultural argument.
Single-use software will be useful. Sometimes incredibly useful. Emergency tools, temporary dashboards, bespoke education aids, one-off creative instruments, prototypes, simulations, accessibility workflows: all of these are good uses. But single-use software as a default culture could become the fast fashion of computation: cheap enough to overuse, magical enough to avoid questioning, invisible enough to feel consequence-free, and large enough, at scale, to matter.
10/ CONCLUSION: THE FUTURE IS NOT LESS AI. IT IS MORE TASTE.
The next phase of AI will not just be about who can generate more. It will be about who knows when not to.
The best AI products will not simply answer every question, build every app, and generate every possible output instantly. They will help users make better choices about when to think, when to reuse, when to wait, when to generate, and when to stop.
The interface will become a conscience layer: a small pause before waste, a lighter model for lighter work, a suggestion to reuse before rebuilding, a delayed compute option when urgency is fake, and a reminder that intelligence, even artificial intelligence, is not free.
Single-use plastic taught us that convenience has a cost. Fast fashion taught us that cheap abundance can make us careless. AI may teach us the same lesson about compute.
The question is whether we learn it early enough to design better habits into the tools themselves.