
With generative AI claiming to revolutionise the entire business ecosystem, its environmental footprint has raised serious questions, and the impact needs to be reassessed as we move towards a more tech-driven society. It is almost a double-edged sword once sustainability concerns are taken seriously. On one side, AI innovations are advancing sustainability goals, from optimizing resources to driving climate research (weather prediction, water management, and waste-monitoring technologies). On the other, training large AI models demands humongous computational power and water consumption, while the short lifespan of processing units simultaneously creates huge volumes of electronic waste. Training a single large language model like GPT-3 is estimated to emit around 552 metric tons of carbon dioxide, equivalent to 120 gasoline-powered cars driven for a year.
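That equivalence is a simple back-of-envelope calculation: assuming the commonly cited average of roughly 4.6 metric tons of carbon dioxide per passenger vehicle per year (an assumed benchmark, not a figure from the training study itself), 552 ÷ 4.6 ≈ 120 cars.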
Even as models grow more efficient, they involve ever more computation and power in tuning billions of parameters. Projections are further alarming: by 2026, global data centres are expected to use almost 1,050 terawatt-hours, roughly the electricity consumption of Japan. Beyond electricity, the cooling mechanisms deployed in data centres across the globe consume a great deal of water, which may disrupt municipal water supplies and local ecosystems, as noted in an MIT report.
Another report from 2023 revealed that training GPT-3 in US data centres could directly consume around 700,000 litres of clean water. In rough figures, each kilowatt-hour of energy used by a data centre requires about two litres of water for cooling.
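As an illustration of that rule of thumb (the workload size here is hypothetical, not a measured figure), a job drawing one megawatt-hour, i.e. 1,000 kWh, would account for roughly 1,000 × 2 = 2,000 litres of cooling water, with the actual amount varying widely by climate and cooling technology.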
Even on the consumption side, millions of users interacting with AI chatbots require cumulative electricity and power-hungry hardware. A recent analysis noted that “it could take just weeks or months for usage emissions to exceed training emissions for popular models”.
Most users are unaware of the ecological impact of constantly using and operating these AI tools. A short conversation of 20-30 questions takes around half a litre of water for cooling the data centres. The water question, in the context of generative AI, has been overshadowed by the attention paid to carbon emissions.
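Put differently, half a litre spread over 20-30 questions works out to roughly 15-25 millilitres of cooling water per question, a small amount per interaction that becomes significant at the scale of millions of daily users.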
Going beyond operational resources, the impact of these large models can be directly connected to the exploitation of rare earth materials. Training and deploying state-of-the-art models require high-end graphics processing units (GPUs) and AI chips. Around 3.85 million GPUs were produced for data centres in 2023, reinforcing concerns about electronic waste.
A 2024 projection indicates that, on the hardware side, generative AI could produce 16 million tons of e-waste by 2030. GPU production is material-heavy and energy-intensive, involving complex fabrication processes. Companies release new models almost daily, demanding that hardware be upgraded in step, and new generations of semiconductor chips are in continuous production to run ever more complex models.
Semiconductor devices depend on cobalt, lithium, and gold, and mining these materials disrupts the ecology enormously. This incurs not only environmental damage but also huge social costs. Lithium extraction across the “Lithium Triangle” (Chile, Argentina, and Bolivia) has led to severe environmental and social consequences, including habitat destruction, water depletion, and ecosystem disruption. For example, Chile’s Salar de Atacama has lost 30% of its water due to mining, threatening flamingos and other species. Displacement of indigenous communities such as the Atacameño (Lickan Antay) and Quechua peoples, who depend on these lands for water and farming, has also been observed over time, resulting in conflicts with corporations.
The haste with which generative AI is being implemented at scale has unintended environmental consequences, such as the manufacturing and disposal of hardware. Addressing AI’s sustainability dilemma entails reducing the energy consumed for computation and managing the equipment’s lifecycle, from mining and manufacturing to recycling at the end of its lifespan. Even while generative AI promotes the digital economy, it may unintentionally contribute to pollution and resource depletion if such precautions are not taken.
From a policy perspective, the European Union’s AI Act (2024) is the first structured attempt at regulation, with talk of mandatory carbon caps for AI model training and of banning excessively energy-intensive AI systems. Yet no strict emission limits have been imposed, so AI companies can still rely heavily on fossil-fuel-powered data centres. There are also enforcement challenges, since tracking AI’s environmental impact is complex.
On the other side, China’s AI policy has been among the most centralized and restrictive of the major economies, with heavy promotion of “Green Computing Power” initiatives such as energy-efficient data centres and AI-driven climate solutions like pollution monitoring. But the problem of heavy reliance on coal and water-intensive cooling persists.
Data centres are relocating to renewable-rich areas through policies like the “East Data West Computing” project, and local companies like Tencent and Alibaba claim carbon-neutral operations. However, enforcement inadequacies and the rapid development of AI could outpace sustainability initiatives. China’s strategy differs from the EU’s regulatory paradigm in that it prioritizes state-led green-tech investments over restrictive emissions limits, leaving its overall AI-environmental impact still to be worked out.