The electricity your AI tools don’t tell you about

  • Writer: Valter Hugo Muniz
  • 5 days ago
  • 3 min read
Photo created by Google Gemini

At my first AI workshop with members of KoGe, a Swiss network of faith-based organisations working on peace, justice, and inclusion across 37 countries, someone raised their hand mid-session: “But is it sustainable?”


A fair question. The honest answer, right now, is that we do not really know, because the companies behind these tools have found a legal way to fake the answer.


Clean on paper, coal in the grid 


Picture the electricity grid as a giant shared pool. Every power source pours into it: solar farms, coal plants, gas turbines. Everyone draws from the same mixed water. You cannot trace which of the electrons powering your laptop came from the sun and which came from burning coal.


Now picture a certificate system. A solar farm generates clean energy and earns one certificate for each megawatt-hour it produces. It can sell that certificate separately from the actual electricity, to anyone, anywhere. A tech company running a coal-powered data centre can buy those certificates from a solar farm on the other side of the country and legally report: our energy is clean. The solar electricity never went near their servers. The coal kept burning. But on paper, the company is green.


That is an unbundled Renewable Energy Certificate: "unbundled" because the certificate has been detached from the actual electrons it was supposed to represent. You bought the story. The coal supplied the electricity.
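To make the accounting concrete, here is a minimal sketch with invented numbers. The two methods it contrasts, location-based and market-based reporting, are real categories from the GHG Protocol's rules for purchased electricity; the figures, function names, and single-grid setup are mine, simplified for illustration.

```python
# Toy model of unbundled REC accounting. All numbers are invented.
# Location-based: count the emissions of the grid you physically draw from.
# Market-based: purchased certificates can stand in for clean supply,
# even when the renewable electricity never reached your servers.

GRID_INTENSITY_KG_PER_MWH = 800.0  # hypothetical coal-heavy local grid

def location_based_emissions(consumption_mwh: float) -> float:
    """Emissions implied by the physical grid mix actually consumed."""
    return consumption_mwh * GRID_INTENSITY_KG_PER_MWH

def market_based_emissions(consumption_mwh: float, recs_mwh: float) -> float:
    """Emissions after unbundled RECs are subtracted. Each certificate
    claims one MWh as renewable, wherever that MWh was generated."""
    uncovered_mwh = max(consumption_mwh - recs_mwh, 0.0)
    return uncovered_mwh * GRID_INTENSITY_KG_PER_MWH

consumption = 100_000  # MWh drawn from the coal-heavy grid
recs_bought = 100_000  # certificates bought from a distant solar farm

print(location_based_emissions(consumption))             # 80,000,000 kg CO2
print(market_based_emissions(consumption, recs_bought))  # 0.0 kg CO2, on paper
```

Both figures describe the same coal plant running at the same output; only the second one appears in a market-based sustainability report.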


Bloomberg Green ran the numbers in 2024. If Amazon, Microsoft, and Meta stopped counting unbundled RECs, Amazon’s reported 2022 emissions would be roughly three times higher than disclosed, a gap equivalent to the entire annual carbon footprint of Mozambique. These are not rounding errors. They are structural choices about what gets counted and what disappears.


Google phased out unbundled RECs after acknowledging they do not reflect physical reality. The rest have not followed.


Who bears the cost of AI


Those of us in the NGO and faith-based sector are not just passive consumers of AI tools. When we recommend a platform, cite a provider’s sustainability commitments, or build a case for AI investment to a board, we are implicitly vouching for the credibility of those claims. That is a different kind of exposure than a consumer clicking “agree.”


Forty-five church leaders, theologians, and academics from Africa, Asia, Latin America, the Caribbean, North America, and Europe gathered in South Korea in August 2025 to address exactly this tension. Their communiqué, produced under the World Council of Churches' NIFEA initiative, names something the carbon accounting debate almost never includes: the communities who bear the actual cost.


Data centres demand constant energy and cooling, often in regions where communities still lack access to clean water or stable electricity. The digital economy’s material footprint is real. It is just carefully located somewhere that does not show up in a corporate sustainability report.


Their framing goes further: this is not a technical problem with fixable rules. It is a pattern, the continuation of a logic that has always known how to make its costs invisible by moving them somewhere else.


The literacy NGOs and FBOs already have


The question “is this AI tool sustainable?” cannot be answered by reading a company’s climate disclosures at face value. It requires asking not just what is reported, but what the reporting is designed to show, and whose reality is not in the frame.


For NGOs and FBOs, this is familiar territory. We have spent decades learning to read development data, humanitarian statistics, and impact reports with exactly this kind of critical attention. Responsible AI adoption asks us to apply the same discipline to a new set of claims, including the ones being made by the platforms we are actively choosing to use.


Paying someone else to work out and counting it as your own fitness is not a health strategy. It is a story about health. And the difference, eventually, shows up in the body.

The question for the non-profit sector is not whether we use AI; that conversation is largely settled. The real question is whether we use it with the same rigour we demand of the causes we serve.


What does responsible AI adoption actually require from NGOs and FBOs right now? I am thinking about this actively, and would genuinely like to know what you are seeing in your own organisations.




Disclaimer: This text was copyedited using a customised Claude AI project to check style, grammar, and readability.
