How Nonprofits Can Use AI Responsibly: A Framework for Ethical AI Adoption
- Valter Hugo Muniz

For over two years, I've experimented with and reflected on how to use AI in personal and professional workflows. There are obvious but often overlooked reasons why everyone should understand not only the enormous opportunities but also the risks that large language models bring.
The changes in how we communicate, work, and exist as humans are accelerating. We can't afford to wait if we want to shape outcomes - not just to capture the benefits, but to ensure we don't become complicit in another revolution that increases global inequality and damages the planet.
As an advisor and trainer for nonprofit organizations, I stress the importance of reflecting before and during use. We need to understand the privacy risks of "free" services, the trust we place in algorithmic responses, and especially the biased outputs we receive from chatbots.
As data scientist Cathy O'Neil says: "Algorithms are opinions embedded in code. Algorithms are not objective. Algorithms are optimized to some definition of success. So, if you can imagine, if a commercial enterprise builds an algorithm, to their definition of success, it's a commercial interest. It's usually profit."
Privacy violations, bias, hallucinations - and that's not all. Many of these private companies lack transparency about how their models are trained, leaving us unable to understand how algorithms handle our data.
Then there's digital colonialism, where data is extracted from the Global South to train models that primarily benefit Northern organizations. This creates dependency on foreign technology infrastructure and reinforces power imbalances - "advanced" AI from developed countries decides what's best for communities in the Global South.
Finally, consider the climate costs. According to Food and Water Watch, by 2028, AI in the US could consume 300 terawatt-hours (TWh) of energy annually - enough to power over 28 million households - and require 720 billion gallons of water annually to cool servers, enough to meet the indoor water needs of 18.5 million households.
And I haven't even mentioned cybersecurity, scams, deepfakes, or misinformation.
Nonprofit leaders and boards must protect their organizations from AI misuse. People can learn to use AI safely, but without strong policies, clear guidelines, and ongoing training, the resulting harm can destroy a nonprofit's reputation.
As we revise our organizational cultures, we also need to decide what we want to keep human, relational, and slow - in how we connect with people and in how we think.
I've developed a practical framework - FAITHS - that helps organizations and their staff use AI responsibly and in alignment with their mission. It is a clear, ethically grounded tool for individuals and teams navigating AI adoption.
For those interested in learning more about the FAITHS framework, write to me at contact@valterhugomuniz.com.
Disclaimer: This text was copyedited using Claude, an LLM, to check grammar and readability.