Why I am now (also) an AI consultant for NGOs

  • Writer: Valter Hugo Muniz
  • Dec 12
  • 2 min read

I recently listened to a podcast where the guest stressed the importance of taking responsibility for sharing what we know with others. He was speaking specifically about Artificial Intelligence and the intergenerational gap that runs across our society, from family to work.


This is what I understood from the conversation: "If you understand AI well enough to help people, you have an obligation to share it. Not from superiority, but from solidarity."


At the beginning of the year, I got a message from a childhood friend. Just weeks earlier, he'd sent me photos of his six-month-old son—exhausted joy all over his face—talking about finally feeling like life was coming together.


A couple of months later, his message was different: "My contract just got terminated. UN agency—they're cutting 30% of staff because US funding dried up. They gave me two weeks."


This is happening everywhere. The current US administration's funding cuts are devastating organizations worldwide—big UN agencies, international NGOs, and small grassroots groups. And into this crisis walks AI, promising efficiency, optimization, and the ability to "do more with less."


But I keep asking myself: What are we trading away when we optimize our way out of resource scarcity? What parts of our humanity—our core values—are we putting on the altar of efficiency? AI is offering us a crucial moment to decide what's irreplaceable about being human in the work we do—what must remain slow, relational, attentive, even when pressure is crushing.


Without clarity, we get stuck in desperate extremes: idealistic hope that AI will maintain everything with no trade-offs, or defeatist pessimism that we'll lose our humanity no matter what. Both rob us of real choice.


But with clarity—when we understand AI well enough to ask "What does this tool optimize for, and is that what communities actually need?"—we can make intentional choices. We can use AI for genuinely administrative work while fiercely protecting the relational, trust-building work that no algorithm can replicate.


NGOs face unprecedented pressure. They're being asked to do the same work with dramatically less, while AI companies circle with promises that sound perfect when you're drowning. People like my friend—skilled, committed, with families depending on them—are losing jobs as organizations are told their only path to survival is replacing people with algorithms.


But nonprofit organizations exist because someone decided that some work is worth doing even when it's expensive, slow, and hard to scale. The best of them know that relationships matter, that trust takes time, that showing up as a human being isn't a luxury—it's the methodology.


Being told to abandon these values for efficiency isn't just a funding crisis. It's an existential crisis.


When I train NGOs in AI, I'm helping them find clarity to ask: What can we optimize without losing ourselves? Where does efficiency serve our mission, and where does it betray it?


I'm choosing to show up for organizations facing impossible choices, to help them see they have more agency than they realize. Because when we make conscious, values-driven choices about what remains irreplaceably human—even under pressure—we create work that actually transforms lives, we honor the people who do that work, and we preserve our own humanity in the process.


Learn more about what I offer at www.valterhugomuniz.com


Disclaimer: Claude LLM was used to revise grammar and readability.
