
AI regulation isn't ready. We aren't either. What can we demand?

  • Writer: Valter Hugo Muniz
  • Oct 6
  • 5 min read

Six months ago, a faith-based organization in Switzerland invited me to run a crash course on artificial intelligence, focusing on risks and opportunities. Swiss people carry a healthy skepticism, especially toward anything sold as "the solution to all problems." So I focused on what could go wrong and how to stay alert.


The session went well until the Q&A. Someone asked: "What protections exist right now? What can we rely on?" I had to be honest. "The regulations aren't fully developed yet. You need to be more careful, especially when dealing with sensitive information about vulnerable communities."


The room went quiet. Six months later, that truth holds - but now we have a clearer picture of where the gaps are and what we can do.


What FBOs risk losing in the rush to AI


I have served the nonprofit sector for a decade now. Resources shrink to a minimum, staff are overwhelmed, and operations get compromised. In this environment, AI looks miraculous. Write emails in seconds. Translate documents instantly. Produce visual content without hiring designers. The promises seduce when you're exhausted and underfunded.


What worries me: without education about AI tools, the risks become catastrophic. We're not just risking data breaches - though those matter. We're risking something deeper.

Our work brings dignity, justice, and care. Machines respond. They help. But they can't replace the heart of what we do.


Regulations: the global picture, progress and gaps


Understanding the global legislative landscape shows us where our own countries are strong, where they're weak, and what we should demand from local and national authorities.


Europe set the pace with the Artificial Intelligence Act (August 2024), sorting uses by risk: some practices are banned (social scoring), while high-risk uses (hiring, credit, biometric ID) must meet strict requirements for testing, human oversight, and clear information. Machine-generated content must be labeled.


The United States has no single national law - just a patchwork of state rules and federal guidance. Colorado and New York have specific laws. Enforcement is uneven.

Latin America builds momentum. Brazil's privacy law (LGPD, 2020) provides a baseline; national AI bills advance across the region.


Africa advances through continental coordination. The African Union's AI Strategy (2024) encourages ethical, inclusive AI as countries layer specific rules onto privacy law foundations.


Asia takes diverse paths. China has strict controls with labeling requirements. Singapore offers practical toolkits for safe innovation. Japan guides through ethical principles.

Global agreements are emerging. The Council of Europe's Framework Convention (May 2024) is the first legally binding international AI treaty. UNESCO and OECD principles set ethical baselines.


Progress is happening - uneven, fragmented, painfully slow for organizations needing answers today.


Switzerland chose its own path (February 2025): to ratify the Council of Europe Convention and fold protections into existing laws - a "use what we have, strengthen where needed" approach. Our privacy law already covers AI systems. Sector rules advance in autonomous driving, finance, and public administration.


Despite my excitement about technology, Swiss skepticism taught me to ask: what do we need to preserve as humans when technology revolutionizes everything?


What still feels broken - everywhere


Despite legislative progress, six gaps remain.


  1. People should know when a machine shaped a life-altering decision and be able to ask for human review. This right is still uneven.

  2. We urgently need reliable labels for synthetic content to fight fraud and deepfakes. For faith communities sharing stories of hope, authenticity is sacred.

  3. Government AI contracts should require impact assessments, bias testing, human oversight, and public reporting. Too often, they don't.

  4. Proportionate checks should extend beyond banks and hospitals to jobs, housing, education, and insurance, so that "fair" and "safe" mean more than empty promises.

  5. We need clear triggers for reporting AI failures and common testing methods.

  6. Organizations need simple rules on lawful data use and international transfers that protect dignity everywhere.


The paradox: AI as both risk and survival tool


The paradox keeps me awake: for faith-based organizations today, AI is both a significant risk and a critical survival tool. The world drives resources toward war and profit - not toward missions of care, justice, and dignity. In this hostile funding environment, AI offers something revolutionary: strengthening impact with less. If FBOs reject AI entirely out of fear, they risk irrelevance.


Tools that once cost thousands are now free or low-cost. FBOs can produce professional content, translate messages instantly, analyze community needs, streamline processes, and reach people who have never heard their message. But only if they learn to combine their caring, human voice and ethos with AI tools.


What education means for Faith-Based Organizations and communities


Knowledge of AI regulation won't make us safe. But it gives us clarity about where we're vulnerable and leverage to demand better.


For those working with public bodies:

  • Demand that government AI contracts include impact assessments, bias testing, human override, and public reporting

  • Request impact summaries for public systems in simple language

  • Support treaty-aligned updates; show up at public hearings


For organizations using AI internally:

  • Keep a model register: what each system does, where it can fail, how it's monitored (a minimal sketch follows this list)

  • Create one-page descriptions explaining data sources, limitations, tests done

  • Test systems before launch and after updates; log and fix failures

  • Train staff on privacy by design and human accountability - the machine is a tool, the human stays responsible
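
To make the first two bullets concrete, here is a minimal sketch of what one model register entry could look like, written as a small Python record. The field names and the example tool are illustrative assumptions on my part, not a standard - adapt them to your own systems.

    from dataclasses import dataclass, field

    @dataclass
    class ModelRegisterEntry:
        name: str            # which tool or system this entry describes
        purpose: str         # what the system does for the organization
        human_owner: str     # the person who stays accountable for it
        data_sources: list = field(default_factory=list)       # where its data comes from
        known_limitations: list = field(default_factory=list)  # where it can fail
        tests_done: list = field(default_factory=list)         # how it was checked
        last_reviewed: str = ""                                # date of the last review

    # A hypothetical entry for an imagined translation workflow
    entry = ModelRegisterEntry(
        name="Newsletter translation assistant",
        purpose="Translate community newsletters from German to French",
        human_owner="Communications coordinator",
        data_sources=["Public newsletter text only - no personal data"],
        known_limitations=["May mistranslate religious terminology"],
        tests_done=["Ten translations spot-checked by a native speaker"],
        last_reviewed="2025-03-01",
    )
    print(f"{entry.name} - owner: {entry.human_owner}, last reviewed: {entry.last_reviewed}")

A shared spreadsheet with the same columns works just as well. What matters is that for every tool in use, someone can answer what it does, where it fails, who checks it, and when it was last reviewed - and the same fields double as the one-page description from the second bullet.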


For all of us as faith communities:

  • Invest in education; organize workshops (contact me if you need help)

  • Stay engaged with local legislative processes

  • Build coalitions with civil society groups

  • Share stories - when AI helps, tell people; when it harms, speak up


Faith-based organizations will survive not by abandoning their mission, but by living it more fully with new tools in hand. Trust grows when three things are visible: privacy by design, clear information for people affected, and a human who stays accountable.


Use this knowledge to stay engaged, not to feel safe. Push governments to act. Educate ourselves and our communities. We can ensure that as AI reshapes our world, human dignity - and the care that flows from it - remains at the center.


Disclaimer: ChatGPT Deep Research was used to get an overview of the current status of AI regulations, and Claude was used to revise grammar and readability, provide feedback on narrative flow, and make this reflection more concise.




AI reflection series:

2) I thought AI was safe - until I saw these 3 hidden risks (and fixes): https://www.linkedin.com/pulse/i-thought-ai-safe-until-saw-3-hidden-risks-fixes-valter-hugo-muniz-ovmce/



