AI regulation isn't ready. We aren't either. What can we demand?
- Valter Hugo Muniz

- Oct 6
- 5 min read
Six months ago, a faith-based organization in Switzerland invited me to run a crash course on artificial intelligence, focusing on risks and opportunities. Swiss people carry healthy skepticism, especially toward anything sold as "the solution to all problems." So I focused on what could go wrong and how to stay alert.
The session went well until the Q&A. Someone asked: "What protections exist right now? What can we rely on?" I had to be honest. "The regulations aren't fully developed yet. You need to be more careful, especially when dealing with sensitive information about vulnerable communities."
The room went quiet. Six months later, that truth holds - but now we have a clearer picture of where the gaps are and what we can do.
What FBOs risk losing in the rush to AI
I have served the nonprofit sector for a decade. Resources shrink to the minimum, staff are overwhelmed, and operations get compromised. In this environment, AI looks miraculous. Write emails in seconds. Translate documents instantly. Produce visual content without hiring designers. The promises seduce when you're exhausted and underfunded.
What worries me: without education about AI tools, the risks become catastrophic. We're not just risking data breaches - though those matter. We're risking something deeper.
Our work brings dignity, justice, and care. Machines respond. They help. But they can't replace the heart of what we do.
Regulations: the global picture, progress and gaps
Understanding the global legislative landscape shows us where our own countries are strong, where they're weak, and what we should demand from local and national authorities.
Europe set the pace with the Artificial Intelligence Act (in force August 2024), which sorts uses by risk: some practices, such as social scoring, are banned outright, while high-risk uses (hiring, credit, biometric ID) must meet strict requirements - testing, human oversight and clear information. Machine-generated content must be labeled.
The United States has no single national law - just a patchwork of state rules and federal guidance. Colorado has passed a comprehensive AI law, and New York City regulates automated hiring tools. Enforcement is uneven.
Latin America builds momentum. Brazil's privacy law (LGPD, 2020) provides a baseline; national AI bills advance across the region.
Africa advances through continental coordination. The African Union's AI Strategy (2024) encourages ethical, inclusive AI as countries layer specific rules onto privacy law foundations.
Asia takes diverse paths. China has strict controls with labeling requirements. Singapore offers practical toolkits for safe innovation. Japan guides through ethical principles.
Global agreements are emerging. The Council of Europe's Framework Convention (May 2024) is the first legally binding international AI treaty. UNESCO and OECD principles set ethical baselines.
Progress is happening - uneven, fragmented, painfully slow for organizations needing answers today.
Switzerland chose its own path (February 2025): to ratify the Council of Europe Convention and fold protections into existing laws - a "use what we have, strengthen where needed" approach. Our privacy law already covers AI systems. Sector rules advance in automated driving, finance and public administration.
Despite my excitement about technology, Swiss skepticism taught me to ask: what do we need to preserve as humans when technology revolutionizes everything?
What still feels broken - everywhere
Despite legislative progress, six gaps remain.
People should know when a machine shaped a life-altering decision and be able to ask for human review. This right is still uneven.
We urgently need reliable labels for synthetic content to fight fraud and deepfakes. For faith communities sharing stories of hope, authenticity is sacred.
Government AI contracts should require impact assessments, bias testing, human oversight, and public reporting. Too often, they don't.
Proportionate checks should extend beyond banks and hospitals to jobs, housing, education and insurance, so "fair" and "safe" mean more than empty promises.
We need clear thresholds for when AI failures must be reported, and common methods for testing systems.
Organizations need simple rules on lawful data use and international transfers that protect dignity everywhere.
The paradox: AI as both risk and survival tool
The paradox keeps me awake: for faith-based organizations today, AI is both a significant risk and a critical survival tool. The world drives resources toward war and profit - not toward missions of care, justice, and dignity. In this hostile funding environment, AI offers something revolutionary: strengthening impact with less. If FBOs reject AI entirely out of fear, they risk irrelevance.
Tools that once cost thousands are now free or available at minimal cost. FBOs can produce professional content, translate messages instantly, analyze community needs, streamline processes, and reach people who have never heard their message. But only if they learn to combine their caring, human voice and ethos with AI tools.
What education means for Faith-Based Organizations and communities
Knowledge of AI regulation won't make us safe. But it gives us clarity about where we're vulnerable and leverage to demand better.
For those working with public bodies:
Demand that government AI contracts include impact assessments, bias testing, human override, and public reporting
Request impact summaries for public systems in simple language
Support treaty-aligned updates; show up at public hearings
For organizations using AI internally:
Keep a model register: what each system does, where it can fail, how it's monitored
Create one-page descriptions explaining data sources, limitations, tests done
Test systems before launch and after updates; log and fix failures
Train staff on privacy by design and human accountability - the machine is a tool, the human stays responsible
For all of us as faith communities:
Invest in education; organize workshops (contact me if you need help)
Stay engaged with local legislative processes
Build coalitions with civil society groups
Share stories - when AI helps, tell people; when it harms, speak up
Faith-based organizations will survive not by abandoning their mission, but by living it more fully with new tools in hand. Trust grows when three things are visible: privacy by design, clear information for people affected, and a human who stays accountable.
Use this knowledge to stay engaged, not to feel safe. Push governments to act. Educate ourselves and our communities. We can ensure that as AI reshapes our world, human dignity, and the care that flows from it, remain at the center.
Disclaimer: ChatGPT Deep Research was used to get an overview of the current status of AI regulations and Claude to revise grammar and readability, provide feedback about narrative flow, and make this reflection more concise.
AI reflection series:
1) Are you using AI responsibly? (Most people aren't): https://www.linkedin.com/pulse/you-using-ai-responsibly-most-people-arent-valter-hugo-muniz-t5sye/
2) I thought AI was safe - until I saw these 3 hidden risks (and fixes): https://www.linkedin.com/pulse/i-thought-ai-safe-until-saw-3-hidden-risks-fixes-valter-hugo-muniz-ovmce/
Sources (official pages & primary explainers)
European Union
EU AI Act (in force 1 Aug 2024) — European Commission: News · Commission: Policy page · European Parliament: Timeline brief
General Data Protection Regulation (GDPR) — Text & overview
Digital Services Act / Digital Markets Act — DSA overview · DMA overview
Global agreements & recommendations
Council of Europe AI Framework Convention (adopted 17 May 2024; opened 5 Sept 2024) — Convention hub · Official text (PDF)
UNESCO Recommendation on the Ethics of AI (adopted 24 Nov 2021) — Overview · Official text (PDF)
OECD AI Principles (2019; updated May 2024) — Principles overview · 2024 update: Press note
United States
Executive Order 14110 (30 Oct 2023) — Federal Register · White House (archived)
Blueprint for an AI Bill of Rights (4 Oct 2022) — OSTP page
Colorado AI Act (SB 24-205) — Bill page · Enacted text (PDF)
NYC Local Law 144 (Automated Hiring Tools) — City page (AEDT) · AEDT FAQ (PDF)
Latin America (Brazil)
LGPD — Lei 13.709/2018 (in force Sept 2020; sanctions since Aug 2021) — Official text (Planalto) · ANPD English PDF
Asia & Pacific
Japan — Social Principles of Human-Centric AI (2019) — Cabinet Office (PDF)
Singapore — Model AI Governance Framework & AI Verify — Framework (2nd ed. PDF) · PDPC resource page · AI Verify (overview) · AI Verify (user guide)
Switzerland
Federal Council decision on AI approach (12 Feb 2025) — Press release · Federal AI overview & guidance
Switzerland — signature/steps toward Council of Europe AI Convention (2025) — Signing announcement
Federal Act on Data Protection applies to AI (FDPIC, 8 May 2025) — FDPIC update · FDPIC AI hub
Ordinance on Automated Driving (in force 1 Mar 2025) — Fedlex (official text)
FINMA Guidance 08/2024 (18 Dec 2024) — Guidance (PDF) · News release · Guidance index

