I thought AI was safe - until I saw these 3 hidden risks (and fixes)
- Valter Hugo Muniz

- Sep 27
Updated: Sep 28
Safety means different things to different people. Living in a multilingual household taught me that misunderstandings happen when we don't share basic meanings. Recently, I realized my own definition was dangerously incomplete.
According to the Oxford Dictionary, safety means both "the state of being safe and protected from danger or harm" and "a place where you are safe." Both definitions presume risks we must account for in order to stay protected. But what happens when the danger is invisible, and the place we think is safest - our homes - actually makes us vulnerable?
Digital risks are hard to see because we access the internet from comfortable spaces. When our bodies feel secure, our brains don't register potential threats. This makes us less careful about protecting ourselves in digital spaces where AI tools now live.
While companies like Anthropic focus on making AI "helpful, honest, and harmless," I've discovered three critical blind spots that put users at risk - and what we can do about them.
The data trap: what you don't know is costing you
Many AI services store your prompts and may use them to improve their models. Take WhatsApp: messages between you and your contacts are end-to-end encrypted, but requests to Meta AI are processed on Meta's servers.
The simple math: sharing trivial or public information? Fine. Sharing personal details or sensitive information about others? That data is now out of your hands.
Before using any AI service, check: Can you disable training on your data? How long is your information kept? Opt-outs vary from provider to provider, and no company can promise perfect security.
Your bedroom isn't private online
Privacy means freedom from unwanted observation. Yet many people assume that browsing from their bedroom creates a safe, private space. The internet remains public, and AI follows the same rules.
Being at home doesn't change how online services handle your data, either. When you use AI tools, your text is usually sent to the company's servers. Systems have bugs, and accounts get stolen.
Even big organizations have had incidents: Samsung banned staff from using public AI tools after employees pasted internal code into ChatGPT; OpenAI disclosed a 2023 bug that briefly exposed some chat titles and limited payment data; and researchers have found 100k–225k stolen ChatGPT logins for sale on dark-web markets.
We use AI services "for free," but we pay with our data and privacy.
Brain safety: the cost nobody talks about
Here's what concerns me most these days, and it's something rarely discussed: our brain safety.
An MIT Media Lab preprint tracked 54 people over four months using EEG. Participants who wrote with an LLM showed weaker brain connectivity than those who wrote unaided; the authors call the lingering effect "cognitive debt" (it's a preprint, so the findings are preliminary). In plain terms, many participants couldn't quote their own work or recall what their essays said, which points to weakened memory for the writing they had outsourced.
These tools feed our craving for instant answers at a potential cost to our brains. The study's practical recommendation: draft your text yourself first, then ask AI to review it. Don't outsource your thinking. That reminded me of something I heard from Dr. Daniel Amen on The Diary of a CEO podcast: "Use AI to potentiate your brain, not to replace it."
Moving forward with intention
My goal with this series isn't to create fear but to spark awareness. This incredible technology is developing rapidly, but are we developing wisdom to match?
To navigate AI safely: be mindful of your inputs, avoid sharing sensitive information, check opt-out options, and remember that your brain is not replaceable.
Next, I'll explore whether AI-generated content is legal and where we stand on AI regulation.
Disclaimer: ChatGPT and Claude LLMs were used to revise grammar and readability, provide feedback about narrative flow, and make this reflection more concise.