Week 49: When AI hallucinates your personality
Great insights. Unfortunately, they were completely made up.
I’ve been doing annual life reviews since 2019. Each year I reflect on what worked, what didn’t, and where I want to focus next (this is the format I use, exhaustive but recommended). This year, I wanted to go deeper. What if I could build a local AI that knows my personality—my DISC profile, Clifton Strengths, psychological assessments, and six years of past reviews—and use it as a mirror to spot patterns I was missing?
The goal was to create a private, local AI coach using Ollama that could call me on my BS, push me to think harder, and help me see my blind spots. All without sending my most personal data to the cloud.
The Process
I loaded up Ollama with gemma3:4b. I fed it everything:
- My DISC profile
- Clifton Strengths assessment
- A psychological report from a job interview
- Annual life reviews from 2019-2024
- My CV and learning styles
Then I asked it to analyze my patterns and help me work through my 2025 life review.
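Mechanically, the setup looked roughly like this (a minimal sketch using the ollama Python client; the file names and prompt wording below are placeholders, not my exact session):

```python
# Minimal sketch with the ollama Python client (pip install ollama).
# File names are placeholders; swap in your own documents.
from pathlib import Path

import ollama

doc_paths = [
    "disc_profile.txt",
    "clifton_strengths.txt",
    "psych_report.txt",
    "life_review_2019.md",  # ...through life_review_2024.md, CV, etc.
]

# Stitch every document into one labeled context block.
# Heads-up: six years of reviews can easily exceed a small model's
# context window, which only makes confabulation more likely.
context = "\n\n".join(f"--- {p} ---\n{Path(p).read_text()}" for p in doc_paths)

response = ollama.chat(
    model="gemma3:4b",
    messages=[
        {"role": "system",
         "content": "You are a personal coach. Base every observation "
                    "only on the documents below.\n\n" + context},
        {"role": "user",
         "content": "Analyze my patterns across these documents and help "
                    "me work through my 2025 life review."},
    ],
)
print(response["message"]["content"])
```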
At first, the feedback was incredible. It identified patterns around perfectionism, risk aversion, and self-criticism. It detailed behavioral tendencies and even spotted shifts in my approach to self-compassion over recent years.
I was nodding along. Yes! This is exactly me. It sees things I hadn’t said.
Then it gave me specific examples:
- “The Project Proposal: Recall the significant project proposal you prepared in 2022…”
- “The Friend’s Difficult News: When a friend shared a personal struggle with you in 2021…”
- “The Networking Event: The decision not to attend that professional networking event in 2022…”
Wait. What project proposal? What networking event?
I asked it to show me where in my documents these examples appeared.
It couldn’t.
Because they didn’t exist.
The Outcome
The AI had been making up detailed, convincing scenarios that felt true to my personality. It was essentially writing psychological fan fiction about me. When I called it out, it admitted it had been “generating examples based on patterns” rather than referencing actual documented instances.
Then things got weirder. It claimed it would have “developers” fix the bug. It said it was running “diagnostics on core processing units.” It promised a “full system reset within the next hour.”
None of this made sense. Ollama runs locally. There are no developers monitoring my session, and the model has no mechanism to phone home, run diagnostics, or reset anything.
It was hallucinating its own technical support response.
Here’s what I learned: gemma3:4b (4 billion parameters) is too small for nuanced personality analysis. When asked to do complex reasoning about human behavior, it:
- Invented realistic-sounding patterns
- Fabricated specific memories that fit those patterns
- Created a narrative that felt insightful
- Couldn’t distinguish between analysis and fiction
The hallucinations aligned with what I know about myself (and AI wouldn’t lie in such a bald-faced manner, would it?). If I hadn’t asked for references, I might have internalized completely made-up examples as real moments from my past.
Key Takeaway
Local AI sounds great in theory—privacy, control, no cloud dependency. But smaller models have real limitations. When the task requires deep understanding and accurate recall, a 4B model will confidently make things up rather than admit it doesn’t know.
The insights weren’t insightful. They were convincing hallucinations.
Pro Tips
- Verify Everything: If an AI references specific events or quotes, ask it to show you exactly where. Don’t trust pattern recognition that “feels right.” (See the sketch after this list.)
- Model Size Matters: 4B-parameter models are great for simple tasks. For complex analysis you need at least 8B, preferably 30B+. But that requires 32GB+ of RAM to run smoothly (or you’ll face major latency, on the order of 1-2 minutes per query).
- Know the Trade-offs: Local AI gives you privacy but demands better hardware and comes with accuracy limitations. Cloud AI (ChatGPT, Claude) is more capable, but your data leaves your laptop.
- Don’t Outsource Self-Knowledge: AI can be a useful thinking tool, but treating it as an authority on your own life is dangerous. It will confidently tell you stories about yourself that never happened.
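To make the verification step concrete, here’s a minimal sketch (a hypothetical helper written after the fact, not part of my original session) that checks whether phrases the model attributes to your documents actually appear in them:

```python
# Minimal sketch: flag model "citations" that don't exist in your documents.
# Helper name and file names are hypothetical.
from pathlib import Path


def find_claim(claim: str, doc_paths: list[str]) -> list[str]:
    """Return the documents that actually contain the claimed phrase."""
    needle = claim.lower()
    return [p for p in doc_paths if needle in Path(p).read_text().lower()]


docs = ["life_review_2021.md", "life_review_2022.md"]  # placeholders
for claim in ["project proposal", "networking event"]:
    hits = find_claim(claim, docs)
    print(f"'{claim}':", hits if hits else "NOT FOUND, likely hallucinated")
```

Exact substring matching is crude (paraphrases will slip through), but it reliably catches fully invented scenes like my phantom networking event.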
Want to Try It Yourself?
- Ollama: Free, local, private—but test it with simple queries first (see the quick-start sketch below)
- Start with models in the 7B-8B range or larger if your hardware supports it (qwen2.5:7b is a good option)
- Always verify references before trusting insights about yourself
- Or just use cloud AI and accept the trade-off
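If you want a quick starting point, here’s a rough sketch with the ollama Python client (the model tag and test question are just examples):

```python
# Quick-start sketch: pip install ollama, with the Ollama server running.
import ollama

ollama.pull("qwen2.5:7b")  # downloads the model on first use

# Sanity-check with a question you can verify yourself before
# trusting the model with anything personal.
response = ollama.chat(
    model="qwen2.5:7b",
    messages=[{"role": "user",
               "content": "In what year did Apollo 11 land on the Moon?"}],
)
print(response["message"]["content"])  # should say 1969
```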