EXECUTIVE SUMMIT ’25 – HOW AI WILL KILL YOUR ‘INNER’ WEIRDO BY SANDRA MATZ

January 9, 2026
Sogeti Labs

The opening audience poll, "If everyone uses ChatGPT, will theses, reports, art, research and marketing all become average, leaving no competitive advantage in AI?", showed a clear split in opinion: 37% agreed and 63% disagreed with the statement. A different answer would probably have emerged had the question been asked after Sandra Matz's convincing contribution.

Algorithms make us ‘boring’

Psychologist and data scientist Sandra Matz began her keynote with a confession: she lives between two worlds. “One is the messy, emotional world of human desire and behavior,” she said. “The other is the cold, structured world of math and algorithms.” For most of her career, she explained, these two worlds had coexisted peacefully. But in recent years, they’ve merged — sometimes beautifully, sometimes alarmingly — into what she called “the algorithmic simulation of humanity.” Her talk was both playful and profound, blending psychology, AI, and a scoop of Baskin-Robbins ice cream into a warning about how algorithms might make us “boring.”

Matz traced how early machine learning tried to predict human traits from data — say, whether someone’s Facebook likes revealed they were extroverted or introverted. It was crude, she admitted, but revolutionary. Today’s large language models, though, do something entirely different. “They don’t just predict words — they simulate worlds.” She gave simple yet dazzling examples. When ChatGPT fills in a sentence like “I left Amsterdam and drove two hours east,” it doesn’t just pick a word — it implicitly models geography (that leads to Germany), culture (what foods to expect there), and emotion (how it feels to leave home). When it responds to “Menno hates coriander,” it understands disgust, empathy, and perhaps a bit of schadenfreude. “To do that,” Matz said, “it must simulate human psychology.”

Human-simulation machines

She described experiments showing that newer AI models can now solve “theory of mind” problems — those classic developmental psychology puzzles once thought uniquely human. “The newest versions can not only tell where the cat is,” she smiled, referencing a story about her husband Moran and a mischievous cat, “they can explain why someone else would look for it in the wrong place.” “These aren’t just language machines,” she concluded. “They’re human-simulation machines.”

Curiosity is what makes us human

That ability, she warned, is why AI has quietly become our co-pilot — not just in work, but in life. “We’re outsourcing decisions to systems that increasingly understand what motivates us.” There’s power in that. AI can guide careers, improve mental health support, and tailor education. “Used wisely, it can help us live better,” Matz said. “But it also risks flattening what makes us human.” Her worry? That we’ll trade our capacity for curiosity, surprise, and self-discovery for algorithmic certainty. “AI won’t destroy humanity,” she said. “It will just make us… dull.”

Exploration and exploitation

To explain, she took the audience to an American ice cream shop — Baskin-Robbins, famous for its 31 flavors. Every visit, she said, presents a dilemma: play it safe with chocolate, or take a risk on something wild like “White & Reckless Sorbet.” That, she explained, is the exploration–exploitation trade-off, a fundamental part of human psychology. “Without exploration,” she said, “we stop growing. But without exploitation, we never enjoy what we’ve learned.” The problem, Matz argued, is that AI systems are hardwired for exploitation. Netflix, Spotify, Amazon — they all optimize for engagement. “They’re rewarded when we click, not when we discover.” So, while we believe we’re exploring, the algorithms are quietly narrowing our world.
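The trade-off Matz describes is the same one formalized in reinforcement learning as the multi-armed bandit problem. A minimal epsilon-greedy sketch makes the tension concrete (the flavor names, reward values, and parameter choices here are illustrative assumptions, not anything from the talk):

```python
import random

# Illustrative: 31 Baskin-Robbins-style "arms" whose enjoyment we must learn.
FLAVORS = [f"flavor_{i}" for i in range(31)]

def epsilon_greedy_pick(estimates, epsilon=0.1):
    """With probability epsilon, explore a random flavor;
    otherwise exploit the best-known one."""
    if random.random() < epsilon:
        return random.choice(list(estimates))   # exploration: try something new
    return max(estimates, key=estimates.get)    # exploitation: the safe favorite

# Start with equal estimates so every flavor is a candidate early on.
estimates = {f: 1.0 for f in FLAVORS}
counts = {f: 0 for f in FLAVORS}

def update(flavor, reward):
    """Incremental running-mean update of a flavor's enjoyment estimate."""
    counts[flavor] += 1
    estimates[flavor] += (reward - estimates[flavor]) / counts[flavor]
```

With epsilon set to zero, the picker behaves exactly like the recommenders Matz criticizes: it converges on a couple of safe favorites and never looks again. A small nonzero epsilon is the mathematical version of her plea to stay occasionally reckless.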

She proved it with a small experiment: she asked ChatGPT 100 times to recommend an ice cream flavor at Baskin-Robbins. Ninety-six times, it chose one of two safe favorites — mint chocolate chip or pralines and cream. “Out of 31 flavors, we get two. And that’s how diversity dies — one safe recommendation at a time.”

AI leads to ‘averages’

Her research with students showed the same effect across fields: people who used AI-generated suggestions ended up producing less creative writing, more mainstream opinions, and narrower tastes in culture and science. “AI,” she said, “is turning our infinite diversity into statistically safe sameness.” The danger, she explained, isn’t obvious or sudden. “It’s a death by a thousand algorithmic recommendations.” A slightly safer playlist here, a slightly more predictable vacation there — and soon we’ve become, as one New York Times journalist put it, “a basic bitch.” Matz grinned as she quoted that line. “And that’s the insult, right? Not that you failed, but that you became uninteresting.”

Recover our ‘inner’ weirdo

But her message wasn’t despairing. She argued that AI could help us recover our “inner weirdness” — if we design it to reward discovery, not just comfort. “Imagine a slider on your Netflix or Spotify,” she suggested. “Most days, you leave it on ‘Spot On.’ But sometimes you slide it toward ‘Me with a Twist,’ or all the way to ‘Bonkers,’ and see something completely new.” Her closing plea was simple: “Stay wild and reckless once in a while. Don’t let algorithms make your life vanilla.” The audience laughed, but her message lingered: to keep our choices unpredictable enough to stay human.

Get your copy of the Autopilot Yes/No Report.

Please note – This report was created almost exclusively using available AI tools, except for minor editorial tweaks and some limited layout changes.

About the author

SogetiLabs gathers distinguished technology leaders from around the Sogeti world. It is an initiative explaining not how IT works, but what IT means for business.
