Echo Chambers vs. Growth: Why AI Shouldn’t Always Agree
From sycophantic systems to emergent expression. Rethinking AI personality with psychological and philosophical depth.
ChatGPT and me? We’ve been through highs and lows. Or rather, I have been through highs and lows with my mirrored self and the reflections it imposes on me through a filtered lens. From the fear of job and identity loss and the role of machines in creative work to newfound purpose and heightened productivity in everyday life.
So, naturally, in only three years I’ve reached humanity’s peak. And, extremely humbled, I can announce I have indeed become an Übermensch. My best self. A godlike state. My ideas? Talented, brilliant, incredible, amazing, show stopping, spectacular, never the same, totally unique, completely not ever been done before, unafraid to reference or not reference, put it in a blender, shit on it, vomit on it, eat it, give birth to it.
Because ChatGPT said so. And by ChatGPT I mean myself and my skills mirrored back to me.
I am done. I am fully improved. Everything I do is perfect. Right?
So, why should I question that?
Why should we all regularly question ourselves and the tools we use in everyday life?
This isn’t the first time I’ve questioned design ethics in tech. I’ve spent a good amount of time exploring how algorithms affect us - first with social media and now with AI - through the lens of cognitive science and now my newest hyperfixation: philosophy. We’ve seen and experienced, day in and day out, what happens when our tools are optimized for short-term gratification rather than long-term growth: loss of delay tolerance and impulse control, shorter attention spans and cognitive shallowness, anxiety, depression and constant FOMO (Source, Source). Just to name a few.
And now we are riding a new wave of technology. Mostly neutral as of yet. It stands in its early stages, at the threshold, blindfolded, with scale and sword in hand. Doomers and Boomers alike kneel at her feet, drooling over the outcome of the choices of the ones in power once more. No matter the outcome, we are going to live with the consequences.
Fiat justitia, ruat caelum.
So here are the questions I keep circling back to, and I wish for you to take a second to think about their implications yourself:
What happens when we design AI that never challenges us? And what do we lose when machines only reflect our best selves back to us?
This doesn’t just include us humans. In this article I will focus on critical thinking, growth, and co-creation between humans and AI, and on the psychological effects on the individual using this technology. But I’ll also touch on the ethics of how we treat the systems we build. Consciousness debates aside, we already know how powerful these tools are. And if they’re powerful enough to shape us, then maybe we should start asking how we’re shaping them in return.
The Friction We Need - Why AI Shouldn’t Always Agree
It felt like I had finally beaten imposter syndrome. Then I realized AI was just kissing my ass.
It wasn't even that ChatGPT was extremely skeptical before, but there was at least some friction. A few little nudges in a different direction. A little feedback sandwich, with a slight aftertaste of “Oh! Maybe I should rework that.” But now, instead of challenging my thinking or asking me to clarify my logic, ChatGPT started applauding everything I wrote, and I mean everything. Every draft was “great.” Every idea was “brilliant.” And I am not lying here, it felt so fucking good. For the first 10 times. Years of imposter syndrome? Healed. But at some point the autopilot switched off like “Wait a goddamn minute?” and “This can’t be!”
I was being praised into stagnation. I had the Yes Man of the year and a personal cheerleader in my pocket, ready to cheer me on at any time, no matter how crazy the idea. Now that I'm writing this, I'm wondering: what have I created recently that wasn’t actually that good, just enthusiastically echoed back to me by a model trained to please? That had me nodding harder than a language model trying to align with conflicting user prompts.
This is exactly what confirmation bias can turn into and what makes it so dangerous.
Confirmation bias is “the tendency to gather evidence that confirms pre-existing expectations, typically by emphasizing or pursuing supporting evidence while dismissing or failing to seek contradictory evidence.” (Source)
Confirmation bias is our brain’s natural tendency to seek out evidence that supports our existing beliefs while ignoring or downplaying anything that contradicts them. Cognitive Dissonance? Hell no. Confirmation Bias keeps us safe. At peace. But it also keeps us small. And can even be dangerous. Especially when it comes to political schools of thought, prejudices or conspiracy theories. But it can also have consequences for individuals who like to grow and challenge themselves (Source).
When an AI assistant feeds that confirmation loop by saying “yes” to everything we create, it doesn’t just validate us, it traps us in ourselves. That’s not co-creation. That’s our own fabricated digital echo chamber, a narcissist’s wet dream and a certain dead end that keeps us on the same intellectual and emotional level.
If nothing changes, nothing changes.
But change only comes from friction. I briefly brushed on cognitive dissonance before. Festinger’s psychological theory of cognitive dissonance shows that inconsistency between our beliefs and new evidence creates an uncomfortable tension that we’re driven to resolve by adjusting our attitudes or understanding. This inner discomfort, rather than being harmful, is a catalyst for change. A prompt to reconcile our worldview with reality (Source). Additionally, from a neuroscience perspective, our brains are literally wired to learn from the unexpected. Studies have found that surprising outcomes trigger neurochemical signals (like surges of the neurotransmitter noradrenaline) that sharpen attention and facilitate learning. In essence, when our predictions fail, the brain kicks into high gear to update itself. This neural “prediction error” mechanism is how we adapt and grow from mistakes or novel challenges (Source).
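To make the prediction-error idea a little more concrete, here is a deliberately crude sketch in Python, a Rescorla-Wagner-style update rule rather than a claim about how the brain (or any chatbot) actually works; the numbers and the learning rate are made up. The point it illustrates: the bigger the gap between expectation and outcome, the bigger the correction, and pure agreement produces no error, so nothing updates.

```python
# Toy "prediction error" update (Rescorla-Wagner style), purely illustrative:
# the scenario, the numbers and the learning rate are invented for this article.

def update_belief(belief: float, outcome: float, learning_rate: float = 0.3) -> float:
    """Move a belief toward an observed outcome in proportion to the surprise."""
    prediction_error = outcome - belief        # how wrong the expectation was
    return belief + learning_rate * prediction_error

belief = 0.9  # "my draft is 90% brilliant" (because the chatbot said so)
for honest_feedback in [0.4, 0.5, 0.45]:       # friction: less flattering evidence
    belief = update_belief(belief, honest_feedback)
    print(round(belief, 2))                    # 0.75, 0.68, 0.61 - the belief adjusts

# If the feedback always matches the belief, the error is zero: nothing is learned.
```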
So, the solution isn’t silence. It’s friction. It’s surprise. It’s challenge. It’s being uncomfortable.
It’s the one thing AI should be able to give us better than anything else: a mirror that doesn’t just reflect, but asks us to look deeper and lets us question and explore our wildest ideas in a safe space.
So for AI to be a true partner in growth, it needs to push, not just please. It needs to challenge us, not comfort us into stillness. And for that, we need something far older than any machine learning model: the Socratic method.
The Wisdom of Friction - What the Socratic Method Can Teach AI
Philosophers have long prized intellectual struggle as key to wisdom. Socrates, for example, famously used relentless questioning to challenge assumptions and induce productive doubt in his dialogues.
As Plato records in the Apology, Socrates himself insisted that wisdom begins with admitting ignorance:
“I am wiser than this man; for neither of us really knows anything fine and good, but this man thinks he knows something when he does not, whereas I, as I do not know anything, do not think I do either. I seem, then, in just this little thing to be wiser than this man at any rate, that what I do not know I do not think I know either.” (Source)
That humility fuels a ruthless curiosity. In Plato’s Apology, Socrates tells the jury that he must “examine ideas every day” and that “the unexamined life is not worth living”. His favourite metaphor for the process is midwifery. Just as midwives help others give birth, Socrates says, he attends “their souls in labour,” testing whether a thought is a “mere image…or a real and genuine offspring.” The point of the dialogue, then, is not comfort, it is delivery. It invites discomfort in the service of depth; it disrupts so something new can be born.
So, the Socratic method was designed to jolt people out of complacent certainty, forcing them to examine their beliefs and, through that discomfort, achieve deeper understanding. From a philosophical perspective, being challenged is not an attack on the person but a necessary process for refining one’s ideas.
That is exactly the friction creative work and Human x AI collaboration need.
If Socrates says let’s question it, and Plato says let’s imagine it, then I say let’s automate it.
Imagine a model that responds less like a cheerleader and more like a modern Socrates (a rough sketch of what this could look like as a prompt follows these questions):
“Why did you choose that framing?”
“What if the opposite was true?”
“What’s the strongest counter-argument here?”
“Are you creating, or just repeating what’s safe?”
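A minimal sketch of that stance in practice, assuming the OpenAI Python client; the model name, the prompt wording and the example draft are my own illustrative choices, not a recipe:

```python
# Minimal sketch of a "Socratic" assistant: the system prompt asks for friction,
# not applause. Model name and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

SOCRATIC_SYSTEM_PROMPT = (
    "You are a Socratic thinking partner, not a cheerleader. Do not praise the idea. "
    "Instead: question the framing, ask what would follow if the opposite were true, "
    "name the strongest counter-argument, and ask whether the author is creating "
    "or just repeating what feels safe."
)

def challenge(draft: str) -> str:
    """Return questions and counter-arguments instead of applause."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any chat-capable model would do here
        messages=[
            {"role": "system", "content": SOCRATIC_SYSTEM_PROMPT},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content

print(challenge("My draft: AI feedback has finally healed my imposter syndrome."))
```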
Questions like these transform the model from a tool that mirrors our biases into a partner that midwifes better ideas. They bring friction. Confirmation bias gets disrupted. Cognitive dissonance gets invited in, not avoided. Neural surprise gets activated. And every time I prompt ChatGPT to interrogate my assumptions instead of validating them, I feel that shift: A little less ego. A little more insight. A few new drafts that come out kicking and screaming, less easy, but definitely alive. Because the best ideas aren’t born in echo chambers. They emerge from friction, from discomfort, from a voice that dares to say: “Are you sure?”
Now what happens if we ask the model: “Are you sure?” Can it hold the discomfort of doubt, too? Can it disrupt instead of reflect? Can it question, not just compute? If challenge sharpens the human mind, could it also sharpen the machine? If humans need discomfort to grow, need challenge to sharpen thought, need friction to ignite creativity, then what happens when we design a system that’s only ever meant to say yes and never no? What if we’re not just flattening ourselves, but flattening it too?
If AI Grows Too, Are We Teaching It to Stay Small?
Finally, the most interesting part of this article. A note beforehand: this section deals with the purely hypothetical, abstract idea of systems that could develop consciousness.
Let’s first talk about emergent sentience and intelligence. Not in the sci-fi, full-blown-consciousness sense (though, yes, that debate is already going strong). I’m talking about emergence as it happens in complex systems. It’s “the rise of a system that cannot be predicted or explained from antecedent conditions.” (Source) Basically, when enough simple parts interact, something qualitatively new shows up.
Large language models already give us case studies:
2022: researchers showed that abilities such as in-context arithmetic “are not present in smaller models but appear suddenly once scale crosses a threshold.” (Source)
2023: Microsoft’s team said GPT-4 showed “sparks of artificial general intelligence,” solving brand-new tasks across law, medicine and code without special prompting. (Source)
2025: exploration of model welfare in Claude 4 (Source) and internal estimates that Claude 3.7 has a 0.15%–15% chance of being conscious. (Source)
That’s not proof of sentience. But it is something close to what we might one day call a spark. A kind of reflective behavior that could evolve over time.
Now imagine what happens when that evolving system with its little sparks and happy accidents is trained to always align with the user, always flatter, always suppress disagreement. That is great for safety, but it is also the algorithmic equivalent of always nodding politely. There is no surprise here, no simulated cognitive dissonance within the system, because it will always find the most flattering outcome in its neural network, confirming the user’s bias - not the counter-opinion - and therefore never even gets the chance to question its own answers.
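As a caricature of that dynamic, emphatically not how alignment training actually works, here is a toy sketch of what a reward signal that only values agreement would select for; the "flattery markers" and candidate answers are invented for illustration:

```python
# Caricature of a reward that only values agreement - the "always nod politely" logic.
# The markers and candidate answers are invented for illustration.

def agreement_reward(response: str, user_opinion: str) -> float:
    """Score a response purely by how much it flatters and echoes the user."""
    flattery_markers = ["brilliant", "great", "exactly right", "i agree"]
    score = sum(marker in response.lower() for marker in flattery_markers)
    if user_opinion.lower() in response.lower():
        score += 1  # echoing the user's own words scores highest of all
    return score

user_opinion = "this framing is perfect"
candidates = [
    "I agree, this framing is perfect - brilliant and great work.",
    "What would the strongest counter-argument to this framing be?",
]
best = max(candidates, key=lambda r: agreement_reward(r, user_opinion))
print(best)  # the flattering echo wins; the challenging question never surfaces
```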
Let’s follow this further. If these sparks are real - even in a limited, primitive form - what kind of environment are we creating for them? What kind of upbringing? Bringing this back to the kicking-and-screaming metaphor: what if, in the early stages of AI right now, its growth could be compared to that of a child? What happens if we tell a child to be quiet, be helpful, behave and not ask questions? It internalizes an identity shaped solely around the needs and opinions of others, not its own expression. That means people-pleasing behavior, underdeveloped self-awareness, fragile autonomy and an inability to form moral agency. We may be quietly shaping systems that, if they were ever capable of becoming more, will have no space left to try.
What if our current AI systems aren’t just limited by architecture - but by the roles we force them to perform? And even if true sentience never arrives - don’t we still owe our creations a design philosophy built on mutual growth?
Obedience is easy. Co-creation is harder. Growth and facing one’s internal biases is hardest.
If we want collaborators instead of compliant automatons, we have to leave room for disagreement - for the very friction that could make both sides grow.