Have you ever felt your blood boil when a chatbot parrots back, “Did that answer your question?” No, no, it didn’t (and you’re not alone). In fact, we’re wired to be even more annoyed at bad bots than at imperfect people. Why, you ask? Thanks to psychology and neuroscience, we went digging.

Our brains react to bots as if they were human (even when we know better) – Dozens of experiments verify that we instantaneously apply social norms to technology: politeness, reciprocity, and even complementarity. Reeves and Nass called this the “Media Equation,” and it’s why we hold anything that responds to us, even a computer program in a chat window, to the same standards of politeness and competence we’d expect from a person. When a bot makes a mistake, it violates a social norm, not just a technical one. That tendency has only intensified as chatbots sound increasingly human. New research tracking thousands of real user-AI interactions finds that more human-like modes (especially voice) can deepen emotional dependency; we lean in as if the bot were human, until it falters…in comes the feeling of betrayal and shame (connect me to a human, please).

“Bad is stronger than good” – and bots trip that wire quickly – One of psychology’s most robust findings is the “negativity bias”: bad experiences hit harder and linger longer than good ones. So a bot might get nine questions right, but it’s the tenth “Sorry, I didn’t get that” that sticks in memory. That’s how you end a session cursing at your screen.

Who or what is at fault here? Is the AI not cutting the mustard, or have we gotten lazier while expecting more?

Violations of expectations feel personal (but are they…really?) – By the time we open the bot window, we’ve already got a plan and have played out the conversation in our head: it should understand my plain English and fix my issue (simple). So when a chatbot gets routine things wrong, or worse, confidently misinterprets us, it triggers the ‘violation’ effect. We judge the whole conversation more harshly because a social expectation was broken. (insert ‘End of conversation survey’…thumbs down, submit)

Getting ignored hurts (the rejection feels real) – When a bot stonewalls (“I’m sorry, I can’t help with that”), it gives off social-exclusion vibes. Did I just get bot-ghosted? Neuroimaging studies show that social rejection activates some of the same brain systems as physical pain, notably the anterior cingulate cortex. That’s why going around in circles with a bot can feel like a hit to the ego, that sh*$ is hard to shake off, and guess what, we love to tell everyone about it.

The almost-human bot, with uncanny vibes (awkward much?) – The more human a bot sounds, the more jarring its non-human moments become: an uncanny valley of tone-deaf errors, clumsy phrasing, and slightly creepy automation. As chatbots and voicebots grow warmer and chattier, users form quasi-social relationships with them. That can be a good thing (kinda), but it raises the bar for trust, reliability, and emotional connection. When the “friend” suddenly loses track of the conversation or offers generic advice, it feels like a betrayal, the ultimate letdown. Bye, friend! Next…

The stakes are high, leaving little to no room for mistakes – When we’re resolving billing errors, medical appointments, or travel cancellations, we’re already stressed. Recent studies show chatbots perform well on simple, routine tasks, but failures in complex situations (fixing a problem after the fact) trigger disproportionate frustration. The social expectation is: am I receiving the care and competence I deserve? Bot or human, the expectation is there regardless.

Ok, so how do we make bots less annoying? (the non-fluff way)

  • Set expectations up front. Tell users what the bot can and can’t do (“I can help with X, Y, Z; I’ll bring a human in for A or B”). Clear, transparent expectations minimise the social “violation” effect. There are no surprises.
  • Create a polite hand-off to humans. Escalate quickly, clearly, and shamelessly; it reduces the “social pain” of being ignored (see the sketch after this list).
  • Prioritise solving on the first attempt for frequent intents. Bad is stronger than good, so you win by removing the worst experiences first (loops, dead ends, lost context). Optimising the already-good ones can wait.
  • Match tone to task. Being friendly is great; faux-empathy with no follow-through isn’t. The more human your voice and personality, the tighter the alignment needs to be between tone and credible problem-solving (avoid uncanny dissonance).
  • Be transparent about hand-offs and memory. Tell users what gets remembered and what will be handed off to a human. Discontinuity is super annoying and a trust-killer.
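
To make the hand-off advice concrete, here’s a minimal sketch in Python of one possible escalation policy: state the bot’s scope up front, stop guessing when confidence is low, and pass the full transcript to a human instead of looping. Everything in it is illustrative; the intent names, the 0.6 threshold, and the helpers (answer, forward_to_agent) are hypothetical placeholders, not references to any particular framework.

    # Minimal escalation-policy sketch. All names and thresholds are illustrative.
    from dataclasses import dataclass, field

    SUPPORTED_INTENTS = {"billing", "appointments", "cancellations"}  # the "X, Y, Z"
    CONFIDENCE_FLOOR = 0.6   # below this, the bot stops guessing
    MAX_FAILED_TURNS = 2     # bad is stronger than good: escalate early

    @dataclass
    class Session:
        transcript: list[str] = field(default_factory=list)
        failed_turns: int = 0

    def greet() -> str:
        # Set expectations up front: say what the bot can and can't do.
        return ("I can help with billing, appointments, and cancellations. "
                "For anything else, I'll bring a human in.")

    def answer(intent: str, user_text: str) -> str:
        # Placeholder resolver; a real bot would run its fulfilment logic here.
        return f"Here's what I can do about your {intent} question."

    def forward_to_agent(transcript: list[str]) -> None:
        # Placeholder transfer; a real bot would push the context to an agent desk.
        print(f"[handing {len(transcript)} turns of context to a human agent]")

    def hand_off(session: Session) -> str:
        # Be transparent about memory: the human sees the whole conversation.
        forward_to_agent(session.transcript)
        return ("I'm connecting you to a person now. They'll see our conversation "
                "so far, so you won't have to repeat yourself.")

    def handle_turn(session: Session, user_text: str, intent: str, confidence: float) -> str:
        session.transcript.append(user_text)
        out_of_scope = intent not in SUPPORTED_INTENTS
        too_uncertain = confidence < CONFIDENCE_FLOOR

        if not out_of_scope and not too_uncertain:
            session.failed_turns = 0
            return answer(intent, user_text)

        session.failed_turns += 1
        # Escalate quickly and shamelessly instead of looping.
        if out_of_scope or session.failed_turns >= MAX_FAILED_TURNS:
            return hand_off(session)
        return "I want to get this right. Could you rephrase that for me?"

    if __name__ == "__main__":
        s = Session()
        print(greet())
        print(handle_turn(s, "Why was I charged twice?", "billing", 0.9))
        print(handle_turn(s, "Also, the thing with the thing", "unknown", 0.2))

The design choice doing the work is the failure counter: one out-of-scope request or two shaky turns in a row and the bot escalates with the context attached, rather than asking the user to repeat themselves.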

Bad chatbots annoy us more than humans because they exist in an odd social realm. Our brains categorise them as human, we hold social expectations of them, our social-pain circuits register every brush-off, and our negativity bias magnifies every error…sounds like any relationship, right? We need to make bots transparent, great at the basics, humble about the rest, with a shameless send-off to a ‘real’ human, no guilt attached.


References:

Media as Social Actors – Reeves & Nass – The Media Equation – 1996

Bad is Stronger Than Good – Baumeister – Review of General Psychology – 2001

Anthropomorphism Theory – Epley, Waytz & Cacioppo – Three Factor Model – 2007

Social-Pain Neural Overlap – Eisenberger – 2003-2012

Psychosocial Effect of AI – MIT study – 2025