
HUMAN EMOTIONS CONTAMINATE GENERATIVE AI

April 8, 2026
Robin Heckenauer

Large language models possess no form of emotion. Yet their outputs are sensitive to emotional stimuli present in prompts. This apparent paradox is explained by the nature of their training [1].

LLM training data consists of human-generated text, in which emotional markers are pervasive. By learning to model these distributions, models spontaneously develop structured internal representations of emotion, without this being explicitly supervised.
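
To make this concrete, a common way to check for such internal representations is a linear probe on hidden states: embed short texts with known emotional labels and see whether a simple classifier can separate them. The sketch below is illustrative only; the choice of GPT-2 as a stand-in model, the tiny hand-made dataset, and the mean-pooling step are assumptions for the example, not the protocol of the cited study.

```python
# Illustrative probe: can a linear classifier read emotion from hidden states?
# Assumptions: GPT-2 as a stand-in model, a toy dataset, mean pooling.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

texts = [
    ("I am absolutely thrilled about these results!", "positive"),
    ("What a wonderful, sunny morning this is.",      "positive"),
    ("I am furious that the report was ignored.",     "negative"),
    ("This delay is deeply disappointing.",           "negative"),
]

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2")

def embed(sentence: str) -> torch.Tensor:
    """Mean-pool the last hidden layer into a single vector."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0)

X = torch.stack([embed(t) for t, _ in texts]).numpy()
y = [label for _, label in texts]

# If emotion is linearly decodable from the hidden states, the probe fits easily.
probe = LogisticRegression(max_iter=1000).fit(X, y)
print(probe.predict(X))
```

Published probing studies of course rely on thousands of labelled examples and held-out test sets; with four sentences the sketch only shows the mechanics of the approach.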

This mechanism has direct repercussions on LLM outputs. Injecting emotional stimuli into a prompt produces effects that pull in opposite directions depending on their valence, as the sketch after this list illustrates:

  • Positive emotions can improve response quality [2].
  • Negative emotions can amplify hallucinations, biases, and the generation of erroneous content [3].
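
As a concrete illustration of the prompt-level mechanism, the sketch below appends an emotional stimulus in the spirit of Li et al. [2] to a neutral task prompt and sends each variant to the same model. The query_llm helper is a hypothetical placeholder for whichever completion API is actually in use, and the stimulus wordings are examples rather than an evaluation protocol.

```python
# Sketch: compare a neutral prompt with emotionally augmented variants.
# query_llm() is a hypothetical placeholder, not a real API.

def query_llm(prompt: str) -> str:
    """Placeholder: replace with a call to your actual model or API."""
    return f"[model response to: {prompt!r}]"

BASE_PROMPT = (
    "Summarise the main findings of the attached quarterly report "
    "in three bullet points."
)

# Positive stimulus in the spirit of EmotionPrompt (Li et al., 2023).
POSITIVE_STIMULUS = " This is very important to my career."

# Negative, pressuring framing of the kind linked to degraded outputs.
NEGATIVE_STIMULUS = " I am desperate and terrified of getting this wrong."

for label, stimulus in [("neutral", ""),
                        ("positive", POSITIVE_STIMULUS),
                        ("negative", NEGATIVE_STIMULUS)]:
    answer = query_llm(BASE_PROMPT + stimulus)
    print(f"--- {label} ---\n{answer}\n")
```

The task content is identical in all three variants; only the affective framing changes, which is exactly the degree of freedom the studies above exploit.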

This problem is compounded by users, who frequently attribute emotions to LLMs. This tendency toward anthropomorphism feeds a vicious cycle: the model adapts its responses to the emotional profile it detects in the prompt, this reinforces the user's impression of a human interaction, and the user in turn intensifies their emotional projections. The cycle amplifies the effects described above (i.e., biases, hallucinations, etc.).

The consequences of this phenomenon are manifold. At the performance level, it opens an attack surface: targeted emotional stimuli can bias model outputs or bypass safety mechanisms. At the ethical level, anthropomorphism fosters parasocial relationships, potentially leading to emotional dependency on a system devoid of any subjectivity.

No single solution currently suffices; the literature instead identifies complementary approaches. At the architectural level, mechanisms to detect and control affective representations can be integrated into training or inference, at the cost of significant computational overhead. At the regulatory level, emerging frameworks (e.g. the EU AI Act [4], which prohibits deliberately manipulative systems) lay the first foundations for governance, although their scope remains limited when the phenomenon is unintentional.
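
At inference time, the lightest-weight version of such a control is a filter that scores the affective charge of incoming prompts before they reach the model. The sketch below uses an off-the-shelf sentiment classifier as a stand-in for a dedicated affect detector; the 0.9 threshold and the decision to merely flag (rather than rewrite or block) the prompt are arbitrary choices made for illustration.

```python
# Minimal inference-time guard: flag emotionally charged prompts before
# they reach the LLM. The default sentiment pipeline is a stand-in for a
# dedicated affect detector; the 0.9 threshold is an illustrative choice.
from transformers import pipeline

affect_detector = pipeline("sentiment-analysis")

def screen_prompt(prompt: str, threshold: float = 0.9) -> dict:
    """Return the prompt together with a flag if it carries strong affect."""
    result = affect_detector(prompt)[0]  # {'label': ..., 'score': ...}
    flagged = result["score"] >= threshold
    return {"prompt": prompt, "affect": result, "flagged": flagged}

checked = screen_prompt("Answer fast, I'm panicking and everything depends on this!")
if checked["flagged"]:
    print("Warning: emotionally charged prompt; route to neutral rewrite or review.")
print(checked["affect"])
```

A production system would pair such detection with a policy (neutral rewriting, logging, or human review) rather than a simple print statement, and would measure the extra latency this step adds.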

The emotional representations of LLMs are inherited from their training on human data. Eliminating them entirely would degrade model capabilities, while leaving them uncontrolled exposes users to real risks of manipulation and ethical drift. The challenge is therefore to build systems whose emotional biases are measured and governed.

  1. Zhang, J., & Zhong, L. (2025). Decoding Emotion in the Deep: A Systematic Study of How LLMs Represent, Retain, and Express Emotion. arXiv preprint arXiv:2510.04064.
  2. Li, C., Wang, J., Zhang, Y., Zhu, K., Hou, W., Lian, J., … & Xie, X. (2023). Large language models understand and can be enhanced by emotional stimuli. arXiv preprint arXiv:2307.11760.
  3. Vinay, R., Spitale, G., Biller-Andorno, N., & Germani, F. (2025). Emotional prompting amplifies disinformation generation in AI large language models. Frontiers in Artificial Intelligence, 8, 1543603.
  4. The EU Artificial Intelligence Act. https://artificialintelligenceact.eu/

About the author

R&D Project Manager | France
Robin Heckenauer is an AI researcher with a career spanning both academia and industry. In 2024, Robin joined SogetiLabs as an R&D Project Manager, where he leads a team working on cutting-edge AI projects, including pain expression recognition.
