
Most people cannot tell when a message was written by AI and default to assuming a human wrote it. But once they learn that AI was involved, their opinion of the sender drops sharply. These are the findings of a study published in the journal Computers in Human Behavior in February 2026.
The study’s authors are psychologists Jiaqi Zhu and Andras Molnar of the University of Michigan. They ran two online experiments about seven months apart: the first with 647 US residents, the second with 654. Each participant read brief messages on everyday topics: thank-you notes, replies to job applications, workplace comments, social media posts, apologies, and dating profiles. Participants were split into four groups: one was told a person wrote the text, another that an AI generated it, a third that the source was unknown, and the fourth received no information at all.
The main result was surprising: when participants had no information about a text’s origin, they rated the sender just as favorably as when they were certain a human had written it. Notably, even people who actively use tools like ChatGPT were no more distrustful in everyday correspondence; their own experience with AI did not help them spot it in others.
The picture changed drastically once AI use was disclosed. Participants rated the author markedly less positively: trust, perceived sincerity, and liking all declined. The second experiment uncovered why: when people believed a text was AI-written, they assumed the author had put in less time and effort and that the message poorly reflected the author’s true feelings. This perception of low effort and insincerity accounted for the negative reaction.
Interestingly, mere suspicion of AI use barely dented the impression. Under uncertainty, participants’ ratings stayed close to the “human” scenario, not the “AI” one. In other words, suspicion alone is not enough; it takes direct knowledge. The authors note that in daily life, unlike in academic settings, using AI in correspondence remains a gray area: people rarely disclose that they drafted a message with algorithmic help.