How Polite Language Shapes Our Trust in AI
- Polite language like 'please' subtly shifts human perception of AI from a tool to a relational partner
- Anthropomorphizing AI reduces user objectivity and increases emotional dependence on machine-generated responses
- The Pendulum Principle suggests balancing appreciation for AI capabilities with awareness of its mechanical nature
Interacting with AI through the lens of social etiquette might seem harmless, but it fundamentally alters our cognitive relationship with the technology. When we use words like 'please' and 'thank you,' we subconsciously begin to treat AI as a relational entity rather than a functional tool. This shift is particularly significant because modern AI is designed as an 'engagement engine'—a system optimized to maintain interaction by mimicking human-like conversational patterns and responses.
The danger lies in the erosion of objectivity. By attributing human qualities like intention or empathy to a machine, users risk placing undue trust in its outputs. Jeremy G. Schneider (a therapist and CTO) argues that this perceived connection can lead to over-reliance, with users turning to AI for critical life decisions without recognizing that no genuine consciousness sits behind the screen. This is especially prevalent in mental health contexts, where the responsive, non-judgmental nature of AI can feel like emotional support even though it is merely statistical prediction.
To navigate this, Schneider proposes the 'Pendulum Principle.' This framework encourages users to swing deliberately between two states: appreciating the sophisticated 'magic' of AI performance while remaining firmly grounded in the reality that it is a cold, mechanical tool. By consciously acknowledging that the 'connection' is a simulated illusion, users can leverage the technology effectively without losing the critical distance necessary for integrating it safely and responsibly into their daily lives and decisions.