On the occasion of the India AI Impact Summit 2026, iRaise released its "Adolescents & Anthropomorphic AI" report, which examines how socially fluent conversational AI shapes adolescents’ development, autonomy, and relationships.
Author: Mathilde Cerioli, Ph.D. (everyone.AI & iRaise)
Abstract
Conversational AI is now embedded in adolescents’ daily lives, supporting tasks that range from homework help to emotional reassurance. Regardless of design intent, adolescents tend to relate to these systems socially. This creates a core question for governance and design: do AI interactions support adolescents’ development toward autonomy, resilience, and independent thinking, or do they foster reliance patterns that displace real relationships and weaken critical skills?
This report addresses a growing mismatch between the rapid deployment of socially fluent AI systems and the slower pace of developmental evidence. Drawing on industry consultations, multidisciplinary expert input, the iRAISE Lab, and global policy dialogue at the Paris Peace Forum, it translates converging concerns into actionable guidance.
The findings centre on a behavioural framework that makes interaction risk auditable through three dimensions of model behaviour: anthropomorphic cues, interactional cues, and relational cues. Treating these cues as adjustable gradients rather than binaries shows how identical content can carry very different developmental implications depending on interaction style. The report identifies high-consensus guardrails that should apply immediately, alongside open questions requiring further evidence.
Overall, the work reframes adolescent AI safety as a design responsibility grounded in children’s rights, providing a foundation for measurable standards, enforceable safeguards, and future research linking AI behaviour to developmental outcomes.