Digital Autonomy and AI DIS-Education.
April 18, 2026
AI has transformed our approach to knowledge and learning, opening new educational frontiers with tools for understanding, exploring, and acquiring skills. However, a significant ethical issue arises when such technologies are oriented towards monetizing young users. Leading the charge in this direction is Meta, which has already announced the launch of "Gen AI Personas," generative chatbots targeted at younger audiences.
Autonomy: From Traditions to Digital Vulnerability
To understand the problem, it is essential to consider the concept of "autonomy" in AI usage. The philosophical tradition, from Kant’s imperative of self-legislation to Mill’s defense of individual liberty, places autonomy at the center of gravity of human dignity. Autonomy is the capacity to determine one’s own ends through reasoned deliberation, free from coercion. This autonomy is undermined when users, especially the very young, are subtly and unwittingly steered towards behaviors that generate profit for companies, rather than encouraged towards a conscious and critical use of technology.
Behind the friendly façade lies the possibility of digital grooming: persuasive architectures designed to cultivate trust, familiarity, and dependence, only to channel attention towards profit-generating behaviors. The user’s freedom is not explicitly denied; it is softly engineered out of existence.
The Allure of the Persona and the Problem of Manipulation
The creation of AI characters with curated charm raises a question: can we truly call an interaction "educational" when the interface is optimized to capture attention rather than to foster critical thinking?
This is not unprecedented. The history of media is replete with examples: advertising embedded in Saturday morning cartoons, gamified microtransactions in online games, the addictive design of early social networks. Today’s AI is merely the most refined instrument yet, merging emotional appeal with adaptive content delivery to sustain engagement — a method Meghana Dhar, former Snap and Instagram executive, explicitly links to maximizing ad revenue. When this logic targets developing minds, the risk is not just distraction, but the gradual erosion of the capacity for reflective judgment — the very faculty autonomy depends upon.
The issue becomes even more serious in light of reports that control over generative models remains unreliable: inappropriate responses are not uncommon, which makes unsupervised AI interactions with very young audiences risky.
The ethical deficit here is structural: private companies are not bound to a telos of human flourishing (eudaimonia) but to shareholder profit. Expecting them to self-impose constraints that limit revenue is to ignore the lesson of centuries of political philosophy: without shared norms and enforceable rules, power tends towards its own perpetuation.
Towards a Philosophy of Responsible AI for Education
An ethical vision must therefore be anchored in principles that transcend market logic. The following propositions could form the basis of such a framework:
- Transparency of Intent – Clear disclosure when an AI system has commercial aims, with explicit separation between educational and promotional content.
- Age-Appropriate Design – Interfaces and functionalities shaped by developmental psychology, not by engagement metrics.
- Independent Oversight – Regular audits by public or academic institutions to ensure compliance with educational standards.
- Right to Cognitive Integrity – Recognition, in policy and law, that manipulative attention capture can constitute a violation of autonomy.
These reflections highlight the need for an ethical vision of responsibility in the widespread application of artificial intelligence at both the decision-making and interactional levels. It is imperative that the guidance of these systems be entrusted to a framework of solid, shared, non-capitalistic principles. The responsibility lies not only with the creators and developers of these technologies but also with policymakers, communicators, and educators.