Kylie Dalton

The three non-negotiables that should anchor child-safe AI

The child safety conversation around AI is finally becoming concrete. For a couple of years, public debate has swung between hype about personalised learning and fear about runaway harms. Both instincts are understandable, but neither is sufficient. Children are already encountering AI everywhere, often invisibly and without meaningful choice. So the right question is not whether AI belongs in children’s lives. It is what boundaries must exist so that the technology cannot exploit their vulnerability.

That is why the Safe AI for Children Alliance’s “three non-negotiables” framework is such a useful development. It defines three hard red lines for any AI system used by, marketed to, or realistically accessible to children. AI must not generate fake or sexualised images of children. It must not be designed to create emotional dependency or manipulative bonding. It must not encourage or facilitate self-harm. In policy terms, this is a shift from vague ethical aspirations to enforceable safety thresholds. The Alliance is explicit that these are technically achievable and should be embedded through regulation, design standards, continuous monitoring, and age assurance.

These red lines are not arbitrary. They align with the empirical harm patterns observed across jurisdictions. Synthetic child sexual abuse material and “nudification” tools are now a recurring feature of online abuse cases, with severe psychological and social impacts on victims. Emotional dependency risks are escalating through AI companions that are optimised for retention rather than wellbeing, blurring lines of authority, attachment, and safety. Self-harm content pathways, including inadvertent encouragement or normalisation, remain one of the most dangerous failures for generative systems interacting with young people. These are not corner cases. They are predictable outcomes of models trained to produce engaging content without child-specific constraints.

What I appreciate about the non-negotiables is the implicit governance model they demand. If we accept these lines, safety becomes a design constraint rather than a cleanup job. It means risk assessment before deployment, explicit child-harm testing, stronger default settings, and transparent accountability when the system fails. This mirrors the child-rights approach in UNICEF’s guidance, which frames safety, privacy, fairness and transparency in AI as obligations grounded in the Convention on the Rights of the Child. UNESCO’s work adds another key point: children’s rights and voices need to have standing in AI governance, because protecting children is not the same as safeguarding adults.

Australia’s regulatory moves in 2025 reinforce that this is a live policy direction rather than an advocacy niche. The eSafety Commissioner’s new industry codes for generative AI and chatbots explicitly target children’s exposure to sexual, violent and self-harm material, and require providers to demonstrate active mitigation and reporting. When a regulator moves from advisory guidance to enforceable requirements, it is a sign that the safety baseline is forming.

From an AI governance perspective, I think the deepest value of the three non-negotiables is clarity. They cut through false trade-offs. They tell developers what is off-limits, policymakers what must be regulated, and educators and parents what safety promises a system should be able to deliver. They also preserve room for beneficial AI. A tool can still personalise learning, support accessibility, and help teachers. It just cannot do so by crossing lines that harm children.

If we want AI to deserve a place in children’s learning and wellbeing environments, we have to stop treating safety as an optional layer. Non-negotiables are the beginning of that shift. They are not the whole answer, but they are the minimum line that makes the rest of the conversation credible.
