THE AIMS+ FRAMEWORK FOR CREATIVITY, METACOGNITION, AND CHILD-SAFE AI DESIGN
Much of the conversation around AI for children still focuses too narrowly on the model itself. Is it smart enough? Is it safe enough? Is it accurate enough? These questions matter, but they are no longer sufficient. The next generation of child-facing AI will not be defined only by better foundation models. It will be defined by better system design.
For children, an AI experience is never just the model. It is the model, the interface, the surrounding safeguards, the role of trusted adults, the context of use, and the product form itself. Children do not encounter artificial intelligence as an abstract technical capability. They encounter it as a voice, a prompt, a toy, a canvas, a suggestion, a workflow, or a moment of help. What matters is not only what the AI can do, but how the whole experience shapes agency, attention, learning, creativity, and reflection.
At FINH, we believe the most useful way to think about child-facing AI is not as a model problem, but as a systems design problem. To make that more practical, we use a simple evaluation toolkit we call AIMS+.
AIMS+ stands for:
Agent — what role is the AI playing?
Interface — how is the child interacting with it?
Mentors — how are parents, teachers, or trusted adults designed into the experience?
Shape — what is the overall structure and form of the experience?
+ — the additional system conditions that make child-facing AI viable in practice, including safety, orchestration, governance, and deployment reality.
We like AIMS+ because it shifts the discussion away from model capability in isolation and toward the quality of the full system around the child. It creates a more practical way to evaluate whether an AI experience is likely to be empowering, developmental, safe, and actually useful.
WHY A FRAMEWORK LIKE AIMS+ MATTERS
One of the recurring mistakes in AI product design is assuming that if the intelligence improves, the product automatically improves. In reality, the same model can produce radically different outcomes depending on the role it is given, the interface it appears through, the presence or absence of trusted adults, and the form in which the child experiences it.
A highly capable model can still create a poor child experience if it is given the wrong job. A safe model can still feel disempowering if the interface removes too much agency. A creative tool can still fail developmentally if no role is given to parents or teachers. And a well-designed software experience may still underperform if the use case really calls for a more bounded, embodied, or intentional format.
This is why we think AIMS+ matters. It helps teams move from asking “is the model good?” to asking a much better question: “is the system well designed for a child?”
That is a more useful design question, a more useful product question, and ultimately a more useful business question too.
A IS FOR AGENT
The first lens is Agent.
Different use cases require different AI roles. A child may need a creative spark in one moment, a making copilot in another, a tutor in another, and a reflective coach in another. These are not the same thing, and they should not be designed as though they are.
A brainstorming partner should behave differently from a step-by-step guide. A making copilot should behave differently from a reflective coach. A tutor may need to scaffold and explain, while a creative partner may need to provoke, suggest, and then step back. A reflective agent may be most useful after the act of creation, helping a child think about what they made, why they made it, and what they might change next time.
This is one of the most important but under-discussed choices in child-facing AI. Too many products still assume a single generic assistant can serve every purpose. In reality, the quality of the experience often depends on matching the right agent behavior to the right developmental moment.
Under the AIMS+ framework, one of the first questions should be: is this the right AI role for this child, in this context, for this task?
I IS FOR INTERFACE
The second lens is Interface.
Interface design shapes how children experience agency. A conversational assistant creates one kind of relationship. A visual studio creates another. A guided workflow creates another. Voice, touch, physical controls, and embodied interaction all shift the nature of the experience.
For children, this is not a cosmetic choice. It has direct consequences for confidence, focus, safety, and developmental value.
A chat interface may be excellent for ideation, questioning, and reflection. A visual studio may be better for active making. A guided workflow may work best where structure is needed. A voice interface may reduce friction for younger children. A physical interface may create more intentional use patterns and reduce distraction or drift.
The same underlying intelligence can feel empowering in one interface and overbearing in another. That is why interface design should be treated as a core part of the intelligence design, not a wrapper placed around it afterwards.
Under AIMS+, the key question becomes: does this interface help the child stay active, confident, and appropriately in control?
M IS FOR MENTORS
The third lens is Mentors.
We use the word mentors deliberately because “human in the loop” is correct, but too cold and mechanical for the reality of children’s lives. In practice, the adults around the child are not just oversight mechanisms. They can be parents, teachers, facilitators, reviewers, interpreters, and conversation extenders.
That distinction matters.
Too often, adults are included only as supervisors or rule-enforcers. But some of the most powerful child-facing systems may be the ones where trusted adults are designed into the loop in lightweight, meaningful ways. A parent might help extend a creative conversation at home. A teacher might help contextualise an AI-assisted output inside classroom goals. An adult might help the child reflect, revisit, or build confidence around something they made.
In many cases, the developmental value of the system depends not just on the child-AI interaction, but on how effectively it connects to the human relationships around the child.
Under AIMS+, the question is: how are mentors designed into this system, not as blockers, but as enablers of better outcomes?
S IS FOR SHAPE
The fourth lens is Shape.
By shape, we mean the overall structure and form of the experience. This includes the product form, the delivery mode, the degree of boundedness, and whether the system is best expressed as software, hardware, or some combination of both.
We like the word shape because it is broader than form factor alone. It captures not only what the product is, but how the experience is structured around the child.
Some child-facing AI experiences are best delivered as software. Others may be stronger as dedicated devices, constrained tools, or hybrid hardware-software systems. A flexible app can provide reach and iteration speed, but it may also sit inside a more distracting ecosystem. A dedicated device may appear more limited, but that limitation can be exactly what makes it developmentally appropriate. A hybrid system may create stronger family participation, more physical interaction, or better focus than software alone.
Shape also includes the rhythm of the experience. Is it open-ended or guided? Solitary or collaborative? Fast and generative, or slow and reflective? Always available, or intentionally bounded? These structural choices strongly influence what children actually do.
Under AIMS+, the question becomes: what shape best supports the kind of childhood interaction and developmental outcome we want?
THE PLUS MATTERS TOO
The plus in AIMS+ matters because child-facing AI does not succeed on design alone. It also depends on everything that makes those experiences viable and responsible in practice.
That includes safety systems, age-appropriate orchestration, governance, moderation, parental controls, deployment environments, and the operational realities of running live products used by children and families.
In other words, AIMS covers the core design lenses, and the plus reminds us that design must sit inside a responsible system.
This is particularly important in child-facing AI because the distance between prototype and real-world deployment is large. It is relatively easy to imagine a compelling child-facing AI demo. It is much harder to build a system that can actually be deployed safely, governed responsibly, and improved over time in real settings.
The plus is there to stop the framework from becoming too theoretical. It reminds us that good child-facing AI must not only be well designed, but well operated.
WHY CREATIVITY AND METACOGNITION SIT AT THE CENTRE
We believe this framework matters most in experiences centred around creativity and metacognition.
The most valuable AI experiences for children may not be the ones that generate the fastest output. They may be the ones that help children think more clearly about what they are making, why they are making it, and how to improve it.
That is where creativity and metacognition come together.
A strong child-facing AI system should help children generate ideas with confidence, make creative choices more intentionally, reflect on what they made and why, iterate on their work, and stay active creators rather than passive recipients of AI output.
Metacognition is what turns creative activity into developmental value. It is what helps a child move from simply producing something to understanding their own process. That is why, when evaluating AI systems for children, we should not just ask whether the child made something. We should ask whether the system helped the child think.
The AIMS+ framework is useful here because each of its lenses affects whether that reflection happens. The wrong agent can do too much. The wrong interface can make the child passive. The absence of mentors can mean reflection never extends beyond the screen. The wrong shape can undermine focus or engagement. And without the plus — the surrounding safety and system design — even a promising experience may fail in practice.
WHY WE ARE WELL PLACED TO EXPLORE THIS
At FINH, we are in a distinctive position because we do not approach child-facing AI from a single angle. Across FINH, AstroSafe, and DIY.org, we operate a three-part ecosystem that allows us to think about these systems holistically, not in isolation, and to put them into practice in the real world.
FINH brings product invention: the ability to explore new concepts, interaction models, and hardware or software experiences designed specifically for kids and families.
AstroSafe brings the child-safe AI infrastructure layer: moderation, parental controls, safe search, age-appropriate orchestration, and governance systems required to deploy child-facing AI responsibly.
DIY.org brings live deployment: a real consumer platform where children are already creating, learning, and engaging with AI-powered experiences, giving us direct visibility into behaviour, engagement, and what actually works in practice.
Together, these three capabilities create more than a set of adjacent businesses. They form a connected system for designing, testing, and deploying child-facing AI end to end. That means we can apply a framework like AIMS+ not just conceptually, but practically: through invention, infrastructure, deployment, iteration, and learning.
This is important because the future of AI for children will not be shaped by any one layer alone. It will be shaped by how these layers work together. Owning capabilities across all three gives us a stronger foundation to understand that interplay, design more thoughtfully, and move from theory to applied product reality.
A PRACTICAL TOOLKIT FOR EVALUATING CHILD-FACING AI
One of the reasons we like AIMS+ is that it can be used not only to design new products, but also to evaluate existing ones.
For any child-facing AI system, teams can ask:
Agent — Is the AI playing the right role for the intended developmental moment?
Interface — Does the interface preserve agency and match the child’s mode of engagement?
Mentors — Are parents, teachers, or trusted adults meaningfully designed into the loop?
Shape — Is the structure and product form helping or hurting the intended outcome?
Plus — Are the safety, governance, orchestration, and deployment conditions in place to support responsible use in the real world?
That will not answer everything, but it creates a far better starting point than evaluating the model alone.
It also turns a vague conversation about “AI for kids” into something more concrete. Teams can use it to assess product ideas, compare concepts, identify weak points in an experience, and make more intentional design decisions.
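For teams that want to make the checklist above operational, it can be expressed as a simple review structure. The sketch below is illustrative only; the class, lens names, and example product are assumptions for demonstration, not part of any real FINH or AstroSafe API.

```python
# A minimal sketch of the AIMS+ checklist as a reviewable data structure.
# All names here are illustrative, not part of any real API.
from dataclasses import dataclass, field

LENSES = ("Agent", "Interface", "Mentors", "Shape", "Plus")

@dataclass
class AimsPlusReview:
    """Holds a yes/no answer and a short note for each AIMS+ lens."""
    product: str
    answers: dict = field(default_factory=dict)  # lens -> (passed, note)

    def record(self, lens: str, passed: bool, note: str = "") -> None:
        if lens not in LENSES:
            raise ValueError(f"Unknown lens: {lens}")
        self.answers[lens] = (passed, note)

    def weak_points(self) -> list:
        """Lenses answered 'no' -- candidates for redesign."""
        return [lens for lens, (ok, _) in self.answers.items() if not ok]

    def complete(self) -> bool:
        """True once every lens has been reviewed."""
        return all(lens in self.answers for lens in LENSES)

# Example: reviewing a hypothetical drawing app for children.
review = AimsPlusReview(product="Sketchpad Jr.")
review.record("Agent", True, "Acts as a creative spark, not a step-by-step guide")
review.record("Interface", True, "Visual studio keeps the child actively making")
review.record("Mentors", False, "No role designed for parents or teachers")
review.record("Shape", True, "Bounded sessions, reflective pacing")
review.record("Plus", False, "Moderation and parental controls not yet in place")

print(review.weak_points())  # -> ['Mentors', 'Plus']
```

Even a lightweight structure like this makes comparisons between concepts concrete: two product ideas can be reviewed against the same five lenses, and the weak points surface immediately.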
CLOSING
The future of AI for children will not be won by whichever company has the most capable model. It will be shaped by those who can design complete systems grounded in a deeper understanding of childhood.
That means designing with Agent, Interface, Mentors, Shape, and the wider system conditions around them in mind.
If we get this right, child-facing AI will not simply help children produce more. It will help them think better, create more intentionally, and grow with greater confidence in their own ideas.
That is the opportunity we are most interested in building toward.