I recently shared a newsletter with my community and decided to turn it into a blog post, because the discussion and responses it sparked validated just how important these conversations really are.
AI is becoming a bigger part of our kids’ lives. From homework helpers and video apps to chatbots that feel “friendly,” parents are left scrambling to figure out what’s safe, what’s risky, and what actually helps our kids. I recently had the opportunity to sit in on a webinar with UNICEF, where they presented their Global Guidance on AI for Children. While it’s written for policymakers and tech designers, the core ideas are completely relevant to parents too, so I wanted to share them with you.
First, why this matters to you
UNICEF’s guidance highlights that AI systems are everywhere kids go online, and how these systems are designed and governed can either support kids’ rights and well-being or introduce real harms, from data misuse and bias to harmful content and emotional dependency. I think we as parents understand this and see it more clearly every day.
The Big 10 Requirements
UNICEF lays out 10 principles that should guide how AI works for kids, whether it’s a homework tool, social app, game, or something more. These aren’t rules you control, but they are ideas you can look for and talk about at home as part of building your child's digital literacy toolkit. The more prepared kids are, and the more openly these conversations happen, the better the outcomes we'll see.
AI needs guardrails before it reaches kids
AI shouldn’t be left unregulated; systems kids interact with should have oversight that protects them. Yes, big tech should be held accountable for doing the right thing by our kids, but we can still set up guardrails at home and at school.
Safety first
That means design that prevents harm, like inappropriate outputs, scams, or unsafe interactions. We need to be having conversations with our kids about potential safety concerns, what they might look like, and how to handle risky situations. It all comes back to media literacy and digital citizenship.
Kids’ data and privacy matter
If a platform uses AI, it should protect personal information and not exploit or share data without consent. Talk to your kids about this. Do they know the platform's policy on this? Do they understand what information they are sharing and how it's being used?
Fairness and non-bias
AI shouldn’t reinforce harmful stereotypes or treat some kids differently because of who they are. This is a big one, and it takes into account where kids are globally, economically, socially, and more.
Explainability and accountability
You and your child should be able to understand how an AI tool works and who’s responsible if things go wrong. This is what I hope to help parents and kids with in Tech Healthy Families and the resources I provide you in the Parent Hub and on Teachers Pay Teachers.
Human and child rights at the center
Tools should respect kids’ rights, not erode them, including freedom of expression and dignity. Kids have spoken: many still feel that AI, social media, and other platforms are valuable to their lives in so many ways, but they DO believe big tech has a responsibility to them.
Support well-being and growth
AI should help kids learn, create, and explore without sacrificing mental health or social skills. We need to help kids understand when manipulation or emotional dependency might be happening, while also acknowledging that AI can indeed be beneficial for well-being if developed and used the right way.
Inclusion for all children
AI should work for kids across cultures, abilities, and contexts, not just a small subset. Again, we can't leave out areas that might be marginalized, coming back to equal access to resources.
Prepare kids for an AI world
Kids deserve chances to understand AI, not just be shaped by it. We want to build up their digital literacy skills so they're self-sufficient and able to navigate AI with confidence, not over-rely on it.
Supportive environment
Communities, schools, and families should have the tools and knowledge to keep children safe with AI. We know a lot of schools are ramping up AI support, but not all schools around the world have that ability, nor do all families have the resources or knowledge to support their kids. It's a community effort to make this happen.
What’s new in this 2025 update
UNICEF expanded the guidance to respond to things parents are actually seeing: generative AI that makes convincing content, AI “companions,” AI in gaming communities, and even worrying developments like AI-generated explicit content or exploitation material.
Practical Tips for Families to Practice UNICEF's Guidelines at Home
Here’s what you can do today, in your conversations and habits at home:
1. Talk openly about AI with your kids
Make “AI” something you explore together, not something you have to police. Ask what tools they use and what they think those tools are doing. We really want kids to participate thoughtfully in these conversations and feel involved. Right now, AI is created for adults and talked about among adults, and more often than not, kids' voices are left out.
2. Build basic AI literacy (without fear)
Ask questions like:
“How might this tool use your info?”
“Does it feel helpful, or just entertaining?”
“Who made this tool, and why?”
"What safety features are standard, or have to be turned on?"
That’s true digital fluency: it prepares kids for the future and builds their understanding of digital safety as well. Provide spaces for them to give feedback and share their knowledge. They really do have great insights when it comes to talking about the tools and platforms they use.
3. Set standards that match the UNICEF principles
When choosing apps or tools, ask:
"Is your privacy respected?"
"Can you explain how or why the AI made something?"
"Does it feel safe and fair?"
"How does it impact your mental health or support well-being?"
If a tool doesn’t pass those checks, maybe skip it. AI can be empowering, but kids need to make sure it isn't compromising their safety and well-being.
4. Use AI together as a family activity
Turn it into a learning moment. Tech Healthy Families has a lot of resources for both families and teachers. Taking the time to ask questions, dig in, and work through scenarios or feedback together helps our kids build critical thinking. Up-skilling is key right now, so investing the time to learn about and understand AI will only benefit you all.
5. Principles of digital literacy are still foundational
Now is not the time to forget the key foundational elements of media literacy and digital citizenship when it comes to AI. We still need to question resources, verify answers, cross-check sources of information, and use our critical thinking skills before taking an AI answer as the be-all, end-all of a conversation or query. As we teach in school, be intentional about why you're using the platform...what's your goal?
Want tools and activities you can actually use at home?
Head over to the Parent Hub where we’ve pulled:
Age-friendly AI activities
Conversation starters
AI literacy games for families