Advocates warn parents about AI toys
Artificial intelligence-powered toys like these were tested by consumer advocates at PIRG. (Rory Erlich/The Public Interest Network via AP)
They’re cute, even cuddly, and promise learning and companionship — but artificial intelligence toys are not safe for kids, according to children’s and consumer advocacy groups urging parents not to buy them during the holiday season.
These toys, marketed to kids as young as 2 years old, are generally powered by AI models that have already been shown to harm children and teenagers, such as OpenAI’s ChatGPT, according to an advisory published Thursday by the children’s advocacy group Fairplay and signed by more than 150 organizations and individual experts such as child psychiatrists and educators.
“The serious harms that AI chatbots have inflicted on children are well-documented, including fostering obsessive use, having explicit sexual conversations, and encouraging unsafe behaviors, violence against others, and self-harm,” Fairplay said.
AI toys, made by companies including Curio Interactive and Keyi Technologies, are often marketed as educational, but Fairplay says they can displace important creative and learning activities. They promise friendship but disrupt children’s relationships and resilience, the group said.
“What’s different about young children is that their brains are being wired for the first time and developmentally it is natural for them to be trustful, for them to seek relationships with kind and friendly characters,” said Rachel Franz, director of Fairplay’s Young Children Thrive Offline Program. Because of this, she added, the trust young children are placing in these toys can exacerbate the types of harms older children are already experiencing with AI chatbots.
A separate report Thursday by Common Sense Media and psychiatrists at Stanford University’s medical school warned teenagers against using popular AI chatbots as therapists.
Fairplay, a 25-year-old organization formerly known as the Campaign for a Commercial-Free Childhood, has been warning about AI toys for years, though the technology wasn't as advanced then as it is today. A decade ago, during an emerging fad of internet-connected toys and AI speech recognition, the group helped lead a backlash against Mattel’s talking Hello Barbie doll that it said was recording and analyzing children’s conversations.
This time, though AI toys are mostly sold online and more popular in Asia than elsewhere, Franz said some have started to appear on store shelves in the U.S. and more could be on the way.
“Everything has been released with no regulation and no research, so it gives us extra pause when all of a sudden we see more and more manufacturers, including Mattel, who recently partnered with OpenAI, potentially putting out these products,” Franz said.
It’s the second big seasonal warning against AI toys: last week, consumer advocates at U.S. PIRG called out the trend in their annual “Trouble in Toyland” report, which typically looks at a range of product hazards, such as high-powered magnets and button-sized batteries that young children can swallow. This year, the organization tested four toys that use AI chatbots.
“We found some of these toys will talk in-depth about sexually explicit topics, will offer advice on where a child can find matches or knives, act dismayed when you say you have to leave, and have limited or no parental controls,” the report said. One of the toys, a teddy bear made by Singapore-based FoloToy, was later withdrawn, its CEO told CNN this week.
Dr. Dana Suskind, a pediatric surgeon and social scientist who studies early brain development, said young children don’t have the conceptual tools to understand what an AI companion is. While kids have always bonded with toys through imaginative play, when they do this they use their imagination to create both sides of a pretend conversation, “practicing creativity, language, and problem-solving,” she said.
“An AI toy collapses that work. It answers instantly, smoothly, and often better than a human would. We don’t yet know the developmental consequences of outsourcing that imaginative labor to an artificial agent — but it’s very plausible that it undercuts the kind of creativity and executive function that traditional pretend play builds,” Suskind said.
Beijing-based Keyi, maker of an AI “petbot” called Loona, didn’t return requests for comment this week, but other AI toymakers sought to highlight their child safety protections.
California-based Curio Interactive makes stuffed toys, like Gabbo and rocket-shaped Grok, that have been promoted by the pop singer Grimes. The company said it has “meticulously designed” guardrails to protect children and the company encourages parents to “monitor conversations, track insights, and choose the controls that work best for their family.”