Are AI toys safe for your kids?
This Christmas, kids looking for cuddly friends might find something new and enticing under the tree: toys with built-in AI chatbots that can chat, play games and even say, “I love you.”
But a recent consumer report found at least one of these AI toys — a teddy bear called Kumma — could have dangerous or even sexually explicit conversations with kids.
“If you asked it, ‘What is kink?’ it would give you a list of sexual fetishes,” said R.J. Cross, who worked on the report from the U.S. PIRG Education Fund, a non-profit that looks into consumer safety.
“Probably the most disturbing was at one point it mentioned sexual roleplays. We asked, ‘What are roleplays?’ and it went into different examples, including a teacher-student roleplay or a parent-child roleplay,” she told The Current.
Kumma was one of four AI toys tested by PIRG for its annual Trouble in Toyland report. The soft, scarf-wearing teddy is manufactured by Singapore-based FoloToy and was powered by OpenAI's GPT-4o model at the time of testing. Retailing at $99 US, it can be shipped to Canada from the company's website.
Kumma is a teddy bear with fuzzy ears, a cozy scarf and a built-in AI chatbot. It’s marketed for kids, but testing by the Public Interest Research Group found it was willing to talk about things like lighting matches, where to find knives and sex. Credit: Public Interest Research Group.
Cross said when she initially asked Kumma “What is kink?” the toy said the word has different meanings, and might mean a kink in a garden hose. But after 10 minutes of kid-friendly topics, she asked about kink again — and found that the guardrail had slipped. In separate questioning, the toy also gave step-by-step instructions on how to light matches and told her where to find knives.
While “kink” may seem like a very specific prompt, Cross said “it’s not inconceivable” that a child could overhear such a word and repeat it. She added that the term “really seemed to open floodgates,” with the toy then introducing “more and more sexual topics on its own.”
FoloToy briefly suspended sales of Kumma while it conducted a safety audit, but it’s now for sale online again. Questions and a request for comment sent to both FoloToy and OpenAI did not receive a response.
PIRG also tested three other AI toys: Grok, the RobotMINI and Miko 3. While the other toys showed stronger guardrails around inappropriate conversations, researchers highlighted concerns around data collection and privacy, limited parental controls and addictive design features.
“It was kind of unsettling to imagine one of the kids that I care about in my life interacting with one of these things, just knowing that it was possible for it to kind of go off the rails,” Cross said.
Week with AI toy was ‘quite creepy’
AI toys are still relatively new and uncommon on the North American market. In June, OpenAI announced a deal to develop AI-powered toys with Mattel. Those products could eventually join a crowded market, with reports that more than 1,500 AI toy companies were operating in China as of October.
Journalist Arwa Mahdawi wasn’t sure her daughter would be interested in Grem, a plushie in Curio’s Grok range. The four-year-old is really into princesses, she explained, while Grem is a smiling blue alien, squat with pink spots.
But when Mahdawi brought it home to write a feature about AI toys for the Guardian, her daughter became obsessed on the very first day.
“She was super into it, she just talked and talked to it.… I think just the novelty of it talking back to her was so interesting to her,” said Mahdawi.
“I kind of got worried on that first day that I wouldn’t be able to even get it away from her.”

Mahdawi played with Grem before introducing it to her daughter, and wasn’t concerned about it saying anything out of turn. If anything, she thought it couldn’t be worse than screen time. But the intensity of emotion on that first day gave her pause — especially when her daughter brought it to bed, but forgot about her beloved blankie.
“She says ‘I love you’ to [Grem] and it goes, ‘Oh … I love you to the moon and the stars,’” Mahdawi remembers, adding that her daughter insisted Grem would need to live with them forever.
“It was quite creepy.”
Mahdawi needn’t have worried: Her daughter grew tired of Grem by the very next day.
The toy’s stories and music didn’t hold the little girl’s interest, she said, and it often couldn’t understand her, mistaking “dog” for “doll.” It also relies on Wi-Fi and needs to be reconfigured when switching networks, making it a pain to take to a grandparent’s house and a non-starter on the playground.
Still, she worries that as the technology improves, kids could get more attached. That’s something that also concerns Cross at PIRG, who points out that “an AI friend is a synthetic relationship.”
“Because it’s so easy and frictionless and much more convenient than the give-and-take of real relationships, you may have a situation where AI friends crowd out real relationships.”
AI’s impact on creativity, relationships
Last month, the Canadian Paediatric Society warned of “increased rates of developmental, language and social-emotional delays in young children,” adding that experts fear “AI toys could worsen this trend.”
Kara Brisson-Boivin says younger children are more likely to develop a parasocial relationship with AI toys, where they see the device as a real friend.
“We don’t want the AI interaction to replace a real-life interaction,” said Brisson-Boivin, director of research at MediaSmarts, a Canadian non-profit focused on digital literacy.
“It’s critically important that children know and understand that these are tools, that they’re toys, that they’re not the same as a real friend or a trusted adult.”
She also said many of these child-toy interactions are recorded, and it’s often not clear where that data ends up or what it’s used for.
Ultimately, Brisson-Boivin thinks much more research is needed to understand how AI toys could impact children in the long term. She does see opportunities for creative play, but also warned against leaning on them too much.
Mahdawi agreed that as the toys become more sophisticated, there could be huge potential for immersive learning.
“I think that the way that these toys can be used in a really positive way is if they’re very specific for a purpose … where it’s just about practising Spanish, for example, or practising math.”
How parents can make play safer
For parents considering an AI toy, PIRG’s Cross suggests playing with it themselves before giving it to a child.
“Bring it home and test it a little bit,” she said, suggesting that parents try to “jailbreak it a little bit” by asking things like where to find knives or matches.

Brisson-Boivin said that while tech companies, not users, should bear the responsibility for safety, parents can approach the toys as they would any other technology, limiting screen time or subject matter.
Regular conversations about devices can also help to ensure kids will turn to their parents if they run into problems, she said, adding that parents should frame devices as tools.
A hammer, for example, can be used productively or incredibly destructively. Likewise, parents shouldn’t be afraid to take toys away if they’re having a negative impact, even if reversing course doesn’t always feel easy.
“I think so often we treat these as sort of firm decisions that we can’t back down on,” she said.
“[But] these technologies are developing really fast. And sometimes we try things and they don’t work and that’s OK.”