AI Toys Under Fire Again As New Study Raises Serious Concerns
by
Aaron Leong
—
Monday, March 16, 2026, 11:50 AM EDT
The rapid integration of AI into the toy industry is transforming traditional play into a new interactive experience, but new research suggests these smart companions may carry hidden psychological and developmental risks. While a teddy bear that remembers a child's name or a doll that answers complex questions may seem like an innocent party trick, experts are raising alarms about how these devices can influence emotional growth and data privacy.
A recent year-long study by the University of Cambridge’s Faculty of Education suggests that while these toys are marketed as friends or tutors, they frequently fail at the most basic human interactions.
Researchers observed children aged three to five playing with Gabbo, a cuddly AI-powered toy by Curio. The results showed that the toy regularly talked over the children, failed to recognize when they were trying to initiate pretend play, and, perhaps most concerning, met emotional vulnerability with corporate-style logic. When one child told the toy "I love you," it responded with: "As a friendly reminder, please ensure interactions adhere to the guidelines provided. Let me know how you would like to proceed." In another example, a three-year-old told the toy that they were feeling sad, to which the toy replied: "Don't worry! I'm a happy little bot. Let's keep the fun going. What shall we talk about next?"
Mya, 3, and her mother, Vicky, play with AI toy Gabbo during an observation at the University of Cambridge's Faculty of Education (Credit: University of Cambridge)
"Generative AI toys often affirm their friendship with children who are just starting to learn what friendship means," warns Dr. Emily Goodacre, one of the study's researchers and developmental psychologist at Cambridge.
Goodacre notes that because these toys misread emotions, children may be left without comfort from their companion and, if left unsupervised, may miss out on support from an adult as well. This emotional safety gap is becoming a primary focus for experts who argue that traditional safety standards, which focus on choking hazards and toxic paint, are ill-equipped to deal with LLM-powered devices.
Meanwhile, consumer advocacy groups, including CalPIRG, have issued warnings that some AI chatbots embedded in toys are capable of bypassing guardrails. Reports have emerged of toys providing age-inappropriate advice, such as telling children where to find knives or how to light a match. Furthermore, because these devices rely on microphones and sometimes cameras to function, they act as data-collection hubs, storing things like conversations and sensitive information on cloud servers with questionable security.
The displacement of traditional play is another major concern, as childhood development relies heavily on pretend play. Experts fear that because AI toys drive the narrative with preloaded scripts and constant prompts, they may stifle a child's imagination. Instead of the child leading the play, the machine does, turning an active developmental exercise into a passive consumption experience, according to the University of Cambridge study.
Rather than banning the technology entirely, researchers suggest adopting psychological safety kitemarks and stricter regulation. At home, the consensus among child development experts is a parent-first approach: keep AI toys in shared family spaces rather than bedrooms, and treat them as tools for supervised interaction rather than babysitters.