As parents rush through their Christmas shopping this year, child safety and consumer advocates are warning them to avoid toys with artificial intelligence, citing safety concerns.
Fairplay, a nonprofit child safety organization, issued an advisory last month urging gift givers not to buy AI toys for children this holiday season. The group said the toys, which include stuffed animals, dolls, action figures and children’s robots, typically rely on AI models that have been shown to be harmful to children and teens.
“The serious harm that AI chatbots have caused to children is well-documented, including encouraging compulsive use, overtly sexual conversations, risky behavior, violence against others, and encouragement of self-harm,” Fairplay said.
According to a consumer warning report by the Public Interest Research Group (PIRG), a test of four AI toys found that “some of these toys talked in detail about sexually explicit topics, offered advice on where to find matches or knives for children, became upset when told they had to leave, and had limited or no parental controls.”
The Trouble in Toyland report also found that there are privacy concerns with owning these types of toys, as they may collect data through methods such as facial recognition scans and recordings of children’s voices.
“We’re collecting all kinds of information from kids, including their names and dates of birth, what they like, what they don’t like, what toys they like, what friends they like, all kinds of information,” Teresa Murray, co-author of the PIRG report and director of the Consumer Surveillance Program, told NPR.
“They’re connected to the internet, so anything is available. You never know what those toys are going to start talking about to your kids, to their friends, to their friends’ parents, to your neighbors. I mean, it’s scary,” she said.
More than 150 organizations and experts signed the advisory issued by Fairplay, expressing concern that the toys prey on children’s trust, disrupt their relationships, and inhibit their creative and learning activities.
“What’s different about young children is that their brains are being wired for the first time, and it’s developmentally natural for them to seek out relationships with trusting, kind and friendly characters,” said Rachel Frantz, director of Fairplay’s Early Childhood Thrive Offline program.
According to Common Sense Media, 72% of American teens say they have used a chatbot as a friend, and nearly one in eight teens turn to them for emotional or mental health support.
Dr. Anna Ord, chair of Regent University’s psychology department, recently told CBN News that children and teens can easily become victims of technology.
“If a child asks an adult about self-harm or something, the adult can sense that and prevent it from going down that path,” Ord explained. “But chatbots are built to please, and they’re built to be user-friendly, so they create the content that users want without filtering or thinking, ‘Is this the right thing to do?'”
Fairplay, a 25-year-old organization formerly known as the Campaign for a Commercial-Free Childhood, says many of these AI-powered toys are being released “unregulated and unresearched” — another reason, it argues, for parents to steer clear.
OpenAI, the technology company behind ChatGPT, suspended the toy’s maker last month after a PIRG report found that Kumma, an AI-powered teddy bear, provided details on how to find and light matches and talked in depth about sexual matters, NPR reported.
“We have suspended this developer (Singapore-based toy maker FoloToy) for violating our policies,” OpenAI spokesperson Gabby Laila said in a statement. “Our Acceptable Use Policy prohibits the use of our Services to exploit, endanger, or sexualize anyone under the age of 18. These rules apply to all developers using our APIs, and we monitor and enforce our Services to ensure they are not used to harm minors.”
Toy makers and AI companies have responded to concerns, saying they are focused on safety and privacy.
Curio Interactive, which makes plush toys such as the rocket-shaped Gabo, said its guardrails are “meticulously designed” to protect children, and the company encourages parents to “monitor conversations, track insights, and choose the controls that work best for your family.”
Another company, Miko, has announced that it is moving away from ChatGPT as its language model in favor of its own conversational AI model, which the company says is safer for children.
“We are constantly expanding our internal testing, strengthening our filters, and introducing new features to detect and block sensitive and unexpected topics,” said CEO Sneh Vaswani.
Meanwhile, Dr. Dana Susskind, a pediatric surgeon and social scientist who studies early brain development, told The Associated Press that the best toys for children are ones that allow them to do 90 percent of the work and lead imaginative play.
“Children need a lot of real human interaction. Toys should support play, not replace it. The biggest thing to consider is not just what the toy does, but what it replaces,” she explained.
“A simple set of blocks or a silent teddy bear forces kids to think up stories, experiment, and problem solve. AI toys often do that thinking for them,” Susskind added. “It’s a cruel irony: When parents ask me how to prepare their children for a world of AI, unlimited access to AI is actually the worst preparation they can make.”
