Josh was at the end of his rope when he turned to ChatGPT for help with a parenting quandary. The 40-year-old father of two had been listening to his “super loquacious” four-year-old talk about Thomas the Tank Engine for 45 minutes, and he was feeling overwhelmed.
“He was not done telling the story that he wanted to tell, and I needed to do my chores, so I let him have the phone,” recalled Josh, who lives in north-west Ohio. “I thought he would finish the story and the phone would turn off.”
But when Josh returned to the living room two hours later, he found his child still happily chatting away with ChatGPT in voice mode. “The transcript is over 10k words long,” he confessed in a sheepish Reddit post. “My son thinks ChatGPT is the coolest train loving person in the world. The bar is set so high now I am never going to be able to compete with that.”
From radio and television to video games and tablets, new technology has long tantalized overstretched parents of preschool-age kids with the promise of entertainment and enrichment that does not require their direct oversight, even as it has carried the hint of menace that accompanies any outside influence on the domestic sphere. A century ago, mothers in Arizona worried that radio programs were “overstimulating, frightening and emotionally overwhelming” for children; today’s parents self-flagellate over screen time and social media.
But the startlingly lifelike capabilities of generative AI systems have left many parents wondering if AI is an entirely new beast. Chatbots powered by large language models (LLMs) are engaging young children in ways the makers of board games, Teddy Ruxpin, Furby and even the iPad never dreamed of: they produce personalized bedtime stories, carry on conversations tailored to a child’s interests, and generate photorealistic images of the most far-fetched flights of fancy – all for a child who cannot yet read, write or type.
Can generative AI deliver the holy grail of technological assistance to parents, serving as a digital Mary Poppins that educates, challenges and inspires, all within a framework of strong moral principles and age-appropriate safety? Or is this all just another Silicon Valley hype-bubble with a particularly vulnerable group of beta testers?
‘My kids are the guinea pigs’
For Saral Kaushik, a 36-year-old software engineer and father of two in Yorkshire, a packet of freeze-dried “astronaut” ice-cream in the cupboard provided the inspiration for a novel use of ChatGPT with his four-year-old son.
“I literally just said something like, ‘I’m going to do a voice call with my son and I want you to pretend that you’re an astronaut on the ISS,’” Kaushik said. He also instructed the program to tell the boy that it had sent him a special treat.
“[ChatGPT] told him that he had sent his dad some ice-cream to try from space, and I pulled it out,” Kaushik recalled. “He was really excited to talk to the astronaut. He was asking questions about how they sleep. He was beaming, he was so happy.”
Childhood is a time of magic and wonder, and dwelling in the world of make-believe is not just normal but encouraged by experts in early childhood development, who have long emphasized the importance of imaginative play. For some parents, generative AI can help promote that sense of creativity and wonder.

Josh’s daughter, who is six, likes to sit with him at the computer and come up with stories for ChatGPT to illustrate. (Several parents interviewed for this article requested to be identified by their first names only.) “When we started using it, it was willing to make an illustration of my daughter and insert that in the story,” Josh said, though more recent safety updates have resulted in it no longer producing images of children. Kaushik also uses ChatGPT to convert family photographs into coloring book pages for his son.
Ben Kreiter, a father of three in Michigan, explained ChatGPT to his two-, six-, and eight-year-old children after they saw him testing its image-generation capabilities for work (he designs curriculums for an online parochial school). “I was like, ‘I tell the computer a picture to make and it makes it,’ and they said: ‘Can we try?’” Soon, the children were asking to make pictures with ChatGPT every day. “It was cool for me to see what they are imagining that they can’t quite [draw] on a piece of paper with their crayons yet.”
Kreiter, like all the parents interviewed for this article, only allowed his children to use ChatGPT with his help and supervision, but as they became more enamored with the tool, his concern grew. In October 2024, news broke of a 14-year-old boy who killed himself after becoming obsessed with an LLM-powered chatbot made by Character.ai. Parents of at least two more teenagers have since filed lawsuits alleging that AI chatbots contributed to their suicides, and news reports increasingly highlight troubling tales of adults forming intense emotional attachments to the bots or otherwise losing touch with reality.
“The more that it became part of everyday life and the more I was reading about it, the more I realized there’s a lot I don’t know about what this is doing to their brains,” Kreiter said. “Maybe I should not have my own kids be the guinea pigs.”
Research into how generative AI affects child development is in its early stages, though it builds upon studies looking at less sophisticated forms of AI, such as digital voice assistants like Alexa and Siri. Multiple studies have found that young children’s social interactions with AI tools differ subtly from those with humans, with children aged three to six appearing “less active” in conversations with smart speakers. This finding suggests that children perceive AI agents as existing somewhere in the middle of the divide between animate and inanimate entities, according to Ying Xu, a professor of education at the Harvard Graduate School of Education.
Understanding whether an object is a living being or an artefact is an important cognitive development that helps a child gauge how much trust to place in the object, and what kind of relationship to form with it, explained Xu, whose research focuses on how AI can promote learning for children. Children begin to make this distinction in infancy and usually develop a sophisticated understanding of it by age nine or 10. But while children have always imbued inanimate objects such as teddy bears and dolls with imagined personalities and capacities, at some level they know that the magic is coming from their own minds.
“A very important indicator of a child anthropomorphizing AI is that they believe AI is having agency,” Xu said. “If they believe that AI has agency, they might understand it as the AI wanting to talk to them or choosing to talk to them. They feel that the AI is responding to their messages, and especially emotional disclosures, in ways that are similar to how a human responds. That creates a risk that they actually believe they are building some sort of authentic relationship.”
In one study looking at children aged three to six responding to a Google Home Mini device, Xu found that the majority perceived the device to be inanimate, but some referred to it as a living being, and some placed it somewhere in between. Majorities thought the device possessed cognitive, psychological and speech-related capabilities (thinking, feeling, speaking and listening), but most believed it could not “see”.
Parents who spoke with the Guardian remarked upon this kind of ontological gray zone in describing their children’s interactions with generative AI. “I don’t fully know what he thinks ChatGPT is, and it’s hard to ask him,” said Kaushik of his four-year-old. “I don’t think he can articulate what he thinks it is.”
Josh’s daughter refers to ChatGPT as “the internet”, as in, “I want to talk to ‘the internet’.” “She knows it’s not a real person, but I think it’s a little fuzzy,” he said. “It’s like a fairy that represents the internet as a whole.”
For Kreiter, seeing his children interact with Amazon’s Alexa at a friend’s house raised another red flag. “They don’t get that this thing doesn’t understand them,” he said. “Alexa is pretty primitive compared to ChatGPT, and if they’re struggling with that … I don’t even want to go there with my kids.”
A related concern is generative AI’s capacity to deceive children. For Kaushik, his son’s sheer joy at having spoken with what he thought was a real-life astronaut on the ISS led to a sense of unease, and he decided to explain that it was “a computer, not a person”.
“He was so excited that I felt a bit bad,” Kaushik said. “He genuinely believed it was real.”
John, a 40-year-old father of two from Boston, experienced a similar qualm when his son, a four-year-old in the thrall of a truck obsession, asked whether the existence of monster trucks and fire trucks implied the existence of a monster-fire truck. Without thinking much of it, John pulled up Google’s generative AI tool on his phone and used it to generate a photorealistic image of a truck that had elements of the two vehicles.
It was only after a pitched argument between the boy, who swore he had seen actual proof of the existence of a monster-fire truck, and his older sister, a streetwise seven-year-old who was certain that no such thing existed in the real world, that John started to wonder whether introducing generative AI into his children’s lives had been the right call.
“It was a little bit of a warning to maybe be more intentional about that kind of thing,” he said. “My wife and I have talked so much more about how we’re going to handle social media than we have about AI. We’re such millennials, so we’ve had 20 years of horror stories about social media, but so much less about AI.”
To Andrew McStay, a professor of technology and society at Bangor University who specializes in research on AI that claims to detect human emotions, this kind of reality-bending is not necessarily a big concern. Recalling the early moving pictures of the Lumière brothers, he said: “When they first showed people a big screen with trains coming [toward them], people thought the trains were quite literally coming out of the screen. There’s a maturing to be done … People, children and adults, will mature.”
Still, McStay sees a bigger problem with exposing children to technology powered by LLMs: “Parents need to be aware that these things are not designed in children’s best interests.”
Like Xu, McStay is particularly concerned with the way in which LLMs can create the illusion of care or empathy, prompting a child to share emotions – especially negative emotions. “An LLM cannot [empathize] because it’s a predictive piece of software,” he said. “When they’re latching on to negative emotion, they’re extending engagement for profit-based reasons. There is no good outcome for a child there.”
Neither Xu nor McStay wants to ban generative AI for children, but they warn that its benefits will only be realized through applications specifically designed to support children’s development or education.
“There is something more enriching that’s possible, but that comes from designing these things in a well-meaning and sincere way,” said McStay.
Xu allows her own children to use generative AI – to a limited extent. Her daughter, who is six, uses the AI reading program that Xu designed to study whether AI can promote literacy and learning. She has also set up a custom version of ChatGPT to help her 10-year-old son with math and programming problems without just giving him the answers. (Xu has explicitly disallowed conversations about gaming and checks the transcripts to make sure her son is staying on topic.)
Whether generative AI delivers one of the benefits parents described to me – the creativity they believe it fosters – is very much an open question, said Xu.
“There is still a debate over whether AI itself has creativity,” she said. “It’s just based on statistical predictions of what comes next, and a lot of people question if that counts as creativity. So if AI does not have creativity, is it able to support children to engage in creative play?”
A recent study found that having access to generative AI prompts did increase creativity for individual adults tasked with writing a short story, but decreased the overall diversity of the writers’ collective output.
“I’m a little worried by this kind of homogenizing of expression and creativity,” Xu said about the study. “For an individual child, it might increase their performance, but for a society, we might see a decrease of diversity in creative expressions.”
AI ‘playmates’ for kids
Silicon Valley is notorious for its willingness to prioritize speed over safety, but major companies have at times shown a modicum of restraint when it came to young children. Both YouTube and Facebook had existed for at least a decade before they launched dedicated products for under-13s (the much-maligned YouTube Kids and Messenger Kids, respectively).
But the introduction of LLMs to young children appears to be barreling ahead.
While OpenAI bars users under 13 from accessing ChatGPT, and requires parental permission for teenagers, it is clearly aware that younger children are being exposed to it – and views them as a potential market.
In June, OpenAI announced a “strategic collaboration” with Mattel, the toymaker behind Barbie, Hot Wheels and Fisher-Price. That same month, chief executive Sam Altman responded to the tale of Josh’s toddler (which had gone viral on Reddit) with what sounded like a hint of pride. “Kids love voice mode on ChatGPT,” he said on the OpenAI podcast, before acknowledging that “there will be problems” and “society will have to figure out new guardrails.”
Meanwhile, startups such as Silicon Valley-based Curio – which collaborated with the musician Grimes on an OpenAI-powered toy named Grok – are racing to stuff LLM-equipped voice boxes into plushy toys and market them to children.

(Curio’s Grok shares a name with Elon Musk’s LLM-powered chatbot, which is notorious for its past promotion of Adolf Hitler and racist conspiracy theories. Grimes, who has three children with former partner Musk, was reportedly angered when Musk used a name she had chosen for their second child on another child, born to a different mother in a concurrent pregnancy of which Grimes was unaware. In recent months, Musk has expressed interest in creating a “Baby Grok” version of his software for children aged two to 12, according to the New York Times.)
The pitch for toys like Curio’s Grok is that they can “learn” your child’s personality and serve as a kind of fun and educational companion while reducing screen time. It is a classically Silicon Valley niche – exploiting legitimate concerns about the last generation of tech to sell the next. Company leaders have also referred to the plushy as something “between a little brother and a pet” or “like a playmate” – language that implies the kind of animate agency that LLMs do not actually have.
It is not clear whether the toys are actually good enough to warrant much parental worry. Xu said that her daughter had quickly relegated AI plushy toys to the closet, finding the play possibilities “kind of repetitive”. The children of Guardian and New York Times writers also voted with their feet against Curio’s toys. Guardian writer Arwa Mahdawi expressed concern about how “unsettlingly obsequious” the toy was and decided she preferred allowing her daughter to watch Peppa Pig: “The little oink may be annoying, but at least she’s not harvesting our data.” Times writer Amanda Hess similarly concluded that using an AI toy to replace TV time – a necessity for many busy parents – is “a bit like unleashing a mongoose into the playroom to kill all the snakes you put in there”.
But with the market for so-called smart toys – a category that includes AI-powered toys – projected to double to more than $25bn by 2030, it is perhaps unrealistic to expect restraint.
This summer, notices seeking children aged four to eight to help “a team from MIT and Harvard” test “the first AI-powered storytelling toy” appeared in my neighborhood in Brooklyn. Intrigued, I made an appointment to stop by their offices.
The product, Geni, is a close cousin of popular screen-free audio players such as Yoto and the Toniebox. But rather than playing pre-recorded content (Yoto and Tonies offer catalogs of audiobooks, podcasts and other kid-friendly content for purchase), Geni uses an LLM to generate bespoke short stories. The device allows children to select up to three “tiles” representing a character, object or emotion, then press a button to generate a chunk of narrative tying the tiles together, which the device reads aloud. Parents can also use an app to program blank tiles.
Geni co-founders Shannon Li and Kevin Tang struck me as serious and thoughtful about some of the risks of AI products for young children. They “feel strongly about not anthropomorphizing AI”, Tang said. Li said that they want kids to view Geni “not as a companion” like the voice-box plushies, but as “a tool for creativity that they already have”.
Still, it’s hard not to wonder whether an LLM can actually produce particularly engaging or creativity-sparking stories. Geni is planning to sell sets of tiles with characters they develop in-house alongside the device, but the actual “storytelling” is done by the kind of probability-based technology that tends toward the average.
The story I prompted by selecting the wizard and astronaut tiles was insipid at best:
They stumbled upon a hidden cave glowing with golden light.
“What’s that?” Felix asked, peeking inside.
“A treasure?” Sammy wondered, her imagination swirling, “or maybe something even cooler.”
Before they could decide, a wave rushed into the cave, sending bubbles bursting around them.
The Geni team has trained their system on pre-existing children’s content. Does using generative AI solve a problem for parents that the canon of children’s audio content cannot? When I ran the concept by one parent of a five-year-old, he responded: “They’re just presenting an alternative to books. It’s a really good example of grasping for uses that are already handled by artists or living, breathing people.”
The market pressures of startup culture leave little time for such existential musings, however. Tang said the team is eager to bring their product to market before voice-box plushies sour parents on the entire concept of AI for kids.
When I asked Tang whether Geni would allow parents to make tiles for, say, a gun – not a far-fetched idea for many American families – he said they would have to discuss the issue as a company.
“Post-launch, we’ll probably bring on an AI ethics person to our team,” he said.
“We also don’t want to limit knowledge,” he added. “As of now there’s no right or wrong answer to how much constraint we want to put in … But obviously we’re referencing a lot of kids content that’s already out there. Bluey probably doesn’t have a gun in it, right?”