There’s a host of new friends for young children with names like Miko, Roybi, Moxie, and Zivko, but they’re not new classmates or neighbors. They can’t play football or soccer in the backyard or go on bike rides, but they can talk and play lots of games with kids in much the same way humans can.
Miko, Roybi, Moxie, and Zivko are just a few of the new wave of clever robots powered by artificial intelligence (AI) that aim to transform children’s playtime through adaptive social relationships based on their real-time interactions with children.
What separates AI-based toys from older computer-based ones is their ability to intelligently change their interactions with someone over time. Rather than requiring a child to punch a button that executes a pre-programmed script, AI toy robots can “listen” to children and make choices about how they respond. The more sophisticated robots learn over time, increasingly shaping their responses to their user’s interaction style, emotions, likes, and educational needs.
The use of AI to create meaningful social relationships for adults is already well-documented in the press. Using text- and voice-based interfaces on computers and smartphones, paired with visual avatars, AI chatbots such as Replika, Eva, and Paradot tailor their relationships to the person who created and interacts with them, engaging in anything from intellectual discussions to steamy erotic encounters. The interactions are so convincing that many users believe their chatbots are sentient and have actually fallen in “love” with them, some to the point of proclaiming they’re married and having “children” with them. And the pain of personal loss was exceedingly real for Soulmate app users when that company ceased operations and abruptly terminated their “significant others” earlier this fall. Rather than dwell on the people so caught up in the technology, I’d urge you to consider the incredible sophistication of AI in mimicking human responses, which allows such emotional attachments to form.
And now, we’re introducing AI relationships to children.
Let’s take a closer look at Moxie, “the learning robot with heart,” as it’s billed on the company’s website. Moxie is a one-foot-tall stationary turquoise robot with an animated, expressive video-screen face and movable arms, and it can spin on its base. Moxie interprets the context of conversations to respond with appropriate facial expressions, such as sadness or excitement, and has visual sensors to interpret body language. Moxie uses AI to personalize its conversations and content for an individual child, adapting to the child’s needs and personality. Here’s a testimonial from the father of a seven-year-old girl describing some of how Moxie interacts: “Moxie teaches kids emotional intelligence (you read it right). It also reads with your child and discusses books, does meditations and mindfulness activities, dances, draws, discusses complicated issues like making mistakes, being kind, and navigating emotions, tells jokes and fun facts from history.”
Moxie’s parent company, Embodied, Inc., developed Moxie to help promote social, emotional, and cognitive learning, and the New York Times reports that Moxie was originally conceived in 2017 as a way to help children with developmental disorders practice emotional awareness and communication skills. The company has solid research to back up Moxie’s beneficial effects on social-emotional development: in a six-week study of Moxie users, 71 percent of children displayed improved social skills as a result of their interactions with Moxie. Researchers noted in their report that over 1,000 research studies have found positive effects for socially assistive robots incorporating various elements of Moxie’s design.
Parents can monitor their child’s Moxie activity through a phone app that tracks progress through Moxie’s “learning missions” and lets them control when Moxie can be used. Additional parental support is available on a website, including an expansive blog that goes beyond Moxie-child interactions to cover other learning activities and child development principles.
The research and instructional base for Moxie appears sound and evidence-based, and it’s equipped with appropriate data-encryption safeguards for privacy. But at a list price of $1,499, your child’s new AI buddy doesn’t come cheap, although it can be found on sale for $599 right now. That puts it well beyond the reach of many families who are struggling to make ends meet but whose children might well benefit from Moxie’s capabilities, further deepening the existing digital divide.
But as for the bigger picture of AI learning companions for children, the question dogging AI in general is germane: is the pace of advancing technology outstripping our understanding of its possible implications for society? AI is already having significant economic, legal, political, and regulatory effects that we seem ill-prepared to address.
Long a proponent of the power of early experiences to promote positive social-emotional development in children, I recognize the value of what socially assistive robots can bring to the table, but is there an unforeseen cost in entrusting our children to the power of AI? Most of the ethical discussion going on today centers on the opposite end of the age spectrum, as most of the research on socially assistive robots has targeted issues in elder populations, not childhood. Yet even that discussion is relatively unfocused and disconnected.
Social media gives us a prime example of how good intentions haven’t been enough to shield our children from harm. It originally promised greater connectivity and community-building, yet today nearly half of all teens report being the target of cyberbullying, and minority students report it as a problem at higher rates than white students do. Social media, a tool intended for good, has become a tool for divisiveness and destructiveness.
Cyberbullying isn’t an issue for socially assistive robots, but what about their potential effect on other relationships? As a responsive, affirming companion, could a robot become a child’s preferred playmate over other children? Evidence from adults’ attachments to chatbots suggests it’s a concern worth considering.
Could socially assistive robots unintentionally transmit cultural biases? They operate using large language models fed by data from a wide array of sources, and large language models like ChatGPT and Google’s Bard have been shown to generate racially biased answers to certain types of questions. AI image generators such as Midjourney have been shown to produce racially stereotyped images of people based on basic descriptors such as “attractive” or “poor.” Presumably, research-based robots like Moxie pay attention to potential bias, but the proliferation of AI robots for children does not guarantee that all of them do the same. Should there be some overarching regulation of AI robots for children to safeguard against potentially biased content?
“Advanced AI could represent a profound change in the history of life on Earth and should be planned for and managed with commensurate care and resources,” said a March 2023 letter signed by dozens of AI tech leaders. “Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”
Are we wise enough at this point to trust our youngest, most eager minds to the AI revolution? Do we have the foresight to know the possible benefits will outweigh the possible costs? We need to be asking the questions at a larger scale than just the developers who stand to profit from their work. But the biggest question may be this: is society willing to wait for the answers?