Robots are well suited for the kinds of repetitive skill-building activities that would quickly wear down a human teacher. They can’t totally replace human interaction (yet), but they may be able to augment it. “There’s evidence to support the idea that social robots can help with skill development in children,” says Kate Darling, a research specialist at MIT Media Lab and an expert in human-robot interaction. “I would call it preliminary evidence, but very promising.”
A growing body of research suggests that companion robots are especially effective for children with neurological disorders like autism. For example, children with autism often struggle with eye contact and reading facial expressions, so practicing with a robot’s exaggerated expressions can help. Pirjanian says Moxie was originally developed for kids on the spectrum, but during testing, “parents who also had a neurotypical child were like, ‘Why can’t we use this for them as well?’ Overall it seems like there is a great need for helping children advance their social and emotional skills.”
But for all their promise, designing and building effective companion robots is a major challenge. The reason, says Erik Stolterman Bergqvist, a professor of human-computer interaction at Indiana University Bloomington, is that “social robots don’t have an obvious function.” They’re designed to be your friend, but companionship defies easy quantification. This makes Moxie very different from robots that have a clear job. If you want to know if a Roomba worked, just look for the dirt.
“What a lot of designers are struggling with is that as soon as you leave the design of things that have an obvious purpose, everything becomes more complicated,” says Stolterman Bergqvist. “You’re asking: ‘How do people relate to people?’ But they relate to each other in complex and diverse ways.”
To meet these challenges, Pirjanian and his colleagues relied on a heavy dose of artificial intelligence. Moxie’s head is packed with microphones and cameras that feed data to machine-learning algorithms so that the robot can carry on a natural conversation, recognize users, and look them in the eye. With the exception of Google’s automated speech-recognition software, all the data is crunched by Moxie’s onboard processor. The more a child interacts with Moxie, the more sophisticated those interactions become, as the robot learns to recognize the child’s face, speech patterns, and developmental needs.
Each week, Moxie is updated with new content built around a theme like “being kind” or “making mistakes.” It then sends the child on thematic missions and asks them to report back about their experiences. For example, it might have a child write a nice note for their parents or make a new friend. Pirjanian says he considers Moxie a “springboard” to improve social interactions in day-to-day life. “We don’t want [children] to just binge on this, because five hours of games each day doesn’t help,” he says. “The robot encourages children to go out and practice things in the real world and report back, because that’s where we want them to succeed.”
Pirjanian says that Moxie’s voracious appetite for data is key to the robot’s effectiveness. Not only does the data allow the robot to tailor its interactions to individual kids, but it is also critical for providing feedback to parents. While the robot “sleeps,” it crunches the data from the day’s interactions, measuring things like the child’s reading comprehension and language use, and the amount of time they spent on various tasks. It sends that data to an app that parents can use to monitor their child’s progress on those tasks and overall social, cognitive, and emotional development as determined by Moxie’s algorithms. Over time, the robot also provides recommendations. For example, if Moxie notices a recurring verbal tic, it might suggest that the parents take their child to a speech pathologist.