Making mistakes is human, but it’s not limited to humans. Robots can also glitch. As we fast-forward into a future with upgraded AI technology making its way into the classroom (and beyond), are kids willing to trust information from a robot, or would they prefer it to come from a human?
That’s the question researchers Li Xiaoqian and Yow Wei Quin of the Singapore University of Technology and Design wanted to answer. To find out whether children consider a human or a machine more reliable, they ran an experiment with kids ages 3 to 5, pairing each child, via an on-screen video, with an accurate human, an inaccurate human, an accurate robot, or an inaccurate robot.
It turned out that both younger and older children trusted an accurate human or robot equally. However, younger kids given information by an inaccurate human or robot were more likely to trust the human—but why?
To err is (not only) human
Just one human and one robot played both the accurate and inaccurate “teachers” onscreen. The human was a college student who recorded one accurate video and one inaccurate video; the robot did the same. The robot was clearly recognizable as a machine, but its humanoid features made it somewhat relatable. The human matched her movements to the robot’s so the two would appear as similar as possible, and the robot spoke in a voice that sounded vaguely human but was still distinguishable as robotic.
The research team predicted that kids would trust robots that were accurate but not trust inaccurate robots (even if they were humanoid)—and trust them even less than inaccurate humans. They also predicted that this dynamic would change as the kids grew. “We hypothesized that with age, children would rely on information about the informant’s accuracy, more than whether the informant is a human or a robot, to decide whether to trust the informant,” the researchers said in a study recently published in Child Development.
Communication breakdown
Each child in the experiment was shown a series of video clips. In each clip, the human or robot looked at a familiar or novel object on a table in front of them and gave it a correct or incorrect name. Both human and robot named things the children would already know (such as “this is a banana”) as well as things they wouldn’t have known before (such as “this is a blicket,” for an unfamiliar, made-up object). The children were then shown an image of just the object, followed by the question “What do you think this is called?”
After interacting with the robot or human in the videos, kids rated whether the person or robot was good at identifying things. They were also shown a picture of one of the objects that had appeared on-screen and asked what it was and whether their human or robot partner would identify it the same way. This gave the researchers a sense of whether the children felt their partners had been accurate.
Overall, younger children appeared to base their trust on who was giving them the information; to them, even an inaccurate human was more trustworthy than an inaccurate robot. Four was the transitional age at which children started focusing on the accuracy of the information rather than on whether it came from a robot or a human. Four- and five-year-olds cared less about who was giving them the information than about what they thought the informant knew. Wrong answers made them equally unlikely to trust robots and humans.
The researchers’ predictions were right for younger children. While they trusted accurate robots and humans equally, they were more likely to trust an inaccurate human than an inaccurate robot. Older children distrusted inaccurate humans and robots equally. In future work, the team wants to measure children’s opinions of robots before they engage with them, since those preconceptions may influence later trust.
For now, even as kids interact with AI more and more at school, it doesn’t look like human teachers are going anywhere.
Child Development, 2023. DOI: 10.1111/cdev.14048