What can artificial intelligence learn from dogs? Quite a lot, say researchers from the University of Washington and the Allen Institute for AI. They recently trained neural networks to interpret and predict the behavior of canines. Their results, they say, show that animals could provide a new source of training data for AI systems — including those used to control robots.
To train AI to think like a dog, the researchers first needed data. They collected this in the form of videos and motion information captured from a single dog, a Malamute named Kelp. A total of 380 short videos were recorded with a GoPro camera mounted on the dog’s head, along with movement data from sensors on its legs and body. Essentially, Kelp was being recorded in the same way Hollywood uses motion capture to record actors playing CGI creations. But instead of Andy Serkis bringing Gollum to life, they were capturing a dog going about its daily life — walking, playing fetch, and going to the park.
With this information in hand, the researchers analyzed Kelp’s behavior using deep learning. This is an AI technique that can be used to sift patterns from data. In this case, that meant matching the motion data of Kelp’s limbs and the visual data from the GoPro with various doggy activities. The resulting neural network trained on this information could predict what a dog would do in certain situations. If it saw someone throwing a ball, for example, it would know that the reaction of a dog would be to turn and chase it.
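The basic shape of that learning problem can be sketched in code. The snippet below is a deliberately simplified stand-in, not the researchers' actual pipeline (which uses a deep convolutional encoder and a recurrent model): it just illustrates the idea of fitting a predictor that maps per-frame visual features to the dog's joint movements, with all array sizes and names chosen for illustration.

```python
import numpy as np

# Toy stand-in for the paper's pipeline: learn a mapping from visual
# features (extracted from GoPro frames) to the dog's joint movements.
# Sizes are illustrative; the data here is synthetic.

rng = np.random.default_rng(0)

n_frames, feat_dim, n_joints = 380, 64, 4

visual_feats = rng.normal(size=(n_frames, feat_dim))          # per-frame features
true_map = rng.normal(size=(feat_dim, n_joints))              # hidden "ground truth"
joint_motion = visual_feats @ true_map                        # synthetic motion targets

# Fit a linear predictor by least squares: joint motion ~ visual features.
W, *_ = np.linalg.lstsq(visual_feats, joint_motion, rcond=None)

def predict_motion(frame_feat):
    """Predict the dog's joint movements for one video frame."""
    return frame_feat @ W

pred = predict_motion(visual_feats[0])
print(pred.shape)  # one predicted movement value per tracked joint
```

In the real system, the "see a ball being thrown, predict turn-and-chase" behavior comes from predicting sequences of movements over time, which is why the paper uses a recurrent architecture rather than a single-frame linear model like this one.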
Speaking to The Verge, the paper’s lead author, Kiana Ehsani, explained that the predictive capacity of their AI system was very accurate, but only in short bursts. In other words, if the video shows a set of stairs, then you can guess the dog is going to climb them. But beyond that, life is simply too varied to predict. “Whether or not the dog will see a toy or an object it wants to chase, who knows,” says Ehsani, a PhD student at the University of Washington.
What’s really clever, though, is what the researchers did next. Taking the neural network trained on the dog’s behavior, they wanted to see if it had learned anything else about the world that they had not explicitly programmed. As they explain in the paper, dogs “clearly demonstrate visual intelligence, recognizing food, obstacles, other humans and animals,” so does a neural network trained to act like a dog show the same cleverness?
It turns out yes — although only in a very limited capacity. The researchers applied two tests to the neural network, asking it to identify different scenes (e.g., indoors, outdoors, on stairs, on a balcony) and “walkable surfaces” (which are exactly what they sound like: places you can walk). In both cases, the neural network was able to complete these tasks with decent accuracy using just the basic data it had of a dog’s movements and whereabouts.
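These transfer tests amount to probing what the network's internal features encode. A minimal sketch of that idea, assuming synthetic feature vectors and a simple nearest-centroid rule rather than the paper's actual evaluation setup:

```python
import numpy as np

# Toy probe of what a trained network's features might encode: classify
# scene types with a nearest-centroid rule over feature vectors. The
# features and labels here are synthetic; the paper probes real learned
# features from the dog-behavior network.

rng = np.random.default_rng(1)
scenes = ["indoors", "outdoors", "stairs", "balcony"]

# One synthetic feature cluster per scene type, plus noisy samples.
centers = {s: rng.normal(size=16) * 3 for s in scenes}
samples = [(centers[s] + rng.normal(scale=0.1, size=16), s)
           for s in scenes for _ in range(20)]

def classify(x, centroids):
    """Assign x to the scene whose centroid is closest."""
    return min(centroids, key=lambda s: np.linalg.norm(x - centroids[s]))

correct = sum(classify(x, centers) == label for x, label in samples)
print(f"accuracy: {correct / len(samples):.2f}")
```

If the probe classifies scenes well above chance, the features must carry scene information the network was never explicitly asked to learn — which is exactly the researchers' finding.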
“Our intuition for this was that dogs are really good at finding where to walk — where they’re allowed to go and where they’re not,” says Ehsani. “This is a very hard task for a computer because it requires a lot of prior knowledge.” This knowledge might be whether a surface is too steep to walk on or if it’s spiky and uncomfortable. It would be time-consuming to program a robot with all these rules, but a dog already knows them all. So by watching Kelp’s behavior, the neural network learned these rules without having to be taught them. In other words, it learned from the dog.
Now, it’s important to include a lot of caveats here. The software created by Ehsani and her colleagues is not in any way a model of a dog’s brain or its consciousness. All it’s doing is learning a few very basic rules from a limited set of data, i.e., where a dog likes to walk. And as with every other AI system, there’s no reasoning happening here; the software is simply finding patterns in the data. This in itself is not new. Researchers are always training AI systems from similar data.
But, as Ehsani points out, this seems to be the first time anyone has ever tried learning from a dog, and the fact that it worked suggests that animals could be a useful source of training data. After all, dogs know plenty of other things that would be very useful for robots. What humans look like, for example, or the difference between an adult and a baby. Dogs know to avoid cars and how to navigate stairs, which are important lessons for any robot that needs to operate in a human environment.
Of course, this paper is only a very simple demonstration of how we might learn from animals, and much more work needs to be done before this paradigm is productive. But Ehsani says she’s confident it could have some very useful applications. “One immediate thing that comes to my mind is making a robot dog. It’s already a hard task for a robot, to know how to move and where to go, or if they want to chase something,” she says. “This would definitely help us build a more efficient and better robot dog.”