19 Sep 2022 10:23 GMT
Canines see only in shades of blue and yellow, and have a slightly higher density of vision receptors designed to detect movement than humans do, the scientists say.
Scientists at Emory University have decoded visual images from a dog’s brain, offering a first look at how the canine mind reconstructs what it sees. The researchers recorded neural data from dogs and humans during functional magnetic resonance imaging (fMRI) exams and used a machine-learning algorithm to analyze the patterns. The results suggest that dogs are more attuned to actions in their environment than to who or what is performing the action, the study’s authors report.
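The article does not spell out which algorithm was used, but the core idea of decoding what a brain is watching from fMRI activity can be illustrated with a toy example. The sketch below is purely hypothetical: it simulates voxel activation vectors (the data, labels, and noise levels are invented, not from the study) and decodes a label with a simple nearest-centroid classifier, not the authors' actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each fMRI scan is a vector of voxel activations,
# and each video clip carries a label for what it shows.
n_voxels = 200
labels = ["dog", "human", "car", "sniffing", "playing", "eating"]

# Simulate a distinct "true" activation pattern per label, plus noise.
prototypes = {lab: rng.normal(0.0, 1.0, n_voxels) for lab in labels}

def simulate_scans(label, n, noise=0.8):
    """Return n noisy scans whose underlying pattern matches `label`."""
    return prototypes[label] + rng.normal(0.0, noise, (n, n_voxels))

# Training data: 20 simulated scans per label.
X_train = np.vstack([simulate_scans(lab, 20) for lab in labels])
y_train = np.repeat(labels, 20)

# Nearest-centroid decoder: average the training scans per label,
# then assign a new scan to the label with the closest centroid.
centroids = {lab: X_train[y_train == lab].mean(axis=0) for lab in labels}

def decode(scan):
    return min(centroids, key=lambda lab: np.linalg.norm(scan - centroids[lab]))

# Decode a fresh simulated scan of a dog watching "playing" footage.
print(decode(simulate_scans("playing", 1)[0]))
```

The point of the sketch is only that consistent label-specific activity patterns, once averaged over repeated presentations, can be matched against a new scan; real fMRI decoding involves far noisier data and more sophisticated models.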
“Humans are very object-oriented,” says Gregory Berns, an Emory professor and one of the study’s lead authors. “There are 10 times more nouns than verbs in the English language because we have a particular obsession with naming objects. Dogs seem to be less concerned with who or what they are watching and more concerned with the action itself,” he explains.
Berns points out that dogs and humans also have big differences in their visual systems: dogs see only in shades of blue and yellow, and have a slightly higher density of vision receptors designed to detect movement than people do.
The researcher argues that it makes sense that dogs’ brains are tuned, above all, to actions. “Animals need to be very concerned about things going on in their environment to keep from being eaten or to monitor animals they might want to hunt. Action and movement are paramount,” he explains.
How did they do the study?
The researchers recorded fMRI neural data from two awake dogs as they watched videos in three 30-minute sessions. Berns and his colleagues pioneered training techniques that get dogs to walk into an fMRI scanner and hold completely still and unrestrained while their neural activity is measured. Two humans underwent the same experiment, watching the same videos in three separate sessions while lying in the scanner.
The videos were shot from a dog’s visual perspective so that they were engaging enough to hold the animals’ attention for an extended period. The scenes included dogs being petted by people and receiving treats, sniffing, playing, eating, and walking. They also showed moving cars and bicycles, a cat walking, a deer crossing a road, and people sitting, hugging, kissing, eating, or offering a rubber bone or ball to the camera.
“We showed that we can monitor activity in a dog’s brain while it’s watching a video and, at least to some extent, reconstruct what it’s watching,” Berns said. “The fact that we can do that is remarkable.” Beyond humans, the technique has been applied to only a handful of other species, including several primates.
“While our work is based on just two dogs, it provides proof of concept that these methods work in canines,” says Erin Phillips, first author of the study, published last Tuesday in the Journal of Visualized Experiments. “I hope this paper helps pave the way for other researchers to apply these methods in dogs, as well as other species, so that we can get more data and a better understanding of how the minds of different animals work,” she adds.
For Phillips, understanding how different animals perceive the world is important to her current field research on how predator reintroduction can affect ecosystems.
“Historically, there hasn’t been much overlap in computing and ecology,” says the scientist. “But machine learning is a growing field that is starting to find broader applications, including in ecology,” she concludes.