In late December, four scientists at Kyoto University in Kyoto, Japan (Guohua Shen, Tomoyasu Horikawa, Kei Majima, and Yukiyasu Kamitani) published the findings of their recent research on using artificial intelligence (AI) to decode thoughts on bioRxiv, an online preprint repository for biology research.
The researchers developed a method to “see” inside people’s minds using a functional magnetic resonance imaging (fMRI) scanner, which detects changes in blood flow in the brain.
For the research, over the course of 10 months, three volunteers were shown images of natural and man-made objects (such as a swan, an aeroplane, or a stained glass window), artificial geometric shapes, and alphabetical letters for varying lengths of time.
In some instances, brain activity was measured while a subject was looking at one of 25 images. In other cases, it was logged afterward, when subjects were asked to think of the image they were previously shown.
Once the brain activity was scanned, a computer reverse-engineered (or “decoded”) the information to generate visualizations of a subject’s thoughts.
The scientists from Kyoto developed new techniques for “decoding” thoughts using deep neural networks (artificial intelligence). Signals from the scanner were fed into an artificial neural network, a computer system that loosely mimics how a human mind works and is able to learn and solve problems.
The network learned to recognize images, and then used signals from volunteers’ brains to reconstruct what they were thinking about.
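The paper describes a two-stage pipeline: a decoder is first trained to translate fMRI voxel patterns into the feature values a deep network computes for the seen image, and a reconstruction step then searches for an image whose features match the decoded ones. A minimal sketch of that idea, using synthetic data and plain linear algebra in place of the authors’ actual models (all array sizes, the ridge penalty, and the gradient-descent settings here are illustrative assumptions, not values from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Stage 1: decode fMRI voxel patterns into image features -------------
# Synthetic stand-ins: 200 scans of 500 voxels, each paired with a
# 64-dimensional feature vector (in the real study, features come from
# layers of a deep neural network).
n_trials, n_voxels, n_features = 200, 500, 64
true_map = rng.normal(size=(n_voxels, n_features))
voxels = rng.normal(size=(n_trials, n_voxels))
features = voxels @ true_map + 0.1 * rng.normal(size=(n_trials, n_features))

# Ridge regression: learn a linear map from voxels to features.
lam = 1.0
W = np.linalg.solve(voxels.T @ voxels + lam * np.eye(n_voxels),
                    voxels.T @ features)

# --- Stage 2: reconstruct a stimulus from decoded features ---------------
# A fixed "feature model" maps candidate images (here, just 32 raw pixels)
# to features; we adjust the pixels by gradient descent until the model's
# features match the features decoded from a new brain scan.
test_voxels = rng.normal(size=n_voxels)
decoded = test_voxels @ W

n_pixels = 32
feat_model = rng.normal(size=(n_features, n_pixels))
pixels = np.zeros(n_pixels)

def loss(p):
    # Squared distance between the candidate image's features
    # and the features decoded from the brain scan.
    return np.sum((feat_model @ p - decoded) ** 2)

initial = loss(pixels)
lr = 0.002
for _ in range(500):
    grad = 2 * feat_model.T @ (feat_model @ pixels - decoded)
    pixels -= lr * grad
final = loss(pixels)
```

The sketch captures the structure of the approach (decode features, then optimize an image to match them); the published method replaces the toy linear “feature model” with a deep network and a natural-image prior.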
The colors and shapes of the images it produced matched the photos the volunteers were looking at in the scanner with remarkable accuracy.
The new technique allows the scientists to decode more sophisticated “hierarchical” images, which have multiple layers of color and structure, such as a picture of a bird or a man wearing a cowboy hat.
“We have been studying methods to reconstruct or recreate an image a person is seeing just by looking at the person’s brain activity,” Kamitani, one of the scientists, tells CNBC Make It.
“Our previous method was to assume that an image consists of pixels or simple shapes. But it’s known that our brain processes visual information hierarchically, extracting different levels of features or components of different complexities.”
And the new AI research allows computers to detect objects, not just binary pixels. “These neural networks or AI model can be used as a proxy for the hierarchical structure of the human brain,” Kamitani says.
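Kamitani’s point about hierarchy can be illustrated with a toy two-layer feature extractor: a first layer responds to simple local edges, and a second layer pools those responses into a more position-tolerant signal. This is only a hand-rolled illustration of the idea, not the lab’s network (the image, filter, and pooling choices are made up for the example):

```python
import numpy as np

def conv2d(x, k):
    """Valid 2-D cross-correlation of image x with kernel k."""
    kh, kw = k.shape
    out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling: keeps the strongest response per patch."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

# Toy 8x8 image: a single vertical bar.
img = np.zeros((8, 8))
img[:, 3] = 1.0

# Layer 1 (low-level): a vertical-edge detector plus a ReLU nonlinearity.
edge_kernel = np.array([[-1.0, 0.0, 1.0]])
layer1 = np.maximum(conv2d(img, edge_kernel), 0.0)

# Layer 2 (higher-level): pooling makes the edge response tolerant to
# small shifts in position, a step up the feature hierarchy.
layer2 = max_pool(layer1)
```

Stacking many such layers is what lets a deep network, and by analogy the visual cortex, represent everything from raw edges up to whole objects.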
The flowchart, pictured below, was made by the research team at the Kamitani Lab at Kyoto University and breaks down the science of how a visualization is “decoded.”
The two charts displayed below show the results the computer reconstructed for subjects whose activity was logged while they were looking at natural images and images of letters.
As for the subjects whose brain activity was measured as they remembered the images, the scientists had another breakthrough.
The chart below shows that when decoding brain signals produced by a subject remembering an image, the AI system had a harder time reconstructing it. That’s because it’s more difficult for a person to remember an image of a cheetah or a fish exactly as it was seen.
The scientists wrote in the paper posted on bioRxiv: “We found that the generated images resembled the stimulus images (both natural images and artificial shapes) and the subjective visual content during imagery.”
Machine learning has previously been used to study brain scans (MRI, or magnetic resonance imaging) and generate visualizations of what a person is thinking when referring to simple, binary images like black-and-white letters or simple geometric shapes.
Neural networks are at the center of many developments in AI, including Google’s translation tool, Facebook’s facial recognition software, and Snapchat’s live filters.