Madrid, 30 (Europa Press)
Researchers have discovered that an artificial network of nanowires can be tuned to respond to electrical stimulation in a way similar to the brain.
Scientists from the University of Sydney and the National Institute for Materials Science (NIMS) in Japan have found that by keeping this network of nanowires in a brain-like state "at the edge of chaos," it performs tasks at an optimal level.
They say this indicates that the fundamental nature of neural intelligence is physical, and that their discovery opens an exciting avenue for the development of artificial intelligence. The study was published in the journal Nature Communications.
“We use wires of 10 micrometers in length and no more than 500 nanometers thick that are randomly arranged in a two-dimensional plane,” lead author Joel Hochstetter, a doctoral candidate at the Nano Institute and University of Sydney School of Physics, said in a statement.
“When the wires overlap, they form an electrochemical junction, like synapses between neurons,” he said. “We found that electrical signals sent through this network automatically find the best route to transmit information. This architecture allows the network to ‘remember’ previous paths through the system.”
Using simulations, the research team tested the random nanowire network to see how to make it work better for solving simple tasks.
If the signal stimulating the network was too low, the pathways were too predictable and orderly, and the output was not complex enough to be useful. If the electrical signal flooded the network, the output was completely disordered and useless for problem-solving. The signal that produced a useful output sat on the verge of this chaotic state.
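The "edge of chaos" idea can be illustrated with a toy model unrelated to the actual nanowire physics: the logistic map, a textbook system whose dynamics shift from orderly to chaotic as a single drive parameter increases. This is an illustrative sketch only, not the team's simulation.

```python
# Toy illustration of order vs. chaos: the logistic map x -> r*x*(1-x).
# As the drive parameter r grows, the dynamics go from a fixed point
# (orderly and predictable) through the "edge of chaos" to full chaos.

def logistic_trajectory(r, x0=0.2, steps=300):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

ordered = logistic_trajectory(2.5)   # settles onto a fixed point at 0.6
chaotic = logistic_trajectory(4.0)   # never settles down

# Spread of the final 50 values: near zero when ordered, large when chaotic.
def spread(xs):
    return max(xs[-50:]) - min(xs[-50:])
```

As in the nanowire experiment, too little drive yields repetitive, predictable behavior, while too much drive yields disorder; useful complexity lives between the two regimes.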
“Some theories in neuroscience suggest that the human brain can operate on the edge of this chaos, or the so-called critical state,” said co-author Zdenka Koncic, a professor at the University of Sydney. “Some neuroscientists believe that in this case we achieve the maximum performance of the brain.”
“The exciting thing about this result is that it suggests that these types of nanowire networks can be tuned into systems with diverse collective dynamics, similar to the brain, which can be harnessed to improve information processing.”
In a nanowire network, the junctions between the wires allow the system to integrate memory and processing into a single system. This differs from standard computers, which separate memory (RAM) from processing (CPU).
“These junctions act like computer transistors, but with the added property of remembering that signals have taken this path before. As such, they are called memristors,” Hochstetter said.
This memory takes a physical form, with the junctions at the intersection points between the nanowires acting as switches whose behavior depends on how they have responded to electrical signals in the past. When signals are applied across these junctions, tiny silver filaments grow that activate the junctions by allowing current to flow through them.
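A rough sketch of how such a junction behaves is below. The threshold, growth, and decay constants are invented for illustration; the real device physics of silver filament growth is far richer.

```python
class MemristiveJunction:
    """Toy switch whose conductance depends on its signal history."""

    def __init__(self, g_min=0.01, g_max=1.0):
        self.g = g_min                       # conductance, arbitrary units
        self.g_min, self.g_max = g_min, g_max

    def apply(self, voltage, threshold=0.2, growth=0.1, decay=0.02):
        # Above the threshold, the silver filament grows and conductance
        # rises (junction switches "on"); below it, the filament dissolves
        # and the junction slowly relaxes back toward its "off" state.
        if abs(voltage) > threshold:
            self.g = min(self.g_max, self.g + growth)
        else:
            self.g = max(self.g_min, self.g - decay)
        return self.g * voltage              # Ohm's law: I = G * V
```

Repeated pulses strengthen the junction, so the current produced by the same voltage grows over time: the junction "remembers" that signals have taken this path before.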
“This creates a memory network within the random nanowire system,” he said.
Hochstetter and his team created a simulation of the physical network to show how it can be trained to solve very simple tasks. “For this study, we trained the network to convert a simple waveform into more complex types of waveforms,” Hochstetter said.
In the simulation, they tuned the amplitude and frequency of the electrical signal to see where the best performance would occur. "We found that if you drive the signal too slowly, the network just does the same thing over and over without learning or developing. If we push it too hard and fast, the network becomes erratic and unpredictable," he said.
Professor Koncic said that combining memory and processing has enormous practical advantages for the future development of artificial intelligence.
"The algorithms needed to train the network to figure out which junction should be given the appropriate 'load,' or information weight, consume a lot of energy," she said.
"The systems we develop eliminate the need for such algorithms. We just allow the network to develop its own weighting, which means we only need to worry about the input and output of signals, a framework known as 'reservoir computing.'"
The network's weights are self-adjusting, which is likely to free up large amounts of energy. This means that any future AI system using such networks would have a much smaller energy footprint, she said.
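Reservoir computing can be sketched in a few lines: a fixed random recurrent network (standing in for the nanowire mesh) is never trained, and only a linear readout at the output is adjusted. The toy task below mirrors the one in the article, mapping a simple waveform (a sine) to a more complex one (a square wave). All sizes, rates, and weight scales here are arbitrary choices for illustration, not values from the study.

```python
import math
import random

random.seed(0)
N = 30                                    # reservoir size (arbitrary)

# Fixed random weights: these stand in for the nanowire mesh and are
# never trained.
W = [[random.uniform(-0.5, 0.5) for _ in range(N)] for _ in range(N)]
w_in = [random.uniform(-1.0, 1.0) for _ in range(N)]
w_out = [0.0] * N                         # the only trained parameters

def step(state, u):
    """Advance the reservoir one tick under input u."""
    return [math.tanh(0.1 * sum(W[i][j] * state[j] for j in range(N))
                      + w_in[i] * u) for i in range(N)]

# Task from the article: turn a simple waveform into a more complex one
# (here, a sine wave into a square wave).
inputs = [math.sin(0.1 * t) for t in range(2000)]
targets = [1.0 if u >= 0 else -1.0 for u in inputs]

state, lr, errors = [0.0] * N, 0.05, []
for u, y in zip(inputs, targets):
    state = step(state, u)
    pred = sum(w * s for w, s in zip(w_out, state))
    errors.append((y - pred) ** 2)
    # Least-mean-squares update of the readout weights only.
    for i in range(N):
        w_out[i] += lr * (y - pred) * state[i]
```

Because only the readout is trained, there is no energy-hungry algorithm assigning weights inside the network itself; the squared error falls as the readout learns, while the "reservoir" weights never change.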