There's a way to turn almost any object into a computer - and it could cause shockwaves in AI

(Nanowerk News) The latest chip in the iPhone 7 has 3.3 billion transistors packed into a piece of silicon around the size of a small coin. But the trend for smaller, increasingly powerful computers could be coming to an end. Silicon-based chips are rapidly reaching a point at which the laws of physics prevent them from getting any smaller. There are also important limits to what silicon-based devices can do, which makes a strong case for looking at other ways to power computers.
Perhaps the best-known alternative researchers are exploring is quantum computing, which manipulates the quantum properties of materials to process information in a fundamentally different way from traditional digital machines. But there is also the possibility of using alternative materials – potentially any material or physical system – as computers that perform calculations without the need to manipulate electrons the way silicon chips do. And it turns out these could be even better for developing artificial intelligence than existing computers.
The idea, commonly known as “reservoir computing”, came from attempts to develop computer networks modelled on the brain. It rests on the insight that we can tap into the behaviour of physical systems – anything from a bucket of water to blobs of plastic laced with carbon nanotubes – in order to harness their natural computing power ("Reservoir Computing as a Model for In-Materio Computing").
Input and output
Reservoir computers exploit the physical properties of a material in its natural state to do part of a computation. This contrasts with the current digital computing model of changing a material’s properties to perform computations. For example, to create modern microchips we alter the crystal structure of silicon. A reservoir computer could, in principle, be made from a piece of silicon (or any number of other materials) without these design modifications.
The basic idea is to stimulate a material in some way and learn to measure how this affects it. If you can work out how the input stimulation maps to the output change, you effectively have a calculation that can then be used as part of a wider range of computations. Unlike with traditional computer chips, which depend on the precise positions of electrons, the specific arrangement of the particles in the material isn’t important. Instead, we only need to observe certain overall properties that let us measure the output change in the material.
For example, one team of researchers built a simple reservoir computer out of a bucket of water. They demonstrated that, after stimulating the water with mechanical probes, they could train a camera watching the water’s surface to read the distinctive ripple patterns that formed. They then worked out the calculation linking the probe movements to the ripple patterns and used it to perform some simple logical operations. Fundamentally, the water itself was transforming the input from the probes into a useful output – and that is the great insight.
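To make the recipe concrete, here is a minimal sketch in Python that uses a small simulated reservoir – a fixed, randomly connected recurrent network standing in for the bucket of water. Only the linear readout is trained; the reservoir itself is never modified. All names and parameter values here are illustrative, not taken from the research described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# A fixed, random "reservoir": its weights are never trained
# (illustrative stand-in for the bucket of water).
n_res = 200
W_in = rng.uniform(-1, 1, (n_res, 1))                  # the "probes"
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))        # keep the dynamics stable

def run_reservoir(u):
    """Drive the reservoir with inputs u and record its states (the 'ripples')."""
    x = np.zeros(n_res)
    states = np.empty((len(u), n_res))
    for i, u_t in enumerate(u):
        x = np.tanh(W @ x + W_in[:, 0] * u_t)          # untrained "physics"
        states[i] = x
    return states

# Task: XOR of the current and previous input bit. This needs both
# nonlinearity and memory, neither of which the readout itself has.
u = rng.integers(0, 2, 5000).astype(float)
y = np.r_[0.0, np.logical_xor(u[1:], u[:-1])]

X = run_reservoir(u)
washout = 100                                          # discard the initial transient
Xw, yw = X[washout:], y[washout:]

# Train only the readout: ridge regression from reservoir states to targets.
reg = 1e-6
W_out = np.linalg.solve(Xw.T @ Xw + reg * np.eye(n_res), Xw.T @ yw)

accuracy = ((Xw @ W_out > 0.5) == yw.astype(bool)).mean()
print(f"temporal-XOR accuracy: {accuracy:.3f}")
```

The task here needs both nonlinearity and memory, which the untrained reservoir supplies for free; the only learning step is a single linear regression.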
General purpose brain cells
It turns out that this idea of reservoir computing aligns with recent neuroscience research suggesting that parts of the brain appear to be “general-purpose”. These areas are predominantly made up of collections of neurons that are only loosely ordered, yet they can still support cognitive function in more specialised parts of the brain, helping to make it more efficient. As with a reservoir computer, if this reservoir of neurons is stimulated with a specific signal it will respond in a very characteristic way, and this response can help perform computations.
For example, recent work ("Distributed Fading Memory for Stimulus Properties in the Primary Visual Cortex") suggests that when we hear or see something, a general-purpose part of the brain is stimulated by the sound or light, and the response of the neurons in that area is then read out by another, more specialised area.
Research indicates that reservoir computers could be extremely robust and computationally powerful ("Fully analogue photonic reservoir computer") and, in theory, could carry out an effectively unlimited range of functions. In fact, simulated reservoirs have already become very popular in some areas of artificial intelligence thanks to precisely these properties. For example, systems that use reservoir methods for stock-market prediction have been shown to outperform many conventional artificial-intelligence techniques ("Short-term stock price prediction based on echo state networks"). In part, this is because it turns out to be much easier to train an AI that harnesses the power of a reservoir than one that does not.
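The stock-prediction work cited above is based on echo state networks, the best-known family of simulated reservoirs. The sketch below is again illustrative – the data is a toy noisy sine wave rather than market data, and none of the names or settings come from the cited paper – but it shows why training is so cheap: the recurrent weights stay fixed, and fitting the model amounts to a single linear regression for the readout.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy series standing in for market data: a noisy sine wave.
t = np.arange(3000)
series = np.sin(0.1 * t) + 0.05 * rng.standard_normal(len(t))

# Fixed echo-state-style reservoir with leaky-integrator units.
n_res = 300
W_in = rng.uniform(-0.5, 0.5, (n_res, 1))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.95 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius below 1
leak = 0.3                                          # slows the reservoir's dynamics

x = np.zeros(n_res)
states = np.empty((len(series) - 1, n_res))
for i, u in enumerate(series[:-1]):
    x = (1 - leak) * x + leak * np.tanh(W @ x + W_in[:, 0] * u)
    states[i] = x
targets = series[1:]                                # predict the next value

washout, split = 200, 2000
X_train, y_train = states[washout:split], targets[washout:split]
X_test, y_test = states[split:], targets[split:]

# The only trained parameters: readout weights fitted by ridge regression.
reg = 1e-6
W_out = np.linalg.solve(X_train.T @ X_train + reg * np.eye(n_res),
                        X_train.T @ y_train)

mse = np.mean((X_test @ W_out - y_test) ** 2)
print(f"one-step-ahead test MSE: {mse:.5f}")
```

Compare this with training a conventional recurrent neural network, where every weight must be adjusted by backpropagation through time: here the reservoir's dynamics come for free, and only the readout weights are learned.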
Ultimately, this is still a relatively new technology and a good deal of research remains to be done into its capabilities and implications. But it is already clear that there are a huge number of potential applications for this type of technology, both in AI and more broadly. These could include anything from analysing and processing real-time data to image and pattern recognition and controlling robots.
Source: By Mark Douthwaite, PhD Candidate in High Integrity Systems Engineering (HISE), University of York and Matt Dale, PhD Student in Non-Standard Computation, York Centre for Complex Systems Analysis, University of York