If we want to talk about the computational modeling of the brain, we have to look at the notion of the brain as a computer, which is the perspective in question here. Behind the computational modeling of the brain is the idea that the brain can be identified with, or reduced to, a computer. We can discuss that philosophically and technically, but if we simplify things and look at a computer, there is a hardware part and a software part.
I think it is not an exaggeration to say that the hardware part of the brain is relatively well understood: we understand, for instance, the architecture and organization of the brain as a structure made of different elements at different scales. In this respect, a great deal of research effort these days goes into implementing this architecture, both in software simulations and in actual computer chips whose architecture is that of elementary assemblies of neural cells. We call these approaches neuromorphic in the sense that you want to implement, in a non-biological substrate, an architecture that really mimics the brain. Then you could hope that by using this architecture you would learn about brain function, or that you would obtain results at least equivalent in performance to those of the brain: performance both in terms of computational power, which is, I think we can agree, quite high for the brain, and in terms of efficiency.
It is not so much a question of how many operations can be performed; it is a matter of efficiency in terms of energy consumed. If you look at high-performance computers and supercomputers, like the one I was alluding to, they require an energy supply in the range of one to five megawatts. That is completely crazy when you compare it to the efficiency of the brain at, in theory at least, the same number of operations: the human brain requires an energy supply equivalent to a light bulb of 10-20 watts. We are looking at things that differ tremendously in efficiency, which makes it very puzzling for computer scientists trying to mimic this extremely powerful object that is able to perform so many elementary operations at once with a very limited supply of energy. In this respect it’s fascinating, and it’s one trend of research in the computational modeling of the brain: it is actually trying to use the brain as a model for the computer. So it is like two sides of the same coin.
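To make the efficiency gap concrete, here is a back-of-the-envelope comparison using the approximate figures quoted above (a few megawatts versus a few tens of watts); the exact numbers are order-of-magnitude assumptions, not measurements:

```python
# Rough energy-efficiency comparison, using the approximate figures
# mentioned above (order-of-magnitude assumptions only).
supercomputer_watts = 5e6   # ~5 MW for a large supercomputer
brain_watts = 20            # ~10-20 W for the human brain

ratio = supercomputer_watts / brain_watts
print(f"At comparable (theoretical) operation counts, the brain is "
      f"roughly {ratio:,.0f}x more energy-efficient.")
```

Even if the operation counts are only loosely comparable, the gap is five orders of magnitude, which is the puzzle driving neuromorphic research.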
Whether we are going to learn about brain function by implementing the brain in silico, that’s a different question, and I guess maybe yes. But as I said, one of the motivations for developing these neuromorphic computing solutions is to reach greater efficiency in terms of energy needs. Then, if these solutions exist and are available, I guess a neuroscientist may use these brain-like computers as models that they can observe, and in which they can implement or test hypotheses about brain function or dysfunction. You can look at how the functions of the brain in silico would be altered by modifying some of the parameters, and ask whether that would lead to the pseudo-behaviors observed in some patients, for instance. You could also implement models for brain repair. That makes it a very attractive alternative to current practices in biological research, which are relatively limited for testing this kind of hypothesis, because you have to rely on animal models, which are imperfect, or on patients. The different solutions for treating a given patient are relatively limited, I would say, because you are dealing with a person and not a computer, and obviously you don’t want to make mistakes.
So that’s one way of modeling the brain as a computer; this is the hardware portion. But there is also a software portion, which is likewise an object of active research everywhere. There is, for instance, the Human Brain Project in the EU, which has triggered a lot of interest and one of whose deliverables and objectives is to implement a software version of the brain. At this time there is no implementation in silico; rather, it is a software program that would basically model every single cell, maybe not in the whole brain but in some brain regions, for instance the olfactory system or the somatosensory system of a rodent, which was actually published last year. The approach they are taking is to implement equations, if you will, for each and every single cell. The way these different cells interact is also modeled with software modules. Basically, you have a supercomputer running this software, and you can proceed the same way as I was describing before, by observing the end product of this brain activity, which emerges spontaneously or is altered by pseudo-stimuli that you can also model in software. This is very interesting, very flexible, and it also opens great perspectives in terms, again, of modeling brain functions and dysfunctions. It remains uncertain whether this can scale up to the dimensions and complexity of the whole brain, and whether we are going to learn how the brain implements function and behavior – this is definitely a very active field of research in neuroscience.
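As a toy illustration of what "an equation for each and every single cell" can mean, here is a minimal sketch of a leaky integrate-and-fire neuron. This is a drastically simplified stand-in for the detailed biophysical cell models used in large simulation projects, and every parameter value below is an illustrative assumption, not a biological fit:

```python
# Minimal leaky integrate-and-fire neuron: one differential equation
# per cell, integrated step by step. A toy stand-in for the far more
# detailed per-cell models used in large brain simulations.
# All parameter values are illustrative assumptions.

def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0, r_m=10.0):
    """Integrate dV/dt = (-(V - v_rest) + r_m * I) / tau.
    Record a spike time whenever V crosses v_thresh, then reset V."""
    v = v_rest
    spike_times = []
    for step, i_ext in enumerate(input_current):
        dv = (-(v - v_rest) + r_m * i_ext) / tau
        v += dv * dt
        if v >= v_thresh:
            spike_times.append(step * dt)  # time in ms
            v = v_reset
    return spike_times

# A constant drive for 100 ms (1000 steps of 0.1 ms) elicits
# regular spiking in this toy model.
spikes = simulate_lif([2.0] * 1000)
print(f"{len(spikes)} spikes, first at t={spikes[0]:.1f} ms")
```

A whole-region simulation then couples many such equations through modeled synapses, which is where the supercomputer comes in.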
So, today we are watching this revolution of machine learning and how it is penetrating industry and consumer goods. But the question is: is it really a translational aspect of neuroscience, and are we going to learn how the brain works from machine learning? As of today, I don’t think that’s the case. In that respect, the implementation of machine learning is remarkable and full of promise, but it also poses societal and ethical challenges. If you look just at the scientific portion of it, you would hope that by observing these neural networks in action, performing classification on series of images or translating natural languages, you would learn how the brain does that. And that’s not the case: although the architecture of the software mimics that of brain networks, the mechanisms by which these software components realize a function are not clear, meaning it is very hard to generalize and understand the mechanisms the network has implemented to perform a given task with high performance. In this respect you do not have insight into the implementation of a given function, and therefore you cannot bridge that with the actual brain activity that may be observed in humans. So, taken together: with the emergence of new mathematical tools, immediate access to huge amounts of data, and computer resources, all the elements are in place to approach this fascinating question of better understanding brain activity, brain function, and brain dysfunction with new methods and new resources that we did not have until recently.