Deep neural networks are increasingly helping to design microchips, predict how proteins fold, and outperform humans at complex games. However, researchers have now found that there are fundamental theoretical limits on how stable and accurate those AI systems can actually be.

These results could help shed light on what is really possible with AI and what is not, the scientists add.

In artificial neural networks, components called “neurons” receive data and cooperate to solve a problem, such as recognizing images. The neural network repeatedly adjusts the links between its neurons and tests whether the resulting patterns of behavior are better at finding a solution. Over time, the network discovers which patterns are best at computing results and adopts them as defaults, mimicking the learning process in the human brain. A neural network is described as “deep” if it has multiple layers of neurons.
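The layered structure described here can be sketched in a few lines of code. This is a minimal toy illustration, not any particular system; the layer sizes and random weights are arbitrary, and training (the repeated adjustment of links) is omitted:

```python
import numpy as np

def relu(x):
    # Each "neuron" computes a weighted sum of its inputs,
    # then applies a simple nonlinearity.
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)

# A "deep" network has multiple layers of neurons. Each layer is a
# weight matrix describing the links between neurons; training would
# repeatedly adjust these weights.
layer_sizes = [4, 8, 8, 2]  # input -> two hidden layers -> output
weights = [rng.standard_normal((m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    for w in weights[:-1]:
        x = relu(x @ w)       # pass through each hidden layer
    return x @ weights[-1]    # linear output layer

output = forward(rng.standard_normal(4))
print(output.shape)  # (2,)
```

Here “depth” simply means the input passes through more than one layer of neurons before producing an output.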

Although deep neural networks are being used in more and more practical applications, including analyzing medical scans and empowering autonomous vehicles, there is now overwhelming evidence that they can frequently prove unstable; that is, a slight alteration in the data they receive can lead to a wild change in outcomes. For example, previous research found that changing a single pixel on an image could make an AI think a horse is a frog, and that medical images can be modified in a way imperceptible to the human eye that causes an AI to misdiagnose cancer 100 percent of the time.
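The kind of instability at issue can be illustrated with a deliberately simple toy model. The numbers and the linear “classifier” below are my own illustration and have nothing to do with the horse-vs-frog or medical-imaging studies; the point is only that a tiny, well-chosen change to the input can flip a decision:

```python
import numpy as np

# Toy illustration of instability: a small, targeted change to the
# input flips the classifier's decision.
w = np.array([1.0, -1.0, 0.5])     # a fixed linear "classifier"
x = np.array([0.30, 0.30, 0.02])   # score w @ x = 0.01 -> "positive"

score = float(w @ x)

# Nudge x by a small step against the weight direction.
eps = 0.02
x_adv = x - eps * w / np.linalg.norm(w)

score_adv = float(w @ x_adv)       # now -0.02 -> "negative"

print(score > 0, score_adv > 0)    # True False
```

The perturbation has length 0.02, far smaller than the input itself, yet the sign of the score (and hence the decision) flips.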

Previous research suggested there is mathematical proof that stable, accurate neural networks exist for a wide variety of problems. In a new study, however, researchers find that although stable, accurate neural networks may theoretically exist for many problems, there may paradoxically be no algorithm that can actually compute them.

“Theoretically, there are very few limitations on what neural networks can achieve,” says study co-lead author Matthew Colbrook, a mathematician at the University of Cambridge in England. The trouble emerges when trying to compute those neural networks. “A digital computer can only compute certain specific neural networks,” says study co-lead author Vegard Antun, a mathematician at the University of Oslo in Norway. “Sometimes computing a desirable neural network is impossible.”

These new findings may sound confusing, as if to say that a kind of cake can exist but that there is no recipe for making it.

“We would say that it isn’t the recipe that is the problem. Rather, it is the tools you need to make the cake that are the problem,” says study senior author Anders Hansen, a mathematician at the University of Cambridge in England. “We’re saying that there may well be a recipe for the cake, but regardless of the mixers you have available, you may not be able to make the desired cake. Moreover, when you try to make the cake with the mixer in your kitchen, you may end up with a completely different cake.”

In addition, to continue the analogy, “it may also be the case that you cannot tell whether the cake is wrong until you try it, and then it is too late,” Colbrook says. “There are, however, some cases where your mixer is sufficient to make the cake you want, or at least a good approximation of that cake.”

These new findings on the limits of neural networks echo previous work on the limits of mathematics from mathematician Kurt Gödel and on the limits of computation from computer scientist Alan Turing. Roughly speaking, they revealed “that there are mathematical statements that can never be proven or disproven and that there are fundamental computational problems that a computer cannot solve,” Antun says.

The new study reveals that an algorithm may not be able to compute a stable, accurate neural network for a given problem no matter how much data it can access or how accurate that data is. This is similar to Turing’s argument that there are problems a computer cannot solve regardless of computing power and runtime, Hansen says.

“There are inherent limitations on what computers can achieve, and these limitations will show up in AI as well,” Colbrook says. “This means that theoretical results on the existence of neural networks with great properties may not yield an accurate description of what is possible in reality.”

These new findings do not suggest that all neural networks are fatally flawed, but that they may only prove stable and accurate in limited scenarios. “In certain cases, it is possible to compute stable and accurate neural networks,” Antun says. “The key issue is the phrase ‘in certain cases.’ The big problem is to find these cases. Currently, there is very little understanding of how to do this.”

The researchers found there was often a tradeoff between stability and accuracy in neural networks. “The problem is that we want both stability and accuracy,” Hansen says. “In practice, for safety-critical applications, one may have to sacrifice some accuracy to secure stability.”
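The flavor of such a tradeoff can be seen even outside neural networks. The sketch below uses ridge regression as an analogy of my own choosing, not the method from the study: increasing the regularization makes the solution less sensitive to noise in the data (more stable), at the cost of recovering the true answer less accurately:

```python
import numpy as np

# Illustrative stability-vs-accuracy tradeoff via ridge regression.
# (An analogy only; not the method from the study.)
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 20))
x_true = rng.standard_normal(20)
b = A @ x_true                        # clean measurements
noise = 0.01 * rng.standard_normal(50)

def ridge(A, b, lam):
    # Solve min ||A x - b||^2 + lam ||x||^2 in closed form.
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

results = {}
for lam in (1e-6, 1.0):
    x_hat = ridge(A, b, lam)
    x_noisy = ridge(A, b + noise, lam)
    results[lam] = {
        "error": np.linalg.norm(x_hat - x_true),         # accuracy
        "sensitivity": np.linalg.norm(x_noisy - x_hat),  # stability
    }

# Heavier regularization: more stable, but less accurate.
print(results[1.0]["sensitivity"] < results[1e-6]["sensitivity"])  # True
print(results[1.0]["error"] > results[1e-6]["error"])              # True
```

Shrinking the solution damps the effect of noise in the measurements while pulling the estimate away from the true answer, which is the shape of the tradeoff the researchers describe.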

As part of the new study, the researchers developed what they call Fast Iterative Restarted Networks (FIRENETs). These neural networks can deliver a blend of stability and accuracy on tasks such as analyzing medical images.

These new findings concerning the limitations of neural networks are not meant to dampen artificial intelligence research, and may instead spur new work exploring ways to bend these rules.

“Figuring out what can and cannot be done will be healthy for AI in the long run,” Colbrook says. “Note that the negative results of Turing and Gödel sparked an enormous effort in the foundations of mathematics and computer science. This led to much of modern computer science and modern logic, respectively.”

For instance, these new findings suggest the possibility of a classification theory describing which stable neural networks of a given accuracy can be computed by an algorithm. Using the earlier cake analogy, “it would be a classification theory explaining which cakes can be baked with the mixers that are physically possible to design,” Antun says. “And if it is impossible to bake the cake, we want to know how close one can get to the kind of cake one desires.”

The findings were detailed in the journal Proceedings of the National Academy of Sciences.