Philosophy, Psychology and Neuroscience

Neural Networks and Explanatory Opacity

Apparently, the field of artificial intelligence is in something of a crisis. For certain applications only deep artificial neural networks (DANNs) will do. However, according to many, both within the field and outside it, DANNs are inexplicable, explanatorily opaque, or uninterpretable. And that is a pressing concern given their ever-greater role in our lives. Such claims of explanatory opacity stand in marked contrast to the tenor of discussion within cognitive neuroscience. In that field, new investigatory techniques and theories are taken to be shedding unprecedented explanatory light both on the workings of the brain and on the nature of human perception and cognition. This is an odd situation. In effect, DANNs are simplistic mathematical models of natural neural networks. At first sight, one would expect DANNs to be at least as explicable, or transparent, as the more complex and less precisely described neural networks they model. By reflecting both on different models of explanation and on different explanatory interests, I explore the nature of this disparity. The aim is to provide a better understanding of explanatory opacity as it affects AI and to consider its scope over natural neural networks.
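
To give a rough sense of the simplicity at issue (a minimal sketch of the standard feed-forward formulation, with illustrative symbols not drawn from the abstract), a single unit in a typical DANN computes nothing more than a weighted sum of its inputs passed through a fixed nonlinearity:

$$ y = \sigma\!\Big(\sum_{i} w_i x_i + b\Big) $$

where the $x_i$ are the unit's inputs, the $w_i$ its learned weights, $b$ a bias term, and $\sigma$ a simple activation function such as a sigmoid. A biological neuron admits no comparably compact and complete formal description, which is why the asymmetry in claims of opacity is puzzling.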