𝔖 Bobbio Scriptorium
✦   LIBER   ✦

Safety-critical neural computing: explanation and verification in knowledge-augmented neural networks

✍ Scribed by J.H. Johnson; P.D. Picton; N.J. Hallam


Publisher: Elsevier Science
Year: 1993
Tongue: English
Weight: 601 KB
Volume: 8
Category: Article
ISSN: 0954-1810


✦ Synopsis


This paper addresses the problem that conventional neural networks can neither incorporate a priori knowledge nor explain their output. A mathematical theory of 'black-box classifiers' is developed which covers most of the best-known neural architectures. The limitations of the non-model-based computational paradigm are discussed; these include the inability to predict the behaviour of systems with multiple-valued, discontinuous, catastrophic, and chaotic state spaces. Worse, they include the inability to detect the presence of such systems, to recognise when the network is working outside its range of competence, and to recognise when it is working with data of a quality outside its range of experience. Of themselves, neural networks cannot communicate with human decision makers in human terms; often the choice is 'take it or leave it'. Knowledge-based computation does not necessarily have these drawbacks, and can therefore augment the powerful neural computing paradigm where it is weakest. We consider three fundamental ways of combining the two computational paradigms, and show how the explanation facility of knowledge-based systems can be used to induce explanations of the output of neural subsystems. We conclude with an architecture which we believe to be generic for safety-critical neural computation.
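
The paper itself is conceptual, but the hybrid it describes can be sketched in modern code. The Python fragment below is a minimal illustration, not the authors' architecture: a knowledge-based layer wraps a black-box classifier, checks whether an input lies inside the network's range of experience, and attaches rule-based explanations to the output. Every class name, rule, and threshold here is a hypothetical stand-in.

```python
# A minimal sketch, not the authors' implementation: every class, rule,
# and threshold below is a hypothetical stand-in used only to illustrate
# the hybrid architecture the synopsis describes.

from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class Verdict:
    label: str              # the black-box classifier's answer
    trusted: bool           # False when the input falls outside the training envelope
    explanation: list[str]  # rule-based justification, in human terms

class KnowledgeAugmentedClassifier:
    """Wrap a black-box classifier in a knowledge-based layer that can
    explain its output and flag inputs outside its range of experience."""

    def __init__(self,
                 black_box: Callable[[Sequence[float]], str],
                 envelope: Sequence[tuple[float, float]],
                 rules: Sequence[tuple[Callable[[Sequence[float], str], bool], str]]):
        self.black_box = black_box  # any trained classifier: feature vector -> label
        self.envelope = envelope    # per-feature (min, max) seen during training
        self.rules = rules          # (condition, text) pairs that justify a label

    def classify(self, x: Sequence[float]) -> Verdict:
        label = self.black_box(x)
        inside = all(lo <= v <= hi for v, (lo, hi) in zip(x, self.envelope))
        explanation = [text for condition, text in self.rules if condition(x, label)]
        if not inside:
            explanation.append("input lies outside the network's range of experience")
        return Verdict(label, trusted=inside, explanation=explanation)

# Usage, with a trivial stand-in for the neural network:
clf = KnowledgeAugmentedClassifier(
    black_box=lambda x: "alarm" if x[0] > 0.8 else "normal",
    envelope=[(0.0, 1.0), (0.0, 100.0)],
    rules=[(lambda x, label: label == "alarm" and x[1] > 90.0,
            "temperature above 90 supports the alarm classification")],
)
print(clf.classify([0.9, 95.0]))   # trusted alarm, with a stated reason
print(clf.classify([0.9, 150.0]))  # same answer, but flagged as untrustworthy
```

The design point is the one the synopsis makes: the symbolic layer, not the network, carries the burden of explanation and of deciding when the network's answer may be trusted.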