Artificial intelligence and neural networks: Steps toward principled integration: Edited by Vasant Honavar and Leonard Uhr, Academic Press, Boston, MA: 1994, $89.95, 653 pp., ISBN 0-12-355055-6
Reviewed by Raju S. Bapi and Sue L. McCabe
- Book ID
- 104348744
- Publisher
- Elsevier Science
- Year
- 1996
- Language
- English
- File size
- 388 KB
- Volume
- 9
- Category
- Article
- ISSN
- 0893-6080
Synopsis
Are symbolic artificial intelligence (SAI) and neural network (NN) approaches to building intelligent systems complementary? Are they two competing theories trying to arrive "there" first? Do they have such irreconcilable differences that it is a waste of effort trying to forge them together? Do we have a firm theoretical basis for "principled integration"? These are some of the questions that the book sets out to address. The book presents thought-provoking discussion of all these issues and leaves the reader with enough avenues ("leads") to search for answers. It is a useful compendium of issues and results by some of the leading researchers in the field, covering several directions: basic issues in symbolic versus connectionist paradigms, representation and inference, vision, language, and learning. As always happens with a book that reports on a fast-growing field such as neural networks, some of the material becomes dated by the time the book sees the light of publication. In the review we will point out (where we are aware of them) some of the latest results. To make the review more interesting, we start with a discussion of general issues in the integration of symbolic AI and NN. Specific comments on some of the chapters are organised along the lines of division in the book. We conclude with more general comments.
Newell's (1980) symbol system hypothesis is a formal statement of the theoretical underpinnings of the AI enterprise. It holds that human intelligence entails a symbol system and a set of procedures to manipulate these symbols. Combining this idea with Fodor's Language of Thought hypothesis (Fodor, 1976), a typical AI-er tends to reduce human thought (intelligence) to logic, rules, knowledge, and symbol processing. Searle's (1980) "Chinese room" argument highlighted the inadequacy of symbolic AI in explaining human cognition. Harnad (1990) argued that the woes of symbolic AI stem from the "ungrounded" nature of the atomic symbols. He pointed out that the initial set of symbols, being arbitrary, is flexible enough to support compositionality (atomic symbols can be combined and molecular representations can be decomposed according to a formal syntax) and systematicity (the atomic and molecular symbols and the rules of syntax can be systematically assigned a meaning). He asserted, however, that the very arbitrariness of these symbols means they become "ungrounded" (their meaning and interpretation reside in the subject's "head" and are otherwise "rootless"), lending credence to Chinese-room-like arguments.