Application of a Modular Feedforward Neural Network for Grade Estimation
Authors: Pejman Tahmasebi; Ardeshir Hezarkhani
- Publisher: Springer US
- Year: 2011
- Language: English
- File size: 312 KB
- Volume: 20
- Category: Article
- ISSN: 1573-8981
Synopsis
This article presents a new neural network (NN) architecture intended to improve grade estimation. The main aim of the study is to use a specific NN with a simpler architecture and thereby reach a better solution. Most commonly used NNs are fully connected, which requires a multivariable objective function to be optimized; the more variables that function has, the more complex the NN becomes, and the more easily it is trapped in local minima. In this study, a new NN is used in which connections are eliminated based on the final performance. Toward this aim, several network architectures were tested, and the network yielding the minimum error was selected. The selected network has low complexity and few connections among nodes, which helps the learning algorithm converge rapidly and more accurately. Furthermore, this network can cope with small data sets. To test and evaluate the new method, a case study of an iron deposit was performed, and the results were compared with those of common grade-estimation techniques, e.g., geostatistics and the multilayer perceptron (MLP). According to the obtained results, the new NN architecture shows better performance for grade estimation.
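The key idea in the synopsis, eliminating connections to shrink the optimization problem, can be loosely illustrated with a minimal NumPy sketch. This is not the paper's algorithm: the paper selects which connections to drop by testing several architectures against the final error, whereas here a hand-chosen binary mask (purely illustrative) zeroes out input-to-hidden weights, and `n_free_params` shows how the count of trainable variables in the objective function falls accordingly.

```python
import numpy as np

def init_network(n_in, n_hidden, n_out, mask=None, seed=0):
    """Initialise a single-hidden-layer feedforward net.

    `mask` has the same shape as the input-to-hidden weight matrix;
    zeros mark eliminated connections (illustrative stand-in for the
    architecture-selection step described in the article)."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))
    W2 = rng.normal(scale=0.1, size=(n_hidden, n_out))
    if mask is None:
        mask = np.ones_like(W1)          # fully connected by default
    return W1 * mask, W2, mask

def forward(x, W1, W2):
    """One forward pass: tanh hidden layer, linear output (grade estimate)."""
    h = np.tanh(x @ W1)
    return h @ W2

def n_free_params(W1, W2, mask):
    """Trainable variables in the objective function: only unmasked
    input-to-hidden weights plus all hidden-to-output weights."""
    return int(mask.sum()) + W2.size
```

With 3 inputs, 4 hidden nodes, and 1 output, the fully connected net has 16 free parameters; masking half of the input-to-hidden connections leaves 10, giving a lower-dimensional objective for the learning algorithm, which is the effect the article exploits.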
SIMILAR VOLUMES
The number of required hidden units is statistically estimated for feedforward neural networks that are constructed by adding hidden units one by one. The output error decreases with the number of hidden units by an almost constant rate, if each appropriate hidden unit is selected out of a great num…

This paper presents a message-passing architecture simulating multilayer neural networks, adjusting its weights for each pair, consisting of an input vector and a desired output vector. First, the multilayer neural network is defined, and the difficulties arising from parallel implementation are cla…