On the approximation error introduced using principal component analysis in neural networks
By Rossella Cancelliere; Mario Gai
- Publisher
- Elsevier Science
- Year
- 2001
- Language
- English
- File size
- 373 KB
- Volume
- 47
- Category
- Article
- ISSN
- 0362-546X
No payment or registration required. For personal study only.
Synopsis
The Multi-Layer Perceptron (MLP) is currently the most widely used neural network model in applications, so it is relevant to design and test methods that improve its efficiency at run time. This paper presents a new method that uses Principal Component Analysis (PCA) to suppress neural unit contributions below a chosen tolerance, reformulating the data set in a more compact form that is faster to process; the idea is applied to the problem of tracking interferometric signals in astronomy. The approximation introduced by suppressing a given number of PCA terms affects the propagation of signals through the network; the discrepancy at the output unit of the network is derived experimentally and related to the analytical formulation obtained with the classical technique of error propagation. The new, compressed data set carries a smaller computational load, which may be of interest for accelerating real-world applications.
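The core operation the synopsis describes, compressing a data set by suppressing PCA terms and measuring the approximation error this introduces, can be sketched as follows. This is a minimal illustration with NumPy on hypothetical toy data (a random low-rank matrix standing in for the paper's interferometric signals); the choice of `k` retained components and the tolerance criterion are assumptions, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data set: 200 samples, 10 correlated features (a hypothetical
# stand-in for the interferometric signals discussed in the paper).
# Built as a product of thin matrices, so its intrinsic rank is 3.
X = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 10))
X = X - X.mean(axis=0)  # PCA assumes centered data

# PCA via eigendecomposition of the sample covariance matrix.
cov = X.T @ X / (len(X) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)      # ascending order
order = np.argsort(eigvals)[::-1]           # sort descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Keep only the k leading components; the remaining terms are suppressed.
k = 3
basis = eigvecs[:, :k]
X_compressed = X @ basis            # compact representation (200 x k)
X_reconstructed = X_compressed @ basis.T

# Relative approximation error introduced by the suppressed PCA terms.
err = np.linalg.norm(X - X_reconstructed) / np.linalg.norm(X)
print(f"relative reconstruction error with k={k}: {err:.2e}")
```

Because the toy data has intrinsic rank 3, retaining k=3 components leaves only a numerically negligible error; with real, full-rank signals the error grows as more terms are suppressed, and it is this error that the paper propagates through the network to the output unit.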
SIMILAR VOLUMES
According to production records and field tests of the cellular phone industry, the existing hot bar blade design has two defects: (1) temperature distributions along the edge of the hot bar blade are nonuniform during the heating and soldering processes; and (2) the blade cannot reach the desired t