An optimal parallel perceptron learning algorithm for a large training set
Authors: Tzung-Pei Hong; Shian-Shyong Tseng
- Publisher: Elsevier Science
- Year: 1994
- Language: English
- File size: 211 KB
- Volume: 20
- Category: Article
- ISSN: 0167-8191
Synopsis
In [2], a parallel perceptron learning algorithm on the single-channel broadcast communication model was proposed to speed up the learning of perceptron weights [3]. The results in [2] showed that, given n training examples, the average speedup is 1.48·n/ln n with n processors. Here, we explain how the parallelization can be modified so that it applies to any number of processors. Both analytical and experimental results show that the average speedup can reach nearly O(r) with r processors if r is much less than n.
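The synopsis only outlines the scheme, so the following is a minimal Python sketch of the general idea, under stated assumptions: the n training examples are partitioned among r simulated processors, each processor scans its own block for a misclassified example, and the first update found is "broadcast" so that every processor applies the same weight change. All names (`train_parallel_perceptron`, `blocks`, etc.) are illustrative assumptions; this is not the authors' published algorithm, which is defined on the single-channel broadcast communication model rather than in shared memory.

```python
import numpy as np

def train_parallel_perceptron(X, y, r, max_rounds=1000):
    """Hypothetical sketch of perceptron learning with the training
    set partitioned across r simulated processors.

    X: (n, d) array of examples; y: labels in {-1, +1}.
    In each round every processor scans its own block for a
    misclassified example; the first one found is 'broadcast' and
    the same weight update is applied everywhere.
    """
    n, d = X.shape
    w = np.zeros(d)
    blocks = np.array_split(np.arange(n), r)   # one block per processor
    for _ in range(max_rounds):
        found = None
        for block in blocks:                   # emulated parallel search
            for i in block:
                if y[i] * (X[i] @ w) <= 0:     # misclassified (or on boundary)
                    found = i
                    break                      # this processor wins the channel
            if found is not None:
                break
        if found is None:                      # no misclassified example: converged
            return w
        w = w + y[found] * X[found]            # broadcast update, applied by all
    return w

# Toy usage: 200 linearly separable points in the plane.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X @ np.array([1.0, 1.0]) > 0, 1, -1)
w = train_parallel_perceptron(X, y, r=4)
```

With r much smaller than n, each processor scans only about n/r examples per round instead of n, which is the intuition behind the near-O(r) speedup stated in the synopsis.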
SIMILAR VOLUMES
The intersection radius of a finite collection of geometrical objects in the plane is the radius of the smallest closed disk that intersects all the objects in the collection. Bhattacharya et al. showed how the intersection radius can be found in linear time for a collection of line segments in the plane.
Based on the properties of star polygons, and on the fact that a convex polygon is a special kind of star polygon, a relative coordinate system is built with the star point as the origin and the two lines parallel to the x-axis and y-axis as the coordinate axes, and the plane is divided into four quadrants.