Title of program: SPLINESMOOTH
Catalogue number: AAQO
Smooths a set of experimental data, measured at a sequence of values of some independent variable. This is done as a first step towards interpolating, integrating, differentiating, or otherwise transforming the function represented by the data.
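The program itself is not reproduced here; as a rough sketch of the workflow it describes (smooth first, then interpolate, differentiate, or integrate the fitted curve), one might use a smoothing spline in Python via SciPy's `UnivariateSpline`. This is an illustration of the general technique, not the SPLINESMOOTH code:

```python
# Fit a smoothing spline to noisy measurements, then interpolate,
# differentiate, and integrate the fitted curve. Illustrative only;
# this uses SciPy, not the original program.
import numpy as np
from scipy.interpolate import UnivariateSpline

# Synthetic "experimental" data: a smooth signal plus measurement noise.
x = np.linspace(0.0, 10.0, 50)
rng = np.random.default_rng(0)
y = np.sin(x) + rng.normal(0.0, 0.1, x.size)

# s controls the smoothing trade-off: larger s gives a smoother,
# less faithful fit; s=0 interpolates the data exactly.
spl = UnivariateSpline(x, y, s=0.5)

y_smooth = spl(x)               # evaluate the smoothed curve anywhere
dy = spl.derivative()(x)        # differentiate the fitted spline
area = spl.integral(0.0, 10.0)  # integrate over [0, 10]
```

The smoothing parameter `s=0.5` is an arbitrary choice for this synthetic data; in practice it would be tuned to the noise level of the measurements.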
Smooth interpolation of large sets of scattered data
- Author(s)
- Richard Franke; Greg Nielson
- Publisher
- John Wiley and Sons
- Year
- 1980
- Language
- English
- File size
- 783 KB
- Volume
- 15
- Category
- Article
- ISSN
- 0029-5981
No payment or registration required. For personal study only.
SIMILAR VOLUMES
A key challenge in pattern recognition is how to scale the computational efficiency of clustering algorithms on large data sets. The extension of non-Euclidean relational fuzzy c-means (NERF) clustering to very large (VL = unloadable) relational data is called the extended NERF (eNERF) clustering al…
Large sets of packings were investigated extensively. Much less is known about the dual problem, i.e., large sets of coverings. We examine two types of important questions in this context: what is the maximum number of disjoint optimal coverings? and what is the minimum number of optimal coverings fo…
In this paper a characterization of the optimal (using the minimum norm criterion) interpolant, convex along the edges of a triangulation, using data at the vertices is obtained. We thereby generalize results obtained by Nielson for the unconstrained case. © 1995 Academic Press, Inc.