Numerical Methods Using Java: For Data Science, Analysis, and Engineering
By Haksun Li, PhD
- Publisher: Apress
- Year: 2022
- Language: English
- Pages: 1196
- Edition: 1
- Category: Library
Synopsis
Implement numerical algorithms in Java using NM Dev, an object-oriented, high-performance programming library for mathematics. You'll see how it helps you quickly assemble classes into solutions for complex engineering problems.
Numerical Methods Using Java covers a wide range of topics, with chapters on linear algebra, root finding, curve fitting, differentiation and integration, solving differential equations, random numbers and simulation, a whole suite of unconstrained and constrained optimization algorithms, statistics, regression, and time-series analysis. The mathematical concepts behind the algorithms are clearly explained, with plenty of code examples and illustrations to help even beginners get started.
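To give a flavor of the numerical-integration material the synopsis mentions, here is a minimal, self-contained sketch of the composite trapezoidal rule in plain Java. It deliberately does not use the NM Dev API; the class and method names are illustrative only.

```java
// Composite trapezoidal rule: approximates the integral of f on [a, b]
// with n equal subintervals. Test case: integral of sin(x) on [0, pi] = 2.
public class Trapezoid {
    static double trapezoid(java.util.function.DoubleUnaryOperator f,
                            double a, double b, int n) {
        double h = (b - a) / n;
        // Endpoints carry weight 1/2, interior points weight 1.
        double sum = 0.5 * (f.applyAsDouble(a) + f.applyAsDouble(b));
        for (int i = 1; i < n; i++) {
            sum += f.applyAsDouble(a + i * h);
        }
        return h * sum;
    }

    public static void main(String[] args) {
        double area = trapezoid(Math::sin, 0.0, Math.PI, 10_000);
        System.out.println(area); // close to 2.0
    }
}
```

The error of the composite rule shrinks like h squared, so 10,000 panels already agree with the exact value 2 to several digits.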
What You Will Learn
- Program in Java using a high-performance numerical library
- Learn the mathematics for a wide range of numerical computing algorithms
- Convert ideas and equations into code
- Put together algorithms and classes to build your own engineering solution
- Build solvers for industrial optimization problems
- Do data analysis using basic and advanced statistics
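As a taste of what "converting ideas and equations into code" looks like, here is a minimal plain-Java sketch of the bisection method for root finding (covered in Section 3.3). It is a generic illustration, not the NM Dev implementation, and the names are assumptions for this example.

```java
// Bisection: given f continuous on [lo, hi] with f(lo) and f(hi) of
// opposite sign, repeatedly halve the bracket until it is within tol.
public class Bisection {
    interface Fn { double at(double x); }

    static double bisect(Fn f, double lo, double hi, double tol) {
        double flo = f.at(lo);
        for (int i = 0; i < 200 && hi - lo > tol; i++) {
            double mid = 0.5 * (lo + hi);
            double fmid = f.at(mid);
            if (flo * fmid <= 0) {
                hi = mid;           // root lies in the left half
            } else {
                lo = mid;           // root lies in the right half
                flo = fmid;
            }
        }
        return 0.5 * (lo + hi);
    }

    public static void main(String[] args) {
        // f(x) = x^2 - 2 has a root at sqrt(2) inside [1, 2].
        double root = bisect(x -> x * x - 2.0, 1.0, 2.0, 1e-12);
        System.out.println(root); // an approximation of sqrt(2) ~ 1.41421356...
    }
}
```

Each iteration halves the bracket, so convergence is guaranteed but only linear; the book's later chapters cover faster alternatives such as Brent's and Newton-Raphson methods.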
This book is for programmers, data scientists, and analysts with prior experience programming in any language, especially Java.
Table of Contents
About the Author
About the Technical Reviewer
Acknowledgments
Preface
Chapter 1: Introduction to Numerical Methods in Java
1.1. Library Design
1.1.1. Class Parsimony
1.1.2. Java vs. C++ Performance
1.2. Setup
1.2.1. Installing the Java Development Kit
1.2.2. NetBeans with Maven
1.2.3. nmdev.jar
1.2.4. IntelliJ IDEA
1.2.5. SuanShu
1.3. About This Book
1.3.1. Sample Code
Chapter 2: Linear Algebra
2.1. Vector
2.1.1. Element-Wise Operations
2.1.2. Norm
2.1.3. Inner Product and Angle
2.2. Matrix
2.2.1. Matrix Operations
2.2.2. Element-Wise Operations
2.2.3. Transpose
2.2.4. Matrix Multiplication
2.2.5. Rank
2.2.6. Determinant
2.2.7. Inverse and Pseudo-Inverse
2.2.8. Kronecker Product
2.3. Matrix Decomposition
2.3.1. LU Decomposition
2.3.2. Cholesky Decomposition
2.3.3. Hessenberg Decomposition and Tridiagonalization
2.3.4. QR Decomposition
2.3.5. Eigen Decomposition
2.3.6. Singular Value Decomposition
2.4. System of Linear Equations
2.4.1. Row Echelon Form and Reduced Row Echelon Form
2.4.2. Back Substitution
2.4.3. Forward Substitution
2.4.4. Elementary Operations
2.4.4.1. Row Switching Transformation
2.4.4.2. Row Multiplying Transformation
2.4.4.3. Row Addition Transformation
2.4.5. Gauss Elimination and Gauss-Jordan Elimination
2.4.6. Homogeneous and Nonhomogeneous Systems
2.4.7. Over-Determined Linear System
2.5. Sparse Matrix
2.5.1. Dictionary of Keys
2.5.2. List of Lists
2.5.3. Compressed Sparse Row
2.5.4. Sparse Matrix/Vector Operations
2.5.5. Solving Sparse Matrix Equations
Chapter 3: Finding Roots of Equations
3.1. An Equation of One Variable
3.2. Jenkins-Traub Algorithm
3.3. The Bisection Method
3.4. Brent's Method
3.4.1. Linear Interpolation Method, False Position Method, Secant Method
3.4.2. Inverse Quadratic Interpolation
3.4.3. Brent's Method Implementation
3.5. The Newton-Raphson Method
3.5.1. Halley's Method
Chapter 4: Finding Roots of System of Equations
4.1. System of Equations
4.2. Finding Roots of Systems of Two Nonlinear Equations
4.3. Finding Roots of Systems of Three or More Equations
Chapter 5: Curve Fitting and Interpolation
5.1. Least Squares Curve Fitting
5.2. Interpolation
5.2.1. Linear Interpolation
5.2.2. Cubic Hermite Spline Interpolation
5.2.3. Cubic Spline Interpolation
5.2.4. Newton Polynomial Interpolation
Linear Form
Quadratic Form
General Form
5.3. Multivariate Interpolation
5.3.1. Bivariate Interpolation
5.3.2. Multivariate Interpolation
Chapter 6: Numerical Differentiation and Integration
6.1. Numerical Differentiation
6.2. Finite Difference
6.2.1. Forward Difference
6.2.2. Backward Difference
6.2.3. Central Difference
6.2.4. Higher-Order Derivatives
6.3. Multivariate Finite Difference
6.3.1. Gradient
6.3.2. Jacobian
6.3.3. Hessian
6.4. Ridders' Method
6.5. Derivative Functions of Special Functions
6.5.1. Gaussian Derivative Function
6.5.2. Error Derivative Function
6.5.3. Beta Derivative Function
6.5.4. Regularized Incomplete Beta Derivative Function
6.5.5. Gamma Derivative Function
6.5.6. Polynomial Derivative Function
6.6. Numerical Integration
6.7. The Newton-Cotes Family
6.7.1. The Trapezoidal Quadrature Formula
6.7.2. The Simpson Quadrature Formula
6.7.3. The Newton-Cotes Quadrature Formulas
6.8. Romberg Integration
6.9. Gauss Quadrature
6.9.1. Gauss-Legendre Quadrature Formula
6.9.2. Gauss-Laguerre Quadrature Formula
6.9.3. Gauss-Hermite Quadrature Formula
6.9.4. Gauss-Chebyshev Quadrature Formula
6.10. Integration by Substitution
6.10.1. Standard Interval
6.10.2. Inverting Variable
6.10.3. Exponential
6.10.4. Mixed Rule
6.10.5. Double Exponential
6.10.6. Double Exponential for Real Line
6.10.7. Double Exponential for Half Real Line
6.10.8. Power Law Singularity
Chapter 7: Ordinary Differential Equations
7.1. Single-Step Method
7.1.1. Euler's Method (Polygon Method)
7.1.1.1. Euler's Formula
7.1.1.2. Implicit Euler Formula
7.1.1.3. Trapezoidal Formula
7.1.1.4. Prediction-Correction Method
7.1.2. Runge-Kutta Family
7.1.2.1. Second-Order Runge-Kutta Method
7.1.2.2. Third-Order Runge-Kutta Method
7.1.2.3. Higher-Order Runge-Kutta Method
7.1.3. Convergence
7.1.4. Stability
7.2. Linear Multistep Method
7.2.1. Adams-Bashforth Method
7.2.1.1. Adams-Bashforth Implicit Formulas
7.3. Comparison of Different Methods
7.4. System of ODEs and Higher-Order ODEs
Chapter 8: Partial Differential Equations
8.1. Second-Order Linear PDE
8.1.1. Parabolic Equation
8.1.2. Hyperbolic Equation
8.1.3. Elliptic Equation
8.2. Finite Difference Method
8.2.1. Numerical Solution for Hyperbolic Equation
8.2.2. Numerical Solution for Elliptic Equation
8.2.2.1. Direct Transfer
8.2.2.2. Linear Interpolation
8.2.3. Numerical Solution for Parabolic Equation
Chapter 9: Unconstrained Optimization
9.1. Brute-Force Search
9.2. C2OptimProblem
9.3. Bracketing Methods
9.3.1. Fibonacci Search Method
9.3.2. Golden-Section Search
9.3.3. Brent's Search
9.4. Steepest Descent Methods
9.4.1. Newton-Raphson Method
9.4.2. Gauss-Newton Method
9.5. Conjugate Direction Methods
9.5.1. Conjugate Directions
9.5.2. Conjugate Gradient Method
9.5.3. Fletcher-Reeves Method
9.5.4. Powell Method
9.5.5. Zangwill Method
9.6. Quasi-Newton Methods
9.6.1. Rank-One Method
9.6.2. Davidon-Fletcher-Powell Method
9.6.3. Broyden-Fletcher-Goldfarb-Shanno Method
9.6.4. Huang Family (Rank One, DFP, BFGS, Pearson, McCormick)
Chapter 10: Constrained Optimization
10.1. The Optimization Problem
10.1.1. General Optimization Algorithm
10.1.2. Constraints
10.1.2.1. Equality Constraints
10.1.2.2. Inequality Constraints
10.2. Linear Programming
10.2.1. Linear Programming Problems
10.2.2. First-Order Necessary Conditions
10.2.3. Simplex Method
10.2.4. The Algebra of Simplex Method
10.3. Quadratic Programming
10.3.1. Convex QP Problems with Only Equality Constraints
10.3.2. Active-Set Methods for Strictly Convex QP Problems
10.3.2.1. Primal Active-Set Method
10.3.2.2. Dual Active-Set Method
10.4. Semidefinite Programming
10.4.1. Primal and Dual SDP Problems
10.4.2. Central Path
10.4.3. Primal-Dual Path-Following Method
10.5. Second-Order Cone Programming
10.5.1. SOCP Problems
10.5.1.1. Portfolio Optimization
10.5.2. Primal-Dual Method for SOCP Problems
10.6. General Nonlinear Optimization Problems
10.6.1. SQP Problems with Only Equality Constraints
10.6.2. SQP Problems with Inequality Constraints
Chapter 11: Heuristics
11.1. Penalty Function Method
11.2. Genetic Algorithm
11.2.1. Differential Evolution
11.3. Simulated Annealing
Chapter 12: Basic Statistics
12.1. Random Variables
12.2. Sample Statistics
12.2.1. Mean
12.2.2. Weighted Mean
12.2.3. Variance
12.2.4. Weighted Variance
12.2.5. Skewness
12.2.6. Kurtosis
12.2.7. Moments
12.2.8. Rank
12.2.8.1. Quantile
12.2.8.2. Median
12.2.8.3. Maximum and Minimum
12.2.9. Covariance
12.2.9.1. Sample Covariance
12.2.9.2. Correlation
12.2.9.3. Covariance Matrix and Correlation Matrix
12.2.9.4. Ledoit-Wolf Linear Shrinkage
12.2.9.5. Ledoit-Wolf Nonlinear Shrinkage
12.3. Probability Distribution
12.3.1. Moments
12.3.2. Normal Distribution
12.3.3. Log-Normal Distribution
12.3.4. Exponential Distribution
12.3.5. Poisson Distribution
12.3.6. Binomial Distribution
12.3.7. T-Distribution
12.3.8. Chi-Square Distribution
12.3.9. F-Distribution
12.3.10. Rayleigh Distribution
12.3.11. Gamma Distribution
12.3.12. Beta Distribution
12.3.13. Weibull Distribution
12.3.14. Empirical Distribution
12.4. Multivariate Probability Distributions
12.4.1. Multivariate Normal Distribution
12.4.2. Multivariate T-Distribution
12.4.3. Multivariate Beta Distribution
12.4.4. Multinomial Distribution
12.5. Hypothesis Testing
12.5.1. Distribution Tests
12.5.1.1. Normality Test
Shapiro-Wilk Test
Jarque-Bera Test
D'Agostino Test
Lilliefors Test
12.5.1.2. Kolmogorov Test
12.5.1.3. Anderson-Darling Test
12.5.1.4. Cramér-von Mises Test
12.5.1.5. Pearson's Chi-Square Test
12.5.2. Rank Test
12.5.2.1. T-Test
12.5.2.2. One-Way ANOVA Test
12.5.2.3. Kruskal-Wallis Test
12.5.2.4. Wilcoxon Signed Rank Test
12.5.2.5. Siegel-Tukey Test
12.5.2.6. Van der Waerden Test
12.6. Markov Models
12.6.1. Discrete-Time Markov Chain
12.6.2. Hidden Markov Model
12.6.2.1. The Likelihood Question
12.6.2.2. The Decoding Question
12.6.2.3. The Learning Question
12.7. Principal Component Analysis
12.8. Factor Analysis
12.9. Covariance Selection
Chapter 13: Random Numbers and Simulation
13.1. Uniform Random Number Generators
13.1.1. Linear Congruential Methods
13.1.2. Mersenne Twister
13.2. Sampling from Probability Distribution
13.2.1. Inverse Transform Sampling
13.2.2. Acceptance-Rejection Sampling
13.2.3. Sampling from Univariate Distributions
13.2.3.1. Gaussian or Normal Distribution
13.2.3.2. Beta Distribution
13.2.3.3. Gamma Distribution
13.2.3.4. Poisson Distribution
13.2.3.5. Exponential Distribution
13.2.4. Sampling from Multivariate Distributions
13.2.4.1. Multivariate Uniform Distribution Over Box
13.2.4.2. Multivariate Uniform Distribution Over Hypersphere
13.2.4.3. Multivariate Normal Distribution
13.2.4.4. Multinomial Distribution
13.2.5. Resampling Method
13.2.5.1. Bootstrapping Methods
13.2.5.2. The Politis-White-Patton Method
13.3. Variance Reduction
13.3.1. Common Random Numbers
13.3.2. Antithetic Variates
13.3.3. Control Variates
13.3.4. Importance Sampling
Chapter 14: Linear Regression
14.1. Ordinary Least Squares
14.1.1. Assumptions
14.1.2. Model Properties
14.1.3. Residual Analysis
14.1.4. Influential Point
14.1.5. Information Criteria
14.1.6. NM Dev Linear Regression Package
14.2. Weighted Least Squares
14.3. Logistic Regression
14.4. Generalized Linear Model
14.4.1. Quasi-family
14.5. Stepwise Regression
14.6. LASSO
Chapter 15: Time-Series Analysis
15.1. Univariate Time Series
15.1.1. Stationarity
15.1.2. Autocovariance
15.1.3. Autocorrelation
15.1.4. Partial Autocorrelation
15.1.5. White Noise Process and Random Walk
15.1.6. Ljung-Box Test for White Noise
15.1.7. Model Decomposition
15.2. Time-Series Models
15.2.1. AR Models
15.2.1.1. AR(1)
15.2.1.2. AR(2)
15.2.1.3. AR(p)
15.2.1.4. Estimation
15.2.1.5. Forecast
15.2.2. MA Model
15.2.2.1. MA(1)
15.2.2.2. MA(p)
15.2.2.3. Invertibility and Causality
15.2.2.4. Estimation
15.2.2.5. Forecast
15.2.3. ARMA Model
15.2.3.1. ARMA(1,1)
15.2.3.2. ARMA(p, q)
15.2.3.3. Forecast
15.2.3.4. Estimation
15.2.4. ARIMA Model
15.2.4.1. Unit Root
15.2.4.2. ARIMA(p, d, q)
15.2.4.3. ARIMAX(p, d, q)
15.2.4.4. Estimation
15.2.4.5. Forecast
15.2.5. GARCH Model
15.2.5.1. ARCH(q)
15.2.5.2. GARCH(p, q)
15.2.5.3. Estimation
15.2.5.4. Forecast
15.3. Multivariate Time Series
15.3.1. VAR Model
15.3.1.1. VAR(1)
15.3.1.2. VAR(p)
15.3.1.3. VARX(p)
15.3.1.4. Estimation
15.3.1.5. Forecast
15.3.2. VMA Model
15.3.2.1. VMA(1)
15.3.2.2. VMA(q)
15.3.3. VARMA Model
15.3.4. VARIMA Model
15.4. Cointegration
15.4.1. VEC Model
15.4.2. Johansen Cointegration Test
References
Index
Subjects
Numerical Methods; NM Dev; Data Science; Analysis
SIMILAR VOLUMES
Instead of presenting the standard theoretical treatments that underlie the various numerical methods used by scientists and engineers, Using R for Numerical Analysis in Science and Engineering shows how to use R and its add-on packages to obtain numerical solutions to complex mathematical problems.