Stability and Control of Linear Systems
Written by Andrea Bacciotti
- Publisher
- Springer Nature
- Year
- 2018
- Language
- English
- Pages
- 200
- Series
- Studies in Systems, Decision and Control 185
- Edition
- 1
- Category
- Library
Synopsis
This advanced textbook introduces the main concepts and advances in systems and control theory, and highlights the importance of geometric ideas in the context of possible extensions to the more recent developments in nonlinear systems theory. Although inspired by engineering applications, the content is presented within a strong theoretical framework and with a solid mathematical background, and the reference models are always finite dimensional, time-invariant multivariable linear systems. The book focuses on the time domain approach, but also considers the frequency domain approach, discussing the relationship between the two approaches, especially for single-input-single-output systems. It includes topics not usually addressed in similar books, such as a comparison between the frequency domain and the time domain approaches, bounded input bounded output stability (including a characterization in terms of canonical decomposition), and static output feedback stabilization for which a simple and original criterion in terms of generalized inverse matrices is proposed.
The book is an ideal learning resource for graduate students of control theory and automatic control courses in engineering and mathematics, as well as a reference or self-study guide for engineers and applied mathematicians.
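The reference models throughout are finite-dimensional, time-invariant linear systems of the form x' = Ax + Bu, for which asymptotic stability reduces to the Hurwitz condition on A treated in Chapter 3. As a minimal illustration of that idea (the matrix here is a hypothetical example, not taken from the book):

```python
import numpy as np

# Hypothetical 2x2 state matrix for the unforced system x' = A x.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

# The origin is asymptotically stable iff every eigenvalue of A
# has negative real part (the Hurwitz condition).
eigenvalues = np.linalg.eigvals(A)
is_stable = bool(np.all(eigenvalues.real < 0))
print(sorted(eigenvalues.real))  # eigenvalues -2 and -1
print(is_stable)                 # True
```

The characteristic polynomial of this A is s^2 + 3s + 2 = (s + 1)(s + 2), so both roots lie in the open left half-plane; the book's Routh-Hurwitz criterion (Section 3.4) reaches the same conclusion without computing the eigenvalues explicitly.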
Table of Contents
Preface
Contents
Notations and Terminology
1 Introduction
1.1 The Abstract Notion of System
1.1.1 The Input-Output Operator
1.1.2 Discrete Time and Continuous Time
1.1.3 Input Space and Output Space
1.1.4 State Space
1.1.5 Finite Dimensional Systems
1.1.6 Connection of Systems
1.1.7 System Analysis
1.1.8 Control System Design
1.1.9 Properties of Systems
1.2 Impulse Response Systems
1.3 Initial Conditions
1.3.1 Deterministic Systems
1.3.2 Time Invariant Systems
1.3.3 Linear Systems
1.3.4 External Stability
1.3.5 Zero-Initialized Systems and Unforced Systems
1.4 Differential Systems
1.4.1 Admissible Inputs
1.4.2 State Equations
1.4.3 Linear Differential Systems
2 Unforced Linear Systems
2.1 Prerequisites
2.2 The Exponential Matrix
2.3 The Diagonal Case
2.4 The Nilpotent Case
2.5 The Block Diagonal Case
2.6 Linear Equivalence
2.7 The Diagonalizable Case
2.8 Jordan Form
2.9 Asymptotic Estimation of the Solutions
2.10 The Scalar Equation of Order n
2.11 The Companion Matrix
3 Stability of Unforced Linear Systems
3.1 Equilibrium Positions
3.2 Conditions for Stability
3.3 Lyapunov Matrix Equation
3.4 Routh-Hurwitz Criterion
4 Linear Systems with Forcing Term
4.1 Nonhomogeneous Systems
4.1.1 The Variation of Constants Method
4.1.2 The Method of Undetermined Coefficients
4.2 Transient and Steady State
4.3 The Nonhomogeneous Scalar Equation of Order n
4.4 The Laplace Transform Method
4.4.1 Transfer Function
4.4.2 Frequency Response Analysis
5 Controllability and Observability of Linear Systems
5.1 The Reachable Sets
5.1.1 Structure of the Reachable Sets
5.1.2 The Input-Output Map
5.1.3 Solution of the Reachability Problem
5.1.4 The Controllability Matrix
5.1.5 Hautus' Criterion
5.2 Observability
5.2.1 The Unobservability Space
5.2.2 The Observability Matrix
5.2.3 Reconstruction of the Initial State
5.2.4 Duality
5.3 Canonical Decompositions
5.3.1 Linear Equivalence
5.3.2 Controlled Invariance
5.3.3 Controllability Form
5.3.4 Observability Form
5.3.5 Kalman Decomposition
5.3.6 Some Examples
5.4 Constrained Controllability
6 External Stability
6.1 Definitions
6.2 Internal Stability
6.3 The Case C=I
6.4 The General Case
7 Stabilization
7.1 Static State Feedback
7.1.1 Controllability
7.1.2 Stability
7.1.3 Systems with Scalar Input
7.1.4 Stabilizability
7.1.5 Asymptotic Controllability
7.2 Static Output Feedback
7.2.1 Reduction of Dimension
7.2.2 Systems with Stable Zero Dynamics
7.2.3 A Generalized Matrix Equation
7.2.4 A Necessary and Sufficient Condition
7.3 Dynamic Output Feedback
7.3.1 Construction of an Asymptotic Observer
7.3.2 Construction of the Dynamic Stabilizer
7.4 PID Control
8 Frequency Domain Approach
8.1 The Transfer Matrix
8.2 Properties of the Transfer Matrix
8.3 The Realization Problem
8.4 SISO Systems
8.4.1 The Realization Problem for SISO Systems
8.4.2 External Stability
8.4.3 Nyquist Diagram
8.4.4 Stabilization by Static Output Feedback
8.5 Disturbance Decoupling
A Internal Stability Notions
A.1 The Flow Map
A.2 Equilibrium Points and Stability in Lyapunov Sense
B Laplace Transform
B.1 Definition and Main Properties
B.2 A List of Laplace Transforms
B.2.1 Elementary Functions
B.2.2 Discontinuous Functions
B.2.3 Dirac Delta Function
B.3 Inverse Transform
B.4 The Laplace Transform of a Vector Function
References
Index