Adaptive Dynamic Programming for Control

Algorithms and Stability

Nonfiction, Science & Nature, Technology, Automation, Mathematics, Applied
Author: Huaguang Zhang, Derong Liu, Yanhong Luo, Ding Wang
ISBN: 9781447147572
Publisher: Springer London
Publication: December 14, 2012
Imprint: Springer
Language: English

There are many methods of stable controller design for nonlinear systems. In seeking to go beyond the minimum requirement of stability, Adaptive Dynamic Programming for Control approaches the challenging topic of optimal control for nonlinear systems using the tools of adaptive dynamic programming (ADP). The range of systems treated is extensive: affine, switched, singularly perturbed and time-delay nonlinear systems are discussed, as are the uses of neural networks and the techniques of value and policy iteration. The text features three main aspects of ADP in which the methods proposed for stabilization, tracking and games benefit from the incorporation of optimal control methods:
• infinite-horizon control, for which the difficulty of solving the Hamilton–Jacobi–Bellman partial differential equations directly is overcome, and a proof is provided that the iterative value-function updating sequence converges to the infimum of all the value functions obtained by admissible control law sequences (see the sketch after this list);
• finite-horizon control, implemented in discrete-time nonlinear systems, showing the reader how to obtain suboptimal control solutions within a fixed number of control steps, with results more easily applied in real systems than those usually obtained from infinite-horizon control;
• nonlinear games, for which a pair of mixed optimal policies is derived that solves the game when a saddle point does not exist and, when one does, avoids having to verify the saddle-point existence conditions.
Non-zero-sum games are studied in the context of a single-network scheme in which policies are obtained that guarantee system stability, minimize the individual performance functions, and yield a Nash equilibrium.
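
As a concrete illustration of the iterative value-function update mentioned in the first bullet, here is a minimal value-iteration sketch for a discrete-time affine nonlinear system. The 1-D dynamics f and g, the quadratic utility, the grids and the tolerance are illustrative assumptions rather than an example from the book, and the lookup-table representation stands in for the neural-network approximators the book actually uses.

import numpy as np

# Illustrative 1-D affine dynamics x_{k+1} = f(x_k) + g(x_k) * u_k (assumed, not from the book)
def f(x):
    return 0.8 * np.sin(x)

def g(x):
    return np.ones_like(x)

Q, R = 1.0, 1.0                             # utility U(x, u) = Q*x**2 + R*u**2
xs = np.linspace(-2.0, 2.0, 201)            # state grid
us = np.linspace(-2.0, 2.0, 201)            # control grid
X, U = np.meshgrid(xs, us, indexing="ij")   # all (state, control) pairs
Xnext = f(X) + g(X) * U                     # one-step-ahead state for every pair
stage = Q * X**2 + R * U**2                 # stage cost for every pair
V = np.zeros_like(xs)                       # V_0 = 0

for i in range(500):                        # iterative value-function update
    # V_{i+1}(x) = min_u { U(x, u) + V_i(x') }; next states leaving the grid
    # are clamped to the boundary by np.interp, acceptable for a sketch.
    cost = stage + np.interp(Xnext, xs, V)
    V_new = cost.min(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:    # sequence has (numerically) converged
        break
    V = V_new

u_greedy = us[cost.argmin(axis=1)]          # greedy control law from the final sweep
print(f"converged after {i} sweeps, V(0) = {np.interp(0.0, xs, V):.4f}")

Replacing the table and grid search with critic and action networks trained on the same update yields the kind of neural-network-based ADP scheme the book develops and proves convergent.
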
In order to make the coverage suitable for the student as well as for the expert reader, Adaptive Dynamic Programming for Control:
• establishes the fundamental theory involved, with each chapter devoted to a clearly identifiable control paradigm;
• provides convergence proofs for the ADP algorithms, deepening understanding of how stability and convergence are derived for the iterative computational methods used; and
• shows how ADP methods can be put to use both in simulation and in real applications.
This text will be of considerable interest to researchers interested in optimal control and its applications in operations research, applied mathematics, computational intelligence and engineering. Graduate students working in control and operations research will also find the ideas presented here to be a source of powerful methods for furthering their study.
