
Hi everyone, I'm Steve Brunton, and this is the first video lecture in a series I'm calling a boot camp on control, where I'm going to rapidly go through the highlights of optimal and modern control theory. This is going to include how to write down a description of a control system with inputs and outputs in terms of a system of linear differential equations, how to design controllers to manipulate the behavior of that system, and how to design estimators, like the Kalman filter, so that from limited sensor measurements you can reconstruct various aspects of that system.

This is not meant to be an exhaustive, in-depth treatment of the subject; it's really kept at a high level. My goal is, first of all, to get you familiar with the major types of optimal and modern control theory. I want to teach you how to use these methods in MATLAB to actually work with a real system, and I also want to give you a feeling for what in control theory is easy and what's still quite challenging today, so that you can get up to speed on the real pressing needs of control theory today. Okay, and again, this is not exhaustive; if control theory is really important to you and you want to go more into depth, there are deeper treatments, both on the math side and on the applied design side.

I want to give you just a little bit of perspective. I think about the world in terms of dynamical systems, that is, systems of ordinary differential equations written in terms of the state of your system, and this has been an extremely successful viewpoint for modeling real-world phenomena. We model the fluid flow over a wing, the population dynamics in a city, the spread of a disease, the stock market, the climate, planets moving around the solar system: all of these are modeled as dynamical systems, and this has been a very, very successful framework for taking in data from the real world and building models that you can use for prediction.

But often we want to go beyond just describing the system of interest; we want to actually manipulate the system actively to change its behavior. That could mean just imposing some control logic, setting inputs into the system in a certain pre-planned way to manipulate it, or it could mean actually measuring the system and making decisions based on how it is responding to what you're doing. So that's the overarching view in control theory: you have some dynamical system of interest, maybe a pendulum or a crane that you want to make more stable; you write down the system of equations; and then you design some control policy that changes the behavior of your system to be more desirable.

Okay, so that's what we're going to talk about. I want to begin by just talking about the various types of control that there are, because there's lots of control going on all around us every day that is not active; it's called passive control. So I'm going to draw a diagram of the different types of control. One type that's very common, that you see all the time, is passive control. For example, if you see a large eighteen-wheeler transport truck going down the highway and it has those streamlined tail sections, that's a form of passive control: it's passively causing the air around the truck to behave in a favorable way to reduce drag. And if you can get away with passive control of your system, that's actually great, because you just have to design it up front, and then there's no energy expenditure, and hopefully you get the desired effect, for example minimizing drag on a truck. But passive control is typically not enough, and so oftentimes we need to do something like active control.

Active control essentially means control where we're actually pumping energy into the system to actively manipulate its behavior. And there are lots and lots of different types of active control. One that I'm going to tell you about is open-loop control; this is probably the most common form of active control. Essentially, you have your system of interest, and I'm just going to draw this as a block here. The system has some inputs, which I'm going to call the variable u, and it has some outputs, the variable y. What open-loop control does is essentially work backward from your system: it inverts the dynamics to figure out exactly what is the perfect input u to get a desired output y.

So if I take something like an inverted pendulum, we know that if I am very careful I can stabilize this inverted pendulum by hand. And in physics you'll learn that if you just pump this pendulum up and down at a high enough frequency, it will naturally stabilize the dynamics. So if my base just oscillates as a high-frequency sine wave, the dynamics of this pendulum change: the base position is my input u, the angle of the pendulum is my output y, and my desired control goal is to make the pendulum essentially stay vertical. If I pump in energy in a pre-planned way, just making my hand go up and down in a sinusoid, I can get the output y that I want. Essentially, that is open-loop control. It's very commonly used: you think about your system, you pre-plan a trajectory, and you just enact that control law. But the downside of open-loop control is that you're always putting energy in through this input u. In the inverted pendulum example, I constantly have to be pumping this thing up and down sinusoidally, and the minute I stop, it becomes unstable and it falls.
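This vertically driven inverted pendulum is known as the Kapitza pendulum. As a quick sanity check of the claim, here is a small simulation sketch in Python with SciPy (the lectures themselves use MATLAB); the pendulum length, drive amplitude, and drive frequency below are illustrative values chosen to satisfy the classical stability condition a^2 w^2 > 2 g L, not numbers from the lecture.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Kapitza pendulum: the pivot oscillates vertically as a*cos(w*t), and
# theta is measured from the upright position, so theta = 0 is inverted.
# Equation of motion: theta'' = (g - a*w**2*cos(w*t)) * sin(theta) / L
g = 9.81   # gravity, m/s^2
L = 0.20   # pendulum length, m (illustrative)
a = 0.02   # drive amplitude, m (illustrative)
w = 150.0  # drive frequency, rad/s; a**2 * w**2 = 9 > 2*g*L = 3.9

def rhs(t, state, drive=True):
    theta, theta_dot = state
    base_accel = a * w**2 * np.cos(w * t) if drive else 0.0
    return [theta_dot, (g - base_accel) * np.sin(theta) / L]

x0 = [0.1, 0.0]          # start 0.1 rad from vertical, at rest
t_span = (0.0, 3.0)

# With the high-frequency drive, theta stays close to 0 (upright).
driven = solve_ivp(rhs, t_span, x0, max_step=1e-3, rtol=1e-6)
print("max |theta| with drive:   ", np.abs(driven.y[0]).max())

# Without the drive, the inverted pendulum falls over.
undriven = solve_ivp(rhs, t_span, x0, args=(False,), max_step=1e-3, rtol=1e-6)
print("max |theta| without drive:", np.abs(undriven.y[0]).max())
```

Note that the drive never changes the pendulum's own dynamics; the moment the forcing stops, the upright equilibrium is unstable again, which is exactly the point made above.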

So the idea is that what we can do instead is called closed-loop feedback control. Essentially what this means is that we take sensor measurements of what the system is actually doing (let me bring my pendulum drawing back out), and then we build a controller, I'm just going to call this block a controller, and we feed that back into our input signal, which can manipulate the system. For example, in that inverted pendulum example, as a human, if I had a tall enough pendulum, so that it's slow enough, I could actually measure with my eyes when it's starting to wobble, and I could do much more subtle control. If as a kid you ever played around with a broomstick, trying to stabilize it, you know that you can actually get pretty good at it, so that with very low energy input, with very small hand motions, you can stabilize this thing so that it doesn't fall.

And that's the basic idea: by measuring the output, you can often do much, much better than just feeding in a pre-planned control law. So sensor-based feedback, measuring the output and feeding it back into the input, is basically going to be the entire subject of what we're going to talk about in this control boot camp. Closed-loop feedback control is the name of the game, and that's most of what we're going to talk about. That's not to say you would never design a good open-loop or a good passive controller; there are times you would do that. But in the systems we're going to be interested in, closed-loop feedback based on sensors is going to give dramatically better performance.

So I want to talk a little bit about why you would have feedback; I just want to make a quick list, because this is a very, very important topic in control theory. I want to motivate, in more concrete terms, why I would actually measure the system and feed the measurements back, instead of ignoring any measurements and using open-loop control. So: why feedback over open-loop control? This is a question I always ask my class, and I let them think for a little bit: why would you actually want to have the sensors feeding back into your system?

One answer that I get most often is that my system may have some inherent uncertainty. Uncertainty is one of the main enemies of open-loop control. If I have this pendulum and I perfectly pre-plan what I want to do, but the pendulum is one centimeter taller, or it's a little bit heavier, or there's wind blowing, or something like that, then any kind of uncertainty in that system is going to make my pre-planned trajectory suboptimal. But if I measure the outputs and I realize the system is not doing what I want it to do, I can adjust my control law, even if I don't have a perfect model of my system. So uncertainty is a big one.

Another really important one is instability. With open-loop control, I can never fundamentally change the behavior of the system itself. In the pendulum example, I could pump in an amount of energy, with the sinusoidal base motion, that would force the system to correct itself up to vertical, but I'm not actually changing the system's dynamics; the system is still unstable and still has an unstable eigenvalue. But when I have feedback control, I can directly manipulate the actual dynamics of the closed-loop system: I can change its dynamic properties, I can change the eigenvalues of the closed-loop system. I'm going to show you that as the last example in this overview.

The third thing, which I think is really, really neat, is that with feedback control you can also reject disturbances to your system. Let's say that I have some external disturbance d coming into my system; this happens all of the time. For example, let's say in my pendulum example there's a gust of wind. That's a disturbance that would be very hard for me to predict or model or measure. If I had an open-loop strategy, it might not be able to correct for that gust of wind, whereas that gust of wind will pass through the system dynamics, it will be measurable through some sensor, and, if my feedback control is good enough, I can actually correct for that disturbance.

So I think of uncertainty as internal system uncertainty, a kind of disturbance to my model, and I think of disturbances as external or exogenous forcing of the system that may be too difficult, too costly, or too complicated to model or predict or measure. Feedback essentially handles all of these basic issues: it can handle disturbances, it can handle uncertainty, and it can fundamentally change the stability of your system, making it more or less stable by actually changing the eigenvalues of the closed-loop system. Unfortunately, open-loop control can't do any of those things, which is a huge drawback.

And I guess the fourth one is energy, or efficiency; I'll just say efficient control. Again, in the case of the pendulum, in the open-loop case I constantly had to pump this thing up and down, so I was always putting energy in. But in the case of sensor-based, elegant feedback control, you can picture yourself trying to stabilize this broomstick: if you're doing a really good job, if you have a really good controller, the thing is barely moving at all, and you have to put almost no energy in to correct it. So effective sensor-based feedback control is also much more efficient, which is really, really important in lots of applications. If you're going to send a rocket somewhere, you had better have an efficient controller, because you don't want to be wasting fuel.

The last thing I want to show you is this idea of why you can change the fundamental system dynamics, and change the stability, with feedback control. The basic mathematical architecture we're going to be working with in this class is a state-space system of ordinary differential equations. We're going to have a state variable x, a vector that describes all of the quantities of interest in my system. For example, for my pendulum it could be the angle and the angular velocity, so two states; if I have an airplane going through the sky, it could be the position vector (x, y, and z) along with its rotation angles and all of their derivatives, so it could be a six-degree-of-freedom, twelve-component vector x.

What we're going to look at is the system x_dot = A x; we're going to start with linear systems of equations that describe how those states interact with each other. I'm going to assume that we're all pretty comfortable with linear systems of ODEs. For example, we know that the solution of this system is x(t) = e^{At} x(0). So we know how the system behaves: we know that if A has any eigenvalues with positive real part, then the system will be unstable, and if all of the eigenvalues have negative real part, then the system has stable dynamics, meaning the state goes to zero as time goes to infinity.

But what we're going to do in control theory is add a term + B u; we're going to add the ability to actuate, or manipulate, our system. So u is our actuator; it's our control knob. In the case of the pendulum it could be the position of the base, or it could be the voltage into a motor that controls something, but it is the knob that we get to turn to try to stabilize our system, and B tells you how this control knob directly affects the time rate of change of my state. Down the road we're going to look at another extension, y = C x, where we measure only certain aspects of the state, some linear combination of the state x. This might be a limited set of measurements: we might not measure all of the state, if it's high-dimensional, and we might only have access to those few sensor measurements in y. But for now let's just talk about the top equation.
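The pieces above, the solution x(t) = e^{At} x(0) and the eigenvalue test for stability, can be sketched in a few lines. Here is a small Python example with NumPy and SciPy (the course itself uses MATLAB, where `expm` and `eig` play the same roles); the two matrices are made-up illustrative examples, not systems from the lecture.

```python
import numpy as np
from scipy.linalg import expm

def is_stable(A):
    """Stable iff every eigenvalue of A has negative real part."""
    return np.all(np.linalg.eigvals(A).real < 0)

def propagate(A, x0, t):
    """Solution of x_dot = A x: x(t) = e^{A t} x(0)."""
    return expm(A * t) @ x0

A_stable = np.array([[-1.0,  2.0],
                     [ 0.0, -3.0]])   # eigenvalues -1 and -3
A_unstable = np.array([[0.0, 1.0],
                       [4.0, 0.0]])   # eigenvalues +2 and -2 (saddle)

x0 = np.array([1.0, 1.0])
print(is_stable(A_stable))            # True: the state decays to zero
print(is_stable(A_unstable))          # False: one eigenvalue is +2
print(np.linalg.norm(propagate(A_stable, x0, 10.0)))    # tiny
print(np.linalg.norm(propagate(A_unstable, x0, 10.0)))  # huge
```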

So if I assume that I can measure everything in the system (and in the case of the pendulum, as a human I have a pretty good estimate of its vertical position and of how fast it's moving, so let's say I can measure all of x), then we can develop a control law: let's say u = -K x for some matrix K. So let's posit a basic control law where my control input u is some constant matrix times the components of x. This is really sensor-based feedback where y = x: in this case we're assuming we can measure all of our state, and we're going to feed that back into the control law u = -K x and try to modify the dynamics.

If you plug u = -K x into our dynamics (let's make another color here), we basically get x_dot = A x - B K x. B is maybe a tall vector, or a set of column vectors, with the same height as x; K is kind of the transpose shape of that; so B K is a matrix of size n by n if x is an n-dimensional state, and this equals x_dot = (A - B K) x. Notice that by measuring the state, in this case the full state x, and feeding it back to the control u through this law u = -K x, we're able to actually change the dynamics matrix. Now we have a new dynamical system, x_dot = (A - B K) x, and it's the eigenvalues of this matrix A - B K that tell you if the system is stable. So I can have an originally unstable system, like this inverted pendulum, and by measuring the state and feeding it back to my control knobs, I get to move the eigenvalues; I can stabilize the dynamics; I can actually make the system asymptotically stable.
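As a concrete sketch of this idea, here is a linearized inverted pendulum, with state x = (angle, angular velocity), whose open-loop A matrix is unstable; choosing K by pole placement moves the eigenvalues of A - BK into the left half plane. This is Python with SciPy (in MATLAB, `place` does the same job), and the numbers (g/L = 4, the desired pole locations) are illustrative choices, not values from the lecture.

```python
import numpy as np
from scipy.signal import place_poles

# Linearized inverted pendulum about the upright position:
# theta'' = (g/L) * theta + u, with state x = [theta, theta_dot].
A = np.array([[0.0, 1.0],
              [4.0, 0.0]])   # g/L = 4 (illustrative); eigenvalues +2, -2
B = np.array([[0.0],
              [1.0]])        # the input u directly drives theta''

print(np.linalg.eigvals(A))  # one positive eigenvalue: open loop is unstable

# Choose the gain K so that A - B K has eigenvalues at -2 and -3.
K = place_poles(A, B, [-2.0, -3.0]).gain_matrix

closed_loop = A - B @ K
print(np.linalg.eigvals(closed_loop))  # both real parts are now negative
```

The feedback has literally replaced the dynamics matrix A with A - BK, which is why closed-loop control can change stability in a way open-loop control cannot.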

Okay, now, figuring out when you can do this (this doesn't work for all systems, for all measurements, or for all actuators), so figuring out when the system is controllable, and how to design this gain K so that the system is well controlled, will be the subject of the next couple of lectures. But the really, really important point is that feedback solves all of these fundamental problems. If I have uncertainty in my system, I can compensate for it by measuring what's actually happening and feeding that back. If I have an instability in my system, I can actually change the dynamics with this feedback, and you can't really do that with open-loop control. I can also account for external disturbances, like a gust of wind, that might have been really hard to measure and could totally throw off my pre-planned trajectory; by measuring what's happening, I can account for and correct for that disturbance. And finally, feedback control is efficient: if you're doing effective feedback control to stabilize a system, then the more effective you are, the less energy you have to put in.

So this should be a really exciting set of lectures. I'm really hoping to get you up to speed quickly, with MATLAB examples, so that you can control these systems; you can design controllers to actually manipulate your system to do what you want it to do. Okay, thank you.