This part of the course will be about modeling dynamic systems. Mathematical models are indispensable

in analyzing systems and in many stages of the engineering design process. To start with,

note that a basic trade-off in modeling is simplicity versus accuracy. In other words,

to have a more accurate model, you have to increase its complexity.

Linear models offer a fair trade-off in this sense. By changing the number of parameters in

your model, you can have simpler or more accurate ones. Let us recall the definition of a linear

system. Assume that y1 is the output of the system corresponding to the input u1, and

y2 is the output corresponding to u2. For the system to be linear, you need two conditions

to hold. Firstly, the output corresponding to the sum of u1 and u2 should be the sum

of the corresponding outputs, that is y1 plus y2. This property of linear systems is known

as the superposition property. Secondly, if you multiply the input to a linear system

by a scalar, the output is also multiplied by the same scalar. That is, a times u1

will yield a times y1 as the output. This property is known as homogeneity.
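These two conditions can be checked numerically. Here is a minimal Python sketch, using an invented static linear system y = 3u and an invented nonlinear one y = u-squared, purely for illustration:

```python
# Minimal sketch: checking superposition and homogeneity numerically
# for an illustrative static linear system y = 3u and a nonlinear y = u^2.
def linear_sys(u):
    return 3.0 * u

def nonlinear_sys(u):
    return u ** 2

def satisfies_linearity(sys, u1=2.0, u2=5.0, a=4.0, tol=1e-9):
    superposition = abs(sys(u1 + u2) - (sys(u1) + sys(u2))) < tol
    homogeneity = abs(sys(a * u1) - a * sys(u1)) < tol
    return superposition and homogeneity

print(satisfies_linearity(linear_sys))     # True
print(satisfies_linearity(nonlinear_sys))  # False
```

For the nonlinear system, (u1 + u2) squared differs from u1 squared plus u2 squared, so superposition fails.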

We are going to use linear differential equations as linear mathematical models of dynamic systems.

In many cases, our differential equations will have constant coefficients. Occasionally,

the coefficients may also be functions of time. The former case will correspond to linear

time-invariant, in short LTI, systems, whereas in the latter case we'll have linear time-varying,

that is, LTV systems.

Before considering linear models, let us discuss briefly the nonlinear ones. A model, or rather

a differential equation, will be nonlinear if you have nonlinear terms in it. For example,

terms such as the square of derivatives or products of different derivatives or nonlinear

functions like cosine, sine, logarithms, etc.

In some cases you can have nonlinearities that cannot be expressed in closed form.

Saturation is one example, where the output depends

linearly on the input for smaller values, but becomes saturated for higher values

of the input. The deadzone nonlinearity is another example, which is quite common in

flow systems, especially in valves. Finally, here is a graphical representation of square-law

nonlinearity, where the output is proportional to the square of the input.

Coming back to linear models, the most commonly used linear frequency-domain model for dynamic

systems is the transfer function. It is defined as the ratio of the Laplace transform of the

output to the Laplace transform of the corresponding input under zero initial conditions. Naturally,

since the transfer function is a ratio of two Laplace transforms, it is denoted as a

function of s.

Consider a linear time-invariant single-input-single-output system represented by such a differential

equation. Here, x is the input signal and y is the output, and the coefficients a's

and b's are constants. We assume that the order of the highest derivative in y is larger

than the order of the highest derivative in x, that is, n is larger than m.

Now, take the Laplace transform of both sides of this equation. Remember, we assume that

all initial conditions are zero. So, effectively, each derivative will be replaced by a power

of s multiplying the Laplace transform of y or x. Collecting terms on the left and right hand side

of the equation into parentheses of Y of s and X of s, we can obtain the ratio of Y of s to X of s;

in other words, the transfer function of the system represented by this differential equation.
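The result is simply a ratio of two polynomials in s whose coefficients come directly from the differential equation. As a sketch (the coefficients below are illustrative, not from the lecture):

```python
# Sketch: under zero initial conditions, each derivative becomes a power of s,
# so the transfer function is a ratio of two polynomials in s.
def polyval(coeffs, s):
    # Horner evaluation; coefficients given highest power first.
    result = 0.0
    for c in coeffs:
        result = result * s + c
    return result

def transfer_function(num, den):
    return lambda s: polyval(num, s) / polyval(den, s)

# Illustrative example: y'' + 3 y' + 2 y = x  ->  G(s) = 1 / (s^2 + 3 s + 2)
G = transfer_function([1.0], [1.0, 3.0, 2.0])
print(G(0.0))  # 0.5, the DC gain
print(G(1.0))  # 1/6
```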

At this point, we can note several properties of transfer functions. First of all, it is

clear from the derivation that the transfer function is a representation for linear time-invariant

single-input-single-output systems. Also, it is a mathematical model relating the input

directly to the output. Note that the transfer function we have denoted as G of s is independent

of the input. It does not provide any information on the physical structure of the system. That

is, we do not know whether the transfer function above is referring to a mechanical system

or electrical system or a thermal system, for example. Nevertheless, if the input is

given, you can obtain the Laplace transform of the output by multiplying the transfer

function with the Laplace transform of the input. On the other hand, you can obtain the

transfer function of a system by studying the inputs and the corresponding outputs.

That is, you can apply an input and measure the output, and then you can obtain the transfer

function by building the ratio of their Laplace transforms.

Transfer functions can also be obtained by exploiting the physical relationships between

the input and the output of the system. Here is a simplified example: Let us consider a

satellite orbiting the earth, which we are going to represent by this diagram. The attitude

of the satellite is defined as the angle of its axis with respect to some fixed reference

and is denoted by theta. To rotate the satellite around its centre of mass, one would use the

thrusters located at A and B. They are at a distance l from the centre of mass. Each

thruster is assumed to provide a force of capital-F over 2. So, the torque applied to

the centre of mass by the thrusters is F times l.

Now, we would like to find the transfer function from the torque, that is, the input of the

system, to the angular position theta, which is the output of our system. The transfer

function can be obtained as the ratio of capital-theta of s to capital-T of s, where capital-T of s is the

Laplace transform of the torque function and capital-theta of s is the Laplace transform

of the angular position theta. In this simplified example, this ratio can be obtained using

Newton's second law. Namely, the moment of inertia of the satellite multiplied by the

angular acceleration is equal to the torque applied. If we take the Laplace transform

of both sides, under zero initial conditions of course, we can write the transfer function

theta of s over T of s as 1 over J s-square.

Here, the fact that the transfer function has two s factors in the denominator means

that the angular position is to be obtained from the torque input by double integration.
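A quick sanity check of this double-integration interpretation, as a Python sketch with illustrative values for the inertia and a constant torque:

```python
# Sketch: G(s) = 1 / (J s^2) means the angle is the torque integrated twice.
# Illustrative values: inertia J = 2.0, constant torque T = 4.0, 1 s horizon.
J, T, dt, steps = 2.0, 4.0, 1e-4, 10000
theta, omega = 0.0, 0.0        # zero initial conditions
for _ in range(steps):
    omega += (T / J) * dt      # first integration: T/J -> angular velocity
    theta += omega * dt        # second integration: velocity -> angle
exact = T * (steps * dt) ** 2 / (2 * J)   # theta(t) = T t^2 / (2 J)
print(round(theta, 3), round(exact, 3))   # 1.0 1.0
```

The simulated angle matches the analytic result T t-square over 2J, as the two integrations suggest.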

Here is another example: A parallel RLC circuit. Let us now obtain the transfer function from

the input current to the output voltage. In doing this, we are going to denote the Laplace

transforms of the output voltage v, the input current i and the current in the resistor

R, namely i-sub-R, with their corresponding capital letters.

In fact, the transfer function of this circuit can be obtained directly, if we consider the

impedance expressions of the elements. We know that v is equal to R times i-sub-R by

Ohm's law. So, what we need to do is to write the resistance current in terms of the input

current using the current division principle. And then, the ratio of V of s to I of s can be

obtained as R-L-s divided by R-L-C s-square plus L-s plus R.

Note that this is a strictly proper transfer function and the denominator degree is 2,

since we have 2 dynamic elements in the circuit.
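Evaluating this transfer function numerically (a sketch with illustrative element values) confirms both observations: the gain is zero at DC, where the inductor acts as a short circuit, and it rolls off at high frequencies, as expected for a strictly proper function:

```python
# Sketch: evaluating G(s) = R L s / (R L C s^2 + L s + R) numerically,
# with illustrative element values.
R, L, C = 100.0, 0.5, 1e-3

def G(s):
    return R * L * s / (R * L * C * s ** 2 + L * s + R)

print(G(0.0))              # 0.0: at DC the inductor shorts the output
print(abs(G(1e9)) < 1e-3)  # True: strictly proper, the gain rolls off
```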

Now, let us consider the transfer function

G of s of a system with output Y and input X. It follows that the Laplace transform of the

output of the system can be obtained as G of s times X of s. And by the convolution property

of the Laplace transform, that would correspond to x of t convolved with g of t, or g of t convolved

with x of t. Of course, here smallcase-g refers to the inverse Laplace transform of the transfer

function G of s. Now, if the input is the unit-impulse function, that would mean that X of s is equal

to 1. Hence, the response of the system in this case would be simply G of s.

In time domain, that would mean the output is equal to the inverse Laplace transform

of the transfer function, which we had denoted as smallcase-g of t. That's why the inverse

Laplace transform of the transfer function is also referred to as the impulse-response

function.

Taking now the idea of using transfer functions

as mathematical models of dynamic systems one step further, we will consider the block

diagrams. A block diagram is a pictorial representation of transfer functions belonging to each component

in the system and of the flow of signals.

In drawing block diagrams, we will need three sorts of symbols. Namely, blocks, summing

points and branch points. Blocks represent the input-output relationship of the components

with their transfer functions. The input signal is shown by an incoming arrow and the output

signal is shown by an outgoing arrow. In a summing point, whether the signals are going

to be added or subtracted is shown by plus and minus signs on the incoming arrows. In

some cases, there may be more than two input signals on a summing point. On the other hand,

branch points are for copying signals, so that they can be inputted to more than one

component.

The first block diagram we are going to consider

shows a basic feedback configuration. In such a feedback connection, the feedback path can

be a unity feedback, or it can contain a transfer function, which we will denote as H of s.

On this system we can define three different transfer functions. The first one is the open

loop transfer function, which is the product of all blocks in the loop. In this case, G of s

and H of s. In other words, it is the transfer function from the signal E to the signal B.

The second one is the feedforward transfer function, which is, in general, the product

of all blocks from the input R of s to the output C of s. In this case, we have only G of s on the

path from R to C.

The last one, maybe the most important one, is the closed-loop transfer function. That

is the transfer function from the input R to the output C. To derive the closed-loop

transfer function, first note C of s is equal to G of s times E of s and the signal E is the

difference between R and B. Here, you can replace B with H times C. We can substitute

this expression for E in the equation on the left hand side, to obtain C of s is equal to

G multiplied by R minus H times C. And now, rearranging this equation, the ratio of C of s

to R of s can be obtained as G over 1 plus G times H.

Note that the closed-loop transfer function is obtained as the feedforward transfer function

divided by 1 plus the open-loop transfer function. Another important point we have to emphasise

here is that we are considering a negative feedback configuration. If the feedback were

positive, then the denominator of the closed-loop transfer function would be 1 minus the open-loop

transfer function.
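Both cases can be captured in a small Python sketch, where the transfer functions are replaced by illustrative constant gains (so each one is just a number at a fixed s):

```python
# Sketch: closed-loop gain evaluated at a fixed s, where the transfer
# functions reduce to illustrative numbers G = 8 and H = 0.5.
def closed_loop(G, H, positive_feedback=False):
    sign = -1.0 if positive_feedback else 1.0
    return G / (1.0 + sign * G * H)

print(closed_loop(8.0, 0.5))        # 1.6 = 8 / (1 + 4), negative feedback
print(closed_loop(8.0, 0.5, True))  # 8 / (1 - 4), positive feedback
```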

Next, we will analyze a block diagram where

we have two inputs, namely the reference input R and the disturbance D. Since this is a linear

system, we can obtain the output C of s using the superposition principle. So, the output

will have two components, C-sub-D and C-sub-R. C-sub-R is the component that is caused by

R of s only, in other words, whenever the disturbance D is equal to zero. And C-sub-D is the component

that is caused by the disturbance only. This corresponds to the case where the reference

input R is equal to zero.

First consider the case where D of s is equal to zero and redraw the block diagram. Remembering

that the closed-loop transfer function can be obtained as the feedforward transfer function

divided by 1 plus the open-loop transfer function, we can write the ratio C-sub-R to R, as G1

times G2 divided by 1 plus G1 G2 H.

Now, consider the case where R of s is equal to zero. In this case, the feedforward transfer

function is only G2 of s and the loop transfer function is G2 times G1 times H. So, the closed-loop

transfer function C-sub-D by D will be G2 divided by 1 plus G1 G2 H.

It is interesting to note that both closed-loop transfer functions share the same denominator,

namely 1 plus the open-loop transfer function. In fact, this is called the characteristic

function of the system and it is the common denominator of all transfer functions you

can write between the signals in this loop.

Having the transfer functions from the reference

input and the disturbance to the output we can write the output as the superposition

of the contributions from these two inputs.

At this point, let's assume that the transfer functions G1 H and G1 G2 H are very large

in magnitude. This can be achieved, for example, either by increasing the gain of the feedback

H of s or the controller G1 of s. So, you can neglect the unity terms in the denominators

of the transfer functions defining C-sub-D and C-sub-R. This, in turn, means the transfer

function C-sub-D by D is approximately zero. In other words, the output will not be affected

by the disturbance at all. On the other hand, the transfer function from the reference input

to the output will be approximately 1 over H. That means, by changing H of s, you can impose

any transfer function you like in the closed-loop, whatever the plant transfer function G2 is.

All these show us two important advantages of using feedback. Firstly, you can reduce

the effect of disturbances, and secondly, you can also reduce the effect of changes

in the plant dynamics to your output by applying feedback. And, the higher your feedback gain,

the better will be the disturbance rejection and robustness of your closed-loop system.
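A numeric sketch of these two approximations, with illustrative constant gains and a deliberately large controller gain G1:

```python
# Sketch: with a deliberately large controller gain G1, the closed loop
# tracks 1/H and rejects the disturbance (illustrative static gains).
G1, G2, H = 1e6, 3.0, 0.5

T_ref  = G1 * G2 / (1.0 + G1 * G2 * H)  # reference -> output
T_dist = G2 / (1.0 + G1 * G2 * H)       # disturbance -> output

print(abs(T_ref - 1.0 / H) < 1e-5)  # True: C_R over R is approximately 1/H
print(abs(T_dist) < 1e-5)           # True: the disturbance barely reaches C
```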

Next, let us discuss briefly how we can draw

and manipulate block diagrams. Whenever a physical system is given, drawing a block

diagram of it involves two stages. One, finding the transfer function of each component and,

two, assembling these blocks.

Here is a simple electic circuit example. To draw the block diagram of this RC circuit,

with the input e-sub-i and the output e-sub-o, let us first derive the transfer functions of

the components R and C. A resistive component can be described by Ohm's law as i equals

1 over R multiplying e-sub-i minus e-sub-o. The Laplace transforms of these quantities

are, of course, similarly related, which can be expressed as a block diagram like this.

Also writing the component equation relating the current to the voltage on a capacitor,

we can express the transfer function of it as 1 over C s. Now, we have the block diagrams

of all components in the system, and to get the overall block diagram, what we need is

only to assemble them.

Let us now summarize a couple of rules about

how to combine and arrange components in a block diagram, namely the block diagram reduction

rules. We start by combining cascade blocks. If you have two blocks, say, G1 and G2 in

cascade and if you apply, say, signal A to the input, then the output will be G2 G1 A.

The same output will also be obtained for a transfer function G2 times G1. That means

two cascade blocks can be reduced to a single one representing the product of their transfer

functions.

Next, consider moving a block behind a summing point. If the inputs of a subsystem composed

of a block and a summing point are A and B, then the output will be G A minus B. In order

to maintain the same input-output relationships when the block G of s is moved behind the summing

point, the input B has to be passed through a block of 1-over-G.

What if we move a block behind a branch point? Well, in this case the block G of s has to be

kept on both branches, so that we can end up with the same input-output relationships.

To see what happens in moving a branch point

behind a block, let's again consider a fictitious input A and the corresponding outputs of this

subsystem. So, moving the branch point behind the block G(s) would bring an extra 1 over

G block to one of the branches.

Another interesting case is when you have a feedback loop and you would like to move

a block on the feedback path to the feedforward path. Now, in such a feedback loop for example,

if the input is A and the output is B, the input to the G1 block in the feedforward path

happens to be A minus G2 B. Now, if you move the G2-block to the feedforward path then

we have to filter the signal A through the block 1-over-G2 in order to keep the same

input-output relationship as in the original feedback loop.

Lastly, remember that we have derived the transfer function of a closed-loop system.

So, whenever we have a feedback loop with the feedforward path as G1 and the feedback

path as G2, we know that we can reduce it to a single block described by the transfer

function G1 divided by 1 plus G1 G2.

Note that in all such manipulations, the key principle is to keep the same input-output

relationships in the subsystem that you are replacing.

We can now apply such block diagram reduction

rules on an example. Let us obtain the overall transfer function of this system by applying

some block diagram algebra. Our aim here will be to arrange the blocks so that all feedback

loops are either nested one in the other or completely separated from each other. We can

reach such a configuration by moving the block G1 behind the summing point in front of it.

Now, the order of three summing points can

be arranged arbitrarily. So, now we have all the feedback loops nested.

We can now start simplifying from the innermost

loop. That is a positive feedback loop with the feedforward blocks G1 and G2 and the feedback

block H1. Therefore, I can replace this part of the block diagram with a single block having

the transfer function G1 G2 divided by 1 minus G1 G2 H1.

Again, considering the inner negative feedback

loop we can reduce it to single block

and simplify it so that we now have a

unity feedback closed loop system,

which in turn can also be represented

as a single block and simplified as in the lower block diagram.

Interestingly, the overall transfer function

we have just obtained is in the following form: The numerator is a product of transfer

functions in the feedforward path. In the denominator, on the other hand, besides the

term 1, we have a sum of transfer functions corresponding to each loop.

Let's see them again. So, this was our original block diagram. And this is the overall transfer

function we obtained from the input R to the output C. The feedforward blocks are G1, G2

and G3, and their product appears in the numerator. On the other hand, all loop transfer functions,

that is, the loop of G1 G2 H1, the loop of G2 G3 H2, and the loop of G1 G2 G3 are

appearing in the denominator.
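This pattern can be checked numerically. The sketch below, with illustrative constant gains standing in for the transfer functions, carries out one step-by-step reduction of the kind described above and compares it against the closed form with the feedforward product in the numerator and the loop terms, with their feedback signs, in the denominator:

```python
# Sketch: step-by-step reduction versus the closed form, with illustrative
# constant gains standing in for the transfer functions.
g1, g2, g3, h1, h2 = 2.0, 3.0, 5.0, 0.1, 0.2

t1 = g1 * g2 / (1.0 - g1 * g2 * h1)          # innermost positive-feedback loop
t2 = t1 * g3 / (1.0 + t1 * g3 * (h2 / g1))   # negative loop, H2 moved past G1
stepwise = t2 / (1.0 + t2)                   # outer unity-feedback loop

formula = g1 * g2 * g3 / (1.0 - g1 * g2 * h1 + g2 * g3 * h2 + g1 * g2 * g3)
print(abs(stepwise - formula) < 1e-12)  # True
```

Note the minus sign in front of G1 G2 H1: that loop is a positive-feedback loop.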

In this second part of the video we will discuss

an alternative way of modeling dynamic systems. Namely, the state-space representations.

To start with, let us contrast transfer functions and state-space models.

As we have seen earlier, transfer functions can be used in modeling linear, time-invariant,

and single-input-single-output systems. They are based on Laplace transforms, therefore

they constitute a frequency domain approach to modeling dynamic systems.

On the other hand, state-space models can be used to model linear or nonlinear, time-invariant

or time-varying, multi-input-multi-output as well as single-input-single-output systems.

Besides, they are time domain entities. In other words, they are differential equations

in a special format.

To explain the state-space approach, first

we need to define the concept of state. The state of a system is the smallest set of variables

such that the knowledge of them at an initial time, maybe together with knowledge of the

input of the system, completely determines the behaviour of the system for all future

values. In other words, the value of the state at any instant summarizes the past of a system,

which is necessary to compute its future.

As a simple example, consider this differential equation. y-double-dot minus y-dot equals

u. Here, u is the input of the system and y is the output. If the value of y and its derivative

at time t-sub-zero are known, then you can use these initial conditions to solve y of t

from this differential equation. That is, you can obtain all future values of y and

its derivative.

This means y(t) and y-dot(t) constitute a state for the system defined by this differential

equation.
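A minimal simulation sketch makes the point concrete: a forward-Euler integration of y-double-dot minus y-dot equals u is fully determined by the initial pair (y, y-dot) and the input. Step size and input value below are illustrative:

```python
# Sketch: forward-Euler simulation of y'' - y' = u; the state (y, y')
# at the start, together with the input, fixes the whole future.
def simulate(y0, ydot0, u, dt=1e-3, steps=1000):
    y, ydot = y0, ydot0
    for _ in range(steps):
        yddot = ydot + u      # from y'' - y' = u  =>  y'' = y' + u
        ydot += yddot * dt
        y += ydot * dt
    return y, ydot

# Same initial state, same input: same future values.
print(simulate(1.0, 0.0, 2.0) == simulate(1.0, 0.0, 2.0))  # True
```

A different initial state, with the same input, produces a different future, which is exactly why (y, y-dot) qualifies as a state.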

If the state of the system involves more than one variable, like in this example, you can

show them in a vector, which we are going to call the state vector and denote by a bold

letter x. So, the knowledge of the state vector at a particular initial time t-sub-0 together

with the future input values can be used to obtain the state vector for all values of

t greater than t-sub-0.

Now, consider a multi-input-multi-output system

with r inputs from u1 to u-r and m outputs from y1 to y-m. The existence of a dynamic

behaviour in a system means that the future values of some signals in it depend on the

past of the system, and hence, requires existence of memory elements, that is, integrators.

For example, integrators in a linear electric circuit are capacitors, which integrate current,

or inductors, which integrate voltage. Since they accumulate past information, we are going

to use the outputs of such integrators as state-variables.

Let's assume that we have n such states, that is, x1 up to x-n. Now, writing the state equations

means expressing the inputs of these integrators, namely the derivatives of these state variables

in terms of the state variables, input variables and maybe the time variable t.

Similarly, you can also write the outputs as functions of the state variables, inputs

and time. Let us define the state vector x, the input vector u and the output vector y,

so that we can write all these equations as vector equations.

So, now we have two sets of equations. x-dot of t is equal to a vector function f of x, u and

t, and y of t equals a vector function g of x, u and t. The former is called the state

equation, whereas the latter is called the output equation.

In general, these equations are nonlinear.

However, they can be linearized around a given operating point, so that you can write them

as x-dot of t equals A-of-t x-of-t plus B-of-t u-of-t; and y of t equals C-of-t x-of-t plus D-of-t u-of-t.

Here, A, B, C and D, which are functions of time, are matrices of appropriate dimensions.

In fact, A is called the state matrix; B the input matrix; C the output matrix; and

D the direct transmission matrix, or direct injection matrix.

In this figure, you can see the interconnections of the variables u, x and y representing this

linear state space model.

And, besides the linearity, if the system

is also time-invariant, the state and output equations are of this simplified form, where

A, B, C and D are constant matrices.

Our example for deriving state and output

equations is a parallel RLC circuit. Here, we have two integrating elements, namely the

capacitor and the inductor. So, the state variables are the capacitor voltage and the

inductor current and we have to express the derivatives of these variables in terms of

them and the input current. We can first write the element equation of the capacitor and

then express the capacitor current i-sub-C in terms of i, i-sub-L and v-sub-C. On the

other hand, the element equation of the inductor directly relates the derivative of the inductor

current to the capacitor voltage.

We can combine these two equations in a matrix equation and defining the state vector as

v-sub-C i-sub-L, we can recognize this matrix equation as x-dot equals A x-of-t plus B u-of-t.

On the other hand, since the output is the capacitor voltage, it can be expressed as

C times x of t, where C is a row vector written as 1 0. Also note that in this case the direct

transmission matrix is zero.
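Collecting the element equations, the sketch below builds these matrices with illustrative element values (`Cmat` is used for the output matrix to avoid clashing with the capacitance C) and evaluates x-dot equals A x plus B u at an arbitrary state:

```python
# Sketch: parallel RLC state-space model with x = [v_C, i_L], input u = i,
# output y = v_C. Element values are illustrative.
R, L, C = 2.0, 0.5, 0.25

A = [[-1.0 / (R * C), -1.0 / C],   # from C dv/dt = i - v/R - i_L
     [ 1.0 / L,        0.0     ]]  # from L di_L/dt = v
B = [1.0 / C, 0.0]
Cmat = [1.0, 0.0]                  # the output is the capacitor voltage
D = 0.0                            # no direct transmission

# Evaluate x_dot = A x + B u at an arbitrary illustrative state:
x, u = [1.0, 0.2], 0.5
x_dot = [A[i][0] * x[0] + A[i][1] * x[1] + B[i] * u for i in range(2)]
print(x_dot)  # [dv_C/dt, di_L/dt]
```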

The next example is a mechanical one. An automobile

shock absorber in its simplest form can be modeled as a spring-mass-damper system. In

this figure k is the spring constant, m is the mass and b represents the friction coefficient.

A differential equation describing this system can be obtained as a force-balance equation.

Namely, the input force can be written as a sum of three forces: the force to accelerate

the mass, the force to overcome the friction in the damper, and the force to expand the

spring.

To obtain a state-space model we are going to choose our state variables as y of t and

its derivative, in other words, the position and the velocity of the mass. Since the derivative

of the position is the velocity, the first state equation will be written as x1-dot is

equal to x2. And the derivative of x2, which is the acceleration, can be solved from the

force-balance equation. This gives us the second state equation.

These two equations can now be cast in a matrix form, where we can see the A and B matrices.

Besides, the output is the position. The C matrix turns out to be composed of a 1 and

a 0. And, once again, in this case, too, the direct transmission matrix is zero.

Having discussed two alternative ways of modeling

dynamic systems, namely the state-space representations and the transfer functions, let's see how we

can convert one of them to the other. We first start with writing transfer functions from

state-space models.

To be able to write transfer functions, we have to assume that the state-space model

is linear, time-invariant and a single-input-single-output one. So, we have constant A, B, C and D matrices

in our model with matching dimensions. We start by taking Laplace transforms of both

sides in the system and output equations. Now, arrange the system equation so that we

have all the X terms on the left hand side and factor out X of s, so that

we can solve for the X of s vector in terms of u.

Now, substituting what we have found for X of s into the output equation, we can solve for

the ratio of Y of s to U of s, which is the transfer function of this system.

Note that the s variable appears only in the matrix s I minus A which is inverted. That

means G of s will be obtained as the ratio of two polynomials and the denominator polynomial

will be the determinant of s I minus A.

So, we conclude that any pole of the transfer function G of s must be an eigenvalue of the

system matrix A.

Let us now continue with our example mass-spring-damper

system and obtain the transfer function from the state-space description we have derived

before.

Recall the system and output equations of this system. We can use the formula expressing

the transfer function in terms of the A, B, C and D matrices. Substitute the matrices

A, B and C into this formula and rearrange the terms under the matrix inversion. It is

not difficult to invert this matrix as its adjoint matrix divided by its determinant. Note that

the polynomial in the denominator, that is, s-square plus b-over-m s plus k-over-m is

the determinant of the matrix s I minus A.

Performing the matrix multiplications and simplifying, we obtain the transfer function

as 1 over m s-square plus b s plus k.
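This equality can also be checked numerically. The sketch below, with illustrative parameter values and an illustrative test point, evaluates C times the inverse of (sI minus A) times B via the 2-by-2 adjoint-over-determinant formula and compares it with 1 over (m s-square plus b s plus k):

```python
# Sketch: checking G(s) = C (sI - A)^{-1} B for the mass-spring-damper model
# against 1 / (m s^2 + b s + k); parameters and test point are illustrative.
m, b, k, s = 1.5, 0.4, 3.0, 2.0

# A = [[0, 1], [-k/m, -b/m]], B = [0, 1/m], C = [1, 0], D = 0.
# sI - A is 2x2, so invert it as adjoint over determinant:
m11, m12 = s, -1.0
m21, m22 = k / m, s + b / m
det = m11 * m22 - m12 * m21          # = s^2 + (b/m) s + k/m
# C (sI - A)^{-1} B picks the (1,2) adjoint entry (-m12 = 1) times 1/m:
G_ss = (-m12) * (1.0 / m) / det
G_tf = 1.0 / (m * s ** 2 + b * s + k)
print(abs(G_ss - G_tf) < 1e-12)  # True
```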

Let's check this result with the differential equation we have written for this system before.

By taking Laplace transforms of both sides under zero initial conditions, we can obtain

the same transfer function between the output y and the input u. That shows the correspondence

between the state-space description and the transfer function we have derived for this

system.

Now, we consider the inverse problem, namely,

obtaining a state-space representation for a given transfer function. First, we will

consider a relatively simpler case. That is, we will assume that the numerator of our transfer

function is unity. If you write the differential equation corresponding to such a transfer

function, you will have only the input function u on the right hand side.

Now, let us choose the state variables as the output and its derivatives. That is, the

first state variable will be equal to y and x-n will be the n-minus-first derivative of

y. If you differentiate these equalities you will notice that the derivative of each state

variable is simply the next state variable and the derivative of x-n, which is equal

to the n-th derivative of y, can be written from the differential equation in terms of

the other derivatives of y or other state variables.

To obtain the system equation, you only need to write these equations in a matrix form.

And, the output equation is simply a rephrasing of the equation y is equal to x1.

So, now, that's our A matrix; this is the B matrix and that turns out to be the C matrix.
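For a concrete third-order case (the denominator below, which factors as (s+1)(s+2)(s+3), is an invented example), the companion-form matrices look like this, and each denominator root is indeed an eigenvalue of A, consistent with the pole-eigenvalue connection noted earlier:

```python
# Sketch: companion-form matrices for y''' + 6 y'' + 11 y' + 6 y = u,
# an invented example whose denominator factors as (s+1)(s+2)(s+3).
a1, a2, a3 = 6.0, 11.0, 6.0
A = [[0.0, 1.0, 0.0],     # x1' = x2
     [0.0, 0.0, 1.0],     # x2' = x3
     [-a3, -a2, -a1]]     # x3' = -a3 x1 - a2 x2 - a1 x3 + u
B = [0.0, 0.0, 1.0]
Cvec = [1.0, 0.0, 0.0]    # y = x1

# Each denominator root lam is an eigenvalue of A with eigenvector
# [1, lam, lam^2]; check it for lam = -1:
lam = -1.0
v = [1.0, lam, lam ** 2]
Av = [sum(A[i][j] * v[j] for j in range(3)) for i in range(3)]
print(Av == [lam * vi for vi in v])  # True
```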

Let us consider the general case, where we

also have a polynomial of s in the numerator of our transfer function. We have here the

most general case, where the numerator and the denominator polynomials are of the same

degree. But if the system is a strictly-proper one, of course, some of the coefficients in

the numerator like b0, b1 etc. will be zero.

That means, the differential equation for this system will have also the derivatives

of u on the right hand side. Since no derivatives of the input variable are allowed in the state-space

description, we are going to choose our state variables as follows:

Starting from x1 equals y minus beta-0 u, each state variable will be composed of not

only the derivative of the previous one, but also a contribution from the input u. So x2

will be x1-dot minus beta-1 u and likewise x-n will be x-n-minus-1-dot minus beta-n-minus-1

u. And since x-n-dot will be a combination of n-th derivative of y and all the derivatives

of the input u, using the differential equation, it must be possible to write it as linear

combination of the derivatives of y and the input, or the state variables and the input

for some suitable set of beta's.

To solve for such a set of beta parameters, you can use this last equation and substitute

all the state variables x1 to x-n as expressed with the derivatives of y and u, and then

compare it to the parameters in your differential equation and solve the beta parameters in

terms of a and b coefficients.

Once you have determined your beta's, the

only thing that remains is to write all these equations so that you have y and the derivatives

of the state variables on the left hand side, and rearrange everything in a matrix form.

We see that we obtained the same system matrix. That's natural, because the system matrix

depends only on the denominator polynomial, that is, the "a" parameters. On the other

hand, the input matrix is composed of the beta's. Also note that the state-space description

will have a direct injection term, that is beta-0 u, only if the degree of the numerator

polynomial is equal to the degree of the denominator polynomial in the transfer function.
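As a sanity check on this construction, here is a sketch for an invented second-order transfer function. The beta's are computed with the recursion beta-0 equals b0, beta-1 equals b1 minus a1 beta-0, beta-2 equals b2 minus a1 beta-1 minus a2 beta-0, which is one consistent solution of the coefficient comparison described above, and the resulting model is verified against the polynomial ratio:

```python
# Sketch: an invented second-order case of
# G(s) = (b0 s^2 + b1 s + b2) / (s^2 + a1 s + a2).
a1, a2 = 3.0, 2.0
b0, b1, b2 = 1.0, 5.0, 7.0

beta0 = b0
beta1 = b1 - a1 * beta0
beta2 = b2 - a1 * beta1 - a2 * beta0

# The model is then A companion, B = [beta1, beta2], C = [1, 0], D = beta0.
# Verify C (sI - A)^{-1} B + D against the polynomial ratio at s = 2:
s = 2.0
det = s ** 2 + a1 * s + a2                        # det(sI - A)
G_ss = ((s + a1) * beta1 + beta2) / det + beta0   # C (sI-A)^{-1} B + D
G_tf = (b0 * s ** 2 + b1 * s + b2) / (s ** 2 + a1 * s + a2)
print(abs(G_ss - G_tf) < 1e-12)  # True
```

Note that beta-0 is nonzero here, because the numerator degree equals the denominator degree, so this example does carry a direct injection term.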

Here we have an alternative way of obtaining

a state-space description from a transfer function, namely by a block diagram approach.

Let's see this on an example:

Here we have a transfer function with its denominator in a factorized form. It's a fairly

general one. There are first order factors in the numerator and the denominator. There

is a second order factor in the denominator, and there is also a 1-over-s, that is, an

integrating factor in the transfer function. To start with, we can show the block diagram

of such a transfer function as consisting of three blocks and now our aim is to express

the entire block diagram using only integrator and constant-gain blocks. For example, this

part corresponds to the block with the transfer function s plus z over s plus p. This is

how you can express a second order block in terms of only integrators and constant-gain

blocks; and finally the third block is an integrator by itself.

So, we have a block diagram for this transfer

function, where all blocks are either integrators or constants. We are going to choose the outputs

of these integrators as the state variables. Their inputs will then be the derivatives

of our state variables. Using the interconnections in the block diagram, we can express the derivatives

of the state variables in terms of the state variables and the input.

You can write such equations first in frequency domain and then translate them to time domain,

or you may prefer to write them in time domain directly. Finally, collecting them in a matrix

form, you can have your system equation and output equation corresponding to your transfer

function.

And, that concludes our discussion of mathematical

models of dynamic systems. Going over the examples and exercises in your textbook, check

that you can distinguish between linear/nonlinear and time-varying/time-invariant models,

that you can derive transfer functions of simple electrical or mechanical systems,

that you can use block diagram algebra to simplify block diagrams,

that you have an understanding of the concept of state,

that you can derive state-space models of simple electrical and mechanical systems

and that you can obtain transfer functions from state-space representations of linear

systems and vice versa.