## Section 2.3: Flows on the line - linear stability analysis

If you haven’t come across Taylor Polynomials before, take a look here at this post I wrote for my first year pure maths class. If you want to know more, take a look at the posts before and after at the same link.

OK, let's say we've got some first order autonomous differential equation

$$\dot{x} = f(x)$$

and we've found a fixed point $x = x^*$. Is there a way that we can tell whether it's stable or unstable? Well, in fact we've already done that by inspection of the phase portrait. Can we do this in a slightly more formal way? Well, it turns out that we can ask about small fluctuations close to the fixed point. Let's say that we want to look at some $x(t)$ which is just a little away from $x^*$. We could write this as:

$$x(t) = x^* + \eta(t)$$

where we are going to say that $\eta$ (the Greek letter eta, pronounced eat-a) is small.

We can take derivatives of both sides of the above equation, and because $x^*$ is a constant we have that

$$\dot{x} = \dot{\eta}$$

Now what can we say of $f(x)$? Well, let's sub in $x = x^* + \eta$, where we have temporarily removed the functional dependence on $t$, but we know that it's there really. So what can we say about

$$f(x^* + \eta)\,?$$

Well, we have said that $\eta$ is small, so presumably we can say that as long as $f$ is continuous

$$f(x^* + \eta) \approx f(x^*)$$

We say that this is a zeroth order approximation. This is just a constant, and we know that in reality $f(x^* + \eta)$ isn't a constant because $\eta$ depends on time, so what would the next approximation be? Well, for a small shift in $\eta$, how much will $f$ shift? Well, it's going to be related to the gradient of $f$, along with how much we are moving in $\eta$, so we write:

$$f(x^* + \eta) = f(x^*) + \eta f'(x^*) + O(\eta^2)$$

The $O(\eta^2)$ is read as "terms of order $\eta^2$" and is small compared to the second term so long as $\eta$ is small. Essentially this means that these are terms that we are going to ignore. We have approximated the function at this point by a linear function in $\eta$, and the $O(\eta^2)$ is there to remind us that we have thrown away terms of higher order than this. Let's say that we are looking at a point which is $\eta = 0.1$ away from a fixed point (in some units). Then $\eta^2 = 0.01$, which is smaller than $\eta$, and so as long as the derivative is not very small there compared to the second derivative (see later), we can ignore the $O(\eta^2)$ terms.

Let's just draw a figure to make sure that we understand this for a general function $f$ about some point, let's call it $c$. The equivalent expression to the above would be:

$$f(c + a) = f(c) + a f'(c) + O(a^2)$$

In[]:=

Show[Plot[x^2,{x,0.5,1.4},PlotStyle->Red],Plot[2x-1,{x,1,1.2},PlotStyle->Blue],Plot[2x-1,{x,0.8,1.4},PlotStyle->{Dashed,Blue}],Graphics[Line[#]]&/@{{{1,1},{0.5,1}},{{1.2,0},{1.2,1+0.22}},{{1.2,1+0.22},{0.5,1+0.22}},{{1,0},{1,1}}},Graphics[Text[Style[#[[1]],15],#[[2]]]]&/@{{"c",{1,-0.2}},{"c+a",{1.2,-0.2}},{" f(c)",{0.4,1}},{" f(c)+f'(c)a",{0.35,1.4}}},AxesOrigin->{0.5,0},PlotRange->{{0.2,1.4},{-0.3,2}},AxesLabel->{Style["x",14],Style["f(x)",14]}]

Out[]= (figure: $f(x) = x^2$ in red, with its tangent line at $x = c$ in blue, showing $f(c) + f'(c)a$ approximating $f(c + a)$)

We see that the value of the function at $c + a$ is very well approximated by the value of the function at $c$ plus $a$ times the gradient of the function at $c$. So long as $a$ isn't very large, this is a good approximation.
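We can check this numerically. Here is a quick sketch in Python (the function $f(x) = x^2$ and the point $c = 1$ are just the ones from the figure above):

```python
# Linear (first order Taylor) approximation of f(x) = x^2 about c = 1:
# f(c + a) ~ f(c) + a f'(c), with an error of order a^2.

def f(x):
    return x ** 2

def f_prime(x):
    return 2 * x  # exact derivative of x^2

c = 1.0
for a in [0.2, 0.1, 0.01]:
    exact = f(c + a)
    linear = f(c) + a * f_prime(c)
    error = exact - linear
    print(f"a = {a:5}: exact = {exact:.4f}, linear = {linear:.4f}, error = {error:.4f}")
    # For f(x) = x^2 the error is exactly a^2: halve a and the error quarters.
```

Notice that the error shrinks like $a^2$: for $a = 0.1$ it is $0.01$, and for $a = 0.01$ it is $0.0001$.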

Ok, so this should give us a bit of an intuition as to why

$$f(x^* + \eta) = f(x^*) + \eta f'(x^*) + O(\eta^2)$$

The point is that the first two terms don't capture it perfectly, but the next term will be of size $\eta^2$, which for small $\eta$ is even smaller than $\eta$ itself, so we can ignore it to first approximation. How on earth does this help us? Well, the first thing to note is that $x^*$ is a fixed point, which means that at that point $\dot{x} = 0$ and so $f(x^*) = 0$, so actually we can write:

$$f(x^* + \eta) = \eta f'(x^*) + O(\eta^2)$$

This means that so long as we are careful, we can make an approximation that:

$$f(x^* + \eta) = \eta f'(x^*)$$

We have to keep in the back of our minds when this approximation is valid: it's only true close to the fixed points, and only as long as we have made sure that the terms we are throwing away are smaller than the ones we are keeping (see later for when this might not be true).

And our differential equation $\dot{x} = f(x)$ then becomes (remembering that $\dot{x}(t) = \dot{\eta}(t)$):

$$\dot{\eta} = \eta f'(x^*)$$

Make sure you can show this!

Remember $f'(x^*)$ is just a constant. It turns out that this is a differential equation that we can solve. It's actually the same differential equation as that for population growth and radioactive decay, and has solution:

$$\eta(t) = \eta_0\, e^{f'(x^*)\, t}$$
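If you haven't seen this solved before, separation of variables does it in a couple of lines (writing $k = f'(x^*)$ for brevity, and $\eta_0 = e^C$ for the integration constant):

```latex
\frac{d\eta}{dt} = k\eta
\;\Rightarrow\; \int \frac{d\eta}{\eta} = \int k\, dt
\;\Rightarrow\; \ln \eta = kt + C
\;\Rightarrow\; \eta(t) = \eta_0\, e^{kt}
```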

where $\eta_0$ is the value of $\eta$ at $t = 0$. Now, what does this say? Well, it says that if you start off with a small perturbation away from $x^*$ then if $f'(x^*) > 0$, this perturbation will grow exponentially - ie. you will move further and further away from $x^*$ - and if $f'(x^*) < 0$ then the perturbation will decay to zero exponentially - ie. you will move closer and closer to $x = x^*$.
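We can watch both behaviours numerically. Below is a minimal sketch in Python; the choice $f(x) = x(1 - x)$ is purely a hypothetical example (it has fixed points at $x^* = 0$ with $f'(0) = 1 > 0$, and $x^* = 1$ with $f'(1) = -1 < 0$), and the crude Euler integrator is my own choice:

```python
def f(x):
    return x * (1 - x)  # hypothetical example: fixed points at x* = 0 and x* = 1

def evolve(x0, t_end=3.0, dt=1e-4):
    """Crude forward-Euler integration of x' = f(x) up to time t_end."""
    x = x0
    for _ in range(int(t_end / dt)):
        x += dt * f(x)
    return x

eta0 = 0.01
# Near x* = 0 (f'(0) = +1 > 0): the perturbation grows.
print(abs(evolve(0.0 + eta0)))        # much larger than eta0
# Near x* = 1 (f'(1) = -1 < 0): the perturbation decays.
print(abs(evolve(1.0 + eta0) - 1.0))  # much smaller than eta0
```

The same starting perturbation of size $0.01$ has grown by more than a factor of ten at the unstable point, and shrunk by roughly $e^{-3}$ at the stable one, just as the exponential solution predicts.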

If, starting from a little way away from a fixed point, you move away from it, then that is an unstable fixed point. If, starting from a little way away from a fixed point, you move towards it, then that is a stable fixed point. This is actually something that we saw before.

We actually said all of this in the previous section, but now we have proved it. In the new language:

∘ If $f'(x^*) > 0$ then $x^*$ is an unstable fixed point.

∘ If $f'(x^*) < 0$ then $x^*$ is a stable fixed point.
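In code this classification rule is essentially one comparison. A sketch in Python (the central-difference derivative and the example $f(x) = x(1 - x)$ are my own hypothetical choices, not part of the notes):

```python
def classify(f, x_star, h=1e-6):
    """Classify a fixed point x* of x' = f(x) by the sign of f'(x*)."""
    slope = (f(x_star + h) - f(x_star - h)) / (2 * h)  # central difference
    if slope > 0:
        return "unstable"
    if slope < 0:
        return "stable"
    return "inconclusive (f'(x*) = 0; need higher derivatives)"

f = lambda x: x * (1 - x)  # fixed points at x = 0 and x = 1
print(classify(f, 0.0))    # f'(0) = +1
print(classify(f, 1.0))    # f'(1) = -1
```

The third branch is exactly the $f'(x^*) = 0$ case discussed below, where the linear term vanishes and this test tells us nothing.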

Actually, we can say a little more. Not only does the sign of the derivative tell us whether a fixed point is stable or not, but its magnitude tells us how stable or unstable. If the derivative is, for instance, large and positive then not only is it an unstable fixed point, but because of the exponential solution, we will very quickly move away from the point. For a stable fixed point, the timescale over which the perturbation decays to $1/e$ of its original size is given by $1/|f'(x^*)|$; the same analysis holds for an unstable fixed point, where this is the timescale for the perturbation to grow by a factor of $e$. (Note that as a sanity check, $f'(x^*)$ is in units of inverse time, so $1/|f'(x^*)|$ being a timescale makes sense.)
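As a quick numeric check of this timescale, here is a sketch in Python. The function $f(x) = x(1 - x)$ is a hypothetical example with a stable fixed point at $x^* = 1$ where $f'(1) = -1$, so the predicted $1/e$ time is $1/|f'(x^*)| = 1$:

```python
import math

def f(x):
    return x * (1 - x)  # hypothetical example; x* = 1 is stable with f'(1) = -1

# Integrate x' = f(x) from x = 1 + eta0 and record when the perturbation
# has shrunk to eta0 / e; linear theory predicts t = 1/|f'(x*)| = 1.
x, t, dt = 1.01, 0.0, 1e-5
eta0 = x - 1.0
while abs(x - 1.0) > eta0 / math.e:
    x += dt * f(x)
    t += dt
print(round(t, 2))  # close to the predicted timescale of 1
```

The measured time is not exactly 1 because the thrown-away $O(\eta^2)$ term still nudges the trajectory slightly, but for $\eta_0 = 0.01$ the agreement is already very good.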

We kind of knew this already. If we have a function which has a very small slope as it passes through the fixed point (let's say with a negative gradient), then we would move towards it very, very slowly. If the slope were very large, then we would move towards it faster.

But hang on, I hear you say! What about if $f'(x^*) = 0$? Well, it turns out that in that case we have to look at $f''(x^*)$ to tell if it's stable or unstable. When might that be the case? Well, how about this differential equation?

$$\dot{x}(t) = x^2$$

Then it's clear that $x^* = 0$ is a fixed point, but what kind is it? The derivative of the function $(f(x) = x^2)$ is zero at $x = 0$, so we can't use that rule. Let's look at the phase portrait:

In[]:=

Plot[x^2,{x,-2,2},AxesLabel->{Style["x",14],Style["x'",14]},AspectRatio->1]

Out[]= (phase portrait: the parabola $\dot{x} = x^2$, touching zero only at $x = 0$)

hmmm...well, if we start from the left (ie. negative $x$), the value of $\dot{x}$ is positive, so we will move to the right...and if we start from the right then $\dot{x}$ is also positive, so we will also move to the right. Does that mean that if we start at negative $x$ we will end up moving all the way over to positive $x$? Well, no: if we start from, say, $x = -1$, we will start moving to the right, but as we get closer to $x = 0$ we will slow down...and never actually get to the fixed point (as we've said before).
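You can watch this slowing down numerically. A sketch in Python (a crude Euler integration of $\dot{x} = x^2$ starting from $x = -1$; the step size and sampling are my own choices):

```python
def f(x):
    return x ** 2  # x' = x^2: the fixed point at x = 0

dt = 1e-3
x = -1.0
positions = []
for step in range(10_000):    # integrate up to t = 10
    if step % 2_000 == 0:     # sample every 2 time units
        positions.append(round(x, 3))
    x += dt * f(x)
print(positions)
# x creeps towards 0 from the left but never reaches it:
# the exact solution starting from x(0) = -1 is x(t) = -1/(1 + t).
```

Each sampled position is closer to zero than the last, but the steps shrink as $x^2$ does, so the trajectory approaches the fixed point without ever crossing it.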

What we see though is that if you start off just to the left of the fixed point you will move towards it (like for a stable fixed point), but if you start just to the right of it you will move away from it, just like an unstable fixed point...so which is it? Well, it's called a half-stable fixed point. The arrows and the fixed point symbol would be drawn like this:

In[]:=

Show[Plot[x^2,{x,-2,2},AxesLabel->{Style["x",14],Style["x'",14]}],Graphics[Arrow[{{-1.5,0},{-0.5,0}}]],Graphics[Arrow[{{0.5,0},{1.5,0}}]],Graphics[Circle[{0,0},0.1]],Graphics[Disk[{0,0},0.1,{π/2,3π/2}]],AspectRatio->1]

Out[]= (the parabola with right-pointing arrows on both sides of the origin and a half-filled circle at $x = 0$: a half-stable fixed point)

Which shows that it's stable from the left and unstable from the right.

If the parabola were the other way up (ie. $\dot{x} = -x^2$), we would have a fixed point which is stable from the right and unstable from the left.

There is one final, trivial case...the case where $f(x) = 0$: then every point is a fixed point, and a perturbation neither grows nor decays - it just stays wherever you put it.

OK, so that gives us the full classification of fixed points and their stability for dynamical systems on the line. We are almost done with this section. Next we will look at existence and uniqueness, a subtle but important little topic.