We describe what a differential equation is and discuss the exponential growth model. Recall that #y^{(n)}#, the #n#-th derivative of #y#, is also called the derivative of order #n#.
A differential equation is an equation whose unknown is a function of one or more variables, and in which one or more derivatives of the unknown function occur.
A differential equation for one or more functions of a single (independent) variable is called an ordinary differential equation, abbreviated ODE.
In this course we will only be concerned with ODEs for a single function of a single variable. The general form of an ordinary differential equation for a function \(y\) of a single variable \(t\) on a certain interval is \[ \varphi(t,y,y',y'',\ldots)=0\] where \(\varphi\) is a multivariate function. Here, #y# is the unknown function, also referred to as the dependent variable, and #t# is the argument of the unknown function, also referred to as the independent variable.
- The order of this ordinary differential equation is the order of the highest derivative of \(y\) occurring in \(\varphi\).
- If the function \(\varphi\) is a polynomial function in each of the derivatives of #y# (not necessarily #y# itself), then the degree of this ordinary differential equation is equal to the degree of \(\varphi\) as a polynomial in the highest derivative of #y#.
- If the function \(\varphi\) is linear in #y# and all of its derivatives, then the ODE is called linear.
In general, a differential equation can have more than one unknown, and the unknowns may be functions of one or more variables. A differential equation for one or more functions of two or more variables, in which case the derivatives are partial derivatives, is called a partial differential equation, abbreviated PDE.
Partial differential equations are not covered in this chapter: the differential equations we will be concerned with have a single unknown function of one variable.
An unknown in a differential equation is not a number but a function (of place or time, for instance). It needs to be differentiable, and therefore also continuous.
Computing an antiderivative of the function #g# is the same as solving the differential equation \[y'=g\] Often we write #y'(t) =g(t)# for this equation. As is known from the chapter Integration, the general solution is denoted \[\int g(t)\,\dd t\]
This expression represents a differentiable function #y(t)# with derivative #g(t)# and is determined up to a constant term. For example, if #g(t)=1#, the constant function #1#, then #y(t) = t# is a solution of the differential equation #y'(t)=1# and each solution is of the form #t+C#, where #C# is a constant.
The integral #\int g(t)\,\dd t# thus describes the general solution of the differential equation #y'(t) =g(t)#. It is a set of solutions described by means of a function rule in which one or more parameters (called integration constants), like #C# in the above example, may occur.
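For readers who want a machine check of this idea, here is a minimal sketch using the sympy library (an assumed tool, not part of this course); note that computer algebra systems typically return a single antiderivative and omit the integration constant #C#.

```python
import sympy as sp

t = sp.symbols('t')
g = sp.Integer(1)  # the example g(t) = 1 from above

# sympy returns one antiderivative; the general solution of y' = g
# is this result plus an arbitrary constant C, which sympy omits.
print(sp.integrate(g, t))  # -> t
```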
Since the independent variable often represents time (it is good practice to let the variable #t# stand for time), ODEs are also known as dynamical systems.
The purpose of studying equations is to find their solutions (or at least useful properties of those solutions). In general, solutions are found by rewriting the equation into simpler differential equations, or systems of equations (if possible, even without derivatives). Just as in the case of equations without derivatives, it is convenient for this process to be able to indicate that two equations have the same solutions.
Two ODEs having the same dependent unknowns and independent variables are called equivalent if they have the same solutions.
For example, the two differential equations
\[y\cdot y'-2y'+y=2\phantom{xxxx}\text{ and }\phantom{xxxx} (y'+1)\cdot(y-2)=0\]
are equivalent since, by standard algebraic operations, each can be transformed into the other.
The differential equation #y'=t\cdot y# with unknown #y# as a function of #t# follows from #\frac{y'}{y}=t# after multiplication by #y#. But the solution #y=0# of the first ODE is not a solution of the second ODE. Therefore, they are not equivalent. Just as for ordinary equations (disregarding the domain for #y#), the differential equation #y'=t\cdot y# is equivalent to \[\frac{y'}{y}=t\lor y=0\]
The rate of growth in a population is determined by various biological processes and is directly influenced by the size of the population. Under ideal circumstances, if the size of the population increases, its rate of growth increases as well. A first-order differential equation can be used to express the rate of growth for a population as a proportion of its size. An example is \[y' = \dfrac{1}{3} y \]
Solving this differential equation means finding a function \(y=f(x)\) such that its rate of growth \(\frac{\dd y}{\dd x}\), namely the first-order derivative \(f'(x)\), is equal to one third of the value #f(x)# of that function.
In our concrete example, the independent variable #x# represents time, and the unknown function \(y = f(x)\), the dependent variable, indicates how big the population is. The ODE expresses the mathematical intuition that the rate of population growth becomes higher as the population increases.
Here are three well-known dynamical systems for modelling growth.
- Exponential growth or decay: \(y'(t) = r\cdot y( t)\) where #r# is a constant.
- Bounded exponential growth: \(y'(t) = r\cdot \bigl(K- y(t)\big)\) where \(r\) and \(K\) are nonnegative constants.
- Logistic growth: \(y'(t) = r\cdot y(t)\cdot\left(1-\frac{y(t)}{K}\right)\) where \(r\) and \(K\) are nonzero constants.
Below, the first example is dealt with in greater detail.
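Before that, to get a feel for how these three models behave, here is a minimal numerical sketch in plain Python using the forward Euler method; the parameter values are illustrative choices of ours, not taken from the text.

```python
# Forward Euler for y' = f(y), y(0) = y0: a crude but simple way
# to approximate the solution of an autonomous first-order ODE.
def euler(f, y0, t_end=10.0, n=1000):
    h = t_end / n
    y = y0
    for _ in range(n):
        y += h * f(y)
    return y

r, K, y0 = 0.5, 100.0, 2.0  # illustrative parameters

print(euler(lambda y: r * y, y0))                # exponential growth
print(euler(lambda y: r * (K - y), y0))          # bounded exponential growth
print(euler(lambda y: r * y * (1 - y / K), y0))  # logistic growth
```

With these values, the first model keeps growing, while the other two approach the bound #K=100# as #t# grows.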
Consider the following differential equation. \[y(x)\cdot y'(x) = 2-5x\]
Here, #y# is the dependent variable and #x# is the independent variable.
We show how this ODE can be solved. Thanks to the chain rule for differentiation, we can rewrite the left-hand side.
\[\begin{array}{rcl}
\frac{1}{2}\dfrac{\dd}{\dd x}\left( y(x)\right)^2 &=& 2-5x\\
&&\phantom{xx}\color{blue}{y(x)\cdot y'(x) =\frac{1}{2}\frac{\dd}{\dd x}\left(y(x)\right)^2}\\
\dfrac{\dd}{\dd x}\left( y(x)\right)^2 &=& 2\cdot \left(2-5x\right)\\
&&\phantom{xx}\color{blue}{\text{factor }2\text{ to the right}}\\
\left( y(x)\right)^2 +D &=& \int2\cdot \left(2-5x\right)\,\dd x\\
&&\phantom{xx}\color{blue}{\text{both sides integrated; }D\text{ is a constant}}\\
\left( y(x)\right)^2 &=&2\cdot \left(2\cdot x-{{5\cdot x^2}\over{2}}\right)+C\\
&&\phantom{xx}\color{blue}{\text{antiderivative computed at the right; constants moved to the right and combined into a single constant }C}\\ y(x)&=&\pm \sqrt{2\cdot \left(2\cdot x-{{5\cdot x^2}\over{2}}\right)+C}\\
&&\phantom{xx}\color{blue}{\text{root taken}}\\
\end{array}
\]
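Those who wish to verify this result with a computer algebra system can do so along the following lines; the sketch below uses sympy (an assumed tool) and should reproduce both branches of the solution.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# The ODE y(x) * y'(x) = 2 - 5x from the example above.
ode = sp.Eq(y(x) * y(x).diff(x), 2 - 5 * x)

sols = sp.dsolve(ode, y(x))
sols = sols if isinstance(sols, list) else [sols]
for sol in sols:
    # Expect y(x) = +/- sqrt(C1 + 4x - 5x**2), matching the derivation.
    print(sol, sp.checkodesol(ode, sol))
```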
In the definition of linearity, by requiring that #\varphi# be linear in #y# and all of its derivatives, we mean that it is linear as a function of the vector #\rv{y,y',\ldots,y^{(n)}}#, so no cross products, like #y\cdot y'#, are allowed.
For instance, a linear first-order ODE has the form
\[a(t)\cdot y' +b(t)\cdot y+c(t)=0\]
for functions #a#, #b#, #c# of the independent variable #t#. Later, we will describe how to solve such ODEs.
A first-order ODE of degree #1# need not be linear, since it is not required to be a polynomial of degree #1# in #y#. For instance, #y' = t^2\cdot y^3+5# is of order #1# and of degree #1#, but not linear.
There can be many solutions to an ODE. An initial value problem corresponding to the ODE narrows down the set of solutions by imposing natural additional conditions on the unknown function.
In general, a differential equation can have multiple solutions. By using constants, often called integration constants, we can sometimes describe the general solution of a differential equation by means of a single function rule containing these constants.
To pinpoint a given solution additional conditions are required. When these conditions are all related to the value of the solution #y# or a derivative thereof for a given value of the independent variable (think of the state #y(0)# at time \(0\) or the speed #y'(0)# at time \(0\)), then we speak of an initial value problem. This is often abbreviated to IVP.
If there is a unique solution of the ODE satisfying the initial condition (by this we mean the condition(s) of the initial value problem), we call it the specific solution of the initial value problem.
Consider the initial value problem \[ y' =y\phantom{xxx} \text{ with initial condition }\phantom{xxx} \rv{t,y} = \rv{0,1}\]
The initial condition states that the graph of the solution #y# should go through the point #\rv{0,1}#; in other words, #y(0)=1#.
The general solution of the ODE, as we will see soon, is #y(t)=C\cdot\e^t#. This function satisfies the initial condition #y(0)=1# if and only if #C=1#. Therefore, the specific solution of the initial value problem is \[y(t)=\e^t\]
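The same computation can be handed to a computer algebra system; sympy (again, an assumed tool) accepts the initial condition directly.

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

# IVP: y' = y with y(0) = 1; expect the specific solution y(t) = exp(t).
print(sp.dsolve(sp.Eq(y(t).diff(t), y(t)), y(t), ics={y(0): 1}))
```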
If all the additional requirements are related to the boundary of an interval on which the function is defined (think of #y(2)=7# and #y(9)=13# for a function #y# on the interval #\ivcc{2}{9}#), then we also speak of a boundary value problem. We also talk about the specific solution of the boundary value problem.
Apart from establishing that a solution exists, we will be concerned with actually finding solutions. We now deal with the special example of exponential growth.
Let # r # be a real number. The general solution of the differential equation \[\frac{\dd y}{\dd t}= r \cdot y\] is \[y(t)=C\cdot \e^{ r \cdot t}\] where \(C\) is a constant.
The exponential function \[y(t)=\e^t\] is equal to its own derivative and so satisfies the differential equation \(y'=y\). In other words, it is a solution of the ODE. The equation \(y'=y\) has other solutions, for example \[y(t)= 2\e^t,\quad y(t)= -\e^t,\quad y(t)= -\tfrac{1}{3}\e^t \] These solutions are all of the form #y(t)=C\cdot \e^t# for a certain constant \(C\). The theorem shows that every solution has this form.
Suppose that #y# is a solution. In order to prove the theorem we show that #z(t)=y(t)\cdot\e^{- r \cdot t}# is a constant function.
We have #y(t)= z(t)\cdot \e^{ r \cdot t}# and, thanks to the product rule for differentiation, \[\begin{array}{rcl}y'(t)&=&\frac{\dd}{\dd t}\left(z(t)\cdot \e^{ r \cdot t}\right)\\ &=& z'(t)\cdot \e^{ r \cdot t} + z(t)\cdot \frac{\dd}{\dd t}\bigl(\e^{ r \cdot t}\bigr)\\ &=& z'(t)\cdot \e^{ r \cdot t} + r \cdot z(t)\cdot \e^{ r \cdot t}\end{array}\] The fact that the function \(y\) satisfies the differential equation \(y'= r \cdot y\) means \[z'(t)\cdot \e^{ r \cdot t} + r \cdot z(t)\cdot \e^{ r \cdot t}= r \cdot z(t)\cdot \e^{ r \cdot t}\] This is equivalent to #z'(t)\cdot \e^{ r \cdot t}=0#, and since #\e^{ r \cdot t}# is never zero, to #z'(t)=0#. In view of the Uniqueness of the antiderivative up to a constant, this means that the function \(z(t)\) is constant. Therefore, there is a constant #C# such that #C=y(t)\cdot\e^{- r \cdot t}#. This means #y(t)=C\cdot\e^{ r \cdot t}#.
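The key step of this proof, that the derivative of #z(t)=y(t)\cdot\e^{- r \cdot t}# factors as #\e^{- r \cdot t}\cdot\bigl(y'(t)- r \cdot y(t)\bigr)#, can also be checked symbolically; here is a sketch with sympy (an assumed tool).

```python
import sympy as sp

t, r = sp.symbols('t r')
y = sp.Function('y')

# z(t) = y(t) * exp(-r*t), as in the proof.
z = y(t) * sp.exp(-r * t)

# The derivative factors as exp(-r*t) * (y'(t) - r*y(t)),
# so z' = 0 exactly when y' = r*y.
print(sp.factor_terms(z.diff(t)))
```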
The special case where # r =0# is already known: the only functions having derivative #0# are the constant functions.
If # r \gt0#, then #\abs{y(t)}#, the size of #y#, will increase and even tend to #\infty# for #t\to\infty#, provided #y# is not the zero solution (#C\ne0#). This case represents growth.
If # r \lt0#, then #\abs{y(t)}#, the size of #y#, will decrease and even tend to #0# for #t\to\infty#. This case represents decay.
In many exponential growth models, we know the value of #y#, say #y = 2#, at the starting point, say #t=0#. In order to find the specific solution of the exponential growth equation satisfying this constraint, we substitute the initial values into the general solution \(y(t)=C\cdot \e^{ r \cdot t}\). This gives the equation \(2 = C\). Thus, \(y(t) = 2 \e^{ r \cdot t}\) is the specific solution.
In the figure below the function #y(t) = y_0 a ^t = y_0 \e^{k\cdot t}# is drawn, where #k= \ln(a)# and #y_0=y(0)\ne0#. We take #k\gt0# for a growth model; taking #k=-r# with #r\gt0# gives a decay model.
In order to determine the value of #a#, and hence #k#, for a growth model, we use the doubling time #t_{\text{double}}#. This is the time in which the value of #y# doubles. In a formula this can be expressed as \(y(t+t_{\text{double}}) = {2}y(t)\). Substitution of the function rule gives \(y_0 a ^{t+t_{\text{double}}} = {2}y_0a ^{t}\), so\[ t_{\text{double}} = \frac{\ln\left(2\right)}{\ln(a)} \phantom{xxx}\text{ and }\phantom{xxx}k= \ln(a)= \frac{\ln(2)}{t_{\text{double}}}\]This expression is independent of #t#, as illustrated by the constant distance between the points drawn on the #t#-axis.
In order to determine the value of #a#, and hence #r#, from the decay model, we use the halving time #t_{\text{half}}#. This is the time in which the value of #y# halves. In a formula this can be expressed as \(y(t+t_{\text{half}}) = \frac{1}{2}y(t)\). Substitution of the function rule gives \(y_0 a ^{t+t_{\text{half}}} = \frac{1}{2}y_0a ^{t}\), so \[ t_{\text{half}} = \frac{\ln\left(\frac12\right)}{\ln(a)} = -\frac{\ln(2)}{\ln(a)}\phantom{xxx}\text{ and }\phantom{xxx}r=- \ln(a)= \frac{\ln(2)}{t_{\text{half}}}\]This expression is also independent of #t#, as illustrated by the constant distance between the points drawn on the #t#-axis.
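As a small numeric illustration of these formulas (the values of #a# and #y_0# below are our own illustrative choices), the doubling time can be computed and checked directly:

```python
import math

# Growth model y(t) = y0 * a**t with an illustrative a > 1.
a, y0 = 1.05, 2.0
t_double = math.log(2) / math.log(a)
print(t_double)  # ~14.21: the time in which y doubles

# Check that y(t + t_double) = 2 * y(t), independently of t.
for t in (0.0, 3.0, 10.0):
    print(y0 * a ** (t + t_double), 2 * y0 * a ** t)  # equal up to rounding
```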
Solve the following differential equation. \[y'(x) = x^2+1\]
Give your answer in the form #y(x)= f(x)#, where #f(x)# is an expression in the independent variable #x# and the constant of integration #C# and no other variables.
#y(x) = {{x^3}\over{3}}+x+C#
This is a simple example of a differential equation. By integrating the left-hand and right-hand sides we arrive at the problem of finding an antiderivative of #x^2+1#:
\[\begin{array}{rcl}
y(x) +D &=& \int \left(x^2+1\right)\,\dd x\\
&&\phantom{xx}\color{blue}{D\text{ is a constant}}\\
y(x) +D &=& \displaystyle {{x^3}\over{3}}+x+C \\
&&\phantom{xx}\color{blue}{\text{antiderivative at the right computed with constant }C}\\
y(x) &=& \displaystyle {{x^3}\over{3}}+x+C \\
&&\phantom{xx}\color{blue}{\text{constants to the right and replaced by a single constant }C}\\
\end{array}
\]
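As with the earlier example, the answer can be checked with sympy (an assumed tool, which omits the integration constant #C#):

```python
import sympy as sp

x = sp.symbols('x')

# Antiderivative of x**2 + 1; add an arbitrary constant C by hand.
print(sp.integrate(x**2 + 1, x))  # -> x**3/3 + x
```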