
CLP-3 Multivariable Calculus

Section 2.2 Partial Derivatives

We are now ready to define derivatives of functions of more than one variable. First, recall how we defined the derivative, \(f'(a)\text{,}\) of a function of one variable, \(f(x)\text{.}\) We imagined that we were walking along the \(x\)-axis, in the positive direction, measuring, for example, the temperature along the way. We denoted by \(f(x)\) the temperature at \(x\text{.}\) The instantaneous rate of change of temperature that we observed as we passed through \(x=a\) was
\begin{equation*} \diff{f}{x}(a) =\lim_{h\rightarrow 0}\frac{f(a+h) - f(a)}{h} =\lim_{x\rightarrow a}\frac{f(x) - f(a)}{x-a} \end{equation*}
Next suppose that we are walking in the \(xy\)-plane and that the temperature at \((x,y)\) is \(f(x,y)\text{.}\) We can pass through the point \((x,y)=(a,b)\) moving in many different directions, and we cannot expect the measured rate of change of temperature if we walk parallel to the \(x\)-axis, in the direction of increasing \(x\text{,}\) to be the same as the measured rate of change of temperature if we walk parallel to the \(y\)-axis in the direction of increasing \(y\text{.}\) We’ll start by considering just those two directions. We’ll consider other directions (like walking parallel to the line \(y=x\)) later.
Suppose that we are passing through the point \((x,y)=(a,b)\) and that we are walking parallel to the \(x\)-axis (in the positive direction). Then our \(y\)-coordinate will be constant, always taking the value \(y=b\text{.}\) So we can think of the measured temperature as the function of one variable \(B(x) = f(x,b)\) and we will observe the rate of change of temperature
\begin{equation*} \diff{B}{x}(a) = \lim_{h\rightarrow 0}\frac{B(a+h) - B(a)}{h} = \lim_{h\rightarrow 0}\frac{f(a+h,b) - f(a,b)}{h} \end{equation*}
This is called the “partial derivative of \(f\) with respect to \(x\) at \((a,b)\)” and is denoted \(\pdiff{f}{x}(a,b)\text{.}\) Here
  • the symbol \(\partial\text{,}\) which is read “partial”, indicates that we are dealing with a function of more than one variable, and
  • the \(x\) in \({\pdiff{f}{x}}\) indicates that we are differentiating with respect to \(x\text{,}\) while \(y\) is being held fixed, i.e. being treated as a constant.
  • \({\pdiff{f}{x}}\) is read “partial dee \(f\) dee \(x\)”.
Do not write \(\diff{}{x}\) when \(\pdiff{}{x}\) is appropriate. We shall later encounter situations when \(\diff{}{x}f\) and \(\pdiff{}{x}f\) are both defined and have different meanings.
If, instead, we are passing through the point \((x,y)=(a,b)\) and are walking parallel to the \(y\)-axis (in the positive direction), then our \(x\)-coordinate will be constant, always taking the value \(x=a\text{.}\) So we can think of the measured temperature as the function of one variable \(A(y) = f(a,y)\) and we will observe the rate of change of temperature
\begin{equation*} \diff{A}{y}(b) = \lim_{h\rightarrow 0}\frac{A(b+h) - A(b)}{h} = \lim_{h\rightarrow 0}\frac{f(a,b+h) - f(a,b)}{h} \end{equation*}
This is called the “partial derivative of \(f\) with respect to \(y\) at \((a,b)\)” and is denoted \(\pdiff{f}{y}(a,b)\text{.}\)
Just as was the case for the ordinary derivative \(\diff{f}{x}(x)\) (see Definition 2.2.6 in the CLP-1 text), it is common to treat the partial derivatives of \(f(x,y)\) as functions of \((x,y)\) simply by evaluating the partial derivatives at \((x,y)\) rather than at \((a,b)\text{.}\)

Definition 2.2.1. Partial Derivatives.

The \(x\)- and \(y\)-partial derivatives of the function \(f(x,y)\) are
\begin{align*} \pdiff{f}{x}(x,y) &= \lim_{h\rightarrow 0}\frac{f(x+h,y) - f(x,y)}{h}\\ \pdiff{f}{y}(x,y) &= \lim_{h\rightarrow 0}\frac{f(x,y+h) - f(x,y)}{h} \end{align*}
respectively. The partial derivatives of functions of more than two variables are defined analogously.
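For readers who like to check things numerically, here is a minimal Python sketch (not part of the text's argument) of how the limits in Definition 2.2.1 can be approximated by difference quotients with a small but nonzero \(h\text{.}\) The particular function used, \(f(x,y)=e^x\cos y\text{,}\) is just an illustrative choice; it reappears in Exercise 1 at the end of this section.

```python
# A sketch: approximate the limits of Definition 2.2.1 by difference
# quotients with a small, nonzero h.  The function f is an illustrative choice.
import math

def f(x, y):
    return math.exp(x) * math.cos(y)

def approx_fx(f, x, y, h=1e-6):
    # difference quotient approximating the x-partial derivative
    return (f(x + h, y) - f(x, y)) / h

def approx_fy(f, x, y, h=1e-6):
    # difference quotient approximating the y-partial derivative
    return (f(x, y + h) - f(x, y)) / h

# The exact values at (0,0) are 1 and 0 respectively.
print(approx_fx(f, 0.0, 0.0), approx_fy(f, 0.0, 0.0))
```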
Partial derivatives are used a lot. And there are many notations for them.

Definition 2.2.2.

The partial derivative \(\pdiff{f}{x}(x,y)\) of a function \(f(x,y)\) is also denoted
\begin{equation*} \pdiff{f}{x}\qquad f_x(x,y)\qquad f_x\qquad D_xf(x,y)\qquad D_xf\qquad D_1 f(x,y)\qquad D_1 f \end{equation*}
The subscript \(1\) on \(D_1 f\) indicates that \(f\) is being differentiated with respect to its first variable. The partial derivative \(\pdiff{f}{x}(a,b)\) is also denoted
\begin{equation*} \pdiff{f}{x}\bigg|_{(a,b)} \end{equation*}
with the subscript \((a,b)\) indicating that \(\pdiff{f}{x}\) is being evaluated at \((x,y)=(a,b)\text{.}\)
The notation \({\left(\pdiff{f}{x}\right)}_{\!y}\) is used to make explicit that the variable \(y\) is being held fixed. (There are applications in which there are several variables that cannot be varied independently. For example, the pressure, volume and temperature of an ideal gas are related by the equation of state \(PV= \text{(constant)} T\text{.}\) In those applications, it may not be clear from the context which variables are being held fixed.)

Remark 2.2.3. The Geometric Interpretation of Partial Derivatives.

We’ll now develop a geometric interpretation of the partial derivative
\begin{equation*} \pdiff{f}{x}(a,b) = \lim_{h\rightarrow 0}\frac{f(a+h,b) - f(a,b)}{h} \end{equation*}
in terms of the shape of the graph \(z=f(x,y)\) of the function \(f(x,y)\text{.}\) That graph appears in the figure below. It looks like the part of a deformed sphere that is in the first octant.
The definition of \(\pdiff{f}{x}(a,b)\) concerns only points on the graph that have \(y=b\text{.}\) In other words, the curve of intersection of the surface \(z=f(x,y)\) with the plane \(y=b\text{.}\) That is the red curve in the figure. The two blue vertical line segments in the figure have heights \(f(a,b)\) and \(f(a+h,b)\text{,}\) which are the two numbers in the numerator of \(\frac{f(a+h,b) - f(a,b)}{h}\text{.}\)
A side view of the curve (looking from the left side of the \(y\)-axis) is sketched in the figure below.
Again, the two blue vertical line segments in the figure have heights \(f(a,b)\) and \(f(a+h,b)\text{,}\) which are the two numbers in the numerator of \(\frac{f(a+h,b) - f(a,b)}{h}\text{.}\) So the numerator \(f(a+h,b) - f(a,b)\) and denominator \(h\) are the rise and run, respectively, of the curve \(z=f(x,b)\) from \(x=a\) to \(x=a+h\text{.}\) Thus \(\pdiff{f}{x}(a,b)\) is exactly the slope of (the tangent to) the curve of intersection of the surface \(z=f(x,y)\) and the plane \(y=b\) at the point \(\big(a,b, f(a,b)\big)\text{.}\) In the same way \(\pdiff{f}{y}(a,b)\) is exactly the slope of (the tangent to) the curve of intersection of the surface \(z=f(x,y)\) and the plane \(x=a\) at the point \(\big(a,b, f(a,b)\big)\text{.}\)

Subsection 2.2.1 Evaluation of Partial Derivatives

From the above discussion, we see that we can readily compute partial derivatives \(\pdiff{}{x}\) by using what we already know about ordinary derivatives \(\diff{}{x}\text{.}\) More precisely,
  • to evaluate \(\pdiff{f}{x}(x,y)\text{,}\) treat the \(y\) in \(f(x,y)\) as a constant and differentiate the resulting function of \(x\) with respect to \(x\text{.}\)
  • To evaluate \(\pdiff{f}{y}(x,y)\text{,}\) treat the \(x\) in \(f(x,y)\) as a constant and differentiate the resulting function of \(y\) with respect to \(y\text{.}\)
  • To evaluate \(\pdiff{f}{x}(a,b)\text{,}\) treat the \(y\) in \(f(x,y)\) as a constant and differentiate the resulting function of \(x\) with respect to \(x\text{.}\) Then evaluate the result at \(x=a\text{,}\) \(y=b\text{.}\)
  • To evaluate \(\pdiff{f}{y}(a,b)\text{,}\) treat the \(x\) in \(f(x,y)\) as a constant and differentiate the resulting function of \(y\) with respect to \(y\text{.}\) Then evaluate the result at \(x=a\text{,}\) \(y=b\text{.}\)
Now for some examples.

Example 2.2.4.

Let
\begin{equation*} f(x,y) = x^3+y^2+ 4xy^2 \end{equation*}
Then, since \(\pdiff{}{x}\) treats \(y\) as a constant,
\begin{align*} \pdiff{f}{x} &= \pdiff{}{x}(x^3) + \pdiff{}{x}(y^2) +\pdiff{}{x}(4xy^2)\\ &= 3x^2+0 + 4y^2\pdiff{}{x}(x)\\ &= 3x^2 +4y^2 \end{align*}
and, since \(\pdiff{}{y}\) treats \(x\) as a constant,
\begin{align*} \pdiff{f}{y} &= \pdiff{}{y}(x^3) + \pdiff{}{y}(y^2) +\pdiff{}{y}(4xy^2)\\ &= 0 + 2y + 4x\pdiff{}{y}(y^2)\\ &= 2y+8xy \end{align*}
In particular, at \((x,y)=(1,0)\) these partial derivatives take the values
\begin{alignat*}{2} \pdiff{f}{x}(1,0) &= 3(1)^2 +4(0)^2&=3\\ \pdiff{f}{y}(1,0) &= 2(0) +8(1)(0)\ &=0 \end{alignat*}
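For those who have access to the SymPy library, the computation of Example 2.2.4 can be verified symbolically. The following is only a verification sketch; the hand computation above is the point of the example.

```python
# Symbolic check of Example 2.2.4 (assumes the SymPy library is installed).
import sympy as sp

x, y = sp.symbols('x y')
f = x**3 + y**2 + 4*x*y**2

fx = sp.diff(f, x)    # differentiates in x, treating y as a constant
fy = sp.diff(f, y)    # differentiates in y, treating x as a constant

print(fx)                                              # 3*x**2 + 4*y**2
print(fy)                                              # 8*x*y + 2*y
print(fx.subs({x: 1, y: 0}), fy.subs({x: 1, y: 0}))    # 3 0
```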

Example 2.2.5.

Let
\begin{equation*} f(x,y) = y\cos x + xe^{xy} \end{equation*}
Then, since \(\pdiff{}{x}\) treats \(y\) as a constant, \(\pdiff{}{x} e^{yx}=y e^{yx}\) and
\begin{align*} \pdiff{f}{x}(x,y) &= y\pdiff{}{x}(\cos x) + e^{xy}\pdiff{}{x}(x) +x\pdiff{}{x}\big(e^{xy}\big) \qquad\text{(by the product rule)}\\ &= -y\sin x + e^{xy} +xye^{xy}\\ \pdiff{f}{y}(x,y) &= \cos x\pdiff{}{y}(y) + x\pdiff{}{y}\big(e^{xy}\big)\\ &= \cos x + x^2e^{xy} \end{align*}
Let’s move up to a function of four variables. Things generalize in quite a straightforward way.

Example 2.2.6.

Let
\begin{equation*} f(x,y,z,t) = x\sin(y+2z) +t^2e^{3y}\ln z \end{equation*}
Then
\begin{align*} \pdiff{f}{x}(x,y,z,t) &= \sin(y+2z)\\ \pdiff{f}{y}(x,y,z,t) &= x\cos(y+2z) +3t^2e^{3y}\ln z\\ \pdiff{f}{z}(x,y,z,t) &= 2x\cos(y+2z) +t^2e^{3y}/z\\ \pdiff{f}{t}(x,y,z,t) &= 2te^{3y}\ln z \end{align*}
Now here is a more complicated example — our function takes a special value at \((0,0)\text{.}\) To compute derivatives there we revert to the definition.

Example 2.2.7.

Set
\begin{equation*} f(x,y)=\begin{cases} \frac{\cos x-\cos y}{x-y}&\text{if } x\ne y \\ 0&\text{if } x=y \end{cases} \end{equation*}
If \(b\ne a\text{,}\) then for all \((x,y)\) sufficiently close to \((a,b)\text{,}\) \(f(x,y) = \frac{\cos x-\cos y}{x-y}\) and we can compute the partial derivatives of \(f\) at \((a,b)\) using the familiar rules of differentiation. However that is not the case for \((a,b)=(0,0)\text{.}\) To evaluate \(f_x(0,0)\text{,}\) we need to set \(y=0\) and find the derivative of
\begin{equation*} f(x,0) = \begin{cases} \frac{\cos x-1}{x}&\text{if } x\ne 0 \\ 0&\text{if } x=0 \end{cases} \end{equation*}
with respect to \(x\) at \(x=0\text{.}\) As we cannot use the usual differentiation rules, we evaluate the derivative (it is also possible to evaluate the derivative by using the technique of the optional Section 2.15 in the CLP-1 text) by applying the definition
\begin{align*} f_x(0,0) &= \lim_{h\rightarrow 0}\frac{f(h,0)-f(0,0)}{h}\\ &= \lim_{h\rightarrow 0}\frac{\frac{\cos h-1}{h}-0}{h} &\qquad\text{(Recall that $h\ne 0$ in the limit.)}\\ &= \lim_{h\rightarrow 0}\frac{\cos h-1}{h^2}\\ &= \lim_{h\rightarrow 0}\frac{-\sin h}{2h} &\qquad\text{(By l'Hôpital's rule.)}\\ &= \lim_{h\rightarrow 0}\frac{-\cos h}{2} &\qquad\text{(By l'Hôpital again.)}\\ &=-\frac{1}{2} \end{align*}
We could also evaluate the limit of \(\frac{\cos h-1}{h^2} \) by substituting in the Taylor expansion
\begin{equation*} \cos h = 1 -\frac{h^2}{2}+\frac{h^4}{4!} -\cdots \end{equation*}
We can also use Taylor expansions to understand the behaviour of \(f(x,y)\) for \((x,y)\) near \((0,0)\text{.}\) For \(x\ne y\text{,}\)
\begin{align*} \frac{\cos x-\cos y}{x-y} &=\frac{\left[1-\frac{x^2}{2!} +\frac{x^4}{4!}-\cdots\right] -\left[1-\frac{y^2}{2!} +\frac{y^4}{4!}-\cdots\right]}{x-y}\\ &=\frac{-\frac{x^2-y^2}{2!} +\frac{x^4-y^4}{4!}-\cdots}{x-y} \\ &= -\frac{1}{2!}\frac{x^2-y^2}{x-y} +\frac{1}{4!}\frac{x^4-y^4}{x-y}-\cdots \\ &= -\frac{x+y}{2!} +\frac{x^3+x^2y+xy^2+y^3}{4!}-\cdots \end{align*}
So for \((x,y)\) near \((0,0)\text{,}\)
\begin{equation*} f(x,y)\approx\begin{cases} -\frac{x+y}{2} &\text{if $x\ne y$} \\ 0 &\text{if $x=y$} \end{cases} \end{equation*}
So it sure looks like (and in fact it is true that)
  • \(f(x,y)\) is continuous at \((0,0)\) and
  • \(f(x,y)\) is not continuous at \((a,a)\) for small \(a\ne 0\) and
  • \(\displaystyle f_x(0,0)=f_y(0,0)=-\frac{1}{2}\)
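Here is a small numerical sketch (assuming Python is at hand) supporting the observations of Example 2.2.7: the difference quotient defining \(f_x(0,0)\) approaches \(-\frac{1}{2}\text{,}\) and \(f(x,y)\) is close to \(-\frac{x+y}{2}\) near the origin.

```python
# Numerical sanity check for Example 2.2.7.
import math

def f(x, y):
    return (math.cos(x) - math.cos(y)) / (x - y) if x != y else 0.0

# The difference quotient defining f_x(0,0); it should approach -1/2.
for h in [0.1, 0.01, 0.001]:
    print(h, (f(h, 0.0) - f(0.0, 0.0)) / h)

# Near (0,0) with x != y, f(x,y) is approximately -(x+y)/2.
x, y = 0.01, -0.02
print(f(x, y), -(x + y) / 2)
```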

Example 2.2.8.

Again set
\begin{equation*} f(x,y)=\begin{cases} \frac{\cos x-\cos y}{x-y}&\text{if } x\ne y \\ 0&\text{if } x=y \end{cases} \end{equation*}
We’ll now compute \(f_y(x,y)\) for all \((x,y)\text{.}\)
The case \(y\ne x\text{:}\) When \(y\ne x\text{,}\)
\begin{align*} f_y(x,y) & = \pdiff{}{y}\frac{\cos x-\cos y}{x-y}\\ &=\frac{(x-y)\pdiff{}{y}(\cos x-\cos y) - (\cos x-\cos y)\pdiff{}{y}(x-y) }{(x-y)^2}\\ &\hskip2in\text{(by the quotient rule)}\\ &=\frac{(x-y)\sin y + \cos x-\cos y }{(x-y)^2} \end{align*}
The case \(y= x\text{:}\) When \(y = x\text{,}\)
\begin{align*} f_y(x,y) &= \lim_{h\rightarrow 0}\frac{f(x,y+h)-f(x,y)}{h}\\ &= \lim_{h\rightarrow 0}\frac{f(x,x+h)-f(x,x)}{h}\\ &= \lim_{h\rightarrow 0}\frac{\frac{\cos x-\cos(x+h)}{x-(x+h)}-0}{h} &\qquad\text{(Recall that $h\ne 0$ in the limit.)}\\ &= \lim_{h\rightarrow 0}\frac{\cos(x+h)-\cos x}{h^2} \end{align*}
Now we apply L’Hôpital’s rule, remembering that, in this limit, \(x\) is a constant and \(h\) is the variable — so we differentiate with respect to \(h\text{.}\)
\begin{align*} f_y(x,y) &= \lim_{h\rightarrow 0}\frac{-\sin(x+h)}{2h} \end{align*}
Note that if \(x\) is not an integer multiple of \(\pi\text{,}\) then the numerator \(-\sin(x+h)\) does not tend to zero as \(h\) tends to zero, and the limit giving \(f_y(x,y)\) does not exist. On the other hand, if \(x\) is an integer multiple of \(\pi\text{,}\) both the numerator and denominator tend to zero as \(h\) tends to zero, and we can apply L’Hôpital’s rule a second time. Then
\begin{align*} f_y(x,y) &= \lim_{h\rightarrow 0}\frac{-\cos(x+h)}{2}\\ &=-\frac{\cos x}{2} \end{align*}
The conclusion:
\begin{equation*} f_y(x,y)=\begin{cases} \frac{(x-y)\sin y + \cos x-\cos y }{(x-y)^2}&\text{if } x\ne y\\ -\frac{\cos x}{2}&\text{if } x=y \text{ with } x \text{ an integer multiple of }\pi\\ DNE&\text{if } x=y \text{ with } x \text{ not an integer multiple of }\pi \end{cases} \end{equation*}
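The conclusion of Example 2.2.8 can also be seen numerically. In the sketch below (an illustration only), the difference quotient \(\frac{\cos(x+h)-\cos x}{h^2}\) settles down to \(-\frac{\cos x}{2}=\frac{1}{2}\) when \(x=\pi\text{,}\) but grows without bound when \(x=1\text{.}\)

```python
# Numerical illustration of the two cases in Example 2.2.8.
import math

def quotient(x, h):
    # the difference quotient for f_y(x,x) from Example 2.2.8
    return (math.cos(x + h) - math.cos(x)) / h**2

for h in [0.1, 0.01, 0.001]:
    # first column: x = pi (a multiple of pi), tends to -cos(pi)/2 = 0.5
    # second column: x = 1 (not a multiple of pi), behaves like -sin(1)/h
    print(h, quotient(math.pi, h), quotient(1.0, h))
```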

Example 2.2.9. Optional — A Little Weirdness.

In this example, we will see that the function
\begin{equation*} f(x,y)=\begin{cases} \frac{x^2}{x-y}&\text{if } x\ne y \\ 0&\text{if } x=y \end{cases} \end{equation*}
is not continuous at \((0,0)\) and yet has both partial derivatives \(f_x(0,0)\) and \(f_y(0,0)\) perfectly well defined. We’ll also see how that is possible. First let’s compute the partial derivatives. By definition,
\begin{align*} f_x(0,0)&=\lim_{h\rightarrow 0}\frac{f(0+h,0)-f(0,0)}{h} =\lim_{h\rightarrow 0}\frac{\overbrace{\tfrac{h^2}{h-0}}^{h}-\,0}{h} =\lim_{h\rightarrow 0}1\\ &=1\\ f_y(0,0)&=\lim_{h\rightarrow 0}\frac{f(0,0+h)-f(0,0)}{h} =\lim_{h\rightarrow 0}\frac{\frac{0^2}{0-h}-0}{h} =\lim_{h\rightarrow 0}0\\ &=0 \end{align*}
So the first order partial derivatives \(f_x(0,0)\) and \(f_y(0,0)\) are perfectly well defined.
To see that, nonetheless, \(f(x,y)\) is not continuous at \((0,0)\text{,}\) we take the limit of \(f(x,y)\) as \((x,y)\) approaches \((0,0)\) along the curve \(y=x-x^3\text{.}\) The limit is
\begin{gather*} \lim_{x\rightarrow 0} f\big(x,x-x^3\big) =\lim_{x\rightarrow 0} \frac{x^2}{x-(x-x^3)} =\lim_{x\rightarrow 0} \frac{1}{x} \end{gather*}
which does not exist. Indeed as \(x\) approaches \(0\) through positive numbers, \(\frac{1}{x}\) approaches \(+\infty\text{,}\) and as \(x\) approaches \(0\) through negative numbers, \(\frac{1}{x}\) approaches \(-\infty\text{.}\)
So how is this possible? The answer is that \(f_x(0,0)\) only involves values of \(f(x,y)\) with \(y=0\text{.}\) As \(f(x,0)=x\text{,}\) for all values of \(x\text{,}\) we have that \(f(x,0)\) is a continuous, and indeed a differentiable, function. Similarly, \(f_y(0,0)\) only involves values of \(f(x,y)\) with \(x=0\text{.}\) As \(f(0,y)=0\text{,}\) for all values of \(y\text{,}\) we have that \(f(0,y)\) is a continuous, and indeed a differentiable, function. On the other hand, the bad behaviour of \(f(x,y)\) for \((x,y)\) near \((0,0)\) only happens for \(x\) and \(y\) both nonzero.
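The following short Python sketch (again, just an illustration) makes the discussion concrete: the difference quotients along the axes are perfectly tame, while along the curve \(y=x-x^3\) the function equals \(\frac{1}{x}\) and blows up.

```python
# Numerical illustration of Example 2.2.9.
def f(x, y):
    return x**2 / (x - y) if x != y else 0.0

# Along the axes: the first quotient is always 1, the second is always 0.
for h in [0.1, 0.01, 0.001]:
    print(h, (f(h, 0.0) - f(0.0, 0.0)) / h, (f(0.0, h) - f(0.0, 0.0)) / h)

# Along the curve y = x - x^3, f(x, y) = 1/x, which blows up near x = 0.
for x in [0.1, 0.01, -0.01, -0.1]:
    print(x, f(x, x - x**3))
```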
Our next example uses implicit differentiation.

Example 2.2.10.

The equation
\begin{equation*} z^5 + y^2 e^z +e^{2x}=0 \end{equation*}
implicitly determines \(z\) as a function of \(x\) and \(y\text{.}\) That is, the function \(z(x,y)\) obeys
\begin{equation*} z(x,y)^5 + y^2 e^{z(x,y)} +e^{2x}=0 \end{equation*}
For example, when \(x=y=0\text{,}\) the equation reduces to
\begin{equation*} z(0,0)^5=-1 \end{equation*}
which forces \(z(0,0)=-1\text{.}\) (The only real number \(z\) which obeys \(z^5=-1\) is \(z=-1\text{.}\) However there are four other complex numbers which also obey \(z^5=-1\text{.}\)) Let’s find the partial derivative \(\pdiff{z}{x}(0,0)\text{.}\)
We are not going to be able to explicitly solve the equation for \(z(x,y)\text{.}\) All we know is that
\begin{equation*} z(x,y)^5 + y^2 e^{z(x,y)} + e^{2x} =0 \end{equation*}
for all \(x\) and \(y\text{.}\) We can turn this into an equation for \(\pdiff{z}{x}(0,0)\) by differentiating (you should have already seen this technique, called implicit differentiation, in your first Calculus course; it is covered in Section 2.11 in the CLP-1 text) the whole equation with respect to \(x\text{,}\) giving
\begin{equation*} 5z(x,y)^4\ \pdiff{z}{x}(x,y) + y^2 e^{z(x,y)}\ \pdiff{z}{x}(x,y) +2e^{2x} =0 \end{equation*}
and then setting \(x=y=0\text{,}\) giving
\begin{equation*} 5z(0,0)^4\ \pdiff{z}{x}(0,0) +2 =0 \end{equation*}
As we already know that \(z(0,0)=-1\text{,}\)
\begin{equation*} \pdiff{z}{x}(0,0) = -\frac{2}{5z(0,0)^4} =-\frac{2}{5} \end{equation*}
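If the SymPy library is available, the implicit differentiation of Example 2.2.10 can be reproduced mechanically. This is only a verification sketch of the computation done by hand above.

```python
# Symbolic check of Example 2.2.10 (assumes SymPy is installed).
import sympy as sp

x, y = sp.symbols('x y')
z = sp.Function('z')(x, y)

# The implicit equation obeyed by z(x, y).
F = z**5 + y**2*sp.exp(z) + sp.exp(2*x)

# Differentiate the equation with respect to x (y held fixed) and
# solve for the partial derivative of z with respect to x.
zx = sp.solve(sp.Eq(sp.diff(F, x), 0), sp.Derivative(z, x))[0]

# Evaluate at x = y = 0, where z(0,0) = -1.
print(zx.subs(z, -1).subs({x: 0, y: 0}))    # -2/5
```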
Next we have a partial derivative disguised as a limit.

Example 2.2.11.

In this example we are going to evaluate the limit
\begin{equation*} \lim_{z\rightarrow 0}\frac{(x+y+z)^3-(x+y)^3}{(x+y)z} \end{equation*}
The critical observation is that, in taking the limit \(z\rightarrow 0\text{,}\) \(x\) and \(y\) are fixed. They do not change as \(z\) is getting smaller and smaller. Furthermore this limit is exactly of the form of the limits in the Definition 2.2.1 of partial derivative, disguised by some obfuscating changes of notation.
Set
\begin{equation*} f(x,y,z) = \frac{(x+y+z)^3}{(x+y)} \end{equation*}
Then
\begin{align*} \lim_{z\rightarrow 0}\frac{(x+y+z)^3-(x+y)^3}{(x+y)z} &=\lim_{z\rightarrow 0}\frac{f(x,y,z)-f(x,y,0)}{z}\\ &=\lim_{h\rightarrow 0}\frac{f(x,y,0+h)-f(x,y,0)}{h}\\ &=\pdiff{f}{z}(x,y,0)\\ &={\left[\pdiff{}{z}\frac{(x+y+z)^3}{x+y}\right]}_{z=0} \end{align*}
Recalling that \(\pdiff{}{z}\) treats \(x\) and \(y\) as constants, we are evaluating the derivative of a function of the form \(\frac{({\rm const}+z)^3}{\rm const}\text{.}\) So
\begin{align*} \lim_{z\rightarrow 0}\frac{(x+y+z)^3-(x+y)^3}{(x+y)z} &={\left.3\frac{(x+y+z)^2}{x+y}\right|}_{z=0}\\ &=3(x+y) \end{align*}
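As a quick numerical sketch, the quotient of Example 2.2.11 can be evaluated at a sample point for shrinking \(z\text{;}\) it approaches \(3(x+y)\text{.}\) The particular numbers below are illustrative only.

```python
# Numerical illustration of the limit in Example 2.2.11.
def quotient(x, y, z):
    return ((x + y + z)**3 - (x + y)**3) / ((x + y) * z)

x, y = 1.0, 2.0          # a sample point; the limit should be 3*(x + y) = 9
for z in [0.1, 0.01, 0.001]:
    print(z, quotient(x, y, z))
```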
The next example highlights a potentially dangerous difference between ordinary and partial derivatives.

Example 2.2.12.

In this example we are going to see that, in contrast to the ordinary derivative case, \(\pdiff{r}{x}\) is not, in general, the same as \(\big(\pdiff{x}{r}\big)^{-1}\text{.}\)
Recall that Cartesian and polar coordinates (if you are not familiar with polar coordinates, don’t worry about it; there will be an introduction to them in §3.2.1) are related, for \((x,y)\ne (0,0)\) and \(r \gt 0\text{,}\) by
\begin{align*} x&=r\cos\theta\\ y&=r\sin\theta\\ r&=\sqrt{x^2+y^2}\\ \tan\theta&=\frac{y}{x} \end{align*}
We will use the functions
\begin{equation*} x(r,\theta) = r\cos\theta\qquad \text{and}\qquad r(x,y) = \sqrt{x^2+y^2} \end{equation*}
Fix any point \((x_0,y_0)\ne (0,0)\) and let \((r_0,\theta_0)\text{,}\) \(0\le\theta_0 \lt 2\pi\text{,}\) be the corresponding polar coordinates. Then
\begin{gather*} \pdiff{x}{r}(r,\theta) = \cos\theta\qquad \pdiff{r}{x}(x,y) = \frac{x}{\sqrt{x^2+y^2}} \end{gather*}
so that
\begin{align*} \pdiff{x}{r}(r_0,\theta_0)=\left(\pdiff{r}{x}(x_0,y_0)\right)^{-1} &\iff \cos\theta_0= \left(\frac{x_0}{\sqrt{x_0^2+y_0^2}}\right)^{-1} = \left(\cos\theta_0\right)^{-1}\\ &\iff \cos^2\theta_0= 1\\ &\iff \theta_0=0,\pi \end{align*}
We can also see pictorially why this happens. By definition, the partial derivatives
\begin{align*} \pdiff{x}{r}(r_0,\theta_0) &= \lim_{\dee{r}\rightarrow 0} \frac{x(r_0+\dee{r},\theta_0) - x(r_0,\theta_0)}{\dee{r}}\\ \pdiff{r}{x}(x_0,y_0) &= \lim_{\dee{x}\rightarrow 0} \frac{r(x_0+\dee{x},y_0) - r(x_0,y_0)}{\dee{x}} \end{align*}
Here we have just renamed the \(h\) of Definition 2.2.1 to \(\dee{r}\) and to \(\dee{x}\) in the two definitions.
In computing \(\pdiff{x}{r}(r_0,\theta_0)\text{,}\) \(\theta_0\) is held fixed, \(r\) is changed by a small amount \(\dee{r}\) and the resulting \(\dee{x}=x(r_0+\dee{r},\theta_0) - x(r_0,\theta_0)\) is computed. In the figure on the left below, \(\dee{r}\) is the length of the orange line segment and \(\dee{x}\) is the length of the blue line segment.
On the other hand, in computing \(\pdiff{r}{x}\text{,}\) \(y\) is held fixed, \(x\) is changed by a small amount \(\dee{x}\) and the resulting \(\dee{r}=r(x_0+\dee{x},y_0) - r(x_0,y_0)\) is computed. In the figure on the right above, \(\dee{x}\) is the length of the pink line segment and \(\dee{r}\) is the length of the orange line segment.
Here are the two figures combined together. We have arranged that the same \(\dee{r}\) is used in both computations. In order for the \(\dee{r}\)’s to be the same in both computations, the two \(\dee{x}\)’s have to be different (unless \(\theta_0=0,\pi\)). So, in general, \(\pdiff{x}{r}(r_0,\theta_0)\ne \big(\pdiff{r}{x}(x_0,y_0)\big)^{-1}\text{.}\)
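Here is a tiny numerical sketch of Example 2.2.12 at one illustrative point: \(\pdiff{x}{r}\) and \(\big(\pdiff{r}{x}\big)^{-1}\) come out quite different when \(\theta_0\ne 0,\pi\text{.}\)

```python
# Compare dx/dr with (dr/dx)^(-1) at a sample point (Example 2.2.12).
import math

r0, theta0 = 2.0, math.pi / 3                  # illustrative polar coordinates
x0, y0 = r0 * math.cos(theta0), r0 * math.sin(theta0)

dx_dr = math.cos(theta0)                       # partial of x = r cos(theta) in r
dr_dx = x0 / math.sqrt(x0**2 + y0**2)          # partial of r = sqrt(x^2+y^2) in x

print(dx_dr, 1.0 / dr_dx)                      # 0.5 versus 2.0
```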

Example 2.2.13. Optional — Example 2.2.12, continued.

The inverse function theorem, for functions of one variable, says that, if \(y(x)\) and \(x(y)\) are inverse functions, meaning that \(y\big(x(y)\big)=y\) and \(x\big(y(x)\big)=x\text{,}\) and are differentiable with \(\diff{y}{x}\ne 0\text{,}\) then
\begin{equation*} \diff{x}{y}(y) = \frac{1}{\diff{y}{x}\big(x(y)\big)} \end{equation*}
To see this, just apply \(\diff{}{y}\) to both sides of \(y\big(x(y)\big)=y\) to get \(\diff{y}{x}\big(x(y)\big)\ \diff{x}{y}(y)=1\text{,}\) by the chain rule (see Theorem 2.9.3 in the CLP-1 text). In the CLP-1 text, we used this to compute the derivatives of the logarithm (see Theorem 2.10.1 in the CLP-1 text) and of the inverse trig functions (see Theorem 2.12.7 in the CLP-1 text).
We have just seen, in Example 2.2.12, that we can’t be too naive in extending the single variable inverse function theorem to functions of two (or more) variables. On the other hand, there is such an extension, which we will now illustrate, using Cartesian and polar coordinates. For simplicity, we’ll restrict our attention to \(x \gt 0\text{,}\) \(y \gt 0\text{,}\) or equivalently, \(r \gt 0\text{,}\) \(0 \lt \theta \lt \frac{\pi}{2}\text{.}\) The functions which convert between Cartesian and polar coordinates are
\begin{alignat*}{2} x(r,\theta)&=r\cos\theta\qquad& r(x,y)&=\sqrt{x^2+y^2}\\ y(r,\theta)&=r\sin\theta& \theta(x,y)&=\arctan\left(\frac{y}{x}\right) \end{alignat*}
The two functions on the left convert from polar to Cartesian coordinates and the two functions on the right convert from Cartesian to polar coordinates. The inverse function theorem (for functions of two variables) says that,
  • if you form the first order partial derivatives of the left hand functions into the matrix
    \begin{equation*} \left[\begin{matrix} \pdiff{x}{r}(r,\theta) & \pdiff{x}{\theta}(r,\theta) \\ \pdiff{y}{r}(r,\theta) & \pdiff{y}{\theta}(r,\theta) \end{matrix}\right] =\left[\begin{matrix} \cos\theta & -r\sin\theta \\ \sin\theta & r\cos\theta \end{matrix}\right] \end{equation*}
  • and you form the first order partial derivatives of the right hand functions into the matrix
    \begin{equation*} \left[\begin{matrix} \pdiff{r}{x}(x,y) & \pdiff{r}{y}(x,y) \\ \pdiff{\theta}{x}(x,y) & \pdiff{\theta}{y}(x,y) \end{matrix}\right] =\left[\begin{matrix} \frac{x}{\sqrt{x^2+y^2}} & \frac{y}{\sqrt{x^2+y^2}} \\ \frac{-\frac{y}{x^2}}{1+(\frac{y}{x})^2} & \frac{\frac{1}{x}}{1+(\frac{y}{x})^2} \end{matrix}\right] =\left[\begin{matrix} \frac{x}{\sqrt{x^2+y^2}} & \frac{y}{\sqrt{x^2+y^2}} \\ \frac{-y}{x^2+y^2} & \frac{x}{x^2+y^2} \end{matrix}\right] \end{equation*}
  • and if you evaluate the second matrix at \(x=x(r,\theta)\text{,}\) \(y=y(r,\theta)\text{,}\)
    \begin{equation*} \left[\begin{matrix} \pdiff{r}{x}\big(x(r,\theta),y(r,\theta)\big) & \pdiff{r}{y}\big(x(r,\theta),y(r,\theta)\big) \\ \pdiff{\theta}{x}\big(x(r,\theta),y(r,\theta)\big) & \pdiff{\theta}{y}\big(x(r,\theta),y(r,\theta)\big) \end{matrix}\right] =\left[\begin{matrix} \cos\theta & \sin\theta \\ -\frac{\sin\theta}{r} & \frac{\cos\theta}{r} \end{matrix}\right] \end{equation*}
  • and if you multiply the two matrices together (matrix multiplication is usually covered in courses on linear algebra, which you may or may not have taken; that’s why this example is optional)
    \begin{align*} &\left[\begin{matrix} \pdiff{r}{x}\big(x(r,\theta),y(r,\theta)\big) & \pdiff{r}{y}\big(x(r,\theta),y(r,\theta)\big) \\ \pdiff{\theta}{x}\big(x(r,\theta),y(r,\theta)\big) & \pdiff{\theta}{y}\big(x(r,\theta),y(r,\theta)\big) \end{matrix}\right]\ \left[\begin{matrix} \pdiff{x}{r}(r,\theta) & \pdiff{x}{\theta}(r,\theta) \\ \pdiff{y}{r}(r,\theta) & \pdiff{y}{\theta}(r,\theta) \end{matrix}\right]\\ &=\left[\begin{matrix} \cos\theta & \sin\theta \\ -\frac{\sin\theta}{r} & \frac{\cos\theta}{r} \end{matrix}\right]\ \left[\begin{matrix} \cos\theta & -r\sin\theta \\ \sin\theta & r\cos\theta \end{matrix}\right]\\\ &=\left[\begin{matrix} (\cos\theta)(\cos\theta) + (\sin\theta)(\sin\theta) &(\cos\theta)(-r\sin\theta)+(\sin\theta)(r\cos\theta) \\ (-\frac{\sin\theta}{r})(\cos\theta)+(\frac{\cos\theta}{r})(\sin\theta) & (-\frac{\sin\theta}{r})(-r\sin\theta) + (\frac{\cos\theta}{r})(r\cos\theta) \end{matrix}\right] \end{align*}
  • then the result is the identity matrix
    \begin{equation*} \left[\begin{matrix} 1 & 0 \\ 0 & 1 \end{matrix}\right] \end{equation*}
    and indeed it is!
This two variable version of the inverse function theorem can be derived by applying the derivatives \(\pdiff{}{r}\) and \(\pdiff{}{\theta}\) to the equations
\begin{align*} r\big(x(r,\theta),y(r,\theta)\big) &=r \\ \theta\big(x(r,\theta),y(r,\theta)\big) &=\theta \end{align*}
and using the two variable version of the chain rule, which we will see in §2.4.
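For readers comfortable with a computer algebra system, here is a SymPy sketch (assuming the library is available) that builds the two Jacobian matrices of Example 2.2.13 and checks that their product simplifies to the identity matrix.

```python
# Check the matrix identity of Example 2.2.13 (assumes SymPy is installed).
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
x, y = r*sp.cos(theta), r*sp.sin(theta)

# Partial derivatives of x(r,theta) and y(r,theta).
A = sp.Matrix([[sp.diff(x, r), sp.diff(x, theta)],
               [sp.diff(y, r), sp.diff(y, theta)]])

# Partial derivatives of r(x,y) and theta(x,y), then evaluated at
# x = r cos(theta), y = r sin(theta).
X, Y = sp.symbols('X Y', positive=True)
R, Theta = sp.sqrt(X**2 + Y**2), sp.atan(Y / X)
B = sp.Matrix([[sp.diff(R, X), sp.diff(R, Y)],
               [sp.diff(Theta, X), sp.diff(Theta, Y)]]).subs({X: x, Y: y})

print(sp.simplify(B * A))    # Matrix([[1, 0], [0, 1]])
```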

Exercises 2.2.2 Exercises

Exercise Group.

Exercises — Stage 1
1.
Let \(f(x,y) = e^x\cos y\text{.}\) The following table gives some values of \(f(x,y)\text{.}\)
              \(x=0\)     \(x=0.01\)   \(x=0.1\)
\(y=-0.1\)    0.99500     1.00500      1.09965
\(y=-0.01\)   0.99995     1.01000      1.10512
\(y=0\)       1.00000     1.01005      1.10517
  1. Find two different approximate values for \(\pdiff{f}{x}(0,0)\) using the data in the above table.
  2. Find two different approximate values for \(\pdiff{f}{y}(0,0)\) using the data in the above table.
  3. Evaluate \(\pdiff{f}{x}(0,0)\) and \(\pdiff{f}{y}(0,0)\) exactly.
2.
You are traversing an undulating landscape. Take the \(z\)-axis to be straight up towards the sky, the positive \(x\)-axis to be due south, and the positive \(y\)-axis to be due east. Then the landscape near you is described by the equation \(z=f(x,y)\text{,}\) with you at the point \((0,0,f(0,0))\text{.}\) The function \(f(x,y)\) is differentiable.
Suppose \(f_y(0,0) \lt 0\text{.}\) Is it possible that you are at a summit? Explain.
3. (✳).
Let
\begin{equation*} f(x,y)=\begin{cases}\frac{x^2y}{x^2+y^2}& \text{if } (x,y)\ne (0,0)\\ 0 & \text{if } (x,y)=(0,0) \end{cases} \end{equation*}
Compute, directly from the definitions,
  1. \(\displaystyle \pdiff{f}{x}(0,0)\)
  2. \(\displaystyle \pdiff{f}{y}(0,0)\)
  3. \(\displaystyle \diff{}{t} f(t,t)\Big|_{t=0}\)

Exercise Group.

Exercises — Stage 2
4.
Find all first partial derivatives of the following functions and evaluate them at the given point.
  1. \(\displaystyle f(x,y,z)=x^3y^4z^5\qquad (0,-1,-1)\)
  2. \(\displaystyle w(x,y,z)=\ln\left(1+e^{xyz}\right)\qquad (2,0,-1)\)
  3. \(\displaystyle f(x,y)=\frac{1}{\sqrt{x^2+y^2}}\qquad (-3,4)\)
5.
Show that the function \(z(x,y)=\frac{x+y}{x-y}\) obeys
\begin{equation*} x\pdiff{z}{x}(x,y)+y\pdiff{z}{y}(x,y) = 0 \end{equation*}
6. (✳).
A surface \(z(x, y)\) is defined by \(zy - y + x = \ln(xyz)\text{.}\)
  1. Compute \(\pdiff{z}{x}\text{,}\) \(\pdiff{z}{y}\) in terms of \(x\text{,}\) \(y\text{,}\) \(z\text{.}\)
  2. Evaluate \(\pdiff{z}{x}\) and \(\pdiff{z}{y}\) at \((x, y, z) = (-1, -2, 1/2)\text{.}\)
7. (✳).
Find \(\pdiff{U}{T}\) and \(\pdiff{T}{V}\) at \((1, 1, 2, 4)\) if \((T, U, V, W)\) are related by
\begin{equation*} (TU-V)^2 \ln(W-UV) = \ln 2 \end{equation*}
8. (✳).
Suppose that \(u = x^2 + yz\text{,}\) \(x = \rho r \cos(\theta)\text{,}\) \(y = \rho r \sin(\theta)\) and \(z = \rho r\text{.}\) Find \(\pdiff{u}{r}\) at the point \((\rho_0 , r_0 , \theta_0) = (2, 3, \pi/2)\text{.}\)
9.
Use the definition of the derivative to evaluate \(f_x(0,0)\) and \(f_y(0,0)\) for
\begin{equation*} f(x,y)=\begin{cases} \frac{x^2-2y^2}{x-y}&\text{if } x\ne y\\ 0&\text{if } x=y \end{cases} \end{equation*}

Exercise Group.

Exercises — Stage 3
10.
Let \(f\) be any differentiable function of one variable. Define \(z(x,y)=f(x^2+y^2)\text{.}\) Is the equation
\begin{equation*} y\pdiff{z}{x}(x,y)-x\pdiff{z}{y}(x,y) = 0 \end{equation*}
necessarily satisfied?
11.
Define the function
\begin{equation*} f(x,y)=\begin{cases}\frac{(x+2y)^2}{x+y}& \text{if } x+y\ne 0 \\ 0 &\text{if } x+y=0 \end{cases} \end{equation*}
  1. Evaluate, if possible, \(\pdiff{f}{x}(0,0)\) and \(\pdiff{f}{y}(0,0)\text{.}\)
  2. Is \(f(x,y)\) continuous at \((0,0)\text{?}\)
12.
Consider the cylinder whose base is the radius-1 circle in the \(xy\)-plane centred at \((0,0)\text{,}\) and which slopes parallel to the line in the \(yz\)-plane given by \(z=y\text{.}\)
When you stand at the point \((0,-1,0)\text{,}\) what is the slope of the surface if you look in the positive \(y\) direction? The positive \(x\) direction?