Differential Equations
Part 1A Michaelmas
This is the first term of the Mathematical Tripos. You should take this alongside the other first-term courses, as Cambridge designed the schedule this way for a good reason.
Course
- Course notes are here or here
- A guy on YouTube made lectures for the course based on the notes
- This full course covers some of the Boyce book, includes boundary value problems, and matches better with the 1B content we'll be doing later
- There is a problem bank with solutions
- The sample sheets
The above full course covers basically everything in the recommended reading, so we can use the SciML book lectures to fill in the discrete content like recurrence relations/difference equations and to learn Julia for the computational projects we'll have to do later.
Appropriate books
- J. Robinson An introduction to Differential Equations. Cambridge University Press, 2004
- W.E. Boyce and R.C. DiPrima Elementary Differential Equations and Boundary-Value Problems. Wiley, 2004
- G.F.Simmons Differential Equations (with applications and historical notes). McGraw-Hill 1991
- D.G. Zill and M.R. Cullen Differential Equations with Boundary Value Problems. Brooks/Cole 2001
Calculus crash course
We completely skipped scalar (single variable) calculus in the prereqs, so here is a crash course. Next term we learn Real Analysis anyway, so everything here will eventually be covered rigorously.
Exp and Log functions
Explained here (Clemson Math 1060 sec 1.3). These Clemson U lectures average ~20m and are just the right difficulty if, like most people, you've forgotten high school math: gaps will be filled in, and there are lots of constructions instead of descriptive/abstract definitions.
Logarithmic inverses: why does \(\large b^{\log_b(x)} = x\)?
- \(\large 2^{\log_2(1)} = 2^0 = 1\)
- \(\large 2^{\log_2(2)} = 2^1 = 2\)
- \(\large 2^{\log_2(3)} \approx 2^{1.6} \approx 3\)
- \(\large 2^{\log_2(4)} = 2^2 = 4\)
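A quick numeric check of the identity above, sketched in Python (stdlib only); the specific values are just illustrations:

```python
import math

# b**log_b(x) returns x for any positive x: the exponential undoes the log.
for x in [1, 2, 3, 4, 10]:
    assert math.isclose(2 ** math.log2(x), x)

# The rough exponent used in the list: log2(3) is about 1.585.
print(math.log2(3))  # ~1.585, so 2**1.585 is approximately 3
```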
A base-e log, or ln(x), is often used instead of base 2 because \(\large \log_{2}(x) = \frac{\ln(x)}{\ln(2)}\), meaning you are off only by a constant factor and can now enjoy all the nice properties \(e^x\) and ln(x) have for doing quick approximations by hand. For small inputs x, where \(x^2\) evaluates much smaller than x, \(e^x\) can be approximated with 1 + x, and ln(1 + x) can be approximated with x. To see how \(e^{\text{time}} = \text{growth}\) and ln(growth) = time, read this.
Notice how a log is a linearization of an exponent: if you log \(2^x\) then \(\log(2^x) = x\log(2)\), and you have mapped an exponential to a linear space.
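A minimal Python sketch of the two small-input approximations and the change-of-base identity above (the value 0.01 is just a sample "small x"):

```python
import math

x = 0.01                       # small enough that x**2 (0.0001) << x
print(math.exp(x))             # close to 1 + x = 1.01
print(math.log1p(x))           # log(1 + x), close to x = 0.01

# Change of base: log2(x) = ln(x)/ln(2), off only by a constant factor.
assert math.isclose(math.log2(10), math.log(10) / math.log(2))
```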
Trig Functions
If you don't remember trig at all, try this (Math 1060 sec 1.4) review of trig functions and their inverses. Skip through and remind yourself that \(\sin^{-1}\) is the inverse function, not 1/sin. Where do those square-root x/y coordinates on the unit circle come from? Explained in the Trig Boot Camp (Brown University): the two right triangles (30/60/90 and 45/45/90 degrees) are scaled down so the hypotenuse is 1 to fit on the unit circle.
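You can verify those scaled-triangle coordinates, and the inverse-vs-reciprocal point, numerically; a sketch in Python's stdlib `math` (angles in radians):

```python
import math

# The 30/60/90 and 45/45/90 triangles scaled to hypotenuse 1 give the
# familiar unit-circle coordinates (cos t, sin t).
assert math.isclose(math.sin(math.pi / 6), 1 / 2)             # 30 degrees
assert math.isclose(math.sin(math.pi / 4), math.sqrt(2) / 2)  # 45 degrees
assert math.isclose(math.sin(math.pi / 3), math.sqrt(3) / 2)  # 60 degrees

# sin^-1 is the inverse function asin, not the reciprocal 1/sin.
assert math.isclose(math.asin(1 / 2), math.pi / 6)
assert not math.isclose(math.asin(1 / 2), 1 / math.sin(1 / 2))
```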
We have just seen another linearization. Watch a few minutes of this (Wildberger Rational Trig) starting @5:56 to see motion around a nonlinear curve being mapped to the linear motion of the two axes moving back and forth.
Limits
Skim through these, if you want to do exercises pause the video before he works out an example and do it yourself.
- Limits (Math 1060 sec 2.1) introduction and secants/tangents.
- Informal definition using approximations from the left and right.
- Techniques I for computing limits.
- Techniques II we mainly need the squeeze theorem @16m.
- Infinite Limits if the y-axis/output goes to infinity.
- Limits at Infinity (Math 1060 2.5)
- Continuity crash course on continuity on an interval.
Derivatives
- Derivative (Math 1060 3.1) shows the geometry of a derivative at a point.
- Derivative as a Function, or how \(x \mapsto x^2\) has derivative \(x \mapsto 2x\) for all 'points'/inputs
- Derivative Rules the usual formulas, higher-order derivatives, \(e^x\) is its own derivative, and a remark that differentiation is a linear operator.
- L'Hopital's Rule I and II, which is very similar in spirit to the definition of big-O.
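As a quick illustration of a 0/0 limit of the kind L'Hopital's rule handles, here is a numeric sketch (the example sin(x)/x is a standard one, not from the lecture):

```python
import math

# 0/0 indeterminate form: sin(x)/x as x -> 0.  L'Hopital differentiates
# top and bottom, giving cos(x)/1, which is 1 at x = 0.
for x in [0.1, 0.01, 0.001]:
    print(x, math.sin(x) / x)  # the ratio approaches 1
```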
Watch when needed
- Product Rule (Math 1060 3.4) \((fg)' = f'g + fg'\) and \((f/g)' = (f'g - fg')/g^2\)
- Trig Derivatives covers reciprocal limits too. Lightly skim this just so you know the derivative of sin x is cos x, and that if you differentiate sin x four times it cycles back to sin x.
- Rates of Change explains real-life uses of calculus: derivatives of economic cost functions, and in physics how differentiating f(time) = position gives you velocity. You can undo the velocity derivative to recover the position function using indefinite integration.
- Chain Rule the derivative of function composition, \((f \circ g)'(x) = f'(g(x))\,g'(x)\)
- Derivatives of Log/Exp the derivative of \(e^x\) or exp(x) is itself.
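The product and chain rules above can be sanity-checked numerically with a finite-difference approximation; a sketch in Python, where `numderiv` and the sample functions are my own illustrative choices:

```python
import math

def numderiv(f, x, h=1e-6):
    # central-difference approximation to f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

x = 1.2

# Product rule: (fg)' = f'g + fg' with f = sin, g = exp
lhs = numderiv(lambda t: math.sin(t) * math.exp(t), x)
rhs = math.cos(x) * math.exp(x) + math.sin(x) * math.exp(x)
assert math.isclose(lhs, rhs, rel_tol=1e-5)

# Chain rule: (f o g)'(x) = f'(g(x)) * g'(x) with f = sin, g(t) = t**2
lhs = numderiv(lambda t: math.sin(t ** 2), x)
rhs = math.cos(x ** 2) * 2 * x
assert math.isclose(lhs, rhs, rel_tol=1e-5)
```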
Optimization to watch when needed.
- Max/Min (Math 1060 4.1) of a function.
- Mean Value Theorem the point of the MVT is to establish that a function with a positive derivative on an open interval must be increasing.
- What Derivatives Tell Us if the derivative changes sign (+/-) around a point, that point is a local max/min
- Taylor Series (Essence of Calculus 11) now you know why polynomial approximations exist and where the factorial numbers come from.
- Applied Optimization I and II examples.
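The Taylor series idea above is easy to see by summing terms; a minimal sketch (helper name `exp_taylor` is mine) using the series \(e^x = \sum_k x^k/k!\):

```python
import math

def exp_taylor(x, n):
    # sum of the first n terms x**k / k! of the Taylor series for e**x
    return sum(x ** k / math.factorial(k) for k in range(n))

# More terms -> closer to math.exp(1) = 2.718281828...
for n in [2, 4, 8, 12]:
    print(n, exp_taylor(1.0, n))
```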
Integrals
In the first sample sheet we drill integrals, so you could wait until then to watch these while working through the practice problems.
- Linear Approximation and Differentials (Math 1060 4.6) \(dy = f'(c)\,dx\), i.e. dy/dx = f'(c) at a point c.
- Indefinite integrals I and II the undo method for derivatives.
- Area Approximations to set up the definite integral abstraction.
- Riemann Sums and Definite Integral constructions.
- Fundamental Theorem of Calculus I and II: if you know 2x is the derivative of \(x^2\), then \(b^2 - a^2\) equals the integral of 2x from a to b.
- Mean Value Theorem for integrals.
- Substitution Method Part I, II and III
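The Riemann sum and Fundamental Theorem items above fit in a few lines of code; a sketch (the `riemann` helper and endpoints are illustrative choices):

```python
def riemann(f, a, b, n=100000):
    # left-endpoint Riemann sum for the integral of f from a to b
    dx = (b - a) / n
    return sum(f(a + i * dx) for i in range(n)) * dx

# FTC check: integral of 2x from a to b should be b**2 - a**2.
a, b = 1.0, 3.0
approx = riemann(lambda x: 2 * x, a, b)
exact = b ** 2 - a ** 2        # = 8.0
print(approx, exact)           # the sum tends to 8.0 as n grows
```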
Cambridge Lecture 1
We have access to the first Cambridge lecture here, where he reviews the calculus we're supposed to know. We only need limits, the derivative, and differentials. Later we go through all the notes, as they define calculus from scratch in this course.
First definition we already know: the derivative is the limit of the ratio \(\frac{f(a + h) - f(a)}{h}\) as h gets closer to 0. 'x naught' is \(x_0\) notation, meaning the same as the 'a' in f(a): a specific point/input. Here are the LH and RH limits.
Let's go through the f(x) = |x| example. Plugging into the derivative definition at 0, with f(0) = |0|, we get (|0 + h| - |0|)/h = |h|/h. Approaching from the left, h is negative, so |h| = -h and the quotient is -h/h = -1.
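The |x| example above can be checked by computing one-sided difference quotients directly; a small Python sketch:

```python
# One-sided difference quotients of f(x) = |x| at 0.
f = abs
for h in [0.1, 0.01, 0.001]:
    right = (f(0 + h) - f(0)) / h      # from the right: |h|/h = 1
    left = (f(0 - h) - f(0)) / (-h)    # from the left: |-h|/(-h) = -1
    print(h, right, left)
# the one-sided limits disagree (1 vs -1), so |x| is not differentiable at 0
```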
Try an example for little-o where f(x) = x and g(x) = \(x^2\): then f(x)/g(x) = x/\(x^2\) = 1/x, and as x goes to infinity the limit is 0.
Try an example for big-O where f(x) = x and g(x) = x: then f(x) ≤ Mg(x) holds with the constant M = 1, since the ratio f(x)/g(x) = 1 is constant, meaning g(x) bounds f(x) up to a constant factor. He remarks the notation O(g(x)) means an entire class of functions: O(x) represents all linear functions, O(\(x^2\)) all quadratic functions, O(1) all constant functions.

The condition x > X means the bound only has to hold past some threshold input X. If f(x) = x + 5, M = 1, and g(x) = \(x^2\), then for x > 3 we have f(x) ≤ Mg(x), so f(x) is in O(\(x^2\)), though that is a loose upper bound when the constant M can be adjusted to show x + 5 is already within linear O(x).

Notice the definition allows us to set the constant M to any value we want; it is included solely to eliminate constants from big-O notation. If f(x) = 1000x and g(x) = x, then f(x) is in O(x); you don't write O(1000x), as any arbitrary constant is absorbed by the definition. This means O(x) + O(x), or O(2x), is still O(x).
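The little-o and big-O examples above translate directly into numeric checks; a sketch in Python using the same f, g, M, and threshold as the text:

```python
# little-o: f(x)/g(x) = x/x**2 = 1/x -> 0 as x grows, so x is in o(x**2).
for x in [10, 1000, 100000]:
    print(x, x / x ** 2)

# big-O with a threshold: f(x) = x + 5 <= M * x**2 with M = 1 once
# x > 3, so x + 5 is in O(x**2) (a loose bound).
assert all(x + 5 <= 1 * x ** 2 for x in range(4, 1000))

# The constant absorbs scaling: 1000x <= M * x with M = 1000, so
# 1000x is still just O(x).
assert all(1000 * x <= 1000 * x for x in range(1, 1000))
```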
Example 3: if f(x) = \(x^2\), then plugging it into the derivative equation:
- \(\frac{(x + h)^2 - x^2}{h}\)
- \(\frac{x^2 + 2xh + h^2 - x^2}{h}\)
- \(\frac{2xh + h^2}{h}\)
- = 2x + h then take the limit of h to zero
- = 2x + 0
Since h is tending toward 0, the remainder term \(h^2\) shrinks faster than h itself, so \(f(x+h) - f(x) = 2xh + o(h)\); dividing by h and taking the limit leaves 2x. Try an example: if h = 0.0001 then check what \(0.0001^2\) is.
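The suggested experiment above takes one loop; a sketch of the difference quotient of \(x^2\) at x = 1 with shrinking h:

```python
# Difference quotient of x**2: (2*x*h + h**2)/h = 2*x + h.  The h**2
# remainder dies off faster than h, so the quotient tends to 2*x = 2.
x = 1.0
for h in [0.1, 0.001, 0.0001]:
    print(h, h ** 2, ((x + h) ** 2 - x ** 2) / h)
# 0.0001**2 = 1e-08, already negligible next to h itself
```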
Some rules we already know are introduced, like the chain/product rules. He says to look over the notes and see how big-O is used in the proofs.