3144d50
committed
Files changed (10)

+18 −0 codes/homework5/Makefile
+11 −0 codes/homework5/README.txt
+27 −0 codes/homework5/functions.f90
+515 −0 codes/homework5/notebook/quadrature2.ipynb
+0 −0 codes/homework5/notebook/quadrature2.pdf
+313 −0 codes/homework5/notebook/quadrature2.py
+76 −0 codes/homework5/quadrature.f90
+64 −0 codes/homework5/test.f90
+262 −0 notes/homework5.rst
+1 −1 notes/homeworks.rst
codes/homework5/functions.f90
codes/homework5/notebook/quadrature2.ipynb
+ "We will first look at the Trapezoid method. This method is implemented by evaluating the function at $n$ points and then computing the areas of the trapezoids defined by the piecewise linear approximation that interpolates the function at these points. In the figure below, we are approximating the integral of the blue curve by the sum of the areas of the red trapezoids."
+ "The area of a single trapezoid is the width of the base times the average height, so between points $x_j$ and $x_{j+1}$ this gives $\\frac h 2 \\left(f(x_j) + f(x_{j+1})\\right)$. Summing over all $n-1$ intervals, each interior point appears in two trapezoids, so the total is:\n",
+ "$$ h\\left(\\frac 1 2 f(x_0) + f(x_1) + f(x_2) + \\cdots + f(x_{n-2}) + \\frac 1 2 f(x_{n-1})\\right) = h\\sum_{j=0}^{n-1} f(x_j) - \\frac h 2 \\left(f(x_0) + f(x_{n-1})\\right) = h\\sum_{j=0}^{n-1} f(x_j) - \\frac h 2 \\left(f(a) + f(b)\\right). $$\n",
+ "This can be implemented as follows (note that in Python fj[-1] refers to the last element of fj, and similarly fj[-2] would be the next to last element)."
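The implementation itself is not reproduced in this diff; a minimal sketch of such a function (the name `trapezoid` and its signature are assumptions, not the notebook's actual code) might look like:

```python
import numpy as np

def trapezoid(f, a, b, n):
    """Approximate the integral of f over [a, b] using n equally spaced points."""
    x = np.linspace(a, b, n)   # n points define n-1 subintervals
    h = x[1] - x[0]            # spacing between points
    fj = f(x)
    # h times the sum of all values, minus h/2 times the two endpoint values,
    # matching the formula above; fj[0] = f(a) and fj[-1] = f(b)
    return h * fj.sum() - 0.5 * h * (fj[0] + fj[-1])
```

For example, `trapezoid(np.sin, 0, 2, 100)` approximates $1 - \cos(2)$.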
+ "If we increase n, the number of points used, and hence decrease h, the spacing between points, we expect the error to converge to zero for reasonable functions $f(x)$.\n",
+ "The trapezoid rule is \"second order accurate\", meaning that the error goes to zero like $O(h^2)$ for a function that is sufficiently smooth (for example if its second derivative is continuous). For small $h$, the error is expected to behave like $Ch^2 + O(h^3)~$ as $h$ goes to zero, where $C$ is some constant that depends on how smooth $f$ is. \n",
+ "If we double n (and halve h) then we expect the error to go down by a factor of 4 roughly (from $Ch^2$ to $C(h/2)^2~$).\n",
+ "We can check this by trying several values of n and making a table of the errors and the ratio from one n to the next:"
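The table itself is not shown in the diff. One way to produce such a table is sketched below; the trapezoid implementation, the test integrand $\sin(x)$, and the interval $[0,2]$ are all assumptions, since the notebook's own choices do not appear here:

```python
import numpy as np

def trapezoid(f, a, b, n):
    # composite trapezoid rule with n equally spaced points
    x = np.linspace(a, b, n)
    h = x[1] - x[0]
    fj = f(x)
    return h * fj.sum() - 0.5 * h * (fj[0] + fj[-1])

a, b = 0.0, 2.0
int_true = 1 - np.cos(2)          # exact integral of sin(x) over [0, 2]

print("   n      error        ratio")
last_error = None
for n in [11, 21, 41, 81, 161]:   # each step doubles the number of subintervals
    error = abs(trapezoid(np.sin, a, b, n) - int_true)
    ratio = last_error / error if last_error is not None else np.nan
    print(f"{n:5d}  {error:12.5e}  {ratio:8.5f}")
    last_error = error
```

The printed ratios should approach 4 as $n$ grows, confirming second order accuracy.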
+ "Convergence might be easier to see in a plot. If a method is p'th order accurate then we expect the error to behave like $E\\approx Ch^p$ for some constant $C$, for small $h$. This is hard to judge from a plot with linear axes. It is much easier to see what order of accuracy we are achieving if we produce a loglog plot instead, since $E = Ch^p~$ means that $\\log E = \\log C + p\\log h$, so the errors should fall on a line with slope $p$. \n",
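As a numerical stand-in for reading the slope off a loglog plot, one can fit a line to the points $(\log h, \log E)$; this sketch again assumes the $\sin(x)$ test integrand on $[0,2]$ and a minimal trapezoid implementation, neither of which appears in the diff:

```python
import numpy as np

def trapezoid(f, a, b, n):
    # composite trapezoid rule with n equally spaced points
    x = np.linspace(a, b, n)
    h = x[1] - x[0]
    fj = f(x)
    return h * fj.sum() - 0.5 * h * (fj[0] + fj[-1])

ns = np.array([11, 21, 41, 81, 161])
hs = 2.0 / (ns - 1)               # point spacing on [0, 2] for n points
errs = np.array([abs(trapezoid(np.sin, 0.0, 2.0, n) - (1 - np.cos(2)))
                 for n in ns])
# log E = log C + p log h, so the slope of the fitted line estimates p
p, logC = np.polyfit(np.log(hs), np.log(errs), 1)
print(f"estimated order p = {p:.3f}")
```

For the trapezoid rule the estimated slope should come out close to 2.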
+ "If the function $f(x)$ is not as smooth (has larger second derivative at various places) then the accuracy with a small number of points will not be nearly as good. For example, consider the function $f_2(x) = 1 + x^3 + \\sin(kx)~~~$ where $k$ is a parameter. For large $k$ this function is very oscillatory. In order to experiment with different values of $k$, we can define a \"function factory\" that creates this function for any given $k$, and also returns the true integral over a given interval:"
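A function factory along these lines might look as follows; the name `f2_factory` and its interface are guesses, since the diff does not show the notebook's version:

```python
import numpy as np

def f2_factory(k, a, b):
    """Return f2(x) = 1 + x**3 + sin(k*x) and its exact integral over [a, b]."""
    def f2(x):
        return 1.0 + x**3 + np.sin(k * x)
    # antiderivative of f2: F(x) = x + x**4/4 - cos(k*x)/k
    def F(x):
        return x + x**4 / 4 - np.cos(k * x) / k
    int_true = F(b) - F(a)
    return f2, int_true
```

Then, for example, `f2, int_true = f2_factory(50, 0, 2)` gives an oscillatory integrand together with its exact integral for computing errors.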
+ "This doesn't look very good, but for larger values of $n$ we still see the expected convergence rate:"
+ "In this case the $O(h^2)~$ behavior does not become apparent unless we use much smaller $h$ values so that we are resolving the oscillations:"
+ "There are much better methods than the Trapezoidal rule that are not much harder to implement but get much smaller errors with the same number of function evaluations. One such method is Simpson\u2019s rule, which approximates the integral over a single interval from $x_i$ to $x_{i+1}$ by\n",
+ "$$\\int_{x_i}^{x_{i+1}} f(x)\\, dx \\approx \\frac h 6 (f(x_i) + 4f(x_{i+1/2}) + f(x_{i+1})),$$\n",
+ "Derivation: The trapezoid method is derived by approximating the function on each interval by a linear function interpolating at the two endpoints of each interval and then integrating this linear function. Simpson's method is derived by approximating the function by a quadratic function interpolating at the endpoints and the center of the interval and integrating this quadratic function."
+ "$$\\frac{h}{6}[f(x_0) + 4f(x_{1/2}) + 2f(x_1) + 4f(x_{3/2}) + 2f(x_2) + \\cdots + 2f(x_{n-2}) + 4f(x_{n-3/2}) + f(x_{n-1})].$$\n",
+ "This method is 4th order accurate, which means that on fine enough grids the error is proportional to $h^4$. Hence increasing n by a factor of 2 should decrease the error by a factor of $2^4 = 16$. Let's try it on the last function we were experimenting with:"
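A minimal composite implementation matching the formula above might look like this (the function name and signature are assumptions, not the notebook's actual code):

```python
import numpy as np

def simpson(f, a, b, n):
    """Approximate the integral of f over [a, b] with Simpson's rule
    applied on each of the n-1 subintervals defined by n points."""
    x = np.linspace(a, b, n)
    h = x[1] - x[0]
    xmid = 0.5 * (x[:-1] + x[1:])   # midpoints of the subintervals
    # endpoints get weight 1, interior points weight 2, midpoints weight 4,
    # all multiplied by h/6, as in the composite formula above
    return (h / 6) * (f(x[0]) + f(x[-1])
                      + 2 * f(x[1:-1]).sum() + 4 * f(xmid).sum())
```

Note that each call evaluates $f$ at about $2n$ points (the $n$ grid points plus the $n-1$ midpoints), roughly twice the cost of the trapezoid rule for the same $n$.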
+ "Note that the errors get smaller much faster and the ratio approaches 16. The improvement over the trapezoid method is seen more clearly if we plot the errors together:"
+ "Even though Simpson's method is derived by integrating a quadratic approximation of the function, rather than linear as with the Trapezoid Rule, in fact it also integrates a cubic exactly, as seen if we try it out with the function f1 defined at the top of this notebook. (This is because the error between the cubic and the quadratic approximation on each interval is not zero but does have integral equal to zero since it turns out to be an odd function about the midpoint.) For this reason Simpson's Rule is fourth order accurate in general rather than only third order, as one might expect when going from a linear to quadratic approximation.\n",
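The cubic-exactness claim is easy to check numerically. This sketch assumes a minimal Simpson implementation and uses $x^3$ directly rather than the notebook's f1, which is not shown in this diff:

```python
import numpy as np

def simpson(f, a, b, n):
    # composite Simpson's rule: n points, n-1 subintervals, f also
    # evaluated at the subinterval midpoints
    x = np.linspace(a, b, n)
    h = x[1] - x[0]
    xmid = 0.5 * (x[:-1] + x[1:])
    return (h / 6) * (f(x[0]) + f(x[-1]) + 2 * f(x[1:-1]).sum() + 4 * f(xmid).sum())

# Simpson's rule is built from quadratics, yet it integrates this cubic
# exactly even on a very coarse grid: the exact integral of x**3 over
# [0, 2] is 4, and the error below is at the level of rounding error.
err = abs(simpson(lambda x: x**3, 0.0, 2.0, 3) - 4.0)
print(f"error on a cubic with only 2 subintervals: {err:.2e}")
```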
codes/homework5/notebook/quadrature2.pdf
Binary file added.
codes/homework5/notebook/quadrature2.py
+# We will first look at the Trapezoid method. This method is implemented by evaluating the function at $n$ points and then computing the areas of the trapezoids defined by the piecewise linear approximation that interpolates the function at these points. In the figure below, we are approximating the integral of the blue curve by the sum of the areas of the red trapezoids.
+# The area of a single trapezoid is the width of the base times the average height, so between points $x_j$ and $x_{j+1}$ this gives $\frac h 2 \left(f(x_j) + f(x_{j+1})\right)$. Summing over all $n-1$ intervals, each interior point appears in two trapezoids, so the total is:
+# $$ h\left(\frac 1 2 f(x_0) + f(x_1) + f(x_2) + \cdots + f(x_{n-2}) + \frac 1 2 f(x_{n-1})\right) = h\sum_{j=0}^{n-1} f(x_j) - \frac h 2 \left(f(x_0) + f(x_{n-1})\right) = h\sum_{j=0}^{n-1} f(x_j) - \frac h 2 \left(f(a) + f(b)\right). $$
+# This can be implemented as follows (note that in Python fj[-1] refers to the last element of fj, and similarly fj[-2] would be the next to last element).
+# If we increase n, the number of points used, and hence decrease h, the spacing between points, we expect the error to converge to zero for reasonable functions $f(x)$.
+# The trapezoid rule is "second order accurate", meaning that the error goes to zero like $O(h^2)$ for a function that is sufficiently smooth (for example if its second derivative is continuous). For small $h$, the error is expected to behave like $Ch^2 + O(h^3)~$ as $h$ goes to zero, where $C$ is some constant that depends on how smooth $f$ is.
+# If we double n (and halve h) then we expect the error to go down by a factor of 4 roughly (from $Ch^2$ to $C(h/2)^2~$).
+# We can check this by trying several values of n and making a table of the errors and the ratio from one n to the next:
+# Convergence might be easier to see in a plot. If a method is p'th order accurate then we expect the error to behave like $E\approx Ch^p$ for some constant $C$, for small $h$. This is hard to judge from a plot with linear axes. It is much easier to see what order of accuracy we are achieving if we produce a loglog plot instead, since $E = Ch^p~$ means that $\log E = \log C + p\log h$, so the errors should fall on a line with slope $p$.
+# If the function $f(x)$ is not as smooth (has larger second derivative at various places) then the accuracy with a small number of points will not be nearly as good. For example, consider the function $f_2(x) = 1 + x^3 + \sin(kx)~~~$ where $k$ is a parameter. For large $k$ this function is very oscillatory. In order to experiment with different values of $k$, we can define a "function factory" that creates this function for any given $k$, and also returns the true integral over a given interval:
+# This doesn't look very good, but for larger values of $n$ we still see the expected convergence rate:
+# In this case the $O(h^2)~$ behavior does not become apparent unless we use much smaller $h$ values so that we are resolving the oscillations:
+# There are much better methods than the Trapezoidal rule that are not much harder to implement but get much smaller errors with the same number of function evaluations. One such method is Simpson’s rule, which approximates the integral over a single interval from $x_i$ to $x_{i+1}$ by
+# Derivation: The trapezoid method is derived by approximating the function on each interval by a linear function interpolating at the two endpoints of each interval and then integrating this linear function. Simpson's method is derived by approximating the function by a quadratic function interpolating at the endpoints and the center of the interval and integrating this quadratic function.
+# $$\frac{h}{6}[f(x_0) + 4f(x_{1/2}) + 2f(x_1) + 4f(x_{3/2}) + 2f(x_2) + \cdots + 2f(x_{n-2}) + 4f(x_{n-3/2}) + f(x_{n-1})].$$
+# This method is 4th order accurate, which means that on fine enough grids the error is proportional to $h^4$. Hence increasing n by a factor of 2 should decrease the error by a factor of $2^4 = 16$. Let's try it on the last function we were experimenting with:
+# Note that the errors get smaller much faster and the ratio approaches 16. The improvement over the trapezoid method is seen more clearly if we plot the errors together:
+# Even though Simpson's method is derived by integrating a quadratic approximation of the function, rather than linear as with the Trapezoid Rule, in fact it also integrates a cubic exactly, as seen if we try it out with the function f1 defined at the top of this notebook. (This is because the error between the cubic and the quadratic approximation on each interval is not zero but does have integral equal to zero since it turns out to be an odd function about the midpoint.) For this reason Simpson's Rule is fourth order accurate in general rather than only third order, as one might expect when going from a linear to quadratic approximation.