# Numerical Methods by Anne Greenbaum and Timothy Chartier PDF

Download Numerical Methods – this textbook provides a clear and concise exploration of standard numerical analysis topics, as well as nontraditional ones, including mathematical modeling, Monte Carlo methods, Markov chains, and fractals.

Filled with appealing examples that will motivate students, the textbook considers modern application areas, such as information retrieval and animation, and classical topics from physics and engineering. Exercises use MATLAB and promote understanding of computational results.

The book gives instructors the flexibility to emphasize different aspects (design, analysis, or computer implementation) of numerical algorithms, depending on the background and interests of students. Designed for upper-division undergraduates in mathematics or computer science classes, the textbook assumes that students have prior knowledge of linear algebra and calculus, although these topics are reviewed in the text. Short discussions of the history of numerical methods are interspersed throughout the chapters. The book also includes polynomial interpolation at Chebyshev points, use of the MATLAB package Chebfun, and a section on the fast Fourier transform. Supplementary materials are available online.

## Features of Numerical Methods

• Clear and concise exposition of standard numerical analysis topics
• Explores nontraditional topics, such as mathematical modeling and Monte Carlo methods
• Covers modern applications, including information retrieval and animation, and classical applications from physics and engineering
• Promotes understanding of computational results through MATLAB exercises
• Provides flexibility so instructors can emphasize mathematical or applied/computational aspects of numerical methods or a combination
• Includes recent results on polynomial interpolation at Chebyshev points and use of the MATLAB package Chebfun
• Short discussions of the history of numerical methods interspersed throughout
• Supplementary materials available online

## Contents of Numerical Methods

Preface
1 MATHEMATICAL MODELING
1.1 Modeling in Computer Animation
1.1.1 A Model Robe
1.2 Modeling in Physics: Radiation Transport
1.3 Modeling in Sports
1.4 Ecological Models
1.5 Modeling a Web Surfer and Google
1.5.1 The Vector Space Model
1.6 Chapter 1 Exercises
2 BASIC OPERATIONS WITH MATLAB
2.1 Launching MATLAB
2.2 Vectors
2.3 Getting Help
2.4 Matrices
2.5 Creating and Running .m Files
2.7 Plotting
2.9 Printing
2.10 More Loops and Conditionals
2.11 Clearing Variables
2.14 Chapter 2 Exercises
3 MONTE CARLO METHODS
3.1 A Mathematical Game of Cards
3.1.1 The Odds in Texas Holdem
3.2 Basic Statistics
3.2.1 Discrete Random Variables
3.2.2 Continuous Random Variables
3.2.3 The Central Limit Theorem
3.3 Monte Carlo Integration
3.3.1 Buffon’s Needle
3.3.2 Estimating π
3.3.3 Another Example of Monte Carlo Integration
3.4 Monte Carlo Simulation of Web Surfing
3.5 Chapter 3 Exercises
4 SOLUTION OF A SINGLE NONLINEAR EQUATION IN ONE UNKNOWN
4.1 Bisection
4.2 Taylor’s Theorem
4.3 Newton’s Method
4.4 Quasi-Newton Methods
4.4.1 Avoiding Derivatives
4.4.2 Constant Slope Method
4.4.3 Secant Method
4.5 Analysis of Fixed Point Methods
4.6 Fractals, Julia Sets, and Mandelbrot Sets
4.7 Chapter 4 Exercises
5 FLOATING-POINT ARITHMETIC
5.1 Costly Disasters Caused by Rounding Errors
5.2 Binary Representation and Base 2 Arithmetic
5.3 Floating-Point Representation
5.4 IEEE Floating-Point Arithmetic
5.5 Rounding
5.6 Correctly Rounded Floating-Point Operations
5.7 Exceptions
5.8 Chapter 5 Exercises
6 CONDITIONING OF PROBLEMS; STABILITY OF ALGORITHMS
6.1 Conditioning of Problems
6.2 Stability of Algorithms
6.3 Chapter 6 Exercises
7 DIRECT METHODS FOR SOLVING LINEAR SYSTEMS AND LEAST SQUARES PROBLEMS
7.1 Review of Matrix Multiplication
7.2 Gaussian Elimination
7.2.1 Operation Counts
7.2.2 LU Factorization
7.2.3 Pivoting
7.2.4 Banded Matrices and Matrices for Which Pivoting Is Not Required
7.2.5 Implementation Considerations for High Performance
7.3 Other Methods for Solving Ax = b
7.4 Conditioning of Linear Systems
7.4.1 Norms
7.4.2 Sensitivity of Solutions of Linear Systems
7.5 Stability of Gaussian Elimination with Partial Pivoting
7.6 Least Squares Problems
7.6.1 The Normal Equations
7.6.2 QR Decomposition
7.6.3 Fitting Polynomials to Data
7.7 Chapter 7 Exercises
8 POLYNOMIAL AND PIECEWISE POLYNOMIAL INTERPOLATION
8.1 The Vandermonde System
8.2 The Lagrange Form of the Interpolation Polynomial
8.3 The Newton Form of the Interpolation Polynomial
8.3.1 Divided Differences
8.4 The Error in Polynomial Interpolation
8.5 Interpolation at Chebyshev Points and chebfun
8.6 Piecewise Polynomial Interpolation
8.6.1 Piecewise Cubic Hermite Interpolation
8.6.2 Cubic Spline Interpolation
8.7 Some Applications
8.8 Chapter 8 Exercises
9 NUMERICAL DIFFERENTIATION AND RICHARDSON EXTRAPOLATION
9.1 Numerical Differentiation
9.2 Richardson Extrapolation
9.3 Chapter 9 Exercises
10 NUMERICAL INTEGRATION
10.1 Newton–Cotes Formulas
10.2 Formulas Based on Piecewise Polynomial Interpolation
10.3.1 Orthogonal Polynomials
10.5 Romberg Integration
10.6 Periodic Functions and the Euler–Maclaurin Formula
10.7 Singularities
10.8 Chapter 10 Exercises
11 NUMERICAL SOLUTION OF THE INITIAL VALUE PROBLEM FOR ORDINARY DIFFERENTIAL EQUATIONS
11.1 Existence and Uniqueness of Solutions
11.2 One-Step Methods
11.2.1 Euler’s Method
11.2.2 Higher-Order Methods Based on Taylor Series
11.2.3 Midpoint Method
11.2.4 Methods Based on Quadrature Formulas
11.2.5 Classical Fourth-Order Runge–Kutta and Runge–Kutta–Fehlberg Methods
11.2.6 An Example Using MATLAB’s ODE Solver
11.2.7 Analysis of One-Step Methods
11.2.8 Practical Implementation Considerations
11.2.9 Systems of Equations
11.3 Multistep Methods
11.3.2 General Linear m-Step Methods
11.3.3 Linear Difference Equations
11.3.4 The Dahlquist Equivalence Theorem
11.4 Stiff Equations
11.4.1 Absolute Stability
11.4.2 Backward Differentiation Formulas (BDF Methods)
11.4.3 Implicit Runge–Kutta (IRK) Methods
11.5 Solving Systems of Nonlinear Equations in Implicit Methods
11.5.1 Fixed Point Iteration
11.5.2 Newton’s Method
11.6 Chapter 11 Exercises
12 MORE NUMERICAL LINEAR ALGEBRA: EIGENVALUES AND ITERATIVE METHODS FOR SOLVING LINEAR SYSTEMS
12.1 Eigenvalue Problems
12.1.1 The Power Method for Computing the Largest Eigenpair
12.1.2 Inverse Iteration
12.1.3 Rayleigh Quotient Iteration
12.1.4 The QR Algorithm
12.2 Iterative Methods for Solving Linear Systems
12.2.1 Basic Iterative Methods for Solving Linear Systems
12.2.2 Simple Iteration
12.2.3 Analysis of Convergence
12.2.5 Methods for Nonsymmetric Linear Systems
12.3 Chapter 12 Exercises
13 NUMERICAL SOLUTION OF TWO-POINT BOUNDARY VALUE PROBLEMS
13.1 An Application: Steady-State Temperature Distribution
13.2 Finite Difference Methods
13.2.1 Accuracy
13.2.2 More General Equations and Boundary Conditions
13.3 Finite Element Methods
13.3.1 Accuracy
13.4 Spectral Methods
13.5 Chapter 13 Exercises
14 NUMERICAL SOLUTION OF PARTIAL DIFFERENTIAL EQUATIONS
14.1 Elliptic Equations
14.1.1 Finite Difference Methods
14.1.2 Finite Element Methods
14.2 Parabolic Equations
14.2.1 Semidiscretization and the Method of Lines
14.2.2 Discretization in Time
14.3 Separation of Variables
14.3.1 Separation of Variables for Difference Equations
14.4 Hyperbolic Equations
14.4.1 Characteristics
14.4.2 Systems of Hyperbolic Equations
14.4.3 Boundary Conditions
14.4.4 Finite Difference Methods
14.5 Fast Methods for Poisson’s Equation
14.5.1 The Fast Fourier Transform
14.6 Multigrid Methods
14.7 Chapter 14 Exercises
APPENDIX A REVIEW OF LINEAR ALGEBRA
A.1 Vectors and Vector Spaces
A.2 Linear Independence and Dependence
A.3 Span of a Set of Vectors; Bases and Coordinates; Dimension of a Vector Space
A.4 The Dot Product; Orthogonal and Orthonormal Sets; the Gram–Schmidt Algorithm
A.5 Matrices and Linear Equations
A.6 Existence and Uniqueness of Solutions; the Inverse; Conditions for Invertibility
A.7 Linear Transformations; the Matrix of a Linear Transformation
A.8 Similarity Transformations; Eigenvalues and Eigenvectors
APPENDIX B TAYLOR’S THEOREM IN MULTIDIMENSIONS
References
Index

## Preface – Numerical Methods

In this book we have attempted to integrate a reasonably rigorous mathematical treatment of elementary numerical analysis with motivating examples and applications as well as some historical background. It is designed for use as an upper-division undergraduate textbook for a course in numerical analysis that could be in a mathematics department, a computer science department, or a related area. It is assumed that the students have had a calculus course, and have seen Taylor’s theorem, although this is reviewed in the text. It is also assumed that they have had a linear algebra course. Parts of the material require multivariable calculus, although these parts could be omitted. Different aspects of the subject (design, analysis, and computer implementation of algorithms) can be stressed depending on the interests, background, and abilities of the students.

We begin with a chapter on mathematical modeling to make the reader aware of where numerical computing problems arise and the many uses of numerical methods. In a numerical analysis course, one might go through all or some of the applications in this chapter or one might just assign it to students to read. Next is a chapter on the basics of MATLAB, which is used throughout the book for sample programs and exercises. Another high-level language such as SAGE could be substituted, as long as it is a language that allows easy implementation of high-level linear algebra procedures such as solving a system of linear equations or computing a QR decomposition. This frees the student to concentrate on the use and behavior of these procedures rather than the details of their programming, although the major aspects of their implementation are covered in the text in order to explain proper interpretation of the results.

The next chapter is a brief introduction to Monte Carlo methods. Monte Carlo methods usually are not covered in numerical analysis courses, but they should be. They are very widely used computing techniques and demonstrate the close connection between mathematical modeling and numerical methods. The basic statistics needed to understand the results will be useful to students in almost any field that they enter. The next chapters contain more standard topics in numerical analysis: solution of a single nonlinear equation in one unknown, floating-point arithmetic, conditioning of problems and stability of algorithms, solution of linear systems and least squares problems, and polynomial and piecewise polynomial interpolation.
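A taste of the Monte Carlo chapter is its classic exercise of estimating π (section 3.3.2). The book's examples are in MATLAB; the pure-Python sketch below is our own translation of the idea (the function name and sample count are illustrative choices, not the book's):

```python
import random

def estimate_pi(n_samples, seed=0):
    """Estimate pi by sampling points uniformly in the unit square
    and counting the fraction that land inside the quarter circle."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    # area of the quarter circle is pi/4, so scale the hit fraction by 4
    return 4.0 * inside / n_samples

print(estimate_pi(100_000))  # close to 3.14159
```

As the text's statistics background explains, the error of such an estimate shrinks only like 1/sqrt(n), which is exactly the kind of "how much confidence do you have in your computed result?" question the exercises pose.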

Most of this material is standard, but we do include some recent results about the efficacy of polynomial interpolation when the interpolation points are Chebyshev points. We demonstrate the use of a MATLAB software package called chebfun that performs such interpolation, choosing the degree of the interpolating polynomial adaptively to attain a level of accuracy near the machine precision. In the next two chapters, we discuss the application of this approach to numerical differentiation and integration. We have found that the material through polynomial and piecewise polynomial interpolation can typically be covered in a quarter, while a semester course would include numerical differentiation and integration as well and perhaps some material on the numerical solution of ordinary differential equations (ODEs). Appendix A covers background material on linear algebra that is often needed for review. The remaining chapters of the book are geared towards the numerical solution of differential equations. There is a chapter on the numerical solution of the initial value problem for ordinary differential equations. This includes a short section on solving systems of nonlinear equations, which should be an easy generalization of the material on solving a single nonlinear equation, assuming that the students have had multivariable calculus.

The basic Taylor’s theorem in multidimensions is included in Appendix B. At this point in a year-long sequence, we usually cover material from the chapter entitled “More Numerical Linear Algebra,” including iterative methods for eigenvalue problems and for solving large linear systems. Next come two-point boundary value problems and the numerical solution of partial differential equations (PDEs). Here we include material on the fast Fourier transform (FFT), as it is used in fast solvers for Poisson’s equation. The FFT is also an integral part of the chebfun package introduced earlier, so we are now able to tell the reader a little more about how the polynomial interpolation procedures used there can be implemented efficiently. One can arrange a sequence in which each quarter (or semester) depends on the previous one, but it is also fairly easy to arrange independent courses for each topic.

This requires a review of MATLAB at the start of each course and usually a review of Taylor’s theorem with remainder plus a small amount of additional material from previous chapters, but the amount required from, for example, the linear algebra sections in order to cover, say, the ODE sections is small and can usually be fit into such a course.

We have attempted to draw on the popularity of mathematical modeling in a variety of new applications, such as movie animation and information retrieval, to demonstrate the importance of numerical methods, not just in engineering and scientific computing, but in many other areas as well. Through a variety of examples and exercises, we hope to demonstrate some of the many, many uses of numerical methods, while maintaining the emphasis on analysis and understanding of results. Exercises seldom consist of simply computing an answer; in most cases a computational problem is combined with a question about convergence, order of accuracy, or effects of roundoff. Always an underlying theme is, “How much confidence do you have in your computed result?” We hope that the blend of exciting new applications with old-fashioned analysis will prove a successful one. Software that is needed for some of the exercises can be downloaded from the book’s web page, via http://press.princeton.edu/titles/9763.html. Also provided on that site are most of the MATLAB codes used to produce the examples throughout the book.

Acknowledgments. The authors thank Richard Neidinger for his contributions and insights after using drafts of the text in his teaching at Davidson College. We also thank the Davidson College students who contributed ideas for improving the text, with special thanks to Daniel Orr for his contributions to the exercises. Additional exercises were contributed by Peter Blossey and Randall LeVeque of the University of Washington. We also thank Danny Kaplan of Macalester College for using an early version of the text in his classes there, and we thank Dan Goldman for information about the use of numerical methods in special effects.

## Editorial Reviews – Numerical Methods

### Review

“[Numerical Methods] is a very pleasant book, where the concepts involved are clearly explained. All chapters begin with motivating examples that give a precise idea of the methods developed. In addition, every chapter ends with an extensive collection of exercises, useful to understand the importance of the results. These are complemented by a series of exercises, designed to be performed with Matlab, useful to appreciate the behavior of the methods studied.” (European Mathematical Society)

“An instructor could assemble several different one-semester courses using this book–numerical linear algebra and interpolation, or numerical solutions of differential equations–or perhaps a two-semester sequence. This is a charming book, well worth consideration for the next numerical analysis course.”—William J. Satzer, MAA Focus

“Distinguishing features are the inclusion of many recent applications of numerical methods and the extensive discussion of methods based on Chebyshev interpolation. This book would be suitable for use in courses aimed at advanced undergraduate students in mathematics, the sciences, and engineering.” (Choice)

### Review

“This is an excellent introduction to the exciting world of numerical analysis. Fulfilling the need for a modern textbook on numerical methods, this volume has a wealth of examples that will keep students interested in the material. The mathematics is completely rigorous and I applaud the authors for doing such a marvelous job.”―Michele Benzi, Emory University

“Filled with polished details and a plethora of examples and illustrations, this ambitious and substantial text touches every standard topic of numerical analysis. The authors have done a huge amount of work and produced a major textbook for this subject.”―Lloyd N. Trefethen, University of Oxford

### Review

I teach computers to do math, so (disclaimer) I’m on the applied, not pure math, side of NA. I came across this text while compiling a Body of Knowledge entry on Spectral, Fourier and Chebyshev methods for IEEE and the International Association of Bodies of Knowledge (the 9bok dot org people who certify math BoKs). The usual “track” for advanced undergrads is Calc up to PDEs, some linear algebra, a little computer arithmetic (and maybe some of my field, Computer Algebra), then on to Engineering or Physics.

Along the way, most of us will touch Numerical Analysis. There are two distinct sides to NA: pure, as a way of defining formal proofs with “results” as much as methods, and applied: solving problems, especially using algorithms, via close approximation, guessing, brute force, iteration, and other “cheats.” The problem with many of the classic NA texts is that “applied” usually means, you guessed it, physics and engineering. Today, however, NA is as much at home with digital artists, game programmers creating physics engines, animators, Maya programmers, etc. as with physicists!

You’d think with that going on, there would be some rocking texts that are also fun. Not the case. Sadly, most of the “better” (read: understandable) texts in NA date back to the late 1980s, when there was no internet (there were about 50 websites in 1992, when Clinton was elected). In fact, this author’s book on iterative methods for linear systems dates back to 1987, and John Boyd’s classic on Fourier spectral methods to 1989.

This text changes a lot of that! The authors use a LOT of current examples drawn from many other fields, from protein folding to NASCAR. Who uses computers to “guess” at difficult PDE solutions other than astrophysicists? Try neurologists modeling cognition as dynamic systems! Yes, the applications today are way beyond what they were in 1987, and we finally have an NA text that covers not only the basics, but MANY cutting-edge areas, like fractals, that weren’t even taken seriously back then.

To be fair, some of the examples just give a “taste” of the field, and were filled in by experts, but not really used in the text, and apparently not really understood by the authors. For example, Dan Goldman was tapped to give some fun examples of collision detection for Yoda in Star Wars, but if you look up Inverse Kinematics or Kinematics in the Index, there is no mention. Lorentz transforms and dynamic analysis are not really covered, and when their NA engines are mentioned (Newton’s Method, for example), they are in the context of Julia sets and fractals, not Kinematics.

If you haven’t taken linear algebra and aren’t very familiar with matrices, there is “some” review here, but not enough to make this text fun and painless. You really need to brush up on LA before tackling this. Look at it this way: you can think of many NA techniques as you would the Visual Basic behind Excel. Your computer is doing a lot of “spreadsheet”-type crunching, only on functions, to “guesstimate” things like zeros/roots, parameters, etc. So what are these “spreadsheets” called by mathematicians? Right: linear algebra (on roids). Back in ancient times (the ’90s) NA was considered the playground of mathematicians, engineers and physicists. Today, game programmers, animators, digital artists and even programmers like yours truly need to understand it to know what’s going on underneath the calculations: namely, crunching, right down to the stacks and registers.

Another group that will like this text are the embedded-circuit folks: instead of nail-biting about on- or off-chip memory limits, many of the newer memory-limit workarounds are in NA functions, algorithms and shortcuts. Sure, we’ll eventually have to SOLVE the memory issues, but for now, the real world IS about working around them with “close enough” solutions. You won’t find any of these applications in most NA texts; the present work is a gem, and unique in being up to date on many NOW applications, including several beyond the traditional physics and engineering examples. Oh, and yes, you do learn analysis here too, including the proofs and pure-math sides if your track is math. I’m not qualified to opine on that “pure proof” track, but if you’re on the applied side of NA, and want to go deeper, you’ll love this text.

History note for a few emailers: Thanks for reminding me that 1987 is “recent” compared to many NA techniques that adapt Euler to algorithmic form; by “recent” I also mean that these authors use examples like web surfing and Google’s (secret sauce) analytics. I’m talking examples too, not just the fact (which I’ll gladly grant) that much of NA stands on the shoulders of giants going back to the 1700s. I challenge anyone to show me an NA text that is this relevant to today’s applications, however! If you’re a student, you also won’t feel like you’re being forced to study stuff that will have no relevance to your future. If you’re a prof: don’t you want to orient your students via examples that are being used right now? They WILL thank you!

### Review

I am an undergraduate student in Applied Mathematics who just used this text in a course on Numerical Analysis in one of the last courses I’m taking before moving on to grad school for Computer Science.

As a previous reviewer noted, this text really goes out of its way to motivate the reader with a bit of a firehose approach to introducing all the different ways in which this material can be applied to computing problems, from graphics processing to machining to airfoil simulation to web search. This field of study may have been developed centuries ago largely by astrophysicists, but the application goes way beyond that with the widespread availability of computers and their ability to implement algorithms that involve too many steps to complete by hand.

The complete chapter list is

1) Mathematical Modeling
2) Basic Operations with MATLAB
3) Monte Carlo Methods
4) Solution of a Single Nonlinear Equation in One Unknown
5) Floating-Point Arithmetic
6) Conditioning of Problems; Stability of Algorithms
7) Direct Methods for Solving Linear Systems and Least Squares Problems
8) Polynomial and Piecewise Polynomial Interpolation
9) Numerical Differentiation and Richardson Extrapolation
10) Numerical Integration
11) Numerical Solution of the Initial Value Problem for Ordinary Differential Equations
12) More Numerical Linear Algebra: Eigenvalues and Iterative Methods for Solving Linear Systems
13) Numerical Solution of Two-Point Boundary Value Problems
14) Numerical Solution of Partial Differential Equations

Appendices:

A) Review of Linear Algebra
B) Taylor’s Theorem in Multidimensions

First off, the prerequisites review is not going to help you unless you already learned linear algebra and calculus (at least up to power series) in detail in the past. If you’re purchasing this for a university course, they should be enforcing prerequisites anyway. If you’re purchasing this for self-study, be aware that it’s not teaching you much math. Finding polynomial roots, solving linear systems, solving least squares problems, differentiation, integration, and solving differential equations should be something you already know how to do. This text is simply showing you how to do it with a computer when an analytic solution is either not available or otherwise too difficult. Additionally, you’ll probably be a little lost without a background in probability in the Monte Carlo chapter, but that chapter really does not tie in with the rest of the text and can be easily omitted (we covered it, but the professor didn’t expect much and was clear in stating this is a non-traditional topic for a text in numerical analysis).

The upside of this text is that it is extremely readable. Derivations, theorems, and proofs are integrated with narrative and are given a clear explanation. It came across a bit dense at first, but as the error analysis for each algorithm relies upon the order of the remainder term in a Taylor series expansion, the proofs should eventually click given the similarity from one to the next. For the most part, the algorithms themselves are not very complicated. Figuring out how to code some of the solvers for linear systems took me a few hours, though I think that had more to do with learning specific language features than anything about the algorithm itself. The harder part is the error analysis, and you really need to slow down, write it all down and figure out what’s happening before you move on. There isn’t much complexity analysis, and what little there is should not be challenging if you have any background in more general algorithmic analysis/discrete math.
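The Taylor-remainder pattern the reviewer describes is easy to see numerically. As an illustration (in Python rather than the book's MATLAB, with our own function names): a one-sided difference quotient has O(h) error while the centered one has O(h^2), so shrinking h by 10 should shrink their errors by roughly 10x and 100x respectively.

```python
import math

def forward_diff(f, x, h):
    # one-sided difference: Taylor's theorem gives an O(h) error term
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h):
    # centered difference: the O(h) terms cancel, leaving O(h^2)
    return (f(x + h) - f(x - h)) / (2.0 * h)

x, exact = 1.0, math.cos(1.0)          # d/dx sin(x) = cos(x)
for h in (1e-2, 1e-3):
    err_f = abs(forward_diff(math.sin, x, h) - exact)
    err_c = abs(central_diff(math.sin, x, h) - exact)
    print(h, err_f, err_c)
```

Once you have seen one such argument worked out, the error analyses for interpolation, quadrature, and ODE solvers really do follow the same template, which is the "click" the review mentions.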

As a quick note for students who are not using MATLAB, beware some of the specific low-level features of the language you’re using. Undergrad math departments like to use computer algebra systems, and these are very nice for symbolic computation, but they often don’t work the same. We used Maple in my course, and by default, Maple uses software floating-point numbers and performs much of the computation in memory, not in the processor ALU, and this changes the error analysis as software floats don’t adhere to the IEEE double-precision standard the text discusses and are not limited by machine precision. You can get identical results to those in the text, with identical error analysis, but only if you explicitly tell the software to use hardware floating-point representations. You may need to dig into the documentation to figure out how to do this.
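For contrast, Python floats (like NumPy's default dtype and Octave's doubles) are hardware IEEE 754 double precision, so results there line up directly with the machine-precision analysis in the text. A quick sanity check:

```python
import sys

# Python floats are IEEE 754 doubles, so the unit roundoff is 2**-52
eps = sys.float_info.epsilon
print(eps)                 # 2.220446049250313e-16, i.e. 2**-52

# classic symptom of finite precision: adding half an ulp to 1.0 is lost
print(1.0 + eps > 1.0)     # True: eps is large enough to register
print(1.0 + eps / 2 == 1.0)  # True: 2**-53 rounds back to 1.0
```

If your software reproduces these results, its arithmetic matches the text's assumptions; if not (as with Maple's software floats at default settings), expect the error constants in your experiments to differ.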

Downsides: This may or may not be a downside depending upon what you’re looking for, but this is very much a university textbook. It’s teaching you how to think like a numerical analyst. The code snippets are nice, but this is not a reference or a cookbook. Much of the code consists of fragments; you can copy and paste it straight into a MATLAB script and it won’t run. You’re not going to pick up this book and instantly learn how to do something. You’re going to struggle to learn something. But you will learn. There’s a fair amount (arguably too much) of MATLAB-specific discussion: for instance, how to use chebfun to find Chebyshev nodes when interpolating high-degree polynomials. You’re on your own figuring out how to do this if you don’t have MATLAB or some comparable numerical computing software. Note that Octave and NumPy have much of the same functionality and are free, so if you can’t get student MATLAB, you can still follow along.

Note that there is also no solution manual available yet and no answers to any of the exercises. This may or may not make a difference to you. If you’re in a course being taught by a professor, the professor will grade and provide feedback and answer homework questions. If you’re purchasing this for self-study, be aware. If you have MATLAB or Maple, or know how to use NumPy, it’s really easy to use built-in functions to check for the true solution of a linear system or a least squares problem and then see if your code worked by comparison. If you don’t have those tools or don’t know how to use them, however, the text itself will not help you.
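The checking workflow described above can be sketched in plain Python (the book's own code is MATLAB, and the solver and names below are our illustration, not the book's): write your own Gaussian elimination with partial pivoting, then verify it by checking the residual of the computed solution.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems.
    A is a list of rows; b is the right-hand side. Returns x with Ax = b."""
    n = len(A)
    # build the augmented matrix so row swaps carry b along
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        # partial pivoting: bring up the row with the largest |pivot|
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    # back substitution on the resulting upper triangular system
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

A = [[2.0, 1.0, -1.0], [-3.0, -1.0, 2.0], [-2.0, 1.0, 2.0]]
b = [8.0, -11.0, -3.0]
x = solve(A, b)          # exact solution of this system is (2, 3, -1)
# check the answer the way the review suggests: the residual max|Ax - b|
residual = max(abs(sum(a * xi for a, xi in zip(row, x)) - bi)
               for row, bi in zip(A, b))
print(x, residual)
```

With NumPy you would instead compare against `numpy.linalg.solve(A, b)`; either way, a residual near machine precision is the self-study substitute for an answer key.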

The bottom line is I would not recommend this for self-study unless you already have a math, CS, or engineering background and simply never covered numerical analysis. You need to have the prerequisite background, which is at least a full year of calculus and a full term of linear algebra. Ideally, you’ve had Calc III and some background in programming and algorithmic complexity analysis, but these are not completely necessary. The coverage of each topic is brief and you will not find it difficult if you have the background. You’re not going to get very deep into how to find eigenvalues of large matrices. This book is simply the next step when you’ve already learned calculus and linear algebra: to begin considering the engineering problems faced when you implement techniques on a computer and now have to worry about conditioning and stability and floating-point arithmetic. It gives you the tools to understand the sources of error and to properly weigh trade-offs using rigorous quantification.