Rick Muller, Sandia National Laboratories
version 0.6
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.
Python is the programming language of choice for many scientists, to a large degree because it offers a great deal of power for analyzing and modeling scientific data with relatively little overhead in terms of learning, installation, or development time. It is a language you can pick up in a weekend, and use for the rest of your life.
The Python Tutorial is a great place to start getting a feel for the language. To complement this material, I taught a Python Short Course years ago to a group of computational chemists, at a time when I was worried the field was moving too much in the direction of using canned software rather than developing one's own methods. I wanted to focus on what working scientists needed to be more productive: parsing the output of other programs, building simple models, experimenting with object-oriented programming, extending the language with C, and writing simple GUIs.
I'm trying to do something very similar here: to cut to the chase and focus on what scientists need. In the last year or so, the IPython Project has put together a notebook interface that I have found incredibly valuable. A large number of people have released very good IPython Notebooks that I have taken a huge amount of pleasure reading through. Some that I particularly like include:
I find IPython notebooks an easy way both to get important work done in my everyday job, as well as to communicate what I've done, how I've done it, and why it matters to my coworkers. I find myself endlessly sweeping the IPython subreddit hoping someone will post a new notebook. In the interest of putting more notebooks out into the wild for other people to use and enjoy, I thought I would try to recreate some of what I was trying to get across in the original Python Short Course, updated by 15 years of Python, Numpy, Scipy, Matplotlib, and IPython development, as well as my own experience in using Python almost every day of this time.
There are two branches of current releases in Python: the older-syntax Python 2, and the newer-syntax Python 3. This split is largely intentional: when it became clear that some non-backwards-compatible changes to the language were necessary, the Python dev team decided to go through a five-year (or so) transition, during which the new language features would be introduced while the old language was still actively maintained, to make the transition as easy as possible. We're now (2013) past the halfway point, and, IMHO, this is the first time I've seriously considered making the change to Python 3.
Nonetheless, I'm going to write these notes with Python 2 in mind, since this is the version of the language that I use in my day-to-day job and am most comfortable with. If these notes prove valuable to people, I'll be happy to rewrite them using Python 3.
With this in mind, these notes assume you have a Python distribution that includes:

Python version 2.7
Numpy, the core numerical extensions for linear algebra and multidimensional arrays
Scipy, additional libraries for scientific programming
Matplotlib, excellent plotting and graphing libraries
IPython, with the additional libraries required for the notebook interface
A good, easy-to-install option that supports Mac, Windows, and Linux, and that has all of these packages (and many more), is the Enthought Python Distribution, also known as EPD, which appears to be changing its name to Enthought Canopy. Enthought is a commercial company that supports a lot of very good work in scientific Python development and application. You can either purchase a license to use EPD, or there is also a free version that you can download and install.
Here are some other alternatives, should you not want to use EPD:
Linux: Most distributions have an installation manager. Red Hat has yum, Ubuntu has apt-get. To my knowledge, all of these packages should be available through those installers.
Mac: I use MacPorts, which has up-to-date versions of all of these packages.
Windows: The PythonXY package has everything you need: install the package, then go to Start > PythonXY > Command Prompts > IPython notebook server.
Cloud: This notebook is currently not running on the IPython notebook viewer, but will be shortly, which will allow the notebook to be viewed, though not run interactively. I'm keeping an eye on Wakari, from Continuum Analytics, which is a cloud-based IPython notebook service. Wakari appears to support free accounts as well. Continuum is a company started by some of the core Enthought Numpy/Scipy people, focusing on big data.
Continuum also supports a bundled, multiplatform Python package called Anaconda that I'll also keep an eye on.
This is a quick introduction to Python. There are lots of other places to learn the language more thoroughly. I have collected a list of useful links, including ones to other learning resources, at the end of this notebook. If you want a little more depth, Python Tutorial is a great place to start, as is Zed Shaw's Learn Python the Hard Way.
The lessons that follow make use of the IPython notebooks. There's a good introduction to notebooks in the IPython notebook documentation that even has a nice video on how to use the notebooks. You should probably also flip through the IPython tutorial in your copious free time.
Briefly, notebooks have code cells (that are generally followed by result cells) and text cells. The text cells are the stuff that you're reading now. The code cells start with "In []:" with some number generally in the brackets. If you put your cursor in the code cell and hit Shift-Enter, the code will run in the Python interpreter and the result will print out in the output cell. You can then change things around and see whether you understand what's going on. If you need to know more, see the IPython notebook documentation or the IPython tutorial.
Many of the things I used to use a calculator for, I now use Python for:
2+2
(50-5*6)/4
(If you're typing this into an IPython notebook, or otherwise using a notebook file, you hit Shift-Enter to evaluate a cell.)
There are some gotchas compared to using a normal calculator.
7/3
Python integer division, like C or Fortran integer division, truncates the remainder and returns an integer. At least it does in version 2. In version 3, Python returns a floating point number. You can get a sneak preview of this behavior in Python 2 by importing division from the __future__ module:
from __future__ import division
Alternatively, you can convert one of the integers to a floating point number, in which case the division function returns another floating point number.
7/3.
7/float(3)
In the last few lines, we have sped by a lot of things that we should stop for a moment and explore a little more fully. We've seen, however briefly, two different data types: integers, also known as whole numbers to the non-programming world, and floating point numbers, also known (incorrectly) as decimal numbers to the rest of the world.
We've also seen the first instance of an import statement. Python has a huge number of libraries included with the distribution. To keep things simple, most of these variables and functions are not accessible from a normal Python interactive session. Instead, you have to import the name. For example, there is a math module containing many useful functions. To access, say, the square root function, you can either first
from math import sqrt
and then
sqrt(81)
or you can simply import the math library itself
import math
math.sqrt(81)
You can define variables using the equals (=) sign:
width = 20
length = 30
area = length*width
area
If you try to access a variable that you haven't yet defined, you get an error:
volume
and you need to define it:
depth = 10
volume = area*depth
volume
You can name a variable almost anything you want. It needs to start with an alphabetical character or "_", and can contain alphanumeric characters plus underscores ("_"). Certain words, however, are reserved for the language:
and, as, assert, break, class, continue, def, del, elif, else, except,
exec, finally, for, from, global, if, import, in, is, lambda, not, or,
pass, print, raise, return, try, while, with, yield
Trying to define a variable using one of these will result in a syntax error:
return = 0
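Incidentally, you don't have to memorize this list: the keyword module in the standard library will print the reserved words for whatever Python version you're running. A quick check:

import keyword
print keyword.kwlist    # the full list of reserved words for this Python version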
The Python Tutorial has more on using Python as an interactive shell. The IPython tutorial makes a nice complement to this, since IPython has a much more sophisticated interactive shell.
Strings are sequences of printable characters, and can be defined using either single quotes
'Hello, World!'
or double quotes
"Hello, World!"
But not both at the same time, unless you want one of the symbols to be part of the string.
"He's a Rebel"
'She asked, "How are you today?"'
Just like the other two data objects we're familiar with (ints and floats), you can assign a string to a variable
greeting = "Hello, World!"
The print statement is often used for printing character strings:
print greeting
But it can also print data types other than strings:
print "The area is ",area
In the above snippet, the number 600 (stored in the variable "area") is converted into a string before being printed out.
You can use the + operator to concatenate strings together:
statement = "Hello," + "World!"
print statement
Don't forget the space between the strings, if you want one there.
statement = "Hello, " + "World!"
print statement
You can use + to concatenate multiple strings in a single statement:
print "This " + "is " + "a " + "longer " + "statement."
If you have a lot of words to concatenate together, there are other, more efficient ways to do this. But this is fine for linking a few strings together.
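For instance, the usual trick for joining many strings is the join() method on a separator string, which takes a list of strings (lists are introduced in the next section). A quick sketch:

words = ["This", "is", "a", "longer", "statement."]
print " ".join(words)    # join the pieces with single spaces between them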
Very often in a programming language, one wants to keep a group of similar items together. Python does this using a data type called lists.
days_of_the_week = ["Sunday","Monday","Tuesday","Wednesday","Thursday","Friday","Saturday"]
You can access members of the list using the index of that item:
days_of_the_week[2]
Python lists, like arrays in C but unlike those in Fortran, use 0 as the index of the first element of a list. Thus, in this example, element 0 is "Sunday", element 1 is "Monday", and so on. If you need to access the nth element from the end of the list, you can use a negative index. For example, the -1 element of a list is the last element:
days_of_the_week[-1]
You can add additional items to the list using the .append() command:
languages = ["Fortran","C","C++"]
languages.append("Python")
print languages
The range() command is a convenient way to make sequential lists of numbers:
range(10)
Note that range(n) starts at 0 and gives the sequential list of integers less than n. If you want to start at a different number, use range(start,stop)
range(2,8)
The lists created above with range have a step of 1 between elements. You can also give a fixed step size via a third argument:
evens = range(0,20,2)
evens
evens[3]
Lists do not have to hold the same data type. For example,
["Today",7,99.3,""]
However, it's good (but not essential) to use lists for similar objects that are somehow logically connected. If you want to group different data types together into a composite data object, it's best to use tuples, which we will learn about below.
You can find out how long a list is using the len() command:
help(len)
len(evens)
One of the most useful things you can do with lists is to iterate through them, i.e. to go through each element one at a time. To do this in Python, we use the for statement:
for day in days_of_the_week:
    print day
This code snippet goes through each element of the list called days_of_the_week and assigns it to the variable day. It then executes everything in the indented block (in this case only one line of code, the print statement) using that variable assignment. When the program has gone through every element of the list, it exits the block.
(Almost) every programming language defines blocks of code in some way. In Fortran, one uses END statements (ENDDO, ENDIF, etc.) to define code blocks. In C, C++, and Perl, one uses curly braces {} to define these blocks.
Python uses a colon (":") followed by an indented block of code to define code blocks. Everything at a deeper level of indentation is taken to be in the same block. In the above example the block was only a single line, but we could have had longer blocks as well:
for day in days_of_the_week:
    statement = "Today is " + day
    print statement
The range() command is particularly useful with the for statement to execute loops of a specified length:
for i in range(20):
    print "The square of ",i," is ",i*i
Lists and strings have something in common that you might not suspect: they can both be treated as sequences. You already know that you can iterate through the elements of a list. You can also iterate through the letters in a string:
for letter in "Sunday":
    print letter
This is only occasionally useful. Slightly more useful is the slicing operation, which you can also use on any sequence. We already know that we can use indexing to get the first element of a list:
days_of_the_week[0]
If we want the list containing the first two elements of a list, we can do this via
days_of_the_week[0:2]
or simply
days_of_the_week[:2]
If we want the last two items of the list, we can do this with negative slicing:
days_of_the_week[-2:]
which is somewhat logically consistent with negative indices accessing the last elements of the list.
You can also take a slice out of the middle of the list:
workdays = days_of_the_week[1:6]
print workdays
Since strings are sequences, you can also do this to them:
day = "Sunday"
abbreviation = day[:3]
print abbreviation
If we really want to get fancy, we can pass a third element into the slice, which specifies a step length (just like a third argument to the range() function specifies the step):
numbers = range(0,40)
evens = numbers[2::2]
evens
Note that in this example I was even able to omit the second argument, so that the slice started at 2, went to the end of the list, and took every second element, generating the list of even numbers less than 40.
We have now learned a few data types: integers and floating point numbers, strings, and lists, a container that can hold any data type. We have learned to print things out, and to iterate over items in lists. We will now learn about boolean variables, which can be either True or False.
We invariably need some concept of conditions in programming to control branching behavior, to allow a program to react differently to different situations. If it's Monday, I'll go to work, but if it's Sunday, I'll sleep in. To do this in Python, we use a combination of boolean variables, which evaluate to either True or False, and if statements, that control branching based on boolean values.
For example:
if day == "Sunday":
    print "Sleep in"
else:
    print "Go to work"
(Quick quiz: why did the snippet print "Go to work" here? What is the variable "day" set to?)
Let's take the snippet apart to see what happened. First, note the statement
day == "Sunday"
If we evaluate it by itself, as we just did, we see that it returns a boolean value, False. The "==" operator performs equality testing. If the two items are equal, it returns True, otherwise it returns False. In this case, it is comparing the string "Sunday" with whatever is stored in the variable day, which, at this point, is the other string "Saturday" (left over from the loop above). Since the two strings are not equal to each other, the test returns False.
The if statement that contains the truth test is followed by a code block (a colon followed by an indented block of code). If the boolean is true, it executes the code in that block. Since it is false in the above example, we don't see that code executed.
The first block of code is followed by an else statement, which is executed if nothing else in the above if statement is true. Since the value was false, this code is executed, which is why we see "Go to work".
You can compare any data types in Python:
1 == 2
50 == 2*25
3 < 3.14159
1 == 1.0
1 != 0
1 <= 2
1 >= 1
We see a few other boolean operators here, all of which should be self-explanatory: less than, equality, non-equality, and so on.
Particularly interesting is the 1 == 1.0 test, which is true, since even though the two objects are different data types (integer and floating point number), they have the same value. There is another boolean operator is, that tests whether two objects are the same object:
1 is 1.0
We can do boolean tests on lists as well:
[1,2,3] == [1,2,4]
[1,2,3] < [1,2,4]
Finally, note that you can also string multiple comparisons together, which can result in very intuitive tests:
hours = 5
0 < hours < 24
If statements can have elif parts ("else if"), in addition to if/else parts. For example:
if day == "Sunday":
    print "Sleep in"
elif day == "Saturday":
    print "Do chores"
else:
    print "Go to work"
Of course we can combine if statements with for loops, to make a snippet that is almost interesting:
for day in days_of_the_week:
    statement = "Today is " + day
    print statement
    if day == "Sunday":
        print " Sleep in"
    elif day == "Saturday":
        print " Do chores"
    else:
        print " Go to work"
This is something of an advanced topic, but ordinary data types have boolean values associated with them, and, indeed, in early versions of Python there was not a separate boolean object. Essentially, anything that was a 0 value (the integer or floating point 0, an empty string "", or an empty list []) was False, and everything else was True. You can see the boolean value of any data object using the bool() function.
bool(1)
bool(0)
bool(["This "," is "," a "," list"])
The Fibonacci sequence is a sequence in math that starts with 0 and 1, and then each successive entry is the sum of the previous two. Thus, the sequence goes 0,1,1,2,3,5,8,13,21,34,55,89,...
A very common exercise in programming books is to compute the Fibonacci sequence up to some number n. First I'll show the code, then I'll discuss what it is doing.
n = 10
sequence = [0,1]
for i in range(2,n): # This is going to be a problem if we ever set n <= 2!
    sequence.append(sequence[i-1]+sequence[i-2])
print sequence
Let's go through this line by line. First, we define the variable n, and set it to the integer 10. n is the length of the sequence we're going to form, and should probably have a better variable name. We then create a variable called sequence, and initialize it to the list with the integers 0 and 1 in it, the first two elements of the Fibonacci sequence. We have to create these elements "by hand", since the iterative part of the sequence requires two previous elements.
We then have a for loop over the list of integers from 2 (the next element of the list) to n (the length of the sequence). After the colon, we see a hash character "#", and then a comment noting that if we had set n to some number less than 2 we would have a problem. Comments in Python start with #, and are a good way to make notes to yourself or to a user of your code explaining why you did what you did. Better than the comment here would be to test whether the value of n is valid, and to complain if it isn't; we'll try this later.
In the body of the loop, we append to the list an integer equal to the sum of the two previous elements of the list.
After exiting the loop (ending the indentation) we then print out the whole list. That's it!
We might want to use the Fibonacci snippet with different sequence lengths. We could cut and paste the code into another cell, changing the value of n, but it's easier and more useful to make a function out of the code. We do this with the def statement in Python:
def fibonacci(sequence_length):
    "Return the Fibonacci sequence of length *sequence_length*"
    sequence = [0,1]
    if sequence_length < 1:
        print "Fibonacci sequence only defined for length 1 or greater"
        return
    if 0 < sequence_length < 3:
        return sequence[:sequence_length]
    for i in range(2,sequence_length):
        sequence.append(sequence[i-1]+sequence[i-2])
    return sequence
We can now call fibonacci() for different sequence_lengths:
fibonacci(2)
fibonacci(12)
We've introduced several new features here. First, note that the function itself is defined as a code block (a colon followed by an indented block). This is the standard way that Python delimits things. Next, note that the first line of the function is a single string. This is called a docstring, and is a special kind of comment that is often available to people using the function through the Python command line:
help(fibonacci)
If you define a docstring for all of your functions, it makes it easier for other people to use them, since they can get help on the arguments and return values of the function.
Next, note that rather than putting a comment in about what input values lead to errors, we have some testing of these values, followed by a warning if the value is invalid, and some conditional code to handle special cases.
Functions can also call themselves, something that is often called recursion. We're going to experiment with recursion by computing the factorial function. The factorial is defined for a positive integer n as
$$ n! = n(n-1)(n-2)\cdots 1 $$
First, note that we don't need to write a function at all, since this is a function built into the standard math library. Let's use the help function to find out about it:
from math import factorial
help(factorial)
This is clearly what we want.
factorial(20)
However, if we did want to write a function ourselves, we could do so recursively by noting that
$$ n! = n(n-1)! $$
The program then looks something like:
def fact(n):
    if n <= 0:
        return 1
    return n*fact(n-1)
fact(20)
Recursion can be very elegant, and can lead to very simple programs.
Before we end the Python overview, I wanted to touch on two more data structures that are very useful (and thus very common) in Python programs.
A tuple is a sequence object like a list or a string. It's constructed by grouping a sequence of objects together with commas, either without brackets, or with parentheses:
t = (1,2,'hi',9.0)
t
Tuples are like lists, in that you can access the elements using indices:
t[1]
However, tuples are immutable: you can't append to them or change their elements:
t.append(7)
t[1]=77
Tuples are useful anytime you want to group different pieces of data together in an object, but don't want to create a full-fledged class (see below) for them. For example, let's say you want the Cartesian coordinates of some objects in your program. Tuples are a good way to do this:
('Bob',0.0,21.0)
Again, it's not a necessary distinction, but one way to distinguish tuples and lists is that tuples are a collection of different things (here a name plus x and y coordinates), whereas a list is a collection of similar things, as when we want a list of several of those coordinate tuples:
positions = [
('Bob',0.0,21.0),
('Cat',2.5,13.1),
('Dog',33.0,1.2)
]
Tuples can be used when functions return more than one value. Say we wanted to compute the smallest x- and y-coordinates of the above list of objects. We could write:
def minmax(objects):
    minx = 1e20 # These are set to really big numbers
    miny = 1e20
    for obj in objects:
        name,x,y = obj
        if x < minx:
            minx = x
        if y < miny:
            miny = y
    return minx,miny
x,y = minmax(positions)
print x,y
Here we did two things with tuples you haven't seen before. First, we unpacked an object into a set of named variables using tuple assignment:
>>> name,x,y = obj
We also returned multiple values (minx,miny), which were then assigned to two other variables (x,y), again by tuple assignment. This makes what would have been complicated code in C++ rather simple.
Tuple assignment is also a convenient way to swap variables:
x,y = 1,2
y,x = x,y
x,y
Dictionaries are objects called "mappings" or "associative arrays" in other languages. Whereas a list associates an integer index with a set of objects:
mylist = [1,2,9,21]
The index in a dictionary is called the key, and the corresponding dictionary entry is the value. A dictionary can use (almost) anything as the key. Whereas lists are formed with square brackets [], dictionaries use curly brackets {}:
ages = {"Rick": 46, "Bob": 86, "Fred": 21}
print "Rick's age is ",ages["Rick"]
There's also a convenient way to create dictionaries without having to quote the keys.
dict(Rick=46,Bob=86,Fred=20)
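Dictionaries are also mutable: you can add, overwrite, and look up entries after creating one. Here is a quick sketch using the ages dictionary from above (the new name added here is made up just for the example):

ages["Sandy"] = 33         # add a brand-new entry (a made-up name for illustration)
ages["Fred"] = 22          # overwrite an existing value
print "Fred" in ages       # membership tests look at the keys; prints True
for name in ages:          # iterating over a dictionary gives its keys
    print name, ages[name]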
The len() command works on both tuples and dictionaries:
len(t)
len(ages)
We can generally understand trends in data by using a plotting program to chart it. Python has a wonderful plotting library called Matplotlib. The IPython notebook interface we are using for these notes has that functionality built in.
As an example, we have looked at two different functions, the Fibonacci function, and the factorial function, both of which grow faster than polynomially. Which one grows the fastest? Let's plot them. First, let's generate the Fibonacci sequence of length 10:
fibs = fibonacci(10)
Next, let's generate the factorials.
facts = []
for i in range(10):
    facts.append(factorial(i))
Now we use the Matplotlib function plot to compare the two.
figsize(8,6)
plot(facts,label="factorial")
plot(fibs,label="Fibonacci")
xlabel("n")
legend()
The factorial function grows much faster. In fact, you can't even see the Fibonacci sequence on this scale. It's not entirely surprising: a function where we multiply by n at each step is bound to grow faster than one where each term is just the sum of the two before it.
Let's plot these on a semilog plot so we can see them both a little more clearly:
semilogy(facts,label="factorial")
semilogy(fibs,label="Fibonacci")
xlabel("n")
legend()
There are many more things you can do with Matplotlib. We'll be looking at some of them in the sections to come. In the meantime, if you want an idea of the different things you can do, look at the Matplotlib Gallery. Rob Johansson's IPython notebook Introduction to Matplotlib is also particularly good.
There is, of course, much more to the language than I've covered here. I've tried to keep this brief enough so that you can jump in and start using Python to simplify your life and work. My own experience in learning new things is that the information doesn't "stick" unless you try and use it for something in real life.
You will no doubt need to learn more as you go. I've listed several other good references, including the Python Tutorial and Learn Python the Hard Way. Additionally, now is a good time to start familiarizing yourself with the Python Documentation, and, in particular, the Python Language Reference.
Tim Peters, one of the earliest and most prolific Python contributors, wrote the "Zen of Python", which can be accessed via the "import this" command:
import this
No matter how experienced a programmer you are, these are words to meditate on.
Numpy contains core routines for doing fast vector, matrix, and linear algebra-type operations in Python. Scipy contains additional routines for optimization, special functions, and so on. Both contain modules written in C and Fortran so that they're as fast as possible. Together, they give Python roughly the same capability that the Matlab program offers. (In fact, if you're an experienced Matlab user, there is a guide to Numpy for Matlab users just for you.)
Fundamental to both Numpy and Scipy is the ability to work with vectors and matrices. You can create vectors from lists using the array command:
array([1,2,3,4,5,6])
You can pass in a second argument to array that gives the numeric type. There are a number of types your array can hold; some of these are aliased to single-character codes. The most common ones are 'd' (double precision floating point number), 'D' (double precision complex number), and 'i' (int32). Thus,
array([1,2,3,4,5,6],'d')
array([1,2,3,4,5,6],'D')
array([1,2,3,4,5,6],'i')
To build matrices, you can use the array command with lists of lists:
array([[0,1],[1,0]],'d')
You can also form empty (zero) matrices of arbitrary shape (including vectors, which Numpy treats as one-dimensional arrays), using the zeros command:
zeros((3,3),'d')
The first argument is a tuple containing the shape of the matrix, and the second is the data type argument, which follows the same conventions as in the array command. Thus, you can make row vectors:
zeros(3,'d')
zeros((1,3),'d')
or column vectors:
zeros((3,1),'d')
There's also an identity command that behaves as you'd expect:
identity(4,'d')
as well as a ones command.
The linspace command makes a linear array of points from a starting to an ending value.
linspace(0,1)
If you provide a third argument, it takes that as the number of points in the space. If you don't provide the argument, it gives a length 50 linear space.
linspace(0,1,11)
linspace is an easy way to make coordinates for plotting. Functions in the numpy library (all of which are imported into IPython notebook) can act on an entire vector (or even a matrix) of points at once. Thus,
x = linspace(0,2*pi)
sin(x)
In conjunction with matplotlib, this is a nice way to plot things:
plot(x,sin(x))
Matrix objects act sensibly when multiplied by scalars:
0.125*identity(3,'d')
as well as when you add two matrices together. (However, the matrices have to be the same shape.)
identity(2,'d') + array([[1,1],[1,2]])
Something that confuses Matlab users is that the times (*) operator gives element-wise multiplication rather than matrix multiplication:
identity(2)*ones((2,2))
To get matrix multiplication, you need the dot command:
dot(identity(2),ones((2,2)))
dot can also do dot products (duh!):
v = array([3,4],'d')
sqrt(dot(v,v))
as well as matrix-vector products.
There are determinant, inverse, and transpose functions that act as you would suppose. Transpose can be abbreviated with ".T" at the end of a matrix object:
m = array([[1,2],[3,4]])
m.T
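For example, here is a quick sketch of the determinant, the inverse, and a matrix-vector product, using the m defined above. The det and inv functions live in numpy.linalg, and dot handles the matrix-vector case:

from numpy.linalg import det, inv
print det(m)                      # determinant of [[1,2],[3,4]] is -2
print inv(m)                      # the 2x2 inverse
print dot(m, array([1.0, 1.0]))   # matrix-vector product: [ 3.  7.]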
There's also a diag() function that takes a list or a vector and puts it along the diagonal of a square matrix.
diag([1,2,3,4,5])
We'll find this useful later on.
You can solve systems of linear equations using the solve command:
A = array([[1,1,1],[0,2,5],[2,5,-1]])
b = array([6,-4,27])
solve(A,b)
There are a number of routines to compute eigenvalues and eigenvectors; eigvalsh returns just the eigenvalues of a symmetric (Hermitian) matrix, and eigh returns both the eigenvalues and the eigenvectors:
A = array([[13,-4],[-4,7]],'d')
eigvalsh(A)
eigh(A)
Now that we have these tools in our toolbox, we can start to do some cool stuff with them. Many of the equations we want to solve in physics involve differential equations, so we want to be able to compute the derivative of a function:
$$ y' = \frac{y(x+h)-y(x)}{h} $$
We do this by discretizing the function $y(x)$ on an evenly spaced set of points $x_0, x_1, \dots, x_n$, yielding $y_0, y_1, \dots, y_n$. Using this discretization, we can approximate the derivative by
$$ y_i' \approx \frac{y_{i+1}-y_{i-1}}{x_{i+1}-x_{i-1}} $$
We can write a derivative function in Python via
def nderiv(y,x):
    "Finite difference derivative of the function f"
    n = len(y)
    d = zeros(n,'d') # assume double
    # Use centered differences for the interior points, one-sided differences for the ends
    for i in range(1,n-1):
        d[i] = (y[i+1]-y[i-1])/(x[i+1]-x[i-1])
    d[0] = (y[1]-y[0])/(x[1]-x[0])
    d[n-1] = (y[n-1]-y[n-2])/(x[n-1]-x[n-2])
    return d
Let's see whether this works for our sin example from above:
x = linspace(0,2*pi)
dsin = nderiv(sin(x),x)
plot(x,dsin,label='numerical')
plot(x,cos(x),label='analytical')
title("Comparison of numerical and analytical derivatives of sin(x)")
legend()
Pretty close!
Now that we've convinced ourselves that finite differences aren't a terrible approximation, let's see if we can use this to solve the one-dimensional harmonic oscillator.
We want to solve the time-independent Schrodinger equation
$$ -\frac{\hbar^2}{2m}\frac{\partial^2\psi(x)}{\partial x^2} + V(x)\psi(x) = E\psi(x) $$
for $\psi(x)$ when $V(x)=\frac{1}{2}m\omega^2x^2$ is the harmonic oscillator potential. We're going to use the standard trick to transform the differential equation into a matrix equation by multiplying both sides by $\psi^*(x)$ and integrating over $x$. This yields
$$ -\frac{\hbar^2}{2m}\int\psi(x)\frac{\partial^2}{\partial x^2}\psi(x)\,dx + \int\psi(x)V(x)\psi(x)\,dx = E $$
We will again use the finite difference approximation. The finite difference formula for the second derivative is
$$ y_i'' \approx \frac{y_{i+1}-2y_i+y_{i-1}}{h^2} $$
where $h$ is the spacing between grid points. We can think of the first term in the Schrodinger equation as the overlap of the wave function $\psi(x)$ with the second derivative of the wave function $\frac{\partial^2}{\partial x^2}\psi(x)$. Given the above expression for the second derivative, we can see that if we take the overlap of the states $y_1,\dots,y_n$ with the second derivative, we will only have three points where the overlap is nonzero, at $y_{i-1}$, $y_i$, and $y_{i+1}$. In matrix form, this leads to the tridiagonal Laplacian matrix, which has -2's along the main diagonal, and 1's along the diagonals just above and below it.
The second term leads to a diagonal matrix with $V(x_i)$ on the diagonal elements. Putting all of these pieces together, we get:
def Laplacian(x):
    h = x[1]-x[0] # assume uniformly spaced points
    n = len(x)
    M = -2*identity(n,'d')
    for i in range(1,n):
        M[i,i-1] = M[i-1,i] = 1
    return M/h**2
x = linspace(-3,3)
m = 1.0
ohm = 1.0
T = (-0.5/m)*Laplacian(x)
V = 0.5*(ohm**2)*(x**2)
H = T + diag(V)
E,U = eigh(H)
h = x[1]-x[0]
# Plot the Harmonic potential
plot(x,V,color='k')
for i in range(4):
    # For each of the first few solutions, plot the energy level:
    axhline(y=E[i],color='k',ls=":")
    # as well as the eigenfunction, displaced by the energy level so they don't
    # all pile up on each other:
    plot(x,-U[:,i]/sqrt(h)+E[i])
title("Eigenfunctions of the Quantum Harmonic Oscillator")
xlabel("Displacement (bohr)")
ylabel("Energy (hartree)")
We've made a couple of hacks here to get the orbitals the way we want them. First, I inserted a -1 factor before the wave functions, to fix the phase of the lowest state. The phase (sign) of a quantum wave function doesn't hold any information, only the square of the wave function does, so this doesn't really change anything.
Second, the eigenfunctions as we generate them aren't properly normalized. The reason is that finite difference isn't a real basis in the quantum mechanical sense. It's a basis of Dirac δ functions at each point; we interpret the space between the points as being "filled" by the wave function, but the finite difference basis only has the solution defined at the points themselves. We can fix this by dividing the eigenfunctions of our finite difference Hamiltonian by the square root of the grid spacing, and this gives properly normalized functions.
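Here is a quick numerical check of that claim, using the U and h already computed above: the columns returned by eigh have unit norm, so after dividing by sqrt(h) the discrete approximation to $\int|\psi|^2 dx$ should come out very close to one.

psi0 = U[:,0]/sqrt(h)    # rescaled ground-state eigenvector
print sum(psi0**2)*h     # discrete normalization integral; should be ~1.0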
The solutions to the harmonic oscillator are supposed to involve Hermite polynomials. The Wikipedia page has the HO states given by
$$ \psi_n(x) = \frac{1}{\sqrt{2^n n!}} \left(\frac{m\omega}{\pi\hbar}\right)^{1/4} \exp\left(-\frac{m\omega x^2}{2\hbar}\right) H_n\left(\sqrt{\frac{m\omega}{\hbar}}x\right) $$
Let's see whether our numerical solutions look like those. There are some special functions in the Numpy library, and some more in Scipy. Hermite polynomials are in Numpy:
from numpy.polynomial.hermite import Hermite
def ho_evec(x,n,m,ohm):
    vec = [0]*9
    vec[n] = 1
    Hn = Hermite(vec)
    return (1/sqrt(2**n*factorial(n)))*pow(m*ohm/pi,0.25)*exp(-0.5*m*ohm*x**2)*Hn(x*sqrt(m*ohm))
Let's compare the first function to our solution.
plot(x,ho_evec(x,0,1,1),label="Analytic")
plot(x,-U[:,0]/sqrt(h),label="Numeric")
xlabel('x (bohr)')
ylabel(r'$\psi(x)$')
title("Comparison of numeric and analytic solutions to the Harmonic Oscillator")
legend()
The agreement is almost exact.
We can use the subplot command to put multiple comparisons in different panes on a single plot:
phase_correction = [-1,1,1,-1,-1,1]
for i in range(6):
    subplot(2,3,i+1)
    plot(x,ho_evec(x,i,1,1),label="Analytic")
    plot(x,phase_correction[i]*U[:,i]/sqrt(h),label="Numeric")
Other than phase errors (which I've corrected with a little hack: can you find it?), the agreement is pretty good, although it gets worse the higher in energy we get, in part because we used only 50 points.
The Scipy module has many more special functions:
from scipy.special import airy,jn,eval_chebyt,eval_legendre
subplot(2,2,1)
x = linspace(-1,1)
Ai,Aip,Bi,Bip = airy(x)
plot(x,Ai)
plot(x,Aip)
plot(x,Bi)
plot(x,Bip)
title("Airy functions")
subplot(2,2,2)
x = linspace(0,10)
for i in range(4):
    plot(x,jn(i,x))
title("Bessel functions")
subplot(2,2,3)
x = linspace(-1,1)
for i in range(6):
    plot(x,eval_chebyt(i,x))
title("Chebyshev polynomials of the first kind")
subplot(2,2,4)
x = linspace(-1,1)
for i in range(6):
    plot(x,eval_legendre(i,x))
title("Legendre polynomials")
Scipy also has Jacobi, Laguerre, and Hermite polynomials, hypergeometric functions, and many others. There's a full listing at the Scipy Special Functions page.
Very often we deal with some data that we want to fit to some sort of expected behavior. Say we have the following:
raw_data = """\
3.1905781584582433,0.028208609537968457
4.346895074946466,0.007160804747670053
5.374732334047101,0.0046962988461934805
8.201284796573875,0.0004614473299618756
10.899357601713055,0.00005038370219939726
16.295503211991434,4.377451812785309e-7
21.82012847965739,3.0799922117601088e-9
32.48394004282656,1.524776208284536e-13
43.53319057815846,5.5012073588707224e-18"""
There's a section below on parsing CSV data. We'll steal the parser from that. For an explanation, skip ahead to that section. Otherwise, just assume that this is a way to parse that text into a numpy array that we can plot and do other analyses with.
data = []
for line in raw_data.splitlines():
    words = line.split(',')
    data.append(map(float,words))
data = array(data)
title("Raw Data")
xlabel("Distance")
plot(data[:,0],data[:,1],'bo')
Since we expect the data to have an exponential decay, we can plot it using a semi-log plot.
title("Raw Data")
xlabel("Distance")
semilogy(data[:,0],data[:,1],'bo')
For a pure exponential decay like this, we can fit the log of the data to a straight line. The above plot suggests this is a good approximation. Given a function
$$ y = Ae^{-ax} $$
taking the logarithm of both sides gives
$$ \log(y) = \log(A) - ax $$
Thus, if we fit the log of the data versus x, we should get a straight line whose slope is $-a$, and whose intercept gives the constant $A$ via $\log(A)$.
There's a numpy function called polyfit that will fit data to a polynomial form. We'll use this to fit to a straight line (a polynomial of order 1)
params = polyfit(data[:,0],log(data[:,1]),1)
a = params[0]
A = exp(params[1])
Let's see whether this curve fits the data.
x = linspace(1,45)
title("Raw Data")
xlabel("Distance")
semilogy(data[:,0],data[:,1],'bo')
semilogy(x,A*exp(a*x),'b-')
If we have more complicated functions, we may not be able to get away with fitting to a simple polynomial. Consider the following data:
gauss_data = """\
-0.9902286902286903,1.4065274110372852e-19
-0.7566104566104566,2.2504438576596563e-18
-0.5117810117810118,1.9459459459459454
-0.31887271887271884,10.621621621621626
-0.250997150997151,15.891891891891893
-0.1463309463309464,23.756756756756754
-0.07267267267267263,28.135135135135133
-0.04426734426734419,29.02702702702703
-0.0015939015939017698,29.675675675675677
0.04689304689304685,29.10810810810811
0.0840994840994842,27.324324324324326
0.1700546700546699,22.216216216216214
0.370878570878571,7.540540540540545
0.5338338338338338,1.621621621621618
0.722014322014322,0.08108108108108068
0.9926849926849926,-0.08108108108108646"""
data = []
for line in gauss_data.splitlines():
    words = line.split(',')
    data.append(map(float,words))
data = array(data)
plot(data[:,0],data[:,1],'bo')
This data looks more Gaussian than exponential. If we wanted to, we could use polyfit for this as well, but let's use the curve_fit function from Scipy, which can fit to arbitrary functions. You can learn more using help(curve_fit).
First define a general Gaussian function to fit to.
def gauss(x,A,a): return A*exp(a*x**2)
Now fit to it using curve_fit:
from scipy.optimize import curve_fit
params,conv = curve_fit(gauss,data[:,0],data[:,1])
x = linspace(-1,1)
plot(data[:,0],data[:,1],'bo')
A,a = params
plot(x,gauss(x,A,a),'b-')
The curve_fit routine we just used is built on top of a very good general minimization capability in Scipy. You can learn more at the scipy documentation pages.
Many methods in scientific computing rely on Monte Carlo integration, where a sequence of (pseudo) random numbers is used to approximate the integral of a function. Python has good random number generators in the standard library. The random() function gives pseudorandom numbers uniformly distributed between 0 and 1:
from random import random
rands = []
for i in range(100):
    rands.append(random())
plot(rands)
random() uses the Mersenne Twister algorithm, which is a highly regarded pseudorandom number generator. There are also functions to generate random integers, to randomly shuffle a list, and functions to pick random numbers from a particular distribution, like the normal distribution:
from random import gauss
grands = []
for i in range(100):
    grands.append(gauss(0,1))
plot(grands)
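The random-integer and shuffling helpers mentioned above look like this; a quick sketch, with a ten-card "deck" made up just for the example:

from random import randint, shuffle
print randint(1,6)    # a random integer between 1 and 6, inclusive
deck = range(10)
shuffle(deck)         # shuffles the list in place
print deck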
It is generally more efficient to generate a list of random numbers all at once, particularly if you're drawing from a non-uniform distribution. Numpy has functions to generate vectors and matrices of particular types of random distributions.
plot(rand(100))
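For example, normally distributed samples come from randn in Numpy's random module; a quick sketch in the same style as above:

from numpy.random import randn
plot(randn(100))    # 100 samples from a standard normal distribution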
One of the first programs I ever wrote was a program to compute $\pi$ by taking random numbers as x and y coordinates, and counting how many of them were in the unit circle. For example:
npts = 5000
xs = 2*rand(npts)-1
ys = 2*rand(npts)-1
r = xs**2+ys**2
ninside = (r<1).sum()
figsize(6,6) # make the figure square
title("Approximation to pi = %f" % (4*ninside/float(npts)))
plot(xs[r<1],ys[r<1],'b.')
plot(xs[r>1],ys[r>1],'r.')
figsize(8,6) # change the figsize back to 4x3 for the rest of the notebook
The idea behind the program is that the ratio of the area of the unit circle to the square that inscribes it is $\pi/4$, so by counting the fraction of the random points in the square that are inside the circle, we get increasingly good estimates to $\pi$.
The above code uses some higher level Numpy tricks to compute the radius of each point in a single line, to count how many radii are below one in a single line, and to filter the x,y points based on their radii. To be honest, I rarely write code like this: I find some of these Numpy tricks a little too cute to remember them, and I'm more likely to use a list comprehension (see below) to filter the points I want, since I can remember that.
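For the record, here is roughly what that filtering step looks like as a list comprehension, using the xs, ys, and r arrays from the snippet above; this is just a sketch of the alternative, not a replacement for the Numpy version:

inside = [(x,y) for (x,y,rad) in zip(xs,ys,r) if rad < 1]
print len(inside), "points landed inside the circle"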
As methods of computing $\pi$ go, this is among the worst. A much better method is to use Leibniz's expansion of arctan(1):
$$ \frac{\pi}{4} = \sum_k \frac{(-1)^k}{2k+1} $$
n = 100
total = 0
for k in range(n):
    total += pow(-1,k)/(2*k+1.0)
print 4*total
If you're interested in a great method, check out Ramanujan's method. This converges so fast you really need arbitrary precision math to display enough decimal places. You can do this with the Python decimal module.
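If you want to experiment, here is a sketch of Ramanujan's series evaluated with the decimal module; the precision setting and the number of terms are just illustrative choices:

from decimal import Decimal, getcontext
from math import factorial

getcontext().prec = 50    # work with 50 significant digits
total = Decimal(0)
for k in range(5):        # each term adds roughly 8 correct digits
    num = Decimal(factorial(4*k)*(1103 + 26390*k))
    den = Decimal(factorial(k))**4 * Decimal(396)**(4*k)
    total += num/den
# Ramanujan's series gives 1/pi = (2*sqrt(2)/9801)*total, so invert it:
print Decimal(9801)/(2*Decimal(2).sqrt()*total)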
Integration can be hard, and sometimes it's easier to work out a definite integral using an approximation. For example, suppose we wanted to figure out the integral:
$$ \int_0^\infty \exp(-x)\,dx = 1 $$
from numpy import sqrt
def f(x): return exp(-x)
x = linspace(0,10)
plot(x,exp(-x))
Scipy has a numerical integration routine called quad (numerical integration is sometimes called quadrature) that we can use for this:
from scipy.integrate import quad
quad(f,0,inf)
There are also 2d and 3d numerical integrators in Scipy. See the docs for more information.
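For example, a two-dimensional version of the same exponential integral can be set up with dblquad; a minimal sketch (the integrand and limits here are just an illustration):

from scipy.integrate import dblquad
# Integrate exp(-x-y) over the first quadrant; the exact answer is 1.
# dblquad takes the integrand as f(y,x), then the x limits, then the
# y limits as functions of x, and returns (value, error estimate).
print dblquad(lambda y,x: exp(-x-y), 0, inf, lambda x: 0, lambda x: inf)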
Very often we want to use FFT techniques to help obtain the signal from noisy data. Scipy has several different options for this.
from scipy.fftpack import fft,fftfreq
npts = 4000
nplot = npts/10
t = linspace(0,120,npts)
def acc(t): return 10*sin(2*pi*2.0*t) + 5*sin(2*pi*8.0*t) + 2*rand(npts)
signal = acc(t)
FFT = abs(fft(signal))
freqs = fftfreq(npts, t[1]-t[0])
subplot(211)
plot(t[:nplot], signal[:nplot])
subplot(212)
plot(freqs,20*log10(FFT),',')
show()
There are additional signal processing routines in Scipy that you can read about here.
As more and more of our day-to-day work is being done on and through computers, we increasingly have output that one program writes, often in a text file, that we need to analyze in one way or another, and potentially feed that output into another file.
Suppose we have the following output:
myoutput = """\
@ Step Energy Delta E Gmax Grms Xrms Xmax Walltime
@ ---- ---------------- -------- -------- -------- -------- -------- --------
@ 0 -6095.12544083 0.0D+00 0.03686 0.00936 0.00000 0.00000 1391.5
@ 1 -6095.25762870 -1.3D-01 0.00732 0.00168 0.32456 0.84140 10468.0
@ 2 -6095.26325979 -5.6D-03 0.00233 0.00056 0.06294 0.14009 11963.5
@ 3 -6095.26428124 -1.0D-03 0.00109 0.00024 0.03245 0.10269 13331.9
@ 4 -6095.26463203 -3.5D-04 0.00057 0.00013 0.02737 0.09112 14710.8
@ 5 -6095.26477615 -1.4D-04 0.00043 0.00009 0.02259 0.08615 20211.1
@ 6 -6095.26482624 -5.0D-05 0.00015 0.00002 0.00831 0.03147 21726.1
@ 7 -6095.26483584 -9.6D-06 0.00021 0.00004 0.01473 0.05265 24890.5
@ 8 -6095.26484405 -8.2D-06 0.00005 0.00001 0.00555 0.01929 26448.7
@ 9 -6095.26484599 -1.9D-06 0.00003 0.00001 0.00164 0.00564 27258.1
@ 10 -6095.26484676 -7.7D-07 0.00003 0.00001 0.00161 0.00553 28155.3
@ 11 -6095.26484693 -1.8D-07 0.00002 0.00000 0.00054 0.00151 28981.7
@ 11 -6095.26484693 -1.8D-07 0.00002 0.00000 0.00054 0.00151 28981.7"""
This output actually came from a geometry optimization of a silicon cluster using the NWChem quantum chemistry suite. At every step, the program computes the energy of the molecular geometry, and then changes the geometry to minimize the computed forces, until the energy converges. I obtained this output via the Unix command
% grep @ nwchem.out
since NWChem is nice enough to precede the lines that you need to monitor job progress with the '@' symbol.
We could do the entire analysis in Python; I'll show how to do this later on, but first let's focus on turning this output into a usable Python object that we can plot.
First, note that the data is entered into a multi-line string. When Python sees three quote marks """ or ''' it treats everything following as part of a single string, including newlines, tabs, and anything else, until it sees the same three quote marks (""" has to be followed by another """, and ''' has to be followed by another ''') again. This is a convenient way to quickly dump data into Python, and it also reinforces the important idea that you don't have to open a file and deal with it one line at a time. You can read everything in, and deal with it as one big chunk.
The first thing we'll do, though, is to split the big string into a list of strings, since each line corresponds to a separate piece of data. We will use the splitlines() function on the big myoutput string to break it into a new element every time it sees a newline (\n) character:
lines = myoutput.splitlines()
lines
Splitting is a big concept in text processing. We used splitlines() here, and we will use the more general split() function below to split each line into whitespace-delimited words.
We now want to do three things:

Skip over the header lines that don't contain data
Split each data line into its fields
Convert the fields we care about from strings into numbers

For this data, we really only want the Energy column, the Gmax column (which contains the maximum gradient at each step), and perhaps the Walltime column.
Since the data is now in a list of lines, we can iterate over it:
for line in lines[2:]:
    # do something with each line
    words = line.split()
Let's examine what we just did: first, we used a for loop to iterate over each line. However, we skipped the first two (the lines[2:] only takes the lines starting from index 2), since lines[0] contained the title information, and lines[1] contained only dashes.
We then split each line into chunks (which we're calling "words", even though in most cases they're numbers) using the string split() command. Here's what split does:
import string
help(string.split)
Here we're implicitly passing in the first argument (s, in the docstring) by calling the method .split() on a string object. In this instance, we're not passing in a sep character, which means that the function splits on whitespace. Let's see what that does to one of our lines:
lines[2].split()
This is almost exactly what we want. We just have to now pick the fields we want:
for line in lines[2:]:
    # do something with each line
    words = line.split()
    energy = words[2]
    gmax = words[4]
    time = words[8]
    print energy,gmax,time
This is fine for printing things out, but if we want to do something with the data, either make a calculation with it or pass it into a plotting function, we need to convert the strings into regular floating point numbers. We can use the float() command for this. We also need to save it in some form. I'll do this as follows:
data = []
for line in lines[2:]:
    # do something with each line
    words = line.split()
    energy = float(words[2])
    gmax = float(words[4])
    time = float(words[8])
    data.append((energy,gmax,time))
data = array(data)
We now have our data in a numpy array, so we can choose columns to plot:
plot(data[:,0])
xlabel('step')
ylabel('Energy (hartrees)')
title('Convergence of NWChem geometry optimization for Si cluster')
I would write the code a little more succinctly if I were doing this for myself, but this is essentially a snippet I use repeatedly.
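For what it's worth, here is a sketch of one more compact way to write the same parse, with a small helper function plus a list comprehension:

def parse_line(line):
    "Pull the energy, gmax, and walltime fields out of one @ line."
    words = line.split()
    return float(words[2]), float(words[4]), float(words[8])

data = array([parse_line(line) for line in lines[2:]])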
Suppose our data was in CSV (comma separated values) format, a format popularized by spreadsheet programs like Microsoft Excel and increasingly used as a data interchange format in big data applications. How would we parse that?
csv = """\
-6095.12544083, 0.03686, 1391.5
-6095.25762870, 0.00732, 10468.0
-6095.26325979, 0.00233, 11963.5
-6095.26428124, 0.00109, 13331.9
-6095.26463203, 0.00057, 14710.8
-6095.26477615, 0.00043, 20211.1
-6095.26482624, 0.00015, 21726.1
-6095.26483584, 0.00021, 24890.5
-6095.26484405, 0.00005, 26448.7
-6095.26484599, 0.00003, 27258.1
-6095.26484676, 0.00003, 28155.3
-6095.26484693, 0.00002, 28981.7
-6095.26484693, 0.00002, 28981.7"""
We can do much the same as before:
data = []
for line in csv.splitlines():
    words = line.split(',')
    data.append(map(float,words))
data = array(data)
There are two significant changes over what we did earlier. First, I'm passing the comma character ',' into the split function, so that it breaks to a new word every time it sees a comma. Next, to simplify things a bit, I'm using the map() command to repeatedly apply a single function (float()) to a list, and to return the output as a list.
help(map)