Rick Muller, Sandia National Laboratories

version 0.6

This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

Python is the programming language of choice for many scientists, to a large degree because it offers a great deal of power for analyzing and modeling scientific data with relatively little overhead in terms of learning, installation, or development time. It is a language you can pick up in a weekend and use for the rest of your life.

The Python Tutorial is a great place to start getting a feel for the language. To complement this material, I taught a Python Short Course years ago to a group of computational chemists during a time that I was worried the field was moving too much in the direction of using canned software rather than developing one's own methods. I wanted to focus on what working scientists needed to be more productive: parsing output of other programs, building simple models, experimenting with object oriented programming, extending the language with C, and simple GUIs.

I'm trying to do something very similar here, to cut to the chase and focus on what scientists need. In the last year or so, the IPython Project has put together a notebook interface that I have found incredibly valuable. A large number of people have released very good IPython Notebooks that I have taken a huge amount of pleasure reading through. Some that I particularly like include:

- Rob Johansson's excellent notebooks, including Scientific Computing with Python and Computational Quantum Physics with QuTiP lectures;
- XKCD style graphs in matplotlib;
- A collection of Notebooks for using IPython effectively
- A gallery of interesting IPython Notebooks

I find IPython notebooks an easy way both to get important work done in my everyday job, as well as to communicate what I've done, how I've done it, and why it matters to my coworkers. I find myself endlessly sweeping the IPython subreddit hoping someone will post a new notebook. In the interest of putting more notebooks out into the wild for other people to use and enjoy, I thought I would try to recreate some of what I was trying to get across in the original Python Short Course, updated by 15 years of Python, Numpy, Scipy, Matplotlib, and IPython development, as well as my own experience in using Python almost every day of this time.

There are two branches of current releases in Python: the older-syntax Python 2, and the newer-syntax Python 3. This schizophrenia is largely intentional: when it became clear that some non-backwards-compatible changes to the language were necessary, the Python dev team decided to go through a five-year (or so) transition, during which the new language features would be introduced while the old language was still actively maintained, to make the transition as easy as possible. We're now (2013) past the halfway point, and, IMHO, this is the first time I've seriously considered making the change to Python 3.

Nonetheless, I'm going to write these notes with Python 2 in mind, since this is the version of the language that I use in my day-to-day job and am most comfortable with. If these notes prove valuable to people, I'll be happy to rewrite them using Python 3.

With this in mind, these notes assume you have a Python distribution that includes:

- Python version 2.7;
- Numpy, the core numerical extensions for linear algebra and multidimensional arrays;
- Scipy, additional libraries for scientific programming;
- Matplotlib, excellent plotting and graphing libraries;
- IPython, with the additional libraries required for the notebook interface.

A good, easy-to-install option that supports Mac, Windows, and Linux, and that has all of these packages (and much more), is the Enthought Python Distribution, also known as EPD, which appears to be changing its name to Enthought Canopy. Enthought is a commercial company that supports a lot of very good work in scientific Python development and application. You can either purchase a license to use EPD, or there is also a free version that you can download and install.

Here are some other alternatives, should you not want to use EPD:

**Linux** Most distributions have an installation manager. Redhat has yum, Ubuntu has apt-get. To my knowledge, all of these packages should be available through those installers.

**Mac** I use Macports, which has up-to-date versions of all of these packages.

**Windows** The PythonXY package has everything you need: install the package, then go to Start > PythonXY > Command Prompts > IPython notebook server.

**Cloud** This notebook is currently not running on the IPython notebook viewer, but will be shortly, which will allow the notebook to be viewed, though not run interactively. I'm keeping an eye on Wakari, from Continuum Analytics, which is a cloud-based IPython notebook service. Wakari appears to support free accounts as well. Continuum is a company started by some of the core Enthought Numpy/Scipy people, focusing on big data.

Continuum also supports a bundled, multiplatform Python package called Anaconda that I'll also keep an eye on.

This is a quick introduction to Python. There are lots of other places to learn the language more thoroughly. I have collected a list of useful links, including ones to other learning resources, at the end of this notebook. If you want a little more depth, Python Tutorial is a great place to start, as is Zed Shaw's Learn Python the Hard Way.

The lessons that follow make use of the IPython notebooks. There's a good introduction to notebooks in the IPython notebook documentation that even has a nice video on how to use the notebooks. You should probably also flip through the IPython tutorial in your copious free time.

Briefly, notebooks have code cells (that are generally followed by result cells) and text cells. The text cells are the stuff that you're reading now. The code cells start with "In []:" with some number generally in the brackets. If you put your cursor in the code cell and hit Shift-Enter, the code will run in the Python interpreter and the result will print out in the output cell. You can then change things around and see whether you understand what's going on. If you need to know more, see the IPython notebook documentation or the IPython tutorial.

Many of the things I used to use a calculator for, I now use Python for:

In [1]:

```
2+2
```

Out[1]:

In [2]:

```
(50-5*6)/4
```

Out[2]:

There are some gotchas compared to using a normal calculator.

In [3]:

```
7/3
```

Out[3]:

Python integer division, like C or Fortran integer division, truncates the remainder and returns an integer. At least it does in version 2. In version 3, Python returns a floating point number. You can get a sneak preview of this feature in Python 2 by importing the module from the future features:

`from __future__ import division`
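Until then, the floor-division operator `//` sidesteps the ambiguity entirely: it behaves the same way in Python 2 and Python 3. A quick sketch:

```python
# Floor division (//) behaves identically in Python 2 and Python 3:
# it discards the fractional part, rounding toward negative infinity.
q = 7 // 3         # 2
r = 7 % 3          # 1, the remainder
reconstructed = q * 3 + r
```

Together, `//` and the remainder operator `%` reconstruct the original numerator, which is a handy sanity check.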

In [4]:

```
7/3.
```

Out[4]:

In [5]:

```
7/float(3)
```

Out[5]:

In the last few lines, we have sped by a lot of things that we should stop for a moment and explore a little more fully. We've seen, however briefly, two different data types: **integers**, also known as *whole numbers* to the non-programming world, and **floating point numbers**, also known (incorrectly) as *decimal numbers* to the rest of the world.
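If you're ever unsure which kind of number you have, the built-in **type()** function will tell you. A quick sketch:

```python
# type() reports the data type of any object.
int_name = type(2).__name__        # 'int'
float_name = type(2.0).__name__    # 'float'
promoted = type(2 + 2.0).__name__  # mixing the two promotes the result to 'float'
```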

We've also seen the first instance of an **import** statement. Python has a huge number of libraries included with the distribution. To keep things simple, most of these variables and functions are not accessible from a normal Python interactive session. Instead, you have to import the name. For example, there is a **math** module containing many useful functions. To access, say, the square root function, you can either first

```
from math import sqrt
```

and then

In [6]:

```
sqrt(81)
```

Out[6]:

or you can simply import the math library itself

In [7]:

```
import math
math.sqrt(81)
```

Out[7]:

You can define variables using the equals (=) sign:

In [8]:

```
width = 20
length = 30
area = length*width
area
```

Out[8]:

If you try to access a variable that you haven't yet defined, you get an error:

In [9]:

```
volume
```

and you need to define it:

In [ ]:

```
depth = 10
volume = area*depth
volume
```

You can name a variable *almost* anything you want. It needs to start with an alphabetical character or an underscore ("_"), and can contain alphanumeric characters plus underscores. Certain words, however, are reserved for the language:

```
and, as, assert, break, class, continue, def, del, elif, else, except,
exec, finally, for, from, global, if, import, in, is, lambda, not, or,
pass, print, raise, return, try, while, with, yield
```

Trying to define a variable using one of these will result in a syntax error:

In [10]:

```
return = 0
```
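You don't have to memorize this list: the standard **keyword** module knows the reserved words for whichever interpreter you're running. A quick sketch (the exact list varies slightly between Python versions):

```python
import keyword

# keyword.kwlist holds the reserved words for the running interpreter,
# and keyword.iskeyword() tests a single name.
reserved = keyword.kwlist
is_reserved = keyword.iskeyword("return")   # True
is_ordinary = keyword.iskeyword("sqrt")     # False
```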

Strings are lists of printable characters, and can be defined using either single quotes

In [11]:

```
'Hello, World!'
```

Out[11]:

or double quotes

In [12]:

```
"Hello, World!"
```

Out[12]:

But not both at the same time, unless you want one of the symbols to be part of the string.

In [13]:

```
"He's a Rebel"
```

Out[13]:

In [14]:

```
'She asked, "How are you today?"'
```

Out[14]:

In [15]:

```
greeting = "Hello, World!"
```

The **print** statement is often used for printing character strings:

In [16]:

```
print greeting
```

But it can also print data types other than strings:

In [17]:

```
print "The area is ",area
```

You can use the + operator to concatenate strings together:

In [18]:

```
statement = "Hello," + "World!"
print statement
```

Don't forget the space between the strings, if you want one there.

In [19]:

```
statement = "Hello, " + "World!"
print statement
```

You can use + to concatenate multiple strings in a single statement:

In [20]:

```
print "This " + "is " + "a " + "longer " + "statement."
```

Very often in a programming language, one wants to keep a group of similar items together. Python does this using a data type called **lists**.

In [21]:

```
days_of_the_week = ["Sunday","Monday","Tuesday","Wednesday","Thursday","Friday","Saturday"]
```

You can access members of the list using the **index** of that item:

In [22]:

```
days_of_the_week[2]
```

Out[22]:

To access the *n*th element from the end of the list, you can use a negative index. For example, the -1 element of a list is the last element:

In [23]:

```
days_of_the_week[-1]
```

Out[23]:

You can add additional items to the list using the .append() command:

In [24]:

```
languages = ["Fortran","C","C++"]
languages.append("Python")
print languages
```

The **range()** command is a convenient way to make sequential lists of numbers:

In [25]:

```
range(10)
```

Out[25]:

In [26]:

```
range(2,8)
```

Out[26]:

By default, **range()** uses a *step* of 1 between elements. You can also give a fixed step size via a third argument:

In [27]:

```
evens = range(0,20,2)
evens
```

Out[27]:

In [28]:

```
evens[3]
```

Out[28]:

Lists do not have to hold only one data type. For example,

In [29]:

```
["Today",7,99.3,""]
```

Out[29]:

However, it's good (but not essential) to use lists for similar objects that are somehow logically connected. If you want to group different data types together into a composite data object, it's best to use **tuples**, which we will learn about below.

You can find out how long a list is using the **len()** command:

In [30]:

```
help(len)
```

In [31]:

```
len(evens)
```

Out[31]:

One of the most useful things you can do with lists is to *iterate* through them, i.e. to go through each element one at a time. To do this in Python, we use the **for** statement:

In [32]:

```
for day in days_of_the_week:
    print day
```

This code snippet goes through each element of the list called **days_of_the_week** and assigns it to the variable **day**. It then executes everything in the indented block (in this case only one line of code, the print statement) using those variable assignments. When the program has gone through every element of the list, it exits the block.

(Almost) every programming language defines blocks of code in some way. In Fortran, one uses END statements (ENDDO, ENDIF, etc.) to define code blocks. In C, C++, and Perl, one uses curly braces {} to define these blocks.

Python uses a colon (":"), followed by an indented block, to define code blocks. Everything at a higher level of indentation is taken to be in the same block. In the above example the block was only a single line, but we could have had longer blocks as well:

In [33]:

```
for day in days_of_the_week:
    statement = "Today is " + day
    print statement
```

The **range()** command is particularly useful with the **for** statement to execute loops of a specified length:

In [34]:

```
for i in range(20):
    print "The square of ",i," is ",i*i
```

Lists and strings have something in common that you might not suspect: they can both be treated as sequences. You already know that you can iterate through the elements of a list. You can also iterate through the letters in a string:

In [35]:

```
for letter in "Sunday":
    print letter
```

Sequences also support a *slicing* operation, which extracts a subsequence. We already know that we can use *indexing* to get the first element of a list:

In [36]:

```
days_of_the_week[0]
```

Out[36]:

If we want the list containing the first two elements of a list, we can do this via

In [37]:

```
days_of_the_week[0:2]
```

Out[37]:

or simply

In [38]:

```
days_of_the_week[:2]
```

Out[38]:

If we want the last items of the list, we can do this with negative slicing:

In [39]:

```
days_of_the_week[-2:]
```

Out[39]:

which is somewhat logically consistent with negative indices accessing the last elements of the list.

You can also slice out the middle of a list:

In [40]:

```
workdays = days_of_the_week[1:6]
print workdays
```

Since strings are sequences, you can also do this to them:

In [41]:

```
day = "Sunday"
abbreviation = day[:3]
print abbreviation
```

You can also take every *n*th item of a sequence by specifying a step in the slice (just as the third argument to the **range()** function specifies the step):

In [42]:

```
numbers = range(0,40)
evens = numbers[2::2]
evens
```

Out[42]:

We have now learned a few data types: integers and floating point numbers, strings, and lists, a container that can hold any data type. We have learned to print things out, and to iterate over items in lists. We will now learn about **boolean** variables that can be either True or False.

We invariably need some concept of *conditions* in programming to control branching behavior, to allow a program to react differently to different situations. If it's Monday, I'll go to work, but if it's Sunday, I'll sleep in. To do this in Python, we use a combination of **boolean** variables, which evaluate to either True or False, and **if** statements, that control branching based on boolean values.

For example:

In [43]:

```
if day == "Sunday":
    print "Sleep in"
else:
    print "Go to work"
```

(Quick quiz: why did the snippet print "Go to work" here? What is the variable "day" set to?)

Let's take the snippet apart to see what happened. First, note the statement

In [44]:

```
day == "Sunday"
```

Out[44]:

The double equals sign (==) performs *equality testing*. If the two items are equal, it returns True; otherwise it returns False. In this case, it is comparing two strings: "Sunday", and whatever is stored in the variable "day", which, in this case, is the other string "Saturday". Since the two strings are not equal to each other, the truth test has the value False.

The if statement that contains the truth test is followed by a code block (a colon followed by an indented block of code). If the boolean is true, it executes the code in that block. Since it is false in the above example, we don't see that code executed.

The first block of code is followed by an **else** statement, which is executed if nothing else in the above if statement is true. Since the value was false, this code is executed, which is why we see "Go to work".

You can compare any data types in Python:

In [45]:

```
1 == 2
```

Out[45]:

In [46]:

```
50 == 2*25
```

Out[46]:

In [47]:

```
3 < 3.14159
```

Out[47]:

In [48]:

```
1 == 1.0
```

Out[48]:

In [49]:

```
1 != 0
```

Out[49]:

In [50]:

```
1 <= 2
```

Out[50]:

In [51]:

```
1 >= 1
```

Out[51]:

We see a few other boolean operators here, all of which should be self-explanatory: less than, equality, non-equality, and so on.

Particularly interesting is the 1 == 1.0 test, which is true, since even though the two objects are different data types (integer and floating point number), they have the same *value*. There is another boolean operator **is**, that tests whether two objects are the same object:

In [52]:

```
1 is 1.0
```

Out[52]:
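The distinction matters most for mutable objects like lists, where two names can refer to the very same object. A sketch:

```python
a = [1, 2, 3]
b = a            # a second name for the *same* list object
c = [1, 2, 3]    # a different object that happens to have equal contents

same_object = a is b     # True: one object, two names
equal_contents = a == c  # True: the contents match
also_same = a is c       # False: distinct objects
```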

We can do boolean tests on lists as well:

In [53]:

```
[1,2,3] == [1,2,4]
```

Out[53]:

In [54]:

```
[1,2,3] < [1,2,4]
```

Out[54]:

In [55]:

```
hours = 5
0 < hours < 24
```

Out[55]:

If statements can have **elif** parts ("else if"), in addition to if/else parts. For example:

In [56]:

```
if day == "Sunday":
    print "Sleep in"
elif day == "Saturday":
    print "Do chores"
else:
    print "Go to work"
```

Of course we can combine if statements with for loops, to make a snippet that is almost interesting:

In [57]:

```
for day in days_of_the_week:
    statement = "Today is " + day
    print statement
    if day == "Sunday":
        print " Sleep in"
    elif day == "Saturday":
        print " Do chores"
    else:
        print " Go to work"
```

Other data types can be converted to boolean values using the **bool()** function.

In [58]:

```
bool(1)
```

Out[58]:

In [59]:

```
bool(0)
```

Out[59]:

In [60]:

```
bool(["This "," is "," a "," list"])
```

Out[60]:

The Fibonacci sequence is a sequence in math that starts with 0 and 1, and then each successive entry is the sum of the previous two. Thus, the sequence goes 0,1,1,2,3,5,8,13,21,34,55,89,...

A very common exercise in programming books is to compute the Fibonacci sequence up to some number **n**. First I'll show the code, then I'll discuss what it is doing.

In [61]:

```
n = 10
sequence = [0,1]
for i in range(2,n): # This is going to be a problem if we ever set n <= 2!
    sequence.append(sequence[i-1]+sequence[i-2])
print sequence
```

Let's go through this line by line. First, we define the variable **n**, and set it to the integer 10. **n** is the length of the sequence we're going to form, and should probably have a better variable name. We then create a variable called **sequence**, and initialize it to a list containing the integers 0 and 1, the first two elements of the Fibonacci sequence. We have to create these elements "by hand", since the iterative part of the sequence requires two previous elements.

We then have a for loop over the list of integers from 2 (the next element of the list) to **n** (the length of the sequence). After the colon, we see a hash tag "#", and then a **comment** that if we had set **n** to some number less than 2 we would have a problem. Comments in Python start with #, and are good ways to make notes to yourself or to a user of your code explaining why you did what you did. Better than the comment here would be to test to make sure the value of **n** is valid, and to complain if it isn't; we'll try this later.

In the body of the loop, we append to the list an integer equal to the sum of the two previous elements of the list.

After exiting the loop (ending the indentation) we then print out the whole list. That's it!
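The exercise is sometimes phrased as computing the sequence up to some maximum *value* rather than a fixed length; a while loop handles that naturally. This is my own sketch (the name fibonacci_below is hypothetical, not from the original course):

```python
def fibonacci_below(limit):
    """Return the Fibonacci numbers that are less than *limit*."""
    sequence = [0, 1]
    # Keep appending the sum of the last two entries until the next
    # term would reach the limit.
    while sequence[-1] + sequence[-2] < limit:
        sequence.append(sequence[-1] + sequence[-2])
    return sequence
```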

We might want to use the Fibonacci snippet with different sequence lengths. We could cut and paste the code into another cell, changing the value of **n**, but it's easier and more useful to make a function out of the code. We do this with the **def** statement in Python:

In [62]:

```
def fibonacci(sequence_length):
    "Return the Fibonacci sequence of length *sequence_length*"
    sequence = [0,1]
    if sequence_length < 1:
        print "Fibonacci sequence only defined for length 1 or greater"
        return
    if 0 < sequence_length < 3:
        return sequence[:sequence_length]
    for i in range(2,sequence_length):
        sequence.append(sequence[i-1]+sequence[i-2])
    return sequence
```

We can now call **fibonacci()** for different sequence_lengths:

In [63]:

```
fibonacci(2)
```

Out[63]:

In [64]:

```
fibonacci(12)
```

Out[64]:

The string on the first line of the function body is called a **docstring**, and is a special kind of comment that is often available to people using the function through the python command line:

In [65]:

```
help(fibonacci)
```

If you define a docstring for all of your functions, it makes it easier for other people to use them, since they can get help on the arguments and return values of the function.
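A minimal sketch of how this works (the function triangle_area is just a made-up example):

```python
def triangle_area(base, height):
    "Return the area of a triangle with the given base and height."
    return 0.5 * base * height

# The docstring is stored on the function object itself, which is
# where help() finds it:
doc = triangle_area.__doc__
```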

Next, note that rather than putting a comment in about what input values lead to errors, we have some testing of these values, followed by a warning if the value is invalid, and some conditional code to handle special cases.

Functions can also call themselves, something that is often called *recursion*. We're going to experiment with recursion by computing the factorial function. The factorial is defined for a positive integer **n** as

$$ n! = n(n-1)(n-2)\cdots 1 $$

First, note that we don't need to write a function at all, since this is a function built into the standard math library. Let's use the help function to find out about it:

In [66]:

```
from math import factorial
help(factorial)
```

This is clearly what we want.

In [67]:

```
factorial(20)
```

Out[67]:

However, if we did want to write a function ourselves, we could do so recursively by noting that

$$ n! = n(n-1)! $$

The program then looks something like:

In [68]:

```
def fact(n):
    if n <= 0:
        return 1
    return n*fact(n-1)
```

In [69]:

```
fact(20)
```

Out[69]:

Recursion can be very elegant, and can lead to very simple programs.
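As a further illustration, the Fibonacci numbers from earlier can also be defined recursively. A sketch (elegant, but far slower than the loop version, since each call recomputes earlier terms):

```python
def fib(n):
    "Return the n-th Fibonacci number, counting fib(0) = 0 and fib(1) = 1."
    if n < 2:
        return n
    # Each number is the sum of the previous two:
    return fib(n - 1) + fib(n - 2)
```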

Before we end the Python overview, I wanted to touch on two more data structures that are very useful (and thus very common) in Python programs.

A **tuple** is a sequence object like a list or a string. It's constructed by grouping a sequence of objects together with commas, either without brackets, or with parentheses:

In [70]:

```
t = (1,2,'hi',9.0)
t
```

Out[70]:

Tuples are like lists, in that you can access the elements using indices:

In [71]:

```
t[1]
```

Out[71]:

However, tuples are *immutable*: you can't append to them or change their elements:

In [72]:

```
t.append(7)
```

In [73]:

```
t[1]=77
```

Tuples are convenient for grouping together related data of different types, such as a name and a pair of coordinates:

In [74]:

```
('Bob',0.0,21.0)
```

Out[74]:

In [75]:

```
positions = [
('Bob',0.0,21.0),
('Cat',2.5,13.1),
('Dog',33.0,1.2)
]
```

In [76]:

```
def minmax(objects):
    minx = 1e20 # These are set to really big numbers
    miny = 1e20
    for obj in objects:
        name,x,y = obj
        if x < minx:
            minx = x
        if y < miny:
            miny = y
    return minx,miny
x,y = minmax(positions)
print x,y
```

Here we did two things with tuples you haven't seen before. First, we unpacked an object into a set of named variables using *tuple assignment*:

```
>>> name,x,y = obj
```

We also returned multiple values (minx,miny), which were then assigned to two other variables (x,y), again by tuple assignment. This makes what would have been complicated code in C++ rather simple.

Tuple assignment is also a convenient way to swap variables:

In [77]:

```
x,y = 1,2
y,x = x,y
x,y
```

Out[77]:

**Dictionaries** are objects called "mappings" or "associative arrays" in other languages. Whereas a list associates an integer index with a set of objects:

In [78]:

```
mylist = [1,2,9,21]
```

a dictionary associates a *key* with each *value*, and the key can be (almost) any object. Whereas lists are formed with square brackets [], dictionaries use curly brackets {}:

In [79]:

```
ages = {"Rick": 46, "Bob": 86, "Fred": 21}
print "Rick's age is ",ages["Rick"]
```

There's also a convenient way to create dictionaries without having to quote the keys.

In [80]:

```
dict(Rick=46,Bob=86,Fred=20)
```

Out[80]:

The **len()** command works on both tuples and dictionaries:

In [81]:

```
len(t)
```

Out[81]:

In [82]:

```
len(ages)
```

Out[82]:
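A few more dictionary operations are worth knowing; this sketch uses the .get() method, which returns a default instead of raising an error for a missing key:

```python
ages = {"Rick": 46, "Bob": 86, "Fred": 21}

# .get() looks up a key, falling back to a default when it is missing:
fred_age = ages.get("Fred", 0)      # 21
barney_age = ages.get("Barney", 0)  # 0, since "Barney" is not a key

# Membership testing with "in" checks the keys:
has_rick = "Rick" in ages
```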

We can generally understand trends in data by using a plotting program to chart it. Python has a wonderful plotting library called Matplotlib. The IPython notebook interface we are using for these notes has that functionality built in.

As an example, we have looked at two different functions, the Fibonacci function, and the factorial function, both of which grow faster than polynomially. Which one grows the fastest? Let's plot them. First, let's generate the Fibonacci sequence of length 10:

In [83]:

```
fibs = fibonacci(10)
```

Next let's generate the factorials.

In [84]:

```
facts = []
for i in range(10):
    facts.append(factorial(i))
```

Now we use the Matplotlib function **plot** to compare the two.

In [85]:

```
figsize(8,6)
plot(facts,label="factorial")
plot(fibs,label="Fibonacci")
xlabel("n")
legend()
```

Out[85]:

The factorial function grows much faster. In fact, you can't even see the Fibonacci sequence. It's not entirely surprising: a function where we multiply by n each iteration is bound to grow faster than one where each new term is just the sum of the previous two.

Let's plot these on a semilog plot so we can see them both a little more clearly:

In [86]:

```
semilogy(facts,label="factorial")
semilogy(fibs,label="Fibonacci")
xlabel("n")
legend()
```

Out[86]:

There is, of course, much more to the language than I've covered here. I've tried to keep this brief enough so that you can jump in and start using Python to simplify your life and work. My own experience in learning new things is that the information doesn't "stick" unless you try and use it for something in real life.

You will no doubt need to learn more as you go. I've listed several other good references, including the Python Tutorial and Learn Python the Hard Way. Additionally, now is a good time to start familiarizing yourself with the Python Documentation, and, in particular, the Python Language Reference.

Tim Peters, one of the earliest and most prolific Python contributors, wrote the "Zen of Python", which can be accessed via the "import this" command:

In [87]:

```
import this
```

No matter how experienced a programmer you are, these are words to meditate on.

Numpy contains core routines for doing fast vector, matrix, and linear algebra-type operations in Python. Scipy contains additional routines for optimization, special functions, and so on. Both contain modules written in C and Fortran so that they're as fast as possible. Together, they give Python roughly the same capability that the Matlab program offers. (In fact, if you're an experienced Matlab user, there's a guide to Numpy for Matlab users just for you.)

Fundamental to both Numpy and Scipy is the ability to work with vectors and matrices. You can create vectors from lists using the **array** command:

In [88]:

```
array([1,2,3,4,5,6])
```

Out[88]:

You can pass a second argument to **array** that gives the numeric type. There are a number of types your array can hold, some of which are aliased to single character codes. The most common ones are 'd' (double precision floating point number), 'D' (double precision complex number), and 'i' (int32). Thus,

In [89]:

```
array([1,2,3,4,5,6],'d')
```

Out[89]:

In [90]:

```
array([1,2,3,4,5,6],'D')
```

Out[90]:

In [91]:

```
array([1,2,3,4,5,6],'i')
```

Out[91]:

To build matrices, you can either use the array command with lists of lists:

In [92]:

```
array([[0,1],[1,0]],'d')
```

Out[92]:

or you can use the **zeros** command to make a matrix filled with zeros:

In [93]:

```
zeros((3,3),'d')
```

Out[93]:

The first argument is a tuple containing the shape of the matrix, and the second is the data type, which follows the same conventions as in the **array** command. Passing a single integer instead of a tuple gives a one-dimensional vector:

In [94]:

```
zeros(3,'d')
```

Out[94]:

You can also make row vectors:

In [95]:

```
zeros((1,3),'d')
```

Out[95]:

or column vectors:

In [96]:

```
zeros((3,1),'d')
```

Out[96]:

There's also an **identity** command that behaves as you'd expect:

In [97]:

```
identity(4,'d')
```

Out[97]:

as well as a **ones** command.

The **linspace** command makes a linear array of points from a starting to an ending value.

In [98]:

```
linspace(0,1)
```

Out[98]:

In [99]:

```
linspace(0,1,11)
```

Out[99]:

The **linspace** command is an easy way to make coordinates for plotting. Functions in the numpy library (all of which are imported into IPython notebook) can act on an entire vector (or even a matrix) of points at once. Thus,

In [100]:

```
x = linspace(0,2*pi)
sin(x)
```

Out[100]:

In conjunction with **matplotlib**, this is a nice way to plot things:

In [101]:

```
plot(x,sin(x))
```

Out[101]:

Matrix objects act sensibly when multiplied by scalars:

In [102]:

```
0.125*identity(3,'d')
```

Out[102]:

as well as when you add two matrices together. (However, the matrices have to be the same shape.)

In [103]:

```
identity(2,'d') + array([[1,1],[1,2]])
```

Out[103]:

Note, however, that the ordinary multiplication operator * performs element-wise multiplication on arrays, not matrix multiplication:

In [104]:

```
identity(2)*ones((2,2))
```

Out[104]:

To get matrix multiplication, you need the **dot** command:

In [105]:

```
dot(identity(2),ones((2,2)))
```

Out[105]:

The **dot** command can also do dot products (duh!):

In [106]:

```
v = array([3,4],'d')
sqrt(dot(v,v))
```

Out[106]:

as well as matrix-vector products.
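For instance, a quick sketch of a matrix-vector product (with explicit numpy imports, since outside the notebook nothing is imported automatically):

```python
from numpy import array, dot

A = array([[1, 2], [3, 4]], 'd')
v = array([1, 1], 'd')

# dot(matrix, vector) gives the usual matrix-vector product:
Av = dot(A, v)   # array([ 3.,  7.])
```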

Numpy also provides **determinant**, **inverse**, and **transpose** functions that act as you would suppose. Transpose can be abbreviated with ".T" at the end of a matrix object:

In [107]:

```
m = array([[1,2],[3,4]])
m.T
```

Out[107]:
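In an ordinary script, the determinant and inverse live in numpy.linalg; a sketch:

```python
from numpy import array, dot, allclose, identity
from numpy.linalg import det, inv

m = array([[1, 2], [3, 4]], 'd')

d = det(m)              # 1*4 - 2*3 = -2
m_inv = inv(m)          # the matrix inverse
check = dot(m, m_inv)   # should be (numerically) the identity
```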

There is also a **diag()** function that takes a list or a vector and puts it along the diagonal of a square matrix.

In [108]:

```
diag([1,2,3,4,5])
```

Out[108]:

We'll find this useful later on.

You can solve systems of linear equations using the **solve** command:

In [109]:

```
A = array([[1,1,1],[0,2,5],[2,5,-1]])
b = array([6,-4,27])
solve(A,b)
```

Out[109]:

There are a number of routines to compute eigenvalues and eigenvectors:

- **eigvals** returns the eigenvalues of a matrix
- **eigvalsh** returns the eigenvalues of a Hermitian matrix
- **eig** returns the eigenvalues and eigenvectors of a matrix
- **eigh** returns the eigenvalues and eigenvectors of a Hermitian matrix

In [110]:

```
A = array([[13,-4],[-4,7]],'d')
eigvalsh(A)
```

Out[110]:

In [111]:

```
eigh(A)
```

Out[111]:

Now that we have these tools in our toolbox, we can start to do some cool stuff with it. Many of the equations we want to solve in Physics involve differential equations. We want to be able to compute the derivative of functions:

$$ y' \approx \frac{y(x+h)-y(x)}{h} $$

by *discretizing* the function $y(x)$ on an evenly spaced set of points $x_0, x_1, \dots, x_n$, yielding $y_0, y_1, \dots, y_n$. Using the discretization, we can approximate the derivative by

$$ y_i' \approx \frac{y_{i+1}-y_{i-1}}{x_{i+1}-x_{i-1}} $$

We can write a derivative function in Python via

In [208]:

```
def nderiv(y,x):
    "Finite difference derivative of the function f"
    n = len(y)
    d = zeros(n,'d') # assume double
    # Use centered differences for the interior points, one-sided differences for the ends
    for i in range(1,n-1):
        d[i] = (y[i+1]-y[i-1])/(x[i+1]-x[i-1])
    d[0] = (y[1]-y[0])/(x[1]-x[0])
    d[n-1] = (y[n-1]-y[n-2])/(x[n-1]-x[n-2])
    return d
```

Let's see whether this works for our sin example from above:

In [209]:

```
x = linspace(0,2*pi)
dsin = nderiv(sin(x),x)
plot(x,dsin,label='numerical')
plot(x,cos(x),label='analytical')
title("Comparison of numerical and analytical derivatives of sin(x)")
legend()
```

Out[209]:

Pretty close!

Now that we've convinced ourselves that finite differences aren't a terrible approximation, let's see if we can use this to solve the one-dimensional harmonic oscillator.

We want to solve the time-independent Schrodinger equation

$$ -\frac{\hbar^2}{2m}\frac{\partial^2\psi(x)}{\partial x^2} + V(x)\psi(x) = E\psi(x) $$

for $\psi(x)$ when $V(x)=\frac{1}{2}m\omega^2x^2$ is the harmonic oscillator potential. We're going to use the standard trick to transform the differential equation into a matrix equation by multiplying both sides by $\psi^*(x)$ and integrating over $x$. This yields

$$ -\frac{\hbar^2}{2m}\int\psi(x)\frac{\partial^2}{\partial x^2}\psi(x)dx + \int\psi(x)V(x)\psi(x)dx = E $$

We will again use the finite difference approximation. The finite difference formula for the second derivative is

$$ y_i'' \approx \frac{y_{i+1}-2y_i+y_{i-1}}{h^2} $$

where $h = x_{i+1}-x_i$ is the grid spacing. We can think of the first term in the Schrodinger equation as the overlap of the wave function $\psi(x)$ with the second derivative of the wave function $\frac{\partial^2}{\partial x^2}\psi(x)$. Given the above expression for the second derivative, we can see that if we take the overlap of the states $y_1,\dots,y_n$ with the second derivative, we will only have three points where the overlap is nonzero, at $y_{i-1}$, $y_i$, and $y_{i+1}$. In matrix form, this leads to the tridiagonal Laplacian matrix, which has -2's along the main diagonal, and 1's along the diagonals above and below it.

The second term leads to a diagonal matrix with $V(x_i)$ on the diagonal elements. Putting all of these pieces together, we get:

In [114]:

```
def Laplacian(x):
    h = x[1]-x[0] # assume uniformly spaced points
    n = len(x)
    M = -2*identity(n,'d')
    for i in range(1,n):
        M[i,i-1] = M[i-1,i] = 1
    return M/h**2
```
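As a quick sanity check (my addition, not part of the original notebook), the matrix version of the second derivative should reproduce the exact second derivative of a quadratic at the interior points. Here is a self-contained sketch using explicit numpy names:

```
import numpy as np

def laplacian(x):
    # Tridiagonal second-difference matrix; assumes uniformly spaced points
    h = x[1] - x[0]
    n = len(x)
    M = -2 * np.identity(n)
    for i in range(1, n):
        M[i, i-1] = M[i-1, i] = 1
    return M / h**2

x = np.linspace(0, 1, 11)
y = x**2                       # exact second derivative is 2 everywhere
d2y = laplacian(x).dot(y)
print(d2y[1:-1])               # interior points: all very close to 2
```

The endpoints are wrong, because the matrix implicitly assumes the function is zero outside the grid, which is exactly the boundary condition we want for bound states.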

In [115]:

```
x = linspace(-3,3)
m = 1.0
ohm = 1.0
T = (-0.5/m)*Laplacian(x)
V = 0.5*m*(ohm**2)*(x**2)
H = T + diag(V)
E,U = eigh(H)
h = x[1]-x[0]
# Plot the harmonic potential
plot(x,V,color='k')
for i in range(4):
    # For each of the first few solutions, plot the energy level:
    axhline(y=E[i],color='k',ls=":")
    # as well as the eigenfunction, displaced by the energy level so they don't
    # all pile up on each other:
    plot(x,-U[:,i]/sqrt(h)+E[i])
title("Eigenfunctions of the Quantum Harmonic Oscillator")
xlabel("Displacement (bohr)")
ylabel("Energy (hartree)")
```

Out[115]:

We've made a couple of hacks here to get the orbitals the way we want them. First, I inserted a -1 factor before the wave functions, to fix the phase of the lowest state. The phase (sign) of a quantum wave function doesn't hold any information, only the square of the wave function does, so this doesn't really change anything.

But the eigenfunctions as we generate them aren't properly normalized. The reason is that finite difference isn't a real basis in the quantum mechanical sense. It's a basis of Dirac δ functions at each point; we interpret the space between the points as being "filled" by the wave function, but the finite difference basis only has the solution being at the points themselves. We can fix this by dividing the eigenfunctions of our finite difference Hamiltonian by the square root of the spacing, and this gives properly normalized functions.
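To see that the $1/\sqrt{h}$ rescaling works, here is a small self-contained check (my own sketch, rebuilding the Hamiltonian with explicit numpy calls and $m=\omega=1$). The columns of U returned by **eigh** are unit vectors, so after dividing by $\sqrt{h}$ the discrete integral $\sum_i |\psi_i|^2 h$ comes out to 1:

```
import numpy as np
from numpy.linalg import eigh

# Rebuild the finite-difference Hamiltonian from the text (m = omega = 1)
x = np.linspace(-3, 3, 50)
h = x[1] - x[0]
n = len(x)
M = -2 * np.identity(n)
for i in range(1, n):
    M[i, i-1] = M[i-1, i] = 1
T = -0.5 * M / h**2
H = T + np.diag(0.5 * x**2)
E, U = eigh(H)

psi0 = U[:, 0] / np.sqrt(h)   # rescale the discrete eigenvector
norm = np.sum(psi0**2) * h    # discrete version of the integral of |psi|^2
print(norm)                   # 1.0 (eigh returns unit-norm columns)
```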

The solutions to the Harmonic Oscillator are supposed to be Hermite polynomials. The Wikipedia page has the HO states given by

$$\psi_n(x) = \frac{1}{\sqrt{2^n n!}} \left(\frac{m\omega}{\pi\hbar}\right)^{1/4} \exp\left(-\frac{m\omega x^2}{2\hbar}\right) H_n\left(\sqrt{\frac{m\omega}{\hbar}}x\right)$$Let's see whether they look like those. There are some special functions in the Numpy library, and some more in Scipy. Hermite Polynomials are in Numpy:

In [116]:

```
from numpy.polynomial.hermite import Hermite
from math import factorial
def ho_evec(x,n,m,ohm):
    vec = [0]*9 # coefficient vector: select the nth Hermite polynomial
    vec[n] = 1
    Hn = Hermite(vec)
    return (1/sqrt(2**n*factorial(n)))*pow(m*ohm/pi,0.25)*exp(-0.5*m*ohm*x**2)*Hn(x*sqrt(m*ohm))
```

Let's compare the first function to our solution.

In [117]:

```
plot(x,ho_evec(x,0,1,1),label="Analytic")
plot(x,-U[:,0]/sqrt(h),label="Numeric")
xlabel('x (bohr)')
ylabel(r'$\psi(x)$')
title("Comparison of numeric and analytic solutions to the Harmonic Oscillator")
legend()
```

Out[117]:

The agreement is almost exact.

We can use the **subplot** command to put multiple comparisons in different panes on a single plot:

In [118]:

```
phase_correction = [-1,1,1,-1,-1,1]
for i in range(6):
    subplot(2,3,i+1)
    plot(x,ho_evec(x,i,1,1),label="Analytic")
    plot(x,phase_correction[i]*U[:,i]/sqrt(h),label="Numeric")
```

Other than phase errors (which I've corrected with a little hack: can you find it?), the agreement is pretty good, although it gets worse the higher in energy we get, in part because we used only 50 points.

The Scipy module has many more special functions:

In [119]:

```
from scipy.special import airy,jn,eval_chebyt,eval_legendre
subplot(2,2,1)
x = linspace(-1,1)
Ai,Aip,Bi,Bip = airy(x)
plot(x,Ai)
plot(x,Aip)
plot(x,Bi)
plot(x,Bip)
title("Airy functions")
subplot(2,2,2)
x = linspace(0,10)
for i in range(4):
    plot(x,jn(i,x))
title("Bessel functions")
subplot(2,2,3)
x = linspace(-1,1)
for i in range(6):
    plot(x,eval_chebyt(i,x))
title("Chebyshev polynomials of the first kind")
subplot(2,2,4)
x = linspace(-1,1)
for i in range(6):
    plot(x,eval_legendre(i,x))
title("Legendre polynomials")
```

Out[119]:

Very often we deal with some data that we want to fit to some sort of expected behavior. Say we have the following:

In [120]:

```
raw_data = """\
3.1905781584582433,0.028208609537968457
4.346895074946466,0.007160804747670053
5.374732334047101,0.0046962988461934805
8.201284796573875,0.0004614473299618756
10.899357601713055,0.00005038370219939726
16.295503211991434,4.377451812785309e-7
21.82012847965739,3.0799922117601088e-9
32.48394004282656,1.524776208284536e-13
43.53319057815846,5.5012073588707224e-18"""
```

In [121]:

```
data = []
for line in raw_data.splitlines():
    words = line.split(',')
    data.append(map(float,words))
data = array(data)
```

In [122]:

```
title("Raw Data")
xlabel("Distance")
plot(data[:,0],data[:,1],'bo')
```

Out[122]:

Since we expect the data to have an exponential decay, we can plot it using a semi-log plot.

In [123]:

```
title("Raw Data")
xlabel("Distance")
semilogy(data[:,0],data[:,1],'bo')
```

Out[123]:

For a pure exponential decay like this, we can fit the log of the data to a straight line. The above plot suggests this is a good approximation. Given a function $$ y = Ae^{-ax} $$ taking the log of both sides gives $$ \log(y) = \log(A) - ax$$ Thus, if we fit the log of the data versus $x$, we should get a straight line whose slope gives us $-a$, and whose intercept gives us $\log(A)$.

There's a numpy function called **polyfit** that will fit data to a polynomial form. We'll use this to fit to a straight line (a polynomial of order 1)

In [124]:

```
params = polyfit(data[:,0],log(data[:,1]),1)
a = params[0]
A = exp(params[1])
```

Let's see whether this curve fits the data.

In [125]:

```
x = linspace(1,45)
title("Raw Data")
xlabel("Distance")
semilogy(data[:,0],data[:,1],'bo')
semilogy(x,A*exp(a*x),'b-')
```

Out[125]:

In [126]:

```
gauss_data = """\
-0.9902286902286903,1.4065274110372852e-19
-0.7566104566104566,2.2504438576596563e-18
-0.5117810117810118,1.9459459459459454
-0.31887271887271884,10.621621621621626
-0.250997150997151,15.891891891891893
-0.1463309463309464,23.756756756756754
-0.07267267267267263,28.135135135135133
-0.04426734426734419,29.02702702702703
-0.0015939015939017698,29.675675675675677
0.04689304689304685,29.10810810810811
0.0840994840994842,27.324324324324326
0.1700546700546699,22.216216216216214
0.370878570878571,7.540540540540545
0.5338338338338338,1.621621621621618
0.722014322014322,0.08108108108108068
0.9926849926849926,-0.08108108108108646"""
data = []
for line in gauss_data.splitlines():
    words = line.split(',')
    data.append(map(float,words))
data = array(data)
plot(data[:,0],data[:,1],'bo')
```

Out[126]:

This data looks more Gaussian than exponential. If we wanted to, we could use **polyfit** for this as well, but let's use the **curve_fit** function from Scipy, which can fit to arbitrary functions. You can learn more using help(curve_fit).

First define a general Gaussian function to fit to.

In [127]:

```
def gauss(x,A,a): return A*exp(a*x**2)
```

Now fit to it using **curve_fit**:

In [128]:

```
from scipy.optimize import curve_fit
params,conv = curve_fit(gauss,data[:,0],data[:,1])
x = linspace(-1,1)
plot(data[:,0],data[:,1],'bo')
A,a = params
plot(x,gauss(x,A,a),'b-')
```

Out[128]:

The **curve_fit** routine we just used is built on top of very good general **minimization** capabilities in Scipy. You can learn more at the scipy documentation pages.
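As a taste of that capability, here is a short sketch of my own (not from the original notebook) that minimizes the classic Rosenbrock test function with **scipy.optimize.minimize**:

```
import numpy as np
from scipy.optimize import minimize

# The Rosenbrock "banana" function, a standard minimization test problem;
# its global minimum is at (1, 1).
def rosen(v):
    x, y = v
    return (1 - x)**2 + 100*(y - x**2)**2

result = minimize(rosen, x0=[0.0, 0.0])
print(result.x)   # close to [1, 1]
```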

Many methods in scientific computing rely on Monte Carlo integration, where a sequence of (pseudo) random numbers are used to approximate the integral of a function. Python has good random number generators in the standard library. The **random()** function gives pseudorandom numbers uniformly distributed between 0 and 1:

In [129]:

```
from random import random
rands = []
for i in range(100):
    rands.append(random())
plot(rands)
```

Out[129]:

**random()** uses the Mersenne Twister algorithm, which is a highly regarded pseudorandom number generator. There are also functions to generate random integers, to randomly shuffle a list, and functions to pick random numbers from a particular distribution, like the normal distribution:
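For example, here is a small sketch of my own using the standard **random** module:

```
import random

random.seed(42)                        # fix the seed so the run is repeatable
roll = random.randint(1, 6)            # random integer in [1, 6], like a die
deck = list(range(10))
random.shuffle(deck)                   # shuffle a list in place
pick = random.choice(['a', 'b', 'c'])  # draw one element at random
print(roll, deck, pick)
```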

In [130]:

```
from random import gauss
grands = []
for i in range(100):
    grands.append(gauss(0,1))
plot(grands)
```

Out[130]:

Numpy's **rand()** function generates an entire array of uniform random numbers in one call:

In [131]:

```
plot(rand(100))
```

Out[131]:

In [132]:

```
npts = 5000
xs = 2*rand(npts)-1
ys = 2*rand(npts)-1
r = xs**2+ys**2
ninside = (r<1).sum()
figsize(6,6) # make the figure square
title("Approximation to pi = %f" % (4*ninside/float(npts)))
plot(xs[r<1],ys[r<1],'b.')
plot(xs[r>1],ys[r>1],'r.')
figsize(8,6) # change the figsize back to 4x3 for the rest of the notebook
```

The idea behind the program is that the ratio of the area of the unit circle to the area of the square that circumscribes it is $\pi/4$, so by counting the fraction of the random points in the square that are inside the circle, we get increasingly good estimates of $\pi$.

The above code uses some higher level Numpy tricks to compute the radius of each point in a single line, to count how many radii are below one in a single line, and to filter the x,y points based on their radii. To be honest, I rarely write code like this: I find some of these Numpy tricks a little too cute to remember them, and I'm more likely to use a list comprehension (see below) to filter the points I want, since I can remember that.
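For what it's worth, here is roughly how the same estimate looks with list comprehensions instead of boolean-array indexing (my own sketch, using the standard **random** module rather than numpy's **rand**):

```
import random

random.seed(0)
npts = 5000
pts = [(2*random.random() - 1, 2*random.random() - 1) for _ in range(npts)]
inside = [(x, y) for (x, y) in pts if x*x + y*y < 1]   # filter by radius
pi_est = 4.0 * len(inside) / npts
print(pi_est)   # an estimate of pi
```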

As methods of computing $\pi$ go, this is among the worst. A much better method is to use Leibniz's expansion of arctan(1):

$$\frac{\pi}{4} = \sum_{k=0}^\infty \frac{(-1)^k}{2k+1}$$

In [133]:

```
n = 100
total = 0
for k in range(n):
    total += pow(-1,k)/(2*k+1.0)
print 4*total
```

This expansion converges slowly, and ordinary floating-point numbers only carry about 16 digits of precision anyway; if you want to go beyond that, Python's standard library has the arbitrary-precision **decimal** module, if you're interested.
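A minimal sketch of what **decimal** offers (my example, not the original notebook's):

```
from decimal import Decimal, getcontext

getcontext().prec = 50           # work with 50 significant digits
x = Decimal(1) / Decimal(7)
print(x)                         # 0.142857... carried to 50 digits
```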

Integration can be hard, and sometimes it's easier to work out a definite integral using an approximation. For example, suppose we wanted to figure out the integral:

$$\int_0^\infty\exp(-x)dx=1$$

In [134]:

```
from numpy import sqrt
def f(x): return exp(-x)
x = linspace(0,10)
plot(x,exp(-x))
```

Out[134]:

Scipy has a numerical integration routine, **quad** (so called because numerical integration is sometimes referred to as *quadrature*), that we can use for this:

In [135]:

```
from scipy.integrate import quad
quad(f,0,inf)
```

Out[135]:

There are also 2d and 3d numerical integrators in Scipy. See the docs for more information.
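For example, **dblquad** handles two-dimensional integrals; here is a short sketch (my addition) integrating $xy$ over the unit square, where the exact answer is $1/4$:

```
from scipy.integrate import dblquad

# Integrate f(x, y) = x*y over the unit square; the exact answer is 1/4.
# Note that dblquad's integrand takes its arguments in (y, x) order.
val, err = dblquad(lambda y, x: x * y, 0, 1, lambda x: 0, lambda x: 1)
print(val)   # 0.25
```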

In [136]:

```
from scipy.fftpack import fft,fftfreq
npts = 4000
nplot = npts/10
t = linspace(0,120,npts)
def acc(t): return 10*sin(2*pi*2.0*t) + 5*sin(2*pi*8.0*t) + 2*rand(npts)
signal = acc(t)
FFT = abs(fft(signal))
freqs = fftfreq(npts, t[1]-t[0])
subplot(211)
plot(t[:nplot], signal[:nplot])
subplot(212)
plot(freqs,20*log10(FFT),',')
show()
```

There are additional signal processing routines in Scipy that you can read about here.

As more and more of our day-to-day work is being done on and through computers, we increasingly have output that one program writes, often in a text file, that we need to analyze in one way or another, and potentially feed into another program.

Suppose we have the following output:

In [137]:

```
myoutput = """\
@ Step Energy Delta E Gmax Grms Xrms Xmax Walltime
@ ---- ---------------- -------- -------- -------- -------- -------- --------
@ 0 -6095.12544083 0.0D+00 0.03686 0.00936 0.00000 0.00000 1391.5
@ 1 -6095.25762870 -1.3D-01 0.00732 0.00168 0.32456 0.84140 10468.0
@ 2 -6095.26325979 -5.6D-03 0.00233 0.00056 0.06294 0.14009 11963.5
@ 3 -6095.26428124 -1.0D-03 0.00109 0.00024 0.03245 0.10269 13331.9
@ 4 -6095.26463203 -3.5D-04 0.00057 0.00013 0.02737 0.09112 14710.8
@ 5 -6095.26477615 -1.4D-04 0.00043 0.00009 0.02259 0.08615 20211.1
@ 6 -6095.26482624 -5.0D-05 0.00015 0.00002 0.00831 0.03147 21726.1
@ 7 -6095.26483584 -9.6D-06 0.00021 0.00004 0.01473 0.05265 24890.5
@ 8 -6095.26484405 -8.2D-06 0.00005 0.00001 0.00555 0.01929 26448.7
@ 9 -6095.26484599 -1.9D-06 0.00003 0.00001 0.00164 0.00564 27258.1
@ 10 -6095.26484676 -7.7D-07 0.00003 0.00001 0.00161 0.00553 28155.3
@ 11 -6095.26484693 -1.8D-07 0.00002 0.00000 0.00054 0.00151 28981.7
@ 11 -6095.26484693 -1.8D-07 0.00002 0.00000 0.00054 0.00151 28981.7"""
```

This output actually came from a geometry optimization of a silicon cluster using the NWChem quantum chemistry suite. At every step the program computes the energy of the molecular geometry, and then changes the geometry to minimize the computed forces, until the energy converges. I obtained this output via the Unix command

```
% grep @ nwchem.out
```

since NWChem is nice enough to precede the lines that you need to monitor job progress with the '@' symbol.

We could do the entire analysis in Python; I'll show how to do this later on, but first let's focus on turning this output into usable Python data that we can plot.

First, note that the data is entered into a multi-line string. When Python sees three quote marks """ or ''' it treats everything following as part of a single string, including newlines, tabs, and anything else, until it sees the same three quote marks (""" has to be followed by another """, and ''' has to be followed by another ''') again. This is a convenient way to quickly dump data into Python, and it also reinforces the important idea that you don't have to open a file and deal with it one line at a time. You can read everything in, and deal with it as one big chunk.

The first thing we'll do, though, is to split the big string into a list of strings, since each line corresponds to a separate piece of data. We will use the **splitlines()** function on the big myoutput string to break it into a new element every time it sees a newline (\n) character:

In [138]:

```
lines = myoutput.splitlines()
lines
```

Out[138]:

Splitting is a big concept in text processing. We used **splitlines()** here, and we will use the more general **split()** function below to split each line into whitespace-delimited words.
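A quick illustration (my own example) of the two splitting behaviors: with no argument, **split()** breaks on any run of whitespace; with an explicit delimiter, it splits on exactly that character and preserves empty fields:

```
line = "@    0   -6095.12544083  0.0D+00  0.03686"
words = line.split()          # no argument: split on any run of whitespace
fields = "a,b,,c".split(',')  # explicit delimiter: empty fields are kept
print(words)    # ['@', '0', '-6095.12544083', '0.0D+00', '0.03686']
print(fields)   # ['a', 'b', '', 'c']
```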

We now want to do three things:

- Skip over the lines that don't carry any information
- Break apart each line that does carry information and grab the pieces we want
- Turn the resulting data into something that we can plot.

For this data, we really only want the Energy column, the Gmax column (which contains the maximum gradient at each step), and perhaps the Walltime column.

Since the data is now in a list of lines, we can iterate over it:

In [139]:

```
for line in lines[2:]:
    # do something with each line
    words = line.split()
```

Let's examine what we just did: first, we used a **for** loop to iterate over each line. However, we skipped the first two lines (lines[2:] takes only the elements starting from index 2), since lines[0] contained the column headings, and lines[1] contained the dashed separator.

We then split each line into chunks (which we're calling "words", even though in most cases they're numbers) using the string **split()** command. Here's what split does:

In [140]:

```
import string
help(string.split)
```

In [141]:

```
lines[2].split()
```

Out[141]:

This is almost exactly what we want. We just have to now pick the fields we want:

In [142]:

```
for line in lines[2:]:
    # do something with each line
    words = line.split()
    energy = words[2]
    gmax = words[4]
    time = words[8]
    print energy,gmax,time
```

These values are still strings rather than numbers; we can use the **float()** command to convert them. We also need to save the results in some form. I'll do this as follows:

In [143]:

```
data = []
for line in lines[2:]:
    # do something with each line
    words = line.split()
    energy = float(words[2])
    gmax = float(words[4])
    time = float(words[8])
    data.append((energy,gmax,time))
data = array(data)
```

We now have our data in a numpy array, so we can choose columns to print:

In [144]:

```
plot(data[:,0])
xlabel('step')
ylabel('Energy (hartrees)')
title('Convergence of NWChem geometry optimization for Si cluster')
```

Out[144]:

I would write the code a little more succinctly if I were doing this for myself, but this is essentially a snippet I use repeatedly.
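For instance, a more compact version might look like this (a sketch of my own, shown with just a two-line excerpt of the output so it stands alone):

```
import numpy as np

# A two-line excerpt of the grep'd NWChem output, same format as above
myoutput = """\
@ Step       Energy      Delta E   Gmax     Grms     Xrms     Xmax   Walltime
@ ---- ---------------- -------- -------- -------- -------- -------- --------
@    0   -6095.12544083  0.0D+00  0.03686  0.00936  0.00000  0.00000   1391.5
@    1   -6095.25762870 -1.3D-01  0.00732  0.00168  0.32456  0.84140  10468.0"""

# Skip the two header lines, split each remaining line, and keep the
# energy, gmax, and walltime fields as floats -- all in one expression.
data = np.array([(float(w[2]), float(w[4]), float(w[8]))
                 for w in (line.split() for line in myoutput.splitlines()[2:])])
print(data)
```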

Suppose our data was in CSV (comma-separated values) format, a format popularized by spreadsheet programs like Microsoft Excel and increasingly used as a data interchange format in big data applications. How would we parse that?

In [145]:

```
csv = """\
-6095.12544083, 0.03686, 1391.5
-6095.25762870, 0.00732, 10468.0
-6095.26325979, 0.00233, 11963.5
-6095.26428124, 0.00109, 13331.9
-6095.26463203, 0.00057, 14710.8
-6095.26477615, 0.00043, 20211.1
-6095.26482624, 0.00015, 21726.1
-6095.26483584, 0.00021, 24890.5
-6095.26484405, 0.00005, 26448.7
-6095.26484599, 0.00003, 27258.1
-6095.26484676, 0.00003, 28155.3
-6095.26484693, 0.00002, 28981.7
-6095.26484693, 0.00002, 28981.7"""
```

We can do much the same as before:

In [146]:

```
data = []
for line in csv.splitlines():
    words = line.split(',')
    data.append(map(float,words))
data = array(data)
```

Here I've used the **map()** command to repeatedly apply a single function (**float()**) to a list, and to return the output as a list.
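A quick side-by-side of **map()** and the equivalent list comprehension (my example; note that in the Python 2 used in this notebook **map()** returns a list directly, while in Python 3 it returns an iterator, hence the **list()** wrapper here):

```
words = ['-6095.12544083', '0.03686', '1391.5']
floats = list(map(float, words))     # apply float() to every element
same = [float(w) for w in words]     # the equivalent list comprehension
print(floats)                        # [-6095.12544083, 0.03686, 1391.5]
```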

In [147]:

```
help(map)
```