Python and parallel programming.

May 31, 2008

Just fork(), they say.

For those of you not in the know, the Python community has had a long-running debate over what’s known as the Global Interpreter Lock. I say “debate”, though from what I can see it usually (aside from the odd serious exchange) amounts to newbies popping up and asking “why can’t I multi-thread properly on multi-core systems?”, and the veterans telling them the way things are. The argument usually goes: multi-threading is how I’m supposed to take advantage of multiple cores. Not around here, replies the Python community; around here, we use tried and tested methods like forking child processes.

So, I’ve been reading about this for a while, but yesterday was the first time that I’ve actually run across it in my own work. I’ve been writing a genetic algorithm for some modeling that I’m involved in, and with the recent construction of frankenputer, I’ve finally run into the need to parallelize my code to get some speed-ups. I wrote this GA in Python because (a) I’ve come to loathe C++, and (b) it’s just so damned easy to code in. Thus, I needed to parallelize a GA written in Python. Thankfully, genetic algorithms are embarrassingly parallel, so I didn’t need much in the way of inter-process communication, but this was where I ran into a few bumps. For the veterans, my troubles are going to seem incredibly banal, but for other newbies, I thought I might document my steps.

First: why not threads? If you don’t know, the Global Interpreter Lock (which applies to CPython, the C implementation of Python, for sure; I’m not sure whether other implementations have a GIL) essentially means that within any single instance of the Python interpreter, only one thread is in the driving seat at a time, so running multiple threads at once does nothing to boost speed even on multi-core systems. Thanks to the GIL, the only way to utilize multiple processors is to spawn multiple interpreter processes: each interpreter has its own GIL, and the processes run independently.

At least on Unix-like systems (e.g. Linux, OS X, etc.), one of the recommended ways to handle this is to create child processes with fork(); the parent process acts as a bookkeeper, while the child processes do the work.

The general form of it is this:

import os

def child():
    # Actions to be completed in the child process go here.
    os._exit(0)  # make sure the child exits instead of falling through

def parent():
    # Actions to be completed in the parent process go here.
    pass


# In the main section of your code:
newpid = os.fork()

if newpid == 0:
    child()    # fork() returned 0: we're in the child
else:
    parent()   # fork() returned the child's pid: we're in the parent

Why does this work? It takes a bit of a mental adjustment to understand. (Note: because fork() is a system call on Unix and Unix-like operating systems, this basic paradigm will work in any programming language that provides an interface to it, of which Python is only one; it just happens to be the one that I’m using.)

Just before the os.fork() call, there is one process in memory. After the os.fork() call, there are two processes in memory: one is the parent, one is the child, and the child is an exact copy of the parent. When I say exact copy, I mean it. From the point of view of the programmer, each process has all the same variables, with the same values, and the point of execution starts from immediately after the os.fork() call in both processes.
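To convince yourself of the copy semantics, here’s a tiny sketch (the variable x and the function name are mine, purely for illustration): the child mutates its copy of a list created before the fork, and the parent’s copy is untouched.

```python
import os

def show_copy_semantics():
    x = [1]  # created before the fork, so both processes start with it
    pid = os.fork()
    if pid == 0:
        # Child process: mutating x changes only the child's copy.
        x.append(2)
        os._exit(0 if x == [1, 2] else 1)
    # Parent process: wait for the child, then inspect our own copy.
    _, status = os.waitpid(pid, 0)
    return x, os.WEXITSTATUS(status)
```

The parent’s list stays [1] even after the child appends to its own copy; the child reports what it saw through its exit status.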

How does each process know which one it is? The newpid variable is the key. os.fork() will return one of two values: to the parent process, it returns the process id (PID) of the child, while to the child process it returns 0. That’s it! Easy, eh?

Usually, yes. There are some subtleties to account for, certainly. For instance, remembering that each process has the same state (variables + values) after the fork() call, you have to make sure not to do things like have both processes try to work with the same file at the same time (though that’s not the same as intentionally using an object to communicate, like a pipe). But for my purposes, this trick got me almost all the way there.
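As an aside, the intentional-communication case looks roughly like this (a minimal sketch; the payload b"42" is obviously just a stand-in): the parent creates a pipe before forking, the child writes a result into it, and the parent reads it back.

```python
import os

def answer_from_child():
    # The pipe is created before the fork, so both processes share it.
    read_fd, write_fd = os.pipe()
    pid = os.fork()
    if pid == 0:
        # Child process: write a result into the pipe and exit.
        os.close(read_fd)   # the child only writes
        os.write(write_fd, b"42")
        os._exit(0)
    # Parent process: read the child's result, then reap the child.
    os.close(write_fd)      # the parent only reads
    result = os.read(read_fd, 1024)
    os.close(read_fd)
    os.waitpid(pid, 0)
    return result
```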

Almost, anyways. The situation was complicated by the fact that I needed to spawn more than one process – in fact, what I wanted to do was spawn a total of eight to max out the processors. Getting the list of child processes up to eight wasn’t that hard, since it just meant keeping track of the child process ids and counting them to know whether I’d hit eight yet. But what happens when the children are finished?
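The spawning loop itself can be sketched like this (do_work() is a placeholder for the real GA work, and the count is parameterized rather than hard-coded at eight):

```python
import os

def do_work():
    # Placeholder for the real work a child process would do.
    os._exit(0)  # a child must exit here, never return to the loop

def spawn_workers(n):
    # Fork until we have n children, keeping track of their pids.
    children = []
    while len(children) < n:
        pid = os.fork()
        if pid == 0:
            do_work()  # never returns; the child exits inside do_work()
        children.append(pid)
    return children
```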

Well, the child processes die, but how does the parent know? Herein lay the rub, and I ended up meandering down the wrong path for a while. There are a few ways to keep track of child processes. Two of the main ones are signals and os.waitpid(), and I started with signals.

In the Unix world, when a child process dies it sends a signal, SIGCHLD, to its parent. In Python, you can install a handler for that signal, essentially a function which is called when the parent process receives SIGCHLD. I thought that this sounded like a great way to keep all eight cores humming: just spawn eight child processes, then prune the list of child pids by dropping them as each SIGCHLD is caught. However, this turned out to be problematic, because the signal itself wreaked havoc with my child processes in ways I don’t even fully understand; the biggest problem was that the arrival of SIGCHLD interrupted the subprocess calls that my other children were making. Googling around, I discovered that the use of SIGCHLD is generally discouraged (though the reasons given are usually vague), so I switched to the other method.
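For the record, the signal-handler version looked roughly like this (a reconstruction for illustration, not my exact code): install a handler that reaps the dead child and records its pid.

```python
import os
import signal

finished = []  # pids of children that have been reaped

def reap_child(signum, frame):
    # Called whenever the parent receives SIGCHLD: reap the dead
    # child (non-blocking) so it doesn't linger as a zombie.
    pid, status = os.waitpid(-1, os.WNOHANG)
    if pid:
        finished.append(pid)

signal.signal(signal.SIGCHLD, reap_child)
```

The trouble is that the signal arrives asynchronously, interrupting whatever system call the process happens to be in at the time.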

The other method is os.waitpid(). This call allows the parent to determine the status of child processes, and it takes two arguments: the first is the process id of the child to poll, and the second is an options flag. I’ll leave it to the Python docs to explain all the subtleties. The important point is that by passing a pid of -1 to waitpid(), the function will report on the status of any child process that the parent has spawned, which allowed me to poll the children while I did other processing with a call to waitpid() that looked like this:

(pid, status) = os.waitpid(-1, os.WNOHANG)
if pid:
    # One of the children has died; let's get another going.

waitpid() returns two numbers: a process id and a status code (see the documentation for wait() in the os module). With the os.WNOHANG option, if none of the children have finished, waitpid() returns (0, 0); otherwise, the first element of the tuple is the process id of the child that has finished.
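Putting the pieces together, the parent’s main loop ended up looking something like the following sketch (spawn_worker() stands in for forking off one GA run, and the real version did useful processing between polls instead of just spinning):

```python
import os

def spawn_worker():
    # Fork one child; the child does its work and exits,
    # while the parent gets the child's pid back.
    pid = os.fork()
    if pid == 0:
        # ... the actual work would go here ...
        os._exit(0)
    return pid

def run_pool(total_jobs, max_workers=8):
    active = set()
    launched = 0
    done = 0
    while done < total_jobs:
        # Keep the pool topped up to max_workers children.
        while launched < total_jobs and len(active) < max_workers:
            active.add(spawn_worker())
            launched += 1
        # Non-blocking poll: has any child died?
        pid, status = os.waitpid(-1, os.WNOHANG)
        if pid:
            active.discard(pid)
            done += 1
        # ... other processing (e.g. collecting results) goes here ...
    return done
```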

This turned out to be a much better mechanism for tracking the children that my parent process had spawned, and meant that the rest of the effort involved in getting my GA to run in parallel was minimal. So the main point of this particular screed, besides gratifying myself for the time I spent debugging, was to hopefully prevent someone else from making the same mistake that I did. So, there you go!

UPDATE: I was just re-reading this post, and I realized that I may have seemed like I was being a little hard on the GIL. From what I’ve read, it does seem to have some serious advantages to go with the potential downside, and I’m fairly convinced that it was a good idea overall. I also didn’t mention that there are packages that will help you parallelize your code: for simpler jobs, you could try something like Parallel Python (which, from what I understand, does essentially what I did above with more finesse and the ability to do some inter-process communication), or you could scale up to the big guns and use a Python binding to MPI (the Message Passing Interface) like mpi4py or pypar. I ended up writing my own because my problem was actually too simple for either, and the extra functionality just got in the way.


May 30, 2008

Okay, so I’m still trying to carve out some time from actual science to do some writing about science, but in the meantime, I thought I might share the wonder that is the FRANKEN-PUTER! It’s ALIVE! AHAHAHAHA!

This is a simulation workhorse that I’m building for the lab; it’s an octo-core (2 x quad-core Xeon) system, 8 gigs of RAM, etc. The problem is, I can’t stick the mobo in the chassis because the case (a Coolermaster Stacker 830) was sent with the wrong standoffs for the motherboard. I understand that I’m not the only person to have this problem, and they’re rushing out new spacers to me.

I didn’t want to wait to get Ubuntu running on this beast, though, so I set it running regardless. I’m not taxing it too much while its guts are spilled all over the table, but hearing the CPU fans purr when I fire it up and watching that beautiful heron jump onto the new 22-inch flat screen LCD does give me a pleasant little shiver…

Before I forget!

May 28, 2008

Over at Skulls in the Stars, there’s a great challenge going on to write blog posts on older papers in your field (which bummed me out a bit when I read it, seeing as I thought I was being so clever by doing pretty much exactly the same thing); it ends at the end of May, so link’em if you’ve got’em! A current listing of the submissions can be found here. (Hat tip to Coturnix @ A Blog Around The Clock).

P.S. You can view my entry here.

Back to our regularly scheduled program.

May 28, 2008

Blogging’s been light for the past couple of days, because I’ve been tending to the home fires, but I expect things to pick back up again now that I’m back in Montréal. On the upcoming list:

  • My next post on a classic paper from foraging theory.
  • A rant on what not to do when buying a digital camera, to replace the rant I already wrote and then lost.
  • A discussion on why journals should accept LaTeX.

And that’s just the *planned* bits. But for now, I’m fighting jet lag and I’m off to bed.

The consequence of “intelligent design”.

May 28, 2008

I’m reading a great book right now called Liberty: The Lives and Times of Six Women in Revolutionary France, by Lucy Moore. The basic premise is (as the title suggests) the lives of six women who were important during the French Revolution, both to tell their stories and to illustrate larger issues that ran through the Revolution like threads. The book is fantastic so far, but one passage leapt out at me as an illustration of the perils of the thinking inherent in “intelligent design”. Moore is describing the influence that Rousseau had on the French, and especially on French women. I quote from Moore’s book in full here to illustrate the point:

Rousseau, in glorifying women as wives and mothers, denied them any role outside the home. ‘There are no good morals for women outside of a withdrawn and domestic life,’ he wrote. ‘A woman outside her home loses her greatest confidence, and is shorn of her true adornments, shows herself indecently. If she has a husband, what is she out seeking among men?’ For him, as for so many of his generation, sexual inequality created an ideal equilibrium: men were dominant, active and reasoning, and their role was public; women were emotional, modest, and loving, and their role was private. ‘A taller stature, a stronger voice, and features more strongly marked seem to have no necessary bearing on one’s sex, but these exterior modifications indicate the intentions of the creator in the modifications of the spirit,’ he reasons in La Nouvelle Héloïse. ‘The souls of a perfect woman and a perfect man must not resemble each other more than their appearances.’ According to this argument, the complementary differences between the sexes were essential to maintaining social harmony. (emphasis mine).

I don’t know Rousseau’s work well enough to know how far he might have taken this particular thought (I’m a biologist, not a philosopher), but it really doesn’t matter – I’m interested in the thought itself, because it exemplifies the problem with creationism and its cousin, ID.

In a heavy bit of irony, the problem is an issue that is usually attributed to evolutionary theory by its critics: the “just-so” story, an unverifiable, ad hoc explanation for a phenomenon. Rousseau makes a laundry list of differences between men and women (the possible existence of which is bitterly debated these days), and then, in a crucial move, justifies those differences by appealing to the wishes of the all-powerful Creator. In doing so, Rousseau not only explains and accepts those differences, but also removes any need to change a thing! Since the Creator wills it to be so, there is no need (and some might say we have no right) to modify the “natural” order of things.

This is a modification and elaboration of teleological arguments like the watchmaker argument advanced by William Paley, who is, incidentally, only the most famous proponent of this particular claptrap. Teleological arguments (arguments from design) only attempt to prove that God exists, but Rousseau’s sentiment goes further by presupposing God and then divining God’s will from the way things are. This useful construction (a classic just-so story) allows creationists to deny any change or progress that they wish to, simply by claiming that God made things the way they are and there is no need to modify them; just ignore any inconvenient historical fact that gets in the way. It is also implicit in the creationist argument that “micro-evolution” is fine but “macro-evolution” is rubbish: God set forth the species as they are, and so speciation can’t occur. Of course, putting aside the fact that evolutionary theorists don’t use the term macroevolution as a categorical distinction but rather as a point on a continuum, the recent deluge of credible evidence showing speciation and macroevolution occurring in front of our eyes has left creationists retreating into babble about “information”.

And yet, in a twisted bit of logic, it is creationists who accuse evolutionary theorists of blundering into this particular fallacy….

Insert pithy blog post here.

May 25, 2008

Well, I just spent two hours writing a great post on five things not to do when buying a digital camera, and then the desktop blogging client I was trying out (Ecto) dumped the text and I can’t get it back. So now, I’m just sitting here and spitting incoherently at my keyboard.

I’m going to go have dim sum…

Which browser do *you* use?

May 24, 2008

Via Boing Boing, I saw this pop into my RSS feeds:

Smartest Browser and OS

The IQ League apparently has a 60-second IQ test, which they’re matching against their web server logs to trumpet the smartest browser, operating system, website referrers, countries, and so on. Of course, IQ is a contentious field to begin with, and things like this tend to anger the ever-lovin’ spit out of psychologists in this area (there’s nothing like the statistical properties of g to get them foaming at the mouth).

But it’s fun to look at, nonetheless…