Registration is open for my October Webinars (about regexps and technical training)

September has been busy with work and holidays, but I’m gearing up for an exciting and busy October. Among other things, I’m giving two (free) Webinars in that month, and you can already register for them:

  • Intermediate Regular Expressions: In my previous Webinar about regular expressions, I covered the basics.  In this one, we’ll go further, spending a great deal of time talking about groups, backreferences, and some other topics that tend to confuse people.  I’ll be using Python to demonstrate regexps, but this Webinar isn’t aimed only at Python developers. Registration is free; sign up here.
  • Technical training: I’m starting to spend more and more time helping people to become technical trainers, teaching programming in high-tech companies. (Indeed, I’m in the process of starting my coaching program for people interested in improving their training skills.) In this Webinar, which I expect will be the first of many, I’ll give an overview of the technical-training landscape, how it works, and why you should seriously consider providing training services. I’ll briefly review pedagogical, logistical, and business considerations for the aspiring technical trainer. Registration for this training is free; you can sign up here.

Now available: My interview on “Developer on Fire” about training programmers

I was recently interviewed by Dave Rael for the “Developer on Fire” podcast. That episode has now been released. Enjoy!

In PostgreSQL, as in life, don’t wait too long to commit

I recently helped a client with serious PostgreSQL problems. Their database is part of an application that configures and monitors large networks; each individual network node reports its status to the database every five minutes. The client was having problems with one table in particular, containing about 25,000 rows — quite small, by modern database standards, and thus unlikely to cause problems.

However, things weren’t so simple: This table was growing to more than 20 GB every week, even though autovacuum was running. In response, the company established a weekly ritual: Shut down the application, run a VACUUM FULL, and then restart things. This solved the problem in the short term — but by the end of the week, the database had returned to about 20 GB in size.  Clearly, something needed to be done about it.

I’m happy to say that I was able to fix this problem, and that the fix wasn’t so difficult. Indeed, the source of the problem might well be obvious to someone with a great deal of PostgreSQL experience. That’s because the problem mostly stemmed from a failure to understand how PostgreSQL’s transactions work, and how they can influence the functioning of VACUUM, the size of your database, and the reliability of your system. I’ve found that this topic is confusing to many people who are new to PostgreSQL, and I thus thought that it would be useful to walk others through the problem, and then describe how we solved it.

PostgreSQL, like many other modern relational databases, uses a technology known as multiversion concurrency control (“MVCC”).  MVCC reduces the number of exclusive locks placed on a database’s rows, by keeping around multiple versions of each row. If that sounds weird, then I’ll try to explain. Let’s start with a very simple table:
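
CREATE TABLE Foo (
    id SERIAL PRIMARY KEY,   -- definition reconstructed from the queries that follow
    x  INTEGER
);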


Now let’s add 1 million rows to that table:

INSERT INTO Foo (x) VALUES (generate_series(1,1000000));

Now, how much space does this table take up on disk? We can use the built-in pg_relation_size function, but that’ll return a large integer representing a number of bytes. Fortunately, PostgreSQL also includes pg_size_pretty, which turns a number into something understandable by computer-literate humans:

SELECT pg_size_pretty(pg_relation_size('foo'));

35 MB

What happens to the size of our table if we UPDATE each row, incrementing x by 1?

UPDATE Foo SET x = x + 1;
SELECT pg_size_pretty(pg_relation_size('foo'));

69 MB

If you’re new to MVCC, then the above is probably quite surprising. After all, you might expect an UPDATE query to change each of the existing 1 million rows, keeping the database size roughly identical.

However, this is not at all the case: Rather than replace or update existing rows, UPDATE in an MVCC system adds new rows. Indeed, in an MVCC-based system, our queries never directly change or remove any rows. INSERT adds new rows, DELETE marks rows as no longer needed, and UPDATE performs both an INSERT and a DELETE.

That’s right: In an MVCC system, UPDATE doesn’t change existing rows, although we might have the illusion of this occurring.  And every time you UPDATE all of the rows in a table, you’re (at least temporarily) doubling its size.

At this point, many newcomers to PostgreSQL are even more confused: How can it be that deleted rows don’t go away? And why would UPDATE work in such a crazy way? And how can all of this possibly help me?

The key to understanding what’s happening is to realize that once a row has been added to the system, it cannot be changed. However, its space can be reused by new rows, if the old row is no longer accessible by any transaction.

This starts to make a bit more sense if you realize that every transaction in PostgreSQL has a unique ID number, known as the “transaction ID.” This number constantly increases, providing us with a running count of transactions in the system. Every time we INSERT a row, PostgreSQL records the first transaction ID from which the row should now be visible, in a column known as “xmin.” You can think of xmin as the row’s birthday. You cannot normally see xmin, but PostgreSQL will show it to you, if you ask for it explicitly:

SELECT  xmin, * FROM Foo WHERE id < 5 ORDER BY id;

│  xmin  │ id │ x │

│ 187665 │  1 │ 2 │
│ 187665 │  2 │ 3 │
│ 187665 │  3 │ 4 │
│ 187665 │  4 │ 5 │

In an MVCC-based system, DELETE doesn’t really delete a row.  Rather, it indicates that a row is no longer available to future transactions. We can see an initial hint of this if, when we SELECT the contents of a table, we ask for not only xmin, but also xmax — the final transaction (or “expiration date,” if you like) for which a row should be visible:

SELECT  xmin, xmax, * FROM Foo WHERE id < 5 ORDER BY id;

│  xmin  │ xmax │ id │ x │

│ 187665 │    0 │  1 │ 2 │
│ 187665 │    0 │  2 │ 3 │
│ 187665 │    0 │  3 │ 4 │
│ 187665 │    0 │  4 │ 5 │

We can see that each of these rows was added in the same transaction ID (187665). Moreover, all of these rows are visible, since their xmax value is 0. (Don’t worry; we’ll see other xmax values in a little bit.)  If I update some of these rows, we’ll see that their xmin value will be different:

UPDATE Foo SET x = x + 1 WHERE id IN (1,3);
SELECT  xmin, * FROM Foo WHERE id < 5 ORDER BY id;

│  xmin  │ id │ x │

│ 187668 │  1 │ 3 │
│ 187665 │  2 │ 3 │
│ 187668 │  3 │ 5 │
│ 187665 │  4 │ 5 │

The rows with ID 1 and 3 were added in transaction 187668, whereas the rows with ID 2 and 4 were added in an earlier transaction, number 187665.

Since I’ve already told you that rows cannot be changed by a query, this raises an interesting question: What happened to the rows with IDs 1 and 3 that were added in the first transaction, number 187665?

The answer is: Those rows still exist in the system! They have not been modified or removed. But the system did mark them as having an xmax of 187668, the transaction ID in which the new, replacement rows are now available.

Transaction IDs only go up. But you can imagine a hypothetical time-traveler’s database, in which you could move the transaction ID down, and see rows from an earlier era. (Somehow, I doubt that this would be a very popular television series.)

The thing is, we don’t need a hypothetical, time-traveler’s database in order to see such behavior. All we need is to open two separate sessions to our PostgreSQL database at the same time, and have each session open a transaction.

You see, when you BEGIN a new transaction, your actions are hidden from the rest of the world — and similarly, modifications made elsewhere in the database are hidden from you.  You can thus have a situation in which two different sessions see completely different versions of the same row, thanks to their different transaction IDs.  This might sound crazy, but it actually helps the database to run smoothly, with a minimum of conflicts and locks.

So, let’s open two different PostgreSQL sessions, and take a look at what happens.  In order to distinguish between them, I used the \set command in psql to make PROMPT1 either “Session A>” or “Session B>”. Here’s session A:

Session A> BEGIN;
Session A> select  xmin, xmax, * from foo where id < 5 order by id;

│  xmin  │ xmax │ id │ x │

│ 187668 │    0 │  1 │ 3 │
│ 187665 │    0 │  2 │ 3 │
│ 187668 │    0 │  3 │ 5 │
│ 187665 │    0 │  4 │ 5 │

And here is session B:

Session B> BEGIN;
Session B> select  xmin, xmax, * from foo where id < 5 order by id;

│  xmin  │ xmax │ id │ x │

│ 187668 │    0 │  1 │ 3 │
│ 187665 │    0 │  2 │ 3 │
│ 187668 │    0 │  3 │ 5 │
│ 187665 │    0 │  4 │ 5 │

At this point, they are identical; both sessions see precisely the same rows, with the same xmin and xmax. Now let’s modify the values of two rows in session A:

Session A> update foo set x = x + 1 where id in (1,3);

What do we see now in session A?

Session A> select xmin, xmax, * from foo where id < 5 order by id;

│  xmin  │ xmax │ id │ x │

│ 187670 │    0 │  1 │ 4 │
│ 187665 │    0 │  2 │ 3 │
│ 187670 │    0 │  3 │ 6 │
│ 187665 │    0 │  4 │ 5 │

Just as we would expect, we have two new rows (IDs 1 and 3), and two old rows (IDs 2 and 4), which we can distinguish via the xmin value.  In all cases, the rows have an xmax value of 0.

Let’s return to session B, and see what things look like there.  (Note that I haven’t committed any transactions here!)

Session B> select  xmin, xmax, * from foo where id < 5 order by id;

│  xmin  │  xmax  │ id │ x │

│ 187668 │ 187670 │  1 │ 3 │
│ 187665 │      0 │  2 │ 3 │
│ 187668 │ 187670 │  3 │ 5 │
│ 187665 │      0 │  4 │ 5 │

That’s right — the rows that session B sees are no longer the rows that session A sees.  Session B continues to see the same rows as were visible when it started the transaction. We can see that the xmax of session B’s old rows is the same as the xmin of session A’s new rows. That’s because any transaction after 187670 will see the new row (i.e., with an xmin of 187670), but any transaction started before 187670 will see the old row (i.e., with an xmax of 187670).
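
In rough pseudocode, the visibility rule looks something like this (a greatly simplified sketch; the real check also accounts for committed, aborted, and in-progress transactions):

def row_is_visible(row, my_txid):
    # Simplified sketch: the row was created at or before my transaction...
    born_before_me = row.xmin <= my_txid
    # ...and was either never expired, or expired after my transaction began.
    not_yet_expired = (row.xmax == 0) or (row.xmax > my_txid)
    return born_before_me and not_yet_expired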

I hope that the picture is now coming together: When you INSERT a new row, its xmin is the current transaction ID, and its xmax is 0.  When you DELETE a row, its xmax is changed to be the current transaction ID.  And when you UPDATE a row, PostgreSQL does both of these actions.  That’s why the UPDATE I did earlier, on all of the rows in the table, doubled its size.  (More on that in a moment.)

This allows different transactions to work independently, without having to wait for locks. Imagine that session B wants to look at the row with ID 3; because of MVCC, it doesn’t need to wait to read it.  Heck, session B can even change the row with ID 3, although it’ll hang until session A commits or rolls back.

And indeed, another benefit of MVCC is that rollbacks are trivially easy to implement: If we COMMIT session A and roll back session B, then session A’s view of the world will be the current one for all following transactions.  But if we ROLLBACK session A, then no cleanup is needed; PostgreSQL simply keeps track of the fact that session A was rolled back, and we continue (until further notice) to use the rows with an xmax of 187670.

As you can imagine, this can lead to some trouble. If every UPDATE increases the number of rows, and every DELETE fails to free up any space, every PostgreSQL database will eventually run out of room, no? Well, yes — except for VACUUM, which is the garbage-collecting mechanism for PostgreSQL’s rows. VACUUM can be run manually, but it’s normally run via “autovacuum,” a program that runs in the background, and invokes VACUUM on a regular basis.

VACUUM doesn’t return space from your PostgreSQL database to the operating system. Rather, it identifies rows that no transaction can see any longer (that is, rows whose xmax is older than the oldest still-open transaction), and marks their space as available for re-use. PostgreSQL thus keeps track of not only the current transaction ID, but also the ID of the earliest still-open transaction; so long as that transaction is open, it needs to be able to see the rows from that point in time.

Don’t confuse VACUUM with VACUUM FULL; the latter does indeed return disk space to the operating system, but it takes an exclusive lock on each table while rewriting it. If you have a serious 24/7 application, then VACUUM FULL is a very bad idea. The regular autovacuum daemon, albeit perhaps with a bit of configuration, should take care of identifying which rows can and should be marked for re-use.

In the case of my client’s problem, my initial thought was that autovacuum wasn’t running. However, a quick look at the pg_stat_user_tables view showed that autovacuum was running on a regular basis.
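
For reference, that view also shows when autovacuum last ran on each table, and how many dead rows PostgreSQL believes remain. A query along these lines (with 'foo' standing in for your table's name) does the trick:

SELECT relname, last_autovacuum, n_dead_tup
FROM pg_stat_user_tables
WHERE relname = 'foo';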

However, VACUUM will only mark a row as eligible for reuse if no transaction can still see it. (After all, if a session remains open, then it still needs to see those old row versions.)  This means that if a transaction remains open from the time the system starts up, VACUUM will never identify any rows as eligible for reuse, because there is always an open transaction whose ID is lower than the xmax of the deleted rows.

And this is precisely what happened to my client: It turns out that their application, at startup, was opening several transactions with BEGIN statements… and then never committing those transactions or rolling them back.  None of those transactions led to a lock in PostgreSQL, because they weren’t executing any query that would lead to a conflict. And autovacuum was both running and reporting that it ran on each table — because it had indeed done so.

But when it came time to mark the old rows as eligible for reuse, VACUUM ignored all of the rows in the entire database, including our 25,000-row table — because of the transactions that the application had opened at startup. It didn’t matter how many times we ran VACUUM, whether it was done manually or automatically, or what else was running at the time: All of the old versions of every row in our table were kept around by PostgreSQL, leading to massive disk usage and database slowness. Stopping the application had the effect of closing those transactions, and thus allowing VACUUM FULL to do its magic, albeit at the cost of system downtime. And because the application opened the transactions at startup, any fix was bound to be a temporary one.

My client was a bit surprised to discover that the entire database could encounter such problems from an open transaction that was neither committed nor rolled back. However, I hope that you now see why the space wasn’t reclaimed, and why the database acted as it did.

The solution, of course, was to get rid of those three open, empty transactions at startup; once we did that, everything ran smoothly.
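
If you want to check whether your own application is holding transactions open in this way, pg_stat_activity will tell you. On PostgreSQL 9.2 and later, where that view has a “state” column, a query along these lines lists every session that has opened a transaction but isn’t executing anything:

SELECT pid, xact_start, state, query
FROM pg_stat_activity
WHERE state = 'idle in transaction'
ORDER BY xact_start;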

Transactions are a brilliant and useful invention, but in PostgreSQL (and other MVCC systems), you shouldn’t hold onto open transactions for too long. Doing so might have surprisingly unpleasant consequences. Commit or roll back as soon as you can — or you might find yourself with a database accumulating gigabytes of old, unnecessary rows.

Announcing: Coaching for technical trainers

What, exactly, do I do for a living?

Yes, I’m a programmer — or as I’m supposed to say nowadays, a “full-stack Web developer.” And yes, I’m a lead developer/CTO.  And yes, I’m a writer, with two ebooks written (about Python and visiting China), two more on the way, and my monthly Linux Journal column now in its 20th (!) year.

But over the last few years, another role has increasingly dominated the others: Much of my time is now spent as a technical trainer, teaching a variety of open-source technologies — Python, Ruby, Git, and PostgreSQL — to companies around the world.

Some software developers believe that training is less demanding, less fulfilling, or even less lucrative than creating software. And for some of them, that might well be true.

I have personally found training to be no less demanding than developing software — but I have also found that it is extremely fulfilling, and that it does a more-than-adequate job of paying the bills. We all know that high-tech companies are desperately looking for high-quality developers; when I do my job right, I provide them with such developers, ready to use the latest technologies to solve problems more efficiently and reliably than would otherwise have been possible.

The good news is that my training business is going quite well; I’m now booked solid for several months in advance, and I get to work with some great companies and very bright engineers. Part of my motivation for writing ebooks is now to reach the people whom I cannot possibly teach in person.

Being a trainer requires a number of skills beyond knowing how to program: You also need to know how to organize a syllabus, prepare exercises, prepare slides and printed materials, speak in front of a group, and answer questions. Beyond that, you also need to understand the business side of training — what are companies looking for, how do you approach them, how much do you charge them, and how can you grow your training business when companies are satisfied with your work. These skills, like many others, take time to develop, and everyone has their own strengths and weaknesses. But I believe that if you’re willing to put in the effort, then you can learn how to become a technical trainer, and have the same sense of job satisfaction that I do.

If you are such a person, interested in offering technical training — on any subject, not just the technologies in which I specialize — then I invite you to consider joining my coaching program for technical trainers. This isn’t a course; it’s a personalized program that will help you to improve, month by month, in all of the ways that you need to become successful. You’ll have access not just to me, but to other trainers with varying levels of experience, who will provide you with feedback — just as I hope you’ll help them. The program is still in its infancy, but I believe that I’ve put together a combination of resources that can help everyone to become a trainer.

I’m launching the coaching program in two weeks, on October 1st, 2015. There aren’t any formal start or finish times; if you want to start later, then that’s fine, as well. My hope is that you’ll stay in the program for as long as you need to improve, getting feedback from me and others.  I also hope and expect that the program will more than pay for itself.

I’ll be holding a free Webinar on the subject of technical training on October 14th, at which I’ll also be taking questions from anyone who might be new to this field, or be curious about what it involves. I invite you to read more about my coaching program, to contact me if you have any questions about it, and to register for the Webinar that I’ll be holding next month!

Understanding nested list comprehensions in Python

In my last blog post, I discussed list comprehensions, and how to think about them. Several people suggested (via e-mail, and in comments on the blog) that I should write a follow-up posting about nested list comprehensions.

I must admit that nested list comprehensions are something that I’ve shied away from for years. Every time I’ve tried to understand them, let alone teach them, I’ve found myself stumbling for words, without being clear about what was happening, what the syntax is, or where I would want to use them. I managed to use them on a few occasions, but only after a great deal of trial and error, and without really understanding what I was doing.

Fortunately, the requests that I received, asking how to work with such nested list comprehensions, forced me to get over my worries. I’ve figured out what’s going on, and even think that I understand what my problem was with understanding them before.

The key thing to remember is that in a list comprehension, we’re dealing with an iterable. So when I say:

[ len(line) 
for line in open('/etc/passwd') ]

I’m saying that I want to iterate over the file object we got from opening /etc/passwd. There will be one element in the output list for each element in the input iterable — i.e., every line in the file.

That’s great if I want my list comprehension to return something based on each line of /etc/passwd. But each line of /etc/passwd is a string, and thus also iterable. Maybe I want to return something not based on the lines of the file, but on the characters of each line.

Were I to use a “for” loop to process the file, I would use a nested loop — i.e., one loop inside of the other, with the outer loop iterating over lines and the inner loop iterating over characters. Such a nested loop might look like this:
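
output = []
for line in open('/etc/passwd'):      # outer loop: each line in the file
    for character in line:            # inner loop: each character in that line
        output.append(character)

It turns out that we can use a nested list comprehension, too. Here’s a simple example of a nested list comprehension: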

[(x,y) for x in range(5) for y in range(5)]

If your reaction to this is, “What in the blazes does that mean?!?” then you’re not alone. Until just recently, that’s what I thought, too.

However: If we rewrite the above nested list comprehension using my preferred (i.e., multi-line) list-comprehension style, I think that things become a bit clearer:

[(x,y)
 for x in range(5)
 for y in range(5)]

Let’s take this apart:

  • Our output expression is the tuple (x,y). That is, this list comprehension will produce a list of two-element tuples.
  • We first run over the source range(5), giving x the values 0 through 4.
  • For each value in x, we run through the source range(5), giving y the values 0 through 4.
  • The number of values in the output depends on the number of runs of  the final (second) “for” line.
  • The output, not surprisingly, will be all of the two-element tuples from (0,0) to (4,4).

Now, let’s mix things up by changing them a bit:

[(x,y)
 for x in range(5)
 for y in range(x+1)]

Notice that now, the maximum value of y will vary according to the value of x. So we’ll get from (0,0) to (4,4), but we won’t see such things as (2,4) because y will never be larger than x.
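
Here’s the actual output, which makes the triangular pattern easy to see:

>>> [(x,y) for x in range(5) for y in range(x+1)]
[(0, 0), (1, 0), (1, 1), (2, 0), (2, 1), (2, 2), (3, 0), (3, 1), (3, 2), (3, 3), (4, 0), (4, 1), (4, 2), (4, 3), (4, 4)]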

Again, it’s important to understand several things here:

  • Our “for y” loop will execute once for each iteration over x.
  • In our “for y” loop, we have access to the variable x.
  • In our “for x” loop, we don’t have access to y (unless you consider the last value of y to be useful, but you really shouldn’t).
  • Our (x,y) tuple is output once for each iteration of the *final* loop, at the bottom.

Here’s another example: Assume that we have a few friends over, and that we have decided to play several games of Scrabble. Being Python programmers, we have stored our scores in a dictionary:

scores = {'Reuven':[300, 250, 350, 400],
          'Atara':[200, 300, 450, 150],
          'Shikma':[250, 380, 420, 120],
          'Amotz':[100, 120, 150, 180] }

I want to know each player’s average score, so I write a little function:

def average(scores):  
    return sum(scores) / len(scores)

If we want to find out each individual’s average score, we can use our function and a standard comprehension — in this case, a dict comprehension, to preserve the names:

 >>> { name : average(score)  
       for name, score in scores.items() }

{'Amotz': 137, 'Atara': 275, 'Reuven': 325, 'Shikma': 292}

But what if I want to get the average score, across all of the players? In such a case, I will need to grab each of the scores from inside of the inner lists. To do that, I can use a nested list comprehension:

>>> average([ one_score  
              for one_player_scores in scores.values()  
              for one_score in one_player_scores ])
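
257

(That’s under Python 2, whose integer division also produced the whole-number averages above; under Python 3, you would get 257.5.)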


What if I’m only interested (for whatever reason) in including scores that were above 200? As with all list comprehensions, I can use the “if” clause to weed out values that I don’t want. That condition can use any and all of the values that I have picked out of the various “for” lines:

>>> [ one_score      
      for one_player_scores in scores.values()     
      for one_score in one_player_scores
      if one_score > 200]

[300, 250, 350, 400, 300, 450, 250, 380, 420]

If I want to put these above-200 scores into a CSV file of some sort, I could do the following:

>>> ','.join([ str(one_score)  
               for one_player_scores in scores.values() 
               for one_score in one_player_scores  
               if one_score > 200])
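
'300,250,350,400,300,450,250,380,420'

(This assumes the same dictionary ordering as in the previous example; dictionaries in Python 2 don’t guarantee any particular order.)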


Here’s one final example that I hope will drive these points home: Let’s assume that I have information about a hotel. The hotel has stored its information in a Python list. The list contains lists (representing rooms), and each sublist contains one or more dictionaries (representing people). Here’s our data structure:

rooms = [[{'age': 14, 'hobby': 'horses', 'name': 'A'},  
          {'age': 12, 'hobby': 'piano', 'name': 'B'},  
          {'age': 9, 'hobby': 'chess', 'name': 'C'}],  
         [{'age': 15, 'hobby': 'programming', 'name': 'D'}, 
          {'age': 17, 'hobby': 'driving', 'name': 'E'}],  
         [{'age': 45, 'hobby': 'writing', 'name': 'F'},  
          {'age': 43, 'hobby': 'chess', 'name': 'G'}]]

What are the names of the people staying at our hotel?

 >>> [ person['name']      
       for room in rooms
       for person in room ]

['A', 'B', 'C', 'D', 'E', 'F', 'G']

How about the names of people staying in our hotel who enjoy chess?

>>> [ person['name']  
      for room in rooms  
      for person in room  
      if person['hobby'] == 'chess' ]

['C', 'G']

Basically, every “for” line flattens the items over which you’re iterating by one more level, and gives you access to that level both in the output expression (i.e., the first line) and in the condition (i.e., the optional final line).

I hope that this helps you to understand nested list comprehensions. If it did, please let me know! (And if it didn’t, please let me know that, as well!)

Want to understand Python’s comprehensions? Think in Excel or SQL.

Comprehensions are among the most useful constructs in Python. They merge the old, trusty “map” and “filter” functions into a single piece of compact, elegant syntax, allowing us to express complex ideas in a minimum of code. Comprehensions are one of the most important tools in a Pythonista’s toolbox.

And yet, I have found that a very large number of Python programmers, including some experienced developers, are not completely comfortable with comprehensions. There are two reasons for this: First, it’s not obvious when to use them, and what sorts of problems they solve. The second problem, which is at least as important, is that the syntax is hard for people to remember and understand.

I’ve started to use a new explanation and introduction to comprehensions in my Python classes, and have found that it helps to lower the learning curve to some degree. In this post, I’m publicizing this explanation, in the hopes that it’ll help Python developers to understand when, where, and how to use comprehensions.

Let’s take a simple problem: I want to take a list of five integers, and get a list of their squares. If you give this problem to a new (or even intermediate) Python programmer, the answer would look something like this:

numbers = range(5)
output = [ ]
for number in numbers:
    output.append(number * number)

Now, the thing is that this does work. (In my courses, I often use the phrase, “Unfortunately, this works.”) Often, when I talk about comprehensions, I talk about functional programming, the idea of immutable data structures, the idea that we don’t want to change things, and the benefits of thinking in terms of mapreduce.

But let’s ignore all of that, and ask a simpler question: If you were to give this problem to your accountant, how would they solve the problem?

Almost certainly, an accountant would fire up Excel, and put the numbers (0 through 4, in our case) into column A. Given this task, the Excel user would then tell Excel that column B should be calculated as A*A. And it would be done:

A  B
-  -
0  0
1  1
2  4
3  9
4  16

You could argue that the difference here is that Excel has a GUI, and Python doesn’t. But that’s missing the point. The real difference is that our accountant told Excel what the transformation from column A to column B should be, whereas our Python developer wrote a program spelling out, step by step, how to carry out that transformation.

We can think about this in a different way, too: Rather than solving the problem serially, as in the above for loop, the accountant is thinking in a parallel manner, applying a single expression to a large data set. The Excel user doesn’t care, or even know, the order in which the numbers are handed to the expression. The important thing is that the expression is applied once to each of the numbers, and that the final result appears in the correct order.

We might laugh at Excel, and dismiss its users as technical neophytes. And certainly, many users of Excel would deny that they possess serious programming chops. But this sort of thinking, which is so fundamental and natural to Excel users, is alien to many programmers. Which is a shame, because it allows us to express a very large number of ideas in a simple way.

To summarize this approach:

  • Think of your input as an iterable source of data
  • Think of what operation you want to apply to each element of that source
  • Get a new sequence out

That’s what the traditional “map” function does. Python does have a “map” function, but today, we typically use list comprehensions instead.
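
For comparison, here is what the “map” version of the squaring example we’re about to revisit looks like; wrapping the call in list() makes it behave the same way in both Python 2 and 3:

>>> list(map(lambda n: n * n, range(5)))
[0, 1, 4, 9, 16]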

Let’s try to make this a bit more concrete, using the example that I used above: Let’s say that we have a list of five numbers, and we want to turn that list into a list of its squares. The list-comprehension syntax looks as follows:

[number * number for number in range(5) ]

Yikes. No wonder people are scared off by this syntax.  Let’s take the above syntax apart:

  • First of all, we’re going to get a list back. (It’s called a “list comprehension” for a reason.) That’s because of the square brackets, which are mandatory, and which tell Python what sort of object to create.
  • The data source will be “range(5),” which returns a list.
  • Each element in the data source will be assigned, in turn, to the iteration variable “number.”
  • We’ll invoke the operation “number * number” on each element of the data source.

In other words, we’re creating a new list, the elements of which are the result of applying our expression to each element of the source. This sounds suspiciously like what our accountant did above, using Excel: We’re telling Python what we want, and how to transform our source to that result. But how are things done internally? How is the list created? We neither know nor care.

List-comprehension syntax can be daunting for people to understand, in part because the order of the operations seems unusual. I’ve found that it can help to rewrite list comprehensions in the following way:

[number * number
 for number in range(5) ]

Yes, that’s right — I now spread list comprehensions across two lines; the first describes the operation I want to invoke, and the second line describes the data source. If this still seems unfamiliar, let’s try to bring it into a context with which you might have some experience:

[number * number           # SELECT
 for number in range(5) ]  # FROM

While they’re not directly equivalent, there are a fair number of similarities between a SELECT query in SQL, the placement of its SELECT expression and FROM clause, and our list comprehension.  The FROM clause in an SQL query describes our data source, which is typically going to be a table, but can also be a view or even the result of a function call. And the initial part of the SELECT is often the name of a column, but  can include function calls and operators.

On the one hand, the SELECT-FROM combination seems almost too simple to mention, in that you’re just retrieving a selected set of values from a data source.  On the other hand, such queries form the backbone of the database industry. In the same way, such functionality forms the backbone of many Python programs, iterating over a data structure, and plucking out part of it, transforming that part, and then returning a new list.

One of my favorite examples (and an exercise in my ebook, “Practice Makes Python”) is to take the /etc/passwd file used in Unix, and get the usernames contained within that file. /etc/passwd consists of one record per line, and the fields are separated by colons. Here are several lines from the /etc/passwd on my computer:

nobody:*:-2:-2::0:0:Unprivileged User:/var/empty:/usr/bin/false
root:*:0:0::0:0:System Administrator:/var/root:/bin/sh
daemon:*:1:1::0:0:System Services:/var/root:/usr/bin/false
_uucp:*:4:4::0:0:Unix to Unix Copy Protocol:/var/spool/uucp:/usr/sbin/uucico

We might normally think of a file as a collection of bytes, to which we give semantic meaning when we read it. But in Python, we’re encouraged to see a file as an ordered, iterable collection of lines of text. True, I can read from a file based on bytes, but it’s so common to want to read files by line that the language provides several constructs to do so.

We know that we can iterate over the lines of a file:

for line in open('/etc/passwd'):
    print(line)

This demonstrates that a file is iterable, which means that it can serve as a data source for a list comprehension. This means that the above code can be rewritten as:

[ line
  for line in open('/etc/passwd')]

Again, the first line in our list comprehension represents the expression we want to apply to every element of our data source. In this case, the expression is just the line.  If we want to get the username from each of  these lines, we just need to apply the “split” method on the string, returning a list — and then retrieve index 0 from the resulting list.  For example:

[ line.split(":")[0]
  for line in open('/etc/passwd')]

Again, we can think of it in terms of an SQL query:

SELECT username
FROM users

But of course, “username” in the above is a column name.  A more equivalent query to my list comprehension would be a “Users” table with an “info” column, queried as follows:

SELECT split_part(info, ':', 1)
FROM users;

Note that in this case, I’m using the built-in PostgreSQL “split_part” function to perform the equivalent operation to the str.split method in Python.

Remember that in the case of my SQL query, the result of a query always looks and acts like a table. The number and types of columns returned will depend on the number and types of expressions that I have in the SELECT  statement.  But the result set will have one or more columns, and zero or more rows.

In the same way, the result of a list comprehension is always going to be a list.  You can have whatever expression you want inside of the list comprehension; the expression represents one item in a list, not the list itself.

For example, let’s assume that I want to turn the usernames in /etc/passwd into a list of dictionaries. This doesn’t require a dictionary comprehension, which creates a single dictionary.  Rather, it requires a list  comprehension, in which the expression creates a dictionary.  Here’s a simple-minded such list comprehension:

[ {'name':line.split(":")[0]}
   for line in open('/etc/passwd')]

The above will work, in that it creates a list of dictionaries. And each dictionary has a single key-value pair.  But it seems a bit silly to do the above.  Rather, I’d probably want to have a dictionary containing the username and the numeric user ID, which is at index 2. I can then write:

[ {'name':line.split(":")[0], 'id':line.split(":")[2]}
for line in open('/etc/passwd')]

Again, we can think about this in terms of Excel, or even in terms of SQL: My query now produces a single column of results, but each value in that column is a dictionary. Or we can even say that the query produces two columns of results, which is not at all unusual in the world of SQL.

Let’s ignore the efficiency (or lack thereof) of invoking str.split twice in one comprehension: When I run this code on my Mac, it results in an exception, claiming that an index is out of range.

The reason is simple: I split each line into a list. But if there’s a line that doesn’t contain any : characters, it’ll be turned into a single-element list. I thus need to weed out any lines that won’t conform. Specifically, on my Mac at least, I need to remove any lines in /etc/passwd that are comments, meaning that they start with the ‘#’ character.

In the world of list comprehensions, I say the following:

[ {'name':line.split(":")[0], 'id':line.split(":")[2]}
for line in open('/etc/passwd')
if not line.startswith("#")]

Let’s extend our earlier SQL analogy further, adding the equivalent SQL syntax in comments after our Python code:

[ {'name':line.split(":")[0], 'id':line.split(":")[2]}    # SELECT
for line in open('/etc/passwd')                           # FROM
if not line.startswith("#")]                              # WHERE

Of course, when the first line of our comprehension becomes this long, it’s often a good idea to use a function, instead. And since the first line can be any legitimate Python expression, a function is often a good idea:

def get_user_info(line):
    name, passwd, id, rest = line.split(":", 3)   # max 4 fields
    return {'name':name, 'id':id}

[ get_user_info(line)             # SELECT
for line in open('/etc/passwd')   # FROM
if not line.startswith("#")]      # WHERE

A list comprehension thus gives you power similar to an SQL SELECT query — except that you’re not querying data in a table, but rather any object that conforms to Python’s iteration protocol, which includes a very  large number of built-in and custom-made objects.

Now, when would you want to use a list comprehension? And how does it differ from a for loop?

Using a list comprehension is appropriate whenever you want to transform data. That is, you have an iterable data source, and you want to create a new list whose elements are based on those of the data source. For  example, let’s assume that (for some reason) I want to find out how many times each character is used in /etc/passwd.  I can thus do the following, using collections.Counter:

from collections import Counter
counts = [Counter(line)
          for line in open('/etc/passwd')
          if not line.startswith("#")]

We know that “counts” is a list, because I used a list comprehension to create it. It is a list containing many Counter objects, one for each non-comment line in /etc/passwd. What if I want to find out what the most  popular character is in each line? I can modify my expression, asking the Counter object for the most common character:

counts = [Counter(line).most_common(1)
          for line in open('/etc/passwd')
          if not line.startswith("#")]

I can extend my expression even more, to get the most popular character from each line (inside of a two-element tuple in a one-element list):

counts = [Counter(line).most_common(1)[0][0]
          for line in open('/etc/passwd')
          if not line.startswith("#")]

And now I can find out how many times each most-popular character appears:

Counter([Counter(line).most_common(1)[0][0]
         for line in open('/etc/passwd')
         if not line.startswith("#")])

On my computer, the answer is:

Counter({':': 71, 'e': 4, 's': 1})

This means that in 71 non-comment lines, “:” is the most common character, but in 4 lines it’s “e”, and in one line it’s “s”.  Now, could I have done this with a for loop?  Yes, of course — but because I’m dealing with iterables, and because I’m using objects that work with such iterables, I can chain them together to get an answer in a way that doesn’t require me to tell Python how to do its job. I’m doing things like our accountant did, back at the start of this article — I’m saying what I want, and letting Python do the hard work of dealing with this for me.

When would I use a for loop, then? The distinction is between whether you want to get a list back, and whether you want to execute a command a number of times.  If you want to build a list, and if it’s built on an iterable that already exists, then I’d say a list comprehension is almost certainly going to be the best bet.  But if you want to execute something a number of times without creating a list, then a comprehension is a bad way to do it; you should use a “for” loop, instead.

It’s true that list comprehensions are faster than for loops. But most of the time, for loops are used for different things than list comprehensions. “for” loops shouldn’t be used when you want to turn one iterable structure into another; that’s for comprehensions. And you shouldn’t execute something (e.g., print) many times via a list comprehension, even if you can do so via a called function.  I’ve found that the dividing line between when to use a “for” loop, and when to use a comprehension, is clearly delineated in the minds of experienced Python developers, but very hazy among newcomers to the language, and to these ideas.
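
To make that dividing line concrete, here is a minimal sketch of both cases:

# Transforming one iterable into a new list: use a comprehension.
squares = [n * n for n in range(5)]

# Executing an action repeatedly, with no list wanted: use a "for" loop.
for n in range(5):
    print(n * n)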

So, to summarize:

  • If you want to execute a command numerous times, use a “for” loop.
  • If you have an iterable, and want to create a new iterable, then a list comprehension is probably your best bet.
  • Building a list comprehension is sort of like working in Excel: You start with a set of data, and you create a new set of data. Any expression can be used to map from one to the other.  You don’t care about how Python does things behind the scenes; you just want to get your new data back.
  • A list comprehension consists of either two or three parts, which are often easier to understand if you put them on separate lines: (1) the expression, (2) the data source, and (3) an optional “if” statement.
  • These three lines are analogous to SQL’s SELECT, FROM, and WHERE clauses in a query.  And just as each of those (SELECT, FROM, and WHERE) can use arbitrary expressions, so too can Python’s list comprehensions use arbitrary expressions. A list comprehension will always return a list, though — just as a SELECT will always return a table-like result set.
  • Do you want to create a set, or perhaps a dictionary, rather than a list?  Then you can use a set comprehension or a dict comprehension; see the short example below. The idea is the same as everything I’ve said about list comprehensions, except that your result will be a single set or a single dictionary.
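
For example (outputs shown as in Python 3):

>>> { n % 3 for n in range(10) }       # set comprehension
{0, 1, 2}
>>> { n : n * n for n in range(5) }    # dict comprehension
{0: 0, 1: 1, 2: 4, 3: 9, 4: 16}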

Do you find it difficult to work with list comprehensions?  If so, what’s hard for you about them?  And does the above help to make their use, and their syntax easier to remember?  I’m eager to hear your reactions, so that I can improve these explanations even further.

Why you should almost never use “is” in Python

It’s really tempting, when you first start to use Python, to use “is” rather than “==”.  It’s a bit more readable, and it feels like it should just work, especially when you’re dealing with integers. In a language that uses “or” and “and” instead of “||” and “&&”, it seems logical to use “is” instead of “==”. And if you try “is” with small integers, or even with short strings, you might be lulled into thinking that you should use “is” in lots of places.

But you shouldn’t.  Really, in almost no case should you use “is”; rather, you should almost certainly use “==”.  In fact, there’s only one case in which most Python programmers should be using “is”, and that’s to check whether something is None.

In this blog post, which is the result of many questions and discussions I’ve had with students in my Python classes, I’m going to try to describe the reasons for this — and along the way, describe some parts of how Python’s objects are allocated, and what we mean when we say that two objects are “the same.”

Let’s start with the basics: Everything in Python is an object. Every object in Python has a unique ID number, which we can retrieve from an object by using the built-in “id” function:

>>> id(5)

>>> id('abc')

>>> id([1,2,3])

Now, if two variables are pointing to the same object, they will (not surprisingly) return the same ID:

>>> x = [1,2,3]
>>> y = x
>>> id(x)
>>> id(y)

Given that x and y point to the same list, changes to the list will be reflected in both variables:

>>> x[0] = '!'
>>> y[1] = '?'
>>> x
['!', '?', 3]
>>> y
['!', '?', 3]

In such a case, it’s pretty clear that x and y are both pointing to precisely the same object. They aren’t just equal in value; they are one and the same — aliases for one another.

We can ask Python if this is true by using the “is” operator, also known as the “identity operator.” “is” doesn’t compare the values of x and y. Rather, it checks to see if x and y have the same ID. If so, then they are the same object. If not, then they aren’t. It’s as simple as that. Perhaps it goes without saying, but two objects that “is” each other are also “==” to each other, since an object’s value should be equal to itself:

>>> x == y
True
>>> x is y
True
>>> id(x) == id(y)
True

The above code shows that x and y have the same ID. This means that they “is” each other; we’re dealing with two names for the same object. Their values are thus equal, which is what “==” checks.

Again: The “is” operator returns “True” if two names are referring to the same object. And the “==” operator returns “True” if two names point to objects that contain the same value.

The most common usage, by far, is when we want to know if something is None. True, we could use “==” for that check. But in both readability and speed, “is None” trumps “== None”. So your code should generally say:

if x is None:
    print("x is None!")

It shouldn’t surprise us to find out that “is” is faster than “==”. After all, “is” is implemented in C, and is a simple comparison of the IDs of the two objects. No function call is needed, and we certainly don’t need to compare the values of the two objects, which can also take some time.
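
If you want to measure the difference on your own machine, the timeit module makes this easy. The absolute numbers will vary from system to system; it’s the relative difference that matters:

>>> import timeit
>>> timeit.timeit('x is None', setup='x = None', number=10000000)
>>> timeit.timeit('x == None', setup='x = None', number=10000000)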

The use of “is None” works because the None object is a singleton in Python. No matter what you do, id(None) will always return the same value. (Note that this value won’t stay constant across different invocations of Python.)  In other words:

>>> id(None)

>>> id(None)

>>> x = None
>>> id(x)

What happens if you try to create a new instance of None? Well, we would first have to find out None’s type:

>>> type(None)
<type 'NoneType'>

Unfortunately, NoneType isn’t a defined identifier in Python:

>>> NoneType
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'NoneType' is not defined

So if we want to create a new instance of None, we’ll need to do it ourselves:

>>> type(None)()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: cannot create 'NoneType' instances

Aha.  Well, that’s a shame. But I was using Python 2.7 in the above example. What if I try Python 3?

>>> type(None)
<class 'NoneType'>

>>> NoneType()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'NoneType' is not defined

>>> x = type(None)()
>>> type(x)
<class 'NoneType'>
>>> x is None
True

So no matter how you slice it, None is a singleton. Which is why you can (and should) use “is None”, rather than “== None”, in your code.

But what happens if you decide that you want to use “is” in other places? The problem is that it will sometimes work. That “sometimes” is because “is” exposes some of Python’s internal optimizations in ways that can be a bit surprising.

Strings are how I was initially introduced to the difference between “==” and “is”, and the danger of using “is” over-zealously. Two equal strings should be “==”, but are they “is”?

>>> x = 'a' * 5
>>> y = 'a' * 5
>>> x == y
True
>>> x is y
True

Well, that’s interesting — and I got the same result in Python 2.7, 3.4, and also in PyPy. But why should this be the case? One possibility is that strings are immutable, and that having Python use a single object for each string we create would be efficient. And indeed, this is true — so long as the string is short:

>>> x = 'a' * 5000
>>> y = 'a' * 5000
>>> x == y
True
>>> x is y
False

The above, which works the same in Python 2.7, 3.4, and in PyPy, demonstrates that Python won’t reuse just any string that we have created. There is a limit.  I experimented with things a bit, and I found that 21 is the magic length at which strings are no longer “is” to one another. That is:

>>> x = 'a' * 20
>>> y = 'a' * 20
>>> x is y
True

>>> x = 'a' * 21
>>> y = 'a' * 21
>>> x is y
False

The above was true in Python 2.7 and 3.4, and also in PyPy. However, I also found some seemingly weird behavior, which is undoubtedly because of the way in which Python byte-compiles and then executes for loops:

>>> for i in range(15,25):
        x = 'a' * i
        y = 'a' * i
        print("[{0}] x is y: {1}".format(i, x is y))

[15] x is y: False
[16] x is y: False
[17] x is y: False
[18] x is y: False
[19] x is y: False
[20] x is y: False
[21] x is y: False
[22] x is y: False
[23] x is y: False
[24] x is y: False

Wow, that’s kind of strange, no? Indeed, in a for loop, I found that the only number for which the two strings were “is” to one another was 1:

>>> for i in range(0,10):
...     x = 'a' * i
...     y = 'a' * i
...     print("[{0}] x is y: {1}".format(i, x is y))
[0] x is y: False
[1] x is y: True
[2] x is y: False
[3] x is y: False
[4] x is y: False
[5] x is y: False
[6] x is y: False
[7] x is y: False
[8] x is y: False
[9] x is y: False

At the same time, if you create a long literal string and assign it to a variable, you’ll likely find that the strings are “is” to one another:

>>> x = 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa'

>>> y = 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa'

>>> x is y
True

(Forgive the re-formatting that WordPress did to the above assignments; in Python, they were both on one line.)

I’m not sure what is going on here, but it just goes to show that you really shouldn’t use “is” unless you know what you’re doing.  And even if you think that you know what you’re doing, you might still be surprised!  Bottom line: Using “is” on strings is almost always a bad idea.

Now, this is generally something that we don’t need to think or care about very much. But let’s say that you’re working with large strings, and that these strings might repeat themselves on occasion. In such a case, you will end up with many copies of the same string. Python helps us to solve this problem by “interning” strings. Interning is a technique that has been around for many years in the programming world, which allows us to store only one copy of any given string. In Python 2, we use the built-in “intern” function. In Python 3, we must use sys.intern; intern is no longer a builtin.

“intern” takes a string (and only a string) as a parameter. It returns a reference — either to a new string that was created, or to a string that was already allocated. Thus, the length of the string doesn’t matter; even in the case of a long string, it will only be allocated a single time:

>>> from sys import intern     # Python 3 only
>>> x = intern('a' * 5000)
>>> y = intern('a' * 5000)
>>> x is y
True

As you can see, using “intern” guarantees that every unique string is allocated only once. If you use “intern” on the same string a second time, Python returns a reference to the first string.

Python uses “intern” internally for a variety of purposes.  If you’re working with long strings that repeat themselves, then it might be worth using intern. But for the most part, Python creates and allocates so many objects that a few strings here and there are probably not going to make a difference.    Certainly, you should only use “intern” once you have identified bottlenecks.

You might think that even if strings are allocated multiple times, and are thus not “is” to one another, at least integers are going to be identical. After all, Python wouldn’t allocate new objects for numbers, would it?

We can test this pretty easily, of course:

>>> x = 200
>>> y = 200
>>> x is y
True

Well, that’s encouraging, right?  Let’s try something bigger:

>>> x = 2000
>>> y = 2000
>>> x is y
False

So yes, it turns out that even integers that are equal aren’t necessarily pointing to the same object.  As Amy Hanlon pointed out in her fantastic talk about Python “wats”, this is because Python pre-allocates a number of small integers (from -5 through 256). If your integer is within that range, then the two variables will use the same object, and be “is” to one another. But if you’re outside of that range, then you’ll have two separate objects. Unless, of course, you allocate them in the same line of code:

>>> x = 2000; y = 2000
>>> x is y
True

Have I mentioned that you really shouldn’t use “is” to compare objects except for None? I hope that you’re increasingly convinced.

I’ll close this post with a bit of mischief: In theory, if two objects are “is”, then they’re pointing to the same object — which means that they should be identical to one another, and thus also give us a True response to “==”.  While Python doesn’t allow us to redefine “is”, we can redefine what an object says when we compare it using “==”:

>>> class Foo(object):
...     def __eq__(self, other):
...         return False
>>> f1 = Foo()
>>> f2 = f1
>>> f1 is f2
True
>>> f1 == f2
False

I cannot think of a situation in which this would be a desirable thing to do. But it’s fun, and allows us to sharpen our understanding of the difference between “==” and “is”.

Free Webinar on June 23rd: Introduction to Regular Expressions

If you’re a programmer, then you have likely heard about regular expressions (“regexps”) before. However, it’s also likely that you have tried to learn them, and have found them to be completely confusing. That’s not unusual; while regular expressions provide us with a powerful tool for analyzing text, their terse, dense, and cryptic syntax can make the effort not seem worthwhile.

On June 23rd, I’m going to be offering a one-hour free Webinar introducing regular expressions, showing how they can make your code more powerful and expressive.

While I’ll mostly be using Python, I’ll also show some other languages and platforms (e.g., Ruby, JavaScript, and the Unix “grep” command).

My demo and discussion will be about an hour long, and will be followed by ample time for Q&A.  My previous Webinars have been lots of fun; I hope that you’ll join in!  You can get (free) tickets at EventBrite.

And hey, if you’re an independent consultant, you can get a double dose of me on that same day; we Freelancers Show panelists will be doing our monthly Q&A just beforehand.  Come and get your questions about consulting answered by our panel of experts!

I look forward to seeing you at one or both of these events!  If you have any questions, you can e-mail me or contact me on Twitter as @reuvenmlerner.

New ebook: Jewish guide to visiting China

As many people know, I’ve visited China seven times over the last three years, traveling there to give courses in Python and Ruby. I just got back from my most recent trip, and found it to be as fun and exciting as ever. You could say that I’ve gotten a bit obsessed with the country; I read books about China, have been taking daily Chinese lessons since August, and publish a free weekly newsletter (Mandarin Weekly) with links to useful resources for people learning Chinese.

Given that I keep kosher and Shabbat, other religious Jews are increasingly asking me for advice on what, where, and how they can be Jewishly observant when visiting China on business or pleasure. No one in China is likely to know or care about such subjects, let alone know anything about Judaism, so it can be a bit daunting to visit there for the first time.

I’ve collected my advice into a 40-page ebook, the “Jewish guide to visiting China.” If you’re a religiously observant Jew who will be visiting China for short periods of time, then I believe this guide can significantly reduce the time (and stress) you’ll need to invest before your trip.

I’m just launching it now — and for the first week it’s online, I’m offering a discount coupon (“YouTaiRen” — aka 犹太人 — the word for “Jew” in Chinese) giving 20% off of the normal $6 price. This price includes PDF, Mobi, and ePub formats, which should suit any computer or ebook reader.

Again: The Jewish guide to visiting China, now available for 20% off with the “YouTaiRen” offer code.

I expect that the book will expand significantly over time; if you purchase this book, you’re automatically entitled to updates and upgrades.

I welcome comments, suggestions, and additions!

Free one-hour Webinar about Python’s magic methods on May 6th

I’ll be giving another free one-hour Webinar about Python — this time, about the magic methods that Python offers developers who want to change the ways in which their objects work.

We’ll start off with some of the simplest magic methods, but will quickly move onto some of the more interesting and advanced ones — affecting the way that our objects are compared, formatted, hashed, pickled, and sauteed.  (OK, maybe not sauteed.)  Some familiarity with Python objects is expected, but not too much advanced knowledge is necessary.

Register now at EventBrite; if you have any questions, please contact me by e-mail, or as @reuvenmlerner on Twitter.  It should be a lot of fun; I hope to see you there!