A quick introduction to implementing Python iterators

When you put a piece of Python data into a “for” loop, the loop doesn’t execute on the data itself.  Rather, it executes on the data’s “iterator.”  An iterator is an object that knows how to behave inside a loop.

Let’s take that apart.  First, let’s assume that I say:

for letter in 'abc':
    print(letter)

I’m not really iterating over ‘abc’.  Rather, I’m iterating over the iterator object that I got from ‘abc’.  That happens invisibly, behind the scenes, but it happens all the same.  We can get the iterator of any object with the iter() function:

>>> s = 'abc'

>>> iter(s)
<iterator at 0x10a47f150>

>>> iter(s)
<iterator at 0x10a47f190>

>>> iter(s)
<iterator at 0x10a47f050>

Notice that each time we invoke iter(s), we get back a new and different object.  (We can tell, because there is a different address in memory for each one.)  That’s because each iterator is used only once.  Once you get to the end of an iterator object, the object is thrown out, and you need to get a new one.

OK, so what can we do with this iterator object?  Why do we care about it so much?  Because we can invoke the next() function on it.  Each time we do so, we’re basically telling the object that we want to get the next piece of data that it’s providing:

>>> i = iter(s)

>>> next(i)
'a'

>>> next(i)
'b'

>>> next(i)
'c'

So far, so good: Each time we invoke next(i), we ask our iterator object (i) to give us the next element.  But there are only three elements in s, which raises the question of what we’ll get when we invoke next() another time:

>>> next(i)
Traceback (most recent call last):
  ...
StopIteration

In other words, Python raises an exception (StopIteration) when we get to the end.  We can now invoke next(i) as many times as we want; we’ll always get StopIteration, which indicates that there is nothing more to get.
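By the way, next() also accepts an optional second argument: a default value to return instead of raising StopIteration. Here’s a quick sketch of how that works:

>>> i = iter(s)
>>> next(i), next(i), next(i)
('a', 'b', 'c')
>>> next(i, 'no more data')
'no more data'

This can be handy when you want to pull items from an iterator without wrapping every call in try/except.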

You can thus think of a “for” loop as a “while” loop that catches the StopIteration exception, and then leaves the loop when it happens. Consider this function:

def myfor(data):
    i = iter(data)          # get an iterator for the data
    while True:
        try:
            print(next(i))  # ask the iterator for its next element
        except StopIteration:
            break           # no more elements; leave the loop

Now, this “myfor” function only prints the elements of the sequence it was given, so it’s not really a replacement for a “for” loop.  But it’s not a bad way to begin to understand how these things work. Our function starts off by getting an iterator for our data.  It then assumes that we are going to iterate forever on the object, using the “while True” infinite loop. However, we know that when next(i) is done providing elements of data, it will raise StopIteration.  At that point, we’ll catch the exception, break out of the loop, and return from the function.
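For example, running our function on the same string as before prints each element on its own line:

>>> myfor('abc')
a
b
c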

Let’s assume that you want to make instances of your class iterable. This means that when we invoke iter() on an instance of your class, we’ll want to get back an iterator.  Which means that we’ll want to get back an object on which we can invoke next(), and either get the next object or the StopIteration exception.

The easiest way to do this is to define both __iter__ (which is invoked when you run iter() on an object) and __next__ (which is invoked when you run next() on an iterator) within your class. That is, you’ll define __iter__ to return self, because the object is its own iterator.  And you’ll define __next__ to return the next piece of data in turn, or to raise StopIteration if there is no more data.

Remember that in an iterator, there is no “previous” or “reset” or anything of the sort.  All you can do is move forward, one item at a time, until you get to the end.
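We can see this one-way behavior for ourselves: once an iterator has been exhausted, it stays exhausted:

>>> i = iter('abc')
>>> list(i)
['a', 'b', 'c']
>>> list(i)
[]

The second call to list() returns an empty list, because the iterator has nothing left to provide; if we want to start over, we need to get a new iterator from iter().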

So let’s say that I want to define a simple iterator, one that returns the elements of a piece of data.  (Yes, basically what you already get built in by Python.)  We can say:

class MyIter(object):
    def __init__(self, data):
        self.data = data
        self.index = 0
    def __iter__(self):
        return self
    def __next__(self):   # in Python 2, this method is named "next"
        if self.index >= len(self.data):
            raise StopIteration
        value = self.data[self.index]
        self.index += 1
        return value

Now I can say

>>> m = MyIter('abc')
>>> for letter in m:
...     print(letter)

and it will work!

You can take any class you want and make it into an iterator by adding the __iter__ method (which returns self) and the __next__ method (or, in Python 2, “next”).  Once you have done that, instances of your class can be put inside of “for” loops, list comprehensions, or anything else that expects an “iterable” type of data.
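For example, here’s the same class being consumed by a list comprehension. Note that because MyIter instances are their own iterators, a single instance is good for only one round of iteration:

>>> m = MyIter('abc')
>>> [letter.upper() for letter in m]
['A', 'B', 'C']
>>> list(m)    # m is already exhausted
[]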

If you don’t use “with”, when does Python close files? The answer is: It depends.

One of the first things that Python programmers learn is that you can easily read through the contents of an open file by iterating over it:

f = open('/etc/passwd')
for line in f:
    print(line)

Note that the above code is possible because our file object “f” is an iterator. In other words, f knows how to behave inside of a loop — or any other iteration context, such as a list comprehension.
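Indeed, we can invoke next() on a file object directly, just as we did with our string iterator; each call returns the next line of the file. (The output below assumes a Mac-style /etc/passwd, whose first line is a comment, as in the session shown later on.)

>>> f = open('/etc/passwd')
>>> next(f)
'##\n'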

Most of the students in my Python courses come from other programming languages, in which they are expected to close a file when they’re done using it. It thus doesn’t surprise me when, soon after I introduce them to files in Python, they ask how we’re expected to close them.

The simplest answer is that we can explicitly close our file by invoking f.close(). Once we have done that, the object continues to exist — but we can no longer read from it, and the object’s printed representation will also indicate that the file has been closed:

>>> f = open('/etc/passwd')
>>> f
<open file '/etc/passwd', mode 'r' at 0x10f023270>
>>> f.read(5)
'##\n# '

>>> f.close()
>>> f
<closed file '/etc/passwd', mode 'r' at 0x10f023270>

>>> f.read(5)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-11-ef8add6ff846> in <module>()
----> 1 f.read(5)
ValueError: I/O operation on closed file

But here’s the thing: When I’m programming in Python, it’s pretty rare for me to explicitly invoke the “close” method on a file. Moreover, the odds are good that you probably don’t want or need to do so, either.

The preferred, best-practice way of opening files is with the “with” statement, as in the following:

with open('/etc/passwd') as f:
    for line in f:
        print(line)

The “with” statement invokes what Python calls a “context manager” on the file object returned by “open”. That is, it assigns f to the newly opened file instance, pointing to the contents of /etc/passwd. Within the block of code opened by “with”, our file is open, and can be read from freely.

However, once Python exits from the “with” block, the file is automatically closed. Trying to read from f after we have exited from the “with” block will result in the same ValueError exception that we saw above. Thus, by using “with”, you avoid the need to explicitly close files. Python does it for you, in a somewhat un-Pythonic way, magically, silently, and behind the scenes.
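We can check this via the file object’s “closed” attribute, which is False while the “with” block is active and True afterward:

>>> with open('/etc/passwd') as f:
...     print(f.closed)
...
False
>>> f.closed
True

Notice that f itself still exists after the block; it’s only the underlying file that has been closed.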

But what if you don’t explicitly close the file? What if you’re a bit lazy, and neither use a “with” block nor invoke f.close()?  When is the file closed?  When should the file be closed?

I ask this, because I have taught Python to many people over the years, and am convinced that trying to teach “with” and/or context managers, while also trying to teach many other topics, is more than students can absorb. While I touch on “with” in my introductory classes, I normally tell them that at this point in their careers, it’s fine to let Python close files, either when the reference count to the file object drops to zero, or when Python exits.

In my free e-mail course about working with Python files, I took a similarly with-less view of things, avoiding it in my proposed solutions. Several people challenged me, saying that not using “with” is showing people a bad practice, and runs the risk of having data not saved to disk.

I got enough e-mail on the subject to ask myself: When does Python close files, if we don’t explicitly do so ourselves or use a “with” block? That is, if I let the file close automatically, then what can I expect?

My assumption was always that Python closes files when the object’s reference count drops to zero, and thus is garbage collected. This is hard to prove or check when we have opened a file for reading, but it’s trivially easy to check when we open a file for writing. That’s because when you write to a file, the contents aren’t immediately flushed to disk (unless you pass 0 as the third, optional “buffering” argument to “open”, making the file unbuffered), but are only flushed when the buffer fills or the file is closed.
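If you ever do want the data on disk while keeping the file open, you can always invoke the file object’s “flush” method explicitly; a minimal sketch:

f = open('/tmp/output', 'w')
f.write('abc\n')
f.flush()    # 'abc\n' is now on disk, even though f is still open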

I thus decided to conduct some experiments, to better understand what I can (and cannot) expect Python to do for me automatically. My experiment consisted of opening a file, writing some data to it, deleting the reference, and then exiting from Python. I was curious to know when the data would be written, if ever.

My experiment looked like this:

f = open('/tmp/output', 'w')
f.write('abc\n')
f.write('def\n')
# check contents of /tmp/output (1)
del(f)
# check contents of /tmp/output (2)
# exit from Python
# check contents of /tmp/output (3)

In my first experiment, conducted with Python 2.7.9 on my Mac, I can report that at stage (1) the file existed but was empty, and at stages (2) and (3), the file contained all of its contents. Thus, it would seem that in CPython 2.7, my original intuition was correct: When a file object is garbage collected, its __del__ (or the equivalent thereof) flushes and closes the file. And indeed, invoking “lsof” on my IPython process showed that the file was closed after the reference was removed.

What about Python 3?  I ran the above experiment under Python 3.4.2 on my Mac, and got identical results. Removing the final (well, only) reference to the file object resulted in the file being flushed and closed.

This is good for 2.7 and 3.4.  But what about alternative implementations, such as PyPy and Jython?  Perhaps they do things differently.

I thus tried the same experiment under PyPy 2.7.8. And this time, I got different results!  Deleting the reference to our file object (stage 2) did not result in the file’s contents being flushed to disk. I have to assume that this has to do with differences in the garbage collector, or something else that works differently in PyPy than in CPython. But if you’re running programs in PyPy, then you should definitely not expect files to be flushed and closed just because the final reference pointing to them has gone out of scope: lsof showed that the file stuck around until the PyPy process exited.

For fun, I decided to try Jython 2.7b3. And Jython exhibited the same behavior as PyPy.  That is, deleting the reference did not flush or close the file; only exiting from Jython ensured that the data was flushed from the buffers and stored to disk.

I repeated these experiments, but instead of writing “abc\n” and “def\n”, I wrote “abc\n” * 1000 and “def\n” * 1000.

In the case of Python 2.7, nothing was written to disk after the first write of “abc\n” * 1000. But after I wrote “def\n” * 1000, the file contained 4,096 bytes — which probably indicates the buffer size. Invoking del(f) to remove the reference to the file object resulted in its being flushed and closed, with a total of 8,000 bytes on disk. So in the case of Python 2.7, the behavior is basically the same regardless of string size; the only difference is that if you exceed the size of the buffer, then some data will be written to disk before the final flush + close.

In the case of Python 3, the behavior was different: No data was written after either of the 4,000-byte outputs written with f.write. But as soon as the reference was removed, the file was flushed and closed. This might point to a larger buffer size. But still, it means that removing the final reference to a file causes the file to be flushed and closed.
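If you’re curious, Python 3 exposes its default buffer size in the standard “io” module; checking it (on my machine, at least) supports the larger-buffer theory, since 8,192 bytes is more than enough to hold both 4,000-byte writes:

>>> import io
>>> io.DEFAULT_BUFFER_SIZE
8192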

In the case of PyPy and Jython, the behavior with a large file was the same as with a small one: The file was flushed and closed when the PyPy or Jython process exited, not when the last reference to the file object was removed.

Just to double check, I also tried these using “with”. In all of these cases, it was easy to predict when the file would be flushed and closed: When the block exited, and the context manager fired the appropriate method behind the scenes.
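If it helps, you can think of “with” here as roughly equivalent to a try/finally block, which guarantees that close() runs no matter how we leave the block, whether normally or via an exception:

f = open('/etc/passwd')
try:
    for line in f:
        print(line)
finally:
    f.close()    # runs even if an exception was raised in the loop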

In other words: If you don’t use “with”, then your data isn’t necessarily in danger of disappearing — at least, not in simple situations. However, you cannot know for sure when the data will be saved — whether it’s when the final reference is removed, or when the program exits. If you’re assuming that files will be closed when functions return, because the only reference to the file is in a local variable, then you might be in for a surprise. And if you have multiple processes or threads writing to the same file, then you’re really going to want to be careful here.
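To make that last point concrete, here’s a hypothetical function of the sort I have in mind; on CPython, the file will typically be flushed and closed when the function returns and its local variable goes away, but on PyPy or Jython, it won’t be:

def write_log(message):
    f = open('/tmp/log.txt', 'w')    # hypothetical log file
    f.write(message + '\n')
    # no f.close() here: when (and whether) the data is flushed
    # depends on which Python implementation you're running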

Perhaps this behavior could be specified better, and thus work similarly or identically on different platforms? Perhaps we could even see the start of a Python specification, rather than pointing to CPython and saying, “Yeah, whatever that version does is the right thing.”

I still think that “with” and context managers are great. And I still think that it’s hard for newcomers to Python to understand what “with” does. But I also think that I’ll have to start warning new developers that if they decide to use alternative versions of Python, there are all sorts of weird edge cases that might not work identically to CPython, and that might bite them hard if they’re not careful.

If you enjoyed this explanation, check out my free e-mail course on working with files in Python, or my e-book, “Practice Makes Python,” with 50 battle-tested exercises in Python programming!

My latest side project: DailyTechVideo.com, posting new conference videos every day

If you’re like me, you love to learn. And in our industry, a primary way of learning involves attending conferences.

However, if you’re like me, you never have the time to actually attend them.  (In my case, the fact that I live far away from where many conferences take place is an additional hindrance.)

Fortunately, a very large number of talks at modern conferences are recorded. This means that even if you didn’t attend a conference, you can still enjoy (and learn from) the talks that were there.

However, this leads to a new and different problem: There are too many talks for any one person to watch. How can you find things that are interesting and relevant?

My latest side project aims to solve this problem, at least in part: DailyTechVideo.com offers, as its name implies, a high-quality, thought-provoking talk about technology each day. To date, almost all of the talks reflect the technologies that are of interest to me, which typically means that they are about open-source programming languages, databases, or Web application frameworks. But I have tried to include conference videos that have provoked and prodded my thinking, and which are likely to be helpful for other professionals in the computer industry. Moreover, I’m hoping to receive suggestions from people who have seen interesting videos in fields with which I’m less familiar (e.g., hardware or robotics), who can help me to improve my own understanding and knowledge.

So if you enjoy learning, I invite you to subscribe to DailyTechVideo.com, and/or to follow its Twitter feed at @DailyTechVideo.

And if you can suggest videos to include, e-mail me at reuven@lerner.co.il, or tweet me at @ReuvenMLerner or @DailyTechVideo. I already have another 4-5 weeks of videos queued up, but I’m always on the lookout for new and interesting ones.