## Implementing “zip” with list comprehensions

I love Python’s “zip” function. I’m not sure just what it is about zip that I enjoy, but I have often found it to be quite useful. Before I describe what “zip” does, let me first show you an example:

```
>>> s = 'abc'
>>> t = (10, 20, 30)

>>> zip(s,t)
[('a', 10), ('b', 20), ('c', 30)]
```

As you can see, the result of “zip” is a sequence of tuples. (In Python 2, you get a list back.  In Python 3, you get a “zip object” back.)  The tuple at index 0 contains s[0] and t[0]. The tuple at index 1 contains s[1] and t[1].  And so forth.  You can use zip with more than two iterables, as well:

```
>>> s = 'abc'
>>> t = (10, 20, 30)
>>> u = (-5, -10, -15)

>>> list(zip(s,t,u))
[('a', 10, -5), ('b', 20, -10), ('c', 30, -15)]
```

(You can also invoke zip with a single iterable, thus ending up with a bunch of one-element tuples, but that seems a bit weird to me.)
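If you’re curious, here’s what that single-iterable case looks like:

```python
# zip with one iterable produces one-element tuples
print(list(zip('abc')))  # [('a',), ('b',), ('c',)]
```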

I often use “zip” to turn parallel sequences into dictionaries. For example:

```
>>> names = ['Tom', 'Dick', 'Harry']
>>> ages = [50, 35, 60]

>>> dict(zip(names, ages))
{'Harry': 60, 'Dick': 35, 'Tom': 50}
```

In this way, we’re able to quickly and easily produce a dict from two parallel sequences.

Whenever I mention “zip” in my programming classes, someone inevitably asks what happens if one argument is shorter than the other. Simply put, the shortest one wins:

```
>>> s = 'abc'
>>> t = (10, 20, 30, 40)
>>> list(zip(s,t))
[('a', 10), ('b', 20), ('c', 30)]
```

(If you want zip to return one tuple for every element of the longest iterable, then use “zip_longest” from the “itertools” module; in Python 2, it was called “izip_longest”.)
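For example, here’s a quick sketch using Python 3’s “zip_longest”; the ‘?’ fill value here is just my own choice:

```python
from itertools import zip_longest  # izip_longest in Python 2

s = 'abc'
t = (10, 20, 30, 40)

# the shorter iterable is padded with fillvalue (None by default)
print(list(zip_longest(s, t, fillvalue='?')))
# [('a', 10), ('b', 20), ('c', 30), ('?', 40)]
```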

Now, if there’s something I like even more than “zip”, it’s list comprehensions. So last week, when a student of mine asked if we could implement “zip” using list comprehensions, I couldn’t resist.

So, how can we do this?

First, let’s assume that we have our two equal-length sequences from above, s (a string) and t (a tuple). We want to get a list of three tuples. One way to do this is to say:

```
[(s[i], t[i])             # produce a two-element tuple
 for i in range(len(s))]  # from index 0 to len(s) - 1
```

To be honest, this works pretty well! But there are a few ways in which we could improve it.

First of all, it would be nice to make our comprehension-based “zip” alternative handle inputs of different sizes.  What that means is not just running range(len(s)), but running range(len(x)), where x is the shorter sequence. We can do this via the “sorted” builtin function, telling it to sort the sequences by length, from shortest to longest. For example:

```
>>> s = 'abcd'
>>> t = (10, 20, 30)

>>> sorted((s,t), key=len)
[(10, 20, 30), 'abcd']
```

In the above code, I create a new tuple, (s,t), and pass that as the first parameter to “sorted”. Given these inputs, we will get a list back from “sorted”. Because we pass the builtin “len” function to the “key” parameter, “sorted” will return [s,t] if s is shorter, and [t,s] if t is shorter.  This means that the element at index 0 is guaranteed not to be longer than any other sequence. (If all sequences are the same size, then we don’t care which one we get back.)

Putting this all together in our comprehension, we get:

```
>>> [(s[i], t[i])
     for i in range(len(sorted((s,t), key=len)[0]))]
```

This is getting a wee bit complex for a single list comprehension, so I’m going to break off part of the second line into a function, just to clean things up a tiny bit:

```
>>> def shortest_sequence_range(*args):
        return range(len(sorted(args, key=len)[0]))

>>> [(s[i], t[i])
     for i in shortest_sequence_range(s,t)]
```

Now, our function takes *args, meaning that it can accept any number of sequences. The sequences are sorted by length, the first (shortest) one is passed to “len” to get its length, and the result of running “range” on that length is returned.

So if the shortest sequence is ‘abc’, we’ll end up returning range(3), giving us indexes 0, 1, and 2 — perfect for our needs.

Now, there’s one thing left to do here to make it a bit closer to the real “zip”: As I mentioned above, Python 2’s “zip” returns a list, but Python 3’s “zip” returns an iterator object. This means that even if the resulting list would be extremely long, we won’t use up tons of memory by returning it all at once. Can we do that with our comprehension?

Yes, but not if we use a list comprehension, which always returns a list. If we use a generator expression, by contrast, we’ll get an iterator back, rather than the entire list. Fortunately, creating such a generator expression is a matter of just replacing the [ ] of our list comprehension with the ( ) of a generator expression:

```
>>> def shortest_sequence_range(*args):
        return range(len(sorted(args, key=len)[0]))

>>> g = ((s[i], t[i])
         for i in shortest_sequence_range(s,t))

>>> for item in g:
        print(item)

('a', 10)
('b', 20)
('c', 30)
```

And there you have it!  Further improvements on these ideas are welcome — but as someone who loves both “zip” and comprehensions, it was fun to link these two ideas together.
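One such improvement: the comprehension generalizes to any number of sequences. Here’s a sketch; “my_zip” is my own name for it, and note that (unlike the real “zip”) it only handles sequences that support indexing and len, not arbitrary iterables:

```python
def shortest_sequence_range(*args):
    # range over the indexes of the shortest input sequence
    return range(len(sorted(args, key=len)[0]))

def my_zip(*args):
    # one tuple per shared index, produced lazily
    return (tuple(seq[i] for seq in args)
            for i in shortest_sequence_range(*args))

print(list(my_zip('abc', (10, 20, 30, 40), (-5, -10, -15))))
# [('a', 10, -5), ('b', 20, -10), ('c', 30, -15)]
```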

## Fun with floats

I’m in Shanghai, and before I left to teach this morning, I decided to check the weather.  I knew that it would be hot, but I wanted to double-check that it wasn’t going to rain — a rarity during Israeli summers, but not too unusual in Shanghai.

I entered “shanghai weather” into DuckDuckGo, and got the following:

Never mind that it gave me a weather report for the wrong Chinese city. Take a look at the humidity reading!  What’s going on there?  Am I supposed to worry that it’s ever-so-slightly more humid than 55%?

The answer, of course, is that many programming languages have problems with floating-point numbers.  Just as there’s no terminating decimal number to represent 1/3, lots of numbers are non-terminating when you use binary, which computers do.

As a result, floats are inexact.  Just add 0.1 + 0.2 in many programming languages, and prepare to be astonished.  Wait, you don’t want to fire up a lot of languages? Here, someone has done it for you: http://0.30000000000000004.com/ (I really love this site.)
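In Python itself:

```python
# the classic demonstration of binary floating-point inexactness
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False
```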

If you’re working with numbers that are particularly sensitive, then you shouldn’t be using floats. Rather, you should use integers, or use something like Python’s decimal.Decimal, which guarantees accuracy at the expense of time and space. For example:

```
>>> from decimal import Decimal
>>> x = Decimal('0.1')
>>> y = Decimal('0.2')
>>> x + y
Decimal('0.3')
>>> float(x+y)
0.3
```

Of course, you should be careful not to create your decimals with floats:

```
>>> x = Decimal(0.1)
>>> y = Decimal(0.2)
>>> x + y
Decimal('0.3000000000000000166533453694')
```

Why is this the case? Let’s take a look:

```
>>> x
Decimal('0.1000000000000000055511151231257827021181583404541015625')

>>> y
Decimal('0.200000000000000011102230246251565404236316680908203125')
```

So, if you’re dealing with sensitive numbers, be sure not to use floats! And if you’re going outside in Shanghai today, it might be ever-so-slightly less humid than your weather forecast reports.

## Announcing: An online community for technical trainers

Over the last few years, my work has moved away from day-to-day software development, and more in the direction of technical training: Helping companies (and individuals) by teaching people how to solve problems in new ways.  Nowadays, I spend most of my time teaching courses in Python (at a variety of levels), regular expressions, data science, Git, and PostgreSQL.

And I have to say: I love it. I love helping people to do things they couldn’t do before.  I love meeting smart and interesting people who want to do their jobs better.  I love helping companies to become more efficient, and to solve problems they couldn’t solve before.  And I love the travel; next week, I leave for my 16th trip to China, and I’ll likely teach 5-6 classes in Europe before the year is over.

The thing is, I’m not alone: There are other people out there who do training, and who have the same feeling of excitement and satisfaction.

At the same time, trainers are somewhat lonely: To whom do we turn to improve our skills? Not our technical skills, but our skills as trainers? And our business skills as consultants who are looking to improve our knowledge of the training market?

Over the last year, I’ve started to help more and more people who are interested in becoming trainers. I’ve started a coaching practice. I’ve given Webinars and talks at conferences. I’ve started to work on a book on the subject.

But as of last week, I’ve also started a new, free community for technical trainers on Facebook. If you engage in training, or have always wanted to do so, then I invite you to join our new, free community on Facebook, at http://facebook.com/groups/techtraining .

I should note that this group is not for people running training businesses. Rather, it’s for the trainers themselves — the people who spend several days each month in a classroom, trying to get their ideas across in the best possible ways.

In this group, we’ll share ideas about (among other things):

• How to find clients
• How to prepare courses
• What a good syllabus and/or proposals look like
• How to decide whether a course is worth doing
• How to price courses
• Working on your own vs. via training companies
• How to upsell new courses to your clients
• How education research can help us to teach better

If you are a trainer, or want to be one, then I urge you to join our new community, at http://facebook.com/groups/techtraining .  We’ve already had some great exchanges of ideas that will help us all to learn, grow, and improve. Join us, and contribute your voice to our discussion!

## Speedy string concatenation in Python

As many people know, one of the mantras of the Python programming language is, “There should be one-- and preferably only one --obvious way to do it.”  (Use “import this” in your Python interactive shell to see the full list.)  However, there are often times when you could accomplish something in any of several ways. In such cases, it’s not always obvious which is the best one.

A student of mine recently e-mailed me, asking which is the most efficient way to concatenate strings in Python.

The results surprised me a bit — and gave me an opportunity to show her (and others) how to test such things.  I’m far from a benchmarking expert, but I do think that what I found gives some insights into concatenation.

First of all, let’s remember that Python provides us with several ways to concatenate strings.  We can use the + operator, for example:

```
>>> 'abc' + 'def'
'abcdef'
```

We can also use the % operator, which can do much more than just concatenation, but which is a legitimate option:

```
>>> "%s%s" % ('abc', 'def')
'abcdef'
```

And as I’ve mentioned in previous blog posts, we also have a more modern way to do this, with the str.format method:

```
>>> '{0}{1}'.format('abc', 'def')
'abcdef'
```

As with the % operator, str.format is far more powerful than simple concatenation requires. But I figured that this would give me some insights into the relative speeds.

Now, how do we time things? In Jupyter (aka IPython), we can use the magic “timeit” command to run code.  I thus wrote four functions, each of which concatenates in a different way. I purposely used global variables (named “x” and “y”) to contain the original strings, and a local variable “z” in which to put the result.  The result was then returned from the function.  (We’ll play a bit with the values and definitions of “x” and “y” in a little bit.)

```
def concat1():
    z = x + y
    return z

def concat2():
    z = "%s%s" % (x, y)
    return z

def concat3():
    z = "{}{}".format(x, y)
    return z

def concat4():
    z = "{0}{1}".format(x, y)
    return z
```

I should note that concat3 and concat4 are almost identical, in that they both use str.format. The first uses the implicit locations of the parameters, and the second uses the explicit locations.  I decided that if I’m already benchmarking string concatenation, I might as well also find out if there’s any difference in speed when I give the parameters’ indexes.

I then defined the two global variables:

```
x = 'abc'
y = 'def'
```

Finally, I timed running each of these functions:

```
%timeit concat1()
%timeit concat2()
%timeit concat3()
%timeit concat4()
```
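If you’re not working in Jupyter, the standard library’s “timeit” module can run the same measurements. Here’s a sketch for concat1; the numbers will of course vary by machine:

```python
import timeit

x = 'abc'
y = 'def'

def concat1():
    return x + y

# globals() lets timeit see x, y, and concat1
seconds = timeit.timeit('concat1()', globals=globals(), number=1000000)
print(seconds / 1000000, 'seconds per loop')
```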

The results were as follows:

• concat1: 153ns/loop
• concat2: 275ns/loop
• concat3: 398ns/loop
• concat4: 393ns/loop

From this benchmark, we can see that concat1, which uses +, is significantly faster than any of the others.  Which is a bit sad, given how much I love to use str.format — but it also means that if I’m doing tons of string processing, I should stick to +, which might have less power, but is far faster.

The thing is, the above benchmark might be a bit problematic, because we’re using short strings.  Very short strings in Python are “interned,” meaning that they are defined once and then kept in a table so that they need not be allocated and created again.  After all, since strings are immutable, why would we create “abc” more than once?  We can just reference the first “abc” that we created.
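You can see interning at work in CPython; note that it’s an implementation detail, so don’t rely on it for program logic:

```python
import sys

a = 'abc'
b = 'abc'
print(a is b)  # True in CPython: one interned object, two names

# sys.intern requests interning explicitly, even for longer strings
print(sys.intern('hello world!') is sys.intern('hello world!'))  # True
```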

This might mess up our benchmark a bit.  And besides, it’s good to check with something larger. Fortunately, we used global variables — so by changing those global variables’ definitions, we can run our benchmark and be sure that no interning is taking place:

```
x = 'abc' * 10000
y = 'def' * 10000
```

Now, when we benchmark our functions again, here’s what we get:

• concat1: 2.64µs/loop
• concat2: 3.09µs/loop
• concat3: 3.33µs/loop
• concat4: 3.48µs/loop

Each loop took a lot longer — but we see that our + operator is still the fastest.  The difference isn’t as great, but it’s still pretty obvious and significant.

What about if we no longer use global variables, and if we allocate the strings within our function?  Will that make a difference?  Almost certainly not, but it’s worth a quick investigation:

```
def concat1():
    x = 'abc' * 10000
    y = 'def' * 10000
    z = x + y
    return z

def concat2():
    x = 'abc' * 10000
    y = 'def' * 10000
    z = "%s%s" % (x, y)
    return z

def concat3():
    x = 'abc' * 10000
    y = 'def' * 10000
    z = "{}{}".format(x, y)
    return z

def concat4():
    x = 'abc' * 10000
    y = 'def' * 10000
    z = "{0}{1}".format(x, y)
    return z
```

And our final results are:

• concat1: 4.89µs/loop
• concat2: 5.78µs/loop
• concat3: 6.22µs/loop
• concat4: 6.19µs/loop

Once again, we see that + is the big winner here, but (again) by less of a margin than was the case with the short strings.  str.format is clearly slower.  And we can see that in all of these tests, the difference between “{0}{1}” and “{}{}” in str.format is basically zero.

Upon reflection, this shouldn’t be a surprise. After all, + is a pretty simple operator, whereas % and str.format do much more.  Moreover, str.format is a method, which means that it’ll have greater overhead.

Now, there are a few more tests that I could have run — for example, with more than two strings.  But I do think that this demonstrates to at least some degree that + is the fastest way to achieve concatenation in Python.  Moreover, it shows that we can do simple benchmarking quickly and easily, conducting experiments that help us to understand which is the best way to do something in Python.
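One such experiment worth trying: with many strings, the standard advice is to collect them in a list and call str.join, which builds the result in a single pass rather than creating an intermediate string at every step. A quick sketch:

```python
parts = ['abc'] * 1000

# repeated + builds a new intermediate string on every iteration...
result_plus = ''
for p in parts:
    result_plus = result_plus + p

# ...while str.join sizes and builds the final string once
result_join = ''.join(parts)

print(result_plus == result_join)  # True
```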