I have been consulting, developing, and offering training classes in both Ruby and Python for a number of years now — more than 15 years in Python, and more than 7 years in Ruby. Inevitably, when someone from one of my courses hears that I use more than one language, they ask me, “So, which one do you prefer?”
One way to address this is to speak like a parent (which I am), and to give the analogy that just like I love all three of my children equally but differently, I love these two languages equally, but differently. But the most recent time I was answering this question, I asked myself, how do you like them differently? What is appealing about each of these languages? Why do you enjoy working with (and teaching) both of them?
I began to search for analogies that would describe the relationship between Ruby and Python, and the reason why I enjoy working with them both. I would sometimes extend the children analogy, saying that they’re like siblings. But beyond the fact that I’m not their parent, I decided that there were enough differences to make the sibling analogy not quite appropriate. Perhaps it would be most appropriate to call them cousins, or even second cousins.
But then I hit upon another analogy, one which might indicate my age and television-watching habits as a child, but which I think is somewhat apt: The Odd Couple.
I remember the Odd Couple as an American sitcom from the 1970s, broadcast in endless reruns on certain stations, in which two divorced men become roommates and friends, despite their wildly different habits and outlooks on life. (I should note that the Neil Simon play and movie, upon which the TV series was based, is far darker, and really surprised me when I saw it after years of watching the TV show.)
The viewers aren’t ever expected to prefer neatnik, uptight Felix or sloppy, happy-go-lucky Oscar, but rather to appreciate the differences between the two, and to see a bit of themselves in each character. In some ways — and perhaps more philosophically than was ever intended — the play, movie, and show are there to tell us that there is no one “right” way to approach life, and that each has its advantages and disadvantages. Balance is the key.
The more I think about it, the more I like this analogy, because it speaks to the differences between the languages, and the reasons why I love to work in each of them. Python, not surprisingly, is Felix: It’s clean, crisp, elegant, and engineered precisely. It’s no surprise that Python has been called “executable pseudo-code,” in that I’ve met a very large number of people (many of whom take my courses) who have been working with Python for months without knowing precisely what they were doing.
Python is conservative by nature, and that has served the language well for more than two decades. Indeed, you could argue that the entire 2-to-3 Python upgrade issue, which has been causing ripples of late, is the result of Python betraying this conservative culture, and making a clean break with past versions for the first time in its history. There are parts of Python that drive me crazy, such as len being a builtin function, list.sort not returning a value, the limits on lambda, the need for both tuples and lists, and the way that super works. But every language has its issues, and a very large number of them were improved or removed altogether in Python 3.
But other parts of the language are beautiful, such as the way in which operator overloading is done. Sure, Ruby lets you rewrite + directly, but I think that there’s something about Python’s __add__ which tells newcomers that they should avoid messing with it until they know what they’re doing. I have also grown to love list comprehensions (as well as dictionary and set comprehensions), even though I readily admit that the syntax is difficult for beginners. Also, the Python standard library is just a joy to work with; you can really depend on things working pretty well. And one of the things that people hate at first about Python, namely the required whitespace, is sheer genius in my book. Decorators are also wonderful; while I don’t use them much, they are an elegant and powerful way to intercept function and class definitions, and do all sorts of wild stuff with them.
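To make the point about __add__ concrete, here is a minimal sketch of Python-style operator overloading; the Money class is my own invented example, not anything from the standard library:

```python
class Money(object):
    """A tiny invented class showing Python's operator-overloading hooks."""
    def __init__(self, amount):
        self.amount = amount

    def __add__(self, other):
        # Python translates "a + b" into a.__add__(b); the double-underscore
        # name signals to newcomers that this hook is something special.
        return Money(self.amount + other.amount)

total = Money(3) + Money(4)
print(total.amount)  # 7
```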
Ruby, on the other hand, is Oscar: It’s infinitely flexible, messy, and creative — but it works the way you want it to work. Ruby inherited many of the characteristics of Perl, which Larry Wall deliberately meant to be close to natural human language. Sure, it’s a minor miracle that Ruby’s syntax can be described using computers, given its complexity, but that complexity allows me flexibility, creativity, and intellectual excitement that I can’t get elsewhere. Add blocks to the mixture, and you have a language which gives you raw building blocks that allow you to solve problems quickly, easily, and naturally, with less code than would otherwise be necessary. For example, ActiveRecord might have its problems, but I generally love its API and the magic that it performs on my behalf. The way that validations and associations look like declarations (but are actually class methods) is great, making for readable code.
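As a tiny illustration of the blocks mentioned above (my own example, not from any particular library): a method can yield control to a chunk of code that the caller hands it, which is exactly the kind of raw building block that makes Ruby feel so flexible.

```ruby
# A method that hands values to whatever block the caller supplies
def twice
  yield 1
  yield 2
end

results = []
twice { |i| results << i * 10 }
puts results.inspect  # [10, 20]
```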
Of course, Ruby has its problems, as well: The object model is elegant and simple — but nearly impossible for newcomers to the language to grasp. (I should know, I teach quite a lot of them.) The fact that everything ends with “end” drives me a bit crazy. So do the differences between procs, lambdas, and blocks. And the “stubby lambda” syntax. But again, every language has its issues and trade-offs, and the ones that Ruby has made are more than reasonable for my work.
Matz has said that Ruby was optimized for programmer happiness, and Avdi Grimm has used the word “joy” to describe programming in Ruby — and I have to agree with both of them. Programming in Python feels more like solving a puzzle, but programming in Ruby feels more satisfying; I’m unleashing my creative energies, and using the language to solve problems in the way that I want. Python is crisp and demanding, and Ruby is messy and fun. You know, like Felix and Oscar.
Of course, the style of the languages might be very different — but at the end of the day, there’s a lot of overlap between the two. IPython and Pry, PyPI and RubyGems, dicts and hashes, “def initialize” and “def __init__” — if you know Ruby, then learning Python isn’t very difficult, and vice versa. Both are byte-compiled, interpreted, object-oriented, strongly typed, dynamic languages. Both have a GIL, which drives people crazy with threading. Both make reflection and metaprogramming easy and natural. Both languages encourage modularization of code, with short functions. Both encourage you to test your code. Both have active open-source communities. And both can be used to solve lots of problems, easily and quickly.
Indeed, the languages are similar enough that I’ve often “stolen” ideas, examples, and exercises from my Python classes for my Ruby classes, and vice versa. And I’ve often thought, when reading the documentation for a method on a built-in Ruby class, that it’s a shame that there’s no Python equivalent… only to discover that there is.
I love Python’s PEP process, which makes it easy for the community to document and discuss changes to the language. And yet, somehow, Ruby has moved from version 1.9 to 2.0 to 2.1 in the last few years, with great improvement on all fronts, without such a clear-cut process. I’m not quite sure how Ruby manages to do it, but it does, and rather impressively.
So, which do I prefer? For Web development, I use Ruby (and Rails or Sinatra). For small projects and problem solving, and sysadmin types of things, I use Python. If I had to do large-scale calculations, then NumPy would make Python a no-brainer. As a first programming language to teach young people, I think that Python is an almost perfect choice. And for mind-twisting, understand-how-languages-work examples, Ruby beats everyone hands down.
At the end of the day, I’m happy to have a foot in each camp, and to be comfortable with both. Because sometimes you want to be Felix, and sometimes you want to be Oscar, and it’s always nice not to have to choose between the two.
In some programming languages, the idea of “reflection” is somewhat exotic, and takes a while to learn. In Python (and Ruby, for that matter), the language is both dynamic and open, allowing you to poke around in the internals of just about any object you might like. Reflection is a natural part of working with these languages, which lends itself to numerous things that you might want to do.
Reflection in Python is easy because everything is an object, and every Python object has attributes, which we can list using “dir”. For example, I can list the attributes of a string:
>>> s = 'abc'
>>> dir(s)
['__add__', '__class__', '__contains__', '__delattr__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__getitem__', '__getnewargs__', '__getslice__', '__gt__', '__hash__', '__init__', '__le__', '__len__', '__lt__', '__mod__', '__mul__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__rmod__', '__rmul__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '_formatter_field_name_split', '_formatter_parser', 'capitalize', 'center', 'count', 'decode', 'encode', 'endswith', 'expandtabs', 'find', 'format', 'index', 'isalnum', 'isalpha', 'isdigit', 'islower', 'isspace', 'istitle', 'isupper', 'join', 'ljust', 'lower', 'lstrip', 'partition', 'replace', 'rfind', 'rindex', 'rjust', 'rpartition', 'rsplit', 'rstrip', 'split', 'splitlines', 'startswith', 'strip', 'swapcase', 'title', 'translate', 'upper', 'zfill']
Since everything is an object, including built-in classes, I can get the attributes of a base type, as well. For example, I can get the attributes associated with the “str” class:
>>> dir(str)
['__add__', '__class__', '__contains__', '__delattr__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__getitem__', '__getnewargs__', '__getslice__', '__gt__', '__hash__', '__init__', '__le__', '__len__', '__lt__', '__mod__', '__mul__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__rmod__', '__rmul__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '_formatter_field_name_split', '_formatter_parser', 'capitalize', 'center', 'count', 'decode', 'encode', 'endswith', 'expandtabs', 'find', 'format', 'index', 'isalnum', 'isalpha', 'isdigit', 'islower', 'isspace', 'istitle', 'isupper', 'join', 'ljust', 'lower', 'lstrip', 'partition', 'replace', 'rfind', 'rindex', 'rjust', 'rpartition', 'rsplit', 'rstrip', 'split', 'splitlines', 'startswith', 'strip', 'swapcase', 'title', 'translate', 'upper', 'zfill']
(If you see a great deal of overlap here between the string instance and the str type, that’s because of the way in which Python handles attribute scoping. If you cannot find an attribute on an object, then Python looks for the attribute on the object’s class.)
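This instance-then-class lookup is easy to see for yourself; here is a minimal sketch (the Greeter class is my own invented example):

```python
class Greeter(object):
    greeting = 'hello'        # defined on the class, not on any instance

g = Greeter()
g.name = 'world'              # defined on this one instance

# 'greeting' isn't found on the instance, so Python falls back to the class:
print(g.greeting)             # hello
print(g.name)                 # world
print('greeting' in vars(g))  # False -- it lives on the class, not the instance
```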
Functions are also objects in Python, which means that we can list their attributes, as well:
def hello(name):
    print "Hello, %s" % name

>>> hello("Dolly")
Hello, Dolly
>>> dir(hello)
['__call__', '__class__', '__closure__', '__code__', '__defaults__', '__delattr__', '__dict__', '__doc__', '__format__', '__get__', '__getattribute__', '__globals__', '__hash__', '__init__', '__module__', '__name__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', 'func_closure', 'func_code', 'func_defaults', 'func_dict', 'func_doc', 'func_globals', 'func_name']
Given an object and the name of one of its attributes, I can retrieve, set, or check for the existence of the attribute value with the builtin “getattr”, “setattr”, and “hasattr” functions. For example, I can first check to see if an attribute has been defined on an object:
>>> hasattr(hello, 'x')
False
Then I can set the attribute:
>>> setattr(hello, 'x', 5)
>>> hello.x
5
Notice that this does indeed mean that I have added a new “x” attribute to my function “hello”. It might sound crazy that Python lets me set an attribute on a function, and even crazier that the attribute can be a random name. But that’s a fact of life in Python; in almost every case, you can add or change just about any attribute on just about any object. Of course, if you do this to classes that you didn’t create, or that aren’t expecting you to change things, then you can end up causing all sorts of strange behavior.
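One legitimate (if cheeky) use of this fact is to hang a cache off of a function, so that the function carries its own state around with it. Here is a sketch using a hand-rolled Fibonacci function of my own invention:

```python
def fib(n):
    # The function's own 'cache' attribute memoizes earlier results --
    # an example of the "attributes on functions" fact described above.
    if n not in fib.cache:
        fib.cache[n] = n if n < 2 else fib(n - 1) + fib(n - 2)
    return fib.cache[n]

fib.cache = {}   # attach the attribute, just as we attached 'x' to 'hello'
print(fib(10))   # 55
```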
To retrieve our attribute value, we use the builtin “getattr” function:
>>> getattr(hello, 'x')
5
Note that there isn’t any difference between saying “hello.x” and invoking getattr. By the same token, there’s no difference between putting “hello.x” on the left side of an assignment, and using the builtin “setattr” function.
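The real payoff of getattr and setattr is that the attribute name is a plain string, and can therefore be computed at runtime. Here is a small sketch (the Point class and snapshot function are my own invented examples):

```python
class Point(object):
    def __init__(self):
        self.x = 3
        self.y = 4

def snapshot(obj, names):
    # The attribute names arrive as strings, chosen at runtime --
    # something the fixed "obj.x" dot syntax cannot do.
    return dict((name, getattr(obj, name)) for name in names)

print(snapshot(Point(), ['x', 'y']))  # {'x': 3, 'y': 4}
```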
One of the places where we tend to set lots of attributes is in our __init__ methods. While many people think of __init__ as a constructor, that’s not really the case. Rather, __init__ is invoked on our new object after it has been created; its job is to take the new, blank-slate object and add whatever attributes we’ll want and need during the rest of its life. It’s thus common for us to do something like this:
class Foo(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y

>>> f = Foo(10, 20)
>>> f.x
10
>>> f.y
20
The above is all pretty standard stuff, but let’s still walk through it: We create an instance of Foo. This means that Python first creates a naked Foo object (using the __new__ method behind the scenes), which it then passes to Foo.__init__, along with the parameter values from our invocation. Inside of the __init__ method, there are three local variables: self, which points to the new object that we just created, x, and y. There are actually two x’s and two y’s in this method: The local variables x and y, and the attributes x and y that sit on self. From Python’s perspective, there is no connection between the local x and self.x. At the same time, we see an obvious, semantic connection between the two — which is precisely as it should be.
I was teaching a Python class this week, when someone complained to me about the fact that Python code has this repeated boilerplate code in each object’s __init__ method. Why, he asked me, does Python not have a way to automatically take all of the local variables, and pass them directly as attributes on the instance? That is, if I pass parameters ‘x’ and ‘y’ to my object instantiation, then the method shouldn’t have to be told to run self.x = x, or self.y = y. Rather, Python could just do that automatically.
This is almost certainly not something that we want to do in Python. The language (and its community) loves to make things explicit rather than implicit (as stated in the Zen of Python), and this sort of black magic seems more appropriate for Ruby than Python. But it did seem like an interesting challenge to find out how easily I could implement such a thing, and I gladly took the challenge.
In order to solve this problem, I decided to work backwards: My ultimate goal would be to set attributes on self. That is, I would want to invoke setattr on self, passing it the name of the attribute I want to set, plus the associated value. Thus, if the function sees that it has parameters self, x, and y, it should invoke setattr(self, ‘x’, VALUE) and setattr(self, ‘y’, VALUE). I say VALUE here, because we still haven’t figured out where we’ll get the attribute names from, let alone their values.
It’s actually not that hard, as a general rule, to get the attribute names from a function. I can just go to any function and ask for the __code__ attribute. Underneath that, among other things, is a co_varnames attribute, containing the names of local variables defined in the function. So if I can just get __code__.co_varnames from inside of __init__ when it is invoked, we’ll be in great shape. (Note that in Python 3, the names of these attributes have changed slightly.)
Well, sort of: It’s nice that __code__ is available as an attribute on the function, but those attributes are only available via the function’s name. How can I refer to the function from within the function itself? Is there a pointer to “the current function”?
Not directly, no. But the “inspect” module, which comes with Python, is perfect for such cases. Normally, we think of inspect as allowing us to look at Python objects from the outside. But inspect also allows us to look at the current stack frame, which includes the function that is currently executing. For example:
>>> import inspect
>>> frame = inspect.currentframe()
>>> frame.f_code
<code object <module> at 0x106a24830, file "<stdin>", line 1>
The above was executed outside of a function, which means that the function-related information is missing. Things get much more interesting when we’re inside of a function, however: f_code returns the code object of the currently executing function (i.e., the stuff that is usually under the function’s __code__ attribute):
def foo():
    print(inspect.currentframe().f_code.co_name)

>>> foo()
foo
We can also get other function attributes, such as the names of local variables:
def foo(x):
    print(inspect.currentframe().f_code.co_varnames)

>>> foo(5)
('x',)
As you can see, we can get the names of local variables — including parameter names — from the co_varnames attribute. A very simple version of our “autoinit” function could thus take an object and one or more parameters, the names of which would be used to set attributes on that object. For example:
def autoinit(obj, x, y):
    for varname in inspect.currentframe().f_code.co_varnames:
        if varname == 'obj':
            continue
        else:
            setattr(obj, varname, 'hello')

>>> import os
>>> autoinit(os, 5, 10)
>>> os.x
'hello'
>>> os.y
'hello'
In the above example, we define our function such that it’ll take the names of all local variables (except for “obj”) and assign new attributes to that object. However, the value is always ‘hello’. How can we assign the actual values that are being passed to the parameters?
The easiest solution, I think, is to use the locals() function, which returns a dictionary of the currently defined local variables. The keys of this dictionary are strings, which means that we can pretty trivially use the variable names to grab the value — and then make the assignment:
def autoinit(obj, x, y):
    for varname in inspect.currentframe().f_code.co_varnames:
        if varname == 'obj':
            continue
        else:
            setattr(obj, varname, locals()[varname])

>>> autoinit(os, 5, 10)
>>> os.x
5
>>> os.y
10
So far, so good! We now have a function that can use the parameter names in setting attributes. However, if this function is going to work, then it’s not going to exist on its own. Rather, we want to invoke “autoinit” from within our __init__ method. This means that we need autoinit to set attributes not based on its own parameters, but rather based on its caller’s parameters. The inspect.currentframe method returns the current stack frame, but we want the caller’s stack frame.
Fortunately, the implementers of inspect.currentframe thought of this, and provide an easy and elegant solution: If we invoke inspect.currentframe with a numeric parameter, Python will jump back that number of stack frames. Thus, inspect.currentframe() returns the current stack frame, inspect.currentframe(1) returns the caller’s stack frame, and inspect.currentframe(2) returns the caller’s caller’s stack frame.
By invoking inspect.currentframe(1) from within __init__, we can get the instance onto which we want to add the attributes, as well as the attribute names and values themselves. For example:
import inspect

def autoinit():
    frame = inspect.currentframe(1)
    params = frame.f_locals
    self = params['self']
    paramnames = frame.f_code.co_varnames[1:]  # ignore self
    for name in paramnames:
        setattr(self, name, params[name])

class Foo(object):
    def __init__(self, x, y):
        autoinit()

>>> f = Foo(100, 'abc')
>>> f.x
100
>>> f.y
'abc'
>>> g = Foo(200, 'ghi')
>>> g.x
200
>>> g.y
'ghi'
Hey, that’s looking pretty good! However, there is still one problem: Python doesn’t see a difference between parameters and local variables. This means that if we create a local variable within __init__, autoinit will get confused:
class Bar(object):
    def __init__(self, x, y):
        autoinit()
        z = 999
>>> b = Bar(100, 'xyz')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 3, in __init__
  File "<stdin>", line 7, in autoinit
KeyError: 'z'
As you can see, autoinit tries to find the value of our ‘z’ local variable — but because the variable has not yet been set, it isn’t in the locals() dictionary. But the bigger problem is that autoinit is trying to look for z at all: Since z is a local variable, rather than a parameter, we don’t want it to be set as an attribute on self!
The solution is to use the co_argcount attribute of our code object, which says how many arguments our function takes. For example:
def foo(x):
    y = 100
    return inspect.currentframe()

>>> s = foo(5)
>>> print(s.f_code.co_argcount)
1
>>> print(s.f_code.co_varnames)
('x', 'y')
We can improve our implementation of autoinit by only taking the first co_argcount elements of co_varnames. So far as I can tell (and I don’t know if this is official, or just a convenient accident), the arguments always come first in co_varnames. Our final version of autoinit thus looks like this:
def autoinit():
    frame = inspect.currentframe(1)
    params = frame.f_locals
    nparams = frame.f_code.co_argcount
    self = params['self']
    paramnames = frame.f_code.co_varnames[1:nparams]
    for name in paramnames:
        setattr(self, name, params[name])
Sure enough, if we try it:
class Foo(object):
    def __init__(self, x, y):
        autoinit()
        z = 100

>>> f = Foo(10, 20)
>>> print f.x
10
>>> print f.y
20
Success! Of course, this implementation still doesn’t handle *args or **kwargs. And as I wrote above, it’s very much not in the spirit of Python to have such magic happening behind the scenes. Yet, I’ve found this to be an interesting way to discover and play with functions, the “inspect” module, and how arguments are handled.
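As an aside, the same trick can be adapted to Python 3, where inspect.currentframe no longer accepts a depth argument; a sketch (my own adaptation, not from the original post) walks back one frame explicitly with f_back instead:

```python
import inspect

def autoinit():
    # inspect.currentframe() takes no depth argument in Python 3, so we
    # use f_back to reach the caller's (__init__'s) stack frame instead.
    frame = inspect.currentframe().f_back
    params = frame.f_locals
    nparams = frame.f_code.co_argcount
    self = params['self']
    for name in frame.f_code.co_varnames[1:nparams]:
        setattr(self, name, params[name])

class Foo:
    def __init__(self, x, y):
        autoinit()
        z = 100   # local variables are still safely ignored

f = Foo(10, 20)
print(f.x, f.y)  # 10 20
```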
If you liked this explanation, then you’ll likely also enjoy my ebook, “Practice Makes Python,” with 50 exercises meant to improve your Python fluency.
I had so much fun writing the previous blog post about Python scoping that I decided to expand it into a free e-mail course. Each day (for five days), you’ll receive another lesson about how scopes work in Python, and why this is important for you to know as a Python developer.
So if you’ve ever been unclear on the “LEGB rule,” wanted to know when and how to use the “global keyword,” or even how you can nearly break your Python implementation through scope abuse, this e-mail course should help.
Please e-mail me if you have questions or comments about this e-mail course! I’ve had so much fun putting this one together that I’m very likely to create additional ones, so your suggestions for future topics are extremely welcome.
Let’s say I want to try something on a list in Python. While I usually like to call my test list objects “mylist”, I sometimes forget, and create a variable named “list”:
list = ['a', 'b', 'c']
If you’re like me, then you might not immediately notice that you’ve just defined a variable whose name is the same as a built-in type. Other languages might have defined “list” as a reserved word, such that you cannot define it. (Just try creating a variable named “if”, and you’ll see what I mean.) But Python won’t stop you. This means that now, instead of type(list) returning “type” (i.e., indicating that list is a data type), it’ll say:
>>> type(list)
<type 'list'>
If you’re new to Python, and think that it’s normal for type(list) to return “list”, let’s get a bit bolder:
>>> list = 'abc'
>>> type(list)
<type 'str'>
(If you’re using Python 3, then you’ll see “<class ‘str’>”, and not “<type ‘str’>”. But there’s really no difference.) Now things are getting downright weird. Of course, I can fix the situation:
>>> del(list)
>>> type(list)
<type 'type'>
What’s going on here? How was I able to turn lists into strings? And how did deleting “list” suddenly restore things? The answer is: Python’s scoping rules. They are fairly simple to understand, and very consistent (as you would expect from Python), but have implications that can cause all sorts of weirdness if you’re not sure of what to expect. The Python scoping rules are Local, Enclosing Function, Global, and Builtins, often abbreviated as “LEGB.” This means that when Python encounters an identifier (variable or function) name, it will look for the name as follows:
- If you’re inside of a function, it’ll first look in that function,
- If you have defined a function within a function (and beginners really shouldn’t be doing this), then Python will look in the enclosing function,
- Python then looks at the “global” level, which is another way of saying in the current file, and
- As a last resort, Python looks in the __builtins__ module, the namespace in which Python’s standard and built-in types are located.
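All four levels can be seen in one small sketch (the variable names here are my own, invented for illustration):

```python
x = 'global'                  # G: defined at the top level of the file

def outer():
    x = 'enclosing'           # E: defined in the enclosing function
    def inner():
        x = 'local'           # L: defined in the innermost function
        return x              # found first in the Local scope
    return inner(), x         # this x comes from the Enclosing scope

print(outer())                # ('local', 'enclosing')
print(x)                      # 'global' -- no function scopes apply here
print(len('abc'))             # 'len' is found in the Builtins scope: 3
```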
So if I have a one-line Python program that refers to a variable x:

print(x)

there are no functions, and thus no L or E to consider. Python will look in G, the global namespace, meaning the current file. If there is an x defined there, then great; that’s what will be printed. If there is no x defined in the current file, then Python will look in __builtins__, which doesn’t have an x, and we’ll get a NameError exception — meaning, Python doesn’t know what name you’re talking about.

So far, so good, right? Well, now consider what happens when we define a new variable, by assigning it a value. If you define the variable inside of a function, then it’s in the “local” scope. If you define the variable at the top level of a file, then it’s in the “global” scope. And if you define it in the __builtins__ module (and you really shouldn’t be doing that), then it’ll be in the “builtins” scope.

When we defined “list” (e.g., “list = 'abc'”), we were defining a new variable in the global scope. We didn’t replace or remove the builtin “list” at all! Indeed, the builtin “list” is still available if we use its full name, __builtins__.list:
>>> list = 'abc'
>>> type(list)
<type 'str'>
>>> type(__builtins__.list)
<type 'type'>
The problem, then, isn’t that we have replaced the built-in “list”, but rather that we have masked it. Once we have defined “list” in the global scope, all naked references to “list” in that file — as well as in functions defined within that file — will see our global variable “list”, rather than __builtins__.list.

How, then, did it help for me to delete “list”? Because del(list) doesn’t delete __builtins__.list. (You can do that, by the way, but that’s for another blog post.) Rather, del(list) in our case deletes “list” from the global scope. When we then ask Python for type(list), it looks in L, E, and G, and doesn’t find anything. It thus goes to __builtins__, finds “list”, and returns us the type of “list”, which is once again a “type” or “class”. Whew!

If you enjoyed this, then you might like my three-day, online Python class, which includes such tidbits. The course will begin on January 14th (read more about it at masterpython.com). You can also subscribe to my free, exclusive technology newsletter.
I spend a large proportion of my time teaching classes in a variety of open-source technologies — specifically, Ruby, Python, PostgreSQL, and Git. One of the questions that invariably arises in these classes has to do with the case sensitivity of the technology in question. That is, is the variable “x” the same as the variable “X”?
In nearly every case, the technologies with which I work are case sensitive, meaning that “x” and “X” are considered two completely different identifiers. Indeed, the Ruby language goes so far as to give capitalized identifiers a special status, calling them “constants.” (They’re not really constants, in that you can always redefine a Ruby constant. However, you will get a warning when you reassign it. For this reason, I prefer to call them “stubborns,” so that people don’t get the wrong idea.)
SQL is a completely different story, however: The SQL standard states that SQL queries and identifiers (e.g., table names) aren’t case sensitive. Thus, there’s no difference between
select id, email from people;

and

SELECT ID, EMAIL FROM PEOPLE;
I find both of these styles to be somewhat unreadable, and over the years have generally followed Joe Celko’s advice for capitalization in SQL queries:
- SQL keywords are in ALL CAPS,
- Table names have Initial Caps, and
- Column names are all in lowercase.
Given that rule, the above query would look like this:
SELECT id, email FROM People;
Again, this capitalization scheme is completely ignored by PostgreSQL. It’s all for our benefit, as developers, who want to be able to read our code down the road.
Actually, that’s not entirely true: PostgreSQL doesn’t exactly ignore the case, but rather forces all of these names to be lowercase. So if you say
CREATE TABLE People (
    id SERIAL NOT NULL,
    email TEXT NOT NULL,
    PRIMARY KEY(id)
);
PostgreSQL will create a table named “people”, all in lowercase. But because of the way PostgreSQL works, forcing all names to lowercase, I can still say:
SELECT * FROM People;
And it will work just fine.
Now, there is a way around this, namely by using double quotes. Whereas single quotes in PostgreSQL are used to create a text string, double quotes are used to name an identifier without changing its case.
Let me say that again, because so many people get this wrong: Single quotes and double quotes in PostgreSQL have completely different jobs, and return completely different data types. Single quotes return text strings. Double quotes return (if you can really think of them as “returning” anything) identifiers, but with the case preserved.
Thus, if I were to repeat the above table-creation query, but use double quotes:
CREATE TABLE "People" (
    id SERIAL NOT NULL,
    email TEXT NOT NULL,
    PRIMARY KEY(id)
);
I have now created a table in which the table name has not been forced to lowercase, but which has preserved the capital P. This means that the following query will now fail:
select * from people;
ERROR:  relation "people" does not exist
LINE 1: select * from people;
                      ^
It fails because I have created a table “People”, but I have told PostgreSQL to look for a table “people”. Confusing? Absolutely. If you use double quotes on the name of a table, column, index, or other object when you create it, and if there is even one capital letter in that identifier, you will need to use double quotes every single time you use it. That’s frustrating for everyone involved — it means that we can’t use the nice capitalization rules that I mentioned earlier, and that various queries will suddenly fail to work.
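Continuing the example above, a sketch of how the same table must now be queried (the surrounding session is assumed to have run the quoted CREATE TABLE from before):

```sql
SELECT * FROM "People";  -- succeeds: the quoted identifier keeps its capital P
SELECT * FROM people;    -- fails: the unquoted name is folded to lowercase,
                         -- and no table named "people" exists
```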
The bottom line, then, is to avoid using double quotes when creating anything. Actually, you should avoid double quotes when retrieving things as well — otherwise, you might discover that you’re trying to retrieve a column that PostgreSQL doesn’t believe exists.
Now, let’s say that you like this advice, and you try to take it to heart. Unfortunately, there are places where you still might get bitten, despite your best efforts.
For example, the GUI tool for PostgreSQL administration, PGAdmin 3, is used by many people. (I’m an old-school Unix guy, and thus prefer the textual “psql” client.) I’ve discovered over the years that while PGAdmin might be a useful and friendly way to manage your databases, it also automatically uses double quotes when creating tables. This means that if you create a table with PGAdmin, you might find yourself struggling to find or query it afterwards.
Another source of frustration is the Active Record ORM (object-relational mapper), most commonly used in Ruby on Rails. Perhaps because Active Record was developed by users of MySQL, whose table and column names are case-sensitive by default, Active Record automatically puts double quotes around all table and column names in queries. This can lead to frustrating incompatibilities — such as if you want to access the column in Ruby using CamelCase, but in a case-insensitive way in the database.
PostgreSQL is a fabulous database, and has all sorts of great capabilities. Unless you really want your identifiers to be case-sensitive, though, I strongly suggest that you avoid using double quotes. And if you encounter problems working with columns, check the database logs to see whether the queries are being sent using double quotes. You might be surprised, and manage to save yourself quite a bit of debugging time.
One of the most celebrated phrases that has emerged from Ruby on Rails is “convention over configuration.” The basic idea is that software can traditionally be used in many different ways, and that we can customize it using configuration files. Over the years, configuration files for many types of software have become huge; installing software might be easy, but configuring it can be difficult. Moreover, given the option, everyone will configure software differently. This means that when you join a new project, you need to learn that project’s specific configuration and quirks.
“Convention over configuration” is the idea that we can make everyone’s lives easier if we agree to restrict our freedom. Ruby on Rails does this by telling you precisely what your directories will be named, and where they will be located. Rails tells you what to call your database tables, your class names, and even your filenames. The Ruby language, while generally quite open and flexible, also enforces certain conventions: Class and module names must begin with capital letters, for example.
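As a quick illustration of the language-level convention, Ruby treats class and module names as constants, and rejects lowercase names outright:

```ruby
# Class names must be constants, which in Ruby means they must
# begin with a capital letter.
class Widget   # fine: capitalized name
end

begin
  # A lowercase class name is rejected when the code is parsed.
  eval("class widget; end")
rescue SyntaxError
  puts "lowercase class names are rejected"
end

puts Widget.name
```

Note that this is enforced by the parser itself, not by a style guide or a linter; the convention is baked into the language.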
It can take some time for developers to accept these conventions. Indeed, I was one of them: When I first started to work with Rails, I was somewhat offended to be told precisely what my database column names would be, especially when those names contradicted advice that I had heard and adopted years earlier. (The advice was to prefix every column in a database table with the name of the table, which would make it more easily readable in joins. Thus the primary key of the “People” table would be person_id, followed by person_first_name, person_last_name, and so forth.) Over time, I have grown not only to use these Rails conventions, but to enjoy working with them; it turns out that people can change pretty easily, at least when it comes to such arbitrary decisions.
The real benefit of such conventions has nothing to do with my own work. Rather, it reduces the need for communication among people working on the same project. If everyone does it the same way, then there are fewer things to negotiate, and we can all concentrate on the real problems, rather than the ones which are relatively arbitrary.
Back in college, I was the editor of the student newspaper. We, like many newspapers, used the AP Stylebook to determine the style that we would use. The AP Stylebook was our bible; whatever it said, we did. Of course, we also had our own local style, to cover things that AP didn’t, such as building names and numbers (e.g., we could refer to “Building 54”). In some cases, I personally disagreed with the AP Stylebook, especially when it came to the “Oxford comma,” which AP style omits. But by keeping that rule, we were able to download articles from the Washington Post and LA Times, and stick them into our newspaper with minimal editing. I still prefer the serial comma, and use it in my personal writing. But by adhering to a standard, I was able to ensure consistency in our writing, and reduce the workload of the (already hard-working) newspaper staff.
Twice in the last few weeks, I’ve been reminded of the benefits of convention over configuration — both times, when developers on projects I inherited decided to flout the rules. Their decisions weren’t wrong, but they were so wildly different from the conventions of Rails that they caused trouble, delays, and bugs.
The first of these projects involved the Rails asset pipeline, which expects every application to have a JavaScript manifest file named application.js. So you can imagine my surprise when I looked for the application.js file, and didn’t find it. That was bad enough, but the asset pipeline mechanism, as well as the deployment scripts I was developing, got rather confused by its absence. When I asked the original developer about this, he told me that he liked to call the file something else entirely, reflecting the name of the application and client. Why? He didn’t have a technical reason; it was purely a matter of aesthetics. But the rest of the Rails ecosystem expected application.js, so his decision meant that the rest of the software needed to be configured in a special, different way.
As a way of justifying his decision, the other developer told me, “Conventions shouldn’t be a boundary when developing.” No, just the opposite — the idea is that conventions are there to limit you, to tell you to work in a way that everyone else works, so that things will be smoother. In much of the world, we drive on the right side of the road. This is utterly random; as numerous countries (e.g., England) have proven, you can drive on the other side of the road just fine — but only so long as everyone is doing it. The moment everyone decides on their own conventions, big problems can occur.
When Biblical Hebrew wants to describe anarchy, it uses the phrase, “People did whatever was right in their own eyes.”
Something similar occurred with another project where I inherited code from someone else. One of my favorite things about Ruby on Rails is the fact that it runs the application in an “environment.” The three standard environments are development (optimized for developer convenience, not execution speed), production (optimized for execution speed), and test (meant for automated testing). The environments aren’t meant to change the application logic, but rather the way in which the application behaves. For example, I recently changed the way in which e-mail is sent to users of my dissertation software, the Modeling Commons. When I send e-mail in the “production” environment, the message is actually delivered; when I do so in the “development” environment, the message is opened in a browser, so that I can examine it. This is standard and expected behavior; all Rails applications have development, production, and test environments, and some even have a “staging” environment, in which we prepare a release for production.
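Per-environment behavior of this sort lives in the per-environment configuration files. A minimal sketch, assuming the letter_opener gem for browser-based mail preview in development (the gem choice and the application name MyApp are my assumptions, purely for illustration):

```ruby
# config/environments/development.rb
MyApp::Application.configure do
  # Open outgoing mail in the browser instead of delivering it.
  config.action_mailer.delivery_method = :letter_opener
end

# config/environments/production.rb
MyApp::Application.configure do
  # Actually deliver mail in production.
  config.action_mailer.delivery_method = :smtp
end
```

The application code that sends the mail is identical in both environments; only the delivery behavior changes, which is exactly what environments are for.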
My client’s software, which I inherited from someone else, decided to do something a bit different: The code was meant to be used on several different sites, each with slightly different logic. The developer decided to use Rails environments in order to distinguish between the logical functions. Thus, if you run the application under the “xyz” environment, you’ll get one logical path, and if you run the application under the “abc” environment, you’ll get another logical path.
It’s hard to describe the number of surprises and problems that this seemingly small decision has created: It means that we can’t really test the application using the normal Rails tools, because nothing will work correctly in the “test” environment. It means that the Phusion Passenger server that we installed to run the application needs an additional, special configuration parameter (not normally needed in production) to find the right database, and execute with the correct algorithms. It means that when you’re trying to trace through the logic of the application, you need to check the environment.
Basically, all of the things that you can assume about most Rails applications aren’t true in this one.
Now, the point of me writing this isn’t to say that I’m brilliant and that other developers are stupid — although it is true that Reuven’s First Law of Consulting states that a new consultant on a project must call his predecessor a moron. Rather, it’s to point to the fact that conventions are there for a reason, and that if you insist on ignoring them, you’ll be increasing the learning curve that other developers will need to work on your application. Now, if you have oodles of time and money, that’s just fine — but as a general rule, a developer’s time is a software company’s greatest expense, and anything you can do to increase productivity, and decrease the need for explanations and communication, is worthwhile.
By the way, this is the whole reason why one of the Python mantras is, “There’s only one way to do it” (a direct contrast with the Ruby and Perl mantra, “There’s more than one way to do it”). Having a single, common way to do things makes everyone’s code more uniform, more readable, and easier to understand. It doesn’t stop you from doing brilliant and interesting things, but it does ask that you demonstrate your brilliance within the context of established practice.
Of course, this doesn’t mean that conventions are written in stone, or that they are unchangeable. But if and when you ignore them, it should be for good reason. Even if you’re right, think about whether you’re so right that it’s worth having multiple people learn your way of doing things, instead of the way that they’re used to doing them.
What do you think? Have you seen these sorts of issues in your work? Let me know!
Hello out there!
I’ve been privileged to work with many great people and companies since 1995, when I first started working as a consultant. I’ve helped companies to create Web applications from an idea, to learn programming languages, to improve their business processes, and to optimize their databases.
The time has come to describe some of what I’ve learned. Sure, I’ve given plenty of conference talks, and my Linux Journal column has been published every month since 1996. But there are all sorts of things that are too short, or too esoteric, for those forums, and this blog is where I can share some thoughts on the intersection between technology and society.
Given that I’m also finishing a PhD in Learning Sciences at Northwestern University, you can expect to see some comments about technology and education here, as well. You’re also welcome to check out the Modeling Commons, the collaborative platform for NetLogo modeling that I have created as part of my doctoral studies.
If you have any ideas, comments, or suggestions, I’m happy to hear them; always feel free to contact me at email@example.com. I read every message, and am happy to hear from clients and colleagues.