The real questions to ask offshore developers

Friends of mine, who are not software developers, have a small retail Internet business.  The original developers created the application in Python, and my friends are looking for a full-stack Web/Python developer to help them.  Frustrated with their inability to find someone who can commit to their project, my friends have decided to hire offshore developers, which is another way of saying, “cheap programmers in Eastern Europe or India.”

Earlier this week, these friends e-mailed me the resumes of three Ukrainian programmers, asking me which seemed most appropriate, and what questions they should be asking.

The resumes were, from my perspective, largely identical.  All the programmers declared themselves to be “experts,” or “very experienced,” or just “experienced,” at Python, JavaScript, Web development, SQL, and many other technologies.  And the fact is, they probably are quite skilled at all of these technologies; the Ukrainians with whom I have worked — as well as the Indians, Chinese, and Romanians — have all been quite skilled, technically.

But here’s the thing: Technical skill isn’t the primary consideration when hiring a developer. This is doubly true when hiring an offshore developer.  That’s because the problems that I’ve seen with offshore programmers aren’t technical, but managerial.  As I told my friends, you would much rather have a so-so Python programmer who is reliable and communicative than a genius programmer who is unreliable or uncommunicative.  The sad fact is that many offshore outsourcing companies have talented programmers but poor management and leadership, so the breakdowns come in communication, transparency, and scheduling, rather than in technology.

Sure, a developer might know the latest object-oriented techniques, or know how to create a RESTful JSON API in his or her sleep.  But the programmer’s job isn’t to do those things. Rather, the programmer’s job is to do whatever the business needs to grow and improve.  If that requires fancy-shmancy programming techniques and algorithms, then great.  But most of the time, it just requires someone willing to pay attention to the project’s needs and schedule, writing simple and reliable code that’s necessary for the business to succeed.

The questions that you should be asking an offshore developer aren’t that different from the ones that you should be asking a developer in your own country, who speaks your language, and lives in your time zone.  Specifically, you should be asking about their communication patterns and processes.  Of course, you don’t want a dunce working on your programming project — but good communication and processes will smoke out such a person very quickly.

If there are no plans or expectations for communication, then you’re basically hoping that the developer knows what you want, that he or she will do it immediately, and that things won’t change — a situation that is pretty much impossible.

Good processes and a good developer will lead to a successful project.  Good processes and a bad developer will make it clear that the developer needs to go, and soon.  Bad processes and a developer of any sort will make it hard to measure performance, leading to frustration on everyone’s part — and probably missed deadlines, overspent budgets, and more.

So I told my friends that they should get back to these Ukrainian programmers, and ask them the following questions:

  • What task tracking system do you prefer to use, in order to know what needs to be done, what has been done, and who has taken responsibility for each task?
  • How often do you want to meet to review progress?
  • Do you use automated testing, so that as we make progress, we can be sure that everything works, and that we haven’t introduced regressions?
  • How easily will a third party be able to download the repository from Git (or whatever version-control system you’re using), and run those tests to verify that everything is working?

The answers to these questions are far, far more important than the technical skills of the person you’re hiring. Moreover, these are things that you can test empirically: If the developer doesn’t do one or more of them, you’ll know right away, and can find out what is going wrong.
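
To make the automated-testing and verification questions concrete: even a tiny test suite, run from a fresh checkout, is empirical evidence.  Below is a minimal sketch of the kind of “smoke test” you might ask to see, assuming the project uses Python and the pytest framework; the total_price function here is a stand-in invented for illustration, not anything from a real project.

```python
# test_smoke.py -- a deliberately tiny "smoke test."  In a real project,
# the test would import code from the application itself; here, a stand-in
# function is defined inline so that the sketch runs on its own.

def total_price(prices):
    """Stand-in for a function the application might provide."""
    return round(sum(prices), 2)

def test_total_price_sums_item_costs():
    # Three items at known prices should produce a known total.
    assert total_price([10.00, 2.50, 0.99]) == 13.49

def test_total_price_of_empty_cart_is_zero():
    assert total_price([]) == 0
```

If a third party can clone the repository, run “pytest”, and see the tests pass, then progress isn’t a matter of trust; it’s something anyone can verify in a minute or two.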

If the developer is good, then he or she will encourage you to set up a task tracker, and to meet every day (or at least every other day) to review where things are.  You’ll hear that automated testing is part of the development process, and that of course it’s possible to download, install, and run the application on any compatible computer.

If the developer hedges on these things, or if he or she asks you simply to trust him or her, then that’s a bad sign.  Truth be told, the developer might be fantastic, brilliant, and do everything you want.  But do you want to take that risk?

If the developer has regular communication with you, tests their code, and allows you to download and run the application on your own, then you’re in a position to either praise them and keep the relationship going — or discover that things aren’t good, and shut it down right away.

Which brings me to my final point: With these sorts of communication practices in place, you’ll very quickly discover if the developers are doing what they promised.  If so, then that’s great for everyone.  But if not, then you’ll know this within a week or less — and then you can get rid of them.

There are plenty of talented software developers in the world, but there are many fewer who both understand your business and make its success a priority.  A developer who values your business will want to demonstrate value and progress on a very regular basis.  Someone who cannot demonstrate value and progress probably isn’t deserving of your attention or money, regardless of where they live or what language they speak.  But if you can find someone excellent, who values you and your business, and who wants to help you succeed?  Then by all means, hire them — and it doesn’t matter whether they’re in Ukraine, or anywhere else.

What questions do you ask offshore developers before hiring them?

Starting a new software project? Don’t start coding right away.

It’s always fun to start a new project. I should know; I’ve been a consultant since 1995, and have started hundreds of projects of various shapes and sizes.  It’s tempting, when I first meet a new client and come to an agreement, to dive right into the code, and start trying to solve their problems.

But that would be a mistake.

More important than code, more important than servers, more important than even finding out what problems I’m supposed to be solving, is the issue of communication.  How will the client communicate their questions and problems to me?  How will I tell them what I am doing?  Even more importantly, how will I tell them where I’m having problems, or need help?

Before you begin to code, you need to set up two things: First, a time and frequency for meetings.  Will it be every day at 8 a.m.?  Every Monday at 2 p.m.?  Tuesdays and Thursdays at 12 noon?  It doesn’t matter that much, although I have found that daily morning meetings are a good way to start the day.  (When you work on an international team, though, someone’s “morning” meeting is someone else’s evening meeting.)  These meetings, whether you want to call them standups, weekly reviews, or something else, are to make sure that everyone is on the same page.  Are there problems?  Issues?  Bugs?  New feature requests?  Is someone stuck and in need of help?  All of that can be discussed in the meeting.  And by setting a regular time for the meeting, you raise the chances that when something goes wrong (and it will), there will be a convenient time and place to discuss the problems.

I’m actually of the opinion that it’s often good to have both a daily meeting (for daily updates) and a weekly one (for review and planning).  Whatever works for you, stick with it.  But you want it to be on everyone’s schedule.

The second thing that you should do is set up a task tracker.  Whether it’s Redmine, Trello, GitHub issues, or even Pivotal Tracker, every software project should have such a task tracker.  They come in all shapes, sizes, and price points, including free.  A task tracker allows you to know, at a glance, what tasks are finished, which are being worked on right now, and which are next in line.  A task tracker lets you prioritize tasks for the coming days.  And it allows you to keep track of who is doing what.

Once you have set up the tracker and meeting times, you can meet to discuss initial priorities, putting these tasks (or “stories,” as the cool agile kids like to say) in the tracker.  Now, when a developer isn’t sure what to work on next, he or she can go to the task tracker and simply pick the top things off of the list.

This isn’t actually all that hard to do.  But it makes a world of difference when working on a project.

If you build it, they will come — but they might hate you

Several months ago, I was teaching an introductory Python course, and I happened to mention the fact that I use Git for all of my version-control needs.  I think that I would have gotten a more positive response if I had told them that my hobby is kicking puppies.

The reactions were roughly — and I’m not exaggerating here — something like, “What?  You use Git?!?  That so-called version control system whose main feature is eating our files?!?”   And I got this not just from one person, but from all 20-something people who were taking my Python course.  The more experience they had with Git, the more violently negative their reactions were.

I managed to calm them down a bit, and tried to tell them that Git is a wonderful system, except for one little problem, namely the fact that its interface is very hard to understand.  But, I promised them, once you understand how Git works, and once you start to work with it in light of that understanding, things start to make sense, and you can really enjoy and appreciate the system.

I should note that since that Python class, I’ve returned to the same company to give two day-long Git classes.  Based on the feedback I received, the Git class was very helpful, and I’m guessing that this is because I concentrated on what Git is really doing, and how the commands map to those actions.  I’m pretty sure that people from that class are starting to appreciate the power and flexibility of Git, rather than focusing only on their frustrations with it.

However, my experience working with and teaching Git has taught me a great deal about designing both software and UIs.  We love to say and think that excellent products with terrible marketing never get anywhere.  And in the commercial world, that might well be true.  Everyone loves to quote the movie “Field of Dreams” (which I never really liked anyway), in which the main character builds a baseball field after repeatedly hearing, “If you build it, they will come.”  As numerous other people have said, this is not the case for businesses: If you build it, they probably won’t come, unless you’ve invested time and money in marketing your product.

However, in the open-source world, we expect to invest time in learning a technology, and are generally more technical folks in any event.  Thus, we tend to be more forgiving of bad UIs, focusing on features rather than design.  It’s thus possible for something brilliant, efficient, flexible, and profoundly frustrating for new users to become popular.  Git is a perfect example of this.

Now, I happen to think that Git is one of the most brilliant pieces of software I’ve ever seen. Really, it’s impressively designed.  However, the commands are counter-intuitive for many people who used other version-control systems, and it’s possible to get yourself into a situation from which an expert can extract himself or herself, but in which a novice is completely befuddled.  Once you understand how Git works (brilliantly described in this video), things start to make sense.  But getting to that point can take a great deal of time, and not everyone has that time.

In open source, then, “If you build it, they will come” might sometimes work.  However, even if they do come, and even if they use the software that you have written, you might end up in a particularly unenviable situation: People will use the software, but will hate you for the way in which you designed it.

The upshot, then, is that it’s worth taking a bit of time to think about your users, and how they will use your system.  It’s worth taking the time to create an interface (including commands) that will make sense for people.  Look at WordPress, for example: It packs in a great deal of functionality, but also pays attention to the UI… and as a result, has become a hugely dominant part of the Web ecosystem.

Sure, Git is famous and popular, and I’m one of its biggest fans, at least in terms of functionality. But if Linus had spent just a bit more time thinking about command names, or behaviors, I think that we would have had an equally powerful tool, but with fewer people in need of courses to understand why their files are getting trampled.

Good intentions, unexpected results: Mailing lists and DMARC

If there’s anything that software people know, it’s that changing one part of a program can result in a change in a seemingly unrelated part of the program.  That’s why automated testing is so powerful; it can show you when you have made a mistake that you not only didn’t intend, but that you didn’t expect.

If unexpected results can happen in a system that you control and supposedly understand, it’s not hard to imagine what happens when the results of your changes involve many pieces of software other than yours, running on computers other than yours, being used by customers who aren’t yours.

This would appear to be the situation with one of the latest anti-spam and security features for e-mail, known as DMARC.

I’m not intimately familiar with this standard, but I’ve seen enough other standards relating to e-mail over the years to know that anything having to do with e-mail will be frustrating for some of the people involved.  E-mail is in use by so many people, on so many computers, and by so many different programs, that you can’t possibly make changes without someone getting upset.  Nevertheless, the DMARC implementation and rollout by a number of large e-mail providers over the last few weeks has been causing trouble.

Let me explain: DMARC promises, to some degree, to reduce the amount of spam that we get by verifying that the sender’s e-mail address (in the “From” field) matches the server from which the e-mail was sent.  So if you get e-mail from me, with a “From” address of “reuven@lerner.co.il”, DMARC will verify that the e-mail was really sent from the lerner.co.il server.  To anyone who has received spam, or fake messages, or illegal “phishing” messages, this sounds like a great thing: No longer will you get messages from your friend with a hotmail.com address, asking for money now that they’re stranded in London.  It really does aim, admirably, to reduce the number of such messages.

How? Very simply, by checking that the “From” address in the message matches the server from which the message was sent.  If your DMARC-compliant server receives e-mail from “reuven@lerner.co.il”, but the server was some anonymous IP address in Mongolia, your server will refuse to receive the e-mail message.
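
To make the idea concrete, here is a deliberately simplified sketch, in Python, of the alignment check at the heart of DMARC.  Real DMARC is considerably more involved, building on SPF and DKIM results and on a policy record published in DNS, none of which appears here; this is only the core question, reduced to code.

```python
from email import message_from_string
from email.utils import parseaddr

def from_domain_matches_sender(raw_message, sending_domain):
    """Toy version of DMARC's core check: does the domain in the
    From: header match the domain that actually sent the message?"""
    msg = message_from_string(raw_message)
    _, from_address = parseaddr(msg.get("From", ""))
    from_domain = from_address.rpartition("@")[2].lower()
    return from_domain == sending_domain.lower()

raw = "From: reuven@lerner.co.il\nSubject: Hello\n\nHi there!"

print(from_domain_matches_sender(raw, "lerner.co.il"))  # True: accept
print(from_domain_matches_sender(raw, "example.com"))   # False: reject
```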

So far, so good.  But of course, for every rule, there are exceptions.  Consider, for example, e-mail lists: When someone posts to a list, the “From” address is preserved, so that the message appears to be coming from the sender.  But in fact, the message isn’t coming from the sender.  Rather, it’s coming from the e-mail program running on a server.

For example, if I (reuven@lerner.co.il) send e-mail to a mailing list (list@example.com), the e-mail will really be coming from the example.com server.  But it’ll have a “From” address of reuven@lerner.co.il.  So now, if a receiver is using DMARC, they’ll see the discrepancy, and refuse to receive the e-mail message.

If lerner.co.il is using DMARC in the strictest way possible, then reuven@lerner.co.il sending to list@example.com will have especially unpleasant consequences: lerner.co.il will refuse to receive its own subscriber’s message to the list, because DMARC will show it to be a fake.  These refusals will count as a “bounce” on the mailing list, meaning a message that failed to get to the recipient’s inbox.  Enough such bounces, and everyone at lerner.co.il will be unsubscribed.

Yes, this means that if your e-mail provider uses DMARC, and if you subscribe to an e-mail list, then posting to such a list may result (eventually) in every other user of your provider being unsubscribed from the list!

I’ve witnessed this myself over the last few weeks, as members of a large e-mail list I maintain for residents of my city have slowly but surely been unsubscribed.  Simply put, any time that a Hotmail, Yahoo, or AOL user posts to the list for Modi’in residents, all of these companies (and perhaps more) refuse the message.  This refusal increases the number of bounces attributed to the subscribers, and eventually results in mass auto-unsubscriptions.

As if that weren’t bad enough (and yes, it’s pretty bad), people who have been passively reading the e-mail list for years (i.e., not posting) are now getting cryptic messages from the list-management software, saying that they have been unsubscribed because of excessive bounces.  Most people have no idea what this means, which in turn leads to the list managers (such as me) having to explain intricate e-mail policy issues.

There are some solutions to this problem, of course.  But they’re all bad, so far as I can tell, and the change that made them necessary came without any serious warning or notification.  And when it comes to e-mail, you really don’t want to start rejecting messages en masse without warning.  The potential solutions are:

  1. Subscribers can receive the digest mode of the list, which is always “From” an address on the server.  If you get the digest, this problem won’t happen to you.  If you are a mailing-list subscriber, rather than a list administrator, this is really the only recourse that you have.
  2. The list managers can change the list such that instead of each message being “From” the individual, it’ll come from the list’s address; a sketch of this sort of rewriting appears after this list.  I know that there are some people who say that this is the right behavior for e-mail lists, but I have long subscribed (so to speak) to the school of thought that you don’t want to change the “From” address.  (For more on this subject, you can read “reply-to considered harmful” and its associated messages.)
  3. Supposedly, Mailman (the list-management software that I use) now has some support for DMARC that might solve the problem.  But the more I learn about DMARC, the less I’m convinced that Mailman can do anything.
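
For the curious, the rewriting described in option 2 above, often known as “From munging,” looks roughly like the following sketch, which uses Python’s standard email module.  This is an illustration only; real list software must also deal with DKIM signatures, character encodings, and a great deal more.

```python
from email import message_from_string
from email.utils import parseaddr, formataddr

def munge_from_header(raw_message, list_address):
    """Rewrite From: so that the message appears to come from the list
    itself, keeping the original author reachable via Reply-To.  This
    satisfies DMARC, at the cost of changing a decades-old convention."""
    msg = message_from_string(raw_message)
    name, author = parseaddr(msg.get("From", ""))
    del msg["From"]                      # remove the original header
    msg["From"] = formataddr((f"{name or author} via list", list_address))
    if "Reply-To" not in msg:
        msg["Reply-To"] = author         # replies still reach the author
    return msg.as_string()

raw = "From: reuven@lerner.co.il\nSubject: Hi\n\nHello, neighbors!"
print(munge_from_header(raw, "list@example.com"))
```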

And by the way, it’s not just little guys like me who are suffering.  The IETF, which writes the standards that make the Internet work, recently discovered that their e-mail lists are failing, too.

E-mail lists are incredibly useful tools, used by many millions (and perhaps billions) of people around the world.  You really don’t want to mess with how they work unless there’s a very good reason to do so.  Yes, spam and fraud are big problems, and I welcome efforts to fight them.

But really, would it have been so hard to contact all of the list-management software makers (how many can there be?) and work out some sort of deal?  Or at least get the message out to those of us running lists that this is going to happen?  I have personally spent many hours now researching this problem, and trying to find a solution for my list subscribers, with little or no success.

This all brings me back to my original point: The intentions here were good, and DMARC sounds like a good idea overall.  But it is affecting, in a very negative way, a very large number of people who are now suddenly, and to their surprise, cut off from their friends, colleagues, workplaces, and organizations.  The fact that AOL and other e-mail providers are saying, “Well, you’ll just need to reconfigure your list software,” without considering whether we want to do this, or whether e-mail lists really need to change after more than two decades (!) of working in a certain way, is rather surprising to me.  I’m not sure if there’s any way back, but I certainly hope that this is the last time such a drastic, negative solution is foisted on the public in this way.

Teaching and acting (or, why I don’t plan to sell recorded classes in the near future)

Several weeks ago, my wife and I saw a wonderful play at our local theater in Modi’in (“Mother Courage and Her Children”).  At the end, the actors came out to receive their richly deserved applause.  Three times, they came out, took their bows, and were warmly applauded by the audience.  We loved their performance — but just as importantly, they loved performing, and they loved to see and hear the reactions from the audience, both during and after the play.

I’m sure that some or all of these actors have worked in television and the movies; Israel is a small country, and it’s hard for me to believe that actors can decide only to work in a single medium.  But I’ve often heard that actors prefer to work on stage, because they can have a connection with the audience.  When they say something funny, sad, or upsetting, they can feel (and even hear) the audience’s reaction.

But while we often hear about TV and movie stars making many millions of dollars off of their work, it’s less common for stage actors to make that kind of money.  That’s because when you act on stage, you’re by definition limiting your audience to the number of people who can fit in a theater.  Even the largest theaters aren’t going to have more than a few hundred seats; by contrast, even a semi-successful TV show or movie will get tens or hundreds of thousands of viewers on a given night.  (And yes, TV and film have many more expenses than plays do — but the fact remains that you can scale up the number of TV and film viewers much more easily than you can a play’s audience.  Plus, movies and TV can both be shown in reruns.)

Another difference is the effort that you need to put into a stage production, as opposed to a TV program or a movie: In the former case, you need to perform each and every night.  In the latter, you record your performance once — and yes, it’ll probably require multiple takes — and then it can be shown any number of times in the future.  You can even be acting on stage while your TV show is broadcast.  Or more than one of your movies can be shown simultaneously, in thousands of cities around the world.

What does this have to do with me?  And why have I been thinking about this so much over the last few weeks, since seeing that play?

While I’m a software developer and consultant, I also spend a not-insignificant amount of time teaching people: In any given week, I will give 2-4 full days of classes in Python, Ruby, Ruby on Rails, PostgreSQL, and Git, with other classes likely to come in the next few months.

I’m starting to dip my toes into the waters of teaching online, and hope to do it increasingly frequently over the coming months and years.  But unlike most online programming courses currently being offered, I intend to make most or all of my courses real-time, live, and in person.

This has some obvious disadvantages: It means that people will need to be available during the precise hours that I’m teaching. It means that the course will have to be higher in price than a pre-recorded video course, because I cannot amortize my time investment over many different purchases and viewings.  And it means that the course is limited in size; I cannot imagine teaching more than 10 people online, just as I won’t teach an in-person class with more than 20 people.

Given all of these disadvantages, why would I prefer to do things this way, live and in person?

The answer, in a word, is: Interactions.

I’m finishing my PhD in Learning Sciences, and if there’s anything that I have gained from my studies and research, it’s that personal interactions are the key to deep learning. That’s why my research is all about online collaboration; I deeply believe that it’s easiest and best to learn when you speak with, ask questions of, challenge, and collaborate with others, ideally when you’re trying to solve a problem.

I’m not saying that it’s impossible to learn on your own; I certainly spend enough hours each week watching screencasts and lectures, and reading blog posts, to demonstrate that it’s possible, pleasurable, and beneficial to learn in these ways. But if you want to understand a subject deeply, then you should communicate somehow with other people.

That’s one of the reasons why pair programming is so helpful, improving both the resulting software and the programmers who engage in the pairing. That’s why open source is so successful — because in a high-quality open-source project, you’ll have people constantly interacting, discussing, arguing, and finally agreeing on the best way to do things. And that’s why I constantly encourage participants in my classes to work together when they’re working on the exercises that I ask them to solve: Talking to someone else will help you to learn better, more quickly, and more deeply.

I thus believe that attending an in-person class offers many advantages over seeing a recorded screencast or lecture, not because the content is necessarily better, but because you have the opportunity to ask questions, to interact with the teacher, to clarify points that weren’t obvious the first time around, and to ask how you might be able to integrate the lectures into your existing work environment.

So for the students, an in-person class is a huge win.  What do I get out of it?  Why do I prefer to teach in person?

To answer that, I return to the topic with which I started this post, namely actors who prefer to work on stage, rather than on TV and in movies. When I give a course, it’s almost like I’m putting on a one-man show. Just as actors can give the same performance night after night without getting bored, I can give the same “introduction to Python” course dozens of times a year without tiring of it.  (And yes, I do constantly update my course materials — but even so, the class has stayed largely the same for some time.)  I’m putting on a show, albeit an interactive and educational one, and while I put on the same show time after time, I don’t get tired of it.

And the reason that I don’t get tired of it? Those same interactions, which are so beneficial to the students’ learning and progress, are good for me, as the instructor.  They keep me on my toes, allow me to know what is working (and what isn’t), provide me with an opportunity to dive more deeply into a subject that is of particular interest to the participants, and assure me that the topics I’m covering are useful and important for the people taking my class.

I live and work in Israel, and one of the things that I love about teaching Israelis is that I’m almost guaranteed to be challenged and questioned at nearly every turn.  Israelis are, by nature, antagonistic toward authority.  As a result, my lectures are constantly interrupted by questions, challenges, and requests for proof.

I have grown so accustomed to this way of things that it once backfired on me: Years ago, I gave a one-day course in the US that ended at lunchtime — it turned out that the Americans were very polite and quiet, and didn’t ask any questions, allowing me to get through an entire day’s worth of material in just half of the time.  I have since learned to make cultural adjustments to the number of slides I prepare for a given day, depending on where I will be teaching!

When I look at stage actors, and see them giving the same performance that they have given an untold number of times in the past, I now understand where they’re coming from. For them, each night gives them a chance to expose a new audience to the ideas that they’re trying to get across through their characters and dialogue.  And yes, they could do that in a movie — but then they would be missing the interactions that they have with the audience, which provide a sense of excitement that’s hard to match.

Does this mean that I won’t ever record screencasts or lectures?  No, I’m sure that I will do that at some point, and I already have some ideas for doing so. But they’ll be fundamentally different from the courses that I teach, complementing the full-length courses, rather than replacing them. At the end of the day, I get a great deal of satisfaction from lecturing and teaching, both because I see that people are learning (and thus gaining a useful skill), and because my interactions with them are so precious to me, as an instructor.

Convention over confusion

One of the most celebrated phrases that has emerged from Ruby on Rails is “convention over configuration.” The basic idea is that software can traditionally be used in many different ways, and that we can customize it using configuration files. Over the years, configuration files for many types of software have become huge; installing software might be easy, but configuring it can be difficult. Moreover, given the option, everyone will configure software differently. This means that when you join a new project, you need to learn that project’s specific configuration and quirks.

“Convention over configuration” is the idea that we can make everyone’s lives easier if we agree to restrict our freedom. Ruby on Rails does this by telling you precisely what your directories will be named, and where they will be located. Rails tells you what to call your database tables, your class names, and even your filenames. The Ruby language, while generally quite open and flexible, also enforces certain conventions: Class and module names must begin with capital letters, for example.

It can take some time for developers to accept these conventions.  Indeed, I was one of them: When I first started to work with Rails, I was somewhat offended to be told precisely what my database column names would be, especially when those names contradicted advice that I had heard and adopted years earlier.  (The advice was to prefix every column in a database table with the name of the table, which would make it more easily readable in joins.  Thus the primary key of the “People” table would be person_id, followed by person_first_name, person_last_name, and so forth.)  Over time, I have grown not only to use these Rails conventions, but to enjoy working with them; it turns out that people can change pretty easily, at least when it comes to these arbitrary decisions.

The real benefit of such conventions has nothing to do with my own work. Rather, it reduces the need for communication among people working on the same project. If everyone does it the same way, then there are fewer things to negotiate, and we can all concentrate on the real problems, rather than the ones which are relatively arbitrary.

Back in college, I was the editor of the student newspaper.  We, like many newspapers, used the AP Stylebook to determine the style that we would use.  The AP Stylebook was our bible; whatever it said, we did.  Of course, we also had our own local style, to cover things that AP didn’t, such as building names and numbers (e.g., we could refer to “Building 54”).  In some cases, I personally disagreed with the AP Stylebook, especially when it came to the “Oxford comma”: I prefer the serial comma, and use it in my personal writing.  But by keeping to AP style, we were able to download articles from the Washington Post and LA Times, and stick them into our newspaper with minimal editing.  By adhering to a standard, we ensured consistency in our writing, and reduced the workload of the (already hard-working) newspaper staff.

Twice in the last few weeks, I’ve been reminded of the benefits of convention over configuration — both times, when developers on projects I inherited decided to flout the rules. Their decisions weren’t wrong, but they were so wildly different from the conventions of Rails that they caused trouble, delays, and bugs.

The first case had to do with the Rails “asset pipeline,” a part of Rails which handles static assets such as JavaScript and CSS files. The idea is that you create a file called application.js, and that file then tells Rails about all of the JavaScript files used by your application. Before deploying a new version of your application, Rails combines all of these files into one big file, thus improving site performance (by reducing the number of files to download) and improving caching. The asset pipeline is a great idea, and it even works well — but in many cases, getting it to work correctly can be difficult and painful, particularly if you’re new to Rails.

So you can imagine my surprise when I looked for the application.js file, and didn’t find it.  That was bad enough, but the asset pipeline mechanism, as well as the deployment scripts I was developing, got rather confused by the absence of application.js.  When I confronted the original developer about this, he told me that actually, he liked to call it something else entirely, reflecting the name of the application and client.  Why?  He didn’t really have a technical reason; it was all for reasons of aesthetics.  But the rest of the Rails ecosystem expects application.js, so his decision meant that everything around his code needed to be configured in a special, different way.

As a way of justifying his decision, the developer told me, “Conventions shouldn’t be a boundary when developing.”  No, just the opposite — the idea is that conventions are there to limit you, to tell you to work in the way that everyone else works, so that things will be smoother.  In much of the world, we drive on the right side of the road.  This is utterly arbitrary; as numerous countries (e.g., England) have proven, you can drive on the other side of the road just fine — but only so long as everyone is doing it.  The moment everyone decides on their own conventions, big problems can occur.

When Biblical Hebrew wants to describe anarchy, it uses the phrase, “People did whatever was right in their own eyes.”

Something similar occurred with another project where I inherited code from someone else: One of my favorite things about Ruby on Rails is the fact that it runs the application in an “environment.”  The three standard environments are development (which is optimized for developer speed, not for execution speed), production (which is optimized for execution speed), and test (which is meant for testing).  The environments aren’t meant to change the application logic, but rather the way in which the application behaves.  For example, I recently changed the way in which e-mail is sent to users of my dissertation software, the Modeling Commons.  When I send the e-mail in the “production” environment, the e-mail is actually sent — but when I do so within the “development” environment, the e-mail is opened in a browser, so that I can examine it.  This is standard and expected behavior; all Rails applications have development, production, and test environments — and some even have a “staging” environment, in which we try things out under production-like conditions before deploying.

My client’s software, which I inherited from someone else, took a different approach: The code was meant to be used on several different sites, each with slightly different logic.  The developer decided to use Rails environments to distinguish among these sites.  Thus, if you run the application under the “xyz” environment, you’ll get one logical path, and if you run the application under the “abc” environment, you’ll get another logical path.
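
The anti-pattern here is general, not Rails-specific.  In Python terms, it amounts to something like the following sketch; the environment names come from the story above, and everything else is invented for illustration:

```python
import os

# The deployment environment should change *behavior* (logging, email
# delivery, caching), never *business logic*.  This sketch shows the
# misuse: two different sites' rules keyed off the environment name.
ENV = os.environ.get("APP_ENV", "development")

def shipping_cost(order_total):
    if ENV == "xyz":        # site-specific business rule, hidden inside
        return 0 if order_total > 50 else 5    # an environment check
    elif ENV == "abc":      # a second site's rule, ditto
        return 7
    else:                   # and the "test" environment exercises
        return 0            # neither site's real logic!
```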

It’s hard to describe the number of surprises and problems that this seemingly small decision has created: It means that we can’t really test the application using the normal Rails tools, because nothing will work correctly in the “test” environment. It means that the Phusion Passenger server that we installed to run the application needs an additional, special configuration parameter (not normally needed in production) to find the right database, and execute with the correct algorithms. It means that when you’re trying to trace through the logic of the application, you need to check the environment.

Basically, all of the things that you can assume about most Rails applications aren’t true in this one.

Now, the point of me writing this isn’t to say that I’m brilliant and that other developers are stupid — although it is true that Reuven’s First Law of Consulting states that a new consultant on a project must call his predecessor a moron.  Rather, it’s to point out that conventions are there for a reason, and that if you insist on ignoring them, you’ll be increasing the learning curve for other developers who need to work on your application.  Now, if you have oodles of time and money, that’s just fine — but as a general rule, a developer’s time is a software company’s greatest expense, and anything you can do to increase productivity, and decrease the need for explanations and communication, is worthwhile.

By the way, this is the whole reason why one of the Python mantras is, “There should be only one obvious way to do it” — a direct contrast with the Ruby and Perl mantra, “There’s more than one way to do it.”  Having a single, common way to do things makes everyone’s code more uniform, more readable, and easier to understand.  It doesn’t stop you from doing brilliant and interesting things, but it does ask that you demonstrate your brilliance within the context of established practice.
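
A trivial example of what that buys you: when there is one obvious idiom, everyone’s code converges on it, and readers never have to stop and wonder.  Both loops below are valid Python; only the first is what you will actually find in well-kept codebases.

```python
names = ["alice", "bob", "carol"]

# The one obvious way: iterate directly over the sequence.
for name in names:
    print(name)

# Legal, but unidiomatic: C-style, index-based iteration.  In a codebase
# that follows convention, this version simply never appears, so nobody
# has to ask why it was chosen.
for i in range(len(names)):
    print(names[i])
```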

Of course, this doesn’t mean that conventions are written in stone, or that they are unchangeable.  But if and when you ignore them, it should be for good reason.  Even if you’re right, think about whether you’re so right that it’s worth having multiple people learn your way of doing things, instead of the way that they’re used to doing them.

What do you think?  Have you seen these sorts of issues in your work?  Let me know!