Tuesday, June 9, 2009


Yep, it's another blog/article on interviewing. I just searched Google for "interview programming" and got 21.8M hits. So what's so bad about one more? :)

I wanted to say a few words about interviewing mid-level and senior developers. I do this for my company as one of my many functions (along with software architect and lead developer). The reason I feel compelled to write is disappointment. I just cannot reconcile how good some resumes look with how poorly the candidates perform off the paper.

When I interview, I *never* *ever* go the puzzle/trick route. I think those have very little to do with our jobs at best and give misleading results at worst. To me, you should focus where the big cost items are in software - in its (lack of) structure and in its (lack of) flexibility. Does the code smell?

It is how easily the software can be changed over time that determines its real cost.

So when I interview, I first establish that the candidate has a good grasp of object-oriented design fundamentals. Then I focus on the key area of "best practices." Does the candidate understand when they are creating a mess? Do they recognize conceptual pollution when they see it?

To ascertain this, I give them a take-home (Java) "test." I don't want them to feel a great deal of time pressure (when was the last time your boss dropped a programming task in your lap and said it needed to be done in 20 minutes?). I give them overnight to complete the test, but in reality it should only take about 20 minutes.

The actual test is short. It's a two-page code sample for them to critique. I tell them that their boss is interested in hiring the developer of that code and wants their feedback. The sample exhibits the following kinds of problems:
  • embedding strings directly in code
  • mixing of abstractions
  • blocks of code commented out (with no other explanation as to why it was commented out)
  • the use of public virtual functions called in a constructor (a questionable practice)
  • complete lack of comments
  • errors in relationships (the coder assumed 1:1 when it's really 1:N)
  • missing abstractions (overuse of string data)
  • over-reliance on open-ended integers (instead of enums)
In all, there are about a dozen issues that could be flagged. I have had several candidates tell me that everything looked good to them. My average score is probably 5 issues. I tend to get interested in a candidate if they score 7 or more. The best I have seen was from a really, really good developer who did it sight-unseen in about 10 minutes and got 14 issues (I give credit if they raise and can defend most technical concerns).
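The actual test stays private, but the constructor item deserves a word, since it is one of the subtler smells on the list. A contrived sketch (hypothetical classes, not taken from the test) of why calling an overridable method from a constructor is questionable in Java:

```java
// Hypothetical classes illustrating the hazard: the constructor of Base
// calls an overridable method, which dispatches to the subclass override
// before the subclass's own fields have been initialized.
class Base {
    final String observedDuringConstruction;

    Base() {
        // Dangerous: label() resolves to Derived.label() here,
        // but Derived's fields are not yet assigned.
        observedDuringConstruction = label();
    }

    String label() { return "base"; }
}

class Derived extends Base {
    private final String name;

    Derived() {
        name = "derived"; // runs only AFTER Base's constructor has finished
    }

    @Override
    String label() { return name; } // yields null while Base() is running
}
```

A candidate who can explain why observedDuringConstruction ends up null for a Derived instance has earned that point.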

I don't consider this tricky questioning. It basically boils down to me having developers that don't keep me awake at night wondering what messes they are making. I feel that a competent developer ought to be able to get most of this almost by instinct.

So why do 90% of the candidates not make the bar?

Wednesday, April 8, 2009

Third Life, Fourth Life, ...

I was in a discussion today with a colleague of mine (I'll call him Jim). He was asking my opinion of an important message he put at the end of his application installer. I suggested some minor tweaks, which he appreciated. But I also asked him to emphasize the message more, possibly with an alert or an extra dedicated panel (most installers are wizards). His comment was that doing so would only aid maybe 10% more of the users.

What a sad but true observation.

That took me back about a year, to when I stood at the back of a training class watching one of our sales engineers use a product I designed. He was working his way through a guided work flow. When an error message came up (you know, the one with the big red X on it), he couldn't click it away fast enough. Breaking with best practices, I interrupted him to ask what the error message had said, and he didn't know.

This was truly an eye-opener for me. To him, an interrupting dialog/alert was a nuisance to be disposed of as quickly as possible. It was impeding his work flow. I try to put a lot of effort into the information content of error messages, to be as helpful as possible to the end user. Was all that effort wasted? I liken careful error handling to putting guard rails on a winding mountain road.

Back to my conversation with Jim.

We were discussing possible changes to user interfaces. I remarked that maybe with virtual reality becoming commonplace, we could actually be more effective in informing users of the severity of problems. Maybe a future application would have a user traveling down a path, say a winding mountain path. When a severe error occurs, rather than a dialog, we could actually push them off a virtual cliff. At least the experience would be hard to dismiss and might even be memorable.

Maybe instead of giving the user the Ok/Cancel button of today, such a future application might present a button with the label "Respawn?"

Thursday, January 15, 2009

Primitive Types and the Domain Model

I was at the No Fluff Just Stuff conference this year (5th year in a row for me), listening to Neal Ford talk about development concepts. He made a casual remark that caught and held my attention far after he had moved on to other topics. He said

Never use primitive types in your domain model

As you veteran developers know, the core of many well-built applications is a clear, concise domain model. This forms the basis for key abstractions and is the framework around which you typically hang your business logic. Most of my experience over the years has been with developing large, complex desktop applications where a good domain model is key. I assume that for medium-to-large enterprise web applications, a good domain model is also highly valued.

So what exactly does the quote above mean? I'll give you some examples. The first is from one of our early domain models where we represented network topology elements (computers, routers, etc). One of the data members of our network representation was bandwidth, as in set-the-bandwidth-of-this-network-to-X. We assumed that the number was megabits per second. Of course, we wrote some independent utility scaling routines to go to kilobits per second and gigabits per second.

Do you see the inferior design? Of course, we should never have used a primitive type (like double) to represent bandwidth. What we should have done was create a Bandwidth class and passed this to the "set" and "get" routines. This class would have had the concept of units (possibly a class like BitRate) and would have encapsulated all the conversion routines in one nice package. Maybe you would find lines of code like

double capacity = bw.inMegabitsPerSec();

And importantly constructed like this

Bandwidth bw = new Bandwidth(3.5, MegabitsPerSec);

That is so much easier to read and is far clearer than

double bw = 3.5;
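A minimal sketch of what such a class might look like (the names Bandwidth and BitRate follow the text above; the unit constants and internal representation are illustrative, not our actual code):

```java
// Illustrative units enum: each constant knows its scale in bits/sec.
enum BitRate {
    KILOBITS_PER_SEC(1e3),
    MEGABITS_PER_SEC(1e6),
    GIGABITS_PER_SEC(1e9);

    final double bitsPerSec;
    BitRate(double bitsPerSec) { this.bitsPerSec = bitsPerSec; }
}

// Bandwidth normalizes to one internal unit, so the conversion
// routines live in exactly one place.
final class Bandwidth {
    private final double bitsPerSec;

    Bandwidth(double value, BitRate unit) {
        this.bitsPerSec = value * unit.bitsPerSec;
    }

    double inKilobitsPerSec() { return bitsPerSec / BitRate.KILOBITS_PER_SEC.bitsPerSec; }
    double inMegabitsPerSec() { return bitsPerSec / BitRate.MEGABITS_PER_SEC.bitsPerSec; }
    double inGigabitsPerSec() { return bitsPerSec / BitRate.GIGABITS_PER_SEC.bitsPerSec; }
}
```

With this in place, new Bandwidth(3.5, BitRate.MEGABITS_PER_SEC).inKilobitsPerSec() gives 3500.0, and the scattered utility scaling routines disappear.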

A second example comes from some code that I traipsed across not a month ago. We have a switchboard-like, sockets-based server that handles communications between applications within a given suite. In sending a message through this switchboard to another application, you identify the target application by name. The application names and versions that form a suite come from a configuration file. We allow aliases/monikers for the applications, and we try not to be case sensitive. Part of this is convenience, and part of it is because the applications are written in diverse technologies (C++, Perl, Java, C#).

I was looking at this code for the first time, and I saw a lot of string manipulation going on. There was trimming (of spaces), folding of case, and substring matching, a lot of it. In tracing through the code, I found a string that was already lower case being set to lower case again. Clearly, somewhere along the line, the programmer (or programmers) lost track of the flow of data through the application and of where messages were entering the system. The string hacking was so prolific that it made the code harder to read.

So what was missing here? Instead of using a string primitive type, how about a class called ApplicationName. It would encapsulate all the concepts of case insensitivity and of aliases. I even found code later on that emitted information to logs and to the end user, so it could even encapsulate the concept of "pretty name" (a standard, human-consumable name).
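A sketch of the idea (the alias table and naming scheme here are made up for illustration; in real life they would come from the suite's configuration file):

```java
import java.util.Map;

// Hypothetical sketch: one class owns the trimming, case folding, and
// alias resolution that was scattered through the switchboard code.
final class ApplicationName {
    // Illustrative alias table; real entries would be loaded from config.
    private static final Map<String, String> ALIASES =
            Map.of("rpt", "reporter", "cfg", "configurator");

    private final String canonical;   // normalized, alias-resolved key
    private final String prettyName;  // human-consumable form for logs/UI

    ApplicationName(String raw) {
        String key = raw.trim().toLowerCase();
        this.canonical = ALIASES.getOrDefault(key, key);
        this.prettyName = Character.toUpperCase(canonical.charAt(0))
                + canonical.substring(1);
    }

    @Override public boolean equals(Object o) {
        return o instanceof ApplicationName
                && canonical.equals(((ApplicationName) o).canonical);
    }
    @Override public int hashCode() { return canonical.hashCode(); }
    @Override public String toString() { return prettyName; }
}
```

Now "  RPT " and "Reporter" compare equal, the normalization happens in exactly one constructor, and the logging code gets its pretty name from toString() for free.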

I know what a few of you are thinking. You are concerned about the performance cost of allocating all those instances. With modern optimizing compilers, you'll find that performance costs won't really be an issue. The cost that will eat your lunch is the maintenance cost for the software over time.

So the next time you are working in your domain model, and you feel the urge to use an int, double, string, or other primitive type, think again.