Friday, August 15, 2008

The Problem with Thinking Additively

How does your company decide how to improve your products? If yours is like most, you have product managers who survey selected customers, soliciting feedback on improvements. Between their suggestions and your development staff, you come up with a list of new features for that next development cycle.

Maybe this works for your company, but it is a flawed process. By focusing on your existing product and thinking additively, you effectively warp your process into being tool-centric. The Next Great Release is only a few new features away. Developers think this way, product managers think this way, and existing customers think this way (potential customers may not).

So what exactly is wrong with this picture? Everyone is focused on the tool. It is a distraction from what is important, namely solving the user's problem. If you could really see how your tool is used by end users, you might notice that there is a lot of interaction between the user, the tool, and other things.

I'll call this workflow. It is the sequence of steps the user follows in going from nothing to solution. It might involve using your tool multiple times in different ways. It might involve generating intermediate workproducts. It might involve interacting with others at certain points. Ultimately, it might be all this activity around your tool that determines its success, rather than its feature set.
Workflow may not be prominent in all problem domains, but the more complex the domain, the more likely that workflow is significant. Consider my company's domain, for example. We provide tools that allow technical people to work with network and server data, to build models of their system, and to predict the behavior of that system under various loads.

Recipes

One day one of our consultants passed by and proudly flopped a meaty document on my desk and informed me that he had finally codified a methodology to solve one of our users' most common modeling problems. It was a masterpiece at 90 pages, was well organized, was valuable to our users, and made an impressive sound hitting my desk.
In a later meeting, while developers and product managers argued about getting SAN support into our tools, I was still thinking about that sound. If the problem domain was that complex, surely we should be selling guidance to our users, right? And we couldn't ship a consultant in every box, could we?

In a sense, the methodology was a recipe, and arguably as valuable as the tool itself. If we could productize it, users would see real value in a productivity boost. The analogy to a cooking recipe is apt. It has ingredients (preconditions), a step-by-step description of what to do (a process), and a view of the expected result. Users might choose different recipes, depending on what they are trying to do.
One of the most valuable things a recipe can do is establish context. By knowing context, any support you give the user can be made more relevant. Maybe your user has filtered a network traffic report and it is empty. Is that an error condition? Maybe. Is there sage advice to offer them? Only if you know what they are trying to do. If you build tools without context, you can only offer abstract options and abstract advice, and it becomes much harder to embed domain knowledge in your tool.
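
To make the idea concrete, here is a minimal sketch in Java of how a recipe might be modeled. Every class and method name here is hypothetical, invented for illustration; none of it is lifted from our actual product.

    // A hypothetical sketch of a recipe model; none of these names come from our product.
    import java.util.List;

    /** One step in a recipe: it knows whether it is done and how to run itself. */
    interface Step {
        String name();
        boolean isUpToDate();   // the "done flag" discussed later in this post
        void execute();
    }

    /** A recipe: ingredients (preconditions), a process (steps), and an expected result. */
    class Recipe {
        final String goal;                 // what the user is trying to accomplish
        final List<String> preconditions;  // the "ingredients"
        final List<Step> steps;            // the step-by-step process
        final String expectedResult;       // what success should look like

        Recipe(String goal, List<String> preconditions,
               List<Step> steps, String expectedResult) {
            this.goal = goal;
            this.preconditions = preconditions;
            this.steps = steps;
            this.expectedResult = expectedResult;
        }

        // Because the recipe captures the user's goal, advice can be contextual:
        // an empty traffic report may be an error in one recipe and expected in another.
        String advise(Step step) {
            return "Working toward \"" + goal + "\": check the inputs to " + step.name();
        }
    }

The point is simply that the user's goal lives in the model, so the tool has something concrete to reason from when it offers advice.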

Steps

Steps are the building blocks of recipes. Our product is composed of more than one application, and some steps control these other applications, launching them with the appropriate inputs and outputs at the right time. Some steps generate workproducts. Some steps critique data or state, sometimes embedding expert advice. Some steps allow declaration of data (such as a list of server names). Many steps have fixed or parameterized functionality, but one type of step is highly customizable in terms of both its behavior and its designable user interface.
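
Building on the hypothetical Step interface sketched above, a couple of those step flavors might look roughly like this (again, purely illustrative):

    import java.util.ArrayList;
    import java.util.List;

    /** Launches one of our other applications with the right inputs at the right time. */
    class LaunchApplicationStep implements Step {
        private final String application;
        private final List<String> inputs;
        private boolean done;

        LaunchApplicationStep(String application, List<String> inputs) {
            this.application = application;
            this.inputs = inputs;
        }
        public String name()        { return "Launch " + application; }
        public boolean isUpToDate() { return done; }
        public void execute()       { /* hand the inputs to the application */ done = true; }
    }

    /** Lets the user declare data the recipe needs, such as a list of server names. */
    class DeclareServersStep implements Step {
        private final List<String> serverNames = new ArrayList<String>();
        public String name()        { return "Declare servers"; }
        public boolean isUpToDate() { return !serverNames.isEmpty(); }
        public void execute()       { /* present an editor for the list */ }
        void declare(String server) { serverNames.add(server); }
    }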

Flow and Dependency

There is a concept of information flow involved in recipes. Since recipes have multiple steps, and steps might have inputs and outputs, there is a flow among the steps. The inputs and outputs of each step are typed, and the flow matches up by type. This brings up one of the strengths of this approach: managing dependencies. If a user makes a change farther up in the recipe, all downstream steps that share that flow become out-of-date, meaning that they need to be executed again to bring their outputs up-to-date. As long as this out-of-date state is visible to users, it is a simple matter of making sure that required changes propagate correctly.
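
Here is a rough sketch of how that dependency bookkeeping might work, assuming each step's typed outputs have been wired to the typed inputs of the steps downstream. The class is generic over the step type and, as before, the names are mine rather than our product's.

    import java.util.ArrayDeque;
    import java.util.ArrayList;
    import java.util.Deque;
    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Map;
    import java.util.Set;

    /** Tracks which steps feed which, and propagates "out-of-date" downstream. */
    class FlowGraph<S> {
        private final Map<S, List<S>> consumers = new HashMap<S, List<S>>();
        private final Set<S> outOfDate = new HashSet<S>();

        /** Wire producer -> consumer wherever an output type matches an input type. */
        void connect(S producer, S consumer) {
            List<S> list = consumers.get(producer);
            if (list == null) {
                list = new ArrayList<S>();
                consumers.put(producer, list);
            }
            list.add(consumer);
        }

        /** The user changed this step, so everything downstream of it is now stale. */
        void markChanged(S step) {
            Deque<S> work = new ArrayDeque<S>();
            work.add(step);
            while (!work.isEmpty()) {
                S current = work.remove();
                List<S> next = consumers.get(current);
                if (next == null) continue;
                for (S downstream : next) {
                    if (outOfDate.add(downstream)) {  // visit each step only once
                        work.add(downstream);
                    }
                }
            }
        }

        /** Re-executing a step brings its outputs up to date again. */
        void markExecuted(S step)   { outOfDate.remove(step); }
        boolean isOutOfDate(S step) { return outOfDate.contains(step); }
    }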

Traceability

In our problem domain, users are making business decisions on modeling results, potentially involving millions of dollars. Confidence in the results is paramount, which requires an understanding of how those results were arrived at. If the results are unexpected, users need to work backwards in the flow to figure out where their expectations diverge from the results, examining not only intermediate workproducts but also the assumptions made along the way.
Consider this requirement in the context of a loop. Whether in a conventional programming language or a flow chart, a loop has a conditional, a loop body, and a loop terminator. This representation has some undesirable characteristics. First, it assumes that the logic associated with the loop body is the same for each pass through the loop. When you consider that each pass might represent the analysis of a different set of data, and that the user might want to tweak each pass, flow-chart-style logic is not flexible enough. Second, a single loop body hides what happened on each individual pass, which works against the traceability requirement.

A way to address this is to unroll the loop. That is, represent the loop body explicitly for each iteration of the loop. This certainly takes more space, but it meets the traceability requirement and allows the user to modify each pass independently. There is also a dynamic reconfiguration aspect here. Maybe the loop is a pass for each test case in the collected data. So if the user adds a test case, a new series of steps might dynamically come into existence representing the new pass through the loop.
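
As a sketch of what that unrolling could mean in practice (with the steps reduced to plain strings for brevity, and the step names invented for illustration), the recipe can simply be regenerated whenever the set of test cases changes:

    import java.util.ArrayList;
    import java.util.List;

    class RecipeUnroller {
        /** One explicit, independently tweakable group of steps per test case. */
        List<String> unroll(List<String> testCases) {
            List<String> steps = new ArrayList<String>();
            for (String testCase : testCases) {
                steps.add("Import data for " + testCase);
                steps.add("Review assumptions for " + testCase);
                steps.add("Analyze " + testCase);
            }
            steps.add("Compare results across " + testCases.size() + " test cases");
            return steps;
        }
    }

    // Adding a test case and calling unroll() again makes a new pass appear in the recipe.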

A final benefit of explicitly representing all the steps required in a recipe is that the user has a roadmap that tells them how far they need to go. They can tell how far they have come by a "done flag" that each step displays.

Business Politics

An easily overlooked issue with introducing workflow into your products involves business politics. You might have to champion a fairly sizable change to how you are going to spend your development dollars. If introducing workflow competes for the same resources that would be used to develop new features, you might meet resistance.

In my company, I started the sales pitch at the customer advisory council level. Being composed entirely of our biggest customers, the council readily bought into the concept. This should be expected, as users are the ones who ultimately pay the price for lack of workflow support through lost productivity.

Next up was an internal presentation that included developers, consultants, and product managers. The development staff was positive towards the idea. The product managers were uncertain about this; the workflow solution came out of some creative thinking in development, which is backwards from what they expected. The really interesting responses came from the consulting staff, who are effectively very advanced users. Most were favorable to the idea, but consultants have a history of being mavericks. As subject matter experts, each might have their own way of solving problems.

One consultant stood up in this meeting and gave the most negative response, saying something like "Tell me why I would ever use something like this. Why would we build this application?" I'm happy to say that the same consultant caught me in the hall about a year after the product was released and told me, "I don't know how I got along without this!"

I'd like to close with an admission. To this day, I have never read every page of that original methodology. Just don't tell that consultant.

Thursday, August 7, 2008

The Touch

On another topic, I picked up an Apple iPod Touch, the one with the big touch screen and a cousin to the iPhone. Man, that is a nice piece of engineering. The user interface is very interesting. I have not really used touch devices before, but I can already see that mouse-based user interfaces will not translate to mobile touch devices very well at all.

Consider these…

Screen real estate is at a premium. The balance between being efficient and being rich without being cluttered is a narrow one, and Apple has done a good job of striking it in this user interface.

One of the fun user interfaces involves scrolling through lists (think of artist/album lists). As a user, you press your finger on the list and flick it in a direction. This sends the list rolling in the direction your finger directed. The fun thing is that the faster you flick, the faster the list scrolls. And the list has friction! Its velocity decays over time. And if you hit the end of the list, it compresses a bit and then rebounds, kind of like it's made of rubber. I'm ashamed to admit that even after having the Touch a week, I still sometimes play around with flick-scrolling.
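
I have no idea how Apple actually implemented this, but the effect can be approximated with a few lines of per-frame physics. The decay factor and spring constant below are guesses, purely to illustrate the idea:

    /** A back-of-the-envelope imitation of flick scrolling; not Apple's code. */
    class KineticScroller {
        private double offset;                 // current scroll position, in pixels
        private double velocity;               // pixels per frame, set by the flick speed
        private final double friction = 0.95;  // assumed decay factor per frame
        private final double maxOffset;        // the end of the list

        KineticScroller(double maxOffset) { this.maxOffset = maxOffset; }

        void flick(double initialVelocity) { this.velocity = initialVelocity; }

        /** Called once per animation frame. */
        void tick() {
            offset += velocity;
            velocity *= friction;                     // the list has "friction"
            if (offset < 0) {                         // hit the top: compress, then rebound
                velocity = -offset * 0.2;
            } else if (offset > maxOffset) {          // hit the bottom
                velocity = (maxOffset - offset) * 0.2;
            }
        }

        double offset() { return offset; }
    }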

One thing that I ran into was the lack of tool tips. With a mouse-based interface, you can linger over an item to get a hint to pop up explaining the function of the widget. But with touch screens, there is no lingering. You press a button with a finger. That's it. I missed the ability to browse functionality that tool tips provide. A possible compromise might be to have a tool tip pop up while the press is held down. A user could then slide off the widget before releasing, restoring the ability to browse tool tips without activating the widget.

Another slight surprise was how much your finger actually obscures. Apple did a good job of averaging a hotspot to represent the center point of your finger, so I rarely missed items, even as a newbie user. But your finger tip is quite large. When you tap, say, a small keyboard or a small control, it's nice to know what you hit. This happens with several forms of feedback. If you hit a lone control and pick your finger up, you often see the control glow for a couple of seconds, indicating that it was hit. This is a "nice touch" (sorry, I couldn't resist). On the virtual keyboard, when you press a letter, a larger representation of that letter pops up above your finger while it is pressed, allowing visual confirmation of your choice.

One of the really glitzy user interface features is controlling zooming. The Touch uses the Safari web browser. When you navigate to a page, you see the whole page (usually as you'd see it rendered on your computer screen), only shrunk down to fit the mobile screen. Actually using the page at that size is dubious. Most users will zoom in and then move around by dragging the screen with their finger tip. The zooming is what's interesting. If you want to blow up a screen, you put two fingers on the screen and spread them apart. The screen zooms in, in real time, in proportion to how much you spread your fingers. Zooming back out is the opposite: use two fingers and squeeze them together. Very, very slick. The obvious usability defect I ran into was that the zoom factor was not preserved when jumping to a new page. This means you keep having to zoom in every time you follow a link.
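
The underlying arithmetic is presumably simple, even if the polish is not: the zoom factor tracks the ratio of the current finger spread to the spread when the gesture began. A toy version, not Apple's implementation:

    /** A toy illustration of pinch-to-zoom. */
    class PinchZoom {
        private double startDistance;   // distance between the two fingers at gesture start
        private double startScale = 1.0;
        private double scale = 1.0;

        void beginGesture(double fingerDistance) {
            startDistance = fingerDistance;
            startScale = scale;
        }

        /** Spreading the fingers zooms in; squeezing them zooms out. */
        void update(double fingerDistance) {
            scale = startScale * (fingerDistance / startDistance);
        }

        double scale() { return scale; }
    }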

The Touch also has a real orientation sensor. (Please pardon the printer jargon to follow.) If you are viewing a web page in portrait mode and you rotate the Touch to landscape mode, the screen reorients itself so you are no longer looking at it sideways! That one is a real crowd pleaser. Some applications take advantage of this; for example, the drawing programs allow you to erase a screen by shaking the Touch. Ah, that takes me back to the old Etch-a-sketch days. I ran into an interesting usability issue with this feature. I took the Touch to the gym, where I primarily listen to music (you know, to keep the old forty-something body going). The Touch would not deactivate the screen (to save power) while I was doing sit-ups, and the motion caused the screen to switch between portrait and landscape mode with each grunt of effort. Thankfully, once the screen did deactivate, no amount of sit-ups would reactivate it (there is a separate gesture to wake it up).

I guess the moral of the story is that even a multi-billion dollar company that thrives on slick user interfaces makes usability mistakes. The second moral of the story is that touch-based interfaces are a new challenge to those of us who create mouse-based applications. The final moral of the story is to go out and get an iPod Touch – now.

Friday, August 1, 2008

What About User Interface Performance?

What process do you use in determining product requirements for an upcoming release? It probably goes something like this: the product manager collaborates with the sales department to write up an MRS (marketing requirements specification), the product manager writes up a PRD (product requirements document), and then development takes that input and generates an SRS (software requirements specification). The MRS describes the required enhancements in terms of features, benefits, and their expected impact on sales. The PRD should describe new and changed features that support the improved vision. The SRS should specify enough detail that a developer can implement changes to support the PRD.

This pipeline can work well. But it is not perfect. As a pipeline, it is only as good as the information feeding it. In particular, missing information often results in software that may not meet the originally envisioned requirements. (Anyone who has dealt with an outsourced team can appreciate this in spades.) One often neglected area is performance, particularly in user interfaces. Those who are developing web applications are less likely to suffer from this lack than those working on desktop client applications.

What might a performance specification look like? It can take the form of a maximum time for a certain action, a minimum throughput (events per second), or a statement of expected volumes. These are lacking in most projects, or are injected very late in the development cycle, when the workable responses available to the development team are limited. If you are unlucky, there are no performance requirements at all, and that lack leads you to choose an architecture that does not scale well.

To avoid this, you should try whenever possible to specify performance requirements in your SRS. It is unlikely that you'll be able to find hard numbers for time constraints. This is partly due to the cost of determining them (think of the time and expense of canvassing your customer base), but also because human tolerance of delay is vague. If a dialog takes 500 milliseconds to come up versus 200 milliseconds, is anyone really going to complain?

One easy specification to include is the average and expected maximum sizes of things. Imagine your user interface includes a dialog that shows a list of items. The average and maximum length of that list should be specified; even a guess is better than nothing, but take care to get several considered opinions. In a product that I worked on, when I asked our subject matter expert what the average and maximum values for our list would be, I was told 7 and 12 (these were big abstract things in the list). Knowing that, we never saw any problematic performance during development. We knew that the implementation was probably O(n^2), but we were unconcerned. Our QA stress test case was 30 items. Imagine our chagrin when handling a customer call where said customer had 600 items in their list. Trust me when I say that you do not want customers going places in your product where your QA team has not.
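
The arithmetic behind the chagrin is worth spelling out. The real dialog code is not shown here; this little sketch only illustrates how an n-squared operation grows at the sizes in the story:

    /** Rough growth of an n^2 operation at the sizes in the story above. */
    class ListSizeArithmetic {
        public static void main(String[] args) {
            int[] sizes = {12, 30, 600};   // stated maximum, QA's stress case, the customer's reality
            for (int n : sizes) {
                System.out.println(n + " items -> " + ((long) n * n) + " units of work");
            }
        }
    }
    // 12 items need 144 units of work; 600 items need 360,000: a 2,500x jump over the stated "maximum".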

Another easy specification concerns response time and cancelling long operations. First, make sure that you never do real work in your event dispatch thread; this is the main thread that runs your user interface. Doing real work there makes your application unresponsive. So your SRS should specify the places where progress indicators should be used (a wait cursor, a status bar message, or a full-blown progress dialog). This implies that at least one extra thread should be doing the real work in the background. If a user interface operation is expected to take more than a handful of seconds, you should specify that it be cancellable by the user.
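
In Java Swing terms, for example, the pattern looks roughly like the sketch below: the work runs in a SwingWorker off the event dispatch thread, a ProgressMonitor shows progress, and its Cancel button is honored. The "analysis" being run here is just a stand-in for real work.

    import javax.swing.JFrame;
    import javax.swing.ProgressMonitor;
    import javax.swing.SwingWorker;

    class LongOperation {
        static void run(JFrame parent) {
            final ProgressMonitor monitor =
                new ProgressMonitor(parent, "Analyzing data...", null, 0, 100);

            final SwingWorker<Void, Void> worker = new SwingWorker<Void, Void>() {
                @Override
                protected Void doInBackground() throws Exception {
                    for (int i = 0; i <= 100 && !isCancelled(); i++) {
                        Thread.sleep(50);        // stand-in for the real work
                        setProgress(i);
                    }
                    return null;
                }
            };

            // Keep the progress dialog in sync, and honor its Cancel button.
            worker.addPropertyChangeListener(event -> {
                if ("progress".equals(event.getPropertyName())) {
                    monitor.setProgress((Integer) event.getNewValue());
                    if (monitor.isCanceled()) {
                        worker.cancel(true);
                    }
                }
            });
            worker.execute();   // the event dispatch thread stays free to repaint and respond
        }
    }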

These sound like common sense, and for many practitioners they are. But in the press to get product out the door, try hard to keep performance concerns from getting pushed back into the next release.