What process do you use to determine product requirements for an upcoming release? It probably goes something like this: the product manager collaborates with the sales department to write an MRS (marketing requirements specification), the product manager writes a PRD (product requirements document), and then development takes that input and generates an SRS (software requirements specification). The MRS describes the required enhancements in terms of features, benefits, and their expected impact on sales. The PRD should describe the new and changed features that support that vision. The SRS should specify enough detail that a developer can implement changes to support the PRD.
This pipeline can work well, but it is not perfect. Like any pipeline, it is only as good as the information feeding it. In particular, missing information often results in software that does not meet the originally envisioned requirements. (Anyone who has dealt with an outsourced team can appreciate this in spades.) One often-neglected area is performance, particularly in user interfaces. Those developing web applications are less likely to suffer from this gap than those working on desktop client applications.
What might a performance specification look like? It can take the form of a maximum time for a certain action, a minimum throughput (events per second), or a statement of expected data volumes. Most projects lack these entirely, or inject them very late in the development cycle, when the workable responses available to the development team are limited. If you are unlucky, the absence of performance requirements leads you to choose an architecture that does not scale.
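When you do have such numbers, one way to keep them honest is to encode them as automated checks, so a regression surfaces during development rather than at a customer site. Here is a minimal sketch using JUnit 4; the 500 ms budget, the 1,000 events/second floor, and the stand-in methods are all hypothetical, inserted only for illustration:

```java
import org.junit.Test;
import static org.junit.Assert.assertTrue;

public class PerformanceRequirementsTest {

    // Hypothetical stand-in for the real operation under test.
    private void loadItemList() throws InterruptedException {
        Thread.sleep(100);  // simulate the real load
    }

    // Hypothetical stand-in for the real event handler.
    private void handleEvent(int event) {
        // real work would go here
    }

    // Hypothetical requirement: the item list must appear within 500 ms.
    @Test(timeout = 500)
    public void itemListLoadsWithinBudget() throws Exception {
        loadItemList();
    }

    // Hypothetical requirement: sustain at least 1,000 events per second.
    @Test
    public void pipelineMeetsThroughput() {
        int events = 100000;
        long start = System.nanoTime();
        for (int i = 0; i < events; i++) {
            handleEvent(i);
        }
        double seconds = (System.nanoTime() - start) / 1e9;
        assertTrue("throughput below 1,000 events/sec",
                   events / seconds >= 1000.0);
    }
}
```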
To avoid this, you should try whenever possible to specify performance requirements in your SRS. It is unlikely that you'll be able to find hard numbers for time constraints. This is partly due to the cost of determining them (think of the time and expense of canvassing your customer base), but also because human tolerance of delay is fuzzy. If a dialog takes 500 milliseconds to come up versus 200 milliseconds, is anyone really going to complain?
One easy specification to include is the average and expected maximum sizes of things. Imagine your user interface includes a dialog that shows a list of items. The average and maximum length of that list should be specified; even a guess is better than nothing, but take care to get several considered opinions. On a product I worked on, when I asked our subject matter expert for the average and maximum size of our list, I was told 7 and 12 (these were big, abstract things in the list). Knowing that, we never saw problematic performance during development. We knew that the implementation was probably O(n²), but were unconcerned. Our QA stress test case was 30 items. Imagine our chagrin when handling a customer call where said customer had 600 items in their list. Trust me when I say that you do not want customers going places in your product where your QA team has not.
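To see why the maximum matters so much more than the average for quadratic code, consider this sketch (the duplicate check is a hypothetical stand-in for whatever our O(n²) operation actually was): at 30 items a nested loop does about 435 comparisons, but at 600 items it does about 180,000, roughly 400 times the work for 20 times the data.

```java
import java.util.ArrayList;
import java.util.List;

public class QuadraticListDemo {

    // An O(n^2) duplicate check, typical of code that feels fine on small lists.
    static int countDuplicates(List<String> items) {
        int dupes = 0;
        for (int i = 0; i < items.size(); i++) {
            for (int j = i + 1; j < items.size(); j++) {
                if (items.get(i).equals(items.get(j))) {
                    dupes++;
                }
            }
        }
        return dupes;
    }

    public static void main(String[] args) {
        // Sizes from the anecdote: expected average, expected max, QA stress, reality.
        int[] sizes = {7, 12, 30, 600};
        for (int n : sizes) {
            List<String> items = new ArrayList<String>();
            for (int i = 0; i < n; i++) {
                items.add("item-" + i);
            }
            long start = System.nanoTime();
            countDuplicates(items);
            long micros = (System.nanoTime() - start) / 1000;
            System.out.println(n + " items: " + micros + " us");
        }
    }
}
```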
Another easy specification concerns response time and cancelling long operations. First, make sure that you never do real work in your event dispatch thread; this is the main thread that runs your user interface, and doing real work there makes your application unresponsive. So your SRS should specify the places where progress indicators should be used (a wait cursor, a status bar message, or a full-blown progress dialog). This implies that at least one extra thread should be doing the real work in the background. And if a user interface operation is expected to take more than a certain amount of time (say, a handful of seconds), you should specify that it be cancellable by the user.
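In Swing, SwingWorker packages up exactly this pattern: the work runs on a background thread, progress updates are delivered on the event dispatch thread, and cancellation comes almost for free. A minimal sketch, assuming Java 8+; the sleep loop is a stand-in for real work:

```java
import java.awt.BorderLayout;
import javax.swing.JButton;
import javax.swing.JDialog;
import javax.swing.JFrame;
import javax.swing.JProgressBar;
import javax.swing.SwingUtilities;
import javax.swing.SwingWorker;

public class CancellableTaskDemo {

    static void runLongOperation(JFrame owner) {
        final JDialog dialog = new JDialog(owner, "Working...", true);
        final JProgressBar bar = new JProgressBar(0, 100);
        JButton cancel = new JButton("Cancel");
        dialog.add(bar, BorderLayout.CENTER);
        dialog.add(cancel, BorderLayout.SOUTH);
        dialog.pack();
        dialog.setLocationRelativeTo(owner);

        // The real work runs off the event dispatch thread.
        final SwingWorker<Void, Void> worker = new SwingWorker<Void, Void>() {
            @Override
            protected Void doInBackground() throws Exception {
                for (int i = 0; i <= 100 && !isCancelled(); i++) {
                    Thread.sleep(100);  // stand-in for a slice of real work
                    setProgress(i);
                }
                return null;
            }
            @Override
            protected void done() {
                dialog.dispose();  // runs on the EDT
            }
        };
        worker.addPropertyChangeListener(evt -> {
            if ("progress".equals(evt.getPropertyName())) {
                bar.setValue((Integer) evt.getNewValue());
            }
        });
        cancel.addActionListener(e -> worker.cancel(true));
        worker.execute();
        dialog.setVisible(true);  // modal; blocks until done() disposes it
    }

    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JFrame frame = new JFrame("Demo");
            JButton go = new JButton("Start long operation");
            go.addActionListener(e -> runLongOperation(frame));
            frame.add(go);
            frame.pack();
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setVisible(true);
        });
    }
}
```

The key design point is that the background thread never touches Swing components directly; it reports progress through setProgress, and the listener and done() run on the event dispatch thread.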
These sound like common sense, and for many practitioners they are. But in the press to get product out the door, try hard to keep performance concerns from being pushed back into the next release.