Showing posts with label Requirements. Show all posts

Thursday, September 11, 2014

The Tipping Point

"You can push that car just a little too far any Sunday afternoon. And if you break your neck in some damnfool wreck, they forget about you soon." --Charlie Daniels, "Stroker Ace"

I generally prefer to work at small to midsize companies. The reasoning is fairly simple, too. I like flexibility. Not a complete lack of defined policies, mind you. Policies are important. They give companies a road map for, if not efficiency, then at least predictable inefficiency. But smaller companies tend to have the flexibility to recognize when you need to stray off the beaten path. Large companies do not. And there's a certain logic behind that. The more you do, the more you rely on that predictability. Policies have saved my neck a time or two. Being able to say "I can't start this effort because our policies have not been followed" has forced projects to refine requirements and has gone a long way toward preventing train wrecks. But there's a flip side, too.

At larger companies, it's harder to change policies that either no longer fit, or need refinement from the original draft in order to produce the intended result. There's also more risk in proposing new policies, I think. There are generally more people who need convincing, which means more people who don't want to "change what has been working for us." It often comes to a point where it's easier to just live with the inefficiency than try to change it. Or worse, at least in some ways, ignore policy and try not to get caught.

Okay. So far, I haven't told anyone something they don't know. This is one of the factors that need to be weighed when deciding whether or not to accept an offer or to leave a company. It's, hopefully, one of the factors you investigate when interviewing a company. As an aside, please note that I didn't say "interviewing with a company." You're as much deciding whether or not to offer them your services as they are offering you a position. Never forget, you're a contractor.

The point is, while working for smaller companies, I've noticed a recurring phenomenon, which I've started calling The Tipping Point. As smaller companies grow, they start to take on more work and start to do more. At this point, I have observed two results.

If the company does not have at least some policies guiding its work, the company tends to collapse. There comes a time when it's simply too late to gain control of source code, whether because nothing is in source control or there's no structure. When it's simply too late to implement good project policies because stakeholders aren't interested in becoming properly involved in projects and that disinterest is too entrenched to change. When projects are dangerous to deploy because no one knows what anyone else is doing. At this point, while the company may struggle on, it's no longer possible to fix the problems, and the only smart move is to leave.

The second result I've observed usually happens when there is a modicum of policies in place, and is usually hastened by bringing in An Expert. Perhaps a new CIO, perhaps a contractor. The buildup, again in my observations, tends to start slowly. No deploying projects without communicating with other groups. Nothing terrible there. Common sense, in fact. Issues, bugs, and defects need to be logged. Again- why would you not? And we need to get a sense of how much we're spending on projects, so we need people to log time spent on projects. Not my favorite activity, but I can't deny the importance.

But then things grow. Communicating deployments becomes getting signoff from other groups before deploying. The bug tracking system becomes a full-on change management system designed to integrate with every step of any process. Except yours, inevitably. Rather than turning in time tracking spreadsheets, either a web app gets built, or you're told to use a built-in feature of the change management system.

Finally comes the tipping point. I wish I could say this is an exaggeration, but in one case, I was entering time into an application that had a database lookup for projects, yet my projects never seemed to show up. I also had to email the same information to my manager. The time tracker was in half-hour increments, but my manager wanted 15-minute increments. Every change, even to dev environments, needed to be entered into five different systems (again- not exaggerating) and cleared in a bi-weekly Change Control Board meeting.

And all this doesn't even take into account well-documented problems when you mix in an Expert Scrum Consultant. I think I've made my opinions on the subject pretty clear, but it's pointless to deny what often happens when Scrum policies get blindly applied. And that's the problem- policies getting blindly applied. The "Monkey See Monkey Do" style of "Best Practices".

It's a rare gem when you find an organization with the self-discipline to recognize a policy that needs to be refined, removed, or temporarily ignored on a non-precedential basis. A good set of policies is a balancing act. Too many and you get buried under their weight. Too few and you get buried under your own weight. That's The Tipping Point.

Thursday, August 1, 2013

Painting Fences


Imagine this exchange, if you will:

Homeowner: We need to paint the outside of our fence white.
Contractor: No problem.

(2 weeks later)

Homeowner: I know we asked you to paint our fence white, but now we need it painted red.
Contractor: You don't like the white?
Homeowner: It's not working for us, and we'd like to try red.
Contractor: No problem.

(2 weeks later)

Homeowner: Okay, sorry about this but now we want our fence green.
Contractor: Sure, I can do that, but I have to ask. Why the color changes?
Homeowner: The neighbor's dogs bark all night and we're trying to find a fence color that calms them down.

Silly, isn't it? Ridiculous, in fact. And yet, a conversation in which I've participated more than a few times in my professional career. It stems from the business side of a software development project telling the development team what they want, which should be avoided at all costs. Yes. I just said that. The business shouldn't be allowed to tell development what they want. The reasoning is fairly simple, too. They don't know what they want and even if they did, they don't have the technical vocabulary to communicate it.

What the business does know is what it needs. There's a pain point, a failing, an inefficiency, or a breakage somewhere that needs to be resolved. That's what they should be talking to you about. They need to show what the process is now, what they don't like about it, and what it should look like when the problem has been solved.

There's a subtle difference between business needs as requirements and implementation details as requirements. Subtle enough to go unnoticed sometimes. Subtle enough to even seem reasonable. It seems almost reasonable to say "The account creation form needs to validate the format of the email address." Or "I want to be able to delete accounts". But it's like two lines that aren't quite parallel. Extend them out for a long enough distance and they end up in very different places.

Let's take a look at the two examples above. Validating an email address seems reasonable. Necessary, even. After all, you don't want users accidentally putting in a bad email address, right? If we're going to have any success at all, we're going to need a regex.  Something along the lines of
bool isEmailValid = Regex.Match(emailAddress, RegexString).Success;
Problem is, writing that regex is harder than it looks. Bigger problem is that it likely doesn't solve the underlying issue. Note, I said "likely" because we don't actually know what the real issue is. All we know is that we need to validate an email address. There's no mention of why. But if the underlying issue is making sure that the user enters a valid email address, which seems reasonable, then this doesn't actually solve the problem. It's very easy to write a regex (okay- let's face it. Search for a regex and copy/paste it. C'mon- you know you do it.) that doesn't properly validate an email address, and prohibitively expensive to write one that does.
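To see why the copy/pasted regex route falls short, here's a quick illustration (in Python rather than C#, purely for brevity; the regex itself is a typical example of the kind you'd find in a search, not one from any particular project). It manages to fail in both directions at once:

```python
import re

# A typical copy/pasted "email validation" regex -- an illustrative
# example, not a recommendation.
naive_email = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

# It rejects addresses that are perfectly legal per the email RFCs,
# like a quoted local part...
print(bool(naive_email.match('"john doe"@example.com')))  # False

# ...while happily accepting strings no mail server will ever
# deliver to, like a domain with consecutive dots.
print(bool(naive_email.match("nobody@example..com")))     # True
```

Which is exactly the point: format validation alone can't tell you what the business actually needed, and even the format check is easy to get wrong.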

This solution falls even further short of the actual need if the need is to make sure that the user enters a valid address that they have access to. Which may be what the business meant. It may also not have been what they meant, but they might have agreed it was the proper direction had they communicated their need, giving the architecture team the chance to respond and ask questions. So, thanks to a solution disguised as a requirement, we have a solution that will likely either validate malformed email addresses or reject valid ones, and does nothing to make sure the user has access to the address. So what, in fact, have we actually accomplished?

The second example seems pretty reasonable, too. After all, do we really want dead accounts lying around? And besides, what if there's a user that we don't want accessing their account anymore? It's not like this will be an everyday thing, but just in case. The problem here isn't whether or not a solution can be implemented, it's the repercussions that are not being considered. What do we do with the order history of a deleted account? If we keep it, to what do we associate the orders? If we delete it, how do we explain the discrepancies in financial reporting? Or inventory levels vs. newly adjusted sales numbers? What if you need to process a refund? Good grief, what if you need to undelete the account? This requirement is like an early-'80s Buick. As soon as you fix something you uncover two more problems.
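Had the need ("we don't want this user accessing their account anymore") been communicated instead of the solution ("delete accounts"), the answer might have been something much less destructive. Here's a minimal sketch of that soft-delete alternative; every name in it is hypothetical, invented for illustration:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: "deleting" an account by deactivating it, so that
# order history, refunds, and financial reporting still have a record
# to point at.
@dataclass
class Account:
    email: str
    is_active: bool = True
    orders: list = field(default_factory=list)

    def deactivate(self) -> None:
        # The user can no longer log in, but nothing is orphaned --
        # and "undeleting" is a one-line reversal.
        self.is_active = False

acct = Account("user@example.com", orders=["order-1001"])
acct.deactivate()
print(acct.is_active)  # False
print(acct.orders)     # ['order-1001'] -- history intact
```

The flag answers the stated need while sidestepping every one of the repercussions above: orders keep their owner, reports still reconcile, refunds can still be processed, and undeleting is trivial.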

There's no easy solution to the problem. The business side needs to be careful to communicate needs. User stories are supposed to help with this, but they only help if used properly. Here's where a good BA will come into play. A good BA can make sure that user stories aren't reduced to endless copies of "As the product owner, I want {technical solution} so that the product works properly". The three parts of the story are necessary because, when used correctly, they define a need instead of a solution. And the architect needs to ask himself, for every requirement, "What is the need behind this?" If the answer isn't immediately obvious, there's a good chance that you're dealing with a solution disguised as a requirement.


(Ed. note: If the customer requesting the fence painting is Mr. Miyagi, just do it.)

Wednesday, June 26, 2013

Certain Uncertainty

"There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don't know. But there are also unknown unknowns. There are things we don't know we don't know." --Donald Rumsfeld

--Ed. Note: Changed to link to Gene Hughson's G+ profile rather than just mention his name. Should have thought of that the first time.

+Gene Hughson, author of a blog that is shortly going to be added to my "What I Read" section, has a fascinating article on the dangers of Premature Certainty. It's quite long, but well worth reading. Worth reading a couple of times, in fact.

Right out of the gate, he gets to the meat of it:
"Premature certainty, locking in too soon to a particular quality of service metric or approach to a functional requirement can cause as many problems as requirements that are too vague. For quality of service, also known as “non-functional”, requirements, this manifests as metrics that lack a basis in reality, metrics that do not account for differing circumstances, and/or metrics that fail to align with their objective. For functional requirements, premature certainty presents as designs masquerading as requirements (specifying “how” instead of “what”) and/or contradictory requirements. For both functional and quality of service requirements, this turns what should be an aid to understanding into an impediment, as well as a source of conflict."
There are several important points here that every software architect needs to keep in mind. The temptation in a software project is to start talking about performance and quality of service- the dreaded Non-Functional Requirements, or NFRs- far too soon. It's an understandable temptation, as the subject is near and dear to the business unit's heart. It's also a subject largely misunderstood by that same business unit. When Donald Knuth said "Premature optimization is the root of all evil (or at least most of it) in programming.", he was talking to programmers more than the business unit, but both sides ought to take heed of his warning.

So why is this so dangerous? Generally, isn't focusing on optimization and performance A Good Thing? Well, sure, but there's a great difference between focusing on performance and deciding on how to measure it. Especially if you decide on how to measure it before you're finished.

The problem is that NFRs are more like decorating a living room than building one. When you build a living room, you have definite dimensions you conform to. You have building materials that you need to use, you have building codes you need to adhere to, and these are all very measurable and easily tracked. In decorating a living room, however, you have a general set of guidelines to work within, but few hard and fast rules, if any. "I want the room to be light and airy" or "I want a more modern feel" are both valid sets of instructions for room designers, but it would be absurd to tell a designer "I want a minimum of 5.8 on the Light/Airy scale and am willing to settle for a 3.2 Modernness level". Yes, the designer is expected to engage the room's owner on every step of the design process, but you can't really know the final result until you see the final result. And there's certainly no scale against which to measure the final result.

This is the difference between Qualitative and Quantitative requirements, and making sure that everyone understands the difference between the two, and into which any given requirement falls, is the difference between building meaningful and meaningless requirements. Hughson references Tom Graves' Metrics for Qualitative Requirements:
To put it at perhaps its simplest, there’s a qualitative difference between quantitative-requirements and qualitative ones: and the latter cannot and must not be reduced solely to some form of quantitative metric, else the quality that makes it ‘qualitative’ will itself be lost.

And therein lies the proverbial rub. The concept of the managed project so often comes from purely quantitative worlds. Construction is an excellent example. The projects I saw at a defense contractor are also prime examples of quantitative projects- you received a set of specs and you built to the specs. NFRs didn't factor in. Software development simply isn't like that, and the disconnect comes in when we try to treat software development projects as if they were purely quantitative. As architects, it often falls on us to manage this disconnect. We're caught between the business, which often hasn't considered the difference between the two types of requirements, and the development staff, who almost certainly understand the difference, but on an experiential level that is difficult to communicate to someone lacking the same experiences. And as the group often tasked with communication between the two, it's critically important that software architects understand the difference between the two kinds of requirements and can clearly communicate it, and its importance, to the business unit.

Hughson also goes on to discuss the importance of developing functional requirements that have enough certainty to be useful while still keeping the level of uncertainty needed to stay flexible and allow architects to do their jobs. Also a good read, and also necessary to keep in mind. However, as I'm in the middle of attempting to manage the development of NFRs on a current project, I've very much focused on the first part and wanted to write this out as an attempt to better understand the points Hughson was making. I think it helped me, and I hope it helped you, too.