Tuesday, August 27, 2013

Why Bad Code Won't Die


Let's face it. Code goes bad. Maybe, due to technical debt, it started out bad and now it's time to pay the note on your debt. Maybe, due to a new version of your software platform, there's a better way of handling what your application is meant to handle. And maybe your application is now handling situations that were not taken into consideration during development, whether anticipating them was reasonable or not. Whatever the reason, what was once perfectly acceptable code often becomes obsolete.

The challenge isn't writing code that won't go bad. You do your best, with the realization that it might just happen anyway. The real challenge is getting the situation resolved, because you can't just charge in and make changes- at least, I hope you can't. It's a sad fact of the developer's life that projects like this may be necessary, but are often rejected, leading to frustration and a lack of confidence in leadership.

It's a common failing in communicating with others. You assume that things that make sense to you make sense to others and that things that have value to you have value to others. It's human nature, really, to assume that the context in which you view life is the context in which others do, too. Because improving the quality of your code is important to you, makes sense to you, and is just intuitively obvious to you, you tend to assume that it is to others. Then you construct your communication, consciously or not, along that assumption.

The problem is that most development managers and just about everyone higher than that on the corporate totem pole view project priority in a completely different context than developers do. And since they hold the final decision on whether or not a development project gets green-lit, it's critical to understand why they say "Yes" to projects, why they say "No", and the thought process that goes into that decision. If you can tailor your pitch to the way upper management thinks, then you can at least communicate your need in a way they understand. This, of course, doesn't guarantee anything. But you at least avoid getting in your own way.

Developers tend to consider a well-structured application as the end goal in a project and as the motivation for taking, or not taking, action. Which is good- that's what they're there for. The problem is that upper IT management rarely, if ever, evaluates projects through that lens. To them, risk is the key decision point. Not just the risk of breaking something else due to the changes, either. Risk to the project schedule as a whole. Risk involved in an application changing- even if it's for the better. Users get used to the way an application works and there's often resistance to change. The unfortunate truth is that, in the mind of an upper manager, "This is better code" never trumps "This is currently working". Attempting to approach a "Code Improvement" project from a technical point of view will rarely work. In fact, it has never worked for me. Not once.

The first thing you need to do is explain the necessity for the change from the point of view of what is important to upper management. If you can't do that, learn to suffer in silence because this change will go nowhere. Since upper management's priority is a low risk of interrupting business processes, it is critical to show that not making the desired changes will lead to a greater risk of interruption than making them. Risk of application failure is a great thing to focus on. So is inability to support known, or likely, business needs in the future. If the current state of the code is no longer performing well due to increased load, point out the trend in increasing load and try to project a timeframe for the application no longer functioning. Are there upcoming needs that either can't be supported given the current state of the application or can be implemented in a greatly reduced time given a change? Have hard numbers and facts. State the known needs that can't be implemented and how that impacts the business. Show how a project can be completed more quickly if the desired change is done now. Remember- upper management cares about how well the IT department can support the needs of the business as a whole. Present your case in terms that are important to your audience.

Second, have a plan. Don't just outline a problem and dump the mess in someone else's lap. Explain how the problem can be fixed. Go on from there to how long you think it will take- remember, your audience is thinking in terms of a schedule of many projects. Scheduling an unanticipated project impacts the rest of the project calendar. Make sure your audience understands the impact and that the impact is worth it. Then outline the risks. This is important because as soon as you give your opening argument, your audience has already started thinking in terms of risk. Then present how you will handle that risk. Will other applications be affected? How will you minimize the impact on those applications? Will this change business processes? How can IT work with the affected business units to make sure they understand how to interact with the changes? What happens if something goes horribly, horribly wrong? What failure points will you be looking for and what will you do if they do fail? How will you minimize the impact of a massive failure on the business, how will you fix the failure, and how much effort might it take? How will the project calendar be affected, what projects may have to be put off in order to do this, and what can be done to minimize the overall delay?

None of this, of course, guarantees success. I've made pitches like this and heard the equivalent of "I understand, but we want to focus on new feature development". Or, "I'll take a closer look at this and see if we have time on the project calendar." Which is often a long way around of saying "No". Sometimes upper management simply isn't interested or just doesn't want to accept the risk. This happens. But management is never willing to accept risk they don't understand or change that doesn't seem beneficial. It's your job to communicate the need to fix bad code in a way that the decision makers will consider a high priority. If you understand how the decision makers think, you understand how to communicate with them.

Which, if you think about it, goes for life in general as well.

Tuesday, August 13, 2013

Best Practices


"The cart before the horse is neither beautiful nor useful." --Henry David Thoreau

You're doing account signup forms wrong.

Okay, I should say "If you're doing account signup forms, you're doing them wrong," which, while logically (if not grammatically) more accurate, makes less of an impact as an opening statement.

The problems with most signup forms are myriad. Why do sites force you to enter your email address twice? Mobile platforms have figured out that masked password input boxes aren't always necessary, and give you either an option to turn it off or give limited clear-text viewing of your input. Websites haven't gotten the memo. And if I see one more site serving up a picture of what can only be described as a dust storm and telling me to type in the letters in the box to prove that I'm human, I'm going to flip and write a blog post about it. Just. Stop. It.

Now, as I hope I've made clear in my writing, I don't care about forms, CAPTCHA, or even UX issues. Or rather, I care about them only in the context of the thought process that went into them, and that's where account signup (and often account management) processes fall flat. It's due to a very insidious concept known as "Best Practices".

I hate best practices. As soon as that phrase is first used in a requirements gathering meeting, I step on it like I would a roach. "Best Practices" used to refer to processes that the industry had adopted, formally or not, as the best way known at the time to approach a problem. That, I have no problem with. What I have a problem with is the fact that "Best Practices" doesn't mean that anymore in software development. These days, it means "What is everyone else doing?" Which leads to lazy planning. Which leads to bad results. And I really hate bad results.

"Best Practices" are insidious. Because it's assumed that these practices are used because they're the best way of approaching a problem, people stop thinking about solutions as they apply to their specific needs. Any time you start implementing solutions without considering whether or not that solution is actually a solution to a problem you have, you have at best added unnecessary complexity to your project. At worst, you end up implementing code that hurts you in the long run.

Worse, though, is that no one ever advances that body of knowledge when "best practices" are applied blindly. Since everyone is using the same solutions as everyone else, no one thinks up new ways of solving problems. In the UX arena, this results in carbon-copy sites that don't stand out and present all the same inconveniences that the other sites present. In the web security arena, it's even worse because you are implementing the worst kind of security: the kind that makes you feel secure without necessarily offering any concrete benefits. Since the "security" principles that are labeled "Best Practices" are applied blindly, you don't know if the implemented solution solves the problem at hand, much less whether or not the problem at hand is one that needs solving in your context.

As an example, let's take password masking. Many mobile devices briefly show in clear text the latest character typed into a password box. This is not new. In fact, I've read people claim that Apple innovated that idea for the iPhone, but my Palm 600 did that. This idea is ten years old and the web world still hasn't caught on. It's become an expected inconvenience because everyone is looking at everyone else's paired "Enter Password/Reenter Password" boxes. Even worse, this is billed as a security measure without asking whether or not this is a *necessary* security measure. Does it really hurt anything if a site offers a way of viewing your password in clear text, thus avoiding the usual paired password box routine?

Am I saying that these measures are all unnecessary? Absolutely not. Except when they are. And if requirements are being gathered in the context of your needs and your solution, then it becomes obvious what you do and don't need. The problem is that "Best Practices" are applied backwards. The solution is selected and the problem lies unexamined. Software development is merely implemented problem solving. You can't solve a problem you have not examined.

Wednesday, August 7, 2013

What's the point of an architect


"I know I can't do everything myself. So I know I specialize in my melodies and I do some of my demo work. I pass it on to my producers who are much better at the production level." --Paul Taylor


I'm not asking why we need software architecture. Anyone reading this knows why. Ensuring that standards are met. Ensuring extensibility and code maintainability. Making sure that the design patterns used are necessary and that the necessary patterns are used. Choosing the proper technology and its use. This is not in question. I'm asking, "What's the point of an architect?" What do I bring to the table to justify my presence on the project, and indeed the salary and benefits I draw to do what I do, and only what I do? Couldn't all this be done by a senior developer or a development lead? In short, what is the value of me?

Paul Graham once wrote that a software developer must hold a program in his head:
"A good programmer working intensively on his own code can hold it in his mind the way a mathematician holds a problem he's working on. Mathematicians don't answer questions by working them out on paper the way schoolchildren are taught to. They do more in their heads: they try to understand a problem space well enough that they can walk around it the way you can walk around the memory of the house you grew up in. At its best programming is the same. You hold the whole program in your head, and you can manipulate it at will."

This isn't to imply that programmers are necessarily smarter than average. It does mean, however, that we need to be able to visualize abstract concepts in our minds. More important than how well we think is how we think. The problem is that applications are becoming more and more complicated. N-Tier applications. Web services. Multiple platforms. And with more and more to keep track of, the percentage of the code space as a whole that a developer can hold in his head becomes less and less.

That's where an architect comes into play. More and more I find myself not holding an application's problem space or code space in my head. I find myself holding the project problem space and its place in the enterprise in my head. Where the developers focus on visualizing code, I focus on visualizing the enterprise as a whole to help the applications fit into that space. 

Consider the evolutionary development of a surgical team. At one point in time, all surgery took was a guy with a sharp knife and an understanding of the human body, such as it was at the time. Any modern surgeon would be horrified at the prospect of an operating theater running in such a manner. Now it takes surgeons- maybe more than one. It takes an anesthesiologist and nurses. It takes someone to coordinate the surgeries so that, as much as possible, there are surgical resources available for high priority cases. Modern surgery is complicated and beyond what any one person, no matter how talented, can manage.

So, too, goes modern application development. It takes developers. It takes senior developers or dev leads- sometimes both- to manage and coordinate development activities and even to mentor more junior developers. It takes DBAs and perhaps other SMEs (Subject Matter Experts) to lend specialized knowledge in areas the development team may not have. And it takes architects to make sure that the application has its proper place in the enterprise, making proper use of existing tools, and ensuring that the application being built will work in other contexts, if applicable. And just as the modern surgeon would be horrified at the prospect of the old west surgeon giving a patient a bottle of whiskey before digging out a bullet, I'm horrified at the idea of "One Project, One Developer". Good software development is more complicated than that, no matter how talented the developer.

As any regular readers should know (Thanks, guys!), I ask "What is this?" and "Why is this?" a lot and don't take a lot for granted. Questioning my role in a project- and again, not the role of architecture but the role of the architect itself- makes me more focused on what I should be doing, rather than what I may want to be doing. Because let's face it. I came from software development and at heart I will probably always be a software developer. But after asking "What?" and "Why?", I had an interesting conversation with the tech lead on my project. The specifics of the conversation are less important than the conclusion. I said (roughly):

"I'm starting to get that itch in the back of my architect's neck that means I'm getting in your way. You know the direction we need to take, and you know why. So I'm going to step out of your way and let you get it done as you know best."

Thursday, August 1, 2013

Painting Fences


Imagine this exchange, if you will:

Homeowner: We need to paint the outside of our fence white.
Contractor: No problem.

(2 weeks later)

Homeowner: I know we asked you to paint our fence white, but now we need it painted red.
Contractor: You don't like the white?
Homeowner: It's not working for us, and we'd like to try red.
Contractor: No problem.

(2 weeks later)

Homeowner: Okay, sorry about this but now we want our fence green.
Contractor: Sure, I can do that, but I have to ask. Why the color changes?
Homeowner: The neighbor's dogs bark all night and we're trying to find a fence color that calms them down.

Silly, isn't it? Ridiculous, in fact. And yet, a conversation in which I've participated more than a few times in my professional career. It stems from the business side of a software development project telling the development team what they want, which should be avoided at all costs. Yes. I just said that. The business shouldn't be allowed to tell development what they want. The reasoning is fairly simple, too. They don't know what they want and even if they did, they don't have the technical vocabulary to communicate it.

What the business does know is what it needs. There's a pain point, a failing, an inefficiency, or a breakage somewhere that needs to be resolved. That's what they should be talking to you about. They need to show what the process is now, what they don't like about it, and what it should look like when the problem has been solved.

There's a subtle difference between business needs as requirements and implementation details as requirements. Subtle enough to go unnoticed sometimes. Subtle enough to even seem reasonable. It seems almost reasonable to say "The account creation form needs to validate the format of the email address." Or "I want to be able to delete accounts". But it's like two lines that aren't quite parallel. Extend them out for a long enough distance and they end up in very different places.

Let's take a look at the two examples above. Validating an email address seems reasonable. Necessary, even. After all, you don't want users accidentally putting in a bad email address, right? If we're going to have any success at all, we're going to need a regex. Something along the lines of
bool isEmailValid = Regex.Match(emailAddress, RegexString).Success;
Problem is, writing that regex is harder than it looks. The bigger problem is that it likely doesn't solve the underlying issue. Note, I said "likely" because we don't actually know what the real issue is. All we know is that we need to validate an email address. There's no mention of why. But if the underlying issue is making sure that the user enters a valid email address, which seems reasonable, then this doesn't actually solve the problem. It's very easy to write a regex (okay- let's face it: search for a regex and copy/paste it. C'mon- you know you do it.) that doesn't properly validate an email address, and prohibitively expensive to write one that does.
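To make that concrete, here's a sketch using a deliberately naive pattern. The regex itself is hypothetical, but it's representative of the kind of thing a quick search turns up:

```csharp
using System;
using System.Text.RegularExpressions;

class EmailRegexDemo
{
    static void Main()
    {
        // A commonly copy/pasted style of "email" regex -- a hypothetical
        // example for illustration, not a recommendation.
        var naive = new Regex(@"^[\w.]+@[\w]+\.[a-z]{2,3}$");

        // All three of these are valid addresses, but the pattern rejects them:
        Console.WriteLine(naive.IsMatch("o'brien@example.com"));   // False (apostrophe is legal)
        Console.WriteLine(naive.IsMatch("user@mail.example.com")); // False (no subdomains allowed)
        Console.WriteLine(naive.IsMatch("user@example.museum"));   // False (TLD longer than 3 chars)

        // And this one passes, which tells us nothing about whether
        // anyone can actually receive mail there:
        Console.WriteLine(naive.IsMatch("a@b.co"));                // True
    }
}
```

Every address the pattern rejects above is perfectly well-formed, and the one it accepts may not exist at all. That's the gap between "we validated the format" and whatever the business actually needed.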

This solution falls even further short of the actual need if the need is to make sure that the user enters a valid address that they have access to. Which may be what the business meant. It also may not have been what they meant, but they might well have agreed it was the proper direction had they communicated their need, thus giving the architecture team the chance to respond and ask questions. So, due to a solution disguised as a requirement, we have a solution that will likely either validate malformed email addresses or reject valid addresses, and does nothing to make sure the user has access to the email address. So what, in fact, have we actually accomplished?

The second example seems pretty reasonable, too. After all, do we really want dead accounts lying around? And besides, what if there's a user that we don't want accessing their account anymore? It's not like this will be an everyday thing, but just in case. The problem here isn't whether or not a solution can be implemented, it's the repercussions that are not being considered. What do we do with the order history of a deleted account? If we keep it, to what do we associate the orders? If we delete it, how do we explain the discrepancies in financial reporting? Or inventory levels vs. newly adjusted sales numbers? What if you need to process a refund? Good grief, what if you need to undelete the account? This requirement is like an early 80's Buick. As soon as you fix something you uncover two more problems.
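For contrast, here's a minimal sketch of one common alternative to hard deletion: a deactivation flag. The class and property names here are hypothetical, and whether this is the right answer depends entirely on the actual need, which is exactly the point:

```csharp
using System;

// A hypothetical account model illustrating "soft delete": the row is
// never destroyed, so everything that references it keeps working.
class Account
{
    public int Id { get; set; }
    public bool IsDeactivated { get; private set; }
    public DateTime? DeactivatedOn { get; private set; }

    // Orders stay associated with the account, so financial reporting,
    // refunds, and inventory reconciliation are unaffected.
    public void Deactivate()
    {
        IsDeactivated = true;
        DeactivatedOn = DateTime.UtcNow;
    }

    // "Undelete" becomes a trivial operation instead of a data
    // recovery project.
    public void Reactivate()
    {
        IsDeactivated = false;
        DeactivatedOn = null;
    }
}
```

A deactivated user can't log in, but the order history, refund path, and reporting numbers all survive. None of those repercussions even come up for discussion, though, when the requirement arrives pre-packaged as "I want to be able to delete accounts".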

There's no easy solution to the problem. The business side needs to be careful to communicate needs. User stories are supposed to help with this, but they only help if used properly. Here's where a good BA will come into play. A good BA can make sure that user stories aren't reduced to endless copies of "As the product owner, I want {technical solution} so that the product works properly". The three parts of the story are necessary because, when used correctly, they define a need instead of a solution. And the architect needs to ask himself, for every requirement, "What is the need behind this?" If the answer isn't immediately obvious, there's a good chance that you're dealing with a solution disguised as a requirement.


(Ed. note: If the customer requesting the fence painting is Mr. Miyagi, just do it.)