Monday, July 29, 2013

Clay Pots

"Perfection is not attainable, but if we chase perfection we can catch excellence." --Vince Lombardi


I tried to find a link to the experiment, but could not. Perhaps it’s allegorical. However, I read once about an experiment done by a pottery teacher. She divided the class into two teams. She told the first team to make the perfect clay pot. She told the second team to simply make as many clay pots as they could. At the end of the experiment, the perfect clay pot was indeed made, but not by team one. As it turned out, the constant iterative practice by team two trumped the careful work of team one.




NOTE: Thank you to +Dave Aronson for pointing me to the link I couldn't find!
http://kk.org/cooltools/archives/000216
He also has some thoughts on the subject at http://www.dare2xl.com/2010/08/just-do-it.html

This is not an article about getting better at software development the more you develop software. If you’re reading this, you already know that. No, this is about software architecture and building the perfect design. Which, as I explained in my first article, doesn't exist anyway.

Every Software Developer/Architect/Engineer/Whatever that I've worked with shares a couple of characteristics: they want to get their work done right, and they take time to think through what they're doing before they do it. Both are commendable. The problem comes when this leads to Analysis Paralysis: when the process of thinking things through in order to make the perfect design deadlocks the developer and he can't move on.

When that happens to you, remember the clay pots.

Agile methodologies such as Scrum and XP were developed, in part, to avoid analysis paralysis at the project level. With a focus on action and testing the results, agile methodologies seek to create the perfect pot by creating pots until they get it right. As it turns out, this technique works just as well at the individual level.

Sometimes the best way to break through design indecision is to just start writing code. Build the class stubs, make them interact, and build unit tests around them. How well does it work? How badly doesn't it work? Then consider what worked, what didn't, refine your ideas, and start over. Wash, rinse, and repeat until you’re happy. Or at least satisfied. Or at least still on this side of “I’m so frustrated I’m about to throw my laptop through a window”. Seeing how the design plays out and forcing yourself to refine and retest can often lead to better results than trying to think through every detail in advance so that you create the "perfect" design the first time.
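To make that concrete, here's a minimal first-iteration sketch in C#. Every name in it (`ShippingCalculator`, `IRateRule`, `FlatRate`) is hypothetical, invented purely for illustration: stub the pieces, wire them together, and wrap a crude test around them so you have something to refine.

```csharp
using System;

// Hypothetical first-pass stubs: just enough structure to see how the
// pieces interact. Expect to throw most of this away and start over.
public interface IRateRule
{
    decimal Apply(decimal weightKg);
}

public class FlatRate : IRateRule
{
    // Deliberately naive; the point is to have something to test and refine.
    public decimal Apply(decimal weightKg) { return 5.00m; }
}

public class ShippingCalculator
{
    private readonly IRateRule _rule;

    public ShippingCalculator(IRateRule rule) { _rule = rule; }

    public decimal Cost(decimal weightKg) { return _rule.Apply(weightKg); }
}

public static class FirstIterationTest
{
    public static void Main()
    {
        var calc = new ShippingCalculator(new FlatRate());
        // How well does it work? How badly doesn't it work?
        Console.WriteLine(calc.Cost(2.5m) == 5.00m ? "pass" : "fail");
    }
}
```

If the flat rate turns out to be the wrong idea, you haven't lost much. Refine the rule, re-run the test, and make another pot.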

Don’t get me wrong. I’m not advocating against careful thought. I’m not saying “Don’t plan” or “Don’t think”. And I'm certainly not saying you should just throw code against the wall until you get something that looks workable.

Consider the T.V. show "House, M.D." Dr. House's belief that there is one absolutely right way of handling a problem would be completely detrimental to software development. But one of the few things on which I agree with Dr. House *in practice* is his insistence on thinking through a problem before acting on it. If you remember the series, he follows the clay pot model. Think. Do. Refine. Think again. Continue until done. You won’t get it right the first time, and you should be very suspicious if you do, so don’t grind on it. And I love his attitude that making mistakes is expected. No one cares, as long as your end result is solid.

Here’s the thing I tell architects and developers alike: there are no points for style. No one is counting code check-ins, no one is counting compilations, no one is counting design iterations, and no one cares as long as the end product is a good one. Until then, if you have to slam it in with your knees, do so.

Often, you don't know what works until you've seen something that does not.

Thursday, July 25, 2013

Layering It On

“Once you know who you are, you know what you have to do.” --Steve Perry, The 97th Step


We all know that building your application in layers is important. Portability, separation of concerns, extensibility, and blog articles are all highly dependent on proper application layers. The problem I see isn't a lack of understanding of that importance, or disagreement with it. The problem I consistently see is people not understanding how to layer their applications. Part of this is, of course, practice. My first attempt at building an application with a 3-Tier architecture was an epic disaster that would have made the Titanic step back and say, “DAMN, I thought this was bad.” My second one was also pretty terrible. But better than the first.

Practice becomes easier with understanding, though. Tips, circumstances, and examples are all limited in scope, in that they only give you a small slice of the whole picture. But once you understand what a layer is, why it’s important, and how to look at it, the rest is just reinforcing your understanding with practical experience. As any regular readers will know (have I been around long enough to have regular readers?), I see software architecture as applied philosophy. I know I've used this one already, but:


“This, what is it in itself, and by itself, according to its proper constitution? What is the substance of it? What is the matter, or proper use? What is the form, or efficient cause? What is it for in this world, and how long will it abide? Thus must thou examine all things that present themselves unto thee.” --Meditations, Marcus Aurelius


I originally used this quote to understand classes and what they properly do, but it applies to application structure as well. Once you understand what something is, be it a class, a layer, a carburetor, or a hammer, you know what to do with it. So let’s take a pretty typical application stack: Presentation, Controller, Model, and Persistence. We start by considering each layer as a real-world entity, with things it knows about, actions it knows how to take, and actions it does not know how to take. Then we ask ourselves Aurelius' questions about these entities.


Presentation

What is a Presentation layer, in itself and by itself? Not to put too fine a point on it, but the presentation layer presents data, both to the end user and to the model. That’s what it knows how to do. It knows how to arrange data in a way that makes your application usable and useful. It knows how to sort and filter data so the user can get to the important data without wading through the unimportant data. 

Is your presentation layer interpreting data for any reason other than how to properly display it or properly send it to the lower reaches of the application? Then your presentation layer is doing something it doesn't know how to do.
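As a rough sketch of that boundary, consider something like the following C# fragment. The names (`CustomerListView` and friends) are hypothetical, invented just for illustration:

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical view helper: it arranges data for display, and nothing else.
public class CustomerListView
{
    // Sorting and filtering so the user gets to the important data
    // without wading through the unimportant data.
    public IEnumerable<string> Render(IEnumerable<Customer> customers, string nameFilter)
    {
        return customers
            .Where(c => c.Name.Contains(nameFilter))
            .OrderBy(c => c.Name)
            .Select(c => string.Format("{0} ({1})", c.Name, c.Email));
    }
    // Note what is absent: no business rules, no decisions about what
    // the data *means*. That knowledge belongs to the model.
}

public class Customer
{
    public string Name { get; set; }
    public string Email { get; set; }
}
```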

Controller

Of all the application layers, I've seen more misunderstanding about the Controller than any other. And this is a prime example of why understanding needs to come first, because this one is easy to get wrong if you don’t understand it. The Controller is a Switchboard Operator. Okay, there are a ton of more recent comparisons that are just as good, but I’m going with switchboard operator. The controller routes requests from one place to another, and that’s it. It knows where a request came from and, based on that, it knows where the request goes next. A controller that routes the request to different receivers based on conditional logic on the data itself is interpreting and attaching meaning to the data. It doesn't know how to do that.
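A minimal sketch of that switchboard idea, with hypothetical names (`OrderController`, `IOrderService`) invented for illustration:

```csharp
// Hypothetical controller: a switchboard operator. It knows where a request
// came from and where it goes next, and that's it.
public class OrderController
{
    private readonly IOrderService _orders;

    public OrderController(IOrderService orders) { _orders = orders; }

    public OrderView Submit(OrderRequest request)
    {
        // Route the request in, route the result out. No
        // "if (request.Total > 1000) ..." branching here; interpreting
        // the data is something a controller doesn't know how to do.
        OrderResult result = _orders.Place(request);
        return new OrderView { Confirmation = result.ConfirmationNumber };
    }
}

public interface IOrderService { OrderResult Place(OrderRequest request); }
public class OrderRequest { public decimal Total { get; set; } }
public class OrderResult { public string ConfirmationNumber { get; set; } }
public class OrderView { public string Confirmation { get; set; } }
```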

Model

In and of itself, what is a Model layer? What's its purpose? The model knows what the data means, how it should be interpreted, and how it should be used. Which is, admittedly, the meat of the application, but there are a few things this layer doesn't do as a part of its purpose. It doesn't know where data comes from. It doesn't know where data goes when the model is done doing what it does. In this way, it’s a lot like an assembly line worker. A widget shows up and the model performs a task on it. Then the widget moves on. Where it came from and where it goes next are not important. The task performed is the only thing that is.
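A sketch of that assembly-line worker, again with invented names (`PremiumCalculator`, `Policy`):

```csharp
// Hypothetical model class: it knows what the data means and what to do
// with it. It neither knows where the policy came from nor where the
// result goes next.
public class PremiumCalculator
{
    public decimal Calculate(Policy policy)
    {
        // The business rule lives here, and only here.
        decimal premium = policy.BaseRate;
        if (policy.HolderAge > 65)
        {
            premium *= 1.5m;
        }
        return premium;
    }
}

public class Policy
{
    public decimal BaseRate { get; set; }
    public int HolderAge { get; set; }
}
```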

Persistence

What is the form or efficient cause of the Persistence layer? Sure, this layer interacts with data, but the question is "What is the... *efficient* cause?" In its most efficient form, the persistence layer retrieves the data it’s asked for and stores the data it’s told to. It doesn't know how to do anything else. If, for instance, you've asked your persistence layer to tell the model whether the correct data has been retrieved, then you’re asking your persistence layer for something it doesn't know how to do. If, as is common, you’re asking your persistence layer to know whether or not data is correct before storage, then you are also asking it for something it doesn't know how to do.
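In code, that narrow contract might be sketched like this (the `IPolicyholderRepository` interface and its members are hypothetical):

```csharp
// Hypothetical repository: it retrieves the data it's asked for and stores
// the data it's told to. It doesn't know how to do anything else.
public interface IPolicyholderRepository
{
    Policyholder GetById(int id);      // retrieve what it's asked for
    void Save(Policyholder holder);    // store what it's told to

    // Note what is missing: no Validate(), no IsDataCorrect(). Deciding
    // whether data is right is the model's job, not this layer's.
}

public class Policyholder
{
    public int Id { get; set; }
    public string Name { get; set; }
}
```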


Although this becomes much easier with practice, the underlying key to application layering is knowing what you want your layer to do, and making sure that it doesn't do anything else. Thinking about your application layers as specialists helps greatly in keeping in mind what they should, and shouldn't, be doing. You don’t call your pediatrician when your car dies and you don’t call a ticket box office when your roof leaks. Don’t call a model layer when you need to know how to display data.

Monday, July 22, 2013

You're A Contractor

"If it looks like a duck, and quacks like a duck, we have at least to consider the possibility that we have a small aquatic bird of the family Anatidae on our hands." --Douglas Adams, Dirk Gently's Holistic Detective Agency

If you're not reading Hayim Macabee's Effective Software Design blog, you probably ought to be. His Continuous Learning post is an important read and got me thinking. The article is about how important it is for software developers to never stop learning and improving their skills. Which is true and something worth reminding people about periodically. But as I read, it occurred to me that continuously learning is only half the answer.

I started out in ColdFusion professionally, back when ColdFusion was actually a profession. (Sorry, Ben) There are a myriad of reasons why ColdFusion isn't a viable career option, some fair, some born of real misconceptions, and all irrelevant, for practical purposes, to a developer who has realized that he's in a dead-end specialty. The reality of the situation was that I was working in a rapidly shrinking circle and it was time to get out.

Learning a new language isn't difficult. Getting someone to hire you for it is something else altogether. My employer at the time was generally unwilling to pay for training and even less willing to put new technologies or techniques to use. I had to rely on personal projects, online learning, a ton of reading, and the one Java bootcamp I could convince my company to send me to. I eventually found myself in a position where I was competent in both C# and Java and felt I could handle a development position using either language. But with no professional experience in either, getting a recruiter- much less a hiring manager- to agree was a challenge. And so I found myself in a Joseph Heller Catch-22. I couldn't get the professional experience I needed to get out of ColdFusion without first getting out of ColdFusion.

I have a lot of people to thank for helping me break out of that career black hole. A recruiter who knew me well enough to trust me when I said "Just get me the interview. I promise you I won't embarrass you". A hiring manager who believed me when I said "Learning C# is easy and you don't have to teach me how to be a software developer." A technical lead that didn't believe me when I said that and sat for almost an hour making me prove it. And finally (Warning- blatant sappy moment) a dad that taught me to bet big on myself.

The problem I had at the time was that the way I was viewed professionally did not match what I needed it to be in order to take my career where I wanted it to go. It was +Tom Searcy at Hunt Big Sales who put the problem, and the solution, in sharp focus for me: everyone is a contractor. With that simple phrase, he crystallized everything that had been a problem for me when I was breaking out of the ColdFusion world. Ultimately, I work for myself, you work for yourself, and it's up to you to make sure that how you are seen professionally is how you want to be seen. This isn't about "job hopping" and this isn't about always being prepared to switch companies. This is about making sure that you can direct your career the way you want to direct it.

You direct and take control of your career path by both learning what you need to know and by getting seen being the kind of professional you want to be. Neither step is useful without the other. It hasn't been a good idea to try and bluff your way into an IT career for a long, long time now. And it does little good (Trust me!) to know how to handle the position you want if no one sees you as competent. You must do both. Hayim Macabee has outlined some excellent ways of handling the former. Thankfully, there are now more ways than ever to manage the latter.

Social Media

Not so much Facebook as Google+ and LinkedIn, although that might just be my personal preference. If you want to be seen as a skilled software developer, start by looking at the information people can readily gain by looking you up. Does it show you participating in software development discussions? Are you interacting with others? Asking questions? Offering advice? Joining, or even starting, conversations?

No? Then why not? A duck doesn't have to tell people that he's a duck. He quacks.

Projects

It used to be that recruiters and "resume specialists" would tell you to leave personal projects off your resume because they were no more relevant than hobbies in the job search world. Whether or not they still do, personal projects can be made relevant to how you are viewed in the industry. We all know that developers learn by doing. Now, it's easy to show people that you're both learning and doing. Got a project you're working on? Put the code on GitHub. Tell people what you did, why you did it, and ask them to use it. Is anyone going to hire you over a project you put out on GitHub or BitBucket? No. Probably not. But if you use these tools, then you are getting seen acting as the kind of software developer you want to be. It's part of managing your professional perception.

Blog

Go start one. Now. No- wait. Finish reading mine, then go start one. Be seen publicly talking about the things you want to be known for. Then tell me about it, and I'll put it on the list of things I read. And then I'll write about the stuff I think about when I read what you have to say. Talk about the subjects on which you want to be known as an authority. Then go talk to other people on their blogs.

Software Developers have to be continuously learning. But that's half the issue. If you want to be a duck, it's time to get out in public and quack.

Friday, July 19, 2013

What Is a Software Architect?


"That's incidental. What's the first and principal thing he does, what need does he serve..." --Dr. Hannibal Lecter

As any regular readers know, (Thanks, by the way!) I pretty strongly believe that understanding a thing tells you what to do with that thing. And since I've been writing for some time now about being a "Software Architect", we should take some time and ask the question "What does it mean to be a Software Architect?" As I've said before, not agreeing on definitions leads to communication breakdown.

On the surface, a software architect designs software. He is responsible for the design, even if he doesn't necessarily go on from there to build it. Although he might go on to do just that, as in many organizations the architect and the developer wear the same skin. So the architect creates not just the class diagrams but often chooses the technologies as well, thus shaping what the final product will look like.

But, as Dr. Lecter would say, that's incidental.

In order to design a software application, an architect needs to know two things: what is needed of the application, and the best way of fulfilling those needs. Without these pieces of information, the design will fail somewhere along the line. So first of all, the architect needs to be familiar with the business requirements at hand. Secondly, the architect needs to have a broad enough field of experience to know the proper technologies to implement the requirements. If the architect only has a shoe or a bottle to pound in a nail, then the final result can't help but fail. So, a software architect is starting to sound a lot like a general contractor. You get the request, choose the materials, plan your work, and then hand things off to subcontractors to fulfill.

That, too, is incidental.

Software doesn't exist in a void or for its own sake. It exists to fill a need. A real-life need that is both concrete and immediate. Whether the application is as innocuous as Angry Birds or as complicated as the Linux Kernel, it exists because someone needed it. In order to design a useful application, you need to understand that need. When you understand a need, you can provide a solution. If you provide a solution, then you can provide a spec doc that can actually be fulfilled. If you have a spec doc that can be fulfilled, then you can choose the right tools and build the proper design.

This is not incidental. This is the first and principal thing that you are and the need you serve. You're a problem solver. This has been my employment Elevator Pitch for years.

"What do you do?"

"I solve problems."

Monday, July 15, 2013

The Ant, The Tiger, and The Programmer

“You have power over your mind - not outside events. Realize this, and you will find strength.” --Marcus Aurelius


“My weaknesses... I wish I could come up with something. I'd probably have the same pause if you asked me what my strengths are. Maybe they're the same thing.” --Al Pacino


Consider the ant. It is, pound for pound, one of the strongest animals out there, but since it weighs in at a stunning 0.0003 grams, that metric is less than useful. No, the greatest strength of the ant lies in a colony’s sheer number of ants and the fact that they can act with a single-minded determination to get a task done. Very little short of poisoning the lot of them interrupts their task once they begin, and they have the ability for hundreds, if not thousands, to act as if one being.

Consider the tiger. Tigers are not pack animals. Add a second tiger to a tiger’s territory and you’ll likely end up with a dead tiger. They do not like competition. And yet, as hunters go, tigers are frighteningly effective. They can grow to over 3 meters (10 feet) long, weigh around 300 kg (660 lbs), can run at roughly 60 kph (37 mph) nearly silently, and are powerful enough to bring down prey as large as a young rhino or elephant. Where the tiger walks, things that don’t want to be lunch best move carefully.



Two remarkably effective animals. One can only follow orders, but carries out its task with a determination and persistence rarely found in nature. The other is nearly unable to work with others of its kind, but has a frightening level of individual ability. Both achieve their goals, but in opposite, and incompatible, ways.

Despite having just called the two “incompatible”, a development team must be a colony of tigers. Considering the dev lead as the colony “queen”, each developer must be able to accept their marching orders and complete their assignments with the determination of an ant. Yet they must not be the mindless workers that ants embody. They must have the individuality of a tiger and a tiger’s ability to determine how to achieve a goal, and then the ability to actually achieve it. And the “Colony Queen”, i.e. the Dev Lead, needs to understand that the tiger is not a mindless implementer of tasks, and that treating it as one endangers the overall effort.

We all know developers that are like the ant. Practically useless on their own, because they can’t hold an application in their head or understand how various objects should relate or even how various pieces of functionality impact each other. But hand them a task and they complete it. We all know developers that are like the tiger. Highly skilled, able to intuitively understand the task at hand and how best to implement it. But get in his way, critique his code, touch his check-ins, or even talk to him when he’s in the middle of something, and he’s going to bite.

As a developer, it is critical to your career development to embody the strengths of both the tiger and the ant while taking on none of their weaknesses. You must be a colony member that hits his tasks reliably. You must be a tiger that can rely on your own skill and experience. However, when your ability and understanding seem to conflict with your direction, you must be able to do something that neither ant nor tiger can do. You must be able to communicate your results and opinions to team leadership, do so clearly and helpfully, and understand that your ideas will not always be taken.

Make no mistake: this is the difference between a successful software developer and one that is perpetually wondering why he can’t keep a job. It’s not ability. Every developer I've ever worked with is either a tiger or an ant. The successful ones are the developers that can maintain those strengths without succumbing to the weaknesses. Are you an ant? Don’t become so focused on carrying out tasks that you forget that you can contribute your knowledge and experience, not just repository check-ins. Are you a tiger? Remember that at some point you’re going to be overruled. Learn why, rather than biting. Is there a non-functional limitation, such as time frame or lack of stakeholder buy-in, that simply cannot be controlled? Is your preferred path incompatible with another development team’s work? Remember that setbacks are learning opportunities, and treat them with grace. Remember also that others have skill and experience as well, and respect that. Especially if you disagree with them.


Neither the tiger nor the ant will ever be anything more than what they are. Their weaknesses ensure that they are only useful in narrow circumstances. Take on the strengths of both and the weaknesses of neither, and you will quickly find that your organization finds more and more situations where you are considered useful, or even necessary.

Friday, July 12, 2013

Projects and Icebergs

“Once, I watched a class being taught to a small group of children. The subject was Aikido, an ancient martial art which utilized much inner energy, or ki. The instructor used an analogy to show internal versus external strength. ‘Ki’, he said, ‘is much like an iceberg. There is a tip, which is visible, much as external strength, which uses muscles; then there is the internal strength, which is at once much greater, and yet hidden.’
When he had completed his explanation he asked, ‘Are there any questions?’ A small boy of perhaps four or five years raised his hand. ‘What’s an iceberg?’” --Steve Perry, “Matadora”

Software development projects hinge on many things. The makeup of the development team. Resources that can be committed to the project. Management and stakeholder buy-in. However, the biggest problem I've seen is communication and the biggest communication problem I've seen is defining terms so that everyone is using a common lexicon.

Some years ago, I was part of a development project that was a miserable failure. The six-month project was still in development after two years, and eventually a competitor beat us to market. Looking back on it, the core problem could be distilled down to the fact that no two groups of people agreed on what any phase of the project meant. It’s not that anyone wanted to skip defining the project work or goals, or wanted to skimp on testing, or was unwilling to take the time to meet and discuss project progress. The problem was that no one agreed with anyone else on how to define these aspects of a project or what should be done with them.

To some, “Testing” meant developers testing their work, and nothing else. To others, testing meant focus groups. Some thought that the project goals could be adequately communicated in a Project Charter document, others wanted to have agile-like meetings to discuss goals. And no two groups agreed on what “Finished and ready to deploy” meant. One group thought we were building an ecommerce site while another was adamant that the site should not handle payment transactions. One group defined “Deployment” as a full release that would allow all customers to begin using the site. Others defined “Deployment” as a limited release, similar to a beta test.

Even worse, there was no common definition of project roles. Sure, there was a project manager, product owners, executive stakeholders, a dev lead, and QA staff. We hit all those bullet points. The problem was that no two people agreed on what these roles meant. Because of that, we had dual problems: certain critical tasks went unperformed, while stakeholders fought over how other tasks should be handled because multiple groups believed those tasks were theirs to do.

As a result, work stagnated as developers began undoing work that one group had requested. Frustration levels ran high and developers began dragging their feet on work they didn't see a point in doing, knowing that someone else would soon have them undo it. Testing was a mess, as some groups refused to test functionality, thinking that was the developers' job, and others submitted the application to a focus group and expected any feedback to be addressed. Finally, the project died when developers started leaving for other jobs.

I want to take a moment and underscore that last point. Not only did a lack of common definition of concepts cause the failure of this project, but it hindered future development projects due to development staff leaving the company.

In contrast is the project I’m on now. Development is going to take at least a year and a half and the resulting platform will have implications far beyond the current business need. We are coordinating three different business units with similar, but not identical needs. Work will be performed by four parallel development teams working separately but coordinating with each other. And this is all possible because before we even began gathering requirements we defined every project concept we could think of so that there was no misunderstanding. The definition of each project role. What "Scrum" and "Agile" meant in the context of our project. And, of course, the definition of "Project Complete".

As a result, milestones have become easier to hit. Not easy, mind you, just easier. We have agreed on what "Acceptance Criteria" and "User Story" mean so that when we do requirements gathering we all know what information is to be presented and in what form. Development milestones are easier to communicate, since we all know what to expect of code that is “Ready for sprint review”. When we talk about testing, we have agreed on what that means. Now everyone knows what has happened at each testing stage. As a result, the state of the code is clear at any given step. We have agreed on what it means to release code to production and we have a clear definition of what it means for this project to be finished.

Better yet, we have a common agreement on all project roles. Because we took the time to define “Product Owner”, we know what to expect from that role. Those responsibilities are clearly documented. As is the meaning of “Project Manager”, “Project Architect”, “Development Lead”, and “Business Analyst”. We know who does what, what to expect from whom, and to whom to look for any needed deliverable.

Of course, a common definition does not guarantee success. Agreement on terms doesn't remove the work of meeting project milestones. Because of the common lexicon, though, we can take a look at any part of the project and make a determination of whether or not it meets the definition it is supposed to. If two parties disagree on whether or not the definition has been met, we know exactly who makes the final decision. If that decision is that the definition is not properly met, we know what needs to be changed and exactly who is responsible for making the change.

Communication is one of the biggest single points of failure in a software development project. Nothing can tank a project faster than misunderstandings or people not clearly communicating their understandings and expectations. But before you can clearly communicate, everyone on the project must know what an iceberg is.

Monday, July 8, 2013

5 Architecture Mistakes to Avoid

“Success does not consist in never making mistakes but in never making the same one a second time.” --George Bernard Shaw

“Insanity: doing the same thing over and over again and expecting different results.” --Albert Einstein

In a previous article I referenced what I consider to be the two biggest sins a software architect can commit. Following is a list of what I consider the biggest preventable mistakes an architect can make. These mistakes can tank a project or cause major grief after deployment, and they are all preventable if you keep them in mind.

1. Missing the obvious
It’s embarrassing to realize that you've designed an online payment processor that can’t cleanly handle the addition of a new payment vendor or a new method of payment. This is Software Architecture 101. Assume that at some point in time there will be a need to use a new vendor because they offer a type of payment that your current vendor does not.
Whether it’s a hard-coded value, a data type used outside of a reasonable context, or a missed design pattern, this mistake is a landmine. And no one wants Amnesty International protesting your code base.
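One way to defuse that particular landmine is to put a seam where the change is foreseeable. A minimal sketch, assuming a hypothetical `IPaymentGateway` interface (all names here are invented for illustration):

```csharp
// Hypothetical seam: new vendors and payment methods plug in behind an
// interface instead of being hard-coded into the checkout flow.
public interface IPaymentGateway
{
    PaymentResult Charge(decimal amount, PaymentDetails details);
}

public class CheckoutService
{
    private readonly IPaymentGateway _gateway;

    // Adding a new vendor means writing a new IPaymentGateway
    // implementation, not rewriting CheckoutService.
    public CheckoutService(IPaymentGateway gateway) { _gateway = gateway; }

    public PaymentResult Pay(decimal amount, PaymentDetails details)
    {
        return _gateway.Charge(amount, details);
    }
}

public class PaymentDetails { public string AccountToken { get; set; } }
public class PaymentResult { public bool Succeeded { get; set; } }
```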

2. Accounting for the unreasonable
Over-architecting is as bad as under-architecting. Don’t create a multiple endpoint web service that acts as a front end to a Windows service unless you’re certain that a piece of functionality needs to be that accessible. Don’t react to “This might change”. React to “This is likely to change.”

3. Being overly clever
We've all done it at some point. We all want to show how clever we are, or how smart we are, or how innovative we are. We've all, at one time or another, written a section of code that makes others go “Wait... what?” The urge to do this does not make you a bad developer. It makes you normal. Well, normal as the term relates to software developers.
That being said, elegant code is code that solves a problem in a manner that seems effortless. It is not defining every method of a class as a closure so that the entire functionality of the class can be redefined at runtime. If your code is so abstract that you need a Philosophy degree to understand it, you need to rethink. Prefer simplicity in your designs. The next guy will thank you for it.
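For illustration, here is roughly the contrast described above, with invented names. The first class holds its behavior in a swappable closure; the second just does its job:

```csharp
using System;

// Roughly the anti-pattern described above: behavior held in a closure so
// the "class" can be redefined at runtime. Clever, and a maintenance trap.
public class TooCleverTaxRules
{
    public Func<decimal, decimal> CalculateTax =
        amount => amount * 0.08m; // anyone can silently swap this out
}

// The boring version the next guy will thank you for.
public class PlainTaxRules
{
    public decimal CalculateTax(decimal amount)
    {
        return amount * 0.08m;
    }
}
```

Nothing stops a caller from writing `rules.CalculateTax = a => 0m;` somewhere far away, which is exactly the kind of “Wait... what?” the next maintainer doesn't deserve.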

4. Not understanding the platform
Sure, software architecture concepts apply across platforms. No matter what you use, you break up your application into usable modules (whatever you call them) that do one thing. The Strategy Pattern is useful in any OOP environment, and the underlying concept is important for Functional Programming.
That being said, if you don’t understand the tools that your development platform offers, you can only choose the correct tools by luck. And luck is a terrible thing to rely on. Rails is not the same as .Net is not the same as Java is not the same as ColdFusion. And while approaching design on a platform from the point of view gained from another platform can yield some interesting insights, if you don’t understand your platform you almost can’t help but miss something. Building a webservice in .Net? Better understand WCF. Building anything in Ruby on Rails? Get used to Gems and understand what they offer. Get used to the Active Record Pattern. It doesn’t matter if you don’t like it (I don’t); you can’t understand how to account for its weaknesses, or whether it’s worth bothering to do so, if you don’t understand how it works in the first place.

5. Being single-focused
This isn't exactly the “If all you have is a hammer, all problems look like a nail” adage that is so common in software development. This is the acknowledgement that developers tend to see projects through a particular lens, regardless of their skills or experience. I tend to write class libraries that serve as an API for whatever front end application may need to consume them. In general, this has served as a solid solution and has become something of a “Go-To” approach for me. Until, on my current project, the dev lead asked me “Why aren't we using SSIS?” He was absolutely right, too. SSIS was the right solution. Don’t fall in love with a particular tool. Be flexible enough to choose the right tool.

Bonus Mistake
The assumption that being an architect makes you right and forgetting that it makes you accountable. Okay- that’s a bonus mistake two-fer.
*Always* listen to what the team developers have to say, regardless of whether or not you agree with them. In fact, especially if you don’t agree with them. Not listening to skilled professionals makes them mad, which will lead to all sorts of problems. They’re the ones actually using the software design you've created, and when the rubber hits the road it’s not uncommon to find some bumps. On the other hand, if the feedback you get is consistently taking a problematic path, you know you need to step back and talk to the dev team so you can come to some basic understandings of how you see things and how they see things.
That being said, at least where I work, architecture is not a democracy. We don’t vote on it and if I create something that doesn't work “But the dev team liked it better” doesn't cut it. I’m responsible for the design. That means listening. That means thinking. That also means being able to make a decision. And at some point making a decision means being able to say “Thank you for your input. I've taken your feedback into consideration and this is the direction we’re taking.”

These mistakes can all turn an application or a development project ugly. They can cause maintenance issues, extensibility issues, or project issues. They can manifest themselves quickly or lie in wait until another factor brings them to light. But they are all preventable and mark the difference between the architect that helps create an elegant and useful solution, and the architect that development teams have to put up with and work around in order to release something workable.

Monday, July 1, 2013

Foreseeable and Imaginary Design

“Architecture that protects against foreseeable change has foreseeable value. Architecture that protects against imaginary change has imaginary value.” --Office saying

Way back when I was a young web developer, I worked for a small, very specialized health insurance company. The business model was fairly simple: provide health insurance to U.S. citizens traveling to foreign countries where their regular health insurance wouldn't be usable, and then get independent insurance agents to sell it. The IT and Development departments were fairly carefully run, in that decisions were not made lightly and a great deal of thought was put into infrastructure: network, database, and application. Central to much of this was the all-important Customer record, containing all the necessary information about an insured, an insured’s spouse, and links to the insured’s dependents.

Eventually, as often happens with a small company looking to become a medium-sized company, they branched out their business model. Rather than just selling to travelers, they bid for and won a contract to become the official health insurance provider of a small island nation. Excitement abounded, as this contract was, from my understanding, worth as much as the current revenue of the company. To put it mildly, much effort was put into implementing the needs of this nation.

That’s when a problem arose. Polygamy, specifically men with multiple wives, was not only allowed but a common occurrence. The Customer record represented a customer as it related to a purchased insurance policy, and the data model only allowed one spouse for a customer that purchased an insurance package. This, of course, could be changed, but the relationship was so central that changing it would have been like throwing a rock through a window. There would be obvious damage to both the flagship application and the website, but once that was fixed there would still be tiny invisible cracks in the infrastructure. Cracks that were invisible until they suddenly caused more damage. Ultimately, the level of effort was deemed too much.

As architects, it is our job to anticipate and protect against foreseeable change. The key word here is “foreseeable”. Architecture that protects against foreseeable change has foreseeable value. Architecture that protects against imaginary change has imaginary value. The two worst sins in application architecture are missing the obvious and accounting for the unreasonable. Of the two, the latter is the more subtle mistake. When you account for the unreasonable, you not only extend the amount of work that needs to be done, but you also increase the complexity. Do you really need a calculator that allows client applications to define their own implementation of a number? Certainly not. However, say you’re designing an application that is meant to move data from one repository to another, and it’s likely that this process will not be needed elsewhere. Or at least not in many other places. As an architect, it’s your job to say “SSIS package”, or something similar based on your preferred platform. If you find yourself designing a webservice that takes an interface implementation that defines a recordset, determines a queuing database to save the recordset to based on a data manager received from a factory, and then notifies a Windows service to process the queue, then you really, really need to ask yourself if you’re working too hard and making things too complex.

Which makes the most important job of the architect not implementing design patterns or creative ways of solving problems. Not even, should your organization task architects with this, providing accurate time and effort estimations for development projects. These are all secondary to finding answers to the questions “What changes are possible, and which of those changes are foreseeable?” Clearly it is not the architect’s job to answer those questions. That falls squarely on the business stakeholders. But it is the architect’s job to ask, to account for the changes that are foreseeable, and to ignore the changes that are imaginary. I've actually created a “Mental Warning Flag” list. If I find myself implementing something on that mental list, I stop and ask for another set of eyes to give feedback. To use a phrase that will almost certainly be used again: “Two developers working together are more than twice as smart as two developers working separately”.

In the insurance company’s case, the data model seems reasonable for the original intended purpose. A Customer represented someone who had purchased an insurance package. The purchaser could have, at most, one spouse, because the target customers were only legally allowed one spouse. Not even divorce and remarriage were relevant, because the record represented the relationship as it applied to a purchased insurance package. The marital status and spouse for a given insurance package had no impact on past or future spousal relationships.
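For illustration only, here is a hypothetical C# reconstruction of the two shapes of that data model; the class and property names are invented, not taken from the actual system:

```csharp
using System.Collections.Generic;

// Roughly the original shape: foreseeable for its original market,
// with exactly one spouse baked into the structure itself.
public class CustomerRecord
{
    public string InsuredName { get; set; }
    public string SpouseName { get; set; }       // one spouse, by design
    public List<string> Dependents { get; set; }
}

// The shape the island-nation contract needed: the spousal relationship
// modeled as a collection, so "how many" is data, not structure.
public class FlexibleCustomerRecord
{
    public string InsuredName { get; set; }
    public List<string> Spouses { get; set; }
    public List<string> Dependents { get; set; }
}
```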

Did an architect ask the two most important questions? I don’t know. I wasn’t there. If so, was the possibility of eventually covering people who could legally have multiple spouses considered foreseeable or imaginary? I don’t know that either. What I do know is that if you are acting as an application’s architect, be it a defined role in your organization or because the task falls on you as the application’s developer, it is your job to ask those questions and continue asking until you are satisfied with the answers. And that next time I model a data relationship involving marriage, I’m going to ask about the possibility of polygamy.