Monday, December 2, 2013

Healthcare.gov failures in leadership

"Good management consists in showing average people how to do the work of superior people." --John D. Rockefeller

"Good management is the art of making problems so interesting and their solutions so constructive that everyone wants to get to work and deal with them." --Paul Hawken

Yep. I'm back at this well. Partially because it's a great way to boost hits (I don't use ads, but I do have an ego) but mostly because I've been there. Not in a project this size, but I've lived the nightmare. Maybe someone who can affect this boondoggle reads this and listens. Likely not. I have an ego, but I'm a pretty small fish. More important to me is that the developers and IT staff affected by projects like this understand the failings so that they know when to update their resumes.

This one is coming from a New York Times article titled Inside the Race to Rescue a Health Care Site, and Obama. And again, whether Ms. Stolberg and Mr. Shear realize it or not, they paint a picture familiar to many IT veterans.

Failures of Testing

I've written about the technical failures of the development teams, but this article underscores something all development teams know, yet few practice: testing. From the article:
"To do that, they would have to take charge of a project that, they would come to discover, had never been fully tested and was flailing in part because of the Medicare agency’s decision not to hire a "systems integrator" that could coordinate its complex parts." 
"The website had barely been tested before it went live, so a large number of software and hardware defects had not been uncovered." 
"'There’s so much wrong, you just don’t know what’s broken until you get a lot more of it fixed,' Mark Bertolini, the chief executive of Aetna, said on CNBC."
Regression testing. Unit testing. User Acceptance testing. I can't think of a better way to talk about the dangers of not budgeting time to properly test your project.
"In Herndon, as engineers tried to come to grips with repeated crashes, a host of problems were becoming apparent: inadequate capacity in its data center and sloppy computer code, partly the result of rushed work amid the rapidly changing specifications issued by the government."
Code reviews and proper architectural oversight would have prevented the "sloppy code" issue. The rest of this segues nicely into my major point, though.

Failure to understand a software development project

Clearly, project management didn't know how to manage a large-scale project. I've said before that the evidence could not be clearer that this project was managed Waterfall-style. And while the far-preferable Agile development style can also struggle with changing requirements, the iterative workflow, combined with constant checkpoints with the business stakeholders, allows better options for dealing with that fact of life, spec change. Here are a few examples from my playbook:

"No, we will not change specifications in mid-sprint. If you want this change, submit it to the Icebox and give it a priority. We will then do the design and estimation work and add it to a future sprint."

"No. This is a two week sprint. We will not finish early. We will not rush. We have a planned amount of work that will take two weeks to deliver properly."

"No, we will not add to this sprint. The sprint covers an amount of work that can be accurately delivered in two weeks."

"No, we will not include this work into a sprint until there is proper user acceptance criteria attached. The delivered code will then conform to the acceptance criteria listed. No more, no less. I'm here to help you write good acceptance criteria."

Enforcing this kind of project discipline is crucial. It should be well-communicated up front. Everyone should know what the Icebox is. And once the expectation is set, it is enforced. But even with that, the problem runs far deeper. Some quotes from the article really chilled me:
"Out of that tense Oval Office meeting grew a frantic effort aimed at rescuing not only the insurance portal and Mr. Obama’s credibility, but also the Democratic philosophy that an activist government can solve big, complex social problems." 
"'We’re about to make some history,' she (Ms. Sebelius) said."
"...reveals an insular White House that did not initially appreciate the magnitude of its self-inflicted wounds"
As any regular reader knows, my overall philosophy comes from Marcus Aurelius:
“This, what is it in itself, and by itself, according to its proper constitution? What is the substance of it? What is the matter, or proper use? What is the form, or efficient cause? What is it for in this world, and how long will it abide? Thus must thou examine all things that present themselves unto thee.”  
By itself and in its proper constitution, Healthcare.gov is a website that allows people to purchase health insurance. It is not a justification of a political philosophy. It's not about making history. Those are incidental. You cannot make good decisions if you don't understand the context for those decisions. You can't make good decisions about a development project if you see it as anything other than a software development project. And don't think for a second that this somehow minimizes its importance. Most developers I know care a good deal more for their code than they do your politics. But if management mistakenly sees a project as some kind of ideological movement, or as their legacy, then they aren't making project management decisions.

Management culture

Finally, we come to what may be the worst failing of this project's management: its culture. The standards management sets for its operations. The only place I've ever heard of management leading so poorly is in Dilbert. For example:
"For weeks, aides to Ms. Sebelius had expressed frustration with Mr. McDonough, mocking his “countdown calendar,” which they viewed as an example of micromanagement."
Mocking other stakeholders. Setting aside, for a moment, the expectation that professionals not act like spoiled children, why was this behavior considered acceptable? Management sets the standard for how people act. A good manager understands this. A good manager sets an expectation of professionalism. Complaining is one thing. Open mocking should be the sort of thing people are embarrassed to do, or even to listen to.
"Contractors responsible for different parts of the portal barely talked to one another, hoping to avoid blame."
There's plenty written about how a management culture of blame creates a toxic environment. Problems aren't addressed. People pass the buck rather than fix things. The concern becomes not being left holding the bag, rather than making sure the work is done well. Fear does not create quality. Clearly, that lesson wasn't learned here.
 "Mr. Obama, meanwhile, was under assault. After years of telling Americans, “If you like your insurance plan, you can keep it,” he was being accused of lying. On the night of Oct. 28, Ms. Jarrett, one of Mr. Obama’s closest confidantes and a guardian of his personal credibility, took to Twitter to defend him — and to shift the blame. 
“FACT,” she wrote. “Nothing in #Obamacare forces people out of their health plans. No change is required unless insurance companies change existing plans.”
The tweet touched a nerve; it was not the first time the Obama White House had used the insurance industry as a scapegoat. Ms. Ignagni’s (chief executive of America’s Health Insurance Plans,) members were furious. “Here it comes — we knew it would happen,” one executive recalled thinking."
The Obama administration built support for the ACA by stoking hostility toward the insurance industry, with whom it must now work. In essence, the administration has taken every opportunity to publicly condemn the insurance industry and is now relying on that same industry to launch not only its centerpiece website but the validation of its political philosophy. A management team that does not treat its vendors with respect will find itself very lonely once it needs them. UnitedHealth and CIGNA have already "mostly shied away from the online marketplace" (from the article). To assume this has nothing to do with the administration's clear contempt for its vendors is folly.

I have a difficult time summing up my shock at the sheer number of leadership failures involved here. Not just in knowing how to run a software development project- clearly Medicare and the Department of Health and Human Services considered this a "fire and forget" project, believing they were absolved of all responsibility to stay involved once the project started. More disturbing, though, is the culture that management allowed. Childish behavior and finger-pointing should not be acceptable. And that attitude starts at the top.

Wednesday, November 13, 2013

Dedication and Vision

"If you do not have an absolutely clear vision of something, where you can follow the light to the end of the tunnel, then it doesn't matter whether you're bold or cowardly, or whether you're stupid or intelligent. Doesn't get you anywhere." --Werner Herzog

Short article today.

Ran across something that really made me think about software development projects. Three developers in San Francisco built an alternative to Healthcare.gov in two weeks.

Two weeks.

Now, you can't purchase- just search and compare rates and plans. But again. Two weeks.

No complicated procurement process. No contractors. Certainly less than hundreds of millions of dollars. Just three dedicated guys with a clear vision.

A G+ friend of mine +Chaka Hamilton took this a step further:
I bet if the gov offered a darpa style challenge, they'd have a working website in half the time, and cost. They obviously have learned nothing from open source / crowd source community.
Indeed. What if they had?

Tuesday, October 29, 2013

A New Perspective

"I believe everyone should have a broad picture of how the universe operates and our place in it. It is a basic human desire. And it also puts our worries in perspective." --Stephen Hawking 

"Everything we hear is an opinion, not a fact. Everything we see is a perspective, not the truth." --Marcus Aurelius 

This article falls more heavily under the "Musings" title of my blog. I'm less making a point than thinking out loud. As always, feedback and insights are welcome.

My job responsibilities are changing. I phrase it like that because I doubt my actual title will change, merely the meaning of that title. I rarely handle project-level work anymore. Rather, I'm more involved with enterprise-level architecture decisions. I haven't implemented a design pattern in quite a while. I find myself, instead, setting the standard for which patterns are best to use or avoid in certain situations. Or which frameworks we will use, or whether we will use an off-the-shelf solution or build our own. In other words, my implementation decisions are becoming less important than my opinions on and experience with those decisions. I find this a very new perspective, and more than anything else, I find myself writing about new perspectives on old ideas.

For instance, take our current investigation into unit tests. The discussion started with "What mock object framework should we use?" That quickly boiled down to "Which frameworks would be unduly burdensome to the development staff?" This actually eliminated a couple of frameworks at the start. But when we settled on two that are, more or less, of equal use, the conversation quickly turned to unit test standards. I have a few strong opinions on the matter. I believe that unit tests should cast a wide net so that behavioral consistency can be assured. If that means mocking internal methods to assure their consistency, then so be it. If that means only using strict mock objects, despite their fragility, then so be it. In fact, I like fragile unit tests. If the behavior of the class changes, then the tests should break. I realize that my opinions are not shared by the community at large, and I'm okay with that. I'm always open to debate, but I approach things somewhat differently than most. I think I've made that clear. Of our enterprise-level architects, one disagrees with me and the other is still weighing arguments, and that's great. That kind of conversation is a new perspective for me.
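A sketch of that strict-mock preference, using Python's unittest.mock purely for illustration (we haven't picked a framework, and RateService and cheapest_quote() are hypothetical). With create_autospec(..., spec_set=True), touching anything the real class doesn't define fails immediately- the test is fragile by design.

```python
from unittest import mock


class RateService:
    """Hypothetical dependency; the real implementation lives elsewhere."""
    def quote(self, plan_id):
        raise NotImplementedError


def cheapest_quote(service, plan_ids):
    # Code under test: depends only on the RateService interface.
    return min(service.quote(p) for p in plan_ids)


# A strict mock: only RateService's real attributes exist on it.
strict = mock.create_autospec(RateService, spec_set=True, instance=True)
strict.quote.side_effect = [310, 290, 400]

assert cheapest_quote(strict, ["a", "b", "c"]) == 290

# If the class's interface changes (say quote() is renamed), the strict
# mock raises instead of silently handing back a do-nothing Mock:
try:
    strict.fetch_quote("a")
except AttributeError:
    pass  # the fragility is the point: the test breaks loudly
```

This is, as I understand it, the same effect "strict" behavior gives you in the .NET mock frameworks: unexpected calls fail the test instead of passing quietly.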

So what's this new perspective and what does it give me? Because I'm no longer considering patterns and practices for a given set of circumstances, but rather for the whole enterprise, I have to weigh the pros and cons of those patterns and practices more seriously. I find that thinking about the effects of standards on the enterprise at large makes me think differently about the effects of my design decisions at the project level. Not just "Does this work here?" but "Would this work in other, similar situations, and if not, why not?" If the answer to the second question is no, then I should reconsider my decision at the project level. Note that I'm not offering concrete conclusions here. I'm expanding my perspective and thus expanding the pool of questions I ask myself before making a decision.

Maybe that's my point here, although I expect I'll be getting comments about my approach to unit tests. That's fine, too. The day I stop listening to others is the day I stop being useful.

Tuesday, October 22, 2013

The Unambiguous Measure of Success


"[T]he presence of an unambiguous measure of ex-post success (profit) serves to harness the natural tendency toward overoptimism that otherwise would almost certainly be present when someone else’s money is being spent." --Robert Wagner, "Economic Policy in a Liberal Democracy"

Every once in a while I'll come across a quote or an article that makes me think about software development. Often that's because I tend to read a lot of material related to software development, but sometimes it's not. Such is the case of Donald Boudreaux's Quotation of the Day for October 23rd. The full quote is:
[T]he presence of an unambiguous measure of ex-post success (profit) serves to harness the natural tendency toward overoptimism that otherwise would almost certainly be present when someone else’s money is being spent.  The necessity of putting one’s money on the line and of being responsible for the ultimate outcome surely has a sobering effect on the assessment of the prospects for such projects [that governments typically undertake], an effect that is weakened when tax money is used in a setting where no judgement about profitability has to be faced.
I'm no economist and I don't pretend to be one. And this isn't a post about economics, anyway. What caught my eye was the idea of harnessing overoptimism by putting one's money on the line and being held responsible for the ultimate outcome against an unambiguous measure of success. Even though we all know, often through painful experience, why clear and unambiguous project goals are a necessity, I think it's interesting to look at them from an economist's viewpoint.

Instead of "someone else's money is being spent", let's use "someone else's resources are being spent". In other words, not just the salaries of development staff but also time and infrastructure. If the burden of this is borne solely by the development staff, then the tendency of the customer stakeholders is toward overoptimism. Features, both initial and scope creep, and timelines all trend toward pushing the limits of what the development staff can reasonably accomplish. At least, that's been my experience.

A couple of things happen when the customer is expected to provide a clear and unambiguous measure of success- not just in a requirements document, but through close involvement in the development process, both in defining clear acceptance criteria for user stories and in reviewing the results of development sprints. The customer is now spending their resources on the project. Their staff has to be available to clarify requirements. Their staff has committed time to ensuring that the "measure of success" is being met. And their staff has to budget, and therefore use effectively, their time for the project in relation to the time needed for other tasks. Those with a stake in the game tend to be more careful with how those resources are spent and to ensure they aren't wasted. This makes sense, really. People tend to spend money more frivolously when it isn't their money, or when it doesn't look like their money. It's why managing credit can be tricky and why casinos use chips instead of currency. Why should spending resources on a project be any different?

I like this quote. It's a truism of software projects, for that matter projects in general, that you can't finish a project if you don't know what "finished" looks like. I'd never stopped to think about how a clearly defined measure of success affects the customer in a project.

Thursday, October 17, 2013

Health Care Exchange Project Pt. 3


"One test is worth a thousand expert opinions." -Wernher Von Braun

And finally the last piece of the puzzle. As a software developer, I find the previous types of project problems maddening. However, this last category boggles my mind. Perhaps I'm just an idealist, but I truly want to believe that this sort of thing doesn't happen anymore. Sadly, I read The Daily WTF far too often to really believe it. We are now down to technical failures.



Again, quoting from From the Start, Signs of Trouble at Health Portal:
"Others warned that the fixes themselves were creating new problems, and said that the full extent of the problems might not be known because so many consumers had been stymied at the first step in the application process."
"'So much testing of the new system was so far behind schedule, I was not confident it would work well,' Richard S. Foster, who retired in January as chief actuary of the Medicare program, said in an interview last week."
We all know that maintenance, especially bug fixing, is the true bulk of any software development work. And we all know that testing is the heart of finding, and therefore fixing, bugs. This is not under dispute. However, I've been associated with so many development projects that ignore this basic principle that I want to weep sometimes. And it's always the same. "We don't have time to test because we're busy building features". Or "We'll focus on testing later". Or the worst, "We'll worry about bugs when they're reported by users". (Yes- I've been told that)

To be clear: unit tests ensure that a unit of code produces a consistent result at any point in time. They don't ensure that the code does what it's supposed to do; they ensure that what the code does hasn't changed due to other factors. Unit tests are how you do regression testing. At least, how you do it without descending into Cthulhu-level madness.
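Here's a minimal sketch of what I mean by unit tests doing regression duty. The premium_for() rating function is a hypothetical stand-in for real business logic; the numbers are invented. The tests don't prove the numbers match the spec- they prove the behavior hasn't shifted underneath us since it was last verified.

```python
def premium_for(age, plan):
    """Toy rating rule: base premium by tier, plus an age adjustment."""
    base = {"bronze": 200, "silver": 300, "gold": 400}[plan]
    return base + (age - 21) * 5


def test_known_rates_have_not_drifted():
    # Pin known-good outputs. If a later change to the rating logic
    # alters these, the regression is caught before rollout.
    assert premium_for(21, "bronze") == 200
    assert premium_for(40, "silver") == 395


def test_unknown_plan_still_fails_loudly():
    # Failure behavior is behavior too; pin it the same way.
    try:
        premium_for(30, "platinum")
        assert False, "expected a KeyError for an unknown tier"
    except KeyError:
        pass


test_known_rates_have_not_drifted()
test_unknown_plan_still_fails_loudly()
```

Run on every build, a suite of these is the "have we broken anything?" answer that Healthcare.gov evidently launched without.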

User acceptance testing ensures that users can actually perform the tasks called for in the specifications. This doesn't happen at the end of the project. It happens at planned stages throughout the project, so that testing covers a manageable set of features- a set that can be easily documented, easily described, and easily managed. Failure to do this step before rollout is inexcusable.

And while we're at it, what about exception-handling testing? What effect does any given exception have? How is it reported, both to support and to the user? How are exceptions tracked? Testing isn't just about making sure the application works well. It's about ensuring that it fails gracefully.
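A sketch of what testing for graceful failure can look like. The submit_application() wrapper and flaky_backend() are hypothetical; the point is that the tests assert what the user and support staff actually see when something breaks, not just the happy path.

```python
import logging

# Capture log output so the test can assert on what support would see.
log_records = []
handler = logging.Handler()
handler.emit = lambda record: log_records.append(record.getMessage())
logger = logging.getLogger("exchange")
logger.addHandler(handler)


def submit_application(backend, form):
    """Submit a form; on backend failure, log details for support and
    return a plain-language message instead of leaking a stack trace."""
    try:
        return {"ok": True, "id": backend(form)}
    except Exception as exc:
        logger.error("application failed: %s" % exc)
        return {"ok": False, "message": "We couldn't process your "
                "application. Please try again later."}


def flaky_backend(form):
    raise ConnectionError("identity service timeout")


result = submit_application(flaky_backend, {"name": "Jane"})
assert result["ok"] is False                # the user still gets a result...
assert "try again" in result["message"]     # ...in plain language
assert "timeout" in log_records[-1]         # and support gets the details
```

Both halves matter: the last assertion is the "how is it reported to support?" question made executable.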
"The biggest contractor, CGI Federal, was awarded its $94 million contract in December 2011. But the government was so slow in issuing specifications that the firm did not start writing software code until this spring (Em. mine- MO), according to people familiar with the process."
I'm pretty okay with most of this, but the failure is so bad that it's worth mentioning. The award amount doesn't bother me. I'm also pretty okay with two years of requirements. This isn't like turning on a switch and watching everything work. This is a serious development project- far more serious than anything I've participated in. I would have been more surprised to see the award amount or the planning time significantly lower.

But read the bit I emphasized. Development didn't start at any point during planning. In other words, a project with this level of work and complexity was attempted Waterfall-style. Not Agile. Waterfall. In a project like this, the technical leadership deliberately passed on the ability to easily react to changing requirements. And on the ability to work on completed requirements as they become available. And on increased involvement between development and stakeholders. And on continuous testing.

I'll take the heat for saying this- Agile isn't a buzzword. It isn't a topic for bloggers to discuss. And it isn't an alternative methodology. It's the only sane way of approaching a development project of any more than trivial size. You simply cannot anticipate everything ahead of time, and attempting to do so harms the project more than it helps. Case in point.

All of the problems the NYT article describes are serious. Any more than one or two of them will probably sink a project. The fact that people are reporting this many fundamental mistakes makes this an example everyone familiar with software development should understand. If only to protect your career.

Health Care Exchange Project Pt. 2


“No matter how good the team or how efficient the methodology, if we’re not solving the right problem, the project fails.” - Woody Williams

In my previous article I started talking about the New York Times article From the Start, Signs of Trouble at Health Portal. See previous article for the disclaimers that hold here as well.

In Part Two, I want to talk about the parts of the article that, to me, describe a major failure in project leadership. Not to be confused with executive leadership- in this case, the failures lie with those responsible for the actual day-to-day project management.

"Failing to plan is planning to fail." It's a cliche for a reason. Starting a development project without a plan, or with an obviously flawed plan, is a massive waste of time and money. From the article:


"Dr. Donald M. Berwick, the administrator of the federal Centers for Medicare and Medicaid Services in 2010 and 2011 'The staff was heroic and dedicated, but we did not have enough money, and we all knew that,'"
From the beginning, we have a serious issue: the project wasn't adequately funded. The only way this works is with a plan to scale back what you can't pay for. The quotes in the previous article regarding executive leadership make this an impossibility, however.
 "Some people intimately involved in the project seriously doubted that the (Medicare and Medicaid) agency had the in-house capability to handle such a mammoth technical task of software engineering while simultaneously supervising 55 contractors."
"The political people in the administration do not understand how far behind they are." 
Well, now we have some insight into the executive leadership issues. Project management is supposed to ensure that the correct groups are responsible for units of work and for reporting progress. It very much sounds to me like the project management team fell flat here. This is by no means unique to the ACA, nor even to government development projects. I've seen, far too often, project managers who think their job ends with the kickoff meeting, or who only schedule meetings and do little else. A good sign of a sinking project is negligent project management. Worse is project management that is unfamiliar with what the role entails.
"A round-the-clock effort is under way, with the government leaning more heavily on the major contractors"
"Worried about their reputations, contractors are now publicly distancing themselves from the troubled parts of the federally run project."
"Senior executives at Oracle, a subcontractor based in California that provided identity management software used in the registration process that has frustrated so many users, defended the company’s work. 'Our software is running properly,' said Deborah Hellinger, Oracle’s vice president for corporate communications."
How often have you seen this story play out? Lack of planning and poor leadership lead to "crunch times". As a result, development staff is required to work late. Demands rise past what is reasonable and soar to the ceiling of what is possible. The result? Discontent and CYA. The quotes above tell me that the contractors have already given up on the project and its leadership. Worse yet, the "blame game" is in full swing. Blame doesn't happen when the project staff sees the project as salvageable. Blame happens when the project is seen as a loss and people only want to salvage their careers. In a very real way, the blame game prevents problems from getting fixed.

Project management problems are a warning sign that is difficult to see. Oftentimes, poor project leadership isn't obvious until the project is well under way. At that point, recovering the project can be problematic. Tasks have already been assigned inappropriately. Reporting is either far behind or nonexistent. And at worst, although I didn't see evidence of this in the article, poor project management often involves a lack of clear project goals. This last is, in my opinion, the worst way project management can fail. If a project without clear end goals is allowed to continue, failure is the only possible result.

Health Care Exchange Project Pt. 1


“I have witnessed boards that continued to waste money on doomed projects because no one was prepared to admit they were failures, take the blame and switch course. Smaller outfits are more willing to admit mistakes and dump bad ideas.” - Luke Johnson

Let me start off saying that this isn't political commentary. If you want to talk about whether or not the Affordable Care Act (ACA) is a good idea or should be repealed, please go elsewhere and don't pollute my blog with political commentary.

Earlier today, I came across a NYT article headlined "From the Start, Signs of Trouble at Health Portal" and written by Robert Pear, Sharon LaFraniere, and Ian Austen. Before going on, I highly recommend reading the article. I also found a very readable companion piece written by Megan McArdle.

The reason it caught my attention isn't that it's about the ACA but rather the number of project failures I can relate to. The signs of common project management mistakes are so obvious that anyone experienced with software development projects will find themselves nodding along with nearly every paragraph. This article doesn't describe a mere failing project. The failures described here are so blatant, so numerous, that I have to wonder whether we're looking at deliberate sabotage. Or possibly the article is merely social satire.

The point of this isn't to slam the ACA, let me make that clear. The desired end of the project isn't what caught my attention. Nor am I trying to "discredit" the ACA. This is a real world study of how to run a software project into the ground and guarantee its failure before the first line of code is written. This article should, absent all else, serve as a warning to those undertaking software development projects, and indeed projects of any kind.

As this is far longer than my usual blog posts, I'm breaking it up into three parts, by what I see as logical categories of the project's failures. For the first one, lack of executive leadership.

It's frustrating when software projects are crippled by executive-level politicking. Executive-level leadership is foundational to a project's success. And like a building with a flawed foundation, poor executive leadership will destroy a project in unexpected ways. I have been involved in several projects that lacked executive-level leadership. All collapsed. Quoting the NYT article:




"Politics made things worse. To avoid giving ammunition to Republicans opposed to the project, the administration put off issuing several major rules until after last November’s elections. The Republican-controlled House blocked funds. More than 30 states refused to set up their own exchanges, requiring the federal government to vastly expand its project in unexpected ways."
"Administration officials dug in their heels, repeatedly insisting that the project was on track despite evidence to the contrary." 
"Mr. (Henry) Chao’s (Chief digital architect) superiors at the Department of Health and Human Services told him, in effect, that failure was not an option, according to people who have spoken with him. Nor was rolling out the system in stages or on a smaller scale, as companies like Google typically do so that problems can more easily and quietly be fixed." 
I see a few serious warning signs here. First, not all stakeholders were on board with the project. I don't care what rationale you use: if the stakeholders aren't on board, the project is doomed. This is as close to political as I want to get here. Neither the reasons the Democrats had for steamrolling the Republicans on the ACA nor the reasons the Republicans have for trying to kill it matter in this context. The unassailable, unarguable truth is that if a major stakeholder wants to kill a project, then the project is dead. Ideology does not change this, nor does intent. Whether the result of the project is necessary, helpful, or detrimental is also irrelevant. And it is inexcusable for executive leadership to begin a project like this, through intent or ignorance of This Law, with such high-level opposition.

Just as bad, however, is the unwillingness to acknowledge that the project is in trouble, but rather relying on the "Failure is not an option" method of resuscitating a project. Take a moment and count the number of times someone told you "Failure is not an option" or "Just make it happen" and it actually helped. Go ahead. Raise your hand if you got a number above zero. Anyone out there have their hand raised?

Didn't think so.

Attention all managers- these phrases do not solve problems, nor do they create an environment in which problems can be solved. For the love of all that is holy, stop using them.

Several reasonable suggestions are on the table. Postponement. Small-scale rollouts, Google-style. The reasons given for ignoring these options are just about the worst possible: executive-level politics.

Executive leadership enables development projects in ways that many people on the project never see. Because of that it's difficult to see when executive leadership is failing. Sadly, the results are less hidden.

Thursday, October 10, 2013

Finding Valrhona, or Habits of Effective Architects


I have very strong opinions on the subject of software, architecture, and quality in general. Coors is not beer. A Hershey bar is not chocolate. Neither Velveeta nor Kraft Singles are cheese. Starbucks does not serve anything I identify as coffee. "Cowboy coding" is not software development. This really isn't about my quirks in food quality, though. It's more a list of items I find helpful in making sure I'm helping to deliver Valrhona.

Understand what you need to deliver

Before you select technologies, before you start with design patterns, and certainly before you put hand to keyboard, make sure you understand what pain point you're relieving for your customer. Software development is about solving problems. Too many times, I see projects skip right over the problem to be solved and head straight to implementing a solution. Oftentimes, talking through the problem to be solved makes a murky solution obvious. If you're stuck, ask yourself "What problem are we solving?" If you don't know, you know what to do next.

Solve for realistic problems

If you don't need a full enterprise-y solution with distributed widgets, Something As A Service, and Fully Configurable Everything, don't build it. This is kind of an extension of the previous point, but for every architecture decision you make, ask yourself "What value is this adding to the solution?" If you don't have an answer, you don't need the Thingy.

On the other hand, understand that no software project is ever finished and that no set of requirements stays static. Especially once development starts. As Helmuth Graf von Moltke said, "No campaign plan survives first contact with the enemy". There are well established patterns for solving common problems. A thorough understanding of the Gang of Four's design patterns will go a long way in helping you avoid common pitfalls. Don't use them just to use them, but don't avoid them just because they're common. They're common for a reason.

Stop. Collaborate and Listen.

Okay, for those of you who get the reference, I apologize. Note there's no link. I don't want to infect those of you who don't get it. But, origin aside, it's good advice for the architect. Even if you're sure of yourself, get feedback. A development team is more than the sum of its parts, and several smart developers working together produce far better results than several smart developers working separately. Capitalize on this. Ask for comments and then listen to them. Especially the criticism. The worst that can happen is that you'll feel more confident in your design.

Along those lines, keep up on the current trends in software development. I'm not saying you have to be KanBanAgileScrumTDD just because others have written about how shiny they are. But you won't know how these concepts can, or can't, help if you aren't familiar with them.

Strive for Elegance, but understand what it means

To me, an "Elegant" solution is not necessarily overly-clever. It does not have to solve problems in a new way. And it certainly doesn't take Donald Knuth to understand. To me, "Elegance" makes the solution look easy. Sure, maybe you come up with a better way of solving a problem. But maybe you recognize that some techniques are "Tried and True" for a reason. Either way, your result shouldn't look like a bunch of work. It should look obvious.

Know when to say when. And when not to.

Understand that at some point in your career (or, in my case, at some point in your day), the pursuit of higher quality will conflict with the overall effort in such a way that the pursuit does more harm than good. Be able to recognize that time and let go.

Understand that at some point in your career, you will be expected to sacrifice quality for the overall effort in a manner that does more harm than good. Don't dig in. Don't get stubborn. Learn to present your case in terms that the decision makers understand. You will not always get your way, but you will become known as an asset that is always looking out for the overall project.

Note that there isn't a lot of actual code advice here. Sure, I could tell you that if you're instantiating different types of classes depending on context, consider an Object Factory or even an Abstract Factory. Or if you have a somewhat complex process that other processes interact with, or a subsystem that might change, consider a Facade. I could give you the ol' "Design from the Interface" advice or even tell you that if you find yourself considering recursive queries maybe you should step back a bit. But I think that if you really take the above to heart, everything else is just details.
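For what that one bit of code advice is worth, the Object Factory idea can be sketched in a few lines. This is a minimal illustration in Python, and the notifier classes are hypothetical, invented purely for this example:

```python
class Notifier:
    """Common interface: every notifier knows how to send a message."""
    def send(self, message: str) -> str:
        raise NotImplementedError

class EmailNotifier(Notifier):
    def send(self, message: str) -> str:
        return f"EMAIL: {message}"

class SmsNotifier(Notifier):
    def send(self, message: str) -> str:
        return f"SMS: {message}"

def notifier_factory(channel: str) -> Notifier:
    """Map context to concrete class in one place, so callers never have to."""
    notifiers = {"email": EmailNotifier, "sms": SmsNotifier}
    if channel not in notifiers:
        raise ValueError(f"Unknown channel: {channel}")
    return notifiers[channel]()

print(notifier_factory("email").send("hello"))  # EMAIL: hello
```

The point is that the "which class do I build?" decision lives in exactly one place; the rest of the code works against the interface and never cares which concrete type it got.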

Wednesday, October 2, 2013

Is Open Source Any Help

"Courage is what it takes to stand up and speak; courage is also what it takes to sit down and listen." --Winston Churchill

I've had some thoughts about Open Source Software percolating for some time now. Before getting to them, though, I want to lay out what Open Source is, especially for the non-programmers reading this. As a programmer myself, the concept of source code is so deeply embedded in what I do, it's a little like trying to explain "air". So I turned to Google. And while the linked search brings back a lot of sites that offer good definitions, I like the one returned by Google at the top of the page. "A text listing of commands to be compiled or assembled into an executable computer program." "Open Source" is, then, software that makes the source code publicly available. Some Open Source Software is free, some is not. Some allow others to make changes to the source code; some do not. But, by definition, all Open Source Software makes the code available for public perusal.

Part of the reason I'm thinking about it is due to an opinion piece written by Free Software advocate Richard Stallman. Now, personally, I think Stallman relies too heavily on hysteria and exaggeration to make his points, but agree or disagree, this piece got me thinking. His point is that all software should be Free and Open Source Software (FOSS). "Free", in this case, doesn't refer to the cost to use, but rather the freedom for others to use the source of a piece of software for their own purposes.

The other thing that got me thinking was Healthcare.gov making all of their code Open Source and even freely available for others to use. It's a bold step, darn near unprecedented.

To start out my thought process, I'm going to quote, verbatim, my comment on +Matt Quackenbush's post, where he linked the article. While my opinion has wandered a bit, I want to start out with an accurate historical record of where I started.

Yay. Another Stallman rant. SIGH

Okay, here's the thing that Stallman either doesn't realize or just doesn't care about.

To most people, source code is useless. Even to most programmers. He wants to liken code to language? Then source code is a book. For any important software, reading it would be akin to reading the entire Encyclopedia Britannica. Doable, even informative. But hardly a priority, and rarely worth the effort.

So what you end up with is similar to RMS' beef with Ubuntu's Amazon search. You end up with one batch of so-called experts yelling "IT'S DANGEROUS" and another yelling "NO IT'S NOT!". For everyone else, all that can be done is decide who is more credible: someone known for blatant histrionics and exaggeration or a company looking to defend their product. The truth is likely somewhere in between and likely more shaded than either side wants to admit. But how am I to know?
My point was that making a software's source freely available is only useful to those with the ability to read the code and the willingness and ability to devote the time to really understanding it. Any programmer reading this understands the difficulty of this, but for non-programmers it's important to point out that when we programmers understand software, we have to hold the entirety of the code in our head. Paul Graham explained this in the best way I've ever seen, and I encourage all to read this essay of his. My point here is that even for an experienced developer, holding a significant part of, say, Windows 7 or Adobe Acrobat, represents a serious amount of time and effort. Time and effort that is often better spent elsewhere.

So where does that leave the rest of us? Before I answer that, let me pose a few questions. Raise your hand if you're a trained and educated Climatologist. Figuratively, but literally if you wish. For those of you with neither your figurative nor literal hand in the air, how much do you know, of your own experience and knowledge, on the issue of climate change? Have you done an exhaustive reading of the current research? If you have, how much of it do you understand? What about peer reviews of research on both sides? In other words, what do you really know? If you're like me, darn little.

That leaves us in the position of judging credibility based on... Well, based on what? Largely, I suspect, our own biases. Likely a bit of, "Well, that just makes sense to me", which is a terrible way to judge scientific research, since results can often defy what we would see as "common sense results". Please note- this is not about what you believe to be true. It's about the fact that you believe something to be true, rather than know something to be true.

The same is true of FOSS. Personally, I don't have the time to put in a serious review of the Ubuntu operating system. On a practical level, it doesn't matter to me if it is FOSS or closed source. All I can do is try to critically judge the credibility of people who have and decide who I believe. I can read people's opinions, although I freely admit that it is not practical to do an exhaustive enough study to give me enough information to make an accurate determination of the credibility of the people involved. And this is coming from someone that understands the issue here. Those that have chosen other paths in life don't even have that advantage.

That's where I was as of yesterday. Today, I read something that extended my thoughts on the matter, namely the fact that HealthCare.gov has embraced FOSS to the point of making their back-end interfaces, called APIs (Application Programming Interfaces), available to those that want to use them. HealthCare.gov is not an exception to the issue I outlined above. However, when I read their statement I found myself thinking "It takes real courage to put yourself under that kind of microscope". People will be reading their source code, people will be judging not only its quality but what it actually does. So kudos to you, HealthCare.gov. Not for finding a way to make your source truly open to all, but for having the courage to stand up and be judged by those that can and will. Sometimes courage is its own reward, and I hope it is for you guys.

Tuesday, September 3, 2013

Tips for Standing Out

"If you don't get noticed, you don't have anything. You just have to be noticed, but the art is in getting noticed naturally, without screaming or without tricks." --Leo Burnett

Recently, I have been going through our family digital pictures. There's quite a lot of them, and I'm afraid that I haven't been great about any sort of categorizing or even avoiding duplicates. So, with literally thousands of digital pictures, I'm sorting, organizing, and moving them to cloud storage. In going through them, I started wondering. I had just gone through 30+ pictures of dolphins at a zoo and was currently looking at almost fifty pictures of a gift exchange at Christmas (more pictures than there appeared to be attendees), and I started considering the value of an individual photo in the age of ubiquitous digital cameras.

Sure- I could have organized the images in a given folder by general context and then subordered the images within a context by general worth. Color and lighting quality, view of the subject of the photo, etc. Establishing a baseline to determine which photos are worth keeping and which are not. And after a great deal of time and effort, I could have come up with the absolute best images to keep.

I didn't. I deleted something like 80 images because after picking out a few that represented the scene or event, even going through the rest of them to see if there were any other good images simply wasn't worth the time. The value of each individual image was very low. If it didn't immediately stick out as worth keeping, it wasn't even worth the time to look further at the image.

It shouldn't be difficult to see where this is going, but let's keep moving, shall we?

So how do we, as developers, stand out from the crowd? It's tempting to believe that consistently delivering quality work will do this, but let's face facts. In this world of development teams, project teams, and managers that have so much on their plates that things just naturally fall through the cracks, this simply isn't true. Nope- not even for you. (Mostly directed at 10 Years Ago Matt, who honestly believed just this.)

So what helps people stand out? When a manager thinks of your department or wants to assign a person to a task, what can make you jump into mind?

How to Stand Out


  1. Don't be afraid to pitch ideas. Don't come off as critical and certainly make sure you aren't taking time away from something else. But don't be afraid to try. Ideas about code are best communicated through code, so don't rely on the clumsiness of the spoken word. Build a demo. Cite sources. Make a pitch.
  2. Talk to people. When you talk informally with colleagues or even, if your organization allows this, superiors, you have an opportunity to discuss ideas outside the delicate context of "Here's a change I think we need to make". This gives a degree of safety to the discussion- you're not proposing invasive changes, you're talking shop. You're exchanging ideas rather than making a pitch. Again- don't be critical and don't be a pest. But participate in the developer community of your office. People remember contributors. "Head down, mouth shut" often also means "forgotten".
  3. Don't be afraid to ask for things. Is there a project you want to be a part of? Is there another role you'd like to fill? Ask. Maybe the decision maker will agree with you and maybe not. But your odds of getting what you want increase sharply when you communicate what you want.
  4. In order to do #3, you really need to take this step. Be honest with yourself on what you want and what you can do. A central theme to everything I've said here is that you cannot properly use something you don't know and understand. That includes you. Want to become a Development Lead? Understand, then, your leadership abilities and how you can best display the skills necessary for the job. Asking is important, but if you can't show that you fit what you want, then you won't get what you want.
  5. Be the guy that asks questions and offers solutions. Everyone can criticize. Even if they're not being critical, any good developer can summarize a problem. The trick is to be the one trying to help reach a solution.

How to NOT Stand Out

  1. Brag. We've all created elegant code and we've all come up with clever solutions. And we all like to talk about them. But there's a line between talking about things you've done and bragging. If you never talk about your accomplishments, chances are that no one will recognize them. If you brag, chances are no one will care.
  2. Butt in. Again, there's a fine line between offering help and butting in. Chances are, if you're the guy that always has a better solution and can't let even the smallest issue go by without comment, you've crossed that line. This, too, will make you stand out.
  3. Criticize. You want to offer solutions because you think you can help. Great. But if you want to offer solutions because you think the current implementation is bad/stupid/incompetent then you'll definitely stand out from the crowd. Just not the way you want.
Let's face it. There are a lot of developers out there. There are even a lot of good developers who deserve to stand out. But it's not the job of other people to notice you. It's your job to be noticed for adding worth to what you do.

Tuesday, August 27, 2013

Why Bad Code Won't Die


Let's face it. Code goes bad. Maybe, due to technical debt, it started out bad and now it's time to pay the note on your debt. Maybe, due to a new version of your software platform, there's a better way of handling what your application is meant to handle. And maybe your application is now handling situations that were not taken into consideration during development, whether anticipating them was reasonable or not. Whatever the reason, what was once perfectly acceptable code often becomes obsolete.

The challenge isn't writing code that won't go bad. You do your best, with the realization that it might just happen anyway. The real challenge is getting the situation resolved, because you can't just charge in and make changes- at least, I hope you can't. It's a sad fact of the developer's life that projects like this may be necessary, but are often rejected, leading to frustration and a lack of confidence in leadership.

It's a common failing in communicating with others. You assume that things that make sense to you make sense to others and that things that have value to you have value to others. It's human nature, really, to assume that the context in which you view life is the context in which others do, too. Because improving the quality of your code is important to you, makes sense to you, and is just intuitively obvious to you, you tend to assume that it is to others. Then you construct your communication, consciously or not, along that assumption.

The problem is that most development managers and just about everyone higher than that on the corporate totem pole view project priority in a completely different context than developers do. And since they hold the final decision on whether or not a development project gets green-lit, it's critical to understand why they say "Yes" to projects, why they say "No", and the thought process that goes into that decision. If you can tailor your pitch to the way upper management thinks then you can at least communicate your need in a way they understand. This, of course, doesn't guarantee anything. But you at least avoid getting in your own way.

Developers tend to consider a well structured application as the end goal in a project and as the motivation for taking, or not taking, action. Which is good- that's what they're there for. The problem is that upper IT management rarely, if ever, evaluates projects through that lens. To them, risk is the key decision point. Not just the risk of breaking something else due to the changes, either. Risk to the project schedule as a whole. Risk involved in an application changing- even if it's for the better. Users get used to the way an application works and there's often resistance to change. The unfortunate truth is that, in the mind of an upper manager, "This is better code" never trumps "This is currently working". Attempting to approach a "Code Improvement" project from a technical point of view will rarely work. In fact, it has never worked for me. Not once.

The first thing you need to do is explain the necessity for the change from the point of view of what is important to upper management. If you can't do that, learn to suffer in silence because this change will go nowhere. Since upper management's priority is a low risk of interrupting business processes, it is critical to show that not making the desired changes will lead to a greater risk of interruption than making them. Risk of application failure is a great thing to focus on. So is the inability to support known, or likely, business needs in the future. If the current state of the code is no longer performing well due to increased load, point out the trend of increasing load and try to project a timeframe for when the application will no longer function. Are there upcoming needs that either can't be supported given the current state of the application or can be implemented in a greatly reduced time given a change? Have hard numbers and facts. State the known needs that can't be implemented and how that impacts the business. Show how a project can be completed more quickly if the desired change is done now. Remember- upper management cares about how well the IT department, as a whole, can support business needs as a whole. Present your case in terms that are important to your audience.

Second, have a plan. Don't just outline a problem and dump the mess in someone else's lap. Explain how the problem can be fixed. Go on from there to how long you think it will take- remember, your audience is thinking in terms of a schedule of many projects. Scheduling an unanticipated project impacts the rest of the project calendar. Make sure your audience understands the impact and that the impact is worth it. Then outline the risks. This is important because as soon as you give your opening argument, your audience has already started thinking in terms of risk. Then present how you will handle that risk. Will other applications be affected? How will you minimize the impact on those applications? Will this change business processes? How can IT work with the affected business units to make sure they understand how to interact with the changes? What happens if something goes horribly, horribly wrong? What failure points will you be looking for and what will you do if they do fail? How will you minimize the impact of a massive failure on the business, how will you fix the failure, and how much effort might it take? How will the project calendar be affected, what projects may have to be put off in order to do this, and what can be done to minimize the overall delay?

None of this, of course, guarantees success. I've made pitches like this and heard the equivalent of "I understand, but we want to focus on new feature development". Or, "I'll take a closer look at this and see if we have time on the project calendar." Which is often a long way around of saying "No". Sometimes upper management simply isn't interested or just doesn't want to accept the risk. This happens. But management is never willing to accept risk they don't understand or change that doesn't seem beneficial. It's your job to communicate the need to fix bad code in a way that the decision makers will consider a high priority. If you understand how the decision makers think, you understand how to communicate with them.

Which, if you think about it, goes for life in general as well.

Tuesday, August 13, 2013

Best Practices


"The cart before the horse is neither beautiful nor useful." --Henry David Thoreau

You're doing account signup forms wrong.

Okay, I should say "If you're doing account signup forms, you're doing them wrong." and while logically (if not grammatically) more accurate, I thought it made less of an impact as an opening statement.

The problems with most signup forms are myriad. Why do sites force you to enter your email address twice? Mobile platforms have figured out that masked password input boxes aren't always necessary, and give you either an option to turn it off or give limited clear-text viewing of your input. Websites haven't gotten the memo. And if I see one more site serving up a picture of what can only be described as a dust storm and telling me to type in the letters in the box to prove that I'm human, I'm going to flip and write a blog post about it. Just. Stop. It.

Now, as I hope I've made clear in my writing, I don't care about forms, CAPTCHA, or even UX issues. Or rather, I care about them only in the context of the thought process that went into them, and that's where account signup (and often account management) processes fall flat. It's due to a very insidious concept known as "Best Practices".

I hate best practices. As soon as that phrase is first used in a requirements gathering meeting, I step on it like I would a roach. "Best Practices" used to refer to processes that the industry had adopted, formally or not, as the best way known at the time to approach a problem. That, I have no problem with. What I have a problem with is the fact that "Best Practices" doesn't mean that anymore in software development. Anymore, it means "What is everyone else doing?" Which leads to lazy planning. Which leads to bad results. And I really hate bad results.

"Best Practices" are insidious. Because it's assumed that these practices are used because they're the best way of approaching a problem, people stop thinking about solutions as they apply to their specific needs. Any time you start implementing solutions without considering whether or not that solution is actually a solution to a problem you have, you have at best added unnecessary complexity to your project. At worst, you end up implementing code that hurts you in the long run.

Worse, though, is that no one ever advances that body of knowledge when "best practices" are applied blindly. Since everyone is using the same solutions as everyone else, no one thinks up new ways of solving problems. In the UX arena, this results in carbon-copy sites that don't stand out and present all the same inconveniences that the other sites present. In the web security arena, it's even worse because you are implementing the worst kind of security: the kind that makes you feel secure without necessarily offering any concrete benefits. Since the "security" principles that are labeled "Best Practices" are applied blindly, you don't know if the implemented solution solves the problem at hand, much less whether or not the problem at hand is one that needs solving in your context.

As an example, let's take password masking. Many mobile devices briefly show in clear text the latest character typed into a password box. This is not new. In fact, I've read people claim that Apple innovated that idea for the iPhone, but my Palm 600 did that. This idea is ten years old and the web world still hasn't caught on. It's become an expected inconvenience because everyone is looking at everyone else's paired "Enter Password/Reenter Password" boxes. Even worse, this is billed as a security measure without asking whether or not this is a *necessary* security measure. Does it really hurt anything if a site offered a way of viewing your password in clear text, thus avoiding the usual paired password box routine?

Am I saying that these measures are all unnecessary? Absolutely not. Except when they are. And if requirements are being gathered in the context of your needs and your solution, then it becomes obvious what you do and don't need. The problem is that "Best Practices" are applied backwards. The solution is selected and the problem lies unexamined. Software development is merely implemented problem solving. You can't solve a problem you have not examined.

Wednesday, August 7, 2013

What's the point of an architect


"I know I can't do everything myself. So I know I specialize in my melodies and I do some of my demo work. I pass it on to my producers who are much better at the production level." --Paul Taylor


I'm not asking why we need software architecture. Anyone reading this knows why. Ensuring that standards are met. Ensuring extensibility and code maintainability. Making sure that the design patterns used are necessary and that the necessary patterns are used. Choosing the proper technology and its use. This is not in question. I'm asking, "What's the point of an architect?" What do I bring to the table to justify my presence on the project, and indeed the salary and benefits I draw to do what I do, and only what I do? Couldn't all this be done by a senior developer or a development lead? In short, what is the value of me?

Paul Graham once wrote that a software developer must hold a program in his head:
"A good programmer working intensively on his own code can hold it in his mind the way a mathematician holds a problem he's working on. Mathematicians don't answer questions by working them out on paper the way schoolchildren are taught to. They do more in their heads: they try to understand a problem space well enough that they can walk around it the way you can walk around the memory of the house you grew up in. At its best programming is the same. You hold the whole program in your head, and you can manipulate it at will."

This isn't to imply that programmers are necessarily smarter than average. It does mean, however, that we need to be able to visualize abstract concepts in our minds. More important than how well we think is how we think. The problem is that applications are becoming more and more complicated. N-Tier applications. Web services. Multiple platforms. And with more and more to keep track of, the percentage of the code space as a whole that a developer can hold in his head becomes less and less.

That's where an architect comes into play. More and more I find myself not holding an application's problem space or code space in my head. I find myself holding the project problem space and its place in the enterprise in my head. Where the developers focus on visualizing code, I focus on visualizing the enterprise as a whole to help the applications fit into that space. 

Consider the evolutionary development of a surgical team. At one point in time all surgery took was a guy with a sharp knife and an understanding of the human body, such as it was at the time. Any modern surgeon would be horrified at the prospect of an operating theater running in such a manner. Now it takes surgeons- maybe more than one. It takes an anesthesiologist and nurses. It takes someone to coordinate the surgeries so that, as much as is possible, there are surgical resources available for high priority cases. Modern surgery is complicated and beyond what any one person, no matter how talented, can manage.

So, too, goes modern application development. It takes developers. It takes senior developers or dev leads- sometimes both- to manage and coordinate development activities and even to mentor more junior developers. It takes DBAs and perhaps other SMEs (Subject Matter Experts) to lend specialized knowledge in areas the development team may not have. And it takes architects to make sure that the application has its proper place in the enterprise, making proper use of existing tools, and ensuring that the application being built will work in other contexts, if applicable. And just as the modern surgeon would be horrified at the prospect of the old west surgeon giving a patient a bottle of whiskey before digging out a bullet, I'm horrified at the idea of "One Project, One Developer". Good software development is more complicated than that, no matter how talented the developer.

As any regular readers should know (Thanks, guys!), I ask "What is this?" and "Why is this?" a lot and don't take a lot for granted. Questioning my role in a project- and again, not the role of architecture but the role of the architect itself- makes me more focused on what I should be doing, rather than what I may want to be doing. Because let's face it. I came from software development and at heart I will probably always be a software developer. But after asking "What?" and "Why?", I had an interesting conversation with the tech lead on my project. The specifics of the conversation are less important than the conclusion. I said (roughly)

"I'm starting to get that itch in the back of my architect's neck that means I'm getting in your way. You know the direction we need to take, and you know why. So I'm going to step out of your way and let you get it done as you know best."

Thursday, August 1, 2013

Painting Fences


Imagine this exchange, if you will:

Homeowner: We need to paint the outside of our fence white.
Contractor: No problem.

(2 weeks later)

Homeowner: I know we asked you to paint our fence white, but now we need it painted red.
Contractor: You don't like the white?
Homeowner: It's not working for us, and we'd like to try red.
Contractor: No problem.

(2 weeks later)

Homeowner: Okay, sorry about this but now we want our fence green.
Contractor: Sure, I can do that, but I have to ask. Why the color changes?
Homeowner: The neighbor's dogs bark all night and we're trying to find a fence color that calms them down.

Silly, isn't it? Ridiculous, in fact. And yet, a conversation in which I've participated more than a few times in my professional career. It stems from the business side of a software development project telling the development team what they want, which should be avoided at all costs. Yes. I just said that. The business shouldn't be allowed to tell development what they want. The reasoning is fairly simple, too. They don't know what they want and even if they did, they don't have the technical vocabulary to communicate it.

What the business does know is what it needs. There's a pain point, a failing, an inefficiency, or a breakage somewhere that needs to be resolved. That's what they should be talking to you about. They need to show what the process is now, what they don't like about it, and what it should look like when the problem has been solved.

There's a subtle difference between business needs as requirements and implementation details as requirements. Subtle enough to go unnoticed sometimes. Subtle enough to even seem reasonable. It seems almost reasonable to say "The account creation form needs to validate the format of the email address." Or "I want to be able to delete accounts". But it's like two lines that aren't quite parallel. Extend them out for a long enough distance and they end up in very different places.

Let's take a look at the two examples above. Validating an email address seems reasonable. Necessary, even. After all, you don't want users accidentally putting in a bad email address, right? If we're going to have any success at all, we're going to need a regex. Something along the lines of:
bool isEmailValid = Regex.Match(emailAddress, RegexString).Success;
Problem is, writing that regex is harder than it looks. The bigger problem is that it likely doesn't solve the underlying issue. Note that I said "likely", because we don't actually know what the real issue is. All we know is that we need to validate an email address. There's no mention of why. But if the underlying issue is making sure that the user enters a valid email address, which seems reasonable, then this doesn't actually solve the problem. It's very easy to write a regex (okay, let's face it: search for a regex and copy/paste it; c'mon, you know you do it) that doesn't properly validate an email address, and prohibitively expensive to write one that does.
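To make that concrete, here's a minimal sketch (Python rather than C#, and the pattern is a typical copy/pasted one, not a recommendation) showing how a plausible-looking regex fails in both directions at once:

```python
import re

# A typical copy/pasted "email regex" -- plausible-looking, but wrong.
EMAIL_PATTERN = r'^[A-Za-z0-9._]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$'

def is_email_valid(email):
    return re.match(EMAIL_PATTERN, email) is not None

print(is_email_valid("alice@example.com"))     # an ordinary address passes
print(is_email_valid("user+tag@example.com"))  # a perfectly valid address is rejected
print(is_email_valid("..@example.com"))        # a malformed address is accepted
```

The pattern rejects the legal `+` tag syntax while happily accepting a local part of nothing but dots, which is exactly the "validates malformed addresses or rejects valid ones" trap described above.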

This solution falls even further short of the actual need if the need is to make sure that the user enters a valid address that they have access to. Which may be what the business meant. It also may not have been what they meant, but they would have agreed this was the proper direction if they had communicated their need, giving the architecture team the chance to respond and ask questions. So, thanks to a solution disguised as a requirement, we have a solution that will likely either validate malformed email addresses or reject valid addresses, and does nothing to make sure the user has access to the email address. So what, in fact, have we actually accomplished?
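If the real need is proving the user controls the address, the usual answer is a confirmation link, not a smarter regex. Here's a minimal sketch of that flow (Python, with hypothetical in-memory stores standing in for a database and the actual email send omitted):

```python
import secrets

def create_confirmation(email, pending):
    """Generate a one-time token and record it; a real app would now
    email the user a link containing the token."""
    token = secrets.token_urlsafe(32)
    pending[token] = email
    return token

def confirm(token, pending, verified):
    """Called when the user clicks the link; success proves they can
    read mail sent to that address."""
    email = pending.pop(token, None)
    if email is None:
        return False  # unknown or already-used token
    verified.add(email)
    return True
```

Note that this sidesteps the regex problem entirely: an address the user can't receive mail at never gets confirmed, no matter how well-formed it looks.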

The second example seems pretty reasonable, too. After all, do we really want dead accounts lying around? And besides, what if there's a user we don't want accessing their account anymore? It's not like this will be an everyday thing, but just in case. The problem here isn't whether a solution can be implemented; it's the repercussions that aren't being considered. What do we do with the order history of a deleted account? If we keep it, to what do we associate the orders? If we delete it, how do we explain the discrepancies in financial reporting? Or inventory levels vs. newly adjusted sales numbers? What if you need to process a refund? Good grief, what if you need to undelete the account? This requirement is like an early-'80s Buick: as soon as you fix something, you uncover two more problems.
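One common way out of those repercussions is a soft delete: flag the account inactive instead of destroying the row, so order history, refunds, and undeletes all remain possible. A minimal sketch (Python, names invented for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class Account:
    email: str
    active: bool = True
    orders: list = field(default_factory=list)

def deactivate(account: Account) -> None:
    # "Delete" by flagging: the account can no longer be used, but its
    # order history survives for reporting, refunds, and reconciliation.
    account.active = False

def reactivate(account: Account) -> None:
    # The dreaded undelete becomes trivial.
    account.active = True
```

Whether soft delete is the right answer depends on the actual need, which is exactly the point: "I want to delete accounts" is a solution, and the need behind it has to surface before anyone can evaluate it.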

There's no easy solution to the problem. The business side needs to be careful to communicate needs. User stories are supposed to help with this, but they only help if used properly. Here's where a good BA will come into play. A good BA can make sure that user stories aren't reduced to endless copies of "As the product owner, I want {technical solution} so that the product works properly". The three parts of the story are necessary because, when used correctly, they define a need instead of a solution. And the architect needs to ask himself, for every requirement, "What is the need behind this?" If the answer isn't immediately obvious, there's a good chance that you're dealing with a solution disguised as a requirement.


(Ed. note: If the customer requesting the fence painting is Mr. Miyagi, just do it.)

Monday, July 29, 2013

Clay Pots

"Perfection is not attainable, but if we chase perfection we can catch excellence." --Vince Lombardi


I tried to find a link to the experiment, but could not. Perhaps it's allegorical. However, I read once about an experiment done by a pottery teacher. She divided the class into two teams. She told the first team to make the perfect clay pot. She told the second team simply to make as many clay pots as they could. At the end of the experiment, the perfect clay pot was indeed made, but not by team one. As it turned out, the constant iterative practice of team two trumped the careful work of team one.




NOTE: Thank you to +Dave Aronson for pointing me to the link I couldn't find!
http://kk.org/cooltools/archives/000216
He also has some thoughts on the subject at http://www.dare2xl.com/2010/08/just-do-it.html

This is not an article about getting better at software development the more you develop software. If you’re reading this, you already know that. No, this is about software architecture and building the perfect design. Which, as I explained in my first article, doesn't exist anyway.

Every software developer/architect/engineer/whatever I've worked with has shared a couple of characteristics: they want to get their work done right, and they take time to think through what they're doing before they do it. Both are commendable. The problem comes when this leads to analysis paralysis, when the process of thinking things through in order to make the perfect design deadlocks the developer and he can't move on.

When that happens to you, remember the clay pots.

Agile methodologies such as Scrum and XP were developed, in part, to avoid analysis paralysis at the project level. With a focus on action and testing the results, agile methodologies seek to create the perfect pot by creating pots until they get it right. As it turns out, this technique works just as well at the individual level.

Sometimes the best way to break through design indecision is to just start writing it. Build the class stubs, make them interact, and build unit tests around them. How well does it work? How badly doesn't it work? Then consider what worked, what didn't, refine your ideas and start over. Wash, rinse, and repeat until you’re happy. Or at least satisfied. Or at least still on this side of “I’m so frustrated I’m about to throw my laptop through a window”. Seeing how the design plays out and forcing yourself to refine and retest can often lead to better results than trying to think through every detail in advance so that you create the "perfect" design the first time.

Don’t get me wrong. I’m not advocating against careful thought. I’m not saying “Don’t plan” or “Don’t think”. And I'm certainly not saying you should just throw code against the wall until you get something that looks workable.

Consider the TV show "House". Dr. House's belief that there is one absolutely right way of handling a problem is completely detrimental to software development. But one of the few things on which I agree with Dr. House *in practice* is his insistence on thinking through a problem before acting on it. And if you remember the series, he follows the clay pot model. Think. Do. Refine. Think again. Continue until done. You won't get it right the first time, and you should be very suspicious if you do, so don't grind on it. I also love his attitude that making mistakes is expected. No one cares, as long as your end result is solid.

Here's the thing I tell architects and developers alike: there are no points for style. No one is counting code check-ins, no one is counting compilations, no one is counting design iterations, and no one cares, as long as the end product is a good one. Until then, if you have to slam it in with your knees, do so.

Often, you don't know what works until you've seen something that does not.

Thursday, July 25, 2013

Layering It On

"Once you know who you are, you know what you have to do." --Steve Perry, The 97th Step


We all know that building your application in layers is important. Portability, separation of concerns, extensibility, and blog articles all depend heavily on proper application layers. The problem I see isn't a lack of understanding of that importance, or disagreement with it. The problem I consistently see is people not understanding how to layer their applications. Part of this is, of course, practice. My first attempt at building an application with a 3-tier architecture was an epic disaster that would have made the Titanic step back and say, "DAMN, I thought this was bad." My second one was also pretty terrible. But better than the first.

Practice becomes easier with understanding, though. Tips, circumstances, and examples are all limited in scope, in that they only give you a small slice of the whole picture. But once you understand what a layer is, why it's important, and how to look at it, the rest is just reinforcing your understanding with practical experience. As any regular readers will know (have I been around long enough to have regular readers?), I see software architecture as applied philosophy. I know I've used this one already, but:


“This, what is it in itself, and by itself, according to its proper constitution? What is the substance of it? What is the matter, or proper use? What is the form, or efficient cause? What is it for in this world, and how long will it abide? Thus must thou examine all things that present themselves unto thee.” --Meditations, Marcus Aurelius


I originally used this in understanding classes and what they properly do, but it applies to application structure as well. Once you understand what something is, be it a class, a layer, a carburetor, or a hammer, you know what to do with it. So let's take a pretty typical application stack: Presentation, Controller, Model, and Persistence. We start by considering each layer as a real-world entity, with things it knows about, actions it knows how to take, and actions it does not know how to take. Then we ask ourselves Aurelius' questions about these entities.


Presentation

What is a Presentation layer, in itself and by itself? Not to put too fine a point on it, but the presentation layer presents data, both to the end user and to the model. That’s what it knows how to do. It knows how to arrange data in a way that makes your application usable and useful. It knows how to sort and filter data so the user can get to the important data without wading through the unimportant data. 

Is your presentation layer interpreting data for any reason other than how to properly display it or properly send it to the lower reaches of the application? Then your presentation layer is doing something it doesn't know how to do.

Controller

Of all the application layers, I've seen more misunderstanding about the Controller than about any other. And this is a prime example of why understanding needs to come first, because this one is easy to get wrong if you don't understand it. The Controller is a switchboard operator. Okay, there are plenty of more recent comparisons that are just as good, but I'm going with switchboard operator. The controller routes requests from one place to another, and that's it. It knows where a request came from, and based on that, it knows where the request goes next. A controller that routes the request to different receivers based on conditional logic over the data itself is interpreting and attaching meaning to the data. It doesn't know how to do that.

Model

In and of itself, what is a Model layer? What's its purpose? The model knows what the data means, how it should be interpreted, and how it should be used. Which is, admittedly, the meat of the application, but there are a few things this layer doesn't do as part of its purpose. It doesn't know where data comes from. It doesn't know where data goes when the model is done doing what it does. In this way, it's a lot like an assembly-line worker. A widget shows up and the model performs a task on it. Then the widget moves on. Where it came from and where it goes next are not important. The task performed is the only thing that is.

Persistence

What is the form or efficient cause of the Persistence layer? Sure, this layer interacts with data, but the question is "What is the... *efficient* cause?" In its most efficient form, the persistence layer retrieves the data it's asked for and stores the data it's told to. It doesn't know how to do anything else. If, for instance, you've asked your persistence layer to tell the model whether the correct data has been retrieved, then you're asking your persistence layer for something it doesn't know how to do. If, as is common, you're asking your persistence layer to know whether or not data is correct before storing it, then you are also asking it for something it doesn't know how to do.


Although this becomes much easier with practice, the underlying key to application layering is knowing what you want your layer to do, and making sure that it doesn't do anything else. Thinking about your application layers as specialists helps greatly in keeping in mind what they should, and shouldn't, be doing. You don’t call your pediatrician when your car dies and you don’t call a ticket box office when your roof leaks. Don’t call a model layer when you need to know how to display data.
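To pull the four layers together, here's a deliberately tiny sketch (Python, every name invented for illustration) in which each layer does only what it knows how to do:

```python
# A minimal sketch of the four layers, each a "specialist".

class Persistence:
    """Stores and retrieves data; attaches no meaning to it."""
    def __init__(self):
        self._rows = {}

    def save(self, key, value):
        self._rows[key] = value

    def load(self, key):
        return self._rows.get(key)


class Model:
    """Knows what the data means; not where it comes from or goes."""
    def __init__(self, persistence):
        self._persistence = persistence

    def register(self, username):
        self._persistence.save(username, {"username": username, "active": True})

    def lookup(self, username):
        return self._persistence.load(username)


class Controller:
    """Routes requests to the model; no business logic of its own."""
    def __init__(self, model):
        self._model = model

    def handle(self, action, username):
        if action == "register":
            return self._model.register(username)
        if action == "lookup":
            return self._model.lookup(username)
        raise ValueError(f"unknown action: {action}")


class Presentation:
    """Arranges data for display; interprets nothing else about it."""
    def render(self, record):
        return "not found" if record is None else f"User: {record['username']}"
```

Notice what's absent: the controller never inspects the record, the model never knows it's backed by a dictionary, and the presentation layer never decides whether a user is valid. Each specialist stays in its lane.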