
Friday, May 11, 2018

The Binary Fallacy

Two roads diverged in a yellow wood,
And sorry I could not travel both
And be one traveler, long I stood
And looked down one as far as I could
To where it bent in the undergrowth;
-Robert Frost, The Road Not Taken

Binary Thinking is the process of thinking of things in terms of two opposites: you can either do A or its opposite B, and that's it. In Neuro-Linguistic Programming, it's a technique for manipulating someone into making the decision you want by framing the choice as either the thing you want them to do or its opposite, presented as a harmful choice. In Cognitive Behavioral Therapy, it's a pattern to be broken to help people gain more control over their lives. In Software Architecture, it's a warning sign.



If stuck between two different ways of designing something, the answer is always door number three.

Why do humans think like this? Why didn't Frost just keep walking straight through the woods and not worry about the roads? Why didn't The Clash stop worrying about staying and going and just Rock the Casbah? I don't really know. I'm not a psychologist. I'm just a software architect who's familiar with Analysis Paralysis: the phenomenon of getting nothing whatsoever done because you're locked up in the decision-making process. And after years of trying to figure out how to avoid this, I came to one inescapable conclusion.

When you're in analysis paralysis it's because every path you're considering is wrong

In my opinion, this happens when you realize deep down that you're looking at the problem wrong and considering bad options. You can't decide because you know your options are both bad.

So what do you do? Well, that's the tricky part, and adages about in which part of the box to think aren't as helpful as people who use them think they are. If thinking of something new were as easy as the realization that we need something new, then Elon Musk wouldn't be quite the icon he's become. The important question is HOW? First, you just need to stop and accept the fact that you need to come up with something different. Your first decisions were wrong and you need to set them aside. Don't go back to them. Then I recommend you look at the great Stoic thinkers. You start by asking "What is this and what is it for?" I often tell developers to answer those two questions for every application, for every class they build, for every database and table they create. And you don't build the thing in question until you can answer those questions.

Then you ask if a thing is needed or just wanted. If it's not something you need, based on the answers to the above questions, then you leave it out. You strip away the irrelevant. Often I find myself locked up because I can't find a clean way of integrating the irrelevant into my design. Once I stop and realize the piece that doesn't fit doesn't need to fit, things get easier.

I've often said that software development is 90% thinking and 10% typing. If you understand how to think clearly, how to organize your thoughts, and how to tell the difference between the necessary and the unnecessary then there's little that can keep you from being a great software developer.

Thursday, October 10, 2013

Finding Valrhona, or Habits of Effective Architects


I have very strong opinions on the subject of software, architecture, and quality in general. Coors is not beer. A Hershey bar is not chocolate. Neither Velveeta nor Kraft Singles are cheese. Starbucks does not serve anything I identify as coffee. "Cowboy Coding" is not software development. This really isn't about my quirks in food quality, though. It's more a list of items that I find helpful in making sure I'm helping to deliver Valrhona.

Understand what you need to deliver

Before you select technologies, before you start with the design patterns, and certainly before you put hand to keyboard, make sure you understand what pain point you're relieving for your customer. Software development is about solving problems. So many times, I see projects skipping right over the problem to be solved and heading right to implementing a solution. Oftentimes, talking through the problem to be solved makes a murky solution obvious. If you're stuck, ask yourself "What problem are we solving?" If you don't know, you know what to do next.

Solve for realistic problems

If you don't need a full enterprise-y solution with distributed widgets, Something As A Service, and Fully Configurable Everything, don't build it. This is kind of an extension of the previous point, but for every architecture decision you make, ask yourself "What value is this adding to the solution?" If you don't have an answer, then you don't need the Thingy.

On the other hand, understand that no software project is ever finished and that no set of requirements stays static. Especially once development starts. As Helmuth Graf von Moltke said, "No campaign plan survives first contact with the enemy". There are well-established patterns for solving common problems. A thorough understanding of the Gang of Four's design patterns will go a long way in helping you avoid common pitfalls. Don't use them just to use them, but don't avoid them just because they're common. They're common for a reason.
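To make that concrete, here's a minimal sketch of one of those patterns, Strategy, in Python. The shipping example and every name in it are mine, invented purely for illustration; the point is only that interchangeable algorithms hide behind one interface, so the calling code never changes when a new one shows up.

```python
from abc import ABC, abstractmethod

class ShippingStrategy(ABC):
    """One interchangeable algorithm for computing shipping cost."""
    @abstractmethod
    def cost(self, weight_kg: float) -> float: ...

class FlatRate(ShippingStrategy):
    def cost(self, weight_kg: float) -> float:
        return 5.00  # same price regardless of weight

class PerKilo(ShippingStrategy):
    def cost(self, weight_kg: float) -> float:
        return 1.25 * weight_kg  # price scales with weight

def quote(strategy: ShippingStrategy, weight_kg: float) -> float:
    # The caller picks the algorithm; this quoting code never changes.
    return strategy.cost(weight_kg)
```

Adding a third pricing scheme means adding a third subclass, and `quote` stays untouched. That's the whole trick.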

Stop. Collaborate and Listen.

Okay, for those of you who get the reference, I apologize. Note there's no link. I don't want to infect those of you who don't get it. But, origin aside, it's good advice for the architect. Even if you're sure of yourself, get feedback. A development team is more than the sum of its parts, and several smart developers working together produce far better results than several smart developers working separately. Capitalize on this. Ask for comments and then listen to them. Especially the criticism. The worst that can happen is that you'll feel more confident in your design.

Along those lines, keep up on the current trends in software development. I'm not saying you have to be KanBanAgileScrumTDD just because others have written about how shiny they are. But you won't know how these concepts can, or can't, help if you aren't familiar with them.

Strive for Elegance, but understand what it means

To me, an "Elegant" solution is not necessarily overly-clever. It does not have to solve problems in a new way. And it certainly doesn't take Donald Knuth to understand. To me, "Elegance" makes the solution look easy. Sure, maybe you come up with a better way of solving a problem. But maybe you recognize that some techniques are "Tried and True" for a reason. Either way, your result shouldn't look like a bunch of work. It should look obvious.

Know when to say when. And when not to.

Understand that at some point in your career (or, in my case, at some point in your day), the pursuit of higher quality will conflict with the overall effort in such a way that the pursuit does more harm than good. Be able to recognize that time and let go.

Understand that at some point in your career, you will be expected to sacrifice quality for the overall effort in a manner that does more harm than good. Don't dig in. Don't get stubborn. Learn to present your case in terms that the decision makers understand. You will not always get your way, but you will become known as an asset that is always looking out for the overall project.

Note that there isn't a lot of actual code advice here. Sure, I could tell you that if you're instantiating different types of classes depending on context, consider an Object Factory or even an Abstract Factory. Or if you have a somewhat complex process that other processes interact with, or a subsystem that might change, consider a Facade. I could give you the ol' "Design from the Interface" advice or even tell you that if you find yourself considering recursive queries maybe you should step back a bit. But I think that if you really take the above to heart, everything else is just details.
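For the curious, here's roughly what that first bit of advice looks like in practice. This is a sketch with made-up exporter classes, not a prescription; the Object Factory's only job is to keep the "which class do I instantiate?" conditional in exactly one place.

```python
import json

class CsvExporter:
    """Turns rows into comma-separated lines."""
    def export(self, rows):
        return "\n".join(",".join(str(v) for v in row) for row in rows)

class JsonExporter:
    """Turns rows into a JSON array."""
    def export(self, rows):
        return json.dumps(rows)

def make_exporter(fmt: str):
    # Object Factory: the "which class?" decision lives here and nowhere else.
    exporters = {"csv": CsvExporter, "json": JsonExporter}
    return exporters[fmt]()
```

Callers ask for a format by name and never mention a concrete class, so adding an XML exporter touches the factory and nothing else.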

Tuesday, August 13, 2013

Best Practices


"The cart before the horse is neither beautiful nor useful." --Henry David Thoreau

You're doing account signup forms wrong.

Okay, I should say "If you're doing account signup forms, you're doing them wrong." and while logically (if not grammatically) more accurate, I thought it made less of an impact as an opening statement.

The problems with most signup forms are myriad. Why do sites force you to enter your email address twice? Mobile platforms have figured out that masked password input boxes aren't always necessary, and give you either an option to turn it off or give limited clear-text viewing of your input. Websites haven't gotten the memo. And if I see one more site serving up a picture of what can only be described as a dust storm and telling me to type in the letters in the box to prove that I'm human, I'm going to flip and write a blog post about it. Just. Stop. It.

Now, as I hope I've made clear in my writing, I don't care about forms, CAPTCHA, or even UX issues. Or rather, I care about them only in the context of the thought process that went into them, and that's where account signup (and often account management) processes fall flat. It's due to a very insidious concept known as "Best Practices".

I hate best practices. As soon as that phrase is first used in a requirements gathering meeting, I step on it like I would a roach. "Best Practices" used to refer to processes that the industry had adopted, formally or not, as the best way known at the time to approach a problem. That, I have no problem with. What I have a problem with is the fact that "Best Practices" doesn't mean that anymore in software development. Now it means "What is everyone else doing?" Which leads to lazy planning. Which leads to bad results. And I really hate bad results.

"Best Practices" are insidious. Because it's assumed that these practices are used because they're the best way of approaching a problem, people stop thinking about solutions as they apply to their specific needs. Any time you start implementing solutions without considering whether or not that solution is actually a solution to a problem you have, you have at best added unnecessary complexity to your project. At worst, you end up implementing code that hurts you in the long run.

Worse, though, is that no one ever advances that body of knowledge when "best practices" are applied blindly. Since everyone is using the same solutions as everyone else, no one thinks up new ways of solving problems. In the UX arena, this results in carbon-copy sites that don't stand out and present all the same inconveniences that the other sites present. In the web security arena, it's even worse because you are implementing that worst kind of security. The kind that makes you feel secure without necessarily offering any concrete benefits. Since the "security" principles that are labeled "Best Practices" are applied blindly, you don't know if the implemented solution solves the problem at hand, much less whether or not the problem at hand is one that needs solving in your context.

As an example, let's take password masking. Many mobile devices briefly show in clear text the latest character typed into a password box. This is not new. In fact, I've read people claim that Apple innovated that idea for the iPhone, but my Palm 600 did that. This idea is ten years old and the web world still hasn't caught on. It's become an expected inconvenience because everyone is looking at everyone else's paired "Enter Password/Reenter Password" boxes. Even worse, this is billed as a security measure without asking whether or not this is a *necessary* security measure. Does it really hurt anything if a site offered a way of viewing your password in clear text, thus avoiding the usual paired password box routine?

Am I saying that these measures are all unnecessary? Absolutely not. Except when they are. And if requirements are being gathered in the context of your needs and your solution, then it becomes obvious what you do and don't need. The problem is that "Best Practices" are applied backwards. The solution is selected and the problem lies unexamined. Software development is merely implemented problem solving. You can't solve a problem you have not examined.

Monday, July 29, 2013

Clay Pots

"Perfection is not attainable, but if we chase perfection we can catch excellence." --Vince Lombardi


I tried to find a link to the experiment, but could not. Perhaps it’s allegorical. However, I read once about an experiment done by a pottery teacher. She divided the class into two teams. She told the first team to make the perfect clay pot. She told the second to simply make as many clay pots as they could. At the end of the experiment, the perfect clay pot was indeed made, but not by team 1. As it turned out, the constant iterative practice by team two trumped the careful work of team one.




NOTE: Thank you to +Dave Aronson for pointing me to the link I couldn't find!
http://kk.org/cooltools/archives/000216
He also has some thoughts on the subject at http://www.dare2xl.com/2010/08/just-do-it.html

This is not an article about getting better at software development the more you develop software. If you’re reading this, you already know that. No, this is about software architecture and building the perfect design. Which, as I explained in my first article, doesn't exist anyway.

Every Software Developer/Architect/Engineer/Whatever that I've worked with has shared a couple of characteristics. They want to get their work done right, and they take time to think through what they're doing before they do it. Both of which are commendable. The problem comes when this leads to Analysis Paralysis. When the process of thinking things through in order to make the perfect design deadlocks the developer and he can't move on.

When that happens to you, remember the clay pots.

Agile methodologies such as Scrum and XP were developed, in part, to avoid analysis paralysis at the project level. With a focus on action and testing the results, agile methodologies seek to create the perfect pot by creating pots until they get it right. As it turns out, this technique works just as well at the individual level.

Sometimes the best way to break through design indecision is to just start writing it. Build the class stubs, make them interact, and build unit tests around them. How well does it work? How badly doesn't it work? Then consider what worked, what didn't, refine your ideas and start over. Wash, rinse, and repeat until you’re happy. Or at least satisfied. Or at least still on this side of “I’m so frustrated I’m about to throw my laptop through a window”. Seeing how the design plays out and forcing yourself to refine and retest can often lead to better results than trying to think through every detail in advance so that you create the "perfect" design the first time.
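As a sketch of what that looks like in practice, here's the kind of throwaway stub-plus-test I mean, in Python. The RateLimiter is a made-up example with first-draft logic; the point is that each design iteration gets a concrete shape you can poke at, break, and refine.

```python
import unittest

class RateLimiter:
    """A stub: just enough shape to see how the design plays out."""
    def __init__(self, limit: int):
        self.limit = limit
        self.count = 0

    def allow(self) -> bool:
        # First-draft logic; refine it or throw it away next iteration.
        if self.count < self.limit:
            self.count += 1
            return True
        return False

class RateLimiterDesignTest(unittest.TestCase):
    """The questions you ask of every iteration, written down as a test."""
    def test_allows_up_to_limit_then_refuses(self):
        limiter = RateLimiter(limit=2)
        self.assertTrue(limiter.allow())
        self.assertTrue(limiter.allow())
        self.assertFalse(limiter.allow())
```

When the next iteration changes the design, the test tells you immediately which assumptions survived. That's the pot going back on the wheel.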

Don’t get me wrong. I’m not advocating against careful thought. I’m not saying “Don’t plan” or “Don’t think”. And I'm certainly not saying you should just throw code against the wall until you get something that looks workable.

Consider the TV show "House". Dr. House's belief that there is one absolute right way of handling a problem is completely detrimental to software development. But one of the few things I agree with him on *in practice* is his insistence on thinking through a problem before acting on it. And if you remember the series, he follows the clay pot model. Think. Do. Refine. Think again. Continue until done. You won't get it right the first time, and you should be very suspicious if you do, so don't grind on it. I also love his attitude that making mistakes is expected. No one cares, as long as your end result is solid.

Here’s the thing I tell architects and developers alike. There are no points for style. No one is counting code check ins, no one is counting compilations, no one is counting design iterations, and no one cares as long as the end product is a good one. Until then, if you have to slam it in with your knees, do so.

Often, you don't know what works until you've seen something that does not.

Thursday, July 25, 2013

Layering It On

“Once you know who you are, you know what you have to do.” --Steve Perry, The 97th Step


We all know that building your application in layers is important. Portability, separation of concerns, extensibility, and blog articles are all highly dependent on proper application layers. The problem I see isn't a lack of understanding the importance or disagreeing with it. The problem I consistently see is people not understanding how to layer their applications. Part of this is, of course, practice. My first attempt at building an application with a 3-Tier architecture was an epic disaster that would have made the Titanic step back and say, “DAMN- I thought this was bad.” My second one was also pretty terrible. But better than the first.

Practice becomes easier with understanding, though. Tips, circumstances, and examples are all limited in scope in that they only give you a small slice of the whole picture. But once you understand what a layer is, why it’s important, and how to look at it, the rest is just reinforcing your understanding with practical experience. As any regular readers will know (Have I been around long enough to have regular readers?), I see software architecture as applied philosophy. I know I've used this one already, but:


“This, what is it in itself, and by itself, according to its proper constitution? What is the substance of it? What is the matter, or proper use? What is the form, or efficient cause? What is it for in this world, and how long will it abide? Thus must thou examine all things that present themselves unto thee.” --Meditations, Marcus Aurelius


I originally used this in understanding classes and properly understanding what they do, but it applies to application structure as well. Once you understand what something is, be it a class, a layer, a carburetor, or a hammer, you know what to do with it. So let’s take a pretty typical application stack- Presentation, Controller, Model, and Persistence. We start by considering each layer as a real-world entity, with things it knows about, actions it knows how to take, and actions it does not know how to take. Then we ask ourselves Aurelius' questions about these entities.


Presentation

What is a Presentation layer, in itself and by itself? Not to put too fine a point on it, but the presentation layer presents data, both to the end user and to the model. That’s what it knows how to do. It knows how to arrange data in a way that makes your application usable and useful. It knows how to sort and filter data so the user can get to the important data without wading through the unimportant data. 

Is your presentation layer interpreting data for any reason other than how to properly display it or properly send it to the lower reaches of the application? Then your presentation layer is doing something it doesn't know how to do.

Controller

Of all the application layers, I've seen more misunderstanding about the Controller than any other. And this is a prime example of why understanding needs to come first, because this one is easy to get wrong if you don’t understand it. The Controller is a Switchboard Operator. Okay- there are a ton of more recent comparisons that are just as good, but I’m going with switchboard operator. The controller routes requests from one place to another, and that’s it. It knows where a request came from and based on that, it knows where the request goes next. A controller that routes the request to different receivers based on some conditional logic with the data itself is interpreting and attaching meaning to the data. It doesn't know how to do that.

Model

In and of itself, what is a Model Layer? What's its purpose? The model knows what the data means, how it should be interpreted, and how it should be used. Which is, admittedly, the meat of the application, but there are a few things this layer doesn't do as a part of its purpose. It doesn't know where data comes from. It doesn't know where data goes when it is done doing what it does. In this way, it’s a lot like an assembly line worker. A widget shows up and the model performs a task on it. Then the widget moves on. Where it came from and where it goes next are not important. The task performed is the only thing that is.

Persistence

What is the form or efficient cause of the Persistence layer? Sure, this layer interacts with data, but the question is "What is the... *efficient* cause". In its most efficient form, the persistence layer retrieves the data it’s asked for and stores the data it’s told to. It doesn't know how to do anything else. If, for instance, you've asked your persistence layer to tell the model if the correct data has been retrieved, then you’re asking your persistence layer for something it doesn't know how to do. If, as is common, you’re asking your persistence layer to know whether or not data is correct before storage then you are also asking it for something it doesn't know.
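To pull the four layers together, here's a deliberately tiny Python sketch. The high-score example is entirely invented for illustration; what matters is that each layer does only what it knows how to do and nothing else.

```python
class Persistence:
    """Stores and fetches data; attaches no meaning to it."""
    def __init__(self):
        self._rows = {}
    def save(self, key, value):
        self._rows[key] = value
    def load(self, key):
        return self._rows.get(key)

class Model:
    """Knows what the data means; doesn't care where it came from."""
    def __init__(self, persistence):
        self._persistence = persistence
    def record_score(self, player, score):
        best = self._persistence.load(player) or 0
        # Interpretation lives here: only a new personal best is kept.
        if score > best:
            self._persistence.save(player, score)
    def best_score(self, player):
        return self._persistence.load(player) or 0

class Controller:
    """A switchboard: routes requests, never inspects their meaning."""
    def __init__(self, model):
        self._model = model
    def handle(self, action, *args):
        return getattr(self._model, action)(*args)

class Presentation:
    """Arranges data for display; performs no business interpretation."""
    def __init__(self, controller):
        self._controller = controller
    def show_best(self, player):
        return f"{player}: {self._controller.handle('best_score', player)}"
```

Notice that the "is this score a new best?" question lives only in the model. Put it in the presentation or persistence layer and you've asked those layers to do something they don't know how to do.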


Although this becomes much easier with practice, the underlying key to application layering is knowing what you want your layer to do, and making sure that it doesn't do anything else. Thinking about your application layers as specialists helps greatly in keeping in mind what they should, and shouldn't, be doing. You don’t call your pediatrician when your car dies and you don’t call a ticket box office when your roof leaks. Don’t call a model layer when you need to know how to display data.

Monday, July 8, 2013

5 Architecture Mistakes to Avoid

“Success does not consist in never making mistakes but in never making the same one a second time.” --George Bernard Shaw

“Insanity: doing the same thing over and over again and expecting different results.” --Albert Einstein

In a previous article I referenced what I consider to be the two biggest sins a software architect can make. Following is a list of what I consider the biggest preventable mistakes an architect can make. These mistakes can tank a project or cause major grief after deployment and are all preventable if you keep them in mind.

1. Missing the obvious
It’s embarrassing when you've realized that you've designed an online payment processor that can’t cleanly handle the addition of a new payment vendor or a new method of payment. This is Software Architecture 101. Assume that at some point in time there will be a need to use a new vendor because they offer a type of payment that your current vendor does not.
Whether it’s a hard-coded value, a data type used outside of a reasonable context, or a missed design pattern, this mistake is a landmine. And no one wants Amnesty International protesting your code base.
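As a sketch of what "cleanly handle a new vendor" might look like, here's a minimal Python example. All the names are hypothetical and the gateway call is a stand-in; the point is that adding a vendor means adding a class, not editing the checkout code.

```python
from abc import ABC, abstractmethod

class PaymentVendor(ABC):
    """The seam the design leaves open: add a vendor, touch nothing else."""
    @abstractmethod
    def charge(self, amount_cents: int) -> bool: ...

class CardVendor(PaymentVendor):
    def charge(self, amount_cents: int) -> bool:
        # Stand-in for a real gateway call; rejects non-positive amounts.
        return amount_cents > 0

class Checkout:
    """Depends on the abstraction, never on a concrete vendor."""
    def __init__(self, vendor: PaymentVendor):
        self._vendor = vendor
    def pay(self, amount_cents: int) -> bool:
        return self._vendor.charge(amount_cents)
```

When the new vendor with the new payment type shows up, it's one more `PaymentVendor` subclass and zero changes to `Checkout`. Miss this seam and you're soldering the vendor into the design.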

2. Accounting for the unreasonable
Over-architecting is as bad as under-architecting. Don’t create a multiple endpoint web service that acts as a front end to a Windows service unless you’re certain that a piece of functionality needs to be that accessible. Don’t react to “This might change”. React to “This is likely to change.”

3. Being overly clever
We've all done it at one point in time or another. We all want to show how clever we are, or how smart we are, or how innovative we are. We've all, at one time or another, written a section of code that makes others go “Wait... what?” The urge to do this does not make you a bad developer. It makes you normal. Well, normal as the term relates to software developers.
That being said, elegant code is code that solves a problem in a manner that seems effortless. It is not defining every method of a class as a closure so that the entire functionality of the class can be redefined at runtime. If your code is so abstract that you need a Philosophy degree to understand it, you need to rethink. Prefer simplicity in your designs. The next guy will thank you for it.

4. Not understanding the platform
Sure- software architecture concepts apply across platforms. It doesn't matter what you use; you break up your application into usable modules (whatever you call them) that do one thing. The Strategy Pattern is useful in any OOP environment, and the underlying concept is important for Functional Programming.
That being said, if you don’t understand the tools that your development platform offers, you can only choose the correct tools by luck. And luck is a terrible thing to rely on. Rails is not the same as .Net is not the same as Java is not the same as ColdFusion. And while approaching design on a platform from the point of view gained from another platform can yield some interesting insights, if you don’t understand your platform you almost can’t help but miss something. Building a web service in .Net? Better understand WCF. Building anything in Ruby on Rails? Get used to Gems and understand what they offer. Get used to the Active Record Pattern. It doesn't matter if you don’t like it (I don’t); you can’t understand how to account for its weaknesses, and whether or not it’s worth bothering to do so, if you don’t understand how it works in the first place.

5. Being single-focused
This isn't exactly the “If all you have is a hammer, all problems look like a nail” adage that is so common in software development. This is the acknowledgement that developers tend to see projects through a particular lens, regardless of their skills or experience. I tend to write class libraries that serve as an API for whatever front end application may need to consume them. In general, this has served as a solid solution and has become something of a “Go-To” approach for me. Until, on my current project, the dev lead asked me “Why aren't we using SSIS?” He was absolutely right, too. SSIS was the right solution. Don’t fall in love with a particular tool. Be flexible enough to choose the right tool.

Bonus Mistake
The assumption that being an architect makes you right and forgetting that it makes you accountable. Okay- that’s a bonus mistake two-fer.
*Always* listen to what the team developers have to say, regardless of whether or not you agree with them. In fact, especially if you don’t agree with them. Not listening to skilled professionals makes them mad, which will lead to all sorts of problems. They’re the ones actually using the software design you've created, and when the rubber hits the road it’s not uncommon to find some bumps. On the other hand, if the feedback you get is consistently taking a problematic path, you know you need to step back and talk to the dev team so you can come to some basic understandings of how you see things and how they see things.
That being said, at least where I work, architecture is not a democracy. We don’t vote on it and if I create something that doesn't work “But the dev team liked it better” doesn't cut it. I’m responsible for the design. That means listening. That means thinking. That also means being able to make a decision. And at some point making a decision means being able to say “Thank you for your input. I've taken your feedback into consideration and this is the direction we’re taking.”

These mistakes can all turn an application or a development project ugly. They can cause maintenance issues, extensibility issues, or project issues. They can manifest themselves quickly or lie in wait until another factor brings them to light. But they are all preventable and mark the difference between the architect that helps create an elegant and useful solution, and the architect that development teams have to put up with and work around in order to release something workable.