Last week,
hgs asked,
I find it interesting that lots of people write about how to produce clean code,
how to do good design, taking care about language choice, interfaces, etc, but few people
write about the cases where there isn't time... So, I need to know what are the forces that tell you
to use a jolly good bodge?
I suspect we don't hear much about it because those other problems - the ones all the clean-code articles address - are often caused by that very excuse: there wasn't time.
And in the long run, taking on that technical debt will likely slow you down so much that the lack of time becomes the more interesting problem. In other words, by ignoring the need for good code, you jump into a downward spiral where you give yourself even less time (or make everything take so long that you may as well have less time).
I think the solution is to start under-promising and over-delivering, as opposed to how most of us do it
now: giving lowball estimates because we think that's what they want to hear. But why lie to them?
If you're using iterative and incremental development and you've over-promised in one iteration, you are supposed to dial down your estimates of what you can accomplish in subsequent iterations, until you finally get good at estimating. And estimates should include what it takes to do it right.
That's the party-line answer to the question. In short: it's never OK to write sloppy code, and
you should take precautions against ever putting yourself in a situation where those
viscous forces pull you in that direction.
The party-line answer is the best answer, but it doesn't fully address the question, and I'm not
always interested in party-line answers anyway. The viscosity (when it's easier to do the wrong thing than the right thing) is the force behind the bodge. I don't like it, but I
recognize that there are going to be times you run into it and can't resist.
In those cases where you've already painted yourself into a corner, what then? That's the interesting
question here. How do you know the best
places to hack crapcode together and ignore those things that may take a little longer in the short run, but
whose value shows up in the long run?
The easy answer is the obvious one: cut corners in the code that is least likely to need to change or
be touched again. That's because (assuming your hack works) if we don't have to look at the code again,
who really cares that it was a nasty hack? The question whose answer is not so easy or
obvious is "what does such a place in the code look like?"
By the definition above, that would be the lower levels of your code. But if you cut corners there and inject a bug, many other parts of your application will be affected. So maybe that's not the right place to do it. Instead, it would be better to hack in the higher levels, on which very little (if any) other code depends. That way, you limit the effects of the hack. More importantly, if nothing else depends on it - no incoming dependencies - it is easier to change than if other code were highly dependent on it. [1]
Maybe the crapcode can be isolated: if a class is already awful, can you derive a new class from it and make any new additions with higher quality? If a class is of high quality and you need to hack something together, can you make a child class and put the hack there? [2]
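To make that concrete, here's a minimal sketch in Python of the second case (the class and field names are hypothetical, not from any real project): the clean class stays untouched, and the deadline hack is quarantined in a subclass that can still stand in for its parent.

    class ReportGenerator:
        """Existing, well-tested class; we leave it alone."""

        def generate(self, orders):
            return [self.format_line(order) for order in orders]

        def format_line(self, order):
            return "%s\t%.2f" % (order["id"], order["total"])


    class ReportGeneratorWithFeedHack(ReportGenerator):
        """Quarantine for the bodge: only callers that need the hack use
        this subclass; everything else keeps using ReportGenerator."""

        def format_line(self, order):
            # HACK: the legacy feed sometimes sends totals as strings.
            # Coerce here instead of polluting the parent class.
            if isinstance(order.get("total"), str):
                order = dict(order, total=float(order["total"]))
            return super().format_line(order)

Because the subclass only widens what it accepts and otherwise behaves like its parent, it can substitute for the original without breaking anything, and deleting the hack later is a local change: remove the subclass and point its few callers back at the parent.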
Uncle Bob recently discussed when unit and acceptance testing may safely be done away with. He put those cutoffs at around 10 lines of code for unit tests and a few thousand for acceptance tests.
In the end, there is no easy answer I can find where I would definitively say, "that's the place for a bodge."
But I suspect there are some patterns we can look for, and I tried to identify a couple of those above.
Do you have any candidates you'd like to share?
Notes:
[1] A passing thought for which I have no answers:
The problem with even identifying those places is that by hacking together solutions, you are more likely
to inject defects into the code, which makes it more likely you'll need to touch it again.
[2] I use inheritance here because the new classes should still be usable without violating LSP. However, you may very well be able to make those changes by favoring composition. If you can, I'd advocate doing so.
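As a sketch of the composition route (again in Python, reusing the hypothetical names from the earlier example): wrap the clean class and keep the hack in the wrapper, so there is no inheritance contract to worry about at all.

    class LegacyFeedAdapter:
        """Holds a ReportGenerator rather than subclassing it; the hack
        lives here and the wrapped class stays clean."""

        def __init__(self, generator):
            self._generator = generator

        def generate(self, orders):
            # HACK: normalize string totals from the legacy feed before
            # delegating to the well-tested generator.
            cleaned = [
                dict(order, total=float(order["total"]))
                if isinstance(order.get("total"), str)
                else order
                for order in orders
            ]
            return self._generator.generate(cleaned)

Callers would then write LegacyFeedAdapter(ReportGenerator()).generate(orders), and removing the hack is again a one-class deletion.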
I don't understand where the need to write this crappy code comes from. If you're writing it from scratch, it shouldn't take any more time to write it correctly the first time around, should it? Or is crap code actually a euphemism for skipping writing out any logic or pseudocode ahead of time and just sitting down to code? If that's the case, I'd question how much time is saved.
Posted by Allen on Feb 18, 2008 at 09:17 AM UTC - 6 hrs
I have actually found one situation where it makes sense to write code quickly at the expense of readability, maintainability, and performance.
When I have to write code that I know will be run only once and never again, and I have an urgent deadline, it makes sense to write sloppy code.
This is, admittedly, really rare. I have run into it a couple of times, though. Typically it's for some relatively easy data translation assignment. When I have a tight deadline and will only run the code once, it doesn't make much sense to do everything "correctly".
If the project is complicated, however, then unit testing comes into play and the code will likely need to be revised multiple times - at which point it pays to do things correctly.
Even when I run into that situation I still find it hard to let go and quit worrying about the organization of the code.
Posted by Steve Bryant on Feb 18, 2008 at 09:38 AM UTC - 6 hrs
Project viscosity is not really what I'm asking about; that's a kind of force, but one to fight against. I'm talking more about what Steve Bryant is saying. In my striving to improve as a programmer, I'm pushed in the direction of "perfectionism". So, Allen, I'm trying to find out how, why, where, and when to push back against that. There is a balance to be struck. If I could come up with simple designs easily, this would be less of a problem. I'd be able to work faster and have good enough code. But I find myself looking for clean, robust, easy to maintain solutions, even elegance, when a hack may well suffice. I think "the simplest thing that could possibly work" is not really "the most elegant thing...". For very high quality programmers, maybe elegance is the easiest thing.
I think the points Sam makes about coupling are good ones, as are the points about quick, short, one-shot hacks. Some of these decisions seem to depend on knowing the future of the code, and that's never easy, especially after years of looking for edge cases to consider. "Beware the buffer overruns, and shun/ the dubious things on stack!" It's "just good enough" code, satisficing in programming... That's what I'd like to be able to do better.
Posted by hgs on Feb 18, 2008 at 11:06 AM UTC - 6 hrs
@Allen- As it turns out, I misunderstood what hgs was asking (as you can see from his comment above).
I guess if you are writing a project from scratch, of course it is always better to write nice code to the best of your ability. I was considering the case when time is at issue, and you can't "do it right" but you can take on some technical debt and get it working - where is the best place to do that? (Or, which will have you paying the least interest on that debt?)
@Steve- that's a good point, and I hadn't considered the case here where you are writing one-off scripts. In fact, I've just finished one up today that isn't quite up to quality - there are no unit tests, there are 3 methods in a God-class, but it works and does the job. It's to be thrown away after we run it on the full data set.
@hgs- Sorry I missed the question entirely =). As I understand the question now, it is also interesting, but Steve has pretty much summed up my opinion - when I write scripts that are to be thrown away, I don't bother with caring about design or TDD or any semblance of things that lead to quality. I only care if it works or not, and I do away with the formalities.
Steve McConnell mentioned that he doesn't like this approach, because it can lead to bad habits - it would be better to stay in good habits. I agree that it would be best on a personal level to do that, but I don't know that it would be better from a business standpoint. I guess you just have to have discipline, and enforcing good practices on yourself even when they aren't required helps bring discipline.
If other people are going to look at that code or it is intended to be saved, then I have to reconsider my position on it, and make sure it has some quality aside from "it works."
I don't know if there is a nuanced ground in between, where sometimes a hack is OK in a real system (aside from that viscosity or the technical debt due to time-crunch, which I've already given my current opinion on).
Good discussion so far!
Posted by Sammy Larbi on Feb 18, 2008 at 01:15 PM UTC - 6 hrs
No need to apologize, Sam, because these are two sides of the same coin. And I am thinking of components of bigger programs, but perhaps they do some small job and won't be touched again.
Jeff Atwood's Coding Horror blog has a "Worse is Better" entry (30-Jan-2008) which, way down, has links to www.randsinrepose.com suggesting that this is due to me being more of a completer than an incrementalist. [This is probably another difficulty I have with practicing agile: doing just enough to complete this story when I know things down the road will make me want to rip it up and rewrite, but I digress.]
The trouble is that these one-shot scripts and similar last much longer than you think. Rewriting them means understanding the code you've forgotten about, of course, which pushes you in the direction of doing it "properly" in the first place.
Posted by hgs on Feb 18, 2008 at 02:08 PM UTC - 6 hrs
Thanks for pointing me over there - I had read Jeff's entry, but didn't take the time to follow the links (or the comments).
I guess I'm more of an incrementalist, although I wasn't always that way. I don't know what it was about YAGNI that I latched onto, but I do know that once I heard about it, I was hooked. I guess maybe I had been beaten up too many times by actually doing stuff that didn't end up needing to be done (or more so than your average bear).
I thought Ron Jeffries (I think it was he) put it best when explaining why it's OK not to "complete" even if you "know" it's going to happen. I wish I could find the quote, but the gist was (which you probably already know):
If you keep your code easily modifiable (which will be easier if you're not implementing extra features), then you can always add that new feature with ease later. If you implement it now and the guess turns out to be correct, you haven't saved much time (perhaps you even wasted time if you had to write sloppier code to get it in on time). If you never end up needing it, then you saved a bunch of time.
I don't think this applies to things like checking for a divide-by-zero error - the things YAGNI tells you to skip are bigger than that, I would imagine, though I'm not sure where I'd draw the line.
About the one-shot scripts: I generally do literally delete them when I'm done. Sometimes I save them just in case I want to go back and grab a snippet for later use (probably in another throwaway script), or perhaps if I remember doing something and I wanted to see how I did it (when I'm exploring new territory). But they don't typically make their way into production code - I store them in a local place on my own HD if I store them at all.
These scripts are typically things like reformatting a text file so I can import it easily with a DB util, or moving data from one place to another. They are very specific to a single task, so although some bits might at some point in the future be useful elsewhere, it's nothing I could really do much with as a whole.
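For illustration, a minimal sketch of that kind of one-off script (the file names and delimiters here are hypothetical) - just enough to feed the import utility, with no tests and no design:

    # Throwaway script: convert a pipe-delimited export into the
    # tab-delimited form the DB import utility expects. Run once, delete.
    import csv

    with open("legacy_export.txt", newline="") as src, \
            open("import_ready.tsv", "w", newline="") as dst:
        reader = csv.reader(src, delimiter="|")
        writer = csv.writer(dst, delimiter="\t")
        for row in reader:
            # Strip the stray whitespace the legacy system leaves in fields.
            writer.writerow(field.strip() for field in row)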
Posted by Sammy Larbi on Feb 18, 2008 at 03:54 PM UTC - 6 hrs