What's with this nonsense about unit testing?
Giving You Context
Joel Spolsky and Jeff Atwood raised some controversy when discussing quality and unit testing on their Stack Overflow podcast (there's also a transcript of the relevant part).
Joel started off that part of the conversation:
But, I feel like if a team really did have 100% code coverage of their unit tests, there'd be a couple of problems. One, they would have spent an awful lot of time writing unit tests, and they wouldn't necessarily be able to pay for that time in improved quality. I mean, they'd have some improved quality, and they'd have the ability to change things in their code with the confidence that they don't break anything, but that's it.
But the real problem with unit tests as I've discovered is that the type of changes that you tend to make as code evolves tend to break a constant percentage of your unit tests. Sometimes you will make a change to your code that, somehow, breaks 10% of your unit tests. Intentionally. Because you've changed the design of something... you've moved a menu, and now everything that relied on that menu being there... the menu is now elsewhere. And so all those tests now break. And you have to be able to go in and recreate those tests to reflect the new reality of the code.
So the end result is that, as your project gets bigger and bigger, if you really have a lot of unit tests, the amount of investment you'll have to make in maintaining those unit tests, keeping them up-to-date and keeping them passing, starts to become disproportional to the amount of benefit that you get out of them.
Apparently, that was enough for
others to come out of the shadows and discuss frankly
why they don't see value in unit tests.
Joel was talking about people who suggest having 100% code coverage, but he said a couple of things about unit testing in general, namely the second and third paragraphs I quoted above: that changes to code may cause a ripple effect where you need to update up to 10% of your tests, and that "as your project gets bigger ... [effort maintaining your tests] starts to become disproportional to the amount of benefit that you get out of them."
One poster at Hacker News mentioned that it's possible for your tests to have 100% code coverage without really testing anything, so they can give a false sense of security (don't trust them!).
Bill Moorier said,
The metric I, and others I know, have used to judge unit testing is: does it find bugs? The answer has been no. For the majority of the code I write, it certainly doesn't find enough bugs to make it worth the investment of time.
He followed up by saying that user reports, automated monitoring systems, and logging do a much better job at finding bugs than unit tests do.
I don't really care if you write unit tests for your software (unless I also have to work on it, or sometimes use it in some capacity).
I don't write unit tests for everything. I don't practice TDD all the time. If you're new to it I'd recommend that
you do it though -- until you have enough experience to determine which tests will bring you the value you want. (If you're not good at it, and haven't tried it on certain types of tests, how else would you know?)
The Points
All of that was there to provide you context for this simple, short blog post:
- If changing your code means broken tests cascading through the system to the tune of 10%, you haven't written unit tests, have you?
(Further, the sorts of changes that would needlessly break so many unit-cum-integration tests would be rare, unless you've somehow managed, or tried very hard, to design a tightly coupled spaghetti monster while writing unit tests too. There's a quick sketch of the distinction after this list.)
- I've not yet met a project where the unit tests are the maintenance nightmare. More often, it's the project itself, and it probably doesn't have unit tests to maintain. The larger the code base, with large numbers of dependencies and high coupling, the more resistant it is to change - with or without unit tests. The unit tests are there in part to give you confidence that your changes haven't broken the system when you do make a change.
If you're making changes where you expect the interface and/or behavior to change, I just don't see where the maintenance nightmare comes from regarding tests. In fact, you can run them and find out what else in your code base needs to change as a result of your modifications.
In short, these scenarios don't happen often enough to make testing worthless.
- You may indeed write a bunch of tests that don't do anything to test your code, but why would you? You'd have to try pretty hard to get 100% code coverage with your tests while successfully testing nothing (the second sketch after this list shows what that looks like).
Perhaps some percentage of your tests under normal development will provide a false sense of security. But without any tests whatsoever, what sense of security will you have?
- If you measure the value of unit testing by the number of bugs it finds (with more being better), you're looking at it completely wrong. That's like measuring the value of a prophylactic by the number of diseases you
get after using it. The value is in the number of bugs that never made it into production. As a 2008 study from
Microsoft finds [PDF], at least with TDD, that number can be astonishingly high.
- As for user reports, automated monitoring systems, and logging doing a better job at finding bugs than unit testing: I agree. It's just that I'd prefer my shipped software to have fewer bugs for them to find, and I certainly don't look at my users as tests for my software quality once it's in production.
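To illustrate the first point, here's a minimal, hypothetical sketch in JUnit-style Java (the ExportCommand class and the menu-driven test are made up for illustration): a test that exercises a feature through the UI's menu layout breaks the moment the menu moves, while a unit test of the underlying command doesn't care where the menu item lives.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical command behind a menu item, used only for illustration.
class ExportCommand {
    String export(String data) {
        return "exported:" + data;   // the behavior we actually care about
    }
}

public class ExportTests {

    // An integration-style test would drive this through the UI, e.g.
    //   ui.openMenu("File").click("Export...");
    // and would break as soon as the "Export..." item moved to another menu,
    // even though the export behavior itself is unchanged.

    // The unit test exercises the command directly, so moving the menu item
    // elsewhere in the UI doesn't touch it.
    @Test
    public void exportProducesExpectedOutput() {
        assertEquals("exported:report", new ExportCommand().export("report"));
    }
}
```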
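And for the third point, a sketch of how coverage numbers can lie (again JUnit-style Java; Discount is a made-up class): the first test executes every line and earns 100% coverage while asserting nothing, so it can never fail; the second actually verifies the behavior.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical class under test, only for illustration.
class Discount {
    static double apply(double price) {
        return price * 0.9;   // 10% off
    }
}

public class DiscountTests {

    // Executes every line of Discount.apply, so a coverage tool reports 100%
    // -- but there's no assertion, so this "test" can never fail.
    @Test
    public void coversEverythingVerifiesNothing() {
        Discount.apply(100.0);
    }

    // A test that actually pins the behavior down.
    @Test
    public void appliesTenPercentDiscount() {
        assertEquals(90.0, Discount.apply(100.0), 0.0001);
    }
}
```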
What are your thoughts?
Comments
Thank you, Sam. I was thinking some similar things. The harder it is to write a unit test in the first place, the more likely it is that the code you're testing is too tightly coupled. It also seems that people tend to conflate unit tests with integration tests. The latter I would expect to break due to code changes.
Posted by Allen
on Feb 20, 2011 at 06:18 PM UTC - 6 hrs
I happen to agree with a lot of the points that Joel and Bill made.
For one, unit tests are by definition not integration tests and I find that a lot of bugs are due to misunderstandings about how units integrate together.
I find a more fundamental problem with unit tests to be the issue of factoring out dependencies on outside systems (usually handled via mocking techniques). I spend a lot of time writing mock objects (whether I'm using a framework like Mockito or not) and getting them to perform just the way I want. I end up writing 50 lines of mock objects to test 10 lines of code. And since my mock objects are tightly coupled to the implementation of the code I'm testing (which is pretty much required by a lot of mock frameworks), any changes in implementation do result in a lot of unit tests breaking.
Then of course, there's the whole problem with having to change your code design around unit tests. Things like static methods and the inability to mock constructors hurt the understandability and maintainability of your code. I know lots of TDD enthusiasts like to say that unit tests actually make your code better, but that's not always true. They make your code work a certain way; it's not always better.
In general unit tests are just another tool in the ongoing fight for better software. If you treat them like a crutch you're setting yourself up for failure. Keep using your brain, that's what they pay us for.
Posted by Dave
on May 13, 2011 at 03:42 PM UTC - 6 hrs
@Dave,
I agree with you on the mocks - it's something I didn't consider.
"Keep using your brain, that's what they pay us for."
I really like that idea. Thanks =)
Posted by Sammy Larbi
on Jun 12, 2011 at 12:53 PM UTC - 6 hrs