My Secret Life as a Spaghetti Coder
If I am healthy, my body may come to rely on being so and forget what to do when I am sick. Therefore, it is better to be sick than to be healthy.
Since I spent my morning reading reddit and typing comments there instead of writing today's blog post, I'll let you in on this discussion that's going on over there: Larry O'Brien's article 30K application lines + 110K testing lines: Evidence of...? was posted to this thread on reddit, and the FUD started to fly. (If you're interested in the subject, there's also a thread about programmers not getting it, or not wanting to.)

It started with Larry quoting an earlier piece of his praising extreme programming and citing 110 thousand lines of test code against 30 thousand lines of application code, the application having been developed in Python. Allen Holub took that as an indictment of dynamic languages, with Larry quoting him as saying:
I want to take exception to the notion that Python is adequate for a real programming project. The fact that 30K lines of code took 110K lines of tests is a real indictment of the language. My guess is that a significant portion of those tests are addressing potential errors that the compiler would have found in C# or Java. Moreover, all of those unnecessary tests take a lot of time to write, time that could have been spent working on the application.
In fact, many people were shocked at the amount of test code compared to application code, and that's what the discussion (at least the part I was interested in) centered around. Nearly four times as much test code as application logic, the argument goes, is too much. It would shackle you, instilling fear in your heart and soul. No changes would ever be made with that kind of viscosity. Furthermore, tests can provide a false sense of security, a blankie, if you will.

Blankie!!!

You've got to be kidding. Having a test that will tell you when you broke existing functionality is pressure to avoid changes? To me, that's liberating!

Contrast that with not having a test to tell you when something broke. Does it even make sense to say having tests pressures you to avoid changes? Only if you prefer a program you think works over one you know works.

Let me try a different approach. Take the following simple program:

if (someCondition is true)
    do something
else
    do something else

if (anotherCondition is true)
    do another thing

There are four execution paths: One where both someCondition and anotherCondition are true, one where they are both false, and one each where one is true and the other isn't.

In other words, we have six lines of code and at least four tests we should write to cover all the cases. If each test is just a single line, we still need to write the method names and end lines, so that would give us three lines per test - for a total of twelve lines of test code.
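
To make that concrete, here's a minimal sketch in Ruby's Test::Unit. The run_program method is a hypothetical stand-in that condenses the six lines above into something we can assert on:

require 'test/unit'

# Hypothetical condensed version of the six-line program above:
# returns the branches taken as a list of symbols.
def run_program(some_condition, another_condition)
  result = some_condition ? [:something] : [:something_else]
  result << :another_thing if another_condition
  result
end

class ExecutionPathTest < Test::Unit::TestCase
  def test_both_true
    assert_equal([:something, :another_thing], run_program(true, true))
  end
  def test_first_true_second_false
    assert_equal([:something], run_program(true, false))
  end
  def test_first_false_second_true
    assert_equal([:something_else, :another_thing], run_program(false, true))
  end
  def test_both_false
    assert_equal([:something_else], run_program(false, false))
  end
end

Each test is the single assert line counted above, plus its method name and end lines.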

The test code size is already double the number of lines in our application code for this simple, six line program with four execution paths. How many execution paths are in a 30 thousand line program?

Seeing as the number of execution paths in code is more likely to grow exponentially than linearly with each new line that gets written, 110 thousand lines of test code isn't actually all that much.

Further, the solution to the blankie problem is not "have fewer tests," it is to recognize that passing the tests is a necessary - not sufficient - condition of working software.

Following the blankie argument to its logical conclusion - that fewer tests mean you write better code because you are more careful - we should have no tests.

In fact, part of the reason we want the tests is that we can make changes to the code with less fear of unknowingly breaking existing functionality or introducing defects into the software.

In the end, if someone is using the tests as a tool to do wrong, there is something wrong with the person, not the test. They will find another way to do wrong, even if we remove the tests from their arsenal.

Hey! Why don't you make your life easier and subscribe to the full post or short blurb RSS feed? I'm so confident you'll love my smelly pasta plate wisdom that I'm offering a no-strings-attached, lifetime money back guarantee!



Suppose you want to write an algorithm that, when given a set of data points, will find an appropriate number of clusters within the data. In other words, you want to find the k for input to the k-means algorithm without having any a priori knowledge about the data. (Here is my own failed attempt at finding the k in k-means.)

require 'test/unit'

class FindKTest < Test::Unit::TestCase
  def test_find_k_for_k_means_given_data_points
    data_points = [1, 2, 3, 9, 10, 11, 20, 21, 22]
    k = find_k_for_k_means(data_points)
    assert(k == 3, "find_k_for_k_means found the wrong k.")
  end
end

The test above is a reasonable specification for what the algorithm should do. But take it further: can you actually design the algorithm by writing unit tests and making them pass?

I've previously expressed my doubt that TDD makes an effective approach to algorithm design. More recently, I alluded to some optimism towards the same idea. Then, in a comment on that post, Dat Chu asked about using unit tests in algorithm design, also referencing something Ben Nadel had said about asserting in code comments what the state of the algorithm should look like at certain points.

That all led to this post, and me wanting to lay my thoughts out a little further.

In the general case, I agree with Dat that it would be better to have the executable tests/specs. But, what Ben has described sounds like a stronger version of what Steve McConnell called the pseudocode programming process in Code Complete 2, which can be useful in working your way through an algorithm.

Taking it to the next step, with executable asserts - the "Iterative Approach To Algorithm Design" post came out of a case similar to the one described at the top. Imagine you're coming up with something completely new to you (in fact, in our case, we think it is new to anyone), and you know what you want the results to be, but you're not quite sure how to transform the input to get them.

What good does it do me to have that test if I don't know how to make it pass? The unit test is useful for testing the entire unit (algorithm), but not as helpful for testing the bits in between.

Now, you could potentially break the algorithm into pieces - but if you're working through it for the first time, it's unlikely you'll see those breaking points up front. When you do see them, you can write a test if you like. However, if it's not really a complete unit, then you'll probably end up throwing the test away.

Because of that, and the inability to notice the units until after you've created them, I like the simple assert statements as opposed to the tests, at least in this case.
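
To give a flavor of what I mean, here's a minimal sketch - the gap-based heuristic is invented for illustration, and almost certainly too naive for real data - of working through the algorithm with plain asserts checking intermediate state:

def find_k_for_k_means(data_points)
  # Gaps between consecutive points; unusually big gaps suggest cluster boundaries.
  gaps = data_points.each_cons(2).map { |a, b| b - a }
  raise "expected one gap per adjacent pair" unless gaps.size == data_points.size - 1

  mean_gap = gaps.inject(:+) / gaps.size.to_f
  boundaries = gaps.select { |gap| gap > mean_gap }
  raise "expected at least one cluster boundary" if boundaries.empty?

  boundaries.size + 1   # one more cluster than there are boundaries
end

puts find_k_for_k_means([1, 2, 3, 9, 10, 11, 20, 21, 22])   # => 3

The raises aren't tests I intend to keep - they just document (and check) what I believe about the state at each step while I feel my way through.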

When we tried solving Sudoku using TDD during a couple of meetings of the UH Code Dojo, we introduced a lot of methods I felt were artificially there, just to be able to test them. We also created an object where one might not have existed had we known a way to solve Sudoku through code to begin with.

Now, we could easily clean up the interface when we're done, but I don't really feel a compulsion to practice TDD when working on algorithms like I've outlined above. I will write several tests for them to make sure they work, but (at least today) I prefer to work through them without the hassle of writing tests for the subatomic particles that make up the unit.


This is the seventh in a series of answers to 100 Interview Questions for Software Developers.

The list is not intended to be a "one-size-fits-all" list. Instead, "the key is to ask challenging questions that enable you to distinguish the smart software developers from the moronic mandrills." Even so, "for most of the questions in this list there are no right and wrong answers!"

Keeping that in mind, I thought it would be fun for me to provide my off-the-top-of-my-head answers, as if I had not prepared for the interview at all. Here's that attempt.

Though I hope otherwise, I may fall flat on my face. Be nice, and enjoy (and help out where you can!).

This week's answers are about testing. More...


What's with this nonsense about unit testing?

Giving You Context

Joel Spolsky and Jeff Atwood raised some controversy when discussing quality and unit testing on their Stack Overflow podcast (or, a transcript of the relevant part). Joel started off that part of the conversation:
But, I feel like if a team really did have 100% code coverage of their unit tests, there'd be a couple of problems. One, they would have spent an awful lot of time writing unit tests, and they wouldn't necessarily be able to pay for that time in improved quality. I mean, they'd have some improved quality, and they'd have the ability to change things in their code with the confidence that they don't break anything, but that's it.

But the real problem with unit tests as I've discovered is that the type of changes that you tend to make as code evolves tend to break a constant percentage of your unit tests. Sometimes you will make a change to your code that, somehow, breaks 10% of your unit tests. Intentionally. Because you've changed the design of something... you've moved a menu, and now everything that relied on that menu being there... the menu is now elsewhere. And so all those tests now break. And you have to be able to go in and recreate those tests to reflect the new reality of the code.

So the end result is that, as your project gets bigger and bigger, if you really have a lot of unit tests, the amount of investment you'll have to make in maintaining those unit tests, keeping them up-to-date and keeping them passing, starts to become disproportional to the amount of benefit that you get out of them.
More...


Last week, hgs asked,
I find it interesting that lots of people write about how to produce clean code, how to do good design, taking care about language choice, interfaces, etc, but few people write about the cases where there isn't time... So, I need to know what are the forces that tell you to use a jolly good bodge?
I suspect we don't hear much about it because those other problems are often caused by that excuse. And, in the long run, taking on that technical debt will likely slow you down so much that going slow becomes the more interesting problem. In other words, by ignoring the need for good code, you jump into a downward spiral where you give yourself even less time (or make everything take so long that you may as well have less time). More...


When I posted about why it's important to test everything first, Marc Esher from MXUnit asked:
What do you find hard about TDD? When you're developing and you see yourself not writing tests but jamming out code, what causes those moments for you? And have you really, in all honesty, ever reaped significant benefits either in productivity or quality from unit testing? Because there's a pretty large contingent of folks who don't get much mileage out of TDD, and I can see where they're coming from.
My TDD Stumbling Blocks
I'll address the first bit in one word: viscosity. When it's easier to do the wrong thing than the right thing, that's when I "see myself not writing tests but jamming out code."

But what causes the viscosity for me? Several things, really: More...


Because when you don't, how do you know your change to the code had any effect?

When a customer calls with a trouble ticket, do you just fix the problem, or do you reproduce it first (a red test), make the fix, and test again (a green test, if you fixed the problem)?

Likewise, if you write automated tests, but don't run them first to ensure they fail, it defeats the purpose of having the test. Most of the time you won't run into problems, but when you do, it's not fun trying to solve them. Who would think to look at a test that's passing?
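
Here's a minimal sketch of that rhythm, with a made-up Order class and a made-up bug:

require 'test/unit'

# Hypothetical: suppose total used to subtract the discount twice.
# The test below gets written FIRST and watched to fail against the
# buggy version; only then does total get fixed as shown.
class Order
  def initialize(price, discount)
    @price = price
    @discount = discount
  end

  def total
    @price - @discount   # was: @price - @discount - @discount
  end
end

class OrderTest < Test::Unit::TestCase
  def test_discount_is_applied_only_once
    order = Order.new(100, 10)
    assert_equal(90, order.total)
  end
end

If the test passes before the fix, it isn't actually exercising the bug - and a passing test is the last place anyone will look.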

The solution, of course, is to forget about testing altogether. Then we won't be lulled into a false sense of security. Right?


This is a story about my journey as a programmer, the major highs and lows I've had along the way, and how this post came to be. It's not about how ecstasy made me a better programmer, so I apologize if that's why you came.

In any case, we'll start at the end, jump to the beginning, and move along back to today. It's long, but I hope the read is as rewarding as the write.

A while back, Reg Braithwaite challenged programming bloggers with three posts he'd love to read (and one that he wouldn't). I loved the idea so much that I've been thinking about all my experiences as a programmer off and on for the last several months, trying to find the links between what I learned from certain languages that made me a better programmer in others, and how they made me better overall. That's how this post came to be. More...


Here's some pseudocode that got added to a production system that might just be the very definition of a simple change:
  1. Add a link from one page to cancel_order.cfm?orderID=12345
  2. In that new page, add the following two queries:
    1. update orders set canceled = 1, canceledOn=getDate() where orderID=#url.orderID#
    2. delete from orderItems
Now, upload those changes to the production server, and run it real quick to be sure it does what you meant it to do.

Then you say to yourself, "Wait, why is the page taking several seconds to load?"

"Holy $%^@," you think aloud, "I just deleted every item from every order in the system!"

It's easy enough for you to recover the data from the backups. It isn't quite as easy to recover from the heart attack.
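
The fix itself, for the record, was presumably nothing more than scoping the delete the way the update already was:

delete from orderItems where orderID = #url.orderID#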

Steve McConnell (among others) says that the easiest changes are the most important ones to test, as you aren't thinking quite as hard about it when you make them.

Are there any unbelievers left out there?


A couple of weeks ago the UH Code Dojo embarked on the fantastic voyage that is writing a program to solve Sudoku puzzles, in Ruby. This week, we continued that journey.

Though we still haven't completed the problem (we'll be meeting again, tentatively on October 15, 2007, to do that), we did construct what we think is a viable plan for getting there, and began to implement some of it.

The idea was based around this algorithm (or something close to it): More...


A couple of days ago the UH Code Dojo met once again (we took the summer off). I had come in wanting to figure out five different ways to implement binary search. The first two - iteratively and recursively - are easy to come up with. But what about three other implementations? I felt it would be a good exercise in creative thinking, and perhaps it would teach us new ways to look at problems. I still want to do that at some point, but the group decided it might be more fun to tackle the problem of solving any Sudoku board, and that was fine with me.

Remembering the trouble Ron Jeffries had in trying to TDD a solution to Sudoku, I was a bit wary of following that path, thinking instead we might try Peter Norvig's approach. (Note: I haven't looked at Norvig's solution yet, so don't spoil it for me!) More...


If you don't care about the background behind this, the reasons why you might want to use rules-based programming, or a bit of theory, you can skip straight to the Drools tutorial.

Background
One of the concepts I love to think about (and do) is raising the level of abstraction in a system. The more often you are telling the computer what, and not how, the better.
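
Here's a tiny Ruby illustration of the difference (the Customer data is made up):

Customer = Struct.new(:name, :total_spent)
customers = [Customer.new("Ann", 1500), Customer.new("Bob", 700)]

# How: spell out the loop, the accumulator, and the bookkeeping.
premium = []
customers.each do |customer|
  premium << customer if customer.total_spent > 1000
end

# What: declare the result you want; the machinery works out the rest.
premium = customers.select { |customer| customer.total_spent > 1000 }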

Of course, somewhere someone is doing imperative programming (telling it how), but I like to try to hide much of that somewhere and focus more on declarative programming (telling it what). Many times, that's the result of abstraction in general and DSLs and rules-based programming more specifically. More...


We could all stand to be better at what we do - especially those of us who write software. Although many of these ideas were not news to me, and may not be for you either, you'd be surprised at how you start to slack off and what a memory refresh will do for you.

Here are (briefly) 10 ways to improve your code from the NFJS session I attended with Neal Ford. Which do you follow? More...


If you aren't one of the lucky few who get to attend the Google Test Automation Conference, there's still good news for us: they'll be posting the presentations to Google's YouTube channel.


Last Saturday, I had the fortune of attending the JUnit Workshop put on by Agile Houston. It was great getting to meet and work with some of the developers in the Houston Area. We started a couple of hours late because of a mixup at the hotel, but it was a good chance to chat with the other developers.

I signed up for a story to implement forums for JUnit.org, which would be used to post hard-to-test code and receive tests for it. The twist was that we wanted to compile the code and run the unit tests that others posted in response against it, providing pass/fail and code coverage statistics (which sounds a lot harder than it really is). The other set of stories I signed up for was related to articles, news, (something else related to those that I can't recall), and RSS feeds for each of them. More...


Just a friendly reminder that Agile Houston is hosting a JUnit website improvement workshop on Saturday, June 16, 2007. The workshop starts at 9 AM and continues all day.

It's at the Courtyard Marriott. We'll be TDDing improvements to the JUnit.org website and I should be there from about 9 AM to 3 PM. See you there!


I recently found the Google Testing Blog and they have a series called "Testing on the Toilet," which are quick one-page write-ups on automated testing issues.

The latest issue of TotT covers Extracting Methods to Simplify Testing.

One of the interesting things for me is that they mention
The first hint that this method could use refactoring is the abundance of comments. Extracting sections of code into well-named methods reduces the original method's complexity. When complexity is reduced, comments often become unnecessary.
This isn't the first place this has come up, and it's not new either (to those in the know). In fact, not too long ago, I pondered something similar when I asked if you can name a block of code, is that a valid indicator that it should be a method?
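
A made-up Ruby example of the move they're describing:

TAX_RATE = 0.0825
Item = Struct.new(:name, :price)

# Before: a comment names the block of code.
def receipt_total(items)
  # sum the item prices, then apply tax
  subtotal = items.inject(0) { |sum, item| sum + item.price }
  subtotal * (1 + TAX_RATE)
end

# After: the block's name becomes a method, and the comment disappears.
def subtotal(items)
  items.inject(0) { |sum, item| sum + item.price }
end

def receipt_total(items)
  subtotal(items) * (1 + TAX_RATE)
end

puts receipt_total([Item.new("tea", 4.0), Item.new("mug", 8.0)])   # => 12.99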

Anyway, I think the Google Testing Blog is a good place to go to learn about testing. (And oh yeah - Google is holding a test automation conference in New York towards the end of summer. It's free, but you'll have to justify why you should be one of the 150 people accepted to attend.)


Want to get a report of a certain session? I'll be attending No Fluff Just Stuff in Austin, Texas at the end of June. So, look at all that's available, and let me know what you'd like to see.

I haven't yet decided my schedule, and it's going to be tough to do so. I'm sure to attend some Groovy and JRuby sessions, but I don't know which ones. In any case, I want to try to leave at least a couple of sessions open for requests, so feel free to let me know in the comments what you'd like to see. (No guarantees though!). Here's what I'm toying with so far (apologies in advance for not linking to the talks or the speakers' blogs, there are far too many of them =)): More...


Don't forget to (learn how to) unit test it using XMLUnit.


Often, when we change code that is not properly designed, cascading changes have to be made throughout the application as bugs ripple through the system. That's one of the ideas behind why we want low coupling between classes. Unit testing also combats the effects of this, but let's suppose we haven't written any tests, and haven't used JUnit Factory to create regression tests.

Given that there is a lot of code out there that isn't quite perfect, wouldn't it be nice to have a tool that could analyze changes and show where they would affect other code? I can imagine how such a tool might work, but I haven't heard of one before now (that I recall, anyway).

So the point of it all: I heard something about BMC and IBM teaming up on such a tool (my understanding is that BMC started it, and IBM later joined the project). I'm assuming it'd be in Java, but does anyone have information on this? Can anyone confirm or deny the story I heard?


Hot off the presses: On Saturday, June 16, 2007 from 9:00 AM (and lasting all day) Agile Houston is hosting a JUnit workshop.

Attendees will be pairing and TDDing improvements to JUnit.org, including: (quoting Agile Houston's announcement) More...


Have any of you Java guys or gals seen or tried JUnit Factory from Agitar? It generates functional unit tests for each method in a given class. A good description of how this can be used (since it can't detect how you expect the code should work, only how it does work) is provided in the FAQ: More...


Yesterday I was working on a little Java program that, when given a table, a "possible" candidate key (which could be composite), and some non-key columns would check to see if the table was in 1st, 2nd, or 3rd normal form(s). One constraint is that this program needs to be procedural in style (or rather, all of my code must reside in the file that contains the main() function).

I started out with the pseudocode programming process in psvm. My listing started to look like: More...


Our second meeting of the UH Code Dojo was just as fun as the first. This time, we decided to switch languages from Java to Ruby. And although we started quite slowly (we came unprepared and undecided on a problem and a language), we pretty much finished the anagram problem.

Now, I mentioned it was slow at first - because we were trying to decide on a problem. I'm guessing we spent about 30-45 minutes just looking over Ruby Quiz and then moving on to Pragmatic Dave's Code Kata. We learned from our experience though, and resolved to determine before-hand what we would do in the future. In any case, we finally decided on anagrams as our problem, and one of us mentioned something to the effect of "I don't know about you all, but I use Java too much at work." Of course, there's not much you can say to an argument like that - Ruby it was!

Since we found ourselves violating YAGNI at the first meeting, we decided to do a little more discussion of the problem before we started coding. One of the initial paths we explored was looping over each letter and generating every possible combination of letters, from 1 to n (n being the length of the input). We then realized that would need a variable number of nested loops, so we moved on to recursion. After that, we explored trying to use yield in conjunction with recursion, in an isolated environment. I don't recall the reasoning behind it, but whatever it was, we were starting to discover that when we passed that fork on the road a few minutes back, we took the path that led to the cannibals. (As a side note, if you're unfamiliar: yield sits in a function, which takes a closure as an argument, and runs the code in the closure -- I think that's a simple way of putting it, anyway).

After smelling the human-stew awaiting us, we backtracked a little and started looking for another idea. Enter idea number two: I'm not sure how to describe it in words, so I'll just show the code now and try to explain it afterwards:

char_count = Array.new(26).fill(0)               # one count per letter of the alphabet
dictionary = ['blah', 'lab', 'timmy', 'in', 'hal', 'rude', 'open']

word = "BlAhrina".downcase

word.each_byte { |x| char_count[x - ?a] += 1 }   # ?a is the byte value of 'a' (Ruby 1.8)

dictionary.each do |entry|
  char_count2 = char_count.clone                 # fresh copy of the counts for each word
  innocent = true
  entry.each_byte do |letter|
    index = letter - ?a
    if char_count2[index] > 0
      char_count2[index] -= 1                    # use up one of the available letters
    else
      innocent = false                           # entry needs a letter the input doesn't have
      break
    end
  end
  puts entry if innocent
end

That's it: quite simple. First we initialize an array with a cell corresponding to each letter of the alphabet. Each cell holds a number, which represents the number of times that letter is used in our input, called word. These cells are set by the line word.each_byte {...}.

Then for each entry in the dictionary, we do something similar: loop through each letter. If the total count for each letter goes to 0, we have a match (and in our case, simply print it to the standard output device). It's really a simple, elegant solution, and I think we're all glad we didn't go down the painful path of recursion. It would be fairly trivial to add a file which contained a real dictionary, and loop over that instead of each word in our array, but we didn't have one handy (nor did we happen to notice that Dave had provided one). And it would have been just an extra step on top of that to find all the anagrams in the dictionary.
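
For what it's worth, swapping in a real word list would be about a one-line change, assuming a file with one word per line:

dictionary = File.readlines("words.txt").map { |w| w.strip.downcase }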

I know this is just a silly little problem that you're not likely to encounter in real life, but it shows how even the simplest of problems can be tough without some thought, and I found it to be great practice. In particular, one problem we had was with trying to use TDD. Although we spent some time looking for tests we could write, and ways to test, and we even wrote an empty test thinking about what to put in there - none of us seemed to find anything to test. Now that we see the solution, it's fairly easy to figure out how to test it, but trying to drive the design with the test was proving fruitless for us. Dave alluded to this on the Kata page:
Apart from having some fun with words, this kata should make you think somewhat about algorithms. The simplest algorithms to find all the anagram combinations may take inordinate amounts of time to do the job. Working though alternatives should help bring the time down by orders of magnitude. To give you a possible point of comparison, I hacked a solution together in 25 lines of Ruby. It runs on the word list from my web site in 1.5s on a 1GHz PPC. It’s also an interesting exercise in testing: can you write unit tests to verify that your code is working correctly before setting it to work on the full dictionary.
I didn't read that before we had started (in fact, it wasn't until we had finished that anyone noticed it), but as you can tell, this exercise performed as promised. Our solution was under 25 lines, and while we didn't test it on his word list, I think our results would have been comparable (in fact, I wouldn't be surprised if we had the same basic solution he did).

Thoughts anybody?


A while back, I posted a short screencast on how to use Selenium to automate testing for your web application. Since familiarizing myself with it, I've intermittently thought about using it as part of my TDD cycle. However, I always felt like it would be too much trouble to be worth it.

Dan Bunea, however, thought differently, and has posted a tutorial about his experience using Selenium with TDD to InfoQ. It's aimed at the .NET developer, but may be worth a read if you are into that stuff, as I am.


On Monday (Jan. 29, 2007) we had our first meeting of the UH Code Dojo, and it was a lot of fun. Before we started, I was scared that I'd be standing there programming all by myself, with no input from the other members. But, my fear was immediately laid to rest. Not only did everyone participate a lot, one of the members, Matt (I didn't catch his last name), even took over the typing duties after about 45 minutes. That turned out to be a good thing - since I still can't type worth a crap on my laptop, and he was much faster than me.

Now, it had only been a couple of months since I last used Java - but it still amazes me how little time away you need to start forgetting simple things, like typing require where import belongs. I found myself making several silly syntax errors over things as small as forgetting the semicolon at the end of a line.

Overall, we started with 7 people, and had 5 for most of the meeting. We finished with four because one person had tons of homework to do. It seemed like those of us who stayed were genuinely enjoying ourselves.

In any case, we decided to stick with the original plan of developing a tic-tac-toe game. You would think that a bunch of computer scientists could develop the whole game in the two hours we were there. But, you'd have been wrong.

I came away from the meeting learning two main points, which I think illustrate the main reasons we didn't complete the game:
  1. YAGNI is your friend
  2. Writing your tests first, and really checking them is very worthwhile
More...


I don't want to turn this into a mouthpiece for the code dojo at University of Houston, but I'm pretty excited about it since we've set the date of our first meeting. We're planning on doing it January 29, 2007 at 7:00 PM. Check the website for more details (such as the room). We have yet to decide on the first problem to solve / topic, but we will have that done by the end of next week. After that, I probably won't post much here about it, or I'll try not to anyway (I realize folks in China, for instance, couldn't care less about it).


For those that don't know, cfrails is supposed to be a very light framework for obtaining MVC architecture with little to no effort (aside from putting custom methods where they belong). It works such that any changes to your database tables are reflected immediately throughout the application.

For instance, if you change the order of the columns, the order of those fields in the form is changed. If you change the name of a column or its data type, the labels for those form fields are changed, and the validations for that column are also changed, along with the format in which it is displayed (for example, a money field displays with the local currency, a datetime in the local format, and so forth). I've also been developing a sort-of DSL for it, so configuration can be performed quite easily programmatically (not just through the database), and you can follow DRY to the extreme. Further, some of this includes (and will include) custom data types (right now, there are only a couple of custom data types based on default data types). More...


The last couple of weeks I've been soliciting teammates and friends of mine to help on starting a code dojo at the University of Houston. Well, we got the go-ahead yesterday from the CougarCS organization, so now we're just trying to plan when we'll have our first meeting. If you go to UH or live around Houston (I don't think we'll be checking IDs or anything), I'd encourage you to come to one of our meetings. You can find more information at CodeDojo.org. Right now, as I said, we don't have a meeting schedule or anything, but you can follow the link to our google group and stay informed that way (of course we will be posting it on the webpage as well).

If you don't live in Houston, but want to start a dojo of your own, we also plan to provide a place for others to post information. We don't have the infrastructure set up yet, but if you contact me, I'll be glad to let you know when we do. Of course, you won't have to have our cheesy logo up there =).


A couple of days ago I wrote about wanting to do a nice test runner interface to my unit (and integration) tests in Coldfusion. Well, it seems that just a couple of days before that, Laura Arguello (possibly in collaboration with her partner, Nahuel Foronda) released cfcUnit Runner on RIA Forge.

Up until now, I've been using CFUnit, about which she says "I believe it could also used to run CFUnit tests, but CFUnit will need to implement a service façade that Flex can use."

I'm going to get cfcUnit and download cfcUnit runner and try it out sometime soon. It looks really sweet. Then, if I can automatically run tests marked as slow or choose to skip all those marked as such, Laura (and Nahuel?) will have saved me a bunch of time and provided for all of us exactly the system I was thinking I wanted!

Update: Robert Blackburn, creator of CFUnit, said in the comments at Laura and Nahuel's blog that he is indeed working on something similar for CFUnit, and would be willing to implement the service façade Laura mentioned. Awesome!


As I'm finishing up a Ruby on Rails project today, I've been reflecting on some of the issues we had, and what caused them. One glaring one is our lack of automated tests - in the unit (what Rails calls unit, anyway) test and functional test categories.

The "unit" tests - I'm not too concerned about, overall. These tests run against our models, and since most of them simply inherit from ActiveRecord::Base (ActiveRecord is an ORM for Ruby), with some relationships and validation thrown in (both of which taken care of by ActiveRecord). In the few cases we have some real code to test, we've (for the most part) tested it.

What concerns me are the functional tests (these test our controllers). Of course, Rails generates test code for the scaffolds (if you use them), so we started with a nice, passing, suite of tests. But the speed of development in Rails combined with the lack of a convenient way to run tests and the time it takes to run them has caused us to allow our coverage to degrade, pretty severely. It contributed to very few automated tests being written, compared with what we may have done in a Java project, for example.

Of course, there were some tests written, but not near as many as we'd normally like to have. When a small change takes just a couple of seconds to make and (say, for instance) 30 seconds to run the test, it becomes too easy to just say, "forget it, I'm moving on to the next item on the list." It definitely takes a lot of discipline to maintain high coverage (I don't have any metrics for this project, but trust me, it's nowhere near acceptable).

Well, that got me thinking about Coldfusion. I notice myself lacking in tests there as well. I'd traditionally write more in Coldfusion than what we did on this Rails project, but almost certainly I'd write less than in an equivalent Java project. And it's not just less because there is less code in a CF or Rails project than in Java - I'm talking more about the percentage of code covered by the tests, rather than the raw number. It's because there is no convenient way to run them, and run them quickly.

For Rails development, I'm using RadRails (an Eclipse plugin), so at least I can run the tests within the IDE. But there is no easy way to run all the tests. I take that back - there is, but for some reason it always hangs on me at 66%, and refuses to show me any results. I can also use rake (a Ruby make, in a nutshell) to run all the tests via the console, but it becomes very difficult to see which tests are failing and what the messages were with any more than a few tests. Couple this with the execution time, and I've set testing aside in favor of programming the application.

In Coldfusion, it takes quite a while to run the tests, period. This is due partly to the performance limitations in creating CFCs, but also to the fact that I'm testing a lot of queries. But at least I can run them pretty conveniently, although it could be a lot better. Now, I've got some ideas to let you run one set of tests, or even one test at a time, and to separate slow tests from fast ones and choose which ones you want to run. So, look out in the future for this test runner when I'm done with it (it's not going to be super-sweet, but I expect it could save some much-needed time). And then the next thing to go on my mile-long to-do list will be writing a desktop CF server and integrating the unit testing with CFEclipse... (yeah, right - that's going on the bottom of the list).


Selenium is an incredibly easy tool you can use to set up automated tests for your web applications. However, if you're like me, you might wince at the thought of having to learn yet another technology - and put it off for the time being due to the "curve" associated with learning it.

To combat that feeling, I created a screencast - starting from download and going through creating an automated test suite. In about 6 minutes, you can have some automated tests for your application to run in just about any browser. The time it saves in manually re-testing is well worth the minor investment you make in getting automated tests. So, check it out. More...


Recently on the Test Driven Development Yahoo Group, James Carr initiated a discussion on TDD anti-patterns for a paper he's writing to submit to IEEE Software's TDD special issue.

It's certainly been an interesting discussion, and he catalogued several of the anti-patterns on his blog.

I think it's a good idea to have names for these smells, since I've noticed a couple of them in my own tests - and it helps remind me to fix them. In particular, I didn't like a few tests in cfrails, which seemed only to test that "such and such works," without asserting anything really. This falls under The Secret Catcher:
A test that at first glance appears to be doing no testing due to the absence of assertions, but as they say, “the devil’s in the details.” The test is really relying on an exception to be thrown when a mishap occurs, and is expecting the testing framework to capture the exception and report it to the user as a failure.
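In Ruby terms, a Secret Catcher looks something like this (hypothetical example):

def test_post_saves
  # No assertions: this only fails if save raises an exception,
  # counting on the framework to catch and report it.
  Post.new("a title", "some meat").save
end
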
I don't recall if I fixed them yet or not, but you can be sure I'll be paying more attention to my tests now!


Well, I guess I lied when I said xorBlog wouldn't be developed until I had caught up on my writing. I still haven't gotten caught up, but this morning I couldn't stand it any more - I had to have a way to categorize posts. Now, I didn't TDD these, and I didn't even put them in the right place. True to the name of the blog, I interspersed code where it was needed. I feel absolutely dirty, but I just couldn't spare the time at the moment to do it right, and I could no longer endure not having any categories. So, I took about 15 minutes, coded up a rudimentary category system, violated DRY in 2 places, and put a few comments like "this needs to be refactored into a CFC" throughout the code (as it needed).

At least I have some categories now (it's not as gratifying a feeling as I thought it would be, however). I plan on refactoring this as soon as I have a chance. I'll write about it as well - it might make for some more interesting reading in the TDDing xorBlog series of posts.


Given a class LeapYear with method isleap? and a data file consisting of year, isleap(true/false) pairs, we want to generate individual tests for each line of data. Using Ruby, this is quite simple to do. One way is to read the file, and build a string of the code, then write that to a file and then load it. That would certainly work, but using define_method is a bit more interesting. Here is the code my partner Clay Smith and I came up with:

require 'test/unit'
require 'leapyear'
class LeapYearTest < Test::Unit::TestCase
   def setup
     @ly = LeapYear.new
   end
   def LeapYearTest.generate_tests
     filename = "testdata.dat"
     file = File.new(filename, "r") #reading the file
     file.each_line do |line| #iterate over each line of the file
      year, is_leap = line.split; #since a space separates the year from if it is a leap year or not, we split the line along a space
      code = lambda { assert_equal(is_leap.downcase=="true", @ly.isleap?(year.to_i)) } #create some code
      define_method("test_isleap_" + year, code) #define the method, and pass in the code
     end
     file.close
   end
end

LeapYearTest.generate_tests

One thing to note, that I initially had trouble with, was the to_i. At first, it never occurred to me that I should be using it, since with Coldfusion a string which is a number can have operations performed on it as if it were a number. In Ruby, I needed the to_i, as isleap? was always returning false with the String version of year.
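
The difference in a nutshell:

"2000" == 2000        # => false - a String never equals a number in Ruby
"2000".to_i == 2000   # => true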

A more interesting item to note is that in the line where we define the method, if you were to attach a block like this:

define_method("test_isleap_"+year) { assert_equal(is_leap.downcase=="true", @ly.isleap?(year.to_i)) }

Then the solution will not work. It creates the correct methods, but when it evaluates them, it appears as though it will use the last value of year, rather than the value at the time of creation.


Since school has been back in, I've been much busier than (seemingly) ever. I haven't had the opportunity to do some of the things I've wanted (such as writing more in TDDing xorBlog), but I wanted to share my Wizard, which you can teach to learn spells. I should also note that this is the product of work with my partner, Clayton Smith.

We were given the assignment as a few unit tests:

require 'wizard'
require 'test/unit'

class WizardTest < Test::Unit::TestCase
   def setup
     @wiz = Wizard.new
   end

   def test_teach_one_spell
     got_here = false
     @wiz.learn('telepathy') { puts "I see what you're thinking"; got_here = true}
     @wiz.telepathy
     assert(got_here)
   end

   def test_teach_another_spell
     got_here = false
     spell_code = lambda { puts "no more clouds"; got_here = true}
     @wiz.learn('stop_rain', &spell_code)

     @wiz.stop_rain
     assert(got_here)
   end

   def test_teach_a_couple_of_spells
     got_here1 = false
     got_here2 = false
     @wiz.learn('get_what_you_want') { |want| puts want; got_here1 = true }
     @wiz.learn('sleep') { puts 'zzzzzzzzzzzz'; got_here2 = true}

     @wiz.get_what_you_want("I'll get an 'A'")
     @wiz.sleep

     assert(got_here1 && got_here2)
   end

   def test_unknown_spell
     @wiz.learn('rain') { puts '...thundering...' }

     assert_raise(RuntimeError, "Unknown Spell") {@wiz.raln }
   end
end

We simply had to make the tests pass:

class Wizard
   def initialize()
     @spells=Hash.new
   end
   def learn(spell, &code)
     @spells[spell]=code
   end
   def method_missing(method_id, *args)
     begin
       @spells["#{method_id}"].call(args)
     rescue
       raise("Unknown Spell")
     end
   end
end

Basically, all that happens is that when you create a Wizard, the initialize() method is called, and it creates a Hash to store spells in. Then you have a method, learn(), which takes as a parameter a block of code and stores that code in the Hash. Then when someone calls a method that doesn't exist for an object, Ruby automatically calls the method_missing() method. In this method, all I do is try to call the code stored in the hash under the name of the method they tried to call. If that doesn't work, I raise an exception with the message "Unknown Spell." Quite simple to do something so complex. I can't even imagine how I'd do something like that in a more "traditional" language (though, I can't say that I've tried either).

Can you imagine how cool it would be to, say, have one programmer who wrote this Wizard into a game, and other programmers who just litter places in the game (in SpellBooks, consisting of Spells, of course) with code that names each spell and specifies what it does? Without even needing to know anything about each other! Instantly extending what a Wizard can do, without having to so much as think about changing Wizard - it's a beautiful world, indeed.


Oh my! I almost forgot the most important part in 'Beginning Ruby' - how to test your code!

require 'test/unit'
class MyTest < Test::Unit::TestCase
    def test_hookup
       assert(2==2)
    end
end

Running that in SciTE gives the following output:

Loaded suite rubyunittest
Started
.
Finished in 0.0 seconds.

1 tests, 1 assertions, 0 failures, 0 errors

On the other hand, if you change one of those twos in the assert() to a three, it shows this:

Loaded suite rubyunittest
Started
F
Finished in 0.079 seconds.

1) Failure:
test_hookup(MyTest) [rubyunittest.rb:4]:
<false> is not true.

1 tests, 1 assertions, 1 failures, 0 errors

The Ruby plugin for Eclipse, however, gives a nice little GUI where you can see the green bar (assuming your code passes all the tests). I also have a classmate who is writing a Ruby unit testing framework based on NUnit for his thesis, so it is supposed to be the bollocks. I'll let you all know more when I know more.


Now that we can insert posts, it is possible to update, select, delete, and search for them. To me, any one of these would be a valid place to go next. However, since I want to keep the database as unchanged as possible, I'll start with test_deletePost(). This way, as posts are inserted for testing, we can easily delete them.

Here is the code I wrote in xorblog/cfcs/tests/test_PostEntity:

<cffunction name="test_deletePost" access="public" returnType="void" output="false">
   <cfset var local = structNew()>   
   <cfset local.newID=_thePostEntity.insertPost(name="blah", meat="blah", originalDate="1/1/1900", author="yoda")>
   <cfset local.wasDeleted = _thePostEntity.deletePost(local.newID)>

   <cfset assertTrue(condition=local.wasDeleted, message="The post was not deleted.")>

   <cfquery name="local.post" datasource="#_datasource#">
      select id from post where id = <cfqueryparam cfsqltype="cf_sql_integer" value="#local.newID#">
   </cfquery>

   <cfset assertEquals(actual = local.post.recordcount, expected = 0)>
</cffunction>

And the corresponding code for deletePost():

<cffunction name="deletePost" output="false" returntype="boolean" access="public">
   <cfargument name="id" required="true" type="numeric">

   <cfset var local = structNew()>
   <cfset local.result = false>
   <cftry>
      <cfquery name="local.del" datasource="#_datasource#">
         delete from post where id = <cfqueryparam cfsqltype="cf_sql_integer" value="#id#">
      </cfquery>
      <cfset local.result=true>
   <cfcatch>
   
   </cfcatch>
   </cftry>
   <cfreturn local.result>
</cffunction>

Originally, I just left the test asserting that local.wasDeleted was true. However, writing just enough of deletePost() in xorblog/cfcs/src/PostEntity to get the test to pass resulted in the simple line <cfreturn true>. Since that would always pass, I also added a check that the inserted post no longer existed.

Now that we have some duplicate code, it's definitely time to do some refactoring. More on that next time. (To be continued...)


We left off after writing the test for the insertPost() method. Now, we're going to make that test pass by writing the code for it. First you'll need to create PostEntity.cfc in the xorblog/cfcs/src directory, and make sure to surround it in the proper <cfcomponent> tags. What follows is that code:

<cffunction name="insertPost" output="false" returntype="numeric" access="public">
   <cfargument name="name" required="true" type="string">
   <cfargument name="meat" required="true" type="string">
   <cfargument name="originalDate" required="true" type="date">
   <cfargument name="author" required="true" type="string">
   
   <cfset var local = structNew()>
   <cftransaction>
   <cfquery name="local.ins" datasource="#variables._datasource#">
      insert into post
      (name, meat, originalDate, lastModifiedDate, author)
      values
      (<cfqueryparam cfsqltype="cf_sql_varchar" value="#arguments.name#">,
       <cfqueryparam cfsqltype="cf_sql_longvarchar" value="#arguments.meat#">,
       <cfqueryparam cfsqltype="cf_sql_timestamp" value="#arguments.originalDate#">,
       <cfqueryparam cfsqltype="cf_sql_timestamp" value="#arguments.originalDate#">,
       <cfqueryparam cfsqltype="cf_sql_varchar" value="#arguments.author#">,
   </cfquery>

   <cfquery name="local.result" datasource="#_datasource#">
      select max(id) as newID from post
      where originalDate=<cfqueryparam cfsqltype="cf_sql_timestamp" value="#arguments.originalDate#">
      and name=<cfqueryparam cfsqltype="cf_sql_varchar" value="#arguments.name#">
   </cfquery>
   </cftransaction>
   <cfif local.result.recordcount is 0>
      <cfthrow message="The new post was not properly inserted.">
   </cfif>
   <cfreturn local.result.newID>
   </cffunction>

There isn't really anything special here, unless you are new to Coldfusion. If that's the case, you'll want to take note of the <cfqueryparam> tag - using it is considered a "best practice" by most (if not all) experienced Coldfusion developers.

The other item of note is that if you were to run this code by itself, it still wouldn't work, since we haven't defined variables._datasource. Many developers would do this in a function called init() that they call each time they create an object. I've done it as well.

I suppose if you were rigorously following the YAGNI principle, you might wait until creating the next method that would use that variable before defining it. I certainly like YAGNI, but my OCD is not so bad that I won't occasionally allow my ESP to tell me that I'm going to use something, even if I don't yet need it. With that said, I try to do it only in the most obvious of cases, such as this one.

Now that we've written the code for insertPost(), it's time to run the test again. Doing so, I see that I have two tests that run green (this one, and our test_hookup() from earlier). We've gone red-green, so now it's time to refactor. Unfortunately, I don't see any places to do that yet, but I think they'll reveal themselves next time when we write our second test and second method in PostEntity. (To be continued...)


So we decided that blog software centers around posts and that for any other feature to be useful, we'd need them first. Therefore, we'll start with a model component for our posts, and we'll call it PostEntity. Before I create that file though, I'm going to go back into my test_PostEntity.cfc file and write a test or two for some functionality that PostEntity should provide.

Thinking of things we should be able to do regarding the storage of posts, it's easy to identify at least insert(), update(), and delete(). However, since you can't update or delete a post that doesn't exist, I figured I'd start with adding a post. I came up with the following test:

<cffunction name="test_insertPost" access="public" returntype="void" output="false">
      <cfset var local = structNew()>
      <cfset local.nameOfPost = "My Test Post" & left(createUUID(),8)>
      <cfset local.meatOfPost = "The meat of the post is that this is a test." & left(createUUID(),8)>
      <cfset local.dateOfPost = now()>
      <cfset local.author = "Sam #createUUID()#">

      <cfset local.newID=_thePostEntity.insertPost(name=local.nameOfPost, meat=local.meatOfPost, originalDate=local.dateOfPost, lastModifiedDate=local.dateOfPost, author=local.author)>

      <cfquery name="local.post" datasource="#variables._datasource#">
         select name, meat, originalDate, author
         from post
         where id = <cfqueryparam cfsqltype="cf_sql_integer" value="#local.newID#">
      </cfquery>

      <cfset assertEquals(actual=local.post.name, expected=local.nameOfPost)>
      <cfset assertEquals(actual=local.post.meat, expected=local.meatOfPost)>
      <cfset assertEquals(actual=local.post.author, expected=local.author)>

      <!--- dateCompare isn't working correctly, so we are testing each datepart --->      
      <cfset assertEquals(actual=month(local.post.originalDate), expected=month(local.dateOfPost))>
      <cfset assertEquals(actual=day(local.post.originalDate), expected=day(local.dateOfPost))>
      <cfset assertEquals(actual=year(local.post.originalDate), expected=year(local.dateOfPost))>
      <cfset assertEquals(actual=hour(local.post.originalDate), expected=hour(local.dateOfPost))>
      <cfset assertEquals(actual=minute(local.post.originalDate), expected=minute(local.dateOfPost))>
      <cfset assertEquals(actual=second(local.post.originalDate), expected=second(local.dateOfPost))>

      <!--- clean up --->
      <cfquery datasource="#_datasource#">
         delete from post where id = #local.newID#
      </cfquery>
   </cffunction>

You'll notice I used a UUID as part of the data. There's no real point to it, I suppose. I just wanted to have different data each time, and thought this would be a good way to achieve that.

You should also be uncomfortable about the comment saying dateCompare isn't working - I am anyway. It doesn't always fail, but occasionally it does, and for reasons I can't figure out, CFUnit isn't reporting why. For now, so I can move on, I'm assuming it is a bug in CFUnit. Since I can test each date part that is important to me individually and be sure the dates are the same if they all match, I don't feel too bad.

Another thing to note is the use of the var local. By default, any variables created are available everywhere, so to keep them local to a function, you need to use the var keyword. I like to just create a struct called local and put all the local variables in there - it just makes things easier.

Finally, some people might not like the length of that test. Right now, I don't either, but we'll see what we can do about that later. Others may also object to using more than one assertion per test. I don't mind it so much in this case, since we really are only testing one thing. If you like, you could also create a struct out of each and write a UDF like structCompare() and do the assertion that way. I haven't tested this one personally, but there is one available at cflib. In either case, I don't see much difference, other than that one way requires me to write more code than I need.

Now I run the test file we created and find that, as expected, the test still fails. Besides the fact that we don't even have a PostEntity.cfc, we haven't yet instantiated an object of that type, nor have we defined _datasource and the like. Let's do that in the setUp() method.

<cffunction name="setUp" access="public" returntype="void" output="false">
   <cfset variables._datasource="xorblog">
   <cfset variables.pathToXorblog = "domains.xorblog">
   <cfset variables._thePostEntity = createObject("component", "#variables.pathToXorblog#cfcs.src.PostEntity").init(datasource=_datasource)>
</cffunction>

Now our tests still fail, because we have no code or database. So create the datasource and database with columns as needed:

id (int, primary key, autonumber)
name (nvarchar 50)
meat (ntext)
originalDate (datetime)
lastModifiedDate (datetime)
author (nvarchar 50)

Next time, we'll start coding and get our first green test. (To be continued...)


Since I wanted to start this blog, I thought it would be good practice to write the software that runs it using test-driven development. I've used a bit of TDD recently for additions to existing applications, but I've not yet started writing an application using it from beginning to end. I'm getting sick of eating Italian microwaveable dinners when I have to maintain code. This is my chance to eat something else. So, without further ado, we'll jump right in.

The first thing I did of course, was to create my directory structure. For the time being, we have:

xorblog/cfcs/src

and

xorblog/cfcs/tests

I like to keep the tests separate from the source. I don't have a reason behind it, other than it helps keep me a bit organized.

Next, I thought about what a blog needs. We want to deliver items that have the highest business value first, and move on to things that are lower on the value scale later. In doing this, we get a working application sooner rather than later, and hence the blog can be used at the earliest possible moment in its development.

With that in mind, we probably shouldn't start with things like Comments or functionality that lets us get included in places like Technorati. Since you need content to make anything else useful, I thought I'd start with that. Indeed, the Post is the core part of a blog. Therefore, the first thing I did was create test_PostEntity.cfc under xorblog/cfcs/tests.

Now, I'm using CFUnit for my tests, and this assumes you already have it set up. If you need help on that, you can visit CFUnit on SourceForge.

The first thing I do in test_PostEntity.cfc is write test_hookup(), to make sure everything is working:

<cfcomponent extends="net.sourceforge.cfunit.framework.TestCase" output="false" name="test_PostEntity">
   <cffunction name="test_hookup" access="public" returntype="void" output="false">
      <cfset assertEquals(expected=4, actual=2+2)>
   </cffunction>
</cfcomponent>

Next, we need a way to see the status of and run our tests. For this we have test_runner.cfm, which for the most part just copies what you'll find at the CFUnit site linked above:

<cfset testClasses = ArrayNew(1)>
<cfset ArrayAppend(testClasses, "domains.xorblog.cfcs.tests.test_PostEntity")>
<!--- Add as many test classes as you would like to the array --->
<cfset suite = CreateObject("component", "globalcomponents.net.sourceforge.cfunit.framework.TestSuite").init( testClasses )>
<cfoutput>
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
   <title>Unit Tests for xorBlog</title>
</head>
<body>
<h1>xorBlog Unit Tests</h1>
<cfscript>
   createobject("component", "globalcomponents.net.sourceforge.cfunit.framework.TestRunner").run(suite,'');
</cfscript>
</body>
</html>
</cfoutput>


Finally, we run that page in a browser to make sure the test runs green - and it does. Now that we have our test environment set up, we can start writing tests for our PostEntity that doesn't yet exist. (To be continued...)


