Some trials and tribulations of testing after the fact

Alonso Del Arte
6 min read · Mar 29, 2019

Photo by Dan Lohmar on Unsplash

As long as people have been writing computer programs, people have been testing computer programs in some way or another.

After all, you don’t want your program to fail when everyone’s counting on it for some important purpose.

For example, an automatic avionics system that kicks in only if the pilots are actually incapacitated, and not under any other circumstance. Or a 9–1–1 dispatch system that correctly switches over to another disk when one disk is full.

To a lot of people for a long time, testing meant executing the program and then visually inspecting the print-out to make sure the results were correct. Of course this can be tedious and error-prone.

Using the computer itself to check the outputs is not a new idea. And as long as we’re not working in artificial intelligence, we don’t have to worry about the computer trying to fool us.

We can properly call this “automated testing,” of which “automatic testing” is a subset: a setup in which the computer runs the tests automatically whenever some trigger occurs, such as saving a file in the project.

It turns out that using automated testing to guide the process of writing a program is not a new idea either. But many programmers don’t at first realize how beneficial “test-driven development” can be, even if they do reap some of the benefits of automated testing.

Automated testing after writing the program is better than no automated testing at all. But test-driven development (TDD) is much better than testing after the fact.

In this article, I won’t really be talking about any specific programming language (like Java) or unit testing framework (like JUnit). The underlying principles are quite applicable to most modern programming languages and testing frameworks.

According to the JUnit FAQ,

Tests should be written before the code. Test-first programming is practiced by only writing new code when an automated test is failing.

Whenever a customer test fails or a bug is reported, first write the necessary unit test(s) to expose the bug(s), then fix them. This makes it almost impossible for that particular bug to resurface later.

Test-driven development is a lot more fun than writing tests after the code seems to be working. Give it a try!

If I had read this two years ago, I would have thought it was completely backwards, and that “fun” should not even be a criterion here.

But now, having experienced some of the problems that come with testing after the fact, I am switching over to TDD, though with some exemptions for some user interface components.

I’m not completely dogmatic in my use of TDD. I really prefer to first write stubs (functions that return obviously wrong values, or procedures that don’t do anything).

And then I write tests that the stubs will flunk. Once I see those are failing, I write the actual subroutines that I think will pass the tests.
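
Here is a minimal sketch of what that rhythm can look like in Java with JUnit 4. The NumberTheory class and its gcd function are hypothetical, just for illustration:

    // In NumberTheory.java, a stub that deliberately returns an obviously
    // wrong value, since no greatest common divisor is ever negative
    public static int gcd(int a, int b) {
        return -1;
    }

    // In NumberTheoryTest.java, assuming import org.junit.Test; and
    // import static org.junit.Assert.assertEquals;
    @Test
    public void testGcd() {
        assertEquals(6, NumberTheory.gcd(18, 24));
    }

Once I see that assertion failing, I can replace the stub with the real thing (the Euclidean algorithm, in this case) and rerun the test.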

If you’ve been testing after the fact, I hope that the following overview of some of the pitfalls will convince you to switch over to TDD as well.

The program is not testable

The main drawback of testing after the fact that most experts cite is that the program turns out not to be testable. Then you have to seriously rework it so that it can be tested.

It’s not something I’ve actually experienced. My project that convinced me to use automated testing involves mathematical functions. A function either returns the right value or it doesn’t. That’s very testable.

I can still think of a few simplistic but clear examples of a program that is not testable.

Take the infamous FizzBuzz. If your implementation just prints to the console, it’s not very testable. You need to write something like fizzBuzz(int n), which can be tested against any int, and not just the ones from 1 to 100.
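
For instance, the testable Java version might look something like this (a sketch; FizzBuzzer is just a placeholder class name, and the test assumes the usual JUnit assertEquals import):

    // Takes any int and returns a String that an assertion can check
    public static String fizzBuzz(int n) {
        if (n % 15 == 0) return "FizzBuzz";
        if (n % 3 == 0) return "Fizz";
        if (n % 5 == 0) return "Buzz";
        return Integer.toString(n);
    }

    // The console version just loops from 1 to 100 printing fizzBuzz(i),
    // while the tests can probe any int they like
    @Test
    public void testFizzBuzz() {
        assertEquals("FizzBuzz", FizzBuzzer.fizzBuzz(30));
        assertEquals("Fizz", FizzBuzzer.fizzBuzz(-3));
        assertEquals("7", FizzBuzzer.fizzBuzz(7));
    }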

So the fix is very easy in that case. Now imagine a more complicated program, one with dozens of procedures (by that I mean subroutines that are like functions but don’t return anything).

Rewriting that to be testable could turn out to be a major pain in the neck. And maybe not much different from debugging spaghetti code in a legacy application.

Writing tests after the fact is a chore

Even when the program is testable, writing tests after the fact can still feel like quite a chore.

Presumably at this point you’ve done a bit of what I’m going to call, for lack of a better term, “manual testing.” Then most likely the automated test will merely confirm your program works correctly. Big deal, no one cares.

For example, for a couple of subprograms of my visualization of imaginary quadratic integers project, I wrote very limited read-eval-print loops (REPLs) for my own use, which I used to test “manually.”

Then my automated tests merely repeated my “manual” tests, though of course I quickly realized I could make the automated tests go through hundreds if not thousands of cases in a matter of seconds.
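
For example, using the hypothetical gcd function from the earlier sketch, one JUnit test can grind through a thousand pseudorandomly chosen cases:

    import java.util.Random;

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class NumberTheoryTest {

        private static final Random RANDOM = new Random();

        // A thousand cases, typically in well under a second
        @Test
        public void testGcd() {
            for (int i = 0; i < 1000; i++) {
                int expected = RANDOM.nextInt(1000) + 2;
                int multiplier = RANDOM.nextInt(100) + 1;
                // multiplier and multiplier + 1 are coprime, so the greatest
                // common divisor of a and b should be exactly expected
                int a = expected * multiplier;
                int b = expected * (multiplier + 1);
                assertEquals(expected, NumberTheory.gcd(a, b));
            }
        }
    }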

There is also the possibility that you’ve “over-engineered” the program, making it more complicated than it needs to be. Then writing tests might be even more of a chore.

With a deadline looming, if your program seems to be working, you might even decide to skip the automated tests, and hope the quality assurance team catches any problems before the program goes out to the end users.

A test fails because the test is wrong

In my opinion, the worst thing about testing after the fact is when a test fails because the test itself is wrong. So you go over your program with a fine-tooth comb and you get frustrated because everything should work correctly.

And then finally it occurs to you to look at the test and then you realize you’ve made some silly mistake in your test. This has definitely happened to me, and it contributes to how testing after the fact feels like a chore.

Other pitfalls?

I have probably forgotten to mention some pitfall or other. If so, please let me know in the comments.

Test-driven development prevents these problems

With test-driven development, the program will be testable because you’re writing it to pass specific tests. This is true even if you first write stubs, then tests, and then flesh out the stubs so that they actually pass the tests.

And you don’t have to worry about over-engineering as long as you strive to figure out the minimum necessary to pass the tests.

You should also strive to have a test initially fail for the right reason. In my opinion, a compilation error does not make for a valid failing first test — in JavaScript you really need to watch out for this.

Nor should an unexpected exception be counted as a meaningfully failing first test.

Only a wrong result tested in an assertion should count as a meaningful first failure. Without a meaningful first failure, you should not move on to working on making the test pass.
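
To make the distinction concrete, consider a hypothetical RomanNumerals class:

    // A stub that returns an obviously wrong placeholder value
    public static String toRomanNumeral(int n) {
        return "";
    }

    @Test
    public void testToRomanNumeral() {
        // Fails because the assertion itself reports a wrong result
        // (expected XIV, got an empty String): a meaningful first failure
        assertEquals("XIV", RomanNumerals.toRomanNumeral(14));
    }

    // If the stub instead did
    //     throw new UnsupportedOperationException("Not written yet");
    // the test run would end with an unexpected exception rather than a
    // failed assertion, which by the standard above is not meaningful.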

The meaningful first failure is important because it will save you the frustration of a false fail because of a bad test… as well as the frustration of a false pass.

With TDD it’s still possible to write a bad test. The difference is that a bad test in TDD can be far more clarifying than a bad test when testing after the fact, and much less frustrating.

With TDD, writing tests becomes an integral part of the process, instead of a low-priority chore that you might not even have time for.

Even so, “manual” testing in a REPL is still a good idea. It might merely confirm that the program works correctly. Or it might expose a problem, in which case you should write an automated test specifically for the problem you’ve just uncovered.

In conclusion, if you’re writing automated tests only after your program seems to be working correctly, and you’re experiencing any of the problems listed in this article, it’s time for you and your team to switch over to TDD.

Alonso Del Arte is a Java and Scala developer from Detroit, Michigan. AWS Cloud Practitioner Foundational certified.
