TDD is a waste of time… until you use it on a project that matters to you

Alonso Del Arte
11 min read · May 6, 2019

Photo by Aziz Acharki on Unsplash

It hardly needs to be said that any computer program, except the simplest and least important, needs to be tested. When and how it is to be tested is still a matter of debate, and some hold the opinion that test-driven development (TDD) is a flash in the pan and a waste of time.

Criticisms of TDD include that the methodology emphasizes process over substance: in many demonstrations of TDD, the reader gets bored because they can see the eventual solution to the example problem coming from miles away.

The examples used to explain TDD should, of necessity, be easy to implement, or at least easy to understand.

But they should not be about duplicating something that is already built into the chosen platform, which for this article will be the Java Development Kit.

The example that I will use today is of a function that gives the nth prime number. For example, p(4) = 7, p(25) = 97, p(168) = 997 (see entry A000040 in Sloane’s OEIS for more prime numbers).

Do you care about something like that? You might, you might not. It’s nice if you get it to work, but if you try and don’t get it to work, or if you don’t try at all, no one cares. It’s a low-stakes situation.

The very first thing you should do is write the test. I personally prefer to write a stub first, which is either a function that returns an obviously wrong value, or a procedure that does nothing at all.

And since you gotta have both test and source to run the tests anyway, it really doesn’t make a difference which one you write first, just as long as the source is something that will fail the first time. Here’s the stub:

    // STUB TO FAIL THE FIRST TEST
    public static int nthPrime(int index) {
        return -1024;
    }

And now the test:

    @Test
    public void testNthPrimeFirstPrimeIs2() {
        int expected = 2;
        int actual = nthPrime(0);
        assertEquals(expected, actual);
    }

In programming, we’re supposed to make everything zero-indexed, most of the time. So, even though 2 is the first prime, it should be the result for index 0, right? I’ll come back to this point later.

The easiest way to make the test pass is to have nthPrime() just return 2.

    public static int nthPrime(int index) {
        return 2;
    }

The test should now pass. The TDD skeptics are probably rolling their eyes right now. The next test is very similar:

    @Test
    public void testNthPrimeSecondPrimeIs3() {
        int expected = 3;
        int actual = nthPrime(1);
        assertEquals(expected, actual);
    }

We can hold off on writing the real implementation of nthPrime() just a tad longer.

    public static int nthPrime(int index) {
        return index + 2;
    }

Actually, we could keep this game going longer still. But you get the idea. This is the sort of thing the TDD skeptics point to in their criticism.

I don’t like having a test that checks only a single prime, yet here we have two tests that each deal with a single prime. A test shouldn’t be for just one prime, unless that prime is for some reason special to our implementation, like in these next two tests:

    @Test(timeout = 1000)
    public void testNthPrimeLargest32BitPrime() {
        int actual = nthPrime(105097564);
        assertEquals(Integer.MAX_VALUE, actual);
    }

    @Test(expected = ArithmeticException.class)
    public void testNthPrimeBeyondRange() {
        int index = 105097565;
        int actual = nthPrime(index);
        System.out.println(index + "th prime is said to be " + actual);
    }

Quick note: Integer.MAX_VALUE is the largest signed 32-bit prime. It’s a Mersenne prime that was known to Euler back in 1772, though Euler most likely did not know or care that it’s the 105,097,565th prime.

The easiest way to make testNthPrimeLargest32BitPrime() pass is to simply have nthPrime() check if index == 105097564, and if it is, return Integer.MAX_VALUE, otherwise return index + 2. Thus we postpone actually implementing nthPrime() longer still.
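
In code, that dodge might look something like this (still just a fake to keep the new test green, not a real implementation):

    // STILL A FAKE: special-case the one large value the new test asks about
    public static int nthPrime(int index) {
        if (index == 105097564) {
            return Integer.MAX_VALUE; // the 105,097,565th prime, zero-indexed
        }
        return index + 2; // good enough for the first two tests, wrong for almost everything else
    }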

I would like testNthPrimeBeyondRange() better if it came up with a random int greater than 105,097,564.

    @Test(expected = ArithmeticException.class)
    public void testNthPrimeBeyondRange() {
        int index = 105097565
                + (int) Math.floor(Math.random() * 300000000);
        System.out.println("Prime at " + index + "?");
        int actual = EratosthenesSieve.nthPrime(index);
        System.out.println(index + "th prime is said to be " + actual);
    }

But what I would like even more is a test that checks a group of consecutive primes, like say, a hundred consecutive primes just shy of 10,000.

In initialization early on, we should have private static final int PRIME_PI_10000 = 1229, and maybe in setUpClass() we should have a static ArrayList<Integer> PRIMES_TO_10000 initialized and filled with the first 1,229 prime numbers.
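
Neither of those is spelled out in the listings here, so here is a minimal sketch of what that test fixture might look like, using a plain sieve of Eratosthenes up to 10,000 (the field names match the ones just described; note that anything a JUnit 4 @BeforeClass method touches has to be static):

    private static final int PRIME_PI_10000 = 1229;
    private static final ArrayList<Integer> PRIMES_TO_10000 = new ArrayList<>();

    @BeforeClass
    public static void setUpClass() {
        boolean[] composite = new boolean[10001]; // composite[n] becomes true if n is composite
        for (int p = 2; p * p <= 10000; p++) {
            if (!composite[p]) {
                for (int multiple = p * p; multiple <= 10000; multiple += p) {
                    composite[multiple] = true;
                }
            }
        }
        for (int n = 2; n <= 10000; n++) {
            if (!composite[n]) {
                PRIMES_TO_10000.add(n); // ends up holding all 1,229 primes below 10,000
            }
        }
    }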

    @Test
    public void testNthPrimeCluster() {
        int actual;
        for (int i = 1129; i < PRIME_PI_10000; i++) {
            actual = nthPrime(i);
            assertEquals(PRIMES_TO_10000.get(i).intValue(), actual);
        }
    }

Now we can move on to actually trying to write an implementation that will work.

Here’s my first try at the actual implementation. Before running the test suite, I disabled testNthPrimeBeyondRange() just in case that one takes too long (no need to worry about testNthPrimeLargest32BitPrime() because that one has a timeout).

    public static int nthPrime(int index) {
        List<Integer> primes = new ArrayList<>();
        primes.add(2); // seed the list with the only even prime
        int currIndex = 0;
        int currTestIndex, currDiv;
        int currNum = 1;
        int currLen = 1;
        boolean divNotFoundYet;
        while (currIndex < index) {
            divNotFoundYet = true;
            currNum += 2; // only ever visit odd numbers: 3, 5, 7, 9, ...
            currTestIndex = 0;
            while (divNotFoundYet && currTestIndex < currLen) {
                currDiv = primes.get(currTestIndex); // trial-divide by each prime found so far
                divNotFoundYet = (currNum % currDiv != 0);
                currTestIndex++;
            }
            if (divNotFoundYet) {
                primes.add(currNum);
                currLen++;
                currIndex++;
            }
        }
        return currNum;
    }

Hmm… all the tests passed, except for testNthPrimeBeyondRange(), which was skipped with @Ignore; testNthPrimeLargest32BitPrime(), which timed out because I left out the check of whether index == 105097564; and… testNthPrimeFirstPrimeIs2(), because 2 was the expected result but the actual result was 1.

Okay, so I forgot that 2 is special because it is the only even prime. Well, I just added an if statement at the top to return 2 for an index of 0.
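
That if statement is tiny; as a sketch of where it goes:

    public static int nthPrime(int index) {
        if (index == 0) {
            return 2; // 2 is the only even prime; the loop below only visits odd numbers
        }
        // ... the rest of the implementation above stays exactly the same
    }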

Early on it was easy to dismiss testNthPrimeFirstPrimeIs2() as a low-information test, since we might have assumed the only time it would fail would be the obligatory first fail of TDD.

But here it demonstrates one benefit of TDD: as long as we have a meaningful first fail which we then turn into a first pass, any subsequent fails will immediately point out where the problem is. You will use the debugger a lot less, or maybe not at all.

This example provides a way to illustrate refactoring: notice that currLen and currIndex both get incremented every time a prime is added to the list. It would be preferable to have only one or the other, even though it would not affect program performance in any noticeable way.

I leave this optimization as an optional exercise for you. The way I would do it is to first adjust wherever currLen or currIndex is used to make a decision, and delete the declaration only once NetBeans or IntelliJ (or whatever IDE I’m working in) lets me know the variable is not used.

Another optimization, which would be more consequential to program performance, would be to try as divisors only the primes no greater than √m, where m is a number we think might be prime.

For example, to verify that 997 is prime, we don’t actually need to check that it’s not divisible by 991, 983, 977, and so on and so forth down to 37. It suffices to verify it’s not divisible by 2, 3, 5, 7, 11, 13, 17, 19, 23, 29 and 31. That’s just eleven primes instead of the 167 primes below 997.

Also, since currNum only loops over odd numbers, it is unnecessary to test whether or not it’s divisible by the first prime, 2. This optimization is in the category of greater elegance but with no appreciable performance gain.
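
Put together, here is roughly how the refactored function might look with both optimizations in place. This is my own sketch rather than a verbatim copy of any particular refactoring:

    public static int nthPrime(int index) {
        if (index == 0) {
            return 2;
        }
        List<Integer> primes = new ArrayList<>();
        primes.add(2);
        int currIndex = 0;
        int currNum = 1;
        while (currIndex < index) {
            currNum += 2;
            boolean divNotFoundYet = true;
            int currTestIndex = 1; // start at primes.get(1): no point trial-dividing an odd number by 2
            while (divNotFoundYet && currTestIndex < primes.size()) {
                int currDiv = primes.get(currTestIndex);
                if ((long) currDiv * currDiv > currNum) {
                    break; // no prime divisor up to the square root, so currNum is prime
                }
                divNotFoundYet = (currNum % currDiv != 0);
                currTestIndex++;
            }
            if (divNotFoundYet) {
                primes.add(currNum);
                currIndex++; // currLen is gone; primes.size() serves that purpose now
            }
        }
        return currNum;
    }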

Hopefully, when you refactor to make these optimizations, every test will still pass, except those that are skipped or time out. In my case, however, it turned out that testNthPrimeCluster() and testNthPrimeSecondPrimeIs3() were now failing.

The problem for me arose when the index was neither 0 nor one of the special values at the edge of the function’s range. I was quickly able to pinpoint that I had neglected to correct the condition of one of the while loops.

TDD is no substitute for original human thought. But it is a great substitute for the slow, tedious and error-prone process of checking, one by one, the program’s latest outputs against established results each time you make a seemingly minor and inconsequential change.

One occasional criticism of testing in general is that the source and tests can go out of sync when the requirements for the program change.

At least in TDD, the solution is simple: you change the tests to reflect the new requirements. Then you run the tests.

The only tests that should fail are the ones affected by the change in program requirements. All the other tests ought to still pass, assuming of course they were passing before.

Once you have tests that fail because of the changed requirements, that’s when you change your source to meet the new requirements.

Or change only one test at a time, provided your IDE gives you the ability to run a single test out of many by itself, like IntelliJ does. And also provided that you leave yourself some indicator of where you left off if you have to interrupt your workflow.

To continue with our example, even if it’s not one you particularly care about, let’s say that your product manager now tells you that the client wants p(0) = 1, so that the actual primes then match up to a 1-indexed array rather than a 0-indexed array.

Of course you know very well that 1 is not a prime number. There are good reasons for that which apply regardless of unique factorization. And there are also stupid reasons that are frequently mentioned.

Good or bad, the reasons for 1 not being prime are irrelevant now that the client has indicated they want nthPrime(0) to return 1. We will need to add a test for that:

    @Test
    public void testNthPrimeZeroethPrimeIs1() {
        int expected = 1;
        int actual = nthPrime(0);
        assertEquals(expected, actual);
    }

And we will also need to make changes to all of our previously written tests:

    @Test
    public void testNthPrimeFirstPrimeIs2() {
        int expected = 2;
        int actual = nthPrime(1);
        assertEquals(expected, actual);
    }

    @Test
    public void testNthPrimeSecondPrimeIs3() {
        int expected = 3;
        int actual = nthPrime(2);
        assertEquals(expected, actual);
    }

    @Test(timeout = 1000)
    public void testNthPrimeLargest32BitPrime() {
        int actual = nthPrime(105097565);
        assertEquals(Integer.MAX_VALUE, actual);
    }

    @Test(expected = ArithmeticException.class)
    public void testNthPrimeBeyondRange() {
        int index = 105097566
                + (int) Math.floor(Math.random() * 300000000);
        System.out.println("Prime at " + index + "?");
        int actual = nthPrime(index);
        System.out.println(index + "th prime is said to be " + actual);
    }

    @Test
    public void testNthPrimeCluster() {
        int actual;
        for (int i = 1129; i < PRIME_PI_10000; i++) {
            actual = nthPrime(i);
            assertEquals(PRIMES_TO_10000.get(i - 1).intValue(), actual);
        }
    }

On that last one, take care to either add an initial 1 to PRIMES_TO_10000 or adjust how you index into that list. If you choose the latter option, take care not to iterate the index beyond the last element, or the test might trip an IndexOutOfBoundsException.

All of these tests should now fail, except for testNthPrimeBeyondRange(), since it is certain that the pseudorandom index will be out of range even in the unlikely event that Math.random() were to give us precisely 0.0.

Once we’re certain the failing tests are failing for the right reasons, and not throwing exceptions due to careless mistakes on our part, we can go ahead and change the source to meet the new requirements.
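
One minimal way to do that, sketched here with a hypothetical helper name of my own, is to treat 0 as a special case and shift every other index down by one before handing it off to the old zero-indexed logic:

    public static int nthPrime(int index) {
        if (index == 0) {
            return 1; // per the new requirement, even though 1 is not actually prime
        }
        // nthPrimeZeroIndexed() is a hypothetical rename of the earlier implementation
        return nthPrimeZeroIndexed(index - 1);
    }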

With TDD, the prospect of making changes should not cause us concern, whether those changes are in response to a new client requirement or simply because we think we can improve what we’ve written so far.

Another possible optimization is that the containing class could have a private static store of prime numbers, initialized with maybe the first thousand prime numbers. Then it’s a simple lookup when index is small enough.

For example, we call nthPrime(5000). The private store gets augmented to contain the first 5,000 primes. If we then call nthPrime(3000), the function merely looks up the 3,000th prime in the private store.

A call of nthPrime(8000) would again require augmenting the private store, to something like 32 kilobytes’ worth of int values.

A call like nthPrime(105097342) could be problematic, however: that might require augmenting the private store to roughly 400 megabytes’ worth of int values, and considerably more if they’re boxed as Integer objects in an ArrayList. I think the Java Virtual Machine would throw some kind of error before getting even close to the prime number we’re looking for.
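
Here is a sketch of that caching idea; the field and helper names are my own, and to keep it short the sketch sticks with the original zero-indexed convention:

    // A class-level cache of primes; it grows only as far as callers actually need.
    private static final List<Integer> PRIME_CACHE = new ArrayList<>();

    static {
        PRIME_CACHE.add(2); // seed with 2 and 3 so new candidates can step through odd numbers only
        PRIME_CACHE.add(3);
    }

    public static int nthPrime(int index) {
        while (PRIME_CACHE.size() <= index) { // extend the cache only when it falls short
            int candidate = PRIME_CACHE.get(PRIME_CACHE.size() - 1) + 2;
            while (!isPrimeGivenCache(candidate)) {
                candidate += 2;
            }
            PRIME_CACHE.add(candidate);
        }
        return PRIME_CACHE.get(index); // when the cache already reaches this far, it's a plain lookup
    }

    private static boolean isPrimeGivenCache(int n) {
        for (int p : PRIME_CACHE) {
            if ((long) p * p > n) {
                return true; // no prime divisor up to the square root of n
            }
            if (n % p == 0) {
                return false;
            }
        }
        return true;
    }

A real version would also have to respect the memory ceiling worried about above, perhaps by capping how far the cache may grow and falling back to the ArithmeticException that testNthPrimeBeyondRange() expects.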

It’s important to note that TDD is not a magic fix for whatever problems you have writing software. If you can’t figure out how to program something without TDD, you’re not going to be able to figure it out with TDD.

But if you can figure it out, TDD will get you there faster, and with greater confidence in what you’ve written.

Another criticism of TDD is that it takes longer to go from the specifications to a working program. Even if that’s true, it’s better to spend more time building a solid foundation than to spend a lot of time working with a debugger.

I now come back to: do you care about nthPrime()? Probably not. And you might also be asking if I care, or if I would care if I weren’t using it as the example in this article.

In my own Algebraic Integer Calculator project (available through GitHub), I frequently need the Boolean isPrime() function, which I wrote myself. But I have never needed nthPrime().

For example, isPrime(Integer.MAX_VALUE) takes not even a second to return true. Even if that function actually tries dividing Integer.MAX_VALUE by each odd number from 3 to 46,341 (including obviously composite numbers like 9 and 46,335), it’s still much faster than trying to do the sieve of Eratosthenes up to Integer.MAX_VALUE.
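The actual isPrime() is in that repository; a trial-division version of the idea just described might look like this (my sketch, not necessarily the repository’s exact code):

    // Trial division by 2 and then by every odd number up to the square root of n.
    public static boolean isPrime(int n) {
        if (n < 2) {
            return false;
        }
        if (n % 2 == 0) {
            return n == 2;
        }
        for (int d = 3; (long) d * d <= n; d += 2) {
            if (n % d == 0) {
                return false;
            }
        }
        return true;
    }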

If I really did need in Java a function that would quickly and reliably give me the nth prime for some arbitrary index around 105,000,000, it would probably make more sense to buy it from someone than to try to program it myself.

When working in Wolfram Mathematica, I do find myself using its built-in Prime[], which is also recognized by Wolfram Alpha. Try it right now: Prime[105097342] (or use any 9-digit index that will give you a prime less than 2³¹ − 1).

It might take a few seconds, but most of that time will be due to your Internet connection speed. It will most likely be a hell of a lot faster than what we’ve come up with here, assuming it does deliver a result, without causing an error pertaining to a stack overflow or something like that.

In Mathematica, I use the Boolean PrimeQ[] more often than Prime[]. And when I do use Prime[], it is almost always with a Range[], to obtain consecutive prime numbers, not to pick out one prime at an arbitrary index.

When I need consecutive primes in a Java program, or in a JUnit test, it makes a lot more sense to sieve them out with a simple algorithm like the sieve of Eratosthenes early on (for example, in the JUnit @BeforeClass-annotated setUpClass()).

If you can implement a quick and reliable nthPrime() for large indices in Java, that’s great. It would most certainly be because you have an actual need for it, not because you read this article.

A much more convincing demonstration if you’re a TDD skeptic is to use TDD on a project that actually matters to you. The reason need not necessarily be because the project is directly relevant to your paycheck.

Try TDD out on just a small unit of your project. Preferably something that you don’t think is terribly difficult, but do think could be very time-consuming.

I think you will be pleasantly surprised by how easily and confidently you implement something you thought would have taken a lot longer.

Written by Alonso Del Arte, a Java and Scala developer from Detroit, Michigan. AWS Cloud Practitioner Foundational certified.