Cory LaNou
Wed, 21 Apr 2021

Table Driven Testing In Parallel

Overview

Table driven testing is not a unique concept to the Go programming language. However, there are some great features that make table driven testing in Go faster and more reusable. This article will cover how to get the most out of table driven testing with Go (golang).

Target Audience

This article is aimed at developers who have at least a minimal amount of experience with Go, including some experience writing tests.

In this article, we'll cover the following topics:

  • Introduction to a basic test in Go
  • Running basic tests
  • Creating a table driven test
  • Isolating a specific test to run in a table driven test
  • Running tests in parallel
  • Running sub-tests in parallel

The basic test

For this article, we'll write some tests for the standard library's strings.Index function. We purposely picked this function because it's easy to understand what it does, and it won't distract from the testing methodologies we'll be presenting.

Given the following string:

Gophers are amazing

If we searched for the index of the word are, we would expect to get the value of 8 back.

Now that we have our first test case, we can write the following test:

func Test_Index(t *testing.T) {
	sentence := "Gophers are amazing"
	substring := "are"
	got := strings.Index(sentence, substring)
	// expected value:
	exp := 8
	if got != exp {
		t.Errorf("unexpected value for indexing %q for %q.  got %d, exp %d", sentence, substring, got, exp)
	}
}

And if we run the test, we'll see that it passes with the following output:

$ go test -v ./...
=== RUN   Test_Index
--- PASS: Test_Index (0.00s)
PASS
ok      github.com/gopherguides/training        0.060s

More Tests

For a function like this, we can quickly come up with a set of possible test cases:

  • Test that we match the first word in a sentence
  • Test that we don't match any words in a sentence

While we can come up with even more use cases, this will be enough to illustrate the point of this article.

Here are the two new tests:

func Test_First(t *testing.T) {
	sentence := "Gophers are amazing"
	substring := "Gophers"
	got := strings.Index(sentence, substring)
	// expected value:
	exp := 0
	if got != exp {
		t.Errorf("unexpected value for indexing %q for %q.  got %d, exp %d", sentence, substring, got, exp)
	}
}
func Test_None(t *testing.T) {
	sentence := "Gophers are amazing"
	substring := "rust"
	got := strings.Index(sentence, substring)
	// expected value:
	exp := -1
	if got != exp {
		t.Errorf("unexpected value for indexing %q for %q.  got %d, exp %d", sentence, substring, got, exp)
	}
}

And here is the output of all three tests when we run them:

$ go test -v ./...
=== RUN   Test_Index
--- PASS: Test_Index (0.00s)
=== RUN   Test_First
--- PASS: Test_First (0.00s)
=== RUN   Test_None
--- PASS: Test_None (0.00s)
PASS
ok      github.com/gopherguides/training        0.189s

Parallel

While all of these tests run very quickly, there is no reason we shouldn't make use of t.Parallel. Every test that calls t.Parallel will run at the same time as the other tests that do.

Note: Any time you use t.Parallel, it's important that the tests running in parallel don't share any state with each other. If they do, it's likely you'll pollute another test and create unpredictable test results.
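To illustrate the risk, here is a minimal, hypothetical sketch (not part of our strings example) of two tests that mutate the same package-level variable. Run in parallel, they race on count and will pass or fail unpredictably (go test -race will flag this):

var count int // shared state, mutated by both tests below

func Test_IncrementOnce(t *testing.T) {
	t.Parallel()
	count++ // races with Test_IncrementTwice
	if count != 1 {
		t.Errorf("unexpected count: got %d, exp 1", count)
	}
}

func Test_IncrementTwice(t *testing.T) {
	t.Parallel()
	count += 2 // races with Test_IncrementOnce
	if count != 2 {
		t.Errorf("unexpected count: got %d, exp 2", count)
	}
}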

To use t.Parallel, insert the call as the first line of your tests. Each test that opts in will then run in parallel with the others:

func Test_Index(t *testing.T) {
	t.Parallel()
	sentence := "Gophers are amazing"
	substring := "are"
	got := strings.Index(sentence, substring)
	// expected value:
	exp := 8
	if got != exp {
		t.Errorf("unexpected value for indexing %q for %q.  got %d, exp %d", sentence, substring, got, exp)
	}
}


func Test_First(t *testing.T) {
	t.Parallel()
	sentence := "Gophers are amazing"
	substring := "Gophers"
	got := strings.Index(sentence, substring)
	// expected value:
	exp := 0
	if got != exp {
		t.Errorf("unexpected value for indexing %q for %q.  got %d, exp %d", sentence, substring, got, exp)
	}
}


func Test_None(t *testing.T) {
	t.Parallel()
	sentence := "Gophers are amazing"
	substring := "rust"
	got := strings.Index(sentence, substring)
	// expected value:
	exp := -1
	if got != exp {
		t.Errorf("unexpected value for indexing %q for %q.  got %d, exp %d", sentence, substring, got, exp)
	}
}

The output will change to show that tests are now running in parallel as well:

$ go test -v ./...
=== RUN   Test_Index
=== PAUSE Test_Index
=== RUN   Test_First
=== PAUSE Test_First
=== RUN   Test_None
=== PAUSE Test_None
=== CONT  Test_Index
=== CONT  Test_None
--- PASS: Test_Index (0.00s)
--- PASS: Test_None (0.00s)
=== CONT  Test_First
--- PASS: Test_First (0.00s)
PASS
ok      github.com/gopherguides/training        0.202s

Code Duplication

If we look at all three tests, we can see that they all share a lot of common characteristics:

  • They all define a test case
  • They all define an expected outcome
  • Each test runs the use case and captures the actual outcome
  • Finally, each test checks if the expectations matched the actual outcome

Not only is there a lot of code duplication, but each time a new test is added, that boilerplate code is duplicated yet again. As a result, the defining characteristics of each test case are ultimately lost in the common setup logic.

Table Driven Testing

In Go, we can address the previous concerns of code duplication and lack of code reuse by taking a table driven test approach.

Setting up table driven testing typically consists of the following concepts:

  • Set up a set of test cases
  • Iterate those test cases and validate the results

In Go, a common approach to defining the test cases is to use a slice of anonymous structs. While the struct definition is almost always different for each table driven test, it usually consists of the test case inputs, the expected outcome, and an identifying, human-readable name that shows up in the test output. Here is an example:

tcs := []struct {
	name  string // the name of the subtest
	input string // arbitrary inputs
	exp   string // what you expect to see
}{
	{
		name:  "some test case",
		input: "some input",
		exp:   "some expected output",
	},
}

The variable tcs stands for "test case scenarios".

After defining your test cases, you will then iterate over them and perform the tests:

// iterate over all the tests
for _, tc := range tcs {
	// use the test values from tc to actually run the test
}

Converting Our Test

Now that we understand the basic structure of a table driven test, we can rewrite our three tests as a single table driven test:

func Test_Index(t *testing.T) {
	t.Parallel()
	tcs := []struct {
		sentence  string
		substring string
		exp       int
	}{
		{"Gophers are amazing", "are", 8},
		{"Gophers are amazing", "Gophers", 0},
		{"Gophers are amazing", "rust", -1},
	}
	for _, tc := range tcs {
		t.Logf("testing indexing %q for %q", tc.sentence, tc.substring)
		got := strings.Index(tc.sentence, tc.substring)
		if got != tc.exp {
			t.Errorf("unexpected value for indexing %q for %q.  got %d, exp %d", tc.sentence, tc.substring, got, tc.exp)
		}
	}
}

Now when we run the test, we see it's all one test again:

$ go test -v ./...
=== RUN   Test_Index
=== PAUSE Test_Index
=== CONT  Test_Index
    strings_test.go:21: testing indexing "Gophers are amazing" for "are"
    strings_test.go:21: testing indexing "Gophers are amazing" for "Gophers"
    strings_test.go:21: testing indexing "Gophers are amazing" for "rust"
--- PASS: Test_Index (0.00s)
PASS
ok      github.com/gopherguides/training        0.154s

One of the big advantages of table driven testing is that it's now very easy to see and identify the actual tests. As you can see, all of our test cases are right next to each other now:

tcs := []struct {
	sentence  string
	substring string
	exp       int
}{
	{"Gophers are amazing", "are", 8},
	{"Gophers are amazing", "Gophers", 0},
	{"Gophers are amazing", "rust", -1},
}

In addition to being able to quickly identify the test cases, we can now add new test cases quickly and easily by adding another entry to the slice, as shown below.
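For example, adding a hypothetical fourth case that matches the last word in the sentence is a single extra line (the index of "amazing" in our sentence is 12):

tcs := []struct {
	sentence  string
	substring string
	exp       int
}{
	{"Gophers are amazing", "are", 8},
	{"Gophers are amazing", "Gophers", 0},
	{"Gophers are amazing", "rust", -1},
	{"Gophers are amazing", "amazing", 12}, // new test case: match the last word
}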

Isolating Table Driven Tests

While table driven testing lets us focus on the "what" we are testing, it has actually made isolating a single test harder. For instance, what if we're debugging the code and only want the second test case to run? Currently, we would have to comment out every test case but the one we want.

To address this, Go 1.7 added the t.Run method to the testing package. This allows us to run each sub-test as an isolated test.

The only change we need to make to use t.Run is in our iteration loop:

func Test_Index(t *testing.T) {
	t.Parallel()
	tcs := []struct {
		sentence  string
		substring string
		exp       int
	}{
		{"Gophers are amazing", "are", 8},
		{"Gophers are amazing", "Gophers", 0},
		{"Gophers are amazing", "rust", -1},
	}
	for _, tc := range tcs {
		t.Run(tc.substring, func(t *testing.T) {
			t.Logf("testing indexing %q for %q", tc.sentence, tc.substring)
			got := strings.Index(tc.sentence, tc.substring)
			if got != tc.exp {
				t.Errorf("unexpected value for indexing %q for %q.  got %d, exp %d", tc.sentence, tc.substring, got, tc.exp)
			}
		})
	}
}

The test output now identifies each iteration of our test cases as a unique test:

$ go test -v ./...
=== RUN   Test_Index
=== PAUSE Test_Index
=== CONT  Test_Index
=== RUN   Test_Index/are
    strings_test.go:24: testing indexing "Gophers are amazing" for "are"
=== RUN   Test_Index/Gophers
    strings_test.go:24: testing indexing "Gophers are amazing" for "Gophers"
=== RUN   Test_Index/rust
    strings_test.go:24: testing indexing "Gophers are amazing" for "rust"
--- PASS: Test_Index (0.00s)
    --- PASS: Test_Index/are (0.00s)
    --- PASS: Test_Index/Gophers (0.00s)
    --- PASS: Test_Index/rust (0.00s)
PASS
ok      github.com/gopherguides/training        0.267s

If we want to run a specific sub-test, we can pass its full name (the parent test and the sub-test name, separated by a slash) to the -run flag:

$ go test -v -run Test_Index/Gophers ./...
=== RUN   Test_Index
=== PAUSE Test_Index
=== CONT  Test_Index
=== RUN   Test_Index/Gophers
    strings_test.go:24: testing indexing "Gophers are amazing" for "Gophers"
--- PASS: Test_Index (0.00s)
    --- PASS: Test_Index/Gophers (0.00s)
PASS
ok      github.com/gopherguides/training        0.250s
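Note that the value passed to -run is a regular expression (one per slash-separated element), so we could also select several sub-tests at once. For example, this hypothetical invocation would run only the are and rust cases:

$ go test -v -run 'Test_Index/(are|rust)' ./...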

Parallel Sub-testing

You might have noticed that although we gained a lot of great features with table driven testing, we've actually lost the parallel nature of running each of our test cases. Luckily, the Go team thought of this as well, and we can place a t.Parallel() call in our sub-tests too:

func Test_Index(t *testing.T) {
	t.Parallel()
	tcs := []struct {
		sentence  string
		substring string
		exp       int
	}{
		{"Gophers are amazing", "are", 8},
		{"Gophers are amazing", "Gophers", 0},
		{"Gophers are amazing", "rust", -1},
	}
	for _, tc := range tcs {
		t.Run(tc.substring, func(t *testing.T) {
			t.Parallel() // <== Run each sub test in parallel
			t.Logf("testing indexing %q for %q", tc.sentence, tc.substring)
			got := strings.Index(tc.sentence, tc.substring)
			if got != tc.exp {
				t.Errorf("unexpected value for indexing %q for %q.  got %d, exp %d", tc.sentence, tc.substring, got, tc.exp)
			}
		})
	}
}

And here is the new output:

$ go test -v ./...
=== RUN   Test_Index
=== PAUSE Test_Index
=== CONT  Test_Index
=== RUN   Test_Index/are
=== PAUSE Test_Index/are
=== RUN   Test_Index/Gophers
=== PAUSE Test_Index/Gophers
=== RUN   Test_Index/rust
=== PAUSE Test_Index/rust
=== CONT  Test_Index/are
=== CONT  Test_Index/rust
=== CONT  Test_Index/Gophers
    strings_test.go:25: testing indexing "Gophers are amazing" for "rust"
=== CONT  Test_Index/are
    strings_test.go:25: testing indexing "Gophers are amazing" for "rust"
=== CONT  Test_Index/rust
    strings_test.go:25: testing indexing "Gophers are amazing" for "rust"
--- PASS: Test_Index (0.00s)
    --- PASS: Test_Index/Gophers (0.00s)
    --- PASS: Test_Index/are (0.00s)
    --- PASS: Test_Index/rust (0.00s)
PASS
ok      github.com/gopherguides/training        0.156s

Bug Time!

How closely did you examine the output of the previous test? If you were paying attention, you might have already spotted that when we added t.Parallel to the sub-tests, we actually introduced a concurrency bug. While it's not obvious from looking at the code, telling Go to run the sub-tests in parallel means they are launched in goroutines behind the scenes. As a result, our test case variable (the tc variable in the for loop) is not evaluated until each goroutine actually runs.

The result is that all of the sub-tests are scheduled to run, but by the time the Go scheduler actually executes them, the for loop has completed, and each sub-test ends up using the values from the last test case. In this case, they all test the rust case. Even worse, ALL of the tests show up as passing, because we are really only testing the last test case, which happens to pass.
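The same capture problem shows up in any goroutine that closes over a loop variable. Here is a minimal standalone sketch (not part of our test code) that, under the loop-variable semantics in place when this article was written (before Go 1.22 made loop variables per-iteration), will typically print rust three times:

package main

import (
	"fmt"
	"sync"
)

func main() {
	words := []string{"are", "Gophers", "rust"}
	var wg sync.WaitGroup
	for _, w := range words {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// w is shared across iterations, so by the time this goroutine
			// runs, the loop has usually finished and w holds "rust".
			fmt.Println(w)
		}()
	}
	wg.Wait()
}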

To correct this, you can rebind the tc variable inside the loop. This creates a copy, so the goroutine now uses the locally scoped variable and no longer binds to the shared loop variable:

func Test_Index(t *testing.T) {
	t.Parallel()
	tcs := []struct {
		sentence  string
		substring string
		exp       int
	}{
		{"Gophers are amazing", "are", 8},
		{"Gophers are amazing", "Gophers", 0},
		{"Gophers are amazing", "rust", -1},
	}
	for _, tc := range tcs {
		tc := tc // rebind tc into this lexical scope
		t.Run(tc.substring, func(t *testing.T) {
			t.Parallel()
			t.Logf("testing indexing %q for %q", tc.sentence, tc.substring)
			got := strings.Index(tc.sentence, tc.substring)
			if got != tc.exp {
				t.Errorf("unexpected value for indexing %q for %q.  got %d, exp %d", tc.sentence, tc.substring, got, tc.exp)
			}
		})
	}
}

As you can see, all of the tests are now getting the correct test case data:

$ go test -v ./...
=== RUN   Test_Index
=== PAUSE Test_Index
=== CONT  Test_Index
=== RUN   Test_Index/are
=== PAUSE Test_Index/are
=== RUN   Test_Index/Gophers
=== PAUSE Test_Index/Gophers
=== RUN   Test_Index/rust
=== PAUSE Test_Index/rust
=== CONT  Test_Index/are
    strings_test.go:26: testing indexing "Gophers are amazing" for "are"
=== CONT  Test_Index/rust
=== CONT  Test_Index/Gophers
=== CONT  Test_Index/rust
    strings_test.go:26: testing indexing "Gophers are amazing" for "rust"
=== CONT  Test_Index/Gophers
    strings_test.go:26: testing indexing "Gophers are amazing" for "Gophers"
--- PASS: Test_Index (0.00s)
    --- PASS: Test_Index/are (0.00s)
    --- PASS: Test_Index/rust (0.00s)
    --- PASS: Test_Index/Gophers (0.00s)
PASS
ok      github.com/gopherguides/training        0.097s

Summary

In this article, we took several basic tests and combined them into a single table driven test. This allowed us to separate the "noise" of the test from the "focus" of the test. We also learned that we can better manage our sub-tests with t.Run. And to make our test suite run even faster, we can continue to use t.Parallel.

Want More?

Check out our Test Cleanup article to learn about using t.Cleanup.

Also, did you know you can do table driven benchmarking? Stay tuned for a future article where we show you how to reuse your code to create several benchmarks in a table driven format.
