6. Testing, Benchmarks, Fuzzing
Build trustworthy suites: isolation, temp resources, parallelism, realistic benchmarks, and fuzz seeds.
Question: What is the benefit of using subtests (`t.Run`) in table-driven tests?
Answer: Subtests let you run distinct, named test cases within a single parent test function. This provides clearer output, allows a single case to be run in isolation (`go test -run TestParent/SubTestName`), and enables common setup/teardown logic around the subtests.
Explanation: Each `t.Run` creates a separate test scope. If one subtest fails, the others still run. This makes debugging much easier than a plain loop, where the first failure aborts the test and leaves the status of the remaining cases unknown.
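A minimal sketch of the pattern, using `strings.ToUpper` as a stand-in for your own code:

```go
package demo

import (
	"strings"
	"testing"
)

func TestToUpper(t *testing.T) {
	cases := []struct {
		name string
		in   string
		want string
	}{
		{name: "empty", in: "", want: ""},
		{name: "ascii", in: "go", want: "GO"},
		{name: "already upper", in: "GO", want: "GO"},
	}

	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			if got := strings.ToUpper(tc.in); got != tc.want {
				// t.Errorf fails only this subtest; the siblings still run.
				t.Errorf("ToUpper(%q) = %q, want %q", tc.in, got, tc.want)
			}
		})
	}
}
```

With this layout, `go test -run 'TestToUpper/ascii'` runs exactly one case.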
Question: How would you mock a dependency for testing in Go?
Answer: The idiomatic way is to use interfaces. Define your application's behavior in terms of small interfaces, and then provide a fake or stub implementation of that interface in your tests.
Explanation: Instead of depending on a concrete type (e.g., `*sql.DB`), depend on an interface you define (e.g., `type UserStore interface { GetUser(id int) (*User, error) }`). Your real implementation wraps `*sql.DB`, while your test implementation can be a simple struct that returns hardcoded data without needing a real database. This decouples your components and makes them easy to test in isolation.
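A compact sketch of this idea; the `User`, `UserStore`, and `Greeting` names are hypothetical, and production code and test are shown together for brevity:

```go
package user

import (
	"fmt"
	"testing"
)

type User struct {
	ID   int
	Name string
}

// UserStore is the small interface the code under test depends on,
// rather than a concrete *sql.DB.
type UserStore interface {
	GetUser(id int) (*User, error)
}

// Greeting depends only on the interface, so any implementation works.
func Greeting(s UserStore, id int) (string, error) {
	u, err := s.GetUser(id)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("Hello, %s!", u.Name), nil
}

// stubStore is the test double: it returns hardcoded data,
// no database required.
type stubStore struct{ u *User }

func (s stubStore) GetUser(id int) (*User, error) { return s.u, nil }

func TestGreeting(t *testing.T) {
	got, err := Greeting(stubStore{u: &User{ID: 1, Name: "Ada"}}, 1)
	if err != nil || got != "Hello, Ada!" {
		t.Fatalf("Greeting() = %q, %v; want %q, nil", got, err, "Hello, Ada!")
	}
}
```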
Question: What is fuzzing in Go and when would you use it?
Answer: Fuzzing (introduced in Go 1.18) is a type of automated testing that continuously feeds randomly generated inputs to a function to find edge cases, bugs, or vulnerabilities that would be missed by traditional unit tests.
Explanation: You should use fuzzing for any function that parses complex inputs, especially from untrusted sources. Good candidates include functions that handle file formats, network protocols, or complex data structures. A fuzz test starts with a set of seed inputs and mutates them over time to explore the input space.
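A small sketch of a fuzz test; `Reverse` is a hypothetical stand-in for the function under test, and the properties checked are chosen for illustration:

```go
package parse

import (
	"testing"
	"unicode/utf8"
)

// Reverse reverses s rune by rune (the function under test).
func Reverse(s string) string {
	runes := []rune(s)
	for i, j := 0, len(runes)-1; i < j; i, j = i+1, j-1 {
		runes[i], runes[j] = runes[j], runes[i]
	}
	return string(runes)
}

func FuzzReverse(f *testing.F) {
	// Seed corpus: known-interesting inputs the fuzzer will mutate.
	for _, seed := range []string{"", "go", "héllo", "12345"} {
		f.Add(seed)
	}
	f.Fuzz(func(t *testing.T, s string) {
		if !utf8.ValidString(s) {
			t.Skip("Reverse is only specified for valid UTF-8")
		}
		// Property: reversing twice round-trips the input.
		if got := Reverse(Reverse(s)); got != s {
			t.Errorf("Reverse(Reverse(%q)) = %q, want the original", s, got)
		}
	})
}
```

Plain `go test` runs only the seeds; `go test -fuzz=FuzzReverse` starts the mutation loop.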
Question: How and when should you use `t.Parallel`?
Answer: Call `t.Parallel()` at the start of a test or subtest to run it concurrently with other parallel tests.
Explanation: Parallelizing independent tests speeds up suites, but beware of shared global state and port/file conflicts. Use isolated temp dirs (`t.TempDir()`) rather than fixed paths.
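A sketch combining parallel subtests with per-test temp dirs (file contents and names are illustrative):

```go
package files

import (
	"os"
	"path/filepath"
	"testing"
)

func TestWriteFile(t *testing.T) {
	cases := []struct{ name, data string }{
		{"empty", ""},
		{"text", "hello"},
	}
	for _, tc := range cases {
		tc := tc // capture the range variable (unneeded as of Go 1.22)
		t.Run(tc.name, func(t *testing.T) {
			t.Parallel() // run concurrently with sibling subtests

			// t.TempDir gives each subtest its own directory and cleans it
			// up automatically, so parallel tests cannot collide on paths.
			path := filepath.Join(t.TempDir(), "out.txt")
			if err := os.WriteFile(path, []byte(tc.data), 0o600); err != nil {
				t.Fatalf("WriteFile: %v", err)
			}
			got, err := os.ReadFile(path)
			if err != nil || string(got) != tc.data {
				t.Fatalf("ReadFile = %q, %v; want %q, nil", got, err, tc.data)
			}
		})
	}
}
```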
Question: What are best practices for benchmarks?
Answer: Avoid measuring setup. Put the work inside the `for i := 0; i < b.N; i++ {}` loop, use `b.ReportAllocs()`, and isolate I/O.
Explanation: Call `b.ResetTimer()` after setup, and pair `b.StopTimer()`/`b.StartTimer()` around expensive parts that should not be measured. Compare runs with `-benchmem` and use realistic inputs.
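A sketch of these practices, benchmarking `sort.Ints` on an assumed input size of 10,000 elements:

```go
package bench

import (
	"sort"
	"testing"
)

func BenchmarkSortInts(b *testing.B) {
	// Setup: build a realistic input once, outside the measured region.
	base := make([]int, 10_000)
	for i := range base {
		base[i] = (i * 7919) % len(base) // deterministic scramble
	}

	b.ReportAllocs() // include allocation counts in the output
	b.ResetTimer()   // exclude the setup above from the timing

	for i := 0; i < b.N; i++ {
		b.StopTimer() // copying the input is not what we measure
		data := make([]int, len(base))
		copy(data, base)
		b.StartTimer()

		sort.Ints(data) // the work under measurement
	}
}
```

Run with `go test -bench=SortInts -benchmem` to see ns/op alongside allocations.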
Question: What are example tests and golden files?
Answer: Example tests (`ExampleX`) verify printed output against an `// Output:` comment and double as documentation; golden tests compare output to files stored under `testdata/`.
Explanation: Use `cmp.Diff` (from the `go-cmp` package) to show readable diffs on mismatch. Regenerate golden files intentionally, typically behind an `-update` flag, and review the changes in code review.
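A sketch of both patterns together; `Render` and the file names are hypothetical, a `testdata/` directory is assumed to exist, and the golden test assumes the `github.com/google/go-cmp` module is a dependency:

```go
package report

import (
	"flag"
	"fmt"
	"os"
	"path/filepath"
	"testing"

	"github.com/google/go-cmp/cmp"
)

// Render is a stand-in for the function under test.
func Render(s string) string { return "report: " + s }

// ExampleRender: `go test` checks stdout against the Output comment,
// and godoc shows this example as documentation.
func ExampleRender() {
	fmt.Println(Render("hi"))
	// Output: report: hi
}

var update = flag.Bool("update", false, "rewrite golden files")

func TestRenderGolden(t *testing.T) {
	got := Render("hi")
	golden := filepath.Join("testdata", "render.golden")

	if *update { // regenerate intentionally: go test -update
		if err := os.WriteFile(golden, []byte(got), 0o644); err != nil {
			t.Fatal(err)
		}
	}
	want, err := os.ReadFile(golden)
	if err != nil {
		t.Fatal(err)
	}
	if diff := cmp.Diff(string(want), got); diff != "" {
		t.Errorf("Render() mismatch (-want +got):\n%s", diff)
	}
}
```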