10. Performance & Profiling
Measure before optimizing: use pprof, reduce allocations, and leverage pools carefully.
Question (Optional, advanced): Your Go application is running slow. How would you begin to investigate the performance issue?
Answer: The standard approach is to use Go's built-in profiling tool, pprof. By importing the net/http/pprof package, you can expose an HTTP endpoint that serves CPU profiles, memory allocation (heap) profiles, and goroutine traces.
Explanation: pprof is a powerful tool for identifying bottlenecks. You can capture a CPU profile for a fixed duration (e.g., 30 seconds) and then analyze it with the go tool pprof command. The analysis can be a top-down list of the functions consuming the most CPU, or a flame graph for a more visual representation. Profiling is the first and most important step before attempting any optimization.
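A minimal sketch of exposing the profiling endpoint; the listen address localhost:6060 is only the conventional example, not a requirement:

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // blank import registers /debug/pprof/* handlers on http.DefaultServeMux
)

func main() {
	// The blank import above is all that is needed to expose CPU, heap,
	// and goroutine profiles under /debug/pprof/ on this server.
	log.Println(http.ListenAndServe("localhost:6060", nil))
}
```

With the server running, go tool pprof http://localhost:6060/debug/pprof/profile?seconds=30 captures a 30-second CPU profile; passing -http=:0 to go tool pprof opens the interactive web UI, including the flame graph view.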
Question: How do you reduce allocations and help the compiler?
Answer: Preallocate slices (make with a capacity), reuse buffers (bytes.Buffer/strings.Builder), avoid capturing loop variables in closures, and let escape analysis keep values on the stack.
Explanation: Inspect escape-analysis decisions with go build -gcflags='-m'. Prefer returning small structs by value, and avoid unnecessary interface conversions that force heap allocation.
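A small sketch of the preallocation and buffer-reuse patterns; joinIDs is a hypothetical helper invented for illustration:

```go
package main

import "strings"

// joinIDs shows two common allocation-reducing patterns: preallocating a
// slice with make, and building strings with strings.Builder instead of
// repeated + concatenation.
func joinIDs(n int) string {
	ids := make([]string, 0, n) // capacity known up front: a single allocation
	for i := 0; i < n; i++ {
		ids = append(ids, "id") // no reallocation or copying while appending
	}

	var b strings.Builder
	b.Grow(n * 3) // reserve roughly enough space to avoid regrowth
	for _, id := range ids {
		b.WriteString(id)
		b.WriteByte(',')
	}
	return b.String()
}
```

Running go build -gcflags='-m' on code like this reports which values escape to the heap and which stay on the stack.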
Question: When should you use sync.Pool?
Answer: For short-lived, high-frequency temporary objects to reduce GC pressure. Do not store critical resources; pooled objects can be dropped at any GC cycle.
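A minimal sync.Pool sketch reusing bytes.Buffer instances; bufPool and render are hypothetical names chosen for the example:

```go
package main

import (
	"bytes"
	"sync"
)

// bufPool reuses bytes.Buffer instances across calls to cut per-call
// allocations. Pooled objects may be discarded at any GC cycle, so nothing
// critical should be stored here.
var bufPool = sync.Pool{
	New: func() any { return new(bytes.Buffer) },
}

func render(payload []byte) string {
	buf := bufPool.Get().(*bytes.Buffer)
	buf.Reset() // always reset: the buffer may still hold old data
	defer bufPool.Put(buf)

	buf.Write(payload)
	return buf.String()
}
```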
Question: How do you capture CPU/heap profiles from tests or programs?
Answer: Use go test -cpuprofile cpu.out -memprofile mem.out, or capture profiles programmatically via the runtime/pprof package.
Explanation: Analyze the output with go tool pprof, or open its web UI with -http=:0.
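A sketch of programmatic capture with the standard runtime/pprof package; doWork and the cpu.out/mem.out file names are assumptions for illustration:

```go
package main

import (
	"log"
	"os"
	"runtime"
	"runtime/pprof"
)

func main() {
	// CPU profile: start before the interesting work, stop when it finishes.
	cpuFile, err := os.Create("cpu.out")
	if err != nil {
		log.Fatal(err)
	}
	defer cpuFile.Close()
	if err := pprof.StartCPUProfile(cpuFile); err != nil {
		log.Fatal(err)
	}
	defer pprof.StopCPUProfile()

	doWork() // hypothetical workload under investigation

	// Heap profile: snapshot allocations after the workload has run.
	memFile, err := os.Create("mem.out")
	if err != nil {
		log.Fatal(err)
	}
	defer memFile.Close()
	runtime.GC() // force a collection so allocation statistics are current
	if err := pprof.WriteHeapProfile(memFile); err != nil {
		log.Fatal(err)
	}
}

func doWork() {
	_ = make([]byte, 1<<20) // placeholder for the code being profiled
}
```

The resulting files are then inspected with go tool pprof cpu.out or go tool pprof -http=:0 mem.out.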
Question: What flags help with micro-benchmarking and allocation analysis?
Answer: Use go test -bench=. -benchmem -count=5 to measure time and allocations per operation with reduced noise.
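A hypothetical benchmark to run with those flags; BenchmarkBuild is illustrative, not from the original text:

```go
package perf_test

import (
	"strings"
	"testing"
)

// BenchmarkBuild reports ns/op by default; -benchmem adds B/op and
// allocs/op, and -count=5 repeats the run to expose variance.
func BenchmarkBuild(b *testing.B) {
	for i := 0; i < b.N; i++ {
		var sb strings.Builder
		for j := 0; j < 10; j++ {
			sb.WriteString("x")
		}
		_ = sb.String()
	}
}
```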