Hello, I’m Shrijith Venkatramana. I’m building LiveReview, a private AI code review tool that runs on your LLM key (OpenAI, Gemini, etc.) with flat, no-seat pricing — built for small teams. Do check it out and give it a try!
If you’re diving into concurrency in Go, goroutines are your best friend. They’re lightweight threads of execution, managed by the Go runtime, that make concurrent code straightforward to write. In this post, we’ll explore key patterns for using goroutines effectively. We’ll start with the basics and move into more advanced setups, complete with runnable code examples. By the end, you’ll have solid techniques to apply in your projects.
Starting Simple: Launching Your First Goroutine
Goroutines let you run functions concurrently without much overhead. To start one, just use the go keyword before a function call.
Key point: Goroutines run independently, and the program exits as soon as main returns, whether or not they have finished. Synchronize whenever you need their work to complete.
Here’s a basic example:
package main

import (
    "fmt"
    "time"
)

func sayHello() {
    fmt.Println("Hello from goroutine!")
}

func main() {
    go sayHello()               // Launches the goroutine
    time.Sleep(1 * time.Second) // Wait to let goroutine finish (not ideal in production)
    fmt.Println("Main function done.")
}

// Output:
// Hello from goroutine!
// Main function done.
In this code, without the sleep, the main function could end first, and you might not see the output. For better ways to wait, check out the next sections.
For more on goroutine basics, see the official Go tour: A Tour of Go – Goroutines.
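Before reaching for the sync package, note that a single goroutine can also be waited on with a plain done channel. Here's a minimal sketch of that idea (the channel name done is my own, not from the example above):

package main

import "fmt"

func main() {
    done := make(chan struct{}) // closed when the goroutine finishes

    go func() {
        fmt.Println("Hello from goroutine!")
        close(done) // signal completion
    }()

    <-done // blocks until the goroutine closes the channel
    fmt.Println("Main function done.")
}

// Output:
// Hello from goroutine!
// Main function done.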
Synchronizing Goroutines with WaitGroups
When you have multiple goroutines, you often need to wait for them all to complete. That's where sync.WaitGroup comes in. It acts like a counter: add tasks, mark them done, and wait.
Tip: Call wg.Done() in a defer statement so the counter is decremented even if the goroutine panics or returns early; otherwise wg.Wait() can block forever.
Example with three goroutines:
package main

import (
    "fmt"
    "sync"
    "time"
)

func worker(id int, wg *sync.WaitGroup) {
    defer wg.Done()
    fmt.Printf("Worker %d starting\n", id)
    time.Sleep(time.Second)
    fmt.Printf("Worker %d done\n", id)
}

func main() {
    var wg sync.WaitGroup
    for i := 1; i <= 3; i++ {
        wg.Add(1)
        go worker(i, &wg)
    }
    wg.Wait()
    fmt.Println("All workers completed.")
}

// Output (order may vary):
// Worker 1 starting
// Worker 3 starting
// Worker 2 starting
// Worker 3 done
// Worker 1 done
// Worker 2 done
// All workers completed.
This pattern ensures the main function waits properly. Use it for batch processing tasks.
Communicating Safely: Channels Between Goroutines
Channels provide a way for goroutines to send and receive data safely. They’re typed and can be buffered or unbuffered.
Key rule: Unbuffered channels block until both sender and receiver are ready. Buffered ones allow sending without immediate receiving, up to the buffer size.
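To make that rule concrete, here's a tiny standalone sketch (separate from the producer-consumer example below) showing a buffered channel accepting sends up to its capacity with no receiver running yet:

package main

import "fmt"

func main() {
    ch := make(chan int, 2) // buffer size 2

    ch <- 1 // does not block: buffer has room
    ch <- 2 // does not block: buffer is now full
    // A third send here would block until something receives.

    fmt.Println(<-ch)
    fmt.Println(<-ch)
}

// Output:
// 1
// 2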
Simple producer-consumer example:
package main

import "fmt"

func producer(ch chan<- int) {
    for i := 1; i <= 5; i++ {
        ch <- i
        fmt.Printf("Sent: %d\n", i)
    }
    close(ch)
}

func main() {
    ch := make(chan int)
    go producer(ch)
    for num := range ch {
        fmt.Printf("Received: %d\n", num)
    }
    fmt.Println("Channel closed.")
}

// Output (exact interleaving of Sent/Received lines may vary):
// Sent: 1
// Received: 1
// Sent: 2
// Received: 2
// Sent: 3
// Received: 3
// Sent: 4
// Received: 4
// Sent: 5
// Received: 5
// Channel closed.
Close channels when done sending to signal the end. This prevents deadlocks in receivers.
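Receivers can also detect a closed channel explicitly with the two-value receive form; here's a short self-contained sketch:

package main

import "fmt"

func main() {
    ch := make(chan int, 1)
    ch <- 42
    close(ch)

    v, ok := <-ch // ok is true: a buffered value was still queued
    fmt.Println(v, ok)

    v, ok = <-ch // ok is false: channel is closed and drained
    fmt.Println(v, ok)
}

// Output:
// 42 true
// 0 false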
For more on channels, see Effective Go – Channels.
Handling Multiple Channels: The Select Statement
When dealing with multiple channels, select lets you wait on several operations at once. It's great for non-blocking checks or multiplexing.
Important: Add a default case when you want a non-blocking check; without one, select blocks until at least one case is ready.
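Here's a minimal sketch of that non-blocking form (kept separate from the main example below, which deliberately blocks until each message arrives):

package main

import "fmt"

func main() {
    ch := make(chan string)

    select {
    case msg := <-ch:
        fmt.Println("Got:", msg)
    default:
        fmt.Println("No message ready, moving on.") // nothing was sent, so default runs immediately
    }
}

// Output:
// No message ready, moving on.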
Example with two channels:
package main

import (
    "fmt"
    "time"
)

func main() {
    ch1 := make(chan string)
    ch2 := make(chan string)

    go func() {
        time.Sleep(1 * time.Second)
        ch1 <- "Message from channel 1"
    }()
    go func() {
        time.Sleep(2 * time.Second)
        ch2 <- "Message from channel 2"
    }()

    for i := 0; i < 2; i++ {
        select {
        case msg1 := <-ch1:
            fmt.Println(msg1)
        case msg2 := <-ch2:
            fmt.Println(msg2)
        }
    }
    fmt.Println("All messages received.")
}

// Output:
// Message from channel 1
// Message from channel 2
// All messages received.
This pattern is useful in servers handling multiple inputs, like timeouts or cancellations.
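A common concrete case is a timeout: select pairs naturally with time.After. Here's a brief sketch (the 500ms and 1s durations are arbitrary, chosen only to force the timeout branch):

package main

import (
    "fmt"
    "time"
)

func main() {
    ch := make(chan string)

    go func() {
        time.Sleep(1 * time.Second)
        ch <- "slow result"
    }()

    select {
    case msg := <-ch:
        fmt.Println(msg)
    case <-time.After(500 * time.Millisecond):
        // Fires first because the sender needs a full second; the slow goroutine
        // is simply abandoned when main exits.
        fmt.Println("Timed out waiting for result.")
    }
}

// Output:
// Timed out waiting for result.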
Managing Errors in Concurrent Code
Errors in goroutines can be tricky: a returned error never reaches the caller automatically, and an unrecovered panic in any goroutine brings down the whole program. Use channels to send errors back.
Best practice: Create a dedicated error channel and check it after waiting.
Example with error propagation:
package main

import (
    "errors"
    "fmt"
    "sync"
)

func task(id int, errCh chan<- error, wg *sync.WaitGroup) {
    defer wg.Done()
    if id == 2 {
        errCh <- errors.New("error in task 2")
        return
    }
    fmt.Printf("Task %d completed\n", id)
}

func main() {
    var wg sync.WaitGroup
    errCh := make(chan error, 3) // Buffered to hold all potential errors

    for i := 1; i <= 3; i++ {
        wg.Add(1)
        go task(i, errCh, &wg)
    }
    wg.Wait()
    close(errCh)

    for err := range errCh {
        if err != nil {
            fmt.Println("Error:", err)
        }
    }
    fmt.Println("Main done.")
}

// Output (task order may vary):
// Task 1 completed
// Task 3 completed
// Error: error in task 2
// Main done.
Collect errors in a channel and handle them collectively. This keeps your code clean.
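If you only need the first error, the golang.org/x/sync/errgroup package (an external module installed with go get, not part of the standard library) wraps this WaitGroup-plus-error-channel idea. Here's a rough sketch of the same three tasks using it:

package main

import (
    "errors"
    "fmt"

    "golang.org/x/sync/errgroup"
)

func main() {
    var g errgroup.Group // zero value is ready to use

    for i := 1; i <= 3; i++ {
        id := i
        g.Go(func() error {
            if id == 2 {
                return errors.New("error in task 2")
            }
            fmt.Printf("Task %d completed\n", id)
            return nil
        })
    }

    // Wait blocks until all goroutines finish and returns the first non-nil error.
    if err := g.Wait(); err != nil {
        fmt.Println("Error:", err)
    }
    fmt.Println("Main done.")
}

// Output (order may vary):
// Task 1 completed
// Task 3 completed
// Error: error in task 2
// Main done.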
Scaling with Worker Pools
For CPU-bound tasks, limit concurrency with a worker pool. Use a channel to queue jobs and a fixed number of goroutines to process them.
Tip: Size the pool based on available cores, often using runtime.NumCPU().
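As a minimal sketch, sizing could look like this (the right number is workload-dependent, so treat it as a starting point rather than a rule):

package main

import (
    "fmt"
    "runtime"
)

func main() {
    // One worker per logical CPU is a common default for CPU-bound pools.
    numWorkers := runtime.NumCPU()
    fmt.Println("Pool size:", numWorkers)
    // The worker-pool example below would then loop `for w := 1; w <= numWorkers; w++`
    // instead of hard-coding three workers.
}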
Here’s a table of pros and cons:
| Aspect | Pros | Cons |
| --- | --- | --- |
| Resource use | Controls goroutine count | Overhead in channel management |
| Performance | Prevents overload | Potential bottlenecks if the pool is too small |
Example code:
package main

import (
    "fmt"
    "sync"
    "time"
)

func worker(id int, jobs <-chan int, results chan<- int) {
    for j := range jobs {
        fmt.Printf("Worker %d processing job %d\n", id, j)
        time.Sleep(time.Second)
        results <- j * 2
    }
}

func main() {
    const numJobs = 5
    jobs := make(chan int, numJobs)
    results := make(chan int, numJobs)

    var wg sync.WaitGroup
    for w := 1; w <= 3; w++ { // 3 workers
        wg.Add(1)
        go func(w int) {
            defer wg.Done()
            worker(w, jobs, results)
        }(w)
    }

    for j := 1; j <= numJobs; j++ {
        jobs <- j
    }
    close(jobs)

    wg.Wait()
    close(results)

    for r := range results {
        fmt.Printf("Result: %d\n", r)
    }
}

// Output (order may vary):
// Worker 1 processing job 1
// Worker 2 processing job 2
// Worker 3 processing job 3
// Worker 1 processing job 4
// Worker 2 processing job 5
// Result: 2
// Result: 4
// Result: 6
// Result: 8
// Result: 10
This caps concurrency at three workers, which share the five jobs among themselves instead of spawning one goroutine per job.
Fan-Out and Fan-In for Parallel Processing
Fan-out distributes work to multiple goroutines; fan-in collects results. Combine with channels for pipelines.
Key: Use WaitGroups for synchronization and buffered channels for results.
Example fanning out computations:
package main

import (
    "fmt"
    "sync"
    "time"
)

func compute(input int, out chan<- int) {
    time.Sleep(time.Second) // Simulate work
    out <- input * input
}

func main() {
    inputs := []int{1, 2, 3, 4}
    out := make(chan int, len(inputs))

    var wg sync.WaitGroup
    for _, in := range inputs {
        wg.Add(1)
        go func(in int) {
            defer wg.Done()
            compute(in, out)
        }(in)
    }
    wg.Wait()
    close(out)

    for result := range out {
        fmt.Printf("Result: %d\n", result)
    }
}

// Output (order may vary):
// Result: 1
// Result: 4
// Result: 9
// Result: 16
Fan-out to compute squares in parallel, then fan-in to collect. Great for data processing.
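The example above fans in by letting every goroutine write to one shared, buffered channel. Another common fan-in shape merges several independent channels into a single stream; here's a rough sketch of such a merge helper (the name merge is my own, not from the example above):

package main

import (
    "fmt"
    "sync"
)

// merge fans in: it forwards values from all input channels onto one output channel.
func merge(inputs ...<-chan int) <-chan int {
    out := make(chan int)
    var wg sync.WaitGroup

    for _, in := range inputs {
        wg.Add(1)
        go func(in <-chan int) {
            defer wg.Done()
            for v := range in {
                out <- v
            }
        }(in)
    }

    // Close the output once every input channel has been drained.
    go func() {
        wg.Wait()
        close(out)
    }()
    return out
}

func main() {
    a := make(chan int)
    b := make(chan int)

    go func() { a <- 1; a <- 2; close(a) }()
    go func() { b <- 3; close(b) }()

    for v := range merge(a, b) {
        fmt.Println(v)
    }
}

// Output (order may vary):
// 1
// 2
// 3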
Cancelling Goroutines Gracefully with Contexts
Use context.Context to signal cancellation, timeouts, or deadlines across goroutines.
Essential: Pass context to functions and check for cancellation regularly.
Example with timeout:
package main

import (
    "context"
    "fmt"
    "time"
)

func longTask(ctx context.Context) {
    select {
    case <-time.After(3 * time.Second):
        fmt.Println("Task completed")
    case <-ctx.Done():
        fmt.Println("Task cancelled:", ctx.Err())
    }
}

func main() {
    ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
    defer cancel()

    go longTask(ctx)
    time.Sleep(4 * time.Second) // Wait to see outcome
    fmt.Println("Main done.")
}

// Output:
// Task cancelled: context deadline exceeded
// Main done.
This pattern is crucial for HTTP servers or long-running ops. Always propagate context.
For deeper dives, refer to the context package docs: Go Context Package.
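Timeouts are only one flavor of cancellation; context.WithCancel lets the caller stop work explicitly, with the goroutine checking ctx.Done() on every iteration. Here's a brief sketch (the tick interval and sleep durations are arbitrary):

package main

import (
    "context"
    "fmt"
    "time"
)

func poll(ctx context.Context) {
    ticker := time.NewTicker(500 * time.Millisecond)
    defer ticker.Stop()
    for {
        select {
        case <-ctx.Done(): // stop as soon as the caller cancels
            fmt.Println("Polling stopped:", ctx.Err())
            return
        case <-ticker.C:
            fmt.Println("Polling...")
        }
    }
}

func main() {
    ctx, cancel := context.WithCancel(context.Background())
    go poll(ctx)

    time.Sleep(1200 * time.Millisecond)
    cancel() // signal the goroutine to stop

    time.Sleep(100 * time.Millisecond) // give it a moment to print before exiting
    fmt.Println("Main done.")
}

// Output:
// Polling...
// Polling...
// Polling stopped: context canceled
// Main done.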
As you integrate these patterns, start small: test with basic sync first, then add channels and contexts. Experiment in your codebases to see performance gains. Remember, profiling with go tool pprof can help spot issues. With these tools, your Go apps will handle concurrency more reliably and efficiently.