Go was designed with concurrency as one of its core strengths. In a world where systems are expected to be fast, responsive, and able to handle thousands of simultaneous tasks, understanding Go's concurrency model is not just useful—it's essential.
In this article, we'll explore what makes concurrency in Go unique, how goroutines and channels work, common pitfalls, and best practices that will help you write scalable and maintainable concurrent programs.
What Is Concurrency?
Concurrency is the ability of a program to make progress on multiple tasks independently, even if they are not executing at the exact same instant (that would be parallelism).
In Go, concurrency is not just supported—it's a first-class feature. While other languages often rely on heavy threads and complex synchronization mechanisms, Go offers a lightweight and elegant solution through goroutines and channels.
Goroutines: Lightweight Threads
A goroutine is a function that runs concurrently with other goroutines. It's extremely cheap in terms of memory and resources—starting a goroutine typically costs just a few kilobytes.
Starting a goroutine is as simple as using the go keyword:
func sayHello() {
    fmt.Println("Hello from goroutine")
}

func main() {
    go sayHello()
    time.Sleep(time.Second) // Give goroutine time to run
}
In this example, sayHello() runs concurrently with main(). Without the time.Sleep, the program might exit before the goroutine finishes.
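Relying on time.Sleep for synchronization is fragile: the sleep may be too short, or wastefully long. A minimal sketch of the more robust approach, using sync.WaitGroup to wait for the goroutine explicitly:

```go
package main

import (
    "fmt"
    "sync"
)

func main() {
    var wg sync.WaitGroup
    wg.Add(1) // register one goroutine before starting it
    go func() {
        defer wg.Done() // signal completion when the function returns
        fmt.Println("Hello from goroutine")
    }()
    wg.Wait() // block until Done has been called
}
```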
Goroutine Lifecycle
Understanding the lifecycle of goroutines is crucial:
func main() {
    fmt.Println("Main starts")

    go func() {
        fmt.Println("Goroutine 1 starts")
        time.Sleep(2 * time.Second)
        fmt.Println("Goroutine 1 ends")
    }()

    go func() {
        fmt.Println("Goroutine 2 starts")
        time.Sleep(1 * time.Second)
        fmt.Println("Goroutine 2 ends")
    }()

    time.Sleep(3 * time.Second)
    fmt.Println("Main ends")
}
Channels: Safe Communication
Goroutines are great, but without a way to communicate or synchronize them, they become difficult to manage. That's where channels come in.
A channel is a typed conduit through which goroutines can communicate safely.
func worker(jobs <-chan int, results chan<- int) {
    for job := range jobs {
        results <- job * 2
    }
}

func main() {
    jobs := make(chan int, 5)
    results := make(chan int, 5)

    // Start three worker goroutines
    for i := 0; i < 3; i++ {
        go worker(jobs, results)
    }

    // Send 5 jobs to the workers
    for j := 1; j <= 5; j++ {
        jobs <- j
    }
    close(jobs)

    // Collect the results
    for r := 0; r < 5; r++ {
        fmt.Println(<-results)
    }
}
In this example:
- We create multiple worker goroutines
- Each one reads from the jobs channel and sends the result to the results channel
- This model enables parallel processing with safe communication
Channel Operations
Channels support several operations:
// Create channels
ch := make(chan int)          // Unbuffered channel
buffered := make(chan int, 3) // Buffered channel

// Send and receive
ch <- 42      // Send value to channel
value := <-ch // Receive value from channel

// Check if channel is closed
value, ok := <-ch
if !ok {
    fmt.Println("Channel is closed")
}

// Close a channel
close(ch)
Buffered vs Unbuffered Channels
Channels in Go can be buffered or unbuffered:
- Unbuffered channels block the sender until a receiver is ready, and vice versa
- Buffered channels accept up to a fixed number of values before the sender blocks, even with no receiver ready
// Unbuffered channel
ch1 := make(chan int)
// Buffered channel with capacity 2
ch2 := make(chan int, 2)
ch2 <- 1 // Doesn't block
ch2 <- 2 // Doesn't block
// ch2 <- 3 // Would block if not received
Understanding when to use which type is key to writing efficient programs.
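A quick way to see the difference in practice: because an unbuffered send completes only when a receiver takes the value, it doubles as a synchronization point. A minimal sketch:

```go
package main

import "fmt"

func main() {
    done := make(chan struct{}) // unbuffered: send and receive rendezvous

    go func() {
        fmt.Println("working")
        done <- struct{}{} // blocks until main receives
    }()

    <-done // guarantees "working" was printed before we continue
    fmt.Println("finished")
}
```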
Select Statement
The select statement is like a switch for channels, allowing you to handle multiple channel operations:
func main() {
    ch1 := make(chan string)
    ch2 := make(chan string)

    go func() {
        time.Sleep(1 * time.Second)
        ch1 <- "from ch1"
    }()

    go func() {
        time.Sleep(2 * time.Second)
        ch2 <- "from ch2"
    }()

    for i := 0; i < 2; i++ {
        select {
        case msg1 := <-ch1:
            fmt.Println(msg1)
        case msg2 := <-ch2:
            fmt.Println(msg2)
        case <-time.After(3 * time.Second):
            fmt.Println("timeout")
            return
        }
    }
}
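Adding a default case makes select non-blocking, which is handy for polling a channel without waiting. A small sketch:

```go
package main

import "fmt"

func main() {
    ch := make(chan int, 1)

    // Nothing to receive yet, so the default case runs immediately.
    select {
    case v := <-ch:
        fmt.Println("received", v)
    default:
        fmt.Println("no value ready")
    }

    ch <- 42

    // Now a value is buffered, so the receive case wins over default.
    select {
    case v := <-ch:
        fmt.Println("received", v)
    default:
        fmt.Println("no value ready")
    }
}
```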
Common Pitfalls in Go Concurrency
1. Race Conditions
Two or more goroutines access shared data at the same time without proper synchronization.
Problem:
var counter int

func increment() {
    counter++ // Race condition!
}

func main() {
    for i := 0; i < 1000; i++ {
        go increment()
    }
    time.Sleep(time.Second)
    fmt.Println(counter) // Unpredictable result
}
Solution: Use channels or sync.Mutex to protect shared state:
var (
    counter int
    mutex   sync.Mutex
)

func increment() {
    mutex.Lock()
    counter++
    mutex.Unlock()
}
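Combining the mutex with a sync.WaitGroup gives a complete version of the counter that waits for every goroutine and always prints 1000:

```go
package main

import (
    "fmt"
    "sync"
)

func main() {
    var (
        counter int
        mutex   sync.Mutex
        wg      sync.WaitGroup
    )

    for i := 0; i < 1000; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            mutex.Lock()
            counter++
            mutex.Unlock()
        }()
    }

    wg.Wait()
    fmt.Println(counter) // always 1000
}
```

For a plain counter, atomic.AddInt64 from sync/atomic is a lighter-weight alternative to a mutex.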
2. Deadlocks
When goroutines wait for each other forever.
Example:
ch := make(chan int)
ch <- 1 // Blocks forever without a receiver
Solution: Ensure that every send has a corresponding receive, and avoid circular dependencies.
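Two minimal fixes for the deadlock above, sketched together: give the channel a buffer so the send can complete immediately, or run the sender in its own goroutine so the receive in main can pair with it:

```go
package main

import "fmt"

func main() {
    // Fix 1: a buffered channel lets the send complete without a waiting receiver.
    ch1 := make(chan int, 1)
    ch1 <- 1
    fmt.Println(<-ch1)

    // Fix 2: the sender runs concurrently, so main's receive can pair with it.
    ch2 := make(chan int)
    go func() { ch2 <- 2 }()
    fmt.Println(<-ch2)
}
```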
3. Leaking Goroutines
Goroutines that never exit or are stuck waiting.
Problem:
func leak() {
    ch := make(chan int)
    go func() {
        <-ch // This goroutine will never exit
    }()
    // Channel is never written to, goroutine leaks
}
Solution: Use context.Context to signal cancellation and timeouts:
func withContext(ctx context.Context) {
    ch := make(chan int)
    go func() {
        select {
        case <-ch:
            // Normal operation
        case <-ctx.Done():
            // Context cancelled, exit gracefully
            return
        }
    }()
}
Advanced Concurrency Patterns
Worker Pool Pattern
type Job struct {
    ID   int
    Data string
}

type Result struct {
    Job    Job
    Output string
}

func worker(id int, jobs <-chan Job, results chan<- Result) {
    for job := range jobs {
        fmt.Printf("Worker %d processing job %d\n", id, job.ID)
        time.Sleep(time.Second) // Simulate work
        result := Result{
            Job:    job,
            Output: fmt.Sprintf("Processed by worker %d", id),
        }
        results <- result
    }
}

func main() {
    jobs := make(chan Job, 100)
    results := make(chan Result, 100)

    // Start 3 workers
    for w := 1; w <= 3; w++ {
        go worker(w, jobs, results)
    }

    // Send 9 jobs
    for j := 1; j <= 9; j++ {
        jobs <- Job{ID: j, Data: fmt.Sprintf("job %d", j)}
    }
    close(jobs)

    // Collect results
    for r := 1; r <= 9; r++ {
        result := <-results
        fmt.Printf("Job %d: %s\n", result.Job.ID, result.Output)
    }
}
Fan-In Pattern
Combining multiple channels into one:
func fanIn(input1, input2 <-chan string) <-chan string {
    c := make(chan string)
    go func() {
        for {
            select {
            case s := <-input1:
                c <- s
            case s := <-input2:
                c <- s
            }
        }
    }()
    return c
}

func main() {
    input1 := make(chan string)
    input2 := make(chan string)

    go func() {
        for i := 0; i < 5; i++ {
            input1 <- fmt.Sprintf("input1: %d", i)
            time.Sleep(time.Second)
        }
    }()

    go func() {
        for i := 0; i < 5; i++ {
            input2 <- fmt.Sprintf("input2: %d", i)
            time.Sleep(time.Second)
        }
    }()

    c := fanIn(input1, input2)
    for i := 0; i < 10; i++ {
        fmt.Println(<-c)
    }
}
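One caveat: the fanIn goroutine loops forever, so it leaks once the inputs go quiet. A sketch of a variant that accepts a done channel so the caller can shut it down (the done parameter is an addition, not part of the classic pattern):

```go
package main

import "fmt"

// fanIn merges two channels into one and stops when done is closed.
func fanIn(done <-chan struct{}, input1, input2 <-chan string) <-chan string {
    c := make(chan string)
    go func() {
        defer close(c)
        for {
            select {
            case s := <-input1:
                c <- s
            case s := <-input2:
                c <- s
            case <-done:
                return // caller signalled shutdown; goroutine exits cleanly
            }
        }
    }()
    return c
}

func main() {
    done := make(chan struct{})
    defer close(done) // stops the fan-in goroutine when main returns

    input1 := make(chan string, 1)
    input2 := make(chan string, 1)
    input1 <- "from input1"
    input2 <- "from input2"

    c := fanIn(done, input1, input2)
    fmt.Println(<-c)
    fmt.Println(<-c) // the two messages may arrive in either order
}
```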
Best Practices for Writing Concurrent Go Code
1. Keep It Simple
Avoid overusing goroutines. Concurrency is powerful, but it adds complexity.
2. Use context
It provides a standardized way to handle timeouts and cancellations:
func doWork(ctx context.Context) error {
    select {
    case <-time.After(5 * time.Second):
        // Work completed
        return nil
    case <-ctx.Done():
        // Context cancelled
        return ctx.Err()
    }
}
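A usage sketch: with a 1-second timeout, the ctx.Done() case fires before the 5-second work completes, and doWork returns context.DeadlineExceeded:

```go
package main

import (
    "context"
    "fmt"
    "time"
)

func doWork(ctx context.Context) error {
    select {
    case <-time.After(5 * time.Second):
        return nil // work completed
    case <-ctx.Done():
        return ctx.Err() // cancelled or timed out
    }
}

func main() {
    ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
    defer cancel() // release the context's resources

    fmt.Println(doWork(ctx))
}
```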
3. Prefer Channels over Shared Memory
Follow Go's philosophy: "Do not communicate by sharing memory; share memory by communicating."
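One way to apply that philosophy is to let a single goroutine own the state and serve requests over channels, instead of guarding shared memory with a lock. A minimal sketch (the inc/read channel names are illustrative):

```go
package main

import "fmt"

func main() {
    inc := make(chan struct{}) // increment requests
    read := make(chan int)     // read requests receive the current value

    // The owning goroutine is the only one that touches counter.
    go func() {
        counter := 0
        for {
            select {
            case <-inc:
                counter++
            case read <- counter:
            }
        }
    }()

    for i := 0; i < 5; i++ {
        inc <- struct{}{}
    }
    fmt.Println(<-read)
}
```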
4. Limit Goroutines
Unbounded goroutines can cause memory exhaustion. Use worker pools or semaphore patterns if needed:
// Semaphore pattern to limit concurrent operations
func limitedWorker(semaphore chan struct{}, work func()) {
    semaphore <- struct{}{}        // Acquire
    defer func() { <-semaphore }() // Release
    work()
}

func main() {
    maxWorkers := 10
    semaphore := make(chan struct{}, maxWorkers)

    var wg sync.WaitGroup
    for i := 0; i < 100; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            limitedWorker(semaphore, func() {
                // Do work
                time.Sleep(time.Second)
            })
        }()
    }
    wg.Wait() // Without this, main could exit before the workers finish
}
5. Use Tools
- go vet: Static analysis tool
- go run -race: Race detector
- go tool pprof: Performance profiling
# Run with race detector
go run -race main.go
# Check for common mistakes
go vet ./...
Real-World Example: Concurrent Web Scraper
Here's a complete example that demonstrates how to fetch multiple URLs concurrently:
package main

import (
    "fmt"
    "net/http"
    "time"
)

func fetch(url string, ch chan<- string) {
    start := time.Now()
    resp, err := http.Get(url)
    if err != nil {
        ch <- fmt.Sprintf("Error fetching %s: %v", url, err)
        return
    }
    defer resp.Body.Close()
    secs := time.Since(start).Seconds()
    ch <- fmt.Sprintf("%.2fs %d %s", secs, resp.StatusCode, url)
}

func main() {
    urls := []string{
        "https://golang.org",
        "https://example.com",
        "https://httpbin.org",
        "https://github.com",
        "https://stackoverflow.com",
    }

    start := time.Now()
    ch := make(chan string)
    for _, url := range urls {
        go fetch(url, ch)
    }
    for range urls {
        fmt.Println(<-ch)
    }
    fmt.Printf("Total time: %.2fs\n", time.Since(start).Seconds())
}
Enhanced Version with Context and Error Handling
package main

import (
    "context"
    "fmt"
    "net/http"
    "sync"
    "time"
)

type Result struct {
    URL      string
    Duration time.Duration
    Status   int
    Error    error
}

func fetchWithContext(ctx context.Context, url string, results chan<- Result) {
    start := time.Now()
    req, err := http.NewRequestWithContext(ctx, "GET", url, nil)
    if err != nil {
        results <- Result{URL: url, Error: err}
        return
    }

    client := &http.Client{Timeout: 10 * time.Second}
    resp, err := client.Do(req)
    if err != nil {
        results <- Result{URL: url, Error: err}
        return
    }
    defer resp.Body.Close()

    results <- Result{
        URL:      url,
        Duration: time.Since(start),
        Status:   resp.StatusCode,
        Error:    nil,
    }
}

func main() {
    urls := []string{
        "https://golang.org",
        "https://example.com",
        "https://httpbin.org",
        "https://github.com",
        "https://stackoverflow.com",
    }

    ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
    defer cancel()

    results := make(chan Result, len(urls))
    var wg sync.WaitGroup
    start := time.Now()

    for _, url := range urls {
        wg.Add(1)
        go func(url string) {
            defer wg.Done()
            fetchWithContext(ctx, url, results)
        }(url)
    }

    // Close results channel when all goroutines are done
    go func() {
        wg.Wait()
        close(results)
    }()

    // Process results
    for result := range results {
        if result.Error != nil {
            fmt.Printf("Error fetching %s: %v\n", result.URL, result.Error)
        } else {
            fmt.Printf("%.2fs %d %s\n",
                result.Duration.Seconds(), result.Status, result.URL)
        }
    }
    fmt.Printf("Total time: %.2fs\n", time.Since(start).Seconds())
}
Performance Considerations
Goroutine Overhead
While goroutines are lightweight, they still have overhead:
func BenchmarkGoroutineCreation(b *testing.B) {
    for i := 0; i < b.N; i++ {
        done := make(chan bool)
        go func() {
            done <- true
        }()
        <-done
    }
}

func BenchmarkChannelCommunication(b *testing.B) {
    ch := make(chan int)
    go func() {
        for {
            <-ch
        }
    }()
    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        ch <- i
    }
}
Memory Management
Monitor goroutine memory usage:
import (
    "fmt"
    "runtime"
    "time"
)

func monitorGoroutines() {
    ticker := time.NewTicker(5 * time.Second)
    defer ticker.Stop()
    for range ticker.C {
        fmt.Printf("Number of goroutines: %d\n", runtime.NumGoroutine())
    }
}
Key Takeaways
- Goroutines are cheap but not free - use them wisely
- Channels are the primary communication mechanism - prefer them over shared memory
- Use context for cancellation and timeouts
- Always handle goroutine lifecycle to prevent leaks
- Profile and test your concurrent code thoroughly
- Keep it simple - concurrency adds complexity
Whether you're building a web server, a CLI tool, or a distributed system, mastering Go's concurrency model will set you apart as a Go developer. Remember that concurrent code should be both correct and performant - start with correctness, then optimize for performance.
The patterns and techniques shown in this article provide a solid foundation for building concurrent applications in Go. As you gain experience, you'll develop an intuition for when and how to use these tools effectively.
Conclusion
Concurrency in Go is one of the language's most powerful features. With goroutines and channels, Go provides a simple yet expressive model for building concurrent systems. However, with great power comes great responsibility. Understanding the fundamentals and following best practices is essential for writing reliable and scalable Go applications.