Kevin Kelche

Golang Mutex - A Complete Guide


Introduction

In concurrent programming, a mutex (short for mutual exclusion) is a synchronization mechanism that allows multiple threads or processes to share resources without interfering with each other. The main purpose of a mutex is to avoid race conditions, which occur when two or more threads try to mutate the same resource simultaneously.

Golang's standard library provides a mutex in the sync package, which is essential for a language built around concurrency. In this article, we will explore the basics of mutexes, including their syntax, methods, and use cases. We will also look at advanced topics such as deadlocks and starvation, and how to avoid them.

A Race Condition Example

Before we dive into mutexes, let's see an example of why we need them in the first place. Consider the following code:

main.go
package main

import (
  "fmt"
  "sync"
)

func main() {
  var wg sync.WaitGroup
  var counter int

  for i := 0; i < 1000; i++ {
    wg.Add(1)
    go func() {
      counter++
      wg.Done()
    }()
  }

  wg.Wait()
  fmt.Println(counter)
}


In this example, we create a counter variable and increment it from 1000 separate goroutines.

Running this code, we find that the counter does not reach 1000 as expected but lands somewhere between roughly 800 and 1000, and the result changes from run to run. This happens because multiple goroutines read and write the counter at the same time: two goroutines can read the same old value, both increment it, and write it back, so one of the increments is lost. This is referred to as a race condition.
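If you want to see the race reported explicitly rather than inferring it from the wrong total, you can run the program with Go's built-in race detector, which prints a data-race report for the counter variable:

go run -race main.go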

Now let’s get into mutex.

Mutex Basics

Mutex is a struct provided by the sync package in the standard library and is used to synchronize access to a shared resource. To use a mutex, we need to import the sync package and create a mutex variable.

import "sync"

var mutex sync.Mutex


Internally, the sync.Mutex struct keeps its state in an integer field called state. In the current implementation, the lowest bit of state records whether the mutex is locked (0 for unlocked, 1 for locked), and the remaining bits hold bookkeeping such as the number of waiting goroutines. You never touch this field directly; the practical takeaway is that the zero value of sync.Mutex is an unlocked mutex, ready to use without initialization.

The sync.Mutex struct also has two main methods: Lock() and Unlock().

mutex.Lock()
defer mutex.Unlock()
//critical section of code


The Lock() method acquires the mutex, blocking until it becomes available if another goroutine already holds it. The Unlock() method releases the mutex, making it available to other goroutines; calling Unlock() on an unlocked mutex is a run-time error. We use defer so that the mutex is released even if the code in the critical section panics or returns early.

Solving the Race Condition Example

Now that we have a feel for the sync.Mutex methods, let's fix the race condition in the code we wrote earlier.

main.go
package main

import (
  "fmt"
  "sync"
)

func main() {
  var wg sync.WaitGroup
  var mutex sync.Mutex
  var counter int

  for i := 0; i < 1000; i++ {
    wg.Add(1)
    go func() {
      mutex.Lock()
      counter++
      mutex.Unlock()
      wg.Done()
    }()
  }

  wg.Wait()
  fmt.Println(counter)
}


Running this code, we see that the counter now reaches 1000 as expected. The mutex ensures that only one goroutine at a time executes the critical section that increments the counter, so no increments are lost.
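A common Go idiom is to keep the mutex in the same struct as the data it protects, so every access goes through methods that lock correctly. Here is a minimal sketch (the SafeCounter name and its methods are illustrative, not part of the original example):

type SafeCounter struct {
  mu    sync.Mutex
  value int
}

// Inc increments the counter while holding the lock.
func (c *SafeCounter) Inc() {
  c.mu.Lock()
  defer c.mu.Unlock()
  c.value++
}

// Value returns the current count, also under the lock.
func (c *SafeCounter) Value() int {
  c.mu.Lock()
  defer c.mu.Unlock()
  return c.value
}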

Mutex.TryLock()

The TryLock() method (added in Go 1.18) attempts to acquire the mutex without blocking and returns a boolean indicating whether the lock was acquired. As the standard library documentation notes, correct uses of TryLock are rare.

mutex.TryLock()

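If you do use it, the returned boolean must be checked, since the critical section may only be entered when TryLock() reports success. A small, hedged sketch using the counter from the earlier example (the else branch is purely illustrative):

if mutex.TryLock() {
  // We got the lock without blocking.
  counter++
  mutex.Unlock()
} else {
  // Another goroutine holds the lock; skip this update instead of waiting.
}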

Mutex vs RWMutex

While a Mutex provides mutual exclusion for shared resources, it can be heavy-handed in certain situations. For a resource that is frequently read but only occasionally written to, a plain Mutex is inefficient: it blocks concurrent readers from each other even though reads alone cannot corrupt the data.

To address this issue Go provides another synchronization mechanism called RWMutex (short for read-write mutex). The RWMutex allows multiple goroutines to read a resource at the same time but only one goroutine to write to it at a time.

The syntax and usage of RWMutex are similar to Mutex, except that it adds read-locking methods, RLock() and RUnlock(), alongside Lock() and Unlock() for writers.

var rwMutex sync.RWMutex

rwMutex.RLock()
rwMutex.RUnlock()

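To make the read/write split concrete, here is a minimal sketch of a read-mostly cache (the Cache type is illustrative, not part of the original example): readers take RLock() and can run concurrently, while a writer takes Lock() and excludes everyone else.

type Cache struct {
  mu   sync.RWMutex
  data map[string]string
}

// Get takes a read lock, so any number of readers can run at the same time.
func (c *Cache) Get(key string) (string, bool) {
  c.mu.RLock()
  defer c.mu.RUnlock()
  v, ok := c.data[key]
  return v, ok
}

// Set takes the write lock, which excludes both readers and other writers.
func (c *Cache) Set(key, value string) {
  c.mu.Lock()
  defer c.mu.Unlock()
  c.data[key] = value
}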

Deadlocks

A deadlock occurs when goroutines wait forever for a mutex that will never be released, causing the program to hang indefinitely or, if no other goroutines can make progress, to crash with fatal error: all goroutines are asleep - deadlock!. This can happen between two goroutines waiting on each other, but a single goroutine can also deadlock itself by trying to lock a mutex it already holds, because sync.Mutex is not reentrant. That is exactly what the following example does:

main.go
package main

import (
  "fmt"
  "sync"
)

type Brand struct {
  Mutex sync.Mutex
  Data map[string]string
}

func NewBrand() Brand {
  return Brand{
    Data: make(map[string]string),
  }
}

func (b *Brand) Has(key string) bool {
  b.Mutex.Lock()
  defer b.Mutex.Unlock()
  _, ok := b.Data[key]
  return ok
}

func (b *Brand) Add(key, value string) {
  b.Mutex.Lock()
  defer b.Mutex.Unlock()
  if !b.Has(key) {
    b.Data[key] = value
  }
}

func main() {
  brand := NewBrand()
  brand.Add("name", "Apple")
  fmt.Println(brand.Has("name"))
}


Running this code, the program exits with a fatal error. Add() locks the mutex and then calls Has(), which tries to lock the same mutex again. Since sync.Mutex is not reentrant, the second Lock() blocks forever waiting for a lock that its own goroutine already holds, and the runtime reports the deadlock.

To avoid this, never call a method that locks a mutex while you are already holding that mutex, and in general acquire and release locks in a consistent order. In the code above, we can remove the nested locking by inlining the lookup into Add() instead of calling Has():

main.go
...
func (b *Brand) Add(key, value string) {
  b.Mutex.Lock()
  defer b.Mutex.Unlock()
  _, ok := b.Data[key]
  if !ok {
    b.Data[key] = value
  }
}

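Another way to keep Has() available to callers without reintroducing the nested lock is the common Go pattern of an unexported helper that assumes the caller already holds the lock. A sketch reusing the Brand type from above:

// has assumes the caller already holds b.Mutex.
func (b *Brand) has(key string) bool {
  _, ok := b.Data[key]
  return ok
}

func (b *Brand) Has(key string) bool {
  b.Mutex.Lock()
  defer b.Mutex.Unlock()
  return b.has(key)
}

func (b *Brand) Add(key, value string) {
  b.Mutex.Lock()
  defer b.Mutex.Unlock()
  if !b.has(key) {
    b.Data[key] = value
  }
}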

Starvation

Starvation occurs when a goroutine waits for a mutex but keeps losing the race to acquire it because other goroutines are constantly locking and unlocking it. Since Go 1.9, sync.Mutex mitigates this on its own: once a goroutine has been waiting for more than about a millisecond, the mutex switches to a starvation mode in which ownership is handed to waiters in FIFO order. When a goroutine should instead wait until some condition about the shared data becomes true, the sync.Cond type is the right tool: it lets goroutines wait for, and signal, a condition associated with a lock.

cond := sync.NewCond(&sync.Mutex{})

cond.L.Lock()
cond.Wait() // releases cond.L while waiting; another goroutine must call cond.Signal() or cond.Broadcast()
cond.L.Unlock()

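As a self-contained sketch of how sync.Cond is used in practice (the ready flag and the extra goroutine are illustrative): the waiting goroutine holds the lock and checks the condition in a loop, Wait() releases the lock while it sleeps, and the other goroutine changes the condition and calls Signal().

main.go
package main

import (
  "fmt"
  "sync"
)

func main() {
  cond := sync.NewCond(&sync.Mutex{})
  ready := false

  go func() {
    cond.L.Lock()
    ready = true
    cond.L.Unlock()
    cond.Signal() // wake up the goroutine waiting on the condition
  }()

  cond.L.Lock()
  for !ready {
    cond.Wait() // releases the lock while waiting, reacquires it when woken
  }
  cond.L.Unlock()

  fmt.Println("condition met")
}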

Mutex vs Channel

Before going further, let’s solve the race condition example using channels.

main.go
package main

import (
  "fmt"
  "sync"
)

func main() {
    var w sync.WaitGroup
    c := make(chan bool, 1)
    counter := 0
    for i := 0; i < 1000; i++ {
        w.Add(1)
        go func() {
            c <- true
            counter++
            <- c
            w.Done()
        }()
    }
    w.Wait()
    fmt.Println(counter)
}


Running this code gives the same result as the mutex example. The buffered channel with a capacity of 1 acts as a semaphore: each goroutine must send a value into the channel before touching the counter and receives it back afterwards, so only one goroutine can be in that section at a time.

When choosing between a mutex and a channel, keep in mind that channels are designed for communicating data and coordinating work between goroutines, not for plain mutual exclusion. For something as simple as protecting a counter they are overkill and, as the benchmark below shows, slower.

However, the choice between the two depends on the use case and the trade-offs in terms of performance, complexity, and maintainability.

Performance Benchmarks

To compare the performance of mutexes and channels, we will benchmark two functions that each increment a counter 1000 times from separate goroutines: one protected by a mutex and one by a buffered channel. (The sync/atomic package provides even lighter-weight atomic primitives for cases like a plain counter; a sketch of that approach appears after the benchmark results.)

main.go
package main

import (
  "sync"
)

func useMutex() {
  var wg sync.WaitGroup
  var mutex sync.Mutex
  var counter int64

  for i := 0; i < 1000; i++ {
    wg.Add(1)
    go func() {
      mutex.Lock()
      counter++
      mutex.Unlock()
      wg.Done()
    }()
  }
  wg.Wait()
}

func useChannel() {
  var w sync.WaitGroup
  c := make(chan bool, 1)
  var counter int64

  for i := 0; i < 1000; i++ {
    w.Add(1)
    go func() {
      c <- true
      counter++
      <- c
      w.Done()
    }()
  }
  w.Wait()
}



func main() {
  useMutex()
  useChannel()
}


We will use the testing package to benchmark the performance of the two functions.

main_test.go
package main

import (
  "testing"
)

func BenchmarkUseMutex(b *testing.B) {
  for i := 0; i < b.N; i++ {
    useMutex()
  }
}

func BenchmarkUseChannel(b *testing.B) {
  for i := 0; i < b.N; i++ {
    useChannel()
  }
}


Run the benchmark test using the go test command.

output
go test -bench=.

goos: linux
goarch: amd64
pkg: sleep/mutex/bench
cpu: Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz
BenchmarkUseMutex-4            838     1259419 ns/op
BenchmarkUseChannel-4          516     2095668 ns/op
PASS
ok    sleep/mutex/bench  2.536s


From the benchmark results above, we can see that useMutex() is faster than useChannel(). A mutex has lower overhead than a channel here: the channel version goes through the runtime's channel send and receive machinery to ensure that only one goroutine touches the counter at a time, while the mutex version only has to acquire and release a lock.

However, it is important to note that the performance of mutexes and channels depends on the use case and scale of the application.
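For completeness, since a plain counter is the textbook use case for the sync/atomic package mentioned earlier, here is a hedged sketch of the same loop using atomic.AddInt64. It needs neither a mutex nor a channel; the function would be added to the benchmark file above, along with "sync/atomic" in its imports.

main.go
...
// useAtomic increments the counter with an atomic, lock-free add.
func useAtomic() {
  var wg sync.WaitGroup
  var counter int64

  for i := 0; i < 1000; i++ {
    wg.Add(1)
    go func() {
      atomic.AddInt64(&counter, 1)
      wg.Done()
    }()
  }
  wg.Wait()
}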

Conclusion

In conclusion, mutexes are a fundamental tool for building concurrency-safe Go applications. By applying the techniques discussed in this article, you can avoid common pitfalls such as race conditions, deadlocks, and starvation, and use mutexes with confidence.
