JET Academy

What is Mutual Exclusion?

Mutual exclusion is a synchronization principle in parallel and concurrent programming that prevents multiple processes or threads from accessing a shared resource at the same time: only one process may be inside the critical section at any moment. It is a fundamental concept aimed at preventing race conditions and data corruption, and is commonly abbreviated to "mutex."

Why is Mutual Exclusion Needed?

In modern computer systems, multiple processes or threads run simultaneously (multitasking, multithreading). These processes frequently access shared resources — memory areas, files, databases, printers, network connections. If multiple processes perform write operations on the same data simultaneously, the result can be unpredictable and incorrect. Mutual exclusion is the solution to this problem.

Critical Section: The portion of program code that accesses a shared resource. Only one process/thread can be in the critical section at a time. Others must wait (wait/block).

Example scenario (bank account): Two processes withdraw money from the same bank account:

  • Account has 1000 AZN
  • Process A: wants to withdraw 500 AZN
  • Process B: wants to withdraw 600 AZN

Without mutual exclusion:

  1. Process A reads balance: 1000 AZN
  2. Process B reads balance: 1000 AZN (A hasn't written yet)
  3. Process A calculates: 1000 - 500 = 500, writes
  4. Process B calculates: 1000 - 600 = 400, writes
  5. Result: The account shows 400 AZN, yet 1100 AZN in total was withdrawn from a 1000 AZN account. Process A's update is lost. The correct outcome is 500 AZN after A's withdrawal, with B's 600 AZN request rejected for insufficient funds.

With mutual exclusion:

  1. Process A acquires the lock
  2. Process A reads the balance (1000 AZN) and writes 500 AZN
  3. Process A releases the lock
  4. Process B acquires the lock (now permitted)
  5. Process B reads the updated balance (500 AZN); the 600 AZN withdrawal is rejected for insufficient funds
  6. Process B releases the lock
  7. Result: the data stays consistent

Conditions of Mutual Exclusion

The classical formulation of the problem, going back to the work of Edsger Dijkstra (a computer science pioneer), lists four essential requirements for a mutual exclusion solution:

1. Mutual Exclusion (itself): At most one process can be in the critical section. Two processes cannot be in the critical section simultaneously.

2. Progress: If no process is in the critical section and some processes want to enter, the choice of which one enters next must be made in finite time. The system must not deadlock.

3. Bounded Waiting: A process cannot wait infinitely to enter the critical section. Waiting time is bounded. There must be no starvation problem — a process must not be continuously skipped.

4. No Assumptions: No assumptions should be made about the speed or number of processes. The solution must work in all circumstances.

Mutual Exclusion Mechanisms

1. Lock / Mutex

Mutex (Mutual Exclusion Lock): The simplest and most widespread mechanism. Binary variable (0 or 1, unlocked or locked).

Operating principle:

  • When a process wants to enter the critical section, it "locks" the mutex
  • If the mutex is already locked, the process blocks (waits)
  • When exiting the critical section, the mutex is "unlocked"
  • The waiting process (if any) wakes up and locks the mutex

Code example (C, POSIX threads):

#include <pthread.h>
#include <stdio.h>

pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
int shared_counter = 0;

void* thread_function(void* arg) {
    pthread_mutex_lock(&mutex);     // Lock
    // Critical section
    shared_counter++;
    printf("Counter: %d\n", shared_counter);
    pthread_mutex_unlock(&mutex);   // Unlock
    return NULL;
}

Spinlock: A lock type that busy-waits (active waiting) to acquire the lock. The process doesn't block; it continuously checks the lock in a loop (spins). Effective for short critical sections, but wastes CPU while waiting.

Recursive Mutex: Can be locked multiple times by the same thread. Unlock count must equal lock count.

2. Semaphore

Semaphore: A more general mechanism than mutex. Counter variable (0, 1, 2, ..., N). Two operations: wait (P, down) and signal (V, up).

Binary Semaphore: Counter is 0 or 1. Equivalent to mutex.

Counting Semaphore: Counter from 0 to N. Manages access to N resources. For example, if there are 10 connections in a database connection pool, semaphore counter = 10.

Operating principle:

  • wait() / P(): If the counter is 0, the process blocks; otherwise the counter is decremented and the process proceeds.
  • signal() / V(): Increments the counter and wakes one of the blocked processes, if any.

Code example (C, POSIX):

#include <semaphore.h>
#include <stdio.h>

sem_t semaphore;
// Before creating threads:
// sem_init(&semaphore, 0, 1);   // Initial value = 1 (binary semaphore)

void* thread_function(void* arg) {
    sem_wait(&semaphore);   // P operation, lock
    // Critical section
    printf("In critical section\n");
    sem_post(&semaphore);   // V operation, unlock
    return NULL;
}

Use in Producer-Consumer problem:

sem_t empty;   // Empty buffer slots
sem_t full;    // Full buffer slots
sem_t mutex;   // Mutex for buffer access

sem_init(&empty, 0, BUFFER_SIZE);
sem_init(&full, 0, 0);
sem_init(&mutex, 0, 1);

// Producer
void* producer(void* arg) {
    while (1) {
        produce_item();
        sem_wait(&empty);   // Wait for an empty slot
        sem_wait(&mutex);   // Lock the buffer
        add_to_buffer();
        sem_post(&mutex);   // Unlock the buffer
        sem_post(&full);    // Signal a full slot
    }
}

// Consumer
void* consumer(void* arg) {
    while (1) {
        sem_wait(&full);    // Wait for a full slot
        sem_wait(&mutex);   // Lock the buffer
        remove_from_buffer();
        sem_post(&mutex);   // Unlock the buffer
        sem_post(&empty);   // Signal an empty slot
        consume_item();
    }
}

3. Monitor

Monitor: High-level synchronization construct. Used in object-oriented programming. Combines shared data and access methods in one unit. Mutual exclusion is automatically provided.

Characteristics:

  • Only one thread at a time can execute the monitor's methods
  • Condition variables provide wait and notify
  • Java's synchronized methods and blocks implement the monitor concept

Code example (Java):

class BankAccount {
    private int balance = 1000;

    // Synchronized method - monitor
    public synchronized void withdraw(int amount) {
        if (balance >= amount) {
            balance -= amount;
            System.out.println("Withdrew: " + amount);
        } else {
            System.out.println("Insufficient funds");
        }
    }

    public synchronized void deposit(int amount) {
        balance += amount;
        System.out.println("Deposited: " + amount);
    }

    public synchronized int getBalance() {
        return balance;
    }
}

Condition Variables:

import java.util.LinkedList;
import java.util.Queue;

class ProducerConsumer {
    private Queue<Integer> buffer = new LinkedList<>();
    private int capacity = 10;

    public synchronized void produce(int item) throws InterruptedException {
        while (buffer.size() == capacity) {
            wait();   // Wait while the buffer is full
        }
        buffer.add(item);
        notify();     // Notify a waiting consumer
    }

    public synchronized int consume() throws InterruptedException {
        while (buffer.isEmpty()) {
            wait();   // Wait while the buffer is empty
        }
        int item = buffer.remove();
        notify();     // Notify a waiting producer
        return item;
    }
}

4. Read-Write Lock

Read-Write Lock: Different lock strategy for read and write operations.

Principle:

  • Multiple readers can access the resource simultaneously (reading doesn't modify the data)
  • A writer requires exclusive access: no readers or other writers are allowed
  • Many implementations block new readers while a writer is waiting (writer preference), so writers don't starve

Use case: Database, cache, shared data structures. Many read operations, few writes.

Code example (C++, std::shared_mutex):

#include <shared_mutex>
#include <iostream>

std::shared_mutex rw_mutex;
int shared_data = 0;

// Reader thread
void reader() {
    std::shared_lock<std::shared_mutex> lock(rw_mutex);
    // Multiple readers can be here simultaneously
    int value = shared_data;
    std::cout << "Read: " << value << std::endl;
}

// Writer thread
void writer(int new_value) {
    std::unique_lock<std::shared_mutex> lock(rw_mutex);
    // Only one writer, and no readers
    shared_data = new_value;
    std::cout << "Written: " << new_value << std::endl;
}

5. Peterson's Algorithm (Software solution)

Peterson's Algorithm: A software solution that provides mutual exclusion for two processes without special hardware support, published by Gary Peterson in 1981.

Algorithm (for 2 processes):

bool flag[2] = {false, false};
int turn = 0;

// Process 0
void process0() {
    flag[0] = true;
    turn = 1;
    while (flag[1] && turn == 1);   // Busy wait
    // Critical section
    flag[0] = false;
}

// Process 1
void process1() {
    flag[1] = true;
    turn = 0;
    while (flag[0] && turn == 0);   // Busy wait
    // Critical section
    flag[1] = false;
}

Note: Modern CPUs and compilers can reorder memory operations, and Peterson's algorithm may fail as written above. Memory barriers/fences (or sequentially consistent atomic operations) are needed to make it correct.

6. Hardware-based solutions

Test-and-Set: An atomic (indivisible) hardware instruction. It reads a variable and sets it to true in a single step.

#include <stdbool.h>

// Executed atomically by the hardware; this C function only shows the semantics
bool test_and_set(bool *target) {
    bool old_value = *target;
    *target = true;
    return old_value;
}

// Lock implementation
bool lock = false;

void enter_critical_section() {
    while (test_and_set(&lock));   // Spin until the lock is acquired
}

void exit_critical_section() {
    lock = false;
}

Compare-and-Swap (CAS): Available on modern CPUs (x86: CMPXCHG). Compares a value against an expected one and updates it in the same atomic operation.

// Executed as one atomic instruction; this C function only shows the semantics
bool compare_and_swap(int *ptr, int expected, int new_value) {
    if (*ptr == expected) {
        *ptr = new_value;
        return true;
    }
    return false;
}

Atomic Operations: Atomic (uninterruptible) operations at CPU level. In C11/C++11 <stdatomic.h> and <atomic>.

#include <atomic>

std::atomic<int> counter(0);

void increment() {
    counter.fetch_add(1, std::memory_order_relaxed);   // Atomic, thread-safe
}

Mutual Exclusion Problems

1. Deadlock

Two or more processes wait for resources held by each other and none can proceed.

Example:

  • Process A: Holds Lock1, waits for Lock2
  • Process B: Holds Lock2, waits for Lock1
  • Result: Deadlock, both processes wait infinitely

Four conditions of deadlock (Coffman conditions):

  1. Mutual exclusion
  2. Hold and wait
  3. No preemption
  4. Circular wait

Deadlock prevention:

  • Lock ordering: always acquire multiple locks in the same global order
  • Timeout mechanisms (try-lock / timed lock)
  • Deadlock detection algorithms

2. Livelock

Processes are active but none makes progress. Unlike deadlock, processes change state but achieve no result.

Example: Two people meet in a corridor, both move to the same side, meet again, continues...

3. Starvation

A process fails to acquire the lock for a long time, possibly forever, because other processes are always favored over it.

Solution: Fair scheduling, aging (priority increases as waiting time increases).

4. Priority Inversion

A low-priority process holds the lock while a high-priority process waits for it. Meanwhile a medium-priority process preempts the low-priority holder, so the high-priority process is effectively blocked by the medium-priority one.

Solution: Priority inheritance protocol — lock-holding process temporarily receives high priority.

5. Convoying

A lock convoy forms when many threads repeatedly queue up behind the same lock, so they proceed one at a time at the pace of the current holder and overall throughput collapses.

Performance and Optimization

Lock Granularity:

  • Coarse-grained lock: One lock for large data structure. Simple, but low concurrency.
  • Fine-grained lock: Separate locks for small parts. High concurrency, but complex, overhead.

Lock-Free and Wait-Free algorithms: Thread-safe data structures without locks using atomic operations like CAS (Compare-and-Swap). High performance, but difficult to design.

Lock Elision and Transactional Memory: Optimization of locks at hardware level. Intel TSX (Transactional Synchronization Extensions).

RCU (Read-Copy-Update): Mechanism used in Linux kernel. Readers don't take locks, writer creates new version and updates pointer.

Practical Use and Recommendations

Hold locks briefly: Critical section should be minimal. Perform long operations outside lock.

Lock ordering: When using multiple locks, always acquire in same order (deadlock prevention).

Try-lock and timed lock: pthread_mutex_trylock(), pthread_mutex_timedlock(), std::mutex::try_lock(). Attempt the lock without blocking indefinitely, and back off or report an error instead of hanging in a deadlock.

Avoid nested locks: Hold at most one lock at a time where possible; nesting locks invites deadlock.

Testing: Run thread-safety tests and race-condition detectors (Valgrind's Helgrind, ThreadSanitizer).

Documentation: Clearly document which lock protects which data.

In conclusion, mutual exclusion is a fundamental concept of concurrent and parallel programming that ensures safe access to shared resources. Mechanisms such as mutexes, semaphores, monitors, and read-write locks suit different scenarios, and problems such as deadlock, livelock, and starvation must be designed around. A well-chosen mutual exclusion strategy yields thread-safe, reliable, and performant programs, and implementing it correctly is a core skill in software engineering.
