Memory Barrier

From Software Engineers Wiki

What is a memory barrier?



When a program accesses data in memory (or memory-mapped I/O regions), the compiler may reorder the accesses, and the CPU may reorder them as well. Memory barriers instruct the compiler and the CPU to restrict this reordering.

Compiler memory barrier

A compiler barrier prevents the compiler from moving memory accesses from one side of it to the other. It is usually implemented with the following inline assembly (GCC/Clang syntax):

asm volatile ("" : : : "memory")

CPU memory barrier

Some CPU architectures provide memory barrier instructions to prevent the CPU from reordering memory accesses.

ARM Microprocessor

The ARM architecture provides three barrier instructions.

  • DMB: Data Memory Barrier. It ensures that all explicit memory accesses that appear in program order before the DMB instruction are observed before any explicit memory accesses that appear in program order after the DMB instruction. It does not affect the ordering of any other instructions executing on the processor.
  • DSB: Data Synchronization Barrier. It acts as a special kind of memory barrier: no instruction that appears in program order after the DSB executes until the DSB completes, and the DSB completes only when the explicit memory accesses before it have completed.
  • ISB: Instruction Synchronization Barrier. It flushes the pipeline in the processor, so that all instructions following the ISB are fetched anew from cache or memory after the ISB completes.

In Linux kernel

Compiler memory barriers:

  • barrier()

CPU memory barriers:

  • mb()
  • wmb()
  • rmb()

SMP conditional CPU memory barriers:

  • smp_mb()
  • smp_wmb()
  • smp_rmb()

On the ARM architecture, the CPU memory barriers are implemented with the dsb instruction:

#define dsb(opt)        asm volatile("dsb " #opt : : : "memory")

#define mb()            dsb(sy)
#define rmb()           dsb(ld)
#define wmb()           dsb(st)

