📌 Understanding Buffer Memory in Linux 📌

Prashant Lakhera
3 min read · Feb 28, 2024


Many of you who use Linux may have run the free -m command and noticed a column called buff/cache. Have you ever wondered what buff (buffer) means in that context and what purpose it serves? (I will discuss cache in an upcoming post.)

# free -m
               total        used        free      shared  buff/cache   available
Mem:            7593         503        7013          16         322         7090
Swap:              0           0           0

🧠 What is buffer memory?

Buffer memory in Linux is used primarily for managing file system metadata and for read/write operations to block devices like hard drives or SSDs.

It acts as a temporary holding area that allows the system to schedule disk access more efficiently and merge multiple smaller operations into larger ones, reducing overall I/O overhead.
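You can look at the buffer figure yourself on a Linux system. A short Python sketch that reads /proc/meminfo, the same kernel interface the free command sums into its buff/cache column (Linux-only; the field names come straight from the kernel):

```python
# Read the kernel's memory counters (values in kB) from /proc/meminfo,
# the same source the `free` command uses.
def meminfo():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, _, rest = line.partition(":")
            info[key] = int(rest.split()[0])  # first field is the kB value
    return info

m = meminfo()
print(f"Buffers: {m['Buffers']} kB, Cached: {m['Cached']} kB")
```

The Buffers field here is the buffer memory this post is about; Cached is the page cache covered in the upcoming post.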

🛠️ How does the kernel manage buffer memory?

Buffers are managed through a linked list structure, allowing the kernel to keep track of free, used, and dirty buffers (buffers that need to be written to disk). The kernel uses algorithms like Least Recently Used (LRU) to decide which buffers to reuse when it needs space for new data.
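To make the LRU idea concrete, here is a toy Python model (not kernel code): a fixed-size pool of buffers keyed by block number, where a lookup promotes a buffer to "most recently used" and a miss evicts the least recently used entry when the pool is full. The class and method names are my own illustration:

```python
from collections import OrderedDict

class BufferPool:
    """Toy LRU buffer pool: block number -> buffered data, oldest first."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffers = OrderedDict()

    def get(self, block, read_from_disk):
        if block in self.buffers:                 # hit: promote to most recent
            self.buffers.move_to_end(block)
            return self.buffers[block]
        data = read_from_disk(block)              # miss: fill a new buffer
        if len(self.buffers) >= self.capacity:
            self.buffers.popitem(last=False)      # evict least recently used
        self.buffers[block] = data
        return data
```

With a capacity of 2, accessing blocks 1, 2, 1, 3 evicts block 2 rather than block 1, because block 1 was touched more recently — the same policy the kernel applies on a much larger scale.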

For performance, the kernel also optimizes buffer usage through techniques such as read-ahead (preloading data it predicts will be requested soon) and write-behind (delaying writes so they can be combined into larger batches).
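A process can even nudge the kernel's read-ahead from userspace. Python exposes the posix_fadvise system call (Unix-only), and the POSIX_FADV_SEQUENTIAL hint tells the kernel the file will be read in order, so it may prefetch more aggressively:

```python
import os

def hint_sequential(path):
    """Hint to the kernel that `path` will be read sequentially,
    which may increase read-ahead for this file."""
    fd = os.open(path, os.O_RDONLY)
    try:
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_SEQUENTIAL)
    finally:
        os.close(fd)
```

This is only a hint; the kernel is free to ignore it, but for large sequential scans it often helps.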

💾 What will happen if system memory is low?

When system memory is low, the kernel can reclaim memory from buffers and cache by flushing dirty buffers to disk and freeing clean ones.
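When and how aggressively dirty buffers are flushed is tunable. A small sketch that reads a few of the relevant vm sysctls from /proc/sys/vm on a Linux system (these particular knobs control the dirty-memory thresholds and how long dirty data may sit before writeback):

```python
# Read writeback-related kernel tunables from /proc/sys/vm (Linux-only).
def vm_sysctl(name):
    with open(f"/proc/sys/vm/{name}") as f:
        return int(f.read().split()[0])

for name in ("dirty_ratio", "dirty_background_ratio", "dirty_expire_centisecs"):
    print(name, "=", vm_sysctl(name))
```

For example, dirty_ratio is the percentage of memory that may be dirty before a writing process is forced to flush synchronously.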

📝 How it all works for data reading and writing 📝

Data Reading:

1️⃣ When a process reads data from a file, the kernel first checks if the data is already in buffer memory. If it is (a cache hit), the kernel can return the data directly from memory, which is much faster than reading from disk.

2️⃣ If the data is not in memory (a cache miss), the kernel allocates buffer space, reads the data from the disk into the buffer, and then passes the data to the process. The data remains in the buffer for future reads, improving performance for repeated access.
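You can observe this hit/miss difference yourself: read the same file twice and time each read. The first read may have to go to disk; the second is typically served from memory and is noticeably faster (timings vary by system, so no exact numbers are claimed here):

```python
import time

def timed_read(path):
    """Read a whole file and return (data, elapsed seconds)."""
    t0 = time.perf_counter()
    with open(path, "rb") as f:
        data = f.read()
    return data, time.perf_counter() - t0

# Example usage (any readable file works):
# data1, cold = timed_read("/etc/hostname")   # possibly from disk
# data2, warm = timed_read("/etc/hostname")   # usually from memory
```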

Data Writing:

1️⃣ When a process writes data, the kernel can store it in buffer memory instead of writing it directly to the disk. This allows the process to continue execution without waiting for the disk write, which can significantly improve performance.

2️⃣ The kernel later writes the buffered data to disk more efficiently, such as combining multiple small writes into a larger one or ordering writes to reduce disk head movement.
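The two steps above can be sketched as a toy write-behind buffer in Python (again, an illustration with invented names, not kernel code): writes return immediately and only mark blocks dirty, and flush() sorts the dirty blocks and merges adjacent ones into single, larger operations:

```python
class WriteBehind:
    """Toy write-behind buffer: collect small writes, merge at flush time."""

    def __init__(self):
        self.dirty = {}  # block number -> data awaiting writeback

    def write(self, block, data):
        self.dirty[block] = data  # returns immediately; no disk I/O yet

    def flush(self):
        ops = []
        for block in sorted(self.dirty):          # order writes by position
            if ops and ops[-1][0] + len(ops[-1][1]) == block:
                ops[-1][1].append(self.dirty[block])  # merge adjacent block
            else:
                ops.append([block, [self.dirty[block]]])
        self.dirty.clear()
        return ops  # each entry represents one combined disk write
```

Writing blocks 0, 1, and 5 and then flushing produces just two combined operations instead of three, which is the essence of why delayed writes reduce I/O overhead.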

🏁 So overall, buffer memory, as shown in the free -m output, is a crucial part of Linux's strategy for improving system performance. By managing this memory effectively, the kernel greatly reduces the time spent on disk I/O, which is typically orders of magnitude slower than memory access. And because the allocation is dynamic, buffer memory boosts performance without starving user-level applications of the memory they need.

Pic credit: https://www.linuxatemyram.com/

📚 If you’re interested in a more in-depth explanation of these topics, please check out my new book “Cracking the DevOps Interview”

To learn more about AWS, check out my book “AWS for System Administrators”


Written by Prashant Lakhera

AWS Community Builder, Ex-Redhat, Author, Blogger, YouTuber, RHCA, RHCDS, RHCE, Docker Certified,4XAWS, CCNA, MCP, Certified Jenkins, Terraform Certified, 1XGCP
