100 Days of DevOps — Day 76: How the Linux Kernel Is Organized

To view the updated DevOps course (101DaysofDevOps):

Course Registration link: https://www.101daysofdevops.com/register/

Course Link: https://www.101daysofdevops.com/courses/101-days-of-devops/

YouTube link: https://www.youtube.com/user/laprashant/videos

Welcome to Day 76 of 100 Days of DevOps. The focus for today is how the Linux kernel is organized.

In Linux we have

  • User Space: where users (processes/applications/services) do their work; the typical interface into the system is a shell
  • Kernel Space: the kernel is the only component that has direct access to the hardware.
User Space ----> Kernel Space

If a user needs to interact with the kernel, there are only limited options, which are provided by the kernel and strictly define what the user can do:

  • Signal
  • System Calls

System Calls

  • An essential part of the Linux Operating System
  • Processes cannot access the kernel directly
  • System calls are used as an interface for processes to the kernel. glibc provides a library interface to use system calls from programs
  • Common tasks like opening, listing, reading, and writing files all involve system calls
  • The fork() and exec() system calls determine how a process starts
  • fork(): the kernel creates an almost identical copy of the current process (the child)
  • exec(): the kernel starts a program, which replaces the current process (see the sketch after this list)
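
To make fork() and exec() concrete, here is a minimal C sketch of the pattern a shell typically uses to start a program: fork() duplicates the current process, and exec() replaces the child's image with a new program (ls in this example).

/* fork() + exec(): roughly what a shell does to run a command */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                 /* kernel creates an almost identical copy */

    if (pid < 0) {
        perror("fork");
        exit(EXIT_FAILURE);
    }

    if (pid == 0) {                                  /* child process */
        execlp("ls", "ls", "-l", (char *)NULL);      /* replace the child with ls */
        perror("execlp");                            /* reached only if exec fails */
        _exit(EXIT_FAILURE);
    }

    waitpid(pid, NULL, 0);              /* parent waits for the child to finish */
    printf("child %d finished\n", (int)pid);
    return 0;
}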

There is one more piece involved in this whole process: libraries, which are simply additional code used by the shell or a process to add functionality. For example, the most important one is glibc, which provides standard functions and a library interface to system calls.

# ldd $(which passwd)
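
As a small illustration of how glibc sits between a program and the kernel, the sketch below (assuming Linux with glibc) performs the same write twice: once through the glibc wrapper write() and once through the generic syscall() interface with SYS_write.

/* glibc wrapper vs. raw system call for the same kernel operation */
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
    const char *msg1 = "via the glibc wrapper write()\n";
    const char *msg2 = "via the raw syscall() interface\n";

    write(STDOUT_FILENO, msg1, strlen(msg1));               /* glibc wrapper */
    syscall(SYS_write, STDOUT_FILENO, msg2, strlen(msg2));  /* direct system call */
    return 0;
}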

Typically we have two types of libraries

  • Static: linked into the program at compile time (e.g., .a archives such as libc.a)
  • Dynamic (shared): stored on disk (e.g., libc.so), generally found inside /lib64 or /usr/lib64

Generally, an application has no idea where to find these libraries; a helper program, ld.so (the dynamic linker), does this task on its behalf. ld.so reads some default directories plus the configuration under /etc/ld.so.conf.d/:

# ls -l /etc/ld.so.conf.d/

So if we have libraries in a non-standard path, we can put a config file pointing to that path inside this directory and then run ldconfig to update the library cache:

# ldconfig -v

The way the kernel interacts with hardware is via drivers:

Kernel → Drivers → Hardware

One important thing to note: the Linux kernel is pluggable, i.e., drivers don't have to be built into the kernel and can be plugged in when required. Another way to put it is that the Linux kernel is modular in nature.

Now let’s zoom in more on the kernel.

The kernel has an interface called memory management, which decides how information is stored in and fetched from RAM.

The scheduling interface determines which process gets CPU time and which processes have to wait.

Drivers determine how the kernel interacts with hardware such as disks.

Kernel --> Memory Management --> RAM

Now to check the currently loaded modules, run lsmod, which is just a userspace interface to /proc/modules and presents the data in a nicer format.

# lsmod

Now to get more information about a particular module, run modinfo:

# modinfo isofs

Strace

strace — trace system calls and signals

-c — count time, calls, and errors for each syscall and report summary

# strace -fc ls

To check for a specific system call (note that on modern systems ls may use openat() rather than open(), so you might need -e trace=open,openat to see anything):

-e expr — a qualifying expression: option=[!]all or option=[!]val1[,val2]…

# strace -e open ls

To check for library calls, use ltrace.

ltrace — A library call tracer

  • -c — count time and calls, and report a summary on exit.
  • -f — trace children (fork() and clone()).
# ltrace -fc ls

Signals

Signals provide software interrupts: a signal is a way to tell a process that it has to do something. To list all available signals:

# kill -l
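
To see the receiving side of a signal, here is a minimal C sketch: the process registers a handler for SIGTERM with sigaction() and then waits; sending it SIGTERM (for example, kill -TERM <pid>) interrupts pause() and runs the handler.

/* Catching SIGTERM: the kernel delivers the signal and the handler runs */
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t got_signal = 0;

static void handler(int signo)
{
    got_signal = signo;                 /* async-signal-safe: just set a flag */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = handler;
    sigaction(SIGTERM, &sa, NULL);      /* register the handler for SIGTERM */

    printf("PID %d waiting for SIGTERM...\n", (int)getpid());
    pause();                            /* sleep until a signal arrives */
    printf("caught signal %d, exiting cleanly\n", (int)got_signal);
    return 0;
}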

Looking forward to you guys joining this journey, spending a minimum of an hour every day for the next 100 days on DevOps work, and posting your progress using any of the below mediums.
