Everything Linux: Introduction

Evan Wireman
7 min read · Jun 26, 2021


Linus Torvalds, creator of Linux

In this section, I will go over some of the history of the Linux kernel, introduce some of the benefits of Linux, and discuss some of the core functions of any operating system.

Linux Who?

Linux was developed by Linus Torvalds, pictured above, in 1991. Linus began developing Linux as a personal project while studying computer science at the University of Helsinki. His intention was to create a free, Unix-like kernel. Over time, this project became one of the most used, studied, and revered systems to date.

Perhaps the most important benefit of Linux is that it isn’t a commercial operating system. The Linux source code is released under the GNU General Public License, which means anyone is free to study, modify, and share it. In fact, you can find it here: https://github.com/torvalds/linux.

The availability of Linux’s source code introduces several other important benefits. For instance, Linux is completely free to download and use. Since the source code is exposed, Linux is also fully customizable.

Since Linux’s creation in 1991, countless skilled developers have contributed to its source code. These engineers have helped make Linux one of the most powerful systems in existence today.

How do Operating Systems Operate?

Technically speaking, Linux is not an operating system (OS). Linux is a kernel: the core piece of code that the bootloader loads into RAM when a computer starts up. The kernel provides key functionality to everything else on the system, such as communication with hardware, access to memory, and more.

However, kernels lack many of the things that make operating systems operate, at least in the way we are used to. For instance, the kernel on its own provides no graphical desktop, shell, compiler, text editor, and so on. These utilities are typically found in Linux distributions, which are (generally) full-blown operating systems built upon the Linux kernel. Since the shape and capabilities of a system are determined by its kernel, computer scientists often use the terms operating system and kernel interchangeably.

Now we can get into the interesting part of this section. The main roles of any operating system are as follows:

  • Communicating with the hardware (such as RAM, the CPU, hard disks, etc.)
  • Providing an environment for higher level (user) programs to execute

A key point to notice about the roles listed above is that all higher level programs require access to some of the hardware in order to execute. In Unix-like systems, the OS acts as a liaison between user programs and the hardware. So, whenever a program needs access to a piece of hardware, it must request the resource from the OS. The kernel is responsible for determining whether to fulfill the request. Assuming the request is approved, the kernel communicates with the hardware on behalf of the user program.
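To make this concrete, here is a minimal sketch in C of a user program asking the kernel to do I/O on its behalf. The write() system call used here is just one example of such a request; file descriptor 1 refers to standard output:

```c
#include <string.h>
#include <unistd.h>   /* write() wraps the underlying system call */

int main(void)
{
    const char *msg = "Hello from user space!\n";

    /* The program never touches the terminal hardware directly.
     * write() traps into the kernel, which performs the actual I/O
     * on the program's behalf. */
    write(1, msg, strlen(msg));
    return 0;
}
```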

This hardware-communication system is managed through the use of execution modes. These modes are enforced by the hardware, which restricts what code may do depending on the mode it is running in. Essentially, user programs operate in User Mode, which means they have access to only a small subset of the system’s resources. The kernel, however, operates in Kernel Mode, which means that it has access to everything the system has to offer.
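As a rough illustration (assuming an x86-64 Linux machine and GCC’s inline assembly syntax), here is a tiny program that tries to execute a privileged CPU instruction from User Mode. The hardware refuses, and the kernel terminates the process:

```c
/* The HLT instruction is privileged: only Kernel Mode code may execute it.
 * Running this from user space causes the CPU to raise a fault, which the
 * kernel turns into a signal that kills the process. */
#include <stdio.h>

int main(void)
{
    printf("About to execute a privileged instruction...\n");
    __asm__ volatile ("hlt");   /* general protection fault in User Mode */
    printf("This line is never reached.\n");
    return 0;
}
```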

Processes vs. Programs

A very important abstraction found in every operating system is the process, defined as “an instance of a program in execution.” A process requires certain resources in order to run, namely memory and access to the CPU. Regarding the memory requirement, each process runs within an address space, which is the subset of memory that the kernel allows that process to access. Access to the CPU is granted by the scheduler, a foundational part of any operating system, which decides which process runs next. Both address spaces and schedulers will be discussed in much greater depth in future sections of this series.
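As a small, Linux-specific illustration, a process can peek at its own identity and address space through the /proc filesystem; the snippet below is just a sketch of that idea:

```c
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char line[256];

    printf("My process ID is %d\n", (int)getpid());

    /* /proc/self/maps lists the regions of this process's address space. */
    FILE *maps = fopen("/proc/self/maps", "r");
    if (maps == NULL)
        return 1;

    /* Print the first few regions (code, heap, libraries, stack, ...). */
    for (int i = 0; i < 5 && fgets(line, sizeof(line), maps); i++)
        fputs(line, stdout);

    fclose(maps);
    return 0;
}
```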

Handling Multiple Processes

Linux is a multiprocessing operating system, meaning that multiple processes can be running on the system at once. This introduces several complications, most notably issues with shared memory.

Let’s analyze a hypothetical situation. Let’s say that we are at a bank, and the bank’s computer system has two functions: deposit and withdrawal. Let’s assume I have a shared bank account with someone I trust (for simplicity, we will call this person Jimbo), and that the balance in the account is currently $1000.

Now let’s assume that both Jimbo and I wake up one morning and discover that we each owe $1000 for car repairs. Jimbo lives about 15 minutes from me and is closer to another branch of the same bank. So, we each go to our respective branches and, at the same time, attempt to withdraw $1000.

There are two main outcomes here. The most obvious outcome is that one of our withdrawals gets processed first, and the other is left attempting to withdraw $1000 from an account that now has a balance of $0.

However, what if my branch reads the account balance and sees $1000, and at the same moment Jimbo’s branch reads the balance and also sees $1000? Both branches approve the withdrawals simultaneously, and we are left with a balance of -$1000.

Operating systems are responsible for preventing this sort of situation. They do so through the use of mutexes and semaphores. Both of these will be discussed in future sections, but it is useful to know the terms, as they come up often when speaking about multiprocessing systems such as Linux.
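To give a flavor of what a mutex looks like in practice, here is a minimal sketch of the bank scenario using POSIX threads. The thread names and amounts are invented for illustration, and real banking software is of course far more involved:

```c
#include <pthread.h>
#include <stdio.h>

static int balance = 1000;
static pthread_mutex_t account_lock = PTHREAD_MUTEX_INITIALIZER;

static void *withdraw(void *arg)
{
    int amount = *(int *)arg;

    /* Only one thread at a time may read and update the balance. */
    pthread_mutex_lock(&account_lock);
    if (balance >= amount) {
        balance -= amount;
        printf("Withdrew $%d, balance is now $%d\n", amount, balance);
    } else {
        printf("Withdrawal of $%d denied, balance is only $%d\n",
               amount, balance);
    }
    pthread_mutex_unlock(&account_lock);
    return NULL;
}

int main(void)
{
    pthread_t me, jimbo;
    int amount = 1000;

    /* Two "branches" attempt the withdrawal at (roughly) the same time. */
    pthread_create(&me, NULL, withdraw, &amount);
    pthread_create(&jimbo, NULL, withdraw, &amount);

    pthread_join(me, NULL);
    pthread_join(jimbo, NULL);
    return 0;
}
```

Compile with gcc -pthread. If you remove the lock and unlock calls, both threads can read the $1000 balance before either subtracts from it, reproducing the -$1000 outcome described above.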

In addition to the risks of shared memory between multiple processes, it is worth noting how Linux actually manages the execution of multiple processes. There are two system calls, fork() and exit(), that are used to create and destroy processes respectively.

A process that invokes the fork() system call is referred to as the parent process, while the new process that spawns is called the child process. Since a process is essentially a sequential execution of a program’s instructions, it is easy to see why the ability to spawn additional processes is important.

One example is a game with a voice chat feature, such as Call of Duty. When you join a pre-game lobby, you are able to communicate with the other players in the game. The core game process, which contacted the Call of Duty servers and found a lobby for you to join, spawns a child process that handles the communication between you and the other players.
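Here is a minimal sketch of fork() and exit() in action; the “voice chat” and “game loop” are only pretend, represented by printed messages:

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();          /* create a child process */

    if (pid == 0) {
        /* Child: pretend this handles the voice chat. */
        printf("Child %d: handling voice chat\n", (int)getpid());
        exit(0);                 /* child terminates itself */
    } else if (pid > 0) {
        /* Parent: pretend this runs the core game, then waits for the child. */
        printf("Parent %d: running the game, waiting for child %d\n",
               (int)getpid(), (int)pid);
        wait(NULL);              /* block until the child has exited */
    } else {
        perror("fork");
        return 1;
    }
    return 0;
}
```

The parent’s call to wait() hints at the question discussed next: how processes know when to end.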

An important question is how processes know when to end. For example, could you imagine if, after leaving a game of Call of Duty, you could still hear the ramblings of your teammates? Process synchronization is another core function of operating systems, and will be discussed in more detail in future sections.

Handling Memory

In my opinion, memory management is the most complicated aspect of Linux, and also the most important. Linux implements several systems to allow for secure, reliable memory access. These systems include:

Virtual Memory: This is a clever memory abstraction that allows for multiprocessing, memory allocation, and much more. Virtual memory is built upon a virtual address space, which essentially provides user processes with virtual, or fake, memory addresses. The kernel and a piece of hardware called the Memory Management Unit are able to translate these virtual addresses into physical locations within memory.
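As a hint of what virtual addresses buy us, the sketch below forks a process: parent and child print the same virtual address for a variable, yet each sees its own value, because the kernel and the MMU map that address to different physical memory for each process:

```c
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int value = 42;

    if (fork() == 0) {
        value = 99;              /* only the child's copy changes */
        printf("Child:  value=%d at virtual address %p\n",
               value, (void *)&value);
        return 0;
    }

    wait(NULL);
    printf("Parent: value=%d at virtual address %p\n",
           value, (void *)&value);
    return 0;
}
```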

RAM Usage: Linux systems partition a computer’s RAM into two parts: a small area responsible for storing the kernel code, and the remaining space, which is handled by virtual memory. This remaining space can be used in three different ways: satisfying kernel requests for dynamic data structures, satisfying process requests for generic memory areas, and getting better performance from disks through the use of caches.

Kernel Memory Allocator: This is a subsystem within the kernel that is responsible for satisfying memory requests from all parts of the system. These requests may come from other kernel subsystems or system calls from user programs.
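For a taste of what a request to the kernel allocator looks like, here is a sketch of a trivial kernel module using kmalloc() and kfree(). Unlike the other examples, this is kernel code: it must be built against the kernel headers and loaded as a module, and the module name and messages are made up for illustration:

```c
#include <linux/init.h>
#include <linux/module.h>
#include <linux/slab.h>          /* kmalloc() and kfree() */

static char *buffer;

static int __init alloc_demo_init(void)
{
    /* Ask the kernel memory allocator for 128 bytes of normal kernel memory. */
    buffer = kmalloc(128, GFP_KERNEL);
    if (!buffer)
        return -ENOMEM;
    pr_info("alloc_demo: got 128 bytes from the kernel allocator\n");
    return 0;
}

static void __exit alloc_demo_exit(void)
{
    kfree(buffer);               /* hand the memory back to the allocator */
    pr_info("alloc_demo: memory freed\n");
}

module_init(alloc_demo_init);
module_exit(alloc_demo_exit);
MODULE_LICENSE("GPL");
```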

Caching: Caching allows for faster access to data stored on hard disks. Communicating with a hard disk is incredibly slow compared to communicating with RAM, so Linux “guesses” which information a process may need from the disk and stores a copy of it in RAM. Then, when the process issues a request for that information, the kernel first checks the cache. If the information is found there, we refer to this as a cache hit, which ultimately results in much faster access.
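A rough way to observe this effect is to read the same file twice and time each pass; the second pass is usually served from the cache in RAM and is noticeably faster. The file path below is just a placeholder, so substitute any reasonably large file on your system:

```c
#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static double read_whole_file(const char *path)
{
    char buf[1 << 16];
    struct timespec start, end;
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1.0;

    clock_gettime(CLOCK_MONOTONIC, &start);
    while (read(fd, buf, sizeof(buf)) > 0)
        ;                        /* discard the data, we only care about timing */
    clock_gettime(CLOCK_MONOTONIC, &end);
    close(fd);

    return (end.tv_sec - start.tv_sec) + (end.tv_nsec - start.tv_nsec) / 1e9;
}

int main(void)
{
    const char *path = "/var/log/syslog";   /* placeholder example file */

    printf("First read  (likely from disk):  %.4f s\n", read_whole_file(path));
    printf("Second read (likely from cache): %.4f s\n", read_whole_file(path));
    return 0;
}
```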

These memory management tools are far more complicated and brilliantly engineered than I can explain in this article. However, I will attempt to go over these systems in further detail later in this series.

Key Takeaways

  • Linux is a kernel, not an operating system
  • You, and everyone else, have the ability to go and read the Linux source code
  • While operating systems have only two main responsibilities, they are incredibly complex systems that are worth studying in depth
  • Linux implements many tactics to allow for efficient, secure, and reliable computing
  • The operating system buzzwords introduced above, such as virtual memory and processes, are important to keep in mind as you read on

I hope you enjoyed this article. I am sure that my explanations of these systems are not optimally written, so feel free to leave a comment providing further detail or context to anything discussed in this article.

Happy hacking :)
