Preparing for a Linux interview in a short time does not have to be a challenge. The popular Linux interview questions and answers below, for both freshers and experienced candidates, bridge the knowledge gap on topics such as the difference between Unix and Linux and the usage of common Linux commands, and will help you get hired as a Linux administrator, cloud administrator, or a similar profile. Once you have worked through these questions, you will be able to clear even the toughest Linux interviews.
| Unix | Linux |
|---|---|
| Unix is an operating system; "Linux", strictly speaking, refers to the kernel at the core of Linux-based operating systems. | Linux is a Unix-like OS: its core functionality closely resembles that of Unix. |
| Unix is an operating system with some commands in common with Linux. | Linux is an operating system with some commands in common with Unix. |
| Unix traditionally uses a command-line interface. | Linux distributions offer a graphical user interface in addition to the command line. |
| Unix is mainly used on server systems, mainframes, and high-end computers. | Linux is mainly used on home PCs, mobile phones, desktops, and servers. |
| Unix has rigid hardware requirements and cannot be installed on every machine. | Linux is very flexible and can be installed on most home PCs. |
| Unix versions include AIX, HP-UX, BSD, IRIX, and Solaris. | Linux distributions include Ubuntu, Debian, openSUSE, and Red Hat. |
| Unix is written in C and assembly language. | Linux is also written in C and assembly language. |
Type any one of the following commands to find the OS name and version in Linux:
cat /etc/os-release
lsb_release -a
hostnamectl
% hostnamectl
   Static hostname: WDFL41000139D
         Icon name: computer-vm
           Chassis: vm
        Machine ID: bfc98d9a56631ccde8f8578d58347195
           Boot ID: dc39baafe82849b39413507cfd395b54
    Virtualization: microsoft
  Operating System: SUSE Linux Enterprise Server 12 SP2
       CPE OS Name: cpe:/o:suse:sles:12:sp2
            Kernel: Linux 4.4.121-92.98-default
      Architecture: x86-64

% lsb_release -a
LSB Version:    n/a
Distributor ID: SUSE
Description:    SUSE Linux Enterprise Server 12 SP2
Release:        12.2
Codename:       n/a
Also type the following command to find the Linux kernel version:
uname -r
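As a quick sketch, assuming a distribution that ships /etc/os-release (any systemd-era distro; the exact fields vary), the same information can be pulled out in a script:

```shell
# /etc/os-release is a shell-style file, so it can be sourced directly.
. /etc/os-release
echo "Distro:  $NAME"
echo "Version: ${VERSION_ID:-unknown}"
echo "Kernel:  $(uname -r)"
```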
The ps (process status) command is used to provide information about the currently running processes, including their PIDs (process identification numbers). A process is a running instance of a program, and every process is assigned a unique PID by the kernel.
$ ps -ef
$ ps -ef | grep tomcat
All files and directories in Linux carry the following three permissions, represented by a three-digit octal value:
Read - provides the ability to read the contents of a file (represented by 'r' in the first position: "r--")
Write - provides the ability to edit or delete the contents of a file (represented by 'w' in the second position: "-w-")
Execute - provides the ability to execute the file (represented by 'x' in the third position: "--x")
The octal value is calculated as the sum of the permissions:
“read” is 4
“write” is 2
“execute” is 1
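The sum can be sketched in a few lines of POSIX shell; `perm_digit` is a helper name introduced here for illustration:

```shell
# Sum read=4, write=2, execute=1 to get the octal digit for one
# permission triplet, e.g. "rw-" -> 4+2 = 6.
perm_digit() {
  n=0
  case $1 in r??) n=$((n + 4)) ;; esac
  case $1 in ?w?) n=$((n + 2)) ;; esac
  case $1 in ??x) n=$((n + 1)) ;; esac
  echo "$n"
}
perm_digit rwx   # 7
perm_digit rw-   # 6
perm_digit r--   # 4
```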
SELinux stands for Security-Enhanced Linux. In today's world data is everything, and keeping a server protected and available is a major challenge. The Linux kernel offers SELinux, a security mechanism designed to protect the server from misconfigurations and from unauthorized data access or modification. It lets you define policies that control how programs and files may be accessed.
SELinux operates in one of three modes: Enforcing, Permissive, and Disabled.
LILO (Linux Loader) is a boot loader for Linux; the name combines the first two letters of each word in "Linux LOader".
LILO loads the Linux operating system into main memory so the system can boot and start working. Operating systems such as Windows and macOS come with their own boot loaders; when you install Linux, you need to install a boot loader for it, and LILO is one of several available.
When the system starts, the BIOS performs some initial tests and transfers control to the Master Boot Record; from there, LILO loads the Linux OS and starts it. An advantage of LILO is that it allows a fast boot of the Linux OS.
/bin: A critical directory containing the essential executable programs (commands and scripts) needed, among other things, to bring the system online in single-user mode for repair.
/sbin: This directory holds commands needed to boot the system under normal conditions, but which are not normally executed by regular users.
/usr: One of the largest directories on a Linux system, often mounted from a separate partition. The binaries and files of most installed programs reside here.
/usr/bin: Contains programs, executables, and scripts that are not needed for the boot process but are run by users. Most programs here are executed by normal users rather than root.
/usr/sbin: Program binaries and executables required by the system administrator are kept here. They are needed neither for the boot process nor by normal users.
A cron job is similar to the Task Scheduler in Windows. cron is a software utility that schedules a command or script on your server to run automatically at a specified time and date. Cron jobs are very useful for automating repetitive tasks.
For example, suppose we need to delete some temporary files every week to conserve disk space. Once we have a script in place that performs the required action, we can set up a cron job to run it at a specific time. Scripts executed as cron jobs are typically used to modify files, directories, or databases, but they can also perform tasks that do not modify data on the server, such as sending email notifications.
We need to enter the schedule line below into the crontab, opened for editing with:
testmachine@myworld-linux:~$ crontab -e
0 0 * * 0 /path/to/command
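The five schedule fields of a crontab entry can be read as follows (the command path is the placeholder from the example above):

```text
# ┌───────── minute (0-59)
# │ ┌─────── hour (0-23)
# │ │ ┌───── day of month (1-31)
# │ │ │ ┌─── month (1-12)
# │ │ │ │ ┌─ day of week (0-6, Sunday = 0)
# │ │ │ │ │
  0 0 * * 0 /path/to/command    # midnight every Sunday, i.e. weekly
```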
The Ctrl+D error is a common error that occurs when the root user makes a permanent entry in the fstab file and, by mistake, changes the path of a mounted file system; the OS then reports an error while rebooting and fails to load.
fstab is a system configuration file on Linux that contains information about the major file systems. It is located in the /etc directory and can be viewed with "cat /etc/fstab".
There is a set of steps to recover the system from a Ctrl+D error.
FTP is the simplest file transfer protocol for exchanging files to and from a remote computer or network. Windows, Linux, and UNIX operating systems all have built-in command-line FTP clients that can be used to establish an FTP connection. FTP works in a client-server architecture to communicate and transfer files during an established FTP session.
When the client initiates the data connection to the server, it is called a passive connection, whereas when the server initiates the data connection to the client, it is called an active connection.
In phase 1, when the connection is initiated with the server, user credentials are passed for authentication; this is the control connection phase. In phase 2, the actual data is transferred between client and server; this is the data connection phase.
When we execute "ps aux" in a Linux terminal, we can see the various states of the processes running on the system under the STAT column.
R: The process is running on a CPU or waiting for one (Running or Runnable).
S: The process is waiting for an event to complete, such as input from the terminal (Sleeping).
D: The process is in uninterruptible sleep; it cannot be killed or interrupted, and the only way to get rid of it is to reboot the system.
Z: The process is a zombie: it has already been killed, but its information and data still exist in the process table.
T: The process has been stopped, either by a job-control signal or because it is being traced.
root is the most privileged account in Linux, used by the system administrator. The root user has full access to the system and can perform any operation. It is the default account of Linux, created during installation, and is also known as the root account or superuser. Because of its unrestricted access, the root account needs to be secured and used carefully.
Some of the functions that can be performed by the root account are:
Granting permissions on files and directories is a crucial data-security task for a Linux system administrator. The permission string of any file or directory is a combination of nine characters: the first three from the left represent the owner's access, the middle three represent the group's access, and the last three apply to other users who are neither the owner nor part of the group.
There are several ways to grant permissions, using either the numeric or the alphabetic method; knowing these shortcuts makes an administrator's life easier.
rwx = 111 in binary = 7
rw- = 110 in binary = 6
r-x = 101 in binary = 5
r-- = 100 in binary = 4
So when you specify the numeric code 765 to set access permissions, the result is rwxrw-r-x: rwx (7) for the owner, rw- (6) for the group, and r-x (5) for others.
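This can be verified on a throwaway file (stat -c is the GNU coreutils form of stat):

```shell
# Apply the octal mode 765 and read back both the octal and the
# symbolic form of the resulting permissions.
f=$(mktemp)
chmod 765 "$f"
stat -c '%a %A' "$f"   # prints: 765 -rwxrw-r-x
rm -f "$f"
```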
CPU usage percentages above 100% are a common source of confusion in Linux. A user or a new system administrator may complain that the CPU shows more than 100% utilization and suspect a configuration or VM issue, but this is expected behavior.
By default, top treats each processor individually: a process's CPU usage is expressed relative to a single CPU, so on a multi-core machine a multi-threaded process can legitimately show more than 100%.
This default is known as Irix mode. Toggling Irix mode off switches top to the so-called Solaris mode, in which usage is divided by the total number of CPUs, so the values always stay between 0 and 100.
tar and zip are two of the most commonly used archiving utilities on a Linux system.
tar is an archiving utility that bundles the selected files or directories into a single archive; the extension is .tar.
gzip (decompressed with gunzip) compresses individual files only; the extension is .gz. You can run gzip on a tar archive to compress a directory tree that tar has bundled.
zip is both an archiver and a compression utility for files and directories; the extension is .zip.
The benefit of tar is that it can be applied to whole directories. Sometimes you do not want to compress the files but only bundle them, and tar is perfect for that; tar with gzip is the classic combination. Like gzip, there is also bzip2, which uses a different compression algorithm and usually produces smaller files than gzip.
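A minimal sketch of the tar-plus-gzip combination (the demo directory and file names are placeholders):

```shell
# Bundle a directory with tar and compress it with gzip in one step
# (-c create, -z gzip, -f archive file), then list the members.
mkdir -p demo && echo hello > demo/a.txt
tar -czf demo.tar.gz demo
tar -tzf demo.tar.gz          # lists demo/ and demo/a.txt
rm -rf demo demo.tar.gz
```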
Linux is an open-source operating system that allows users to modify the kernel as per their requirements. This lets different parts of Linux be deployed, modified, and tested by different organizations, resulting in multiple flavors (distributions) of Linux in the market, each with its own features.
Major Linux distributions are as below:-
Ubuntu: The most common and well-known distribution. It ships with lots of pre-installed apps for the user's convenience, is very easy to use, and offers both a command line and a GUI.
Red Hat Enterprise: Red Hat Enterprise Linux (RHEL) is a commercial Linux distribution. It is stable, well tested, user-friendly and, most importantly, NOT free to use.
Debian: Debian is one of the fastest and most user-friendly Linux distributions.
Linux Mint: A distribution whose desktop feels familiar to Windows users; it is a good choice for beginners getting hands-on with a Linux system.
Fedora: Fedora sees less production use because of its fast-moving release cycle; it supports the GNOME 3 desktop environment by default.
The kernel is the lowest level of software: it controls all hardware and carries out the functions that users and programs request. The Linux kernel is the core of the operating system, providing the interface through which user commands are executed and the associated hardware is controlled. It is the layer that gives users the ability to control the system hardware and to develop applications on top of the operating system; all underlying hardware is reachable only through the kernel. The kernel gives you the independence to use the software and programming language of your choice, since it is capable of translating their requests into machine instructions that drive the required hardware subsystems.
The Linux kernel is free and open-source software; under the GNU General Public License (GPL), it is legal for anyone to modify it.
Open-source software allows you to distribute the software along with its source code. This lets people review the code and add features as per their requirements, which is a win-win for the whole community.
Network teaming, also known as Ethernet channel bonding, enables two or more network interface cards (NICs) to work as a single virtual NIC. The machines then operate on the virtual interface, which can increase bandwidth and provides redundancy across the NICs. This helps achieve redundant links, load balancing, and fault tolerance in production systems: if one physical NIC goes down or is unplugged, traffic automatically moves to another NIC. Channel/NIC bonding is implemented by the bonding driver in the kernel.
The two main types of network teaming are:
A shell is an interpreter that converts commands and scripts into machine actions. The shell prompt is a command-line interface that takes input from the user and executes the selected program accordingly. Shell scripts can be combined into packages to automate tasks and schedule background jobs; they are saved with the .sh extension and scheduled using cron jobs.
Some of the best-known shells are:
An email client is primarily a desktop or mobile application that enables users to send and receive email directly on the device. Typically, an email client requires an email address to be set up, along with the mail server details and connectivity to the mail server. The configuration includes the email address, password, POP3/IMAP and SMTP addresses, port numbers, email aliases, and other related preferences.
A mail server (or email server) is a server that provides email functionality on a network and helps clients handle and deliver email, whether over an intranet or the internet. It receives emails from client computers and delivers them to other mail servers after proper authentication and authorization. Mail servers use an MTA (mail transfer agent) with SMTP (Simple Mail Transfer Protocol) to transmit email. You can use any open-source MTA or a paid product, depending on your requirements and security policies.
We use several commands in our day-to-day Linux activities and support work. You can also use help or the man pages for the list of commands and their available options. The most basic and most widely used commands are below:
SMTP (Simple Mail Transfer Protocol) is a push protocol used to send mail, whereas POP (Post Office Protocol) and IMAP (Internet Message Access Protocol) are used to retrieve those emails on the receiver or client side. An SMTP server on Linux is fast, reliable, and secure, and typically supports POP3, IMAP, and webmail access as well. Linux systems on a network can use an SMTP server to send alert notifications. The mail transfer agent is the application that uses SMTP to transmit email over the network. Popular open-source MTAs include Postfix, Sendmail, Exim, and qmail (Mutt and Alpine, often mentioned alongside them, are mail clients rather than MTAs). Each agent has its own advantages and disadvantages; review your system and install the one that fits your needs.
Postfix is a free and open-source MTA (mail transfer agent) used to send and receive email; it is responsible for routing and delivering electronic mail. It is cross-platform and one of the most popular MTAs.
First, we need to understand the difference and the relationship between a network connection and a network interface. A network interface can carry many connections, but each connection is bound to a single network interface. Network connections are unclassified by default; it is the system or network administrator's responsibility to assign them to zones and establish the appropriate level of trust by creating firewall policies.
A network zone expresses the trust level of a network connection. Creating zones helps distinguish secure networks from unsecured ones: your system can allow broad access in a trusted zone and restrict access in unsecured ones. This helps network administrators plan the level of monitoring for different networks.
The initial network zones:
| Zone | Description |
|---|---|
| trusted | Fully trusted connections. All incoming traffic is allowed. |
| home / work / internal | Partly trusted connections. The user/administrator defines the open services. |
| dmz | Mostly untrusted connections; the demilitarized zone. |
| public / external | Mostly untrusted connections. The user/administrator defines the open services. |
| block | Fully untrusted connections. No incoming traffic is allowed. |
| drop | Fully untrusted connections. All packets are dropped immediately. |
A physical volume (physical disk) is the first layer of disk management. It represents the physical disks connected to the system, either local to the system or presented from SAN storage, and is normally managed by the datacenter and storage teams. Disk additions and expansions are done at this level when space is available.
A volume group is the second, middle layer between physical volumes and logical volumes. It clubs all the physical volumes together and presents them to the system as a single pool of storage for further partitioning and use. In today's large environments, an application or database often needs more space than a single physical disk can provide; a volume group allows multiple physical disks to be combined into one volume, which lets the system team work with larger disks without any splitting at their end.
A logical volume (logical disk) is the managed subdivision of a volume group, sized according to usage instead of allocated directly from physical storage. Logical Volume Management (LVM) partitions can span physical drives and can be resized, unlike traditional partitions.
Swap space is like the pagefile in Windows: it is virtual memory in which disk space reserved for swap behaves like additional RAM.
In swap space, some amount of physical disk holds transactions or data temporarily. Ideally this data would reside in RAM, but when memory is under pressure the system moves some of it into swap space. RAM is always costlier than disk, and disk performance keeps improving, so swap helps manage physical memory cost-effectively by parking temporary and least-used data on disk while the system still treats it as part of memory. Swapping of pages to and from physical storage is handled by the system's memory management; by default this is automatic and requires no manual intervention.
For more precise memory management, there are also tools to manage swap space as per our requirements.
By default, the standard input device is the keyboard and the standard output device is the display screen. But to automate processes, or to pass the output of one process to another, these defaults are not enough. The Linux feature of directing input and output data to and from processes is called input/output redirection. I/O redirection is an essential feature for good programming and shell scripting: it is used to take input from, and deliver results to, wherever required. Feeding data from the user into a process is input redirection; a process passing its output on to another process or function is output redirection on its side, and input redirection on the receiving side.
In Linux, we have three redirections available, as below:
Data and system security is one of the biggest challenges today, and we need to secure our systems against all possible vulnerabilities. Systems should be configured according to best practices, whether adopted from approved vendors or developed by in-house experts. SSH (Secure Shell) is a service used to connect to a Linux system in a secure manner, and it is the most common tool a system administrator uses for system management and security. SSH also offers advanced features that require proper knowledge and expertise to use well.
Some very simple steps to secure the SSH service are below:
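As an illustrative sketch (not a complete hardening policy; the user names in AllowUsers are placeholders), a few commonly tightened sshd_config directives look like this:

```text
# /etc/ssh/sshd_config — illustrative hardening directives:
PermitRootLogin no           # disable direct root logins
PasswordAuthentication no    # allow key-based authentication only
AllowUsers admin deploy      # hypothetical whitelist of login users
ClientAliveInterval 300      # drop idle sessions after a timeout
# Apply the changes:  systemctl reload sshd
```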
System calls provide a way to use operating system services: they form the interface between a process and the operating system. System calls are not for beginners, as using them requires some level of expertise, but they provide additional control over the system. Processes are the most basic unit on a Linux system, and process management relies on a number of system calls, some of which are:
| System call | Purpose |
|---|---|
| fork | Creates a new process |
| exec | Executes a program |
| wait | Makes a process wait |
| exit | Exits/terminates the process |
| clone | Creates a child process |
| exit_group | Exits/terminates all threads in the process |
| nice | Changes the priority of a running process |
| getppid | Finds the parent process ID |
| vfork | Creates a child process and blocks the parent |
From the command shell, use the following command for memory usage information:
% cat /proc/meminfo
MemTotal:       16250912 kB
MemFree:         3281056 kB
MemAvailable:   10404492 kB
Buffers:         1101852 kB
Cached:          4654684 kB
SwapCached:       129304 kB
Active:          7930860 kB
Inactive:        2892144 kB
Active(anon):    4118480 kB
Inactive(anon):  1197660 kB
Active(file):    3812380 kB
Inactive(file):  1694484 kB
Unevictable:         236 kB
There are other commands that also give memory information:
free -m
vmstat
top
htop
To list every occurrence of the term “warn” on a separate line, run grep -o warn <path>. Adding the -r flag makes the search recursive through every file under the given path, and the -I flag ignores matches in binary files.
In addition, the -w flag can be included to match the exact word only, ignoring superstrings such as “warnings”, and the -i flag makes the search case-insensitive.
% grep -iworI warn . | wc -l
12
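The individual flags can be reproduced on a throwaway file (the sample log content below is made up for the demonstration):

```shell
# -o prints each match on its own line, -i ignores case, and -w skips
# superstrings like "warnings", so three matches remain.
printf 'WARN: disk full\nwarnings here\nwarn warn\n' > sample.log
grep -iow warn sample.log            # WARN, warn, warn
grep -iow warn sample.log | wc -l    # 3
rm -f sample.log
```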
In Linux, the curl command is a tool for transferring data to or from a server. It is commonly used to test an application's endpoint or its connectivity to an upstream service endpoint, for example to determine whether the application can reach another service, such as a database, or to check whether a service is up and running. The command does not require any user interaction.
$ curl -I -s application:5000
HTTP/1.0 500 INTERNAL SERVER ERROR

This example, which returns a server error, shows that the application cannot reach the server. Options used in the above command:
-I -> fetch only the header information
-s -> silent mode; suppresses the progress output and response body
curl with the -O option is used to download a file, saving it under its remote name:
curl -O http://knowledgehut.com/myfile.tar.gz   # saved as myfile.tar.gz
| Hard link | Soft link |
|---|---|
| A hard link associates two (or more) filenames with the same inode. | A soft link is a special file type that points to another file; the contents of this special file are the name of the file it points to. |
| Hard links all share the same disk data blocks while functioning as independent directory entries. | Soft links are created with the ln -s command. |
| Hard links cannot span disk partitions, since inode numbers are unique only within a given device. | Once the file a symbolic link points to is deleted, the link still points to it, leaving a dangling link. |
Command to create a hard link to ‘knowledgehut’:
$ ln knowledgehut hlink
Command to create a symbolic link to ‘knowledgehut’:
$ ln -s knowledgehut slink
SED command in UNIX stands for stream editor, which is used to make changes to file content.
It can be used to find and replace strings or patterns without opening a file
The default behavior of the sed substitute command is to replace only the first occurrence of the pattern on each line; the second and subsequent occurrences on the same line are not replaced.
If we use the ‘g’ flag along with the above command, sed replaces all occurrences of “unix” with “linux” globally (g stands for global):
sed 's/unix/linux/g' sample.txt
sed '/^$/d' sample.txt
Here the “^” symbol represents the start of a line and “$” represents the end of the line, so “^$” matches empty lines; d stands for delete.
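Both recipes can be tried against a small sample file (the contents below are made up for the demonstration):

```shell
# Run the substitution and the empty-line deletion on a sample file.
printf 'unix one\n\nunix two unix\n' > sample.txt
sed 's/unix/linux/g' sample.txt   # every "unix" becomes "linux"
sed '/^$/d' sample.txt            # the empty middle line is dropped
rm -f sample.txt
```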
top is the command used to list the running processes and their resource utilization (RAM and CPU usage). It gives information about each process running on the host:
Sample Output :
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
7629 greys 20 0 749m 291m 28m S 1 7.4 16:51.40 firefox
19935 greys 20 0 133m 14m 10m S 0 0.4 2:38.52 smplayer
1 root 20 0 4020 880 592 S 0 0.0 0:00.96 init
2 root 15 -5 0 0 0 S 0 0.0 0:00.00 khutreadd
3 root RT -5 0 0 0 S 0 0.0 0:00.04 datamigration/0
4 root 15 -5 0 0 0 S 0 0.0 0:00.90 ksoftirqd/0
5 root RT -5 0 0 0 S 0 0.0 0:00.00 watchdog/0
6 root RT -5 0 0 0 S 0 0.0 0:00.06 datamigration/1
Most commonly used options with the top command are below:
top -u <user> -> show the processes of a particular user
top -i -> exclude idle tasks
top -p <pid> -> show a particular process
lsof (‘LiSt Open Files’) is used to find out which files are open and which processes are using them:
# lsof -u kunand
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
sshd 1838 kunand mem REG 253,0 122436 190247 /lib/libselinux.so.1
sshd 1838 kunand mem REG 253,0 255968 190256 /lib/libgssapi_krb5.so.2.2
sshd 1838 kunand mem REG 253,0 874580 190255 /lib/libkrb5.so.3.3
A zombie process is a process whose execution has completed but which has not yet been removed from the process table.
When a program forks and the child finishes before the parent, the kernel retains some of the child's information in case the parent needs to check the child's exit status, which the parent does by calling wait().
In the interval between the child terminating and the parent calling wait(), the child is said to be a zombie process.
Execute the below command
ps aux | grep Z
The child will have a 'Z' in its status field, indicating a zombie process.
This command also gives the details of all zombie processes in the process table.
Since zombie processes are already dead, the user cannot kill something that is already dead.
Execute the below command :
kill -s SIGCHLD pid
Replace pid with the parent process ID; the parent process will then reap all of its child processes that are dead, removing them from the process table.
This can be checked using the du (disk usage) command.
du -sh * | grep G — lists the entries in the current directory whose reported size is in the gigabyte range.
$ du -sh * | grep G
Below commands can be used :
nslookup - to find the IP address from a hostname or vice-versa.
ipconfig or ifconfig - based on whether the host is Windows or Unix
hostname -i - on Linux
The touch command is used to create an empty file:
$ touch knowledgehut.txt
For existing files or directories, touch instead updates the last access and modification times to the current time.
| head | tail |
|---|---|
| The head command displays the first few lines (10 by default) of a file. | The tail command displays the last few lines (10 by default) of a file. |
#Display first 10 lines of the file - application.log
head application.log
#Display first 50 lines of the file - application.log
head -50 application.log
#Display last 10 lines of the file - application.log
tail application.log
#Display last 50 lines of the file - application.log
tail -50 application.log
This can be done using 'sed' command :
# '-n' suppresses sed's automatic printing; 'p' prints only the addressed lines
sed -n 10,20p input.txt > output.txt
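The address range is inclusive, which can be verified on a generated file:

```shell
# Extract lines 10-20 of a 30-line file; 11 lines come out.
seq 1 30 > input.txt
sed -n '10,20p' input.txt > output.txt
wc -l < output.txt    # 11
rm -f input.txt output.txt
```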
This can be done using wc command (word count)-
#Number of lines
$ wc -l knowledgehut.txt 4 knowledgehut.txt
#Number of words
$ wc -w knowledgehut.txt 3 knowledgehut.txt
#Number of characters
$ wc -m knowledgehut.txt 19 knowledgehut.txt
Aliases are shortcuts used to represent a command, or a group of commands, executed with or without custom options.
#Alias for log directory
alias logs="cd /user/application/logs"
These aliases can be put in the ~/.bash_aliases file.
To make the aliased commands available in an existing terminal, the user needs to source ~/.bashrc from that terminal:
$ source ~/.bashrc
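A ~/.bash_aliases file might look like the following sketch (the paths and alias names here are illustrative, not prescribed):

```text
# ~/.bash_aliases — illustrative entries:
alias logs='cd /user/application/logs'
alias ll='ls -alF'
alias gs='git status'
# Reload in the current terminal with:
#   source ~/.bashrc
```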
LILO is a boot loader for Linux. LILO stands for Linux Loader that is used to load Linux into memory.
It is used mainly to load the Linux operating system into main memory so as to begin operation.
LILO handles tasks such as locating the kernel, identifying other supporting programs, loading the kernel into memory, and starting it. The configuration file of LILO is located at /etc/lilo.conf; LILO reads this file, which tells it where to place the boot loader.
The -perm option of the find command is used to find files based on their permissions.
Here "." or period denotes the current directory
$ find . -perm 777
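A self-contained sketch (the directory and file names are placeholders): without a leading "-" or "/", -perm matches the exact mode.

```shell
# Create a file with mode 777 and locate it with find -perm.
mkdir -p permdemo && touch permdemo/open.txt
chmod 777 permdemo/open.txt
find permdemo -type f -perm 777    # prints permdemo/open.txt
rm -rf permdemo
```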
Seven bonding modes are available; the default one is described below:
mode=0 (balance-rr): This mode is based on the round-robin policy and is the default mode. It provides both load balancing and fault tolerance, transmitting packets in round-robin order from the first available slave interface through the last.
UMASK (user file-creation mask) determines the permissions of newly created files. When a user creates a file or directory on Linux or UNIX, the default permissions are derived from the umask value set in the configuration files. By default, the umask is 022, but it can be changed system-wide or for a particular user. Files and directories carry three types of permission (read, write, execute); the numeric representation 777 means full permissions for the user, group, and other users. With the default umask of 022, a newly created file gets 644 permissions (666 − 022) and a newly created directory gets 755 (777 − 022), i.e. read, write, and execute for the owner, and read and execute for the group and other users.
UMASK is a crucial mechanism for controlling file and directory security; the system administrator can manage default access permissions efficiently with it.
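The 666 − 022 = 644 arithmetic for new files can be checked directly (the subshell keeps the umask change local):

```shell
# With umask 022, a freshly created file gets mode 644.
( umask 022; touch newfile.txt )
stat -c '%a' newfile.txt    # 644
rm -f newfile.txt
```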
As a system administrator or application admin, you need to execute multiple tasks or reports on a regular basis. Such activities can be automated or scheduled on a Linux system using cron and anacron.
We can use either cron or anacron as per our requirements, but each has its own characteristics. Both cron and anacron are daemon processes.
cron assumes your system is running continuously and online; if the system is off when a job is scheduled, the job never gets executed. anacron works even when the system is not online 24x7.
anacron uses a timestamp file to check when a command or task last ran, so a task that missed its schedule because the system was offline is run at the next opportunity. cron, on the other hand, executes tasks only at their predefined schedule.
anacron wakes up hourly or daily and runs whatever pending executions it finds, whereas cron runs every minute to perform the required actions.
A cron job can be configured by any normal user, but anacron jobs can be scheduled only by the superuser.
cron is best when you cannot tolerate a delay in execution time, whereas anacron is good when actions can run at set intervals rather than at an exact timestamp.
The ext3 file system is an enhanced version of the ext2 file system. The most important difference between the Ext2 and Ext3 is that Ext3 supports journaling.
Ext2 is a legacy file system with several shortcomings. In case of a system crash, unexpected power failure, or unclean reboot, the system administrator must check all ext2-mounted drives for consistency, which is done with the e2fsck program. This is a time-consuming process, and during it, any data on the volumes is unreachable.
Ext3 is a newer file system that supports journaling. The journaling feature in ext3 eliminates the need for a consistency check after a system crash or unclean reboot; the only situation in which ext3 requires one is hardware failure. In that case, recovery time depends on hardware speed, storage performance, and system resources. File sizes and file counts have no impact: journal recovery normally completes the consistency check in a few seconds.
In Linux, user account information is stored in the /etc/passwd system file. This file is readable by all users, so keeping visible passwords in it is a security risk. Linux therefore offers the shadow password feature: the pwconv command creates the /etc/shadow file, moves the encrypted passwords into it, and changes all password fields to ‘x’ in the /etc/passwd file. Since /etc/shadow is readable only by root, this protects the system from unauthorized access. This functionality may require additional installation of the shadow suite.
The original password is hashed and stored in /etc/shadow rather than in the world-readable /etc/passwd.
The salt used for hashing is stored along with the hashed password so the hash can be recomputed later.
When a user with a shadow password tries to log in, the system hashes the entered password and compares it with the stored hash before allowing the connection.
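The shadow scheme can be observed with a read-only inspection (no root access is needed to look at /etc/passwd):

```shell
# The password field in /etc/passwd is only a placeholder ("x")
awk -F: '$1 == "root" {print "root password field:", $2}' /etc/passwd

# The real hashes live in /etc/shadow, which only root may read
ls -l /etc/shadow
```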
Window Manager (WM) is client software that controls the icons, their placement, and the appearance of windows after login; in effect it is desktop management software. As Linux is an open-source operating system, a long list of WM packages is available, and the system administrator can install and configure one as per user or environment requirements. One thing to keep in mind before using WM software: it consumes additional resources on the system.
The .xinitrc file in a user's home directory (~/.xinitrc) is a hidden file that lets you change the window manager used at login for that particular user account. The prefix “.” in the file name marks it as hidden, so you will not see it with a plain ls command (use ls -a). A WM gives an enhanced user experience and add-on features.
Some popular window managers and desktop environments are GNOME, KDE Plasma, Xfce, Fluxbox, and i3.
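A minimal ~/.xinitrc might look like this (the chosen window manager is only an example; any installed WM binary could be exec'd instead):

```shell
# ~/.xinitrc — read by startx/xinit when the user starts an X session.
# Set up session defaults first, then exec the window manager so it
# becomes the session's main process.
xrdb -merge ~/.Xresources   # load X resource settings, if present
exec i3                     # replace with e.g. "exec startxfce4"
```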
No, Telnet is not a secure way of communicating. Telnet sends data and sensitive information in plain text over the network, where it can be easily captured and read by anyone. This gives an open opportunity to attackers to harm your system. As a system administrator, you need to close every possible security risk on your system, and running Telnet is one of the top items on that list.
SSH (Secure Shell) is a secure alternative to Telnet. SSH is fully encrypted and replaces legacy Telnet usage, protecting the user identity, password, and data from network attacks. Linux comes with a free implementation of SSH known as OpenSSH; for extended features, paid versions of SSH are also available.
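When replacing Telnet with OpenSSH, a few commonly hardened options in /etc/ssh/sshd_config look like this (the values shown are illustrative choices, not the only sensible ones):

```
# /etc/ssh/sshd_config — selected hardening options
Port 22                     # standard SSH port
PermitRootLogin no          # force logins through unprivileged accounts
PubkeyAuthentication yes    # allow key-based authentication
PasswordAuthentication yes  # set to "no" once key-based auth is in place
```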
Virtual hosts are used to host multiple domains on a single Apache instance. We can have one virtual host for each IP address the server has, or the same IP address with different ports, or the same IP address and port but different hostnames. The last case is called "name-based vhosts".
In IP-based virtual hosting, we can run more than one website on the same server machine, but each website has its own IP address, while in name-based virtual hosting, we can host multiple websites on the same IP address. For this to succeed, we have to put more than one DNS record for the IP address in the DNS database. In a production shared web-hosting environment, getting a dedicated IP address for every domain hosted on the server is not feasible in terms of cost, and most customers cannot afford one. This is where the concept of name-based virtual hosting finds its place.
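A minimal name-based setup in Apache might look like the following (the domain names and document roots are placeholders):

```apache
# Two name-based vhosts sharing one IP address and port 80.
# Apache picks the block whose ServerName matches the request's Host: header.
<VirtualHost *:80>
    ServerName www.example.com
    DocumentRoot /var/www/example-com
</VirtualHost>

<VirtualHost *:80>
    ServerName www.example.org
    DocumentRoot /var/www/example-org
</VirtualHost>
```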
How can the system administrator manage and monitor memory usage in Linux?
Memory monitoring and usage management are among the critical system administrator responsibilities. The system should always be kept under monitoring to check whether memory is low or whether any user or process is over-consuming it. Linux comes with multiple commands that you can use to monitor and manage usage. Different ways to check memory usage:-
free: gives details of used, free, cached, and total memory. By default the values are in KB, but you can pass -m to get values in MB.
/proc/meminfo: a system file for monitoring memory. It reports many fields, including totals such as MemTotal and MemFree and the active/inactive amounts of anonymous and file-backed memory.
vmstat: reports virtual memory statistics.
top: shows memory usage and total RAM, per process and overall. This command is also used for live monitoring.
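For example, the following commands read the same information from the command line (free's -m switch prints megabytes):

```shell
# Overall memory summary in megabytes
free -m

# Pull a single figure straight from the kernel's /proc interface:
# total installed RAM, reported in kilobytes
awk '/MemTotal/ {print $2, $3}' /proc/meminfo
```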
This is one of the most basic and useful commands, used by normal users and system administrators on a regular basis. The “ls” command lists the files and directories in the present working directory.
The “ls” command comes with multiple options:-
“ls” without any options lists all non-hidden files and directories in plain text. The command gives more useful output after combining it with grep or less, which lets you filter the list or page through it to find the required file. The command is also compatible with input/output redirection, which is very helpful for logging.
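A few common invocations (the directory names are only examples):

```shell
ls                   # plain listing of the current directory
ls -a                # include hidden entries (names starting with ".")
ls -l /etc           # long format: permissions, owner, size, mtime
ls -lh /var/log      # long format with human-readable sizes (K, M, G)
ls /etc | grep pass  # filter the listing through grep
```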
Samba is an open-source software suite that runs on Unix/Linux-based platforms but is able to communicate with Windows clients like a native application. Samba provides this service by employing the Common Internet File System (CIFS).
At the heart of CIFS is the Server Message Block (SMB) protocol. Samba does this by performing four key things: sharing file and print services, authentication and authorization, name resolution, and service announcement (browsing).
Samba can run on many different platforms, including Linux, Unix, OpenVMS, and other non-Windows operating systems, and allows users to interact with a Windows client or server natively. It can basically be described as the standard Windows-interoperability suite of programs for Linux and Unix.
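A minimal share definition in /etc/samba/smb.conf might look like this (the share name and path are placeholders):

```
[global]
   workgroup = WORKGROUP
   security = user            # authenticate against Samba user accounts

[shared]
   path = /srv/samba/shared   # directory exported to Windows clients
   read only = no
   browseable = yes
```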
When a foreground process is holding our prompt, there are signals (orders) that we can send to that process to indicate what we need:
Control+C sends SIGINT, which interrupts the application, usually causing it to abort; however, a process is able to intercept the signal and do whatever it likes. For instance, from the Bash prompt, try Ctrl-C: Bash cancels whatever you've typed and gives you a blank prompt (as opposed to quitting Bash).
Control+Z sends SIGTSTP to the foreground application, effectively putting it in the background in suspended mode. This is very useful when we want the application to keep its state while we do another job in the current shell (or let it keep running in the background with bg). When we finish that job, we can go back into the application by running fg (or %x, where x is the job number as shown by jobs).
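A scripted sketch of suspending and resuming a process with signals. From a terminal, Ctrl-Z sends the catchable SIGTSTP; here we use its unconditional counterpart SIGSTOP, which behaves the same for this demonstration, and watch the process state in /proc:

```shell
# Start a long-running stand-in application in the background
sleep 60 &
pid=$!

kill -STOP "$pid"                              # suspend (Ctrl-Z sends SIGTSTP)
awk '{print "state:", $3}' "/proc/$pid/stat"   # T = stopped (suspended)

kill -CONT "$pid"                              # what bg/fg do under the hood
awk '{print "state:", $3}' "/proc/$pid/stat"   # S = sleeping (running again)

kill "$pid"                                    # clean up the background job
```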
NSCD means Name Service Cache Daemon, which provides a cache for the most common name service requests. When resolving a user, group, or service, the process will first try to connect to the NSCD socket (something like /var/run/nscd/socket).
If NSCD has died, the connection will fail, NSCD won't be used, and that should not be a problem.
If NSCD is in a hung state, the connection may hang or succeed. If it succeeds, the client will send its request. We can configure NSCD to disable caching for any type of database (for instance by having enable-cache hosts no in /etc/nscd.conf for the hosts database).
However, if NSCD is in a hung state, it may not even be able to answer that it is not caching, so that won't necessarily help. NSCD is a caching daemon meant to improve performance, and disabling it would potentially make those lookups slower. That is only true for some kinds of databases, though: if the user/service/group databases come only from small files (/etc/passwd, /etc/group, /etc/services), then using NSCD for those will probably bring little benefit, if any. NSCD is most useful for the hosts database.
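The relevant /etc/nscd.conf directives look like this (an illustrative excerpt, not a complete file):

```
# /etc/nscd.conf — per-database cache switches
enable-cache           passwd  no     # small local file, caching adds little
enable-cache           group   no
enable-cache           hosts   yes    # DNS lookups benefit most from caching
positive-time-to-live  hosts   3600   # seconds a successful lookup stays cached
```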
The kernel's random number generator gathers environmental noise from device drivers and other sources into an entropy pool. It also keeps an estimate of the number of bits of noise in the entropy pool. Random numbers are generated from this entropy pool.
/dev/random will only return random bytes from the entropy pool. If the entropy pool is empty, reads from /dev/random block until additional environmental noise is gathered. This suits uses that need high-quality randomness, such as one-time pads or key generation.
/dev/urandom returns as many random bytes as requested. If the entropy pool is empty, it generates data using SHA, MD5, or whatever algorithm is available, and it never blocks. Because of this, the values are in theory vulnerable to cryptographic attack, though no known practical attacks exist.
By that reasoning, for cryptographic purposes we should use /dev/random because of the nature of the data it returns, accepting possible blocking as a tradeoff for security, and use /dev/urandom when we need random data fast.
In practice, however, both /dev/urandom and /dev/random use the exact same CSPRNG (a cryptographically secure pseudorandom number generator). They differ only in a few ways that have nothing to do with "true" randomness, and /dev/urandom is the preferred source of cryptographic randomness on Unix-like systems.
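Reading from these devices is an ordinary file read. For example, to pull 16 random bytes and display them in hex:

```shell
# 16 bytes from the non-blocking pool, shown as hex with od
head -c 16 /dev/urandom | od -An -tx1
```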
LVM is short for Logical Volume Manager, which is required to resize filesystems flexibly. The size of an LVM logical volume can be extended or reduced using the lvextend and lvreduce commands respectively. We can think of LVM volumes as dynamic partitions, meaning that we can create/resize/delete LVM partitions from the command line while our Linux system is running: there is no need to reboot the system to make the kernel aware of newly created or resized partitions.
Functions LVM provides include grouping disks into volume groups, creating and resizing logical volumes online, taking snapshots, and striping or mirroring data across disks.
It is very much possible that we have free storage space but still cannot add any new data to the file system, because all the inodes are consumed, as the df -i command will show. This may happen when the file system contains a very large number of very small files. These consume all the inodes, so although there is free space from a hard-disk-drive point of view, from a file system point of view no inode is available to store any new file.
A storage unit can contain numerous small files; if the inode structure fills up before the data storage of the disk, no more files can be copied to the disk. Once inode entries are freed up in the structure, new files can be written to storage again.
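df reports both views, and comparing them makes the distinction visible:

```shell
# Block (space) usage for the root filesystem
df -h /

# Inode usage for the same filesystem: IUsed/IFree can run out
# even while plenty of blocks remain free
df -i /
```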
A POP3-only mail account is assigned the /bin/false shell. Assigning a real shell such as bash to a POP3-only mail account would give the user login access, which should always be avoided. /bin/nologin can also be used for the same purpose. These shells are assigned when we don't want to give shell access to a user: the user cannot reach a shell, and interactive logins (for example over Telnet or SSH) are rejected on the server. It is mainly a security measure.
POP3 is basically used for downloading mail into a mail program, so to prevent such accounts from logging into a shell, they are assigned /bin/false or /bin/nologin. Both shells do the same work of rejecting user logins to the shell.
The main difference between these two shells is that /bin/false simply exits with a failure status and tells the user nothing, while /bin/nologin prints a message that the account is not available. For that reason the nologin shell is more often used in Linux.
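The difference is easy to observe by running the two directly (nologin may live in /usr/sbin or /sbin depending on the distribution, and may not be installed at all):

```shell
# /bin/false: exits silently with a failure status
/bin/false; echo "false exit status: $?"

# nologin: prints a refusal message, then also exits non-zero
/usr/sbin/nologin || /sbin/nologin || true
```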
The syslogd daemon process facilitates event tracking in a Linux system and logs useful information for future analysis. syslogd provides two system utilities: one for logging and one for kernel messages. syslogd also reacts to a set of signals sent by users.
Some of the signals given to syslogd are SIGHUP (re-initialize: close all open files, re-read the configuration file, and restart logging), SIGTERM (shut down), SIGINT/SIGQUIT (ignored in debug mode, otherwise terminate), SIGUSR1 (switch debugging on or off), and SIGCHLD (wait for child processes).
The full form of NFS is Network File System. NFS, developed by Sun Microsystems in the 1980s, is used for sharing files and folders between Linux/Unix systems. NFS helps you mount your local file systems or drives over a network so that remote client hosts can use them as if they were mounted locally on their system. With the help of NFS, we can set up file sharing across operating systems, Unix to Linux and vice versa. If you want to mount a Linux file system on Windows, you need to use Samba/CIFS in place of NFS.
Benefits of NFS include centralized storage, the ability for multiple clients to share the same data, and reduced local disk usage on client machines.
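Both the server and client sides are a one-line configuration in the common case (the host names, subnet, and paths below are placeholders):

```
# Server: /etc/exports — export /srv/share to one client subnet,
# read-write, with remote root mapped to an unprivileged user
/srv/share  192.168.1.0/24(rw,sync,root_squash)

# Client: mount the export manually,
#   mount -t nfs server.example.com:/srv/share /mnt/share
# or make it permanent with an /etc/fstab line:
server.example.com:/srv/share  /mnt/share  nfs  defaults  0 0
```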
Systemd is the first process of the Linux system (PID 1) and a much better-designed process compared with init.
Systemd is multithreaded and faster than init. Systemd is the standard process for controlling which programs run during Linux boot. It was conceived from the top down, not just to fix bugs, but to be a correct implementation of all the base system services. The name systemd may refer to all the packages, utilities, and libraries around the daemon. It was designed to overcome all the shortcomings of init: it is itself a background process designed to start other processes in parallel, thus reducing boot time and computational overhead, and it has a lot of other features compared to init.
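Services are declared to systemd in small unit files. A minimal example for a hypothetical daemon could look like this:

```
# /etc/systemd/system/mydaemon.service — hypothetical service unit
[Unit]
Description=Example background daemon
After=network.target        # order this unit after the network is up

[Service]
ExecStart=/usr/local/bin/mydaemon
Restart=on-failure          # systemd restarts the daemon if it crashes

[Install]
WantedBy=multi-user.target  # enabled for normal multi-user boots
```

It would then be started with systemctl start mydaemon and enabled at boot with systemctl enable mydaemon.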
/proc is a virtual file system that provides detailed information about the Linux kernel, hardware, and running processes. /proc is a generic facility available in all flavours of Linux. Files under the /proc directory are called virtual files: they are created when the system boots up and dissolve on shutdown. /proc contains information about running processes and works as an information zone for the kernel.
/proc is also a hidden tool for a system administrator for analyzing and troubleshooting performance and system bottleneck issues.
These virtual files have unique qualities. Most of them are listed as zero bytes in size because they reside in memory, not on disk. Virtual files such as /proc/interrupts, /proc/meminfo, /proc/mounts, and /proc/partitions provide an up-to-the-moment glimpse of the system's hardware. Others, such as the /proc/filesystems file and the /proc/sys/ directory, provide system configuration information and interfaces. These are tools for a system administrator to troubleshoot and analyze issues.
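A few reads illustrate the idea; every running process also gets its own /proc/&lt;pid&gt;/ directory, and /proc/self/ always resolves to the reading process:

```shell
cat /proc/uptime             # seconds since boot, seconds spent idle
head -3 /proc/meminfo        # live memory counters
grep Name /proc/self/status  # "self" resolves to the current process
```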
A boot loader is a package that loads the operating system into memory during boot. Windows comes with its own boot loader, whereas Linux lets you select a boot loader suited to your environment and requirements.
GNU GRUB, or GRUB (GRand Unified Bootloader), is a boot loader package that supports multiple operating systems and allows you to select the required OS during boot. GNU GRUB is the advanced successor of legacy GRUB.
Command Line Interface, also known as CLI, is an interface that lets users interact with and instruct the system in a command-line fashion. CLI is text-based: it accepts user requests and responds in text. Compared with a GUI, CLI is lightweight and consumes fewer CPU and memory resources.
Considering that the GUI differs across versions and flavours, users need to change their way of working and learn anew, whereas CLI is independent of this and allows a user to operate any Linux system in the same manner. CLI also comes with a help option, so users need not remember all commands and options; they can refer to the help output or man pages for detailed options and definitions.
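That built-in help is itself reached from the command line, for example:

```shell
# Most commands document themselves; man pages give the full story
ls --help | head -5   # short usage summary built into the command
# man ls              # full manual page, when man-db is installed
```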
Linux is a free and open-source operating system based on the Linux kernel, which was first released on September 17, 1991. In Linux, users can create modifications and variations of the source code, known as distributions, for computers and other devices. It is most commonly used as a server OS, but is also used in desktop computers, e-book readers, smartphones, etc.
The most popular Linux distributions are Fedora, Ubuntu, and Debian, while common commercial distributions include Red Hat Enterprise Linux and SUSE Linux Enterprise Server.
Professionals can opt for jobs as Linux administrators, cloud administrators, etc., once they have an in-depth understanding of the core Linux concepts. According to Salary.com, a Linux Administrator has an average base salary of $92,115, with base salaries ranging from $73,345 to $103,894.
Top companies from around the globe run on Linux; to name a few: Google, Twitter, Facebook, Amazon, and IBM.
Are you planning to get into a reputed organization? Then have a glance at these Linux interview questions and answers. Linux is one of the best-known operating systems, and Linux jobs are a great choice to magnify your skills. Learning Linux with these interview questions and answers will help you fetch the best jobs in the market.
If you are looking to crack a Linux job interview, we have built this Linux job interview questions and answers page to ensure an outstanding performance in the interview. To keep track of all your dream jobs, you can also visit our website www.knowledgehut.com for more information.
These basic Linux interview questions will be an added advantage in cracking your next Linux interview. We have different series of Linux interview questions and answers for experienced candidates and freshers, based on basic and advanced levels. So, prepare better with our extensive list of Linux interview questions. Our advanced Linux interview questions will not only help you prepare for the interview but will also enable you to manage difficult projects with ease.