
CHAPTER 1. PREPARING A CONTROL NODE AND MANAGED NODES TO USE RHEL SYSTEM ROLES
Before you can use individual RHEL System Roles to manage services and settings,
prepare the involved hosts.
1.1. INTRODUCTION TO RHEL SYSTEM ROLES
RHEL System Roles is a collection of Ansible roles and modules that provides a
configuration interface to remotely manage multiple RHEL systems.
The interface enables managing system configurations across multiple versions of RHEL,
as well as adopting new major releases.
On Red Hat Enterprise Linux 9,
the interface currently consists of the following roles:
- Certificate Issuance and Renewal (certificate)
- Kernel Settings (kernel_settings)
- Metrics (metrics)
- Network Bound Disk Encryption client and Network Bound Disk Encryption server (nbde_client and nbde_server)
- Networking (network)
- Postfix (postfix)
- SSH client (ssh)
- SSH server (sshd)
- System-wide Cryptographic Policies (crypto_policies)
- Terminal Session Recording (tlog)
All these roles are provided by the rhel-system-roles package available in the AppStream repository.
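As a quick illustration of how a role is consumed, the following is a minimal sketch of a playbook that applies one of these roles from a control node. The file name and the variable-free invocation are assumptions for this example; the role prefix follows the rhel-system-roles package layout:
[ansible@control-node]$ cat > ~/kernel-settings-playbook.yml <<'EOF'
---
- name: Apply kernel settings with a RHEL System Role
  hosts: all
  roles:
    - rhel-system-roles.kernel_settings
EOF
[ansible@control-node]$ ansible-playbook ~/kernel-settings-playbook.yml
Running the playbook at this point requires the inventory and SSH access that the following sections set up.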
Additional resources
Red Hat Enterprise Linux (RHEL) System Roles
/usr/share/doc/rhel-system-roles/ provided by the rhel-system-roles package.
1.2. RHEL SYSTEM ROLES TERMINOLOGY
You can find the following terms across this documentation:
Ansible playbook:
Playbooks are Ansible’s configuration, deployment, and orchestration language. They can describe a policy you want your remote systems to enforce, or a set of steps in a general IT process.
Control node:
Any machine with Ansible installed. You can run commands and playbooks, invoking /usr/bin/ansible or /usr/bin/ansible-playbook, from any control node. You can use any computer that has Python installed on it as a control node - laptops, shared desktops, and servers can all run Ansible. However, you cannot use a Windows machine as a control node. You can have multiple control nodes.
Inventory:
A list of managed nodes. An inventory file is also sometimes called a “hostfile”. Your inventory can
specify information like IP address for each managed node. An inventory can also organize managed
nodes, creating and nesting groups for easier scaling. To learn more about inventory, see the
Working with Inventory section.
Managed nodes:
The network devices, servers, or both that you manage with Ansible. Managed nodes are also
sometimes called “hosts”. Ansible is not installed on managed nodes.
1.3. PREPARING A CONTROL NODE
RHEL includes Ansible Core in the AppStream repository with a limited scope of support.
If you require
additional support for Ansible,
contact Red Hat to learn more about the Ansible Automation Platform
subscription.
Prerequisites:
You registered the system to the Customer Portal.
You attached a Red Hat Enterprise Linux Server subscription to the system.
If available in your Customer Portal account, you attached an Ansible Automation Platform
subscription to the system.
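If the system is not registered yet, a typical sequence uses the subscription-manager utility (a sketch; the exact attach step depends on the subscriptions in your account):
[root@control-node]# subscription-manager register
[root@control-node]# subscription-manager attach --auto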
Procedure:
1. Install the rhel-system-roles package:

[root@control-node]# dnf install rhel-system-roles

This command installs Ansible Core as a dependency.
2. Create a user that you later use to manage and execute playbooks:

[root@control-node]# useradd ansible
3. Switch to the newly created ansible user:

[root@control-node]# su - ansible

Perform the rest of the procedure as this user.
4. Create an SSH public and private key:

[ansible@control-node]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/ansible/.ssh/id_rsa):
Enter passphrase (empty for no passphrase): password
Use the suggested default location for the key file.
5. Optional: Configure an SSH agent to prevent Ansible from prompting you for the SSH key password each time you establish a connection.
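One common way to do this is with the standard OpenSSH agent (a sketch; the key path assumes the default location from the previous step, and the agent only lasts for the current session; the PID shown is illustrative):
[ansible@control-node]$ eval $(ssh-agent)
Agent pid 2233
[ansible@control-node]$ ssh-add /home/ansible/.ssh/id_rsa
Enter passphrase for /home/ansible/.ssh/id_rsa: password
Identity added: /home/ansible/.ssh/id_rsa (ansible@control-node)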
6. Create the ~/.ansible.cfg file with the following content:
[defaults]
inventory = /home/ansible/inventory
remote_user = ansible

[privilege_escalation]
become = True
become_method = sudo
become_user = root
become_ask_pass = True
With these settings:
- Ansible manages hosts in the specified inventory file.
- Ansible uses the account set in the remote_user parameter when it establishes SSH connections to managed nodes.
- Ansible uses the sudo utility to execute tasks on managed nodes as the root user.
- For security reasons, configure sudo on managed nodes to require entering the password of the remote user to become root. By specifying the become_ask_pass=True setting in ~/.ansible.cfg, Ansible prompts for this password when you execute a playbook.
Settings in the ~/.ansible.cfg file have a higher priority and override settings from the global
/etc/ansible/ansible.cfg file.
7. Create the ~/inventory file. For example, the following is an inventory file in the INI format with three hosts and one host group named US:

managed-node-01.example.com

[US]
managed-node-02.example.com ansible_host=192.0.2.100
managed-node-03.example.com
Note that the control node must be able to resolve the hostnames. If the DNS server cannot resolve certain hostnames, add the ansible_host parameter next to the host entry to specify its IP address.
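To confirm that Ansible parses the inventory and the US group as intended, you can display the group structure (an optional check; the output below assumes the example inventory above):
[ansible@control-node]$ ansible-inventory --graph
@all:
  |--@US:
  |  |--managed-node-02.example.com
  |  |--managed-node-03.example.com
  |--@ungrouped:
  |  |--managed-node-01.example.com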
1.4. PREPARING A MANAGED NODE
Ansible does not use an agent on managed hosts.
The only requirements are Python, which is installed by
default on RHEL, and SSH access to the managed host.
However, direct SSH access as the root user can be a security risk.
Therefore, when you prepare a
managed node, you create a local user on this node and configure a sudo policy.
Ansible on the control
node can then use this account to log in to the managed node and execute playbooks as different users,
such as root.
Prerequisites
You prepared the control node.
Procedure
1. Create a user:
[root@managed-node-01]# useradd ansible
The control node later uses this user to establish an SSH connection to this host.
2. Set a password for the ansible user:
[root@managed-node-01]# passwd ansible
Changing password for user ansible.
New password: password
Retype new password: password
passwd: all authentication tokens updated successfully.
You must enter this password when Ansible uses sudo to perform tasks as the root user.
3. Install the ansible user’s SSH public key on the managed node:
a. Log into the control node as the ansible user, and copy the SSH public key to the managed
node:
[ansible@control-node]$ ssh-copy-id managed-node-01.example.com
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed:
"/home/ansible/.ssh/id_rsa.pub"
The authenticity of host 'managed-node-01.example.com (192.0.2.100)' can't be
established.
ECDSA key fingerprint is
SHA256:9bZ33GJNODK3zbNhybokN/6Mq7hu3vpBXDrCxe7NAvo.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that
are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is
to install the new keys
ansible@managed-node-01.example.com's password: password
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'managed-node-01.example.com'"
and check to make sure that only the key(s) you wanted were added.
b. From the control node, remotely execute a command on the managed node to verify the SSH connection:
[ansible@control-node]$ ssh managed-node-01.example.com whoami
ansible
4. Create a sudo configuration for the ansible user:
a. Use the visudo command to create and edit the /etc/sudoers.d/ansible file:
[root@managed-node-01]# visudo /etc/sudoers.d/ansible
The benefit of using visudo over a normal editor is that this utility provides basic sanity
checks and checks for parse errors before installing the file.
b. Enter the following content to the /etc/sudoers.d/ansible file:

ansible ALL=(ALL) ALL

These settings grant permissions to the ansible user to run all commands as any user and group on this host after entering the password of the ansible user, which matches the become_ask_pass = True setting on the control node. Alternatively, ansible ALL=(ALL) NOPASSWD: ALL grants the same permissions without requiring the ansible user's password; this is more convenient but less secure.
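As an optional sanity check after saving, you can validate the file's syntax once more:
[root@managed-node-01]# visudo -cf /etc/sudoers.d/ansible
/etc/sudoers.d/ansible: parsed OK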
Additional resources
Preparing the control node
The sudoers(5) man page
1.5. VERIFYING ACCESS FROM THE CONTROL NODE TO MANAGED NODES
After you configured the control node and prepared managed nodes, test that Ansible can connect to
the managed nodes.
Perform this procedure as the ansible user on the control node.
Prerequisites
You prepared the control node as described in Preparing a control node.
You prepared at least one managed node as described in Preparing a managed node.
If you want to run playbooks on host groups, the managed node is listed in the inventory file on
the control node.
Procedure
1. Use the Ansible ping module to verify that you can execute commands on all managed hosts:
[ansible@control-node]$ ansible all -m ping
BECOME password: password
managed-node-01.example.com | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"changed": false,
"ping": "pong"
}
...
The hard-coded all host group dynamically contains all hosts listed in the inventory file.
2. Use the Ansible command module to run the whoami utility on a managed host:
[ansible@control-node]$ ansible managed-node-01.example.com -m command -a whoami
BECOME password: password
managed-node-01.example.com | CHANGED | rc=0 >>
root
If the command returns root, you configured sudo on the managed nodes correctly, and
privilege escalation works.
Figure 2.15 – RHEL 9 in Google Cloud Platform: VM instance – console
It takes some time to get set up in the cloud, configure your account, and find the SSH key
(which will be shown in Chapter 8, Administering Systems Remotely),
but once it’s all set up, it’s easy to get a
new instance up and running.
To become an administrator, you only need to run the following command:
[miguel@rhel-instance ~]$ sudo -i
[root@rhel-instance ~]#
Now, you can check the time configuration with timedatectl and change it, as follows:
[root@rhel-instance ~]# timedatectl
Local time: Sat 2021-12-12 17:13:29 UTC
Universal time: Sat 2021-12-12 17:13:29 UTC
RTC time: Sat 2021-12-12 17:13:29
Time zone: UTC (UTC, +0000)
System clock synchronized: yes
NTP service: active
RTC in local TZ: no
[root@rhel-instance ~]# timedatectl set-timezone Europe/Madrid
[root@rhel-instance ~]# timedatectl
Local time: Sat 2021-12-12 18:20:32 CET
Universal time: Sat 2021-12-12 17:20:32 UTC
RTC time: Sat 2021-12-12 17:20:32
Time zone: Europe/Madrid (CET, +0100)
System clock synchronized: yes
NTP service: active
RTC in local TZ: no
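If you are unsure of the exact timezone name to pass to set-timezone, you can list and filter the available ones first:
[root@rhel-instance ~]# timedatectl list-timezones | grep Madrid
Europe/Madrid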
You can also change the language configuration with localectl, like so:
[root@rhel-instance ~]# localectl
System Locale: LANG=en_US.UTF-8
VC Keymap: us
X11 Layout: n/a
To change locale or language support, you will need to install its language package first, as follows:
[root@rhel-instance ~]# yum install glibc-langpack-es -y
... [output omitted] ...
[root@rhel-instance ~]# localectl set-locale es_ES.utf8
[root@rhel-instance ~]# localectl
System Locale: LANG=es_ES.utf8
VC Keymap: us
X11 Layout: n/a
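In the same way, you can check which locales are available before setting one (output abbreviated):
[root@rhel-instance ~]# localectl list-locales | grep -i es_ES
es_ES.UTF-8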
You now have a machine configured that you can use throughout this book. These locale changes are
not needed to proceed—they’re only needed to create a machine with the same configuration as in
the previous chapter.
Now that we know how to automatically redeploy VMs using Anaconda and how to get instances
in the cloud, let’s move on and look at some of the best practices to be taken into account when
performing installations.
Installation best practices
RHEL installations have many options you can choose from, and what you choose should be tailored
for your specific use case. However, some common recommendations apply. Let’s look at the most
common types.
The first type is blueprints. Here’s how you can use them:
• Standardize the core installation and create a blueprint for it:
This blueprint will be minimal enough to serve as the base for all other blueprints
and deployments.
• Build a set of blueprints for common cases when needed:
Try to use an automation platform (that is, Ansible) to build the extended cases.
Try to make the cases modular (that is, an app server and a database blueprint can be combined into one single machine).
Be aware of the requirements you must apply to your templated blueprints and adapt to the
environments you will use.
The second type is software. Here are some guidelines regarding this:
• The less software that’s installed, the smaller the attack surface. Try to keep servers with the minimal set of packages required for them to run and operate (that is, try not to add a graphical user interface (GUI) to your servers).
• Standardize the installed tools where possible to be able to react quickly in case of emergency.
• Package your third-party applications so that you have healthy life cycle management (whether
with RPM Package Manager (RPM) or in containers).
• Establish a patching schedule.
The third type is networking. Here are some recommendations in terms of this:
• In VMs, avoid using an excessive number of network interfaces.
• In physical machines, use interface teaming/bonding whenever possible (a short nmcli sketch follows this list). Segment networks using virtual local area networks (VLANs).
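For example, an active-backup bond of two interfaces can be sketched with nmcli (the interface names eno1 and eno2 and the connection names are assumptions for this example):
[root@rhel-instance ~]# nmcli connection add type bond con-name bond0 ifname bond0 bond.options "mode=active-backup"
[root@rhel-instance ~]# nmcli connection add type ethernet con-name bond0-port1 ifname eno1 master bond0
[root@rhel-instance ~]# nmcli connection add type ethernet con-name bond0-port2 ifname eno2 master bond0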
The fourth type is storage. Here are some useful suggestions on how best to use this:
• For servers, use Logical Volume Manager (LVM) where possible (usually, everything but /boot or /boot/efi); a short resizing sketch follows this list.
• If you think you will need to reduce your filesystems, use ext4; otherwise, go for the default
of xfs.
• Partition the disk carefully by doing the following:
Keep the default boot partition with its default size. If you change it, enlarge it (you may
need space there during upgrades).
The default swap partition is the safest bet unless the third-party software has specific
requirements.
For long-lived systems, have at least separate partitions for / (root), /var, /usr, /tmp, and /home, and consider even a separate one for /var/log and /opt (for ephemeral cloud instances or short-lived systems, this does not apply).
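As an example of the flexibility LVM buys you, an LVM-backed filesystem can later be grown online in a single step (the volume group and logical volume names here are assumptions):
[root@rhel-instance ~]# lvextend -r -L +5G /dev/rhel/var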
The fifth type is security. Follow these guidelines:
• Do not disable Security-Enhanced Linux (SELinux). It has been improved a lot in the latest
versions and it’s unlikely to interfere with your system (if required, set it in permissive mode
instead of fully disabling it).
• Do not disable the firewall. Automate port opening with the service deployment (a short firewall-cmd sketch follows this list).
• Redirect logs to a central location whenever possible.
• Standardize the security tools and configuration that you want to install to check system
integrity and audit (that is, Advanced Intrusion Detection Environment (AIDE), logwatch,
fapolicyd, Integrity Measurement Architecture (IMA), and auditd).
• Review software install (RPM) GNU Privacy Guard (GPG) keys, as well as ISO images, to
ensure integrity.
• Try to avoid using passwords (especially for your root account) and use strong ones where needed.
• Review your systems with OpenSCAP to check on security (if needed, create your own hardware
Security Content Automation Protocol (SCAP) profile with help from your security team).
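For example, opening a service's port as part of its deployment can be scripted with firewall-cmd (the https service is just an illustration):
[root@rhel-instance ~]# firewall-cmd --permanent --add-service=https
success
[root@rhel-instance ~]# firewall-cmd --reload
success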
Finally, we will look at the miscellanea type, as follows:
• Keep system time synchronized.
• Review logrotate policies to avoid “disk full” errors due to logs.
Following these best practices will help you avoid issues and make the installed base more manageable.
With that, you know how to deploy RHEL on a system in a structured, repeatable manner while
providing services to other teams in a fast and resilient fashion.
Summary
In the previous chapter, we mentioned how to prepare a machine that we can work with throughout
this book. An alternative to that is using cloud instances, with which we could be consuming VM
instances from the public cloud, which may simplify our consumption and provide us with enough
free credit to prepare for RHCSA. Also, once the self-training process is complete, the machines can still be used to provide your own public services (such as deploying a blog).
Understanding the need to standardize your environments and the impact of doing so is also important
when you’re working with Linux as a professional. It is key to start with a good set of practices
(automating installations, keeping track of installed software, reducing the attack surface, and so on)
from the beginning.
Now that you’ve completed this chapter, you are ready to continue with the rest of this book, since
you now have an instance of RHEL 9 available to work and practice with. In the next chapter, we will
review the basics of the system to make ourselves comfortable and gain confidence in using the system.
Basic Commands and Simple Shell Scripts
Once you have your first Red Hat Enterprise Linux (RHEL) system running, you want to start using it, practicing, and getting comfortable with it. In this chapter, we will review the basics of logging into the system, navigating through it, and getting to know the basics in terms of its administration.
The set of commands and practices described in this chapter will be used on many occasions when managing systems, so it is important to study them with care.
The following topics will be covered in this chapter:
• Logging in as a user and managing multi-user environments
• Changing users with the su command
• Understanding users, groups, and basic permissions
• Using the command line, environment variables, and navigating through the filesystem
• Understanding I/O redirection on the command line
• Filtering output with grep and sed
• Listing, creating, copying, and moving files, directories, links, and hard links
• Using tar and gzip
• Creating basic shell scripts
• Using system documentation resources
Logging in as a user and managing multi-user environments
Login is the process during which a user identifies themselves in the system – usually, by providing a
username and password, a couple of pieces of information often referred to as credentials.
The system can be accessed in many ways. The initial case for this, which we are covering here, is
how a user accesses it when they install a physical machine (such as a laptop) or via the virtualization
software interface. In this case, we are accessing the system through a console.
During installation, the user was created with an assigned password, and no graphical interface was
installed. We will access the system in this case via its text console. The first thing we are going to
do is to log in to the system using it. Once we start the machine and the boot process is completed,
we will enter, by default, the multi-user text mode environment in which we are requested to provide our login:
Figure 3.1 – The login process and username request
The blinking cursor will let us know that we are ready to enter our username, in this case, user, and
then press Enter. A line requesting the password will appear:
Figure 3.2 – The login process and password request
We may now type the user’s password to complete the login and, by pressing Enter on your keyboard,
start a session. Note that no characters will be displayed on the screen when typing the password to
avoid eavesdropping. The following screenshot shows the session running:
Figure 3.3 – The completed login process and the session running
Now, we are fully logged in to the system with the credentials for a user called user. This will define
what we can do in the system, which files we can access, and even how much disk space we are assigned.
The console can have more than one session. To make that possible, we have different terminals
through which we can log in. The default terminal can be reached by simultaneously pressing the
Ctrl + Alt + F1 keys. In our case, nothing will happen, as we are already in that terminal. We could
move to the second terminal by pressing Ctrl + Alt + F2, to the third one by pressing Ctrl + Alt + F3,
and so on for the rest of the terminals (by default, six are allocated). This way, we can run different
commands in different terminals.
Using the root account
Regular users will not be able to make changes to the system, such as creating new users or adding
new software to the whole system. To do so, we need a user with administrative privileges and for
that, the default user is root. This user always exists in the system and its identifier – the User ID (UID) – has the value 0.
In the previous installation, we configured the root password, making the account accessible through
the console. To use it by logging in to the system, we only need to type root at the login prompt in one of the terminals, then hit Enter, and then provide its password, which won’t be displayed. This way, we will access the system as the administrator, root:
Figure 3.4 – The completed login process for root
Important Note
Above the login prompt, there is a message explaining how to activate the web console (Cockpit) – a set of tools that enables web management for the system. Cockpit is covered in Chapter 4, Tools for Regular Operations.
Using and understanding the command prompt
The command line that appears once we have logged in and are waiting for our commands to be typed
and run is called the command prompt.
In its default configuration, it will show the username and hostname between brackets to let us know
with which user we are working. Next, we see the path, in this case, ~, which is the shortcut for the
user’s home directory (in other words, /home/user for user, and /root for root).
The last part and, probably the most important one, is the symbol before the prompt:
• The $ symbol is used for regular users with no administrative privileges.
• The # symbol is used for root or once a user has acquired administrative privileges.
Important Note
Be careful when using a prompt with the # sign, as you will be running as an administrator
and the system will likely not stop you from damaging it.
Once we have identified ourselves within the system, we are logged in and have a running session.
It is time to learn how to change from one user to the other in the following section.
Changing users with the su command
As we have entered a multi-user system, it is logical to think that we will be able to change between
users. Even when this can be done easily by opening a session for each, sometimes we want to act as
several users within one session.
To do so, we can use the su tool. The name of the tool is usually said to stand for Substitute User.
Let’s use that last session, in which we logged in as root, and turn ourselves into user.
Before doing so, we can always ask which user we are logged in as by running the whoami command:
[root@rhel-instance ~]# whoami
root
Now, we can make the change from root to user:
[root@rhel-instance ~]# su user
[user@rhel-instance root]$ whoami
user
Now, we have a session as user. We can finish this session by using the exit command:
[user@rhel-instance root]$ exit
exit
[root@rhel-instance ~]# whoami
root
As you may have seen, when we are logged in as root, we can act as any user without knowing
its password. But how can we impersonate root? We can do so by running the su command and
specifying the root user. In this case, the root user’s password will be requested:
[user@rhel-instance ~]$ su root
Password:
[root@rhel-instance user]# whoami
root
As root is the user with the ID 0 and the most important one, when running su without specifying
the user we want to turn into, it will default to turning us into root:
[user@rhel-instance ~]$ su
Password:
[root@rhel-instance user]# whoami
root
Each user can define several options in their own environment, such as, for example, their preferred
editor. If we want to fully impersonate the other user and take their preferences (or environment
variables, as they are referred to on many occasions), we can do so by adding - after the su command:
[user@rhel-instance ~]$ su -
Password:
Last login: mar feb 15 04:57:29 CET 2022 on pts/0
[root@rhel-instance ~]#
We can also switch from root to user:
[root@rhel-instance ~]# su - user
Last login: Tue Feb 15 04:53:02 CET 2022 from 192.168.122.1 on
pts/0
[user@rhel-instance ~]$
As you can observe, it behaves as if a new login was done, but within the same session. Now, let’s move on
to managing the permissions for the different users in the system, as addressed in the following section.
Understanding users, groups, and basic permissions
Multi-user environments are defined by being able to handle more than one user simultaneously. But
to be able to administer the system resources, two capabilities help with the tasks:
• Groups: Can aggregate users and provide permissions for them in blocks.
Each user has a primary group.
By default, a group with the same name as the username is created for each user and assigned to them as their primary group.
• Permissions: Assigned to files, determining which users and groups can access each file.
Standard Linux (and UNIX or POSIX) permissions include user, group, and others (ugo).
The whole system comes with a set of permissions assigned by default to each file and directory. Be
careful when changing them.
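A quick way to see these permissions in action is to list a file in long format, which shows the owning user, the owning group, and the permission bits for user, group, and others (the file size and date below are just an illustration):
[user@rhel-instance ~]$ ls -l /etc/hostname
-rw-r--r--. 1 root root 14 feb 15 04:50 /etc/hostname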
There is a certain principle in UNIX that Linux has inherited: everything is a file. Even when there
may be some corner cases to this principle, it remains true on almost any occasion. It means that
a disk is represented as a file in the system (such as /dev/sdb, mentioned during the installation), a process can be represented as a file (under /proc), and many other components in the system are also represented as files.
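You can observe this principle directly by listing some of these special files (the devices present depend on your machine, and the dates shown are illustrative):
[user@rhel-instance ~]$ ls -l /dev/sda /proc/1/status
brw-rw----. 1 root disk 8, 0 feb 15 04:50 /dev/sda
-r--r--r--. 1 root root 0 feb 15 05:10 /proc/1/status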
This means that, when assigning permissions to files, we can also assign permissions to many other
components and capabilities implemented by them by virtue of the fact that