As you know, I have been fortunate enough to be selected by my instructors to participate in the provincial cloud computing competition. As a result, I have joined the project group on campus.
As a member of the group, I need to study hard and continuously expand my knowledge. To achieve good results at the upcoming provincial competition, we need to learn about the structure of private clouds and the different types of container clouds.
One suggested option for a private cloud solution is OpenStack, which can be complex and require significant effort to master.
However, I am still motivated to pursue this technology as I have a strong interest in IT and Linux-related topics, and I believe that the challenge of learning OpenStack will ultimately improve my knowledge and skills.
Therefore, I decided to write some articles on my blog to document my study process.
Preparation
Nodes
First things first, I need to understand a basic example structure of OpenStack. The picture below, taken from the official documentation, is a reasonable reference.
However, limited by my machine's performance and small disk storage, I can only create two main nodes plus an extra resource node to provide the images and repos.
I won't create independent Object Storage and Block Storage nodes; instead, it's a better choice to add two extra virtual disks to the Compute node.
For the Cinder service, I will provide just one disk with two partitions to run the service.
The details of my VirtualBox VM properties are below:
By the way, I should explain the Arch VM: it is only a resource node that provides HTTP downloads and the yum repo service. So I give it just 256 MB of RAM and one core, but two disks to store the many large repo files.
Network
Network Interfaces
In order to set up the OpenStack services, each node (controller and compute) needs two network interfaces: the first connects to the Management Network, while the second connects to the Operation Network.
| Network Interface | Network | Usage |
| --- | --- | --- |
| enp0s3 | 192.168.56.0/24 | Management Network |
| enp0s8 | 172.129.1.0/24 | Operation Network |
Nodes IP Address
So the detailed network properties are below:
| Node | Management Address | Operation Address |
| --- | --- | --- |
| controller | 192.168.56.2 | 172.129.1.1 |
| compute | 192.168.56.3 | 172.129.1.2 |
| Resource | 192.168.56.100 | None |
Operating System
CentOS will be installed on the controller and compute nodes, and Arch Linux will be installed on the Resource node.
| Node | OS |
| --- | --- |
| controller | CentOS 7 |
| compute | CentOS 7 |
| Resource | Arch Linux |
Set up the network
Edit the files /etc/sysconfig/network-scripts/ifcfg-enp0s3 and /etc/sysconfig/network-scripts/ifcfg-enp0s8 on each node.
# vim /etc/sysconfig/network-scripts/ifcfg-enp0s3
# vim /etc/sysconfig/network-scripts/ifcfg-enp0s8
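A minimal static configuration might look like this; the addresses are the ones from my tables above, so adjust the interface name and IPADDR per node (this is just a sketch of the usual ifcfg format):

```ini
# /etc/sysconfig/network-scripts/ifcfg-enp0s3 on the controller (example)
TYPE=Ethernet
BOOTPROTO=static
NAME=enp0s3
DEVICE=enp0s3
ONBOOT=yes
IPADDR=192.168.56.2
NETMASK=255.255.255.0
```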
Using the provided username and password, log in to the provided OpenStack private cloud platform. Under the current tenancy, create two virtual machines using the CentOS7.9 image and 4vCPU/12G/100G_50G type. The second network card should be created and connected to both the controller and compute nodes (the second network card’s subnet is 10.10.X.0/24, where X represents the workstation number, and no routing is needed). Verify the security group policies to ensure normal network communication and ssh connection, and configure the servers as follows:
Set the hostname of the control node to ‘controller’ and that of the compute node to ‘compute’;
Modify the hosts file to map IP addresses to hostnames.
After completing the configuration, submit the username, password, and IP address of the control node in the answer box.
The first quiz is easy; it only takes a few steps.
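Concretely, the two configuration steps can be done like this (the IP addresses are the ones from my own environment, so substitute your own):

```shell
# on the control node
hostnamectl set-hostname controller
# on the compute node
hostnamectl set-hostname compute

# on both nodes: map IP addresses to hostnames
cat >> /etc/hosts <<EOF
192.168.56.2 controller
192.168.56.3 compute
EOF
```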
Using the provided HTTP service address, there are CentOS 7.9 and IaaS network YUM repositories available under the HTTP service. Use this HTTP source as the network source for installing the IaaS platform. Set up the yum source file http.repo for both the controller node and compute node. After completion, submit the username, password, and IP address of the control node to the answer box.
Well, it's still an easy question.
First, delete the old repo files on both nodes:
[root@controller ~]# rm -rfv /etc/yum.repos.d/*
[root@compute ~]# rm -rfv /etc/yum.repos.d/*
Second, according to the question, we create and edit a file named http.repo:
[root@controller ~]# vim /etc/yum.repos.d/http.repo
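The repository directory names below are assumptions based on a typical competition HTTP server layout; substitute the actual paths served by your HTTP source:

```ini
# /etc/yum.repos.d/http.repo (example layout)
[centos]
name=centos
baseurl=http://192.168.56.100/centos
gpgcheck=0
enabled=1

[iaas]
name=iaas
baseurl=http://192.168.56.100/iaas/iaas-repo
gpgcheck=0
enabled=1
```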
Then send the file to the compute node: after typing the compute node's root password, the file will be copied over. Of course, this quick way saves repeating the same edits on the other node.
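That is, copy the finished repo file straight to the compute node with plain scp (the path is the same on both nodes):

```shell
[root@controller ~]# scp /etc/yum.repos.d/http.repo root@compute:/etc/yum.repos.d/http.repo
```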
[Question 3] Configure SSH without keys [0.5 points]
Configure the controller node to access the compute node without a key, and then attempt an SSH connection to the hostname of the compute node for testing. After completion, submit the username, password, and IP address of the controller node in the answer box.
It's an easy but necessary operation, because it lets the controller node transfer files and execute commands on the compute node without a password prompt.
So the first thing to do is generate an SSH key:
[root@controller ~]# ssh-keygen
Then press Enter at each prompt to accept the defaults.
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa
Your public key has been saved in /root/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:FYN98pz53tfocj5Q4DO90jqqN+lJdzXi9WKMFNjm4Wc root@Resource
The key's randomart image is:
+---[RSA 3072]----+
| oo |
| . o=o |
| o*== |
| . +Ooo |
| S +BE+.|
| .+=*.o|
| ..o*=oo|
| .+o+o=.+|
| .++o *o..|
+----[SHA256]-----+
And now it’s time to put the key into the compute node!
Just simply execute the ssh-copy-id:
[root@controller ~]# ssh-copy-id root@compute
Type the password one last time; after that, you won't need to enter the compute node's SSH password anymore!
Quiz solved!
[Question 4] Basic Installation [0.5 points]
Install the openstack-iaas package on both the control node and compute node, and configure the basic variables in the script files of the two nodes according to Table 2 (the configuration script file is /etc/openstack/openrc.sh).
Table 2 Cloud Platform Configuration Information
| Service Name | Variable | Parameter/Password |
| --- | --- | --- |
| Mysql | root | 000000 |
| | Keystone | 000000 |
| | Glance | 000000 |
| | Nova | 000000 |
| | Neutron | 000000 |
| | Heat | 000000 |
| | Zun | 000000 |
| Keystone | DOMAIN_NAME | demo |
| | Admin | 000000 |
| | Rabbit | 000000 |
| | Glance | 000000 |
| | Nova | 000000 |
| | Neutron | 000000 |
| | Heat | 000000 |
| | Zun | 000000 |
| Neutron | Metadata | 000000 |
| | External Network | eth1 (depending on actual situation) |
So, according to the quiz, we first have to install the openstack-iaas package:
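Assuming the yum source from the previous question is in place, the installation and the variable editing look like this (run on both nodes; fill in /etc/openstack/openrc.sh according to Table 2):

```shell
[root@controller ~]# yum install -y openstack-iaas
[root@controller ~]# vim /etc/openstack/openrc.sh
[root@compute ~]# yum install -y openstack-iaas
[root@compute ~]# vim /etc/openstack/openrc.sh
```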
[Question 5] Database Installation and Tuning [1.0 point]
Use the iaas-install-mysql.sh script on the controller node to install services such as Mariadb, Memcached, and RabbitMQ. After installing the services, modify the /etc/my.cnf file to meet the following requirements:
Set the database to support case sensitivity;
Set the cache for innodb table indexes, data, and insert data buffer to 4GB;
Set the database’s log buffer to 64MB;
Set the size of the database’s redo log to 256MB;
Set the number of redo log file groups for the database to 2. After completing the configuration, submit the username, password, and IP address of the controller node in the answer box.
Before we execute iaas-install-mysql.sh to install the services, we need to run the iaas-pre-host.sh script on each node, in order to install some packages the services depend on.
[root@controller ~]# cd /usr/local/bin/
[root@controller bin]# ./iaas-pre-host.sh
[root@compute ~]# cd /usr/local/bin/
[root@compute bin]# ./iaas-pre-host.sh
After the script finishes, we need to reconnect the SSH shell or reboot each node.
Then we can do the first step: run iaas-install-mysql.sh on the controller node.
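That is, run the installer and then open the configuration file:

```shell
[root@controller bin]# ./iaas-install-mysql.sh
[root@controller bin]# vim /etc/my.cnf
```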
#
# This group is read both both by the client and the server
# use it for options that affect everything
#
[client-server]
#
# This group is read by the server
#
[mysqld]
# Disabling symbolic-links is recommended to prevent assorted security risks
lower_case_table_names = 1
innodb_buffer_pool_size = 4G
innodb_log_buffer_size = 64M
innodb_log_file_size = 256M
innodb_log_files_in_group = 2
symbolic-links=0
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
max_connections=10000
#
# include all files from the config directory
#
!includedir /etc/my.cnf.d
Finally, save the file:
:wq
Quiz solved!
[Question 6] Keystone Service Installation and Usage [0.5 points]
Use the iaas-install-keystone.sh script on the controller node to install the Keystone service. After installation, use the relevant commands to create a user named chinaskill with the password 000000. Upon completion, submit the username, password, and IP address of the controller node in the answer box.
To install the Keystone service, we need to run the iaas-install-keystone.sh script:
[root@controller bin]# ./iaas-install-keystone.sh
If the installation succeeds, we can create the user with the openstack CLI:
[root@controller bin]# openstack user create --domain demo --password-prompt chinaskill
Then type the password 000000 twice, and you will get output like this:
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | ff38535aa995441d8641b24d86881583 |
| enabled | True |
| id | 206814a5dfba4a9194701d124a815ca3 |
| name | chinaskill |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
This means the user was created successfully, and this quiz is also solved!
[Question 7] Glance Installation and Usage [0.5 points]
Use the iaas-install-glance.sh script on the controller node to install the glance service. Use the command to upload the provided cirros-0.3.4-x86_64-disk.img image (which is available on an HTTP service and can be downloaded independently) to the platform, name it cirros, and set the minimum required disk size for startup to 10G and the minimum required memory for startup to 1G. After completion, submit the username, password, and IP address of the controller node to the answer box.
Well, it's a little challenging, isn't it?
But don't worry; we do the installation first:
[root@controller bin]# ./iaas-install-glance.sh
Then we download cirros-0.3.4-x86_64-disk.img:
[root@controller bin]# cd ~
[root@controller ~]# wget http://192.168.56.100/img/cirros-0.3.4-x86_64-disk.img
Confirm the filename:
[root@controller ~]# ls -lh
total 13M
-rw-------. 1 root root 1.3K May 4 16:09 anaconda-ks.cfg
-rw-r--r-- 1 root root 13M Apr 27 2022 cirros-0.3.4-x86_64-disk.img
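Finally, upload the image to Glance with the required limits. This is the standard openstack CLI; --min-disk is in GB and --min-ram is in MB, so 1G of memory is 1024:

```shell
[root@controller ~]# openstack image create cirros \
    --disk-format qcow2 --container-format bare \
    --min-disk 10 --min-ram 1024 \
    --file cirros-0.3.4-x86_64-disk.img
```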
This means the operation finished successfully!
Now this quiz is solved!
[Question 8] Nova Installation and Optimization [0.5 points]
Use the iaas-install-placement.sh, iaas-install-nova-controller.sh, and iaas-install-nova-compute.sh scripts to install the Nova service on the controller node and compute node respectively. After installation, please modify the relevant Nova configuration files to solve the problem of virtual machine startup timeout due to long waiting time, which leads to failure to obtain IP address and error reporting. After configuring, submit the username, password, and IP address of the controller node to the answer box.
We should first run the iaas-install-placement.sh script on the controller node to install the Placement service:
[root@controller ~]# cd /usr/local/bin/
[root@controller bin]# ./iaas-install-placement.sh
After installing Placement, we run the iaas-install-nova-controller.sh script to install the Nova service on the controller node:
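That is:

```shell
[root@controller bin]# ./iaas-install-nova-controller.sh
```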
Then we should install the Nova service on the compute node, but before that we copy the compute node's public key to the controller, so that it too can log in without a password.
So we run:
[root@compute ~]# ssh-copy-id root@controller
Then run iaas-install-nova-compute.sh:
[root@compute ~]# cd /usr/local/bin/
[root@compute bin]# ./iaas-install-nova-compute.sh
Installed!
+----+--------------+---------+------+---------+-------+------------+
| ID | Binary | Host | Zone | Status | State | Updated At |
+----+--------------+---------+------+---------+-------+------------+
| 6 | nova-compute | compute | nova | enabled | up | None |
+----+--------------+---------+------+---------+-------+------------+
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting computes from cell 'cell1': d955f2a9-ec41-4ea0-b72a-8f3c38977c2e
Checking host mapping for compute host 'compute': c17f7c5c-5821-4891-b6ca-a6684b028db1
Creating host mapping for compute host 'compute': c17f7c5c-5821-4891-b6ca-a6684b028db1
Found 1 unmapped computes in cell: d955f2a9-ec41-4ea0-b72a-8f3c38977c2e
Then run the check commands on the controller node to verify that the Nova service installed successfully:
[root@controller bin]# source /etc/keystone/admin-openrc.sh
[root@controller bin]# openstack compute service list
And you will see the hostname of compute node:
+----+----------------+------------+----------+---------+-------+----------------------------+
| ID | Binary | Host | Zone | Status | State | Updated At |
+----+----------------+------------+----------+---------+-------+----------------------------+
| 4 | nova-conductor | controller | internal | enabled | up | 2023-05-06T03:14:27.000000 |
| 5 | nova-scheduler | controller | internal | enabled | up | 2023-05-06T03:14:28.000000 |
| 6 | nova-compute | compute | nova | enabled | up | 2023-05-06T03:14:25.000000 |
+----+----------------+------------+----------+---------+-------+----------------------------+
OK, now for the final operation: edit the file /etc/nova/nova.conf.
[root@controller bin]# vim /etc/nova/nova.conf
Simply change #vif_plugging_is_fatal=true to vif_plugging_is_fatal=false; we can do it quickly from the command line instead of editing interactively:
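A quick non-interactive way is sed. Here is the idea demonstrated on a scratch copy; on the real node, point the command at /etc/nova/nova.conf instead:

```shell
# demo on a scratch file; on the real node the target is /etc/nova/nova.conf
conf=./nova.conf.sample
printf '#vif_plugging_is_fatal=true\n' > "$conf"
# uncomment the option (if commented) and flip it to false
sed -i 's/^#\{0,1\}vif_plugging_is_fatal.*/vif_plugging_is_fatal=false/' "$conf"
grep vif_plugging_is_fatal "$conf"
```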
Using the provided scripts iaas-install-neutron-controller.sh and iaas-install-neutron-compute.sh, install the neutron service on the controller and compute nodes. After completion, submit the username, password, and IP address of the control node to the answer box.
This quiz is easy; just run the scripts on the respective nodes:
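That is:

```shell
[root@controller bin]# ./iaas-install-neutron-controller.sh
[root@compute bin]# ./iaas-install-neutron-compute.sh
```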
Then the Neutron service is installed successfully! Quiz solved!
[Question 10] Installation of Dashboard [0.5 points]
Use the iaas-install-dashboard.sh script to install the dashboard service on the controller node. After installation, modify the Dashboard so that Django data is stored in a file (this modification solves the problem of all-in-one snapshots not being accessible in other cloud platform dashboards). After completion, submit the username, password and IP address of the controller node to the answer box.
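My notes for this one are thin, so treat the following as a sketch: run the script, then switch the Django session backend to file-based storage in the Dashboard settings. The setting name and file path below are the usual Horizon ones; verify them on your image:

```shell
[root@controller bin]# ./iaas-install-dashboard.sh
# in /etc/openstack-dashboard/local_settings, set the session engine to
# file-based storage, e.g.:
#   SESSION_ENGINE = 'django.contrib.sessions.backends.file'
[root@controller bin]# vim /etc/openstack-dashboard/local_settings
```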
Use the iaas-install-swift-controller.sh and iaas-install-swift-compute.sh scripts to install the Swift service on the control and compute nodes respectively. After installation, use a command to create a container named “examcontainer”, upload the cirros-0.3.4-x86_64-disk.img image to the “examcontainer” container, and set segment storage with a size of 10M for each segment. Once completed, submit the username, password, and IP address of the control node to the answer box.
First, we need to create partitions on the compute node:
[root@compute bin]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0xe8f17fde.
Command (m for help): n
Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-41943039, default 2048): +10G
Last sector, +sectors or +size{K,M,G} (20971520-41943039, default 41943039):
Using default value 41943039
Partition 1 of type Linux and of size 10 GiB is set
Command (m for help): n
Partition type:
p primary (1 primary, 0 extended, 3 free)
e extended
Select (default p): p
Partition number (2-4, default 2):
First sector (2048-41943039, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-20971519, default 20971519):
Using default value 20971519
Partition 2 of type Linux and of size 10 GiB is set
Command (m for help): p
Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xe8f17fde
Device Boot Start End Blocks Id System
/dev/sdb1 20971520 41943039 10485760 83 Linux
/dev/sdb2 2048 20971519 10484736 83 Linux
Partition table entries are not in disk order
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
[root@compute bin]# mkfs.ext4 /dev/sdb1
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
655360 inodes, 2621440 blocks
131072 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2151677952
80 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
[root@compute bin]# mkfs.ext4 /dev/sdb2
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
655360 inodes, 2621184 blocks
131059 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2151677952
80 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
Then run iaas-install-swift-controller.sh and iaas-install-swift-compute.sh scripts:
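After they finish, the container is created and the image uploaded in segments with the swift client; -S takes the segment size in bytes, so 10 MB is 10485760:

```shell
[root@controller bin]# ./iaas-install-swift-controller.sh
[root@compute bin]# ./iaas-install-swift-compute.sh
# create the container, then upload the image in 10 MB segments
[root@controller ~]# swift post examcontainer
[root@controller ~]# swift upload examcontainer -S 10485760 cirros-0.3.4-x86_64-disk.img
```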
[Question 12] Creating a Cinder volume [0.5 points]
Using the iaas-install-cinder-controller.sh and iaas-install-cinder-compute.sh scripts, install the Cinder service on both the control node and compute node. On the compute node, expand the block storage by creating an additional 5GB partition and adding it to the back-end storage for Cinder block storage. After completion, submit the username, password, and IP address of the compute node to the answer box.
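I did not keep the full transcript for this step. Roughly: run the two scripts, create an extra 5 GB partition on a spare disk with fdisk, and extend the cinder-volumes volume group with it. The device name /dev/sdd1 below is an assumption; use whatever partition you actually created:

```shell
[root@controller bin]# ./iaas-install-cinder-controller.sh
[root@compute bin]# ./iaas-install-cinder-compute.sh
# after creating a 5 GB partition (assumed /dev/sdd1 here) with fdisk,
# add it to the cinder-volumes back-end volume group
[root@compute bin]# pvcreate /dev/sdd1
[root@compute bin]# vgextend cinder-volumes /dev/sdd1
```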
[root@compute bin]# vgdisplay
--- Volume group ---
VG Name cinder-volumes
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 4
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size <10.00 GiB
PE Size 4.00 MiB
Total PE 2559
Alloc PE / Size 2438 / 9.52 GiB
Free PE / Size 121 / 484.00 MiB
VG UUID QHk53K-Kj2O-ilc2-pxk6-Upqe-meRE-vfJu6P
--- Volume group ---
VG Name centos
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size <49.00 GiB
PE Size 4.00 MiB
Total PE 12543
Alloc PE / Size 12542 / 48.99 GiB
Free PE / Size 1 / 4.00 MiB
VG UUID 2tEud0-Ydx6-cFfX-dZMM-F9IC-l3nc-sLS38v
Well, it’s finished.
[Question 13] Installation and Usage of Manila Service [0.5 point]
Install the Manila service on the control and compute nodes using the iaas-install-manila-controller.sh and iaas-install-manila-compute.sh scripts, respectively. After installing the service, create a default_share_type share type (without driver support), and then create a shared storage called share01 with a size of 2G and grant permission for OpenStack management network segment to access the share01 directory. Finally, submit the username, password, and IP address of the control node to the answer box.
[root@compute bin]# fdisk /dev/sdc
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x6e07efc2.
Command (m for help): n
Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-6291455, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-6291455, default 6291455):
Using default value 6291455
Partition 1 of type Linux and of size 3 GiB is set
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
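With the partition ready, the installation and share creation go roughly like this (standard manila CLI; the False argument means driver_handles_share_servers=False, i.e. without driver support):

```shell
[root@controller bin]# ./iaas-install-manila-controller.sh
[root@compute bin]# ./iaas-install-manila-compute.sh
# create the share type without driver support
[root@controller bin]# manila type-create default_share_type False
# create the 2 GB NFS share and list it
[root@controller bin]# manila create NFS 2 --name share01 --share-type default_share_type
[root@controller bin]# manila list
```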
+--------------------------------------+---------+------+-------------+-----------+-----------+--------------------+-----------------------------+-------------------+
| ID | Name | Size | Share Proto | Status | Is Public | Share Type Name | Host | Availability Zone |
+--------------------------------------+---------+------+-------------+-----------+-----------+--------------------+-----------------------------+-------------------+
| 0cdd5acb-5e54-4cdd-9187-467e2800d212 | share01 | 2 | NFS | available | False | default_share_type | compute@lvm#lvm-single-pool | nova |
+--------------------------------------+---------+------+-------------+-----------+-----------+--------------------+-----------------------------+-------------------+
Grant permission for OpenStack management network segment to access the share01 directory.
[root@controller bin]# manila access-allow share01 ip 192.168.56.0/24 --access-level rw
Check whether the operation succeeded:
[root@controller bin]# manila access-list share01
+--------------------------------------+-------------+-----------------+--------------+--------+------------+----------------------------+------------+
| id | access_type | access_to | access_level | state | access_key | created_at | updated_at |
+--------------------------------------+-------------+-----------------+--------------+--------+------------+----------------------------+------------+
| cad9f433-6ad3-4db9-afe1-90dc52374a08 | ip | 192.168.56.0/24 | rw | active | None | 2023-05-06T06:55:13.000000 | None |
+--------------------------------------+-------------+-----------------+--------------+--------+------------+----------------------------+------------+
Done!
[Question 14] Barbican Service Installation and Usage [0.5 points]
Install the Barbican service using the iaas-install-barbican.sh script. After the installation is complete, use the openstack command to create a key named “secret01”. Once created, submit the username, password, and IP address of the control node in the answer box.
Well, it's easy: run iaas-install-barbican.sh on the controller node.
[root@controller bin]# ./iaas-install-barbican.sh
Then create a key named "secret01":
[root@controller bin]# openstack secret store --name secret01 --payload secretkey
Done!
[Question 15] Cloudkitty Service Installation and Usage [0.5 points]
Install the cloudkitty service using the iaas-install-cloudkitty.sh script. After installation, enable the hashmap rating module and then create the volume_thresholds group. Create a service matching rule for volume.size and set the price per GB to 0.01. Next, apply discounts to corresponding large amounts of data. Create a threshold in the volume_thresholds group and set a discount of 2% (0.98) if the threshold is exceeded for volumes over 50GB. After completing the setup, submit the username, password, and IP address of the control node in the answer box.
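I will sketch the commands from memory; the cloudkitty OSC syntax varies between client versions, so double-check each subcommand with --help. Here, <service_id> and <group_id> are placeholders for the IDs printed by the preceding commands:

```shell
[root@controller bin]# ./iaas-install-cloudkitty.sh
# enable the hashmap rating module
[root@controller bin]# openstack rating module enable hashmap
# create a hashmap service for volume.size and price it at 0.01 per GB
[root@controller bin]# openstack rating hashmap service create volume.size
[root@controller bin]# openstack rating hashmap mapping create -s <service_id> -t flat 0.01
# create the volume_thresholds group and a 2% discount (0.98) above 50 GB
[root@controller bin]# openstack rating hashmap group create volume_thresholds
[root@controller bin]# openstack rating hashmap threshold create -s <service_id> -g <group_id> -t rate 50 0.98
```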
After setting up the OpenStack platform, disable memory sharing in the system and enable transparent huge pages. After completing this, submit the username, password, and IP address of the control node to the answer box.
[root@controller ~]# find / -name defrag
Disable memory sharing in the system and enable transparent huge pages.
[root@controller ~]# echo never > /sys/kernel/mm/transparent_hugepage/defrag
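For the record, my reading of the two requirements maps to these sysfs knobs (an assumption on my part: ksm/run=0 stops kernel samepage merging, i.e. memory sharing, and transparent_hugepage/enabled turns THP on):

```shell
# disable memory sharing (kernel samepage merging)
[root@controller ~]# echo 0 > /sys/kernel/mm/ksm/run
# enable transparent huge pages
[root@controller ~]# echo always > /sys/kernel/mm/transparent_hugepage/enabled
```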
In a Linux server with high concurrency, it is often necessary to tune the Linux parameters in advance. By default, Linux only allows a maximum of 1024 file handles. When your server reaches its limit during high concurrency, you will encounter the error message “too many open files”. To address this, create a cloud instance and modify the relevant configuration to permanently increase the maximum file handle count to 65535 for the control node. After completing the configuration, submit the username, password, and IP address of the controller node to the answer box.
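The permanent change is usually made in /etc/security/limits.conf (this is the common approach; some setups use a drop-in under /etc/security/limits.d/ instead):

```shell
# permanently raise the soft and hard open-file limits for all users
cat >> /etc/security/limits.conf <<EOF
* soft nofile 65535
* hard nofile 65535
EOF
```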
Finally, just reconnect the SSH shell and check the maximum number of file handles again:
[root@controller ~]# ulimit -n
65535
[Question 18] Linux System Tuning - Dirty Data Writing Back [1.0 point]
There may be dirty data in the memory of a Linux system, and the system generally defaults to writing back to the disk after 30 seconds of dirty data. Modify the system configuration file to temporarily adjust the time for writing back to the disk to 60 seconds. After completion, submit the username, password, and IP address of the controller node to the answer box.
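The 30-second default corresponds to vm.dirty_expire_centisecs=3000 (the value is in centiseconds), so my understanding of a temporary adjustment to 60 seconds is:

```shell
# temporary change: lost after reboot
[root@controller ~]# echo 6000 > /proc/sys/vm/dirty_expire_centisecs
```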
[Question 19] Linux System Tuning - Preventing SYN Attacks [0.5 points]
Modify the relevant configuration files on the controller node to enable SYN cookies and prevent SYN flood attacks. After completion, submit the username, password, and IP address of the controller node to the answer box.
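SYN cookies are controlled by the net.ipv4.tcp_syncookies sysctl; persisting it in /etc/sysctl.conf is the usual way:

```shell
[root@controller ~]# echo "net.ipv4.tcp_syncookies = 1" >> /etc/sysctl.conf
[root@controller ~]# sysctl -p
```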