OpenStack Interview Questions and Answers for 2024

OpenStack is an open-source cloud computing infrastructure project and one of the three most active open-source projects in the world. With more and more companies migrating to the cloud, they need a platform that lets them leverage cloud benefits; OpenStack is an open-source platform that uses pooled virtual resources to build and manage private and public clouds. Below is a curated list of Red Hat OpenStack interview questions and answers covering every aspect of OpenStack, from fresher to advanced level, with simple examples to help you work through the concepts and succeed in your upcoming interviews.

  • 60 Question(s)
  • 30 Mins of Read

Beginner

OpenStack is an open-source platform that uses virtual resources to create and manage private and public clouds. The tools that make up the OpenStack platform are known as "projects". These projects provide the core cloud-computing services: compute, networking, storage, identity, and image services. We can combine several projects to create a unique, deployable cloud.

Virtualization abstracts resources such as storage, CPU, and RAM from vendor-specific hardware and software; a hypervisor splits them before they are distributed. OpenStack uses APIs to abstract those virtual resources one step further into discrete pools that power the cloud computing tools used by administrators and users.

We can see that OpenStack offers almost all the benefits of cloud computing. 

  1. Enables fast innovation- with the availability of orchestration and self-service, developers can get faster access to all required resources to run an application efficiently and quickly. It allows them to provision machines faster and on-demand, reducing the time taken for development and testing.  
  2. Faster time-to-market- with faster development services, end clients do not have to wait for the product, resulting in faster time-to-market. 
  3. Improves scalability and resource utilization- while not on the scale of a public cloud, OpenStack is still highly scalable. We can simply scale resources up and down on demand, which helps to optimally allocate and distribute them where they are needed most. 
  4. Eliminates vendor lock-in- one of the significant problems with the traditional methods was the vendor lock-in that no more exists with OpenStack solutions. If we are not happy with the services of one vendor, we can switch to another without worrying about the underlying infrastructure and compatibility issues. It improves the flexibility of your app, which can blend well on any platform. 
  5. Cost-effective- with the pay-per-use model, companies only pay for the resources they actually use, and pay less when resources are no longer needed. 

Despite several benefits, OpenStack has significant challenges or disadvantages that everyone must know. 

  • Scarcity of talent- as a relatively new technology, finding the right talent for the job can be challenging, because managers prefer employees who know OpenStack along with other cloud technologies, which is a rare combination. 
  • Installation can be challenging- configuring the suite of projects and applications per the customer’s requirements can be tricky, so a specialist should handle the installation. To overcome this, we can choose vendors who offer migration and installation services.  
  • Upgrading OpenStack- cloud-based infrastructure is expected to ensure high availability and reliability, but upgrading OpenStack is difficult due to its complex, multi-project development approach, and upgrades can cause downtime that cannot be ignored. 
  • Too many vendors- in the beginning, companies and vendors took their time to evolve and adopt OpenStack. Today, many vendors offer OpenStack services and guarantee implementation, but choosing the right way to work with OpenStack can be confusing; if we do not pick the option that fits our business requirements, we might face losses in the long term. 

The three most common OpenStack cloud types are public, private, and hybrid. 

  • OpenStack public cloud- everyone can access the infrastructure, which is often run by service providers. Being available to all comes with significant risks, such as data loss and data tampering, but it is the cheapest way to access resources. 
  • OpenStack private cloud- these clouds are private to a single company, which alone can access their resources. A provider may host private clouds for several customers, but no customer can access another's data. This option is costlier, as you need to launch, run, and manage the private cloud. 
  • OpenStack hybrid cloud offers the benefits of both public and private clouds. The resources will be distributed fairly whenever there is a requirement. 

This is one of the most frequently asked Red Hat OpenStack interview questions and answers for freshers. Check out the explanation below.

OpenStack has several components with modular architecture and is responsible for working with the cloud platforms. Some standard components are necessary for a cloud to function seamlessly. 

  • Compute (Nova) is a cloud computing fabric controller responsible for managing resources and working with virtualization technologies, bare metals, and complex system configurations. It offers excellent flexibility in designing the cloud without needing proprietary software and hardware. Not only this, but it also helps us to integrate third-party products and legacy systems seamlessly. 
  • Image service (Glance) allows us to discover, register, and retrieve virtual machine images. It is based on a client-server architecture and delivers a REST API that lets us query image metadata and retrieve the actual image. We can use stored images as templates to deploy new virtual machine instances. This service supports multiple VM image formats, such as Raw, VirtualBox (VDI), VMware (VMDK, OVF), Hyper-V (VHD), and QEMU/KVM (qcow2). 
  • Object storage (Swift) helps create and store a considerable amount of accessible data. Anyone with access can leverage, update, and retrieve the stored data. Due to its distributed architecture, it offers high redundancy, scalability, and performance. Companies use it for storing data at affordable prices and safely. It also ensures data replication and distribution across multiple devices. 
  • Dashboard (Horizon)- we can implement an OpenStack dashboard via Horizon, which ensures authorized access. It offers a graphical interface for provisioning and automating cloud resources. It is useful for service providers and vendors who want to plug in third-party services such as monitoring, billing, and other management tools, and it lets developers automate the management of OpenStack resources using the EC2-compatible API. 
  • Identity Service (Keystone) offers a centralized list of all users mapped against all the OpenStack services they can use and are using. It integrates seamlessly with backend directories like LDAP while working as the common authentication system for the whole cloud. It supports different forms of authentication, such as standard username & password credentials, AWS-style (Amazon Web Services) logins, and token-based systems.  
  • Networking (Neutron)- provides extensive networking capabilities, such as managing complex networks, IP addresses, and routing for OpenStack. It ensures that the network does not become a bottleneck during cloud deployment and allows users to configure network settings as needed. It lets us create our own networks and connect devices and servers across them. We can also leverage SDN technology for a scalable, multi-tenant network. With this extensible framework, we can quickly deploy and manage other network services, such as VPNs, firewalls, load balancing, and intrusion detection systems. 
  • Block storage (Cinder)- offers block-level storage for apps with OpenStack Compute instances. We can manage our storage needs by attaching additional block storage using the dashboard and Nova. It supports many storage platforms, such as Linux server storage, EMC (ScaleIO, VMAX, and VNX), Ceph, Coraid, CloudByte, IBM, Hitachi Data Systems, SAN Volume Controller, etc. 
  • Telemetry (Ceilometer) provides a single point of contact for billing systems, supplying customer usage data across all OpenStack components. It receives notifications from other services and gathers and analyzes the information. 

After networking, storage is the most complex part of OpenStack, and OpenStack supports several types of it. Being open source, the architecture of OpenStack storage is similar to that of the leading public clouds, so prior cloud experience makes it easier to pick up.  

Different types of OpenStack storage options are- ephemeral/block storage, file storage, and object storage. 

  • Ephemeral storage is the primary storage option attached to an instance by default as a file system during the provisioning process. So, it is the preferable choice for all users, and they do not have to worry about managing it. But it is volatile, meaning the storage will get deleted permanently once the user deletes the instance. Thus, it is suitable for storing temporary data, such as (OS) files, caches, buffers, etc. 
  • Block storage- unlike OS files and buffers, some data cannot live in temporary storage. Database files, tables, and similar data need a non-volatile option where data loss does not cost us money. Block storage is that option: persistent storage managed independently of instance provisioning and termination. It is available as volumes that can be created, attached to an instance, and accessed from within the instance. We can also offer volumes in different service tiers, for example volumes backed by SSDs. To protect the data, it allows us to take snapshots and backups for data recovery. The Cinder service manages it. 
  • File storage- in some cases, data must not be tied to a single instance and is shared across several instances, as with NFS. Then we use file storage, which creates persistent file shares that we can mount on any instance and access remotely. The Manila service manages such storage.  
  • Object storage- this option is entirely different from the others. It is persistent cloud-native storage with built-in replication mechanisms for data durability and geo-redundancy. Object storage is not attached to instances at all; we access it via APIs. It has a flat structure with no directory hierarchy, where each chunk of data is an individual object. The Swift service manages this storage.

Expect to come across this important OpenStack question in your next interviews.

We can deploy OpenStack in a single-node or multi-node configuration. A node refers to a grouping of OpenStack services that can run on bare metal, a container, or a VM. Nodes offer horizontal scaling and high availability (HA).

OpenStack has a three-node architecture with three essential nodes- compute, network, and controller. 

  • Controller node- this is where most OpenStack services live and run. It supplies the APIs, services, and scheduling to the cloud, and hosts the dashboard, the Image service, and the Identity service. The Nova compute management service and the Neutron server are configured here, along with portions of the Block Storage, Object Storage, Orchestration, and Telemetry services, the management portions of Compute and Networking, and various Networking agents. 
  • Network node- This OpenStack node provides virtual networking and networking services to Nova instances using the Neutron Layer 3 and DHCP network services.
  • Compute node- VM instances are installed here that use iSCSI targets provisioned by the Cinder volume service. We can deploy more than one compute node where each node requires a minimum of two network interfaces. It also runs a Networking service agent connecting instances to virtual networks and provides firewalling services to instances. 

These nodes share a common subnet called the management subnet, while the Network node and each Compute node share a separate common subnet called the data subnet, to which they are attached through physical interfaces. 

A hypervisor is the software component that enables VMs to run on host machines, allowing multiple operating systems to share the same hardware. With a single OS, all the resources of the host machine are consumed by it; hypervisors avoid this by controlling and allocating hardware resources to each OS as needed without impacting the performance of the others. 

Hypervisors and OpenStack work together to create, manage, and monitor virtual machines. OpenStack supports many hypervisors.  

  • XenServer- Xen Cloud Platform (XCP), XenServer, and other XAPI-based Xen variants run Windows and Linux virtual machines. However, we must install the nova-compute service in a para-virtualized virtual machine. 
  • Xen: The Xen Project Hypervisor uses libvirt as a management interface into OpenStack’s nova-compute to execute the NetBSD, FreeBSD, Windows, and Linux virtual machines. 
  • VMware ESXi: vSphere 5.1.0 and newer versions run VMware-based Windows and Linux images through a connected VMware vCenter server. 
  • UML (User Mode Linux). It can be used for development purposes. 
  • QEMU. This is the Quick EMUlator used for development purposes. 
  • Hyper-V. Microsoft Hyper-V runs nova-compute natively on the Windows virtualization platform. It also runs FreeBSD, Linux, and Windows virtual machines. 
  • LXC (Linux Containers). These run Linux-based virtual machines.

We can use the following commands to check the hypervisor name in OpenStack. 

$ nova hypervisor-list 
$ nova hypervisor-show ID 

This service offers a single point of integration to manage authentication, authorization, and other services. It is the first service that a user interacts with. Once the user is authorized, they can access other OpenStack services. Even other services leverage identity services to authorize the user's identity so they can explore services available within the deployment. We can integrate identity services with the external user management system if we want. It allows users and services to locate other services.

Identity service has three components- 

  • Server- a centralized server offering authentication and authorization services using a RESTful interface. 
  • Drivers- are integrated into the server for accessing identity information in repositories that are external to OpenStack or exist in the infrastructure where OpenStack is deployed. 
  • Modules - runs in the address space of OpenStack using the identity services. These modules are responsible for extracting and sending the required information based on the incoming requests to a centralized server for authorization.  
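As an illustration of the token-based flow described above, below is a minimal sketch of the JSON body a client POSTs to Keystone's v3 `/auth/tokens` endpoint to authenticate with a username and password. The user, password, and project names are made-up values, and `build_token_request` is a hypothetical helper, not part of any OpenStack library.

```python
import json

def build_token_request(username, password, project, domain="Default"):
    """Build the JSON body POSTed to Keystone's /v3/auth/tokens endpoint."""
    return {
        "auth": {
            # Authenticate with the "password" method for the given user
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": username,
                        "domain": {"name": domain},
                        "password": password,
                    }
                },
            },
            # Scope the resulting token to a project so it can be used
            # against other OpenStack services
            "scope": {
                "project": {"name": project, "domain": {"name": domain}}
            },
        }
    }

body = build_token_request("demo", "s3cret", "demo-project")
print(json.dumps(body, indent=2))
```

On success, Keystone returns a token in the `X-Subject-Token` response header, which the client then presents to the other services.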

These are the commands that we can run on the terminal to pause and unpause a specific instance by providing the name for that instance. 

$ nova pause INSTANCE_NAME 
$ nova unpause INSTANCE_NAME 

CPU pinning is the process of reserving specific physical cores for virtual machines as per their requirements, also known as isolating the CPUs.
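As an illustrative sketch (the core range is an assumed example, not a recommendation), CPU pinning is typically configured on the compute node in nova.conf:

```ini
# nova.conf on the compute node (illustrative values)
[compute]
# Reserve physical cores 2-7 for pinned (dedicated) instance vCPUs
cpu_dedicated_set = 2-7
```

Instances then request pinned CPUs through a flavor extra spec such as `hw:cpu_policy=dedicated`.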

Don't be surprised if this question pops up as one of the top OpenStack interview questions in your next interview.

Cells let us scale an OpenStack Compute cloud in a distributed way without relying on complicated technologies, such as database and message queue clustering, making them a preferable choice for large deployments.

Once we enable this functionality, it partitions the hosts in an OpenStack Compute cloud into groups called cells. Cells are configured as a tree: the top-level cell has a host running the nova-api service but not nova-compute, whereas child cells run all nova-* services except nova-api. Each cell can be considered a full Compute deployment with its own database server and message queue broker.

The nova-cells service handles communication between cells via RPC and selects the cell for each new instance. Cell scheduling is separate from host scheduling: nova-cells first picks a cell and forwards the request to that cell's nova-cells service, and the host scheduler there places the instance. Cells are disabled by default, and we need to enable them explicitly in the configuration. 

This filter allows users to use an HTTP PATCH to alter their own password.

A share is a remote, mountable file system that instances can access. We use manila client commands to create and manage shares. 

Command to create a share network. 

$ manila share-network-create \ 
    --name sharenetwork \ 
    --description "My Manila network" \ 
    --neutron-net-id dca0efc7-523d-43ef-9ded-af404a02b055 \ 
    --neutron-subnet-id 29ecfbd5-a9be-467e-8b4a-3415d1f82888 

Command to list share networks. 

$ manila share-network-list 

Command to create a share. 

$ manila create NFS 1 \ 
    --name myshare \ 
    --description "My Manila share" \ 
    --share-network sharenetwork \ 
    --share-type default 

Command to display a share. 

$ manila show myshare 

Command to list shares. 

$ manila list 

Command to list the share export locations. 

$ manila share-export-location-list myshare 

Command to allow access. 

$ manila access-allow myshare ip 20.0.0.0/24 --access-level ro 

Command to list access. 

$ manila access-list myshare 

Command to deny the access. 

$ manila access-deny myshare 0c8470ca-0d77-490c-9e71-29e1f453bf97 
$ manila access-deny myshare f151ad17-654d-40ce-ba5d-98a5df67aadc 

Command to create a snapshot. 

$ manila snapshot-create --name mysnapshot --description "My Manila snapshot" myshare 

Command to delete a share. 

$ manila delete mysharefromsnap 

Instances refer to the virtual machines that run inside the cloud on any physical compute node. These instances are managed by the compute services. A host is a node where a group of instances resides. 

You can run several commands to work with instances. 

Command to pause an instance. 

$ openstack server pause INSTANCE_NAME 

Command to unpause an instance. 

$ openstack server unpause INSTANCE_NAME 

Command to suspend an instance. 

$ openstack server suspend INSTANCE_NAME 

Command to resume a suspended instance. 

$ openstack server resume INSTANCE_NAME 

Command to shelve an instance: it shuts down the instance and stores it along with its disk data. Anything in memory is lost. 

$ openstack server shelve SERVERNAME 

Restore an instance. 

$ openstack server unshelve SERVERNAME 

Command to remove a shelved instance from the compute host, freeing its resources.  

$ nova shelve-offload SERVERNAME 

Command to soft reboot an instance. 

$ openstack server reboot SERVER 

Command to hard reboot an instance. 

$ openstack server reboot --hard SERVER 

Neutron's DHCP agent uses dnsmasq, and we can customize the behaviour of the DHCP server by creating a dnsmasq configuration file and pointing the agent at it.

The command to install the DHCP agent is- 

$ apt-get install neutron-dhcp-agent 
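A minimal sketch of the two files involved follows; the paths and the MTU value are illustrative assumptions, not required settings.

```ini
# /etc/neutron/dhcp_agent.ini
[DEFAULT]
# Point the DHCP agent at a custom dnsmasq configuration file
dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf

# /etc/neutron/dnsmasq-neutron.conf (illustrative)
# Push a smaller MTU (DHCP option 26) to instances, e.g. for tunnelled networks
dhcp-option-force=26,1450
```

Restart the neutron-dhcp-agent service after changing these files so the settings take effect.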

List the images we can access. 

$ openstack image list 

Command to delete a specific image 

$ openstack image delete IMAGE 

Command to describe a specific image 

$ openstack image show IMAGE 

Command to update the image. 

$ openstack image set IMAGE

Some advanced and useful commands are- 

Command to upload the kernel image. 

$ openstack image create "cirros-threepart-kernel" \ 
  --disk-format aki --container-format aki --public \ 
  --file ~/images/cirros-0.3.5-x86_64-kernel 

Command to upload the ramdisk image. 

$ openstack image create "cirros-threepart-ramdisk" \ 
  --disk-format ari --container-format ari --public \ 
  --file ~/images/cirros-0.3.5-x86_64-initramfs 

Command to upload a three-part image. 

$ openstack image create "cirros-threepart" --disk-format ami \ 
  --container-format ami --public \ 
  --property kernel_id=$KID --property ramdisk_id=$RID \ 
  --file ~/images/cirros-0.3.5-x86_64-rootfs.img 

Command to register a raw image. 

$ openstack image create "cirros-raw" --disk-format raw \ 
  --container-format bare --public \ 
  --file ~/images/cirros-0.3.5-x86_64-disk.img 

The Bare Metal service of OpenStack, codenamed ironic, is a collection of components that help in managing and provisioning physical machines for any project as per the requirement. 

The bare metal services have the following components. 

  • ironic-api- processes application requests and sends them to the ironic-conductor over a remote procedure call (RPC).  
  • ironic-conductor- uses drivers to execute hardware operations, such as adding/editing/deleting nodes, powering nodes on/off, and provisioning/deploying/cleaning bare-metal nodes. 
  • ironic-python-agent- a Python service running in a temporary RAM disk to provide remote access for the ironic-conductor and ironic-inspector services.

The Bare Metal service interacts with other OpenStack services, such as 

  • the OpenStack Telemetry module (ceilometer), to gather the IPMI metrics. 
  • the OpenStack Identity service (keystone) for authenticating requests and locating other OpenStack services 
  • the OpenStack Image service (glance) for retrieving images and image meta-data 
  • the OpenStack Networking service (neutron) for configuring DHCP and network. 
  • the OpenStack Compute service (nova), which works as a user-facing API for instance management. 
  • the OpenStack Object Storage (swift) for temporarily storing config drive, user images, deployment logs, and inspection data. 

These are the components that are used within any network. 

  • Networks 
  • Routers 
  • Subnets 
  • Ports 
  • Vendor Plugins 

Don't be surprised if this question pops up as one of the top OpenStack interview questions in your next interview.

To create a bare-metal node, we must provide the details like PM address, user name, and password that match the details in the hardware’s BIOS/IPMI configuration. 

Below is the command. 

$ nova baremetal-node-create --pm_address PM_ADDRESS --pm_user PM_USERNAME \ 
  --pm_password PM_PASSWORD $(hostname -f) 1 512 10 aa:bb:cc:dd:ee:ff 

In the following example, we are using the PM address 1.2.3.4, the PM user name ipmi, and password ipmi. 

$ nova baremetal-node-create --pm_address 1.2.3.4 --pm_user ipmi \ 
  --pm_password ipmi $(hostname -f) 1 512 10 aa:bb:cc:dd:ee:ff 

 For adding the network interface to the node, we must provide the interface’s MAC address.  

$ nova baremetal-interface-add 1 aa:bb:cc:dd:ee:ff

To display the details of a bare-metal node, we can use the following command. 

$ nova baremetal-node-show 1 

Alarms offer monitoring-as-a-service for the resources running on OpenStack. This monitoring service helps us automatically scale a group of instances in and out using the Orchestration service. Alarms are useful for notifying you about the health of the cloud’s resources: how they are being used, in what amounts, and whether they are sufficient. 

Alarms follow a tri-state model- 

  • ok- means the rule that governs the alarm is evaluated as “false”. 
  • alarm- means the alarm's rule is evaluated as “true”. 
  • insufficient data- means there are not enough data points in the evaluation periods for determining the alarm state. 
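The tri-state model above can be sketched as a toy evaluation function. This is purely an illustration, not the actual Telemetry code; the mean-over-threshold rule and the minimum-sample count are assumed example choices.

```python
def evaluate_alarm(datapoints, threshold, min_points=3):
    """Toy evaluation of a Telemetry-style alarm.

    Returns 'insufficient data' when too few datapoints exist,
    'alarm' when the rule (mean > threshold) evaluates true,
    and 'ok' when it evaluates false.
    """
    if len(datapoints) < min_points:
        return "insufficient data"
    mean = sum(datapoints) / len(datapoints)
    return "alarm" if mean > threshold else "ok"

print(evaluate_alarm([], 80))            # insufficient data
print(evaluate_alarm([90, 95, 99], 80))  # alarm
print(evaluate_alarm([10, 20, 30], 80))  # ok
```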

Don't be surprised if this pops up as one of the top OpenStack interview questions in your next interview.

The networking options available are- 

  • Flat Network Manager: it fetches the IP addresses for VM instances from the subnet and injects them into the image while launching. 
  • Flat DHCP Network Manager: it fetches the IP addresses for VM instances from the subnet mentioned by the network administrator. 
  • VLAN Network Manager: the compute creates a VLAN and bridge; the DHCP server is started for each VLAN to pass the IP addresses to the VM instances. 

The components of identity user management are- 

  • Users: refers to the digital representation of a person, service, or system that leverages the OpenStack cloud services. 
  • Tenants: a container for grouping or isolating resource or identity objects. Depending on the service operator, a tenant may map to a customer, account, organization, or project. 
  • Roles: a role refers to a set of privileges that determines the operations a user is permitted to perform in a given tenant. 

Advanced

In OpenStack, there could be scenarios where the system capacities can be exhausted without any notification. We can use quotas that specify the operational limit to prevent such issues.

For example, if we want to cap the number of gigabytes allowed for each project, we can control it via quotas to optimize resource usage. We can enforce quotas at the project level and at the project-user level.

By changing the quotas, we can control and provide more terabytes on a compute node. We can use the dashboard to view the default Compute and Block Storage quotas for new projects and update them from there.
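The enforcement behind a quota reduces to a simple check, sketched below. `quota_allows` is a made-up helper for illustration; the convention that a limit of -1 means "unlimited" matches how OpenStack quotas are commonly expressed.

```python
def quota_allows(used, requested, limit):
    """Return True if a request fits within the project's quota.

    A limit of -1 conventionally means 'unlimited' in OpenStack quotas.
    """
    if limit == -1:
        return True
    return used + requested <= limit

# e.g. a project with a 10-instance quota that already runs 8 instances
print(quota_allows(used=8, requested=2, limit=10))  # True
print(quota_allows(used=8, requested=3, limit=10))  # False
```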

OpenStack has a key manager, barbican, the secrets manager for the Red Hat OpenStack Platform. With the barbican API and command line, we can centrally manage certificates, keys, and passwords used by other OpenStack services.  

The secrets are managed using- 

  • Symmetric encryption keys- we can use them for Block Storage (cinder) volume encryption, ephemeral disk encryption, and Object Storage (swift) encryption, among others. 
  • Asymmetric keys and certificates- we can use them for glance image signing and verification, among others. 

Also, we can integrate barbican with the Block Storage (cinder) and Compute (nova) components.

Key features of secret management- 

  • Storage and retrieval of secrets. 
  • Define and manage access policies for the key. 
  • Administrative functionality and control of the lifecycle of key material. 
  • An efficient auditing ability in OpenStack services for key access and lifecycle events. 
  • Key management as a service available for all PaaS application(s) deployed on an OpenStack cloud. 
  • Scales key management effectively and improves availability. 
  • Supports encryption of volumes/disks. 

A must-know for anyone looking to prepare for advanced OpenStack interview questions, this is one of the most frequently asked questions.

We can use the following commands to work with secrets. 

Command to list secrets. 

Secrets are identified by their URI, indicated as a href value. 

$ openstack secret list 

Commands to add new secrets. 

$ openstack secret store --name testSecret --payload 'TestPayload' 

Command to update secret. 

$ openstack secret update https://192.168.123.163:9311/v1/secrets/ca34a264-fd09-44a1-8856-c6e7116c3b16 'TestPayload-updated' 

Command to delete a secret 

$ openstack secret delete https://192.168.123.163:9311/v1/secrets/ecc7b2a4-f0b0-47ba-b451-0f7d42bc1746 

Using the Python SDK, we can create Python scripts that create and manage resources in the OpenStack cloud. It implements the Python binding to the OpenStack API, letting us carry out automation tasks by calling Python objects. The OpenStack command-line tools are also implemented using the Python SDK.

An end user of the OpenStack cloud can simply provision their resources within the set limit enforced by the cloud administrator. The user can do it using- the OpenStack dashboard or command-line clients. 

  • OpenStack dashboard is a web-based UI, code-named horizon, allowing end users to view, create, and manage resources. 
  • OpenStack command-line clients allow us to run simple commands to view, create, and manage resources in a cloud and automate tasks using scripts.
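Both the command-line clients and the Python SDK typically read credentials from a clouds.yaml file. A minimal sketch follows; the cloud name, URL, and credentials are made-up illustrative values.

```yaml
# ~/.config/openstack/clouds.yaml (illustrative values)
clouds:
  mycloud:
    auth:
      auth_url: https://controller.example.com:5000/v3
      username: demo
      password: s3cret
      project_name: demo-project
      user_domain_name: Default
      project_domain_name: Default
    region_name: RegionOne
```

The CLI can then be pointed at this cloud with `--os-cloud mycloud`, and the Python SDK with `openstack.connect(cloud="mycloud")`.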

If we want to use the snapshots for migrating an instance from the OpenStack projects to the cloud, below are the steps. 

  • Create a snapshot for the instance that needs to be migrated. 
  • Download that snapshot. 
  • Import the snapshot to the new environment. 
  • Boot the new instance from the snapshot.

Below are the commands for each step.

Step 1: Create a snapshot for the instance that needs to be migrated. 

Shut down the source VM before taking the snapshot so the data will be flushed to the disk. To check the instance name, run the list command. 

$ nova list  

Shut down the instance. 

$ nova stop myInstance 

Now take a snapshot. 

$ nova image-create --poll myInstance myInstanceSnapshot  

Check the status of the image. 

$ nova image-list 

Step 2: Download that snapshot. 

Get the image ID. 

$ nova image-list 

Download the image using the image ID. 

$ glance image-download --file snapshot.raw f30b204e-1ce6-40e7-b8d9-b353d4d84e7d 

Step 3: Import the snapshot to the new environment. 

Go to the new environment, and import the snapshot. 

$ glance --os-image-api-version 1 image-create \ 
  --container-format bare --disk-format qcow2 --copy-from IMAGE_URL 

Step 4: Boot the new instance from the snapshot. 

Now use the snapshot to create the new instance. 

$ nova boot --flavor m1.tiny --image myInstanceSnapshot myNewInstance

To check the client’s version number, run the following command. 

$ PROJECT --version 

For checking the version number for the OpenStack client, run the following command. 

$ openstack --version

Like other OpenStack services, the Telemetry service has a logging option. We can change the target of logging, the format of log entries, the log levels, and other details via the ceilometer.conf file, which exposes a list of configuration options.

By default, log messages go to stderr; we can redirect them to a log file or to syslog instead.
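A ceilometer.conf fragment using the standard oslo.log options might look like the sketch below; the paths and values are illustrative assumptions.

```ini
# ceilometer.conf (illustrative values; standard oslo.log options)
[DEFAULT]
# Send log messages to a file instead of the default stderr
log_dir = /var/log/ceilometer
log_file = ceilometer.log
use_stderr = false
# Or forward them to syslog instead
# use_syslog = true
debug = false
```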

OpenStack provides a filter scheduler that filters and weighs compute hosts to make an informed decision about where to create a new instance. This scheduler works only with compute nodes.

The filter scheduler goes through all the compute nodes and evaluates them against the set of filters applied. The resulting hosts are ordered by weight, and the scheduler selects the most-weighted hosts for the requested number of instances. If the scheduler cannot find a host for the next instance, there is no suitable match on which to schedule it. If the existing filtering algorithms do not provide the right result, we can implement our own.
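The filter-then-weigh flow can be sketched as a toy scheduler. This is an illustration, not Nova's actual implementation; the single RAM filter and the "most free RAM" weigher are assumed example policies.

```python
def schedule(hosts, ram_needed):
    """Toy filter scheduler over a dict of host name -> free RAM in MB."""
    # Filtering pass: keep only hosts that can satisfy the request
    candidates = [h for h, free in hosts.items() if free >= ram_needed]
    if not candidates:
        return None  # no host passed the filters; scheduling fails
    # Weighing pass: prefer the host with the most free RAM (spreading)
    return max(candidates, key=lambda h: hosts[h])

hosts = {"node1": 2048, "node2": 8192, "node3": 512}
print(schedule(hosts, 1024))   # node2
print(schedule(hosts, 16384))  # None
```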

A common and most important OpenStack question for interview, don't miss this one.

Each instance can have fixed (private) and floating (public) IP addresses. Private IPs are for internal communication among instances; public IPs are for communication with networks outside the cloud. Whenever we launch an instance, a fixed IP address is assigned automatically and remains the same until termination, even across reboots.  

The cloud administrator configures a pool of IP addresses available in OpenStack compute. The number of floating IP addresses we can assign to a project depends on the limit set in the quota.  

For managing the floating IP address, we use the OpenStack command. 

Command to list details of the floating IP address. 

$ openstack floating ip pool list 

Command to allocate a floating IP address to a project. 

$ openstack floating ip create public 

Command to associate an IP address with an instance. 

$ openstack server add floating ip INSTANCE_NAME_OR_ID FLOATING_IP_ADDRESS 

Command to disassociate a floating IP address from an instance. 

$ openstack server remove floating ip INSTANCE_NAME_OR_ID FLOATING_IP_ADDRESS 

Removing floating IP address from a project. 

$ openstack floating ip delete FLOATING_IP_ADDRESS 

One of the most frequently posed OpenStack scenario-based interview questions; be ready for this conceptual question.

Managing projects, users, and roles requires administrator privileges. Administrators can assign users to specific projects, and a user can be a member of several projects. Roles define what a user can do, and administrators assign roles to user-project pairs.

To define the actions allowed for OpenStack service roles, the admin edits the /etc/PROJECT/policy.json files. When setting up the cloud, the operator defines at least one project, role, and user. The admin can add, update, and delete projects and users, assign users to one or more projects, and change or remove those assignments. To delete a user account, remove it from its primary project.
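As an illustration, a policy.json entry maps an API action to a role-based rule using the oslo.policy rule syntax. The entries below are a sketch in the style of the Block Storage policy file; the exact action names vary per service.

```json
{
    "context_is_admin": "role:admin",
    "admin_or_owner": "rule:context_is_admin or project_id:%(project_id)s",
    "volume:delete": "rule:admin_or_owner"
}
```

Here, a user may delete a volume only if they hold the admin role or the volume belongs to their own project.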

A project in OpenStack is a group of zero or more users. In Compute, a project owns virtual machines; in Object Storage, a project owns containers. We can associate users with more than one project. Each project-user pair has a specific role associated with it.

To work with projects, we can use openstack commands.

Command to list projects to get their details. 

$ openstack project list 

Command to create a new project. 

$ openstack project create --description 'my new project' new-project \ 
  --domain default 

Command to disable a project temporarily. 

$ openstack project set PROJECT_ID --disable 

Command to enable a disabled project. 

$ openstack project set PROJECT_ID --enable 

Command to update the name of the project. 

$ openstack project set PROJECT_ID --name project-new 

Command to delete a project. 

$ openstack project delete PROJECT_ID 

Users are members that can be associated with one or more projects. Below are the commands.

Command to list the users. 

$ openstack user list 

Command to create a new user. We need to specify the username, password, and email address; without a password, the user cannot log in to the dashboard.

$ openstack user create --project new-project --password PASSWORD new-user 

To disable the user’s account temporarily. 

$ openstack user set USER_NAME --disable 

Command to enable the disabled account. 

$ openstack user set USER_NAME --enable 

Command to change the name and description of the user. 

$ openstack user set USER_NAME --name user-new --email new-user@example.com 

User has been updated. 

Command to delete the user. 

$ openstack user delete USER_NAME 

Roles are assigned to a project-user pair. Roles define the actions that can be performed by the user for that specific project. 

Below are the commands. 

Command to list all the available roles. 

$ openstack role list 

Command to create a new role. 

$ openstack role create new-role 

Command to list users and user ID to assign the role. 

$ openstack user list 

List all the roles to check the role ID to assign. 

$ openstack role list 

List all the projects and check project ID to assign to the role. 

$ openstack project list 

Assigning a role to project-user pair. 

$ openstack role add --user USER_NAME --project TENANT_ID ROLE_NAME 

Command to verify the assignment of role. 

$ openstack role assignment list --user USER_NAME \ 
  --project PROJECT_ID --names 

Command to check the role details. 

$ openstack role show ROLE_NAME 

Command to remove the role. 

$ openstack role remove --user USER_NAME --project TENANT_ID ROLE_NAME

A staple in OpenStack interview questions for experienced candidates; be prepared to answer this one from your hands-on experience.

A RabbitMQ broker is a logical grouping of one or more nodes, each running the RabbitMQ application and sharing users, virtual hosts, queues, exchanges, bindings, and runtime parameters. Such a group of nodes is referred to as a cluster. For seamless operation, OpenStack-Ansible replicates all data across all nodes for high availability. The nodes address each other using domain names.

There are two ways to create a RabbitMQ cluster:

  • Manually, using rabbitmqctl. 
  • Declaratively, by listing cluster nodes in a config file or by using the rabbitmq-autocluster or rabbitmq-clusterer plugins. 
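The declarative route can be sketched with RabbitMQ's classic static peer-discovery configuration (RabbitMQ 3.7+ rabbitmq.conf format; the node names here are placeholders, not values from this deployment):

```ini
# rabbitmq.conf: static cluster formation via classic config (node names are examples)
cluster_formation.peer_discovery_backend = classic_config
cluster_formation.classic_config.nodes.1 = rabbit@node-a
cluster_formation.classic_config.nodes.2 = rabbit@node-b
cluster_formation.classic_config.nodes.3 = rabbit@node-c
```

Each node started with this file attempts to join the listed peers, forming the cluster without any manual rabbitmqctl join_cluster steps.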

IT and other companies back up their data for disaster recovery purposes. If we deployed the OpenStack cloud using OpenStack-Ansible, we can back up the /etc/openstack_deploy/ directory, which stores the live inventory, host structure, network information, passwords, and the options applied to the configuration files of each service. Store the backup at a remote location.

To troubleshoot the issue, we can perform the following checks. 

  • Check the physical connectivity of the hosts to the network. 
  • Check the available interface bonding. 
  • Check for various configurations, such as VLAN configs and trunking to edge and uplink ports on physical switches. 
  • Check that the hosts are in the same IP subnet and have proper routing. 
  • Ensure that no iptables rules applied to the hosts deny the traffic.

To check the status of every OpenStack service, we access each controller node and run the service <service_name> status command.

All database operations are handled by glance-registry, which manages the storage of the image index and properties. glance-api handles the API interactions and the image store.

If we face any issue with the Image service, we should check /var/log/glance-api.log and /var/log/glance-registry.log on the Glance API controller.

For better troubleshooting, perform the following actions.

  • Download the image to check that it is readable from the store.
  • Upload an image to check that it registers and writes properly to the image store.
  • Run openstack image list to check that the API and registry are working.

OpenStack services commonly store their information in a SQL database, typically MariaDB or MySQL, although other databases such as PostgreSQL are also supported. The database runs on the controller node.

Below are the commands for installing and configuring the database components.

a. Install the packages. 

# yum install mariadb mariadb-server python2-PyMySQL  

b. Create and edit the /etc/my.cnf.d/openstack.cnf file, and add the following [mysqld] section. 

[mysqld] 
bind-address = 10.0.0.11
default-storage-engine = innodb 
innodb_file_per_table 
max_connections = 4096 
collation-server = utf8_general_ci 
character-set-server = utf8 

c. Start the database service and configure it to start when the system boots. Below are the commands. 

# systemctl enable mariadb.service 
# systemctl start mariadb.service 

d. Run the mysql_secure_installation script for securing the database service.  

# mysql_secure_installation 

Some major OpenStack concerns are- 

  • Data residency- the concern over who owns cloud data and whether we can trust the cloud operator with it. 
  • Data disposal- the concern over whether the operator sanitizes cloud system media before it is disposed of or released for reuse. 
  • Data not removed securely- we can mitigate this by configuring system settings to automatically vacuum unwanted data and free space.

Compliance may be required for various OpenStack activities, such as regulatory and legal requirements, customer needs, privacy considerations, and security best practices. To adhere to regulations and standards, businesses and customers must have a compliance function in place. It describes the company's status regarding the audits, assessments, and certifications necessary to strengthen its security.

The shared file service in OpenStack, code-named manila, allows us to manage shared file systems in a multi-tenant cloud. It lets us create a shared file system and manage its properties, such as visibility, accessibility, and usage quotas. It works with various storage providers that follow shared file system protocols: NFS, CIFS, GlusterFS, and HDFS.

Configuration files hold the information needed for OpenStack components to work smoothly. If a user without authorized access changes them, even accidentally, availability can be impacted, resulting in denial of service to end users. Therefore, the user ownership of such critical configuration files must be set to root and the group ownership to manila; the containing directory should have the same ownership.

To verify the ownership, run the following commands and check the output. 

$ stat -L -c "%U %G" /etc/manila/manila.conf | egrep "root manila" 
$ stat -L -c "%U %G" /etc/manila/api-paste.ini | egrep "root manila" 
$ stat -L -c "%U %G" /etc/manila/policy.json | egrep "root manila" 
$ stat -L -c "%U %G" /etc/manila/rootwrap.conf | egrep "root manila" 
$ stat -L -c "%U %G" /etc/manila | egrep "root manila" 

The data processing service of OpenStack, code-named sahara, provides a platform that lets us provision and manage instance clusters using frameworks like Hadoop and Spark. A user can upload and execute applications that access data from Object Storage or external providers using the dashboard or the REST API. The data processing controller uses the Orchestration service (heat) to create instance clusters that can be scaled up and down on request.

A staple in OpenStack interview questions for experienced candidates; be prepared to answer this one from your hands-on experience.

Running instances in a virtualized environment opens up security controls that are not typically available when deploying onto bare metal. Several technologies can be applied to the virtualization stack to improve information assurance for cloud tenants.

OpenStack users with strong security requirements should consider deploying these technologies, though not all of them are available in every scenario. Some technologies inspect instance data, such as run state, which may be undesirable to the users of the system.

Different security services for instances are- 

  • Entropy for instances 
  • Trusted images 
  • Instance migration 
  • Updates and patches 
  • Monitoring, alerting, and reporting 
  • Firewalls and other security controls 

We can use different methods to do that- 

  • Run the ping command to locate where in the network the issue lies. From the instance, ping an external host, such as google.com. If that works, there is no network issue. 
  • If it does not work, ping the IP address of the compute node hosting the instance. If that works, the issue exists somewhere between the compute node and the compute node's gateway. 
  • If that does not work either, the issue lies between the instance and the compute node. Check the bridge connecting the compute node's main NIC with the vnet NIC of the instance. 
  • If everything works, launch another instance and check whether the two instances can ping each other. If they cannot, check the firewall settings.
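The escalation logic above can be sketched as a small decision function. This is a hypothetical illustration of the reasoning, not an OpenStack tool; the function and its inputs are invented for clarity.

```python
# Hypothetical sketch of the ping-based fault-isolation logic above.

def locate_network_fault(pings):
    """pings: dict of boolean ping results taken from the instance's viewpoint."""
    if pings["external_host"]:
        # External host reachable: connectivity is fine end to end.
        return "no network issue"
    if pings["compute_node"]:
        # Instance reaches its compute node but not the outside world.
        return "issue between compute node and its gateway"
    # Instance cannot even reach its own compute node: inspect the bridge
    # linking the compute node's main NIC to the instance's vnet NIC.
    return "issue between instance and compute node (check bridge)"

print(locate_network_fault({"external_host": False, "compute_node": True}))
# prints "issue between compute node and its gateway"
```

In practice, each branch corresponds to running `ping` from a different vantage point before narrowing in on bridges and firewall rules.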

We can spin up a new VM using the command below. 

# openstack server create --flavor {flavor-name} --image {Image-Name-Or-Image-ID} --nic net-id={Network-ID} --security-group {Security_Group_ID} --key-name {Keypair-Name} <VM_Name> 

It uses Puppet as the configuration management tool, which allows us to automate the installation of OpenStack. In the background, Puppet pushes the configuration to the nodes.

The sanitization process removes information from media such that it cannot be retrieved or reconstructed. Sanitization techniques include clearing, purging, cryptographic erase, and destruction, ensuring that information is not disclosed to unauthorized users when the media is reused or released for disposal.

The bare-metal driver for OpenStack Compute helps manage the provision of physical hardware using standard cloud APIs and orchestration tools. We can use it for single-project clouds, such as the self-deployment of OpenStack.

With the bare-metal driver, we must create a network interface and add it to a bare-metal node before launching an instance from a bare-metal image. Commands are available to list and remove bare-metal nodes and the network interfaces linked to them.

Commands to manage bare-metal nodes. 

Add a network interface to a bare-metal node: 

baremetal-interface-add 

List network interfaces linked to a bare-metal node: 

baremetal-interface-list 

Remove a network interface: 

baremetal-interface-remove 

Create a bare-metal node: 

baremetal-node-create 

Remove a bare-metal node and any linked interfaces: 

baremetal-node-delete 

List available bare-metal nodes: 

baremetal-node-list 

Show information about a bare-metal node: 

baremetal-node-show 

A volume is a block storage device that we can attach and detach, much like a USB drive. We can attach a volume to only one instance at a time. We can use the openstack client commands to create and manage volumes. 

Migrating a volume- 

To migrate a volume with its data from one location to another, we must have admin rights. A volume can be migrated only if it is detached and has no snapshots. Scenarios where we might migrate a volume include bringing down a physical storage device for maintenance without impacting functioning, modifying volume properties, and freeing up space in the back end. 

For migrating a volume, we can use the command- 

$ openstack volume migrate [-h] --host <host> [--force-host-copy] 
                                  [--lock-volume | --unlock-volume] 
                                  <volume> 

In the above command- 

  1. --force-host-copy forces the generic host-based migration mechanism and bypasses any driver optimizations. 
  2. --lock-volume | --unlock-volume applies to available volumes and determines whether other commands can abort the migration. 
  3. --lock-volume locks the volume state so the migration cannot be aborted midway. 

Using the networking capability, we can easily create and manage network objects, such as networks, subnets, and ports, which other OpenStack services can then use. Plugins accommodate different networking equipment and software, improving the flexibility of OpenStack architecture and deployments.

The Networking service, code-named neutron, allows us to define network connectivity and addressing in the cloud. Operators can leverage different networking technologies to power their cloud networking. It also offers an API that lets us configure and manage network services ranging from L3 forwarding and Network Address Translation (NAT) to load balancing, perimeter firewalls, and virtual private networks.

OpenStack networking has three components. 

  • API Server- it offers support for Layer 2 networking and IP Address Management (IPAM), as well as an extension for a Layer 3 router construct that enables routing between Layer 2 networks and gateways to external networks. Its wide range of APIs enables interoperability with various commercial and open-source network technologies. 
  • OpenStack Networking plug-ins and agents- these plug and unplug ports, create networks and subnets, and provide IP addressing, among other tasks. Only one plugin can be used at a time. 
  • Messaging queue accepts and routes RPC requests between agents to complete API operations. 

This, along with other interview questions on OpenStack, is a regular feature in OpenStack interviews; be ready to tackle it with the approach mentioned below.

In OpenStack, tokens are used to authenticate and authorize a user's interactions with the OpenStack APIs. Tokens come in different types, specifying various levels and scopes of authorization. The general purpose of a token is to relay information about your roles. A user can have multiple role assignments across projects, domains, and the system, known as authorization scopes, while a token operates within a single scope.

Different scopes of tokens are- 

  • Unscoped tokens- they do not have a service catalogue, roles, or authorization scope. They just prove the user’s identity to keystone. There is no need to specify an authorization scope in your authentication request. 
  • Project-scoped tokens- such tokens authorize the user to operate in a specific cloud tenancy and are helpful for things like spinning up compute resources or carving off block storage. To consume resources in deployment, users must have role assignments on projects. 
  • Domain-scoped tokens- such tokens authorize the user to operate on the contents of a domain or the domain itself. If the user has access to the domain, they can manage things within the domain, such as creating new users and projects. 
  • System-scoped tokens- some APIs affect the entire deployment system, such as modifying endpoints and service management. Such tasks are reserved for operators and require system-scoped tokens, representing the role assignments a user has to operate on the deployment, such as system admins and cloud admins. 
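The difference between scopes shows up in the Identity (Keystone) v3 authentication request body: an unscoped request omits the scope object, while a project-scoped request names the target project. A sketch of the JSON payloads follows; the user, domain, project, and password values are placeholders, not real credentials.

```python
# Sketch of Keystone v3 auth request bodies (all values are placeholders).
password_identity = {
    "methods": ["password"],
    "password": {
        "user": {
            "name": "demo",
            "domain": {"id": "default"},
            "password": "secret",
        }
    },
}

# Unscoped: no "scope" key, so the resulting token only proves identity.
unscoped_request = {"auth": {"identity": password_identity}}

# Project-scoped: adds a scope naming the project to operate in.
project_scoped_request = {
    "auth": {
        "identity": password_identity,
        "scope": {
            "project": {"name": "demo-project", "domain": {"id": "default"}}
        },
    }
}

print("scope" in project_scoped_request["auth"])  # True
```

Either body would be POSTed to Keystone's /v3/auth/tokens endpoint; domain- and system-scoped requests follow the same pattern with a "domain" or "system" key inside "scope".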

The account reaper removes data from deleted accounts in the background. A reseller issues a delete request against the account's storage URL, which marks the account for deletion: the status column of the account_stat table in the account database is set to DELETED, indicating that the account's data will be removed later. There is no retention time or undelete option by default; the reseller decides when an account should be deleted. 

To protect accounts from mistaken delete requests, we can set a delay_reaping value in the [account-reaper] section of account-server.conf to delay the actual deletion of data. There is currently no undelete operation; to recover an account, we must update the account database replicas directly, setting the status column to an empty string and updating the put_timestamp to be greater than the delete_timestamp.
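The reaper's delay behaviour can be sketched as: an account marked DELETED is only reaped once delay_reaping seconds have elapsed since its delete timestamp. This is a simplified illustration of the idea, not Swift's actual reaper code; the function and record layout are invented for clarity.

```python
# Simplified sketch of the account-reaper delay check (not real Swift code).

def should_reap(account, now, delay_reaping=0):
    """Reap only accounts marked DELETED whose delay window has passed."""
    if account["status"] != "DELETED":
        # Active accounts (status is an empty string) are never reaped.
        return False
    return now >= account["delete_timestamp"] + delay_reaping

acct = {"status": "DELETED", "delete_timestamp": 1000.0}
one_week = 604800  # a delay_reaping of 7 days, in seconds

print(should_reap(acct, now=1100.0, delay_reaping=one_week))       # False: still protected
print(should_reap(acct, now=1000.0 + one_week, delay_reaping=one_week))  # True: window elapsed
```

This also mirrors the manual undelete described above: resetting status to an empty string makes the account invisible to the reaper again.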

A must-know for anyone preparing for advanced OpenStack interview questions; this one comes up frequently.

The Image service of OpenStack is called Glance, and it allows us to discover, register, and retrieve virtual machine images. Using its REST API, we can query the VM image metadata and retrieve the actual image. Virtual machine images can be stored in different locations, from simple file systems to object-storage systems like OpenStack Object Storage.

This service is central to Infrastructure-as-a-Service (IaaS), accepting API requests for disk or server images and metadata definitions from end users. Not only this, but we can also store the disk or server images on various repository types, including OpenStack Object Storage. 

OpenStack image services have different components- 

  • glance-api- accepts the API calls for discovering, retrieving, and storing images. 
  • glance-registry- stores, processes, and retrieves metadata about images. 
  • Database- stores the image metadata; we can use MySQL or SQLite. 
  • Storage repository for image files- various repository types are supported, including file systems, Object Storage, RADOS block devices, VMware datastore, and HTTP. 
  • Metadata definition service- a common API for vendors, admins, services, and users to meaningfully define custom metadata. 

Top OpenStack Tips and Tricks for Programmers

Tips-and-tricks questions are commonly used in Red Hat OpenStack interviews to analyze your understanding of OpenStack. So, we have summed up some points for you to consider. 

  • Store rarely used images in Glance as QCOW2, but store frequently used images in RAW format, which ensures better performance. In the end, the choice depends on the cloud admin and the OpenStack storage configuration. 

  • Nova automatically optimizes compute host properties, such as libvirt and KVM settings, for better execution of the guest OS. To provide the guest OS details to Nova, set the Glance image properties:  

os_type=linux # Generic name, like linux or windows 

os_distro=rhel7.1 # Use osinfo-query os to list supported variants 

To ensure use of the newer, more scalable virtio-scsi para-virtualized SCSI controller instead of the older virtio-blk, we must set the properties below explicitly: 

hw_scsi_model=virtio-scsi 

hw_disk_bus=scsi 

For early initialization of cloud instances, to configure basics like partition/filesystem size and SSH keys, the image must include the cloud-init package. 

When working with Linux guests, we should install and enable the QEMU guest agent, which allows graceful guest shutdown and (in the future) automatic freezing of guest filesystems when snapshots are requested. 

How to Prepare for an OpenStack Interview

Preparing for OpenStack requires cross-skilling and several prerequisites. 

  1. Linux- one of the most important prerequisites, as OpenStack is deployed on Linux environments. A good grasp of Linux makes running commands, changing configurations, debugging, and troubleshooting much easier. 
  2. Networking- the cloud is all about sending, receiving, and accessing data over the network, so we must have good knowledge of VLAN, VXLAN, GRE, L2/L3 networking, OVS, and Linux bridges. 
  3. Hypervisors- a basic understanding of hypervisors and how they work, for example KVM and ESXi. 
  4. Storage- a basic understanding of storage, RAID configurations, LUNs, and multipath configurations. 

To prepare for an OpenStack developer role, we must also know Python, as OpenStack is written almost entirely in Python.  

Apart from all this, below is a non-exhaustive list of what we must understand to ace OpenStack. 

  • Linux 
  • DevOps/automation tools (Puppet, Chef, and Ansible) 
  • CI/CD tools (Jenkins) 
  • Hypervisors (KVM/ESXi) 
  • SDN (Contrail/OpenDaylight) 
  • SDDC (Ceph) 
  • SCM/version control (GitHub, Gerrit, CVS) 
  • NFVs 
  • Containers (Docker, Mesos, Kubernetes) 

With the drafted answers, you can confidently answer the questions related to job positions like  

  • OpenStack operator 
  • OpenStack site reliability engineer 
  • OpenStack architect 
  • DevOps engineer 
  • CI/CD engineer 
  • Cloud software architect 

and others. 

Prepare with these OpenStack scenario-based interview questions and crack your scheduled interview at organizations like  

  • RedHat,  
  • Dell,  
  • HP,  
  • IBM,  
  • Cisco,  
  • Cloudscaling, etc. 

What to Expect in an OpenStack Interview

It is true that many companies are not deeply aware of OpenStack and its internal workings, except those dedicated to its development or those offering deployment solutions to others. If you are preparing for a company that builds a component or an API for OpenStack, you must be technically sound and understand how OpenStack works. If you are just starting, make sure you practice basic OpenStack interview questions to kickstart your preparation.

But if you are preparing for a company that provides support to others, or one that uses AWS or Azure but is willing to move to open source (OpenStack), make sure you have detailed knowledge of each OpenStack component and its internals. OpenStack troubleshooting questions will be your best bet. The average salary for an OpenStack engineer in the US is $122,983. To reach the higher salary brackets, consider the best online courses for DevOps.

Summary

Preparing for an interview is tough without proper guidelines: where to start, what to expect, and the different levels of questions. To help people willing to learn and implement OpenStack, we have created these detailed interview questions and answers so you can develop a deep understanding of OpenStack. To keep up with the competitive market and understand how the core cloud-computing services are handled, go through the best OpenStack courses.

With the growing scope and the need to innovate how we configure, deploy, and develop applications, OpenStack has become a necessity. With these skills still rare in the market, the role is in great demand and commands a high pay scale. To stand out, we need to keep upskilling ourselves. 

Whether you are a fresher or experienced and want to make a difference, go through the entire guide with dedication to understand the concepts. We have included simple commands that you can try in a terminal. We hope this guide boosts your curiosity to learn and explore more about OpenStack.

Happy learning! 
