After setting up a data container using the following command:
docker create -v /data --name datastore debian /bin/true
how is an additional new container started which shares the /data volume with the datastore container?
docker run --share-with datastore --name service debian bash
docker run -v datastore:/data --name service debian bash
docker run --volumes-from datastore --name service debian bash
docker run -v /data --name service debian bash
docker run --volume-backend datastore -v /data --name service debian bash
The correct way to start a new container that shares the /data volume with the datastore container is to use the --volumes-from flag. This flag mounts all volumes defined by the referenced container. In this case, the datastore container defines a volume mounted at /data, which becomes available in the service container at the same path. The other options are incorrect because they either use flags that do not exist, such as --share-with or --volume-backend, or they create new volumes instead of sharing the existing one, such as -v datastore:/data (which creates or reuses a named volume called datastore) or -v /data (which creates a new anonymous volume). References:
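The following sketch reproduces the setup and verifies that the volume is shared; the file name shared.txt is only illustrative:
docker create -v /data --name datastore debian /bin/true
docker run --volumes-from datastore --name service debian touch /data/shared.txt
docker run --rm --volumes-from datastore debian ls /data   # lists shared.txt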
Which of the following commands executes a command in a running LXC container?
lxc-attach
lxc-batch
lxc-run
lxc-enter
lxc-eval
The command lxc-attach is used to execute a command in a running LXC container. It starts a process inside the container and attaches to its standard input, output, and error streams. For example, the command lxc-attach -n mycontainer -- ls -lh /home lists all the files and directories in the /home directory of the container named mycontainer. The other options are not valid LXC commands: lxc-batch, lxc-run, lxc-enter, and lxc-eval do not exist in the LXC tool set. References:
Which of the following values would be valid in the FROM statement in a Dockerfile?
ubuntu:focal
docker://ubuntu:focal
registry:ubuntu:focal
file:/tmp/ubuntu/Dockerfile
The FROM statement in a Dockerfile specifies the base image from which the subsequent instructions are executed. The value of the FROM statement can be either an image name, an image name with a tag, or an image ID. The image name can be either a repository name or a repository name with a registry prefix. For example, ubuntu is a repository name, and docker.io/ubuntu is a repository name with a registry prefix. The tag is an optional identifier that can be used to specify a particular version or variant of an image. For example, ubuntu:focal refers to the image with the focal tag in the ubuntu repository. The image ID is a unique identifier that is automatically generated when an image is built or pulled. For example, sha256:9b0dafaadb1cd1d14e4db51bd0f4c0d56b6b551b2982b2b7c637ca143ad605d2 is an image ID.
Therefore, the only valid value in the FROM statement among the given options is ubuntu:focal, which is an image name with a tag. The other options are invalid because docker:// is a transport prefix that is not accepted in a Dockerfile, registry: is not how a registry is referenced (a registry is specified by its hostname, as in docker.io/ubuntu:focal), and file:/tmp/ubuntu/Dockerfile is a path to a Dockerfile rather than an image reference.
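A minimal sketch of a Dockerfile using such a FROM value, built with the standard docker build workflow; the image tag myapp:test is only illustrative:
cat > Dockerfile <<'EOF'
FROM ubuntu:focal
RUN apt-get update && apt-get install -y --no-install-recommends curl
CMD ["bash"]
EOF
docker build -t myapp:test .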
References:
Virtualization of which hardware component is facilitated by CPUs supporting nested page table extensions, such as Intel Extended Page Table (EPT) or AMD Rapid Virtualization Indexing (RVI)?
Memory
Network Interfaces
Host Bus Adapters
Hard Disks
IO Cache
Nested page table extensions, such as Intel Extended Page Table (EPT) or AMD Rapid Virtualization Indexing (RVI), are hardware features that facilitate the virtualization of memory. They allow the CPU to perform the translation of guest virtual addresses to host physical addresses in a single step, without the need for software-managed shadow page tables. This reduces the overhead and complexity of memory management for virtual machines, and improves their performance and isolation. Nested page table extensions do not directly affect the virtualization of other hardware components, such as network interfaces, host bus adapters, hard disks, or IO cache.
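Whether the host CPU provides these extensions can be checked from the flags in /proc/cpuinfo; Intel CPUs report ept and AMD CPUs report npt:
grep -oE 'ept|npt' /proc/cpuinfo | sort -u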
References:
FILL BLANK
What command is used to run a process in a new Linux namespace? (Specify ONLY the command without any path or parameters.)
unshare
The unshare command is used to run a process in a new Linux namespace. It takes one or more flags to specify which namespaces to create or unshare from the parent process. For example, to run a shell in a new mount, network, and PID namespace (as root, with --fork and --mount-proc added so that the shell becomes a child in the new PID namespace and sees a matching /proc), one can use:
unshare -m -n -p --fork --mount-proc /bin/bash
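Inside that shell, the isolation can be verified directly; both checks only make sense when run within the new namespaces:
ps aux    # shows only the new shell and ps, thanks to the fresh PID namespace and /proc
ip link   # shows only the loopback interface of the new network namespace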
References:
Which of the following values are valid in the firmware attribute of the <os> element in a libvirt domain definition? (Choose two.)
scsi
virtio
efi
bios
pcie
The firmware attribute of the <os> element in a libvirt domain definition selects the firmware used to boot the guest and enables libvirt's automatic firmware selection. The valid values are bios and efi: with efi, libvirt picks a suitable UEFI firmware image (such as OVMF) for the guest, while bios selects a traditional BIOS such as SeaBIOS. The other options are invalid because scsi, virtio, and pcie are device or bus types, not firmware types.
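A minimal sketch of how the attribute appears in the domain XML (for example via virsh edit); the architecture and machine type shown are assumptions:
<os firmware='efi'>
  <type arch='x86_64' machine='q35'>hvm</type>
</os>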
Which of the following commands lists all differences between the disk images vm1-snap.img and vm1.img?
virt-delta -a vm1-snap.img -A vm1.img
virt-cp-in -a vm1-snap.img -A vm1.img
virt-cmp -a vm1-snap.img -A vm1.img
virt-history -a vm1-snap.img -A vm1.img
virt-diff -a vm1-snap.img -A vm1.img
The virt-diff command-line tool can be used to list the differences between files in two virtual machines or disk images. The output shows the changes to a virtual machine’s disk images after it has been running. The command can also be used to show the difference between overlays. To specify two guests, you have to use the -a or -d option for the first guest, and the -A or -D option for the second guest. For example: virt-diff -a old.img -A new.img. Therefore, the correct command to list all differences between the disk images vm1-snap.img and vm1.img is: virt-diff -a vm1-snap.img -A vm1.img. The other options are not valid libguestfs tools: virt-delta, virt-cp-in, virt-cmp, and virt-history do not exist. References:
Which of the following tasks are part of a hypervisor’s responsibility? (Choose two.)
Create filesystems during the installation of new virtual machine guest operating systems.
Provide host-wide unique PIDs to the processes running inside the virtual machines in order to ease inter-process communication between virtual machines.
Map the resources of virtual machines to the resources of the host system.
Manage authentication to network services running inside a virtual machine.
Isolate the virtual machines and prevent unauthorized access to resources of other virtual machines.
A hypervisor is software that creates and runs virtual machines (VMs) by separating the operating system and resources from the physical hardware. One of the main tasks of a hypervisor is to map the resources of VMs to the resources of the host system, such as CPU, memory, disk, and network. This allows the hypervisor to allocate and manage the resources among multiple VMs and ensure that they run efficiently and independently. Another important task of a hypervisor is to isolate the VMs and prevent unauthorized access to resources of other VMs. This ensures the security and privacy of the VMs and their data, as well as the stability and performance of the host system. The hypervisor can use various techniques to isolate the VMs, such as virtual LANs, firewalls, encryption, and access control.
The other tasks listed are not part of a hypervisor’s responsibility, but rather of the guest operating system or the applications running inside the VM. A hypervisor does not create filesystems during the installation of new guest operating systems, as this is done by the installer of the guest operating system. It does not provide host-wide unique PIDs to the processes running inside the VMs, as process IDs are managed by each guest’s own kernel. It also does not manage authentication to network services running inside a VM, as this is handled by the network service itself or by a directory service such as LDAP or Active Directory. References:
In an IaaS cloud, what is a common method for provisioning new computing instances with an operating system and software?
Each new instance is connected to the installation media of a Linux distribution and provides access to the installer by logging in via SSH.
Each new instance is created based on an image file that contains the operating system as well as software and default configuration for a given purpose.
Each new instance is a clone of another currently running instance that includes all the software, data and state of the original instance.
Each new instance is connected via a VPN with the computer that started the provisioning and tries to PXE boot from that machine.
Each new instance contains a minimal live system running from a virtual CD as the basis from which the administrator deploys the target operating system.
In an IaaS cloud, the most common method for provisioning new computing instances is to use an image file that contains a pre-installed operating system and software. This image file is also known as a machine image, a virtual appliance, or a template. The image file can be customized for a specific purpose, such as a web server, a database server, or a development environment. The image file can be stored in a repository or a library that is accessible by the cloud provider or the user. When a new instance is requested, the cloud provider copies the image file to a virtual disk and attaches it to the instance. The instance then boots from the virtual disk and runs the operating system and software from the image file. This method is faster and more efficient than installing the operating system and software from scratch for each new instance. It also ensures consistency and reliability across multiple instances that use the same image file. References:
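As an illustration, provisioning from an image with the OpenStack command-line client looks roughly like the following; the image, flavor, and network names are assumptions specific to a given cloud:
openstack image list
openstack server create --image ubuntu-22.04 --flavor m1.small --network private web1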
What is the purpose of capabilities in the context of container virtualization?
Map potentially dangerous system calls to an emulation layer provided by the container virtualization.
Restrict the disk space a container can consume.
Enable memory deduplication to cache files which exist in multiple containers.
Allow regular users to start containers with elevated permissions.
Prevent processes from performing actions which might infringe the container.
Capabilities are a way of implementing fine-grained access control in Linux. They are a set of flags that define the privileges that a process can have. By default, a process inherits the capabilities of its parent, but some capabilities can be dropped or added by the process itself or by the kernel. In the context of container virtualization, capabilities are used to prevent processes from performing actions that might infringe the container, such as accessing the host’s devices, mounting filesystems, changing the system time, or killing other processes. Capabilities allow containers to run with a reduced set of privileges, enhancing the security and isolation of the container environment. For example, Docker uses a default set of capabilities that are granted to the processes running inside a container, and allows users to add or drop capabilities as needed. References:
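The effect can be observed by comparing the capability bounding set of a default container with one started with all capabilities dropped; the alpine image is assumed to be available locally or pullable:
docker run --rm alpine grep CapBnd /proc/1/status                 # default capability set
docker run --rm --cap-drop ALL alpine grep CapBnd /proc/1/status  # all bits cleared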
FILL BLANK
What LXC command starts a new process within a running LXC container? (Specify ONLY the command without any path or parameters.)
lxc-attach
The lxc-attach command allows the user to start a new process within a running LXC container. It takes the name of the container as an argument and optionally a command to execute inside the container. If no command is specified, it creates a new shell inside the container. For example, to list all the files in the home directory of a container named myContainer, one can use:
lxc-attach -n myContainer -- ls -lh /home
References:
Which of the following resources can be limited by libvirt for a KVM domain? (Choose two.)
Amount of CPU time
Size of available memory
File systems allowed in the domain
Number of running processes
Number of available files
Libvirt is a toolkit that provides a common API for managing different virtualization technologies, such as KVM, Xen, LXC, and others. Libvirt allows users to configure and control various aspects of a virtual machine (also called a domain), such as its CPU, memory, disk, network, and other resources. Among the resources that can be limited by libvirt for a KVM domain are the amount of CPU time, through CPU tuning parameters such as shares, quota, and period in the <cputune> element, and the size of available memory, through the <memory> and <currentMemory> elements and the limits in the <memtune> element.
The other resources listed in the question are not directly limited by libvirt for a KVM domain. File systems allowed in the domain are determined by the disk and filesystem devices that are attached to the domain, which can be configured in the domain XML file under the <devices> element. The number of running processes and the number of available files are controlled by the guest operating system’s kernel, not by libvirt.
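The same limits can also be adjusted at runtime with virsh; the domain name vm1 is an assumption:
virsh setmaxmem vm1 2G --config             # cap the maximum memory
virsh memtune vm1 --hard-limit 2097152      # hard memory limit in KiB
virsh schedinfo vm1 --set cpu_shares=512    # reduce the relative share of CPU time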
References:
If docker stack is to be used to run a Docker Compose file on a Docker Swarm, how are the images referenced in the Docker Compose configuration made available on the Swarm nodes?
docker stack builds the images locally and copies them to only those Swarm nodes which run the service.
docker stack passes the images to the Swarm master which distributes the images to all other Swarm nodes.
docker stack instructs the Swarm nodes to pull the images from a registry, although it does not upload the images to the registry.
docker stack transfers the image from its local Docker cache to each Swarm node.
docker stack triggers the build process for the images on all nodes of the Swarm.
Docker stack is a command that allows users to deploy and manage a stack of services on a Docker Swarm cluster. A stack is a group of interrelated services that share dependencies and can be orchestrated and scaled together. A stack is typically defined by a Compose file, which is a YAML file that describes the services, networks, volumes, and other resources of the stack. To use docker stack to run a Compose file on a Swarm, the user must first create and initialize a Swarm cluster, which is a group of machines (nodes) that are running the Docker Engine and are joined into a single entity. The Swarm cluster has one or more managers, which are responsible for maintaining the cluster state and orchestrating the services, and one or more workers, which are the nodes that run the services.
When the user runs docker stack deploy with a Compose file, the command parses the file and creates the services as specified. However, docker stack does not build or upload the images referenced in the Compose file to any registry. Instead, it instructs the Swarm nodes to pull the images from a registry, which can be the public Docker Hub or a private registry. The user must ensure that the images are available in the registry before deploying the stack, otherwise the deployment will fail. The user can use docker build and docker push commands to create and upload the images to the registry, or use an automated build service such as Docker Hub or GitHub Actions. The user must also make sure that the image names and tags in the Compose file match the ones in the registry, and that the Swarm nodes have access to the registry if it is private. By pulling the images from a registry, docker stack ensures that the Swarm nodes have the same and latest version of the images, and that the images are distributed across the cluster in an efficient way.
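A typical sketch of this workflow; the registry host, image name, and stack name are assumptions:
docker build -t registry.example.com/web:1.0 .
docker push registry.example.com/web:1.0
docker stack deploy --compose-file docker-compose.yml mystack   # the Swarm nodes pull the image from the registry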
The other options are not correct. Docker stack does not build the images locally or on the Swarm nodes, nor does it copy or transfer the images to the Swarm nodes. Docker stack also does not pass the images to the Swarm master, as this would create a bottleneck and a single point of failure. Docker stack relies on the registry as the source of truth for the images, and delegates the image pulling to the Swarm nodes. References:
Which of the following types of guest systems does Xen support? (Choose two.)
Foreign architecture guests (FA)
Paravirtualized guests (PV)
Emulated guests
Container virtualized guests
Fully virtualized guests
Xen supports two types of guest systems: paravirtualized guests (PV) and fully virtualized guests (HVM). PV guests run a kernel that is aware of the hypervisor and uses hypercalls instead of privileged instructions, so they do not require hardware virtualization extensions. HVM guests run unmodified operating systems and rely on hardware virtualization extensions such as Intel VT-x or AMD-V, with device emulation provided by QEMU.
The other options are not correct. Xen does not support foreign architecture guests, emulated guests, or container virtualized guests; containers are an operating-system-level virtualization technique, not a Xen guest type.
References:
Which of the following commands deletes all volumes which are not associated with a container?
docker volume cleanup
docker volume orphan -d
docker volume prune
docker volume vacuum
docker volume garbage-collect
The command that deletes all volumes which are not associated with a container is docker volume prune. This command removes all unused local volumes, which are those that are not referenced by any containers. By default, it only removes anonymous volumes, which are those that are not given a specific name when they are created. To remove both unused anonymous and named volumes, the --all or -a flag can be added to the command. The command will prompt for confirmation before deleting the volumes, unless the --force or -f flag is used to bypass the prompt. The command will also show the total reclaimed space after deleting the volumes.
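In practice the command is typically run as follows; note that the --all flag requires a reasonably recent Docker release:
docker volume prune            # prompts, then removes unused anonymous volumes
docker volume prune --all -f   # also removes unused named volumes, without prompting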
The other commands listed in the question are not valid Docker commands: docker volume cleanup, docker volume orphan, docker volume vacuum, and docker volume garbage-collect do not exist in the Docker CLI.
References: