The Storage Pool Manager plays a crucial role in managing storage domains within a data center. It ensures metadata integrity and provides mechanisms for automatic failover and prioritization, enhancing the reliability, performance, and manageability of the storage infrastructure. So let’s try to see what the Storage Pool Manager is used for. It is a role given to a host in the data center to manage its storage domains, and it’s a central role in storage management. The SPM handles all metadata operations for the storage domains, which ensures data consistency and integrity. The SPM role helps you in managing snapshots, handling virtual machine disk allocations, and performing storage-related administrative tasks. The manager moves the SPM role to a different host if the SPM host encounters problems accessing the storage, so there is an SPM failover mechanism. Automatic reassignment of the SPM role occurs: if the host currently assigned the SPM role encounters an issue, like losing access to the storage, or the OLVM manager loses communication with that host, then under those scenarios the OLVM manager will automatically move the SPM role to another suitable host. This failover mechanism ensures continuous availability and management of storage domains, reducing the risk of downtime due to storage access problems. Only one host can be the SPM in the data center at any one time, to ensure metadata integrity. The selection is influenced by the SPM priority. Each host in the data center can be assigned an SPM priority, which influences the likelihood of that host being selected for the SPM role. The SPM priority is a configurable setting inside your host management, so for every host, you can set the priority.
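To make the priority-based failover concrete, here is a tiny illustrative sketch, not OLVM source code; the numeric weights and the function name are my own assumptions, and only the low/normal/high/never levels come from the product:

```python
# Illustrative sketch (not OLVM code): how an engine could pick an SPM
# candidate from host priorities. Hosts set to "never" are excluded entirely.

# Hypothetical numeric weights; OLVM itself only exposes the named levels.
PRIORITY_WEIGHT = {"never": None, "low": 1, "normal": 5, "high": 10}

def elect_spm(hosts):
    """Return the eligible host with the highest SPM priority.

    `hosts` is a list of (name, priority, reachable) tuples; unreachable
    hosts are skipped, which models the automatic failover behavior.
    """
    candidates = [
        (PRIORITY_WEIGHT[prio], name)
        for name, prio, reachable in hosts
        if reachable and PRIORITY_WEIGHT[prio] is not None
    ]
    if not candidates:
        return None  # no eligible host: the data center is left without an SPM
    return max(candidates)[1]
```

For example, if the high-priority host loses storage access, `elect_spm([("h1", "high", False), ("h2", "normal", True), ("h3", "never", True)])` falls back to `"h2"`, since `h3` opted out with `never`.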
So SPM priority settings are defined at the host level. You can edit the host, and the likelihood of the host being assigned the SPM role depends on the priority setting that you have given to that particular host. A host with a high SPM priority will be assigned the SPM role before a host with a low SPM priority. In the next slide, we’ll see how to set the SPM priority manually. So let’s try to get into setting the SPM priority. You’ll edit the host: go to the host, edit it, and under the SPM properties of the host, you can define the SPM priority setting. The SPM priority can be configured as low, normal, or high. So SPM priorities in Oracle Linux Virtualization Manager help you define different levels. You have the SPM priority levels that can be assigned to your host: high, normal, or low. Or, if I don’t want to include this host as a candidate for the SPM role, I can set it to never. These priorities help you specify which host will get the SPM role: the host with the highest priority will always have the SPM role assigned to it. We have also got VDSM, which stands for Virtual Desktop and Server Manager. VDSM is a vital component that bridges the gap between the OLVM engine and the physical and virtual resources on the KVM host. You can manage and monitor the physical resources by using VDSM, and it also helps you get critical statistics and logs. VDSM ensures that operations like high availability and optimized performance of the virtual environments run smoothly. So let’s look at the Virtual Desktop and Server Manager. The VDSM service acts like an agent on the host.
So it manages and monitors your physical resources, and it manages and monitors the virtual machines running on the host. It is a daemon on the KVM host and communicates with the engine to help you manage and monitor the physical resources. For example, it helps with resource allocation: VDSM is responsible for managing the physical resources of the host, including the CPU, memory, storage, and network interfaces, and it makes sure that the resources are efficiently allocated to virtual machines as and when needed. It also does hardware monitoring, watching the physical health of the hardware. Checking for issues such as overheating, hardware failures, or performance bottlenecks helps in maintaining the reliability and performance of the host. It also goes in for optimization, where it optimizes the use of physical resources by dynamically adjusting allocations based on the current load and requirements of the virtual machines. It manages and monitors the virtual machines running on the host, and it implements the life cycle of your virtual machines. So basically, it handles the complete life cycle of the virtual machines on the host: creating, starting, stopping, pausing, and deleting virtual machines. It gives you performance monitoring of your virtual environments: it monitors the performance of running VMs, collecting data on CPU usage, memory consumption, and network activity, and this data can help you identify performance issues and optimize your VM operations. It is also responsible for resource scheduling: it schedules VM operations to ensure optimal performance and resource utilization. And it gathers statistics and collects logs. So basically, VDSM gathers a wide range of statistics and metrics from both the host and the VMs, including data related to performance, resource usage statistics, and operational metrics.
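The life-cycle handling described above can be pictured as a small state machine. This is a simplified sketch of my own, not VDSM’s actual state model, which has many more states; it only covers the create/start/stop/pause/delete transitions named in the text:

```python
# Illustrative sketch of the VM life cycle a host agent drives: a tiny
# state machine. Real VDSM states are much richer than this.

TRANSITIONS = {
    ("down", "start"): "up",
    ("up", "pause"): "paused",
    ("paused", "start"): "up",   # resuming a paused VM
    ("up", "stop"): "down",
    ("paused", "stop"): "down",
}

def apply_action(state, action):
    """Return the next VM state, or raise if the transition is invalid."""
    try:
        return TRANSITIONS[(state, action)]
    except KeyError:
        raise ValueError(f"cannot {action!r} a VM that is {state!r}")

state = "down"                       # a freshly created VM starts powered off
state = apply_action(state, "start")
state = apply_action(state, "pause")
assert state == "paused"
```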
It collects and manages the logs related to the host and VM operations, which are crucial for troubleshooting, auditing, and ensuring compliance with organizational policies. Now let’s get down to the virtual machines. OLVM, being based on the KVM hypervisor and leveraging the Oracle Linux distribution, integrates virtualization capabilities with management features to streamline virtual machine deployment, management, and performance optimization. The virtual machines can be created for either Linux or Windows operating systems. OLVM allows you to create virtual machines running various flavors of Linux, leveraging Oracle Linux or other compatible distributions. It also helps you create Windows-based virtual machines, providing flexibility for both Linux and Windows environments. It helps in cloning from templates: a virtual machine can be cloned from an existing template in the VM portal. So you can define a template, and you can clone multiple virtual machines using that template. You can import an Open Virtual Appliance (OVA) file into your environment, and you can also export OVA files from your environment. And it also gives you the option to configure multiple instance types, where you’ve also got some default instance types that are pre-allocated and created with the installation.
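Template cloning, as described above, essentially copies the template’s disks and settings into an independent VM. Here is a hedged, minimal model of that bookkeeping; the dictionary fields and function name are my own illustration, not the OLVM API:

```python
from copy import deepcopy

# Illustrative model (not the OLVM API): cloning a VM from a template
# copies the template's disks and settings into an independent machine.

def clone_from_template(template, vm_name):
    """Return a new VM record whose disks are deep copies of the template's."""
    return {
        "name": vm_name,
        "os": template["os"],
        "memory_mb": template["memory_mb"],
        # deep copy so later edits to the clone's disks don't touch the template
        "disks": deepcopy(template["disks"]),
    }

base = {"os": "Oracle Linux 9", "memory_mb": 4096,
        "disks": [{"name": "root", "size_gb": 50}]}
web1 = clone_from_template(base, "web1")
web1["disks"][0]["size_gb"] = 80          # resizing the clone's disk...
assert base["disks"][0]["size_gb"] == 50  # ...leaves the template intact
```

The deep copy is the point of the sketch: each clone gets its own disk images, so many VMs can be stamped out from one template without interfering with it.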
OLVM Hosts….
Let’s try to understand the hosts in OLVM. We have got a few major physical hosts that exist inside your Oracle Linux Virtualization Manager. The first one is the engine host. The engine host provides you with the administration tools. It’s the central management component: it runs the OLVM engine, and it provides the interfaces for administering and managing your virtual environments. The OLVM engine can be used for managing your clusters, hosts, storage, virtual machines, networking, and other aspects of the virtual infrastructure. And it provides features like reporting, enabling administrators to track the performance and health of the virtual environments. So the engine is itself a separate physical host on which you install the engine utility, which is used for managing the complete virtualized environment. Then we have got the KVM host, which is capable of hosting virtual machines. The engine registers the KVM hosts, which are the hosts actually capable of hosting your virtual machines. So virtual machine deployment, and the features for managing these virtual machines, are all done through your KVM host. It’s a physical machine that hosts the environment and is registered with your engine. And OLVM can manage multiple Oracle Linux KVM hosts: a single engine can manage multiple hosts. As we have seen earlier, a single engine can manage multiple data centers, each data center can have multiple clusters, and each cluster can have multiple hosts. So if you want to work with the hosts, these are the options which are available. If you look at your Administration Portal page, you go to Compute, and inside Compute, you’ll select Hosts, and you’ll get a list of the available hosts that have been configured under that particular engine. So you can see all the hosts.
And if you look at the table that describes the hosts available to you, it gives you the name of the host, the hostname or IP address, the cluster to which it belongs, and the data center to which it belongs. You can also view the virtual machines and the memory used by that particular host, that is, the active virtual machines and the memory being utilized. Other than that, to manage a host, we need to select the host and use the menu options or buttons provided: the New button to create or register a new host, Edit to edit an existing host, Installation to, for example, reinstall the host, Host Console to access the host console, and Copy Host Networks if we want to take the host’s network configuration and register it with some other host. The important button there is the Management button. The Management button helps you in managing the host, giving you options like: put the host into maintenance mode, activate the host if it’s inactive, refresh capabilities to update the information about the host, power management to restart, stop, or start the host, SSH management to restart or stop the host over SSH, Select as SPM to make this host take the Storage Pool Manager (SPM) role, and Configure Local Storage. So if you want to configure local storage on the KVM host, I can select the KVM host and make it a local storage host, but the host should be in maintenance mode. So to manage the host, we have got the Management button, through which multiple actions can be performed on the host. You’ll learn how to register a host in module number 4, where we’ll be talking about host installation. But once the host is registered, you get a detailed page for the host.
The detail page of the host describes multiple properties of that single host. It gives you the number of virtual machines running over it, the network interfaces configured, the host devices, permissions, affinity labels, errata, and the events. It also gives you the commands to manage the host from the details page: you have got the Edit button to edit your host, the Management button, the Installation button, and the Host Console button. So these are the buttons available for you to manage your host. Now coming down to the affinity labels. Last time, you saw the affinity rules that we can configure in the cluster, and we have also seen that we can configure affinity labels in the cluster, which can be used with the affinity rules. You can also have host affinity labels. Host affinity labels are used to influence the placement and behavior of virtual machines in relation to specific hosts within a cluster. These labels provide a way to group hosts together based on certain criteria and enforce policies that dictate how VMs interact with these groups. Host affinity labels are custom tags. Affinity labels help in organizing and managing hosts more efficiently by creating logical groupings. They are basically used to enforce policies that define how VMs should interact with the hosts. They improve resource utilization, and they support high availability and fault tolerance if you have configured the affinity labels.
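The grouping effect of affinity labels can be sketched as a simple filter: a VM carrying a label may only be placed on hosts that carry the same label. This is an illustrative model of my own; the host names and labels are hypothetical:

```python
# Illustrative sketch: affinity labels group hosts, and a labeled VM is
# restricted to hosts carrying every label the VM requires.

def eligible_hosts(vm_labels, host_labels):
    """Return hosts whose label set contains every label required by the VM."""
    required = set(vm_labels)
    return sorted(host for host, labels in host_labels.items()
                  if required <= set(labels))

hosts = {
    "kvm01": {"gpu", "ssd"},
    "kvm02": {"ssd"},
    "kvm03": set(),       # unlabeled host
}
# A VM labeled "gpu" may only be scheduled on hosts carrying that label.
assert eligible_hosts({"gpu"}, hosts) == ["kvm01"]
```

An unlabeled VM (empty label set) is eligible for every host, which matches the idea that labels constrain rather than enable placement.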
OLVM Clusters….
Now let’s get into the next component, which is called clusters. OLVM has got clusters, which provide a robust framework for managing virtualized environments, offering high availability, load balancing, and efficient resource management. Proper setup and management of clusters will ensure a stable and scalable virtualization infrastructure, capable of meeting the demands of modern enterprise workloads. By grouping the hosts into clusters, OLVM makes things simpler for administrators and enhances the capabilities of the virtualized environments. So when we look at the clusters, a cluster is basically a part of your data center. Inside a cluster, you might have multiple different KVM hosts configured, and they utilize the same network and the same storage so that they can work with mechanisms like virtual machine migration, and implement features for storage utilization, quality of service, affinity management, and fencing management. A lot of things can be done with these clusters and the hosts inside the cluster. So a data center can consist of multiple clusters, and a cluster can consist of multiple KVM hosts. And if you look at this, the hosts contain multiple virtual machines with guest agents installed on them, and each host interacts with the oVirt engine using the VDSM daemon. The hosts are connected to the shared storage so that they can implement a proper cluster mechanism inside the environment. So now let’s try to understand more about the clusters: what the clusters basically have, or what the properties of a cluster are. Clusters are logical groupings of hosts that share the same storage domains and have the same type of CPU. The idea here is that clusters are logical groups of hosts, the physical servers (also termed KVM hosts), that share common resources and configuration.
Hosts within a cluster share the same storage domains and network configuration. Each cluster in the system must belong to a data center, and each host in the system must belong to a cluster. So the idea is that the cluster is integrated with your data center: each cluster must belong to a data center within the OLVM environment, and each host in the system must be assigned to a specific cluster and, by extension, to a specific data center. Virtual machines are dynamically allocated to any host in a cluster and can be migrated between them, according to the policies defined on the cluster. So with dynamic allocation, virtual machines are allocated to any host inside the cluster based on the available resources and the policies that you have specified. Migration and allocation are governed by policies at the cluster level. You can define load balancing policies and high availability policies that help OLVM decide on which host a virtual machine should be started or created. The cluster is the highest level at which power and load-sharing policies can be defined. So at the cluster level, you can define power management policies and load-sharing policies. For example, I can configure a load balancing policy that distributes workloads evenly across hosts, optimizing resource usage and performance. With power management, power policies can manage the power state of the hosts to save energy during low-usage periods. The number of hosts and virtual machines that belong to a cluster are displayed in the results list under Host Count and VM Count. So monitoring can be done to understand how many hosts and how many VMs are in each cluster, using the Host Count and VM Count columns in the cluster listing.
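The load-balancing idea above can be sketched very simply: an even-distribution policy places a new VM on the host currently carrying the least load. This illustrative model, my own simplification, counts only VMs per host, whereas real cluster policies also weigh CPU and memory utilization:

```python
# Illustrative sketch of an "even distribution" scheduling policy:
# place a new VM on the host currently running the fewest VMs.
# Real OLVM policies also weigh CPU and memory load.

def pick_host(vm_counts):
    """Return the host with the lowest VM count (ties broken by name)."""
    return min(vm_counts, key=lambda host: (vm_counts[host], host))

cluster = {"kvm01": 12, "kvm02": 7, "kvm03": 7}
assert pick_host(cluster) == "kvm02"   # least loaded, first alphabetically
```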
And administrators can monitor resource usage, performance metrics, and overall cluster health through the OLVM interface. The default setup will create one default cluster in the default data center. If you want to see the clusters defined for you, or create new clusters inside the environment, then using the Administration Portal, you can go to the Compute menu and select Clusters. On the Clusters page, you can see the details of the default cluster that is assigned to you. Just concentrate on the Host Count column, which specifies how many hosts are inside the cluster, and VM Count, which specifies how many virtual machines belong to the cluster. If you want to create new clusters, edit an existing cluster, or upgrade a cluster, you can do that by using the buttons: New is for creating a new cluster, and Edit is for editing a cluster or upgrading a cluster from one release to another. So let’s see what I need to provide if I want to create a new cluster and what I should configure for it. If I’m creating a new cluster with the New button, it opens the New Cluster page for me. The New Cluster page involves configuring various settings to ensure that the cluster operates efficiently and meets the specific needs of your virtualized environments. These settings include general configuration like the CPU type, memory optimization, scheduling policies, high-availability architecture, load balancing, fencing, power management, optimization, and migration policies. A lot of configuration can be set on the New Cluster page when you’re creating the cluster; later, you can edit the cluster and refine these components. So now let’s look at the properties and the configuration options for the cluster.
Let’s talk about the first part, which is the general settings. In the general settings, we have got the data center in which you are creating the cluster, the name of the cluster, the description of the cluster, and the comments. For the name, you enter a unique name for the cluster inside the data center. The description is optional: you can provide a description for the cluster. Then you select the data center to which the cluster belongs, and you have got the compatibility version, which defines the compatibility features for your cluster. Then we have got the CPU settings: selecting the CPU architecture, the management network, and the CPU type. The CPU architecture selects the CPU type supported by all hosts in the cluster, and the CPU level defines a specific CPU model to ensure compatibility and optimal performance across all hosts. So you can specify what level of CPU you are trying to configure, whether it’s x86-64, ppc64, or s390x; these are the CPU architectures. And you can also configure the CPU type. But once you configure these, the hosts configured for your cluster must follow the same structure, so the same CPU type should apply across your cluster. Then we have got something which is called optimization. In optimization, we have got the memory overcommit feature, which allows the hosts to allocate more virtual memory than is physically available. So I can set up these optimization parameters. And I can also go with setting up the scheduling policies. The scheduling policies determine how VMs are distributed across the hosts in the cluster: it can be even distribution, power saving, or strict. These are the scheduling policies that can be configured.
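The memory overcommit setting mentioned above is just arithmetic: at a 150% ratio, the scheduler may promise up to 1.5 times the host’s physical RAM to VMs, on the assumption that guests rarely use all of it at once. A minimal sketch, with a function name of my own:

```python
# Illustrative arithmetic for memory overcommit: the ceiling is the host's
# physical RAM scaled by the cluster's overcommit percentage.

def can_place(host_ram_mb, committed_mb, vm_ram_mb, overcommit_pct=150):
    """True if the new VM fits under the cluster's overcommit ceiling."""
    ceiling = host_ram_mb * overcommit_pct / 100
    return committed_mb + vm_ram_mb <= ceiling

# 64 GiB host, 80 GiB already promised: a 16 GiB VM still fits at 150%...
assert can_place(65536, 81920, 16384) is True
# ...but not with overcommit disabled (100%).
assert can_place(65536, 81920, 16384, overcommit_pct=100) is False
```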
You can also define custom properties, which are any custom properties needed for those scheduling policies. Then we have got the fencing policy. The fencing policy enables fencing to isolate and restart a nonresponsive host, and fencing ensures that VMs are safely relocated to other hosts. You can set the time delay before fencing is initiated; by default, the time delay is set to 60 seconds. We have also got the optimization settings that specify how the workloads are handled. And we have got the next set of policies, which are called migration policies: the migration threshold sets the threshold for automatic migration of VMs to balance the load, and parallel migration defines the number of parallel migrations allowed, to prevent network congestion. So these are a few of the things that you can set when you are defining your new cluster. Once the cluster is created, you can get into the Cluster Detail page by highlighting the cluster. In the Cluster Detail page, we have got multiple options that you can look through. You have got the general properties, which give information about the cluster: the data center it belongs to, the compatibility version, the cluster ID allocated, the cluster CPU type, the chipset and firmware type that have been defined, the number of VMs inside the cluster, and the number of volumes that are up and down. So you can get these details from the General tab. The other components are the logical networks, which identify which logical networks your cluster is using; Hosts, which identifies the hosts that are part of this cluster; and Virtual Machines, which identifies the virtual machines that are part of the cluster.
Then you have got the affinity groups and affinity labels, which we’ll be talking about in the next slides, the CPU profiles, the permissions that are allocated, and the events that are happening on the cluster. So these are the detailed descriptions of things related to your cluster. Now, from the Cluster Detail page, you can come down to the affinity groups. Affinity groups provide a robust mechanism to control and optimize the placement of VMs within a cluster by defining positive and negative affinity rules and setting enforcement modes. So administrators can ensure high availability, optimize performance, manage resources efficiently, and comply with licensing requirements. The proper use of affinity groups enhances the overall reliability, performance, and manageability of your virtualized environments. So if you look at this, you have got the options for affinities, which are positive and negative affinity. You define a name for the affinity group, and you define a description for it. Then you have got your VM affinity rule and host affinity rule, each of which can be defined as positive affinity or negative affinity. Positive affinity ensures the VMs are placed on the same host; negative affinity ensures that VMs are placed on different hosts.
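The positive/negative rule in the last sentence can be captured in a few lines. This is an illustrative placement check of my own, not the engine’s enforcement code; the VM and host names are hypothetical:

```python
# Illustrative check of VM affinity rules: with positive affinity all VMs in
# the group must share a host; with negative affinity they must all differ.

def placement_ok(placement, group, positive):
    """`placement` maps VM name -> host; `group` lists the VMs in the rule."""
    hosts = [placement[vm] for vm in group]
    if positive:
        return len(set(hosts)) == 1        # everyone on the same host
    return len(set(hosts)) == len(hosts)   # everyone on a distinct host

placement = {"db1": "kvm01", "db2": "kvm01", "web1": "kvm02"}
assert placement_ok(placement, ["db1", "db2"], positive=True) is True
assert placement_ok(placement, ["db1", "db2"], positive=False) is False
```

Negative affinity like this is typically used to keep redundant VMs (say, two database replicas) off the same physical host, so one host failure cannot take down both.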
Storage Domain – OLVM
Now, let’s talk about the storage domains. In Oracle Linux Virtualization Manager, a storage domain is a fundamental concept that represents a logical storage container where various virtualization-related data is stored. Let’s try to understand how it works. First, storage domains have a common storage interface: a storage domain provides a standardized interface through which virtualization components interact with storage resources. This interface abstracts the underlying storage technologies, like NFS, GlusterFS, iSCSI, and FCP, and allows OLVM to manage and utilize storage resources uniformly. Storage domains contain complete images of templates, VMs, snapshots, and ISO files. So basically, a storage domain in OLVM will contain the images of various components: templates, which are pre-configured VM templates that can be used for rapid deployment of your virtual machines; VM disk images, which store the operating system, applications, and data; snapshots, which are point-in-time copies of your VM disks; and ISO files, which are disk images used for installing operating systems or applications on virtual machines. OLVM supports storage domains on block devices and on file systems. For block devices, it supports SAN, which is iSCSI or FCP; for file systems, it supports NAS, which is NFS or the Gluster file system. If I go with block devices, the SAN storage style, iSCSI uses the Internet Small Computer System Interface protocol to provide block-level access to storage volumes over your IP networks, and the Fibre Channel Protocol provides block-level access to storage volumes over Fibre Channel networks, offering high-speed, low-latency storage access. With the file system NAS type, network-attached storage, we have NFS, where storage is accessed over a network, sharing files and directories as if they were mounted locally.
And GlusterFS is a distributed file system that allows OLVM hosts to access storage distributed across multiple servers. Virtual machines must share the same storage domain to be migrated; again, storage domains help in migration and sharing. Virtual machines in OLVM can be migrated between hosts within the same data center. For a successful migration, the source and destination hosts must have access to the same storage domain where the VM disk image resides. Next, a data center must have at least one data domain, and data domains are not shared between data centers. So a data domain is a domain that is created for each data center, and it cannot be shared between data centers. Every data center must have at least one storage domain attached to it, and this ensures that there is a dedicated storage location for storing your VM data. So storage domains cannot be shared between different OLVM data centers, and each data center manages its own set of storage domains to maintain isolation and control over storage resources. Now, let’s talk about something called the Storage Pool Manager, also termed the SPM. The Storage Pool Manager plays a crucial role in managing storage domains within a data center. So let’s try to see how the Storage Pool Manager helps manage your storage domains. SPM is a management role assigned to one of the hosts in the data center. The role is assigned to one of the hosts within a data center by the OLVM engine, and that host acts as the designated SPM for that particular data center. It manages the storage domains of the data center. The SPM is responsible for managing the storage domains within the data center, which includes coordinating access to storage resources and ensuring the integrity of storage metadata across all storage domains.
And it also works with controlling access to storage by coordinating the metadata between the storage domains. The SPM controls access to the storage by controlling and coordinating the metadata between storage domains. The metadata includes information about virtual machines, disk images, templates, snapshots, and other virtualization-related data that is stored inside your storage domains. The host running as SPM can still host virtual resources. So it’s not like the host acting as SPM is only managing storage domains: while the SPM is primarily a storage-management role, the host running the SPM can still host virtual resources such as virtual machines and their applications. This ensures that the host contributes both to storage management and as a compute resource within the data center. Now, what happens if the host is affected? The engine assigns the role to another host if the SPM host becomes unavailable. So there is a failover mechanism that happens internally. If the host currently serving as the SPM becomes unavailable due to hardware failure, network issues, or other reasons, the OLVM engine automatically assigns the SPM role to another host within that particular data center. This failover mechanism ensures continuity in storage management operations while minimizing disruption to the virtualized environment. So basically, the internal failover mechanism is triggered when the SPM host becomes unavailable. Now, there’s something called a storage lease. Basically, a storage lease is a mechanism that enables virtual machines to maintain consistent access to storage across different hosts within a data center. Let’s try to see how it works. When you add a storage domain, a special volume is created, called the xlease volume, and virtual machines are able to acquire a lease on this special volume.
When you add a storage domain to OLVM, a volume called xlease is automatically created within the storage domain. This volume is designated for managing storage leases for VMs, and virtual machines are able to acquire a lease on this particular special volume. So virtual machines have the capability to acquire a lease on the xlease volume within their associated storage domain, and this lease mechanism ensures that a VM can maintain exclusive access to its required storage resources. A storage lease is configured automatically for a virtual machine when you select a storage domain to hold the VM lease. So when configuring a virtual machine and selecting a storage domain to hold its disk images and related resources, OLVM will automatically configure a storage lease for that VM. This lease allows the VM to start and operate on any host within that data center that has access to the storage domain holding the lease. It also enables mobility: storage leases support virtual machines being migrated to, and started on, other hosts. The lease ID and other information are sent from the SPM to...
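The exclusivity that the xlease volume provides can be modeled in a few lines. This is an illustrative sketch of my own, not the sanlock/xlease on-disk protocol: one lease slot per VM, and a slot held by one host cannot be acquired by another until it is released:

```python
# Illustrative model of the xlease volume: a lease grants exclusive access,
# so acquiring a slot already held by another host fails until it is freed.

class XLeaseVolume:
    def __init__(self):
        self._leases = {}  # vm_id -> host currently holding the lease

    def acquire(self, vm_id, host):
        """Grant the lease to `host` unless another host already holds it."""
        holder = self._leases.get(vm_id)
        if holder is not None and holder != host:
            return False
        self._leases[vm_id] = host
        return True

    def release(self, vm_id):
        self._leases.pop(vm_id, None)

vol = XLeaseVolume()
assert vol.acquire("vm-1", "kvm01") is True
assert vol.acquire("vm-1", "kvm02") is False  # exclusive: kvm01 holds it
vol.release("vm-1")
assert vol.acquire("vm-1", "kvm02") is True   # after release, another host may start the VM
```

This is what makes the lease useful for high availability: a VM can be restarted on any host that can reach the storage domain, but never on two hosts at once.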
Data Center – OLVM
It’s important to understand the topics of understanding data centers, understanding clusters, understanding hosts, and understanding storage domains. We’ll also cover understanding networks and understanding virtual machines. So we’ll get a basic idea of the different core components that make your Oracle Linux Virtualization Manager work, to implement the high-level virtualization and high-availability architectures and the management of quality of service. Let’s try to understand each of these topics as we go further. When we talk about the core components, your Oracle Linux Virtualization Manager is categorized into data centers, clusters, and hosts. So the main idea is that your virtualization environment consists of the main core components: the data center, the cluster, and the host. A data center can consist of multiple clusters, and clusters can consist of multiple hosts. The hosts here refer to KVM hosts. Inside a host, you can have multiple virtual machines, created depending on the availability of resources on that host. And if hosts are configured with cluster management, we can also go with clustered environments, where you can have migration of virtual machines and the implementation of fencing and other features, provided your cluster contains multiple properly configured hosts. With these, you also need to know the high-availability considerations, like how I can configure high-availability architectures inside my Oracle Linux Virtualization Manager. As I said, virtual machine migration is one of the features of a high-availability architecture, and that is possible with clusters and hosts configured properly. Then we have got to understand the networks: the logical networks, how these logical networks are configured, and what the components related to logical networks are.
And we are going to see the storage components, like storage domains, and understand what types of storage domains you can configure inside your Oracle Linux Virtualization Manager. At the end, we’ll also see a bit of event logging and notification. All these components that we are looking at here, like hosts, virtual machines, high-availability considerations, networks, and storage, we’ll be covering in individual chapters as we go further in the course. So let’s get started with the basic ideas behind the core components, their features, and their settings. The first thing is the data centers in Oracle Linux Virtualization Manager. A data center represents the highest organizational level within the OLVM hierarchy, encapsulating clusters, hosts, storage, and network configuration. So basically, a data center in OLVM is designed to provide a comprehensive and unified management framework for all virtualized resources within an organization. You can understand a data center as a logical grouping of components like clusters, hosts, storage domains, and networks. It serves as the primary container for organizing and managing resources, ensuring consistent policies, configuration, and resource allocation across the infrastructure. Inside the data center, you might have multiple clusters. The oVirt engine can handle multiple data centers; you can configure many data centers in the engine. And each data center can have multiple different sets of clusters and hosts. Each data center contains one or more clusters. Clusters are groups of hosts that share the same storage domains and network configuration. And inside a cluster, you have hosts, which are physical servers within the data center, organized into clusters. Each host within a data center must belong to a cluster, ensuring efficient resource management and virtual machine allocation.
Then, when we talk about the storage domains, the data center encapsulates storage domains, which are logical entities representing physical storage resources. They can be configured as NFS, iSCSI, Fibre Channel, or Gluster file system. So there are different types of storage domains that you can configure inside your Oracle Linux Virtualization Manager. Shared storage is storage that is shared between the components of your clusters. So all clusters and hosts within a data center share access to these storage domains, enabling virtual machine migration and centralized storage management. Different types of storage domains can be created, including data, ISO, and export domains, each serving a specific purpose inside your virtual environment. Other than this, the data center also consists of networks. Networks are defined at the data center level and applied to clusters and hosts, supporting various network topologies like VLANs and multi-interface configurations. So if you want to go with data center creation, you can do that by getting into your administration portal. Using the GUI, you will go to the menu called Compute, and inside Compute, you will select Data Centers, which will take you to the page for the data centers that are available inside your engine. When I look at the data center part, I can see that every installation has one default data center created for you. You can create other data centers and define properties for them by using the buttons listed there: the New button and the Edit button. The New button will help you create a new data center, and Edit will help you edit an existing data center. You can also remove data centers from the system by using the Remove button. So these are the different actions that you can perform from the Data Centers main page.
Now, let’s see what I need to consider when creating a new data center. When you click the New data center button, you get a window that pops up for defining the new data center properties. The new data center has to be provided with a name. The name of the data center is a text field with a 40-character limit, and it should be unique inside your engine. Then we have the other component, which is called the storage type. It is very important to define what kind of storage is managed under this particular data center, whether it’s a shared storage type or a local storage type. You can have a data center attached to local storage, which will not have cluster management or high-availability architectures; it only supports a single host and a single non-shareable, local cluster that is created for a local storage type data center. If I use shared, the shared storage domains can be anything related to iSCSI, NFS, FC (Fibre Channel), POSIX, or Gluster file system, and they can be added to the same data center. Local and shared domains, however, cannot be mixed. So again, at the time of creating the data center, you decide whether you want to create it with shared storage or as a local storage data center. Usually, you create a local storage data center if you are doing some kind of testing or creating virtual machines for test and development. But in production environments, for high-availability architectures, for implementing fencing, or for getting into QoS management and other high-availability features, we need to have a shared domain. Then we have the compatibility version. The compatibility version parameter defines the Red Hat Virtualization compatibility, that is, to what level the data center is compatible.
So that is what is defined by the compatibility version. Then we have the quota. The quota is a resource limitation tool provided with Red Hat Virtualization. The values for the quota mode are audit, disabled, or enforced; by default, it is audit. If I want to allow machines to utilize resources beyond the set limit, I can use the audit option, and I can have it logged to see whether I’m using more than the value that has been set as the quota. But if I enforce the quota, which is usually done in production environments, the systems are forced not to go beyond the quota that is allocated to them. And with disabled, you’re not controlling any quota limit. So you define these options at the data center level. We have also got something called quality of service. Quality of service in Oracle Linux Virtualization Manager is a powerful feature that allows administrators to manage and optimize the performance of virtual resources by prioritizing workloads, setting resource limits and guarantees, and implementing bandwidth management. Quality of service ensures that critical applications receive the necessary resources and that resource contention is minimized. Implementing QoS policies enhances the reliability, efficiency, and predictability of a virtualized environment, leading to better overall performance and user satisfaction. To configure the default QoS policies for storage, you go to the Data Centers page, select your data center, and in the data center details you have got QoS, Quality of Service. Then you can define the quality of service using the available options; for example, for storage quality of service, you have the New button there, with which you can create a new quality of service definition.
Applying storage quality of service to manage I/O operations per second (IOPS) and bandwidth for virtual machines ensures that high-performance storage needs are met without impacting the rest of the virtualization environment. One of the options here is the Storage option, where we create a new storage QoS: we define the QoS name and description, and then we define the properties of the quality of service definition. For example, to define IOPS limits, I can configure the total amount of IOPS, the amount of read IOPS, and the amount of write IOPS that you’re giving as a limitation for the environment. So you can set limits on the I/O operations per second or bandwidth for storage devices to control storage performance, and reserve a minimum number of IOPS for critical VMs to ensure they receive the necessary storage performance.
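The same storage QoS definition can also be expressed against the REST API that the engine exposes. The sketch below is a hedged illustration, not a definitive recipe: the QoS name, the limit values, the manager FQDN, and DC_ID are all hypothetical placeholders, and the element names follow the oVirt REST API's storage QoS type. The request is only printed, not executed.

```shell
# Hypothetical storage QoS payload; element names per the oVirt REST API
# (max_iops / max_read_iops / max_write_iops); values chosen for illustration.
payload='<qos>
  <name>db_storage_qos</name>
  <type>storage</type>
  <max_iops>1000</max_iops>
  <max_read_iops>600</max_read_iops>
  <max_write_iops>400</max_write_iops>
</qos>'
printf '%s\n' "$payload"

# The request itself, shown rather than run (DC_ID and the FQDN are placeholders):
echo 'curl -k -u admin@internal:PASSWORD -H "Content-Type: application/xml" \
  -X POST -d "$payload" https://manager.example.com/ovirt-engine/api/datacenters/DC_ID/qoss'
```

In the GUI, the same three limits appear as the total, read, and write IOPS fields of the New Storage QoS dialog described above.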
OLVM Engine Setup
So now let’s see the engine-setup command. The engine-setup command is used to configure and set up the Oracle Linux Virtualization Manager engine after the installation of the engine package is done. The command is part of the oVirt engine package and is crucial for initializing and configuring the OLVM environment. You run the engine-setup command on the host where you installed the manager, and you’ll enter yes to configure the manager. The first question it will ask you, since it’s an interactive command, is whether you want to install the engine on this host or not; you’ll select yes as the option. For the remaining configuration options, you can provide your own inputs or go with the default values. So it is giving you an interactive, question-based setup, dividing the configuration into multiple groupings. We’ll talk about these groupings as we go further in the slides. For now, you can answer the questions with default values, or you can provide the required parameter values as answers. Once you have answered all the questions, the setup will display a list of the values you entered, similar to a summary of what you selected in the questionnaire provided by the engine-setup command. You will accept that summary page, and it starts installing and configuring your engine. When the configuration is complete, details about how to log in to the administration portal are displayed. So it will display the login information for the administration portal after the installation is complete. Now, when the engine setup is happening, as I said, we have got multiple configuration groupings that are presented as questions.
So the questions are grouped into sets that are useful for configuring the different components of your Oracle Linux Virtualization Manager. The engine configuration options are categorized into groupings like database configuration. The database configuration gives you the options to configure the PostgreSQL database that OLVM uses to store its data. It prompts for database credentials, the database name, and other related settings. It can also set up and configure a local database instance, or give you access to a remote database if you want a remote database configured. Then we have network configuration. It sets up the network parameters, including the hostname and the IP address used by the OLVM engine. If you select yes for "automatically configure my firewall," it configures your firewall settings to allow traffic on the necessary ports, opening all the ports your Oracle Linux Virtualization Manager needs on the system. Then it goes into the administration user setup. It prompts for the creation of the administration user and a password for that user, configuring secure access to the OLVM web administration portal. The default username is admin, and the password is provided at the time of installation. Then we have the certificate and security configuration. It manages SSL certificate creation and configuration for secure communication. You can use a self-signed certificate, or import that self-signed certificate into your browser to have access with certificate encryption and authorization. It also gives you a category of service configuration, which sets up and enables the necessary services to start the OLVM services at boot time. We talked earlier about which services are available.
So these services are configured, and you can have your Oracle Linux Virtualization Manager services start at boot time. It configures the oVirt engine service and ensures it starts automatically and runs correctly. So these are the engine configuration options available for configuring your engine. Then, once you have configured the engine, the next thing you can do is use an alternate hostname. You’ll log in to your manager host as the root user and provide an alternate hostname, which is an additional hostname that can be used to access the OLVM engine. This feature is particularly useful in scenarios where the OLVM engine needs to be accessible via multiple network interfaces or with different hostnames. How will you do that? You log in to your manager as the root user, and then in the engine configuration directory you create a file called custom-sso-setup.conf. It might already be available there, in which case you can just edit that file and provide the alternate hostname; if it’s not available, you will create this file. In it you set SSO_ALTERNATE_ENGINE_FQDNS, where you provide the alternate fully qualified domain names for your engine machine. In the example, it has given two hostnames, alias1 and alias2. The list of alternate hostnames must be separated by spaces and enclosed in quotation marks. Then you will restart your Oracle Linux Virtualization Manager by running systemctl restart ovirt-engine, which restarts the service. Now, why would I want to do the alternate hostname configuration? It gives me enhanced accessibility: it provides multiple entry points to the OLVM engine, improving accessibility and user convenience. It gives you options to load balance: you can distribute load across different hostnames.
It gives increased flexibility: it allows network segmentation and tailored access based on different use cases and requirements. So these are the reasons why I would configure an alternate hostname, and configuring an alternate hostname is recommended by Oracle. Once you have done that, you will log in to your environment. To log in, you will navigate to your web browser and provide the address, which is the manager’s FQDN, the manager’s fully qualified domain name. If you have configured a port other than the default 443, provide it after a colon; otherwise, it will automatically default to port 443. The application that you’ll access is ovirt-engine. Then you can change the preferred language. You can view the administration portal in multiple languages, and you can change the language.
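The alternate-hostname step described above boils down to one small configuration file. Here is a minimal sketch, staged in /tmp so it can be inspected safely; alias1.example.com and alias2.example.com are placeholder names, and on a real manager the file would be placed in the engine configuration directory (assumed here to be /etc/ovirt-engine/engine.conf.d/) before restarting the service.

```shell
# Stage the custom-sso-setup.conf contents described in the lecture:
# space-separated FQDNs, enclosed in quotation marks.
conf=/tmp/custom-sso-setup.conf
cat > "$conf" <<'EOF'
SSO_ALTERNATE_ENGINE_FQDNS="alias1.example.com alias2.example.com"
EOF
cat "$conf"

# On the real manager host (as root), copy the file into place and then restart:
echo "systemctl restart ovirt-engine"
```

After the restart, the engine accepts logins on either alias in addition to its primary FQDN.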
OLVM Engine Firewall Requirements
When you run the engine-setup command, the program will automatically configure the firewall ports on the host. A proper firewall configuration is crucial for ensuring that the OLVM engine operates efficiently and securely. So we need to configure these ports and enable services inside the firewall, which is done automatically by the engine-setup command. But if you want to do the setup manually, there is a list of firewall services and ports that should be enabled as manual configurations inside your firewall. So let’s see what the basic services are that should be enabled inside your firewall. The table shows all the basic services, so let’s discuss each of them, starting with SSH access to the manager. You should enable the SSH port so that you can have passwordless authentication access to your host and your engine machine from your Linux environment. So for passwordless access, SSH should be enabled on your manager, which means port 22 should be enabled on the system. The next important ports and service you need to enable are for web interface and API access. This requires you to enable access to the web interface, that is, using your portal to access your environment; the administration portal is accessed using this web interface or API access. The ports used here are port 443 TCP and port 80 TCP, which are used for your web interface and REST APIs. This is crucial for accessing the management interface securely, and these ports should be enabled inside your firewall. The next thing is database communication. For database communication, port 5432 is the default port for PostgreSQL, which OLVM uses as its database backend.
This is where OLVM stores its configuration and operational data. So to allow access to this database, I need to enable port 5432 for the database. Then we have VDSM, Virtual Desktop and Server Manager, communication. For VDSM communication, the port that should be enabled for facilitating communication between the OLVM engine and the hosts is 54321, which is the default port, and also 54322, which is the secondary VDSM port that should be enabled in your firewall. Then we have the Simple Protocol for Independent Computing Environments, which is called SPICE. The SPICE ports are used for remote desktop connections to virtual machines using the SPICE protocol, which provides high-quality remote display capabilities; the range of ports is 5900 to 6923. These are the different sets of ports that can be enabled in your firewall. We have also got the storage domain ports, which are specific to the storage domains. For example, an NFS storage domain will have ports that must be enabled for accessing your NFS server and NFS-mounted devices. So you have got a different set of port numbers that should be enabled: basically, port 111, which should be enabled on both TCP and UDP and is used by the portmapper service, which maps your RPC program numbers to network addresses. You have got 32803, which is used by the nfs-mountd service. So there are multiple ports that should be enabled for your storage access. Then, for iSCSI storage domains, port 3260 TCP should be enabled for iSCSI target access. So these are some of the ports that should be enabled. And you can do that by specifying the commands manually with your firewall command-line utility: using firewall-cmd, you add the ports and permanently store them inside your firewall daemon.
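The ports discussed in this section can be opened with firewall-cmd. The sketch below is a dry run: it only prints the commands it would issue (remove the leading echo to apply them for real, which requires root and a running firewalld). The port list mirrors the lecture, with 111 added on both TCP and UDP for the portmapper and the SPICE range expressed as 5900-6923.

```shell
# Engine-side ports from the lecture: SSH, web/REST, PostgreSQL, VDSM, iSCSI,
# the NFS portmapper, and the SPICE display range.
ports="22/tcp 80/tcp 443/tcp 5432/tcp 54321/tcp 54322/tcp 3260/tcp 111/tcp 111/udp 5900-6923/tcp"

for p in $ports; do
  echo "sudo firewall-cmd --permanent --add-port=$p"
done
# Reload so the permanent rules take effect in the running configuration.
echo "sudo firewall-cmd --reload"
```

Using --permanent writes the rule to the persistent configuration, which is why a final --reload is needed for it to apply to the running firewall.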
So you add these ports and store them permanently by using the firewall-cmd command, which adds the ports to your firewall daemon and stores them permanently inside your environment. Next, enable the pki-deps and postgresql:13 AppStream modules. These two modules should be enabled on your Linux environment prior to installing the OLVM engine on the host machine. Basically, PKI is a system used for managing digital certificates, public key encryption, and related security elements. It typically involves components such as a certificate authority, certificate signing requests, certificate revocation lists, and public and private keys. So the PKI dependencies, or pki-deps, that you see there in OLVM generally refer to the necessary packages and libraries required to support PKI operations. PostgreSQL 13 is a version of the PostgreSQL relational database management system, an open-source database system that can be installed in your environment. So these are the two modules that should be enabled for your Virtualization Manager. The next set covers the modules that should be enabled, and the modules that should be disabled, for installing your Oracle Linux Virtualization engine: you need to disable the virt:ol module and enable virt:kvm_utils2. The KVM utils module is the utility package related to the kernel-based virtual machine, the virtualization module in the Linux kernel that allows the kernel to function as a hypervisor. You’re not working with virt:ol because, from version 4.4, the new versions work with kvm_utils2, so you need to enable kvm_utils2. The other component is enabling the ol8_baseos_latest repository, which gives you access to the latest base operating system packages in your environment. Then we have got the oVirt release 8 package.
So I need to install the oVirt release package, oracle-ovirt-release-el8, a package for Oracle Linux 8 (Enterprise Linux 8) that provides access to oVirt, a free, open-source virtualization management platform. The Oracle version of oVirt includes enhancements and integrations specific to Oracle Linux. So these are the components you should enable or disable on your Linux node, which will be acting as the Linux Virtualization Manager node. The next step is to make sure that the required repositories are enabled. Make sure these Virtualization Manager repositories are enabled, as they are essential for setting up and managing your virtualization environment. They provide the necessary packages and updates for virtualization platforms like libvirt, QEMU, and oVirt. So the repositories for Linux managers host the packages that facilitate the installation and management of virtualization platforms and tools. If a required repository is not enabled, you can enable it by using the dnf config-manager command with the --set-enabled option and the repository name; or, if extra repositories are enabled, you can disable them with the --set-disabled option of the same command.
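Put together, the module and repository steps above look roughly like the following. This is shown as a dry run that only prints the commands; the module streams and package name are taken from the lecture, so verify them against the Oracle documentation for your release before applying.

```shell
# Module, repository, and release-package steps from this section
# (names per the lecture; run the printed commands as root to apply).
cmds=$(cat <<'EOF'
dnf module disable -y virt:ol
dnf module enable -y virt:kvm_utils2 pki-deps postgresql:13
dnf config-manager --set-enabled ol8_baseos_latest
dnf install -y oracle-ovirt-release-el8
EOF
)
printf '%s\n' "$cmds"
```

An unwanted repository is turned off the same way, with dnf config-manager --set-disabled and the repository name.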
OLVM Engine Host Prerequisites
So let’s understand the prerequisites that are needed for engine installation. Before I can install an engine, I should have a host machine preconfigured with a minimal install of Oracle Linux 8.5, running Unbreakable Enterprise Kernel Release 6, Unbreakable Enterprise Kernel Release 7, or the Red Hat Compatible Kernel. So these are the basic requirements for the machine that will host your Oracle Linux Virtualization Manager engine. The other important thing is the processor. The processor should be a 64-bit x86 CPU with hardware virtualization support enabled, supporting Intel VT-x or AMD-V. These technologies are essential for effectively managing your virtual machines. There are other requirements, like memory, disk space, and network requirements, which we’ll discuss as we go further in the module. The other important thing is that your Linux operating system should be enabled with the following channels, which are listed in the table: BaseOS Latest, AppStream, KVM AppStream, oVirt 4.4, oVirt 4.4 Extras, and the Gluster AppStream. So these are the required channels that should be enabled for your Oracle Linux Virtualization Manager. And for configuring VDSM on the host, the host needs one extra channel enabled, which is called UEKR7. So these are the prerequisites for installing your engine, and based on them, you can continue the installation. But again, there is something called deployment sizes. For different deployments, Oracle has categorized different sizes that you should follow as recommended by Oracle. So let’s try to see why we require different deployment sizes and why the deployment size matters.
When we talk about deployment size, the first thing that comes into the picture is resource allocation. Deployment size matters for resource allocation: matching the deployment size to your needs makes sure that you allocate resources efficiently. A small firm will avoid wasting money on excess hardware, while a large institution will have the resources it needs for peak performance. So depending on the type of organization and the type of deployment you’re doing, you will allocate the resources accordingly. Then we have the next point, which is performance optimization. Each deployment size helps optimize performance based on your workload. It can be a small deployment, which might focus on testing environments, or a large deployment geared toward high-demand applications like transaction processing or query-heavy workloads. So depending on the performance requirements, you decide which deployment size you want to use. The other consideration that is important for deciding on deployment size is scalability: knowing your deployment size helps you plan for future growth. The next important point is cost management. Choosing the right size helps you manage cost: small deployments keep expenses low, avoiding unnecessary hardware and operational costs, while large deployments can justify the higher cost with the need for higher performance and reliability. So these are the four characteristics that define the best-suited deployment size to select. As we go further in the slides, we’ll discuss the recommendations and the minimal requirements for these different deployment sizes. If you look at the hardware requirements, Oracle has categorized them based on the sizes.
So, based on the deployment sizes, there are three size categories: small, medium, and large. So we have got three deployment sizes: small deployment, medium deployment, and large deployment. Small deployments are ideal for test environments, small businesses, or departments within a large organization that want to use virtualization; they provide a cost-effective solution for managing a limited number of virtual machines. Medium deployments are suitable for medium-sized businesses or large departments within organizations. This size balances cost and performance, providing resources for a moderate number of virtual machines and hosts. Then we have the large deployments, which are designed for enterprise environments: enterprise-level or production-level deployments with extensive virtualization needs are configured using the large deployment size. So based on this, we have got different sets of hardware requirements, and you can decide what level of deployment you want: small, medium, or large. Now, let’s look at each of these individually and see where they are useful. We have got the small deployment. This particular deployment is useful for setups with 1 to 5 KVM hosts and up to 50 virtual machines. You will need about 16 gigabytes of memory, with four virtual CPUs and 50 GB of disk space, to configure it. So these are the recommendations for a small deployment. The minimum requirement is less than the recommended values: a 64-bit two-core CPU, 4 GB of memory, and 25 GB of local writable hard disk space.
That is enough as the minimum for a small deployment. But the recommended, or good, sizing is to use four cores, 16 GB or greater of memory, and 50 GB or greater of local writable hard disk. Now, let’s see the use case for this small deployment. Imagine you’re running a small software development firm with around 10 developers. They need a virtualized environment for testing their applications on different operating systems. A small deployment would be perfect: you might start with three hosts, each powerful enough to run multiple virtual machines, and set up around 30 virtual machines for various testing environments. So this kind of scenario is a good fit for the small deployment size. Now, let’s get into the next level, which is the medium level. A medium-level deployment is used for setups with 5 to 50 hosts, and it can take from 50 up to 500 virtual machines. The requirements here are increased: an eight-core CPU, 32 GB or greater of available RAM, and 100 GB or greater of local writable disk. For a use case, think about a medium-sized healthcare organization. They need a robust virtualization environment to handle patient data and various departmental applications, so a medium deployment would fit well. You might have 20 hosts spread across departments, if you’re having different departments: a billing department, patient records, a radiology or X-ray department. So you can have the deployment spread across these departments, accommodating up to 500 virtual machines for different applications and services. So a medium deployment can be used in these types of scenarios, where the utilization is at a medium level. Let’s talk about the next deployment size, which is called the large deployment size.
This is for setups with 50 to 200 hosts and roughly 500 to 2,000 virtual machines. The recommendation is 64 GB or greater of available system RAM, 16 cores or greater of CPU, and at least 200 GB of writable disk space. So this becomes your large deployment, and per Oracle’s recommendation, if you are using these options, it is termed a large deployment. Consider a large financial institution. They need a highly reliable and scalable virtualized environment for their trading platforms, customer databases, and internal services. A large deployment here might involve 100 hosts distributed across multiple data centers for redundancy and high-availability architectures, supporting around, let’s say, 1,500 virtual machines running various critical applications. So these are the different deployment sizes and where you can utilize them. Depending on your needs and requirements, you can decide whether you want to go with a small, medium, or large deployment.
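The three tiers above can be summarized in a small helper. This is a hypothetical function, not an Oracle tool: it simply maps planned host and VM counts to the sizing tier discussed in this section, using the cut-offs quoted above.

```shell
# Map planned host/VM counts to the sizing tiers from this section:
# small  = up to 5 hosts and up to 50 VMs,
# medium = up to 50 hosts and up to 500 VMs,
# anything beyond that = large.
deployment_size() {
  local hosts=$1 vms=$2
  if [ "$hosts" -le 5 ] && [ "$vms" -le 50 ]; then
    echo small
  elif [ "$hosts" -le 50 ] && [ "$vms" -le 500 ]; then
    echo medium
  else
    echo large
  fi
}

deployment_size 3 30      # the 10-developer firm from the example
deployment_size 20 400    # the healthcare organization
deployment_size 100 1500  # the financial institution
```

The memory, CPU, and disk recommendations scale with the same tiers, so once the tier is known, the hardware minimums and recommendations follow from the tables discussed above.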
OLVM Administration Consoles
In the world of Oracle Linux Virtualization Manager, three essential portals stand ready to assist you in every aspect of managing your virtual environment: the administration portal, the VM portal, and the monitoring portal. Let's start with the administration portal. Picture the administration portal as the nerve center of your virtual infrastructure. Accessible through any web browser, it gives administrators the power tools to oversee, create, and maintain every component of their virtual ecosystem. Within this portal, you can create and manage virtual infrastructure: from defining intricate network configurations to managing storage domains, administrators have full control over the foundational elements that support their virtual environment. Next is installation and management of hosts: whether it's setting up new hosts or fine-tuning existing ones, administrators can efficiently manage the entire life cycle of their hosts, ensuring they operate at peak performance. Then comes creation and management of logical entities: by creating and managing data centers and clusters, administrators can organize resources effectively, optimizing resource allocation and enhancing scalability. There is also creation and management of virtual machines: from creation to fine-tuning, administrators can manage every aspect of their virtual machines, tailoring them to meet specific workload requirements. Finally, user and permission management: ensuring secure access and proper delegation of responsibilities is crucial, and through the administration portal administrators can manage user accounts and permissions with ease, maintaining a robust security posture. Now let's talk about the next portal, which is the VM portal. The VM portal is specially designed to cater to users who primarily engage with virtual machines within the Oracle Linux Virtualization Manager environment.
It serves as a user-friendly interface, offering a seamless experience for accessing virtual machines and conducting fundamental management tasks, such as creating, editing, and removing virtual machines, or stopping, starting, and migrating them, all with minimal effort. Within the VM portal, users are greeted with a comprehensive overview of their virtual machines. This dashboard-like interface allows users to perform various actions, giving them a holistic view of, and control over, their virtualized assets. Users can initiate a range of operations, including starting, stopping, editing, configuring, and accessing detailed information about each virtual machine. The capabilities available to users within the VM portal are determined and managed by system administrators. Administrators have the authority to delegate additional administrative tasks to users based on their roles and responsibilities. These delegated tasks may include creating, modifying, or removing virtual machines, allowing users to tailor their virtualized environments to specific requirements; managing virtual disks and network interfaces, enabling users to configure storage and networking according to their needs; and leveraging snapshots to create point-in-time backups of virtual machines, facilitating quick recovery and rollback to previous states in case of unforeseen issues. Furthermore, the VM portal facilitates direct connections to virtual machines through console clients, which provide users with a familiar desktop-like environment and enable seamless interaction with their virtual machines. The choice of protocol for connecting to a virtual machine, VNC or SPICE, is determined by the administrators during the virtual machine creation process. Now let's talk about the next type of portal, which is the monitoring portal.
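The start/stop operations the VM portal exposes can also be driven programmatically through the oVirt Python SDK (ovirtsdk4), which oVirt-based engines such as OLVM accept. This is a hedged sketch: the engine URL, credentials, CA file path, and VM name are all placeholders, and you should verify the SDK version shipped for your OLVM release.

```python
# Hedged sketch of scripting a VM-portal-style action with the oVirt Python
# SDK. All parameter values are placeholders supplied by the caller.

def start_vm(engine_url, user, password, ca_file, vm_name):
    import ovirtsdk4 as sdk  # imported here so the sketch loads without the SDK installed

    connection = sdk.Connection(url=engine_url, username=user,
                                password=password, ca_file=ca_file)
    try:
        vms_service = connection.system_service().vms_service()
        # Look the VM up by name, then issue the same action as the
        # portal's Run button.
        vm = vms_service.list(search="name={}".format(vm_name))[0]
        vms_service.vm_service(vm.id).start()
    finally:
        connection.close()
```

A call would look like `start_vm("https://engine.example.com/ovirt-engine/api", "admin@internal", "secret", "ca.pem", "testvm")`, where every argument is specific to your own deployment.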
The monitoring portal is a powerful tool within Oracle Linux Virtualization Manager that equips administrators with comprehensive monitoring capabilities. With a suite of advanced tools and visualizations at their disposal, administrators can effectively monitor the health and performance of their virtualized infrastructure by closely tracking key metrics and swiftly identifying potential issues. Armed with that insight, administrators can make informed decisions to optimize performance and reliability, ensuring smooth operations across the board. Furthermore, Oracle Linux Virtualization Manager offers enhanced reporting and monitoring through seamless integration with Grafana, a leading open source analytics platform. This integration gives administrators access to the wealth of insightful data stored in the engine data warehouse. With Grafana, administrators can create customized dashboards tailored to their specific monitoring needs. These custom dashboards act as a centralized view, offering real-time visibility into critical resources and performance metrics across the virtualized environment. Administrators can effortlessly track key indicators, for example CPU utilization, memory usage, storage capacity, or network throughput, enabling proactive monitoring and rapid response to potential issues. Grafana's intuitive interface and robust visualization tools allow administrators to craft dashboards that provide comprehensive insights at a glance, whether that means monitoring the health of individual hosts, tracking the performance of virtual machines, or analyzing trends over time. So the Grafana integration in Oracle Linux Virtualization Manager empowers administrators to make data-driven decisions and ensure optimal operation of their virtual infrastructure. Next is the Cockpit web interface, a valuable tool that lets users monitor the resources of a KVM host and perform various administrative tasks.
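The kind of threshold-based alerting a Grafana dashboard encodes can be sketched in a few lines. This is purely illustrative: the metric names and threshold values below are hypothetical and do not come from the OLVM or Grafana schema.

```python
# Hypothetical metric samples and thresholds, illustrating the kinds of
# indicators (CPU, memory, storage) the monitoring portal surfaces. None of
# these names are taken from the actual OLVM data-warehouse schema.

THRESHOLDS = {"cpu_pct": 85.0, "mem_pct": 90.0, "storage_pct": 80.0}

def check_host(metrics):
    """Return (metric, value) pairs that exceed their configured threshold."""
    return [(name, value) for name, value in metrics.items()
            if value > THRESHOLDS.get(name, 100.0)]

# A host running hot on CPU and storage but fine on memory:
alerts = check_host({"cpu_pct": 92.5, "mem_pct": 40.0, "storage_pct": 81.0})
print(alerts)  # -> [('cpu_pct', 92.5), ('storage_pct', 81.0)]
```

In practice Grafana evaluates such rules against the engine data warehouse for you; the sketch only shows the shape of the decision.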
Cockpit needs to be installed and activated separately to leverage its functionality. Once installed, users can reach the Cockpit web interface in multiple ways: through the administration portal or by establishing a direct connection to the host. By utilizing the Cockpit web interface, users gain insight into crucial metrics such as CPU usage, memory utilization, disk space, and network activity of the KVM host. This real-time monitoring capability allows administrators to stay informed about the health and performance of their virtualized environment. The flexibility of accessing Cockpit from either the administration portal or a direct connection to the host offers users convenience and accessibility. Whether users prefer a centralized management approach through the administration portal or a direct hands-on approach by connecting to the host, Cockpit provides a seamless experience for monitoring and managing KVM hosts in Oracle Linux Virtualization Manager. Now let's move on to understanding the virtual machine consoles. You have two options for providing graphical consoles to your virtual machines: Virtual Network Computing, also called VNC, and the Remote Desktop Protocol, or RDP. These consoles allow you to work and interact directly with your virtual machines, just as you would with physical machines. If you opt for VNC, you can access the console using either the Remote Viewer application or a VNC client. To use a locally installed Remote Viewer application, you can install it via your package manager or download it from the Virtual Machine Manager. And for browser-based console clients, it is important that the certificate authority's certificate is installed in your browser.
You can get the certificate authority certificate by navigating to your engine's certificate address, or you can download it from your administration portal login page. RDP, the Remote Desktop Protocol, is available exclusively for Windows environments. To use RDP, you need to access virtual machines from Windows machines with the Microsoft Remote Desktop application installed. Additionally, you must set up remote sharing on the virtual machine and configure the firewall to allow Remote Desktop connections before connecting to a Windows virtual machine using RDP.
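The engine's certificate address mentioned above can be constructed programmatically. As an assumption to verify against your own deployment: the `pki-resource` path below is the one used by oVirt-based engines, which OLVM builds on, and the hostname in the example is a placeholder.

```python
# Hedged sketch: building the CA-certificate download URL for an oVirt-based
# engine. The pki-resource path is assumed from the oVirt engine layout that
# OLVM inherits; confirm it against your own engine before relying on it.

def ca_cert_url(engine_fqdn):
    """Return the URL from which the engine serves its CA certificate (PEM)."""
    return ("https://{}/ovirt-engine/services/pki-resource"
            "?resource=ca-certificate&format=X509-PEM-CA").format(engine_fqdn)

print(ca_cert_url("olvm.example.com"))  # placeholder hostname
```

You would fetch that URL with a browser (or `curl`) and import the resulting PEM file into the browser's certificate store for the browser-based console clients.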
OLVM Databases
Let’s try to understand the databases provided inside Oracle Linux Virtualization Manager. There are two PostgreSQL databases in play. The first one, named engine, is created as part of the engine configuration process. And if you choose to install the ovirt-engine-dwh package, a second database called ovirt_engine_history is created. The engine database is where the persistent information about the Oracle Linux Virtualization Manager environment is stored, including details about its configuration and current state. The ovirt_engine_history database, on the other hand, serves as a management history database. It continuously collects historical configuration information and statistical metrics for data centers, clusters, and hosts, updating them every minute. This data can be accessed by any application that needs historical insights into your virtualization environment. Now, here is an interesting feature: both the engine and history databases can run on a remote host. This helps reduce the load on the engine host, enhancing performance and scalability. But remember, it's essential to note that running these databases on a remote host is currently a technology preview feature, meaning it's still under development and may have limitations.
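Because ovirt_engine_history is an ordinary PostgreSQL database, any application can query it, for example with the `psycopg2` driver. This is a hedged sketch: the table and column names (`host_samples_history`, `cpu_usage_percent`, `memory_usage_percent`, `history_datetime`) follow the oVirt data-warehouse schema as commonly documented, so verify them against your installed ovirt-engine-dwh version, and the DSN in the comment is a placeholder.

```python
# Hedged sketch of reading host statistics from the ovirt_engine_history
# database. Table/column names are assumptions drawn from the oVirt DWH
# schema; check them against your own installation.

QUERY = """
SELECT host_id, history_datetime, cpu_usage_percent, memory_usage_percent
FROM host_samples_history
WHERE history_datetime > now() - interval '1 hour'
ORDER BY history_datetime DESC;
"""

def fetch_recent_host_samples(dsn):
    """Run QUERY against the history database identified by dsn."""
    import psycopg2  # imported here so the sketch loads without the driver installed

    # dsn example (placeholder values):
    # "dbname=ovirt_engine_history user=ovirt_engine_history host=engine.example.com"
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            cur.execute(QUERY)
            return cur.fetchall()
```

Running the same query from `psql` on the database host works equally well; the Python wrapper is only a convenience for external applications.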