Let’s understand what a load balancer is. Say, for example, there is a client, and in the middle there is an entity, the load balancer, and behind it there are different servers. The client sends a request, so you can think of the load balancer as one entry point. It sits in between, and it distributes the incoming traffic to multiple backends. So that means there are multiple servers, or multiple backends, and the load balancer distributes the incoming traffic across them. Now, what are the benefits associated with a load balancer? The first benefit is scaling. At any point of time, you can increase the number of backend servers. Let’s say there were three servers; I can make it four. So that flexibility is there in case of load balancing. The second benefit is resource utilization. The load balancer distributes the traffic to different backends based on load balancing policies, so it ensures that these resources are properly utilized. And the third benefit is that a load balancer gives you high availability. Why? Because there are multiple backends, multiple servers, behind the load balancer. Even if one of the servers becomes unhealthy, the load balancer continues distributing the traffic to the other servers, and that is how it ensures high availability. So now, let’s discuss the types of load balancers. There are two types: the public load balancer and the private load balancer. As the name suggests, a public load balancer has a public IP and is reachable from the internet. A private load balancer has a private IP, and that IP address comes from the hosting subnet, the subnet inside which the private load balancer resides. It is only visible from within the Virtual Cloud Network.
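The distribution step described above can be sketched in a few lines of shell. This is only an illustration of a round-robin policy; the backend names are placeholders, and a real load balancer selects backends according to whichever policy you configure:

```shell
# Minimal round-robin selection sketch; backend names are placeholders.
counter=0
next_backend() {
  # Cycle through the backends in order, one per incoming request.
  set -- backend1 backend2 backend3
  i=$(( counter % $# + 1 ))
  eval "PICK=\$$i"
  counter=$(( counter + 1 ))
}
```

Each call picks the next backend in turn; adding a server is just adding a name to the list, which is the scaling benefit described above.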
Now let’s talk about the load balancer concepts. There are a total of nine concepts, and I will discuss them one by one. The first concept is that of backend servers. So what is a backend server? At the very start, I mentioned that the load balancer distributes the incoming traffic, which can be TCP or HTTP traffic, to multiple servers placed in the backend. A backend server is one of those servers: the load balancer forwards the traffic to it, and ultimately the backend server is responsible for generating the content. The second concept is the backend set. A backend set is nothing but a logical entity, and you can think of it as a list of backend servers, for example backend server 1 and backend server 2. Along with the list of backend servers, the health check policy as well as the load balancing policy (we are going to look at both these terms) are also included. That means the backend set is defined by the list of backend servers plus the health check policy plus the load balancing policy. The third concept is the health check policy. Now what is this health check policy? It is simply a test. As I mentioned, the load balancer distributes each incoming request to different backends; let’s take the example of backend server 1. This test confirms whether that backend server is available. If the backend server fails the test, the load balancer takes it out of rotation. And there are two levels at which you can conduct the health check: the TCP level and the HTTP level.
In case of the TCP level, it is a connection attempt. In case of the HTTP level, it is a request: the request is sent to a specific URI, and then the response is validated. Either way, the primary purpose of the health check is to determine whether the backend servers are available and healthy. Now the fourth concept is the listener. As the name suggests, a listener checks for incoming traffic on the load balancer’s IP address. To configure a listener, we provide the protocol and the port number: HTTP, HTTP/2, HTTPS, or TCP. In case you are handling several of these traffic types, you need to configure at least one listener per traffic type. We shall continue this discussion...
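The HTTP-level check can be mimicked from the command line. A minimal sketch, assuming `curl` is available; the `/health` URI and any backend host you pass in are placeholders, not part of any real configuration:

```shell
# HTTP-level check: send a request to a specific URI and validate the response.
# (A TCP-level check would be a bare connection attempt instead.)
check_backend() {
  if curl -fsS --max-time 3 "$1" >/dev/null 2>&1; then
    echo HEALTHY
  else
    echo UNHEALTHY   # the load balancer would take this backend out of rotation
  fi
}
```

A backend that refuses the connection, times out, or returns an error status would be reported unhealthy and removed from rotation, exactly as described above.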
Continue reading...
Oracle Autonomous Database Cloud 2024 Professional….
I have been working with Oracle Autonomous Database for a while. So I decided to write the certification exam for the same: 1Z0-931-24. Glad to say...
Continue reading...Cleared Oracle 19c (1z0-083)….
It was long overdue, but I have finally managed to clear the exam 1z0-083. And that means I am now 19c Certified. I used the MyLearn...
Continue reading...Oracle AI Vector Search Benefits….
One of the biggest benefits of Oracle AI Vector Search is that semantic search on unstructured data can be combined with relational search on business data...
Continue reading...Importing Vector Embedding Models….
One way of creating vector embeddings could be to use someone’s domain expertise to quantify a predefined set of features or dimensions such as shape, texture,...
Continue reading...23ai Vector Database Fundamentals, Vector Data Type, Embeddings….
Oracle AI Vector Search is designed for Artificial Intelligence (AI) workloads and allows you to query data based on semantics, rather than keywords. Oracle Database 23ai introduces...
Continue reading...Continuous GoldenGate Capture During Rolling Upgrades in 23ai….
The rolling database upgrade is the way to reduce downtime during a database upgrade. You can perform a rolling database upgrade with several different methods, which include manual tasks or scripts, and you can also use the DBMS_ROLLING package. When you perform a rolling database upgrade using DBMS_ROLLING, it simplifies the overall upgrade process, which includes the init, build, start, upgrade, switchover, and finish steps; many of these steps are simplified by the package. Now suppose that you want to perform a rolling database upgrade using this package, and on the primary database there is an Oracle GoldenGate capture process running. So whenever you perform transactions on the primary database, the GoldenGate capture process continuously captures changes. You also have a standby database, and this has to be a physical standby database, eventually converted to a transient logical standby database as part of the rolling upgrade. So we upgrade the transient logical standby database first. During the upgrade of the transient logical standby database, users can stay connected to the primary database and continue to work, and the Oracle GoldenGate capture process can also stay up and running without any downtime. But as part of the switchover, let’s say we have already completed the upgrade of the transient logical standby database, so it’s time to switch over to be able to upgrade the original primary database. The question is, how do you handle the Oracle GoldenGate capture process? Prior to 23ai, you had to take care of the GoldenGate capture process that was running on the primary manually. But with Oracle Database 23ai, when we perform the switchover operation, the relocation of the Oracle GoldenGate capture process is automatic as part of the DBMS_ROLLING switchover. In addition, this provides support for Application Continuity and also for Transparent Application Continuity.
So as you can see, when the roles are changed, because the metadata is replicated to the transient logical standby database, the Oracle GoldenGate capture process can also fail over to the new primary database automatically. We can use this feature for database release upgrades, for complex maintenance tasks, and for emergency application of nonrolling patches.
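The DBMS_ROLLING phases mentioned above map to a short PL/SQL sequence. This is a hedged sketch run as SYSDBA on the primary; the standby name is a placeholder, and the exact parameter names should be checked against the DBMS_ROLLING package reference for your release:

```shell
sqlplus -s / as sysdba <<'SQL'
-- future_primary is the DB_UNIQUE_NAME of the physical standby (placeholder here)
EXEC DBMS_ROLLING.INIT_PLAN(future_primary => 'STBY');
EXEC DBMS_ROLLING.BUILD_PLAN;
EXEC DBMS_ROLLING.START_PLAN;   -- converts the standby to a transient logical standby
-- ...upgrade the transient logical standby to the target release here...
EXEC DBMS_ROLLING.SWITCHOVER;   -- in 23ai, GoldenGate capture fails over automatically
EXEC DBMS_ROLLING.FINISH_PLAN;
SQL
```

The point made in the post is the SWITCHOVER step: before 23ai the GoldenGate capture process had to be handled manually around it, while in 23ai its relocation happens as part of that call.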
Continue reading...Oracle RAC Two Stage Rolling Patch in 23ai….
In Oracle Database 23ai, the Oracle RAC two-stage rolling patch provides a framework where patches that include data dictionary changes can be applied in a rolling fashion and enabled after the patch has been applied to the last instance, without requiring downtime to apply the patch. This feature splits the patching operation between applying binary changes at the software level and SQL changes at the database level. During phase 1, the patch is applied to all instances, but the fix is not enabled. The example in the slide shows a four-node RAC database, where software version 23.3.1 is installed across all four nodes. With the Oracle RAC two-stage rolling patch strategy, we apply the patch one node at a time. While the patch is being applied on a node, all the other instances running on the remaining servers can still serve the Oracle Database. And until the patch is applied on the last node, the fix is not enabled, so users can keep accessing the database. So we apply the patch on the first node first: its users are disconnected and can reconnect to one of the three surviving servers, and once the patch is applied, they can reconnect to the database instance running on node 1. In the same way, we apply the patch at the software level on the second, third, and fourth servers. On completion of phase 1, the fix is enabled through a SQL statement: ALTER SYSTEM ENABLE RAC TWO_STAGE ROLLING UPDATE ALL. Before running this command, the active behavior is still that of 23.3.1, even after the patch binaries are applied. After you run the statement, the fix is enabled and the software version is updated. So this feature helps to reduce planned downtime.
Reducing the need to take a database instance down also improves performance, as the workloads are not subject to rewarming the cache after an instance restart. So this is a nice feature: it significantly reduces the number of nonrolling patches.
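Phase 2 boils down to a single statement. A sketch, run as SYSDBA on any instance once the last node has been patched; the statement is as quoted in the transcript, so verify the exact syntax against the 23ai SQL reference:

```shell
sqlplus -s / as sysdba <<'SQL'
-- Phase 2: enable the fix cluster-wide after the binaries are patched everywhere
ALTER SYSTEM ENABLE RAC TWO_STAGE ROLLING UPDATE ALL;
SQL
```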
Continue reading...Local Rolling Database Maintenance in 23ai….
Local Rolling Database Maintenance. Starting with Oracle Database 23ai, you can apply rolling patches locally for Oracle Real Application Clusters and Oracle RAC One Node deployments. It is very similar to single-server rolling database maintenance, but this feature is used for multinode RAC environments. So let’s take a look at how it works, assuming a two-node RAC database: we have host A and host B, with the CDB1 instance running on host A and the CDB2 instance running on host B. When we perform out-of-place patching, we install the software in a new home and then apply the patch there. Once the patch operation is complete, with local rolling database maintenance we can start a new instance out of the new home while the original instance is still running out of the original home. So at one point, on host A, we have two instances: one from the original home and the other out of the new home. Once everything is ready, the services and sessions are moved to the new instance running out of the new home, and once the sessions are moved, the original instance is stopped. The same thing happens on host B: we install or patch the software in a new home, start a new instance out of that home (in this example, CDB2/4), move the sessions over, and stop the original instance. Local rolling database maintenance provides uninterrupted database availability during maintenance activities such as patching for Oracle RAC and Oracle RAC One Node databases. This significantly improves the availability of your databases without causing extra workload on other cluster nodes. So let’s take a look at the steps. First, you download the Oracle Database installation image file and extract it into a new Oracle home directory. From the new Oracle home directory, you start OUI, apply the required release update, and then perform the software installation.
So that installs the patched software in a new home. Then we run the SRVCTL modify database command with the -localrolling option. This enables local rolling and creates the new RAC instances: as soon as you run this command, the new instances are created but stopped. For example, with a two-node RAC database, in the first node a new instance is created but stopped, and in the second node a new instance is created but stopped. Then we transfer the Oracle RAC or Oracle RAC One Node database, its PDBs, and its services from the old Oracle home to the new Oracle home. This is the step that starts the instances out of the new home, transfers the PDBs and services to the new instances, and then stops the original instances; it is done with SRVCTL transfer instance. And now you verify the database configuration changes: the output of the srvctl config database command should show the new instance names for the database.
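The steps described above can be sketched as a short command sequence. The database name and home path are placeholders, and only the options named in the post are shown; check `srvctl -h` on your version for the full option list:

```shell
# 1. Install the patched software into a new Oracle home (OUI + release update).
# 2. Enable local rolling against the new home; new instances are created but stopped:
srvctl modify database -db cdb1 \
  -oraclehome /u01/app/oracle/product/23.4.0/dbhome_2 -localrolling
# 3. Move the instance, PDBs, and services to the new home, one node at a time:
srvctl transfer instance -db cdb1
# 4. Verify: the configuration should now show the new instance names:
srvctl config database -db cdb1
```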
Continue reading...Smooth Reconfiguration of Oracle RAC Instances in 23ai….
Servers leaving or joining a cluster result in a reconfiguration, which is essentially a synchronization event to recover all the changes made by the failed instance. Oracle RAC has reduced the time sessions wait on this event during reconfiguration. In Oracle RAC 23ai, smooth reconfiguration reduces the impact of a service disruption from both planned and unplanned operations, utilizing several features, such as Recovery Buddy, PDB and service isolation, and smooth reconfiguration itself, resulting in a faster reconfiguration than previous releases. So we are going to take a look at some of the features introduced in previous releases, and then get into the smooth reconfiguration that was introduced in 23ai. Let’s review global resource management. A user makes a connection to one of the RAC instances and can submit a SQL statement requesting a set of blocks. In order to cache database blocks, buffers must be allocated, and master metadata must also be allocated to describe the changes to those buffers. An internal algorithm decides which instance should contain the master metadata structure for a given entity. In our example, the master metadata structures are distributed across instance 1 and instance 2. This information on the master metadata structure for the entity is persisted in the data dictionary and reused during instance startup. The global resources are managed both for unplanned instance crashes and for planned service relocations. Now, let’s review PDB and service isolation. Let’s assume there are three PDBs: PDB1, PDB2, PDB3. PDB1 is running on instance 1, PDB2 is running on instance 1, instance 2, and instance 3, and PDB3 is available on instance 2 and instance 3. So when you make any changes to PDB1, the metadata structure owned by PDB1 is only available on instance 1.
When you make any changes to PDB3, the master metadata structure for PDB3 is distributed across the instances where PDB3 is up and running, in this example instance 2 and instance 3. So let’s take a look at RAC reconfiguration. PDB-level reconfiguration is needed only when a PDB’s footprint changes, for example if PDB1 is opened on instance 2. Originally, PDB1 was available on instance 1; when you start PDB1 on instance 2 as well, the master metadata is redistributed across instance 1 and instance 2. And if instance 2 goes down, what happens? All the PDBs that were running out of instance 2 are no longer available there, so the master metadata that was kept in instance 2 must be redistributed across the surviving instances, in this example instance 1 and instance 3. The impact is isolated to the affected PDBs only: an unaffected PDB is not impacted when a CDB instance crashes, when PDB1 is opened on instance 2, or when a fourth instance is brought up. In all these cases, the impact is isolated at the PDB level. PDB and service isolation is a feature used for CDBs, and it is an enhancement of service-oriented buffer cache access; it improves performance by reducing distributed lock manager operations for services not offered on all PDB instances. The next topic is Buddy Recovery for reconfiguration. The Recovery Buddy feature reduces the waiting time during reconfiguration. In prior releases, Oracle RAC instances identified and recovered the changes made by the failed instance by reading the redo logs. So, for example, suppose instance 1, PROD1, goes down. In order to recover the blocks heavily modified on PROD1, one of the surviving instances must access the redo log file owned by PROD1 and then identify the blocks to be recovered. That involves physical I/O, and it is a time-consuming operation. With the Recovery Buddy feature, we can reduce this I/O because of the in-memory log and the Recovery Buddy concept.
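The buddy pairing in the example that follows forms a simple ring. The real assignment algorithm is internal to Oracle; this is only a sketch of the ring idea:

```shell
# Ring assignment sketch: instance i's Recovery Buddy is instance (i mod n) + 1.
# For n=3 this matches PROD2 buddying PROD1, PROD3 buddying PROD2,
# and PROD1 buddying PROD3.
buddy_of() {
  i=$1
  n=$2
  echo $(( i % n + 1 ))
}
```

Each instance mirrors its changes into exactly one buddy's in-memory log, so any single instance failure leaves one surviving instance that already holds the recovery information.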
So, for example, in a three-node RAC database, a Recovery Buddy is assigned for each instance. For example, PROD1 is the Recovery Buddy of PROD3, PROD2 is the Recovery Buddy of PROD1, and PROD3 is the Recovery Buddy of PROD2. That means that when you make any changes to blocks in instance 1, PROD1, the changes are captured directly in PROD1, but the same changes are also maintained in the Recovery Buddy’s memory, the in-memory log. The same thing happens when you make any changes on PROD2: the change is maintained not only locally but also in the Recovery Buddy instance. So here’s an example. We connect to instance 1, request some blocks, and make changes. When you make changes on PROD1, these changes are maintained in PROD1, but the same changes are also maintained in the Recovery Buddy instance. So if PROD1 goes down, instead of having to access the online redo log file owned by PROD1, we can directly access the in-memory log preserved in the Recovery Buddy instance and read it to identify the blocks to be recovered. Once we identify the blocks to be recovered and apply the changes, recovery is complete. This feature reduces the time required for reconfiguration. Smooth reconfiguration. Smooth reconfiguration of Oracle Real Application Clusters instances reduces brownout time during cluster reconfiguration. Here’s an example. Suppose that you run the srvctl command to stop an instance. In previous versions, as soon as you ran the srvctl stop instance command, the instance was simply stopped, and until the metadata that was kept in the stopped instance was redistributed, your database was frozen for that amount of time, until the global resources were recovered. However, in 23ai, the algorithm is slightly changed: you request an instance stop, but instead of stopping the instance immediately, the resource remastering operation is performed first.
So we redistribute the metadata before stopping the instance, and only after the redistribution is the instance actually shut down. When you compare version 19c and 23ai, the order is slightly changed, and that reduces the time required for cluster reconfiguration. In 19c, as soon as you issued the stop instance command, srvctl stop instance, the instance was killed and stopped, and then the global resources had to be remastered; for a short amount of time, your database was not able to perform any activities. However, in 23ai, when a user requests srvctl stop instance, instead of stopping the instance, the resources are remastered first, and after the resources are remastered, the instance is actually shut down. That reduces the reconfiguration time. So this feature distributes the resource coordinators before shutting down instances for planned maintenance. (Resource coordinator is the same as resource master, or resource owner; it is the same concept, just new terminology.)...
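The behavioral difference can be pinned to the same command in both releases; the database and instance names below are placeholders:

```shell
# The command is unchanged between releases:
srvctl stop instance -db cdb1 -instance cdb11
# 19c order:  stop instance  -> remaster global resources  (brownout while remastering)
# 23ai order: remaster global resources -> stop instance   (brownout largely avoided)
```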
Continue reading...