Now let’s talk about the fifth concept, which is load balancing policy. Let’s say this is incoming traffic, the load balancer sits here, and then these are the backend servers. Now this load balancing policy is going to tell the load balancer how to distribute traffic.
So how to distribute traffic amongst the backend servers– and there are three such policies. The first one is round robin, which is the default policy. In round robin, the incoming traffic is distributed sequentially to each server in turn.
The second policy is least connection. This policy makes the load balancer send traffic to the backend server with the fewest active connections. Let’s say backend 1 has the fewest active connections– the load balancer will then send the traffic to backend 1. And that is why the name is least connection.
The third policy is IP hash. In IP hash, the incoming IP– that is, the source IP address– is used as a hashing key. So what typically happens is, using this policy, the load balancer service routes non-sticky traffic to the same backend server. In other words, if requests originate from a particular IP address and we have selected IP hash, then the non-sticky traffic is always sent to the same backend server.
Now one thing to note here is that these load balancing policies apply differently to TCP load balancers, to sticky HTTP sessions, and to non-sticky HTTP requests. So these are the three load balancing policy types.
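To make the three policies concrete, here is a minimal Python sketch of how each one could pick a backend. The backend names and connection counts are made up for illustration; this is not how OCI implements the policies internally.

```python
import hashlib
from itertools import cycle

# Hypothetical backend servers behind the load balancer.
backends = ["backend1", "backend2", "backend3"]

# Round robin (the default policy): hand traffic to each backend sequentially.
rr = cycle(backends)
def round_robin():
    return next(rr)

# Least connection: pick the backend with the fewest active connections.
active_connections = {"backend1": 2, "backend2": 0, "backend3": 5}  # made-up counts
def least_connection():
    return min(active_connections, key=active_connections.get)

# IP hash: use the source IP as a hashing key, so non-sticky requests from
# one client keep landing on the same backend server.
def ip_hash(source_ip):
    digest = hashlib.sha256(source_ip.encode()).hexdigest()
    return backends[int(digest, 16) % len(backends)]
```

Note how `ip_hash` is deterministic: the same source IP always maps to the same backend, while `round_robin` cycles through all of them.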
Now the sixth concept is regarding the shape– the shape of the load balancer. So, in OCI, the load balancers– they use flexible shape. Now what does it mean? It means you have to specify two things– first, you have to specify the minimum value. And secondly, you have to specify the maximum value.
Now what’s the value here? The value is the load balancer’s bandwidth. The significance of the minimum value is that it guarantees instant readiness for incoming load, while the maximum value lets us keep control over the cost.
And in case of OCI, you can specify from 10 Mbps to 8,000 Mbps. So this is the range. For example, you can specify minimum value as 10 Mbps, and you can specify maximum value as, let’s say, 3,000 Mbps, just as an example.
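As a quick sketch, a flexible shape could be validated like this. The field names below are illustrative, not the exact OCI API field names; only the 10–8,000 Mbps range comes from the discussion above.

```python
# OCI flexible shapes allow bandwidth from 10 Mbps to 8,000 Mbps.
MIN_MBPS, MAX_MBPS = 10, 8000

def flexible_shape(minimum_mbps, maximum_mbps):
    # Minimum gives instant readiness for load; maximum caps the cost.
    if not (MIN_MBPS <= minimum_mbps <= maximum_mbps <= MAX_MBPS):
        raise ValueError("need 10 <= minimum <= maximum <= 8000 (Mbps)")
    return {"minimum_bandwidth_mbps": minimum_mbps,
            "maximum_bandwidth_mbps": maximum_mbps}
```

For example, `flexible_shape(10, 3000)` mirrors the minimum of 10 Mbps and maximum of 3,000 Mbps mentioned above.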
Then the seventh concept is on SSL. So there is a client, and then there is a server. By SSL, we mean that it’s basically an encrypted link. And in OCI, there is support for SSL termination.
What happens in case of SSL termination? This is the client, this is the load balancer, and this is the back end. Now this traffic is going to be SSL traffic, but this traffic is going to be unencrypted. Therefore, the name says SSL termination. The communication between client and load balancer is encrypted, and then the load balancer is going to decrypt it and send it to the backend instances. So this is what is your SSL termination.
Then there is a concept of point-to-point SSL. In this case, the SSL is terminated at the load balancer. So this link is encrypted, and this link is also encrypted– but here the SSL terminates, and then the load balancer re-initiates an SSL connection to the back end. And therefore, we call it point-to-point SSL. So both of these links will be encrypted, unlike SSL termination, where the portion between the load balancer and the backend instances is unencrypted.
The third option is SSL tunneling. So, in this scenario, the load balancer is going to tunnel incoming SSL to the backend server or application server. So these are the three different options when it comes to SSL.
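The three SSL options above can be summarized in a small sketch. For each mode it records whether the client-to-load-balancer leg is encrypted, whether the load balancer decrypts the traffic, and whether the load-balancer-to-backend leg is encrypted; the mode names are just labels for this illustration.

```python
def ssl_mode(mode):
    if mode == "termination":      # SSL ends at the load balancer
        return {"client_leg_encrypted": True,
                "lb_decrypts": True,
                "backend_leg_encrypted": False}
    if mode == "point_to_point":   # LB terminates, then re-initiates SSL
        return {"client_leg_encrypted": True,
                "lb_decrypts": True,
                "backend_leg_encrypted": True}
    if mode == "tunneling":        # LB passes the SSL stream straight through
        return {"client_leg_encrypted": True,
                "lb_decrypts": False,
                "backend_leg_encrypted": True}
    raise ValueError(f"unknown mode: {mode}")
```

The key difference between point-to-point SSL and tunneling is visible here: both keep every leg encrypted, but only point-to-point has the load balancer decrypt and re-encrypt the traffic.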
Now I will discuss the eighth concept, which is session persistence. Now what is the meaning of session persistence? Let’s say there is a client and this client has sent a request– so there is the load balancer, and there are different backend servers.
Now by session persistence, we mean that all the requests that are originating from a client will be sent to a single backend server. And can you think of any use case? So there are scenarios like shopping cart and login sessions where we need this kind of session persistence. And there are two ways in which we can enable session persistence.
So the first method is application cookie stickiness, and the second option is load balancer cookie stickiness. We’ll not go into those details, but I just wanted to explain to you at a pretty high level what session persistence means. So, in a nutshell, all the requests originating from a client are going to be sent to a single backend server.
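As a rough sketch of load balancer cookie stickiness: the first request from a client is assigned a backend and a session cookie, and later requests carrying that cookie stay pinned to the same backend. The cookie format and backend names here are hypothetical.

```python
from itertools import cycle

backends = cycle(["backend1", "backend2", "backend3"])
sessions = {}  # cookie value -> pinned backend

def route(cookie=None):
    # Existing session: keep sending this client to the same backend,
    # which is what a shopping cart or login session needs.
    if cookie in sessions:
        return sessions[cookie], cookie
    # New session: pick the next backend and pin it via a session cookie.
    backend = next(backends)
    new_cookie = f"lb-session-{len(sessions) + 1}"
    sessions[new_cookie] = backend
    return backend, new_cookie
```

A client that presents its cookie on every request will always be routed to the backend it was first assigned.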
Now the ninth concept is with respect to certificates. So while creating a listener, if you are using HTTPS, then you need to associate an SSL server certificate with the load balancer. And using this certificate, the load balancer is going to terminate the connection and decrypt the request.
Now let’s talk about load balancer routing. The first is path based routing, and the second is host based routing. Both can be configured using routing policies in the load balancer.
So what happens in path based routing? Let’s say this is the incoming traffic. Let’s say it is www.abc.com. So this is the load balancer. Now you want that if the path is, let’s say, abc.com/app, it should go to this particular backend pool.
If the path is, let’s say, videos, it should go to this particular backend pool. Similarly, if it is /images, it should go to a different backend pool. So this kind of setup is what is known as a path based routing. You see here, this is the path.
Now, in case of host based routing, again you assume this is the incoming traffic, there is this load balancer. In this case, it is going to be two different hosts. For example, abc.example.com and xyz.example.com.
This request is going to be sent to a different backend pool, and this request is going to be sent to a different backend pool. So this is what is your host based routing.
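The two routing styles above can be sketched together: match the request path against known prefixes first, and fall back to matching the host. The pool names and rule tables are illustrative, not OCI routing-rule syntax; only the example paths and hosts come from the discussion above.

```python
# Path based rules: /app, /videos, /images each go to their own backend pool.
path_rules = {"/app": "app-pool", "/videos": "video-pool", "/images": "image-pool"}
# Host based rules: each hostname goes to its own backend pool.
host_rules = {"abc.example.com": "abc-pool", "xyz.example.com": "xyz-pool"}

def choose_backend_pool(host, path, default="default-pool"):
    for prefix, pool in path_rules.items():
        if path.startswith(prefix):
            return pool                       # path based routing
    return host_rules.get(host, default)      # host based routing
```

So a request for `www.abc.com/app` lands in the app pool by its path, while a request to `abc.example.com` with no matching path is routed by its host.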
Hope this helped.