Let’s understand the meaning of site-to-site VPN. This service has some older names: if you hear VPN Connect or IPsec VPN, those are the old names for a site-to-site VPN connection. What does a site-to-site VPN connection do? It provides an IPsec connection between an on-premises environment and a Virtual Cloud Network. So if you have an on-premises environment along with your Virtual Cloud Network, you can have a DRG (Dynamic Routing Gateway) on the OCI side and customer-premises equipment on the on-premises side. Using these, an IPsec connection is established between the on-premises network and the Virtual Cloud Network. Now, what is the meaning of this IPsec connection? It means the IP packets are encrypted before they are transferred and decrypted when they arrive. That is your IPsec protocol. Remember, this communication happens over the internet: the traffic traverses the public internet, but it is an encrypted connection. Now, let’s look at the two modes of an IPsec connection. The first mode is transport mode, and the second mode is tunnel mode. Before you understand these, you have to understand the difference between a header and a payload. Think of the header like an envelope or a box, and think of the payload like the content or the data. The header contains information about the packet, things like the origin IP and the destination IP, while the actual data, or content, is your payload. So two things: header and payload. If you understand this, you will easily understand transport mode, because in transport mode the header stays intact and the IPsec protocol encrypts only the actual payload. That is what happens in transport mode. In tunnel mode, on the other hand, the entire packet is encrypted and authenticated.
And when I say the entire packet, that includes both the header and the payload. Why is this important? Because OCI supports tunnel mode; tunnel mode is what Oracle supports. Now, let’s talk about the advantages of a site-to-site VPN connection. The first advantage is that it is cost-effective. I already mentioned that the communication traverses the public internet, which means there is no need for dedicated leased lines, and hence it is cost-effective. The second advantage is that it is quick to set up: if you would like to conduct a POC, you can set up a site-to-site VPN connection very quickly. The third benefit is that the communication is encrypted. And whenever you create a site-to-site VPN connection, each connection will have two tunnels. So tunnel one, tunnel two....
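The transport-mode vs. tunnel-mode distinction can be sketched in a few lines of Python. This is a toy model, not real IPsec: a packet is just a (header, payload) pair of byte strings, and `encrypt` is a stand-in XOR rather than a real cipher.

```python
def encrypt(data: bytes) -> bytes:
    # Stand-in for ESP encryption; real IPsec uses ciphers such as AES.
    # XOR is its own inverse, so encrypt(encrypt(x)) == x.
    return bytes(b ^ 0x5A for b in data)

def transport_mode(header: bytes, payload: bytes):
    # Transport mode: the original IP header stays intact;
    # only the payload is encrypted.
    return header, encrypt(payload)

def tunnel_mode(header: bytes, payload: bytes, gateway_header: bytes):
    # Tunnel mode (the mode OCI supports): the ENTIRE original packet,
    # header plus payload, is encrypted, then wrapped in a new outer
    # header addressed between the two VPN gateways.
    return gateway_header, encrypt(header + payload)

hdr, data = b"src=10.0.0.1;dst=172.16.0.9", b"hello"

t_hdr, t_body = transport_mode(hdr, data)
assert t_hdr == hdr                       # header still readable in transit
assert encrypt(t_body) == data            # only the payload was hidden

o_hdr, o_body = tunnel_mode(hdr, data, b"src=gw1;dst=gw2")
assert o_hdr == b"src=gw1;dst=gw2"        # outer header names the gateways
assert encrypt(o_body) == hdr + data      # original header + payload hidden
```

The point of the sketch is only which parts of the packet are visible in transit: in transport mode an observer still sees the real source and destination IPs, while in tunnel mode they see only the two gateway addresses.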
February 15, 2025
OCI VCN Connectivity Options
Let’s understand how an OCI VCN can be connected to other VCNs in the same or different regions. When we have a VCN, we can either connect it to another VCN in the same region, or connect it to another VCN in a different region. Both options are supported in Oracle Cloud Infrastructure. When both VCNs are in the same region, we call that local peering. When the VCNs are in different regions, we call it remote peering. There are two ways to configure local peering: the first option is to use a local peering gateway, and the second is to use a dynamic routing gateway. A remote peering connection is configured using a dynamic routing gateway. So that is connecting a VCN to another VCN. If, instead, you would like to connect the VCN to your on-premises network, that is, the customer location or on-prem network, there are three options. The first option is the public internet. The second option is something known as site-to-site VPN, a site-to-site VPN connection. And the third option is FastConnect. For the public internet, we typically use gateways like the internet gateway or NAT gateway, and then configure connectivity over the internet. Site-to-site VPN is basically an encrypted connection, so in terms of security it is secure connectivity over the internet: IPsec VPN. But ultimately the traffic still traverses the internet, which means there is no throughput guarantee. FastConnect, by contrast, is dedicated connectivity, which implies low latency and high bandwidth. So these are the options when it comes to VCN connectivity: you can connect your VCN to another VCN in the same region or in a different region, and you can also connect your VCN to an on-premises network.
So as I mentioned, there are two options: you can configure local peering, or you can configure remote peering. If there are two VCNs in the same region and we connect them via a local peering gateway, that is your local peering. And if there are two VCNs in two different regions, we use a dynamic routing gateway to facilitate communication between the two Virtual Cloud Networks....
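The decision tree described above can be captured as a tiny Python function. This is purely illustrative, not an OCI API; the option strings just mirror the text.

```python
def connectivity_option(target: str, same_region: bool = True,
                        need_throughput_guarantee: bool = False) -> str:
    """Pick an OCI connectivity option, following the text's decision tree."""
    if target == "vcn":
        # VCN-to-VCN: local peering (LPG or DRG) within a region,
        # remote peering (DRG only) across regions.
        if same_region:
            return "local peering (LPG or DRG)"
        return "remote peering (DRG)"
    if target == "on-premises":
        # On-premises: FastConnect is the dedicated low-latency,
        # high-bandwidth option; site-to-site VPN is encrypted but rides
        # the public internet, so it carries no throughput guarantee.
        if need_throughput_guarantee:
            return "FastConnect"
        return "site-to-site VPN (or public internet)"
    raise ValueError(f"unknown target: {target}")

assert connectivity_option("vcn", same_region=False) == "remote peering (DRG)"
assert connectivity_option("on-premises",
                           need_throughput_guarantee=True) == "FastConnect"
```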
OCI Load Balancer Types and Policies
Now let’s talk about the fifth concept, which is the load balancing policy. Let’s say this is incoming traffic, the load balancer sits here, and then these are the backend servers. The load balancing policy tells the load balancer how to distribute traffic amongst the backend servers, and there are three such policies. The first one is round robin. What happens in round robin? The incoming traffic is sent sequentially to each server, and this is the default policy. The second policy is least connection. This policy makes the load balancer send traffic to the backend server with the fewest active connections: if backend 1 has the fewest active connections, the load balancer will send the traffic to backend 1. Hence the name, least connection. The third policy is IP hash. With IP hash, the incoming IP, which is nothing but the source IP, is used as a hashing key. So with this policy, the load balancer service routes non-sticky traffic to the same backend server: if a request originates from a particular IP address and we have selected IP hash, the non-sticky traffic will be sent to the same backend server. One thing to note here is that a load balancing policy applies differently to TCP load balancers, sticky HTTP sessions, and non-sticky HTTP sessions. So those are the three load balancing policy types. Now the sixth concept is the shape of the load balancer. In OCI, load balancers use a flexible shape. What does that mean? It means you have to specify two things: first, a minimum value, and second, a maximum value. Now what’s the value here?
The value is the load balancer’s bandwidth. What is the significance of the minimum value? It specifies instant readiness for the load. And the maximum value? It gives us control over cost. In OCI, you can specify from 10 Mbps to 8,000 Mbps; that is the range. For example, you could specify a minimum of 10 Mbps and a maximum of, let’s say, 3,000 Mbps. The seventh concept is SSL. There is a client, and there is a server, and by SSL we mean the link between them is encrypted. OCI supports SSL termination. What happens in SSL termination? This is the client, this is the load balancer, and this is the back end. The traffic from the client to the load balancer is SSL traffic, but the traffic from the load balancer to the back end is unencrypted; hence the name, SSL termination. The communication between client and load balancer is encrypted, and the load balancer decrypts it and sends it on to the backend instances. Then there is the concept of point-to-point SSL. In this case, the SSL is still terminated at the load balancer, but both legs are encrypted: the load balancer terminates the client’s SSL connection and then re-initiates an SSL connection to the back end. Therefore we call it point-to-point SSL. Both legs are encrypted, unlike SSL termination, where the portion between the load balancer and the backend instances is unencrypted. The third option is SSL tunneling. In this scenario, the load balancer tunnels the incoming SSL straight through to the backend or application server. So those are the three options when it comes to SSL. Now I will discuss the eighth concept, which is session persistence. What is the meaning of session persistence?
Let’s say there is a client that has sent a request, there is the load balancer, and there are different backend servers. By session persistence, we mean that all the requests originating from a client will be sent to a single backend server. Can you think of a use case? Scenarios like shopping carts and login sessions need this kind of session persistence. There are two ways to enable it: the first method is application cookie stickiness, and the second is load balancer cookie stickiness. We won’t go into those details; I just wanted to explain at a high level what session persistence means. In a nutshell, all the requests originating from a client are sent to a single backend server. The ninth concept concerns certificates. While creating a listener, if you are using HTTPS, you need to associate an SSL server certificate with the load balancer. Using this certificate, the load balancer terminates the connection and decrypts the request. Now let’s talk about load balancer routing. The first kind is path-based routing, and the second is host-based routing; both can be configured using routing policies in the load balancer. What happens in path-based routing? Let’s say the incoming traffic is for www.abc.com, and this is the load balancer. You want requests for, say, abc.com/app to go to one backend pool; if the path is, say, /videos, it should go to another backend pool; similarly, if it is /images, it should go to a different backend pool. This kind of setup is what is known as path-based routing: the path is what drives the decision....
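The three load-balancing policies above can each be sketched in a few lines of Python. These are minimal illustrations of the selection logic, not OCI’s implementation; the backend names and connection counts are made up.

```python
import hashlib
from itertools import cycle

backends = ["backend-1", "backend-2", "backend-3"]

# Round robin (the default policy): requests go to each backend in turn.
rr = cycle(backends)
picks = [next(rr) for _ in range(4)]
assert picks == ["backend-1", "backend-2", "backend-3", "backend-1"]

# Least connection: pick the backend with the fewest active connections.
active = {"backend-1": 4, "backend-2": 1, "backend-3": 7}

def least_connection() -> str:
    return min(active, key=active.get)

assert least_connection() == "backend-2"   # fewest active connections

# IP hash: the source IP is the hashing key, so non-sticky traffic from
# the same client IP always lands on the same backend.
def ip_hash(source_ip: str) -> str:
    digest = hashlib.sha256(source_ip.encode()).hexdigest()
    return backends[int(digest, 16) % len(backends)]

assert ip_hash("203.0.113.7") == ip_hash("203.0.113.7")  # deterministic
```

Note how IP hash gives a form of stickiness without cookies: the mapping is a pure function of the source address, which is why the text says non-sticky traffic from one IP keeps hitting the same backend.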
Understanding OCI Load Balancer
Let’s understand what a load balancer is. Say, for example, this is a client, in the middle there is an entity, the load balancer, and then there are different servers. The client sends the request, so you can think of the load balancer as one entry point: it sits in between and distributes the traffic to multiple backends. That means there are multiple servers, or multiple backends, and the load balancer distributes the incoming traffic across them. Now, what are the benefits of a load balancer? The first benefit is scaling. At any point in time, you can increase the number of servers: say there were three servers, and I make it four. So you can increase the number of backend servers; that flexibility comes with load balancing. The second benefit is resource utilization. The load balancer distributes traffic to the different backends based on load balancing policies, which ensures these resources are properly utilized. The third benefit is that a load balancer gives you high availability. Why? Because there are multiple backends, multiple servers, behind the load balancer; even if one of the servers becomes unhealthy, the load balancer continues distributing traffic to the other servers, and that is how it ensures high availability. Now, let’s discuss the types of load balancers. There are two types: the first is the public load balancer, and then we have the private load balancer. As the name suggests, a public load balancer has a public IP and is reachable from the internet. A private load balancer has a private IP, taken from the hosting subnet, the subnet inside which the private load balancer resides, and it is only visible from within the Virtual Cloud Network.
Now let’s talk about the load balancer concepts. There are a total of nine concepts, and I will discuss them one by one. The first concept is backend servers. What is a backend server? At the very start, I mentioned that the load balancer distributes incoming traffic to multiple servers placed in the backend. The backend servers are the servers that generate the content. For example, this is your load balancer and this is the incoming traffic, which can be TCP or HTTP traffic. The load balancer forwards, or distributes, it to the backend servers; ultimately, a backend server is responsible for generating content. The second concept is the backend set. A backend set is simply a logical entity: think of it as a list of backend servers, for example backend server 1 and backend server 2, together with the health check policy and the load balancing policy (we will look at both of these terms). That means a backend set is defined by the list of backend servers plus the health check policy plus the load balancing policy. The third concept is the health check policy. What is a health check policy? It is simply a test. And what is the test about? As I mentioned, we have the load balancer and the incoming request, and the load balancer distributes it to different backends. Take backend server 1 as an example: the test confirms whether this backend server is available. If the backend server fails the test, the load balancer takes that server out of rotation. There are two ways to conduct the health check: at the TCP level, or at the HTTP level.
At the TCP level, the health check is a connection attempt. At the HTTP level, it is a request: the request is sent to a specific URI, and the response is validated. Either way, the primary purpose of the health check is to determine whether the backend servers are available and healthy. The fourth concept is the listener. As the name suggests, a listener checks for incoming traffic on the load balancer’s IP address. To configure a listener, we provide the protocol and port number: HTTP, HTTP/2, HTTPS, or TCP. And if you are handling several of these traffic types, you need to configure at least one listener per traffic type. We shall continue this discussion...
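The two health-check styles just described can be sketched with the Python standard library: a TCP-level check is just a connection attempt, while an HTTP-level check sends a request to a specific URI and validates the response. The hosts, ports, and `/health` path in the usage comment are placeholder assumptions, not anything OCI mandates.

```python
import socket
import urllib.request

def tcp_health_check(host: str, port: int, timeout: float = 3.0) -> bool:
    # TCP level: the backend is healthy if a connection can be opened.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def http_health_check(url: str, timeout: float = 3.0) -> bool:
    # HTTP level: send a request to a specific URI and validate the
    # response status code (here, any 2xx counts as healthy).
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except OSError:
        return False

# e.g. tcp_health_check("10.0.1.5", 80)
#      http_health_check("http://10.0.1.5/health")
```

A load balancer would run checks like these on an interval and, as the text says, take a backend out of rotation once it starts failing.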