In Kubernetes, every component that needs to communicate — whether pods talking to each other, the control plane coordinating with worker nodes, or workloads reaching external services — relies on a consistent, routable IP addressing model. Understanding how Kubernetes assigns and manages these IP addresses is foundational to designing reliable, secure, and scalable clusters.
This document provides a comprehensive technical reference covering the Container Network Interface (CNI) specification, the two primary CNI plugins available on Oracle Cloud Infrastructure (OCI), their architectural differences, IP address allocation strategies, and operational considerations, including version management.
2. Kubernetes Architecture Overview
Before examining networking in detail, it is important to understand the cluster components that networking must interconnect. A Kubernetes cluster is composed of a control plane (historically called the master node, though it may span multiple machines) and one or more worker nodes. The control plane manages overall cluster state, while worker nodes execute containerized workloads as pods.
Figure 1 — Kubernetes Cluster Architecture
| Layer | Components |
| Control Plane (Master Node) | API Server (kube-apiserver), Controller Manager, Scheduler (kube-scheduler), etcd (cluster store), Cloud Controller |
| Worker Nodes 1…N | Pods (Pod A, Pod B, …); each node runs kubelet and kube-proxy |

The control plane schedules and orchestrates workloads onto the worker nodes.
Each worker node runs a kubelet (node agent), a kube-proxy (network rules manager), and a container runtime. Pods are the atomic unit of deployment and may contain one or more containers sharing a network namespace. Networking must facilitate communication across all of these boundaries — and this is precisely where CNI plugins come in.
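For orientation, the standard kubectl views below surface exactly these pieces: node addresses and kubelet versions, plus the IP each pod has been assigned. Both commands work on any conformant cluster:

```bash
# List worker nodes with their internal IP addresses and kubelet versions.
kubectl get nodes -o wide

# List all pods with their assigned IPs and the nodes they run on;
# a quick way to observe the cluster's pod addressing model in practice.
kubectl get pods --all-namespaces -o wide
```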
3. The Container Network Interface (CNI)
The Container Network Interface (CNI) is a specification and collection of libraries that define how network plugins should be written for Linux container environments. CNI plugins are responsible for three core networking tasks on worker nodes:
• Configuring network interfaces for newly scheduled pods.
• Assigning IP addresses to pods from a defined address space.
• Maintaining network connectivity throughout the pod lifecycle.
The CNI specification is deliberately generic — it does not mandate a single networking approach. Instead, it provides a standard interface so that different network implementations (overlay networks, native VPC networking, etc.) can be plugged in depending on the environment and requirements.
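Concretely, the standard interface takes the form of JSON configuration files that the container runtime reads from /etc/cni/net.d/ on each worker node. The sketch below is a generic illustration using the reference "bridge" and "host-local" plugins; the network name and subnet are hypothetical and do not reflect OKE's actual configuration:

```bash
# Illustrative only: a minimal CNI network config in the format the CNI
# specification defines. "bridge" and "host-local" are reference CNI
# plugins; the network name and subnet here are hypothetical.
# Files like this live under /etc/cni/net.d/ on each worker node.
cat <<'EOF'
{
  "cniVersion": "0.4.0",
  "name": "example-net",
  "type": "bridge",
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.1.0/25"
  }
}
EOF
```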
| Key Constraint All node pools within a single Kubernetes cluster must use the same CNI plugin. The CNI plugin selection is made at cluster creation time and cannot be changed afterward. This decision should be made carefully, as it has architectural implications for IP address management, node pool compatibility, and network performance. |
3.1 CNI Plugin Selection by Network Type
The choice of CNI plugin is determined by the network type configured for the cluster. Each network type maps to a specific CNI implementation:
Figure 2 — CNI Plugin Selection Reference
| Network Type | CNI Plugin Used | Node Pool Compatibility |
| Flannel Overlay | Flannel CNI Plugin | Managed Node Pools only |
| VCN-Native Pod Networking | OCI VCN-Native CNI Plugin | Managed + Virtual Node Pools |
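The selection is expressed at cluster creation time. As a hedged sketch, the OCI CLI accepts the CNI type through the cluster's pod network options (the cniType values OCI_VCN_IP_NATIVE and FLANNEL_OVERLAY follow the OKE API; verify the exact flag names against `oci ce cluster create -h` for your CLI version, and note that all OCIDs below are placeholders):

```bash
# Sketch: create an OKE cluster with VCN-native pod networking.
# All OCIDs are placeholders; verify flag names with `oci ce cluster create -h`.
oci ce cluster create \
  --compartment-id ocid1.compartment.oc1..example \
  --vcn-id ocid1.vcn.oc1..example \
  --name demo-cluster \
  --kubernetes-version v1.26.2 \
  --cluster-pod-network-options '[{"cniType": "OCI_VCN_IP_NATIVE"}]'
```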
3.2 CNI Plugin Version Management
When a cluster is first created, it automatically uses the most recent version of the selected CNI plugin. Operators have two options for managing plugin updates:
• Oracle-managed updates (default): Oracle automatically deploys new plugin versions as they are released.
• Manual version management: The operator selects and applies specific versions, taking responsibility for keeping the plugin current with cluster requirements.
| Important Regardless of who is responsible for triggering updates, CNI plugin updates are only applied when worker nodes are next rebooted. To check for pending updates on an OCI VCN-native cluster, inspect the logs of the VCN-native IP CNI daemonset: if the logs report pending updates, a reboot is required; if they report nothing, the cluster is fully up to date. |
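A sketch of that log check, assuming the daemonset carries the label app=vcn-native-ip-cni in the kube-system namespace (confirm the exact daemonset name and label in your cluster):

```bash
# Any output indicates CNI plugin updates are pending a worker node reboot;
# no output means the cluster is up to date. Label and namespace assumed.
kubectl logs -n kube-system -l app=vcn-native-ip-cni --prefix | grep -i update
```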
Figure 3 — CNI Plugin Update Lifecycle
| Cluster Created (latest CNI version) | → | Oracle-Managed (auto-update, default) or Manual Version (user responsibility) | → | Updates Applied on Next Worker Node Reboot |
4. Flannel CNI Plugin
Flannel is a simple, widely adopted overlay networking solution for Kubernetes. It creates a private virtual network that spans all worker nodes in the cluster, enabling pods to communicate with each other regardless of which physical node they are running on.
4.1 How Flannel Works
Flannel operates by assigning a unique subnet to each worker node and encapsulating pod traffic in an overlay network (typically using VXLAN or UDP encapsulation). Key operational characteristics include:
• Pod networking is decoupled from the VCN CIDR block — pod IP addresses do not consume VCN IP address space.
• All pod communication is restricted to pods within the same cluster; pods are not directly reachable from outside the cluster by default.
• Compatible only with managed node pools — virtual node pools are not supported with the Flannel CNI plugin.
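On a Flannel worker node, this design is directly observable: the overlay appears as a VXLAN network device, and the node's subnet lease is written to a well-known file. Both names below are Flannel's conventional defaults and may differ in a given deployment:

```bash
# Show the VXLAN device Flannel typically creates (default name: flannel.1).
ip -d link show flannel.1

# The per-node subnet lease Flannel writes out (conventional default path);
# FLANNEL_SUBNET is the block assigned to this node (/25 in OKE's setup).
cat /run/flannel/subnet.env
```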
4.2 IP Address Allocation with Flannel
Flannel uses CIDR-based allocation to distribute IP addresses to pods. The following reference table summarizes the key allocation parameters:
Figure 4 — Flannel IP Allocation Reference
| Flannel CNI — IP Allocation Reference | |
| API Endpoint Subnet | /30 CIDR block (1 IP required) |
| Per Worker Node Allocation | /25 CIDR block = 128 addresses (127 usable) |
| Max Pods per Node | 110 pods (platform cap) |
| Default Cluster Pod CIDR | /16 → supports up to 512 nodes |
| Pods CIDR Mutability | Immutable after cluster creation |
| Flannel Scope | Pods within same cluster only (no external access) |
CIDR Block Planning
When creating a Flannel-based cluster, careful planning of CIDR blocks is essential. The following rules must be observed:
• The Kubernetes API endpoint subnet requires only a /30 CIDR block (a single IP address is sufficient).
• The pod CIDR block must not overlap with the API endpoint, worker node, or load balancer subnets.
• Each worker node is allocated a /25 CIDR block (128 addresses, 127 usable), accommodating up to 110 pods per node.
• The cluster-level pod CIDR block defaults to /16, which supports up to 512 worker nodes. To exceed this limit, a larger pod CIDR block must be specified at cluster creation time — this value is immutable after creation.
| Planning Note The cluster’s pod CIDR block is the most consequential networking decision for Flannel-based clusters. Because it cannot be modified after cluster creation, operators must account for anticipated growth. A /16 default supports 512 nodes, which is sufficient for most deployments — but large-scale clusters should specify a wider block using the custom create workflow. |
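The node-count arithmetic follows directly from the prefix lengths: a /16 pod CIDR divided into /25 per-node blocks yields 2^(25-16) = 512 allocations. A minimal sketch of the same calculation for capacity planning:

```bash
# Number of /25 per-node blocks that fit in a cluster pod CIDR.
cluster_prefix=16   # e.g. 10.244.0.0/16, the default pod CIDR
node_prefix=25      # Flannel's per-node allocation in OKE
echo $(( 1 << (node_prefix - cluster_prefix) ))   # prints 512

# A wider /14 pod CIDR would support 1 << (25 - 14) = 2048 nodes.
```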
5. OCI VCN-Native Pod Networking CNI Plugin
The OCI VCN-Native Pod Networking CNI plugin takes a fundamentally different approach from Flannel. Rather than using an overlay network, it integrates directly with OCI’s Virtual Cloud Network (VCN), assigning real VCN IP addresses to pods using the VNIC (Virtual Network Interface Card) secondary IP mechanism.
This approach eliminates network encapsulation overhead, enables direct routing of pod traffic within the VCN, and provides full OCI network policy enforcement at the pod level.
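Because pod IPs are ordinary VCN secondary private IPs, they can be inspected with standard OCI networking commands. A sketch (all OCIDs are placeholders):

```bash
# List the VNIC attachments on a worker node instance.
oci compute vnic-attachment list \
  --compartment-id ocid1.compartment.oc1..example \
  --instance-id ocid1.instance.oc1..example

# List the private IPs (primary plus secondaries) held by one VNIC;
# under VCN-native networking, the secondary IPs are the pod IPs.
oci network private-ip list --vnic-id ocid1.vnic.oc1..example
```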
5.1 Dual Subnet Architecture
A defining characteristic of VCN-native pod networking is its use of two distinct subnets per node pool, each serving a specific purpose (the cluster-level API endpoint subnet is shown alongside them for reference):
Figure 5 — OCI VCN-Native Dual Subnet Architecture
| Subnet | Primary Role | Type | Gateway |
| Worker Node Subnet | Control plane ↔ worker node processes (kubelet, kube-proxy) | Private or Public (Regional recommended) | Internet GW / NAT |
| Pod Subnet | Pod-to-pod communication; direct pod IP access; OCI service access | Regional (required) | Service GW / NAT GW |
| API Endpoint Subnet | Kubernetes API server endpoint — single IP required | Private (/30 sufficient) | — |
• Worker Node Subnet: Carries traffic between control plane components (kube-apiserver, kube-controller-manager, kube-scheduler) and worker node processes (kubelet, kube-proxy). Can be private or public; a regional subnet is recommended.
• Pod Subnet: Carries pod-to-pod traffic and enables direct access to individual pods via their private IP addresses. Must be a regional subnet and must support egress to OCI services via a Service Gateway and to the internet via a NAT Gateway.
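Before creating a node pool, it is worth confirming that both subnets exist with non-overlapping CIDRs. A sketch using the OCI CLI (OCIDs are placeholders; the --query expression is standard JMESPath):

```bash
# List the subnets in the cluster's VCN with name, CIDR, and whether
# public IPs are prohibited (i.e. whether the subnet is private).
oci network subnet list \
  --compartment-id ocid1.compartment.oc1..example \
  --vcn-id ocid1.vcn.oc1..example \
  --query 'data[].{name:"display-name",cidr:"cidr-block",private:"prohibit-public-ip-on-vnic"}' \
  --output table
```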
5.2 IP Address Allocation with VCN-Native Networking
In VCN-native networking, pods receive secondary IP addresses from VNICs attached to their worker node. The capacity model is as follows:
Figure 6 — OCI VCN-Native IP Capacity Reference
| OCI VCN-Native CNI — IP Capacity Reference | |
| Primary VNIC | 1 per worker node (primary IP) |
| Secondary VNICs | Varies by compute shape |
| Secondary IPs per VNIC | Up to 31 secondary IPs |
| Pod IP Source | Secondary IP from VNIC assigned to pod |
| Max Pods formula | (Number of VNICs × 31 secondary IPs) |
| Supported K8s Version | v1.22 or later |
| Supported Node Types | Managed + Virtual Node Pools |
| Service Mesh Support | Istio, Linkerd, OCI Service Mesh (Oracle Linux 7; OL8 planned) |
Each worker node has a primary VNIC with a primary IP address. Depending on the compute shape selected for the node pool, one or more secondary VNICs may also be attached. Each VNIC can hold up to 31 secondary IP addresses. A pod claims one secondary IP address from a VNIC, which it uses for all inbound and outbound communication.
The maximum number of pods schedulable on a given node is therefore a function of the number of VNICs the compute shape supports, multiplied by 31 secondary IPs per VNIC. Operators should consult OCI compute shape documentation to determine VNIC counts for specific shapes.
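As a worked example of that formula: a shape exposing 3 VNICs supports 3 × 31 = 93 secondary IPs, hence up to 93 pods on that node (any cluster-level per-node pod limit still applies). A minimal sketch:

```bash
# Max pods per node under the "VNICs x 31 secondary IPs" model above.
vnics=3             # VNIC count depends on the node's compute shape
ips_per_vnic=31     # secondary IPs each VNIC can hold
echo $(( vnics * ips_per_vnic ))   # prints 93
```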
5.3 Prerequisites and Compatibility
Before deploying the OCI VCN-Native CNI plugin, operators must verify the following requirements:
• Kubernetes Version: Clusters must be running Kubernetes v1.22 or later.
• Worker Node Image: If using an OKE-specific base image, do not select images released before June 2022.
• Node Pool Compatibility: Supported on both managed and virtual node pools — an advantage over Flannel.
• Service Mesh Support: Istio, Linkerd, and OCI Service Mesh are supported. Worker nodes must run Kubernetes 1.26 or later; Oracle Linux 7 is currently supported, with Oracle Linux 8 support planned.
• Security Rules: Specific ingress and egress security rules are required for both the pod subnet and the worker node subnet. Consult OCI official documentation for the complete rule set.
| Security Consideration When using the OCI VCN-Native CNI plugin, pod subnets and worker node subnets require distinct security list rules. Failing to configure these correctly will prevent pod scheduling, inter-pod communication, or control plane connectivity. Always validate security rules against the OCI documentation before creating a production cluster. |
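Several of these prerequisites can be checked directly with kubectl; the fields below are standard Kubernetes node status fields:

```bash
# Confirm every node runs Kubernetes v1.22 or later (v1.26+ for service
# mesh support) and inspect each node's OS image.
kubectl get nodes -o custom-columns=NAME:.metadata.name,VERSION:.status.nodeInfo.kubeletVersion,OS:.status.nodeInfo.osImage
```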

Recent Comments