Policies are OCI’s access control mechanism — they determine who can do what, and where. Getting them right is the difference between a team that can self-serve on OKE and one that’s constantly blocked waiting for admin intervention. This post covers every policy category you’ll need: required, quick-create, and optional.
## Who needs policies, and who doesn’t
When a tenancy is created, OCI automatically creates an administrators group. Members of this group can perform any action on any resource without needing additional policies — they have full, unrestricted access by default.
Everyone else needs explicit policies. If you want non-admin users to create, update, or delete OKE clusters, you must write policy statements that grant those specific permissions to their group. OCI follows a deny-by-default model: nothing is permitted unless explicitly allowed.
| User type | OKE access |
| --- | --- |
| Administrators group | Full OKE access — no extra policies required |
| All other groups | Explicit policies required for every OKE action |
Policy statements follow a consistent pattern in OCI:
```
Allow group <group-name> to <verb> <resource-type> in <location>
```

Replace `<group-name>` with your actual group name, and `<location>` with either `tenancy` (for root-level access) or `compartment <name>` for compartment-scoped access.
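For example, a single statement granting full cluster administration to a hypothetical group `oke-devs` in a hypothetical compartment `dev` would look like this:

```
Allow group oke-devs to manage cluster-family in compartment dev
```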
## Required policies for OKE operations
These policy statements are the minimum needed for a non-admin user group to create, update, and delete clusters and node pools. Every OKE deployment needs all of these in place.
| Policy statement | Purpose |
| --- | --- |
| `manage cluster-family` | Admin-level access to all cluster resources (the “catch-all”) |
| `manage instance-family` | Create and manage compute instances for nodes |
| `read virtual-network-family` | Read VCN and subnet configurations |
| `use network-security-groups` | Attach and use NSGs for cluster networking |
| `use subnets` | Place cluster nodes and endpoints into subnets |
| `use vnics` | Attach VNICs to node instances |
| `inspect compartments` | Read compartment structure and metadata |
| `use private-ips` | Required for VCN-native clusters (always needed) |
| `manage public-ips` | Required only if the cluster has a public IP endpoint |
> **Note:** The `use private-ips` permission is always required for VCN-native clusters. The `manage public-ips` permission is needed only when the cluster’s Kubernetes API endpoint has a public IP address. For clusters with a public Kubernetes endpoint in an Oracle-managed tenancy, you need `use vnics`, `use private-ips`, and `manage public-ips` together.
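Written out in full, the required set looks like the following, using the hypothetical placeholder values `oke-users` and `compartment dev` (the final `manage public-ips` line applies only if the cluster has a public API endpoint):

```
Allow group oke-users to manage cluster-family in compartment dev
Allow group oke-users to manage instance-family in compartment dev
Allow group oke-users to read virtual-network-family in compartment dev
Allow group oke-users to use network-security-groups in compartment dev
Allow group oke-users to use subnets in compartment dev
Allow group oke-users to use vnics in compartment dev
Allow group oke-users to inspect compartments in compartment dev
Allow group oke-users to use private-ips in compartment dev
Allow group oke-users to manage public-ips in compartment dev
```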
## Quick-create workflow policies
If your users will be creating clusters using the OCI Console’s quick-create workflow — which automatically provisions all the network resources — they need these additional policies on top of the required set above:
```
Allow group <group-name> to manage vcns in <location>
Allow group <group-name> to manage subnets in <location>
Allow group <group-name> to manage internet-gateways in <location>
Allow group <group-name> to manage nat-gateways in <location>
Allow group <group-name> to manage route-tables in <location>
Allow group <group-name> to manage security-lists in <location>
```
If your users are specifying pre-existing network resources during cluster creation, these quick-create policies are not needed — the required set from the previous section is sufficient.
## Optional policies
Beyond the required statements, OCI offers several optional policies that give you finer-grained control over specific OKE features. None of these are needed to simply create and run a cluster, but they unlock specific capabilities your team may need.
| Optional policy | Permission statement | When to use it |
| --- | --- | --- |
| Cloud Shell access | `use cloud-shell in tenancy` | Users need to access clusters via OCI Cloud Shell |
| Vault + encryption keys | `read vaults` + `read keys` | Users select customer-managed encryption keys for volumes |
| Cluster inspection | `inspect clusters`, `use cluster-node-pools`, `read cluster-work-requests` | Read-only monitoring and visibility into cluster state |
| Service gateway | `manage service-gateways` | Worker nodes need to access OCI services without public internet |
| Capacity reservation | `use compute-capacity-reservations` | Reserve compute capacity in advance for cluster nodes |
The service gateway policy is particularly useful in production setups — it lets worker nodes access other OCI services in the same region without routing traffic over the public internet, improving both security and latency. The vault policy is needed if your team wants to use customer-managed encryption keys for boot volumes and block volumes attached to cluster nodes.
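As a sketch, the Vault-related statements might look like this, again using the hypothetical `oke-users` group and `dev` compartment:

```
Allow group oke-users to read vaults in compartment dev
Allow group oke-users to read keys in compartment dev
```

Scoping these to `read` rather than `manage` lets users select existing keys at cluster creation time without being able to alter or delete them.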
## Kubernetes RBAC on top of IAM
IAM policies control access at the OCI infrastructure level — who can create or delete clusters, node pools, and associated resources. But once users are inside a cluster, Kubernetes has its own access control layer: RBAC (Role-Based Access Control).
OKE supports the Kubernetes RBAC authorizer, which lets you define fine-grained permissions at the cluster level using Kubernetes Role, ClusterRole, RoleBinding, and ClusterRoleBinding objects. This means you can have a user with IAM permissions to access a cluster but restricted to reading pods in a specific namespace — the two layers work together.
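As a minimal sketch of that pods-only scenario, assuming a hypothetical namespace `team-a` and an example user OCID (OKE identifies IAM users in RBAC subjects by their user OCID):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: team-a
rules:
- apiGroups: [""]          # core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: team-a
subjects:
- kind: User
  name: ocid1.user.oc1..exampleuniqueID   # the IAM user's OCID (placeholder)
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

With this in place, the user can authenticate to the cluster thanks to their IAM permissions, but inside the cluster they can only read pods in `team-a`.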
> **Best practice:** Use OCI IAM policies to control who can manage the cluster infrastructure, and Kubernetes RBAC to control what users can do inside the cluster. Keep the two concerns separate and you’ll have a much cleaner access model.
## Putting it all together
Here’s a practical summary of which policy set applies to which scenario:
| Scenario | Policies needed |
| --- | --- |
| Non-admin users managing clusters | Required set (`cluster-family` + supporting) |
| Users creating clusters via quick-create | Required set + quick-create networking policies |
| Encrypted volumes, Cloud Shell, Vault | Required set + relevant optional policies |
> Always replace the `<group-name>` and `<location>` placeholders with your actual values. A policy containing a literal placeholder will not work: it appears to exist but grants nothing, and the resulting errors can be subtle.
