My modded OCI-OKE Terraform Module & Why You Should Try It!

My Terraform OCI-OKE Quickstart fork: easier, more user-friendly Kubernetes on Oracle Cloud.

Intro

If you’ve ever set up Kubernetes clusters on Oracle Cloud with Terraform or Resource Manager, you’ve probably tried the OKE Quickstart modules. Sure, they’re a solid starting point: streamlined and efficient. But after diving deeper into them, I realized the stack wasn’t exactly easy to use; it felt more readable to its builders than to us users.
So I forked the official module and gave it a few tweaks, turning it into an Easy Quickstart: a version that’s user-friendly, intuitive, and actually makes sense when you’re designing your baseline cluster from the ground up.

Why Quickstart Over the terraform-oci-oke Module?

OCI offers two popular module stacks for OKE:

  • terraform-oci-oke-quickstart:
    A highly opinionated, out-of-the-box solution, ideal for rapid deployment with reasonable defaults and tooling integration.
  • terraform-oci-oke:
    A comprehensive, highly customizable alternative that requires deeper knowledge of OCI.
    • My cons:
      • Complexity: The comprehensive feature set can overwhelm users with less experience.
      • Longer Setup Time: Extensive configuration options may slow down the deployment process.
      • Steep Learning Curve: Not ideal for beginners or quick deployments.

I chose Quickstart because:

  1. Ease of Use: Simplified for quick setup, focusing on essential parameters aligned with reference architectures.
  2. Rapid Deployment: Facilitates quick OKE cluster deployments with minimal configuration.
  3. Reference-Based: Pre-configured for common scenarios, making it ideal for testing and POC projects.

But what’s good in the new fork?

Struggling to navigate the defaults in a module, or can’t get exactly the cluster you want? That’s what this fork is for!
To cover the most common use cases, I introduced the following changes:

1. Repatriated Submodules Locally 📦

By moving the oci-networking submodules (e.g., VCN, subnet, NAT gateway) into my local repo:

  • I reduced external dependencies, so new module versions can’t break the stack before they’re fully tested.
  • I simplified customization (e.g., modifying module logic locally), as sketched below.
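
For example, the networking module now points at a local path instead of a remote registry source, so updates only land when I pull them in deliberately. A minimal sketch (the fork’s actual paths and inputs may differ):

# Before: remote registry source, refreshed whenever a new version is published
# module "vcn" {
#   source  = "oracle-terraform-modules/vcn/oci"
#   version = "~> 3.0"
# }

# After: repatriated local copy, fully under this repo's control
module "vcn" {
  source = "./modules/oci-networking/vcn"  # hypothetical local path

  compartment_ocid = var.compartment_ocid
  vcn_cidr_block   = "10.20.0.0/16"        # matches the stack's default VCN CIDR
}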

2. Added All Useful Variables in One Spot 🚀

I updated the terraform.tfvars and env-vars (TF_VAR) files to include the variables that matter most to end users (see the sketch after this list):

  • cluster_name: Easy naming for the OKE cluster via the app_name variable.
  • cluster_type: Whether you want an enhanced or basic cluster. Also fixed the missing reference in the OKE main.tf.
  • k8s_version: Specify the Kubernetes version upfront.
  • node_k8s_version: Specify the worker nodes’ version upfront.
  • node_pool_instance_shape_1: Choose a non-default instance shape.
  • initial_num_worker_nodes: Specify how many worker nodes to start with at deployment.
  • node_pool_min, node_pool_max: Control autoscaling behavior per use case.
  • addon flags: Enable or disable Prometheus, Grafana, or ingress.
  • ingress_email_issuer: Ensures the certificate provider can contact you about expiring certificates.
  • create_new_vcn: Whether to create a new VCN or reuse an existing one.
  • node_pool_name_1: Name of the first node pool.
  • And many more…
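
To make this concrete, here’s a minimal terraform.tfvars sketch wiring these variables together (values are illustrative and mirror the env-vars example later in this post):

app_name     = "oke-demo"            # drives the cluster name
cluster_type = "ENHANCED_CLUSTER"    # or "BASIC_CLUSTER"

k8s_version      = "v1.30.1"         # control plane version
node_k8s_version = "v1.30.1"         # worker nodes version

node_pool_name_1                     = "pool1"
node_pool_instance_shape_1           = { instanceShape = "VM.Standard.E4.Flex", ocpus = 2, memory = 16 }
node_pool_initial_num_worker_nodes_1 = 1
node_pool_max_num_worker_nodes_1     = 3

prometheus_enabled    = true         # addon flags
grafana_enabled       = true
ingress_nginx_enabled = true

create_new_vcn              = true   # or reuse an existing VCN
cluster_endpoint_visibility = "Public"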

3. Keeping code from breaking on Day 2 🤕⚡

The code relies on dynamic data source values like “latest”, which can bite you when the catalog is updated.
That’s why I added ignore_changes blocks to resources like “node_pool” to prevent Terraform from breaking the deployment when:

  • Running terraform apply on day 2 while your node image_id isn’t the most recent anymore.
  • Manually scaling the node pool to add or remove capacity (life happens 😉).

Lifecycle section:
resource "oci_containerengine_node_pool" "oke_node_pool" {
  cluster_id     = var.oke_cluster_ocid
  compartment_id = var.oke_cluster_compartment_ocid
  ...
  # --- snippet ---
  lifecycle {
    ignore_changes = [
      node_config_details.0.size,      # ignore manual scaling of the pool
      node_source_details.0.image_id   # ignore newer images in the catalog
    ]
  }
}
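
For context, the day-2 drift usually comes from a lookup that always resolves to the newest image, along these lines (a sketch; the fork’s actual filters may differ):

# Sorted by creation time, images[0].id changes every time OCI publishes
# a new image, which would force node recreation on the next apply
data "oci_core_images" "node_image" {
  compartment_id           = var.compartment_ocid
  operating_system         = "Oracle Linux"
  operating_system_version = "8"
  shape                    = "VM.Standard.E4.Flex"
  sort_by                  = "TIMECREATED"
  sort_order               = "DESC"
}

Without the ignore_changes guard above, any node_source_details referencing images[0].id would keep drifting on every catalog refresh.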

Key Benefits

  • Focus on common use cases: All the necessary variables are centralized in one configuration file, so you can deploy with minimal effort.
  • Full control over all modules: All modules are repatriated locally, reducing the risk of bugs caused by remote updates.
  • Stack consistency: Data-source-dependent resources are stabilized against dynamic values in future apply runs.

Getting started

I. My OKE Quickstart Fork on GitHub

Since I’ve already built a multi-cloud Terraform GitHub repository, I added this fork under the terraform-provider-oci directory (see below):

Repo: https://github.com/brokedba/terraform-examples

OKE Fork Directory: located under the terraform-provider-oci/oke-quickstartz subdirectory.

  • Last update since my fork: the upstream repo version went from 0.9.2 to 0.9.3.
  • I’ll do my best to keep it up to date in the future (no guarantees).
  • Contributions are welcome 🙏🏻

Repository: github.com/brokedba/terraform-examples
Subdirectory: oke-quickstartz
Version: latest (0.9.3)

II. Quick Deployment Guide


Step 1: Clone the Repo

$ git clone https://github.com/brokedba/terraform-examples.git
$ cd terraform-examples/terraform-provider-oci/oke-quickstartz
$ terraform init

Step 2: Set Up Environment Variables

Use an env-vars file to simplify deployment with the following content, replacing the placeholders with your values:

export TF_VAR_user_ocid="ocid1.user.oc1..."
export TF_VAR_fingerprint="3b:..."
export TF_VAR_private_key_path="~/.oci/oci_api_key.pem"
export TF_VAR_tenancy_ocid="ocid1.tenancy.oc1...."
export TF_VAR_region="ca-toronto-1"
export TF_VAR_availability_domain_number="1"
export TF_VAR_ssh_public_key=$(cat ~/.ssh/id_rsa.pub)
export TF_VAR_compartment_ocid="ocid1.compartment.oc1...."
export TF_VAR_cluster_type="ENHANCED_CLUSTER"
export TF_VAR_prometheus_enabled=true
export TF_VAR_grafana_enabled=true
export TF_VAR_ingress_nginx_enabled=true
export TF_VAR_node_pool_instance_shape_1='{"instanceShape":"VM.Standard.E4.Flex","ocpus":2,"memory":16}'
export TF_VAR_node_pool_name_1="pool1" #oke_pool
export TF_VAR_node_pool_initial_num_worker_nodes_1=1
export TF_VAR_node_k8s_version="v1.30.1"  # worker nodes version
export TF_VAR_k8s_version="v1.30.1"       # master /control plane node version
export TF_VAR_node_pool_max_num_worker_nodes_1=3
export TF_VAR_cluster_endpoint_visibility="Public"

Then source the file to load the variables:

$ source env-vars

Step 3: Run and Preview the Plan

Run terraform plan to see exactly what will be created:

$ terraform plan

Overview

The plan provisions an OKE cluster along with supporting infrastructure (VCN, subnets, security lists, and node pools) and integrates tools like Prometheus, Grafana, and Metrics Server for monitoring. It will create 32 new resources.

Plan output (truncated):
Plan: 32 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + app_url                         = (known after apply)
  + cluster_endpoint_visibility     = "Public"
  + cluster_type_value              = "ENHANCED_CLUSTER"
  + comments                        = "The application URL will be unavailable for a few minutes after provisioning while the application is configured and deployed to Kubernetes"
  + deploy_id                       = (known after apply)
  + deployed_oke_kubernetes_version = "v1.30.1"
  + deployed_to_region              = "ca-toronto-1"
  + dev                             = "Made with ❤ by Oracle Developers. Forked and Hacked by @Clouddude🍉"
  + generated_private_key_pem       = (sensitive value)
  + grafana_admin_password          = (sensitive value)
  + grafana_url                     = (known after apply)
  + kubeconfig                      = (sensitive value)
  + kubeconfig_for_kubectl          = "export KUBECONFIG=./generated/kubeconfig"
  + oke_cluster_ocid                = (known after apply)
  + oke_node_pools                  = {
      + pool1 = {
          + node_k8s_version             = "v1.30.1"
          + node_pool_autoscaler_enabled = true
          + node_pool_id                 = (known after apply)
          + node_pool_max_nodes          = "3"
          + node_pool_min_nodes          = "1"
          + node_pool_name               = "pool1"
        }
    }
  + stack_version                   = "0.9.2"
  + subnets                         = {
      + oke_k8s_endpoint_subnet = {
          + subnet_id   = (known after apply)
          + subnet_name = "oke_k8s_endpoint_subnet"
        }
      + oke_lb_subnet           = {
          + subnet_id   = (known after apply)
          + subnet_name = "oke_lb_subnet"
        }
      + oke_nodes_subnet        = {
          + subnet_id   = (known after apply)
          + subnet_name = "oke_nodes_subnet"
        }
    }

Final Step: Deploy Your Cluster

Once the configuration is complete, deploy it with terraform apply:

$ terraform apply
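
When the apply completes, point kubectl at the new cluster using the kubeconfig_for_kubectl output shown in the plan above:

$ export KUBECONFIG=./generated/kubeconfig
$ kubectl get nodes   # verify the worker nodes have registered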

Key Resources to Be Created

  • VCN (oci_core_vcn.main): Serving as the networking backbone with CIDR: 10.20.0.0/16.
  • Subnets (oci_core_subnet):
    • oke_k8s_endpoint_subnet: For the Kubernetes API endpoint.
    • oke_lb_subnet: For the load balancer.
    • oke_nodes_subnet: For worker nodes, restricted from public internet access.
  • Gateways: Provide internet, NAT, and OCI service access.
    • oci_core_internet_gateway
    • oci_core_nat_gateway
    • oci_core_service_gateway
  • OKE Cluster (oci_containerengine_cluster.oke_cluster):
    • Enhanced Kubernetes cluster (v1.30.1).
    • Pod CIDR: 10.244.0.0/16.
    • Services CIDR: 10.96.0.0/16.
    • Public Kubernetes API endpoint (switchable to private via cluster_endpoint_visibility).
  • Node Pool (oci_containerengine_node_pool):
    • Name: pool1, up to 3 nodes (min: 1, max: 3).
    • Shape: VM.Standard.E4.Flex (2 OCPUs, 16GB memory).
  • Security Lists:
    • Granular rules for ingress/egress traffic between pods, worker nodes, the control plane, and load balancers.
  • Route Tables:
    • For directing traffic to OCI services and internet.
  • Helm Releases:
    • Grafana: Configured with subpath and root URL.
    • Prometheus: Includes scrape configs for monitoring NGINX.
    • Metrics Server: Enables resource usage tracking in Kubernetes.
    • Ingress NGINX: Deployed with custom annotations for OCI load balancers.
  • Kubernetes Ingress (kubernetes_ingress_v1.grafana):
    • Configures ingress rules for accessing Grafana externally (see the sketch below).
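
For the curious, that Grafana ingress boils down to something like this (a simplified sketch; the fork’s actual annotations, paths, and service names may differ):

resource "kubernetes_ingress_v1" "grafana" {
  metadata {
    name = "grafana"
    annotations = {
      "kubernetes.io/ingress.class" = "nginx"  # route through the NGINX ingress controller
    }
  }
  spec {
    rule {
      http {
        path {
          path      = "/grafana"  # matches Grafana's subpath/root URL setting
          path_type = "Prefix"
          backend {
            service {
              name = "grafana"
              port {
                number = 80
              }
            }
          }
        }
      }
    }
  }
}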

III. Future enhancements 🚀

Stay tuned! ⚡

Conclusion

This fork was pimped to save you time, make deployments real-world ready, and keep things user-friendly—especially if you’ve just aced the OKE Specialist Certification!
I hope you’ll try it, share your feedback, and join the contributors on GitHub to make it even better. Kube it up! 🚀
Check it out on GitHub. 😊
