Release Notes: Kong Ingress Controller OSS & Istio mTLS Migration

Overview

This release introduces a major architectural upgrade to our cluster ingress and service mesh infrastructure. We have successfully migrated from the community NGINX Ingress Controller to Kong Ingress Controller OSS as our primary edge proxy. In tandem with this migration, we have fully integrated Istio mTLS to enforce Zero-Trust networking and encrypt all inter-service communication across the cluster.

This strategic upgrade enhances our platform’s scalability, hardens our internal network security posture, and provides a more robust foundation for advanced API management, authentication, and traffic control.


Key Achievements & Deliverables

1. Ingress Controller Migration (NGINX to Kong)

We have fully decoupled our application suite (n-apps-ui, n-apps-api, trinity, identity-server, etc.) from NGINX and transitioned it to Kong.

  • Routing & Proxying: Configured Kong as the centralized entry point (ingressClassName: kong) for all external traffic.
  • Annotation Mapping: Refactored and translated more than 20 legacy NGINX-specific annotations into their native Kong CRD equivalents.
  • Protocol Support: Ported complex routing requirements natively to Kong, including gRPC and WebSocket support.
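As an illustration of the mapping, a migrated Ingress might look like the following (the host, service, and port values are hypothetical; `konghq.com/strip-path` and `konghq.com/protocols` are the native Kong annotations that replace the corresponding NGINX behaviour):

```shell
# Write an example migrated Ingress manifest. The NGINX rewrite behaviour is
# expressed with konghq.com/strip-path, and gRPC routing with konghq.com/protocols.
cat <<'EOF' > n-apps-api-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: n-apps-api
  annotations:
    # NGINX equivalent was: nginx.ingress.kubernetes.io/rewrite-target: /
    konghq.com/strip-path: "true"
    # Route gRPC traffic natively through Kong.
    konghq.com/protocols: grpc,grpcs
spec:
  ingressClassName: kong
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: n-apps-api
                port:
                  number: 8080
EOF
grep -c 'konghq.com' n-apps-api-ingress.yaml   # prints 2 (both Kong annotations present)
```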

2. Security Hardening & Zero-Trust Architecture

  • Istio mTLS Enforcement: Deployed Istio Envoys as sidecar proxies across all critical backend pods. Configured cluster-wide PeerAuthentication policies set to STRICT mode, guaranteeing that all internal pod-to-pod communication is cryptographically validated and encrypted.
  • Security Header Injection: Replaced brittle, plaintext NGINX configuration snippets with Kong’s response-transformer plugin to dynamically inject X-Frame-Options and secure web headers at the edge.
  • mTLS Edge Authentication: Replaced NGINX’s static client-certificate verification (auth-tls-verify-client) by offloading mutual TLS (mTLS) enforcement to Istio. By leveraging Istio’s authentication policies and sidecar architecture, we achieve robust, dynamic client authentication at the edge and internally, without requiring Enterprise-only Kong plugins.
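The two policies described above can be sketched as manifests (the namespace and header values here are illustrative assumptions, not the exact production configuration):

```shell
# Mesh-wide STRICT mTLS plus edge header injection via Kong's
# response-transformer plugin, written out as example manifests.
cat <<'EOF' > zero-trust-policies.yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system        # root namespace => applies mesh-wide
spec:
  mtls:
    mode: STRICT                 # reject any plain-text pod-to-pod traffic
---
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: security-headers
plugin: response-transformer
config:
  add:
    headers:
      - "X-Frame-Options: DENY"
      - "X-Content-Type-Options: nosniff"
EOF
grep -q 'mode: STRICT' zero-trust-policies.yaml && echo "STRICT policy present"
```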

3. Codebase & Infrastructure Cleanup

  • Helm Chart Standardization: Refactored the core Helm templates (ingress.yaml, service.yaml) across all platform microservices to dynamically toggle between Kong and NGINX configurations based on a unified Values.ingress.ingressClass flag.
  • Deprecation of Legacy Dependencies: Safely excised deprecated NGINX configurations, reducing controller bloat and optimizing cluster resource utilization.
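A minimal sketch of the ingress.yaml toggle, assuming the unified `ingress.ingressClass` values flag described above (the surrounding template fields and annotations are illustrative):

```shell
# Example Helm template fragment: the same ingress.yaml serves both controllers,
# switching annotations and class on .Values.ingress.ingressClass.
cat <<'EOF' > ingress.yaml.tpl
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ .Release.Name }}
  annotations:
    {{- if eq .Values.ingress.ingressClass "nginx" }}
    nginx.ingress.kubernetes.io/rewrite-target: /
    {{- else }}
    konghq.com/strip-path: "true"
    {{- end }}
spec:
  ingressClassName: {{ .Values.ingress.ingressClass }}
EOF
grep -q 'ingressClassName: {{ .Values.ingress.ingressClass }}' ingress.yaml.tpl && echo "toggle in place"
```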

Exploring the Role of Istio (The Service Mesh)

While Kong manages traffic entering the cluster, Istio secures and coordinates the traffic between internal services.

Where is Istio Used?

  • Istio is deployed as a Service Mesh across application namespaces (e.g., kong-istio-project). It operates via Sidecar Proxies (Envoy) injected into pods like n-apps-api or trinity.
  • Interception: These sidecars intercept all internal network requests to enforce encryption and security policies.
  • Management: The Istio control plane (istiod) centrally manages these proxies, pushing dynamic security certificates and routing rules.

Why was Istio Introduced?

The primary goal is Zero-Trust Network Architecture:

  • Mandatory Encryption (mTLS): In standard Kubernetes, internal traffic is often plain-text. Istio uses Mutual TLS (mTLS) with STRICT enforcement to ensure every internal connection is encrypted and cryptographically verified.
  • Identity-Based Security: Istio allows us to define security rules based on Service Identity rather than IP addresses, preventing unauthorized lateral movement within the cluster.
  • Separation of Concerns: This architecture allows Kong to focus on external API management (Rate limiting, CORS) while Istio handles internal microservice-to-microservice trust.
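For example, identity-based access between two of the services above could be expressed with an Istio AuthorizationPolicy like this (the namespace, labels, and service-account names are assumptions):

```shell
# Only the n-apps-api service account (a SPIFFE identity, not an IP range)
# may call trinity; everything else is denied by the ALLOW policy's presence.
cat <<'EOF' > trinity-authz.yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: trinity-allow-api
  namespace: kong-istio-project
spec:
  selector:
    matchLabels:
      app: trinity
  action: ALLOW
  rules:
    - from:
        - source:
            # Service identity, not an IP address
            principals: ["cluster.local/ns/kong-istio-project/sa/n-apps-api"]
EOF
grep -q 'principals' trinity-authz.yaml && echo "identity-based rule present"
```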

Impact & Client Value

  • Enhanced Security: The implementation of Istio STRICT mTLS ensures that even if the internal cluster network is compromised, unauthorized services cannot intercept or spoof internal API traffic.
  • Future-Proofing: Moving to Kong Ingress Controller OSS unlocks enterprise-grade plugin capabilities (Rate Limiting, OIDC, request transformation) that can be easily layered onto our APIs in future iterations without architectural debt.
  • Improved Observability: Kong’s native integration points offer cleaner layer-7 traffic metrics, simplifying troubleshooting and traffic analysis via Prometheus.

Testing & Validation Status

Extensive validation has been performed across the development and staging tiers:

  • End-to-End browser UI navigation and external API accessibility.
  • Cross-origin routing (/web, /api) tested successfully.
  • Inter-pod gRPC and REST handshakes securely established over Istio sidecars.
  • Trailing-slash (301) redirects to absolute URLs verified as stable on external domains.

NGINX to Kong Upgrade Migration (Customer / Operator Steps)

You will use the provided repository for this release. Kong is the default ingress in the charts; the only manual value changes required are during rollback (to switch back to NGINX).

Prerequisites

  • Helm 3.x installed and configured with access to the target cluster.
  • Console Helm charts from the provided repository (Kong is already configured as default).
  • Controller cluster: For Kong + Istio mTLS, use the --set flags in the upgrade command below.
  • Runtime cluster: If using project namespaces with Ingress resources, run the script in Section 3 after Kong is deployed.

1. Controller Cluster: Upgrade to Kong and Rollback

1.1 Upgrade Controller cluster to Kong (install / set Kong)

From the console-controller-init chart directory, run:

helm dependency update .

helm upgrade --install console-monitoring . \
  --namespace monitoring --create-namespace \
  --values values-sandbox.yaml \
  --set global.istioNamespace=monitoring \
  --set istio.enabled=true \
  --set trinity.istio.enabled=false \
  --set identity-server.istio.enabled=false \
  --set identity-server-v2.istio.enabled=false \
  --set cleanupDisabledResources=false

Wait 20–30 seconds after the first command completes. Then, to enable Istio for the Trinity and Identity Server components on the controller cluster, run:

helm upgrade --install console-monitoring . \
  --namespace monitoring --create-namespace \
  --values values-sandbox.yaml \
  --set global.istioNamespace=monitoring \
  --set istio.enabled=true \
  --set trinity.istio.enabled=true \
  --set identity-server.istio.enabled=true \
  --set identity-server-v2.istio.enabled=true \
  --set cleanupDisabledResources=false

Verify: IngressClass kong exists, and Trinity, Identity Server, and other apps are reachable via Kong.

Rollback (optional) – Controller cluster to NGINX
NOTE: Do this only if something goes wrong and you need to revert to NGINX. The only change required for rollback is to switch your values back to NGINX. You can do this in either of the two ways below:


Update your values file (e.g. values-sandbox.yaml or the file you pass to --values) as follows:

  • Set NGINX on and Kong off:
    • ingress-nginx.enabled: true
    • kong-ingress.enabled: false
  • Set ingressClassName (or ingressClass where applicable) to nginx for all relevant sub-charts:
    • trinity:
      • trinity.ingress.ingressClassName: nginx
    • trinityDeployments:
      • trinity.trinityDeployments.deployment.env.INGRESS_CLASS: nginx
    • identity-server – set to nginx (this chart uses ingressClass):
      • identity-server.ingress.ingressClass: nginx
    • identity-server-v2 – set to nginx (this chart uses ingressClass):
      • identity-server-v2.ingress.ingressClass: nginx
    • cert-manager:
      • cert-manager.ingressClassName: nginx
      • trinity.certmanager.ingressClassName: nginx (cert-manager settings inside the trinity values scope)
    • jaeger:
      • jaeger.jaegeringressClass: nginx
    • kibana:
      • kibana.ingress.className: nginx
    • Trinity License Agent (if enabled under trinity values):
      • trinity.trinityLicenseAgent.trinityLicenseAgentFrontend.ingress.ingressClassName: nginx
      • trinity.trinityLicenseAgent.trinityLicenseAgentBackend.ingress.ingressClassName: nginx
  • If you also want to fully roll back Istio and External Secrets as in the example rollback command:
    • istio.enabled: false
    • trinity.istio.enabled: false
    • identity-server.istio.enabled: false
    • identity-server-v2.istio.enabled: false
    • global.externalSecrets.enabled: false

Important (about helm rollback): Helm rollback does not allow --set overrides and does not “recompute” values. It restores a previous Helm revision exactly as it was deployed.
So, if you want to use helm rollback to return to NGINX, the previous revision must already have all the NGINX settings above, including trinity.trinityDeployments.deployment.env.INGRESS_CLASS=nginx.
If the previous revision was not an NGINX revision (or was missing any of these settings), use the helm upgrade --install ... --values ... / --set ... rollback commands shown below.


Then run the upgrade. Either use your updated values file only:
helm upgrade --install console-monitoring . \
  --namespace monitoring --create-namespace \
  --values values-sandbox.yaml

Or override with `--set` to switch back to NGINX without editing the file:
helm upgrade --install console-monitoring . \
  --namespace monitoring --create-namespace \
  --values values-sandbox.yaml \
  --set istio.enabled=false \
  --set trinity.istio.enabled=false \
  --set identity-server.istio.enabled=false \
  --set identity-server-v2.istio.enabled=false \
  --set ingress-nginx.enabled=true \
  --set kong-ingress.enabled=false \
  --set trinity.ingress.ingressClassName=nginx \
  --set trinity.trinityDeployments.deployment.env.INGRESS_CLASS=nginx \
  --set identity-server.ingress.ingressClass=nginx \
  --set identity-server-v2.ingress.ingressClass=nginx \
  --set cert-manager.ingressClassName=nginx \
  --set trinity.certmanager.ingressClassName=nginx \
  --set jaeger.jaegeringressClass=nginx \
  --set kibana.ingress.className=nginx \
  --set trinity.trinityLicenseAgent.trinityLicenseAgentFrontend.ingress.ingressClassName=nginx \
  --set trinity.trinityLicenseAgent.trinityLicenseAgentBackend.ingress.ingressClassName=nginx \
  --set global.externalSecrets.enabled=false

(Remove any --set lines above for components you do not have enabled in your deployment.)

Verify: IngressClass nginx is in use and applications are reachable via NGINX.


2. Runtime Cluster: Upgrade to Kong and Rollback

2.1 Upgrade Runtime cluster to Kong

From the console-runtime-init chart directory, run:

helm dependency update .

helm upgrade --install console-monitoring . \
  --namespace monitoring --create-namespace \
  --values values.yaml \
  --set global.istioNamespace=monitoring \
  --set istio.enabled=true \
  --set trinity.istio.enabled=false \
  --set identity-server.istio.enabled=false \
  --set identity-server-v2.istio.enabled=false

Then, to enable Istio for Trinity and Identity Server components on the runtime cluster, run:

helm upgrade --install console-monitoring . \
  --namespace monitoring --create-namespace \
  --values values.yaml \
  --set global.istioNamespace=monitoring \
  --set istio.enabled=true \
  --set trinity.istio.enabled=true \
  --set identity-server.istio.enabled=true \
  --set identity-server-v2.istio.enabled=true

Then run the project-namespace Ingress update script (Section 3) so that all project namespaces (from the database) have their Ingress resources updated to use Kong.

Rollback (optional) – Runtime cluster to NGINX

Do this only if something goes wrong and you need to revert to NGINX. The only change required for rollback is to switch your values back to NGINX. You can do this in either of the two ways below.


Important (about helm rollback): Helm rollback restores a previous revision exactly as deployed and does not accept --set overrides. If you want to roll back to NGINX using helm rollback, make sure the target revision is an NGINX revision (with ingress-nginx.enabled=true, kong-ingress.enabled=false, and any component-specific ingressClassName values already set to nginx). Otherwise, use Option A or Option B below.

Option A — Update your values file

Set in your values file (e.g. values.yaml):

  • ingress-nginx.enabled: true
  • kong-ingress.enabled: false
  • ingressClassName to nginx for all components that expose Ingress (e.g. Trinity License Agent frontend/backend, identity-server if used on runtime).

Then run:

helm upgrade --install console-monitoring . \
  --namespace monitoring --create-namespace \
  --values values.yaml

Option B — Use `--set` (no file edit)

helm upgrade --install console-monitoring . \
  --namespace monitoring --create-namespace \
  --values values.yaml \
  --set ingress-nginx.enabled=true \
  --set kong-ingress.enabled=false \
  --set trinityLicenseAgent.trinityLicenseAgentFrontend.ingress.ingressClassName=nginx \
  --set trinityLicenseAgent.trinityLicenseAgentBackend.ingress.ingressClassName=nginx \
  --set identity-server.ingress.ingressClassName=nginx \
  --set istio.enabled=false \
  --set trinity.istio.enabled=false \
  --set identity-server.istio.enabled=false \
  --set identity-server-v2.istio.enabled=false

3. Runtime Cluster: Update Ingress in Project Namespaces (application-ingress-controller-migation.sh)

We provide a script that fetches all projects from the database (these projects are used as namespaces in the cluster) and updates all Ingress resources in those namespaces to the correct ingress class with the appropriate annotations. This ensures that after Kong is deployed on the Runtime cluster, every project namespace uses the desired ingress controller.

Script name and location: The script is named application-ingress-controller-migation.sh and is supplied in the console-runtime-init (runtime-init) charts folder under scripts.

What the script does:

  • Prompts the user for database credentials.

  • Connects to the database and retrieves the list of projects that correspond to namespaces in the cluster.

  • Prompts the user for the target ingressClassName (for example: kong or nginx).

  • For each such namespace, finds the Ingress resources and updates them to the target ingress class and required annotations.

  • Generates a detailed report file in the same folder where the script is executed, summarizing success and failure for each namespace and ingress.
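A minimal, hypothetical sketch of that loop (the real script connects to the database and patches live Ingress objects with kubectl; here the namespace list is a local file and the patch command is only printed, so the report logic is visible in isolation):

```shell
# Stand-in for the database query: in the real script this list comes from the DB.
printf 'namespace-1\nnamespace-2\n' > namespaces.txt

TARGET_CLASS="kong"
REPORT="ingress-update-report.txt"
printf '================= INGRESS UPDATE REPORT =================\n' > "$REPORT"
total=0
while IFS= read -r ns; do
  total=$((total + 1))
  printf 'Namespace : %s\n' "$ns" >> "$REPORT"
  # The real script patches each Ingress in $ns, roughly:
  #   kubectl -n "$ns" patch ingress <name> --type merge \
  #     -p '{"spec":{"ingressClassName":"'"$TARGET_CLASS"'"}}'
  echo "would patch Ingress objects in $ns to class $TARGET_CLASS"
done < namespaces.txt
printf 'Namespaces Processed : %s\n' "$total" >> "$REPORT"
```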

Customer steps:

  1. Ensure your kubectl current context is set to the Runtime cluster.
  2. Ensure you have permission to list and update Ingress resources in the project namespaces.
  3. Navigate to the folder where the script is provided (e.g. the console-runtime-init / scripts folder).
  4. Run the script, for example: ./application-ingress-controller-migation.sh
  5. When prompted, enter your database credentials.
  6. When prompted, enter the target ingress class name (kong during migration, nginx if you are rolling back).
  7. Wait for the script to finish updating Ingress resources in the discovered namespaces.
  8. Verify in the cluster that Ingress resources in project namespaces have the expected ingressClassName and that traffic is flowing correctly.
  9. Review the generated report file in the current folder for a summary of processed namespaces, ingress objects, and their status.
  10. When all ingress updates are complete and validated, run the ./application-database-ingress-update.sh script once (grant it execute permission first, e.g. with chmod +x). This script updates the relevant application records in the database to reflect the new ingress configuration.

Report format (example):

================= INGRESS UPDATE REPORT =================
Execution Time : <timestamp>
Cluster        : <cluster-name>
Total Namespaces Processed : <count>

---------------------------------------------------------
Namespace : namespace-1
---------------------------------------------------------
Summary
  Total Ingress Processed : 2
  Success                 : 2
  Failed                  : 0

Ingress Update Details
  - ingress-name-1 : SUCCESS
  - ingress-name-2 : SUCCESS


---------------------------------------------------------
Namespace : namespace-2
---------------------------------------------------------
Summary
  Total Ingress Processed : 2
  Success                 : 2
  Failed                  : 0

Ingress Update Details
  - ingress-name-1 : SUCCESS
  - ingress-name-2 : SUCCESS


=========================================================
OVERALL SUMMARY
=========================================================
Namespaces Processed : 2
Total Ingress        : 4
Total Success        : 4
Total Failed         : 0
=========================================================
STATUS : SUCCESS
=========================================================

Rollback: If you roll back to NGINX (Section 2), run the same script again and choose nginx as the target ingressClassName so that Ingress resources in project namespaces are switched back to the NGINX ingress class and annotations.

NOTE: Do this only if you ran the script in Section 3 during the migration.


Summary

| Step | Cluster | Action |
|------|---------|--------|
| 1 | Controller | Upgrade: from the console-controller-init directory, run helm upgrade --install with values-sandbox.yaml and the Istio/Kong --set flags (see Section 1.1). Rollback: update values to NGINX (ingress-nginx.enabled: true, kong-ingress.enabled: false, ingressClassName: nginx everywhere), then rerun the same helm command without the Kong/Istio --set flags (see the rollback in Section 1). |
| 2 | Runtime | Upgrade: from the console-runtime-init directory, run helm upgrade --install with your values file (the repo defaults to Kong). Rollback: update values to NGINX, rerun helm upgrade without the Kong overrides, and revert project-namespace Ingresses if the script was used (see the rollback in Section 2). |
| 3 | Runtime | Run ./application-ingress-controller-migation.sh (provided in the console-runtime-init / scripts folder). The script prompts for database credentials, fetches projects (used as namespaces), and updates Ingress resources in those namespaces to the target ingress class and annotations. |

For exact value paths and script names, refer to the value files in this repository and the script documentation shipped with the release.