We’re excited to announce Trinity 4.8.0, a major update focused on infrastructure modernization and expanded cloud provider support. This release introduces three key changes:
- Kong Ingress Controller OSS Migration – Migration from the community NGINX Ingress Controller to Kong Ingress Controller OSS as our primary edge proxy, enhancing scalability and providing a more robust foundation for advanced API management, authentication, and traffic control.
- AWS (EKS) Vendor Support – Full support for Amazon EKS as a cloud provider, enabling organizations on AWS to use the complete Trinity platform on EKS clusters.
- Marketplace Chart Updates (Reels & Alpha) – Kong Ingress Controller support added to the Reels and Alpha Helm charts, available as new marketplace packages.
Part 1: Kong Ingress Controller OSS Migration
Overview
This release introduces a major architectural upgrade to our cluster ingress infrastructure. We have successfully migrated from the community NGINX Ingress Controller to Kong Ingress Controller OSS as our primary edge proxy.
This strategic upgrade enhances our platform’s scalability and provides a more robust foundation for advanced API management, authentication, and traffic control.
What Changed
- All application Ingress resources (`n-apps-ui`, `n-apps-api`, `trinity`, `identity-server`, etc.) now use `ingressClassName: kong`.
- 20+ NGINX-specific annotations refactored to Kong CRD equivalents (including gRPC and WebSocket support).
- Security headers (X-Frame-Options, etc.) injected via Kong’s response-transformer plugin instead of NGINX configuration snippets.
- Helm templates refactored to toggle between Kong and NGINX via a unified `Values.ingress.ingressClass` flag.
- Kong unlocks future plugin capabilities (Rate Limiting, OIDC, request transformation) without architectural debt.
- Cleaner layer-7 traffic metrics via Kong’s native Prometheus integration.
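As an illustration of the annotation refactoring, a security header that was previously injected via an NGINX configuration snippet can be expressed as a Kong `response-transformer` plugin attached to an Ingress. This is a hand-written sketch, not a resource rendered by the Trinity charts; the names `security-headers` and `example-app` and the host are hypothetical:

```yaml
# Sketch: inject X-Frame-Options via Kong's response-transformer plugin,
# replacing an NGINX configuration-snippet annotation.
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: security-headers        # hypothetical plugin name
plugin: response-transformer
config:
  add:
    headers:
      - "X-Frame-Options:DENY"
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-app             # hypothetical Ingress
  annotations:
    konghq.com/plugins: security-headers   # attach the plugin above
spec:
  ingressClassName: kong
  rules:
    - host: app.example.com     # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-app
                port:
                  number: 80
```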
NGINX to Kong Upgrade Migration (Customer / Operator Steps)
You will use the provided repository for this release. Kong is the default ingress in the charts; the only manual value changes required are during rollback (to switch back to NGINX).
Prerequisites
- Helm 3.x installed and configured with access to the target cluster.
- Console Helm charts from the provided repository (Kong is already configured as default).
- New section — Istio configuration (keep disabled): New Istio-related keys have been added to the values file for future use. These must remain `false` for this release:
  - `istio.enabled: false` (top-level)
  - `istiod:` and `base:` sections (Istio control plane and base CRD configuration)
  - `global.istioNamespace: monitoring`
  - `trinity.istio.enabled: false`
  - `identity-server.istio.enabled: false`
  - `identity-server-v2.istio.enabled: false`
Istio service mesh integration is not included in this release and will be available in an upcoming release. Do not enable these values.
- New section — `kong-ingress`: A new `kong-ingress` section has been added to the values file. When enabling Kong, populate this section with your image repositories/tags, proxy LoadBalancer IP, and annotations. Refer to `values-sandbox.yaml` for a fully configured example. Key sub-sections:
  - `controller` — Kong Ingress Controller image and settings
  - `gateway` — Kong Gateway image, admin service, and proxy (LoadBalancer) configuration
  - `podAnnotations` — Required annotations for sidecar compatibility

  For rollback to NGINX, set `kong-ingress.enabled: false` and `ingress-nginx.enabled: true`.
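A plausible shape for the `kong-ingress` section is sketched below. Only the sub-section names (`controller`, `gateway`, `podAnnotations`) and the `enabled` toggle come from this release; every field name and value inside them is an illustrative assumption — `values-sandbox.yaml` in the provided repository is the authoritative layout:

```yaml
# Hedged sketch of the kong-ingress values section; field names under
# controller/gateway are assumptions, not the chart's actual schema.
kong-ingress:
  enabled: true
  controller:
    image:
      repository: registry.example.com/kong/kubernetes-ingress-controller  # assumption
      tag: "x.y.z"                                                         # assumption
  gateway:
    image:
      repository: registry.example.com/kong/kong-gateway                   # assumption
      tag: "x.y.z"                                                         # assumption
    proxy:
      type: LoadBalancer
      loadBalancerIP: "203.0.113.10"     # your static proxy IP (assumption)
  podAnnotations:
    sidecar.istio.io/inject: "false"     # illustrative sidecar-compatibility annotation
```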
- `ingressClassName` updated across the values file: As part of the Kong migration, `ingressClassName` (or `ingressClass`) has been changed from `nginx` to `kong` in multiple places: `trinity.ingress`, `cert-manager`, `identity-server.ingress`, `identity-server-v2.ingress`, and the `trinityLicenseAgent` frontend/backend ingress sections. Ensure these are set to `kong` when upgrading, or revert to `nginx` when rolling back.
- Controller cluster: Use the upgrade command below.
- Runtime cluster: If using project namespaces with Ingress resources, run the script in Section 3 after Kong is deployed.
1. Controller Cluster: Upgrade to Kong and Rollback
1.1 Upgrade Controller cluster to Kong (install / set Kong)
Important — Update deployment chart names: The following environment variables in your values file must be updated to use the new Kong-compatible chart names:
```yaml
RUNTIME_INIT_CHART_NAME: "console-runtime-init-kong-support"
SIGNOZ_CHART_NAME: signoz-chart-kong-support
CLICKHOUSE_CRD_FILE_NAME: clickhouse-crds
API_APP_DEPLOY_CHART_NAME: n-apps-api-kong
UI_APP_DEPLOY_CHART_NAME: n-app-ui-kong
```
Ensure these values are set in your values-sandbox.yaml (under the Trinity deployments environment section) before running the upgrade.
Ensure the following values are set to Kong in your values file before running the upgrade:
- Set Kong on and NGINX off:
  - `kong-ingress.enabled: true`
  - `ingress-nginx.enabled: false`
- Set `ingressClassName` (or `ingressClass` where applicable) to `kong` for all relevant sub-charts:
  - trinity – `trinity.ingress.ingressClassName: kong`
  - trinityDeployments – `trinity.trinityDeployments.deployment.env.INGRESS_CLASS: kong`
  - identity-server (this chart uses `ingressClass`) – `identity-server.ingress.ingressClass: kong`
  - identity-server-v2 (this chart uses `ingressClass`) – `identity-server-v2.ingress.ingressClass: kong`
  - cert-manager – `cert-manager.ingressClassName: kong` and `trinity.certmanager.ingressClassName: kong` (cert-manager settings inside the `trinity` values scope)
  - kibana – `kibana.ingress.className: kong`
  - Trinity License Agent (if enabled under `trinity` values) – `trinity.trinityLicenseAgent.trinityLicenseAgentFrontend.ingress.ingressClassName: kong` and `trinity.trinityLicenseAgent.trinityLicenseAgentBackend.ingress.ingressClassName: kong`
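Taken together, the controller-cluster changes above can be sketched as a values overlay. Every path below is taken from the list above; this is a sketch of the toggles only, not a complete values file:

```yaml
# Controller-cluster value changes for the Kong upgrade (paths as listed above).
kong-ingress:
  enabled: true
ingress-nginx:
  enabled: false
trinity:
  ingress:
    ingressClassName: kong
  trinityDeployments:
    deployment:
      env:
        INGRESS_CLASS: kong
  certmanager:
    ingressClassName: kong
  trinityLicenseAgent:
    trinityLicenseAgentFrontend:
      ingress:
        ingressClassName: kong
    trinityLicenseAgentBackend:
      ingress:
        ingressClassName: kong
identity-server:
  ingress:
    ingressClass: kong       # this chart uses ingressClass
identity-server-v2:
  ingress:
    ingressClass: kong       # this chart uses ingressClass
cert-manager:
  ingressClassName: kong
kibana:
  ingress:
    className: kong
```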
From the console-controller-init chart directory, run:
```shell
helm dependency update .
helm upgrade --install console-monitoring . \
  --namespace monitoring --create-namespace \
  --values values-sandbox.yaml
```
Verify: IngressClass kong exists, and Trinity, Identity Server, and other apps are reachable via Kong.
Rollback (optional) – Controller cluster to NGINX
NOTE: Do this only when something went wrong and you need to revert to NGINX.
NOTE: When rolling back, ensure you also revert to the required images that were used in your earlier NGINX-based deployment (e.g. the NGINX Ingress Controller image and any other component images from the previous release).
Revert all the values mentioned above in Section 1.1 back to NGINX — set ingress-nginx.enabled: true, kong-ingress.enabled: false, and change all ingressClassName / ingressClass values from kong to nginx. Then run:
```shell
helm upgrade --install console-monitoring . \
  --namespace monitoring --create-namespace \
  --values values-sandbox.yaml
```
Verify: IngressClass nginx is in use and applications are reachable via NGINX.
2. Runtime Cluster: Upgrade to Kong and Rollback
2.1 Upgrade Runtime cluster to Kong
Ensure the following values are set to Kong in your runtime values file (values.yaml):
- Set Kong on and NGINX off:
  - `kong-ingress.enabled: true`
  - `ingress-nginx.enabled: false`
- Set `ingressClassName` to `kong` for all relevant sub-charts:
  - Trinity License Agent Frontend – `trinityLicenseAgent.trinityLicenseAgentFrontend.ingress.ingressClassName: kong`
  - Trinity License Agent Backend – `trinityLicenseAgent.trinityLicenseAgentBackend.ingress.ingressClassName: kong`
  - identity-server (if enabled on runtime) – `identity-server.ingress.ingressClassName: kong`
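As with the controller cluster, the runtime-cluster changes above can be sketched as a values overlay. The paths are exactly those listed above; this is a sketch of the toggles only:

```yaml
# Runtime-cluster value changes for the Kong upgrade (paths as listed above).
kong-ingress:
  enabled: true
ingress-nginx:
  enabled: false
trinityLicenseAgent:
  trinityLicenseAgentFrontend:
    ingress:
      ingressClassName: kong
  trinityLicenseAgentBackend:
    ingress:
      ingressClassName: kong
identity-server:            # only if identity-server is enabled on the runtime cluster
  ingress:
    ingressClassName: kong
```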
From the console-runtime-init chart directory, run:
```shell
helm dependency update .
helm upgrade --install console-monitoring . \
  --namespace monitoring --create-namespace \
  --values values.yaml
```
Then run the project-namespace Ingress update script (Section 3) so that all project namespaces (from the database) have their Ingress resources updated to use Kong.
Rollback (optional) – Runtime cluster to NGINX
NOTE: Do this only when something went wrong and you need to revert to NGINX.
NOTE: When rolling back, ensure you also revert to the required images that were used in your earlier NGINX-based deployment (e.g. the NGINX Ingress Controller image and any other component images from the previous release).
Revert the values in your runtime values file back to NGINX — set ingress-nginx.enabled: true, kong-ingress.enabled: false, and change all ingressClassName / ingressClass values from kong to nginx. Then run:
```shell
helm upgrade --install console-monitoring . \
  --namespace monitoring --create-namespace \
  --values values.yaml
```
3. Runtime Cluster: Update Ingress in Project Namespaces (application-ingress-controller-migation.sh)
We provide a script that fetches all projects from the database (these projects are used as namespaces in the cluster) and updates all Ingress resources in those namespaces to the correct ingress class with the appropriate annotations. This ensures that after Kong is deployed on the Runtime cluster, every project namespace uses the desired ingress controller.
Script name and location: The script is named application-ingress-controller-migation.sh and is supplied in the console-runtime-init (runtime-init) charts folder under scripts.
What the script does:
- Prompts the user for database credentials.
- Connects to the database and retrieves the list of projects that correspond to namespaces in the cluster.
- Prompts the user for the target `ingressClassName` (for example: `kong` or `nginx`).
- For each such namespace, finds the Ingress resources and updates them to the target ingress class and required annotations.
- Generates a detailed report file in the same folder where the script is executed, summarizing success and failure for each namespace and ingress.
Customer steps:
- Ensure your kubectl current context is set to the Runtime cluster.
- Ensure you have permission to list and update Ingress resources in the project namespaces.
- Navigate to the folder where the script is provided (e.g. the console-runtime-init / scripts folder).
- Run the script: `./application-ingress-controller-migation.sh`
- When prompted, enter your database credentials.
- When prompted, enter the target ingress class name (`kong` during migration, `nginx` if you are rolling back).
- Wait for the script to finish updating Ingress resources in the discovered namespaces.
- Verify in the cluster that Ingress resources in project namespaces have the expected `ingressClassName` and that traffic is flowing correctly.
- Review the generated report file in the current folder for a summary of processed namespaces, ingress objects, and their status.
- When all ingress updates are complete and validated, run the `./application-database-ingress-update.sh` script once (ensure it has the required permissions). This script updates the relevant application records in the database to reflect the new ingress configuration.
Report format (example):
```
================= INGRESS UPDATE REPORT =================
Execution Time : <timestamp>
Cluster : <cluster-name>
Total Namespaces Processed : <count>
---------------------------------------------------------
Namespace : namespace-1
---------------------------------------------------------
Summary
Total Ingress Processed : 2
Success : 2
Failed : 0
Ingress Update Details
- ingress-name-1 : SUCCESS
- ingress-name-2 : SUCCESS
---------------------------------------------------------
Namespace : namespace-2
---------------------------------------------------------
Summary
Total Ingress Processed : 2
Success : 2
Failed : 0
Ingress Update Details
- ingress-name-1 : SUCCESS
- ingress-name-2 : SUCCESS
=========================================================
OVERALL SUMMARY
=========================================================
Namespaces Processed : 2
Total Ingress : 4
Total Success : 4
Total Failed : 0
=========================================================
STATUS : SUCCESS
=========================================================
```
Rollback: If you roll back to NGINX (Section 2), run the same script again and choose nginx as the target ingressClassName so that Ingress resources in project namespaces are switched back to the NGINX ingress class and annotations.
NOTE: Run this rollback step only if you executed the Section 3 script during the migration.
Summary
| Step | Cluster | Action |
|---|---|---|
| 1 | Controller | Upgrade: From the console-controller-init directory, run helm upgrade --install with values-sandbox.yaml (see Section 1.1). Rollback: Update values to NGINX (ingress-nginx.enabled: true, kong-ingress.enabled: false, ingressClassName: nginx everywhere), then run the same helm command (see the rollback steps in Section 1). |
| 2 | Runtime | Upgrade: From console-runtime-init directory run helm upgrade --install with your values file (repo defaults to Kong). Rollback: Update values to NGINX, then run helm upgrade without Kong overrides; revert project-namespace Ingresses if the script was used. |
| 3 | Runtime | Run ./application-ingress-controller-migation.sh (provided in the console-runtime-init / scripts folder). The script prompts for database credentials, fetches projects (used as namespaces), and updates Ingress in those namespaces to the correct ingress class and annotations. |
For exact value paths and script names, refer to the value files in this repository and the script documentation shipped with the release.
Part 2: AWS (EKS) Vendor Support
Overview
This release adds Amazon EKS (Elastic Kubernetes Service) as a supported cloud provider alongside AKS, GKE, and OKE. All existing Trinity workflows—environments, namespaces, asset build & deployment, application lifecycle (scale, restart, logs), domains, vaults, secret sync (AWS Secrets Manager), storage, and alerts—work on EKS the same way as other providers.
AWS authentication uses IRSA or Pod Identity (no service account key file upload required).
Prerequisites
- EKS cluster with IRSA or Pod Identity configured for the Trinity Service Account.
- Kubeconfig with access to the target EKS cluster.
- For secret sync: `secretsmanager:GetSecretValue` permission on the Trinity IAM role.
- Controller cluster: Associate an OIDC provider (IRSA) or install the `eks-pod-identity-agent` addon (Pod Identity); create an IAM role with a trust policy for the Trinity Service Account; annotate the SA with `eks.amazonaws.com/role-arn` (IRSA only).
- Runtime cluster: Create an EKS Access Entry for the Trinity IAM role and associate the required access policy.
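For the IRSA option, the Service Account wiring amounts to a single annotation. A minimal sketch; the annotation key is the one named above, while the SA name, namespace, account ID, and role name are illustrative assumptions:

```yaml
# Sketch: IRSA wiring for the Trinity Service Account (IRSA option only).
apiVersion: v1
kind: ServiceAccount
metadata:
  name: trinity-sa            # assumption: your Trinity SA name
  namespace: monitoring       # assumption: namespace where Trinity runs
  annotations:
    # Role ARN is illustrative; use the IAM role created with the trust policy.
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/trinity-role
```

With Pod Identity, no annotation is needed; the SA-to-role link is created on the AWS side instead (see the quick reference below in this document's IRSA vs Pod Identity table).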
New Configuration Sections in Values File
The following new sections have been introduced in the values file (values-sandbox.yaml). Please update your values file accordingly:
- Istio sections (keep disabled): New Istio-related keys added for future use. These must remain `false` for this release:
  - `istio.enabled: false` (top-level)
  - `istiod:` and `base:` sections (Istio control plane and base CRD configuration)
  - `global.istioNamespace: monitoring`
  - `trinity.istio.enabled: false`
  - `identity-server.istio.enabled: false`
  - `identity-server-v2.istio.enabled: false`
Istio service mesh integration will be available in an upcoming release. Do not enable these values.
- `kong-ingress` section: Configures the Kong Ingress Controller and Gateway deployment. Populate with your image repositories/tags, proxy LoadBalancer IP, and annotations. Key sub-sections: `controller`, `gateway`, `podAnnotations`. Refer to `values-sandbox.yaml` for a fully configured example. For rollback to NGINX, set `kong-ingress.enabled: false` and `ingress-nginx.enabled: true`.
- `ingressClassName` updated: Changed from `nginx` to `kong` in multiple places: `trinity.ingress`, `cert-manager`, `identity-server.ingress`, `identity-server-v2.ingress`, and the `trinityLicenseAgent` frontend/backend ingress sections. Revert to `nginx` when rolling back.
IRSA vs Pod Identity (quick reference)
| | IRSA | EKS Pod Identity |
|---|---|---|
| How it works | OIDC proves which SA is calling AWS | EKS links the SA to an IAM role via an association |
| Service Account | Annotate with `eks.amazonaws.com/role-arn` | No annotation; use `create-pod-identity-association` |
| Cluster requirement | OIDC provider in IAM | `eks-pod-identity-agent` addon |
For detailed step-by-step commands, refer to the AWS EKS IRSA Setup and AWS EKS Pod Identity Setup command reference documents shipped with this release.
What’s New for Users
- Environment initialization: Choose AWS as the cloud provider when creating a new environment. Upload your Kubernetes config file; the service account file step is skipped for AWS.
- Asset deployment: Run build and deployment for assets against AWS/EKS environments—same steps as other clouds, with the correct cluster and namespace resolution.
- Applications & pods: Scale deployments, restart workloads, stream pod logs, and inspect running resources on EKS from the familiar application screens.
- Namespaces & projects: Create and manage namespaces tied to AWS environments; use CRS and domain flows where your projects require them.
- Vault configuration: Configure and list vaults for EKS clusters from the vault configuration screen.
- Secret sync: Sync secrets from AWS Secrets Manager into EKS project namespaces using the existing sync-secret flow.
- Storage: Use storage classes, PVCs, and storage operations in EKS environments as you would for AKS or GKE.
- Alerts & monitoring hooks: Continue using alert and namespace-related features with AWS selected as the environment provider.
Upgrade & Migration
- New deployments: Deploy Trinity using the updated charts and select AWS when initializing environments.
- Existing deployments: No changes are required for existing AKS, GKE, or OKE environments. EKS support is additive.
- Database: No mandatory database migrations for enabling EKS. Use any release-specific SQL scripts as documented for your version.
Part 3: Marketplace Chart Updates (Reels & Alpha)
Overview
As part of the Kong Ingress Controller migration, the Reels and Alpha Helm charts have been updated to use Kong as the default ingress. These updated charts are available as new packages in the Neutrinos Marketplace.
Reels
The Reels platform has been migrated from NGINX Ingress Controller to Kong Ingress Controller OSS. Kong is now the default ingress in the Reels charts, with all NGINX-specific annotations refactored to Kong CRD equivalents across services (reels-fe, reels-manager, mdm-service, integration-service, cms-service, audit-service, process-executor, mdm-cron-service).
Marketplace package name: Reels (v1.0.0)
Alpha
The Alpha platform Helm chart has been updated to integrate Kong Ingress Controller OSS as the supported edge ingress. External traffic is routed through Kong using global.ingress.ingressClassName: kong and Kong-specific configuration. The chart renders KongPlugin resources for rate limiting, CORS, and security headers via Kong’s response-transformer plugin.
Marketplace package name: alpha-helm-deployment (v1.0.11)
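To make the rate-limiting capability concrete, a hand-written example of a rate-limiting `KongPlugin` is sketched below. This is not the resource the Alpha chart renders; the name and limits are illustrative assumptions:

```yaml
# Sketch: a Kong rate-limiting plugin of the kind the Alpha chart renders.
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rate-limit-example    # hypothetical name
plugin: rate-limiting
config:
  minute: 60                  # illustrative: 60 requests per minute
  policy: local               # counters kept per Kong node
```

As with the security-header plugin earlier in these notes, such a resource takes effect when referenced from an Ingress via the `konghq.com/plugins` annotation.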
Deployment
Deploy the updated Reels and Alpha charts from the Neutrinos Marketplace using the package names above. Kong Ingress Controller must be installed and configured in the target cluster before deploying these packages.
Docker Images
The following images are updated in this release and are used across the Kong migration, AWS EKS support, and marketplace chart updates:
| # | Repository Name | Docker Image | Tag |
|---|---|---|---|
| 1 | trinity-frontend (UI) | `neutrinos.azurecr.io/trinity/trinity-frontend/ui` | 26.03.4.8.0 |
| 2 | trinity-frontend (API) | `neutrinos.azurecr.io/trinity/trinity-frontend/api` | 26.03.4.8.0 |
| 3 | trinity-orchestrator | `neutrinos.azurecr.io/trinity/trinity-orchestrator` | 26.03.4.8.0 |
| 4 | trinity-db-operations | `neutrinos.azurecr.io/trinity/trinity-db-operations` | 26.03.4.8.0 |
| 5 | trinity-assets-orchestrator | `neutrinos.azurecr.io/trinity/trinity-assets-orchestrator` | 26.03.4.8.0 |
| 6 | trinity-deployments | `neutrinos.azurecr.io/trinity/trinity-deployments` | 26.03.4.8.0 |
| 7 | trinity-promscale-connector | `neutrinos.azurecr.io/trinity/trinity-promscale-connector` | 26.03.4.8.0 |
| 8 | trinity-alerts-base-app | `neutrinos.azurecr.io/trinity-utils/alerts-base-app` | 26.03.4.8.0 |
| 9 | trinity-alerts-email-service | `neutrinos.azurecr.io/trinity/trinity-alerts-email-service` | 26.03.4.8.0 |
| 10 | trinity-app-logs | `neutrinos.azurecr.io/trinity/trinity-app-logs` | 26.03.4.8.0 |
Release Summary
This release makes Kong Ingress Controller OSS the default edge proxy (replacing NGINX), adds Amazon EKS as a first-class cloud provider in Trinity Stack, and ships updated Reels (v1.0.0) and Alpha (alpha-helm-deployment v1.0.11) marketplace packages with Kong support. The full operational surface is covered: asset build and deployment, application lifecycle (deploy, scale, logs, restarts), environments and namespaces, domains, CRS, vaults, AWS Secrets Manager sync, storage, and alerts. Authentication for EKS stays simple with IRSA or Pod Identity; no service account key file uploads are required.