
Goodbye to Ingress NGINX – What Happens Next?

The Kubernetes community has officially started the countdown to retire Ingress NGINX, one of the most widely used ingress controllers in the ecosystem.

SIG Network and the Security Response Committee have announced that Ingress NGINX will move to best-effort maintenance until March 2026, after which there will be no new releases, no bug fixes, and no security updates. 

At the same time, the broader networking story in Kubernetes is evolving: the old beta Ingress APIs have been removed, and the Gateway API is now positioned as the successor to Ingress. In this post, we explain why this is happening, which replacements make sense, and how and when you should migrate.


Why Is Ingress NGINX Being Deprecated?

A few overlapping forces pushed the community to retire Ingress NGINX.

1. Chronic Maintainer Burnout

For years, Ingress NGINX was effectively maintained by one or two volunteers, largely in their spare time. That is a brutal model for something that sits directly on the edge of thousands of production clusters, and it is unfortunately the norm for many popular open source projects.

Despite its enormous popularity, the project never attracted enough sustained, funded maintainer time to keep up with:

  • Kubernetes API changes
  • New security expectations
  • The growing feature surface (annotations, custom behaviors, etc.)

Eventually, the responsible thing to do was to stop pretending this was sustainable and set a clear end-of-life timeline.



2. Security Incidents and Risk Profile

In 2025, a serious vulnerability, CVE-2025-1974, underscored just how dangerous a heavily used but under-resourced edge component can be.

Ingress NGINX’s flexibility, especially features like arbitrary snippet annotations that inject raw NGINX configuration, was once seen as a superpower. Over time, those same capabilities became a hard-to-reason-about attack surface that grew increasingly difficult to secure.

To protect the broader ecosystem, the community concluded that Kubernetes should stop investing in a controller with too much legacy baggage and too few maintainers.


3. The Ecosystem Has Moved On

Ingress NGINX was built in an era when the Ingress API was the only game in town. Since then, more advanced needs have emerged:

  • Multi-tenant clusters and platform teams
  • Complex routing (headers, canaries, multi-protocol)
  • Service mesh integration
  • A clear separation of infrastructure concerns from application concerns

The Kubernetes community responded by designing the Gateway API, a more expressive, role-oriented, and extensible standard.

Once Gateway API hit v1.0 GA and gained multiple high-quality implementations, it no longer made sense to cling to a legacy controller that couldn’t evolve safely. 


Is the Ingress API Itself Deprecated?

The short answer:

The GA Ingress API (networking.k8s.io/v1) is still supported but effectively “feature-frozen”. The Gateway API is the official successor.

Kubernetes docs and community posts make two key points:

  1. Gateway API is the “next-generation” Ingress API, designed to solve limitations around expressiveness, roles, and multi-protocol routing. 
  2. Ingress will remain stable but mostly static; new features land in Gateway API, not in the Ingress resource. 

So while the Ingress API itself isn’t being removed any time soon, it is essentially a legacy interface: new feature development happens elsewhere.


What Are Good Replacements for Ingress NGINX?

At a high level, you have two options:

Option 1: Gateway-First Controllers

If you’re making a strategic move, you should seriously consider Gateway API-native controllers. A non-exhaustive list of implementations from the official Gateway API docs is below.

| Gateway API Implementation | Description | Official Website |
| --- | --- | --- |
| Envoy Gateway | CNCF project providing a modern, Envoy-based Gateway API implementation. | https://gateway.envoyproxy.io |
| NGINX Gateway Fabric | Gateway API–native implementation built by NGINX/F5. | https://docs.nginx.com/nginx-gateway-fabric/ |
| Kong Kubernetes Gateway | Kong’s API-gateway-driven implementation with Gateway API support. | https://docs.konghq.com/kubernetes-ingress-controller/ |
| Traefik Proxy | Popular cloud-native ingress & edge router with Gateway API support. | https://traefik.io/traefik/ |
| HAProxy Kubernetes Gateway | High-performance HAProxy-based Gateway API implementation. | https://www.haproxy.com/kubernetes-ingress/ |
| Contour | Envoy-powered ingress controller with mature Gateway API support. | https://projectcontour.io |
| Istio (Gateway API Support) | Service mesh with native Gateway API support via Istio Gateways. | https://istio.io |
| Cilium Gateway API | eBPF-powered networking stack with Gateway API support. | https://cilium.io |

These controllers let you define:

  • GatewayClass – which implementation / infra flavor
  • Gateway – where and how traffic enters (IPs, ports, TLS)
  • HTTPRoute / TCPRoute / UDPRoute – how traffic is routed to backends

This model is more expressive than Ingress and explicitly built for multi-team and multi-protocol environments. 
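To make the role split concrete, here is a minimal sketch of the three resources. All names (`envoy-gateway`, `example-gateway`, `web-route`, the `infra` and `apps` namespaces, the `web` Service) are illustrative, and the `controllerName` shown is specific to the Envoy Gateway implementation; substitute the values for whichever controller you deploy.

```yaml
# GatewayClass: picked by the platform team; controllerName is
# implementation-specific (this one matches Envoy Gateway).
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: envoy-gateway
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
---
# Gateway: where and how traffic enters (listeners, ports, TLS).
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: example-gateway
  namespace: infra
spec:
  gatewayClassName: envoy-gateway
  listeners:
    - name: http
      protocol: HTTP
      port: 80
---
# HTTPRoute: owned by the application team; attaches to the Gateway.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: web-route
  namespace: apps
spec:
  parentRefs:
    - name: example-gateway
      namespace: infra
  hostnames:
    - "www.example.com"
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: web
          port: 8080
```

Note how infrastructure (GatewayClass, Gateway) and application routing (HTTPRoute) live in different namespaces and can be owned by different teams — exactly the separation the Ingress API never modeled.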

Note

On the flip side, this may not be a good idea for brownfield deployments. Migrating one or two manifests is easy. Migrating dozens of Helm charts, multiple environments, and all your pipelines and tooling is not. Most teams don’t have the luxury of a clean greenfield deployment.


Option 2: F5/NGINX Ingress Controller

If you like NGINX and want to stay in that ecosystem, F5/NGINX maintains a separate, open-source NGINX Ingress Controller that’s positioned as a long-term, fully supported alternative to the community Ingress NGINX. 

You can continue using networking.k8s.io/v1 Ingress resources, or adopt its Gateway API support (via NGINX Gateway Fabric) for more advanced scenarios.
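Whichever controller you land on, a plain networking.k8s.io/v1 Ingress stays portable as long as you avoid controller-specific annotations. A minimal sketch (the resource and Service names are illustrative, and the `ingressClassName` value depends on how your controller is installed):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  ingressClassName: nginx   # class name is installation-specific
  rules:
    - host: www.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 8080
```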

A complete list of Ingress Controllers is maintained by CNCF.

Info

If you are, unfortunately, a heavy user of NGINX-specific annotations, the most realistic short-term option is probably Traefik, which recently added support for nginx-style annotations through its new ingress-nginx provider.


Option 3: Use Your Service Mesh

If you already use a Service Mesh, then your simplest path is to use the ingress controller that already comes with the mesh.

  1. Istio
  2. Cilium

Both support Ingress and the Gateway API, so you’re not boxing yourself out of adopting the Gateway API for greenfield projects and applications later.


How and When Should You Migrate?

For a migration from Ingress NGINX to an alternative controller, your timeline essentially looks like the following:

| Timeline | Status / Description |
| --- | --- |
| Now – March 2026 | Ingress NGINX is on best-effort maintenance. Critical issues may be addressed, but there are no guarantees. |
| After March 2026 | No more releases, bug fixes, or security patches. The project is fully EOL — you’re on your own. |

Realistically, if you’re running this in production, you want to be off Ingress NGINX well before March 2026 so you’re not doing emergency edge migrations under pressure.


Step 1: Choose Your Future API and Controller

We recommend users consider the framework described below to plan their migration.

Greenfield / New Platform

Start with Gateway API and a Gateway-native controller (Envoy, NGINX Gateway Fabric, Istio, cloud Gateway, etc.). 

Existing Simple Clusters with Minimal Complexity

Migrate to another Ingress controller (e.g., Traefik) using networking.k8s.io/v1 while planning a longer-term Gateway adoption.
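For simple clusters, the controller swap can be close to a one-line change per Ingress, assuming no controller-specific annotations are in play (the class name below is illustrative and depends on your Traefik installation):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  # Before: ingressClassName: nginx
  ingressClassName: traefik   # point at the new controller's class
  rules:
    - host: www.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 8080
```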

Given the retirement of Ingress NGINX and the clear direction of the ecosystem, new investments should lean Gateway-first.


Step 2: Plan Your Migration from Ingress to Gateway API

The Gateway API docs and multiple vendors all recommend a phased migration run in parallel with your existing setup. A pragmatic flow looks like the following:

  1. Inventory everything. Dump all Ingresses with "kubectl get ingress -A -o yaml", then group them by hostname, criticality, and special behaviors (annotations, rewrites, custom snippets).

  2. Stand up the Gateway API and a controller. Install the Gateway API CRDs, then deploy your chosen implementation (Envoy, NGINX, Istio, etc.).

  3. Convert Ingress resources to Gateway resources. Map IngressClass → GatewayClass and Ingress → Gateway + HTTPRoute. Use tools like ingress2gateway to automate the boilerplate where possible.

  4. Run Ingress and Gateway in parallel. You can keep Ingress (and Ingress NGINX) serving production traffic while using a separate hostname or IP for the Gateway path to test behavior safely with real traffic.

  5. Cut over gradually. Start with low-risk services and monitor latency, error rates, and logs closely. Once stable, update DNS / fronting load balancers to point to the Gateway and decommission the old Ingress for that service.

  6. Retire Ingress NGINX. Once no production traffic flows through it, remove the Ingress NGINX controller deployment, any IngressClass(es) referencing it, and the legacy annotations that only made sense for that controller.
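As a sketch of the conversion step, a single Ingress rule maps onto an HTTPRoute roughly like this (the route name, hostnames, Service, and parent Gateway are all illustrative; tools like ingress2gateway generate this boilerplate for you):

```yaml
# Before: one networking.k8s.io/v1 Ingress rule, e.g.
#   host: shop.example.com, path /cart -> Service cart:8080
#
# After: the equivalent HTTPRoute, attached to an existing Gateway.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: cart-route
spec:
  parentRefs:
    - name: shared-gateway      # illustrative Gateway owned by the platform team
      namespace: infra
  hostnames:
    - "shop.example.com"
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /cart
      backendRefs:
        - name: cart
          port: 8080
```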


Final Thoughts

The deprecation and EOL of Ingress NGINX is undeniably disruptive to adopters. But it is also an opportunity to adopt a more secure, maintainable, and future-proof networking model.

TL;DR for our customers: "Solve the immediate ingress problem first. Then migrate to Gateway API on your own timeline, with a plan. This is not a crisis."

  1. Treat Ingress NGINX as on borrowed time.
  2. Treat networking.k8s.io/v1 Ingress as “stable legacy”: perfectly fine for simple use cases, but not where new capabilities will land.
  3. Treat Gateway API as the new standard and the safest place to invest for the next decade of Kubernetes networking.