Debugging Kubernetes Ingress: Parsing Nginx Controller Logs at Scale

The Nginx Ingress Controller is the traffic cop of the Kubernetes world. It sees everything. Every request to every microservice in your cluster passes through it. This makes it the single source of truth for debugging cluster-wide issues.

However, accessing that truth is painful. Running kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx returns a chaotic firehose of mixed JSON and plain text from dozens of different domains (hosts) simultaneously.

In this guide, we'll show you how to pipe Kubernetes logs directly into LogLens to filter by host, debug 503 errors, and analyze upstream latency without setting up a complex ELK stack.


The Setup: Piping kubectl to LogLens

LogLens is designed to accept data from stdin (standard input). This means you don't need to save logs to a file first. You can stream them directly from your cluster.

# The Magic Command
kubectl logs -n ingress-nginx \
  -l app.kubernetes.io/name=ingress-nginx \
  --tail=1000 \
| loglens watch - --json

Note: The - argument tells LogLens to read from stdin.

1. The "Multi-Tenant" Problem: Filtering by Host

In a Kubernetes cluster, one Ingress Controller might handle traffic for api.example.com, blog.example.com, and staging.internal. When you look at the raw logs, they are all mixed together.

If staging.internal is being DDoSed, its noise drowns out the logs for your production API. With LogLens, you can filter the stream instantly.

# Stream logs ONLY for api.example.com
kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx -f \
| loglens watch - --where 'host == "api.example.com"'
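If LogLens isn't installed on the machine you're debugging from, a rough stand-in using only grep works for a quick look, assuming your controller emits JSON logs containing a "host" field. The sample lines below are hypothetical; in real use, the `kubectl logs ... -f` stream replaces the printf.

```shell
# Rough host filter without LogLens: keep only lines for one host.
# Sample JSON lines are hypothetical stand-ins for the kubectl stream.
printf '%s\n' \
  '{"host":"api.example.com","status":200}' \
  '{"host":"staging.internal","status":200}' \
  '{"host":"api.example.com","status":503}' \
| grep '"host":"api.example.com"'
```

This is string matching, not real field filtering, so it can false-positive on other fields containing the same text, but it is often good enough mid-incident.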

2. Debugging "503 Service Temporarily Unavailable"

A 503 error from the Ingress Controller usually means one thing: it can't find a healthy pod to route to. This happens during bad deployments, crash loops, or misconfigured Services.

To confirm this, look for requests where the status is 503 and inspect the upstream_addr field. If the upstream address is empty or missing, Nginx had no healthy endpoint to send the request to.

# Analyze 503 errors from the last hour of logs
kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx --since=1h \
| loglens count - 'status == 503'

If the count is high, you know your issue is Pod availability, not the Nginx configuration itself.
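The same check can be sketched with awk if LogLens isn't available: count 503s and, of those, how many had an empty upstream_addr. The sample lines are hypothetical; pipe in the real kubectl output instead.

```shell
# Count 503s and how many of them had no upstream address, without LogLens.
# Sample lines below are hypothetical stand-ins for kubectl logs output.
printf '%s\n' \
  '{"status":503,"upstream_addr":""}' \
  '{"status":200,"upstream_addr":"10.0.1.5:8080"}' \
  '{"status":503,"upstream_addr":""}' \
| awk '
  /"status":503/ { total++ }
  /"status":503/ && /"upstream_addr":""/ { empty++ }
  END { printf "503s: %d, with empty upstream: %d\n", total, empty }
'
# prints: 503s: 2, with empty upstream: 2
```

If nearly every 503 has an empty upstream, the Service has no ready endpoints at all, which points squarely at the Pods rather than the Nginx config.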

3. Spotting "Death Spirals" (Upstream Latency)

Sometimes your pods aren't dead, just painfully slow. When upstream pods slow down, the Ingress Controller holds connections open while it waits, and can eventually exhaust its worker connections.

We can pipe the logs into stats describe to see if the latency is coming from the upstream pods.

# Calculate latency stats from the log stream
kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx --tail=5000 \
| loglens stats describe - upstream_response_time
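For a quick-and-dirty version without LogLens, awk can compute min/avg/max of upstream_response_time directly from the stream. This assumes JSON logs where the field holds a plain number; the sample values below are hypothetical.

```shell
# Min/avg/max of upstream_response_time with awk (no LogLens).
# Assumes JSON logs with a numeric upstream_response_time field.
printf '%s\n' \
  '{"upstream_response_time":0.120}' \
  '{"upstream_response_time":2.450}' \
  '{"upstream_response_time":0.330}' \
| awk -F'"upstream_response_time":' '
  NF > 1 {
    t = $2 + 0                      # coerce "0.120}" to the number 0.12
    sum += t; n++
    if (n == 1 || t < min) min = t
    if (t > max) max = t
  }
  END { printf "n=%d min=%.3f avg=%.3f max=%.3f\n", n, min, sum / n, max }
'
# prints: n=3 min=0.120 avg=0.967 max=2.450
```

A healthy average with a huge max usually means one bad pod; a uniformly high average means the whole Deployment is struggling.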

4. Identifying the Noisiest Service

If your Ingress Controller CPU is spiking, you need to know which service is receiving the traffic. You can perform a group-by analysis on the stream.

# Who is getting all the traffic?
kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx --tail=10000 \
| loglens stats group-by - --by service_name --avg request_time

Note: This requires your Nginx Ingress log format to include $service_name; if it doesn't, group by host or path instead.
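Grouping by host is also possible with classic Unix tools: extract the field with sed, then count with sort and uniq. The sample lines are hypothetical; in practice the kubectl stream feeds the pipeline.

```shell
# Requests per host via sort | uniq -c: a rough stand-in for group-by.
# Assumes JSON logs with a "host" field (sample lines are hypothetical).
printf '%s\n' \
  '{"host":"api.example.com"}' \
  '{"host":"blog.example.com"}' \
  '{"host":"api.example.com"}' \
  '{"host":"api.example.com"}' \
| sed 's/.*"host":"\([^"]*\)".*/\1/' \
| sort | uniq -c | sort -rn
```

The busiest host lands at the top of the output, which is usually all you need to know when the controller's CPU is pinned.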


Summary

You don't always need a heavy observability platform to debug Kubernetes. For real-time incidents, the combination of kubectl and loglens gives you powerful, structured analysis directly in your terminal.

  • Use pipes (|) to connect kubectl to LogLens.
  • Filter by host to isolate microservices.
  • Monitor upstream_response_time to check Pod health.
