Kubernetes NGINX Sidecar Reverse Proxy - Connection Refused on Localhost

While using an NGINX reverse proxy as a sidecar container for your microservice, we often use localhost to connect to the application container in the same Pod,

but this can potentially harm your application's availability in certain setups and lead to a lot of 502 or connection refused errors
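
For context, here is a minimal sketch of what such a Pod can look like - the names, images and ports are placeholders, not from the original setup. The important detail is that both containers share the Pod's network namespace, which is why the sidecar can reach the application over localhost

apiVersion: v1
kind: Pod
metadata:
  name: users-api                      # example Pod name
spec:
  containers:
    - name: app                        # the application container, listening on port 5000
      image: example/users-api:latest
      ports:
        - containerPort: 5000
    - name: nginx-sidecar              # the NGINX reverse proxy sidecar
      image: nginx:alpine
      ports:
        - containerPort: 80
      volumeMounts:
        - name: nginx-conf
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
  volumes:
    - name: nginx-conf
      configMap:
        name: users-api-nginx-conf     # ConfigMap holding an nginx.conf like the one below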


Here is a sample log from the NGINX sidecar container with a connection refused error message

2024/02/10 18:50:59 [error] 7#0: *2943813 connect() failed (111: Connection refused) while connecting to upstream, client: 172.31.61.118, server: _, request: "POST /users/get HTTP/1.1", upstream: "http://[::1]:5000/users/get", host: "internal-api.gritfy.io"

If you look at the NGINX configuration of this sidecar container, it looks like this

events {
    worker_connections 1024;
}
http {

    log_format upstream_time '$http_x_forwarded_for - $remote_user [$time_local] '
                             '"$request" $status $body_bytes_sent '
                             '"$http_referer" "$http_user_agent" '
                             'rt=$request_time uct="$upstream_connect_time" uht="$upstream_header_time" urt="$upstream_response_time"';
    
    upstream app_server {
        server localhost:5000;
      }
    
    server 
    {
        listen      80;
        server_name _;
        charset     utf-8;
        client_max_body_size 100m;
            
        access_log /etc/nginx/logs/access.log upstream_time;
        
        # gzip settings
        gzip on;
        
        # some more content omitted for brevity

        location / {
            proxy_redirect off;
            proxy_pass http://app_server;
        }
    }
}

Neat and simple: we define an upstream app_server and send the traffic to the backend application container at localhost:5000

It looks clean and works too, but what caused these connection refused issues then?

It's due to DNS round-robin and localhost resolving to both IPv4 and IPv6 loopback addresses.

 

Localhost DNS resolution - IPv4 and IPv6

If your base image has both IPv4 and IPv6 support enabled, an nslookup of localhost returns a result like this

/etc/nginx # nslookup localhost
Server: 10.100.0.10
Address: 10.100.0.10#53

Non-authoritative answer:
Name: localhost
Address: 127.0.0.1
Name: localhost
Address: ::1

As you can see, the domain name localhost resolves to both the IPv4 and IPv6 loopback IP addresses

  • IPv4 Loopback IP address - 127.0.0.1
  • IPv6 Loopback IP address - ::1 ( short form of 0:0:0:0:0:0:0:1 )

The problem with this DNS round-robin is that localhost resolves to both IPv4 and IPv6, and if your backend does not listen on the IPv6 interface, the connection to ::1 is refused and you will see a 502

As the name suggests, due to the round-robin methodology of this DNS resolution, the probability of hitting the IPv6 address is roughly 50%. It also means roughly 50% of your customer calls can fail
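
You can confirm this from inside the application container. If the process is bound only to 0.0.0.0:5000 ( IPv4 ) and there is no [::]:5000 listener, every connection the sidecar makes to ::1 is refused. A quick check - the Pod, container and port here are examples, and the image is assumed to ship the ss utility

# list the listening TCP sockets inside the application container
kubectl exec users-api -c app -- ss -lntp

# a backend bound only to IPv4 shows something like this - note there is no [::]:5000 entry
# State   Recv-Q  Send-Q  Local Address:Port   Peer Address:Port
# LISTEN  0       128     0.0.0.0:5000         0.0.0.0:*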

 

How to resolve this on the NGINX Sidecar container

The solution to this problem is straightforward, and there are multiple ways to achieve it.

With Loopback IP

The easiest way to solve this localhost IPv6 resolution problem is to use the IPv4 loopback IP directly, instead of the hostname localhost

Instead of

proxy_pass http://localhost:5000;

use

proxy_pass http://127.0.0.1:5000;

This removes the need for DNS resolution every time a call is made, and avoids the DNS cache issues too.

If you are using an upstream block like this

upstream app_server {
  server localhost:5000;
}

You can change it to this

upstream app_server {
  server 127.0.0.1:5000;
}
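
After changing the upstream, you can validate and reload the sidecar NGINX in place, without restarting the Pod - the Pod and container names below are examples

# test the configuration syntax inside the sidecar container
kubectl exec users-api -c nginx-sidecar -- nginx -t

# reload NGINX with the new configuration
kubectl exec users-api -c nginx-sidecar -- nginx -s reload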

 

With Resolver and IPv6 off

Another intuitive and recommended way is to use the resolver directive of NGINX

The resolver directive of NGINX is used to configure DNS or name resolution settings such as

  • DNS server used for name resolution
  • DNS cache validity interval
  • IPv4 or IPv6 preference, etc.

With resolver, you can disable IPv6 resolution like this

resolver 8.8.8.8 ipv6=off valid=30s;

Here we are using the public DNS 8.8.8.8, but you can optionally use the DNS server IP of your Kubernetes cluster, which you can find in /etc/resolv.conf when you exec into any Pod or SSH to a worker node

It might look something like this

search default.svc.cluster.local svc.cluster.local cluster.local home
nameserver 10.96.0.10
options ndots:5

To use the internal name server of your cluster, you can use its IP directly ( you have to find yours )

resolver 10.96.0.10 ipv6=off valid=30s;
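
If you prefer to look it up with kubectl instead of reading /etc/resolv.conf, the cluster DNS IP is the ClusterIP of the kube-dns Service - the Service usually keeps this name even when CoreDNS is the actual implementation

# print the ClusterIP of the cluster DNS service
kubectl -n kube-system get svc kube-dns -o jsonpath='{.spec.clusterIP}'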

If you do not want to hardcode an IP address and prefer something common across Kubernetes clusters, you can use the kube-dns hostname as follows

resolver kube-dns.kube-system.svc.cluster.local ipv6=off;

Here is the full NGINX configuration sample with the resolver ( this can be written in many other ways too)

events {
    worker_connections 1024;
}
http {

    log_format upstream_time '$http_x_forwarded_for - $remote_user [$time_local] '
                             '"$request" $status $body_bytes_sent '
                             '"$http_referer" "$http_user_agent" '
                             'rt=$request_time uct="$upstream_connect_time" uht="$upstream_header_time" urt="$upstream_response_time"';

    
    upstream app_server {
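        # note: a resolver inside an upstream block needs NGINX Plus or a recent open-source NGINX;
        # on older versions, move this directive up to the http or server context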
        resolver kube-dns.kube-system.svc.cluster.local ipv6=off;
        server localhost:5000;
      }
    
    server 
    {
        listen      80;
        server_name _;
        charset     utf-8;
        client_max_body_size 100m;
            
        access_log /etc/nginx/logs/access.log upstream_time;
        
        # gzip settings
        gzip on;
        gzip_disable "msie6";
        gzip_vary on;
        gzip_proxied any;
        gzip_comp_level 6;
        gzip_buffers 32 16k;
        gzip_http_version 1.1;
        gzip_min_length 256;
        gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/vnd.ms-fontobject application/x-font-ttf font/opentype image/svg+xml image/x-icon;

        location ~ \.(download)$ {
            add_header 'Cache-Control' 'private';
            expires 0;
        }

        location ~ \.(aspx|jsp|cgi)$ {
            return 410;
        }

        location /v4 {
            try_files $uri @yourapplication; 
        }

        location /v5 {
            try_files $uri @yourapplication;
        }

        location /v6 {
            try_files $uri @yourapplication;
        }


        location @yourapplication {

            if ($request_method = OPTIONS) {
                add_header 'Access-Control-Allow-Origin' '*';
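                # more_set_headers comes from the headers-more-nginx-module, which has to be built in or loaded separately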
                more_set_headers -s '400 401 402 403 404 409 412 413 429 500 502 207 202 ' 'Access-Control-Allow-Origin: *' ;

                add_header 'Access-Control-Allow-Credentials' 'true';
                add_header 'Access-Control-Allow-Methods' 'GET, POST, PATCH, PUT, DELETE, OPTIONS';
                add_header 'Access-Control-Expose-Headers' 'Content-Disposition';
                add_header 'Access-Control-Expose-Headers' 'Content-Length';
                return 204;
            }

            add_header 'Access-Control-Allow-Origin' '*';
            more_set_headers -s '400 401 402 403 404 409 412 413 429 500 502 207 202 ' 'Access-Control-Allow-Origin: *' ;

            add_header 'Access-Control-Allow-Credentials' 'true';
            add_header 'Access-Control-Allow-Methods' 'GET, POST, PATCH, PUT, DELETE, OPTIONS';
            add_header 'Access-Control-Expose-Headers' 'Content-Disposition';
            add_header 'Access-Control-Expose-Headers' 'Content-Length';

            

            set $xip "$http_x_forwarded_for";

            if ($http_x_real_ip){
              set $xip "$http_x_real_ip";
            }

            set_real_ip_from 0.0.0.0/0;
            real_ip_header X-Forwarded-For;
            real_ip_recursive on;
            proxy_connect_timeout 180s;
            proxy_send_timeout 180s;
            proxy_read_timeout 180s;
            proxy_set_header X-Forwarded-For $xip;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Real-IP $xip;
            proxy_set_header Host $host;
            proxy_set_header Sec-Fetch-Site same-site;
            
            #proxy_set_header Origin $http_host;
            proxy_hide_header Access-Control-Allow-Origin;
            proxy_hide_header Access-Control-Allow-Credentials;
            proxy_hide_header Access-Control-Allow-Methods;
            proxy_hide_header Access-Control-Allow-Headers;
            #add_header Access-Control-Allow-Origin $host;
            # we don't want nginx trying to do something clever with
            # redirects, we set the Host: header above already.
            
            
            proxy_redirect off;
            proxy_pass http://app_server;
        }

    }
  }

 

What if you are not using localhost?

If you are not using localhost but a Service URL or any other external URL instead, you can follow the same resolver method
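
For example, here is a minimal sketch of the resolver approach with a Kubernetes Service as the upstream - the Service name and port are placeholders. Using a variable in proxy_pass forces NGINX to resolve the hostname at request time through the configured resolver, instead of only once at startup

resolver kube-dns.kube-system.svc.cluster.local ipv6=off valid=30s;

location / {
    # the variable makes NGINX re-resolve the hostname at runtime via the resolver above
    set $backend "http://users-service.default.svc.cluster.local:5000";
    proxy_pass $backend;
}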

Here is a detailed article I have written about this and the DNS cache issue with external hostnames

NGINX Dynamic IP address upstream - DNS Cache issue | How to solve

 

Hope this helps

 

Cheers
Sarav AK

Follow me on Linkedin My Profile
Follow DevopsJunction on Facebook or Twitter
For more practical videos and tutorials, subscribe to our channel

Buy Me a Coffee at ko-fi.com
