Set up nginx not to crash if a host in an upstream is not found
We have several Rails apps under a common domain in Docker, and we use nginx to direct requests to the specific apps:
```text
our_dev_server.com/foo # proxies to foo app
our_dev_server.com/bar # proxies to bar app
```
The config looks like this:
```nginx
upstream foo {
    server foo:3000;
}

upstream bar {
    server bar:3000;
}

# and about 10 more...

server {
    listen *:80 default_server;
    server_name our_dev_server.com;

    location /foo {
        # this is specific to asset management in rails dev
        rewrite ^/foo/assets(/.*)$ /assets/$1 break;
        rewrite ^/foo(/.*)$ /foo/$1 break;
        proxy_pass http://foo;
    }

    location /bar {
        rewrite ^/bar/assets(/.*)$ /assets/$1 break;
        rewrite ^/bar(/.*)$ /bar/$1 break;
        proxy_pass http://bar;
    }

    # and about 10 more...
}
```
If one of these apps is not started, nginx fails and refuses to start:

```text
host not found in upstream "bar:3000" in /etc/nginx/conf.d/nginx.conf:6
```

We don't need all of them to be up at once, but nginx fails otherwise. How can we make nginx ignore the upstreams it cannot resolve?
Solution 1:[1]
1. If you can use a static IP, just use that; nginx will start up and simply return 503s if the host doesn't respond.
2. Use the `resolver` directive to point at something that can resolve the host, regardless of whether it's currently up.
3. Resolve the host at the `location` level, if you can't do the above (this will allow nginx to start/run):

```nginx
location /foo {
    resolver 127.0.0.1 valid=30s;  # or some other DNS (your company's internal DNS server)
    #resolver 8.8.8.8 valid=30s;
    set $upstream_foo foo;
    proxy_pass http://$upstream_foo:80;
}

location /bar {
    resolver 127.0.0.1 valid=30s;  # or some other DNS (your company's internal DNS server)
    #resolver 8.8.8.8 valid=30s;
    set $upstream_bar bar;
    proxy_pass http://$upstream_bar:80;
}
```
Solution 2:[2]
For me, option 3 of the answer from @Justin/@duskwuff solved the problem, but I had to change the resolver IP to `127.0.0.11` (Docker's embedded DNS server):
```nginx
location /foo {
    resolver 127.0.0.11 valid=30s;
    set $upstream_foo foo;
    proxy_pass http://$upstream_foo:80;
}

location /bar {
    resolver 127.0.0.11 valid=30s;
    set $upstream_bar bar;
    proxy_pass http://$upstream_bar:80;
}
```
But as @Justin/@duskwuff mentioned, you could use any other external DNS server.
Solution 3:[3]
The main advantage of using `upstream` is to define a group of servers that can listen on different ports, and to configure load balancing and failover between them.
In your case you are only defining one primary server per upstream, so it must be up.
Instead, use variables in your `proxy_pass` directives, and remember to handle the possible errors (404s, 503s) that you may get when a target server is down.
Example of using a variable:
```nginx
server {
    listen 80;

    set $target "http://target-host:3005"; # Here's the secret

    location / {
        proxy_pass $target;
    }
}
```
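Since a variable-based `proxy_pass` returns a 502 to clients when the target is unreachable, it helps to pair it with an error page, as the answer suggests. A minimal sketch; the target host, port, and `/50x.html` page are assumptions, not part of the original answer:

```nginx
server {
    listen 80;

    set $target "http://target-host:3005";

    location / {
        # If target-host is a DNS name, a resolver directive is
        # also required here (see Solution 1).
        proxy_pass $target;

        # When the upstream is unreachable, nginx produces 502/504;
        # serve a static page instead of the default error output.
        proxy_intercept_errors on;
        error_page 502 503 504 /50x.html;
    }

    location = /50x.html {
        root /usr/share/nginx/html;  # assumed location of a static error page
    }
}
```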
Solution 4:[4]
Another quick and easy fix for a scenario like this: I can start and stop containers without my main server bombing out. Map a hostname to the static IPv4 gateway of the Docker network via `extra_hosts` (you can find yours by running `ifconfig` inside a container attached to the network). The hard-coded gateway address is the only slight downside, but it works:

```yaml
extra_hosts:
  - "dockerhost:172.20.0.1" # static IPv4 gateway of the network
networks:
  - my_network
```
```nginx
server {
    listen 80;
    server_name servername;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_pass https://dockerhost:12345;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```
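For context, a minimal docker-compose sketch of where the `extra_hosts` fragment above could live; the service name, image, and subnet are assumptions, not part of the original answer:

```yaml
# docker-compose.yml -- hypothetical minimal context for the snippet above
services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    extra_hosts:
      - "dockerhost:172.20.0.1"  # gateway IP of my_network
    networks:
      - my_network

networks:
  my_network:
    ipam:
      config:
        - subnet: 172.20.0.0/16  # pinning the subnet makes the gateway address predictable
```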
Solution 5:[5]
I had the same "host not found" issue because part of my host was being mapped using `$uri` instead of `$request_uri`:

```nginx
proxy_pass http://one-api-service.$kubernetes:8091/auth;
```

When the request moved on to the auth subrequest, `$uri` lost its initial value. Changing the mapping to use `$request_uri` instead of `$uri` solved my issue:
```nginx
map $request_uri $kubernetes {
    # ...
}
```
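The original map body is elided in the answer. Purely as an illustration of the shape such a map might take (the URI prefixes and namespace values here are hypothetical):

```nginx
# Hypothetical illustration: derive a host component from the original
# request URI. Unlike $uri, $request_uri keeps its value in subrequests.
map $request_uri $kubernetes {
    default  default-namespace;
    ~^/v1/   namespace-one;
    ~^/v2/   namespace-two;
}
```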
Solution 6:[6]
Based on Justin's answer, the fastest way to do the trick is to replace the final host with an IP address. Assign a static IP address to each container with the `--ip 172.18.0.XXX` parameter. nginx won't crash at startup and will simply respond with a 502 error if the host is not available.
Run a container with a static IP:

```shell
docker run --ip 172.18.0.XXX something
```

Nginx config:

```nginx
location /foo {
    proxy_pass http://172.18.0.XXX:80;
}
```
Refer to this post on how to set up a subnet with Docker.
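The referenced post is not reproduced here; as a rough sketch, creating a user-defined network with a fixed subnet so that static container IPs are possible typically looks like the following (the network name, subnet, and image names are assumptions):

```shell
# Create a user-defined bridge network with a known subnet,
# so static container addresses are predictable.
docker network create --subnet 172.18.0.0/16 my_subnet

# Attach containers to it with fixed addresses.
docker run --net my_subnet --ip 172.18.0.10 -d foo-image
docker run --net my_subnet --ip 172.18.0.11 -d bar-image
```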
Solution 7:[7]
We had a similar problem. We solved it by dynamically including conf files for the upstream containers, generated by a sidecar container that reacts to events on `docker.sock`; the files are pulled in with a wildcard include in the upstream configuration:

```nginx
include /etc/upstream/container_*.conf;
```
In case the list is empty, we added a server entry that is permanently down, so the effective list of servers is never empty. This entry never receives any requests:

```nginx
server 127.0.0.1:10082 down;
```
And a final entry that points to an (internal) server in nginx that hosts error pages (e.g. 503):

```nginx
server 127.0.0.1:10082 backup;
```
So the final upstream configuration looks like this:
```nginx
upstream my-service {
    include /etc/upstream/container_*.conf;
    server 127.0.0.1:10082 down;
    server 127.0.0.1:10082 backup;
}
```
In the nginx configuration we added a server listening on the error port:
```nginx
server {
    listen 10082;

    location / {
        return 503;
        add_header Content-Type text/plain;
    }

    error_page 503 @maintenance;

    location @maintenance {
        internal;
        rewrite ^(.*)$ /503.html break;
        root error_pages/;
    }
}
```
As said, the configuration file for each upstream container is generated by a script (bash, curl, jq) that talks to `docker.sock` via curl and its REST API to get the required information (IP, port), and uses this template to generate each file:

```nginx
server ${ip}:${port} fail_timeout=5s max_fails=3;
```
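The actual sidecar script is not included in the answer. A simplified sketch of the same idea follows; the output path, port, jq filters, and the final reload step are assumptions, and real setups may need to filter by label or network:

```shell
#!/bin/sh
# Hypothetical sketch: regenerate one upstream conf file per running container
# by querying the Docker Engine REST API over the unix socket.
OUT_DIR=/etc/upstream

curl -s --unix-socket /var/run/docker.sock \
     http://localhost/containers/json |
jq -r '.[] | .Names[0][1:] + " " + .NetworkSettings.Networks[].IPAddress' |
while read -r name ip; do
    # Assumes every app container serves on port 3000.
    echo "server ${ip}:3000 fail_timeout=5s max_fails=3;" \
        > "${OUT_DIR}/container_${name}.conf"
done

# Reload nginx so it re-reads the wildcard includes.
nginx -s reload
```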
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | hooknc |
| Solution 2 | DJDaveMark |
| Solution 3 | emyller |
| Solution 4 | Vladimir Djuricic |
| Solution 5 | Washington Guedes |
| Solution 6 | |
| Solution 7 | Gerald Mücke |