django errno 104 Connection reset by peer

I am trying to run my Django server on an Ubuntu instance on AWS EC2. I am using gunicorn to run the server like this:

gunicorn --workers 4 --bind 127.0.0.1:8000 woc.wsgi:application --name woc-server --log-level=info --worker-class=tornado --timeout=90 --graceful-timeout=10

When I make a request I get a 502 Bad Gateway in the browser. Here is the server log: http://pastebin.com/Ej5KWrWs

Here are the sections of the settings.py file where the behaviour changes based on the hostname.

iUbuntu is the hostname of my laptop

import socket

if socket.gethostname() == 'iUbuntu':
    '''
    Development mode
    "iUbuntu" is the hostname of Ishan's PC
    '''
    DEBUG = TEMPLATE_DEBUG = True
else:
    '''
    Production mode
    Anywhere else than Ishan's PC is considered as production
    '''
    DEBUG = TEMPLATE_DEBUG = False

if socket.gethostname() == 'iUbuntu':
    '''Development'''
    ALLOWED_HOSTS = ['*', ]
else:
    '''Production Won't let anyone pretend as us'''
    ALLOWED_HOSTS = ['domain.com', 'www.domain.com',
                     'api.domain.com', 'analytics.domain.com',
                     'ops.domain.com', 'localhost', '127.0.0.1']

(I don't get what the purpose of this section of the code is. Since I inherited the code from someone and the server was working, I didn't bother removing it without understanding what it does.)

if socket.gethostname() == 'iUbuntu':
    MAIN_SERVER = 'http://localhost'
else:
    MAIN_SERVER = 'http://domain.com'

I can't figure out what's the problem here. The same code runs fine with gunicorn on my laptop.

I have also made a small hello-world Node.js app to serve on port 8000 to test the nginx configuration, and it runs fine. So there are no nginx errors.
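For reference, the same check works with a few lines of stock Python instead of Node; any process answering on 127.0.0.1:8000 will do (this sketch is not part of the original setup):

# Minimal stand-in for the upstream app: answer on 127.0.0.1:8000 so that
# nginx's proxy_pass target has something to talk to.
from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b'hello world\n'
        self.send_response(200)
        self.send_header('Content-Type', 'text/plain')
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == '__main__':
    HTTPServer(('127.0.0.1', 8000), HelloHandler).serve_forever()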

UPDATE:

I set DEBUG to True and copied the traceback: http://pastebin.com/ggFuCmYW

UPDATE:

Thanks to the reply by @ARJMP. The problem is indeed that the celery consumer cannot connect to the broker.

I am configuring celery like this: app.config_from_object('woc.celeryconfig'), and the contents of celeryconfig.py are:

BROKER_URL = 'amqp://celeryuser:celerypassword@localhost:5672/MyVHost'
CELERY_RESULT_BACKEND = 'rpc://'
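For context, here is a minimal sketch of how that configuration is typically wired into the Celery app. The module layout mirrors the woc.async / woc.celeryconfig names used in this question; the details are assumptions, not the actual project code:

# woc/async.py (sketch, not the real project file)
from celery import Celery

app = Celery('woc')

# Load BROKER_URL and CELERY_RESULT_BACKEND from woc/celeryconfig.py,
# as described above.
app.config_from_object('woc.celeryconfig')

@app.task
def analyse_urls(**kwargs):
    # Hypothetical task body; the real task lives in the project code.
    pass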

I am running the worker like this: celery worker -A woc.async -l info --autoreload --include=woc.async -n woc_celery.%h

And the error that I am getting is:

consumer: Cannot connect to amqp://celeryuser:**@127.0.0.1:5672/MyVHost: [Errno 104] Connection reset by peer.



Solution 1:[1]

OK, so your problem, as far as I can tell, is that your celery worker can't connect to the broker. You have some middleware trying to call a celery task, so it will fail on every request (unless that analyse_urls.delay(**kw) call is conditional).
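If the middleware is what fires the task, one way to stop every request from failing while the broker is down is to guard the .delay() call. This is only a sketch assuming Django's new-style middleware: the class name and the keyword arguments are placeholders, and only analyse_urls and the woc.async module come from the question.

import logging

from woc.async import analyse_urls  # task module taken from the worker command

logger = logging.getLogger(__name__)

class AnalyseUrlsMiddleware:  # hypothetical name; adapt to the real middleware
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        try:
            # Fire-and-forget; a broker outage should log, not break the page.
            analyse_urls.delay(path=request.path)  # placeholder kwargs
        except Exception:
            # The exact exception depends on the Celery version
            # (e.g. kombu's OperationalError in newer releases).
            logger.exception('Could not queue analyse_urls task')
        return self.get_response(request)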

I found a similar issue that was solved by upgrading their version of celery.

Another cause could be that the EC2 instance can't connect to the message queue server because the EC2 security group won't allow it. If the message queue is running on a separate server, make sure you've allowed the connection between the EC2 instance and the message queue in the AWS EC2 security groups.
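A quick way to rule the security group or networking in or out is to check from the EC2 instance whether the broker port is reachable at all. A sketch; the host and port come from the BROKER_URL in the question, so adjust them if RabbitMQ runs elsewhere:

import socket

broker_host, broker_port = 'localhost', 5672  # from the question's BROKER_URL

try:
    # If this fails, it's a networking / security-group / rabbitmq-not-running
    # problem rather than a Celery configuration problem.
    sock = socket.create_connection((broker_host, broker_port), timeout=5)
    sock.close()
    print('Broker port is reachable')
except OSError as exc:
    print('Cannot reach broker:', exc)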

Solution 2:[2]

Try setting the RabbitMQ connection timeout to 30 seconds. This usually clears up the problem of being unable to connect to a server.

You can add connection_timeout to your connection string:

BROKER_URL = 'amqp://celeryuser:celerypassword@localhost:5672/MyVHost?connection_timeout=30'

Note the format with the question mark: ?connection_timeout=30

This is a query-string parameter for the RabbitMQ connection string.

Also, make sure the URL points to your production server name/URL, and not localhost, in your production environment.
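One way to verify the broker URL outside of Django and Celery is to open a connection with kombu directly. A sketch; here the timeout is passed as kombu's connect_timeout keyword rather than in the URL query string, and the URL is the one from the question:

from kombu import Connection

BROKER_URL = 'amqp://celeryuser:celerypassword@localhost:5672/MyVHost'

try:
    # connect_timeout is kombu's keyword for the connection timeout.
    with Connection(BROKER_URL, connect_timeout=30) as conn:
        conn.ensure_connection(max_retries=1)
    print('Broker connection OK')
except Exception as exc:
    print('Broker connection failed:', exc)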

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
[1] Solution 1: Community
[2] Solution 2: Derick Bailey