Can't log MLflow artifacts to S3 with docker-based tracking server

I'm trying to set up a simple MLflow tracking server with Docker that uses a MySQL backend store and an S3 bucket for artifact storage. I'm using a simple docker-compose file to set this up on a server and supplying all of the credentials through a .env file. When I try to run the sklearn_elasticnet_wine example from the mlflow repo here: https://github.com/mlflow/mlflow/tree/master/examples/sklearn_elasticnet_wine using TRACKING_URI = "http://localhost:5005" from the machine hosting my tracking server, the run fails with the following error: botocore.exceptions.NoCredentialsError: Unable to locate credentials.

I've verified that my environment variables are correct and available in my mlflow_server container. The runs show up in my backend store, so the run only seems to be failing at the artifact logging step, and I'm not sure why. I've seen several examples of how to set up a tracking server online, including: https://towardsdatascience.com/deploy-mlflow-with-docker-compose-8059f16b6039. Some also use MinIO, but others just specify their S3 location as I have. I'm not sure what I'm doing wrong at this point. Do I need to explicitly set the ARTIFACT_URI as well? Should I be using MinIO? Eventually, I'll be logging runs to the server from another machine, hence the nginx container. I'm pretty new to all of this, so I'm hoping it's something really obvious and easy to fix, but so far Google has failed me. TIA.

version: '3'

services:
  app: 
    restart: always
    build: ./mlflow
    image: mlflow_server
    container_name: mlflow_server
    expose:
      - 5001
    ports:
      - "5001:5001"
    networks:
      - internal 
    environment:
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
      - AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}
      - AWS_S3_BUCKET=${AWS_S3_BUCKET}
      - DB_USER=${DB_USER}
      - DB_PASSWORD=${DB_PASSWORD}
      - DB_PORT=${DB_PORT}
      - DB_NAME=${DB_NAME}
    command: >
      mlflow server 
      --backend-store-uri mysql+pymysql://${DB_USER}:${DB_PASSWORD}@${DB_HOST}:${DB_PORT}/${DB_NAME} 
      --default-artifact-root s3://${AWS_S3_BUCKET}/mlruns/
      --host 0.0.0.0 
      --port 5001

  nginx: 
    restart: always
    build: ./nginx
    image: mlflow_nginx
    container_name: mlflow_nginx
    ports:
      - "5005:80" 
    networks:
      - internal 
    depends_on:
      - app

networks:
  internal:
    driver: bridge


Solution 1:[1]

Finally figured this out. I didn't realize that the client also needs access to the AWS credentials for S3 storage. With the compose setup above, the credentials only exist inside the mlflow_server container, but the MLflow client uploads artifacts to S3 directly, so the environment the run is launched from needs AWS credentials as well.
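For illustration, here is a minimal sketch of what that looks like on the client side. The credential values and the model.pkl file are placeholders; you can just as well supply the credentials via ~/.aws/credentials or exported environment variables instead of setting them in code.

    import os
    import mlflow

    # The MLflow client talks to S3 directly when logging artifacts, so the
    # AWS credentials must be available in the environment launching the run,
    # not just inside the tracking-server container.
    os.environ["AWS_ACCESS_KEY_ID"] = "<your-access-key-id>"          # placeholder
    os.environ["AWS_SECRET_ACCESS_KEY"] = "<your-secret-access-key>"  # placeholder
    os.environ["AWS_DEFAULT_REGION"] = "<your-region>"                # placeholder

    # Point the client at the nginx proxy in front of the tracking server.
    mlflow.set_tracking_uri("http://localhost:5005")

    with mlflow.start_run():
        mlflow.log_param("alpha", 0.5)
        # Artifact logging is the step that previously failed with NoCredentialsError.
        mlflow.log_artifact("model.pkl")  # hypothetical local file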

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source

Solution 1: ithunkathought