Description
Bug Overview
I'm using the nginxinc/nginx-s3-gateway Docker image as a proxy layer to serve static frontend files stored in a private S3 bucket. The container is running with the correct IAM role and is able to access the S3 bucket.
However, when I configure a custom location block like the following:
location ^~ /my-page {
    try_files $uri $uri/ @s3;
}
I frequently encounter 404 errors when accessing paths like /my-page or /my-page/index.html. It appears that the S3 gateway is not resolving the URI correctly, or is not falling back to the @s3 location as expected.
Would appreciate any guidance on whether this is a misconfiguration or an issue with how the S3 gateway handles rewrites and try_files.
- Files do exist in S3 with paths like my-page/index.html
- IAM role and permissions are correctly configured (files are accessible when requested directly via correct S3 path)
- The 404s appear inconsistently depending on the request path
Below is the full config for reference:
Dockerfile:
FROM nginxinc/nginx-s3-gateway:latest
COPY js_fetch_trusted_certificate.conf /etc/nginx/templates/gateway/js_fetch_trusted_certificate.conf.template
COPY rules.conf /etc/nginx/templates/gateway/s3_server.conf.template
js_fetch_trusted_certificate.conf:
js_fetch_trusted_certificate /etc/ssl/certs/Amazon_Root_CA_1.pem;
rules.conf:
location /my-page {
    try_files $uri $uri/ @s3;
}
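For context on why I expected this to work: since the gateway serves objects from S3 rather than from the local filesystem, try_files $uri $uri/ has nothing local to match and always falls through to @s3, so a request for /my-page is forwarded as the S3 key my-page, which is not an object. The variant below is a sketch of a workaround I considered (assuming the image's default @s3 named location is present, that index files are named index.html, and /nonexistent is just a placeholder path that never exists locally), which rewrites directory-style paths to their index object before handing off to the gateway:

```nginx
location /my-page {
    # Map "/my-page" and "/my-page/" to the index object explicitly,
    # since try_files cannot stat objects that live in S3.
    rewrite ^(/my-page)/?$ $1/index.html last;

    # Skip local filesystem lookups entirely and send every request
    # to the gateway's @s3 named location.
    try_files /nonexistent @s3;
}
```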
Expected Behavior
Requests under /my-page should consistently map to the corresponding S3 objects (e.g. my-page/index.html) and never return 404.
Steps to Reproduce the Bug
- Create an S3 bucket with private access
- Run the container with an IAM role that can read the bucket
- Build the Docker container from the attached Dockerfile with the custom rules above
Environment Details
- Version of the S3 container used: latest (https://hub.docker.com/r/nginxinc/nginx-s3-gateway)
- Target deployment platforms [e.g. AWS/GCP/local cluster/etc...]: AWS
- S3 backend implementation [e.g. AWS, Ceph, NetApp StorageGrid, etc...]: AWS
- Authentication method [e.g. IAM, IAM with Fargate, IAM with K8S, AWS Credentials, etc...]: IAM with K8s
Additional Context
No response