I am currently working on a Jenkins declarative pipeline to connect Jenkins builds with Kubernetes, Helm and Netflix Spinnaker. One of my TODOs has been to deploy different artifacts (e.g. a Helm chart like my-chart-0.0.1.tar.gz) to an AWS S3-compatible bucket inside a Minio installation with the help of the pipeline-aws-plugin.

When running

withAWS(endpointUrl: 'https://minio.domain.tld', credentials: config.credentialsId) {
	s3Upload(file: "my-file.txt", bucket: "my-bucket")
}

my pipeline always threw the following exception:

com.amazonaws.services.s3.model.AmazonS3Exception: Not Found (Service: Amazon S3; Status Code: 404; Error Code: 404 Not Found; Request ID: null; S3 Extended Request ID: null), S3 Extended Request ID: null
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1695)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1350)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1101)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:758)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:732)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:714)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:674)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:656)
	at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:520)
	at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4705)
	at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4652)

Trying other clients with Minio

At first I suspected some misconfiguration of my Minio installation. I checked the S3 upload with mc and AWS’ own CLI. Both worked flawlessly, so it had to be something else.
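For reference, the two checks looked roughly like this (a sketch: the myminio alias is a placeholder and is assumed to have been configured beforehand with mc config host add):

# Minio client: copy a test file into the bucket
mc cp my-file.txt myminio/my-bucket/

# AWS CLI: the same upload, pointing the CLI at the Minio endpoint
aws --endpoint-url https://minio.domain.tld s3 cp my-file.txt s3://my-bucket/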

Enable logging

To get some more debugging output, I configured Jenkins to log events for com.amazonaws and org.apache.http.wire. The debugging output does not show up inside the build job’s console output but under the configured logger.
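In Jenkins this is done under Manage Jenkins → System Log by adding a new log recorder with those two logger names at level FINE (or ALL). Roughly the same effect can be had from the script console; a minimal sketch using java.util.logging, which Jenkins uses internally:

// Raise the level of the two relevant loggers. The output still ends up
// in the Jenkins logs / log recorders, not in the build console.
import java.util.logging.Level
import java.util.logging.Logger

Logger.getLogger('com.amazonaws').setLevel(Level.FINE)
Logger.getLogger('org.apache.http.wire').setLevel(Level.FINE)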

Host-style access to S3 buckets

After scanning the debug output, I noticed the following:

http-outgoing-11 >> "PUT /my-file.txt HTTP/1.1[\r][\n]"
Jan 21, 2019 9:36:15 PM FINE org.apache.http.impl.conn.Wire wire
http-outgoing-11 >> "Host: my-bucket.minio.domain.tld[\r][\n]"
Jan 21, 2019 9:36:15 PM FINE org.apache.http.impl.conn.Wire wire
http-outgoing-11 >> "x-amz-content-sha256: UNSIGNED-PAYLOAD[\r][\n]"
...
http-outgoing-11 << "[\r][\n]"
Jan 21, 2019 9:36:15 PM FINE org.apache.http.impl.conn.Wire wire
http-outgoing-11 << "default backend - 404"
Jan 21, 2019 9:36:15 PM FINE com.amazonaws.services.s3.internal.S3ErrorResponseHandler createException
Failed in parsing the error response : default backend - 404

When pipeline-aws-plugin initiates a request to my bucket, it does not call https://minio.domain.tld/my-bucket (path-style access) but https://my-bucket.minio.domain.tld (host-style access). This is totally fine for AWS S3 buckets. But with the Minio deployment in our Kubernetes cluster, this does not work out of the box:

  1. By default, our Minio deployment does not use the --address parameter described in https://github.com/minio/minio/issues/4681.
  2. Our Minio ingress also does not listen on fourth-level domains like my-bucket.minio.domain.tld, so the nginx proxy returns the “default backend - 404” string seen in the log output above; the comparison below shows the difference between the two request styles.
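To make the difference concrete, this is roughly what the two request styles look like at the HTTP level (simplified, without the AWS Signature V4 headers):

# Host-style: the bucket is part of the host name.
# This is what pipeline-aws-plugin sent and what our ingress rejected.
PUT /my-file.txt HTTP/1.1
Host: my-bucket.minio.domain.tld

# Path-style: the bucket is the first path segment.
# This is what our single-host ingress can route.
PUT /my-bucket/my-file.txt HTTP/1.1
Host: minio.domain.tld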

Solving the issue

Instead of configuring host-style access, I fixed it by simply setting pathStyleAccessEnabled: true in my s3Upload step. When enabled, pipeline-aws-plugin does not use the bucket name as a subdomain but appends it to the endpoint’s host name as the first path segment:

withAWS(endpointUrl: 'https://minio.domain.tld', credentials: config.credentialsId) {
	s3Upload(pathStyleAccessEnabled: true, file: "my-file.txt", bucket: "my-bucket")
}
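Applied to the original TODO from the introduction, the upload of the packaged Helm chart looks the same; the path parameter of s3Upload sets the target key, and the charts/ prefix here is just an assumed layout:

withAWS(endpointUrl: 'https://minio.domain.tld', credentials: config.credentialsId) {
	// Upload the chart archive with path-style access; charts/ is a hypothetical prefix
	s3Upload(pathStyleAccessEnabled: true, file: 'my-chart-0.0.1.tar.gz', bucket: 'my-bucket', path: 'charts/my-chart-0.0.1.tar.gz')
}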

I am asking you for a donation.

Did you like the content, or did this article help and reduce the amount of time you struggled with this issue? Please donate a few bucks so I can keep solving challenges.
