
Apply skip_s3_checksum option to GetObject calls #36704

Open
b0bcarlson opened this issue Mar 14, 2025 · 13 comments
Labels
backend/s3 enhancement new new issue not yet triaged

Comments

@b0bcarlson

Terraform Version

Terraform v1.11.2
on linux_amd64
+ provider registry.terraform.io/digitalocean/digitalocean v2.49.1

Use Cases

Using "S3-compatible" APIs (specifically Backblaze in my case), where the API doesn't support the x-amz-checksum-mode header.

Attempted Solutions

Using version 1.11.1, which works as expected. Something changed in 1.11.2 that "broke" this; however, I was not able to find any changes between the versions that look like they would cause the new behavior.

Proposal

The documentation (https://developer.hashicorp.com/terraform/language/backend/s3#skip_s3_checksum) indicates that skip_s3_checksum only applies to uploads. My proposal is that it also apply to the GetObject action. As mentioned above, I have no issue with 1.11.1, but on 1.11.2 running terraform init returns an error:

Error: Error refreshing state: Unable to access object "terraform.tfstate" in S3 bucket "<snip>": operation error S3: GetObject, https response error StatusCode: 400, RequestID: <snip>, HostID: <snip>, api error InvalidArgument: Unsupported header 'x-amz-checksum-mode' received for this API call.

For reference, my backend block is:

terraform {
  backend "s3" {
    endpoints = {
      s3 = "https://s3.us-west-004.backblazeb2.com"
    }
    skip_credentials_validation = true
    skip_metadata_api_check     = true
    skip_region_validation      = true
    skip_requesting_account_id  = true
    region                      = "us-east-1"
    bucket                      = "<snip>"
    key                         = "terraform.tfstate"
    skip_s3_checksum            = true
  }
}

References

No response

@b0bcarlson b0bcarlson added enhancement new new issue not yet triaged labels Mar 14, 2025
b0bcarlson added a commit to b0bcarlson/dns that referenced this issue Mar 14, 2025
* Add permissions to publish action
Closes https://github.com/b0bcarlson/bobcodes.net/security/code-scanning/1

b0bcarlson added a commit to b0bcarlson/dns that referenced this issue Mar 14, 2025
* Pin terraform to 1.11.1; Pin digitalocean provider
Ref hashicorp/terraform#36704
@yoyrandao

+1, I've encountered the same issue on the same version, but a little bit differently:

Error saving state: failed to upload state: operation error S3: PutObject, https response error StatusCode: 400, RequestID: <REQUEST_ID>, HostID: <HOST_ID>, api error XAmzContentSHA256Mismatch: UnknownError

I'm also using S3-compatible storage, so it seems like it does not support these headers. I also haven't seen any breaking changes in the new releases.

The error reproduces on Terraform 1.11.2 (on 1.11.0 everything is OK).

@Khagou

Khagou commented Mar 17, 2025

Same problem:

InvalidArgument: x-amz-content-sha256 must be UNSIGNED-PAYLOAD or a valid SHA256

As the person above said, we had to go back to the previous release, 1.11.1.

@kuznero

kuznero commented Mar 17, 2025

Same with hcloud provider:

╷
│ Error: Failed to save state
│ 
│ Error saving state: failed to upload state: operation error S3: PutObject,
│ https response error StatusCode: 400, RequestID:
│ tx00000d821f687dafc497f-0067d882e3-a622973-***-prod1-ceph3, HostID:
│ a622973-***-prod1-ceph3-***-prod1, api error XAmzContentSHA256Mismatch:
│ UnknownError
╵
╷
│ Error: Failed to persist state to backend
│ 
│ The error shown above has prevented Terraform from writing the updated
│ state to the configured backend. To allow for recovery, the state has been
│ written to the file "errored.tfstate" in the current working directory.
│ 
│ Running "terraform apply" again at this point will create a forked state,
│ making it harder to recover.
│ 
│ To retry writing this state, use the following command:
│     terraform state push errored.tfstate
│ 
╵
$ terraform version
Terraform v1.11.2
on linux_amd64

@crw
Contributor

crw commented Mar 17, 2025

Thanks for this report. I'll send it across to the AWS Provider team at HashiCorp that maintains the s3 backend.

@crw
Contributor

crw commented Mar 17, 2025

Probably the result of updating the AWS SDK: #36625

@justinretzolk
Member

Confirmed that this was a result of the upgrade in #36625, which bumped the S3 module version to 1.75.2. That version includes a recent change that causes the x-amz-checksum-mode header to be included by default. Notably, Terraform does not set the ChecksumMode field; it's defaulted to ENABLED due to this recent upstream change. Similarly, after this update, the AWS Go SDK defaults to automatically calculating a CRC32 checksum for uploads when an algorithm is not specified (as happens when skip_s3_checksum is specified) or when a precalculated checksum isn't provided.

From what I've gathered, there seem to be two paths toward a resolution:

  1. Update hashicorp/aws-sdk-go-base to support the request_checksum_calculation and response_checksum_validation arguments from the AWS config and their respective environment variables (AWS_REQUEST_CHECKSUM_CALCULATION and AWS_RESPONSE_CHECKSUM_VALIDATION)
  2. The same can be accomplished with the relevant Options when creating the S3 client for the backend config

I'll leave it to the team to decide what's the correct approach. For whatever it's worth, I tested using the AWS CLI with and without the arguments set in my AWS config and verified that setting the arguments removed the x-amz-checksum-mode header from my request. I believe the latter of the two arguments is what would prevent the XAmzContentSHA256Mismatch error, but that's not entirely clear to me at this point in time.
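For concreteness, the shared AWS config arguments named in option 1 would look like the following. This is only a sketch of what I tested with the AWS CLI; the S3 backend does not currently read these settings, which is part of what this issue asks to change:

```ini
# ~/.aws/config
# Only calculate request checksums and validate response checksums
# when an operation requires them, rather than by default.
[default]
request_checksum_calculation  = when_required
response_checksum_validation  = when_required
```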

A few helpful documents from AWS on the matter:

@lantins

lantins commented Mar 21, 2025

I've seen the same issue using an OCI bucket with v1.11.2, but it works with v1.11.1:

api error InvalidArgument: x-amz-content-sha256 must be UNSIGNED-PAYLOAD or a valid sha256 value

@lantins lantins marked this as not a duplicate and then as a duplicate of #36742 Mar 21, 2025
@GreenVine

GreenVine commented Mar 22, 2025

Got the same issue when using OCI buckets: Error saving state: failed to upload state: operation error S3: PutObject. api error InvalidArgument: x-amz-content-sha256 must be UNSIGNED-PAYLOAD or a valid sha256 value.

I was able to mitigate this by setting the following environment variables:

  • AWS_REQUEST_CHECKSUM_CALCULATION: when_required
  • AWS_RESPONSE_CHECKSUM_VALIDATION: when_required (optional)
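As a sketch, the same mitigation in a shell session or CI step (variable names and values exactly as in the bullets above):

```shell
# Only compute/validate checksums when the operation actually requires
# them; this keeps the AWS SDK from sending the x-amz-checksum-mode
# and default CRC32 checksum headers that many S3-compatible
# endpoints reject.
export AWS_REQUEST_CHECKSUM_CALCULATION=when_required
export AWS_RESPONSE_CHECKSUM_VALIDATION=when_required
```

Run terraform init/plan/apply in the same shell (or set the variables in the CI job's environment) so the backend's AWS SDK picks them up.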

@kuznero

kuznero commented Mar 23, 2025


This worked for me. Thanks, @GreenVine!

@Aransh

Aransh commented Mar 27, 2025

Having the same issue with Linode Object Storage.
Following for an official solution.


@stufently

Hi, any standard solution?
For now I use:
export AWS_REQUEST_CHECKSUM_CALCULATION=when_required
export AWS_RESPONSE_CHECKSUM_VALIDATION=when_required

@0Zusu

0Zusu commented Apr 1, 2025

Hi, is there a way to get this fix through flags in the s3 backend? I would think that the flag skip_s3_checksum = true would have skipped this altogether, instead of requiring environment variables to skip basically the same thing.
