CredentialsProviderError: Could not load credentials from any providers intermittently #5829

Closed · 3 tasks done
danielblignaut opened this issue Feb 27, 2024 · 6 comments
Labels
bug This issue is a bug. p3 This is a minor priority issue. response-requested Waiting on additional info and feedback. Will move to "closing-soon" in 7 days.

Comments

@danielblignaut

Checkboxes for prior research

Describe the bug

Hi,

We are intermittently receiving the following error from our Fargate ECS task when it tries to connect to DynamoDB: CredentialsProviderError: Could not load credentials from any providers. We do have regular health checks that create new instances of the DynamoDB client, so it seems reasonable to assume this could be related to #4867.

It seems reasonable to assume that the SDK should manage credentials in the background, including caching them for subsequent client instantiations, etc.

Are there plans to add support for this?

SDK version number

@aws-sdk/[email protected]

Which JavaScript Runtime is this issue in?

Node.js

Details of the browser/Node.js/ReactNative version

v20.10.0

Reproduction Steps

Deploy an ECS task that creates multiple AWS clients in succession, roughly as in the sketch below.
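
Roughly along these lines (a simplified sketch, not our actual handler; the handler name and the ListTables call are illustrative):

import { DynamoDBClient, ListTablesCommand } from '@aws-sdk/client-dynamodb';

// Hypothetical per-request health check: a new client (and therefore a new
// credential provider chain) is created on every invocation.
export async function healthCheckHandler(): Promise<void> {
  const client = new DynamoDBClient({ region: process.env.AWS_REGION });
  await client.send(new ListTablesCommand({ Limit: 1 }));
  client.destroy();
}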

Observed Behavior

We are intermittently receiving the following error: CredentialsProviderError: Could not load credentials from any providers.

Expected Behavior

The AWS client should instantiate itself successfully

Possible Solution

The AWS SDK should take care to cache credentials

Additional Information/Context

No response

@danielblignaut danielblignaut added bug This issue is a bug. needs-triage This issue or PR still needs to be triaged. labels Feb 27, 2024
@kuhe
Contributor

kuhe commented Feb 27, 2024

Please add a code example of how you are initializing the client and what environment variables are being passed in.

For example:

process.env.??? = ???

new DynamoDB({
  credentials: ???
});

@aBurmeseDev aBurmeseDev added response-requested Waiting on additional info and feedback. Will move to "closing-soon" in 7 days. p3 This is a minor priority issue and removed needs-triage This issue or PR still needs to be triaged. labels Feb 27, 2024
@danielblignaut
Author

We were originally following:

import { DynamoDBClient } from '@aws-sdk/client-dynamodb';

this._client = new DynamoDBClient({
  endpoint: opts.endpoint,
});

after reading https://repost.aws/questions/QUXMX9z9zCSb21qsnedFZQpg/having-trouble-converting-aws-node-sdk-v2-lambda-to-v3

we now use:

import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { defaultProvider } from '@aws-sdk/credential-provider-node';

this.client = new DynamoDBClient({
  endpoint: opts.endpoint,
  credentials: defaultProvider(),
});

Within our ECS task definition, in both cases, the only relevant environment variable we have set is:

AWS_REGION=us-east-1

We have the correct task and execution IAM roles set. I'm not sure whether AWS injects additional runtime environment variables to help the credential provider; I haven't logged or included them here for security reasons.

On a per request basis, we do instantiate multiple AWS clients. My understanding was that the credentials config was internally a singleton-like object, so credentials would be shared across client instantiations. I now wonder whether this is not the case. If so, is it advised for us to follow a singleton pattern (one global client) used across requests? Are there any performance drawbacks to this?
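
For context, this is roughly what I assumed was happening internally, i.e. one shared provider whose cached credentials every client reuses. A rough sketch (not our actual code) of doing that explicitly:

import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { defaultProvider } from '@aws-sdk/credential-provider-node';

// A single provider instance; the default provider memoizes the resolved
// credentials, so the lookup should only happen once.
const sharedCredentials = defaultProvider();

// Both clients reuse the same provider instead of each constructing their
// own defaultProvider() (and therefore their own empty cache).
const clientA = new DynamoDBClient({ credentials: sharedCredentials });
const clientB = new DynamoDBClient({ credentials: sharedCredentials });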

Finally, I'm not sure if this is relevant, but we are bundling our source code with esbuild:

npx esbuild src/server.ts --bundle --platform=node --outdir=./dist --sourcemap

I'm unsure whether something in the bundling process is causing an issue.

As mentioned, the issue is intermittent: I can deploy the service and get 200 responses the majority of the time, but once I start running k6 load tests we start to see some errors.

@RanVaknin
Contributor

Hi @danielblignaut ,

Credential resolution happens per SDK call, not on client creation.
When the API call fires, the SDK constructs the request and checks the cache that is set on that defaultProvider; if the cache is empty, it attempts to retrieve new credentials by going through all of the credential providers in the chain until it finds a valid source of credentials (or errors out).
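
To make that visible, here is a rough sketch (not production code) that wraps the provider only to log when resolution actually fires:

import { DynamoDBClient, ListTablesCommand } from '@aws-sdk/client-dynamodb';
import { defaultProvider } from '@aws-sdk/credential-provider-node';

const provider = defaultProvider();

const client = new DynamoDBClient({
  region: 'us-east-1',
  // Wrapping the provider purely to observe when credentials are resolved.
  credentials: async () => {
    console.log('resolving credentials now');
    return provider();
  },
});

async function main() {
  // No credential lookup has happened yet; it fires with the first call.
  await client.send(new ListTablesCommand({}));
}

main().catch(console.error);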

When you deploy your application on ECS and use the default provider, the SDK will look through the entire provider list until it reaches the very last one (fromContainerMetadata), which in your case should resolve: if ECS injected the correct environment variables, the provider will use those to fetch info from the container metadata service and exchange that info for valid credentials.

The environment variables the SDK's provider expects are ENV_CMDS_AUTH_TOKEN and either AWS_CONTAINER_CREDENTIALS_FULL_URI or AWS_CONTAINER_CREDENTIALS_RELATIVE_URI.
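
If you want to confirm what ECS injected, a quick sketch (the timeout and maxRetries values below are arbitrary, and pinning the container provider explicitly is just for debugging):

import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { fromContainerMetadata } from '@aws-sdk/credential-providers';

// Log which container-credentials variables are present (not their values).
for (const name of [
  'AWS_CONTAINER_CREDENTIALS_RELATIVE_URI',
  'AWS_CONTAINER_CREDENTIALS_FULL_URI',
]) {
  console.log(`${name} is ${process.env[name] ? 'set' : 'not set'}`);
}

// Skip the rest of the default chain and go straight to the container provider.
const client = new DynamoDBClient({
  credentials: fromContainerMetadata({ timeout: 2000, maxRetries: 3 }),
});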

On a per request basis, we do instantiate multiple AWS clients.

Our recommendation is to construct the SDK client once per service, per region, and not per SDK call. That would explain why your credentials are never cached.
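
In other words, something along these lines; a minimal sketch, with the module name and layout purely illustrative:

// dynamodb-client.ts (example module)
import { DynamoDBClient } from '@aws-sdk/client-dynamodb';

// Created once at module load and reused by every request handler, so the
// resolved credentials stay cached inside this client's provider.
export const dynamoClient = new DynamoDBClient({
  region: process.env.AWS_REGION,
});

Request handlers would then import dynamoClient rather than constructing a new client per request.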

Without seeing a more complete code snippet, it's hard to give advice on what went wrong.

Thanks,
Ran~

@RanVaknin RanVaknin self-assigned this Mar 1, 2024
@RanVaknin RanVaknin added response-requested Waiting on additional info and feedback. Will move to "closing-soon" in 7 days. and removed response-requested Waiting on additional info and feedback. Will move to "closing-soon" in 7 days. labels Mar 1, 2024
@danielblignaut
Author

danielblignaut commented Mar 1, 2024 via email

@RanVaknin
Contributor

Happy to hear that this unblocked you.

All the best,
Ran~

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs and link to relevant comments in this thread.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Mar 16, 2024