This repository was archived by the owner on Nov 27, 2023. It is now read-only.

Deployed stack to ECS created in wrong region #1056

Open
robogeek opened this issue Dec 13, 2020 · 25 comments
Labels
bug 🐞 App is not working correctly. ecs

Comments

@robogeek

Description

I have an ECS Context that references one of my profiles (not the default profile, because I do not use the default profile). The Context explicitly lists the AWS region, as does the AWS Profile. Further, I've set AWS_PROFILE and AWS_REGION. The deployed cluster is created in us-east-1 even though I've specified us-west-2.

Steps to reproduce the issue:

  1. docker context use ecs
  2. docker compose up (possibly with --project-name todo)

Describe the results you received:

The deployed stack ends up in the us-east-1 region. The load balancer URL shown by docker compose ps contains us-east-1 and inspecting the infrastructure on the AWS Management Console website shows it is in us-east-1.

Describe the results you expected:

I expected it to respect the declaration of us-west-2.

Additional information you deem important (e.g. issue happens only occasionally):

This happens every time.

Output of docker version:

I just updated to Docker Desktop for macOS 3.0.1. This behavior happened for several previous releases. I don't remember if it happened this way when I first used the ECS Context last summer.

Client: Docker Engine - Community
 Cloud integration: 1.0.4
 Version:           20.10.0
 API version:       1.41
 Go version:        go1.13.15
 Git commit:        7287ab3
 Built:             Tue Dec  8 18:55:43 2020
 OS/Arch:           darwin/amd64
 Context:           ecs2
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.0
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       eeddea2
  Built:            Tue Dec  8 18:58:04 2020
  OS/Arch:          linux/amd64
  Experimental:     true
 containerd:
  Version:          v1.4.3
  GitCommit:        269548fa27e0089a8b8278fc4fc781d7f65a939b
 runc:
  Version:          1.0.0-rc92
  GitCommit:        ff819c7e9184c13b7c2607fe6c30ae19403a7aff
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

Output of docker context show:

I have two ECS contexts that are both referring to the same AWS Profile. I created the second just now to see if there would be any significant difference.

[
    {
        "Name": "ecs",
        "Metadata": {
            "Description": "us-west-2",
            "Type": "ecs"
        },
        "Endpoints": {
            "docker": {
                "SkipTLSVerify": false
            },
            "ecs": {
                "Profile": "notes-app",
                "Region": "us-west-2"
            }
        },
        "TLSMaterial": {},
        "Storage": {
            "MetadataPath": "/Users/david/.docker/contexts/meta/...",
            "TLSPath": "/Users/david/.docker/contexts/tls/.."
        }
    }
]
[
    {
        "Name": "ecs2",
        "Metadata": {
            "Type": "ecs"
        },
        "Endpoints": {
            "docker": {
                "SkipTLSVerify": false
            },
            "ecs": {
                "Profile": "notes-app"
            }
        },
        "TLSMaterial": {},
        "Storage": {
            "MetadataPath": "/Users/david/.docker/contexts/meta/...",
            "TLSPath": "/Users/david/.docker/contexts/tls/..."
        }
    }
]

Output of docker info:

$ docker --context default info
Client:
 Context:    default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Build with BuildKit (Docker Inc., v0.4.2-docker)
  scan: Docker Scan (Docker Inc., v0.5.0)

Server:
 Containers: 2
  Running: 0
  Paused: 0
  Stopped: 2
 Images: 293
 Server Version: 20.10.0
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runtime.v1.linux runc io.containerd.runc.v2
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 269548fa27e0089a8b8278fc4fc781d7f65a939b
 runc version: ff819c7e9184c13b7c2607fe6c30ae19403a7aff
 init version: de40ad0
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 4.19.121-linuxkit
 Operating System: Docker Desktop
 OSType: linux
 Architecture: x86_64
 CPUs: 2
 Total Memory: 3.847GiB
 Name: docker-desktop
 ID: NARD:HKVV:4764:6VGH:QRMA:OMOY:HLT3:743D:IGLG:VDQ3:3ICL:RHWC
 Docker Root Dir: /var/lib/docker
 Debug Mode: true
  File Descriptors: 40
  Goroutines: 44
  System Time: 2020-12-13T21:16:15.326411Z
  EventsListeners: 3
 HTTP Proxy: gateway.docker.internal:3128
 HTTPS Proxy: gateway.docker.internal:3129
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: true
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
 Product License: Community Engine

Additional environment details (AWS ECS, Azure ACI, local, etc.):

$ cat ~/.aws/config 
[root-techsparx]
region = us-west-2
output = json
[notes-app]
region = us-west-2
output = json
$ echo $AWS_PROFILE
notes-app
$ echo $AWS_REGION
us-west-2
@gorrog

gorrog commented Dec 14, 2020

@robogeek I logged the following issue on Friday, which looks related to this one. I closed the issue after managing to figure out a workaround, but now that I'm seeing this, I'm wondering if I should have left it open. In any event, check what worked for me and see if it helps you.
#1050

@ndeloof ndeloof added the ecs label Dec 14, 2020
@ndeloof ndeloof self-assigned this Dec 14, 2020
@ndeloof ndeloof added the bug 🐞 App is not working correctly. label Dec 14, 2020
@ndeloof
Collaborator

ndeloof commented Dec 14, 2020

I tested a few scenarios:
[x] using env only

  • context created with "From Environment" option,
  • export AWS_REGION=us-west-2
  • docker compose up,

=> application is deployed in expected region us-west-2

[x] using an aws profile

  • context created using an existing AWS profile, which is set with region = eu-north-1
  • docker compose up

=> application is deployed in expected region eu-north-1

Please note: if you created a docker context for an existing aws profile, AWS_PROFILE will be ignored.

I'm not sure how you ended up with a context that has both

"ecs": {
                "Profile": "notes-app",
                "Region": "us-west-2"
            }

It seems this context was created with an earlier release of the ECS integration. We don't store the region in the docker context anymore (as demonstrated by your second context).

@fdansey-ostmodern

I had the same problem and got around it by creating a new docker ecs context with a different name (re-creating the context with the same name did not work).

@robogeek
Author

I was able to do some exploring today, but the results are inconclusive. I tried deleting my ECS contexts and creating two new ones: one created from a profile, the other created to use environment variables.

With a simple Compose file

services:
    whoami:
        image: containous/whoami
        ports: 
            - "80:80"

it deployed the cluster to my preferred region (us-west-2).

But with another Compose file that's slightly more complex, it still ignores the preferred region and uses us-east-1.

Inconclusive... And @gorrog I had seen your report but your solution didn't work for me.

@ndeloof
Collaborator

ndeloof commented Dec 15, 2020

I can hardly imagine how the complexity of the Compose file could impact setting up the AWS Go SDK...

@fdansey-ostmodern

My above workaround actually does not work. I am running the docker compose up command on a GitHub self-hosted runner, and it feels like there is something stateful about the AWS region name. It happily defaults to the us-east-1 region. How should the region be set?

@ndeloof
Collaborator

ndeloof commented Dec 15, 2020

Region should be set:

  1. by your ~/.aws/config file if you created a docker ecs context based on profiles
  2. by AWS_REGION / AWS_DEFAULT_REGION, or region configured by AWS_PROFILE if you use docker ecs context based on environment variables
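That resolution order can be sketched in Python as follows. This is a simplified approximation for illustration, not the SDK's or the ECS integration's actual code; the us-east-1 fallback mirrors the behaviour observed in this issue rather than documented SDK behaviour.

```python
import configparser
import os


def resolve_region(profile="default", environ=os.environ):
    """Approximate the region lookup: explicit env vars win, then the
    region recorded for the profile in the AWS config file."""
    region = environ.get("AWS_REGION") or environ.get("AWS_DEFAULT_REGION")
    if region:
        return region
    config_path = environ.get("AWS_CONFIG_FILE",
                              os.path.expanduser("~/.aws/config"))
    parser = configparser.ConfigParser()
    parser.read(config_path)
    # Named profiles use a "[profile <name>]" header; the default uses "[default]".
    section = "default" if profile == "default" else f"profile {profile}"
    if parser.has_section(section) and "region" in parser[section]:
        return parser[section]["region"]
    return "us-east-1"  # fallback observed in this issue when nothing matches
```

Note that this sketch honours AWS_CONFIG_FILE, which becomes relevant a few comments below.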

@fdansey-ostmodern

Thanks, that was the problem. I was setting the AWS_CONFIG_FILE variable to another path, so although my debugging commands (e.g. aws configure list) were reporting the profile and region correctly, docker compose was not finding ~/.aws/config and was defaulting to the us-east-1 region, even though I was able to create the docker ecs context with --profile.

@ndeloof
Collaborator

ndeloof commented Dec 15, 2020

AWS_CONFIG_FILE should be considered by the aws-go-sdk. I wonder if it has some subtle difference vs the AWS CLI that could explain this behaviour. Unfortunately there's no debug mode that could be enabled to check where it retrieves AWS configuration/credentials during init.

@jordisala1991

I was having the same problem, not sure why I got this config, but I ended up with this:

~/.aws/config:

[profile default]
region = eu-west-3

~/.aws/credentials:

[default]
aws_access_key_id = xxxx
aws_secret_access_key = xxx
region = eu-west-3

There was a mismatch in my case: I had to edit the config and change "profile default" to "default", and it started working as expected. Before this change, all my compose up deployments ended up in us-east-1.
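The root cause of that mismatch is the config-file grammar: in ~/.aws/config the default profile's header is [default], while named profiles use [profile <name>]. A small sketch using Python's configparser (standing in for the SDK's INI parser, for illustration only) shows why a [profile default] header is never found when code looks up the "default" section:

```python
import configparser

BROKEN = "[profile default]\nregion = eu-west-3\n"  # header as in the comment above
FIXED = "[default]\nregion = eu-west-3\n"           # header the lookup expects


def lookup_default_region(config_text):
    """Return the region under the literal [default] section, if any."""
    parser = configparser.ConfigParser()
    parser.read_string(config_text)
    if parser.has_section("default") and "region" in parser["default"]:
        return parser["default"]["region"]
    return None  # not found; a caller may then fall back to a hardcoded default


print(lookup_default_region(BROKEN))  # None
print(lookup_default_region(FIXED))   # eu-west-3
```

Because "profile default" and "default" are different section names to an INI parser, the region silently goes unread in the broken form.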

@robogeek
Author

If it makes any difference I do not have a [default] profile configured in the ~/.aws files. Instead I follow a practice of explicitly declaring the profile for each project.

@robogeek
Author

robogeek commented Dec 15, 2020

Here's the results from four rounds of docker compose up. I wanted to test whether specifying a --project-name made a difference and it doesn't. What this shows is that using a "from environment" context does the right thing. I've been instead using a context that directly specifies the profile, and this uses the wrong region.

The two contexts used are:

  • ecs-env context was created with the option to take values from the environment. In the environment, $AWS_PROFILE is set to notes-app, and that profile is declared for the us-west-2 region.
  • ecs-notes context was created to directly reference the notes-app profile.

The Compose file used was the one I showed above for containous/whoami.

Context    Command                                    Region
ecs-env    docker compose up                          us-west-2
ecs-env    docker compose up --project-name whowho    us-west-2
ecs-notes  docker compose up                          us-east-1
ecs-notes  docker compose up --project-name whowho    us-east-1

The test was run on macOS using Docker for Mac. Then I reran the test on my Windows laptop. The Docker commands were run inside a WSL2 Ubuntu instance. But that instance is connected to the Docker for Windows installed on that machine. It gave the same behavior.

@robogeek
Author

@ndeloof I described the situation where this issue arose, namely the difference between deploying with a context configured from the environment versus one configured with a specific AWS Profile. The details are in the previous comment.

Using the context configured with a specific profile, it deployed to the wrong region. Using the from environment context it deployed to the expected region.

I don't know if you saw this, so I'm restating what was reported to make it clearer.

ndeloof added a commit that referenced this issue Jan 4, 2021
should help to diagnose #1084 #1056

Signed-off-by: Nicolas De Loof <[email protected]>
ulyssessouza pushed a commit that referenced this issue Jan 7, 2021
should help to diagnose #1084 #1056

Signed-off-by: Nicolas De Loof <[email protected]>
@rbarriuso

I had the same problem, and what @jordisala1991 mentioned above worked for me.

Nevertheless I agree with @robogeek that the AWS Region should be a setting per Docker context, not per user.

@jdgamsterdam

I think the problem is that the initial credentials setup writes the config incorrectly (see @jordisala1991's comment above). You need to manually change the default profile's header in the config: https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-profiles.html . For whatever reason a different header format is used for the default profile than for additional profiles. Remove the keyword "profile" before "default"; it seems to work then.

@pcgeek86

pcgeek86 commented Jul 8, 2021

I'm having the same problem. Posted the region issue over on StackOverflow here: https://stackoverflow.com/questions/68304238/change-aws-region-for-docker-cli-context-using-ecs

I also posted about the issue in the Docker Slack channel for AWS called #docker-for-aws.


The ideal solution, in my opinion, would be to have a region specifier on the Docker context itself.

Example

docker context update myecsprofile --region us-west-2

@ndeloof ndeloof removed their assignment Sep 30, 2021
@mat007
Contributor

mat007 commented Oct 27, 2021

We’ll be looking into this next sprint.
(internally tracked as https://docker.atlassian.net/browse/IL-694)

@stephanierifai

Hey folks - I've asked the team to push this back one sprint due to competing priorities, but we still have it on our radar. Thanks!

@schmkr

schmkr commented Mar 17, 2022

@stephanierifai how long are those sprints in your team? Has the issue been picked up by now by any chance? :)

@stephanierifai

Hi sorry yes, we did some investigating but no resolution quite yet. Will keep you updated.

@stale

stale bot commented Sep 21, 2022

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the stale Inactive issue label Sep 21, 2022
@richseviora

FYI - I ran into this issue myself and applied the same fix (removing "profile" from the "[profile default]" header in ~/.aws/config).

@stale

stale bot commented Oct 10, 2022

This issue has been automatically marked as not stale anymore due to the recent activity.

@stale stale bot removed the stale Inactive issue label Oct 10, 2022
@konsalex

konsalex commented Dec 5, 2022

Any progress on this folks?

@alramalho

alramalho commented Jan 20, 2023

I was able to fix it using

aws configure set default.region <region>

It seems the whole "select a profile" mechanism isn't working properly.
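For context, aws configure set default.region writes the region under the [default] header of ~/.aws/config, which is exactly the header the profile lookup expects, so it sidesteps the "[profile default]" mismatch discussed earlier in this thread. A rough sketch of the edit that command performs (illustrative only, not the CLI's actual implementation):

```python
import configparser
import io


def set_default_region(config_text, region):
    """Mimic `aws configure set default.region <region>`: ensure a
    [default] section exists and set its region key."""
    parser = configparser.ConfigParser()
    parser.read_string(config_text)
    if not parser.has_section("default"):
        parser.add_section("default")
    parser.set("default", "region", region)
    out = io.StringIO()
    parser.write(out)
    return out.getvalue()


print(set_default_region("", "eu-west-3"))  # emits a [default] section with region = eu-west-3
```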
