This repository was archived by the owner on Nov 27, 2023. It is now read-only.

Cannot mount same EFS partition from multiple Compose files #1085

Closed
robogeek opened this issue Dec 27, 2020 · 5 comments
Labels
bug 🐞 App is not working correctly. ecs stale Inactive issue

Comments

@robogeek

robogeek commented Dec 27, 2020

Description

I want to mount the same EFS partition from multiple Compose files. But when launching the second Compose file I get this error:

mount target already exists in this AZ (Service: AmazonElasticFileSystem; Status Code: 409; Error Code: MountTargetConflict; Request ID: e378fd6a-8cc2-48f8-bcbc-ba0caebd155a; Proxy: null)

Steps to reproduce the issue:

  1. Create an EFS partition using Terraform - just create the partition, do not create access points or mount points
  2. Use that partition as an external volume in one Compose file
  3. Use that partition as an external volume in another Compose file
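
Step 1 above could look something like this Terraform sketch — the resource and output names (wp, efs_id) are illustrative, and only the filesystem itself is created, with no aws_efs_mount_target or aws_efs_access_point resources:

```hcl
# Minimal sketch: create only the EFS filesystem, no mount targets
# or access points. Names are hypothetical.
resource "aws_efs_file_system" "wp" {
  creation_token = "wp-efs"

  tags = {
    Name = "wp-efs"
  }
}

# Expose the filesystem ID (fs-...) so Compose files can reference it
# as an external volume via ${WP_EFS_ID}.
output "efs_id" {
  value = aws_efs_file_system.wp.id
}
```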

Describe the results you received:

The mount works when deploying the first Compose file. The resources created include the mount targets for each availability zone, etc.

When deploying the second Compose file, the resources created also include the mount targets, but as soon as mount-target creation begins, the error above appears and the whole deployment fails.

Describe the results you expected:

I expected the second deployment to succeed.

For example, the ECS context could recognize that a Mount Target already exists and use it, rather than throwing up its hands and quitting like this.

Additional information you deem important (e.g. issue happens only occasionally):

Compose file 1

version: '3.8'

services:
  wordpress:
    image: wordpress:latest
    ...
    volumes:
      - wpfs:/var/www/html

...

volumes:
  wpfs:
    external: true
    name: ${WP_EFS_ID}

I've arranged for WP_EFS_ID to be passed as --environment WP_EFS_ID=fs-ID-STRING when running docker compose up.

Compose file 2:

version: "3.8"
services:
    sshd:
        image: ledokun/sshd
        ...
        volumes:
            - wpfs:/var/www/html
...
volumes:
  wpfs:
    external: true
    name: ${WP_EFS_ID}

This is a separate Compose file in a separate directory. To get WP_EFS_ID I wrote a small Terraform script that reads the state file of the first directory and retrieves the output containing the ID string. That means I'm 100% certain the same ID string is used in both cases. Further, both deployments use the default VPC.
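
That small Terraform script could look roughly like this sketch, assuming a local state backend and an output named efs_id in the first directory (both the path and the names are hypothetical):

```hcl
# Read the first directory's state file and re-export its EFS ID.
# Backend type, state path, and output name are assumptions.
data "terraform_remote_state" "wp" {
  backend = "local"

  config = {
    path = "../wordpress/terraform.tfstate"
  }
}

output "wp_efs_id" {
  value = data.terraform_remote_state.wp.outputs.efs_id
}
```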

Output of docker version:

$ docker version
Client: Docker Engine - Community
 Cloud integration: 1.0.4
 Version:           20.10.0
 API version:       1.41
 Go version:        go1.13.15
 Git commit:        7287ab3
 Built:             Tue Dec  8 18:55:43 2020
 OS/Arch:           darwin/amd64
 Context:           ecs-env
 Experimental:      true
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

Output of docker context show:

$ docker context inspect ecs-env
[
    {
        "Name": "ecs-env",
        "Metadata": {
            "Description": "credentials read from environment",
            "Type": "ecs"
        },
        "Endpoints": {
            "docker": {
                "SkipTLSVerify": false
            },
            "ecs": {
                "CredentialsFromEnv": true
            }
        },
        "TLSMaterial": {},
        "Storage": {
            "MetadataPath": "/Users/david/.docker/contexts/meta/680fd64ab74e714ddd488483239eeeb3c4e2e99e10f7a853b73e99d0e4158454",
            "TLSPath": "/Users/david/.docker/contexts/tls/680fd64ab74e714ddd488483239eeeb3c4e2e99e10f7a853b73e99d0e4158454"
        }
    }
]

In other words, I've set up a Context that derives from environment variables. These variables correctly refer to an AWS profile, and I've successfully used this profile to deploy several things to ECS using both Terraform files and Compose files.

Output of docker info:

$ docker info
unknown command "info" for "docker"

When the ECS context is active, this is the result of the info command.

Additional environment details (AWS ECS, Azure ACI, local, etc.):

@ndeloof ndeloof added bug 🐞 App is not working correctly. ecs labels Jan 4, 2021
@ndeloof
Collaborator

ndeloof commented Jan 4, 2021

That's indeed buggy behaviour, but hard to fix:
if the ECS integration doesn't create mount targets when they already exist, then it also must not delete them on down, as another deployment might rely on them, and we would end up with zombie resources.

@ndeloof
Collaborator

ndeloof commented Jan 4, 2021

Confirmed: we can create only one Mount Target per Filesystem + AZ (subnet).
Still, we need to configure those mount targets with the application's security groups, so we can't just "reuse existing".
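
One possible workaround is to pre-create the mount targets in Terraform with a shared security group, since EFS allows exactly one mount target per filesystem and AZ. A sketch, where the filesystem resource, subnet list, and security group are all assumed to be defined elsewhere:

```hcl
# One mount target per availability zone (subnet); a second creation
# attempt for the same filesystem + AZ fails with MountTargetConflict.
# aws_efs_file_system.wp, var.subnet_ids, and aws_security_group.efs
# are hypothetical names assumed to exist in the configuration.
resource "aws_efs_mount_target" "wp" {
  for_each = toset(var.subnet_ids)

  file_system_id  = aws_efs_file_system.wp.id
  subnet_id       = each.value
  security_groups = [aws_security_group.efs.id]
}
```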

@stale

stale bot commented Jul 8, 2021

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the stale Inactive issue label Jul 8, 2021
@stale

stale bot commented Jul 15, 2021

This issue has been automatically closed because it has not had recent activity during the stale period.

@stale stale bot closed this as completed Jul 15, 2021
@thorfi

thorfi commented Aug 26, 2021

Bump and link to #1739 (same essential issue)
