Occasionally EC2s are left in a Stopped state #398
Comments
I haven't seen this before; my machines are always terminated. But let me check my configuration.
I didn't find any special settings in my configuration. Do you have log files available?
I have looked through the EC2 logs but haven't found anything that jumped out at me. Are there specific logs that would help?
We also didn't see this behaviour.
This will be indirectly solved by #392.
@cmavromichalis This shouldn't be an issue anymore and can be closed, right?
I've noticed that as I update the GitLab Runner versions, occasionally there are EC2s left in a Stopped state. I have to manually delete these when running Terraform, as Terraform won't destroy EC2s in a Stopped state. I haven't been able to root cause why they are put in a Stopped state, or to find any kind of pattern; it seems to happen randomly and infrequently.
My scenario is the GitLab CI docker-machine runner - one runner agent as described in the README. We update the GitLab and GitLab Runner versions once a month. Sometime during the month of use, an EC2 instance is started and then left in a Stopped state.
Are there logs I could look at or provide to help root cause this?
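As a possible interim workaround, something like the following could clean up stopped runner instances before running Terraform. This is only a minimal sketch using boto3; the region and the `runner-*` Name-tag filter are assumptions and would need to match how your docker-machine runners are actually named.

```python
# Sketch: terminate EC2 instances left in the "stopped" state so a
# subsequent Terraform run is not blocked by them.
# Assumptions: region "us-east-1" and a Name tag starting with "runner-"
# for instances created by the docker-machine runner.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

# Find stopped instances whose Name tag matches the assumed runner prefix.
resp = ec2.describe_instances(
    Filters=[
        {"Name": "instance-state-name", "Values": ["stopped"]},
        {"Name": "tag:Name", "Values": ["runner-*"]},
    ]
)

stopped_ids = [
    inst["InstanceId"]
    for reservation in resp["Reservations"]
    for inst in reservation["Instances"]
]

if stopped_ids:
    print(f"Terminating stopped runner instances: {stopped_ids}")
    ec2.terminate_instances(InstanceIds=stopped_ids)
else:
    print("No stopped runner instances found.")
```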