[BUG] Importing ovh_vrack forces replacement #734
Comments
Hello @maxime1907, thanks for opening this issue. I took your exact configuration and I'm not able to reproduce the issue. What are the values that you have before and after the import for these fields?
Hello, thanks for your answer. Here is the debug output log of this command:
Here is the output of terraform plan after the import:

```
# module.ovh_managed_kubernetes.ovh_vrack.vrack must be replaced
-/+ resource "ovh_vrack" "vrack" {
      ~ id             = "pn-REDACTED" -> (known after apply)
        name           = "REDACTED"
      ~ ovh_subsidiary = "FR" -> (known after apply) # forces replacement
      ~ service_name   = "pn-REDACTED" -> (known after apply)
      ~ urn            = "urn:v1:eu:resource:vrack:pn-REDACTED" -> (known after apply)
        # (1 unchanged attribute hidden)

      ~ order (known after apply)

      ~ plan {
          ~ duration     = "P1M" -> (known after apply)
          ~ pricing_mode = "default" -> (known after apply)
            # (2 unchanged attributes hidden)
        }
    }
```
I just re-tested two things:
I used exactly the config you provided, with a name and description that are different from the ones of the already created vRack:
Then I imported the resource:
I also tried an import with the import block:
And ran:
The next plan did not show a replacement. I think that there is another issue in your configuration that triggers this replacement, or maybe a version mismatch (are you sure you're using v0.50.0? Maybe you need to re-run a terraform init -upgrade).
Maybe it's because I am trying to import the vRack that gets automatically created when you create a public cloud project through the OVH Manager interface. Yes, I am using the latest version, which is v0.50.0. What I did is:
Note that I also have an attach resource that wants to be replaced, I don't know if that can help:

```
# module.ovh_managed_kubernetes.ovh_vrack_cloudproject.attach must be replaced
-/+ resource "ovh_vrack_cloudproject" "attach" {
      ~ id           = "vrack_pn-REDACTED-cloudproject_REDACTED" -> (known after apply)
      ~ service_name = "pn-REDACTED" -> (known after apply) # forces replacement
        # (1 unchanged attribute hidden)
    }
```

Also, my OVH API keys are restricted to a specific list of routes as a security measure, so it might have something to do with it:
I cannot reproduce the issue, even with a vRack automatically created via a cloud project. I'm not sure how to dig further; I think you should try the import in a separate Terraform workspace to verify that the issue doesn't come from another resource in your plan. By using only the config from my previous comment, I never get a replacement.
Hello, I'm encountering an issue where Terraform is prompting me to recreate a vRack, even though it was successfully applied about 4 weeks ago. Details:
OVH provider version: 0.48.0
Hello @GeryDeMerciYanis, from the log you sent it seems that the resource
Hi @amstuta, after comparing the states across different environments, it appears that the resource
Since this is a staging environment, I will attempt to recreate the resource and see if I can reproduce the issue. I will let you know if I do.
Hello, the PR #754 should fix your issues because we won't read these fields anymore.
Hello, now it forces replacement also on plan...

```
# module.ovh_managed_kubernetes.ovh_vrack.vrack must be replaced
-/+ resource "ovh_vrack" "vrack" {
      ~ id             = "pn-REDACTED" -> (known after apply)
        name           = "REDACTED"
      + ovh_subsidiary = "FR" # forces replacement
      ~ service_name   = "pn-REDACTED" -> (known after apply)
      ~ urn            = "urn:v1:eu:resource:vrack:pn-REDACTED" -> (known after apply)
        # (1 unchanged attribute hidden)

      ~ order (known after apply)

      + plan { # forces replacement
          + duration     = (known after apply)
          + plan_code    = "vrack"
          + pricing_mode = (known after apply)
        }
    }
```
@maxime1907 did you import this vRack? If so, it is normal to have this diff because we changed the way to import resources created via an order:

```hcl
import {
  to = ovh_vrack.vrack
  id = "<service name>"
}
```

and

```
$ terraform plan -generate-config-out=vrack.tf
$ terraform apply
```

You can still import it using terraform import, but in that case you need to remove these fields from your configuration to avoid the diff.
Oh I see, so in both import methods we have to get rid of these fields to avoid any conflict.
Yes exactly, we think that it makes no sense to enter order information once the resource is created.
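For illustration, a post-import resource definition with the order-related fields removed might look like the minimal sketch below. This is an assumption based on the comments above rather than an official example: the name and description are placeholders, and it presumes a provider version that accepts the resource without ovh_subsidiary and plan once the vRack already exists.

```hcl
# Hypothetical post-import configuration; values are placeholders, not taken from this thread.
# The order-related attributes (ovh_subsidiary, plan) are omitted, as suggested above,
# so the next plan should no longer propose a replacement.
resource "ovh_vrack" "vrack" {
  name        = "my-vrack"
  description = "vRack imported into Terraform"
}
```

The same reasoning would apply whether the resource was imported with an import block or with the terraform import command.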
Describe the bug
We are trying to import an existing vRack into Terraform. We have the correct vRack ID and the import is successful; however, when running terraform plan it shows that the vRack (and everything attached to it) has to be replaced because the plan and ovh_subsidiary fields are added. I assume this is not intended behaviour.
Terraform Version
v1.9.6
OVH Terraform Provider Version
v0.50.0
Affected Resource(s)
ovh_vrack
Terraform Configuration Files
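The original configuration files are not reproduced in this copy of the issue. As a rough illustration only, a configuration consistent with the resource addresses and plan output above might look like the sketch below; the values and the project_id variable are placeholders, and the module wrapper is omitted.

```hcl
# Hypothetical configuration matching the resource addresses in this issue; values are placeholders.
variable "project_id" {
  type = string
}

resource "ovh_vrack" "vrack" {
  ovh_subsidiary = "FR"
  name           = "my-vrack"
  description    = "existing vRack to import"

  plan {
    duration     = "P1M"
    plan_code    = "vrack"
    pricing_mode = "default"
  }
}

# Attaches the public cloud project to the vRack.
resource "ovh_vrack_cloudproject" "attach" {
  service_name = ovh_vrack.vrack.service_name
  project_id   = var.project_id
}
```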
Expected Behavior
The vRack is imported and the next terraform plan only updates its fields in place.
Actual Behavior
After importing the resource, the next terraform plan forces it to be replaced.
Steps to Reproduce
```
terraform import 'module.ovh_managed_kubernetes.ovh_vrack.vrack' 'pn-XXXXXX'
terraform plan -out=terraform.plan
```
Temporary workaround
References
ovh_cloud_project: it forces replacement (#667)