- Unit tests: In Terraform terminology, these validate a resource schema. That is done automatically here for all resources and data sources using the Terraform Plugin Framework. Additionally, we have general unit tests that exercise a resource or unit without calling external systems like MongoDB Atlas.
- Acceptance (acc) tests: In Terraform terminology, these use real Terraform configurations to exercise the code in the plan, apply, refresh, and destroy life cycles (real infrastructure resources are created as part of the test), more info here.
- Migration (mig) tests: These tests ensure that after an upgrade to a new Atlas provider version, user configs do not result in unexpected plan changes, more info here. Migration tests are a subset of acceptance tests.
- A resource and its associated data sources are implemented in a folder that is also a Go package, e.g. the `advancedcluster` implementation is in `internal/service/advancedcluster`.
- We enforce "black box" testing: tests must be in a separate `_test` package, e.g. `advancedcluster` tests are in the `advancedcluster_test` package.
- Acceptance and general unit tests are in the same `_test.go` file as the corresponding resource or data source source file. If business logic is extracted into a separate file, unit tests for that logic are included in its associated `_test.go` file, e.g. state_transition_search_deployment_test.go.
- Migration tests are in `_migration_test.go` files.
- When functions are in their own file because they are shared by the resource and data sources, a test file can be created to test them, e.g. model_alert_configuration_test.go has tests for model_alert_configuration.
- All resource folders must have a `main_test.go` file to handle the resource reuse lifecycle, e.g. here.
- `internal/testutil/acc` contains helper test functions for acceptance tests.
- `internal/testutil/mig` contains helper test functions specifically for migration tests.
- `internal/testutil/unit` contains helper test functions for MacT (Mocked Acceptance Tests) and MipT (Mocked Import Plan Tests). MacT is used to capture and replay HTTP traffic with MongoDB Atlas and allows diff assertions on requests. MipT is used to test PlanModifier logic.
- Unit tests must not create Terraform resources or use external systems, e.g. unit tests using the Atlas Go SDK must not call MongoDB Atlas.
- Mocking specific interfaces like `admin.ProjectsApi` is preferred over mocking the whole `client`, e.g. resource_project.go.
- Testify Mock is used for test doubles.
- Atlas Go SDK mocked interfaces are generated in the mockadmin package using Mockery, example of use in resource_project_test.go (see the sketch below).
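
A hedged sketch of that pattern, assuming the SDK's request-builder style (`GetProject`/`GetProjectExecute`); the SDK version in the import paths is illustrative, and the exact generated constructors and method names should be taken from the `mockadmin` package:

```go
package project_test

import (
	"context"
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/mock"
	"go.mongodb.org/atlas-sdk/v20231115014/admin"     // version segment is illustrative
	"go.mongodb.org/atlas-sdk/v20231115014/mockadmin" // version segment is illustrative
)

// Unit test using a Mockery-generated mock of admin.ProjectsApi:
// no call ever reaches MongoDB Atlas.
func TestGetProject_mocked(t *testing.T) {
	svc := mockadmin.NewProjectsApi(t)

	// Stub the request builder and its Execute call with a canned response.
	svc.EXPECT().GetProject(mock.Anything, mock.Anything).
		Return(admin.GetProjectApiRequest{ApiService: svc}).Once()
	svc.EXPECT().GetProjectExecute(mock.Anything).
		Return(&admin.Group{}, nil, nil).Once()

	_, _, err := svc.GetProject(context.Background(), "groupID").Execute()
	assert.NoError(t, err)
}
```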
- There must be at least one basic acceptance test for each resource, e.g. TestAccSearchIndex_basic. It tests the happy path with a minimum resource configuration (see the sketch below).
- Basic import tests are done as the last step in the basic acceptance test, not as a separate test, e.g. basicTestCase. Exceptions apply for more specific import tests, e.g. testing with incorrect IDs. Import tests verify that the Terraform import functionality works correctly.
- Data sources are tested in the same tests as the resources, e.g. commonChecks.
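
A minimal sketch of that layout; `configBasic`, `checkDestroy`, and the provider factory name are assumptions, since the exact helpers vary per resource and should be checked in `internal/testutil/acc`:

```go
package example_test

import (
	"testing"

	"github.com/hashicorp/terraform-plugin-testing/helper/resource"
	"github.com/hashicorp/terraform-plugin-testing/terraform"
	"github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc"
)

func TestAccExampleResource_basic(t *testing.T) {
	resource.ParallelTest(t, *basicTestCase(t))
}

func basicTestCase(t *testing.T) *resource.TestCase {
	t.Helper()
	resourceName := "mongodbatlas_example.test" // hypothetical resource address

	return &resource.TestCase{
		PreCheck:                 func() { acc.PreCheckBasic(t) },
		ProtoV6ProviderFactories: acc.TestAccProviderV6ProviderFactories, // name may differ, see internal/testutil/acc
		CheckDestroy:             checkDestroy,
		Steps: []resource.TestStep{
			{
				// Happy path with a minimum resource configuration; data source
				// attributes are checked in the same step as the resource.
				Config: configBasic(),
				Check: resource.ComposeAggregateTestCheckFunc(
					resource.TestCheckResourceAttrSet(resourceName, "project_id"),
				),
			},
			{
				// Basic import test as the last step of the basic acceptance test.
				ResourceName:      resourceName,
				ImportState:       true,
				ImportStateVerify: true,
			},
		},
	}
}

// configBasic is a hypothetical placeholder for the resource's minimal Terraform config.
func configBasic() string {
	return `resource "mongodbatlas_example" "test" {}`
}

// checkDestroy is a hypothetical placeholder for the resource's destroy check.
func checkDestroy(_ *terraform.State) error {
	return nil
}
```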
- Helper functions such as `resource.TestCheckTypeSetElemNestedAttrs` or `resource.TestCheckTypeSetElemAttr` can be used to check resource and data source attributes more easily, e.g. resource_serverless_instance_test.go (see the fragment below).
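
For illustration, a hedged fragment of a step's checks (the attribute names are made up and the fragment assumes the `resource` package from terraform-plugin-testing is imported):

```go
Check: resource.ComposeAggregateTestCheckFunc(
	// Assert that some element of the tags set has exactly these nested attributes.
	resource.TestCheckTypeSetElemNestedAttrs("mongodbatlas_serverless_instance.test", "tags.*", map[string]string{
		"key":   "environment",
		"value": "dev",
	}),
	// Assert that the set contains a given scalar element, regardless of its index.
	resource.TestCheckTypeSetElemAttr("mongodbatlas_serverless_instance.test", "labels.*", "critical"),
),
```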
- Use `PreCheck: PreCheckGovBasic`.
- Use `acc.ConfigGovProvider` together with your normal Terraform config.
- Modify `checkExist` and `CheckDestroy` to use `acc.ConnV2UsingGov`.
- Follow the naming convention:
  - `TestAccGovProject_withProjectOwner`, note the prefix `TestAccGov`.
  - `TestMigGovProject_regionUsageRestrictionsDefault`, note the prefix `TestMigGov`.
- Although Gov tests can be run together with other acceptance tests, using the `Test(Acc|Mig)Gov` prefix makes it easier to run only Gov tests or find similar Gov tests (a sketch combining these conventions follows this list).
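
A hedged sketch of how those conventions fit together; imports are as in the earlier basic test sketch, and the config and check helpers (`configGovBasic`, `checkExistsGov`, `checkDestroyGov`) are hypothetical names for per-resource helpers:

```go
func TestAccGovProject_withProjectOwner(t *testing.T) { // note the TestAccGov prefix
	resource.ParallelTest(t, resource.TestCase{
		PreCheck:                 func() { acc.PreCheckGovBasic(t) },
		ProtoV6ProviderFactories: acc.TestAccProviderV6ProviderFactories, // name may differ
		// checkDestroyGov is a hypothetical CheckDestroy variant using acc.ConnV2UsingGov().
		CheckDestroy: checkDestroyGov,
		Steps: []resource.TestStep{
			{
				// configGovBasic is a hypothetical helper that combines acc.ConfigGovProvider
				// with the resource's normal Terraform config.
				Config: configGovBasic(orgID, projectName),
				// checkExistsGov is a hypothetical checkExist variant using acc.ConnV2UsingGov().
				Check: checkExistsGov("mongodbatlas_project.test"),
			},
		},
	})
}
```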
- There must be at least one basic migration test for each resource that leverages the basic acceptance test using helper test functions such as `CreateAndRunTest`, e.g. TestMigServerlessInstance_basic (see the sketch below).
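
A minimal sketch, assuming the basic acceptance test exposes its `resource.TestCase` through a `basicTestCase` helper as in the earlier sketch; the exact signature of `CreateAndRunTest` should be checked in `internal/testutil/mig`:

```go
import (
	"testing"

	"github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/mig"
)

func TestMigExampleResource_basic(t *testing.T) {
	// Reuses the resource.TestCase from the basic acceptance test and adds the
	// upgrade steps needed to verify there are no unexpected plan changes.
	mig.CreateAndRunTest(t, basicTestCase(t))
}
```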
These environment variables can be used locally to speed up the development process.

| Environment Variable | Description |
|---|---|
| `MONGODB_ATLAS_PROJECT_ID` | Reuse an existing project, reducing test run duration for resources supporting this variable |
| `MONGODB_ATLAS_CLUSTER_NAME` | Reuse an existing cluster, significantly reducing test run duration for resources supporting this variable |
Acceptance and migration tests can reuse projects and clusters in order to be more efficient in resource utilization.
- A project can be reused using `ProjectIDExecution`. It returns the ID of a project created for the current execution of tests for a resource, e.g. TestAccConfigRSDatabaseUser_basic.
  - As the project is shared by all tests for a resource, tests can sometimes affect each other if they use resources that are global to the project (e.g. network peering, maintenance window or LDAP config). In that case:
    - Run the tests in serial (`resource.Test` instead of `resource.ParallelTest`) if the tests are fast and saving resources is preferred, e.g. TestAccConfigRSProjectAPIKey_multiple.
    - Don't use `ProjectIDExecution` and create a project for each test if faster test execution is preferred even if more resources are needed, e.g. TestAccFederatedDatabaseInstance_basic.
- A cluster can be reused using `ClusterNameExecution`. This function returns the project ID (created with `ProjectIDExecution`) and the name of a cluster created for the current execution of tests for a resource, e.g. TestAccSearchIndex_withSearchType (see the sketch after this list). Similar precautions to project reuse must be taken here: if a resource that is global to the cluster is being tested (e.g. cluster global config), it's preferred to run the tests in serial or create a cluster per test.
- Plural data sources can be challenging to test when tests run in parallel or share projects and/or clusters:
  - Avoid checking for a specific total count, as other tests can also create resources, e.g. resource_network_container_test.go.
  - Don't assume results are in a certain order, e.g. resource_network_container_test.go.
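
A hedged sketch of cluster and project reuse; imports are as in the earlier basic test sketch, `configBasic` is hypothetical, and the exact signatures of `ProjectIDExecution`/`ClusterNameExecution` should be checked in `internal/testutil/acc`:

```go
func TestAccExampleResource_withCluster(t *testing.T) {
	// Reuse the execution-scoped project and cluster for this resource's tests.
	// Tests that only need a project can use acc.ProjectIDExecution(t) instead.
	projectID, clusterName := acc.ClusterNameExecution(t)

	resource.ParallelTest(t, resource.TestCase{
		PreCheck:                 func() { acc.PreCheckBasic(t) },
		ProtoV6ProviderFactories: acc.TestAccProviderV6ProviderFactories, // name may differ
		Steps: []resource.TestStep{
			{Config: configBasic(projectID, clusterName)}, // hypothetical config helper
		},
	})
}
```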
Experimental framework for hooking into the HTTP client used by the Terraform provider to capture and replay traffic.
- Works by mutating a `terraform-plugin-testing/helper/resource.TestCase`.
- Limited to `TestAccMockable*` tests in `resource_advanced_cluster_test.go`:
  - Remember to run `export MONGODB_ATLAS_PREVIEW_PROVIDER_V2_ADVANCED_CLUSTER=true` for the TPF implementation to be used and the tests to work.
- Enabled test cases should always be named with the `TestAccMockable` prefix, e.g. `TestAccMockableAdvancedCluster_tenantUpgrade`.
- To create a new `TestAccMockable` test you need to (see example commit; a sketch follows this list):
  - (1) Write the normal acceptance test.
  - (2) Change `resource.ParallelTest(t, resource.TestCase)` to `unit.CaptureOrMockTestCaseAndRun(t, mockConfig, &resource.TestCase)`, where `mockConfig` is a `unit.MockHTTPDataConfig`. It can specify side effects (e.g., lowering the retry interval), select diff conditions (to avoid irrelevant POST/PATCH requests being treated as diffs), and can be shared across tests within a resource.
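
A minimal sketch of that conversion; the fields available on `unit.MockHTTPDataConfig` are not shown here and should be taken from the existing `TestAccMockable*` tests, and the `unit` package is assumed to be imported from `internal/testutil/unit`:

```go
func TestAccMockableAdvancedCluster_example(t *testing.T) {
	testCase := resource.TestCase{
		// PreCheck, Steps, Checks, etc. exactly as in the normal acceptance test.
	}
	// mockConfig (a unit.MockHTTPDataConfig) can lower retry intervals, tune diff
	// conditions, and be shared across the resource's mockable tests.
	mockConfig := unit.MockHTTPDataConfig{}
	// Replaces resource.ParallelTest(t, testCase).
	unit.CaptureOrMockTestCaseAndRun(t, mockConfig, &testCase)
}
```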
Running a test without `HTTP_MOCKER_CAPTURE` or `HTTP_MOCKER_REPLAY` set will run the test case unmodified.
- Running all MacT tests in replay mode (the `make` target sets env vars for you): `export ACCTEST_PACKAGES=./internal/service/advancedcluster && make testmact`
- Running a single test in replay mode:
  - Update your test env vars (e.g., `.vscode/settings.json`):

    ```json
    "go.testEnvVars": {
      "HTTP_MOCKER_REPLAY": "true", // MUST BE SET
      "MONGODB_ATLAS_ORG_ID": "111111111111111111111111", // Some tests might require this
      "MONGODB_ATLAS_PROJECT_ID": "111111111111111111111111", // Avoids ProjectIDExecution creating a new project
      "MONGODB_ATLAS_CLUSTER_NAME": "mocked-cluster", // Avoids ProjectIDExecutionWithCluster creating a new cluster
      "HTTP_MOCKER_DATA_UPDATE": "true", // (optional) can be used to update `steps.*.diff_requests.*.text` (payloads)
      // Other test specific variables
    }
    ```
- Running all MacT tests in capture mode: `export ACCTEST_PACKAGES=./internal/service/advancedcluster && make testmact-capture`
- Running a single test in capture mode:
  - Add `"HTTP_MOCKER_CAPTURE": "true"` to your `go.testEnvVars`.
  - `TF_LOG=debug` can help with debug logs and will also print the Terraform config for each step.
  - It is advised to only run a single test at a time when a plural data source is used.
  - Re-run the test with `"HTTP_MOCKER_CAPTURE": "true"`.
    - This should update the `{TestName}.yaml` file, e.g., TestAccMockableAdvancedCluster_tenantUpgrade.yaml.
  - Re-run the test with `"HTTP_MOCKER_REPLAY": "true"`.
    - This should update all the `*.json` files at the start of each step.
    - Never update `*.json` files manually!
Explanation of the `{TestName}.yaml` files, e.g., TestAccMockableAdvancedCluster_tenantUpgrade.yaml:
- Why are there both `request_responses` and `diff_requests`, and what is the relationship?
  - `request_responses` and `diff_requests` are both lists of requests with the same structure.
  - `request_responses` are matched by the http_mocker_round_tripper during replay.
  - `diff_requests` are used only for the `golden` diffs in the `{test_name}/*.json` files.
  - Every `diff_request` has a duplicate in `request_responses`.
- What is `response_index`?
  - A field to ensure the response order is followed.
  - For example, when updating a cluster, the read is called both before and after the update. Using the `response_index`, we ensure a response is not included until after the `diff_request` has been sent.
- What is `duplicate_responses`?
  - A counter increased for every response that is the same as the previous response.
  - Not used during replay.
Experimental framework for testing PlanModifier logic. It creates a test case with two steps and mocks the HTTP requests/responses:
- Import state with a fixed `.tf` file.
- Run terraform plan with an updated `*.tf` file and perform plan checks, for example checking for known/unknown values in the plan. The actual update is not performed. See the Hashicorp docs for plan check options.
For a full example see plan_modifier_test.go.
For a full example of generation, see http_mocker_plan_checks_test.go.
- Stores the last `GET` response from an existing MacT test case step, for example the last GET of `/api/atlas/v2/groups/{groupId}/clusters/{clusterName}`.
  - ImportName: `ClusterTwoRepSpecsWithAutoScalingAndSpecs`.
  - GET responses are stored in `testdata/{ImportName}/import_*.json`.
- The Terraform configuration is:
  - The import step is always the same to ensure the config matches the response from (1). Stored in `testdata/{ImportName}/main.tf`.
  - The plan config is different per test. During planning, all `GET` responses are as before (1) since the API shouldn't have any changes. Stored in `testdata/{ImportName}/main_{plan_step_name}.tf`.
- `plan_step_name` configs are meant to be created manually (usually by copy-pasting `main.tf` and making changes).
- Use `testCases := map[string][]plancheck.PlanCheck{}` to test many different plan configs for the same import (see the sketch below).
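
A hedged sketch of that pattern, assuming the `plancheck` package from terraform-plugin-testing is imported; the plan-check values and the helper that runs the import-plus-plan test case (`runPlanStepTest`) are illustrative assumptions:

```go
func TestMockPlanChecks_clusterTwoRepSpecs(t *testing.T) {
	// Each key is a plan_step_name matching testdata/{ImportName}/main_{plan_step_name}.tf.
	testCases := map[string][]plancheck.PlanCheck{
		"no_changes": {
			plancheck.ExpectEmptyPlan(),
		},
		"instance_size_changed": {
			plancheck.ExpectResourceAction("mongodbatlas_advanced_cluster.test", plancheck.ResourceActionUpdate),
		},
	}
	for planStepName, checks := range testCases {
		t.Run(planStepName, func(t *testing.T) {
			// runPlanStepTest is a hypothetical helper that imports state from the stored
			// GET responses, plans with main_{planStepName}.tf, and applies the plan checks.
			runPlanStepTest(t, planStepName, checks)
		})
	}
}
```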