
Commit b195c06

Merge branch 'master' into test-skip
2 parents: 0fbf9ad + dcebdb3

23 files changed: +478 −73 lines

.ci/Jenkinsfile

Lines changed: 26 additions & 3 deletions
````diff
@@ -8,6 +8,7 @@ pipeline {
     BASE_DIR="src/github.com/elastic/elastic-package"
     JOB_GIT_CREDENTIALS = "f6c7695a-671e-4f4f-a331-acdce44ff9ba"
     PIPELINE_LOG_LEVEL='INFO'
+    AWS_ACCOUNT_SECRET = 'secret/observability-team/ci/elastic-observability-aws-account-auth'
   }
   options {
     timeout(time: 1, unit: 'HOURS')
@@ -40,16 +41,19 @@ pipeline {
       steps {
        cleanup()
        withMageEnv(){
-         dir("${BASE_DIR}"){
-           sh(label: 'Check',script: 'make check')
+         withCloudTestEnv() {
+           dir("${BASE_DIR}"){
+             sh(label: 'Check',script: 'make check')
+           }
          }
        }
      }
      post {
        always {
          dir("${BASE_DIR}") {
            archiveArtifacts(allowEmptyArchive: true, artifacts: 'build/test-results/*.xml')
-           archiveArtifacts(allowEmptyArchive: true, artifacts: 'build/elastic-stack-dump/logs/*.log')
+           archiveArtifacts(allowEmptyArchive: true, artifacts: 'build/elastic-stack-dump/stack/logs/*.log')
+           archiveArtifacts(allowEmptyArchive: true, artifacts: 'build/elastic-stack-dump/check/logs/*.log')
            junit(allowEmptyResults: false,
              keepLongStdio: true,
              testResults: "build/test-results/*.xml")
@@ -71,3 +75,22 @@ def cleanup(){
   }
   unstash 'source'
 }
+
+def withCloudTestEnv(Closure body) {
+  def maskedVars = []
+  // AWS
+  def aws = getVaultSecret(secret: "${AWS_ACCOUNT_SECRET}").data
+  if (!aws.containsKey('access_key')) {
+    error("${AWS_ACCOUNT_SECRET} doesn't contain 'access_key'")
+  }
+  if (!aws.containsKey('secret_key')) {
+    error("${AWS_ACCOUNT_SECRET} doesn't contain 'secret_key'")
+  }
+  maskedVars.addAll([
+    [var: "AWS_ACCESS_KEY_ID", password: aws.access_key],
+    [var: "AWS_SECRET_ACCESS_KEY", password: aws.secret_key],
+  ])
+  withEnvMask(vars: maskedVars) {
+    body()
+  }
+}
````

docs/howto/system_testing.md

Lines changed: 66 additions & 16 deletions
````diff
@@ -12,19 +12,18 @@ Conceptually, running a system test involves the following steps:
 1. Depending on the Elastic Package whose data stream is being tested, deploy an instance of the package's integration service.
 1. Create a test policy that configures a single data stream for a single package.
 1. Assign the test policy to the enrolled Agent.
-1. Wait a reasonable amount of time for the Agent to collect data from the 
+1. Wait a reasonable amount of time for the Agent to collect data from the
    integration service and index it into the correct Elasticsearch data stream.
 1. Query the first 500 documents based on `@timestamp` for validation.
 1. Validate mappings are defined for the fields contained in the indexed documents.
 1. Validate that the JSON data types contained `_source` are compatible with
-   mappings declared for the field. 
+   mappings declared for the field.
 1. Delete test artifacts and tear down the instance of the package's integration service.
 1. Once all desired data streams have been system tested, tear down the Elastic Stack.
 
 ## Limitations
 
 At the moment system tests have limitations. The salient ones are:
-* They can only test packages whose integration services can be deployed via Docker Compose. Eventually they will be able to test packages that can be deployed via other means, e.g. a Terraform configuration.
 * There isn't a way to do assert that the indexed data matches data from a file (e.g. golden file testing).
 
 ## Defining a system test
@@ -39,21 +38,38 @@ Packages have a specific folder structure (only relevant parts shown).
   manifest.yml
 ```
 
-To define a system test we must define configuration at two levels: the package level and each data stream's level.
+To define a system test we must define configuration on at least one level: the package level or the data stream level.
 
-### Package-level configuration
-
-First, we must define the configuration for deploying a package's integration service. As mentioned in the [_Limitations_](#Limitations) section above, only packages whose integration services can be deployed via Docker Compose are supported at the moment.
+First, we must define the configuration for deploying a package's integration service. We can define it on either the package level:
 
 ```
 <package root>/
   _dev/
     deploy/
-      docker/
-        docker-compose.yml
+      <service deployer>/
+        <service deployer files>
+```
+
+or the data stream's level:
+
 ```
+<package root>/
+  data_stream/
+    <data stream>/
+      _dev/
+        deploy/
+          <service deployer>/
+            <service deployer files>
+```
+
+`<service deployer>` - the name of a supported service deployer: `docker` (Docker Compose service deployer) or `tf` (Terraform service deployer).
+
+### Docker Compose service deployer
 
-The `docker-compose.yml` file defines the integration service(s) for the package. If your package has a logs data stream, the log files from your package's integration service must be written to a volume. For example, the `apache` package has the following definition in it's integration service's `docker-compose.yml` file.
+When using the Docker Compose service deployer, the `<service deployer files>` must include a `docker-compose.yml` file.
+The `docker-compose.yml` file defines the integration service(s) for the package. If your package has a logs data stream,
+the log files from your package's integration service must be written to a volume. For example, the `apache` package has
+the following definition in its integration service's `docker-compose.yml` file.
 
 ```
 version: '2.3'
@@ -66,7 +82,43 @@ services:
 
 Here, `SERVICE_LOGS_DIR` is a special keyword. It is something that we will need later.
 
-### Data stream-level configuration
+### Terraform service deployer
+
+When using the Terraform service deployer, the `<service deployer files>` must include at least one `*.tf` file.
+The `*.tf` files define the infrastructure using the Terraform syntax. The Terraform-based service can be handy for booting up
+resources in a selected cloud provider and using them for testing (e.g. to observe and collect metrics).
+
+Sample `main.tf` definition:
+
+```
+variable "TEST_RUN_ID" {
+  default = "detached"
+}
+
+provider "aws" {}
+
+resource "aws_instance" "i" {
+  ami           = data.aws_ami.latest-amzn.id
+  monitoring    = true
+  instance_type = "t1.micro"
+  tags = {
+    Name = "elastic-package-test-${var.TEST_RUN_ID}"
+  }
+}
+
+data "aws_ami" "latest-amzn" {
+  most_recent = true
+  owners      = [ "amazon" ] # AWS
+  filter {
+    name   = "name"
+    values = ["amzn2-ami-hvm-*"]
+  }
+}
+```
+
+Notice the use of the `TEST_RUN_ID` variable. It contains a unique ID, which can help differentiate resources created in potentially concurrent test runs.
+
+### Test case definition
 
 Next, we must define configuration for each data stream that we want to system test.
 
@@ -97,10 +149,8 @@ The `data_stream.vars` field corresponds to data stream-level variables for the
 
 Notice the use of the `{{SERVICE_LOGS_DIR}}` placeholder. This corresponds to the `${SERVICE_LOGS_DIR}` variable we saw in the `docker-compose.yml` file earlier. In the above example, the net effect is as if the `/usr/local/apache2/logs/access.log*` files located inside the Apache integration service container become available at the same path from Elastic Agent's perspective.
 
-When a data stream's manifest declares multiple streams with different inputs
-you can use the `input` option to select the stream to test. The first stream
-whose input type matches the `input` value will be tested. By default, the first
-stream declared in the manifest will be tested.
+When a data stream's manifest declares multiple streams with different inputs you can use the `input` option to select the stream to test. The first stream
+whose input type matches the `input` value will be tested. By default, the first stream declared in the manifest will be tested.
 
 #### Placeholders
 
@@ -152,4 +202,4 @@ Finally, when you are done running all system tests, bring down the Elastic Stac
 
 ```
 elastic-package stack down
-```
+```
````
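
The `apache` `docker-compose.yml` example referenced in the doc diff above is cut off by the hunk context. For orientation, here is a minimal sketch of the shape such a file takes; the image tag and port are illustrative assumptions, while the `${SERVICE_LOGS_DIR}` volume mapping is the piece the doc relies on later:

```yaml
version: '2.3'
services:
  apache:
    # Hypothetical image/tag, for illustration only.
    image: httpd:2.4
    ports:
      - 80
    volumes:
      # Write the container's logs into the directory elastic-package
      # shares with the Elastic Agent; SERVICE_LOGS_DIR is the special
      # keyword described in the doc above.
      - ${SERVICE_LOGS_DIR}:/usr/local/apache2/logs
```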
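
The new "Test case definition" section continues past this hunk; its context mentions the `data_stream.vars` field, the `{{SERVICE_LOGS_DIR}}` placeholder, and the `input` option. A minimal sketch of what such a per-data-stream test config can look like, with illustrative values only (the exact file location under `_dev/test/system/` is an assumption here):

```yaml
# Hypothetical system test config for a logs data stream.
# `input` selects which stream from the data stream's manifest to test;
# `{{SERVICE_LOGS_DIR}}` resolves to the volume shared with the
# integration service container, as described above.
input: logfile
data_stream:
  vars:
    paths:
      - "{{SERVICE_LOGS_DIR}}/access.log*"
```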

go.mod

Lines changed: 1 addition & 1 deletion
````diff
@@ -10,7 +10,7 @@ require (
 	github.com/elastic/go-elasticsearch/v7 v7.9.0
 	github.com/elastic/go-licenser v0.3.1
 	github.com/elastic/go-ucfg v0.8.3
-	github.com/elastic/package-spec/code/go v0.0.0-20210126144901-46090e1310d3
+	github.com/elastic/package-spec/code/go v0.0.0-20210127201409-dd08da649371
 	github.com/go-git/go-billy/v5 v5.0.0
 	github.com/go-git/go-git/v5 v5.1.0
 	github.com/go-openapi/strfmt v0.19.6 // indirect
````
