
Commit f2f7ff3

refactor code fences to use proper language or command shortcode
1 parent d0b6107 commit f2f7ff3

11 files changed, +57 -169 lines changed


content/en/docs/Integrations/architect/index.md

+6 -5

@@ -17,11 +17,12 @@ If you are adapting an existing configuration, you might be able to skip certain
 ## Example
 
 ### Setup
-To use Architect in conjunction with Localstack, simply install the ```arclocal``` command (sources can be found [here](https://github.com/localstack/architect-local)).
-```
-npm install -g architect-local @architect/architect aws-sdk
-```
-The ``` arclocal``` command has the same usage as the ```arc``` command, so you can start right away.
+To use Architect in conjunction with Localstack, simply install the `arclocal` command (sources can be found [here](https://github.com/localstack/architect-local)).
+{{< command >}}
+$ npm install -g architect-local @architect/architect aws-sdk
+{{< /command >}}
+
+The `arclocal` command has the same usage as the `arc` command, so you can start right away.
 
 Create a test directory
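
For context on the refactor itself: untagged fences around shell commands are replaced with the docs site's (Hugo-style) `command` shortcode plus an explicit `$` prompt. A minimal before/after sketch of that pattern in the Markdown source, reusing the snippet from the hunk above:

````markdown
<!-- before: plain fence with no language tag -->
```
npm install -g architect-local @architect/architect aws-sdk
```

<!-- after: `command` shortcode with a shell prompt -->
{{< command >}}
$ npm install -g architect-local @architect/architect aws-sdk
{{< /command >}}
````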

content/en/docs/Integrations/pulumi/index.md

+3 -3

@@ -52,8 +52,8 @@ Installing dependencies...
 
 This will create the following directory structure.
 
-```language
-% tree -L 1
+{{< command >}}
+$ tree -L 1
 .
 ├── index.ts
 ├── node_modules
@@ -62,7 +62,7 @@ This will create the following directory structure.
 ├── Pulumi.dev.yaml
 ├── Pulumi.yaml
 └── tsconfig.json
-```
+{{< / command >}}
 
 Now edit your stack configuration `Pulumi.dev.yaml` as follows:
 
content/en/docs/Integrations/spring-cloud-function/index.md

+1 -1

@@ -245,7 +245,7 @@ Let's configure it to lookup our function Beans by HTTP method and path, create
 new `application.properties` file under `src/main/resources/application.properties`
 with the following content:
 
-```properties
+```env
 spring.main.banner-mode=off
 spring.cloud.function.definition=functionRouter
 spring.cloud.function.routing-expression=headers['httpMethod'].concat(' ').concat(headers['path'])

content/en/docs/Integrations/terraform/index.md

+6 -6

@@ -34,7 +34,7 @@ The following changes go into this file.
 
 First, we have to specify mock credentials for the AWS provider:
 
-```
+```hcl
 provider "aws" {
 
 access_key = "test"
@@ -48,7 +48,7 @@ provider "aws" {
 Second, we need to avoid issues with routing and authentication (as we do not need it).
 Therefore we need to supply some general parameters:
 
-```
+```hcl
 provider "aws" {
 
 access_key = "test"
@@ -66,7 +66,7 @@ provider "aws" {
 Additionally, we have to point the individual services to LocalStack.
 In case of S3, this looks like the following snippet
 
-```
+```hcl
 endpoints {
 s3 = "http://localhost:4566"
 }
@@ -79,7 +79,7 @@ In case of S3, this looks like the following snippet
 ### S3 Bucket
 
 Now we are adding a minimal s3 bucket outside the provider
-```
+```hcl
 resource "aws_s3_bucket" "test-bucket" {
 bucket = "my-bucket"
 }
@@ -89,7 +89,7 @@ resource "aws_s3_bucket" "test-bucket" {
 ### Final Configuration
 
 The final (minimal) configuration to deploy an s3 bucket thus looks like this
-```
+```hcl
 provider "aws" {
 
 access_key = "mock_access_key"
@@ -128,7 +128,7 @@ $ terraform deploy
 
 Here is a configuration example with additional endpoints:
 
-```
+```hcl
 provider "aws" {
 access_key = "test"
 secret_key = "test"
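
The Terraform snippets in this file are now fenced as `hcl`. Putting the fragments visible in the hunks above together, the kind of block the docs now highlight reads roughly like the sketch below (a sketch only; the additional "general parameters" mentioned in the prose sit outside the visible hunks and are omitted here):

````markdown
```hcl
provider "aws" {
  access_key = "test"
  secret_key = "test"

  # the docs' additional general parameters are omitted in this sketch

  endpoints {
    s3 = "http://localhost:4566"
  }
}

resource "aws_s3_bucket" "test-bucket" {
  bucket = "my-bucket"
}
```
````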

content/en/docs/Local AWS Services/cognito/index.md

+28 -26

@@ -17,7 +17,7 @@ LocalStack Pro contains basic support for authentication via Cognito. You can cr
 {{< /alert >}}
 
 First, start up LocalStack. In addition to the normal setup, we need to pass several SMTP settings as environment variables.
-```
+```env
 SMTP_HOST=<smtp-host-address>
 SMTP_USER=<email-user-name>
 SMTP_PASS=<email-password>
@@ -28,12 +28,12 @@ Don't forget to pass Cognito as a service as well.
 ## Creating a User Pool
 
 Just as with aws, you can create a User Pool in LocalStack via
-```
-awslocal cognito-idp create-user-pool --pool-name test
-```
+{{< command >}}
+$ awslocal cognito-idp create-user-pool --pool-name test
+{{< /command >}}
 The response should look similar to this
 
-```
+```json
 "UserPool": {
 "Id": "us-east-1_fd924693e9b04f549f989283123a29c2",
 "Name": "test",
@@ -60,28 +60,31 @@ The response should look similar to this
 "AllowAdminCreateUserOnly": false
 },
 "Arn": "arn:aws:cognito-idp:us-east-1:000000000000:userpool/us-east-1_fd924693e9b04f549f989283123a29c2"
+}
 ```
-We will need the pool-id for further operations, so save it in a ```pool_id``` variable.
+We will need the pool-id for further operations, so save it in a `pool_id` variable.
 Alternatively, you can also use a JSON processor like [jq](https://stedolan.github.io/jq/) to directly extract the necessary information when creating a pool.
-```
-pool_id=$(awslocal cognito-idp create-user-pool --pool-name test | jq -rc ".UserPool.Id")
-```
+
+{{< command >}}
+$ pool_id=$(awslocal cognito-idp create-user-pool --pool-name test | jq -rc ".UserPool.Id")
+{{< /command >}}
+
 ## Adding a Client
 
 Now we add a client to our newly created pool. We will also need the ID of the created client for the next step. The complete command for client creation with subsequent ID extraction is therefore
 
-```
-client_id=$(awslocal cognito-idp create-user-pool-client --user-pool-id $pool_id --client-name test-client | jq -rc ".UserPoolClient.ClientId")
-```
+{{< command >}}
+$ client_id=$(awslocal cognito-idp create-user-pool-client --user-pool-id $pool_id --client-name test-client | jq -rc ".UserPoolClient.ClientId")
+{{< /command >}}
 
 ## Signing up and confirming a user
 
 With these steps already taken, we can now sign up a user.
-```
-awslocal cognito-idp sign-up --client-id $client_id --username example_user --password 12345678 --user-attributes Name=email,Value=<[email protected]>
-```
+{{< command >}}
+$ awslocal cognito-idp sign-up --client-id $client_id --username example_user --password 12345678 --user-attributes Name=email,Value=<[email protected]>
+{{< /command >}}
 The response should look similar to this
-```
+```json
 {
 "UserConfirmed": false,
 "UserSub": "5fdbe1d5-7901-4fee-9d1d-518103789c94"
@@ -91,17 +94,17 @@ and you should have received a new e-mail!
 
 As you can see, our user is still unconfirmed. We can change this with the following instruction.
 
-```
-awslocal cognito-idp confirm-sign-up --client-id $client_id --username example_user --confirmation-code <received-confirmation-code>
-```
+{{< command >}}
+$ awslocal cognito-idp confirm-sign-up --client-id $client_id --username example_user --confirmation-code <received-confirmation-code>
+{{< /command >}}
 The verification code for the user is in the e-mail you received. Additionally, LocalStack prints out the verification code in the console.
 
 The above command doesn't return an answer, you need to check the pool to see that it was successful
-```
-awslocal cognito-idp list-users --user-pool-id $pool_id
-```
+{{< command >}}
+$ awslocal cognito-idp list-users --user-pool-id $pool_id
+{{< /command >}}
 which should return something similar to this
-<pre>
+```json {hl_lines=[20]}
 {
 "Users": [
 {
@@ -121,12 +124,11 @@ which should return something similar to this
 }
 ],
 "Enabled": true,
-<b>"UserStatus": "CONFIRMED"</b>
+"UserStatus": "CONFIRMED"
 }
 ]
 }
-
-</pre>
+```
 
 ## OAuth Flows via Cognito Login Form
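
Besides switching CLI snippets to the `command` shortcode, this file also replaces an HTML `<pre>` block (which used `<b>` to bold the confirmed user status) with a fenced `json` block that relies on the fence's `hl_lines` option for highlighting; in the file, `hl_lines=[20]` points at the `"UserStatus"` line of the full `list-users` response. A heavily abbreviated sketch of the pattern (in this sketch the highlighted line is 5):

````markdown
```json {hl_lines=[5]}
{
  "Users": [
    {
      "Enabled": true,
      "UserStatus": "CONFIRMED"
    }
  ]
}
```
````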

content/en/docs/Local AWS Services/elasticsearch/index.md

-1

@@ -72,7 +72,6 @@ In the LocalStack log you will see something like
 2021-10-01T21:14:27:INFO:localstack.services.install: Installing Elasticsearch plugin analysis-stempel
 2021-10-01T21:14:45:INFO:localstack.services.install: Installing Elasticsearch plugin analysis-ukrainian
 2021-10-01T21:15:01:INFO:localstack.services.es.cluster: starting elasticsearch: /opt/code/localstack/localstack/infra/elasticsearch/bin/elasticsearch -E http.port=59237 -E http.publish_port=59237 -E transport.port=0 -E network.host=127.0.0.1 -E http.compression=false -E path.data="/opt/code/localstack/localstack/infra/elasticsearch/data" -E path.repo="/tmp/localstack/es_backup" -E xpack.ml.enabled=false with env {'ES_JAVA_OPTS': '-Xms200m -Xmx600m', 'ES_TMPDIR': '/opt/code/localstack/localstack/infra/elasticsearch/tmp'}
-
 ```
 
 and after some time, you should see that the `Created` state of the domain is set to `true`:

content/en/docs/Local AWS Services/glue/index.md

+1 -1

@@ -68,7 +68,7 @@ For a more detailed example illustrating how to run a local Glue PySpark job, pl
 The Glue data catalog is integrated with Athena, and the database/table definitions can be imported via the `import-catalog-to-glue` API.
 
 Assume you are running the following Athena queries to create databases and table definitions:
-```
+```sql
 CREATE DATABASE db2
 CREATE EXTERNAL TABLE db2.table1 (a1 Date, a2 STRING, a3 INT) LOCATION 's3://test/table1'
 CREATE EXTERNAL TABLE db2.table2 (a1 Date, a2 STRING, a3 INT) LOCATION 's3://test/table2'

content/en/docs/LocalStack Tools/Lambda Tools/debugging.md.bak

-114
This file was deleted.

content/en/docs/LocalStack Tools/Lambda Tools/debugging/index.md

+9 -9

@@ -38,11 +38,11 @@ There, the necessary code fragments for enabling debugging are already present.
 ### Configure LocalStack for remote Python debugging
 
 First, make sure that LocalStack is started with the following configuration (see the [Configuration docs]({{< ref "configuration#lambda" >}}) for more information):
-```sh
-LAMBDA_REMOTE_DOCKER=0 \
+{{< command >}}
+$ LAMBDA_REMOTE_DOCKER=0 \
 LAMBDA_DOCKER_FLAGS='-p 19891:19891' \
 DEBUG=1 localstack start
-```
+{{< /command >}}
 
 ### Preparing your code
 
@@ -86,19 +86,19 @@ To create the Lambda function, you just need to take care of two things:
 
 So, in our [example](https://github.com/localstack/localstack-pro-samples/tree/master/lambda-mounting-and-debugging), this would be:
 
-```sh
-awslocal lambda create-function --function-name my-cool-local-function \
+{{< command >}}
+$ awslocal lambda create-function --function-name my-cool-local-function \
 --code S3Bucket="__local__",S3Key="$(pwd)/" \
 --handler handler.handler \
 --runtime python3.8 \
 --role cool-stacklifter
-```
+{{< /command >}}
 
 We can quickly verify that it works by invoking it with a simple payload:
 
-```sh
-awslocal lambda invoke --function-name my-cool-local-function --payload '{"message": "Hello from LocalStack!"}' output.txt
-```
+{{< command >}}
+$ awslocal lambda invoke --function-name my-cool-local-function --payload '{"message": "Hello from LocalStack!"}' output.txt
+{{< /command >}}
 
 ### Configuring Visual Studio Code for remote Python debugging
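
Worth noting from the hunks above: when a multi-line command with backslash continuations moves into the `command` shortcode, only the first line receives the `$` prompt and the continuation lines stay unprefixed. A condensed sketch of the pattern (the `LAMBDA_DOCKER_FLAGS` line from the original command is dropped here for brevity):

````markdown
<!-- before -->
```sh
LAMBDA_REMOTE_DOCKER=0 \
DEBUG=1 localstack start
```

<!-- after: prompt only on the first line -->
{{< command >}}
$ LAMBDA_REMOTE_DOCKER=0 \
DEBUG=1 localstack start
{{< /command >}}
````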

content/en/docs/LocalStack Tools/transparent-execution-mode/patched-sdks.md

+2 -2

@@ -37,9 +37,9 @@ The main advantage of this mode is, that no DNS magic is involved, and SSL certi
 
 ## Configuration
 
-If you want to disable this behavior, and use the DNS server to resolve the endpoints for AWS, you can disable this behavior using:
+If you want to disable this behavior, and use the DNS server to resolve the endpoints for AWS, you can disable this behavior by using:
 
-```
+```bash
 TRANSPARENT_LOCAL_ENDPOINTS=0
 ```
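
Here the fence is tagged `bash` rather than converted to a `command` shortcode, since the snippet is a configuration value rather than a command to run. For readers wondering how it is applied: `TRANSPARENT_LOCAL_ENDPOINTS` is a LocalStack configuration environment variable, so one plausible way to document setting it (an assumption, not shown in this diff) would be:

````markdown
{{< command >}}
$ TRANSPARENT_LOCAL_ENDPOINTS=0 localstack start
{{< /command >}}
````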

0 commit comments
