# Postgres and Graphql: Structure for Everyone
**TL;DR** The structured nature of [Postgres][postgres] (a relational database management system) allows [Postgraphile][postgraphile] to generate a wonderful interface to [Graphql][graphql]. [Project Source at Github][source]
This project will provide a brief overview of the underlying tools used to connect Postgres and Graphql.
This project requires a basic working knowledge of [Git][git], [Node][node], and a local install of [Docker][docker] with [Docker-Compose][docker-compose] for creating an isolated Postgres database. Please refer to the official documentation for installation, as it varies by operating system and is outside the scope of this tutorial. If you would prefer to use a local installation of Postgres this tutorial will still apply; skip the docker-compose configuration and adjust the provided `.env` as necessary. The corresponding [repo][source] is divided into branches in case you would like to start with a relatively clean slate or jump ahead.
This section begins with the `section-1` branch and includes a little directory structure, a `.env_example`, `database.json`, and a `docker-compose.yml`. Let's begin: `git checkout section-1`
>Note: In the root directory of the project, first have a look at the provided `.env_example` file. You will want to copy it to a local `.env`. Its purpose is to provide a central place for environmental variables, which may or may not be super secret stuff like passwords or connection settings. Please make sure to add `.env` to your local `.gitignore` before proceeding: `echo '.env' >> .gitignore`
Let's take a look at the `.env`. The first three lines will be used by docker-compose to configure the new postgres instance with a required user, password and database name:
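A minimal `.env` might look like the following. The values here are illustrative placeholders, not the repo's actual defaults; the variable names match the ones `database.json` reads later in this section:

```sh
# Used by docker-compose to initialize the postgres container
POSTGRES_USER=todo_user
POSTGRES_PASSWORD=change_me
POSTGRES_DB=todo_db
# Used by db-migrate, postgraphile, and express to connect
POSTGRES_HOST=127.0.0.1
POSTGRES_PORT=5432
```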
The two additional lines will be used by `db-migrate`, `postgraphile`, and `express`.
Next let's take a look at our `todo_db/docker-compose.yml`:
```yml
version: '3.8'
services:
  postgres:
    image: 'postgres'
    ports:
      - '5432:5432'
    volumes:
      - todovolume:/var/lib/postgresql/data/
    env_file:
      - ../.env
volumes:
  todovolume:
```
This file is responsible for the configuration of `docker-compose`. It specifies that we will use the official `postgres` docker image. `ports` specifies the exposed external port followed by the internal port postgres is using. Specifying `volumes` allows us to persist our database data. The volume will be created the first time the container is brought up. `env_file` allows us to centralize (for the most part) our environment details. Refer to the [official][docker-compose] documentation if you would like to know more about the structure and specifics of `docker-compose.yml`.
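Given the file above, typical usage looks like this (run from the `todo_db` directory; these are a sketch of standard docker-compose commands rather than repo-specific instructions):

```sh
cd todo_db
docker-compose up -d   # first run creates the container and the todovolume volume
docker-compose down    # stop the container; data persists in the volume
```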
```sh
yarn global add db-migrate
```
Why do we need `db-migrate` available globally? It is the installation method recommended by the official documentation, since the tool is primarily used on the command line. When run globally it will find and use the local install, so there are no worries about project-specific versioning.
Now let's take a look at `database.json`:

```json
{
  "dev": {
    "driver": "pg",
    "user": {"ENV": "POSTGRES_USER"},
    "password": {"ENV": "POSTGRES_PASSWORD"},
    "host": {"ENV": "POSTGRES_HOST"},
    "database": {"ENV": "POSTGRES_DB"},
    "port": {"ENV": "POSTGRES_PORT"}
  }
}
```
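For reference, the timestamped up/down SQL pairs used below can be generated with `db-migrate`'s `--sql-file` flag (the migration name here is an assumed example, not necessarily the exact one used for this repo):

```sh
db-migrate create create-schema --sql-file
# creates migrations/##############-create-schema.js plus
#   migrations/sqls/##############-create-schema-up.sql
#   migrations/sqls/##############-create-schema-down.sql
```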
Our first up migration creates the schema:

```SQL
CREATE SCHEMA IF NOT EXISTS todo_public;
```
and now a corresponding down migration in `##############-create-schema-down.sql`
```SQL
DROP SCHEMA todo_public;
```
So what's going on here? If you have not seen much SQL and it appears it's yelling at you with a harsh tone of capitalization, do not worry, that is just a formality. SQL is case insensitive, so it often increases readability if SQL keywords are capitalized. Feel free to use all lowercase everything if you prefer. Why do we need two migrations? For everything we do to modify the database, we want to be able to easily revert our changes. This practice is somewhat [contested][stackoverflow-migration], although I tend to like it, especially during projects like this which are primarily for experimentation.
If you are following along, jump to **Start Section 2** below. If you are just now starting the tutorial at branch `section-2`, you will want to make sure you have a postgres instance up and running with an appropriately configured `.env`. I have provided defaults in `.env_example` that are not at all secure; at the very least a password change might be in order. Make sure to add `.env` to your `.gitignore` and then:
```sh
yarn install && yarn global add db-migrate
```
---
## Start Section 2
Now let's test the database with `pgcli` or `psql` (you can use any postgres client, really).
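One way to connect is with a connection string of the form used later in this section (the credentials here are illustrative placeholders; substitute the values from your `.env`):

```sh
psql "postgres://todo_user:change_me@127.0.0.1:5432/todo_db"
```

`pgcli` accepts the same connection string.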
Now at the prompt we will describe our `todo` table in the `todo_public` schema:
```psql
postgres@127:todo_db> \d todo_public.todo
```
This should produce output describing the `todo` table's columns.
If your client cannot connect or you cannot list the table something has gone awry. Try `\dt todo_public.*` to see if any table has been created under the schema `todo_public`. If you cannot connect check that your environmental variables are correct. The connection string is of the form `"postgres://POSTGRES_USER:POSTGRES_PASSWORD@POSTGRES_HOST/POSTGRES_DB"`. I am certain you will have it figured out in no time. If not, open an issue on the [github][source].
Install `postgraphile` and a plugin to simplify naming **globally**. Do not worry, we will get rid of our global installs when we are done with them. For now it is convenient for command line usage.
`yarn global add postgraphile @graphile-contrib/pg-simplify-inflector`
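A typical way to point the `postgraphile` CLI at our schema with the inflector plugin looks like the following; the tutorial's exact invocation may differ, and the connection string values are placeholders:

```sh
postgraphile \
  --connection "postgres://todo_user:change_me@127.0.0.1:5432/todo_db" \
  --schema todo_public \
  --append-plugins @graphile-contrib/pg-simplify-inflector \
  --watch
```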
Click the "GraphiQL GUI/IDE" link or open localhost:5000/graphiql in your browser.

<details>
<summary>Here is a screenshot if you would like to compare</summary>

</details>
Voila! We have easy access to our `todo` table. The `graphiql` explorer is primarily divided into four panes. The first is populated with queries or mutations, depending on which you select, that can be built. The second is the actual interface to graphql queries. Clicking on items in the first pane will incrementally build queries in the second. The third pane is the output from the queries and the final pane is the documentation that postgraphile automagically generates for us.
Let's use the first pane to build a query. Select the `todos` dropdown and then check the fields you would like to fetch.
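The generated query should look roughly like this (the field names are assumptions based on the `todo` table's columns and the simplified inflection):

```graphql
query {
  todos {
    nodes {
      id
      task
      completed
      createdAt
    }
  }
}
```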
>Note: The `pg-simplify-inflector` simplifies the "inflection", or the method postgraphile uses to create names when analyzing the database. Postgraphile is by default very verbose/explicit with naming as it [leads to less accidental conflicts][inflection]. In our example there is very little difference, but try starting postgraphile without it and you will get the general idea of what it does.
Before we move on, let's explore the documentation and see what postgraphile has generated for us so far.
We are primarily interested in adding a todo at this point, as we have none. Browse `mutations`, then `input:CreateTodoInput`, and then `todo:TodoInput`. Now we know! Although not really. The fields for input are listed, but what if our coworker also wanted to explore our newly created API? Maybe we should add some helpful comments in our database that describe the fields we will be using. Back to the SQL!
and corresponding `migrations/sqls/######-todo-comments-down.sql`
```sql
COMMENT ON TABLE todo_public.todo IS NULL;
COMMENT ON COLUMN todo_public.todo.id IS NULL;
COMMENT ON COLUMN todo_public.todo.task IS NULL;
COMMENT ON COLUMN todo_public.todo.created_at IS NULL;
COMMENT ON COLUMN todo_public.todo.completed IS NULL;
```
>Note: COMMENT is PostgreSQL specific
`db-migrate up`
So let's create a todo. If we look in the leftmost panel we see only queries, so switch it over to mutations.
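A create mutation built this way looks roughly like the following (which payload fields you select back is up to you; the names are assumptions based on the table and the simplified inflection):

```graphql
mutation {
  createTodo(input: { todo: { task: "try out postgraphile" } }) {
    todo {
      nodeId
      id
      task
      completed
    }
  }
}
```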
Ohhhh `NodeId`, where do you come from? Look in the documentation. It will tell you:
`The ID scalar type represents a unique identifier, often used to refetch an object or as key for a cache. The ID type appears in a JSON response as a String; however, it is not intended to be human-readable. When expected as an input type, any string (such as "4") or integer (such as 4) input value will be accepted as an ID.`
Graphql gets much of its power and usefulness from the fact that it _normalizes_ our data. Each piece of data in graphql gets a unique identifier. The key is derived from the data and allows us to query our endpoint with laser precision, and then also to cache the query. We will not need the NodeId for quite some time, but it is nice to know we can get a hold of any of our nodes (todos) by NodeId if the need arises (like client side cache manipulation).
Have fun with graphiql for a little while. Make sure that you can fetch all the todos and fields, and perform an update and a deletion. Hint: The explorer (the first panel in graphiql) is your friend. Dropdowns follow if you get stuck.
We will `mkdir server` for our express server to live in, and then create two files.