
Commit 2140994

Author: Jon Wayne Parrott
Generate readmes for most service samples [(googleapis#599)](GoogleCloudPlatform/python-docs-samples#599)
1 parent 5b2f9bf commit 2140994

File tree: 5 files changed, +377 -7 lines changed

samples/snippets/README.md

Lines changed: 0 additions & 5 deletions
This file was deleted.

samples/snippets/README.rst

Lines changed: 332 additions & 0 deletions
@@ -0,0 +1,332 @@
.. This file is automatically generated. Do not edit this file directly.


Google BigQuery Python Samples
===============================================================================

This directory contains samples for Google BigQuery. `Google BigQuery`_ is Google's fully managed, petabyte scale, low cost analytics data warehouse. BigQuery is NoOps—there is no infrastructure to manage and you don't need a database administrator—so you can focus on analyzing data to find meaningful insights, use familiar SQL, and take advantage of our pay-as-you-go model.

.. _Google BigQuery: https://cloud.google.com/bigquery/docs

Setup
-------------------------------------------------------------------------------

Authentication
++++++++++++++

Authentication is typically done through `Application Default Credentials`_,
which means you do not have to change the code to authenticate as long as
your environment has credentials. You have a few options for setting up
authentication:

#. When running locally, use the `Google Cloud SDK`_

    .. code-block:: bash

        gcloud beta auth application-default login

#. When running on App Engine or Compute Engine, credentials are already
   set up. However, you may need to configure your Compute Engine instance
   with `additional scopes`_.

#. You can create a `Service Account key file`_. This file can be used to
   authenticate to Google Cloud Platform services from any environment. To use
   the file, set the ``GOOGLE_APPLICATION_CREDENTIALS`` environment variable to
   the path to the key file, for example:

    .. code-block:: bash

        export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service_account.json

.. _Application Default Credentials: https://cloud.google.com/docs/authentication#getting_credentials_for_server-centric_flow
.. _additional scopes: https://cloud.google.com/compute/docs/authentication#using
.. _Service Account key file: https://developers.google.com/identity/protocols/OAuth2ServiceAccount#creatinganaccount
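Once credentials are configured, the client library discovers them
automatically. As a minimal illustration (a hedged sketch assuming a recent
``google-cloud-bigquery`` release, which may differ from the library version
these samples were written against):

.. code-block:: python

    from google.cloud import bigquery

    # Client() discovers Application Default Credentials from the
    # environment, e.g. GOOGLE_APPLICATION_CREDENTIALS or the gcloud
    # credentials cached by the login command above.
    client = bigquery.Client()
    print(client.project)  # project inferred from the credentials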
Install Dependencies
++++++++++++++++++++

#. Install `pip`_ and `virtualenv`_ if you do not already have them.

#. Create a virtualenv. Samples are compatible with Python 2.7 and 3.4+.

    .. code-block:: bash

        $ virtualenv env
        $ source env/bin/activate

#. Install the dependencies needed to run the samples.

    .. code-block:: bash

        $ pip install -r requirements.txt

.. _pip: https://pip.pypa.io/
.. _virtualenv: https://virtualenv.pypa.io/
Samples
-------------------------------------------------------------------------------

Quickstart
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

To run this sample:

.. code-block:: bash

    $ python quickstart.py
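For orientation, a quickstart of this kind usually boils down to creating a
client and making a single call. A hedged sketch (the actual
``quickstart.py`` may differ; ``list_datasets`` is assumed from the modern
client API):

.. code-block:: python

    from google.cloud import bigquery

    client = bigquery.Client()

    # List the datasets in the client's default project.
    for dataset in client.list_datasets():
        print(dataset.dataset_id)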
Sync query
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

To run this sample:

.. code-block:: bash

    $ python sync_query.py

    usage: sync_query.py [-h] query

    Command-line application to perform synchronous queries in BigQuery.

    For more information, see the README.md under /bigquery.

    Example invocation:
        $ python sync_query.py \
            'SELECT corpus FROM `publicdata.samples.shakespeare` GROUP BY corpus'

    positional arguments:
      query       BigQuery SQL Query.

    optional arguments:
      -h, --help  show this help message and exit
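Under the hood, a synchronous query submits the SQL and blocks until rows
come back. A minimal sketch against the current client API (the samples of
this era used an older query interface, so treat this as an assumption, not
the sample's exact code):

.. code-block:: python

    from google.cloud import bigquery

    client = bigquery.Client()
    query_job = client.query(
        'SELECT corpus FROM `publicdata.samples.shakespeare` GROUP BY corpus')

    # result() blocks until the query completes, then iterates the rows.
    for row in query_job.result():
        print(row.corpus)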
Async query
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

To run this sample:

.. code-block:: bash

    $ python async_query.py

    usage: async_query.py [-h] query

    Command-line application to perform asynchronous queries in BigQuery.

    For more information, see the README.md under /bigquery.

    Example invocation:
        $ python async_query.py 'SELECT corpus FROM `publicdata.samples.shakespeare` GROUP BY corpus'

    positional arguments:
      query       BigQuery SQL Query.

    optional arguments:
      -h, --help  show this help message and exit
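The asynchronous variant starts the job and polls for completion instead of
blocking. Again a hedged sketch against the current client API (the original
sample used an older asynchronous-query call):

.. code-block:: python

    import time

    from google.cloud import bigquery

    client = bigquery.Client()
    query_job = client.query(
        'SELECT corpus FROM `publicdata.samples.shakespeare` GROUP BY corpus')

    # Poll the job status rather than blocking inside result().
    while not query_job.done():
        time.sleep(1)

    for row in query_job.result():
        print(row.corpus)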
Snippets
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

To run this sample:

.. code-block:: bash

    $ python snippets.py

    usage: snippets.py [-h] [--project PROJECT]
                       {list-datasets,list-tables,create-table,list-rows,copy-table,delete-table}
                       ...

    Samples that demonstrate basic operations in the BigQuery API.

    For more information, see the README.md under /bigquery.

    Example invocation:
        $ python snippets.py list-datasets

    The dataset and table should already exist.

    positional arguments:
      {list-datasets,list-tables,create-table,list-rows,copy-table,delete-table}
        list-datasets       Lists all datasets in a given project. If no project
                            is specified, then the currently active project is
                            used.
        list-tables         Lists all of the tables in a given dataset. If no
                            project is specified, then the currently active
                            project is used.
        create-table        Creates a simple table in the given dataset. If no
                            project is specified, then the currently active
                            project is used.
        list-rows           Prints rows in the given table. Will print 25 rows
                            at most for brevity, as tables can contain large
                            numbers of rows. If no project is specified, then
                            the currently active project is used.
        copy-table          Copies a table. If no project is specified, then the
                            currently active project is used.
        delete-table        Deletes a table in a given dataset. If no project is
                            specified, then the currently active project is
                            used.

    optional arguments:
      -h, --help            show this help message and exit
      --project PROJECT
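Each subcommand maps onto a small number of client calls. For example,
``list-tables`` and ``delete-table`` correspond roughly to the following
(hedged sketch with the modern client API; dataset and table names are
placeholders):

.. code-block:: python

    from google.cloud import bigquery

    client = bigquery.Client()

    # list-tables: enumerate the tables in a dataset (placeholder ID).
    for table in client.list_tables('example_dataset'):
        print(table.table_id)

    # delete-table: remove a table by its fully qualified ID.
    client.delete_table('example_project.example_dataset.example_table')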
Load data from a file
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

To run this sample:

.. code-block:: bash

    $ python load_data_from_file.py

    usage: load_data_from_file.py [-h] dataset_name table_name source_file_name

    Loads data into BigQuery from a local file.

    For more information, see the README.md under /bigquery.

    Example invocation:
        $ python load_data_from_file.py example_dataset example_table example-data.csv

    The dataset and table should already exist.

    positional arguments:
      dataset_name
      table_name
      source_file_name  Path to a .csv file to upload.

    optional arguments:
      -h, --help        show this help message and exit
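Loading a local file runs a load job behind the scenes. A hedged sketch of
the core call (modern ``load_table_from_file`` API assumed; file and table
names are placeholders):

.. code-block:: python

    from google.cloud import bigquery

    client = bigquery.Client()
    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV)

    with open('example-data.csv', 'rb') as source_file:
        load_job = client.load_table_from_file(
            source_file, 'example_dataset.example_table',
            job_config=job_config)

    load_job.result()  # wait for the load job to finish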
Load data from Cloud Storage
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

To run this sample:

.. code-block:: bash

    $ python load_data_from_gcs.py

    usage: load_data_from_gcs.py [-h] dataset_name table_name source

    Loads data into BigQuery from an object in Google Cloud Storage.

    For more information, see the README.md under /bigquery.

    Example invocation:
        $ python load_data_from_gcs.py example_dataset example_table gs://example-bucket/example-data.csv

    The dataset and table should already exist.

    positional arguments:
      dataset_name
      table_name
      source        The Google Cloud Storage object to load. Must be in the
                    format gs://bucket_name/object_name

    optional arguments:
      -h, --help    show this help message and exit
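Loading from Cloud Storage skips the local upload: BigQuery reads the object
directly. A hedged sketch (``load_table_from_uri`` from the modern client
API; bucket and table names are placeholders):

.. code-block:: python

    from google.cloud import bigquery

    client = bigquery.Client()
    load_job = client.load_table_from_uri(
        'gs://example-bucket/example-data.csv',  # object to load
        'example_dataset.example_table')         # existing destination table

    load_job.result()  # wait for the load job to finish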
Load streaming data
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

To run this sample:

.. code-block:: bash

    $ python stream_data.py

    usage: stream_data.py [-h] dataset_name table_name json_data

    Loads a single row of data directly into BigQuery.

    For more information, see the README.md under /bigquery.

    Example invocation:
        $ python stream_data.py example_dataset example_table '["Gandalf", 2000]'

    The dataset and table should already exist.

    positional arguments:
      dataset_name
      table_name
      json_data     The row to load into BigQuery as an array in JSON format.

    optional arguments:
      -h, --help    show this help message and exit
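Streaming inserts bypass load jobs entirely, so rows become queryable almost
immediately. A hedged sketch (``insert_rows_json`` from the modern client
API; the column names are an assumed schema, not the sample's actual one):

.. code-block:: python

    from google.cloud import bigquery

    client = bigquery.Client()

    # insert_rows_json returns a list of per-row errors; empty means success.
    errors = client.insert_rows_json(
        'example_dataset.example_table',
        [{'name': 'Gandalf', 'age': 2000}])  # assumed column names

    if errors:
        print('Encountered errors: {}'.format(errors))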
Export data to Cloud Storage
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

To run this sample:

.. code-block:: bash

    $ python export_data_to_gcs.py

    usage: export_data_to_gcs.py [-h] dataset_name table_name destination

    Exports data from BigQuery to an object in Google Cloud Storage.

    For more information, see the README.md under /bigquery.

    Example invocation:
        $ python export_data_to_gcs.py example_dataset example_table gs://example-bucket/example-data.csv

    The dataset and table should already exist.

    positional arguments:
      dataset_name
      table_name
      destination   The destination Google Cloud Storage object. Must be in the
                    format gs://bucket_name/object_name

    optional arguments:
      -h, --help    show this help message and exit
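Exporting is the mirror image of the Cloud Storage load: an extract job
writes the table out as an object. A hedged sketch (``extract_table`` from
the modern client API; names are placeholders):

.. code-block:: python

    from google.cloud import bigquery

    client = bigquery.Client()
    extract_job = client.extract_table(
        'example_dataset.example_table',         # source table
        'gs://example-bucket/example-data.csv')  # destination object

    extract_job.result()  # wait for the export to finish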
The client library
-------------------------------------------------------------------------------

This sample uses the `Google Cloud Client Library for Python`_.
You can read the documentation for more details on API usage and use GitHub
to `browse the source`_ and `report issues`_.

.. _Google Cloud Client Library for Python:
   https://googlecloudplatform.github.io/google-cloud-python/
.. _browse the source:
   https://github.com/GoogleCloudPlatform/google-cloud-python
.. _report issues:
   https://github.com/GoogleCloudPlatform/google-cloud-python/issues


.. _Google Cloud SDK: https://cloud.google.com/sdk/

samples/snippets/README.rst.in

Lines changed: 43 additions & 0 deletions
@@ -0,0 +1,43 @@
# This file is used to generate README.rst

product:
  name: Google BigQuery
  short_name: BigQuery
  url: https://cloud.google.com/bigquery/docs
  description: >
    `Google BigQuery`_ is Google's fully managed, petabyte scale, low cost
    analytics data warehouse. BigQuery is NoOps—there is no infrastructure to
    manage and you don't need a database administrator—so you can focus on
    analyzing data to find meaningful insights, use familiar SQL, and take
    advantage of our pay-as-you-go model.

setup:
- auth
- install_deps

samples:
- name: Quickstart
  file: quickstart.py
- name: Sync query
  file: sync_query.py
  show_help: true
- name: Async query
  file: async_query.py
  show_help: true
- name: Snippets
  file: snippets.py
  show_help: true
- name: Load data from a file
  file: load_data_from_file.py
  show_help: true
- name: Load data from Cloud Storage
  file: load_data_from_gcs.py
  show_help: true
- name: Load streaming data
  file: stream_data.py
  show_help: true
- name: Export data to Cloud Storage
  file: export_data_to_gcs.py
  show_help: true

cloud_client_library: true

samples/snippets/async_query.py

Lines changed: 1 addition & 1 deletion
@@ -19,7 +19,7 @@
 For more information, see the README.md under /bigquery.

 Example invocation:
-    $ python async_query.py \
+    $ python async_query.py \\
         'SELECT corpus FROM `publicdata.samples.shakespeare` GROUP BY corpus'
 """
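The escaped backslash matters because this docstring is printed as argparse
help text and is not a raw string: in a normal Python string literal a
trailing ``\`` consumes the newline, so the help output would lose its line
break. A quick illustration of the two behaviors:

.. code-block:: python

    a = "x \
    y"
    b = "x \\"

    print(a)  # backslash-newline is a line continuation: prints one line
    print(b)  # escaped backslash prints literally: x \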
