@@ -70,6 +70,18 @@ Install Dependencies

Samples
-------------------------------------------------------------------------------

+ Simple Application
+ +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+
+
+
+ To run this sample:
+
+ .. code-block:: bash
+
+     $ python simple_app.py
+
+
Quickstart
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

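
The diff shows only the invocation for the new sample. For orientation, a minimal ``simple_app.py`` along these lines would satisfy it; this is a sketch assuming a recent ``google-cloud-bigquery`` client, with an illustrative query against a public dataset, not necessarily the PR's actual file:

.. code-block:: python

    # simple_app.py: illustrative sketch only.
    from google.cloud import bigquery


    def main():
        # Credentials are picked up from the environment
        # (e.g. GOOGLE_APPLICATION_CREDENTIALS).
        client = bigquery.Client()

        query_job = client.query("""
            #standardSQL
            SELECT corpus, COUNT(DISTINCT word) AS unique_words
            FROM `bigquery-public-data.samples.shakespeare`
            GROUP BY corpus
            ORDER BY unique_words DESC
            LIMIT 10""")

        for row in query_job:  # iterating waits for the query to finish
            print('{}: {}'.format(row.corpus, row.unique_words))


    if __name__ == '__main__':
        main()
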
@@ -82,7 +94,7 @@ To run this sample:

    $ python quickstart.py


- Sync query
+ Query
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++


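For reference, a quickstart matching the invocation above can be as small as the following sketch (assuming a recent ``google-cloud-bigquery`` client; the dataset name is illustrative):

.. code-block:: python

    # quickstart.py: illustrative sketch only.
    from google.cloud import bigquery


    def run_quickstart():
        client = bigquery.Client()

        # Create a new dataset in the client's default project.
        # 'my_new_dataset' is an illustrative name.
        dataset_ref = bigquery.DatasetReference(client.project, 'my_new_dataset')
        dataset = client.create_dataset(bigquery.Dataset(dataset_ref))

        print('Dataset {} created.'.format(dataset.dataset_id))


    if __name__ == '__main__':
        run_quickstart()
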
@@ -91,26 +103,35 @@ To run this sample:

.. code-block:: bash

-     $ python sync_query.py
+     $ python query.py

-     usage: sync_query.py [-h] query
+     usage: query.py [-h] [--use_standard_sql]
+                     [--destination_table DESTINATION_TABLE]
+                     query

-     Command-line application to perform synchronous queries in BigQuery.
+     Command-line application to perform queries in BigQuery.

    For more information, see the README.rst.

    Example invocation:
-         $ python sync_query.py \
-             'SELECT corpus FROM `publicdata.samples.shakespeare` GROUP BY corpus'
+         $ python query.py '#standardSQL
+         SELECT corpus
+         FROM `bigquery-public-data.samples.shakespeare`
+         GROUP BY corpus
+         ORDER BY corpus'

    positional arguments:
-       query       BigQuery SQL Query.
+       query                 BigQuery SQL Query.

    optional arguments:
-       -h, --help  show this help message and exit
+       -h, --help            show this help message and exit
+       --use_standard_sql    Use standard SQL syntax.
+       --destination_table DESTINATION_TABLE
+                             Destination table to use for results. Example:
+                             my_dataset.my_table


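The new flags in the help text above imply roughly the following core logic. This is a sketch, not the PR's implementation: it assumes a recent ``google-cloud-bigquery`` client and omits the argparse wiring:

.. code-block:: python

    # Core of a query.py-style script: illustrative sketch only.
    from google.cloud import bigquery


    def run_query(query, use_standard_sql=False, destination_table=None):
        client = bigquery.Client()
        job_config = bigquery.QueryJobConfig()
        job_config.use_legacy_sql = not use_standard_sql

        if destination_table is not None:
            # Expected as 'my_dataset.my_table', per the help text.
            dataset_id, table_id = destination_table.split('.')
            job_config.destination = bigquery.TableReference(
                bigquery.DatasetReference(client.project, dataset_id),
                table_id)
            # Replace the destination table if it already exists.
            job_config.write_disposition = (
                bigquery.WriteDisposition.WRITE_TRUNCATE)

        query_job = client.query(query, job_config=job_config)
        for row in query_job:  # iterating waits for the query to finish
            print(row)
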
- Async query
+ Parameterized Query
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++


@@ -119,23 +140,29 @@ To run this sample:

.. code-block:: bash

-     $ python async_query.py
+     $ python query_params.py

-     usage: async_query.py [-h] query
+     usage: query_params.py [-h] {named,positional,array,timestamp,struct} ...

-     Command-line application to perform asynchronous queries in BigQuery.
+     Command-line app to perform queries with parameters in BigQuery.

    For more information, see the README.rst.

    Example invocation:
-         $ python async_query.py \
-             'SELECT corpus FROM `publicdata.samples.shakespeare` GROUP BY corpus'
+         $ python query_params.py named 'romeoandjuliet' 100
+         $ python query_params.py positional 'romeoandjuliet' 100

    positional arguments:
-       query       BigQuery SQL Query.
+       {named,positional,array,timestamp,struct}
+                             samples
+         named               Run a query with named parameters.
+         positional          Run a query with positional parameters.
+         array               Run a query with an array parameter.
+         timestamp           Run a query with a timestamp parameter.
+         struct              Run a query with a struct parameter.

    optional arguments:
-       -h, --help  show this help message and exit
+       -h, --help            show this help message and exit


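For instance, the ``named`` subcommand from the example invocation corresponds roughly to this sketch (assuming a recent ``google-cloud-bigquery`` client; the SQL and values mirror the 'romeoandjuliet' example):

.. code-block:: python

    # Named query parameters: illustrative sketch only.
    from google.cloud import bigquery


    def query_named_params(corpus, min_word_count):
        client = bigquery.Client()
        query = """
            SELECT word, word_count
            FROM `bigquery-public-data.samples.shakespeare`
            WHERE corpus = @corpus
            AND word_count >= @min_word_count
            ORDER BY word_count DESC
        """
        job_config = bigquery.QueryJobConfig()
        job_config.query_parameters = [
            bigquery.ScalarQueryParameter('corpus', 'STRING', corpus),
            bigquery.ScalarQueryParameter(
                'min_word_count', 'INT64', min_word_count),
        ]

        for row in client.query(query, job_config=job_config):
            print('{}: {}'.format(row.word, row.word_count))


    # Mirrors: $ python query_params.py named 'romeoandjuliet' 100
    query_named_params('romeoandjuliet', 100)
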
Snippets
@@ -202,20 +229,21 @@ To run this sample:

    $ python load_data_from_file.py

-     usage: load_data_from_file.py [-h] dataset_name table_name source_file_name
+     usage: load_data_from_file.py [-h] dataset_id table_id source_file_name

    Loads data into BigQuery from a local file.

    For more information, see the README.rst.

    Example invocation:
-         $ python load_data_from_file.py example_dataset example_table example-data.csv
+         $ python load_data_from_file.py example_dataset example_table \
+             example-data.csv

    The dataset and table should already exist.

    positional arguments:
-       dataset_name
-       table_name
+       dataset_id
+       table_id
      source_file_name  Path to a .csv file to upload.

    optional arguments:
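
A local-file load like the one documented above can be sketched as follows (assuming a recent ``google-cloud-bigquery`` client; CSV format is assumed, matching the ``.csv`` help text):

.. code-block:: python

    # Loading a local CSV file: illustrative sketch only.
    from google.cloud import bigquery


    def load_data_from_file(dataset_id, table_id, source_file_name):
        client = bigquery.Client()
        table_ref = bigquery.TableReference(
            bigquery.DatasetReference(client.project, dataset_id), table_id)

        job_config = bigquery.LoadJobConfig()
        job_config.source_format = bigquery.SourceFormat.CSV

        with open(source_file_name, 'rb') as source_file:
            job = client.load_table_from_file(
                source_file, table_ref, job_config=job_config)

        job.result()  # wait for the load job to complete
        print('Loaded {} rows into {}.{}.'.format(
            job.output_rows, dataset_id, table_id))
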
@@ -233,25 +261,26 @@ To run this sample:

    $ python load_data_from_gcs.py

-     usage: load_data_from_gcs.py [-h] dataset_name table_name source
+     usage: load_data_from_gcs.py [-h] dataset_id table_id source

    Loads data into BigQuery from an object in Google Cloud Storage.

    For more information, see the README.rst.

    Example invocation:
-         $ python load_data_from_gcs.py example_dataset example_table gs://example-bucket/example-data.csv
+         $ python load_data_from_gcs.py example_dataset example_table \
+             gs://example-bucket/example-data.csv

    The dataset and table should already exist.

    positional arguments:
-       dataset_name
-       table_name
-       source      The Google Cloud Storage object to load. Must be in the format
-                   gs://bucket_name/object_name
+       dataset_id
+       table_id
+       source      The Google Cloud Storage object to load. Must be in the format
+                   gs://bucket_name/object_name

    optional arguments:
-       -h, --help  show this help message and exit
+       -h, --help            show this help message and exit


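The Cloud Storage variant differs only in taking a ``gs://`` URI instead of a file object; a sketch under the same assumptions:

.. code-block:: python

    # Loading from a Cloud Storage URI: illustrative sketch only.
    from google.cloud import bigquery


    def load_data_from_gcs(dataset_id, table_id, source):
        client = bigquery.Client()
        table_ref = bigquery.TableReference(
            bigquery.DatasetReference(client.project, dataset_id), table_id)

        # source is e.g. 'gs://example-bucket/example-data.csv'
        job = client.load_table_from_uri(source, table_ref)
        job.result()  # wait for the load job to complete

        print('Loaded {} rows into {}.{}.'.format(
            job.output_rows, dataset_id, table_id))
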
Load streaming data
@@ -265,24 +294,25 @@ To run this sample:

    $ python stream_data.py

-     usage: stream_data.py [-h] dataset_name table_name json_data
+     usage: stream_data.py [-h] dataset_id table_id json_data

    Loads a single row of data directly into BigQuery.

    For more information, see the README.rst.

    Example invocation:
-         $ python stream_data.py example_dataset example_table '["Gandalf", 2000]'
+         $ python stream_data.py example_dataset example_table \
+             '["Gandalf", 2000]'

    The dataset and table should already exist.

    positional arguments:
-       dataset_name
-       table_name
-       json_data   The row to load into BigQuery as an array in JSON format.
+       dataset_id
+       table_id
+       json_data   The row to load into BigQuery as an array in JSON format.

    optional arguments:
-       -h, --help  show this help message and exit
+       -h, --help            show this help message and exit


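Streaming a single row, as documented above, can be sketched like this. It assumes a recent ``google-cloud-bigquery`` client; mapping the JSON array onto the table schema by position is an illustrative choice, not necessarily what the sample itself does:

.. code-block:: python

    # Streaming a single row: illustrative sketch only.
    import json

    from google.cloud import bigquery


    def stream_data(dataset_id, table_id, json_data):
        client = bigquery.Client()
        table_ref = bigquery.TableReference(
            bigquery.DatasetReference(client.project, dataset_id), table_id)
        table = client.get_table(table_ref)  # fetch the schema

        # json_data is an array, e.g. '["Gandalf", 2000]'; zip it onto
        # the table's column names by position.
        values = json.loads(json_data)
        row = dict(zip([field.name for field in table.schema], values))

        errors = client.insert_rows_json(table, [row])
        if errors:
            print('Errors: {}'.format(errors))
        else:
            print('Loaded 1 row into {}.{}.'.format(dataset_id, table_id))
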
Export data to Cloud Storage
@@ -296,25 +326,26 @@ To run this sample:

    $ python export_data_to_gcs.py

-     usage: export_data_to_gcs.py [-h] dataset_name table_name destination
+     usage: export_data_to_gcs.py [-h] dataset_id table_id destination

    Exports data from BigQuery to an object in Google Cloud Storage.

    For more information, see the README.rst.

    Example invocation:
-         $ python export_data_to_gcs.py example_dataset example_table gs://example-bucket/example-data.csv
+         $ python export_data_to_gcs.py example_dataset example_table \
+             gs://example-bucket/example-data.csv

    The dataset and table should already exist.

    positional arguments:
-       dataset_name
-       table_name
-       destination  The desintation Google Cloud Storage object.Must be in the
-                    format gs://bucket_name/object_name
+       dataset_id
+       table_id
+       destination  The destination Google Cloud Storage object. Must be in the
+                    format gs://bucket_name/object_name

    optional arguments:
-       -h, --help  show this help message and exit
+       -h, --help            show this help message and exit


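Finally, an export as documented above reduces to a single extract job; a sketch under the same client assumption:

.. code-block:: python

    # Exporting a table to Cloud Storage: illustrative sketch only.
    from google.cloud import bigquery


    def export_data_to_gcs(dataset_id, table_id, destination):
        client = bigquery.Client()
        table_ref = bigquery.TableReference(
            bigquery.DatasetReference(client.project, dataset_id), table_id)

        # destination is e.g. 'gs://example-bucket/example-data.csv'
        job = client.extract_table(table_ref, destination)
        job.result()  # wait for the extract job to complete

        print('Exported {}.{} to {}.'.format(dataset_id, table_id, destination))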