
Commit f655dd7

danoscarmike, gguuss, waprin, merla18, and dpebot committed
feat!: use microgenerator (#76)
* Adds tutorials using Cloud Client [(#930)](#930)
* Adds tutorials.
* Removes unused enumerate
* Adds one more tutorial as well as fixes some copy/paste typos. [(#933)](#933)
* Adds new examples, replaces markdown with restructured text [(#945)](#945)
* Adds new examples, replaces markdown with restructured text
* Address review feedback
* Use videos from public bucket, update to new client library.
* Style nit
* Updates requirements [(#952)](#952)
* Fix README rst links [(#962)](#962)
* Fix README rst links
* Update all READMEs
* change the usage file sample [(#958)](#958) since the file does not exist. Propose to use the same one as the tutorial: demomaker/gbikes_dinosaur.mp4
* Updates examples for video [(#968)](#968)
* Auto-update dependencies. [(#1093)](#1093)
* Auto-update dependencies.
* Fix storage notification poll sample Change-Id: I6afbc79d15e050531555e4c8e51066996717a0f3
* Fix spanner samples Change-Id: I40069222c60d57e8f3d3878167591af9130895cb
* Drop coverage because it's not useful Change-Id: Iae399a7083d7866c3c7b9162d0de244fbff8b522
* Try again to fix flaky logging test Change-Id: I6225c074701970c17c426677ef1935bb6d7e36b4
* Update all generated readme auth instructions [(#1121)](#1121) Change-Id: I03b5eaef8b17ac3dc3c0339fd2c7447bd3e11bd2
* Auto-update dependencies. [(#1123)](#1123)
* Video v1beta2 [(#1088)](#1088)
* update analyze_safe_search
* update analyze_shots
* update explicit_content_detection and test
* update face detection
* update label detection (path)
* update label detection (file)
* flake
* safe search --> explicit content
* update faces tutorial
* update client library quickstart
* update shotchange tutorial
* update labels tutorial
* correct spelling
* correction start_time_offset
* import order
* rebased
* Added Link to Python Setup Guide [(#1158)](#1158)
* Update Readme.rst to add Python setup guide. As requested in b/64770713. This sample is linked in documentation https://cloud.google.com/bigtable/docs/scaling, and it would make more sense to update the guide here than in the documentation.
* Update README.rst
* Update README.rst
* Update README.rst
* Update README.rst
* Update README.rst
* Update install_deps.tmpl.rst
* Updated readmegen scripts and re-generated related README files
* Fixed the lint error
* Tweak doc/help strings for sample tools [(#1160)](#1160)
* Corrected copy-paste on doc string
* Updated doc/help string to be more specific to labels tool
* Made shotchange doc/help string more specific
* Tweaked doc/help string to indicate no arg expected
* Adjusted import order to satisfy flake8
* Wrapped doc string to 79 chars to flake8 correctly
* Adjusted import order to pass flake8 test
* Auto-update dependencies. [(#1186)](#1186)
* update samples to v1 [(#1221)](#1221)
* update samples to v1
* replace while loop with operation.result(timeout)
* addressing review comments
* flake
* flake
* Added "Open in Cloud Shell" buttons to README files [(#1254)](#1254)
* Auto-update dependencies. [(#1377)](#1377)
* Auto-update dependencies.
* Update requirements.txt
* Auto-update dependencies.
* Regenerate the README files and fix the Open in Cloud Shell link for some samples [(#1441)](#1441)
* Update READMEs to fix numbering and add git clone [(#1464)](#1464)
* Video Intelligence region tag update [(#1639)](#1639)
* Auto-update dependencies. [(#1658)](#1658)
* Auto-update dependencies.
* Rollback appengine/standard/bigquery/.
* Rollback appengine/standard/iap/.
* Rollback bigtable/metricscaler.
* Rolled back appengine/flexible/datastore.
* Rollback dataproc/
* Rollback jobs/api_client
* Rollback vision/cloud-client.
* Rollback functions/ocr/app.
* Rollback iot/api-client/end_to_end_example.
* Rollback storage/cloud-client.
* Rollback kms/api-client.
* Rollback dlp/
* Rollback bigquery/cloud-client.
* Rollback iot/api-client/manager.
* Rollback appengine/flexible/cloudsql_postgresql.
* Use explicit URIs for Video Intelligence sample tests [(#1743)](#1743)
* Auto-update dependencies. [(#1846)](#1846) ACK, merging.
* Longer timeouts to address intermittent failures [(#1871)](#1871)
* Auto-update dependencies. [(#1980)](#1980)
* Auto-update dependencies.
* Update requirements.txt
* Update requirements.txt
* replace demomaker with cloud-samples-data/video for video intelligenc… [(#2162)](#2162)
* replace demomaker with cloud-samples-data/video for video intelligence samples
* flake
* Adds updates for samples profiler ... vision [(#2439)](#2439)
* Auto-update dependencies. [(#2005)](#2005)
* Auto-update dependencies.
* Revert update of appengine/flexible/datastore.
* revert update of appengine/flexible/scipy
* revert update of bigquery/bqml
* revert update of bigquery/cloud-client
* revert update of bigquery/datalab-migration
* revert update of bigtable/quickstart
* revert update of compute/api
* revert update of container_registry/container_analysis
* revert update of dataflow/run_template
* revert update of datastore/cloud-ndb
* revert update of dialogflow/cloud-client
* revert update of dlp
* revert update of functions/imagemagick
* revert update of functions/ocr/app
* revert update of healthcare/api-client/fhir
* revert update of iam/api-client
* revert update of iot/api-client/gcs_file_to_device
* revert update of iot/api-client/mqtt_example
* revert update of language/automl
* revert update of run/image-processing
* revert update of vision/automl
* revert update testing/requirements.txt
* revert update of vision/cloud-client/detect
* revert update of vision/cloud-client/product_search
* revert update of jobs/v2/api_client
* revert update of jobs/v3/api_client
* revert update of opencensus
* revert update of translate/cloud-client
* revert update to speech/cloud-client

Co-authored-by: Kurtis Van Gent <[email protected]>
Co-authored-by: Doug Mahugh <[email protected]>

* chore(deps): update dependency google-cloud-videointelligence to v1.14.0 [(#3169)](#3169)
* Simplify noxfile setup. [(#2806)](#2806)
* chore(deps): update dependency requests to v2.23.0
* Simplify noxfile and add version control.
* Configure appengine/standard to only test Python 2.7.
* Update Kokoro configs to match noxfile.
* Add requirements-test to each folder.
* Remove Py2 versions from everything except appengine/standard.
* Remove conftest.py.
* Remove appengine/standard/conftest.py
* Remove 'no-sucess-flaky-report' from pytest.ini.
* Add GAE SDK back to appengine/standard tests.
* Fix typo.
* Roll pytest to python 2 version.
* Add a bunch of testing requirements.
* Remove typo.
* Add appengine lib directory back in.
* Add some additional requirements.
* Fix issue with flake8 args.
* Even more requirements.
* Readd appengine conftest.py.
* Add a few more requirements.
* Even more Appengine requirements.
* Add webtest for appengine/standard/mailgun.
* Add some additional requirements.
* Add workaround for issue with mailjet-rest.
* Add responses for appengine/standard/mailjet.

Co-authored-by: Renovate Bot <[email protected]>

* fix: changes positional to named parameters in Video samples [(#4017)](#4017) Changes calls to `VideoClient.annotate_video()` so that GCS URIs are provided as named parameters.
Example:

```
operation = video_client.annotate_video(path, features=features)
```

Becomes:

```
operation = video_client.annotate_video(input_uri=path, features=features)
```

* Update dependency google-cloud-videointelligence to v1.15.0 [(#4041)](#4041)

This PR contains the following updates:

| Package | Update | Change |
|---|---|---|
| [google-cloud-videointelligence](https://togithub.com/googleapis/python-videointelligence) | minor | `==1.14.0` -> `==1.15.0` |

---

### Release Notes

<details>
<summary>googleapis/python-videointelligence</summary>

### [`v1.15.0`](https://togithub.com/googleapis/python-videointelligence/blob/master/CHANGELOG.md#1150-httpswwwgithubcomgoogleapispython-videointelligencecomparev1140v1150-2020-06-09)

[Compare Source](https://togithub.com/googleapis/python-videointelligence/compare/v1.14.0...v1.15.0)

##### Features

- add support for streaming automl action recognition in v1p3beta1; make 'features' a positional param for annotate_video in betas ([#31](https://www.github.com/googleapis/python-videointelligence/issues/31)) ([586f920](https://www.github.com/googleapis/python-videointelligence/commit/586f920a1932e1a813adfed500502fba0ff5edb7)), closes [#517](https://www.github.com/googleapis/python-videointelligence/issues/517) [#538](https://www.github.com/googleapis/python-videointelligence/issues/538) [#565](https://www.github.com/googleapis/python-videointelligence/issues/565) [#576](https://www.github.com/googleapis/python-videointelligence/issues/576) [#506](https://www.github.com/googleapis/python-videointelligence/issues/506) [#586](https://www.github.com/googleapis/python-videointelligence/issues/586) [#585](https://www.github.com/googleapis/python-videointelligence/issues/585)

</details>

---

### Renovate configuration

:date: **Schedule**: At any time (no schedule defined).
:vertical_traffic_light: **Automerge**: Disabled by config. Please merge this manually once you are satisfied.
:recycle: **Rebasing**: Never, or you tick the rebase/retry checkbox.
:no_bell: **Ignore**: Close this PR and you won't be reminded about this update again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check this box

---

This PR has been generated by [WhiteSource Renovate](https://renovate.whitesourcesoftware.com). View repository job log [here](https://app.renovatebot.com/dashboard#GoogleCloudPlatform/python-docs-samples).

* chore(deps): update dependency pytest to v5.4.3 [(#4279)](#4279)
* chore(deps): update dependency pytest to v5.4.3
* specify pytest for python 2 in appengine

Co-authored-by: Leah Cole <[email protected]>

* Update dependency pytest to v6 [(#4390)](#4390)
* chore: pin sphinx
* chore: adds samples templates
* chore: temporarily pins sphinx
* chore: blacken noxfile
* chore: lints
* chore(deps): update dependency google-cloud-videointelligence to v1.16.0 [(#4798)](#4798)
* chore: fixes flaky tests
* chore(deps): update dependency pytest to v6.1.1 [(#4761)](#4761)
* chore(deps): update dependency pytest to v6.1.2 [(#4921)](#4921)

Co-authored-by: Charles Engelke <[email protected]>

* chore: updates samples templates
* chore: cleans up merge conflicts
* chore: blacken
* feat!: use microgenerator
* docs: update samples for microgenerator client
* docs: updates shotchange samples to microgen
* chore: deletes temp files
* chore: lint and blacken
* Update UPGRADING.md
* Update setup.py

Co-authored-by: Bu Sun Kim <[email protected]>
Co-authored-by: Gus Class <[email protected]>
Co-authored-by: Bill Prin <[email protected]>
Co-authored-by: florencep <[email protected]>
Co-authored-by: DPE bot <[email protected]>
Co-authored-by: Jon Wayne Parrott <[email protected]>
Co-authored-by: Yu-Han Liu <[email protected]>
Co-authored-by: michaelawyu <[email protected]>
Co-authored-by: Perry Stoll <[email protected]>
Co-authored-by: Frank Natividad <[email protected]>
Co-authored-by: Alix Hamilton <[email protected]>
Co-authored-by: Charles Engelke <[email protected]>
Co-authored-by: Kurtis Van Gent <[email protected]>
Co-authored-by: Doug Mahugh <[email protected]>
Co-authored-by: WhiteSource Renovate <[email protected]>
Co-authored-by: Eric Schmidt <[email protected]>
Co-authored-by: Leah Cole <[email protected]>
Co-authored-by: gcf-merge-on-green[bot] <60162190+gcf-merge-on-green[bot]@users.noreply.github.com>
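The breaking change in this commit replaces `annotate_video`'s separate keyword arguments with a single `request` mapping. A minimal sketch of the two calling conventions, using a hypothetical stub class in place of `videointelligence.VideoIntelligenceServiceClient` (and plain strings in place of `Feature` enum values) so it runs without the library:

```python
# Hypothetical stub standing in for VideoIntelligenceServiceClient;
# the real client's annotate_video returns a long-running operation.
class StubClient:
    def annotate_video(self, request=None, **kwargs):
        # Microgenerator style: everything arrives in one `request` mapping.
        # Fall back to the old keyword arguments for comparison.
        return dict(request) if request is not None else kwargs


client = StubClient()

# Old (pre-microgenerator) style: separate keyword arguments.
old_style = client.annotate_video(
    input_uri="gs://bucket/video.mp4", features=["LABEL_DETECTION"]
)

# New (microgenerator) style: a single request dict, as in the diffs below.
new_style = client.annotate_video(
    request={"features": ["LABEL_DETECTION"], "input_uri": "gs://bucket/video.mp4"}
)

assert old_style == new_style  # same payload, different calling convention
```

Either way the server receives the same fields; only the client-side call shape changes.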
1 parent 488fc6c commit f655dd7

19 files changed, +339 −287 lines

videointelligence/samples/analyze/analyze.py

Lines changed: 72 additions & 57 deletions
Large diffs are not rendered by default.

videointelligence/samples/analyze/analyze_test.py

Lines changed: 2 additions & 2 deletions
```diff
@@ -74,15 +74,15 @@ def test_speech_transcription(capsys):
 def test_detect_text_gcs(capsys):
     analyze.video_detect_text_gcs("gs://cloud-samples-data/video/googlework_tiny.mp4")
     out, _ = capsys.readouterr()
-    assert 'Text' in out
+    assert "Text" in out
 
 
 # Flaky timeout
 @pytest.mark.flaky(max_runs=3, min_passes=1)
 def test_detect_text(capsys):
     analyze.video_detect_text("resources/googlework_tiny.mp4")
     out, _ = capsys.readouterr()
-    assert 'Text' in out
+    assert "Text" in out
 
 
 # Flaky timeout
```

videointelligence/samples/analyze/beta_snippets.py

Lines changed: 83 additions & 78 deletions
Large diffs are not rendered by default.

videointelligence/samples/analyze/beta_snippets_test.py

Lines changed: 4 additions & 4 deletions
```diff
@@ -15,13 +15,13 @@
 # limitations under the License.
 
 import os
+from urllib.request import urlopen
 import uuid
 
 import backoff
 from google.api_core.exceptions import Conflict
 from google.cloud import storage
 import pytest
-from six.moves.urllib.request import urlopen
 
 import beta_snippets
 
@@ -55,7 +55,7 @@ def video_path(tmpdir_factory):
 @pytest.fixture(scope="function")
 def bucket():
     # Create a temporaty bucket to store annotation output.
-    bucket_name = f'tmp-{uuid.uuid4().hex}'
+    bucket_name = f"tmp-{uuid.uuid4().hex}"
     storage_client = storage.Client()
     bucket = storage_client.create_bucket(bucket_name)
 
@@ -128,7 +128,7 @@ def test_detect_text(capsys):
     in_file = "./resources/googlework_tiny.mp4"
     beta_snippets.video_detect_text(in_file)
     out, _ = capsys.readouterr()
-    assert 'Text' in out
+    assert "Text" in out
 
 
 # Flaky timeout
@@ -137,7 +137,7 @@ def test_detect_text_gcs(capsys):
     in_file = "gs://python-docs-samples-tests/video/googlework_tiny.mp4"
     beta_snippets.video_detect_text_gcs(in_file)
     out, _ = capsys.readouterr()
-    assert 'Text' in out
+    assert "Text" in out
 
 
 # Flaky InvalidArgument
```

videointelligence/samples/analyze/noxfile.py

Lines changed: 17 additions & 17 deletions
```diff
@@ -37,28 +37,25 @@
 
 TEST_CONFIG = {
     # You can opt out from the test for specific Python versions.
-    'ignored_versions': ["2.7"],
-
+    "ignored_versions": ["2.7"],
     # Old samples are opted out of enforcing Python type hints
     # All new samples should feature them
-    'enforce_type_hints': False,
-
+    "enforce_type_hints": False,
     # An envvar key for determining the project id to use. Change it
     # to 'BUILD_SPECIFIC_GCLOUD_PROJECT' if you want to opt in using a
     # build specific Cloud project. You can also use your own string
     # to use your own Cloud project.
-    'gcloud_project_env': 'GOOGLE_CLOUD_PROJECT',
+    "gcloud_project_env": "GOOGLE_CLOUD_PROJECT",
     # 'gcloud_project_env': 'BUILD_SPECIFIC_GCLOUD_PROJECT',
-
     # A dictionary you want to inject into your test. Don't put any
     # secrets here. These values will override predefined values.
-    'envs': {},
+    "envs": {},
 }
 
 
 try:
     # Ensure we can import noxfile_config in the project's directory.
-    sys.path.append('.')
+    sys.path.append(".")
     from noxfile_config import TEST_CONFIG_OVERRIDE
 except ImportError as e:
     print("No user noxfile_config found: detail: {}".format(e))
@@ -73,12 +70,12 @@ def get_pytest_env_vars():
     ret = {}
 
     # Override the GCLOUD_PROJECT and the alias.
-    env_key = TEST_CONFIG['gcloud_project_env']
+    env_key = TEST_CONFIG["gcloud_project_env"]
     # This should error out if not set.
-    ret['GOOGLE_CLOUD_PROJECT'] = os.environ[env_key]
+    ret["GOOGLE_CLOUD_PROJECT"] = os.environ[env_key]
 
     # Apply user supplied envs.
-    ret.update(TEST_CONFIG['envs'])
+    ret.update(TEST_CONFIG["envs"])
     return ret
 
 
@@ -87,7 +84,7 @@ def get_pytest_env_vars():
 ALL_VERSIONS = ["2.7", "3.6", "3.7", "3.8"]
 
 # Any default versions that should be ignored.
-IGNORED_VERSIONS = TEST_CONFIG['ignored_versions']
+IGNORED_VERSIONS = TEST_CONFIG["ignored_versions"]
 
 TESTED_VERSIONS = sorted([v for v in ALL_VERSIONS if v not in IGNORED_VERSIONS])
 
@@ -136,7 +133,7 @@ def _determine_local_import_names(start_dir):
 
 @nox.session
 def lint(session):
-    if not TEST_CONFIG['enforce_type_hints']:
+    if not TEST_CONFIG["enforce_type_hints"]:
         session.install("flake8", "flake8-import-order")
     else:
         session.install("flake8", "flake8-import-order", "flake8-annotations")
@@ -145,9 +142,11 @@ def lint(session):
     args = FLAKE8_COMMON_ARGS + [
         "--application-import-names",
         ",".join(local_names),
-        "."
+        ".",
     ]
     session.run("flake8", *args)
+
+
 #
 # Black
 #
@@ -160,6 +159,7 @@ def blacken(session):
 
     session.run("black", *python_files)
 
+
 #
 # Sample Tests
 #
@@ -199,9 +199,9 @@ def py(session):
     if session.python in TESTED_VERSIONS:
         _session_tests(session)
     else:
-        session.skip("SKIPPED: {} tests are disabled for this sample.".format(
-            session.python
-        ))
+        session.skip(
+            "SKIPPED: {} tests are disabled for this sample.".format(session.python)
+        )
 
 
 #
```
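The `get_pytest_env_vars` hunk above can be exercised on its own. This trimmed copy of the noxfile logic (with a stand-in project id, not a real Cloud project) shows the lookup-and-merge behavior:

```python
import os

# Trimmed-down copy of the noxfile config and helper shown in the diff above.
TEST_CONFIG = {
    "ignored_versions": ["2.7"],
    "enforce_type_hints": False,
    "gcloud_project_env": "GOOGLE_CLOUD_PROJECT",
    "envs": {},
}


def get_pytest_env_vars():
    ret = {}
    # Resolve the project id from the configured environment variable;
    # os.environ[...] raises KeyError if it is not set, as the diff's
    # "This should error out if not set" comment notes.
    env_key = TEST_CONFIG["gcloud_project_env"]
    ret["GOOGLE_CLOUD_PROJECT"] = os.environ[env_key]
    # User-supplied envs override the defaults ("envs" is empty here).
    ret.update(TEST_CONFIG["envs"])
    return ret


os.environ["GOOGLE_CLOUD_PROJECT"] = "demo-project"  # stand-in value
print(get_pytest_env_vars())  # → {'GOOGLE_CLOUD_PROJECT': 'demo-project'}
```

Setting `"gcloud_project_env"` to `"BUILD_SPECIFIC_GCLOUD_PROJECT"` in a per-sample `noxfile_config.py` redirects the lookup, which is exactly what the try/import override at the top of the diff enables.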

videointelligence/samples/analyze/video_detect_faces_beta.py

Lines changed: 9 additions & 7 deletions
```diff
@@ -27,16 +27,18 @@ def detect_faces(local_file_path="path/to/your/video-file.mp4"):
         input_content = f.read()
 
     # Configure the request
-    config = videointelligence.types.FaceDetectionConfig(
+    config = videointelligence.FaceDetectionConfig(
         include_bounding_boxes=True, include_attributes=True
     )
-    context = videointelligence.types.VideoContext(face_detection_config=config)
+    context = videointelligence.VideoContext(face_detection_config=config)
 
     # Start the asynchronous request
     operation = client.annotate_video(
-        input_content=input_content,
-        features=[videointelligence.enums.Feature.FACE_DETECTION],
-        video_context=context,
+        request={
+            "features": [videointelligence.Feature.FACE_DETECTION],
+            "input_content": input_content,
+            "video_context": context,
+        }
     )
 
     print("\nProcessing video for face detection annotations.")
@@ -53,9 +55,9 @@ def detect_faces(local_file_path="path/to/your/video-file.mp4"):
         print(
             "Segment: {}s to {}s".format(
                 track.segment.start_time_offset.seconds
-                + track.segment.start_time_offset.nanos / 1e9,
+                + track.segment.start_time_offset.microseconds / 1e6,
                 track.segment.end_time_offset.seconds
-                + track.segment.end_time_offset.nanos / 1e9,
+                + track.segment.end_time_offset.microseconds / 1e6,
             )
         )
```

videointelligence/samples/analyze/video_detect_faces_gcs_beta.py

Lines changed: 9 additions & 7 deletions
```diff
@@ -22,16 +22,18 @@ def detect_faces(gcs_uri="gs://YOUR_BUCKET_ID/path/to/your/video.mp4"):
     client = videointelligence.VideoIntelligenceServiceClient()
 
     # Configure the request
-    config = videointelligence.types.FaceDetectionConfig(
+    config = videointelligence.FaceDetectionConfig(
         include_bounding_boxes=True, include_attributes=True
     )
-    context = videointelligence.types.VideoContext(face_detection_config=config)
+    context = videointelligence.VideoContext(face_detection_config=config)
 
     # Start the asynchronous request
     operation = client.annotate_video(
-        input_uri=gcs_uri,
-        features=[videointelligence.enums.Feature.FACE_DETECTION],
-        video_context=context,
+        request={
+            "features": [videointelligence.Feature.FACE_DETECTION],
+            "input_uri": gcs_uri,
+            "video_context": context,
+        }
     )
 
     print("\nProcessing video for face detection annotations.")
@@ -48,9 +50,9 @@ def detect_faces(gcs_uri="gs://YOUR_BUCKET_ID/path/to/your/video.mp4"):
         print(
             "Segment: {}s to {}s".format(
                 track.segment.start_time_offset.seconds
-                + track.segment.start_time_offset.nanos / 1e9,
+                + track.segment.start_time_offset.microseconds / 1e6,
                 track.segment.end_time_offset.seconds
-                + track.segment.end_time_offset.nanos / 1e9,
+                + track.segment.end_time_offset.microseconds / 1e6,
             )
         )
```

videointelligence/samples/analyze/video_detect_logo.py

Lines changed: 10 additions & 6 deletions
```diff
@@ -26,9 +26,11 @@ def detect_logo(local_file_path="path/to/your/video.mp4"):
 
     with io.open(local_file_path, "rb") as f:
         input_content = f.read()
-    features = [videointelligence.enums.Feature.LOGO_RECOGNITION]
+    features = [videointelligence.Feature.LOGO_RECOGNITION]
 
-    operation = client.annotate_video(input_content=input_content, features=features)
+    operation = client.annotate_video(
+        request={"features": features, "input_content": input_content}
+    )
 
     print(u"Waiting for operation to complete...")
     response = operation.result()
@@ -53,13 +55,13 @@ def detect_logo(local_file_path="path/to/your/video.mp4"):
         print(
             u"\n\tStart Time Offset : {}.{}".format(
                 track.segment.start_time_offset.seconds,
-                track.segment.start_time_offset.nanos,
+                track.segment.start_time_offset.microseconds * 1000,
             )
         )
         print(
             u"\tEnd Time Offset : {}.{}".format(
                 track.segment.end_time_offset.seconds,
-                track.segment.end_time_offset.nanos,
+                track.segment.end_time_offset.microseconds * 1000,
             )
         )
         print(u"\tConfidence : {}".format(track.confidence))
@@ -91,12 +93,14 @@ def detect_logo(local_file_path="path/to/your/video.mp4"):
     for segment in logo_recognition_annotation.segments:
         print(
             u"\n\tStart Time Offset : {}.{}".format(
-                segment.start_time_offset.seconds, segment.start_time_offset.nanos,
+                segment.start_time_offset.seconds,
+                segment.start_time_offset.microseconds * 1000,
            )
        )
        print(
            u"\tEnd Time Offset : {}.{}".format(
-                segment.end_time_offset.seconds, segment.end_time_offset.nanos,
+                segment.end_time_offset.seconds,
+                segment.end_time_offset.microseconds * 1000,
            )
        )
```
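The logo samples print offsets in a `seconds.nanos` layout, so this diff recovers the nanosecond count from the timedelta-like offset as `microseconds * 1000` instead of reading `.nanos` directly. A sketch with a plain `datetime.timedelta` standing in for the real segment offset:

```python
from datetime import timedelta

# Stand-in for track.segment.start_time_offset.
offset = timedelta(seconds=2, microseconds=750000)

# microseconds * 1000 recovers the nanosecond count that the old
# protobuf Duration exposed directly as `.nanos`.
nanos = offset.microseconds * 1000
print(u"\tStart Time Offset : {}.{}".format(offset.seconds, nanos))
# prints "Start Time Offset : 2.750000000" (tab-indented)
```

Note that timedelta resolution is microseconds, so the last three digits of the reconstructed nanos field are always zero.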

videointelligence/samples/analyze/video_detect_logo_gcs.py

Lines changed: 10 additions & 6 deletions
```diff
@@ -21,9 +21,11 @@ def detect_logo_gcs(input_uri="gs://YOUR_BUCKET_ID/path/to/your/file.mp4"):
 
     client = videointelligence.VideoIntelligenceServiceClient()
 
-    features = [videointelligence.enums.Feature.LOGO_RECOGNITION]
+    features = [videointelligence.Feature.LOGO_RECOGNITION]
 
-    operation = client.annotate_video(input_uri=input_uri, features=features)
+    operation = client.annotate_video(
+        request={"features": features, "input_uri": input_uri}
+    )
 
     print(u"Waiting for operation to complete...")
     response = operation.result()
@@ -49,13 +51,13 @@ def detect_logo_gcs(input_uri="gs://YOUR_BUCKET_ID/path/to/your/file.mp4"):
         print(
             u"\n\tStart Time Offset : {}.{}".format(
                 track.segment.start_time_offset.seconds,
-                track.segment.start_time_offset.nanos,
+                track.segment.start_time_offset.microseconds * 1000,
             )
         )
         print(
             u"\tEnd Time Offset : {}.{}".format(
                 track.segment.end_time_offset.seconds,
-                track.segment.end_time_offset.nanos,
+                track.segment.end_time_offset.microseconds * 1000,
             )
         )
         print(u"\tConfidence : {}".format(track.confidence))
@@ -86,12 +88,14 @@ def detect_logo_gcs(input_uri="gs://YOUR_BUCKET_ID/path/to/your/file.mp4"):
     for segment in logo_recognition_annotation.segments:
         print(
             u"\n\tStart Time Offset : {}.{}".format(
-                segment.start_time_offset.seconds, segment.start_time_offset.nanos,
+                segment.start_time_offset.seconds,
+                segment.start_time_offset.microseconds * 1000,
            )
        )
        print(
            u"\tEnd Time Offset : {}.{}".format(
-                segment.end_time_offset.seconds, segment.end_time_offset.nanos,
+                segment.end_time_offset.seconds,
+                segment.end_time_offset.microseconds * 1000,
            )
        )
```

videointelligence/samples/analyze/video_detect_person_beta.py

Lines changed: 7 additions & 5 deletions
```diff
@@ -36,9 +36,11 @@ def detect_person(local_file_path="path/to/your/video-file.mp4"):
 
     # Start the asynchronous request
     operation = client.annotate_video(
-        input_content=input_content,
-        features=[videointelligence.enums.Feature.PERSON_DETECTION],
-        video_context=context,
+        request={
+            "features": [videointelligence.Feature.PERSON_DETECTION],
+            "input_content": input_content,
+            "video_context": context,
+        }
     )
 
     print("\nProcessing video for person detection annotations.")
@@ -55,9 +57,9 @@ def detect_person(local_file_path="path/to/your/video-file.mp4"):
         print(
             "Segment: {}s to {}s".format(
                 track.segment.start_time_offset.seconds
-                + track.segment.start_time_offset.nanos / 1e9,
+                + track.segment.start_time_offset.microseconds / 1e6,
                 track.segment.end_time_offset.seconds
-                + track.segment.end_time_offset.nanos / 1e9,
+                + track.segment.end_time_offset.microseconds / 1e6,
             )
         )
```

videointelligence/samples/analyze/video_detect_person_gcs_beta.py

Lines changed: 7 additions & 5 deletions
```diff
@@ -31,9 +31,11 @@ def detect_person(gcs_uri="gs://YOUR_BUCKET_ID/path/to/your/video.mp4"):
 
     # Start the asynchronous request
     operation = client.annotate_video(
-        input_uri=gcs_uri,
-        features=[videointelligence.enums.Feature.PERSON_DETECTION],
-        video_context=context,
+        request={
+            "features": [videointelligence.Feature.PERSON_DETECTION],
+            "input_uri": gcs_uri,
+            "video_context": context,
+        }
     )
 
     print("\nProcessing video for person detection annotations.")
@@ -50,9 +52,9 @@ def detect_person(gcs_uri="gs://YOUR_BUCKET_ID/path/to/your/video.mp4"):
         print(
             "Segment: {}s to {}s".format(
                 track.segment.start_time_offset.seconds
-                + track.segment.start_time_offset.nanos / 1e9,
+                + track.segment.start_time_offset.microseconds / 1e6,
                 track.segment.end_time_offset.seconds
-                + track.segment.end_time_offset.nanos / 1e9,
+                + track.segment.end_time_offset.microseconds / 1e6,
             )
         )
```
