
Commit 49f8bc1

Merge branch 'release-1.34.83'
* release-1.34.83:
  Bumping version to 1.34.83
  Update endpoints model
  Update to latest models
2 parents 53f954f + c801451 commit 49f8bc1

File tree

17 files changed: +17998 -12190 lines changed


.changes/1.34.83.json

Lines changed: 52 additions & 0 deletions
@@ -0,0 +1,52 @@
+[
+  {
+    "category": "``batch``",
+    "description": "This release adds the task properties field to attempt details and the name field on EKS container detail.",
+    "type": "api-change"
+  },
+  {
+    "category": "``cloudfront``",
+    "description": "CloudFront origin access control extends support to AWS Lambda function URLs and AWS Elemental MediaPackage v2 origins.",
+    "type": "api-change"
+  },
+  {
+    "category": "``cloudwatch``",
+    "description": "This release adds support for Metric Characteristics for CloudWatch Anomaly Detection. Anomaly Detector now takes Metric Characteristics object with Periodic Spikes boolean field that tells Anomaly Detection that spikes that repeat at the same time every week are part of the expected pattern.",
+    "type": "api-change"
+  },
+  {
+    "category": "``codebuild``",
+    "description": "Support access tokens for Bitbucket sources",
+    "type": "api-change"
+  },
+  {
+    "category": "``iam``",
+    "description": "For CreateOpenIDConnectProvider API, the ThumbprintList parameter is no longer required.",
+    "type": "api-change"
+  },
+  {
+    "category": "``medialive``",
+    "description": "AWS Elemental MediaLive introduces workflow monitor, a new feature that enables the visualization and monitoring of your media workflows. Create signal maps of your existing workflows and monitor them by creating notification and monitoring template groups.",
+    "type": "api-change"
+  },
+  {
+    "category": "``omics``",
+    "description": "This release adds support for retrieval of S3 direct access metadata on sequence stores and read sets, and adds support for SHA256up and SHA512up HealthOmics ETags.",
+    "type": "api-change"
+  },
+  {
+    "category": "``pipes``",
+    "description": "LogConfiguration ARN validation fixes",
+    "type": "api-change"
+  },
+  {
+    "category": "``rds``",
+    "description": "Updates Amazon RDS documentation for Standard Edition 2 support in RDS Custom for Oracle.",
+    "type": "api-change"
+  },
+  {
+    "category": "``s3control``",
+    "description": "Documentation updates for Amazon S3-control.",
+    "type": "api-change"
+  }
+]

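Of the entries above, the iam change is the most visible at the call site: ThumbprintList is no longer required on CreateOpenIDConnectProvider. A minimal sketch of that call through a botocore client follows; the issuer URL and client ID are hypothetical placeholders, not values from this commit.

import botocore.session

# Sketch only: with the 1.34.83 IAM model, ThumbprintList may be omitted
# because the parameter is no longer marked required. The URL and client ID
# below are hypothetical placeholders.
session = botocore.session.get_session()
iam = session.create_client("iam", region_name="us-east-1")

response = iam.create_open_id_connect_provider(
    Url="https://token.actions.example.com",  # hypothetical OIDC issuer
    ClientIDList=["sts.amazonaws.com"],
    # ThumbprintList=[...]  # previously required; optional as of this release
)
print(response["OpenIDConnectProviderArn"])
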
CHANGELOG.rst

Lines changed: 15 additions & 0 deletions
@@ -2,6 +2,21 @@
 CHANGELOG
 =========
 
+1.34.83
+=======
+
+* api-change:``batch``: This release adds the task properties field to attempt details and the name field on EKS container detail.
+* api-change:``cloudfront``: CloudFront origin access control extends support to AWS Lambda function URLs and AWS Elemental MediaPackage v2 origins.
+* api-change:``cloudwatch``: This release adds support for Metric Characteristics for CloudWatch Anomaly Detection. Anomaly Detector now takes Metric Characteristics object with Periodic Spikes boolean field that tells Anomaly Detection that spikes that repeat at the same time every week are part of the expected pattern.
+* api-change:``codebuild``: Support access tokens for Bitbucket sources
+* api-change:``iam``: For CreateOpenIDConnectProvider API, the ThumbprintList parameter is no longer required.
+* api-change:``medialive``: AWS Elemental MediaLive introduces workflow monitor, a new feature that enables the visualization and monitoring of your media workflows. Create signal maps of your existing workflows and monitor them by creating notification and monitoring template groups.
+* api-change:``omics``: This release adds support for retrieval of S3 direct access metadata on sequence stores and read sets, and adds support for SHA256up and SHA512up HealthOmics ETags.
+* api-change:``pipes``: LogConfiguration ARN validation fixes
+* api-change:``rds``: Updates Amazon RDS documentation for Standard Edition 2 support in RDS Custom for Oracle.
+* api-change:``s3control``: Documentation updates for Amazon S3-control.
+
+
 1.34.82
 =======

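The cloudwatch entry describes a new Metric Characteristics object with a Periodic Spikes flag for Anomaly Detection. The sketch below shows how that might be passed through put_anomaly_detector; treating MetricCharacteristics as a top-level parameter alongside SingleMetricAnomalyDetector is an assumption based on the entry's wording, and the metric names are placeholders.

import botocore.session

# Sketch only: assumes the updated CloudWatch model accepts a top-level
# MetricCharacteristics argument on PutAnomalyDetector whose PeriodicSpikes
# boolean marks weekly-repeating spikes as expected rather than anomalous.
# Metric details are hypothetical placeholders.
session = botocore.session.get_session()
cloudwatch = session.create_client("cloudwatch", region_name="us-east-1")

cloudwatch.put_anomaly_detector(
    SingleMetricAnomalyDetector={
        "Namespace": "MyApp",             # hypothetical namespace
        "MetricName": "OrdersPerMinute",  # hypothetical metric
        "Stat": "Sum",
    },
    MetricCharacteristics={"PeriodicSpikes": True},
)
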
botocore/__init__.py

Lines changed: 1 addition & 1 deletion
@@ -16,7 +16,7 @@
 import os
 import re
 
-__version__ = '1.34.82'
+__version__ = '1.34.83'
 
 
 class NullHandler(logging.Handler):

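The only functional change in this file is the version constant, so a quick runtime check of that constant is enough to confirm the installed botocore carries the model updates in this commit; a minimal sketch:

import botocore

# Minimal sketch: the updated service models ship as bundled data files, so
# the package version tells you whether they are present.
installed = tuple(int(part) for part in botocore.__version__.split("."))
print(botocore.__version__)
if installed >= (1, 34, 83):
    print("1.34.83 models (e.g. the new Batch attempt shapes) are available")
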
botocore/data/batch/2016-08-10/service-2.json

Lines changed: 66 additions & 6 deletions
@@ -459,6 +459,10 @@
         "statusReason":{
           "shape":"String",
           "documentation":"<p>A short, human-readable string to provide additional details for the current status of the job attempt.</p>"
+        },
+        "taskProperties":{
+          "shape":"ListAttemptEcsTaskDetails",
+          "documentation":"<p>The properties for a task definition that describes the container and volume definitions of an Amazon ECS task.</p>"
         }
       },
       "documentation":"<p>An object that represents a job attempt.</p>"
@@ -467,6 +471,50 @@
       "type":"list",
       "member":{"shape":"AttemptDetail"}
     },
+    "AttemptEcsTaskDetails":{
+      "type":"structure",
+      "members":{
+        "containerInstanceArn":{
+          "shape":"String",
+          "documentation":"<p>The Amazon Resource Name (ARN) of the container instance that hosts the task.</p>"
+        },
+        "taskArn":{
+          "shape":"String",
+          "documentation":"<p>The ARN of the Amazon ECS task.</p>"
+        },
+        "containers":{
+          "shape":"ListAttemptTaskContainerDetails",
+          "documentation":"<p>A list of containers that are included in the <code>taskProperties</code> list.</p>"
+        }
+      },
+      "documentation":"<p>An object that represents the details of a task.</p>"
+    },
+    "AttemptTaskContainerDetails":{
+      "type":"structure",
+      "members":{
+        "exitCode":{
+          "shape":"Integer",
+          "documentation":"<p>The exit code for the container’s attempt. A non-zero exit code is considered failed.</p>"
+        },
+        "name":{
+          "shape":"String",
+          "documentation":"<p>The name of a container.</p>"
+        },
+        "reason":{
+          "shape":"String",
+          "documentation":"<p>A short (255 max characters) string that's easy to understand and provides additional details for a running or stopped container.</p>"
+        },
+        "logStreamName":{
+          "shape":"String",
+          "documentation":"<p>The name of the Amazon CloudWatch Logs log stream that's associated with the container. The log group for Batch jobs is <code>/aws/batch/job</code>. Each container attempt receives a log stream name when they reach the <code>RUNNING</code> status.</p>"
+        },
+        "networkInterfaces":{
+          "shape":"NetworkInterfaceList",
+          "documentation":"<p>The network interfaces that are associated with the job attempt.</p>"
+        }
+      },
+      "documentation":"<p>An object that represents the details of a container that's part of a job attempt.</p>"
+    },
     "Boolean":{"type":"boolean"},
     "CEState":{
       "type":"string",
@@ -1673,6 +1721,10 @@
     "EksAttemptContainerDetail":{
       "type":"structure",
       "members":{
+        "name":{
+          "shape":"String",
+          "documentation":"<p>The name of a container.</p>"
+        },
         "exitCode":{
           "shape":"Integer",
           "documentation":"<p>The exit code returned for the job attempt. A non-zero exit code is considered failed.</p>"
@@ -2025,7 +2077,7 @@
         },
         "imagePullSecrets":{
           "shape":"ImagePullSecrets",
-          "documentation":"<p>References a Kubernetes secret resource. This object must start and end with an alphanumeric character, is required to be lowercase, can include periods (.) and hyphens (-), and can't contain more than 253 characters.</p> <p> <code>ImagePullSecret$name</code> is required when this object is used.</p>"
+          "documentation":"<p>References a Kubernetes secret resource. It holds a list of secrets. These secrets help to gain access to pull an images from a private registry.</p> <p> <code>ImagePullSecret$name</code> is required when this object is used.</p>"
         },
         "containers":{
           "shape":"EksContainers",
@@ -2067,7 +2119,7 @@
         },
         "imagePullSecrets":{
           "shape":"ImagePullSecrets",
-          "documentation":"<p>Displays the reference pointer to the Kubernetes secret resource.</p>"
+          "documentation":"<p>Displays the reference pointer to the Kubernetes secret resource. These secrets help to gain access to pull an images from a private registry.</p>"
         },
         "containers":{
           "shape":"EksContainerDetails",
@@ -2290,7 +2342,7 @@
           "documentation":"<p>Provides a unique identifier for the <code>ImagePullSecret</code>. This object is required when <code>EksPodProperties$imagePullSecrets</code> is used.</p>"
         }
       },
-      "documentation":"<p>References a Kubernetes configuration resource that holds a list of secrets. These secrets help to gain access to pull an image from a private registry.</p>"
+      "documentation":"<p>References a Kubernetes secret resource. This name of the secret must start and end with an alphanumeric character, is required to be lowercase, can include periods (.) and hyphens (-), and can't contain more than 253 characters.</p>"
     },
     "ImagePullSecrets":{
       "type":"list",
@@ -2640,15 +2692,15 @@
       },
       "state":{
         "shape":"JobStateTimeLimitActionsState",
-        "documentation":"<p>The state of the job needed to trigger the action. The only supported value is \"<code>RUNNABLE</code>\".</p>"
+        "documentation":"<p>The state of the job needed to trigger the action. The only supported value is <code>RUNNABLE</code>.</p>"
       },
       "maxTimeSeconds":{
         "shape":"Integer",
         "documentation":"<p>The approximate amount of time, in seconds, that must pass with the job in the specified state before the action is taken. The minimum value is 600 (10 minutes) and the maximum value is 86,400 (24 hours).</p>"
       },
       "action":{
         "shape":"JobStateTimeLimitActionsAction",
-        "documentation":"<p>The action to take when a job is at the head of the job queue in the specified state for the specified period of time. The only supported value is \"<code>CANCEL</code>\", which will cancel the job.</p>"
+        "documentation":"<p>The action to take when a job is at the head of the job queue in the specified state for the specified period of time. The only supported value is <code>CANCEL</code>, which will cancel the job.</p>"
       }
     },
     "documentation":"<p>Specifies an action that Batch will take after the job has remained at the head of the queue in the specified state for longer than the specified time.</p>"
@@ -2830,6 +2882,14 @@
       },
       "documentation":"<p>Linux-specific modifications that are applied to the container, such as details for device mappings.</p>"
     },
+    "ListAttemptEcsTaskDetails":{
+      "type":"list",
+      "member":{"shape":"AttemptEcsTaskDetails"}
+    },
+    "ListAttemptTaskContainerDetails":{
+      "type":"list",
+      "member":{"shape":"AttemptTaskContainerDetails"}
+    },
     "ListEcsTaskDetails":{
       "type":"list",
       "member":{"shape":"EcsTaskDetails"}
@@ -4061,5 +4121,5 @@
       "member":{"shape":"Volume"}
     }
   },
-  "documentation":"<fullname>Batch</fullname> <p>Using Batch, you can run batch computing workloads on the Amazon Web Services Cloud. Batch computing is a common means for developers, scientists, and engineers to access large amounts of compute resources. Batch uses the advantages of the batch computing to remove the undifferentiated heavy lifting of configuring and managing required infrastructure. At the same time, it also adopts a familiar batch computing software approach. You can use Batch to efficiently provision resources d, and work toward eliminating capacity constraints, reducing your overall compute costs, and delivering results more quickly.</p> <p>As a fully managed service, Batch can run batch computing workloads of any scale. Batch automatically provisions compute resources and optimizes workload distribution based on the quantity and scale of your specific workloads. With Batch, there's no need to install or manage batch computing software. This means that you can focus on analyzing results and solving your specific problems instead.</p>"
+  "documentation":"<fullname>Batch</fullname> <p>Using Batch, you can run batch computing workloads on the Amazon Web Services Cloud. Batch computing is a common means for developers, scientists, and engineers to access large amounts of compute resources. Batch uses the advantages of the batch computing to remove the undifferentiated heavy lifting of configuring and managing required infrastructure. At the same time, it also adopts a familiar batch computing software approach. You can use Batch to efficiently provision resources, and work toward eliminating capacity constraints, reducing your overall compute costs, and delivering results more quickly.</p> <p>As a fully managed service, Batch can run batch computing workloads of any scale. Batch automatically provisions compute resources and optimizes workload distribution based on the quantity and scale of your specific workloads. With Batch, there's no need to install or manage batch computing software. This means that you can focus on analyzing results and solving your specific problems instead.</p>"
 }

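The new AttemptEcsTaskDetails and AttemptTaskContainerDetails shapes surface per-attempt ECS task and container data on DescribeJobs responses. A minimal sketch of walking those fields with a botocore client, assuming an existing Batch job whose ID is a placeholder; the key names mirror the shapes added above.

import botocore.session

# Sketch only: walks the taskProperties structure added to AttemptDetail in
# this model update. "my-job-id" is a hypothetical placeholder.
session = botocore.session.get_session()
batch = session.create_client("batch", region_name="us-east-1")

job = batch.describe_jobs(jobs=["my-job-id"])["jobs"][0]
for attempt in job.get("attempts", []):
    for task in attempt.get("taskProperties", []):      # ListAttemptEcsTaskDetails
        print("task:", task.get("taskArn"))
        for container in task.get("containers", []):    # ListAttemptTaskContainerDetails
            print("  container:", container.get("name"),
                  "exit:", container.get("exitCode"),
                  "log stream:", container.get("logStreamName"))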