|
30 | 30 | {"shape":"ProvisionedThroughputExceededException"},
|
31 | 31 | {"shape":"InvalidImageFormatException"}
|
32 | 32 | ],
|
33 | | - "documentation":"<p>Compares a face in the <i>source</i> input image with each of the 100 largest faces detected in the <i>target</i> input image. </p> <note> <p> If the source image contains multiple faces, the service detects the largest face and compares it with each face detected in the target image. </p> </note> <p>You pass the input and target images either as base64-encoded image bytes or as references to images in an Amazon S3 bucket. If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes isn't supported. The image must be formatted as a PNG or JPEG file. </p> <p>In response, the operation returns an array of face matches ordered by similarity score in descending order. For each face match, the response provides a bounding box of the face, facial landmarks, pose details (pitch, role, and yaw), quality (brightness and sharpness), and confidence value (indicating the level of confidence that the bounding box contains a face). The response also provides a similarity score, which indicates how closely the faces match. </p> <note> <p>By default, only faces with a similarity score of greater than or equal to 80% are returned in the response. You can change this value by specifying the <code>SimilarityThreshold</code> parameter.</p> </note> <p> <code>CompareFaces</code> also returns an array of faces that don't match the source image. For each face, it returns a bounding box, confidence value, landmarks, pose details, and quality. The response also returns information about the face in the source image, including the bounding box of the face and confidence value.</p> <p>The <code>QualityFilter</code> input parameter allows you to filter out detected faces that don’t meet a required quality bar. The quality bar is based on a variety of common use cases. Use <code>QualityFilter</code> to set the quality bar by specifying <code>LOW</code>, <code>MEDIUM</code>, or <code>HIGH</code>. If you do not want to filter detected faces, specify <code>NONE</code>. The default value is <code>NONE</code>. </p> <note> <p>To use quality filtering, you need a collection associated with version 3 of the face model or higher. To get the version of the face model associated with a collection, call <a>DescribeCollection</a>. </p> </note> <p>If the image doesn't contain Exif metadata, <code>CompareFaces</code> returns orientation information for the source and target images. Use these values to display the images with the correct image orientation.</p> <p>If no faces are detected in the source or target images, <code>CompareFaces</code> returns an <code>InvalidParameterException</code> error. </p> <note> <p> This is a stateless API operation. That is, data returned by this operation doesn't persist.</p> </note> <p>For an example, see Comparing Faces in Images in the Amazon Rekognition Developer Guide.</p> <p>This operation requires permissions to perform the <code>rekognition:CompareFaces</code> action.</p>" |
| 33 | + "documentation":"<p>Compares a face in the <i>source</i> input image with each of the 100 largest faces detected in the <i>target</i> input image. </p> <note> <p> If the source image contains multiple faces, the service detects the largest face and compares it with each face detected in the target image. </p> </note> <p>You pass the input and target images either as base64-encoded image bytes or as references to images in an Amazon S3 bucket. If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes isn't supported. The image must be formatted as a PNG or JPEG file. </p> <p>In response, the operation returns an array of face matches ordered by similarity score in descending order. For each face match, the response provides a bounding box of the face, facial landmarks, pose details (pitch, role, and yaw), quality (brightness and sharpness), and confidence value (indicating the level of confidence that the bounding box contains a face). The response also provides a similarity score, which indicates how closely the faces match. </p> <note> <p>By default, only faces with a similarity score of greater than or equal to 80% are returned in the response. You can change this value by specifying the <code>SimilarityThreshold</code> parameter.</p> </note> <p> <code>CompareFaces</code> also returns an array of faces that don't match the source image. For each face, it returns a bounding box, confidence value, landmarks, pose details, and quality. The response also returns information about the face in the source image, including the bounding box of the face and confidence value.</p> <p>The <code>QualityFilter</code> input parameter allows you to filter out detected faces that don’t meet a required quality bar. The quality bar is based on a variety of common use cases. Use <code>QualityFilter</code> to set the quality bar by specifying <code>LOW</code>, <code>MEDIUM</code>, or <code>HIGH</code>. If you do not want to filter detected faces, specify <code>NONE</code>. The default value is <code>NONE</code>. </p> <p>If the image doesn't contain Exif metadata, <code>CompareFaces</code> returns orientation information for the source and target images. Use these values to display the images with the correct image orientation.</p> <p>If no faces are detected in the source or target images, <code>CompareFaces</code> returns an <code>InvalidParameterException</code> error. </p> <note> <p> This is a stateless API operation. That is, data returned by this operation doesn't persist.</p> </note> <p>For an example, see Comparing Faces in Images in the Amazon Rekognition Developer Guide.</p> <p>This operation requires permissions to perform the <code>rekognition:CompareFaces</code> action.</p>" |
34 | 34 | },
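The CompareFaces description above maps directly onto the SDK call. A minimal boto3 sketch, assuming the images live in an S3 bucket (the bucket and key names here are placeholders, not part of this diff):

```python
# Illustrative only: compare the largest face in a source image against faces in a target image.
import boto3

rekognition = boto3.client("rekognition")

response = rekognition.compare_faces(
    SourceImage={"S3Object": {"Bucket": "my-bucket", "Name": "source.jpg"}},
    TargetImage={"S3Object": {"Bucket": "my-bucket", "Name": "target.jpg"}},
    SimilarityThreshold=80.0,  # default threshold described in the documentation string
    QualityFilter="NONE",      # default: no quality-based filtering
)

# Face matches are returned in descending order of similarity score.
for match in response["FaceMatches"]:
    print(f'{match["Similarity"]:.1f}% similar, box: {match["Face"]["BoundingBox"]}')

# Faces in the target image that did not match the source face.
print("Unmatched faces:", len(response["UnmatchedFaces"]))
```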
|
35 | 35 | "CreateCollection":{
|
36 | 36 | "name":"CreateCollection",
|
|
644 | 644 | {"shape":"ProvisionedThroughputExceededException"},
|
645 | 645 | {"shape":"InvalidImageFormatException"}
|
646 | 646 | ],
|
647 | | - "documentation":"<p>Returns an array of celebrities recognized in the input image. For more information, see Recognizing Celebrities in the Amazon Rekognition Developer Guide. </p> <p> <code>RecognizeCelebrities</code> returns the 100 largest faces in the image. It lists recognized celebrities in the <code>CelebrityFaces</code> array and unrecognized faces in the <code>UnrecognizedFaces</code> array. <code>RecognizeCelebrities</code> doesn't return celebrities whose faces aren't among the largest 100 faces in the image.</p> <p>For each celebrity recognized, <code>RecognizeCelebrities</code> returns a <code>Celebrity</code> object. The <code>Celebrity</code> object contains the celebrity name, ID, URL links to additional information, match confidence, and a <code>ComparedFace</code> object that you can use to locate the celebrity's face on the image.</p> <p>Amazon Rekognition doesn't retain information about which images a celebrity has been recognized in. Your application must store this information and use the <code>Celebrity</code> ID property as a unique identifier for the celebrity. If you don't store the celebrity name or additional information URLs returned by <code>RecognizeCelebrities</code>, you will need the ID to identify the celebrity in a call to the <a>GetCelebrityInfo</a> operation.</p> <p>You pass the input image either as base64-encoded image bytes or as a reference to an image in an Amazon S3 bucket. If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes is not supported. The image must be either a PNG or JPEG formatted file. </p> <p>For an example, see Recognizing Celebrities in an Image in the Amazon Rekognition Developer Guide.</p> <p>This operation requires permissions to perform the <code>rekognition:RecognizeCelebrities</code> operation.</p>" |
| 647 | + "documentation":"<p>Returns an array of celebrities recognized in the input image. For more information, see Recognizing Celebrities in the Amazon Rekognition Developer Guide. </p> <p> <code>RecognizeCelebrities</code> returns the 64 largest faces in the image. It lists recognized celebrities in the <code>CelebrityFaces</code> array and unrecognized faces in the <code>UnrecognizedFaces</code> array. <code>RecognizeCelebrities</code> doesn't return celebrities whose faces aren't among the largest 64 faces in the image.</p> <p>For each celebrity recognized, <code>RecognizeCelebrities</code> returns a <code>Celebrity</code> object. The <code>Celebrity</code> object contains the celebrity name, ID, URL links to additional information, match confidence, and a <code>ComparedFace</code> object that you can use to locate the celebrity's face on the image.</p> <p>Amazon Rekognition doesn't retain information about which images a celebrity has been recognized in. Your application must store this information and use the <code>Celebrity</code> ID property as a unique identifier for the celebrity. If you don't store the celebrity name or additional information URLs returned by <code>RecognizeCelebrities</code>, you will need the ID to identify the celebrity in a call to the <a>GetCelebrityInfo</a> operation.</p> <p>You pass the input image either as base64-encoded image bytes or as a reference to an image in an Amazon S3 bucket. If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes is not supported. The image must be either a PNG or JPEG formatted file. </p> <p>For an example, see Recognizing Celebrities in an Image in the Amazon Rekognition Developer Guide.</p> <p>This operation requires permissions to perform the <code>rekognition:RecognizeCelebrities</code> operation.</p>" |
648 | 648 | },
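A short boto3 sketch of the RecognizeCelebrities call described above; the bucket and object key are placeholders:

```python
# Illustrative only: recognize celebrities in an S3-hosted image.
import boto3

rekognition = boto3.client("rekognition")

response = rekognition.recognize_celebrities(
    Image={"S3Object": {"Bucket": "my-bucket", "Name": "photo.jpg"}}
)

# Persist the celebrity Id yourself; the service does not retain which images
# a celebrity was recognized in.
for celebrity in response["CelebrityFaces"]:
    print(celebrity["Id"], celebrity["Name"], celebrity["MatchConfidence"])

print("Unrecognized faces:", len(response["UnrecognizedFaces"]))
```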
|
649 | 649 | "SearchFaces":{
|
650 | 650 | "name":"SearchFaces",
|
|
967 | 967 | "members":{
|
968 | 968 | "GroundTruthManifest":{"shape":"GroundTruthManifest"}
|
969 | 969 | },
|
970 | | - "documentation":"<p>Assets are the images that you use to train and evaluate a model version. Assets are referenced by Sagemaker GroundTruth manifest files. </p>" |
| 970 | + "documentation":"<p>Assets are the images that you use to train and evaluate a model version. Assets can also contain validation information that you use to debug a failed model training. </p>" |
971 | 971 | },
|
972 | 972 | "Assets":{
|
973 | 973 | "type":"list",
|
|
1001 | 1001 | },
|
1002 | 1002 | "NumberOfChannels":{
|
1003 | 1003 | "shape":"ULong",
|
1004 | | - "documentation":"<p>The number of audio channels in the segement.</p>" |
| 1004 | + "documentation":"<p>The number of audio channels in the segment.</p>" |
1005 | 1005 | }
|
1006 | 1006 | },
|
1007 | 1007 | "documentation":"<p>Metadata information about an audio stream. An array of <code>AudioMetadata</code> objects for the audio streams found in a stored video is returned by <a>GetSegmentDetection</a>. </p>"
|
|
2568 | 2568 | },
|
2569 | 2569 | "Segments":{
|
2570 | 2570 | "shape":"SegmentDetections",
|
2571 | | - "documentation":"<p>An array of segments detected in a video.</p>" |
| 2571 | + "documentation":"<p>An array of segments detected in a video. The array is sorted by the segment types (TECHNICAL_CUE or SHOT) specified in the <code>SegmentTypes</code> input parameter of <code>StartSegmentDetection</code>. Within each segment type the array is sorted by timestamp values.</p>" |
2572 | 2572 | },
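To make the segment ordering and the audio metadata fields above concrete, here is a hedged boto3 sketch that pages through results for a finished segment-detection job; the `job_id` value is a placeholder for a JobId returned by `StartSegmentDetection`:

```python
# Illustrative only: page through GetSegmentDetection results for a completed job.
import boto3

rekognition = boto3.client("rekognition")
job_id = "placeholder-job-id"  # JobId from a prior start_segment_detection call

next_token = None
while True:
    kwargs = {"JobId": job_id, "MaxResults": 100}
    if next_token:
        kwargs["NextToken"] = next_token
    response = rekognition.get_segment_detection(**kwargs)

    # Audio stream metadata, including the number of channels per stream.
    for audio in response.get("AudioMetadata", []):
        print("Audio channels:", audio.get("NumberOfChannels"))

    # Segments are grouped by the segment types requested in StartSegmentDetection
    # and sorted by timestamp within each type; timestamps are rounded down.
    for segment in response["Segments"]:
        print(segment["Type"],
              segment["StartTimestampMillis"],
              segment["EndTimestampMillis"])

    next_token = response.get("NextToken")
    if not next_token:
        break
```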
|
2573 | 2573 | "SelectedSegmentTypes":{
|
2574 | 2574 | "shape":"SegmentTypesInfo",
|
|
2625 | 2625 | "members":{
|
2626 | 2626 | "S3Object":{"shape":"S3Object"}
|
2627 | 2627 | },
|
2628 | | - "documentation":"<p>The S3 bucket that contains the Ground Truth manifest file.</p>" |
| 2628 | + "documentation":"<p>The S3 bucket that contains an Amazon Sagemaker Ground Truth format manifest file. </p>" |
2629 | 2629 | },
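The GroundTruthManifest shape is what a caller supplies (wrapped in an Asset) when starting Custom Labels training. A hedged sketch, with all ARNs, bucket names, and keys as placeholders:

```python
# Illustrative only: reference SageMaker Ground Truth format manifests when training a model version.
import boto3

rekognition = boto3.client("rekognition")

response = rekognition.create_project_version(
    ProjectArn="arn:aws:rekognition:us-east-1:111122223333:project/my-project/1234567890123",
    VersionName="v1",
    OutputConfig={"S3Bucket": "my-output-bucket", "S3KeyPrefix": "training-output/"},
    TrainingData={"Assets": [
        {"GroundTruthManifest": {"S3Object": {"Bucket": "my-bucket", "Name": "train/output.manifest"}}}
    ]},
    TestingData={"Assets": [
        {"GroundTruthManifest": {"S3Object": {"Bucket": "my-bucket", "Name": "test/output.manifest"}}}
    ]},
)
print(response["ProjectVersionArn"])
```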
|
2630 | 2630 | "HumanLoopActivationConditionsEvaluationResults":{
|
2631 | 2631 | "type":"string",
|
|
2980 | 2980 | },
|
2981 | 2981 | "X":{
|
2982 | 2982 | "shape":"Float",
|
2983 | | - "documentation":"<p>The x-coordinate from the top left of the landmark expressed as the ratio of the width of the image. For example, if the image is 700 x 200 and the x-coordinate of the landmark is at 350 pixels, this value is 0.5. </p>" |
| 2983 | + "documentation":"<p>The x-coordinate of the landmark expressed as a ratio of the width of the image. The x-coordinate is measured from the left-side of the image. For example, if the image is 700 pixels wide and the x-coordinate of the landmark is at 350 pixels, this value is 0.5. </p>" |
2984 | 2984 | },
|
2985 | 2985 | "Y":{
|
2986 | 2986 | "shape":"Float",
|
2987 | | - "documentation":"<p>The y-coordinate from the top left of the landmark expressed as the ratio of the height of the image. For example, if the image is 700 x 200 and the y-coordinate of the landmark is at 100 pixels, this value is 0.5.</p>" |
| 2987 | + "documentation":"<p>The y-coordinate of the landmark expressed as a ratio of the height of the image. The y-coordinate is measured from the top of the image. For example, if the image height is 200 pixels and the y-coordinate of the landmark is at 50 pixels, this value is 0.25.</p>" |
2988 | 2988 | }
|
2989 | 2989 | },
|
2990 | 2990 | "documentation":"<p>Indicates the location of the landmark on the face.</p>"
|
|
3445 | 3445 | },
|
3446 | 3446 | "TrainingDataResult":{
|
3447 | 3447 | "shape":"TrainingDataResult",
|
3448 | | - "documentation":"<p>The manifest file that represents the training results.</p>" |
| 3448 | + "documentation":"<p>Contains information about the training results.</p>" |
3449 | 3449 | },
|
3450 | 3450 | "TestingDataResult":{
|
3451 | 3451 | "shape":"TestingDataResult",
|
3452 | | - "documentation":"<p>The manifest file that represents the testing results.</p>" |
| 3452 | + "documentation":"<p>Contains information about the testing results.</p>" |
3453 | 3453 | },
|
3454 | 3454 | "EvaluationResult":{
|
3455 | 3455 | "shape":"EvaluationResult",
|
3456 | 3456 | "documentation":"<p>The training results. <code>EvaluationResult</code> is only returned if training is successful.</p>"
|
| 3457 | + }, |
| 3458 | + "ManifestSummary":{ |
| 3459 | + "shape":"GroundTruthManifest", |
| 3460 | + "documentation":"<p>The location of the summary manifest. The summary manifest provides aggregate data validation results for the training and test datasets.</p>" |
3457 | 3461 | }
|
3458 | 3462 | },
|
3459 | 3463 | "documentation":"<p>The description of a version of a model.</p>"
|
|
3534 | 3538 | "members":{
|
3535 | 3539 | "CelebrityFaces":{
|
3536 | 3540 | "shape":"CelebrityList",
|
3537 | | - "documentation":"<p>Details about each celebrity found in the image. Amazon Rekognition can detect a maximum of 15 celebrities in an image.</p>" |
| 3541 | + "documentation":"<p>Details about each celebrity found in the image. Amazon Rekognition can detect a maximum of 64 celebrities in an image.</p>" |
3538 | 3542 | },
|
3539 | 3543 | "UnrecognizedFaces":{
|
3540 | 3544 | "shape":"ComparedFaceList",
|
|
3746 | 3750 | },
|
3747 | 3751 | "StartTimestampMillis":{
|
3748 | 3752 | "shape":"Timestamp",
|
3749 | | - "documentation":"<p>The start time of the detected segment in milliseconds from the start of the video.</p>" |
| 3753 | + "documentation":"<p>The start time of the detected segment in milliseconds from the start of the video. This value is rounded down. For example, if the actual timestamp is 100.6667 milliseconds, Amazon Rekognition Video returns a value of 100 millis.</p>" |
3750 | 3754 | },
|
3751 | 3755 | "EndTimestampMillis":{
|
3752 | 3756 | "shape":"Timestamp",
|
3753 | | - "documentation":"<p>The end time of the detected segment, in milliseconds, from the start of the video.</p>" |
| 3757 | + "documentation":"<p>The end time of the detected segment, in milliseconds, from the start of the video. This value is rounded down.</p>" |
3754 | 3758 | },
|
3755 | 3759 | "DurationMillis":{
|
3756 | 3760 | "shape":"ULong",
|
|
3818 | 3822 | "members":{
|
3819 | 3823 | "Index":{
|
3820 | 3824 | "shape":"ULong",
|
3821 | | - "documentation":"<p>An Identifier for a shot detection segment detected in a video </p>" |
| 3825 | + "documentation":"<p>An Identifier for a shot detection segment detected in a video. </p>" |
3822 | 3826 | },
|
3823 | 3827 | "Confidence":{
|
3824 | 3828 | "shape":"SegmentConfidence",
|
|
4378 | 4382 | "Output":{
|
4379 | 4383 | "shape":"TestingData",
|
4380 | 4384 | "documentation":"<p>The subset of the dataset that was actually tested. Some images (assets) might not be tested due to file formatting and other issues. </p>"
|
| 4385 | + }, |
| 4386 | + "Validation":{ |
| 4387 | + "shape":"ValidationData", |
| 4388 | + "documentation":"<p>The location of the data validation manifest. The data validation manifest is created for the test dataset during model training.</p>" |
4381 | 4389 | }
|
4382 | 4390 | },
|
4383 | | - "documentation":"<p>A Sagemaker Groundtruth format manifest file representing the dataset used for testing.</p>" |
| 4391 | + "documentation":"<p>Sagemaker Groundtruth format manifest files for the input, output and validation datasets that are used and created during testing.</p>" |
4384 | 4392 | },
|
4385 | 4393 | "TextDetection":{
|
4386 | 4394 | "type":"structure",
|
|
4471 | 4479 | "Output":{
|
4472 | 4480 | "shape":"TrainingData",
|
4473 | 4481 | "documentation":"<p>The images (assets) that were actually trained by Amazon Rekognition Custom Labels. </p>"
|
| 4482 | + }, |
| 4483 | + "Validation":{ |
| 4484 | + "shape":"ValidationData", |
| 4485 | + "documentation":"<p>The location of the data validation manifest. The data validation manifest is created for the training dataset during model training.</p>" |
4474 | 4486 | }
|
4475 | 4487 | },
|
4476 | | - "documentation":"<p>A Sagemaker Groundtruth format manifest file that represents the dataset used for training.</p>" |
| 4488 | + "documentation":"<p>Sagemaker Groundtruth format manifest files for the input, output and validation datasets that are used and created during testing.</p>" |
4477 | 4489 | },
|
4478 | 4490 | "UInteger":{
|
4479 | 4491 | "type":"integer",
|
|
4506 | 4518 | "type":"list",
|
4507 | 4519 | "member":{"shape":"Url"}
|
4508 | 4520 | },
|
| 4521 | + "ValidationData":{ |
| 4522 | + "type":"structure", |
| 4523 | + "members":{ |
| 4524 | + "Assets":{ |
| 4525 | + "shape":"Assets", |
| 4526 | + "documentation":"<p>The assets that comprise the validation data. </p>" |
| 4527 | + } |
| 4528 | + }, |
| 4529 | + "documentation":"<p>Contains the Amazon S3 bucket location of the validation data for a model training job. </p> <p>The validation data includes error information for individual JSON lines in the dataset. For more information, see Debugging a Failed Model Training in the Amazon Rekognition Custom Labels Developer Guide. </p> <p>You get the <code>ValidationData</code> object for the training dataset (<a>TrainingDataResult</a>) and the test dataset (<a>TestingDataResult</a>) by calling <a>DescribeProjectVersions</a>. </p> <p>The assets array contains a single <a>Asset</a> object. The <a>GroundTruthManifest</a> field of the Asset object contains the S3 bucket location of the validation data. </p>" |
| 4530 | + }, |
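Putting the new ValidationData shape together with the Validation members added to TrainingDataResult and TestingDataResult above, a caller can locate the validation manifests when debugging a failed training. A hedged sketch; the project ARN is a placeholder:

```python
# Illustrative only: find the validation manifest locations for each dataset of a model version.
import boto3

rekognition = boto3.client("rekognition")

response = rekognition.describe_project_versions(
    ProjectArn="arn:aws:rekognition:us-east-1:111122223333:project/my-project/1234567890123"
)

for version in response["ProjectVersionDescriptions"]:
    for label, result in (("training", version.get("TrainingDataResult")),
                          ("testing", version.get("TestingDataResult"))):
        if not result or "Validation" not in result:
            continue
        # ValidationData carries a single Asset whose GroundTruthManifest points
        # at the S3 location of the validation results.
        for asset in result["Validation"].get("Assets", []):
            s3 = asset["GroundTruthManifest"]["S3Object"]
            print(f'{label} validation manifest: s3://{s3["Bucket"]}/{s3["Name"]}')
```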
4509 | 4531 | "VersionName":{
|
4510 | 4532 | "type":"string",
|
4511 | 4533 | "max":255,
|
|