Amazon announces new media analysis features for Amazon Rekognition Video

Amazon Rekognition Video is a machine learning (ML)-based service that analyzes videos to detect objects, people, faces, text, scenes, and activities, and to flag inappropriate content.

Starting today, you can automate four common media analysis tasks: detecting black frames, end credits, shot changes, and color bars, using fully managed, ML-powered APIs from Amazon Rekognition Video.

These features enable you to execute workflows such as content preparation, ad insertion, and the addition of ‘binge markers’ to content, at scale in the cloud. Videos often contain a short run of empty black frames with no audio to demarcate an ad insertion slot or the end of a scene. Using Amazon Rekognition Video, you can detect such sequences to automate ad insertion, or to package content for Video-On-Demand (VOD) by removing unwanted segments. Next, to implement interactive viewer prompts such as ‘Next Episode’ in VOD applications, you can identify the exact frames where the closing credits start and end. Further, Amazon Rekognition Video enables you to detect shot changes, where a scene cuts from one camera to another. Using this information, you can create promotional videos from selected shots, generate high-quality preview thumbnails by choosing key frames within shots, and insert ads without disrupting the viewer experience, for example by avoiding the middle of a shot when someone is speaking. Lastly, you can detect sections of video that display SMPTE (Society of Motion Picture and Television Engineers) color bars, either to remove them from VOD content or to detect issues such as loss of broadcast signal in a recording, where color bars may be shown continuously as the default signal.
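As an illustrative sketch, the segment metadata returned by the service can be filtered into ad-insertion slots and binge markers. The `response` dict below is hand-written sample data modeled on the GetSegmentDetection response shape (segments of type `TECHNICAL_CUE` with a cue type such as `BlackFrames`, `EndCredits`, or `ColorBars`); treat the exact field names as assumptions to verify against the current API documentation.

```python
# Sketch: deriving ad-insertion slots and binge markers from segment metadata.
# `response` mimics a GetSegmentDetection result; it is sample data, not the
# output of a real analysis job.

response = {
    "Segments": [
        {
            "Type": "TECHNICAL_CUE",
            "StartTimestampMillis": 0,
            "EndTimestampMillis": 4000,
            "TechnicalCueSegment": {"Type": "ColorBars", "Confidence": 99.7},
        },
        {
            "Type": "TECHNICAL_CUE",
            "StartTimestampMillis": 600000,
            "EndTimestampMillis": 602000,
            "TechnicalCueSegment": {"Type": "BlackFrames", "Confidence": 98.2},
        },
        {
            "Type": "TECHNICAL_CUE",
            "StartTimestampMillis": 2520000,
            "EndTimestampMillis": 2640000,
            "TechnicalCueSegment": {"Type": "EndCredits", "Confidence": 97.1},
        },
    ]
}

def technical_cues(segments, cue_type):
    """Return (start, end) millisecond pairs for a given technical cue type."""
    return [
        (s["StartTimestampMillis"], s["EndTimestampMillis"])
        for s in segments
        if s["Type"] == "TECHNICAL_CUE"
        and s["TechnicalCueSegment"]["Type"] == cue_type
    ]

ad_slots = technical_cues(response["Segments"], "BlackFrames")
binge_markers = technical_cues(response["Segments"], "EndCredits")
print(ad_slots)       # candidate ad-insertion points: [(600000, 602000)]
print(binge_markers)  # where a 'Next Episode' prompt could appear
```

In a real pipeline, the same filter would run over the segments returned for a job started against a video in Amazon S3, and the resulting timestamps would drive the downstream ad-insertion or VOD-packaging step.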

With these APIs, you can easily analyze large volumes of videos stored in Amazon S3 and get SMPTE timecodes and timestamps for each detection, without requiring any machine learning experience. Returned SMPTE timecodes are frame accurate: Amazon Rekognition Video provides the exact frame number when it detects a relevant segment of video, and handles various video frame rate formats, such as drop frame and fractional frame rates, under the hood. Using this frame-accurate metadata, you can either automate operational tasks completely or significantly reduce the review workload of trained human operators, enabling you to execute media analysis workflows at scale in the cloud. You pay only for the minutes of video you analyze; there are no minimum fees, licenses, or upfront commitments.
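To illustrate why drop-frame handling matters, the sketch below converts a frame number at 29.97 fps into a drop-frame SMPTE timecode using the standard drop-frame counting rules (two frame numbers are skipped at the start of each minute, except every tenth minute). This is our own illustration of the arithmetic the service handles for you, not code from Amazon Rekognition.

```python
def frames_to_dropframe_tc(frame, fps=30):
    """Convert a frame count at 29.97 fps (nominal 30) to a drop-frame
    SMPTE timecode string HH:MM:SS;FF (';' marks drop-frame)."""
    drop = 2  # frame numbers dropped per minute
    frames_per_10min = 10 * 60 * fps - 9 * drop  # 17982 real frames per 10 min
    frames_per_min = 60 * fps - drop             # 1798 real frames per minute
    tens = frame // frames_per_10min
    rem = frame % frames_per_10min
    # Add back the dropped frame numbers so label arithmetic works at 30 fps.
    frame += drop * tens * 9
    if rem > drop:
        frame += drop * ((rem - drop) // frames_per_min)
    ff = frame % fps
    ss = (frame // fps) % 60
    mm = (frame // (fps * 60)) % 60
    hh = frame // (fps * 3600)
    return f"{hh:02d}:{mm:02d}:{ss:02d};{ff:02d}"

print(frames_to_dropframe_tc(0))      # 00:00:00;00
print(frames_to_dropframe_tc(1800))   # 00:01:00;02 (labels ;00 and ;01 skipped)
print(frames_to_dropframe_tc(17982))  # 00:10:00;00 (10th minute: no skip)
```

The skipped labels are what keep drop-frame timecode aligned with wall-clock time at 29.97 fps; non-drop formats need no such correction, which is why a single API that normalizes both is convenient.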

Media analysis features for Amazon Rekognition Video are now available in all AWS Regions supported by Amazon Rekognition. To get started, please visit the product webpage, read our blog, refer to our documentation, and download the latest AWS SDK. To try these features with your videos, you can use the Media Insights Engine. 
