    Ghost
    @ghost~5efca58fd73408ce4fe86e1f
    okay that makes sense, I got that working
    the other issue I have is that I'm still getting a 403 from the UI; it errors with the identity pool, but I can log in fine with no errors and do the upload process
    Ian Downard
    @iandow
    Where are you getting a 403?
    Ghost
    @ghost~5efca58fd73408ce4fe86e1f
    Within the UI. I have my Cognito user added to the proper group
    Ian Downard
    @iandow
    So, you can log in and upload videos, but when you click on Analysis you get a 403?
    Ghost
    @ghost~5efca58fd73408ce4fe86e1f
    correct, originally I get a "The ambiguous role mapping rules for: {my redacted user pool id} denies this request" error
    then when I hit the search endpoint I get a 403. The data and index are there in the cluster; I checked in Kibana
    Ian Downard
    @iandow
    Do you see that same error if you log in with the original user account that was set up when you deployed MIE?
    Ghost
    @ghost~5efca58fd73408ce4fe86e1f
    I can force the reset through the CLI
    They are in the FORCE_CHANGE_PASSWORD state, and it seems like the client Vue app doesn't have a state for that
    is there something unique about the originally created user?
    actually I got it working, I just needed to relogin, I don't think the user pool had caught up to the changes I made
    thanks so much for the help, everything is working as expected :)
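    For reference, the password reset mentioned above can also be done programmatically rather than through the CLI. A minimal sketch, assuming admin access to the user pool; the pool ID, username, and password below are placeholders:
```python
# Hypothetical sketch: clear the FORCE_CHANGE_PASSWORD state by setting a
# permanent password with the Cognito admin API (equivalent to the
# `aws cognito-idp admin-set-user-password` CLI command).
import boto3

cognito = boto3.client("cognito-idp")

cognito.admin_set_user_password(
    UserPoolId="us-east-1_EXAMPLE",    # placeholder user pool ID
    Username="someuser@example.com",   # user stuck in FORCE_CHANGE_PASSWORD
    Password="a-New-Strong-Passw0rd!", # placeholder password
    Permanent=True,                    # permanent, so no further reset is forced
)
```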
    Ian Downard
    @iandow
    great!
    Marc Rudkowski
    @marcrd
    Ian Downard
    @ianwow
    image.png
    1. CDC stream from DynamoDB
    2. Notify a Lambda function that new data was created. This Lambda loads metadata from JSON files in S3 (reminder: lots of MIE operators save metadata as JSON files in S3, then put the path to said file in DynamoDB), then reformats those JSON records into a format that Elasticsearch likes (e.g. it'll flatten nested JSON arrays because Elasticsearch doesn't search well over nested arrays). See the sketch after this list.
    3. This is a hypothetical line. You can use the same Kinesis stream endpoints to publish to a new Lambda that feeds data into another data store. We use Elasticsearch for the MIE webapp, but as this diagram suggests, you can use this same architectural pattern to populate another data store, e.g. a graph DB, to facilitate some kind of new app for graphing relationships in metadata.
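    A rough sketch of what the step-2 stream consumer might look like. This is not the actual MIE code; the record field names, bucket/key layout, index name, and Elasticsearch endpoint are all assumptions, and a real AWS Elasticsearch domain would also need signed requests:
```python
# Hypothetical sketch of step 2: a Lambda triggered by the DynamoDB CDC stream
# that loads operator metadata JSON from S3, flattens nested structures, and
# indexes the result into Elasticsearch. All identifiers are placeholders.
import json
import boto3
import requests  # assumes a VPC-accessible or otherwise authorized ES endpoint

s3 = boto3.client("s3")
ES_ENDPOINT = "https://my-es-domain.example.com"  # placeholder

def flatten(value, prefix=""):
    """Flatten nested dicts/lists into dotted keys so Elasticsearch can search them."""
    flat = {}
    if isinstance(value, dict):
        for key, child in value.items():
            flat.update(flatten(child, f"{prefix}{key}."))
    elif isinstance(value, list):
        for i, child in enumerate(value):
            flat.update(flatten(child, f"{prefix}{i}."))
    else:
        flat[prefix.rstrip(".")] = value
    return flat

def handler(event, context):
    for stream_record in event["Records"]:
        # The DynamoDB item points to the metadata JSON in S3 rather than
        # holding the metadata itself (field names here are illustrative).
        new_image = stream_record["dynamodb"]["NewImage"]
        bucket = new_image["S3Bucket"]["S"]
        key = new_image["S3Key"]["S"]

        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        metadata = json.loads(body)

        doc = flatten(metadata)
        requests.put(
            f"{ES_ENDPOINT}/mie-metadata/_doc/{key.replace('/', '-')}",
            json=doc,
            timeout=10,
        )
```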
    Marc Rudkowski
    @marcrd
    gotcha, thank you so much, that is exactly what I was looking for. That makes sense about the third line; I was actually looking at a store other than Elasticsearch for the metadata, so that's good to know
    Marc Rudkowski
    @marcrd
    With timestamp, what does that timestamp represent within the metadata? E.g. 1012, does that mean the 1012th frame?
    Ian Downard
    @ianwow
    Where do you see this reference to timestamp? (it depends on the operator, I suppose)
    Marc Rudkowski
    @marcrd
    Face detection and Rekognition
    was just curious if there are best practices for labeling when a bounding box appears, e.g. giving it the frame number or using a timestamp at the millisecond level
    Ian Downard
    @ianwow
    I think you need to derive the frame number given the millisecond timestamp from Rekognition.
    Timestamp is the time, in milliseconds from the start of the video, that the face was recognized.
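    Assuming a known, constant frame rate, the conversion is a one-liner. A minimal sketch; the frame rate and timestamp below are placeholders:
```python
# Hypothetical sketch: convert a Rekognition millisecond timestamp into a frame
# index, assuming the source video has a known constant frame rate.
fps = 29.97          # placeholder: frame rate of the source video
timestamp_ms = 1012  # placeholder: "Timestamp" field from a Rekognition result

frame_number = round(timestamp_ms / 1000.0 * fps)  # -> 30 for these values
print(frame_number)
```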
    Rajesh.M
    @rajesh1993
    Hey Ian! I changed the mediaconvert operator to extract frames into a folder in the dataplane location by updating the create_job call parameters. For some reason, I am unable to see any metadata tags populated in the UI for any of the existing videos that were uploaded. Is there a way to debug this?
    The mediaconvert operator ran and frames were extracted but the overall job status shows an error in the UI. I was operating under the assumption that the mediaconvert operator is never called in the workflow created through the UI.
    Ian Downard
    @ianwow
    To debug a workflow error, click the error status in the UI. That will take you to the state machine in AWS Step Functions. Then in the graph for the state machine you should see an orange or red state. Click that state. (Depending on your mousing skills, you might have to zoom in to click on it). Then expand the Output text in the Step details, and look for error info that explains why it failed.
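    As an alternative to clicking through the console, the same failure details can be pulled with the Step Functions API. A minimal sketch; the execution ARN is a placeholder:
```python
# Hypothetical sketch: read the execution history of a failed state machine
# execution and print the error/cause of any failed events.
import boto3

sfn = boto3.client("stepfunctions")

history = sfn.get_execution_history(
    executionArn="arn:aws:states:us-east-1:123456789012:execution:MyWorkflow:abc123",
    reverseOrder=True,  # newest events first, so the failure shows up quickly
)

for event in history["events"]:
    details = event.get("taskFailedEventDetails") or event.get("executionFailedEventDetails")
    if details:
        print(event["type"], details.get("error"), details.get("cause"))
```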
    image.png
    @rajesh1993 Does that help?
    Rajesh.M
    @rajesh1993
    I will take a look at that. Thanks.
    Wilson Wang
    @opiuman
    The MIE CloudFormation deployment failed on me with the following error message:
    The following resource(s) failed to create: [MediaconvertOperation, GenericDataLookupOperation, technicalCueDetectionOperation, contentModerationOperation, WebCaptionsOperation, TranscribeOperation, comprehendPhrasesOperation, textDetectionOperation, CreateSRTCaptionsOperation, CreateVTTCaptionsOperation, ThumbnailOperation, contentModerationOperationImage, comprehendEntitiesOperation].
    I built and deployed it from source. Does anyone know what's happening behind those error messages?
    Btw, I was able to deploy it successfully yesterday, deleted the stack (successfully), and got the error message when I tried to redeploy
    Wilson Wang
    @opiuman
    Verified that the zip and CloudFormation files for the operator library are available in the S3 bucket that the MIE stack refers to. The error message is really just a statement and doesn't say what caused those resources to fail to create.
    Ian Downard
    @ianwow
    @opiuman Maybe your prior stack was not completely removed (i.e. the delete failed)? Check to see if there are any Lambda functions or Step Functions left over from your prior stack. If so, manually remove those then try to deploy again.
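    One way to check for leftovers is to list Lambda functions and Step Functions state machines by name prefix. A minimal sketch; the "mie" prefix is an assumption about the old stack name:
```python
# Hypothetical sketch: look for resources left over from a partially deleted
# stack by listing Lambda functions and state machines with a name prefix.
import boto3

STACK_PREFIX = "mie"  # placeholder: your old stack name prefix

lam = boto3.client("lambda")
for page in lam.get_paginator("list_functions").paginate():
    for fn in page["Functions"]:
        if fn["FunctionName"].startswith(STACK_PREFIX):
            print("leftover lambda:", fn["FunctionName"])

sfn = boto3.client("stepfunctions")
for page in sfn.get_paginator("list_state_machines").paginate():
    for sm in page["stateMachines"]:
        if sm["name"].startswith(STACK_PREFIX):
            print("leftover state machine:", sm["name"])
```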
    Ian Downard
    @ianwow
    The AWS Content Analysis solution was released today. This is the first MIE-based solution to be released as an official AWS solution. The Content Analysis solution is designed for video search use cases and for people who want to test-drive AWS AI services using their own video content.
    Landing page:
    https://aws.amazon.com/solutions/implementations/aws-content-analysis/
    Documentation:
    https://docs.aws.amazon.com/solutions/latest/aws-content-analysis/
    Source Code:
    https://github.com/awslabs/aws-content-analysis
    Rajesh.M
    @rajesh1993
    Nice!
    Ian Downard
    @ianwow
    A blog article was published today describing the AWS Content Analysis application: https://aws.amazon.com/blogs/media/introducing-aws-content-analysis-solution/
    Ian Downard
    @ianwow
    Yesterday we published a blog article summarizing why we built MIE and showcasing some of the applications that use it: https://aws.amazon.com/blogs/media/how-to-rapidly-prototype-multimedia-applications-on-aws-with-the-media-insights-engine/
    Oliver Nordemann
    @olivernordemann
    Hello, I am working on a student project at HPI in Potsdam, Germany. Our task is to develop a pipeline for automated transcription and translation of videos. We are building an application which integrates different services for transcription and translation. Currently we are trying to use the AWS Media Insights Engine. What is your suggestion for calling the AWS MIE API from our application? How does the authentication work? We are using Python with Django. We appreciate any help in finding a good solution. Thanks a lot!
    Ian Downard
    @ianwow
    @olivernordemann MIE's API is a REST API, so you interact with it via HTTP requests. Since you're using Python, you'll probably want to use the urllib package to do that. Those calls will include a security token. The IMPLEMENTATION GUIDE shows how to get the token and make HTTP requests: https://github.com/awslabs/aws-media-insights-engine/blob/master/IMPLEMENTATION_GUIDE.md#obtain-a-cognito-token
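    A minimal sketch of that flow, assuming USER_PASSWORD_AUTH is enabled on the app client; the client ID, credentials, and endpoint URL are all placeholders (the linked guide has the authoritative steps):
```python
# Hypothetical sketch: obtain a Cognito ID token, then call the MIE REST API
# with it. All identifiers below are placeholders.
import json
import urllib.request
import boto3

idp = boto3.client("cognito-idp")

auth = idp.initiate_auth(
    ClientId="your-app-client-id",      # placeholder Cognito app client ID
    AuthFlow="USER_PASSWORD_AUTH",      # assumes this flow is enabled on the client
    AuthParameters={
        "USERNAME": "user@example.com", # placeholder credentials
        "PASSWORD": "your-password",
    },
)
token = auth["AuthenticationResult"]["IdToken"]

request = urllib.request.Request(
    "https://your-mie-api-id.execute-api.us-east-1.amazonaws.com/api/workflow",  # placeholder endpoint
    headers={"Authorization": token},
)
with urllib.request.urlopen(request) as response:
    print(json.loads(response.read()))
```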
    Marc Rudkowski
    @marcrd
    is there a way to configure an already existing set of frames/video, and not use the UI as an entry point? eg if we have frames partitioned in a specific way
    Ian Downard
    @ianwow
    @marcrd I don't understand. Are you asking whether it's possible to input a video with missing frames?
    Marc Rudkowski
    @marcrd

    No, let's say I have a set of videos I've already broken down into frames, with a specific partition structure, e.g.:

    s3://my-path/parent_id={}/child_id={}

    where parent_id holds a set of frames partitioned by a specific set of IDs. This is for performance reasons when doing SageMaker inference (if I uploaded a 25-minute video and tried to do inference on all the frames, that would be very time consuming without splitting the jobs in parallel).

    The frames will match the video; it's just partitioned in a specific way for performance and metadata labeling reasons
    the parent ID would correspond to a specific video
    Ian Downard
    @ianwow
    I think you'll need to implement your own frame partitioning operator. I've heard of several instances where people have built an operator to downsample a video into some small set of frames, but no one has contributed a PR to bring that into the main MIE repo yet.
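    The core frame-extraction logic such an operator might wrap could look something like the OpenCV sketch below. This is not an MIE operator; the operator boilerplate (status reporting, dataplane calls) is omitted, and the paths, partition layout, and sampling interval are assumptions:
```python
# Hypothetical sketch: extract roughly one frame per second from a video with
# OpenCV and write the frames into a partitioned layout like the one discussed
# above. All paths and parameters are placeholders.
import os
import cv2

def extract_frames(video_path, output_dir, parent_id, frames_per_second=1):
    capture = cv2.VideoCapture(video_path)
    fps = capture.get(cv2.CAP_PROP_FPS) or 30.0
    step = max(int(fps // frames_per_second), 1)  # keep every Nth frame

    out_dir = f"{output_dir}/parent_id={parent_id}"
    os.makedirs(out_dir, exist_ok=True)

    frame_index = 0
    saved = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if frame_index % step == 0:
            cv2.imwrite(f"{out_dir}/child_id={saved}.jpg", frame)
            saved += 1
        frame_index += 1

    capture.release()
    return saved
```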
    Marc Rudkowski
    @marcrd
    Okay, that makes sense. Another question: is there an operator that takes a video and converts it into frames?
    I see the ability to specify images as an input, but nowhere that the data plane outputs frames
    Ian Downard
    @ianwow
    Not really. The mediaconvert operator extracts a frame for a thumbnail, but I think the MediaConvert service only intends for that to be used for a small number of frames.
    Marc Rudkowski
    @marcrd
    So can the input object be a set of frames within the operator command? eg "Input":{"Media":{"Images"...