    Anton
    @antonostrovsky
    I have a question about FaceSearch. I have successfully added some faces to a Rekognition collection, and confirmed that they were present:
    Faces in collection "people"
    {'FaceId': '1f5f7d4b-d82c-4499-9959-e078b8a0300b', 'BoundingBox': {'Width': 0.5046759843826294, 'Height': 0.59968101978302, 'Left': 0.20208899676799774, 'Top': 0.16646400094032288}, 'ImageId': '9c8e829e-25fe-37c7-a037-b33489de2ead', 'ExternalImageId': 'Matthew', 'Confidence': 100.0}
    {'FaceId': '22152d88-020e-44fc-b0a1-ee01ecb67fb3', 'BoundingBox': {'Width': 0.6467159986495972, 'Height': 0.6781880259513855, 'Left': 0.10873699933290482, 'Top': 0.2798289954662323}, 'ImageId': '3aecbfd0-31fd-3d29-a1a1-05bbeebd93fe', 'ExternalImageId': 'Kelz', 'Confidence': 99.99970245361328}
    A media file containing the faces above has been uploaded, and yet I can't see any matches at all in ML Vision -> Faces output. Is that where I should expect to see matches? And what would be the best way to troubleshoot the process?
    Ian Downard
    @iandow
    Double check that you created your face collection in the same region as where you deployed MIE.
    e.g. aws rekognition list-faces --collection-id family_faces --region us-east-2, where us-east-2 is the region where you deployed MIE.
    Anton
    @antonostrovsky
    Yes, all listed
    aws rekognition list-faces --collection-id people --region eu-west-2
    {
        "Faces": [
            {
                "FaceId": "1f5f7d4b-d82c-4499-9959-e078b8a0300b",
                "BoundingBox": {
                    "Width": 0.5046759843826294,
                    "Height": 0.59968101978302,
                    "Left": 0.20208899676799774,
                    "Top": 0.16646400094032288
                },
                "ImageId": "9c8e829e-25fe-37c7-a037-b33489de2ead",
                "ExternalImageId": "Matthew",
                "Confidence": 100.0
            },
    Ian Downard
    @iandow
    Oh, I see what's happening. We haven't surfaced "face search" (i.e. show me faces in my collection) results in the GUI yet.
    The GUI only shows "face detection" (i.e. show me all the faces) results.
    Ian Downard
    @iandow
    I'm going to remove face search from the workflow config for now, until I get a chance to surface those results in the GUI.
    Anton
    @antonostrovsky
    Thank you for looking into this. This is useful feedback, as I could try and get to these values from the local instance of the webapp. It looks like the results of faceSearch are in Elasticsearch, is that correct?
    Ian Downard
    @iandow
    Yes, that’s correct.
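Since the faceSearch results live in Elasticsearch, a query along these lines could pull them for one asset from a local webapp. The field names ("Operator", "AssetId") and the query shape are assumptions for illustration, not confirmed MIE schema, so treat this as a sketch:

```python
def build_face_search_query(asset_id):
    """Return an Elasticsearch query body selecting faceSearch results
    for one asset. Field names are assumptions about MIE's metadata
    documents, not confirmed schema."""
    return {
        "query": {
            "bool": {
                "must": [
                    {"match": {"Operator": "faceSearch"}},
                    {"match": {"AssetId": asset_id}},
                ]
            }
        }
    }
```

The body could then be POSTed to the domain's `_search` endpoint with a SigV4-signed request, since the Amazon Elasticsearch Service domain requires signed requests by default.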
    Rajesh.M
    @rajesh1993
    Hey! I have a use case which involves extracting frames by specifying timestamps to a video. Is there a way to do that using MIE?
    Ian Downard
    @iandow
    @rajesh1993 Yes, you can do that. First you would want to write an AWS Lambda function which extracts frames based on an input timestamp. Then you would use MIE to define a workflow which calls that function. Are you planning to use ffmpeg?
    Rajesh.M
    @rajesh1993
    Thanks Ian. I was looking for Python libraries to do the same. I was not aware of ffmpeg. Would you recommend it over OpenCV (a simple Google search for a Pythonic way gave me this)?
    Ian Downard
    @iandow
    OpenCV is fine. You might also be able to use the thumbnail feature of AWS MediaConvert to do the same. The disadvantage of using OpenCV is (I think) you'll need to download the entire media file to /tmp in the Lambda runtime environment in order to process it. However, MediaConvert can process it directly from S3, which would have no limit on file size.
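A minimal sketch of the OpenCV route, assuming opencv-python is packaged with the Lambda and the media file has already been downloaded to /tmp as noted above. The function names are hypothetical; the pure helper maps a timestamp to a frame index:

```python
def timestamp_to_frame(timestamp_sec, fps):
    """Map a timestamp in seconds to the nearest frame index."""
    return int(round(timestamp_sec * fps))

def extract_frame(video_path, timestamp_sec, out_path):
    """Grab the frame nearest to timestamp_sec and write it as an image.
    Requires opencv-python (imported here so the pure helper above is
    usable without OpenCV installed)."""
    import cv2
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    cap.set(cv2.CAP_PROP_POS_FRAMES, timestamp_to_frame(timestamp_sec, fps))
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise ValueError("could not read frame at %.3fs" % timestamp_sec)
    cv2.imwrite(out_path, frame)
```

For example, `extract_frame("/tmp/video.mp4", 12.5, "/tmp/frame.jpg")` would save the frame nearest 12.5 seconds in.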
    Ian Downard
    @iandow
    Media Insights Engine beta version 0.1.8 has been released and is available at https://github.com/awslabs/aws-media-insights-engine/releases/tag/v0.1.8. This release includes new support for Rekognition Video Segment Detection, including the ability to detect technical cues and shots.
    Ian Downard
    @iandow
    More info about MIE's support for technical cues and shots here: https://aws.amazon.com/blogs/media/streamline-media-analysis-tasks-with-amazon-rekognition-video/
    Rajesh.M
    @rajesh1993
    Following up on my frame extraction question: thanks a lot for the input, Ian. I have decided to go the MediaConvert route. I have added an output group to the MediaConvert job to extract frames, and I have followed the instructions on the developer quick start page. I sent a curl command to run the workflow, and I believe it is using the existing deployment to run the new job. Do I have to redeploy the solution with the new code through the build-s3-dist.sh script to add my changes and test?
    Ian Downard
    @iandow
    Did you make your changes to the MediaConvert job on the deployed Lambda function (called [stack_name]-StartMediaConvertFunction…)?
    If you make your changes on that lambda function from the AWS console, then you shouldn't need to redeploy.
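As a hedged sketch of what that output-group addition might look like, here is a helper that builds a frame-capture output group in the shape the MediaConvert job settings expect. The group name and the default values are arbitrary illustrative choices, not MIE's actual settings:

```python
def frame_capture_output_group(dest_s3_uri, fps_numerator=1,
                               fps_denominator=1, max_captures=10,
                               quality=80):
    """Return a FILE_GROUP output group that emits JPEG frame captures.
    The framerate ratio controls how often frames are sampled
    (numerator/denominator frames per second)."""
    return {
        "Name": "Frame Capture Group",  # arbitrary label
        "OutputGroupSettings": {
            "Type": "FILE_GROUP_SETTINGS",
            "FileGroupSettings": {"Destination": dest_s3_uri},
        },
        "Outputs": [{
            "ContainerSettings": {"Container": "RAW"},
            "VideoDescription": {
                "CodecSettings": {
                    "Codec": "FRAME_CAPTURE",
                    "FrameCaptureSettings": {
                        "FramerateNumerator": fps_numerator,
                        "FramerateDenominator": fps_denominator,
                        "MaxCaptures": max_captures,
                        "Quality": quality,
                    },
                },
            },
        }],
    }
```

The returned dict would be appended to the job's `Settings["OutputGroups"]` list before calling `create_job`.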
    Rajesh.M
    @rajesh1993
    Ah. Understood. I'll try that.
    Ian Downard
    @iandow
    image.png
    Rajesh.M
    @rajesh1993
    If I wanted to know the frame rate of the input video, is there an upstream operator that does it already? I saw the mediainfo.json file has the information I'm looking for.
    I believe the mediainfo operator does it already.
    And just as a heads-up, the token retrieval command on the Developer quick start page has a typo in the path. It should be $MIE_DEVELOPMENT_HOME/source/tests/getAccessToken.py as opposed to $MIE_DEVELOPMENT_HOME/tests/getAccessToken.py
    Ian Downard
    @iandow
    Thanks for finding that typo. I'll fix it. Yes, mediainfo has the frame rate already.
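Reading the frame rate out of mediainfo.json can be sketched as below, assuming the standard MediaInfo JSON layout (a `media.track` list whose video entry has `"@type": "Video"` and a `"FrameRate"` string); MIE's mediainfo operator output may differ in details:

```python
def frame_rate_from_mediainfo(mediainfo):
    """Pull the video frame rate (frames per second) out of a
    MediaInfo-style JSON document. Raises KeyError if no video
    track is present."""
    for track in mediainfo.get("media", {}).get("track", []):
        if track.get("@type") == "Video":
            return float(track["FrameRate"])
    raise KeyError("no video track found in mediainfo document")
```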
    Rajesh.M
    @rajesh1993
    I have opened a PR with some minor fixes to improve developer QOL. Please have a look at it, Ian! Thanks.
    Ghost
    @ghost~5efca58fd73408ce4fe86e1f
    Question: I have the MIE solution deployed and the Vue app running locally. Everything is working except the Elasticsearch instance. I can't access it via Kibana, and I get a 403 Forbidden when querying it from the client.
    Is there extra setup needed outside of CF for Elasticsearch?
    Ian Downard
    @iandow
    @inacubicle_twitter Kibana access is disabled by default, but you can set it up like this: https://github.com/awslabs/aws-media-insights-engine/blob/development/IMPLEMENTATION_GUIDE.md#validate-metadata-in-elasticsearch
    Ghost
    @ghost~5efca58fd73408ce4fe86e1f
    okay that makes sense, I got that working
    The other issue I have is that I'm still getting a 403 from the UI. It errors with the identity pool, but I can log in fine with no errors and do the upload process.
    Ian Downard
    @iandow
    Where are you getting a 403?
    Ghost
    @ghost~5efca58fd73408ce4fe86e1f
    Within the UI. I have my Cognito user added to the proper group
    Ian Downard
    @iandow
    So, you can login and upload videos, but when you click on Analysis you get a 403?
    Ghost
    @ghost~5efca58fd73408ce4fe86e1f
    Correct. I originally get "The ambiguous role mapping rules for: {my redacted user pool id} denies this request".
    Then when I hit the search endpoint I get a 403. The data and index are there in the cluster; I checked in Kibana.
    Ian Downard
    @iandow
    Do you see that same error if you log in with the original user account that was setup when you deployed MIE?
    Ghost
    @ghost~5efca58fd73408ce4fe86e1f
    I can force the reset through the CLI
    They are in the state force change password, and it seems like the client Vue app doesn't have a state for that
    is there something unique about the originally created user?
    actually I got it working, I just needed to relogin, I don't think the user pool had caught up to the changes I made
    Thanks so much for the help, everything is working as expected :)
    Ian Downard
    @iandow
    great!
    Marc Rudkowski
    @marcrd
    Ian Downard
    @ianwow
    image.png
    1. CDC stream from DynamoDB
    2. Notify a Lambda function that new data was created. This Lambda loads metadata from JSON files in S3 (reminder: lots of MIE operators save metadata as JSON files in S3, then put the path to said file in DynamoDB), then reformats those JSON records into a format that Elasticsearch likes (e.g. it'll flatten nested JSON arrays, because Elasticsearch doesn't search well over nested arrays).
    3. This is a hypothetical line. You can use the same Kinesis stream endpoints to publish to a new Lambda that feeds data into another data store. We use Elasticsearch for the MIE webapp, but as this diagram suggests, you can use this same architectural pattern to populate another data store, e.g. a graph DB to facilitate some kind of new app for graphing relationships in metadata.
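The flattening described in step 2 can be sketched as a small recursive helper. This is a generic illustration of turning nested arrays into dot-separated keys; MIE's actual consumer Lambda will differ in details:

```python
def flatten(record, parent_key="", sep="."):
    """Flatten nested dicts/lists into a single-level dict with
    dot-separated keys, so nested JSON arrays index cleanly in
    Elasticsearch (e.g. {"faces": [{"name": "x"}]} becomes
    {"faces.0.name": "x"})."""
    items = {}
    if isinstance(record, dict):
        for k, v in record.items():
            key = parent_key + sep + k if parent_key else k
            items.update(flatten(v, key, sep))
    elif isinstance(record, list):
        for i, v in enumerate(record):
            key = parent_key + sep + str(i) if parent_key else str(i)
            items.update(flatten(v, key, sep))
    else:
        items[parent_key] = record
    return items
```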
    Marc Rudkowski
    @marcrd
    Gotcha, thank you so much, that is exactly what I was looking for. That makes sense about the third line. I was actually looking at another store than Elasticsearch for metadata, so that's good to know.
    Marc Rudkowski
    @marcrd
    With timestamp, what does that timestamp represent within the metadata? E.g. 1012: does that mean the 1012th frame?