    Nicolas Menciere
    @koxon
    the Validate step is supposed to return you the mime type and the ffprobe output
    if you don't have that, the transcoding step should fail
    you may be able to add a step in your state machine that checks the output of the Validate task (sketched below)
    if you shut down a worker while it's processing a task in a state machine then yes
    the state will time out
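A minimal sketch of such a check, written as an Amazon States Language Choice state expressed as a Python dict. The state names (CheckValidation, TranscodeAsset, ValidationFailed) and the $.input_metadata paths are assumptions based on the Validate output quoted later in this thread, not CloudTranscode's actual definitions:

```python
import json

# Sketch of a Choice state that gates transcoding on the Validate step's
# output. State names and JSON paths are assumptions: adapt them to the
# actual SAValidateTranscodeAssets.json definition.
check_validation = {
    "CheckValidation": {
        "Type": "Choice",
        "Choices": [
            {
                # Proceed only when ffprobe/mime detection saw a video.
                "Variable": "$.input_metadata.type",
                "StringEquals": "video",
                "Next": "TranscodeAsset"
            }
        ],
        # Anything else (e.g. mime "text/plain" from a renamed .txt) fails fast.
        "Default": "ValidationFailed"
    },
    "ValidationFailed": {
        "Type": "Fail",
        "Error": "InvalidInput",
        "Cause": "Validate step did not report a video input"
    }
}

print(json.dumps(check_validation, indent=2))
```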
    jessica88888
    @jessica88888

    Hi Nicolas,

    I’ve created an invalid MP4 file by simply renaming a text file with an .mp4 extension to see whether it goes through the validation steps. The validation step returned this metadata:

    "input_metadata": { "mime": "text/plain", "type": "text" },

    May I know what’s the difference between these two state machines:
    https://github.com/bfansports/CloudTranscode/blob/master/state_machines/SATranscodeAssets.json
    https://github.com/bfansports/CloudTranscode/blob/master/state_machines/SAValidateTranscodeAssets.json

    Can I use SATranscodeAssets.json, which doesn’t have the validation step?
    Thanks Nicolas.
    Best regards,
    Jessica

    Nicolas Menciere
    @koxon
    yes you can use the transcode State machine only
    in SFN console, check the jobs that you ran
    you can check the output of each step
    look at what the Validate step outputs
    and see what it says with your bad file
    look also at the transcode step
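To do the same inspection programmatically instead of in the SFN console, a minimal boto3 sketch could walk the execution history and print each state's output; the execution ARN below is a placeholder:

```python
import boto3

sfn = boto3.client("stepfunctions")

# Placeholder ARN: copy the real one from the execution list in the console.
EXECUTION_ARN = "arn:aws:states:us-east-1:123456789012:execution:SAValidateTranscodeAssets:my-run"

# Walk the execution history and print each state's output; the Validate
# step's output is where the mime type and ffprobe result show up.
paginator = sfn.get_paginator("get_execution_history")
for page in paginator.paginate(executionArn=EXECUTION_ARN):
    for event in page["events"]:
        details = event.get("stateExitedEventDetails")
        if details:
            print(details["name"], "->", details.get("output"))
```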
    jessica88888
    @jessica88888
    Hi Nicolas,
    Thanks for your explanation. I’ve tried transcoding without the validation step, and everything works nicely.
    Currently, I’m still working on the auto-scaling part.
    Many thanks for all of your support. :smiley:
    Best regards,
    Jessica
    Nicolas Menciere
    @koxon
    the auto scaling is quite painful indeed
    not sure how you can avoid killing a server that is running a task
    jessica88888
    @jessica88888
    Hi Nicolas,
    Yes, I agree with you. These few days, I’ve been trying to scale the tasks and instances up and down with different alarms. However, running tasks still get accidentally killed, even with the lifecycle hook for the instance.
    Hoping to find a solution that allows a task to run till completion.
    Thanks Nicolas.
    Best regards,
    Jessica
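For context, the lifecycle-hook approach normally requires something to tell Auto Scaling explicitly when the instance may be terminated; a minimal boto3 sketch, assuming hypothetical hook and group names:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Hypothetical names: these must match your Auto Scaling group and its hook.
GROUP = "ct-workers-asg"
HOOK = "drain-before-terminate"

def keep_instance_alive(instance_id):
    """Call periodically while a transcode task is still running."""
    autoscaling.record_lifecycle_action_heartbeat(
        LifecycleHookName=HOOK,
        AutoScalingGroupName=GROUP,
        InstanceId=instance_id,
    )

def release_instance(instance_id):
    """Call when the task finishes so Auto Scaling may terminate the instance."""
    autoscaling.complete_lifecycle_action(
        LifecycleHookName=HOOK,
        AutoScalingGroupName=GROUP,
        LifecycleActionResult="CONTINUE",
        InstanceId=instance_id,
    )
```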
    Nicolas Menciere
    @koxon
    maybe the scaling awareness should be built into the application itself
    Alarms could trigger a Lambda function
    this function could then send messages to the workers
    and order some workers to stop listening for new tasks
    but it sounds quite hard though
    the workers would need to know which machine they're running on, etc
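A rough sketch of that idea, assuming the alarm triggers a Lambda that publishes a drain command to a hypothetical SQS control queue the workers poll; discovering which machine a worker runs on (e.g. via the EC2 instance metadata endpoint) is left out:

```python
import json
import boto3

sqs = boto3.client("sqs")

# Hypothetical control queue that each worker polls next to its task queue.
CONTROL_QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/ct-worker-control"

def handler(event, context):
    """Lambda entry point, triggered by the scale-in CloudWatch alarm."""
    # Tell workers to stop listening for new tasks; each worker finishes
    # its current task and then exits, so nothing is killed mid-transcode.
    sqs.send_message(
        QueueUrl=CONTROL_QUEUE_URL,
        MessageBody=json.dumps({"command": "drain", "reason": "scale-in alarm"}),
    )
```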
    jessica88888
    @jessica88888
    Hi Nicolas,
    I will keep trying on the auto-scaling part and will update you if I find any solution. Thank you so much.
    Best regards,
    Jessica
    jessica88888
    @jessica88888
    Dear Nicolas,
    May I know if there is any way to set the permissions of transcoded files?
    Thank you.
    Best regards,
    Jessica
    Nicolas Menciere
    @koxon
    Hi Jessica
    What you can do is update the permissions in the ClientInterface Class
    if you implemented one
    when transcoding is done
    you can call S3 and update the permissions of your alternative files
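In Python with boto3, the equivalent S3 call would look like the sketch below; CloudTranscode's ClientInterface is PHP, so this only illustrates the API call itself, and the bucket/key names are placeholders:

```python
import boto3

s3 = boto3.client("s3")

def publish_output(bucket, key):
    """Make one transcoded (alternative) file publicly readable."""
    s3.put_object_acl(Bucket=bucket, Key=key, ACL="public-read")

# Placeholder bucket/key for wherever your ClientInterface writes outputs.
publish_output("my-output-bucket", "videos/output-720p.mp4")
```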
    shahzaibcb
    @shahzaibcb
    Hi Nicolas thanks for the quick reply on github.
    Nicolas Menciere
    @koxon
    np! Anytime
    jessica88888
    @jessica88888
    Dear Nicolas,
    Thank you so much for your support. After a few weeks of testing, I’ve got everything running smoothly on AWS. I really appreciate your efforts. Thanks once again.
    Best regards,
    Jessica
    Dbinutu
    @Dbinutu
    Hi Nicolas, I also want to say a big thank you. Cheers Bro!
    Nicolas Menciere
    @koxon
    hey guys
    no problem, you're welcome
    @jessica88888 did you manage to make auto-scaling work?
    jessica88888
    @jessica88888
    Hi Nicolas,
    I managed to auto-scale the instances and tasks. However, I’m not using the lifecycle hook, as I couldn’t make it work the way it should. The only thing I can do to reduce the chance of a task being killed is to increase the memory so that the job runs as fast as possible. Other than that, I also added more retries in Step Functions, so if a task is killed by an instance scaling down, another task will still be started in its place.
    Thanks Nicolas.
    Best regards,
    Jessica
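For readers following along, retries like this are declared on the task state in the state machine definition; a minimal sketch as a Python dict, with the state name, ARN, and numbers as assumptions:

```python
# Sketch of a Retry clause on the transcode task state: if the activity
# worker dies mid-task (e.g. its instance was scaled in), Step Functions
# hits the timeout and reschedules the task on another worker.
transcode_state = {
    "TranscodeAsset": {
        "Type": "Task",
        "Resource": "arn:aws:states:us-east-1:123456789012:activity:TranscodeAsset",
        "TimeoutSeconds": 3600,
        "Retry": [
            {
                "ErrorEquals": ["States.Timeout", "States.TaskFailed"],
                "IntervalSeconds": 30,
                "MaxAttempts": 3,
                "BackoffRate": 2.0
            }
        ],
        "End": True
    }
}
```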
    Nicolas Menciere
    @koxon
    Nicely done @jessica88888 you're doing better than me!
    best
    jessica88888
    @jessica88888
    Hi Nicolas,
    Thank you so much. I really appreciate your support and the great CloudTranscode.
    Thanks once again.
    Best regards,
    Jessica
    Alouane Nour-Eddine
    @alouane
    Hi @koxon
    I have questions regarding this great project
    As I understand from the previous discussions, I can run CT using AWS CS + SFN to handle concurrent jobs. Now, if I have many parallel transcoding jobs, I could check the pods' CPU or the SFN pending tasks in order to spin up more resources (pods or EC2 servers...)
    Alouane Nour-Eddine
    @alouane
    Do you think this workflow will lead to additional delay in transcoding time (the time to init and install a new pod... when traffic goes up)?
    I would like @jessica88888 to join this discussion and give us her feedback about this workflow
    I think maybe if we add an additional step transition that checks the cluster's CPU or the SFN pending tasks and then decides whether or not to use AWS transcoding to handle peaks => then we'd have a hybrid solution that scales with no added delay
    Alouane Nour-Eddine
    @alouane
    Currently we have a $4k monthly bill from the AWS transcoder service alone, and I have doubts about the lazy scaling solution; maybe it will delay transcoding time for our current users
    So my theory is to use a fixed number of pods or servers + use the AWS transcoder during peaks when SFN's pending tasks go up
    I would like to hear what do you think guys about this :)
    Cheers ;)
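A rough sketch of that routing idea, assuming the backlog is measured by counting RUNNING executions on the state machine; the ARN, threshold, and Elastic Transcoder pipeline/preset IDs are all placeholders:

```python
import boto3

sfn = boto3.client("stepfunctions")
ets = boto3.client("elastictranscoder")

# Placeholders: your transcode state machine and a backlog threshold.
STATE_MACHINE_ARN = "arn:aws:states:us-east-1:123456789012:stateMachine:SATranscodeAssets"
MAX_IN_FLIGHT = 20

def count_running_executions():
    """Approximate the backlog by counting RUNNING executions."""
    paginator = sfn.get_paginator("list_executions")
    pages = paginator.paginate(stateMachineArn=STATE_MACHINE_ARN,
                               statusFilter="RUNNING")
    return sum(len(page["executions"]) for page in pages)

def route_job(job_input_json):
    """Fixed fleet handles the baseline; the managed service absorbs peaks."""
    if count_running_executions() < MAX_IN_FLIGHT:
        sfn.start_execution(stateMachineArn=STATE_MACHINE_ARN,
                            input=job_input_json)  # input is a JSON string
    else:
        # Pipeline and preset IDs are placeholders for your ETS setup.
        ets.create_job(
            PipelineId="1111111111111-abcde1",
            Input={"Key": "videos/input.mp4"},
            Output={"Key": "videos/output.mp4",
                    "PresetId": "1351620000001-000010"},  # Generic 720p preset
        )
```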