    jessica88888
    @jessica88888
    Hi Nicolas,
    Thanks for your explanation. I’ve tried transcoding without the validation step, and everything goes nicely.
    Currently, I’m still working on the auto-scaling part.
    Many thanks for all of your support. :smiley:
    Best regards,
    Jessica
    Nicolas Menciere
    @koxon
    the auto scaling is quite painful indeed
    not sure how you can avoid killing a server that is running a task
    jessica88888
    @jessica88888
    Hi Nicolas,
    Yes, I agree with you. These past few days, I’ve been trying to scale the tasks and instances up and down with different alarms. However, running tasks still get accidentally killed, even with the lifecycle hook for the instance.
    Hoping to find a solution that allows the task to run till completion.
    Thanks Nicolas.
    Best regards,
    Jessica
    Nicolas Menciere
    @koxon
    maybe the scaling awareness should be built into the application itself
    Alarms could trigger a Lambda function
    this function could then send messages to the workers
    and order some workers to stop listening for new tasks
    but it sounds quite hard though
    the workers would need to know which machine they're running on, etc
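(A minimal sketch of the idea Nicolas describes above: a scale-in alarm invokes a Lambda function, which tells some workers to stop picking up new tasks. The control topic, environment variable and message format are assumptions for illustration, not part of CloudTranscode.)

```python
# Hypothetical Lambda handler: a CloudWatch scale-in alarm invokes this,
# and it asks workers to finish their current task and stop listening.
import json
import os

import boto3

sns = boto3.client("sns")

# Assumed: workers subscribe to this topic and honor "drain" commands.
CONTROL_TOPIC_ARN = os.environ["WORKER_CONTROL_TOPIC_ARN"]

def handler(event, context):
    # Assumed payload field: how many workers should be drained.
    workers_to_drain = int(event.get("workers_to_drain", 1))

    sns.publish(
        TopicArn=CONTROL_TOPIC_ARN,
        Message=json.dumps({
            "command": "stop_listening",  # workers finish the running task, then exit
            "count": workers_to_drain,
        }),
    )
    return {"drained": workers_to_drain}
```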
    jessica88888
    @jessica88888
    Hi Nicolas,
    I will keep on trying on the auto-scaling part, and will update you if I find any solution on this. Thank you so much.
    Best regards,
    Jessica
    jessica88888
    @jessica88888
    Dear Nicolas,
    May I know if there is any way to set the permissions of the transcoded files?
    Thank you.
    Best regards,
    Jessica
    Nicolas Menciere
    @koxon
    Hi Jessica
    What you can do is update the permissions in the ClientInterface Class
    if you implemented one
    when transcoding is done
    you can call S3 and update the permissions of your alternative files
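(A sketch of the S3 call Nicolas refers to, in Python/boto3 for illustration even though CloudTranscode itself is PHP; the bucket, keys and the "public-read" ACL are placeholders, not the project's defaults.)

```python
# Sketch: once transcoding is done, update the ACL on the output files.
# Bucket, keys and ACL are placeholders; use whatever permissions you need.
import boto3

s3 = boto3.client("s3")

def set_output_permissions(bucket: str, keys: list[str], acl: str = "public-read") -> None:
    for key in keys:
        s3.put_object_acl(Bucket=bucket, Key=key, ACL=acl)

set_output_permissions("my-transcoding-output", ["videos/clip_720p.mp4", "videos/clip_480p.mp4"])
```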
    shahzaibcb
    @shahzaibcb
    Hi Nicolas, thanks for the quick reply on GitHub.
    Nicolas Menciere
    @koxon
    np! Anytime
    jessica88888
    @jessica88888
    Dear Nicolas,
    Thank you so much for your support. After a few weeks of testing, I’ve got everything running smoothly on AWS. I really appreciate your efforts. Thanks once again.
    Best regards,
    Jessica
    Dbinutu
    @Dbinutu
    Hi Nicolas, I also want to say a big thank you. Cheers Bro!
    Nicolas Menciere
    @koxon
    hey guys
    no problem, you're welcome
    @jessica88888 did you manage to make auto scaling work?
    jessica88888
    @jessica88888
    Hi Nicolas,
    I managed to auto-scale the instances and tasks. However, I’m not using the lifecycle hook, as I couldn’t make it work the way it should. The only thing I can do to reduce tasks being killed is to increase the memory so that the job runs as fast as possible. Other than that, I also added more retries in the Step Function, so if a task is killed due to an instance scaling down, it will still restart another task.
    Thanks Nicolas.
    Best regards,
    Jessica
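(A sketch of the retry approach Jessica describes: Step Functions retries are declared on the Task state in the state machine definition (Amazon States Language). The activity ARN, error list and backoff numbers below are illustrative only, not her actual configuration.)

```python
# Illustrative Task state with retries, expressed as a Python dict that could be
# json.dumps()'d into a state machine definition. If a worker is killed during a
# scale-in, the task fails or times out and is retried on another worker.
import json

transcode_task_state = {
    "Type": "Task",
    "Resource": "arn:aws:states:us-east-1:123456789012:activity:TranscodeAsset",  # placeholder ARN
    "TimeoutSeconds": 3600,
    "Retry": [
        {
            "ErrorEquals": ["States.Timeout", "States.TaskFailed"],
            "IntervalSeconds": 30,
            "MaxAttempts": 3,   # restart the task up to 3 times
            "BackoffRate": 2.0,
        }
    ],
    "End": True,
}

print(json.dumps(transcode_task_state, indent=2))
```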
    Nicolas Menciere
    @koxon
    Nicely done @jessica88888 you're doing better than me!
    best
    jessica88888
    @jessica88888
    Hi Nicolas,
    Thank you so much. I really appreciate your support and the great CloudTranscode.
    Thanks once again.
    Best regards,
    Jessica
    Alouane Nour-Eddine
    @alouane
    Hi @koxon
    I have questions regarding this great project
    As I understand from the previous discussions, I can run CT using AWS CS + SFN to handle concurrent jobs. Now, if I have many parallel transcoding jobs, I could check the pods' CPU or the SFN pending tasks in order to spin up more resources (pods or EC2 servers...)
    Alouane Nour-Eddine
    @alouane
    Do you think this workflow will lead to additional delay in terms of transcoding time (time to init & install a new pod... when the traffic goes up)?
    I would like @jessica88888 to join this discussion & give us her feedback about this workflow
    I think maybe if we add an additional step transition that checks the cluster's CPU or the SFN pending tasks & then decides whether or not to use AWS transcoding to handle peaks => we'd have a hybrid solution, scalable with no delay
    Alouane Nour-Eddine
    @alouane
    Currently we have a $4k monthly bill from the AWS transcoder service alone, & I have doubts about the lazy scaling solution; maybe it will delay the transcoding time for our current users
    So my theory is to use a fixed number of pods or servers + use the AWS transcoder during peaks when SFN's pending tasks go up
    I would like to hear what you guys think about this :)
    Cheers ;)
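(A rough sketch of the hybrid routing Alouane suggests, assuming the queue depth is approximated by the number of RUNNING executions on the state machine; the threshold, ARNs, pipeline ID and preset are placeholders, not a recommended setup.)

```python
# Sketch: route a new job to the managed transcoder only when the self-hosted
# Step Functions fleet looks saturated. All identifiers below are placeholders.
import json

import boto3

sfn = boto3.client("stepfunctions")
ets = boto3.client("elastictranscoder")

STATE_MACHINE_ARN = "arn:aws:states:us-east-1:123456789012:stateMachine:CloudTranscode"  # placeholder
PIPELINE_ID = "1111111111111-abcde1"  # placeholder Elastic Transcoder pipeline
PEAK_THRESHOLD = 50                   # tune to your own fleet capacity

def running_executions() -> int:
    # Counts only the first page of results; good enough for a sketch.
    resp = sfn.list_executions(stateMachineArn=STATE_MACHINE_ARN, statusFilter="RUNNING")
    return len(resp["executions"])

def submit(input_key: str, output_key: str) -> None:
    if running_executions() >= PEAK_THRESHOLD:
        # Peak load: offload to the managed service to avoid queueing delay.
        ets.create_job(
            PipelineId=PIPELINE_ID,
            Input={"Key": input_key},
            Output={"Key": output_key, "PresetId": "1351620000001-000010"},  # example system preset
        )
    else:
        # Normal load: start an execution on the self-hosted workflow.
        sfn.start_execution(
            stateMachineArn=STATE_MACHINE_ARN,
            input=json.dumps({"input_key": input_key, "output_key": output_key}),
        )
```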
    Nicolas Menciere
    @koxon
    Hi @alouane
    The idea is to run your workers in Docker containers
    this way they "instantly" start
    if you reach a certain number of Docker containers and your instance is getting too full of them
    then you can spawn a new instance which can run X containers
    this way you anticipate the load and always have capacity available for spawning new containers/workers
    so let's say you reach a capacity of 80% on your box, and your box can only accept 1 or 2 new Docker containers
    based on the available memory
    then you can anticipate and spawn a new instance so you can add more containers if needed
    on the other hand, if a box is empty, and you scale down to a certain capacity, then you can kill extra costly boxes
    another solution is to have a machine in-house that runs your transcoding
    and only if you need to scale then you spawn instances in the cloud
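(A back-of-the-envelope version of the capacity check Nicolas describes: decide whether to add or remove an instance from how many more worker containers the current box can still hold. The memory figures are illustrative, not tuned values.)

```python
# Sketch of the anticipation rule: keep enough free memory for a couple of
# spare worker containers; if we can't, spawn an instance ahead of the load.

CONTAINER_MEM_MB = 2048       # assumed memory footprint of one worker container
SPARE_CONTAINERS_WANTED = 2   # always keep room for this many extra workers

def scaling_decision(total_mem_mb: int, used_mem_mb: int, running_containers: int) -> str:
    free_slots = (total_mem_mb - used_mem_mb) // CONTAINER_MEM_MB
    if free_slots < SPARE_CONTAINERS_WANTED:
        return "scale_out"    # box nearly full: spawn a new instance before it is needed
    if running_containers == 0 and free_slots > SPARE_CONTAINERS_WANTED:
        return "scale_in"     # empty, costly box: candidate for termination
    return "hold"

# Example: a 16 GB box already using 13 GB can fit only one more worker.
print(scaling_decision(total_mem_mb=16384, used_mem_mb=13312, running_containers=6))  # -> "scale_out"
```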