    Nicolas Menciere
    @koxon
    if you implemented one
    when transcoding is done
    you can call S3 and update the permissions of your alternative files
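    A minimal sketch of what that S3 call could look like, here in Python with boto3 (the bucket and key names are placeholders, and this snippet is not part of CloudTranscode itself):

    ```python
    import boto3

    s3 = boto3.client("s3")

    def publish_outputs(bucket, output_keys):
        # After the transcoding activity finishes, grant public-read on each
        # output file so the alternative versions become downloadable.
        for key in output_keys:
            s3.put_object_acl(Bucket=bucket, Key=key, ACL="public-read")

    # Placeholder bucket and key, for illustration only.
    publish_outputs("my-transcode-output-bucket", ["videos/1234/720p.mp4"])
    ```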
    shahzaibcb
    @shahzaibcb
    Hi Nicolas, thanks for the quick reply on GitHub.
    Nicolas Menciere
    @koxon
    np! Anytime
    jessica88888
    @jessica88888
    Dear Nicolas,
    Thank you so much for your support. After a few weeks of testing, I’ve got everything running smoothly on AWS. I really appreciate your efforts. Thanks once again.
    Best regards,
    Jessica
    Dbinutu
    @Dbinutu
    Hi Nicolas, I also want to say a big thank you. Cheers Bro!
    Nicolas Menciere
    @koxon
    hey guys
    no problem, you're welcome
    @jessica88888 did you manage to make auto-scaling work?
    jessica88888
    @jessica88888
    Hi Nicolas,
    I managed to auto-scale the instances and tasks. However, I’m not using a lifecycle hook, as I couldn’t make it work the way it should. The only thing I can do to reduce the chance of a task being killed is to increase the memory so that the job runs as fast as possible. Other than that, I also added more retries in the Step Function, so even if a task is killed when an instance scales down, it will still restart another task.
    Thanks Nicolas.
    Best regards,
    Jessica
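    For reference, a Task state with a broad Retry block is one way to express what Jessica describes; the ARN, timeout and retry numbers below are placeholders, not her actual configuration:

    ```python
    # Fragment of an Amazon States Language definition, built as a Python dict.
    transcode_task_state = {
        "Type": "Task",
        "Resource": "arn:aws:states:us-east-1:123456789012:activity:TranscodeAsset",  # placeholder
        "TimeoutSeconds": 3600,
        "Retry": [
            {
                # Retry on any error, e.g. the worker being killed when its
                # instance is scaled in; Step Functions reschedules the task.
                "ErrorEquals": ["States.ALL"],
                "IntervalSeconds": 30,
                "MaxAttempts": 5,
                "BackoffRate": 2.0,
            }
        ],
        "End": True,
    }
    ```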
    Nicolas Menciere
    @koxon
    Nicely done @jessica88888 you're doing better than me!
    best
    jessica88888
    @jessica88888
    Hi Nicolas,
    Thank you so much. I really appreciate your support and the great CloudTranscode.
    Thanks once again.
    Best regards,
    Jessica
    Alouane Nour-Eddine
    @alouane
    Hi @koxon
    I have a few questions regarding this great project.
    As I understand from the previous discussions, I can run CT using AWS CS + SFN to handle concurrent jobs. Now, if I have many parallel transcoding jobs, I could check the pods' CPU or the SFN pending tasks in order to spin up more resources (pods or EC2 servers...)
    Alouane Nour-Eddine
    @alouane
    Do you think this workflow will lead to additional delay in transcoding time (time to init & install a new pod... when the traffic goes up)?
    I would like @jessica88888 to join this discussion & give us her feedback about this workflow
    I think maybe if we add an additional step transition that checks the cluster's CPU or the SFN pending tasks & then decides whether or not to use AWS transcoding to handle peaks => we would have a hybrid solution that scales with no extra delay
    Alouane Nour-Eddine
    @alouane
    Currently we have a $4k monthly bill from the AWS transcoder service alone, & I have doubts about the lazy-scale solution; maybe it will delay the transcoding time for our current users
    So my theory is to use a fixed number of pods or servers + use the AWS transcoder during peaks when SFN's pending tasks go up
    I would like to hear what you guys think about this :)
    Cheers ;)
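    One rough way to sketch the "check the backlog, then decide" idea is to count running Step Functions executions and only route overflow to the managed transcoder during peaks; the ARN, threshold and function names below are illustrative assumptions, not an existing CloudTranscode feature:

    ```python
    import boto3

    sfn = boto3.client("stepfunctions")
    STATE_MACHINE_ARN = "arn:aws:states:us-east-1:123456789012:stateMachine:CloudTranscode"  # placeholder
    MAX_BACKLOG = 20  # tune to the capacity of your fixed worker pool

    def backlog():
        # Count currently running executions as a rough proxy for pending work.
        count = 0
        paginator = sfn.get_paginator("list_executions")
        for page in paginator.paginate(stateMachineArn=STATE_MACHINE_ARN,
                                       statusFilter="RUNNING"):
            count += len(page["executions"])
        return count

    def choose_backend():
        # Send overflow to the managed AWS transcoder only during peaks.
        return "aws-transcoder" if backlog() > MAX_BACKLOG else "cloudtranscode-workers"
    ```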
    Nicolas Menciere
    @koxon
    Hi @alouane
    The idea is to run your workers in Docker containers
    this way they "instantly" start
    if you reach a certain number of Docker containers and your instance is getting too full of them
    then you can spawn a new instance which can run X containers
    this way you anticipate the load and always have capacity available for spawning new containers/workers
    so let's say you reach a capacity of 80% on your box, and your box can only accept 1 or 2 new Docker containers
    based on the available memory
    then you can anticipate and spawn a new instance so you can add more containers if needed
    on the other hand, if a box is empty, and you scale down to a certain capacity, then you can kill extra costly boxes
    another solution is to have a machine in-house that runs your transcoding
    and only if you need to scale then you spawn instances in the cloud
    at scale it is certainly less costly to actually host your workers yourself. In that case the network becomes the bottleneck, as you will need to download and upload the assets from and to AWS S3
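    The anticipation logic Nicolas describes could look roughly like this; the per-container memory figure and thresholds are assumptions, and the scale-out/scale-in actions are left as decisions to wire up to EC2/ECS yourself:

    ```python
    CONTAINER_MEM_MB = 2048   # memory one worker container needs (assumed)
    SPARE_CONTAINERS = 2      # always keep room for at least this many

    def plan(instances):
        """instances: list of dicts like {"id": ..., "free_mem_mb": ..., "containers": ...}"""
        # Room left for new worker containers across the whole fleet.
        spare_slots = sum(i["free_mem_mb"] // CONTAINER_MEM_MB for i in instances)
        if spare_slots <= SPARE_CONTAINERS:
            return "scale-out"   # spawn a new instance before capacity runs out
        idle = [i for i in instances if i["containers"] == 0]
        if len(idle) > 1:
            return "scale-in"    # keep one idle box spare, kill the extra costly ones
        return "hold"
    ```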
    jessica88888
    @jessica88888

    Hi Alouane,

    Totally agree with Nicolas; I think it will be hard to handle if you're using both the AWS transcoder & CloudTranscode together.
    As Nicolas suggested, you could scale up the instances before they reach 100% usage; that way you have enough time to add capacity before you run out of room for containers.

    Thank you.

    Best regards,
    Jessica

    Alouane Nour-Eddine
    @alouane
    mmm.. I got the point, & by using a similar scale-down policy (less than 5~10% CPU usage) the host can be shut down, so we can prevent any transcoding task from being killed. I agree that AWS is so expensive; I'll definitely go with a dedicated Docker solution
    Thanks guys for your feedback ;)
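    One complement to a low-CPU scale-in policy, if the workers run in an Auto Scaling group, is scale-in protection so a box is never terminated mid-job; the group name below is a placeholder:

    ```python
    import boto3

    asg = boto3.client("autoscaling")

    def set_protection(instance_id, busy):
        # Protect the instance while it still runs transcoding containers,
        # release the protection once it is idle again.
        asg.set_instance_protection(
            AutoScalingGroupName="cloudtranscode-workers",  # placeholder
            InstanceIds=[instance_id],
            ProtectedFromScaleIn=busy,
        )
    ```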