    Alouane Nour-Eddine
    Hi @koxon
    I have questions regarding this great project.
    As I understand from the previous discussions, I can run CT using aws CS + SFN to handle concurrent jobs. Now, if I have many parallel transcoding jobs, I could check the pods' CPU or the SFN pending tasks in order to spin up more resources (pods or EC2 servers...)
    Alouane Nour-Eddine
    Do you think this workflow will lead to additional delay in transcoding time (the time to init & install a new pod when traffic goes up)?
    I would like @jessica88888 to join this discussion & give us feedback here about this workflow
    I think maybe if we add an additional step transition that checks the cluster's CPU or the SFN pending tasks & then decides whether or not to use aws transcoding to handle peaks => so we have a hybrid solution that scales with no added delay
    Alouane Nour-Eddine
    Currently we have a $4k monthly bill from the aws transcoder service alone, & I have doubts about the lazy-scale solution; maybe it will delay transcoding time for our current users
    So my theory is to use a fixed number of pods or servers + use the aws transcoder during peaks when the SFN's pending tasks go up
    I would like to hear what you guys think about this :)
    Cheers ;)
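A minimal sketch of the hybrid routing idea above — keep a fixed worker pool and only fall back to the AWS transcoder when the SFN backlog grows. The function names and thresholds are illustrative assumptions, not part of the project; in practice `pending_sfn_tasks` would come from Step Functions / CloudWatch metrics.

```python
# Hypothetical sketch: pick a transcoding backend based on pool saturation
# and the Step Functions backlog. All names/thresholds are assumptions.

def choose_backend(pending_sfn_tasks: int, busy_workers: int,
                   pool_size: int, pending_threshold: int = 20) -> str:
    """Route to the fixed pool while it has capacity; burst to the AWS
    transcoder only during peaks (large SFN backlog); otherwise queue."""
    if busy_workers < pool_size:
        return "own-pool"              # fixed capacity still available
    if pending_sfn_tasks > pending_threshold:
        return "aws-transcoder"        # peak traffic: pay for burst capacity
    # Pool is full but the backlog is still small: let the job wait.
    return "queue"
```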
    Nicolas Menciere
    Hi @alouane
    The idea is to run your workers in Docker containers
    this way they "instantly" start
    if you reach a certain number of Docker containers and your instance is getting too full of them
    then you can spawn a new instance which can run X containers
    this way you anticipate the load and always have capacity available for spawning new containers/workers
    so let's say you reach a capacity of 80% on your box, and your box can only accept 1 or 2 new Docker containers
    based on the available memory
    then you can anticipate and spawn a new instance so you can add more containers if needed
    on the other hand, if a box is empty, and you scale down to a certain capacity, then you can kill extra costly boxes
    another solution is to have a machine in-house that runs your transcoding
    and only if you need to scale then you spawn instances in the cloud
    at scale it is certainly less costly to host your workers yourself. In this case the network becomes the bottleneck, as you will need to download and upload the assets from and to AWS S3
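The anticipatory policy Nicolas describes could be sketched like this — scale out before the fleet is full (his ~80% figure), scale in when utilization drops. The capacity numbers and low-water mark are illustrative assumptions:

```python
# Hypothetical sketch of anticipatory fleet scaling: add a box before
# container capacity runs out, remove one when the fleet drains.
# Thresholds are assumptions (80% from the discussion, 10% chosen here).

def scale_decision(running_containers: int, capacity_per_box: int,
                   boxes: int, high: float = 0.8, low: float = 0.1) -> str:
    total = boxes * capacity_per_box
    utilization = running_containers / total
    if utilization >= high:
        return "add-box"        # spawn an instance before we run out of room
    # Only remove a box if more than one remains and the fleet is nearly idle.
    if boxes > 1 and utilization <= low:
        return "remove-box"
    return "hold"
```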

    Hi Alouane,

    Totally agree with Nicolas. I think it will be hard to handle if you're using both the aws transcoder & cloud transcoder together.
    As Nicolas suggested, you could scale up the instance before it reaches 100% usage; that way you have enough time to scale up before it runs out of container capacity.

    Thank you.

    Best regards,

    Alouane Nour-Eddine
    mmm.. I get the point, & by using a similar scale-down policy (less than 5~10% of CPU usage) the host can be shut down, so we can avoid killing any running transcoding task. I agree that aws is so expensive, I'll definitely go with a dedicated Docker solution
    Thanks guys for your feedback ;)
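Alouane's safe scale-down check could look like this — a host is only shut down when its CPU is below the ~5–10% threshold from the discussion *and* it has no in-flight transcoding jobs, so nothing gets killed mid-transcode. The function and job count are illustrative assumptions:

```python
# Hypothetical sketch: drain-aware shutdown check for a worker host.
# The CPU threshold echoes the 5-10% figure discussed; the rest is assumed.

def can_shut_down(cpu_percent: float, active_jobs: int,
                  cpu_threshold: float = 10.0) -> bool:
    """Shut a worker host down only when it is both idle (low CPU)
    and quiet (no running transcoding tasks to kill)."""
    return cpu_percent < cpu_threshold and active_jobs == 0
```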