Aditya Ramesh
@adramesh
as less of the search space gets explored
Fabien Hermenier
@fhermeni
I saw a hotspot in a constraint but I cannot act on it properly right now. I don't know that class well enough (my wife wrote it and she is on maternity leave) and there is something better to do than my modification
yes it would reduce the memory usage
because there might be fewer old values to store depending on the variable usage
also, use fewer constraints. I already removed a lot of redundant constraints for this release but it is not easy to be sure about the pruning efficiency
some redundancy might be good
finally, the repair mode reduces the memory usage a lot by pre-instantiating a large number of variables
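For reference, a minimal sketch of turning repair mode on; it assumes the 0.46-era DefaultChocoScheduler exposes a doRepair(boolean) switch, so the method name should be checked against the actual API:

```java
import org.btrplace.scheduler.choco.ChocoScheduler;
import org.btrplace.scheduler.choco.DefaultChocoScheduler;

// Repair mode pre-instantiates the variables of VMs that are not
// candidates for an action, which cuts memory usage considerably.
// doRepair(boolean) is assumed from the 0.46-era scheduler parameters.
ChocoScheduler sched = new DefaultChocoScheduler();
sched.doRepair(true);
```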
Aditya Ramesh
@adramesh
ok; we will be using repair mode to ensure that the plans can get generated quickly enough
but for issue100, what exactly is causing the big memory spike?
there are only 3 constraints and the number of nodes is small
only the number of VMs is big, but it's nothing very massive (< 10K)
Fabien Hermenier
@fhermeni
be careful here, btrplace constraint != choco constraint
and sadly, for general comprehension, I don't clarify which kind of constraint I mean :D
VMs that stay running are modeled using a significant number of variables
I can check the memory hotspot
Aditya Ramesh
@adramesh
ok
is there a way of figuring out how many constraints are generated underneath?
SolvingStatistics does not seem to expose this
Fabien Hermenier
@fhermeni
indeed
The numbers are accessible from the Solver object
So when I work on that aspect, I run in debug mode, set a breakpoint, and look at the numbers
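For illustration, a rough sketch of dumping those numbers; it assumes the Choco 3 accessors getNbVars() and getNbCstrs(), and that the btrplace ReconfigurationProblem exposes the underlying solver through getSolver():

```java
import org.btrplace.scheduler.choco.ReconfigurationProblem;
import org.chocosolver.solver.Solver;

// Print the size of the underlying Choco model. The accessor names
// are assumptions based on the Choco 3 / btrplace 0.46-era APIs.
static void dumpModelSize(ReconfigurationProblem rp) {
    Solver s = rp.getSolver();
    System.out.println(s.getNbVars() + " variables, "
            + s.getNbCstrs() + " constraints");
}
```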
Aditya Ramesh
@adramesh
ok; got it
There are 74181 variables and 20357 constraints
Fabien Hermenier
@fhermeni
I pushed the environment factory
Aditya Ramesh
@adramesh
ok; thanks for making the change quickly
Fabien Hermenier
@fhermeni
A simple one and the workaround looks clean to me so no worry
Fabien Hermenier
@fhermeni
btrplace v0.46 released and available from the Maven Central repository. Better performance and better code (thanks @adramesh )
Michael Drogalis
@MichaelDrogalis
@fhermeni Upgraded Onyx to use BtrPlace 0.46. Test suite is passing ^^
Nice to be back on the most recent version.
Fabien Hermenier
@fhermeni
@MichaelDrogalis great. Thanks for the feedback. Keep me posted if your users have issues that seem related to btrplace
Michael Drogalis
@MichaelDrogalis
Will do!
Aditya Ramesh
@adramesh
@fhermeni: One question with regard to determining migrations
Currently, we only allow a migration if there is enough capacity on both the source and the destination for each of the resources. This model works well for memory, where there needs to be enough capacity on both nodes, but not for resources like network or CPU, where temporary starvation is okay. Is there support for discrete and continuous constraints for resources as well?
Fabien Hermenier
@fhermeni
@adramesh currently, there must be enough resources on the destination node to perform the migration, but on that node, you can have some VMs (possibly about to leave) that only consume their current consumption. So you can play with the notion of current consumption to simulate that starvation
could you illustrate with a Gantt chart or any plan showing the resource usage on a node?
Aditya Ramesh
@adramesh
consider this example:
{"model":{"mapping":{"readyVMs":[],"onlineNodes":{"0":{"sleepingVMs":[],"runningVMs":[0]},"1":{"sleepingVMs":[],"runningVMs":[1]}},"offlineNodes":[]},"attributes":{"nodes":{},"vms":{}},"views":[{"defConsumption":4,"nodes":{},"rcId":"CPU","id":"shareableResource","defCapacity":7,"vms":{}}]},"constraints":[{"nodes":[0],"vm":0,"continuous":false,"id":"ban"},{"nodes":[1],"vm":1,"continuous":false,"id":"ban"}],"objective":{"id":"minimizeMTTR"}}
Here, we need to migrate both VMs to the other host, but live migration cannot succeed as there is not enough CPU capacity during the migration. Since CPU can be time-shared, it is okay if there is not enough capacity: only performance will suffer during the migration, so the migration should be allowed to continue.
Essentially, every resource should have a guarantee factor which specifies how much of the resource must be preserved during a migration.
Memory must be 1 (all of the memory must be allocated) while things like CPU and storage are more fungible and can be around 0.25-0.5.
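To make the proposal concrete, a small sketch of the suggested guarantee factor; this is hypothetical arithmetic illustrating the idea, not an existing btrplace API:

```java
// Hypothetical guarantee factor: the fraction of a VM's consumption
// that must stay reserved while it migrates. Memory pins the factor
// at 1.0; fungible resources like CPU or storage could use 0.25-0.5.
static int reservedDuringMigration(int consumption, double guaranteeFactor) {
    return (int) Math.ceil(consumption * guaranteeFactor);
}
// e.g. reservedDuringMigration(4, 1.0) == 4 for memory,
//      reservedDuringMigration(4, 0.5) == 2 for CPU.
```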
Fabien Hermenier
@fhermeni
side note, I need to enhance the demo so you can copy-paste such stuff and visualise it :D
Fabien Hermenier
@fhermeni
at first sight, it prevents modeling a consumption as a rectangle, which is not desirable for computing the scheduling
a possible workaround would be to declare a reduced current consumption for the VMs while using Preserve to ask for the right amount
bedtime. I'll be back
Aditya Ramesh
@adramesh
the problem with that approach is that we now need a constraint per VM for each of the resources that need not be guaranteed during live migration
Fabien Hermenier
@fhermeni
It will not be an issue in terms of performance (such a constraint is already stated by default to declare the usage)
Aditya Ramesh
@adramesh
To clarify, is this an example of what you're saying to do (if CPU during a migration only needs to be guaranteed at 50%):
{"model":{"mapping":{"readyVMs":[],"onlineNodes":{"0":{"sleepingVMs":[],"runningVMs":[0]},"1":{"sleepingVMs":[],"runningVMs":[1]}},"offlineNodes":[]},"attributes":{"nodes":{},"vms":{}},"views":[{"defConsumption":2,"nodes":{},"rcId":"CPU","id":"shareableResource","defCapacity":7,"vms":{}}]},"constraints":[{"nodes":[0],"vm":0,"continuous":false,"id":"ban"},{"nodes":[1],"vm":1,"continuous":false,"id":"ban"},{"rc":"CPU","amount":4,"vm":0,"id":"preserve"},{"rc":"CPU","amount":4,"vm":1,"id":"preserve"}],"objective":{"id":"minimizeMTTR"}}
Essentially, all VMs declare the reduced consumption, and the Preserve constraint ensures that the consumption ends up reset to the default.
Fabien Hermenier
@fhermeni
yes, that looks correct
the "real consumption" is 4, reduced to 2 to allow a temporary sharing, and necessarily back to 4 at the end
Aditya Ramesh
@adramesh
ok; will try out with this modification and see how performance goes
Aditya Ramesh
@adramesh
But what is the issue with ensuring that, during a migration, the shareable resource can be lower, and backtracking if the end state violates the constraint?
Much like how the discrete constraint gets treated
Fabien Hermenier
@fhermeni
don't get it