Stephen DeWitt
@stvdwtt
I’m working on the code paper for PRISMS-PF, and I’m planning on referencing the results that have been uploaded for BM3. If anyone would like me to include your code in the comparison (or an updated result for a code that currently has an upload), please let me know and give me an estimate for when you’ll be able to upload your result. Also, when I do a convergence test in time and space, I get a nondimensional tip velocity of 8.69e-4. If anyone else has done a convergence test, please let me know — hopefully your answer is similar to mine.
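A convergence test of this sort can be sketched as follows; the refinement levels and every velocity except the quoted 8.69e-4 are invented for illustration:

```python
# Hypothetical convergence check in space: halve the mesh spacing and
# watch the measured tip velocity settle. Every number except the final
# 8.69e-4 quoted above is made up.
runs = [
    (1.0,   9.10e-4),  # (mesh spacing dx, measured non-dimensional tip velocity)
    (0.5,   8.80e-4),
    (0.25,  8.72e-4),
    (0.125, 8.69e-4),
]
for (dx_c, v_c), (dx_f, v_f) in zip(runs, runs[1:]):
    print(f"dx {dx_c} -> {dx_f}: change in v = {abs(v_f - v_c):.1e}")
# The result is "converged" once the change per refinement drops below a
# chosen tolerance; the same loop applies to refining the time step.
```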
Daniel Wheeler
@wd15
We/I need to include the tip velocity on the BM3 web page at some point. Have you already uploaded these results?
Stephen DeWitt
@stvdwtt
I haven’t uploaded the full convergence test (I was worried that it’d clutter up the plots too much), but I have a nearly fully converged result uploaded as well as a slightly less well-converged result
Daniel Wheeler
@wd15
ok, should I be able to infer that tip velocity number from the data that you uploaded with a suitable scheme?
Stephen DeWitt
@stvdwtt
Correct
Since all of the uploads report tip position as a function of time, you can approximate the tip velocity using finite differences (which I did for a table on one of my slides at the last workshop)
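That finite-difference estimate can be sketched as below; the sample data are invented, since the actual uploads are not reproduced here:

```python
import numpy as np

# Invented tip-position samples x(t); real BM3 uploads report many more points.
t = np.array([0.0, 100.0, 250.0, 450.0, 700.0, 1000.0])
x = np.array([5.0, 5.9, 7.3, 9.2, 11.6, 14.5])

# np.gradient uses second-order central differences in the interior and
# one-sided differences at the endpoints, and handles uneven spacing.
v = np.gradient(x, t)

# The last entry approximates the late-time (near steady-state) tip velocity.
print(v[-1])
```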
Daniel Wheeler
@wd15
Although the tip position curves look smooth for BM3 on the web site, when you zoom in they really aren't smooth. Obviously, the details of the scheme matter. On the website I'm planning on fitting a low order spline through the curves to estimate the tip velocity as a function of time.
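That spline-based estimate might look like the following sketch; the synthetic data, the smoothing factor, and the cubic order are all assumptions, not the site's actual fitting scheme:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Synthetic "rough" tip-position data: a linear trend plus small noise,
# standing in for the not-quite-smooth curves on the website.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1000.0, 200)
x = 5.0 + 8.69e-4 * t + rng.normal(scale=0.01, size=t.size)

# Fit a smoothing spline; s sets the target residual (roughly N * noise^2),
# so small-scale roughness is smoothed away rather than interpolated.
spline = UnivariateSpline(t, x, k=3, s=t.size * 0.01**2)

# Differentiating the spline gives tip velocity as a function of time.
velocity = spline.derivative()(t)
print(velocity.mean())
```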
Stephen DeWitt
@stvdwtt
That makes sense
Larry Aagesen
@laagesen
@wd15 looks good to me- sorry for the late reply
Daniel Wheeler
@wd15
What are other examples of websites that ask for scientific results and try to display them, whether mostly automated or manual? Obviously, $\mu$MAG is one, the IPR is another, and there's GenBank. Kaggle sort of does that, but it's not focused on any particular domain. I'm interested in examples that serve a small community; GenBank and Kaggle serve large communities.
Daniel Schwen
@dschwen
@stvdwtt I'm working on a new MOOSE benchmark upload
spoiler alert: it is not going to be slower than the previous benchmark I uploaded :-D
Stephen DeWitt
@stvdwtt
@dschwen Thanks for the heads-up. Do you think you can post it before the end of the week?
Daniel Schwen
@dschwen
Yeah, I think so.
Daniel Schwen
@dschwen
PR is in
Stephen DeWitt
@stvdwtt
That’s definitely not slower than the previous one 😀
I don’t think anyone thought that benchmark would be a 15 second problem
Daniel Schwen
@dschwen
yeah... this got out of hand
I'll focus on the other benchmark problems now
Daniel Wheeler
@wd15
It's 10000x faster than FiPy. Take it easy on the other benchmarks.
Daniel Schwen
@dschwen
Yeah, the whole efficiency plot is actually pretty useless TBH
I was looking at problem 1a again, and since the efficiency is simply sim time over wall time, a bunch of factors skew the results big time:
• CPU type is disregarded
• Number of cores is disregarded
• End time of the simulation is arbitrary
The last one is pretty problematic. If I extend my simulation end time to 1e15, I can make MOOSE the fastest code by something like a factor of a million
You reach steady state after 1e6, and the timestep can then grow without bound, so you get a lot of sim time for almost no wall time
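A toy model of that effect (entirely invented numbers, not pfhub's actual accounting) shows how the metric balloons once the adaptive step outruns the physics:

```python
def efficiency(t_end, t_steady=1e6, dt0=1.0, growth=1.2):
    """Toy 'sim time / wall time' metric for an adaptive-timestep run.

    Assumes every step costs one unit of wall time and the step size grows
    geometrically once the solution is past t_steady -- a crude stand-in
    for a simulation that has reached steady state.
    """
    t, dt, steps = 0.0, dt0, 0
    while t < t_end:
        t += dt
        steps += 1
        if t > t_steady:
            dt *= growth  # nothing changes anymore, so the step balloons
    return t_end / steps

print(efficiency(1e6))   # stop right at steady state
print(efficiency(1e15))  # extend the end time: huge sim time, few extra steps
```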
I also see that people used unreasonably coarse meshes and very loose non-linear solver tolerances
This just encourages pointless one-upmanship (to quote @laagesen), and I'm certainly guilty of that with my obsession over 3a
We need to discuss this at the meeting, but I'm in favor of just dropping the "efficiency" plot until we get this right.
Daniel Schwen
@dschwen
The solution could be a well-defined architecture (or set of architectures): a reference desktop and a reference cluster. Preferably an actual physical machine everyone has access to.
or just chuck it altogether
Just looking at the gamut of the 3a MOOSE uploads, where the slowest is over 6000 times slower than the fastest, should make one realize how utterly useless this comparison is.
It discourages me from working on further uploads, but that means subpar uploads will likely define the perception of our code. Frustrating.
Stephen DeWitt
@stvdwtt
I’m with you on the pitfalls of determining the “efficiency” of a code. I think in most cases, any measure of performance that doesn’t account for accuracy is bound to be misleading. That was one of the main motivations behind BM7: it gives a well-defined error metric and normalizes the time by the number of cores and the processor clock speed. The normalized clock speed still misses plenty of relevant information, though (and encourages single-core runs), and a reference machine would definitely be preferable.
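The normalization Stephen describes could be sketched like this; the function name, the reference clock, and the exact form of the scaling are assumptions for illustration, not the published BM7 formula:

```python
def normalized_runtime(wall_time_s, n_cores, clock_ghz, ref_clock_ghz=2.0):
    """Illustrative normalization: core-seconds scaled by clock speed.

    Hypothetical sketch of the idea behind BM7's normalization (wall time
    times core count, scaled to a reference clock), not the exact formula.
    """
    return wall_time_s * n_cores * (clock_ghz / ref_clock_ghz)

# A 100 s run on 4 cores at 3.0 GHz vs a 250 s single-core run at 2.4 GHz:
print(normalized_runtime(100.0, 4, 3.0))  # 600.0 core-seconds at the reference clock
print(normalized_runtime(250.0, 1, 2.4))
```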
Stephen DeWitt
@stvdwtt
I don't think our joint obsession with 3a has been *entirely* pointless. The tip velocity gives an OK error metric. The spread in run times between codes is worth knowing, at least in an order-of-magnitude sense. I also think the spread in times for the MOOSE uploads is interesting, to the extent that it demonstrates the vast difference between expertly optimized codes/parameter sets and more naive implementations.
Daniel Schwen
@dschwen
Well, yeah, except none of the MOOSE uploads was "explicitly" optimized
Everything was due to changes in the input file.
Yeah, if the takeaway is that a newbie MOOSE user can run a really slow simulation, that's fine with me.
My "crazy idea" was to use a standardized off-the-shelf computer, a Raspberry Pi, as a reference machine. But I've gotten mixed feedback, in particular that the ARM architecture may not be representative if most simulations in practice are run on x64.
Stephen DeWitt
@stvdwtt
That choices in the input file (for any code) can make a calculation take orders of magnitude longer and be less accurate is a useful warning, I think
Daniel Schwen
@dschwen
Also, it is hard to know whether the spread is due to a spread in uploader expertise or due to low user-friendliness (if that is the point to be made here...)
Issue #500 proposes tags for uploads
one tag could be the "level" of familiarity with the code of the uploader
Stephen DeWitt
@stvdwtt
That makes sense to me.
Daniel Schwen
@dschwen
And that might just be a pill to swallow. If you are not familiar with MOOSE you will not get optimal results.
I have a suspicion that this is a common problem for scientific codes.
And part of the problem could be that MOOSE makes it easy to get started, whereas other codes put up a barrier through a steeper learning curve that prevents a quick "shot from the hip"
Stephen DeWitt
@stvdwtt
You’re right, that is hard to know. They can also be hard to separate: if you run an implicit code, you need to work harder to pick a time step, and choosing it correctly is some mix of expertise and user-friendliness
Daniel Schwen
@dschwen
preconditioning is a big one for implicit codes
Stephen DeWitt
@stvdwtt
Right, that’s a better example
Daniel Schwen
@dschwen
that is voodoo science even to me