Activity
  • Dec 15 2018 14:41
    safford93 starred usnistgov/hiperc
  • Dec 12 2018 18:08
    tkphd on master: Doxygen moved (compare)
  • Oct 26 2018 23:06
  • Aug 23 2018 14:49
    wme7 forked wme7/hiperc
  • Aug 23 2018 14:49
    wme7 starred usnistgov/hiperc
  • Jul 30 2018 10:57
    avinashprabhu starred usnistgov/hiperc
  • Jul 27 2018 17:40
    tkphd opened #138
  • Jul 27 2018 17:39
    tkphd opened #137
  • Jul 27 2018 17:39
    tkphd labeled #137
  • Jul 27 2018 17:39
    tkphd labeled #137
  • Jul 11 2018 02:30
    ablekh starred usnistgov/hiperc
  • Jul 10 2018 18:36
    tkphd on master: merged find command (compare)
  • Jul 10 2018 18:29
    tkphd on master: link checks with Travis, fix broken links, Merge branch 'linting' (compare)
  • Jul 02 2018 16:21
    richardotis starred usnistgov/hiperc
  • Jun 28 2018 17:12
    ritajitk starred usnistgov/hiperc
  • Apr 26 2018 17:37
    tkphd opened #136
  • Mar 30 2018 08:42
    hellabyte starred usnistgov/hiperc
  • Feb 20 2018 21:50
    tkphd on manufactured-solutions: fix kappa in script (compare)
  • Feb 19 2018 08:56
    tkphd on manufactured-solutions: prettier truncation label, starting temporal study (compare)
  • Feb 19 2018 07:56
    tkphd on manufactured-solutions: tracking down SymPy source gene…, clean-compiling Benchmark 7 for…, improved output and 2 more (compare)
A. M. Jokisaari
@amjokisaari
by the way, what's the reason for the name change?
"High Performance Computing Strategies for Boundary Value Problems"
Trevor Keller
@tkphd
"hiperc" is easier to type and remember than "phasefield-accelerator-benchmarks", and HPCS4BVP (sorry, on a phone) captures the goals and scope of the project. Phase field is my preferred application, but anybody solving diffusion-like equations will find this useful.
A. M. Jokisaari
@amjokisaari
yeah, gotcha. what is the pronunciation? "hyper-see" ? "hi-perk" ?
drjamesawarren
@drjamesawarren
Hype-rock
Trevor Keller
@tkphd
Hyper-see is my preference, yeah.
@reid-a, thanks for the regression suggestion. Is the technique documented in the OOF manual?
Andrew Reid
@reid-a
@tkphd Not really, the manual mostly is about how to use it. The critical function is "fp_file_compare" in the utils directory. You can drill down to it on the github repo.
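(Editor's note: fp_file_compare lives in OOF's utils directory; a minimal Python sketch of that style of tolerance-based regression check, with an illustrative name and signature rather than OOF's actual API, might look like:)

```python
import math

def fp_file_compare(path_a, path_b, tolerance=1e-10):
    """Compare two whitespace-delimited output files token by token.

    Numeric tokens are considered equal when they agree to within
    `tolerance`; non-numeric tokens must match exactly.
    """
    with open(path_a) as fa, open(path_b) as fb:
        lines_a, lines_b = fa.readlines(), fb.readlines()
    if len(lines_a) != len(lines_b):
        return False
    for la, lb in zip(lines_a, lines_b):
        tokens_a, tokens_b = la.split(), lb.split()
        if len(tokens_a) != len(tokens_b):
            return False
        for sa, sb in zip(tokens_a, tokens_b):
            try:
                # both tokens numeric: fuzzy comparison
                if not math.isclose(float(sa), float(sb), abs_tol=tolerance):
                    return False
            except ValueError:
                # at least one token non-numeric: exact comparison
                if sa != sb:
                    return False
    return True
```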
Dan Lewis
@lucentdan
Thanks for the invite. Looking into the phase field accelerator at this time.
Dan Lewis
@lucentdan
Think this will be useful for the new generation of HPC planned at RPI 2018+
Trevor Keller
@tkphd
No problem, @lucentdan. Welcome!
A. M. Jokisaari
@amjokisaari

ok. The diffusion code runs on KNL!

runlog.csv results:

iter    sim_time  wrss      conv_time  step_time  IO_time   soln_time  run_time
0       0         0         0          0.188137   0.057628  0          0.246949
10000   10000     0.000286  4.493929   1.365069   0.12165   0.005621   6.65671
20000   20000     0.000574  8.895637   2.39418    0.187032  0.006831   12.781632
30000   30000     0.000863  13.398053  3.401486   0.255395  0.008045   19.002456
40000   40000     0.001152  17.789476  4.41478    0.327418  0.009311   25.117928
50000   50000     0.001442  22.126154  5.438066   0.402769  0.012329   31.182672
60000   60000     0.001732  26.484548  6.458279   0.478117  0.013839   37.286988
70000   70000     0.002023  30.873932  7.447651   0.555889  0.015224   43.361118
80000   80000     0.002313  35.321395  8.449465   0.635728  0.016628   49.513117
90000   90000     0.002604  39.700359  9.443941   0.720135  0.018102   55.588724
100000  100000    0.002895  44.014251  10.427562  0.803771  0.019487   61.58906

diffusion.0100000.png
that's the final result.
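(Editor's note: the average wall-clock cost per iteration in that log falls out of the first and last rows; a quick Python check, using only the iter and run_time columns quoted above:)

```python
def step_cost(rows):
    """Average wall-clock seconds per iteration between the first
    and last rows of a runlog table."""
    first, last = rows[0], rows[-1]
    iters = last["iter"] - first["iter"]
    return (last["run_time"] - first["run_time"]) / iters

# first and last rows of the runlog.csv results above
rows = [
    {"iter": 0, "run_time": 0.246949},
    {"iter": 100000, "run_time": 61.58906},
]
print(step_cost(rows))  # roughly 6.1e-4 seconds per step
```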

I'm watching the output of top while running this, and I'm getting

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
22566 jokisaar 20 0 17.009g 35144 2232 R 25600 0.0 154:24.30 diffusion

Andrew Reid
@reid-a
Where's the +1 button on this thing?
+1!
Trevor Keller
@tkphd
PID PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
22566 20 0 17.009g 35144 2232 R 25600 0.0 154:24.30 diffusion
@reid-a type :+1:
Andrew Reid
@reid-a
:+1:
A. M. Jokisaari
@amjokisaari
@tkphd , ooh formatting. I shall endeavor to do that the next time
Trevor Keller
@tkphd
:+1:
A. M. Jokisaari
@amjokisaari
dumb me, but how do I interpret that %CPU? That's indicating threaded running, right? (mpi would give multiple lines in top)
Trevor Keller
@tkphd
Thank you so much for the KNL time and data! Looks like my code is 25.6/60=42% efficient, so we'll iterate :smile:
A. M. Jokisaari
@amjokisaari
haha. you are welcome! looking forward to further testing and really seeing how the phase field benchmarks work too!
Trevor Keller
@tkphd
Yes, CPU of 100% is one core, 25.6e3 is 25 cores, 60e3 would be 100% load.
A. M. Jokisaari
@amjokisaari
so these KNL nodes have 64 cores.
Trevor Keller
@tkphd
Right, just looked that up. 64e3 would be 100% load.
So my efficiency is only 40%.
A. M. Jokisaari
@amjokisaari
ok. so it was running 32.
Trevor Keller
@tkphd
Oh... interesting.
A. M. Jokisaari
@amjokisaari
right? if it's 25600%?
i added a github issue to include somewhere easily visible in the documentation how to specify the # of cores you want KNL to use.
Trevor Keller
@tkphd
Well, maybe. Depends how you launched it, and whether $OMP_NUM_THREADS was defined or not, and whether the core count was restricted by SLURM.
OpenMP takes all of them by default, which is the behavior I want. But yes, I will happily comment on that in the docs.
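(Editor's note: roughly, the OpenMP runtime picks its default team size like this; a Python sketch of the documented behaviour, not the actual runtime code:)

```python
import os

def omp_team_size():
    """Mimic the OpenMP default: honor OMP_NUM_THREADS when it is
    set, otherwise fall back to every hardware thread the OS sees."""
    value = os.environ.get("OMP_NUM_THREADS")
    if value:
        return int(value)
    return os.cpu_count()

os.environ["OMP_NUM_THREADS"] = "32"
print(omp_team_size())  # 32

del os.environ["OMP_NUM_THREADS"]
print(omp_team_size())  # all hardware threads the OS reports
```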
A. M. Jokisaari
@amjokisaari
aaahhh hrm. so I launched an interactive job via $ srun --pty -p knlall -t 1:00:00 /bin/bash
i will do it again to see if i can find out what the defaults are on the node.
ah. $OMP_NUM_THREADS was not defined. I have no idea what slurm will do for the interactive KNL job
Trevor Keller
@tkphd
OK... that should reserve the whole node for your use. When you log in, type "echo $OMP_NUM_THREADS". It should give a blank line, meaning the variable is not set. That would take all available cores, though I'm not sure whether it takes KNL hyperthreading into account.
Andrew Reid
@reid-a
Some (all?) KNL devices have up to four-way hyperthreading, so 25600% is theoretically achievable if you're very cache-friendly and have zero pipeline stalls.
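(Editor's note: with the hyperthreading correction, the arithmetic works out as follows; 64 cores and 4-way hyperthreading are the KNL figures quoted in this thread:)

```python
cores = 64            # physical cores on the KNL node
threads_per_core = 4  # 4-way hyperthreading on KNL
hw_threads = cores * threads_per_core  # 256 hardware threads

cpu_percent = 25600   # %CPU reported by top (100% = one busy thread)
busy_threads = cpu_percent / 100
print(busy_threads == hw_threads)  # True: every hardware thread is busy
```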
Trevor Keller
@tkphd
OK, if you have the patience, please export OMP_NUM_THREADS=32; make run; tail runlog.csv, then export OMP_NUM_THREADS=64; make run; tail runlog.csv
A. M. Jokisaari
@amjokisaari
rofl. if I have the patience. :+1:
way ahead of you
Trevor Keller
@tkphd
Oh... touché, @reid-a. CPU=25600% means 256 cores, not 25; and the program is tiny, so I am indeed cache friendly.
A. M. Jokisaari
@amjokisaari
oh crikey. bebop instructions specifically say to limit to 128 or we might crash the nodes
i think bebop is still in sort of the shakedown stage.
Trevor Keller
@tkphd
Oh, come on. Crash it for science!
A. M. Jokisaari
@amjokisaari
DO IT FOR SCIENCE, MORTY