Milan Curcic
@milancurcic

My dear Fortranners, I just posted a question on SO that has been bugging me on and off for a few days now:

http://stackoverflow.com/questions/39643775/passing-a-generic-procedure-to-a-function-as-actual-argument

I am aware quite a few of you can code circles around things like this. Let me know if you have a suggestion and thank you for being amazing.

Stefano Zaghi
@szaghi
@milancurcic I am with you. Indeed, I have never tried to pass a generic name as a dummy argument, but I have come close to it. Currently, I am trying to force myself to adopt Damian's teaching about OO, and in similar cases I try to adopt something like the strategy pattern, but I do not know if this fits your case. Anyhow, I was not aware of the limitations involved in passing a generic name as a dummy. What solution did you eventually adopt? The wrapper suggested by Vladimir? OHHH, I just saw that Vladimir is indeed @LadaF ! We have another guru here :tada: :tada:
Stefano Zaghi
@szaghi

Dear @/all , another very interesting publication by Chris

https://cmacmackin.github.io/pdfs/transferReport.pdf

( @cmacmackin you are too modest to highlight it yourself, and not everyone reads your blog as compulsively as I do... sorry to spoil your work, but it is really interesting to me)

Izaak "Zaak" Beekman
@zbeekman
@/all I think we should have a stronger showing here so go vote!
Stefano Zaghi
@szaghi
:+1: done... for C? :smile:
Giacomo Rossi
@giacombum
Well done @szaghi... You know that language better than Fortran... :-D
Stefano Zaghi
@szaghi
@giacombum :smile: poor C and poor Fortran are always better than nothing, like you :smile:
Izaak "Zaak" Beekman
@zbeekman

@/all I have been staring at something too long and have started to doubt myself. Can someone please confirm that the following is true for Coarray Fortran:

On a given image, if an assignment is performed with an image-local variable on the left-hand side and a coindexed coarray object on the right-hand side (i.e., data on another image), no synchronization is needed before using the local variable on the left-hand side, because the assignment will not allow that image to proceed until the local variable has received the remote data. I.e., there is no synchronization required between lines one and two below:

local_b(:) = remote_a(:)[me+1]
all_match = all( local_b(:) ==  local_a(:) )

This is correct, no? That is, local_b will always have the values from remote_a when being compared with local_a, right?

Stefano Zaghi
@szaghi

@zbeekman I really do not know what the standard states about this. Have you found an answer? This is also interesting for me, that is,

does your example imply that there is an implicit synchronization between me and me+1 before the 2nd statement?

Or, on the other side of the moon,

to achieve asynchronous parallelism, do I have to assign to a coarray rather than to get/fetch from it?

Izaak "Zaak" Beekman
@zbeekman
@szaghi yes, my current understanding is that gets are blocking for the local image, whereas puts are non-blocking. So to overlap computation and communication and reduce time spent waiting for other images, you should try to have the coindexed object on the LHS of the assignment.
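A minimal sketch of the two patterns under discussion, reusing the variable names from the example above (the declarations and array size are illustrative assumptions, not part of the original messages):

integer, parameter :: n = 8
real :: local_a(n), local_b(n)
real :: remote_a(n)[*]      ! coarray: one copy per image
logical :: all_match
integer :: me

me = this_image()

! "Get": coindexed object on the RHS. The executing image cannot proceed
! past the assignment until the remote data has arrived, so local_b can be
! used on the next line without any image control statement.
local_b(:) = remote_a(:)[me+1]
all_match  = all(local_b(:) == local_a(:))

! "Put": coindexed object on the LHS. The transfer may proceed
! asynchronously, so image me+1 must not read its remote_a until the
! segments are ordered by an image control statement such as:
remote_a(:)[me+1] = local_b(:)
sync all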
Stefano Zaghi
@szaghi
@zbeekman :+1:
Izaak "Zaak" Beekman
@zbeekman
@/all If anyone wants to try nightly GFortran builds from GCC trunk, you can do so very easily with my nightly-gcc-trunk-docker-image project
I use my other project to trigger nightly builds of the docker image: https://github.com/zbeekman/nightly-docker-rebuild
Milan Curcic
@milancurcic
Thanks @zbeekman, I made a note of this. It will come in very useful when I start playing with gcc7 (I haven't yet). I started playing with docker containers recently and absolutely love them!
Izaak "Zaak" Beekman
@zbeekman
@milancurcic great! Feel free to star/watch/fork!
Stefano Zaghi
@szaghi
@zbeekman Thank you very much. I have already (ab)used gcc 7, but having your nightly build system is awesome. The con is that I have to learn docker, and my time has nearly vanished...
Izaak "Zaak" Beekman
@zbeekman
@szaghi well, docker is pretty simple... you use it essentially as a virtual machine that has GCC 7 trunk installed in it. Get the docker installation instructions for Arch Linux here, and once it's installed and the docker engine is running, just

docker pull zbeekman/nightly-gcc-trunk-docker-image:latest

and then, to run it and mount a local directory, say with some source code:

docker run -v /local/code/source:/mount/path/on/vm -i -t zbeekman/nightly-gcc-trunk-docker-image

If you want root, add --user root to that, so you can install some packages etc... Although I may give the default user sudo privileges soon...
Stefano Zaghi
@szaghi
@zbeekman Great! Thanks!
Chris MacMackin
@cmacmackin
Hi all. I'd like to make a small announcement that I am releasing a simple logging library for Fortran, called Flogging. This is licensed under the LGPL. The easiest way to install this is using the FLATPack repository (which is still in an embryonic stage) with the Spack package manager.
Stefano Zaghi
@szaghi
@cmacmackin
Stefano Zaghi
@szaghi
@cmacmackin Flogging is very interesting (damn, I had missed it, your announcement is fundamental!!!???!!). I have to study it because I have never used Python's logging library. A wild question (before studying it): is using it in a parallel environment viable? It could be very handy for debugging my CAF experiments...
Chris MacMackin
@cmacmackin
I haven't put much thought into a parallel environment. As it stands now, I'm pretty sure each image would end up creating its own log file, which isn't really ideal. I'll also probably need to change the IO slightly to make it thread-safe.
Stefano Zaghi
@szaghi
@/all Happy Christmas! :santa:
Stefano Zaghi
@szaghi

@/all @jacobwilliams

Multidimensional interpolation on non-uniform grids

Dear all,
I am now approaching the implementation of some interpolation procedures for a few different purposes:

  • for WENO reconstruction with @giacombum;
  • for mesh-to-mesh solution interpolation;

I am a fan of @jacobwilliams' interpolators and, indeed, I had already planned to use his libraries. However, I have just realized that for my needs two features are necessary:

  • multidimensional;
  • non-uniform grid spacing;

Before reinventing the wheel or re-studying the state-of-the-art literature: are you aware of interpolator libraries of this kind? Obviously, FOSS and possibly in Fortran (or Fortran-friendly).

Jacob's libraries (bspline in particular) are currently my best option: our meshes are in general very slowly varying and can be viewed (approximated/considered) as locally uniform, but for the WENO aim a non-uniform interpolator can be very useful, if only for study purposes.

Happy new year :tada: :tada:

Milan Curcic
@milancurcic

@szaghi I have been using ESMF for years for interpolation, take a look at their capabilities: https://www.earthsystemcog.org/projects/esmf/regridding_esmf_6_3_0rp1

Pros:

  • parallel (MPI), including different domain partitioning between source and target grids
  • works for both structured and unstructured grids.
  • has a CLI tool for regridding

Cons:

  • Quite large project if you need only interpolation
  • Only 2-d and 3-d interpolation supported AFAIK
  • No fancy interpolation methods - just nearest neighbor, bilinear, and conservative.

ESMF is probably not something you will want to use, but I recommend you look at it for inspiration, especially how it handles the grid-to-grid interpolation in parallel and interpolating between different domain partitions.

Giacomo Rossi
@giacombum
The main problem with WENO reconstruction on non-uniform grids is the evaluation of the smoothness indicator coefficients: the most commonly used formula for evaluating the smoothness indicators consists of a sum of integrals of the squares of the derivatives of the interpolating polynomial; when the interpolating polynomials are Lagrange polynomials (as in the original work of Shu), the evaluation of these coefficients is very laborious for orders greater than 3, because the analytical expressions become very, very complex. Are you aware of this problem, and do you know of a simpler solution? Obviously @szaghi can describe the problem that we have encountered better than I can!
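For reference, a sketch of the smoothness indicators being discussed, in the standard Jiang-Shu form (assuming this is the formula Giacomo means):

$$\beta_k = \sum_{l=1}^{r-1} \int_{x_{i-1/2}}^{x_{i+1/2}} \Delta x^{2l-1} \left(\frac{\mathrm{d}^{l} p_k(x)}{\mathrm{d}x^{l}}\right)^{2} \mathrm{d}x, \qquad k = 0, \dots, r-1$$

where $p_k$ is the interpolating polynomial on the k-th candidate stencil and $r$ the number of cells per stencil; on non-uniform grids, expanding these integrals in closed form is what becomes unwieldy beyond third order.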
Stefano Zaghi
@szaghi
@milancurcic Milan, thank you very much for your hints, it is very appreciated. I'll look at ESMF and I'll try to discuss with Jacob about possible modifications of his libraries. Cheers
Izaak "Zaak" Beekman
@zbeekman
@giacombum I have always used curvilinear grids, so that the problem can be transformed to an equivalent formulation in computational space. Are you and @szaghi creating an unstructured solver?
Giacomo Rossi
@giacombum
You are saying that, as with curvilinear grids, non-uniform grids can be mapped to uniform grids in computational space? This is true, and of course it can save a lot of computation, because the WENO formulation for uniform grids can be used. But I also want to use WENO interpolation for a joined-grids approach: two grids with different spatial resolutions are joined together, so I have to interpolate the values on the ghost cells of the finer grid from the "true" points of the fine and coarse grids... Maybe I haven't explained it very well...
For the transformation from physical to computational space, do you use the Jacobian?
Izaak "Zaak" Beekman
@zbeekman
Yes, the transformation uses the Jacobian. So you're using multi-grid? Or chimera/overset grids?
Stefano Zaghi
@szaghi

@zbeekman Giacomo uses a kind of multi-grid (non-dynamic/adaptive grid refinement), while I am using dynamic overlapping grids (Chimera) with multi-grid acceleration as well, but the point is that we want to have:

  • multi-grid residual acceleration;
  • AMR (dynamic/adaptive mesh refinement);
  • dynamic overlapping grids (Chimera with moving bodies);
  • initially based on structured (body-fitted) blocks, but at some point we want to:
    • also mix in unstructured blocks (to easily mesh regions of low interest);
    • also mix in immersed boundaries;
    • track sharp interfaces between:
      • the same fluid with different states or phases;
      • different fluids.

One crucial point (among others) is doing interpolation between very different solutions computed on the same region.

Giacomo Rossi
@giacombum
@zbeekman And how do you evaluate the Jacobian elements? I refer to the derivatives of the physical coordinates with respect to the computational coordinates, or vice versa.
Izaak "Zaak" Beekman
@zbeekman
Well, it depends on how the grid was generated... I have done some composition of analytic functions (conformal mapping, which is invertible, and 1-D stretching functions), or finite differences if the grid was numerically generated (elliptic, hyperbolic, etc.). Obviously the difficulty becomes proving > 2nd-order accuracy if you also have a numerically generated grid and are using FD etc. to compute the Jacobians or grid metrics... But the case that Stefano describes sounds very complicated... In particular, with overset/chimera grids you lose conservation in the interpolated region... There is some work being done to ensure stable, conservative interpolation, but it's a real pain....
As for the expense of the smoothness coefficients, yes, that is a big problem. I haven't tried or researched it, but I wonder if you could switch your basis from Lagrange polynomials to something else that would yield a simpler formulation?
A good approach might be to use a compact WENO higher order scheme... I think you only need nearest neighbor for that approach, but it's been a long time since I have read the paper: https://scholar.google.com/scholar?hl=en&q=compact+WENO+scheme+baeder&btnG=&as_sdt=1%2C7&as_sdtp=
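As an aside on the Jacobian question above, these are the 2-D relations involved (standard notation, not tied to anyone's particular code: $(x, y)$ physical coordinates, $(\xi, \eta)$ computational coordinates):

$$J = x_\xi y_\eta - x_\eta y_\xi, \qquad \xi_x = \frac{y_\eta}{J}, \quad \xi_y = -\frac{x_\eta}{J}, \quad \eta_x = -\frac{y_\xi}{J}, \quad \eta_y = \frac{x_\xi}{J}$$

so the metrics can be obtained either analytically (from composed mappings) or by finite differencing the grid-point coordinates when the grid is numerically generated.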
Izaak "Zaak" Beekman
@zbeekman
If you can increase the bandwidth-resolving efficiency and order of accuracy of the scheme, sometimes having more expensive computations can be close to free, as long as you have good data locality... These days, on KNL etc., data motion and the memory hierarchy can often be a much bigger bottleneck than FLOPS, whose relative cost has been declining. Having a higher arithmetic intensity (i.e., not being memory bound) can be cheap relative to less computationally intensive algorithms that are memory bound. I would check out compact WENO schemes from Baeder (he was a student of McCormick's, just like Graham Candler, who is my ex-advisor's advisor....)
Izaak "Zaak" Beekman
@zbeekman

A little bit late, but just FYI, a new release of OpenCoarrays is here. Here are the release notes and download links etc.:



Enhancements

  • Patch gcc's contributed download_prerequisites script to detect presence of wget and try alternates like curl if it is missing

Added Regression Tests

Added GCC 7 trunk regression tests. Thanks to @egiovan for reporting the segfaults with event post when num_images > 2 and the regressions in type conversion before coarray puts with GCC trunk.

Installation

Please see the installation instructions for more details on how to build and install this version of OpenCoarrays



Stefano Zaghi
@szaghi
@zbeekman Zaak, thank you very much for your hints, they are really interesting. In the past I studied Martin's works, so I am aware of the bandwidth-optimized models. Simply, my time was (is) limited... they will surely be strongly considered for addition to WenOOF. Compact WENO, on the contrary, is a new concept for me! Thank you very much!
Izaak "Zaak" Beekman
@zbeekman

I'm almost done with a mac OS X Homebrew "Formula" for OpenCoarrays, but it's not quite popular enough to meet the required "notoriety" metrics to be included in the main package repository (homebrew-core tap). We only need 3 more forks, 6 more watchers and/or 16 more stargazers. Any help in increasing our popularity metrics would be much appreciated!



Once in Homebrew it will get picked up into the downstream LinuxBrew as well
Stefano Zaghi
@szaghi
@zbeekman I have already starred and watched it; I have now forked it too. Tomorrow I'll force my boss to do the same if there is still a need :smile:
I'll start with someone who is not my boss... @giacombum Giacomo, please star, watch and fork opencoarrays :smile:
Izaak "Zaak" Beekman
@zbeekman
@szaghi thanks :bow: due to your fork we are over the threshold for popularity/notoriety!
For those interested, the PR is here: Homebrew/homebrew-core#8790
Stefano Zaghi
@szaghi
:tada: :tada: :tada:
Giacomo Rossi
@giacombum
I've already watched and starred this awesome project... now I'll fork it!!!!
done!
Izaak "Zaak" Beekman
@zbeekman
I am pleased to announce that OpenCoarrays is now available through the Homebrew package manager for OS X!
brew update
brew install --cc=gcc-6 OpenCoarrays # installs fine with clang too if you drop --cc=gcc-6
Stefano Zaghi
@szaghi
:tada: happy for you!