Stefano Zaghi
@szaghi
@zbeekman :+1:
Izaak "Zaak" Beekman
@zbeekman
@/all If anyone wants to try nightly GFortran builds from GCC trunk, you can do so very easily with my nightly-gcc-trunk-docker-image project
I use my other project to trigger nightly builds of the docker image: https://github.com/zbeekman/nightly-docker-rebuild
Milan Curcic
@milancurcic
Thanks @zbeekman, I made a note of this. It will come in very useful when I start playing with gcc7 (I haven't yet). I started playing with docker containers recently and absolutely love them!
Izaak "Zaak" Beekman
@zbeekman
@milancurcic great! Feel free to star/watch/fork!
Stefano Zaghi
@szaghi
@zbeekman Thank you very much. I have already (ab)used gcc 7, but having your nightly build system is awesome. The con is that I have to learn docker, and my time has nearly vanished...
Izaak "Zaak" Beekman
@zbeekman
@szaghi well, docker is pretty simple... you use it essentially as a virtual machine that has GCC 7 trunk installed in it. Get docker installation instructions for Arch Linux here, and once it's installed and the docker engine is running, just docker pull zbeekman/nightly-gcc-trunk-docker-image:latest and then, to run it and mount a local directory, say with some source code: docker run -v /local/code/source:/mount/path/on/vm -i -t zbeekman/nightly-gcc-trunk-docker-image If you want root, add --user root to that, so you can install some packages etc... although I may give the default user sudo privileges soon...
Stefano Zaghi
@szaghi
@zbeekman Great! Thanks!
Chris MacMackin
@cmacmackin
Hi all. I'd like to make a small announcement: I am releasing a simple logging library for Fortran, called Flogging. It is licensed under the LGPL. The easiest way to install it is from the FLATPack repository (which is still in an embryonic stage) with the Spack package manager.
Stefano Zaghi
@szaghi
@cmacmackin
Stefano Zaghi
@szaghi
@cmacmackin Flogging is very interesting (damn, I had missed it; your announcement is fundamental!!!???!!). I have to study it, because I have never used Python's logging library. A wild question (before studying it): is using it in a parallel environment viable? It could be very handy for debugging my CAF experiments...
Chris MacMackin
@cmacmackin
I haven't put much thought into a parallel environment. As it stands now, I'm pretty sure each image would end up creating its own log file, which isn't really ideal. I'll also probably need to change the IO slightly to make it thread-safe.
Stefano Zaghi
@szaghi
@/all Happy Christmas! :santa:
Stefano Zaghi
@szaghi

@/all @jacobwilliams

Multidimensional interpolation on non-uniform grids

Dear all,
I am now preparing to implement some interpolation procedures for a few different purposes:

  • for WENO reconstruction with @giacombum;
  • for mesh-to-mesh solution interpolation;

I am a fan of @jacobwilliams's interpolators and, indeed, I had already planned to use his libraries. However, I have just realized that for my needs two features are necessary:

  • multidimensional;
  • non-uniform grid spacing;

Before reinventing the wheel or re-studying the state-of-the-art literature: are you aware of any such interpolator libraries? Obviously FOSS, and possibly in Fortran (or Fortran-friendly).

Jacob's libraries (bspline in particular) are currently my best option (see the sketch below): our meshes are in general very slowly changing and can be viewed (approximated/considered) as locally uniform, but for the WENO aim a non-uniform interpolator can be very useful, if only for study purposes.

Happy new year :tada: :tada:
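
For concreteness, the kind of usage I have in mind with Jacob's library is roughly the following (a minimal sketch: the type/method names come from my quick reading of bspline-fortran, so treat them as an assumption to be checked against its docs):

```fortran
program bspline_sketch
  ! Sketch: 2D tensor-product B-spline interpolation on a non-uniform
  ! (but structured) grid with jacobwilliams/bspline-fortran.
  use bspline_module, only: bspline_2d  ! assumed module/type names
  implicit none
  integer, parameter :: nx = 8, ny = 6, kx = 4, ky = 4  ! order 4 = cubic
  real(8)          :: x(nx), y(ny), fcn(nx,ny), f
  type(bspline_2d) :: interp
  integer          :: i, j, iflag

  ! non-uniform abscissae (geometric stretching, just as an example)
  x = [(1.0d0 - 0.9d0**i, i = 1, nx)]
  y = [(1.0d0 - 0.8d0**j, j = 1, ny)]
  do j = 1, ny
    do i = 1, nx
      fcn(i,j) = sin(x(i))*cos(y(j))
    end do
  end do

  call interp%initialize(x, y, fcn, kx, ky, iflag)   ! check iflag per the library's conventions
  call interp%evaluate(0.3d0, 0.2d0, 0, 0, f, iflag) ! idx = idy = 0: plain value, no derivative
  print *, 'interpolated value:', f
  call interp%destroy()
end program bspline_sketch
```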

Milan Curcic
@milancurcic

@szaghi I have been using ESMF for years for interpolation; take a look at its capabilities: https://www.earthsystemcog.org/projects/esmf/regridding_esmf_6_3_0rp1

Pros:

  • parallel (MPI), including different domain partitioning between source and target grids
  • works for both structured and unstructured grids.
  • has a CLI tool for regridding

Cons:

  • Quite a large project if you only need interpolation
  • Only 2-d and 3-d interpolation supported AFAIK
  • No fancy interpolation methods - just nearest neighbor, bilinear, and conservative.

ESMF is probably not something you will want to use, but I recommend you look at it for inspiration, especially how it handles grid-to-grid interpolation in parallel and interpolates between different domain partitions.

Giacomo Rossi
@giacombum
The main problem with WENO reconstruction on non-uniform grids is the evaluation of the smoothness indicator coefficients: the most common formula for the smoothness indicators is a sum of integrals of the squares of the derivatives of the interpolating polynomial. When the interpolating polynomials are Lagrange polynomials (as in the original work of Shu), the evaluation of these coefficients is very laborious for orders greater than 3, because the analytical expressions are very, very complex. Are you aware of this problem, and do you perhaps know a simpler solution? Obviously @szaghi can describe the problem we have encountered better than me!
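(For reference, the indicator I mean is the classical Jiang-Shu one:

$$\beta_k = \sum_{l=1}^{r-1} \Delta x^{2l-1} \int_{x_{i-1/2}}^{x_{i+1/2}} \left( \frac{\mathrm{d}^l p_k(x)}{\mathrm{d}x^l} \right)^2 \mathrm{d}x$$

where $p_k$ is the reconstruction polynomial on stencil $k$ and $r$ the number of cells per stencil. On a uniform grid this collapses to fixed quadratic forms in the cell averages; on a non-uniform grid the coefficients depend on the local spacings, which is where the expressions explode.)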
Stefano Zaghi
@szaghi
@milancurcic Milan, thank you very much for your hints, they are much appreciated. I'll look at ESMF, and I'll try to discuss possible modifications of his libraries with Jacob. Cheers
Izaak "Zaak" Beekman
@zbeekman
@giacombum I have always used curvilinear grids, so that the problem can be transformed to an equivalent formulation in computational space. Are you and @szaghi creating an unstructured solver?
Giacomo Rossi
@giacombum
You are saying that, as with curvilinear grids, non-uniform grids can be mapped to uniform grids in computational space? This is true, and of course it can save a lot of computation, because the WENO formulation for uniform grids can be used. But I also want to use WENO interpolation for a joined-grids approach: two grids with different spatial resolution are joined together, so I have to interpolate the values on the ghost cells of the finest grid from the "true" points of the fine and coarse grids... Maybe I haven't explained it very well...
For the transformation from physical to computational space, do you use the Jacobian?
Izaak "Zaak" Beekman
@zbeekman
Yes, transforming uses the Jacobian. So you're using multi-grid? Or chimera/overset grids?
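For the record, the 2-D identities we are both talking about are the standard curvilinear metric relations (nothing specific to any one code):

$$J = x_\xi y_\eta - x_\eta y_\xi, \qquad \xi_x = \frac{y_\eta}{J}, \quad \xi_y = -\frac{x_\eta}{J}, \quad \eta_x = -\frac{y_\xi}{J}, \quad \eta_y = \frac{x_\xi}{J}$$

so once you can evaluate $x_\xi, x_\eta, y_\xi, y_\eta$ (analytically or by finite differences), all the metric terms follow.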
Stefano Zaghi
@szaghi

@zbeekman Giacomo used a kind of multi-grid (non-dynamic/adaptive grid refinement); I am using dynamic overlapping grids (Chimera) with multi-grid acceleration as well, but the point is that we want to have:

  • multi-grid residual acceleration;
  • AMR (dynamic/adaptive mesh refinement);
  • dynamic overlapping grids (Chimera with moving bodies);
  • initially based on structured (body-fitted) blocks, but at some point we want to:
    • mix in unstructured blocks too (to ease meshing regions of low interest);
    • mix in immersed boundaries too;
    • track sharp interfaces between:
      • the same fluid with different states or phases;
      • different fluids.

One crucial point (among others) is doing interpolation between very different solutions computed on the same region.

Giacomo Rossi
@giacombum
@zbeekman And how do you evaluate the Jacobian elements? I refer to the derivatives of the physical coordinates with respect to the computational coordinates, or vice versa.
Izaak "Zaak" Beekman
@zbeekman
Well, it depends how the grid was generated... I have done some composition of analytic functions (conformal mappings, which are invertible, and 1-D stretching functions), or finite differences if the grid was generated numerically (elliptic or hyperbolic etc.). Obviously the difficulty becomes proving > 2nd-order accuracy if you then also have a numerically generated grid and are using FD etc. to compute the Jacobians or grid metrics... But the case that Stefano describes sounds very complicated... In particular, with overset/chimera grids you lose conservation in the interpolated region... There is some work being done to ensure stable, conservative interpolation, but it's a real pain...
As for the expense of the smoothness coefficients, yes, that is a big problem. I haven't tried or researched it, but I wonder if you could switch your basis from Lagrange polynomials to something else that would yield a simpler formulation?
A good approach might be to use a compact WENO higher-order scheme... I think you only need nearest neighbors for that approach, but it's been a long time since I read the paper: https://scholar.google.com/scholar?hl=en&q=compact+WENO+scheme+baeder&btnG=&as_sdt=1%2C7&as_sdtp=
Izaak "Zaak" Beekman
@zbeekman
If you can increase the bandwidth-resolving efficiency and order of accuracy of the scheme, sometimes having more expensive computations can be close to free, as long as you have good data locality... These days, on KNL etc., data motion and the memory hierarchy can often be a much bigger bottleneck than FLOPS, whose relative cost has been declining. Having a higher arithmetic intensity (i.e. not being memory-bound) can be cheap relative to less computationally intensive algorithms that are memory-bound. I would check out the compact WENO schemes from Baeder (he was a student of McCormick's, just like Graham Candler, who is my ex-advisor's advisor...)
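In roofline-model terms, what I am arguing is just:

$$I = \frac{\text{FLOPs}}{\text{bytes moved}}, \qquad P_{\text{attainable}} = \min\left( P_{\text{peak}},\; I \times B_{\text{mem}} \right)$$

so raising the arithmetic intensity $I$ of the scheme moves you away from the memory-bandwidth-bound regime.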
Izaak "Zaak" Beekman
@zbeekman

A little bit late, but just FYI, a new release of OpenCoarrays is here. Here are the release notes and download links etc.:



Enhancements

  • Patch gcc's contributed download_prerequisites script to detect the presence of wget and try alternatives like curl if it is missing

Added Regression Tests

Added GCC 7 trunk regression tests. Thanks to @egiovan for reporting the segfaults with event post when num_images > 2, and the type-conversion-before-coarray-puts regressions with GCC trunk.
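
(For anyone unfamiliar, the event post being exercised there is the Fortran 2018 coarray events feature; a minimal sketch of the pattern, not the actual regression test, is:

```fortran
program event_sketch
  ! Minimal coarray events example: all worker images notify image 1.
  use, intrinsic :: iso_fortran_env, only: event_type
  implicit none
  type(event_type) :: done[*]  ! one event variable per image

  if (this_image() /= 1) then
    event post (done[1])                          ! worker: notify image 1
  else
    event wait (done, until_count=num_images()-1) ! image 1: wait for everyone
    print '(a)', 'all images reported in'
  end if
end program event_sketch
```

compiled and launched with OpenCoarrays' caf/cafrun wrappers.)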

Installation

Please see the installation instructions for more details on how to build and install this version of OpenCoarrays.



Stefano Zaghi
@szaghi
@zbeekman Zaak, thank you very much for your hints, they are really interesting. In the past I studied Martin's works, so I am aware of the bandwidth-optimized models. Simply, my time was (is) limited... they will surely be strongly considered for addition to WenOOF. Compact WENO, on the contrary, is a new concept for me! Thank you very much!
Izaak "Zaak" Beekman
@zbeekman

I'm almost done with a Mac OS X Homebrew "Formula" for OpenCoarrays, but it's not quite popular enough to meet the required "notoriety" metrics to be included in the main package repository (the homebrew-core tap). We only need 3 more forks, 6 more watchers and/or 16 more stargazers. Any help in increasing our popularity metrics would be much appreciated!



Once in Homebrew it will get picked up into the downstream LinuxBrew as well
Stefano Zaghi
@szaghi
@zbeekman I have already starred and watched it; I have now forked it too. Tomorrow I'll force my boss to do the same if there is still the need :smile:
I'll start with someone who is not my boss... @giacombum Giacomo, star, watch, and fork opencoarrays please :smile:
Izaak "Zaak" Beekman
@zbeekman
@szaghi thanks :bow: due to your fork we are over the threshold for popularity/notoriety!
For those interested, the PR is here: Homebrew/homebrew-core#8790
Stefano Zaghi
@szaghi
:tada: :tada: :tada:
Giacomo Rossi
@giacombum
I've already watched and starred this awesome project... now I'll fork it!!!!
done!
Izaak "Zaak" Beekman
@zbeekman
I am pleased to announce that OpenCoarrays is now available through the Homebrew package manager for OS X!
brew update
brew install --cc=gcc-6 OpenCoarrays # installs fine with clang too if you drop --cc=gcc-6
Stefano Zaghi
@szaghi
:tada: happy for you!
Izaak "Zaak" Beekman
@zbeekman
Has anyone tried the IDE at https://simplyfortran.com? It looks pretty amazing, to be honest... I have maybe half of the functionality it provides via my custom emacs setup, but it was a real pain to implement.
Stefano Zaghi
@szaghi
@zbeekman I haven't tried it. It looks interesting; however, reading the main features, I cannot say why I should prefer it over VIM: with VIM (with a lot of plugins, yes, but it is super easy to use vim plugins, directly from github!) I have almost all of those features (I think only visual debugging is still a pain with vim, but my debugging style mostly relies on print*, 'I am here'...) and many more vimish features... Why do you say that emacs has half of the functionality? Renaming, autocompletion, search (tree view), etc. are available in vim and, I think, also in emacs. I admit I do not consider the legacy support a feature, but maybe it is for you. Also, the auto-documentation is not a feature for me; I am now too addicted to FORD :smile: I see only the support for visual debugging as a plus that is still missing in vim. Can you tell me which features you are particularly interested in? Cheers
Giacomo Rossi
@giacombum
@szaghi your debugging messages are a little different :smile: :smile: :smile:
Stefano Zaghi
@szaghi
@giacombum I like that Zaak still thinks I am polite... but I have to admit that I am Latin, especially when debugging...
Giacomo Rossi
@giacombum
@zbeekman dear Izaak, what about evaluating the Jacobian matrix via compact differentiation and interpolation? If you have to evaluate the terms dx/dcsi and so on, the computational grid is regular, and so the coefficients of the compact scheme are available in a lot of papers...
Izaak "Zaak" Beekman
@zbeekman
Well it depends how the grid was generated... I have done some composition of analytic functions (conformal mapping which is investable, and 1-D stretching functions) or finite difference if the grid was elliptic or hyperbolic etc numerically generated. Obviously the difficulty becomes proving > 2nd order accuracy if you then also have a numerically generated grid and are using FD etc. to compute the jacobians or grid metrics... But the case that Stefano describes sounds very complicated... In particular with overset/chimera grids you lose conservation in the interpolated region... There is some work being done to ensure stable, conservative interpolation, but it's a real pain....
Stefano Zaghi
@szaghi

@/all

Dear all, I am going to define the IO format for a new CFD code :tada: :tada: :tada:

I am really in doubt about selecting a convenient approach for my needs. Essentially, the new code is a multi-block, Chimera-AMR, finite-volume code. In the past I have adopted two different approaches:

  1. a single file whose IO is synced by a master processor: this is not a really parallel IO approach, due to the needed barrier, so it is not efficient on large-scale clusters;
  2. each processor takes care of its own IO: this is very efficient because the IO is really parallel, but for the new code there are some issues...

In fact, the new code will have AMR capability (the others did not), thus the workload on each processor is not static for the whole simulation; rather, it changes step by step. Each processor has a varying number of blocks in charge, so I cannot simply use the pattern 1-processor => 1-multi-block-file, as I did in the past.

The simplest solution for me is a new pattern, 1-block => 1-file: surely it will be parallel-efficient. The con is that for production simulations I will have billions of files for time-accurate unsteady sims (1000-blocks X 1000... steps...) :cry:

Other possibilities that I see are:

  1. MPI IO: I have never used it; I could (re-)learn it, but given that I hope to switch soon from MPI to CAF, I am really in doubt whether it is worth investing my time in it (see the sketch below);
  2. HDF5: I have never used it, but when @francescosalvadore showed it to me it looked amazing; one issue is that some of my bosses have trouble installing exotic libraries on their old CentOS-based workstations (I had issues installing Python 2.7, which is about 10 years old...): remaining in the Fortran/MPI field could be better for them.

Any suggestions are much more than welcome.
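
To fix ideas on option 1, the MPI IO pattern I am contemplating is something like the following sketch (names like my_blocks and the one-shared-file layout indexed by global block ID are hypothetical, invented here for illustration):

```fortran
program block_io_sketch
  ! Sketch of option 1 (MPI IO): every processor writes the blocks it
  ! currently owns into one shared file, at offsets computed from global
  ! block IDs, so the file layout is independent of AMR load re-balancing.
  use mpi
  implicit none
  integer, parameter :: cells_per_block = 1000
  integer :: fh, ierr, b
  integer :: my_blocks(2) = [3, 7]       ! global IDs this rank owns right now
  integer(MPI_OFFSET_KIND) :: offset
  real(8) :: buf(cells_per_block)

  call MPI_Init(ierr)
  call MPI_File_open(MPI_COMM_WORLD, 'solution.dat', &
                     MPI_MODE_CREATE + MPI_MODE_WRONLY, MPI_INFO_NULL, fh, ierr)
  do b = 1, size(my_blocks)
    buf = real(my_blocks(b), 8)          ! stand-in for real block data
    offset = int(my_blocks(b) - 1, MPI_OFFSET_KIND) * cells_per_block * 8
    ! independent (non-collective) write: ranks may own different block counts
    call MPI_File_write_at(fh, offset, buf, cells_per_block, &
                           MPI_DOUBLE_PRECISION, MPI_STATUS_IGNORE, ierr)
  end do
  call MPI_File_close(fh, ierr)
  call MPI_Finalize(ierr)
end program block_io_sketch
```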

My best regards.

Stefano Zaghi
@szaghi
@zbeekman Indeed, our chimera approach (based on a sort of volume-forcing interpolation) does a good job of preserving conservation: surely, some is lost in the chimera interpolation (and a fully conservative scheme is itself a chimera...), but it is not dramatic, and it is really comparable with other sources of error for challenging simulations like a ship maneuvering in a formed sea. The interpolation Giacomo is referring to happens within each single block, not in the chimera interpolation. We use WENO interpolation for high-order flux reconstruction inside the block, while the block-to-block chimera interpolation is done with another approach.
Giacomo Rossi
@giacombum
@szaghi well, I think you already know my answer... If you have to re-learn MPI IO, well, why not learn HDF5 instead? And if your bosses use out-of-date software, well, that isn't your problem... :smile:
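And for flavor, writing one block with the HDF5 Fortran API looks roughly like this (a minimal serial sketch with made-up file/dataset names; parallel HDF5 additionally needs an MPI-enabled build and h5pset_fapl_mpio_f on a file-access property list):

```fortran
program hdf5_sketch
  ! Minimal serial HDF5 write: one 2D dataset into one file.
  use hdf5
  implicit none
  integer(hid_t)   :: file_id, space_id, dset_id
  integer(hsize_t) :: dims(2) = [100_hsize_t, 100_hsize_t]
  integer          :: ierr
  real(8)          :: field(100,100)

  field = 1.0d0                          ! stand-in for block data
  call h5open_f(ierr)                    ! initialize the library
  call h5fcreate_f('block0001.h5', H5F_ACC_TRUNC_F, file_id, ierr)
  call h5screate_simple_f(2, dims, space_id, ierr)
  call h5dcreate_f(file_id, 'pressure', H5T_NATIVE_DOUBLE, space_id, dset_id, ierr)
  call h5dwrite_f(dset_id, H5T_NATIVE_DOUBLE, field, dims, ierr)
  call h5dclose_f(dset_id, ierr)
  call h5sclose_f(space_id, ierr)
  call h5fclose_f(file_id, ierr)
  call h5close_f(ierr)
end program hdf5_sketch
```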