Giacomo Rossi
@giacombum
@szaghi your debugging messages are a little different :smile: :smile: :smile:
Stefano Zaghi
@szaghi
@giacombum I like that Zaak still thinks I am polite... but I have to admit that I am Latin, especially when debugging...
Giacomo Rossi
@giacombum
@zbeekman dear Izaak, what about evaluating the Jacobian matrix via compact differentiation and interpolation? If you have to evaluate the terms dx/dcsi and so on, the computational grid is regular, so the coefficients of the compact scheme are available in a lot of papers...
Izaak "Zaak" Beekman
@zbeekman
Well, it depends on how the grid was generated... I have done some composition of analytic functions (conformal mapping, which is invertible, and 1-D stretching functions) or finite differences if the grid was numerically generated (elliptic, hyperbolic, etc.). Obviously the difficulty becomes proving > 2nd-order accuracy if you also have a numerically generated grid and are using FD etc. to compute the Jacobians or grid metrics... But the case that Stefano describes sounds very complicated... In particular, with overset/chimera grids you lose conservation in the interpolated region... There is some work being done to ensure stable, conservative interpolation, but it's a real pain....
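For the regular-grid case Giacomo mentions, the simplest flavour of the FD route is just central differences for the metric terms; here is a minimal Fortran sketch (array names are invented and unit csi spacing is assumed; a compact scheme would replace the explicit stencil with a tridiagonal solve):

```fortran
! Sketch only: 2nd-order central differences for the metric terms dx/dcsi,
! dy/dcsi on a structured grid x(i,j), y(i,j). Names and sizes are invented.
subroutine grid_metrics_csi(ni, nj, x, y, x_csi, y_csi)
  implicit none
  integer, intent(in)  :: ni, nj
  real(8), intent(in)  :: x(ni,nj), y(ni,nj)
  real(8), intent(out) :: x_csi(ni,nj), y_csi(ni,nj)
  integer :: i, j
  do j = 1, nj
    ! one-sided differences at the csi boundaries
    x_csi(1,j)  = x(2,j)  - x(1,j)
    y_csi(1,j)  = y(2,j)  - y(1,j)
    x_csi(ni,j) = x(ni,j) - x(ni-1,j)
    y_csi(ni,j) = y(ni,j) - y(ni-1,j)
    ! central differences in the interior (unit csi spacing assumed)
    do i = 2, ni - 1
      x_csi(i,j) = 0.5d0*(x(i+1,j) - x(i-1,j))
      y_csi(i,j) = 0.5d0*(y(i+1,j) - y(i-1,j))
    end do
  end do
end subroutine grid_metrics_csi
```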
Stefano Zaghi
@szaghi

@/all

Dear all, I am going to define the IO format for a new CFD code :tada: :tada: :tada:

I am really in doubt about selecting a convenient approach for my needs. Essentially, the new code is a multi-block, Chimera-AMR, finite-volume code. In the past I have adopted two different approaches:

  1. a single file whose IO is synced by the master processor: this is not a really parallel IO approach due to the needed barrier, thus it is not efficient on large-scale clusters;
  2. each processor handles its own IO: this is very efficient because the IO is really parallel, but for the new code there are some issues...

In fact, the new code will have AMR capability (the others did not), thus the workload on each processor is not static for the whole simulation; rather, it changes step by step. Each processor therefore has a varying number of blocks in charge, so I cannot simply use the pattern 1 processor => 1 multi-block file, as I did in the past.

The simplest solution for me is a new pattern 1 block => 1 file: surely it will be parallel-efficient. The con is that for production simulations I will have billions of files for time-accurate unsteady sims (1000-blocks X 1000... steps...) :cry:

Other possibilities that I see are:

  1. MPI IO: I never used it, and I could re-learn it, but since I hope to switch soon from MPI to CAF, I am really in doubt whether it is worth investing my time in it (a minimal sketch of the idea is below);
  2. HDF5: I never used it, but when @francescosalvadore showed it to me it looked amazing; one issue is that some of my bosses have trouble (old CentOS-based workstations) installing exotic libraries (I had trouble installing Python 2.7, which is about 10 years old...): remaining in the Fortran/MPI field could be better for them.
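Just for flavour, a minimal sketch of what option 1 (MPI IO) can look like: each rank writes its own contiguous chunk of one shared file at an offset obtained from a prefix sum of the (possibly unbalanced) local sizes. Sizes and names below are invented, and a real AMR layout would still need an index/header on top of this:

```fortran
! Sketch only: each MPI rank writes its (possibly unbalanced) chunk of cells
! into ONE shared file at an offset obtained from a prefix sum of local sizes.
program mpi_io_sketch
  use mpi
  implicit none
  integer :: ierr, rank, nranks, fh
  integer :: ncell, ncell_before
  integer(MPI_OFFSET_KIND) :: offset
  real(8), allocatable :: field(:)

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nranks, ierr)

  ncell = 1000 + 10*rank              ! pretend the workload changes per rank
  allocate(field(ncell)); field = real(rank, 8)

  ! exclusive prefix sum: number of cells owned by lower ranks
  call MPI_Exscan(ncell, ncell_before, 1, MPI_INTEGER, MPI_SUM, MPI_COMM_WORLD, ierr)
  if (rank == 0) ncell_before = 0
  offset = int(ncell_before, MPI_OFFSET_KIND) * 8_MPI_OFFSET_KIND   ! 8 bytes per real(8)

  call MPI_File_open(MPI_COMM_WORLD, 'solution.raw', &
                     MPI_MODE_WRONLY + MPI_MODE_CREATE, MPI_INFO_NULL, fh, ierr)
  call MPI_File_write_at_all(fh, offset, field, ncell, MPI_DOUBLE_PRECISION, &
                             MPI_STATUS_IGNORE, ierr)
  call MPI_File_close(fh, ierr)
  call MPI_Finalize(ierr)
end program mpi_io_sketch
```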

Any suggestions are much more than welcome.

My best regards.

Stefano Zaghi
@szaghi
@zbeekman Indeed, our chimera approach (based on a sort of volume-forcing interpolation) does a good job for conservation preservation: surely, some is lost in the chimera interpolation (and the fully conservative scheme is a chimera...), but it is not dramatic and it is really comparable with other sources of error for challenging simulations like a ship maneuvering in a formed sea. The interpolation Giacomo is referring to happens on each single block, not in the chimera interpolation. We use WENO interpolation for high-order flux reconstruction inside the block, while the block-to-block chimera interpolation is done with another approach.
Giacomo Rossi
@giacombum
@szaghi well, I think you already know my answer... If you have to re-learn MPI IO, well, why not learn HDF5 instead? And if your bosses use out-of-date software, well, this isn't your problem... :smile:
Stefano Zaghi
@szaghi
@giacombum Indeed, it is one of my problems :cry:
Giacomo Rossi
@giacombum
@szaghi You know that I know :cry: But I think that when a new CFD code is in development, the use of very out-of-date operating systems on which the code must run can't be a mandatory requirement. If the code is new, it can take advantage of all the new formats, paradigms and software available at the moment. And if you want to use CAF instead of MPI (a very very very good choice), don't you think you will have to face the same problems as with HDF5?
Stefano Zaghi
@szaghi

@/all

I think I found a compromise: each processor/image will write its own single file containing the blocks it handles at that time step. The file format will be a sort of XML-tagged format to allow a dynamic workload and an unordered list of block-id/refinement-level, namely to allow each processor/image to change its workload at runtime. However, this will require an extra post-processor to collect the simulation results so that they can be re-used by runs with a different number of processors/images.

I think that to have only 1 file independently of the number of processors/blocks, only MPI IO or HDF5 can help. I also considered a direct-access file format (each processor/image accesses only its own part of the file to avoid race conditions), but it sounds tricky...
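To fix ideas, a rough sketch of the direct-access option: one shared file with fixed-size records, where the global block id selects the record so that no two writers ever touch the same record (block size and ownership below are hypothetical, and no MPI IO is involved):

```fortran
! Sketch only: one shared file with fixed-size records; the global block id
! selects the record, so writers never touch each other's records.
program direct_access_sketch
  implicit none
  integer, parameter :: ncell_per_block = 16*16*16
  real(8) :: block_data(ncell_per_block)
  integer :: record_len, iunit, ib
  integer :: my_blocks(2)

  inquire(iolength=record_len) block_data    ! record length in file storage units
  open(newunit=iunit, file='solution.da', access='direct', recl=record_len, &
       form='unformatted', action='write', status='unknown')

  my_blocks = [3, 7]                         ! global ids of the blocks owned here
  do ib = 1, size(my_blocks)
    block_data = real(my_blocks(ib), 8)      ! fill with something
    write(iunit, rec=my_blocks(ib)) block_data
  end do
  close(iunit)
end program direct_access_sketch
```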

@giacombum I think that HDF5 and CAF can be mixed almost seamlessly; maybe @zbeekman or @rouson can comment on this.

Cheers

Giacomo Rossi
@giacombum
@szaghi when I refer to the problem of the HDF5 format, I refer to the old workstations with CentOS: do you think that on those machines it is simple to install CAF? And as you can see here, there are already pre-built binaries for CentOS, so... please, consider HDF5 as the future file format for your new code!
Stefano Zaghi
@szaghi
@giacombum We have gfortran 6.2 on that CentOS. I guess that with the great install script of OpenCoarrays, CAF is easy to support for Roberto & Co. On the contrary, I guess that if I force them to use PETSc, HDF5, etc., they will leave me alone :cry: Sadly, there is an inertia in our Institute against learning/adopting new standards.
Izaak "Zaak" Beekman
@zbeekman

MPI IO: I never used it, and I could re-learn it, but since I hope to switch soon from MPI to CAF, I am really in doubt whether it is worth investing my time in it;
HDF5: I never used it, but when @francescosalvadore showed it to me it looked amazing; one issue is that some of my bosses have trouble (old CentOS-based workstations) installing exotic libraries (I had trouble installing Python 2.7, which is about 10 years old...): remaining in the Fortran/MPI field could be better for them.

These are the two solutions I would recommend. Also, N.B. CAF can be safely mixed with MPI (in theory; I have yet to try), at least for OpenCoarrays. The distributed-shared-memory OpenCoarrays implementation is built on top of MPI, but we create private communicators, so it should be safe to use with other MPI elements. I have little experience with MPI-IO, but it may give you the best fine-grained control if that is critical to your application. I tried parallel HDF5 (also built on MPI, and also should be safe to mix with OpenCoarrays) a few years ago and hit some painful bugs with LustreFS and HDF5. Hopefully these are all fixed now and you can use parallel HDF5 without issues. I know, for example, that US3D uses HDF5 for its file format.
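For reference, a minimal parallel-HDF5 sketch in Fortran: the file is opened through the MPI-IO driver and every rank writes its own hyperslab of one shared dataset. Dataset name and sizes are illustrative, error checking is omitted, and it assumes an HDF5 build with parallel support:

```fortran
! Sketch only: every rank writes its hyperslab of one shared dataset through
! the MPI-IO driver. Requires an HDF5 build with parallel support enabled.
program phdf5_sketch
  use mpi
  use hdf5
  implicit none
  integer :: mpierr, herr, rank, nranks
  integer(HID_T)   :: plist_id, file_id, filespace, memspace, dset_id
  integer(HSIZE_T) :: dims(1), count(1), offset(1)
  integer, parameter :: nlocal = 1000
  real(8), allocatable :: field(:)

  call MPI_Init(mpierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, mpierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nranks, mpierr)
  allocate(field(nlocal)); field = real(rank, 8)

  call h5open_f(herr)
  call h5pcreate_f(H5P_FILE_ACCESS_F, plist_id, herr)
  call h5pset_fapl_mpio_f(plist_id, MPI_COMM_WORLD, MPI_INFO_NULL, herr)
  call h5fcreate_f('solution.h5', H5F_ACC_TRUNC_F, file_id, herr, access_prp=plist_id)
  call h5pclose_f(plist_id, herr)

  dims   = int(nlocal, HSIZE_T) * int(nranks, HSIZE_T)   ! global dataset size
  count  = nlocal                                        ! local slice
  offset = int(nlocal, HSIZE_T) * int(rank, HSIZE_T)     ! where this rank writes

  call h5screate_simple_f(1, dims, filespace, herr)
  call h5dcreate_f(file_id, 'density', H5T_NATIVE_DOUBLE, filespace, dset_id, herr)
  call h5sselect_hyperslab_f(filespace, H5S_SELECT_SET_F, offset, count, herr)
  call h5screate_simple_f(1, count, memspace, herr)
  call h5dwrite_f(dset_id, H5T_NATIVE_DOUBLE, field, count, herr, &
                  mem_space_id=memspace, file_space_id=filespace)

  call h5dclose_f(dset_id, herr)
  call h5sclose_f(memspace, herr)
  call h5sclose_f(filespace, herr)
  call h5fclose_f(file_id, herr)
  call h5close_f(herr)
  call MPI_Finalize(mpierr)
end program phdf5_sketch
```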

Izaak "Zaak" Beekman
@zbeekman
@szaghi another idea I like is to store the data very compactly in direct-access binary files, then store the metadata in a self-describing way using JSON (JSON-Fortran) or, if you must, XML. So you can store raw blocks of variables (an ordered list of vectors of node- or cell-centered variables) and then either have 1 file per processor or use direct access to index into each file at the appropriate location. Obviously the connectivity info is also challenging if you're using unstructured grids.
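A possible shape of that idea, assuming json-fortran's json_core API (create_object/create_array/add/print) for the metadata side and plain stream IO for the raw side; file names, block list and field name are invented:

```fortran
! Sketch only: per-processor raw stream file + a small self-describing JSON
! sidecar written with json-fortran. File names, block list and field name
! are invented.
program json_metadata_sketch
  use json_module
  implicit none
  type(json_core)           :: core
  type(json_value), pointer :: root, blocks, blk
  integer, parameter :: nblocks = 2, ncell = 16**3
  real(8) :: buffer(ncell)
  integer :: iunit, ib, pos
  integer :: my_blocks(nblocks)

  my_blocks = [3, 7]                         ! global ids of the blocks owned here

  call core%initialize()
  call core%create_object(root, '')
  call core%create_array(blocks, 'blocks')

  ! 1) raw data: compact stream file, remembering where each block starts
  open(newunit=iunit, file='proc_0001.raw', access='stream', form='unformatted', &
       status='replace', action='write')
  do ib = 1, nblocks
    buffer = real(my_blocks(ib), 8)
    inquire(iunit, pos=pos)                  ! position of the block about to be written
    write(iunit) buffer
    call core%create_object(blk, '')
    call core%add(blk, 'id', my_blocks(ib))
    call core%add(blk, 'refinement_level', 0)
    call core%add(blk, 'offset_bytes', pos)
    call core%add(blocks, blk)
  end do
  close(iunit)

  ! 2) metadata: self-describing JSON next to the raw file
  call core%add(root, 'field', 'density')
  call core%add(root, 'cells_per_block', ncell)
  call core%add(root, blocks)
  call core%print(root, 'proc_0001.json')
  call core%destroy(root)
end program json_metadata_sketch
```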
@/all Travis-CI supports community-contributed and community-supported languages. See: https://github.com/travis-ci/docs-travis-ci-com/blob/gh-pages/user/languages/community-supported-languages.md They require a team of at least three people to be maintainers and contributors with the expertise, etc. It would be great if @szaghi @rouson @jacobwilliams @milancurcic @cmacmackin or anyone else here were interested in helping out with this. Setting up a sane config for Fortran would hopefully make it less of a yak shave to get started using Fortran on Travis-CI. Please let me know if you are interested.
Stefano Zaghi
@szaghi
@zbeekman Thanks for the hints on IO, I'll discuss this with @giacombum and @francescosalvadore. For the Travis support, I surely offer my (poor and scattered) support, but I might not be up to the task: I have to carefully read the mission; later today I'll give you an answer. Cheers
Stefano Zaghi
@szaghi

@zbeekman

Dear Zaak, I am trying your docker image to have a very fresh build of gfortran... it is very fancy: your image installed and ran very smoothly on my Arch Linux, thank you very much!

I am very new to docker and I have not read all the docs... my apologies if the following question is very silly. Running your image with a mounted source_path is handy for testing that source with a new gfortran, but it is tedious to stop the image and run it again with another source path. It would be very handy to use your docker image like I do with python virtualenv: run the image to get the new gfortran and use it on my whole system, not only in 1 mounted path.

Is it possible to run a docker image on the whole file system? Or at least, is it safe to mount one partition as source_path, say to mount /home/stefano?

Cheers

Stefano Zaghi
@szaghi

@zbeekman Maybe I found an answer

https://www.theodo.fr/blog/2015/04/docker-and-virtualenv-a-clean-way-to-locally-install-python-dependencies-with-pip-in-docker/

It seems that I can mount some paths of a running docker image into my system. If this is true, I can mount your (docker image) gfortran path somewhere in my system (say /opt/zaak-gfortran/) and then load it like my other modules... I'll try later today.

Cheers

Stefano Zaghi
@szaghi

@zbeekman

Zaak, I am sorry to bother you, but docker is still confusing to me.

See this guide

https://docs.docker.com/engine/getstarted/step_three/#/step-2-run-the-whalesay-image

It seems it is possible to use docker to directly run a program (the shell program whalesay in that case) from the host system, i.e.

docker run docker/whalesay cowsay 'hello Zaak'

So my new question: is it possible to dockerize gfortran in that way? Say, to be able to run

docker run zaak/gfortran 'foo.f90 -o foo'

Cheers

Izaak "Zaak" Beekman
@zbeekman
Hi Stefano, let me get back to you. I'm pretty sure it shouldn't be too hard to set this up so you can invoke GFortran directly from Docker (like whalesay); however, the compiled code may not run natively on your system outside docker due to glibc, libgfortran, etc.
Also, I think you shouldn't have any problem just mounting your entire home directory in docker, except, perhaps, runtime issues if you try to natively run code you compiled with docker.
Izaak "Zaak" Beekman
@zbeekman
Stefano, yes, it is possible to implement what you're asking... I can try to make an additional tag to provide gfortran, gcc and g++ (each) that will share the same image components except for ENTRYPOINT and CMD
but in the meantime, if you pass something like -c "gfortran --version" or -c "gfortran /path/to/mounted/source/file.f90 && /path/to/mounted/source/a.out" you should be able to use dockerized gfortran with the image as it stands
Izaak "Zaak" Beekman
@zbeekman
At some point I'll look into providing direct calls into gfortran/gcc/g++ but not for a while probably
Izaak "Zaak" Beekman
@zbeekman
Basically, the ENTRYPOINT is the command that the image runs, in this case /bin/bash, and the CMD is -l, which is the default argument passed to the ENTRYPOINT. If you add extra text after docker run ..., it will replace CMD with that extra text. So you can run arbitrary commands by passing either -c "my raw bash commands here, like calls to /usr/local/bin/gfortran" or you can pass it a script that is already cross-mounted with the container.
Hope this helps, @szaghi
Stefano Zaghi
@szaghi

@zbeekman
Thank you very much for your kind replies.

Today I played with docker for learning purposes... I tried to achieve a sort of cross-compiling without success, but I had only a few minutes for my tests; I hope to be luckier tomorrow. However, I have a request for you: I tried to compose your image with another one providing python 2.7, but docker-compose is really non-intuitive (IMO) and it seems not so trivial, at least at the beginning; can you modify your image to include a python 2.7 interpreter? Without a python interpreter I cannot use FoBiS, thus my productivity becomes close to zero... having a python interpreter in your gcc docker image would let me source any virtualenv I created for each Fortran project... it would become very handy.

Cheers

Izaak "Zaak" Beekman
@zbeekman
I haven't tried docker compose yet, I just write Dockerfiles by hand. I'll add python 2.7, pip and FoBiS.py to the image that nightly gcc is based on; then you'll have it.
Stefano Zaghi
@szaghi
@zbeekman Great! Thank you very much! :tada: :tada: :tada:
Izaak "Zaak" Beekman
@zbeekman
@szaghi try pulling the latest docker image... it has python and pip, and it should install to ~/.local, so you can get a virtualenv up and running that way... Let me know if for some reason it's not working as expected
Stefano Zaghi
@szaghi
@zbeekman ok, I'll try it immediately
Stefano Zaghi
@szaghi

@zbeekman,

Dear Zaak, it works perfectly! Thank you very much!!!!! :tada: :tada: :tada:

victorsndvg
@victorsndvg
Sorry all, I don't read this thread very often ...
@szaghi, related to HDF5 and parallel IO with Fortran, maybe you could be interested in XH5For. I've experienced good scalability with it. If you need, I can help you with the first steps! ;)
Stefano Zaghi
@szaghi

@victorsndvg
Dear Victor,
indeed I am studying your library these days, it is cool. Do you think that an AMR structure (a hierarchy of refined multi-block domains) can be easily represented in your XDMF format? My IO involves hexahedron cells with variables saved at vertices and/or at centers (and probably, in the future, also some at faces). Currently, I have implemented a dirty solution as Zaak suggested: a stream-unformatted file where each process can do direct IO at its own position. It is not yet tested in parallel, thus it will probably not scale decently (it is not based on MPI IO). For the moment, I am fine with this IO: I am developing a new CFD code from scratch, thus I am doing a lot of serial tests, but when the code reaches a critical mass I'll start to do parallel tests; at that time I'll decide which way to go: 0) my own dirty stream file, 1) MPI IO, 2) HDF5, 3) some other good library like yours.

For the moment I have to study... Do you think your library can satisfy my needs (parallel AMR on block-structured hexahedrons)?

victorsndvg
@victorsndvg
First of all, I have to say that the library was created because libxdmf (http://www.xdmf.org) is much more than a simple IO library, so for me it was difficult to deal with it, and also because the Fortran layer was not mature enough when I started to develop XH5For (I don't know the current status)
XH5For is a light-weight layer: you only have to pass the number of nodes and elements, and the corresponding arrays per task, and it writes the data into an HDF5 file per time step
it does nothing more
Not true, it also communicates the number of elements and nodes per task. The library only uses broadcasts and allgathers
victorsndvg
@victorsndvg
In this schema you can write nodal and elemental fields for these element types and some more. Of course hexahedron cells are included and fully tested
Unstructured and structured meshes are supported
Face fields are not supported
victorsndvg
@victorsndvg
That's all, I think ...
I don't know exactly what you mean by AMR in terms of the mesh file. XH5For only writes raw meshes and fields. Writing any extra metadata, like boundary conditions or vicinity, is not yet implemented
The last note: It also works sequentially! ;)
Stefano Zaghi
@szaghi

The last note: It also works sequentially!

This is a great plus... but how do you do it? To my knowledge, you have to select between HDF5 and parallel HDF5 a priori...

Indeed, yes, my dirty solution relies on a header where a sort of metadata defines the hierarchy of the saved mesh. Without such information it becomes cumbersome to reconstruct the AMR grid, which can change step by step. I can save the metadata in a separate file, but this is also not very nice
Anyhow, I'll study your library with much more care soon
cheers
victorsndvg
@victorsndvg
The CMake compilation system is the one in charge of detecting whether the HDF5 installation is parallel or not