@/all @jacobwilliams
Dear all,
I am now approaching the implementation of some interpolation procedures for a few different purposes:
I am a fan of @jacobwilliams' interpolators and, indeed, I had already planned to use his libraries. However, I have just realized that for my needs two key features are necessary
Before reinventing the wheel or re-studying the state-of-the-art literature, are you aware of such interpolator libraries? Obviously FOSS, and possibly in Fortran (or Fortran-friendly).
Jacob's libraries (bspline in particular) are currently my best option: our meshes are in general very slowly varying, so they can be viewed (approximated/considered) as locally uniform, but for the WENO purpose a non-uniform interpolator could be very useful, if only for study purposes.
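To make the requirement concrete, here is a minimal sketch of the kind of building block I mean, just plain 1D piecewise-linear interpolation on a non-uniform abscissa (placeholder names, not taken from any library):

```fortran
! Minimal sketch: piecewise-linear interpolation on a non-uniform 1D grid.
! Placeholder names; what I am after is a library doing this at higher order
! and in more dimensions.
module interp_sketch
  use, intrinsic :: iso_fortran_env, only: real64
  implicit none
  private
  public :: interp_linear
contains
  pure function interp_linear(x, f, xq) result(fq)
    !! Linearly interpolate f(x) at xq; x must be strictly increasing
    !! (non-uniform spacing allowed).
    real(real64), intent(in) :: x(:), f(:), xq
    real(real64)             :: fq, w
    integer                  :: i
    ! locate the interval [x(i), x(i+1)] containing xq (simple linear search)
    i = 1
    do while (i < size(x) - 1 .and. xq > x(i+1))
      i = i + 1
    end do
    w  = (xq - x(i)) / (x(i+1) - x(i))   ! local weight, valid for non-uniform spacing
    fq = (1._real64 - w) * f(i) + w * f(i+1)
  end function interp_linear
end module interp_sketch
```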
Happy new year :tada: :tada:
@szaghi I have been using ESMF for years for interpolation; take a look at its capabilities: https://www.earthsystemcog.org/projects/esmf/regridding_esmf_6_3_0rp1
Pros:
Cons:
ESMF is probably not something you will want to use, but I recommend you look at it for inspiration, especially how it handles grid-to-grid interpolation in parallel and interpolates between different domain partitions.
@zbeekman Giacomo used a kind of multi-grid (non-dynamic/adaptive grid refinement); I am using dynamic overlapping grids (Chimera), also with multi-grid acceleration, but the point is that we want to have:
One crucial point (among others) is doing interpolation between very different solutions computed on the same region.
A little bit late, but just FYI, a new release of OpenCoarrays is here. Here are the release notes and download links etc.:
- Updated the download_prerequisites script to detect the presence of wget and try alternates like curl if it is missing
- Added GCC 7 trunk regression tests. Thanks to @egiovan for reporting segfaults with event post when num_images > 2, and the type-conversion-before-coarray-puts regressions with GCC trunk.
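For context, here is a minimal standard Fortran 2018 events sketch in the spirit of the pattern that triggered the reported segfault (illustrative only, not the actual regression test):

```fortran
! Minimal coarray events sketch (standard Fortran 2018), in the spirit of the
! pattern behind the reported segfault; not the actual regression test.
program events_sketch
  use, intrinsic :: iso_fortran_env, only: event_type
  implicit none
  type(event_type) :: ready[*]
  integer :: i
  if (this_image() /= 1) then
    event post (ready[1])            ! every worker image notifies image 1
  else
    do i = 2, num_images()           ! meaningful only when num_images() > 2
      event wait (ready)
    end do
    print *, 'image 1 received', num_images() - 1, 'events'
  end if
end program events_sketch
```

With OpenCoarrays this can be built and run with the caf/cafrun wrappers, e.g. something like `caf events_sketch.f90 && cafrun -n 4 ./a.out`.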
Please see the installation instructions for more details on how to build and install this version of OpenCoarrays
I'm almost done with a macOS Homebrew "Formula" for OpenCoarrays, but it's not quite popular enough to meet the required "notoriety" metrics to be included in the main package repository (homebrew-core tap). We only need 3 more forks, 6 more watchers and/or 16 more stargazers. Any help in increasing our popularity metrics would be much appreciated!
brew update
brew install --cc=gcc-6 OpenCoarrays # installs fine with clang too if you drop cc=
print*, 'I am here'
...) and many more vim-ish features... Why do you say that emacs has half of the functionality? Renaming, autocompletion, search (tree view), etc. are available in vim and I think also in emacs. I admit I do not consider the legacy support a feature, but it may be one for you. Also, the auto-documentation is not a feature for me; I am now too addicted to FORD :smile: I see only the support for visual debugging as a plus that is still missing in vim. Can you tell me which features you are particularly interested in? Cheers
Well it depends how the grid was generated... I have done some composition of analytic functions (conformal mapping, which is invertible, and 1-D stretching functions), or finite differences if the grid was numerically generated (elliptic, hyperbolic, etc.). Obviously the difficulty becomes proving > 2nd order accuracy if you then also have a numerically generated grid and are using FD etc. to compute the Jacobians or grid metrics... But the case that Stefano describes sounds very complicated... In particular, with overset/Chimera grids you lose conservation in the interpolated region... There is some work being done to ensure stable, conservative interpolation, but it's a real pain....
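For the finite-difference route, a minimal 2D sketch of computing the metric terms and Jacobian from a numerically generated grid (my own illustrative code: 2nd-order central differences, interior points only, unit computational spacing assumed):

```fortran
! Illustrative only: 2nd-order central differences for the metrics and Jacobian
! of a numerically generated 2D grid x(i,j), y(i,j); interior points only,
! unit computational spacing assumed.
subroutine grid_metrics(x, y, x_xi, x_eta, y_xi, y_eta, jac)
  use, intrinsic :: iso_fortran_env, only: real64
  implicit none
  real(real64), intent(in)  :: x(:,:), y(:,:)
  real(real64), intent(out) :: x_xi(:,:), x_eta(:,:), y_xi(:,:), y_eta(:,:), jac(:,:)
  integer :: i, j
  x_xi = 0; x_eta = 0; y_xi = 0; y_eta = 0; jac = 1   ! boundaries left untreated here
  do j = 2, size(x, 2) - 1
    do i = 2, size(x, 1) - 1
      x_xi (i,j) = 0.5_real64 * (x(i+1,j) - x(i-1,j))
      x_eta(i,j) = 0.5_real64 * (x(i,j+1) - x(i,j-1))
      y_xi (i,j) = 0.5_real64 * (y(i+1,j) - y(i-1,j))
      y_eta(i,j) = 0.5_real64 * (y(i,j+1) - y(i,j-1))
      ! Jacobian of the mapping (xi,eta) -> (x,y)
      jac(i,j) = x_xi(i,j) * y_eta(i,j) - x_eta(i,j) * y_xi(i,j)
    end do
  end do
end subroutine grid_metrics
```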
@/all
Dear all, I am going to define the IO format for a new CFD code :tada: :tada: :tada:
I am really in doubt about selecting a convenient approach for my needs. Essentially, the new code is a multi-block, Chimera-AMR, finite volume code. In the past I have adopted two different approaches:
In fact, the new code will have AMR capability (the others did not), thus the workload on each processor is not static for the whole simulation; rather, it changes step by step. Each processor therefore has a varying number of blocks in its charge, so I cannot simply use the pattern 1-processor => 1-multi-block-file, as I did in the past.
The simplest solution for me is a new pattern, 1-block => 1-file: surely it will be parallel-efficient. The con is that for production simulations I will have billions of files for time-accurate unsteady sims (1000 blocks x 1000... steps...) :cry:
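To be concrete, the 1-block => 1-file pattern would be nothing fancier than encoding block id and step in the file name; a sketch with made-up naming and record layout:

```fortran
! Sketch of the 1-block => 1-file pattern: one unformatted stream file per
! block per saved step. Naming scheme and record layout are only placeholders.
subroutine save_block(block_id, step, conservative)
  use, intrinsic :: iso_fortran_env, only: real64
  implicit none
  integer,      intent(in) :: block_id, step
  real(real64), intent(in) :: conservative(:,:,:,:)   ! e.g. (nv, ni, nj, nk)
  character(len=64) :: fname
  integer :: funit
  write (fname, '(A,I6.6,A,I8.8,A)') 'blk', block_id, '-stp', step, '.raw'
  open (newunit=funit, file=trim(fname), form='unformatted', access='stream', &
        action='write', status='replace')
  write (funit) shape(conservative)
  write (funit) conservative
  close (funit)
end subroutine save_block
```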
Other possibilities that I see are:
Any suggestions are much more than welcome.
My best regards.
@/all
I think I found a compromise: each processor/image will write its own single file containing the blocks it handles at that time step. The file format will be a sort of XML-tagged format to allow a dynamic workload and an un-ordered list of block-id/refinement-level, namely to allow each processor/image to change its workload at runtime. However, this will require an extra post-processor to collect the simulation results so that they can be re-used by other runs with a different number of processors/images.
I think that to have only 1 file independently of the number of processors/blocks, only MPI IO or HDF5 can help. I also considered a direct-access file format (each processor/image accesses only its own part of the file to avoid race conditions), but it sounds tricky...
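A rough idea of the per-image tagged file I have in mind (tag names and layout are invented on the spot, not a settled format):

```fortran
! Rough sketch of the per-image, XML-tagged container: one file per image per
! step, each block preceded by a small tag carrying its id and refinement
! level. Tag names and layout are invented for illustration only.
subroutine save_image_file(step, block_id, ref_level, field)
  use, intrinsic :: iso_fortran_env, only: real64
  implicit none
  integer,      intent(in) :: step, block_id(:), ref_level(:)
  real(real64), intent(in) :: field(:,:)   ! field(:, b) = flattened data of block b
  character(len=64) :: fname
  integer :: funit, b
  write (fname, '(A,I4.4,A,I8.8,A)') 'img', this_image(), '-stp', step, '.dat'
  open (newunit=funit, file=trim(fname), form='formatted', action='write', status='replace')
  do b = 1, size(block_id)
    write (funit, '(A,I0,A,I0,A)') '<block id="', block_id(b), '" level="', ref_level(b), '">'
    write (funit, '(*(ES23.15))') field(:, b)   ! payload; a real code would likely go unformatted
    write (funit, '(A)') '</block>'
  end do
  close (funit)
end subroutine save_image_file
```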
@giacombum I think that HDF5 and CAF can be mixed almost smoothly; maybe @zbeekman or @rouson can comment on this.
Cheers
MPI IO: I never used it; I could learn it, but given that I hope to switch soon from MPI to CAF, I am really in doubt whether it is worth investing my time in it;
HDF5: I never used it, but when @francescosalvadore showed it to me it looked amazing; one issue is that some of my bosses have trouble installing exotic libraries on their old CentOS-based workstations (I had issues installing Python 2.7, which is about 10 years old...): remaining in the Fortran/MPI field could be better for them.
These are the two solutions I would recommend. Also, N.B. CAF can be safely mixed with MPI (in theory; I have yet to try) at least for OpenCoarrays. The distributed shared-memory OpenCoarrays implementation is built on top of MPI, but we create private communicators, so it should be safe to use with other MPI elements. I have little experience with MPI-IO, but it may give you the best fine-grained control if that is critical to your application. I tried parallel HDF5 (also built on MPI, and also should be safe to mix with OpenCoarrays) a few years ago and hit some painful bugs with LustreFS and HDF5. Hopefully these are all fixed now and you can use parallel HDF5 without issues. I know, for example, that US3D uses HDF5 for its file format.
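A minimal sketch of what I mean by mixing (untested; it assumes the OpenCoarrays runtime has already initialized MPI underneath, which is why it only checks MPI_Initialized and leaves finalization to the coarray runtime):

```fortran
! Untested sketch of mixing coarrays with direct MPI calls under OpenCoarrays.
! Assumption: the coarray runtime has already initialized MPI, so we only call
! MPI_Init if MPI_Initialized reports otherwise, and we leave finalization to
! the runtime.
program caf_plus_mpi
  use mpi
  implicit none
  integer :: rank, ierr
  logical :: already_up
  call MPI_Initialized(already_up, ierr)
  if (.not. already_up) call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  ! image numbering is 1-based, MPI ranks are 0-based; any correspondence
  ! between the two is implementation dependent, so print it for information only
  print *, 'image', this_image(), 'of', num_images(), 'sees MPI rank', rank
  sync all
end program caf_plus_mpi
```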