@zbeekman I really do not know what the standard states about this. Have you found an answer? This is interesting for me as well, that is: does your example imply that there is an implicit synchronization between `me` and `me+1` before the 2nd statement? Or, on the other side of the moon, to achieve asynchronous parallelism, do I have to assign to a coarray rather than get/fetch from it?
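To make the question concrete, here is a minimal sketch of the two variants I have in mind (my own toy code, not your original example), assuming image `me` touches its neighbour `me+1`:
```fortran
program sync_question
  implicit none
  integer :: a[*]     ! coarray datum owned by every image
  integer :: b        ! plain local variable
  integer :: me

  me = this_image()
  a  = 10 * me
  sync all            ! everyone's a is now defined

  if (me < num_images()) then
     ! variant 1: get/fetch from the neighbour...
     b = a[me + 1]
     ! ...2nd statement: is any synchronization with me+1 implied before this?
     print *, 'image', me, 'fetched', b

     ! variant 2 (the alternative I mention): assign/put to the neighbour instead
     ! a[me + 1] = b
  end if
end program sync_question
```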
docker pull zbeekman/nightly-gcc-trunk-docker-image:latest
and then, to run it and mount a local directory (say, with some source code): docker run -v /local/code/source:/mount/path/on/vm -i -t zbeekman/nightly-gcc-trunk-docker-image
If you want root, add --user root to that, so you can install some packages etc... Although I may give the default user sudo privileges soon...
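Putting those pieces together (same image and mount path as above), the full command with root would be something like:
docker run --user root -v /local/code/source:/mount/path/on/vm -i -t zbeekman/nightly-gcc-trunk-docker-image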
@/all @jacobwilliams
Dear all,
I am now about to implement some interpolation procedures for a few different purposes:
I am a fan of @jacobwilliams' interpolators and, indeed, I had already planned to use his libraries. However, I have just realized that for my needs two key features are necessary
Before reinventing the wheel or re-studying the state-of-the-art literature: are you aware of such interpolator libraries? Obviously, FOSS and possibly in Fortran (or Fortran-friendly).
Jacob's libraries (bspline in particular) are currently my best option: our meshes are in general very slowly changing and can be viewed (approximated/considered) as locally uniform, but for the WENO aim a non-uniform interpolator can be very useful, if only for study purposes.
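For concreteness, the non-uniform case I have in mind is, at its simplest, just piecewise-linear interpolation on arbitrarily spaced nodes; a toy sketch (names are mine, not from any library):
```fortran
! Toy sketch: piecewise-linear interpolation on a non-uniform 1D abscissa.
! Assumes x is strictly increasing and xq lies within [x(1), x(size(x))].
pure function interp_linear(x, f, xq) result(fq)
  real, intent(in) :: x(:)   ! non-uniform nodes
  real, intent(in) :: f(:)   ! nodal values, size(f) == size(x)
  real, intent(in) :: xq     ! query point
  real             :: fq
  integer          :: i

  ! locate the bracketing interval [x(i), x(i+1)] (linear search for clarity)
  do i = 1, size(x) - 1
     if (xq <= x(i + 1)) exit
  end do
  i = min(i, size(x) - 1)    ! guard in case the loop runs off the last node

  ! linear blend between the bracketing nodes
  fq = f(i) + (f(i + 1) - f(i)) * (xq - x(i)) / (x(i + 1) - x(i))
end function interp_linear
```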
Happy new year :tada: :tada:
@szaghi I have been using ESMF for years for interpolation; take a look at their capabilities: https://www.earthsystemcog.org/projects/esmf/regridding_esmf_6_3_0rp1
Pros:
Cons:
ESMF is probably not something you will want to use, but I recommend you look at it for inspiration, especially how it handles the grid-to-grid interpolation in parallel and interpolating between different domain partitions.
@zbeekman Giacomo used a kind of multi-grid (non-dynamic/adaptive grid refinement); I am using dynamic overlapping grids (Chimera), also with multi-grid acceleration, but the point is that we want to have:
One crucial point (among others) is doing interpolation with very different solutions computed on the same region.
A little bit late, but just FYI, a new release of OpenCoarrays is here. Here are the release notes and download links etc.:
Updated the download_prerequisites script to detect the presence of wget and try alternates like curl if it is missing. Added GCC 7 trunk regression tests. Thanks to @egiovan for reporting the segfaults with event post when num_images > 2 and the type-conversion-before-coarray-puts regressions with GCC trunk.
Please see the installation instructions for more details on how to build and install this version of OpenCoarrays
I'm almost done with a macOS Homebrew "Formula" for OpenCoarrays, but it's not quite popular enough to meet the required "notoriety" metrics to be included in the main package repository (homebrew-core tap). We only need 3 more forks, 6 more watchers and/or 16 more stargazers. Any help in increasing our popularity metrics would be much appreciated!
brew update
brew install --cc=gcc-6 OpenCoarrays # installs fine with clang too if you drop cc=
print*, 'I am here'
...) and many more vimish features... Why do you say that emacs has half of the functionality? Renaming, autocompletion, search (tree view), etc. are available in vim and I think also in emacs. I admit I do not consider the legacy support a feature, but it may be for you. Also, the auto-documentation is not a feature for me; I am now too addicted to FORD :smile: I see only the support for visual debugging as a plus that is still missing in vim. Can you tell me which features you are particularly interested in? Cheers
Well, it depends how the grid was generated... I have done some composition of analytic functions (conformal mapping, which is invertible, and 1-D stretching functions), or finite differences if the grid was numerically generated (elliptic, hyperbolic, etc.). Obviously the difficulty becomes proving > 2nd-order accuracy if you then also have a numerically generated grid and are using FD etc. to compute the Jacobians or grid metrics... But the case that Stefano describes sounds very complicated... In particular, with overset/chimera grids you lose conservation in the interpolated region... There is some work being done to ensure stable, conservative interpolation, but it's a real pain...
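Just to illustrate what I mean by using FD to compute the Jacobians or grid metrics, a rough sketch for a 2-D structured grid (2nd-order central differences, interior nodes only, unit computational spacing assumed):
```fortran
! Sketch: central-difference grid metrics and Jacobian on a 2-D structured
! grid, interior nodes only, unit spacing in (xi, eta).
subroutine grid_metrics(x, y, jac)
  real, intent(in)  :: x(:,:), y(:,:)   ! physical node coordinates
  real, intent(out) :: jac(:,:)         ! Jacobian J = x_xi*y_eta - x_eta*y_xi
  real    :: x_xi, x_eta, y_xi, y_eta
  integer :: i, j

  jac = 0.0
  do j = 2, size(x, 2) - 1
     do i = 2, size(x, 1) - 1
        x_xi  = 0.5 * (x(i+1, j) - x(i-1, j))
        x_eta = 0.5 * (x(i, j+1) - x(i, j-1))
        y_xi  = 0.5 * (y(i+1, j) - y(i-1, j))
        y_eta = 0.5 * (y(i, j+1) - y(i, j-1))
        jac(i, j) = x_xi * y_eta - x_eta * y_xi
     end do
  end do
end subroutine grid_metrics
```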
@/all
Dear all, I am going to define the IO format for a new CFD code :tada: :tada: :tada:
I am really in doubt about selecting a convenient approach for my needs. Essentially, the new code is a multi-block, Chimera-AMR, finite-volume one. In the past I have adopted two different approaches:
In fact, the new code will have AMR capability (the others did not), thus the work-load on each processor is not static for the whole simulation; rather, it changes step by step. Each processor is thus in charge of a varying number of blocks, so I cannot simply use the pattern 1-processor => 1-multi-block-file, as I did in the past.
The simplest solution for me is a new pattern, 1-block => 1-file: surely it will be parallel-efficient. The con is that for production, time-accurate unsteady sims I will have billions of files (1000 blocks X 1000... steps...) :cry:
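Just to fix ideas, the 1-block => 1-file pattern is nothing more than this: each process dumps every block it currently owns to its own file named after the block id and the time step (a sketch with hypothetical names, unformatted stream access assumed):
```fortran
! Sketch of the 1-block => 1-file pattern: each process writes every block
! it currently owns to a separate file named after block id and time step.
subroutine save_block(block_id, step, field)
  integer, intent(in) :: block_id, step
  real,    intent(in) :: field(:,:,:)   ! one block of cell-centred data
  character(len=64)   :: fname
  integer             :: funit

  write(fname, '(A,I6.6,A,I6.6,A)') 'block-', block_id, '-step-', step, '.raw'
  open(newunit=funit, file=trim(fname), form='unformatted', access='stream', &
       status='replace', action='write')
  write(funit) shape(field)             ! store the block dimensions first
  write(funit) field                    ! then the raw data
  close(funit)
end subroutine save_block
```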
Other possibilities that I see are:
Any suggestions are much more than welcome.
My best regards.