milancurcic on master
fix typo in readme Merge pull request #39 from let… (compare)
szaghi on master
Merge tag 'v0.1.0' into develop… Merge branch 'master' into deve… Merge branch 'master' into deve… and 1 more (compare)
szaghi on master
update submodule (compare)
szaghi on master
update submodules update travis config (compare)
@giacombum With the test you sent me, Intel 17.0.3 generates a catastrophic error and GNU 7.0.1 says
Error: Assumed-rank variable arr2 at (1) may only be used as actual argument
Maybe this is not the correct usage. If I recall correctly, this feature is intended to facilitate C interoperability...
@giacombum , I don't know what the particular issues in Debian are, but there is an application that could help you install scientific software easily: EasyBuild.
I've never tried it before, but I'm excited about it.
There are several versions of GCC; you can see them at the following link:
Dear all, this is a help request for expert readers of the Fortran standard (like @zbeekman @rouson ...): if you have the time, can you let me know whether the following snippet is standard-conforming?
```fortran
subroutine solve(self, eos_left, state_left, eos_right, state_right, normal, fluxes, pattern)
!< Solve Riemann Problem.
!<
!< Approximate Riemann Solver based on (local) Lax-Friedrichs (also known as Rusanov) algorithm.
class(riemann_solver_llf),     intent(inout)         :: self         !< Solver.
class(eos_object),             intent(in)            :: eos_left     !< Equation of state for left state.
class(conservative_object),    intent(in)            :: state_left   !< Left Riemann state.
class(eos_object),             intent(in)            :: eos_right    !< Equation of state for right state.
class(conservative_object),    intent(in)            :: state_right  !< Right Riemann state.
type(vector),                  intent(in)            :: normal       !< Normal (versor) of face where fluxes are given.
class(conservative_object),    intent(inout)         :: fluxes       !< Fluxes of the Riemann Problem solution.
class(riemann_pattern_object), intent(out), optional :: pattern      !< Riemann pattern.
type(conservative_compressible), pointer :: state_left_   !< Left Riemann state, local variable.
type(conservative_compressible), pointer :: state_right_  !< Right Riemann state, local variable.
type(conservative_compressible)          :: fluxes_left   !< Fluxes of left state.
type(conservative_compressible)          :: fluxes_right  !< Fluxes of right state.
type(conservative_compressible)          :: fluxes_       !< Fluxes, local variable.
type(riemann_pattern_compressible_pvl)   :: pattern_      !< Riemann pattern.
real(R8P)                                :: lmax          !< Maximum wave speed estimation.

call pattern_%compute(eos_left=eos_left, state_left=state_left, eos_right=eos_right, state_right=state_right, normal=normal)
lmax = maxval(abs(pattern_%waves_extrema()))
call state_left%compute_fluxes(eos=eos_left, normal=normal, fluxes=fluxes_left)
call state_right%compute_fluxes(eos=eos_right, normal=normal, fluxes=fluxes_right)
state_left_  => conservative_compressible_pointer(to=state_left)
state_right_ => conservative_compressible_pointer(to=state_right)
select type(fluxes)
type is(conservative_compressible)
#ifdef __GFORTRAN__
   fluxes = 0.5_R8P * (fluxes_left + fluxes_right - (lmax * (state_right_ - state_left_)))
#else
   ! Intel Fortran has issues resolving the expression with multiple operators... it must be split
   fluxes_ = state_right_ - state_left_
   fluxes_ = lmax * fluxes_
   fluxes  = fluxes_left + fluxes_right
   fluxes  = fluxes - fluxes_
   fluxes  = 0.5_R8P * fluxes
#endif
endselect
if (present(pattern)) then
   select type(pattern)
   type is(riemann_pattern_compressible_pvl)
      pattern = pattern_
   endselect
endif
endsubroutine solve
```
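For readers less familiar with Riemann solvers, the flux being computed is the standard local Lax-Friedrichs (Rusanov) formula, F = ½(F_L + F_R − λmax (U_R − U_L)). A minimal scalar sketch in Python (the flux function `f` and its derivative `df` here are hypothetical stand-ins, not part of the library):

```python
def llf_flux(u_left, u_right, f, df):
    """Local Lax-Friedrichs (Rusanov) numerical flux for a scalar conservation law."""
    # Maximum wave speed estimate over the two states (the role of lmax above).
    lmax = max(abs(df(u_left)), abs(df(u_right)))
    return 0.5 * (f(u_left) + f(u_right) - lmax * (u_right - u_left))

# Example: Burgers' equation, f(u) = u**2 / 2, so f'(u) = u.
f = lambda u: 0.5 * u * u
df = lambda u: u

print(llf_flux(2.0, 0.0, f, df))  # 0.5 * (2.0 + 0.0 - 2.0 * (0.0 - 2.0)) = 3.0
```

The split form in the `#else` branch computes exactly the same expression, just one binary operation per statement.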
The full story can be read here on the Google group, but in a few words: Intel Fortran does not accept the simple expression and I have to split it.
I suspect that this is a compiler bug, but I would like to know your opinion.
@rouson Dear Damian, the minimal working example has already been done (not by me, but by a very nice Fortraner) and submitted to the Intel developers' attention. It turns out that this really is an Intel bug and, funnier still, it is of the same nature as another issue I faced with @francescosalvadore and for which he created this report. I was in the beta testing last year; I'll probably be in the next one too. It is wonderful that Intel will be fully compliant with the 2008 standard!
Aside from that, today I ran into a probable bug with GNU gfortran 6.3.1 (and 7.0.0; I was not able to test against the Zaak nightly build). The (probable) bug is related to abstract types... I hope to be able to isolate a simple example on Monday.
My best regards.
Dear Zaak, I can confirm, the issue was due to my broken docker installation
```
┌╼ stefano@zaghi(10:55 AM Mon Mar 27)
├───╼ ~ 32 files, 6.3Mb
└──────╼ docker pull zbeekman/nightly-gcc-trunk-docker-image
Using default tag: latest
latest: Pulling from zbeekman/nightly-gcc-trunk-docker-image
5f2c9defd8b5: Pull complete
676f34be0213: Pull complete
eecb0076700b: Pull complete
8c856ba4f4c6: Pull complete
97f9497af1ef: Pull complete
Digest: sha256:7869ca6419b3f554392f72cc1cd0b90449dbe374b6bbb9cd80cb91e76e4d3960
Status: Downloaded newer image for zbeekman/nightly-gcc-trunk-docker-image:latest
┌╼ stefano@zaghi(10:56 AM Mon Mar 27)
├───╼ ~ 32 files, 6.3Mb
└──────╼ docker images
REPOSITORY                                TAG     IMAGE ID      CREATED        SIZE
zbeekman/nightly-gcc-trunk-docker-image   latest  614b0749b3db  2 hours ago    579 MB
hello-world                               latest  48b5124b2768  2 months ago   1.84 kB
```
Hi @/all ,
is there anyone here who is an expert on (or at least has already used) multi-step ODE integrators with variable time step size? Maybe @jacobwilliams with DDEABM or other libraries? I searched for good references on ScienceDirect, but after an hour I had not found anything really interesting.
In particular, I am interested in Strong Stability Preserving multi-step Runge-Kutta solvers, to obtain an ODE solver of at least 8th formal order of accuracy that preserves stability features when dealing with discontinuous solutions. However, in the literature it seems that such schemes have been developed only for fixed time steps (probably because a varying time step was judged too costly). Before spending my time (re)computing the linear multi-step coefficients, I would like to know if any of you can point me to good references.
My best regards.
Can anyone suggest a good reference for using modern Fortran libraries from Python?
For example, assume you have a modern Fortran library exploiting OOP (a lot of modules, abstract types, polymorphism...), but, in the end, the library can provide a simple API to the end user: a single concrete derived type exposing a procedure with plain real arrays as dummy arguments. Assume you want to use this derived type from Python. Is it possible?
f2py seems not to be useful in this scenario, but I may be wrong; I have no experience with f2py. In case it is not possible, with a lot of work I can extract the derived-type procedure as a standalone procedure, but I cannot completely refactor the library to trim out all the internal OOP.
Thank you in advance for any suggestions.
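One common pattern for this situation is the opaque-handle idiom: the Fortran side exposes `bind(c)` constructor/method/destructor wrappers that traffic in a `c_void_p`, and Python never looks inside the handle. As a runnable sketch of the Python half, here is the same idiom against libc's `FILE*` (which is itself an opaque handle); `libwenoof`-style names do not appear because this is only an analogy, not any library's actual API:

```python
import os
import tempfile
from ctypes import CDLL, c_char_p, c_int, c_void_p

# Load the C runtime; this stands in for the hypothetical Fortran shared library.
libc = CDLL(None)

# FILE* is opaque: Python only stores it and passes it back, exactly how a
# Fortran derived-type instance can hide behind a c_void_p returned by a
# bind(c) constructor wrapper.
libc.fopen.argtypes = [c_char_p, c_char_p]
libc.fopen.restype = c_void_p
libc.fputs.argtypes = [c_char_p, c_void_p]
libc.fputs.restype = c_int
libc.fclose.argtypes = [c_void_p]
libc.fclose.restype = c_int

path = os.path.join(tempfile.gettempdir(), 'handle_demo.txt')
handle = libc.fopen(path.encode(), b'w')            # "constructor": returns an opaque handle
libc.fputs(b'hello from an opaque handle', handle)  # "method": takes the handle back
libc.fclose(handle)                                 # "destructor": releases the object
```

With this pattern the internal OOP never needs refactoring; only a thin flat wrapper layer is added on the Fortran side.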
@tomedunn and @/all interest (in particular @certik could be helpful...),
I have been successful in wrapping some WenOOF facilities by means of ctypes; here is the python-wrapper test. This is already enough, but moving toward wrapping the actual
interpolator object would be better. However, there are some things that I do not understand, in particular concerning ctypes.
The Fortran wrapped procedures now look like:
```fortran
subroutine wenoof_interpolate_c_wrap(S, stencil, interpolation) bind(c, name='wenoof_interpolate_c_wrap')
!< Interpolate over stencils values by means of WenOOF interpolator.
integer(C_INT32_T), intent(in), value :: S                  !< Stencils number/dimension.
real(C_DOUBLE),     intent(in)        :: stencil(1-S:-1+S)  !< Stencil of the interpolation.
real(C_DOUBLE),     intent(out)       :: interpolation      !< Result of the interpolation.

call interpolator_c_wrap%interpolate(stencil=stencil, interpolation=interpolation)
endsubroutine wenoof_interpolate_c_wrap
```
The current (naive) Python wrapper looks like:
```python
from ctypes import CDLL, c_double, c_int, POINTER
import numpy as np

wenoof = CDLL('./shared/libwenoof.so')
wenoof.wenoof_interpolate_c_wrap.argtypes = [c_int, POINTER(c_double), POINTER(c_double)]

cells_number = 50
x_cell, dx = np.linspace(start=-np.pi, stop=np.pi, num=cells_number + 2 * stencils, endpoint=True, retstep=True)
y_cell = np.sin(x_cell)
y_weno = np.empty(cells_number + 2 * stencils, dtype="double")
interpolation = np.empty(1, dtype="double")
for i, x in enumerate(x_cell):
    if i >= stencils and i < cells_number + stencils:
        wenoof.wenoof_interpolate_c_wrap(stencils,
                                         y_cell[i+1-stencils:].ctypes.data_as(POINTER(c_double)),
                                         interpolation.ctypes.data_as(POINTER(c_double)))
        y_weno[i] = interpolation
```
The oddities, to my limited vision, are:

- I cannot pass an element of the `y_weno` numpy array directly into the `wenoof_interpolate_c_wrap` call: I have to pass through the temporary 1-element array `interpolation`; surely this is due to my ignorance about ctypes, but Google does not offer me much insight...
- the relation between `byref` and similar ctypes helpers and the corresponding Fortran exploitation of the `value` attribute; the guide of @certik provides a very clear example, but it does not go into details.
How do you simplify this workflow? Are the Fortran dummy arguments well defined? How can the Python wrapper be simplified/improved? In the end, how do I wrap a
`class(integrator), allocatable` object, or at least a
`type(integrator_js)` concrete instance? I definitely need more documentation about Python-ctypes-Fortran mixing...
Thank you in advance for any hints.
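Two ctypes idioms address the oddities above: `byref` on a plain `c_double`/`c_int` replaces the 1-element numpy array for scalar output arguments, and declaring `numpy.ctypeslib.ndpointer` in `argtypes` lets you pass numpy arrays directly without `.ctypes.data_as(...)`. A runnable sketch against libm/libc (not the WenOOF API; `frexp` and `memcmp` are just convenient stand-ins for a `bind(c)` routine with pointer arguments):

```python
import numpy as np
from numpy.ctypeslib import ndpointer
from ctypes import CDLL, POINTER, byref, c_double, c_int, c_size_t
from ctypes.util import find_library

libm = CDLL(find_library('m'))

# (1) Scalar output argument without a 1-element numpy array:
# double frexp(double x, int *exp) writes through its pointer argument,
# just like an intent(out) real(C_DOUBLE) dummy without the value attribute.
libm.frexp.argtypes = [c_double, POINTER(c_int)]
libm.frexp.restype = c_double
e = c_int()
mantissa = libm.frexp(8.0, byref(e))  # 8.0 = 0.5 * 2**4

# (2) numpy arrays without .ctypes.data_as(...):
# an ndpointer in argtypes lets you pass the array object itself.
libc = CDLL(None)
libc.memcmp.argtypes = [ndpointer(dtype=np.float64, flags='C_CONTIGUOUS'),
                        ndpointer(dtype=np.float64, flags='C_CONTIGUOUS'),
                        c_size_t]
libc.memcmp.restype = c_int
a = np.array([1.0, 2.0, 3.0])
assert libc.memcmp(a, a.copy(), a.nbytes) == 0
```

On the Fortran side, `value` corresponds to passing the bare ctypes object (as with `S`), while pass-by-reference dummies correspond to `byref`/`POINTER` on the Python side.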
Dear @/all ,
I am facing a very big issue with GNU gfortran and polymorphic functions (for your interest, see this) related to the generation of serious memory leaks. If the bug is confirmed, I think the fix will be difficult and will require a lot of time.
I am wondering if there are techniques or approaches (maybe developed when the f2003/2008 features were not yet widely available) to mimic polymorphism. I would like to save the work done in the last months and reuse as much as possible of the polymorphic libraries. I found a paper describing how to achieve multiple inheritance in f2003; maybe there are similar papers on how to achieve a sort of polymorphism without f2003 features.
Thank you in advance for any hints!
pure. You can see an example of this approach in my factual library. Let me know if you want me to explain some of the details of it.
@cmacmackin Chris, I need your help...
I re-read the @rouson book, but the description of the leak-free hermetic field still looks obscure to me (this is probably why I missed it in my first reading...). I have since studied your factual library, but I have difficulty understanding the *rationale*.
I am focused on how you implement the operators, say this multiplication, which I report here:
```fortran
function array_scalar_sf_m_sf(this, rhs) result(res)
  class(array_scalar_field), intent(in)  :: this
  class(scalar_field), intent(in)        :: rhs
  class(scalar_field), allocatable       :: res   !! The result of this operation
  class(array_scalar_field), allocatable :: local
  call this%guard_temp(); call rhs%guard_temp()
  allocate(local, mold=this)
  call local%assign_meta_data(this)
  select type(rhs)
  class is(array_scalar_field)
    local%field_data%array = this%field_data%array * rhs%field_data%array
  class is(uniform_scalar_field)
    local%field_data%array = this%field_data%array * rhs%get_value()
  end select
  call move_alloc(local, res)
  call res%set_temp()
  call this%clean_temp(); call rhs%clean_temp()
end function array_scalar_sf_m_sf
```
My questions are:

- I see that the `guard_temp` method increases the temporary level of `this` and `rhs` only if they are already marked as temporary... why is this necessary? My first guess is that finalising `this` or `rhs` should not be a problem, but I could be underestimating the recursive execution scope of the function, I think.
- Why is the local variable `local` necessary? Is there something implied in the `move_alloc` to `res`?
- Why is `res` set as temporary? I guess because then it can be properly finalised, but I cannot figure out who is then able to finalise it.
```fortran
program example
  use factual_mod
  implicit none
  type(cheb1d_scalar_field) :: a, b, c
  a = b * c
  a = do_thing(b, c)
  a = do_thing(b*b, c)
contains
  function do_thing(thing1, thing2)
    class(scalar_field), intent(in)  :: thing1, thing2
    class(scalar_field), allocatable :: do_thing
    call thing1%guard_temp(); call thing2%guard_temp()
    call thing1%allocate_scalar_field(do_thing)
    do_thing = thing1*thing2
    call thing1%clean_temp(); call thing2%clean_temp()
    call do_thing%set_temp()
  end function do_thing
end program example
```
When `a = b*c` is executed, `b` and `c` are both non-temporary variables. As such, they should not be finalised. So far, so obvious. However, the actual arguments to the function `do_thing` may or may not be temporary. If they are not temporary (e.g. when executing `a = do_thing(b, c)`), then this function will not finalise them and nor will the multiplication routine.
Temporary fields are finalised by the `clean_temp` method, which is called at the end of all procedures using fields as arguments. However, we don't want to finalise any fields which have been assigned to variables, so we must distinguish between these ones and temporary ones which are just function results.
Function results may or may not be assigned to a variable, so we must assume that they are temporary, hence why `set_temp` is called for the function results. In the defined assignment operator, the LHS argument will be set not to be temporary so that it will not be finalised prematurely.
Where things get complicated is when a function receives a temporary field as an argument and must pass it to another procedure. This happens in `do_thing` when it is called in the `a = do_thing(b*b, c)` statement. As `b*b` is temporary, it will be guarded at the start of `do_thing`. When we pass it to the multiplication method, we don't want it to be finalised yet (as we might want to do something else with it in `do_thing`). As such, the call to `guard_temp` in the multiplication routine must increment the temporary-count of this argument. That way, it will know not to finalise it when `clean_temp` is called in the multiplication routine. Instead, `clean_temp` will just decrement the temporary-count again. When the `clean_temp` call is made in `do_thing`, the count will be back down to its initial value and `clean_temp` will know that it can safely finalise the argument.
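To make the counting discipline concrete, here is a toy Python sketch of my reading of the scheme (this is an illustration of the bookkeeping only, not factual's actual implementation; the class and function names are invented):

```python
class Field:
    """Toy model of the temporary-count discipline described above."""
    def __init__(self):
        self.temp_level = 0      # 0 = ordinary variable, >0 = temporary
        self.finalised = False

    def set_temp(self):
        # Function results are assumed temporary.
        if self.temp_level == 0:
            self.temp_level = 1

    def guard_temp(self):
        # On procedure entry: protect an already-temporary field.
        if self.temp_level > 0:
            self.temp_level += 1

    def clean_temp(self):
        # On procedure exit: drop one guard; finalise when the last one goes.
        if self.temp_level > 1:
            self.temp_level -= 1
        elif self.temp_level == 1:
            self.temp_level = 0
            self.finalised = True


def multiply(a, b):
    # Mimics the bookkeeping of an operator like array_scalar_sf_m_sf.
    a.guard_temp(); b.guard_temp()
    res = Field()
    res.set_temp()               # result assumed temporary
    a.clean_temp(); b.clean_temp()
    return res


b, c = Field(), Field()   # ordinary variables: guard/clean are no-ops on them
r = multiply(b, c)        # b and c survive; r is marked temporary
tmp = multiply(b, b)      # plays the role of b*b
r2 = multiply(tmp, c)     # tmp is guarded inside, so it survives the inner call
tmp.clean_temp()          # the enclosing procedure's clean_temp finalises it
```

Running this, the ordinary variables are never finalised, the temporary `tmp` survives being passed through `multiply`, and only the final `clean_temp` in the enclosing scope finalises it, matching the `a = do_thing(b*b, c)` walkthrough above.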