Stefano Zaghi
@szaghi
@zbeekman Zaak, I still have problems, but it depends on my side, I think I had broken my docker installation: Monday, I'll reinstall docker and I'll give you my feedback. Have a good weekend!
Stefano Zaghi
@szaghi

@zbeekman
Dear Zaak, I can confirm, the issue was due to my broken docker installation

┌╼ stefano@zaghi(10:55 AM Mon Mar 27)
├───╼ ~ 32 files, 6.3Mb
└──────╼ docker pull zbeekman/nightly-gcc-trunk-docker-image
Using default tag: latest
latest: Pulling from zbeekman/nightly-gcc-trunk-docker-image
5f2c9defd8b5: Pull complete
676f34be0213: Pull complete
eecb0076700b: Pull complete
8c856ba4f4c6: Pull complete
97f9497af1ef: Pull complete
Digest: sha256:7869ca6419b3f554392f72cc1cd0b90449dbe374b6bbb9cd80cb91e76e4d3960
Status: Downloaded newer image for zbeekman/nightly-gcc-trunk-docker-image:latest
╼ stefano@zaghi(10:56 AM Mon Mar 27)
├───╼ ~ 32 files, 6.3Mb
└──────╼ docker images
REPOSITORY                                TAG                 IMAGE ID            CREATED             SIZE
zbeekman/nightly-gcc-trunk-docker-image   latest              614b0749b3db        2 hours ago         579 MB
hello-world                               latest              48b5124b2768        2 months ago        1.84 kB

Thank you!

Stefano Zaghi
@szaghi

Hi @/all,
is there anyone here who is expert in (or at least has already used) multi-step ODE integrators with variable time step-size? Maybe @jacobwilliams with DDEABM or other libraries? I searched for good references on ScienceDirect, but after an hour I did not find anything really interesting.

In particular, I am interested in Strong Stability Preserving multi-step Runge-Kutta solvers, in order to achieve an ODE solver of at least 8th formal order of accuracy that preserves stability features when dealing with discontinuous solutions. However, in the literature it seems that such schemes have been developed only for a fixed time step (probably because a varying time step was judged to be too costly). Before spending my time (re)computing the linear multi-step coefficients, I would like to know if some of you can point me to good references.

My best regards.
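Not SSP-specific, but for concreteness, the accept/reject machinery that any variable step-size integrator has to reproduce can be sketched with an embedded pair. This is a minimal illustration in Python using a Heun/Euler order-2/1 pair, not one of the 8th-order SSP schemes discussed here, and all function names are made up:

```python
import math

def heun_euler_step(f, t, y, h):
    """One embedded step: Heun (order 2) and Euler (order 1) share stages."""
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    y_high = y + 0.5 * h * (k1 + k2)     # order-2 solution (kept)
    y_low = y + h * k1                   # order-1 solution (error gauge)
    return y_high, abs(y_high - y_low)   # local error estimate

def integrate(f, t0, y0, t_end, tol=1e-6, h=0.1):
    """March from t0 to t_end, accepting/rejecting steps against tol."""
    t, y = t0, y0
    while t < t_end:
        h = min(h, t_end - t)
        y_new, err = heun_euler_step(f, t, y, h)
        if err <= tol:                   # accept the step
            t, y = t + h, y_new
        # standard controller: rescale h from the error estimate
        h *= min(2.0, max(0.1, 0.9 * (tol / max(err, 1e-16)) ** 0.5))
    return y

y = integrate(lambda t, y: -y, 0.0, 1.0, 2.0)   # solve y' = -y, y(0) = 1
assert abs(y - math.exp(-2.0)) < 1e-4
```

The SSP difficulty discussed above is precisely that for multi-step (rather than one-step) schemes the coefficients themselves depend on the step-size history, so h cannot be rescaled this freely.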

Jacob Williams
@jacobwilliams
I'm not familiar with the "strong stability preserving" concept so I can't help there. I've used many variable step ODE IVP solvers before. Some good ones (Fortran) are DDEABM, DOP853, DIVA, RKNINT. At least they are good for my kinds of problems (spacecraft trajectories).
Stefano Zaghi
@szaghi
Dear @jacobwilliams, thank you for the reply. I contacted one of the main developers/creators of the Strong Stability Preserving methods and he said that such methods (SSP multistep Runge-Kutta with variable step size) have not been developed at all yet: I am now trying to develop one with his help (I hope). If you are interested, check FOODIE :smile: Thank you as always.
@jacobwilliams I do not know DIVA and RKNINT. Can you give me some references?
Jacob Williams
@jacobwilliams
DIVA (an Adams method like DDEABM) is in MATH77: http://netlib.org/math/ and RKNINT (a Nystrom method) is from "Algorithm 670: a Runge-Kutta-Nyström code" http://dl.acm.org/citation.cfm?id=69650
Stefano Zaghi
@szaghi
@jacobwilliams Thanks!
Stefano Zaghi
@szaghi

Dear @/all

Can anyone suggest a good reference for using modern Fortran libraries from Python?

For example, assume you have a modern Fortran library exploiting OOP (a lot of modules, abstract types, polymorphism...), but, in the end, the library can provide a simple API to the end-user: a single concrete derived type exposing a procedure with plain real array dummy arguments. Assume you want to use this derived type from Python. Is it possible?

F2py seems not to be useful in this scenario, but I may be wrong; I have no experience with f2py. In case it is not possible, with a lot of work, I could extract the derived type procedure as a standalone procedure, but I cannot totally refactor the library to trim out all the internal OOP.

Thank you in advance for any suggestions.

Cheers

Jacob Williams
@jacobwilliams
Take a look at Ctypes. That might be what you need. The documentation doesn't mention Fortran but if you use F/C interoperability it works the same.
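As a minimal sketch of the ctypes side of this pattern (here calling libc's strlen through CDLL(None) on a Unix-like system as a stand-in; a bind(c) Fortran routine compiled into a shared library is declared and called the same way, only the CDLL path changes):

```python
from ctypes import CDLL, c_char_p, c_size_t

# CDLL(None) exposes the symbols already linked into the interpreter
# (including libc) on Unix-like systems; a Fortran library built with
# bind(c) would instead be loaded as CDLL('./libmylib.so').
libc = CDLL(None)

# Declaring argtypes/restype tells ctypes how to convert arguments,
# exactly as one would for a bind(c) Fortran routine.
libc.strlen.argtypes = [c_char_p]
libc.strlen.restype = c_size_t

assert libc.strlen(b"fortran") == 7
```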
Stefano Zaghi
@szaghi
Hi @jacobwilliams Thank you for your help. I found the great resource of @certik here. Indeed, Ctypes looks very promising for my needs! Thank you both!
Tom Dunn
@tomedunn
@szaghi, I haven't used it much myself but there is https://github.com/jameskermode/f90wrap which looks to extend F2py to work with derived types.
Stefano Zaghi
@szaghi
@tomedunn Tom, thank you for the highlight. I considered it, but it looked somewhat cumbersome to use and I was not sure if the underlying OOP of the library (that is, WenOOF) would be handled correctly. If my test with ctypes fails, I'll consider it again.
Tom Dunn
@tomedunn
@szaghi, I've also done a bit of what you're looking to do with F2py (I would have tried f90wrap but I didn't have a lot of control over the system I was doing the work on at the time). I wasn't ever able to pass derived types between Python and Fortran but I got around this in a limited sense by creating a Fortran module that contained the different top level derived types I wanted to use as module variables and then exporting functions to Python for adding/initializing these derived types. I'd love to replace it with something else someday but it works for the limited scope I needed it to.
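A pure-Python mock of the pattern Tom describes may make it concrete: the Fortran module would keep the derived-type instances as module variables and export only plain procedures that refer to them by an integer handle. All names below are hypothetical, and the dictionary merely stands in for the Fortran module's stored derived types:

```python
_instances = {}      # stands in for the Fortran module's stored derived types
_next_handle = 0

def create_solver(order):
    """Mimics an exported init routine: builds the derived type, returns a handle."""
    global _next_handle
    _next_handle += 1
    _instances[_next_handle] = {"order": order, "steps": 0}
    return _next_handle

def advance(handle):
    """Mimics an exported routine operating on the stored derived type."""
    _instances[handle]["steps"] += 1
    return _instances[handle]["steps"]

h = create_solver(order=8)
assert advance(h) == 1
assert advance(h) == 2
```

Only the integer handles and plain procedures cross the Python/Fortran boundary; the derived types themselves never do.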
Stefano Zaghi
@szaghi
@tomedunn Tom, indeed, your pattern seems to be similar to what I am going to do... if you are not too bothered by my spam I'll probably ask for your help :smile:
Tom Dunn
@tomedunn
That works for me, I'll do my best to answer any questions I can. From what I remember the hardest part for me was getting the compiler flags and linking working for F2py to do its thing. So if you can get a simple example to compile then you've done well.
Stefano Zaghi
@szaghi

@tomedunn and @/all interest (in particular @certik could be helpful...),
I have been successful in wrapping some WenOOF facilities by means of ctypes; here is the python-wrapper test: this is already enough, but moving toward wrapping the actual interpolator object would be better. However, there are some things that I do not understand, in particular concerning ctypes.

The Fortran wrapped procedures now look like:

  subroutine wenoof_interpolate_c_wrap(S, stencil, interpolation) bind(c, name='wenoof_interpolate_c_wrap')
  !< Interpolate over stencils values by means of WenOOF interpolator.
  integer(C_INT32_T), intent(in), value  :: S                 !< Stencils number/dimension.
  real(C_DOUBLE),     intent(in)         :: stencil(1-S:-1+S) !< Stencil of the interpolation.
  real(C_DOUBLE),     intent(out)        :: interpolation     !< Result of the interpolation.

  call interpolator_c_wrap%interpolate(stencil=stencil, interpolation=interpolation)
  endsubroutine wenoof_interpolate_c_wrap

The current (naive) Python wrapper looks like:

from ctypes import CDLL, c_double, c_int, POINTER
import numpy as np

wenoof = CDLL('./shared/libwenoof.so')
wenoof.wenoof_interpolate_c_wrap.argtypes = [c_int, POINTER(c_double), POINTER(c_double)]

stencils = 4  # stencil dimension S; illustrative value, must match the interpolator actually created
cells_number = 50
x_cell, dx = np.linspace(start=-np.pi, stop=np.pi, num=cells_number + 2 * stencils, endpoint=True, retstep=True)
y_cell = np.sin(x_cell)
y_weno = np.empty(cells_number + 2 * stencils, dtype="double")
interpolation = np.empty(1, dtype="double")
for i, x in enumerate(x_cell):
  if i >= stencils and i < cells_number + stencils:
    wenoof.wenoof_interpolate_c_wrap(stencils,
                                     y_cell[i+1-stencils:].ctypes.data_as(POINTER(c_double)),
                                     interpolation.ctypes.data_as(POINTER(c_double)))
    y_weno[i] = interpolation[0]

The oddities for my limited vision are:

  • I was not able to use the y_weno numpy array directly in the wenoof_interpolate_c_wrap call: I have to pass through the temporary 1-element array interpolation; surely this is due to my ignorance about ctypes, but Google does not offer me much insight...
  • I do not understand in detail the rationale behind data_as(POINTER...), byref and similar ctypes constructs, and the corresponding Fortran exploitation of the value attribute; the guide of @certik provides a very clear example, but it does not go into details.

How would you simplify this workflow? Are the Fortran dummy arguments well defined? How can the Python wrapper be simplified/improved? In the end, how does one wrap a class(integrator), allocatable object, or at least a type(integrator_js) concrete instance? I definitely need more documentation about Python-ctypes-Fortran mixing...

Thank you in advance for any hints.

Cheers
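On the 1-element temporary question: the callee writes its intent(out) result through whatever pointer it receives, so pointing directly at the destination element avoids the temporary. A standalone ctypes-only sketch (no Fortran library required) of the two pointer idioms involved:

```python
from ctypes import POINTER, addressof, c_double, cast, sizeof

# A plain C array of 5 doubles, standing in for a numpy buffer.
buf = (c_double * 5)(0.0, 1.0, 2.0, 3.0, 4.0)

# (1) A pointer offset into the array: the ctypes analogue of
# y_cell[i+1-stencils:].ctypes.data_as(POINTER(c_double)).
p = cast(addressof(buf) + 2 * sizeof(c_double), POINTER(c_double))
assert p[0] == 2.0

# (2) Writing through a pointer aimed at an element: an intent(out)
# argument pointed here lands directly in the destination array, so
# no 1-element temporary is needed.
out = cast(addressof(buf) + 4 * sizeof(c_double), POINTER(c_double))
out[0] = 99.0
assert buf[4] == 99.0
```

With numpy, the slice view y_weno[i:].ctypes.data_as(POINTER(c_double)) should produce the same kind of offset pointer, letting the result be written straight into y_weno[i].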

Stefano Zaghi
@szaghi

Dear @/all ,
I am facing a very big issue with GNU gfortran and polymorphic functions (for your interest, see this) related to the generation of serious memory leaks. If the bug is confirmed, I think the fix will be difficult and will require a lot of time.

I am wondering if there are some techniques or approaches (maybe developed when F2003/2008 features were not yet widely available) to mimic polymorphism. I would like to save the work done in the last months and reuse all the polymorphic libraries as much as possible. I found a paper describing how to achieve multiple inheritance in F2003; maybe there are similar papers on how to achieve a sort of polymorphism without class(...), allocatable...

Thank you in advance for any hints!

Cheers

Chris MacMackin
@cmacmackin
There is already a report of this bug: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=60913. My way of dealing with it (which doesn't stop all memory leaks but is enough to keep them to a manageable level) is that described on pages 236-244 of Rouson, Xia, and Xu's Scientific Software Design: The Object-Oriented Way. The main disadvantage of this approach is it means your routines can no longer be pure. You can see an example of this approach in my factual library. Let me know if you want me to explain some of the details of it.
Stefano Zaghi
@szaghi
Hi Chris! @cmacmackin Thank you very much for the hints. I searched for similar issues, but I missed that one yesterday. Surprisingly, now that you point it out, I am quite sure I had already seen it (probably again from one of your hints). This makes me very sad: that report is dated 2014! I am also surprised by the comments of Damian (@rouson) in that report. I am also surprised that in his book there is a workaround; I'll go re-study it right now. Frankly speaking, this is a nightmare (see also the comments of Ian Harvey): I am seriously thinking of leaving Fortran...
Stefano Zaghi
@szaghi
@cmacmackin @rouson I just re-read the note Warning: Calculus May Cause Memory Loss, Bloating and Insomnia on page 238... arghhhhh tell me this is a nightmare!!!!
Stefano Zaghi
@szaghi

@cmacmackin Chris, I need your help...

I re-read the @rouson book, but that part looks obscure to me in the description of the leak-free hermetic field (this is probably why I missed it in my first reading...). I have then studied your factual, but I have difficulty understanding the *rationale*.

I am focused on how you implement the operators, say this multiplication that I report here

  function array_scalar_sf_m_sf(this,rhs) result(res)
    class(array_scalar_field), intent(in) :: this
    class(scalar_field), intent(in) :: rhs
    class(scalar_field), allocatable :: res !! The result of this operation
    class(array_scalar_field), allocatable :: local

    call this%guard_temp(); call rhs%guard_temp()
    allocate(local, mold=this)
    call local%assign_meta_data(this)
    select type(rhs)
    class is(array_scalar_field)
      local%field_data%array = this%field_data%array * rhs%field_data%array
    class is(uniform_scalar_field)
      local%field_data%array = this%field_data%array * rhs%get_value()
    end select
    call move_alloc(local,res)
    call res%set_temp()
    call this%clean_temp(); call rhs%clean_temp()
  end function array_scalar_sf_m_sf

I see that the guard_temp method increases the temporary level if the arguments are already set as temporary, so this and rhs are now protected from premature finalization... why is this necessary? My first guess is that this and rhs should not be a problem, but I could be underestimating the function's recursive execution scope, I think.

Why is local necessary? Is there something implied in the move_alloc?

Why is res set as temporary? I guess because then it can be properly finalized, but I cannot figure out who is then able to finalize res.

Chris MacMackin
@cmacmackin
To see why this is necessary, consider the following code.
program example
  use factual_mod
  implicit none
  type(cheb1d_scalar_field) :: a, b, c

  a = b * c
  a = do_thing(b, c)
  a = do_thing(b*b, c)

contains

  function do_thing(thing1, thing2)
    class(scalar_field), intent(in) :: thing1, thing2
    class(scalar_field), allocatable :: do_thing
    call thing1%guard_temp(); call thing2%guard_temp()
    call thing1%allocate_scalar_field(do_thing)
    do_thing = thing1*thing2
    call thing1%clean_temp(); call thing2%clean_temp()
    call do_thing%set_temp()
  end function do_thing

end program example
When a = b*c is executed, b and c are both non-temporary variables. As such, they should not be finalised. So far, so obvious. However, the actual arguments to the function do_thing may or may not be temporary. If they are not temporary (e.g. when executing a = do_thing(b, c)), then this function will not finalise them and nor will the multiplication routine.
Chris MacMackin
@cmacmackin
Memory leaks occur when the variable returned from a function is not finalised after assignment or use as an argument. (Note that this issue can sometimes be avoided by returning pointer variables and using pointer assignment, at the cost of lots of manual memory management.) As such, we need to have the clean_temp method, which is called at the end of all procedures using fields as arguments. However, we don't want to finalise any fields which have been assigned to variables, so we must distinguish between these ones and temporary ones which are just function results.
Chris MacMackin
@cmacmackin

Function results may or may not be assigned to a variable, so we must assume that they are temporary, hence why set_temp is called for the function results. In the defined assignment operator, the LHS argument will be set not to be temporary so that it will not be finalised prematurely.

Where things get complicated is when a function receives a temporary field as an argument and must pass it to another procedure. This happens in do_thing when it is called in the a = do_thing(b*b, c) statement. As b*b is temporary, it will be guarded at the start of do_thing. When we pass it to the multiplication method, we don't want it to be finalised yet (as we might want to do something else with it in do_thing). As such, the call to set_temp in the multiplication routine must increment the temporary-count of this argument. That way, it will know not to finalise it when clean_temp is called in the multiplication routine. Instead, clean_temp will just decrement the temporary-count again. When the clean_temp call is made in do_thing, the count will be back down to its initial value and clean_temp will know that it can safely finalise the argument.

I hope that clarifies things.
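The counting Chris describes can be mocked in a few lines of Python (this is an illustration of the semantics only, not factual's actual API; Field and multiply are stand-ins):

```python
class Field:
    """Sketch of the temporary bookkeeping; temp_count 0 = ordinary variable."""
    def __init__(self, name):
        self.name = name
        self.temp_count = 0
        self.finalised = False

    def set_temp(self):                  # called on every function result
        if self.temp_count == 0:
            self.temp_count = 1

    def guard_temp(self):                # on entry: protect a temporary argument
        if self.temp_count > 0:
            self.temp_count += 1

    def clean_temp(self):                # on exit: release; finalise at baseline
        if self.temp_count > 1:
            self.temp_count -= 1
            if self.temp_count == 1:     # back to the level set_temp gave it
                self.finalised = True    # no outer user left: safe to finalise

def multiply(lhs, rhs):
    lhs.guard_temp(); rhs.guard_temp()
    res = Field(f"({lhs.name}*{rhs.name})")
    res.set_temp()
    lhs.clean_temp(); rhs.clean_temp()
    return res

b, c = Field("b"), Field("c")
bb = multiply(b, b)         # temporary intermediate, like b*b in do_thing
final = multiply(bb, c)     # bb is guarded, used, then finalised here
assert not b.finalised and not c.finalised
assert bb.finalised
assert not final.finalised  # still awaiting its consumer
```

Ordinary variables keep a count of 0, so guard/clean are no-ops on them and they are never finalised; only the chain of temporaries is reclaimed.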
Stefano Zaghi
@szaghi

@cmacmackin Chris, all is much clearer now, but indeed, I think this does not work for my bug. Now I understand the care you paid to finalization, but the point is that in my test case the finalization is totally not useful... my type prototype (your fields) is something like


type :: field
   real :: i_am_static(length)
   contains
     procedure...
endtype field

Now, if I add a finalizer to field it will have no effect on the i_am_static member of the type. My leaks originate from the fact that gfortran is not able to free the static memory of class(...), allocatable function results. If I make the static member allocatable the leaks seem to vanish (but not if the allocatables are other types...), and likewise if I trim out the polymorphism by defining the result as type(...), allocatable, the finalization is automatically done right with both static and dynamic members. So, your workaround could be very useful to fix leaks related to dynamic members that are other classes/types, but it has no effect on the static members. Tomorrow I'll try your workaround in more detail, but I am going to sleep very sadly...

Anyhow, thank you very much, you are great!

Chris MacMackin
@cmacmackin
You are correct. What I do is make all of my large type components dynamic, which happened to be the case for my field types anyway. Static components cannot be finalised, hence why I said that this approach doesn't stop all memory leaks. Actually, it gets a little bit more complicated than this, because you cannot deallocate components of an intent(in) argument. However, in a non-pure procedure, pointer components of intent(in) objects are a bit odd. It is only required that you don't change the pointer; there is no issue with changing the thing that it points to.
What I did was define an additional, transparent, derived type called array_1d:
```fortran
  type, public :: array_1d
    real(r8), dimension(:), allocatable, public :: array
  end type array_1d
```
I have similar types for higher-dimensional arrays. That way I can just deallocate the component array. You can see this in action in the source code.
Stefano Zaghi
@szaghi
@cmacmackin Chris, thank you very much for your help! Indeed, my fields also have big components defined as allocatable, but they also have many static components: an integrand field could be a big block with not only the fluid dynamics fields, but also grid dimensions, species concentrations, boundary condition types... alone they could be a few static bytes for each integrand, but then you have to consider that each integrand is integrated (namely added, multiplied, divided...) many times for each time step, and for time-accurate simulations you perform millions/billions of time steps... the few bytes leaked quickly become gigabytes. Put all of this into an HPC perspective... it is not acceptable. Anyhow, thank you again!
Izaak "Zaak" Beekman
@zbeekman
Hi all, @DmitryLyakh pointed out his cool looking project (hosted on GitLab): Generic Fortran Containers I just thought I would pass it along!
Stefano Zaghi
@szaghi
@zbeekman @DmitryLyakh GFC is very interesting! Thank you both! Why not also on GitHub? Are there other sources of documentation?
Chris MacMackin
@cmacmackin

@szaghi I've come up with an idea which should work better than the "forced finalisation" approach which I'm using currently. I'll still use my guard_/clean_temp methods, but I'll couple them with an object pool. That way, when a temporary object is ready to be "cleaned", I can simply release it back into the object pool for later reuse and no memory is leaked. A pool of 100 or so should be more than enough for most applications and would have a reasonably manageable minimal memory footprint.

This approach still would not be amenable to pure procedures, so you likely won't want to take it. However, I thought it might be worth mentioning here in case anyone else is interested. Note that I have not actually tested it yet, or done more than sketch out the basic details.
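A minimal sketch of the object-pool idea (all names illustrative, not factual's API): a released temporary goes back to a fixed free list instead of being finalised, so its storage is reused rather than leaked:

```python
class FieldPool:
    """Fixed-size pool of reusable field objects."""
    def __init__(self, factory, size=100):
        self._free = [factory() for _ in range(size)]

    def acquire(self):
        """Hand out a pre-allocated object instead of allocating a new one."""
        if not self._free:
            raise RuntimeError("pool exhausted; grow the pool")
        return self._free.pop()

    def release(self, obj):          # called where clean_temp would finalise
        self._free.append(obj)       # memory is reused, never leaked

pool = FieldPool(factory=lambda: {"data": [0.0] * 8}, size=4)
a = pool.acquire()
b = pool.acquire()
pool.release(a)                      # a's storage goes back for reuse
c = pool.acquire()
assert c is a                        # same object recycled, no new allocation
assert len(pool._free) == 2
```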

Stefano Zaghi
@szaghi
@cmacmackin Chris, thank you very much, this is interesting. Please, keep me informed about it, in particular if you test it on your factual. Currently, I think I have found my nirvana coupling abstracts with pure math-operators (which places a restriction on abstraction, but gives a great performance boost), but your object pool approach could come in handy for totally abstract non-pure operators. Thank you very much for your help!
Chris MacMackin
@cmacmackin
@szaghi How much of a performance boost is there from using pure procedures? Do you know what sorts of optimisations are used? This is something I've wondered about.
Stefano Zaghi
@szaghi
@cmacmackin Chris, this is just a feeling, I have not yet done accurate comparisons (just run some small tests, and hardly large ones, being 1D tests); the non-pure allocatable polymorphic version was not really usable in production due to the memory leaks. Today, I can try a more rigorous analysis (with gcc 7.1), but in the 1D tests I did, the performance improvement seems visible. However, this fact could be not (only) related to the purity: now my math operators really work on plain real arrays, each operator (+, -, *, /, **) returns a real array, and polymorphic classes are totally out of this kind of operators (polymorphism returns into play, without allocatable, in assignment), thus I think that Fortran's intrinsically optimal handling of arrays can play a role (or I had a wrong feeling and the performance boost is not there :cry: ).
Chris MacMackin
@cmacmackin
@szaghi I just wonder because I was under the impression that PURE was mostly used for handling things like parallelisation. I'd have thought that most of the opportunities for parallelisation would occur within the type-bound operators and not in making parallel calls to the operators themselves. The big advantage I can see to your new approach, though, is that it would make it much easier to use abstract calculus with coarrays, since function results are not allowed to contain coarray components. In Scientific Software Design it was proposed that you would essentially have two versions of your types: one with coarray components and one without, where the non-coarray version would be used for function results. This greatly increases the amount of code needed, whereas just using arrays would be much simpler. The disadvantage is that it becomes harder to use new defined operators, such as .div., .grad., .curl., on function results because they wouldn't have the necessary information about grid layout.
Chris MacMackin
@cmacmackin

On a different note, you say you're using gcc 7.1. I compiled that today using the OpenCoarrays script. I wanted to see if it got rid of the memory leaks in my project. However, when I tried running my test suite, I found that it produced the error

Fortran runtime error: Recursive call to nonrecursive procedure 'cheb1d_scalar_grid_spacing'

When I examined the backtrace and the code, it seemed that a call to a totally different type-bound procedure got mixed up with the one called grid_spacing. This happened twice, which is what ended up producing the "recursion". I have no idea what could be wrong with the compiler to produce this. Is it working properly for you?

Stefano Zaghi
@szaghi
@cmacmackin I ran a simple test with 7.1. If you can wait a few minutes I can try a more serious test (memory leaks seem to be still there with 7.1...)
Chris MacMackin
@cmacmackin
Good to know.
Stefano Zaghi
@szaghi

@cmacmackin Chris, I have just run a more complex test with this

╼ stefano@zaghi(02:32 PM Thu May 04) on feature/add-riemann-2D-tests [!?] desk {gcc-7.1.0 - gcc 7.1.0 environment}
├───╼ ~/fortran/FORESEER 15 files, 840Kb
└──────╼ gfortran --version
GNU Fortran (GCC) 7.1.0
Copyright (C) 2017 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

It seems to work exactly as in gcc 6.3

The test is in FORESEER, this one, and it uses a lot of OOP.
Chris MacMackin
@cmacmackin
Okay, something must have gone wrong with how I compiled it. If it doesn't solve the memory leaks then I won't bother pursuing it any further.
Stefano Zaghi
@szaghi
Let me check the memory leaks issues with the dedicated tests, few minutes again :smile:
@cmacmackin Chris, we are not very fortunate... the leaks seem to be still there
╼ stefano@zaghi(02:43 PM Thu May 04) on master desk {gcc-7.1.0 - gcc 7.1.0 environment}
├───╼ ~/fortran/leaks_hunter 3 files, 88Kb
└──────╼ scripts/compile.sh src/leaks_raiser_static_intrinsic.f90 

┌╼ stefano@zaghi(02:43 PM Thu May 04) on master [?] desk {gcc-7.1.0 - gcc 7.1.0 environment}
├───╼ ~/fortran/leaks_hunter 4 files, 100Kb
└──────╼ scripts/run_valgrind.sh 
==59798== Memcheck, a memory error detector
==59798== Copyright (C) 2002-2015, and GNU GPL'd, by Julian Seward et al.
==59798== Using Valgrind-3.12.0 and LibVEX; rerun with -h for copyright info
...
==59798== HEAP SUMMARY:
==59798==     in use at exit: 4 bytes in 1 blocks
==59798==   total heap usage: 20 allocs, 19 frees, 12,012 bytes allocated
==59798==
==59798== Searching for pointers to 1 not-freed blocks
==59798== Checked 101,856 bytes
==59798==
==59798== 4 bytes in 1 blocks are definitely lost in loss record 1 of 1
==59798==    at 0x4C2AF1F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==59798==    by 0x40075C: __static_intrinsic_type_m_MOD_add_static_intrinsic_type (leaks_raiser_static_intrinsic.f90:24)
==59798==    by 0x40084D: MAIN__ (leaks_raiser_static_intrinsic.f90:37)
==59798==    by 0x40089F: main (leaks_raiser_static_intrinsic.f90:30)
==59798==
==59798== LEAK SUMMARY:
==59798==    definitely lost: 4 bytes in 1 blocks
==59798==    indirectly lost: 0 bytes in 0 blocks
==59798==      possibly lost: 0 bytes in 0 blocks
==59798==    still reachable: 0 bytes in 0 blocks
==59798==         suppressed: 0 bytes in 0 blocks
==59798==
==59798== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)
==59798== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)
Chris MacMackin
@cmacmackin
Was it only 4 bytes lost before? I'd almost worry that was just some issue with initialisation or something.
Stefano Zaghi
@szaghi
@cmacmackin Chris, this is a synthetic test designed to raise GNU memory leaks, you can check it on leaks_hunter
The test is very simple: it must report 0 bytes lost.