Stefano Zaghi
@szaghi

@cmacmackin @rouson ,

Damian, you know how highly I think of you, but I disagree (with respect): the world could change, but it currently has not. Intel and GNU have so many bugs in their OOP support that claiming full support of the 2003 or even 2008 standard for those compilers is premature. Maybe the world will change next year, but in 2017 I am really in trouble doing OOP in Fortran.

I really would like to know your new ideas about functional programming, but I am skeptical: if defined operators have as big an overhead as I showed above, how can functional programming be suitable for HPC? In HASTY I tried to do a really useful, but not so complex, thing with CAF and it is blocked by compiler bugs...

Chris,

Truth be told, I'm getting really frustrated with Fortran. If I didn't already have so much effort invested in my Fortran code base, I'd probably switch to another language. There are so many bugs related to object oriented programming in gfortran and ifort, and I'm getting sick of having to work around them. Memory management is a massive pain and not something I want to be thinking about as a programmer.

I am not as young as you, but my feeling is really the same: if I had not invested so much in Fortran, I would likely have switched to some other language two years ago. Probably, I'll try to invest more in Python: I see more and more HPC courses about "optimizing Python for number-crunching". Python's performance is the worst I could imagine, but OOP is really a "new world" in Python.

Cheers

Damian Rouson
@rouson
@cmacmackin and @szaghi, trust me that I feel your pain. At the peak of my frustrations around 2010, I was involved directly or indirectly in submitting 50-60 bug reports annually across six compilers. Part of why I encounter bugs less often now is that I lasted through that process, got reasonably speedy responses from some compiler teams, dropped the compilers from vendors that were insufficiently responsive, and went to great lengths to become crafty about funding compiler development. None of those things were straightforward or easy, but I saw them as necessary because Fortran has important features that no other language has and I care most about writing clean code. So much of what I saw in other languages seemed like a crime against humanity. The interpreted languages such as Python are factors of 2-3 slower at best and the compiled languages such as C and C++ lack even basic array manipulation facilities. And no language other than Fortran has a parallel programming model that works in distributed memory. And no other language has support for fault tolerance. To get distributed-memory parallelism and fault tolerance, you could go with MPI, but the MPI being written by almost every scientific programmer I've met will be slower, more complex, and less fault-tolerant than what a Fortran programmer can write with coarray Fortran. I hope you'll think more about how to contribute to gfortran, whether as a developer (almost all the developers are domain scientists -- few are computer scientists and none have any training in compiler development as far as I know) or through organizational funds when you reach a stage when that becomes an option via grants or contracts. GFortran has been developed primarily by volunteers and some gfortran developers would rather not accept pay because they prefer the freedom of being a volunteer, but some do accept pay and it makes a difference in getting bugs fixed in a timely manner. And it takes creativity. 
None of the projects I've used to pay developers had a line item in the budget that read, "Fix gfortran bugs." I had to figure out how to make it happen in support of objectives that did have a line in the budget.
Damian Rouson
@rouson
@szaghi, I don't have any great new idea about functional programming in Fortran so you'll be disappointed. I have a set of strategies that were inspired by functional programming and that I frequently employ to make the intention of the code more clear and potentially more optimizable. One is the defined operators and your latest news is discouraging with regard to the performance (recall that I worried that Abstract Calculus might be an anti-pattern for just this reason but you previously reported that Abstract Calculus did not hurt performance based on your experience with FOODIE so I wonder what changed). But I always knew there could be performance penalties associated with user-defined operators and I'm pretty sure I talk about some of those in my book (e.g., related to cache utilization and the ability of modern processors to perform a multiply and add in one clock cycle). Another idea inspired by functional programming relates to the ASSOCIATE statement. I don't think I want to go into detail in this forum just because the back-and-forth takes too much time, but I'd be glad to explain it in a call and it will be in my book. Another thing I'll cover will be the use of the functional-fortran library, of which you are aware. For now, that's it. There's no grand idea here. And then there is the use of PURE. As we all know, Fortran is not a functional programming language, but there are several ways in which Fortran programming can be influenced by functional programming concepts and that's what I mean when I talk about functional programming in Fortran.
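The PURE and ASSOCIATE strategies Damian alludes to can be sketched in a few lines. This is an editor's illustration with invented names, not code from Damian's book:

```fortran
module functional_style
  implicit none
contains
  ! PURE guarantees no side effects, which both documents intent and
  ! gives the compiler more freedom to optimize.
  pure function axpy(a, x, y) result(z)
    real, intent(in) :: a, x(:), y(:)
    real :: z(size(x))
    z = a*x + y
  end function
end module

program demo
  use functional_style
  implicit none
  real :: x(3) = [1., 2., 3.], y(3) = [4., 5., 6.]
  ! ASSOCIATE names an expression's value, clarifying intent without
  ! introducing a mutable temporary variable.
  associate (s => axpy(2.0, x, y))
    print *, s   ! 6, 9, 12
  end associate
end program
```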
Damian Rouson
@rouson
My new book will have two new co-authors: Salvatore Filippone and Sameer Shende. Salvatore has more than 25 years of deep experience in parallel programming and Sameer has more than 15 years of experience in parallel performance analysis. The goal is to have almost every code in the book parallel and almost every code backed by performance analysis. The last thing I'll say -- and then I've got to move on to some other things for a while -- is be careful trading one set of problems for another. For many reasons, you are likely to find more robust compilers for other languages, but you'll trade the compiler bugs for another set of problems in the form of low performance, ease with which you can shoot yourself in the foot, or learning curve (it takes years to become a truly competent C++ programmer, for example, whereas the students in my classes become quite competent and even reach the leading edge of Fortran programming in the span of one academic quarter). That's a really powerful statement.
Stefano Zaghi
@szaghi

@rouson ,

Dear Damian, as always you are too much kind!

trust me that I feel your pain.

I know, but this does not alleviate the pain too much :smile:

I lasted through that process, got reasonably speedy responses from some compiler teams, dropped the compilers from vendors that were insufficiently responsive, and went to great lengths to become crafty about funding compiler development.

I'll try to follow your path, but in my reality searching for gfortran funding is a dream more than a challenge. These days I am evangelizing your idea and trying to make my colleagues who use gfortran for their research aware that it is ethically and practically important to contribute to the GNU project with part of their research funding... but in Italy we do research with almost no funds.

Fortran has important features that no other language has and I care most about writing clean code. So much of what I saw in other languages seemed like a crime against humanity. The interpreted languages such as Python are factors of 2-3 slower at best and the compiled languages such as C and C++ lack even basic array manipulation facilities. And no language other than Fortran has a parallel programming model that works in distributed memory. And no other language has support for fault tolerance. To get distributed-memory parallelism and fault tolerance, you could go with MPI, but the MPI being written by almost every scientific programmer I've met will be slower, more complex, and less fault-tolerant than what a Fortran programmer can write with coarray Fortran.

I agree, this is why I selected Fortran, but currently this is all true only if I do not use OOP; when OOP comes into play, all the pain highlighted by Chris arises. In the end, for the reasons you summarized and for the effort I have already invested, I'll never stop using Fortran.

I hope you'll think more about how to contribute to gfortran, whether as a developer (almost all the developers are domain scientists -- few are computer scientists and none have any training in compiler development as far as I know) or through organizational funds...

If finding funds is a dream for me, the possibility of contributing to the development of gfortran is even more remote: I am not up to the task. I know very little about C, but the big issue is that writing a compiler is an art and I am not an artist, just an oompa loompa.

I don't have any great new idea about functional programming in Fortran so you'll be disappointed. I have a set of strategies that were inspired by functional programming and that I frequently employ to make the intention of the code more clear and potentially more optimizable. One is the defined operators and your latest news is discouraging with regard to the performance (recall that I worried that Abstract Calculus might be an anti-pattern for just this reason but you previously reported that Abstract Calculus did not hurt performance based on your experience with FOODIE so I wonder what changed).

Sure, I remember your surprise, but that benchmark was really different from yesterday's. In FOODIE I compared Abstract Calculus with polymorphic allocatable functions (in which the ODE solver changes at runtime, as do all the operator results) against an identical test without abstract polymorphic operators and without changes of solvers at runtime. However, both versions use defined operators: the ACP one has polymorphic allocatable (impure) operators, the other has static (pure) operators returning a type. The performances were identical between the ACP and the non-abstract version, and this is in line with the test I made yesterday. What is really different is the comparison between defined operators and intrinsic operators. For these reasons, yesterday I updated our paper (a draft will be sent to you soon) and I am planning to add a "performance mode" to FOODIE to allow users to select an operational mode:

  • for rapid ODE solver development she can safely select normal mode;
  • for using FOODIE in production (heavy number crunching) she should select performance mode.
This new performance mode puts on my shoulders (and on the developers of future ODE solvers) the burden of also writing the %integrate_performance version of each solver, but it should be very easy.
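To make the operator-overhead discussion concrete, here is a hedged sketch contrasting the two modes for a single explicit update. This is not FOODIE's actual code; the type and procedure names are invented for illustration:

```fortran
module euler_sketch
  implicit none
  type :: field
    real, allocatable :: u(:)
  contains
    procedure :: add
    generic :: operator(+) => add
  end type
contains
  ! "Normal" mode: each defined operator returns a newly allocated
  ! temporary, which is flexible but pays an allocation and a copy
  ! per operation in the inner time loop.
  function add(lhs, rhs) result(res)
    class(field), intent(in) :: lhs
    type(field),  intent(in) :: rhs
    type(field) :: res
    res%u = lhs%u + rhs%u
  end function

  ! "Performance" mode: update the state in place with intrinsic
  ! array operations, avoiding the operator temporaries entirely.
  subroutine integrate_fast(self, rhs, dt)
    type(field), intent(inout) :: self
    type(field), intent(in)    :: rhs
    real,        intent(in)    :: dt
    self%u = self%u + dt*rhs%u
  end subroutine
end module
```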
Stefano Zaghi
@szaghi

For many reasons, you are likely to find more robust compilers for other languages, but you'll trade the compiler bugs for another set of problems in the form of low performance or ease with which you can shoot yourself in the foot or learning curve (it takes years to be a truly competent C++ programmer, for example, whereas the students in my classes become quite competent and even at the leading edge of Fortran programming in the span of one academic quarter. That's a really powerful statement.

I agree, this is why I selected Fortran. When I started to play with CAF it took a few days to become productive, while I am still not able to be really efficient (namely, really asynchronous) with MPI after years. Fortran is still the most suitable choice for my math, but there is a lot of pain if we want to exploit OOP.

I think I'll book you soon for a talk, please speak slowly :smile: (tomorrow I'll meet Alessandro: I am really excited to see his exascale work)

Cheers

P.S. I am very happy to read that Filippone will be your co-author. Your new book promises a lot!

Stefano Zaghi
@szaghi

@rouson @cmacmackin ,

I played with operators vs non-operators mode in FOODIE... it seems to confirm the overhead of defined operators, see this:

stefano@thor(11:50 AM Sun May 07) on feature/add-performance-mode [!]
~/fortran/FOODIE 21 files, 2.5Mb
→ time ./build/tests/accuracy/oscillation/oscillation -s adams_bashforth_4 -Dt 0.05 --fast
adams_bashforth_4
    steps:   20000000    Dt:      0.050, f*Dt:      0.000, E(x):  0.464E-09, E(y):  0.469E-09

real    0m5.214s
user    0m4.996s
sys    0m0.216s

stefano@thor(11:51 AM Sun May 07) on feature/add-performance-mode [!]
~/fortran/FOODIE 21 files, 2.5Mb
→ time ./build/tests/accuracy/oscillation/oscillation -s adams_bashforth_4 -Dt 0.05
adams_bashforth_4
    steps:   20000000    Dt:      0.050, f*Dt:      0.000, E(x):  0.464E-09, E(y):  0.469E-09

real    0m10.535s
user    0m10.320s
sys    0m0.216s

I added the fast mode only to the Adams-Bashforth solver for now, but I'll add a similar mode for all solvers tomorrow; it is really simple, and to the end user the change is almost seamless.

See you soon, happy "domenica" :smile:

Damian Rouson
@rouson
@cmacmackin and @szaghi Do you monitor the gfortran mailing list? If so, you might have seen that one finalization bug was just fixed: 79311. It's an ICE so it presumably doesn't help with the memory leaks you're seeing, but it's at least one decrement to the finalization bug count. That's progress. I'll inquire with the developer about plans for the remaining bugs on the list.
Stefano Zaghi
@szaghi
:tada: a small progress for gfortran, but a big progress for poor Fortran men like Chris and me :smile:
Milan Curcic
@milancurcic
@szaghi I suggest we stop using the words "poor" and "Fortran" in the same sentence; it only perpetuates the false stigma that this language carries.
Stefano Zaghi
@szaghi
@milancurcic Hi Milan, sorry for my bad humor, I promise I'll be more careful in the future, hopefully without stigma :smile:
Milan Curcic
@milancurcic
@szaghi Thanks Stefano! I am convinced of your genuinely great intentions :)
Neil Carlson
@nncarlson
I've been loosely following the recent discussion with great empathy. I want to remind people that Fortran /= gfortran. There are better compilers out there than gfortran. It would be ideal to have a top-notch free Fortran compiler, but that's not where we are right now. I understand that everyone's situation and priorities are different, but it might be worthwhile considering using a different compiler.
Stefano Zaghi
@szaghi

@nncarlson Dear Neil, thank you for sharing your thoughts, it is appreciated.

If the idea that Fortran == gfortran was conveyed by me, my bad: it is not my thought, nor did I want to convey it. In my view a good program must be tested with as many different compilers as possible to obtain cross-verification: compilers are programs like any others, thus they can be (and are) buggy like any others. To me, Fortran == iso-standard-xx.

My current feeling is, however, sad. Due to the everlasting lack of funds in my research institute I have to rely strongly on free compilers; access to commercial compilers is possible only when we buy core-hours at HPC facilities or when we obtain a grant from them (once or twice a year, on average). So, my view is strictly tied to Intel and GNU: both have serious bugs in OOP, and this blocks me.

I tested PGI, but it has too limited support for F03/08 and no support at all for CAF; it was even very inefficient (in some scenarios) compared with Intel and GNU.

I used IBM XLF when I had a grant on PowerPC cluster, it is a great compiler, but it is not an option for x86 GNU/Linux.

Others have said great things about Cray, but I have never had access to a Cray cluster.

Finally, there is NAG, which seems great, but it is too expensive for me and Cineca (the HPC center where I often obtain grants) does not provide it.

All that said: I agree with you, Fortran /= gfortran, but for someone like me, Fortran ~= gfortran + ifort is a good approximation :cry:

Cheers

Neil Carlson
@nncarlson
@szaghi, I'm very curious to hear what your OOP issues with the Intel compiler are (perhaps off-line). That is our go-to production compiler now. It had many problems in the past (I reported many) but has greatly improved in the current version. I'm not aware of any current issues that affect me (well, perhaps one...), but it sounds like it still has some significant problems that I should be looking to avoid.
Stefano Zaghi
@szaghi
@nncarlson Hi Neil, currently these are the issues frustrating me with Intel: the CAF issue and the ADT issue (this one is similar to, or the same as, the other ADT issue). I am in the 2018 beta testing program and these bugs do not seem to be fixed.
Jacob Williams
@jacobwilliams
Neil Carlson
@nncarlson
@jacobwilliams Actually the trick of boxing the polymorphic pointer also allows you to preserve the metadata associated with the polymorphic variable. An example:
Neil Carlson
@nncarlson
program main

  use iso_c_binding

  ! Box wrapper: a non-polymorphic type holding an unlimited
  ! polymorphic pointer, so a C pointer to the box can round-trip
  ! through C without losing the dynamic type of the boxed target.
  type box
    class(*), pointer :: p => null()
  end type
  type(box), target  :: pbox
  type(box), pointer :: qbox
  type(c_ptr) :: cp

  allocate(pbox%p, source=1)   ! box an integer
  cp = c_loc(pbox)             ! C pointer to the box, not to pbox%p
  call c_f_pointer(cp, qbox)   ! recover a Fortran pointer to the box

  select type (q => qbox%p)
  type is (integer)
    print *, 'got integer', q  ! dynamic type survived the round-trip
  class default
    print *, 'lost dynamic type'
  end select

end program
Jacob Williams
@jacobwilliams
Thanks @nncarlson ! I'm still not sure what is "legal" and what isn't with C_LOC but I like this approach (at least it works on both ifort and gfortran).
Neil Carlson
@nncarlson
It also works for NAG. Using C_LOC on a polymorphic variable is an error with NAG, and I too am not sure what is "legal". I got mixed signals from FortranFan and Steve Lionel (your Intel forum link).
Stefano Zaghi
@szaghi
@jacobwilliams @nncarlson C interop is still a mystery for me... can I ask for some details about when/where the box wrapper plus c_loc turns out to be useful? Cheers
Jacob Williams
@jacobwilliams
@szaghi I'm doing some experiments using it as a way to call object-oriented Fortran code from Python. I'll try to post about it soon.
Neil Carlson
@nncarlson
My use case (https://github.com/nncarlson/yajl-fort/blob/master/src/yajl_fort.F90) involved a C library that needed callback functions, which I implemented in Fortran. One of the arguments to a callback was a user-supplied void pointer to "context data" that the callback needed; it's a pretty standard approach in the C world. My ideal callback was a type-bound procedure of a polymorphic type whose data components provided the necessary context data. To get this to work with the C library, I passed the c_loc of a box wrapper around the polymorphic type pointer as the "context data". The function whose pointer I passed as the callback converted this pointer back to a box around the polymorphic type pointer, and then invoked the type-bound procedure that was the actual callback.
Not sure if any of that made sense -- perhaps it's better explained with an example.
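Neil's description can be sketched roughly as follows; the type and procedure names here are invented for illustration (the real code is in yajl_fort.F90):

```fortran
module callback_sketch
  use iso_c_binding
  implicit none

  ! Abstract handler: concrete extensions carry the context data and
  ! implement the real callback as a type-bound procedure.
  type, abstract :: handler
  contains
    procedure(on_event_ifc), deferred :: on_event
  end type
  abstract interface
    subroutine on_event_ifc(self)
      import handler
      class(handler), intent(inout) :: self
    end subroutine
  end interface

  ! Non-polymorphic box: c_loc/c_f_pointer on a pointer to this type
  ! preserves the dynamic type of the polymorphic component.
  type :: box
    class(handler), pointer :: h => null()
  end type

contains

  ! bind(c) trampoline registered with the C library as the callback;
  ! ctx is the "void* context data" the library hands back per event.
  subroutine trampoline(ctx) bind(c)
    type(c_ptr), value :: ctx
    type(box), pointer :: b
    call c_f_pointer(ctx, b)     ! unbox the context data
    call b%h%on_event()          ! dynamic dispatch to the user's handler
  end subroutine

end module
```

On the registration side, one would pass c_loc of a target box (holding the user's concrete handler) as the context argument of the C library's register call.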
Stefano Zaghi
@szaghi
@jacobwilliams Jacob, your use case is very interesting for me. I am now using ctypes in Python but I am still not able to exploit an OOP Fortran class with it, only non-OOP procedures. If you go ahead with this, please share your results :smile:
@nncarlson Thank you very much for your clarification; the callback world is still far away for me, but I am looking at your code to learn it. Thank you again.
Jacob Williams
@jacobwilliams
@szaghi jacobwilliams/Fortran-Astrodynamics-Toolkit@25a0119 is my example. I just pass a C pointer to my class back to Python, which it then can pass back into the Fortran routines where the class procedures can be called. @nncarlson I think this is similar to what you are talking about. Comments and suggestions are welcome.
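The round-trip Jacob describes can be sketched like this (hypothetical names, not his actual toolkit code): Fortran constructs the object and hands Python an opaque C pointer, which Python passes back on every call.

```fortran
module py_bridge
  use iso_c_binding
  implicit none
  type :: body
    real :: mass = 0.0
  end type
contains
  ! Construct the object on the Fortran side and return an opaque
  ! C pointer that Python (via ctypes) treats as a handle.
  function body_new(mass) result(ptr) bind(c, name='body_new')
    real(c_float), value :: mass
    type(c_ptr) :: ptr
    type(body), pointer :: b
    allocate(b)
    b%mass = mass
    ptr = c_loc(b)
  end function

  ! Python passes the handle back; Fortran recovers the object and
  ! can invoke its procedures.
  function body_mass(ptr) result(m) bind(c, name='body_mass')
    type(c_ptr), value :: ptr
    real(c_float) :: m
    type(body), pointer :: b
    call c_f_pointer(ptr, b)
    m = b%mass
  end function
end module

! From Python (ctypes), roughly:
!   lib.body_new.restype = ctypes.c_void_p
!   h = lib.body_new(ctypes.c_float(1.5))
!   lib.body_mass.restype = ctypes.c_float
!   lib.body_mass(ctypes.c_void_p(h))
```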
Stefano Zaghi
@szaghi
@jacobwilliams @nncarlson Jacob, thank you very much for your help and insight, they are very appreciated. Neil, thank you too for sharing your work, it is of great inspiration.
Izaak "Zaak" Beekman
@zbeekman

Hi @/all, Just wanted to let you know that you can now try OpenCoarrays in the cloud via Binder. It is implemented as a kernel for Jupyter over at https://github.com/sourceryinstitute/jupyter-CAF-kernel. You can launch the binder (which also has python, Julia, and R kernels installed) using this button: Binder.

Navigate to the index.ipynb file to run a demo, or create a new notebook using the Coarray Fortran kernel and run your own experimental code after seeing a few tutorial details in the index.ipynb file. If you just want to skip straight to that file, use this link: https://bit.ly/TryCoarrays. To get to the full-on Binder instance (same as the button), go to https://bit.ly/CAF-Binder

Jacob Williams
@jacobwilliams
That is amazing! Great work!
Stefano Zaghi
@szaghi
Wonderful! Great Work!
Izaak "Zaak" Beekman
@zbeekman
Also, I just want to add that I installed a kernel-multiplexing kernel: allthekernels. If you use that as the main notebook kernel, then you specify the kernel for each cell using >kernel-name at the top of the cell. So you could create a notebook with Python, Fortran, Julia, R, etc. cells. Perhaps this could be useful for computing some data in one language (Fortran) and then plotting and/or post-processing it in another language (Python or R).
Stefano Zaghi
@szaghi

@jeffhammond

Dear Jeff,
I read this comment. It is very interesting for me; I would like to add some Autotools-like capabilities to FoBiS. I know about your long experience in the field, whereas my knowledge of Autotools is near zero. Can you point me to some good references about the right way to identify compilers and their features? For example, I am now trying to implement in FoBiS a simple feature that checks if a compiler supports the iso_10646 character kind; my idea is that FoBiS creates a simple test on the fly, invokes selected_char_kind, prints the result and captures it in order to understand if the compiler supports it. Is this the right way (similar to the Autotools one)?

Thank you in advance.

My best regards.

Jeff Hammond
@jeffhammond
@szaghi I favor the autotools style of attempting to compile code and making decisions based upon that. Autotools is often too fine-grain for most applications. if an app requires features A B C D, then write a simple source that uses all of them. if that source compiles, use the compiler. if not, throw an error. while i use autotools for a lot of system software, in most cases i would be fine to just test if the compiler supports C99 and POSIX, because that is what i need.
a lot of what buildsystems do is punish developers for supporting terrible platforms. i mean, if your computer doesn't have a C99 compiler, maybe you should just throw it in the trash, no? Microsoft refuses to support modern C in MSVC, and their users should revolt rather than work around this nonsense. Intel and Clang support C99 on Windows. anyways, i have strong feelings about crappy programming environments. they are a waste of everyone's time and should be ignored. i recently tried out Flang based on PGI. it's 2017 and they don't support basic features of Fortran 2008. no one who cares about modern Fortran should support this compiler.
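The compile-and-run probe Stefano describes can be as small as this; a sketch of the on-the-fly test (the driver would compile it, run it, and capture the printed kind value):

```fortran
! probe_ucs4.f90 -- a minimal feature probe in the autotools spirit:
! selected_char_kind returns a negative value when the compiler does
! not support the ISO 10646 (UCS-4) character kind.
program probe_ucs4
  implicit none
  integer :: ucs4
  ucs4 = selected_char_kind('ISO_10646')
  print '(I0)', ucs4   ! negative means unsupported
end program
```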
Jeff Hammond
@jeffhammond
of course, i am biased because i work for Intel and Intel compilers are pretty good about the latest standards (and where they are not, i have filed bug reports and communicated directly with the team about the necessity of fixing the issues ASAP), and x86 has great support for GCC and Clang, but when i was supporting IBM Blue Gene and the IBM C++ compiler sucked, i advocated vigorously for Clang. https://www.ibm.com/developerworks/community/blogs/fe313521-2e95-46f2-817d-44a4f27eba32/entry/ibm_xl_compilers_for_little_endian_coming?lang=en may or may not be related ;-)
Stefano Zaghi
@szaghi
@jeffhammond Dear Jeff, thank you for your insight; I am going along with your suggested approach: minimal tests (compiled/run/checked on the fly) to verify if the compiler supports a given feature. I agree with your other ideas, but I am less sharp: flang is currently inadequate for my modern Fortran approach, but I would like to support them if they aim to improve the compiler within a FOSS framework :smile:
My best regards
Jeff Hammond
@jeffhammond
@szaghi Indeed, I am a huge fan of LLVM and have long wished for a quality Fortran front-end for it. I have mixed feelings about PGI's front-end being the basis for that, but it's certainly better than what we had before. This community can help them understand the need for Fortran 2008+ support. Hopefully folks will also encourage them to support OpenMP 4.5 as well.
Stefano Zaghi
@szaghi
@jeffhammond :+1:
Stefano Zaghi
@szaghi

Dear @/all

I have just realized that it is possible to use gfortran 7.1.0 on Travis CI by simply selecting dist: trusty as your container (it will use the container based on ubuntu 14.xy rather than 12.xy).

My best regards.

P.S. the following is extracted from one of my .travis.yml configuration files

language: generic

sudo: false
dist: trusty

cache:
  apt: true
  pip: true
  directories:
    - $HOME/.cache/pip
    - $HOME/.local

addons:
  apt:
    sources:
      - ubuntu-toolchain-r-test
    packages:
      - gfortran-7
      - binutils 
...
install:
  - |
    if [[ ! -d "$HOME/.local/bin" ]]; then
      mkdir "$HOME/.local/bin"
    fi
  - export PATH="$HOME/.local/bin:$PATH"
  - export FC=/usr/bin/gfortran-7
  - ln -fs /usr/bin/gfortran-7 "$HOME/.local/bin/gfortran" && gfortran --version
  - ls -l /usr/bin/gfortran-7
  - ln -fs /usr/bin/gcov-7 "$HOME/.local/bin/gcov" && gcov --version
...
victorsndvg
@victorsndvg
:+1: Thanks @szaghi
Jacob Williams
@jacobwilliams
@/all If anyone is interested, I just set up a git repo here as a potential place to collaborate on writing feature proposals to submit to WG5 for the next Fortran standard. I think it's very clear from this thread that Usenet is not the future of language design discussions. :) Perhaps getting something started at the grass roots level using modern tools is the way to get the Fortran user community engaged in the process.
Stefano Zaghi
@szaghi
@jacobwilliams You are my hero! I am away due to a very important public competition for a permanent position, but I'll come back soon. Great initiative!
Rand Huso
@rchuso
Good morning everyone. First time here, and it's because I have a Fortran+polymorphism+MPI question. I'm getting unexpected results transferring an object over MPI. The object I'm sending EXTENDS an ABSTRACT object, and I'm using a CLASS pointer to the base object in the MPI_Send and Receive. Data from the base object is transferred, but not from the extended object -- unless I'm using gfortran-7 and OpenMPI 2.1.1 built with gfortran-7 and gcc-7, where data from the extended object is transferred but not the data in the base class (which I find very strange). I've tried this using the Intel MPI (Version 2017 Update 1 Build 20161016 (id: 16418)) with ifort version 17.0.1 20161005, and gfortran-7 with a couple of builds of OpenMPI. Any takers? I've got a 136-line Fortran program to demonstrate the problem. I'd like to know if I'm doing something wrong, or if I'm just expecting too much from the compiler and library implementers.
Stefano Zaghi
@szaghi
Dear Rand
Stefano Zaghi
@szaghi
Welcome here. Others will give you more insight, but from my experience you're living dangerously... This sounds like a very cutting-edge MPI application. In my MPI code I usually send/receive basic types; I am not so confident in current implementations. Recently I switched to CAF, and with coarrays it seems safer and more natural to communicate OO data. Anyhow, please share your test, I'll read it with interest. My best regards.
Rand Huso
@rchuso
Hello Stefano. "living dangerously" -- I like that. With my C work processing seismic survey data (sizes up to PB, running on some of the largest privately owned supercomputers in the world, like Total.com's), I'm currently the fastest in the industry (if I understand what our customers are saying -- my applications are 3 to 5 times faster than those of CGG and others; see the GLOBE Claritas web site, part of GNS Science, for some details). I'm able to do this because of how I can abstract some of the complexity of MPI for the applications I wrote (like 3D Kirchhoff time migration -- seismic tomography), and I'm trying to do the same thing with Fortran. I just want to be able to send and receive objects that have a base class. What really surprises me is the different behaviour I'm seeing with MPICH and OpenMPI using gfortran 7.1.0 vs earlier versions. I'm in the process of changing my test routine to help me track down the problem, and will include it here when ready. Thanks.
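As an editor's aside: passing a polymorphic object as an MPI choice buffer is not interoperable, so what lands on the wire depends on each compiler's internal descriptor layout, which is consistent with the asymmetric behaviour Rand reports. A portable alternative (a sketch with invented types, not Rand's actual program) is to communicate the concrete extended type directly:

```fortran
program send_concrete
  use mpi
  implicit none
  type :: base
    real :: x(3) = 0.0
  end type
  ! The concrete extension; its storage layout is fixed at compile time.
  type, extends(base) :: particle
    real :: v(3) = 0.0
  end type
  type(particle) :: p
  integer :: rank, ierr, nbytes

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  nbytes = storage_size(p)/8   ! storage_size is in bits

  ! Send/receive the concrete type, not a CLASS pointer: between
  ! processes built with the same compiler, the raw bytes of a
  ! non-polymorphic, non-allocatable derived type transfer intact.
  if (rank == 0) then
    p%x = [1., 2., 3.]; p%v = [4., 5., 6.]
    call MPI_Send(p, nbytes, MPI_BYTE, 1, 0, MPI_COMM_WORLD, ierr)
  else if (rank == 1) then
    call MPI_Recv(p, nbytes, MPI_BYTE, 0, 0, MPI_COMM_WORLD, &
                  MPI_STATUS_IGNORE, ierr)
  end if

  call MPI_Finalize(ierr)
end program
```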
Rand Huso
@rchuso

I got it working.

Well, it turns out the code is too large to enter here. It's at 144 lines, and successfully runs on OpenMPI 2.1 with gfortran 7.1. Is there a way to include it here for others to see?