These are chat archives for Fortran-FOSS-Programmers/General-Discussion

7th
May 2017
Damian Rouson
@rouson
May 07 2017 04:24
@cmacmackin and @szaghi, trust me that I feel your pain. At the peak of my frustrations around 2010, I was involved directly or indirectly in submitting 50-60 bug reports annually across six compilers. Part of why I encounter bugs less often now is that I lasted through that process, got reasonably speedy responses from some compiler teams, dropped the compilers from vendors that were insufficiently responsive, and went to great lengths to become crafty about funding compiler development. None of those things were straightforward or easy, but I saw them as necessary because Fortran has important features that no other language has and I care most about writing clean code. So much of what I saw in other languages seemed like a crime against humanity. The interpreted languages such as Python are factors of 2-3 slower at best and the compiled languages such as C and C++ lack even basic array manipulation facilities. And no language other than Fortran has a parallel programming model that works in distributed memory. And no other language has support for fault tolerance. To get distributed-memory parallelism and fault tolerance, you could go with MPI, but the MPI being written by almost every scientific programmer I've met will be slower, more complex, and less fault-tolerant than what a Fortran programmer can write with coarray Fortran. I hope you'll think more about how to contribute to gfortran, whether as a developer (almost all the developers are domain scientists -- few are computer scientists and none have any training in compiler development as far as I know) or through organizational funds when you reach a stage when that becomes an option via grants or contracts. GFortran has been developed primarily by volunteers and some gfortran developers would rather not accept pay because they prefer the freedom of being a volunteer, but some do accept pay and it makes a difference in getting bugs fixed in a timely manner. And it takes creativity. 
None of the projects I've used to pay developers had a line item in the budget that read, "Fix gfortran bugs." I had to figure out how to make it happen in support of objectives that did have a line in the budget.
Damian Rouson
@rouson
May 07 2017 04:32
@szaghi, I don't have any great new idea about functional programming in Fortran so you'll be disappointed. I have a set of strategies that were inspired by functional programming and that I frequently employ to make the intention of the code more clear and potentially more optimizable. One is the defined operators and your latest news is discouraging with regard to the performance (recall that I worried that Abstract Calculus might be an anti-pattern for just this reason but you previously reported that Abstract Calculus did not hurt performance based on your experience with FOODIE so I wonder what changed). But I always knew there could be performance penalties associated with user-defined operators and I'm pretty sure I talk about some of those in my book (e.g., related to cache utilization and the ability of modern processors to perform a multiply and add in one clock cycle). Another idea inspired by functional programming relates to the ASSOCIATE statement. I don't think I want to go into detail in this forum just because the back-and-forth takes too much time, but I'd be glad to explain it in a call and it will be in my book. Another thing I'll cover will be the use of the functional-fortran library, of which you are aware. For now, that's it. There's no grand idea here. And then there is the use of PURE. As we all know, Fortran is not a functional programming language, but there are several ways in which Fortran programming can be influenced by functional programming concepts and that's what I mean when I talk about functional programming in Fortran.
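For readers following along, a minimal sketch of the kind of functional-inspired Fortran Damian alludes to — a PURE defined operator on a derived type. The type and procedure names here are purely illustrative, not from any library mentioned in this thread:

```fortran
! Sketch: a pure defined operator (+) on a hypothetical "field" type.
! PURE guarantees no side effects, which helps both readability and
! the optimizer; the result is allocated automatically on assignment.
module field_module
  implicit none
  type :: field
    real, allocatable :: values(:)
  contains
    procedure :: add
    generic :: operator(+) => add
  end type
contains
  pure function add(lhs, rhs) result(total)
    class(field), intent(in) :: lhs
    type(field),  intent(in) :: rhs
    type(field) :: total
    total%values = lhs%values + rhs%values  ! automatic allocation of the result
  end function
end module
```

Note that each such operator call returns a freshly allocated temporary, which is exactly the performance concern discussed below.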
Damian Rouson
@rouson
May 07 2017 04:37
My new book will have two new co-authors: Salvatore Filippone and Sameer Shende. Salvatore has more than 25 years of deep experience in parallel programming and Sameer has more than 15 years of experience in parallel performance analysis. The goal is to have almost every code in the book parallel and almost every code backed by performance analysis. The last thing I'll say -- and then I've got to move on to some other things for a while -- is be careful trading one set of problems for another. For many reasons, you are likely to find more robust compilers for other languages, but you'll trade the compiler bugs for another set of problems in the form of low performance, ease with which you can shoot yourself in the foot, or learning curve (it takes years to be a truly competent C++ programmer, for example, whereas the students in my classes become quite competent and even at the leading edge of Fortran programming in the span of one academic quarter). That's a really powerful statement.
Stefano Zaghi
@szaghi
May 07 2017 05:14

@rouson ,

Dear Damian, as always you are too kind!

trust me that I feel your pain.

I know, but this does not alleviate the pain too much :smile:

I lasted through that process, got reasonably speedy responses from some compiler teams, dropped the compilers from vendors that were insufficiently responsive, and went to great lengths to become crafty about funding compiler development.

I'll try to follow your path, but in my reality searching for gfortran funding is more a dream than a challenge. These days I am evangelizing your idea and trying to make my colleagues who use gfortran for their research aware that it would be ethically and practically important to contribute part of their research funding to the GNU project... but in Italy we do research with almost no funding.

Fortran has important features that no other language has and I care most about writing clean code. So much of what I saw in other languages seemed like a crime against humanity. The interpreted languages such as Python are factors of 2-3 slower at best and the compiled languages such as C and C++ lack even basic array manipulation facilities. And no language other than Fortran has a parallel programming model that works in distributed memory. And no other language has support for fault tolerance. To get distributed-memory parallelism and fault tolerance, you could go with MPI, but the MPI being written by almost every scientific programmer I've met will be slower, more complex, and less fault-tolerant than what a Fortran programmer can write with coarray Fortran.

I agree, and this is why I selected Fortran. But currently this is all true only if I do not use OOP; when OOP comes into play, all the pain highlighted by Chris arises. In the end, for the reasons you summarized and for the effort I have already invested, I'll never stop using Fortran.

I hope you'll think more about how to contribute to gfortran, whether as a developer (almost all the developers are domain scientists -- few are computer scientists and none have any training in compiler development as far as I know) or through organizational funds...

If finding funds is a dream for me, the possibility of contributing to the development of gfortran is even more remote: I am not up to the task. I know very little about C, but the bigger issue is that writing a compiler is an art and I am not an artist, just an oompa loompa.

I don't have any great new idea about functional programming in Fortran so you'll be disappointed. I have a set of strategies that were inspired by functional programming and that I frequently employ to make the intention of the code more clear and potentially more optimizable. One is the defined operators and your latest news is discouraging with regard to the performance (recall that I worried that Abstract Calculus might be an anti-pattern for just this reason but you previously reported that Abstract Calculus did not hurt performance based on your experience with FOODIE so I wonder what changed).

Sure, I remember your surprise, but that benchmark was really different from yesterday's. In FOODIE I compared Abstract Calculus with polymorphic allocatable functions (in which the ODE solver changes at runtime, as do all the operator results) against an identical test without abstract polymorphic operators and without changing solvers at runtime. However, both versions use defined operators: the ACP version has polymorphic allocatable (impure) operators, the other has static (pure) operators returning a concrete type. The performance was identical between the ACP and non-abstract versions, which is in line with the test I made yesterday. What is really different is the comparison between defined operators and intrinsic operators. For these reasons I updated our paper yesterday (a draft will be sent to you soon) and I am planning to add a "performance mode" to FOODIE to allow users to select an operational mode:

  • for rapid ODE solvers development she can safely select normal mode;
  • for using FOODIE in production mode (heavy number crunching) she should select performance mode.
This new performance mode puts on my shoulders (and on the developers of future ODE solvers) the burden of also writing the %integrate_performance version of each solver, but it should be very easy.
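To illustrate the difference between the two modes (this is a hypothetical sketch, not FOODIE's actual API or the real %integrate_performance implementation): in "normal" mode each defined-operator call on the solution type allocates a temporary result, while a "performance" version can update the raw arrays in one fused intrinsic expression:

```fortran
! Hypothetical sketch of one multi-stage update in the two modes.
! "Normal" mode: the state is a derived type with defined operators,
! so an expression like
!     solution = solution + dt * (b1*f1 + b2*f2)
! creates an allocated temporary per operator call.
! "Performance" mode: operate on plain arrays so the compiler can fuse
! the whole right-hand side into a single loop with no temporaries:
subroutine integrate_fast(u, f1, f2, dt, b1, b2)
  real, intent(inout) :: u(:)              ! solution values
  real, intent(in)    :: f1(:), f2(:)      ! stored stage derivatives
  real, intent(in)    :: dt, b1, b2        ! time step and scheme weights
  u = u + dt * (b1*f1 + b2*f2)             ! single fused array expression
end subroutine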
Stefano Zaghi
@szaghi
May 07 2017 05:23

For many reasons, you are likely to find more robust compilers for other languages, but you'll trade the compiler bugs for another set of problems in the form of low performance, ease with which you can shoot yourself in the foot, or learning curve (it takes years to be a truly competent C++ programmer, for example, whereas the students in my classes become quite competent and even at the leading edge of Fortran programming in the span of one academic quarter). That's a really powerful statement.

I agree, and this is why I selected Fortran. When I started to play with CAF it took only a few days to become productive, while I am still not able to be really efficient (namely, really asynchronous) with MPI after years. Fortran is still the most suitable choice for my math, but there is a lot of pain if we want to exploit OOP.
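As a small taste of why CAF has such a gentle learning curve compared with MPI, here is a minimal coarray program (my own illustrative example, not from this discussion) in which image 1 gathers a value from every other image — no explicit message passing at all:

```fortran
! Minimal coarray Fortran: image 1 sums the indices of all images.
! Build with e.g. "caf" from OpenCoarrays, or gfortran -fcoarray=single
! for a serial test run.
program caf_sum
  implicit none
  integer :: total[*]            ! one copy of "total" per image
  integer :: i
  total = this_image()           ! each image stores its own index
  sync all                       ! make every image's value visible
  if (this_image() == 1) then
    do i = 2, num_images()
      total = total + total[i]   ! image 1 reads the others directly
    end do
    print '(a,i0)', 'sum of image indices: ', total
  end if
end program
```

The square-bracket syntax makes the remote access explicit while the runtime handles the communication, which is exactly the contrast with hand-written MPI drawn above.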

I think I'll book you soon for a talk; please speak slowly :smile: (tomorrow I'll meet Alessandro: I am really excited to see his exascale work)

Cheers

P.S. I am very happy to read that Filippone will be your co-author. Your new book promises a lot!

Stefano Zaghi
@szaghi
May 07 2017 09:54

@rouson @cmacmackin ,

I played with operator vs non-operator mode in FOODIE... it seems to confirm the overhead of defined operators, see this:

stefano@thor(11:50 AM Sun May 07) on feature/add-performance-mode [!]
~/fortran/FOODIE 21 files, 2.5Mb
→ time ./build/tests/accuracy/oscillation/oscillation -s adams_bashforth_4 -Dt 0.05 --fast
adams_bashforth_4
    steps:   20000000    Dt:      0.050, f*Dt:      0.000, E(x):  0.464E-09, E(y):  0.469E-09

real    0m5.214s
user    0m4.996s
sys    0m0.216s

stefano@thor(11:51 AM Sun May 07) on feature/add-performance-mode [!]
~/fortran/FOODIE 21 files, 2.5Mb
→ time ./build/tests/accuracy/oscillation/oscillation -s adams_bashforth_4 -Dt 0.05
adams_bashforth_4
    steps:   20000000    Dt:      0.050, f*Dt:      0.000, E(x):  0.464E-09, E(y):  0.469E-09

real    0m10.535s
user    0m10.320s
sys    0m0.216s

I added the fast mode to only the Adams-Bashforth solver for now, but I'll add a similar mode for all solvers tomorrow; it is really simple and, for the end user, the change is almost seamless.

See you soon, happy "domenica" :smile: