Hello Roland, I am very glad that you find TNL potentially useful. The solver from the paper is not part of TNL yet; it is very experimental. TNL does not have any systematic support for finite elements at this moment, which is why we keep the MHFEM solver separate. However, there is a robust data structure for unstructured numerical meshes with support for GPUs and MPI. This could help you write the two-phase flow solver. If you need any help with TNL, do not hesitate to ask.
Roland Grinis
@grinisrit
Thank you very much Tomas, that makes perfect sense. Do you still have the GMRES solvers (MGSR and/or CWY) implemented with GPU support in TNL?
Tomáš Oberhuber
@oberhuber.tomas_gitlab
Yes, Roland, you can find it here - https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/blob/develop/src/TNL/Solvers/Linear/GMRES.h . Unfortunately, it is not documented yet, but I plan to write the documentation within the next few weeks. It is pretty easy to use, so I am sure you will easily understand how to work with it; if not, feel free to ask. The MGSR variant seems to be better than CWY. There is one more thing I should warn you about, to be fair: currently, TNL has no efficient preconditioner for GPUs. Our experiments show that GMRES with ILUT on the CPU is more or less as fast as GMRES on the GPU with a Jacobi (diagonal) preconditioner. We also tested BDDC on the CPU, and I think it is much faster than the GPU solvers. So preconditioners for GPUs are becoming our priority; we are already working on one, but it has no tests so far and will take some time.
Roland Grinis
@grinisrit
Thank you a lot for this description. Yes, I think I will try to navigate directly through the source code; no worries regarding the docs (I guess examples are more important, and you have quite a few). Looking forward to the updates regarding GPU support!
Tomáš Oberhuber
@oberhuber.tomas_gitlab
Great ;). Fortunately, we have examples on sparse matrices, which will be more important for you.
Roland Grinis
@grinisrit
I imagine on CPU you still rely on BLAS? If yes, I imagine I should link against an explicit implementation (e.g. MKL) for TNL to pick it up?
Tomáš Oberhuber
@oberhuber.tomas_gitlab
No no, we do not use BLAS. You do not need to link against any implementation. Which operations from BLAS would you like to use?
Roland Grinis
@grinisrit
Ok, fair enough. Just some basic linear algebra for dense matrices, like LU, SVD, QR and eigen-decompositions, crops up here and there. But it's fine; I am actually happy I don't have to worry about linking BLAS/LAPACK.
Tomáš Oberhuber
@oberhuber.tomas_gitlab
Ok, we do not have SVD, we have LU only on the CPU, I think we could help you with QR, and we do not have any algorithm for eigen-decomposition. But it should not be an issue to combine TNL with BLAS. When it comes to BLAS Level 1, I think all operations are somehow implemented in TNL, based on expression templates and parallel reduction. BLAS Level 2 should also be more or less covered by TNL, but Level 3 is still something we are working on.
Roland Grinis
@grinisrit
Hey! The tutorials page is down, do you have a PDF version of them?
Tomáš Oberhuber
@oberhuber.tomas_gitlab
Hi Roland, there is an issue with our system for deployment of the documentation. I have already asked Kuba to fix it. Meanwhile, if you execute ./install doc in your TNL directory, it will build the documentation locally on your system. You can then find it in the Documentation/html folder.
Roland Grinis
@grinisrit
Nice, thank you!
Tomáš Oberhuber
@oberhuber.tomas_gitlab
Roland, the online documentation is fixed, so you can use it now.