Tomáš Oberhuber
@oberhuber.tomas_gitlab
Welcome, all TNL users :).
Roland Grinis
@grinisrit
Hello, I am considering using TNL in a project involving two-phase flow in heterogeneous media. I came across a nice paper by you on this topic in Comp. Phys. Comm. 2019, but I couldn't find it among the examples you provide. Do you have an example similar to dumux https://git.iws.uni-stuttgart.de/dumux-repositories/dumux/-/blob/master/examples/2pinfiltration/README.md#part-1-two-phase-infiltration-set-up ? Thanks
Tomáš Oberhuber
@oberhuber.tomas_gitlab
Hello Roland, I am very glad that you find TNL potentially useful. The solver from the paper is not part of TNL yet; it is very experimental. TNL does not have any systematic support for finite elements at the moment, which is why we keep the MHFEM solver apart. However, there is a robust data structure for unstructured numerical meshes with support for GPUs and MPI, which could help you write the two-phase flow solver. If you need any help with TNL, do not hesitate to ask.
Roland Grinis
@grinisrit
Thank you very much Tomas, that makes perfect sense. Do you still have the GMRES solvers (MGSR and/or CWY) implemented with GPU support in TNL?
Tomáš Oberhuber
@oberhuber.tomas_gitlab
Yes, Roland, you can find it here - https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/blob/develop/src/TNL/Solvers/Linear/GMRES.h . Unfortunately, it is not documented yet, but I plan to write the documentation within the next few weeks. It is pretty easy to use, so I am sure you will easily understand how to use it; if not, feel free to ask. The MGSR variant seems to be better than CWY. There is one more thing I should warn you about, to be fair: currently, TNL has no efficient preconditioner for GPUs. Our experiments show that GMRES with ILUT on the CPU is more or less as fast as GMRES on the GPU with a Jacobi (diagonal) preconditioner. We have also tested BDDC on the CPU, and I think it is much faster than the GPU. So preconditioners for GPUs are becoming our priority; we are already working on one, but we have no tests so far and it will take some time.
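For reference, a minimal sketch of how the GMRES class linked above might be driven, assuming the setMatrix/solve pattern discussed later in this thread; the assembly calls (setRowCapacities, setElement) and the RowsCapacitiesType typedef are assumptions and may differ from the actual API:

```cpp
// Minimal sketch, not verified against the linked header: assemble a small
// tridiagonal system and hand it to GMRES via a shared pointer.
#include <memory>
#include <TNL/Devices/Host.h>
#include <TNL/Containers/Vector.h>
#include <TNL/Matrices/SparseMatrix.h>
#include <TNL/Solvers/Linear/GMRES.h>

using MatrixType = TNL::Matrices::SparseMatrix< double, TNL::Devices::Host >;
using VectorType = TNL::Containers::Vector< double, TNL::Devices::Host >;

int main()
{
   const int n = 5;

   // Assemble a small tridiagonal test matrix (assumed API).
   auto A = std::make_shared< MatrixType >();
   A->setDimensions( n, n );
   typename MatrixType::RowsCapacitiesType rowCapacities( n );
   rowCapacities.setValue( 3 );
   A->setRowCapacities( rowCapacities );
   for( int i = 0; i < n; i++ ) {
      if( i > 0 )
         A->setElement( i, i - 1, -1.0 );
      A->setElement( i, i, 2.0 );
      if( i < n - 1 )
         A->setElement( i, i + 1, -1.0 );
   }

   VectorType b( n ), x( n );
   b.setValue( 1.0 );   // right-hand side
   x.setValue( 0.0 );   // initial guess

   TNL::Solvers::Linear::GMRES< MatrixType > solver;
   solver.setMatrix( A );   // the solver keeps a shared pointer to the matrix
   solver.solve( b, x );    // iterate on A * x = b with default stopping criteria
}
```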
Roland Grinis
@grinisrit
Thank you a lot for this description. Yes, I think I will try to navigate directly through the source code, no worries regarding the docs (I guess examples are more important, and you have quite a few). Looking forward to the updates regarding GPU support!
Tomáš Oberhuber
@oberhuber.tomas_gitlab
Great ;). Fortunately, we have examples on sparse matrices which will be more important for you.
Roland Grinis
@grinisrit
I imagine on the CPU you still rely on BLAS? If yes, I suppose I shall link against an explicit implementation (e.g. MKL) for TNL to pick it up?
Tomáš Oberhuber
@oberhuber.tomas_gitlab
No no, we do not use BLAS. You do not need to link against any implementation. Which operations from BLAS would you like to use?
Roland Grinis
@grinisrit
Ok, fair enough. Just some basic linear algebra for dense matrices: LU, SVD, QR, eigen-decompositions crop up here and there. But it's fine, I am actually happy I don't have to worry about linking BLAS/LAPACK.
Tomáš Oberhuber
@oberhuber.tomas_gitlab
Ok, we do not have SVD, LU is only on the CPU, I think we could help you with QR, and we do not have any algorithm for eigen-decomposition. But it should not be an issue to combine TNL with BLAS. When it comes to BLAS Level 1, I think all operations are implemented in TNL in some form, based on expression templates and parallel reduction. BLAS Level 2 should also be more or less covered by TNL, but Level 3 is still something we are working on.
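As an illustration of the Level-1 style operations mentioned above, a short sketch using TNL vector expression templates; the free functions TNL::dot and TNL::l2Norm are assumed from this description rather than checked against the headers:

```cpp
// Sketch of BLAS Level 1 style operations with TNL vectors via expression
// templates; the names TNL::dot and TNL::l2Norm are assumptions.
#include <iostream>
#include <TNL/Devices/Host.h>
#include <TNL/Containers/Vector.h>

using VectorType = TNL::Containers::Vector< double, TNL::Devices::Host >;

int main()
{
   const int n = 1000;
   VectorType x( n ), y( n ), z( n );
   x.setValue( 1.0 );
   y.setValue( 2.0 );

   const double a = 0.5;
   z = a * x + y;   // axpy-like update: the expression template fuses this into one loop

   const double s    = TNL::dot( x, y );   // scalar product via parallel reduction
   const double norm = TNL::l2Norm( z );   // Euclidean norm via parallel reduction

   std::cout << "dot = " << s << ", norm = " << norm << std::endl;
}
```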
Tomáš Oberhuber
@oberhuber.tomas_gitlab
Roland, we have just finished the documentation on linear solvers. You can find it here - https://mmg-gitlab.fjfi.cvut.cz/doc/tnl/tutorial_Linear_solvers.html . There are several examples which might help you.
Roland Grinis
@grinisrit
That's wonderful - thank you very much!
Roland Grinis
@grinisrit
Hey! The tutorials page is down, do you have a pdf version of them?
Tomáš Oberhuber
@oberhuber.tomas_gitlab
Hi Roland, there is some issue with our system for deployment of the documentation. I have already asked Kuba to fix it. Meanwhile, if you execute ./install doc in your TNL directory, it will build the documentation locally on your system. You will then find it in the Documentation/html folder.
Roland Grinis
@grinisrit
Nice thank you
Tomáš Oberhuber
@oberhuber.tomas_gitlab
Roland, the online documentation is fixed so you can use it.
Roland Grinis
@grinisrit
Is it possible to dynamically add elements, refine an unstructured mesh? Combine several meshes together?
Tomáš Oberhuber
@oberhuber.tomas_gitlab
No, the mesh is completely static; it does not support any refinement. Combining meshes is not possible either. What kind of mesh combining would you like to do?
Roland Grinis
@grinisrit
I understand that not being able to mutate a mesh is good design and enables performance, but maybe there is an easy way to construct a new mesh from the old one by refining it. You have a lot of nice examples on how to initialise matrices (sparse and dense); is there something like that for unstructured meshes? For the moment I only manage to initialise a mesh by reading it from a file.
Tomáš Oberhuber
@oberhuber.tomas_gitlab
Yes, that could be the best solution at the moment - create a new mesh. The mesh can only be read from a file for now, but I think it should not be an issue to modify this so that you could create a mesh algorithmically. I have to ask Kuba; maybe we could find a solution for it ;).
Jakub Klinkovský
@lahwaacz_gitlab
TNL has a tool for uniform mesh refinement (it works by creating a new mesh using the MeshBuilder class): https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/blob/develop/src/Tools/tnl-refine-mesh.cpp
The internals are in the getRefinedMesh function and the EntityRefiner class: https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/blob/develop/src/TNL/Meshes/Geometry/getRefinedMesh.h https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/blob/develop/src/TNL/Meshes/Geometry/EntityRefiner.h
As Tomáš said, the Mesh class itself is static, so the modifications need to go through the mesh initialization process, which will be the bottleneck. It can be optimized by using a minimal mesh config to avoid unnecessary data and initialization steps, though.
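To make the "create a mesh algorithmically" idea concrete, a rough sketch of building a tiny triangle mesh in code with MeshBuilder; the DefaultConfig/Topologies typedefs and the builder method names (setPointsCount, setCellsCount, getCellSeed, setCornerId) are assumptions based on the class linked above and may not match the current API exactly:

```cpp
// Rough, unverified sketch: two triangles covering the unit square,
// built programmatically and passed through the static mesh initialization.
#include <TNL/Meshes/Mesh.h>
#include <TNL/Meshes/MeshBuilder.h>
#include <TNL/Meshes/DefaultConfig.h>
#include <TNL/Meshes/Topologies/Triangle.h>

using MeshType  = TNL::Meshes::Mesh<
   TNL::Meshes::DefaultConfig< TNL::Meshes::Topologies::Triangle > >;
using PointType = typename MeshType::PointType;

int main()
{
   TNL::Meshes::MeshBuilder< MeshType > builder;
   builder.setPointsCount( 4 );
   builder.setCellsCount( 2 );

   // Unit square split into two triangles along the diagonal (0,0)-(1,1).
   builder.setPoint( 0, PointType( 0.0, 0.0 ) );
   builder.setPoint( 1, PointType( 1.0, 0.0 ) );
   builder.setPoint( 2, PointType( 1.0, 1.0 ) );
   builder.setPoint( 3, PointType( 0.0, 1.0 ) );

   builder.getCellSeed( 0 ).setCornerId( 0, 0 );
   builder.getCellSeed( 0 ).setCornerId( 1, 1 );
   builder.getCellSeed( 0 ).setCornerId( 2, 2 );

   builder.getCellSeed( 1 ).setCornerId( 0, 0 );
   builder.getCellSeed( 1 ).setCornerId( 1, 2 );
   builder.getCellSeed( 1 ).setCornerId( 2, 3 );

   MeshType mesh;
   builder.build( mesh );   // runs the one-time (static) mesh initialization
}
```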
Roland Grinis
@grinisrit
Thanks Jakub that's helpful
Roland Grinis
@grinisrit
Among all your examples, do you have one with the mass lumping technique?
Roland Grinis
@grinisrit
Why does the solver's setMatrix take only shared pointers and not a matrix view? Is there a way to efficiently get a SparseMatrix from a SparseMatrixView (initially obtained, say, from wrapCSRMatrix)?
Roland Grinis
@grinisrit
So, to formulate it more precisely: I have some external matrix in CSR format that I want to wrap into your SparseMatrix and feed to your linear solver. Is there a way to do it efficiently? Do I have to copy the data (because the solver might mutate the matrix)?
Tomáš Oberhuber
@oberhuber.tomas_gitlab
Hi Roland, that's a good question. It seems like a mistake in the design of the linear solver; I have to discuss it with Jakub.
Or maybe, could you try to substitute the MatrixView as a Matrix in the linear solver? For example, using Solver = GMRES< SparseMatrixView< .. > >. The MatrixPointer would then be std::shared_ptr< SparseMatrixView >.
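An untested sketch of that workaround, i.e. instantiating GMRES directly on the view type returned by wrapCSRMatrix so the external CSR arrays are used in place; the wrapCSRMatrix signature, its header, and whether the solver really accepts a view type are assumptions, not verified against TNL:

```cpp
// Untested sketch of the suggested workaround; argument order of
// wrapCSRMatrix and the GMRES< MatrixView > instantiation are assumed.
#include <memory>
#include <TNL/Devices/Host.h>
#include <TNL/Containers/Vector.h>
#include <TNL/Matrices/MatrixWrapping.h>
#include <TNL/Solvers/Linear/GMRES.h>

using Device = TNL::Devices::Host;

int main()
{
   // External CSR data for a 3x3 identity matrix, e.g. produced by another library.
   int    rowPointers[]   = { 0, 1, 2, 3 };
   int    columnIndexes[] = { 0, 1, 2 };
   double values[]        = { 1.0, 1.0, 1.0 };

   auto view = TNL::Matrices::wrapCSRMatrix< Device >( 3, 3, rowPointers, values, columnIndexes );
   using ViewType = decltype( view );

   TNL::Containers::Vector< double, Device > b( 3 ), x( 3 );
   b.setValue( 1.0 );
   x.setValue( 0.0 );

   TNL::Solvers::Linear::GMRES< ViewType > solver;
   solver.setMatrix( std::make_shared< ViewType >( view ) );   // shared pointer to the view, as suggested
   solver.solve( b, x );
}
```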