Tomáš Oberhuber
@oberhuber.tomas_gitlab
Yes, Roland, you can find it here - https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/blob/develop/src/TNL/Solvers/Linear/GMRES.h . Unfortunately, it is not documented yet, but I plan to write the documentation within the next few weeks. It is pretty easy to use, so I am sure you will easily understand how to use it. If not, feel free to ask. The MGSR variant seems to be better than CWY. And there is one more thing I should warn you about, to be fair. Currently, TNL has no efficient preconditioner for GPUs. Our experiments show that GMRES with ILUT on CPU is more or less as fast as GMRES on GPU with a Jacobi (diagonal) preconditioner. We also tested BDDC on CPU and I think it is much faster than the GPU. So preconditioners for GPUs are becoming our priority; we are already working on one, but we have no tests so far and it will take some time.
Roland Grinis
@grinisrit
Thank you a lot for this description. Yes, I think I will try to navigate directly through the source code, no worries regarding the docs (I guess examples are more important, and you have quite a few). Looking forward to the updates regarding GPU support!
Tomáš Oberhuber
@oberhuber.tomas_gitlab
Great ;). Fortunately, we have examples on sparse matrices, which will be more important for you.
Roland Grinis
@grinisrit
I imagine on CPU you still rely on BLAS? If yes, I imagine I shall link against an explicit implementation (e.g. MKL) for TNL to pick it up?
Tomáš Oberhuber
@oberhuber.tomas_gitlab
No no, we do not use BLAS. You do not need to link against any implementation. Which BLAS operations would you like to use?
Roland Grinis
@grinisrit
Ok, fair enough. Just some basic linear algebra for dense matrices like LU, SVD, QR, and eigen-decompositions crops up here and there. But it's fine, I am actually happy I don't have to worry about linking BLAS/LAPACK.
Tomáš Oberhuber
@oberhuber.tomas_gitlab
Ok, we do not have SVD, we have LU only on CPU, I think we could help you with QR, and we do not have any algorithm for eigen-decomposition. But it should not be an issue to combine TNL with BLAS. When it comes to BLAS Level 1, I think all operations are somehow implemented in TNL based on expression templates and parallel reduction. BLAS Level 2 should also be more or less covered by TNL, but Level 3 is still something we are working on.
Tomáš Oberhuber
@oberhuber.tomas_gitlab
Roland, we have just finished the documentation on linear solvers. You can find it here - https://mmg-gitlab.fjfi.cvut.cz/doc/tnl/tutorial_Linear_solvers.html . There are several examples which might help you.
Roland Grinis
@grinisrit
That's wonderful - thank you very much!
Roland Grinis
@grinisrit
Hey! The tutorials page is down, do you have a pdf version of them?
Tomáš Oberhuber
@oberhuber.tomas_gitlab
Hi Roland, there is some issue with our system for deployment of the documentation. I have already asked Kuba to fix it. Meanwhile, if you execute ./install doc in your TNL directory, it will create the documentation locally on your system. You may then find it in the Documentation/html folder.
Roland Grinis
@grinisrit
Nice thank you
Tomáš Oberhuber
@oberhuber.tomas_gitlab
Roland, the online documentation is fixed so you can use it.
Roland Grinis
@grinisrit
Is it possible to dynamically add elements, refine an unstructured mesh? Combine several meshes together?
Tomáš Oberhuber
@oberhuber.tomas_gitlab
No, the mesh is completely static, it does not support any refinement. Combining meshes is not possible either. What kind of mesh combining would you like to do?
Roland Grinis
@grinisrit
I understand that not being able to mutate a mesh is good design and enables performance, but maybe there is an easy way to construct a new mesh from the old one by refining it. You have a lot of nice examples of how to initialise matrices (sparse and dense); is there something like that for unstructured meshes? For the moment I have only managed to initialise a mesh by reading it from a file.
Tomáš Oberhuber
@oberhuber.tomas_gitlab
Yes, that could be the best solution at the moment - create a new mesh. Right now the mesh can only be read from a file, but I think it should not be an issue to modify this so that you could create a mesh algorithmically. I have to ask Kuba, maybe we can find a solution for it ;).
Jakub Klinkovský
@lahwaacz_gitlab
TNL has a tool for uniform mesh refinement (it works by creating a new mesh using the MeshBuilder class): https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/blob/develop/src/Tools/tnl-refine-mesh.cpp
The internals are in the getRefinedMesh function and the EntityRefiner class: https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/blob/develop/src/TNL/Meshes/Geometry/getRefinedMesh.h https://mmg-gitlab.fjfi.cvut.cz/gitlab/tnl/tnl-dev/-/blob/develop/src/TNL/Meshes/Geometry/EntityRefiner.h
As Tomáš said, the Mesh class itself is static, so the modifications need to go through the mesh initialization process, which will be the bottleneck. It can be optimized by using a minimal mesh config to avoid unnecessary data and initialization steps, though.
Roland Grinis
@grinisrit
Thanks Jakub that's helpful
Roland Grinis
@grinisrit
Among all your examples, do you have one with mass lumping technique?
Roland Grinis
@grinisrit
Why does the solver's setMatrix take only shared pointers and not a matrix view? Is there a way to efficiently get a SparseMatrix from a SparseMatrixView (initially obtained, say, from wrapCSRMatrix)?
Roland Grinis
@grinisrit
So basically, to formulate it more precisely: I have some external matrix in CSR format that I want to wrap into your SparseMatrix and feed to your linear solver. Is there a way to do it efficiently? Do I have to copy the data (because the solver might mutate the matrix)?
Tomáš Oberhuber
@oberhuber.tomas_gitlab
Hi Roland, that's a good question. It seems like a mistake in the design of the linear solver. I have to discuss it with Jakub.
Or maybe, could you try to substitute a MatrixView as the Matrix in the linear solver? For example: using Solver = GMRES< SparseMatrixView< .. > >. The MatrixPointer would then be std::shared_ptr< SparseMatrixView >.
Roland Grinis
@grinisrit
Thanks that actually works, I should have thought about it ))
still I am not sure why you need the shared pointer
if that data is read only
Jakub Klinkovský
@lahwaacz_gitlab
The shared pointer in linear solvers ensures that users do not have to call the setMatrix function manually whenever views to the matrix get invalidated. This happens upon reallocation, especially when matrix size or row capacities change. The approach suggested by Tomáš should work, but of course you need to ensure that the view wrapped by the shared pointer is always valid when the solver is used.
Roland Grinis
@grinisrit
Fair enough, thanks.
Tomáš Oberhuber
@oberhuber.tomas_gitlab
Exactly, Kuba is right. But I have to confess that when the linear solvers are used with a wrapped matrix created outside of TNL, it is not perfectly smooth. I did not realise it until you asked, Roland.
Roland, did you solve the mass lumping? I am not sure if we have an example on it.
Roland Grinis
@grinisrit
I am going to have a look at that following Younes et al.
Roland Grinis
@grinisrit
Yes that's the one
Gregory Dushkin
@GregTheMadMonk

Hello! I've encountered weird behavior when moving TNL meshes. My code would crash if I attempted to move-construct a mesh from another non-empty mesh and then move data back into the original variable. So, something like this:

TNL::Meshes::Mesh<MeshConfig> m1;
// ...
// Initialize m1
// ...
auto m2(std::move(m1));
m1 = std::move(m2); // This line will throw an "Attempted to move data to a nullptr" exception
                    // if m1 was initialized with some data, and will not if m1 was originally empty

This will not happen if I pre-declared m2, like this:

TNL::Meshes::Mesh<MeshConfig> m1, m2;
// Initialize m1
m2 = std::move(m1);
m1 = std::move(m2); // It's ok now

Here's the gist with an example code for this issue: https://gist.github.com/GregTheMadMonk/7b14a538a3a0e147f9ba5510d4831be4.
Is it a bug, or am I doing something that I'm not supposed to do?

Jakub Klinkovský
@lahwaacz_gitlab
Gregory Dushkin
@GregTheMadMonk
Everything seems to be working now! Thank you!
Jakub Klinkovský
@lahwaacz_gitlab
Hey! TNL is now officially on the public gitlab.com instance: https://gitlab.com/tnl-project/tnl Please update the URLs in your local repositories to receive the latest commits on git pull.