FEOperator(res,U,V) helped to remove the hiccup. What is the typo for dS? I currently have:
function dS(∇du,∇u)
  Cinv = inv(C(F(∇u)))
  _dE = dE(∇du,∇u)
  λ*(Cinv⊙_dE)*Cinv + 2*(μ-λ*log(J(F(∇u))))*Cinv⋅_dE⋅(Cinv')
end
@Kevin-Mattheus-Moerman Looking at it quickly, I think it should be
function dS(∇du,∇u)
  Cinv = inv(C(F(∇u)))
  _dE = dE(∇du,∇u)
  λ*J(F(∇u))*(Cinv⊙_dE)*Cinv + 2*(μ-λ*log(J(F(∇u))))*Cinv⋅_dE⋅(Cinv')
end
with the extra J coming from the derivative ∂J/∂E (assuming I didn't mess up any signs). Does this help?
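(For reference, a standard identity that is not part of the original exchange: with J = √(det C) and E = ½(C − I), one has ∂(det C)/∂C = det(C) C⁻¹, hence ∂J/∂E = 2·∂J/∂C = J C⁻¹ and ∂(log J)/∂E = C⁻¹; that J C⁻¹ is the extra J factor referred to above.)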
Instead of op = AffineFEOperator(a,l,U,V), use:
du = get_trial_fe_basis(U)
dv = get_fe_basis(V)
uhd = zero(U)
data = collect_cell_matrix_and_vector(U,V,a(du,dv),l(dv),uhd)
Tm = SparseMatrixCSC{Float64,Int32}
Tv = Vector{Float64}
assem = SparseMatrixAssembler(Tm,Tv,U,V)
A, b = assemble_matrix_and_vector(assem,data) # This is the assembly loop + allocation and compression of the matrix
assemble_matrix_and_vector!(A,b,assem,data) # This is the in-place assembly loop on a previously allocated matrix/vector.
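For completeness, here is a minimal self-contained sketch of the same steps with an assumed Poisson-style a and l (the model, spaces, and forms below are illustrative and not taken from the message above):
using Gridap
using Gridap.FESpaces  # assembly-level API
using SparseArrays
model = CartesianDiscreteModel((0,1,0,1),(8,8))
reffe = ReferenceFE(lagrangian,Float64,1)
V = TestFESpace(model,reffe;dirichlet_tags="boundary")
U = TrialFESpace(V,0.0)
Ω = Triangulation(model)
dΩ = Measure(Ω,2)
a(u,v) = ∫( ∇(v)⋅∇(u) )*dΩ  # assumed bilinear form
l(v) = ∫( 1.0*v )*dΩ        # assumed linear form
du = get_trial_fe_basis(U)
dv = get_fe_basis(V)
uhd = zero(U)
data = collect_cell_matrix_and_vector(U,V,a(du,dv),l(dv),uhd)
assem = SparseMatrixAssembler(SparseMatrixCSC{Float64,Int32},Vector{Float64},U,V)
A, b = assemble_matrix_and_vector(assem,data)
uh = FEFunction(U,A\b)  # free dof values wrapped as an FE function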
Hi,
thanks for the excellent work put into Gridap.jl. I'm setting up a numerical homogenization simulation with periodic boundary conditions in two dimensions. The model was generated with GMSH; here is an extract of the *.geo file (I know I can use the Julia interface to GMSH as well):
bottom() = Line In BoundingBox{-0.00075, -0.00075, -0.00075, 5.00075, 0.00075, 0.00075};
top() = Line In BoundingBox{-0.00075, 4.99925, -0.00075, 5.00075, 5.00075, 0.00075};
Periodic Line{top()} = {bottom()} Translate{0, 5.0, 0};
left() = Line In BoundingBox{-0.00075, -0.00075, -0.00075, 0.00075, 5.00075, 0.00075};
right() = Line In BoundingBox{4.99925, -0.00075, -0.00075, 5.00075, 5.00075, 0.00075};
Periodic Line{right()} = {left()} Translate{5.0, 0, 0};
The model is then retrieved from within Julia with
model = GmshDiscreteModel(joinpath("..", "validation", "f=0.3", "N=25", "h=0.075", "00001.msh"))
and the FE spaces are defined as follows (as indicated in Tutorial 12, periodic BCs are automatically accounted for):
order = 1
reffe = ReferenceFE(lagrangian, Float64, order)
V = TestFESpace(model, reffe; conformity=:H1, constraint=:zeromean)
U = V
I then solve a standard conductivity problem. The solution u₁ does not, however, seem to be periodic:
u₁(Point(0., 1.)) = 0.4216567688688445
u₁(Point(5., 1.)) = -0.49267721257058655
What might I have done wrong? How can I check that periodic boundary conditions are indeed enforced by Gridap upon reading the periodic GMSH mesh?
Thanks,
Sébastien
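(A possible way to extend the spot check above, reusing u₁ from the message; this sketch is not from the original thread. Sample matching points on opposite sides of the box and compare:)
for y in range(0.5, 4.5, length=9)
  println(abs(u₁(Point(0.0, y)) - u₁(Point(5.0, y))))  # should be ~0 if u₁ is x-periodic
end
for x in range(0.5, 4.5, length=9)
  println(abs(u₁(Point(x, 0.0)) - u₁(Point(x, 5.0))))  # should be ~0 if u₁ is y-periodic
end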
Hello, I am new to FEM simulation with Julia. To get started, I would like to build a model that connects multiple boxes via contact conditions (tied and frictionless contact). Can I use Gridap for this? Unfortunately, I have not found any example.
Thank you
Hello, I want to study the turbulent channel case. I have found something strange when creating a geometry with periodic boundary conditions on multiple processors.
Taking Tutorial 16 as an example, I just replace the model generation line with
model = CartesianDiscreteModel(domain,mesh_partition,isperiodic = (true, false))
to create a geometry with periodic boundary conditions in the X direction (single-process), or with
model = CartesianDiscreteModel(parts,domain,mesh_partition, isperiodic = (true, false))
to create it in the multi-process case.
However, the resulting geometry is different in the two cases. The single-process case is apparently the right one. In the multi-process case, the geometry is both 'closed' and has an extra cell in the periodic direction.
I have noticed that this kind of problem appears when there is more than one part in the periodic direction (in this case with a partition of (1,2), launching the script on 2 cores). I was wondering what I am doing incorrectly.
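(As a side check, a small sketch with made-up domain and mesh sizes, not from the message above: for the serial constructor, the periodic identification can be verified by comparing vertex counts, since the two x-boundaries should share vertices while the cell count stays the same.)
using Gridap
domain = (0, 2π, -1, 1)   # illustrative values
cells = (16, 16)
model_std = CartesianDiscreteModel(domain, cells)
model_per = CartesianDiscreteModel(domain, cells, isperiodic=(true, false))
println((num_cells(model_std), num_vertices(model_std)))
println((num_cells(model_per), num_vertices(model_per)))  # same cell count, fewer vertices expected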
Hi all,
I converted a fun evening project into a little blog post.
https://jonasisensee.de/posts/2022-04-29-amazeing-fem/
Hello everyone, as a test I tried to solve the transient Poisson equation in parallel (distributed). I merely copied the code of Tutorial 17 into the first example of Tutorial 16. Then I create the model on multiple cores with the line at https://github.com/carlodev/Channel_flow/blob/9878fe7d1943a663bc1bdbab9b4fdd9de04bc317/Channel_Multicore_PETSc/TutorialTest/D1_transient.jl#L13
But I got this error:
MethodError: no method matching HomogeneousTrialFESpace(::GridapDistributed.DistributedSingleFieldFESpace{MPIData{Gridap.FESpaces.UnconstrainedFESpace{Vector{Float64}, Gridap.FESpaces.NodeToDofGlue{Int32}}, 2}, PRange{MPIData{IndexSet, 2}, Exchanger{MPIData{Vector{Int32}, 2}, MPIData{Table{Int32}, 2}}, Nothing}, PVector{Float64, MPIData{Vector{Float64}, 2}, PRange{MPIData{IndexSet, 2}, Exchanger{MPIData{Vector{Int32}, 2}, MPIData{Table{Int32}, 2}}, Nothing}}})
Is that a bug or am I doing something incorrectly? If I don't split the model across multiple cores, it works.
Thank you so much for your help