    ksripathy
    @ksripathy
    @uekerman Thanks for the reply. I'll make the needed changes. As far as the steady-state results are concerned, the displacement at the watch-point becomes steady around 7 seconds itself. It's just that the x-displacement is off by nearly an order of magnitude. Also, the drag force on the cylinder and flap converges. Only perturbations remain in the lift force, which I managed to minimize by employing a stricter tolerance for the coupling residual. As requested, I am attaching the forces and displacement files:
    Navi
    @nkr0
    Hi, what is the definition of DisplacementDelta? Is it V - V_last_coupling_iteration or is it V - V_last_solver_sub_cycle? Is there a difference in the interpretation for Implicit and Explicit coupling?
    Benjamin Uekermann
    @uekerman
    DisplacementDelta is typically V - V_last_timestep (otherwise IQN does not converge). So, it should be identical for explicit and implicit coupling.
    Navi
    @nkr0
    last_time_step of the solver or of preCICE? That's what I'm confused about.
    So preCICE has coupling iterations, i.e. the iterations where we use checkpoints to go back to the beginning of the time window.
    Benjamin Uekermann
    @uekerman
    ok, I get your problem now. So, if the solver subcycles, you should not update V_last; only update when "write-iteration-checkpoint" is required. So V_last is the value at the end of the last time window.
    OK?
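    (A minimal sketch of this rule, assuming the preCICE v2 SolverInterface API; V, V_last, delta, and coupleDisplacement are illustrative names, not taken from any particular adapter.)

        #include <precice/SolverInterface.hpp>
        #include <vector>

        // The reference value V_last is only refreshed when preCICE requests the
        // iteration checkpoint to be written, i.e. at the start of a new time window.
        // Subcycles and repeated coupling iterations all compute the delta against
        // the same time-window start.
        void coupleDisplacement(precice::SolverInterface &interface,
                                std::vector<double>      &V, // current interface displacement
                                int dataID, const std::vector<int> &vertexIDs, double dt)
        {
            using namespace precice::constants;

            std::vector<double> V_last = V;           // value at the end of the last time window
            std::vector<double> delta(V.size(), 0.0); // DisplacementDelta to write

            while (interface.isCouplingOngoing())
            {
                if (interface.isActionRequired(actionWriteIterationCheckpoint()))
                {
                    V_last = V; // freeze the reference: a new time window starts here
                    interface.markActionFulfilled(actionWriteIterationCheckpoint());
                }

                // ... the solver may subcycle here and update V several times;
                //     V_last stays fixed within the time window ...

                for (std::size_t i = 0; i < V.size(); ++i)
                    delta[i] = V[i] - V_last[i]; // delta w.r.t. the last time window end

                interface.writeBlockVectorData(dataID, static_cast<int>(vertexIDs.size()),
                                               vertexIDs.data(), delta.data());
                dt = interface.advance(dt);

                if (interface.isActionRequired(actionReadIterationCheckpoint()))
                {
                    // coupling iteration has to be repeated: roll V back to the window start
                    V = V_last;
                    interface.markActionFulfilled(actionReadIterationCheckpoint());
                }
            }
        }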
    Navi
    @nkr0
    pointDisplacementFluidPatch is the last written precice-checkpoint.
    no.
    Navi
    @nkr0
    here in DisplacementDelta, https://github.com/precice/openfoam-adapter/blob/master/FSI/DisplacementDelta.C#L55 the buffer is added to pointDisplacementFluidPatch. It's not pointDisplacementFluidPatch = checkpoint + buffer.
    And in Displacement, https://github.com/precice/openfoam-adapter/blob/master/FSI/Displacement.C#L55 pointDisplacementFluidPatch is set to the buffer.
    Benjamin Uekermann
    @uekerman
    I don't know the OpenFOAM adapter well enough to comment on that, I'm afraid. But is the concept clearer now?
    Navi
    @nkr0
    The concept is. But for all adapters to work together there has to be a common definition for DisplacementDelta, i.e., a defined reference point for computing the delta.
    And looking at the OpenFOAM adapter, I think the definition is V - V_last_solver_sub_cycle. But the CalculiX adapter is written as V - V_last_coupling_iteration.
    So when OpenFOAM is expecting the delta w.r.t. its last sub-cycle, CalculiX is sending the delta w.r.t. the last checkpoint.
    Benjamin Uekermann
    @uekerman

    but for all adapters to work together there has to be a common definition for DisplacementDelta, i.e., a defined reference point for computing the delta.

    yes, completely agree. and it should be the value at the end of the last time window (i.e. precice coupling timestep). everything else does not work.

    so when OpenFOAM is expecting the delta w.r.t. its last sub-cycle, CalculiX is sending the delta w.r.t. the last checkpoint.

    ok, sounds like a bug (or missing feature) in the openfoam adapter.
    @MakisH agree?
    could you please open an issue? (or a PR if you already know how to solve the issue).

    Navi
    @nkr0
    I was hoping that OpenFOAM is right and CalculiX is wrong, and I fixed it the other way around.
    Gerasimos Chourdakis
    @MakisH
    OpenFOAM only applies whatever it gets:
            // Get the displacement on the patch
            fixedValuePointPatchVectorField& pointDisplacementFluidPatch =
                refCast<fixedValuePointPatchVectorField>
                (
                    pointDisplacement_->boundaryFieldRef()[patchID]
                );
    
            // For every cell of the patch
            forAll(pointDisplacement_->boundaryFieldRef()[patchID], i)
            {
                // Add the received delta to the actual displacement
                pointDisplacementFluidPatch[i][0] += buffer[bufferIndex++];
                pointDisplacementFluidPatch[i][1] += buffer[bufferIndex++];
                pointDisplacementFluidPatch[i][2] += buffer[bufferIndex++];
            }
    Navi
    @nkr0
    yes, but the operator is +=, not = checkpoint + buffer
    Gerasimos Chourdakis
    @MakisH
    A good question is what happens in subcycling
    yes, indeed
    Navi
    @nkr0
    += means it keeps on adding deltas over subcycles,
    and that shouldn't be the case, based on the definition Benjamin wrote above.
    @uekerman in case this turns out to be a bug in the openfoam-adapter: I am only familiar enough to read the code a bit, not enough to make changes.
    Gerasimos Chourdakis
    @MakisH
    @nkr0 please open an issue and I will look into it! :)
    it is not trivial, but also not very difficult to implement
    Navi
    @nkr0
    There is for sure a checkpoint mesh; I don't know the variable yet. So it's probably just pointDisplacementFluidPatch = checkpoint + buffer.
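    (A minimal sketch of the fix Navi suggests, modeled on the snippet above; the checkpoint field name pointDisplacementCheckpoint is hypothetical, not the adapter's actual variable.)

        // Reset the patch to the checkpointed value before adding the received
        // delta, so repeated coupling iterations and subcycles do not accumulate.
        forAll(pointDisplacement_->boundaryFieldRef()[patchID], i)
        {
            // start from the displacement stored at "write-iteration-checkpoint" ...
            pointDisplacementFluidPatch[i] = pointDisplacementCheckpoint[i];

            // ... then add the received delta (w.r.t. the last time window end)
            pointDisplacementFluidPatch[i][0] += buffer[bufferIndex++];
            pointDisplacementFluidPatch[i][1] += buffer[bufferIndex++];
            pointDisplacementFluidPatch[i][2] += buffer[bufferIndex++];
        }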
    Navi
    @nkr0
    @uekerman one more issue. Since the velocity of the interface is not exchanged, I guess OpenFOAM internally computes it by dividing the displacement delta by the sub-cycle time step length. Now there is a mismatch in the reference points of delta_V and delta_t.
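    (To make that mismatch concrete, a rough illustration with hypothetical names, not the adapter's actual code.)

        // If the received delta is taken w.r.t. the end of the last time window,
        // but the mesh velocity is estimated per solver sub-cycle, the naive
        // estimate below divides a whole-window displacement change by a much
        // shorter sub-cycle step and can overestimate the interface velocity.
        // A consistent estimate needs a delta w.r.t. the previous sub-cycle value.
        double naiveMeshVelocity(double displacementDelta, // V - V_lastWindowEnd
                                 double dtSubcycle)        // sub-cycle step length
        {
            return displacementDelta / dtSubcycle;
        }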
    Gerasimos Chourdakis
    @MakisH
    I am not sure at the moment, but I would look at what the displacementLaplacian mesh motion solver does.
    Benjamin Uekermann
    @uekerman

    @ksripathy
    Sorry for the late reply!

    As for the steady state results are concerned, the displacement at the watch-point becomes steady around 7 seconds itself. It's just that the x-displacement is off by nearly an order of magnitude.

    Are you using the OpenFOAM mesh from the tutorial? This one is too coarse to give good values. It is merely meant as a tutorial. Also, you need a parabolic inflow.
    How is the restart doing? I still don't understand why you would want to restart here. I don't expect the converged results to "get better" over time.
    Another important point: for such a converged setup I would also expect quasi-Newton to have problems. If two iterations (or timesteps) are too close to one another the QN system gets ill-conditioned. Normally, this is no problem as the simulation is already in steady-state, so why continue. This could also explain your observation. To really test this you could try to restart at, let's say, "t=2s" and see if you get the same convergence problems there.
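    (As a side note on why nearly identical iterates hurt quasi-Newton, a rough sketch of one common IQN-ILS formulation, not specific to this setup:)

        V_k = [\, \Delta r^1 \;\; \Delta r^2 \;\; \cdots \;\; \Delta r^k \,],
        \qquad \Delta r^i = r^i - r^{i-1},
        \qquad \min_{\alpha} \bigl\| V_k \, \alpha + r^k \bigr\|_2 .

    If consecutive iterations (or time steps) barely differ, the columns \Delta r^i are close to zero and nearly linearly dependent, so the least-squares system becomes ill-conditioned.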

    ksripathy
    @ksripathy
    @uekerman I am using my own meshes with a parabolic boundary condition at the inlet using groovyBC. As far as the restart is concerned, for meshes with higher refinement the coupled simulation struggles to converge to a strict tolerance value. Thus, my idea is to employ a lenient coupling tolerance until the solution becomes relatively steady, say for the first 2-3 s as you described, and then restart with a strict tolerance from 3 s. As far as testing the restart functionality is concerned, I haven't got to it yet. My laptop is quite slow at coupling a fluid mesh of 43000 cells and a solid quadratic mesh of 1120 elements. I got access to an HPC cluster recently and am in the process of installing the necessary packages at the moment. In due course, I'll let you know about the status of the restart functionality for coupled FSI simulations.
    Benjamin Uekermann
    @uekerman
    @ksripathy :+1: For the sake of archiving, it might also be better to continue the discussion on Discourse then.
    ksripathy
    @ksripathy
    Sure
    ksripathy
    @ksripathy
    By the way, CalculiX 2.16 has been released. Do you think it will work with the present CalculiX adapter for preCICE?
    Benjamin Uekermann
    @uekerman
    Normally not. In the CalculiX adapter we overwrite some CalculiX files, so there needs to be a new adapter version for every new CalculiX version.
    patelrohan008
    @patelrohan008
    Hello all, I have a quick question about setting up a precice config file using three solvers. The goal is to use serial explicit coupling such that Solver A (using some initial values) runs, passes data to Solver B which then runs and passes data to Solver C. Solver C would then pass data back to Solver A, which would run again beginning the next time step and so on and so forth. The issue I'm having is with defining the participant order for each individual bi-coupling scheme. If I define them as: first A second B, then first B second C, then first C second A, for each bi-coupling scheme, precice hangs on the "initialize: slaves are connected" message. I'm assuming that this is because there is no place to start since each solver requires a different one to be run first. Since the parallel explicit approach would not be appropriate as the solvers must run in series, how do I tell precice to begin by running solver A?
    Gerasimos Chourdakis
    @MakisH
    @patelrohan008 this is a very good question for Discourse! Could you please ask the same question there, so that the discussion stays available for others with the same question in the future?
    Benjamin Uekermann
    @uekerman
    @patelrohan008 and please upload the preCICE config for which you get this deadlock.
    patelrohan008
    @patelrohan008
    @MakisH
    @uekerman
    I created a discourse post and attached the preCICE config file, https://precice.discourse.group/t/deadlock-using-three-solver-explicit-coupling-scheme/114
    Thanks!
    Satish
    @skchimak_gitlab
    Does anyone know if preCICE was ever used on CUDA architectures? TIA
    Frédéric Simonis
    @fsimonis
    @skchimak_gitlab Hi! preCICE has been used to couple TherMoS, which is based on Nvidia OptiX. You can have a look at the corresponding master's thesis.
    Satish
    @skchimak_gitlab
    @fsimonis Thank you
    Gerasimos Chourdakis
    @MakisH
    @skchimak_gitlab @fsimonis good for our FAQ on Discourse! ;-)
    Frédéric Simonis
    @fsimonis
    True
    @skchimak_gitlab would you mind repeating the same question on our discourse? https://precice.discourse.group/