    GPSalachs
    @GPSalachs
    I will post the error message ASAP.
    GPSalachs
    @GPSalachs
    The error message says: "The evaluation of the point number 0 is in error"
    I don't understand if it is an error of mine with the 2019 version or something else
    Julien Schueller
    @jschueller
    It looks more like an error in your wrapper, not in OpenTURNS
    GPSalachs
    @GPSalachs
    How can I solve it? Reinstall Salome Meca?
    kimrj
    @kimrj
    Today, Julien has advised me to apply a "--enable-shared=yes" option in order to add OPT++ support to OpenTURNS.
    This leads me to these additional questions:
    1) Where (on the OPT++ side of the fence or on the OpenTURNS side)? 2) How?
    kimrj
    @kimrj
    Feeling rather stuck, I have another question: 3) Even though every OpenTURNS that I have worked with has stated "RuntimeError: NotYetImplementedException : No OPTpp support", I guess that an OpenTURNS with OPT++ must have existed at least once. When and under which circumstances?
    Julien Schueller
    @jschueller

    hello kim,

    "--enable-shared=yes" is an option for the configure command of opt++, see:
    https://software.sandia.gov/opt++/opt++2.4_doc/html/InstallDoc.html

    yes, OPT++ is not that well supported in OpenTURNS; it may even be removed in the next release

    kimrj
    @kimrj
    The only "configure" file that I could find resides in "optpp-2.4.tar.gz". It has no "--enable-shared=yes" option...
    Julien Schueller
    @jschueller
    maybe just "--enable-shared" is valid too: "./configure:1038: --enable-shared[=PKGS]"
    kimrj
    @kimrj
    I guess that I should give up. am__api_version="1.9" is hard-coded in "configure", but I have version 1.15. If I edit "configure", a number of incompatibilities turn up. Given your comment, I guess that I would not find a "magic optimizer" in OPT++ anyway.
    GPSalachs
    @GPSalachs
    Hello, running a calculation in text mode on Salome 2018, I noticed that I cannot export the case to OT, because it doesn't recognise the parameters and .npy files I produce. Is there a method to do so?
    GPSalachs
    @GPSalachs
    Hello, if I want to group together more than one variable in a CDF representation, how can I do that from within the Salome Meca GUI? (Salome 2018)
    NicolasDelepine
    @NicolasDelepine

    Hi, I would like to define a joint law as a class and use it with tools from the openturns and pymc libraries.

    The joint law is made of a Weibull distribution and a log-normal distribution. There is a correlation between the two laws: the log-normal distribution depends on the Weibull one.

    In openturns, it is possible to define a composed or a conditional distribution, but neither is appropriate in my case.

    Once my composed distribution is defined, I want to use it in the decorator of the pymc library, "pm.Stochastic".

    Until now, I could only define my composed distribution as a function, but to be correctly implemented and used in the decorator, it must be defined as a class.

    Many thanks in advance
    Take care in the current context
    Best regards
    Nicolas Delépine

    Julien Schueller
    @jschueller
    @NicolasDelepine maybe you could share some code to show what you're trying to do
    NicolasDelepine
    @NicolasDelepine

    Thank you Julien.
    WindSpeed = ot.Weibull(10.12, 1.8, 0.)
    WindSpeed.setDescription(['Vit'])

    def U_TI_Dist_logp(value):
        [Uf, TIf] = value
        parameters = np.array(ot.LogNormalMuSigma(0.12*(0.75+3.8/Uf), abs(0.12*1.4/Uf), 0).evaluate())
        TI_cond = ot.LogNormal(parameters[0], parameters[1], parameters[2])
        return WindSpeed.computePDF(Uf)*TI_cond.computePDF(TIf)

    def U_TI_Dist_rand():
        tmpU = rand.weibullvariate(10.12, 1.8)
        parameters = np.array(ot.LogNormalMuSigma(0.12*(0.75+3.8/tmpU), 0.12*1.4/tmpU, 0).evaluate())
        TI_cond = rand.lognormvariate(parameters[0], abs(parameters[1]))
        return [abs(tmpU), abs(TI_cond)]

    U_TI_Dist_pymc_model = pm.Stochastic(logp=U_TI_Dist_logp, random=U_TI_Dist_rand,
                                         doc='U_TI_Dist', name='U_TI_Dist', parents={},
                                         trace=True)

    I just added part of my code above. I would like to use pymc.Stochastic on a joint pdf made of Weibull and log-normal distributions.
    Many thanks in advance
    Julien Schueller
    @jschueller

    It seems you need BayesDistribution rather than ConditionalDistribution if you want the [X,Y] vector:
    http://openturns.github.io/openturns/master/user_manual/_generated/openturns.BayesDistribution.html?highlight=bayesdistribution#openturns-bayesdistribution

    U_dist = ot.ParametrizedDistribution(ot.LogNormalMuSigma())
    link = ot.SymbolicFunction(['tmpU'], ['0.12*(0.75+3.8/tmpU)', '0.12*(1.4/tmpU)', '0.0'])
    U_IT_dist = ot.BayesDistribution(U_dist, WindSpeed, link)
    U_IT_dist.setDescription(['U', 'Vit'])

    The tricky part is to wrap the conditioned distribution in a ParametrizedDistribution, since you want to be in the LogNormalMuSigma parametrization.
    Then you need a function to map the value from the conditioning distribution into the three lognormal parameters, so you have to add a null output for the gamma parameter.
    Then assemble everything into a BayesDistribution; note that it gives you the conditioning vector in second position.
    Finally you can use U_IT_dist.computeLogPDF(value) and U_IT_dist.getRealization() in the pymc callbacks.
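    For instance, a minimal sketch of that pymc wiring, reusing your pm.Stochastic call (the callback names here are hypothetical):

    def U_TI_logp(value):
        # computeLogPDF already returns the log-density pymc expects
        return U_IT_dist.computeLogPDF(value)

    def U_TI_rand():
        # getRealization returns an ot.Point; convert it to a plain list
        return list(U_IT_dist.getRealization())

    U_TI_pymc_model = pm.Stochastic(logp=U_TI_logp, random=U_TI_rand,
                                    doc='U_TI_Dist', name='U_TI_Dist',
                                    parents={}, trace=True)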

    NicolasDelepine
    @NicolasDelepine
    Thank you very much Julien for this very precious help! I have modified my code with your solution and am currently working on it. Take care
    josephmure
    @josephmure
    Do we have a set date for the next release candidate?
    AliyuAziz
    @AliyuAziz
    Hello, please, what function is equivalent to the obsolete NumericalMathFunction()?
    josephmure
    @josephmure
    @AliyuAziz Function(), I think. From the ChangeLog of the 1.9 release: "Deprecated NumericalMathFunction for Function"
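    For instance, the old formula-based constructor now lives in SymbolicFunction (a minimal sketch; the formula is just an illustration):

    import openturns as ot
    # pre-1.9: f = ot.NumericalMathFunction(['E', 'F'], ['F/E'])
    f = ot.SymbolicFunction(['E', 'F'], ['F/E'])
    print(f([2.0, 4.0]))  # [2]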
    AliyuAziz
    @AliyuAziz
    @josephmure many thanks for the prompt response. It has really helped me a lot in troubleshooting. I am trying to run the old cantilever [example](http://autobuilder.openturns.org/openturns-doc/distcheck-r1772/html/ExamplesGuide/cid1.xhtml). I have changed the function defining deviation successfully. The new error comes from line 116, 'outputVariableOfInterest = RandomVector(deviation, inputRandomVector)', with the TypeError: new_RandomVector expected at most 1 arguments, got 2
    Additional information:
    Wrong number or type of arguments for overloaded function 'new_RandomVector'.
    Possible C/C++ prototypes are:
    OT::RandomVector::RandomVector()
    OT::RandomVector::RandomVector(OT::RandomVectorImplementation const &)
    OT::RandomVector::RandomVector(OT::TypedInterfaceObject< OT::RandomVectorImplementation >::Implementation const &)
    OT::RandomVector::RandomVector(OT::Distribution const &)
    OT::RandomVector::RandomVector(OT::RandomVector const &)
    OT::RandomVector::RandomVector(PyObject *)
    josephmure
    @josephmure
    @AliyuAziz Yes, the RandomVector constructor now only accepts a Distribution as input. Apparently, RandomVector was split into several classes. I think the one you're looking for is CompositeRandomVector.
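    Something like this sketch, reusing the names from your script (inputDistribution is a stand-in for your input distribution):

    inputRandomVector = ot.RandomVector(inputDistribution)  # built from a Distribution only
    outputVariableOfInterest = ot.CompositeRandomVector(deviation, inputRandomVector)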
    AliyuAziz
    @AliyuAziz
    @josephmure Many thanks for your help and assistance. I am able to carry out the FORM analysis of the cantilever. I look forward to a framework for coupling OT with an external FEM code. Any suggestions, examples or guidelines? Best regards
    josephmure
    @josephmure
    @AliyuAziz You're welcome! I don't know much about that, but there are coupling tools within OpenTURNS. Warning: despite what is written on the page I linked, you need to import the coupling tools separately from the rest of the library: from openturns import coupling_tools as ct and then ct.replace(), ct.execute(), etc.
    AliyuAziz
    @AliyuAziz
    @josephmure You are the best! I believe this is what I am looking for. I will try it and report back with feedback. Best regards
    matrixbot
    @matrixbot
    michaelbaudin: josephmure (Gitter): I don't think that a release candidate date is known.
    tbittar
    @tbittar
    Hello everyone, this is my first post here and I am playing around a bit with EfficientGlobalOptimization. I wonder how we can easily access the result of each iteration of EGO? I mean, how can we access the newly added point, the corresponding function value, the new metamodel (if it has been recomputed), the value of the expected improvement, and so on. So far, I use algo.setVerbose(True) in combination with ot.Log.Show(ot.Log.INFO), but the information I am looking for is drowned among many other INFO messages (and the prints really slow down the algorithm). It would also be nice if we could access this information after the execution of the algorithm. Thanks in advance.
    tbittar
    @tbittar
    Actually there is an algo.getExpectedImprovement() method I didn't notice, and I can also use algo.getResult().getInputSample() and algo.getResult().getOutputSample(). The only point remaining in my question is how to access the metamodels that are recomputed during the optimization.
    Julien Schueller
    @jschueller
    @tbittar you cannot access them directly, but you can rebuild them from outside EGO by iteratively enriching the DOE from the result history:
    
    # first kriging model
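    # (this snippet assumes problem, dim, inputSample, outputSample and
    #  modelEval were already defined from an initial DOE and its evaluations)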
    covarianceModel = ot.SquaredExponential([0.3007, 0.2483], [0.981959])
    basis = ot.ConstantBasisFactory(dim).build()
    kriging = ot.KrigingAlgorithm(
        inputSample, outputSample, covarianceModel, basis)
    noise = [x[1] for x in modelEval]
    kriging.setNoise(noise)
    kriging.run()
    
    # algo
    algo = ot.EfficientGlobalOptimization(problem, kriging.getResult())
    algo.setNoiseModel(ot.SymbolicFunction(
        ['x1', 'x2'], ['0.96']))  # assume constant noise var
    algo.setMaximumEvaluationNumber(20)
    algo.setImprovementFactor(0.05)
    #algo.setAEITradeoff(0.66744898)
    algo.run()
    result = algo.getResult()
    print(result.getIterationNumber())
    
    metamodels = []
    for i in range(result.getIterationNumber()):
        inputSample.add(result.getInputSample()[i])
        outputSample.add(result.getOutputSample()[i])
        kriging = ot.KrigingAlgorithm(inputSample, outputSample, covarianceModel, basis)
        kriging.run()
        metamodels.append(kriging.getResult().getMetaModel())
    @josephmure I fixed the wrapper doc in my branch
    tbittar
    @tbittar
    OK, very nice, thanks for the answer!
    josephmure
    @josephmure
    @jschueller Great! Will this fix make it to version 1.15?
    Julien Schueller
    @jschueller
    yes
    Michael Baudin
    @mbaudin47
    Hi! The new 1.15rc1 version has an interface to Spectra. What features does it provide? Are there examples?
    Julien Schueller
    @jschueller
    No new features, it just speeds up KarhunenLoeveP1Algorithm, see doc. No example is provided atm.
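    For reference, a minimal usage sketch (the mesh, covariance model and threshold below are illustrative choices, not taken from the doc):

    import openturns as ot

    # P1 Karhunen-Loeve decomposition of a process on a 1-d mesh
    mesh = ot.IntervalMesher([100]).build(ot.Interval(0.0, 1.0))
    algo = ot.KarhunenLoeveP1Algorithm(mesh, ot.AbsoluteExponential([1.0]), 1e-5)
    algo.run()  # this eigenproblem is what the Spectra interface accelerates
    result = algo.getResult()
    print(result.getEigenvalues())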
    Julien Schueller
    @jschueller
    v1.15rc1 is out!
    Michael Baudin
    @mbaudin47
    Hi! There is an NLOPT message when I build OT:
    -- Found NLopt: /usr/lib/x86_64-linux-gnu/libnlopt_cxx.so
    -- nlopt::GN_AGS: fails to build
    What does this mean?
    Antoine Dumas
    @adumasphi
    Hello, I am trying to define my own covariance model (to use in the kriging algorithm) but I cannot manage to do it. I tried to create a Python class that inherits from StationaryCovarianceModel and defines a method computeStandardRepresentative, but it still throws a NotYetImplementedError. I also tried to inherit from CovarianceModelImplementation. Is it possible to do this in Python?
    Gabriel Sarazin
    @gabrielsarazin

    Hello,
    I encountered a memory leak while using ComposedDistribution within a loop. Every time it is executed, a large chunk of memory is consumed. Could you please give me a hint on how to fix this problem? Here is my code:

    import openturns as ot
    d = 3
    M,N = 10**4,10**5
    Norm = ot.Normal(d)  
    for i in range(M):
        print(i)
        X = Norm.getSample(N)
        U = X.rank()/(N+1)
        marginals = [ot.KernelSmoothing().build(X[:,k]) for k in range(d)]
        copula = ot.NormalCopulaFactory().build(U)
        distribution = ot.ComposedDistribution(marginals, copula)

    Thank you in advance!
    Best regards,
    Gabriel

    efekhari27
    @efekhari27

    Hi everyone,
    I am encountering issues when summing UserDefined distributions.
    It seems the sum of two UserDefined distributions returns either a UserDefined or a RandomMixture (depending on the two UserDefined being summed). When a RandomMixture is returned, some methods associated with the object are not available. The exception returned is:
    "NotYetImplementedException : Error: no algorithm is currently available for the non-continuous case with more than one atom."
    Here is a script that should reproduce the exception:

    Thanks in advance for your feedback.
    Best,
    Elias

    import openturns as ot 
    import numpy as np
    
    
    # Returns a UserDefined
    #my_array = np.arange(40)
    
    # Returns a RandomMixture
    my_array = np.arange(50)
    points, weights = np.unique(my_array, return_counts=True)
    points = points.reshape(len(points), 1)
    weights = weights/len(my_array)
    
    my_distribution = ot.UserDefined(points, weights)
    my_distribution.drawPDF()
    
    my_distribution_2 = my_distribution + my_distribution
    my_distribution_2.drawPDF()
    Gabriel Sarazin
    @gabrielsarazin

    Hello,
    I encountered a memory leak while running the FORM algorithm within a for loop. Here is my code:

    import numpy as np
    import openturns as ot
    
    d = 3
    
    def h(x):
        y = x[0]+x[1]+x[2]
        return [y]
    
    Perf = ot.PythonFunction(d,1,h)
    T = 5.0
    
    dist =  ot.Normal(d)
    input_vector = ot.RandomVector(dist)
    output = ot.CompositeRandomVector(Perf, input_vector)
    failure_event = ot.Event(output, ot.Greater(), T)
    solver = ot.Cobyla()
    starting_point = np.array(dist.getMean())
    algo = ot.FORM(solver, failure_event, starting_point)
    
    for i in range(10**4):
        print(i)
        algo.run()
        Pf = algo.getResult().getEventProbability()

    Is there a way to fix this?
    Best regards,
    Gabriel

    josephmure
    @josephmure

    Hi everybody, a user showed me a weird kernel smoothing error:

    import openturns as ot
    import numpy as np
    
    
    #a1 is a 254 x 1 matrix with 5 non-zero elements equal to 0.25
    #a2 is a 255 x 1 matrix with 5 non-zero elements equal to 0.25
    a1 = np.append(np.repeat(0.0,249), np.repeat(0.25,5)).reshape(-1, 1)
    a2 = np.append(np.repeat(0.0,250), np.repeat(0.25,5)).reshape(-1, 1)
    
    #a1 and a2 are turned into Samples s1 and s2
    s1 = ot.Sample(a1)
    s2 = ot.Sample(a2)
    
    #Kernel smoothing succeeds with s1, but fails with s2
    ks = ot.KernelSmoothing()
    k1 = ks.build(s1)
    k2 = ks.build(s2)

    The Distribution k1 is constructed without error, but the last line of this script produces the following error:

    RuntimeError: InternalException : Error: Brent method requires that the function takes different signs at the endpoints of the given starting interval, here infPoint=0, supPoint=0, value=0, f(infPoint) - value=-nan and f(supPoint) - value=-nan

    I realize that kernel smoothing should not be attempted on 2-valued Samples anyway, but I would like to understand the error.
    It looks like the Brent algorithm does not realize there is more than one value in the Sample when building k2, although it does when building k1.

    Julien Schueller
    @jschueller
    Hi,
    Just a reminder for the user day webconf tomorrow via Teams:
    https://github.com/openturns/openturns/wiki/User-day-%2313-2020
    See ya