
I can't tell whether it is an error on my part with the 2019 version or something else

1) Where (on the OPT++ side of the fence or on the OpenTURNS side)? 2) How?

Hello Kim,

"--enable-shared=yes" is an option for the configure command of opt++, see:

https://software.sandia.gov/opt++/opt++2.4_doc/html/InstallDoc.html

Yes, Opt++ is not that well supported in OpenTURNS; it may even be removed in the next release

Hi, I would like to define a joint law as a class and use it with tools from the openturns and pymc libraries.

The joint law is made of a Weibull distribution and a log-normal distribution. There is a correlation between the two laws: the log-normal distribution depends on the Weibull one.

In openturns, it is possible to define a composed or a conditional distribution, but they are not appropriate in my case.

Once my composed distribution is defined, I will use it in the decorator `pm.Stochastic` of the pymc library.

Until now, I could only define my composed distribution as a function, but to be correctly implemented and used in the decorator it is mandatory to define it as a class.

Many thanks in advance

Take care in the current context

Best regards

Nicolas Delépine

Thank you Julien.

```
import random as rand
import numpy as np
import openturns as ot
import pymc as pm

WindSpeed = ot.Weibull(10.12, 1.8, 0.)
WindSpeed.setDescription(['Vit'])

def U_TI_Dist_logp(value):
    [Uf, TIf] = value
    parameters = np.array(ot.LogNormalMuSigma(0.12*(0.75+3.8/Uf), abs(0.12*1.4/Uf), 0).evaluate())
    TI_cond = ot.LogNormal(parameters[0], parameters[1], parameters[2])
    return WindSpeed.computePDF(Uf)*TI_cond.computePDF(TIf)

def U_TI_Dist_rand():
    tmpU = rand.weibullvariate(10.12, 1.8)
    parameters = np.array(ot.LogNormalMuSigma(0.12*(0.75+3.8/tmpU), 0.12*1.4/tmpU, 0).evaluate())
    TI_cond = rand.lognormvariate(parameters[0], abs(parameters[1]))
    return [abs(tmpU), abs(TI_cond)]

U_TI_Dist_pymc_model = pm.Stochastic(logp=U_TI_Dist_logp, random=U_TI_Dist_rand,
                                     doc='U_TI_Dist', name='U_TI_Dist', parents={},
                                     trace=True)
```

I just added a part of my code above. I would like to use `pm.Stochastic` on a joint pdf made of Weibull and log-normal distributions.

Many thanks in advance

It seems you need BayesDistribution rather than ConditionalDistribution if you want the [X,Y] vector:

http://openturns.github.io/openturns/master/user_manual/_generated/openturns.BayesDistribution.html?highlight=bayesdistribution#openturns-bayesdistribution

```
U_dist = ot.ParametrizedDistribution(ot.LogNormalMuSigma())
link = ot.SymbolicFunction(['tmpU'], ['0.12*(0.75+3.8/tmpU)', '0.12*(1.4/tmpU)', '0.0'])
U_IT_dist = ot.BayesDistribution(U_dist, WindSpeed, link)
U_IT_dist.setDescription(['U', 'Vit'])
```

The tricky part is to wrap the conditioned distribution in a ParametrizedDistribution, since you want to work in the LogNormalMuSigma parametrization.

Then you need a function to map the value of the conditioning distribution onto the 3 lognormal parameters, so you have to add a null output for the gamma parameter.

Next, assemble everything into a BayesDistribution; note that it gives you the conditioning vector in second position.

Finally, you can use `U_IT_dist.computeLogPDF(value)` and `U_IT_dist.getRealization()` in the pymc callbacks.
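For completeness, here is a minimal sketch (not from the original thread, assuming pymc 2 and the `U_IT_dist` object built above, and reusing the callback names from the code posted earlier) of how those two calls could back the `pm.Stochastic` callbacks:

```
# Hypothetical glue code: U_IT_dist is the BayesDistribution built above.
def U_TI_Dist_logp(value):
    # pm.Stochastic expects a log-probability, hence computeLogPDF
    return U_IT_dist.computeLogPDF(value)

def U_TI_Dist_rand():
    # getRealization returns an ot.Point; convert it to a plain list
    return list(U_IT_dist.getRealization())

U_TI_Dist_pymc_model = pm.Stochastic(logp=U_TI_Dist_logp, random=U_TI_Dist_rand,
                                     doc='U_TI_Dist', name='U_TI_Dist',
                                     parents={}, trace=True)
```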

@AliyuAziz Function(), I think. From the ChangeLog of the 1.9 release:

`Deprecated NumericalMathFunction for Function`
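So in pre-1.9 scripts the fix is usually a direct rename; a minimal sketch (the formula is illustrative, not from the original post), assuming the function was built from analytical formulas, which since 1.9 is done with SymbolicFunction:

```
import openturns as ot
# pre-1.9: f = ot.NumericalMathFunction(['x'], ['x^2'])
# 1.9 and later: the generic class is Function, and analytical
# formulas are built with SymbolicFunction
f = ot.SymbolicFunction(['x'], ['x^2'])
```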

@josephmure many thanks for the prompt response. It has really helped me a lot in troubleshooting. I am trying to run the old cantilever [example](http://autobuilder.openturns.org/openturns-doc/distcheck-r1772/html/ExamplesGuide/cid1.xhtml). I have successfully changed the function defining the deviation. The new error comes from line 116, `outputVariableOfInterest = RandomVector(deviation, inputRandomVector)`, with the TypeError: new_RandomVector expected at most 1 arguments, got 2

Additional information:

```
Wrong number or type of arguments for overloaded function 'new_RandomVector'.
Possible C/C++ prototypes are:
  OT::RandomVector::RandomVector()
  OT::RandomVector::RandomVector(OT::RandomVectorImplementation const &)
  OT::RandomVector::RandomVector(OT::TypedInterfaceObject< OT::RandomVectorImplementation >::Implementation const &)
  OT::RandomVector::RandomVector(OT::Distribution const &)
  OT::RandomVector::RandomVector(OT::RandomVector const &)
  OT::RandomVector::RandomVector(PyObject *)
```


@AliyuAziz Yes, the RandomVector constructor now only accepts a Distribution as input. Apparently, RandomVector was split into several classes. I think the one you're looking for is CompositeRandomVector.
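A minimal sketch of the updated call (assuming `deviation` and `inputRandomVector` are defined as in the old cantilever example):

```
# CompositeRandomVector replaces the old two-argument
# RandomVector(function, antecedent) constructor
outputVariableOfInterest = ot.CompositeRandomVector(deviation, inputRandomVector)
```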

You may also want to check out newer versions of the cantilever beam example, like these Jupyter Notebooks: https://github.com/openturns/openturns/blob/master/python/doc/examples/meta_modeling/chaos_cantilever_beam_integration.ipynb and https://github.com/openturns/openturns/blob/master/python/doc/examples/reliability_sensitivity/estimate_probability_form.ipynb

@AliyuAziz You're welcome! I don't know much about that, but there are coupling tools within OpenTURNS. Warning: despite what is written on the page I linked, you need to import the coupling tools separately from the rest of the library:

`from openturns import coupling_tools as ct`

and then use `ct.replace()`, `ct.execute()`, etc.
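A minimal usage sketch (the file names, token, and solver command are hypothetical placeholders):

```
from openturns import coupling_tools as ct

# substitute the token @E@ in a template to produce the solver input file
ct.replace('input_template.txt', 'input.txt', ['@E@'], [3.0e7])
# run the external solver on the generated file
ct.execute('mysolver input.txt')
```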
Hello everyone, this is my first post here and I am playing around a bit with `EfficientGlobalOptimization`. I wonder how we can easily access the result of each iteration of EGO: the newly added point, the corresponding function value, the new metamodel if it has been recomputed, the value of the expected improvement, and so on. So far, I use `algo.setVerbose(True)` in combination with `ot.Log.Show(ot.Log.INFO)`, but the information I am looking for is drowned among many other `INFO` messages (and the prints really slow down the algorithm). It would also be nice if we could access this information after the execution of the algorithm. Thanks in advance.
Actually there is an `algo.getExpectedImprovement()` method I didn't notice, and I can also use `algo.getResult().getInputSample()` and `algo.getResult().getOutputSample()`. The only point remaining in my question is then how to access the metamodels that are recomputed during the optimization.
@tbittar you cannot access them directly, but you can rebuild them outside EGO by iteratively enriching the DOE from the result history:

```
# assumes inputSample, outputSample, modelEval (noisy model evaluations)
# and the optimization problem have been defined beforehand
dim = 2  # input dimension (matches the two scale parameters below)
# first kriging model
covarianceModel = ot.SquaredExponential([0.3007, 0.2483], [0.981959])
basis = ot.ConstantBasisFactory(dim).build()
kriging = ot.KrigingAlgorithm(inputSample, outputSample, covarianceModel, basis)
noise = [x[1] for x in modelEval]
kriging.setNoise(noise)
kriging.run()
# EGO algorithm
algo = ot.EfficientGlobalOptimization(problem, kriging.getResult())
algo.setNoiseModel(ot.SymbolicFunction(['x1', 'x2'], ['0.96']))  # assume constant noise var
algo.setMaximumEvaluationNumber(20)
algo.setImprovementFactor(0.05)
# algo.setAEITradeoff(0.66744898)
algo.run()
result = algo.getResult()
print(result.getIterationNumber())
# rebuild the intermediate metamodels by re-adding the EGO points one by one
metamodels = []
for i in range(result.getIterationNumber()):
    inputSample.add(result.getInputSample()[i])
    outputSample.add(result.getOutputSample()[i])
    kriging = ot.KrigingAlgorithm(inputSample, outputSample, covarianceModel, basis)
    kriging.run()
    metamodels.append(kriging.getResult().getMetaModel())
```

@josephmure I fixed the wrapper doc in my branch

Hello, I am trying to define my own covariance model (to use in the kriging algorithm) but I cannot manage to do it. I tried to create a Python class that inherits from StationaryCovarianceModel and to define a computeStandardRepresentative method, but it still throws a NotYetImplementedError. I also tried to inherit from CovarianceModelImplementation. Is it possible to do this in Python?


Hello,

I encountered a memory leak while using the ComposedDistribution function within a loop. Every time the function is executed, a large chunk of memory is consumed. Could you please give me a hint on how to fix this problem? Here is my code:

```
import openturns as ot
d = 3
M, N = 10**4, 10**5
Norm = ot.Normal(d)
for i in range(M):
    print(i)
    X = Norm.getSample(N)
    U = X.rank()/(N+1)
    marginals = [ot.KernelSmoothing().build(X[:, k]) for k in range(d)]
    copula = ot.NormalCopulaFactory().build(U)
    distribution = ot.ComposedDistribution(marginals, copula)
```

Thank you in advance!

Best regards,

Gabriel


Hi everyone,

I am encountering issues when summing UserDefined distributions.

It seems that the sum of two UserDefined distributions returns either a UserDefined or a RandomMixture (depending on the two UserDefined summed). When a RandomMixture is returned, some methods associated with the object are not available. The exception returned is:

"NotYetImplementedException : Error: no algorithm is currently available for the non-continuous case with more than one atom."

Here is a script that should reproduce the exception:

Thanks in advance for your feedback.

Best,

Elias


```
import openturns as ot
import numpy as np
# my_array = np.arange(40)  # the sum below returns a UserDefined
my_array = np.arange(50)  # the sum below returns a RandomMixture
points, weights = np.unique(my_array, return_counts=True)
points = points.reshape(len(points), 1)
weights = weights/len(my_array)
my_distribution = ot.UserDefined(points, weights)
my_distribution.drawPDF()
my_distribution_2 = my_distribution + my_distribution
# drawing the PDF of the sum raises the NotYetImplementedException
my_distribution_2.drawPDF()
```
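If it helps, a quick way to check which class the sum actually returned is OpenTURNS' usual introspection (a one-line sketch, appended after the script above):

```
# expected to print 'UserDefined' for arange(40) and 'RandomMixture' for arange(50)
print(my_distribution_2.getImplementation().getClassName())
```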

Hello,

I encountered a memory leak while running the FORM algorithm within a for loop. Here is my code:

```
import numpy as np
import openturns as ot
d = 3
def h(x):
    y = x[0]+x[1]+x[2]
    return [y]
Perf = ot.PythonFunction(d, 1, h)
T = 5.0
dist = ot.Normal(d)
input_vector = ot.RandomVector(dist)
output = ot.CompositeRandomVector(Perf, input_vector)
failure_event = ot.Event(output, ot.Greater(), T)
solver = ot.Cobyla()
starting_point = np.array(dist.getMean())
algo = ot.FORM(solver, failure_event, starting_point)
for i in range(10**4):
    print(i)
    algo.run()
    Pf = algo.getResult().getEventProbability()
```

Is there a way to fix this?

Best regards,

Gabriel


Hi everybody, a user showed me a weird kernel smoothing error:

```
import openturns as ot
import numpy as np
# a1 is a 254 x 1 matrix with 5 non-zero elements equal to 0.25
# a2 is a 255 x 1 matrix with 5 non-zero elements equal to 0.25
a1 = np.append(np.repeat(0.0, 249), np.repeat(0.25, 5)).reshape(-1, 1)
a2 = np.append(np.repeat(0.0, 250), np.repeat(0.25, 5)).reshape(-1, 1)
# a1 and a2 are turned into Samples s1 and s2
s1 = ot.Sample(a1)
s2 = ot.Sample(a2)
# Kernel smoothing succeeds with s1, but fails with s2
ks = ot.KernelSmoothing()
k1 = ks.build(s1)
k2 = ks.build(s2)
```

The Distribution k1 is constructed without error, but the last line of this script produces the following error:

`RuntimeError: InternalException : Error: Brent method requires that the function takes different signs at the endpoints of the given starting interval, here infPoint=0, supPoint=0, value=0, f(infPoint) - value=-nan and f(supPoint) - value=-nan`

I realize that kernel smoothing should not be attempted on 2-valued Samples anyway, but I would like to understand the error.

It looks like the Brent algorithm does not realize there is more than one value in the Sample when building k2, although it does when building k1.


Hi,

Just a reminder for the user day webconf tomorrow via teams:

https://github.com/openturns/openturns/wiki/User-day-%2313-2020

See ya
