"--enable-shared=yes" is an option for the configure command of opt++, see:
yes, Opt++ is not that well supported in openturns ; it may even be removed in the next release
Hi, I would like to define a joint law as a class and use it with tools from the openturns and pymc libraries.
The joint law is made of a Weibull distribution and a log-normal distribution. The two laws are correlated: the log-normal distribution depends on the Weibull one.
In OpenTURNS it is possible to define a composed or a conditional distribution, but neither is appropriate in my case.
Once my composed distribution is defined, I will use it in the pm.Stochastic decorator of the pymc library.
Until now, I could only define my composed distribution as a function, but to be correctly implemented and used in the decorator it must be defined as a class.
Many thanks in advance.
Take care in the current context.
Thank you, Julien.
WindSpeed = ot.Weibull(10.12, 1.8, 0.)
parameters = np.array(ot.LogNormalMuSigma((0.12*(0.75+3.8/Uf)), abs(0.12*1.4/Uf), 0).evaluate())
parameters = np.array(ot.LogNormalMuSigma((0.12*(0.75+3.8/tmpU)), (0.12*1.4/tmpU), 0).evaluate())
U_TI_Dist_pymc_model = pm.Stochastic(logp=U_TI_Dist_logp, random=U_TI_Dist_rand,
It seems you need BayesDistribution rather than ConditionalDistribution if you want the [X, Y] vector:
U_dist = ot.ParametrizedDistribution(ot.LogNormalMuSigma())
link = ot.SymbolicFunction(['tmpU'], ['0.12*(0.75+3.8/tmpU)', '0.12*(1.4/tmpU)', '0.0'])
U_IT_dist = ot.BayesDistribution(U_dist, WindSpeed, link)
U_IT_dist.setDescription(['U', 'Vit'])
The tricky part is wrapping the conditioned distribution in a ParametrizedDistribution, since you want to stay in the LogNormalMuSigma parametrization.
Then you need a function mapping the value of the conditioning distribution to the 3 lognormal parameters, so you have to add a null output for the gamma parameter.
Finally, assemble everything into a BayesDistribution; note that it gives you the conditioning vector in second position.
Then you can use U_IT_dist.getRealization() in the pymc callbacks.
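For instance, here is a minimal sketch of the callbacks (assuming pymc 2.x and building on U_IT_dist above; using computeLogPDF for the logp and these exact pm.Stochastic arguments are my assumptions, not tested code):

import numpy as np
import pymc as pm

def U_TI_Dist_logp(value):
    # joint log-density of the [U, Vit] vector, evaluated by OpenTURNS
    return U_IT_dist.computeLogPDF(value)

def U_TI_Dist_rand():
    # one joint realization drawn by OpenTURNS
    return np.array(U_IT_dist.getRealization())

U_TI_Dist_pymc_model = pm.Stochastic(logp=U_TI_Dist_logp,
                                     random=U_TI_Dist_rand,
                                     doc='joint (U, Vit) law',
                                     name='U_TI',
                                     parents={},
                                     value=U_TI_Dist_rand(),
                                     dtype=float)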
from openturns import coupling_tools as ct

and then
EfficientGlobalOptimization: I wonder how we can easily access the result of each iteration of EGO. I mean, how can we access the newly added point, the corresponding function value, the new metamodel if it has been recomputed, the value of the expected improvement, and so on. So far I use algo.setVerbose(True) in combination with ot.Log.Show(ot.Log.INFO), but the information I am looking for is drowned in many other INFO messages (and the prints really slow down the algorithm). It would also be nice to be able to access this information after the execution of the algorithm. Thanks in advance.
I didn't notice the algo.getExpectedImprovement() method, and I can also use algo.getResult().getOutputSample(). The only remaining point of my question is then how to access the metamodels that are recomputed during the optimization.
# first kriging model
covarianceModel = ot.SquaredExponential([0.3007, 0.2483], [0.981959])
basis = ot.ConstantBasisFactory(dim).build()
kriging = ot.KrigingAlgorithm(inputSample, outputSample, covarianceModel, basis)
noise = [x for x in modelEval]  # modelEval is defined earlier in the user's script
kriging.setNoise(noise)
kriging.run()
# algo
algo = ot.EfficientGlobalOptimization(problem, kriging.getResult())
algo.setNoiseModel(ot.SymbolicFunction(['x1', 'x2'], ['0.96']))  # assume constant noise var
algo.setMaximumEvaluationNumber(20)
algo.setImprovementFactor(0.05)
#algo.setAEITradeoff(0.66744898)
algo.run()
result = algo.getResult()
print(result.getIterationNumber())
metamodels = []
for i in range(result.getIterationNumber()):
    inputSample.add(result.getInputSample()[i])
    outputSample.add(result.getOutputSample()[i])
    kriging = ot.KrigingAlgorithm(inputSample, outputSample, covarianceModel, basis)
    kriging.run()
    metamodels.append(kriging.getResult().getMetaModel())
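Each entry of metamodels is then an ordinary ot.Function, so it can be evaluated or plotted like any metamodel; for example (assuming the same 2-D input space as above):

# evaluate the metamodel rebuilt at the last EGO iteration
print(metamodels[-1]([0.5, 0.5]))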
I encountered a memory leak while using the ComposedDistribution function within a loop. Every time the function is executed, a large chunk of memory is consumed. Could you please give me a hint on how to fix this problem? Here is my code:
import openturns as ot

d = 3
M, N = 10**4, 10**5
Norm = ot.Normal(d)
for i in range(M):
    print(i)
    X = Norm.getSample(N)
    U = X.rank() / (N + 1)
    marginals = [ot.KernelSmoothing().build(X[:, k]) for k in range(d)]
    copula = ot.NormalCopulaFactory().build(U)
    distribution = ot.ComposedDistribution(marginals, copula)
Thank you in advance!
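Not a confirmed fix, but a generic sketch worth trying: if the leak is on the Python side, deleting the per-iteration objects and forcing garbage collection may help (gc is the standard Python module).

import gc

# at the end of each loop iteration, once the distribution has been used:
del marginals, copula, distribution
gc.collect()  # release the temporaries built in this iteration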
I am encountering issues when summing UserDefined distributions.
It seems that the sum of two UserDefined distributions returns either a UserDefined or a RandomMixture (depending on the two UserDefined being summed). When a RandomMixture is returned, some methods associated with the object are not available. The exception returned is:
"NotYetImplementedException : Error: no algorithm is currently available for the non-continuous case with more than one atom."
Here is a script that should reproduce the exception:
Thanks in advance for your feedback.
import openturns as ot
import numpy as np

# Returns a UserDefined
#my_array = np.arange(40)
# Returns a RandomMixture
my_array = np.arange(50)
points, weights = np.unique(my_array, return_counts=True)
points = points.reshape(len(points), 1)
weights = weights / len(my_array)
my_distribution = ot.UserDefined(points, weights)
my_distribution.drawPDF()
my_distribution_2 = my_distribution + my_distribution
my_distribution_2.drawPDF()
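As a quick check of which implementation the sum produced, you can print its class name (a small sketch; getImplementation() and getClassName() are standard OpenTURNS Distribution methods):

print(my_distribution_2.getImplementation().getClassName())
# 'UserDefined' for the 40-point array, 'RandomMixture' for the 50-point one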
I encountered a memory leak while running the FORM algorithm within a for loop. Here is my code:
import numpy as np
import openturns as ot

d = 3

def h(x):
    # toy performance function: sum of the input components
    y = x[0] + x[1] + x[2]
    return [y]

Perf = ot.PythonFunction(d, 1, h)
T = 5.0
dist = ot.Normal(d)
input_vector = ot.RandomVector(dist)
output = ot.CompositeRandomVector(Perf, input_vector)
failure_event = ot.Event(output, ot.Greater(), T)
solver = ot.Cobyla()
starting_point = np.array(dist.getMean())
algo = ot.FORM(solver, failure_event, starting_point)
for i in range(10**4):
    print(i)
    algo.run()
    Pf = algo.getResult().getEventProbability()
Is there a way to fix this?
Hi everybody, a user showed me a weird kernel smoothing error:
import openturns as ot
import numpy as np

# a1 is a 254 x 1 array with 5 non-zero elements equal to 0.25
# a2 is a 255 x 1 array with 5 non-zero elements equal to 0.25
a1 = np.append(np.repeat(0.0, 249), np.repeat(0.25, 5)).reshape(-1, 1)
a2 = np.append(np.repeat(0.0, 250), np.repeat(0.25, 5)).reshape(-1, 1)

# a1 and a2 are turned into Samples s1 and s2
s1 = ot.Sample(a1)
s2 = ot.Sample(a2)

# Kernel smoothing succeeds with s1, but fails with s2
ks = ot.KernelSmoothing()
k1 = ks.build(s1)
k2 = ks.build(s2)
The Distribution k1 is constructed without error, but the last line of this script produces the following error:
RuntimeError: InternalException : Error: Brent method requires that the function takes different signs at the endpoints of the given starting interval, here infPoint=0, supPoint=0, value=0, f(infPoint) - value=-nan and f(supPoint) - value=-nan
I realize that kernel smoothing should not be attempted on 2-valued Samples anyway, but I would like to understand the error.
It looks like the Brent algorithm does not realize there is more than one value in the Sample when building k2, although it does when building k1.
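A possible workaround sketch, under my assumption that the failure comes from the plug-in bandwidth computation (the step that invokes Brent): supply an explicit bandwidth so the solver is never called. Silverman's rule does not use Brent.

# compute a Silverman bandwidth and pass it explicitly to build()
bw = ks.computeSilvermanBandwidth(s2)
k2 = ks.build(s2, bw)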