    Sheehan Olver
    @dlfivefifty
    Ah and `*` has changed to `×`
    \times
    Mitko Georgiev
    @mitkoge

    thank you.

    using ApproxFun,LinearAlgebra
    dm = Segment(0.001,1) × PeriodicSegment(-pi,pi)
    sp = Space(dm)
    Dr = Derivative(sp, [1,0]); Dθ = Derivative(sp, [0,1])
    Mr = Multiplication(Fun( (r, θ) -> r, sp ), sp)
    rDr = Mr * Dr
    Lr = rDr * rDr; L = Lr + Dθ * Dθ

    I modified it and now I can play with it.
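    A side note on the operator built above (my reading, not stated in the chat): `L = Lr + Dθ*Dθ` is r² times the Laplacian in polar coordinates, since multiplying through by r² clears the 1/r singularities:

    ```latex
    % Polar Laplacian, multiplied through by r^2:
    r^2 \Delta
      = r^2\,\partial_{rr} + r\,\partial_r + \partial_{\theta\theta}
      = (r\,\partial_r)^2 + \partial_{\theta\theta},
    \qquad\text{using}\quad (r\,\partial_r)^2 = r\,\partial_r + r^2\,\partial_{rr}.
    ```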

    Simon Etter
    @ettersi

    Not sure whether this is the right place to ask, since technically my question concerns SingularIntegralEquations.jl, but that package seems related enough to ApproxFun.jl that I'll go ahead anyway.

    In the code of SingularIntegralEquations it says

    # stieltjesintegral is an indefinite integral of stieltjes
    # normalized so that there is no constant term

    Can someone elaborate on that normalisation condition?

    Sheehan Olver
    @dlfivefifty
    Sure, fine to ask here. This normalisation condition comes from the fact that Jacobi polynomials have nice integration formulae coming from the weighted differentiation identity d/dx[(1-x)^a (1+x)^b P_n^(a,b)] = C (1-x)^(a-1) (1+x)^(b-1) P_{n+1}^(a-1,b-1). Roughly speaking, for functions that vanish at ±1, derivatives and Stieltjes transforms commute, so provided a, b > 0 we can use this to write the integral of a Stieltjes transform in terms of the integral of the integrand.
    Note that the n = 0 case is special, and dictates the normalisation constant (the construction for other n automatically decays like O(z^(-n))).
    For the cases actually implemented, I think it's normalized to go to zero at ∞.
    Sorry, that's not right.
    You always have logarithmic growth (unless the integrand integrates to zero).
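    For reference, one standard form of the weighted differentiation identity mentioned above (the explicit constant is my addition; as a sanity check, n = 0, a = b = 1 gives d/dx[1−x²] = −2x = −2P₁(x)):

    ```latex
    \frac{d}{dx}\left[(1-x)^{a}(1+x)^{b}\,P_n^{(a,b)}(x)\right]
      = -2(n+1)\,(1-x)^{a-1}(1+x)^{b-1}\,P_{n+1}^{(a-1,b-1)}(x).
    ```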
    Sheehan Olver
    @dlfivefifty
    On -1..1 it just vanishes apart from the log term:
    julia> x = Fun();
    
    julia> f = exp(x)/sqrt(1-x^2);
    
    julia> z = 10_000; stieltjesintegral(f,z) - log(z) * sum(f)
    -0.00017756097918919522
    julia> f = exp(x) * sqrt(1-x^2);
    
    julia> z = 10_000; stieltjesintegral(f,z) - log(z) * sum(f)
    -4.264886882765495e-5
    Oh, wait, that's true for other intervals too:
    
    julia> x = Fun(1..2);
    
    julia> f = exp(x) * sqrt((x-1)*(2-x));
    
    julia> z = 100_000; stieltjesintegral(f,z) - log(z) * sum(f)
    -2.8356239955229512e-5
    Sorry for making it seem more complicated than it actually is. It's really simple: it's normalized so that stieltjesintegral(f,z) = log(z)*sum(f) + o(1).
    Which happens to be consistent with logkernel (whose normalization is imposed by the actual definition):
    julia> logkernel(f, z) - log(abs(z))/π * sum(f)
    -9.026071512430178e-6
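    Collecting the thread into one statement (my summary, assuming the definitions implied by the code above):

    ```latex
    \mathrm{stieltjesintegral}(f,z) = \int \log(z-w)\,f(w)\,dw + C,
    \quad\text{with } C \text{ chosen so that}
    \]\[
    \mathrm{stieltjesintegral}(f,z) = \log(z)\!\int\! f(w)\,dw + o(1)
    \quad\text{as } z\to\infty,
    \qquad
    \mathrm{logkernel}(f,z) = \frac{1}{\pi}\int \log|z-w|\,f(w)\,dw.
    ```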
    Simon Etter
    @ettersi
    If I understand correctly, I can summarise what you said as: stieltjesintegral(f,z) = integral of log(z - w) f(w) dw + C, with C such that stieltjesintegral(f,z) = log(z)*sum(f) + o(1) as z → Inf. Correct?
    Sheehan Olver
    @dlfivefifty
    Yes
    Simon Etter
    @ettersi
    Great, thanks a lot!
    Mitko Georgiev
    @mitkoge

    Hi, I was playing with a modified example from Quantum states.jl

    using ApproxFun
    fc1 = Fun(1, 0..1); fc2 = Fun(2, 1..2); fc3 = Fun(3, 2..3)
    
    fc = fc1 + fc2 + fc3
    D = Derivative(); sp = space(fc); x = Fun(sp);
    B = [Dirichlet(sp); continuity(sp, 0:1)]
    λ, v = ApproxFun.eigs(B, -D^2 + fc, 500, tolerance=1E-10)
    # returns a result
    
    λ, v = ApproxFun.eigs(B, -D^2 - 1/x*D + fc, 500, tolerance=1E-10)
    # says "Not implemented", but if fc = fc1+fc2 it manages to return a result

    Is 1/x*D kind of a problematic composition?

    Sheehan Olver
    @dlfivefifty
    Yes. It would be easy to work around by multiplying by x and updating the generalised eigenvalue call appropriately in the implementation of eigs.
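    Spelled out (my reading of the suggested workaround): multiplying the eigenvalue problem through by x clears the 1/x coefficient and turns the right-hand side into a generalised eigenproblem:

    ```latex
    -u'' - \tfrac{1}{x}\,u' + f\,u = \lambda\,u
    \quad\Longrightarrow\quad
    -x\,u'' - u' + x\,f\,u = \lambda\,x\,u,
    ```

    i.e. A u = λ C u where C is now multiplication by x composed with the conversion operator.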
    Mitko Georgiev
    @mitkoge

    Thanks,
    you mean calling like this

    λ, v = ApproxFun.eigs_overx(B, -x*D^2 - D + x*fc, 500,tolerance=1E-10)

    and somehow divide C1 by x before calling

        λ,V = eigen(A1,C1)

    inside the copy of eigs?

    Sheehan Olver
    @dlfivefifty
    No, multiply by x: instead of solving A*v = λ*C*v we solve x*A*v = λ*x*C*v. That is, changing line 34 to C = Conversion(ds,rangespace(A)); C = x * C should be enough.
    Note eigs is quite stale and needs to be rewritten, but that's low priority for me. I'll accept any PRs.
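    The reason this works (a generic linear-algebra fact, not ApproxFun code): if A*v = λ*C*v, then left-multiplying both sides by any operator M preserves the eigenpairs, since (M*A)*v = λ*(M*C)*v. A minimal 2×2 check with plain lists:

    ```python
    # If A*v = lam*C*v, then (M*A)*v = lam*(M*C)*v for any matrix M:
    # scaling the generalised eigenproblem leaves the spectrum unchanged.

    def matmul(M, N):
        return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
                 for j in range(len(N[0]))] for i in range(len(M))]

    def matvec(M, v):
        return [sum(M[i][k] * v[k] for k in range(len(v))) for i in range(len(M))]

    A = [[2.0, 1.0], [0.0, 3.0]]
    C = [[1.0, 0.0], [0.0, 1.0]]   # identity, so A*v = lam*v
    lam, v = 3.0, [1.0, 1.0]       # eigenpair of A: A*[1,1] = [3,3]
    M = [[1.0, 2.0], [0.0, 1.0]]   # an invertible "multiplication" operator

    lhs = matvec(matmul(M, A), v)
    rhs = [lam * t for t in matvec(matmul(M, C), v)]
    # lhs == rhs: the scaled problem has the same eigenpair
    ```
    
    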
    Mitko Georgiev
    @mitkoge
    Thanks for the great support,
    Mitko Georgiev
    @mitkoge

    I followed you and copy-pasted to define

        function eigs(Bcs_in::Operator,A_in::Operator,fmul::Fun,n::Integer;tolerance::Float64=100eps())

    with the correction

            C = Conversion(ds,rangespace(A)); C = fmul * C

    Now I only have to check that it works correctly. :-)

    λ, v = ApproxFun.eigs(B, L, x, 500,tolerance=1E-10)
    Mitko Georgiev
    @mitkoge
    :point_up: 22. November 2018 12:35
    I intended to help by replacing it in the docs, but as I see it is already fixed.
    Unfortunately it might not be fixed at http://juliaapproximation.github.io/ApproxFun.jl/latest
    Ashton Bradley
    @AshtonSBradley
    Is it possible to solve a nonlinear PDE in ApproxFun? For example, I would like to add a simple nonlinear term to this problem:
    https://github.com/JuliaApproximation/ApproxFun.jl#solving-partial-differential-equations
    presumably switching to a Newton solver, but I'm not sure how to inform it about boundary conditions etc. in 2D.
    Sheehan Olver
    @dlfivefifty
    Not yet: I want to make the PDE solver faster before trying to implement that sort of thing. For now you can try doing Newton iteration by hand.
    Ashton Bradley
    @AshtonSBradley
    ok thanks, good to know
    Ashton Bradley
    @AshtonSBradley
    for the basic procedure of setting up for manual iteration, can I take it that this https://github.com/JuliaApproximation/ApproxFunExamples/blob/master/ODEs/Blasius%20and%20Falkner-Skan.jl
    is a reasonable starting point, that I just need to generalise to 2D?
    Ashton Bradley
    @AshtonSBradley
    Ok, maybe ignore that
    Sheehan Olver
    @dlfivefifty
    It’s pretty simple: to get the Jacobian, just differentiate with respect to u, treating d/dx and d/dy as constants.
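    For a concrete (hypothetical) cubic nonlinearity: if the residual is N(u) = Δu + u³ − f, differentiating with respect to u while treating the derivative operators as constants gives the Jacobian, and each Newton step solves a linear PDE with the homogeneous boundary conditions:

    ```latex
    J[u]\,v = \Delta v + 3u^2\,v,
    \qquad
    J[u_k]\,\delta = -N(u_k),
    \qquad
    u_{k+1} = u_k + \delta.
    ```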
    Mitko Georgiev
    @mitkoge
    Excuse me if my question is too naive, but I tried this:
    x= Fun(0..1); sx=space(x);
    a=Fun(0..pi); sa=space(a);
    fa= DefiniteIntegral(x*a,sx)
    # I expect fa = a here
    but I get a MethodError. Are integrals of functions of two arguments possible?
    Mitko Georgiev
    @mitkoge
    or maybe like this
    x= Fun(0..1); sx=space(x)
    a=Fun(0..pi); sa=space(a)
    fm= x*a; sm= space(fm)
    fI=  DefiniteIntegral(sx)
    (fI*fm)(0.2)
    #expect 0.2
    Sheehan Olver
    @dlfivefifty

    Are you expecting it to be a 2D function? And do you actually expect the result to be 0.1 (that is, a/2 since you integrate out x)?

    That's possible via:

    x,a = Fun((0..1) × (0..π))
    Q = DefiniteIntegral(0..1) ⊗ I
    (Q*(x*a))(0.0,0.2)
    Mitko Georgiev
    @mitkoge
    Wow, it's exciting how elegant this is! Thank you!
    As you can see, I am exploring it the dummy way, so please bear with me.
    Mitko Georgiev
    @mitkoge
    Let Q2 = Q*(x*a). Q2 is still a 2D Fun((x,a) -> smth). Is there a way to curry it into a 1D function Q1, that is, a Fun(a -> smth)?
    Sheehan Olver
    @dlfivefifty
    Not an easy way. What we actually want is DefiniteIntegral(S, [1,0]), which wraps the Kronecker product. It’s not too hard to add this, but I don’t have time right now.
    Mitko Georgiev
    @mitkoge

    To practice your guidance, I wrote

    function Hankel(fx::Fun, spx,spa::Space)
        spxa= spx ⊗ spa
        fx2= Fun((x,a)-> fx(x), spxa)
        fj= Fun((x,a) -> besselj0(x*a), spxa)
        fa2= (DefiniteIntegral(dmx1) ⊗ I)*(fx2*fj*x)
        fa= Fun(a->fa2(0,a), spa)
    end

    from the definition. It seems to work. Thank you!
    Is this the style in which ApproxFun is expected to be used?

    Sheehan Olver
    @dlfivefifty

    Not an easy way.

    I take that back. I think you can always use ApproxFun.SpaceOperator(op, newdomainspace, newrangespace) to change the spaces of an operator. So just write Q with the "right" spaces.

    Is this the style ApproxFun is expected to be used?

    In Julia, capital letters are only used when a function returns a special type of the same name, so this would be better as lower case. Also, there is support for Green's functions in https://github.com/JuliaApproximation/SingularIntegralEquations.jl, including Helmholtz / Hankel kernels.

    Mitko Georgiev
    @mitkoge
    I see. I will check SpaceOperator and will have to learn what rangespace is.
    And thank you for the link. It is probably a more efficient implementation? I will check it.
    I tried hankel as a learning example.
    The high-level style in which ApproxFun works fascinates me.
    Like poetry for the crowd: I do not necessarily understand it, but I like it. :-)
    Mitko Georgiev
    @mitkoge
    here is the revised version:
    function hankel(fx::Fun, spx,spa::Space)
        spxa= spx ⊗ spa
        fx2= Fun((x,a)-> fx(x), spxa)
        fjx= Fun((x,a)-> x*besselj0(x*a), spxa)
        fa2= (DefiniteIntegral(spx.domain) ⊗ I)*(fx2*fjx)
        fa= Fun(a->fa2(0,a), spa)
    end
    Mitko Georgiev
    @mitkoge

    when I blindly try

    spx= Space(0..1); spa= Space(0..π); spxa= spx ⊗ spa
    Q2= DefiniteIntegral(spx.domain) ⊗ I
    Q1= ApproxFun.SpaceOperator(Q2, spxa, spa ) 
    
    x=Fun((x,a)->x,spxa); a=Fun((x,a)->a,spxa)
    (Q2*(x*a))(0,1)  # = 0.5  (note: `#=` would open a block comment in Julia)
    (Q1*(x*a))(1)    # = 0.5*pi/2

    it seems to work, up to a scaling coefficient

    Mitko Georgiev
    @mitkoge
    I guess it is because the default domain for I is -1..1
    Mitko Georgiev
    @mitkoge
    May I ask if a least-squares fit of a bivariate Fun to data is possible?
    The given data is an Array{Float64,2}.
    I liked your nice 1D example from the docs and wonder how to extend it to 2D.
    Sheehan Olver
    @dlfivefifty
    Yes, an example is in the FAQ.
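    In case it helps to see the generic mechanism behind such a fit (this is an illustration only, not ApproxFun's FAQ code; ApproxFun builds a Vandermonde matrix in its Chebyshev basis and solves with `\`): least squares reduces to an overdetermined linear system in the basis coefficients. A pure-Python sketch with the simplest bivariate basis 1, x, y:

    ```python
    # Least-squares fit of data z ~ c0 + c1*x + c2*y over scattered points,
    # via the normal equations (A^T A) c = A^T z.

    def fit_plane(pts, z):
        # Design matrix rows [1, x, y]
        A = [[1.0, x, y] for (x, y) in pts]
        AtA = [[sum(A[k][i] * A[k][j] for k in range(len(A))) for j in range(3)]
               for i in range(3)]
        Atz = [sum(A[k][i] * z[k] for k in range(len(A))) for i in range(3)]
        # Solve the 3x3 system by Gaussian elimination with partial pivoting
        M = [row[:] + [b] for row, b in zip(AtA, Atz)]
        n = 3
        for col in range(n):
            piv = max(range(col, n), key=lambda r: abs(M[r][col]))
            M[col], M[piv] = M[piv], M[col]
            for r in range(col + 1, n):
                f = M[r][col] / M[col][col]
                for c in range(col, n + 1):
                    M[r][c] -= f * M[col][c]
        c = [0.0] * n
        for r in reversed(range(n)):
            c[r] = (M[r][n] - sum(M[r][k] * c[k] for k in range(r + 1, n))) / M[r][r]
        return c

    pts = [(0, 0), (1, 0), (0, 1), (1, 1), (0.5, 0.25)]
    z = [1 + 2 * x + 3 * y for (x, y) in pts]
    c = fit_plane(pts, z)   # recovers [1.0, 2.0, 3.0] up to rounding
    ```
    
    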