    Sheehan Olver
    @dlfivefifty
    Use Segment(a,b) for line segments in the complex plane. (The previous support for intervals in the complex plane violated the definition of an interval.)
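    For illustration, a minimal sketch of what that looks like in practice (the function and endpoints here are only illustrative assumptions):

    using ApproxFun

    # Approximate exp on the straight line from 0 to 1+im in the complex plane.
    d = Segment(0, 1+im)
    f = Fun(exp, d)
    f(0.5 + 0.5im)   # ≈ exp(0.5 + 0.5im), since this point lies on the segment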
    Art Gower
    @arturgower
    A ha! Yes I see. Thanks so much
    Shi Pengcheng
    @shipengcheng1230

    Hello, I was playing around with the Poisson equation example and wonder if I could replace the RHS f with something like δ(x)δ(y). I tried to construct the RHS like this:

    fx = KroneckerDelta()
    fy = KroneckerDelta()
    f = Fun((x,y) -> fx(x) * fy(y))

    But I got the error ERROR: MethodError: no method matching isless(::Int64, ::Nothing). What is the proper way to do this? Thanks in advance!

    Sheehan Olver
    @dlfivefifty
    Essentially you want to calculate the Green's function? That’s a tough question which we are looking at right now.
    Shi Pengcheng
    @shipengcheng1230
    Thanks for the reply. I guess I have to wait for now :)
    Quentin C.
    @robocop
    Hello! I would like to consider ApproxFun for the following problem: I have a nonlinear operator Phi: L_s -> L_s, where L_s is the set of continuous functions that go to zero exponentially fast at rate s > 0, that is, x belongs to L_s if \sup_{t \geq 0} |x(t)| e^{s t} < +\infty. Moreover, given x in L_s, Phi(x) is the solution of a Volterra integral equation of the form Phi(x)(t) = K_0(x)(t) + \int_0^t K(x)(t, u) Phi(x)(u) du, where the kernels K_0(x) and K(x) are known explicitly in terms of x. I would like to compute the spectrum of the Fréchet differential of Phi at the point x = 0. Do you think it is possible to do that?
    Let me know if the question is unclear... I do not have a background in numerical simulations. Thanks :)
    Quentin C.
    @robocop
    I think I have started to understand how to do that with ApproxFun. But I have a few questions. How do I encode the domain \mathbb{R}_+?
    And how do I encode \{ (x, y) \in \mathbb{R}_+^2 : x \geq y \}?
    Quentin C.
    @robocop
    More generally, the fact that I want to work on \mathbb{R}_+ and not on a segment seems to be a problem. Is that correct?
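    For what it's worth, a minimal sketch of representing a decaying function on the half-line, assuming the unbounded domain Ray() (i.e. [0, ∞)) is available in the installed ApproxFun:

    using ApproxFun

    # A function decaying at infinity, represented on the half-line Ray().
    f = Fun(x -> exp(-x), Ray())
    f(2.0)    # ≈ exp(-2.0)
    sum(f)    # ≈ ∫₀^∞ exp(-x) dx = 1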
    Sheehan Olver
    @dlfivefifty
    Sorry forgot to reply. I have a student working on Volterra integral equations so once that’s up and running your problem should be easy. We’d love to know the application so we can mention it in the paper
    Quentin C.
    @robocop
    Ok great! Looking forward to seeing that. The Volterra equation I am looking at appears in a neuroscience problem (see https://arxiv.org/abs/1810.08562). The dynamics of the mean-field network can be reduced to a nonlinear Volterra equation (see equations (7) and (8) of the paper). Btw, the same problem can be solved numerically by looking at the associated PDE (see equation (3)), but I am curious to know whether more efficient numerical methods can be developed by specifically exploiting this Volterra equation.
    Art Gower
    @arturgower
    Hello! For the space of Jacobi polynomials, what package does ApproxFun use? Or is it internally implemented? I struggled to find it in the code... =(
    Sheehan Olver
    @dlfivefifty
    Internal: see src/Spaces/Jacobi
    Art Gower
    @arturgower
    Cheers! What method do you use: is it Clenshaw for all PolynomialSpaces? If so, where do you specify the recurrence relation?
    Sheehan Olver
    @dlfivefifty
    Yes. There’s a bunch of recα, etc., overrides
    Clenshaw is implemented in PolynomialSpace.jl
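    For illustration, a minimal sketch of working with the internal Jacobi space (Jacobi(0, 0), i.e. Legendre, is chosen here only to sidestep parameter-order questions; evaluation goes through the Clenshaw code in PolynomialSpace.jl):

    using ApproxFun

    S = Jacobi(0, 0)      # Jacobi space with both parameters zero (Legendre)
    f = Fun(exp, S)       # expand exp in this basis
    f(0.1)                # evaluated via Clenshaw's recurrence internally
    ncoefficients(f)      # number of coefficients in the adaptive expansion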
    Art Gower
    @arturgower
    thanks, I get it now
    Mitko Georgiev
    @mitkoge
    May I ask something that may not be directly related to ApproxFun?
    I found FastAsyTransforms.m on GitHub and wondered if such functionality is already available as Julia code.
    I looked at FastTransforms.jl but could not find something like a Hankel transform. Could you kindly point me in the right direction?
    Maybe SingularIntegralEquations.jl? But I have no idea how to start with it. Or some other place?
    Sheehan Olver
    @dlfivefifty
    @MikaelSlevinsky should be able to help, but I'm not aware of anything.
    Christoph Ortner
    @cortner

    I’m trying to implement something like this FAQ example,

    S = Chebyshev(1..2);
    p = points(S,20); # the default grid
    v = exp.(p);      # values at the default grid
    f = Fun(S,ApproxFun.transform(S,v));

    but multi-variate (2D tensor Chebyshev will do). The canonical thing,

    S = Chebyshev((1..2)^2)
    p = points(S, 20)

    errors. Of course I could just construct the points via tensor products, but then I’m unsure how to use the ApproxFun.transform(S,v) correctly. Is this documented somewhere? Is there an example I can look at?

    Basically, I just want to freeze the polynomial degree, rather than prescribe a solver tolerance.

    Sheehan Olver
    @dlfivefifty
    You want Chebyshev(1..2)^2. It actually uses Padua points, so if you say Fun(f, S, div(n*(n+1),2)) it should give the degree-n interpolant.
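    Following that suggestion, a minimal sketch (the target function and degree here are only illustrative):

    using ApproxFun

    S = Chebyshev(1..2)^2                            # 2D tensor Chebyshev space
    n = 10                                           # target polynomial degree
    f = Fun((x,y) -> exp(x*y), S, div(n*(n+1), 2))   # fixed number of coefficients, per the message above
    f(1.5, 1.7)                                      # evaluate the interpolant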
    Christoph Ortner
    @cortner
    Interesting, thanks. Are tensor product grids a special case of these?
    I didn’t know about this family of points, reading up on it now, very very interesting...
    Sheehan Olver
    @dlfivefifty
    No, but anything other than Chebyshev^2 will use a tensor grid. Padua points are nice because you don’t oversample, and the transform is a single one-dimensional DCT (though as implemented we form a tensor product by padding with zeros and use the 2D DCT, from which, due to aliasing, we can recover the coefficients).
    There’s a Chebfun example describing it
    Christoph Ortner
    @cortner
    This is very interesting, but for teaching purposes I would still have preferred the standard tensor Chebyshev grid. Is that not as straightforward?
    Sheehan Olver
    @dlfivefifty
    If you comment out the lines after ## Multivariate in https://github.com/JuliaApproximation/ApproxFun.jl/blob/master/src/Spaces/Chebyshev/Chebyshev.jl it will go back to the default tensor version; probably a keyword argument would be appropriate here to allow switching.
    Christoph Ortner
    @cortner
    ok - I’ll try
    thanks for the suggestions
    Christoph Ortner
    @cortner
    Do I understand correctly that if I wanted to have fast and direct access to the transforms between the various spaces then it would be best to go around ApproxFun.jl and use FastTransforms.jl directly?
    Though I’d probably still need ApproxFun to evaluate the basis functions...
    In general, I wonder whether a “manual mode” of ApproxFun might be useful. I noticed e.g. that restricting the degree in \ rather than the tolerance throws warnings even if it is intended. By “manual mode” I mean non-adaptive.
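    As a concrete example of going to FastTransforms.jl directly for a space-to-space conversion, a minimal sketch (Chebyshev-to-Legendre coefficient conversion, assuming cheb2leg/leg2cheb are exported by the installed version):

    using FastTransforms

    c_cheb = randn(20)               # Chebyshev coefficients of some function
    c_leg  = cheb2leg(c_cheb)        # convert to Legendre coefficients
    c_back = leg2cheb(c_leg)         # round-trip back to Chebyshev
    maximum(abs, c_back - c_cheb)    # should be near machine precision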
    Sheehan Olver
    @dlfivefifty
    The long term plan is to do “Manual mode” via https://github.com/JuliaApproximation/ContinuumArrays.jl. This will also support FEM (in fact @jagot has a package for splines building on ContinuumArrays.jl) and make it possible to use distributed memory.
    Stefanos Carlström
    @jagot
    Actually, I don't, yet :)
    I have a FEDVR package
    But that is a bit limited at the moment, in that it only supports Dirichlet1 boundary conditions, since I've yet to fix the issue with restriction matrices (i.e. dropping the first and last basis functions)
    Sheehan Olver
    @dlfivefifty
    In any case, it’s kind of on the back burner as I have a backlog of about 10 papers to finish (and probably papers help more with promotion than making another package that does basically the same thing as ApproxFun 😅). But the basics are already implemented.
    Mitko Georgiev
    @mitkoge
    Thank you @dlfivefifty,
    I think I will start by reading Alex Townsend's paper behind FastAsyTransforms.m, and maybe in the long run
    I will figure out how to do HankelTransform(Fun) -> Fun.
    Don MacMillen
    @macd
    I guess I don't really understand the support for complex numbers in ApproxFun. If I try to evaluate a Fun(cos) at a complex argument, it returns zero if the imaginary part is non-zero. Curiously, it will return a complex number with a correct real part if the imaginary part of the input is zero. Also, I can easily create Funs with complex coefficients that will return a complex number when evaluating a real number, but again will return zero if the imaginary part is non-zero. Here are a few examples. (I believe all of these work in Chebfun, btw.)
    julia (v1.2)> f = Fun(cos);
    
    julia (v1.2)> f(.1)
    0.9950041652780257
    
    julia (v1.2)> f(.1 + .1im)
    0.0
    
    julia (v1.2)> f(.1 + .0im)
    0.9950041652780257 + 0.0im
    
    julia (v1.2)> g = Fun(Chebyshev(), randn(Complex{Float64}, 20));
    
    julia (v1.2)> g(.1)
    2.1013219596855888 - 0.11052912191219874im
    
    julia (v1.2)> g(.1 + .1im)
    0.0 + 0.0im
    
    julia (v1.2)> cos(.1 + .1im)
    0.9999833333373015 - 0.00999998888888977im
    Sheehan Olver
    @dlfivefifty
    evaluation is only well defined on the domain of the function, which is -1..1 by default. Otherwise it returns zero.
    If you want to evaluate on a domain in the complex plane you can use say a Circle or Segment
    I’d be shocked if those work in Chebfun unless it is doing extrapolation, which is numerically dangerous. But you can use extrapolate(f,x) in ApproxFun if you wish
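    For instance, a minimal sketch contrasting plain evaluation with extrapolate (off-domain points return zero, as noted above; the sample point is arbitrary):

    using ApproxFun

    f = Fun(cos)                  # domain is -1..1 by default
    f(0.1 + 0.1im)                # 0.0: the point is not in the domain
    extrapolate(f, 0.1 + 0.1im)   # ≈ cos(0.1 + 0.1im), by evaluating the expansion off the domain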
    Don MacMillen
    @macd
    Thanks. I see your point. It does look, however, as though Chebfun takes more of a "let the buyer beware" view: it will happily evaluate many complex arguments, and as long as you are inside the Bernstein ellipse, life is good. It will even plot the Bernstein ellipse. As you say, I can use extrapolate for those cases, but that raises the question: is returning zero the best way to flag that error?
    Sheehan Olver
    @dlfivefifty
    That was probably a poor design decision from ages ago. The use case is for adding two functions whose domains are different. But the issue is a conflation of the support of a function with its domain, so some thought would be needed before changing it
    If it were redesigned it would probably throw an error: it doesn’t really make sense to automatically do analytic continuation while also supporting piecewise functions
    Don MacMillen
    @macd
    Good to know. Thanks again.
    Jones-IHF
    @Jones-IHF
    Hello, I'm trying to solve the Helmholtz eqn for arbitrary wavenumber, building on the example in the readme.
    Jones-IHF
    @Jones-IHF
    using ApproxFun, Plots, LinearAlgebra
    const c₀ = 3.0*10^8
    d = Circle(0.0, 1.0, true)                         # Defines a circle
    Δ = Laplacian(d)                            # Represent the Laplacian
    f = ones(∂(d))                              # one at the boundary
    
    ω = Interval(0.0..10.0^9)
    k = ω/c₀
    
    u = \([Dirichlet(d); Δ+k*I], [f;0.];       # Solve the PDE
                    tolerance=1E-5)        
    
    surface(u)                                  # Surface plot

    I get the error 'Implement Laplacian(Laurent(🕑), 1)'

    When looking into the code for ApproxFun, I can find no reference to the Laplacian, only a confusing macro for implementing general derivative operators.
    Where would I go to work out how to implement new Operators?

    I was also wondering if it is yet possible to build spaces of arbitrary dimension to solve 3D PDEs, possibly with extra non-spatial dimensions to represent free parameters

    Sheehan Olver
    @dlfivefifty
    Do you actually want a Disk()? That sort of existed at one point, but the code is crusty. We do have a version in MultivariateOrthogonalPolynomials.jl which uses @MikaelSlevinsky’s awesome FastTransforms to do function approximation on the disk, but there’s no build script for that
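    For comparison, a minimal sketch of the README-style Helmholtz solve on a square, solving for one fixed wavenumber at a time (looping over k values rather than treating ω as a domain; the values here are only illustrative):

    using ApproxFun, LinearAlgebra

    d = (-1.0..1.0)^2                    # square domain
    Δ = Laplacian(d)                     # Laplacian on the square
    f = ones(∂(d))                       # boundary data: one on the boundary
    for k in (10.0, 20.0)                # solve separately for each wavenumber
        u = \([Dirichlet(d); Δ + k^2*I], [f; 0.0]; tolerance=1E-5)
        @show k, u(0.1, 0.2)             # sample the solution at an interior point
    end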