Clenshaw is implemented in PolynomialSpace.jl
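For reference, the Chebyshev case of Clenshaw's backward recurrence can be sketched in a few lines of plain Julia; this is a minimal illustration, not the generalized implementation in PolynomialSpace.jl:

```julia
# Evaluate p(x) = Σ c[k+1] * T_k(x) via Clenshaw's backward recurrence.
# Minimal sketch of the Chebyshev case; PolynomialSpace.jl generalizes
# this to arbitrary three-term recurrence coefficients.
function clenshaw_chebyshev(c::AbstractVector, x)
    isempty(c) && return zero(x)
    b1 = b2 = zero(x)
    for k in length(c):-1:2
        b1, b2 = c[k] + 2x * b1 - b2, b1
    end
    return c[1] + x * b1 - b2
end
```

For example, `clenshaw_chebyshev([0.0, 0.0, 1.0], 0.5)` evaluates T₂(0.5) = 2(0.5)² − 1 = −0.5.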

May I ask something that is maybe not directly related to ApproxFun?

I found FastAsyTransforms.m on github and wondered if such functionality is already available as julia code?

I looked at FastTransforms.jl but could not find something like a HankelTransform. Would you kindly direct?

May be SingularIntegralEquations.jl? But no idea how to start with it? Or other place?

I’m trying to implement something like this FAQ example,

```
using ApproxFun
S = Chebyshev(1..2);
p = points(S,20); # the default grid
v = exp.(p); # values at the default grid
f = Fun(S,ApproxFun.transform(S,v));
```

but multi-variate (2D tensor Chebyshev will do). The canonical thing,

```
S = Chebyshev((1..2)^2)
p = points(S, 20)
```

errors. Of course I could just construct the points via tensor products, but then I’m unsure how to use the `ApproxFun.transform(S,v)`

correctly. Is this documented somewhere? Is there an example I can look at?

Basically, I just want to freeze the polynomial degree, rather than prescribe a solver tolerance.
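A sketch of the 2D analogue of the FAQ example, assuming the tensor-product space's `points`/`transform` pair works together the same way the 1D one does (untested guesswork, so treat the details as assumptions):

```julia
using ApproxFun

# Sketch: 2D analogue of the 1D FAQ example with a fixed number of
# coefficients. Assumes `points` and `ApproxFun.transform` pair up on
# the tensor space the same way they do in 1D.
S = Chebyshev(1..2)^2                  # tensor-product Chebyshev space
p = points(S, 100)                     # grid (Padua points for Chebyshev^2)
v = [exp(x + y) for (x, y) in p]       # values on the grid
f = Fun(S, ApproxFun.transform(S, v))  # coefficients via the 2D transform
```

Evaluating `f(1.5, 1.5)` should then agree with `exp(3.0)` up to the resolution of the grid.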

I didn’t know about this family of points, reading up on it now, very very interesting...

No, but anything other than Chebyshev^2 will use a tensor grid. Padua points are nice because you don’t oversample, and the transform is a single one-dimensional DCT (though as implemented we form a tensor product by padding with zeros and use the 2D DCT; thanks to aliasing we can still recover the coefficients)

There’s a Chebfun example describing it

If you comment out the lines after `## Multivariate` in https://github.com/JuliaApproximation/ApproxFun.jl/blob/master/src/Spaces/Chebyshev/Chebyshev.jl it will go back to the default tensor version; a keyword argument would probably be appropriate here to allow switching

thanks for the suggestions

Though I’d probably still need `ApproxFun` to evaluate the basis functions...
In general, I wonder whether a “manual mode” of `ApproxFun` might be useful. I noticed e.g. that restricting the degree in `\` rather than the tolerance throws warnings even when it is intended. By “manual mode” I mean non-adaptive.
The long term plan is to do “Manual mode” via https://github.com/JuliaApproximation/ContinuumArrays.jl. This will also support FEM (in fact @jagot has a package for splines building on ContinuumArrays.jl) and make it possible to use distributed memory.

I have a FEDVR package

But that is a bit limited at the moment, in that it only supports Dirichlet1 boundary conditions, since I've yet to fix the issue with restriction matrices (i.e. dropping the first and last basis functions)

In any case, it’s kind of on the back burner as I have a backlog of about 10 papers to finish (and probably papers help more with promotion than making another package that does basically the same thing as ApproxFun 😅). But the basics are already implemented.

I guess I don't really understand the support for complex numbers in ApproxFun. If I try to evaluate a `Fun(cos)` at a complex argument, it returns zero if the imaginary part is non-zero. Curiously, it will return a complex number with a correct real part if the imaginary part of the input is zero. Also, I can easily create `Fun`s with complex coefficients that will return a complex number when evaluating at a real number, but again will return zero if the imaginary part is non-zero. Here are a few examples. (I believe all of these work in Chebfun, btw.)

```
julia (v1.2)> f = Fun(cos);
julia (v1.2)> f(.1)
0.9950041652780257
julia (v1.2)> f(.1 + .1im)
0.0
julia (v1.2)> f(.1 + .0im)
0.9950041652780257 + 0.0im
julia (v1.2)> g = Fun(Chebyshev(), randn(Complex{Float64}, 20));
julia (v1.2)> g(.1)
2.1013219596855888 - 0.11052912191219874im
julia (v1.2)> g(.1 + .1im)
0.0 + 0.0im
julia (v1.2)> cos(.1 + .1im)
0.9999833333373015 - 0.00999998888888977im
```

If you want to evaluate on a domain in the complex plane you can use, say, a `Circle` or `Segment`.

I’d be shocked if those work in Chebfun unless it is doing extrapolation, which is numerically dangerous. But you can use `extrapolate(f,x)` in ApproxFun if you wish.
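Applied to the session above, that would look like the following (a quick sketch; `extrapolate` evaluates the expansion off the domain, so it is only reliable within the region of analyticity):

```julia
using ApproxFun

# `f` is defined on [-1,1]; plain evaluation at a point with non-zero
# imaginary part returns 0, but `extrapolate` evaluates the Chebyshev
# series anyway (accurate only inside the Bernstein ellipse).
f = Fun(cos)
z = 0.1 + 0.1im
extrapolate(f, z)   # ≈ cos(z) for z this close to the interval
```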
Thanks. I see your point. It does look, however, like Chebfun takes more of a "let the buyer beware" view: it will happily evaluate many complex arguments, and as long as you are inside the Bernstein ellipse, life is good. It will even plot the Bernstein ellipse. As you say, I can use extrapolate for those cases, but that raises the question: is returning zero the best way to flag that error?

If it were redesigned it would probably throw an error: it doesn’t really make sense to automatically do analytic continuation while also supporting piecewise functions

```
using ApproxFun, Plots, LinearAlgebra
const c₀ = 3.0*10^8
d = Circle(0.0, 1.0, true) # Defines a circle
Δ = Laplacian(d) # Represent the Laplacian
f = ones(∂(d)) # one at the boundary
ω = Interval(0.0..10.0^9)
k = ω/c₀
u = \([Dirichlet(d); Δ+k*I], [f;0.]; # Solve the PDE
tolerance=1E-5)
surface(u) # Surface plot
```

I get the error 'Implement Laplacian(Laurent(:clock:), 1)'

When looking into the code for ApproxFun, I can find no reference to the Laplacian, only a confusing macro for implementing general derivative operators.

Where would I go to work out how to implement new Operators?

I was also wondering if it was yet possible to build spaces of arbitrary dimension to solve 3D PDEs, possibly with extra non spatial dimensions to represent free parameters
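For reference, the analogous Helmholtz solve does work on a rectangle, where `Laplacian` is implemented. A sketch with a single fixed frequency (the `ω` interval above would need to become a loop over scalar values, one solve per frequency):

```julia
using ApproxFun, LinearAlgebra

# Same operator as above, Δ + k*I with u = 1 on the boundary, but on a
# square domain where Laplacian is implemented. One scalar frequency;
# a sweep would re-solve this for each ω.
c₀ = 3.0e8
d = (0.0..1.0)^2
Δ = Laplacian(d)
ω = 1.0e9           # one frequency from the intended sweep
k = ω/c₀
u = \([Dirichlet(d); Δ + k*I], [ones(∂(d)); 0.0]; tolerance=1E-5)
```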

do you actually want a `Disk()`? That sort of existed at one point but the code is crusty. We do have a version in MultivariateOrthogonalPolynomials.jl which uses @MikaelSlevinsky’s awesome FastTransforms.jl to do function approximation on the disk, but there’s no build script for that
For 3D, we’re starting to implement that in MultivariateOrthogonalPolynomials.jl for the cone (random yes, it’s because I’m writing a paper about the cone with Yuan Xu)

But honestly ApproxFun needs a significant refactor before 3D can be tackled seriously: it’s possible to get it working, but it requires fighting hacky code the whole way and is not amenable to high performance

There is work behind the scenes on ContinuumArrays.jl to provide a more sound underpinning that scales to 3D, but it won’t be worked on seriously in the next 6 months or so

If you are interested in helping with that I’m happy to talk more.

Hi. I'm working with the Gross–Pitaevskii equation in two dimensions, with periodic boundary conditions, and I'm trying out the periodic parts of ApproxFun. Could someone please rebuild the ApproxFun manual? Apparently it's been 10 months since the last build; the *Latest* version at github.io is that out of date.

`PeriodicInterval` was renamed to `PeriodicSegment` in the source tree, but the manual still refers to the old name.
I've noticed that ApproxFun doesn't reexport DomainSets. So the manual describes ProductDomain and UnionDomain, but these aren't exported by ApproxFun.
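A workaround, assuming DomainSets.jl is installed in the same environment, is to load it explicitly so those names resolve:

```julia
using ApproxFun
using DomainSets   # ProductDomain, UnionDomain, etc. are defined here

# ApproxFun builds on these types but doesn't reexport them.
d = ProductDomain(1..2, 3..4)
```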