```
x = Fun(0..1); sx = space(x)
a = Fun(0..pi); sa = space(a)
fa = DefiniteIntegral(x*a, sx)
# I expect fa == a here
```

but I get a `MethodError`. Are integrals of functions of two arguments possible?
To practice your guidance I wrote

```
function Hankel(fx::Fun, spx::Space, spa::Space)
    spxa = spx ⊗ spa
    fx2 = Fun((x, a) -> fx(x), spxa)          # lift fx to the product space
    fj = Fun((x, a) -> besselj0(x*a), spxa)   # Bessel kernel J0(x*a)
    xw = Fun((x, a) -> x, spxa)               # the x weight in the Hankel integral
    fa2 = (DefiniteIntegral(domain(spx)) ⊗ I)*(fx2*fj*xw)
    fa = Fun(a -> fa2(0, a), spa)
end
```

from the definition. It seems to work. Thank you!

Is this the style in which ApproxFun is expected to be used?

In Julia, capitalized function names are conventionally used only when the function returns a special type of the same name, so this would be better as lowercase. Also, there is support for Green's functions in https://github.com/JuliaApproximation/SingularIntegralEquations.jl, including Helmholtz / Hankel kernels.

I see. I will check `SpaceOperator` and will have to learn what `rangespace` is.

And thank you for the link. It is probably a more efficient implementation? I will check it.

I tried `hankel` as a learning example.

The kind of high-level style in which ApproxFun works fascinates me.

Like poetry for the crowd. I do not necessarily understand it, but I like it. :-)

When I blindly try

```
spx = Space(0..1); spa = Space(0..π); spxa = spx ⊗ spa
Q2 = DefiniteIntegral(spx.domain) ⊗ I
Q1 = ApproxFun.SpaceOperator(Q2, spxa, spa)
x = Fun((x, a) -> x, spxa); a = Fun((x, a) -> a, spxa)
(Q2*(x*a))(0, 1)  # == 0.5
(Q1*(x*a))(1)     # == 0.5*pi/2
```

It seems to work up to a scaling coefficient.

I'm quite sure `Fun` used to work on intervals in the complex domain. But now

```
f(x) = cos(x)
Fun(f, Interval(1.0+1.0im, 2.0+2.0im))
```

throws the error

```
ERROR: MethodError: no method matching isless(::Complex{Float64}, ::Complex{Float64})
Closest candidates are:
isless(::Missing, ::Any) at missing.jl:66
isless(::InfiniteArrays.OrientedInfinity{Bool}, ::Number) at /.julia/packages/InfiniteArrays/Z4yap/src/Infinity.jl:145
isless(::Number, ::InfiniteArrays.OrientedInfinity{Bool}) at /.julia/packages/InfiniteArrays/Z4yap/src/Infinity.jl:144
...
Stacktrace:
[1] <(::Complex{Float64}, ::Complex{Float64}) at ./operators.jl:260
[2] >(::Complex{Float64}, ::Complex{Float64}) at ./operators.jl:286
[3] isempty(::Interval{:closed,:closed,Complex{Float64}}) at /.julia/packages/IntervalSets/xr34V/src/IntervalSets.jl:153
```

Hello, I was playing around with the Poisson equation example and wondered whether I could replace the RHS $f$ with something like $\delta (x) \delta (y)$. I tried to construct the RHS like this:

```
fx = KroneckerDelta()
fy = KroneckerDelta()
f = Fun((x,y) -> fx(x) * fy(y))
```

But I got the error `ERROR: MethodError: no method matching isless(::Int64, ::Nothing)`. What is the proper way for me to do that? Thanks in advance!

Hello! I would like to consider ApproxFun for the following problem: I have a nonlinear operator $\Phi: L_s \to L_s$, where $L_s$ is the set of continuous functions that go to zero exponentially fast at speed $s > 0$; that is, $x$ belongs to $L_s$ if $\sup_{t \geq 0} |x(t)| e^{s t} < +\infty$. Moreover, given $x$ in $L_s$, $\Phi(x)$ is the solution of a Volterra integral equation of the form $\Phi(x)(t) = K_0(x)(t) + \int_0^t K(x)(t, u)\, \Phi(x)(u)\, du$, where the kernels $K_0(x)$ and $K(x)$ are known explicitly in terms of $x$. I would like to compute the spectrum of the Fréchet differential of $\Phi$ at the point $x = 0$. Do you think it is possible to do that?

Let me know if the question is unclear... I do not have a background in numerical simulations. Thanks :)
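Not a full answer, but as a rough illustration of one numerical route: the sketch below (plain Julia, with a hypothetical kernel `k` standing in for the linearized kernel $K(0)(t, u)$) discretizes a Volterra operator $(Vh)(t) = \int_0^t k(t, u)\, h(u)\, du$ with a trapezoidal Nyström rule and takes the eigenvalues of the resulting matrix.

```julia
using LinearAlgebra

# Hedged sketch: Nyström (trapezoid) discretization of the Volterra operator
# (V h)(t) = ∫_0^t k(t, u) h(u) du on [0, T]. The kernel passed in below is a
# hypothetical stand-in for the linearized kernel K(0)(t, u).
function volterra_matrix(k, T, n)
    t = range(0, T; length = n)
    h = step(t)
    A = zeros(n, n)
    for i in 2:n, j in 1:i            # row i only integrates over [0, t_i]
        w = (j == 1 || j == i) ? h/2 : h   # trapezoid weights
        A[i, j] = w * k(t[i], t[j])
    end
    A
end

A = volterra_matrix((t, u) -> exp(-(t - u)), 1.0, 200)
λ = eigvals(A)
```

One caveat: a Volterra operator with continuous kernel is quasi-nilpotent, so its spectrum is $\{0\}$; the discretized matrix is lower triangular with tiny diagonal entries, reflecting exactly that. The interesting spectrum is that of the full Fréchet differential (which also involves the $K_0$ term), not of $V$ alone.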

And how does one encode $\{ (x, y) \in \mathbb{R}_+^2,\ x \geq y \}$?

Ok great! Looking forward to seeing that. The Volterra equation I am looking at appears in a neuroscience problem (see https://arxiv.org/abs/1810.08562). The dynamics of the mean-field network can be reduced to a nonlinear Volterra equation (see equations (7) and (8) of the paper). By the way, the same problem can be solved numerically via the associated PDE (see equation (3)), but I am curious whether more efficient numerical methods can be developed by specifically exploiting this Volterra equation.

Clenshaw is implemented in PolynomialSpace.jl
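As a side note, here is a minimal plain-Julia sketch of what Clenshaw's recurrence does for a Chebyshev series $\sum_k c_k T_k(x)$; the actual PolynomialSpace implementation is more general, covering arbitrary polynomial spaces via their recurrence coefficients.

```julia
# Clenshaw's backward recurrence for a Chebyshev series sum(c[k+1] * T_k(x)),
# evaluated without constructing any T_k explicitly.
function clenshaw_cheb(c::AbstractVector, x)
    bk1 = zero(x); bk2 = zero(x)
    for k in length(c):-1:2
        bk1, bk2 = 2x*bk1 - bk2 + c[k], bk1
    end
    # the leading coefficient enters with a single (not doubled) x factor
    x*bk1 - bk2 + c[1]
end
```

For example, `clenshaw_cheb([1.0, 0.5, 0.25], 0.5)` returns `1.125`, matching $1 + 0.5x + 0.25\,(2x^2 - 1)$ at $x = 0.5$.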

May I ask something that is perhaps not directly related to ApproxFun?

I found FastAsyTransforms.m on GitHub and wondered whether such functionality is already available as Julia code.

I looked at FastTransforms.jl but could not find something like a Hankel transform. Would you kindly point me in the right direction?

Maybe SingularIntegralEquations.jl? But I have no idea how to start with it. Or some other place?

I’m trying to implement something like this FAQ example,

```
S = Chebyshev(1..2);
p = points(S,20); # the default grid
v = exp.(p); # values at the default grid
f = Fun(S,ApproxFun.transform(S,v));
```

but multi-variate (2D tensor Chebyshev will do). The canonical thing,

```
S = Chebyshev((1..2)^2)
p = points(S, 20)
```

errors. Of course I could just construct the points via tensor products, but then I'm unsure how to use `ApproxFun.transform(S, v)` correctly. Is this documented somewhere? Is there an example I can look at?

Basically, I just want to freeze the polynomial degree, rather than prescribe a solver tolerance.
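For what it's worth, the tensor-product construction mentioned above can be sketched in plain Julia. The `chebpoints` helper below is hypothetical, not an ApproxFun API; it mirrors (up to ordering and point kind) a 1D Chebyshev grid mapped to $[a, b]$.

```julia
# Hypothetical helper (not ApproxFun API): first-kind Chebyshev points
# mapped from [-1, 1] to [a, b].
chebpoints(n, a, b) = [(a + b)/2 + (b - a)/2 * cospi((k + 0.5)/n) for k in 0:n-1]

xs = chebpoints(20, 1, 2)
ys = chebpoints(20, 1, 2)
vals = [exp(x + y) for x in xs, y in ys]   # 20×20 matrix of samples on the tensor grid
```

Since the basis is a tensor product, the values-to-coefficients transform factors dimension-wise (a 1D transform along each column, then along each row); note though that ApproxFun stores bivariate coefficients in its own ordering, so mapping the resulting coefficient matrix into `Fun(S, coeffs)` takes some care.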

I didn’t know about this family of points; reading up on it now, very very interesting...

No, but anything other than Chebyshev^2 will use a tensor grid. Padua points are nice because you don’t oversample, and the transform is a single one-dimensional DCT (though as implemented we form a tensor product by padding with zeros and use the 2D DCT, from which, thanks to aliasing, we can recover the coefficients).

There’s a Chebfun example describing it

If you comment out the lines after `## Multivariate` in https://github.com/JuliaApproximation/ApproxFun.jl/blob/master/src/Spaces/Chebyshev/Chebyshev.jl, it will go back to the default tensor version; a keyword argument would probably be appropriate here to allow switching.

thanks for the suggestions