Couldn't we specialize tensor_core with a matrix_engine?
Yes, that can be done too, but as discussed we will proceed with Amit's proposal for now. I suggest one thing: we should be quick in merging PRs; there are so many open PRs and there is absolutely no solid base branch to build new stuff on.
boost/ublas. See my previous comments to @KalyanKumar-4 and also consider this project list.
Should we know BLAS to learn uBLAS?
@KalyanKumar-4 what do you mean by "know blas"?
read my comments
@KalyanKumar-4 Thanks for your interest in boostorg/ublas. We have not selected any GSoC 21 tensor projects yet. For now, I recommend reading the general GSoC student guidelines and @mloskot's great advice, and having a look at the previous GSoC projects, e.g. https://github.com/BoostGSoC20/ublas/wiki or https://github.com/BoostGSoC19/tensor/wiki. For the communication style you might want to visit our former GSoC 19 Gitter channel.
@cosurgi there are no "periodic" iterators, only random access iterators, as you know them from std::vector, with which you can create valid ranges. We will introduce subtensors like A(3:4, 6:end, end), as you might know from Matlab. However, it may take a while until this is pulled into the master branch.
If you want to apply an FFT to a specific contiguous memory region inside a tensor, then slice the tensor using a subtensor, copy it into a new tensor, and access it with standard pointers. If you do not care about contiguous memory regions and speed, you can use a subtensor directly and apply all operations as you would for a tensor.
feature/subtensor branch.