f
@time f2([1,2,3,4,5]) is 14.322378
it's their example
Christopher Rackauckas
@ChrisRackauckas
```julia
julia> f(x) = x .+ x .* x
f (generic function with 1 method)

julia> @time f([1,2,3,4,5])
  0.021305 seconds (24.38 k allocations: 1.195 MiB)
5-element Array{Int64,1}:
  2
  6
 12
 20
 30

julia> @time f([1,2,3,4,5])
  0.000004 seconds (6 allocations: 416 bytes)
5-element Array{Int64,1}:
  2
  6
 12
 20
 30
```
did you run it twice?
No
f
No
Christopher Rackauckas
@ChrisRackauckas
So your timings don't mean anything.
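The "run it twice" point is about Julia's JIT compiler: the first call to a function compiles it, so the first `@time` includes compile time. A minimal sketch of the standard warm-up pattern, using the same `f` as in the REPL session above:

```julia
f(x) = x .+ x .* x

# The first call triggers JIT compilation, so timing it would mostly
# measure compile time, not execution time.
f([1, 2, 3, 4, 5])        # warm-up call; discard this timing

# Subsequent calls reuse the compiled method, so @time now measures
# only execution and allocation.
@time f([1, 2, 3, 4, 5])
```

In practice, BenchmarkTools.jl's `@btime` automates this by warming up and then timing many samples.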
f
of course, if I run both again
I get this:
@time f([1,2,3,4,5]) is 0.000004
and @time f2([1,2,3,4,5]) is 0.000109
why is f2 worse than f?
I mean, it's their example
I think ParallelAccelerator is not good
Christopher Rackauckas
@ChrisRackauckas
because you're probably multithreading a computation which only has length 5?
f
because I couldn't get enough speedup
no
I think Julia is only good for sequential code. I know it's a bad result, but
I tried everything everywhere
I never get a good result from any package that told me it's parallel, it's good at speedup, and so on
Christopher Rackauckas
@ChrisRackauckas
Yes, and I've been telling you why for weeks
f
I expect to get a good result here!
but
Christopher Rackauckas
@ChrisRackauckas
no, not in this 5 element case
f
you're right
in main program that I'm working with
Christopher Rackauckas
@ChrisRackauckas
that's just not how parallelism works. Adding and multiplying 5 elements is not worth parallelizing
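Chris's point about problem size can be illustrated with a minimal sketch (the `serial!`/`threaded!` functions below are hypothetical, not from the chat): both compute the same elementwise `x + x*x`, but for 5 elements the per-thread scheduling overhead dwarfs the arithmetic, while for millions of elements it amortizes.

```julia
using Base.Threads

# Serial version of the elementwise update.
function serial!(y, x)
    @inbounds for i in eachindex(x, y)
        y[i] = x[i] + x[i] * x[i]
    end
    return y
end

# Multithreaded version: splits the loop range across threads.
# Each thread pays a fixed start-up/scheduling cost, so this only
# wins when the work per thread exceeds that cost.
function threaded!(y, x)
    @threads for i in eachindex(x, y)
        @inbounds y[i] = x[i] + x[i] * x[i]
    end
    return y
end

tiny = rand(5)     # overhead dominates: threading loses here
big  = rand(10^7)  # enough work per thread for a speedup
```

The crossover point depends on how expensive each iteration is: 50 iterations of a costly body can be worth threading, while 50 additions never are.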
f
I tried to work with 50 elements and I got only 2 seconds better than sequential
Christopher Rackauckas
@ChrisRackauckas
What chip are you using and are you sure that SIMD isn't just using all of your FPUs or something like that?
f
I ran it on several PCs
I just tried this example with 50 elements too
I got a bad result too
Christopher Rackauckas
@ChrisRackauckas
yes...
f
what number is big? 50 is big
Christopher Rackauckas
@ChrisRackauckas
50 is very very small
f
but my main program does a lot of work, and I think 50 is quite big for that
I think yes, for this function it's small... but 50 iterations of a loop
Christopher Rackauckas
@ChrisRackauckas
if what you're multithreading is costly then 50 is very big
f
and parallelizing a loop of that size is not a small job
right?
Christopher Rackauckas
@ChrisRackauckas
depends
f
oh, I used multithreading too and I got a really bad result
Christopher Rackauckas
@ChrisRackauckas
I told you to check for inferrability issues though
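The inferrability check Chris mentions can be done with `@code_warntype`. The functions below are a hypothetical illustration (not from the chat) of a type instability that hurts serial and multithreaded code alike:

```julia
# Type-unstable: `acc` starts as an Int but becomes a Float64 on the
# first iteration when `x` holds floats, so the compiler infers it as
# Union{Int64, Float64} and generates slower, branching code.
function unstable_sum(x)
    acc = 0
    for v in x
        acc += v
    end
    return acc
end

# Type-stable: the accumulator is initialized with the element type of
# `x`, so its type never changes inside the loop.
function stable_sum(x)
    acc = zero(eltype(x))
    for v in x
        acc += v
    end
    return acc
end

# Inspect inferred types; unstable_sum shows a Union in its output:
# @code_warntype unstable_sum(rand(5))
```

Fixing instabilities like this is usually a prerequisite for any parallel speedup, which is why Chris keeps pointing to it.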
f
I can push all of my work to git and show it to you if you want
Christopher Rackauckas
@ChrisRackauckas
and you never did
(this is off topic for ParallelAccelerator and should probably go to the main Julia channel)
f