Ross A. Baker
@rossabaker
I have some scars from commons-logging like 15 years ago in Tomcat environments, but this is just not really a problem on servlet deployments anymore. They isolate container classloaders from application classloaders in a way that doesn't happen in Spark.
Ross A. Baker
@rossabaker
I ran into further fun with shading when shapeless was introduced by a macro. I still don't understand why that made a difference, because macros are compile time, and shading should happen after compilation. But I had to fork that library and manually change all the references to shapeless.
Long Cao
@longcao
@rossabaker I'm really curious what runtime error you're running into
we definitely have our own shapeless code on 2.3.3 and Spark 2.3.1 on EMR, but I don't recall doing any special shading rules specifically for shapeless
Ross A. Baker
@rossabaker
I have a coworker who is upgrading a dependency on that legacy project against my advice. I may have another example of it soon.
It cost me almost a week last time I touched that build, and I'm not going back.
Pavel Chlupacek
@pchlupacek
@rossabaker that's an interesting observation with Spark. I guess in these environments every library essentially hurts and gives you problems. So you're essentially lucky that Spark does not use parboiled, right?
and back to the original question: is parboiled available for Scala.js?
Ross A. Baker
@rossabaker
Yes.
Pavel Chlupacek
@pchlupacek
cool :-)
Ross A. Baker
@rossabaker
parboiled2's API is hostile to composition. We have it because we forked Spray to bootstrap the project and it was all by the same person.
So there is some very unpleasant code around it, but it's fast, and more importantly, nobody has wanted to rewrite all the parsers.
Pavel Chlupacek
@pchlupacek
yes, I know, that's why we sort of always avoid it when possible :-)
Ross A. Baker
@rossabaker
It would be interesting to benchmark a scodec or an atto based solution.
Dennis
@dennis4b_twitter

Hi, I have a Stream[IO,Byte] which I want to use as the source for a java.io.InputStream to give to a legacy Java library, with full resource safety.
For this I can use fs2.io.toInputStream like so:

myStream.through(fs2.io.toInputStream).flatMap(inputStream => {
    // I need to return a fs2.Stream?
    // this is where I am a bit stuck. I would like to work with IO "in here", so I can
    // hack something together, wrap it in fs2.Stream.eval, then compile.toVector.head to
    // get the return value, but that feels awkward.
})

How would I do this?

In other words, assume:

def myFunction(source: fs2.Stream[IO,Byte]): IO[Int] = ???

and the existence of some java function:

  def getSize(inputstream: java.io.InputStream): Int

What would the body of myFunction look like?

Fabio Labella
@SystemFw
that's more helpful :)
myStream.through(fs2.io.toInputStream).evalMap(in => IO(getSize(in))).lastOrError
Dennis
@dennis4b_twitter
evalMap and lastOrError were the things I needed to learn about, thank you! It compiles :-)
btw I had to use .compile.lastOrError
Fabio Labella
@SystemFw
yeah, my bad
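Putting the correction together, the full myFunction from the question could look like the following. This is a sketch assuming fs2 1.0 / cats-effect 1.x; getSize here is a hypothetical stand-in for the legacy Java call:

```scala
import java.io.InputStream
import scala.concurrent.ExecutionContext
import cats.effect.{ContextShift, IO}
import fs2.Stream

implicit val cs: ContextShift[IO] = IO.contextShift(ExecutionContext.global)

// Hypothetical stand-in for the legacy Java call from the question:
// counts the bytes by reading the stream to exhaustion.
def getSize(in: InputStream): Int = {
  var n = 0
  while (in.read() != -1) n += 1
  n
}

// toInputStream emits exactly one InputStream, whose lifetime is tied to
// the stream's scope; compile.lastOrError extracts the single result.
def myFunction(source: Stream[IO, Byte]): IO[Int] =
  source
    .through(fs2.io.toInputStream)
    .evalMap(in => IO(getSize(in)))
    .compile
    .lastOrError
```

The key point is that the InputStream is only valid inside the stream's scope, so the legacy call happens inside evalMap rather than after compile.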
Dennis
@dennis4b_twitter

Hmm...

val someStream: Stream[IO,Byte] = ....
val tmp = someStream.compile.toVector.unsafeRunSync
tmp.size         // = 5000, as expected
// but then:
someStream.through(fs2.io.toInputStream).evalMap(inputStream => IO{
      inputStream.available   // = 0 ??? 
}).compile.drain

am I using toInputStream wrong?

Michael Pilquist
@mpilquist
available is not supported - we inherit the default implementation of that method, which always returns 0
Dennis
@dennis4b_twitter
Ok so the following won't work... (from the scrimage library)
  def fromStream(in: InputStream, `type`: Int = CANONICAL_DATA_TYPE): Image = {
    require(in != null)
    require(in.available > 0)
    val bytes = IOUtils.toByteArray(in)
    apply(bytes, `type`)
  }
Michael Pilquist
@mpilquist
Ouch no
Seems like that's broken in scrimage, as that method is supposed to return the number of bytes that can be read without blocking

E.g., from JavaDoc of available:

Returns an estimate of the number of bytes that can be read (or skipped over) from this input stream without blocking by the next invocation of a method for this input stream. The next invocation might be the same thread or another thread. A single read or skip of this many bytes will not block, but may read or skip fewer bytes.

Dennis
@dennis4b_twitter
Thanks, I'll work around it with the Array[Byte] constructor and report an issue
Gavin Bisesi
@Daenyth
require(in.available > 0) that's... not smart
if in.available returned 1, then the next line could still block on 5000 bytes after that one ready byte
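Since available can't be trusted, the workaround Dennis mentions (materializing the bytes and using the Array[Byte] constructor) could be sketched like this, assuming fs2 1.0. Fine for images; avoid for streams too large for memory:

```scala
import cats.effect.IO
import fs2.Stream

// Collect the whole stream into an Array[Byte] and hand that to the
// legacy API, sidestepping InputStream.available entirely.
def toByteArray(s: Stream[IO, Byte]): IO[Array[Byte]] =
  s.compile.toVector.map(_.toArray)
```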
Fabio Labella
@SystemFw
do you guys think this is useful? I'm debating whether to PR it or not
def spawn[F[_]: Concurrent, A](s: Stream[F, A]): Stream[F, Fiber[F, Unit]]
@mpilquist
Michael Pilquist
@mpilquist
Cool
Christopher Davenport
@ChristopherDavenport
evalMap(_.start)?
Fabio Labella
@SystemFw
not quite
you'll see the PR in a minute though
Christopher Davenport
@ChristopherDavenport
Oh, missed the unit. I get it now.
Gavin Bisesi
@Daenyth
What's the point?
Fabio Labella
@SystemFw
@Daenyth a very minimal "fork this stream"
not crucial by any means, hence why I was hesitant
Gavin Bisesi
@Daenyth
I see, you get the cancellation behavior
Fabio Labella
@SystemFw
yeah, link the Fiber to the Stream lifetime
we already have something similar for F
Stream.supervise
this is just the logical extension to Stream
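For context, a minimal sketch of what such a spawn could look like in terms of the existing Stream.supervise; this is an assumption about the shape of the idea, not the contents of the actual PR:

```scala
import cats.effect.{Concurrent, Fiber}
import fs2.Stream

// Run `s` for its effects on a background Fiber whose lifetime is tied
// to the enclosing Stream scope: closing the scope cancels the fiber.
def spawn[F[_]: Concurrent, A](s: Stream[F, A]): Stream[F, Fiber[F, Unit]] =
  Stream.supervise(s.compile.drain)
```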
Dennis
@dennis4b_twitter
@Daenyth Yeah, I guess it's just a misunderstanding about the meaning of available, reading it as "how much data is available" rather than "how much can be read without blocking" (and honestly I would have assumed the same thing, though obviously the total can't always be known in advance). Anyway, I worked around it, and pdfbox is happy with the InputStream from fs2.io :)
Dennis
@dennis4b_twitter
val stream = Stream.emit(42) ++ Stream.eval(timer.sleep(1000.millis))     // timer is Timer[IO]

stream.repeat.map(v => logger.info(s"Value is ${v}")).compile.drain
This prints "42" and then "()", since the return value of timer.sleep is also emitted. What I would like is to emit only the 42, spaced 1 second apart.
What is the correct way to do this? My use case is polling a database table for pending tasks, so a "return-if-found-else-sleep" kind of loop.
Dennis
@dennis4b_twitter
nicer but has the same result: val s = Stream.emit(10) ++ Stream.sleep[IO](1000.millis)
Dennis
@dennis4b_twitter
(in this simple example I could use awakeEvery)
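The usual answer here is the drained variant Stream.sleep_, which sleeps without emitting anything. A sketch, assuming fs2 1.0:

```scala
import scala.concurrent.ExecutionContext
import scala.concurrent.duration._
import cats.effect.{IO, Timer}
import fs2.Stream

implicit val timer: Timer[IO] = IO.timer(ExecutionContext.global)

// Stream.sleep_ (note the underscore) emits no value, so only the 42s
// come through, spaced one second apart.
val ticks: Stream[IO, Int] =
  (Stream.emit(42) ++ Stream.sleep_[IO](1.second)).repeat
```

For the poll-the-database case, the same shape works with the sleep in the not-found branch: evaluate the query, emit the task if one was found, otherwise Stream.sleep_ before the repeat.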