Adam Rosien
@arosien
my memories reset every month or two, so i'm in the same boat
Yosef Fertel
@frosforever
is there a good reference bit of code on walking a directory tree and, say, getting the size of the contents of each directory in a parallel, controlled fashion? My problem is similar enough to that idea, and I currently have 3 implementations: non-parallel but slow; parallel but blows up the parallelism exponentially based on the tree depth; parallel but blows up memory as it's holding on to too much and not streaming. Wondering if there's prior work I could look at to get a better sense of how to do this "right"
Christopher Davenport
@ChristopherDavenport
Great question. Walking the subtree per depth seems problematic; maybe limit the total max number of ops at once via a semaphore and walk the exponential route.
Yosef Fertel
@frosforever
good old semaphore. that sounds reasonable
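A rough sketch of what the semaphore version might look like (not from the chat; `dirSizes` is a hypothetical helper, assuming cats-effect 2 / cats 2.x on Scala 2.13):

import java.nio.file.{Files, Path}
import scala.jdk.CollectionConverters._
import cats.effect.{Blocker, ContextShift, IO}
import cats.effect.concurrent.Semaphore
import cats.syntax.all._

// Recursively compute each directory's own size (sum of its direct files).
// Permits are held only around individual filesystem calls, never around a
// whole subtree, so the parallelism stays bounded regardless of tree depth
// and there is no nested-permit deadlock.
def dirSizes(root: Path, blocker: Blocker, maxOps: Long)(
    implicit cs: ContextShift[IO]): IO[List[(Path, Long)]] = {

  def list(dir: Path): IO[List[Path]] =
    blocker.delay {
      val s = Files.list(dir)
      try s.iterator.asScala.toList finally s.close()
    }

  def walk(sem: Semaphore[IO])(dir: Path): IO[List[(Path, Long)]] =
    sem.withPermit(list(dir)).flatMap { children =>
      val (dirs, files) = children.partition(Files.isDirectory(_))
      for {
        size <- files.foldMapM(f => sem.withPermit(blocker.delay(Files.size(f))))
        rest <- dirs.parTraverse(walk(sem)).map(_.flatten)
      } yield (dir, size) :: rest
    }

  Semaphore[IO](maxOps).flatMap(sem => walk(sem)(root))
}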
Fabio Labella
@SystemFw
you can use scatter-gather as well, with a queue of workers
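The scatter-gather shape Fabio mentions, generically (a sketch only, assuming fs2 2.x's fs2.concurrent.Queue; one None per worker acts as a poison pill so every worker terminates):

import cats.effect.{ContextShift, IO}
import fs2.Stream
import fs2.concurrent.Queue

// Scatter tasks onto a shared queue; a fixed pool of workers dequeues and
// processes them; the workers' outputs are gathered into a single stream.
def scatterGather[A, B](workers: Int)(tasks: List[A])(f: A => IO[B])(
    implicit cs: ContextShift[IO]): Stream[IO, B] =
  Stream.eval(Queue.bounded[IO, Option[A]](workers * 2)).flatMap { q =>
    val scatter =
      (Stream.emits(tasks).map(Option(_)) ++
        Stream.emits(List.fill(workers)(Option.empty[A])))
        .through(q.enqueue)
    val worker = q.dequeue.unNoneTerminate.evalMap(f)
    val gather = Stream.emits(List.fill(workers)(worker)).parJoin(workers)
    gather.concurrently(scatter)
  }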
Ivan Putera Masli
@imasli
Hi, does fs2 support detecting whether a Stream is empty?
An example use case: if the stream is empty, do some logging/warning.
Fabio Labella
@SystemFw
@imasli that is not really compatible with what a Stream is, but it kinda depends on what you mean exactly. Would you mind expanding a bit more on your actual use case?
Barry O'Neill
@barryoneill
Hi - quick question. I have a Stream[F, Either[A,B]], and I'd like to take from the stream until I have seen N Rights. E.g. Stream(Right, Left, Right, Right, Left, Right) with N of 2 would emit (Right, Left, Right). What's the easiest way of doing this?
I'm fighting with scan but not quite getting it
Gavin Bisesi
@Daenyth
Pull
Fabio Labella
@SystemFw
I think mapAccumulate is easier for this case
Gavin Bisesi
@Daenyth
oh hm yeah that makes sense
Barry O'Neill
@barryoneill
I did not see that function
will give it a try!
Barry O'Neill
@barryoneill
almost there
.mapAccumulate(0)((cnt, next) =>
  (cnt + (if (next.isRight) 1 else 0), next))
.takeWhile(_._1 <= N)
except that it emits all the adjacent Lefts that happen after N Rights have been seen
so not takeWhile
oh, takeWhile has a takeFailure
      .mapAccumulate(0)((cnt, next) =>
        (cnt + (if (next.isRight) 1 else 0), next))
      .takeWhile(_._1 < N, takeFailure = true)
appears to do the trick
Barry O'Neill
@barryoneill
thanks folks!
Fabio Labella
@SystemFw
there is takeThrough, which includes the first element that fails the predicate
Barry O'Neill
@barryoneill
yeah, I just saw that its impl is takeWhile_(p, true) - changed it out, thanks :)
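Put together, the takeThrough version might look like this (a sketch with placeholder data; N = 2 matches the example above):

import fs2.{Pure, Stream}

val N = 2
val input: Stream[Pure, Either[String, Int]] =
  Stream(Right(1), Left("a"), Right(2), Right(3), Left("b"), Right(4))

// Pair every element with the number of Rights seen so far, keep going
// through the element that brings the count to N, then drop the counter.
val out = input
  .mapAccumulate(0)((cnt, next) => (cnt + (if (next.isRight) 1 else 0), next))
  .takeThrough { case (cnt, _) => cnt < N }
  .map(_._2)

// out.toList == List(Right(1), Left("a"), Right(2))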
Daniel Capo Sobral
@dcsobral
What's the state of fs2-based Kafka libraries?
Billzabob
@Billzabob
I believe https://fd4s.github.io/fs2-kafka/ is still being actively maintained and used
Christopher Davenport
@ChristopherDavenport
As is https://github.com/Banno/kafka4s/ - which has a slightly different take.
Gabriel Volpe
@gvolpe
Christopher Davenport
@ChristopherDavenport
I think they moved it at some point to a microsite - https://banno.github.io/kafka4s/
I'll get a ticket open to fix it.
Billzabob
@Billzabob
Oh cool. What exactly is the different take between the two?
Olivier Deckers
@olivierdeckers
I observe that with queue.dequeue.evalScan(0)(...), the initial state (0) is not emitted until the first item is available on the queue. However, when I use evalScan without a queue, or when I use a regular scan on a queue, the initial state is emitted immediately. Is this expected?
Fabio Labella
@SystemFw
@olivierdeckers Would you mind opening an issue? :)
I cannot quickly make my mind up on which behaviour is more correct (though I lean towards the scan one), but the inconsistency sounds annoying regardless
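A minimal reproduction of the report might look like this (a sketch, assuming fs2 2.x; the behaviour in the comments is Olivier's observation, not verified here):

import scala.concurrent.ExecutionContext
import cats.effect.{ContextShift, IO}
import fs2.Stream
import fs2.concurrent.Queue

implicit val cs: ContextShift[IO] = IO.contextShift(ExecutionContext.global)

val repro: Stream[IO, Int] =
  Stream.eval(Queue.unbounded[IO, Int]).flatMap { q =>
    // Reportedly: the initial 0 is not emitted until the first enqueue...
    q.dequeue.evalScan(0)((acc, i) => IO.pure(acc + i))
    // ...whereas q.dequeue.scan(0)(_ + _) emits 0 immediately.
  }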
Erlend Hamnaberg
@hamnis
hey all.
isn't fs2.io.file.writeAll supposed to emit a value?
def unzip[F[_]: ContextShift: Sync](blocker: Blocker, stream: Stream[F, Byte], chunksize: Int = chunkSize) = {
  Stream.eval(blocker.delay(Files.createTempDirectory("unzippped"))).flatMap { tempDir =>
    val path = tempDir.resolve("file.zip")
    println(path)
    stream
      .through(fs2.io.file.writeAll(path, blocker))
      .flatMap { _ =>
        println(path)
        Stream.evals(blocker.delay(javaUnzip(path, tempDir, chunksize)))
      }
  }
}
this always becomes an empty stream.
what am I doing wrong?
Fabio Labella
@SystemFw
it's working as intended, even though it might be confusing
that being said, "and then" in fs2 is expressed by ++, not flatMap(_ => ...) (think about a stream that emits more than one element)
@hamnis
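e.g. something along these lines (a sketch reusing the names from the snippet above; .drain discards the write's output, and ++ runs the unzip only after the write finishes):

// inside the flatMap over tempDir:
stream
  .through(fs2.io.file.writeAll(path, blocker))
  .drain ++ Stream.evals(blocker.delay(javaUnzip(path, tempDir, chunksize)))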
Erlend Hamnaberg
@hamnis
right. thanks
a Stream[F, Nothing] would actually have helped here
Erlend Hamnaberg
@hamnis
yeah, I was trying different things here, so might do that as well
Erlend Hamnaberg
@hamnis
just to make sure I understand finalisers in streams: if they are compiled to an IO, they won't run until the IO completes? So if I do stream.compile.toList, will that run the finalisers?
Fabio Labella
@SystemFw
I think the questions are independent. Nothing will run until you compile. On the other hand there can be finalisers whose lifetime isn't the lifetime of the entire stream
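A toy illustration of both points (sketch only, assuming cats-effect IO):

import cats.effect.IO
import fs2.Stream

// Building this value runs nothing: effects and finalisers are just described.
val s: Stream[IO, Int] =
  Stream
    .bracket(IO(println("acquire")))(_ => IO(println("release")))
    .flatMap(_ => Stream.emits(List(1, 2, 3)))

// s.compile.toList returns an IO; running *that* runs the stream, and the
// bracket's finaliser runs when its scope closes during that run. A finaliser
// attached to an inner scope can fire before the whole stream finishes.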
Erlend Hamnaberg
@hamnis
ok
Arjun Dhawan
@arjun-1

Hi everyone :),

I recently started using fs2 + zio, and there is some behavior of fs2 I have difficulty understanding.

Basically I have a stream (backed by a Queue) which I am processing in parallel, using parEvalMap.
During the processing of an element, some DbError might occur which should:

  • temporarily stop all processing,
  • retry the failing element until it succeeds,
  • after which the remaining stream is processed as usual.

I thought to achieve this using the following Pipe:

val pipe: Pipe[Effect, Command, Result] = ZIO.runtime
  .map { implicit r =>
    def pipe(stream: Stream[Effect, Command]): Stream[Effect, Result] =
      stream.parEvalMap(2)(processAndLog).handleErrorWith {
        case error @ DbError(_) =>
          retryProcessing(error) ++ pipe(stream)
      }
    pipe
  }

stream.through(pipe)

with

def retryProcessing(error: DbError): Stream[Effect, Result] =
    Stream.retry[Effect, CompilationResult](
      processAndLog(Command(error.id)),
      10.seconds,
      identity,
      Int.MaxValue
    )

and

def processAndLog(command: Command): Task[Result]

I noticed, however, that while the above code does indeed retry the first failing element until it succeeds, the 'remainder' of the stream is not processed any more.
I.e. when offering elements command1, command2, command3, which would all fail at first but succeed on a retry later,
I notice that only command1 is processed, and the remainder of the commands is not.

I am convinced that the remainder of the stream should be processed as well, since when I replace parEvalMap with evalMap that is exactly what happens (in that case I can see the corresponding log messages of the remaining commands being processed).
Basically, I would expect the same continuation behaviour for parEvalMap as for evalMap.