Barry O'Neill
@barryoneill
I'm fighting with scan but not quite getting it
Gavin Bisesi
@Daenyth
Pull
Fabio Labella
@SystemFw
I think mapAccumulate is easier for this case
Gavin Bisesi
@Daenyth
oh hm yeah that makes sense
Barry O'Neill
@barryoneill
I did not see that function
will give it a try!
Barry O'Neill
@barryoneill
almost there

```scala
.mapAccumulate(0)((cnt, next) =>
  (cnt + (if (next.isRight) 1 else 0), next))
.takeWhile(_._1 <= N)
```

except that emits all the adjacent Lefts that happen after N rights have been seen
so not takeWhile
oh, takeWhile has a takeFailure

```scala
.mapAccumulate(0)((cnt, next) =>
  (cnt + (if (next.isRight) 1 else 0), next))
.takeWhile(_._1 < N, takeFailure = true)
```

appears to do the trick
Barry O'Neill
@barryoneill
thanks folks!
Fabio Labella
@SystemFw
there is takeThrough that includes the first element that failed
Barry O'Neill
@barryoneill
yeah, I just saw that its impl is takeWhile_(p, true) - changed it out, thanks :)
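The pattern discussed above can be sketched with plain Scala collections (no fs2): `scanLeft` plays the role of `mapAccumulate`, and `span` plus `take(1)` mimics `takeThrough`'s behaviour of keeping the first element that fails the predicate. The sample data and `N` are made up for illustration.

```scala
// Plain-collections analog (no fs2) of the mapAccumulate + takeThrough trick:
// count Right values and keep elements through (and including) the Nth Right.
val xs: List[Either[String, Int]] =
  List(Right(1), Left("a"), Right(2), Left("b"), Right(3), Right(4))
val N = 2

// scanLeft plays the role of mapAccumulate: pair each element with a running count
val counted = xs
  .scanLeft((0, Option.empty[Either[String, Int]])) { case ((cnt, _), next) =>
    (cnt + (if (next.isRight) 1 else 0), Some(next))
  }
  .collect { case (cnt, Some(e)) => (cnt, e) }

// takeThrough analog: keep while count < N, plus the first element failing the predicate
val (pre, rest) = counted.span(_._1 < N)
val taken = (pre ++ rest.take(1)).map(_._2)
// taken == List(Right(1), Left("a"), Right(2))
```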
Daniel Capo Sobral
@dcsobral
What's the state of fs2-based Kafka libraries?
Billzabob
@Billzabob
I believe https://fd4s.github.io/fs2-kafka/ is still being actively maintained and used
Christopher Davenport
@ChristopherDavenport
As is https://github.com/Banno/kafka4s/ - Which has a slightly different take.
Gabriel Volpe
@gvolpe
Christopher Davenport
@ChristopherDavenport
I think they moved it at some point to a microsite - https://banno.github.io/kafka4s/
I'll get a ticket open to fix it.
Billzabob
@Billzabob
Oh cool. What exactly is the different take between the two?
Olivier Deckers
@olivierdeckers
I observe that with queue.dequeue.evalScan(0)(...), the initial state (0) is not emitted until the first item is available on the queue. However, when I use evalScan without a queue, or when I use a regular scan on a queue, the initial state is emitted immediately. Is this expected?
Fabio Labella
@SystemFw
@olivierdeckers Would you mind opening an issue? :)
I cannot quickly make my mind up on which behaviour is more correct (though I lean towards the scan one), but the inconsistency sounds annoying regardless
Erlend Hamnaberg
@hamnis
hey all.
isn't fs2.io.file.writeAll supposed to emit a value?
```scala
def unzip[F[_]: ContextShift: Sync](blocker: Blocker, stream: Stream[F, Byte], chunksize: Int = chunkSize) = {
  Stream.eval(blocker.delay(Files.createTempDirectory("unzipped"))).flatMap { tempDir =>
    val path = tempDir.resolve("file.zip")
    println(path)
    stream
      .through(fs2.io.file.writeAll(path, blocker))
      .flatMap { _ =>
        println(path)
        Stream.evals(blocker.delay(javaUnzip(path, tempDir, chunksize)))
      }
  }
}
```
this always becomes an empty stream.
what am I doing wrong?
Fabio Labella
@SystemFw
it's working as intended, even though it might be confusing
that being said, "and then" in fs2 is expressed by ++, not flatMap(_ => ...) (think about a stream that emits more than one element)
@hamnis
Erlend Hamnaberg
@hamnis
right. thanks
the Stream[F, Nothing] would actually have helped here
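The difference can be sketched with plain lists: `flatMap` runs its continuation once per emitted element, so a stream that emits nothing never runs it, while `++` always appends. This is purely illustrative, with no fs2 involved.

```scala
// flatMap runs its continuation once per emitted element, so a stream
// (here modelled as a list) that emits nothing never runs it; ++ always appends.
val wroteAll: List[String] = Nil            // analog of writeAll's output stream
val next: List[String] = List("unzipped")   // analog of the follow-up stream

val viaFlatMap = wroteAll.flatMap(_ => next) // continuation never runs
val viaAppend  = wroteAll ++ next            // "and then": always runs
```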
Erlend Hamnaberg
@hamnis
yeah, I was trying different things here, so might do that as well
Erlend Hamnaberg
@hamnis
just to make sure I understand finalisers in streams: if they are compiled to an IO, they won't run until the IO completes? So if I do stream.compile.toList, will that run the finalizers?
Fabio Labella
@SystemFw
I think the questions are independent. Nothing will run until you compile. On the other hand there can be finalisers whose lifetime isn't the lifetime of the entire stream
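The scoping point can be illustrated outside fs2 with `scala.util.Using`: a finaliser runs when its own scope ends, which can be well before the whole computation finishes, analogous to scoped resources inside a stream. `Res` and the log are made up for illustration; nothing here is fs2 API.

```scala
import scala.util.Using

// A finaliser runs when its own scope ends, which can be before the
// whole computation finishes.
val log = scala.collection.mutable.Buffer[String]()

final case class Res(name: String) extends AutoCloseable {
  def close(): Unit = log += s"closed $name"
}

Using.resource(Res("outer")) { _ =>
  Using.resource(Res("inner")) { _ =>
    log += "using inner"
  }
  log += "after inner" // inner is already closed here, outer is still open
}
// log: using inner, closed inner, after inner, closed outer
```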
Erlend Hamnaberg
@hamnis
ok
Arjun Dhawan
@arjun-1

Hi everyone :),

I recently started using fs2 + zio, and there is some behavior of fs2 I have difficulty understanding.

Basically I have a stream (backed by a Queue) which I am processing in parallel, using parEvalMap.
During the processing of an element, some DbError might occur which should:

• temporarily stop all processing,
• retry the failing element until it succeeds,
• after which the remaining stream is processed as usual.

I thought to achieve this using the following Pipe:

```scala
val pipe: Pipe[Effect, Command, Result] = ZIO.runtime
  .map { implicit r =>
    def pipe(stream: Stream[Effect, Command]): Stream[Effect, Result] =
      stream.parEvalMap(2)(processAndLog).handleErrorWith {
        case error @ DbError(_) =>
          retryProcessing(error) ++ pipe(stream)
      }
    pipe
  }

stream.through(pipe)
```

with

```scala
def retryProcessing(error: DbError): Stream[Effect, Result] =
  Stream.retry[Effect, Result](
    processAndLog(Command(error.id)),
    10.seconds,
    identity,
    Int.MaxValue
  )
```

and

```scala
def processAndLog(command: Command): Task[Result]
```

I noticed however, that while the above code does indeed retry the first failing element until it succeeds, the 'remainder' of the stream is not processed any more.
I.e. when offering elements command1, command2, command3 which would all fail at first but succeed with retry later,
I notice that only command1 is processed, and the remainder of the commands is not.

I am convinced that the remainder of the stream should process as well, since when I replace parEvalMap with evalMap that is what actually happens (in that case I can see the corresponding log messages of the remaining commands being printed).
Basically, I would expect the same continuation behavior for parEvalMap, as is the case for evalMap.

pool
@hamstakilla
Hello, can I somehow clone a stream so I can have multiple consumers see all the values emitted?
@hamstakilla Look at broadcast (and its variants)
pool
@hamstakilla
Thanks
Fabio Labella
@SystemFw
@arjun-1 the stream stops with both parEvalMap and evalMap, which is the only possible behaviour since it's a monad. If you want to somehow skip, you need to handleErrorWith at the F level (so processAndLog)
I can give more info as to why that behaviour is the only possible one :)
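The suggestion of handling the error inside the per-element action, rather than on the whole stream, can be sketched with plain `Try` (illustrative names only; the real code would attach the recovery/retry logic to `processAndLog` itself):

```scala
import scala.util.{Failure, Success, Try}

// One failing element, recovered at the "F level" (per action),
// so later elements are still processed.
def process(n: Int): Try[Int] =
  if (n == 2) Failure(new RuntimeException("DbError")) else Success(n * 10)

// recovery attached to each action, not to the stream as a whole
def processRecovering(n: Int): Try[Int] =
  process(n).recoverWith { case _ => Success(-1) } // retry/fallback would go here

val results = List(1, 2, 3).map(processRecovering)
// all three elements processed; the failing one was recovered in place
```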
Arjun Dhawan
@arjun-1

@SystemFw thanks for the suggestion I'll try it!

Although this probably means I have some fundamentally flawed view of how streams 'work' :)
My initial thought was that while a stream might fail due to an error during processing (and further computation halts),
it would be possible to 'continue' the stream where it left off, using .handleErrorWith, much like we can continue computation using .handleErrorWith on, for example, the IO monad.

But if I understand correctly, the failure of a stream should somehow be interpreted as a 'hard failure', in the sense that any continuation from it cannot happen?

Fabio Labella
@SystemFw
@arjun-1 so, this is a common question, let me expand
in the IO monad, you cannot "continue" the computation (it depends on how you interpret continue)