Gavin Bisesi
@Daenyth
Gabriel Volpe
@gvolpe
I guess I'll find out about it once we deploy :D
I used Kafka at my last job, and although it's a great choice, you'll be introducing its own specific problems
it's not easy to tune and maintain; we had a third-party company managing the Kafka cluster for us
Gavin Bisesi
@Daenyth
I have at least already seen the advice about 'pre-shard your stream to some larger-than-needed power of two so you can rebalance partitions across consumers without resharding'
Talking about it, just saw this :laughing:
Gavin Bisesi
@Daenyth
heh
in this case we put it off for over a year first because of that :)
it's time
Gavin Bisesi
@Daenyth
@gvolpe are you aware of any resource-related bugfixes after 2.0.0-RC2 ?
I'm on that version (because fs2 1.x) and I have a test I'm scratching my head on
The stream from calling createAckerConsumer isn't terminating when I take from it, and I'm trying to figure out if I have an fs2 mistake or a rabbit mistake, or if it's a bug somewhere
Gabriel Volpe
@gvolpe
not that I can remember... All the changes are described in the release notes
Gavin Bisesi
@Daenyth
k lemme check that out..
yeah nothing jumps out at me
What I'm afraid of is that this is a difference in fs2's scope handling somehow between current and 1.x
Gavin Bisesi
@Daenyth
no it's something else
Gavin Bisesi
@Daenyth
oh I found it out btw
My .take wasn't on the queue envelopes, it was on the stream providing me the (acker, msg) tuple
:finnadie:
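
A minimal sketch of that mistake, using plain fs2 stand-ins rather than fs2-rabbit's real types (the Acker/Consumer aliases and the shape of createAckerConsumer below are assumptions for illustration only): taking on the outer stream that emits the (acker, consumer) pair doesn't bound the inner envelope stream, so it never terminates.

```scala
import cats.effect.IO
import fs2.Stream

object TakePlacement {
  // Illustrative stand-ins only; not fs2-rabbit's actual types.
  type Acker    = String => IO[Unit]
  type Consumer = Stream[IO, String]

  // Pretend consumer: an infinite stream of "envelopes" plus a no-op acker.
  def createAckerConsumer: Stream[IO, (Acker, Consumer)] =
    Stream.emit(((_: String) => IO.unit, Stream.iterate(0)(_ + 1).map(_.toString).covary[IO]))

  // Wrong: take(1) bounds the outer stream of (acker, consumer) pairs,
  // but the inner envelope stream is infinite, so this never terminates.
  val wrong: Stream[IO, Unit] =
    createAckerConsumer.take(1).flatMap { case (acker, consumer) =>
      consumer.evalMap(acker)
    }

  // Right: take on the envelope stream itself, so it ends after 10 messages.
  val right: Stream[IO, Unit] =
    createAckerConsumer.flatMap { case (acker, consumer) =>
      consumer.take(10).evalMap(acker)
    }
}
```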
Gavin Bisesi
@Daenyth
huh, the amqp spec has an xml definition :thought_balloon: :suspect:
makes me curious to write a code generator for the main shapes

The protocol definition conforms to a formal grammar that is
published separately in several technologies.

How nice.

What's the formal grammar called, and what places publish it? :anger:

Gavin Bisesi
@Daenyth
so uh, I made the java library throw a TimeoutException
so that's fun
do you happen to know if one channel is able to consume from one queue multiple times and have multiple qos on that?
Gavin Bisesi
@Daenyth
So more learning: qos is a per-channel or per-connection setting depending on the global flag. If you construct multiple consumers on the same channel, the last qos you set "wins"
for rabbitmq this means each consumer has its own independent prefetch buffer.
For amqp per the spec, I believe it's a single buffer on the channel shared by all consumers
The createAckerConsumer api is very misleading the way it's typed
we should consider how to appropriately change that
but if people want to use prefetching, they're best to create a channel per consumer
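
A rough sketch of that channel-per-consumer approach using the plain RabbitMQ Java client (the queue name and prefetch numbers are made up for illustration): with a channel per consumer, each basicQos stands alone instead of being overwritten by whichever one was set last on a shared channel.

```scala
import com.rabbitmq.client.{ConnectionFactory, DefaultConsumer}

object ChannelPerConsumer extends App {
  val factory = new ConnectionFactory()
  val conn    = factory.newConnection()

  // Two consumers on the same queue, each on its own channel, each with its
  // own prefetch; on one shared channel the last basicQos call would win.
  val chA = conn.createChannel()
  chA.basicQos(100, false) // global = false: per-consumer prefetch in RabbitMQ
  chA.basicConsume("some-queue", false, new DefaultConsumer(chA))

  val chB = conn.createChannel()
  chB.basicQos(1000, false)
  chB.basicConsume("some-queue", false, new DefaultConsumer(chB))
}
```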
Gavin Bisesi
@Daenyth

@gvolpe Do you know why in ConsumingProgram for createConsumer we read from the InternalQueue using Stream.repeatEval(queue.dequeue1.....) instead of using queue.dequeue?

Using dequeue1 wrecks the chunk structure and processing many small envelopes gets less efficient
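
A small sketch of the difference being described, against the fs2 2.x Queue API (the surrounding object and method are illustrative, not fs2-rabbit's internals):

```scala
import cats.effect.IO
import fs2.Stream
import fs2.concurrent.Queue

object DequeueChunks {
  def compare(queue: Queue[IO, Int]): (Stream[IO, Int], Stream[IO, Int]) = {
    // repeatEval(dequeue1) pulls one element per effect, so every downstream
    // chunk has size 1 and per-element overhead dominates.
    val oneAtATime = Stream.repeatEval(queue.dequeue1)

    // dequeue pulls whatever is available as larger chunks, preserving the
    // chunk structure for downstream operators.
    val chunked = queue.dequeue

    (oneAtATime, chunked)
  }
}
```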

Gavin Bisesi
@Daenyth
I can make a PR to that effect, I don't think it should change any behaviors
Gabriel Volpe
@gvolpe
Hey @Daenyth , not sure to be honest, that's as old as the library itself I guess
If I had a reason when I wrote that code I should have added a comment cause I don't recall now :smile:
Gavin Bisesi
@Daenyth
K, I'll PR it
Gabriel Volpe
@gvolpe
:+1:
Gavin Bisesi
@Daenyth
yeah wow I think fs2-rabbit is very slow when you have high prefetch values
if I look in my docker rabbit's console I can see "unacked = 131,072" and "ready = 0" but I'm having to use Agitation to set >4second timeouts on every single message in order to not be timing out
I wonder if this is because of the dequeue1
or if it's something I'm doing myself
Gavin Bisesi
@Daenyth
Running a single test case, in 464,085ms according to logback, I dequeued 47 envelopes. That's not right, we definitely go faster in prod. Something I'm doing is being very slow, maybe it's the Agitation?
lemme try throwing in a groupWithin
maybe bounded by internalQueueSize?
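
For reference, a minimal sketch of that groupWithin idea (the batch size, window, and `process` handler are made up; this is not the code under discussion):

```scala
import scala.concurrent.duration._
import cats.effect.{Concurrent, Timer}
import fs2.{Chunk, Stream}

object Batching {
  // Re-chunk an envelope stream into batches of up to `maxBatch`, flushing at
  // least every `window`, then handle each batch in a single effect.
  def batched[F[_]: Concurrent: Timer, A](
      envelopes: Stream[F, A],
      process: Chunk[A] => F[Unit],
      maxBatch: Int = 512,
      window: FiniteDuration = 100.millis
  ): Stream[F, Unit] =
    envelopes.groupWithin(maxBatch, window).evalMap(process)
}
```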
Gavin Bisesi
@Daenyth
Thanks for the quick merge btw. GitHub Actions looks neat. Is there any way to have it also show the html-ized test report?
Gabriel Volpe
@gvolpe
No worries, is there such a thing?
Gavin Bisesi
@Daenyth
sort of. There's the standard junit-style xml that you can have scalatest emit, and many things allow you to navigate through it interactively
even pytest in python emits it, so I wouldn't be surprised if GH can read it
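
One common way to get that, if it helps (an sbt/ScalaTest sketch, not something confirmed in the conversation): ScalaTest's `-u` runner argument writes JUnit-style XML that CI report tooling can then pick up.

```scala
// build.sbt
Test / testOptions += Tests.Argument(
  TestFrameworks.ScalaTest,
  "-u", "target/test-reports" // write JUnit-style XML reports into this directory
)
```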