Gavin Bisesi
@Daenyth
to hopefully help improve any fairness issues
Ross A. Baker
@rossabaker
Maybe a semaphore in front could limit the requests being made to a client such that they spend less time queued up in the client.
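The idea here, in cats-effect terms, would be something like `Semaphore.withPermit` around each client call. Purely for intuition, here is a dependency-free sketch of the same shape using `java.util.concurrent.Semaphore` and stdlib `Future`s; `send`, the permit count, and the counters are all hypothetical stand-ins, not http4s API.

```scala
import java.util.concurrent.Semaphore
import java.util.concurrent.atomic.AtomicInteger
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._

object SemaphoreFrontSketch extends App {
  implicit val ec: ExecutionContext = ExecutionContext.global

  val permits  = new Semaphore(4) // at most 4 requests handed to the client at once
  val inFlight = new AtomicInteger(0)
  val maxSeen  = new AtomicInteger(0)

  // Hypothetical stand-in for a client call; sleeps to simulate latency.
  def send(i: Int): Future[Int] = Future {
    permits.acquire()
    try {
      val n = inFlight.incrementAndGet()
      maxSeen.accumulateAndGet(n, (a, b) => math.max(a, b))
      Thread.sleep(10)
      inFlight.decrementAndGet()
      i
    } finally permits.release()
  }

  val sum = Await.result(Future.traverse(1 to 20)(send), 30.seconds).sum
  println(sum)               // 210: all 20 requests completed
  println(maxSeen.get <= 4)  // true: never more than 4 in flight at once
}
```

The point is that requests wait at the semaphore instead of piling up in the client's own wait queue.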
Christopher Davenport
@ChristopherDavenport
I like Ross’s; that's nice.
Niklas Klein
@Taig
It's already backed by such a thread pool. I'm not entirely sure how shifting may help though? The app itself is already highly concurrent.
Gavin Bisesi
@Daenyth
then it probably wouldn't
What are you doing with the responses? My first guess would be that you're spending cpu pool time handling responses and that's why things are queueing up
Niklas Klein
@Taig
@rossabaker I like that. Going down that route (:
Ross A. Baker
@rossabaker
If requests are sitting in the wait queue on the blaze client pool, I think my idea might help. If they're not, I think it won't.
Instead of semaphores, making a Stream of requests and using its concurrency controls may be worth exploring as well.
While people are awake: we have a couple of open PRs that it would be nice to land so we can get a 0.20.2 out.
Gavin Bisesi
@Daenyth
^ I'm a big fan of that approach, it makes it quite easy to handle rate limiting requests, retries, etc
Niklas Klein
@Taig
@Daenyth That's an interesting catch. I wasn't aware that might be blocking the queue. I'm basically just parsing JSON, but I might wanna step into the flamegraphs to find out more.
Gavin Bisesi
@Daenyth
oof that reminds me, I need to get us up from 0.20.0-M5
@Taig json parsing is heavily cpu bound IME
Christopher Davenport
@ChristopherDavenport
I’ve been on vacation; my notification list is 29 http4s issues, many of them PRs. What is most pressing?
Gavin Bisesi
@Daenyth
I'd definitely do something like `fs2.Stream.emits(requests).mapAsyncUnordered(cpuCount)(req => send(req).flatMap(handleResp))` or something
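The fs2 combinator above runs at most `cpuCount` effects concurrently. For intuition only, here is a dependency-free analogue that bounds parallelism with a fixed thread pool instead of a stream combinator; `send` and `handleResp` are hypothetical stand-ins for the real client call and response handler.

```scala
import java.util.concurrent.Executors
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._

object BoundedParallelSketch extends App {
  // Bound parallelism by the pool size, as mapAsyncUnordered bounds it by cpuCount.
  val cpuCount = Runtime.getRuntime.availableProcessors()
  val pool     = Executors.newFixedThreadPool(cpuCount)
  implicit val ec: ExecutionContext = ExecutionContext.fromExecutor(pool)

  // Hypothetical stand-ins for send/handleResp from the snippet above.
  def send(req: Int): Future[String]      = Future(s"resp-$req")
  def handleResp(resp: String): Future[Int] = Future(resp.drop(5).toInt)

  val results = Await.result(
    Future.traverse(1 to 10)(req => send(req).flatMap(handleResp)),
    30.seconds
  )
  println(results.sum) // 55: every response handled, never more than cpuCount at once
  pool.shutdown()
}
```

Unlike this sketch, the fs2 version also gives you backpressure, retries, and rate limiting for free via other stream combinators, which is the appeal mentioned above.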
Ross A. Baker
@rossabaker
I just milestoned the 0.20.2 ones
Christopher Davenport
@ChristopherDavenport
Where does #2604 invoke getDefault? Are we worried this will bite us?
I.e., behavior will change depending on whether the default has an SSL context or no SSL context.
If we were deferring creation, I’d expect to see where we call that later with the delayed call.
Ross A. Baker
@rossabaker
I don't really like it. If you're on a system where getDefault throws, it just defers the explosion until the first https call. But the reason it's there is that it gives people who know that getDefault throws a chance to override it. The crucial point is that getDefault not appear as a default argument. That's why we've gotten that bug filed ... twice.
Christopher Davenport
@ChristopherDavenport
Alright. Let's go with it, but let's make it big in the release notes.
We
Ross A. Baker
@rossabaker
I would like to do better there, but it's the best I could figure out to do without breaking bincompat.
Christopher Davenport
@ChristopherDavenport
*We’ll still get the issue
I believe that unblocks 0.20.2
Ross A. Baker
@rossabaker
There's absolutely no way to create an SSLContext that's guaranteed to work, which suggests maybe it should be an Option.
Christopher Davenport
@ChristopherDavenport
Yeah, I think it will still blow up though.
So…
Ross A. Baker
@rossabaker
If it were optional, it would fail with a message of our choosing if someone attempts to make an https call and doesn't have it configured.
And it still needs to be lazy, because `Some(SSLContext.getDefault)` is the right choice for most people.
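The shape being discussed can be sketched like this: an `Option[SSLContext]` whose default is built lazily from `SSLContext.getDefault`, so a platform where `getDefault` throws only explodes on the first https call, and users who know it throws can opt out with `None`. Everything here (`ClientSslConfig`, `connect`) is hypothetical, not the actual http4s API.

```scala
import javax.net.ssl.SSLContext
import scala.util.Try

// Hypothetical config: the default is a thunk, so SSLContext.getDefault
// is not evaluated until the first https call forces the lazy val.
final class ClientSslConfig(
    mk: () => Option[SSLContext] = () => Some(SSLContext.getDefault)
) {
  lazy val sslContext: Option[SSLContext] = mk()

  def connect(https: Boolean): String =
    if (!https) "http ok" // plain http never touches getDefault
    else sslContext match {
      case Some(_) => "https ok"
      case None    => sys.error("No SSLContext configured; cannot make https calls")
    }
}

object SslSketch extends App {
  val default = new ClientSslConfig()
  println(default.connect(https = false)) // http ok; getDefault never evaluated

  val optedOut = new ClientSslConfig(() => None)
  println(optedOut.connect(https = false))                 // http ok
  println(Try(optedOut.connect(https = true)).isFailure)   // true: fails with our message
}
```

Failing with a message of our choosing, rather than wherever `getDefault` happens to throw, is the advantage of the `Option` over the raw lazy context.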
Christopher Davenport
@ChristopherDavenport
lazy should not be our tool to defer.
But that's for a binary-breaking change.
Ross A. Baker
@rossabaker
Right. I started speculating on either the original or on the PR what we could do when we do break binary.
Christopher Davenport
@ChristopherDavenport
Http1Support should just take SSLContext
Then we need to explicitly deal with it somewhere after server creation.
Ross A. Baker
@rossabaker
Later is another way of making it lazy to defer.
SyncIO makes it lazy and expresses Dragons Be Here.
F[SSLContext] could have a reasonable default.
I don't like the last two because I tend not to like passing effects as parameters.
Christopher Davenport
@ChristopherDavenport
Except we can’t access that.
Our implicits being in last position is really a PITA
I really want (implicit XYZ)(argumentList)
Ross A. Baker
@rossabaker
Yeah.
If we passed Http1Support an F[SSLContext], we've got a ConcurrentEffect[F] (which I'm not proud of), and then could run it in the constructor.
Keeping it lazy in Http1Support is still important though.
We couldn't just strictly run that effect in Http1Support, or we'd break people who can't get an SSLContext and don't want one.
Christopher Davenport
@ChristopherDavenport
Really, how do you opt out?
As that seems like a value that could be evaluated and not properly separated, if we are concerned about that.