Nick Robison

Hi folks, I had a quick question regarding the streaming functionality of the Zio backend:

I have the following code that I'm attempting to use to return the http response stream to the caller:

          .method(Method(proxyRequest.method.value), uri)
          .headers(proxyRequest.headers.map(h => Header(h.name(), h.value())): _*)

The response type is Request[Either[String, BinaryStream]]; however, when I attempt to run the code, I get the following error:

class zio.stream.ZStream$$anon$1 cannot be cast to class scala.util.Either

The caller looks like this: client.send(req).map(_.body)

I'm sure there are a couple of things that I'm doing wrong, but I'm stumped as to why the types don't seem to line up with the actual implementation.

1 reply
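The ClassCastException above usually means the response description declared on the request does not match the body the backend actually produced, so the erased cast fails where the body is read. A toy sketch of the mechanism (purely illustrative, not sttp's internals):

```scala
// Toy model: the backend stores the body untyped, and the request's declared
// type is only applied via an erased cast when the body is read.
final case class Response[T](bodyAsAny: Any) {
  def body: T = bodyAsAny.asInstanceOf[T] // a no-op at runtime due to erasure
}

sealed trait BinaryStream
case object SomeStream extends BinaryStream

// The backend produced a raw stream, but the request declared Either[String, BinaryStream]:
val resp = Response[Either[String, BinaryStream]](SomeStream)

val failed =
  try { val b: Either[String, BinaryStream] = resp.body; false } // checkcast happens here
  catch { case _: ClassCastException => true }                   // "cannot be cast to scala.util.Either"
assert(failed)
```

The usual fix is to make the request's response-as description produce exactly the body type the backend hands back, rather than casting afterwards.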
Ujjal Satpathy
Hi folks, I'm trying to create unit tests for an API client application that uses HttpURLConnectionBackend, but I'm facing an issue using the SttpBackendStub.synchronous stub. I need some urgent help on this.
1 reply
Any immediate help would be really appreciated
Hello :wave: I got hit by io.netty.handler.codec.http.websocketx.CorruptedWebSocketFrameException: Max frame length of 10240 has been exceeded. using sttp 2.2.10. Is there any way to configure the WS max frame length? (ZIO backend, if that matters.)
1 reply
Felix Bjært Hargreaves
12:36:52.270 [AsyncHttpClient-5-2] WARN  i.n.channel.DefaultChannelPipeline - An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.channel.ChannelPipelineException: org.asynchttpclient.netty.handler.StreamedResponsePublisher.handlerRemoved() has thrown an exception.
    at io.netty.channel.DefaultChannelPipeline.callHandlerRemoved0(DefaultChannelPipeline.java:640) ~[netty-transport-4.1.60.Final.jar:4.1.60.Final]
    at io.netty.channel.DefaultChannelPipeline.remove(DefaultChannelPipeline.java:477) ~[netty-transport-4.1.60.Final.jar:4.1.60.Final]
    at io.netty.channel.DefaultChannelPipeline.remove(DefaultChannelPipeline.java:417) ~[netty-transport-4.1.60.Final.jar:4.1.60.Final]
    at org.asynchttpclient.netty.handler.AsyncHttpClientHandler.channelRead(AsyncHttpClientHandler.java:94) ~[async-http-client-2.12.3.jar:na]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[netty-transport-4.1.60.Final.jar:4.1.60.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[netty-transport-4.1.60.Final.jar:4.1.60.Final]
Caused by: java.lang.IllegalArgumentException: capacity < 0: (-480651692 < 0)
    at java.base/java.nio.Buffer.createCapacityException(Buffer.java:278) ~[na:na]
    at java.base/java.nio.ByteBuffer.allocate(ByteBuffer.java:360) ~[na:na]
    at sttp.client3.asynchttpclient.SimpleSubscriber.onComplete(reactive.scala:65) ~[async-http-client-backend_3-3.3.14.jar:3.3.14]
    at sttp.client3.asynchttpclient.AsyncHttpClientBackend$$anon$2.onComplete(AsyncHttpClientBackend.scala:108) ~[async-http-client-backend_3-3.3.14.jar:3.3.14]
8 replies
I get this when downloading large files, 3.6 GB in this case.
Evgenii Kuzmichev

Hello! I'm using sttp client with AsyncHttpClientZioBackend. Version 2.2.3.
The endpoint server I connect to responds with a huge JSON body (90 MB), but the response time is quite fast (a few seconds).
I read the response with the circe deserializer via .response(asJson[Option[MyDto]]). All the deserializers are correct (checked in unit tests).
I also have .readTimeout(readTimeout) (10 min) set on my sttp request (definitely longer than the server's response time).

The problem is that I get sttp.client.DeserializationError: exhausted input because not all the bytes have been read:
the JSON is cropped at the end, with no valid closing, e.g. {"a": "foo", "b": but at the scale of the huge JSON.

While debugging I found that the body is read via a ZIO stream instantiated from a reactive streams publisher (via the zio-interop-reactivestreams lib):

override protected def publisherToBytes(p: Publisher[ByteBuffer]): Task[Array[Byte]] =

As I understand it, when the requestTimeout expires after reading has started, the stream is closed and only the bytes read so far are kept.
So in the general case the body can be incomplete, not fully read.

The buffer size above is 16 (private val bufferSize = 16) and I haven't found any way to increase it.
When I copy-paste the code of AsyncHttpClientZioBackend into my local code base, into the same package as the original, and set a larger buffer size (1024, 2048), the exhausted input error disappears: the client manages to read the full JSON body in time and deserialize it correctly.

Could someone help me with this problem?
Can the buffer size be set via config or something else?
Is my way of digging correct? Maybe there is a better way to solve it?

1 reply
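The truncation described above is what you get when the stream is closed before every chunk has arrived: the accumulated bytes form only a prefix of the JSON. A minimal sketch of the accumulate-then-copy step (names are illustrative, not sttp's actual publisherToBytes):

```scala
import java.nio.ByteBuffer

// Illustrative sketch only -- not sttp's real implementation. The real backend
// subscribes to the publisher and accumulates chunks; if the stream is closed
// early, only a prefix of the chunks is ever collected.
def chunksToBytes(chunks: Seq[ByteBuffer]): Array[Byte] = {
  val total = chunks.map(_.remaining()).sum // size the target buffer exactly
  val out = ByteBuffer.allocate(total)
  chunks.foreach(c => out.put(c.duplicate())) // duplicate() keeps the source position intact
  out.array()
}

val all = Seq(ByteBuffer.wrap("{\"a\": \"foo\"".getBytes), ByteBuffer.wrap(", \"b\": 1}".getBytes))
assert(new String(chunksToBytes(all)) == "{\"a\": \"foo\", \"b\": 1}")
// Dropping the last chunk models an early-closed stream: a truncated JSON prefix
// that circe then rejects with "exhausted input".
assert(new String(chunksToBytes(all.init)) == "{\"a\": \"foo\"")
```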
Arman Bilge
Hi, would someone here mind approving the workflow on my PR? :) thanks in advance!
Dmytro Nikitin
Hey guys, I'm trying to use sttp for a websocket connection on the Scala.js side and can't figure out how I can do it with FetchBackends.
I just need to open a websocket connection and update the state on each received message. I'd be very happy to see some kind of guide, or just advice on where to start.
5 replies
Roman Leshchenko

Hello guys. I'm struggling with optional params in my service.
I'm specifying the route like this:


And providing such a codec:

implicit val filterCodec: PlainCodec[Option[Filter]]       =
    Codec.string.mapDecode { s =>
      val l = parse(s).extract[Map[String, String]]
      Value(Option(Filter(l.head._1, l.head._2)))
    } { m =>

But when I query my server with no "filter" param, I get an error: Invalid value for: query parameter filter
Any suggestions on this?

You wouldn't make a PlainCodec[Option[Filter]], but PlainCodec[Filter].
At least that would make more sense, I think? Doesn't tapir handle the Option-part itself?
I wonder if you do define PlainCodec[Option[Filter]] you might override the built-in mechanism for handling optionals. As I assume there exists a PlainCodec[Option[A]] for any A where PlainCodec[A] exists. Which means you should define a PlainCodec[Filter] and then it plays nicely.
Roman Leshchenko
@heksenlied:matrix.org, yep, you were right. Once I removed Option from the codec, it started to work.
Thank you very much!
Adam Warski
@rleshchenko for the record - yes, there is built-in handling of optionals; it didn't work as your PlainCodec[Option[Filter]] defined a codec between String <-> Option[Filter]. However, query requires a codec between List[String] <-> Option[Filter] - as query parameters can be repeated. Tapir knows how to convert a String <-> T codec into List[String] <-> Option[T], but doesn't have built in conversions for String <-> Option[T]
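The conversion Adam describes can be sketched in plain Scala (a simplified model, not tapir's actual Codec API): lifting a String => T decoder into a List[String] => Option[T] decoder, where an absent parameter decodes to None instead of failing:

```scala
// Simplified model of tapir's built-in optional handling (illustrative only):
// zero values decode to None, one value runs the inner decoder, more than one
// value is an error for a single-value parameter.
sealed trait DecodeResult[+T]
case class Value[T](v: T)     extends DecodeResult[T]
case class Error(msg: String) extends DecodeResult[Nothing]

def liftToOptional[T](decode: String => DecodeResult[T]): List[String] => DecodeResult[Option[T]] = {
  case Nil      => Value(None) // a missing query param is fine -- no decoder is invoked
  case v :: Nil => decode(v) match {
    case Value(t) => Value(Some(t))
    case e: Error => e
  }
  case _        => Error("multiple values for single-value parameter")
}

val decodeInt: String => DecodeResult[Int] =
  s => s.toIntOption.fold[DecodeResult[Int]](Error(s"not an int: $s"))(Value(_))

val dec = liftToOptional(decodeInt)
assert(dec(Nil) == Value(None))            // absent param -> None, no error
assert(dec(List("42")) == Value(Some(42)))
```

This is why defining a String <-> Option[T] codec directly short-circuits the built-in mechanism: the absent-parameter case never reaches your decoder at all.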
Yuval Itzchakov
Hi, is there any way to get the AsyncHttpClientZioBackend to run on the ZIO Blocking threadpool? I see the definition is: Task[SttpBackend[Task, ZioStreams with WebSockets]] although I do see in the docs it used to be a BlockingTask
3 replies
Is there a way to guarantee a strict order of requests?
Short of going back to a sync backend (I'm currently using cats.effects/Armeria) or introducing some sort of artificial delay between calls, I can't think of any way to achieve this.
Thanks in advance for any feedback.
5 replies
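For strict ordering, the usual approach is to chain the effects rather than start them concurrently, with no artificial delays needed. A hedged sketch with plain Scala Futures (not tied to any particular sttp backend): each "request" starts only after the previous one has completed:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

// Chaining with flatMap via foldLeft guarantees sequential execution:
// send(a) is not invoked until the accumulated future has completed.
def inOrder[A, B](inputs: List[A])(send: A => Future[B]): Future[List[B]] =
  inputs.foldLeft(Future.successful(List.empty[B])) { (acc, a) =>
    acc.flatMap(results => send(a).map(results :+ _))
  }

val log  = scala.collection.mutable.ListBuffer.empty[Int]
val done = inOrder(List(1, 2, 3)) { i => Future { log += i; i * 10 } }
assert(Await.result(done, 5.seconds) == List(10, 20, 30))
assert(log.toList == List(1, 2, 3)) // strictly sequential side effects
```

The same shape works in cats-effect (traverse over a list with a sequential `flatMap` chain, e.g. `ids.traverse(send)` with `IO`), since monadic sequencing is exactly "do not start the next until the previous finished".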
Andrii Ilin
Hey guys!
The examples in the latest documentation don't seem to be working.
Example code: is blank
1 reply
Kopaniev Vladyslav

Hey team, sttp 3.3.15 introduced a binary incompatibility with 3.3.14, is that a known issue? Particularly it was this change:

I spotted this because my application gave me a "Exception in thread "main" java.lang.NoSuchMethodError: sttp.client3.FollowRedirectsBackend.<init>(Lsttp/client3/SttpBackend;Lscala/collection/immutable/Set;Lscala/collection/immutable/Set;)V"

(That happened after I updated one of my dependencies which transitively pulled 3.3.16 and my current version of sttp which was 3.3.14 was evicted)

1 reply
Kopaniev Vladyslav
^^ it's strange that this happened, because the mima plugin was enabled after 3.3.14, and if you check out tag 3.3.15 and try to run "mimaReportBinaryIssues" it will error
Found possible hint:
The corresponding CI build was not checking bincompat because "mimaPreviousArtifacts" was empty for some reason
Kopaniev Vladyslav
It looks like even for releases mima won't check "core" module bincompat issues:
9 replies
Eric Meisel

Hi there. I'm having a hard time upgrading from v2 to v3 due to the requirement that send ties R to an Effect[F]

I had a class with the following definition:

class TracedLoggingSttpBackend[F[_], G[_]: Sync: Trace, +P](
    val delegate: SttpBackend[F, P],
    liftK: F ~> G
  ) extends SttpBackend[G, P]

And I was able to pass the Request from the backend to its delegate and simply run liftK on the result of it.

Now, because Request is tied to the SttpBackend's first type param (F or G), the requests become incompatible.

This was the setup we were using to tie in Natchez's kleisli usage through Sttp's backends.

4 replies

From what I can see, I need to be able to convert between the two, like so:

def convertRequest[T, RF >: P with Effect[F], RG >: P with Effect[G]](
      request: Request[T, RG]
    ): Request[T, RF]

But I have no idea how I could accomplish that

Frej Soya
Simple problem here. Adding
libraryDependencies += "com.softwaremill.sttp.client3" %% "core" % "3.3.18" does not seem to update dependencies or add sttp to the classpath. Likely a simple problem of me not having used Scala in 5+ years :)
1 reply
Batiste Dekimpe

Hi there, I'm running into issues using AsyncHttpClientZioBackend. I'm initialising it like this:

lazy val httpClientLayer: ULayer[Has[SttpBackend[Task, ZioStreams with capabilities.WebSockets]]] = {

and giving it to my class using ZLayer. Unfortunately, some requests give me the error IllegalStateException: Closed. I'm wondering if anyone has had issues with this before. Thanks a lot in advance

1 reply
Piotr Buszka

I have a strange low-level exception ('IndexOutOfBoundsException') in 'backend.send(req)' which appears only on 1 out of ~100 endpoints. I'm clutching at straws with this bug, and my hypothesis is that moving from sttp 3.3.4 to 3.3.18 and the upgrade of scalajs-dom to 2.0.0 is the differential "cause".

  • sttp 3.3.4 compiled and linked with scalajs-dom 1.2 and there was no error.
  • With sttp 3.3.4 and scalajs-dom 2.0.0 it compiles but fails at linking. With 3.3.18 and 2.0.0 it compiles and links, but systematically fails at runtime on 1 particular endpoint.

I'm using FetchMonixBackend and I see in the source code that it still uses 'import org.scalajs.dom.experimental...' which is no longer available in scalajs-dom 2.0.0

Is sttp compatible with scalajs-dom 2.0.0 ?
Any clues as to what else I can check?

2 replies
Andrius Bentkus
Hi guys, how do I force sttp Uri to encode the query fragment with %?
1 reply
Tim Pigden
Hi, I'm using sttp3 from ZIO. I'm trying to track down a bug (in my code) whereby the client thinks it's sending a POST but nothing is happening as far as the gateway logs are concerned. One of the things I've realised is that 2 different areas of the code are using their own backend instances: AsyncHttpClientZioBackend.layer() in one case and AsyncHttpClientZioBackend.usingConfig(config)
in another. Is this likely to cause a problem?
4 replies
Alex Myodov
Hello. Using sttp.client3 with AkkaHttpBackend. I see that internally, AkkaHttpBackend uses “classic” Akka model (ActorSystem, etc). So when doing AkkaHttpBackend.usingActorSystem(actorSystem), actorSystem must be a classic actor system, not a typed one. Any hints how I can take my existing typed actor system and use it in AkkaHttpBackend.usingActorSystem?
2 replies
Andrius Bentkus
Is it possible to leave the encoding as-is with sttp's Uri for allowed chars? For example, if I have https://www.test.com/A%27B, it will convert it to https://www.test.com/A'B, but I want to preserve the %27
4 replies
Stan Sobolev
Hi guys. Is it possible to convert sttp.monad.MonadError[sttp.client3.Identity] to cats.MonadError[sttp.client3.Identity, Throwable] w/o using effect.IO?
7 replies
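For context on why such a conversion is possible at all: with Identity[A] = A, every MonadError operation can be written with plain function application and try/catch. A simplified sketch using a minimal cats-like trait (not cats' actual API; note that cats' real handleErrorWith takes a strict F[A] argument, which is the usual wrinkle with a strict Identity):

```scala
// Minimal cats-like trait, for illustration only.
type Identity[A] = A

trait MonadThrow[F[_]] {
  def pure[A](a: A): F[A]
  def flatMap[A, B](fa: F[A])(f: A => F[B]): F[B]
  def raiseError[A](e: Throwable): F[A]
  def handleErrorWith[A](fa: => F[A])(h: Throwable => F[A]): F[A] // by-name, unlike cats
}

val identityMonadThrow: MonadThrow[Identity] = new MonadThrow[Identity] {
  def pure[A](a: A): A = a                       // pure is the identity function
  def flatMap[A, B](fa: A)(f: A => B): B = f(fa) // flatMap is plain application
  def raiseError[A](e: Throwable): A = throw e   // errors become thrown exceptions
  def handleErrorWith[A](fa: => A)(h: Throwable => A): A =
    try fa catch { case e: Throwable => h(e) }
}

assert(identityMonadThrow.flatMap(2)(_ + 1) == 3)
assert(identityMonadThrow.handleErrorWith[Int](throw new RuntimeException("boom"))(_ => -1) == -1)
```

The by-name parameter is what makes error handling meaningful here; with cats' strict signature, an exception thrown while building the argument escapes before the handler is installed, which is worth keeping in mind if you wire a real cats.MonadError[Identity, Throwable] instance.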
Roman Makurin
hi guys, is there any support for AWS request signing in sttp, or a way to do it yourself?
Igor Tovstopyat-Nelip
Is Java 8 a strict requirement for the latest stable release? Is Java 11 possible by any chance?
2 replies
For the cats-effect backend. ^
Mikhail Kobenko
is there a way to disable logging of request and response body on a specific http request in Slf4jLoggingBackend?
3 replies