Ashwin Bhaskar
@ashwinbhaskar
How can I mock requests and responses when using AsyncHttpClientZioBackend.stubLayer? When using AsyncHttpClientZioBackend.stub I could do AsyncHttpClientZioBackend.stub.whenRequestMatches(...).thenResponse(...).
2 replies
Łukasz Drygała
@ldrygala
Hi guys, I would like to test a client that streams data from a websocket. Unfortunately, it fails with class sttp.ws.testing.WebSocketStub cannot be cast to class akka.stream.scaladsl.Flow (sttp.ws.testing.WebSocketStub and akka.stream.scaladsl.Flow are in unnamed module of loader 'app')
4 replies
Ashwin Bhaskar
@ashwinbhaskar
Is there an equivalent of akka.http.scaladsl.model.Uri's parseAndResolve for sttp.model.Uri?
1 reply
rabzu
@rabzu
How can I get BinaryStream from
(sttp.capabilities.Streams[sttp.capabilities.zio.ZioStreams] & Singleton)#BinaryStream?
I have:
fileStream: (Streams[ZioStreams] & Singleton)#BinaryStream <- sttpClient
  .send(downloadRequest)

uploadRequest = basicRequest.streamBody(ZioStreams)(fileStream)
Found:    (fileStream : 
  (sttp.capabilities.Streams[sttp.capabilities.zio.ZioStreams] & Singleton)#
    BinaryStream
)
Required: sttp.capabilities.zio.ZioStreams.BinaryStream²

where:    BinaryStream  is a type in class Streams with bounds 
          BinaryStream² is a type in trait ZioStreams which is an alias of zio.stream.Stream[Throwable, Byte]
rabzu
@rabzu
So the error is in basicRequest.streamBody(ZioStreams)(fileStream)
9 replies
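A minimal, self-contained model of the mismatch (the names below are hypothetical stand-ins, not sttp's API): a type projection like (Streams[ZioStreams] & Singleton)#BinaryStream is, as far as the compiler is concerned, unrelated to the member type of the concrete singleton, so a parameter expecting the singleton's own member type rejects it. Ascribing the value with the singleton's member type keeps the two equal:

```scala
// Minimal model: a capability trait with an abstract stream type, and one
// concrete singleton fixing it (stand-ins for sttp's Streams / ZioStreams).
trait Streams[S] { type BinaryStream }

object IntStreams extends Streams[IntStreams.type] {
  override type BinaryStream = List[Int] // stand-in for zio.stream.Stream[Throwable, Byte]
}

// Ascribing via the singleton's own member type compiles and stays usable:
val ok: IntStreams.BinaryStream = List(1, 2, 3)

// A function expecting the singleton's member type accepts `ok`, but would
// reject a value typed via the (Streams[...] & Singleton)#BinaryStream
// projection -- the analogue of the compile error quoted above.
def consume(s: IntStreams.BinaryStream): Int = s.sum
```

In the real code, annotating fileStream as ZioStreams.BinaryStream (equivalently zio.stream.Stream[Throwable, Byte]) instead of the projection should make streamBody(ZioStreams)(fileStream) typecheck.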
catostrophe
@catostrophe
The latest tapir 0.18.0 is CE2-only compatible and its http4s module depends on http4s 0.22.0-RC1, while the sttp 3.3.9 CE2 http4s module depends on http4s 0.21.x, so they are incompatible. It may be force-fixed in sbt, but the risk of runtime binary-compatibility issues is high.
6 replies
truongio
@truongio

Hi, I'm getting some weird EOF error from sttp that I'm unsure about.

o.h.b.p.Command$EOF$: EOF
from:

logger_name: "sttp.client3.logging.slf4j.Slf4jLoggingBackend"
logger_name: "sttp.tapir.server.http4s.Http4sServerInterpreter"

I'm using sttp client version 3.2.3. Does anyone know what might be causing this and how to resolve it?

3 replies
catostrophe
@catostrophe
@adamw for those who are still on CE2, can we have consistent versions of tapir 0.18.x and sttp 3.3.x, compatible with each other and with http4s 0.22.0?
8 replies
Julien Richard-Foy
@julienrf
Hello! What is the versioning scheme used by sttp? Are all the minor releases backward binary compatible? Are all the patch releases backward source compatible?
6 replies
dabbibi
@dabbibi
Hi qq: Is there a zio integration with FetchBackend for scalajs?
2 replies
Looking for something like AsyncZioHttpBackend but for scalajs
Colin Aygalinc
@aygalinc

Hi, I'm getting weird behavior with AkkaHttpBackend & retry when one of the services I rely on is down.
I have drawn up a small test case:
"When constraint server is in error then we " should " get an error when asking for constraint" in {
  val sttp = AkkaHttpBackend()

  val request: () => Future[String] = () => {
    println("Launch some stuff")
    val startTime = System.currentTimeMillis()

    basicRequest
      .get(uri"https://fake.url:9000")
      .response(asString.getRight)
      .send(sttp)
      .map(_.body)
      .recoverWith { case error =>
        println(s"${System.currentTimeMillis() - startTime} millis")
        error.printStackTrace()
        Future.failed(error)
      }
  }

  implicit val successForFuture: Success[String] = Success.always

  recoverToSucceededIf[SttpClientException] {
    retry.Pause(max = 3, delay = 20.milliseconds).apply {
      request
    }
  }
}

What I observe are inconsistent timeouts for detecting the failure: 4366 millis, 200920 millis, 117760 millis, 138533 millis.
So basically error detection takes anywhere from a few seconds to more than 3 minutes.
Colin Aygalinc
@aygalinc
I'd expect the detection of this error to be quicker, or at least to take the default 30-second timeout, no?
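One backend-agnostic way to bound the detection time, sketched under the assumption that the variance comes from slow DNS/connect phases the backend doesn't cap: race each request future against a scheduled failure, so no single attempt can exceed the chosen bound. withTimeout here is a hypothetical helper, not sttp or retry API:

```scala
import java.util.{Timer, TimerTask}
import java.util.concurrent.TimeoutException
import scala.concurrent.{Future, Promise}
import scala.concurrent.ExecutionContext.Implicits.global

// Daemon timer thread used to schedule the failure side of the race.
val timer = new Timer(true)

// Complete the returned future with a TimeoutException after `millis`,
// unless the underlying future finishes first.
def withTimeout[T](f: => Future[T], millis: Long): Future[T] = {
  val p = Promise[T]()
  timer.schedule(new TimerTask {
    def run(): Unit = p.tryFailure(new TimeoutException(s"no result after ${millis}ms"))
  }, millis)
  f.onComplete(p.tryComplete)
  p.future
}
```

Wrapping the request function as () => withTimeout(request(), 5000L) before handing it to retry.Pause would make each attempt fail within roughly the chosen bound instead of waiting on OS-level connect behaviour.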
uneewk
@uneewk
I am struggling to POST a simple json payload using sttp. Wondering if anyone can help me correct what I'm doing wrong. I get a response back that the payload is invalid. I can also see the curl equivalent of my request, and it's not what I intend, as the data is not being sent as json.
def sendMessage(message: String): Unit = {
  val request = basicRequest
    .contentType(ct = "application/json")
    .body(Map("text" -> message))
    .post(uri"<URI goes here>")

  request.send().body match {
    case Left(error) => logger.error(s"Could not send message: $message due to $error")
    case Right(_)    => ()
  }
}
3 replies
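For the record: sttp treats a Map[String, String] body as form data (key1=value1&key2=value2), so even with the json content type set the bytes on the wire are not JSON. A minimal sketch of building the payload as an explicit JSON string instead (the escape helper below is a hypothetical stand-in; real code should use circe or another JSON library):

```scala
// Hypothetical minimal JSON string escaping -- enough for this sketch only.
def escape(s: String): String = s.flatMap {
  case '"'  => "\\\""
  case '\\' => "\\\\"
  case '\n' => "\\n"
  case c    => c.toString
}

// Build the {"text": ...} document by hand.
def textPayload(message: String): String =
  s"""{"text":"${escape(message)}"}"""

// With sttp (assumed shape, not compiled here): a String body is sent
// verbatim, so the server receives real JSON:
//   basicRequest
//     .contentType("application/json")
//     .body(textPayload(message))
//     .post(uri"<URI goes here>")
```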
Mike Limansky
@limansky

Hi. I'm pretty new both to sttp and zio, so my question might be pretty simple; however, I've gotten stuck with testing.

I've created a service which uses SttpClient. I'd like to test it. So I create service like:

val myService = (ZLayer.succeed(Config("1234", "abc")) ++ HttpClientZioBackend.stubLayer) >>> MyService.live

val stubEffect = for {
  _ <- whenRequestMatches(_.uri.toString().endsWith("/api/v1/doit")).thenRespond("response")
} yield ()

val result = for {
  a <- MyService.doIt("Some input")
} yield assert(a)(isRight(equalTo("response")))

result.provideLayer(myService)

What is not clear is how to pass the stubEffect to provide a mocked server response. Are there any examples?

5 replies
Fredrik Wärnsberg
@frekw
What happened to https://github.com/softwaremill/sttp/issues/451#issuecomment-675533291 ? I can't seem to find failLeft anywhere in the code base.
2 replies
Swoorup Joshi
@Swoorup
what is the equivalent package for sttp.tapir.server.stub.* in Scala 3?
1 reply
joules-o
@joules-o
I'm trying to migrate from sttp client 2 to client3, but my linter fails on Any. I don't want to turn this off, but I'd really rather not add ignore annotations to some 40-odd endpoints. Is there some version of a no-op stream type I can use instead of Any?
6 replies
Nick Robison
@nickrobison

Hi folks, I had a quick question regarding the streaming functionality of the Zio backend:

I have the following code that I'm attempting to use to return the http response stream to the caller:

basicRequest
  .method(Method(proxyRequest.method.value), uri)
  .headers(proxyRequest.headers.map(h => Header(h.name(), h.value())): _*)
  .streamBody(ZioStreams)(entityStream)
  .response(asStreamUnsafe(ZioStreams))

The response type is Request[Either[String, BinaryStream]]; however, when I attempt to run the code, I get the following error:

class zio.stream.ZStream$$anon$1 cannot be cast to class scala.util.Either

The caller looks like this: client.send(req).map(_.body)

I'm sure there are a couple of things that I'm doing wrong, but I'm stumped as to why the types don't seem to line up with the actual implementation.

1 reply
Ujjal Satpathy
@ujjalsatpathy
Hi folks, I am trying to create unit test cases for an API client application that uses HttpURLConnectionBackend, but I am facing an issue using the stub SttpBackendStub.synchronous. I need some urgent help on this.
1 reply
Any immediate help would be really appreciated
renikov
@renikov
Hello :wave: I got hit by io.netty.handler.codec.http.websocketx.CorruptedWebSocketFrameException: Max frame length of 10240 has been exceeded. using sttp 2.2.10. Is there any way to configure the WS max frame length? (ZIO backend, if that matters.)
1 reply
Felix Bjært Hargreaves
@hejfelix
12:36:52.270 [AsyncHttpClient-5-2] WARN  i.n.channel.DefaultChannelPipeline - An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.channel.ChannelPipelineException: org.asynchttpclient.netty.handler.StreamedResponsePublisher.handlerRemoved() has thrown an exception.
    at io.netty.channel.DefaultChannelPipeline.callHandlerRemoved0(DefaultChannelPipeline.java:640) ~[netty-transport-4.1.60.Final.jar:4.1.60.Final]
    at io.netty.channel.DefaultChannelPipeline.remove(DefaultChannelPipeline.java:477) ~[netty-transport-4.1.60.Final.jar:4.1.60.Final]
    at io.netty.channel.DefaultChannelPipeline.remove(DefaultChannelPipeline.java:417) ~[netty-transport-4.1.60.Final.jar:4.1.60.Final]
    at org.asynchttpclient.netty.handler.AsyncHttpClientHandler.channelRead(AsyncHttpClientHandler.java:94) ~[async-http-client-2.12.3.jar:na]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[netty-transport-4.1.60.Final.jar:4.1.60.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[netty-transport-4.1.60.Final.jar:4.1.60.Final]
...
Caused by: java.lang.IllegalArgumentException: capacity < 0: (-480651692 < 0)
    at java.base/java.nio.Buffer.createCapacityException(Buffer.java:278) ~[na:na]
    at java.base/java.nio.ByteBuffer.allocate(ByteBuffer.java:360) ~[na:na]
    at sttp.client3.asynchttpclient.SimpleSubscriber.onComplete(reactive.scala:65) ~[async-http-client-backend_3-3.3.14.jar:3.3.14]
    at sttp.client3.asynchttpclient.AsyncHttpClientBackend$$anon$2.onComplete(AsyncHttpClientBackend.scala:108) ~[async-http-client-backend_3-3.3.14.jar:3.3.14]
...
8 replies
I get this when downloading large files, 3.6 GB in this case.
Evgenii Kuzmichev
@ekuzmichev

Hello! I'm using sttp client with AsyncHttpClientZioBackend. Version 2.2.3.
The endpoint server which I connect to responds with a huge json body (90 MB), but the response time is quite fast (a few seconds).
I read the response with the circe deserializer via .response(asJson[Option[MyDto]]). All deserializers are correct (checked in unit tests).
I also have .readTimeout(readTimeout) (10 min) set on my sttp request (it is definitely longer than the server response time).

The problem is that I get sttp.client.DeserializationError: exhausted input due to the fact that not all bytes have been read, and
I get json with a cropped end and no valid json ending, etc.: e.g. {"a": "foo", "b": but at the scale of the huge json.

While debugging, I found that it is being read via a zio stream instantiated from a reactive streams publisher (via the zio-interop-reactivestreams lib):

override protected def publisherToBytes(p: Publisher[ByteBuffer]): Task[Array[Byte]] =
  p.toStream(bufferSize).fold(ByteBuffer.allocate(0))(concatByteBuffers).map(_.array())

As I understand it, when the requestTimeout expires while reading is in progress, the stream is closed and keeps only the bytes it managed to read in that time.
And in the general case that can be an inconsistent, incompletely read body.

The buffer size above is 16 (private val bufferSize = 16) and I haven't found any way to increase it.
When I copy-paste the code of AsyncHttpClientZioBackend into my local code base, into the same package as the original one,
and set a larger buffer size (1024, 2048), the exhausted input error disappears. The client manages to read the full json body in time and correctly deserialize it.

Could someone please help me handle this problem?
Can one set the buffer size via config or something else?
Is my way of digging correct? Maybe there is a better way to solve the problem?

1 reply
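For reference, the fold in publisherToBytes above concatenates the stream's chunks into one buffer. A stand-alone version of such a concatenation step (this concatByteBuffers is a hypothetical reimplementation for illustration, not sttp's own code):

```scala
import java.nio.ByteBuffer

// Allocate a buffer large enough for both inputs, copy their remaining
// bytes into it, then flip so the result can itself feed the next fold
// step (remaining() equals the combined length again).
def concatByteBuffers(a: ByteBuffer, b: ByteBuffer): ByteBuffer = {
  val out = ByteBuffer.allocate(a.remaining() + b.remaining())
  out.put(a)
  out.put(b)
  out.flip()
  out
}
```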
Arman Bilge
@armanbilge
Hi, would someone here mind approving the workflow on my PR? :) thanks in advance!
Dmytro Nikitin
@KRoLer
Hey guys, I'm trying to use sttp for a websocket connection on the ScalaJS side and can't figure out how I can do it with FetchBackend.
I just need to open a websocket connection and update the state on each received message. I'd be very happy to see some kind of guide, or just advice on where to start.
5 replies
Roman Leshchenko
@rleshchenko

Hello guys. I'm struggling with optional params in my service.
I'm specifying the route like this:

endpoint.get
        .in(
          query[Option[Filter]]("filter")
        )

And providing such a codec:

implicit val filterCodec: PlainCodec[Option[Filter]] =
  Codec.string.mapDecode { s =>
    val l = parse(s).extract[Map[String, String]]
    Value(Option(Filter(l.head._1, l.head._2)))
  } { m =>
    Serialization.write(m)
  }

But when I try to query my server with no "filter" param, I get an error: Invalid value for: query parameter filter
Any suggestions on this?

heksesang
@heksenlied:matrix.org
You wouldn't make a PlainCodec[Option[Filter]], but PlainCodec[Filter].
At least that would make more sense, I think? Doesn't tapir handle the Option-part itself?
heksesang
@heksenlied:matrix.org
I wonder if, by defining PlainCodec[Option[Filter]], you might override the built-in mechanism for handling optionals. I assume there exists a PlainCodec[Option[A]] for any A where PlainCodec[A] exists, which means you should define a PlainCodec[Filter] and then it plays nicely.
Roman Leshchenko
@rleshchenko
@heksenlied:matrix.org, yep, you were right. Once I deleted Option from the codec, it started to work.
Thank you very much!
Adam Warski
@adamw
@rleshchenko for the record - yes, there is built-in handling of optionals; it didn't work as your PlainCodec[Option[Filter]] defined a codec between String <-> Option[Filter]. However, query requires a codec between List[String] <-> Option[Filter] - as query parameters can be repeated. Tapir knows how to convert a String <-> T codec into List[String] <-> Option[T], but doesn't have built in conversions for String <-> Option[T]
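The conversion Adam describes can be modelled in a few lines of plain Scala (the names below are hypothetical, not tapir's API): from a per-value decoder String => T one can derive the List[String] => Option[T] decoding that query needs, with the absent-parameter case handled before the user's codec ever runs:

```scala
// Sketch of the conversion tapir performs internally: lift a single-value
// decoder to an optional, possibly-repeated query parameter.
def decodeOptionalQuery[T](decodeOne: String => T)(values: List[String]): Either[String, Option[T]] =
  values match {
    case Nil      => Right(None)               // parameter absent: no codec involved
    case v :: Nil => Right(Some(decodeOne(v))) // single value: delegate to the codec
    case _        => Left("multiple values for a single optional parameter")
  }
```

This is why defining only PlainCodec[Filter] works: tapir supplies the Nil => None step itself, whereas a PlainCodec[Option[Filter]] forces every request to carry a string to decode.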
Yuval Itzchakov
@YuvalItzchakov
Hi, is there any way to get the AsyncHttpClientZioBackend to run on the ZIO Blocking threadpool? I see the definition is: Task[SttpBackend[Task, ZioStreams with WebSockets]] although I do see in the docs it used to be a BlockingTask
3 replies
javierg1975
@javierg1975
Hi,
Is there a way to guarantee a strict order of requests?
Short of going back to a sync backend (I'm currently using cats-effect/Armeria) or introducing some sort of artificial delay between calls, I can't think of any way to achieve this.
Thanks in advance for any feedback.
5 replies
Andrii Ilin
@ilinandrii
Hey guys!
The examples in the latest documentation seem not to be working.
The example code is blank.
1 reply
Kopaniev Vladyslav
@VladKopanev

Hey team, sttp 3.3.15 introduced a binary incompatibility with 3.3.14, is that a known issue? Particularly it was this change:
https://github.com/softwaremill/sttp/pull/1119/files#diff-328a76b199b03a413e3a607009ce6d0ee31047193247381451e5b1362c1c96eeR15

I spotted this because my application gave me Exception in thread "main" java.lang.NoSuchMethodError: sttp.client3.FollowRedirectsBackend.<init>(Lsttp/client3/SttpBackend;Lscala/collection/immutable/Set;Lscala/collection/immutable/Set;)V

(That happened after I updated one of my dependencies which transitively pulled 3.3.16 and my current version of sttp which was 3.3.14 was evicted)

1 reply
Kopaniev Vladyslav
@VladKopanev
^^ it's strange that this happened, because the mima plugin was enabled after 3.3.14, and if you check out tag 3.3.15 and try to run "mimaReportBinaryIssues" it will error.
Found a possible hint:
https://github.com/softwaremill/sttp/runs/3776276618?check_suite_focus=true#step:8:1221
The corresponding CI build was not checking bincompat because "mimaPreviousArtifacts" was empty for some reason
Kopaniev Vladyslav
@VladKopanev
It looks like even for releases mima won't check "core" module bincompat issues:
https://github.com/softwaremill/sttp/runs/4272599204?check_suite_focus=true#step:8:1252
9 replies