Vladimir Kostyukov
Yeah, I think Sergey was experimenting with this as well. finagle/finch#880
Larry Bordowitz
That's a really nice pointer, thank you for that! Should it be a separate field like meta, or expand item?

Hi there!

I am currently trying to use finch but have some issues understanding its abstractions.

I have multiple services and want to combine them while bootstrapping.

Is that possible to do?

for example

val svc1: Service[Request, Response] = ???
val svc2: Service[Request, Response] = ???

Http.server.serve(":8080", svc1 :+: svc2)
Thanks for any help!
Sergey Kolbasov

Hi @ratoshniuk

Could you elaborate on how you ended up with two Finagle services?

Usually, in finch you would combine Endpoints first using the :+: operator, and only after that convert the result to a Finagle service, so routing is handled for you.

If it's crucial to have multiple services, take a look at twitter-server's HttpMuxer:

Though you would need to handle routing there manually.
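To illustrate the combine-then-serve idea in plain Scala (with hypothetical `Request`/`Response` stand-ins rather than the real Finagle types, and `orElse` standing in for finch's `:+:`):

```scala
// Hypothetical stand-ins for Finagle's Request/Response, for illustration only.
case class Request(path: String)
case class Response(status: Int, body: String)

// Each "endpoint" is a partial routing function; `orElse` plays the role
// that `:+:` plays for finch Endpoints: combine the routes first.
val users: PartialFunction[Request, Response] = {
  case Request("/users") => Response(200, "users")
}
val orders: PartialFunction[Request, Response] = {
  case Request("/orders") => Response(200, "orders")
}

// Only the combined routing table becomes a single, total "service".
val routes: PartialFunction[Request, Response] = users.orElse(orders)
val service: Request => Response =
  req => routes.applyOrElse(req, (_: Request) => Response(404, "not found"))
```

The point is that there is exactly one service at the edge; everything before it is routing composition.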

Dermot Haughey
@sergeykolbasov if you want Finagle filters to apply only to certain endpoints, but not others, wouldn't that imply you need separate Finagle services?
We had an issue like that in the past, which we resolved by having multiple Finagle services (we later got rid of them by writing a more finch-centric way of doing authorization/authentication).
Larry Bordowitz
Is there a way to make a list of endpoints, to use in a foreach and then also to "fold" (or something) into a coproduct?
Frederick Cai
@lbordowitz I was wondering the same thing earlier, hah! Have you managed to find a solution?
Sergey Kolbasov
It's possible only if the endpoints share the same type.
Then you can use the Endpoint.coproduct method.
Otherwise, welcome to the beautiful world of shapeless, type-level programming, and all the rest of the things we love.
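A plain-Scala sketch of the same-type case Sergey describes (the `Endpoint` alias here is a hypothetical stand-in, not finch's actual type):

```scala
// Hypothetical stand-ins: a Request and an Endpoint that may or may not match.
case class Request(path: String)
type Endpoint[A] = Request => Option[A]

val hello: Endpoint[String] = r => if (r.path == "/hello") Some("hello") else None
val bye: Endpoint[String]   = r => if (r.path == "/bye") Some("bye") else None

// Endpoints sharing one output type fit in an ordinary list, so they can be
// iterated (say, to collect metadata) *and* folded into a single endpoint
// that tries each in order -- the "coproduct of one type" case.
val endpoints: List[Endpoint[String]] = List(hello, bye)
val folded: Endpoint[String] =
  endpoints.reduce((a, b) => (r: Request) => a(r).orElse(b(r)))
```

Once the output types differ, the list no longer has a single element type, which is where shapeless-style coproducts come in.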
Georgi Krastev
Hmm, it would be nice to have Endpoint.coproduct[A](endpoints: Endpoint[A]*)
I checked just now, and we only have endpointA coproduct endpointB.
Larry Bordowitz
@zhenwenc I'm working on adding metadata to finch for auto-generation of REST API specification documents. So, instead of working with the coproduct directly, I just created a recursive, list-like data structure in my metadata and shoved the two parts of the coproduct into it.
Dermot Haughey
@lbordowitz interested to see how this turns out. The big weakness in all the auto-documenting tools I've seen so far is what happens when you have a JSON body or JSON response. I'm not sure that's doable in a general sense with something like circe, given that custom encoders/decoders are relied on relatively frequently in practice, and I'm not sure there is an ideal solution for documenting them automatically, given that they can execute arbitrary code. Also, I don't think you can tell a derived codec from a manually implemented one, so I'm not confident that libraries could handle it properly. Let me know if you have any ideas, because that ended up being one of the big roadblocks I identified.
Georgi Krastev
Well, you can't use circe or other JSON-only codecs. You would need another typeclass, perhaps for schemas, but the problem is keeping it in sync with the codecs. By the way, that's one reason why I prefer (semi-)auto-generated codecs; I don't understand why people shy away from them. As long as you have separate data types keeping your API layer isolated from your internal logic, it's all fine.
Larry Bordowitz
@hderms the goal isn't automatic derivation of everything, just a low-touch approach. So, you get the type when you make a jsonBody, and it's decodable; can we get enough metadata to derive a JSON object schema? What if it's just for standard case classes? And, yeah, I'm definitely more than a little stuck on body encoding right now. Regarding responses, I had an idea for a sort of "test harness"-like runner, which could generate documented examples and maybe even output types. It's a bit pie-in-the-sky, though, because I don't know a lot of the reflection and fancy tricks these serialization libraries (like circe) use to do what they do.
Dermot Haughey
Yeah, I'm not confident anything like that could work with circe without breaking as soon as you introduce your own custom encoder/decoder.
I'm not an authority on the subject, though.
Larry Bordowitz
Oh, it won't work at all on a custom encoder/decoder. That's fine by me for now; I'm just working on a proof of concept. If it works for me, then at least I'll have something to show for it. Regarding auto-derivation, I think rather than plugging into circe or any encoder in particular, it should just use the same fancy macro/implicit logic that circe uses, but instead of encoding objects of those types, it would produce a metadata schema of the type.
Larry Bordowitz
In fact, I've found a project that uses shapeless to great effect in generating a JSON schema from a case class. It's unmaintained, but MIT-licensed, and the code matches the other shapeless documentation I've found. I'll try incorporating it, along with some smarter recursive logic, into the metadata consolidation.
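A hand-written sketch of the schema-typeclass idea under discussion (hypothetical names; real derivation for case classes would use shapeless or macros, as in the project mentioned):

```scala
// A Schema typeclass kept separate from any JSON codec: it only describes types.
trait Schema[A] { def describe: String }

object Schema {
  def apply[A](implicit s: Schema[A]): Schema[A] = s
  def instance[A](d: String): Schema[A] =
    new Schema[A] { def describe: String = d }

  implicit val stringSchema: Schema[String] = instance("string")
  implicit val intSchema: Schema[Int]       = instance("integer")
}

// The instance for a case class is written by hand here; shapeless (or a
// macro) would derive the same description from the fields automatically.
case class User(name: String, age: Int)
implicit val userSchema: Schema[User] = Schema.instance(
  "object(name: " + Schema[String].describe + ", age: " + Schema[Int].describe + ")"
)
```

Keeping such a typeclass in sync with hand-written codecs is exactly the problem Georgi raises above; derived instances sidestep it.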
Dmitry Avdonkin
Hey guys!
I've got a question about error handling in Finch.
For the last couple of years I've been doing it with the good ol' Finagle filter. It allows me to log the error, log the request, and return a failed response.
I've tried to use Endpoint.handle instead, but then I can't log the request in case of an error anymore.
Is there any other, more modern, finch-style way to do it?
Sergey Kolbasov
If you use finchx, you can do it at the Endpoint.Compiled level,

handling all the errors inside the Kleisli, so it's roughly the same as a Finagle filter.

The good thing is that Finch itself does its best to guarantee that F[(T, Either[Throwable, Response])] won't throw on its own, so it's enough to handle the Left case of this Either.
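A simplified plain-Scala model of this Compiled-level handling (assumptions: `Compiled` is reduced here to `Request => Try[Either[Throwable, Response]]`, dropping the trace component and using `Try` in place of an arbitrary effect):

```scala
import scala.util.{Success, Try}

// Hypothetical stand-ins for the real Finagle/finch types.
case class Request(path: String)
case class Response(status: Int)

// Simplified model of finchx's Endpoint.Compiled: the real one is a Kleisli
// that also carries a trace, and F is an arbitrary effect rather than Try.
type Compiled = Request => Try[Either[Throwable, Response]]

// A "filter" at the Compiled level: it sees the request and the error
// together, so it can log both before producing a failed response.
def handleErrors(underlying: Compiled)(log: (Request, Throwable) => Unit): Compiled =
  req =>
    underlying(req).map {
      case Left(err) =>
        log(req, err) // the request is still in scope when the error surfaces
        Right(Response(500))
      case ok => ok
    }
```

Because the wrapper composes functions of the request, it recovers exactly what `Endpoint.handle` loses: access to the request at the moment the error is handled.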
Dermot Haughey
@sergeykolbasov do you know if it's possible to make that work while also passing an argument into an endpoint?
One of the big obstacles to us getting off Finagle filters is that we want a filter which sets a request ID on the request object before any application code runs.
Then we want to configure a logger that has the request ID as a parameter.
The way I accomplish that currently is with a Finagle filter that runs before everything and sets a random request ID on the Finagle Request.
Then I have an endpoint based on root which can pull out that request ID, configure the logger, and return it. I call it withRequestLogger,
and then I use it like
post("foo" :: withRequestLogger) { logger: Logger =>
This does everything we want, but it has the undesirable side effect of putting the logger in lots of type signatures (which is debatably a positive),
and in addition it requires an understanding of Finagle filters.
Sergey Kolbasov

@hderms you just hit on the most relevant topic for me, so I have tons of answers :)

If I understand correctly, you create a logger per request (due to the unique request context) and pass it around. If so, then I strongly advise using the famous Reader monad!
You might ask why, and the answer is: because you can pass around the logger, the context, whatever you fancy, without polluting your interfaces' APIs.

The way we solve it at Zalando is to use tagless final + cats-mtl to extract the context whenever you need it (actually, that's a topic for a blog post or even a tech talk). Nevertheless, you can fix on a specific monad (Reader) and go with it for a while.

Then, to compile Finch endpoints into a Finagle service, you're required to have an Effect (or ConcurrentEffect) instance for your monad. Reader doesn't have them out of the box, and the reason is simple: what should the initial environment be?
You have two options here:

  • mapK over Endpoint.Compiled and, in the natural transformation, define the initial environment for your reader (say, a NoopLogger?). Then, in the next Kleisli, redefine it based on the request, instantiating the logger you need and using local to propagate this environment down to the next element in the chain.
  • Provide your own implicit Effect for Reader[F[_] : Effect, YourEnv, *] that will run the monad with some initial YourEnv, so finch picks it up to convert Compiled to Service.

Voilà: no more logger endpoints everywhere, with loggers being passed here and there as parameters.
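A hand-rolled sketch of this Reader approach (minimal `Reader` with `local`, hypothetical `Env`/`Logger` types; cats' `Reader`/`Kleisli` provide the same operations):

```scala
// Minimal Reader monad: a computation that reads an environment E.
case class Reader[E, A](run: E => A) {
  def map[B](f: A => B): Reader[E, B] = Reader(e => f(run(e)))
  def flatMap[B](f: A => Reader[E, B]): Reader[E, B] =
    Reader(e => f(run(e)).run(e))
  // Rewrite the environment seen by this computation -- this is how a
  // per-request logger is injected without appearing in any interface.
  def local(f: E => E): Reader[E, A] = Reader(e => run(f(e)))
}

// Hypothetical environment carrying the logger; business logic just reads it.
case class Logger(requestId: String) {
  def line(msg: String): String = s"[$requestId] $msg"
}
case class Env(logger: Logger)

// The handler never mentions the logger in its signature beyond the Reader.
val handler: Reader[Env, String] =
  Reader(env => env.logger.line("handled"))

// At the edge: start from whatever default environment the Effect instance
// supplies, then redefine it per request via `local`.
def perRequest(requestId: String): Reader[Env, String] =
  handler.local(_ => Env(Logger(requestId)))
```

Whatever initial environment is supplied at the very end, `local` has already replaced it with the per-request one by the time the handler runs.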

And it's not over yet! Just last night I published the first version of Odin:

It's a fast & functional logger that isn't as feature-rich as log4j yet, but it has the basic options available, with something special on top. One of those: context is a first-class citizen, so you don't need to mess around with ThreadLocal MDC and/or create loggers per context. It even has a contextual logger that can pick up your context from any ApplicativeAsk (that is, a Reader) and embed it in the log for you.

I'm not suggesting you pick it up right away and use it in production today; we have yet to battle-test it in production this month, and some features might be missing. But you might be interested in subscribing and following along, and who knows, maybe one day you'll start using it in your project :)

Dermot Haughey
@sergeykolbasov thanks for the help. I'll try that approach.
Odin looks nice, but one thing we've come to rely on is the Izumi logger's ability to convert logs to JSON.
In particular, I think a lot of Scala logging libraries should be going for structured logging as a first approach;
maybe building a circe integration for Odin would be helpful.
Georgi Krastev
If you use Monix, TaskLocal is a great alternative to ReaderT (I think FiberLocal if you use ZIO). You can actually define ApplicativeAsk[Task, Logger] based on a TaskLocal and profit.
Sergey Kolbasov
@hderms there is already one :) called odin-json
I'll spend some time in the next few days working on proper documentation.
@joroKr21 I'm not a huge fan of *Local things, personally. It's even worse than implicits if you think about it: you have to trust that someone, somewhere, put the required data into the magical box of *Local before the moment you're going to use it.
Georgi Krastev
It's not so difficult to arrange as long as you don't have too many ends of the world. Besides, how is it different from providing a default NoopLogger to ReaderT?
You also need to make sure that someone calls local with a new tracing logger.