Sergey Kolbasov
@sergeykolbasov

Usually, in Finch you would combine Endpoints first using the :+: operator, and only after that convert the result to a Finagle service, so routing is handled for you.

If it's crucial to have multiple services, take a look at twitter-server's HttpMuxer:
https://github.com/twitter/twitter-server/blob/master/server/src/test/scala/com/twitter/server/util/HttpUtilsTest.scala

Though you would need to handle routing there manually
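
For illustration, a rough sketch of that flow, mirroring the Bootstrap usage that appears later in this chat (the endpoint result types and Service wiring here are illustrative):

import cats.effect.IO
import com.twitter.finagle.Service
import com.twitter.finagle.http.{Request, Response}
import io.finch._
import io.finch.circe._

// Combine first, convert once: the coproduct carries the routing,
// so a single Finagle service dispatches to every endpoint.
def makeService(users: Endpoint[IO, String], pets: Endpoint[IO, String]): Service[Request, Response] =
  Bootstrap.serve[Application.Json](users :+: pets).toService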

Dermot Haughey
@hderms
@sergeykolbasov if you want finagle filters to only be applied to certain endpoints, but not others, wouldn't that imply you need separate finagle services?
we had an issue like that in the past which we resolved by having multiple finagle services (which we got rid of by writing a more finch-centric way of doing authorization/authentication)
Larry Bordowitz
@lbordowitz
Is there a way to make a list of endpoints, to use in a foreach and then also to "fold" (or something) into a coproduct?
Larry Bordowitz
@lbordowitz
Nevermind
Frederick Cai
@zhenwenc
@lbordowitz I had the same question earlier, hah! Did you manage to find a solution?
Sergey Kolbasov
@sergeykolbasov
It's possible only if the endpoints share the same type
Then you can use Endpoint.coproduct method
Otherwise, welcome to the beautiful world of shapeless, type-level programming, and all the rest of the things we love
Georgi Krastev
@joroKr21
Hmm, it would be nice to have Endpoint.coproduct[A](endpoints: Endpoint[A]*)
I checked now and we have only endpointA coproduct endpointB
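
Until such a method exists, the fold Larry asked about can be sketched with the binary method, assuming all endpoints share one result type (names and effect are illustrative):

import cats.effect.IO
import io.finch._

// Fold a non-empty list of same-typed endpoints into one endpoint
// using the existing binary coproduct method.
def foldEndpoints[A](endpoints: List[Endpoint[IO, A]]): Endpoint[IO, A] =
  endpoints.reduce(_ coproduct _)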
Larry Bordowitz
@lbordowitz
@zhenwenc I'm working on adding metadata to finch for auto-generation of REST API specification documents. So, instead of working with the coproduct directly, I just created a recursive list-like data structure in my metadata and shoved the two parts of the coproduct into there.
Dermot Haughey
@hderms
@lbordowitz interested to see how this turns out. The big weakness with all the auto-documenting stuff I've seen so far is what happens when you have a JSON body or JSON response. I'm not sure that's doable in a general sense with something like circe: custom encoders/decoders are relied upon relatively frequently in practice, and since they can execute arbitrary code, I'm not sure there is an ideal solution for documenting them automatically. Also, I don't think you can tell a derived codec from a manually implemented one, so I'm not confident that libraries could handle it properly. Let me know if you have any ideas, because that ended up being one of the big roadblocks I identified.
Georgi Krastev
@joroKr21
Well, you can't use circe or other JSON-only codecs. You would need another typeclass, perhaps for schemas, but the problem is keeping it in sync with the codecs. Btw, that's one reason why I prefer (semi-)auto-generated codecs; I don't understand why people shy away from them. As long as you have separate data types to keep your API layer isolated from your internal logic, it's all fine.
Larry Bordowitz
@lbordowitz
@hderms the goal isn't automatic derivation of everything, just low touch. So, you get the type when you make a jsonBody, and it's decodable; can we get enough metadata to derive a JSON object schema? What if it's just for standard case classes? And, yeah, I'm definitely more than a little stuck on body encoding right now. Regarding responses, I had an idea of a sort of "test harness"-like runner, which could generate documented examples and maybe even output types. It's a bit pie-in-the-sky though, because I don't know a lot of the reflection and fancy tricks these serialization libraries (like circe) use to do what they do.
Dermot Haughey
@hderms
Yeah, I'm not confident anything like that could work with circe without breaking as soon as you introduce your own custom encoder/decoder.
I'm not an authority on the subject, though.
Larry Bordowitz
@lbordowitz
Oh, it won't work at all on a custom encoder/decoder. That's fine by me for now; I'm just working on a proof of concept. If it works for me, then at least I'll have something to show for it. Regarding auto-derivation, I think rather than plugging into circe or any encoder in particular, it should just use the same fancy macro/implicit logic that circe uses, but instead of encoding objects of those types, it would produce a metadata schema of that type.
Larry Bordowitz
@lbordowitz
In fact I've found this project which uses shapeless to great effect in generating a JSON schema out of a case class. It's unmaintained, but MIT licensed, and the code matches the other shapeless documentation that I've found. https://github.com/timeoutdigital/docless/blob/master/src/main/scala/com/timeout/docless/schema/derive/HListInstances.scala I'll try incorporating this and some smarter recursive logic in the metadata consolidation.
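
For a flavour of the technique, here is a stripped-down sketch of that shapeless approach (the typeclass names and JSON type strings are made up for illustration): derive a flat field-name-to-type listing for a case class via LabelledGeneric.

import shapeless._
import shapeless.labelled.FieldType

// Hypothetical minimal "schema" typeclass: a flat list of (field name, JSON type).
trait FieldSchema[A] { def fields: List[(String, String)] }

// JSON type names for primitive fields; extend with more instances as needed.
trait PrimSchema[A] { def jsonType: String }
object PrimSchema {
  implicit val forString: PrimSchema[String] = new PrimSchema[String] { val jsonType = "string" }
  implicit val forLong: PrimSchema[Long] = new PrimSchema[Long] { val jsonType = "integer" }
}

object FieldSchema {
  implicit val hnil: FieldSchema[HNil] =
    new FieldSchema[HNil] { val fields: List[(String, String)] = Nil }

  // One labelled field: the name comes from the Witness, the type from PrimSchema.
  implicit def hcons[K <: Symbol, V, T <: HList](implicit
      key: Witness.Aux[K],
      head: PrimSchema[V],
      tail: FieldSchema[T]
  ): FieldSchema[FieldType[K, V] :: T] =
    new FieldSchema[FieldType[K, V] :: T] {
      val fields = (key.value.name, head.jsonType) :: tail.fields
    }

  // Any case class: go through its LabelledGeneric representation.
  implicit def generic[A, R <: HList](implicit
      gen: LabelledGeneric.Aux[A, R],
      repr: FieldSchema[R]
  ): FieldSchema[A] =
    new FieldSchema[A] { val fields = repr.fields }
}

// e.g. for final case class User(id: Long, name: String):
// implicitly[FieldSchema[User]].fields == List(("id", "integer"), ("name", "string"))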
Dmitry Avdonkin
@oviron
Hey guys!
I've got a question about error handling in Finch.
For the last couple of years I've been doing it with the good ol' Finagle filter. It lets me log the error, log the request, and return a failed response.
I've tried to use Endpoint.handle instead, but then I can't log the request in case of an error anymore.
Is there another, more modern and Finch-style way to do it?
Sergey Kolbasov
@sergeykolbasov
If you use finchx, you can do it at the Endpoint.Compiled level

https://github.com/finagle/finch/blob/master/examples/src/main/scala/io/finch/middleware/Main.scala

You handle all the errors inside the Kleisli, so it's roughly the same as a Finagle filter

The good thing is that Finch itself does its best to guarantee that F[(T, Either[Throwable, Response])] won't throw on its own, so it's enough to handle the Left case of this Either
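
A rough sketch of such middleware (assuming cats-effect IO and an slf4j logger; the blanket 500 fallback is illustrative):

import cats.data.Kleisli
import cats.effect.IO
import com.twitter.finagle.http.{Request, Response, Status}
import io.finch._
import org.slf4j.Logger

// Wrap a compiled endpoint: on a Left, log both the error and the request,
// then recover with a 500 response.
def handleErrors(compiled: Endpoint.Compiled[IO], logger: Logger): Endpoint.Compiled[IO] =
  Kleisli { (req: Request) =>
    compiled(req).map {
      case (trace, Left(error)) =>
        // The request is still in scope here, so both can be logged.
        logger.error(s"request $req failed", error)
        (trace, Right(Response(Status.InternalServerError)))
      case success => success
    }
  }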
Dermot Haughey
@hderms
@sergeykolbasov do you know if it's possible to make that work while also passing an argument into an endpoint?
one of the big obstacles for us getting off finagle filters is we want to have a filter which sets a request id on the request object before any application code is run
then we want to configure a logger that has the request ID as a parameter
the way I accomplish that currently is with a finagle filter that runs before everything and sets a random request ID on finagle Request
then I have an endpoint based on root which can pull out that request ID, configure the logger and return the logger. I call it withRequestLogger
and then I use it like
post("foo" :: withRequestLogger) { logger: Logger =>
this does everything we want but it has the undesirable side effect of putting logger in lots of type signatures (which is debatably a positive)
and in addition requires understanding of finagle filters
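
Roughly what that looks like (the header name is illustrative; the upstream filter is assumed to have stored the id where the endpoint can read it):

import cats.effect.IO
import com.twitter.finagle.http.Request
import io.finch._
import io.finch.catsEffect._
import org.slf4j.{Logger, LoggerFactory}

// An endpoint built on root: it sees the raw request, pulls out the
// request id set by the upstream Finagle filter, and yields a logger.
val withRequestLogger: Endpoint[IO, Logger] = root.map { (req: Request) =>
  val requestId = req.headerMap.getOrElse("X-Request-Id", "unknown")
  // Illustrative: a logger named after the id; a real setup would
  // configure a structured/contextual logger instead.
  LoggerFactory.getLogger(s"request.$requestId")
}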
Sergey Kolbasov
@sergeykolbasov

@hderms you've just hit the most relevant topic for me right now, so I have tons of answers :)

If I understand correctly, you create a logger per request (due to the unique request context) and pass it around. If so, then I strongly advise using the famous Reader monad!
You might ask why, and the answer is: because you can pass around a logger, a context, whatever you fancy, without polluting your interfaces' APIs.

The way we solve it at Zalando is to use tagless final + cats-mtl to extract the context whenever you need it (actually, that's a topic for a blog post or even a tech talk). Nevertheless, you can fix on a specific monad (Reader) and go with it for a while.

Then, to compile Finch endpoints into a Finagle service, you're required to have an Effect (or ConcurrentEffect) instance for your monad. Reader doesn't have one out of the box, and the reason is simple: what should be the initial environment?
You have two options here:

  • mapK over Endpoint.Compiled and, in the natural transformation, define the initial environment for your reader (say, a NoopLogger?). Then, in the next Kleisli, redefine it based on the request, instantiating the logger you need and using local to propagate this environment down to the next element in the chain (sketched below).
  • provide your own implicit Effect for Reader[F[_] : Effect, YourEnv, *] that will run this monad with some initial YourEnv, so Finch would pick it up to convert Compiled to Service

Voilà: no more logger endpoints everywhere, with loggers being passed here and there as parameters.
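
A condensed sketch of the local part of the first option (Env and loggerFor are illustrative names; the initial NoopLogger environment would be supplied via the mapK step described above):

import cats.data.{Kleisli, ReaderT}
import cats.effect.IO
import com.twitter.finagle.http.Request
import io.finch._
import org.slf4j.Logger

type Env = Logger // the per-request environment; could carry more context
type App[A] = ReaderT[IO, Env, A]

// Once the request is in hand, redefine the Reader environment so every
// later step in the chain sees the request-specific logger.
def withRequestEnv(loggerFor: Request => Logger)(
    compiled: Endpoint.Compiled[App]): Endpoint.Compiled[App] =
  Kleisli { (req: Request) =>
    compiled(req).local[Env](_ => loggerFor(req))
  }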

And it's not over yet! Just last night I published the first version of Odin:
https://github.com/valskalla/odin

It's a fast and functional logger. It's not as feature-rich as log4j yet, but it has the basic options available, with something special on top. One of them: context is a first-class citizen, so you don't need to mess around with a ThreadLocal MDC and/or create loggers per context. It even has a contextual logger that can pick up your context from any ApplicativeAsk (that is, Reader) and embed it in the log for you.

It's not that I suggest you pick it up right away and use it in production today; we have yet to battle-test it in production this month, and some features might be missing. But you might be interested in subscribing and following along, and who knows, one day you might start using it in your project :)

Dermot Haughey
@hderms
@sergeykolbasov thanks for the help. I'll try that approach.
Odin looks nice, but one thing we've come to rely on is Izumi logger's ability to convert logs to JSON: https://izumi.7mind.io/latest/release/doc/logstage/
in particular, I think a lot of Scala logging libraries should be going for structured logging as a first approach
maybe building a circe integration for Odin would be helpful
Georgi Krastev
@joroKr21
If you use Monix, TaskLocal is a great alternative to ReaderT (I think FiberLocal if you use ZIO). You can actually define ApplicativeAsk[Task, Logger] based on a TaskLocal and profit.
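
For reference, a sketch of that wiring against the old cats-mtl 0.x API (the construction is illustrative, and TaskLocal needs local-context propagation enabled when the Task is run):

import cats.Applicative
import cats.mtl.ApplicativeAsk
import monix.eval.{Task, TaskLocal}
import org.slf4j.Logger

// Expose a per-request Logger stored in a TaskLocal through ApplicativeAsk,
// so downstream code can ask for it without a ReaderT stack.
def loggerAsk(local: TaskLocal[Logger]): ApplicativeAsk[Task, Logger] =
  new ApplicativeAsk[Task, Logger] {
    val applicative: Applicative[Task] = Applicative[Task]
    def ask: Task[Logger] = local.read
    def reader[A](f: Logger => A): Task[A] = local.read.map(f)
  }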
Sergey Kolbasov
@sergeykolbasov
@hderms there is already one :) it's called odin-json
I'll spend some time in the next few days working on proper documentation
@joroKr21 I'm not a huge fan of *Local things personally. It's even worse than implicits, if you think about it: you have to trust that someone somewhere put the required data into the magical box of *Local before the moment you go to use it
Georgi Krastev
@joroKr21
It's not so difficult to arrange as long as you don't have too many ends of the world. Besides, how is it different from providing a default NoopLogger to ReaderT?
You also need to make sure that someone is calling local with a new tracing logger.
Sergey Kolbasov
@sergeykolbasov
well, at least it's an explicit requirement to have one
you might as well have no NoopLogger at all and just run the Endpoint.Compiled[ReaderT[F, Ctx, *]] inside an Endpoint.Compiled[F] at the point where there is access to the request, so you can build a proper logger right away
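
In code, that arrangement might look like this (loggerFor is again an illustrative name):

import cats.data.{Kleisli, ReaderT}
import cats.effect.IO
import com.twitter.finagle.http.Request
import io.finch._
import org.slf4j.Logger

type App[A] = ReaderT[IO, Logger, A]

// No NoopLogger needed: the ReaderT-based endpoint only ever runs after
// a proper logger has been built from the request.
def provideLogger(loggerFor: Request => Logger)(
    compiled: Endpoint.Compiled[App]): Endpoint.Compiled[IO] =
  Kleisli { (req: Request) =>
    compiled(req).run(loggerFor(req))
  }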
Pavel Borobov
@blvp

Hello everyone.
I'm facing a little problem using Finch to build a JSON REST API application.
In my application I have two entities, User and Pet, and they both have CRUD-like operations.
Both API groups have their own encoders and decoders (I'm using circe).

class UserResources[F[_]](userRepo: UserRepo[F]) extends Endpoint.Module[F] with UserCodecs {
   val create: Endpoint[F, User] = post("user" :: jsonBody[User]) { user: User => userRepo.save(user).map(Ok(_)) }
   // renamed from get: a val named get would shadow the get() endpoint builder it calls
   val read: Endpoint[F, User] = get("user" :: path[Long]) { userId: Long => userRepo.findById(userId).map(Ok(_)) }
   ...
   val endpoints = (create :+: read)
}
class PetResources[F[_]](petRepo: PetRepo[F]) extends Endpoint.Module[F] with PetCodecs {
   val create: Endpoint[F, Pet] = post("pet" :: jsonBody[Pet]) { pet: Pet => petRepo.save(pet).map(Ok(_)) }
   val read: Endpoint[F, Pet] = get("pet" :: path[Long]) { petId: Long => petRepo.findById(petId).map(Ok(_)) }
   ...
   val endpoints = (create :+: read)
}

Then I use them in this fashion:

val allEndpoints = (new UserResources(repo).endpoints :+: new PetResources(repo2).endpoints)
val api = Bootstrap.serve[Application.Json](allEndpoints).toService

But this call requires the same Encoder/Decoder instances, which are defined in the *Codecs traits, to be in scope for the .toService call to materialise the service. I understand why the instances are needed in both situations.

Could you please suggest how I can organise the code in a similar fashion, but without duplicating the codec instances?

Sergey Kolbasov
@sergeykolbasov

Hi @blvp

The best practice in Scala is to put type class instances for specific types into their companion objects whenever possible.

The compiler picks them up from there on its own, without any imports.

If you need to define instances for types outside of your application (library types, say), you might keep them inside a package object (or just an object) and import those implicits from there.

Pavel Borobov
@blvp
Yeah, but that requires them to be imported, first inside your Resource class and then in the component that combines several resources into a service. So in my example it would still require both the User and Pet instances in scope where I call .toService
Sergey Kolbasov
@sergeykolbasov
You don't need to import anything if you put the implicit encoders and decoders into the corresponding companion objects (User and Pet in your example)
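
A minimal sketch of that arrangement with circe's semi-auto derivation (the fields are illustrative):

import io.circe.{Decoder, Encoder}
import io.circe.generic.semiauto.{deriveDecoder, deriveEncoder}

final case class User(id: Long, name: String)

object User {
  // Living in the companion object, these instances are found by implicit
  // search wherever Encoder[User]/Decoder[User] is needed; no imports required.
  implicit val encoder: Encoder[User] = deriveEncoder
  implicit val decoder: Decoder[User] = deriveDecoder
}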