Dermot Haughey
@hderms
So if I have a Finagle admission control filter and a request is not admitted, is it fair to assume it will show up as a 503 somewhere? Currently that's what I experience when I run a load test against a server with admission control on. The documentation implies that there's some conditionality w.r.t. how it responds based on the protocol in use, so I'm just trying to make sure I'm understanding this correctly
Alessandro Vermeulen
@spockz
Interesting. Whenever I configure our HTTP client with an SSLContext directly, TLS resumption works. If I use SslClientConfiguration it doesn't work anymore. I don't even see the "Client cached" debug log statements, so no resumption is attempted at all
Has anyone else experienced this issue?
Alessandro Vermeulen
@spockz
@ryanoneill does this behaviour ring a bell? My gut feeling is that an SSL engine or context is not being reused. I'll port the reproducer from our code to an MR on Finagle tomorrow so I can put it in an issue
Ryan O'Neill
@ryanoneill

@spockz Yes. By default it's not. What's available in OSS Finagle is also a little different than what we use internally. If you look at the apply method of the Netty4ClientEngineFactory you can see where it recreates an SSLContext each time. https://github.com/twitter/finagle/blob/develop/finagle-netty4/src/main/scala/com/twitter/finagle/netty4/ssl/client/Netty4ClientEngineFactory.scala

That's far less than ideal, but it's also the safest.

We use ones internally that don't recreate the contexts each time, but they are very tied to how our security infrastructure works.
We also aren't particularly concerned with resumption, as Finagle is mostly used for service to service communication and connections are generally long-lived. When you're reconnecting, it's almost always because a service has redeployed.
We rewrote all of this though when we moved to using it more internally
Alessandro Vermeulen
@spockz
@ryanoneill alright. What would be the best way forward to fix this? Connections also tend to be closed on 5xx errors, and when scaling up due to higher load the SSL handshake overhead costs significant CPU.
Why is recreating the context every time safest?
matrixbot
@matrixbot

Alessandro Vermeulen (replying to @ryanoneill): A long time ago (4 years or so?), Finagle used to have a context cache - https://github.com/twitter/finagle/blob/finagle-6.25.0/finagle-core/src/main/scala/com/twitter/finagle/ssl/OpenSSL.scala#L114

What would be the problem with memoizing this create function? I suppose the context could be closed at some point, maybe?

Alessandro Vermeulen
@spockz
Okay, using Matrix to join this room looks really strange.
Ryan O'Neill
@ryanoneill

This is not something I've looked at recently, so good chance I'm not 100% accurate here:

"Also when scaling up due to higher load" - so if you want old clients connecting to new instances of the same server to not negotiate and resume as if they were talking to old instances, you'll need to have shared state amongst your servers. Cloudflare talks about that here - https://blog.cloudflare.com/tls-session-resumption-full-speed-and-secure/

To play around and try things out in Finagle, you'll need to write your own SslClientEngineFactory (and corresponding server one if the server side is Finagle). You can use the Netty4ClientEngineFactory as a starting point https://github.com/twitter/finagle/blob/develop/finagle-netty4/src/main/scala/com/twitter/finagle/netty4/ssl/client/Netty4ClientEngineFactory.scala

You can then use that engine factory by supplying it along with your SslClientConfiguration to the tls method that you're using (they all should take an engine factory as an optional second argument).

As an example of one that we use internally outside of service-to-service communication: it precreates 4 contexts based on the potential parameters that apply will be called with, and then selects the appropriate one. Depending on your scenario, you may be able to do something similar. That will get you part of the way there.
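A minimal sketch of that idea, caching one SSLContext per client configuration and reusing it across connections. The SslClientEngineFactory apply signature and the SslContextClientEngineFactory constructor shown here are assumptions based on this thread, not a verified drop-in implementation:

import javax.net.ssl.SSLContext
import scala.collection.concurrent.TrieMap
import com.twitter.finagle.Address
import com.twitter.finagle.ssl.Engine
import com.twitter.finagle.ssl.client.{SslClientConfiguration, SslClientEngineFactory, SslContextClientEngineFactory}

// Builds the SSLContext once per distinct client configuration and reuses it,
// instead of recreating it for every new connection.
class CachingClientEngineFactory(mkContext: SslClientConfiguration => SSLContext)
    extends SslClientEngineFactory {

  private[this] val delegates =
    TrieMap.empty[SslClientConfiguration, SslContextClientEngineFactory]

  def apply(address: Address, config: SslClientConfiguration): Engine = {
    val delegate = delegates.getOrElseUpdate(
      config,
      new SslContextClientEngineFactory(mkContext(config))
    )
    delegate(address, config)
  }
}

// Hypothetical wiring, passing the factory as the optional second argument to tls:
// Http.client
//   .withTransport.tls(SslClientConfiguration(), new CachingClientEngineFactory(buildContext))
//   .newService("example.com:443")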

Alessandro Vermeulen
@spockz
@ryanoneill this is not about sharing sessions between hosts of the producing servers, either with a shared session database or session tickets. I'm referring to re-using the same SSL session for multiple HTTP connections between the same client instance and server instance. It appears that the engine is selected on every pipeline init, which happens for every HTTP connection.
So it looks like memoizing/caching the engine per endpoint stack would solve our issues. When using .tls(sslContext) that context would indeed be shared across all instances of a producer, which is not what we are looking for now.
In our case the SSLContext was a singleton for the server side and all downstream connections.
Alessandro Vermeulen
@spockz
@ryanoneill I've created twitter/finagle#874 to discuss this further
James Lawrie
@jlawrienyt
Hey folks, I was taking a look at Finagle's backup request filter and noticed that the default highestTrackableMsValue being used is 2 seconds. What are the ramifications of adjusting that value, setting it to say the logical request timeout? Is it a performance concern with the histogram itself? Or is the idea that there's less value in sending backup requests for responses taking that long?
Alessandro Vermeulen
@spockz
I think that is because the window itself is two seconds wide?
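For context, backup requests are usually enabled through MethodBuilder, and the 2-second ceiling is for the latency histogram that BackupRequestFilter keeps to pick the backup send time. A minimal sketch (the dest and method name here are hypothetical):

import com.twitter.finagle.Http

// idempotent(...) marks requests as safe to retry and turns on BackupRequestFilter;
// the argument bounds how much extra traffic may be sent as backups (1% here).
val svc = Http.client
  .methodBuilder("example.com:443")
  .idempotent(0.01)
  .newService("get-resource")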
Alessandro Vermeulen
@spockz
@ryanoneill FYI, we switched back to the SslContextClientEngineFactory and TLS connections are now established 10x faster overall when reconnecting to the same instance, and the first connection is 2.25x faster than with the Netty4ClientEngineFactory.
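A rough sketch of that switch, assuming SslContextClientEngineFactory simply wraps a pre-built JSSE SSLContext (constructor shape not checked against a specific release):

import javax.net.ssl.SSLContext
import com.twitter.finagle.Http
import com.twitter.finagle.ssl.client.{SslClientConfiguration, SslContextClientEngineFactory}

val sslContext: SSLContext = SSLContext.getDefault // a tuned context in practice
val client = Http.client
  .withTransport
  .tls(SslClientConfiguration(), new SslContextClientEngineFactory(sslContext))
  .newService("example.com:443")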
Alessandro Vermeulen
@spockz
I expected the connections with session reuse to be faster, but I'm puzzled about why the first connection with the SSLContext should be faster when it uses JSSE instead of BoringSSL.
This is on GraalVM 1.8_252 btw
Andre Souza
@andrerocker
Hey folks, given an Http.client request, is there an easy way to send the HAProxy proxy protocol header to the target host? Thanks in advance
Beck Gaël
@beckgael
Hi, I'm new to microservices and haven't yet established which of Finagle, Finch, or Finatra fits my use case best. While reading about the disadvantages of RPC technologies, I came across the claim that RPC has issues when dealing with large quantities of data. In my use case I will need to load large datasets from a DB to process them. Are there really any restrictions on the size / load speed of that kind of data with RPC, and more specifically with Finagle and the frameworks on top of it?
Thank you.
Alessandro Vermeulen
@spockz
@beckgael most RPC doesn't support resumption or other niceties of, say, SFTP, but that depends more on the client and server business logic than on RPC itself.
Hamdi Allam
@hamdiallam
Finagle supports streaming over HTTP which works well for large payloads
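A minimal sketch of that: with streaming enabled, the response body arrives as a Reader[Buf] that can be consumed chunk by chunk rather than buffered in memory (the exact read() signature varies a little across Finagle versions):

import com.twitter.finagle.Http
import com.twitter.finagle.http.Request
import com.twitter.io.{Buf, Reader}
import com.twitter.util.Future

// Streaming client: responses are not aggregated before being handed back.
val client = Http.client
  .withStreaming(true)
  .newService("example.com:80")

// Drain the body chunk by chunk instead of loading it all at once.
def consume(reader: Reader[Buf]): Future[Unit] =
  reader.read().flatMap {
    case Some(chunk) => consume(reader) // process chunk incrementally here
    case None        => Future.Done
  }

val done: Future[Unit] =
  client(Request("/large-dataset")).flatMap(rsp => consume(rsp.reader))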
Roger
@rogern
Hello! I'm using Scrooge to build Thrift structures and I've run into a complicated include problem. I have a common library made of several projects with Thrift files, where one of the projects acts as a main module with shared types in a single Thrift file. The whole library is then used in many services that also have their own Thrift structures requiring the shared types from both the main module and the other projects of the common library. This makes it really hard to rely on include in the Thrift files, because the paths differ depending on which context you are in. I can't get it to work both for builds in the common library and from the jar in services, and I can't find any docs on how to organise a build like this. It has led us to a lot of workarounds with custom plugins etc. What would the recommended way of setting this up be?
Roger
@rogern
I solved it with a combination of scroogeThriftSources and scroogeThriftIncludeFolders. Not sure if this is the best approach since it means a "copy" of the shared types file. But it works for us.
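For illustration, a build.sbt sketch of that combination (sbt 1.x syntax with the sbt-scrooge keys mentioned above; the folder layout and file names are hypothetical):

// Make the shared-types folder visible to Scrooge wherever these files are compiled,
// so the same include "shared.thrift" line resolves both in the common library build
// and in downstream services.
Compile / scroogeThriftIncludeFolders +=
  baseDirectory.value / "src" / "main" / "thrift" / "shared"

// Also compile the shared file in this module (the "copy" mentioned above).
Compile / scroogeThriftSources +=
  baseDirectory.value / "src" / "main" / "thrift" / "shared" / "shared.thrift"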
Vladimir Ivanovskiy
@vi-p4f
Hi! Are there any particular reasons why a "fixedinet" scheme is not implemented in Path resolution?
Namer.scala:84
def unapply(path: Path): Option[(Var[Addr], Path)] = path match {
  case Path.Utf8("$", "inet", host, IntegerString(port), residual @ _*) =>
    Some((resolve(host, port), Path.Utf8(residual: _*)))
  case Path.Utf8("$", "inet", IntegerString(port), residual @ _*) =>
    // no host provided means localhost
    Some((resolve("", port), Path.Utf8(residual: _*)))
  case _ => None
}
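For reference, the /$/inet case above is what a destination path like the following resolves through; host and port are plain path segments:

import com.twitter.finagle.Http

val svc = Http.client.newService("/$/inet/localhost/8080")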
Moses Nakamura
@mosesn
@spockz we're looking at changing how the trace id gets printed, I remember you had some way you wanted to change it, right? Can you remind me what it was?
renyuliu
@renyuliu
Hi, does anyone know how to configure the Redis client with TLS enabled?
peterstorm
@peterstorm:matrix.org
Is the Twitter Future lazy or eager?
Alessandro Vermeulen
@spockz
@mosesn not necessarily the format but the contents. We use OpenTracing instead of the built-in tracer and I want the ability to show our trace information in Failures
And somewhere in our infinite wisdom we chose 128 bits for both span and trace id
Moses Nakamura
@mosesn
@peterstorm:matrix.org Futures don't have work attached to them, so I wouldn't call them either eager or lazy. NB: calling Future.apply(a) to construct a Future executes a synchronously; a is call-by-name to catch thrown exceptions, not to trigger work on a different thread. You can think of a Future as being like a box, but the work that is done to fill that box is not attached to the box, except for interruptions, which are advisory
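A small illustration of that point using com.twitter.util (the println is only there to show where evaluation happens):

import com.twitter.util.{Await, Future, Promise}

// Future.apply evaluates its by-name argument immediately, on the calling thread;
// the by-name-ness only turns a thrown exception into a failed Future.
val f: Future[Int] = Future { println("evaluated right now, synchronously"); 42 }

// A Future is the read-only "box"; a Promise is the writable side that fills it.
val p = new Promise[Int]
val g: Future[Int] = p
p.setValue(1)            // whoever actually does the work fills the box
println(Await.result(g)) // 1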
peterstorm
@peterstorm:matrix.org
Well, another way of asking, I guess, is if it's referentially transparent?
Moses Nakamura
@mosesn
@spockz good to know, thanks!
Alessandro Vermeulen
@spockz
@mosesn do you see a way for us to be able to control which tracing info is added in a failure?
peterstorm
@peterstorm:matrix.org
Can I somehow configure a finagle-thrift client to do streaming?
Moses Nakamura
@mosesn
@spockz hmm, I’m not sure.
@peterstorm:matrix.org not right now. there isn’t a streaming protocol for thrift, afaik.
you can stream thrift objects over HTTP/2 if you want, but Finagle doesn't have any special support for it.
Alessandro Vermeulen
@spockz
@mosesn would you be willing to consider an MR to make the trace id type definable by the user?
Alessandro Vermeulen
@spockz
@mosesn or switch to using the OpenTracing/OpenCensus interface, that would be ideal for us
Moses Nakamura
@mosesn
possibly! I think we're still a little nervous about adopting OT because it still feels like the early days. OpenTelemetry was announced less than two years ago, and we're not as confident it will be a long-term standard as something like slf4j
I think a PR to change how the trace id is displayed would probably be accepted. making the type flexible would probably be pretty invasive though, so we’d want to discuss it before doing something like that
hllrsr
@hllrsr

Hi everyone!

I've got a question about dependency versions and security issues. The latest release of Finagle uses ZooKeeper version "3.5.0-alpha", which depends on a vulnerable version of io.netty/netty 3.x.
After reading some issues and PRs, I saw this issue (twitter/finagle#665) mentioning changes that make sure Finagle is no longer using Netty 3.x.
What I'm trying to figure out is whether Finagle has dropped Netty 3.x only within the Finagle project itself, or across the board (i.e. even third-party dependencies that use an outdated Netty version have their Netty dependency overridden).
If the issue I've mentioned only takes effect within the Finagle project, should a bump of the ZooKeeper version be considered?

Thanks!

Alessandro Vermeulen
@spockz
@mosesn OpenTelemetry is indeed new, but with adoption by Spring and other JVM frameworks, plus integration with OpenCensus and OpenTracing, it seems the most stable option. If at least the Tracer interface could be made compatible with OpenTelemetry tracing, it would be a benefit