:wave: Hi! I'm wondering if anyone knows of an HTTP decoder that would allow me to detect HTTP/2 vs. HTTP/1.x so I can support both versions on the same port. I'd like to configure the channel pipeline after the protocol version has been detected. I've seen the Netty example Http2OrHttpHandler, but my understanding is that it only works if you're using SSL, which I'm not.
If such a decoder doesn't exist would it be possible to write one? I'm imagining something like the PrefaceDecoder in Http2ConnectionHandler that triggers an event after reading the preface instead of throwing an error.
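A minimal, untested sketch of such a detector: a ByteToMessageDecoder that compares the first received bytes against the HTTP/2 connection preface and reconfigures the pipeline accordingly (the class name and the configureHttp1/configureHttp2 helpers are placeholders). Netty's CleartextHttp2ServerUpgradeHandler takes a similar approach for the h2c prior-knowledge case.
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufUtil;
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.ByteToMessageDecoder;
import io.netty.util.CharsetUtil;
import java.util.List;

public class CleartextHttp2Detector extends ByteToMessageDecoder {

    private static final ByteBuf HTTP2_PREFACE = Unpooled.unreleasableBuffer(
            Unpooled.copiedBuffer("PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n", CharsetUtil.US_ASCII));

    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
        int prefaceLength = HTTP2_PREFACE.readableBytes();
        int bytesToCompare = Math.min(in.readableBytes(), prefaceLength);
        if (!ByteBufUtil.equals(in, in.readerIndex(),
                HTTP2_PREFACE, HTTP2_PREFACE.readerIndex(), bytesToCompare)) {
            // Not the HTTP/2 preface: assume HTTP/1.x. Buffered bytes are passed
            // down the pipeline automatically when this decoder is removed.
            configureHttp1(ctx);
            ctx.pipeline().remove(this);
        } else if (bytesToCompare == prefaceLength) {
            // Full preface seen: switch to HTTP/2 and leave the preface in the
            // buffer for the HTTP/2 handler to consume itself.
            configureHttp2(ctx);
            ctx.pipeline().remove(this);
        }
        // Otherwise: a partial match so far, wait for more bytes.
    }

    private void configureHttp1(ChannelHandlerContext ctx) { /* add HttpServerCodec etc. */ }

    private void configureHttp2(ChannelHandlerContext ctx) { /* add Http2ConnectionHandler / Http2FrameCodec etc. */ }
}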
Hi! I'm wondering how to make the proxy's client connection keep-alive (that is, I don't want the proxy client to do a TCP close handshake every time).
I saw the proxy example in Netty.
Adding the keepAlive option to this example doesn't seem to work properly, because it creates a client and connects every time the server gets a request, then closes the client once the response arrives.
Does anyone know how to make the proxy client keep-alive? Is there any reference/example for it?
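For reference, a rough, untested sketch of opening one backend connection per client connection and reusing it for every request, instead of connecting and closing per request. Note that the SO_KEEPALIVE channel option only enables TCP keepalive probes; it does not by itself keep the backend connection open across requests. The class name, the remoteHost/remotePort fields and the inline backend handler are illustrative.
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelOption;

public class ReusingProxyFrontendHandler extends ChannelInboundHandlerAdapter {

    private final String remoteHost; // illustrative: backend address known up front
    private final int remotePort;
    private Channel outboundChannel;

    public ReusingProxyFrontendHandler(String remoteHost, int remotePort) {
        this.remoteHost = remoteHost;
        this.remotePort = remotePort;
    }

    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        final Channel inbound = ctx.channel();
        Bootstrap b = new Bootstrap()
                .group(inbound.eventLoop())
                .channel(inbound.getClass())
                .option(ChannelOption.SO_KEEPALIVE, true)
                // Relay backend responses to the client; never close the backend
                // channel just because a response arrived.
                .handler(new ChannelInboundHandlerAdapter() {
                    @Override
                    public void channelRead(ChannelHandlerContext backendCtx, Object msg) {
                        inbound.writeAndFlush(msg);
                    }
                });
        outboundChannel = b.connect(remoteHost, remotePort).channel();
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        // Forward every client request over the same long-lived backend channel.
        // (In real code, buffer or wait for the connect future before writing.)
        outboundChannel.writeAndFlush(msg);
    }

    @Override
    public void channelInactive(ChannelHandlerContext ctx) {
        // Tear the backend connection down only when the client disconnects.
        if (outboundChannel != null) {
            outboundChannel.close();
        }
    }
}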
Getting the error below a couple of times while uploading the same multipart file.
io.netty.handler.codec.http.multipart.HttpPostRequestDecoder$ErrorDataDecoderException: java.io.IOException: Out of size: 990 > 989
Caused by: java.io.IOException: Out of size: 990 > 989
    at io.netty.handler.codec.http.multipart.AbstractMemoryHttpData.addContent(AbstractMemoryHttpData.java:104)
    at io.netty.handler.codec.http.multipart.HttpPostMultipartRequestDecoder.loadDataMultipartOptimized(HttpPostMultipartRequestDecoder.java:1190)
Any guidance will be very much appreciated.
Hello, I have some trouble with netty-tcnative:
java.lang.IllegalArgumentException: Failed to load any of the given libraries: [netty_tcnative_linux_x86_64, netty_tcnative_linux_x86_64_fedora, netty_tcnative_x86_64, netty_tcnative]
Epoll loads fine, but OpenSSL does not, even though the tc-native jar is there. I am using eclipse-temurin:17.0.2_8-jdk-focal, Netty 4.1.73 and tc-native v2.0.48.
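One quick way to see the underlying reason the native library fails to load is to ask Netty's OpenSsl class directly; a minimal sketch (the class name is illustrative):
import io.netty.handler.ssl.OpenSsl;

public final class TcNativeCheck {
    public static void main(String[] args) {
        System.out.println("OpenSSL available: " + OpenSsl.isAvailable());
        if (!OpenSsl.isAvailable()) {
            // Prints the underlying load failure, e.g. a missing OS/arch classifier
            // on the netty-tcnative dependency or missing native dependencies.
            OpenSsl.unavailabilityCause().printStackTrace();
        }
    }
}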
public static void zeroCopyFile(FileChannel targetChannel, ByteBuf content) throws IOException {
    ReadableByteChannel source = new ByteBufChannel(content);
    long position = 0;
    long remaining = content.readableBytes();
    // transferFrom may transfer fewer bytes than requested, so loop until done
    while (remaining > 0) {
        long transferred = targetChannel.transferFrom(source, position, remaining);
        if (transferred <= 0) {
            break;
        }
        position += transferred;
        remaining -= transferred;
    }
}
private static class ByteBufChannel implements ReadableByteChannel {
    private final ByteBuf byteBuf;
    public ByteBufChannel(ByteBuf byteBuf) {
        this.byteBuf = byteBuf;
    }
    @Override
    public boolean isOpen() {
        return true;
    }
    @Override
    public void close() throws IOException {
    }
    @Override
    public int read(ByteBuffer dst) throws IOException {
        if (!byteBuf.isReadable()) {
            return -1; // signal end-of-stream once the ByteBuf is drained
        }
        // Copy at most dst.remaining() bytes and advance readerIndex so progress is reported
        int length = Math.min(dst.remaining(), byteBuf.readableBytes());
        dst.put(byteBuf.nioBuffer(byteBuf.readerIndex(), length));
        byteBuf.readerIndex(byteBuf.readerIndex() + length);
        return length;
    }
}
Hello Netty!
We are currently in the process of porting the Apache James server to Netty 4 and have encountered some problems along the way. They go beyond my current understanding of Netty, so is it OK if I ask for help here?
Currently I'm running into issues with SMTP pipelining: the client sends all the SMTP requests in a single network write.
Socket client = new Socket(bindedAddress.getAddress().getHostAddress(), bindedAddress.getPort());
StringBuilder buf = new StringBuilder(); // declaration added for completeness; not in the original snippet
buf.append("HELO TEST");
buf.append("\r\n");
buf.append("MAIL FROM: <test@localhost>");
buf.append("\r\n");
buf.append("RCPT TO: <test2@localhost>");
buf.append("\r\n");
buf.append("DATA");
buf.append("\r\n");
buf.append("Subject: test");
buf.append("\r\n");
buf.append("\r\n");
buf.append("content");
buf.append("\r\n");
buf.append(".");
buf.append("\r\n");
buf.append("quit");
buf.append("\r\n");
OutputStream out = client.getOutputStream();
out.write(buf.toString().getBytes());
out.flush();
The way DATA is handled is that a handler is added before the core SMTP handler: https://github.com/chibenwa/james-project/blob/ffc0d4a8b22508b8f5b58594d14041d1f6bc3acf/protocols/netty/src/main/java/org/apache/james/protocols/netty/NettyProtocolTransport.java#L162
channel.pipeline().addBefore(eventExecutors, HandlerConstants.CORE_HANDLER, "lineHandler" + lineHandlerCount, new LineHandlerUpstreamHandler(session, overrideCommandHandler));
Everything works fine if the core handler is running on the event loop; however, once we switch it to a distinct executor, the pipeline modification is no longer applied and the subsequent message content is interpreted as SMTP commands, as if the line handler was not there.
// Fails
pipeline.addLast(eventExecutorGroup, HandlerConstants.CORE_HANDLER, createHandler());
// Succeeds
pipeline.addLast(HandlerConstants.CORE_HANDLER, createHandler());
Is there any way to modify the pipeline 'synchronously' from outside the event loop?
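One possible, untested sketch: run the pipeline change on the channel's event loop and pause reads until it has been applied (this assumes autoRead can be toggled for the connection; the identifiers are the ones from the snippet above):
// Pause reading so no further lines are decoded before the new handler is in place.
channel.config().setAutoRead(false);
channel.eventLoop().execute(() -> {
    channel.pipeline().addBefore(HandlerConstants.CORE_HANDLER,
            "lineHandler" + lineHandlerCount,
            new LineHandlerUpstreamHandler(session, overrideCommandHandler));
    // Resume reading only once the pipeline modification has taken effect.
    channel.config().setAutoRead(true);
});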
Thanks in advance.
Edit: I found a workaround for this issue by modifying the core handler to allow overrides of its behaviour, thus not requiring any further modifications of the pipeline, at the price of a not-too-costly refactoring.
I think I will head toward encapsulating all those behaviour modifications into a single handler, thus getting rid of all that thread-ordering madness.
Things might get bloodier for IMAP though, as request decoding is done in a separate handler (request parsing, then request execution), so overrides happen before request decoding...
Hi everyone!
I just started with Netty -> DotNetty -> SpanNetty. Sorry about the C# code :)
I'm working on a decoder for my custom protocol. Currently I'm using a ReplayingDecoder<Enum> decoder.
Now I'm running into IndexOutOfRangeException: readerIndex(4) + length(1280) exceeds writerIndex(10): PooledHeapByteBuffer(ridx: 4, widx: 10, cap: 256)
So I tried to fix it as follows:
if (input.ReadableBytes >= _messageFrame.Length)
{
    _messageFrame.Payload = input.ReadBytes(_messageFrame.Length);
    Checkpoint(ProtocolDecoderState.ReadEpilog);
}
which breaks my next switch case (case ProtocolDecoderState.ReadEpilog), because I expect the epilog to be a certain byte.
Here is the whole code:
protected override void Decode(IChannelHandlerContext context, IByteBuffer input, List<object> output)
{
    switch (State)
    {
        case ProtocolDecoderState.ReadProlog:
            _messageFrame.Prolog = input.ReadByte();
            Checkpoint(ProtocolDecoderState.ReadMessageType);
            break;
        case ProtocolDecoderState.ReadMessageType:
            _messageFrame.SetMessageType(input.ReadByte());
            Checkpoint(ProtocolDecoderState.ReadLength);
            break;
        case ProtocolDecoderState.ReadLength:
            _messageFrame.Length = input.ReadShort();
            Checkpoint(ProtocolDecoderState.ReadPayload);
            break;
        case ProtocolDecoderState.ReadPayload:
            if (input.ReadableBytes >= _messageFrame.Length)
            {
                _messageFrame.Payload = input.ReadBytes(_messageFrame.Length);
                Checkpoint(ProtocolDecoderState.ReadEpilog);
            }
            break;
        case ProtocolDecoderState.ReadEpilog:
            _messageFrame.Epilog = input.ReadByte();
            output.Add(ProtocolHelper.CovertByteFrameToMessage(_messageFrame));
            Checkpoint(ProtocolDecoderState.ReadProlog);
            _messageFrame = new MessageFrame();
            break;
        default:
            throw new InvalidDataException("Shouldn't reach here.");
    }
}
Do I need to use the ByteToMessageDecoder instead? Can I not use ReadBytes(int length) in the ReplayingDecoder?
Thanks!!
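For comparison, a minimal, untested Java sketch (against Netty, which SpanNetty ports) of decoding the same frame layout with ByteToMessageDecoder: buffer until a whole frame is available, then consume it in one pass. MessageFrame and its constructor here are illustrative.
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.ByteToMessageDecoder;
import java.util.List;

public class MessageFrameDecoder extends ByteToMessageDecoder {

    private static final int HEADER_LENGTH = 1 + 1 + 2; // prolog + type + length
    private static final int EPILOG_LENGTH = 1;

    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
        if (in.readableBytes() < HEADER_LENGTH) {
            return; // not even a full header yet
        }
        in.markReaderIndex();
        byte prolog = in.readByte();
        byte messageType = in.readByte();
        int payloadLength = in.readUnsignedShort();
        if (in.readableBytes() < payloadLength + EPILOG_LENGTH) {
            in.resetReaderIndex(); // wait for the rest of the frame
            return;
        }
        ByteBuf payload = in.readRetainedSlice(payloadLength);
        byte epilog = in.readByte();
        out.add(new MessageFrame(prolog, messageType, payload, epilog)); // hypothetical frame type
    }
}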
Hi, I am currently trying to make ECDSA-related ciphers work with TLS 1.2 in Spring Cloud Gateway (Spring Boot Parent 2.6.7 and Spring Cloud 2021.0.2). Here's the snippet of the WebServerFactoryCustomizer:
@Bean
public WebServerFactoryCustomizer<NettyReactiveWebServerFactory> customizer() {
    return factory -> factory.addServerCustomizers(httpServer -> httpServer.secure(sslContextSpec -> {
        try {
            Ssl ssl = factory.getSsl();
            KeyStore keyStore = KeyStore.getInstance(KeyStore.getDefaultType());
            char[] keyStorePassword = ssl.getKeyStorePassword().toCharArray();
            keyStore.load(resourceLoader.getResource(ssl.getKeyStore()).getInputStream(), keyStorePassword);
            KeyManagerFactory keyManagerFactory = OpenSslCachingX509KeyManagerFactory
                    .getInstance(KeyManagerFactory.getDefaultAlgorithm());
            keyManagerFactory.init(keyStore, keyStorePassword);
            Http11SslContextSpec http11SslContextSpec = Http11SslContextSpec.forServer(keyManagerFactory)
                    .configure(sslContextBuilder -> {
                        sslContextBuilder.sslProvider(SslProvider.OPENSSL);
                        sslContextBuilder.ciphers(Arrays.asList(ssl.getCiphers()));
                        sslContextBuilder.protocols(ssl.getEnabledProtocols());
                        sslContextBuilder.trustManager(InsecureTrustManagerFactory.INSTANCE);
                        sslContextBuilder.clientAuth(ClientAuth.REQUIRE);
                    });
            sslContextSpec.sslContext(http11SslContextSpec)
                    .handlerConfigurator(sslHandler -> {
                        sslHandler.setCloseNotifyReadTimeout(18000, TimeUnit.MILLISECONDS);
                        sslHandler.setHandshakeTimeout(19000, TimeUnit.MILLISECONDS);
                        SSLParameters sslParameters = sslHandler.engine().getSSLParameters();
                        sslParameters.setUseCipherSuitesOrder(false);
                        sslHandler.engine().setSSLParameters(sslParameters);
                    });
        } catch (UnrecoverableKeyException | IOException | CertificateException | KeyStoreException |
                 NoSuchAlgorithmException e) {
            throw new RuntimeException(e);
        }
    }));
}
But when I try to connect using openssl s_client with the ECDHE-ECDSA-AES128-GCM-SHA256 cipher, the server returns a "no shared cipher" error, even though I do have it in the configuration:
server.ssl.ciphers=TLS_RSA_WITH_AES_128_GCM_SHA256,\
TLS_RSA_WITH_AES_256_GCM_SHA384,\
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,\
TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,\
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
server.ssl.enabled-protocols=TLSv1.2
This behavior appeared when I upgraded from Spring Boot 2.3.3.RELEASE and Spring Cloud Hoxton.SR7. Also, if I switch to the JDK as the SSL provider it works as expected; the issue only occurs with OpenSSL as the provider. Any advice or suggestions on fixing or correctly configuring this would be a great help.
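As a diagnostic, it may help to print which cipher suites the OpenSSL-based provider actually reports as available and compare them against server.ssl.ciphers; a minimal sketch (the class name is illustrative):
import io.netty.handler.ssl.OpenSsl;

public final class OpenSslCipherCheck {
    public static void main(String[] args) {
        // True only if netty-tcnative / the native SSL library loaded successfully.
        System.out.println("OpenSSL available: " + OpenSsl.isAvailable());
        // Java-style cipher suite names the native provider can negotiate.
        for (String cipher : OpenSsl.availableJavaCipherSuites()) {
            System.out.println(cipher);
        }
        System.out.println("ECDSA suite supported: "
                + OpenSsl.isCipherSuiteAvailable("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"));
    }
}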
Hi,
We have seen errors in the logs, but we are no longer able to reproduce them manually.
Do you have any idea what the cause might be?
The exception is:
Jul 06, 2022 7:13:39 AM reactor.util.Loggers$Slf4JLogger error
SEVERE: [9b7e395e-1, L:/10.219.105.156:4089 - R:/104.28.98.59:30389] Error finishing response. Closing connection
java.lang.ArrayIndexOutOfBoundsException: Index 67108863 out of bounds for length 8
at io.netty.buffer.PoolSubpage.allocate(PoolSubpage.java:92)
at io.netty.buffer.PoolArena.tcacheAllocateSmall(PoolArena.java:165)
at io.netty.buffer.PoolArena.allocate(PoolArena.java:134)
at io.netty.buffer.PoolArena.reallocate(PoolArena.java:287)
at io.netty.buffer.PooledByteBuf.capacity(PooledByteBuf.java:122)
at io.netty.buffer.AbstractByteBuf.ensureWritable(AbstractByteBuf.java:333)
at io.netty.handler.ssl.SslHandler.attemptCopyToCumulation(SslHandler.java:2334)
at io.netty.handler.ssl.SslHandler.access$2800(SslHandler.java:168)
at io.netty.handler.ssl.SslHandler$SslHandlerCoalescingBufferQueue.compose(SslHandler.java:2295)
at io.netty.channel.AbstractCoalescingBufferQueue.remove(AbstractCoalescingBufferQueue.java:176)
at io.netty.handler.ssl.SslHandler.wrap(SslHandler.java:817)
at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1439)
at io.netty.handler.ssl.SslHandler.decodeNonJdkCompatible(SslHandler.java:1246)
at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1286)
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:510)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:449)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:279)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:800)
at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:829)
Our Netty version is 4.1.75.Final.
Thanks in advance
Runnable tasks to capture times for histogram metrics, which sounds like it would work, but short of wrapping/delegating the EventLoopGroup/EventLoop I haven't found an obvious way to intercept the tasks submitted to the event loop. Any pointers?
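A minimal, untested sketch of one workaround: wrap the Runnable at the submission site to record queue delay and run time. This only covers tasks you submit yourself, not Netty's internal submissions; the recordNanos callback is just an illustrative stand-in for your histogram metric.
import io.netty.channel.EventLoop;
import java.util.function.ObjLongConsumer;

final class TimedTasks {
    static void executeTimed(EventLoop loop, String name, Runnable task,
                             ObjLongConsumer<String> recordNanos) {
        long submitted = System.nanoTime();
        loop.execute(() -> {
            long started = System.nanoTime();
            // Time spent waiting in the event loop's task queue.
            recordNanos.accept(name + ".queueDelayNanos", started - submitted);
            try {
                task.run();
            } finally {
                // Time spent actually running on the event loop.
                recordNanos.accept(name + ".runNanos", System.nanoTime() - started);
            }
        });
    }
}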
{"instant":{"epochSecond":1659945933,"nanoOfSecond":570231204},"thread":"ingress-h2-epoll-4","level":"DEBUG","loggerName":"reactor.netty.http.server.HttpServer","message":"[48bc3437, L:/10.233.109.48:8443 - R:/10.233.121.74:33256] EXCEPTION: io.netty.util.IllegalReferenceCountException: SslHandler#decode() might have released its input buffer, or passed it down the pipeline without a retain() call, which is not allowed.","thrown":{"commonElementCount":0,"localizedMessage":"SslHandler#decode() might have released its input buffer, or passed it down the pipeline without a retain() call, which is not allowed.","message":"SslHandler#decode() might have released its input buffer, or passed it down the pipeline without a retain() call, which is not allowed.","name":"io.netty.util.IllegalReferenceCountException","cause":{"commonElementCount":14,"localizedMessage":"refCnt: 0, decrement: 1","message":"refCnt: 0, decrement:
What is the recommended interval for this property: -Dio.netty.allocator.cacheTrimIntervalMillis?
We have an application which is mainly based on Reactor Netty and which uses Caffeine as its cache.
From the metrics we see that the ByteBuf allocator occasionally allocates new ByteBufs and therefore the use of direct memory increases.
We have already investigated possible memory leaks and have not found anything.
Reproducing the situation in development, we tried setting the io.netty.allocator.cacheTrimIntervalMillis property to make sure that any unused ByteBufs are deallocated. The situation looks much better and we no longer see continuous growth of direct memory.
Now, to apply the same setting in production, I was a little unsure about what interval to choose.
Basically, if our application receives a request and the data is cacheable, it caches it, so high direct-memory use is possible. However, we have limited the cache to a maximum size, so usually when the cache is full some chunks are released.
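For what it's worth, a small sketch that polls the default pooled allocator's metrics while experimenting with the trim interval (assuming the default PooledByteBufAllocator is in use; the class name is illustrative):
import io.netty.buffer.PooledByteBufAllocator;
import io.netty.buffer.PooledByteBufAllocatorMetric;

public final class AllocatorMetricsProbe {
    public static void main(String[] args) throws InterruptedException {
        PooledByteBufAllocatorMetric metric = PooledByteBufAllocator.DEFAULT.metric();
        while (true) {
            // Direct/heap memory currently held by the pooled allocator's arenas.
            System.out.printf("direct=%d bytes, heap=%d bytes, directArenas=%d%n",
                    metric.usedDirectMemory(),
                    metric.usedHeapMemory(),
                    metric.numDirectArenas());
            Thread.sleep(10_000);
        }
    }
}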