    Violeta Georgieva
    @violetagg
    yep
    Lukasz
    @lradziwonowicz
    ok, but maven did not download anything new, is this ok?
    Violeta Georgieva
    @violetagg
    Reactor Core should be version 3.2.8 and Reactor Netty 0.8.6?
    Lukasz
    @lradziwonowicz
    ok, I see it in dependency tree, I'll test it, thx for the links
    Violeta Georgieva
    @violetagg
    :+1:
    Lukasz
    @lradziwonowicz
    @violetagg cool, it is working, the LEAK is gone. I'll test it with my original application
    Violeta Georgieva
    @violetagg
    also, you might then want to try the Spring Framework 5.1.6 snapshots; there are some fixes related to memory leaks
    Nouras Hamwi
    @NourasHamwi
    I am using Netty with Apache Camel and it works fine in my local environment (Mac), but when I moved the project to Linux it gives this error: NettyProducer WARN No payload to send for exchange
    Any idea what could be causing it?
    Norman Maurer
    @normanmaurer
    Sorry I think you will need to ask on the camel mailing list
    Nouras Hamwi
    @NourasHamwi
    I got the issue resolved; it was a packaging problem.
    Enrico Olivelli
    @eolivelli
    Hi guys. I am working on Maven Surefire. We are going to rewrite the communication system between the core Maven process and the forked JVMs used for tests, moving from stdout/stdin to the loopback network. We are currently discussing using Netty because it would make the protocol very simple to implement and it is, as you know, very efficient.
    I have a concern about shading/relocating Netty bits and deploying them inside the forked JVM: this will "pollute" the environment for projects that actually use Netty.
    May I have an opinion from any of you?
    Norman Maurer
    @normanmaurer
    not sure what to say here @eolivelli :D
    I think as it is only a test dependency I would not worry
    that said I think you must shade
    netty is too widely used to not shade
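    The relocation being recommended could be sketched with the standard maven-shade-plugin; note that the `org.apache.maven.surefire.shaded` target package below is a hypothetical choice for illustration, not Surefire's actual layout:

    ```xml
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-shade-plugin</artifactId>
      <executions>
        <execution>
          <phase>package</phase>
          <goals><goal>shade</goal></goals>
          <configuration>
            <relocations>
              <relocation>
                <!-- rewrite io.netty.* class references into a private namespace -->
                <pattern>io.netty</pattern>
                <!-- hypothetical target package -->
                <shadedPattern>org.apache.maven.surefire.shaded.io.netty</shadedPattern>
              </relocation>
            </relocations>
          </configuration>
        </execution>
      </executions>
    </plugin>
    ```

    With a relocation like this, the copy of Netty bundled into the forked JVM lives under its own package, so a project under test that depends on a different Netty version never sees it on the classpath.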
    Enrico Olivelli
    @eolivelli
    How would it work to have a shaded org.apache.surefire.io.netty while running Netty's own tests? Wouldn't it be a mess, for instance when looking at thread dumps, the contents of ThreadLocals, or log messages?
    Norman Maurer
    @normanmaurer
    hmm no idea
    I would not be too concerned imho
    as long as it is shaded
    Enrico Olivelli
    @eolivelli
    ok, thanks for your feedback @normanmaurer
    Norman Maurer
    @normanmaurer
    np
    Francesco Nigro
    @franz1981
    Hi! I'm looking at the single-threaded event loops and which type of JCTools queue they use...
    It seems to be the MPSC unbounded queue, unless users specify a limit for the pending tasks... is that correct?
    Norman Maurer
    @normanmaurer
    yes
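    The pattern being confirmed here (an unbounded multi-producer single-consumer mailbox, drained only by the event-loop thread) can be sketched with JDK-only types. Netty actually uses JCTools' MPSC queues when no maxPendingTasks limit is configured; `ConcurrentLinkedQueue` stands in for them in this sketch, and all names below are illustrative:

    ```java
    import java.util.Queue;
    import java.util.concurrent.ConcurrentLinkedQueue;
    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.atomic.AtomicInteger;

    public class MailboxSketch {
        // Many producer threads offer; only the loop thread polls (the MPSC pattern).
        static final Queue<Runnable> mailbox = new ConcurrentLinkedQueue<>();

        public static int runOnce(int producers, int tasksPerProducer) {
            AtomicInteger executed = new AtomicInteger();
            CountDownLatch done = new CountDownLatch(producers * tasksPerProducer);
            Thread loop = new Thread(() -> {
                // Single consumer: drains the mailbox like one event-loop iteration after another.
                while (done.getCount() > 0) {
                    Runnable task = mailbox.poll();
                    if (task != null) {
                        task.run();
                    }
                }
            });
            loop.start();
            for (int p = 0; p < producers; p++) {
                new Thread(() -> {
                    for (int i = 0; i < tasksPerProducer; i++) {
                        mailbox.offer(() -> {
                            executed.incrementAndGet();
                            done.countDown();
                        });
                    }
                }).start();
            }
            try {
                done.await();
                loop.join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            return executed.get();
        }

        public static void main(String[] args) {
            System.out.println(runOnce(4, 1000)); // prints 4000
        }
    }
    ```

    The single-consumer restriction is what makes the specialized MPSC queues profitable: the poll side needs no CAS loop against other consumers.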
    Francesco Nigro
    @franz1981
    Nice one, so (fingers crossed)... next week I'm thinking of giving a shot to a new MPSC queue I've implemented for JCTools
    I'm waiting for Nitsan to review (and, I'm sure, improve) it :)
    Norman Maurer
    @normanmaurer
    cool :)
    the XADD one ?
    (following jctools as well)
    Francesco Nigro
    @franz1981
    Yep :)
    I just need to understand the use case: it depends on whether the event loop mailbox is being stressed by multiple threads... starting from 2 the difference is "interesting" :)
    Norman Maurer
    @normanmaurer
    it depends :)
    Francesco Nigro
    @franz1981
    The king answer of computer science eheh
    Norman Maurer
    @normanmaurer
    exactly
    if you only operate on the EventLoop it will mostly be only one thread
    if you write from multiple threads and have multiple channels per EventLoop it will happen a lot ;)
    @franz1981 btw would also be interested in your take on this netty/netty#9004
    Francesco Nigro
    @franz1981
    That makes sense: I need to check how it performs with a single writer, given that that's a must-have :)
    Re netty/netty#9004, count on me :) I was planning to take a look at it in the next 2 days: I've nearly finished organizing https://voxxeddays.com/milan/ so I've not been very active lately ^^
    Norman Maurer
    @normanmaurer
    ok cool
    thanks a lot
    Francesco Nigro
    @franz1981
    YWC!
    Norman Maurer
    @normanmaurer
    looking forward to making some progress on Netty 5
    too many things going on :-/
    bm3780
    @bm3780
    @normanmaurer Any known issues writing large (>100MB) HTTP messages over SSL? I am seeing OOM errors when writing a large CompositeByteBuf with many components. If I convert it to a ByteBuffer prior to sending, I don't see any problems.
    07:03:05 [queue-4] DEBUG r.n.channel.ChannelOperationsHandler - [id: 0x09a464e3, L:/127.0.0.1:51978 - R:localhost/127.0.0.1:8443] Writing object DefaultHttpRequest(decodeResult: success, version: HTTP/1.1)
    PUT /data HTTP/1.1
    user-agent: ReactorNetty/0.8.6.RELEASE
    host: localhost:8443
    accept: */*
    Content-Length: 209715200
    07:03:05 [queue-4] DEBUG r.n.channel.ChannelOperationsHandler - [id: 0x09a464e3, L:/127.0.0.1:51978 - R:localhost/127.0.0.1:8443] Writing object 
    07:03:31 [queue-4] WARN  i.n.c.AbstractChannelHandlerContext - An exception 'java.lang.OutOfMemoryError: Direct buffer memory' [enable DEBUG level for full stacktrace] was thrown by a user handler's exceptionCaught() method while handling the following exception:
    java.lang.OutOfMemoryError: Direct buffer memory
        at java.base/java.nio.Bits.reserveMemory(Bits.java:175)
        at java.base/java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:118)
        at java.base/java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:317)
        at io.netty.buffer.PoolArena$DirectArena.allocateDirect(PoolArena.java:768)
        at io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:744)
        at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:245)
        at io.netty.buffer.PoolArena.allocate(PoolArena.java:227)
        at io.netty.buffer.PoolArena.allocate(PoolArena.java:147)
        at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:327)
        at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:187)
        at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:178)
        at io.netty.handler.ssl.SslHandler.allocate(SslHandler.java:2120)
        at io.netty.handler.ssl.SslHandler.allocateOutNetBuf(SslHandler.java:2131)
        at io.netty.handler.ssl.SslHandler.wrap(SslHandler.java:839)
        at io.netty.handler.ssl.SslHandler.wrapAndFlush(SslHandler.java:810)
        at io.netty.handler.ssl.SslHandler.flush(SslHandler.java:791)
        at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:739)
        at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:731)
        at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:717)
        at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.flush(CombinedChannelDuplexHandler.java:533)
        at io.netty.channel.ChannelOutboundHandlerAdapter.flush(ChannelOutboundHandlerAdapter.java:115)
        at io.netty.channel.CombinedChannelDuplexHandler.flush(CombinedChannelDuplexHandler.java:358)
        at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:739)
        at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:731)
        at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:717)
        at reactor.netty.channel.ChannelOperationsHandler.doWrite(ChannelOperationsHandler.java:306)
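    A common workaround for this kind of OOM (not necessarily the root cause here) is to avoid handing the SSL pipeline one enormous coalesced buffer and instead write the payload in bounded chunks, e.g. via Netty's ChunkedWriteHandler, so SslHandler never needs to allocate an outbound net buffer for the whole message at once. The slicing itself can be sketched with JDK types; the 16 KiB chunk size matches the TLS record plaintext limit but is otherwise an arbitrary choice:

    ```java
    import java.nio.ByteBuffer;
    import java.util.ArrayList;
    import java.util.List;

    public class ChunkingSketch {
        static final int CHUNK_SIZE = 16 * 1024; // a TLS record carries at most 16 KiB of plaintext

        // Split one large buffer into read-only slices no bigger than CHUNK_SIZE each.
        public static List<ByteBuffer> chunk(ByteBuffer payload) {
            List<ByteBuffer> chunks = new ArrayList<>();
            while (payload.hasRemaining()) {
                int len = Math.min(CHUNK_SIZE, payload.remaining());
                ByteBuffer slice = payload.slice(); // shares the backing storage, no copy
                slice.limit(len);
                chunks.add(slice.asReadOnlyBuffer());
                payload.position(payload.position() + len);
            }
            return chunks;
        }

        public static void main(String[] args) {
            // 40 KiB splits into two full 16 KiB chunks plus an 8 KiB tail.
            System.out.println(chunk(ByteBuffer.allocate(40 * 1024)).size()); // prints 3
        }
    }
    ```

    Writing the chunks one at a time (respecting channel writability) keeps the peak direct-memory footprint proportional to the chunk size rather than the 200 MB Content-Length in the log above.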
    Francesco Nigro
    @franz1981
    I have no idea if this is the right place for this question :)
    I've spent some time looking at BurstCostExecutorsBenchmark with NioEventLoop, burstLength = 1, work = 0, and it is getting good results... too good, maybe.
    What is weird is that, given the nature of the benchmark, I was expecting each task offer to always pay the full price of waking the selector, but it does not. Looking at the flame graphs it seems that selector.wakeup has been called inside the event loop, making a subsequent select return immediately. Probably that is the whole point behind the wakenUp CAS logic...
    Indeed, if I deliberately put a Blackhole::consumeCPU right after a burst (in order to let the EventLoop go to sleep) I'm getting the same super good results...
        private int executeBurst(final PerThreadState state) {
            final ExecutorService executor = this.executor;
            final int burstLength = this.burstLength;
            final Runnable completeTask = state.completeTask;
            for (int i = 0; i < burstLength; i++) {
                executor.execute(completeTask);
            }
            final int value = state.spinWaitCompletionOf(burstLength);
            state.resetCompleted();
            Blackhole.consumeCPU(10);
            return value;
        }
    Francesco Nigro
    @franz1981
    The same happens with Blackhole.consumeCPU(100), i.e. it seems that NioEventLoop is not falling asleep when it should (at least AFAIK), or that it falls asleep with some "wakeup credit"
    Francesco Nigro
    @franz1981

    Looking at the code it seems to me that

    if (wakenUp.get()) {
        selector.wakeup();
    }

    is being called too often
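    The producer-side CAS protocol under discussion can be sketched with an AtomicBoolean. Here `wakeupCalls` stands in for the relatively expensive Selector.wakeup syscall; the real NioEventLoop is more involved (it also consults the flag around select(), which relates to the "wakeup credit" effect described above), but the point of the sketch is that at most one producer per sleep cycle pays for the wakeup:

    ```java
    import java.util.concurrent.atomic.AtomicBoolean;
    import java.util.concurrent.atomic.AtomicInteger;

    public class WakeupSketch {
        final AtomicBoolean wakenUp = new AtomicBoolean();
        final AtomicInteger wakeupCalls = new AtomicInteger(); // stands in for selector.wakeup()

        // Producer side: only the thread that wins the CAS pays for the wakeup call.
        void execute(Runnable task) {
            // (the task would be offered to the mailbox here)
            if (wakenUp.compareAndSet(false, true)) {
                wakeupCalls.incrementAndGet();
            }
        }

        // Event-loop side: reset the flag before blocking in select() again,
        // so the next producer after a sleep can wake the loop up.
        void beforeSelect() {
            wakenUp.set(false);
        }

        public static void main(String[] args) {
            WakeupSketch loop = new WakeupSketch();
            Runnable noop = () -> { };
            loop.execute(noop);
            loop.execute(noop); // flag already set: no second wakeup this cycle
            loop.beforeSelect();
            loop.execute(noop); // new sleep cycle: one more wakeup
            System.out.println(loop.wakeupCalls.get()); // prints 2
        }
    }
    ```

    If the flag is reset (or the extra `selector.wakeup()` issued) more often than the loop actually blocks, select() keeps returning immediately, which would produce exactly the "too good" burst numbers observed in the benchmark.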

    Norman Maurer
    @normanmaurer
    sorry I am super busy atm