    Norman Maurer
    @normanmaurer
    netty is too widely used to not shade
    Enrico Olivelli
    @eolivelli
    How would you feel about having a shaded org.apache.surefire.io.netty while running Netty's own tests? Wouldn't it be a mess, e.g. when looking at thread dumps, the contents of ThreadLocals, or log messages?
    Norman Maurer
    @normanmaurer
    hmm no idea
    I would not be too concerned imho
    as long as it is shaded
    Enrico Olivelli
    @eolivelli
    ok, thanks for your feedback @normanmaurer
    Norman Maurer
    @normanmaurer
    np
    Francesco Nigro
    @franz1981
    Hi! I'm looking at the single-threaded event loops and which type of JCTools queue they use...
    It seems to be the MPSC unbounded queue, unless users specify a limit for the pending tasks... is that correct?
    Norman Maurer
    @normanmaurer
    yes
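A JDK-only sketch of the pattern Norman confirms here: many threads may offer tasks to an event loop, but only the event-loop thread ever polls them. `ConcurrentLinkedQueue` stands in for JCTools' `MpscUnboundedArrayQueue` (Netty uses a shaded copy of JCTools internally); the class and field names below are mine, not Netty's.

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Multi-producer, single-consumer: execute() may be called from any thread,
// but run() — the event loop — is the only thread that ever polls.
class MiniEventLoop implements Runnable {
    final Queue<Runnable> tasks = new ConcurrentLinkedQueue<>(); // MPSC stand-in
    volatile boolean shutdown;

    void execute(Runnable task) { // any thread (a producer)
        tasks.offer(task);
    }

    @Override
    public void run() {           // the single consumer thread
        while (!shutdown || !tasks.isEmpty()) {
            Runnable t = tasks.poll();
            if (t != null) {
                t.run();
            }
        }
    }
}
```

Because only one thread ever calls `poll()`, the real queue can use the cheaper single-consumer protocol on the consumer side, which is exactly what makes MPSC queues attractive here.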
    Francesco Nigro
    @franz1981
    Nice one, so (fingers crossed)... next week I'm thinking of giving it a shot with a new MPSC queue I've implemented for JCTools
    I'm waiting for Nitsan to review (and, I'm sure, improve) it :)
    Norman Maurer
    @normanmaurer
    cool :)
    the XADD one ?
    (following jctools as well)
    Francesco Nigro
    @franz1981
    Yep :)
    I just need to understand the use case: it depends on whether the event loop mailbox is being stressed by multiple threads... starting from 2 the difference is "interesting" :)
    Norman Maurer
    @normanmaurer
    it depends :)
    Francesco Nigro
    @franz1981
    The king answer of computer science eheh
    Norman Maurer
    @normanmaurer
    exactly
    if you only operate on the EventLoop it will mostly be only one thread
    if you write from multiple threads and have multiple channels per EventLoop it will happen a lot ;)
    @franz1981 btw would also be interested in your take on this netty/netty#9004
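The distinction Norman draws can be sketched with the `inEventLoop()` check Netty uses when dispatching pipeline work (modeled loosely on what `AbstractChannelHandlerContext` does for writes; class and field names here are mine): calls made from the event-loop thread run inline, while calls from any other thread become tasks on the queue — so whether the MPSC queue sees one producer or many depends entirely on who is calling.

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Simplified dispatch rule: same-thread work runs directly and never touches
// the task queue; cross-thread work is what generates the MPSC traffic.
class WriteDispatch {
    final Thread loopThread;                                    // the event-loop thread
    final Queue<Runnable> tasks = new ConcurrentLinkedQueue<>(); // MPSC stand-in

    WriteDispatch(Thread loopThread) {
        this.loopThread = loopThread;
    }

    boolean inEventLoop() {
        return Thread.currentThread() == loopThread;
    }

    void dispatch(Runnable work) {
        if (inEventLoop()) {
            work.run();          // same thread: no queue, no contention
        } else {
            tasks.offer(work);   // other thread: queued for the loop to run
        }
    }
}
```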
    Francesco Nigro
    @franz1981
    That makes sense: I need to check how it performs with a single writer, given that that's a must-have :)
    Re netty/netty#9004, count on me :) I was planning to take a look at it in the next 2 days: I've nearly finished organizing https://voxxeddays.com/milan/ so I've not been very active lately ^^
    Norman Maurer
    @normanmaurer
    ok cool
    thanks a lot
    Francesco Nigro
    @franz1981
    YWC!
    Norman Maurer
    @normanmaurer
    looking forward to making some progress on Netty 5
    too many things going on :-/
    bm3780
    @bm3780
    @normanmaurer Any known issues writing large (>100MB) HTTP messages over SSL? I am seeing OOM errors when writing large CompositeByteBuf with many components. If I convert to ByteBuffer prior to sending I don't see any problems.
    07:03:05 [queue-4] DEBUG r.n.channel.ChannelOperationsHandler - [id: 0x09a464e3, L:/127.0.0.1:51978 - R:localhost/127.0.0.1:8443] Writing object DefaultHttpRequest(decodeResult: success, version: HTTP/1.1)
    PUT /data HTTP/1.1
    user-agent: ReactorNetty/0.8.6.RELEASE
    host: localhost:8443
    accept: */*
    Content-Length: 209715200
    07:03:05 [queue-4] DEBUG r.n.channel.ChannelOperationsHandler - [id: 0x09a464e3, L:/127.0.0.1:51978 - R:localhost/127.0.0.1:8443] Writing object 
    07:03:31 [queue-4] WARN  i.n.c.AbstractChannelHandlerContext - An exception 'java.lang.OutOfMemoryError: Direct buffer memory' [enable DEBUG level for full stacktrace] was thrown by a user handler's exceptionCaught() method while handling the following exception:
    java.lang.OutOfMemoryError: Direct buffer memory
        at java.base/java.nio.Bits.reserveMemory(Bits.java:175)
        at java.base/java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:118)
        at java.base/java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:317)
        at io.netty.buffer.PoolArena$DirectArena.allocateDirect(PoolArena.java:768)
        at io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:744)
        at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:245)
        at io.netty.buffer.PoolArena.allocate(PoolArena.java:227)
        at io.netty.buffer.PoolArena.allocate(PoolArena.java:147)
        at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:327)
        at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:187)
        at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:178)
        at io.netty.handler.ssl.SslHandler.allocate(SslHandler.java:2120)
        at io.netty.handler.ssl.SslHandler.allocateOutNetBuf(SslHandler.java:2131)
        at io.netty.handler.ssl.SslHandler.wrap(SslHandler.java:839)
        at io.netty.handler.ssl.SslHandler.wrapAndFlush(SslHandler.java:810)
        at io.netty.handler.ssl.SslHandler.flush(SslHandler.java:791)
        at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:739)
        at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:731)
        at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:717)
        at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.flush(CombinedChannelDuplexHandler.java:533)
        at io.netty.channel.ChannelOutboundHandlerAdapter.flush(ChannelOutboundHandlerAdapter.java:115)
        at io.netty.channel.CombinedChannelDuplexHandler.flush(CombinedChannelDuplexHandler.java:358)
        at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:739)
        at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:731)
        at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:717)
        at reactor.netty.channel.ChannelOperationsHandler.doWrite(ChannelOperationsHandler.java:306)
    Francesco Nigro
    @franz1981
    I have no idea if this is the right place for this question :)
    I've spent some time looking at BurstCostExecutorsBenchmark with NioEventLoop, burstLength = 1, work = 0, and it is getting good results... too good, maybe.
    What is weird is that, given the nature of the benchmark, I was expecting each task offer to always pay the full price of awaking the selector, but it does not. Looking at the flame graphs it seems that selector.wakeup has been called inside the event loop, making a subsequent select return immediately. Probably that is the whole point behind the wakeup CAS logic...
    Indeed, if I put a Blackhole::consumeCPU on purpose right after a burst (in order to let the EventLoop go to sleep) I'm getting the same super good results...
        private int executeBurst(final PerThreadState state) {
            final ExecutorService executor = this.executor;
            final int burstLength = this.burstLength;
            final Runnable completeTask = state.completeTask;
            // offer the whole burst to the event loop from this external thread
            for (int i = 0; i < burstLength; i++) {
                executor.execute(completeTask);
            }
            // spin until the event loop has run every task of the burst
            final int value = state.spinWaitCompletionOf(burstLength);
            state.resetCompleted();
            // deliberate pause, to give the event loop a chance to go back to sleep
            Blackhole.consumeCPU(10);
            return value;
        }
    Francesco Nigro
    @franz1981
    Same is happening with Blackhole.consumeCPU(100), i.e. it seems that NioEventLoop is not falling asleep when it should (at least AFAIK), or that it falls asleep with some "wakeup credit"
    Francesco Nigro
    @franz1981

    Looking at the code it seems to me that

    if (wakenUp.get()) {
        selector.wakeup();
    }

    is being called too often
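A JDK-only model of the `wakenUp` flag Francesco is describing (simplified from what NioEventLoop does in Netty 4.1; `selector.wakeup()` is replaced by a counter, so this is an illustration, not Netty code): producers only pay for a wakeup when their CAS from `false` to `true` succeeds, and the loop resets the flag before each select. The snippet quoted above fires when a producer's CAS lands in the window between the reset and the check, which is what can make a subsequent select return immediately.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Simplified model of NioEventLoop's wakenUp handshake.
class WakeupModel {
    final AtomicBoolean wakenUp = new AtomicBoolean();
    int wakeupCalls; // stands in for selector.wakeup() invocations

    // Called by a producer offering a task from outside the loop:
    // only the first producer per cycle wins the CAS and pays for a wakeup.
    void execute() {
        if (wakenUp.compareAndSet(false, true)) {
            wakeupCalls++; // selector.wakeup()
        }
    }

    // One iteration of the loop around select().
    void selectLoopIteration() {
        wakenUp.set(false); // reset before blocking in select(...)
        // ... select(...) would block here ...
        // If a producer CASed the flag during the select, issue the extra
        // wakeup — this is the `if (wakenUp.get()) selector.wakeup();`
        // quoted in the chat, and the source of the "wakeup credit".
        if (wakenUp.get()) {
            wakeupCalls++;
        }
    }
}
```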

    Norman Maurer
    @normanmaurer
    sorry I am super busy atm
    looking for another regression :/
    Francesco Nigro
    @franz1981
    @normanmaurer no worries, same for me today :O
    Francesco Nigro
    @franz1981
    Maybe I will create an issue so we don't forget about it :+1:
    bm3780
    @bm3780
    @normanmaurer Just FYI I was able to get around the OOM by increasing the number of bytes partitioned and sent to the SSLEngine. I did this by setting SslHandler#setWrapDataSize to 32k.
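A back-of-the-envelope sketch of why bm3780's workaround helps (figures taken from the log above; the cap is applied via `SslHandler#setWrapDataSize(32 * 1024)` as described in the chat): bounding the plaintext fed to each `SSLEngine.wrap(...)` call means the ~200 MiB body is encrypted in many small chunks, each needing only a modest out-network buffer, instead of one allocation-heavy pass over the whole composite.

```java
// Illustrative arithmetic only — no Netty involved.
class SslWrapSizing {
    // ceil(contentLength / wrapDataSize) = number of wrap calls needed
    static long wrapCalls(long contentLength, int wrapDataSize) {
        return (contentLength + wrapDataSize - 1) / wrapDataSize;
    }

    public static void main(String[] args) {
        long contentLength = 209_715_200L; // Content-Length from the log (200 MiB)
        int wrapDataSize = 32 * 1024;      // the 32k value bm3780 set
        System.out.println(SslWrapSizing.wrapCalls(contentLength, wrapDataSize)); // 6400
    }
}
```

Each wrap then only ever needs a buffer sized for 32 KiB of plaintext plus TLS overhead, rather than one sized for the entire message.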
    dboss
    @lempx
    aaaaaaaa
    google
    dboss
    @lempx
    aaa
    Dong, Ji-gong
    @DongJigong
    Please do not send useless messages
    dboss
    @lempx
    :<
    itsccn
    @itsccn
    f.channel().closeFuture().sync();
    When is it called?
    Norman Maurer
    @normanmaurer
    @itsccn you can call it when you want to wait until the channel is closed
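A JDK-only analogy for Norman's answer (`CompletableFuture` standing in for Netty's ChannelFuture, `join()` for `sync()`; the class is mine): `closeFuture()` does not close anything itself — it merely completes once something else has closed the channel, so `sync()`-ing on it parks the calling thread until then.

```java
import java.util.concurrent.CompletableFuture;

// Model of a channel's close future: completed by close(), awaited by others.
class ChannelModel {
    private final CompletableFuture<Void> closeFuture = new CompletableFuture<>();

    CompletableFuture<Void> closeFuture() {
        return closeFuture;
    }

    void close() { // what ch.close() would trigger from some other place
        closeFuture.complete(null);
    }
}
```

In a typical Netty server `main`, `ch.closeFuture().sync()` is the last line before shutting down the EventLoopGroups: it keeps the main thread alive until the server channel is closed.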
    catiga
    @catiga
    Does anybody know the reason Netty suggests initializing the event loop thread count to the number of CPU cores multiplied by two?
    That would mean only four worker threads on two CPU cores, so four concurrent requests would hold 100% of the thread resources and no more requests could be handled.
    Is my understanding of this correct?
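The premise in this last question is where event-loop servers differ from thread-per-request ones: an event loop thread is not held for the duration of a request, so four threads do not cap the server at four concurrent requests. A JDK-only illustration (a single-threaded executor standing in for one event loop; names are mine):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// One thread servicing many "requests": each handler runs briefly and
// returns, instead of blocking for the request's lifetime.
class SingleLoopDemo {
    static int serve(int requests) throws InterruptedException {
        ExecutorService loop = Executors.newSingleThreadExecutor(); // one "event loop"
        AtomicInteger handled = new AtomicInteger();
        for (int i = 0; i < requests; i++) {
            loop.execute(handled::incrementAndGet); // short, non-blocking handler
        }
        loop.shutdown();
        loop.awaitTermination(10, TimeUnit.SECONDS);
        return handled.get();
    }
}
```

One thread happily works through far more requests than "one per thread" would allow, because no handler ever blocks the loop; blocking work is what would have to be offloaded elsewhere.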