    Francesco Nigro
    @franz1981
    The king of computer science answers, eheh
    Norman Maurer
    @normanmaurer
    exactly
    if you only operate on the EventLoop it will mostly be only one thread
    if you write from multiple threads and have multiple channels per EventLoop it will happen a lot ;)
    @franz1981 btw would also be interested in your take on this netty/netty#9004
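    A minimal sketch of the distinction being drawn here (the class and method names are placeholders, not code from this conversation): work issued from the channel's own EventLoop thread runs inline, while work issued from any other thread is queued as a task for that loop and may have to wake up its selector. Channel#writeAndFlush already does this hand-off internally; the explicit check just makes the two paths visible.

    import io.netty.channel.Channel;
    import io.netty.channel.EventLoop;

    final class WritePathSketch {
        static void writeFromAnywhere(Channel channel, Object msg) {
            EventLoop loop = channel.eventLoop();
            if (loop.inEventLoop()) {
                // already on the loop thread: runs inline, no wakeup needed
                channel.writeAndFlush(msg);
            } else {
                // another thread: queued as a task, may wake the selector
                loop.execute(() -> channel.writeAndFlush(msg));
            }
        }
    }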
    Francesco Nigro
    @franz1981
    That makes sense: I need to check how it performs with a single writer, given that it is a must-have :)
    Re netty/netty#9004, count on me :) I was planning to take a look at it in the next 2 days: I've nearly finished organizing https://voxxeddays.com/milan/ so I've not been very active lately ^^
    Norman Maurer
    @normanmaurer
    ok cool
    thanks a lot
    Francesco Nigro
    @franz1981
    YWC!
    Norman Maurer
    @normanmaurer
    looking forward to making some progress on Netty 5
    too many things going on :/
    bm3780
    @bm3780
    @normanmaurer Any known issues writing large (>100MB) HTTP messages over SSL? I am seeing OOM errors when writing large CompositeByteBuf with many components. If I convert to ByteBuffer prior to sending I don't see any problems.
    07:03:05 [queue-4] DEBUG r.n.channel.ChannelOperationsHandler - [id: 0x09a464e3, L:/127.0.0.1:51978 - R:localhost/127.0.0.1:8443] Writing object DefaultHttpRequest(decodeResult: success, version: HTTP/1.1)
    PUT /data HTTP/1.1
    user-agent: ReactorNetty/0.8.6.RELEASE
    host: localhost:8443
    accept: */*
    Content-Length: 209715200
    07:03:05 [queue-4] DEBUG r.n.channel.ChannelOperationsHandler - [id: 0x09a464e3, L:/127.0.0.1:51978 - R:localhost/127.0.0.1:8443] Writing object 
    07:03:31 [queue-4] WARN  i.n.c.AbstractChannelHandlerContext - An exception 'java.lang.OutOfMemoryError: Direct buffer memory' [enable DEBUG level for full stacktrace] was thrown by a user handler's exceptionCaught() method while handling the following exception:
    java.lang.OutOfMemoryError: Direct buffer memory
        at java.base/java.nio.Bits.reserveMemory(Bits.java:175)
        at java.base/java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:118)
        at java.base/java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:317)
        at io.netty.buffer.PoolArena$DirectArena.allocateDirect(PoolArena.java:768)
        at io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:744)
        at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:245)
        at io.netty.buffer.PoolArena.allocate(PoolArena.java:227)
        at io.netty.buffer.PoolArena.allocate(PoolArena.java:147)
        at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:327)
        at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:187)
        at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:178)
        at io.netty.handler.ssl.SslHandler.allocate(SslHandler.java:2120)
        at io.netty.handler.ssl.SslHandler.allocateOutNetBuf(SslHandler.java:2131)
        at io.netty.handler.ssl.SslHandler.wrap(SslHandler.java:839)
        at io.netty.handler.ssl.SslHandler.wrapAndFlush(SslHandler.java:810)
        at io.netty.handler.ssl.SslHandler.flush(SslHandler.java:791)
        at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:739)
        at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:731)
        at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:717)
        at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.flush(CombinedChannelDuplexHandler.java:533)
        at io.netty.channel.ChannelOutboundHandlerAdapter.flush(ChannelOutboundHandlerAdapter.java:115)
        at io.netty.channel.CombinedChannelDuplexHandler.flush(CombinedChannelDuplexHandler.java:358)
        at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:739)
        at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:731)
        at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:717)
        at reactor.netty.channel.ChannelOperationsHandler.doWrite(ChannelOperationsHandler.java:306)
    Francesco Nigro
    @franz1981
    I have no idea if this is the right place for this question :)
    I've spent some time looking at BurstCostExecutorsBenchmark with NioEventLoop, burstLength = 1, work = 0 and it is getting good results... too good, maybe.
    What is weird is that, given the nature of the benchmark, I was expecting each task offer to always pay the full price of waking up the selector, but it doesn't. Looking at the flame graphs it seems that selector.wakeup has been called inside the event loop, making a subsequent select return immediately. Probably that is the whole point behind the wakenUp CAS logic...
    Indeed, if I put a Blackhole::consumeCPU on purpose right after a burst (in order to let the EventLoop go to sleep) I'm getting the same super good results...
        private int executeBurst(final PerThreadState state) {
            final ExecutorService executor = this.executor;
            final int burstLength = this.burstLength;
            final Runnable completeTask = state.completeTask;
            // offer a burst of tasks to the executor under test
            for (int i = 0; i < burstLength; i++) {
                executor.execute(completeTask);
            }
            // spin until every task of the burst has completed
            final int value = state.spinWaitCompletionOf(burstLength);
            state.resetCompleted();
            // burn some CPU after the burst on purpose, to give the
            // EventLoop a chance to go back to sleep before the next burst
            Blackhole.consumeCPU(10);
            return value;
        }
    Francesco Nigro
    @franz1981
    The same is happening with Blackhole.consumeCPU(100), i.e. it seems that NioEventLoop is not falling asleep when it should (at least AFAIK), or that it is falling asleep with some "wakeup credit"
    Francesco Nigro
    @franz1981

    Looking at the code it seems to me that

    if (wakenUp.get()) {
        selector.wakeup();
    }

    is being called too often
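    A condensed sketch of the CAS-guarded wakeup pattern under discussion, simplified from what the 4.1-era NioEventLoop does rather than copied from it (the class name and the task queue are placeholders): a producer only pays for Selector.wakeup() when it wins the CAS on wakenUp, and the post-select compensation at the end is what can leave a pending wakeup, so the next select() returns immediately.

    import java.io.IOException;
    import java.nio.channels.Selector;
    import java.util.Queue;
    import java.util.concurrent.ConcurrentLinkedQueue;
    import java.util.concurrent.atomic.AtomicBoolean;

    final class WakeupSketch implements Runnable {
        private final Selector selector = Selector.open();
        private final AtomicBoolean wakenUp = new AtomicBoolean();
        private final Queue<Runnable> tasks = new ConcurrentLinkedQueue<>();

        WakeupSketch() throws IOException {
        }

        // Called from arbitrary threads: only the first offer after a reset wins
        // the CAS and pays for Selector.wakeup(); the following offers are free.
        void execute(Runnable task) {
            tasks.add(task);
            if (wakenUp.compareAndSet(false, true)) {
                selector.wakeup();
            }
        }

        @Override
        public void run() {
            for (;;) {
                try {
                    // Reset the flag before blocking. If a producer raced in between
                    // the reset and select(), the compensating wakeup() below makes
                    // the *next* select() return immediately: the "wakeup credit".
                    wakenUp.set(false);
                    selector.select();
                    if (wakenUp.get()) {
                        selector.wakeup();
                    }
                    Runnable t;
                    while ((t = tasks.poll()) != null) {
                        t.run();
                    }
                } catch (IOException e) {
                    throw new RuntimeException(e);
                }
            }
        }
    }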

    Norman Maurer
    @normanmaurer
    sorry I am super busy atm
    looking for another regression :/
    Francesco Nigro
    @franz1981
    @normanmaurer no worries, same for me today :O
    Francesco Nigro
    @franz1981
    Maybe I will create an issue so we don't forget about it :+1:
    bm3780
    @bm3780
    @normanmaurer Just FYI I was able to get around the OOM by increasing the number of bytes partitioned and sent to the SSLEngine. I did this by setting SslHandler#setWrapDataSize to 32k.
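    A minimal sketch of that workaround (the handler lookup and the 32 KiB value are illustrative, assuming the SslHandler is already in the pipeline):

    import io.netty.channel.ChannelPipeline;
    import io.netty.handler.ssl.SslHandler;

    final class WrapSizeSketch {
        // Cap how much plaintext SslHandler hands to the SSLEngine per wrap,
        // so the outbound network buffer it allocates stays bounded.
        static void limitWrapSize(ChannelPipeline pipeline) {
            SslHandler ssl = pipeline.get(SslHandler.class);
            if (ssl != null) {
                ssl.setWrapDataSize(32 * 1024); // the 32k mentioned above
            }
        }
    }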
    dboss
    @lempx
    aaaaaaaa
    google
    dboss
    @lempx
    aaa
    Dong, Ji-gong
    @DongJigong
    Please do not send useless messages
    dboss
    @lempx
    :<
    itsccn
    @itsccn
    f.channel().closeFuture().sync();
    When is it called?
    Norman Maurer
    @normanmaurer
    @itsccn you can call it when you want to wait until the channel is closed
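    For context, a minimal runnable sketch of the bootstrap pattern that line usually comes from (the port and the empty initializer are placeholders): bind() completes once the server socket is up, and closeFuture().sync() then blocks the calling thread until the channel is closed, so the event loop groups can be shut down afterwards.

    import io.netty.bootstrap.ServerBootstrap;
    import io.netty.channel.ChannelFuture;
    import io.netty.channel.ChannelInitializer;
    import io.netty.channel.EventLoopGroup;
    import io.netty.channel.nio.NioEventLoopGroup;
    import io.netty.channel.socket.SocketChannel;
    import io.netty.channel.socket.nio.NioServerSocketChannel;

    public final class CloseFutureSketch {
        public static void main(String[] args) throws InterruptedException {
            EventLoopGroup boss = new NioEventLoopGroup(1);
            EventLoopGroup worker = new NioEventLoopGroup();
            try {
                ServerBootstrap b = new ServerBootstrap();
                b.group(boss, worker)
                 .channel(NioServerSocketChannel.class)
                 .childHandler(new ChannelInitializer<SocketChannel>() {
                     @Override
                     protected void initChannel(SocketChannel ch) {
                         // add real handlers here
                     }
                 });

                ChannelFuture f = b.bind(8080).sync(); // returns once the server socket is bound
                f.channel().closeFuture().sync();      // blocks here until the channel is closed
            } finally {
                boss.shutdownGracefully();
                worker.shutdownGracefully();
            }
        }
    }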
    catiga
    @catiga
    Does anybody know the reason Netty suggests initializing the event loop thread quantity to CPU cores multiplied by two?
    That would mean only four worker threads on two CPU cores, and four concurrent requests would hold 100% of the thread resources, so no more requests could be handled.
    Is my understanding of this correct?
    Thanks
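    For reference, a small sketch of where that default comes from, as far as I can tell from NioEventLoopGroup (the class name in the sketch and the value 16 are illustrative): with no explicit count, Netty falls back to max(1, availableProcessors * 2), overridable with -Dio.netty.eventLoopThreads, and each event loop thread multiplexes many channels rather than being reserved for a single request.

    import io.netty.channel.EventLoopGroup;
    import io.netty.channel.nio.NioEventLoopGroup;

    final class EventLoopSizingSketch {
        // No explicit count: Netty uses max(1, availableProcessors * 2) threads,
        // unless -Dio.netty.eventLoopThreads overrides it.
        static EventLoopGroup defaultSized() {
            return new NioEventLoopGroup();
        }

        // Explicit count, e.g. 16 as in the answer below; each thread still
        // serves many channels, it is not pinned to a single request.
        static EventLoopGroup explicitlySized(int threads) {
            return new NioEventLoopGroup(threads);
        }
    }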
    belkirdi
    @belkirdi
    you can set it by doing
    @Bean
    public ReactiveWebServerFactory reactiveWebServerFactory() {
        NettyReactiveWebServerFactory factory = new NettyReactiveWebServerFactory();
        factory.addServerCustomizers(builder ->
                builder.loopResources(LoopResources.create("my-http", 16, true)));
        return factory;
    }
    or from java argument
    java -Dreactor.ipc.netty.workerCount=16 -jar your-app.jar
    The number that you got is the default one
    Stephane Maldini
    @smaldini
    @belkirdi please file an issue so we can track that for 0.8.9/0.9.0.M2
    (are you sure you are customizing the bean? I tend to use something like:
    @Override
    public void customize(NettyReactiveWebServerFactory factory) {
        // factory.addServerCustomizers(httpServer -> httpServer.wiretap(true));
        super.customize(factory);
    }
    itsccn
    @itsccn
    Where can I find a tutorial for learning Netty?
    Enrico Olivelli
    @eolivelli
    Hello, as you know io.netty.tryReflectionSetAccessible on Java 9+ is disabled by default, and this makes Netty unable to apply its tweaks and create direct buffers in its special way.
    Is there any plan to find a workaround, or any open discussion on the OpenJDK mailing lists?
    Wouldn't it be better to set io.netty.tryReflectionSetAccessible=true by default, or at least print a WARN log message saying that it is better to turn on that switch? Otherwise people moving from Java 8 to Java 11 will suffer a performance drop without even knowing that Netty is working in this (bad) mode.
    Norman Maurer
    @normanmaurer
    as long as you use the PooledByteBufAllocator (which is the default) the drop is not really bad
    and people complained about warning logs etc before
    that is why we switched
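    For what it's worth, a minimal sketch of opting back in on Java 9+ (the --add-opens flag is, as far as I know, what Netty's reflective direct-buffer allocation needs on a modular JDK; adjust to your own launch command):

    java -Dio.netty.tryReflectionSetAccessible=true \
         --add-opens java.base/java.nio=ALL-UNNAMED \
         -jar your-app.jar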
    Enrico Olivelli
    @eolivelli
    Oh, now I understand why there is not much documentation or warning about io.netty.tryReflectionSetAccessible.
    Thank you @normanmaurer for your quick answer !
    Norman Maurer
    @normanmaurer
    @eolivelli np
    Enrico Olivelli
    @eolivelli
    Norman Maurer
    @normanmaurer
    yes