    Stephane
    @steevivo
    @antoineco In the command line, "docker-compose up -d logstash --config.reload.automatic" for example, is that OK too?
    Antoine Cotten
    @antoineco
    Unfortunately not, it has to be inside the Compose file.
    command: ['--config.reload.automatic']
    Antoine Cotten
    @antoineco
    Or
    command:
    - --config.reload.automatic
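    For reference, a minimal sketch of how that override could look in the docker-elk Compose file (the build context shown is an assumption based on the project layout; only the command line matters here):

    logstash:
      build:
        context: logstash/
      # Ask Logstash to watch the pipeline files and reload them on change
      command: ['--config.reload.automatic']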
    Stephane
    @steevivo
    @antoineco ok Thx
    ifleg
    @ifleg
    Hi,
    About config.reload.automatic: it triggers the reload, but the reload hangs forever! I have to restart the container!
    ifleg
    @ifleg
    I tried to put config.reload.automatic: true in logstash/config/logstash.yml, and also to change the docker-compose.yml as @antoineco suggested. Both methods trigger the reload, but the result is the same: the restart seems to get stuck. The logs show
    [2021-02-05T15:04:53,567][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>48, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>6000, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x18973e34 run>"}
    and nothing more!
    Antoine Cotten
    @antoineco
    @steevivo actually that setting can also go inside the logstash.yml file, contrary to what I told you above: https://www.elastic.co/guide/en/logstash/7.10/logstash-settings-file.html
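    As a sketch, the relevant lines in logstash/config/logstash.yml could look like this (the http.host line mirrors what docker-elk ships; the reload interval is optional and shown only for illustration):

    http.host: 0.0.0.0
    # Watch pipeline configuration files and reload them when they change
    config.reload.automatic: true
    # How often to check for changes (Logstash's default is 3s)
    config.reload.interval: 3s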
    Antoine Cotten
    @antoineco
    @ifleg strange, I just tried and whenever I perform a change in my pipeline file, I see the following messages in the logs:
    logstash_1       | [2021-02-05T15:28:21,868][INFO ][logstash.pipelineaction.reload] Reloading pipeline {"pipeline.id"=>:main}
    logstash_1       | [2021-02-05T15:28:32,879][INFO ][logstash.javapipeline    ][main] Pipeline terminated {"pipeline.id"=>"main"}
    ...
    logstash_1       | [2021-02-05T15:28:33,592][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1000, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x1865a88d run>"}
    ...
    logstash_1       | [2021-02-05T15:28:33,697][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
    Maybe try starting Logstash with log.level=debug (can also be set in the config file) and see where it hangs in the logs? (warning: Logstash will produce a LOT of logs in debug mode)
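    A sketch of both ways to raise the log level (the flags and settings are documented by Logstash; the exact Compose command line is an assumption):

    # In the Compose file:
    command: ['--config.reload.automatic', '--log.level=debug']

    # Or in logstash/config/logstash.yml:
    log.level: debug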
    Antoine Cotten
    @antoineco
    As you can see, reloading the pipeline takes >10sec when absolutely no data is being processed, so if your pipeline is currently processing data, it could be that things seem to "hang" but are actually just being gracefully restarted in the background.
    ifleg
    @ifleg

    Actually it's a test installation... so there is almost no incoming data (apart from one test server).
    In the end it works, but it takes quite a while to restart (around 2 min), as I can see in the logs!

    [2021-02-05T15:43:48,802][INFO ][logstash.pipelineaction.reload] Reloading pipeline {"pipeline.id"=>:main}
    [2021-02-05T15:44:00,577][INFO ][logstash.javapipeline    ][main] Pipeline terminated {"pipeline.id"=>"main"}
    
    [2021-02-05T15:44:04,668][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>48, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>6000, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x47483bae run>"}

    Then it hangs... but after a while:

    [2021-02-05T15:45:39,510][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>94.84}
    [2021-02-05T15:45:39,549][INFO ][logstash.inputs.beats    ][main] Starting input listener {:address=>"0.0.0.0:5044"}
    [2021-02-05T15:45:39,553][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}

    Indeed, brutally restarting the Docker container is faster!

    Antoine Cotten
    @antoineco
    Hehe yeah, I guess the reason it takes so long is that Logstash is trying to avoid downtime as much as possible! When you restart the container, it's effectively a cold restart.
    Stephane
    @steevivo
    @antoineco @ifleg Only for 7.10? I have a 7.9.1 server too.
    Antoine Cotten
    @antoineco
    @steevivo any 7.x version.
    Stephane
    @steevivo
    @antoineco cool :thumbsup:
    Stephane
    @steevivo
    Hi, when I launch docker-compose up -d I get duplicate images:
    docker.elastic.co/logstash/logstash            7.10.2   278903ffa6ee   3 weeks ago   915MB
    elk-stack_logstash                             latest   278903ffa6ee   3 weeks ago   915MB
    elk-stack_kibana                               latest   cca015890e7f   3 weeks ago   1.05GB
    docker.elastic.co/kibana/kibana                7.10.2   cca015890e7f   3 weeks ago   1.05GB
    docker.elastic.co/elasticsearch/elasticsearch  7.10.2   0b58e1cea500   3 weeks ago   814MB
    elk-stack_elasticsearch                        latest   0b58e1cea500   3 weeks ago   814MB
    Do you know why? latest & 7.10.2
    Antoine Cotten
    @antoineco
    @steevivo that's because docker-elk builds a thin image layer on top of the official images, so you can add plugins which are not included in them (the images with the latest tag are those local builds).
    If you know you're not going to need any plugins, you can replace the build: directives with image: in the Compose file and use Elastic's images directly.
    In that case you can also remove all the Dockerfiles present in the project.
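    For illustration, a sketch of that change for the Logstash service (the build arguments shown are assumptions based on docker-elk's layout):

    # Before: docker-elk builds a local image so plugins can be added
    logstash:
      build:
        context: logstash/
        args:
          ELK_VERSION: $ELK_VERSION

    # After: no plugins needed, use the official image directly
    logstash:
      image: docker.elastic.co/logstash/logstash:7.10.2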
    Stephane
    @steevivo
    @antoineco Ok thx
    Stephane
    @steevivo
    @antoineco if I want to curl Elasticsearch, what is the hostname that replaces http://localhost:9200 with this docker-compose?
    Antoine Cotten
    @antoineco
    Port 9200 is exposed on your host by default via a port mapping in the Compose file, so if you're on the Docker host, localhost works. Otherwise, use the IP/hostname of your Docker host if you curl from "outside".
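    A quick sketch, assuming docker-elk's default elastic/changeme credentials and the elasticsearch service name from the Compose file:

    # From the Docker host (9200 is published in docker-compose.yml):
    curl -u elastic:changeme http://localhost:9200
    # From another container on the same Compose network, use the service name:
    curl -u elastic:changeme http://elasticsearch:9200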
    Stephane
    @steevivo
    @antoineco OK, can I use patterns in the logstash directory with this stack?
    Antoine Cotten
    @antoineco
    Could you elaborate?
    Antoine Cotten
    @antoineco
    @pgruener could that be a question for the Scrapy project maybe?
    Ghost
    @ghost~6027cd416da037398461e93f
    scrapy-cluster
    with elk
    Ghost
    @ghost~6027cd416da037398461e93f
    @antoineco oh but very good point, I joined the wrong group … can't understand why, as I was in the correct one before - sorry for this :D
    Antoine Cotten
    @antoineco
    No worries :)
    Stephane
    @steevivo
    Hi, how do I install a Kibana plugin like elastalert with a Docker setup? Is it possible?
    Antoine Cotten
    @antoineco
    @steevivo I think I answered that after your previous message, where you asked why docker-elk generates local images (the ones called docker-elk_... with the latest tag). The reason is exactly that: plugins!
    Check this out: https://github.com/deviantony/docker-elk#how-to-add-plugins
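    As a sketch of what the README describes, the Kibana Dockerfile could install a plugin roughly like this (the plugin URL is a placeholder, not a real release):

    ARG ELK_VERSION
    FROM docker.elastic.co/kibana/kibana:${ELK_VERSION}
    # Install a plugin from a release archive (URL is illustrative only)
    RUN kibana-plugin install https://example.com/path/to/plugin-archive.zip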
    Antoine Cotten
    @antoineco
    It doesn't seem like this Kibana plug-in is supported anymore though. The last supported version was 7.5.0 apparently: https://github.com/bitsensor/elastalert-kibana-plugin/releases/tag/1.1.0
    So if you want to use elastalert, you'll only be able to use the server, without the frontend.
    There is a fork of docker-elk that includes a few extra tools, including elastalert: https://github.com/sherifabdlnaby/elastdocker
    If you're interested in using elastalert with this stack (even without Kibana plug-in) we can maybe add it as an extension. Feel free to open a GitHub issue as a feature request!
    Stephane
    @steevivo
    @antoineco thanks for all the details, cool :thumbsup:
    Antoine Cotten
    @antoineco
    Btw, in recent versions Kibana natively supports alerting! But you may need a license past the 30-day trial.
    Antoine Cotten
    @antoineco

    Actually no, it's included in the free tier, so no need for elastalert :)
    https://www.elastic.co/subscriptions

    Here is the documentation about Kibana Alerts in v7.11: https://www.elastic.co/guide/en/kibana/7.11/kibana-alerts.html

    teamrdwitti
    @teamrdwitti
    Hello all, we have a weird behavior of ELK. After 4 days, we systematically lose all the data (or it is not accessible anymore). Our docker-compose runs inside a VM hosted in Azure. Please find below the stack trace found in the logs:
    es_1 | [2021-03-04T00:50:00,053][INFO ][o.e.x.m.MlDailyMaintenanceService] [10DiCJN] Successfully completed [ML] maintenance tasks
    es_1 | [2021-03-04T01:19:31,744][WARN ][o.e.t.TcpTransport ] [10DiCJN] exception caught on transport layer [Netty4TcpChannel{localAddress=/172.18.0.2:9300, remoteAddress=/xxx.xxx.xxx.xxx:52668}], closing connection
    es_1 | io.netty.handler.codec.DecoderException: java.io.StreamCorruptedException: invalid internal transport message format, got (16,3,1,0)
    es_1 | at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:472) ~[netty-codec-4.1.32.Final.jar:4.1.32.Final]
    es_1 | at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:278) ~[netty-codec-4.1.32.Final.jar:4.1.32.Final]
    es_1 | at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
    es_1 | at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
    es_1 | at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
    es_1 | at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:241) [netty-handler-4.1.32.Final.jar:4.1.32.Final]
    es_1 | at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
    es_1 | at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
    es_1 | at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
    es_1 | at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
    es_1 | at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
    es_1 | at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
    es_1 | at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
    es_1 | at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
    es_1 | at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:656) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
    es_1 | at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:556) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
    es_1 | at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:510) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
    es_1 | at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:470) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
    es_1 | at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:909) [netty-common-4.1.32.Final.jar:4.1.32.Final]
    es_1 | at java.lang.Thread.run(Thread.java:830) [?:?]
    es_1 | Caused by: java.io.StreamCorruptedException: invalid internal transport message format, got (16,3,1,0)
    es_1 | at org.elasticsearch.transport.TcpTran
    Antoine Cotten
    @antoineco
    @teamrdwitti have you changed anything in the network configuration? E.g. used hardcoded IPs instead of the internal "elastic" hostname?
    teamrdwitti
    @teamrdwitti
    No, here is our conf:
    es:
      image: docker.elastic.co/elasticsearch/elasticsearch:6.8.7
      ports:
        - 9200:9200
        - 9300:9300
      environment:
        - cluster.name=kapua-datastore
        - discovery.type=single-node
        - transport.host=site
        - transport.ping_schedule=-1
        - transport.tcp.connect_timeout=30s
    Antoine Cotten
    @antoineco
    transport.host is about the network configuration, and it's most likely what's causing issues here: exception caught on transport layer.
    I'm not sure what the value of transport.host is because it's redacted, but in a single-node setup this setting should absolutely not be touched.
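    For comparison, a sketch of the same service without the transport overrides, keeping only the settings from the conf above:

    es:
      image: docker.elastic.co/elasticsearch/elasticsearch:6.8.7
      ports:
        - 9200:9200
        - 9300:9300
      environment:
        - cluster.name=kapua-datastore
        # single-node discovery is enough; leave the transport settings at their defaults
        - discovery.type=single-node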
    hcw2016
    @hcw2016
    Stumbled across this project a couple of weeks ago. Love the implementation.
    JohnFromBD
    @vootbd
    Hi, my Logstash container is getting stopped after a few minutes.
    Can anyone help me with it?
    What am I doing wrong?
    Antoine Cotten
    @antoineco
    @vootbd could you please share the logs? docker-compose logs logstash