    Antoine Cotten
    @antoineco
    In this case you can also remove all Dockerfiles present in the project.
    Stephane
    @steevivo
    @antoineco Ok thx
    Stephane
    @steevivo
    @antoineco if I want to curl Elasticsearch, which hostname replaces http://localhost:9200 with this docker-compose?
    Antoine Cotten
    @antoineco
    Port 9200 is exposed on your host by default via a port mapping in the Compose file, so if you're on the Docker host, localhost works. Otherwise, use the IP/hostname of your Docker host if you curl from "outside".
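    For illustration, a quick check could look roughly like this (the angle-bracket placeholders are not from this thread, and the credentials only apply if X-Pack security is enabled in your stack):
      # from the Docker host itself
      curl http://localhost:9200

      # from another machine, use the Docker host's IP or hostname instead
      curl http://<docker-host>:9200

      # with X-Pack security enabled, pass credentials, e.g.
      curl -u elastic:<password> http://localhost:9200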
    Stephane
    @steevivo
    @antoineco OK, can I use patterns in the logstash directory with this stack?
    Antoine Cotten
    @antoineco
    Could you elaborate?
    Antoine Cotten
    @antoineco
    @pgruener could that be a question for the Scrapy project maybe?
    Ghost
    @ghost~6027cd416da037398461e93f
    scrapy-cluster
    with elk
    Ghost
    @ghost~6027cd416da037398461e93f
    @antoineco oh but very good point, I joined the wrong group … cannot understand why, as I was in the correct one before - sorry for this :D
    Antoine Cotten
    @antoineco
    No worries :)
    Stephane
    @steevivo
    Hi, how do I install a Kibana plugin like elastalert with a Docker setup? Is it possible?
    Antoine Cotten
    @antoineco
    @steevivo I think I answered that after your previous message, where you asked why docker-elk generates local images (the ones called docker-elk_... with the latest tag). The reason is exactly that: plugins!
    Check this out: https://github.com/deviantony/docker-elk#how-to-add-plugins
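    As a rough sketch of what that section describes (the plugin name below is a placeholder, not something from the repo), you add a RUN line to the service's Dockerfile so the plugin gets baked into the locally built image:
      # kibana/Dockerfile (sketch)
      ARG ELK_VERSION

      FROM docker.elastic.co/kibana/kibana:${ELK_VERSION}

      # install a Kibana plugin into the local image; <plugin-name-or-url> is a placeholder
      RUN bin/kibana-plugin install <plugin-name-or-url>
    Then rebuild the image (docker-compose build kibana) and recreate the container.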
    Antoine Cotten
    @antoineco
    It doesn't seem like this Kibana plug-in is supported anymore though. The last supported version was 7.5.0 apparently: https://github.com/bitsensor/elastalert-kibana-plugin/releases/tag/1.1.0
    So if you want to use elastalert, you'll only be able to use the server, without frontend.
    There is a fork of docker-elk that includes a few extra tools, including elastalert: https://github.com/sherifabdlnaby/elastdocker
    If you're interested in using elastalert with this stack (even without Kibana plug-in) we can maybe add it as an extension. Feel free to open a GitHub issue as a feature request!
    Stephane
    @steevivo
    @antoineco thanks for all the details, cool :thumbsup:
    Antoine Cotten
    @antoineco
    Btw, in recent versions Kibana natively supports alerting! But you may need a license past the 30-day trial.
    Antoine Cotten
    @antoineco

    Actually no, it's included in the free tier, so no need for elastalert :)
    https://www.elastic.co/subscriptions

    Here is the documentation about Kibana Alerts in v7.11: https://www.elastic.co/guide/en/kibana/7.11/kibana-alerts.html

    teamrdwitti
    @teamrdwitti
    Hello all, we have a weird behavior of ELK. After 4 days, we systematically lose all the data (or it is no longer accessible). Our docker-compose runs inside a VM hosted in Azure. Please find below the stack trace found in the logs:
    es_1 | [2021-03-04T00:50:00,053][INFO ][o.e.x.m.MlDailyMaintenanceService] [10DiCJN] Successfully completed [ML] maintenance tasks
    es_1 | [2021-03-04T01:19:31,744][WARN ][o.e.t.TcpTransport ] [10DiCJN] exception caught on transport layer [Netty4TcpChannel{localAddress=/172.18.0.2:9300, remoteAddress=/xxx.xxx.xxx.xxx:52668}], closing connection
    es_1 | io.netty.handler.codec.DecoderException: java.io.StreamCorruptedException: invalid internal transport message format, got (16,3,1,0)
    es_1 | at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:472) ~[netty-codec-4.1.32.Final.jar:4.1.32.Final]
    es_1 | at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:278) ~[netty-codec-4.1.32.Final.jar:4.1.32.Final]
    es_1 | at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
    es_1 | at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
    es_1 | at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
    es_1 | at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:241) [netty-handler-4.1.32.Final.jar:4.1.32.Final]
    es_1 | at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
    es_1 | at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
    es_1 | at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
    es_1 | at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
    es_1 | at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
    es_1 | at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
    es_1 | at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
    es_1 | at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
    es_1 | at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:656) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
    es_1 | at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:556) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
    es_1 | at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:510) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
    es_1 | at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:470) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
    es_1 | at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:909) [netty-common-4.1.32.Final.jar:4.1.32.Final]
    es_1 | at java.lang.Thread.run(Thread.java:830) [?:?]
    es_1 | Caused by: java.io.StreamCorruptedException: invalid internal transport message format, got (16,3,1,0)
    es_1 | at org.elasticsearch.transport.TcpTran
    Antoine Cotten
    @antoineco
    @teamrdwitti have you changed anything in the network configuration? E.g. used hardcoded IPs instead of the internal "elastic" hostname?
    teamrdwitti
    @teamrdwitti
    No, here is our conf
    es:
      image: docker.elastic.co/elasticsearch/elasticsearch:6.8.7
      ports:
        - 9200:9200
        - 9300:9300
      environment:
        - cluster.name=kapua-datastore
        - discovery.type=single-node
        - transport.host=site
        - transport.ping_schedule=-1
        - transport.tcp.connect_timeout=30s
    Antoine Cotten
    @antoineco
    transport.host is about the network configuration, and it's most likely what's causing the issues here ("exception caught on transport layer").
    I'm not sure what the value of transport.host is because it's redacted, but in a single-node setup it should absolutely not be touched.
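    For illustration, a single-node service definition without the transport overrides could look roughly like this (a sketch based on the config you posted, not a tested file):
      es:
        image: docker.elastic.co/elasticsearch/elasticsearch:6.8.7
        ports:
          - 9200:9200
          - 9300:9300
        environment:
          - cluster.name=kapua-datastore
          - discovery.type=single-node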
    hcw2016
    @hcw2016
    Stumbled across this project a couple of weeks ago. Love the implementation.
    JohnFromBD
    @vootbd
    Hi, my Logstash container is getting stopped after a few minutes.
    Can anyone help me with it?
    What am I doing wrong?
    Antoine Cotten
    @antoineco
    @vootbd could you please share the logs? docker-compose logs logstash
    And also the exit code when this happens: docker-compose ps
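    For example, from the directory containing the Compose file:
      # show the Logstash container logs
      docker-compose logs logstash

      # list the services with their current state and exit codes
      docker-compose ps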
    JohnFromBD
    @vootbd
    docker-compose logs logstash command output
    Attaching to docker-elk-main_logstash_1
    logstash_1 | Using bundled JDK: /usr/share/logstash/jdk
    logstash_1 | OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
    logstash_1 | Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
    logstash_1 | [2021-05-27T08:13:13,643][INFO ][logstash.runner ] Log4j configuration path used is: /usr/share/logstash/config/log4j2.properties
    logstash_1 | [2021-05-27T08:13:13,652][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.12.1", "jruby.version"=>"jruby 9.2.13.0 (2.5.7) 2020-08-03 9a89c94bcc OpenJDK 64-Bit Server VM 11.0.10+9 on 11.0.10+9 +indy +jit [linux-x86_64]"}
    logstash_1 | [2021-05-27T08:13:13,670][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
    logstash_1 | [2021-05-27T08:13:13,679][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
    logstash_1 | [2021-05-27T08:13:14,237][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"cb4469a3-5762-4c72-9561-25ddb38a909c", :path=>"/usr/share/logstash/data/uuid"}
    logstash_1 | [2021-05-27T08:13:14,819][WARN ][deprecation.logstash.monitoringextension.pipelineregisterhook] Internal collectors option for Logstash monitoring is deprecated and targeted for removal in the next major version.
    logstash_1 | Please configure Metricbeat to monitor Logstash. Documentation can be found at:
    logstash_1 | https://www.elastic.co/guide/en/logstash/current/monitoring-with-metricbeat.html
    logstash_1 | [2021-05-27T08:13:15,246][WARN ][deprecation.logstash.outputs.elasticsearch] Relying on default value of pipeline.ecs_compatibility, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
    logstash_1 | [2021-05-27T08:13:15,779][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elastic:xxxxxx@elasticsearch:9200/]}}
    logstash_1 | [2021-05-27T08:13:16,133][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx@elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx@elasticsearch:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
    logstash_1 | [2021-05-27T08:13:16,225][WARN ][logstash.licensechecker.licensereader] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://elastic:xxxxxx@elasticsearch:9200/][Manticore::SocketException] Connection refused (Connection refused) {:url=>http://elastic:xxxxxx@elasticsearch:9200/, :error_message=>"Elasticsearch Unreachable: [http://elastic:xxxxxx@elasticsearch:9200/][Manticore::SocketException] Connection refused (Connection refused)", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
    logstash_1 | [2021-05-27T08:13:16,231][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"Elasticsearch Unreachable: [http://elastic:xxxxxx@elasticsearch:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
    logstash_1 | [2021-05-27T08:13:16,281][ERROR][logstash.monitoring.internalpipelinesource] Failed to fetch X-Pack information from Elasticsearch. This is likely due to failure to reach a live Elasti
    attached is the logstash.log file
    @antoineco
    JohnFromBD
    @vootbd
    Found the issue: in ~/docker-elk-main/logstash/config/logstash.yml, line number 5 had been changed to localhost:5044.
    thank you
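    For reference, that setting should point at the internal elasticsearch hostname rather than localhost; in the stock logstash/config/logstash.yml it looks roughly like this (quoted from memory, so check the upstream file for your version):
      http.host: "0.0.0.0"
      xpack.monitoring.elasticsearch.hosts: [ "http://elasticsearch:9200" ]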
    JohnFromBD
    @vootbd
    How do I see realtime Logstash logs?
    Antoine Cotten
    @antoineco
    Same command, just append the -f flag (docker-compose logs -f logstash)
    Glad you figured your issue out!
    From the logs you shared it also seems like Elasticsearch is having issues (or maybe it's just still starting?)
    node.js.developers.kh
    @nodejsdeveloperskh

    Hi, I get the following error while I am trying to run the original repo.

    FATAL Error: "monitoring.ui.container.elasticsearch.enabled" setting was not applied. Check for spelling errors and ensure that expected plugins are installed.

    So I changed it like this: https://github.com/nodejsdeveloperskh/docker-elk

    But it makes no difference, I get the same error.
    Note: I disabled X-Pack.
    What is the problem?

    Antoine Cotten
    @antoineco
    @nodejsdeveloperskh if you disabled X-Pack entirely (as opposed to just setting the license to "basic") you also have to remove those lines from your config: https://github.com/deviantony/docker-elk/blob/main/elasticsearch/config/elasticsearch.yml#L11-L13
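    Those lines look roughly like this in the default elasticsearch.yml (quoted from memory, so check the linked file for the exact content of your version):
      ## X-Pack settings
      ## see https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-xpack.html
      #
      xpack.license.self_generated.type: trial
      xpack.security.enabled: true
      xpack.monitoring.collection.enabled: true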
    node.js.developers.kh
    @nodejsdeveloperskh

    @antoineco Thanks, your message resolved one of the problems, but Logstash still exits after the docker-compose up command. I removed those lines from the logstash, kibana & elasticsearch config files, as you can see in my repo.

    The Logstash logs in the terminal:

    Unknown setting 'ecs_compatibility' for elasticsearch
    Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"Java::JavaLang::IllegalStateException", :message=>"Unable to configure plugins: (ConfigurationError) Something is wrong with your configuration.", :backtrace=>["org.logstash.config.ir.CompiledPipeline.<init>(CompiledPipeline.java:100)", "org.logstash.execution.JavaBasePipelineExt.initialize(JavaBasePipelineExt.java:60)", "org.logstash.execution.JavaBasePipelineExt$INVOKER$i$1$0$initialize.call(JavaBasePipelineExt$INVOKER$i$1$0$initialize.gen)", "org.jruby.internal.runtime.methods.JavaMethod$JavaMethodN.call(JavaMethod.java:837)", "org.jruby.ir.runtime.IRRuntimeHelpers.instanceSuper(IRRuntimeHelpers.java:1156)", "org.jruby.ir.runtime.IRRuntimeHelpers.instanceSuperSplatArgs(IRRuntimeHelpers.java:1143)", "org.jruby.ir.targets.InstanceSuperInvokeSite.invoke(InstanceSuperInvokeSite.java:39)", "usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.RUBY$method$initialize$0(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:27)", "org.jruby.internal.runtime.methods.CompiledIRMethod.call(CompiledIRMethod.java:91)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:90)", "org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:332)", "org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:86)", "org.jruby.RubyClass.newInstance(RubyClass.java:915)", "org.jruby.RubyClass$INVOKER$i$newInstance.call(RubyClass$INVOKER$i$newInstance.gen)", "org.jruby.ir.targets.InvokeSite.invoke(InvokeSite.java:183)", "usr.share.logstash.logstash_minus_core.lib.logstash.pipeline_action.create.RUBY$method$execute$0(/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:36)", "usr.share.logstash.logstash_minus_core.lib.logstash.pipeline_action.create.RUBY$method$execute$0$__VARARGS__(/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb)", "org.jruby.internal.runtime.methods.CompiledIRMethod.call(CompiledIRMethod.java:91)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:90)", "org.jruby.ir.targets.InvokeSite.invoke(InvokeSite.java:183)", "usr.share.logstash.logstash_minus_core.lib.logstash.agent.RUBY$block$converge_state$2(/usr/share/logstash/logstash-core/lib/logstash/agent.rb:326)", "org.jruby.runtime.CompiledIRBlockBody.callDirect(CompiledIRBlockBody.java:136)", "org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:77)", "org.jruby.runtime.Block.call(Block.java:129)", "org.jruby.RubyProc.call(RubyProc.java:295)", "org.jruby.RubyProc.call(RubyProc.java:274)", "org.jruby.RubyProc.call(RubyProc.java:270)", "org.jruby.internal.runtime.RubyRunnable.run(RubyRunnable.java:105)", "java.base/java.lang.Thread.run(Thread.java:834)"]}
    warning: thread "Converge PipelineAction::Create<main>" terminated with exception (report_on_exception is true):
    LogStash::Error: Don't know how to handle `Java::JavaLang::IllegalStateException` for `PipelineAction::Create<main>`
             create at org/logstash/execution/ConvergeResultExt.java:109
                    add at org/logstash/execution/ConvergeResultExt.java:37
       converge_state at /usr/share/logstash/logstash-core/lib/logstash/agent.rb:339
    An exception happened when converging configuration {:exception=>LogStash::Error, :message=>"Don't know how to handle `Java::JavaLang::IllegalStateException` for `PipelineAction::Create<main>`", :backtrace=>["org/logstash/execution/ConvergeResultExt.java:109:in `create'", "org/logstash/execution/ConvergeResultExt.java:37:in `add'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:339:in `block in converge_state'"]}
    java.lang.IllegalStateException: Logstas

    And this is the full Logstash log:

    Screenshot from 2021-06-16 09-45-05.png

    Antoine Cotten
    @antoineco
    That issue is related to the ecs_compatibility setting in logstash/pipeline/logstash.conf, which doesn't exist in v7.5 of the stack. I see in your repo that you downgraded to 7.5.
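    The option in question sits inside the elasticsearch output of logstash/pipeline/logstash.conf; on 7.5 you can simply delete that one line. The block looks roughly like this (a sketch, the credentials shown are the stack defaults and may differ from yours):
      output {
        elasticsearch {
          hosts => "elasticsearch:9200"
          ecs_compatibility => disabled   # not understood by Logstash 7.5, remove this line
          user => "elastic"
          password => "changeme"
        }
      }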
    node.js.developers.kh
    @nodejsdeveloperskh
    @antoineco Yes, you are right. I am using an older version. Thanks a lot.
    node.js.developers.kh
    @nodejsdeveloperskh
    @antoineco But I got the same error after upgrading the ELK version. Are you sure about that?
    Antoine Cotten
    @antoineco
    Maybe you didn't rebuild your images? See the section in the README about "Version selection".
    Also, to be clear, it's totally fine to downgrade, as long as you remove the Logstash setting I mentioned.
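    The rebuild itself is just (run from the project directory):
      # rebuild the local docker-elk_* images after changing the version or any Dockerfile
      docker-compose build

      # then recreate the containers from the freshly built images
      docker-compose up -d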