    JohnFromBD
    @vootbd
    docker-compose logs logstash command output
    Attaching to docker-elk-main_logstash_1
    logstash_1 | Using bundled JDK: /usr/share/logstash/jdk
    logstash_1 | OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
    logstash_1 | Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
    logstash_1 | [2021-05-27T08:13:13,643][INFO ][logstash.runner ] Log4j configuration path used is: /usr/share/logstash/config/log4j2.properties
    logstash_1 | [2021-05-27T08:13:13,652][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.12.1", "jruby.version"=>"jruby 9.2.13.0 (2.5.7) 2020-08-03 9a89c94bcc OpenJDK 64-Bit Server VM 11.0.10+9 on 11.0.10+9 +indy +jit [linux-x86_64]"}
    logstash_1 | [2021-05-27T08:13:13,670][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
    logstash_1 | [2021-05-27T08:13:13,679][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
    logstash_1 | [2021-05-27T08:13:14,237][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"cb4469a3-5762-4c72-9561-25ddb38a909c", :path=>"/usr/share/logstash/data/uuid"}
    logstash_1 | [2021-05-27T08:13:14,819][WARN ][deprecation.logstash.monitoringextension.pipelineregisterhook] Internal collectors option for Logstash monitoring is deprecated and targeted for removal in the next major version.
    logstash_1 | Please configure Metricbeat to monitor Logstash. Documentation can be found at:
    logstash_1 | https://www.elastic.co/guide/en/logstash/current/monitoring-with-metricbeat.html
    logstash_1 | [2021-05-27T08:13:15,246][WARN ][deprecation.logstash.outputs.elasticsearch] Relying on default value of pipeline.ecs_compatibility, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
    logstash_1 | [2021-05-27T08:13:15,779][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elastic:xxxxxx@elasticsearch:9200/]}}
    logstash_1 | [2021-05-27T08:13:16,133][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx@elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx@elasticsearch:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
    logstash_1 | [2021-05-27T08:13:16,225][WARN ][logstash.licensechecker.licensereader] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://elastic:xxxxxx@elasticsearch:9200/][Manticore::SocketException] Connection refused (Connection refused) {:url=>http://elastic:xxxxxx@elasticsearch:9200/, :error_message=>"Elasticsearch Unreachable: [http://elastic:xxxxxx@elasticsearch:9200/][Manticore::SocketException] Connection refused (Connection refused)", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
    logstash_1 | [2021-05-27T08:13:16,231][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"Elasticsearch Unreachable: [http://elastic:xxxxxx@elasticsearch:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
    logstash_1 | [2021-05-27T08:13:16,281][ERROR][logstash.monitoring.internalpipelinesource] Failed to fetch X-Pack information from Elasticsearch. This is likely due to failure to reach a live Elasti
    attached is the logstash.log file
    @antoineco
    JohnFromBD
    @vootbd
    Found the issue: in ~/docker-elk-main/logstash/config/logstash.yml, line 5 had been changed to localhost:5044
    thank you
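    For reference, the stock logstash/config/logstash.yml in docker-elk points at the elasticsearch container by its Compose service name, not localhost. A rough sketch of that file (illustrative only; exact contents vary by version):

    ```yaml
    # logstash/config/logstash.yml (sketch, not the exact upstream file)
    http.host: "0.0.0.0"

    # monitoring must target the elasticsearch service, not localhost
    xpack.monitoring.elasticsearch.hosts: [ "http://elasticsearch:9200" ]
    ```

    Pointing a setting like this at localhost fails from inside the container, because localhost resolves to the Logstash container itself rather than the Elasticsearch service.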
    JohnFromBD
    @vootbd
    How can I see real-time Logstash logs?
    Antoine Cotten
    @antoineco
    Same command, just append the -f flag (docker-compose logs -f logstash)
    Glad you figured your issue out!
    From the logs you shared it also seems like Elasticsearch is having issues (or maybe it's just still starting?)
    node.js.developers.kh
    @nodejsdeveloperskh

    Hi, I get the following error while I am trying to run the original repo.

    FATAL Error: "monitoring.ui.container.elasticsearch.enabled" setting was not applied. Check for spelling errors and ensure that expected plugins are installed.

    So I changed it like this: https://github.com/nodejsdeveloperskh/docker-elk

    But it made no difference; I get the same error.
    Note: I disabled X-Pack.
    What is the problem?

    Antoine Cotten
    @antoineco
    @nodejsdeveloperskh if you disabled X-Pack entirely (as opposed to just setting the license to "basic") you also have to remove those lines from your config: https://github.com/deviantony/docker-elk/blob/main/elasticsearch/config/elasticsearch.yml#L11-L13
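    The lines in question enable X-Pack features in elasticsearch.yml; roughly (illustrative excerpt, exact content depends on the docker-elk version):

    ```yaml
    # elasticsearch/config/elasticsearch.yml (illustrative excerpt)
    xpack.license.self_generated.type: basic
    xpack.security.enabled: true
    xpack.monitoring.collection.enabled: true
    ```

    If X-Pack is disabled entirely, these settings must be removed, otherwise the node may fail to start with an "unknown setting" error.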
    node.js.developers.kh
    @nodejsdeveloperskh

    @antoineco Thanks, your message resolved one of the problems, but Logstash still exits after the docker-compose up command. I removed those lines from the logstash, kibana & elasticsearch config files, as you can see in my repo.

    The Logstash logs in the terminal:

    Unknown setting 'ecs_compatibility' for elasticsearch
    Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"Java::JavaLang::IllegalStateException", :message=>"Unable to configure plugins: (ConfigurationError) Something is wrong with your configuration.", :backtrace=>["org.logstash.config.ir.CompiledPipeline.<init>(CompiledPipeline.java:100)", "org.logstash.execution.JavaBasePipelineExt.initialize(JavaBasePipelineExt.java:60)", "org.logstash.execution.JavaBasePipelineExt$INVOKER$i$1$0$initialize.call(JavaBasePipelineExt$INVOKER$i$1$0$initialize.gen)", "org.jruby.internal.runtime.methods.JavaMethod$JavaMethodN.call(JavaMethod.java:837)", "org.jruby.ir.runtime.IRRuntimeHelpers.instanceSuper(IRRuntimeHelpers.java:1156)", "org.jruby.ir.runtime.IRRuntimeHelpers.instanceSuperSplatArgs(IRRuntimeHelpers.java:1143)", "org.jruby.ir.targets.InstanceSuperInvokeSite.invoke(InstanceSuperInvokeSite.java:39)", "usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.RUBY$method$initialize$0(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:27)", "org.jruby.internal.runtime.methods.CompiledIRMethod.call(CompiledIRMethod.java:91)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:90)", "org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:332)", "org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:86)", "org.jruby.RubyClass.newInstance(RubyClass.java:915)", "org.jruby.RubyClass$INVOKER$i$newInstance.call(RubyClass$INVOKER$i$newInstance.gen)", "org.jruby.ir.targets.InvokeSite.invoke(InvokeSite.java:183)", "usr.share.logstash.logstash_minus_core.lib.logstash.pipeline_action.create.RUBY$method$execute$0(/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:36)", "usr.share.logstash.logstash_minus_core.lib.logstash.pipeline_action.create.RUBY$method$execute$0$__VARARGS__(/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb)", 
"org.jruby.internal.runtime.methods.CompiledIRMethod.call(CompiledIRMethod.java:91)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:90)", "org.jruby.ir.targets.InvokeSite.invoke(InvokeSite.java:183)", "usr.share.logstash.logstash_minus_core.lib.logstash.agent.RUBY$block$converge_state$2(/usr/share/logstash/logstash-core/lib/logstash/agent.rb:326)", "org.jruby.runtime.CompiledIRBlockBody.callDirect(CompiledIRBlockBody.java:136)", "org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:77)", "org.jruby.runtime.Block.call(Block.java:129)", "org.jruby.RubyProc.call(RubyProc.java:295)", "org.jruby.RubyProc.call(RubyProc.java:274)", "org.jruby.RubyProc.call(RubyProc.java:270)", "org.jruby.internal.runtime.RubyRunnable.run(RubyRunnable.java:105)", "java.base/java.lang.Thread.run(Thread.java:834)"]}
    warning: thread "Converge PipelineAction::Create<main>" terminated with exception (report_on_exception is true):
    LogStash::Error: Don't know how to handle `Java::JavaLang::IllegalStateException` for `PipelineAction::Create<main>`
             create at org/logstash/execution/ConvergeResultExt.java:109
                    add at org/logstash/execution/ConvergeResultExt.java:37
       converge_state at /usr/share/logstash/logstash-core/lib/logstash/agent.rb:339
    An exception happened when converging configuration {:exception=>LogStash::Error, :message=>"Don't know how to handle `Java::JavaLang::IllegalStateException` for `PipelineAction::Create<main>`", :backtrace=>["org/logstash/execution/ConvergeResultExt.java:109:in `create'", "org/logstash/execution/ConvergeResultExt.java:37:in `add'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:339:in `block in converge_state'"]}
    java.lang.IllegalStateException: Logstas

    And this is the full log of Logstash:

    Screenshot from 2021-06-16 09-45-05.png

    Antoine Cotten
    @antoineco
    That issue is related to the ecs_compatibility setting in logstash/pipeline/logstash.conf, which doesn't exist in v7.5 of the stack. I see in your repo that you downgraded to 7.5.
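    Concretely, the elasticsearch output in logstash/pipeline/logstash.conf would contain a line like the one below, which Logstash 7.5 does not understand (illustrative sketch, not the exact upstream file; credentials are the docker-elk defaults):

    ```conf
    output {
    	elasticsearch {
    		hosts => "elasticsearch:9200"
    		user => "elastic"
    		password => "changeme"
    		ecs_compatibility => disabled  # not recognized by 7.5; remove on that version
    	}
    }
    ```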
    node.js.developers.kh
    @nodejsdeveloperskh
    @antoineco Yes, you are right. I am using an older version. Thanks a lot!
    node.js.developers.kh
    @nodejsdeveloperskh
    @antoineco But I got the same error after upgrading the ELK version. Are you sure about that?
    Antoine Cotten
    @antoineco
    Maybe you didn't rebuild your images? See the section in the README about "Version selection".
    Also, to be clear, it's totally fine to downgrade, as long as you remove the Logstash setting I mentioned.
    node.js.developers.kh
    @nodejsdeveloperskh
    @antoineco I fixed it by commenting out these two lines.
    Thank you.
    image.png
    node.js.developers.kh
    @nodejsdeveloperskh
    @antoineco
    Deprecated setting?
    You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch bulk_path=>"/_monitoring/bulk?system_id=logstash&system_api_version=7&interval=1s", hosts=>[http://elasticsearch:9200], sniffing=>false, manage_template=>false, id=>"7d7dfa0f023f65240aeb31ebb353da5a42dc782979a2bd7e26e28b7cbd509bb3", document_type=>"%{[@metadata][document_type]}", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_dd6c95ab-e533-4160-b540-5f7b15e8e590", enable_metric=>true, charset=>"UTF-8">, workers=>1, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, ilm_enabled=>"auto", ilm_rollover_alias=>"logstash", ilm_pattern=>"{now/d}-000001", ilm_policy=>"logstash-policy", action=>"index", ssl_certificate_verification=>true, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
    Antoine Cotten
    @antoineco
    That one can be safely ignored. As far as I understand, this warning will go away (be fixed on the Logstash side) as soon as v8.0 is released.
    node.js.developers.kh
    @nodejsdeveloperskh
    Hi. Why do we have to use an Elasticsearch cluster in ELK when, based on what I read, it seems like a waste of time?
    Antoine Cotten
    @antoineco

    @nodejsdeveloperskh you don't have to create a cluster; in our default configuration Elasticsearch runs in single-node mode. Unless you add extra nodes manually, it will always remain a single Elasticsearch instance, and not an actual "cluster".

    The meaning of this setting is documented here. The default value is "elasticsearch", but we changed it to something else so that users don't accidentally reuse the name of an existing cluster in their environment (as recommended in the docs).
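    For illustration, the setting being discussed is cluster.name in elasticsearch.yml; docker-elk ships a non-default value along these lines (sketch, exact value may differ by version):

    ```yaml
    # elasticsearch/config/elasticsearch.yml (illustrative)
    cluster.name: "docker-cluster"
    ```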

    node.js.developers.kh
    @nodejsdeveloperskh

    @antoineco OK, you are right. I read that doc.

    And based on what they wrote there, I think I should deploy a cluster for production.

    Antoine Cotten
    @antoineco
    If you are going to handle a lot of data and want to keep everything highly available in case of disruption, then yes, I would also recommend it.
    Just keep in mind that Docker Compose works on a single host, so it's not particularly suitable for scaling Elasticsearch to multiple nodes ("cluster").
    hendarto kurniawan
    @hendarto100_gitlab
    Can you add documentation / a step-by-step guide for adding a new node to the Elastic cluster? I have a new server and need to join it to the Elastic cluster created from Docker Compose. My cluster uses the default docker-compose configuration and elastic.yml from your repository.
    Or isn't it possible to add it to your docker-elk?
    Antoine Cotten
    @antoineco
    @hendarto100_gitlab the procedure should be quite similar to what's described at https://github.com/deviantony/docker-elk/#how-to-scale-out-the-elasticsearch-cluster (click the link and check the instructions for Swarm Mode)
    Antoine Cotten
    @antoineco
    In a nutshell:
    1. Set a fixed node.name for each node (e.g. "docker-elk", "elasticsearch 1", ...)
    2. Declare each node's IP or network hostname inside discovery.seed_hosts (2 nodes in your case)
    3. Declare each node.name inside cluster.initial_master_nodes
    Let me know if you have any issues with those steps.
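    Put together, each node's elasticsearch.yml would look something like this (node names and addresses below are placeholders, not values from the repository):

    ```yaml
    # elasticsearch.yml on the first node (hypothetical values)
    node.name: es-node-1
    discovery.seed_hosts: ["192.168.1.10", "192.168.1.11"]
    cluster.initial_master_nodes: ["es-node-1", "es-node-2"]
    ```

    The second node would use the same discovery.seed_hosts and cluster.initial_master_nodes, but its own node.name (es-node-2).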
    Sivayavvari
    @Sivayavvari
    Hi, I am getting a few error logs from Elasticsearch.
    elasticsearch_1 | "at org.elasticsearch.xpack.security.authz.AuthorizationService.lambda$authorizeBulkItems$21(AuthorizationService.java:597) [x-pack-security-7.15.0.jar:7.15.0]",
    elasticsearch_1 | "at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:134) [elasticsearch-7.15.0.jar:7.15.0]",
    elasticsearch_1 | "at org.elasticsearch.xpack.security.authz.RBACEngine.lambda$authorizeIndexAction$4(RBACEngine.java:338) [x-pack-security-7.15.0.jar:7.15.0]",
    elasticsearch_1 | "at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:134) [elasticsearch-7.15.0.jar:7.15.0]",
    elasticsearch_1 | "at org.elasticsearch.xpack.security.authz.AuthorizationService.lambda$authorizeBulkItems$20(AuthorizationService.java:595) [x-pack-security-7.15.0.jar:7.15.0]",
    elasticsearch_1 | "at org.elasticsearch.xpack.security.authz.RBACEngine.authorizeIndexAction(RBACEngine.java:330) [x-pack-security-7.15.0.jar:7.15.0]",
    elasticsearch_1 | "at org.elasticsearch.xpack.security.authz.AuthorizationService.lambda$authorizeBulkItems$22(AuthorizationService.java:594) [x-pack-security-7.15.0.jar:7.15.0]",
    elasticsearch_1 | "at java.util.HashMap.forEach(HashMap.java:1425) [?:?]",
    elasticsearch_1 | "at org.elasticsearch.xpack.security.authz.AuthorizationService.lambda$authorizeBulkItems$23(AuthorizationService.java:591) [x-pack-security-7.15.0.jar:7.15.0]",
    elasticsearch_1 | "at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:134) [elasticsearch-7.15.0.jar:7.15.0]",
    elasticsearch_1 | "at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListenerDirectly(ListenableFuture.java:113) [elasticsearch-7.15.0.jar:7.15.0]",
    elasticsearch_1 | "at org.elasticsearch.common.util.concurrent.ListenableFuture.addListener(ListenableFuture.java:55) [elasticsearch-7.15.0.jar:7.15.0]",
    elasticsearch_1 | "at org.elasticsearch.common.util.concurrent.ListenableFuture.addListener(ListenableFuture.java:41) [elasticsearch-7.15.0.jar:7.15.0]",
    elasticsearch_1 | "at org.elasticsearch.xpack.security.authz.AuthorizationService$CachingAsyncSupplier.getAsync(AuthorizationService.java:761) [x-pack-security-7.15.0.jar:7.15.0]",
    elasticsearch_1 | "at org.elasticsearch.xpack.security.authz.AuthorizationService.lambda$authorizeBulkItems$24(AuthorizationService.java:525) [x-pack-security-7.15.0.jar:7.15.0]",
    elasticsearch_1 | "at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:134) [elasticsearch-7.15.0.jar:7.15.0]",
    elasticsearch_1 | "at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListenerDirectly(ListenableFuture.java:113) [elasticsearch-7.15.0.jar:7.15.0]",
    elasticsearch_1 | "at org.elasticsearch.common.util.concurrent.ListenableFuture.addListener(ListenableFuture.java:55) [elasticsearch-7.15.0.jar:7.15.0]",
    elasticsearch_1 | "at org.elasticsearch.common.util.concurrent.ListenableFuture.addListener(ListenableFuture.java:41) [elasticsearch-7.15.0.jar:7.15.0]",
    elasticsearch_1 | "at org.elasticsearch.xpack.security.authz.AuthorizationService$CachingAsyncSupplier.getAsync(AuthorizationService.java:761) [x-pack-security-7.15.0.jar:7.15.0]",
    elasticsearch_1 | "at org.elasticsearch.xpack.security.authz.AuthorizationService.authorizeBulkItems(AuthorizationService.java:524) [x-pack-security-7.15.0.jar:7.15.0]",
    elasticsearch_1 | "at org.elasticsearch.xpack.security.authz.AuthorizationService.handleIndexActionAuthorizationResult(AuthorizationService.java:388) [x-pack-security-7.15.0.jar:7.15.0]",
    elasticsearch_1 | "at org.elasticsearch.xpack.security.authz.AuthorizationService.lambda$authorizeAction$11(AuthorizationService.java:335) [x-pack-security-7.15.0.jar:7.15.0]",
    elasticsearch_1 | "at org.elasticsearch.xpack.security.authz.AuthorizationService$AuthorizationResultListener.onResponse(AuthorizationService.java:714) [x-pack-security-7.15.0.jar:7.15.0]",
    elasticsearch_1 | "at org.elasticsearch.xpack.security.authz.AuthorizationService$AuthorizationResultListener.onResponse(AuthorizationService.java:689) [x-pack-security-7.15.0.jar:7.15.0]",
    elasticsearch
    With these logs my stack is not coming up.
    As I am not very familiar with ELK, I am not able to see where I went wrong. Can anyone help me with this?
    Antoine Cotten
    @antoineco
    @Sivayavvari your error is truncated, you didn't post the beginning of the error message so it's difficult to understand the context. Care to post the full error somewhere?
    zqiushi
    @zqiushi
    [root@10-10-10-27-elk1 docker-elk]# docker-compose up
    Building elasticsearch
    Step 1/2 : ARG ELK_VERSION
    ERROR: Service 'elasticsearch' failed to build: Please provide a source image with from prior to commit
    [root@10-10-10-27-elk1 docker-elk]# docker -v
    Docker version 1.13.1, build 7d71120/1.13.1
    [root@10-10-10-27-elk1 docker-elk]# docker-compose -v
    docker-compose version 1.19.0, build 9e633ef
    efftee
    @sriptorium
    Hi, I'm looking to spawn an ELK stack with your very nice docker-elk, but also looking to enable and expose HTTPS to the internet (to safely ship data from distant instances, and access Kibana from anywhere). I'm looking for some guidance or recommendations to help me achieve that. (I've already successfully used your docker-compose on a private network, and am quite familiar with the whole process and tuning of Elastic, just not that part.)
    Antoine Cotten
    @antoineco
    Hi @sriptorium 👋
    I'd recommend looking into the tls branch in case you haven't already done so (there is a link in the README). This will give you TLS enabled in Elasticsearch (HTTP + TCP transport), as well as between Kibana/Logstash and Elasticsearch.
    Antoine Cotten
    @antoineco
    Now, to enable TLS communications between a client (e.g. browser) and the Kibana frontend, you first need to obtain a certificate from an authority that is trusted by this client. It could be your own Certificate Authority, or it could be a public authority such as Let's Encrypt. Then, follow the instructions about enabling TLS between the browser and Kibana at: https://www.elastic.co/guide/en/kibana/current/configuring-tls.html
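    Once a certificate is obtained, enabling TLS on the Kibana side boils down to a few kibana.yml settings (paths below are placeholders; see the linked Kibana documentation for details):

    ```yaml
    # kibana/config/kibana.yml (illustrative)
    server.ssl.enabled: true
    server.ssl.certificate: /path/to/kibana-server.crt
    server.ssl.key: /path/to/kibana-server.key
    ```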
    Antoine Cotten
    @antoineco
    The reason why it is not enabled by default on the tls branch is that web browsers would show users a loud warning when they try to open the Kibana URL, because the certificate presented by Kibana wouldn't be considered "trusted". We were afraid this would be a harsh user experience, so we preferred to let users decide about the certificate they want to use (and be intentional about it).
    Let me know if you need extra guidance!
    efftee
    @sriptorium
    Hi @antoineco 👋
    Thanks for your answers. All very clear and to the point. I'll work with that and should be just fine! 👍
    Siddharth Balyan
    @alt-glitch
    Hi, I want to use --config.reload.automatic in this docker-elk stack so that changing the config file on my host reloads Logstash with the updated configuration.
    Do I add this in docker-compose.yml or docker-stack.yml?
    Quite new to all this so I appreciate anyone's patience and help :)
    Antoine Cotten
    @antoineco
    @alt-glitch you have to pass it as a command argument in the docker-compose.yml file. Here is an example issue where a user was using that exact setting: deviantony/docker-elk#506
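    As a sketch, the override in docker-compose.yml could look like this (the command value is illustrative; the exact form depends on the image's entrypoint and your pipeline paths):

    ```yaml
    services:
      logstash:
        # override the image's default command to enable automatic pipeline reloading
        command: logstash --config.reload.automatic
    ```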