    Bob Lorincz
    @blorincz1

    Hi everyone, getting an error after upgrading the stack to ver 7.8.0 and looking for some help.

    I updated my local repo, ran docker-compose build then docker-compose up -d. When I try to hit the Kibana URL, I'm constantly getting the "Kibana server is not ready yet" message.

    Looking at the logs this is the only message I am seeing:

    {"type":"log","@timestamp":"2020-07-06T14:19:37Z","tags":["warning","plugins","licensing"],"pid":7,"message":"License information could not be obtained from Elasticsearch due to [security_exception] unable to authenticate user [kibana] for REST request [/_xpack], with { header={ WWW-Authenticate=\"Basic realm=\\\"security\\\" charset=\\\"UTF-8\\\"\" } } :: {\"path\":\"/_xpack\",\"statusCode\":401,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"security_exception\\\",\\\"reason\\\":\\\"unable to authenticate user [kibana] for REST request [/_xpack]\\\",\\\"header\\\":{\\\"WWW-Authenticate\\\":\\\"Basic realm=\\\\\\\"security\\\\\\\" charset=\\\\\\\"UTF-8\\\\\\\"\\\"}}],\\\"type\\\":\\\"security_exception\\\",\\\"reason\\\":\\\"unable to authenticate user [kibana] for REST request [/_xpack]\\\",\\\"header\\\":{\\\"WWW-Authenticate\\\":\\\"Basic realm=\\\\\\\"security\\\\\\\" charset=\\\\\\\"UTF-8\\\\\\\"\\\"}},\\\"status\\\":401}\",\"wwwAuthenticateDirective\":\"Basic realm=\\\"security\\\" charset=\\\"UTF-8\\\"\"} error"}

    Any assistance is appreciated.

    Antoine Cotten
    @antoineco
    @blorincz1 the kibana user was renamed to kibana_system in v7.8.
    I wish Elastic had offered a way to perform that transition smoothly, but I didn't find anything relevant, unfortunately.
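    For reference, the change in kibana/config/kibana.yml would look roughly like this (a sketch; the values shown are docker-elk defaults):

    ```yaml
    ## kibana/config/kibana.yml
    ## Before (7.7 and earlier), Kibana authenticated as the built-in "kibana" user:
    # elasticsearch.username: kibana

    ## From 7.8 on, the built-in user is named "kibana_system":
    elasticsearch.username: kibana_system
    elasticsearch.password: changeme
    ```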
    Antoine Cotten
    @antoineco
    With that being said, the kibana user is deprecated, not removed, so the upgrade should have worked with the procedure you described.
    Bob Lorincz
    @blorincz1

    Thank you for pointing that out @antoineco. I updated the username to kibana_system in kibana/config/kibana.yml, but I'm getting the same result.

    {"type":"log","@timestamp":"2020-07-07T12:55:28Z","tags":["warning","plugins","licensing"],"pid":7,"message":"License information could not be obtained from Elasticsearch due to [security_exception] unable to authenticate user [kibana_system] for REST request [/_xpack], with { header={ WWW-Authenticate=\"Basic realm=\\\"security\\\" charset=\\\"UTF-8\\\"\" } }

    Antoine Cotten
    @antoineco
    You might need to initialize the password for that user. I'm still wondering why the old kibana user stopped working, though; it's supposed to be only deprecated, not locked.
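    One way to set that password is the security API (a sketch; this assumes the stack runs locally with the default elastic/changeme superuser, and that you substitute your own password):

    ```sh
    # Set a password for the built-in kibana_system user.
    curl -X POST -u elastic:changeme \
      -H 'Content-Type: application/json' \
      http://localhost:9200/_security/user/kibana_system/_password \
      -d '{"password": "changeme"}'
    ```

    The same can be done interactively for all built-in users with bin/elasticsearch-setup-passwords inside the Elasticsearch container.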
    Bhanupraveen G
    @bhanupraveeng
    Hello,
    After enabling X-Pack security, Logstash is not coming up. Kindly advise. I can't see any logs for Logstash.
    vishal979
    @vishal979
    Hi, I need some help. Is it possible to use nested queries in Elasticsearch? My index has documents of 2 types:
    {"action":"a","trans_id":"xyz",...}
    and
    {"action":"c","camp_id":"abc","trans_id":"xyz"}
    What I have is a camp_id, and since every camp_id comes with a trans_id, I want to fetch all the records matching:
    select * from index where trans_id in (select trans_id from index where camp_id="someid")
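    Elasticsearch has no SQL-style subqueries, but the IN (…) pattern can be emulated with two round trips (a sketch; the index name my-index, the port, and the jq post-processing are assumptions):

    ```sh
    # Step 1: collect the trans_id values for the given camp_id.
    TRANS_IDS=$(curl -s 'http://localhost:9200/my-index/_search' \
      -H 'Content-Type: application/json' \
      -d '{"query": {"term": {"camp_id": "someid"}}, "_source": ["trans_id"], "size": 1000}' \
      | jq '[.hits.hits[]._source.trans_id]')

    # Step 2: fetch all records whose trans_id is in that list.
    curl -s 'http://localhost:9200/my-index/_search' \
      -H 'Content-Type: application/json' \
      -d "{\"query\": {\"terms\": {\"trans_id\": $TRANS_IDS}}}"
    ```

    If the list of IDs lives in another document, a terms lookup query can do the second step server-side instead.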
    kamal2222ahmed
    @kamal2222ahmed
    Has anyone here used this elk stack with JMeter?
    JaivyDaam
    @JaivyDaam
    @bhanupraveeng This is a bit too vague to get actual assistance... When you run docker-compose up logstash you should be able to see what Logstash is doing and what the actual problem is.
    @kamal2222ahmed Nope, and looking at what Apache JMeter does, why not do that with the Beats framework?
    TAMIL SELVAN S.B
    @TamilTintin

    Hi all, I am getting this error:
    ][logstash.outputs.elasticsearch] Encountered a retryable error. Will Retry with exponential backoff {:code=>400, :url=>"http://xxxxx9200/_xpack/monitoring/_bulk?system_id=logstash&system_api_version=6&interval=1s"}

    while running Logstash as a Docker service; the Logstash version is 6.8.0.

    Thanks in advance

    Antoine Cotten
    @antoineco
    @TamilTintin I'm assuming you started the stack from the release-6.x branch, as described in the README? https://github.com/deviantony/docker-elk#version-selection
    If yes, make sure you ran docker-compose build after switching branches.
    If your Elasticsearch contains data from a version 7.x you may have started before switching to 6.x, it might be wise to wipe that data with docker-compose down -v and start with a fresh data volume.
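    The full sequence when switching branches would look something like this (a sketch; note that down -v destroys all existing data):

    ```sh
    git checkout release-6.x
    docker-compose down -v   # stop the stack and delete the data volume
    docker-compose build     # rebuild the images against the 6.x Dockerfiles
    docker-compose up -d
    ```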
    domdom8
    @domdom8
    sorry for asking
    do we have a group for the ELK community in Slack?
    JaivyDaam
    @JaivyDaam
    @domdom8 the blasphemy!
    Probably; there are a lot of Slack communities
    Antoine Cotten
    @antoineco
    This Gitter chat is not an all-purpose room for discussing Elastic products, just a small corner of the web to help users get up and running with docker-elk.
    domdom8
    @domdom8
    @antoineco thank you
    TAMIL SELVAN S.B
    @TamilTintin

    Hi all, in a docker stack deploy with Logstash, Kibana and Filebeat:

    I would like to know what would happen if Logstash is restarted. I mean, the logic should be: if Logstash is restarted, then Filebeat should be restarted to re-establish the stream.

    What can be done for this case ?

    Thank you

    Antoine Cotten
    @antoineco
    @TamilTintin if Logstash gets restarted, Filebeat will automatically queue events until Logstash is available again. First it will retry every 1s, then it will progressively increase the retry delay up to 60s. See the backoff.init and backoff.max settings under Configure the Logstash output.
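    In filebeat.yml those settings sit under the Logstash output (a sketch showing the defaults; the host is an assumption):

    ```yaml
    output.logstash:
      hosts: ["logstash:5044"]
      backoff.init: 1s   # first retry after a failed connection
      backoff.max: 60s   # the delay grows on each failure, capped here
    ```

    So no manual restart of Filebeat is needed; it reconnects on its own.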
    Mansour
    @Mansour-J

    Hi all, I am trying to use this ELK docker-compose file and push Auditbeat logs to Elasticsearch. The whole setup is running on a single host. docker-compose up runs successfully, and I have Auditbeat installed locally. Auditbeat has a command that tests whether it can connect to Elasticsearch: auditbeat test output

    When I run the auditbeat connectivity test, I get:

    elasticsearch: http://localhost:9200...
      parse url... OK
      connection...
        parse host... OK
        dns lookup... OK
        addresses: ::1, 127.0.0.1
        dial up... OK
      TLS... WARN secure connection disabled
      talk to server... ERROR Connection marked as failed because the onConnect callback failed: cannot retrieve the elasticsearch license from the /_license endpoint, Auditbeat requires the default distribution of Elasticsearch. Please make the endpoint accessible to Auditbeat so it can verify the license.: could not extract license information from the server response: unknown state, received: 'expired'

    but when I pull down the docker.elastic.co/kibana/kibana:7.9.2 image directly with docker pull and run it, it works.

    I have tried changing xpack.license.self_generated.type from trial to basic as well.

    I am running it on CentOS 7.
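    The license state the error complains about can be checked directly (a sketch; credentials are the docker-elk defaults):

    ```sh
    # The "status" field in the response shows e.g. "active" or "expired".
    curl -u elastic:changeme http://localhost:9200/_license
    ```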

    Antoine Cotten
    @antoineco
    @Mansour-J are you running a version of Beats which matches the stack version (7.9)? If yes, did you provide ES credentials in your beats config file? You can try with the elastic user (super-admin) first, and if it works, consider switching to a role with lower privileges.
    Mansour
    @Mansour-J
    @antoineco Yes, the Auditbeat version matches (not considering the patch number) the Elasticsearch version, and I used the default elastic and changeme as the password (which I think is the superuser)
    auditbeat version 7.9.2 (amd64), libbeat 7.9.2 [2ab907f5ccecf9fd82fe37105082e89fd871f684 built 2020-09-22 23:14:31 +0000 UTC]
    
    {
      "name" : "19780697fd9b",
      "cluster_name" : "docker-cluster",
      "cluster_uuid" : "CMKnZU0KTZKuOWyAbZJEpg",
      "version" : {
        "number" : "7.9.1",
        "build_flavor" : "default",
        "build_type" : "docker",
        "build_hash" : "083627f112ba94dffc1232e8b42b73492789ef91",
        "build_date" : "2020-09-01T21:22:21.964974Z",
        "build_snapshot" : false,
        "lucene_version" : "8.6.2",
        "minimum_wire_compatibility_version" : "6.8.0",
        "minimum_index_compatibility_version" : "6.0.0-beta1"
      },
      "tagline" : "You Know, for Search"
    }
    Antoine Cotten
    @antoineco
    OK, looks good to me. I just noticed the word expired at the end of the error. Could be that Auditbeat requires a paid license, in which case trial (our default) should work for 30 days. Was your stack initialized more than 30 days ago?
    Also, please note that Kibana does work just fine with an expired trial license, it just doesn't show all the features (and it should show a warning banner).
    Antoine Cotten
    @antoineco
    I thought beats also worked that way, let me try to reproduce.
    Mansour
    @Mansour-J

    @antoineco I am not sure if Auditbeat requires a paid license. However, swapping the docker-compose Elasticsearch for one started with:

    docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.9.2

    gives the following output:

    elasticsearch: http://localhost:9200...
      parse url... OK
      connection...
        parse host... OK
        dns lookup... OK
        addresses: ::1, 127.0.0.1
        dial up... OK
        talk to server... OK
    Antoine Cotten
    @antoineco
    @Mansour-J So it might be an authorization error, because the only difference with the default setup is the absence of xpack options (security). Maybe auditbeat test output didn't read the credentials from your file; did you double-check they were defined correctly?
    Mansour
    @Mansour-J
    @antoineco Interesting, because Auditbeat is installed on the host while Elasticsearch is in a container. I would assume that the password in Auditbeat would be the same for both of them.
    Antoine Cotten
    @antoineco
    @Mansour-J I mean, you bypassed the docker-elk config with your Docker command and didn't enable X-Pack Security on the command line. By default, ES doesn't require any credentials.
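    To approximate docker-elk's security setup on a plain docker run, the relevant options would look roughly like this (a sketch; the password is the docker-elk default):

    ```sh
    docker run -p 9200:9200 -p 9300:9300 \
      -e "discovery.type=single-node" \
      -e "xpack.security.enabled=true" \
      -e "ELASTIC_PASSWORD=changeme" \
      docker.elastic.co/elasticsearch/elasticsearch:7.9.2
    ```

    Without those two extra variables, the container accepts unauthenticated requests, which is why your test succeeded against it.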
    Antoine Cotten
    @antoineco

    @Mansour-J I wasn't able to reproduce:

    $ docker run \
      --cap-add="AUDIT_CONTROL" --cap-add="AUDIT_READ" \
      docker.elastic.co/beats/auditbeat:7.9.2 \
      test output \
      -E 'output.elasticsearch.hosts=["host.docker.internal:9200"]' -E 'output.elasticsearch.username=elastic' -E 'output.elasticsearch.password=changeme'
    elasticsearch: http://host.docker.internal:9200...
      parse url... OK
      connection...
        parse host... OK
        dns lookup... OK
        addresses: 192.168.65.2
        dial up... OK
      TLS... WARN secure connection disabled
      talk to server... OK
      version: 7.9.1

    (I use host.docker.internal because I run Docker for Desktop, but in your case localhost should work just fine)

    RoD
    @rodrigoquijarro
    Hello everyone! I am trying to use this ELK docker-compose file and push Filebeat logs to Elasticsearch. The whole setup is running on a different host (same network). The docker-compose stack runs successfully when Filebeat is installed locally, but not remotely. Running the validation command for Filebeat shows the following output. We have checked all possible issues with the network configuration and so far everything is OK; however, we think the discovery.type: single-node environment parameter could be a possible cause.
    $ sudo filebeat test output
    elasticsearch: http://10.2.2.2:5044...
      parse url... OK
      connection...
        parse host... OK
        dns lookup... OK
        addresses: 10.1.1.20
        dial up... ERROR dial tcp 10.1.1.20:5044: i/o timeout
    I'm running both machines on Debian 10.
    Antoine Cotten
    @antoineco
    @rodrigoquijarro discovery.type: single-node shouldn't be related to your issue here.
    Are you sending your logs to Logstash or directly to Elasticsearch? By default, port 5044 is reserved for Logstash's beats input. If you want to send directly to Elasticsearch, you have to use port 9200, not 5044.
    See https://www.elastic.co/guide/en/beats/filebeat/current/configuring-output.html
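    The two options look like this in filebeat.yml; pick exactly one (a sketch; the host addresses and credentials are assumptions):

    ```yaml
    ## a) straight to Elasticsearch, port 9200:
    output.elasticsearch:
      hosts: ["10.2.2.2:9200"]
      username: elastic
      password: changeme

    ## b) through Logstash's beats input, port 5044:
    # output.logstash:
    #   hosts: ["10.2.2.2:5044"]
    ```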
    Antoine Cotten
    @antoineco
    Here is a quick test I ran locally:
    $ docker-compose up -d elasticsearch
    $ docker run docker.elastic.co/beats/filebeat:7.9.3 \
        test output \
        -E 'output.elasticsearch.hosts=["host.docker.internal:9200"]' -E 'output.elasticsearch.username=elastic' -E 'output.elasticsearch.password=changeme'
    elasticsearch: http://host.docker.internal:9200...
      parse url... OK
      connection...
        parse host... OK
        dns lookup... OK
        addresses: 192.168.65.2
        dial up... OK
      TLS... WARN secure connection disabled
      talk to server... OK
      version: 7.9.2
    Antoine Cotten
    @antoineco
    Another test, this time with Logstash:
    $ docker-compose up -d elasticsearch logstash
    #  blank config for testing
    $ touch filebeat.yml
    $ docker run docker.elastic.co/beats/filebeat:7.9.3 \
        -v ${PWD}/filebeat.yml:/usr/share/filebeat/filebeat.yml \
        test output \
        -E 'output.logstash.hosts=["host.docker.internal:5044"]'
    logstash: host.docker.internal:5044...
      connection...
        parse host... OK
        dns lookup... OK
        addresses: 192.168.65.2
        dial up... OK
      TLS... WARN secure connection disabled
      talk to server... OK
    RoD
    @rodrigoquijarro
    Thank you very much for your reply @antoineco. I tried sending logs to Logstash and to Elasticsearch, separately and together. Locally it works, yes; however, when I try to collect them remotely (from an external host with Filebeat installed) it shows the timeout message.
    Antoine Cotten
    @antoineco
    @rodrigoquijarro that probably means you have a firewall blocking TCP packets to ports 9200 and 5044 between Filebeat and the machine running the stack. Or maybe the IP address is simply not routable?
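    A quick way to narrow that down (a sketch; the IP is taken from your output above):

    ```sh
    # From the Filebeat host, check raw TCP reachability of both ports:
    nc -vz 10.2.2.2 9200   # Elasticsearch HTTP
    nc -vz 10.2.2.2 5044   # Logstash beats input

    # On the ELK host, confirm the ports are actually bound:
    ss -tlnp | grep -E ':(9200|5044)'
    ```

    If nc times out but the ports are bound on the ELK host, a firewall or routing rule in between is the likely culprit.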
    Mohammed Gaber
    @mgabs
    Hi all, I'm new to ELK and have been trying to run the stack for a while now with no joy.
    Elasticsearch is working fine, Logstash reports ES unreachable, and Kibana dies with an error:
    kibana_1         | {"type":"log","@timestamp":"2020-11-03T14:14:59Z","tags":["fatal","root"],"pid":6,"message":"Error: Setup lifecycle of \"monitoring\" plugin wasn't completed in 30sec. Consider disabling the plugin and re-start.\n    at Timeout.setTimeout (/usr/share/kibana/src/core/utils/promise.js:31:90)\n    at ontimeout (timers.js:436:11)\n    at tryOnTimeout (timers.js:300:5)\n    at listOnTimeout (timers.js:263:5)\n    at Timer.processTimers (timers.js:223:10)"}
    kibana_1         | {"type":"log","@timestamp":"2020-11-03T14:14:59Z","tags":["info","plugins-system"],"pid":6,"message":"Stopping all plugins."}
    kibana_1         | 
    kibana_1         |  FATAL  Error: Setup lifecycle of "monitoring" plugin wasn't completed in 30sec. Consider disabling the plugin and re-start.
    kibana_1         | 
    elasticsearch_1  | {"type": "server", "timestamp": "2020-11-03T14:15:00,579Z", "level": "INFO", "component": "o.e.c.m.MetadataCreateIndexService", "cluster.name": "docker-cluster", "node.name": "d72b08299b6b", "message": "[ilm-history-2-000001] creating index, cause [api], templates [ilm-history], shards [1]/[0]", "cluster.uuid": "OmP8_jOLQ_C-NUQEPRIb1A", "node.id": "EF4mNoLbSs-aLiiv_uBFmw"  }
    elasticsearch_1  | {"type": "server", "timestamp": "2020-11-03T14:15:00,664Z", "level": "INFO", "component": "o.e.x.i.IndexLifecycleTransition", "cluster.name": "docker-cluster", "node.name": "d72b08299b6b", "message": "moving index [ilm-history-2-000001] from [null] to [{\"phase\":\"new\",\"action\":\"complete\",\"name\":\"complete\"}] in policy [ilm-history-ilm-policy]", "cluster.uuid": "OmP8_jOLQ_C-NUQEPRIb1A", "node.id": "EF4mNoLbSs-aLiiv_uBFmw"  }
    elasticsearch_1  | {"type": "server", "timestamp": "2020-11-03T14:15:00,845Z", "level": "INFO", "component": "o.e.x.i.IndexLifecycleTransition", "cluster.name": "docker-cluster", "node.name": "d72b08299b6b", "message": "moving index [ilm-history-2-000001] from [{\"phase\":\"new\",\"action\":\"complete\",\"name\":\"complete\"}] to [{\"phase\":\"hot\",\"action\":\"unfollow\",\"name\":\"wait-for-indexing-complete\"}] in policy [ilm-history-ilm-policy]", "cluster.uuid": "OmP8_jOLQ_C-NUQEPRIb1A", "node.id": "EF4mNoLbSs-aLiiv_uBFmw"  }
    elasticsearch_1  | {"type": "server", "timestamp": "2020-11-03T14:15:00,913Z", "level": "INFO", "component": "o.e.c.r.a.AllocationService", "cluster.name": "docker-cluster", "node.name": "d72b08299b6b", "message": "Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[ilm-history-2-000001][0]]]).", "cluster.uuid": "OmP8_jOLQ_C-NUQEPRIb1A", "node.id": "EF4mNoLbSs-aLiiv_uBFmw"  }
    elasticsearch_1  | {"type": "server", "timestamp": "2020-11-03T14:15:00,991Z", "level": "INFO", "component": "o.e.x.i.IndexLifecycleTransition", "cluster.name": "docker-cluster", "node.name": "d72b08299b6b", "message": "moving index [ilm-history-2-000001] from [{\"phase\":\"hot\",\"action\":\"unfollow\",\"name\":\"wait-for-indexing-complete\"}] to [{\"phase\":\"hot\",\"action\":\"unfollow\",\"name\":\"wait-for-follow-shard-tasks\"}] in policy [ilm-history-ilm-policy]", "cluster.uuid": "OmP8_jOLQ_C-NUQEPRIb1A", "node.id": "EF4mNoLbSs-aLiiv_uBFmw"  }
    devian_kibana_1 exited with code 1
    logstash_1       | [2020-11-03T14:18:14,409][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}
    logstash_1       | [2020-11-03T14:18:14,524][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx@elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx@elasticsearch:9200/][Manticore::ConnectTimeout] connect timed out"}
    logstash_1       | [2020-11-03T14:18:22,279][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx@elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::
    I reverted back to the repo with git sync and tried again, but no joy.
    Appreciate your help.
    Antoine Cotten
    @antoineco
    @mgabs seems like networking issues to me. Are you running on CentOS?
    Antoine Cotten
    @antoineco
    If yes, could you please temporarily set SELinux in permissive mode with sudo setenforce 0? (you can re-enable it after we confirm it's not the source of your issue)
    Mohammed Gaber
    @mgabs
    I'm running Arch; I am doubting that now as well.
    Not running SELinux, only nftables / firewalld.