    Antoine Cotten
    @antoineco
    Also, please note that Kibana does work just fine with an expired trial license, it just doesn't show all the features (and it should show a warning banner).
    Antoine Cotten
    @antoineco
    I thought beats also worked that way, let me try to reproduce.
    Mansour
    @Mansour-J

    @antoineco I am not sure if Auditbeat requires a paid license. However, replacing the docker-compose Elasticsearch by running:

    docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.9.2

    Gives the following output:

    elasticsearch: http://localhost:9200...
      parse url... OK
      connection...
        parse host... OK
        dns lookup... OK
        addresses: ::1, 127.0.0.1
        dial up... OK
        talk to server... OK
    Antoine Cotten
    @antoineco
    @Mansour-J So it might be an authorization error, because the only difference from the default setup is the absence of the X-Pack options (security). Maybe auditbeat test output didn't read the credentials from your file; did you double-check they were defined correctly?
    Mansour
    @Mansour-J
    @antoineco Interesting, because Auditbeat is installed on the host while Elasticsearch is in a container. I would assume that the password in Auditbeat would be the same for both of them.
    Antoine Cotten
    @antoineco
    @Mansour-J I mean, you bypassed the docker-elk config with your Docker command, and didn't enable Xpack Security on the command line. By default, ES doesn't require any credentials.
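    For comparison, here is a sketch of the same single-node command with X-Pack security enabled (the ELASTIC_PASSWORD value mirrors the docker-elk default; adjust as needed):

```shell
# Single-node Elasticsearch with security enabled, so credentials are
# required just like in the docker-elk setup.
docker run -p 9200:9200 -p 9300:9300 \
  -e "discovery.type=single-node" \
  -e "xpack.security.enabled=true" \
  -e "ELASTIC_PASSWORD=changeme" \
  docker.elastic.co/elasticsearch/elasticsearch:7.9.2
```

    With security enabled this way, an unauthenticated auditbeat test output should fail, which would confirm the hypothesis above.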
    Antoine Cotten
    @antoineco

    @Mansour-J I wasn't able to reproduce:

    $ docker run \
      --cap-add="AUDIT_CONTROL" --cap-add="AUDIT_READ" \
      docker.elastic.co/beats/auditbeat:7.9.2 \
      test output \
      -E 'output.elasticsearch.hosts=["host.docker.internal:9200"]' -E 'output.elasticsearch.username=elastic' -E 'output.elasticsearch.password=changeme'
    elasticsearch: http://host.docker.internal:9200...
      parse url... OK
      connection...
        parse host... OK
        dns lookup... OK
        addresses: 192.168.65.2
        dial up... OK
      TLS... WARN secure connection disabled
      talk to server... OK
      version: 7.9.1

    (I use host.docker.internal because I run Docker for Desktop, but in your case localhost should work just fine)

    RoD
    @rodrigoquijarro
    Hello everyone! I am trying to use this ELK docker-compose file and push Filebeat logs to Elasticsearch. The whole setup is running on a different host (same network). The docker-compose stack runs successfully when Filebeat is installed locally, but not remotely. Running the validation command for Filebeat shows the following output. We have checked all possible network configuration issues and so far everything looks fine; however, we're thinking the 'discovery.type: single-node' environment parameter could be a possible cause.
    $ sudo filebeat test output
    elasticsearch: http://10.2.2.2:5044...
      parse url... OK
      connection...
        parse host... OK
        dns lookup... OK
        addresses: 10.1.1.20
        dial up... ERROR dial tcp 10.1.1.20:5044: i/o timeout
    I'm running both machines on Debian 10.
    Antoine Cotten
    @antoineco
    @rodrigoquijarro discovery.type: single-node shouldn't be related to your issue here.
    Are you sending your logs to Logstash or directly to Elasticsearch? By default, port 5044 is reserved for Logstash's beats input. If you want to send directly to Elasticsearch, you have to use port 9200, not 5044.
    See https://www.elastic.co/guide/en/beats/filebeat/current/configuring-output.html
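    For reference, the same test can be pointed at either destination with -E overrides (host and credentials below are placeholders based on the output above; note Filebeat allows only one output to be enabled at a time):

```shell
# Direct to Elasticsearch on port 9200 (credentials required by docker-elk):
filebeat test output \
  -E 'output.elasticsearch.hosts=["10.2.2.2:9200"]' \
  -E 'output.elasticsearch.username=elastic' \
  -E 'output.elasticsearch.password=changeme'

# Or to Logstash's beats input on port 5044 (no credentials needed):
filebeat test output -E 'output.logstash.hosts=["10.2.2.2:5044"]'
```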
    Antoine Cotten
    @antoineco
    Here is a quick test I ran locally:
    $ docker-compose up -d elasticsearch
    $ docker run docker.elastic.co/beats/filebeat:7.9.3 \
        test output \
        -E 'output.elasticsearch.hosts=["host.docker.internal:9200"]' -E 'output.elasticsearch.username=elastic' -E 'output.elasticsearch.password=changeme'
    elasticsearch: http://host.docker.internal:9200...
      parse url... OK
      connection...
        parse host... OK
        dns lookup... OK
        addresses: 192.168.65.2
        dial up... OK
      TLS... WARN secure connection disabled
      talk to server... OK
      version: 7.9.2
    Antoine Cotten
    @antoineco
    Another test, this time with Logstash:
    $ docker-compose up -d elasticsearch logstash
    #  blank config for testing
    $ touch filebeat.yml
    $ docker run docker.elastic.co/beats/filebeat:7.9.3 \
        -v ${PWD}/filebeat.yml:/usr/share/filebeat/filebeat.yml \
        test output \
        -E 'output.logstash.hosts=["host.docker.internal:5044"]'
    logstash: host.docker.internal:5044...
      connection...
        parse host... OK
        dns lookup... OK
        addresses: 192.168.65.2
        dial up... OK
      TLS... WARN secure connection disabled
      talk to server... OK
    RoD
    @rodrigoquijarro
    Thank you very much for your reply @antoineco. I tried sending logs to Logstash and Elasticsearch, separately and together. Locally it works, yes, but when I try to collect them remotely (from an external host with Filebeat installed) it shows the timeout message.
    Antoine Cotten
    @antoineco
    @rodrigoquijarro that probably means you have a firewall blocking TCP packets to ports 9200 and 5044 between Filebeat and the machine running the stack. Or maybe the IP address is simply not routable?
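    One quick way to check this, independently of Beats, is to test raw TCP reachability from the Filebeat host (IP and ports taken from the output above; credentials are the docker-elk defaults):

```shell
# A timeout on either port confirms the problem is at the network level,
# not in the Filebeat configuration.
nc -vz -w 5 10.1.1.20 5044
nc -vz -w 5 10.1.1.20 9200

# Elasticsearch should also answer a plain HTTP request:
curl -u elastic:changeme http://10.1.1.20:9200
```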
    Mohammed Gaber
    @mgabs
    Hi all, I'm new to ELK and have been trying to run the stack for a while now with no joy.
    Elasticsearch is working fine, but Logstash reports ES as unreachable and Kibana dies with an error:
    kibana_1         | {"type":"log","@timestamp":"2020-11-03T14:14:59Z","tags":["fatal","root"],"pid":6,"message":"Error: Setup lifecycle of \"monitoring\" plugin wasn't completed in 30sec. Consider disabling the plugin and re-start.\n    at Timeout.setTimeout (/usr/share/kibana/src/core/utils/promise.js:31:90)\n    at ontimeout (timers.js:436:11)\n    at tryOnTimeout (timers.js:300:5)\n    at listOnTimeout (timers.js:263:5)\n    at Timer.processTimers (timers.js:223:10)"}
    kibana_1         | {"type":"log","@timestamp":"2020-11-03T14:14:59Z","tags":["info","plugins-system"],"pid":6,"message":"Stopping all plugins."}
    kibana_1         | 
    kibana_1         |  FATAL  Error: Setup lifecycle of "monitoring" plugin wasn't completed in 30sec. Consider disabling the plugin and re-start.
    kibana_1         | 
    elasticsearch_1  | {"type": "server", "timestamp": "2020-11-03T14:15:00,579Z", "level": "INFO", "component": "o.e.c.m.MetadataCreateIndexService", "cluster.name": "docker-cluster", "node.name": "d72b08299b6b", "message": "[ilm-history-2-000001] creating index, cause [api], templates [ilm-history], shards [1]/[0]", "cluster.uuid": "OmP8_jOLQ_C-NUQEPRIb1A", "node.id": "EF4mNoLbSs-aLiiv_uBFmw"  }
    elasticsearch_1  | {"type": "server", "timestamp": "2020-11-03T14:15:00,664Z", "level": "INFO", "component": "o.e.x.i.IndexLifecycleTransition", "cluster.name": "docker-cluster", "node.name": "d72b08299b6b", "message": "moving index [ilm-history-2-000001] from [null] to [{\"phase\":\"new\",\"action\":\"complete\",\"name\":\"complete\"}] in policy [ilm-history-ilm-policy]", "cluster.uuid": "OmP8_jOLQ_C-NUQEPRIb1A", "node.id": "EF4mNoLbSs-aLiiv_uBFmw"  }
    elasticsearch_1  | {"type": "server", "timestamp": "2020-11-03T14:15:00,845Z", "level": "INFO", "component": "o.e.x.i.IndexLifecycleTransition", "cluster.name": "docker-cluster", "node.name": "d72b08299b6b", "message": "moving index [ilm-history-2-000001] from [{\"phase\":\"new\",\"action\":\"complete\",\"name\":\"complete\"}] to [{\"phase\":\"hot\",\"action\":\"unfollow\",\"name\":\"wait-for-indexing-complete\"}] in policy [ilm-history-ilm-policy]", "cluster.uuid": "OmP8_jOLQ_C-NUQEPRIb1A", "node.id": "EF4mNoLbSs-aLiiv_uBFmw"  }
    elasticsearch_1  | {"type": "server", "timestamp": "2020-11-03T14:15:00,913Z", "level": "INFO", "component": "o.e.c.r.a.AllocationService", "cluster.name": "docker-cluster", "node.name": "d72b08299b6b", "message": "Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[ilm-history-2-000001][0]]]).", "cluster.uuid": "OmP8_jOLQ_C-NUQEPRIb1A", "node.id": "EF4mNoLbSs-aLiiv_uBFmw"  }
    elasticsearch_1  | {"type": "server", "timestamp": "2020-11-03T14:15:00,991Z", "level": "INFO", "component": "o.e.x.i.IndexLifecycleTransition", "cluster.name": "docker-cluster", "node.name": "d72b08299b6b", "message": "moving index [ilm-history-2-000001] from [{\"phase\":\"hot\",\"action\":\"unfollow\",\"name\":\"wait-for-indexing-complete\"}] to [{\"phase\":\"hot\",\"action\":\"unfollow\",\"name\":\"wait-for-follow-shard-tasks\"}] in policy [ilm-history-ilm-policy]", "cluster.uuid": "OmP8_jOLQ_C-NUQEPRIb1A", "node.id": "EF4mNoLbSs-aLiiv_uBFmw"  }
    devian_kibana_1 exited with code 1
    logstash_1       | [2020-11-03T14:18:14,409][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}
    logstash_1       | [2020-11-03T14:18:14,524][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx@elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx@elasticsearch:9200/][Manticore::ConnectTimeout] connect timed out"}
    logstash_1       | [2020-11-03T14:18:22,279][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx@elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::
    I reverted back to the repo with a git sync and tried again, but no joy.
    Appreciate your help.
    Antoine Cotten
    @antoineco
    @mgabs seems like networking issues to me. Are you running on CentOS?
    Antoine Cotten
    @antoineco
    If yes, could you please temporarily set SELinux in permissive mode with sudo setenforce 0? (you can re-enable it after we confirm it's not the source of your issue)
    Mohammed Gaber
    @mgabs
    I'm running Arch; I'm doubting that now as well.
    Not running SELinux, only nftables/firewalld.
    Antoine Cotten
    @antoineco
    @mgabs then it's very likely that you're running into exactly that issue: https://github.com/deviantony/docker-elk/issues/541#issuecomment-707572713
    Firewalld doesn't play nice at all with Docker (Compose especially) and requires some extra configuration to avoid interfering with Docker-generated rules.
    Mohammed Gaber
    @mgabs
    I confirm it's an internet access / networking problem
    tried the fix you linked, no joy
    Mohammed Gaber
    @mgabs
    I got the stack running after disabling nftables, thanks
    Antoine Cotten
    @antoineco
    The "fix" depends on the IP address allocated to the docker-elk bridge, so I wouldn't recommend copying those commands.
    Glad you managed to isolate the problem though :+1: I hope you find a way to configure firewalld in a suitable way.
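    If it helps, the subnet those rules would need to reference can be looked up from Docker itself (the network name here is an assumption based on Compose's default <project>_<network> naming):

```shell
# Print the subnet Docker allocated to the docker-elk bridge network.
docker network inspect -f '{{range .IPAM.Config}}{{.Subnet}}{{end}}' docker-elk_elk
```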
    Mohammed Gaber
    @mgabs
    Since I had to dig deeper to get it to work, I wanted to share that it's not really firewalld (which is only a management interface).
    The underlying issue is iptables vs. nftables, and the fact that Docker doesn't have full support for nftables.
    timiil
    @timiil
    A question, please: we store 'Checkin' documents in an ES index, and we want to compute a lot of metrics about these check-ins:
    per tenant, per entity, per month/week/day/hour/last minute/last hour. The problem is:
    which Elasticsearch feature should we use for this, or should we just use Prometheus for this task?
    soumen0
    @soumen0
    I am unable to build a Docker image for Logstash; it is actually failing on the lines
    ADD config/ /usr/share/logstash/config/ and
    ADD pipeline/ /usr/share/logstash/pipeline/
    although both logstash.conf and logstash.yml exist in the same directory as the Dockerfile.
    I am trying to customize the Logstash 7.9.3 Docker image.
    Antoine Cotten
    @antoineco
    @soumen0 are you using Compose on macOS or Windows, by any chance?
    Also could you please elaborate on "it is actually failing in the lines". What error are you seeing?
    neethujacobmec
    @neethujacobmec

    Hi everyone, I am using the Kinesis agent to stream application logs and a Lambda function to separate the logs into different indices. However, I am seeing the below error intermittently.

    { "error": { "root_cause": [ { "type": "remote_transport_exception", "reason": "[2f4424726309a9300ffdd0939e5cca77][x.x.x.x:9300][indices:data/write/bulk[s]]" } ], "type": "es_rejected_execution_exception", "reason": "rejected execution of processing of [195195722][indices:data/write/bulk[s][p]]: request: BulkShardRequest [[misc-logs-2020-11-16][0]] containing [index {[misc-logs-2020-11-16][_doc][J9_q0HUBq-O-de1zDKJd], source[{\"message\":\"\\"End of file or no input: Operation interrupted or timed out (60 s recv delay) (60 s send delay)\\"\"}]}], target allocation id: yx7QdcoSRkecIXzgqLLeLg, primary term: 1 on EsThreadPoolExecutor[name = 2f4424726309a9300ffdd0939e5cca77/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@5205cb35[Running, pool size = 2, active threads = 2, queued tasks = 200, completed tasks = 98568339]]" }, "status": 429 }

    I am using multiline option set in agent.json to read multiline log statements together.

    The AWS Kinesis Agent offers settings like the number of nodes and shards. However, I'm unsure where the queue size can be increased, as elasticsearch.yml doesn't seem to be accessible. Any help would be appreciated.

    Antoine Cotten
    @antoineco
    @neethujacobmec you can adjust the size of the write thread pool as described here: https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-threadpool.html
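    For docker-elk specifically, that setting would go into the Elasticsearch config file that is bind-mounted into the container (a sketch; the path assumes the repository layout, and 500 is an arbitrary example value, not a recommendation):

```shell
# Append the setting to the node configuration and restart Elasticsearch.
# Note: a growing write queue usually means indexing is outpacing the
# cluster; raising the queue size only buys headroom, it is not a fix.
echo 'thread_pool.write.queue_size: 500' >> elasticsearch/config/elasticsearch.yml
docker-compose restart elasticsearch
```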
    Alan M Kristensen
    @Big-al
    Hi, in the TLS setup you write: "You will be prompted to enter an optional passphrase to protect both the CA and Elasticsearch keys. Please be aware that the passphrase you enter here, if not empty, will have to be manually entered on every restart of Elasticsearch."
    Where do I update this password? My Elasticsearch node is failing on startup with a wrong certificate password.
    Also, let me just add: this is truly awesome work. Makes it so much easier to get going!
    Antoine Cotten
    @antoineco
    @Big-al glad this was useful!
    According to this page, the passphrase needs to be added to Elasticsearch's keystore (see step 3): https://www.elastic.co/guide/en/elasticsearch/reference/current/configuring-tls.html#tls-transport
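    A sketch of what step 3 amounts to in this setup (assuming a PEM-encoded key on the transport layer, per the linked page; run inside the container):

```shell
# Add the private key passphrase to the Elasticsearch keystore so the node
# can decrypt its transport key on startup without manual input.
docker-compose exec elasticsearch \
  bin/elasticsearch-keystore add xpack.security.transport.ssl.secure_key_passphrase
```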
    I'm glad you raised this, I need to update the doc with some more accurate info.
    Antoine Cotten
    @antoineco
    README updated on the tls branch.
    Davide Pugliese
    @Deviad
    Hello
    I am trying to use fscrawler on docker-elk, but I get an error unfortunately.
    https://discuss.elastic.co/t/failed-to-create-elasticsearch-client/256243
    Alan M Kristensen
    @Big-al
    Thanks for taking quick action, Antoine! I'm actually going to change and rebuild my production image for a large client based on some of your changes here. It's great. This makes it a lot easier for new developers to adopt Elastic. Huge fan.
    Antoine Cotten
    @antoineco
    @Big-al my pleasure, always happy to hear the project is useful to people!
    dobixu
    @dobixu
    Hello guys. I updated docker-compose.yml, but Elasticsearch's data path didn't change.
    I want to change the Elasticsearch data storage path.