    Mohammed Gaber
    @mgabs
    I'm running Arch, and I'm starting to doubt that now as well
    not running SELinux, only nftables via firewalld
    Antoine Cotten
    @antoineco
    @mgabs then it's very likely that you're running into exactly that issue: https://github.com/deviantony/docker-elk/issues/541#issuecomment-707572713
    Firewalld doesn't play nice at all with Docker (Compose especially) and requires some extra configuration to avoid interfering with Docker-generated rules.
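    For reference, a common way to keep firewalld from clobbering Docker's rules is to put the Docker bridge interfaces in the trusted zone; a minimal sketch, assuming the default docker0 bridge (Compose networks create additional br-<id> bridges whose names vary per host):

    $ sudo firewall-cmd --permanent --zone=trusted --add-interface=docker0
    # repeat --add-interface for the Compose bridge (br-<network id>) if needed
    $ sudo firewall-cmd --reload
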
    Mohammed Gaber
    @mgabs
    I confirm it's an internet access / networking problem
    tried the fix you linked, no joy
    Mohammed Gaber
    @mgabs
    I got the stack running after disabling nftables, thanks
    Antoine Cotten
    @antoineco
    The "fix" depends on the IP address allocated to the docker-elk bridge, so I wouldn't recommend copying those commands.
    Glad you managed to isolate the problem though :+1: I hope you find a way to configure firewalld in a suitable way.
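    To see which subnet the docker-elk bridge actually got, something like the following helps (the network name docker-elk_elk is an assumption based on the default Compose project name and the elk network defined in docker-compose.yml):

    $ docker network inspect docker-elk_elk --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
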
    Mohammed Gaber
    @mgabs
    Since I had to dig deeper to get it working, I wanted to share that it's not really firewalld (which is only a management interface)
    The underlying issue is iptables vs. nftables, and the fact that Docker doesn't have full support for nftables
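    One workaround sketched here (not specific to docker-elk, and assuming firewalld is using its nftables backend) is to switch firewalld back to the iptables backend so its rules and Docker's end up in the same place:

    # set FirewallBackend=iptables in /etc/firewalld/firewalld.conf, then restart both daemons
    $ sudo sed -i 's/^FirewallBackend=.*/FirewallBackend=iptables/' /etc/firewalld/firewalld.conf
    $ sudo systemctl restart firewalld docker
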
    timiil
    @timiil
    I have a question, please.
    We store 'Checkin' documents in an ES index, and we want to count a lot of value metrics about these check-ins:
    per Tenant, per Entity, per Month/Week/Date/Hour/LastMinute/LastHour... The problem is:
    which Elasticsearch feature should we use for this, or should I just use Prometheus for this task?
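    For context, counting per tenant, per entity and per time bucket is what Elasticsearch aggregations are typically used for; a minimal sketch of such a query (the checkins index and the field names are hypothetical):

    $ curl -s -H 'Content-Type: application/json' 'http://localhost:9200/checkins/_search?size=0' -d '
    {
      "aggs": {
        "per_tenant": {
          "terms": { "field": "tenant_id" },
          "aggs": {
            "per_hour": {
              "date_histogram": { "field": "timestamp", "calendar_interval": "hour" }
            }
          }
        }
      }
    }'
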
    soumen0
    @soumen0
    I am unable to build a Docker image for Logstash; it is failing on the lines
    ADD config/ /usr/share/logstash/config/ and
    ADD pipeline/ /usr/share/logstash/pipeline/
    although both logstash.conf and logstash.yml exist in the same directory as the Dockerfile.
    I am trying to customize the Logstash 7.9.3 Docker image.
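    For comparison, docker-elk builds the Logstash image with the logstash/ directory as the build context, so ADD paths resolve relative to that directory; a sketch of the expected layout and build command, assuming the stock repository layout:

    $ ls logstash/
    Dockerfile  config/  pipeline/
    $ docker-compose build logstash   # build context is ./logstash, so ADD config/ ... resolves there
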
    Antoine Cotten
    @antoineco
    @soumen0 are you using Compose on macOS or Windows, by any chance?
    Also could you please elaborate on "it is actually failing in the lines". What error are you seeing?
    neethujacobmec
    @neethujacobmec

    Hi everyone, I am using the Kinesis Agent to stream application logs and a Lambda function to split the logs into separate indices. However, I am seeing the below error intermittently.

    { "error": { "root_cause": [ { "type": "remote_transport_exception", "reason": "[2f4424726309a9300ffdd0939e5cca77][x.x.x.x:9300][indices:data/write/bulk[s]]" } ], "type": "es_rejected_execution_exception", "reason": "rejected execution of processing of [195195722][indices:data/write/bulk[s][p]]: request: BulkShardRequest [[misc-logs-2020-11-16][0]] containing [index {[misc-logs-2020-11-16][_doc][J9_q0HUBq-O-de1zDKJd], source[{\"message\":\"\\"End of file or no input: Operation interrupted or timed out (60 s recv delay) (60 s send delay)\\"\"}]}], target allocation id: yx7QdcoSRkecIXzgqLLeLg, primary term: 1 on EsThreadPoolExecutor[name = 2f4424726309a9300ffdd0939e5cca77/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@5205cb35[Running, pool size = 2, active threads = 2, queued tasks = 200, completed tasks = 98568339]]" }, "status": 429 }

    I am using multiline option set in agent.json to read multiline log statements together.

    The AWS Kinesis Agent offers settings like the number of nodes and shards; however, I'm unsure where the queue size can be increased, as elasticsearch.yml doesn't seem to be accessible. Any help would be appreciated.

    Antoine Cotten
    @antoineco
    @neethujacobmec you can adjust the size of the write thread pool as described here: https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-threadpool.html
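    Concretely, the rejections can be watched with the _cat API, and on a self-managed node the queue length is the static thread_pool.write.queue_size setting in elasticsearch.yml (a sketch; a bigger queue only buys headroom, it doesn't add indexing capacity):

    $ curl -s 'http://localhost:9200/_cat/thread_pool/write?v&h=node_name,active,queue,rejected'
    # elasticsearch.yml (static setting, requires a restart):
    #   thread_pool.write.queue_size: 500
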
    Alan M Kristensen
    @Big-al
    Hi, in the TLS setup you write "You will be prompted to enter an optional passphrase to protect both the CA and Elasticsearch keys. Please be aware that the passphrase you enter here, if not empty, will have to be manually entered on every restart of Elasticsearch."
    Where do I update this password? My Elasticsearch node is failing on startup with a wrong certificate password.
    Also, let me just add: this is truly awesome work. Makes it so much easier to get going!
    Antoine Cotten
    @antoineco
    @Big-al glad this was useful!
    According to this page, the passphrase needs to be added to Elasticsearch's keystore (see step 3): https://www.elastic.co/guide/en/elasticsearch/reference/current/configuring-tls.html#tls-transport
    I'm glad you raised this, I need to update the doc with some more accurate info.
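    The commands from that page boil down to something like this, sketched here assuming PKCS#12 keystores (for PEM keys the setting is xpack.security.transport.ssl.secure_key_passphrase instead); note the keystore file must live on a persisted path to survive container re-creation:

    $ docker-compose exec elasticsearch bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
    $ docker-compose exec elasticsearch bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password
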
    Antoine Cotten
    @antoineco
    README updated on the tls branch.
    Davide Pugliese
    @Deviad
    Hello
    I am trying to use fscrawler on docker-elk, but I get an error unfortunately.
    https://discuss.elastic.co/t/failed-to-create-elasticsearch-client/256243
    Alan M Kristensen
    @Big-al
    Thanks for taking quick action Antoine! I'm actually going to change and rebuild my production image for a large client based on some of your changes here. It's great. This makes it a lot easier for new developers to adopt Elastic. Huge fan.
    Antoine Cotten
    @antoineco
    @Big-al my pleasure, always happy to hear the project is useful to people!
    dobixu
    @dobixu
    Hello guys. I updated docker-compose.yml, but Elasticsearch's data path hasn't changed.
    I want to change the Elasticsearch data storage path.
    dobixu
    @dobixu
    deviantony/docker-elk#430
    I referred to this document and rebuilt docker-compose.yml. It returns this error:
    Named volume "{'type': 'volume', 'source': '/data/docker-elk/elasticsearch/data', 'target': '/usr/share/elasticsearch/data'}" is used in service "elasticsearch" but no declaration was found in the volumes section.
    Antoine Cotten
    @antoineco
    @dobixu in that case you have to use a bind mount, not a named volume. See how config files are mounted.
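    A bind mount for the data directory would look roughly like this (a sketch; the host path is the one from the error message above):

    # In docker-compose.yml, under the elasticsearch service, replace the named volume mount with:
    #     - /data/docker-elk/elasticsearch/data:/usr/share/elasticsearch/data
    # and delete the now-unused "elasticsearch" entry from the top-level volumes: section.
    $ docker-compose config            # validates the file and shows the resolved mounts
    $ docker-compose up -d elasticsearch
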
    dobixu
    @dobixu
    And I ran chmod 777 ./elasticsearch. @antoineco Thank you!!! It worked.
    Antoine Cotten
    @antoineco
    :+1:
    Stephane
    @steevivo
    hi all, when I open Kibana at ip:5601 I get the message "Kibana server is not ready yet". What did I forget?
    Antoine Cotten
    @antoineco
    @steevivo Can you please check the logs of Kibana and Elasticsearch?
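    For example (both services at once, most recent lines only):

    $ docker-compose logs --tail=100 elasticsearch kibana
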
    Stephane
    @steevivo
    @antoineco After removing X-Pack security it's OK, but I don't really know why... The Elasticsearch log shows connection refused on the elasticsearch container. I'm using version 7.9.3
    Antoine Cotten
    @antoineco
    Interesting, did you initialize your passwords at some point, then decide to start over without wiping Elasticsearch's data volume? Removing containers is not enough, ES' data is persisted in a volume.
    Stephane
    @steevivo
    @antoineco "did you initialize your passwords at some point?" No, I just ran docker-compose up -d after the build command
    Antoine Cotten
    @antoineco

    Let's try this:

    $ docker-compose down -v
    $ docker-compose up

    (and of course undo the change you applied to the config to disable X-Pack, etc.)

    Stephane
    @steevivo
    @antoineco how do I activate the Kibana login? With X-Pack?
    Antoine Cotten
    @antoineco
    Yes, but that's on by default if you just clone the repo and run Compose.
    Stephane
    @steevivo
    @antoineco hi, there's no login page after setting xpack.security.enabled: true. I've run elasticsearch-setup-passwords auto --batch and set up the yml files, but nothing happens, very strange. HTTPS only?
    Antoine Cotten
    @antoineco
    @steevivo No there is no HTTPS on the master branch, only on the tls branch, and only to secure connections between Elastic components and Elasticsearch (not in the browser, because a browser would reject a self-signed certificate).
    I don't know what's happening to you but it's definitely not the default behavior. The stack has automated tests so we do verify regularly that everything works together. I'd suggest opening a GitHub issue with all the requested information so we can reproduce the problem.
    Checking what has been manually changed with git diff should be a good start.
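    A reasonable reset sequence, assuming nothing in the working tree needs to be kept:

    $ git diff                   # review local changes against the repo defaults
    $ git checkout -- .          # optionally discard them
    $ docker-compose down -v     # also removes the Elasticsearch data volume (and any initialized passwords)
    $ docker-compose up -d --build
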
    Alan M Kristensen
    @Big-al

    So I ran into an issue when running the TLS branch. I followed it, and everything is up and running. I'm able to send logs via Postman to Elasticsearch. I'm also able to send logs via APM from a random test .NET API.
    The issue may be due to my limited knowledge of certificates, but when I try to connect to Elasticsearch via, for example, a Metricbeat configuration, I get the following error:
    "cannot validate certificate for some.ip.address because it doesn't contain any IP SANs"

    I get a similar error for Logstash or with Curl.

    My assumption here is that this has to do with it being a self-signed cert, but I'm not sure.

    Questions:

    1. What causes this, and what did I likely do wrong?
    2. Wouldn't it be equally secure to have nginx with certbot limiting incoming traffic to HTTPS using, for example, a Let's Encrypt certificate? I understand internal comms to Elastic wouldn't be secured then, but that is not an issue when the only thing running on these servers is Elastic.
    Antoine Cotten
    @antoineco
    @Big-al this is most likely because the certificates you generated do not include the host name or IP address that your external clients (beat, curl) use to connect to Elasticsearch. TLS is pretty strict regarding SANs, and clients typically refuse to establish a connection to a server which doesn't advertise the expected identity.
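    If the certificates are generated with elasticsearch-certutil, the host names and IP addresses that external clients use can be baked in as SANs at generation time; a sketch (the file names, instance name and address are placeholders):

    $ bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 \
        --name elasticsearch --dns elasticsearch,localhost --ip 203.0.113.10 \
        --out elastic-certificates.p12
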