    soumen0
    @soumen0
    I am unable to create a Docker image for Logstash; it is actually failing on the lines
    ADD config/ /usr/share/logstash/config/ and
    ADD pipeline/ /usr/share/logstash/pipeline/
    although both logstash.conf and logstash.yml exist in the same directory as the Dockerfile.
    I am trying to customize the Logstash 7.9.3 Docker image.
    Antoine Cotten
    @antoineco
    @soumen0 are you using Compose on macOS or Windows, by any chance?
    Also could you please elaborate on "it is actually failing in the lines". What error are you seeing?
    neethujacobmec
    @neethujacobmec

    Hi everyone, I am using the Kinesis agent to stream application logs and a Lambda function to separate the logs based on some indices. However, I am seeing the below error intermittently.

    {
      "error": {
        "root_cause": [
          {
            "type": "remote_transport_exception",
            "reason": "[2f4424726309a9300ffdd0939e5cca77][x.x.x.x:9300][indices:data/write/bulk[s]]"
          }
        ],
        "type": "es_rejected_execution_exception",
        "reason": "rejected execution of processing of [195195722][indices:data/write/bulk[s][p]]: request: BulkShardRequest [[misc-logs-2020-11-16][0]] containing [index {[misc-logs-2020-11-16][_doc][J9_q0HUBq-O-de1zDKJd], source[{\"message\":\"\\"End of file or no input: Operation interrupted or timed out (60 s recv delay) (60 s send delay)\\"\"}]}], target allocation id: yx7QdcoSRkecIXzgqLLeLg, primary term: 1 on EsThreadPoolExecutor[name = 2f4424726309a9300ffdd0939e5cca77/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@5205cb35[Running, pool size = 2, active threads = 2, queued tasks = 200, completed tasks = 98568339]]"
      },
      "status": 429
    }

    I am using multiline option set in agent.json to read multiline log statements together.

    AWS Kinesis Agent offers settings like the number of nodes and shards. However, I'm unsure where the queue size can be increased, as elasticsearch.yml doesn't seem to be accessible. Any help would be appreciated.

    Antoine Cotten
    @antoineco
    @neethujacobmec you can adjust the size of the write thread pool as described here: https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-threadpool.html
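    For reference, that knob lives in elasticsearch.yml; a sketch (the value 500 is an arbitrary example, and on a managed service where elasticsearch.yml isn't exposed this won't be an option):

    ```yaml
    # elasticsearch.yml -- static setting, takes effect after a node restart.
    # The error above shows "queue capacity = 200"; raising it only buys
    # headroom, persistent 429s usually mean the bulk requests need tuning
    # or the cluster needs more write capacity.
    thread_pool:
      write:
        queue_size: 500
    ```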
    Alan M Kristensen
    @Big-al
    Hi, in the TLS setup you write "You will be prompted to enter an optional passphrase to protect both the CA and Elasticsearch keys. Please be aware that the passphrase you enter here, if not empty, will have to be manually entered on every restart of Elasticsearch."
    Where do I update this password? My Elasticsearch node is failing on startup with a wrong certificate password.
    Also, let me just add: this is truly awesome work. Makes it so much easier to get going!
    Antoine Cotten
    @antoineco
    @Big-al glad this was useful!
    According to this page, the passphrase needs to be added to Elasticsearch's keystore (see step 3): https://www.elastic.co/guide/en/elasticsearch/reference/current/configuring-tls.html#tls-transport
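    The keystore step from that page boils down to two commands; a sketch assuming the PKCS#12 keystore/truststore layout from the linked guide (the setting names differ if you use PEM files):

    ```shell
    # Run inside the Elasticsearch installation (or container), then restart.
    # Each command prompts for the passphrase and stores it in the secure
    # keystore so it no longer has to be typed on every restart.
    bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
    bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password
    ```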
    I'm glad you raised this, I need to update the doc with some more accurate info.
    Antoine Cotten
    @antoineco
    README updated on the tls branch.
    Davide Pugliese
    @Deviad
    Hello
    I am trying to use fscrawler on docker-elk, but I get an error unfortunately.
    https://discuss.elastic.co/t/failed-to-create-elasticsearch-client/256243
    Alan M Kristensen
    @Big-al
    Thanks for taking quick action, Antoine! I'm actually going to change and rebuild my production image for a large client based on some of your changes here. It's great. This makes it a lot easier for new developers to adopt Elastic. Huge fan.
    Antoine Cotten
    @antoineco
    @Big-al my pleasure, always happy to hear the project is useful to people!
    dobixu
    @dobixu
    Hello guys. I updated docker-compose.yml, but Elasticsearch's data path didn't change.
    I want to change the Elasticsearch data storage path.
    dobixu
    @dobixu
    deviantony/docker-elk#430
    I referred to this issue and rebuilt docker-compose.yml, but Compose returns this error:
    Named volume "{'type': 'volume', 'source': '/data/docker-elk/elasticsearch/data', 'target': '/usr/share/elasticsearch/data'}" is used in service "elasticsearch" but no declaration was found in the volumes section.
    Antoine Cotten
    @antoineco
    @dobixu in that case you have to use a bind mount, not a named volume. See how config files are mounted.
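    Concretely, swapping the named volume for a bind mount in docker-compose.yml looks something like this (a sketch; the host path is the one from the error message above, and the config-file line mirrors how docker-elk mounts its config files):

    ```yaml
    # docker-compose.yml (excerpt) -- bind mounts map host paths directly,
    # so nothing needs to be declared in the top-level `volumes:` section.
    services:
      elasticsearch:
        volumes:
          - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
          - /data/docker-elk/elasticsearch/data:/usr/share/elasticsearch/data
    ```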
    dobixu
    @dobixu
    And chmod 777 ./elasticsearch. @antoineco Thank you!!! It worked!
    Antoine Cotten
    @antoineco
    :+1:
    Stephane
    @steevivo
    hi all, when I open Kibana at ip:5601 I get the message "Kibana server is not ready yet". What did I forget?
    Antoine Cotten
    @antoineco
    @steevivo Can you please check the logs of Kibana and Elasticsearch?
    Stephane
    @steevivo
    @antoineco After removing X-Pack security it's OK, but I don't really know why... The Elasticsearch log shows "connection refused" on the Elasticsearch Docker container. I'm using version 7.9.3.
    Antoine Cotten
    @antoineco
    Interesting, did you initialize your passwords at some point, then decide to start over without wiping Elasticsearch's data volume? Removing containers is not enough, ES' data is persisted in a volume.
    Stephane
    @steevivo
    @antoineco "did you initialize your passwords at some point?" No, I ran docker-compose up -d after the build command.
    Antoine Cotten
    @antoineco

    Let's try this:

    $ docker-compose down -v
    $ docker-compose up

    (and of course undo the change you applied to the config to disable X-Pack, etc.)

    Stephane
    @steevivo
    @antoineco how do I activate the Kibana login? With X-Pack?
    Antoine Cotten
    @antoineco
    Yes, but that's on by default if you just clone the repo and run Compose.
    Stephane
    @steevivo
    @antoineco hi, there is no login page after setting xpack.security.enabled: true. I ran elasticsearch-setup-passwords auto --batch and set up the yml files, but nothing happened, very strange. HTTPS only?
    Antoine Cotten
    @antoineco
    @steevivo No there is no HTTPS on the master branch, only on the tls branch, and only to secure connections between Elastic components and Elasticsearch (not in the browser, because a browser would reject a self-signed certificate).
    I don't know what's happening to you but it's definitely not the default behavior. The stack has automated tests so we do verify regularly that everything works together. I'd suggest opening a GitHub issue with all the requested information so we can reproduce the problem.
    Checking what has been manually changed with git diff should be a good start.
    Alan M Kristensen
    @Big-al

    So I ran into an issue when running the TLS branch. I followed it, and everything is up and running. I'm able to send logs via Postman to Elasticsearch. I'm also able to send logs via APM on a random test .NET API.
    The issue may be due to my limited knowledge of certificates, but when I try to connect to Elasticsearch via, for example, a Metricbeat configuration, I get the following error:
    "cannot validate certificate for some.ip.address because it doesn't contain any IP SANs"

    I get a similar error for Logstash or with curl.

    My assumption here is that this has to do with it being a self-signed cert, but I'm not sure.

    Questions:

    1. What causes this, and what did I likely do wrong?
    2. Wouldn't it be equally secure to have nginx with certbot limiting incoming traffic to HTTPS using, for example, a Let's Encrypt certificate? I understand internal comms to Elastic wouldn't be secured then, but that is not an issue when the only thing running on these servers is Elastic.
    Antoine Cotten
    @antoineco
    @Big-al this is most likely because the certificates you generated do not include the host name or IP address that your external clients (beat, curl) use to connect to Elasticsearch. TLS is pretty strict regarding SANs, and clients typically refuse to establish a connection to a server which doesn't advertise the expected identity.
    You could either include the correct host name / IP at the prompt when you generate ES' certificates, or disable host name verification in your clients (curl has -k; for Beats I'm not sure).
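    To sanity-check which SANs a certificate actually advertises, something like this works; the names in -addext are hypothetical stand-ins for whatever your clients use to reach Elasticsearch:

    ```shell
    # Generate a throwaway self-signed cert with explicit SANs
    # (hypothetical names -- substitute your real host names / IPs),
    # then print the SANs it advertises. Requires OpenSSL 1.1.1+ for -addext.
    openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
      -keyout /tmp/es.key -out /tmp/es.crt \
      -subj "/CN=elasticsearch" \
      -addext "subjectAltName=DNS:elasticsearch,DNS:es01.mydomain.local,IP:203.0.113.10"

    openssl x509 -in /tmp/es.crt -noout -ext subjectAltName
    ```

    The same `openssl x509` inspection works against the certificate the cert tool produced, which makes it easy to see whether the host name your client complains about is missing.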
    Alan M Kristensen
    @Big-al
    So if I'm reading your message correctly:
    When creating the certificate using the cert tool, at the place I marked "HERE" I should add the hosts of all my nodes? Do I then have to regenerate the cert every time I add a new agent, for example?

    Generate a CSR? [y/N] n
    Use an existing CA? [y/N] y
    CA Path: /usr/share/elasticsearch/tls/ca/ca.p12
    Password for ca.p12: <none>
    For how long should your certificate be valid? [5y] 10y
    Generate a certificate per node? [y/N] n
    (Enter all the hostnames that you need, one per line.)
    elasticsearch <- HERE
    localhost
    Is this correct [Y/n] y
    (Enter all the IP addresses that you need, one per line.)

    <none>
    Is this correct [Y/n] y
    Do you wish to change any of these options? [y/N] n
    Provide a password for the "http.p12" file: <none>
    What filename should be used for the output zip file? tls/elasticsearch-ssl-http.zip

    Antoine Cotten
    @antoineco
    Not the IPs of the agents (clients), only the ones of the servers. For example, if your clients access Elasticsearch on es01.mydomain.local, you have to include that name in the certificate.
    Server hostnames aren't supposed to change often.
    The default elasticsearch name is an internal name that's only resolvable from within the docker-elk local container network (e.g. by Logstash and Kibana).
    Alan M Kristensen
    @Big-al

    Ahhh, that makes a lot more sense. So for each of my Elastic nodes I add the node's host name to the list of hosts on the certificates I generate on that machine?
    I was also wondering how the CAs got resolved at a higher network level.

    You really should set up a "Buy me a coffee" for all your hard work. A lot of the specifics on stuff like encryption, for example, have until recently been fairly poorly documented. These forums really help!

    Alan M Kristensen
    @Big-al

    Alright!
    For anyone else on the steep learning curve of Elastic and security, I had forgotten 2 key parts.

    1. As Antoine accurately pointed out, I had forgotten to include the host names of my Elastic instances in the certificate. In my case I added a DNS record, i.e. "host01.somedomain.com", to the list of hosts while running the certificate generator tool listed above, in the step "Enter all the hostnames that you need".

    2. I had forgotten to add the certificate .pem file to my Metricbeat Elastic config. Essentially my config ended up along these lines:

    output.elasticsearch:
      # Array of hosts to connect to.
      hosts: ["https://es01.somedomain.com"]

      ssl.certificate_authorities: ["/storage/elk-docker/tls/kibana/elasticsearch-ca.pem"]

      # Protocol - either `http` (default) or `https`.
      protocol: "https"

      # Authentication credentials - either API key or username/password.
      #api_key: "id:api_key"
      username: "elastic"
      password: "for_the_love_of_god_change_me_please" # (Not real)

    Adding these allowed Metricbeat to authenticate against my private-CA-signed certificate. I'm also running an SSL config in front of my Docker cluster, using nginx as a reverse proxy, which had its own set of challenges here.

    Antoine Cotten
    @antoineco
    Looking good!