    Alan M Kristensen
    @Big-al
    Also, let me just add: this is truly awesome work. Makes it so much easier to get going!
    Antoine Cotten
    @antoineco
    @Big-al glad this was useful!
    According to this page, the passphrase needs to be added to Elasticsearch's keystore (see step 3): https://www.elastic.co/guide/en/elasticsearch/reference/current/configuring-tls.html#tls-transport
    I'm glad you raised this, I need to update the doc with some more accurate info.
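    For reference, it looks roughly like this (a sketch; the setting names are the ones from the linked guide, assuming a PKCS#12 keystore on the transport layer):

    # store the keystore/truststore passphrases in Elasticsearch's keystore
    $ bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
    $ bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password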
    Antoine Cotten
    @antoineco
    README updated on the tls branch.
    Davide Pugliese
    @Deviad
    Hello
    I am trying to use fscrawler with docker-elk, but unfortunately I get an error.
    https://discuss.elastic.co/t/failed-to-create-elasticsearch-client/256243
    Alan M Kristensen
    @Big-al
    Thanks for taking quick action, Antoine! I'm actually going to change and rebuild my production image for a large client based on some of your changes here. It's great. This makes it a lot easier for new developers to adopt Elastic. Huge fan.
    Antoine Cotten
    @antoineco
    @Big-al my pleasure, always happy to hear the project is useful to people!
    dobixu
    @dobixu
    Hello guys. I updated docker-compose.yml, but Elasticsearch's data path didn't change.
    I want to change the Elasticsearch data storage path.
    dobixu
    @dobixu
    deviantony/docker-elk#430
    I referred to this issue and rebuilt docker-compose.yml. It returns this error:
    Named volume "{'type': 'volume', 'source': '/data/docker-elk/elasticsearch/data', 'target': '/usr/share/elasticsearch/data'}" is used in service "elasticsearch" but no declaration was found in the volumes section.
    Antoine Cotten
    @antoineco
    @dobixu in that case you have to use a bind mount, not a named volume. See how config files are mounted.
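    A minimal sketch of what that could look like in docker-compose.yml, using the host path from your error message (bind mounts are declared inline on the service and need no entry in the top-level volumes section):

    services:
      elasticsearch:
        volumes:
          # bind mount: host path on the left, container path on the right
          - /data/docker-elk/elasticsearch/data:/usr/share/elasticsearch/data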
    dobixu
    @dobixu
    And chmod 777 ./elasticsearch. @antoineco Thank you!!! It worked!
    Antoine Cotten
    @antoineco
    :+1:
    Stephane
    @steevivo
    Hi all, when I open Kibana at ip:5601 I get the message "Kibana server is not ready yet". What did I forget?
    Antoine Cotten
    @antoineco
    @steevivo Can you please check the logs of Kibana and Elasticsearch?
    Stephane
    @steevivo
    @antoineco After removing X-Pack security it's OK, but I don't really know why... The Elasticsearch log shows "connection refused" on the Elasticsearch container. I'm using version 7.9.3.
    Antoine Cotten
    @antoineco
    Interesting, did you initialize your passwords at some point, then decide to start over without wiping Elasticsearch's data volume? Removing containers is not enough; ES' data is persisted in a volume.
    Stephane
    @steevivo
    @antoineco "Did you initialize your passwords at some point?" No, I started docker-compose up -d right after the build command.
    Antoine Cotten
    @antoineco

    Let's try this:

    $ docker-compose down -v   # -v also removes the volumes, wiping Elasticsearch's persisted data
    $ docker-compose up

    (and of course undo the change you applied to the config to disable X-Pack, etc.)

    Stephane
    @steevivo
    @antoineco How do I activate the Kibana login? With X-Pack?
    Antoine Cotten
    @antoineco
    Yes, but that's on by default if you just clone the repo and run Compose.
    Stephane
    @steevivo
    @antoineco Hi, no login page after setting xpack.security.enabled: true. I've launched elasticsearch-setup-passwords auto --batch and set up the yml files, but nothing happened. Very strange. HTTPS only?
    Antoine Cotten
    @antoineco
    @steevivo No there is no HTTPS on the master branch, only on the tls branch, and only to secure connections between Elastic components and Elasticsearch (not in the browser, because a browser would reject a self-signed certificate).
    I don't know what's happening to you but it's definitely not the default behavior. The stack has automated tests so we do verify regularly that everything works together. I'd suggest opening a GitHub issue with all the requested information so we can reproduce the problem.
    Checking what has been manually changed with git diff should be a good start.
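    For example:

    $ git status   # which tracked files were modified
    $ git diff     # what exactly changed in them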
    Alan M Kristensen
    @Big-al

    So I ran into an issue when running the TLS branch. I followed it, and everything is up and running. I'm able to send logs via Postman to Elasticsearch. I'm also able to send logs via APM from a random test .NET API.
    The issue may be due to my limited knowledge of certificates, but when I try to connect to Elasticsearch via, for example, a Metricbeat configuration, I get the following error:
    "cannot validate certificate for some.ip.address because it doesn't contain any IP SANs"

    I get a similar error with Logstash or curl.

    My assumption is that this has to do with it being a self-signed cert, but I'm not sure.

    Questions:

    1. What causes this, and what did I likely do wrong?
    2. Wouldn't it be equally secure to have nginx with certbot limiting incoming traffic to HTTPS using, for example, a Let's Encrypt certificate? I understand internal comms to Elastic wouldn't be secured then, but that is not an issue when the only thing running on these servers is Elastic.
    Antoine Cotten
    @antoineco
    @Big-al this is most likely because the certificates you generated do not include the host name or IP address that your external clients (beat, curl) use to connect to Elasticsearch. TLS is pretty strict regarding SANs, and clients typically refuse to establish a connection to a server which doesn't advertise the expected identity.
    You could either include the correct host name / IP at the prompt when you generate ES' certificates, or disable host name verification in your clients (curl has -k, for Beat I'm not sure).
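    For example (a sketch; the host name, IP, and paths are placeholders, and certutil's non-interactive cert mode is an alternative to the interactive prompts from the README):

    # testing only: skip certificate verification entirely
    $ curl -k https://some.ip.address:9200

    # or regenerate the server certificate with the right SANs
    $ bin/elasticsearch-certutil cert --ca tls/ca/ca.p12 --dns es01.mydomain.local --ip 203.0.113.10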
    Alan M Kristensen
    @Big-al
    So if I'm reading your message correctly: when creating a certificate using the cert tool, at the place I marked “HERE” I should add the hosts of all my nodes?
    Do I then have to regenerate the cert every time I add a new agent, for example?

    Generate a CSR? [y/N] n
    Use an existing CA? [y/N] y
    CA Path: /usr/share/elasticsearch/tls/ca/ca.p12
    Password for ca.p12: <none>
    For how long should your certificate be valid? [5y] 10y
    Generate a certificate per node? [y/N] n
    (Enter all the hostnames that you need, one per line.)
    elasticsearch <- HERE
    localhost
    Is this correct [Y/n] y
    (Enter all the IP addresses that you need, one per line.)

    <none>
    Is this correct [Y/n] y
    Do you wish to change any of these options? [y/N] n
    Provide a password for the "http.p12" file: <none>
    What filename should be used for the output zip file? tls/elasticsearch-ssl-http.zip

    Antoine Cotten
    @antoineco
    Not the IPs of the agents (clients), only the ones of the servers. For example, if your clients access Elasticsearch on es01.mydomain.local, you have to include that name in the certificate.
    Server hostnames aren't supposed to change often.
    The default elasticsearch name is an internal name that's only resolvable from within the docker-elk local container network (e.g. by Logstash and Kibana).
    Alan M Kristensen
    @Big-al

    Ahhh, that makes a lot more sense. So for each of my Elastic nodes I add the node's host to the CA's list of hosts on the certificates I generate on that machine?
    I was also wondering how the CAs get resolved at a higher network level.

    You really should set up a "Buy me a coffee" for all your hard work. A lot of the specifics on stuff like encryption, for example, have until recently been fairly poorly documented. These forums really help!

    Alan M Kristensen
    @Big-al

    Alright!
    For anyone else on the steep learning curve of Elastic and security, I had forgotten two key parts.

    1. As Antoine accurately pointed out, I had forgotten to sign my CA with the hostnames of my Elastic instances. In my case I added a DNS record, i.e. "host01.somedomain.com", to the list of hosts while running the certificate generator tool listed above, in the step "Enter all the hostnames that you need".

    2. I had forgotten to add the certificate .pem file to my Metricbeat Elasticsearch config. Essentially my config ended up along these lines:

      output.elasticsearch:
        # Array of hosts to connect to.
        hosts: ["https://es01.somedomain.com"]

        # CA certificate used to verify Elasticsearch's server certificate.
        ssl.certificate_authorities: ["/storage/elk-docker/tls/kibana/elasticsearch-ca.pem"]

        # Protocol - either `http` (default) or `https`.
        protocol: "https"

        # Authentication credentials - either API key or username/password.
        #api_key: "id:api_key"
        username: "elastic"
        password: "for_the_love_of_god_change_me_please" # (not real)

    Adding these allowed Metricbeat to validate my private-CA-signed certificate. I'm running an SSL setup in front of my Docker cluster as well, using nginx as a reverse proxy, which had its own set of challenges.

    Antoine Cotten
    @antoineco
    Looking good!
    Also please note it's not necessary to regenerate the Certificate Authority in order to generate a new server certificate for ES. If you already have a CA key/cert pair, you can skip directly to the second step of the README.
    Antoine Cotten
    @antoineco
    Just for my own understanding, since you said you have an NGINX instance between your clients and Elasticsearch: are you using Elasticsearch's CA + server certs on NGINX and terminating the TLS connection there? Or are you just doing a passthrough on NGINX and terminating TLS on Elasticsearch directly?
    Alan M Kristensen
    @Big-al

    Also please note it's not necessary to regenerate the Certificate Authority in order to generate a new server certificate for ES. If you already have a CA key/cert pair, you can skip directly to the second step of the README.

    Yes, however I had not included the proper hostnames in the CA, so I had to redo the CA to include the new host for proper resolution.

    Alan M Kristensen
    @Big-al

    Just for my own understanding, since you said you have an NGINX instance between your clients and Elasticsearch: are you using Elasticsearch's CA + server certs on NGINX and terminating the TLS connection there? Or are you just doing a passthrough on NGINX and terminating TLS on Elasticsearch directly?

    Proxy: I use nginx on my host machine for all DNS resolution. For example, I have mapped my Elasticsearch to a DNS-resolvable address on my domain, i.e. 127.0.0.1:9200 to es01.somedomain.com. The sketch after this paragraph shows roughly what that mapping looks like.
    The reason is twofold: I like having all my configuration for (public) port mapping and resolution in one place, and I can now just move the pointer to a different cluster should I decide to move my server in the future.
    I know that I could do this with an nginx container as well, but I like keeping my web server separate from Docker. Makes configuration easier, in my workflow at least :-)
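    In nginx terms it's roughly this (a sketch; the certificate paths assume certbot defaults, and nginx re-terminates TLS before proxying to Elasticsearch on loopback):

    server {
        listen 443 ssl;
        server_name es01.somedomain.com;

        # certbot-managed certificate (illustrative paths)
        ssl_certificate     /etc/letsencrypt/live/es01.somedomain.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/es01.somedomain.com/privkey.pem;

        location / {
            # Elasticsearch published on the host's loopback interface
            proxy_pass https://127.0.0.1:9200;
            proxy_set_header Host $host;
        }
    }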

    Antoine Cotten
    @antoineco
    @Big-al there is no hostname in a CA, only in an HTTP server certificate (the one used by Elasticsearch). If you're talking about the CA PEM certificate in the second section of the README, it is simply a conversion from P12 to PEM; there is no hostname added to that PEM certificate.
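    For illustration, the conversion is just something like this (a sketch, assuming the CA pair lives in ca.p12 as in the README):

    # extract the certificate (no private key) from the PKCS#12 bundle
    $ openssl pkcs12 -in ca.p12 -nokeys -out elasticsearch-ca.pem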