`git diff` should be a good start.
So I ran into an issue when running the TLS branch. I followed it, and everything is up and running. I'm able to send logs via Postman to Elasticsearch. I'm also able to send logs via APM from a random test .NET API.
The issue may be due to my limited knowledge of certificates, but when I try to connect to Elasticsearch from, for example, a Metricbeat configuration, I get the following error:
"cannot validate certificate for some.ip.address because it doesn't contain any IP SANs"
I get a similar error from Logstash or with curl.
My assumption is that this has to do with it being a self-signed cert, but I'm not sure.
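That error actually isn't about the cert being self-signed: it means the certificate's Subject Alternative Name (SAN) list contains no IP entry matching the address the client dialed. A quick way to see which SANs a cert carries is `openssl x509 -ext subjectAltName`; the sketch below generates a throwaway cert that does include an IP SAN just so the commands are runnable end to end (with a real cert you'd only run the last command, and the hostname/IP here are examples):

```shell
# Generate a throwaway self-signed cert that carries both a DNS and an IP SAN
# (requires OpenSSL 1.1.1+ for -addext; names/IP are placeholders)
openssl req -x509 -newkey rsa:2048 -keyout /tmp/key.pem -out /tmp/cert.pem \
  -days 1 -nodes -subj "/CN=elasticsearch" \
  -addext "subjectAltName=DNS:elasticsearch,IP:10.0.0.5"

# Inspect the SANs: if no "IP Address:" entry appears here, clients that
# connect by IP fail with exactly the "doesn't contain any IP SANs" error above
openssl x509 -in /tmp/cert.pem -noout -ext subjectAltName
```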
(with curl you can skip verification with `-k`; for Beats I'm not sure).
For Beats, you can set `ssl.verification_mode: certificate`, which validates the certificate chain but skips hostname/IP verification: https://www.elastic.co/guide/en/beats/filebeat/current/configuration-ssl.html
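For illustration, a Beats output section using that mode might look like this (the address and CA path are placeholders, not taken from the original setup):

```yaml
# Hypothetical metricbeat.yml excerpt.
# verification_mode: certificate checks the chain against the CA but does not
# verify the hostname/IP, which sidesteps the missing-IP-SAN error above.
output.elasticsearch:
  hosts: ["https://some.ip.address:9200"]
  ssl.certificate_authorities: ["/etc/metricbeat/certs/elasticsearch-ca.pem"]
  ssl.verification_mode: certificate
```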
Generate a CSR? [y/N] n
Use an existing CA? [y/N] y
CA Path: /usr/share/elasticsearch/tls/ca/ca.p12
Password for ca.p12: <none>
For how long should your certificate be valid? [5y] 10y
Generate a certificate per node? [y/N] n
(Enter all the hostnames that you need, one per line.)
elasticsearch <- HERE
Is this correct [Y/n] y
(Enter all the IP addresses that you need, one per line.)
Is this correct [Y/n] y
Do you wish to change any of these options? [y/N] n
Provide a password for the "http.p12" file: <none>
What filename should be used for the output zip file? tls/elasticsearch-ssl-http.zip
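Once that zip is extracted, clients can validate the connection against the bundled CA instead of skipping verification. A sketch with curl (the hostname, credentials, and paths are examples based on the defaults used above, and assume a reachable cluster):

```shell
# Trust the private CA explicitly instead of passing -k
# (values are placeholders; elasticsearch-ca.pem comes from the generated zip)
curl --cacert tls/kibana/elasticsearch-ca.pem \
  -u elastic:changeme \
  https://elasticsearch:9200
```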
The `elasticsearch` name is an internal name that's only resolvable from within the docker-elk local container network (e.g. by Logstash and Kibana).
Ahhh, that makes a lot more sense. So for each of my Elastic nodes I add the node's host to the list of hostnames on the certificates I generate on that machine?
I was also wondering how the CAs get resolved at a higher network level.
You really should set up a "Buy me a coffee" for all your hard work. A lot of the specifics on stuff like encryption, for example, have until recently been fairly poorly documented. These forums really help!
For anyone else on the steep learning curve of Elastic and security: I had forgotten two key parts.
As Antoine accurately pointed out, I had forgotten to include the hostnames of my Elastic instances in the generated certificates. In my case I added a DNS record, i.e. "host01.somedomain.com", to the list of hosts while running the certificate generation tool shown above, in the step "Enter all the hostnames that you need".
I had also forgotten to add the certificate .pem file to my Metricbeat Elasticsearch config. Essentially my config ended up along these lines:
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["https://es01.somedomain.com"]
  ssl.certificate_authorities: ["/storage/elk-docker/tls/kibana/elasticsearch-ca.pem"]
  # Protocol - either `http` (default) or `https`.
  protocol: "https"
  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  username: "elastic"
  password: "for_the_love_of_god_change_me_please" # (Not real)
Adding these allowed Metricbeat to validate my private-CA-signed certificate. I'm also running TLS in front of my Docker cluster, using nginx as a reverse proxy, which had its own set of challenges here.
Also please note it's not necessary to regenerate the Certificate Authority in order to generate a new server certificate for ES. If you already have a CA key/cert pair, you can skip directly to the second step of the README.
Yes, however I had not included the proper hostnames in the CA, so I had to redo it to include the new host for proper resolution.
Just for my own understanding, since you said you have an NGINX instance between your clients and Elasticsearch: are you using Elasticsearch's CA + server certs on NGINX and terminating the TLS connection there? Or are you just doing a passthrough on NGINX and terminating TLS on Elasticsearch directly?
Proxy - I use nginx on my host machine for all DNS resolution. For example, I have mapped my Elasticsearch instance to a DNS-resolvable address on my domain, i.e. 127.0.0.1:9200 to es01.somedomain.com.
The reason is twofold: I like having all my configuration for (public) port mapping and resolution in one place, and I can now just move the pointer to a different cluster should I decide to move my server in the future.
I know that I could do this with an nginx container as well, but I like keeping my webserver separate from Docker. Makes configuration easier, in my workflow at least :-)
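As a sketch, a host-level reverse proxy like that might look something like the following nginx server block. All paths and names here are hypothetical; the upstream is the Elasticsearch port published on the host loopback:

```nginx
# Hypothetical /etc/nginx/conf.d/es01.conf
server {
    listen 443 ssl;
    server_name es01.somedomain.com;

    # Public-facing certificate for the domain (e.g. from a public CA)
    ssl_certificate     /etc/nginx/certs/es01.somedomain.com.crt;
    ssl_certificate_key /etc/nginx/certs/es01.somedomain.com.key;

    location / {
        # Elasticsearch container published on the host loopback
        proxy_pass https://127.0.0.1:9200;
        # The upstream presents the private-CA cert, so tell nginx to trust
        # that CA and to verify against the name in the cert
        proxy_ssl_trusted_certificate /etc/nginx/certs/elasticsearch-ca.pem;
        proxy_ssl_verify on;
        proxy_ssl_name elasticsearch;
    }
}
```

With this shape, moving to a different cluster later is a one-line change to `proxy_pass`.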
Add `command: ['--config.reload.automatic']` somewhere under the `logstash` service in the Compose file and Logstash will automatically reload your config when it changes.
`env:` in the Compose file). You should end up with `- --config.reload.automatic` if you opt for that syntax (dash space dash dash..., looks weird but it's 100% valid).
`docker_elk` network created by Compose, and then you won't even need to expose port 5601 on your host.
command:
  - --config.reload.automatic
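Put together, the relevant part of the Compose file could look like this (the service is trimmed to the keys discussed; both list syntaxes are equivalent):

```yaml
# Sketch of docker-compose.yml, trimmed to the relevant keys
services:
  logstash:
    # flow-style list:
    command: ['--config.reload.automatic']
    # or the equivalent block-style list:
    # command:
    #   - --config.reload.automatic
```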