    Alan M Kristensen
    @Big-al

    Alright!
    For anyone else on the steep learning curve of Elastic and security, I had forgotten two key parts.

    1. As Antoine accurately pointed out, I had forgotten to sign my CA with the hostnames of my Elastic instances. In my case I added a DNS record, e.g. "host01.somedomain.com", to the list of hosts while running the certificate generator tool listed above, in the step "Enter all hostnames that you need" (see the sketch at the end of this message).

    2. I had forgotten to add the certificate .pem file to my Metricbeat Elasticsearch config. Essentially my config ended up along these lines:

      output.elasticsearch:
        # Array of hosts to connect to.
        hosts: ["https://es01.somedomain.com"]

        ssl.certificate_authorities: ["/storage/elk-docker/tls/kibana/elasticsearch-ca.pem"]

        # Protocol - either `http` (default) or `https`.
        protocol: "https"

        # Authentication credentials - either API key or username/password.
        #api_key: "id:api_key"
        username: "elastic"
        password: "for_the_love_of_god_change_me_please" # (Not real)

    Adding these allowed Metricbeat to validate my certificate, signed by my private CA. I'm running an SSL setup in front of my Docker cluster as well, using NGINX as a reverse proxy, which had its own set of challenges here.
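    For point 1, the hostnames end up in the instances file read by the certificate tool; a minimal sketch, assuming the tool is Elastic's elasticsearch-certutil (instance name and hostnames are illustrative):

      instances:
        # one entry per server certificate to generate
        - name: elasticsearch
          dns:
            - es01.somedomain.com
          ip:
            - 127.0.0.1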

    Antoine Cotten
    @antoineco
    Looking good!
    Also please note it's not necessary to regenerate the Certificate Authority in order to generate a new server certificate for ES. If you already have a CA key/cert pair, you can skip directly to the second step of the README.
    Antoine Cotten
    @antoineco
    Just for my own understanding, since you said you have an NGINX instance between your clients and Elasticsearch: are you using Elasticsearch's CA + server certs on NGINX and terminating the TLS connection there? Or are you just doing a passthrough on NGINX and terminating TLS on Elasticsearch directly?
    Alan M Kristensen
    @Big-al

    Also please note it's not necessary to regenerate the Certificate Authority in order to generate a new server certificate for ES. If you already have a CA key/cert pair, you can skip directly to the second step of the README.

    Yes, however I had not included the proper hostnames in the CA, so I had to redo the CA to include the new host for proper resolution.

    Alan M Kristensen
    @Big-al

    Just for my own understanding, since you said you have an NGINX instance between your clients and Elasticsearch: are you using Elasticsearch's CA + server certs on NGINX and terminating the TLS connection there? Or are you just doing a passthrough on NGINX and terminating TLS on Elasticsearch directly?

    Proxy - I use NGINX on my host machine for all DNS resolution. For example, I have mapped my Elasticsearch to a DNS-resolvable address on my domain, i.e. 127.0.0.1:9200 to es01.somedomain.com.
    The reason is twofold: I like having all my configuration for (public) port mapping and resolving in one place, and I can now just move the pointer to a different cluster should I decide to move my server in the future.
    I know that I could do this with an NGINX container as well, but I like keeping my web server separate from Docker. It makes configuration easier, in my workflow at least :-)
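    A minimal sketch of such a proxy block (paths and hostnames are illustrative, and whether you terminate TLS here or pass it through depends on your setup):

      server {
        listen 443 ssl;
        server_name es01.somedomain.com;

        # certificate presented to clients (illustrative paths)
        ssl_certificate     /etc/nginx/tls/es01.somedomain.com.crt;
        ssl_certificate_key /etc/nginx/tls/es01.somedomain.com.key;

        location / {
          # forward to the port published by the Compose file
          proxy_pass https://127.0.0.1:9200;
        }
      }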

    Antoine Cotten
    @antoineco
    @Big-al there is no hostname in a CA, only in an HTTP server certificate (the one used by Elasticsearch). If you're talking about the CA PEM certificate in the second section of the README, it is simply a conversion from P12 to PEM, but there is no hostname added to that PEM certificate.
    Stephane
    @steevivo
    Hi, newbie question: how do I modify my Logstash filter file in the pipeline directory? I get a read-only file system error, and yes, my docker-compose has :ro on ./logstash/pipeline:/usr/share/logstash/pipeline:ro. Should it be rw, or is there another way?
    Antoine Cotten
    @antoineco
    @steevivo could it be that you're trying to edit the file from inside the container, instead of from outside?
    Pro tip: you can add command: ['--config.reload.automatic'] somewhere under the logstash service in the Compose file and Logstash will automatically reload your config when it changes.
    Stephane
    @steevivo
    @antoineco Thanks, the bracket is not optional in the Compose file?
    Stephane
    @steevivo
    @antoineco Do you know any tips and tricks for Traefik and Kibana?
    Antoine Cotten
    @antoineco
    @steevivo nope, it's not optional, but you can also use the regular YAML syntax with a new line and a dash instead (like for env: in the Compose file). You should end up with - --config.reload.automatic if you opt for that syntax (dash space dash dash..., looks weird but it's 100% valid). See the sketch below.
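    A minimal sketch of that syntax in the Compose file (only the relevant keys are shown):

      logstash:
        command:
          - --config.reload.automatic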
    I'm not a Traefik user no. What are you trying to do? Expose Kibana to the outside?
    Stephane
    @steevivo
    @antoineco Yes, I would like to expose Kibana outside.
    Antoine Cotten
    @antoineco
    I only know Traefik can do some autodiscovery of containers and auto-TLS, but if it's just for Kibana you can probably just configure it manually.
    Just make sure Traefik can access the docker_elk network created by Compose and then you won't even need to expose port 5601 on your host.
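    A rough sketch using Traefik v2 Docker labels (the hostname and router name are made up; only the relevant keys are shown):

      kibana:
        labels:
          - traefik.enable=true
          - traefik.http.routers.kibana.rule=Host(`kibana.somedomain.com`)
          - traefik.http.services.kibana.loadbalancer.server.port=5601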
    Stephane
    @steevivo
    @antoineco Thanks for the information, your ELK Compose stack is a "must have", you know. Very, very useful. Thanks for the work!
    Antoine Cotten
    @antoineco
    Thanks for the kind words! Always nice to hear people find it useful!
    Stephane
    @steevivo
    @antoineco Can I set up logstash.yml with config.reload.automatic: true? I would like it to apply to all files in my pipeline directory.
    Antoine Cotten
    @antoineco
    No, it's only a command-line setting, but it does apply to your entire pipeline; there is no "per-file" reload: https://www.elastic.co/guide/en/logstash/current/reloading-config.html
    Stephane
    @steevivo
    @antoineco On the command line, e.g. "docker-compose up -d logstash --config.reload.automatic", is that OK too?
    Antoine Cotten
    @antoineco
    Unfortunately not, it has to be inside the Compose file.
    command: ['--config.reload.automatic']
    Antoine Cotten
    @antoineco
    Or
    command:
    - --config.reload.automatic
    Stephane
    @steevivo
    @antoineco ok Thx
    ifleg
    @ifleg
    Hi,
    About config.reload.automatic: it triggers the reload, but the reload hangs forever! I have to restart the container!
    ifleg
    @ifleg
    I tried to put config.reload.automatic: true in logstash/config/logstash.yml, and to change the docker-compose.yml as @antoineco suggested. Both methods trigger the reload, but the result is the same: the restart seems to get stuck. The logs show
    [2021-02-05T15:04:53,567][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>48, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>6000, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x18973e34 run>"}
    and nothing more!
    Antoine Cotten
    @antoineco
    @steevivo actually that setting can also go inside the logstash.yml file, contrary to what I told you above: https://www.elastic.co/guide/en/logstash/7.10/logstash-settings-file.html
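    In logstash/config/logstash.yml this would look like:

      # periodically check pipeline config files for changes
      config.reload.automatic: true
      # optional, how often to check for changes (defaults to 3s)
      config.reload.interval: 3s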
    Antoine Cotten
    @antoineco
    @ifleg strange, I just tried and whenever I perform a change in my pipeline file, I see the following messages in the logs:
    logstash_1       | [2021-02-05T15:28:21,868][INFO ][logstash.pipelineaction.reload] Reloading pipeline {"pipeline.id"=>:main}
    logstash_1       | [2021-02-05T15:28:32,879][INFO ][logstash.javapipeline    ][main] Pipeline terminated {"pipeline.id"=>"main"}
    ...
    logstash_1       | [2021-02-05T15:28:33,592][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1000, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x1865a88d run>"}
    ...
    logstash_1       | [2021-02-05T15:28:33,697][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
    Maybe try starting Logstash with log.level=debug (can also be set in the config file) and see where it hangs in the logs? (warning: Logstash will produce a LOT of logs in debug mode)
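    For reference, the equivalent line in logstash.yml would be:

      log.level: debug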
    Antoine Cotten
    @antoineco
    As you can see, reloading the pipeline takes >10sec when absolutely no data is being processed, so if your pipeline is currently processing data, it could be that things seem to "hang" but are actually just being gracefully restarted in the background.
    ifleg
    @ifleg

    Actually it's a test installation... so there is almost no incoming data (apart from one test server).
    At the end it works, but it takes quite a while to restart (around 2 min), as I can see in the logs!

    [2021-02-05T15:43:48,802][INFO ][logstash.pipelineaction.reload] Reloading pipeline {"pipeline.id"=>:main}
    [2021-02-05T15:44:00,577][INFO ][logstash.javapipeline    ][main] Pipeline terminated {"pipeline.id"=>"main"}
    
    [2021-02-05T15:44:04,668][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>48, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>6000, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x47483bae run>"}

    Then it hangs... but after a while:

    [2021-02-05T15:45:39,510][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>94.84}
    [2021-02-05T15:45:39,549][INFO ][logstash.inputs.beats    ][main] Starting input listener {:address=>"0.0.0.0:5044"}
    [2021-02-05T15:45:39,553][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}

    Indeed, brutally restarting the Docker container is faster!

    Antoine Cotten
    @antoineco
    Hehe yeah, I guess the reason why it takes so long is because Logstash is trying to avoid downtime as much as possible! When you restart the container, it's effectively a cold restart.
    Stephane
    @steevivo
    @antoineco @ifleg Only for 7.10? I have a 7.9.1 server too.
    Antoine Cotten
    @antoineco
    @steevivo any 7.x version.
    Stephane
    @steevivo
    @antoineco cool :thumbsup:
    Stephane
    @steevivo
    Hi, when I launch docker-compose up -d I get duplicate Docker images:

      docker.elastic.co/logstash/logstash             7.10.2   278903ffa6ee   3 weeks ago   915MB
      elk-stack_logstash                              latest   278903ffa6ee   3 weeks ago   915MB
      elk-stack_kibana                                latest   cca015890e7f   3 weeks ago   1.05GB
      docker.elastic.co/kibana/kibana                 7.10.2   cca015890e7f   3 weeks ago   1.05GB
      docker.elastic.co/elasticsearch/elasticsearch   7.10.2   0b58e1cea500   3 weeks ago   814MB
      elk-stack_elasticsearch                         latest   0b58e1cea500   3 weeks ago   814MB

    Do you know why? latest & 7.10.2
    Antoine Cotten
    @antoineco
    @steevivo that's because docker-elk builds a thin image layer on top of official images so you can add plugins which are not included in official images (the ones with the latest tag are those images).
    If you know you're not gonna need any plug-in, you can replace build: directives with image: in the Compose file, and use Elastic's images directly.
    In this case you can also remove all Dockerfiles present in the project.
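    For example, for Elasticsearch you would replace (a sketch, assuming docker-elk's default layout):

      elasticsearch:
        build:
          context: elasticsearch/

    with:

      elasticsearch:
        image: docker.elastic.co/elasticsearch/elasticsearch:7.10.2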
    Stephane
    @steevivo
    @antoineco Ok thx
    Stephane
    @steevivo
    @antoineco If I want to curl Elasticsearch, what is the hostname that replaces http://localhost:9200 with this docker-compose?
    Antoine Cotten
    @antoineco
    Port 9200 is exposed on your host by default via a port mapping in the Compose file, so if you're on the Docker host, localhost works. Otherwise, the IP/hostname of your Docker host, if you curl from "outside".
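    For example, from the Docker host (assuming the stack's default built-in user; replace the credentials with your own):

      curl -u elastic:changeme http://localhost:9200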
    Stephane
    @steevivo
    @antoineco OK, can I use patterns in the Logstash directory with this stack?
    Antoine Cotten
    @antoineco
    Could you elaborate?
    Antoine Cotten
    @antoineco
    @pgruener could that be a question for the Scrapy project maybe?
    Ghost
    @ghost~6027cd416da037398461e93f
    scrapy-cluster