Thank you for pointing that out @antoineco. I updated the username to kibana_system in kibana/config/kibana.yml, but I'm getting the same result.
{"type":"log","@timestamp":"2020-07-07T12:55:28Z","tags":["warning","plugins","licensing"],"pid":7,"message":"License information could not be obtained from Elasticsearch due to [security_exception] unable to authenticate user [kibana_system] for REST request [/_xpack], with { header={ WWW-Authenticate=\"Basic realm=\\\"security\\\" charset=\\\"UTF-8\\\"\" } }
Hi all, I am getting the following error while running Logstash 6.8.0 as a Docker service:
][logstash.outputs.elasticsearch] Encountered a retryable error. Will Retry with exponential backoff {:code=>400, :url=>"http://xxxxx9200/_xpack/monitoring/_bulk?system_id=logstash&system_api_version=6&interval=1s"}
Thanks in advance
Are you on the release-6.x branch, as described in the README? https://github.com/deviantony/docker-elk#version-selection Make sure to run docker-compose build after switching branches.
Also, if Elasticsearch still holds data from a 7.x stack you may have started before switching to 6.x, it might be wise to wipe that data with docker-compose down -v and start with a fresh data volume.
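Putting that together, the sequence would look roughly like this (note that docker-compose down -v deletes all indexed data):
$ git checkout release-6.x   # switch to the 6.x branch
$ docker-compose build       # rebuild the images against 6.x
$ docker-compose down -v     # remove the containers and wipe the old data volume
$ docker-compose up -d       # start over with a fresh data volume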
Hi all, in a docker stack deploy with Logstash, Kibana and Filebeat, I would like to know what happens if Logstash is restarted. I mean, the logic should be: if Logstash is restarted, then Filebeat should be restarted to re-establish the stream.
What can be done in this case?
Thank you
Filebeat automatically retries the connection to Logstash with an exponential backoff, so there is no need to restart it (see the backoff.init and backoff.max settings at Configure the Logstash output).
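For illustration, those settings live under the Logstash output section of filebeat.yml; the values below are simply the documented defaults, not a recommendation:
output.logstash:
  hosts: ["logstash:5044"]
  backoff.init: 1s   # wait 1s after the first failed connection attempt
  backoff.max: 60s   # then back off exponentially, up to 60s between retries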
Hi all, I am trying to use this ELK docker-compose file and push Auditbeat logs to Elasticsearch. The whole setup is running on a single host. The docker-compose stack runs successfully and I have Auditbeat installed locally. Auditbeat has a command, auditbeat test output, that tests whether it can connect to Elasticsearch.
When I run the auditbeat connectivity test, I get:
elasticsearch: http://localhost:9200...
parse url... OK
connection...
parse host... OK
dns lookup... OK
addresses: ::1, 127.0.0.1
dial up... OK
TLS... WARN secure connection disabled
talk to server... ERROR Connection marked as failed because the onConnect callback failed: cannot retrieve the elasticsearch license from the /_license endpoint, Auditbeat requires the default distribution of Elasticsearch. Please make the endpoint accessible to Auditbeat so it can verify the license.: could not extract license information from the server response: unknown state, received: 'expired'
but when I pull the image down directly (docker pull docker.elastic.co/kibana/kibana:7.9.2) and run it, it works.
I have tried changing xpack.license.self_generated.type from trial to basic as well.
I am running it on CentOS 7. The auditbeat version matches the elasticsearch version (not considering the patch number), and I used the default elastic user with changeme as the password (which I think is the superuser).
auditbeat version 7.9.2 (amd64), libbeat 7.9.2 [2ab907f5ccecf9fd82fe37105082e89fd871f684 built 2020-09-22 23:14:31 +0000 UTC]
{
"name" : "19780697fd9b",
"cluster_name" : "docker-cluster",
"cluster_uuid" : "CMKnZU0KTZKuOWyAbZJEpg",
"version" : {
"number" : "7.9.1",
"build_flavor" : "default",
"build_type" : "docker",
"build_hash" : "083627f112ba94dffc1232e8b42b73492789ef91",
"build_date" : "2020-09-01T21:22:21.964974Z",
"build_snapshot" : false,
"lucene_version" : "8.6.2",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
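The license state mentioned in the error can also be inspected directly via the /_license endpoint, for example with the default credentials and port mapping:
# returns the license type, status and expiry date
$ curl -u elastic:changeme "http://localhost:9200/_license?pretty"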
@antoineco I am not sure if Auditbeat requires a paid license. However, swapping the docker-compose Elasticsearch for a plain Elasticsearch container by running:
docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.9.2
gives the following output:
elasticsearch: http://localhost:9200...
parse url... OK
connection...
parse host... OK
dns lookup... OK
addresses: ::1, 127.0.0.1
dial up... OK
talk to server... OK
@Mansour-J I wasn't able to reproduce:
$ docker run \
--cap-add="AUDIT_CONTROL" --cap-add="AUDIT_READ" \
docker.elastic.co/beats/auditbeat:7.9.2 \
test output \
-E 'output.elasticsearch.hosts=["host.docker.internal:9200"]' -E 'output.elasticsearch.username=elastic' -E 'output.elasticsearch.password=changeme'
elasticsearch: http://host.docker.internal:9200...
parse url... OK
connection...
parse host... OK
dns lookup... OK
addresses: 192.168.65.2
dial up... OK
TLS... WARN secure connection disabled
talk to server... OK
version: 7.9.1
(I use host.docker.internal because I run Docker for Desktop, but in your case localhost should work just fine)
discovery.type: single-node shouldn't be related to your issue here.
Port 5044 is the Logstash beats input. If you want to send directly to Elasticsearch, you have to use port 9200, not 5044.
$ docker-compose up -d elasticsearch
$ docker run docker.elastic.co/beats/filebeat:7.9.3 \
test output \
-E 'output.elasticsearch.hosts=["host.docker.internal:9200"]' -E 'output.elasticsearch.username=elastic' -E 'output.elasticsearch.password=changeme'
elasticsearch: http://host.docker.internal:9200...
parse url... OK
connection...
parse host... OK
dns lookup... OK
addresses: 192.168.65.2
dial up... OK
TLS... WARN secure connection disabled
talk to server... OK
version: 7.9.2
$ docker-compose up -d elasticsearch logstash
# blank config for testing
$ touch filebeat.yml
$ docker run docker.elastic.co/beats/filebeat:7.9.3 \
-v ${PWD}/filebeat.yml:/usr/share/filebeat/filebeat.yml \
test output \
-E 'output.logstash.hosts=["host.docker.internal:5044"]'
logstash: host.docker.internal:5044...
connection...
parse host... OK
dns lookup... OK
addresses: 192.168.65.2
dial up... OK
TLS... WARN secure connection disabled
talk to server... OK
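In filebeat.yml that boils down to picking exactly one of the two outputs (sketch below; Filebeat refuses to start if both are enabled at the same time):
# either send to the Logstash beats input on port 5044...
output.logstash:
  hosts: ["logstash:5044"]
# ...or send directly to Elasticsearch on port 9200
#output.elasticsearch:
#  hosts: ["elasticsearch:9200"]
#  username: elastic
#  password: changeme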
kibana_1 | {"type":"log","@timestamp":"2020-11-03T14:14:59Z","tags":["fatal","root"],"pid":6,"message":"Error: Setup lifecycle of \"monitoring\" plugin wasn't completed in 30sec. Consider disabling the plugin and re-start.\n at Timeout.setTimeout (/usr/share/kibana/src/core/utils/promise.js:31:90)\n at ontimeout (timers.js:436:11)\n at tryOnTimeout (timers.js:300:5)\n at listOnTimeout (timers.js:263:5)\n at Timer.processTimers (timers.js:223:10)"}
kibana_1 | {"type":"log","@timestamp":"2020-11-03T14:14:59Z","tags":["info","plugins-system"],"pid":6,"message":"Stopping all plugins."}
kibana_1 |
kibana_1 | FATAL Error: Setup lifecycle of "monitoring" plugin wasn't completed in 30sec. Consider disabling the plugin and re-start.
kibana_1 |
elasticsearch_1 | {"type": "server", "timestamp": "2020-11-03T14:15:00,579Z", "level": "INFO", "component": "o.e.c.m.MetadataCreateIndexService", "cluster.name": "docker-cluster", "node.name": "d72b08299b6b", "message": "[ilm-history-2-000001] creating index, cause [api], templates [ilm-history], shards [1]/[0]", "cluster.uuid": "OmP8_jOLQ_C-NUQEPRIb1A", "node.id": "EF4mNoLbSs-aLiiv_uBFmw" }
elasticsearch_1 | {"type": "server", "timestamp": "2020-11-03T14:15:00,664Z", "level": "INFO", "component": "o.e.x.i.IndexLifecycleTransition", "cluster.name": "docker-cluster", "node.name": "d72b08299b6b", "message": "moving index [ilm-history-2-000001] from [null] to [{\"phase\":\"new\",\"action\":\"complete\",\"name\":\"complete\"}] in policy [ilm-history-ilm-policy]", "cluster.uuid": "OmP8_jOLQ_C-NUQEPRIb1A", "node.id": "EF4mNoLbSs-aLiiv_uBFmw" }
elasticsearch_1 | {"type": "server", "timestamp": "2020-11-03T14:15:00,845Z", "level": "INFO", "component": "o.e.x.i.IndexLifecycleTransition", "cluster.name": "docker-cluster", "node.name": "d72b08299b6b", "message": "moving index [ilm-history-2-000001] from [{\"phase\":\"new\",\"action\":\"complete\",\"name\":\"complete\"}] to [{\"phase\":\"hot\",\"action\":\"unfollow\",\"name\":\"wait-for-indexing-complete\"}] in policy [ilm-history-ilm-policy]", "cluster.uuid": "OmP8_jOLQ_C-NUQEPRIb1A", "node.id": "EF4mNoLbSs-aLiiv_uBFmw" }
elasticsearch_1 | {"type": "server", "timestamp": "2020-11-03T14:15:00,913Z", "level": "INFO", "component": "o.e.c.r.a.AllocationService", "cluster.name": "docker-cluster", "node.name": "d72b08299b6b", "message": "Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[ilm-history-2-000001][0]]]).", "cluster.uuid": "OmP8_jOLQ_C-NUQEPRIb1A", "node.id": "EF4mNoLbSs-aLiiv_uBFmw" }
elasticsearch_1 | {"type": "server", "timestamp": "2020-11-03T14:15:00,991Z", "level": "INFO", "component": "o.e.x.i.IndexLifecycleTransition", "cluster.name": "docker-cluster", "node.name": "d72b08299b6b", "message": "moving index [ilm-history-2-000001] from [{\"phase\":\"hot\",\"action\":\"unfollow\",\"name\":\"wait-for-indexing-complete\"}] to [{\"phase\":\"hot\",\"action\":\"unfollow\",\"name\":\"wait-for-follow-shard-tasks\"}] in policy [ilm-history-ilm-policy]", "cluster.uuid": "OmP8_jOLQ_C-NUQEPRIb1A", "node.id": "EF4mNoLbSs-aLiiv_uBFmw" }
devian_kibana_1 exited with code 1
logstash_1 | [2020-11-03T14:18:14,409][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}
logstash_1 | [2020-11-03T14:18:14,524][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx@elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx@elasticsearch:9200/][Manticore::ConnectTimeout] connect timed out"}
logstash_1 | [2020-11-03T14:18:22,279][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx@elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::
I did a git sync and tried again, but no joy.
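For reference, a basic check of whether Elasticsearch responds at all from the host (default elastic/changeme credentials assumed) would be something like:
# should return the cluster status once Elasticsearch has finished starting up
$ curl -u elastic:changeme "http://localhost:9200/_cluster/health?pretty"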