(backoff.init and backoff.max settings at Configure the Logstash output)
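For reference, those two settings live under a Beat's Logstash output; a minimal sketch with the default values (illustrative only, adjust to taste):
# filebeat.yml (any Beat) — Logstash output with explicit backoff settings
output.logstash:
  hosts: ["localhost:5044"]
  backoff.init: 1s   # initial wait after a failed connection attempt
  backoff.max: 60s   # the backoff doubles on each failure up to this cap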
Hi all, I am trying to use this ELK docker-compose file and push Auditbeat logs to Elasticsearch. The whole setup is running on a single host. The docker-compose stack starts successfully and I have Auditbeat installed locally. Auditbeat has a command that tests whether it can connect to Elasticsearch: auditbeat test output
When I run the auditbeat connectivity test, I get:
elasticsearch: http://localhost:9200...
parse url... OK
connection...
parse host... OK
dns lookup... OK
addresses: ::1, 127.0.0.1
dial up... OK
TLS... WARN secure connection disabled
talk to server... ERROR Connection marked as failed because the onConnect callback failed: cannot retrieve the elasticsearch license from the /_license endpoint, Auditbeat requires the default distribution of Elasticsearch. Please make the endpoint accessible to Auditbeat so it can verify the license.: could not extract license information from the server response: unknown state, received: 'expired'
but when I pull the docker.elastic.co/kibana/kibana:7.9.2 image directly (docker pull) and run it, it works.
I have also tried changing xpack.license.self_generated.type: trial to basic, with no luck.
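For reference, that setting is applied on the Elasticsearch side; a sketch of the relevant line (depending on the docker-elk version it lives in elasticsearch/config/elasticsearch.yml or in the elasticsearch service environment of docker-compose.yml):
# elasticsearch.yml — self-generated license type
# 'trial' enables a 30-day trial of the paid features, 'basic' is the free license
xpack.license.self_generated.type: basic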
I am running it on CentOS 7.
The auditbeat version matches the Elasticsearch version (not considering the patch number), and I used the default elastic and changeme as username and password (which I think is the superuser).
auditbeat version 7.9.2 (amd64), libbeat 7.9.2 [2ab907f5ccecf9fd82fe37105082e89fd871f684 built 2020-09-22 23:14:31 +0000 UTC]
{
"name" : "19780697fd9b",
"cluster_name" : "docker-cluster",
"cluster_uuid" : "CMKnZU0KTZKuOWyAbZJEpg",
"version" : {
"number" : "7.9.1",
"build_flavor" : "default",
"build_type" : "docker",
"build_hash" : "083627f112ba94dffc1232e8b42b73492789ef91",
"build_date" : "2020-09-01T21:22:21.964974Z",
"build_snapshot" : false,
"lucene_version" : "8.6.2",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
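In case it helps, the license state that Auditbeat complains about can be queried directly from the /_license endpoint (assuming the stack's default elastic/changeme credentials):
$ curl -u elastic:changeme http://localhost:9200/_license
# the returned "status" field should be "active"; the Beat refuses the
# connection when it reads "expired", as in the error above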
@antoineco I am not sure if Auditbeat requires a paid license. However, swapping the docker-compose Elasticsearch for a standalone container by running:
docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.9.2
Gives the following output:
elasticsearch: http://localhost:9200...
parse url... OK
connection...
parse host... OK
dns lookup... OK
addresses: ::1, 127.0.0.1
dial up... OK
talk to server... OK
@Mansour-J I wasn't able to reproduce:
$ docker run \
--cap-add="AUDIT_CONTROL" --cap-add="AUDIT_READ" \
docker.elastic.co/beats/auditbeat:7.9.2 \
test output \
-E 'output.elasticsearch.hosts=["host.docker.internal:9200"]' -E 'output.elasticsearch.username=elastic' -E 'output.elasticsearch.password=changeme'
elasticsearch: http://host.docker.internal:9200...
parse url... OK
connection...
parse host... OK
dns lookup... OK
addresses: 192.168.65.2
dial up... OK
TLS... WARN secure connection disabled
talk to server... OK
version: 7.9.1
(I use host.docker.internal because I run Docker Desktop, but in your case localhost should work just fine.)
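If it is more convenient than the -E overrides, the same settings can also go into auditbeat.yml; a minimal sketch assuming the same credentials:
# auditbeat.yml — equivalent of the -E flags above
output.elasticsearch:
  hosts: ["localhost:9200"]
  username: "elastic"
  password: "changeme"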
discovery.type: single-node shouldn't be related to your issue here.
Port 5044 is the Logstash beats input. If you want to send directly to Elasticsearch, you have to use port 9200, not 5044.
$ docker-compose up -d elasticsearch
$ docker run docker.elastic.co/beats/filebeat:7.9.3 \
test output \
-E 'output.elasticsearch.hosts=["host.docker.internal:9200"]' -E 'output.elasticsearch.username=elastic' -E 'output.elasticsearch.password=changeme'
elasticsearch: http://host.docker.internal:9200...
parse url... OK
connection...
parse host... OK
dns lookup... OK
addresses: 192.168.65.2
dial up... OK
TLS... WARN secure connection disabled
talk to server... OK
version: 7.9.2
$ docker-compose up -d elasticsearch logstash
# blank config for testing
$ touch filebeat.yml
$ docker run docker.elastic.co/beats/filebeat:7.9.3 \
-v ${PWD}/filebeat.yml:/usr/share/filebeat/filebeat.yml \
test output \
-E 'output.logstash.hosts=["host.docker.internal:5044"]'
logstash: host.docker.internal:5044...
connection...
parse host... OK
dns lookup... OK
addresses: 192.168.65.2
dial up... OK
TLS... WARN secure connection disabled
talk to server... OK
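Conversely, shipping through Logstash means pointing the Beat at port 5044 with the Logstash output instead (a sketch; only one output may be enabled at a time):
output.logstash:
  hosts: ["localhost:5044"]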
kibana_1 | {"type":"log","@timestamp":"2020-11-03T14:14:59Z","tags":["fatal","root"],"pid":6,"message":"Error: Setup lifecycle of \"monitoring\" plugin wasn't completed in 30sec. Consider disabling the plugin and re-start.\n at Timeout.setTimeout (/usr/share/kibana/src/core/utils/promise.js:31:90)\n at ontimeout (timers.js:436:11)\n at tryOnTimeout (timers.js:300:5)\n at listOnTimeout (timers.js:263:5)\n at Timer.processTimers (timers.js:223:10)"}
kibana_1 | {"type":"log","@timestamp":"2020-11-03T14:14:59Z","tags":["info","plugins-system"],"pid":6,"message":"Stopping all plugins."}
kibana_1 |
kibana_1 | FATAL Error: Setup lifecycle of "monitoring" plugin wasn't completed in 30sec. Consider disabling the plugin and re-start.
kibana_1 |
elasticsearch_1 | {"type": "server", "timestamp": "2020-11-03T14:15:00,579Z", "level": "INFO", "component": "o.e.c.m.MetadataCreateIndexService", "cluster.name": "docker-cluster", "node.name": "d72b08299b6b", "message": "[ilm-history-2-000001] creating index, cause [api], templates [ilm-history], shards [1]/[0]", "cluster.uuid": "OmP8_jOLQ_C-NUQEPRIb1A", "node.id": "EF4mNoLbSs-aLiiv_uBFmw" }
elasticsearch_1 | {"type": "server", "timestamp": "2020-11-03T14:15:00,664Z", "level": "INFO", "component": "o.e.x.i.IndexLifecycleTransition", "cluster.name": "docker-cluster", "node.name": "d72b08299b6b", "message": "moving index [ilm-history-2-000001] from [null] to [{\"phase\":\"new\",\"action\":\"complete\",\"name\":\"complete\"}] in policy [ilm-history-ilm-policy]", "cluster.uuid": "OmP8_jOLQ_C-NUQEPRIb1A", "node.id": "EF4mNoLbSs-aLiiv_uBFmw" }
elasticsearch_1 | {"type": "server", "timestamp": "2020-11-03T14:15:00,845Z", "level": "INFO", "component": "o.e.x.i.IndexLifecycleTransition", "cluster.name": "docker-cluster", "node.name": "d72b08299b6b", "message": "moving index [ilm-history-2-000001] from [{\"phase\":\"new\",\"action\":\"complete\",\"name\":\"complete\"}] to [{\"phase\":\"hot\",\"action\":\"unfollow\",\"name\":\"wait-for-indexing-complete\"}] in policy [ilm-history-ilm-policy]", "cluster.uuid": "OmP8_jOLQ_C-NUQEPRIb1A", "node.id": "EF4mNoLbSs-aLiiv_uBFmw" }
elasticsearch_1 | {"type": "server", "timestamp": "2020-11-03T14:15:00,913Z", "level": "INFO", "component": "o.e.c.r.a.AllocationService", "cluster.name": "docker-cluster", "node.name": "d72b08299b6b", "message": "Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[ilm-history-2-000001][0]]]).", "cluster.uuid": "OmP8_jOLQ_C-NUQEPRIb1A", "node.id": "EF4mNoLbSs-aLiiv_uBFmw" }
elasticsearch_1 | {"type": "server", "timestamp": "2020-11-03T14:15:00,991Z", "level": "INFO", "component": "o.e.x.i.IndexLifecycleTransition", "cluster.name": "docker-cluster", "node.name": "d72b08299b6b", "message": "moving index [ilm-history-2-000001] from [{\"phase\":\"hot\",\"action\":\"unfollow\",\"name\":\"wait-for-indexing-complete\"}] to [{\"phase\":\"hot\",\"action\":\"unfollow\",\"name\":\"wait-for-follow-shard-tasks\"}] in policy [ilm-history-ilm-policy]", "cluster.uuid": "OmP8_jOLQ_C-NUQEPRIb1A", "node.id": "EF4mNoLbSs-aLiiv_uBFmw" }
devian_kibana_1 exited with code 1
logstash_1 | [2020-11-03T14:18:14,409][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}
logstash_1 | [2020-11-03T14:18:14,524][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx@elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx@elasticsearch:9200/][Manticore::ConnectTimeout] connect timed out"}
logstash_1 | [2020-11-03T14:18:22,279][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx@elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::
I did a git sync and tried again, but no joy.
Hi everyone, I am using the Kinesis agent to stream application logs and a Lambda function to separate the logs into different indices. However, I am seeing the error below intermittently.
{ "error": { "root_cause": [ { "type": "remote_transport_exception", "reason": "[2f4424726309a9300ffdd0939e5cca77][x.x.x.x:9300][indices:data/write/bulk[s]]" } ], "type": "es_rejected_execution_exception", "reason": "rejected execution of processing of [195195722][indices:data/write/bulk[s][p]]: request: BulkShardRequest [[misc-logs-2020-11-16][0]] containing [index {[misc-logs-2020-11-16][_doc][J9_q0HUBq-O-de1zDKJd], source[{\"message\":\"\\"End of file or no input: Operation interrupted or timed out (60 s recv delay) (60 s send delay)\\"\"}]}], target allocation id: yx7QdcoSRkecIXzgqLLeLg, primary term: 1 on EsThreadPoolExecutor[name = 2f4424726309a9300ffdd0939e5cca77/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@5205cb35[Running, pool size = 2, active threads = 2, queued tasks = 200, completed tasks = 98568339]]" }, "status": 429 }
I am using the multiline option set in agent.json to read multiline log statements together.
The AWS Kinesis Agent offers settings like the number of nodes and shards; however, I am unsure where the queue size can be increased, as elasticsearch.yml doesn't seem to be accessible. Any help would be appreciated.
You can increase the queue size of the write thread pool as described here: https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-threadpool.html
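For example, if you can reach the node configuration, the relevant static setting would look roughly like this (the value 500 is illustrative; on a managed AWS cluster elasticsearch.yml is indeed not directly editable):
# elasticsearch.yml — enlarge the write thread pool queue (node-level static setting, requires a restart)
thread_pool.write.queue_size: 500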