    Solved the problem by JSON-serializing the data so it doesn't get parsed.

    Hello everyone, I have encountered a problem with Kibana monitoring: there is monitoring data in my ES cluster, but the Monitoring UI does not show it. This has been the case from the beginning. The configuration files have not been changed recently, but the nodes in the ES cluster have changed: some were newly added and some left the cluster. ES continues to collect monitoring data every day. What should I do to get the monitoring data displayed?


    Can you help me?
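One thing worth checking (an assumption, since the stack version isn't stated): on Elasticsearch 6.3+ and 7.x, self-monitoring collection is disabled by default and must be switched on at the cluster level before Kibana's Monitoring UI shows anything, even if older `.monitoring-*` indices exist:

```
PUT _cluster/settings
{
  "persistent": {
    "xpack.monitoring.collection.enabled": true
  }
}
```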
    Hi all

    Search with typeahead for static HTML pages

    I have 2,000 HTML files. Each contains only text, no images; each file is a text article.

    I want to search all of them locally, via an index or any other means, for a particular keyword.

    I'm looking for a type-ahead search that does full-text search, i.e. searches inside all the HTML pages and returns the results with easy navigation and preview.


    Folio Views https://img.informer.com/screenshots/568/568140_1_4.png and https://www.youtube.com/watch?v=0U_fNIcudyQ

    Please let me know which one would be suitable for this. This is for standalone HTML content
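For a purely local setup, even a small script can get you started. Below is a minimal sketch (not one of the products mentioned above) that builds an in-memory word index over a folder of HTML files using only the Python standard library, then answers prefix ("type-ahead") queries against it. The folder layout, tokenization, and ranking are deliberately simplistic; a dedicated client-side search library would scale and rank better.

```python
# Minimal sketch: index the text of local static HTML files and answer
# prefix queries. Assumes UTF-8 files with a .html extension in one folder.
import os
import re
from collections import defaultdict
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects the text content of an HTML document, ignoring tags."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)

def build_index(doc_dir):
    """Map each lowercased word to the set of files containing it."""
    index = defaultdict(set)
    for name in os.listdir(doc_dir):
        if not name.endswith(".html"):
            continue
        parser = TextExtractor()
        with open(os.path.join(doc_dir, name), encoding="utf-8") as f:
            parser.feed(f.read())
        text = " ".join(parser.chunks).lower()
        for word in re.findall(r"[a-z0-9]+", text):
            index[word].add(name)
    return index

def typeahead(index, prefix):
    """Return files containing any word that starts with the given prefix."""
    prefix = prefix.lower()
    hits = set()
    for word, files in index.items():
        if word.startswith(prefix):
            hits |= files
    return sorted(hits)
```

A front end would call `typeahead` on each keystroke and show the matching file names as suggestions, with the file itself as the preview.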


    Ayushya Devmurari
    Hi all
    I have different systems running in different timezones, with a common ELK logging stack for all of them. Would that impact log collection? Could I miss logs because of differences in the systems' clocks?
    Under what circumstances might I lose logs that the application writes to a system's console?
    Basically, I can't find some (important) logs in Kibana even after searching.
    Ayushya Devmurari
    If anyone can share some thoughts or pointers, please do.
    Piotr Kosecki
    Hello, I'm having trouble understanding how to configure Logback SSL to send logs to Logstash; I'm using logstash-logback-encoder.
    Logstash is apparently secured with SSL certificates. I found in the docs that the client should use some trustStore, but I'm not really familiar with that.
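For what it's worth, logstash-logback-encoder's TCP appender takes its SSL settings from Logback's standard SSL configuration, so you can point it at a Java truststore (JKS) containing the certificate, or the CA, that Logstash presents. A logback.xml sketch; the destination, the truststore path, and the password are placeholders:

```xml
<!-- logback.xml fragment (sketch) -->
<appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
  <destination>logstash.example.com:5044</destination>  <!-- placeholder host:port -->
  <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
  <ssl>
    <trustStore>
      <location>file:/path/to/truststore.jks</location>  <!-- JKS holding the Logstash CA/cert -->
      <password>changeit</password>
    </trustStore>
  </ssl>
</appender>
```

The truststore itself can be created with the JDK's `keytool -importcert` from the Logstash certificate.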
    Does anyone have a manual testing job: testing an application, providing the results, and getting paid per issue raised?
    I'm using Logstash with Filebeat on a central syslog host. The problem I'm having is that when I try to process multiple sources of files (i.e. nginx and PostgreSQL log files), I can't route them to different indexes in Elasticsearch; PostgreSQL log entries end up indexed in the nginx index. Does anybody have clear documentation about this?
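One common pattern is to attach a custom field to each Filebeat input (e.g. `fields.log_type: nginx` vs. `fields.log_type: postgresql`; the field name is illustrative) and branch on it in the Logstash output. A pipeline sketch under that assumption:

```
# Logstash output fragment (sketch): route events to separate indexes
# based on a custom "log_type" field set per Filebeat input.
output {
  if [fields][log_type] == "nginx" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "nginx-%{+YYYY.MM.dd}"
    }
  } else if [fields][log_type] == "postgresql" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "postgresql-%{+YYYY.MM.dd}"
    }
  }
}
```

Without such a condition, a single `elasticsearch` output sends every event, whatever its source, to the same index, which matches the symptom described.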
    Doron Tsur
    Hey guys, I'm looking for good reading material on Elastic. I found the Definitive Guide, but I'm unsure which version it covers. I find the product really useful, but the documentation seems (to me?) hard to navigate. Help?
    Sven Ludwig
    Hi all, we have a small 1-node ELK 7.0 machine for some purposes, and we want to reduce the default number of primary shards per index. I would prefer to do that via the Elasticsearch configuration file. What options do I have? I would like to avoid writing a custom tool that talks to the Elasticsearch REST API. Thanks
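On shard defaults: since Elasticsearch 5.x, per-index settings such as `index.number_of_shards` can no longer be set in elasticsearch.yml, and in 7.0 the default already dropped from five primary shards to one. The usual substitute for a config-file default is an index template that applies to all new indices; it's a one-time REST call, so no custom tool is needed. A sketch (template name and pattern are illustrative; zero replicas avoids yellow health on a single node):

```
PUT _template/default-shards
{
  "index_patterns": ["*"],
  "order": 0,
  "settings": {
    "index.number_of_shards": 1,
    "index.number_of_replicas": 0
  }
}
```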
    Arthur Eyckerman
    @Zoomtoo @prog20901 @pathfinder2104 @piotrkosecki @prog20901 @pavelliano @qballer @sourcekick I highly recommend to use the elastic forums for your questions at https://discuss.elastic.co
    Tyrfing Mjølner
    Anyone able to point me in the direction of licensing in elasticsearch?
    @TyrfingMjolnir You can see it here.
    @pavelliano Hi, the same question with an answer was posted on stackoverflow.
    Vikas Pandey

    Hi all, I am implementing Elasticsearch in Spring Boot, and I am stuck on how to add the cluster name to the RestHighLevelClient. I checked the documentation and didn't find anything useful.

    @Bean(destroyMethod = "close")
        public RestHighLevelClient client() {
            RestHighLevelClient restClient = new RestHighLevelClient(RestClient.builder(new HttpHost("", 9200, "http")));
            return restClient;
        }

    I am not sure how to add the cluster name to the client. I am trying with Settings, but there is no method on RestHighLevelClient to include the cluster name:

    Settings esSettings = Settings.builder().put("cluster.name", clusterName).build();
    @vikpandey I think that by default the cluster node is exposed on port 9300, so if you want to connect to the cluster, just replace port 9200 with port 9300.
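For reference: the RestHighLevelClient speaks HTTP, normally on port 9200; port 9300 is the binary transport protocol used by the old TransportClient, and a REST client cannot connect to it. The REST client also has no cluster-name setting, because only the TransportClient validated `cluster.name`. A Spring configuration sketch (the host is a placeholder):

```java
// Spring bean sketch: the REST client needs only host/port/scheme.
// cluster.name is a TransportClient (port 9300) concept with no
// equivalent here, so it is simply omitted.
@Bean(destroyMethod = "close")
public RestHighLevelClient client() {
    return new RestHighLevelClient(
            RestClient.builder(new HttpHost("localhost", 9200, "http")));
}
```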
    Vikas Pandey

    @tomicarsk6 I'm using spring-data-elasticsearch. I tried changing the port to 9300 in application.properties and also in Config.java, but I'm getting the following error in the console:

    failed to load elasticsearch nodes : org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: [{#transport#-1}{2SA3tnkFSIeuc6RxpQnFwA}{}{}]

    I get a similar error with port 9200. I have tried both ports, 9300 and 9200.

    Here is my Config.java:

    public class Config {
        @Bean(destroyMethod = "close")
        public RestHighLevelClient client() {
            RestHighLevelClient restClient = new RestHighLevelClient(RestClient.builder(new HttpHost("", 9300, "http")));
            return restClient;
        }
    }

    My UserController.java file:

    public class UserController {
        private UserService userService;

        public String getUsers() {
            return "here is the list of users";
        }

        public ResponseEntity<User> create(@RequestBody User user) throws IOException {
            System.out.println("inside rest user create ....");
            return new ResponseEntity<User>(userService.save(user), HttpStatus.CREATED);
        }
    }


    @Document(indexName = "users", type = "emp")
    public class User {
        private String id;
        private String name;
        private String age;
        private String email;

        public String getId() { return id; }
        public void setId(String id) { this.id = id; }
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
        public String getAge() { return age; }
        public void setAge(String age) { this.age = age; }
        public String getEmail() { return email; }
        public void setEmail(String email) { this.email = email; }
    }


    public interface UserRepository extends ElasticsearchRepository<User, String> {
        List<User> findByName(String name);
        User findByEmail(String email);
    }


    public interface UserService {
        User save(User user);
        Iterable<User> findAll();
        List<User> findByName(String name);
        User findByEmail(String email);
    }


    public class UserServiceImpl implements UserService {
        private UserRepository userRepository;

        public void setUserRepository(UserRepository userRepository) {
            this.userRepository = userRepository;
        }

        public User save(User user) { return userRepository.save(user); }
        public Iterable<User> findAll() { return userRepository.findAll(); }
        public List<User> findByName(String name) { return userRepository.findByName(name); }
        public User findByEmail(String email) { return userRepository.findByEmail(email); }
    }


    spring.data.elasticsearch.repositories.enabled = true
    spring.data.elasticsearch.cluster-nodes =
    @vikpandey It depends on what you are using; can you also post your pom.xml or build.gradle?
    @vikpandey And can you also try replacing the host with http://localhost, and then use port 9200 or 9300?
    Flavio Campana
    Hi everyone, I remember there was a MIB converter somewhere inside the Logstash install dir, to prepare files for the snmptrap input plugin, but I can't find it anywhere.
    Madhu Muchukota
    Hi everyone, I have a grok filter that works fine in the Grok Debugger tool, but the same filter, when I put it in the conf file and test it, gives me a _grokparsefailure error. Any help on this is highly appreciated, please.
    "tags" => [
        [0] "_grokparsefailure"
    ],
    "message" => "2019-05-08T08:30:50.241-0500: 4.208: [GC (Metadata GC Threshold) [PSYoungGen: 158901K->19812K(225792K)] 158901K->19892K(741888K), 0.0466583 secs] [Times: user=0.03 sys=0.01, real=0.04 secs]",
    "@timestamp" => 2019-06-11T20:12:18.542Z,
    "@version" => "1",
    "host" => "XXXXXX"
    is the error I am seeing
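When a pattern passes in the Grok Debugger but fails in the pipeline, the usual cause is that the live `message` field differs from what was pasted into the debugger: an extra prefix added by the input or codec, multiline joining, or stray whitespace. Comparing the rubydebug `message` above character-by-character against the debugger input is the first step. For that GC line, a pattern along these lines might work (field names are illustrative; this is an untested sketch):

```
filter {
  grok {
    match => {
      "message" => "%{TIMESTAMP_ISO8601:gc_timestamp}: %{NUMBER:jvm_uptime}: \[GC \(%{DATA:gc_cause}\) \[PSYoungGen: %{NUMBER:young_before}K->%{NUMBER:young_after}K\(%{NUMBER:young_total}K\)\] %{NUMBER:heap_before}K->%{NUMBER:heap_after}K\(%{NUMBER:heap_total}K\), %{NUMBER:gc_secs} secs\] \[Times: user=%{NUMBER:user_secs} sys=%{NUMBER:sys_secs}, real=%{NUMBER:real_secs} secs\]"
    }
  }
}
```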
    hello everyone
    @kenalex hey
    Hi guys, how can I return the whole content in my highlight search in Elastic?
    @qballer Thanks! I'll check that out, I hadn't opened gitter in ages
    Sergio Teixeira
    Hello guys
    anyone here using filebeat?
    Hi guys,
    anyone here using docker?
    I use this Docker command: docker run -itd -p 9200:9200 -p 9300:9300 --name elasticsearch -v /elasticsearch/data:/usr/share/elasticsearch/data --restart=always 45ff2ef9a219
    Then I got this. My configuration in Docker is:
    node.name: node-1
    node.master: true
    node.data: true
    cluster.name: ebds-data-application
    http.port: 9200
    discovery.zen.ping.unicast.hosts: ["localhost:9201", "localhost:9202"]
    http.cors.enabled: true
    http.cors.allow-origin: "*"
    Why can't I get the connection?
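One likely culprit in the configuration above: `discovery.zen.ping.unicast.hosts` points at localhost:9201 and localhost:9202, which resolve inside the container, where nothing is listening, so the node never forms a cluster. For a single container, single-node discovery (available in recent 6.x and 7.x images; an assumption, since the image version isn't stated) sidesteps discovery entirely:

```
# elasticsearch.yml sketch for a single container
cluster.name: ebds-data-application
node.name: node-1
network.host: 0.0.0.0
discovery.type: single-node
```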
    @vikpandey Hi,my application.yml configuration is:
    Maybe something is wrong with your hosts file.
    And I used the class org.springframework.data.elasticsearch.core.ElasticsearchTemplate to connect to Elasticsearch.
    I didn't need to build the Config class at all.
    Ramo Karahasan-Riechardt
     When you want to show someone something cool And they ask "What's cool about it?" Don't try to e x p l a i n Try to ask them what other things they could find cool that you could show them It's annoying to explain why things are cool over and over again And why is that annoying? Because it's annoying to understand what's cool about something Hence: How will it be cool for the person you are showing it to when they are trying to understand it and the understanding part is already uncool for you?

    Hello guys,
    I have a service that outputs two files to a file system, a CSV and a manifest; both have the same file name but different extensions.

    I need to build a Logstash config file that does the following:

    1. Once the files are written, it reads both files (CSV and manifest), whether they are located in the main directory or in subdirectories (nested folders).
    2. It doesn't re-read previously added files when new pairs are added; I mean it only reads the newly added ones, in any location under the main root.

    Note: both files, the CSV and the manifest, should be read together, because the manifest has metadata that helps me index the CSV file when I push it to Elasticsearch.

    Question: sometimes the CSV file takes 30 seconds to be written (it is a huge file), so I'm wondering whether Logstash will start reading the file once it's created, or only once it's closed and the service has finished filling it.

    Here is the code I'm using. I managed to read a CSV file only, but I'm not sure how to do that for both files as I mentioned above.

    input {
      file {
        path => "/usr/share/input/**/*.*"
        start_position => "beginning"
        sincedb_path => "/dev/null"
        discover_interval => 2
        stat_interval => "1 s"
      }
    }
    filter {
        # .... Code goes here ....
    }
    output {
        stdout { codec => rubydebug }
        elasticsearch {
            index => "%{blockId}"
            hosts => ["${HOSTS}"]
        }
    }
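On the timing question: the Logstash file input tails files and emits lines as they are appended; it does not wait for the writer to close the file. A common workaround is to have the producer write under a temporary name and rename the file into the watched path when complete, since a rename is atomic. Also note that `sincedb_path => "/dev/null"` discards read positions, so everything is re-read after every Logstash restart, which conflicts with requirement 2 above. To tell the two file types apart, one option (the `.manifest` extension is an assumption) is separate inputs with tags:

```
input {
  file {
    path => "/usr/share/input/**/*.csv"
    start_position => "beginning"
    tags => ["csv"]
  }
  file {
    path => "/usr/share/input/**/*.manifest"
    start_position => "beginning"
    tags => ["manifest"]
  }
}
```

Joining the manifest's metadata onto the CSV events is not something the file input can do by itself, since the two files arrive as independent event streams; pre-merging the pair outside Logstash before it is picked up may be simpler than correlating them in a filter.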