    J Khalaf
    Hi all
    I have a question and hope someone here can help.
Say I have Kafka running on a machine, and a topic is just sitting there idle with nothing happening. Will Kafka eventually delete that topic and discard whatever messages are sitting in it?
@Ciwan1859 Kafka won't delete the topic itself until you delete it, though individual messages are still discarded according to the topic's retention settings (retention.ms / retention.bytes).
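For reference, a topic's retention can be inspected or changed with the stock CLI tools. A sketch, assuming a broker on localhost:9092 and a topic named my-topic (both placeholders):

```shell
# Show any per-topic overrides (retention.ms, retention.bytes, ...)
kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name my-topic --describe

# Keep messages for 7 days (in milliseconds), regardless of topic activity
kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name my-topic \
  --alter --add-config retention.ms=604800000
```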
    Hello,
    I'm facing a problem using Kafka with Spring Boot. I send a message to a listener; the listener receives it, runs some validation, and then publishes the result to a new topic. I then want to listen on that new topic, but it takes some time before messages arrive there. I hope the question is clear.
    1) I create a topic "user" and send to it. A consumer listens on it with @KafkaListener, validates the user, and publishes the result to a new topic, "validation". The problem is that reading from this new topic takes some time, around 100 ms.
    Hi, I have a problem connecting my producer to Kafka; I receive this error:
    Failure(org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.)
    Could it be related to the Kafka listeners?
    I started Kafka with the Docker image, and these are the parameters I set up:
    Is this a good place to ask questions about kafka-streams?
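    That metadata timeout often does come down to the listener configuration: the broker advertises an address the producer can't actually reach. A sketch of the relevant settings for a Dockerized broker (the env-var names follow the common KAFKA_* convention but vary by image, and the image name and host/port values here are placeholders):

```shell
docker run -d --name kafka -p 9092:9092 \
  -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 \
  -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092 \
  <your-kafka-image>
```

    The key point is that KAFKA_ADVERTISED_LISTENERS must be resolvable and reachable from wherever the producer runs, not just from inside the container.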
    Aditya Parikh

    I had a question on Kafka Connect Single Message Transforms. I am using S3 as a sink. In most cases I want to write the message as is, but for certain messages with large payloads the payload is omitted, and I need to call a REST API (a Spring Boot app) to fetch the payload and enrich the message before it is written to S3.
    Following along

    I believe my logic should be here https://github.com/apache/kafka/blob/trunk/connect/transforms/src/main/java/org/apache/kafka/connect/transforms/ReplaceField.java#L142

    I kind of get what is going on here https://github.com/apache/kafka/blob/trunk/connect/transforms/src/test/java/org/apache/kafka/connect/transforms/ReplaceFieldTest.java#L43

    I couldn't find a simpler example to do so.
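    A real SMT would implement org.apache.kafka.connect.transforms.Transformation in Java (ReplaceField, linked above, is a reasonable template), but the branch you describe is small. Here's a minimal Python sketch of just the decision logic, with `fetch_payload` standing in for the hypothetical REST call:

```python
def enrich(record: dict, fetch_payload) -> dict:
    """Pass the record through as is; if its payload was omitted upstream,
    fetch it via the supplied callback and return an enriched copy."""
    if record.get("payload") is not None:
        return record                      # common case: write as is
    return {**record, "payload": fetch_payload(record["id"])}

# Only the record with a missing payload triggers the REST lookup:
print(enrich({"id": 1, "payload": "small"}, lambda _id: "fetched"))
print(enrich({"id": 2, "payload": None}, lambda _id: "fetched"))
```

    In an actual Transformation, the same branch would live in apply(), returning the record unchanged on the fast path and a rebuilt record on the enrichment path.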

    Paul Snively
    Hi everyone. Has anyone else run into an issue where close on TopologyTestDriver in Kafka Streams doesn't unregister TestInputTopics? That is, I can do the new TopologyTestDriver(), createInputTopic("topic-1"), driver.close() dance once, but the next go-round, I get an exception saying topic "topic-1" was already registered by another source. My understanding is that close disposes of all state.
    Paul Snively
    Nevermind. This was a bug in my ScalaCheck Gen. :-(
    Paul Snively
    In Kafka Streams, does anyone know why:
       Sub-topology: 0
        Source: KTABLE-SOURCE-0000000016 (topics: [KTABLE-FK-JOIN-SUBSCRIPTION-RESPONSE-0000000014-topic])
        Source: KSTREAM-SOURCE-0000000001 (topics: [input-topic-1])
          --> KTABLE-SOURCE-0000000002
        Processor: KTABLE-FK-JOIN-SUBSCRIPTION-RESPONSE-RESOLVER-PROCESSOR-0000000017 (stores: [input-topic-1-STATE-STORE-0000000000])
          --> KTABLE-FK-JOIN-OUTPUT-0000000018
          <-- KTABLE-SOURCE-0000000016
        Processor: KTABLE-FK-JOIN-OUTPUT-0000000018 (stores: [])
          --> KTABLE-TOSTREAM-0000000020
        Processor: KTABLE-SOURCE-0000000002 (stores: [input-topic-1-STATE-STORE-0000000000])
          <-- KSTREAM-SOURCE-0000000001
        Processor: KTABLE-FK-JOIN-SUBSCRIPTION-REGISTRATION-0000000007 (stores: [])
          --> KTABLE-SINK-0000000008
          <-- KTABLE-SOURCE-0000000002
        Processor: KTABLE-TOSTREAM-0000000020 (stores: [])
          --> KSTREAM-SINK-0000000021
          <-- KTABLE-FK-JOIN-OUTPUT-0000000018
        Sink: KSTREAM-SINK-0000000021 (topic: output-topic)
          <-- KTABLE-TOSTREAM-0000000020
        Sink: KTABLE-SINK-0000000008 (topic: KTABLE-FK-JOIN-SUBSCRIPTION-REGISTRATION-0000000006-topic)
      Sub-topology: 1
        Source: KSTREAM-SOURCE-0000000004 (topics: [input-topic-2])
          --> KTABLE-SOURCE-0000000005
        Source: KTABLE-SOURCE-0000000009 (topics: [KTABLE-FK-JOIN-SUBSCRIPTION-REGISTRATION-0000000006-topic])
          <-- KTABLE-SOURCE-0000000009
        Processor: KTABLE-SOURCE-0000000005 (stores: [input-topic-2-STATE-STORE-0000000003])
          <-- KSTREAM-SOURCE-0000000004
        Processor: KTABLE-FK-JOIN-SUBSCRIPTION-PROCESSOR-0000000012 (stores: [input-topic-2-STATE-STORE-0000000003])
          --> KTABLE-SINK-0000000015
          --> KTABLE-SINK-0000000015
          <-- KTABLE-SOURCE-0000000005
        Sink: KTABLE-SINK-0000000015 (topic: KTABLE-FK-JOIN-SUBSCRIPTION-RESPONSE-0000000014-topic)
    would be invalid? Visualizing it at https://zz85.github.io/kafka-streams-viz/, it seems OK to me. But in a test, I get no output on output-topic.
    Vishal Shivare
    Hi everyone. In our project we use Kafka Streams and ksqlDB to process our events. We can run the KSQL server and create and tail streams. Any stream or table created in ksqlDB gets a backing Kafka topic, so all our data ends up stored across multiple Kafka topics. Now I want to process the data that lives on those topics.
    What ways are there to consume that data? I already use kafkacat and other Kafka tools to tail the topic logs, but I want to do some processing on the data so that later I can connect a BI tool and build dashboards from it.
    If you have specific custom processing logic, you can use any programming language that has a Kafka client.
    If you just want to land the data in storage (S3, HDFS, a SQL database, etc.), you can use Confluent connectors, Secor, Amplify Streams, or similar tools.
    If you need complex logic (joins, groupBy, reduceByKey, etc.) over multiple topics, you might use Spark, Flink, or Heron, depending on your use case.
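    On the "any language with a Kafka client" route: a single consumer can subscribe to several topics at once and feed whatever aggregation the dashboard needs. A hedged sketch, with the consumer loop left as comments since it needs a running broker (confluent-kafka and the topic names here are assumptions); the pure aggregation step is the part you'd adapt:

```python
from collections import Counter

def aggregate(events):
    """Illustrative processing step: count events per type, as a stand-in
    for whatever shaping a BI dashboard would need."""
    return Counter(e["type"] for e in events)

# Consumer loop sketch (requires a broker and `pip install confluent-kafka`):
# from confluent_kafka import Consumer
# c = Consumer({"bootstrap.servers": "localhost:9092", "group.id": "bi-agg"})
# c.subscribe(["stream-topic-1", "stream-topic-2"])  # many topics, one consumer
# while True:
#     msg = c.poll(1.0)
#     ...  # decode msg.value() and feed it into aggregate()

print(aggregate([{"type": "click"}, {"type": "view"}, {"type": "click"}]))
```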