PartitionOffset per event produced. My events are created from several events consumed across several partitions, so I need to commit a batch of PartitionOffsets per event. Is this possible?
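Not something from the thread, but with the non-transactional committable sources one way to do this is to merge the offsets of all contributing records into a ConsumerMessage.CommittableOffsetBatch and attach that batch as the pass-through of the produced message; committing the produced event then commits every contributing offset. A rough Scala sketch, where buildEvent and groupSize are hypothetical placeholders and the various settings are assumed to be configured elsewhere:

import akka.kafka.{ConsumerMessage, ProducerMessage, Subscriptions}
import akka.kafka.scaladsl.{Consumer, Producer}
import org.apache.kafka.clients.producer.ProducerRecord

// Sketch only: buildEvent and groupSize are placeholders; consumerSettings,
// producerSettings, committerSettings and an implicit ActorSystem are assumed.
val streamCompletion =
  Consumer
    .committableSource(consumerSettings, Subscriptions.topics("source-topic"))
    .grouped(groupSize) // however the contributing records are actually collected
    .map { msgs =>
      val event = new ProducerRecord[String, String]("target-topic", buildEvent(msgs))
      // one committable batch holding the offsets of every contributing record
      val offsets = ConsumerMessage.CommittableOffsetBatch(msgs.map(_.committableOffset))
      ProducerMessage.single(event, offsets)
    }
    .runWith(Producer.committableSink(producerSettings, committerSettings))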
I'm using committablePartitionedSource so I have access to per-partition offsets. How do I terminate the flow after a specific offset has been reached?

Consumer.DrainingControl<Done> control =
    Consumer.committablePartitionedSource(consumerSettings, Subscriptions.topics(topic))
        .mapAsyncUnordered(
            maxPartitions,
            pair -> {
              Source<ConsumerMessage.CommittableMessage<String, String>, NotUsed> source =
                  pair.second();
              return source
                  .via(business())
                  .map(message -> message.committableOffset())
                  .runWith(Committer.sink(committerSettings), system);
            })
        .toMat(Sink.ignore(), Consumer::createDrainingControl)
        .run(system);
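There is no built-in stop-at-offset variant that I know of, but one option (sketched here with the Scala DSL) is to end each per-partition sub-stream with a takeWhile on the offset. stopOffset is a hypothetical last offset to process, and business stands in for the processing stage, as in the snippet above:

import akka.kafka.Subscriptions
import akka.kafka.scaladsl.{Committer, Consumer}
import akka.stream.scaladsl.Sink

// Sketch: stopOffset is hypothetical; settings and an implicit ActorSystem are assumed.
val control =
  Consumer
    .committablePartitionedSource(consumerSettings, Subscriptions.topics(topic))
    .mapAsyncUnordered(maxPartitions) { case (_, source) =>
      source
        .takeWhile(_.committableOffset.partitionOffset.offset <= stopOffset)
        .via(business)
        .map(_.committableOffset)
        .runWith(Committer.sink(committerSettings))
    }
    .toMat(Sink.ignore)(Consumer.DrainingControl.apply)
    .run()

Each sub-stream completes once it sees an offset past stopOffset; the outer stream keeps running until you call control.drainAndShutdown(), since new partitions could still be assigned.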
How is Consumer.commitWithMetadataPartitionedSource meant to be used? What does the param metadataFromRecord: Function[ConsumerRecord[K, V], String] allow you to do? I can't find any examples. I want to stop processing when a message is greater than a given timestamp. I can see that the timestamp is available in metadataFromRecord, but how can I use it in the result of commitWithMetadataPartitionedSource?
The result of metadataFromRecord is only passed back to Kafka when committing an offset (see https://kafka.apache.org/21/javadoc/org/apache/kafka/clients/consumer/OffsetAndMetadata.html). The message timestamp is available in any of the sources that give you a ConsumerRecord or a CommittableMessage, without needing a commitWithMetadata source.
For the other committable sources, you would call msg.record.timestamp to get the timestamp. So given StopAfterTimestamp, you could .takeWhile { msg => msg.record.timestamp <= StopAfterTimestamp }
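As an illustration, with a plain committable source the whole thing could look roughly like this, where StopAfterTimestamp is whatever epoch-millis cut-off you have and the settings are assumed to be defined elsewhere:

import akka.kafka.Subscriptions
import akka.kafka.scaladsl.{Committer, Consumer}

// Sketch: consumerSettings, committerSettings, StopAfterTimestamp and an
// implicit ActorSystem are assumed to be in scope.
val control =
  Consumer
    .committableSource(consumerSettings, Subscriptions.topics("my-topic"))
    .takeWhile(msg => msg.record.timestamp <= StopAfterTimestamp)
    .map { msg =>
      // do the actual processing here, then pass the offset on for committing
      msg.committableOffset
    }
    .toMat(Committer.sink(committerSettings))(Consumer.DrainingControl.apply)
    .run()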
I'm running this consumer inside a LagomApplication, but I can only see the "starting" log, not the "executing" log (the topic has messages being produced). I do not see the consumer group created when checking from the broker side.

private val consumerSettings =
  ConsumerSettings(actorSystem, new StringDeserializer, new StringDeserializer)

logger.info("Starting the subscriber")

Consumer
  .plainSource(consumerSettings, Subscriptions.topics(ExternalTopic))
  .mapAsync(1)(message => {
    val request = Json.parse(message.value).as[ExternalRequest]
    logger.info("Executing {}", request)
    Future.successful(Done)
  })
  .run()
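Hard to diagnose from the snippet alone, but note that plainSource subscribes with a consumer group, so the ConsumerSettings need both bootstrap servers and a group id (either in akka.kafka.consumer config or set programmatically), and it can help to keep the materialized Control around. A minimal sketch with placeholder values, reusing the names from the snippet above, in case it helps to compare:

// Sketch with placeholder values; normally these come from configuration.
private val consumerSettings =
  ConsumerSettings(actorSystem, new StringDeserializer, new StringDeserializer)
    .withBootstrapServers("localhost:9092")
    .withGroupId("external-request-subscriber")

val control: Consumer.Control =
  Consumer
    .plainSource(consumerSettings, Subscriptions.topics(ExternalTopic))
    .mapAsync(1) { message =>
      logger.info("Executing {}", Json.parse(message.value).as[ExternalRequest])
      Future.successful(Done)
    }
    .to(Sink.ignore)
    .run()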
So I have an interesting problem: when I subscribe to a stream using Alpakka Kafka and, right at the start of the stream, use prefixAndTail(1).flatMapConcat to get the first element, it returns None even though messages are being sent to the Kafka topic. Interestingly, I am not getting this problem with a local Kafka that I run with Docker.
Does anyone know in what cases this occurs, and also whether prefixAndTail(1) is eager? i.e. will it wait in perpetuity until it happens to get an element, or is there some kind of timeout?
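As far as I know prefixAndTail(1) has no timeout of its own: it simply waits until the first upstream element arrives (or upstream completes, in which case the prefix is empty). If you want to bound that wait, one option is to put an initialTimeout in front of it; a sketch with an arbitrary 30-second cut-off, where kafkaSource stands for the source in question:

import scala.concurrent.duration._
import akka.stream.scaladsl.Source

// Sketch: fail the stream if the first element does not arrive within 30 seconds,
// instead of waiting indefinitely.
kafkaSource
  .initialTimeout(30.seconds)
  .prefixAndTail(1)
  .flatMapConcat { case (head, tail) =>
    // head is empty only if upstream completed without emitting anything
    tail.prepend(Source(head))
  }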
For Consumer.sourceWithOffsetContext, when the source is assigned multiple partitions, how is consumption managed between the partitions? I was under the impression the partitions were consumed using round-robin distribution, but I cannot find documentation to back that up (or contradict/refute it).
How can I filter the msg data in the code shown in the following link? e.g. if msg.record.value contains some string, then publish, otherwise skip, etc. (a sketch follows after the next message).

Hi all, I'm trying to use a dependency that adds Kinesis KPL support to Akka. It has a KPLFlow class to provide that support. I'm relatively new to Akka and flows, but my objective would be to have a Kafka source that is already created, and to have some type of sink replace the "native" Kinesis sink and use this flow to deliver the records. Is there a way to extract this from the flow class? Or is it possible, with just the flow, to consume from the Kafka source and deliver to a target stream?
I've created a stackoverflow question regarding this: https://stackoverflow.com/questions/73873966/akka-kafka-source-to-kinesis-sink-using-kpl
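For the filtering question above: with the committable sources you can branch on msg.record.value inside a map, publishing a ProducerMessage.single when it matches and a ProducerMessage.passThrough otherwise, so the offsets of skipped messages still get committed. A rough Scala sketch with placeholder topic names and settings:

import akka.kafka.{ConsumerMessage, ProducerMessage, Subscriptions}
import akka.kafka.scaladsl.{Consumer, Producer}
import org.apache.kafka.clients.producer.ProducerRecord

// Sketch: publish only records whose value contains "some string"; skipped
// records are passed through so their offsets are still committed.
val done =
  Consumer
    .committableSource(consumerSettings, Subscriptions.topics("source-topic"))
    .map { msg =>
      if (msg.record.value.contains("some string"))
        ProducerMessage.single(
          new ProducerRecord[String, String]("target-topic", msg.record.key, msg.record.value),
          msg.committableOffset)
      else
        ProducerMessage.passThrough[String, String, ConsumerMessage.CommittableOffset](msg.committableOffset)
    }
    .runWith(Producer.committableSink(producerSettings, committerSettings))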
Hi everyone. I have an application that receives an API request and relays it to a Kafka producer. Each request calls the producer to send a message to Kafka. The producer exists throughout the application lifetime and is shared by all requests.
producer.send(new ProducerRecord[String, String](topic, requestBody))
This works OK. Now I want to use an Alpakka Producer for the job instead. The code looks like this:
val kafkaProducer = producerSettings.createKafkaProducer()
val settingsWithProducer = producerSettings.withProducer(kafkaProducer)
val done = Source.single(requestBody)
  .map(value => new ProducerRecord[String, String](topic, value))
  .runWith(Producer.plainSink(settingsWithProducer))
What are the advantages of the Alpakka Producer over the plain, vanilla producer? I don't know whether the new approach can help me handle a large number of API requests, in order, at the same time.
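One thing the Alpakka producer buys you is backpressure and a stream you can keep running for the lifetime of the app, rather than materializing a new single-element stream per request. A common pattern (sketched here, with the buffer size as a placeholder) is to feed one long-running Producer.plainSink from a queue and have each API request offer its payload to that queue; elements are then produced in the order they were accepted:

import akka.stream.QueueOfferResult
import akka.stream.scaladsl.{Keep, Source}
import akka.kafka.scaladsl.Producer
import org.apache.kafka.clients.producer.ProducerRecord

// Sketch: one long-running stream for the whole application lifetime.
// Assumes producerSettings, topic and an implicit ActorSystem are in scope.
val queue =
  Source
    .queue[String](1000) // thread-safe BoundedSourceQueue
    .map(value => new ProducerRecord[String, String](topic, value))
    .toMat(Producer.plainSink(producerSettings))(Keep.left)
    .run()

// Per API request: offer the request body instead of materializing a new stream.
def relay(requestBody: String): QueueOfferResult =
  queue.offer(requestBody) // Enqueued, or Dropped if the buffer is full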
def atMostOnceSource[K, V]: Source[ConsumerRecord[K, V], NotUsed] = {
  Consumer
    .committableSource[K, V](consumerSettings, Subscriptions.topics(allTopics))
    .groupedWithin(maxBatchSize, maxBatchDuration)
    .mapAsync(1) { messages: Seq[CommittableMessage[K, V]] =>
      // Commit the whole batch *before* its messages are emitted downstream (at-most-once).
      val committableOffsetBatch =
        CommittableOffsetBatch(messages.map(_.committableOffset))
      Source
        .single(committableOffsetBatch)
        .toMat(Committer.sink(committerSettings))(Keep.right)
        .run()                 // requires an implicit Materializer / ActorSystem
        .map(_ => messages)    // requires an implicit ExecutionContext
    }
    .mapConcat(identity)
    .map(_.record)                       // emit the plain ConsumerRecords
    .mapMaterializedValue(_ => NotUsed)  // drop the Consumer.Control to match the declared type
}
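For comparison, Alpakka Kafka also ships Consumer.atMostOnceSource, which commits each record before emitting it downstream; the snippet above trades that per-record commit for one commit per groupedWithin batch, which is usually much cheaper. Roughly:

// The built-in per-record at-most-once variant (one commit per record).
val records = Consumer.atMostOnceSource(consumerSettings, Subscriptions.topics(allTopics))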