    Miklos Szots
    Here again, can someone explain why both HollowConsumer and HollowProducer need a type bound on their Builder class? Is it really not possible to use these classes without providing a dummy Builder implementation? At least in Scala, this code does not compile:
        val consumer = HollowConsumer
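    For context, the type bound is the usual Java self-typed ("curiously recurring") builder pattern: it lets subclass builders' with* methods return the concrete subclass type, so chaining keeps working in subclasses. A minimal self-contained sketch of the pattern (the class names here are illustrative, not Hollow's actual classes; in practice you would start from a static factory such as HollowConsumer.withBlobRetriever(...) rather than referencing the builder type directly):

```java
// Minimal sketch of a self-typed builder, mirroring the
// `Builder<B extends Builder<B>>` bound seen in Hollow.
public class SelfTypedBuilderDemo {
    static class Builder<B extends Builder<B>> {
        String name;

        @SuppressWarnings("unchecked")
        public B withName(String name) {
            this.name = name;
            return (B) this; // returns the concrete subclass type, not Builder
        }

        public String build() {
            return "built:" + name;
        }
    }

    // A concrete subclass still gets correctly-typed with* chaining.
    static class FancyBuilder extends Builder<FancyBuilder> {
        public FancyBuilder withExtra() { return this; }
    }

    public static void main(String[] args) {
        // withName(...) returns FancyBuilder here, so withExtra() chains.
        String result = new FancyBuilder().withName("demo").withExtra().build();
        System.out.println(result); // built:demo
    }
}
```

    The bound exists so that subclasses don't lose their own type mid-chain; callers never need to supply a dummy Builder, they just go through the static entry points.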
    Alexandru-Gabriel Gherguț
    Hello. Can I get a pair of eyes on this small PR? Netflix/hollow#459
    Mandeep Gandhi

    Hey folks,
    I am getting a NoClassDefFoundError while building my project, for one of the classes used by my data model. Not sure how to get it included. Kindly help.
    Execution failed for task ':generateHollowConsumerApi'.


    Relevant stack trace:
    Caused by: java.lang.NoClassDefFoundError: Lcom/fasterxml/jackson/databind/JsonNode;
    at com.netflix.hollow.core.write.objectmapper.HollowObjectTypeMapper.<init>(HollowObjectTypeMapper.java:84)
    at com.netflix.hollow.core.write.objectmapper.HollowObjectMapper.getTypeMapper(HollowObjectMapper.java:137)
    at com.netflix.hollow.core.write.objectmapper.HollowObjectMapper.getTypeMapper(HollowObjectMapper.java:109)
    at com.netflix.hollow.core.write.objectmapper.HollowObjectMapper.initializeTypeState(HollowObjectMapper.java:105)
    at com.netflix.nebula.hollow.ApiGeneratorTask.generateApi(ApiGeneratorTask.java:59)
    at org.gradle.internal.reflect.JavaMethod.invoke(JavaMethod.java:103)
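    A NoClassDefFoundError for JsonNode in generateHollowConsumerApi usually means jackson-databind is visible at compile time but not on the classpath the generator task uses to load the data-model classes. A hedged sketch of the likely fix (the exact configuration name depends on your project layout and nebula-hollow plugin version; substitute a real version for the placeholder):

```groovy
dependencies {
    // Make Jackson available to the module whose classes the
    // API generator loads reflectively, not just transitively.
    implementation 'com.fasterxml.jackson.core:jackson-databind:<version>'
}
```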


    Hi, I am getting this exception.

    Please help with this.

    Alexandru-Gabriel Gherguț

    If a dataset fails in the current cycle, is there a recommended way for a classic producer to publish the previous state for that dataset? We don't want to fail the cycle; we want to publish the successful datasets.

    For example, we have

    public class DomainFilterConfig {
        private String orgId;
        private List<String> whitelistedClickRedirectDomains;
    }

    and in case of failure, we tried


    but this leads to some strange behavior: the contents of the list change to IDs mixed with strings from other datasets.

    Viktor Nyström
    @AlexandruGhergut the previous state is already published, no? Why wouldn't you want to fail the current cycle? Not failing it would lead to a silent error where your dataset never updates but continues to provide new states, right?
    Alexandru-Gabriel Gherguț
    @viqtor yes, it is published. We don't want to fail the current cycle because other datasets might have been updated. The error would indeed be a silent one, but because we cannot publish anything for the failed dataset, the write state will interpret, while building the diff, that all entries in that dataset have been deleted, whereas we want to keep the ones from the previous cycle. We are not using the incremental producer.
    I happened to look at the README.md today - is it true that 3.0.1 is the latest "stable" version? https://github.com/Netflix/hollow/blob/master/README.md
    Hi all, I'm looking for a way to make the producer HA. Has anyone used locking to support multiple producers running concurrently? Is it recommended?
    I am getting the below error again:
    java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.ArrayIndexOutOfBoundsException
    at com.netflix.hollow.core.write.HollowWriteStateEngine.resetToLastPrepareForNextCycle(HollowWriteStateEngine.java:265)
    at com.netflix.hollow.api.producer.HollowProducer.runCycle(HollowProducer.java:476)
    at com.netflix.hollow.api.producer.HollowProducer.runCycle(HollowProducer.java:390)
    Is the above error related to memory?
    Hey, I updated Hollow to 4.9.1 and noticed odd behavior with the incremental producer. When I publish an incremental update, it overwrites the whole snapshot, and only the partial data published through the incremental update is available. Am I missing something with the new incremental producer? Below is my configuration:
    producer = HollowProducer.withPublisher(publisher)
            .withAnnouncer(announcer)
            .withMetricsCollector(metricsCollector)
            .withBlobStorageCleaner(blobStorageCleaner)
            .withListeners()
            .build();
    hollowIncrementalProducer = HollowProducer.withPublisher(publisher)
            .withAnnouncer(announcer)
            .withMetricsCollector(metricsCollector)
            .withBlobStorageCleaner(blobStorageCleaner)
            .withListeners()
            .buildIncremental();
    Cosmin Ioniță
    Hello! Can I please get a review on this PR? Netflix/hollow#470
    This PR may help other folks in the community
    We are working on an application that has two producers: a Hollow Producer and an Incremental Hollow Producer. The application receives messages from Kafka and, depending on the message type, uses one or the other to publish blobs. The issue is how to keep track of the versions, as each wraps a different HollowProducer under the covers. We were thinking of creating pollers for both producers that check whether the version has changed compared to their current version and then restore to that version. Is this a good way to synchronise the changes between the two producers, or is there a better or more efficient way? Any help would be greatly appreciated.
    Mark Zelou Garces

    Hi all,

    I am new to Hollow, and I am currently looking for a cache solution to replace my DIY custom cache.
    The system looks like this:

    [App nodes...] -> fetches data from a [couchbase] (cache metadata, source of truth)
    [The App] holds this fetched data and store it in memory and act as app's cache.

    The caching system works, but with the added business logic I am seeing REST API request latencies of around ~600 ms to 1000 ms+.

    With the current setup, I have these goals:

    1. Better request performance, averaging 300 ms
    2. Horizontal scaling

    After checking the Hollow documentation, I am thinking about this approach:



    1. With this approach, since the consumer fetches data snapshots and deltas from AWS S3,
      will the performance improve compared to a purely in-memory cache?

    2. Is Hollow the go-to for horizontal scaling?

    3. Should I proceed with Hollow given my requirements?

    Sorry for being a noob; I want to hear from experts experienced with Hollow. Bless me with your knowledge. Have a nice day, and many thanks in advance!

    [Image 1: the current setup]
    [Image 2: what I am thinking when integrating with Hollow]
    Good day. I am trying to produce a delta as per this documentation: https://hollow.how/advanced-topics/#delta-based-producer-input. I am able to update the Hollow write state as described. How do I create a delta blob and publish it after updating the write state engine? Any help is appreciated.
    Hello Folks,
    Is com.fasterxml.jackson.databind.JsonNode supported as an instance variable for writing to Hollow? If yes, can we do indexing on this JsonNode object?
    Please see the params field in the DTO below.
    public class Actor {
        public int actorId;
        public String actorName;
        public JsonNode params;
        public Actor() { }
        public Actor(int actorId, String actorName, JsonNode params) {
            this.actorId = actorId;
            this.actorName = actorName;
            this.params = params;
        }
    }

    I have the following object written to hollow.

      @HollowPrimaryKey(fields = {"flowId"})
    public class FlowDTO {
      private String name;
      private String flowId;
      private String orgId;
      private Map<String, String> attributes;
    }

    I want to build an index for the dataset using the orgId field plus multiple key-value pairs in the attributes map.

    For example, if the attributes map contains

    k1 -> v1
    k2 -> v2
    k3 -> v3
    k4 -> v4
    kn -> vn

    I want to build an index where I can query the dataset using orgId and k1=v1 and k2=v2.
    I am able to get the index working with one key-value pair in my consumer:

    this.mapperAPIHashIndex = new MapperAPIHashIndex(hollowConsumer, true, "FlowDTO",
          "", "orgId.value", "attributes.key.value", "attributes.value.value");

    The above index works fine for querying by "orgId"="xx" and "k1"="v1",
    but it does not work when I want to query by "orgId"="xx" and "k1"="v1" and "k2"="v2".

    I did refer to an earlier thread here by user @mahipal0913, but I am not sure about the proposed solution of using two different hash indexes.
    Is the idea that I get the HashIndexResult of one hash index and then check whether its ordinals are contained in the HashIndexResult of the other index? Is this an efficient way of building the index for this scenario?

    I also need to build the index with more than just two key-value pairs in the map, e.g. query by "orgId"="xx" and "k1"="v1" and "k2"="v2" and "k3"="v3". Do I then build three different hash indexes and compare them to see which ordinals are common across the three results?

    Please point me to the right way to build the index for the examples above.
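    One way to combine results from several hash indexes is to intersect their ordinal sets, which stays cheap because the intersection only shrinks. A self-contained sketch of that intersection using java.util.BitSet (the plain int[] arrays here stand in for the ordinals you would iterate out of each real HashIndexResult; the ordinal values are made up for illustration):

```java
import java.util.BitSet;

public class OrdinalIntersection {
    // Intersect any number of ordinal sets; each int[] stands in for the
    // ordinals returned by one HashIndexResult.
    static BitSet intersect(int[]... ordinalSets) {
        BitSet acc = null;
        for (int[] set : ordinalSets) {
            BitSet bits = new BitSet();
            for (int ordinal : set) bits.set(ordinal);
            if (acc == null) acc = bits;
            else acc.and(bits);        // keep only ordinals present in every set
        }
        return acc == null ? new BitSet() : acc;
    }

    public static void main(String[] args) {
        int[] k1v1 = {1, 3, 5, 9};   // ordinals matching orgId=xx, k1=v1
        int[] k2v2 = {3, 4, 5};      // ordinals matching orgId=xx, k2=v2
        int[] k3v3 = {5, 9};         // ordinals matching orgId=xx, k3=v3
        System.out.println(intersect(k1v1, k2v2, k3v3)); // {5}
    }
}
```

    A practical refinement is to start from the smallest result set, since the intersection can never grow beyond it.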

    hello hollowers - I'm having to use UUIDs as primary keys. I have this working modelled both as a plain hex string (f0af802fd41047d28104fac7bc295da5) and as a composite type with two 64-bit ints. I'm assuming the latter will be faster to index and more compact, though I haven't tested to confirm. Unfortunately, the use of the composite key somewhat defeats the usefulness of the explorer UI. I'm curious whether anyone else has faced this choice, and perhaps done any quick measurements of the impact?
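    On the trade-off above: the two-long form is 16 bytes of fixed-width data versus 32 hex characters, and the conversion between the two representations is lossless, so the composite form should be both more compact and cheaper to hash. A self-contained sketch of the conversion (the UUID value is the one from the message, with dashes inserted so UUID.fromString accepts it):

```java
import java.util.UUID;

public class UuidAsTwoLongs {
    public static void main(String[] args) {
        // Parse the hex form into a UUID, then split it into its two
        // 64-bit halves, which can be modelled as a composite key of two longs.
        UUID id = UUID.fromString("f0af802f-d410-47d2-8104-fac7bc295da5");
        long msb = id.getMostSignificantBits();
        long lsb = id.getLeastSignificantBits();

        // Round-trip back to verify nothing is lost.
        UUID roundTrip = new UUID(msb, lsb);
        System.out.println(roundTrip.equals(id)); // true
    }
}
```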

    Is there a reason that the prior state is available when doing com.netflix.hollow.api.producer.HollowProducer#runCycle (via com.netflix.hollow.api.producer.HollowProducer.WriteState#getPriorState), but it's not available on an incremental producer (com.netflix.hollow.api.producer.HollowProducer.Incremental.IncrementalPopulator has no access to the write state)?

    I can probably work around it, but I just wanted to ask whether it was a deliberate API choice.


    Is it expected that, if I have a parent/child relationship, when I call com.netflix.hollow.api.producer.HollowProducer.Incremental.IncrementalPopulator#populate I must only call it with the parent object? I.e., if I have a Movie with a list of Actors, even if only a single actor has changed, I cannot call populate(Henry Cavill); I must do populate(Man Of Steel)?

    In my current implementation, doing the latter results in duplicate records, but I'm unsure if that's operator error or by design

    Hi, silly question - can the same @HollowTypeName be set on both a List<String> field and a String field if the values of each are to be deduplicated?
    Hi all, I'm seeing weird behavior when I have multiple top-level models with runIncrementalCycle.
    Given two models A and B: when I update only A in my runIncrementalCycle, the data is published and B can still be queried from my consumers. But when I restart my consumer, it fails to load the schema for B.
    To work around that, I update a default object of B every time I update A, and vice versa. What might be the problem? Thanks in advance.
    Olavo Masayuki Machado Shibata
    I want to use Hollow for a small set of data, around 50 entries. The idea of using Hollow is that I don't want to restart my application every time this data changes, which happens weekly. Would Hollow be a good solution for that?

    Is the latest stable version for Hollow really 3.0.1, as the README states, or is 5.0.8 considered stable?

    Additionally, has anyone upgraded from any of the 3.x versions to 5.x? Did you have to migrate to a new namespace or produce a new snapshot, or is it safe to upgrade in-place with an existing producer-consumer already running?



    Question about push notifications for delta updates, referencing Hollow.how: "When your AnnouncementWatcher is initialized, you should immediately set up your selected announcement mechanism -- either subscribe to your push notifications or set up a thread to poll for updates."

    How do I subscribe to push notifications? I believe the HollowReference project polls for updates (setupPollingThread()), as per the below code snippet:

    public DynamoDBAnnouncementWatcher(AWSCredentials credentials, String tableName, String blobNamespace) {
            this.dynamoDB = new DynamoDB(new AmazonDynamoDBClient(credentials));
            this.tableName = tableName;
            this.blobNamespace = blobNamespace;
            this.subscribedConsumers = Collections.synchronizedList(new ArrayList<HollowConsumer>());
            this.latestVersion = readLatestVersion();
    }

    I could not find an example of push notifications; any guidance/pointers would be helpful!

    Appreciate help in advance!

    Ping! Can someone answer the above question? Thanks!
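    The push-based shape of the pattern is roughly: the producer-side announcer publishes the new version to a pub/sub channel (SNS/SQS, Kafka, etc.), and the watcher's subscription callback reacts to each announcement, typically by calling triggerRefreshTo on its subscribed consumers, instead of a polling thread doing so. A self-contained sketch of the watcher side, with a hypothetical MessageBus standing in for the push transport (the real HollowConsumer.AnnouncementWatcher interface differs; this only shows the callback flow):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.LongConsumer;

public class PushAnnouncementWatcherSketch {
    // Hypothetical push transport: announce(version) invokes every
    // registered callback, standing in for SNS/SQS/Kafka delivery.
    static class MessageBus {
        private final List<LongConsumer> subscribers = new ArrayList<>();
        void subscribe(LongConsumer callback) { subscribers.add(callback); }
        void announce(long version) { subscribers.forEach(s -> s.accept(version)); }
    }

    // Watcher: subscribes once at construction; each announcement updates
    // the latest version (and would trigger consumer refreshes).
    static class PushWatcher {
        private volatile long latestVersion = Long.MIN_VALUE;
        PushWatcher(MessageBus bus) {
            bus.subscribe(version -> {
                latestVersion = version;
                // real code: subscribedConsumers.forEach(c -> c.triggerRefreshTo(version));
            });
        }
        long getLatestVersion() { return latestVersion; }
    }

    public static void main(String[] args) {
        MessageBus bus = new MessageBus();
        PushWatcher watcher = new PushWatcher(bus);
        bus.announce(20240101120000L); // producer announces a new version
        System.out.println(watcher.getLatestVersion()); // 20240101120000
    }
}
```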
    Mike Muske
    @rpalcolea Can you guys take a look at this PR? Netflix/hollow#512
    Mike Muske

    @jkade v4.9.2 is the latest. This project is still actively developed and used widely at Netflix. I agree the docs could use some more attention.

    This was the last message I could find from a Netflix person, and it is 8+ months old. Is anyone from Netflix still monitoring this channel?

    @dkoszewnik :point_up:
    Drew Koszewnik
    @mikemuske taking a look now.
    I'm assuming you are blocked on this? Should we push a release out for you immediately?
    Drew Koszewnik
    @mikemuske I have merged the PR and will cut a release imminently.
    Erich Weidner
    Thanks @dkoszewnik
    Drew Koszewnik
    Absolutely, thanks for the fix. v5.2.2 releasing now.
    Erich Weidner
    That one was tough to find; it only occurred on datasets where our producer was running with 100+ GB, which made debugging very difficult.
    Drew Koszewnik
    @erichw yes I would imagine that took a lot of detective work. Nice work and thanks again for tracking it down :).
    Mike Muske
    @dkoszewnik thank you!
    Dillon Beliveau
    Hey everyone - I didn't quite feel this was issue-worthy, but I've been trying out Hollow on the AWS Graviton aarch64 instances and noticed an unaligned bus access in FixedLengthElementArray - permitted on x86_64, but a hardware exception on aarch64. Judging by the comments in that class, it seems intentional. I'm curious whether there are any plans to address this on the roadmap, or whether it has been looked into before?
    Dillon Beliveau
    In the meantime I'll see what I can come up with
    @Dillonb we did some testing on ARM last year and observed a SIGBUS crash from unaligned access in FixedLengthElementArray. The failure seemed to occur only in the delta-application path, so we were still able to run microbenchmarks on ARM vs x86.
    I can share our results from the microbenchmarks we ran if you're curious.
    We didn't get to identifying the root cause of the SIGBUS on ARM, and we don't have anything on the roadmap for ARM support.
    Me and some other folks here would be keen to hear what you find, though :)
    Dillon Beliveau

    @Sunjeet it seems to be one or all of these three calls to putOrderedLong() in FixedLengthElementArray that's causing the crash I'm seeing, at least: https://github.com/Netflix/hollow/blob/master/hollow/src/main/java/com/netflix/hollow/core/memory/encoding/FixedLengthElementArray.java#L194-L200

    ARM doesn't support reading/writing unaligned values - the solution, I believe, would be to replace those calls with plain accesses to the array. That would negatively impact performance, though, so ideally we could keep the existing path on platforms that support unaligned access and use the new path only on platforms that do not.
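    As a data point on the "plain accesses" fallback mentioned above: an unaligned 64-bit store can always be decomposed into eight single-byte stores, which are alignment-safe on ARM at the cost of more instructions. A self-contained sketch of that decomposition over an on-heap byte[] (little-endian, purely illustrative; Hollow's actual arrays live off-heap behind Unsafe):

```java
public class UnalignedStoreFallback {
    // Write a long at an arbitrary (possibly unaligned) byte offset
    // using eight alignment-safe single-byte stores, little-endian.
    static void putLongByteWise(byte[] buf, int offset, long value) {
        for (int i = 0; i < 8; i++) {
            buf[offset + i] = (byte) (value >>> (8 * i));
        }
    }

    // Read the long back the same way, reassembling it byte by byte.
    static long getLongByteWise(byte[] buf, int offset) {
        long value = 0;
        for (int i = 0; i < 8; i++) {
            value |= (buf[offset + i] & 0xFFL) << (8 * i);
        }
        return value;
    }

    public static void main(String[] args) {
        byte[] buf = new byte[16];
        putLongByteWise(buf, 3, 0x1122334455667788L); // offset 3 is unaligned
        System.out.println(Long.toHexString(getLongByteWise(buf, 3))); // 1122334455667788
    }
}
```

    Note this byte-wise path loses the ordered-store semantics of putOrderedLong(), so a real fix would also need to account for the memory-ordering guarantees the original call provides.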