    We are working on an application that has two producers: a Hollow Producer and an Incremental Hollow Producer. The application receives messages from Kafka and, depending on the message type, uses the Hollow Producer or the Incremental Hollow Producer to publish the blobs. The issue is how to keep track of the versions, since the two wrap different HollowProducers under the covers. We were thinking of creating pollers for both producers that check whether the version has changed compared to their current version and then restore to that version. Is this a good way to synchronise the changes between the two producers, or is there a better or more efficient way? Any help would be greatly appreciated.
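    The poll-then-restore idea described above can be sketched as follows. This is only a sketch with self-contained stand-in types: `VersionSource` stands in for an AnnouncementWatcher-style version lookup, and `Restorable.restoreTo` stands in for `HollowProducer#restore(version, blobRetriever)`; none of these names are Hollow APIs.

```java
// Self-contained sketch of the polling/restore pattern. VersionSource and
// Restorable are stand-ins for an announcement watcher and HollowProducer#restore.
interface VersionSource { long latestVersion(); }
interface Restorable { void restoreTo(long version); }

final class ProducerPoller {
    private final VersionSource announcements;
    private final Restorable producer;
    private long currentVersion;

    ProducerPoller(VersionSource announcements, Restorable producer, long startVersion) {
        this.announcements = announcements;
        this.producer = producer;
        this.currentVersion = startVersion;
    }

    // Call this immediately before each producer cycle; returns true if a restore happened.
    boolean syncIfBehind() {
        long announced = announcements.latestVersion();
        if (announced <= currentVersion) return false;
        producer.restoreTo(announced);  // with Hollow: producer.restore(announced, blobRetriever)
        currentVersion = announced;
        return true;
    }

    long currentVersion() { return currentVersion; }
}
```

    Note the check-then-restore is only race-free if publishes from the two producers are serialized somewhere (e.g. single-threaded consumption of the Kafka topic); otherwise one producer can publish between the other's poll and its cycle.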
    Mark Zelou Garces

    Hi all,

    I am new to Hollow, and I am currently looking for a cache solution to replace my DIY custom cache.
    The system looks like this:

    [App nodes...] -> fetch data from [Couchbase] (cache metadata, source of truth)
    [The App] holds this fetched data, stores it in memory, and acts as the app's cache.

    The caching system works, but with the added business logic I am getting REST API request latencies of around ~600ms to 1000ms+.

    With the current setup, I have these goals:

    1. Better request performance, averaging 300ms
    2. Horizontal scaling

    After checking the Hollow documentation, I am thinking about this approach.



    1. With this approach, since the consumer travels to AWS S3 to consume data snapshots and deltas,
      will performance improve compared to a direct in-memory cache?

    2. Is Hollow the go-to for horizontal scaling?

    3. Should I proceed with Hollow given my requirements?

    Sorry for being a noob; I want to hear from experts experienced with Hollow. Bless me with your knowledge. Have a nice day, and many thanks in advance!

    14 replies
    Image 1 current
    Image 2 what I am thinking when I integrate with Hollow
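    On question 1, it may help to note that a Hollow consumer only goes to S3 when refreshing to a new version; queries are always served from a local in-memory copy of the dataset, so steady-state reads are in-memory. A rough sketch of the consumer wiring, assuming S3-backed `blobRetriever` and `announcementWatcher` implementations like those in hollow-reference-implementation (both names are placeholders for your own instances):

```java
// Sketch: blobRetriever and announcementWatcher would be your S3-backed
// implementations. After a refresh, all reads hit local memory, not S3.
HollowConsumer consumer = HollowConsumer
        .withBlobRetriever(blobRetriever)             // downloads snapshot/delta blobs
        .withAnnouncementWatcher(announcementWatcher) // learns about new versions
        .build();

consumer.triggerRefresh();  // initial load: snapshot from S3 into local memory
// Subsequent queries (indexes, generated API) read from local memory only.
```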
    Good day. I am trying to produce a delta as per this documentation https://hollow.how/advanced-topics/#delta-based-producer-input. I am able to update the hollow write state as per the documentation. How do I create a delta blob and publish it after updating the write state engine? Any help is appreciated.
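    For the question above, a sketch of the blob-writing step, assuming you already have a populated HollowWriteStateEngine as in the linked docs; the low-level API is HollowBlobWriter, and the file path and stream handling here are illustrative only:

```java
// After populating the write state engine for this cycle:
writeEngine.prepareForWrite();

HollowBlobWriter writer = new HollowBlobWriter(writeEngine);

// A delta can only be produced relative to a prior cycle (or a restored state);
// for the very first state, write a snapshot with writer.writeSnapshot(...).
try (OutputStream os = Files.newOutputStream(Paths.get("delta-blob"))) {
    writer.writeDelta(os);
}

// Publish the blob via your publisher, announce the new version,
// then ready the engine for the next cycle:
writeEngine.prepareForNextCycle();
```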
    Hello Folks,
    Is com.fasterxml.jackson.databind.JsonNode supported as an instance variable for writing to Hollow? If yes, can we do indexing on this JsonNode object?
    Please see the params field in the DTO below.
    public class Actor {
        public int actorId;
        public String actorName;
        public JsonNode params;

        public Actor() { }

        public Actor(int actorId, String actorName, JsonNode params) {
            this.actorId = actorId;
            this.actorName = actorName;
            this.params = params;
        }
    }

    I have the following object written to hollow.

    @HollowPrimaryKey(fields = {"flowId"})
    public class FlowDTO {
      private String name;
      private String flowId;
      private String orgId;
      private Map<String, String> attributes;
    }

    I want to build an index for the dataset using orgId field and multiple key-value pairs in the attributes map.

    For example, if the attributes map contains

    k1 -> v1
    k2 -> v2
    k3 -> v3
    k4 -> v4
    kn -> vn

    I want to build an index where I can query the dataset using orgId and k1=v1 and k2=v2.
    I am able to get the index working with one key-value pair in my consumer.

    this.mapperAPIHashIndex = new MapperAPIHashIndex(hollowConsumer, true, "FlowDTO",
          "", "orgId.value", "attributes.key.value", "attributes.value.value");

    The above index works fine for querying by "orgId"="xx" and "k1"="v1"
    But it is not working when I want to query by "orgId"="xx" and "k1"="v1" and "k2"="v2"

    I did refer to an earlier thread here by user @mahipal0913, but I am not sure about the proposed solution of using two different hash indexes.
    Is the idea that I get the HashIndexResult of one hash index and then check whether its ordinals are contained in the HashIndexResult of the other? Is this an efficient way of building the index for this scenario?

    I also need to build the index with more than just 2 key value pairs in the map like query by "orgId"="xx" and "k1"="v1" and "k2"="v2" and "k3"="v3". Then do I build 3 different hash indexes and compare them to see which ordinals are common across the 3 hash index results?

    Please point me to the right way to build the index for the examples above.

    hello hollowers - I'm having to use UUIDs as primary keys. I have this working modelled both as a plain hex string (f0af802fd41047d28104fac7bc295da5) and as a composite type with two 64-bit ints. I'm assuming the latter will be faster to index and more compact, though I haven't tested to confirm. Unfortunately the composite key somewhat defeats the usefulness of the explorer UI. I'm curious whether anyone else has been faced with this choice, and perhaps done any quick measurements on the impact?
    1 reply

    is there a reason that the prior state is available when doing com.netflix.hollow.api.producer.HollowProducer#runCycle (via com.netflix.hollow.api.producer.HollowProducer.WriteState#getPriorState), but it's not available on an incremental run (com.netflix.hollow.api.producer.HollowProducer.Incremental.IncrementalPopulator has no access to the write state)?

    I can probably work around it, but I just wanted to ask whether it was a deliberate API choice.

    1 reply

    Is it expected that, if I have a parent/child relationship, when I call com.netflix.hollow.api.producer.HollowProducer.Incremental.IncrementalPopulator#populate I must only call it with the parent object? I.e. if I have a Movie with a list of Actors, even if only a single actor has changed, I cannot call populate( Henry Cavill ); I must do populate( Man Of Steel )?

    In my current implementation, doing the latter results in duplicate records, but I'm unsure whether that's operator error or by design.

    3 replies
    Hi, silly question: can the same @HollowTypeName be set on both a List<String> field and a String field if the values of each are to be deduplicated?
    2 replies
    Hi all, I'm seeing weird behavior when I have multiple top-level models with runIncrementalCycle.
    Given two models A and B: when I update only A in my runIncrementalCycle, the data is published and B can still be queried from my consumers. But when I restart my consumer, it fails to load the schema for B.
    To work around that, I now update a default object of B every time I update A, and vice versa. What might be the problem? Thanks in advance.
    Olavo Masayuki Machado Shibata
    I want to use Hollow for a small set of data, around 50 entries. The idea of using Hollow is that I don't want to restart my application every time this data changes, which happens weekly. Would Hollow be a good solution for that?

    Is the latest stable version for Hollow really 3.0.1, as the README states, or is 5.0.8 considered stable?

    Additionally, has anyone upgraded from any of the 3.x versions to 5.x? Did you have to migrate to a new namespace or produce a new snapshot, or is it safe to upgrade in-place with an existing producer-consumer already running?



    Question about push notifications for delta updates, referencing this from Hollow.how: "When your AnnouncementWatcher is initialized, you should immediately set up your selected announcement mechanism -- either subscribe to your push notifications or set up a thread to poll for updates."

    How do I subscribe to push notifications? The Hollow reference implementation polls for updates (setupPollingThread()), I believe, as per the code snippet below:

    public DynamoDBAnnouncementWatcher(AWSCredentials credentials, String tableName, String blobNamespace) {
            this.dynamoDB = new DynamoDB(new AmazonDynamoDBClient(credentials));
            this.tableName = tableName;
            this.blobNamespace = blobNamespace;
            this.subscribedConsumers = Collections.synchronizedList(new ArrayList<HollowConsumer>());
            this.latestVersion = readLatestVersion();
    }

    I could not find an example of push notifications; any guidance/pointers would be helpful!

    Appreciate help in advance!
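    One way to answer the push-notification question above: the shape of a push-driven watcher is the same as the polling one, except that instead of a polling thread, your messaging callback (SNS, SQS, Kafka, etc.) updates the latest version and refreshes subscribers. A self-contained sketch; `PushAnnouncementWatcher` and `versionAnnounced` are invented names, and `Refreshable` stands in for `HollowConsumer` (whose real method is `triggerAsyncRefresh()`):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Self-contained sketch of a push-driven announcement watcher.
final class PushAnnouncementWatcher {
    interface Refreshable { void triggerAsyncRefresh(); }  // stand-in for HollowConsumer

    private final List<Refreshable> subscribed = new CopyOnWriteArrayList<>();
    private volatile long latestVersion = Long.MIN_VALUE;

    public long getLatestVersion() { return latestVersion; }

    public void subscribeToUpdates(Refreshable consumer) { subscribed.add(consumer); }

    // Call this from your push callback (e.g., an SNS/Kafka message handler).
    public void versionAnnounced(long version) {
        if (version <= latestVersion) return;  // ignore stale or duplicate announcements
        latestVersion = version;
        for (Refreshable c : subscribed) c.triggerAsyncRefresh();
    }
}
```

    The out-of-order guard matters because push channels generally do not guarantee ordering; without it a late-arriving old announcement could roll consumers backwards.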

    Ping! Can someone answer the question above? Thanks!
    Mike Muske
    @rpalcolea Can you guys take a look at this PR? Netflix/hollow#512
    Mike Muske

    @jkade v4.9.2 is the latest. This project is still actively developed and used widely at Netflix. I agree the docs could use some more attention.

    This was the last message I could find from a Netflix person, and it is 8+ months old. Is anyone from Netflix still monitoring this channel?

    @dkoszewnik :point_up:
    Drew Koszewnik
    @mikemuske taking a look now.
    I'm assuming you are blocked on this? Should we push a release out for you immediately?
    Drew Koszewnik
    @mikemuske I have merged the PR and will cut a release imminently.
    Erich Weidner
    Thanks @dkoszewnik
    Drew Koszewnik
    Absolutely, thanks for the fix. v5.2.2 releasing now.
    Erich Weidner
    That one was a tough one to find and only occurred on datasets where our producer was running with 100+GB which made debugging very difficult.
    Drew Koszewnik
    @erichw yes I would imagine that took a lot of detective work. Nice work and thanks again for tracking it down :).
    Mike Muske
    @dkoszewnik thank you!
    Dillon Beliveau
    Hey everyone - I didn't quite feel this was issue-worthy, but I've been trying out Hollow on the AWS Graviton aarch64 instances and noticed an unaligned bus access in FixedLengthElementArray: permitted on x86_64, but a hardware exception on aarch64. Judging by the comments in that class it seems intentional. I'm curious whether there are any plans to address this on the roadmap, or whether it has been looked into before?
    Dillon Beliveau
    In the meantime I'll see what I can come up with
    @Dillonb we did some testing on ARM last year and observed a SIGBUS crash from unaligned access in FixedLengthElementArray. The failure seemed to occur only in the delta application path, so we were still able to run microbenchmarks on ARM vs x86.
    I can share our results from the microbenchmarks we ran if you're curious.
    We didn't get to identifying the root cause of the SIGBUS on ARM, and we don't have anything on the roadmap for ARM support.
    Some other folks here and I would be keen to hear what you find, though :)
    Dillon Beliveau

    @Sunjeet it seems to be one (or all) of these three calls to putOrderedLong() in FixedLengthElementArray that's causing the crash I'm seeing, at least: https://github.com/Netflix/hollow/blob/master/hollow/src/main/java/com/netflix/hollow/core/memory/encoding/FixedLengthElementArray.java#L194-L200

    ARM doesn't support reading/writing unaligned values, so the fix, I believe, would be to replace those calls with plain accesses to the array. That would negatively impact performance, though, so ideally we'd keep the existing path on platforms that support unaligned access and use the new path only on platforms that don't.

    Mike Muske
    @dkoszewnik I saw your note about an additional spot where there may be an overflow. I've looked at it a little and it appears to me like you're right. https://github.com/Netflix/hollow/blob/7ff25c9b3113d731341ca203ee81a46d7ab46cdc/hollow/src/main/java/com/netflix/hollow/core/write/HollowListTypeWriteState.java#L226-L227
    Did you confirm it's not a problem?
    Dillon Beliveau

    I'd love to see anything you're willing to share about your testing on ARM in addition to this PR Netflix/hollow#503 though :)

    I'll try to spend some time this weekend hacking on this to see if I can get something together.

    Drew Koszewnik
    @mikemuske after posting that, I realized the integer written there is the number of 64-bit longs required to write all of the element data, not the number of elements. So you should be ok there.
    Mike Muske
    well, that will be a little smaller, but won't it still exceed 32 bits?
    For example, the dataset I'm attempting to load now has 6.6 billion total array elements, 6.6 billion x 33 bits = 217,800,000,000, and dividing by 64 gives 3,403,125,000 which still overflows Integer.MAX_VALUE
    Drew Koszewnik
    Your references are 33 bits each?
    Mike Muske
    i was thinking they'd have to be... but maybe that's where i went wrong
    Drew Koszewnik
    The number of bits required for each element will only be the number required to represent the max ordinal of the referenced type in your dataset.
    Mike Muske
    got it, so is that 29 bits then?
    Drew Koszewnik
    So it depends what the cardinality of your referenced type is, if you only have 1,000 unique elements, then you'll need 10 bits.
    Mike Muske
    i see
    Drew Koszewnik
    6.6 billion seems very large, pushing the limits here -- maybe there's some way to remodel the data?
    Mike Muske
    I guess in this case we are expecting about 121 million strings, so only need 27 bits, but it will still overflow.
    Yeah, it's a ridiculous dataset. I think we can shard the arrays into 4 separate spaces, so that will make it fit. But, it seems like we could fix this anyway.
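    The arithmetic in this exchange can be sanity-checked with a small throwaway helper (not Hollow code): the number of 64-bit longs needed for the element data is ceil(elements x bitsPerElement / 64).

```java
// Throwaway helper to sanity-check the thread's arithmetic: how many 64-bit
// longs are needed to pack `elements` values at `bitsPerElement` bits each?
final class LongCountCheck {
    static long longsRequired(long elements, int bitsPerElement) {
        long totalBits = elements * bitsPerElement;  // fits in a long for these inputs
        return (totalBits + 63) / 64;                // round up to whole longs
    }
}
```

    At 6.6 billion elements x 27 bits that is 2,784,375,000 longs, which still exceeds Integer.MAX_VALUE (2,147,483,647), matching the conclusion above that the count overflows even with 27-bit references.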