    krishnaram
    @krishnaram
    java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.ArrayIndexOutOfBoundsException
    at com.netflix.hollow.core.write.HollowWriteStateEngine.resetToLastPrepareForNextCycle(HollowWriteStateEngine.java:265)
    Please help me with this
    damianChapman
    @damianChapman
    Is it possible, with the latest version of Netflix Hollow, to create an index that returns all results for an array field that has zero size?
    Deepak Garg
    @deepak0917
    We recently updated from 3.7.7 to 3.8.1 because we wanted to fix the issue mentioned here -> Netflix/hollow#431
    but we have started noticing this error after the upgrade: "Attempting to apply a delta to a state from which it was not originated!"
    Our service has been running for 3 months, but we noticed this error only after the upgrade. We rolled back to 3.7.8 but the issue is still happening. Please help @dkoszewnik @toolbear
    Drew Koszewnik
    @dkoszewnik
    Hi @deepak0917
    What this means is that either the delta your clients are attempting to apply did not originate from the state which they currently have in memory (e.g. it is tagged incorrectly), or the state is somehow corrupted in the memory on the clients.
    The most straightforward way to get out of this situation is to restart the consumers so that they can load a more recent snapshot, then continue to apply deltas from there. Is that possible in this case?
    If that is not possible, then you need to determine what state the clients are actually in, then restart your producer and restore from that state. The next published state will create a delta from the consumers' current state and they will consume it and resume progress.
    Bryan Jang Kim
    @jjangsam
    Hi, is there a performance downgrade for having a String as primary key? I have about 200k entries and am seeing slow lookup times using the XXX.uniqueIndex(consumer) method. Each lookup takes more than 100ms. The primary key is a pretty lengthy string, roughly 50 characters. Please help @dkoszewnik @toolbear!
    Bryan Jang Kim
    @jjangsam
    Actually I found my answer from the doc "Retrieval from an index is extremely cheap, and indexing is (relatively) expensive. You should create your indexes when the HollowConsumer is initialized and share them thereafter. Indexes will automatically stay up-to-date with the HollowConsumer"
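    In code, that advice amounts to building the index once when the consumer is initialized and sharing it for every lookup. A rough sketch, assuming a generated Movie API class as in the Hollow docs (names are illustrative, and the key type depends on your primary key):

```java
// Expensive: build the index once, when the HollowConsumer is initialized.
UniqueKeyIndex<Movie, String> index = Movie.uniqueIndex(consumer);

// Cheap: reuse the same index for every lookup. It stays up to date
// automatically as the consumer applies deltas.
Movie match = index.findMatch("some-primary-key");
```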
    Alexandru-Gabriel Gherguț
    @AlexandruGhergut
    Hello, could someone confirm whether the currentVersion metric of the Hollow producer is supposed to change even when there is no change in state? It seems wrong to me, but I'm not sure I fully understand the system. I have opened this PR to address it: https://github.com/Netflix/hollow/pull/458/files
    Alexandru-Gabriel Gherguț
    @AlexandruGhergut
    Clarification for the above: we want to set up an alert for consumers that are out of sync with the producer. For that we're comparing the consumer's version metric with the producer's. However the producer's version is not reliable as it changes on each cycle even when there's nothing to publish
    Alexandru-Gabriel Gherguț
    @AlexandruGhergut
    Any thoughts on the above? The change is minor. Should I compile a report of our current configuration and the behavior we observe, to make it easier to validate whether this is an issue?
    Drew Koszewnik
    @dkoszewnik
    @AlexandruGhergut the cycle version does need to update in the producer even when no change in the underlying data occurs. This is so that we can understand that progress is being made, even if no changes are occurring. For example, we store log lines tagged with each cycle version in a system like elasticsearch, then we can peruse what is happening in the system in a dashboard that uses the data in this system. Rather than change the meaning of this field, how about adding another field that indicates the last announced version, then using that to drive your alert?
    Alexandru-Gabriel Gherguț
    @AlexandruGhergut
    @dkoszewnik thank you for your response. I was thinking of the currentVersion as the last published version but it makes perfect sense to associate a new version to each cycle for tracking purposes. Another field would do the job. I used your suggestion and opened a new PR Netflix/hollow#459
    Viktor Nyström
    @viqtor
    How does one work with compound keys and UniqueKeyIndex? The generated primary key index supports compound primary keys, as explained in https://hollow.how/indexing-querying/#compound-primary-keys, but I can't see how UniqueKeyIndex would, and it's referenced in the deprecation notice on the previous class.
    Does that mean that compound keys are also deprecated?
    Rich Bolen
    @richbolen
    Does Hollow support less-than or greater-than queries? We have a need to query our data set with a calendar date: if it falls between two date fields in the data, return the record.
    Viktor Nyström
    @viqtor
    As a follow-up on my previous post: if I use the deprecated generated primary key index and don't specify the field paths for the index, is the order of the fields reversed?
    Miklos Szots
    @smiklos

    Hi all,

    Is it a valid use case for Hollow to have a job periodically take data from a small table and publish it? The focus would be on having the Hollow producer be a job in Kubernetes, as I see no reason for it to be a 24/7 running app.

    Viktor Nyström
    @viqtor
    @smiklos by all means, sounds reasonable to me at least. Just make sure the job restores your producer on startup first: https://hollow.how/producer-consumer-apis/#restoring-at-startup
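    The restore step from that page looks roughly like this (a sketch along the lines of hollow.how's "restoring at startup" example; publisher, announcer, announcementWatcher, and blobRetriever are whatever your job already wires up, and MyDataModel is a placeholder):

```java
HollowProducer producer = HollowProducer.withPublisher(publisher)
        .withAnnouncer(announcer)
        .build();

// The data model must be initialized before restoring.
producer.initializeDataModel(MyDataModel.class);

// Resume from the last announced state so the next cycle publishes
// a delta instead of starting an unrelated state chain.
long latestVersion = announcementWatcher.getLatestVersion();
producer.restore(latestVersion, blobRetriever);

producer.runCycle(state -> {
    // re-add the current contents of the source table here
});
```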
    Miklos Szots
    @smiklos
    @viqtor Thanks. I was aware of this, but the docs say the ideal usage is to reuse the producer. I guess from a metric-collection perspective that's easier.
    Miklos Szots
    @smiklos
    Here again, can someone explain why both HollowConsumer and HollowProducer need a type bound on their Builder class? Basically, is it not possible to use these classes without providing a dummy Builder implementation? At least in Scala I can't compile this code:
    
        val consumer = HollowConsumer
                .newHollowConsumer().build();
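    For comparison, in Java the self-referential bound on Builder stays inferred when you go through the static entry points, so no dummy Builder subclass is needed (blobRetriever and announcementWatcher are placeholders):

```java
HollowConsumer consumer = HollowConsumer
        .withBlobRetriever(blobRetriever)
        .withAnnouncementWatcher(announcementWatcher)
        .build();
```

    The recursive bound (B extends Builder<B>) is there so Builder subclasses can add options while keeping the fluent return type; from Scala you may need to pin the type parameter explicitly rather than rely on inference.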
    Alexandru-Gabriel Gherguț
    @AlexandruGhergut
    Hello. Can I get a pair of eyes on this small PR? Netflix/hollow#459
    Mandeep Gandhi
    @welcomemandeep

    Hey folks
    I am getting a NoClassDefFoundError while building my project, for one of the dependent classes used by my data model. Not sure how to get it included. Kindly help.
    Execution failed for task ':generateHollowConsumerApi'.

    Lcom/fasterxml/jackson/databind/JsonNode;

    Relevant Stack trace
    Caused by: java.lang.NoClassDefFoundError: Lcom/fasterxml/jackson/databind/JsonNode;
    at com.netflix.hollow.core.write.objectmapper.HollowObjectTypeMapper.<init>(HollowObjectTypeMapper.java:84)
    at com.netflix.hollow.core.write.objectmapper.HollowObjectMapper.getTypeMapper(HollowObjectMapper.java:137)
    at com.netflix.hollow.core.write.objectmapper.HollowObjectMapper.getTypeMapper(HollowObjectMapper.java:109)
    at com.netflix.hollow.core.write.objectmapper.HollowObjectMapper.initializeTypeState(HollowObjectMapper.java:105)
    at com.netflix.nebula.hollow.ApiGeneratorTask.generateApi(ApiGeneratorTask.java:59)
    at org.gradle.internal.reflect.JavaMethod.invoke(JavaMethod.java:103)

    krishnaram
    @krishnaram

    Hi, I am getting this exception.

    Please help with this.

    Alexandru-Gabriel Gherguț
    @AlexandruGhergut

    If a dataset fails in the current cycle, is there a recommended way for a classic producer to publish the previous state for that dataset? We don't want to fail the cycle, and we want to publish the successful datasets.

    For example, we have

    @HollowPrimaryKey(fields={"orgId"})
    public class DomainFilterConfig {
        private String orgId;
        private List<String> whitelistedClickRedirectDomains;
    }

    and in case of failure, we tried

     stateEngine.getTypeState(failedModel).addAllObjectsFromPreviousCycle()

    but this leads to some strange behavior: the contents of the list change to IDs mixed with strings from other datasets.

    Viktor Nyström
    @viqtor
    @AlexandruGhergut the previous state is already published, no? Why wouldn't you want to fail the current cycle? Otherwise you get a silent error where your dataset never updates but you continue to provide new states, right?
    Alexandru-Gabriel Gherguț
    @AlexandruGhergut
    @viqtor yes, it is published. We don't want to fail the current cycle because other datasets might have been updated. The error would indeed be a silent one, but because we cannot publish anything for the failed dataset, the write state will, while building the diff, interpret all entries in that dataset as deleted, whereas we want to keep the ones from the previous cycle. We are not using the incremental producer.
    jkade
    @jkade
    I happened to look at the README.md today - is it true that 3.0.1 is the latest "stable" version? https://github.com/Netflix/hollow/blob/master/README.md
    TamirYardeni
    @TamirYardeni
    Hi all, I'm looking for a way to make the producer HA. Has anyone used locking to support multiple producers running concurrently? Is it recommended?
    krishnaram
    @krishnaram
    Hi
    I am getting the below error again
    java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.ArrayIndexOutOfBoundsException
    at com.netflix.hollow.core.write.HollowWriteStateEngine.resetToLastPrepareForNextCycle(HollowWriteStateEngine.java:265)
    at com.netflix.hollow.api.producer.HollowProducer.runCycle(HollowProducer.java:476)
    at com.netflix.hollow.api.producer.HollowProducer.runCycle(HollowProducer.java:390)
    krishnaram
    @krishnaram
    Is the above error related to memory?
    pruv
    @pruv
    Hey, I updated Hollow to 4.9.1 and noticed odd behavior with the incremental producer. When I publish an incremental update, it overwrites the whole snapshot, and only the partial data published through the incremental update is available. Am I missing something with the new incremental producer? Below is my configuration:
    producer = HollowProducer.withPublisher(publisher)
            .withAnnouncer(announcer)
            .withMetricsCollector(metricsCollector)
            .withBlobStorageCleaner(blobStorageCleaner)
            .withListeners()
            .build();
    producer.initializeDataModel(classes);

    hollowIncrementalProducer = HollowProducer.withPublisher(publisher)
            .withAnnouncer(announcer)
            .withMetricsCollector(metricsCollector)
            .withBlobStorageCleaner(blobStorageCleaner)
            .withListeners()
            .buildIncremental();
    hollowIncrementalProducer.initializeDataModel(classes);
    Cosmin Ioniță
    @cosmin-ionita
    Hello! Can I please get a review on this PR? Netflix/hollow#470
    This PR may help other folks in the community
    damianChapman
    @damianChapman
    We are working on an application that has two producers: a HollowProducer and a HollowIncrementalProducer. The application receives messages from Kafka and, depending on the message type, uses one or the other to publish the blobs. The issue is how to keep track of the versions, as the two have different HollowProducers under the covers. We were thinking of creating pollers for both producers that check whether the version has changed compared to their current version and then restore to that version. Is this a good way to synchronise the changes between the two producers, or is there a better or more efficient way? Any help would be greatly appreciated.
    Mark Zelou Garces
    @fluffymarkz

    Hi all,

    I am new to Hollow, and I am currently looking for a cache solution to replace my DIY custom cache.
    The system looks like this:

    [App nodes...] -> fetch data from [Couchbase] (cache metadata, source of truth)
    [The App] holds the fetched data in memory and acts as the app's cache.

    The caching system works, but with the added business logic I am seeing REST API request performance of around ~600ms to 1000ms+.

    With the current setup, I have these goals:

    1. Better request performance, averaging 300ms
    2. Horizontal scaling

    Having checked the Hollow documentation, I am thinking about this approach:

    [image]

    [Questions]

    1. With this approach, since the consumer travels to AWS S3 to consume data snapshots and deltas,
      will performance improve compared to a purely in-memory cache?

    2. Is Hollow the go-to for horizontal scaling?

    3. Should I proceed with Hollow given my requirements?

    Sorry for being a noob; I want to hear from experts experienced with Hollow. Bless me with your knowledge. Have a nice day, and many thanks in advance!

    Image 1: current setup
    Image 2: what I am thinking when I integrate with Hollow
    mailsanchu
    @mailsanchu
    Good day. I am trying to produce a delta as per this documentation: https://hollow.how/advanced-topics/#delta-based-producer-input. I am able to update the Hollow write state as per the documentation. How do I create a delta blob and publish it after updating the write state engine? Any help is appreciated.
    garghima
    @garghima
    Hello Folks,
    Is com.fasterxml.jackson.databind.JsonNode supported as an instance variable for writing to Hollow? If yes, can we do indexing on this JsonNode object?
    Please see the params field in the DTO below.
    @HollowPrimaryKey(fields="actorId")
    public class Actor {
        public int actorId;
        public String actorName;
        public JsonNode params;
    
        public Actor() { }
    
        public Actor(int actorId, String actorName, JsonNode params) {
            this.actorId = actorId;
            this.actorName = actorName;
            this.params = params;
        }
    
    }
    javabarn
    @javabarn

    I have the following object written to hollow.

    @HollowPrimaryKey(
      fields = {"flowId"}
    )
    public class FlowDTO {
    
      private String name;
      @NotNull
      private String flowId;
      private String orgId;
      private Map<String, String> attributes;

    I want to build an index for the dataset using orgId field and multiple key-value pairs in the attributes map.

    For example, if the attributes map contains

    k1 -> v1
    k2 -> v2
    k3 -> v3
    k4 -> v4
    ...
    kn -> vn

    I want to build an index where I can query the dataset using orgId and k1=v1 and k2=v2.
    I am able to get the index working with one key-value pair in my consumer.

    this.mapperAPIHashIndex = new MapperAPIHashIndex(hollowConsumer, true, "FlowDTO",
          "", "orgId.value", "attributes.key.value", "attributes.value.value");

    The above index works fine for querying by "orgId"="xx" and "k1"="v1"
    But it is not working when I want to query by "orgId"="xx" and "k1"="v1" and "k2"="v2"

    I did refer to an earlier thread here by user @mahipal0913, but I am not sure about the proposed solution to use two different hash indexes.
    Is it that I get the HashIndexResult of one hash index and then check whether its ordinals are contained in the HashIndexResult of the other index? Is this an efficient way of building the index for this scenario?

    I also need to build the index with more than just 2 key-value pairs in the map, e.g. query by "orgId"="xx" and "k1"="v1" and "k2"="v2" and "k3"="v3". Do I then build 3 different hash indexes and compare them to see which ordinals are common across the 3 hash index results?

    Please point me to the right way to build the index for the examples above.
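    On the intersection idea: one straightforward approach (assuming, as in Hollow, that each hash index query yields a set of matching ordinals you can iterate) is to collect one result's ordinals into a BitSet and intersect with each of the others. A plain-Java sketch of just that intersection step, with the index results stood in for by int arrays:

```java
import java.util.BitSet;

// Illustrative sketch: intersecting the ordinal sets returned by several
// single-pair hash index queries. In Hollow each int[] would come from
// iterating a HashIndexResult's ordinal iterator; here plain arrays stand in.
public class OrdinalIntersection {

    static BitSet intersect(int[]... resultsPerIndex) {
        BitSet matches = new BitSet();
        for (int ordinal : resultsPerIndex[0]) {
            matches.set(ordinal);
        }
        for (int i = 1; i < resultsPerIndex.length; i++) {
            BitSet next = new BitSet();
            for (int ordinal : resultsPerIndex[i]) {
                if (matches.get(ordinal)) {
                    next.set(ordinal); // keep only ordinals present in every result
                }
            }
            matches = next;
        }
        return matches;
    }

    public static void main(String[] args) {
        // e.g. ordinals matching orgId + k1=v1, and ordinals matching orgId + k2=v2
        System.out.println(intersect(new int[]{1, 4, 7, 9}, new int[]{4, 9, 12})); // prints {4, 9}
    }
}
```

    With n key-value pairs you would run n single-pair queries and fold the intersection across all of them; whether that is efficient enough depends on how selective each individual result is, so it helps to start the fold from the smallest result set.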

    adrian-skybaker
    @adrian-skybaker
    Hello Hollow-ers: I'm having to use UUIDs as primary keys. I have this working modelled both as a plain hex string (f0af802fd41047d28104fac7bc295da5) and as a composite type with two 64-bit ints. I'm assuming the latter will be faster to index and more compact, though I haven't tested to confirm. Unfortunately, the use of the composite key somewhat defeats the usefulness of the explorer UI. I'm curious whether anyone else has faced this choice, and perhaps done any quick measurements on the impact?
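    In case it helps with the explorer-UI pain: converting between the hex form and the two-long form is cheap with java.util.UUID, so the composite representation can still be logged or displayed as hex on demand. A small illustrative sketch (the class and field names are made up); note that UUID.fromString expects the dashed form, so a dash-less hex string needs the dashes re-inserted first:

```java
import java.util.UUID;

// Hypothetical composite-key holder: a UUID as two 64-bit longs.
public class UuidKey {
    public final long msb; // most significant 64 bits
    public final long lsb; // least significant 64 bits

    public UuidKey(long msb, long lsb) {
        this.msb = msb;
        this.lsb = lsb;
    }

    // Parse the dashed hex form into the two-long form.
    public static UuidKey fromHex(String dashedHex) {
        UUID u = UUID.fromString(dashedHex);
        return new UuidKey(u.getMostSignificantBits(), u.getLeastSignificantBits());
    }

    // Recover the familiar hex form, e.g. for logs or the explorer UI.
    public String toHex() {
        return new UUID(msb, lsb).toString();
    }

    public static void main(String[] args) {
        UuidKey key = UuidKey.fromHex("f0af802f-d410-47d2-8104-fac7bc295da5");
        System.out.println(key.toHex()); // prints f0af802f-d410-47d2-8104-fac7bc295da5
    }
}
```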
    adrian-skybaker
    @adrian-skybaker

    Is there a reason that the prior state is available when doing com.netflix.hollow.api.producer.HollowProducer#runCycle (via com.netflix.hollow.api.producer.HollowProducer.WriteState#getPriorState), but not available on an incremental producer (com.netflix.hollow.api.producer.HollowProducer.Incremental.IncrementalPopulator has no access to write state)?

    I can probably work around it, but just wanted to ask whether it was a deliberate API choice.

    adrian-skybaker
    @adrian-skybaker

    Is it expected that, if I have a parent/child relationship, when I call com.netflix.hollow.api.producer.HollowProducer.Incremental.IncrementalPopulator#populate I must only call it with the parent object? I.e., if I have a Movie with a list of Actors, even if only a single Actor has changed, I cannot call populate(Henry Cavill); I must do populate(Man Of Steel)?

    In my current implementation, doing the latter results in duplicate records, but I'm unsure whether that's operator error or by design.

    yansvanhorn
    @yansvanhorn
    Hi, silly question: can the same @HollowTypeName be set on both a List<String> field and a String field if the values of each are to be deduplicated?
    Noam
    @noammanyfler
    Hi all, I'm experiencing weird behavior when I have multiple top-level models with runIncrementalCycle.
    Given two models, A and B: when I update only A in my runIncrementalCycle, the data is published and B can still be queried from my consumers. But when I restart my consumer, it fails to load the schema for B.
    To fix that, I update a default object of B every time I update A, and vice versa. What might be the problem? Thanks in advance.
    Olavo Masayuki Machado Shibata
    @olavoshibata
    I want to use Hollow for a small set of data, around 50 entries. The idea of using Hollow is that I don't want to restart my application every time this data changes. The data changes weekly. Would Hollow be a good solution for that?
    milk
    @milk89676173_twitter

    Is the latest stable version for Hollow really 3.0.1, as the README states, or is 5.0.8 considered stable?

    Additionally, has anyone upgraded from any of the 3.x versions to 5.x? Did you have to migrate to a new namespace or produce a new snapshot, or is it safe to upgrade in-place with an existing producer-consumer already running?