Hollow is a Java library and toolset for disseminating in-memory datasets from a single producer to many consumers for high-performance read-only access.
I'm running into build errors around sun.misc.Unsafe (neither building from the console nor from IntelliJ works). I currently have JDK 11 and JDK 15 installed on my machine. Can someone point me toward the change I need to make to build the project?
I understand from reading this Gitter that the time it takes to apply deltas and create a Hollow blob is proportional to the size of the dataset, not the size of the deltas being applied. In production, we're seeing this cycle publish time take upwards of 2 minutes using the HollowProducer.Incremental class, and it grows over time as our snapshot gets larger and larger (we typically don't need to delete data from the dataset). Two minutes, however, seems high. We did upgrade our producer hosts to a larger instance size with a better CPU (newer EC2 generation) and found that the publish time dropped slightly, but it is now back to where it was. This is problematic for us from a throughput perspective - we ingest changes from a stream, and it frequently backs up when the stream encounters increased volume.
We're working on profiling the service to see exactly what is taking so long, but I figured it may be beneficial to ask here:
Are there any tuning parameters (like GC or Hollow config changes), or options to leverage concurrency, that we could use to get this time down? Again, 2 minutes seems high.
@mil0 @adwu73 how large are your datasets? We have one in particular that is very large and was taking about 2 minutes to publish. For this dataset, I turned off integrity checking (.noIntegrityCheck() when building the HollowProducer) to significantly reduce the publish time.
The integrity check is a failsafe which loads two copies of the data and validates the delta and reverse delta artifacts with checksums. If some unexpected edge case in the framework itself were to be triggered by your dataset, this is where it would be caught.
It's a judgement call whether you want to turn it off. I felt confident in doing so since this particular dataset is far enough back in our content ETL pipeline that it is never directly consumed by our instances responsible for serving member requests, and any unanticipated errors would be caught by a later stage in the pipeline.
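In case it's useful, a minimal sketch of what that looks like (the publisher and announcer arguments are placeholders for your own implementations):

import com.netflix.hollow.api.producer.HollowProducer;

public class ProducerSetup {
    // Sketch: build a producer with the integrity check disabled.
    // 'publisher' and 'announcer' stand in for your own Publisher/Announcer implementations.
    static HollowProducer buildProducer(HollowProducer.Publisher publisher,
                                        HollowProducer.Announcer announcer) {
        return HollowProducer
                .withPublisher(publisher)
                .withAnnouncer(announcer)
                .noIntegrityCheck()   // skip the checksum validation of delta/reverse-delta artifacts
                .build();
    }
}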
Howdy. I have a Hollow datastore that is fairly static. I'd like to be able to initialize a Hollow consumer within an AWS Lambda and manually manage refreshing the store as part of each invocation. Is this a pattern that anyone has attempted?
I'm running into issues creating a client updater that does not rely on self-created background threads. I feel like I can kind of hack the refresh (and short-circuit the supplied executor), but I can't figure out a way to self-manage the StaleHollowReferenceDetector thread.
Is it even worth trying this? Would I need to write my own consumer/updater?
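Roughly what I have in mind is something like this (just a sketch of the hack described above, assuming the builder accepts a refresh executor the way I think it does; blobRetriever is a placeholder, and this still doesn't address the StaleHollowReferenceDetector thread):

import com.netflix.hollow.api.consumer.HollowConsumer;

public class LambdaConsumer {
    // Sketch: no announcement watcher, so nothing refreshes in the background;
    // refreshes are triggered explicitly from the Lambda handler.
    static HollowConsumer buildConsumer(HollowConsumer.BlobRetriever blobRetriever) {
        return HollowConsumer
                .withBlobRetriever(blobRetriever)
                .withRefreshExecutor(Runnable::run)   // run refreshes on the calling thread
                .build();
    }

    // Called at the start of each invocation to catch up to a known version.
    static void refreshTo(HollowConsumer consumer, long version) {
        consumer.triggerRefreshTo(version);
    }
}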
Hi,
When I import the hollow-reference-implementation Gradle project into my Eclipse workspace after cloning it from https://github.com/Netflix/hollow-reference-implementation, I get the error below.
FAILURE: Build failed with an exception.
Where:
Build file 'C:\WS\Git\TCCache-Hollow\hollow-reference-implementation\build.gradle' line: 7
What went wrong:
Plugin [id: 'nebula.info', version: '3.6.0'] was not found in any of the following sources:
Gradle Core Plugins (plugin is not in 'org.gradle' namespace)
Plugin Repositories (could not resolve plugin artifact 'nebula.info:nebula.info.gradle.plugin:3.6.0')
Searched in the following repositories:
Gradle Central Plugin Repository
Try:
Run with --stacktrace option to get the stack trace.
Run with --info or --debug option to get more log output.
Run with --scan to get full insights.
Get more help at https://help.gradle.org
Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.
You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.
See https://docs.gradle.org/7.3/userguide/command_line_interface.html#sec:command_line_warnings
CONFIGURE FAILED in 41ms
Configure project :
Inferred project: hollow-reference-implementation, version: 0.1.0-SNAPSHOT
FAILURE: Build failed with an exception.
Where:
Build file 'C:\WS\Git\TCCache-Hollow\hollow-reference-implementation\build.gradle' line: 6
What went wrong:
An exception occurred applying plugin request [id: 'nebula.netflixoss', version: '3.5.2']
Failed to apply plugin class 'nebula.plugin.info.dependencies.DependenciesInfoPlugin'.
Could not create plugin of type 'DependenciesInfoPlugin'.
No signature of method: org.gradle.api.internal.artifacts.ivyservice.ivyresolve.strategy.DefaultVersionComparator.asStringComparator() is applicable for argument types: () values: []Possible solutions: asVersionComparator()
Try:
Run with --stacktrace option to get the stack trace.
Run with --info or --debug option to get more log output.
Run with --scan to get full insights.
Get more help at https://help.gradle.org
Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.
You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.
See https://docs.gradle.org/7.3/userguide/command_line_interface.html#sec:command_line_warnings
CONFIGURE FAILED in 146ms
Hey all,
I want to restore a producer from an existing, running consumer like this:
producer.getValue().getWriteEngine().restoreFrom(consumer.getStateEngine());
When a new cycle starts that updates an existing key, the data gets duplicated instead of updated.
Up until now we have been successfully using this function:
producer.restore(announcementWatcher.getLatestVersion(), blobRetriever);
But this function creates a consumer by itself, and I want to restore the producer from another consumer.
The main difference I saw is that at the end of a successful restore, the producer's object mapper replaces its write state with the new one, but since it's a private field I'm unable to do that myself.
My question is whether there is a better way to achieve this, or should I open an issue?
Thanks!
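For reference, the working pattern we use today looks roughly like this (a sketch; publisher, announcer, announcementWatcher and blobRetriever are our own infrastructure objects):

import com.netflix.hollow.api.consumer.HollowConsumer;
import com.netflix.hollow.api.producer.HollowProducer;

public class ProducerRestore {
    // Sketch of the restore path we use today: the producer restores its write state
    // from the last announced version before running cycles, so the next cycle
    // produces a delta instead of starting a new state chain.
    static HollowProducer restoreProducer(HollowProducer.Publisher publisher,
                                          HollowProducer.Announcer announcer,
                                          HollowConsumer.AnnouncementWatcher announcementWatcher,
                                          HollowConsumer.BlobRetriever blobRetriever) {
        HollowProducer producer = HollowProducer
                .withPublisher(publisher)
                .withAnnouncer(announcer)
                .build();
        producer.restore(announcementWatcher.getLatestVersion(), blobRetriever);
        return producer;
    }
}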
java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.NullPointerException
at com.netflix.hollow.api.producer.HollowIncrementalCyclePopulator.addRecords(HollowIncrementalCyclePopulator.java:144) ~[golftec-api-1.0-jar-with-dependencies.jar:na]
at com.netflix.hollow.api.producer.HollowIncrementalCyclePopulator.populate(HollowIncrementalCyclePopulator.java:53) ~[golftec-api-1.0-jar-with-dependencies.jar:na]
at com.netflix.hollow.api.producer.HollowProducer.runCycle(HollowProducer.java:438) [golftec-api-1.0-jar-with-dependencies.jar:na]
at com.netflix.hollow.api.producer.HollowProducer.runCycle(HollowProducer.java:390) [golftec-api-1.0-jar-with-dependencies.jar:na]
at com.netflix.hollow.api.producer.HollowIncrementalProducer.runCycle(HollowIncrementalProducer.java:206) [golftec-api-1.0-jar-with-dependencies.jar:na]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_292]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_292]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[na:1.8.0_292]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[na:1.8.0_292]
at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_292]
Caused by: java.util.concurrent.ExecutionException: java.lang.NullPointerException
at java.util.concurrent.FutureTask.report(FutureTask.java:122) ~[na:1.8.0_292]
at java.util.concurrent.FutureTask.get(FutureTask.java:192) ~[na:1.8.0_292]
at com.netflix.hollow.core.util.SimultaneousExecutor.awaitSuccessfulCompletion(SimultaneousExecutor.java:118) ~[golftec-api-1.0-jar-with-dependencies.jar:na]
at com.netflix.hollow.api.producer.HollowIncrementalCyclePopulator.addRecords(HollowIncrementalCyclePopulator.java:142) ~[golftec-api-1.0-jar-with-dependencies.jar:na]
... 10 common frames omitted
Caused by: java.lang.NullPointerException: null
at com.netflix.hollow.core.write.objectmapper.HollowObjectTypeMapper.write(HollowObjectTypeMapper.java:170) ~[golftec-api-1.0-jar-with-dependencies.jar:na]
at com.netflix.hollow.core.write.objectmapper.HollowMapTypeMapper.write(HollowMapTypeMapper.java:76) ~[golftec-api-1.0-jar-with-dependencies.jar:na]
at com.netflix.hollow.core.write.objectmapper.HollowObjectTypeMapper$MappedField.copy(HollowObjectTypeMapper.java:470) ~[golftec-api-1.0-jar-with-dependencies.jar:na]
at com.netflix.hollow.core.write.objectmapper.HollowObjectTypeMapper.write(HollowObjectTypeMapper.java:176) ~[golftec-api-1.0-jar-with-dependencies.jar:na]
at com.netflix.hollow.core.write.objectmapper.HollowObjectMapper.add(HollowObjectMapper.java:70) ~[golftec-api-1.0-jar-with-dependencies.jar:na]
at com.netflix.hollow.api.producer.WriteStateImpl.add(WriteStateImpl.java:41) ~[golftec-api-1.0-jar-with-dependencies.jar:na]
at com.netflix.hollow.api.producer.HollowIncrementalCyclePopulator$2.run(HollowIncrementalCyclePopulator.java:136) ~[golftec-api-1.0-jar-with-dependencies.jar:na]
... 5 common frames omitted
I've got a Hollow dataset that ideally I'd split between a "hot" set of current data (e.g. non-archived, non-expired, "active" records) and a larger set of "archived" data that's only of interest to some clients. As an analogy, think of a catalog of items in an online store, many of which are no longer offered for sale, but you still need to maintain records to resolve data about historical orders.
I'm looking at some of the filtering/splitting options (https://hollow.how/tooling/#dataset-manipulation-tools), but I'm not sure I can see a way to make them work - in my case, it's about having a smaller set of records for the same types, rather than excluding specific types or fields.
The more heavy-handed option is to just create two entire Hollow datasets, with two producers, which can share the same data model. That will work, but you lose the flexibility of letting clients decide how they filter. Before I go down this path, I'm just wondering if anyone else has used the filtering/combining tools for this use case?
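For context, the per-consumer filtering I'm referring to works at the type/field level, roughly like this (a sketch based on my reading of the docs - exact builder method names may differ by version, blobRetriever is a placeholder, and "ArchivedItem" is a hypothetical type):

import com.netflix.hollow.api.consumer.HollowConsumer;
import com.netflix.hollow.core.read.filter.HollowFilterConfig;

public class FilteredConsumer {
    // Sketch: consumer-side filtering excludes whole types (or individual fields),
    // which is why it doesn't help when the goal is fewer records of the same type.
    static HollowConsumer buildConsumer(HollowConsumer.BlobRetriever blobRetriever) {
        HollowFilterConfig filter = new HollowFilterConfig(); // exclude-based filter
        filter.addType("ArchivedItem");                       // exclude this whole type
        return HollowConsumer
                .withBlobRetriever(blobRetriever)
                .withFilterConfig(filter)
                .build();
    }
}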
I was extremely dismayed to discover this week that the producer validation listeners (e.g. DuplicateDataDetectionValidator) run after content has been written out to the persisted blob store.
Although it did prevent the faulty version from being announced, the resulting cleanup has proved hard enough that we've given up and will just create a brand new blob store and get all clients to switch.
Although this post-write validation behaviour is actually documented, it's extremely surprising and greatly reduces the usefulness of the validators.
Hello, I am using Hollow 7.1.1.
producer init:
val producer = HollowProducer.withPublisher(publisher).withAnnouncer(announcer)
.withNumStatesBetweenSnapshots(5)
.buildIncremental()
write data:
s3Producer.runIncrementalCycle { writer ->
writer.addOrModify(data)
}
I have encountered this error: Caused by: java.io.IOException: Attempting to apply a delta to a state from which it was not originated!
Can someone help me figure out how to fix this?
Text Over Image with Java Web Application
https://www.baeldung.com/java-add-text-to-image
https://www.geeksforgeeks.org/java-program-to-add-text-to-an-image-in-opencv/
I want to display an image in the web application where the user can add text on top of the image.
Finally, I need to save it in the DB; later the user has to view the editable text and edit it if required.
How do I achieve this in a Java web application - in the UI? the back end? the DB (JSON, image, or coordinates)?
Can any open-source libraries be used at all of these levels? Can someone offer some comments/feedback?
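In plain Java (along the lines of the Baeldung article above), drawing text onto an image server-side is roughly this (a sketch; file names, font, and coordinates are made up):

import java.awt.Color;
import java.awt.Font;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class TextOnImage {
    public static void main(String[] args) throws Exception {
        // Load the source image (path is just an example).
        BufferedImage image = ImageIO.read(new File("input.png"));

        // Draw the text onto the image; font, color, and coordinates are arbitrary here.
        Graphics2D g = image.createGraphics();
        g.setFont(new Font("SansSerif", Font.BOLD, 24));
        g.setColor(Color.WHITE);
        g.drawString("Sample caption", 20, 40);
        g.dispose();

        // Write the modified image back out.
        ImageIO.write(image, "png", new File("output.png"));
    }
}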
URL url = new URL(...); --> this FAILS when I try to download an https image - "javax.imageio.IIOException ... Can't get input stream from URL!"
Note:
The URL works from a browser.
The URL works in a standalone program.
The URL fails when used in the Java web application.
Question:
What is the correct approach, and what are the underlying differences?
Thanks
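For debugging the web-app case, one thing that might help is opening the connection yourself instead of passing the URL straight to ImageIO, so the underlying IOException (proxy, TLS, blocked User-Agent, etc.) isn't hidden behind the generic "Can't get input stream from URL!" message - a rough sketch:

import java.awt.image.BufferedImage;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import javax.imageio.ImageIO;

public class ImageDownload {
    // Sketch: open the stream explicitly so any failure surfaces as the real
    // IOException rather than ImageIO's generic "Can't get input stream from URL!".
    static BufferedImage fetch(String imageUrl) throws Exception {
        URL url = new URL(imageUrl);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("User-Agent", "Mozilla/5.0"); // some servers reject Java's default agent
        conn.setConnectTimeout(10_000);
        conn.setReadTimeout(10_000);
        try (InputStream in = conn.getInputStream()) {
            return ImageIO.read(in);
        } finally {
            conn.disconnect();
        }
    }
}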