    ashaychangwani
    @ashaychangwani
    Is there any fix?
    Andreas Petersson
    @apetersson
    tried to shut down an mmapped file and got: java.lang.NoSuchMethodError: sun.nio.ch.DirectBuffer.cleaner()Lsun/misc/Cleaner;
    Per-Åke Minborg
    @minborg
    Have you checked out jankotek/mapdb#879 ?
    Andreas Petersson
    @apetersson
    yes, but I would really love to clean up the DB in my JUnit tests cleanly, including truly deleting my db file. I'm on Java 11. Disabling FileMmap is also not really an option
    Per-Åke Minborg
    @minborg
    Since Java 9, the Cleaner has moved; it is no longer part of sun.nio.ch.DirectBuffer. I am unsure how this is handled by mapdb. Anyone?
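    For reference, this is a minimal sketch (a JDK-level workaround, not a mapdb API) of how a mapped buffer can be unmapped on Java 9+, where sun.misc.Unsafe.invokeCleaner replaces the old sun.nio.ch.DirectBuffer.cleaner() path:

    import java.lang.reflect.Field;
    import java.nio.MappedByteBuffer;

    // Sketch: explicitly release a MappedByteBuffer on Java 9+.
    // sun.nio.ch.DirectBuffer.cleaner() is gone; sun.misc.Unsafe.invokeCleaner(ByteBuffer)
    // is the replacement. The buffer must not be touched afterwards.
    final class MmapUnmapper {
        static void unmap(MappedByteBuffer buffer) throws ReflectiveOperationException {
            Field f = sun.misc.Unsafe.class.getDeclaredField("theUnsafe");
            f.setAccessible(true);
            sun.misc.Unsafe unsafe = (sun.misc.Unsafe) f.get(null);
            unsafe.invokeCleaner(buffer);   // releases the mapping (and the file lock) immediately
        }
    }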
    Andreas Petersson
    @apetersson
    Is it possible to set different, fixed expiration dates for a group of keys/values?
    Andreas Petersson
    @apetersson
    I implemented a FixedSizeSerializer for byte arrays where the size is known, which skips writing the out.packInt(value.length); as in SerializerByteArray. For unknown reasons this is ca. 50% slower in my implementation...
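    A rough sketch of what such a serializer can look like against the 3.x Serializer interface (the class name and details here are illustrative, not the actual implementation mentioned above; note that BTreeMap keys/values need a GroupSerializer in 3.x):

    import java.io.IOException;
    import org.mapdb.DataInput2;
    import org.mapdb.DataOutput2;
    import org.mapdb.Serializer;

    // Hypothetical fixed-size byte[] serializer: skips the packInt(value.length)
    // prefix that SerializerByteArray writes, because every value is exactly `size` bytes.
    class FixedSizeByteArraySerializer implements Serializer<byte[]> {
        private final int size;

        FixedSizeByteArraySerializer(int size) { this.size = size; }

        @Override
        public void serialize(DataOutput2 out, byte[] value) throws IOException {
            out.write(value);                 // no length prefix
        }

        @Override
        public byte[] deserialize(DataInput2 input, int available) throws IOException {
            byte[] result = new byte[size];
            input.readFully(result);          // length is known up front
            return result;
        }
        // Depending on the 3.x interface, overriding isTrusted()/fixedSize() may also help.
    }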
    Andreas Petersson
    @apetersson
    In this doc: https://studylib.net/doc/8397568/mapdb-cheat-sheet there is talk of using queues with mapdb. I did not find anything in 3.0.7; was this removed? What should I use instead when I want to persist a queue locally?
    Jan Kotek
    @jankotek
    @apetersson The Cheat Sheet was targeting an older version (1.0.x); there is no cheat sheet for the new version.
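    For reference, the queue examples in that cheat sheet were written against the 1.0.x API, which looked roughly like this (method names from memory, so treat this as an approximation; the API is not present in 3.x):

    import java.io.File;
    import java.util.concurrent.BlockingQueue;
    import org.mapdb.DB;
    import org.mapdb.DBMaker;

    // Rough sketch of the old 1.0.x persisted queue (removed in later versions).
    public class OldQueueExample {
        public static void main(String[] args) {
            DB db = DBMaker.newFileDB(new File("queue.db")).make();
            BlockingQueue<String> queue = db.getQueue("tasks");  // FIFO queue backed by the file
            queue.add("job-1");
            db.commit();
            db.close();
        }
    }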
    Sergey Zharikhin
    @SergeyZharikhin
    Hello, guys! Is it possible to expire entries from an in-memory db to an on-disk db ONLY based on the in-memory map store size?
    Jan Kotek
    @jankotek
    @SergeyZharikhin I do not think so, not in the newest version
    Kevin Stiehl
    @kstiehl
    Hey guys, I am using only a memoryDirectDB at the moment to create an HTreeMap which is then accessed by multiple threads. Is it necessary to call db.commit when I only use direct memory? From the documentation it looks like it is used to sync with the filesystem.
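    For context, a minimal sketch of that kind of setup with the 3.x builder API (the map name and values here are illustrative):

    import org.mapdb.DB;
    import org.mapdb.DBMaker;
    import org.mapdb.HTreeMap;
    import org.mapdb.Serializer;

    // Off-heap (direct memory) DB with an HTreeMap shared by multiple threads.
    // With transactions disabled (the default), calling db.commit() is not required.
    public class DirectMemoryMapExample {
        public static void main(String[] args) {
            DB db = DBMaker.memoryDirectDB().make();

            HTreeMap<String, Long> map = db
                    .hashMap("shared", Serializer.STRING, Serializer.LONG)
                    .createOrOpen();

            map.put("requests", 1L);   // HTreeMap is a ConcurrentMap, safe across threads
            System.out.println(map.get("requests"));

            db.close();
        }
    }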
    axonix
    @axonix69_gitlab
    After correctly committing and closing a fileDB, the database file remains locked until JVM shutdown on Windows 10.
    Is there any way to force-unlock it so that the file can be deleted?
    janothan
    @janothan
    What is the reason for a String Array Serializer not being available? What should I ideally choose for key -> String[]?
    Jan Kotek
    @jankotek
    @kstiehl no it is not necessary, transactions are disabled by default, so that is fine
    @axonix69_gitlab DBMaker.cleanerHackEnable(), it is a JVM problem
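    In builder form that looks roughly like this (a sketch for an mmapped 3.x file DB; the hack releases the mapped buffers on close so Windows can delete the file):

    import java.io.File;
    import org.mapdb.DB;
    import org.mapdb.DBMaker;

    // Sketch: mmapped file DB with the cleaner hack enabled, so the file lock
    // is released on close() instead of lingering until JVM shutdown.
    public class CleanerHackExample {
        public static void main(String[] args) {
            File file = new File("data.db");
            DB db = DBMaker.fileDB(file)
                    .fileMmapEnable()
                    .cleanerHackEnable()
                    .make();
            // ... use the db ...
            db.close();
            System.out.println("deleted: " + file.delete());
        }
    }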
    Jan Kotek
    @jankotek
    @janothan I will add it in 4.0 jankotek/mapdb#937
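    Until that lands, one interim option is a small custom serializer for the String[] side, sketched here against the 3.x Serializer interface (the class name is made up):

    import java.io.IOException;
    import org.mapdb.DataInput2;
    import org.mapdb.DataOutput2;
    import org.mapdb.Serializer;

    // Hypothetical String[] serializer: a packed length followed by UTF strings.
    class StringArraySerializer implements Serializer<String[]> {
        @Override
        public void serialize(DataOutput2 out, String[] value) throws IOException {
            out.packInt(value.length);
            for (String s : value) {
                out.writeUTF(s);
            }
        }

        @Override
        public String[] deserialize(DataInput2 input, int available) throws IOException {
            String[] result = new String[input.unpackInt()];
            for (int i = 0; i < result.length; i++) {
                result[i] = input.readUTF();
            }
            return result;
        }
    }

    It could then be passed as the value serializer, e.g. db.hashMap("m", Serializer.STRING, new StringArraySerializer()).createOrOpen().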
    Jasneet Singh
    @singhjasneet
    Hey @jankotek, I am trying to fix cleanerHackEnable() to work for Java 9+, but I am not able to build the project on my machine.
    lexaz777
    @lexaz777
    Hello, I've started to use mapdb 3.0.7, but there is no "db.compact()" method. Can you help me, please?
    lexaz777
    @lexaz777
    @cdluv Did you find the .compact() method?
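    If it helps, one unverified guess is that in 3.x compaction moved from DB to the underlying store, roughly:

    import org.mapdb.DB;
    import org.mapdb.DBMaker;

    // Assumption: in 3.x, compaction is invoked on the store rather than on DB.
    public class CompactExample {
        public static void main(String[] args) {
            DB db = DBMaker.fileDB("data.db").make();
            db.getStore().compact();   // was db.compact() in older versions
            db.close();
        }
    }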
    sunyaf
    @sunyaf
    hello
    java.lang.ArrayIndexOutOfBoundsException: 31 at org.mapdb.DataInput2$ByteArray.unpackLongArray(DataInput2.java:200)
    Does anyone know about this issue?
    @jankotek
    hello
    Can anyone help me?
    @tokuhirom
    can you help me?
    sunyaf
    @sunyaf
    Are you asleep?
    satheessh
    @satheessh
    We need help with the error below:
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
        at java.lang.Thread.run(Thread.java:748)
    Caused by: java.lang.IllegalAccessError: Store was closed
        at org.mapdb.StoreDirectAbstract.assertNotClosed(StoreDirectAbstract.kt:62)
        at org.mapdb.StoreDirect.get(StoreDirect.kt:520)
        at org.mapdb.BTreeMap.getNode(BTreeMap.kt:800)
        at org.mapdb.BTreeMap.access$getNode(BTreeMap.kt:72)
        at org.mapdb.BTreeMap$BTreeIterator.advanceFrom(BTreeMap.kt:1031)
        at org.mapdb.BTreeMap$BTreeIterator.<init>(BTreeMap.kt:1016)
        at org.mapdb.BTreeMap$valueIterator$1.<init>(BTreeMap.kt:1223)
        at org.mapdb.BTreeMap.valueIterator(BTreeMap.kt:1223)
        at org.mapdb.BTreeMap$values$1.iterator(BTreeMap.kt:999)
        at java.util.Spliterators$IteratorSpliterator.estimateSize(Spliterators.java:1821)
    Abhijith V Mohan
    @avmohan
    Does the current stable 3.x have a concept of a FIFO queue? I don't see it anywhere in the docs, hence asking.
    star-usman
    @star-usman
    Hi, when can we expect the version 4 release?
    With HDFS storage integration?
    Yashwanth
    @YashwanthKumarJ

    Hi, while I was exploring Java offline data stores, I came across mapdb. The requirement is to have a persistable queue. I tried building the latest codebase and using it in my project. It looks like the queue is not working on a FIFO basis.
    File f = new File("queueTest");
    DB db = DB.Maker.Companion.appendFile(f).make();
    Queue<String> queue = QueueMaker.newLinkedFifoQueue(db, "myqueue", String.class).make();
    queue.add("first");
    queue.add("second");
    queue.add("third");
    queue.add("fourth");

    System.out.println(queue);
    System.out.println(queue.poll());

    output

    [fourth, third, second, first]
    fourth

    1) This is not a FIFO queue result. I know that it is not officially released, but just in case the implementation is in progress, please do take care of it. Also, if I have made any mistake in the way the queue is supposed to be created and used, do let me know.
    2) I know in 3.X queues can be created. Is this queue persistable?

    Abhijith V Mohan
    @avmohan
    @YashwanthKumarJ I had a similar requirement
    Evaluated mapdb, square/tape, and openhft/chronicle-queue. Settled on chronicle-queue
    I'd recommend taking a look at both. Tape's API is much simpler, but if your use case needs high throughput, especially with a good number of concurrent writers, cq will be better.
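    For anyone with the same requirement, a minimal Chronicle Queue sketch (API names from memory, so verify against the current docs):

    import net.openhft.chronicle.queue.ChronicleQueue;
    import net.openhft.chronicle.queue.ExcerptAppender;
    import net.openhft.chronicle.queue.ExcerptTailer;

    // Sketch of a persisted FIFO queue using Chronicle Queue.
    public class ChronicleQueueExample {
        public static void main(String[] args) {
            try (ChronicleQueue queue = ChronicleQueue.singleBuilder("queue-dir").build()) {
                ExcerptAppender appender = queue.acquireAppender();
                appender.writeText("first");
                appender.writeText("second");

                ExcerptTailer tailer = queue.createTailer();
                System.out.println(tailer.readText());   // "first"  (FIFO order)
                System.out.println(tailer.readText());   // "second"
            }
        }
    }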
    Marko Mitić
    @mimarko
    Hi all, any experience or best practices for using MapDB (file-stored HTreeMap or hybrid mem+file storage) with AWS Elastic Beanstalk and EC2 instances (possibly scalable)? Thanks in advance!
    Avi Hayun
    @Chaiavi

    Hi,

    I am using mapdb v3.0.7

    How do I start a simple queue? No need for any fancy features, just a simple mapdb implementation.

    If it is a problem in this version, I can change the version to any other one

    Avi Hayun
    @Chaiavi
    So this Gitter is abandoned?
    Yashwanth
    @YashwanthKumarJ
    @avmohan, thanks for the list. Tried Chronicle Queue and it nicely fits our use case, and it is blazingly fast. Thanks!
    ruslan
    @unoexperto
    @jankotek Where are you publishing new versions of the artifact? The one in Maven Central is pretty old.
    ruslan
    @unoexperto
    Well, I published a snapshot from today's master here: https://bintray.com/cppexpert/maven/mapdb/snapshot-20191113 if anyone needs it.
    ruslan
    @unoexperto
    And it looks like it was a bad idea, because master seems to be very far from a releasable state. The author removed the file-based DB from it :) It would be nice to remove the word "queue" from the title of the project then :)
    NamSor
    @namsor
    Hello, not a question, but I've made a small Java Naive Bayes classifier (https://github.com/namsor/Java-Naive-Bayes-Classifier-JNBC) that maintains the observations and counters as key-values. It runs with either a ConcurrentHashMap() or some other disk-based key-value store. I compared in-memory (on a 190Gb server) and SSD-storage with MapDB, LevelDB or RocksDB. Found MapDB results very impressive, very constant memory use, fast inserts/updates/reads. I wish I could understand why mapdb HTreeMap is so fast compared to LevelDB (pure java or JNI) or RocksDB ... perhaps I have not tuned those correctly. I hope to investigate further after the holidays. Anyway, thanks for this great piece of work ! Elian
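    For anyone curious what that swap looks like, a minimal sketch of a file-backed HTreeMap used as the counter store (assuming the 3.x builder API; names are illustrative):

    import java.util.concurrent.ConcurrentMap;
    import org.mapdb.DB;
    import org.mapdb.DBMaker;
    import org.mapdb.Serializer;

    // Sketch: disk-backed counters behind the same ConcurrentMap interface
    // that an in-memory ConcurrentHashMap would provide.
    public class CounterStoreExample {
        public static void main(String[] args) {
            DB db = DBMaker.fileDB("counters.db").fileMmapEnable().make();

            ConcurrentMap<String, Long> counters = db
                    .hashMap("observations", Serializer.STRING, Serializer.LONG)
                    .createOrOpen();

            counters.merge("word:hello", 1L, Long::sum);   // same call as with ConcurrentHashMap
            System.out.println(counters.get("word:hello"));

            db.close();
        }
    }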
    Arseniy Kaploun
    @kapliars
    Hello, we are using MapDB for an off-heap cache. What we are missing is the ability to extract snapshots/backups of a hashmap (HTreeMap) and then import them back, and on top of that do incremental backups. I guess our best option is to add support for that, but I have a few questions; maybe someone can help, as @jankotek does not seem to appear here anymore
    • I have actually seen a few mentions of backups in the documentation, but can't find anything in the code; did I miss it?
    • We are using 3.0.7 and I was exploring the sources of that one. If we want to develop that functionality, which version/branch should we start off with, with the intention of it being merged upstream eventually? We still need it to be stable enough, as we use it in production (with quite high uptime requirements). Should we develop against 3.0.7 and port it to master then? Or has master diverged too much, so it would have to be reimplemented differently?
    • As far as I understand, to export an HTreeMap we can quite easily dump the content of the volumes to files, but in addition to that we will need to serialize the expiration queues, right? What else might we be missing?
    Jan Kotek
    @jankotek
    @kapliars I worked on snapshots, incremental backups, etc. for a while. It will be in MapDB4, but I don't have the best history of keeping my promises. Maybe we should have a chat; send me an email with your contact info. I am in Europe.
    MapDB 3.0.8 was released. Bugfixes and Java 11 compatibility. http://www.mapdb.org/changelog/