    can you help me?
    are you asleep?
    We need help with the error below:
        at java.util.concurrent.ThreadPoolExecutor.runWorker(...)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(...)
    Caused by: java.lang.IllegalAccessError: Store was closed
        at org.mapdb.StoreDirectAbstract.assertNotClosed(StoreDirectAbstract.kt:62)
        at org.mapdb.StoreDirect.get(StoreDirect.kt:520)
        at org.mapdb.BTreeMap.getNode(BTreeMap.kt:800)
        at org.mapdb.BTreeMap.access$getNode(BTreeMap.kt:72)
        at org.mapdb.BTreeMap$BTreeIterator.advanceFrom(BTreeMap.kt:1031)
        at org.mapdb.BTreeMap$BTreeIterator.<init>(BTreeMap.kt:1016)
        at org.mapdb.BTreeMap$valueIterator$1.<init>(BTreeMap.kt:1223)
        at org.mapdb.BTreeMap.valueIterator(BTreeMap.kt:1223)
        at org.mapdb.BTreeMap$values$1.iterator(BTreeMap.kt:999)
        at java.util.Spliterators$IteratorSpliterator.estimateSize(...)
    Abhijith V Mohan
    Does the current stable 3.x have a concept of a FIFO queue? I don't see it anywhere in the docs, hence asking.
    Hi, when can we expect the version 4 release?
    With HDFS storage integration.

    Hi, while I was exploring Java offline data stores, I came across MapDB. The requirement is to have a persistable queue. I tried building the latest codebase and using it in my project. It looks like the queue is not working on a FIFO basis.
    File f = new File("queueTest");
    DB db = DB.Maker.Companion.appendFile(f).make();
    Queue<String> queue = QueueMaker.newLinkedFifoQueue(db, "myqueue", String.class).make();
    queue.add("first");
    queue.add("second");
    queue.add("third");
    queue.add("fourth");
    System.out.println(queue);
    [fourth, third, second, first]

    1) This is not a FIFO queue result. I know it is not officially released, but just in case the implementation is in progress, please do take care of it. Also, if I have made any mistake in the way the queue is supposed to be created and used, do let me know.
    2) I know that in 3.x queues can be created. Is this queue persistable?
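A possible workaround while queues are absent from the stable API: emulate FIFO semantics on top of a sorted map with monotonically increasing long keys, so that iteration order equals insertion order. The sketch below is written against plain `ConcurrentNavigableMap` so it is library-independent; with MapDB 3 the backing map would come from something like `db.treeMap("queue", Serializer.LONG, Serializer.STRING).createOrOpen()` (which MapDB's `BTreeMap` satisfies, since it implements `ConcurrentNavigableMap`), making the queue persistable. The class name `MapBackedFifoQueue` is hypothetical, not part of MapDB.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentNavigableMap;
import java.util.concurrent.ConcurrentSkipListMap;
import java.util.concurrent.atomic.AtomicLong;

/** Hypothetical sketch: FIFO semantics emulated on a sorted map. */
class MapBackedFifoQueue<V> {
    private final ConcurrentNavigableMap<Long, V> map;
    private final AtomicLong tail;

    MapBackedFifoQueue(ConcurrentNavigableMap<Long, V> map) {
        this.map = map;
        // after a restart, resume the counter from the highest existing key
        this.tail = new AtomicLong(map.isEmpty() ? 0L : map.lastKey() + 1);
    }

    void add(V value) {            // enqueue at the tail
        map.put(tail.getAndIncrement(), value);
    }

    V poll() {                     // dequeue from the head (lowest key = oldest)
        Map.Entry<Long, V> head = map.pollFirstEntry();
        return head == null ? null : head.getValue();
    }

    int size() {
        return map.size();
    }
}
```

For testing, an in-memory `ConcurrentSkipListMap` can stand in for the persisted map.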

    Abhijith V Mohan
    @YashwanthKumarJ I had a similar requirement
    Evaluated mapdb, square/tape, and openhft/chronicle-queue. Settled on chronicle-queue
    I'd recommend taking a look at both. Tape's API is much simpler, but if your use case needs high throughput, especially with a good number of concurrent writers, Chronicle Queue will be better.
    Marko Mitić
    Hi all, any experience or best practices for using MapDB (file-stored HTreeMap or hybrid mem+file storage) with AWS Elastic Beanstalk and EC2 instances (possibly scalable)? Thanks in advance!
    Avi Hayun


    I am using mapdb v3.0.7

    How do I start a simple queue? No need for any fancy features, just a simple MapDB implementation.

    If it is a problem in this version, I can change the version to any other one

    Avi Hayun
    So is this Gitter abandoned?
    @avmohan, thanks for the list. Tried Chronicle Queue; it fits our use case nicely and is blazingly fast. Thanks!
    @jankotek Where are you publishing new versions of the artifact? The one in Maven Central is pretty old.
    Well, I published a snapshot from today's master here if anyone needs it.
    And it looks like that was a bad idea, because master seems to be very far from a release state. The author removed the file-based DB from it :) It would be nice to remove the word "queue" from the title of the project then :)
    Hello, not a question, but I've made a small Java Naive Bayes classifier that maintains the observations and counters as key-values. It runs with either a ConcurrentHashMap() or some other disk-based key-value store. I compared in-memory (on a 190 GB server) and SSD storage with MapDB, LevelDB or RocksDB. I found the MapDB results very impressive: very constant memory use, fast inserts/updates/reads. I wish I could understand why the MapDB HTreeMap is so fast compared to LevelDB (pure Java or JNI) or RocksDB... perhaps I have not tuned those correctly. I hope to investigate further after the holidays. Anyway, thanks for this great piece of work! Elian
    Arseniy Kaploun
    Hello, we are using MapDB for an off-heap cache. What we are missing is the ability to extract snapshots/backups of a hashmap (HTreeMap) and then import them back, and on top of that to do incremental backups. I guess our best option is to add support for that ourselves, but I have a few questions that maybe someone can help with, as @jankotek does not seem to appear here anymore:
    • I have actually seen a few mentions of backups in the documentation, but can't find anything in the code. Did I miss it?
    • We are using 3.0.7 and I was exploring the sources of that version. If we want to develop this functionality, which version/branch should we start from, with the intention of eventually merging it upstream? We still need it to be stable enough, as we use it in production (with quite high uptime requirements). Should we develop against 3.0.7 and then port to master? Or has master diverged too much, so that it would have to be reimplemented differently?
    • As far as I understand, to export an HTreeMap we can quite easily dump the contents of the volumes to files, but in addition we will need to serialize the expiration queues, right? What else might we be missing?
    Jan Kotek
    @kapliars I worked on snapshots, incremental backups etc. for a while. It will be in MapDB4, but I don't have the best history of keeping my promises. Maybe we should have a chat; send me an email with your contact info. I am in Europe.
    MapDB 3.0.8 was released. Bugfixes and Java 11 compatibility.
    Stuart Goldberg
    We recently upgraded from MapDB 1.0.7 to 3.0.6 and I find that queues have been removed. I tried replacing the queue with an IndexTreeList, but the performance was horrible, as I had thread contention between the producer thread and the consumer thread. Any advice would be appreciated.
    Hi. I'm using MapDB (3.0.8) as a cache for batch jobs (~1 million jobs). Each job has a small "jobState" object and a larger (few kB) binary payload. I often iterate over the jobState objects (without payload). Sometimes I need to iterate over the jobState objects plus their payload. What is the best pattern to store those in MapDB? One large state+payload object? Or better two maps, one for state, one for payload (in which case iterating over state+payload would mean iterating over state and getting the payload for each by id)?
    And which map type should I use?
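The two-map layout described above can be sketched as follows. The sketch is written against plain `java.util.Map` to keep the pattern library-independent; with MapDB these would be two hashMaps created from the same DB (e.g. something like `db.hashMap("state", Serializer.STRING, Serializer.JAVA).createOrOpen()`), and all names here are illustrative, not a MapDB API.

```java
import java.util.Map;

class JobStoreSketch {
    // Frequent scan: touches only the small state records, so none of the
    // multi-kB payloads ever need to be deserialized.
    static int countInState(Map<String, String> states, String wanted) {
        int n = 0;
        for (String state : states.values()) {
            if (wanted.equals(state)) n++;
        }
        return n;
    }

    // Occasional scan: same iteration over states, joining the payload by
    // job id only when it is actually needed.
    static long payloadBytesInState(Map<String, String> states,
                                    Map<String, byte[]> payloads,
                                    String wanted) {
        long total = 0;
        for (Map.Entry<String, String> e : states.entrySet()) {
            if (wanted.equals(e.getValue())) {
                byte[] p = payloads.get(e.getKey()); // one extra lookup per job
                if (p != null) total += p.length;
            }
        }
        return total;
    }
}
```

The trade-off: the frequent state-only iteration stays cheap, at the cost of one extra map lookup per job on the occasional state+payload pass.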

    Hi, Expiration does not seem to kick in for me:

    val db = DBMaker.memoryDirectDB().make()
    val map: HTreeMap[java.lang.Long, String] = db.hashMap("map")
    map.put(1L, "A")
    map.put(2L, "B")
    map.put(3L, "C")
    map.put(4L, "D")
    map.put(5L, "E")

    I'm expecting to see nulls for 1 and 2 (null, null, C, D, E), but instead I get A, B, C, D, E.
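One likely cause, assuming the MapDB 3.x API: the snippet never configures an expiration policy, and an HTreeMap created without one never evicts anything. A sketch of what the configuration might look like; the maker methods `expireMaxSize` and `expireAfterCreate` are assumed from the 3.x HTreeMap maker, so please verify them against your version. Note also that eviction runs inside map operations unless a background `expireExecutor` is supplied.

```java
// Sketch, MapDB 3.x API assumed: without an expire* setting,
// entries are never evicted, so A..E all stay in the map.
DB db = DBMaker.memoryDirectDB().make();
HTreeMap<Long, String> map = db
        .hashMap("map", Serializer.LONG, Serializer.STRING)
        .expireMaxSize(3)       // cap the map at 3 entries
        .expireAfterCreate()    // newly inserted entries become eligible
        .create();
```

With a size cap like this, inserting A through E should leave only the most recent entries resident, which matches the (null, null, C, D, E) expectation above.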

    Mitesh Patel
    Hi, I'm using mapdbutils (1.1.1) with mapdb (2.0-beta). Has anyone tried creating a map using Spark's UnsafeRow? I'd like a hashMap[UnsafeRow, Seq[UnsafeRow]] but I'm having trouble figuring out the serialization. UnsafeRow is already Externalizable and KryoSerializable, so I thought I wouldn't have to write a serializer myself....but I guess I do?
    Vishrawas Gopalakrishnan
    I am working with MapDB 3.0.8. I am trying to follow the example of composite keys, but I am not able to find Tuple2 or any Tuple-related classes. I can see the corresponding classes in the sources, however not in the jar files that get downloaded via Maven. Manually copy-pasting the relevant files like Tuple, Tuple2 and Tuple2Serializer throws a "DB.DBAware not found" error... Fixing each of these by copy-pasting relevant portions from the git repository results in recreating the entire codebase. Am I missing something here? Is there an easier way, or is Tuple no longer supported? My use case involves a composite key and querying based on it. If Tuple is no longer the right way, can someone please advise on how to proceed with composite keys (of strings) in version 3.0.8.
    Jan Kotek
    @MasterDDT I would recommend converting UnsafeRow into a byte[].
    @vishrawas That code is from the 1.0 branch; that is what causes the compilation errors. Tuples were replaced with array tuples in MapDB3, source here:
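The array-tuple route mentioned above can be sketched as follows, assuming the 3.x classes `SerializerArrayTuple` (in `org.mapdb.serializer`) and `BTreeMap.prefixSubMap`; verify the exact signatures against 3.0.8 before relying on them.

```java
// Sketch: composite String keys as Object[] array tuples (MapDB 3.x pattern).
DB db = DBMaker.memoryDB().make();
BTreeMap<Object[], Integer> map = db
        .treeMap("towns",
                 new SerializerArrayTuple(Serializer.STRING, Serializer.STRING),
                 Serializer.INTEGER)
        .createOrOpen();

map.put(new Object[]{"US", "Boston"}, 1);
map.put(new Object[]{"US", "Chicago"}, 2);

// Query all entries whose first key component is "US".
map.prefixSubMap(new Object[]{"US"}).forEach((k, v) -> { /* ... */ });
```

Because the tree is sorted component by component, prefix queries over the first part of the composite key come out as a contiguous submap.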
    Mitesh Patel
    @jankotek Thanks, yes, I converted the row into a hashcode Int for the key and wrote a custom Serializer<UnsafeRow> for the value. It works now.
    Mitesh Patel
    Does MapDB 3.x support Scala 2.11.8? I'm trying to compile it into Spark 2.3.2 and seeing this error:
    Jason Schoeman
    Does anyone perhaps use this library on Android? If so, how did you overcome this error at compile time? Method name '%%%verify$mapdb' in class 'org.mapdb.DB$Maker' cannot be represented in dex format.

    Similarly to @Jason Schoeman above, I would like to use MapDB in my Android app.
    When using 3.0.0 I got a similar error.
    However, I don't have a compilation issue when using 2.x (2.0-beta13), but that's old and beta...

    So my query would be: is it possible to rename this method so that it works well with Android? As per my understanding, the name of the method is your choice in Kotlin (I didn't do anything in Kotlin...), unless it is some obligatory part of Kotlin. What do you reckon, @Jan Kotek?


    From 3.0.6 it reports usage of static interface methods (Java 8).

    Could it be avoided, to make the app a bit more backward compatible?
    "Static interface methods are only supported starting with Android N (--min-api 24): java.lang.Object org.eclipse.collections.api.InternalIterable.$deserializeLambda$(java.lang.invoke.SerializedLambda)"
    "Default interface methods are only supported starting with Android N (--min-api 24): org.eclipse.collections.impl.lazy.parallel.Batch org.eclipse.collections.impl.lazy.parallel.OrderedBatch.collect(org.eclipse.collections.api.block.function.Function)"

    Jan Kotek
    @MarianJones This method has some strange characters to prevent it from being imported from Java code. It's a weird case between the Kotlin compiler, JVM bytecode and Java code.
    I will have a look and make an Android-friendly release.
    @TitanKing see above
    Many Thanks @jankotek !
    Hi, I have a cluster system with multiple nodes. Each node persists its state into MapDB (one DB per node) so it is able to resume from the last point across restarts. Because adding new nodes to the cluster may change the load balancing, some records being processed may be available in another node's MapDB. For this, I have implemented each node to open its main DB in write/create mode while also opening all other available MapDB files read-only. This seems to be working fine despite warning messages from MapDB, which I was able to suppress. Question: could this lead to data corruption?
    I am using mapdb v3.0.8
    How do I reclaim the file size?
    The list is empty, but the file is too large.
            localDB = DBMaker.fileDB(DB_FILE_PATH).make();
            IndexTreeList<T> list = localDB.indexTreeList(DB_FILE_EVENT, new ObjectSerializer<T>()).createOrOpen();
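Assuming MapDB 3.x: clearing a collection only marks its space as free for reuse inside the file; the file itself does not shrink until the store is compacted. A sketch using `db.getStore().compact()` (both `DB.getStore()` and `Store.compact()` exist in the 3.x API; run it while no other thread is using the DB):

```java
// Sketch: reclaiming file space after emptying a collection (MapDB 3.x).
DB db = DBMaker.fileDB("events.db").make();
// ... list.clear() or element removal only marks the space as free ...
db.getStore().compact();   // rewrites the store, dropping the free space
db.close();
```

Compaction rewrites live data into a smaller file, so expect it to take time proportional to the remaining content.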
    Hi, I recently started using Ant Media Server, which uses MapDB. I found a few ways online to see the MapDB content with Java. Are there any examples of how to read MapDB from shell, JavaScript or PHP?
    Jan Kotek
    @abadash MapDB is a Java library; it does not work with other languages.
    Thank you @jankotek. Does that mean that there is no shell/Python wrapper yet to access its data? Or do you know of any interpreted Java environment which I can use to quickly see the data in a MapDB?
    Or is there any tool, like TablePlus for example, which can get data from a MapDB file?
    Jan Kotek
    Write a program in Java and call it to access the data.
    Nilesh Injulkar
    Does it make sense to use MapDB just to reduce the memory footprint of the application, by using DBMaker.tempFileDB? The application I am working on parses remote files; sometimes the files are large, so I was thinking of using MapDB to store intermediate results of the parsing.
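That is a common use of MapDB. A sketch, assuming the 3.x maker methods `tempFileDB()` and `fileDeleteAfterClose()` (both on `DBMaker`; verify against your version): intermediate results go into a temp-file-backed map so the heap stays small, and the file is removed once the DB is closed.

```java
// Sketch: off-heap scratch space for intermediate parse results.
DB db = DBMaker.tempFileDB()
        .fileDeleteAfterClose()   // clean the temp file up on close
        .make();
HTreeMap<String, byte[]> scratch = db
        .hashMap("scratch", Serializer.STRING, Serializer.BYTE_ARRAY)
        .create();
// ... fill during parsing, read back as needed, then:
db.close();
```

The trade-off is serialization cost on every put/get, so it pays off mainly when the intermediate data would not fit comfortably in the heap.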
    Ayman Madkour
    Hello. How do you delete a Map? I know db.delete() was removed in 3.x, and I can't figure out how to use db.getStore().delete(). Any help would be really appreciated.