MapDB provides concurrent Maps, Sets and Queues backed by disk storage or off-heap memory. It is a fast and easy-to-use embedded Java database engine.
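For context, a minimal usage sketch (the map name and serializer choices here are just for illustration):
// open an in-memory DB and use one named map
DB db = DBMaker.memoryDB().make();
HTreeMap<String, String> map = db.hashMap("map")
        .keySerializer(Serializer.STRING)
        .valueSerializer(Serializer.STRING)
        .createOrOpen();
map.put("something", "here");
db.close();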
I'm getting the following build error message:
Method name '%%%verify$mapdb' in class 'org.mapdb.DB$Maker' cannot be represented in dex format.
I saw this happen before; I'm wondering if there has been any progress on this problem?
@jankotek Question on the hashmap or treemap with a nested data structure, with the following code block:
HTreeMap<String, List<String>> map = (HTreeMap<String, List<String>>) DBMaker.fileDB("map").make().hashMap("map").createOrOpen();
for (int i = 0; i < 10; i++) {
    final int finalI = i;
    for (int j = 0; j < 10; j++) {
        final int finalJ = j;
        map.compute(String.valueOf(finalI), (key, value) -> {
            if (value == null) {
                value = new ArrayList<>();
            }
            value.add(String.format("%d_%d", finalI, finalJ));
            return value;
        });
    }
}
It falls into an infinite loop. Instead, I need to copy the value and then return the modified copy to make it work:
HTreeMap<String, List<String>> map = (HTreeMap<String, List<String>>) DBMaker.fileDB("map").make().hashMap("map").createOrOpen();
for (int i = 0; i < 10; i++) {
    final int finalI = i;
    for (int j = 0; j < 10; j++) {
        final int finalJ = j;
        map.compute(String.valueOf(finalI), (key, value) -> {
            final List<String> newValue;
            if (value == null) {
                newValue = new ArrayList<>();
            } else {
                newValue = new ArrayList<>(value);
            }
            newValue.add(String.format("%d_%d", finalI, finalJ));
            return newValue;
        });
    }
}
Is it a bug or is it expected? I don't think this is an issue with the in-memory counterpart (e.g. ConcurrentHashMap).
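For comparison, the same loop against a plain ConcurrentHashMap terminates normally even when the existing list is mutated in place, since no serialization round-trip is involved; a minimal sketch:
ConcurrentHashMap<String, List<String>> map = new ConcurrentHashMap<>();
for (int i = 0; i < 10; i++) {
    final int finalI = i;
    for (int j = 0; j < 10; j++) {
        final int finalJ = j;
        // compute() stores whatever the function returns; mutating the
        // existing value in place is fine for the in-memory map
        map.compute(String.valueOf(finalI), (key, value) -> {
            if (value == null) {
                value = new ArrayList<>();
            }
            value.add(String.format("%d_%d", finalI, finalJ));
            return value;
        });
    }
}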
Hi, another question on how expiration works in MapDB. The following snippet does not work:
@Test
public void testExpiration() throws InterruptedException {
    HTreeMap<String, String> map = (HTreeMap<String, String>) DBMaker
            // .fileDB("map").fileDeleteAfterClose()
            .memoryDB()
            .make().hashMap("map")
            .expireAfterCreate(10)
            .createOrOpen();
    map.put("a", "b");
    Thread.sleep(4000);
    assertNull(map.get("a"));
    map.close();
}
Did I misunderstand how expiration works?
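(Not an authoritative answer, but if I read the MapDB docs correctly, expired entries are not purged on a timer unless a background expiration executor is attached; otherwise eviction only happens as a side effect of other map operations. Something along these lines might behave more like the test expects; the pool size, the TimeUnit overload and the 1-second period are my assumptions, not taken from the snippet above:)
ScheduledExecutorService executor = Executors.newScheduledThreadPool(2);
HTreeMap<String, String> map = (HTreeMap<String, String>) DBMaker
        .memoryDB()
        .make().hashMap("map")
        .expireAfterCreate(10, TimeUnit.MILLISECONDS)
        .expireExecutor(executor)        // purge expired entries in the background
        .expireExecutorPeriod(1000)      // run the purge roughly every second
        .createOrOpen();
// remember to shut the executor down after map.close()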
Hi, is it supported to store multiple HashMaps in one database? Something like:
DB db = DBMaker.fileDB("/tmp/test").closeOnJvmShutdown().make();
HTreeMap<byte[], MyClass> mapDB1 = db.hashMap("map")
        .keySerializer(Serializer.BYTE_ARRAY)
        .valueSerializer(Serializer.JAVA)
        .createOrOpen();
HTreeMap<String, Integer> mapDB2 = db.hashMap("info")
        .keySerializer(Serializer.STRING)
        .valueSerializer(Serializer.INTEGER)
        .createOrOpen();
MyClass c = new MyClass();
byte[] b = { 1, 2, 3, 4 };
mapDB1.put(b, c);
mapDB2.put("version", 1);
db.commit();
db.close();
I tried this and it worked in a very simple test, but I want to be sure this is actually supported. Thank you!
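(Not an authoritative answer, but multiple named collections per DB are a core MapDB feature, and they can be reopened later by name from the same file. A rough sketch reusing the names from above, only the "info" map shown; serializers repeated to match what was used at creation:)
DB db = DBMaker.fileDB("/tmp/test").closeOnJvmShutdown().make();
HTreeMap<String, Integer> info = db.hashMap("info")
        .keySerializer(Serializer.STRING)
        .valueSerializer(Serializer.INTEGER)
        .createOrOpen();
System.out.println(info.get("version")); // expected to print 1 from the earlier run
db.close();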
Hey, quick question.
On 3.0.8, other than opening and closing a file-based DB, is there a (static?) way to check the validity of a DB (properly closed, no file corruption, and so on)?
MapDB files are relatively small. The use case is that I have to keep the rather large source data files they are built from, but if I can verify that the sources are no longer required, I could release them (which could lead to them being deleted).
So is there a way to do that in a one-liner? If not, is DBMaker.make() enough to verify validity, or do I have to open all required collections as well?
Thanks.
@jankotek I've got a weird issue if anyone wants to chime in:
I have a custom serializer and I'm getting an exception on deserialization (a readShort returns a negative number when a positive one is expected), so I debugged the code to get the other object fields that were deserialized before it breaks, and then debugged again with those field values to see how the object is serialized in the first place.
The thing is, an object with those field values is never created, let alone serialized. Up to that point, deserialization works as expected.
What am I missing?
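(Hard to say without seeing the serializer, but two things worth checking: DataOutput.writeShort truncates its int argument to 16 bits, so any value above Short.MAX_VALUE comes back negative from readShort (DataInput.readUnsignedShort covers 0..65535), and the reads in deserialize must mirror the writes in serialize in exactly the same order. A hypothetical sketch of the usual shape of a custom MapDB serializer; MyClass and its fields are made up for illustration:)
Serializer<MyClass> serializer = new Serializer<MyClass>() {
    @Override
    public void serialize(DataOutput2 out, MyClass value) throws IOException {
        out.writeUTF(value.name);     // first field written...
        out.writeShort(value.count);  // second field; must fit in a signed short
    }

    @Override
    public MyClass deserialize(DataInput2 input, int available) throws IOException {
        String name = input.readUTF();    // ...must also be the first field read
        short count = input.readShort();  // second field read, same order as written
        return new MyClass(name, count);
    }
};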
IndexTreeList[Object]
val db = DBMaker.tempFileDB().fileMmapEnable().make()
val list = db.indexTreeList("foo").create()