MapDB provides concurrent Maps, Sets and Queues backed by disk storage or off-heap-memory. It is a fast and easy to use embedded Java database engine.
HI
I am using mapdb v3.0.8
how to reclaim the file size?
The list is empty, but the file is too large:
localDB = DBMaker.fileDB(DB_FILE_PATH)
        .fileMmapEnable()
        .checksumHeaderBypass()
        .make();
IndexTreeList&lt;T&gt; list = localDB.indexTreeList(DB_FILE_EVENT, new ObjectSerializer&lt;T&gt;()).createOrOpen();
I believe you can call list.getStore().compact()
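A minimal sketch of that call, assuming MapDB 3.x where the store returned by getStore() exposes compact() (the file name, list name, and serializer below are placeholders, not the asker's actual values):

```java
import org.mapdb.DB;
import org.mapdb.DBMaker;
import org.mapdb.IndexTreeList;
import org.mapdb.Serializer;

public class CompactDemo {
    public static void main(String[] args) {
        // Placeholder file and collection names for illustration.
        DB db = DBMaker.fileDB("events.db")
                .fileMmapEnable()
                .checksumHeaderBypass()
                .make();
        IndexTreeList<String> list = db
                .indexTreeList("events", Serializer.STRING)
                .createOrOpen();

        list.clear();

        // compact() rewrites live records into a smaller store;
        // this is what reclaims space left by deleted entries.
        list.getStore().compact();

        db.close();
    }
}
```

Note that how much the file physically shrinks can depend on the store implementation; with memory-mapped files the OS may not release disk space immediately.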
MapDB imports sun.misc and sun.nio.ch in its manifest. I've found a workaround here https://search.maven.org/artifact/net.sdruskat/net.sdruskat.fragment.sun.misc/1.0.0/jar to expose sun.misc to OSGi. But unfortunately I haven't found something similar for the sun.nio.ch package. I've tried creating this kind of fragment myself, but I haven't been successful so far. I would also like to avoid upgrading to MapDB v3, since I'm not sure whether there are any compatibility issues with the source code, and I'd also have to get Kotlin running in an Eclipse development environment and within the Tycho Maven build. How can I get sun.nio.ch exposed to OSGi / the Tycho build?
Method name '%%%verify$mapdb' in class 'org.mapdb.DB$Maker' cannot be represented in dex format.
That is the build error message. I saw this happen before; I'm wondering whether there has been any progress on this problem?
@jankotek Question on the hashmap or treemap with a nested data structure. With the following code block:
HTreeMap&lt;String, List&lt;String&gt;&gt; map = (HTreeMap&lt;String, List&lt;String&gt;&gt;) DBMaker.fileDB("map").make().hashMap("map").createOrOpen();
for (int i = 0; i &lt; 10; i++) {
    final int finalI = i;
    for (int j = 0; j &lt; 10; j++) {
        final int finalJ = j;
        map.compute(String.valueOf(finalI), (key, value) -> {
            if (value == null) {
                value = new ArrayList&lt;&gt;();
            }
            value.add(String.format("%d_%d", finalI, finalJ));
            return value;
        });
    }
}
It falls into an infinite loop. Instead, I need to copy the value and then return the modified copy to make it work:
HTreeMap&lt;String, List&lt;String&gt;&gt; map = (HTreeMap&lt;String, List&lt;String&gt;&gt;) DBMaker.fileDB("map").make().hashMap("map").createOrOpen();
for (int i = 0; i &lt; 10; i++) {
    final int finalI = i;
    for (int j = 0; j &lt; 10; j++) {
        final int finalJ = j;
        map.compute(String.valueOf(finalI), (key, value) -> {
            final List&lt;String&gt; newValue;
            if (value == null) {
                newValue = new ArrayList&lt;&gt;();
            } else {
                newValue = new ArrayList&lt;&gt;(value);
            }
            newValue.add(String.format("%d_%d", finalI, finalJ));
            return newValue;
        });
    }
}
Is it a bug, or is it expected? I think this is not an issue with the in-memory counterpart (e.g. ConcurrentHashMap).
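For comparison, the same mutate-in-place compute() is indeed safe on a plain in-memory ConcurrentHashMap, because the map stores the same object reference and no serialization round-trip is involved (one unverified guess about the MapDB case: if HTreeMap retries compute() by comparing against the old value, mutating the deserialized old value could make that comparison fail forever). A minimal sketch, with ComputeDemo as an illustrative name:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ComputeDemo {
    // Builds the same nested structure as the MapDB snippet above,
    // but in a plain ConcurrentHashMap.
    static ConcurrentMap<String, List<String>> buildMap() {
        ConcurrentMap<String, List<String>> map = new ConcurrentHashMap<>();
        for (int i = 0; i < 10; i++) {
            final int finalI = i;
            for (int j = 0; j < 10; j++) {
                final int finalJ = j;
                // Mutating the passed-in value is fine here: the map holds
                // the same object reference, so the mutated list is the
                // stored list.
                map.compute(String.valueOf(finalI), (key, value) -> {
                    if (value == null) {
                        value = new ArrayList<>();
                    }
                    value.add(String.format("%d_%d", finalI, finalJ));
                    return value;
                });
            }
        }
        return map;
    }

    public static void main(String[] args) {
        // Each of the 10 keys ends up with 10 entries.
        System.out.println(buildMap().get("0"));
    }
}
```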
Hi, another question on how expiration works in MapDB. The following snippet does not work:
@Test
public void testExpiration() throws InterruptedException {
    HTreeMap&lt;String, String&gt; map = (HTreeMap&lt;String, String&gt;) DBMaker
            // .fileDB("map").fileDeleteAfterClose()
            .memoryDB()
            .make().hashMap("map")
            .expireAfterCreate(10)
            .createOrOpen();
    map.put("a", "b");
    Thread.sleep(4000);
    assertNull(map.get("a"));
    map.close();
}
Did I misunderstand how expiration works?
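One relevant detail from the MapDB documentation: expireAfterCreate(long) takes a TTL in milliseconds, and without a background executor, expired entries are only pruned during map operations rather than proactively. A hedged sketch of attaching an expiration executor, assuming the MapDB 3.x HTreeMap maker API:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import org.mapdb.DB;
import org.mapdb.DBMaker;
import org.mapdb.HTreeMap;
import org.mapdb.Serializer;

public class ExpireDemo {
    public static void main(String[] args) throws InterruptedException {
        ScheduledExecutorService executor = Executors.newScheduledThreadPool(1);
        DB db = DBMaker.memoryDB().make();
        HTreeMap<String, String> map = db
                .hashMap("map", Serializer.STRING, Serializer.STRING)
                .expireAfterCreate(10)               // TTL in milliseconds
                // Background eviction: without this, expired entries are
                // removed only lazily, during other operations on the map.
                .expireExecutor(executor)
                .expireExecutorPeriod(100)           // evict every 100 ms
                .createOrOpen();

        map.put("a", "b");
        Thread.sleep(1000);
        System.out.println(map.get("a"));  // should be null once evicted

        db.close();
        executor.shutdown();  // let the JVM exit cleanly
    }
}
```

Shutting down the executor at the end matters in tests: a live scheduler thread can keep the JVM from exiting.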