@hazelcast_twitter Thanks for the response. I'm still trying to reproduce this issue but only get an OOM..
btw you can also use the dedicated Jet gitter channel https://gitter.im/hazelcast/hazelcast-jet
could you create an issue on the repository?
yes, sure, I'll do it later
Hi there, I'm trying to schedule a task in Hazelcast. This task should be executed by only one of the nodes. All the nodes run the same code (they are the same application, horizontally scaled), which is why I'm trying to use a named task (expecting it not to be created if it is already present).
So far, this is how I try to use it:
hzInstance.getScheduledExecutorService("myScheduler").scheduleAtFixedRate(named("MyTask", new EchoTask("Executed")), 10, 5, TimeUnit.SECONDS);
And although it seems to work, it always throws an asynchronous DuplicateTaskException (which I can't catch).
Is there a "proper way" to handle this?
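To illustrate the "create the named task only if it does not already exist" idea being asked about, here is a hedged, single-JVM sketch using only the JDK (the class and method names are hypothetical, and this is not Hazelcast's API); with Hazelcast's IScheduledExecutorService the analogous move would be to treat the DuplicateTaskException as "already scheduled" rather than as an error:

```java
import java.util.concurrent.*;

public class ScheduleIfAbsent {
    // Local stand-in for a cluster-wide named-task registry.
    static final ConcurrentHashMap<String, ScheduledFuture<?>> tasks = new ConcurrentHashMap<>();
    static final ScheduledExecutorService exec = Executors.newScheduledThreadPool(1);

    /** Returns true only for the caller that actually created the task. */
    static boolean scheduleIfAbsent(String name, Runnable task) {
        boolean[] created = {false};
        tasks.computeIfAbsent(name, n -> {
            created[0] = true;
            return exec.scheduleAtFixedRate(task, 10, 5, TimeUnit.SECONDS);
        });
        return created[0];
    }

    public static void main(String[] args) {
        Runnable echo = () -> System.out.println("Executed");
        System.out.println(scheduleIfAbsent("MyTask", echo)); // first caller wins
        System.out.println(scheduleIfAbsent("MyTask", echo)); // duplicate is a no-op
        exec.shutdownNow();
    }
}
```

The point of the sketch is that "schedule" and "check for existence" happen as one atomic step, so the second caller never triggers a duplicate error in the first place.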
MapConfig mapConfig = new MapConfig()
        .setMaxSizeConfig(new MaxSizeConfig()
                .setMaxSizePolicy(MaxSizeConfig.MaxSizePolicy.PER_PARTITION)
                .setSize(3))
        .setEvictionPolicy(EvictionPolicy.LRU)
        .setInMemoryFormat(InMemoryFormat.OBJECT)
        .setName("test");
Config config = new Config("HazelcastEvictionSpec");
config.addMapConfig(mapConfig);
HazelcastInstance hazelcastServerInstance = Hazelcast.newHazelcastInstance(config);
PER_NODE yields this warning:
The max size configuration for map "test" does not allow any data in the map. Given the current cluster size of 1 members with 271 partitions, max size should be at least 271. Map size is forced set to 271 for backward compatibility
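The arithmetic behind that warning can be sketched as follows, using the numbers the warning itself reports (271 partitions, 1 member); this is only an illustration of the division, not Hazelcast's actual eviction code:

```java
public class MaxSizeArithmetic {
    public static void main(String[] args) {
        int partitions = 271; // Hazelcast's default partition count, per the warning
        int members = 1;

        // PER_PARTITION: the limit applies to each partition independently,
        // so the effective cap on this one member is 3 entries x 271 partitions.
        int perPartitionLimit = 3;
        System.out.println(perPartitionLimit * partitions);

        // PER_NODE: the limit is split across the partitions the member owns.
        // 3 entries / 271 partitions is less than 1 entry per partition, hence
        // the warning and the forced minimum of one entry per partition (271 total).
        int perNodeLimit = 3;
        int partitionsOwned = partitions / members;
        System.out.println(perNodeLimit < partitionsOwned);
    }
}
```

In short, a PER_NODE size below the partition count cannot admit even one entry per partition, which is why Hazelcast forces it up to 271.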
Hi, no, we're not planning on fixing that in 4.0. Our transaction support is very limited: we assume that if a transaction passes the prepare phase, the commit phase will not fail, and we don't check any interceptors during the prepare phase. We also assume that the cluster is stable while transactions are used. Our "transactions" work for some very simple cases, but for all non-trivial cases they can fail in various ways.
Overall, transactions are as-is for now, and in the future we might consider a complete redesign instead of addressing issues individually. If you want, you can try preparing a PR and we might include it in 4.0 (we are about to release GA in about 1 month).
I've done some tests, and yes, the transactions do "fail" in various ways.
Contrary to what is stated in the Hazelcast Manual, the changes done in a transaction are not isolated properly from other transactions or non-transactional reads. Granted, the changes are not visible from outside before you call
TransactionContext.commitTransaction(). But if you've changed, for example, five map entries in the transaction (even belonging to the same IMap and the same partition), there is a short period of time during commit when you can observe those changes partially, i.e. see three objects already changed and two not yet changed, when doing a non-transactional read.
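The window described above can be simulated deterministically with plain JDK collections; this is not Hazelcast's actual commit code, just the shape of the problem (staged entries applied one at a time, with nothing preventing a read in between):

```java
import java.util.*;

public class PartialCommitWindow {
    public static void main(String[] args) {
        // Shared "partition" state visible to non-transactional readers.
        Map<String, Integer> store = new HashMap<>();
        store.put("a", 1);
        store.put("b", 1);

        // Changes staged by a transaction, committed entry by entry.
        Map<String, Integer> staged = new LinkedHashMap<>();
        staged.put("a", 2);
        staged.put("b", 2);

        Iterator<Map.Entry<String, Integer>> commit = staged.entrySet().iterator();

        Map.Entry<String, Integer> first = commit.next();
        store.put(first.getKey(), first.getValue());
        // A non-transactional read at this instant sees a mixed state:
        System.out.println(store);

        Map.Entry<String, Integer> second = commit.next();
        store.put(second.getKey(), second.getValue());
        // Only after the last entry is applied is the commit fully visible:
        System.out.println(store);
    }
}
```

A reader that samples the store between the two puts sees three-of-five-style partial state, which is exactly the non-atomicity reported above.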
Besides the two ways to observe non-atomicity described in #15542, there's another one. If in a transaction you change two map entries that belong to the same partition, and the node holding that partition dies, then even with backups enabled you can sometimes observe that only one entry has been changed while the other remained unchanged.
I suppose that the second case falls under the category "transactions work only if the cluster is stable", though such a restriction is not stated in the manual. I must say that I was misled by the manual into believing that transactions do work correctly, especially since there's a section about the TWO_PHASE transaction type that one needs to use "in case of a member failure". I understood it as meaning that transactions will work even if the cluster is not stable.