    Hazelcast
    @hazelcast_twitter
    [Viliam Ďurina (viliam)] @vigo_gitlab_mt_gitlab looks like you've hit some edge case. The capacityLimit is 128kB, and the snapshot is stored in chunks of up to 128kB. I'm trying to reproduce it.
    Jet has no hard limit on the state size, but as it gets bigger, you'll run into GC pause issues. We don't yet support off-heap state. The snapshot is much smaller and is stored in an IMap in 128kB chunks, so the GC overhead caused by it is smaller (GC slowness is caused by a large number of objects, not by the size of individual objects).
    Can Gencer
    @cangencer
    @vigo_gitlab_mt_gitlab btw you can also use the dedicated Jet gitter channel https://gitter.im/hazelcast/hazelcast-jet
    could you create an issue on the repository?
    Maksim
    @vigo_gitlab_mt_gitlab

    @hazelcast_twitter Thanks for the response. I'm still trying to reproduce this issue, but I only get an OOM..

    @cangencer

    btw you can also use the dedicated Jet gitter channel https://gitter.im/hazelcast/hazelcast-jet

    Ok, thanks

    could you create an issue on the repository?

    yes, sure, I'll do it later

    Hazelcast
    @hazelcast_twitter
    [Viliam Ďurina (viliam)] @vigo_gitlab_mt_gitlab here's the fix: hazelcast/hazelcast-jet#1771. It happened when the size of your serialized key and value was between 131048 and 131059 bytes; see the PR for details if you're interested
    Maksim
    @vigo_gitlab_mt_gitlab
    @hazelcast_twitter thanks for solving the problem so fast, I'll take a closer look at this PR
    NemesisMate
    @NemesisMate

    Hi there, I'm trying to schedule a task in Hazelcast. This task is only to be executed by one of the nodes. All the nodes run the same code (they are the same application, horizontally scaled), which is why I'm trying to use a named task (expecting it not to be created if it is already present).

    So far, this is how I try to use it:
    hzInstance.getScheduledExecutorService("myScheduler").scheduleAtFixedRate(named("MyTask", new EchoTask("Executed")), 10, 5, TimeUnit.SECONDS);

    And although it seems to work, it always throws an asynchronous DuplicateTaskException (which I can't catch).
    Is there a "proper way" to handle this?

    Hazelcast
    @hazelcast_twitter
    [Thomas Kountis (tkountis)] @NemesisMate got a reproducer for what you are seeing?
    NemesisMate
    @NemesisMate
    yes, that's what's happening: an exception is thrown. However, what I want is for it to not throw the exception, since not creating the task is the behavior I expect.
    Hazelcast
    @hazelcast_twitter
    [Thomas Kountis (tkountis)] there is no way for quiet scheduling (aka ignoring dup exceptions). that leaves you with the good old try-catch-ignore solution :\
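    A minimal sketch of that try-catch-ignore approach, reusing the names from the question above (EchoTask is assumed to be the caller's serializable Runnable):

        import static com.hazelcast.scheduledexecutor.TaskUtils.named;
        import com.hazelcast.scheduledexecutor.DuplicateTaskException;
        import com.hazelcast.scheduledexecutor.IScheduledExecutorService;
        import java.util.concurrent.TimeUnit;

        IScheduledExecutorService scheduler = hzInstance.getScheduledExecutorService("myScheduler");
        try {
            // Only the first member to register "MyTask" succeeds; the others
            // get a DuplicateTaskException, which is the desired outcome here.
            scheduler.scheduleAtFixedRate(named("MyTask", new EchoTask("Executed")), 10, 5, TimeUnit.SECONDS);
        } catch (DuplicateTaskException ignored) {
            // The named task already exists on the cluster; nothing to do.
        }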
    NemesisMate
    @NemesisMate
    well, since the exception is thrown asynchronously, I can't try-catch it. So it leaves me with ugly logs. If this is the case, I guess the best option is to open a formal issue?
    Hazelcast
    @hazelcast_twitter
    [Thomas Kountis (tkountis)] i don't get the async part, that's why i shared the test code with you. the exception should happen immediately, not async, unless you have some async logic on the caller side. could you share a reproducer where try-catch is not possible?
    Álvaro Sánchez-Mariscal
    @alvarosanchez
    Hi there. I’m Álvaro, from the Micronaut team, working (among other things) on Hazelcast support
    Hazelcast
    @hazelcast_twitter
    [Thomas Kountis (tkountis)] Hi Álvaro
    Álvaro Sánchez-Mariscal
    @alvarosanchez
    We have a TCK for different cache providers, and one of the tests expects a cache with a maximum size of 3 entries. I’m trying to set that up in a test, but eviction isn’t happening
    What I would expect is that, upon inserting 4 elements in a row, the fourth insertion would evict one entry
    This is the configuration I’m using:
    MapConfig mapConfig = new MapConfig()
            .setMaxSizeConfig(new MaxSizeConfig()
                    .setMaxSizePolicy(MaxSizeConfig.MaxSizePolicy.PER_PARTITION)
                    .setSize(3))
            .setEvictionPolicy(EvictionPolicy.LRU)
            .setInMemoryFormat(InMemoryFormat.OBJECT)
            .setName("test”);
    Config config = new Config("HazelcastEvictionSpec”);
    config.addMapConfig(mapConfig);
    HazelcastInstance hazelcastServerInstance = Hazelcast.newHazelcastInstance(config);
    Hazelcast
    @hazelcast_twitter
    [Thomas Kountis (tkountis)] MaxSizePolicy.PER_PARTITION means 3 entries per partition, so with the default 271 partitions that's up to 3 * 271 entries in total, not just 3 elements
    [Thomas Kountis (tkountis)] unless you target a particular partition only in your test
    Álvaro Sánchez-Mariscal
    @alvarosanchez
    Right, I have read that using PER_NODE would make it size times the number of partitions (271 by default)
    Hazelcast
    @hazelcast_twitter
    [Thomas Kountis (tkountis)] PER_NODE should make it size * num_of_instances
    [Nicolas Fränkel (Nicolas)] welcome @alvaro
    Álvaro Sánchez-Mariscal
    @alvarosanchez
    Using PER_NODE yields this warning: The max size configuration for map "test" does not allow any data in the map. Given the current cluster size of 1 members with 271 partitions, max size should be at least 271. Map size is forced set to 271 for backward compatibility
    How can I target a particular partition in the test?
    Álvaro Sánchez-Mariscal
    @alvarosanchez
    I think I found the answer
    Álvaro Sánchez-Mariscal
    @alvarosanchez
    Declaring a PartitioningStrategy
    Hazelcast
    @hazelcast_twitter
    [Thomas Kountis (tkountis)] or yes, you can use that :)
    Álvaro Sánchez-Mariscal
    @alvarosanchez
    Yep, that works
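    For reference, a minimal sketch of that approach (the class name SamePartitionStrategy is illustrative, not from this conversation): a strategy that returns a constant partition key sends all entries to one partition, so the PER_PARTITION limit of 3 applies to the whole map.

        public class SamePartitionStrategy implements PartitioningStrategy<Object> {
            @Override
            public Object getPartitionKey(Object key) {
                return "fixed"; // every key resolves to the same partition
            }
        }
        // Wire it into the map configuration:
        mapConfig.setPartitioningStrategyConfig(
                new PartitioningStrategyConfig(SamePartitionStrategy.class.getName()));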
    Vassilis Bekiaris
    @vbekiaris
    another way to achieve that could be to set your partition count to 1: config.setProperty("hazelcast.partition.count", "1");
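    A quick sketch of this variant, reusing the MapConfig from above:

        Config config = new Config("HazelcastEvictionSpec");
        // With a single partition, PER_PARTITION size 3 caps the whole map at 3 entries.
        config.setProperty("hazelcast.partition.count", "1");
        config.addMapConfig(mapConfig);
        HazelcastInstance hazelcastServerInstance = Hazelcast.newHazelcastInstance(config);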
    Álvaro Sánchez-Mariscal
    @alvarosanchez
    awesome! easier indeed
    Álvaro Sánchez-Mariscal
    @alvarosanchez
    ok, that was it
    Vassilis Bekiaris
    @vbekiaris
    :+1:
    Álvaro Sánchez-Mariscal
    @alvarosanchez
    FYI, the upcoming Micronaut 1.3 version will contain support for Hazelcast: https://micronaut-projects.github.io/micronaut-cache/snapshot/guide/#hazelcast
    Vassilis Bekiaris
    @vbekiaris
    sounds great, thanks for the heads up!
    Tomasz Gawęda
    @TomaszGaweda
    Hello :) Some time ago we spotted a huge problem with transactions, see hazelcast/hazelcast#15542. Do you know if it's targeted for 4.0? It's very important for us (we rely on transactions), but it seems non-trivial
    Matko Medenjak
    @mmedenjak

    Hi, no, we're not planning on fixing that in 4.0. Our transaction support is very limited: we assume that if a transaction passes the prepare phase, the commit phase will not fail, and we don't check any interceptors during the prepare phase. We also assume that the cluster is stable while transactions are used. Our "transactions" work for some very simple cases, but for non-trivial cases they can fail in various ways.

    Overall, the transactions are as-is for now, and in the future we might consider a complete redesign instead of addressing issues individually. If you want, you can try preparing a PR and we might include it in 4.0 (we are about to release GA in about a month).

    Tomasz Gawęda
    @TomaszGaweda
    Thanks for the response!
    Wojciech Gdela
    @gdela

    I've done some tests, and yes, the transactions do "fail" in various ways.

    1. Contrary to what is stated in the Hazelcast Manual, changes done in a transaction are not properly isolated from other transactions or from non-transactional reads. Granted, the changes are not visible from outside before you call TransactionContext.commitTransaction(). But if you've changed, for example, five map entries in the transaction (even ones belonging to the same IMap and the same partition), there is a short period of time during commit when you can observe those changes partially, i.e. see three objects already changed and two not yet changed when doing IMap.values().

    2. Besides the two ways to observe non-atomicity described in #15542, there's another one. If in a transaction you change two map entries that belong to the same partition, and the node holding that partition dies, then even with backups enabled you can sometimes observe that only one entry has been changed while the other remained unchanged.

    I suppose the second case falls under the category "transactions work only if the cluster is stable", though such a restriction is not stated in the manual. I must say that I was misled by the manual into believing that transactions do work correctly, especially since there's a section about the TWO_PHASE transaction type that one needs to use "in case of a member failure". I understood it to mean that transactions will work even if the cluster is not stable.
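    For illustration, a minimal sketch of the kind of test that exposes the first anomaly (map name and values are made up):

        TransactionOptions options = new TransactionOptions()
                .setTransactionType(TransactionOptions.TransactionType.TWO_PHASE);
        TransactionContext ctx = hz.newTransactionContext(options);
        ctx.beginTransaction();
        TransactionalMap<String, String> map = ctx.getMap("accounts");
        for (int i = 0; i < 5; i++) {
            map.put("entry-" + i, "changed"); // staged, invisible until commit
        }
        // During this commit, a concurrent IMap.values() call on another thread
        // may briefly observe some entries changed and others not yet changed.
        ctx.commitTransaction();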

    Matko Medenjak
    @mmedenjak
    I'll try to update the reference manual and javadoc, but generally speaking, our "transactions" were written a long time ago and definitely don't hold up to the widespread standards one would expect.
    lishijun121910
    @lishijun121910
    I am doing some tests of heap usage and Hazelcast performance.
    I put 500,000 entries in a map, each entry 1kB in size; hazelcast-mc shows that the map takes about 554MB of memory, with no other data.
    My hazelcast-server is a 3-node cluster, each node with a 2GB heap, so I think there shouldn't be any memory problem.
    But in my test, every node is continuously running GC, and there are spikes in QPS every 10~20 seconds. Why does this happen?
    Can 3*2GB=6GB not serve 500MB of data?
    [screenshots attached]
    Can Gencer
    @cangencer
    do you have any indexes?
    what queries are you making?
    Matko Medenjak
    @mmedenjak
    @lishijun121910 since you say you're simply putting data and querying it, can you create a sample project where we can reproduce this?
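    For what it's worth, a minimal sketch of such a reproducer based on the numbers above (map name and key type are illustrative):

        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<Integer, byte[]> map = hz.getMap("test");
        byte[] value = new byte[1024]; // 1kB per entry, as described
        for (int i = 0; i < 500_000; i++) {
            map.put(i, value); // 500,000 x 1kB ≈ 500MB of raw data
        }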