    Sagar Raut
    @v8sagar
    setPublicAddress("127.0.0.1")?
    It worked thanks for clarifying it :)
    Jaromir Hamala
    @jerrinot
    you have 3 options:
    1. use multicast discovery. this is the default. it's simple to use and has no external dependency, but it does not work on most cloud providers as clouds do not allow multicast traffic.
    2. use static configuration. then you have to know the IP addresses of all your members. I see you use join.getTcpIpConfig().setEnabled(true).setMembers(Arrays.asList(environment)); for this joiner you have to know all IP addresses anyway.
    3. use a discovery plugin. this is the most popular option when deploying in clouds. we have discovery plugins for AWS, Azure, GCP, Kubernetes, ZooKeeper, Eureka, etc.
    setPublicAddress("127.0.0.1") will work only on a single box. again, remove that; most likely you don't need it at all.
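A minimal sketch of the three joiner options above, against the Hazelcast 3.x API used throughout this thread (the member IPs and tag values are placeholders, not taken from the conversation):

```java
import com.hazelcast.config.Config;
import com.hazelcast.config.JoinConfig;

import java.util.Arrays;

public class JoinOptions {
    public static void main(String[] args) {
        Config config = new Config();
        JoinConfig join = config.getNetworkConfig().getJoin();

        // Option 1: multicast discovery (the default; blocked on most clouds)
        join.getMulticastConfig().setEnabled(true);

        // Option 2: static TCP/IP configuration (you must know all member IPs)
        // join.getMulticastConfig().setEnabled(false);
        // join.getTcpIpConfig().setEnabled(true)
        //         .setMembers(Arrays.asList("10.0.0.1", "10.0.0.2"));

        // Option 3: a discovery plugin, e.g. AWS (tag key/value are placeholders)
        // join.getMulticastConfig().setEnabled(false);
        // join.getAwsConfig().setEnabled(true)
        //         .setProperty("tag-key", "my-tag-key")
        //         .setProperty("tag-value", "my-tag-value");
    }
}
```

Only one joiner should be enabled at a time; the commented-out blocks show the alternatives.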
    Sagar Raut
    @v8sagar
    okay thanks
    Jaromir Hamala
    @jerrinot
    you are very welcome. Happy Hazelcasting!
    Sagar Raut
    @v8sagar
    @jerrinot
    for my case, this will work perfectly fine, right?
        Config config = new Config();
        config.setLiteMember(false);
        JoinConfig join = config.getNetworkConfig().getJoin();
        config.getNetworkConfig().getJoin().getMulticastConfig().setEnabled(false);
        config.getNetworkConfig().getJoin().getAwsConfig().setEnabled(true)
                .setProperty("tag-key", "my-ec2-instance-tag-key")
                .setProperty("tag-value", "my-ec2-instance-tag-value");
    
        CacheSimpleConfig cacheConfig = new CacheSimpleConfig();
        cacheConfig.setName("buckets");
        config.addCacheConfig(cacheConfig);
    
        HazelcastInstance hazelcastInstance = Hazelcast.newHazelcastInstance(config);
        ICacheManager cacheManager = hazelcastInstance.getCacheManager();
        Cache<String, GridBucketState> cache = cacheManager.getCache("buckets");
    
        return cache;
    1> 2 Hazelcast instances running on 1 server
    2> there is another instance running on a different server
    3> all instances are in the same network
    4> all 3 instances are supposed to form a single cluster
    Jaromir Hamala
    @jerrinot

    that looks fine to me.
    some notes: config.setLiteMember(false); is redundant as that's the default.

    depending on your deployment scheme you might want to store hazelcastInstance somewhere and shut it down when your application is about to shut down. this is mostly a concern if you deploy to an app server with multiple tenants (applications). If you are using something like Spring Boot (or appserver-less deployment in general) then it's usually not a concern.

    other than that - it looks good to me
    if you are on AWS then using the AWS plugin for discovery is the best option
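A sketch of the shutdown note above for a Spring Boot deployment (the class and bean names here are illustrative, not from the thread): registering the instance as a bean with a destroy method lets the container shut it down when the application context closes.

```java
import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class HazelcastConfiguration {

    // Spring calls HazelcastInstance.shutdown() when the context closes,
    // so the member leaves the cluster cleanly instead of being dropped.
    @Bean(destroyMethod = "shutdown")
    public HazelcastInstance hazelcastInstance() {
        return Hazelcast.newHazelcastInstance(new Config());
    }
}
```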
    Sagar Raut
    @v8sagar
    yes I am on AWS
    and I am using Spring Boot
    Sagar Raut
    @v8sagar
    @jerrinot if this works
    I will be creating a GitHub repo with lots of explanations for someone like me
    Happy Hazelcasting
    Jaromir Hamala
    @jerrinot
    that would be excellent! @mesutcelik would certainly be interested in that!
    Sagar Raut
    @v8sagar
    @jerrinot I wanted to know one additional thing
    if all my three instances are now on the same network
    will this work
        Config config = new Config();
        config.setLiteMember(false);
        System.out.println(Arrays.asList(environment));
        JoinConfig join = config.getNetworkConfig().getJoin();
        join.getMulticastConfig().setEnabled(false);
        join.getAwsConfig().setEnabled(false);
        join.getTcpIpConfig().setEnabled(true).setMembers(Arrays.asList("172.323.24.130","896.341.438.65"));
    Hazelcast
    @hazelcast_twitter
    [Rafal Leszko (unknown)] @v8sagar If the IPs are correct, then your static TCP/IP configuration will work, but as mentioned, you're better off using the AWS Discovery plugin when running on AWS.
    Sagar Raut
    @v8sagar
    @hazelcast_twitter thank you
    Vignesh-Thiraviam
    @Vignesh-Thiraviam
    Hi, I need to disable some URLs in the Hazelcast Management Center REST APIs
    Jaromir Hamala
    @jerrinot
    @Vignesh-Thiraviam what's your Hazelcast version?
    Vignesh-Thiraviam
    @Vignesh-Thiraviam
    @jerrinot Hazelcast Management Center, version 3.12.8
    Jaromir Hamala
    @jerrinot
    @emre-aydin please see :point_up: Is this possible at all?
    Vignesh-Thiraviam
    @Vignesh-Thiraviam
    @jerrinot I have enabled Clustered REST in Hazelcast Management Center, but it provides a lot of APIs; I need to enable only map reads
    Vignesh-Thiraviam
    @Vignesh-Thiraviam
    @hazelcast_twitter can you help on this
    Sagar Raut
    @v8sagar
    hi all, if I set this config and the same code is deployed on two different AWS instances,
    api-1 and api-2, both talking to each other, will this work?
     config.getNetworkConfig().getJoin().getMulticastConfig().setEnabled(false);
        config.getNetworkConfig().getJoin().getAwsConfig().setEnabled(true)
                .setProperty("tag-key", "Name1")
                .setProperty("tag-value", " api-1")
                .setProperty("access-key","1")
                .setProperty("secret-key","1");
    
        config.getNetworkConfig().getJoin().getMulticastConfig().setEnabled(false);
        config.getNetworkConfig().getJoin().getAwsConfig().setEnabled(true)
                .setProperty("tag-key", "Name2")
                .setProperty("tag-value", " api-2")
                .setProperty("access-key","2")
                .setProperty("secret-key","2");
    Hazelcast
    @hazelcast_twitter
    [Rafal Leszko (unknown)] Yes, it looks correct, assuming you have the tags configured on your EC2 instances and port 5701 is open in the security groups
    Sagar Raut
    @v8sagar
    Just to rephrase
    the same config code has been deployed on both servers, I mean api-1 and api-2
    config.getNetworkConfig().getJoin().getMulticastConfig().setEnabled(false);
    config.getNetworkConfig().getJoin().getAwsConfig().setEnabled(true)
    .setProperty("tag-key", "Name1")
    .setProperty("tag-value", " api-1")
    .setProperty("access-key","1")
    .setProperty("secret-key","1");
    config.getNetworkConfig().getJoin().getMulticastConfig().setEnabled(false);
    config.getNetworkConfig().getJoin().getAwsConfig().setEnabled(true)
            .setProperty("tag-key", "Name2")
            .setProperty("tag-value", " api-2")
            .setProperty("access-key","2")
            .setProperty("secret-key","2");
    the above code is constant
    Hazelcast
    @hazelcast_twitter
    [Rafal Leszko (unknown)] yes, it's fine
    Jaromir Hamala
    @jerrinot
    @Vignesh-Thiraviam the clustered API in Management Center is all-or-nothing. It does not allow fine-grained per-resource configuration for now.
    the new Management Center for Hazelcast 4 has a Prometheus exporter and you will be able to define what to export.
    Vignesh-Thiraviam
    @Vignesh-Thiraviam
    Thank you very much @jerrinot. I will try to upgrade the version and use Prometheus
    Jaromir Hamala
    @jerrinot
    you are very welcome. please report back your results!
    Jaromir Hamala
    @jerrinot

    Hello everyone,
    I am from the Hazelcast development team and I am trying to learn more about how people use Hazelcast.

    One of the Hazelcast pain points is its reliance on Java bytecode.
    Example: IMap Entry Processors are awesome; they can often replace a complicated locking scheme and increase both reliability and performance.
    However, they have one downside: they require the bytecode to be available on each member's classpath. This is usually not an issue if you are embedding Hazelcast inside your application. But it could be a problem in the client-server topology, when your app uses a remote cluster. In this case you have to either distribute the JARs with the Entry Processors to all cluster members or use User Code Deployment.

    I have 2 different questions:

    1. If you use the client-server topology: are you using User Code Deployment or shipping JARs manually?
    2. If you use the embedded topology: why? What are your reasons?

    I will be grateful for any feedback, thank you!
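For readers who haven't used Entry Processors, a minimal example of the kind described above (the map name and increment logic are invented for illustration); in the client-server topology this class is exactly the bytecode that would have to reach every member, either as a shipped JAR or via User Code Deployment:

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import com.hazelcast.map.AbstractEntryProcessor;

import java.util.Map;

public class IncrementProcessor extends AbstractEntryProcessor<String, Integer> {

    // Runs on the member that owns the key, on the partition thread,
    // so the read-modify-write below is atomic without explicit locking.
    @Override
    public Object process(Map.Entry<String, Integer> entry) {
        Integer value = entry.getValue();
        entry.setValue(value == null ? 1 : value + 1);
        return entry.getValue();
    }

    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<String, Integer> counters = hz.getMap("counters");
        counters.executeOnKey("page-views", new IncrementProcessor());
        hz.shutdown();
    }
}
```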

    Enes Ozcan
    @enozcan

    Hi everyone,

    We just introduced Hazelcast Guides, with ~10 min. guides on Hazelcast integrations with Spring Boot, Quarkus, Micronaut, MicroProfile, Kubernetes, Istio, and more! Check it out at https://guides.hazelcast.org

    Serdar Ozmen
    @Serdaro
    Hello everyone,
    In addition to the brilliantly prepared guides mentioned above, please also don't forget to check out the latest development version of the Hazelcast IMDG Reference Manual, updated and uploaded daily: https://docs.hazelcast.org/docs/latest-dev/manual/html-single/#
    Vignesh-Thiraviam
    @Vignesh-Thiraviam
    hazelcast joins with a different port rather than the one specified in the member configuration
    it joins with 5702 instead of 5701, can anyone help with this?
    Hazelcast
    @hazelcast_twitter
    [Sharath Sahadevan (Sharath Sahadevan)] Hi Vignesh - typically I see this if the specified port is not available and auto-increment is on. The resolution is to free up that port and restart the process. If that does not work, are you seeing any error messages in your logs? Also, please provide your member config and info on the environment you are running in. Thanks!
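For reference, the auto-increment behavior described here is controlled on the network config; a sketch with the defaults spelled out explicitly (the class name is illustrative):

```java
import com.hazelcast.config.Config;
import com.hazelcast.config.NetworkConfig;

public class PortSettings {
    public static void main(String[] args) {
        NetworkConfig network = new Config().getNetworkConfig();

        network.setPort(5701);
        // With auto-increment on (the default), a member that finds 5701 taken
        // tries 5702, 5703, ... up to port-count ports. Disabling it makes the
        // member fail fast instead of silently binding to the next free port.
        network.setPortAutoIncrement(true);
        network.setPortCount(100);
    }
}
```

This is why a second Hazelcast process on the same host ends up on 5702.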
    Vignesh-Thiraviam
    @Vignesh-Thiraviam
    hi, yes sure, I am running this in an OpenShift environment
    network:
      join:
        multicast:
          enabled: false
        tcp-ip:
          enabled: true
          members: "10.173.14.23:5701,10.173.14.24:5701,10.173.14.25:5701,10.173.14.26:5701"
    this is my member config
    it is connecting to 10.173.14.23:5702 instead of 10.173.14.23:5701
    hazelcast is present on 10.173.14.23:5702 as well, so I am not getting any errors
    @ssahadevan
    lprimak
    @lprimak
    @jerrinot That's the problem I came across with Payara. If the application is not deployed on all nodes in the cluster, the cluster stops working.
    in my implementation of Tenant Control, I just have to stop all processing indefinitely until the application becomes available on all nodes in the cluster. Not the "ultimate" solution, but it works, since the use case is most likely rolling upgrades.
    It would be nice if migration could take application availability into account, but I think that's too ingrained in the design and would be too difficult to do
    well, not all processing, just the processing that has a partition on a node that doesn't have the application available