@matrixmonster11 If you're no longer interested in consuming messages from a specific channel, then you should invoke `socket.closeChannel(channelName)` or `channel.close()` on the client side; this will both unsubscribe the client from the channel and break out of all for-await-of loops which are consuming the channel data (allowing them to be garbage collected automatically on the client side). On the server side, SC will garbage-collect any channel automatically as soon as it reaches 0 subscribers...
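For illustration, here is a minimal client-side sketch (assuming a recent `socketcluster-client`; the hostname/port and the 'chat' channel name are just placeholders):

```js
const socketClusterClient = require('socketcluster-client');

// Placeholder connection options.
const socket = socketClusterClient.create({ hostname: 'localhost', port: 8000 });

(async () => {
  // Subscribe and consume the channel with for-await-of.
  for await (let data of socket.subscribe('chat')) {
    console.log('chat message:', data);
  }
  // Execution only reaches this point once the loop has been broken out of,
  // e.g. after socket.closeChannel('chat') is called somewhere else.
  console.log('chat consumer ended');
})();

// Later, when the channel is no longer needed:
// this unsubscribes AND ends every for-await-of loop consuming 'chat'.
socket.closeChannel('chat');
```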
It's also possible to unsubscribe from a channel temporarily using `socket.unsubscribe(channelName)` or `channel.unsubscribe()`, but this will keep all the consumer for-await-of loops active, and they will resume message processing once the socket is subscribed to the channel again using `socket.subscribe(channelName)`... Note that unsubscribing will still cause the channel to be garbage collected on the server side (assuming it reaches 0 client subscribers), but the difference from `socket.closeChannel` is that unsubscribe doesn't fully clean up on the client side; it's only useful if you intend to re-use the channel later.
The subscription and consumption of channel data are treated independently in SC.
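By contrast, a temporary unsubscribe might look like this (same hypothetical 'chat' channel as in the previous sketch; its consumer loop stays parked in the background):

```js
// Temporarily stop receiving; for-await-of consumers stay active but idle.
socket.unsubscribe('chat');

// ... later: resubscribe, and the same consumer loops resume processing.
socket.subscribe('chat');
```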
@ehsanm94 In most situations, trying to map users to specific workers is a bad idea: it creates a potential DoS vulnerability, since an attacker could exploit the worker-selection/load-balancing rules to target and saturate specific workers with connection requests and messages. A random load-balancing algorithm is the most secure.
It's also better from a scalability point of view if you don't try to target specific workers. SC's pub/sub mechanism allows users to publish and subscribe to/from any worker in the cluster.
That said, if each worker serves a different purpose (e.g. a distinct game server with completely different players and data or a completely different service), then you might just want to associate each one with a different subdomain or URL.
I've tried `exchange.subscriptions(true)` and `clusterBrokerClient.getAllSubscriptions()`, but they all seem to return only the subscriptions of the broker that's executing the command. However, I'm interested in extracting all currently open subscriptions across all brokers.
@josiahbryan Sachin makes a good point about host limits; there are a few limits which can be configured on the host. One of these is the open-file limit (which caps the number of open sockets). You should ensure that the host/Linux configs will support that number of connections.
As for SC itself: depending on your use case, it is possible to support 75K concurrent connections on a single server, but it depends significantly on how often the average client publishes a message and how many subscribers each message reaches.
In a basic test I did a while ago, all clients were subscribed to the same channel and one client published a new message to this channel every 5 seconds (so each message would reach all clients). I found that a single process could comfortably handle 10K to 20K clients. So on a 4-core host, this would be between 40K and 80K clients.
The exact type of workload can have a huge impact on how many clients can be supported and how many `scc-broker` instances you will need. There are a lot of factors to consider. One of the main ones is whether the workload is publish-heavy or subscribe-heavy.
For subscribe-heavy workloads with many subscribers consuming messages from just a few publishers, you don't need many `scc-broker` instances; in extreme cases, 1 broker may be enough to support tens of millions of subscribers (client-facing SC worker instances would be the bottleneck in this case). SC tries to minimize the number of messages which pass through the broker on the back end.
For publish-heavy workloads, you may need more `scc-broker` instances. As a rough guideline, I aim to keep the number of messages which pass through each `scc-broker` process below 10K messages per second. SCC shards different channels across the available `scc-broker` instances using a consistent hashing algorithm, which gives a random distribution; the more channels you have, the more even the distribution is (see the sketch below).
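As a simplified illustration of the sharding idea (this is not SCC's actual implementation; the hash function here is just a stand-in):

```js
// Simplified illustration of channel sharding: hash the channel name
// and map it deterministically to one of the available brokers.
function selectBrokerIndex(channelName, brokerCount) {
  let hash = 0;
  for (let i = 0; i < channelName.length; i++) {
    // Simple string hash; a real implementation would use a stronger hash.
    hash = (hash * 31 + channelName.charCodeAt(i)) | 0;
  }
  return Math.abs(hash) % brokerCount;
}

// With many channels, this spreads load roughly evenly across brokers.
console.log(selectBrokerIndex('chat-room-42', 4)); // => an index in 0..3
```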
@bryandeasis Yes, it can autoscale. As you add and remove `scc-broker` and `scc-worker` instances, the channel sharding is adjusted automatically.
Clients do not connect to the `scc-state` instance, so it is not a bottleneck for scalability. A single `scc-state` server should be able to comfortably connect to over 1K `scc-worker` and `scc-broker` instances... If you assume that each `scc-worker` can handle 20K connections, that's 20K * 1K = 20 million concurrent connections.
You could use the `clients` property (https://socketcluster.io/docs/api-ag-server/#properties) and then have each server send them to a central place... But this approach won't scale. Beyond a certain number of clients, it's physically impossible to have all the data in a single place because it will be too much data.
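For what it's worth, the per-server counting approach could look roughly like this (`agServer` is a worker's AGServer instance and `reportToCentralStore` is a hypothetical helper):

```js
// Rough sketch: each worker periodically reports its own connected
// socket IDs to a central store.
setInterval(() => {
  let socketIds = Object.keys(agServer.clients);
  reportToCentralStore({
    workerId: process.pid,
    clientCount: socketIds.length,
    socketIds // beyond a certain scale, shipping full ID lists won't work
  });
}, 10000);
```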
@jondubois Thank you very much for your help and support. I have another concern: how can I know whether a specific user or group of users is connected to the SC cluster, by user ID?
I can store the user status in Redis: if the user emits the login event and logs in successfully, I will mark this user as online, and if the user disconnects I will mark him as offline. But if an SC worker goes down or crashes, how do I mark all the users that were connected to the crashed SC worker as offline?
Please help.
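(For reference, the approach described above might be sketched like this, using `ioredis`; the 'login' receiver name, the payload shape, and the key format are hypothetical, and this sketch does not yet address the crashed-worker case:)

```js
const Redis = require('ioredis');

const redis = new Redis();
// agServer is an AGServer instance attached to an HTTP server elsewhere.

(async () => {
  for await (let { socket } of agServer.listener('connection')) {
    (async () => {
      // Mark the user online once the app-level login event arrives.
      for await (let userId of socket.receiver('login')) {
        socket.userId = userId;
        await redis.set(`user:${userId}:status`, 'online');
      }
    })();

    (async () => {
      // Mark the user offline when their socket disconnects.
      for await (let event of socket.listener('disconnect')) {
        if (socket.userId) {
          await redis.set(`user:${socket.userId}:status`, 'offline');
        }
      }
    })();
  }
})();
```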