Paul
@wilensky
@SloppyShovel the strange thing is that the client receives all events in the right order and the JWT is there as well. Here is the output of the CLI client (Connected: OK, Authenticated: OK + JWT):
(screenshot: CLI client output)
Paul
@wilensky
It seems I nailed it down: the error occurs on socket.disconnect(code, msg). We have a timeout for authentication, so a user can't stay connected unauthenticated, and the rule that was checking this condition was broken due to a changed flow. But it's still a mystery to me why it throws such an error if this .disconnect() occurs well after authentication has passed and the JWT has been received by the client.
Paul
@wilensky
There was an error when attaching one of the RPCs and that made a real mess, but the error appeared nicely in the log, so I figured "ok, it was handled, what could go wrong" ... what a shame.
Kevin Boon
@SloppyShovel
@wilensky it might seem strange, but I think it has something to do with the middleware flow. When there is a small disconnect or even a reconnect, the socket automatically becomes unauthenticated for a moment; if the timer isn't reset and still sends the disconnect, you get a really messy flow. I think you have to go over your unauthenticated timeout and make sure the flow is correct. The way we did it is using middleware authentication, and it seems to work without any issue: you can still connect, but you won't be able to do much unless you're authenticated. If you need help figuring out the best way to create what you want, just let me know, even in a DM if that works better.
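The timer hygiene Kevin describes can be sketched like this. This is a hypothetical watchdog shape, not SocketCluster API: the point is that the disconnect timer must be cancelled the moment authentication succeeds, so a brief reconnect can't inherit a stale timer.

```javascript
// Hypothetical auth watchdog: fires a disconnect only if the socket is still
// unauthenticated when the timeout elapses. In a real server, fire() would be
// driven by setTimeout and authenticated() by the 'authenticate' event.
function createAuthWatchdog(onTimeout) {
  let state = 'pending';
  return {
    authenticated() {
      if (state === 'pending') state = 'authenticated'; // cancel the pending disconnect
    },
    fire() {
      // Timeout elapsed: only disconnect if auth never completed.
      if (state === 'pending') {
        state = 'timed-out';
        onTimeout();
      }
    },
    getState() { return state; }
  };
}

// Correct flow: auth completes first, so the later timeout is a no-op.
let disconnected = false;
const watchdog = createAuthWatchdog(() => { disconnected = true; });
watchdog.authenticated();
watchdog.fire();
// disconnected → false; a stale timer no longer kills an authenticated socket
```

The bug Paul hit matches the failure mode where fire() runs after authentication because the timer was never cleared or reset on a reconnect.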
Andrè Straube
@astraube

Hello my friends, I really liked the framework.

I would like to know whether there is a maximum number of simultaneous connections.

Kevin Boon
@SloppyShovel
@astraube there is no set limit on the maximum number of connections. There are different options for different solutions, but they all come with their downsides, like everything else; it's up to the developer to build it the way that works for the project you're working on.
Oussama Mubarak
@semiaddict_gitlab

Hello, I am new to SC, which seems pretty promising.
I am trying to allow or deny connection to the server depending on data sent by the client. I am using a handshake middleware for this which looks at the query parameters sent when creating the client. This seems to work fine, but I can't manage to receive the error message on the client side that was generated using action.block(err) in the server side middleware. All I get on the client side is an error of type "SocketProtocolError" with the generic message "Socket hung up".
Here's the relevant code inside the handshake middleware:

for await (let action of middlewareStream) {
  const err = new Error('No ID provided');
  err.name = 'MissingParameter';
  action.block(err);
  continue;
}

And here's the listener on the client side:

for await (let data of this.socket.listener('error')) {
  console.log('error', data);
};

I also tried listening to other events such as connectAbort, close and disconnect, but those are not called.
Am I doing something wrong? Should I not be doing this during the handshake?
Any help would be highly appreciated.
Thank you in advance.

Maarten Coppens
@maarteNNNN
@semiaddict_gitlab I simulated your code here locally, and upon firing the middleware on the server I get the MissingParameter response.
Could you post a bigger scope of your client-side code? I think the problem is the this.socket.listener('error')
feel free to DM me
This is my client-side code:
(async () => {
  for await (let { error } of socket.listener('error')) {
    console.error(error);
  }
})();
Kevin Boon
@SloppyShovel
@semiaddict_gitlab Welcome! Yes, you can block connections in middleware; this is also recommended for authentication. I think the issue you are running into is that you are blocking in HANDSHAKE_WS, which is the handshake for the underlying WebSocket, but you are expecting the behaviour of HANDSHAKE_SC, the SocketCluster handshake.
Kevin Boon
@SloppyShovel
Here is a better explanation:
HANDSHAKE_WS middleware is special because it happens before the underlying WebSocket has been created (at the HTTP/WS handshake stage). If you block the connection by passing an error to the next(err) callback, the error string will show up in your browser's developer panel (Network tab in Chrome), but there is no way to handle this error in your code. If you try to listen to the 'connectAbort' event, the error code will always be 1006 and you won't be able to get any additional information about it. This is a limitation of the WebSocket RFC itself. For this reason, blocking connections with HANDSHAKE_WS mostly makes sense for quickly and efficiently shutting down malicious connections. If you want a more client-friendly way to kill a connection, you should use the HANDSHAKE_SC middleware instead.
The next(err) callback is now called action.block(error) in v16.
Hope that clears things up.
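To make the distinction concrete, here is a minimal sketch of the decision logic you would run inside a v16 handshake middleware loop. The action objects here are mocked plain objects, not the real AGAction API, and the id check is a stand-in for inspecting the real query parameters:

```javascript
// Only the SC-stage block reaches the client with a useful error;
// a WS-stage block surfaces on the client as a bare 1006 close code.
function handleHandshakeAction(action) {
  if (action.type === 'handshakeSC') {
    // HANDSHAKE_SC stage: safe place to block with a client-visible error.
    if (!action.hasId) { // stand-in for checking the real query parameters
      const err = new Error('No ID provided');
      err.name = 'MissingParameter';
      action.block(err);
      return;
    }
  }
  action.allow(); // let WS-stage actions and valid SC handshakes through
}

// Mocked actions standing in for what the middleware stream would yield:
function runAction(type, hasId) {
  let outcome = 'pending';
  handleHandshakeAction({
    type,
    hasId,
    block: (err) => { outcome = `blocked:${err.name}`; },
    allow: () => { outcome = 'allowed'; }
  });
  return outcome;
}

const scMissing = runAction('handshakeSC', false);
const scOk = runAction('handshakeSC', true);
// scMissing → 'blocked:MissingParameter', scOk → 'allowed'
```

In a real server this logic would sit inside the for-await loop of the middleware attached via agServer.setMiddleware.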
Andrè Straube
@astraube
@SloppyShovel thanks my friend
Oussama Mubarak
@semiaddict_gitlab
Thank you @SloppyShovel and @maarteNNNN for your help.
I indeed managed to get it working by blocking only HANDSHAKE_SC actions. That makes sense now, but it might be good to add more information to the docs for future users.
Thank you again for your responses.
Nguyễn Thu Đức Trung
@trungntd-mirabo
Hi guys. I've encountered a problem recently. I am using socketcluster version 16, and it seems that every socket connection shares the same exchange object. After performing a subscribe on the first socket connection by calling socket.exchange.subscribe(channelName), the second socket's exchange is subscribed to the same channel as well (socket.exchange.isSubscribed(channelName) returns true). And when I perform an unsubscribe on one socket, the other one unsubscribes too. I noticed that this behaviour only happens in version 15+.
Does anyone have an idea how I can do a channel subscribe on the server side (not the client) that is separate for each socket connection?
Kevin Boon
@SloppyShovel
@trungntd-mirabo it's not recommended to let the server do requests on behalf of the client; you can run into all kinds of issues.
But that said, post a snippet of the code you have right now.
And what is the reason you want the server to subscribe on behalf of the client?
Nguyễn Thu Đức Trung
@trungntd-mirabo

@SloppyShovel here is my current code

export const subscribeChannel = async (socket, channelName) => {
  const remote = `${socket.remoteAddress}:${socket.remotePort}`
  let channel
  if (socket.exchange.isSubscribed(channelName)) {
    channel = socket.exchange.channel(channelName)
  } else {
    channel = socket.exchange.subscribe(channelName)
  }
  for await (const data of channel) {
    const userId = socket.authToken?.uid || ANONYMOUS_USER_ID
    // Single liveness check; log and forward only while the socket is open.
    if (socket.state !== socket.CLOSED) {
      console.log(`User ${userId} watch channel ${channelName}'s ${JSON.stringify(data)} on ${remote}`)
      socket.transmit('notification', data)
    }
  }
}

export const unsubscribeChannel = async (socket, channelName, msg) => {
  const userId = socket.authToken?.uid || ANONYMOUS_USER_ID
  const publishData = { uid: userId, msg }
  // NOTE: _broker/_clientSubscribers are private internals, not a public API
  const subscriberClients = socket.exchange._broker._clientSubscribers[channelName] || {}
  Object.values(subscriberClients).forEach((client) => {
    client.transmit('#publish', {
      event: 'notification',
      channel: channelName,
      data: publishData,
    })
  })
  const channel = socket.exchange.channel(channelName)
  channel.closeAllListeners()
  socket.exchange.unsubscribe(channelName)
}

I want all users to subscribe to the same channel, so that when a new user joins or exits, the other users are notified. I want to do this separately from the client side, since client-side channels are unmanaged on the server side.

Kevin Boon
@SloppyShovel
@trungntd-mirabo gotcha, and why can't you use the following options on the server side?
(async () => {
  for await (let { socket, channel, channelOptions } of agServer.listener('subscription')) {
    // client subscribed to a channel
  }
})();
(async () => {
  for await (let { socket, channel} of agServer.listener('unsubscription')) {
    // client unsubscribed from a channel
  }
})();
these events are fired when a client subscribes to or unsubscribes from a channel; you know which client it is and which channel
Kevin Boon
@SloppyShovel
you can then use the agServer.exchange client to send a notification that the client joined or left; this way you are not doing anything on the client's behalf.
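Kevin's suggestion can be sketched as follows, with the exchange mocked so it runs standalone. In a real v16 server you would call this from inside the agServer.listener('subscription') / agServer.listener('unsubscription') loops and publish via agServer.exchange.transmitPublish; the payload shape and channel name here are hypothetical:

```javascript
// Publish a presence event to a channel; event is 'joined' or 'left'.
function notifyPresence(exchange, channel, socketId, event) {
  exchange.transmitPublish(channel, { event, socketId });
}

// Mock exchange that records what would be published:
const published = [];
const mockExchange = {
  transmitPublish: (channel, data) => published.push({ channel, data })
};

notifyPresence(mockExchange, 'lobby', 'socket-1', 'joined');
notifyPresence(mockExchange, 'lobby', 'socket-1', 'left');
// published now holds the two presence messages every 'lobby' subscriber would receive
```

This keeps the server purely reactive: clients own their subscriptions, and the server only announces the joins and leaves it observes.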
Nguyễn Thu Đức Trung
@trungntd-mirabo
@SloppyShovel regarding the exchange object, do different connection sockets share the same exchange object?
Kevin Boon
@SloppyShovel
@trungntd-mirabo the exchange object on the socket, you mean? I don't think sockets have their own exchange object; at least they never did before, not in v14 or v15. When you do socket.exchange inside the server itself, it's the same as doing agServer.exchange: you are talking to the exchange object (client) that the server has attached. You can't directly do requests on the client's behalf server-side; you can do it indirectly, but that's still a bad idea. The only exchange object that ever existed, to my knowledge, was at the SCServer level, now the AGServer object; this was the same for v14.
Nguyễn Thu Đức Trung
@trungntd-mirabo
@SloppyShovel thank you sir for your information. I'll consider your solution and conduct more research into this 🙌
Kevin Boon
@SloppyShovel
@trungntd-mirabo No problem at all, if you get stuck or have more questions, or need any help getting it working the way you want, always feel free to ask :)
Ben Francis
@benfrancisAxis
Looks like the certificate for socketcluster.io just expired; Chrome is flagging the site as insecure. You might want to renew your certificate.
Kevin Boon
@SloppyShovel
@benfrancisAxis Thanks for letting us know. @jondubois can you check this out when you've got a minute?
Ryan432
@Ryan432
Hey,
Is there any documentation about the main responsibilities of the broker and state components in a Kubernetes cluster?
The worker, as I understand it, is the actual server where all the custom implementation takes place.
Ryan432
@Ryan432

I am currently building a solution with SC for distributing data into channels, where the data is sent by an SC client.
So basically, when the server gets data on some specific channels, the server should loop over the data and distribute it into unique channels.
Now, SC is scalable, meaning I can have multiple workers running at the same time, each of which will subscribe to the "supplying" channels, but the distribution needs to happen only once.

My question is: if someone has ever built a similar use case, what is the best / most elegant option for making sure that the data will be distributed only once?
My current idea is using unique locks with a TTL via Couchbase.
That means all SC workers will still subscribe to the channels, each one will try to distribute, but only one will succeed due to the lock.

Here is some real life example

Imagine my supplier SC Client sent the data in that structure into "stocks" channel:

{
    "event": "update",
    "id": 123456,
    "data": [
        {
            "id": 1,
            "name": "apple",
            "type": "fruit",
            "stock": 10
        },
        {
            "id": 2,
            "name": "chairs",
            "type": "furniture",
            "stock": 156
        }
    ]
}

My server side subscribes to the 'stocks' channel and needs to loop over "data" and send every object into different channels; this is what I am calling "distribution".
The distribution channels will be unique by the object's "id", "name" and "type".
So it will look like this:


agServer.exchange.transmitPublish(`stocks:${id}`, object);
agServer.exchange.transmitPublish(`stocks:${name}`, object);
agServer.exchange.transmitPublish(`stocks:${type}`, object);

The thing is that only one worker can take care of it; otherwise there will be multiple distributions into those channels.
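Ryan's TTL-lock idea plus the fan-out step can be sketched together as follows. An in-memory Map stands in for the Couchbase lock; a real worker would use the database's atomic insert-with-expiry and publish via agServer.exchange.transmitPublish:

```javascript
// In-memory stand-in for a TTL lock: insert-if-absent with an expiry, so only
// one worker wins the right to distribute a given event.
const locks = new Map();
function tryAcquireLock(key, ttlMs, now = Date.now()) {
  const expiresAt = locks.get(key);
  if (expiresAt !== undefined && expiresAt > now) return false; // already held
  locks.set(key, now + ttlMs);
  return true;
}

// Pure helper for the fan-out step: channels keyed by id, name and type,
// following the channel scheme in the snippet above.
function distributionChannels(item) {
  return [`stocks:${item.id}`, `stocks:${item.name}`, `stocks:${item.type}`];
}

// Three workers race on the same event id; only the first one distributes:
const results = ['worker-1', 'worker-2', 'worker-3'].map(
  () => tryAcquireLock('stocks:event:123456', 5000)
);
const channels = distributionChannels({ id: 1, name: 'apple', type: 'fruit', stock: 10 });
// results → [true, false, false]
// channels → ['stocks:1', 'stocks:apple', 'stocks:fruit']
```

Note the trade-off discussed below: every worker still receives and deserializes every event, and only one does useful work, which is part of why this pattern fights the cluster architecture.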

Kevin Boon
@SloppyShovel
@Ryan432
Brokers are the bridge between workers.
State is like etcd but simpler; like the name states, it holds the state of the cluster: the number of workers and brokers and their addresses.
Kevin Boon
@SloppyShovel
Now to your question: "if someone has ever built a similar use case, what is the best / most elegant option for making sure that the data will be distributed only once?"
The answer is in your other statement: "The thing is that only one worker can take care of it; otherwise there will be multiple distributions into those channels."
If you're fighting the architecture of the framework this much, 99.9% of the time you're doing it wrong.
The way you described it not only goes against the sc-cluster architecture, but you're also doing requests on the client's behalf.
So let's say you have 3 servers running in a cluster: all 3 of them are going to publish data in a loop (on server tick rate). Anything you do on server tick rate is very bad and will hurt performance, and on top of that you're basically saying server 1 already sent the data, so for servers 2 and 3 the data is useless.
At that point you're better off using socketcluster in single-server mode rather than sc-cluster, because this doesn't make a lot of sense.
Kevin Boon
@SloppyShovel
you're also going to create a lot of overhead.
Jonathan Gros-Dubois
@jondubois
The https://socketcluster.io website certificate issue should be fixed now.
Thanks for letting me know :)
Kevin Boon
@SloppyShovel
@jondubois Thanks Jonathan
Ryan432
@Ryan432
@SloppyShovel
Thank you for your explanations.
I am pretty new to SC, so I might be using it incorrectly :)
Based on your suggestion, I can do the distribution from a single client instead of doing it on the server side.
Is that a better idea?
Kevin Boon
@SloppyShovel
@Ryan432 No problem at all, that's why we are here. Normally the way I recommend doing something like that is to have the client request an update from the server; each server should be able to access your database (cluster) to pull the data needed and send it to the requesting client. Using the client tick on a realistic interval does not impact performance that much, as each client has its own tick rate; of course within measure, and things can be optimized. Now, that being said, this does not work for every need, so I'm not sure what your application requires. But if all you need is to get updated data to a client, it's better to let the client request it and have the server answer. If there are reasons you couldn't do that, let me know and I can tell you a better way.
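The pull model Kevin recommends can be sketched like this, with the transport omitted so it runs standalone. In SocketCluster v16 the server side would be a socket.procedure('getStocks') loop and the client side an await socket.invoke('getStocks') call; the handler name, filter, and data are hypothetical:

```javascript
// Hypothetical server-side handler: answer each client request with the
// current data pulled from the shared database (here a plain array).
function handleGetStocks(db) {
  return db.filter((item) => item.stock > 0); // e.g. only items in stock
}

// Any server instance can answer, since each can reach the database;
// no server ever publishes on a client's behalf.
const db = [
  { id: 1, name: 'apple', type: 'fruit', stock: 10 },
  { id: 2, name: 'chairs', type: 'furniture', stock: 0 }
];
const reply = handleGetStocks(db);
// reply → [{ id: 1, name: 'apple', type: 'fruit', stock: 10 }]
```

Because every instance can serve the request, this pattern keeps the cluster horizontally scalable in a way server-driven distribution does not.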
Ryan432
@Ryan432

Yes, my use case is a bit different.
I have 1 client working as my "supplier" -> it's pumping real-time events from an external system and publishing to my SC server on specific channels like "stocks", as in my example above.

So now I will just build my distributors as clients. That lets me make sure I have 1 source of distribution, and also gives me scalability, because I can start many clients where each one takes care of distributing different channels.

Kevin Boon
@SloppyShovel
@Ryan432 and you can't integrate the "supplier" into the server and let multiple instances access the data and distribute? I understand it, but it doesn't really make your system scalable: the only thing that scales is your clients. The system itself won't be scalable, as you only have 1 incoming data source; if that data source falls away, 1) your system is not scalable and 2) you lose the redundancy of sc-cluster.
Ryan432
@Ryan432

I agree, the whole system won't be scalable because I have one "supplier" that can break the whole thing, but that isn't critical in my case; even if it fails for a certain time, it's acceptable and won't be critical for the operation of my business logic.

"you can't integrate the "supplier" into the server?"
Unfortunately not; my supplier is a sort of plugin injected into the external system, also in C++. From the external system's point of view, it wouldn't be a good idea to distribute the data from the supplier client, due to the possibility of affecting the external system's performance (that system is scalable).

I already built the client distributors and they seem to work as expected.
Thanks a lot for your help; if you have any other suggestions I would love to hear them :)

Kevin Boon
@SloppyShovel
Then yes, a single client as a distributor makes sense; optimize it as much as possible.
It's always a trade-off when dealing with third-party tools. A single client distributor has been done before and works pretty well; the issue is, like you said yourself, the trade-offs. If you have any more questions or would like to discuss anything else, always feel free to chat :)