Kevin Boon
@inQonsole
we now use a development pipeline based on Kubernetes and Docker, a much easier workflow and easier to track as well, but that's all different stuff
it's late here my bed time :P
Frank Lemanschik
@frank-dspeed
https://github.com/direktspeed/feathers-cctalk/blob/master/cctalk.service
[Service]
ExecStart=/usr/bin/node /srv/drivers/cctalk-devices/index.js
WorkingDirectory=/srv/drivers/cctalk-devices/
Restart=always
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=cctalk
User=root
Group=root
Environment=NODE_ENV=development
#Environment=DEBUG=*

[Install]
WantedBy=multi-user.target
Docker is really hard to debug in many scenarios and I stopped using Docker
the new focus is on Krustlet
which runs wasm modules directly
and nginx and any software can be compiled to wasm
in 5 years Docker will be like Linux jails
a tech that is only known to Linux administrators and security people
smnzhu
@smnzhu
@frank-dspeed yes, but some of the data I'm storing is a mapping from socket IDs, and those are all lost when the server crashes, right?
So the data is basically useless because the socket IDs are no longer valid
but at the same time, I'm not even sure how I can implement cleanup since I will not know when a server crashed...
Ideally, if I can somehow know when an instance crashed, I can at least clean up the data left over in Redis
For a very simple example, let's say for each socket I'm storing a userId in Redis with key = socket ID
Then when a server crashes, I would like to be able to delete all the userIds corresponding to the sockets that were connected to the server that crashed, right?
or will those sockets automatically reconnect to another instance with the same socket ID?
Frank Lemanschik
@frank-dspeed
that's a more complex scenario and there are solutions for it, but it's not a short story
I would need to know more about your overall architecture to give advice. I could do that for 100 euro and write you a complete build plan.
but one algorithm that is maybe easy to apply is to hash the connection IDs with an instance name, and if the instance is not up
simply delete all related IDs (a sketch follows at the end of this message)
but overall I would go for something that I call the Enterprise Bus Pattern
it depends on your scale
you cannot always do a real cleanup from process exit codes like crash and terminate
when you, for example, hit OOM (Out Of Memory)
the system will freeze
or, what happens even more often, a disk IO error
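A minimal sketch of that instance-name cleanup idea, assuming ioredis and a per-process INSTANCE_ID environment variable (both are assumptions, not part of the setup discussed above): every socket entry is keyed under the name of the instance that owns it, so a dead instance's data can be swept in one pass.

const Redis = require('ioredis');                              // assumed client library
const redis = new Redis();

const INSTANCE_ID = process.env.INSTANCE_ID || 'instance-1';   // hypothetical identifier

// On connection: store the userId under a key that embeds the owning instance's name.
async function rememberSocket(socketId, userId) {
  await redis.set(`socket:${INSTANCE_ID}:${socketId}`, userId);
}

// Once an instance is known to be down, delete everything it owned.
async function cleanUpInstance(deadInstanceId) {
  let cursor = '0';
  do {
    // SCAN instead of KEYS so the sweep doesn't block Redis on large datasets.
    const [next, keys] = await redis.scan(
      cursor, 'MATCH', `socket:${deadInstanceId}:*`, 'COUNT', 500
    );
    if (keys.length) {
      await redis.del(...keys);
    }
    cursor = next;
  } while (cursor !== '0');
}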
smnzhu
@smnzhu
@frank-dspeed @inQonsole Going back to my earlier question: basically I'm using the shortid package to create custom short URLs. As I recall, one of the suggestions was to store a mapping of instance ID -> integer somewhere (e.g. in Redis), so my question is, is there any way for me to get the current set of instance IDs for all instances that are up? Then, every time an instance restarts / a new one is added, I can check Redis to see which integers are not taken and take one of those integers for the instance.
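One hedged way to do the "which integers are taken" bookkeeping with plain Redis (the instances:* key naming, the 30-second TTL, and the 16-slot cap are all assumptions): each instance heartbeats a key for its slot, so the live instances are simply the keys that haven't expired, and a restarting instance claims the first free slot.

const Redis = require('ioredis');   // assumed client library
const redis = new Redis();

const MAX_INSTANCES = 16;           // assumed upper bound on instance count
const HEARTBEAT_TTL = 30;           // seconds; assumed

// Claim the first integer slot whose heartbeat key is currently free.
async function claimSlot() {
  for (let i = 0; i < MAX_INSTANCES; i++) {
    // SET ... EX ... NX is atomic, so two instances can never grab the same slot.
    const ok = await redis.set(`instances:${i}`, process.pid, 'EX', HEARTBEAT_TTL, 'NX');
    if (ok === 'OK') return i;
  }
  throw new Error('no free instance slot');
}

// Keep the claim alive; if the process dies, the key expires and the slot frees itself.
function keepAlive(slot) {
  setInterval(() => redis.expire(`instances:${slot}`, HEARTBEAT_TTL), (HEARTBEAT_TTL / 2) * 1000);
}

Listing the instances that are up is then just a SCAN over instances:*, and a crashed instance's slot becomes reusable once its heartbeat expires.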
smnzhu
@smnzhu
Or better yet, can I run some custom code inside SCC that creates a new ID for each newly spun-up instance?
In the example I linked above https://github.com/SocketCluster/socketcluster/blob/master/app/server.js, it appears that each instance generates its own UUID, but can I change this so that the instance IDs are created in a centralized place?
probably in the scc-state service?
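Whether scc-state exposes a hook for this isn't clear from the above; a central counter in Redis gives a similar effect without touching scc-state (the key name below is made up): each instance asks the shared store for the next number at startup instead of generating its own UUID.

const Redis = require('ioredis');   // assumed client library
const redis = new Redis();

// Each instance calls this once at startup instead of generating a UUID locally.
// INCR is atomic, so every instance receives a distinct, centrally issued number.
async function allocateInstanceId() {
  const n = await redis.incr('scc:instance-id-counter');   // hypothetical key name
  return `instance-${n}`;
}

The trade-off versus the heartbeat-slot sketch above is that INCR-issued numbers keep growing across restarts and are never reused.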
smnzhu
@smnzhu
Also, when I run a cluster, can each worker instance process publish events that originated from sockets connected to another instance?
For example, let's say whenever a socket publishes to a channel, I want each worker instance to add a timestamp field to the data that is published to its own sockets. I need to do this on each individual instance since clocks aren't aligned in an asynchronous setting
Can I use the PUBLISH_OUT middleware for this? Will this middleware be run even for publish events that originate from a different worker instance?
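A sketch of the timestamp idea against the v15+ outbound-middleware API, with the caveat that passing a transformed packet to action.allow() for PUBLISH_OUT actions is an assumption to verify against the middleware docs for the SocketCluster version in use:

agServer.setMiddleware(agServer.MIDDLEWARE_OUTBOUND, async (middlewareStream) => {
  for await (let action of middlewareStream) {
    if (action.type === action.PUBLISH_OUT) {
      // Stamp each outgoing publish with this instance's local clock.
      // Assumption: allow() accepts a transformed packet for PUBLISH_OUT actions.
      action.allow({ ...action.data, timestamp: Date.now() });
      continue;
    }
    action.allow();
  }
});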
Håkon Nessjøen
@haakonnessjoen
@frank-dspeed: Any tips for throttling data? For example, I want to show the number of active visitors to a specific hostname that my socketcluster handles, and every visit is counted in the Redis backend. But I don't want to publish every time the count changes; I would like to update the admin at most every 500ms, for example. Today I do this by having a custom worker connect as a client with special privileges, and this custom worker reads all the add/remove visitor events, throttles them, and publishes at most once per 500ms to the admins. But I feel this "extra single worker" is a bit at odds with the horizontal-scaling idea.
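For reference, a throttle like that is small enough to run on whichever instance receives the counter updates; a rough sketch (the channel name and the 500ms window are only placeholders, and the v15+ exchange API is assumed):

const THROTTLE_MS = 500;      // assumed window
let latestCount = null;
let timer = null;

// Call this from wherever the visitor add/remove events are counted.
function onVisitorCountChanged(count) {
  latestCount = count;
  if (timer) return;          // a publish is already scheduled for this window
  timer = setTimeout(() => {
    timer = null;
    // Only the most recent count goes out, at most once per window.
    agServer.exchange.transmitPublish('admin/visitorCount', latestCount);
  }, THROTTLE_MS);
}

Bursts of visitor events within a window collapse into a single publish carrying the latest value.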
Frank Lemanschik
@frank-dspeed
I am not sure about that because I do not know your overall structure
but for scaling, and also user counts and all such needs, you should use what we use at Netflix
at Netflix we use something internal called the POP pattern (Point of Presence pattern)
you accept the connection on a client and then you put the connection (socketId) into Redis
if you reach the max size of that Redis instance you can replace it with something more scalable (Couchbase, not CouchDB)
then you can simply add a command to your general chat pipeline that returns the user count; simply split your messages on the client side
give them an identifier; you can for example wrap each msg like so: { type: "usercount", msg: "300" }
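On the client, that envelope is just a switch on the type field; a sketch using the async-iterable channel API of socketcluster-client (the channel name and the UI helpers are only illustrative):

(async () => {
  // Subscribe once and route each packet by its type field.
  const channel = socket.subscribe('general');   // illustrative channel name
  for await (let packet of channel) {
    if (packet.type === 'usercount') {
      updateUserCount(Number(packet.msg));       // hypothetical UI helper
    } else {
      appendChatMessage(packet.msg);             // hypothetical UI helper
    }
  }
})();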
Frank Lemanschik
@frank-dspeed
on the client, if type === "usercount", update the user count
the POP pattern then allows you to process each command (msg) in an async queue on x workers, even in parallel
and the instance that holds the connection returns the response that it listens for on Redis or Couchbase
this way you decouple processing from accepting connections; that is what makes Netflix scale
there is only a single event bus for all workers
in your case Redis
all connection-accepting instances log the request to the event bus, the event bus returns the worker's result for the event, and the connection-holding instance returns that to the client
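A very reduced sketch of that decoupling with plain Redis lists and pub/sub (the queue and reply-channel names and the correlation-ID scheme are assumptions, not anything Netflix-specific):

const Redis = require('ioredis');            // assumed client library
const { randomUUID } = require('crypto');    // Node >= 14.17
const redis = new Redis();

// Connection-holding instance: log the request to the bus, wait for the worker's result.
async function handleCommand(command) {
  const id = randomUUID();                   // correlation id
  const sub = new Redis();                   // dedicated connection for this one reply
  await sub.subscribe(`reply:${id}`);

  const reply = new Promise((resolve) => {
    sub.on('message', (_channel, message) => {
      resolve(JSON.parse(message));
      sub.quit();                            // one reply per command, so drop the connection
    });
  });

  await redis.lpush('eventbus:commands', JSON.stringify({ id, command }));
  return reply;
}

// Worker instance: pop commands off the bus, process them, publish the result back.
async function workerLoop(handler) {
  const work = new Redis();
  for (;;) {
    const [, raw] = await work.brpop('eventbus:commands', 0);
    const { id, command } = JSON.parse(raw);
    const result = await handler(command);
    await work.publish(`reply:${id}`, JSON.stringify(result));
  }
}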
Frank Lemanschik
@frank-dspeed
the event bus scales well as it only needs the currently used data, so you can easily move older data off to archive storage