javier322
@javier322
Hi, with websocket over http, can a client subscribe to new channels after the handshake, or is the only way to repeat the handshake with the new channels included and start a new connection?
Justin Karneges
@jkarneges
@javier322 an existing client can subscribe to new channels, yes
Sebastian Pinto
@seba322
@jkarneges Thank you for the answer
In the same case, how do you do this with SSE? Should I create a new EventSource object when I want to subscribe to new channels?
Justin Karneges
@jkarneges
hi @seba322 , channels can be changed on http streaming connections whenever the backend sends a response. this is normally done only one time when the connection starts, but if reliable mode is used then there may be subsequent http requests sent to the backend for the same connection
that said, my recommendation is to simply make a new connection if you want to update the channels. it's easier to reason about
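A minimal sketch of that initial backend response for SSE/HTTP streaming, assuming an Express app behind Pushpin (the route and channel names here are made up); whatever Grip-Channel value the backend returns becomes the connection's channel set:

```
// hypothetical Express handler for an SSE endpoint proxied by Pushpin
const express = require('express');
const app = express();

app.get('/events', (req, res) => {
  res.set('Grip-Hold', 'stream');             // hold the connection open as a stream
  res.set('Grip-Channel', 'orders, alerts');  // channels for this connection
  res.set('Content-Type', 'text/event-stream');
  res.status(200).send('event: open\ndata: {}\n\n');
});

app.listen(8080);
```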
Neil Marrin
@marrinn_gitlab

Hi @jkarneges. I'm using nginx as a proxy in front of pushpin. I'm using ws over http for WebSocket requests, and I'm proxying non-ws requests over HTTP/2 to a static site and a Kong API gateway. Nginx is closing the ws connection after 60 seconds, which matches its default proxy timeout settings. I want to keep the connection open; changing the timeout to another value prolongs the connection but ultimately just defers the issue. The ideal solution is to use the Pushpin keep-alive, but I can't seem to get it working. Below is an extract of the code I'm using. I'm subscribing to the channel on the OPEN event and adding the keep-alive header. I used a header for the keep-alive because I wasn't sure from the documentation what applies here, since it's not long polling or pure WebSockets.

```
// An open event is sent on its own to establish the connection
if (inEvents[0].getType() == 'OPEN') {
    log.info('InEvents OPEN:', script);

    // check for channel header otherwise generate one
    let channel = req.params.channel;

    if (!channel) {
        channel = uuidv1();
    }

    log.info(`Publish channel: ${channel}`, script);

    var outEvents = setGripEvents(inEvents[0].getType(), channel);

    // set channel header on the response
    res.setHeader('Set-Meta-Channel', channel);
    res.setHeader('Sec-WebSocket-Extensions', 'grip; message-prefix=""');
    res.setHeader('Content-Type', 'application/websocket-events');
    // added timeout to stop socket from being closed
    res.setHeader('Grip-Keep-Alive', '\\n; format=cstring; timeout=30');
    log.debug(`response headers: ${JSON.stringify(res.getHeaders())}`);

    // send response
    res.writeHead(200);
    res.end(grip.encodeWebSocketEvents(outEvents));
}
```

Any advice would be gratefully received.

Justin Karneges
@jkarneges
hey @marrinn_gitlab , for WebSockets you still send a control message as a TEXT frame, even though you're using ws over http. see https://pushpin.org/docs/advanced/#keep-alives
probably you want to do this in your setGripEvents function. I assume that adds a TEXT frame with a subscribe control message. so just add another TEXT frame with a keep-alive control message.
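For reference, a sketch of what setGripEvents might add, assuming the js-grip WebSocketEvent class (the snippet above already uses the same grip module) and the keep-alive control-message fields from the keep-alives doc linked above, which are worth double-checking there:

```
const grip = require('grip');

// hypothetical setGripEvents: echo OPEN, subscribe, and set a keep-alive
function setGripEvents(eventType, channel) {
  const events = [];
  if (eventType === 'OPEN') {
    events.push(new grip.WebSocketEvent('OPEN'));
    // control messages are TEXT frames prefixed with "c:" followed by JSON
    events.push(new grip.WebSocketEvent('TEXT',
      'c:' + JSON.stringify({ type: 'subscribe', channel: channel })));
    // ask Pushpin to send keep-alive content every 30 seconds
    events.push(new grip.WebSocketEvent('TEXT',
      'c:' + JSON.stringify({ type: 'keep-alive', content: 'ping', timeout: 30 })));
  }
  return events;
}
```

If the keep-alive is sent as a control message like this, the Grip-Keep-Alive response header from the earlier snippet shouldn't be necessary.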
Neil Marrin
@marrinn_gitlab
@jkarneges that's working now thanks.
Alec Larson
@aleclarson
@jkarneges What is the default value for sig_iss and why would I override it?
Justin Karneges
@jkarneges
@aleclarson the default is "pushpin". you'd override it if you are setting sig_key on the route (I believe both sig_iss and sig_key need to be set for either to take effect)
as an example, Fanout Cloud sets sig_iss to the user's realm ID
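To make the iss claim's role concrete, here is a hedged sketch of a backend checking Pushpin's Grip-Sig header with the jsonwebtoken package; the env variable and helper name are made up, and it assumes sig_key/sig_iss are set on the route:

```
const jwt = require('jsonwebtoken');

const SIG_KEY = process.env.GRIP_SIG_KEY;  // same secret as the route's sig_key
const SIG_ISS = 'pushpin';                 // or your overridden sig_iss value

// returns true if the request really came through Pushpin
function requestIsFromPushpin(req) {
  const token = req.headers['grip-sig'];
  if (!token) {
    return false;
  }
  try {
    jwt.verify(token, SIG_KEY, { issuer: SIG_ISS });
    return true;
  } catch (e) {
    return false;
  }
}
```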
Alec Larson
@aleclarson
so IIUC, it's basically for differentiating between different Pushpin deployments, which typically is unnecessary
i'm pretty sure i've set sig_key without sig_iss and that was enough
or maybe the pushpin.conf already had a default value for sig_iss
Justin Karneges
@jkarneges
more specifically it's for being able to use different keys with different origin servers. so more about multitenancy, even if just one pushpin deployment
Alec Larson
@aleclarson
got it, thanks
Justin Karneges
@jkarneges
oh, pushpin.conf doesn't have sig_iss. I was talking about the route param. so yeah, if you set just the sig_key in pushpin.conf it would have global effect
but, I suppose it was intended to be configurable in pushpin.conf. the docs for the route param even say "Overrides sig_iss in pushpin.conf." oops
Justin Karneges
@jkarneges
@/all hey folks, Pushpin 1.33.0 was released, with major performance improvements. see https://github.com/fanout/pushpin-c1m
ASHMEET KANDHARI
@ashmeet-kandhari
Hi @jkarneges ,
For a websocket connection
Is it possible to trigger channel subscriptions and unsubscriptions for a user without the user having to initiate them?
Can it be done with Zeromq pub port (5562)?
Justin Karneges
@jkarneges
hi @ashmeet-kandhari , yes the way to do this is with the refresh command https://pushpin.org/docs/advanced/#commands
ASHMEET KANDHARI
@ashmeet-kandhari
Hi @jkarneges ,
It's still not clear to me how we can send an unsubscribe request for a channel through the refresh command.
Justin Karneges
@jkarneges
@ashmeet-kandhari when the connection is refreshed, your backend will receive a request. you can respond with control messages at that time, such as unsubscribe
the way I'd recommend doing this is storing the subscription list as metadata on the connection. for example Set-Meta-subs: channel1,channel2,channel3
this is a little redundant but currently there is no way for a connection to know its own subscriptions
Justin Karneges
@jkarneges
then, whenever your backend gets a request, look up the user's session in the database, and make the necessary subscribes/unsubscribes to get the connection into the correct state
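A sketch of that sync step for a WebSocket-over-HTTP handler, assuming js-grip's WebSocketEvent/encodeWebSocketEvents and a hypothetical getDesiredChannels() database lookup; Pushpin echoes values stored with Set-Meta-* back as Meta-* request headers:

```
const grip = require('grip');

async function syncSubscriptions(req, res, userId) {
  const current = (req.headers['meta-subs'] || '').split(',').filter(Boolean);
  const desired = await getDesiredChannels(userId);  // hypothetical DB lookup

  const events = [];
  // subscribe to channels the connection should have but doesn't
  for (const ch of desired.filter(c => !current.includes(c))) {
    events.push(new grip.WebSocketEvent('TEXT',
      'c:' + JSON.stringify({ type: 'subscribe', channel: ch })));
  }
  // unsubscribe from channels it has but shouldn't
  for (const ch of current.filter(c => !desired.includes(c))) {
    events.push(new grip.WebSocketEvent('TEXT',
      'c:' + JSON.stringify({ type: 'unsubscribe', channel: ch })));
  }

  // keep the stored subscription list in step with the connection's real state
  res.setHeader('Set-Meta-subs', desired.join(','));
  res.setHeader('Content-Type', 'application/websocket-events');
  res.writeHead(200);
  res.end(grip.encodeWebSocketEvents(events));
}
```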
ASHMEET KANDHARI
@ashmeet-kandhari
Sure @jkarneges , thanks for the suggestion
Sumit Chahal
@smtchahal
Hi, for horizontally scaling Pushpin (i.e. multiple instances), what's the standard practice for making sure that publishes only go to the available instances (assuming that instances are getting launched/terminated depending on the load, using say AWS Auto Scaling)? Do we need to update the control_uris as and when instances are launched/terminated?
Justin Karneges
@jkarneges
@smtchahal there's basically two ways to work with multiple pushpin instances: 1) configure the publishers with the set of available pushpin instances, and update this configuration whenever the set of available pushpin instances changes, or 2) route messages through a broker layer, which you can either build (using programs like edgebroker.py and sourcebroker.py in the tools folder for guidance) or buy (we offer a commercial add-on)
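A rough sketch of option (1) using the HTTP control API, assuming Node 18+ global fetch; the host list here is made up and would come from your service discovery / autoscaling hooks:

```
const PUSHPIN_CONTROL_URIS = [
  'http://pushpin-1:5561',
  'http://pushpin-2:5561',
];

// POST the same item to every known instance's /publish/ endpoint
async function publish(channel, data) {
  const body = JSON.stringify({
    items: [
      { channel: channel, formats: { 'http-stream': { content: data + '\n' } } },
    ],
  });
  await Promise.all(PUSHPIN_CONTROL_URIS.map(base =>
    fetch(base + '/publish/', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: body,
    })
  ));
}
```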
slushpuppy
@slushpuppy
short of putting an nginx frontend in front of it, is there a way to add https to the http control uri?
Justin Karneges
@jkarneges
@slushpuppy not at the moment
note that you'll want to set up authentication too
Vladislav Horbachov
@LeftTwixWand

Hello, developers! I found an interesting problem, and I'm trying to figure it out:

Write a client and server console application, with push notifications from the server that will go through application-layer HTTP proxies by using HTTP long polling. The client will talk to the server over a socket connection using a simple BBQ protocol.

The BBQ protocol consists of client requests: "I'm hungry", "Come on, dessert".
Server replies: "Ok, wait", "Serving".
Server events: "Main dish is ready", "Dessert is ready".

Design the transitions between states from this set of commands. The server should only work with 1 client.

Should I write my own protocol, or write some wrappers that encapsulate the work of the client and server, so that under the hood they would work with my BBQ implementation logic?

Justin Karneges
@jkarneges
@LeftTwixWand my advice is to start by considering how the API might look without push capability. for example, perhaps the client could make a POST request to create a food order (POST /orders/), and the server could provide a resource representing the order (/orders/1/). Then the client could poll the order (GET /orders/1/) to see the status. then from there you can decide how to implement realtime updates, perhaps by making a long-polling mode on the order resources
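If the server ends up sitting behind Pushpin, the long-polling mode on the order resource could be sketched with GRIP response holds, something like this (the query parameter and the getOrder helper are made up):

```
const express = require('express');
const app = express();

app.get('/orders/:id', (req, res) => {
  const order = getOrder(req.params.id);  // hypothetical lookup
  if (req.query.wait === 'true') {
    // hold the request open until something is published to this order's
    // channel, or time out and return the current state
    res.set('Grip-Hold', 'response');
    res.set('Grip-Channel', 'order-' + req.params.id);
    res.set('Grip-Timeout', '55');
  }
  res.json(order);
});

app.listen(8080);
```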
Vladislav Horbachov
@LeftTwixWand
@jkarneges Thank you so much. I've googled it and now I already have some ideas for how to implement it.
I think I just need to create some classes, like BBQClient and BBQServer, and implement long-polling communication over HTTP
slushpuppy
@slushpuppy
Thanks @jkarneges, in the docs I noticed that condure is replacing mongrel2, however with a default install on Ubuntu (version 1.33.1), the default config file still comes with the binary path set for mongrel2. If I am building for production, should I change that directly?
Justin Karneges
@jkarneges
@slushpuppy as long as condure is in the services= line you should be fine
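For reference, an illustrative pushpin.conf excerpt (service names may vary slightly by version; the point is just that condure, rather than mongrel2/m2adapter, is listed):

```
[runner]
services=condure,zurl,pushpin-proxy,pushpin-handler
```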
slushpuppy
@slushpuppy
Thanks Justin. Also, with regards to ZeroMQ, based on the documentation it appears there are advantages to using it over HTTP publish when it comes to dealing with multiple instances. Am I right to assume you only need to push to any PULL socket in the cluster and the pushpin handler will handle the propagation for me afterwards?
slushpuppy
@slushpuppy
perhaps I am understanding the architecture wrong; if I wish to scale up horizontally over multiple servers, do I run pushpin instances on each server, or individual condure/mongrel2 instances?
Justin Karneges
@jkarneges
@slushpuppy pushpin instances don't talk to each other. you'd want to push to the SUB sockets of each pushpin instance, so that each pushpin will deliver to its own clients. the advantage of ZeroMQ is this fan-out is handled for you, and messages are only sent to the pushpin instances that have clients interested in the message. just make a PUB socket, connect it to all the pushpin instances, and send a message, and it will go to the right places
Justin Karneges
@jkarneges
when scaling horizontally, simply duplicate the whole set of processes. e.g. make 3 servers and apt-get install pushpin on all of them, so they each have their own condure/etc
slushpuppy
@slushpuppy
Thanks again @jkarneges, what's the difference then between publishing using ZeroMQ vs the HTTP control API (port 5561), since I'll still need to push to all 3 servers?
if I understand your explanation correctly, the advantage of ZeroMQ only applies if I have multiple condure instances per pushpin per server, right?
Justin Karneges
@jkarneges
@slushpuppy if you have 3 pushpin instances, and you use the HTTP control port, you'd have to make a POST to all 3, yes. with ZeroMQ, you connect to all 3 when your backend initializes, and then make 1 ZeroMQ send call when there is data to publish. the ZeroMQ library will then potentially send the message 3 times for you
I say "potentially" here because the ZeroMQ library used by your backend will be keeping track of which pushpin instances have subscriptions to which channels, and it will only send the message to the pushpin instances that need it. this is the more interesting aspect of sending using ZeroMQ
unless you are publishing tons of messages though, this benefit may not be important. for example, if you are only publishing once per second to potentially 3 instances, you could just send every message to all the instances and the load would be insignificant. being selective about which pushpin instances to send to only starts to matter when there is higher publishing load, like say hundreds of messages per second
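A conceptual sketch of that fan-out with the zeromq npm package; the hostnames are made up, and the payload framing (channel name, a space, then a tnetstring-encoded item) is my assumption here, so check the programs in the tools folder for the exact encoding expected on port 5562:

```
const zmq = require('zeromq');

// one PUB socket, connected to every Pushpin instance's subscribe port
const pub = new zmq.Publisher();
for (const host of ['pushpin-1', 'pushpin-2', 'pushpin-3']) {
  pub.connect('tcp://' + host + ':5562');
}

async function publish(channel, encodedItem) {
  // a single send; the ZeroMQ library only delivers to instances that have
  // a subscription matching the channel prefix
  await pub.send(channel + ' ' + encodedItem);
}
```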
Justin Karneges
@jkarneges
publishing using HTTP vs ZeroMQ has nothing to do with condure. you should always have only one condure per pushpin. and you shouldn't really need to think about this as it's an internal component managed by pushpin
slushpuppy
@slushpuppy
ahh ok, appreciate the clear explanation!