I've set up a repo describing our public infrastructure. This currently applies to just staging, but I'll move our production environment over on May 11th. https://github.com/portier/public-infra
I also changed permissions on GitHub to reflect governance changes. Hope I didn't hurt anyone's feelings. 😇
Stéphan Kochen
@stephank
But if you're locked out of a repo that you are supposed to have access to, please let me know!
The production broker is now running on Hetzner, and upgraded to 0.3.2 :)
Stéphan Kochen
@stephank
I imported the old RSA key, so it'll use that key today, and still announce it tomorrow. But it's now rotating keys daily, so after tomorrow that key is no longer valid. (I don't believe I've ever seen anyone hardcode that key.)
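The rotation policy described here (sign with today's key, keep announcing yesterday's for one more day) can be sketched as follows. This is an illustrative stdlib-only sketch, not the broker's actual code; the key-naming helper is hypothetical.

```python
from datetime import date, timedelta

# Hypothetical sketch of a daily key-rotation policy with a one-day
# announcement overlap. Key IDs and helper names are illustrative.

def key_id_for(day: date) -> str:
    return f"key-{day.isoformat()}"

def signing_key(today: date) -> str:
    # Only the current day's key is ever used for signing.
    return key_id_for(today)

def announced_keys(today: date) -> set[str]:
    # Yesterday's key stays in the published set for one more day,
    # so tokens signed shortly before rotation still verify.
    return {key_id_for(today), key_id_for(today - timedelta(days=1))}
```

Under this scheme a key is announced for exactly two days, after which tokens signed with it no longer verify.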
onli
@onli
I will have to take a second look at the Ruby gem, to make sure it still works :)
Stéphan Kochen
@stephank
@onli Btw, should I give you access to all the hosted stuff?
onli
@onli
@stephank Probably a good idea to reduce the bus factor, right?
btw, just had a new user with a . in their Gmail address subscribe to pipes. Since the public broker has already been changed, that really seems to work fine
Wasn't expecting the IANA TLDs list to change that often, but it's like a weekly thing.
I noticed the DMARC spec references the same public suffix list browsers use: https://publicsuffix.org
So maybe we should use that instead. And make it a run-time thing, so we bundle a version and read it on startup, allowing the user to update it independently.
To the overall change: If the suffix list changes as often, isn't it too problematic to rely on the user to update it?
Stéphan Kochen
@stephank
Hmm, good point. Maybe I misinterpreted the warning labels on publicsuffix.org about downloading too often.
Will have to think on that. I'm hoping for something simpler than another fetch & cache solution.
onli
@onli
I'm not familiar with that list and how it should be managed
I immediately thought of "download it on startup when no custom list is provided"
Maybe something I could try to implement?
Stéphan Kochen
@stephank
They talk about once per day, so I'm not too keen on doing it on every restart. That may happen often during testing.
onli
@onli
oh, that would not work then
Dylan Staley
@dstaley
Does Portier have anything similar in concept to a refresh token? If not, is auth persistence something that's left up to the application to implement?
onli
@onli
In my apps that's completely up to the application, yes. Portier gives you the initial confirmation token and it's up to the app to translate that into a session. Recommendation was/is to keep that session alive for as long as possible.
Stéphan Kochen
@stephank
Same here. I think the only alternative is for Portier itself to have a session? But that's sort of what we're trying to prevent. It's one of the things that was confusing about Mozilla Persona, if I recall correctly.
Stéphan Kochen
@stephank
Thinking about the public suffixes thing, I guess we just have to fetch at an interval in the broker? I don't really see another way. (The only other thing I can think about is doing it outside the broker and adding SIGHUP config reloading support. While reloading is nice to have, having a separate thing to fetch the list complicates setup.)
But it looks like the list at publicsuffix.org doesn't have Cache-Control headers. So we probably want to add functionality to override cache age in our fetching code. Plus we should keep using stale lists if fetching fails.
Could probably do something crude to handle failed fetches. Like still set a Redis TTL, but x3 the cache age, and then allow 3 fetches to fail before crashing. No complicated retry timer logic for MVP.
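That crude failure policy (TTL ×3, keep the stale list, crash after three consecutive failed fetches) might look roughly like this. A sketch only, with an injected fetch function instead of real HTTP or Redis; all names are illustrative, not the broker's code.

```python
# Sketch of the failure policy above: cache with a TTL three times the
# refresh interval, serve the stale copy on fetch errors, and give up
# after three consecutive failures. No retry-timer logic, as suggested.

REFRESH_INTERVAL = 24 * 3600        # fetch roughly once per day
CACHE_TTL = 3 * REFRESH_INTERVAL    # Redis-style TTL, x3 the cache age
MAX_FAILURES = 3

class SuffixListCache:
    def __init__(self, fetch):
        self.fetch = fetch          # injected so the policy is testable
        self.current = None
        self.failures = 0

    def refresh(self):
        try:
            self.current = self.fetch()
            self.failures = 0
        except Exception:
            self.failures += 1
            if self.failures >= MAX_FAILURES:
                # Stale copy has outlived its TTL; crash rather than
                # keep running on data that is too old.
                raise RuntimeError("public suffix list too stale")
            # Otherwise keep using the stale list.
```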
Stéphan Kochen
@stephank
(Reason I don't want an infinite TTL is because changing configuration means old lists linger in the store forever.)
onli
@onli
If we fetch at an interval in the broker, we at least avoid hitting that site too often. Though I wondered if one could maybe target the GitHub source? They have the infrastructure to serve a file that small without issues.
Stéphan Kochen
@stephank
I noticed it's behind CloudFront, so the infra is probably not the problem. I guess the guideline is more about good practice towards users (of the broker or whatever) as well?