onli
@onli
oh, what's the advantage of the API?
Stéphan Kochen
@stephank
According to docs, the API is quicker to produce an error for bad messages, while SMTP might accept and put it in a queue first. It's in the 'differences' section here: https://postmarkapp.com/developer/user-guide/sending-email/sending-with-smtp
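(For illustration, a minimal standalone sketch of that API path, not the broker's agent code; the addresses are placeholders and POSTMARK_API_TEST is Postmark's test token:)

```rust
// Minimal sketch: send one message through the Postmark HTTP API and surface
// its validation error immediately, instead of having it queued as with SMTP.
// Requires reqwest (blocking + json features) and serde_json.
use serde_json::json;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = reqwest::blocking::Client::new();
    let resp = client
        .post("https://api.postmarkapp.com/email")
        .header("X-Postmark-Server-Token", "POSTMARK_API_TEST")
        .json(&json!({
            "From": "sender@example.com",
            "To": "user@example.com",
            "Subject": "Portier login",
            "TextBody": "Hello!",
        }))
        .send()?;

    // With SMTP, a bad message may only fail later in the queue; here the API
    // responds with an error code and message right away.
    if !resp.status().is_success() {
        eprintln!("Postmark rejected the message: {}", resp.text()?);
    }
    Ok(())
}
```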
onli
@onli
okay, nice
Stéphan Kochen
@stephank
The production broker is now running on Hetzner, and upgraded to 0.3.2 :)
Stéphan Kochen
@stephank
I imported the old RSA key, so it'll use that key today, and still announce it tomorrow. But it's now rotating keys daily, so after tomorrow that key is no longer valid. (I don't believe I've ever seen anyone hardcode that key.)
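(A sketch of the relying-party side of this, assuming the broker exposes standard OIDC discovery as Portier clients expect; the URL is the public broker, everything else is illustrative. The point is to rediscover the current key set rather than pin a key, since keys now rotate daily:)

```rust
// Illustrative only: fetch the broker's currently announced signing keys
// instead of hardcoding one. Assumes standard OIDC discovery.
// Requires reqwest (blocking + json features) and serde_json.
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let config: serde_json::Value =
        reqwest::blocking::get("https://broker.portier.io/.well-known/openid-configuration")?
            .json()?;
    let jwks_uri = config["jwks_uri"].as_str().ok_or("missing jwks_uri")?;
    // This set contains every key valid today, including a previously
    // announced one during the rotation overlap window.
    let jwks: serde_json::Value = reqwest::blocking::get(jwks_uri)?.json()?;
    println!("{}", serde_json::to_string_pretty(&jwks)?);
    Ok(())
}
```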
onli
@onli
I will have to take a second look at the ruby gem to make sure it still works :)
Stéphan Kochen
@stephank
@onli Btw, should I give you access to all the hosted stuff?
onli
@onli
@stephank Probably a good idea to reduce the bus factor, right?
btw, just had a new user with a . in the gmail address subscribe to pipes. Since the public broker has already been updated, that really seems to work fine
the normalization and the rotating keys
Stéphan Kochen
@stephank
Nice 👍
Stéphan Kochen
@stephank
Now have a test running every 5 minutes that implements a small custom IdP: https://server.portier.io/stats.jsonl
Could make a nice graph out of that sometime :)
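(If anyone wants to graph it, a throwaway sketch of chewing through a JSON-lines file; the "ok" field is a made-up name, since the actual stats.jsonl fields aren't shown here:)

```rust
// Throwaway sketch: count entries in a JSON-lines file piped in on stdin.
// The "ok" field is hypothetical, not necessarily what stats.jsonl uses.
// Requires serde_json.
use std::io::{self, BufRead};

fn main() -> io::Result<()> {
    let mut total = 0u64;
    let mut ok = 0u64;
    for line in io::stdin().lock().lines() {
        let line = line?;
        if line.trim().is_empty() {
            continue;
        }
        if let Ok(record) = serde_json::from_str::<serde_json::Value>(&line) {
            total += 1;
            if record.get("ok").and_then(|v| v.as_bool()).unwrap_or(false) {
                ok += 1;
            }
        }
    }
    println!("{ok} of {total} checks succeeded");
    Ok(())
}
```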
Stéphan Kochen
@stephank
@onli Hoping I can bother you to maybe do a quick read-through of the docs on the new config options here: https://github.com/portier/portier-broker/pull/210/files#diff-482d762113d87ccfaae28adb3c4a2262 🙂
Stéphan Kochen
@stephank
Wasn't expecting the IANA TLDs list to change that often, but it's like a weekly thing.
I noticed the DMARC spec references the same public suffix list browsers use: https://publicsuffix.org
So maybe we should use that instead. And make it a run-time thing, so we bundle a version and read it on startup, allowing the user to update it independently.
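(Roughly what that could look like; the file name and environment variable below are illustrative, not actual broker settings:)

```rust
// Sketch of "bundle a version, read at startup, allow overriding": the list
// shipped with the binary is only a fallback.
use std::fs;

// Compile a snapshot of https://publicsuffix.org/list/public_suffix_list.dat
// into the binary (the file must sit next to this source file).
const BUNDLED_LIST: &str = include_str!("public_suffix_list.dat");

fn load_public_suffix_list() -> String {
    match std::env::var("BROKER_PUBLIC_SUFFIX_LIST") {
        // The user pointed us at a newer copy; read it at startup.
        Ok(path) => fs::read_to_string(&path)
            .unwrap_or_else(|e| panic!("cannot read {path}: {e}")),
        // Otherwise fall back to the bundled snapshot.
        Err(_) => BUNDLED_LIST.to_string(),
    }
}

fn main() {
    let list = load_public_suffix_list();
    let rules = list
        .lines()
        .filter(|l| !l.is_empty() && !l.starts_with("//"))
        .count();
    println!("loaded {rules} suffix rules");
}
```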
Stéphan Kochen
@stephank
@onli Moar settings! 🎉 Hope you won't mind maybe reviewing the docs here again? 😇
https://github.com/portier/portier-broker/blob/7111ad531186d2ea7c5f40e995d19fda33ed1541/config.toml.dist#L132-L179
onli
@onli
@stephank of course not :)
Stéphan Kochen
@stephank
Ah, yes! Sorry :B
onli
@onli
Regarding the overall change: if the suffix list changes that often, isn't it too problematic to rely on the user to update it?
Stéphan Kochen
@stephank
Hmm, good point. Maybe I misinterpreted the warning labels on publicsuffix.org about downloading too often.
Will have to think on that. I'm hoping for something simpler than another fetch & cache solution.
onli
@onli
I'm not familiar with that list and how it should be managed
I immediately thought of "download it on startup when no custom list is provided"
Maybe something I could try to implement?
Stéphan Kochen
@stephank
They talk about once per day, so I'm not too keen on doing it every restart. That may happen often during testing.
onli
@onli
oh, that would not work then
Dylan Staley
@dstaley
Does Portier have anything similar in concept to a refresh token? If not, is auth persistence something that's left up to the application to implement?
onli
@onli
In my apps that's completely up to the application, yes. Portier gives you the initial confirmation token and it's up to the app to translate that into a session. Recommendation was/is to keep that session alive for as long as possible.
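(A rough sketch of that split, not an official Portier API: the verification step is a placeholder for whatever your Portier client library does, and the session store is just an in-memory map.)

```rust
// Sketch: Portier's involvement ends once the token is verified; auth
// persistence is entirely the application's job.
use std::collections::HashMap;
use std::time::{Duration, Instant};

struct Session {
    email: String,
    expires: Instant,
}

fn verify_portier_token(id_token: &str) -> Result<String, String> {
    // Placeholder: a real app validates the JWT signature against the
    // broker's current keys and checks audience/nonce, then returns the
    // verified email address.
    if id_token.is_empty() {
        return Err("invalid token".into());
    }
    Ok("user@example.com".into())
}

fn handle_callback(
    sessions: &mut HashMap<String, Session>,
    id_token: &str,
) -> Result<String, String> {
    let email = verify_portier_token(id_token)?;
    // Keep the session alive for as long as reasonable (placeholder ID; a
    // real app would use a random session ID).
    let session_id = format!("session-{}", sessions.len() + 1);
    sessions.insert(
        session_id.clone(),
        Session {
            email,
            expires: Instant::now() + Duration::from_secs(90 * 24 * 3600),
        },
    );
    Ok(session_id)
}

fn main() {
    let mut sessions = HashMap::new();
    let sid = handle_callback(&mut sessions, "dummy-token").unwrap();
    println!("issued {sid} for {}", sessions[&sid].email);
}
```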
Stéphan Kochen
@stephank
Same here. I think the only alternative is for Portier itself to have a session? But that's sort of what we're trying to prevent. It's one of the things that was confusing about Mozilla Persona, if I recall correctly.
Stéphan Kochen
@stephank
Thinking about the public suffixes thing, I guess we just have to fetch at an interval in the broker? I don't really see another way. (The only other thing I can think about is doing it outside the broker and adding SIGHUP config reloading support. While reloading is nice to have, having a separate thing to fetch the list complicates setup.)
But it looks like the list at publicsuffix.org doesn't have Cache-Control headers. So we probably want to add functionality to override cache age in our fetching code. Plus we should keep using stale lists if fetching fails.
Could probably do something crude to handle failed fetches. Like still set a Redis TTL, but x3 the cache age, and then allow 3 fetches to fail before crashing. No complicated retry timer logic for MVP.
Stéphan Kochen
@stephank
(Reason I don't want an infinite TTL is because changing configuration means old lists linger in the store forever.)
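(The numbers come straight from the messages above; the code itself is just a sketch of the shape, not the broker implementation:)

```rust
// Sketch of the crude approach: store with a TTL of 3x the cache age, so the
// stale list survives up to two failed refreshes, and crash on the third
// consecutive failure instead of adding retry-timer logic.
use std::time::Duration;

const CACHE_AGE: Duration = Duration::from_secs(24 * 60 * 60);
const MAX_FAILURES: u32 = 3;

fn refresh_loop(
    mut fetch: impl FnMut() -> Result<String, String>,
    mut store: impl FnMut(&str, Duration),
) {
    let mut failures = 0;
    loop {
        match fetch() {
            Ok(list) => {
                failures = 0;
                // Finite TTL, so old lists don't linger in the store forever.
                store(&list, CACHE_AGE * 3);
            }
            Err(err) => {
                failures += 1;
                eprintln!("public suffix fetch failed ({failures}/{MAX_FAILURES}): {err}");
                if failures >= MAX_FAILURES {
                    panic!("public suffix list could not be refreshed");
                }
            }
        }
        std::thread::sleep(CACHE_AGE);
    }
}

fn main() {
    // Dummy fetch/store so the sketch runs forever as a daemon loop would;
    // the real thing would hit publicsuffix.org and write to Redis with the
    // given TTL.
    refresh_loop(
        || Ok("// stub list".to_string()),
        |_list, ttl| println!("stored list with TTL {ttl:?}"),
    );
}
```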
onli
@onli
If we fetch at an interval in the broker we at least avoid the situation of hitting that site too often. Though I wondered if one could maybe target the GitHub source? They have the infrastructure to serve a file that small without issues.
Stéphan Kochen
@stephank
I noticed it's behind cloudfront, so probably the infra is not the problem. I guess the guideline is more about good practice towards users (of the broker or whatever) as well?
Dylan Staley
@dstaley
Today I spent a bit of time writing a Mailgun agent, but when it came time to write tests I realized that the only tests for the Postmark agent are the E2E tests, which only work because Postmark supports sending test emails using a test API token. Unfortunately, Mailgun doesn't have a similar API. It looks like most Mailgun clients either mock the API or use the actual production API (which requires credentials and incurs costs). If I contributed the agent, I'd definitely want some sort of automated tests to make sure it doesn't break. I'm very much a novice when it comes to Rust, but if someone could take a look at how we could write Rust tests for the Postmark agent that don't actually hit the API, and simply compare the sent request to a fixture, I could implement the same thing for a Mailgun provider.
(Also, if this would be more appropriate in a GitHub issue please let me know!)
Stéphan Kochen
@stephank
Yeah, think we do need to get rid of that. It's the kind of dependency you don't want. What I liked about using the real Postmark API is that we get to use their validation code, but I think we just have to do without that.
Thinking we could add a hidden postmark_email_api (doesn't need docs) that defaults to the currently hardcoded "https://api.postmarkapp.com/email". Then tests can start a dummy server and point the broker at it. (In the broker postmark code, we can then also get rid of the hacky is_test_request path.)
@dstaley Btw, thanks for looking into this, and the IE stuff! Really cool. Let me know if you want to try your hand at the above, or would rather have me look into it. (I may hopefully have some time this weekend.)
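(A sketch of what that hidden setting could look like, using serde defaults; only the name postmark_email_api and the default URL come from the message above, the rest is illustrative:)

```rust
// Sketch: the hardcoded Postmark endpoint becomes a hidden, overridable
// setting with the production URL as its default. Tests can then point the
// broker at a local dummy server. Requires serde (derive feature) and toml.
use serde::Deserialize;

fn default_postmark_email_api() -> String {
    "https://api.postmarkapp.com/email".to_string()
}

#[derive(Deserialize)]
struct MailerConfig {
    #[serde(default = "default_postmark_email_api")]
    postmark_email_api: String,
}

fn main() {
    // E2E tests override the endpoint; production configs omit it entirely.
    let test: MailerConfig =
        toml::from_str(r#"postmark_email_api = "http://127.0.0.1:8081/email""#).unwrap();
    let prod: MailerConfig = toml::from_str("").unwrap();
    println!("test endpoint: {}", test.postmark_email_api);
    println!("prod endpoint: {}", prod.postmark_email_api);
}
```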
Dylan Staley
@dstaley
😊 I'm glad to help! I love helping projects support Windows, and I've been wanting to improve my Rust abilities so it feels like a good match.
Dylan Staley
@dstaley
Just so I understand correctly, you're thinking of spawning a mock server, pointing the broker binary at that server via an environment variable, and then testing the requests to the mock server? If so, would those tests be Rust unit tests, or would this be part of the E2E test?
Stéphan Kochen
@stephank
@dstaley Yes, that's correct. The test harness does the mocking in-process, for example: https://github.com/portier/portier-broker/blob/793c5987e744c6a864f38aff4dc8abcd55eccef2/tests/e2e/src/mailbox.js#L50-L61
I think right there is an okay place to add support for that, inside an if (TEST_MAILER === "mailgun") { ... }
I found E2E testing easier in this case. Especially for the Agent stuff, I figured unit testing would be more difficult. :)
Dylan Staley
@dstaley
Okay awesome! I think this gives me enough to get something started. Do you have any objections to using Jest as the test runner? That way we can focus on actually testing the broker instead of implementing test framework features.
Stéphan Kochen
@stephank
@dstaley Hmm, no idea, haven't used it. If you think it'd really reduce the amount of code, feel free to try. I'm also worried it's just more work and complexity. :)