Stéphan Kochen
@stephank
Ah, yes! Sorry :B
onli
@onli
To the overall change: if the suffix list changes that often, isn't it problematic to rely on the user to update it?
Stéphan Kochen
@stephank
Hmm, good point. Maybe I misinterpreted the warning labels on publicsuffix.org about downloading too often.
Will have to think on that. I'm hoping for something simpler than another fetch & cache solution.
onli
@onli
I'm not familiar with that list and how it should be managed
I immediately thought of "download it on startup when no custom list is provided"
Maybe something I could try to implement?
Stéphan Kochen
@stephank
They talk about once per day, so I'm not too keen on doing it on every restart. That may happen often during testing.
onli
@onli
oh, that would not work then
Dylan Staley
@dstaley
Does Portier have anything similar in concept to a refresh token? If not, is auth persistence something that's left up to the application to implement?
onli
@onli
In my apps that's completely up to the application, yes. Portier gives you the initial confirmation token and it's up to the app to translate that into a session. Recommendation was/is to keep that session alive for as long as possible.
Stéphan Kochen
@stephank
Same here. I think the only alternative is for Portier itself to have a session? But that's sort of what we're trying to prevent. It's one of the things that was confusing about Mozilla Persona, if I recall correctly.
Stéphan Kochen
@stephank
Thinking about the public suffixes thing, I guess we just have to fetch at an interval in the broker? I don't really see another way. (The only other thing I can think of is doing it outside the broker and adding SIGHUP config-reload support. While reloading is nice to have, having a separate thing fetch the list complicates setup.)
But it looks like the list at publicsuffix.org doesn't have Cache-Control headers. So we probably want to add functionality to override cache age in our fetching code. Plus we should keep using stale lists if fetching fails.
Could probably do something crude to handle failed fetches. Like still set a Redis TTL, but 3x the cache age, and then allow 3 fetches to fail before crashing. No complicated retry timer logic for MVP.
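The crude scheme described above could look roughly like this sketch (all names are hypothetical; the broker's actual fetch and store code is not shown in this chat). The store TTL is 3x the refresh interval, so a stale list survives a couple of failed refreshes, and only a third consecutive failure becomes fatal:

```rust
use std::time::Duration;

/// Tracks consecutive fetch failures for a periodic suffix-list refresh.
/// Hypothetical sketch; names and structure are illustrative only.
struct SuffixListRefresher {
    refresh_interval: Duration,
    consecutive_failures: u32,
    max_failures: u32, // give up (crash the broker) once this is reached
}

impl SuffixListRefresher {
    fn new(refresh_interval: Duration) -> Self {
        Self {
            refresh_interval,
            consecutive_failures: 0,
            max_failures: 3,
        }
    }

    /// TTL to set on the cached copy in the store: 3x the cache age,
    /// so a stale list outlives a couple of missed refreshes.
    fn store_ttl(&self) -> Duration {
        self.refresh_interval * 3
    }

    /// Record the outcome of one fetch attempt. A success resets the
    /// counter; only the third consecutive failure returns an error,
    /// at which point the broker would abort rather than keep serving
    /// an ever-staler list.
    fn record_fetch(&mut self, ok: bool) -> Result<(), &'static str> {
        if ok {
            self.consecutive_failures = 0;
            return Ok(());
        }
        self.consecutive_failures += 1;
        if self.consecutive_failures >= self.max_failures {
            Err("suffix list fetch failed 3 times in a row")
        } else {
            Ok(())
        }
    }
}
```

No retry-timer logic here, matching the MVP idea: the regular refresh interval doubles as the retry schedule.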
Stéphan Kochen
@stephank
(Reason I don't want an infinite TTL is because changing configuration means old lists linger in the store forever.)
onli
@onli
If we fetch at an interval in the broker, we at least avoid hitting that site too often. Though I wondered if one could maybe target the GitHub source? They have the infrastructure to serve a file that small without issues.
Stéphan Kochen
@stephank
I noticed it's behind cloudfront, so probably the infra is not the problem. I guess the guideline is more about good practice towards users (of the broker or whatever) as well?
Dylan Staley
@dstaley
Today I spent a bit of time writing a Mailgun agent, but when it came time to write tests I realized that the only tests for the Postmark agent are the E2E tests, which work because Postmark supports sending test emails using a test API token. Unfortunately, Mailgun doesn't have a similar API. It looks like most Mailgun clients either mock the API or use the actual production API (which requires credentials and incurs costs). If I contributed the agent, I'd definitely want there to be some sort of automated tests to make sure it doesn't break. I'm very much a novice when it comes to Rust, but if someone could take a look at how we could write Rust tests for the Postmark agent that don't actually hit the API, and simply compare the sent request to a fixture in the test, I could implement the same thing for a Mailgun provider.
(Also, if this would be more appropriate in a GitHub issue please let me know!)
Stéphan Kochen
@stephank
Yeah, think we do need to get rid of that. It's the kind of dependency you don't want. What I liked about using the real Postmark API is that we get to use their validation code, but I think we just have to do without that.
Thinking we could add a hidden postmark_email_api (doesn't need docs) that defaults to the currently hardcoded "https://api.postmarkapp.com/email". Then tests can start a dummy server and point the broker at it. (In the broker postmark code, we can then also get rid of the hacky is_test_request path.)
@dstaley Btw, thanks for looking into this, and the IE stuff! Really cool. Let me know if you want to try your hand at the above, or would rather have me look into it. (I may hopefully have some time this weekend.)
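The hidden-setting idea above could be sketched like this (struct and field names are hypothetical, as is the local test URL; only the default endpoint comes from the chat):

```rust
/// Hypothetical sketch of the proposed hidden `postmark_email_api`
/// setting: the currently hardcoded endpoint becomes a default that
/// tests can override to point the broker at a dummy server.
struct PostmarkConfig {
    email_api: String,
}

impl Default for PostmarkConfig {
    fn default() -> Self {
        Self {
            // Production default; normally never set by users.
            email_api: "https://api.postmarkapp.com/email".to_string(),
        }
    }
}
```

A test would then construct the config with something like `email_api: "http://localhost:8025/email"` (port chosen arbitrarily here) and assert on the requests the dummy server receives, making the hacky is_test_request path unnecessary.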
Dylan Staley
@dstaley
😊 I'm glad to help! I love helping projects support Windows, and I've been wanting to improve my Rust abilities so it feels like a good match.
Dylan Staley
@dstaley
Just so I understand correctly, you're thinking of spawning a mock server, pointing the broker binary at that server via an environment variable, and then testing the requests to the mock server? If so, would those tests be Rust unit tests, or would this be part of the E2E tests?
Stéphan Kochen
@stephank
@dstaley Yes, that's correct. The test harness does the mocking in-process, for example: https://github.com/portier/portier-broker/blob/793c5987e744c6a864f38aff4dc8abcd55eccef2/tests/e2e/src/mailbox.js#L50-L61
I think right there is an okay place to add support for that, inside an if (TEST_MAILER === "mailgun") { ... }
I found E2E testing easier in this case. Especially for the agent stuff, I figured unit testing would be more difficult. :)
Dylan Staley
@dstaley
Okay awesome! I think this gives me enough to get something started. Do you have any objections to using Jest as the test runner? That way we can focus on actually testing the broker instead of implementing test framework features.
Stéphan Kochen
@stephank
@dstaley Hmm, no idea, haven't used it. If you think it'd really reduce the amount of code, feel free to try. I'm also worried it's just more work and complexity. :)
Feels like most of the code in the E2E test harness right now is fairly specific to what we're testing?
Dylan Staley
@dstaley
Okay I'll take a stab without Jest! I saw it wasn't using a framework but I didn't look too closely at what it does and doesn't support
Dylan Staley
@dstaley
@stephank I spent some time today working on figuring out the best way to make assertions against network requests in the E2E test. In order to help me understand how all the pieces worked, and to give me some additional confidence things were working correctly, I wrote TypeScript-compatible JSDoc comments in the tests. The TypeScript compiler was able to pull types for every library except the node portier client. I went to look at how hard it would be to add types, and was surprised to see it was already in TypeScript! Would you mind if I also convert the E2E tests to TypeScript? (I can also send a PR to enable declaration emission for the node-portier library)
Stéphan Kochen
@stephank
@dstaley Sure! Though, I'm using node-portier in TS at $work and it works fine? Declarations should already be there.
Stéphan Kochen
@stephank
Hmm, maybe I'm wrong. It looks like we're only using it from small JS projects so far. But tsconfig has declarations enabled, so I'm guessing it's a packaging issue.
Dylan Staley
@dstaley
Yup, we just need to add a "types" key to package.json
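For reference, the fix mentioned above is a one-line addition to package.json; the exact paths below are assumptions based on typical tsc output, not taken from the node-portier repo:

```json
{
  "main": "lib/index.js",
  "types": "lib/index.d.ts"
}
```

With "types" set, the TypeScript compiler picks up the emitted declaration file automatically when the package is imported.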
Stéphan Kochen
@stephank
I'm considering scrapping the public suffix list thing. :(
Thinking the semantics are too vague. Its real purpose is to add checks to browser cookies after a domain has already been resolved (and thus verified to exist). For example, this issue reiterates that: publicsuffix/list#570
So really, the question here is: do we support the use case of using the public suffix list on arbitrary inputs, or do we only support the use case of determining the public suffix for a domain which is in the DNS?
I guess the PR is still a better implementation of the ICANN TLD list. And we should still add the fetch & refresh code just for that. (Plus it adds allow/block lists)
But I kinda wanted to do a DNS check anyway, so I don't think the public suffix list adds much.
onli
@onli
Sounds right, it should not be needed if we can confirm the domain exists before sending the email.
Last time, the problem was that the DNS lookup could not be made async?
Stéphan Kochen
@stephank
Think so, yes. Plus we need to do both MX and A lookups, which we can't do using OS facilities, I think. So I was thinking about adding an explicit resolver option, which enables the DNS check with explanation.
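The acceptance decision after those MX and A lookups is simple and could be sketched like this (the async lookups themselves are omitted; the struct and function names are illustrative, not the broker's code). Per RFC 5321, a domain with no MX records but an address record is still a valid mail destination, while a "null MX" (a single `0 .` record, RFC 7505) explicitly opts out of mail:

```rust
use std::net::IpAddr;

/// One MX record: preference value and exchange host name.
struct MxRecord {
    preference: u16,
    exchange: String,
}

/// Decide whether a domain can plausibly receive mail, given the
/// results of already-completed MX and A/AAAA lookups.
fn accepts_mail(mx: &[MxRecord], addresses: &[IpAddr]) -> bool {
    // RFC 7505 null MX: the domain explicitly refuses mail.
    if mx.len() == 1 && mx[0].preference == 0 && mx[0].exchange == "." {
        return false;
    }
    // Any other MX record is enough.
    if !mx.is_empty() {
        return true;
    }
    // RFC 5321 implicit MX: fall back to an A/AAAA record.
    !addresses.is_empty()
}
```

The explicit-resolver config option would then just gate whether these lookups run at all before an email is sent.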
Stéphan Kochen
@stephank
I don't understand why PRs aren't built by GitHub Actions. Tried tweaking the workflow, but it's still not working. :/
PRs from outside of the org, that is
Stéphan Kochen
@stephank
Every now and then I see outlook.com fail our DMARC rules. I assume this is legit bad actors trying to send mail from portier.io, but I'm not sure if someone has an account there and can maybe do a quick check to see if https://demo.portier.io works?
Dylan Staley
@dstaley
@stephank no issues using my outlook.com email
onli
@onli
@stephank I let someone test it with hotmail.com and that worked as well
Stéphan Kochen
@stephank
Awesome, thanks! Guess it's nothing to worry about then. :)