Oskar Dudycz
@oskardudycz
so at first publish a prerelease package
then clone the client repos that are using it, run the basic build (might be extended to also run tests etc.)
and if that succeeds then publish the regular package
That might even be extended to send a PR to those repos via webhook
then on the PR pipeline the full test flow for the specific repo can be run
This sample also shows other scenarios for the matrix builds (like building across platforms and across PG versions) but that might not be super useful for you
The sample shows how to use that in Azure Pipelines, but the same flow could be implemented in other CI/CD solutions
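The publish-then-verify flow described above might look roughly like this as an Azure Pipelines sketch (the stage names, feed variables, and repo URL are all placeholders, and the pack/push details will vary per project):

```yaml
stages:
  - stage: PublishPrerelease
    jobs:
      - job: Pack
        steps:
          - script: dotnet pack -c Release --version-suffix pre.$(Build.BuildId)
          - script: dotnet nuget push "**/*.nupkg" --source $(PrereleaseFeed)

  - stage: VerifyClients
    dependsOn: PublishPrerelease
    jobs:
      - job: BuildClientRepo
        steps:
          # Clone a known client repo and build it against the prerelease package
          - script: git clone https://github.com/example-org/client-repo
          - script: dotnet build client-repo

  - stage: PublishStable
    dependsOn: VerifyClients
    condition: succeeded()
    jobs:
      - job: Push
        steps:
          - script: dotnet nuget push "**/*.nupkg" --source $(StableFeed)
```

The webhook-driven PR idea mentioned above would extend the middle stage, instead of (or in addition to) the direct clone-and-build.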
edwardridge
@edwardridge
Hello! I have a question about a potential feature - have you considered adding an equivalent to RavenDB's BeforeConversionToEntity method on the IDocumentStoreListener? I see Marten has an IDocumentSessionListener with a DocumentLoaded method, but this runs after deserialisation, whereas BeforeConversionToEntity runs on the raw JSON before deserialisation
Where I work we're looking to move our large app from RavenDB to Marten, but our database migrations heavily use BeforeConversionToEntity - it allows us to manipulate the JSON into the new C# schema before deserialisation (as the old data gets lost in deserialisation) and also allows us to do "on the fly" migrations as the document is loaded
I'm happy to raise an issue/PR for discussion but wanted to check if it had already been considered before
Jeremy D. Miller
@jeremydmiller
We haven’t thought about anything like that, no. For document migrations like that, Marten supports using a Javascript function to transform the persisted JSON in one shot in the database.
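A minimal sketch of that one-shot approach, based on the Marten v3-era document transform API (the file name, transform name, and document type here are illustrative):

```csharp
// Register a javascript transform file with the store. The referenced
// default_username.js is assumed to export a function that takes the raw
// document and returns the migrated shape, e.g.
//   module.exports = function (doc) { doc.UserName = ...; return doc; }
var store = DocumentStore.For(opts =>
{
    opts.Connection("Host=localhost;Database=marten;Username=user;Password=pass");
    opts.Transforms.LoadFile("default_username.js");
});

// Apply the transform to every persisted User document in one shot,
// entirely inside Postgres - no round-tripping documents through .NET.
store.Transform.All<User>("default_username");
```

Because the transform runs inside the database, it migrates the whole collection in a single pass rather than lazily as documents are loaded.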
Babu Annamalai
@mysticmind
@jeremydmiller @oskardudycz @jokokko I am catching up on the threads of conversation around capturing the thoughts/discussions pertaining to design/features/v4. I would suggest that we follow a method similar to https://github.com/vuejs/rfcs. The Vue folks use a separate repo and the RFCs (requests for comments) come in as PRs. For our case, we can use our current repo and tag the PRs accordingly. For larger design changes/early ideas which need to be discussed first among the core contributors, we can collaborate using Google Docs/a private repo (this also makes transferring the content between GH repos easier).
edwardridge
@edwardridge
@jeremydmiller is it worth me raising a GitHub issue and/or PR for it? We find it really useful and powerful to have the transformation code in C# and applied when the document is loaded (which can't be done as a javascript one shot)
Oskar Dudycz
@oskardudycz
@jeremydmiller I'm not sure about the document part but for the ES we should have the concept of the upcasters
I don't know RavenDB, but that seems to be a similar concept
Jeremy D. Miller
@jeremydmiller
@edwardridge Why wouldn’t the one shot migration work in your scenario?
edwardridge
@edwardridge
We keep a list of migrations, and store the last migration run inside the document. So, if we have migrations A, B and C, and a document has already run migration A, when we load this document we check which migration was last run and then run B and C. I'm not sure how that would work with the javascript transformations. Also, the documents are quite large and complex, and the benefit of migrating them one at a time on the fly is that our downtime for database deployments is basically zero, which I'm not sure would be the case if it needed to apply these migrations all at once (I could be wrong though)
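The chaining described here (check the last-run migration, then apply the newer ones in order) can be sketched store-agnostically in C#; every name below is invented for illustration:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Text.Json.Nodes;

public interface IDocumentMigration
{
    int Version { get; }
    void Apply(JsonObject doc);
}

public static class MigrationRunner
{
    // Applies every migration newer than the version stamped on the document,
    // in order, then records the new version - mirroring the A/B/C example above.
    public static void Upgrade(JsonObject doc, IEnumerable<IDocumentMigration> migrations)
    {
        var current = doc["MigrationVersion"]?.GetValue<int>() ?? 0;
        foreach (var migration in migrations
                     .Where(m => m.Version > current)
                     .OrderBy(m => m.Version))
        {
            migration.Apply(doc);
            current = migration.Version;
        }
        doc["MigrationVersion"] = current;
    }
}
```

Hooking this in at load time is the part that needs a BeforeConversionToEntity-style extension point, since it has to run on the raw JSON before it is bound to the C# type.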
Jeremy D. Miller
@jeremydmiller
Postgresql can actually do that though. You wouldn’t have downtime.
edwardridge
@edwardridge
OK, I'll try it out, thanks
Marko Lahma
@lahma
I've considered porting https://github.com/migrating-ravens/RavenMigrations to work with the Marten API. I'd rather do things in C# when data transformations are needed - does that sound reasonable?
After years of using RavenDB and liking it, using a SQL-like API to transform documents and track the state of completed migrations doesn't feel good
Tony Karalis
@tonykaralis
Is there a performance benefit when using LoadAsync or LoadManyAsync versus using Query<TDoc>().Where(x => x.Id == id)?
Jeremy D. Miller
@jeremydmiller
Marten can take advantage of the identity map when using one of the Load*** methods to prevent double loading the same entity. The SQL creation behind the scenes is quicker too, without having to parse LINQ. So yes, there is. You'd have to test for yourself in your app to see how big a deal that really is.
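For comparison, the two shapes being discussed might look like this (assuming a configured DocumentStore `store` and an illustrative Guid-identified Order type):

```csharp
using var session = store.OpenSession();

// Identity-map-aware lookup: repeated loads of the same id within the
// session return the already-materialized instance, and the generated SQL
// is a simple primary-key lookup with no LINQ parsing.
var byLoad = await session.LoadAsync<Order>(orderId);

// Equivalent LINQ query: same result, but goes through the
// LINQ-to-SQL translation first.
var byQuery = await session.Query<Order>()
    .Where(x => x.Id == orderId)
    .SingleOrDefaultAsync();
```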
Tony Karalis
@tonykaralis
Thanks @jeremydmiller, makes sense. I'm trying to use Load and LoadMany on a tenanted document without providing a tenant id. On the Queryable we can use the .AnyTenant() extension, which returns everything, but I couldn't find a workaround when using Load and LoadMany.
Oskar Dudycz
@oskardudycz
I think that it wasn't added yet
Currently it'll only work for the query
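To make the distinction concrete, a sketch (the document type, id, and tenant name are illustrative, and the tenant-id session overload is assumed from the Marten API of the time):

```csharp
using var session = store.QuerySession();

// Works today: the LINQ provider exposes AnyTenant() for cross-tenant queries
var doc = await session.Query<Invoice>()
    .Where(x => x.AnyTenant() && x.Id == invoiceId)
    .SingleOrDefaultAsync();

// When the tenant is known, Load works from a tenant-scoped session
using var tenantSession = store.QuerySession("tenant-a");
var sameDoc = await tenantSession.LoadAsync<Invoice>(invoiceId);
```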
Tony Karalis
@tonykaralis
No prob, will use the Query, thanks
Oskar Dudycz
@oskardudycz
I think that if you're not getting the same entities multiple times by calling the Query method
then the performance difference shouldn't be significant
Jeremy D. Miller
@jeremydmiller
It runs through the identity map even with a Query(), but it has to do the database query first to know what’s coming back. That came in very early on.
Oskar Dudycz
@oskardudycz
yup, that's what I meant. So if you load entities once and then perform the logic on them then it should be more or less the same
Tony Karalis
@tonykaralis
Brilliant, thanks both
Oskar Dudycz
@oskardudycz
:+1:
Andrew Bullock
@trullock
For GDPR purposes I need to be able to pull all event data for any stream that has events relating to user X. What's the best way of doing this?
My events all have a UserId on them, but that's not always the ID of the entity the events are owned by/emitted from
I realise I could project the events with a userId index, but it would be an almost exact copy of the same data, just with an index on it
wondering if there's a tidier way
Oskar Dudycz
@oskardudycz
Do you want to display all the events related to that user? Or somehow anonymize them afterwards?
Andrew Bullock
@trullock
I want all the events for the User stream, but I also want select events from other streams which relate to that user
in pseudo-SQL: select * from events where userId = X
Oskar Dudycz
@oskardudycz
I think that it should be possible to do that with session.Events.QueryAllRawEvents()
Andrew Bullock
@trullock
yeah, I was just wondering how to make it efficient, as it will have to dig into the JSON data of each event
Oskar Dudycz
@oskardudycz
I think that it should be translated to the native PG query
The other case is
if you always have the userId
then you could use conjoined multitenancy
and then you'd have it automatically indexed
But then you'd need to configure projections to be non tenanted if you're using them as such
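A sketch of the conjoined-tenancy idea, treating the user id as the tenant so every event row carries an indexed tenant_id column (connection string, stream id, and event type are placeholders, based on the Marten v3-era API):

```csharp
var store = DocumentStore.For(opts =>
{
    opts.Connection("Host=localhost;Database=marten;Username=user;Password=pass");
    // Conjoined tenancy adds a tenant_id column to the event storage,
    // which Marten indexes, so per-user reads stay cheap.
    opts.Events.TenancyStyle = TenancyStyle.Conjoined;
});

// Append events under the user id as the tenant:
using (var session = store.OpenSession("user-123"))
{
    session.Events.Append(streamId, new OrderPlaced());
    session.SaveChanges();
}

// "select * from events where userId = X" then becomes a tenant-scoped read:
using (var query = store.QuerySession("user-123"))
{
    var events = query.Events.QueryAllRawEvents().ToList();
}
```

The trade-off Oskar mentions applies: projections built from those streams would need to be configured as non-tenanted if they aggregate across users.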
Andrew Bullock
@trullock
ta