John Sullivan
@sullivan-
longevity supports UUID columns in your domain model
I wanted to support ObjectId as well. The reason I didn't is that the ObjectId class is provided by the MongoDB driver, which is an optional dependency, and the way longevity is currently written, it's pretty hard to support ObjectId columns without making the MongoDB driver a hard dependency. But yeah, you can generate an ObjectId within the JVM, which is pretty much guaranteed to be unique, and that would be the recommended approach.
Once I migrate some of the internals to shapeless, users will be able to use types like MongoDB's ObjectId without having to include that driver as a hard dependency - the user would just have to supply an implicit shapeless.Generic[ObjectId] - or something like that.
Yet another reason to migrate to shapeless!
For now, I would recommend using UUIDs, which are guaranteed to be unique.
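As a sketch of the recommended approach, a JVM-generated UUID can serve as the key in a domain class. The class and method names here are made up for illustration, not taken from longevity:

```scala
import java.util.UUID

// Hypothetical domain class using a JVM-generated UUID as its key.
// UUID.randomUUID() produces a type 4 (random) UUID, which is unique
// for all practical purposes.
case class User(id: UUID, username: String)

object User {
  // Generate the id up front, before the entity is ever persisted.
  def create(username: String): User = User(UUID.randomUUID(), username)
}

val u = User.create("mardo")
```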
John Sullivan
@sullivan-
BTW now that I'm back home, I'll be checking out your IDEA plugin soon, and plugging it. Thanks for that! And sorry it's taken me so long to get a look at it
Mardo Del Cid
@mardo
@sullivan- Sounds good! I think UUID works for me… In the same vein, we should be able to access the created and updated timestamps from within the model. I’ve been maintaining my own timestamps, but it would be so cool if I could just reuse the ones already maintained by the framework.
:D
Mardo Del Cid
@mardo

@sullivan- Controlled vocabularies are not working for me, I submitted an issue here: longevityframework/longevity#39

But I’m not sure if I’m doing something wrong or if it is a bug :P

John Sullivan
@sullivan-
@mardo I saw the ticket and I will take a look at it soon. First let me address the timestamps comment. The created and modified timestamps (put in by the writeTimestamps configuration) are for diagnostic purposes only, and are not part of your domain model. If you have timestamps in your domain model, you are doing the right thing by adding them in. The idea here is to not conflate persistence concerns and domain model design. For more discussion on this please see http://scabl.blogspot.com/2015/03/aeddd-3.html and http://scabl.blogspot.com/2015/03/aeddd-4.html
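To illustrate the separation being described here, domain-level timestamps live in the model itself, independent of whatever diagnostic timestamps the writeTimestamps configuration maintains. The entity and field names below are hypothetical:

```scala
import java.time.Instant

// Hypothetical domain entity that carries its own timestamps as part of
// the domain model, separate from any persistence-layer bookkeeping.
case class Article(title: String, createdAt: Instant, updatedAt: Instant) {
  // Touch the domain-level updatedAt on every meaningful change.
  def retitle(newTitle: String): Article =
    copy(title = newTitle, updatedAt = Instant.now())
}

val a = Article("draft", Instant.now(), Instant.now())
val b = a.retitle("final")
```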
Mardo Del Cid
@mardo
Got it, I’ll continue using my own timestamps then :)
Mardo Del Cid
@mardo
@sullivan-
I was thinking: Maybe you should be able to take advantage of the features of whatever database you have as a backend. For example, if you’re using Mongo, the framework should know about the context and provide you with goodies like $inc. Or if you’re on SQLite, allow you to do native SQL queries. Some food for thought… :)
John Sullivan
@sullivan-
Hi @mardo! That's a great idea, and is definitely worthy of discussion. Bear with me here, because I have a lot to say on this, and my schedule is a bit busy today, so I might not get it out all at once.
So there are two different directions I could see satisfying your desire here, and both are sensible.
The first is to expose the underlying connection to the database for you to use however you want. I've definitely considered this in the past, and I've put it off, because for one, it's not as simple as one might think - every back end has a unique type for the underlying collection, and these types come from optional dependencies. So to do that right, I would need to employ some tricks to keep things from breaking when the optional dependencies are not present. I've done this kind of thing in the past; it's not too hard, but it's not trivial. For two, nothing is stopping you from acquiring your own connection to the same underlying database. Not ideal, I know - two different connections open in the same program. But it's certainly workable.
John Sullivan
@sullivan-
Second is an API for doing on-database updates. Right now, to update a row, you have to read it into memory, make the change, and write it back. This is certainly an inefficient way to do updates in some scenarios.
I have considered on-database updates and deletes before. Let me see if I can pull up the stories...
This would be a fair bit of work, because it would require building out a DSL for describing the update clauses
Also, as things currently stand, this would be virtually impossible for the SQL and Cassandra back ends. So it would be a Mongo-only feature.
I have nothing against a Mongo-only feature, but it does make it less useful, and hence lower priority for me
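The read-modify-write cycle described above can be sketched against a toy in-memory store. This Repo is a stand-in for illustration, not longevity's actual persistence API:

```scala
import scala.collection.mutable

case class Counter(id: String, count: Int)

// Toy in-memory repository illustrating the read-modify-write update
// cycle: the row comes into memory, is changed there, and is written back.
class Repo {
  private val store = mutable.Map.empty[String, Counter]
  def create(c: Counter): Unit = store(c.id) = c
  def retrieve(id: String): Option[Counter] = store.get(id)
  def update(c: Counter): Unit = store(c.id) = c

  // An on-database $inc would do this in a single round trip; here it
  // takes a read followed by a write.
  def increment(id: String): Option[Counter] =
    retrieve(id).map { c =>
      val updated = c.copy(count = c.count + 1)
      update(updated)
      updated
    }
}

val repo = new Repo
repo.create(Counter("a", 0))
repo.increment("a")
```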
John Sullivan
@sullivan-
The reason it's practically impossible in SQL and Cassandra is that, as things stand, the persistent is stored as JSON in a text field on these back ends, and those back ends have no native support for processing JSON
So there is no way that I can reach into the JSON stored in a text column and tweak an individual field or handful of fields, while remaining on-database. (Maybe we could say in-place instead of on-database)
It's possible that procedural extensions to SQL, such as Oracle's PL/SQL, could help. But that is vendor-specific stuff, and a lot of SQL back ends (such as SQLite) will probably never support anything like it
But there is a way it could work
Instead of storing the persistent as JSON, I could instead break it down into all its individual fields, and store each one in a column
This is more than a little tricky, because we want to allow nested collections, such as lists of lists
This is definitely doable in Cassandra, we would have to use frozen collections and things like that
In SQL, it would require breaking up a persistent across multiple tables in a sort of relational style
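To make the flattening idea concrete, here is a toy sketch that breaks a nested value down into path-to-leaf column entries, the kind of representation individual columns would require. This is illustrative code, not longevity's serialization machinery, and it sidesteps the hard part (nested collections like lists of lists):

```scala
// Minimal representation of a persistent value: nested maps of fields,
// with string leaves at the bottom.
sealed trait Value
case class Leaf(v: String) extends Value
case class Node(fields: Map[String, Value]) extends Value

// Flatten a nested value into (columnName -> leafValue) pairs, joining
// field names with underscores, e.g. "address_city" -> "Antigua".
def flatten(value: Value, prefix: String = ""): Map[String, String] =
  value match {
    case Leaf(v) => Map(prefix -> v)
    case Node(fields) =>
      fields.flatMap { case (name, v) =>
        val col = if (prefix.isEmpty) name else s"${prefix}_$name"
        flatten(v, col)
      }
  }
```

With individual fields in their own columns like this, an in-place update could target a single column instead of rewriting a whole JSON blob.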
John Sullivan
@sullivan-
Both of these would be doable, but would represent a lot of work
Now, users and potential users might prefer one way, or prefer another. There are pros and cons to each
I'll skip the details of the pros and cons to try to keep this diatribe somewhat contained
It's not clear to me which way users would prefer, or how I would get feedback on the matter.
Getting feedback from users seems to be pretty hard in general :). With Mardo being a glowing exception :)
We could support both approaches, but that would essentially mean adding two new back ends
The two Cassandra back ends might be able to share some code, but my test suite would basically grow in size by 50%
Which is no big deal just on its own
It would definitely be easier to stick to supporting just four back ends (I'm including in-memory here)
So let me try to sum up:
  • we could do a Mongo-only API for in-place updates. This would represent some work, and would be less than desirable for not supporting all back ends
John Sullivan
@sullivan-
  • we could modify SQL and Cassandra back ends to flatten their representation of the persistent out of JSON, and into individual columns. This would represent the same work as the previous bullet point, and then a good deal more on top of that
  • we could support existing SQL and Cassandra back ends as well as the new, flattened formats. This would represent a bit more work now, and a continual stream of extra maintenance overhead down the line
We could certainly pursue the first option first, and continue on with the second or third approach at our leisure
But a key idea in all this is "a lot of work"
I probably should move these ideas into GitHub issues
Okay, I'm done. I guess I got it out all at once! :) @mardo /all Let me know what you think of all that. And we can mull it all over
Mardo Del Cid
@mardo
@sullivan-
@sullivan-
WOW!… that was a lot hahaha. Yes, I agree on the amount of work. What I’m currently thinking of doing for these kinds of Mongo-specific things is in fact opening my own connection to the database and doing custom queries. It’s just that I love keeping my whole code type-safe, and this would be a part where it won’t be. Not a big deal, as it’s just for some views, not the whole app. Another option I was considering is building a layer on top of longevity for Mongo, but I’m not sure what the best way would be to convert a Mongo record to a case class instance. I’m sure there’s a utility in longevity to do this conversion, but what I’m not sure about is whether that’s exposed, or whether I should be using it :)