John Sullivan
@sullivan-
#37 is about replacing Scala macros with scalameta, somewhere down the line
#36 is about short-term measures to address IDEA support
Mardo Del Cid
@mardo
sorry, yes… I meant #36
John Sullivan
@sullivan-
u r awesome
Mardo Del Cid
@mardo
#37 is out of my league for now :D
I still haven’t done anything with scalameta/scala macros
John Sullivan
@sullivan-
just a heads up for readers, the IDEA support has moved to issue #38
Hey everyone! I wanted to let you know that I will be leaving on vacation the day after tomorrow. I won't be back until August 28th or so. My connectivity will be pretty spotty where I'm headed, so don't be dismayed if it takes me a while to get back to you.
Mardo Del Cid
@mardo
Hi @sullivan-, is there a way to have an autoincremented (or autogenerated, in the case of mongo, for instance) ID for the domain models? I was looking at how to just reuse the database’s autogenerated ones
John Sullivan
@sullivan-
Hi @mardo, thanks for writing. Sorry it's taken me a while to get back to you; I've just gotten back from a long vacation and have tons of stuff to catch up on.
So in my understanding, MongoDB will autogenerate an _id field if you don't supply one. At the moment, I kind of consider that a database concern, and not a domain model concern. I think you would run into trouble if you included an attribute named _id in your domain model.
You can always generate a UUID in the software, and incorporate that into your model, using e.g. java.util.UUID.randomUUID()
longevity supports UUID columns in your domain model
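The suggestion above can be sketched as a small domain model carrying a software-generated UUID. The `UserId` and `User` names are invented for illustration; they are not longevity types.

```scala
import java.util.UUID

// Hypothetical domain model carrying a software-generated UUID.
// `UserId` and `User` are invented names, not longevity types.
case class UserId(value: UUID)
case class User(id: UserId, name: String)

object User {
  // generate the id in software, as suggested above
  def create(name: String): User =
    User(UserId(UUID.randomUUID()), name)
}
```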
John Sullivan
@sullivan-
I wanted to support ObjectId as well. The reason I didn't is that the ObjectId class is provided by the MongoDB driver, which is an optional dependency, and the way longevity is currently written, it's pretty hard to support ObjectId columns without making the MongoDB driver a hard dependency. But yeah, you can generate an ObjectId within the JVM, which is pretty much guaranteed to be unique, and this would be the recommended approach.
Once I migrate some of the internals to shapeless, users will be able to use types like MongoDB's ObjectId without having to include that driver as a hard dependency - the user would just have to supply an implicit shapeless.Generic[ObjectId] - or something like that.
Yet another reason to migrate to shapeless!
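The shape of that idea can be sketched without shapeless at all. Below, `Generic` is a hand-rolled stand-in for the role `shapeless.Generic` would play, and `ObjectId` is a stub for the driver class; neither is the real type. The point is just the mapping a user would supply.

```scala
// Hand-rolled stand-ins: `ObjectId` stubs the driver class, and `Generic`
// mimics the role shapeless.Generic would play; neither is the real thing.
final case class ObjectId(hex: String)

trait Generic[A] {
  type Repr
  def to(a: A): Repr
  def from(r: Repr): A
}

// The user-supplied mapping: persist an ObjectId as its hex string,
// a representation the framework already knows how to store.
implicit val objectIdGeneric: Generic[ObjectId] { type Repr = String } =
  new Generic[ObjectId] {
    type Repr = String
    def to(oid: ObjectId): String = oid.hex
    def from(hex: String): ObjectId = ObjectId(hex)
  }
```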
For now, I would recommend using UUIDs, which are guaranteed to be unique.
BTW now that I'm back home, I'll be checking out your IDEA plugin soon, and plugging it. Thanks for that! And sorry it's taken me so long to get a look at it
Mardo Del Cid
@mardo
@sullivan- Sounds good! I think UUID works for me… In the same vein, we should be able to access the created and updated timestamps from within the model. I’ve been maintaining my own timestamps, but it would be so cool if I could just reuse the ones already maintained by the framework.
:D
Mardo Del Cid
@mardo

@sullivan- Controlled vocabularies are not working for me, I submitted an issue here: longevityframework/longevity#39

But I’m not sure if I’m doing something wrong or if it’s a bug :P

John Sullivan
@sullivan-
@mardo I saw the ticket and I will take a look at it soon. First let me address the timestamps comment. The created and modified timestamps (put in by the writeTimestamps configuration) are for diagnostic purposes only, and are not part of your domain model. If you have timestamps in your domain model, you are doing the right thing by adding them in. The idea here is to not conflate persistence concerns and domain model design. For more discussion on this please see http://scabl.blogspot.com/2015/03/aeddd-3.html and http://scabl.blogspot.com/2015/03/aeddd-4.html
Mardo Del Cid
@mardo
Got it, I’ll continue using my own timestamps then :)
Mardo Del Cid
@mardo
@sullivan-
I was thinking: maybe you should be able to take advantage of the features of the database you have as a back end. For example: if you’re using mongo, the framework should know about the context, and provide you with goodies like $inc, for instance. Or if you’re on SQLite, then allow you to do native SQL queries. Some food for thought… :)
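For readers unfamiliar with `$inc`: it increments a numeric field on the database side, without first reading the document into the application. Its effect can be simulated on a plain `Map` (illustrative only; no driver involved):

```scala
// Simulate the effect of Mongo's $inc on a document modeled as a Map.
// Illustrative only: a real $inc runs on-database, with no round trip.
def applyInc(doc: Map[String, Int], inc: Map[String, Int]): Map[String, Int] =
  inc.foldLeft(doc) { case (d, (field, by)) =>
    d.updated(field, d.getOrElse(field, 0) + by)
  }
```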
John Sullivan
@sullivan-
Hi @mardo! That's a great idea, and is definitely worthy of discussion. Bear with me here, because I have a lot to say on this, and my schedule is a bit busy today, so I might not get it out all at once.
So there are two different directions I could see satisfying your desire here, and both are sensible.
The first is to expose the underlying connection to the database for you to use however you want. I've definitely considered this in the past, and I've put it off, because for one, it's not as simple as one might think - every back end has a unique type for the underlying connection, and these types come from optional dependencies. So to do it right, I would need to employ some tricks to keep things from breaking when the optional dependencies are not present. I've done this kind of thing in the past; it's not too hard, but it's not trivial. For two, nothing is stopping you from acquiring your own connection to the same underlying database. Not ideal, I know - two different connections open in the same program. But it's certainly workable.
John Sullivan
@sullivan-
The second is an API for doing on-database updates. Right now, to update a row, you have to read it into memory, make the change, and write it back. This is certainly an inefficient way to do updates in some scenarios.
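The cycle described above looks roughly like this; an in-memory `Map` stands in for the database, and `Store`, `retrieve`, and `update` are illustrative names, not the actual longevity API.

```scala
import scala.collection.mutable

case class Counter(id: String, count: Int)

// An in-memory stand-in for a repository; not the real longevity API.
class Store {
  private val rows = mutable.Map.empty[String, Counter]
  def retrieve(id: String): Option[Counter] = rows.get(id)
  def update(c: Counter): Unit = rows(c.id) = c
}

// read-modify-write: the whole row crosses the wire in both directions
def increment(store: Store, id: String): Unit =
  store.retrieve(id).map(c => c.copy(count = c.count + 1)).foreach(store.update)
```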
I have considered on-database updates and deletes before. Let me see if I can pull up the stories...
This would be a fair bit of work, because it would require building out a DSL for describing the update clauses
Also, as things currently stand, this would be virtually impossible for the SQL and Cassandra back ends. So as things stand, this would be a Mongo-only feature.
I have nothing against a Mongo-only feature, but it does make it less useful, and hence lower priority for me
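To make the DSL idea mentioned above concrete, here is a sketch of update clauses as plain data, rendered into a Mongo-style update document. The `UpdateClause` ADT and its case names are invented for illustration; this is not a longevity API.

```scala
// Invented update-clause ADT; not a longevity API, just the shape of the idea.
sealed trait UpdateClause
final case class SetField(field: String, value: String) extends UpdateClause
final case class IncField(field: String, by: Int) extends UpdateClause

// Render clauses into a Mongo-style update document (as a plain string).
def render(clauses: Seq[UpdateClause]): String =
  clauses.map {
    case SetField(f, v) => s"""$$set: { $f: "$v" }"""
    case IncField(f, n) => s"$$inc: { $f: $n }"
  }.mkString("{ ", ", ", " }")
```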
John Sullivan
@sullivan-
The reason why it's practically impossible in SQL and Cassandra is that, as things stand, the persistent is stored as JSON in a text field on these back ends. Relational databases and Cassandra have no native support for processing JSON
So there is no way that I can reach into the JSON stored in a text column and tweak an individual field or handful of fields, while remaining on-database. (Maybe we could say in-place instead of on-database)
It's possible that using procedural extensions to SQL, such as Oracle's PL/SQL, could help. But that is vendor-specific stuff, and a lot of SQL back ends (such as SQLite) will probably never support anything like it
But there is a way it could work
Instead of storing the persistent as JSON, I could instead break it down into all its individual fields, and store each one in a column
This is more than a little tricky, because we want to allow nested collections, such as lists of lists
This is definitely doable in Cassandra, we would have to use frozen collections and things like that
In SQL, it would require breaking up a persistent across multiple tables in a sort of relational style
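The column-per-field scheme, sketched on a small nested persistent. The type names are invented; a real implementation would derive this mapping, and nested collections like lists of lists would complicate it considerably.

```scala
// Invented persistent type with one level of nesting.
case class Address(street: String, city: String)
case class Person(name: String, address: Address)

// Break the persistent down into flat column-name/value pairs,
// as the column-per-field storage scheme above would require.
def columns(p: Person): Map[String, String] = Map(
  "name"           -> p.name,
  "address_street" -> p.address.street,
  "address_city"   -> p.address.city
)
```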
John Sullivan
@sullivan-
Both of these would be doable, but would represent a lot of work
Now, users and potential users might prefer one way, or prefer another. There are pros and cons to each
I'll skip the details of the pros and cons to try to keep this diatribe somewhat contained
It's not clear to me which way users would prefer, or how I would get feedback on the matter.