Mardo Del Cid
@mardo
On the other hand, this could very well be a show stopper for IntelliJ users, I mean I love my vim and can easily switch to that, but many people won’t like that
Not sure when scalameta will have all the features that longevity needs
John Sullivan
@sullivan-
Just checking that.. Here's the roadmap of the general feature - typechecking - that I need: scalameta/scalameta#604
So one thing we could do is add the def mprop[A](propName: String): Prop[User, A] described in the ticket, and clearly mark in the ScalaDocs that it is provided as an IDEA workaround, and will go away after migration to scalameta
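To make the shape of that proposed workaround concrete, here is a self-contained sketch. Prop here is a stand-in for longevity's own property type, and everything except the mprop signature quoted above is invented for illustration:

```scala
// Self-contained illustration of the proposed workaround (longevity issue #36).
// Prop is a stand-in for longevity's own Prop type; only the mprop signature
// comes from the discussion above, the rest is invented for illustration.
trait Prop[P, A] { def path: String }

case class User(username: String, email: String)

object UserProps {
  // the proposed workaround: the caller names the result type explicitly,
  // so an IDE can typecheck the call without expanding any macros
  def mprop[A](propName: String): Prop[User, A] =
    new Prop[User, A] { val path = propName }

  val username: Prop[User, String] = mprop[String]("username")
}
```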
Mardo Del Cid
@mardo
yeah, that makes sense
John Sullivan
@sullivan-
I think mostly this is what I will need for meta: scalameta/scalameta#609
which looks relatively close on the roadmap
i gotta keep my ears perked for the next meta release, and probably dust off my feat/meta branch when it comes out
@mardo do you want to spearhead this approach: implement the two-step workaround described in longevity issue 36, clearly marked as a workaround that will be phased out when we migrate to scalameta
I'm happy to support you every step of the way if you want to take it on
John Sullivan
@sullivan-
get scalameta on the board: longevityframework/longevity#37
Mardo Del Cid
@mardo
@sullivan- Yes, I’m happy to help with #37. Not sure how much knowledge I need to have about longevity, but I’ll definitely give it a shot
John Sullivan
@sullivan-
you mean #36 ?
#37 is about replacing Scala macros with scalameta, somewhere down the line
John Sullivan
@sullivan-
#36 is about short-term measures to address IDEA support
Mardo Del Cid
@mardo
sorry, yes… I meant #36
John Sullivan
@sullivan-
u r awesome
Mardo Del Cid
@mardo
#37 is out of my league for now :D
I haven’t done anything with scalameta or scala macros yet
John Sullivan
@sullivan-
just a heads up for readers, the IDEA support has moved to issue #38
Hey everyone! I wanted to let you know that I will be leaving on vacation the day after tomorrow. I won't be back until August 28th or so. My connectivity will be pretty spotty where I'm headed, so don't be dismayed if it takes me a while to get back to you.
Mardo Del Cid
@mardo
Hi @sullivan-, is there a way to have an autoincremented (or autogenerated, in the case of mongo, for instance) ID for the domain models? I was looking at how to reuse the database’s autogenerated ones
John Sullivan
@sullivan-
Hi @mardo, thanks for writing. Sorry it's taken me a while to get back to you; I've just gotten back from a long vacation and have tons of stuff to catch up on.
So in my understanding, MongoDB will autogenerate an _id field if you don't supply one. At the moment, I kind of consider that a database concern, and not a domain model concern. I think you would run into trouble if you included an attribute named _id in your domain model.
You can always generate a UUID in the software, and incorporate that into your model, using e.g. java.util.UUID.randomUUID()
longevity supports UUID columns in your domain model
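A minimal sketch of that approach, generating the identifier in software rather than relying on the database; the User type and its fields are illustrative, not taken from any longevity example:

```scala
import java.util.UUID

// the id is generated by the application, not by MongoDB
case class User(id: UUID, username: String)

val user = User(UUID.randomUUID(), "mardo")
```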
John Sullivan
@sullivan-
I wanted to support ObjectId as well, but the reason I didn't is that the ObjectId class is provided by the MongoDB driver, which is an optional dependency, and the way longevity is currently written, it's pretty hard to support ObjectId columns without making the MongoDB driver a hard dependency. But, yeah, you can generate an ObjectId within the JVM which is pretty much guaranteed to be unique, and this would be the recommended approach.
Once I migrate some of the internals to shapeless, users will be able to use types like MongoDB's ObjectId without having to include that driver as a hard dependency - the user would just have to supply an implicit shapeless.Generic[ObjectId] - or something like that.
Yet another reason to migrate to shapeless!
For now, I would recommend using UUIDs, which are effectively guaranteed to be unique.
BTW now that I'm back home, I'll be checking out your IDEA plugin soon, and plugging it. Thanks for that! And sorry it's taken me so long to get a look at it
Mardo Del Cid
@mardo
@sullivan- Sounds good! I think UUID works for me… Along the same lines, it would be great to be able to access the created and updated timestamps from within the model. I’ve been maintaining my own timestamps, but it would be so cool if I could just reuse the ones already maintained by the framework.
:D
Mardo Del Cid
@mardo

@sullivan- Controlled vocabularies are not working for me, I submitted an issue here: longevityframework/longevity#39

But not sure if I’m doing something wrong or if it is a bug :P

John Sullivan
@sullivan-
@mardo I saw the ticket and I will take a look at it soon. First let me address the timestamps comment. The created and modified timestamps (put in by the writeTimestamps configuration) are for diagnostic purposes only, and are not part of your domain model. If you have timestamps in your domain model, you are doing the right thing by adding them in. The idea here is to not conflate persistence concerns and domain model design. For more discussion on this please see http://scabl.blogspot.com/2015/03/aeddd-3.html and http://scabl.blogspot.com/2015/03/aeddd-4.html
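For reference, a sketch of keeping timestamps as ordinary domain-model fields, which is the approach recommended above; the type and field names are illustrative, not from any longevity example:

```scala
import java.time.Instant

// timestamps modeled explicitly as part of the domain model,
// independent of longevity's writeTimestamps diagnostics
case class User(
  username: String,
  createdAt: Instant,
  updatedAt: Instant)

// the application bumps updatedAt itself whenever it changes the entity
def touch(u: User): User = u.copy(updatedAt = Instant.now())
```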
Mardo Del Cid
@mardo
Got it, I’ll continue using my own timestamps then :)
Mardo Del Cid
@mardo
@sullivan-
I was thinking: maybe you should be able to take advantage of the features of whatever database you have as a backend. For example: if you’re using mongo, the framework would know about the context and provide you with goodies like $inc. Or if you’re on SQLite, it would allow you to do native SQL queries. Some food for thought… :)
John Sullivan
@sullivan-
Hi @mardo! That's a great idea, and is definitely worthy of discussion. Bear with me here, because I have a lot to say on this, and my schedule is a bit busy today, so I might not get it out all at once.
So there are two different directions I could see satisfying your desire here, and both are sensible.
The first is to expose the underlying connection to the database for you to use however you want. I've definitely considered this in the past, and I've put it off, because for one, it's not as simple as one might think - every back end has a unique type for the underlying collection, and these types come from optional dependencies. So to do that right, I would need to employ some tricks to keep it from breaking things when the optional dependencies are not present. I've done this kind of thing in the past; it's not too hard, but it's not trivial. For two, nothing is stopping you from acquiring your own connection to the same underlying database. Not ideal, I know - two different connections open in the same program. But it's certainly workable.
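A sketch of that "acquire your own connection" option, tying back to the $inc example; it assumes the MongoDB Java driver is already on the classpath, and the database, collection, and field names are illustrative:

```scala
import com.mongodb.MongoClient
import org.bson.Document

// a second, application-managed connection to the same database
val client = new MongoClient("localhost", 27017)
val users  = client.getDatabase("myapp").getCollection("users")

// a native $inc update that bypasses longevity entirely
users.updateOne(
  new Document("username", "mardo"),
  new Document("$inc", new Document("loginCount", 1)))
```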
John Sullivan
@sullivan-
Second is an API for doing on-database updates. Right now, to update a row, you have to read it into application memory, make the change, and write it back. This is certainly an inefficient way to do updates in some scenarios.
I have considered on-database updates and deletes before. Let me see if I can pull up the stories...
This would be a fair bit of work, because it would require building out a DSL for describing the update clauses
Also, as things currently stand, this would be virtually impossible for the SQL and Cassandra back ends. So as things stand, this would be a Mongo only feature.
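A purely hypothetical sketch of the kind of update-clause DSL being described; none of these names exist in longevity, they are invented here only to illustrate the shape of the work involved:

```scala
// hypothetical update clauses a back end would translate into native operations,
// e.g. Inc("loginCount", 1) into MongoDB's { "$inc": { "loginCount": 1 } }
sealed trait UpdateClause[P]
case class Inc[P](fieldPath: String, by: Long)        extends UpdateClause[P]
case class SetField[P](fieldPath: String, value: Any) extends UpdateClause[P]

case class User(username: String, loginCount: Long)

val clauses: Seq[UpdateClause[User]] =
  Seq(Inc[User]("loginCount", 1), SetField[User]("username", "mardo"))
```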