Corey O'Connor
@coreyoconnor
aight. a bit of style comments. Testing locally
Michal Janousek
@teroxik
Thanks, I will go through the comments over the weekend. The biggest question is whether it's not better to get rid of the old data fields and not serialize the PersistentRepr, as that's the same approach akka-persistence-cassandra is going for. And add a couple more fields to the record, plus support for metadata.
Michal Janousek
@teroxik
Got rid of PersistentRepr serialization and stored persistenceId, sequenceNumber, writerUUID and persistentRepr.manifest as separate fields. Sender is currently set to null. Is sender even used? I haven't seen it explicitly set in the cassandra plugin, nor seen a field for it in the cassandra schema. The persistenceId and sequenceNr could be derived from the record's key, but I went for separate fields instead.
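The separate-fields layout described above could be sketched roughly as follows. This is a hypothetical illustration, not the plugin's actual schema; the attribute names, the `JournalItem` type, and the key-derivation helper are all assumptions for the sake of the example.

```scala
// Hypothetical sketch of a journal record with the fields discussed above
// stored as separate attributes instead of inside a serialized PersistentRepr.
// Names are illustrative only, not the plugin's real schema.
final case class JournalItem(
  persistenceId: String,  // also recoverable from the record key
  sequenceNr: Long,       // also recoverable from the record key
  writerUuid: String,
  manifest: String,       // persistentRepr.manifest, kept as its own field
  payload: Array[Byte]    // event bytes from the configured serializer
)

object JournalItem {
  // persistenceId and sequenceNr could be derived from a composite key like
  // this, but keeping them as explicit fields (the choice described above)
  // avoids parsing the key on every read.
  def recordKey(item: JournalItem): String =
    s"${item.persistenceId}-${item.sequenceNr}"
}
```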
Corey O'Connor
@coreyoconnor
I'm not familiar with the sender field either.
I'll try these changes and will post any issues.
As an aside: I should really look into how to do an integration test with the GDPR plugin.. hmmm
Michal Janousek
@teroxik
@coreyoconnor Patrick commented on the PR that it's not used. I don't think an integration test is required. I have added a dummy async serializer, which is basically a Java serializer wrapped inside a future. That should replicate the required behaviour well enough. I'm currently testing the latest changes in the implementation. Once I have some time I'll look into the performance suite and try to recover the bits that are gone because of the persistent view deprecation. Next on my list is the AWS SDK 2.0 / akka-http (alpakka-client) and persistence query, but those are a bit tastier changes.
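The "Java serializer wrapped inside a future" idea could look roughly like this. This is a standalone sketch using only the Scala/Java standard libraries; the real test serializer would plug into Akka's async serializer API, and the class name here is made up.

```scala
import java.io.{ByteArrayInputStream, ByteArrayOutputStream, ObjectInputStream, ObjectOutputStream}
import scala.concurrent.{ExecutionContext, Future}

// Minimal sketch of a dummy async serializer: plain Java serialization
// wrapped in a Future, which is enough to exercise async serializer code
// paths in tests without a real async backend.
class DummyAsyncSerializer(implicit ec: ExecutionContext) {

  def toBinaryAsync(obj: AnyRef): Future[Array[Byte]] = Future {
    val bos = new ByteArrayOutputStream()
    val out = new ObjectOutputStream(bos)
    try out.writeObject(obj) finally out.close()
    bos.toByteArray
  }

  def fromBinaryAsync(bytes: Array[Byte]): Future[AnyRef] = Future {
    val in = new ObjectInputStream(new ByteArrayInputStream(bytes))
    try in.readObject() finally in.close()
  }
}
```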
Michal Janousek
@teroxik
Do we plan to push it forward? Any more requirements on the PR?
Corey O'Connor
@coreyoconnor
AFAICT we are good to push forward. How does that work with this community maintained project?
Good to know about the integration test. I agree that the dummy async serializer is sufficient.
Corey O'Connor
@coreyoconnor
@teroxik did you figure out how to push to the repo? Anybody I can contact for you to move this forward?
Has an appropriate version bump been considered? Only a minor version bump?
Michal Janousek
@teroxik
We can do a major version; I would be keen to do more work on the features mentioned above. Seems that's mainly on the akka team, and if they approve then we can merge.
Johan Andrén
@johanandren
Just FYI @teroxik and @coreyoconnor we are discussing how to unstall progress here, since the team is so busy with other things. Would either or both of you be interested in a contributor/maintainer role for the plugin?
Sorry for not picking this up earlier
Michal Janousek
@teroxik
@johanandren yes I'm definitely interested.
Corey O'Connor
@coreyoconnor
@johanandren happy to support as a contributor/maintainer as well
Michal Janousek
@teroxik
@johanandren Going to merge the PR and would be good if someone can help out with the release. Seems there is a bit more activity.
Johan Andrén
@johanandren
@teroxik Sorry, the notification for this got lost in the flood. You wouldn’t want #82 to go in a release as well?
Oh, #80 was also not merged yet, I’d guess that is the reason you want a new release, right?
Michal Janousek
@teroxik
Yes, though I don't want to merge my own PR. There could be a couple of smaller cleanup ones as patch versions once this gets in.
Corey O'Connor
@coreyoconnor
I've been running dynamodb persistence using these changes for a bit. Only light testing, but so far consistent with the old behavior, except for one bit:
I recall encountering a null pointer exception when it attempted to recover from a journal using the old schema
I just added a comment to the PR with what I recall. Unfortunately I can't find my notes.
Michal Janousek
@teroxik
I do run it in production and didn't notice any null pointers, but currently there is not a massive throughput through the system and there weren't many new / old persistent entities. So any feedback is welcome. The test for backwards compatibility is there; maybe extend that one a bit. We could release it as an RC at least.
Corey O'Connor
@coreyoconnor
OK. I merged the PR as is
I agree with releasing as an RC first. Seems like a good practice regardless.
In which case, should version.sbt be updated to 1.2.0-RC1?
Michal Janousek
@teroxik
ok, Johan is on vacation; I asked in the akka/dev channel whether somebody can help us out with the release process.
Corey O'Connor
@coreyoconnor
Cool. The version number in version.sbt should be changed? Typically I set the version to the next release version. Not sure what the akka standard is.
Corey O'Connor
@coreyoconnor
Added an issue related to the plugin identifiers used by the two different ddb plugins.
Kato's implementation is at 1.0.7 and uses "dynamo-db-plugin". Which adds some complexity to all this, but what's done is done.
Michal Janousek
@teroxik
yeah makes sense, the naming is a bit silly
Michal Janousek
@teroxik
Nobody is really picking up in akka/dev, so I guess we'll have to wait for Johan to be back from holiday. 1.2.0-RC1 seems good.
Corey O'Connor
@coreyoconnor
ok. I'll create a PR for the version bump if you'd like?
Johan Andrén
@johanandren
I can release a 1.2.0-RC1 now, sounds good? @teroxik @coreyoconnor
Michal Janousek
@teroxik
Yep, sounds great
Johan Andrén
@johanandren
Artifacts on their way to maven central now. I have to run now, feel free to announce over in the Akka discuss forums if you want.
Michal Janousek
@teroxik
Will do today / tomorrow.
Michal Janousek
@teroxik
Thanks for finally helping out publishing it.
Johan Andrén
@johanandren
Sorry for the long delay!
Corey O'Connor
@coreyoconnor
thanks!
feel free to ping me at coreyoconnor@gmail.com if I'm blocking and not on gitter :)
Vladimir
@vladimir-lu
Hi, I have a question about atomicity. As far as I understand from the PersistentActor documentation, the persistAll method is supposed to be atomic for the whole batch. Currently, the journal implementation works around the fact that a batch write is not atomic in DynamoDB with the idx and cnt attributes. Has anyone looked into using TransactWriteItems to guarantee this atomicity?
Corey O'Connor
@coreyoconnor
I don't think anybody has looked into that. Interesting idea though. IIRC TransactWriteItems has a limit on the number of items per transaction. What would be reasonable behavior if a persistAll batch exceeded that limit?
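One way to frame the policy question above: TransactWriteItems caps the number of items per transaction (25 at the time of this discussion; check current AWS limits), and since one persisted event may map to more than one DynamoDB item, a persistAll batch can exceed the cap. A sketch of rejecting oversized batches up front, rather than silently losing atomicity; the object name and the `itemsPerEvent` knob are hypothetical:

```scala
// Sketch: fail a persistAll batch early if it cannot fit in a single
// TransactWriteItems call. MaxTransactItems reflects the limit at the time
// this was discussed; AWS has since changed it, so treat it as a config value.
object TransactionalWrite {
  val MaxTransactItems = 25

  // Returns the item count if the batch fits, or an error message if it
  // would exceed the per-transaction limit and thus break atomicity.
  def validateBatch(eventCount: Int, itemsPerEvent: Int): Either[String, Int] = {
    val items = eventCount * itemsPerEvent
    if (items <= MaxTransactItems) Right(items)
    else Left(s"persistAll batch needs $items items; TransactWriteItems allows at most $MaxTransactItems")
  }
}
```

Rejecting the batch (and surfacing the failure to the persisting actor) is one reasonable behavior; splitting into multiple transactions would reintroduce the non-atomicity the idx/cnt attributes currently work around.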
Todor Kolev
@todor-kolev
Hi, I am going over the docs atm but it's not obvious to me how to do a recovery based on an offset, i.e. I keep an offset in dynamo and only replay the messages where sequenceNr >= offset. Can somebody point me in the right direction? Is this even supported by the plugin, or do I need to implement it myself? Thanks!
Corey O'Connor
@coreyoconnor
Do you have an example from another storage plugin of this?
At first glance, that seems like something that would be generic to akka persistence.
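The offset-based replay asked about above can be expressed as plain filtering over (sequenceNr, event) pairs. In Akka this would normally be done via Persistence Query's eventsByPersistenceId with a fromSequenceNr, or by dropping stale events in receiveRecover; the standalone helper below is just a sketch of the idea, and the object name is made up:

```scala
// Sketch: given replayed events tagged with their sequence numbers and a
// stored offset, keep only the events at or after that offset. This mirrors
// what eventsByPersistenceId(pid, fromSequenceNr, toSequenceNr) would do in
// Akka Persistence Query, but with no Akka dependency.
object OffsetReplay {
  def replayFrom[E](events: Seq[(Long, E)], storedOffset: Long): Seq[E] =
    events.collect { case (seqNr, event) if seqNr >= storedOffset => event }
}
```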