Elmar Sonnenschein
@nleso
I noticed that on errors the full connection URL will be logged, including the credentials. This leaks sensitive information into any consumers of the log. Is there a way to specify the credentials or at least the password outside of the URL?
Brian Scully
@scullxbones

Hi @nleso -

It's not covered in the latest documentation, but there is a legacy approach to supplying connection information to the plugin that can be seen in the older documentation. The implementation is in MongoSettings.MongoUri:

  import scala.collection.JavaConverters._
  import scala.util.Try

  val MongoUri: String = Try(config.getString("mongouri")).toOption match {
    case Some(uri) => uri
    case None => // Fall back to the legacy fields
      val Urls = config.getStringList("urls").asScala.toList.mkString(",")
      val Username = Try(config.getString("username")).toOption
      val Password = Try(config.getString("password")).toOption
      val DbName = config.getString("db")
      // Embed credentials only when both username and password are set
      (for {
        user <- Username
        password <- Password
      } yield {
        s"mongodb://$user:$password@$Urls/$DbName"
      }) getOrElse s"mongodb://$Urls/$DbName"
  }

You can see that if the mongouri configuration is not supplied, it falls back to the legacy fields urls, username, password, and db ... which, now that I've typed all this out, I see just generates a URI. Hmm
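
Schematically, the two configuration styles look like this (a hedged sketch; the `akka.contrib.persistence.mongodb.mongo` key path and all host/credential values here are placeholders to verify against the plugin's reference.conf):

```hocon
# Preferred: a single connection URI (credentials embedded, so they can surface in logs)
akka.contrib.persistence.mongodb.mongo.mongouri = "mongodb://user:password@host1:27017/mydb"

# Legacy fallback: separate fields that MongoSettings.MongoUri assembles into a URI
akka.contrib.persistence.mongodb.mongo {
  urls     = ["host1:27017", "host2:27017"]
  username = "user"
  password = "password"
  db       = "mydb"
}
```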

Do you know where the logging is coming from? Is it from the plugin or the underlying driver?
Elmar Sonnenschein
@nleso
No, I have no idea. I had just noticed the full URL appearing in the logs after a wrong configuration caused a connection error. It doesn't occur when everything works, so normally nobody will notice. Still, it's a bit uncomfortable to have credentials leaking into the cluster-wide log system on network errors... :-)
Brian Scully
@scullxbones
Can you share the log statement? Omitting the credentials of course :)
Elmar Sonnenschein
@nleso

One error message was:

Could not parse URI 'mongodb://<user>:<pw>@<host>:27017': authentication information found but no database name in URI

Another one occurred when the DB host was not reachable, but I don't have the exact error message available
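
That parse error is the driver rejecting a URI that carries credentials but no database segment; schematically (placeholder values, not the actual configuration from this conversation):

```
# Rejected: authentication information present but no database name
mongodb://user:password@host:27017

# Accepted: database name appended after the host
mongodb://user:password@host:27017/mydb
```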
alokshukla121
@alokshukla121:matrix.org
[m]
Hi @scullxbones , could you please help me figure out the issue with tags I'm facing? The tags aren't written even though I've wrapped the event inside 'akka.persistence.journal.Tagged'. Is there any specific configuration for tags in the journal that I'm missing?
I'm using Akka 2.6.10 and Java serialization for now.
ReadJournal works absolutely fine; I'm able to get events by tag if I add the tags manually to the document.
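
A minimal sketch of the wrapping being described, with a stand-in for akka's `akka.persistence.journal.Tagged` so the snippet stays self-contained (the `Tagged` mimic, `OrderPlaced`, and the tag names are all hypothetical, not from this conversation):

```scala
// Stand-in for akka.persistence.journal.Tagged(payload, tags)
case class Tagged(payload: Any, tags: Set[String])

// Hypothetical domain event
case class OrderPlaced(orderId: String)

// Wrap the event before handing it to persist(...); the journal unwraps
// the payload and stores the tags alongside it.
def tag(event: Any): Tagged = event match {
  case e: OrderPlaced => Tagged(e, Set("order"))
  case other          => Tagged(other, Set.empty)
}
```

In a real actor this wrapping is typically done in a WriteEventAdapter so the domain code never sees the journal-specific envelope.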
Brian Scully
@scullxbones

Hi @alokshukla121:matrix.org -

There's no configuration to turn on. But it looks like there may be a configuration setting to turn off.

Looking at: https://github.com/scullxbones/akka-persistence-mongo/blob/master/common/src/main/scala/akka/contrib/persistence/mongodb/MongoDataModel.scala I see your type is considered to be Legacy. Do you have legacy serialization enabled? That may play havoc with tagging.

Legacy serialization is really only for projects that have used this library for many years; I think it dates from the Akka 2.3 series :)
alokshukla121
@alokshukla121:matrix.org
[m]
Hi @scullxbones , I switched off Java serialization and used protobuf this time, but I still see the type as 'repr' in the database. Moreover, I'm no longer able to wrap my event in 'akka.persistence.journal.Tagged', as Akka attempts to use Java serialization for 'Tagged', which is turned off.
alokshukla121
@alokshukla121:matrix.org
[m]
@scullxbones: Sorry, I missed your point. After checking for a flag to toggle legacy serialization, I found it in reference.conf.
It was set to true earlier, and hence the tags were not getting added. After I switched it to false, it works as expected. Thanks a lot for your help!
Brian Scully
@scullxbones
happy to help!
alokshukla121
@alokshukla121:matrix.org
[m]
@scullxbones: How do we debug the journal to see what's going on with replayed events? How do I enable logging to see what's going wrong? I've migrated the existing events from the Casbah schema to the schema of your journal, but I think in the process I'm missing something: only 'RecoveryCompleted' is being replayed for all the migrated messages/events. Is it possible to change the default 'JournallingFieldNames'?
Brian Scully
@scullxbones
All the plugin drivers share the same data model, MongoDataModel in the common submodule. There shouldn't be any need to migrate events; that part of the question confuses me. Are you trying to migrate something home-grown?
No, it's not possible to change the journalling field names without forking the project.
Brian Scully
@scullxbones
Regarding missing events: make sure that your message handler in the persistent actor has a catch-all section, or that you are handling all possible infrastructure messages, like RecoveryFailed. Logging is done via slf4j, but it is fairly limited; the plugin prefers to communicate failures through Akka's plugin interface, e.g. Try[Unit]
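
The catch-all idea can be sketched stand-alone like this (the event and message types are stand-ins, not akka's; in a real persistent actor this would be the shape of receiveRecover):

```scala
// Stand-in types: in a real actor these would be your domain events plus
// akka's infrastructure messages (e.g. RecoveryCompleted).
case class MyEvent(n: Int)
case object RecoveryCompleted

// Apply known events, acknowledge infrastructure messages, and catch
// everything else instead of dropping it silently.
def handleReplay(msg: Any): String = msg match {
  case MyEvent(n)        => s"applied event $n"
  case RecoveryCompleted => "recovery finished"
  case other             => s"unhandled replayed message: $other"
}
```

The last case is the point: anything replayed from the journal that your handler doesn't recognize should at least be logged, which makes schema-migration mistakes visible.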
alokshukla121
@alokshukla121:matrix.org
[m]
@scullxbones: I'm facing a strange problem with the eventsByTag query: sometimes it works and sometimes it doesn't. I've compared the offsets and figured out that the offset stored at the projector was earlier than the recently added events. It doesn't even show any error.
  final EventsByTagQuery readJournal = PersistenceQuery.get(ACTOR_SYSTEM)
      .getReadJournalFor(EventsByTagQuery.class, MongoReadJournal.Identifier());
  final Source<EventEnvelope, NotUsed> source =
      readJournal.eventsByTag(getTagName(), getCurrentOffset());
  source.runForeach(this::processEvent, ACTOR_SYSTEM);
Brian Scully
@scullxbones
@alokshukla121:matrix.org have you seen issue #370? I wonder if you're seeing this problem.
alokshukla121
@alokshukla121:matrix.org
[m]
@scullxbones: Thanks for the reference! I did check #370 and the other related issues, but those all seem to relate to events missed due to time differences within a second. In my case, events aren't appearing even when the offset difference is around 20 days: the read-side processor has an offset of 6th July 2021, while the events present in messages (the collection I use to store all events) are as recent as 26th July 2021. On my local machine, with embedded Tomcat in Spring Boot, it works fine, but when I deploy the same application to Azure App Service the events aren't received, and I don't see any error logs either. Could it be because all the events, including the missing 20 days, are in the messages collection rather than realtime? Should I copy the unprocessed events to the realtime collection? I had migrated the events from the previous schema to the one the official Mongo driver supports, and in the process I copied all the events into the same collection.
Brian Scully
@scullxbones
@alokshukla121:matrix.org do you have any live events at all? The realtime collection is a capped collection that is written to when events are written to the journal. It's a kind of event bus that allows distributed listeners (read processors) without requiring a cluster. It can be disabled by configuration; do you have it enabled? realtime-enable-persistence is enabled by default ... https://github.com/scullxbones/akka-persistence-mongo/blob/master/common/src/main/resources/reference.conf#L34
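
In configuration terms that toggle would look roughly like this (a sketch; the full key path is an assumption to verify against the linked reference.conf, only the realtime-enable-persistence name comes from the message above):

```hocon
# Writes each journalled event to the capped realtime collection as well,
# which is what live eventsByTag listeners consume. Enabled by default.
akka.contrib.persistence.mongodb.mongo.realtime-enable-persistence = true
```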
Julian Sarrelli
@jsarrelli
Hi @scullxbones , is there any way to persist several events with one document for each? Right now if I persist 20 events they all end up in a single Bson document
Brian Scully
@scullxbones

Hi @jsarrelli - the plugin respects the calls that are made from the library. If multiple messages are passed via asyncWriteMessages, they are stored as a batch in a single document in MongoDB. This is because the document is MongoDB's unit of atomicity

This should be triggered by this documented usage or if you prefer classic
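
The batching semantics can be modelled stand-alone like this (Event, AtomicWrite, and JournalDoc are stand-ins for akka's AtomicWrite and the journal's per-batch document, not the plugin's actual types):

```scala
// Stand-in types for illustration only.
case class Event(seqNr: Long, payload: String)
case class AtomicWrite(events: List[Event])                 // one persistAll batch
case class JournalDoc(from: Long, to: Long, events: List[Event])

// One AtomicWrite (assumed non-empty, as in akka) becomes one document;
// separately persisted events arrive as separate AtomicWrites and so land
// in separate documents.
def toDocuments(writes: List[AtomicWrite]): List[JournalDoc] =
  writes.map(w => JournalDoc(w.events.head.seqNr, w.events.last.seqNr, w.events))
```

So to get one document per event, each event has to reach the journal in its own write (e.g. persist rather than persistAll, under classic persistence).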

Julian Sarrelli
@jsarrelli
Hi @scullxbones , is there any support for Durable State?
Brian Scully
@scullxbones
Hi @jsarrelli - I have not looked into that yet. If it uses the existing plugin APIs as-is, then yes. In general, the team has been good about reuse and backward compatibility with new persistence features, but I'll have to research it more to know. Don't feel like you need to wait on me, though; a proof of concept could answer that fairly quickly, I think.
Julian Sarrelli
@jsarrelli
And also, does the eventsByTag functionality work when several events are stored under the same Mongo document? I'm persisting events as Bson documents with tags, but when I run the stream nothing happens