Activity
  • Nov 30 2018 19:29
    mfurkandemir starred fehmicansaglam/tepkin
  • Oct 20 2017 12:55
    Athedorer starred fehmicansaglam/tepkin
  • Mar 02 2017 09:27
    Travis jeroenr/tepkin@f0fe216 (upgradeakka) passed (39)
  • Mar 02 2017 09:25
    jeroenr closed #32
  • Mar 02 2017 09:25
    jeroenr opened #32
  • Aug 12 2016 01:09
    igorlimansky starred fehmicansaglam/tepkin
  • Jun 04 2016 07:49
    fehmicansaglam closed #31
  • Jun 04 2016 07:49
    fehmicansaglam commented #31
  • Jun 03 2016 03:48
    Whitilied opened #31
  • Apr 15 2016 01:22
    cm2316 starred fehmicansaglam/tepkin
  • Apr 05 2016 09:22
    Travis jeroenr/tepkin (propagateerrors) passed (36)
  • Mar 26 2016 19:00
    joecwu starred fehmicansaglam/tepkin
  • Mar 15 2016 12:25
    ginnun starred fehmicansaglam/tepkin
  • Mar 11 2016 21:18
    steveswing starred fehmicansaglam/tepkin
  • Mar 10 2016 08:29
    jeroenr commented #30
  • Mar 10 2016 07:12
    fehmicansaglam commented #30
  • Mar 08 2016 13:40
    Travis jeroenr/tepkin (v0.7) passed (32)
  • Mar 08 2016 13:27
    Travis jeroenr/tepkin@8c867d0 (fixorderby) passed (28)
  • Mar 08 2016 13:22
    jeroenr closed #30
  • Mar 08 2016 13:22
    jeroenr commented #30
Mike Limansky
@limansky
the problem is that I didn't find a way to construct a BsonDocument in a macro from a List[(String, Any)].
Fehmi Can Sağlam
@fehmicansaglam
Hmm
Fehmi Can Sağlam
@fehmicansaglam
I will come up with a working gist soon
Mike Limansky
@limansky
thanx
Fehmi Can Sağlam
@fehmicansaglam
@limansky i have updated 0.5-SNAPSHOT
    val elements: List[(String, Any)] = List("name" -> "name", "age" -> 3, "salary" -> 4.0)
    val document: BsonDocument = BsonDocument.from(elements)
you can construct a BsonDocument from (String, Any) tuples like above.
Mike Limansky
@limansky
@fehmicansaglam Great! Thanx. I'll try to use it this evening :+1:
Harmeet Singh(Taara)
@harmeetsingh0013
@fehmicansaglam is there a discussion room for #reactivemongo-extensions?
Fehmi Can Sağlam
@fehmicansaglam
Nope
Harmeet Singh(Taara)
@harmeetsingh0013
i have a little problem with the query dsl
if you allow, can i paste the link to my query here?
Fehmi Can Sağlam
@fehmicansaglam
Please file an issue on github
Or you can ask in the reactivemongo google group
This room is just for tepkin
Harmeet Singh(Taara)
@harmeetsingh0013
ok @fehmicansaglam, i'll paste the issue on github and the google group
that's why i took your permission first
Fehmi Can Sağlam
@fehmicansaglam
ok, i have posted the answer to SO.
Fehmi Can Sağlam
@fehmicansaglam
If anyone is interested I have opened a room for my new project: elastic-streams https://gitter.im/fehmicansaglam/elastic-streams
I am trying to come up with an implementation of reactive streams for elasticsearch
Daniel Wegener
@danielwegener
hey @fehmicansaglam. Sorry it took so long to report back. The collection sink api looks just as expected, very nice, thank you. I haven't had any luck yet running this in a combined example (like having a flow stdin -> collection.sink ; collection.find(tailable=true) -> stdout) but I'll give you a link once it's running. It might be a nice example.
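A rough sketch of that combined example, assuming a recent akka-stream and hypothetical tepkin shapes for collection.sink and collection.find (the tailable parameter, the empty query and the collection value itself are assumptions, not the confirmed api):

    import akka.actor.ActorSystem
    import akka.stream.ActorMaterializer
    import akka.stream.scaladsl.Source

    object TailExample extends App {
      implicit val system = ActorSystem("tepkin-example")
      implicit val mat    = ActorMaterializer()

      // `collection` stands for a tepkin collection backed by a capped mongo collection;
      // obtaining it (client/database setup) and the BsonDocument import are omitted here.

      // stdin -> collection.sink: insert every console line as a document
      Source.fromIterator(() => scala.io.Source.stdin.getLines())
        .map(line => BsonDocument.from(List("line" -> line))) // builder shown earlier in this room
        .runWith(collection.sink)                             // hypothetical Sink[BsonDocument, _]

      // collection.find(tailable = true) -> stdout: print documents as they arrive
      collection.find(BsonDocument.from(Nil), tailable = true) // hypothetical signature
        .mapConcat(identity)                                   // assuming the Source emits batches
        .runForeach(println)
    }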
Daniel Wegener
@danielwegener
I am still wondering if one could make collection.find() return a Source directly instead of a Future[Source]. In the case where the future would fail, you could just let the stream itself fail directly. This would enable the use of stream supervision for recoverable failures (like when mongo is not available during flow materialization, recover with a restart). In the end, Sources are easier to compose.
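For comparison, a minimal sketch of what the two shapes look like to a caller (assuming akka-stream 2.x; findFuture/findSource are placeholder names, not the tepkin api, and Doc stands in for BsonDocument):

    import scala.concurrent.Future
    import akka.NotUsed
    import akka.stream.scaladsl.Source

    object SourceShapes {
      type Doc = Map[String, Any] // stand-in document type, just to keep the sketch compiling

      // Future[Source]: the caller has to flatten it first, and a failed future
      // surfaces outside the stream, where stream supervision cannot see it.
      def findFuture(): Future[Source[Doc, NotUsed]] = ???
      val flattened: Source[Doc, NotUsed] =
        Source.fromFuture(findFuture()).flatMapConcat(identity)

      // Source returned directly: a failure happens inside the stream at
      // materialization time, so recovery/supervision can react to it.
      def findSource(): Source[Doc, NotUsed] = ???
      val recovered: Source[Doc, NotUsed] =
        findSource().recoverWithRetries(attempts = 3, { case _ => findSource() })
    }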
Fehmi Can Sağlam
@fehmicansaglam
@danielwegener I have changed the Future[Source] return values to Source and updated 0.5-SNAPSHOT
I agree that the api is easier to use now
Daniel Wegener
@danielwegener
Again, cheers @fehmicansaglam, that was awesomely quick.
Daniel Wegener
@danielwegener
Mhm, weird. The Sink seems to work fine but the MongoCursor seems to receive a reply with a cursorId of 0 (which, according to the documentation, indicates a dead cursor, whatever that means) and thus eagerly closes the stream. Would you be fine with me adding ResponseFlags to the Reply class?
Fehmi Can Sağlam
@fehmicansaglam
@danielwegener as you said, if no documents match the query (or the collection is empty) the cursor is dead. If you add at least 1 document to the capped collection everything works fine. I couldn't find much in the documentation so I left it as it is.
What do you mean by ResponseFlags? Could you point me to the documentation?
Daniel Wegener
@danielwegener
The response flags are documented in http://docs.mongodb.org/meta-driver/latest/legacy/mongodb-wire-protocol/#op-reply . There is already a flags field in https://github.com/fehmicansaglam/tepkin/blob/master/tepkin/src/main/scala/net/fehmicansaglam/tepkin/protocol/message/Reply.scala but it is not parsed (though maybe I am looking at the wrong version of the wire protocol).
I think I have misunderstood tailable cursors then. But why would mongo give me a dead cursor for a tailable cursor query on an empty collection if I want to query all elements that will arrive there in the near future? I thought that is what tailable cursors / capped collections are made for.
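For reference, a small sketch of what parsing those bits could look like; the four flags (CursorNotFound, QueryFailure, ShardConfigStale, AwaitCapable) come from the OP_REPLY section linked above, while the case class shape is an assumption, not the current Reply.scala:

    // responseFlags is the 32-bit field at the start of an OP_REPLY:
    // bit 0 CursorNotFound, bit 1 QueryFailure, bit 2 ShardConfigStale, bit 3 AwaitCapable
    case class ResponseFlags(
      cursorNotFound: Boolean,
      queryFailure: Boolean,
      shardConfigStale: Boolean,
      awaitCapable: Boolean)

    object ResponseFlags {
      def fromInt(raw: Int): ResponseFlags = ResponseFlags(
        cursorNotFound   = (raw & 0x01) != 0,
        queryFailure     = (raw & 0x02) != 0,
        shardConfigStale = (raw & 0x04) != 0,
        awaitCapable     = (raw & 0x08) != 0)
    }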
Fehmi Can Sağlam
@fehmicansaglam
I think so too. We could look at the official driver's source code or ReactiveMongo
Maybe there is a way to handle empty collections
Daniel Wegener
@danielwegener
I will try to tail-query a non-empty collection this evening and report back then
Fehmi Can Sağlam
@fehmicansaglam
BTW it would be great if you could send a PR for the flags
Daniel Wegener
@danielwegener
Wip :)
Daniel Wegener
@danielwegener
The dead cursor on an empty collection seems to be an off-by-one corner case of the dead cursor meaning "the cyclic log buffer has overwritten your current read position", as in "the producer is faster than the consumer". The tailable cursor documentation states that the user should explicitly deal with such errors (in their example, using a for loop). We could maybe fail the source if we receive a dead cursor on tailable MongoCursors (it should never complete normally): if (cursorID == 0 && tailable) onError(new DeadCursorException()). The user could then decide on a proper failure policy (restart or fail the whole stream, switch to another source, or whatever)
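A minimal sketch of that last idea, assuming the cursor is driven by an ActorPublisher-style actor; the class and field names are placeholders (not tepkin's actual MongoCursor) and demand handling is elided:

    import akka.stream.actor.ActorPublisher

    case class DeadCursorException(msg: String) extends RuntimeException(msg)

    // Placeholder reply shape; Map[String, Any] stands in for BsonDocument.
    case class Reply(cursorID: Long, documents: List[Map[String, Any]])

    class MongoCursorPublisher(tailable: Boolean) extends ActorPublisher[Map[String, Any]] {
      def receive = {
        case reply: Reply =>
          reply.documents.foreach(onNext)  // push the batch downstream (assumes demand is available)
          if (reply.cursorID == 0) {
            if (tailable) onError(DeadCursorException("tailable cursor died")) // let the user's policy decide
            else onComplete()              // a finite query simply finished
          }
      }
    }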
Fehmi Can Sağlam
@fehmicansaglam
I see. It makes sense to throw a DeadCursorException. But I am not sure about the proper way of dealing with dead cursors.
Daniel Wegener
@danielwegener
I think we can rely on stream supervision so that the user can decide whether she wants to restart/stop/fail downstream (see http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0-M5/scala/stream-error.html)
The restart should just restart the stage (create a new MongoCursor actor)
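As a concrete illustration of that decider idea, a small runnable sketch with a simulated failing stage standing in for the cursor source (assuming akka-stream's ActorAttributes-based supervision; whether the real cursor stage honours the attribute is exactly the open question here):

    import akka.actor.ActorSystem
    import akka.stream.{ActorAttributes, ActorMaterializer, Supervision}
    import akka.stream.scaladsl.{Sink, Source}

    object SupervisionExample extends App {
      implicit val system = ActorSystem("supervision-example")
      implicit val mat    = ActorMaterializer()

      case class DeadCursorException(msg: String) extends RuntimeException(msg)

      // The user picks the policy: restart the stage on a dead cursor, stop on anything else.
      val decider: Supervision.Decider = {
        case _: DeadCursorException => Supervision.Restart
        case _                      => Supervision.Stop
      }

      // Every 4th element "dies"; with Restart the failing element is dropped and the stream goes on.
      Source(1 to 10)
        .map(i => if (i % 4 == 0) throw DeadCursorException(s"dead at $i") else i)
        .withAttributes(ActorAttributes.supervisionStrategy(decider))
        .runWith(Sink.foreach(println))
    }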
Fehmi Can Sağlam
@fehmicansaglam
Makes sense
      // dotted paths like "details.salary" resolve fields of nested BsonDocuments
      document.getAs[Int]("age").value shouldBe 18
      document.getAs[Double]("details.salary").value shouldBe 455.5
      document.getAs[String]("details.personal.foo").value shouldBe "bar"
      document.getAsList[List[_]]("details.inventory").value shouldBe List("a", 3.5, 1L, true)

      val details = document.getAs[BsonDocument]("details").get
      details.getAs[Double]("salary").value shouldBe 455.5

      val personal = document.getAs[BsonDocument]("details.personal").get
      personal.getAs[String]("foo").value shouldBe "bar"
    }
Harmeet Singh(Taara)
@harmeetsingh0013
Hello, what is the difference between tepkin and ReactiveMongo drivers?
Fehmi Can Sağlam
@fehmicansaglam
@harmeetsingh0013 you asked that question before and i replied.
Harmeet Singh(Taara)
@harmeetsingh0013
hello @fehmicansaglam yes i found it, sorry for the repeat.
Harmeet Singh(Taara)
@harmeetsingh0013
hello @fehmicansaglam can you help me with #reactive-mongo-extension 0.11.7?
Jeroen Rosenberg
@jeroenr
Hi all, is this project still alive?
Jeroen Rosenberg
@jeroenr
Hi guys, the new gitter for this project will be https://gitter.im/jeroenr/tepkin
We agreed I would be maintaining this project from now on