Andriy Plokhotnyuk
@plokhotnyuk
@TimothyKlim Hi, Timothy! Thanks for your feedback! Currently, deriving codecs without ADT tags is not supported. The main reason is that there is no safe implementation of parsing for the general case. plokhotnyuk/jsoniter-scala#435 is the issue where possible solutions were discussed.
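For context, a minimal sketch of what tagged derivation looks like today (the Shape ADT here is hypothetical, not from the thread): by default JsonCodecMaker.make emits a discriminator field for sealed-trait subtypes, and it is that tag which currently cannot be dropped.

import com.github.plokhotnyuk.jsoniter_scala.core._
import com.github.plokhotnyuk.jsoniter_scala.macros._

sealed trait Shape
final case class Circle(radius: Double) extends Shape
final case class Square(side: Double) extends Shape

object Shape {
  // Default derivation adds a discriminator field ("type" by default), e.g.
  // {"type":"Circle","radius":1.0}; dropping the tag and telling subtypes
  // apart by structure alone is the unsupported "no ADT tags" mode.
  implicit val codec: JsonValueCodec[Shape] = JsonCodecMaker.make
}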
Glen Marchesani
@fizzy33

I am converting from an AST-based JSON library (well, a few actually) and am looking for a recommendation. I have messaging middleware where effectively there are two layers: one layer has the metadata and another layer handles the message bodies, like so:

case class Message(
  context: MessageContext,
  sender: NodeId,
  destination: NodeId,
  methodName: MethodName,
  event: StreamEvent,
  headers: Map[String,String],
  body: JsonStr,
)

So JsonStr is really any valid JSON. In the AST JSON world I would use the AST type; in the jsoniter world that doesn't exist. So I have this:

object JsonStr {
  implicit val codec: JsonValueCodec[JsonStr] =
    new JsonValueCodec[JsonStr] {

      // Read the next JSON value as raw bytes without building an AST.
      // Chunk here is an fs2-like immutable byte-chunk type, not fs2's.
      override def decodeValue(in: JsonReader, default: JsonStr): JsonStr =
        JsonStr(Chunk.array(in.readRawValAsBytes))

      // ??? optimize me: we have at least one, if not more, extra array copies
      override def encodeValue(x: JsonStr, out: JsonWriter): Unit =
        out.writeRawVal(x.rawBytes.toArray)

      // Returned for missing/null values instead of throwing.
      override def nullValue: JsonStr = Null
    }

  val Null = JsonStr(Chunk.array("null".getBytes))
  val EmptyObj = JsonStr(Chunk.array("{}".getBytes))
  val EmptyArr = JsonStr(Chunk.array("[]".getBytes))
}

case class JsonStr(rawBytes: Chunk[Byte])
So in this use case, with an AST we parse once to the AST and then, in another layer, process Message.body into the expected underlying type.
With jsoniter it looks like I will have to parse the JSON twice: once to get JsonStr and again to parse JsonStr into the expected type.
Wondering if anyone has thoughts on how best to do this with jsoniter.
Second question: is there any interest in a PR with some tweaks to JsonReader and JsonWriter to support JsonStr working via zero copy?
Glen Marchesani
@fizzy33
third question is a ponder related to the first: is this double parsing going to ruin the performance gains of using jsoniter?
any thoughts are appreciated
Andriy Plokhotnyuk
@plokhotnyuk
@fizzy33 Hi, Glen! Thanks for reaching out here! Usually reading and writing of raw bytes is used either to speed up JSON message transformation, when raw chunks go from the input to the output untouched, or for deferred parsing, when raw chunks are re-parsed into case classes on demand under some rare conditions. I need to understand your challenge in general to propose the most efficient solution. For example, if the body is schema-less and should always be fully parsed to some AST, then you can do that without intermediate raw chunks using a custom codec. The dijon and play-json-jsoniter libraries have such codecs for their ASTs.
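For what it's worth, a minimal sketch of the deferred-parsing route, reusing Glen's JsonStr wrapper from above (the parseBody helper name is made up here):

import com.github.plokhotnyuk.jsoniter_scala.core._

// Decode the outer Message once; re-parse a raw body into its target type
// only when a layer actually needs it, so untouched bodies pay no AST cost.
def parseBody[A](body: JsonStr)(implicit codec: JsonValueCodec[A]): A =
  readFromArray(body.rawBytes.toArray)

// e.g. val payload = parseBody[MyBody](msg.body)  // MyBody is hypothetical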
Andriy Plokhotnyuk
@plokhotnyuk
BTW, if the Chunk type is from the fs2 library, then it will be quite inefficient to use it with the Byte type due to the lack of specialization for primitives.
Glen Marchesani
@fizzy33
well, something Chunk-like... I wouldn't want to add fs2 as a dep, and roger on the inefficiencies without specialization
Well, my use case is that there is a general messaging layer to pass messages between processes, for which the Message class is the handler. Then at the lower level each class gets the body and turns it into whatever it needs to process. For example, most of the messages are some form of API call, so one such call is just a case class, e.g.
case class QueryRequest(query: String, batching: Int)

// the response
case class QueryResponse(rows: fs2.Stream[F, Chunk[Row]], metadata: QueryMetadata)
Message and QueryRequest | QueryResponse are distinct layers, so one cannot marshal them into instances at the same time.
Glen Marchesani
@fizzy33
fwiw I did some prelim testing on the JsonStr; it works well enough for now
I get better performance all around and can probably tweak it a bit later
Andriy Plokhotnyuk
@plokhotnyuk
Just wondering why Array[Byte] cannot be used directly, but safely, like in the RawVal type here, instead of using an intermediate Chunk[Byte]?
Glen Marchesani
@fizzy33
Array[Byte] can be used directly
the issue is I have offsets
so note in my example: out.writeRawVal(x.rawBytes.toArray)
ideally if I could have
def writeRawVal(array: Array[Byte], offset: Int, len: Int)
then I could achieve zero copy (well, one copy) on writes
on the read side, if there was a read that gave back the same triplet, as in (array: Array[Byte], offset: Int, len: Int), then I could get zero copy on my side
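To make the extra copy concrete, a sketch assuming rawBytes is a view (array, offset, length) over a larger receive buffer; the three-argument overload is Glen's proposal, not an existing jsoniter-scala method:

import com.github.plokhotnyuk.jsoniter_scala.core._

// Today: JsonWriter.writeRawVal takes a standalone Array[Byte], so the view
// must be copied out before writing.
def writeBody(out: JsonWriter, x: JsonStr): Unit =
  out.writeRawVal(x.rawBytes.toArray) // the extra copy under discussion

// Proposed (hypothetical): write directly from the backing array.
// def writeRawVal(array: Array[Byte], offset: Int, len: Int): Unit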
Andriy Plokhotnyuk
@plokhotnyuk
Do you mean def readRawValAsBytes(array: Array[Byte], offset: Int): Int = ??? // returns the length of a read chunk?
Would the java.nio.ByteBuffer type be a better option, adding the following methods: def writeRawVal(bytes: java.nio.ByteBuffer): Unit and def readRawValAsByteBuffer(bytes: java.nio.ByteBuffer): Unit?
Glen Marchesani
@fizzy33
definitely, java.nio.ByteBuffer would work quite well
in the internals I am in, I have little room to be squeamish about mutation ;-)
one layer up it is all pretty clean and immutable
well, let's say at the low layer it is immutable when everyone plays nicely together :-)
seems like some extends-AnyVal type around an Array[Byte], which gives you the data of the array but doesn't let it be mutated, would be really nice. I wonder if someone has done the work on such a thing
and thanks for sharing your thoughts on this.
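A sketch of that wrapper idea (ReadOnlyBytes is hypothetical, and note a Scala 2 value-class restriction that limits how far it can go):

// A value class over Array[Byte] exposing only read access. A Scala 2 value
// class must have exactly one public val parameter, so the backing array is
// still reachable (and mutable) through `underlying`; a regular wrapper
// class would be needed for true encapsulation.
final class ReadOnlyBytes(val underlying: Array[Byte]) extends AnyVal {
  def length: Int = underlying.length
  def apply(i: Int): Byte = underlying(i)
  def toArray: Array[Byte] = java.util.Arrays.copyOf(underlying, underlying.length)
}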
Marcin Szałomski
@baldram

Currently I'm going to check if it will be possible to minimize the derivation configuration by adding support for Enumeratum in some additional sub-project, or even in the jsoniter-scala-macros sub-project with an optionally scoped dependency on Enumeratum.
While EnumEntry from Enumeratum is supported [...]

@plokhotnyuk Hi Andriy! Have you had a chance to evaluate creating the Enumeratum-support jsoniter-scala-macros sub-project? Is it something in your current work scope, and is it doable based on the existing work from the past (https://github.com/lloydmeta/enumeratum/pull/176/files)?

Andriy Plokhotnyuk
@plokhotnyuk
@baldram Hi, Marcin! Thanks for your question and support! I think that support for Enumeratum's value enums can be added with a test-only scoped dependency on Enumeratum. We just need to add a CodecMakerConfig.useEnumeratumEnumValueId flag that turns on using the value from value fields as ids for parsing and serialization of objects that extend sealed traits, as was done for Scala enums here: plokhotnyuk/jsoniter-scala@f37dde6 Also, it can bring maximum performance because the Enumeratum API will not be used at all.
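For illustration, a sketch of the kind of Enumeratum value enum the proposed flag would cover (the Priority enum is hypothetical):

import enumeratum.values._

// An IntEnum carries a stable Int id per case; with the proposed
// CodecMakerConfig.useEnumeratumEnumValueId flag enabled, Priority.High
// would (de)serialize as the bare number 2, bypassing the Enumeratum API.
sealed abstract class Priority(val value: Int) extends IntEnumEntry

object Priority extends IntEnum[Priority] {
  case object Low  extends Priority(1)
  case object High extends Priority(2)
  val values = findValues
}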
Andriy Plokhotnyuk
@plokhotnyuk
I've added tests for Enumeratum enums: plokhotnyuk/jsoniter-scala@73ce8aa
Marcin Szałomski
@baldram
Oh, wow! What an answer :) Just looking at the commits now. Thank you.
This will be part of 2.8.2, I guess. Is it already scheduled? I would like to try out the new feature.
Marcin Szałomski
@baldram

@plokhotnyuk I have one more question, Andriy. Is it possible to add a @SuppressWarnings annotation to the generated code? Maybe as an option?
The macro causes a lot of Wartremover errors, so we need to ignore them like below.

@SuppressWarnings(Array("org.wartremover.warts.All"))

The problem is that it doesn't make sense to create a single file, let's say Codecs.scala, put all codecs there, and add wartremoverExcluded for the whole file in build.sbt.
In that case we would have to import the codecs manually, losing the advantage of an implicit codec.
If we have a codec inside a companion object, it is imported automatically, and other features relying on the codec also work automatically.

So everywhere in the code we add this linter ignore for JsonCodecMaker.make, in each place we use jsoniter.

object SomeFooBarEvent {
  @SuppressWarnings(Array("org.wartremover.warts.All")) // silencing generated code false-positive issues
  implicit val codec: JsonValueCodec[SomeFooBarEvent] = JsonCodecMaker.make
}

final case class SomeFooBarEvent(
  metadata: FooBarMetadata,
  request: FooBarRequest
)

Maybe such a SuppressWarnings annotation could already be generated, possibly behind some option?
Or is there another solution?

Andriy Plokhotnyuk
@plokhotnyuk
@baldram Could you please share the code and commands to reproduce your warnings? They are probably worth fixing immediately in the macro that generates the codec implementations. In the v2.7.1 release I fixed the "match may not be exhaustive" warning, and it is possible that the others are easy to fix too.
Marcin Szałomski
@baldram
The above code is enough; it already causes warnings if there is no @SuppressWarnings. Any use of JsonCodecMaker.make for any hello-world case class causes tons of errors when used with Wartremover's wartremoverErrors ++= Warts.all.
Then it is enough to run sbt clean compile from the console, and that's it. Shall I provide some example warnings?
Marcin Szałomski
@baldram

I'm not sure it's easy to fix those warnings, as null is prohibited, triple equals is required, and many other things.
Here is an example:

object FooBar {
  implicit val codec: JsonValueCodec[FooBar] = JsonCodecMaker.make
}
final case class FooBar(foo: Int, bar: Option[String])

...causes:

[warn] /mnt/foobar/model/FooBar.scala:11:63: [wartremover:Equals] != is disabled - use =/= or equivalent instead
[warn]   implicit val codec: JsonValueCodec[FooBar] = JsonCodecMaker.make
Now I will disable the errors one by one, e.g. @SuppressWarnings(Array("org.wartremover.warts.Equals")). Then the next will come, and the next, and the next...
So just after that, I get:
[warn] /mnt/foobar/model/FooBar.scala:12:63: [wartremover:Null] null is disabled
Marcin Szałomski
@baldram

So now I do:
@SuppressWarnings(Array("org.wartremover.warts.Equals", "org.wartremover.warts.Null"))
... and I get:

[warn] /mnt/foobar/model/FooBar.scala:12:63: [wartremover:OptionPartial] Option#get is disabled - use Option#fold instead

It looks like a never-ending story, so I end up adding @SuppressWarnings(Array("org.wartremover.warts.All")).
That's why it would be great to generate this SuppressWarnings already, as it is kind of a not-nice thing that we have to add in the code and excuse ;) in a comment.

If not generating SuppressWarnings, I was also thinking about implementing some custom annotation alias for jsoniter, like @SuppressJsoniterWarts, which would suppress all the errors under the hood.
Maybe that is also a solution: adding this short version of SuppressWarnings (@SuppressJsoniterWarts) would be at least a bit nicer.
The best way would be to have it generated already, or to have some configuration in build.sbt (or wherever the library is imported) that makes it generate those @SuppressWarnings(Array("org.wartremover.warts.All")) annotations. Not sure this is possible.

Marcin Szałomski
@baldram
Is my description of a problem clear enough?
Andriy Plokhotnyuk
@plokhotnyuk
Thank you, Marcin! It was clear and helpful! Here is a commit with the proposed solution: plokhotnyuk/jsoniter-scala@a550237
Could you please clone the repo with git clone --depth 1000 --branch master git@github.com:plokhotnyuk/jsoniter-scala.git, build and publish a snapshot version locally with sbt clean publishLocal, and then check whether the 2.8.2-SNAPSHOT version works for you?
Marcin Szałomski
@baldram
All right, I will do that in the evening; is that ok?
Andriy Plokhotnyuk
@plokhotnyuk
Ok, I've force-pushed a better solution that passes the internal tests with polluted namespaces: plokhotnyuk/jsoniter-scala@fa8b53f
Marcin Szałomski
@baldram
I've compiled a project with 2.8.2-SNAPSHOT, with all SuppressWarnings removed, and there are no linter errors :tada:
I will try it in the other project to double-check.