(array: Array[Byte], offset: Int, len: Int), then I can get zero copy on my side.
Is the java.nio.ByteBuffer type a better option, with adding the following methods: def writeRawVal(bytes: java.nio.ByteBuffer): Unit and def readRawValAsByteBuffer(bytes: java.nio.ByteBuffer): Unit?
An extends AnyVal type around an Array[Byte] to give you the data of the array but not have it be mutable would be really nice. I wonder if someone has done the work on such a thing.
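For reference, a minimal sketch of such a wrapper (the ImmutableBytes name and its methods are hypothetical, not part of jsoniter-scala): it exposes read-only access and only copies when a plain array is requested.

// Hypothetical sketch of a read-only, allocation-free (extends AnyVal) view over an Array[Byte].
final class ImmutableBytes(private val underlying: Array[Byte]) extends AnyVal {
  def length: Int = underlying.length
  def apply(i: Int): Byte = underlying(i)
  // Escape hatch: returns a defensive copy so the wrapped array stays effectively immutable.
  def toArray: Array[Byte] = java.util.Arrays.copyOf(underlying, underlying.length)
}

object ImmutableBytes {
  // Zero-copy wrapping; the caller promises not to mutate `bytes` afterwards.
  def unsafeWrap(bytes: Array[Byte]): ImmutableBytes = new ImmutableBytes(bytes)
}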
Currently I'm going to check whether it will be possible to minimize derivation configuration by adding support for Enumeratum in some additional sub-project, or even in the jsoniter-scala-macros sub-project with an optionally scoped dependency on Enumeratum.
While EnumEntry from Enumeratum is supported [...]
@plokhotnyuk Hi Andriy! Have you had a chance to evaluate creating the Enumeratum-support-related jsoniter-scala-macros sub-project? Is it something in your current work scope, and doable based on the existing work from the past (https://github.com/lloydmeta/enumeratum/pull/176/files)?
A CodecMakerConfig.useEnumeratumEnumValueId flag that turns on using the value from value fields as ids for parsing and serialization of objects which extend sealed traits, as it was done for Scala enums here: plokhotnyuk/jsoniter-scala@f37dde6. Also, it can bring maximum performance because the Enumeratum API will not be used at all.
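Until such a flag exists, a hand-written codec can already use the value field as the id via Enumeratum's value enums. A minimal sketch, assuming enumeratum's IntEnum/IntEnumEntry API (the helper name intValueEnumCodec is made up for illustration):

import com.github.plokhotnyuk.jsoniter_scala.core._
import enumeratum.values.{IntEnum, IntEnumEntry}

// Sketch: encode/decode a value enum by its numeric `value` field, bypassing name-based lookup.
def intValueEnumCodec[A <: IntEnumEntry](e: IntEnum[A]): JsonValueCodec[A] = new JsonValueCodec[A] {
  override def decodeValue(in: JsonReader, default: A): A = {
    val id = in.readInt()
    e.withValueOpt(id).getOrElse(in.decodeError(s"illegal enum value id: $id"))
  }
  override def encodeValue(x: A, out: JsonWriter): Unit = out.writeVal(x.value)
  override def nullValue: A = null.asInstanceOf[A]
}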
@plokhotnyuk I have one more question, Andriy. Is it possible to add a SuppressWarnings annotation to the generated code? Maybe as an option?
The macro causes a lot of Wartremover errors, so we need to ignore them like below:
@SuppressWarnings(Array("org.wartremover.warts.All"))
The problem is that it doesn't make sense to create a single file, let's say Codecs.scala, put all codecs there, and add wartremoverExcluded for the whole file from build.sbt.
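For reference, that per-file exclusion would look roughly like this in build.sbt (a sketch using Wartremover's wartremoverExcluded setting; the path is only an example):

// build.sbt: exclude a single file containing all the codecs from Wartremover checks
wartremoverExcluded += baseDirectory.value / "src" / "main" / "scala" / "Codecs.scala"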
In this case, we would have to import codecs manually, losing the advantage of implicit codecs.
If we have a codec inside a companion object, it is picked up automatically, and other features relying on this codec also work automatically.
So everywhere in the code we add this linter ignore for JsonCodecMaker.make, in each place where we use Jsoniter:
object SomeFooBarEvent {
  @SuppressWarnings(Array("org.wartremover.warts.All")) // silencing generated code false-positive issues
  implicit val codec: JsonValueCodec[SomeFooBarEvent] = JsonCodecMaker.make
}

final case class SomeFooBarEvent(
  metadata: FooBarMetadata,
  request: FooBarRequest
)
Maybe such a SuppressWarnings could already be generated, or enabled with some option? Or is there another solution?
Just sbt clean compile from the console and that's it. Shall I provide some example warnings?
I'm not sure it's easy to fix those warnings, as null is prohibited, triple equals is required, and there are many other things.
Here is an example:
object FooBar {
  implicit val codec: JsonValueCodec[FooBar] = JsonCodecMaker.make
}

final case class FooBar(foo: Int, bar: Option[String])
...causes:
[warn] /mnt/foobar/model/FooBar.scala:11:63: [wartremover:Equals] != is disabled - use =/= or equivalent instead
[warn] implicit val codec: JsonValueCodec[FooBar] = JsonCodecMaker.make
@SuppressWarnings(Array("org.wartremover.warts.Equals"))
. Then next will come, and next, and next...
[warn] /mnt/foobar/model/FooBar.scala:12:63: [wartremover:Null] null is disabled
So now I do: @SuppressWarnings(Array("org.wartremover.warts.Equals", "org.wartremover.warts.Null"))
... and I get:
[warn] /mnt/foobar/model/FooBar.scala:12:63: [wartremover:OptionPartial] Option#get is disabled - use Option#fold instead
It looks like a never-ending story, so I end up adding @SuppressWarnings(Array("org.wartremover.warts.All")).
That's why it would be great to generate this SuppressWarnings already, as it is kind of a not-nice thing that we need to add in the code and excuse ;) it in a comment.
If not generating SuppressWarnings, I was also thinking: how about implementing some custom annotation alias for Jsoniter, like @SuppressJsoniterWarts, which would suppress all the warts under the hood?
Maybe that is also a solution. Adding this short version of SuppressWarnings (@SuppressJsoniterWarts) would be at least a bit nicer.
The best way would be to have it generated already, or to have some configuration in build.sbt (or wherever the library is imported) that would generate those @SuppressWarnings(Array("org.wartremover.warts.All")) annotations. Not sure this is possible.
git clone --depth 1000 --branch master git@github.com:plokhotnyuk/jsoniter-scala.git, build and publish locally a snapshot version with sbt clean publishLocal, and then test whether the 2.8.2-SNAPSHOT version works for you?
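To test such a locally published snapshot, the dependency just needs to point at that version. A sketch for build.sbt, assuming the standard coordinates from the project README:

libraryDependencies ++= Seq(
  "com.github.plokhotnyuk.jsoniter-scala" %% "jsoniter-scala-core" % "2.8.2-SNAPSHOT",
  // the macros module is needed only at compile time for codec derivation
  "com.github.plokhotnyuk.jsoniter-scala" %% "jsoniter-scala-macros" % "2.8.2-SNAPSHOT" % Provided
)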
Is jsoniter-scala Scala 3 compatible? I thought it's not doable because of macros, and there is no hope for that.
I tried publishing jsoniter-scala-core with the 3.0.0-RC3 and 3.0.0 versions but failed both attempts due to some classpath problems with the sonatypeBundleRelease command: xerial/sbt-sonatype#230
null. However, in that scenario (at least I think that's what's happening) jsoniter will freak out because I believe it wants valid JSON, but is instead receiving null. And then you're met with this:
[info] org.eclipse.lsp4j.jsonrpc.ResponseErrorException: com.github.plokhotnyuk.jsoniter_scala.core.JsonReaderException: expected '{', offset: 0x00000000, buf:
[info] +----------+-------------------------------------------------+------------------+
[info] | | 0 1 2 3 4 5 6 7 8 9 a b c d e f | 0123456789abcdef |
[info] +----------+-------------------------------------------------+------------------+
[info] | 00000000 | 6e 75 6c 6c | null |
[info] +----------+-------------------------------------------------+------------------+
The way we are doing these requests is like:
case class Reload()

object Reload {
  implicit val codec: JsonValueCodec[Reload] =
    JsonCodecMaker.make
}
But is there another way to do this that would allow for null?
Use Option[Reload] instead, so null JSON values will be parsed to the None object reference. But if that is not an option for you, then some delegating custom codec can be created to wrap the one derived by the standard make:

def wrapByNullAllowingCodec[A](codec: JsonValueCodec[A]): JsonValueCodec[A] = new JsonValueCodec[A] {
  override def decodeValue(in: JsonReader, default: A): A =
    if (in.isNextToken('n')) {
      in.readNullOrError[String]("x", "expected value or null")
      nullValue
    } else {
      in.rollbackToken()
      codec.decodeValue(in, default)
    }

  override def encodeValue(x: A, out: JsonWriter): Unit =
    if (x == null) out.writeNull()
    else codec.encodeValue(x, out)

  override val nullValue: A = null.asInstanceOf[A]
}
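Applied to the Reload codec from the question above, the wrapper would be used roughly like this (a small sketch):

implicit val codec: JsonValueCodec[Reload] =
  wrapByNullAllowingCodec(JsonCodecMaker.make[Reload])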
wrapByNullAllowingCodec: I was actually just going down that path, so it's good to see that I was headed in the right direction.
I was wondering if I had:
case class Data(id: String, fields: Map[String, String], num: Int)

// How can I read in the following JSON:
{
  "id" : "FOO",
  "num": 456,
  "fields" : {
    "time" : 123,
    "bar" : "baz"
  }
}
I guess a custom codec? The Stringified annotation or config doesn't seem to apply here
final case class StringMapValue(underlying: String) extends AnyVal
final case class Data(id: String, fields: Map[String, StringMapValue], num: Int)
implicit val mapValueCodec: JsonValueCodec[StringMapValue] = new JsonValueCodec[StringMapValue]() {
  override def decodeValue(in: JsonReader, default: StringMapValue): StringMapValue = {
    val b = in.nextToken()
    if (b == '"') {
      in.rollbackToken()
      StringMapValue(in.readString(null))
    } else if ((b >= '0' || b <= '9') || b == '-') {
      in.rollbackToken()
      val d = in.readDouble()
      val i = d.toInt
      if (i.toDouble == d) {
        StringMapValue(i.toString)
      } else {
        StringMapValue(d.toString)
      }
    } else if (b == 'n') {
      in.rollbackToken()
      in.readNullOrError(null, "expected `null` value")
      nullValue
    } else if (b == 't' || b == 'f') {
      in.rollbackToken()
      StringMapValue(in.readBoolean().toString)
    } else {
      in.decodeError("expected JSON value")
    }
  }

  override def encodeValue(x: StringMapValue, out: JsonWriter): Unit = {
    out.writeVal(x.underlying)
  }

  override val nullValue: StringMapValue = StringMapValue(null)
}
This codec will serialize all fields values (except null) as strings and then will not be able to parse them. For parsed nulls it will just throw NPEs during encoding. Also, it has a couple of bugs in the parsing of numbers and null values. To avoid such problems you can define an ADT (or sum type) for fields values and use the following codec:

import com.github.plokhotnyuk.jsoniter_scala.macros._
import com.github.plokhotnyuk.jsoniter_scala.core._
object Example01 {
  sealed trait MapValue
  case class StringMapValue(value: String) extends MapValue
  case class BooleanMapValue(value: Boolean) extends MapValue
  case class DoubleMapValue(value: Double) extends MapValue
  case class IntMapValue(value: Int) extends MapValue
  case object NullMapValue extends MapValue

  case class Data(id: String, fields: Map[String, MapValue], num: Int)

  implicit val mapValueCodec: JsonValueCodec[MapValue] = new JsonValueCodec[MapValue] {
    override def decodeValue(in: JsonReader, default: MapValue): MapValue = {
      val b = in.nextToken()
      if (b == '"') {
        in.rollbackToken()
        StringMapValue(in.readString(null))
      } else if (b >= '0' && b <= '9' || b == '-') {
        in.rollbackToken()
        val d = in.readDouble()
        val i = d.toInt
        if (i.toDouble == d) IntMapValue(i)
        else DoubleMapValue(d)
      } else if (b == 't' || b == 'f') {
        in.rollbackToken()
        BooleanMapValue(in.readBoolean())
      } else if (b == 'n') {
        in.readNullOrError(default, "expected `null` value")
        NullMapValue
      } else in.decodeError("expected JSON value")
    }

    override def encodeValue(x: MapValue, out: JsonWriter): Unit = x match {
      case NullMapValue => out.writeNull()
      case StringMapValue(x) => out.writeVal(x)
      case BooleanMapValue(x) => out.writeVal(x)
      case IntMapValue(x) => out.writeVal(x)
      case DoubleMapValue(x) => out.writeVal(x)
    }

    override def nullValue: MapValue = NullMapValue
  }

  implicit val codec: JsonValueCodec[Data] = JsonCodecMaker.make

  def main(args: Array[String]): Unit = {
    val data = readFromArray[Data]("""{
      | "id" : "FOO",
      | "num": 456,
      | "fields" : {
      | "time" : 123,
      | "bar" : "baz",
      | "bool" : true,
      | "double" : 123.456890123,
      | "null" : null
      | }
      |}""".stripMargin.getBytes("UTF-8"))
    val json = writeToArray(Data(id = "BAR", num = 123,
      fields = Map[String, MapValue]("time" -> IntMapValue(123), "bar" -> StringMapValue("baz"))))
    println(data)
    println(new String(json, "UTF-8"))
  }
}
If fields values can be arbitrary JSON values and they could be mutable, then I would recommend to use Dijon's SomeJson for them, which uses jsoniter-scala's codec to parse and encode it.
If fields values do not need to be handled and just need to be passed back to the encoded JSON, then for the fields field use the RawVal type, like here.
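A minimal sketch of that pass-through variant, assuming the core RawVal type can be used as a field type for derivation, as in the linked example:

import com.github.plokhotnyuk.jsoniter_scala.core._
import com.github.plokhotnyuk.jsoniter_scala.macros._

// `fields` is captured as raw JSON bytes on parsing and written back verbatim on encoding.
case class PassThrough(id: String, fields: RawVal, num: Int)

implicit val passThroughCodec: JsonValueCodec[PassThrough] = JsonCodecMaker.make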