    Viktor Rudebeck
    @vlovgr
    Yes, it provides a mapping between Scala types and types supported by the Java Avro library.
    Ben Plommer
    @bplommer
    I've created https://gitter.im/fd4s/dev for discussion related to development of fd4s libraries - anyone interested should feel free to join
    Jacqueline Hubbard
    @JackieDev
    Hi, I have a problem: my codecs are throwing a scala.UninitializedFieldError, even though I can see that my codecs cover all nested models/data types. Has anyone else had this issue?
    Ben Plommer
    @bplommer
    (The problem Jacqueline had was the result of a codec defined in a val depending on another val defined further down in the same object - the answer was to replace vals with lazy vals)
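    The initialisation-order issue Ben describes can be reproduced without vulcan at all. In a plain Scala object, a val that refers to another val declared further down reads that field before it has been initialised (you get null, or a scala.UninitializedFieldError when compiling with -Xcheckinit), whereas lazy vals defer evaluation until first access:

```scala
object BrokenOrder {
  // `first` is initialised before `second`, so at this point
  // `second` still holds its default value: null.
  val first: String = second
  val second: String = "codec"
}

object FixedOrder {
  // lazy vals are evaluated on first access, so declaration
  // order no longer matters.
  lazy val first: String = second
  lazy val second: String = "codec"
}
```

    Here BrokenOrder.first is null while FixedOrder.first is "codec"; the same swap fixes codecs that reference each other across an object.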
    Filippo De Luca
    @filosganga

    @bplommer @vlovgr I am testing the vulcan evolution logic, and I have a question. It supports deserialising a ByteBuffer as a string, but it does not support Array[Byte] to string.

    I believe the rationale is that the Avro SDK uses ByteBuffer rather than Array[Byte] when deserialising.

    Is this correct? Would there be any benefit in also adding support for Array[Byte] (or IndexedSeq[Byte])?

    Ben Plommer
    @bplommer
    yeah, that's correct. There'd only be a benefit in supporting Array[Byte] if that's something the java lib outputs
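    For context, the shape of the problem: the Java Avro library hands back bytes fields as java.nio.ByteBuffer, so a decoder that also wanted to accept Array[Byte] would only need to normalise the two representations. A minimal sketch (asBytes is a hypothetical helper, not vulcan API):

```scala
import java.nio.ByteBuffer

// Hypothetical helper: accept either representation of Avro bytes
// and normalise to Array[Byte].
def asBytes(value: Any): Either[String, Array[Byte]] =
  value match {
    case bb: ByteBuffer =>
      val arr = new Array[Byte](bb.remaining)
      bb.duplicate.get(arr) // duplicate so the caller's position is untouched
      Right(arr)
    case arr: Array[Byte] =>
      Right(arr)
    case other =>
      Left(s"unexpected type: ${other.getClass.getName}")
  }
```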
    Filippo De Luca
    @filosganga
    Thanks, that makes sense to me
    Filippo De Luca
    @filosganga

    @bplommer As you can imagine, I am hacking on the vulcan evolution logic. There is one case that does not seem to be supported:

    If the reader schema is a union of A and B and the writer schema is just B, the reader should be able to read it.

    However, in vulcan it fails.

    There is a test in vulcan for it:
            it("should decode if schema is part of union") {
              assertDecodeIs[SealedTraitCaseClass](
                unsafeEncode[SealedTraitCaseClass](FirstInSealedTraitCaseClass(0)),
                Right(FirstInSealedTraitCaseClass(0)),
                Some(unsafeSchema[FirstInSealedTraitCaseClass])
              )
            }
    However, I believe unsafeEncode[SealedTraitCaseClass] here uses the schema for SealedTraitCaseClass rather than FirstInSealedTraitCaseClass
    Ben Plommer
    @bplommer
    Yeah, it would be. There's quite a confusing division of work between the java lib and the Vulcan code with regard to schema resolution. With the confluent deserializers used in fs2-kafka-vulcan the decoded record should already be adapted to match the reader schema before vulcan sees it
    So if you change that unsafeEncode[SealedTraitCaseClass] to unsafeEncode[FirstInSealedTraitCaseClass] it fails?
    Filippo De Luca
    @filosganga

    I know. Fabio and I have developed a library that needs Vulcan to apply the evolution logic, which is why I have been testing all the cases to find whether any are unsupported.

    So far I have found this one, and another on enums, but I am not sure about the enum one.

    So if you change that unsafeEncode[SealedTraitCaseClass] to unsafeEncode[FirstInSealedTraitCaseClass] it fails?
    Will try it now
    In case it fails, are you happy for me to open a PR?
    Ben Plommer
    @bplommer
    by all means
    I guess that's an internal ovo thing?
    Filippo De Luca
    @filosganga
    yes for the time being it is internal
    Ben Plommer
    @bplommer
    hmm, actually it shouldn't matter which schema you're encoding with - the encoding of an enum is just the encoding of the value type
    Filippo De Luca
    @filosganga
    In fact it does work as expected, my bad
    The failing case is another one :)
    if writer's is a union, but reader's is not
    If the reader's schema matches the selected writer's schema, it is recursively resolved against it. If they do not match, an error is signalled.
    Basically the reverse
    Ben Plommer
    @bplommer
    Ah right, yeah. So when we check the writer schema type, we need to always accept SchemaType.UNION
    Feel free to open a PR for that :)
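    The spec rule quoted above can be modelled in a few lines. This is a toy sketch of the two union cases (the names are illustrative, not vulcan's internals, and "the branch the writer selected" is approximated by requiring some branch to match):

```scala
// Toy model of Avro's union resolution rules.
sealed trait SchemaT
final case class Prim(name: String) extends SchemaT
final case class Union(branches: List[SchemaT]) extends SchemaT

def resolves(writer: SchemaT, reader: SchemaT): Boolean =
  (writer, reader) match {
    // Writer union, reader anything: the branch actually written must
    // resolve against the reader schema.
    case (Union(ws), r) => ws.exists(resolves(_, r))
    // Reader union, writer non-union: first matching branch wins.
    case (w, Union(rs)) => rs.exists(resolves(w, _))
    // Both non-union: the schemas must match.
    case (w, r)         => w == r
  }
```

    The failing case in the chat is the first pattern: a writer union should be accepted even when the reader schema is not a union.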
    Filippo De Luca
    @filosganga
    cool thanks!
    Ben Plommer
    @bplommer
    Oh btw @filosganga I remember some time back we were talking about re-implementing functionality from the Java lib so Vulcan wouldn't depend on it - I have a couple of WIP/experimental PRs up for aspects of that (one for representing schemas, one for scodec-based codecs)
    Filippo De Luca
    @filosganga
    It would be great
    Ben Plommer
    @bplommer
    This one is a separate module that replaces the encoding/decoding functionality while keeping the existing vulcan API (so uses the java schema representation) - fd4s/vulcan#289
    I think even if it doesn't become the preferred implementation, it would be good to have it for cross-validating the implementations against each other and clarifying our understanding
    I finally made some proper progress by starting from what we had, rather than trying to build a fully statically validated implementation from the ground up
    Fabio Labella
    @SystemFw
    oh, one quick win
    the schema registry client has a lot of synchronized for what's essentially a concurrent map and a couple of rest calls that need to be cached
    Ben Plommer
    @bplommer
    yeah, we have a ticket in fs2-kafka to reimplement that
    I feel like some of that fs2-kafka code should move to vulcan though - the thing of wiring up the serdes with the codecs to get the right schema resolution seems much more an avro thing than a kafka thing
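    The caching pattern Fabio describes can be sketched with a ConcurrentHashMap instead of coarse synchronized blocks (fetch here stands in for the registry's REST call; none of these names come from the real client):

```scala
import java.util.concurrent.ConcurrentHashMap

// Sketch: cache the result of an expensive lookup (e.g. a REST call
// to the schema registry) in a concurrent map, instead of guarding
// a plain map with `synchronized`.
final class CachedLookup[K, V](fetch: K => V) {
  private val cache = new ConcurrentHashMap[K, V]()

  // Atomic per key: concurrent callers for the same key trigger at
  // most one fetch, and callers for other keys are never blocked.
  def get(key: K): V = cache.computeIfAbsent(key, k => fetch(k))
}
```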
    Filippo De Luca
    @filosganga
    Uh, I am not sure about that. If vulcan's scope is just Avro, it has nothing to do with Kafka serdes and can be used in any other context. We use it with DynamoDB, for example
    Ben Plommer
    @bplommer
    What do people think about adding reader behaviour that's optionally non-compliant with the Avro spec? I'm thinking in particular about allowing a union field in a record to be decoded as a default value when it's present but not a type the reader recognises. So e.g. if the writer schema is [Foo, Bar, Baz] I can use [null, Foo] as a reader schema and decode as Option[Foo]. It really annoys me that Avro doesn't allow that.
    Keir Lawson
    @keirlawson
    Is it possible to use Vulcan to go from a GenericRecord to a specific implementation? As opposed to deserialising directly from bytes?
    Liu Yi
    @kinglywork

    Hey there, I got a question about the codec, say if I have a model like this:

    sealed abstract class Parent
    
    final case class Child1(property: AnyType) extends Parent
    final case class Child2() extends Parent

    there are no fields in Child2, so how can I write a codec for Child2?
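    One way this is commonly handled (a sketch from memory of vulcan's Codec.record builder, not verified against a specific version; the namespace is made up): since the builder returns a FreeApplicative, a record with zero fields can be built with pure.

```scala
import cats.free.FreeApplicative
import vulcan.Codec

// Sketch: an empty Avro record for a case class with no fields.
// `FreeApplicative.pure` supplies the value without declaring fields.
implicit val child2Codec: Codec[Child2] =
  Codec.record[Child2](
    name = "Child2",
    namespace = "com.example"
  )(_ => FreeApplicative.pure(Child2()))
```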

    Keir Lawson
    @keirlawson
    Why does Codec[A].encode return an Either? What are the possible failure modes?
    Viktor Rudebeck
    @vlovgr
    @keirlawson it basically stems from this issue: fd4s/vulcan#271
    Ben Plommer
    @bplommer
    @vlovgr speaking of, what are your thoughts on throwing when a valid schema can’t be instantiated?
    Viktor Rudebeck
    @vlovgr
    I think it might be okay until we can validate at compile-time.
    Nishant Vishwakarma
    @nishantv12
    I recently updated my version of vulcan-generic and I see that the schema getting generated for my sealed trait has changed. The change is in the order of records. This also affects the references of nested records. Is there a way to control this or prevent that change?
    Keir Lawson
    @keirlawson
    Is there a reason why there is a LocalDate codec included with Vulcan, but not a LocalTime codec? Is it because there are two possible logical encodings, millis vs micros?
    Ben Plommer
    @bplommer
    No reason. Feel free to open an issue or a PR - probably best to have localTimeMillis and localTimeMicros.
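    For reference, the two logical encodings differ only in resolution: Avro's time-millis is an int of milliseconds since midnight, and time-micros is a long of microseconds since midnight. The conversions from java.time.LocalTime are one-liners:

```scala
import java.time.LocalTime

// Avro time-millis: int, milliseconds since midnight.
def toMillis(t: LocalTime): Int = (t.toNanoOfDay / 1000000L).toInt

// Avro time-micros: long, microseconds since midnight.
def toMicros(t: LocalTime): Long = t.toNanoOfDay / 1000L
```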
    Keir Lawson
    @keirlawson
    :+1:
    Keir Lawson
    @keirlawson
    Would it be possible to cut a Vulcan release? Keen to port my code over to the new time codecs...