    Julian Peeters
    @julianpeeters
    thanks a ton @pascals-ager . I'll add a note about that to the readme
    Nick Aldwin
    @NJAldwin
    :wave: hello! We’re starting to use sbt-avrohugger. The plugin itself works great for generating Scala. However, I’m running into a confusing error.
    If I start with a clean, tests-passing tree of our master branch, without the plugin, then I add sbt-avrohugger to plugins.sbt with no other changes, suddenly the tests in my submodule are unable to resolve any dependencies
    that is, literally the diff is just the addSbtPlugin and nothing else, and suddenly tests are not happy
    Any ideas what could be going on here? I’m using 2.0.0-RC16 if that helps
    happy to open a ticket too but figured I could try here first
    repro:
    1. sbt the-submodule/clean the-submodule/test succeeds
    2. Add addSbtPlugin line to plugins.sbt (there are multiple other plugins already there, working fine)
    3. sbt the-submodule/clean the-submodule/test fails with no dependencies seemingly able to resolve
      (i.e. even scalatest isn't there, and we get value should is not a member of String for a "foo" should "bar" in {)
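    (For reference, step 2 amounts to a single line in project/plugins.sbt; the coordinates below are the plugin's usual ones, shown here as an illustration:)

        addSbtPlugin("com.julianpeeters" % "sbt-avrohugger" % "2.0.0-RC16")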
    Nick Aldwin
    @NJAldwin
    I did a little more digging, and it seems to work fine in RC15 — so this is due to some change between RC15 and RC16
    Julian Peeters
    @julianpeeters
    @NJAldwin wow, bizarre, thank you very much for the report. This makes me realize that avrohugger indeed doesn't have a test with submodules. I'll see if I can reproduce this.
    from RC15 to RC16, I tried to match java avro by adding a way to check the classpath during compilation -- something seems to have gone very wrong!
    I expect I can look more closely tomorrow
    Nick Aldwin
    @NJAldwin
    Thanks, I appreciate your looking into it!
    I’ve a mostly-unrelated (probably noob) question. I wrote a small IDL file, and avrohugger is able to use it to generate a case class. However, if I want to actually use Avro to deserialize, I have to load the schema separately, and avro cannot load IDLs. Am I missing something here, or does that mean that if I want to actually use Avro to serialize/deserialize to/from these case classes, I must either write the JSON schema directly or somehow build the avrohugger-tools jar into the build process? I really feel like I’m missing something important that’s making all of this much harder for myself
    Nick Aldwin
    @NJAldwin
    fwiw I think what I was missing above was that I had some incorrect information on generic records vs specific records
    I see now that generating specific records seems to do what I was looking for
    for some reason, some of the docs I was reading seemed to suggest that these were differences in the serialized format rather than just in avro’s java api
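    (A minimal sketch of the specific-record round trip referred to above, assuming a User case class generated with avrohugger's SpecificRecord format, which extends SpecificRecordBase and exposes its schema as User.SCHEMA$; the User class and its field are illustrative:)

        import java.io.File
        import org.apache.avro.file.{DataFileReader, DataFileWriter}
        import org.apache.avro.specific.{SpecificDatumReader, SpecificDatumWriter}

        // write: the generated class carries its own schema, so no separate .avsc load is needed
        val out = new File("users.avro")
        val writer = new DataFileWriter(new SpecificDatumWriter[User](User.SCHEMA$))
        writer.create(User.SCHEMA$, out)
        writer.append(User("Ada"))
        writer.close()

        // read back into the generated case class
        val reader = new DataFileReader(out, new SpecificDatumReader[User](User.SCHEMA$))
        while (reader.hasNext) println(reader.next())
        reader.close()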
    Julian Peeters
    @julianpeeters
    :thumbsup:
    Thanks again @NJAldwin, submodules should be working again, now in RC17
    Nick Aldwin
    @NJAldwin
    Thanks @julianpeeters ! Glad it doesn’t seem to have been too hard to track down

    By the way, are there any plans to support intermediate output of schema JSON files?

    Despite my misunderstandings above, given how we plan on using our schemas, it would still be really nice if the one compilation pipeline that’s already reading IDL and translating it to JSON could also dump that JSON into the jar’s resources or somewhere.

    A lot of libs (e.g. other languages’ avro support) don’t support IDL, and trying to plug avro-tools into the toolchain when avrohugger is already reading the files seems like an exercise in pain
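    (Until something like that lands in the plugin, a rough sketch of doing the dump yourself with Avro's own IDL parser from the avro-compiler artifact; the paths and file names are illustrative:)

        import java.io.File
        import java.nio.file.{Files, Paths}
        import org.apache.avro.compiler.idl.Idl
        import scala.jdk.CollectionConverters._

        // parse the .avdl and write each named type's JSON schema out as an .avsc resource
        val idl = new Idl(new File("src/main/avro/user.avdl"))
        val protocol = idl.CompilationUnit()
        for (schema <- protocol.getTypes.asScala) {
          val target = Paths.get(s"src/main/resources/${schema.getName}.avsc")
          Files.write(target, schema.toString(true).getBytes("UTF-8"))
        }
        idl.close()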

    Julian Peeters
    @julianpeeters
    @NJAldwin Plans, yes julianpeeters/sbt-avrohugger#28
    Time-frame, no.
    I'd be happy to maintain it if someone wanted to submit a PR tho
    but currently it's pretty far down on my queue
    adityanahan
    @adityanahan
    anyone using the avro c libraries here?
    seeing a crash inside the avro apis:

    #0 0x0000000001690608 in memcpy (n=37, src=0x7f058c2a88a0, dst=0x0)
        at /sources/sto/system/external/dpdk-18.05/x86_64-default-linuxapp-gcc/include/rte_memcpy.h:842
    #1 0x0000000001690608 in memcpy (n=37, src=0x7f058c2a88a0, dst=0x0)
        at /sources/sto/system/external/dpdk-18.05/x86_64-default-linuxapp-gcc/include/rte_memcpy.h:867
    #2 0x0000000001690608 in memcpy (str1=0x0, str2=0x7f058c2a88a0, n=37) at /sources/sto/system/lib/a10_dpdk/a10_dpdk_buffer.c:543
    #3 0x00007f0cbc9ae99e in avro_raw_string_set (str=0x7efc135524e8, src=0x7f058c2a88a0 "ba174a18-65a8-11e9-a6f2-3516e7b6bcbf")
    avro_raw_string_set -->
    Nick Aldwin
    @NJAldwin
    speaking of Avro issues
    apparently unions of logical types don’t work with the Java serialization?
    we’ve developed a schema with a union of null and a decimal logical type
    which avrohugger is able to correctly turn into Option[BigDecimal]
    but it seems that, due to this bug, Avro’s Java implementation cannot handle avro files when such a field is present: https://issues.apache.org/jira/browse/AVRO-1891
    has anyone else run into this, and maybe found a workaround?
    Nick Aldwin
    @NJAldwin
    (there are the obvious workarounds of using a second boolean “presence” field, or using some sentinel value, or just using double (with the associated potential loss of precision), but I’m hoping maybe someone has found something I haven’t thought of here?)
    Julian Peeters
    @julianpeeters
    @adityanahan sorry, no experience personally. However, I recall that avro's java idl parser doesn't support uuid yet, while the java schema parser does. I wonder if the avro c libraries' parsers support it yet? You may like to troubleshoot by replacing uuid type with string type in your idl or schema
    @NJAldwin dang, that's unfortunate. Until that java avro codegen bug is fixed, I don't see a way to share the schema with a java team. I wonder if it may be worth considering losing a little type safety in your schema, and using string type instead of decimal.
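    (A sketch of that trade-off, assuming the schema declares the field as union { null, string } so avrohugger generates Option[String], with the conversion done at the edges; the names are illustrative:)

        import scala.util.Try

        // encode at the serialization boundary instead of relying on the decimal logical type
        def encodeAmount(amount: Option[BigDecimal]): Option[String] =
          amount.map(_.underlying.toPlainString) // full precision, kept as text

        def decodeAmount(raw: Option[String]): Option[BigDecimal] =
          raw.flatMap(s => Try(BigDecimal(s)).toOption)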
    Tibo Delor
    @t-botz
    Hi there, wondering whether there’s an example of how to have nested types.
    I am trying to generate something like:
    case class Address(street: String)
    case class User(name: String, address1: Address,  address2: Address)
    I have tried putting all my types into one avsc as suggested here, but it fails with Unions, beyond nullable fields, are not supported.
    Tibo Delor
    @t-botz
    Aight, I figured it out: you just have to declare the types in separate files and reference the nested one as "type": "myPackage.Address"
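    (A sketch of that layout for the User/Address example above; the namespace is illustrative. Address.avsc:)

        {
            "type": "record",
            "namespace": "myPackage",
            "name": "Address",
            "fields": [
                { "name": "street", "type": "string" }
            ]
        }

    (User.avsc, referencing the already-declared type by its full name:)

        {
            "type": "record",
            "namespace": "myPackage",
            "name": "User",
            "fields": [
                { "name": "name", "type": "string" },
                { "name": "address1", "type": "myPackage.Address" },
                { "name": "address2", "type": "myPackage.Address" }
            ]
        }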
    Julien Truffaut
    @julien-truffaut
    Hi all, I am getting an ArithmeticException: Rounding necessary when I try to serialize a BigDecimal
    I used a field with union {null, decimal(20, 8)} foo;
    Julien Truffaut
    @julien-truffaut
    so I solved it by manually scaling my BigDecimal to the precision I used in the schema, e.g. bigDecimal.setScale(8, RoundingMode.HALF_DOWN)
    it would be great if we could specify the RoundingMode
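    (A sketch of that pre-scaling step, assuming the schema's decimal(20, 8) scale of 8 as above:)

        import scala.math.BigDecimal.RoundingMode

        // scale the value to match the schema before serializing; otherwise Avro's
        // conversion throws ArithmeticException: Rounding necessary
        def toSchemaScale(value: BigDecimal): BigDecimal =
          value.setScale(8, RoundingMode.HALF_DOWN)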
    Julian Peeters
    @julianpeeters
    Thanks for the report @julien-truffaut , I'll take a look
    Nick Aldwin
    @NJAldwin
    This is a long shot, but anyone in here using sbt/sbt-avro? It seems to have just up and vanished from GitHub… https://github.com/sbt/sbt-avro
    (we use it in concert with sbt-avrohugger to generate java and scala from avro schemas)
    Nick Aldwin
    @NJAldwin
    following on from that ^ it turns out the maintainer accidentally deleted it, and it has since been restored (albeit with all of its issues deleted)
    Jeff Kennedy
    @valesken

    Hello! I was just trying out avrohugger (looking to introduce it generally to the organization), but am running into a rather confusing error when I try to generate code from a schema with enums. Do you have any advice?

    val schemaStr =
        """
          |{
          |    "type": "record",
          |    "name": "Employee",
          |    "fields": [
          |        {
          |            "name": "Gender",
          |            "type": "enum",
          |            "symbols": [ "male", "female", "non_binary" ]
          |        }
          |    ]
          |}
          |""".stripMargin
    val myScalaTypes = Some(Standard.defaultTypes.copy(enum = EnumAsScalaString))
    val generator = Generator(Standard, avroScalaCustomTypes = myScalaTypes)
    generator.stringToFile(schemaStr)

    Which throws:

    ToolBoxError: reflective compilation has failed:
    identifier expected but string literal found.
        ...
    Jeff Kennedy
    @valesken
    Solved it by just creating a separate namespaced Gender enum and setting the field type in the Employee record to "Gender"
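    (A sketch of the fix described: Gender is declared as its own named type, and the field references it by full name; the namespace is illustrative. A field's "type" must be a complete type definition or a reference to one, which is why the bare "type": "enum" with field-level "symbols" above failed to parse:)

        {
            "type": "enum",
            "name": "Gender",
            "namespace": "com.example",
            "symbols": [ "male", "female", "non_binary" ]
        }

        {
            "type": "record",
            "name": "Employee",
            "namespace": "com.example",
            "fields": [
                { "name": "Gender", "type": "com.example.Gender" }
            ]
        }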