nafg
@nafg
Mainly in a property test using Nyaya
Now I have
  val genJsSafeLong = Gen.chooseLong(-9007199254740991L, 9007199254740991L)
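For context, those bounds are ±Number.MAX_SAFE_INTEGER: doubles carry a 53-bit significand, so every integer up to 2^53 - 1 is exactly representable in JS. A quick sanity check (the value name here is made up):
  // 2^53 - 1: above this, consecutive Longs stop being distinguishable as Doubles
  val MaxJsSafeLong = (1L << 53) - 1
  assert(MaxJsSafeLong == 9007199254740991L)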
Rob Norris
@tpolecat
We ran into it with timestamps.
GraphQL has real numeric types, which is nice.
nafg
@nafg
I mean in my case it's only an issue because Circe for scala.js relies on JSON.parse
IIRC upickle doesn't have that issue
i.e. there's nothing stopping you from parsing JSON in a scala-friendly way in scala.js. It will probably be slower, of course
Oh right -- upickle avoids it by representing Longs as strings
it doesn't use JSON numbers for longs
You can customize circe to do that also
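A minimal sketch of that customization, assuming the implicits are in scope wherever your codecs are derived (the names are made up):
  import io.circe.{Decoder, Encoder}

  // represent Longs as JSON strings so JS consumers can't silently lose precision
  implicit val longAsStringEncoder: Encoder[Long] =
    Encoder.encodeString.contramap(_.toString)

  implicit val longAsStringDecoder: Decoder[Long] =
    Decoder.decodeString.emap(s => s.toLongOption.toRight(s"Not a Long: $s"))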
Rob Norris
@tpolecat
:musical_note: string, string, string, string, everybody loves string
nafg
@nafg
It's better than bytes...
sometimes, anyway
Rob Norris
@tpolecat
could be worse
nafg
@nafg
I mean JSON is a string representation
Li Haoyi
@lihaoyi
you can actually tell uPickle to use numbers to represent longs; nowadays we have our own JSON parser written in scala.js that is competitive with JSON.parse
not necessarily a good idea though; who knows what other non-uPickle JSON libraries you may be interacting with that will truncate your longs by pushing them through 64-bit doubles (which only hold 53 bits of integer precision)
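If you do want numeric Longs from uPickle, one possible sketch (not necessarily the built-in switch mentioned above) is a custom ReadWriter routed through ujson; you would pass it explicitly or otherwise give it priority over the default Long pickler:
  import upickle.default._

  // writes Longs as JSON numbers by going through Double -- only safe below 2^53
  val numericLongRW: ReadWriter[Long] =
    readwriter[ujson.Value].bimap[Long](n => ujson.Num(n.toDouble), _.num.toLong)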
M Karthick
@mkartic
I've recently run into a file at work which is just a C struct instance dumped out as raw bytes. How would I go about parsing this in scala?
Something like struct { char a; char b[5]; } will end up in the file as 'ABB$$$' (where $ denotes NUL).
I can probably read the file into a byte array, and convert it by hand. Is there a better way?
nafg
@nafg
Better by what metric?
How much of this do you need to do?
M Karthick
@mkartic
@nafg Better as in, whatever I've come up with sounds like the most naive approach possible. Wondering if there might be a more elegant way to do this.
Fairly often, as we have to parse this byte file on a regular basis.
nafg
@nafg
Right but is it just this one file and this one struct format? And how complex is it?
M Karthick
@mkartic
It's just this one struct format. But there are about 10 fields of varying lengths.
All the fields are primitives, either strings or ints.
nafg
@nafg
The simplest thing that comes to mind that might make it a bit higher-level is DataInputStream, but I'm not sure it will necessarily do the right thing.
The full-on high-level solution would probably be scodec
I'm just not sure if bringing it in would be overkill for your scenario
Also, you can just make your own code nicer; that can go a long way in scala
Do you want to share it?
One technique is to use recursion and pattern matching, easiest if you make it into a List[Byte]
Then you can use :: or your own extractors
ByteBuffer is another JDK class that can help with pulling primitives out of bytes
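A rough ByteBuffer sketch for a layout like the one above (the field names, the extra int field, and the byte order are assumptions, and any alignment padding the C compiler added would need accounting for):
  import java.nio.{ByteBuffer, ByteOrder}

  // e.g. struct { char a; char b[5]; } plus a hypothetical int field
  case class Record(a: Byte, b: String, n: Int)

  def parseRecord(bytes: Array[Byte]): Record = {
    val buf = ByteBuffer.wrap(bytes).order(ByteOrder.LITTLE_ENDIAN)
    val a = buf.get()                               // char a
    val bRaw = new Array[Byte](5)
    buf.get(bRaw)                                   // char b[5], NUL-padded
    val b = new String(bRaw, "US-ASCII").takeWhile(_ != '\u0000')
    val n = buf.getInt()                            // the hypothetical int
    Record(a, b, n)
  }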
M Karthick
@mkartic
That one seems promising!
I'll share my code once I've written it. :)
nafg
@nafg
oh sorry misunderstood. ByteBuffer sounds like a good first thing to try
Rob Norris
@tpolecat
The right way to parse and produce binary data is a library called scodec.
Oh @nafg already mentioned that.
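For the struct above, a scodec sketch might look roughly like this (scodec 1.x-style syntax; the record shape, names, and endianness are assumptions):
  import scodec._
  import scodec.bits.BitVector
  import scodec.codecs._

  case class Record(a: Byte, b: String, n: Int)

  // byte for char a, exactly 5 ASCII bytes for char b[5] (NUL padding kept), little-endian int
  val recordCodec: Codec[Record] =
    (byte :: fixedSizeBytes(5, ascii) :: int32L).as[Record]

  // recordCodec.decode(BitVector(bytes)) gives an Attempt with the Record and any leftover bits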
Soren
@srnb_gitlab
@Ichoran because
  • the people in this group have this weird notion that llvm is trash
  • they make it so the lib literally only works with gnu compilers
  • they say "just use C++ we already have a prebuilt Makefile"
  • the project was already in C++ by the person who started it, I just made PRs 1-3
felher
@felher
Hey folks. Any tips on how to deal with large-scale refactorings? I normally have sbt on ~compile to see errors while I'm coding. But once in a while a big refactoring comes along with no real way to do it incrementally. Then every compile run takes forever, since the whole project needs to be recompiled, and I'm drowning in error messages. Is there a way to tell sbt "Just compile file X.scala and all its dependencies", so I can go file by file?
felher
@felher
Just opened it in Intellij to see if it's easier there, but Intellij marks everything red before I even start the refactoring, so that's probably not a good sign :D
Intellij works pretty well if everything is in a final tagless style, though. :)
felher
@felher
I'm currently doing the same as here: https://stackoverflow.com/questions/14160677/how-to-compile-single-file-in-sbt , but that's cumbersome as well
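For reference, the workaround in that thread boils down to temporarily narrowing Compile / sources, either in build.sbt or via set in the sbt shell (file name hypothetical); note it only compiles that one file against already-compiled classes, so its dependencies have to be green first:
  // temporarily compile only X.scala; drop the setting (or `session clear`) when done
  Compile / sources := (Compile / sources).value.filter(_.getName == "X.scala")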
Rob Norris
@tpolecat
vs-code with metals works well for me.
Rob Norris
@tpolecat
Modularity helps with refactoring. Break the code into a handful of sub-projects so you can fight the fire in layers.
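A sketch of that layering in build.sbt (project names are made up):
  // each layer compiles on its own, so core can be fixed before touching its dependents
  lazy val core    = project.in(file("core"))
  lazy val service = project.in(file("service")).dependsOn(core)
  lazy val app     = project.in(file("app")).dependsOn(service)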
felher
@felher
Yeah, creating multiple projects and fixing one after another might work. Thanks :)
Gavin Bisesi
@Daenyth
@mkartic this sounds like a good use case for scodec
Ethan
@esuntag
Out of curiosity, does scodec have built-in support for run-length encoding, or would I need a custom codec for that?