Michael Ramos
@backnotprop
oh
that was it
asUint8Array().slice(0) is needed for nested buffers and for sending the data
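A minimal TypeScript sketch of that fix, assuming the official flatbuffers JS package and a hypothetical generated Message table: asUint8Array() returns a view into the builder's internal buffer, so slice(0) copies the bytes before nesting them into another buffer or sending them.

```typescript
import * as flatbuffers from 'flatbuffers';
import { Message } from './message_generated'; // hypothetical generated code

const builder = new flatbuffers.Builder(1024);
const payload = builder.createString('hello');
Message.startMessage(builder);
Message.addPayload(builder, payload);
builder.finish(Message.endMessage(builder));

// asUint8Array() is only a view into the builder's internal buffer;
// slice(0) makes an independent copy that stays valid after the builder
// is reused and is safe to nest or send over the wire.
const bytes = builder.asUint8Array().slice(0);
```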
William
@imWillX
Hello,
I am trying to access tables in a FlatBuffers schema that are not the root type. Is this supported?
MikkelFJ
@mikkelfj
It depends on the target language, but I believe most languages support having any table as the root table in the buffer, if that is what you mean. The root declaration in the schema is mostly for JSON parsing where the parser must know what table to parse as root. In FlatCC for C, you can also parse other tables as root, but there is more direct support for the declared root.
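For illustration, a TypeScript sketch under that assumption (hypothetical Weapon table that is not the declared root_type): any table that was finished as the buffer root can be read back through its own getRootAs helper, as long as the writer and reader agree on which table sits at the root.

```typescript
import * as flatbuffers from 'flatbuffers';
import { Weapon } from './monster_generated'; // hypothetical: the schema declares a different root_type

// Writing: finish the buffer with Weapon as its root.
const builder = new flatbuffers.Builder();
const name = builder.createString('axe');
Weapon.startWeapon(builder);
Weapon.addName(builder, name);
builder.finish(Weapon.endWeapon(builder));

// Reading: use the matching generated root accessor.
const bb = new flatbuffers.ByteBuffer(builder.asUint8Array());
const weapon = Weapon.getRootAsWeapon(bb);
```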
Konstantin Bläsi
@konstantinblaesi
I am trying to compare buffer/payload sizes of FlatBuffers vs JSON, basically trying to see how much I save in wire transfer size. I came up with this code after reading a Stack Overflow post on how to parse/serialize from/to JSON: https://dpaste.org/BtXL According to that, the JSON / FlatBuffers size ratio is 1.66237. Does what I am doing make sense? I was expecting a bigger difference after seeing https://google.github.io/flatbuffers/flatbuffers_benchmarks.html, but what do I know :)
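One way to run that comparison in TypeScript, as a sketch with a hypothetical Measurement table (the JSON side should be measured in encoded bytes, not string length):

```typescript
import * as flatbuffers from 'flatbuffers';
import { Measurement } from './measurement_generated'; // hypothetical generated code

const record = { sensor: 'temp-01', unit: 'celsius', value: 58096.25652706856 };

// JSON size: measure the UTF-8 encoded bytes.
const jsonBytes = new TextEncoder().encode(JSON.stringify(record)).byteLength;

// FlatBuffers size: build the equivalent buffer and measure it.
const builder = new flatbuffers.Builder();
const sensor = builder.createString(record.sensor);
const unit = builder.createString(record.unit);
Measurement.startMeasurement(builder);
Measurement.addSensor(builder, sensor);
Measurement.addUnit(builder, unit);
Measurement.addValue(builder, record.value);
builder.finish(Measurement.endMeasurement(builder));
const fbBytes = builder.asUint8Array().byteLength;

console.log('JSON / FlatBuffers size ratio:', jsonBytes / fbBytes);
```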
Wouter van Oortmerssen
@aardappel
@konstantinblaesi that really depends on your data... the bulk of that data is two strings and a double. The strings are by themselves slightly larger in FlatBuffers, but you save not having to write the key or any of the formatting around it. The double is always 8 bytes in FlatBuffers; in your JSON example its text form is sometimes shorter (0) and sometimes longer (58096.25652706856), and again you save the keys etc.
You could turn the double into a float, but that's not a big saving. The bigger win is to turn unit from a string into an enum, making it as small as 1 byte.
Konstantin Bläsi
@konstantinblaesi
@aardappel thanks for the quick feedback and insights :)
MikkelFJ
@mikkelfj
If you have gzip-compressed JSON, you can generally assume that the JSON will be larger than gzip-compressed FlatBuffers by a factor of 2 for smaller messages - say below 1K - but for messages above 100K, gzip-compressed JSON will be smaller than gzip-compressed FlatBuffers by a factor of 2. All of this will vary with payload and schema. For uncompressed data, JSON tends to be larger - I’m guessing a factor of 4-10 for small messages. The reason is that JSON keywords take up space and FlatBuffers pointers take up space, but keywords compress better when there are many of them.
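The compressed case is easy to measure directly; a small Node.js sketch, assuming you already have the same record serialized both ways as jsonString and fbBytes:

```typescript
import { gzipSync } from 'zlib';

// Assumed inputs: the same record serialized as JSON text and as a FlatBuffer.
declare const jsonString: string;
declare const fbBytes: Uint8Array;

const gzJson = gzipSync(Buffer.from(jsonString, 'utf8')).byteLength;
const gzFlat = gzipSync(Buffer.from(fbBytes)).byteLength;

console.log(`gzip JSON: ${gzJson} bytes, gzip FlatBuffers: ${gzFlat} bytes`);
```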
Michael Ramos
@backnotprop
@mikkelfj so upon exiting inter-service processing, it might be best to send gzipped JSON to a client that intends to immediately parse it as JSON, correct? Or could there still be better efficiencies in deserializing a FlatBuffer into native objects and skipping JSON entirely (even in JS)? I guess what would matter here is FlatBuffers vector iteration efficiency vs. array reference/copy assignments.
MikkelFJ
@mikkelfj
I’m not sure what you mean by inter-service processing - if you mean internal processing vs. public API, then I think it often makes most sense to use HTTPS gzip’ed JSON on the API but FlatBuffers internally. I work with MQTT interfaces that are uncompressed FlatBuffers both internally and externally via a browser MQTT connection. If you use JSON, you can parse it very quickly into FlatBuffers using FlatCC generated parsers (and printers for the opposite direction). However, using FlatBuffers over MQTT is more concerned with processing speed and overhead than with size. If you are bandwidth sensitive (such as paying for traffic), or you need to adhere to public conventions (REST API), then gzip’ed JSON might make sense, but you need to test for size.
Also, FlatBuffers over MQTT from the browser is not necessarily for performance reasons, but to avoid having an additional gateway, and to use the FlatBuffers schema vs. looser JSON. You do need some access control and validation though; JSON parsing is more robust against abuse.
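As a sketch of that browser setup, assuming the mqtt.js client and a hypothetical generated Telemetry table: the FlatBuffer bytes travel as the raw MQTT payload and are read in place, with no JSON step.

```typescript
import mqtt from 'mqtt';
import * as flatbuffers from 'flatbuffers';
import { Telemetry } from './telemetry_generated'; // hypothetical generated code

// Browser MQTT connection over websockets; the broker URL is an assumption.
const client = mqtt.connect('wss://broker.example.com:9001');

client.subscribe('sensors/telemetry');
client.on('message', (_topic, payload) => {
  // payload is the raw FlatBuffer; wrap it and read fields without copying.
  const bb = new flatbuffers.ByteBuffer(new Uint8Array(payload));
  const msg = Telemetry.getRootAsTelemetry(bb);
  console.log(msg.value());
});
```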
MikkelFJ
@mikkelfj
As to (de)serializing to native objects, that very much depends on the use case - but generally that would be a lot of difficult-to-maintain code that is not very portable, and likely not faster. But it depends on the language and use case. You can also find techniques that are even faster or smaller than FlatBuffers, especially when creating buffers, but FlatBuffers strikes the balance of schema evolution, size, and speed pretty well. You would lose some of that with a custom to-native framework.
If you want smaller, use Protocol Buffers; if you want faster, use FastBuffers or SBE, but these make other trade-offs. If you want something more dynamic and schemaless, use MessagePack or JSON. In my view JSON, compressed JSON, and FlatBuffers make the most sense in most cases, especially JSON that is compatible with FlatBuffers so it can be converted.
Michael Ramos
@backnotprop
"internal processing vs. public API" ah yea, better/simpler way of explaining. I create and send flatbuffers into internal processing that is first balanced, parallelly, across workers (nodes/cores) (all receiving original FB but calculated split assignment for processing), and then an aggregated bulk/large json package is created and sent back to the client. Flatbuffers already gave us serious performance gains in the processing and especially because we could balance between available workes without reconstructing. Thanks for the feedback.
we require optimized performance and FB is definitely giving us that. Just looking into further improvements now on the client-side.
MikkelFJ
@mikkelfj
I’d consider MQTT and FlatBuffers client side if you control everything. But you need to consider the traffic cost vs. performance tradeoff. Mind you - parsing and printing JSON via FlatCC is only 4 times slower than creating a buffer and 40 times slower than reading a buffer (parsing would require reading the buffer subsequently). FlatBuffers reading is 2x slower than the fastest possible native data structure access. These are rough figures.
Note that MQTT adds an extra hop, and thus added latency. You can reduce that by hosting the broker on the same machine as the server endpoint. But it is very flexible, fast, and robust.
Kubernetes can also add extra hops BTW - use externalTrafficPolicy to avoid that.
Michael Ramos
@backnotprop
Good read/call on K8s, will look into it. & yeah, we made those considerations in the processing layers, except flatcc/json ... interesting ... because one hard bottleneck on the processing side is list & numpy array construction & concat (tf inputs).
MikkelFJ
@mikkelfj
There is the Arrow format which uses FlatBuffers for metadata - more suited for large volumes of similar data and with Python support - but not sure how mature.
Wouter van Oortmerssen
@aardappel
Ok, from now on we'll have flatc binaries on every commit thanks to github actions: https://github.com/google/flatbuffers/actions/runs/96547077
MikkelFJ
@mikkelfj
are there limits on how long they are stored?
Wouter van Oortmerssen
@aardappel
I don't actually know... I know it is limited, not sure if it's just a FIFO thing
but yeah, that means you probably don't want to hard-link to any of them, instead just go find the latest
Ori Popowski
@oripwk

Hi, I'm struggling with building a FlatBuffer which has a field of string array. Can someone please help? I've posted my question here:
https://groups.google.com/forum/#!topic/flatbuffers/DUp-QYvF9hA

Thanks!

Ori Popowski
@oripwk
Hi! I've solved the problem by translating this JavaScript code to Java line by line:
https://stackoverflow.com/q/46043360/1038182
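For reference, the JavaScript/TypeScript pattern behind that answer, with a hypothetical Album table that has a songs: [string] field: the strings are created first, and the vector is built from their offsets.

```typescript
import * as flatbuffers from 'flatbuffers';
import { Album } from './album_generated'; // hypothetical generated code

const builder = new flatbuffers.Builder();

// Strings (like all reference types) must be created before the table that
// uses them; the vector then holds their offsets.
const songOffsets = ['intro', 'outro'].map((s) => builder.createString(s));
const songsVector = Album.createSongsVector(builder, songOffsets);

Album.startAlbum(builder);
Album.addSongs(builder, songsVector);
builder.finish(Album.endAlbum(builder));
```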
Ori Popowski
@oripwk

Okay, after I solved this issue, I'm having another problem, and this time I cannot build a FlatBuffer from a byte-array. It's described here: https://groups.google.com/forum/#!topic/flatbuffers/gFOBEtvnO48.

Any help will be much appreciated! :)
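For anyone hitting the same thing, the usual way to get a table back out of a raw byte array (TypeScript sketch, reusing the hypothetical Album table) is to wrap the bytes in a ByteBuffer and use the generated root accessor:

```typescript
import * as flatbuffers from 'flatbuffers';
import { Album } from './album_generated'; // hypothetical generated code

// bytes might come from a file, a socket, or another service.
declare const bytes: Uint8Array;

const bb = new flatbuffers.ByteBuffer(bytes);
const album = Album.getRootAsAlbum(bb);
console.log(album.songsLength());
```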

Wouter van Oortmerssen
@aardappel
Matthias Vallentin
@mavam
Awesome!
I provided some details on my question about FlexBuffers vs MsgPack: https://news.ycombinator.com/item?id=23598512
Wouter van Oortmerssen
@aardappel
@mavam cool, replied there :)