I am going to need help with that to have an impartial benchmark. The only thing I need from CI is assurance that my changes do not make existing paths slower relative to the previous build.
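A minimal sketch of what such a relative regression gate could look like: compare the current build's benchmark scores against the previous build's and fail on any slowdown beyond a tolerance. The file names, the `name=ops/sec` line format, and the 5% threshold are all hypothetical, not part of any existing CI setup.

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.*;

public class RegressionGate {
    // load "benchmarkName=opsPerSecond" lines into a map
    static Map<String, Double> load(Path p) throws IOException {
        Map<String, Double> scores = new HashMap<>();
        for (String line : Files.readAllLines(p)) {
            if (line.isBlank()) continue;
            String[] kv = line.split("=");
            scores.put(kv[0].trim(), Double.parseDouble(kv[1].trim()));
        }
        return scores;
    }

    public static void main(String[] args) throws IOException {
        Map<String, Double> base = load(Paths.get("baseline.txt")); // previous build
        Map<String, Double> curr = load(Paths.get("current.txt"));  // this build
        boolean ok = true;
        for (Map.Entry<String, Double> e : base.entrySet()) {
            Double now = curr.get(e.getKey());
            if (now == null) continue; // benchmark removed or renamed
            // fail the build if throughput dropped more than 5% vs. previous build
            if (now < e.getValue() * 0.95) {
                System.err.printf("REGRESSION %s: %.0f -> %.0f ops/s%n",
                        e.getKey(), e.getValue(), now);
                ok = false;
            }
        }
        System.exit(ok ? 0 : 1);
    }
}
```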
I had a look at that parser before writing my flat-file import. Not bad, but too complex and slow for what I needed. The nfsdb parser is twice as fast as univocity on the very same file.
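For anyone wanting to check a claim like this themselves, here is a rough JMH sketch that times one side of such a comparison. The univocity calls are its real public API; the nfsdb side is omitted because it would need the project's own import classes, and the synthetic input (100k rows of made-up tick data) is purely illustrative.

```java
import java.io.StringReader;
import java.util.List;
import java.util.concurrent.TimeUnit;

import com.univocity.parsers.csv.CsvParser;
import com.univocity.parsers.csv.CsvParserSettings;

import org.openjdk.jmh.annotations.*;

@State(Scope.Benchmark)
@BenchmarkMode(Mode.Throughput)
@OutputTimeUnit(TimeUnit.SECONDS)
public class CsvParseBench {
    String csv;

    @Setup
    public void setup() {
        // build one synthetic CSV in memory so every parser sees identical input
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 100_000; i++) {
            sb.append(i).append(",EURUSD,").append(1.1 + i * 1e-6).append('\n');
        }
        csv = sb.toString();
    }

    @Benchmark
    public List<String[]> univocity() {
        // parse the whole file into rows of column values
        CsvParser parser = new CsvParser(new CsvParserSettings());
        return parser.parseAll(new StringReader(csv));
    }
}
```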
This is a very useful idea; in fact, a friend of mine is doing a very similar project for a bank. It is very useful to integrate legacy data sources under a single query system. That said, what I'm doing is slightly different. Calcite's query system simply would not do for my project, for three reasons. First, it does not offer functionality beyond what you get from the individual databases; it looks more like the overlap between the functionality of the data sources it supports (check what kind of query functionality Splunk provides vs. Calcite). Second, pick a source file on the Calcite GitHub and search for uses of the "new" operator: there are far too many allocations for what I'm building. Third, the name sounds strange (https://en.wikipedia.org/wiki/Calcite): what does it have to do with either querying or integration? ;)
Maybe one day somebody will honour my project by writing an adaptor for Calcite? :smile:
Came back from holiday today :) I found that I need a rewritable in-memory structure for some functions, as-of join being one. I'm writing and testing that now. It'll enable some exciting query capabilities once done.
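For readers unfamiliar with the term, here is a toy sketch of as-of join semantics over two timestamp-ordered series: each "trade" is paired with the latest "quote" at or before its timestamp. The class and field names are illustrative only, not nfsdb's actual API.

```java
import java.util.ArrayList;
import java.util.List;

public class AsOfJoinSketch {
    record Tick(long timestamp, double value) {}

    // both inputs must be sorted by timestamp; a single merge pass gives O(n + m)
    static List<double[]> asOfJoin(List<Tick> trades, List<Tick> quotes) {
        List<double[]> out = new ArrayList<>();
        int q = 0;
        Tick lastQuote = null;
        for (Tick trade : trades) {
            // advance to the most recent quote at or before this trade's timestamp
            while (q < quotes.size() && quotes.get(q).timestamp() <= trade.timestamp()) {
                lastQuote = quotes.get(q++);
            }
            out.add(new double[]{
                    trade.value(),
                    lastQuote == null ? Double.NaN : lastQuote.value()
            });
        }
        return out;
    }

    public static void main(String[] args) {
        List<Tick> trades = List.of(new Tick(10, 100), new Tick(25, 101));
        List<Tick> quotes = List.of(new Tick(5, 99.5), new Tick(20, 100.5));
        // trade@10 pairs with quote@5, trade@25 pairs with quote@20
        asOfJoin(trades, quotes).forEach(r -> System.out.println(r[0] + " / " + r[1]));
    }
}
```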
Just wanted to say this project looks interesting. I'll see if a use case comes up where I can apply it.
Vlad Ilyushchenko
@bluestreak01
Thanks, we are evolving to maximise use cases :)
ratcash
@ratcashdev
Hi. QuestDB seems like an exceptional piece of work. I was wondering how QuestDB performs when one expects a workload biased heavily towards READS, with >> 10k concurrent QUERIES. In other words, how can I scale it out, should I find the need?