In our RPC implementation we try to reduce the number of allocations, because of the high traffic volume we serve. The RPC layer needs byte slices of different sizes to handle the network traffic. The obvious solution would be sync.Pool, to avoid extra allocations and reuse slices that were already allocated and then freed. The problem is that we don't know in advance what buffer size will be requested. I don't even know whether the size distribution is uniform or tends to cluster around a mean...
One idea is to create several pool objects, each serving only a particular buffer size: say up to 100 bytes, up to 1K, up to 10K, and up to 100K. If the requested size is 4500, we go to the pool that holds 10K slices and take one from there.
We could also measure the request-size statistics, but wouldn't that be overengineering for this task?
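For what it's worth, the tiered-pool idea above could be sketched roughly like this (a minimal illustration, not the actual implementation; the names `GetBuf`/`PutBuf` and the tier sizes are just placeholders):

```go
package main

import (
	"fmt"
	"sync"
)

// Tier capacities: a request of size n is served from the smallest tier >= n.
var tierSizes = []int{100, 1024, 10 * 1024, 100 * 1024}

// pools[i] holds byte slices of capacity tierSizes[i].
var pools = func() []*sync.Pool {
	ps := make([]*sync.Pool, len(tierSizes))
	for i, sz := range tierSizes {
		sz := sz // capture per-iteration value for the closure
		ps[i] = &sync.Pool{New: func() interface{} { return make([]byte, sz) }}
	}
	return ps
}()

// tierIndex returns the index of the smallest tier that fits n,
// or -1 if n exceeds the largest tier.
func tierIndex(n int) int {
	for i, sz := range tierSizes {
		if n <= sz {
			return i
		}
	}
	return -1
}

// GetBuf returns a slice of length n, backed by a pooled buffer when
// a tier fits; oversized requests fall back to a plain allocation.
func GetBuf(n int) []byte {
	if i := tierIndex(n); i >= 0 {
		return pools[i].Get().([]byte)[:n]
	}
	return make([]byte, n)
}

// PutBuf returns a buffer to its tier, matched by capacity so a
// re-sliced buffer still lands in the right pool.
func PutBuf(b []byte) {
	if i := tierIndex(cap(b)); i >= 0 && cap(b) == tierSizes[i] {
		pools[i].Put(b[:cap(b)])
	}
}

func main() {
	b := GetBuf(4500) // served from the 10K tier
	fmt.Println(len(b), cap(b))
	PutBuf(b)
}
```

Note that in production code you would probably pool `*[]byte` (or a small wrapper struct) instead of `[]byte` to avoid the extra allocation when the slice header is boxed into an `interface{}` on `Put`.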
@akhilesh2011_twitter, every record can have custom fields. A custom field is a key-value pair that can be assigned on a per-record basis. Fields can be used in WHERE conditions to filter records in LQL, the same way the msg or ts fields are used. For instance, if you know that some of your records have a field 'fld_error' and you want to select only the records where that field is not empty, you write a query like this:
SELECT FROM app="myapp" WHERE fields.fld_error != "" LIMIT 1000
@vtolstov I think Logrange has similar functionality, with some variations. Logrange is built to be fast and efficient. We compared its ingestion speed with Kafka's and believe we can make it even better.
I think your main request is about embedding Logrange into your app, but after some back-and-forth it seems you simply need another API. Either way it is possible, so let me know which you would prefer, and we can discuss the details.