Something else for you to investigate when you have some time: a performance degradation that seems to be related to the size of the RG tables. My test was to take a graph with a couple million each of vertices and edges and "commit" them, so there would be only a single create command for each of these. Then I created a small independent graph (tens of vertices and edges) via RG, and finally requested a traversal of that. It took many seconds to return. This is not high priority for me, but it means that I can't use RG calls to do current-time traversals and have to do them directly on my tables.
OK, I'll take a look at this. Can you share the input parameters you used for the query?
// Method signature:
// function traverseProvider (timestamp, svid, minDepth, maxDepth, edges, options = {}) { ... }
traverseProvider(
  1581583228.2800217,
  "vertex_collection/starting_vertex_key",
  0,
  2,
  {
    "edge_collection_1": "inbound",
    "edge_collection_2": "outbound",
    "edge_collection_3": "any"
  },
  {
    "uniqueVertices": "path",
    "uniqueEdges": "path",
    "bfs": true,
    "vFilter": "x == 2 && y < 1",
    "eFilter": "x == 2 && y < 1",
    "pFilter": "edges[0].x > 2 && vertices[1].y < y"
  }
)
I think in your case you had embedded the `uniqueVertices` param inside `edges`, resulting in the nested validation error.
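For illustration, a minimal sketch of the mismatch (a hypothetical reconstruction reusing the names from the call above, not your exact failing call):

// Incorrect (hypothetical): `uniqueVertices` nested inside the `edges` map triggers
// the nested validation error
traverseProvider(1581583228.2800217, "vertex_collection/starting_vertex_key", 0, 2, { "edge_collection_1": "inbound", "uniqueVertices": "path" }, {})

// Correct: `uniqueVertices` belongs in the trailing options object
traverseProvider(1581583228.2800217, "vertex_collection/starting_vertex_key", 0, 2, { "edge_collection_1": "inbound" }, { "uniqueVertices": "path" })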
I realize there is a lot of ground to cover to make the documentation catch up with all the new exports, and I really appreciate your patience in sticking with the product so far. Improving documentation and test coverage are the very next items on my plate, to ensure minimal hiccups in the future.
Traversal depth is bounded by `minDepth` and `maxDepth`. The `vFilter` only applies to vertices starting from `minDepth`.
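As a sketch of that behavior (same hypothetical names as above): with `minDepth` set to 1, the start vertex sits at depth 0 and is never tested against the `vFilter`.

traverseProvider(1581583228.2800217, "vertex_collection/starting_vertex_key", 1, 2, { "edge_collection_1": "outbound" }, { "vFilter": "x == 2" })
// The start vertex (depth 0) is below minDepth, so "x == 2" is only evaluated
// against vertices at depths 1 and 2.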
// function purgeProvider (path, options = {}) { ... }
// This function deletes ALL history of the given path. If deleteUserObjects is true, it also
// deletes the corresponding objects from the plain collections (holding the current state)
purgeProvider('/c/pmconfig', { deleteUserObjects: true, silent: false })
A short question: approximately when can we expect support for valid time to become available?
I've been trying to raise funds/sponsorship for the RecallGraph project over the last few months. Since I've now been working almost exclusively on RecallGraph for nearly two years, I need to establish a revenue source to be able to keep working on it going forward. I'll get back to core development as soon as I secure some funds.
If you want, you can help by sponsoring the project. Please visit https://opencollective.com/recallgraph
@LeenaBahulekar Glad to see you on board with the idea of RecallGraph as a data versioning solution.
RecallGraph's design is actually quite different from the one provided in the blog post on time-travelling databases. There isn't a technical whitepaper as such, but you can refer to this YouTube video (in case you haven't seen it already), which discusses a few details about RecallGraph's architecture:
@LeenaBahulekar Although RecallGraph supports bulk updates, it wraps each individual update in its own separate transaction. This is because each update involves a number of associated writes: event log entries, skeleton graph updates, snapshot insertions, etc. On average, around 5 internal documents are created/updated during a single node update, although in some cases this number can go higher.
Since ArangoDB does not support nested transactions, it is not possible to wrap these individual small transactions under a larger envelope transaction. It might be useful to support bulk updates in a single transaction in future versions of RecallGraph, but it would need some careful design in order to avoid overshooting memory limits. Here's a list of limitations that must be accounted for when using the RocksDB engine - https://www.arangodb.com/docs/stable/transactions-limitations.html#rocksdb-storage-engine.
Supporting bulk updates in a single transaction is thus not a straightforward matter.
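To make the constraint concrete, here is a minimal sketch (not RecallGraph's actual internals; collection names are hypothetical) of how a bulk update in a Foxx service ends up as N separate transactions, one per node:

'use strict';
const { db } = require('@arangodb');

// Hypothetical sketch: each node update gets its own transaction, covering the user
// collection plus the internal bookkeeping collections (event log, skeleton graph,
// snapshots). N nodes => N transactions; there is no outer transaction to nest them in.
function bulkUpdate(nodes) {
  for (const node of nodes) {
    db._executeTransaction({
      collections: { write: ['vertex_collection', 'events', 'skeleton', 'snapshots'] },
      action: function (params) {
        const { db } = require('@arangodb');
        db.vertex_collection.update(params.node._key, params.node);
        db.events.insert({ meta: { id: params.node._id }, event: 'updated' });
        // ...skeleton graph updates, snapshot insertions, etc.
      },
      params: { node }
    });
  }
}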