    Vinayak Shiddappa Bali

    Hi All,
    I am trying to connect JanusGraph with gremlin-python and query the database. These are the steps I followed:
    Step 1:

    ./bin/ -i org.apache.tinkerpop gremlinpython 3.4.6

    But I am getting a "configuration not found" error.


    def globals = [:]
    graph ='conf/')
    globals << [g : graph.traversal()]


    port: 8182
    scriptEvaluationTimeout: 10000000
    channelizer: org.janusgraph.channelizers.JanusGraphWebSocketChannelizer
    graphs: {
      ConfigurationManagementGraph: conf/
    }
    scriptEngines: {
      gremlin-groovy: {
        plugins: { org.janusgraph.graphdb.tinkerpop.plugin.JanusGraphGremlinPlugin: {},
                   org.apache.tinkerpop.gremlin.server.jsr223.GremlinServerGremlinPlugin: {},
                   org.apache.tinkerpop.gremlin.tinkergraph.jsr223.TinkerGraphGremlinPlugin: {},
                   org.apache.tinkerpop.gremlin.jsr223.ImportGremlinPlugin: {classImports: [java.lang.Math], methodImports: [java.lang.Math#*]},
                   org.apache.tinkerpop.gremlin.jsr223.ScriptFileGremlinPlugin: {files: ['scripts/init.groovy']}}}}
    serializers:
      - { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV3d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
      - { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV3d0, config: { serializeResultToString: true }}
      - { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV3d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
      # Older serialization versions for backwards compatibility:
      - { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV1d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
      - { className: org.apache.tinkerpop.gremlin.driver.ser.GryoLiteMessageSerializerV1d0, config: {ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
      - { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV1d0, config: { serializeResultToString: true }}
      - { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerGremlinV2d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
      - { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerGremlinV1d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistryV1d0] }}
      - { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV1d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistryV1d0] }}
    processors:
      - { className: org.apache.tinkerpop.gremlin.server.op.session.SessionOpProcessor, config: { sessionTimeout: 28800000 }}
      - { className: org.apache.tinkerpop.gremlin.server.op.traversal.TraversalOpProcessor, config: { cacheExpirationTime: 600000, cacheMaxSize: 1000 }}
    metrics: {
      consoleReporter: {enabled: true, interval: 180000},
      csvReporter: {enabled: true, interval: 180000, fileName: /tmp/gremlin-server-metrics.csv},
      jmxReporter: {enabled: true},
      slf4jReporter: {enabled: true, interval: 180000},
      gangliaReporter: {enabled: false, interval: 180000, addressingMode: MULTICAST},
      graphiteReporter: {enabled: false, interval: 180000}}
    maxInitialLineLength: 4096
    maxHeaderSize: 8192
    maxChunkSize: 8192
    maxContentLength: 65536
    maxAccumulationBufferComponents: 1024
    resultIterationBatchSize: 64
    writeBufferLowWaterMark: 32768
    writeBufferHighWaterMark: 65536

    Python Script:

    >>> from gremlin_python import statics
    >>> from gremlin_python.structure.graph import Graph
    >>> from gremlin_python.process.graph_traversal import __
    >>> from gremlin_python.process.strategies import *
    >>> from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
    >>> graph = Graph()
    >>> connection = DriverRemoteConnection('ws://', 'g')
    >>> g = graph.traversal().withRemote(connection)
    >>> g.V().limit(1).valueMap()
    tornado.httpclient.HTTPError: HTTP 599: Timeout while connecting
    Also tried :
    Cluster cluster ="conf/");
    NameError: name 'Cluster' is not defined

    What is the issue and how can I resolve it?
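    For what it's worth, `Cluster` belongs to the Java/Groovy driver rather than gremlin_python, which would explain the NameError. A minimal sketch from the Gremlin Console, assuming a `conf/remote.yaml` that points at the server:

```groovy
// Sketch (Gremlin Console): Cluster is part of the Java driver, not gremlin_python.
// The conf/remote.yaml path below is an assumption.
cluster ='conf/remote.yaml')
client = cluster.connect()
client.submit('g.V().limit(1).valueMap()').all().get()
client.close()
cluster.close()
```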

    Jack Waudby
    Hi. I have a JanusGraph server backed by Cassandra and Elasticsearch. I'm running two threads through the Java API: one thread (Thread-A) issues transactions that increment a property on a node, while the other thread (Thread-B) issues transactions that read that property. Once the property has been read, all subsequent reads across transactions on Thread-B return the same value, even though Thread-A has updated it. I've tried turning off the db-cache, but it made no difference. I've also tried issuing reads from the Gremlin Console while the Java process is updating the property, and I again see the same behaviour. Any suggestion why this is happening?
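    One hedged explanation: JanusGraph reads are snapshot-isolated within a transaction, so a long-lived read transaction keeps returning the value it first saw. A sketch of forcing a fresh read, where `vertexId` and the `counter` key are assumed names:

```groovy
// Each read happens inside a transaction; end it so the next read opens a fresh one.
graph.tx().rollback()              // discard the stale snapshot
g.V(vertexId).values('counter')    // re-reads in a new transaction
```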


    We had no luck finding ways to upload data directly to HBase. Now we are trying the spark graph computer, but there were a lot of dependency errors when starting Spark jobs (Spark 2.4 on YARN, Cloudera 6.3) using JanusGraph 0.5.1. The only configuration we could get working is HBase 2.1.0 in Cloudera 6.3, with a separate Spark 2.2 cluster (master + 1 worker) installed on the same servers as the HBase regionservers, and JanusGraph 0.3.3.

    While trying to upload using more than one spark worker there was an error:
    Task 18 in stage 1.0 failed 4 times, most recent failure: Lost task 18.3 in stage 1.0 (TID 627,, executor 0): java.lang.NoSuchMethodError:;

    From googling, this error comes from a Guava version incompatibility, but we could not work out how to resolve it, or why everything works with a single Spark worker.

    Our hardware configuration:
    4 servers in total, each with an HBase regionserver and an HDFS datanode:
    2 x Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz 6 cores, 12 threads
    128 Gb RAM
    12 x 4Tb HDD
    10gbit/s lan

    JanusGraph 0.3.3 and a single Spark 2.2 worker (24 cores + 24 GB RAM for a single Spark executor) installed on one of those servers.

    We started to upload a graph from a GraphSON file stored on HDFS, and it was too slow: 36 hours to upload 100,000 vertices, ~2,777 vertices per hour. During the upload there was no significant load on the network, disks or RAM; only the CPU load average was about 24 for 24 CPU threads.



    Commands in gremlin console:

    graph ='conf/hadoop-graph/')
    blvp ='conf/').create(graph)

    So the questions are:

    1. Has anybody managed to run the spark graph computer with Spark 2.4 on YARN on Cloudera 6.x, or on a separate Spark 2.4 cluster?
    2. How do we resolve the Guava compatibility problems? As I remember, the same problems occurred with Spark 2.4 and JanusGraph 0.5.2.
    3. What approximate vertex/edge upload speed did you get with the spark graph computer, and what hardware configuration and software versions did you use?

    Thanks in advance!

    4 replies
    Hi group. I'm getting timeouts when searching by properties which have an associated composite index (in the ENABLED state). The timeout seems to occur for a node label which contains a large number of nodes (~100 million). Has anyone faced similar issues?
    Vinayak Shiddappa Bali

    Hi All,
    I have established a connection with Janusgraph from python. There are 2 problems:
    Problem 1 :
    Running the following query in Python gives a syntax error, while the same query executed in the Gremlin Console returns output:

    SyntaxError: invalid syntax at in()

    Problem 2:
    To run the query in Python, I have to use .next() at the end, otherwise it returns the string form of the traversal, as follows:
    [['V'], ['limit', 1], ['valueMap']]
    The problem is I don't know how many records are returned by the query.
    Please share a solution to the above problems.
    Thanks for answering
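    For reference (hedged, from the gremlin_python behaviour I know of): `in` is a reserved word in Python, so gremlin_python exposes it as `in_()` (likewise `not_()`, `is_()`, `as_()`), which is the usual cause of that SyntaxError. And a traversal is only a description until a terminal step runs it, so `toList()` both executes it and gives you the record count. In console terms:

```groovy
// A traversal is just a description until a terminal step runs it.
g.V().limit(1).valueMap()            // in Python this prints the bytecode, it does not execute
g.V().limit(1).valueMap().toList()   // executes and returns a list, whose size is the record count
```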

    Yash Datta

    hi all,
    On this page, the docs describe adding an edge label, and then the code example adds a property with cardinality SET:

    mgmt = graph.openManagement()
    follow = mgmt.makeEdgeLabel('follow').multiplicity(MULTI).make()
    name = mgmt.makePropertyKey('name').dataType(String.class).cardinality(Cardinality.SET).make()
    mgmt.addProperties(follow, name)

    But I think this cannot work, because the code throws an exception:

        public EdgeLabel addProperties(EdgeLabel edgeLabel, PropertyKey... keys) {
            for (PropertyKey key : keys) {
                if (key.cardinality() != Cardinality.SINGLE) {
                    throw new IllegalArgumentException(String.format("An Edge [%s] can not have a property [%s] with the cardinality [%s].", edgeLabel, key, key.cardinality()));
                }
                addSchemaEdge(edgeLabel, key, TypeDefinitionCategory.PROPERTY_KEY_EDGE, null);
            }
            return edgeLabel;
        }

    So maybe the docs need to be updated?
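    For comparison, a variant of the doc's example that should pass that check, since the guard only rejects non-SINGLE cardinality on edge properties (a sketch, untested):

```groovy
mgmt = graph.openManagement()
follow = mgmt.makeEdgeLabel('follow').multiplicity(MULTI).make()
// Edge properties must use SINGLE cardinality to get past the guard above.
name = mgmt.makePropertyKey('name').dataType(String.class).cardinality(Cardinality.SINGLE).make()
mgmt.addProperties(follow, name)
mgmt.commit()
```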

    4 replies
    Vipul Kakkar
    Hi everyone, I recently came across this amazing open-source project and am thinking of contributing. Can someone guide me on how to start contributing?
    2 replies
    Hi All, in JanusGraph with the Cassandra storage backend, is there a way to store the data in a human-readable format? I am wondering how I would be able to add functionality on top of the database without going through JanusGraph.
    Wouter de Vries
    @ramana1266 that seems highly unlikely
    that would be as if you are accessing the raw data that postgres is storing outside of the postgres server
    not really a good idea(tm)
    I also have a problem: I've set up the ConfiguredGraphFactory with Cassandra as a backend. That all works, except it doesn't create the traversal bindings. So I have my "graph" variable available, as it should be, but not the "graph_traversal" binding, which should also be created.
    4 replies
    Running v0.5.1
    Where do I find the documentation to plug in my own custom storage backend for Janusgraph? How complicated is it to write my own backend
    I realize we can achieve multi tenancy using
    Wouter de Vries
    Is it possible to create a hadoopgraph with the ConfiguredGraphFactory?
    How can I achieve multi-tenancy in JanusGraph while using Elasticsearch as the index backend? ConfiguredGraphFactory lets me pick a graph to work with for the storage backend, but what about the index backend?


    I am using Janus 0.4.0 with Cassandra DB 3.11.4 (data getting replicated within 2 nodes) as backend storage.
    While executing some queries I am getting following exception:

    Caused by: java.lang.NullPointerException: Could not find type for id: 784405
    at ~[guava-28.2-jre.jar:?]
    at ~[janusgraph-core-0.4.0.jar:?]
    at org.janusgraph.graphdb.query.vertex.BasicVertexCentricQueryBuilder.constructQueryWithoutProfile( ~[janusgraph-core-0.4.0.jar:?]
    at org.janusgraph.graphdb.query.vertex.BasicVertexCentricQueryBuilder.constructQuery( ~[janusgraph-core-0.4.0.jar:?]
    at org.janusgraph.graphdb.query.vertex.VertexCentricQueryBuilder.execute( ~[janusgraph-core-0.4.0.jar:?]
    at org.janusgraph.graphdb.query.vertex.VertexCentricQueryBuilder.vertices( ~[janusgraph-core-0.4.0.jar:?]
    at org.janusgraph.graphdb.tinkerpop.optimize.JanusGraphVertexStep.flatMap( ~[janusgraph-core-0.4.0.jar:?]
    at ~[gremlin-core-3.4.1.jar:3.4.1]
    at org.janusgraph.graphdb.tinkerpop.optimize.JanusGraphVertexStep.processNextStart( ~[janusgraph-core-0.4.0.jar:?]
    at org.apache.tinkerpop.gremlin.process.traversal.step.util.AbstractStep.hasNext( ~[gremlin-core-3.4.1.jar:3.4.1]
    at ~[gremlin-core-3.4.1.jar:3.4.1]
    at org.apache.tinkerpop.gremlin.process.traversal.step.filter.FilterStep.processNextStart( ~[gremlin-core-3.4.1.jar:3.4.1]
    at org.apache.tinkerpop.gremlin.process.traversal.step.util.AbstractStep.hasNext( ~[gremlin-core-3.4.1.jar:3.4.1]
    at org.apache.tinkerpop.gremlin.process.traversal.step.util.ExpandableStepIterator.hasNext( ~[gremlin-core-3.4.1.jar:3.4.1]
    at org.apache.tinkerpop.gremlin.process.traversal.step.util.ReducingBarrierStep.processAllStarts( ~[gremlin-core-3.4.1.jar:3.4.1]
    at org.apache.tinkerpop.gremlin.process.traversal.step.util.ReducingBarrierStep.processNextStart( ~[gremlin-core-3.4.1.jar:3.4.1]
    at ~[gremlin-core-3.4.1.jar:3.4.1]
    at ~[gremlin-core-3.4.1.jar:3.4.1]
    at ~[gremlin-core-3.4.1.jar:3.4.1]

    The queries for which this exception is reported are simple, e.g.:
    g.V().has("type", "employee").out("worksfor", "reportsto").has("deptName", P.within("HR", "IT"))

    I suspect this is something internal to JanusGraph or Cassandra, but I cannot find the exact reason behind it.
    After restarting the services accessing JanusGraph the issue disappears. I tried to reproduce it again, but with no luck, and I didn't find any help on the above exception.
    Any idea what kind of exception this is? What are the possible reasons for it?



    Hi, I am trying to get JanusGraph working with ConfigurationManagementGraph. I added this property to my gremlin-server.yaml file and am starting my Docker container with my configuration mounted. When I try to run the Gremlin Console, I still get the error: 'Please add a key named "ConfigurationManagementGraph" to the "graphs" property in your YAML file and restart the server to be able to use the functionality of the ConfigurationManagementGraph class.'

    Please help me; I'm not sure what I am missing.
    This is my gremlin-server.yaml file:

    port: 8182
    scriptEvaluationTimeout: 30000
    graphs: {
      #graph: /etc/opt/janusgraph/,
      ConfigurationManagementGraph: /etc/opt/janusgraph/
    scriptEngines: {
      gremlin-groovy: {

    And this is my file


    And I am starting using this command

    docker-compose -f docker-compose-mount.yml up
    6 replies
    Vinod Vijayan
    Hi, the query g.E().hasId('<edge id>') is not working in JanusGraph, while g.E('<edge id>') returns the edge. I am using JanusGraph version 0.5.2 with BerkeleyJE/Lucene storage, and the query is issued from Gremlin client version 3.4.6. Is hasId() or has('id', '3kk-3a8-4r9-6g0') not supported in JanusGraph? Any help is appreciated. Thanks in advance.
    gremlin> g.E("3kk-3a8-4r9-6g0")
    gremlin> g.E().hasId("3kk-3a8-4r9-6g0")
    gremlin> g.E().has('Id','3kk-3a8-4r9-6g0')
    I have ConfiguredGraphFactory set up in my JanusGraph and am able to configure, create and open multiple graphs from the Gremlin Console. How do I go about doing the same thing from Java code? Any pointers to existing source code would be great.
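    A minimal sketch of the same calls as they appear in the console; the same ConfiguredGraphFactory static methods should be usable from Java when running embedded against the same backend (the graph name, backend and the commons-configuration MapConfiguration wrapper are assumptions):

```groovy
conf = new MapConfiguration([
    'storage.backend'  : 'cql',      // assumed backend
    'graph.graph_name' : 'tenant1'   // assumed graph name
])
graph ='tenant1')
```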
    4 replies

    How complicated is it to create composite indexes in JanusGraph? I'm executing this Groovy script on my schema, and all my indices are stuck in the INSTALLED state. The log message says:

    Some key(s) on index I3byLabelTypeComposite do not currently have status(es) [REGISTERED]: label_type=INSTALLED

    This is my groovy script.

    :remote connect tinkerpop.server conf/remote.yaml session
    :remote console
    graph ="graph1");
    size = graph.getOpenTransactions().size();
    for(i=0;i<size;i++) {graph.getOpenTransactions().getAt(0).rollback()}
    mgmt = graph.openManagement()
    name = mgmt.getPropertyKey('name')
    mgmt.buildIndex('I3byNameComposite', Vertex.class).addKey(name).buildCompositeIndex()
    //Wait for the index to become available
    ManagementSystem.awaitGraphIndexStatus(graph, 'I3byNameComposite').call()
    //Reindex the existing data
    mgmt = graph.openManagement()
    mgmt.updateIndex(mgmt.getGraphIndex("I3byNameComposite"), SchemaAction.REINDEX).get()
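    One thing worth checking (hedged): an index often sticks in INSTALLED because another open JanusGraph instance or transaction never acknowledges the new index. A sketch for inspecting instances before retrying the register/reindex steps:

```groovy
mgmt = graph.openManagement()
// Instances other than the one marked '(current)' can block the index from reaching REGISTERED.
mgmt.getOpenInstances().each { println(it) }
// mgmt.forceCloseInstance('<instance-id>')   // close stale instances, then retry
mgmt.commit()
```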
    7 replies
    hello ~
    I am using cql storage backend and using ConfiguredGraphFactory for achieving multi-tenancy. This means that each tenant will have their own keyspace. Now, if each tenant can have multiple graphs for different use-cases, how can we achieve that?
    Is it possible that there will be collisions in ES search results while using ConfiguredGraphFactory? E.g., I am managing 2 different graphs. Since the vertex ids are longs, both graphs can have a vertex with id, let's say, 100. Vertex 100 in graph1 has name '123456' and vertex 100 in graph2 has name 'abcdef'. In graph1, when I do a mixed index search, e.g. g.V().has('name', textPrefix('abcdef')), JanusGraph pulls vertex id 100 out of ES and then checks whether a vertex with that id exists in the graph. Since vertex id 100 exists, it returns the vertex with name '123456' instead of returning an empty result. Am I missing something here?
    5 replies
    Yash Datta

    Is it possible to cache a Vertex object after we retrieve it using the API once?
    If yes, can somebody give me an example?

    srcVertex = g.V(id1)
    dstVertex = g.V(id2)

    Can I somehow reuse these vertices? Because reusing them via the above API does not work.
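    A hedged sketch of what usually works: vertex objects are attached to the transaction that loaded them, so rather than caching the objects across commits, keep the ids and re-read ('knows' is an assumed edge label):

```groovy
srcVertex = g.V(id1).next()
dstVertex = g.V(id2).next()
g.addE('knows').from(srcVertex).to(dstVertex).next()
g.tx().commit()
// After the commit the old references are detached; re-attach by id before reuse.
srcVertex = g.V(id1).next()
```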

    44 replies
    Steve Todorov
    Hi guys, I'm new to JanusGraph and I was doing some research on it. I was wondering why JG uses integer ids for vertices/edges. Could this be replaced with UUIDs perhaps? I'm specifically interested in avoiding id block allocation, since it is a performance bottleneck (what happens when you don't know how many ids you will be inserting in an hour? Also, if you over-provision you could end up wasting a lot of ids). Has anybody here managed to use UUIDs or something else?
    3 replies
    Debasish Kanhar

    srcVertex = g.V(java.lang.Long.valueOf(idManager.toVertexId(r.src))).next()
    dstVertex = g.V(java.lang.Long.valueOf(idManager.toVertexId(r.dst))).next()
    g.addE(label).from(srcVertex).property("value", r.propVal).to(dstVertex).next()

    This also throws: java.lang.IllegalStateException: The vertex or type is not associated with this transaction

    Really Strange @saucam . The following works for me in Scala:

    private def getVertex(vertexLabel: String, vertexKey: String, vertexValue: Any): Vertex = {
        var v: Vertex = null
        var vLabel = vertexLabel
        if (getAllDefaults().contains(vertexValue))
            return null
        else {
            try {
                v = g.V().has("node_label", vLabel).has(vertexKey, vertexValue).next()
            } catch {
                case e: NoSuchElementException =>
                    println("Vertex label: %s key: %s value: %s not found for edge %s".format(vertexLabel, vertexKey, vertexValue, edge_label))
                    v = null
            }
        }
        v
    }

    sourceV = getVertex(leftLabel, leftNodeFieldName, leftVal)
    dstV = getVertex(rightLabel, rightNodeFieldName, rightVal)
    var tr: GraphTraversal[Edge, Edge] = this.g.addE(this.edge_label).from(sourceV).to(dstV)
    for ((propName, propVal) <- keyValueEntries) {
        if (maps_for_record.containsKey(propName)) {
            val cardinality = this.getPropertyCardinality(propName, propertyKeys = propertyKeys)
            if (cardinality == "SINGLE") {
                tr =, propVal)
            } else {
                tr =, propName, propVal.toString)
            }
        }
    }
    val edge =

    Where maps_for_record is a JSON file containing the properties of edge as key-value pairs

    Hopefully you aren't closing out the transactions in between, e.g. with commit/close.
    1 reply
    Yash Datta
    Hmm, no, I commit afterwards; let me check on my end.
    Vinayak Shiddappa Bali
    Hi All,
    I am facing an issue while connecting to a Cassandra cluster other than localhost. Can anyone help me?
    15 replies
    Manish Baid
    Hello, I need to make a decision on the architecture: "Remote Server Mode" or "Remote Server Mode with Gremlin Server". This is for production usage with a reasonably large dataset: 100M vertices, 500M edges. What are the best practices? Is JanusGraph light enough to be embedded with the client app to avoid an additional network hop? Thanks
    2 replies
    Hi all, is it possible to use AWS Keyspaces instead of Cassandra on an EC2 instance for JanusGraph? It currently seems not possible (there are some problems related to a custom partitioner). Will it ever be supported by JanusGraph? Thanks
    I currently use JanusGraph with standalone Cassandra and Elasticsearch backends. I want to add Hadoop/Spark support. Is it possible to mix all of these?
    2 replies
    Brad Peters

    I have a traversal like this

          __.has("field2",P.eq((int) 0))

    With a mixed index that covers all 6 fields, I would expect that to return vertices matching either of the first 2 fields, plus field 3, field 4, and either of fields 5 and 6. But what I get is a profile that creates an index query that looks like this:

      (field3 = value3 AND field4 = true AND field1 = value1) 
      (field3 = value3 AND field4 = true AND field2 = 0) 
      (field3 = value3 AND field4 = true AND field5 textRegex .*value5)
      (field3 = value3 AND field4 = true AND field6 textRegex .*value6)

    I have dug through the strategy code and I see how the ANDs and ORs get reordered and then folded to produce this query. What I am asking is: is there a way I can write this traversal to accomplish what I want, without having to add support to JanusGraph itself?

    2 replies
    gremlin> mgmt = graph.openManagement()
    BerkeleyJE does not support non-transactional for multi threaded tx
    Yash Datta
    Hi all,
    Anyone looking for a Spark-based solution to bulk upload data into JanusGraph can have a look at Grafink: It is currently under development, but we would love your feedback on what can be done better.
    I am using the Remote Server Mode config for JanusGraph, backed by ScyllaDB.
    I am getting this exception while starting gremlin-server with the required config:
    2615 [main] ERROR  - SchemaDisagreementException: [host=, latency=1(1), attempts=1]Can't change schema due to pending schema agreement
    Goutham A S
    Hi All, I'm facing some issues with my Gremlin query. The scenario is: vertex A is connected to vertices B, C and D, and the query traverses past this to retrieve some values. I need to fetch some attribute from the immediate vertex, i.e. the vertex B/C/D which is connected to the final value. I want to do this using path(). Is it possible? Let me know if you have any suggestions. Thanks
    10 replies
    Abhijeet Kumar


    For multi-tenancy purpose, I want to use ConfiguredGraphFactory. I did the changes in YAML file and included

    graphs: {
      ConfigurationManagementGraph: /Users/abhijeet/Downloads/janusgraph-full-0.5.2/conf/

    In my gremlin-console, I’m able to create different graphs.

    Now, I want to do the same with Java. So, is there any documentation or a pointer that will serve my purpose?

    I’m exactly in the same situation explained by this guy in user-group:!topic/janusgraph-users/fwXDXJRNUFA

    Any help will be appreciated.

    5 replies
    Jean Rossier
    Has anybody ever used rollover indices with JanusGraph? I mean mixed indices (in ES) that are rolled over.
    Is it possible to have both a mixed and a composite index on the same property? Some of my use cases require an equality check (for which composite indices are faster), while other use cases require ordering on the same property (for which mixed indices are ideal). If this is possible, can I assume that JanusGraph will pick the appropriate underlying index based on the query?
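    As far as I know, both index types can share a property key, and the optimizer picks per query (equality favours the composite index, ordering the mixed one). A sketch, with the 'age' key and the 'search' backend name as assumptions:

```groovy
mgmt = graph.openManagement()
age = mgmt.getPropertyKey('age')
mgmt.buildIndex('ageComposite', Vertex.class).addKey(age).buildCompositeIndex()
mgmt.buildIndex('ageMixed', Vertex.class).addKey(age).buildMixedIndex('search')
mgmt.commit()
```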
    2 replies

    Hi All, I am not able to start JanusGraph 0.5.2 on my local machine with the following configuration:

    • macOS 10.15.5
    • OpenJDK 11.0.7

    I am getting the following error:

    java -version
    openjdk version "11.0.7" 2020-04-14 LTS
    ./ start -v
    Forking Cassandra...
    Running `nodetool statusthrift`.
    Unrecognized VM option 'UseParNewGC'
    Error: Could not create the Java Virtual Machine.
    Error: A fatal exception has occurred. Program will exit.
    2 replies
    Can someone help with this?
    Philipp Kraus
    Hello, I'm new to JanusGraph and I'm creating my first infrastructure at the moment. My question is: is there any existing Docker container with JanusGraph, Elassandra (Cassandra + Elasticsearch) and Gremlin, so I can build a cluster based on the embedded JanusGraph structure? If I need to build my own, is there any good tutorial for doing so?
    1 reply
    Vinayak Shiddappa Bali
    Hi All,
    I have been working with JanusGraph for the last year. During this time my Cassandra and JanusGraph were on the same machine.
    But now, as per requirements, I need to have the JanusGraph instance and Cassandra on different machines.
    Check the following document:
    I need more clarification on the following:
    1. How do I connect to more Cassandra clusters on different machines for high availability with data replication?
    2. How do I use multiple JanusGraph instances with multiple Cassandras?
      Please share blogs or articles which will be useful.
      Thanks for the response!
    12 replies
    Joshua Lucksom
    Hello! Also new to JanusGraph; I'm currently trying to connect to an externally-managed Cassandra cluster, but the cluster/keyspace exposes a different native transport port than the standard 9042. It looks like on server startup the DataStax driver tries to automatically reach out to cluster peers, but it assigns them the default port (9042) instead of my storage.port config option (1024), causing the connection to fail. Has anyone dealt with this before, or have any advice on how to proceed?
    6 replies
    A question about JG Docker: when I download the full JG bundle and start it using ./bin/, it starts a Gremlin Server, an embedded Cassandra and an embedded ES. How can I do the same with Docker? The default Docker command starts only the Gremlin Server.
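    The official image runs only the Gremlin Server, so the Cassandra and Elasticsearch parts of the full bundle have to run as separate services. A docker-compose sketch; the image tags and the janusgraph.* environment-variable style are assumptions based on my reading of the janusgraph/janusgraph image docs:

```yaml
version: "3"
services:
  janusgraph:
    image: janusgraph/janusgraph:0.5.2
    ports: ["8182:8182"]
    environment:
      janusgraph.storage.backend: cql
      cassandra
      elasticsearch
  cassandra:
    image: cassandra:3.11
  elasticsearch:
    image:
    environment:
      discovery.type: single-node
```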