    evgvain
    @evgvain
    @apanicker-nflx, thank you for the answer! My workflows manage a distributed computation pipeline, where different phases have to be executed by different computation nodes. I am using task-to-domain assignment to ensure that each phase is executed by the appropriate compute. However, some special tasks need to be executed in multiple workflow phases, by different computes. For example, the task that moves data to the next phase of the pipeline can be re-used multiple times, but each time under a different domain.
    Poorya
    @courosh12
    Hey guys, I have a question about the failure workflow. When I am in the failure workflow, how do I get the workflow instance id of the workflow that triggered it?
    Miguel Filipe
    @msf
    hi there, I have a 3-node setup of a dynomite cluster and for the life of me can't get conductor to work correctly with this setup. I believe the dynomite deployment is correct; it passes the various tests I've done (using redis-cli against the individual nodes)
    I can share the conductor config.properties, or even the dynomite servers' yaml config, if needed.
    I can use conductor against a local, single-instance deployment of dynomite on the same server, but not against the 3-node one
    Miguel Filipe
    @msf
    Update: the only way I can get it to work is to declare a single host entry, like so:
     #format is host:port:rack separated by semicolon  
    -workflow.dynomite.cluster.hosts=10.0.16.61:8102:eu-west-1a;10.0.52.159:8102:eu-west-1b;10.0.74.57:8102:eu-west-1c
    +workflow.dynomite.cluster.hosts=10.0.74.57:8102:eu-west-1c
    the line with 3 hosts fails to work; the line with a single host works.
    Miguel Filipe
    @msf
    my issue might be related to this issue: Netflix/conductor#1246 -- however in my case, I only have a single instance per rack, so, all dynomite+redis instances should be complete copies of each other
    Miguel Filipe
    @msf
    I created an issue: Netflix/conductor#1287
    Miguel Filipe
    @msf

    Anyone here running conductor + dynomite using a single endpoint that is a TCP load-balancer in front of the dynomite cluster?

    I'm considering this option because making conductor topology-aware of dynomite is raising serious doubts about its behaviour during faults.
    For example: if conductor uses a redis-local connection to a specific local node, how does it cope with that specific endpoint failing? The default configuration/code seems to have issues if specific AZs are unreachable.

    evgvain
    @evgvain
    Hi. I have a question regarding High Availability. HA is achieved by running multiple servers connected to the same Dynomite cluster of Redis instances. Would you recommend enabling Redis AOF persistence to prevent data loss when all cluster members are down?
    Miguel Filipe
    @msf
    tuning redis is out of scope here, but yes, IMO you should make redis a durable datastore that persists changes to disk
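    For anyone following up on the AOF suggestion above, a minimal sketch of the relevant redis.conf directives (values are illustrative; the fsync policy is a durability/latency trade-off you would tune for your own setup):

    ```conf
    # illustrative redis.conf fragment: enable append-only persistence
    appendonly yes
    # fsync the AOF once per second (common middle ground between "always" and "no")
    appendfsync everysec
    ```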
    mahalaxmiganesh
    @mahalaxmiganesh
    I am new to this community. For PR #1234, which is not yet resolved, can I contribute the solution?
    Jim
    @Jimeh87
    I'm looking into wrapping conductor in a spring boot app. I've got all the necessary dependencies added to the project but I think there is some wiring I need to do. I'm just not exactly sure what wiring is needed. Do you have any advice or are there any sample projects out there?
    Shivji Kumar Jha
    @shiv4289
    Hi, I am looking to extract the conductor metrics into prometheus. I followed the link here (https://github.com/mohelsaka/conductor-prometheus-metrics) and that exposed worker metrics. What do I need to do to get the server metrics mentioned here: https://netflix.github.io/conductor/metrics/server/#publishing-metrics
    Shivji Kumar Jha
    @shiv4289
    I do see the Monitor methods being called from the appropriate places, for instance DeciderService, WorkflowExecutor, etc. What seems to be missing is wiring the collected metrics into Spectator.globalRegistry.add(...).
    James DeMichele
    @demichej
    @shiv4289 I believe that you will need to create an AbstractModule of your own, and build your own version of Conductor which uses your AbstractModule. You can tell Conductor about your extra modules with a config property I think: https://github.com/Netflix/conductor/blob/master/server/README.md#additional-modules-optional
    @Jimeh87 Are you stuck on something in particular? You'll need to depend on the Java Conductor library, and then you can interact with the Conductor clients by telling them how to communicate with your Conductor server
    Shivji Kumar Jha
    @shiv4289
    Hi @Jimeh87, thanks for the inputs. I did the following:
    • In config.properties: Added MetricsModule to additional_modules

      conductor.additional.modules=class_extending_com.google.inject.AbstractModule,com.netflix.conductor.service.MetricsModule
    • Added MetricsModule to ModuleProvider

      modules.add(new MetricsModule());
    • I defined the servlet module as:

    public class MetricsModule extends ServletModule {
        @Override
        protected void configureServlets() {
            serve("/metrics").with(PrometheusMetricsServlet.class);
        }

        @Provides
        @Singleton
        private PrometheusMeterRegistry buildPrometheusMeterRegistry() {
            PrometheusMeterRegistry prometheusMeterRegistry = new PrometheusMeterRegistry(PrometheusConfig.DEFAULT);
            MicrometerRegistry micrometerRegistry = new MicrometerRegistry(prometheusMeterRegistry);

            // bridge Spectator's global registry into Micrometer so Spectator
            // metrics show up in the Prometheus scrape output
            Spectator.globalRegistry().add(micrometerRegistry);

            return prometheusMeterRegistry;
        }
    }

    What I am not able to understand is how to define PrometheusMetricsServlet.class in a way that I can fetch all the metrics defined here: https://netflix.github.io/conductor/metrics/server/

    This is my structure for PrometheusMetricsServlet.class

    public class PrometheusMetricsServlet extends HttpServlet {
        @Inject
        private transient PrometheusMeterRegistry registry;

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {

            // need to implement something to get all server metrics here

            PrintWriter output = resp.getWriter();
            try {
                output.print(registry.scrape());
            } finally {
                output.close();
            }
        }
    }

    Brian Tarricone
    @kelnos
    was looking in the 'contribs' module, and found an SQS implementation of ObservableQueue. i don't see an impl for QueueDAO, though, which appears to be required. was this an accidental omission? looking at the QueueDAO interface, though, i'm not sure how you'd implement things like `pushIfNotExists()` on SQS, so maybe it's just not possible?
    Jim
    @Jimeh87
    @demichej I was going down the wrong path trying to wrap conductor in a spring boot app. It was adding a whole pointless layer of complexity. I'm more just looking to extend conductor. Do most conductor setups just use the provided jetty server and wire extensions into the server project?
    Kishore
    @kishorebanala
    Hello @mahalaxmiganesh , welcome. We're waiting on the contributor to respond. Yes, please feel free to take over and submit your own PR. Thanks.
    @Jimeh87 Given Conductor 2.x is a guice app, it's hard to tell what is required to make it compatible with Spring boot. Any particular reason you want to wrap it into a Spring boot app?
    Kishore
    @kishorebanala
    @Jimeh87 Just noticed your follow up comment. You can modify the server module as you like to use different servers if required, and wire custom extensions through guice bindings.
    Hey @mohelsaka, can you help @shiv4289 with setting up Prometheus on Conductor server?
    Kishore
    @kishorebanala
    @kelnos SQS may not have all the features required for queueing in Conductor, like message priority and pushIfNotExists as you mentioned. But it is possible to implement QueueDAO with the core set of supported SQS features.
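    As a side note on why pushIfNotExists is awkward on SQS: standard queues have no conditional send, so an SQS-backed QueueDAO would have to keep its own record of already-queued message ids. A minimal in-memory sketch of those semantics (all names here are hypothetical; a real implementation would need shared, persistent state, not a local map):

    ```java
    import java.util.Map;
    import java.util.Queue;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentLinkedQueue;

    // Hypothetical sketch of pushIfNotExists semantics. SQS itself cannot do a
    // conditional push, so the dedup bookkeeping has to live outside the queue.
    class DedupQueue {
        private final Queue<String> queue = new ConcurrentLinkedQueue<>();
        private final Map<String, Boolean> seen = new ConcurrentHashMap<>();

        /** Returns true only if this message id was not already queued. */
        boolean pushIfNotExists(String messageId) {
            if (seen.putIfAbsent(messageId, Boolean.TRUE) != null) {
                return false; // id is already in flight, reject the duplicate
            }
            queue.add(messageId);
            return true;
        }

        /** Removes and returns the next id, allowing it to be re-queued later. */
        String poll() {
            String id = queue.poll();
            if (id != null) {
                seen.remove(id);
            }
            return id;
        }
    }
    ```

    The interesting part is that the dedup map must be shared by every server instance, which is exactly the gap between SQS's feature set and what QueueDAO expects.
    
    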
    Jim
    @Jimeh87
    @kishorebanala Our microservice stack is all Spring Boot, so my thinking was it might be a little easier to work with if I wrapped Conductor in a Spring Boot app. Our current plan is to mod the server module to run on Tomcat and leave Spring Boot out of the equation. Thanks for the response :)
    maheshyaddanapudi
    @maheshyaddanapudi
    Requirement: make the Netflix Conductor jar Spring Boot compatible.
    Reason: to use embedded MariaDB via mariadb4j.
    Can I convert the Guice-based Conductor to Spring Boot and use embedded MariaDB4j, or, vice versa, use DB=embedded-mariadb or something similar within the existing Guice framework?
    Brian Tarricone
    @kelnos
    thanks @kishorebanala, that's pretty much what i expected, good to have confirmation
    Teddy Reinert
    @TJReinert
    I joined to ask a similar question about wrapping conductor in a spring boot app
    From what I've been able to find here, it's not possible right now, and the recommended action would be to fork the project?
    David Zuckerman
    @davidzzzzz
    Hi folks! We have a use case where we are going to be running untrusted JavaScript. I'm working on a PR now to inject the ScriptEvaluator allowing clients to provide a different implementation if needed. Thoughts on this approach?
    maheshyaddanapudi
    @maheshyaddanapudi
    In that case, maybe a migration trial as suggested in the URL below might be an option
    Teddy Reinert
    @TJReinert
    @maheshyaddanapudi I was thinking of writing a wrapper using spring-guice
    https://github.com/spring-projects/spring-guice
    maheshyaddanapudi
    @maheshyaddanapudi
    @TJReinert That's a good option. I realize I was trying a similar approach a few weeks ago without knowing the above example was out there. Thanks for sharing; I will try this option first, and then the other option of migrating from Guice to Boot if the wrapper doesn't work out.
    maheshyaddanapudi
    @maheshyaddanapudi

    I'm trying to create a UI to perform general tasks through conductor, like defining a workflow by allowing the user to pick from the available task defs, or letting the user wire the I/O for a task within a workflow, or for the workflow itself, etc.

    The existing UI, though it provides a web view of an existing workflow, doesn't give the ability to create, i.e. define, a new workflow, or to trigger one.

    The goal is to achieve this without exposing JSON in the UI. The UI scripting counterpart, for example Angular/TypeScript, should take care of the JSON-to-UI and UI-to-JSON translation.

    I could have forked the existing UI, but I'm more inclined towards Angular, and since the entire operation goes through the Conductor APIs, it is not necessary to continue with the existing UI.

    https://github.com/maheshyaddanapudi/netflix-conductor-worflow-creator-ui

    So in this attempt, I stumbled across the fact that, while defining a task or a workflow using the Conductor API, inputParameters and outputParameters are optional and for documentation purposes only. I completely understand that the purpose is to keep the definition decoupled from the execution, but for a UI that displays task or workflow details as an input page, the I/O params from the task def, i.e. the documentation, are all the UI can rely upon.

    My real question is: can we define a centralized flag, say "mandate_i_o_details_in_def", that can be set to true or false?

    That would help a central authority who looks after the conductor server to put this check in place, mandating it for anyone who writes a worker.

    Or maybe add a parameter to the task def and workflow def to explicitly specify "no_known_i_o_params" as either true or false. By default it would be considered false, and the inputParams and outputParams in the task def would be expected to be non-empty. Only if that new parameter is explicitly set to true would the validation be relaxed to work the way it does today.

    This would at least make developers aware that their client or worker has to define its I/O, and if they explicitly want to skip defining I/O, they can. Currently a developer might take a code sample from somewhere containing a sample task def or workflow def whose original author knowingly skipped defining I/O, because they didn't need the documentation or for whatever reason. If the developer who copies that code doesn't read the Netflix Conductor documentation properly, there is a very good chance they will later implement custom persistence to keep track of the I/O params, even though they are for documentation purposes only. That could lead to duplicated persistence that doesn't utilize Conductor's built-in definition persistence.

    Personally, I have gone through the Conductor documentation on Netflix and GitHub hundreds of times, but I still end up referring back again and again to be sure of a few things. It's that big, and that good, a tool (or OSS, to be precise).

    Just throwing in an idea to see if it gives more granular but useful control in Conductor orchestration that could also help the UI.
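    For context on the documentation-only I/O fields being discussed: a Conductor task definition already carries inputKeys/outputKeys purely as documentation, roughly along these lines (the task name and values here are made up for illustration):

    ```json
    {
      "name": "move_data_task",
      "retryCount": 3,
      "timeoutSeconds": 1200,
      "inputKeys": ["sourcePhase", "targetPhase"],
      "outputKeys": ["movedRecordCount"],
      "timeoutPolicy": "TIME_OUT_WF",
      "responseTimeoutSeconds": 600
    }
    ```

    The proposal above is essentially about whether the server should be able to require these arrays to be non-empty.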

    maheshyaddanapudi
    @maheshyaddanapudi
    P.S. The UI in the above message is not a fully working example. I will put up a proper version soon.
    Teddy Reinert
    @TJReinert

    @maheshyaddanapudi I will review this more thoroughly shortly. I have a project internally I've been working on as a builder, and I had a tough time finding something in Angular to solve the need. I'm interested to see what you come up with.

    I'll need to chat with my higher ups to see if this is something I can help out with.

    skarsri
    @skarsri
    Any best practices for building a HA Conductor infrastructure?
    Faheem Zunjani
    @faheemzunjani
    Hi all,
    I am new to using conductor, hence my query might be a silly one. I am trying to run conductor from my IDE (IntelliJ IDEA). I have built the project successfully using 'gradlew build'. To run the server, I run the 'Main.java' file in the 'com.netflix.conductor.bootstrap' package, and I get this error:
    "Task :conductor-server:Main.main()
    0 [main] INFO com.netflix.conductor.bootstrap.ModulesProvider - Starting conductor server using in memory data store.
    15 [main] INFO com.netflix.conductor.bootstrap.ModulesProvider - External payload storage is not configured, provided: , supported values are: [S3]
    java.lang.IllegalArgumentException: No enum constant com.netflix.conductor.bootstrap.ModulesProvider.ExternalPayloadStorageType.
    at java.lang.Enum.valueOf(Enum.java:238)"
    How do I go about running the conductor server and ui from my IDE?
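    One thing worth trying when launching Main from the IDE: the bootstrap Main can take a config.properties path as its first program argument (worth confirming against your Conductor version), so a minimal in-memory config can be supplied via the run configuration. Property names below are the ones commonly seen in Conductor 2.x server configs; values are illustrative:

    ```properties
    # minimal illustrative config.properties for a local in-memory run
    db=memory
    workflow.elasticsearch.url=localhost:9300
    workflow.elasticsearch.index.name=conductor
    ```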
    maheshyaddanapudi
    @maheshyaddanapudi
    Isn't that an info log? It is only to inform the user that external S3 buckets were not configured for storage.
    It should ideally not terminate the run, and should continue without impact to conductor execution with the in-memory DB. Please let me know if otherwise; I will try replicating it then.
    Oleg Ovcharuk
    @vgvoleg
    Hi all, sorry for the probably stupid question, but how do I set up elasticsearch auth parameters? I'm trying to make conductor work with an external ES, but get a "no node available" exception. I'm sure that host:9300 is reachable from the machine running conductor, I'm sure the ES cluster is configured correctly, and I'm sure conductor uses a compatible client version. The only thing I'm not sure about is how to configure the ES user:pass on the conductor side, and whether this could be the root cause of the problem. Thank you in advance.