    Sabby Anandan
    @sabbyanandan
    @PraneethPathireddy: Can you share the versions in use, as well as your task definition in its entirety?
    PraneethPathireddy
    @PraneethPathireddy
    I am using Data Flow server version: 2.1.2.RELEASE
    Composed-task-runner: 2.1.0.RELEASE
    Kubernetes platform
    Task definition: <Task1> && <Task2 || Task3 || Task4> && Task4
    I am sending deployer.composedtaskname.task1.kubernetes.deploymentLabels=myLabelName:myLabelValue programmatically
    Thanks in advance
    PraneethPathireddy
    @PraneethPathireddy
    ComposedTaskName: StoreTask
    Task definition: Task1 && <Task2 || Task3 || Task4> && Task4
    I am launching tasks through dataflowTemplate.launch() and have added the following property to the deployer property Map: deployer.StoreTask.Task1.kubernetes.deploymentLabels=myLabelName:myLabelValue
    But it does not add this label
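    (For reference, a minimal sketch of the launch described above, assuming the SCDF Java REST client's DataFlowTemplate and a TaskOperations.launch(name, properties, arguments) signature; the server URL and class/variable names are placeholders.)

        import java.net.URI;
        import java.util.Collections;
        import java.util.HashMap;
        import java.util.Map;
        import org.springframework.cloud.dataflow.rest.client.DataFlowTemplate;

        public class ComposedTaskLaunch {
            public static void main(String[] args) {
                // Placeholder SCDF server URL
                DataFlowTemplate dataflowTemplate =
                        new DataFlowTemplate(URI.create("http://localhost:9393"));

                Map<String, String> deploymentProperties = new HashMap<>();
                // Deployment label intended for the Task1 child app of the StoreTask composed task
                deploymentProperties.put(
                        "deployer.StoreTask.Task1.kubernetes.deploymentLabels",
                        "myLabelName:myLabelValue");

                // Launch the composed task definition with the deployer properties and no arguments
                dataflowTemplate.taskOperations().launch(
                        "StoreTask", deploymentProperties, Collections.emptyList());
            }
        }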
    Sabby Anandan
    @sabbyanandan
    That's an old release. Let me do some digging to see if this is supported in the dependent K8s deployer version or in the associated Task Java DSL version.
    PraneethPathireddy
    @PraneethPathireddy
    @sabbyanandan Thank you
    djgeary
    @djgeary
    @cppwfs Raised issue spring-cloud/spring-cloud-dataflow#3695. @sabbyanandan, we have cleaned up the task execution entries and still get the issue. From @cppwfs's description above and the documentation, it's implied you can't launch task definitions with different arguments or properties while one is currently running. It's currently returning a 500 error, which implies there's something unintended going on internally?
    Sabby Anandan
    @sabbyanandan

    "while one is currently running" <- I missed this on your previous post. What I shared before was around launching tasks with different args/props, and that is the expected behavior. It continues to work that way.

    However, there's now governance around task launches in 2.3. We use the task definition to parse the app, its version, args, and props. Whatever has changed is applied on new launches; it can be the app version, args, or props -- all of it. You get to practice CICD on task apps with it. With that, there is now a validation to make sure the task definition is not currently running.

    If I have to restate your requirement: you want to launch the same task definition multiple times, concurrently, and with different args. Yeah?

    djgeary
    @djgeary
    yes, that's right
    Sabby Anandan
    @sabbyanandan
    Alright, thanks for confirming and the issue. We will see how to address this requirement on the newly designed infra for Task CICD.
    djgeary
    @djgeary
    we have a client application that uses the SCDF API and it launches the same definition with different args
    Michael Minella
    @mminella
    Just to confirm, @djgeary: the values that are changing, are they command line args or application properties?
    Command line args should not cause the CD flow to kick in (and therefore should not be impacted by the new restrictions). If that isn't the case, then it's a bug IMHO.
    djgeary
    @djgeary
    I think they are application properties
    djgeary
    @djgeary
    I've checked: they are application properties, although previously it would have worked for both application properties and arguments
    Michael Minella
    @mminella
    That's because app property changes were ignored on subsequent task runs in the past. Command line args are where values that are expected to change should live.
    Are you running on CF or K8s?
    djgeary
    @djgeary
    OK, I have to admit it wasn't so clear to me what the difference was from the point of view of launching a task; e.g. there is a screen in SCDF to specify both types at launch, and we could have used either. Perhaps this (new) distinction should be made a bit clearer, as it may be a good reason to choose to code values as arguments; possibly we can update our code to do that.
    K8s
    I mean the documentation as to why you might want to code something as an argument instead of a property
    Michael Minella
    @mminella
    The distinction has been more explicit in CF since env vars are cached from run to run.
    Agreed that if we are not explicit in the docs, we should be
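    (A minimal illustration of that distinction, again assuming a DataFlowTemplate-based launch; the task name, property key, and argument below are hypothetical.)

        import java.net.URI;
        import java.util.List;
        import java.util.Map;
        import org.springframework.cloud.dataflow.rest.client.DataFlowTemplate;

        public class ArgsVsProperties {
            public static void main(String[] args) {
                // Placeholder SCDF server URL
                DataFlowTemplate dataFlow =
                        new DataFlowTemplate(URI.create("http://localhost:9393"));

                // Application/deployment properties: changing these counts as a change to the
                // task definition and takes part in the new CD/versioning behavior.
                Map<String, String> properties = Map.of("app.my-app.my.prop", "valueA");

                // Command-line arguments: intended for per-run values, so changing these should
                // not trigger the CD restrictions discussed above.
                List<String> arguments = List.of("--run.date=2020-01-15");

                dataFlow.taskOperations().launch("my-task", properties, arguments);
            }
        }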
    djgeary
    @djgeary
    We can do some tests using arguments instead and, if we still run into problems, update the issue.
    Michael Minella
    @mminella
    Great! Thanks for the feedback and working with us on this
    Siddhant Sorann
    @siddhantsorann
    Hey Guys, I'm migrating from SCDF 1.7.2 to 2.3.0. I run it on Kubernetes. How can I add an annotation to all the pods created by SCDF?
    Previously I used to set
      - name: SPRING_CLOUD_DEPLOYER_KUBERNETES_JOB_ANNOTATIONS
        value: key:value
    in the deployment.yaml file.
    kasim-ba
    @kasim-ba
    Hi, I'm currently setting up SCDF for our batch jobs. The machine should run locally, so is it possible to schedule tasks on a locally running Spring Cloud Data Flow? And how would I achieve this? The documentation only talks about PCF and Kubernetes.
    I want to be able to change the scheduling without touching the code, especially since it needs to be done by people other than developers.
    Ilayaperumal Gopinathan
    @ilayaperumalg
    Hi @siddhantsorann, you can set the K8s deployer properties at the server level for streams and tasks. In the case of streams, you need to set the K8s deployer properties in the Skipper server config, as explained in the docs here: https://docs.spring.io/spring-cloud-dataflow/docs/current/reference/htmlsingle/#_application_and_server_properties
    For tasks, you need to set the K8s deployer properties at the SCDF server level, as mentioned here: https://docs.spring.io/spring-cloud-dataflow/docs/current/reference/htmlsingle/#configuration-kubernetes-tasks
    Ilayaperumal Gopinathan
    @ilayaperumalg
    Hi @kasim-ba, the local scheduler support is not available yet. You can try to see if this PR can help your case: spring-cloud/spring-cloud-scheduler-quartz#1
    kasim-ba
    @kasim-ba
    @ilayaperumalg Thanks, I will take a look at it
    Siddhant Sorann
    @siddhantsorann
    Worked like a charm. Thanks @ilayaperumalg
    Also, is there a reason that the scheduling agent was created to trigger tasks via REST API in 2.3.0? Previously the CronJob created in K8s was running the tasks directly.
    kasim-ba
    @kasim-ba
    @ilayaperumalg As far as I can see, the PR has been waiting for over a year to be merged. Is there a specific reason for keeping that merge request open for such a long time?
    Bart Veenstra
    @bartveenstra
    Congrats on the 2.3.0.RELEASE! Are there release notes for this release?
    Bart Veenstra
    @bartveenstra
    @cppwfs Awesome! Nice new features!
    Sabby Anandan
    @sabbyanandan

    Also, is there a reason that the scheduling agent was created to trigger tasks via REST API in 2.3.0? Previously the CronJob created in K8s was running the tasks directly.

    @siddhantsorann: You may want to review the new CICD support for Tasks: https://docs.spring.io/spring-cloud-dataflow/docs/2.3.0.RELEASE/reference/htmlsingle/#spring-cloud-dataflow-task-cd

    When the scheduler is used in K8s, to support CD flows for tasks, we needed a consistent mechanism to determine whether or not a given task is already running.

    The only approach to determine that is to ask SCDF itself, so the new scheduler-tasklauncher app is spawned behind the scenes (similar to composed-task-runner) to query SCDF at runtime and then intelligently decide whether to consume new versions of the task apps or their property changes.

    Sabby Anandan
    @sabbyanandan

    As far as I can see, the PR has been waiting for over a year to be merged. Is there a specific reason for keeping that merge request open for such a long time?

    @kasim-ba: It is a common practice for folks to rely on an enterprise scheduler in a production setting. In K8s, we rely on the scheduler agent in Kubernetes itself — assuming users would use a production-ready K8s cluster. Likewise, in CF, we rely on pcf-scheduler, which is a managed service in Cloud Foundry.

    However, in local mode, there's no platform-like concept attached to it. SCDF is a Boot app that runs as a local Java process in the VM or in the Docker daemon (if you're using Docker Compose).

    All that said, the quality of service and resilience of operations are limited with the local method. If something goes wrong or the SCDF app crashes, there's nothing to bring it back up automatically. It is your responsibility to manage and operate it reliably; that's why it is typically used for local development.

    Given these limitations, we didn't want to take on the local scheduler in our releases. If you're going to need a scheduler in local mode, it is a common practice to use a Quartz-based implementation locally. It will be a simple Boot app, too. There are many Boot + Quartz examples you can find online.
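    (As a rough sketch of that suggestion, not an official sample: a plain Quartz job could ask a locally running SCDF server to launch an existing task definition through the Java REST client. The cron expression, task name, and server URL below are assumptions.)

        import java.net.URI;
        import java.util.Collections;
        import org.quartz.CronScheduleBuilder;
        import org.quartz.Job;
        import org.quartz.JobBuilder;
        import org.quartz.JobDetail;
        import org.quartz.JobExecutionContext;
        import org.quartz.Scheduler;
        import org.quartz.SchedulerException;
        import org.quartz.Trigger;
        import org.quartz.TriggerBuilder;
        import org.quartz.impl.StdSchedulerFactory;
        import org.springframework.cloud.dataflow.rest.client.DataFlowTemplate;

        public class LocalTaskScheduler {

            // Quartz job that asks the local SCDF server to launch an existing task definition
            public static class LaunchTaskJob implements Job {
                @Override
                public void execute(JobExecutionContext context) {
                    DataFlowTemplate dataFlow =
                            new DataFlowTemplate(URI.create("http://localhost:9393"));
                    dataFlow.taskOperations().launch(
                            "my-task", Collections.emptyMap(), Collections.emptyList());
                }
            }

            public static void main(String[] args) throws SchedulerException {
                Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
                JobDetail job = JobBuilder.newJob(LaunchTaskJob.class)
                        .withIdentity("launch-my-task").build();
                Trigger trigger = TriggerBuilder.newTrigger()
                        .withSchedule(CronScheduleBuilder.cronSchedule("0 0/15 * * * ?")) // every 15 minutes
                        .build();
                scheduler.scheduleJob(job, trigger);
                scheduler.start();
            }
        }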

    Michael Minella
    @mminella
    To @kasim-ba's point though: if we have an open PR for that functionality and we have decided not to support it, we should close the PR.
    Sabby Anandan
    @sabbyanandan
    Sure, OK.
    kasim-ba
    @kasim-ba
    @sabbyanandan Thanks for the further information. I knew about Quartz. I'm just looking at possibilities for bringing my team closer to the whole Spring environment without also pushing them into the Kubernetes topic at the same time.
    Sabby Anandan
    @sabbyanandan

    @/all Hi, folks! Thank you for your interest and contributions to Spring Cloud Data Flow and the ecosystem. We would love to learn more about your feedback and/or concerns, so please take this 1-page survey to let us know: https://twitter.com/pivotal/status/1204532909065080842

    P.S.: I posted a similar message in the Spring Cloud Stream Gitter channel, so if you have feedback at the framework level, we would appreciate it if you could also complete the other survey.