    Enrique Medina Montenegro
    @emedina
    Caused by: java.lang.IllegalArgumentException: A default binder has been requested, but there is no binder available
    I believe I'm mixing up the dependencies, but I'm a bit lost here now
    Sabby Anandan
    @sabbyanandan
    It looks like you're mixing Spring Cloud Stream and Spring Cloud Task dependencies in your app.

    I'm not sure which official doc you're referring to, but have you had a chance to review the Batch Developer Guide?

    Specifically, the Simple Task guide should, in its entirety, prepare you to build and run the app locally, on Cloud Foundry, or on Kubernetes as a standalone Task.

    Once you get that running, it will work exactly the same when launched via SCDF.
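
    For reference, a bare-bones Task app along the lines of that guide needs only the spring-cloud-starter-task dependency (and no Spring Cloud Stream binder on the classpath). A minimal sketch, with the class name as a placeholder:

        import org.springframework.boot.CommandLineRunner;
        import org.springframework.boot.SpringApplication;
        import org.springframework.boot.autoconfigure.SpringBootApplication;
        import org.springframework.cloud.task.configuration.EnableTask;
        import org.springframework.context.annotation.Bean;

        // Minimal Spring Cloud Task app: @EnableTask records the execution
        // in the task repository; the runner is the task's actual work.
        @SpringBootApplication
        @EnableTask
        public class SimpleTaskApplication {

            public static void main(String[] args) {
                SpringApplication.run(SimpleTaskApplication.class, args);
            }

            @Bean
            public CommandLineRunner runner() {
                return args -> System.out.println("Hello from the task");
            }
        }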

    Bart Veenstra
    @bartveenstra
    I am running into the issue of not being able to specify a transaction manager in Spring Cloud Task, as I am using R2DBC inside my tasks.
    On GitHub, I see that this was added in the current snapshot.
    To unblock my progress, what's the most suitable way to include the latest snapshot in my project?
    Michael Minella
    @mminella
    @bartveenstra Since that was just merged in, you should be able to grab the latest from repo.spring.io.
    Add https://repo.spring.io/libs-snapshot-local as the repository to your build and update your Spring Cloud Task version to 2.3.0.BUILD-SNAPSHOT.
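
    In Maven terms that would look roughly like the following (a sketch; the starter artifact is an assumption, and Gradle users would declare the repository and version equivalently):

        <!-- Snapshot repository from repo.spring.io, as suggested above -->
        <repositories>
            <repository>
                <id>spring-snapshots</id>
                <url>https://repo.spring.io/libs-snapshot-local</url>
                <snapshots><enabled>true</enabled></snapshots>
            </repository>
        </repositories>

        <!-- Bump Spring Cloud Task to the snapshot carrying the fix -->
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-task</artifactId>
            <version>2.3.0.BUILD-SNAPSHOT</version>
        </dependency>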
    Bart Veenstra
    @bartveenstra
    Cool. I’ll give it a go. Any idea when 2.3.0 will be released?
    Michael Minella
    @mminella
    We're working on the timeline. We'll update the milestone in GitHub Issues when we get something more concrete. My first guess is that it will be included in the next Spring Cloud release train at the latest.
    Bart Veenstra
    @bartveenstra
    Excellent. Thank you. I'll see if I can get the libraries from the snapshot repo, although I don't know if our CI/CD can cope with them.
    Michael Minella
    @mminella
    We'd love to get your feedback to confirm that it addresses your needs as well. As a side note, I'd love to learn more about your use case for R2DBC.
    Bart Veenstra
    @bartveenstra
    I am working on sagas written in Project Reactor. In these sagas I track the history of the items I am processing, and I am currently storing that in a reactive MongoDB.
    As R2DBC is picking up, I was checking whether I can also supply an alternative repository, so my clients can choose the persistence method, since Mongo is not available everywhere on corporate systems.
    Glenn Renfro
    @cppwfs
    :cool:
    Bart Veenstra
    @bartveenstra
    So R2DBC works quite well, as I only have to specify a different configuration which adds the repository scanning of the ReactiveCrudRepositories.
    So I can easily switch repository implementations. But Spring Cloud Task was receiving two transaction managers, so I was stuck on that until I saw the commit from @mminella.
    :-D
    Being on the cutting edge is challenging :)
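
    For anyone hitting the same two-transaction-manager ambiguity, one workaround is to pin the manager that Spring Cloud Task's repository uses via a custom TaskConfigurer. A rough sketch (the exact hook added in the commit above may differ):

        import javax.sql.DataSource;

        import org.springframework.cloud.task.configuration.DefaultTaskConfigurer;
        import org.springframework.context.annotation.Bean;
        import org.springframework.context.annotation.Configuration;
        import org.springframework.jdbc.datasource.DataSourceTransactionManager;
        import org.springframework.transaction.PlatformTransactionManager;

        @Configuration
        public class TaskTransactionConfig {

            // Pin the task repository to the JDBC transaction manager so the
            // reactive transaction manager is never a candidate for task state.
            @Bean
            public DefaultTaskConfigurer taskConfigurer(DataSource dataSource) {
                return new DefaultTaskConfigurer(dataSource) {
                    @Override
                    public PlatformTransactionManager getTransactionManager() {
                        return new DataSourceTransactionManager(dataSource);
                    }
                };
            }
        }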
    Michael Minella
    @mminella
    Understandable. Thank you for finding that, though. It was a bug that we missed.
    jinishavora
    @jinishavora

    Hi All,

    We have Spring Cloud Data Flow local setup and the task is running the Spring Batch Job which reads from a Database and writes to AWS S3, all of this works fine.

    1) When it comes to stopping the job, the task stops, but resuming the job is not possible since the status remains "STARTED". I think we can handle this in code by setting the batch status to "STOPPED" when the stop is triggered; correct me if this can't be handled.

    2) Also when trying to stop an individual slave task, there's an error:

    2020-03-27 10:48:48.140  INFO 11258 --- [nio-9393-exec-7] .s.c.d.s.s.i.DefaultTaskExecutionService : Task execution stop request for id 192 for platform default has been submitted
    2020-03-27 10:48:48.144 ERROR 11258 --- [nio-9393-exec-7] o.s.c.d.s.c.RestControllerAdvice : Caught exception while handling a request

    java.lang.NullPointerException: null
        at org.springframework.cloud.dataflow.server.service.impl.DefaultTaskExecutionService.cancelTaskExecution(DefaultTaskExecutionService.java:669) ~[spring-cloud-dataflow-server-core-2.3.0.RELEASE.jar!/:2.3.0.RELEASE]
        at org.springframework.cloud.dataflow.server.service.impl.DefaultTaskExecutionService.lambda$stopTaskExecution$0(DefaultTaskExecutionService.java:583) ~[spring-cloud-dataflow-server-core-2.3.0.RELEASE.jar!/:2.3.0.RELEASE]

    3) How do we implement this in a distributed environment where we have a master server that starts the master step, and the workers are started on their respective slave servers?

    Posted on StackOverflow as well: https://stackoverflow.com/questions/60891472/spring-cloud-task-remote-partitioning-concerns

    Any suggestions?

    Glenn Renfro
    @cppwfs
    @jinishavora
    1) You are correct; you will need to change the status from STARTED to FAILED (a sketch follows below).
    2) Since remote partitioning uses Spring Cloud Deployer (not Spring Cloud Data Flow) to launch the worker tasks, SCDF does not have a way to determine platform information to properly stop the worker task. I've added GH Issue spring-cloud/spring-cloud-dataflow#3857 to resolve this problem.
    3) The current implementation prevents a user from launching on specific servers; rather, it lets the platform (Kubernetes, Cloud Foundry) distribute the worker tasks. You can implement your own deployer to add this feature.
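    A minimal sketch of that status change for point 1, assuming Spring Batch 4 APIs and that you look the execution up by an id you already track (the class and method names here are placeholders):

        import java.util.Date;

        import org.springframework.batch.core.BatchStatus;
        import org.springframework.batch.core.JobExecution;
        import org.springframework.batch.core.explore.JobExplorer;
        import org.springframework.batch.core.repository.JobRepository;

        public class StuckExecutionRepair {

            // Mark a stuck STARTED execution as FAILED so it becomes restartable.
            public static void markFailed(JobExplorer explorer, JobRepository repository, long executionId) {
                JobExecution execution = explorer.getJobExecution(executionId);
                execution.setStatus(BatchStatus.FAILED);
                execution.setEndTime(new Date());
                repository.update(execution);
                // Steps still in STARTED must be closed out too, or a restart
                // may be rejected as a duplicate running step.
                execution.getStepExecutions().stream()
                        .filter(step -> step.getStatus() == BatchStatus.STARTED)
                        .forEach(step -> {
                            step.setStatus(BatchStatus.FAILED);
                            step.setEndTime(new Date());
                            repository.update(step);
                        });
            }
        }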
    Glenn Renfro
    @cppwfs
    @javaHelper I added a note to the answer given.
    majorisit
    @majorisit
    Hi @sabbyanandan / @cppwfs, I am getting a 'Java heap space' error for a task in k8s. What is the right property to resolve this error during execution?
    Sabby Anandan
    @sabbyanandan
    @mminella ^^
    Glenn Renfro
    @cppwfs
    @majorisit If you are launching from SCDF then you can set the heap using the following deployer property: deployer.<app>.kubernetes.environmentVariables=JAVA_TOOL_OPTIONS=-Xmx1024m
    majorisit
    @majorisit
    I set something like this:
    task launch --name ds-batchjobhist-ini7 --arguments "random=1" --properties "deployer.ds-app-v2.kubernetes.environmentVariables=JAVA_TOOL_OPTIONS=-Xmx8392m"
    but I am not seeing this setting anywhere at the Java process level:
    1 root 1:38 java -Djava.security.egd=file:/dev/./urandom -Denv=${curenv} -Dspring.profiles.active=${curenv} -jar /app.jar --schemaname=dbo --spring.datasource.driverClassName=org.postgresql.Driver --datasourcename=ds --spring.cloud.task.name=ds-batchjobhist-ini7 --batchjobname=edars-batchjobhist-ini7 --databasename=ds --fetchsize=10000 --table=batch_job_hist random=1 --spring.cloud.data.flow.platformname=default --spring.cloud.task.executionid=270094
    Where do I see this setting? I am still getting an OutOfMemoryError.
    Glenn Renfro
    @cppwfs
    Hmm… what version of SCDF are you using? I added the following deployer property for the timestamp app and it worked fine.
    deployer.timestamp.kubernetes.environmentVariables=JAVA_TOOL_OPTIONS=-Xmx1224m produced…
         env:
            - name: JAVA_TOOL_OPTIONS
          value: '-Xmx1224m'
    majorisit
    @majorisit
    ╟─────────────────┼──────────────────────────────────────────╢
    ║ Versions        │ spring-cloud-dataflow-server: 2.3.0.RC1  ║
    ║                 │ Spring Cloud Data Flow Core: 2.3.0.RC1   ║
    ║                 │ Spring Cloud Dataflow UI: 2.3.0.RC1      ║
    ║                 │ Spring Cloud Data Flow Shell: 2.3.0.RC1  ║
    ╟─────────────────┼──────────────────────────────────────────╢
    Glenn Renfro
    @cppwfs
    Try the 2.4.x line.
    majorisit
    @majorisit
    Will try
    Glenn Renfro
    @cppwfs
    I tried it on the 2.5.0.BUILD-SNAPSHOT.
    keerindurthi
    @keerindurthi
    Can we have multiple tasks in one project as submodules, or should those be separate projects?
    Glenn Renfro
    @cppwfs
    You can only have one task per app. If each module produces a Java app, then yes.
    majorisit
    @majorisit
    Hi @sabbyanandan / @cppwfs, the current DataFlowTaskLaunchRequest object does not take a taskname argument. Please advise how to set JAVA_TOOL_OPTIONS at the task level when we do a dynamic task launch using the dataflow sink.
    nightswimmings
    @nightswimmings

    I keep seeing different spellings of the closeContextEnabled property across the documentation. Which is the proper one?

    spring.cloud.task.closecontextEnabled
    spring.cloud.task.closecontext.enabled
    spring.cloud.task.closecontext_enabled

    And the YAML form?

    Swyrik Thupili
    @swyrik
    Can we externalize the configuration of a Spring Cloud Task app to Spring Cloud Config?
    Either Spring Cloud Config or Spring Cloud Consul?
    Glenn Renfro
    @cppwfs
    @nightswimmings It follows the Spring Boot conventions for properties. Here are the docs from SCT: https://docs.spring.io/spring-cloud-task/docs/current/reference/#closing-the-context
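    For the YAML form asked about above, a sketch, assuming the spring.cloud.task.closecontextEnabled name used in those docs (Boot's relaxed binding also accepts forms like closecontext-enabled):

        # application.yml; set to true to close the ApplicationContext
        # when the task finishes
        spring:
          cloud:
            task:
              closecontextEnabled: true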
    @swyrik A Spring Cloud Task app is a Spring Boot app, and thus you will be able to use Spring Cloud Config.
    @majorisit Please post this question in the Spring Cloud Data Flow room.
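    For @swyrik, wiring that up is standard Spring Cloud Config client usage; a sketch, assuming the spring-cloud-starter-config dependency is on the classpath and with a placeholder server URI:

        # bootstrap.yml (Boot 2.x era); the URI is a placeholder
        spring:
          application:
            name: my-task
          cloud:
            config:
              uri: https://config-server.example.com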
    majorisit
    @majorisit

    Hi @cppwfs, I have tried setting the deployer property via JAVA_TOOL_OPTIONS. I can see it in the printenv output as well, but it's not working as expected and I am still getting an OOMKilled error. I am setting the appName in the deployer property, not the taskName.

    task launch smoke-task --arguments "random=2" --properties "deployer.smoketest-task-app.kubernetes.environmentVariables=JAVA_TOOL_OPTIONS=-Xmx8392m"

    I am running this test on a K8s environment with SCDF version 2.5.1.RELEASE.

    majorisit
    @majorisit

    Also, I am not seeing that property passed on or visible in the ps output:

    # ps -ef | grep -v grep
    1 root 0:34 java -Djava.security.egd=file:/dev/./urandom -Denv=${curenv} -Dspring.profiles.active=${curenv} -jar /app.jar --schemaname=performanceschema --spring.datasource.driverClassName=org.postgresql.Driver --datasourcename=smoketest --spring.cloud.task.name=smoke-task-perf0v6 --recordscount=10000000 --batchjobname=smoke-task-perf0v2 --databasename=smokedb --fetchsize=10000 --table=performancetable0 random=2 --spring.cloud.data.flow.platformname=default --spring.cloud.task.executionid=321528

    Sabby Anandan
    @sabbyanandan
    @cppwfs: Do you have any thoughts on this? ^^
    majorisit
    @majorisit
    Thanks @sabbyanandan. We were able to resolve this issue via a Pivotal support ticket.
    Swyrik Thupili
    @swyrik
    How do we trigger an email if a task or composed task runs longer than a certain threshold time? Are there any built-in SCDF options? @sabbyanandan @cppwfs