    Glenn Renfro
    @cppwfs
    Also what bean was it creating?
    Aruna
    @arugtechie
    boot 2.1.7.RELEASE Spring-Cloud Greenwich.SR2
    Aruna
    @arugtechie
    I posted the stacktrace at this link
    Glenn Renfro
    @cppwfs
    @arugtechie I added a comment to your issue on Stack Overflow.
    Aruna
    @arugtechie
    @cppwfs thanks.
    Aruna
    @arugtechie
    Hi, for the Cloud Task batch, I want to set up an integration test for an individual step using Spring Batch's JobLauncherTestUtils. But before the test method is called, DeployerStepExecutionHandler is invoked, and there are assertions on the job execution id and step execution id. How can I test a step?
    Glenn Renfro
    @cppwfs
    @arugtechie Can you post this question in the Spring Batch room?
    Aruna
    @arugtechie
    Ok
    Aruna
    @arugtechie
    Hi, we are facing an issue with a partitioned cloud batch task: the task launch is successful, but the step execution is not getting updated because the worker task had some other startup errors on Cloud Foundry. On the PCF console Tasks tab the task is marked as failed, and the master job execution is not updated. What could be the reason? How can we handle this? Thanks!
    We tested with, say, 10 partitions and did not see the startup error.
    Aruna
    @arugtechie
    When we tested with 70 partitions, we see that the master task is stuck and the step and job executions are not updated; they are still in STARTING and STARTED states.
    Aruna
    @arugtechie
    Hi, we are trying to retry launching a task that was not successfully launched by the Spring Cloud Foundry deployer. How do we go about this? Any hint or sample I can refer to would be a great help. Thanks!
    Hi, we are observing that DeployerPartitioner is not getting called if the previous launch request has not timed out. Even after using SimpleAsyncTaskExecutor to launch a new job, new tasks are not being launched.
    ashishreddyv
    @ashishreddyv

    Hi All, I have a Spring Cloud Task that accepts some required arguments and does some processing. The application works as expected, but I have an issue where the application's runner executes before my unit tests start, and the test setup fails. Here's my current setup:

    @SpringBootApplication(exclude = {ServletWebServerFactoryAutoConfiguration.class, WebMvcAutoConfiguration.class})
    @EnableTask
    public class MyTaskApplication {
        public static void main(String[] args) {
            SpringApplication.run(MyTaskApplication.class, args);
        }
    }

    My actual task:

    @Component
    public class MyTask implements ApplicationRunner {
        @Override
        public void run(ApplicationArguments args) {
            List<String> arg1 = args.getOptionValues("arg1");
            List<String> arg2 = args.getOptionValues("arg2");
            if (CollectionUtils.isNotEmpty(arg1) && CollectionUtils.isNotEmpty(arg2)) {
                executeTask(arg1.get(0), arg2.get(0));
            } else {
                throw new IllegalArgumentException("One of the required parameters is missing : [arg1, arg2].");
            }
        }

        private void executeTask(String a1, String a2) {
            // doSomething
        }
    }

    My Test Class:

    @RunWith(SpringRunner.class)
    @SpringBootTest(classes = {MyTaskApplication.class}, webEnvironment = WebEnvironment.NONE)
    public class MyTaskTest {
        @Autowired
        private MyTask myTask;

        @Test
        public void testTask() {
            String[] arguments = { "--arg1=argument1  --arg2=argument2" };
            myTask.run(new DefaultApplicationArguments(arguments));
        }
    }

    When I run the MyTaskTest.testTask() test method, the MyTask.run() method gets called with no arguments before the actual test starts, and my test setup fails with an IllegalArgumentException being thrown. Is there something I'm missing or doing wrong here?

    Glenn Renfro
    @cppwfs
    @ashishreddyv Change up your test to look something like:
    public class Demo2ApplicationTests {
        @Test
        public void testTask() {
            String[] arguments = { "--arg1=argument1", "--arg2=argument2" };
            ConfigurableApplicationContext context = SpringApplication.run(Demo2Application.class, arguments);
            MyTask myTask = context.getBean(MyTask.class);
            Assert.assertNotNull(myTask);
        }
    }
    Glenn Renfro
    @cppwfs
    @arugtechie I am so sorry about the delayed response! If a partitioned app fails to update its step information, the managing app will block waiting for that app to complete; the managing app assumes the partitioned app is still running. You can set a timeout on the PartitionHandler to set a maximum time for workers to complete (currently it is -1, meaning no max).
    Also why did the worker app fail? Is there a stack trace?
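    The timeout Glenn mentions can be sketched as bean configuration. This is only an illustrative fragment, modeled on the DeployerPartitionHandler described in the Spring Cloud Task reference docs; the constructor arguments, the setTimeout mutator, and all values here are assumptions to be checked against the docs, not details from this thread:

```java
// Illustrative sketch: cap how long the managing app waits for
// worker partitions (the default of -1 means wait indefinitely).
@Bean
public PartitionHandler partitionHandler(TaskLauncher taskLauncher,
        JobExplorer jobExplorer) {
    DeployerPartitionHandler handler = new DeployerPartitionHandler(
            taskLauncher, jobExplorer, workerResource, "workerStep");
    handler.setMaxWorkers(10);            // placeholder value
    handler.setTimeout(60 * 60 * 1000L);  // assumed setter; e.g. one hour, in ms
    return handler;
}
```

    With a timeout set, a worker that dies without updating its step execution no longer blocks the managing app forever.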
    ashishreddyv
    @ashishreddyv
    @cppwfs Thank you for your response. The solution seems to solve the current use case, but the complexity of this test grows because I have a few classes that need to be autowired into the test class (not referring to the MyTask class here), so I needed to use the @SpringBootTest annotation on my test class. Is there any good example of a Spring Cloud Task that accepts arguments and serves my use case that you could point me to?
    Glenn Renfro
    @cppwfs
    When unit testing you can unit test the components as you would with any other Spring Boot application.
    majorisit
    @majorisit
    Hi @cppwfs, @sabbyanandan,
    To launch thousands of short-lived SCDF tasks in k8s, you recommended the Kubernetes task launcher, and it is working successfully. I am just curious to know the advantages of Spring Cloud Task schedulers. Do you recommend them for thousands of short-lived jobs? Please let us know which one better meets our requirements.
    Glenn Renfro
    @cppwfs
    @majorisit I don’t know your use case. That would determine what choice best fits you. It could be a blend.
    majorisit
    @majorisit

    @cppwfs, assume we have close to 5000 short-lived tasks that run every 30 minutes to pull incremental data from a source database and send it to Kafka as intermittent storage. We managed this very well in the spring-xd environment by creating a separate stream definition for each job along with trigger --cron:

    job create --name performancetable_v1_job --definition "pipeline-smoketest --datasourceName=smoketest --databaseName=smokedb --schemaName=performanceschema --table=performancetable99 --fetchSize=1000 --recordsCount=20000000" --deploy

    stream create --name performancetable_v1_stream --definition "trigger --cron='0 0/30 * * * ?' > queue:job:performancetable_v1_job" --deploy

    We are looking for a similar approach in the SCDF + k8s environment to launch tasks individually for each table based on a cron expression.
    Please let me know if you need more details.

    Glenn Renfro
    @cppwfs
    Scheduling using k8s CronJobs sounds like the preferred way of launching tasks for the use case above. But if that does not fit your needs, you can implement the behavior from the XD example above using the following instructions: https://docs.spring.io/spring-cloud-dataflow/docs/current/reference/htmlsingle/#spring-cloud-dataflow-launch-tasks-from-stream. Just use trigger instead of time.
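    As one concrete shape for the k8s CronJob route, here is a minimal manifest sketch; the name, image, and arguments are placeholders loosely modeled on the XD job above, not details from this thread:

```yaml
apiVersion: batch/v1beta1          # batch/v1 on newer clusters
kind: CronJob
metadata:
  name: performancetable-v1-task   # placeholder name
spec:
  schedule: "0/30 * * * *"         # every 30 minutes
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: task
              image: myrepo/pipeline-smoketest:latest   # placeholder image
              args:
                - "--datasourceName=smoketest"
                - "--table=performancetable99"
```

    One CronJob per table keeps each schedule and argument set independent, mirroring the per-job stream definitions used in XD.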
    ashishreddyv
    @ashishreddyv
    Question: Should the business logic for a spring-cloud-task always be in a class that implements CommandLineRunner or ApplicationRunner? Do we have any other way?
    Glenn Renfro
    @cppwfs
    I wouldn’t put the logic directly into the CommandLineRunner or ApplicationRunner. I would use a service bean that implements the necessary logic, in the same manner as you would treat a controller.
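    The separation Glenn describes can be sketched in plain Java (no Spring types; TaskService and TaskRunner are hypothetical names, not from any library): the runner only validates and extracts arguments, while the business logic lives in a service that can be unit tested without starting the application.

```java
import java.util.List;
import java.util.Map;

// Hypothetical service bean holding the business logic.
class TaskService {
    String execute(String arg1, String arg2) {
        // Placeholder for the real work.
        return "processed " + arg1 + " and " + arg2;
    }
}

// Plays the role of the ApplicationRunner: parse, validate, delegate.
class TaskRunner {
    private final TaskService service;

    TaskRunner(TaskService service) {
        this.service = service;
    }

    String run(Map<String, List<String>> args) {
        List<String> a1 = args.get("arg1");
        List<String> a2 = args.get("arg2");
        if (a1 == null || a1.isEmpty() || a2 == null || a2.isEmpty()) {
            throw new IllegalArgumentException(
                    "One of the required parameters is missing: [arg1, arg2].");
        }
        return service.execute(a1.get(0), a2.get(0));
    }
}
```

    In the real application the runner would be a thin @Component wrapper and the service an injected bean, so tests exercise TaskService directly.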
    majorisit
    @majorisit

    @cppwfs, I need to set the JVM arg -Denv=dev or -Denv=prod based on the k8s environment. I was able to set it and make it work using a Dockerfile:

    ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-Denv=dev","-jar","/app.jar"]

    Could you please help me set this argument at the SCDF server level, across all tasks?

    Also, please let me know how to pass this argument at the individual task and stream level as well.
    Glenn Renfro
    @cppwfs
    @majorisit I believe the answer to both of your questions can be found in the Application and Server Properties section of the reference documentation, specifically how to use deployer properties: https://docs.spring.io/spring-cloud-dataflow/docs/current/reference/htmlsingle/#_application_and_server_properties
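    One hedged sketch of the deployer-property approach, assuming the Kubernetes deployer's environmentVariables property and using JAVA_TOOL_OPTIONS to carry the flag; the exact property keys and the task name mytask are placeholders that should be verified against the linked docs:

```properties
# At launch time, scoped to one task (mytask is a placeholder name):
deployer.mytask.kubernetes.environmentVariables=JAVA_TOOL_OPTIONS=-Denv=dev

# On the SCDF server, as a global default applied to all launched apps
# (assumed key; check the Application and Server Properties section):
spring.cloud.deployer.kubernetes.environmentVariables=JAVA_TOOL_OPTIONS=-Denv=dev
```

    JAVA_TOOL_OPTIONS is picked up by the JVM at startup, which avoids baking -Denv into the Dockerfile ENTRYPOINT per environment.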
    majorisit
    @majorisit
    Thank you @cppwfs
    venkatasreekanth
    @venkatasreekanth
    I am seeing this in the logs: 2019-10-25 14:41:38.271 WARN 6343 --- [p-nio-80-exec-2] .m.m.a.ExceptionHandlerExceptionResolver : Resolved [java.lang.IllegalStateException: The maximum concurrent task executions [20] is at its limit.]. I cleared the jobs where start_time and end_time were null, and there are no jobs with status RUNNING, but the server won't execute any jobs.
    Glenn Renfro
    @cppwfs
    @venkatasreekanth ask this question in the Spring Cloud Data Flow room.
    Thanks!
    venkatasreekanth
    @venkatasreekanth
    @cppwfs did that, got no response
    venkatasreekanth
    @venkatasreekanth
    2019-11-13 18:00:33.849 ERROR 16104 --- [           main] o.s.batch.core.job.AbstractJob           : Encountered fatal error executing job
    
    org.springframework.dao.DataAccessResourceFailureException: Could not increment identity; nested exception is com.microsoft.sqlserver.jdbc.SQLServerException: Transaction (Process ID 2253) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
            at org.springframework.jdbc.support.incrementer.AbstractIdentityColumnMaxValueIncrementer.getNextKey(AbstractIdentityColumnMaxValueIncrementer.java:113) ~[spring-jdbc-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]
    
    2019-11-13 18:00:26.339 ERROR 16043 --- [           main] o.s.batch.core.job.AbstractJob           : Encountered fatal error executing job
    
    org.springframework.dao.DataAccessResourceFailureException: Could not increment identity; nested exception is com.microsoft.sqlserver.jdbc.SQLServerException: Transaction (Process ID 1874) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
            at org.springframework.jdbc.support.incrementer.AbstractIdentityColumnMaxValueIncrementer.getNextKey(AbstractIdentityColumnMaxValueIncrementer.java:113) ~[spring-jdbc-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]
            at org.springframework.jdbc.support.incrementer.AbstractDataFieldMaxValueIncrementer.nextLongValue(AbstractDataFieldMaxValueIncrementer.java:128) ~[spring-jdbc-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]
    I changed the isolation level to ISOLATION_REPEATABLE_READ, but I still see deadlocks.
    These tasks are running on SCDF 1.7.2.
    venkatasreekanth
    @venkatasreekanth
    Can Read Committed Snapshot Isolation (RCSI) be used on SQL Server to overcome the deadlocks?
    venkatasreekanth
    @venkatasreekanth
    @cppwfs could you help me out with the above issue? I will also submit a bug if you want me to
    Glenn Renfro
    @cppwfs
    @venkatasreekanth sorry for the delay. Can you share a little more detail? I see the batch job is having a deadlock issue. What table is having the problem?
    venkatasreekanth
    @venkatasreekanth
    @cppwfs the DBA says this is where the issue is, in the DELETE statements:
    SPID 2253(victim): delete from BATCH_STEP_EXECUTION_SEQ where ID < 131203
    SPID 1964: delete from BATCH_STEP_EXECUTION_SEQ where ID < 131201
    SPID 1874(victim): delete from BATCH_STEP_EXECUTION_SEQ where ID < 131200
    we actually had 3 jobs deadlock on this
    Glenn Renfro
    @cppwfs
    @venkatasreekanth This looks like it's a Spring Batch question more than a Spring Cloud Task question. Please provide the detail in the Spring Batch room, and also provide the database type you are using.
    Glenn Renfro
    @cppwfs
    The delete is putting some kind of lock on the table or rows while you are running jobs, and your jobs are the victims.
    venkatasreekanth
    @venkatasreekanth
    @cppwfs This happens in conjunction with the identity increments.
    We have enabled RCSI on SQL Server to overcome the deadlock issue, but are now running into id issues:
    Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
    2019-12-04 20:31:01.877 ERROR 23018 --- [           main] o.s.c.t.listener.TaskLifecycleListener   : An event to end a task has been received for a task that has not yet started.
    2019-12-04 20:31:01.882 ERROR 23018 --- [           main] o.s.boot.SpringApplication               : Application run failed
    
    org.springframework.context.ApplicationContextException: Failed to start bean 'taskLifecycleListener'; nested exception is java.lang.IllegalArgumentException: Invalid TaskExecution, ID 180017 not found
            at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:185) ~[spring-context-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]
            at org.springframework.context.support.DefaultLifecycleProcessor.access$200(DefaultLifecycleProcessor.java:53) ~[spring-context-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]
            at org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.start(DefaultLifecycleProcessor.java:360) ~[spring-context-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]
            at org.springframework.context.support.DefaultLifecycleProcessor.startBeans(DefaultLifecycleProcessor.java:158) ~[spring-context-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]
            at org.springframework.context.support.DefaultLifecycleProcessor.onRefresh(DefaultLifecycleProcessor.java:122) ~[spring-context-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]
            at org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:883) ~[spring-context-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]
            at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:551) ~[spring-context-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]
            at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:754) [spring-boot-2.0.6.RELEASE.jar!/:2.0.6.RELEASE]
            at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:386) [spring-boot-2.0.6.RELEASE.jar!/:2.0.6.RELEASE]
            at org.springframework.boot.SpringApplication.run(SpringApplication.java:307) [spring-boot-2.0.6.RELEASE.jar!/:2.0.6.RELEASE]
            at org.springframework.boot.SpringApplication.run(SpringApplication.java:1242) [spring-boot-2.0.6.RELEASE.jar!/:2.0.6.RELEASE]
            at org.springframework.boot.SpringApplication.run(SpringApplication.java:1230) [spring-boot-2.0.6.RELEASE.jar!/:2.0.6.RELEASE]
            at com.digikey.batch.PimBatchApplication.main(PimBatchApplication.java:22) [classes!/:20191014.1-master]
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_191]
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_191]
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_191]
            at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_191]
            at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:48) [PIMBatch-20191014.1-master.jar:20191014.1-master]
            at org.springframework.boot.loader.Launcher.launch(Launcher.java:87) [PIMBatch-20191014.1-master.jar:20191014.1-master]
            at org.springframework.boot.loader.Launcher.launch(Launcher.java:50) [PIMBatch-20191014.1-master.jar:20191014.1-master]
            at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:51) [PIMBatch-20191014.1-master.jar:20191014.1-master]
    Caused by: java.lang.IllegalArgumentException: Invalid TaskExecution, ID 180017 not found
            at org.springframework.util.Assert.notNull(Assert.java:193) ~[spring-core-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]
            at org.springframework.cloud.task.listener.TaskLifecycleListener.doTaskStart(TaskLifecycleListener.java:233) ~[spring-cloud-task-core-2.0.0.RELEASE.jar!/:2.0.0.RELEASE]
            at org.springframework.cloud.task.listener.TaskLifecycleListener.start(TaskLifecycleListener.java:355) ~[spring-cloud-task-core-2.0.0
    Glenn Renfro
    @cppwfs
    When a user requests a task launch from Spring Cloud Data Flow, it creates an entry in the task-execution table and then launches the task with spring.cloud.task.execution-id set to the next available task-execution id. When your task runs, it populates this task-execution record as it proceeds with the execution. In the error above, this record had not been created/committed before the task looked up the entry.
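    As a toy illustration of that handshake, here is plain Java with a Map standing in for the task-execution table; none of these class or method names come from Spring Cloud Task. The "server" side inserts a row and hands the id to the task; if the row is not visible when the task starts, the lookup fails with the same message as the error above.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for the task-execution table and the two
// halves of the launch handshake described above.
class TaskExecutionStore {
    private final Map<Long, String> table = new HashMap<>();
    private long nextId = 0;

    // "Server" side: create the task-execution record, return its id.
    long launch(String taskName) {
        long id = ++nextId;
        table.put(id, taskName);
        return id;
    }

    // "Task" side: look up the record for the id it was launched with.
    // A missing (e.g. not-yet-committed) row fails like the error above.
    String start(long executionId) {
        String record = table.get(executionId);
        if (record == null) {
            throw new IllegalArgumentException(
                    "Invalid TaskExecution, ID " + executionId + " not found");
        }
        return record;
    }
}
```

    Under RCSI the task's read can see a pre-insert snapshot, which is one way the "row not found" branch can be hit even though the server did insert the row.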
    venkatasreekanth
    @venkatasreekanth
    @cppwfs could this be an issue with how RCSI works on SQL Server? Is there a production db you recommend? I think we have had it with SQL Server.
    Glenn Renfro
    @cppwfs
    Before giving up on SQL Server, ask the question from the 19th in the Spring Batch room and see if they may have a solution to the problem before committing to RCSI.