spring.cloud.dataflow.applicationProperties.stream.management.metrics.export.prometheus.enabled=true
spring.cloud.dataflow.applicationProperties.stream.spring.cloud.streamapp.security.enabled=false
spring.cloud.dataflow.applicationProperties.stream.management.endpoints.web.exposure.include=prometheus,info,health
spring.cloud.dataflow.grafana-info.url=http://localhost:3000
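For context, these are server-level properties, so they would typically be supplied when the Data Flow server itself is started. A minimal sketch, assuming a local server jar (the jar name and version here are illustrative, not taken from this thread):

java -jar spring-cloud-dataflow-server-local-1.7.3.RELEASE.jar \
  --spring.cloud.dataflow.applicationProperties.stream.management.metrics.export.prometheus.enabled=true \
  --spring.cloud.dataflow.applicationProperties.stream.spring.cloud.streamapp.security.enabled=false \
  --spring.cloud.dataflow.applicationProperties.stream.management.endpoints.web.exposure.include=prometheus,info,health \
  --spring.cloud.dataflow.grafana-info.url=http://localhost:3000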
Our use case:
We are migrating jobs that used to run on a home-grown scheduler. All of the jobs were running in the same VM, which is obviously less than ideal for a company of our size. So we have been pulling jobs out of that scheduler and reimplementing them as Spring Cloud Task applications. The legacy scheduler now just makes an API call to the SCDF server to launch each task. We have 100+ scheduled jobs, of which we have migrated roughly 10%.
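To illustrate the kind of call the legacy scheduler would make, here is a minimal sketch using the Data Flow REST endpoint for launching a task execution; the server host and the task definition name my-migrated-job are assumptions for illustration:

curl -X POST "http://scdf-server:9393/tasks/executions?name=my-migrated-job"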
Hi @sabbyanandan, we are running a Spring Cloud Data Flow server on Kubernetes. When creating a task app from the SCDF shell, we reference a Maven metadata artifact in the registration command for the whitelisted properties.
app register --name smoketest --type task --uri docker://scdf-smoketest:latest --metadata-uri maven://com.pipeline.scdf:smoketest-task-app:jar:metadata:1.0.0-SNAPSHOT
The app got created successfully. But when we run the app info command in the SCDF shell, we get an SSLHandshakeException in the SCDF server logs.
Caused by: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
Is there any way to skip SSL validation on the SCDF server side? Please help us resolve this issue.
Hi, @majorisit. No-cert errors are no fun. I don't think we have a way to skip SSL validation when resolving the companion metadata artifact for the apps. If you cannot fix the cert issue, perhaps you could host the metadata JARs at an HTTP location inside your network and register the metadata JAR with the HTTP location of the JAR.
dataflow:>app register --name time2 --type source --metadata-uri http://repo.spring.io/milestone/org/springframework/cloud/stream/app/time-source-rabbit/2.1.0.M2/time-source-rabbit-2.1.0.M2-metadata.jar --uri docker:springcloudstream/time-source-kafka:2.1.0.M2
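Alternatively, if fixing the cert is an option, a common approach (standard JVM behavior, not SCDF-specific) is to import the internal repository's certificate into the truststore of the JVM that runs the Data Flow server. A sketch, where the alias, certificate file, and truststore path are illustrative and depend on your JDK layout:

keytool -importcert -alias internal-maven-repo -file repo-cert.pem \
  -keystore $JAVA_HOME/jre/lib/security/cacerts -storepass changeit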
Hi, we have just updated from SCDF 1.5.0 to 1.7.3 and have an issue with some of our existing streams not being displayed in the dashboard. I think it might be to do with commas or quotes in the stream definition (simple ones without either do work).
For example we have a stream already defined:
rabbit --queues=logs --outputType=text/plain | filter --expression=!#jsonPath(payload,'$.type').equals('test') | mongodb --mongodb.database=logs --collection=responses
If I try to view the stream in the dashboard, I just get 'Loading ....', and in the web console I see
TokenizationError: Unexpected character
ERROR TypeError: "t.lines.nodes is undefined"
So I can't deploy etc from the dashboard, but deploy/undeploy still works from the shell.
However, if I try to create a new version in the shell with the same definition, e.g.
stream create --name newlogs --definition "rabbit --queues=logs --outputType=text/plain | filter --expression=!#jsonPath(payload,'$.type').equals('test') | mongodb --mongodb.database=logs --collection=responses"
bash: syntax error near unexpected token `('
Has something changed with the syntax or do I need to escape something?
OK, the problem seems to be that I now need to quote anything with a comma (i.e. the filter expression here), whereas I didn't before. If I use
stream create --name newlogs2 --definition "rabbit --queues=logs --outputType=text/plain | filter --expression='!#jsonPath(payload,'''$.type''').equals('''test''')' | mongodb --mongodb.database=logs --collection=responses"
it will display correctly in the dashboard
Hi, @djgeary! You're correct. We had to revise the parser when we added support for the Stream Application DSL; since this feature introduced a new delimiter for streams, the overall parser had to be reworked a bit via spring-cloud/spring-cloud-dataflow@7c3a99c.
Typically, we are very careful not to break anything in a point release. Unfortunately, we had to in this case to avoid several downstream impacts, and we didn't do a good job of calling it out clearly in the docs. Sorry about that!
Hello, I've been getting the following error and am not sure what's causing it 🤔 Dependencies were updated, but that's it.
The bean 'transactionManager', defined in org.springframework.cloud.task.configuration.SimpleTaskConfiguration, could not be registered. A bean with that name has already been defined in class path resource [org/springframework/batch/core/configuration/annotation/SimpleBatchConfiguration.class] and overriding is disabled.
spring.main.allow-bean-definition-overriding is already set to true, yet the action suggested by the error message is to set exactly that property to true.
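One thing worth double-checking (an assumption about the setup, not a confirmed fix): whether the property actually reaches the application at startup. Passing it as a command-line argument rules out a mis-placed properties file, since command-line args take the highest precedence in Spring Boot; the jar name here is illustrative:

java -jar my-task-app.jar --spring.main.allow-bean-definition-overriding=true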
@djgeary: Thanks for the feedback. I will look into (1) from the UI perspective. For (2), though, when we implemented it, we didn't have a way to determine whether the Task is running, or its current status, apart from inspecting the start/end times. For situations that aren't in our control (such as a pod crash), I can see how it would trigger a false alarm.
Feel free to open an issue for both of them, and we will review them on our side. If you have any suggestions for a solution, feel free to submit them as PRs.
If I understand your question correctly, you'd like to see the connected graph of all the streams that are attached to a "primary pipeline" via TAPs. Did I get that right?
If yes, there's an option for that on the stream-list page. If you click the i icon of the "primary pipeline", it takes you to the stream-details page. On that page, you'd see a Graph tab, which shows all the TAP connections associated with it.
timer = time --date-format=hhmmss | log
minutes = :timer.time > transform --expression='payload.substring(2,4)' | log
seconds = :timer.time > transform --expression='payload.substring(2)' | log
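For reference, a sketch of how these three streams could be created from the shell (the names follow the definitions above; --deploy is optional and deploys immediately):

dataflow:>stream create --name timer --definition "time --date-format=hhmmss | log" --deploy
dataflow:>stream create --name minutes --definition ":timer.time > transform --expression='payload.substring(2,4)' | log" --deploy
dataflow:>stream create --name seconds --definition ":timer.time > transform --expression='payload.substring(2)' | log" --deploy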
timer is the "primary pipeline", so after when these streams are created, you can click the
i icon of
timer stream from the list page to view the connected graph.