    DrOof
    @DrOof
    I'm loath to reset Kubernetes entirely.
    Sabby Anandan
    @sabbyanandan
    @DrOof : There's not much info here to reason through and answer.
    Guessing here: it could happen if you aren't using an RDBMS and are defaulting to an in-memory DB such as H2 instead. When Skipper or SCDF restarts without a persistent DB, the footprint gets wiped on each restart. When that happens, the data between the two could go out of sync.
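    For reference, a minimal sketch of the kind of datasource settings that keep the SCDF server on a persistent DB (all values here are placeholders):

        --spring.datasource.url=jdbc:mysql://mysql:3306/dataflow
        --spring.datasource.username=<user>
        --spring.datasource.password=<password>
        --spring.datasource.driverClassName=org.mariadb.jdbc.Driver

    Skipper needs an equivalent set of properties pointing at a persistent database of its own.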
    Saket Puranik
    @saket88
    @sabbyanandan Hi Sabby. Just wanted to know how backpressure works in the case of SCDF. Is it handled by the Spring Cloud Kafka binder?
    DrOof
    @DrOof
    Thanks @sabbyanandan I am using SCDF with MySQL in K8s.
    Sabby Anandan
    @sabbyanandan

    Hi, @saket88. SCDF doesn't get involved with the broker or the apps themselves. In other words, the apps directly interact with the broker (binder impl) of choice. The push/pull model and the payload-delivery machinery are handled natively in the broker, too.

    You may want to describe your use-cases, so we can discuss specifically what you're trying to do.

    @DrOof : Hmm, that is odd indeed. Perhaps share your stream definitions and the actions that you performed. Especially if it is repeatable, we would be able to follow what you did to get there.
    DrOof
    @DrOof
    Thanks @sabbyanandan. I just loaded up another stream.
    Something got stuck, and I got a statemachine error.
    DrOof
    @DrOof
    I think I would've had to fix it by going into the database to see exactly what went wrong.
    mikulass
    @mikulass
    Hello, sorry to disturb: is there any sample showing how to configure a remote Maven repository for a local/Docker SCDF deployment and also for a CF-based deployment? I'm trying to set SPRING_APPLICATION_JSON to force local or CF to use it, but I'm getting >> Error Message = [Failed to resolve MavenResource: sk.comp.demo.celsius.converter:celsius-converter-processor:0.0.1.RELEASE. Configured remote repository: : [springRepo]]. Any idea is highly appreciated. Thanks.
    Sabby Anandan
    @sabbyanandan
    @mikulass: Please have a look at: https://dataflow.spring.io/docs/resources/faq/#mavenconfig — it covers both local and cf configuration samples.
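    For a local/Docker deployment, the shape of that configuration is roughly the following; the repository name and URL here are placeholders, so check the FAQ for the exact layout:

        export SPRING_APPLICATION_JSON='{"maven":{"remote-repositories":{"my-repo":{"url":"https://my.nexus.example.com/repository/maven-public"}}}}'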
    mikulass
    @mikulass
    @sabbyanandan Hi, thanks a lot. I've fixed it accordingly and local/Docker finally sees the remote Maven repo. Thanks.
    mikulass
    @mikulass
    Hello, sorry to disturb again, maybe with a simple question: how can I force SCDF local/Docker to use Java 11? Is Java 11 supported? (Inside the Skipper container I see Java version 1.8.0_192.) While deploying a simple custom stream I received the following error: >> stderr_0.log: Exception in thread "main" java.lang.UnsupportedClassVersionError: sk/iwcf/iwap/scdf/DemoApplication has been compiled by a more recent version of the Java Runtime (class file version 55.0), this version of the Java Runtime only recognizes class file versions up to 52.0 ..at java.lang.ClassLoader.defineClass1(Native Method) ... Maybe I'm again lost in the documentation, or shall I recompile my DemoApplication with Java 8? Thanks in advance.
    Sabby Anandan
    @sabbyanandan

    We have an open issue on this matter: spring-cloud/spring-cloud-dataflow#3507

    There are still a few other items in terms of compatibility checking on our side, which we will revisit for SCDF 2.6. If you must use Java 11 right now, feel free to produce custom Docker images with JDK 11 for SCDF and Skipper in the meantime.
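
    As a rough sketch of such a custom image (the base image and jar name are illustrative; use whichever JDK 11 base and SCDF/Skipper artifact you actually need):

        # Hypothetical custom SCDF server image on JDK 11
        FROM adoptopenjdk/openjdk11:jre
        COPY spring-cloud-dataflow-server.jar /app.jar
        ENTRYPOINT ["java", "-jar", "/app.jar"]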

    mikulass
    @mikulass
    @sabbyanandan Thanks for clarification. Best regards.
    mikulass
    @mikulass

    Hello, sorry to disturb with this question if it is a little off topic. I'm trying to run a "simple" stream, "http --server.port=9090 | log", on SCDF deployed on Cloud Foundry. While posting some raw data to the http source from Postman, I'm getting the following response after the HTTP POST: {"timestamp": "2020-03-22T22:36:15.586+0000", "status": 500, "error": "Internal Server Error", "message": "error occurred in message handler [org.springframework.integration.amqp.outbound.AmqpOutboundEndpoint@4d7ee4a2]; nested exception is org.springframework.amqp.AmqpIOException: java.io.IOException", "path": "/"} and the log stream from CF is:
    23:36:15.568: [APP/PROC/WEB.0] 2020-03-22 22:36:15.568 INFO 6 --- [nio-8080-exec-5] o.s.a.r.c.CachingConnectionFactory : Attempting to connect to: [10.0.0.34]
    23:36:15.592: [RTR.0] simple01-http-v1...sk - [2020-03-22T22:36:15.541708626Z] "POST / HTTP/1.1" 500 4 307 "-" "PostmanRuntime/7.23.0" "178.40.175.133:36782" "10.0.16.4:61010" x_forwarded_for:"178.40.175.133" x_forwarded_proto:"http" vcap_request_id:"e699500a-50ad-4cb3-5277-6dd88ceb976e" response_time:0.050121 gorouter_time:0.000473 app_id:"f16dd979-e079-454e-adca-76e7e5451729" app_index:"0" x_b3_traceid:"52e9fef5e8395d4e" x_b3_spanid:"52e9fef5e8395d4e" x_b3_parentspanid:"-" b3:"52e9fef5e8395d4e-52e9fef5e8395d4e"

    23:36:15.585: [APP/PROC/WEB.0] 2020-03-22 22:36:15.584 ERROR 6 --- [nio-8080-exec-5] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is org.springframework.messaging.MessageHandlingException: error occurred in message handler [org.springframework.integration.amqp.outbound.AmqpOutboundEndpoint@4d7ee4a2]; nested exception is org.springframework.amqp.AmqpIOException: java.io.IOException, failedMessage=GenericMessage [payload=byte[4], headers={content-length=4, http_requestMethod=POST, host=simple01-http-v1...sk, http_requestUrl=http://simple01-http-v1...sk/, id=8916046f-a441-d68d-3454-efea6bdcedd2, cache-control=no-cache, contentType=text/plain, accept-encoding=gzip, deflate, br, user-agent=PostmanRuntime/7.23.0, accept=/, originalContentType=text/plain;charset=UTF-8, timestamp=1584916575567}]] with root cause
    23:36:15.585: [APP/PROC/WEB.0] java.io.EOFException: null
    23:36:15.585: [APP/PROC/WEB.0] at java.io.DataInputStream.readUnsignedByte(DataInputStream.java:290) ~[na:1.8.0_222]

    Any idea or pointer is highly appreciated. Thanks in advance.

    Janani Kamaraj
    @janamailzuu
    @sabbyanandan, I am trying to enable security in SCDF (deployed in Kubernetes). I added the Keycloak configs as per https://github.com/spring-cloud/spring-cloud-dataflow/issues/3488#issuecomment-529823935, but Security (Authentication and Authorization) is still showing as disabled. I attached a screenshot here. Can you please help with what we are missing?
    [screenshot: image.png]
    melakonda
    @melakonda

    Hello SCDF Team,

    I am using the SCDF command-file option "--spring.shell.commandFile". Everything is fine as long as the commands run successfully, but it gets hard when a command in the file fails: all the commands after the failed one do not execute.

    Example:
    app list
    stream undeploy <stream name>
    stream destroy <stream name>
    stream create <stream name>

    In the above example, if stream undeploy fails, then all the commands after it do not execute.

    Is there any way to fix this?

    Sabby Anandan
    @sabbyanandan

    @janamailzuu: Hard to tell what could be missing. As next steps, compare your settings with what we have in this experiment: https://github.com/jvalkeal/randomstuff/tree/master/dataflow-keycloak

    Also, I know @eskuai has made Keycloak work with SCDF on K8s. They may have more details to share and might even guide you through the configs.

    @melakonda: The commandFile option simply runs whatever you give it, like a bash script. If any of the commands fails or gets interrupted, there's no automated way to recover from it.

    Instead of relying on this, please consider using the Java DSL. That way, you can programmatically build, deploy, and scale streams and tasks. You will also have control over exception handling, so you can either gracefully recover and proceed or stop abruptly; you will have choices and flexibility. Besides that, you can unit/IT test your stream/task deployments in isolation in this model.
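    As a rough sketch of what that looks like, assuming the spring-cloud-dataflow-rest-client dependency (the server URL, stream name, and definition below are placeholders):

        import java.net.URI;

        import org.springframework.cloud.dataflow.rest.client.DataFlowTemplate;
        import org.springframework.cloud.dataflow.rest.client.dsl.Stream;

        public class StreamDeployer {
            public static void main(String[] args) {
                // Connect to a running SCDF server (URL is a placeholder).
                DataFlowTemplate dataFlow = new DataFlowTemplate(URI.create("http://localhost:9393"));
                try {
                    // Create and deploy a stream; unlike a command file,
                    // each step can carry its own error handling.
                    Stream stream = Stream.builder(dataFlow)
                            .name("my-stream")          // placeholder name
                            .definition("http | log")
                            .create()
                            .deploy();
                    System.out.println("Status: " + stream.getStatus());
                } catch (Exception e) {
                    // Decide here whether to retry, clean up, or abort.
                    System.err.println("Deployment failed: " + e.getMessage());
                }
            }
        }

    With the failure handling in your own hands, a failed stream undeploy no longer blocks the steps after it.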

    eynet
    @eynet

    Hello SCDF Team,

    I am using SCDF with Microsoft SQL Server in K8s, but running batch jobs ends up with the following error:

    Host Port:     <none>
        Args:
          --spring.datasource.username=[redacted]
          --spring.cloud.task.name=Application1
          --spring.datasource.url=[redacted]
          --spring.datasource.driverClassName=com.microsoft.sqlserver.jdbc.SQLServerDriver
          --spring.datasource.password=[redacted]
          --spring.cloud.data.flow.platformname=default
          --spring.cloud.task.executionid=37
        State:          Terminated
          Reason:       ContainerCannotRun
          Message:      OCI runtime create failed: container_linux.go:344: starting container process caused "exec: \"--spring.datasource.username=[redacted]\": executable file not found in $PATH": unknown
    Glenn Renfro
    @cppwfs
    Do you see the same behavior if you use another DB, like MySQL?
    eynet
    @eynet
    @cppwfs No, MySQL works fine.
    Glenn Renfro
    @cppwfs
    Is there a log available?
    eynet
    @eynet
    @cppwfs No, the batch task fails to start with the message "OCI runtime create failed: container_linux.go:344: starting container process caused "exec: \"--spring.datasource.username=[redacted]\": executable file not found in $PATH": unknown". No other logs are generated by the pods/application.
    Glenn Renfro
    @cppwfs
    Hmm… This doesn’t look like an SCDF problem. It looks like something else, possibly a mount or a security issue.
    to-wi
    @to-wi
    Hi there!
    Does anyone have experience integrating an existing Prometheus/Grafana installation with SCDF, the same way as if I had deployed it with features.monitoring.enabled=true? They're already running in the cluster for general monitoring purposes, and just for SCDF I don't want to (and maybe even cannot) set them up a second time. I'd be thankful for any hints on where and what to configure :)
    Sabby Anandan
    @sabbyanandan
    No, today the Helm chart doesn't offer that functionality. We have had the community contribute similar capabilities for DB and message-broker configurations, so please feel free to submit a PR against the chart for Prometheus + Grafana; we can collaborate here.
    eynet
    @eynet
    @cppwfs Applications work well with other DBs (MySQL) in the same cluster. The issue happens only when configuring SQL Server as the DB source.
    Janani Kamaraj
    @janamailzuu
    @sabbyanandan Hi, we would like to customize the role-mapping behaviour. We are currently using the Helm chart for deploying SCDF in Kubernetes. This document (https://docs.spring.io/spring-cloud-dataflow/docs/current/reference/htmlsingle/#configuration-security-customizing-authorization) says to "Provide your own Spring bean definition that extends Spring Cloud Data Flow's AuthorityMapper interface" for customizing the behaviour. Can you please help with how to do this, as we are just using the image directly now?
    hbutalia
    @hbutalia
    I have more than one job in a jar, all marked with @EnableTask, but in the task_task_batch table an entry shows up for only one batch job, while in the other tables all three jobs' entries are present. The DB is Postgres.
    Because of this, I am not able to view job details in the UI.
    Glenn Renfro
    @cppwfs
    @hbutalia I’ve responded to your GH issue.
    @eynet Without any kind of logs it is hard for me to help you with this problem.
    hbutalia
    @hbutalia
    @eynet Please let me know how to get the logs; I will attach them to the issue.
    eynet
    @eynet
    @cppwfs @hbutalia I've changed to "deployer.sampleapp.kubernetes.entryPointStyle=shell" (previously it was the default, exec), and I was able to run the application, but it fails. I've uploaded the complete logs here: https://raw.githubusercontent.com/eynet/logs/master/logs.txt. Please have a look.
    Glenn Renfro
    @cppwfs
    From the log it looks like the BILL_STATEMENTS table isn’t present.
    Janani Kamaraj
    @janamailzuu
    We are looking for a way to customize the role-mapping behaviour. We are currently using the Helm chart for deploying SCDF in Kubernetes. This document (https://docs.spring.io/spring-cloud-dataflow/docs/current/reference/htmlsingle/#configuration-security-customizing-authorization) says, "Provide your own Spring bean definition that extends Spring Cloud Data Flow's AuthorityMapper interface". Can someone please help with how to do this, as we are just using the image directly now?
    Rayce Brossette
    @rayce_gitlab
    @to-wi If you only want the Grafana integration separately, what we do is provide our own Dataflow server properties via the server.configMap property, and in that ConfigMap we set spring.cloud.dataflow.grafana-info.url: https://my-grafana-hostname. We replicate all the default ConfigMap properties that the Helm chart normally sets into our custom ConfigMap, and this works perfectly for us! :)
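    A rough sketch of such a ConfigMap follows; the metadata name and the application.yaml data key are assumptions, so mirror whatever the chart's default ConfigMap actually uses:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: my-scdf-server-config   # hypothetical; referenced via the chart's server.configMap value
        data:
          application.yaml: |-
            spring:
              cloud:
                dataflow:
                  grafana-info:
                    url: https://my-grafana-hostname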
    sachinsaju
    @sachinsaju
    I have two namespaces in my K8s cluster, namespace1 and namespace2. The SCDF server is in namespace1. When I launched exampletask with deployer.exampletask.kubernetes.namespace=namespace2, the task still started in namespace1. Is there any other property to be configured?
    DrOof
    @DrOof
    Hi. We're running an experiment with Flux inside SCDF. However, by default it doesn't serialize as we were hoping.
    Any experience using Flux inside SCDF apps?
    I'm sure there's some out-of-the-box binder for it, so we don't have to override the serializers.
    Glenn Renfro
    @cppwfs
    @sachinsaju You can’t select a namespace at task launch time by using deployer.<task-app>.kubernetes.namespace. You will need to add a new platform deployment. We have an easy-to-use guide located here: https://dataflow.spring.io/docs/recipes/multi-platform-deployment/multiple-platform-accounts/
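    As a rough sketch, the additional platform account is declared on the SCDF server with properties along these lines (the account name ns2 is a placeholder; see the guide for the full set of settings):

        spring.cloud.dataflow.task.platform.kubernetes.accounts.default.namespace=namespace1
        spring.cloud.dataflow.task.platform.kubernetes.accounts.ns2.namespace=namespace2

    You then pick the target platform account when launching the task.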
    Venu guntupalli
    @venuguntupalli

    Hi everyone, I have a question on SCDF and reliable message delivery. We have built a stream that reads data from Azure Service Bus (ASB). We use Kafka binders in our implementation in a K8s environment. The stream orchestration is asb (source) -> decryptor (processor) -> scrubber (processor) -> REST invoker (processor) -> message merger (processor) -> Kafka (sink). These messages need to be delivered reliably to the endpoint, as there is no reply mechanism for them.

    The main challenge is that the ACK to ASB should happen only after a message is reliably delivered to the Kafka backbone. With a regular Kafka producer we can deal with this by using callbacks (triggering the ACK from the callback method). How can we achieve this with the Kafka binders? We are also open to the idea of somehow delivering messages reliably from ASB to Kafka first and having the stream designed above read from Kafka rather than from ASB. Please advise.

    Sabby Anandan
    @sabbyanandan

    @DrOof : The concept of a binder belongs at the level of Spring Cloud Stream, and it is not SCDF's responsibility. SCDF simply deploys the stream apps defined in the stream definition.

    It sounds like you are attempting to use a brokerless binder (we don't have such a thing today). Perhaps expand your post with as much information as possible in the Spring Cloud Stream Gitter channel; one of us in that room can help.

    Sabby Anandan
    @sabbyanandan

    @venuguntupalli: We don't have a binder implementation for ASB, or for that matter an ASB source app that we support today, so I am not sure how to reason through what you're asking.

    There's an option in the Kafka binder for producers to block the calling thread until the send completes successfully or fails with an exception.

    Perhaps play with the following property in your ASB source:

    spring.cloud.stream.kafka.bindings.<channelName>.producer.sync=true
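
    As a rough sketch, that property can be applied at deploy time with the app.<app-name>. prefix; the stream and app names here are placeholders, and the output binding name depends on your source app:

        stream deploy asb-pipeline --properties "app.asb.spring.cloud.stream.kafka.bindings.output.producer.sync=true"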