    Anil Kumar
    @akandach
    looks same
    Raymond Roestenburg
    @RayRoestenburg
    Ok, great
    Anil Kumar
    @akandach
    Getting this error
    org.apache.flink.util.FlinkException: Could not create the DispatcherResourceManagerComponent.
    [error] at org.apache.flink.runtime.entrypoint.component.DefaultDispatcherResourceManagerComponentFactory.create(DefaultDispatcherResourceManagerComponentFactory.java:261)
    [error] at org.apache.flink.runtime.minicluster.MiniCluster.createDispatcherResourceManagerComponents(MiniCluster.java:394)
    [error] at org.apache.flink.runtime.minicluster.MiniCluster.setupDispatcherResourceManagerComponents(MiniCluster.java:360)
    [error] at org.apache.flink.runtime.minicluster.MiniCluster.start(MiniCluster.java:314)
    [error] at org.apache.flink.client.deployment.executors.LocalExecutor.startMiniCluster(LocalExecutor.java:117)
    [error] at org.apache.flink.client.deployment.executors.LocalExecutor.execute(LocalExecutor.java:63)
    [error] at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:1733)
    [error] at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1634)
    [error] at org.apache.flink.streaming.api.environment.LocalStreamEnvironment.execute(LocalStreamEnvironment.java:74)
    [error] at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1620)
    [error] at org.apache.flink.streaming.api.scala.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.scala:678)
    [error] at cloudflow.flink.FlinkStreamletLogic.executeStreamingQueries(FlinkStreamlet.scala:378)
    [error] at cloudflow.flink.FlinkStreamlet$LocalFlinkJobExecutor$.$anonfun$execute$1(FlinkStreamlet.scala:231)
    [error] at scala.concurrent.Future$.$anonfun$apply$1(Future.scala:659)
    [error] at scala.util.Success.$anonfun$map$1(Try.scala:255)
    [error] at scala.util.Success.map(Try.scala:213)
    [error] at scala.concurrent.Future.$anonfun$map$1(Future.scala:292)
    [error] at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:33)
    [error] at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
    [error] at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
    [error] at java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1402)
    [error] at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
    [error] at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
    [error] at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
    [error] at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)
    [error] Caused by: java.net.BindException: Could not start rest endpoint on any port in port range 9005
    [error] at org.apache.flink.runtime.rest.RestServerEndpoint.start(RestServerEndpoint.java:219)
    [error] at org.apache.flink.runtime.entrypoint.component.DefaultDispatcherResourceManagerComponentFactory.create(DefaultDispatcherResourceManagerComponentFactory.java:165)
    [error] ... 24 more
    Anil Kumar
    @akandach
    The global setting is not working.
    When I tried it for a specific streamlet, it worked.
    So do I need to set it for each Flink streamlet?
    Raymond Roestenburg
    @RayRoestenburg
    They all need to be on a separate port probably, are you running this with runLocal?
    Anil Kumar
    @akandach
    yes, running locally
    so we cannot use the same port?
    Raymond Roestenburg
    @RayRoestenburg
    exactly
    so need to specify per streamlet
    Anil Kumar
    @akandach
    I thought all streamlets would connect and show all the jobs in the same dashboard.
    OK, will try with different ports.
    Raymond Roestenburg
    @RayRoestenburg
    Ok
    Anil Kumar
    @akandach
    So in production we do not need to set this in the conf file?
    Raymond Roestenburg
    @RayRoestenburg
    The web interface for Flink is only supported in runLocal
    No port is opened per streamlet for this yet
    Anil Kumar
    @akandach
    OK, got it. But how do we monitor the checkpoint status?
    Raymond Roestenburg
    @RayRoestenburg
    That would likely need a web interface accessible in prod, judging by https://ci.apache.org/projects/flink/flink-docs-stable/monitoring/checkpoint_monitoring.html. Can you ask for this in the support channel so we can track it?
    Anil Kumar
    @akandach
    OK, sure.
    We will talk to the support guys about this.
    Raymond Roestenburg
    @RayRoestenburg
    Thanks
    Anil Kumar
    @akandach
    Setting different ports for each Flink streamlet worked.
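    For anyone else hitting the BindException above, a per-streamlet override in the local config might look like the sketch below. The streamlet names are hypothetical, and the `config.flink` nesting mirrors the pattern used elsewhere in this thread; `rest.port` is Flink's own key for the REST/web endpoint, so double-check the exact Cloudflow scope names against the docs for your version.

    ```hocon
    cloudflow.streamlets {
      # Hypothetical streamlet names -- use the ones from your blueprint.
      flink-ingress.config.flink {
        rest.port = 9005   # REST/web endpoint port for this streamlet's local MiniCluster
      }
      flink-transform.config.flink {
        rest.port = 9006   # a different port, so locally running streamlets don't collide
      }
    }
    ```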
    Raymond Roestenburg
    @RayRoestenburg
    Cool!
    Anil Kumar
    @akandach
    Getting this timeout issue when I start Cloudflow locally:
    [error] Caused by: akka.pattern.AskTimeoutException: Ask timed out on [Actor[akka://flink/user/dispatcher#-547885934]] after [10000 ms]. Message of type [org.apache.flink.runtime.rpc.messages.LocalFencedMessage]. A typical reason for AskTimeoutException is that the recipient actor didn't send a reply.
    [error] at akka.pattern.PromiseActorRef$.$anonfun$defaultOnTimeout$1(AskSupport.scala:648)
    [error] at akka.pattern.PromiseActorRef$.$anonfun$apply$1(AskSupport.scala:669)
    [error] at akka.actor.Scheduler$$anon$4.run(Scheduler.scala:202)
    [error] at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:875)
    [error] at scala.concurrent.BatchingExecutor.execute(BatchingExecutor.scala:113)
    [error] at scala.concurrent.BatchingExecutor.execute$(BatchingExecutor.scala:107)
    [error] java.lang.RuntimeException: java.util.concurrent.ExecutionException: akka.pattern.AskTimeoutException: Ask timed out on [Actor[akka://flink/user/dispatcher#-1088394114]] after [10000 ms]. Message of type [org.apache.flink.runtime.rpc.messages.LocalFencedMessage]. A typical reason for AskTimeoutException is that the recipient actor didn't send a reply.
    [error] at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:873)
    Looks like Azure connectivity is taking time and it's timing out.
    Adding this to the config to increase the timeout value in Flink:
    runtimes {
      flink {
        config {
          flink {
            web.timeout: 1000000
          }
        }
      }
    }
    but it looks like it is not being picked up
    Anil Kumar
    @akandach
    Increasing the ask timeout resolved the issue:
          akka.ask.timeout: 100 s
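    Spelled out in full, that fix might sit in the config as sketched below. This follows the same `runtimes.flink.config` nesting tried above; `akka.ask.timeout` is Flink's setting for its internal actor ask timeout (its 10 s default matches the 10000 ms in the error), but verify the exact scope path against the Cloudflow docs for your version.

    ```hocon
    cloudflow.runtimes.flink.config {
      flink {
        # Default is 10 s; raise it when slow startup or remote connectivity
        # causes AskTimeoutException during job submission.
        akka.ask.timeout = "100 s"
      }
    }
    ```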
    SemanticBeeng
    @SemanticBeeng

    What requirements does Cloudflow have of the Kubernetes cluster? Are any specs available? I can see https://github.com/lightbend/cloudflow/blob/master/installer/start-cluster.md, but I am looking to deploy on premise, and since Fast Data Platform required OpenShift, I am double-checking.

    (Gitter sucks, especially at search... Please consider discord)

    2 replies
    Alex Sergeenko
    @Lockdain
    Hi all!
    AFAIK Cloudflow internally uses Apache Avro 1.8.2. Are there any plans to update the library to a higher version?
    2 replies
    orElse
    @orElse
    Hi, we saw very strange behaviour (Cloudflow 2.0.7). We have two similarly named streamlets with quite long names in one namespace. After deploying, one pod went into a crash loop, and from the logs it seemed to be trying to start the wrong streamlet; we didn't really see anything else in the pod log. After shortening the streamlet names, rebuilding, and redeploying, the pods started fine again. Could this be related to the max label length in Kubernetes?
    3 replies
    Sudhanshu Mishra
    @sudmis

    Hi all, I am getting the below error while deploying the application; any pointers will be really helpful.

    C:\Users\sudhanshu.mishra1\source\repos\ms-dp-common\dp-cloudflow-poc>kubectl cloudflow deploy C:\Users\sudhanshu.mishra1\source\repos\ms-dp-common\dp-cloudflow-poc\target\pipeline-blueprint.json

    [Error] Could not verify protocol version. Kubernetes API returned an error: Failed to create new kubernetes client, CreateFile .kube\config: The system cannot find the path specified.
    exit status 1
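    In case it helps the thread: that `CreateFile .kube\config` message usually means the CLI (via its Kubernetes client) cannot locate a kubeconfig file. A minimal sketch of the usual fix, assuming the conventional default path (substitute the config file for your own cluster):

    ```shell
    # The error above means no kubeconfig was found at the default location.
    # Fix: make sure the file exists, or point KUBECONFIG at it explicitly.

    # Windows (cmd):
    #   set KUBECONFIG=%USERPROFILE%\.kube\config

    # Linux / macOS / Git Bash:
    export KUBECONFIG="$HOME/.kube/config"

    # Then re-run the deploy command, e.g.:
    # kubectl cloudflow deploy target\pipeline-blueprint.json
    ```

    If the file genuinely does not exist, obtain it from your cluster provider (e.g. `az aks get-credentials` for AKS) before deploying.
    
    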

    4 replies
    Sudhanshu Mishra
    @sudmis
    image.png
    Anil Kumar
    @akandach
    How do I read the runtime Flink config from the code?
    1 reply
    Anil Kumar
    @akandach
    Can you set the runtime config from the code?
    1 reply
    P Murali Krisihna
    @muralipmk
    I am using Cloudflow version 2.0.9. FlinkKafkaConsumer is able to consume data in a Flink streamlet and pass it to another Flink streamlet for transformation. Then I have one more Flink streamlet which reads the stream, converts the Avro data to JSON string format, and produces it to an external Kafka topic using the FlinkKafkaProducer API. But when I introduce this FlinkKafkaProducer code, it fails; not just the streamlet using the FlinkKafkaProducer API, but also the previous Flink streamlet. Any inputs?
    13 replies
    ehausig
    @ehausig
    Hi, I'm new to Cloudflow, have been working through the tutorial, and am trying to reconcile the build/deploy pipeline with our environment:
    6 replies
    dlmutart
    @dlmutart
    Looks like Cloudflow operator version 2.0.10 depends, via a vulnerability fix, on a skuber version that is no longer available in the Maven repo. We get an error: [ERROR] Failed to execute goal on project approval: Could not resolve dependencies for project artifact:some:jar:1: Could not find artifact io.skuber:skuber_2.12:jar:2.4.1-cve-fix-a8d7617c in java. I noticed a note in the 2.0.10 build file: "skuber version 2.4.0 depends on akka-http 10.1.9, hence overriding with akka-http 10.1.12 to use akka 2.6; remove this override once skuber is updated". So I'm wondering if pull request #628 is intended to fix this?
    3 replies
    Thomas
    @thomasschoeftner
    Hi - quick question:
    Is it safe to upgrade from Cloudflow 2.0.7 to 2.0.10 directly?
    3 replies
    shivgit87
    @shivgit87
    Unable to produce data to external Kafka using Cloudflow and Flink (Java), but with the same code I am able to successfully produce to Kafka when I use Flink alone. Can anyone help here? I am using Cloudflow version 2.0.9.
    3 replies
    Thomas
    @thomasschoeftner
    Hi, about the Cloudflow container images / PODs:
    How can I properly attach a sidecar container in the PODs running AkkaStreamlets?
    I haven't found anything revealing in the docs and the sources...
    In our case a lot of system integration with our eco-system happens via sidecars.
    Thanks for any pointers!
    33 replies
    DarthKrab
    @DarthKrab

    Hi. I wrote earlier about this problem. When I recreate the jobmanager container of a streamlet in my pipeline (or it crashes for some reason), after starting, I see errors like this in the JobManager's logs:
    2020-09-22T08:51:55.311Z ERROR org.apache.flink.runtime.rest.handler.job.JobDetailsHandler [{}] - Exception occurred in REST handler: Job ed952687752d2a5b2c60d843d7e5605f8 not found
    2020-09-22T08:52:25.485Z ERROR org.apache.flink.runtime.rest.handler.job.JobDetailsHandler [{}] - Exception occurred in REST handler: Job ed952687752d2a5b2c60d843d7e5605f8 not found
    2020-09-22T08:52:55.673Z ERROR org.apache.flink.runtime.rest.handler.job.JobDetailsHandler [{}] - Exception occurred in REST handler: Job ed952687752d2a5b2c60d843d7e5605f8 not found
    ....

    The directory with checkpoints exists and contains binary data and metadata: http://joxi.ru/LmGPZgeTlvj9G2
    But the jobmanager doesn't find it.
    The streamlet has the following settings:
    http://joxi.ru/eAOPkEYTkdegbr

    In a prod environment we can't just redeploy the pipeline, because the data is important.

    After re-creation, the streamlet needs to continue working from the point where it failed. Maybe someone has already encountered this problem. What settings should I check?

    15 replies
    shivgit87
    @shivgit87
    We are seeing deployment failures for our Cloudflow application. Is it mandatory for Cloudflow and AKS to be on the same version, or is it OK for them to be on different versions? (In our case Cloudflow is on 2.0.9 and AKS is on 2.0.10.)