heesuk-ahn
@heesuk-ahn

https://github.com/fabric8io/kubernetes-client/blob/8ed9692a97d9ca40784c43c66b056f76fb795481/kubernetes-client/src/main/java/io/fabric8/kubernetes/client/informers/cache/ProcessorStore.java#L105-L109

The resync method of the ProcessorStore above only seems to trigger the update notification once more.
I understood that resync replays all the events held in the informer cache.
Does this logic have any purpose other than giving the controller one more notification?

Steven Hawkins
@shawkins
@heesuk-ahn yes, the DeltaFIFO was removed from the fabric8 client
asdf
@asdf90

Hello,

Is there a way to list all pods for a Jenkins instance and see which ones are down using https://github.com/fabric8io/kubernetes-client ?

For example, if I run the command "kubectl get pod --all-namespaces -o wide" I get "NAMESPACE", "NAME", "STATUS", etc. I want to get all pods/agents whose STATUS is "Error".

asdf
@asdf90
I thought "PodList podList = client.pods().inAnyNamespace().list();" would do that for me, but it seems like it is looking for something in the kubeconfig file, if I understand correctly?
Rohan Kumar
@rohanKanojia
What do you mean?
Is the code executing within a Pod (a Jenkins Pod)?
If that's the case, the Fabric8 Kubernetes Client would check the mounted ServiceAccount for the API token.
You just need to make sure that your ServiceAccount has cluster-wide permission to access pods.
asdf
@asdf90
Well, if I run "kubectl get pod --all-namespaces -o wide" in an Ubuntu terminal I get a list of pods (as mentioned before; see the attached screenshot).
I would like to get the same type of list using fabric8.
I will check out ServiceAccount and see what it does
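A minimal sketch of what asdf is after, assuming fabric8 5.x with credentials resolved from kubeconfig or a mounted ServiceAccount. Note that the "Error" kubectl displays is derived from container states, while the pod-level field is status.phase (e.g. "Failed"), so the isDown helper below is an assumption about what counts as down:

```java
import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;
import java.util.List;
import java.util.stream.Collectors;

public class FailedPodLister {

    // Pure helper: does this pod phase count as "down"? Treating "Failed"
    // and "Unknown" as down is an assumption; adjust to your own definition.
    static boolean isDown(String phase) {
        return "Failed".equals(phase) || "Unknown".equals(phase);
    }

    public static void main(String[] args) {
        try (KubernetesClient client = new DefaultKubernetesClient()) {
            // List pods across all namespaces and keep only the down ones
            List<Pod> down = client.pods().inAnyNamespace().list().getItems().stream()
                    .filter(p -> p.getStatus() != null && isDown(p.getStatus().getPhase()))
                    .collect(Collectors.toList());
            down.forEach(p -> System.out.println(
                    p.getMetadata().getNamespace() + "/" + p.getMetadata().getName()));
        }
    }
}
```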
heesuk-ahn
@heesuk-ahn

It is said that Kubernetes uses level-triggering, but in fact, isn't the watch or informer API edge-triggered rather than level-triggered?

If an event is lost due to a network issue, the watch or informer API misses the new desired state.

With a polling approach you can think of it as level-triggered, but I think the event-driven approach of a watch or informer is closer to edge-triggered.

Orsák Maroš
@see-quick
Hi guys, I have an open PR related to the WebSocket timeout, which had a fixed value of 10 seconds (which is, from my point of view, too small). It would be better for it to be configurable, like this:
fabric8io/kubernetes-client#3559
Can you take a look at that?
@iocanel @rohanKanojia ^^
Yikun Jiang
@Yikun

I tried to add an extension for Volcano, but I hit this issue: panic: Not able to set api version for volcano.sh/apis/pkg/apis/scheduling/v1beta1/Queue. The complete code and log are in [1].
Could you give me some ideas on it?

[1] Yikun/kubernetes-client#1

Hannes Hofmann
@hanneshofmann
Hi, since upgrading from 4.9.2 to 5.9.0 I've observed that one of our unit tests is failing. It looks to me like field selectors are not considered in the Event API when we are using KubernetesServer in unit tests. Are there any known issues regarding this (I couldn't find any, but maybe I have missed something)? Otherwise I would try to raise an issue on GitHub including a small reproduction example.
Marc Nuri
@manusa
Starting release process for Kubernetes Client 5.10.0
Marc Nuri
@manusa
Starting release process for Kubernetes Client 5.10.1
nautiam
@nautiam
Hi guys, I used a SharedInformer to get updates on events happening to my resource after a certain interval of time. But it always gets the status of all resources; I just want to see the status of resources that changed since the last time. For example, if a resource is deleted or added, it should be shown, but if it is unchanged during the interval, it should be skipped. Is there any way to do that?
Rohan Kumar
@rohanKanojia
try setting the resync period to 0
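Rohan's suggestion can be sketched like this, assuming fabric8 5.x where inform(handler, resyncPeriod) is available; with a resync period of 0 there are no periodic replays, so the handler only fires on actual add/update/delete events:

```java
import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.informers.ResourceEventHandler;

public class PodChangeWatcher {
    public static void main(String[] args) throws InterruptedException {
        try (KubernetesClient client = new DefaultKubernetesClient()) {
            // resyncPeriod = 0 disables periodic resync, so only real
            // changes since the informer started are reported
            client.pods().inNamespace("default").inform(new ResourceEventHandler<Pod>() {
                @Override
                public void onAdd(Pod pod) {
                    System.out.println("ADDED " + pod.getMetadata().getName());
                }
                @Override
                public void onUpdate(Pod oldPod, Pod newPod) {
                    System.out.println("UPDATED " + newPod.getMetadata().getName());
                }
                @Override
                public void onDelete(Pod pod, boolean deletedFinalStateUnknown) {
                    System.out.println("DELETED " + pod.getMetadata().getName());
                }
            }, 0L);
            Thread.sleep(60_000); // keep the informer running for a minute
        }
    }
}
```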
deepak-patra
@deepak-patra
Code:

final CustomResourceDefinitionContext sprkCrdContext = new CustomResourceDefinitionContext.Builder()
    .withName("sparkoperator.k8s.io")
    .withGroup("sparkoperator.k8s.io")
    .withScope("Namespaced")
    .withVersion("v1beta2")
    .withPlural("sparkapp")
    .build();

log.error("config location {}", System.getProperty("kubeconfig"));
try (KubernetesClient k8s = new DefaultKubernetesClient()) {
    k8s.customResource(sprkCrdContext);
    k8s.load(getClass().getResourceAsStream(name))
        .inNamespace("default")
        .createOrReplace();
}

I'm getting the error com.fasterxml.jackson.databind.JsonMappingException: No resource type found for:sparkoperator.k8s.io/v1beta2#SparkApplication
deepak-patra
@deepak-patra
Hi all.
Can someone help me figure out what is wrong here?
deepak-patra
@deepak-patra
@rohanKanojia can you please help
Steven Hawkins
@shawkins
@deepak-patra try k8s.customResource(sprkCrdContext).load... - you need the operations to be off of the result of the customResource call
deepak-patra
@deepak-patra
@shawkins Thanks for the response. Sorry, I am not getting the context of your explanation...
Rohan Kumar
@rohanKanojia
Sorry, I was on PTO yesterday. client.customResource DSL method involving HashMaps has been marked as deprecated. I think you should replace your code with something like this:
try (KubernetesClient client = new DefaultKubernetesClient()) {
  CustomResourceDefinitionContext context = new CustomResourceDefinitionContext.Builder()
      .withGroup("sparkoperator.k8s.io")
      .withScope("Namespaced")
      .withVersion("v1beta2")
      .withPlural("sparkapp")
      .build();

  GenericKubernetesResource cr = client.genericKubernetesResources(context)
      .load(GenericKubernetesResourceExample.class.getResourceAsStream("/sparkapplication-cr.yml"))
      .get();

  client.genericKubernetesResources(context)
      .inNamespace("default")
      .create(cr);
}
David Calap
@dcalap

Hi, one question. When we create a Custom Resource Definition we have one part with:

 schema:
        openAPIV3Schema:
          type: object
          properties:
            # Fields to validate are the following:
            metadata: # 'metadata' should be an object
              type: object
              properties: # With the following field 'name'
                name:
                  type: string # Of type 'string'
                  pattern: '^[a-z]+\.[a-z]+$' # allows only 'word.word' names
            spec: # Root field 'spec'

When we create a resource from this CRD and apply it with kubectl, it validates the pattern properly, and we get an error in the terminal if we don't match the pattern.

The point now is: can we do something similar with the Kubernetes CRUD server? It looks like it is not validating when we create the resource programmatically. Any idea how to do it? Thanks!

Rohan Kumar
@rohanKanojia
I don't think KubernetesMockServer supports any kind of validation
David Calap
@dcalap
@rohanKanojia ok, thanks for the info
Marc Nuri
@manusa
The CRUD server supports only very basic functionality. This seems like something that could be implemented, because it doesn't depend on external controllers and would be part of the API server itself. However, it's something pretty advanced, but contributions are welcome ;)
nautiam
@nautiam
I want to watch and forward all the logs of a Pod whenever a log line is printed. Is there any way to do that? Thanks.
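One way to do this with fabric8 is watchLog, which streams the pod's log to an OutputStream as new lines arrive. A minimal sketch; the namespace and pod name are placeholders:

```java
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.dsl.LogWatch;

public class PodLogStreamer {
    public static void main(String[] args) throws Exception {
        try (KubernetesClient client = new DefaultKubernetesClient();
             // Streams the pod's log to stdout as it is written;
             // "my-pod" is a placeholder for the real pod name
             LogWatch watch = client.pods().inNamespace("default")
                     .withName("my-pod")
                     .watchLog(System.out)) {
            Thread.sleep(60_000); // keep streaming for a minute
        }
    }
}
```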
nautiam
@nautiam
@rohanKanojia thanks, it works. Is there any way to use a regex with the Pod name? Because the Pod name is generated randomly and I don't know how to get it exactly.
Rohan Kumar
@rohanKanojia
Umm, you can try using labels. I'm not sure if we have support for querying with a regex.
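A sketch of the label-based approach Rohan suggests, assuming the Jenkins agent pods carry a stable label; the app=jenkins-agent label and namespace here are placeholders:

```java
import io.fabric8.kubernetes.api.model.PodList;
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;

public class LabeledPodLister {
    public static void main(String[] args) {
        try (KubernetesClient client = new DefaultKubernetesClient()) {
            // Selects pods by label instead of by their (generated) name
            PodList pods = client.pods().inNamespace("default")
                    .withLabel("app", "jenkins-agent")
                    .list();
            pods.getItems().forEach(p ->
                    System.out.println(p.getMetadata().getName()));
        }
    }
}
```

Labels survive pod restarts and rescheduling, which is why they are generally the preferred way to select pods whose names contain random suffixes.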
Yikun Jiang
@Yikun

Is it possible to reuse the Kubernetes client in an extension client?

For example, for a Volcano extension:

volcanoClient = new DefaultVolcanoClient(kubernetesClient)

then we could reuse the kubernetesClient configuration and httpClient.

@rohanKanojia @manusa
Rohan Kumar
@rohanKanojia
there should be an adapt method available in the KubernetesClient interface
an example for the Knative extension is here: https://github.com/fabric8io/kubernetes-client/blob/master/doc/CHEATSHEET.md#initializing-knative-client . Maybe this is already available in the Volcano extension you implemented
Yikun Jiang
@Yikun
@rohanKanojia Thanks, I will give it a try.
markusriedl
@markusriedl
Hi, is it possible to create an informer that doesn't use the default ServiceAccount?
Rohan Kumar
@rohanKanojia
Umm, I don't understand your question. What does creating an informer have to do with a ServiceAccount? It depends on the ServiceAccount used by the pod from which the informer gets created (if you're doing this in-cluster).
markusriedl
@markusriedl

Thanks for replying; maybe I'm understanding it wrong (which is very likely).
A bit more detail.
So, when querying:

final Endpoints endpoints = k8sClient.endpoints()
        .withName(name).get();

which works fine; however, when I want to register an informer on it, i.e.,

k8sClient
        .endpoints()
        .withName(serviceId.getValue())
        .inform(endpointsHandler, resyncTimeInMillis);

I get an exception:

Caused by: io.fabric8.kubernetes.client.KubernetesClientException: endpoints "value" is forbidden: User "system:serviceaccount:default:default" cannot watch resource "endpoints" in API group "" in the namespace "default"

And that is not the configured service account, so my guess was that this is ServiceAccount-related and it needs to be configured somehow (yes, this is done in a cluster).
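One way to make the client use a specific ServiceAccount rather than the auto-detected in-cluster default is to build the Config explicitly with that account's token. A sketch; the master URL and the SA_TOKEN environment variable are placeholders for however you obtain these values:

```java
import io.fabric8.kubernetes.client.Config;
import io.fabric8.kubernetes.client.ConfigBuilder;
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;

public class CustomServiceAccountClient {
    public static void main(String[] args) {
        // Explicit credentials override the mounted ServiceAccount that
        // would otherwise be picked up automatically in-cluster
        Config config = new ConfigBuilder()
                .withMasterUrl("https://kubernetes.default.svc")
                .withOauthToken(System.getenv("SA_TOKEN")) // placeholder token source
                .build();
        try (KubernetesClient client = new DefaultKubernetesClient(config)) {
            // Informers created from this client authenticate with the
            // configured token, not system:serviceaccount:default:default
            System.out.println(client.getMasterUrl());
        }
    }
}
```

Alternatively, granting the pod's own ServiceAccount the missing "watch" permission on endpoints (via an RBAC Role/RoleBinding) resolves the quoted exception without changing the client configuration.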