Hi all,
In the fabric8 kubernetes client, when the Reflector receives a watch event, it seems to pass the event to the SyncerStore.
In client-go, the event seems to be put in a queue called DeltaFIFO.
Can I take it that the fabric8 kubernetes client uses a different abstraction, without a DeltaFIFO?
The Processor store resync method above only seems to trigger the Update notification once again.
I understood that resync plays back all the events held in the informer cache.
Does this logic have any purpose other than giving the controller one more notification?
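My understanding, as a toy sketch (not the actual fabric8 internals): resync does not replay historical events; it walks the current contents of the informer cache and fires the Update callback with oldObj == newObj, so the controller gets one more chance to reconcile each object:

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.List;
import java.util.function.BiConsumer;

public class ResyncSketch {
    // Toy model of resync: walk every object currently in the cache and
    // fire an Update with oldObj == newObj. No historical events are
    // replayed; the reconciler just gets re-notified of the current state.
    static <T> void resync(Collection<T> cache, BiConsumer<T, T> onUpdate) {
        for (T obj : cache) {
            onUpdate.accept(obj, obj); // same reference: a pure re-notification
        }
    }

    public static void main(String[] args) {
        List<String> cache = Arrays.asList("pod-a", "pod-b");
        resync(cache, (oldObj, newObj) ->
            System.out.println("update: " + newObj + " (resync=" + (oldObj == newObj) + ")"));
    }
}
```

So yes, as far as I can tell its purpose is exactly that extra notification: a periodic safety net so a reconciler that missed or mishandled an earlier update eventually converges.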
Hello,
Is there a way to list all pods for a Jenkins instance and see which ones are down using https://github.com/fabric8io/kubernetes-client ?
For example, if I run the command "kubectl get pod --all-namespaces -o wide" I get "NAMESPACE", "NAME", "STATUS", etc. I want to get all pods/agents whose STATUS is "Error".
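In case it helps, here is a minimal sketch of the filtering side. The fabric8 calls shown in the comment (client.pods().inAnyNamespace().list()) are the real API; the phase check itself is an assumption, since kubectl's STATUS column ("Error", "CrashLoopBackOff", ...) is derived from container states, while the pod-level phase for crashed pods is usually "Failed":

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class FailedPodFilter {
    // Assumption: treat pods whose phase is "Failed" or "Unknown" as down.
    // kubectl's STATUS column is a derived value, so adjust for your workloads.
    static boolean isDown(String phase) {
        return "Failed".equals(phase) || "Unknown".equals(phase);
    }

    public static void main(String[] args) {
        // Stand-in for phases read from a real cluster, e.g. with fabric8:
        //   try (KubernetesClient client = new DefaultKubernetesClient()) {
        //     client.pods().inAnyNamespace().list().getItems()
        //           .forEach(p -> check(p.getStatus().getPhase()));
        //   }
        List<String> phases = Arrays.asList("Running", "Failed", "Succeeded");
        System.out.println(phases.stream()
                                 .filter(FailedPodFilter::isDown)
                                 .collect(Collectors.toList()));
    }
}
```

To match kubectl's "Error" exactly you would probably also have to inspect the container statuses (terminated state and its reason) on each pod's status, not just the phase.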
It is said that Kubernetes uses level triggering, but isn't the watch or informer API actually an edge trigger rather than a level trigger?
If an event is lost due to a network issue, the watch or informer API misses the new desired state.
With the polling method it can be thought of as a level trigger, but I think an event-driven method like watch or informer is closer to an edge trigger.
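A toy illustration of the difference (plain Java, no Kubernetes involved): an edge-triggered cache that misses one event stays stale until something else fires, while a level-triggered re-list converges on the next poll:

```java
import java.util.HashMap;
import java.util.Map;

public class TriggerDemo {
    // Level-triggered observation: read the full current state (a re-list).
    static Map<String, String> relist(Map<String, String> server) {
        return new HashMap<>(server);
    }

    public static void main(String[] args) {
        // Desired state held by the "API server".
        Map<String, String> server = new HashMap<>();
        server.put("pod-a", "Running");

        // Edge-triggered cache built from the events seen so far.
        Map<String, String> edgeCache = new HashMap<>(server);

        // The state changes but the watch event is dropped (network issue):
        // the edge-triggered cache goes stale and has no way to notice.
        server.put("pod-a", "Failed");

        // A level-triggered poller converges on the next re-list.
        Map<String, String> levelCache = relist(server);

        System.out.println("edge:  " + edgeCache.get("pod-a"));  // stale "Running"
        System.out.println("level: " + levelCache.get("pod-a")); // fresh "Failed"
    }
}
```

This is also, as I understand it, why informers combine the two: an initial list (plus periodic relist/resync) gives level-triggered convergence, while the watch is only an optimization to learn about edges quickly.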
I'm trying to add the extension for Volcano, but I hit this issue: panic: Not able to set api version for volcano.sh/apis/pkg/apis/scheduling/v1beta1/Queue. Complete code and log are in [1].
Could you give me some ideas on it?
KubernetesServer in unit tests. Are there any known issues regarding this (I couldn't find any, but maybe I have missed something)? Otherwise I would raise an issue on GitHub with a small reproduction example.
Code:
final CustomResourceDefinitionContext sprkCrdContext = new CustomResourceDefinitionContext.Builder()
    .withName("sparkoperator.k8s.io")
    .withGroup("sparkoperator.k8s.io")
    .withScope("Namespaced")
    .withVersion("v1beta2")
    .withPlural("sparkapp")
    .build();

log.error("config location {}", System.getProperty("kubeconfig"));
try (KubernetesClient k8s = new DefaultKubernetesClient()) {
    k8s.customResource(sprkCrdContext);
    k8s.load(getClass().getResourceAsStream(name))
        .inNamespace("default")
        .createOrReplace();
}

try (KubernetesClient client = new DefaultKubernetesClient()) {
    CustomResourceDefinitionContext context = new CustomResourceDefinitionContext.Builder()
        .withGroup("sparkoperator.k8s.io")
        .withScope("Namespaced")
        .withVersion("v1beta2")
        .withPlural("sparkapp")
        .build();
    GenericKubernetesResource cr = client.genericKubernetesResources(context)
        .load(GenericKubernetesResourceExample.class.getResourceAsStream("/sparkapplication-cr.yml"))
        .get();
    client.genericKubernetesResources(context)
        .inNamespace("default")
        .create(cr);
}
Hi, one question. When we create a Custom Resource Definition we have one part with:
schema:
  openAPIV3Schema:
    type: object
    properties:
      # Fields to validate are the following:
      metadata:            # 'metadata' should be an object
        type: object
        properties:        # With the following field 'name'
          name:
            type: string   # Of type 'string'
            pattern: '^[a-z]+\.[a-z]+$'  # allows only 'word.word' names
      spec:                # Root field 'spec'
When we create a resource from this CRD and apply it with kubectl, the pattern is validated properly, and we get an error in the terminal if the name doesn't match.
The question now is: can we do something similar with the Kubernetes CRUD mock server? It looks like it doesn't validate the pattern when we create the resource programmatically. Any idea how to do it? Thanks!
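As far as I know, the mock server in CRUD mode does not run OpenAPI schema validation, so one workaround is to check the pattern yourself before creating the resource in your tests. A minimal sketch with plain java.util.regex, reusing the pattern from the CRD above:

```java
import java.util.regex.Pattern;

public class NameValidation {
    // Same pattern as in the CRD schema: allows only 'word.word' names.
    private static final Pattern NAME = Pattern.compile("^[a-z]+\\.[a-z]+$");

    static boolean isValidName(String name) {
        return NAME.matcher(name).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidName("word.word")); // → true
        System.out.println(isValidName("BadName"));   // → false
    }
}
```

In a test you could call isValidName on metadata.name before createOrReplace and fail fast, which at least catches the cases a real API server would reject.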