    Guilhem30
    @Guilhem30
    Thanks for the help @Chaya56 and @jlecorre_gitlab, it took me a moment but I'm able to get the source IP now
    Chaya56
    @Chaya56
    @Guilhem30 you're welcome
    thomasboni
    @thomasboni
    Hello guys, do you have any news about the k8s load balancer pricing? It was planned for September (if I remember correctly)
    arduinio
    @arduinio
    Good Morning to all
    Is there any way to allow communication between containers inside OVH Kubernetes and virtual machines inside an OVH Private Cloud?
    Nicolas Steinmetz
    @nsteinmetz
    Interesting to avoid passwords in CI scripts & co :-)
    https://blog.docker.com/2019/09/docker-hub-new-personal-access-tokens/
    Bmagic
    @bmagic
    I think what you're looking for is the vrack. It doesn't seem to me that it's available yet. Otherwise you can always use the public network.
    Separately, I'm looking for a way to configure the OVH DNS automatically via a tool like this (to generate certificates with Let's Encrypt): https://github.com/kubernetes-incubator/external-dns
    For the moment OVH isn't among this tool's supported providers. Has anyone run into this problem and found a solution?
    Thomas Coudert
    @thcdrt
    @bmagic I confirm that vrack is not available for now
    arduinio
    @arduinio
    ok
    thanks @bmagic and @thcdrt
    Jawher Moussa
    @jawher_twitter

    @SystemZ Hello! Did you manage to get the fluent-bit grep filter to work to exclude the canal pods' logs? I added this block to the end of my filter-kubernetes.conf entry in the configmap:

        [FILTER]
            Name    grep
            Match   *
            Exclude kubernetes_container_name calico-node|wormhole

    and I still keep getting canal logs sent to Graylog
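
    One hedged guess at the cause: if the Fluent Bit kubernetes filter enriches records with a nested kubernetes map (rather than flattened kubernetes_container_name keys), the grep filter needs record-accessor syntax to reach the nested field. A sketch, assuming a Fluent Bit 1.x version whose grep filter supports record accessors, and assuming logs are tagged kube.* by the tail input:

        [FILTER]
            # enrich records with pod metadata before filtering on it
            Name      kubernetes
            Match     kube.*
            Merge_Log On

        [FILTER]
            # drop records whose container name matches the regex
            Name    grep
            Match   kube.*
            Exclude $kubernetes['container_name'] (calico-node|wormhole)
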

    Guilhem30
    @Guilhem30
    Is there a way to ban/blacklist an IP address from connecting to a LoadBalancer k8s service?
    or the opposite, whitelist and allow only some IPs to connect
    Guilhem30
    @Guilhem30
    found a way to whitelist but not blacklist
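
    For readers hitting the same question: one common way to whitelist (not necessarily the one found above) is the Service's loadBalancerSourceRanges field, which restricts the source CIDRs allowed to reach a LoadBalancer Service; the Service spec has no equivalent blacklist field. A minimal sketch, with an illustrative service name and CIDRs:

        apiVersion: v1
        kind: Service
        metadata:
          name: my-app                  # hypothetical name
        spec:
          type: LoadBalancer
          loadBalancerSourceRanges:     # only these CIDRs may connect
            - 203.0.113.0/24
            - 198.51.100.42/32
          selector:
            app: my-app
          ports:
            - port: 80
              targetPort: 8080
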
    Guillaume
    @dracorpg
    @bmagic do you need to be using the DNS challenge? The ACME protocol supports 2 other challenge types that don't require editing the DNS zone. But if you need to, traefik does support it for its automatic/dynamic certificate generation feature (and supports many registrars' API, including OVH)
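
    To illustrate the Traefik route mentioned above (a sketch assuming Traefik 2.x and its YAML static configuration; Traefik 1.7 exposes the same feature through [acme.dnsChallenge] in TOML), the OVH DNS provider is selected by name and authenticated through environment variables such as OVH_ENDPOINT, OVH_APPLICATION_KEY, OVH_APPLICATION_SECRET and OVH_CONSUMER_KEY; resolver name, email and storage path below are illustrative:

        # traefik.yml (static configuration) -- illustrative, not a complete setup
        certificatesResolvers:
          letsencrypt:
            acme:
              email: admin@example.com   # hypothetical contact address
              storage: /data/acme.json
              dnsChallenge:
                provider: ovh            # creates the ACME TXT record via the OVH API
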
    Dennis van der Veeke
    @MrDienns
    since ReadWriteMany volumes aren't possible yet, how can I host an Elasticsearch cluster that spans several nodes?
    Jawher Moussa
    @jawher_twitter
    Every Elasticsearch node would get its own PV.
    Here's an example ES 2 node cluster using version 7.3.1 + 1Gi of storage per node: https://gist.github.com/jawher/db9973ccafc8fa7ba379e603d90e6681
    Dennis van der Veeke
    @MrDienns
    ah okay, so it's possible for each replica to create its own volume per node?
    Jawher Moussa
    @jawher_twitter
    right, cf. the volumeClaimTemplates:
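
    For readers who can't open the gist: the key point is that a StatefulSet's volumeClaimTemplates creates one PersistentVolumeClaim (and therefore one volume) per replica, so ReadWriteMany isn't needed. A trimmed-down sketch along those lines (field values are illustrative, not the gist's exact manifest, and a real cluster needs additional Elasticsearch discovery settings):

        apiVersion: apps/v1
        kind: StatefulSet
        metadata:
          name: elasticsearch
        spec:
          serviceName: elasticsearch
          replicas: 2
          selector:
            matchLabels:
              app: elasticsearch
          template:
            metadata:
              labels:
                app: elasticsearch
            spec:
              containers:
                - name: elasticsearch
                  image: docker.elastic.co/elasticsearch/elasticsearch:7.3.1
                  volumeMounts:
                    - name: data
                      mountPath: /usr/share/elasticsearch/data
          volumeClaimTemplates:          # one PVC per pod: data-elasticsearch-0, data-elasticsearch-1, ...
            - metadata:
                name: data
              spec:
                accessModes: ["ReadWriteOnce"]
                resources:
                  requests:
                    storage: 1Gi
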
    bcarriou
    @bcarriou
    Hi all, I have upgraded my k8s cluster to 1.14 and now I have some problems.
    For example, some pods fail to start with this message:
    Events:
      Type     Reason                  Age                  From               Message
      ----     ------                  ----                 ----               -------
      Normal   Scheduled               3m12s                default-scheduler  Successfully assigned cattle-system/rancher-84955df7bc-czmn7 to worker-3
      Warning  FailedCreatePodSandBox  3m9s                 kubelet, worker-3  Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "7e376e706e0dc527138ceaefcbf92410a716c5dd95661e0e72e8d3deb48e7a78" network for pod "rancher-84955df7bc-czmn7": NetworkPlugin cni failed to set up pod "rancher-84955df7bc-czmn7_cattle-system" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
      Warning  FailedCreatePodSandBox  3m                   kubelet, worker-3  Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "8f215886b17a1160bb5835d5a6650e70c03cb661d2a623a542e1e97b5de7a09c" network for pod "rancher-84955df7bc-czmn7": NetworkPlugin cni failed to set up pod "rancher-84955df7bc-czmn7_cattle-system" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
      Warning  FailedCreatePodSandBox  2m50s                kubelet, worker-3  Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "0ed4c2c8dd092b318f9639c823330fa4dd8fd14fc5bc5ffe7017bf9f21b9668d" network for pod "rancher-84955df7bc-czmn7": NetworkPlugin cni failed to set up pod "rancher-84955df7bc-czmn7_cattle-system" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
      Warning  FailedCreatePodSandBox  2m29s                kubelet, worker-3  Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "b13a56e92a7ed8848008334211c47d36f471b06b5d4f51e5315426e4032843fa" network for pod "rancher-84955df7bc-czmn7": NetworkPlugin cni failed to set up pod "rancher-84955df7bc-czmn7_cattle-system" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
      Warning  FailedCreatePodSandBox  2m18s                kubelet, worker-3  Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "4aa951d441f9171b37d2d5f7c6b96912a69393602e7997aaf6f289222f247039" network for pod "rancher-84955df7bc-czmn7": NetworkPlugin cni failed to set up pod "rancher-84955df7bc-czmn7_cattle-system" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
      Warning  FailedCreatePodSandBox  2m2s                 kubelet, worker-3  Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "7ab8ac00e5b0ba941d6e37464fff93d0639432a993032ddcd300f5dda62b0d8e" network for pod "rancher-84955df7bc-czmn7": NetworkPlugin cni failed to set up pod "rancher-84955df7bc-czmn7_cattle-system" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
      Warning  FailedCreatePodSandBox  104s                 kubelet, worker-3  Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "dcbb4d39e804fb011e341f40e6b3de467945440586c18acb4d4c961687a44009" network for pod "rancher-84955df7bc-czmn7": NetworkPlugin cni failed to set up pod "rancher-84955df7bc-czmn7_cattle-system" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
      Warning  FailedCreatePodSandBox  94s                  kubelet,
    bcarriou
    @bcarriou
    my cluster ID is 1aa61333-b08c-4ea9-abc0-82f8b39b5cb2
    the problem seems to occur only on one worker, for me 68253824-2227-42fa-a251-8090ee7fd5f9
    Joël LE CORRE
    @jlecorre_gitlab
    Hello @bcarriou
    How are you?
    We can check this together if you want.
    bcarriou
    @bcarriou
    Hi, yes, let's switch to private :)
    Thomas Coudert
    @thcdrt
    Hello, new Kubernetes patch versions are available on OVH Managed Kubernetes service (1.13.11, 1.14.7 and 1.15.4)
    Moreover I invite you all to update your kubectl binary because of CVE-2019-11251
    arduinio
    @arduinio
    has anyone tried the latest traefik image?
    it doesn't work for me
    Guillaume
    @dracorpg
    Hi guys. Thanks @thcdrt for the heads up. Our clusters with the "maximum security" policy selected haven't auto-applied this patch yet: still running 1.15.3-5 and the manual "Force the security update (patch)" button is available.
    What is the "normal" / expected rollout delay for such auto-updates?
    Thomas Coudert
    @thcdrt
    Hello @dracorpg, the average time for an update is about 5-10 min per node.
    Guillaume
    @dracorpg
    Sorry, my question wasn't clear enough. I didn't mean the time it takes to update nodes once the cluster update has been triggered, but the time your cluster auto-update policy takes to initiate the update once it becomes available.
    (i.e. should we expect clusters to start updating within 1 hour, 1 day, 1 week of a new patch version being released ?)
    info2000
    @info2000
    Bye bye OVH Kubernetes. After 6 months, a lot of lost time and issues without support, I'm giving up on this solution. This is the latest example of an issue that is unacceptable these days: https://share.getcloudapp.com/RBuPgK4P
    it can't find the docker.io and registry.gitlab hosts
    Thomas Coudert
    @thcdrt
    @dracorpg, sorry, I didn't understand your question. In fact we don't necessarily auto-update your cluster, as it can cause some downtime on your platform. As those new versions don't bring security fixes, we let you update when it's best for you.
    Guillaume
    @dracorpg
    @thcdrt okay - I assumed there were security fixes among the changes. Thanks!
    Michał Frąckiewicz
    @SystemZ
    @jawher_twitter make sure you parse your logs correctly before using tags to match messages
    BTW any news on LB pricing?
    Giovanni
    @gclem
    @SystemZ Nothing for now. Will keep you updated, as this subject is currently being worked on :)
    Dennis van der Veeke
    @MrDienns
    Is it possible to configure k8s audit logging with a backend system like Logstash + Kibana? Reading the documentation, I think you're supposed to pass a configuration when starting the kube-apiserver, which we don't have access to. Is there any way to achieve this?
    BarbeRousse
    @mjoigny
    Hi guys, I would like to increase my client_max_body_size value (error message: "client intended to send too large body") using my ingress's configmap. I followed this doc https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#custom-max-body-size but it's not working for me. I must surely be forgetting something, could you help me?
    Bmagic
    @bmagic
    @mjoigny, I'm using this config and it's working for me.
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      annotations:
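        # proxy-body-size maps to nginx's client_max_body_size directive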
        nginx.ingress.kubernetes.io/proxy-body-size: "99m"
    [ ... ]
    BarbeRousse
    @mjoigny
    @bmagic thanks but it's not working, I'm using this image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.24.1
    @bmagic do you use a configmap or not?
    @bmagic my bad, it seems to work now
    Bmagic
    @bmagic
    No, I deployed my ingress with Helm.
    Okay, good for you.
    BarbeRousse
    @mjoigny
    thanks @bmagic
    Michał Frąckiewicz
    @SystemZ

    Are OVH datacenter names 1:1 with OpenStack regions?
    I've seen only GRA-1 and GRA-2 on VMS:
    http://prace.ovh.pl/vms/index_gra2.html
    On the about page I see only GRA-1:
    https://www.ovh.ie/aboutus/datacentres.xml
    In the customer panel I have VMs in GRA-3 and GRA-5.

    Can you tell me which DCs the OpenStack regions are in? Is there a page describing it?