    Guillaume
    @dracorpg
    (this is the actual spirit of DevOps BTW - as little as possible is changed by hand, operations are done by deploying configuration files)
    PersistentVolumes are not namespaced, but PersistentVolumeClaims are
    I suppose the PV where SQL data is persisted was dynamically provisioned by a PVC ?
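    A quick way to check (a sketch; the namespace below is a placeholder):

        # PVs are cluster-scoped, so no -n flag; the CLAIM column shows which PVC bound each PV
        kubectl get pv
        # PVCs are namespaced
        kubectl get pvc -n your-namespace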
    monsio
    @kaizokun
    ok so I could just create a new pvc cool
    Guillaume
    @dracorpg
    yeah but BUT if you delete the previous one, the underlying PV (and even OpenStack Cinder volume) will be deleted with it, by default
    you first have to change the current PV's reclaim policy to "Retain" instead of "Delete"
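    For example with kubectl (the PV name below is a placeholder):

        # keep the PV (and the Cinder volume behind it) around even if its claim is deleted
        kubectl patch pv your-pv-name-here -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'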
    monsio
    @kaizokun
    ok thank you very much
    Guillaume
    @dracorpg
    then you should add a volumeName: 'your-pv-name-here' line to the PVC's spec
    so the claim will bind to this existing PV instead of dynamically provisioning a new (empty) PV as per default
    (not very elegant, this whole stuff, but it should work and avoid resorting to a dump-and-load process for your data)
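    Roughly, the claim would look like this (the name, storage class and size are placeholders and should match the existing PV; note that if the old claim was already deleted, the PV shows as Released and its spec.claimRef may need to be cleared before a new claim can bind):

        apiVersion: v1
        kind: PersistentVolumeClaim
        metadata:
          name: sql-data                        # placeholder name
        spec:
          accessModes:
            - ReadWriteOnce
          storageClassName: cinder-classic      # placeholder, must match the PV's storageClassName
          resources:
            requests:
              storage: 10Gi                     # placeholder, must not exceed the PV's capacity
          volumeName: your-pv-name-here         # binds this claim to the existing PV instead of provisioning a new one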
    monsio
    @kaizokun
    apparently the chart creates the PV and PVC automatically
    I guess I'll be able to target the old PV from the new PVC
    Guillaume
    @dracorpg
    you'll have to hack your spec.volumeName in the chart template
    monsio
    @kaizokun
    ok I just tried to modify the PVC config but it doesn't work :)
    Guillaume
    @dracorpg
    once the PVC is created and a new PV has been dynamically provisioned for it, I'm not suuure you can un-bind the PVC and change its definition to bind it to another pre-existing PV instead
    yeah, you'll have to do the hack in the original yml that is applied to create the PVC (for a Helm chart I guess it's somewhere in templates ?)
    monsio
    @kaizokun
    probably there is a pvc.yaml in the templates folder but it uses variables
    I don't know where the vars are declared
    it seems trickier than I thought, I'll find an easier way ^^
    Guillaume
    @dracorpg
    you don't really need to care about the variables Helm injects, just add your line in the template and it should work - but yeah, not absolutely trivial
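    Concretely, something like this in the chart's templates/pvc.yaml (a trimmed, hypothetical template; the volumeName line is the only addition):

        apiVersion: v1
        kind: PersistentVolumeClaim
        metadata:
          name: {{ template "fullname" . }}
        spec:
          volumeName: your-pv-name-here          # hard-coded line pointing the claim at the existing PV
          accessModes:
            - {{ .Values.persistence.accessMode | quote }}
          resources:
            requests:
              storage: {{ .Values.persistence.size | quote }}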
    kobrasz
    @kobrasz
    You can probably change variables in values.yaml - you can find this file in the repository of your chart. I think you have to create the PVC on your own, attach the old PV to it, and configure values.yaml to use your PVC instead of a dynamically provisioned one
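    Many charts expose a value for exactly this, often called persistence.existingClaim (the key name is an assumption here - check your chart's values.yaml):

        # values.yaml override, assuming the chart supports an existingClaim-style value
        persistence:
          existingClaim: sql-data   # the PVC created by hand, already bound to the old PV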
    arduinio
    @arduinio
    Is there any way to allow communication between a container inside OVH Kubernetes and a virtual machine inside an OVH private cloud?
    Chaya56
    @Chaya56

    Hello,
    Do we have an incident currently in the managed Kubernetes service?


    Joël LE CORRE
    @jlecorre_gitlab
    Hello @Chaya56
    I will check that with you in private if you want.
    Joël LE CORRE
    @jlecorre_gitlab
    Problem solved for @Chaya56.
    And for your information, we don't have any ongoing outage on our platform.
    Chaya56
    @Chaya56
    FYI, the problem was due to having multiple clusters with nodes having the same name
    Guilhem30
    @Guilhem30
    Hello, I'm trying to get the client's real IP in the ingress controller and in my application logs, but it seems I'm only able to retrieve private IP addresses
    even in the ingress controller (nginx) logs, all the IPs are in 10.110.X.Y
    I modified my load balancer to "externalTrafficPolicy: Local" but I'm still getting only private IPs
    Chaya56
    @Chaya56
    @Guilhem30 what kind of ingress controller are you using ? nginx-ingress-controller ?
    Guilhem30
    @Guilhem30
    deployed with helm
    Chaya56
    @Chaya56
    you have to create a configmap with following values:
    use-proxy-protocol: "true"
    proxy-real-ip-cidr: "0.0.0.0/32"
    use-forwarded-headers: "false"
    log-format-upstream: '$proxy_protocol_addr etc...'
    http-snippet: |
      geo $realip_remote_addr $is_lb {
        default       0;
        10.108.0.0/14 1;
      }
    server-snippet: |
      if ($is_lb != 1) {
        return 403;
      }
    Joël LE CORRE
    @jlecorre_gitlab
    Hi guys, you can follow this documentation to get the source IP behind your LoadBalancer:
    https://docs.ovh.com/gb/en/kubernetes/getting-source-ip-behind-loadbalancer/
    Chaya56
    @Chaya56
    you create the ConfigMap in the same namespace as your ingress controller, and you give it the name of your ingress controller; for example mine is named: app-proxy-nginx-ingress-controller
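    Put together as a manifest, it would look roughly like this (the name and namespace reuse the examples from this thread - adjust them to your release; log-format-upstream is omitted since it depends on your log format):

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: app-proxy-nginx-ingress-controller   # must match the ingress controller's name
          namespace: arx-dev                         # same namespace as the ingress controller
        data:
          use-proxy-protocol: "true"
          proxy-real-ip-cidr: "0.0.0.0/32"
          use-forwarded-headers: "false"
          http-snippet: |
            geo $realip_remote_addr $is_lb {
              default       0;
              10.108.0.0/14 1;
            }
          server-snippet: |
            if ($is_lb != 1) {
              return 403;
            }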
    Guilhem30
    @Guilhem30
    okay, thank you, I will try this right away
    Chaya56
    @Chaya56
    @Guilhem30 and for my chart I've used these parameters:
    helm install stable/nginx-ingress --namespace arx-dev --name app-proxy \
        --set rbac.create=true \
        --set controller.service.externalTrafficPolicy=Local \
        --set controller.service.annotations."service\.beta\.kubernetes\.io/ovh-loadbalancer-proxy-protocol"=v1
    Guilhem30
    @Guilhem30
    okay, I'm missing the annotation
    Guilhem30
    @Guilhem30
    thanks for the help @Chaya56 and @jlecorre_gitlab, it took me a moment but I'm able to get the source IP now
    Chaya56
    @Chaya56
    @Guilhem30 you're welcome
    thomasboni
    @thomasboni
    Hello guys, do you have any news about the k8s load balancer cost? It was planned for September (if I remember correctly)
    arduinio
    @arduinio
    Good Morning to all
    Is there any way to allow communication between a container inside OVH Kubernetes and a virtual machine inside an OVH private cloud?
    Nicolas Steinmetz
    @nsteinmetz
    Interesting to avoid passwords in CI scripts & co :-)
    https://blog.docker.com/2019/09/docker-hub-new-personal-access-tokens/
    Bmagic
    @bmagic
    I think what you're looking for is the vrack. It doesn't seem to me that it's available yet. Otherwise you can always use the public network.
    Otherwise I'm looking for a way to configure the OVH DNS automatically via a tool like this (to generate certificates with Let's Encrypt): https://github.com/kubernetes-incubator/external-dns
    For the moment OVH is not among the providers supported by this tool; has anyone ever run into this problem and found a solution?
    Thomas Coudert
    @thcdrt
    @bmagic I confirm that vrack is not available for now
    arduinio
    @arduinio
    ok
    thanks @bmagic and @thcdrt
    Jawher Moussa
    @jawher_twitter

    @SystemZ Hello! Did you manage to get the fluent-bit grep filter to work to exclude the canal pods' logs? I added this block to the end of my filter-kubernetes.conf entry in the configmap:

        [FILTER]
            Name    grep
            Match   *
            Exclude kubernetes_container_name calico-node|wormhole

    and I still keep getting canal logs sent to graylog