    jeyaprabhuj-tts
    @jeyaprabhuj-tts
    @ayushr2 Now I am able to retrieve and share the webcam's video query capabilities inside the Sentry (between the lisafs client and server).
    Thanks
    Ayush Ranjan
    @ayushr2
    Great
    jeyaprabhuj-tts
    @jeyaprabhuj-tts
    @ayushr2 Can we share a memory map from the lisafs server to the client? The device memory map needs to be accessed inside the Sentry.
    Thomas H. Ptacek
    @tqbf
    Hello! I searched the backlog on this channel but didn't see anything, and if this isn't the best place to ask that's cool, I'll figure it out, but: I'm trying to get the netstack code in pkg/tcpip hooked up to a Linux tap interface and having trouble getting ARP to work. Should I expect ARP to work? It's easy enough to do it myself, but I'd feel silly writing a hacky ARP implementation if I'm just missing some option in an option struct somewhere (I've tried both with tcpip/link/ethernet and with my own endpoint implementation based loosely on the ethernet code). I'm hung up on my "neighbor" (the host side of the tap address, the source address of a ping to my netstack-hosted endpoint) being in the "unknown" state in the NUD code, so the host's ARP probes are getting dropped.
    Thomas H. Ptacek
    @tqbf
    For posterity's sake: this was me implementing WritePacket but not WritePackets in my endpoint type :) ARP works the way you'd expect it to work, without finagling
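The fix above comes down to implementing the batched write hook as well as the single-packet one. A self-contained sketch of the delegation pattern (the types and method signatures here are hypothetical stand-ins, not netstack's real tcpip API):

```go
package main

import "fmt"

// Packet is a stand-in for a network packet. Hypothetical type, not
// netstack's real PacketBuffer.
type Packet struct{ payload string }

// endpoint sketches a link endpoint that must handle both single and
// batched writes. Some traffic (such as neighbor-discovery probes) can
// arrive via the batched path, so implementing only the single-packet
// hook silently drops it.
type endpoint struct{ sent []string }

// WritePacket writes a single packet.
func (e *endpoint) WritePacket(p Packet) error {
	e.sent = append(e.sent, p.payload)
	return nil
}

// WritePackets writes a batch by delegating to WritePacket, returning
// how many packets were successfully written.
func (e *endpoint) WritePackets(pkts []Packet) (int, error) {
	for i, p := range pkts {
		if err := e.WritePacket(p); err != nil {
			return i, err
		}
	}
	return len(pkts), nil
}

func main() {
	e := &endpoint{}
	n, _ := e.WritePackets([]Packet{{"arp-probe"}, {"arp-reply"}})
	fmt.Println(n, e.sent) // 2 [arp-probe arp-reply]
}
```

Delegating the batch method to the single-packet method keeps the two paths consistent, so an endpoint never handles one and drops the other.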
    dorser
    @dorser
    Is there a way to increase SOMAXCONN for a sandboxed container? I also tried with host networking, but it didn't pick up the host's SOMAXCONN and keeps defaulting to 128.
    colin-grapl
    @colin-grapl
    How does gVisor work with AppArmor? I'm interested in using gVisor with Docker containers, and that's a feature I've seen with "regular" Docker.
    Bhasker Hariharan
    @hbhasker
    @dorser We do not support overriding SOMAXCONN yet. It's hard-coded to 1024 today, here: https://github.com/google/gvisor/blob/0c3abacb1c8cf11b914eb6be46dacaff0a993ebf/pkg/sentry/syscalls/linux/sys_socket.go#L50
    Support can be added, but it's not something most people run into, so I'm curious about your use case and why the default limit is not sufficient.
    Ah, looks like we lie about SOMAXCONN in our proc net file:
    it's set to 128
    even though internally we use 1024.
    I will open a bug to update the proc net file to reflect the real value.
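The mismatch above matters because listen(2) silently caps the requested backlog at the somaxconn limit, so an application asking for more gets less without any error. A minimal sketch of that capping logic (illustrative only, not gVisor's actual implementation):

```go
package main

import "fmt"

// clampBacklog mimics how a kernel caps a listen(2) backlog at the
// somaxconn limit. Illustrative sketch, not gVisor's actual code.
func clampBacklog(backlog, somaxconn int) int {
	if backlog > somaxconn {
		return somaxconn
	}
	return backlog
}

func main() {
	// An application asking for a backlog of 4096 is silently capped.
	fmt.Println(clampBacklog(4096, 128))  // 128: capped at the advertised value
	fmt.Println(clampBacklog(4096, 1024)) // 1024: capped at the internal value
	fmt.Println(clampBacklog(50, 1024))   // 50: small requests pass through
}
```

Because the cap is silent, an advertised limit of 128 and an internal limit of 1024 produce different effective backlogs for the same application code, which is why reporting the real value matters.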
    Constantine Peresypkin
    @pkit
    Is there any work going on to implement bind mounts? As far as I can see, some groundwork is there with MountNamespace, but it's not clear what the next steps are.
    lucasmanning
    @lucasmanning:matrix.org
    [m]
    Hi @pkit, I'm actually working on a change that will introduce shared subtrees (aka bind mounts) to gvisor right now. Initially we will just support shared mounts, but eventually if there is a need we can add slave mounts as well. Expect the changes to land in the next few weeks.
    Dobromir Marinov
    @DobromirM

    Hi, I've been playing around with gvisor on an EKS cluster and have the following setup:

    1) An EKS cluster with gvisor installed on every node.
    2) A runtime class with the following definition:

    apiVersion: node.k8s.io/v1
    kind: RuntimeClass
    metadata:
      name: gvisor 
    handler: runsc

    Using that setup I can create deployments running with gvisor by specifying the runtime class in the spec like: runtimeClassName: gvisor.
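As a concrete illustration of that spec (the pod name, image, and limit values here are hypothetical, not taken from the cluster in question), a pod selecting the gvisor RuntimeClass with resource limits might look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: traffic-demo        # hypothetical name
spec:
  runtimeClassName: gvisor  # matches the RuntimeClass defined above
  containers:
    - name: app
      image: nginx          # placeholder image
      resources:
        requests:
          cpu: 10m
          memory: 10Mi
        limits:
          cpu: 10m
          memory: 10Mi
```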

    This all works fine and I am able to get inside the pods and confirm that they are running with gvisor correctly.
    The problem, however, is that if I specify resource limits for the containers, they are never enforced.
    They work fine with the default runtime class and when a pod exceeds them it gets terminated, but with gvisor the pods exceed them and can even crash the whole node.
    I've read that all containers should have limits specified and I am running a single container, but the limits still don't get enforced.
    Any suggestions?

    Zach Koopmans
    @zkoopmans
    We're dealing with something similar within GKE Sandbox (https://cloud.google.com/kubernetes-engine/docs/concepts/sandbox-pods), where the sandboxed pods don't correctly report their resources to the rest of the cluster. We've tracked the issue to something called cAdvisor (https://github.com/google/cadvisor), so it could be a similar issue.
    What do kubectl top pod and kubectl describe node/{NODE_WHERE_POD_IS_RUNNING} say?
    Dobromir Marinov
    @DobromirM

    @zkoopmans
    kubectl top pod:

    NAME                       CPU(cores)   MEMORY(bytes)   
    traffic-5879846794-kgb2m   382m         353Mi

    kubectl describe node/...:

    Non-terminated Pods:          (4 in total)
      Namespace                   Name                               CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
      ---------                   ----                               ------------  ----------  ---------------  -------------  ---
      dmarinov                    traffic-5879846794-kgb2m           10m (0%)      10m (0%)    10Mi (0%)        10Mi (0%)      2m6s
      kube-system                 aws-node-btl8z                     25m (1%)      0 (0%)      0 (0%)           0 (0%)         2d19h
      kube-system                 kube-proxy-c9nl2                   100m (6%)     0 (0%)      0 (0%)           0 (0%)         2d19h
      kube-system                 metrics-server-847dcc659d-mb59d    100m (6%)     0 (0%)      200Mi (17%)      0 (0%)         5m21s
    Allocated resources:
      (Total limits may be over 100 percent, i.e., overcommitted.)
      Resource                    Requests     Limits
      --------                    --------     ------
      cpu                         235m (14%)   10m (0%)
      memory                      210Mi (18%)  10Mi (0%)
      ephemeral-storage           0 (0%)       0 (0%)
      hugepages-1Gi               0 (0%)       0 (0%)
      hugepages-2Mi               0 (0%)       0 (0%)
      attachable-volumes-aws-ebs  0            0
    Zach Koopmans
    @zkoopmans
    Huh... we should look into this. Do you mind filing a bug on GitHub? Include this info and also your containerd/config.toml, the gVisor config.toml, and the gVisor release version, just to be sure.
    Go ahead and assign it to me.
    Dobromir Marinov
    @DobromirM
    @zkoopmans I can't assign you to it, but here is the GitHub issue: google/gvisor#8047
    Let me know if you need any additional information.
    Markus Thömmes
    @markusthoemmes
    Heya! A bit of a newbie question: In "The true cost of containing" (https://www.usenix.org/system/files/hotcloud19-paper-young.pdf) and the Performance guide (https://gvisor.dev/docs/architecture_guide/performance/) there are references to "internal" or "sandbox-internal" tmpfs filesystems. If I mount a tmpfs through docker's tmpfs option, would that be internal or external?
    I Shall Be
    @ishallbethat
    I'm new to gvisor.
    I create a pod in kubernetes and the runtime is already set with runsc and I can confirm pod is started with gvisor starting.
    But when I typed "free -h" or "cat /proc/cpuinfo" I still see the host's settings.
    The pod was created with set limits and requests. Why doesn't gVisor's isolation work this way?
    I Shall Be
    @ishallbethat
    https://gvisor.dev/docs/user_guide/platforms/
    The platform-changing guide is for Docker. How about containerd?
    Jianfeng Tan
    @tanjianfeng

    Heya! A bit of a newbie question: In "The true cost of containing" (https://www.usenix.org/system/files/hotcloud19-paper-young.pdf) and the Performance guide (https://gvisor.dev/docs/architecture_guide/performance/) there are references to "internal" or "sandbox-internal" tmpfs filesystems. If I mount a tmpfs through docker's tmpfs option, would that be internal or external?

    In that case, it'll be handled like a bind mount and will go down the heavy path (synced with the kernel through the gofer).

    2 replies
    eirikr70
    @eirikr70:matrix.org
    [m]
    Hello folks, I'm trying to set up gVisor on ARM64. I've been installing it through apt. Docker info shows:
    Runtimes: runc runsc io.containerd.runc.v2 io.containerd.runtime.v1.linux
    The path in daemon.json is correct. But when I start a container with runtime: runsc, it seems to loop with no log. Any clue?
    eirikr70
    @eirikr70:matrix.org
    [m]
    Hi Wonderfall, I'm a fan of yours! 😀
    I discovered gVisor on your blog.
    Wonderfall
    @wonderfall:lysergide.dev
    [m]
    Nice to hear! If that helps, you can try debugging that way: https://gvisor.dev/docs/user_guide/debugging/
    Also, would runsc do echo "hello world" show anything at all?
    1 reply
    eirikr70
    @eirikr70:matrix.org
    [m]
    And debugging logs nothing.
    But I don't really need gVisor; I just have a personal homelab.
    The message when I kill the process is: creating container: cannot create sandbox: cannot read client sync file: waiting for sandbox to start: EOF
    Wonderfall
    @wonderfall:lysergide.dev
    [m]
    What's your kernel version, and is ptrace left enabled on your system?
    You're positive runsc --debug --debug-log=/tmp/runsc/ do echo "hello world" doesn't create logs in /tmp/runsc/?
    If that is still the case, you should take a stack trace as instructed in the debugging guide.
    1 reply
    eirikr70
    @eirikr70:matrix.org
    [m]
    eric@vault:~/hauk $ uname -a
    Linux vault 5.15.61-v8+ #1579 SMP PREEMPT Fri Aug 26 11:16:44 BST 2022 aarch64 GNU/Linux
    eirikr70
    @eirikr70:matrix.org
    [m]
    I get 3 logs: do.txt, gofer.txt and boot.txt. Each of them seems fine. It creates a sandbox process, which saturates a core endlessly.
    Samuel Mortenson
    @mortenson

    Heya! A bit of a newbie question: In "The true cost of containing" (https://www.usenix.org/system/files/hotcloud19-paper-young.pdf) and the Performance guide (https://gvisor.dev/docs/architecture_guide/performance/) there are references to "internal" or "sandbox-internal" tmpfs filesystems. If I mount a tmpfs through docker's tmpfs option, would that be internal or external?

    Good question - I'd like to know this too. I'm seeing less-than-ideal tmpfs performance when using /tmp to build Go binaries on an otherwise read-only filesystem.

    Samuel Mortenson
    @mortenson
    I was also wondering if using runsc without Docker has any performance benefits (or other negatives). I'd like to continue to use features like tmpfs and a read-only filesystem. I'm very new to the project.
    3 replies
    Derek Perez
    @perezd
    There's no way to interactively mess with runsc (e.g. runsc --rootless do /bin/sh), right?
    Armote
    @greatwielder:matrix.org
    [m]
    Is it possible to disable dependence on cgroups? runsc --rootless do echo ok : creating container: cannot set up cgroup for root: configuring cgroup: stat /sys/fs/cgroup/cpu: no such file or directory
    Armote
    @greatwielder:matrix.org
    [m]
    Found the flag ignore-cgroups. But now: creating container: cannot create sandbox: cannot read client sync file: waiting for sandbox to start: EOF
    Oddly, using -force-overlay=false solves this particular error.
    Armote
    @greatwielder:matrix.org
    [m]
    When using runsc do ..., is there a way to make the terminal work properly with TUI apps, or even just apps that dynamically overwrite lines? Right now the lines are just flushed periodically.