• Hacker News

  • Titan2189 6 hours

    > [...] that root was just my unprivileged podman user on the host

    Couldn't you then simply re-run the exploit again as unprivileged podman user and gain root on the host?

    tuananh 44 minutes

    Did anyone try it? It's supposed to work, right?

  • netheril96 3 hours

    If the goal is just preventing full root privileges, a CapabilityBoundingSet in a systemd unit will do.

    However, copy.fail can be used in many other ways not contained by containers or the above settings. For example, it can modify /etc/ssl/certs to prepare for MitM attacks. If you have multiple containers based on the same image, then one compromised CA set affects the others.

    est 2 hours

    I added these

        AmbientCapabilities=CAP_NET_BIND_SERVICE
        CapabilityBoundingSet=CAP_NET_BIND_SERVICE
        NoNewPrivileges=yes
    
    to my .service. Is it good enough?
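
    For context, here is a broader systemd sandboxing sketch that such directives are often paired with (directive names are from systemd.exec(5); the selection and values are illustrative assumptions, not a verdict on any particular unit):

```ini
[Service]
# Keep only the one capability the service actually needs
AmbientCapabilities=CAP_NET_BIND_SERVICE
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
NoNewPrivileges=yes
# Reject unusual socket families outright -- this alone denies AF_ALG
RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6
# Read-only OS view, no module loading, private /tmp and /dev
ProtectSystem=strict
ProtectKernelModules=yes
PrivateTmp=yes
PrivateDevices=yes
```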

  • 2bitencryption 6 hours

    tl;dr - within the container, the exploit works, and elevates to root (uid 0) within the container - BUT because that namespace actually maps to uid 1000 (the user) outside the container, the escalation does not flow up to the host.

    But… does this escape the container? If not (the author seems to indicate it does not), then does it matter whether you are in Docker or rootless Podman, since the end result is always the same: you have elevated to root within the container? If the rest of the container filesystem isolation does its job, the outcome is identical. Though I guess another chained exploit to escape the container would be worse in Docker? Do I have that right?

    firesteelrain 2 hours

    This is a problem, and most people hadn’t considered it before, because the caching is done to speed up build-pipeline performance:

    “ While rootless containers prevent the attacker from escalating to host root, the page cache is still shared across the host. Containers that re-use the same base image layers share the same cached pages for those layers — if a malicious CI job corrupts a binary in the page cache, other containers launched from that same image could end up executing the poisoned version.”

  • averi 6 hours

    [flagged]

    M_bara 5 hours

    > (like reading env vars and sending them to an external server) it'd not be able to send credentials or fetch a malware remotely at all due to the DNS queries being intercepted by eBPF and being sent to a CoreDNS proxy.

    Wouldn’t the exploit then just use ip addresses directly?

  • washbasin 7 hours

    Please post a tl;dr at the top or even in the subject. Many of us are scrambling to patch/reboot our **.

    PunchyHamster 1 hour

    tl;dr (not from article)

        echo 'install algif_aead /bin/false' > /etc/modprobe.d/disable-algif.conf
    
    that just prevents the faulty module from loading, so you have time to fix it properly (kernel upgrade).

    Technically there should be zero impact (the very few tools that use it will fall back to userspace); I haven't even found the module loaded anywhere in our infrastructure.

    Then check whether it is loaded, and if it is, unload it or reboot.
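
    A quick way to do that check (a sketch; it reads /proc/modules directly, which is the same source lsmod uses):

```shell
# Check whether the vulnerable algif_aead module is currently loaded.
# /proc/modules lists loaded kernel modules; if it's absent, the AF_ALG
# AEAD path can't be reached even on an unpatched kernel.
if grep -q '^algif_aead ' /proc/modules 2>/dev/null; then
    loaded=yes
    echo "algif_aead is loaded -- blacklist it, then unload or reboot"
else
    loaded=no
    echo "algif_aead is not loaded"
fi
```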

    isityettime 6 hours

    It already has a table of contents. The heading titled "why rootless containers stopped the escalation" is your tl;dr.

    donaldjbiden 6 hours

    This isn't a new CVE. It's just documenting what happened when this person ran the exploit inside a certain type of container.

  • walletdrainer 4 hours

    This feels LLM generated, lots of emdashes and even more text around a completely false premise.

    cpach 1 hour

    What is the false premise in the article?

    Retr0id 1 hour

    That rootless containers mitigate kernel exploits.

  • eqvinox 7 hours

    Running sstrip on an ELF binary is called ELF "golfing"? TIL…

    Retr0id 1 hour

    It is, although real ELF golfers consider that a little naive.

    repelsteeltje 51 minutes

    Sorry for posting a n00b question, but could you share etymology on this term golfing?

    Retr0id 45 minutes

    In golf, lower scores are better.

    mbreese 15 minutes

    It’s manipulating the binary to make it as small as possible. In golf, the lowest score wins. So, in this context, the smallest binary that still works wins.

  • averi 6 hours

    [flagged]

    hlieberman 5 hours

    That's true... for the exploit demo that they released. The primitive that underlies the exploit, however -- a page cache write -- can easily bypass the container boundary. One only needs to hook an executable that is also present on the host.

    ezequiel-garzon 5 hours

    Please reply instead of (or in addition to) tagging the user you're replying to.

    pjmlp 4 hours

    Tagging isn't a feature in HN.

    ramon156 4 hours

    Thanks for the bikeshedding, they meant mentioning.

    pjmlp 4 hours

    It is also not supported, beyond people seeing their nick by sheer luck.

    anygivnthursday 4 hours

    Or running their Claw scraping HN comments periodically for their mentions.

    zenoprax 3 hours

    If I see my points shoot up a bit I check my comment history to see what caused it.

  • amluto 6 hours

    Sigh.

    1. I would hope the default seccomp policy blocks AF_ALG in these containers. I bet it doesn’t. Oh well.

    2. The write-to-RO-page-cache primitive STILL WORKED! It’s just that the particular exploit used had no meaningful effect in the already-root-in-a-container context. If you think you are safe, you’re probably wrong. All you need to make a new exploit is an fd representing something that you aren’t supposed to be able to write. This likely includes CoW things where you are supposed to be able to write after CoW but you aren’t supposed to be able to write to the source.

    So:

    - Are you using these containers with a common image, or even a common layer in an image, to isolate dangerous workloads from each other? Oops, they can modify the image layers and corrupt each other. There goes any sort of cross-tenant isolation.

    - What if you get an fd backed by the zero page and write to it? This can’t result in anything that the administrator would approve of.

    - What if you ro-bind-mount something in? It’s not ro any more.

    raesene9 4 hours

    I've not looked at podman, but I believe moby/docker does now block this: https://github.com/moby/profiles/commit/7158007a83005b14a24f...
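
    For reference, this is the shape such a rule takes in a Docker-style seccomp profile (a sketch, not the actual moby commit; AF_ALG is address family 38 and EAFNOSUPPORT is errno 97 on Linux):

```json
{
  "names": ["socket"],
  "action": "SCMP_ACT_ERRNO",
  "errnoRet": 97,
  "args": [
    {"index": 0, "value": 38, "op": "SCMP_CMP_EQ"}
  ]
}
```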

    fguerraz 3 hours

    I just contributed this [1] which does what you want for seccomp. Well, not by default, but profiling is now effective against this attack.

    Oh, and this [2] just happened

    [1] https://github.com/containers/oci-seccomp-bpf-hook/pull/209 [2] https://github.com/moby/moby/pull/52501

    PunchyHamster 1 hour

    > I would hope the default seccomp policy blocks AF_ALG in these containers. I bet it doesn’t. Oh well.

    there's no reason it would be in the default policy. Else might as well block every socket and just multiplex everything on stdin/out

    SV_BubbleTime 4 minutes

    >might as well block every socket and just multiplex everything on stdin/out

    You may be on to something…

    cduzz 8 minutes

    I'd have guessed that the default paranoia-first policy would be "drop everything; verify what you need" which would include AF_ALG.

    share and enjoy!

    hlieberman 6 hours

    In fact, the authors specifically say on the very first line of their website that the copy/fail primitive can be used as a container escape. The entire premise of this article is flawed and irresponsible.

    5 hours

    jeroenhd 2 hours

    > I would hope the default seccomp policy blocks AF_ALG in these containers. I bet it doesn’t. Oh well.

    I see a lot of projects blocking those sockets in containers as a response to this exploit, but it seems rather strange to me. We're disabling a cryptographic performance feature entirely because there was a security bug in it that one time? It's a rather weird default to use. It's not like we mass-disable kernel modules every time someone discovers an EoP bug, is it? Did we blacklist OpenSSL's binaries after Heartbleed?

    I suppose it makes sense as a default on vulnerable kernels (though people running vulnerable kernels should put effort into patching rather than workarounds in my opinion), but these defaults are going to be around ten years from now when copy.fail is a distant memory.

    Retr0id 50 minutes

    iiuc the AF_ALG interface only offers real performance wins if you have specialized hardware that the kernel can offload computations to. If you're not using that hardware, there's little reason not to do the crypto in userspace.
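
    That fallback pattern is easy to sketch: ask the kernel via an AF_ALG socket, and fall back to userspace hashlib whenever the socket family is missing or blocked (the `sha256` helper name is made up for illustration; AF_ALG itself is Linux-only):

```python
import hashlib
import socket

def sha256(msg: bytes) -> bytes:
    """Hash via the kernel's AF_ALG interface when available, falling
    back to userspace (hashlib) when it isn't -- e.g. on non-Linux
    systems, with the algif_hash module blacklisted, or with AF_ALG
    blocked by a seccomp profile."""
    try:
        with socket.socket(socket.AF_ALG, socket.SOCK_SEQPACKET) as alg:
            alg.bind(("hash", "sha256"))
            op, _ = alg.accept()
            with op:
                op.sendall(msg)
                return op.recv(32)   # SHA-256 digest is 32 bytes
    except (AttributeError, OSError):
        return hashlib.sha256(msg).digest()

digest = sha256(b"hello")
```

    Either path yields the identical digest, which is why blacklisting the module is low-impact for software written this way.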

    nubinetwork 1 hour

    > We're disabling a cryptographic performance enhancement feature entirely because there was a security bug in them that one time?

    To my knowledge, not many things were using the in-kernel code anyway; the recommended way is to use userland tools...

    It's optional for OpenSSL; systemd apparently needs it, but deleting the module from one of my systems didn't cause any issues. /shrug

    PunchyHamster 1 hour

    I haven't had it loaded on 100s of servers, ranging in kernel version from 5.10 to 6.14. The use is just that low.

    e12e 1 hour

    In fairness, after Heartbleed there was quite a push to move away from OpenSSL - to Google's BoringSSL, OpenBSD's LibreSSL, Mozilla's NSS, or GnuTLS - but the alternative here would be moving to a different kernel, like FreeBSD or OpenSolaris/Illumos ...

    PunchyHamster 1 hour

    that's just moving to a kernel that has 1000x fewer eyes on it. Yeah, sure, it will have fewer exploits, but purely because nobody bothers to look when there are much juicier targets on Linux.

    But I am disappointed that we still don't have a clear OpenSSL successor; there is nothing to be salvaged from this mess of a project.

    DarkUranium 29 minutes

    1000x fewer eyes is true, but also: Linux, even in the kernel, has a long history of "move fast and break things".

    Yes, the syscall API is (famously) stable, but the drivers, for example, are such a mess that many non-Linux projects prefer to take BSD drivers for e.g. WiFi despite them supporting far fewer devices (even if the Linux ones would be license compatible).

    dwroberts 5 hours

    There is an addendum at the bottom where they admit the page corruption is still problematic even with rootless podman.

    Although using this to justify their migration to micro-VMs is very strange to me. Sure for this CVE it would have been better, but surely for a future attack it could hit a component shared across VMs but not containers? Are people really choosing technology based on CVE-of-the-week?

    anygivnthursday 4 hours

    Containers were never a security boundary. VMs have better isolation, which is why people choose them for security. Containers are convenience and usually have better performance.

    graemep 15 minutes

    They may not provide the same isolation as VMs, but they clearly do limit some attacks. VMs do not provide the same isolation as physically separate hardware either.

    I would have thought they provide better isolation than using multiple users which is the traditional security boundary.

    It might depend on what you mean by a container. Are sandboxes such as Bubblewrap and Firejail containers?

    ButlerianJihad 3 hours

    Containers are a convenience boundary and they increase complexity of your risk assessments.

    It is easy for security scanners to scan a Linux system, but will they inspect your containers, and snaps, and flatpaks, and VMs? It is easy for DevOps to ssh into your Linux server, but can they also log in to each container and do useful things? Your patches and all dependencies are up to date on your server, but those containers are still dragging around legacy dependencies, by design. Is your backup system aware of containers, and capable of creating backup images or files suitable for restoring back to service?

    necovek 2 hours

    Security scanners already support most container and VM image formats in widespread use.

    Does this increase complexity? Yes, it does. Is it worth the cost? Depends on each individual case IMO.

    firesteelrain 2 hours

    You need a tool like Anchore or PrismaCloud to scan the container images, then monitor them at runtime with PrismaCloud. Trellix can “scan”, however most people turn it off or exclude container directories on the host because it can interfere with the running container.

    dwroberts 3 hours

    I see the ‘not a security boundary’ thing repeated constantly, and while it makes sense (e.g. they’re sharing the underlying kernel, or at least some access to it), if you think about it a little more VMs are not magically different: they are better isolated, but VMs on the same host still share that host in common. A CVE next week that allows corruption of host state affecting e.g. every VM under a particular hypervisor would be no less damaging than this CVE is to containers.

    necovek 2 hours

    You are obviously right that these are similar in principle: a VM isolation exploit would lead to the same exposure as container-related isolation exploits.

    VMs are considered vastly better because the surface area where exploits can happen is smaller and/or better isolated within the kernel.

    If you are arguing the latter is not true — and we are all collectively hand-waving away a big chunk of the surface area, so that may be the case — it would help to be explicit about why you believe an exploit in that area is similarly likely.

    robertlagrant 48 minutes

    I would say it's the fact that "not a security boundary" appears to be a pass/fail statement, whereas the reality is more like a security continuum, along which VMs are further than containers.