segfault: buffer.View possibly released twice resulting in nil chunk #10696

Open · ignoramous opened this issue Jul 27, 2024 · 9 comments
Labels: type: bug (Something isn't working)

Comments

@ignoramous (Contributor)

ignoramous commented Jul 27, 2024

Description

runtime error: invalid memory address or nil pointer dereference
	/home/jitpack/golang/go/src/runtime/panic.go:770 +0x124
gvisor.dev/gvisor/pkg/buffer.(*chunk).DecRef(...)
	/pkg/mod/gvisor.dev/gvisor@v0.0.0-20240629121841-891b40cf7fe0/pkg/buffer/chunk.go:106
gvisor.dev/gvisor/pkg/buffer.(*View).Release(0x4009490d20)
	/pkg/mod/gvisor.dev/gvisor@v0.0.0-20240629121841-891b40cf7fe0/pkg/buffer/view.go:107 +0x38
gvisor.dev/gvisor/pkg/buffer.(*Buffer).removeView(0x70980e91f0?, 0x4009490d20)
	/pkg/mod/gvisor.dev/gvisor@v0.0.0-20240629121841-891b40cf7fe0/pkg/buffer/buffer.go:37 +0x2c
gvisor.dev/gvisor/pkg/buffer.(*Buffer).Release(0x400a5957c8)
	/pkg/mod/gvisor.dev/gvisor@v0.0.0-20240629121841-891b40cf7fe0/pkg/buffer/buffer.go:73 +0x2c
gvisor.dev/gvisor/pkg/tcpip/transport/udp.(*endpoint).Read.deferwrap1.(*PacketBuffer).DecRef.1()
	/pkg/mod/gvisor.dev/gvisor@v0.0.0-20240629121841-891b40cf7fe0/pkg/tcpip/stack/packet_buffer.go:204 +0x3c
gvisor.dev/gvisor/pkg/tcpip/stack.(*packetBufferRefs).DecRef(0x400842f498?, 0x400842f4c8)
	/pkg/mod/gvisor.dev/gvisor@v0.0.0-20240629121841-891b40cf7fe0/pkg/tcpip/stack/packet_buffer_refs.go:133 +0x70
gvisor.dev/gvisor/pkg/tcpip/stack.(*PacketBuffer).DecRef(...)
	/pkg/mod/gvisor.dev/gvisor@v0.0.0-20240629121841-891b40cf7fe0/pkg/tcpip/stack/packet_buffer.go:199
gvisor.dev/gvisor/pkg/tcpip/transport/udp.(*endpoint).Read(_, {_, _}, {_, _, _})
	/pkg/mod/gvisor.dev/gvisor@v0.0.0-20240629121841-891b40cf7fe0/pkg/tcpip/transport/udp/endpoint.go:299 +0x49c
gvisor.dev/gvisor/pkg/tcpip/adapters/gonet.commonRead({0x4009f36000, 0x10000, 0x10000}, {0x7098661e48, 0x40088be008}, 0x4009198600, 0x400a1958c0, 0x400842fc98, {0x7098650000, 0x40083ece40})
	/pkg/mod/gvisor.dev/gvisor@v0.0.0-20240629121841-891b40cf7fe0/pkg/tcpip/adapters/gonet/gonet.go:311 +0xfc
gvisor.dev/gvisor/pkg/tcpip/adapters/gonet.(*UDPConn).ReadFrom(0x40083ece40, {0x4009f36000, 0x10000, 0x10000})
	/pkg/mod/gvisor.dev/gvisor@v0.0.0-20240629121841-891b40cf7fe0/pkg/tcpip/adapters/gonet/gonet.go:639 +0x74
gvisor.dev/gvisor/pkg/tcpip/adapters/gonet.(*UDPConn).Read(...)
	/pkg/mod/gvisor.dev/gvisor@v0.0.0-20240629121841-891b40cf7fe0/pkg/tcpip/adapters/gonet/gonet.go:630
github.com/celzero/firestack/intra/netstack.(*GUDPConn).Read(0x400a37ce30?, {0x4009f36000?, 0x4dc?, 0x10000?})
	/home/jitpack/build/intra/netstack/udp.go:188 +0x8c
io.copyBuffer({0x7446d7c278, 0x400a37ce30}, {0x7446d7c298, 0x4008dcaea0}, {0x4009f36000, 0x10000, 0x10000})
	/home/jitpack/golang/go/src/io/io.go:429 +0x18c
io.CopyBuffer({0x7446d7c278?, 0x400a37ce30?}, {0x7446d7c298?, 0x4008dcaea0?}, {0x4009f36000?, 0x7098651b40?, 0x7098651b40?})
	/home/jitpack/golang/go/src/io/io.go:402 +0x38
github.com/celzero/firestack/intra/core.Pipe({0x7446d7c278, 0x400a37ce30}, {0x7446d7c298, 0x4008dcaea0})
	/home/jitpack/build/intra/core/cp.go:36 +0x1cc
github.com/celzero/firestack/intra.upload({0x4009966550, 0x10}, {0x709865baa0, 0x4008dcaea0}, {0x709865baf8, 0x400a37ce30}, 0x40092a20c0)
	/home/jitpack/build/intra/common.go:39 +0x10c
created by github.com/celzero/firestack/intra.forward in goroutine 189043
	/home/jitpack/build/intra/common.go:67 +0xe4

buffer.View.chunk is nil'd in buffer.View.Release():

*v = View{}

And then transport.udp.endpoint.go possibly DecRef()s an already released buffer:

defer p.pkt.DecRef()

One possibility is that the buffer was racily released by transport.udp.Close(), but rcvMu is held there, so this edge case seems unlikely.
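
To make the suspected sequence concrete, here is a minimal sketch on simplified stand-in types (not gVisor's actual buffer code): the second Release calls DecRef through a chunk pointer that the first Release already zeroed, matching the panic inside chunk.DecRef in the trace above.

```go
package main

// Simplified stand-ins for gvisor.dev/gvisor/pkg/buffer's chunk and View.
type chunk struct{ refs int }

// DecRef panics when called through a nil *chunk, since it touches c.refs.
func (c *chunk) DecRef() { c.refs-- }

type View struct{ chunk *chunk }

func (v *View) Release() {
	v.chunk.DecRef() // nil after the first Release
	*v = View{}      // zeroes v.chunk, as buffer.View.Release does
}

func main() {
	v := &View{chunk: &chunk{refs: 1}}
	v.Release() // first release: fine
	v.Release() // double release: nil pointer dereference, as in the trace
}
```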

This crash was reported by our Android app (cgo): celzero/firestack#74

I don't understand this code well enough to propose a fix (I'm at a loss as to how the ref_template package even works; for example, where and how is chunkRefs defined or initialized?):

chunkRefs
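
For anyone puzzled by the same question: chunkRefs is not hand-written; it is instantiated from gVisor's refs template at build time (on the go branch it lands as a generated file, e.g. chunk_refs.go). A rough sketch of the shape of the generated type, with simplified semantics (not the actual generated code):

```go
package sketch

import "sync/atomic"

// chunkRefs sketches what the refs template generates per type; the
// real generated code differs in details (leak checking, etc.).
type chunkRefs struct {
	refCount atomic.Int64
}

// InitRefs starts the object with a single reference.
func (r *chunkRefs) InitRefs() { r.refCount.Store(1) }

// IncRef panics if the object was already released.
func (r *chunkRefs) IncRef() {
	if r.refCount.Add(1) <= 1 {
		panic("IncRef on released chunk")
	}
}

// DecRef destroys on the last release; going below zero means a
// double release, which is the failure mode suspected in this issue.
func (r *chunkRefs) DecRef(destroy func()) {
	switch v := r.refCount.Add(-1); {
	case v < 0:
		panic("DecRef on already-released chunk")
	case v == 0:
		destroy()
	}
}
```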

Steps to reproduce

with io.Copy(dst, gonet.UDPConn)

runsc version

Android

docker version (if using docker)

No response

uname

No response

kubectl (if using Kubernetes)

No response

repo state (if built from source)

No response

runsc debug logs (if available)

No response

ignoramous added the type: bug label Jul 27, 2024
@ignoramous (Contributor · Author)

cc: @manninglucas

manninglucas self-assigned this Jul 29, 2024
@manninglucas (Contributor)

manninglucas commented Jul 29, 2024

Looks like a use-after-free to me. This could be coming from inside netstack or from your application, which uses reference-counted parts of the netstack API. Looking through the netstack code, nothing sticks out to me. UDP isn't too complex, and that part of the API is fuzzed by syzkaller, so I would be surprised if something there were obviously broken. Do you know which net protocol this connection was using (IPv4 vs IPv6)?

Also, your "steps to reproduce" doesn't give me enough information on how to build and reproduce this issue myself. Adding some detail there will make it easier for me to help you debug this issue.

@manninglucas (Contributor)

Also, how frequently is this crash happening? Is it occasional or every time you try to run this code?

@ignoramous (Contributor · Author)

This could be coming from your application which uses reference counted parts of the netstack API.

Quite possible. The two places where we use netstack's refcounting APIs come from code adapted from netstack's fdbased/endpoint.go (loc1) and fdbased/processors.go (loc1, loc2, loc3) into our repo (mostly to support swapping fds so we can avoid creating a new LinkEndpoint).
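
For illustration only, a hedged sketch of that fd-swapping idea; the type and method names here are hypothetical, not firestack's actual code:

```go
package sketch

import (
	"os"
	"sync"
)

// swappableTUN guards the TUN device so it can be replaced without
// tearing down and recreating the LinkEndpoint that reads from it.
type swappableTUN struct {
	mu  sync.RWMutex
	tun *os.File
}

// Swap installs a new TUN fd and closes the old one. In-flight reads
// hold mu.RLock, so they finish against the old fd before Lock returns.
func (s *swappableTUN) Swap(newTUN *os.File) error {
	s.mu.Lock()
	old := s.tun
	s.tun = newTUN
	s.mu.Unlock()
	return old.Close()
}

func (s *swappableTUN) Read(p []byte) (int, error) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	return s.tun.Read(p)
}
```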

Do you know which net protocol this connection was using (ipv4 vs ipv6)

UDP IPv4 (it looks like a QUIC connection requested by uid 10268, which is Instagram).

udp: b85e6888d0a12280 (proxy? Exit) 192.168.0.144:40058 -> 157.240.23.128:443 for uid 10268

"steps to reproduce"

Apologies. What our Android app does:

  1. Create a (modified) fdbased endpoint with a TUN device (ref).
  2. Socksify incoming packets using netstack's (gonet) UDP (ref) & TCP handlers (ref).
  3. Pipe the socksified gonet.TCPConn / gonet.UDPConn to an actual egress (remote) connection to the same destination (upload: io.Copy(egressConn, gonetConn) and download: io.Copy(gonetConn, egressConn)) (ref).

The nil pointer dereference is hit in gonet.UDPConn.Read(), called by the upload io.Copy.
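
For concreteness, a minimal sketch of step 3's pipe; the helper and variable names are assumptions, not firestack's actual identifiers:

```go
package sketch

import (
	"io"
	"log"
	"net"
)

// pipe copies bytes both ways between the netstack-side connection
// (a gonet.UDPConn behind a net.Conn) and the real egress connection.
func pipe(gonetConn, egressConn net.Conn) {
	go func() {
		// upload: app -> remote. This is the io.Copy whose Read on
		// gonet.UDPConn hit the nil pointer dereference.
		if _, err := io.Copy(egressConn, gonetConn); err != nil {
			log.Printf("upload done: %v", err)
		}
	}()
	// download: remote -> app.
	if _, err := io.Copy(gonetConn, egressConn); err != nil {
		log.Printf("download done: %v", err)
	}
}
```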

Also, how frequently is this crash happening? Is it occasional or every time you try to run this code?

Rare. Around once a week of uptime.

As you point out, this crash could well be due to our app's incorrect use of the ref-counting APIs.

@manninglucas (Contributor)

Thanks for the extra info. I wasn't able to determine the root cause after looking through both the gVisor UDP code and your code for some time. I will be AFK until next week, but will look at it again when I get back. In the meantime, @kevinGC could you take a look and see if you can find any ref counting issue here?

@kevinGC (Collaborator)

kevinGC commented Jul 31, 2024

Took a look and, while I don't have anything definitive, I wonder whether it could be related to the chain of goroutines that starts in firestack/intra/netstack/udp.go:udpForwarder. For example, the function passed to NewForwarder:

  • Spawns a goroutine via h.Proxy or h.ProxyMux
  • Proxy spawns another goroutine via core.Go(..., forward, ...)
  • forward spawns another goroutine to call upload
  • ... which does some copying of bytes

I think that the first goroutine spawned here now has a pointer to a PacketBuffer that it never IncRefs or DecRefs, although I'm surprised this doesn't cause a memory leak; when a udp.ForwarderRequest is created in udp.Forwarder.HandlePacket, it calls pkt.IncRef and AFAICT that's never undone via DecRef.

Perhaps try DecRefing the packet after calling CreateEndpoint -- it would at least test whether memory is leaking.
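
Sketched against gVisor's public API, that suggestion would sit in the forwarder callback roughly like this (hedged: serve is a hypothetical app handler, and the DecRef line is left commented out because ForwarderRequest does not export its PacketBuffer):

```go
package sketch

import (
	"gvisor.dev/gvisor/pkg/tcpip"
	"gvisor.dev/gvisor/pkg/tcpip/stack"
	"gvisor.dev/gvisor/pkg/tcpip/transport/udp"
	"gvisor.dev/gvisor/pkg/waiter"
)

// serve is a hypothetical application handler for the new endpoint.
func serve(wq *waiter.Queue, ep tcpip.Endpoint) { ep.Close() }

func installForwarder(s *stack.Stack) {
	fwd := udp.NewForwarder(s, func(r *udp.ForwarderRequest) {
		var wq waiter.Queue
		ep, err := r.CreateEndpoint(&wq)
		if err != nil {
			return
		}
		// Hypothetical: balance the IncRef taken in
		// Forwarder.HandlePacket. Not possible from application
		// code today, since r's PacketBuffer is unexported.
		// r.pkt.DecRef()
		go serve(&wq, ep)
	})
	s.SetTransportProtocolHandler(udp.ProtocolNumber, fwd.HandlePacket)
}
```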

@ignoramous (Contributor · Author)

Thank you.

Perhaps try DecRefing the packet after calling CreateEndpoint -- it would at least test whether memory is leaking.

Looks like IncRef was introduced in ~Feb 2023 to resolve #8448 (comment). I couldn't find a way to DecRef ForwarderRequest's PacketBuffer, since it isn't exported:

pkt stack.PacketBufferPtr

IIRC, none of the other FOSS projects (ex1, ex2) we looked at (at the time) DecRef'd in their TCP/UDP Forwarders. gVisor's tests don't either:

fwd := udp.NewForwarder(s, func(r *udp.ForwarderRequest) {
	defer close(done)
	var wq waiter.Queue
	ep, err := r.CreateEndpoint(&wq)
	if err != nil {
		t.Fatalf("r.CreateEndpoint() = %v", err)
	}
	defer ep.Close()
	c := NewTCPConn(&wq, ep)
	buf := make([]byte, 256)
	n, e := c.Read(buf)
	if e != nil {
		t.Errorf("c.Read() = %v", e)
	}
	if _, e := c.Write(buf[:n]); e != nil {
		t.Errorf("c.Write() = %v", e)
	}
})

when a udp.ForwarderRequest is created in udp.Forwarder.HandlePacket, it calls pkt.IncRef and AFAICT that's never undone via DecRef.

Could DecRef be defer'd in ForwarderRequest.CreateEndpoint instead? (udp.Forwarder provides no way to process the handled PacketBuffer other than ForwarderRequest.CreateEndpoint anyway.)
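
On simplified stand-in types (not gVisor's actual udp package), the proposal would look roughly like this:

```go
package sketch

type PacketBuffer struct{ refs int }

func (p *PacketBuffer) IncRef() { p.refs++ }
func (p *PacketBuffer) DecRef() { p.refs-- }

type Endpoint struct{}

// ForwarderRequest mirrors udp.ForwarderRequest's unexported pkt field.
type ForwarderRequest struct {
	pkt *PacketBuffer
}

// CreateEndpoint is the only exported way to consume the request, so it
// is also the natural place to balance HandlePacket's IncRef.
func (r *ForwarderRequest) CreateEndpoint() (*Endpoint, error) {
	defer r.pkt.DecRef()
	ep := &Endpoint{}
	// ... deliver r.pkt to ep's receive queue, which would take its
	// own reference ...
	return ep, nil
}
```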

@kevinGC (Collaborator)

kevinGC commented Aug 26, 2024

Looking now, I may have been wrong in #8458. It should probably be up to the caller of NewForwarder to IncRef the packet. I think I looked at the TCP forwarder and naively copied it, but the TCP forwarder has to IncRef because it starts a new goroutine. That's not true for UDP.

In any case, a leak isn't causing the panic.
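
The asymmetry described here, sketched on simplified stand-ins (not gVisor's actual forwarders): the TCP forwarder hands the packet to a new goroutine and so must take its own reference, while the UDP forwarder handles the packet synchronously under the caller's reference.

```go
package sketch

type PacketBuffer struct{ refs int }

func (p *PacketBuffer) IncRef() { p.refs++ }
func (p *PacketBuffer) DecRef() { p.refs-- }

type forwarder struct{ handler func(*PacketBuffer) }

// TCP-style: the handler runs on a new goroutine that may outlive the
// caller's reference, so the forwarder must IncRef before returning.
func (f *forwarder) handleAsync(pkt *PacketBuffer) {
	pkt.IncRef()
	go func() {
		defer pkt.DecRef()
		f.handler(pkt)
	}()
}

// UDP-style: the handler runs synchronously, so the caller's reference
// is held for the whole call and no extra IncRef is needed.
func (f *forwarder) handleSync(pkt *PacketBuffer) {
	f.handler(pkt)
}
```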

@manninglucas (Contributor)

@ignoramous #10958 may fix this issue. Let me know if you see any improvement once it's merged.
