From: Nico Schottelius <nico.schottelius@ungleich.ch>
To: wireguard@lists.zx2c4.com
Subject: Outgoing ping required in container environment (even with PersistentKeepalive)
Date: Sun, 08 May 2022 08:34:46 +0200
Message-ID: <878rrcfrpv.fsf@ungleich.ch>
Good morning,
another day, more news from container land. When running WireGuard in
Kubernetes, deleting the existing pod and letting it be replaced by a new
one, I see the following behaviour:
- The assigned IPv4 address stops being reachable (good so far)
- The assigned IPv4 address then becomes briefly reachable, for about 5 seconds
- The assigned IPv4 address stops being reachable (not good)
- The assigned IPv4 address becomes reachable again if I trigger a ping
through the tunnel from inside the container (OK, but why?)
I am using the following configuration:
--------------------------------------------------------------------------------
[Interface]
PrivateKey = ...
ListenPort = 51828
Address = 185.155.29.81/32
PostUp = iptables -t nat -I POSTROUTING -o ipv4 -j MASQUERADE
# upstream
[Peer]
Endpoint = vpn-...:51820
PublicKey = 6BRnQ+dmeFzVCH9RbM1pbJ7u3y3qrl+zUzzYCmC88kE=
AllowedIPs = 0.0.0.0/1, 128.0.0.0/1
PersistentKeepalive = 25
--------------------------------------------------------------------------------
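As a side note for anyone trying to reproduce this: one way to check whether the keepalives are actually keeping the session alive is to watch the peer's last handshake time via `wg show <interface> latest-handshakes`. A minimal sketch (the interface name `wg0` and the 180-second staleness threshold are assumptions on my part, not something from my setup):

```shell
#!/bin/sh
# Hedged diagnostic sketch: decide whether a WireGuard peer's last
# handshake is recent enough. Under steady traffic (including
# keepalives) the handshake should not grow much older than a couple
# of minutes; 180 seconds is an assumed threshold.

# is_stale LAST_HANDSHAKE_EPOCH NOW_EPOCH
# Prints "stale" if the last handshake is older than 180 seconds,
# "fresh" otherwise.
is_stale() {
    last=$1
    now=$2
    age=$((now - last))
    if [ "$age" -gt 180 ]; then
        echo stale
    else
        echo fresh
    fi
}

# Inside the pod one would feed it real values, e.g.:
#   last=$(wg show wg0 latest-handshakes | awk '{print $2}')
#   is_stale "$last" "$(date +%s)"
is_stale 1000 1100   # prints "fresh"
is_stale 1000 1300   # prints "stale"
```

If the handshake goes stale right after the pod restart, the keepalives are presumably not reaching the upstream peer any more.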
And the following container specification:
--------------------------------------------------------------------------------
spec:
  containers:
  - name: wireguard
    image: ungleich/ungleich-wireguard:{{ $.Chart.AppVersion }}
    # We only support 1 listener at the moment
    # Outgoing connections are not affected
    ports:
    - containerPort: 51820
    securityContext:
      capabilities:
        # NET_ADMIN for wg
        # NET_RAW for iptables
        add: ["NET_ADMIN", "NET_RAW"]
    volumeMounts:
    - name: wireguard-config
      mountPath: "/etc/wireguard"
    resources:
      requests:
        memory: {{ $v.memory | default "1Gi" }}
        cpu: {{ $v.cpu | default "1000m" }}
      limits:
        memory: {{ $v.memory | default "1Gi" }}
        cpu: {{ $v.cpu | default "1000m" }}
--------------------------------------------------------------------------------
The strange thing is that after issuing the ping once inside the
container:
--------------------------------------------------------------------------------
[8:41] nb2:~% kubectl -n wireguard exec -ti wireguard-vpn-server-7db664db6f-zl4fz -- ping -c2 -4 google.com
PING google.com (172.217.168.78): 56 data bytes
64 bytes from 172.217.168.78: seq=0 ttl=116 time=9.110 ms
64 bytes from 172.217.168.78: seq=1 ttl=116 time=6.664 ms
--- google.com ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 6.664/7.887/9.110 ms
--------------------------------------------------------------------------------
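When debugging from the upstream side, it might also be worth comparing the endpoint the server has learned for this peer before and after the pod restart, since a replaced pod will typically come back with a different NAT mapping. A small hypothetical helper to extract the UDP port from an `ip:port` endpoint string as printed by `wg show <interface> endpoints` (the interface name and the stale-endpoint idea are assumptions on my part):

```shell
#!/bin/sh
# Hypothetical helper: extract the UDP port from an IPv4 "ip:port"
# endpoint string, e.g. as printed by `wg show <interface> endpoints`.
endpoint_port() {
    # Strip everything up to and including the last ':'
    echo "${1##*:}"
}

# On the upstream server one would run something like:
#   wg show wg0 endpoints | awk '{print $2}'
# and compare the port before and after the pod restart, e.g.:
endpoint_port "203.0.113.5:42815"   # prints "42815"
```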
The connection then stays established and works correctly.
If anyone has a pointer on what might be going on, any help is
appreciated.
Best regards,
Nico
--
Sustainable and modern Infrastructures by ungleich.ch
Thread overview: 3 messages
2022-05-08 6:34 Nico Schottelius [this message]
2022-05-08 19:53 ` Nico Schottelius
2022-05-08 20:09 ` Roman Mamedov