Development discussion of WireGuard
* How to work around the fact that a WireGuard server is a single point of failure
@ 2019-04-27 22:09 Garbage
From: Garbage @ 2019-04-27 22:09 UTC (permalink / raw)
  To: wireguard

If I understand the architecture of WireGuard correctly, the server is a single point of failure: when the server goes down, no client can communicate with any other client.
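
To make sure we are talking about the same architecture, this is roughly the hub-and-spoke setup I mean (keys, addresses and hostnames are placeholders, not a real config):

  # wg0.conf on the "server" (hub); IP forwarding is enabled so it relays client-to-client traffic
  [Interface]
  Address = 10.0.0.1/24
  ListenPort = 51820
  PrivateKey = <hub private key>

  [Peer]
  # client A
  PublicKey = <client A public key>
  AllowedIPs = 10.0.0.2/32

  [Peer]
  # client B
  PublicKey = <client B public key>
  AllowedIPs = 10.0.0.3/32

  # wg0.conf on client A; the whole tunnel subnet is routed via the hub,
  # so when the hub is down, client A can no longer reach client B
  [Interface]
  Address = 10.0.0.2/24
  PrivateKey = <client A private key>

  [Peer]
  PublicKey = <hub public key>
  Endpoint = hub.example.org:51820
  AllowedIPs = 10.0.0.0/24
  PersistentKeepalive = 25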

I'm looking for a way to connect one or two handfuls of low-resource VPSes from different providers, and WireGuard seems to be _the_ solution when it comes to ease of setup and performance. Looking for credible sources, I only found this post: https://lists.zx2c4.com/pipermail/wireguard/2019-January/003788.html

Is there some documentation that describes how to set up a "high availability" or "hot standby" configuration? The VPSes will run Kubernetes, and because I do not want to spend extra bucks on a load balancer service, I decided that a DNS failover will suffice for my Kubernetes masters. So a comparable quality of service / duration of service interruption would be just fine for the WireGuard service too.
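
What I naively picture as "hot standby" (this is just my own assumption, not taken from any documentation) is a second machine that carries the same interface private key and tunnel address as the primary, so the clients' existing [Peer] entries would keep working and only the endpoint would have to change:

  # wg0.conf on the standby "server", kept in sync with the primary
  [Interface]
  Address = 10.0.0.1/24                        # same tunnel address as the primary
  ListenPort = 51820
  PrivateKey = <same private key as primary>   # peers are identified by key, so clients need no changes

  [Peer]
  PublicKey = <client A public key>
  AllowedIPs = 10.0.0.2/32

  [Peer]
  PublicKey = <client B public key>
  AllowedIPs = 10.0.0.3/32

If that assumption holds, the remaining question is only how the clients learn the standby's address, which is where DNS failover would come in.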

Will a DNS-based failover work for WireGuard servers? Or am I bound to a solution that uses a static IP (that of a "real" load balancer) and switches to the standby WireGuard server in case the first one goes down?
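
As far as I can tell, a client resolves the Endpoint hostname only when the peer is configured (wg-quick up / wg setconf), not on every handshake, so after a DNS failover I would apparently have to re-apply the endpoint myself, along these lines (hostname and key are again placeholders):

  # re-resolve the hub's DNS name and update the peer endpoint;
  # could run from cron or a systemd timer whenever handshakes stop succeeding
  wg set wg0 peer <hub public key> endpoint hub.example.org:51820

I also noticed that wireguard-tools ships a reresolve-dns script in contrib that seems to do exactly this, but I am not sure whether that is the intended answer to the failover question.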
_______________________________________________
WireGuard mailing list
WireGuard@lists.zx2c4.com
https://lists.zx2c4.com/mailman/listinfo/wireguard
