kernel: 5.4: import wireguard backport

Rather than using the clunky, old, slower wireguard-linux-compat
out-of-tree module, this commit does a patch-by-patch backport of
upstream's wireguard to 5.4. This specific backport is in widespread
use, being part of SUSE's enterprise kernel, Oracle's enterprise
kernel, Google's Android kernel, Gentoo's distro kernel, and probably
more I've forgotten about. It's definitely the "more proper" way of
adding wireguard to a kernel than the ugly compat.h hell of the
wireguard-linux-compat repo. And most importantly for OpenWrt, it
allows using the same module configuration code for 5.10 as for 5.4,
with no need for bifurcation.

These patches are from the backport tree, which is maintained in the
open here: https://git.zx2c4.com/wireguard-linux/log/?h=backport-5.4.y
I'll be sending PRs to update this as needed.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
From: "Jason A. Donenfeld" <Jason@zx2c4.com>
Date: Wed, 9 Sep 2020 13:58:15 +0200
Subject: [PATCH] wireguard: peerlookup: take lock before checking hash in
 replace operation

commit 6147f7b1e90ff09bd52afc8b9206a7fcd133daf7 upstream.

Eric's suggested fix for the previous commit's mentioned race condition
was to simply take the table->lock in wg_index_hashtable_replace(). The
table->lock of the hash table is supposed to protect the bucket heads,
not the entries, but actually, since all the mutator functions are
already taking it, it makes sense to take it too for the test to
hlist_unhashed, as a defense-in-depth measure, so that it no longer
races with deletions, regardless of what other locks are protecting
individual entries. This is sensible from a performance perspective
because, as Eric pointed out, the case of being unhashed is already the
unlikely case, so this won't add common contention. And comparing
instructions, this basically doesn't make much of a difference other
than pushing and popping %r13, used by the new `bool ret`. More
generally, I like the idea of locking consistency across table mutator
functions, and this might let me rest slightly easier at night.

Suggested-by: Eric Dumazet <edumazet@google.com>
Link: https://lore.kernel.org/wireguard/20200908145911.4090480-1-edumazet@google.com/
Fixes: e7096c131e51 ("net: WireGuard secure network tunnel")
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
---
 drivers/net/wireguard/peerlookup.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

--- a/drivers/net/wireguard/peerlookup.c
+++ b/drivers/net/wireguard/peerlookup.c
@@ -167,9 +167,13 @@ bool wg_index_hashtable_replace(struct i
 				struct index_hashtable_entry *old,
 				struct index_hashtable_entry *new)
 {
-	if (unlikely(hlist_unhashed(&old->index_hash)))
-		return false;
+	bool ret;
+
 	spin_lock_bh(&table->lock);
+	ret = !hlist_unhashed(&old->index_hash);
+	if (unlikely(!ret))
+		goto out;
+
 	new->index = old->index;
 	hlist_replace_rcu(&old->index_hash, &new->index_hash);
 
@@ -180,8 +184,9 @@ bool wg_index_hashtable_replace(struct i
 	 * simply gets dropped, which isn't terrible.
 	 */
 	INIT_HLIST_NODE(&old->index_hash);
+out:
 	spin_unlock_bh(&table->lock);
-	return true;
+	return ret;
 }
 
 void wg_index_hashtable_remove(struct index_hashtable *table,
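
For readers who want the end state rather than the delta, the sketch below
reconstructs roughly what wg_index_hashtable_replace() in
drivers/net/wireguard/peerlookup.c looks like once this patch is applied. It
is assembled from the hunks above: the opening line of the signature is
inferred from the truncated hunk header, the comment block between the two
hunks is elided with "...", and the added comment about locking is mine, so
treat it as an illustrative reading aid rather than verbatim driver source.

bool wg_index_hashtable_replace(struct index_hashtable *table,
				struct index_hashtable_entry *old,
				struct index_hashtable_entry *new)
{
	bool ret;

	spin_lock_bh(&table->lock);
	/* The unhashed check now runs with table->lock held, so it can no
	 * longer race with deletions performed by the other table mutators.
	 */
	ret = !hlist_unhashed(&old->index_hash);
	if (unlikely(!ret))
		goto out;

	new->index = old->index;
	hlist_replace_rcu(&old->index_hash, &new->index_hash);

	/* ...
	 * simply gets dropped, which isn't terrible.
	 */
	INIT_HLIST_NODE(&old->index_hash);
out:
	spin_unlock_bh(&table->lock);
	return ret;
}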