GitHub messages for voidlinux
* [PR PATCH] chromium: add upstream patch to help with segfaults
@ 2020-05-23  2:38 pbui
  2020-05-23  7:46 ` [PR PATCH] [Merged]: " Hoshpak
  0 siblings, 1 reply; 2+ messages in thread
From: pbui @ 2020-05-23  2:38 UTC (permalink / raw)
  To: ml

[-- Attachment #1: Type: text/plain, Size: 825 bytes --]

There is a new pull request by pbui against master on the void-packages repository

https://github.com/pbui/void-packages chromium
https://github.com/void-linux/void-packages/pull/22215

chromium: add upstream patch to help with segfaults
[ci skip]

- Built for x86_64, and x86_64-musl.
- Tested on x86_64.

Currently chromium is sporadically segfaulting.  This adds a patch I had
previously skipped when updating to 83.0.4103.61 because I didn't think
it applied to us (upstream notes it is for the GCC toolchain, but we use
LLVM).

From initial testing, this appears to help eliminate the segfaults, but
I don't have a reliable way to confirm it (that is, I don't have a
precise series of steps to trigger the segfault).
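
For anyone skimming the attached patch, the core of the upstream fix is
small. The following is a minimal sketch of the idea, using hypothetical
stand-in types (ObjectHost, ContainerHost) rather than the real Chromium
classes:

  #include <cstdint>
  #include <map>
  #include <memory>

  struct ObjectHost {
    // In Chromium, destroying the object host can transitively destroy
    // the ContainerHost that owns the map below (object host -> version
    // -> provider host -> container host).
  };

  struct ContainerHost {
    std::map<int64_t, std::unique_ptr<ObjectHost>> object_hosts_;

    void RemoveObjectHost(int64_t version_id) {
      // Unsafe: erasing directly can run ~ObjectHost() while the map is
      // mid-erase; if that destruction also tears down |this|, the map
      // node is destroyed twice and chromium crashes:
      //   object_hosts_.erase(version_id);

      // Safe (what the upstream patch does): take ownership out of the
      // map first, then erase the now-empty entry. The object host is
      // only destroyed when |to_be_deleted| goes out of scope, after
      // the map is back in a consistent state.
      std::unique_ptr<ObjectHost> to_be_deleted =
          std::move(object_hosts_[version_id]);
      object_hosts_.erase(version_id);
    }
  };

The real change in service_worker_container_host.cc below is the same
two lines, plus a DCHECK that the moved-out pointer is non-null.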

A patch file from https://github.com/void-linux/void-packages/pull/22215.patch is attached

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #2: github-pr-chromium-22215.patch --]
[-- Type: text/x-diff, Size: 8389 bytes --]

From 82827182aa26aadd21727435db589d606210fcf5 Mon Sep 17 00:00:00 2001
From: Peter Bui <pbui@github.bx612.space>
Date: Fri, 22 May 2020 20:58:13 -0400
Subject: [PATCH] chromium: add upstream patch to help with segfaults

[ci skip]

- Built for x86_64, and x86_64-musl.
- Tested on x86_64.

Currently chromium is sporadically segfaulting.  This adds a patch I had
previously skipped when updating to 83.0.4103.61 because I didn't think
it applied to us (upstream notes it is for the GCC toolchain, but we use
LLVM).

From initial testing, this appears to help eliminate the segfaults, but
I don't have a reliable way to confirm it (that is, I don't have a
precise series of steps to trigger the segfault).
---
 ...struction-of-ServiceWorkerObjectHost.patch | 138 ++++++++++++++++++
 srcpkgs/chromium/template                     |   2 +-
 2 files changed, 139 insertions(+), 1 deletion(-)
 create mode 100644 srcpkgs/chromium/patches/upstream-avoid-double-destruction-of-ServiceWorkerObjectHost.patch

diff --git a/srcpkgs/chromium/patches/upstream-avoid-double-destruction-of-ServiceWorkerObjectHost.patch b/srcpkgs/chromium/patches/upstream-avoid-double-destruction-of-ServiceWorkerObjectHost.patch
new file mode 100644
index 00000000000..14c59582e65
--- /dev/null
+++ b/srcpkgs/chromium/patches/upstream-avoid-double-destruction-of-ServiceWorkerObjectHost.patch
@@ -0,0 +1,138 @@
+From bd59ce32629ef684624821419c43967b73d2989e Mon Sep 17 00:00:00 2001
+From: Hiroki Nakagawa <nhiroki@chromium.org>
+Date: Fri, 8 May 2020 08:25:31 +0000
+Subject: [PATCH] ServiceWorker: Avoid double destruction of
+ ServiceWorkerObjectHost on connection error
+
+This CL avoids the case where ServiceWorkerObjectHost is destroyed twice
+on ServiceWorkerObjectHost::OnConnectionError() when Chromium is built
+with the GCC build toolchain.
+
+> How does the issue happen?
+
+ServiceWorkerObjectHost has a cyclic reference like this:
+
+ServiceWorkerObjectHost
+  --([1] scoped_refptr)--> ServiceWorkerVersion
+    --([2] std::unique_ptr)--> ServiceWorkerProviderHost
+      --([3] std::unique_ptr)--> ServiceWorkerContainerHost
+        --([4] std::unique_ptr)--> ServiceWorkerObjectHost
+
+Note that ServiceWorkerContainerHost manages ServiceWorkerObjectHost in
+map<int64_t version_id, std::unique_ptr<ServiceWorkerObjectHost>>.
+
+When ServiceWorkerObjectHost::OnConnectionError() is called, the
+function removes the reference [4] from the map, and destroys
+ServiceWorkerObjectHost. If the object host has the last reference [1]
+to ServiceWorkerVersion, the destruction also cuts off the references
+[2] and [3], and destroys ServiceWorkerProviderHost and
+ServiceWorkerContainerHost.
+
+This seems to work well on Chromium's default toolchain, but not on
+the GCC toolchain. According to the report, destruction of
+ServiceWorkerContainerHost happens while the map owned by the container
+host is erasing the ServiceWorkerObjectHost, and this results in a
+crash due to double destruction of the object host.
+
+I don't know why this happens only on the GCC toolchain, but I suspect
+the order of object destruction in std::map::erase() could differ
+depending on the toolchain.
+
+> How does this CL fix this?
+
+The ideal fix is to redesign the ownership model of
+ServiceWorkerVersion, but it's not feasible in the short term.
+
+Instead, this CL avoids destruction of ServiceWorkerObjectHost during
+std::map::erase(). The new code takes ownership of the object host
+from the map first, and then erases the entry from the map. This
+separates erasing the map entry from destroying the object host,
+so the crash should no longer happen.
+
+Bug: 1056598
+Change-Id: Id30654cb575bc557c42044d6f0c6f1f9bfaed613
+Reviewed-on: https://chromium-review.googlesource.com/c/chromium/src/+/2094496
+Reviewed-by: Makoto Shimazu <shimazu@chromium.org>
+Commit-Queue: Hiroki Nakagawa <nhiroki@chromium.org>
+Cr-Commit-Position: refs/heads/master@{#766770}
+---
+ .../service_worker_container_host.cc          | 10 +++++
+ .../service_worker_object_host_unittest.cc    | 38 +++++++++++++++++++
+ 2 files changed, 48 insertions(+)
+
+diff --git a/content/browser/service_worker/service_worker_container_host.cc b/content/browser/service_worker/service_worker_container_host.cc
+index ec7fb1449af..98c62093b0e 100644
+--- content/browser/service_worker/service_worker_container_host.cc
++++ content/browser/service_worker/service_worker_container_host.cc
+@@ -669,6 +669,16 @@ void ServiceWorkerContainerHost::RemoveServiceWorkerObjectHost(
+     int64_t version_id) {
+   DCHECK_CURRENTLY_ON(ServiceWorkerContext::GetCoreThreadId());
+   DCHECK(base::Contains(service_worker_object_hosts_, version_id));
++
++  // The ServiceWorkerObjectHost to be deleted may hold the last reference to
++  // the ServiceWorkerVersion that indirectly owns this ServiceWorkerContainerHost.
++  // If we erase the object host directly from the map, |this| could be deleted
++  // during the map operation and crash. To avoid this, we take ownership
++  // of the object host from the map first, and then erase the entry
++  // from the map. See https://crbug.com/1056598 for details.
++  std::unique_ptr<ServiceWorkerObjectHost> to_be_deleted =
++      std::move(service_worker_object_hosts_[version_id]);
++  DCHECK(to_be_deleted);
+   service_worker_object_hosts_.erase(version_id);
+ }
+ 
+diff --git a/content/browser/service_worker/service_worker_object_host_unittest.cc b/content/browser/service_worker/service_worker_object_host_unittest.cc
+index 408d7c1f9d1..6eab59040ab 100644
+--- content/browser/service_worker/service_worker_object_host_unittest.cc
++++ content/browser/service_worker/service_worker_object_host_unittest.cc
+@@ -200,6 +200,19 @@ class ServiceWorkerObjectHostTest : public testing::Test {
+     return registration_info;
+   }
+ 
++  void CallOnConnectionError(ServiceWorkerContainerHost* container_host,
++                             int64_t version_id) {
++    // ServiceWorkerObjectHost has the last reference to the version.
++    ServiceWorkerObjectHost* object_host =
++        GetServiceWorkerObjectHost(container_host, version_id);
++    EXPECT_TRUE(object_host->version_->HasOneRef());
++
++    // Make sure that OnConnectionError induces destruction of the version and
++    // the object host.
++    object_host->receivers_.Clear();
++    object_host->OnConnectionError();
++  }
++
+   BrowserTaskEnvironment task_environment_;
+   std::unique_ptr<EmbeddedWorkerTestHelper> helper_;
+   scoped_refptr<ServiceWorkerRegistration> registration_;
+@@ -409,5 +422,30 @@ TEST_F(ServiceWorkerObjectHostTest, DispatchExtendableMessageEvent_FromClient) {
+             events[0]->source_info_for_client->client_type);
+ }
+ 
++// This is a regression test for https://crbug.com/1056598.
++TEST_F(ServiceWorkerObjectHostTest, OnConnectionError) {
++  const GURL scope("https://www.example.com/");
++  const GURL script_url("https://www.example.com/service_worker.js");
++  Initialize(std::make_unique<EmbeddedWorkerTestHelper>(base::FilePath()));
++  SetUpRegistration(scope, script_url);
++
++  // Create the provider host.
++  ASSERT_EQ(blink::ServiceWorkerStatusCode::kOk,
++            StartServiceWorker(version_.get()));
++
++  // Set up the case where the last reference to the version is owned by the
++  // service worker object host.
++  ServiceWorkerContainerHost* container_host =
++      version_->provider_host()->container_host();
++  ServiceWorkerVersion* version_rawptr = version_.get();
++  version_ = nullptr;
++  ASSERT_TRUE(version_rawptr->HasOneRef());
++
++  // Simulate the connection error that induces the object host destruction.
++  // This shouldn't crash.
++  CallOnConnectionError(container_host, version_rawptr->version_id());
++  base::RunLoop().RunUntilIdle();
++}
++
+ }  // namespace service_worker_object_host_unittest
+ }  // namespace content
diff --git a/srcpkgs/chromium/template b/srcpkgs/chromium/template
index 8775ca32284..ac9daee487a 100644
--- a/srcpkgs/chromium/template
+++ b/srcpkgs/chromium/template
@@ -2,7 +2,7 @@
 pkgname=chromium
 # See http://www.chromium.org/developers/calendar for the latest version
 version=83.0.4103.61
-revision=1
+revision=2
 archs="i686 x86_64*"
 short_desc="Google's attempt at creating a safer, faster, and more stable browser"
 maintainer="Enno Boland <gottox@voidlinux.org>"
