From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 12 Mar 2018 13:57:22 +0100
From: Dominique Martinet <asmadeus@notk.org>
To: Karl Dahlke
Cc: Edbrowse-dev@lists.the-brannons.com
Message-ID: <20180312125722.GA3901@nautica>
References: <20180312071732.GA14308@nautica> <20180212083819.eklhad@comcast.net>
In-Reply-To: <20180212083819.eklhad@comcast.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
Subject: Re: [Edbrowse-dev] XHR same-domain restriction

Hi,

No worries, I agree this isn't easy. We/you've been working hard to make
more sites work, so I don't want to break those either; let's take the
time needed to research first.

On top of that I'm a little bit paranoid and like to disable as much as
I can get away with, but we'll need to come up with a decent interface
to display what's blocked and to set exceptions... I think that'll be
the trickiest part here!

Karl Dahlke wrote on Mon, Mar 12, 2018:
> 1. frames

I'm honestly not sure there. As the website serving the frame you can
say you don't want to be displayed, so that protects the remote end's
resources, but I think we really need to check if/how dynamic frames
would work and what other kinds of limits there can be.

> 2. The same guy that writes the js, and the html, also sends out the http headers,
> so if he wants xhr to access anything then he just sets that http header to 0 and off he goes.
> It's like we put a lock on our browser for some kind of security,
> but they can open it with an http key, and everybody knows it.

I think the main purpose of this is to protect a website from code
injection. Say you're a forum or some blog with a comment area. The
fields the users can fill in are supposed to be sanitized, but often
enough someone comes up with a way to insert actual html/js code and
could hijack users' sessions or whatever it is they want to do. http
headers are usually set by the web server directly, without regard for
the content, so no matter how badly the site is defaced, if the site
says not to load external stuff then it won't be loaded.

The disabled mode must have been added for compatibility. Ultimately,
if some site depends on it by design and they haven't taken the time to
say code.jquery.com is allowed, then they can just set 0 and things
will keep working, even if they don't protect themselves.

> 3. If we implement restrictions, we have to do it all,
> including the http key that unlocks them, because some website might unlock them
> and expect xhr to work on some other domain, and when it doesn't, then
> the website doesn't work.

Definitely agreed there; both headers I pointed at have a disabled mode,
and it should be easy to implement since it is what we currently do -
basically we just have to make the checks conditional.
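To make that last point a little more concrete, here is a minimal
sketch of what I mean by "make the checks conditional". It is only an
illustration: the struct, its fields and xhr_allowed() are made up for
this mail, not existing edbrowse code, and the opted_out flag stands
for whatever disabled mode the response header ends up signalling.

#include <stdbool.h>
#include <strings.h>

/* Hypothetical per-document state: opted_out would be set while parsing
 * the response headers when the site uses the disabled mode ("sets that
 * http header to 0"), so today's behaviour is kept for such sites. */
struct xhr_policy {
    bool opted_out;
    const char *doc_host;   /* host the document was loaded from */
};

/* Conditional same-domain check for an outgoing XHR.
 * Only an exact host match here; whether foo.bar.com should also count
 * as bar.com is the point 4 question below. */
static bool xhr_allowed(const struct xhr_policy *p, const char *xhr_host)
{
    if (p->opted_out)
        return true;    /* site unlocked it: do what we do today */
    return strcasecmp(p->doc_host, xhr_host) == 0;
}

With nothing recognised in the headers opted_out stays false, so the
restrictive behaviour is the default and the exceptions interface above
would be the escape hatch.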
> 4. bar.com -> foo.bar.com

That's likely true, need to check as well.

--
Dominique | Asmadeus
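PS: for the bar.com -> foo.bar.com case, the comparison I would picture
is a dot-boundary suffix match, roughly as below. Again just a sketch
to discuss: the helper name is made up and none of this is in edbrowse
today.

#include <stdbool.h>
#include <string.h>
#include <strings.h>

/* Hypothetical helper: true when 'host' is 'domain' itself or a
 * subdomain of it (foo.bar.com under bar.com). The '.' check keeps
 * notbar.com from matching bar.com. */
static bool host_under_domain(const char *host, const char *domain)
{
    size_t hl = strlen(host), dl = strlen(domain);

    if (strcasecmp(host, domain) == 0)
        return true;
    return hl > dl && host[hl - dl - 1] == '.' &&
           strcasecmp(host + (hl - dl), domain) == 0;
}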