<div dir="ltr"><div class="gmail_default" style="font-family:arial,helvetica,sans-serif;font-size:small;display:inline">+1 </div>Sounds like a perfect solution<div class="gmail_default" style="font-family:arial,helvetica,sans-serif;font-size:small;display:inline"> 👍</div><div class="gmail_extra"><br clear="all"><div><div class="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr">Med venlig hilsen / Best regards<div>Emil Stahl<br></div></div></div></div></div></div></div></div>
<br><div class="gmail_quote">On Mon, Nov 16, 2015 at 9:35 AM, Gil Bahat <span dir="ltr"><<a href="mailto:gil@magisto.com" target="_blank">gil@magisto.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Hi,<div><br></div><div>We are interested (like many others, I guess) in the ability to perform HTTP measurements at our own non-anchored network. Understanding the potential for abuse, I would like to suggest the following authentication protocol, which is based on best practices exhibited by other services with such potential (abuse or privacy implications).</div><div><br></div><div>1. Confirm control of the domain registration:</div><div>* This is usually done by mailing the technical contact for the relevant WHOIS entry with a confirmation email containing a unique hash, thus validating ownership.<br></div><div><br></div><div>2. Confirm control of the DNS servers:</div><div>* This is usually done by editing the root TXT record with a unique hash or publishing a CNAME with unique hash.<br></div><div><br></div><div>3. Confirm control of the Web servers:</div><div>* This is usually done by placing a uniquely-hashed file in the webserver root directory, a unique hash in the meta-tags for the index html file or a unique value in a file such as robots.txt.<br></div><div><br></div><div>I believe this protocol is sufficient to ensure that a web site owner agrees to the implications of allowing free HTTP measurements against their servers and that no unwilling server will ever be probed. At most during the protocol, the only resource that can be hit is a static file or robots.txt specifically, which has very little capability to overwhelm a web server, especially if negative responses are cached for a considerable amount of time / validation is done via a few nodes and propagated across the network.</div><div><br></div><div>thoughts/ideas welcome.</div><div><br></div><div>Regards,</div><div><br></div><div>Gil Bahat,</div><div>DevOps Engineer,</div><div>Magisto Ltd.</div></div>