[mat-wg] [atlas] Proposal for public HTTP measurements
Brian Trammell
trammell at tik.ee.ethz.ch
Thu Jan 8 12:02:46 CET 2015
> On 07 Jan 2015, at 21:34, Bryan Socha <bryan at digitalocean.com> wrote:
>
> I love the idea, but unless you can profile what available capacity the
> probe/anchor has, I don't think the resulting measurements will be usable.
> There is no way to know whether your HTTP request was slow because someone
> at the endpoint is a hard-core torrent user maxing their location out. The
> same applies to areas where you have a hard limit and minimal extra
> bandwidth. Ping/traceroute, while not always a good test, does squeeze
> through with minimal variance in result when the site's bandwidth is
> congested.
>
> Also, as an anchor host, can I limit the max bps? Some locations are not
> low cost if everyone decides to HTTP-test some speedtest file. Our
> Singapore anchor, for example, would cost more per month than we spent on
> the hardware to host an anchor in the first place. I suspect the
> probe/anchor hosts in other areas like Africa, Australia, New Zealand, and
> South America would get even larger monthly bills.

So the proposal as I understand it has very low limits on the amount of
payload that will be sent in the response (4kB, i.e. four packets), which I
presume will reduce the temptation to misuse this measurement for bulk
transfer capacity estimation (...and please don't get me started on how
utterly pointless using a single TCP flow to estimate bulk transfer capacity
is in the first place :) )

In the aggregate, though, you're right, this could lead to significant
bandwidth usage, which I presume could be capped by the controller...?

Cheers,

Brian

> Bryan Socha
> Network Engineer
> DigitalOcean
>
>
> On Mon, Jan 5, 2015 at 7:59 AM, Robert Kisteleki <robert at ripe.net> wrote:
>
> Dear RIPE Atlas users,
>
> The topic of publicly available HTTP measurements in RIPE Atlas comes up
> from time to time. There have been a number of discussions about pros and
> cons of this feature over the years (including exposing probe hosts to the
> unnecessary risk of other users "measuring" just about any kind of HTTP
> content out there), with no firm outcome.
>
> While we understand that this feature would come in handy for some of our
> users, it does not benefit everyone. Therefore our proposal is the
> following:
>
> 1. We'll enable HTTP measurements to be performed by all Atlas users, from
> any probes.
>
> 2. The targets of such measurements can only be RIPE Atlas anchors (these
> already run HTTP servers, see https://atlas.ripe.net/docs/anchors/).
>
> 3. Parameters like costs, minimum frequency, maximum number of probes
> involved, etc. will be set by the development team, just as with the other
> measurements.
>
> 4. The RIPE NCC will still be able to support other, vetted HTTP
> measurements as long as they benefit the community, as well as other HTTP
> measurements that we deem operationally useful. These will be evaluated on
> a case-by-case basis.
>
> Please speak up at the MAT working group list (mat-wg at ripe.net) if you
> support / don't support this proposal, or if you have any other opinion
> about it.
>
> Regards,
> Robert Kisteleki
> RIPE NCC R&D manager, for the RIPE Atlas team
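
For a rough sense of the aggregate volume at stake here, the back-of-the-envelope
sketch below estimates the monthly HTTP-measurement traffic a single anchor might
serve. It is only an illustration: the probe count, measurement interval, and
per-request overhead are assumed values and not parameters from the proposal; only
the ~4kB response payload comes from the discussion above.

    # Rough estimate of monthly HTTP-measurement traffic served by one anchor.
    # Assumptions (illustrative, not from the proposal): 500 probes target the
    # anchor, each measuring every 900 seconds, with ~1000 bytes of TCP/HTTP
    # overhead on top of the 4096-byte response payload mentioned above.
    def monthly_anchor_traffic_gb(probes=500, interval_s=900,
                                  payload_bytes=4096, overhead_bytes=1000):
        per_request = payload_bytes + overhead_bytes       # response + headers/ACKs
        requests_per_month = probes * (30 * 24 * 3600) / interval_s
        return per_request * requests_per_month / 1e9

    print("%.1f GB/month" % monthly_anchor_traffic_gb())
    # ~7.3 GB/month under these assumptions; the actual figure depends entirely
    # on the probe counts and frequencies the controller allows.

Under these assumed limits the volume stays modest, which supports the point that
the per-request cap, together with controller-enforced frequency and probe-count
limits, is what keeps the aggregate cost to anchor hosts bounded.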