Archives

These are unedited transcripts and may contain errors.



Live Captioning

Doyle Court Reporters

Anna PapaMurphy

CHAIR: Okay. Good morning. We're going to get started again. Those of you in the room will get a special prize for having been here on time. Those of you outside, the special prize is the satisfaction of knowing you're punctual. You may already have that satisfaction. If you haven't, let me tell you that you are, and let me thank you.

Our first talk: Fredy Kunzler is going to give us a presentation on creative ways... no, on how more specifics may be increasing your transit bill and what you may want to do about it.

Fredy Kunzler: Good morning everybody. I'm glad that some of you arrived last night. So this is my presentation: how more specifics are increasing your transit bill and how to avoid this. I'm Fredy Kunzler. I'm, well, network architect or whatever you call it at Init7. I founded the company a few years ago, almost 12 by now, and it's still fun to work in this community.

So what we're talking about: a few words about Init7, what we're doing and what we're trying to achieve; then traffic engineering of eyeball networks, what the damage is for the content networks; and last, how to avoid the damage. So a few words about Init7: we're a carrier and Internet service provider, a privately owned company, and we run an international, fully dual-stacked backbone using the AS number 13030. Most of you have heard of us before. The backbone is 10 gig or multiple 10 gig, and we are also widely peering on approximately 20 exchanges around the globe, well, actually around Europe and the US, and we have close to 1,000 BGP peers and customers. These are the usual network slides; they're not the topic. This slide also isn't interesting. So we come to the topic, eyeball traffic engineering. Many networks try to peer as much as possible and shift traffic away from the transit suppliers. This is probably why we all talk about making the Internet better by doing peering. So-called tier 2 networks are connected to multiple exchanges and they can do more than probably 50, 60 or 70% of their traffic via peering, and reach half of the global routing table via peering BGP sessions.

So I think this is pretty familiar to you. What we see here is the discrepancy between content and eyeballs. Outbound-heavy networks commonly try to dump the traffic as close as possible to the source, using the cheapest route they can, so-called hot potato routing. And the eyeball networks, inbound-heavy networks with a lot of DSL or cable customers, they fight against expensive, saturated links. They use measures to load balance the traffic as well as possible, which is fine. Speaking of saturated or very expensive links: sea cables towards Africa, as an example, or the Asia Pacific region. A side note here: an STM-1 from Africa to somewhere is still a fortune every month. So these interests are contradicting: the content side wants to dump traffic as cheaply as possible, and the eyeballs carry that traffic on expensive links. So what do the eyeballs do to balance or engineer the traffic? If they can't upgrade and the money is not sufficient for more capacity, then some links tend to be more saturated than others. How they actually manage the available capacity I have spoken about on other occasions, so you may want to click through the slides at the link I'm showing for more detail on traffic engineering of the eyeballs.

So just to summarise these slides: they usually do prepending, which sometimes works and gives the effect they want to achieve. They also do selective announcements to various transit suppliers; the drawback here is less redundancy. They also, and this is probably the wisest way to engineer traffic, do community-based traffic engineering: they instruct their transits to prepend or to not announce certain prefixes further in the network.

What they also do is more specific propagation. This is acceptable when it's smartly executed. But what we all see is massive deaggregation. This causes pollution of the global BGP table; about 40% of what we see in the BGP table is just rubbish prefixes which could easily vanish and are not needed at all.

We know the CIDR report, which sends emails to various mailing lists; usually you delete them. If you look inside this aggregation summary, it's actually saying that today we have 381,000-something prefixes in the table and 41% could be removed, and this is massive. This is the aggregation summary of the CIDR report we receive every week in our mailbox.

In the Middle Ages, people who didn't behave well were treated like this, and maybe I can suggest that for every unnecessary /24, the responsible network engineer should spend 24 seconds in one of these at the entrance of the RIPE conference. So yes, if we saw more aggregation, it would mean less memory usage and BGP would converge faster. Why are networks so selfish that they don't aggregate neatly? They are listed in the top 30 of the biggest polluters of the global BGP table and they don't give a shit.

I tried to figure out what the reason is for this BGP pollution, and we figured that some people simply don't know the no-export community. If they use more specifics for internal iBGP routing, that's fine, but they shouldn't leak these routes outside. They probably simply forgot to set no-export or forgot to configure neighbour send-community, so all these internal iBGP more specifics are leaked to the transits. Sometimes they also don't have enough knowledge to configure things properly. What I'm also suspicious about: there are probably one or two so-called network experts out in the wild, maybe in Africa or Latin America or also in the APNIC region, who actively promote more specifics or deaggregate a /19, for example, into 32 /24s. I don't know.

So what I would like to ask you: go and evangelize aggregation. If you have fellow network engineers or customers who deaggregate, talk to them and help them to aggregate their prefixes to reduce the number of routes in the global table.
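(An illustrative aside, not part of the talk: a minimal Python sketch of the aggregation being asked for, using the standard ipaddress module. The /19 below is hypothetical; collapse_addresses() shows that the 32 /24s carry no extra information.)

    import ipaddress

    # Hypothetical /19 that someone has deaggregated into 32 consecutive /24s.
    covering = ipaddress.ip_network("10.20.0.0/19")
    more_specifics = list(covering.subnets(new_prefix=24))   # 32 announcements

    # collapse_addresses() merges adjacent and contained networks back together.
    aggregated = list(ipaddress.collapse_addresses(more_specifics))

    print(len(more_specifics), "routes before aggregation")   # 32
    print(aggregated)                                          # [IPv4Network('10.20.0.0/19')]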

It hurts. Content networks have to face the fact that even with massive evangelization of aggregation, the more specifics will still remain in the table, and it's actually getting worse every day. We are learning these more specifics through transit. We actually see the less specific route via peering, but the more specific via transit overrules it. This makes our traffic engineering efforts pretty useless, because the more specific always wins. We can set local pref, even weight, which is not recommended, we can set MEDs; it doesn't help. Traffic which would be cheap via peering is shifted towards expensive transit. How can we avoid the damage? If your network is outbound heavy... well, those who run an eyeball network can leave now and drink coffee; if you have content, or more content than eyeball or inbound traffic, then you probably want to avoid this damage, because it's increasing your bill. We did some analysis of the routing table, so what do we actually see? Can you read that? I anonymized it, the networks and the AS numbers, so nobody can tell whom I'm blaming. There is a covering prefix, a /15, and it is chunked into a lot of /18s and /19s in this example. It's a real-life example, I just changed the numbers. So what we see here: they have an aggregate prefix over peering, the 2.2.2.2 next hop is a peer of ours, then a /18 comes along via transit, and that is again deaggregated into two /19s which come again via peering. So we could strip all that out, and it actually goes even further down; I didn't show all the routes we're seeing, without any loss. Most interesting is the one below: they actually sent a more specific prepended three times, which is silly. But that is my opinion. We probably should talk to them and they should explain why they're doing it. But another bad example. Here again, I have a /13. Sorry, I have a /13 covering prefix and then a number of /15s, /17s, etc., via various paths, and as a matter of fact, this /15 and this /15, so half of that covering prefix, are rerouted via my transit link even though I have a covering prefix which is via peering.
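(Another illustrative aside, not from the talk: a small Python sketch of the analysis just described, spotting a more specific learned via transit that overrides a covering prefix learned via peering. The prefixes and sources are made up.)

    import ipaddress

    # Toy view of a BGP table: best path per prefix and where it was learned.
    routes = {
        ipaddress.ip_network("10.0.0.0/15"):  "peering",   # covering prefix
        ipaddress.ip_network("10.0.64.0/18"): "transit",   # more specific
        ipaddress.ip_network("10.0.96.0/19"): "peering",   # even more specific
    }

    # A more specific learned via transit is the expensive case if a less
    # specific covering it is already reachable via peering.
    for prefix, source in routes.items():
        if source != "transit":
            continue
        covers = [p for p, s in routes.items()
                  if s == "peering" and p.prefixlen < prefix.prefixlen and prefix.subnet_of(p)]
        if covers:
            print(f"{prefix} via transit overrides {covers[0]} learned via peering")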

What can we do? There are too many more specifics in the table; we are talking about 40%, so manual filtering is not an option. If you can figure out which destinations eat up most of your transit traffic, you probably want to filter a few of their prefixes manually, but on a larger scale it's not possible to do it manually. And we also have to consider that when you apply a filter on the transit link, the more specifics vanish from your own BGP table, and the automatic check for black holes becomes difficult.

We also don't know how routers behave. If you apply a prefix list of 2,000 entries, would they converge? Would they die? Would they crash? What would happen with such a long prefix list? And BGP convergence takes very long; it took us more than ten minutes. Our experience running the Brocade XMR is that it works. The risk of blackholing is severe: some networks may remove their covering prefix, the less specific covering the more specific. So if you filter more specifics, you would probably end up with a black hole for certain destinations. So we did some research and scripted an automated filtering system based on the Piranha BGP route collector. It's still alpha, not to be published yet. When we come to more mature releases, we would like to publish our findings under the GPL or whatever, so everybody can use them.
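(Illustrative aside, not the Piranha-based tool itself: a minimal Python sketch of the safety rule such a filter generator has to apply, namely only filter a more specific from the transit feed when a covering less specific is still present via peering, so nothing gets blackholed. The prefixes are hypothetical.)

    import ipaddress

    def safe_to_filter(more_specific, peering_routes):
        """Drop a more specific from the transit feed only if some less specific
        covering it is still present via peering; otherwise filtering it would
        blackhole the destination."""
        return any(more_specific.prefixlen > p.prefixlen and more_specific.subnet_of(p)
                   for p in peering_routes)

    # Hypothetical inputs: what the transit full feed and the peers announce.
    transit_feed   = [ipaddress.ip_network(p) for p in ("10.0.64.0/18", "10.99.5.0/24")]
    peering_routes = [ipaddress.ip_network(p) for p in ("10.0.0.0/15",)]

    filter_list = [p for p in transit_feed if safe_to_filter(p, peering_routes)]
    print(filter_list)   # only 10.0.64.0/18; 10.99.5.0/24 has no covering peering route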

And as mentioned, we need an unfiltered dump of the feed of the transit supplier. So with a script we pull the received routes from the transit and put them into the Piranha BGP daemon. This is not the foreseen way of feeding Piranha, but it eliminates the need for the transit supplier to configure a second session.

And the result: we had quite a drop in our transit traffic. We filter about 20,000 prefixes right now, it should actually be a bit more, and we have more than one and a half gig less traffic towards Level 3. We're not a customer of Level 3, so Level 3 is not a transit of ours. I just pulled one example from our sFlow stats, and it's significant, because it means money. This concludes my presentation. I'm open for questions.



CHAIR: Are there any questions? It is apparent that you don't really care about your transit bills.

Fredy Kunzler: And the money you save.

CHAIR: Geoff, you don't care? I still think of him as evil Telstra.

Fredy Kunzler: Are you an operator?

GEOFF HUSTON: There's been a discussion about whether the CIDR report is useful and whether we should continue to use it, and also what else I could do with it that could make it more useful. Are you advocating that I should produce some kind of continuously updated prefix filter list of more specifics?

How do you want me to publish this?

CHAIR: An extraordinarily long Perl-compatible regular expression.

GEOFF HUSTON: I have a computer. I can do this. I would be interested in some feedback as to what would make it useful for you. If you have an application that would use it, if you want to plug in a filter base derived directly from the CIDR report, we can do this. I'm interested in feedback on this.

CHAIR: Okay. Thank you.

(Applause)


CHAIR: Next we have Pierre Francois discussing BGP policy violations in the dataplane. It seems impossible.

Pierre Francois: Can you hear me? Okay, so more specific prefixes do not only increase your bill, they can also violate your policies. So what's happening? Thanks. What we're going to do in this talk is review some facts; everyone in the room knows about them, but we have to keep them in mind for the talk, and we'll see how these well-known facts can lead to policy violations. The conclusion will simply be that you have to watch your network and figure out whether your policies are violated or not. Having your perfect BGP policy configuration does not imply that you are safe in your network.

So the first observation is that BGP offers you a sophisticated way to implement your policy, and policies can be applied at a per-prefix granularity. But you are dominated by longest prefix match in the data plane, so whatever policy you have applied on a given prefix, in the data plane it is the policy of the more specific prefix matching the packet you are forwarding that is being applied. The second observation is that a lot of operators provide means to perform some very flexible routing inside their network. For example, if you look at the flexible routing options that Sprint gives to their customers, they have a whole range of community values that you, as a customer of Sprint, can tag on the paths you advertise, in order to have the routers at Sprint perform actions, such as prepending when Sprint advertises them to other peers and customers, and one of them is do not advertise the route. So basically, when you're a customer of Sprint and you advertise a path, you can decide to have Sprint not propagate that path to a whole range of peers: AOL, NTT, etc. So what we have is a very powerful means to not advertise paths across the network. You can do it yourself by advertising your more specific only on a given link towards your transit, and you can also trigger that behavior remotely at your providers by using these communities.

This can lead to policy violations because basically you and your neighbour apply different rules on overlapping prefixes, while in the meantime, in the data plane, you are dominated by the longest prefix match.
So let's have a look at a case study. The red block will be your main prefix that you advertise to all your transits; then we're going to play with this yellow block, which is a more specific that we advertise on some specific links.

So what we're going to do is play with the more specific prefix and with communities in order to let the path to that more specific prefix be known by only a subset of the ASes. Some ASes are going to forward traffic according to the less specific block until it reaches an AS which knows the more specific, and from there the path in the data plane is controlled by the control plane state for that more specific prefix. And we'll see that this data plane path is sometimes not policy compliant. Let's have a look at this example. I have the customer who advertises its prefix towards its two providers, ISP A and ISP B, and the rest of the Internet reaches it via ISP B. Let us say, for traffic engineering reasons, or because the operator configured BGP with the book on his knees, he advertises a more specific prefix to the provider on his right. What do we have here? A new customer path. As it's a customer path, it will be propagated to the peers of ISP B (I forgot to mention that ISP A and ISP B are peers in this case) and ISP B will also advertise that path towards its own transit. The path will be received by ISP A; as it is received from a peer, it will be propagated to the customers of ISP A, and only to the customers. It is not normally propagated to the transits and other peers. So as ISP B propagated that customer path for the more specific to its own transit, it is very likely that this path will make it through the Internet at the top, through ISP B and its own transit. So there is a change in the traffic distribution in the network: as it is now, ISP A no longer forwards traffic destined to the more specific block over its customer link, but rather uses its peering link to ISP B, because in the data plane the longest prefix match will pick the best path for the more specific, which is actually the only path it knows there. The rest of the Internet does not know a path towards the more specific through ISP A. So basically, due to that more specific injection, ISP A loses the traffic share for the more specific prefix. It can only forward traffic for that more specific prefix if it comes from its own customer base; from transit it is no longer the case. Now, let us assume that the customer plays with the communities that I described earlier in the talk, where you tag the path with communities that say do not advertise to that peer, do not advertise to that transit. Let's assume that the customer, after having done the traffic engineering, figures out that the paths are not that good and that the reach of ISP A was better for incoming traffic. It can tag communities on the path, putting all the communities that say do not advertise to this guy, for everyone but ISP A. So it means that ISP B will only advertise the more specific to ISP A. ISP A is going to receive that path; for it, nothing changes: it picks its best path and propagates it towards its customers. The traffic coming from the customers of ISP A is going to be forwarded over the peering link to ISP B and then to the originating customer. And ISP A does not propagate the path for the more specific to its transit. But no one else knows about that path; only the customer, ISP B and ISP A know about it. What it means is that, as ISP A propagated the path for the less specific to the rest of the Internet, they are going to keep on forwarding their traffic as initially, as when the more specific path was not advertised. So as a result, traffic forwarded towards the more specific block by the guys who were initially using ISP A to reach the customer is going to make it to ISP A.
But in the forwarding information base of ISP A, the next hop for this little yellow prefix is the peering link. So in such a configuration, where the customer propagates a path to ISP B telling it do not advertise this to anyone but ISP A, this path, injected via ISP B, leads to a violation of the policy of ISP A, because ISP A is now providing a path from its transit providers towards its peer. This is annoying: your policies are being violated. When I was discussing this with some ISPs providing that flexible routing service, they were saying, okay, I will try and fix it. Okay, fix it: with these techniques I'm allowing my customers to turn me into a transit thief against my peers. Another annoying issue with this topic is that these policy violations do not break anything. The customers see their packets delivered, so if you do not watch for policy violations, there is nothing breaking or coming at you to say something is wrong. If you do not pay attention, your policies can be violated, or you can be the reason for policy violations at your peers, and no one reacts. A nice exercise, if you are a Tier 2, is to simply take a piece of paper, draw a Tier 1 clique, and look at how fast you get to a policy violation when you start to play with these communities for a more specific prefix. What can you do to solve the issue? Either be proactive and decide to forward traffic differently, or you can decide to filter the route, or be aggressive and drop traffic, because these transit paths are not supposed to happen in your network. My recommendation is to basically monitor your network and, when you see the policy violation, pick up the phone. You can deploy BGP so as to have the forwarding at an incoming interface be defined only by the paths that are policy compliant there. You will have to put the Internet in a VRF and carefully configure the paths in order to serve a transit-facing link only with paths that are policy compliant. It's a bit complex just to deal with a case that may happen, so I'm not sure you want to afford the cost; if you have no other reason to put the Internet in a VRF, don't do this. You can be aggressive and decide to drop packets at ingress for routes that are not supposed to be served there. But you can find yourself in this situation for reasons that you're not sure about; the cause is not inside your network, and you would need to contact these guys to know what's happening. Maybe the answer will be a justification for what they are doing, I'm not quite sure. So it might not be a good behavior to anticipate these violations and decide to drop traffic right away. Plus, we are always talking about your customers: every single bit crossing your network is originated by or destined to a customer of yours. If you decide to be proactive and drop traffic, you will get phone calls from your customers. Another, less aggressive option is to decide at the egress to no longer consider that prefix and filter it out. In this case, despite the knowledge of the more specific prefix, you will only forward traffic according to the less specific. If that more specific prefix was advertised for a reason, you may break things by doing so. So basically, what I would suggest (sometimes when I discuss this with ISPs, I get a lot of surprising answers) is this: I guess that to run your business you already have the means to know what amount of traffic you receive at each ingress towards each egress.
So if you have that, just look at the combinations that should be at zero all the time, and when they're not at zero, pick up the phone, look at the originator of the path and ask them: why the hell do I receive this more specific from my peer while I see traffic from my transit towards that more specific? It means your more specific prefix did not make it to the rest of the Internet, please fix this. If the originator of that messy prefix does not react, then just filter that prefix on your peering link. I was very surprised: I wrote a draft describing this issue a couple of years ago and I asked some ISPs, and I've been surprised to see that nobody checks for that policy violation, although some have the means to do so, because they check their traffic demands in order to run their business.
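(An illustrative aside, not from the talk: a rough Python sketch of the kind of check the speaker describes, flagging traffic that enters on a transit link while the best route for the destination points out a peering link. The flow counters, prefix and class names are all hypothetical; a real deployment would feed this from a flow collector such as pmacct.)

    # Hypothetical aggregated flow counters: (ingress class, destination prefix) -> bytes
    flows = {
        ("transit",  "203.0.113.0/25"): 42_000_000,
        ("customer", "203.0.113.0/25"): 10_000_000,
    }

    # Hypothetical routing view: destination prefix -> class of the egress next hop
    egress_class = {"203.0.113.0/25": "peer"}

    for (ingress, prefix), octets in flows.items():
        if ingress == "transit" and egress_class.get(prefix) == "peer" and octets > 0:
            print(f"transit-to-peer traffic for {prefix}: {octets} bytes, time to pick up the phone")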

For those who were in Rome: as Paolo told us, if you are a user of pmacct, just go and look; there are a couple of lines of script documented there that allow you to detect these policy violations. Those who are using tools that integrate pmacct can also benefit from the policy violation detection.

That's it. Thanks.

(Applause.)

CHAIR: There were 14 carefully hidden RIRs on those slides. No... are there any questions? You have stunned them into silence.

Okay. Thanks very much.

(Applause.)

CHAIR: Next up we have Antonio talking about an analysis of IPv4 versus IPv6 latencies. I'm sure IPv6 is better.

Antonio Marcos Moreiras: Good morning. First of all, I think I must warn you: English is not my first language, and from what I know of this language, I can hardly say it's even my second, or something like that. I think my slides almost speak for themselves, so we're going to be fine.

Well, at nic.br we have IPv6 trainings and we have initiatives to measure Internet quality, so we are interested in IPv6 quality. And we started wondering: what is the difference between IPv6 and IPv4 latencies and quality in today's Internet? Does IPv6 have production quality? We started testing, with simple tests, pings and traceroutes, to well-known websites, like ripe.net, and a lot of other sites. And these tests consistently showed us that IPv6 was worse than IPv4. Sometimes slightly worse, sometimes a lot worse. And we started wondering why. Would it be a routing problem, a configuration problem? But first things first: we can hardly say that some manual pings and traceroutes are the best way to measure this, so how could we do better? We started using RIPE TTM data. RIPE TTM has dual-stack boxes all over the world, and they have the cool tunnel discovery tool for IPv6, so there was a lot of data there. And what did we do? We developed something to get the data from the RIPE website automatically every day, and we developed our own website to show it in a way that makes it easy to compare v6 and v4 latencies. So we make a matrix: the red dots are where IPv6 is worse, the green boxes are where they are about the same, and the blue dots are where IPv6 is faster. And we made some graphs. This is online, you can check it any time. So this is the data. These graphs show IPv6 versus IPv4 latencies in March of this year. We are taking the daily medians from the TTM boxes, that is, the typical behavior. This graph shows the best IPv4 and best IPv6 medians in the month of March for a given pair of boxes, so for a given path. So we have a linear graph, and it's very different from what we expected. From our first simple tests with pings and traceroutes, we expected that almost all these points would be over the line, that is, that IPv6 would be a lot worse than IPv4. But we didn't find that. What we see is that the points are mostly on the line or very near the line, where the line is where IPv6 and IPv4 latencies are equal. The points are equally distributed around the line, some of them over and some of them under. The more recent data shows almost no difference. If we look at the density graphs of the delay, the red line is IPv6, the blue line is IPv4: we have about the same distribution. I think IPv6 is slightly worse; we have a few IPv6 paths with low delay and a bit more with higher delays, but it's a small difference, and the curve is slightly shifted to the right, but it's a very small difference. This graph shows IPv6 versus IPv6 in October and in June. So paths under the line are better, IPv6 was faster in October than in June, and paths over the line are worse in October than in June.
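(Illustrative aside, not nic.br's code: a minimal Python sketch of the comparison behind the matrix, taking per-path daily samples, computing medians and colouring the pair. The 10% tolerance for "about the same" is my assumption, not a figure from the talk.)

    from statistics import median

    def classify(v4_samples_ms, v6_samples_ms, tolerance=0.10):
        """'red' if the IPv6 median is clearly worse, 'blue' if clearly better,
        'green' if both medians are within the tolerance (here 10%, my choice)."""
        m4, m6 = median(v4_samples_ms), median(v6_samples_ms)
        if m6 > m4 * (1 + tolerance):
            return "red"
        if m6 < m4 * (1 - tolerance):
            return "blue"
        return "green"

    # Hypothetical daily round-trip times for one pair of TTM boxes.
    print(classify([22.1, 23.0, 22.4], [22.3, 22.8, 23.1]))   # green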

So these are the tunnels: we get the paths from the tunnel discovery tool and put them on the graph. And the tunnels seem not to be directly related to worse IPv6 delays. In fact, we have cases where there is a tunnel and the IPv6 delay is better than the IPv4 delay. Well, and we found this academic work back in 2005 that used a similar approach; they also used RIPE TTM data. And the difference we can see is that we no longer have this "C" region in the data, which is a lot better. We are a lot better now than six years ago.

So this seemed a lot better than the pings, and we started wondering why. We had a problem with Brazilian networks, and this data doesn't capture it well because the TTM boxes are in the core of the networks. So what we have done is create our own measurement system. We called it SIMONv6, after project SIMON from LACNIC, which measures the interconnection of networks; this is an extension of the SIMON project and it will be integrated with SIMON shortly. It's a Java client that makes measurements using the time to connect in TCP or NTP requests. We have clients in 15 networks: six in Brazil, three in other countries in the LACNIC region, three in Europe, two in the USA and one in Asia, and we measure IPv4 and IPv6 delays against 29 TTM servers, the TTM boxes, and 366 web servers, ones that were IPv6-enabled and others.
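(Illustrative aside, not the SIMONv6 client itself: a minimal Python sketch of a time-to-connect measurement over each address family. The target host is a placeholder; the client in the talk is Java, this is just the idea.)

    import socket
    import time

    def connect_time_ms(host, port=80, family=socket.AF_INET, timeout=5.0):
        """Time a plain TCP connect over the given address family, in milliseconds."""
        addrinfo = socket.getaddrinfo(host, port, family, socket.SOCK_STREAM)
        sockaddr = addrinfo[0][4]
        start = time.monotonic()
        with socket.socket(family, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            sock.connect(sockaddr)
        return (time.monotonic() - start) * 1000.0

    target = "www.example.com"   # placeholder; any dual-stacked web server would do
    print("IPv4 connect:", connect_time_ms(target, family=socket.AF_INET), "ms")
    print("IPv6 connect:", connect_time_ms(target, family=socket.AF_INET6), "ms")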

We also have a website; you can get the raw data there, if you want. This is the first month of data. We have a lot of artifacts from networks with problems, problems of scale, I think; after some filtering, we get the medians and the best medians in the month, like in the other test. We have these graphs, similar to the others. The points are equally distributed. The results are very similar to the results from the TTM data.

The data, the more recent data: this is the data for Brazilian websites and measurement points, so we have a worse situation than in general. You can see in the left graph, in the distribution, that IPv6, that's in red, is very much shifted to the right. And Brazil versus the United States: here we do have a problem. Almost all the points are over the line, which means that IPv6 is worse than IPv4 in this case for us.

So, some comments and questions. I think the good news is that, in the general picture, IPv6 has production quality; at least regarding latencies, it's about the same as IPv4. The not so good news: for proper production quality, IPv6 and IPv4 could be a bit more alike, and better. There are a lot of paths where IPv4 delays are worse than IPv6. Very strange, very very strange. And yes, it seems we do have a problem with Brazilian upstreams and specific destinations, for example the US. Why, we do not know yet. And why IPv4 is worse sometimes, we do not know either.

So that's it. Just a little explanation about nic.br; it's on the slide for reference here.

Thanks. Questions about it?

(Applause.)

CHAIR: Yes.

AUDIENCE: Hi, Emile Aben, RIPE NCC. I'm not sure if you saw my presentation yesterday, but I'm happy to see that you are basically seeing the same thing: v4 slightly better, but v6 is getting there, and we see a 6:4 split. And looking into why that is, if we look in our data and just look at traceroutes that go from the same source to the same destination and traverse the same path, we actually see performance being exactly the same. Whereas if you look at cases where the paths are vastly different, that's where you see the performance differences. So my current working hypothesis is that the difference is caused by the mesh of interconnection being less dense, so things have to take detours to get from A to B. Maybe that's an explanation. Maybe your US-Brazil data might help in showing that. Do you have any ideas or data about that?

Antonio Marcos Moreiras: I think that is the cause. But what I do not understand is the contrary: why do we have paths with more hops in the IPv4 path than in the IPv6 one? We have a lot of cases showing the contrary, the opposite case. So in the case of Brazil and the United States, I think you are right. But the other cases are difficult to explain.

AUDIENCE: If you have other cases, I'd be happy to cooperate in looking into that.

AUDIENCE: I'd like to comment. First of all, you may be served from different locations; I'm pretty sure that ipv6.google.com could be located in a different place from just www.google.com, so you can see different paths for your packets when you do v6. Secondly, as was mentioned, it could be a different peering situation, because v6 peering does not necessarily follow v4. It could be traffic engineering. It could sometimes be equipment, because unfortunately we're still seeing a lot of bugs and some bugs are still to be discovered. So if you see some strange issues with v6, it sometimes makes sense to contact your ISP and tell them, because quality of service could be improperly implemented in hardware. And in some cases the v6 server could be closer than the v4 one. I can remember last year at the RIPE meeting, our v6 path to the RIPE servers was surreal... so there could be a lot of different reasons, from topology and so on. I'm not surprised you can sometimes see v6 performing better than v4.

Antonio Marcos Moreiras: Okay. Thank you.

CHAIR: Any other questions? No. Thank you very much.

(Applause.)

CHAIR: What we learned from this is Google does IPv6. Who knew? Fantastic. This is great stuff. Next up we have Cassio Sampaio, who's going to speak about measured trends in IPv6 adoption. It's about v6 and there will be data.

CASSIO SAMPAIO: Good morning everyone. I have a few slides to go through with you guys on IPv6, very much the same topic as Antonio was covering. The objective is not to drill down into causes, like what the specific reasons are for IPv6 being better or worse than IPv4, but to share some of the data we collected during World IPv6 Day and the following days after the event. So for those of you who are not aware, Sandvine is a policy control company; we've been in the business of understanding traffic trends and helping service providers with policies in general for many, many years. We published a report, available on our website if you're interested in getting more detail about what we saw on that day. The study here, and most of the charts that we'll be going through, all of them were collected from North American carriers. And in the North American definition that does not include Mexico, so it's the US and Canada. So you're going to see a lot of video, like Netflix, etc., very Canadian and US centric.

In terms of volume, an increase of 4%. And keep in mind this was collected across many, many carriers in those two countries, not a representative view across the whole continent, but I have to say across DSL and cable carriers. So we saw a 4% increase, not huge, but with the number of subscribers behind that you are talking about tens of millions of users. Even an increase of four percent is significant.

In terms of the protocols we observed, we saw a reduction in Teredo. Transition technologies are the ones being used, mostly by the people exploring that space and trying to learn more, so the early adopters, of course. We saw an increase in 6to4, which rose to more than 11% of IPv6 traffic. And Teredo was still the most used type of IPv6 transport.

So drilling down a bit in terms of what 6to4 has to show us: this is an early adopter type of behavior. In North America, if you look at the type of traffic that you see today, you're going to see Netflix, YouTube, real-time entertainment, and then peer-to-peer comes in third. In this case you see BitTorrent over UDP, and this is a good indication of early adopters, because BitTorrent has been used for a number of years as a way to circumvent some types of identification. So that gives an indication of the type of people that are using IPv6, even after World IPv6 Day.

In terms of applications, that's what we see inside those different types of tunnels. We see a relative increase in what we call real-time entertainment. It's becoming more mainstream, but still more concentrated, and you can see the share of file sharing is still very significant. In terms of the top domains we observed, YouTube is very significant in terms of volume. This is a distribution by volume: you're consuming a lot of bytes when you go to YouTube, versus when you go to something like Google, which did a good job in terms of promoting their work on that day.

In terms of device type, just a note: all of this research was done primarily on fixed networks, so DSL and cable. The devices we can see are the ones whose traffic goes through the WiFi router at home, so you can see that traffic through your fixed connection. So as you can see, v4 versus native IPv6: PCs were in the lead in terms of IPv6, versus Macs being predominant on IPv4. We're going to see later the operating systems being used, but it's an indication that this is still not mainstream usage; rather, more people are exploring the protocol. You can see a bunch of wireless devices connected through the WiFi network. The iPod Touch is very representative in terms of data consumption.

In terms of operating system and browser type: you see on the left-hand side that from IPv4 to IPv6 there is a significant transition from Windows being at the top to Linux, which is not the most widely used in general, so it goes back to the early adopters theme; the Mac continues to be predominant. It's important to say that a representative part of this sample comes from parts of the United States where Mac OS is used more predominantly. But this data isn't from Cupertino, so you shouldn't see only Macs, but in general it should be significant.

In terms of the browsers used, comparing both, you see Chrome 12, which back then was a very recent version of the browser, topping the list, and the same with Safari, where the latest version was being used; that says something about the type of subscribers using the system during those few days.

This will certainly have a bit of overlap with what Antonio was saying earlier. We use a technique where our system is sitting somewhere in the network and we measure the access round trip time as a measure of quality of the overall traffic. The way our system is deployed, we are not an end point, not a router, we are layer 2, so we have to measure that based on the data itself. We collect this data for every single packet that's flowing through the network, and we can correlate that value with any other metric that we see. Things like the application type, the browser type, or anything we saw earlier, we're able to correlate in terms of quality as well. So I have a few other charts that will give an idea of v4 versus v6 quality.
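(Illustrative aside, and only my assumption of how a layer-2 vantage point can estimate access round trip time, not Sandvine's actual method: time the gap between a TCP SYN-ACK heading towards the subscriber and the ACK coming back past the same probe. The timestamps below are made up.)

    # Hypothetical packet observations at the probe: (timestamp_seconds, direction, tcp_flags)
    packets = [
        (0.000, "to_subscriber",   "SYN-ACK"),
        (0.019, "from_subscriber", "ACK"),
    ]

    def access_rtt_ms(observations):
        """Return the access-side RTT in ms, or None if no complete handshake is seen."""
        synack_ts = ack_ts = None
        for ts, direction, flags in observations:
            if direction == "to_subscriber" and flags == "SYN-ACK":
                synack_ts = ts
            elif direction == "from_subscriber" and flags == "ACK" and synack_ts is not None:
                ack_ts = ts
                break
        if synack_ts is None or ack_ts is None:
            return None
        return round((ack_ts - synack_ts) * 1000.0, 3)

    print(access_rtt_ms(packets), "ms")   # ~19 ms for this made-up handshake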

It's probably hard for you guys to see from there, but this is trying to compare native v6 versus IPv4. We actually saw IPv6 performing better, and both v6 and v4 were performing at acceptable levels, 16 and 24 milliseconds, so both quite good quality; you can assume that any real-time service can be delivered. 80% of the packets were observed at that latency. You could ask me why IPv6 is better than IPv4; I don't think I have an answer, but it might be related to the fact that IPv6 devices typically have a better stack and better processors, or it could be other reasons that have been mentioned before.
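(Illustrative aside, not from the talk: a tiny worked example of the percentile view, with invented samples arranged so that the 80th percentiles land on the 16 ms and 24 ms figures quoted above.)

    def percentile(samples_ms, pct):
        """Nearest-rank percentile: the value at or below which pct% of samples sit."""
        ordered = sorted(samples_ms)
        index = max(0, int(round(pct / 100.0 * len(ordered))) - 1)
        return ordered[index]

    # Invented access RTT samples in milliseconds.
    rtt_v6 = [9, 10, 11, 12, 13, 14, 15, 16, 40, 120]
    rtt_v4 = [12, 14, 16, 18, 20, 22, 23, 24, 60, 150]

    print("80th percentile, native IPv6:", percentile(rtt_v6, 80), "ms")   # 16 ms
    print("80th percentile, IPv4:", percentile(rtt_v4, 80), "ms")          # 24 ms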

Now, this is of course a bit obvious, but I thought it would be interesting to point out that the tunnelled versions of IPv6 perform dramatically worse than native. It's interesting: if you look at Teredo, it was running at around 120 milliseconds, which prevents you from using real-time applications over it. You're going to have a hard time having a decent VoIP call with those levels of latency. 6to4 is better, but still not comparable to the native versions. Still interesting to see the results nevertheless.

So in terms of the subscriber client, this is just ranking the worst users that were observed in terms of access round trip time. You can see on the left side native IPv4 versus Teredo. For the people using Teredo, in the worst case the user sees a one-second latency, which is pretty much unusable, and these are still averages. The sample of IPv6 is much smaller than IPv4, so the data could be skewed, but you see a much larger deviation on Teredo and it's extremely dramatic, while IPv4 is what's expected to be seen in a much more diverse sample.

And getting to the end here, this is something else that we do a lot, not only in the scope of v4 versus v6: we compare the efficiency of certain protocols, using the idea of how many bytes were actually required divided by the total bytes that were observed on the wire. So you can see IPv4 being more efficient for that specific sample than IPv6. And then a breakdown in terms of the applications: on the left side the actual, what we're calling, payload bytes, the bytes that were required to transmit the overall data, and on the right-hand side the volume that was being retransmitted. So you see a significant number there for flash video, related to the fact that for a lot of the services being offered, flash is an important component. And you see Netflix on IPv4 but not on IPv6, perhaps for obvious reasons, but it still points to the early adopter pattern of consumption on IPv6. And this is something for the future: we produced this report on the impact on video, we do a lot of research there, and we are planning to do the same on IPv6, so hopefully we'll be able to share that with you at a following opportunity, trying to find out if IPv6 is delivering the same or better quality to end users. And as I said before, with Netflix being a phenomenon in North America, and its equivalents in South America and Europe, not only the quality but what it means for video, a mean opinion score for video, is something we're focused on, and comparing v6 to v4 is going to be one of the things we're investigating.
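(Illustrative aside, not from the talk: a tiny worked example of the efficiency figure described above, payload bytes actually needed divided by total bytes seen on the wire, so retransmissions push the ratio down. The byte counts are invented; only the direction, v4 slightly more efficient for that sample, follows the talk.)

    def efficiency(goodput_bytes, retransmitted_bytes):
        """Share of bytes on the wire that were actually needed (higher is better)."""
        total_on_wire = goodput_bytes + retransmitted_bytes
        return goodput_bytes / total_on_wire

    # Invented byte counts for one sample.
    print(f"IPv4: {efficiency(970_000, 30_000):.1%}")   # 97.0%
    print(f"IPv6: {efficiency(940_000, 60_000):.1%}")   # 94.0%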

So that was it. I don't know if you guys have any questions or anything to add?

CHAIR: Questions?

AUDIENCE: Benedikt Stockebrand, freelance IPv6 guy. Earlier in your talk you mentioned 6over4. Do you have any idea where that comes from? I've never seen that anywhere with any customer, because nobody supports multicast routing for IPv4.

Cassio Sampaio: The data was collected from large carriers in the United States.

AUDIENCE: I'm not talking about 6to4. I'm talking about 6over4.

Cassio Sampaio: The data collected from Canada doesn't have that; it's certainly from the United States. I can't disclose the actual carrier.

AUDIENCE: Okay. Thank you.

AUDIENCE: Do you think it's possible to make your slides available on the website?

Cassio Sampaio: Absolutely.

CHAIR: Are there any other questions? Excellent. Thank you very much.

(Applause.)

CHAIR: That concludes this morning's presentations. We're now at lunch for 90 minutes, plus whatever we ran under. You can thank the speakers individually with drinks later. When we return this afternoon, I'm seeing there will be a beauty pageant. Looking at this crowd I'm not encouraged, but there may be ringers brought in for that purpose. Please return for that after lunch.

Thank you very much.











