Archives

These are unedited transcripts and may contain errors.


Plenary session on 1st of November, 2011, at 9 a.m.:

SPEAKER: Good morning, everybody. Welcome to the second day of the 63rd RIPE meeting here in Vienna. My name is  I am a member of the programme committee for this RIPE meeting and I was invited to chair this first session of the plenary today.

Please get a seat, sit down, and we can soon start with our first speech. Just because I am from Vienna, a few words about the picture we have seen here. You see the Austrian parliament, and the statue in front of it is Pallas Athena, the Greek goddess of wisdom. People ask why she is not in the parliament but outside, facing away from it. We don't know the answer.

So, our first speaker will be Yaroslav Rosomakho from Arbor Networks. And the stage is yours.

YAROSLAV ROSOMAKHO: Good morning. Everybody is awake. So, hello everybody. My name is Yaroslav Rosomakho, I am from Arbor Networks, and today I would like to present to you some statistical information that we have got from our Atlas network regarding DDoS attacks around the world through 2009 to 2011. We will also look into some details about attacks happening in Europe, and we will try to look into some statistical data that is available about attacks targeting some European countries.

So, first, I would like to introduce where the data is coming from. We have a product called Peakflow SP, and this is deployed in the majority of large Internet service providers worldwide. The idea is to correlate flow and BGP information available in the routers and to detect anomalies that we see in the traffic.

It has an optional feature, and some customers may want to enable this feature to share with Arbor Networks anonymised statistics about DDoS attacks and traffic trends they see. As of today, we have 193 Peakflow deployments worldwide whose administrators have decided to opt in to this anonymised stats sharing, and we get hourly XML files with information about DDoS attacks and traffic trends from those deployments. One thing I would like to note here: the data comes anonymised, so we are not exactly sure where the data is coming from when we get this information. In case Peakflow SP reports to us an incoming DDoS attack in a service provider network, we obfuscate the destination IP address to make sure that no damage is done to the service provider itself because of the statistics sharing. If an attack is outgoing, or if it is a cross-bound attack transiting through the service provider network, then the source information about where this attack is coming from is obfuscated, so that it cannot be extrapolated to a given subnet.
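To make that obfuscation rule concrete, here is a minimal sketch of the general idea; the hashing scheme and record format are illustrative assumptions, not Arbor's actual implementation.

```python
# Illustrative sketch of the anonymisation described above: hide the victim
# for incoming attacks, hide the source for outgoing/transit attacks.
# This is an assumption about the general shape, not Arbor's real code.

import hashlib

def obfuscate_ip(ip: str, salt: str = "per-deployment-secret") -> str:
    """Replace an address with an unlinkable token before stats are shared."""
    digest = hashlib.sha256((salt + ip).encode()).hexdigest()[:12]
    return f"anon-{digest}"

def sanitise(attack: dict) -> dict:
    report = dict(attack)
    if attack["direction"] == "incoming":
        report["dst_ip"] = obfuscate_ip(attack["dst_ip"])  # protect the victim
    else:  # outgoing or transiting attack
        report["src_ip"] = obfuscate_ip(attack["src_ip"])  # hide the origin
    return report

print(sanitise({"direction": "incoming", "dst_ip": "198.51.100.7",
                "src_ip": "203.0.113.9", "pps": 2_500_000}))
```

Per-attack country and origin AS can still be reported, which is what makes the per-country breakdowns later in the talk possible.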

Well, since most of the attacks that we see actually target a single host or a small subnet, we can conveniently see where the attack is going, both in terms of country and in terms of autonomous system.

So let's look into some findings of what we see.

The data that I will be presenting today is mostly based on the 2009 and 2010 statistics that we have, but there are a few insights from 2011 data that we will look into.

So, one of the interesting findings is that the vast majority of attacks is still small, although this percentage of small attacks is decreasing. By small attacks we mean attacks that are less than one gigabit per second or one million packets per second. In 2009, roughly 93 to 94 percent of attacks were small. In 2010, 79 and 87 percent of attacks were still small by those two measures. Speaking about 2011, we see that this percentage is decreasing, but the vast majority of attacks is still small.

Well, I guess most attackers can achieve their goal even with a smaller attack.

Another interesting finding is that large attacks are on the rise. If we compare the 2009 and 2010 statistical data that we have, we see that the number of attacks over 10 gigabits per second, and those are quite scary attacks, went up by 470 percent, and these trends are still progressing into 2011.

In terms of packets per second, the picture is less dramatic. If we look at bigger attacks, over ten million packets per second, we see a 45 percent year-over-year increase.

In terms of attack destinations, there are two things I would like to highlight here. The first one is that attacks against port 80 and attacks against port 53 are growing the most. Obviously those are partly driven by reputation risks, and by the fact that pretty much every new application is tunnelling itself over port 80. And port 53 is DNS; administrators typically pay much less attention to the security of DNS, so for attackers it's quite commonly easier to bring down the DNS infrastructure, the DNS server, instead of attacking the actual resource that this DNS is resolving for.

Right. So, as I said, attacks against port 53 are increasing, and the size of attacks against port 53 is increasing. Port 53 has seen the biggest increase in terms of large DDoS attacks year over year: we see an 885% increase in the number of attacks over 10 gigabits per second if we compare 2009 and 2010, and this is quite scary.

Well, unfortunately, it's quite simple to create a DNS DDoS attack, and reflection attacks make up most of the large attacks that we see; most of the largest attacks that customers are talking to us about are DNS amplification reflection attacks.
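As a rough illustration of why DNS reflection gives attackers so much leverage, consider the arithmetic below; the byte sizes are typical ballpark figures, not data from Atlas.

```python
# A small spoofed DNS query can elicit a much larger response that is
# delivered to the victim. The sizes here are illustrative assumptions.

QUERY_BYTES = 64       # spoofed UDP query towards an open resolver
RESPONSE_BYTES = 3000  # large answer sent to the forged victim address

amplification = RESPONSE_BYTES / QUERY_BYTES
print(f"amplification factor: ~{amplification:.0f}x")

# With ~47x leverage, 1 Gbit/s of queries becomes tens of Gbit/s of attack
# traffic at the victim, which is why the largest attacks tend to be DNS.
print(f"1 Gbit/s of queries -> ~{amplification:.0f} Gbit/s at the victim")
```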

Right. So, let's look at the champions. If we compare the champions of 2009, 2010 and 2011 in terms of bits per second, we see, I think, quite a pattern. In 2009 the biggest attack was about 50 gigabits per second. I am not saying it was the biggest attack on the Internet; it was obviously just the biggest attack that we happened to see information about in our Atlas network. The biggest attack in 2010 went up to 66 gigabits per second. Of course, we have all heard about attacks of 100 gig and even more in 2010, so this 66 gigabit attack is simply the largest that happened to be reported to us by the Atlas network. In 2011, so far, the biggest attack was nearly 80 gigabits per second at peak, and, well, the interesting part is that it was targeting some resource in the peaceful country of New Zealand.

Moving on. Packets per second. In terms of packets per second, the biggest attack that was reported in the Atlas network was still in 2010, and it was 108 million packets per second. It was targeting a DNS server in the US and it went on for three, nearly four days.

If we look at worldwide attack growth trends in megabits per second and packets per second, we see pretty much linear growth, a little bit faster than linear. Attack sizes in July 2011, if we compare that with the start of 2010, went up by 40% in terms of megabits and 165% in terms of packets per second.

Right. Interestingly enough, if we look at the proportion of large attacks, attacks over 10 gigabits per second and ten million packets per second, then we see much less trendy graphs; we see spikes in some months, and in most cases, if we look into the details of those spikes, into what those spikes actually are, we quite commonly see a series of attacks targeting the same service provider or even the same resource. As for peaks in 2011, we saw a peak of large attacks in March and a large peak of attacks in July. In March those attacks were targeting one resource; we saw 23 attacks within five days, between the 11th and 16th of March. The average size of an attack was 17 million packets per second. So if we look at attacks over ten million packets per second targeting it, we see that there was something in January, something in February, then there was a huge blast of those attacks in March, and then it came down to pretty much zero.

And those attacks were targeting port 80 and one specific ISP hosting provider in Belize.

The second peak of large attacks happened in July. Well, it was targeting a service provider that obfuscated destination IP addresses before reporting to us, so obviously this service provider was the victim of those attacks, and we don't have information about the destination IP address; well, we can only guess. But we have statistical information about port breakdowns and about the number of attacks: it was 118 attacks that were over 10 million packets per second. The majority of those attacks were targeting port 80, and as you see there were pretty much 25% of attacks that were targeting other ports, but the majority was going to port 80.

Now, let's dive into Europe a little bit to see how things are different if we compare worldwide statistics to European statistics. Just in order to be definitive, this is how we defined Europe for this study; this is the set of countries that we consider to be Europe.

Interestingly enough, in Europe there are many more small attacks than in the world as a whole. Where we have seen that in the world about 80-plus percent of attacks are small, in Europe it's still 95.3% of attacks less than one gigabit per second and 94.1% of attacks less than one million packets per second, and the average attack size in Europe is still pretty small.

As for the biggest attacks in Europe, interestingly enough we see that the volume of the peak DDoS attacks that we see in Atlas targeting European resources is decreasing. In 2009 it was 44 gigabits per second targeting a resource in Switzerland, then we have seen attacks pretty much close to 30 gigabits per second targeting a resource in Russia, and so far this year the champion was 17.78 gigabits per second. Again, this is data that we see in Atlas.

In terms of packets per second, the biggest attacks are pretty much the same, about 30-plus million packets per second, and we do not see that decreasing trend in terms of top packets-per-second attacks.

Now, Europe. There might be some US people in the audience and I guess it will be a surprise for them, but Europe is not a country. Europe consists of multiple countries, and these countries sometimes compete against each other, so in this report we decided to look into the competition in Europe in terms of DDoS attacks. Who are the champions in terms of overall volume of DDoS attacks? Great Britain, Romania and Germany are in second, third and fourth place, changing around a little bit between 2009 and 2010. Interestingly enough, in first place this year we have Turkey, the top destination of DDoS attacks in 2010, and they were nowhere near the top in 2009. Another interesting thing: Russia went down from being the most targeted victim of DDoS attacks in 2009 to position number six in 2010. Well, I guess that's a good thing.

If we look at average attack sizes per country, we see that across those top European countries, everywhere except Turkey, the average attack size increases, some twofold, some a little bit; but in Turkey the average attack size is decreasing. So the number of attacks against Turkey for some reason increased dramatically in 2010, while the average attack size stayed pretty much the same or even decreased.

So, let's look into those three champion countries: Turkey, Great Britain and Romania.

Turkey: The vast majority of attacks are small, so 96% of attacks are less than one gigabit and 98% less than one million packets per second. Port 80 is the dominant destination of DDoS attacks, and interestingly enough, port 53 is quite unpopular for DDoS attacks targeting Turkey. Only 1.9% of attacks that we see in the Atlas network target DNS resources located in Turkey, and that compares to 12 percent worldwide.

Moving on to the top attacks that we have seen. We are talking about averages, and the top attack in Turkey, the largest that we have seen so far in 2011, was 13 gigabits per second, 22 million packets per second. It lasted just ten minutes, but interestingly enough, it was targeting port 22, which should be protected by infrastructure access lists and all the other best common practices of network security.

Great Britain is steadily holding number two in terms of DDoS attacks in Europe. We see that about 90% of attacks in Great Britain are small, and the proportion of attacks against port 80 and against port 53, the common victims, is still the same between 2009 and 2010. Another interesting highlight here is that all attacks over 10 gigabits per second targeting Great Britain resources were targeting port 53; that is quite different from what we have seen for Turkey.

And here are the top attacks in 2009, 2010 and 2011 targeting resources located in Great Britain. Nothing much in particular here; the biggest attack in 2011 was relatively small, just 5 gigabits per second, one million packets per second, and in terms of packets per second the biggest attack was 14.57 million packets per second.

And last but not least, number three, Romania, which came up this year; we have seen that the biggest attack in Europe was targeting a resource located in Romania. Still, the vast majority of attacks are small: 98% of attacks are less than one gigabit, 95 percent of attacks are less than one million packets per second. And the proportion of port 53 attacks is unusually high. Where we have seen just below two percent of attacks targeting port 53 in Turkey, in Romania we have seen 33 percent of attacks targeting port 53 in 2009 and 29.8% in 2010, and if we compare that to the global 2010 figure of 12.2 percent, it's pretty much more than double in terms of the proportion of attacks targeting port 53.


As for the largest attacks, there was a huge spike in the largest attacks targeting Romania this year. The largest attacks were about 2.5, 2.7 gigabits per second in the years before that. The biggest attack so far in 2011 is 17.78 gigabits per second.

So, as we can see, DDoS attacks are unfortunately not going down over the years, and they can target pretty much every country. Patterns are quite different depending on the country and on the time frame, and it's obviously highly important for service providers to follow best common practices, to secure their DNS resources, to deploy infrastructure access lists, and to protect their critical network assets. So that is pretty much it from my side. Are there any questions?

AUDIENCE SPEAKER: One question: you mentioned 193 sensors, or sensing networks. Can you say anything about the distribution they have? Because you are doing regional analysis; are they mainly focused in North America, Europe, Asia?

YAROSLAV ROSOMAKHO: That is pretty much worldwide, so we have deployments all over the world, and the destination of a DDoS attack doesn't have much to do with the system which reports the information about that attack to us. You know, the botnet that is responsible for a DDoS attack can be located on one continent and the victim can be located on another continent; that is very common. So I don't think our data is massively affected by the global distribution of systems reporting to us.

AUDIENCE SPEAKER: Second question: this year we had a little bit of political issues, particularly in the Arab world. Did you guys see any reflection of that in Atlas? We know what they did is just turn off BGP completely, but I wonder if you have seen anything happening that reflects in your monitoring?

YAROSLAV ROSOMAKHO: Well, unfortunately, we cannot monitor the reasons for attacks. This is not something technical. It's impossible to say why these resources were attacked or what the motivation behind an attack was. We can sometimes assume these things; well, my favourite example personally that I have seen this year is a six million packets per second attack on a pizza delivery company in Moscow. I don't have a good explanation for this attack. So yes, we do see some increase of attacks in the Middle East and we will conduct additional studies regarding this matter, but unfortunately we cannot say why this spike has happened.

AUDIENCE SPEAKER: Thank you.

AUDIENCE SPEAKER: Google. Have you seen any changes in v6 traffic patterns? Could you see any v6 attacks?

YAROSLAV ROSOMAKHO: So far, we have seen only one v6 attack and it happened back in 2004 or 2005, I believe. Interestingly enough, this attack was targeting a botnet command and control server, so it seemed that one botnet was effectively fighting against another botnet. Well, DDoS attacks are mostly there to take away the availability of resources, and the availability of resources means reputation risk, financial risk and other risks. So far, since IPv6 is not yet business critical for most resources out there, I guess attackers are not seeing IPv6 as a viable target for them. But what we see, and this trend is really increasing, is that botnet command and control communications are more and more often using v6 or tunnelling in order to hide their communication channels from service providers' security teams.

AUDIENCE SPEAKER: Thank you.

SPEAKER: What is the maximum intercontinental link capacity today and how many nodes would it take to incapacitate the network?

YAROSLAV ROSOMAKHO: Could you repeat the second one, please.

AUDIENCE SPEAKER: How many nodes would it take to incapacitate the network?

YAROSLAV ROSOMAKHO: Incapacitate which network, the Internet? I think there is a small misunderstanding here. Atlas is not a physical network of nodes connected with transatlantic links; it is just a set of Peakflow SP deployments worldwide that report to us over the Internet. I am not a specialist in transatlantic links, maybe somebody else in the audience can answer that, but, well, considering 100 gig Ethernet, I don't think we will see transatlantic links go down because of DDoS attacks, at least not yet.

DANIEL KARRENBERG: RIPE NCC. I was missing a little bit about the methodology in the beginning, and continuing on that question: do you have any estimation of what percentage of the DDoS attacks that are actually happening your infrastructure sees and what percentage it doesn't? Does it see all of them? Can you characterise that a little bit more?

YAROSLAV ROSOMAKHO: OK. Unfortunately, there is no strict definition of a DDoS attack. So what is a DDoS attack? It can be anything that brings down a resource, that stops availability, or it can be many things. We have a number of detection mechanisms in our product; some of those are statistically based and their accuracy depends on the configuration of the system, so in these studies we do not rely on the results of those detection mechanisms, since that depends on the configuration of the actual system. The basis that we use for these studies is something that we call misuse detection: when we see too many packets per second, over a given threshold, going to a single host, a /32 or a /128 in terms of IPv6, that is quite definitely a DDoS attack. So if we see more than 1,000 packets per second of ICMP, and this is not Google, then, yeah, it's quite obvious that it is a DDoS attack, or more than, I don't know, 20,000 packets per second of UDP port 53.
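A minimal sketch of this kind of threshold-based misuse detection, using the example thresholds mentioned above; the record format and matching logic are illustrative, not Peakflow SP's actual configuration or code.

```python
from collections import defaultdict

# pps thresholds per (protocol, destination port); None matches any port.
THRESHOLDS = {
    ("icmp", None): 1_000,   # >1k pps of ICMP to one host is suspicious
    ("udp", 53): 20_000,     # >20k pps of UDP/53 to one host is suspicious
}

def detect(flow_records):
    """flow_records: iterable of (dst_ip, protocol, dst_port, pps)."""
    pps_per_target = defaultdict(int)
    for dst_ip, proto, port, pps in flow_records:
        for (t_proto, t_port), _limit in THRESHOLDS.items():
            if proto == t_proto and t_port in (None, port):
                pps_per_target[(dst_ip, t_proto, t_port)] += pps
    return [
        (target, pps) for target, pps in pps_per_target.items()
        if pps > THRESHOLDS[(target[1], target[2])]
    ]

# 25k pps of UDP/53 towards a single /32 trips the detector.
print(detect([("192.0.2.1", "udp", 53, 25_000)]))
```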

DANIEL KARRENBERG: But the question was more like, could there be significant attacks that you don't see because of the distribution of your measurement places and stuff like that?

YAROSLAV ROSOMAKHO: Well, again, I don't think I have a good answer for your question. I would also like to note here that we monitor only attacks that are going between service providers, that are crossing their external interfaces, and we have heard reports of attacks internal to a service provider, where the botnet is somehow located in the access network and what have you. So I do not have a strict percentage, whether we see 50% of Internet attacks or 30 percent or 70 percent; we can assume some things, but I don't think we are speaking about strict maths here.

GEOFF HUSTON: APNIC. I have a question for you and it's about measurement methodologies because the data you are presenting is certainly different to data that is gathered using a different methodology. Let me explain a little bit: Over the last couple of years we have been doing experiments by simply advertising /8s and looking at all the traffic that comes into it. This is homeless traffic not directed at a known location. This is just basically viruses searching for victims and that profile is quite different to the profile you are presenting here. How much of the traffic you are seeing is actually being sent to known victims, versus traffic that is just sent to random addresses? Because it seems that you are seeing something completely different to the profile we see when we set up a dark network and I am kind of curious about this and I am now sort of wondering why is this different and do you have any ideas?

YAROSLAV ROSOMAKHO: So we do also have a network of honeypots, which is pretty much what you are explaining, I think, and we also catch viruses and bots with honeypots, with virtual machines basically that try to represent some system, and we see how they behave, where they connect to; that is how we can discover botnet command and control servers. By the way, if you are interested, the results of this analysis are publicly available at Atlas.Arbor.net. And indeed, this activity has a completely different profile from actual DDoS attacks, because it has nothing to do with actual DDoS attacks. Worm propagation is one thing; it can be used as a preparation, it can be used as a source for the black market. An actual DDoS attack is usually not something where a worm propagated itself and then, you know, immediately started attacking; in that scenario those events would be connected. In most cases the worm has a botnet command and control server that tells it what to attack and when, or all this information is somehow hard coded in the virus or worm. I believe that the propagation of viruses and worms and the attacks that those things carry out are completely different things. And I'd also like to note another interesting phenomenon that we saw last year, something called a voluntary botnet; I think most people have heard about that, when people installed some software voluntarily on their PCs to attack something that they believe is wrong.

CHAIR: We are running out of time, two short questions from the ladies at the microphone and then close the mikes.

AUDIENCE SPEAKER: This is another question from the same remote participant, and he is saying: you said the largest attacks were 18 Gbps; how much of your link is that? At some point this is dangerous; how safe are you?

YAROSLAV ROSOMAKHO: Again, this was an attack not against our infrastructure, and we are not a service provider. We provide products that service providers worldwide install in their networks, and we do not monitor the capacity of their links, so I don't know. Most likely the link itself was much bigger than 18 gigabits, but the question is not usually the capacity of the uplink; the question is usually the capacity of the downlink, or what kind of throughput the given server resource can take care of.

AUDIENCE SPEAKER: From Google again. So my question is related to the first one, about your installation base. Have you tried to normalise your data against changes in your installation base? Because, for example, if you have recently installed more devices in Turkey, you could probably see more attacks targeting Turkish resources.

YAROSLAV ROSOMAKHO: If we installed more devices in Turkey, we would not see more attacks targeting Turkey, because the destination of an incoming attack is obfuscated and the source of an outgoing attack is obfuscated.

AUDIENCE SPEAKER: Probably a bad example.

YAROSLAV ROSOMAKHO: I get your point, just to be clear on that. Yes, the install base is increasing, networks are expanding, networks are changing all the time. But most of the numbers that I have presented today were percentages; I was not really speaking about the number of attacks, except for some particular incidents. I was speaking about percentages, like 95 percent of attacks are small. That is one of the reasons why we do that: we believe our data is much more reliable if we divide the number of incidents by the total number of incidents.

CHAIR: Thank you.

(Applause)

Next one is Manish Karir from Merit Network, telling us a little about the reputation of networks.

MANISH KARIR: Hi, I am from Merit Network. I am here to talk about the idea of network reputation for networks in the RIPE region, to talk about the data that we have collected and give some examples, and then to discuss the future of our network reputation idea.

So, how do we create incentives for the need to run a clean network? How do we measure the relative security posture of a given network, and balance the need that networks have to communicate against the risks of communicating with another network? What tools do you have to make business decisions or connectivity decisions, and how can you estimate the likelihood of malicious activity from another network? Is there some way we can assign a risk metric to a BGP path? To make these decisions, we need to know about the historical and current reputation of existing networks.

So network reputation is an attempt to build metrics to illustrate the collective reputation of hosts in an administrative domain, as opposed to the host reputation idea, which discusses the reputation of individual IPs or URLs. The problem with the latter is that mappings of IP to domain change, and so host reputation is unreliable over time; hosts can move, whereas network reputation should be approximately constant. The level of aggregation should mitigate this problem.

So, what level of malicious activity or pollution is acceptable to the community? We see an increase in the amount of spam that is delivered and in malicious activity every year. When do we choose to do something about it? Hosts that engage in malicious activity such as spam or phishing reduce the external reputation of your network, the way other people view it. It doesn't really go unnoticed; we all see what your network is doing. So what is acceptable to your customers, what is acceptable to your peers? Being blacklisted for having a bad reputation, for having hosts in your network doing malicious activity, can impact your customers severely, and not all networks are equal. What policies make your network more likely to be blocked? What modifies reputation? How can we improve our reputation individually and collectively? How can we improve our security individually and collectively?

So I would like to talk about reputation-based security policies. Again, network reputation is not just something that other people know about you; reputation is something that comes from people talking about your network on mailing lists or talking about you with their friends on the phone, behind closed doors. This doesn't help us, because it doesn't create a level playing field for reputation. Not everybody knows the same information, and therefore not everybody has the same picture of your reputation. We'd like to create a scenario where people have a level playing field for reputation: they all see the activities that your network is seen to participate in, and they all can make business decisions or policy decisions based on that.

So you can use reputation to craft flexible policies, to manage your risk profile. We are creating an index for reputation, which is basically an aggregate of many different reputation data points, and we would like to propose some possible uses of this index.

These are all just possibilities and we aren't recommending any of them.

So, for example, with BGP we could compute the relative reputation of an entire path, or for spam we could use a reputation filtering system: instead of using deep inspection or, for example, form-based inspection, you could do just a reputation check and then forward on to the other filtering, depending on where the traffic is coming from.

The possibilities go on and on.
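As one concrete reading of the BGP idea above, here is a minimal sketch of scoring an entire AS path from per-ASN reputation values; the scores and the aggregation rule are illustrative assumptions, not the actual index computed by Merit.

```python
# Hypothetical per-ASN reputation scores: 1.0 = clean, 0.0 = fully listed.
ASN_REPUTATION = {64496: 0.98, 64497: 0.60, 64498: 0.95}

def path_reputation(as_path, default=0.5):
    """Multiply per-hop scores, so one dubious hop drags the path down."""
    score = 1.0
    for asn in as_path:
        score *= ASN_REPUTATION.get(asn, default)
    return score

# Compare two candidate paths to the same prefix.
print(path_reputation([64496, 64498]))         # ~0.93, clean path
print(path_reputation([64496, 64497, 64498]))  # ~0.56, one dubious hop
```

A policy could then prefer the cleaner path, or flag mail arriving via low-scoring paths for heavier filtering, along the lines described above.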

So, reputation black lists, host reputation black lists, are mostly lists of IPs or domains that have been observed to participate in suspicious behaviour. We analyse several different types, including spam, malware and phishing hosting sites, and active attack behaviour such as TCP scanning and brute forcing. Our goal is to analyse the distribution of hosts on these lists, to see if there are common traits, to characterise observed malicious activity at the level of prefix, country and ASN.

So, briefly, I'd like to talk about the address space distribution in the RIPE area. The EU accounts for approximately 21% of allocations, and then Great Britain, France and Germany together account for another 30%, totalling the majority of the space; there are roughly 733 million IPs allocated in the RIPE region. For the spam list analysis we are going to consider these lists; other ones were rejected just because they were very similar in content. Then we are going to talk about the portions just from the RIPE region, and beyond that, we will break down the data points that remain into countries.

So, this is the composition of the RIPE region in terms of which countries have membership on these lists. You'll note that Russia and Germany are on both the CBL and the Barracuda banned list in large proportion, and Russia also makes a large appearance on SpamCop. This table details the number of IPs on each list and what portion of that the RIPE IPs make up.

In general, countries with larger allocations appear to have more entries in block lists, which is approximately what you might expect if you assume that infection rates are constant. However, if we look closely, we see that this is not true. When we look at block list entries relative to allocation sizes, a different picture appears. So we will look at both; what do we expect to see?

As you can see here, the top chart is sorted by allocation size, from large to small, and it does appear to lean to the left. However, when we look at the portion of address space that is listed on a block list, the blue bars, you will notice that this is actually clearly right-biased. You will note that networks from Belarus have approximately 60% inclusion in the Barracuda block list. And just to validate, we performed the same analysis with another block list; absolute volume of IP inclusion on a block list does not necessarily correspond to the volume of spam sent, but it speaks to the relative pollution of the address space.

And then just to validate, we have, again, the same right bias.

So not all networks are created equal when it comes to entries on spam lists. You will note almost 65% of Belarus was on the Barracuda reputation block list. Almost 40 percent of Saudi Arabia was on the Barracuda block list, and about 35 percent of Turkey's space is blacklisted as well. Only ten percent of Germany's space is listed; however, that amounts to somewhere above 9 million IPs on the block list.
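A small sketch of the absolute-versus-relative comparison being made here, using the rounded figures quoted in the talk; the allocation sizes are illustrative, not the exact dataset.

```python
country_stats = {
    # country: (IPs allocated, IPs on the block list) -- illustrative numbers
    "Belarus": (400_000, 260_000),      # ~65% of its space listed
    "Germany": (90_000_000, 9_000_000), # ~10% listed, but 9M addresses
}

for country, (allocated, listed) in country_stats.items():
    print(f"{country}: {listed:,} IPs listed "
          f"({listed / allocated:.0%} of its space)")

# Germany dominates in absolute volume, but Belarus is far more polluted
# relative to its allocation size: the right bias in the proportional chart.
```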

So, given the allocation sizes, there are some other countries that you would assume would have high list inclusion, such as the Netherlands, Sweden, Denmark and Norway; however, those are relatively low in spam. Smaller percentages of their IPs are listed, but the relative trend is similar. So what accounts for these variations? Should we be looking at policy differences, connectivity differences, or is it network topology?

So, again, just to point out the statistics we discussed: here is the inclusion of Belarus, and then you will note... yes. Let's look at malware hosting and phishing hosting IPs as well. We considered three major lists and omitted several others, and we are using the same methodology as in the spam analysis, where we reduce it to the RIPE region and discuss the breakdown by country.

So here you will note, again, that Germany has a clear presence on all three of these lists, as does the Czech Republic, and the RIPE region does account for a large proportion of each of the three lists.

So the Czech Republic has a relatively higher percentage of all the listed domains, roughly 30 percent of all the RIPE region domains. Poland and France have an unusually high percentage listed as phishing hosting, and aside from Russia there seems to be very little correlation with the spam lists. And again, these are the same numbers.

So, in terms of active malicious activity, you will note that IPs allocated to Russia, again, make up a large proportion: Russia accounts for nearly 30 percent of the scanning activity, then Lithuania and Ukraine account for 30 percent of the brute force activity, with Russia contributing an additional 23 percent. But you will note unusually low malicious activity listings from France.

So, let's talk about boundaries when analysing reputation. What are the effective boundaries? I mean, country boundaries aren't granular enough to make policy decisions for network operators, so what can we do better? They are certainly not good enough to make service decisions either, so let's examine using ASNs as our boundaries. On address distribution in the RIPE region: roughly 25% of the prefixes in the BGP routing table are from the RIPE region, a total of 733 million IPs. These are the top ten ASNs by announcement size; just briefly, some of these numbers will appear again.

So, this is an analysis based on the prefixes that we observed being announced by these ASNs. You will note, in terms of volume, TTNet, SaudiNet and BTnet. And then again, this is the same graph as before, with the absolute volume of IPs listed on the top and the proportion of IPs listed on the bottom. Again, Ukraine Telecom is nearly entirely listed, SaudiNet is almost 85 percent listed and TTNet is 60% listed.

So, again, this is about what I just said: you will note that there are some ASNs that have a lot of IPs announced but negligible block list entries. I don't believe you can see the numbers at the bottom, but you will note the gap here and here, and those are the two ASNs discussed on this slide.

15 of the largest 100 ASNs have more than 40 percent of their address space listed, and I find that significant. This is the top 1,000 ASNs ranked, so the numbers along the bottom are not AS numbers but the rank in terms of inclusion on the list, where the blue line is the proportion included and the red is the actual volume of included IPs. Almost 500 have 40 percent of their block listed, and this is just in the RIPE region. Almost 200 have, on a different list, at least 50% included.

So, briefly, doing the same ASN analysis on the other lists that we analysed: you will notice that AS5610, Telefonica Czech Republic, represents about 30% of the RIPE region entries and about 47 percent of the hpHosts entries, which is a malware hosting list. The next highest contributor is AS257. There are clear trends in hpHosts across several lists, here and here and here, and then together, Dragonara and Alpha Telecom represented 25 percent of the listings on PhishTank.

So you will note that TTNet takes the top place on three of our four sources, and Zeus, because of its size, is a negligible list here. And again, this is basically the same data over again.

So, are ASNs the most useful boundary for assessing reputation that we can come up with? I don't feel so. They don't necessarily indicate the administrative domain; they may just indicate the upstream service provider. How can we identify it more effectively? We attempted to use all prefixes announced in the BGP routing table. We split it down to the RIPE region and then performed this analysis again. We don't pretend this is the optimal way to do it; however, with more data points, with more people performing these analyses, or with more views of the BGP routing table, we could evolve a better identification of the administrative domain, or a more ideal point at which to aggregate reputation.

So, briefly, we will discuss the prefix spam list distribution. The RIPE region has 88,250 prefixes out of the total routing table. There is no doubt large prefixes have large numbers listed again, but over 15 prefixes have over 500,000 of their IPs listed, and again, Turk Telekom has over 1.4 million out of their 2 million. All 50 prefixes shown above have at least 200,000 IPs included, which I find significant. And again, the same analysis for a different block list.

So, this is discussing the relative amount of inclusion, and you will see that over 253 prefixes, although we only show 50 of them on this chart, over 253 prefixes are completely included in the Barracuda block list, and over 3,500 out of all of the RIPE region prefixes have more than 85 percent of their address space blacklisted. Again, the same data for different lists.

So, the relative percentages of IPs for the top 50 prefixes, for both the domains list and the hpHosts domains list, are shown above; again, similar numbers to the ones we looked at before, just specific to prefixes now.

And again, I will let you read through this, briefly.

Note that there aren't prefixes that carry across directly between any of these lists. So let's briefly speak to the global comparisons.

You will note that the ARIN region has a relatively lower rate of blacklisting for spam, whereas RIPE and APNIC both contribute in much greater proportion. Then for the malware RBLs, the ARIN region has much, much higher inclusion than any of the other regions; however, RIPE is also relatively high. And the RIPE region has comparatively higher rates of membership for malicious activity such as scanning or SSH brute forcing; you will note that APNIC is relatively high there as well. Our goal is to develop a comprehensive tool for reputation visibility globally, and this doesn't have to be implemented in any certain way. It's a tool for the uses that people see fit, and while we are using data sources from RBLs currently, that is just the beginning; it's basically a way for us to develop algorithms for handling reputation. In the end, we intend to have many more data points from many deployed sensors. Different networks have different views of reputation, and the more data points we have, the closer to true reputation we will get.

So, the system must allow all networks to participate on a level playing field, rather than just getting one view of a black list from somebody's perhaps biased or not publicly known methodology. We would like to have an open methodology and unbiased listing. The current project to build such a system is ongoing at Merit Network, and we will soon be recruiting via mailing lists; send us an email. How reputable is your network?

I'd be open to questions now, as well.

(Applause)

CHAIR: Thank you, Kyle. Any questions?

RANDY BUSH: Randy Bush, IIJ. What would I do about it?

KYLE CREYTS: Excuse me?

RANDY BUSH: What will I do with this?

KYLE CREYTS: There are many different things you can use reputation for: a tool for your customers to use in assessing their risk profile, or a tool for you to use if it is in your interest to clean up your network. It's a tool to help people attract customers, or to create an incentive in the community for cleaning up, if it sees fit.

TODD UNDERWOOD: Todd Underwood, Google. So I think part of the intent of Randy's not uncommonly cryptic question is that you have aggregated a number of different kinds of bad behaviour in a way that doesn't obviously lend itself to any specific action, right? So I think one of the things that might be useful for you guys is to take a step back and ask what people should be doing about this. "Just improve your network" or "don't be a bad person" is certainly one useful thing, but you might want to say something like: let's say I want to do wholesale spam filtering, and do it more accurately and at the routers; instead of using individual spam-processing signals within mail messages, maybe I want to set some threshold for prefixes and drop those that have an overly high rate of spam at the border of my network. Do you see what I am saying? At some point you need a few plausible, if maybe a little bit extreme, use cases that could drive the precise research and explain to people how these signals aggregate in a way that they can do something about them. Does that make sense?

KYLE CREYTS: Absolutely. I believe that is what I was talking about when I mentioned a risk metric for BGP paths, or using reputation as a BGP filtering mechanism.

TODD UNDERWOOD: A spam metric: here are 17 prefixes. And I think listing the prefixes is fine because we are all friends here; well, we are not actually, I don't like most of these people. But if you list, here are 17 prefixes that, were you to just drop them at your network edge, would dramatically reduce the volume of spam coming to your users, they might have the following other consequences, and you list what other things are in those prefixes, then I think doing those things would be really useful for us as a way of assessing the utility of this, the quality and validity of the metrics, and as a way of guiding you in figuring out whether these metrics make sense.

KYLE CREYTS: Absolutely. It isn't supposed to be a universal solution. No one solution will fix all of the problems. The idea is to try to reduce the noise quickly and efficiently.

AUDIENCE SPEAKER: Tom Best, RIPE NCC. To repeat a couple of questions that I posed last year when this work was presented. The first one is about the recursive problem with reputation. You started off saying that you wanted to make reputation information more accessible to people, and at the end you pointed out that this would perhaps be useful to help people attract customers. Well, you run into a fundamental problem in trying to account for both of those things at the same time. First of all, if you are going to democratize reputation information, how are you going to reduce the risk of it being polluted by malign actors who in fact would like to insert misleading reputation information? If you have to apply a reputation metric to the providers of reputation information input, then you are creating a kind of reputation hierarchy by design, especially if you start off with some baseline assumption that a network has been in production and has been observed for a long period of years and therefore we automatically credit it with higher reputation. You run the risk of creating a system in which everyone is immune to criticism from below, perpetually, and enjoys a kind of impunity from any reputational ill effects of bad behaviour. The other question, actually, I'll just get you to answer both at the same time if you don't mind.

The second question was: given the goal of aggregating and getting an aggregate, composite reputation measure, how would you handle what seems to be an increasing share of the cases, which would be apparently single-homed ASes originating one prefix?

KYLE CREYTS: The idea is not just to aggregate, but to aggregate for making decisions, and also to be as granular as is desired. So we are collecting reputation at all the different levels of aggregation, down to the most specific prefix, and also examining it at the specific prefix level. And could you repeat your first question?

AUDIENCE SPEAKER: How do you deal with the recursive problem of reputation? How do you eliminate the risk of reputation being degraded intentionally by people, without having a prior reputation mechanism to filter it?

KYLE CREYTS: I don't think it's the only metric that can be applied. I think the point of distributing the collection is that there is a ground truth; if everybody is contributing, it's much more difficult to create such distortion.

CHAIR: One last question from Jabber, I guess.

AUDIENCE SPEAKER: From a remote participant: this needs real buy-in. There are already a number of activities along the same lines, such as Spamhaus DROP and similar efforts by Arbor, but without a more global collaborative effort it will continue to be piecemeal, adopted by some but not all operators. Often, from experience, some operators have no interest in such a collaborative effort, as it is contrary to their commercial objectives.

KYLE CREYTS: I think we all have an interest in cleaning up, whether you are a carrier or a consumer of bits. There is a financial relationship in the end: when your end customer is impacted, you are impacted, and that is just my answer to it.

CHAIR: OK. Thank you, Kyle.

(Applause)

CHAIR: Now we will hear Martin Pels from AMSIX, telling us about their 100 gigabit experience.

MARTIN PELS: OK, so this was going to be a presentation together with Brocade but unfortunately he couldn't be here so it's just me. I would like to talk a little bit about the developments in 100 gig and the operational experience that we have had with it at AMSIX.

So, the state of the 100 gig standard at the moment: last year IEEE approved the 802.3ba standard, the first generation of optics started shipping, we got them this year, and currently work is going on on the second generation of optics.

The current state is that, right now, there is no serial 100 gig, and that will be a while away, so the current optics all have a 10 by 10 electrical interface; that is ten times 10 gigabits towards the line cards. Towards the optical side there are two different standards: the IEEE LR4 and ER4, a 4 by 25 gigabit interface, and there is the MSA camp, which has a ten by ten optical interface.

A little bit more about the 10 by 10: this is a joint initiative from a bunch of vendors and a couple of operators for a cheaper 100 gig standard. Right now, there is a two kilometre standard, and the 10 kilometre and 40 kilometre standards are done as well, though for those there are no products yet. In the IEEE, currently there is work under way on the 4 by 25 electrical standard.

There is also work going on on KR4 and CR4, and this will be for backplanes and 100 gigabit over copper.

A group has also been started on 4 by 25 gigabit for short distance single mode, and this will be targeting data centres; there will be very high density with very small optics.

Finally, there is the IEEE Ethernet bandwidth assessment that is going on. They are seeking operator input for the next generation of high speed Ethernet, beyond 100 gig: 400 gig, maybe terabit. So if you want to get involved with this, have a look at the site and get involved.

This is what the optics currently look like. On the left there is the CFP, which is what is currently being shipped, and the others are the optics that are under development now. As with 10 gig, we start with really big optics and eventually get smaller and smaller.

Then it didn't work any more.

A little bit more about the ten by ten MSA. The idea is that there is currently a problem with the IEEE standard in that it has a gap: there is the SR10 standard for multimode 100 gigabit and the LR4 for ten kilometres, and there is not really something in between. Also, the LR4 standard has the 4 by 25 optical interface that I just talked about and the 10 by 10 electrical interface, and to make that conversion there is a gearbox in the optic, which makes the whole thing very expensive and very power consuming. So that is why a bunch of folks started working on something to bridge the gap, which is the ten by ten MSA. This one has two kilometre optics on the market already, work is going on on ten kilometres and 40 kilometres, and these optics are considerably cheaper; hopefully that will kickstart the deployment of 100 gig.

The next couple of slides are a little bit of an overview of the different module types, and I am going to go through them; you can look at the slides online if you are interested in more info.

So the first one was module types. These are the different vendors that have products on the market, and these are the different technologies being worked on.

So, a little bit about the operational experience that we have had. We did a couple of things. We did a large scale lab test at Brocade in San Jose. We did a trial with one customer so far, Limelight Networks. We did some metro-area tests to see if we could deploy 100 gig in our backbone, and we did a long-distance trial with SURFnet. The devices on the right are traffic generators. These are one-port 100 gigabit traffic generators that can do line rate 100 gig at all the different frame sizes, and they helped us a lot with our testing.

So the first thing we did was a large scale test at Brocade in San Jose, in the lab. We built a standard AMSIX type of platform, a VPLS platform, with three PEs and two P routers to load balance all the traffic over. As you can see, we had a bunch of big LAGs in between them and a whole bunch of different traffic generator ports. We had our two traffic generators that we just saw, and Brocade provided us with a whole bunch of ports, 10 gig and 100 gig, from their test centre.

So the goal of that test was to see if we could deploy 100 gig in high density: completely filled-up chassis with 32 100-gig ports, large aggregates, one aggregate of 64 10-gig ports, and two aggregates of a terabit. With that we pushed a lot of traffic through it; the highest we did on a single chassis was 4.6 terabits. Of course, we found a whole bunch of bugs there, and in the months after this, Brocade came up with fixes and we verified these in their lab and in our lab in Amsterdam, and we now have a stable release for that.

The second thing we did was a trial with our first customer, Limelight Networks. The goal of that was to see if we could deploy a customer with 100 gig. What we had was an MLXe with two times 100 gig to their backbone, on two MLXes for redundancy, each with 16 times 10 gig to our own backbone. Then we borrowed three 100 gig modules from Brocade, a couple of ten by tens, and of course we used our Anritsu traffic generator, plus a 10 gig traffic generator, because our 100 gig traffic generators could not do IPv6, so we had to test that with 10 gig only.

This is the setup that we had. On the left you have the Limelight network, an 8-slot chassis; then we connect that via a patch panel in the room of a data centre in Amsterdam to our Glimmerglass. The way this works is that we deploy Glimmerglass equipment for all our ten and 100 gigabit connections. This is sort of an electronic patch panel, so via the Glimmerglass the customer is connected to one Ethernet switch, and if that dies we have a 200 millisecond failover on the patch panel to move the customer to a backup switch. So this is one of the things we tested. We tested layer two forwarding and routing, and we tested the topology failovers, IPv4 and IPv6 of course, and as we went along we went through several revisions of the hardware and software from Brocade.

The results of that: we get line rate forwarding for traffic mixes, that is, a mix of smaller and larger frame sizes as is common on our platform. Routing, the same thing. We had about 190 millisecond failover time on the Glimmerglass, and that is a failover time including MAC learning and bringing the interface up.

We tested sFlow sampling. We found that this works down to a sampling rate of 1 in 2,048; if you sample more aggressively than that, the line card CPU is unable to process all the samples because of the amount. And we briefly tested jumbo frames, but only a little.

So this customer is currently actually in service; I am not sure if they are already pushing traffic over the link, but it is in our production VLAN.

The next thing we did was look at how we can deploy it in our backbone. This is a picture of Amsterdam. We have several data centres spread around Amsterdam where we deploy our switches, and the distances here are larger than ten kilometres, so we cannot use the optics that are currently available. So what we did is took these two data centres in blue, where we had a spare metro fibre between them. We looped it around and put a traffic generator on one end, so we created a fibre of 28 kilometres, and we tried to bridge that with several solutions.

So yeah, 100GBASE-ER4 is not ready yet, and we are looking into deploying this in the first quarter of next year already. So the things we looked at were amplifiers: we looked at LR4 with semiconductor optical amplifiers, and this actually worked. The 28 kilometre fibre gave about 16 dB attenuation, and with the semiconductor optical amplifier we were able to bridge that and pass traffic over it for a 24-hour period.
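As a rough sanity check on those numbers, assuming the 16 dB covers the whole looped 28 km span:

\[
\frac{16\ \text{dB}}{28\ \text{km}} \approx 0.57\ \text{dB/km}
\]

which is plausible for metro fibre including splices and patches, given that standard single-mode fibre alone runs roughly 0.35 dB/km around the LR4 wavelengths near 1310 nm.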

Unfortunately, the solution we had was kind of a lab thing: it was a circuit board with the SOA soldered on top of it and no casing around it, no pretty interface, so we are still looking for solid production-quality solutions for that. But the concept does work, and that was a good step forward.

For the ten by tens, the issue is a little bit different, because they work in the 1550 nanometre range and we actually haven't found any solutions yet that can do amplification there properly, or at least nothing cheap, because the SOAs are out of range for the higher channels, they only go to about 1570, whereas ten by ten goes up to about 1600, and the EDFA has issues on the lower channels, so that didn't work either. So if there is anyone who knows of a solution that does work, I'd be happy to hear it.

The last thing we did was a long-distance trial. CERN currently has this end-of-the-world machine in Geneva, and one of their tier one sites for that is in Amsterdam; they push a lot of data through that every day. So they were working with SURFnet in Amsterdam to see if they could do a 100 gig wave from Amsterdam to Geneva, and SURFnet said, this is fun, but let's see if we can also do Ethernet on that, and they asked us if we had equipment for that and if we could do a test with it. So that is what we did. We took our own tester and attached it to an MLX, to a system from SURFnet. That then went onto a segmented wave of 1,650 kilometres to Geneva. There was another box there, a Brocade MLXe-32 with 10 gig cards looped, so we sent the traffic from Amsterdam to Geneva and looped it back over the wave to Amsterdam, and checked that we didn't have any loss.

So this is the equipment we used: Brocade MLX and MLXe, Ciena OME 6500, and LR4 optics.

The results: we did a 10 gig Ethernet test for 24 hours where we didn't see any loss on the links. We also did a test at 100 gigabit line rate for 24 hours. There, we saw no loss on the long-distance part, on the link between Amsterdam and Geneva; we did see some loss on the box in Geneva, but this was within the IEEE bit error rate specification of ten to the minus 12; I think it was at 10 to the minus 13 and 14.

So, our next plans are to deploy 100 gig in our backbone, provided that we find a good extender solution, a production-ready semiconductor optical amplifier. We are also looking into testing with different vendors. So far we have only tested Brocade to Brocade, and we also want to test this with other vendors to see if that works properly as well.

Apart from that, we are ready to accept customers. As I said, we already have Limelight in the production VLAN, and we are hoping for new customers. Prices for 2012 are €9,000 for a ten by ten and €10,000 for an LR4. And the ten by ten comes at the price of six times a 10 gig.

So yes, that is it. Any questions?

(Applause)

CHAIR: Thank you, Martin. Questions? Thank you.

CHAIR: May I invite Wolfgang from the RIPE NCC to talk about RIPE RIS and some prefixes, I guess.

WOLFGANG: This presentation was supposed to be given by Eric; he has to work in the tech team, so I will present a DoS of some other kind to you today. We are going to talk about RIS resource allocations.

Let me explain what they are used for. The majority of the resources we are talking about are used for routing beacons at this point in time. Some of you might be familiar with those routing beacons; you can use them to research BGP propagation times. They are split into two parts: we use some of them as steady anchors and the other part as actual beacons. So from each of our RIS RRCs, of which at the moment 14 are active around the world, we announce one /24 as an anchor, a steady prefix, and the beacon that goes with it is withdrawn and re-announced on a predefined schedule.
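A minimal sketch of what such a predefined schedule looks like; the prefixes are documentation addresses and the two-hour cadence is an illustrative assumption, so check the RIS documentation for the actual timetable.

```python
from datetime import datetime, timezone

ANCHOR = "198.51.100.0/24"  # hypothetical: announced continuously
BEACON = "203.0.113.0/24"   # hypothetical: flapped on a fixed schedule

def beacon_should_be_announced(now: datetime, period_hours: int = 2) -> bool:
    """Announce during even periods, withdraw during odd ones."""
    return (now.hour // period_hours) % 2 == 0

now = datetime.now(timezone.utc)
state = "announced" if beacon_should_be_announced(now) else "withdrawn"
print(f"{ANCHOR}: announced (anchor is always up)")
print(f"{BEACON}: {state}")
```

Comparing when the beacon re-appears at different collectors against this schedule is what lets researchers measure BGP propagation times.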

Also within this resource allocation we have some RPKI announcements; at the moment there are two. One of them is validly originated and the other one is intentionally invalid. That allows people implementing RPKI to test their implementations, and researchers out there to do some analysis on this.
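For illustration, this is the kind of origin validation that the valid/invalid pair lets people exercise; the prefix, ASNs and ROA here are hypothetical documentation values, not the actual RIS announcements.

```python
from ipaddress import ip_network

ROAS = [
    # (prefix, max_length, authorised origin ASN) -- hypothetical ROA
    (ip_network("203.0.113.0/24"), 24, 64511),
]

def validate(prefix: str, origin_asn: int) -> str:
    net = ip_network(prefix)
    covered = False
    for roa_net, max_len, asn in ROAS:
        if net.subnet_of(roa_net):
            covered = True
            if net.prefixlen <= max_len and asn == origin_asn:
                return "valid"
    return "invalid" if covered else "not-found"

print(validate("203.0.113.0/24", 64511))  # valid: matches the ROA
print(validate("203.0.113.0/24", 64666))  # invalid: wrong origin AS
```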

The resources we are talking about are three blocks here, two IPv4 and one IPv6. The first one, a /19, is fully used by the RRCs at this point in time, and the second one is intended to be used for additional RRCs and currently holds those two /24s for the RPKI announcements. For IPv6, we have a /32, which has the same sort of allocations but all split down into /48s, and you can see that they have been allocated over a couple of years, with the newest one in 2008 and the oldest one in 2004.
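
To make those sizes concrete, some simple prefix arithmetic (placeholder blocks of the same sizes as on the slide): a /19 holds 32 /24s, comfortably covering 14 anchors plus 14 beacons, and a /32 holds 65,536 /48s.

    import ipaddress

    V4_BLOCK = ipaddress.ip_network("198.18.0.0/19")  # placeholder /19
    V6_BLOCK = ipaddress.ip_network("2001:db8::/32")  # placeholder /32

    # Count the /24s that fit in the /19 by enumeration...
    print(sum(1 for _ in V4_BLOCK.subnets(new_prefix=24)))  # 32
    # ...and the /48s in the /32 arithmetically (2 to the power of the difference).
    print(2 ** (48 - V6_BLOCK.prefixlen))                   # 65536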

Well, what is the problem with this? The problem is that, due to a policy change, experimental resources, and we have run these as experimental allocations so far, can only be extended in exceptional circumstances, and since this is a production service that is out there and has been used over the years, it's hard to see that as exceptional circumstances.

Now, strictly speaking, by RIPE 530 paragraph 6.3, we are not actually eligible for a permanent allocation for this, because we don't use those IPs. We have to announce a /24 in IPv4 and a /48 in IPv6 to have those prefixes globally visible, but since we don't use the full 256 IPs in the IPv4 prefixes, we are not eligible for the permanent allocation here.

Now, again, the affected services: two I already mentioned, and there is another one, the debogonising effort, which is slightly affected here because we would lose the fixed anchor, which gives you one prefix that should be definitely reachable, from those allocations. So we have been thinking about what path ahead we should take here. We have services that are obviously used by people and appreciated, and there is one way ahead that we can use, which is specified in the RIPE 476 document at paragraph 5. It says that we can approach the plenary here and get the plenary's approval for those special allocations, and that is what I am doing now. So I have explained to you what those allocations are, and what I am essentially asking for is to approve the use of reasonable address space assignments to the RIPE NCC for the purpose of operating routing beacons. A question that goes with that: we have a /32 IPv6 right now, of which we of course use only quite little space, so there would be the opportunity to shrink that down a little, which is doable. The question here is just whether that is something we should do or... yeah. Does it matter, basically?

And that is actually the very brief presentation on this topic, and quite a different kind of presentation for a plenary, I guess. So with that, I would appreciate feedback from you on this, whether you approve of this or have objections to it. We also have an article on RIPE Labs, so if you have some more thoughts on this later, you are also very welcome to comment there. Other than that, that brings my presentation to an end.

CHAIR: Thank you. Any questions now? Or do you expect them later on?

DANIEL KARRENBERG: RIPE NCC. So this was the presentation, and we are asking the community here for an approval to do this, according to the policy. Usually, the way we do this is to present it at some point, then give some time for discussion, maybe also in the Routing Working Group and elsewhere, and this will come up on Friday morning again for a formal, or well, formal/informal consensus vote. So you don't have to say yea or nay now. It's Friday morning.

SPEAKER: That is for feedback. As far as I can see, there is nobody currently at the mics, but equally, comments on the RIPE Labs article are accepted as the same input.

CHAIR: Just wait for Friday and people can think about it.

SPEAKER: Yes. I think Marco also has some more.

MARCO HOGEWONING: Thank you. Before I start, there are two things I want to make particularly clear. First, I am going to talk about a separate request; this is by no means related to the thing which Wolfgang presented, which is RIS. Second, I want to make very clear that I am presenting this as an employee of the RIPE NCC, so don't let the yellow badge distract you.

What this is about: this is a separate resource request, which we deem business operations. From this perspective, the RIPE NCC has members, we have to refer to them as LIRs, but from a business perspective these are customers; customers come to us and we provide services to them. Services in the form of, for instance, the LIR Portal, which is the most common one, but we also have background processes. Now, every once in a while an issue comes up with one of the systems that requires us to take the customer's seat. It would be a benefit to us if we could actually act as a customer to test our own systems. For instance, somebody phones up and says, I have a bug here, if I push this button my screen turns purple, and we would like to see if, when we push the same button, the screen goes purple.

Of course, we operate test environments, we have staging systems, but somewhere, at some point in time, you might want to verify that the production system actually matches your test environment. So what we would like to request from the plenary at this stage is to allow us to operate a fake LIR that can mimic the standard behaviour of what we would call a standard customer or standard LIR. This, of course, benefits software development, but it can also benefit other departments like customer services: if somebody comes up with a question, we can actually replicate what people are asking about, and for training we can use it to try something out or demo something.

What is the actual request about? Of course, this LIR also needs resources, so what we would like to ask the plenary for this purpose is to allow the allocation of resources. These allocations are by no means covered by Address Policy, because they will not be used on the Internet. They will live in our technical systems, but they won't be used on actual systems on the Internet, so you won't see these addresses appear on the Internet and you can filter them.

To mimic most of the customer sessions, what we would like to ask for is a /21 IPv4 PA allocation, a /32 IPv6 allocation, and an AS number. For the purpose of the AS number, we can do 32-bit. As we are not using these resources, we can work with tainted or toxic blocks; any /21 is good enough. The thought came up to use a tainted /21 that some router vendors still consider bogon space, so such blocks are hard to use on the Internet, but we could take something that is really tainted and blacklisted. Of course, this LIR will only exist technically, so it won't influence billing algorithms and, of course, it's not a legal member, so it won't execute voting rights. It's just there in the technical systems for us to take a look at what the customer sees and actually have that experience, in a sense eating our own dog food. Of course, these blocks will be documented as being used for this purpose and clearly marked, so people can make sure that these addresses get filtered at their borders and nobody can use them.
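
To illustrate the filtering point: once the blocks are documented, an operator could drop any announcement falling inside them, just like other bogons. A minimal sketch with placeholder prefixes standing in for whatever blocks are eventually documented:

    import ipaddress

    # Placeholders for the documented never-routed test blocks.
    FILTERED_BLOCKS = [
        ipaddress.ip_network("192.0.0.0/21"),   # stand-in for the /21
        ipaddress.ip_network("2001:db8::/32"),  # stand-in for the /32
    ]

    def should_reject(prefix_str):
        """True if a received route falls inside one of the filtered blocks."""
        prefix = ipaddress.ip_network(prefix_str)
        return any(
            prefix.version == block.version and prefix.subnet_of(block)
            for block in FILTERED_BLOCKS
        )

    print(should_reject("192.0.2.0/24"))    # True: inside the filtered /21
    print(should_reject("203.0.113.0/24"))  # False: unrelated space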

That is about the story. Again, as Daniel explained the procedure, this is us just bringing it up for your consideration. If you have questions now or later on during the week, please talk to me or any of my colleagues from Registration Services, and this will come up again on Friday for a formal resolution. Are there any questions so far?

AUDIENCE SPEAKER: Olaf. I am an LIR and I have customers and I already have IP blocks to play with, and now I go to the RIPE NCC and say, I want to mimic one of my customers, can you please give me a /21? What would the RIPE NCC say at that point? It would look at its policy and say, well, you know, it's all intended to be routed on the Internet, and you would say, we are not actually going to route it on the Internet, we are just going to use it for some tests. I am making this point because I believe that the RIPE NCC should be holier than the Pope. Well, that is pretty easy nowadays. No, but there are already many people who look at the system of IP allocations from an outside perspective and test whether it's fair and whether it's honest, and that is something the RIPE NCC should take into account. And I think that this is turning into an exceptional policy, because what you are doing here is asking the RIPE community to turn it into policy so that an exception can be made, because if this had gone to the arbiters, the arbiters would clearly have said no, this is not within policy. So for the people who have to make the wise decision next Friday when this comes up, I ask them to take that into account. Do you want to make this exception, and what is the precedent that is being set?

MARCO HOGEWONING: Thank you. This is a valid point, and it has come up; exactly the reason why I am here is that this is not covered by Address Policy at the moment, of course. To answer the question of what happens when this comes up for an LIR: if you have spare space in your systems, technically it can be marked so you could use that. I have an operational background, and I never came across a situation where we tested customer assignments with a /21. I have seen occasions where, as an LIR, as a member, we had various accounts that had resources allocated to them, like a single IP, and we had live email accounts to test. In a sense, the easy thing would be if the RIPE NCC became its own customer, if we just pulled out a credit card. That yields two problems: one, for legal reasons we obviously can't become a member of ourselves, and again, it would not be covered by Address Policy.

We do believe that, to provide these services and meet customer expectations, it would be beneficial to have, in this case, this single exception for the RIPE NCC.

AUDIENCE SPEAKER: But the operator that you are referring to, that would do that in its own network and test, you know, email addresses and what have you and infrastructure, would do that out of its already existing address pool.

MARCO HOGEWONING: Yes.

AUDIENCE SPEAKER: Again, this is where you make an exception; the RIPE NCC has an address pool.

MARCO HOGEWONING: We did consider using addresses already assigned to the RIPE NCC for this purpose. There are situations, for instance if we want to test reclaim procedures, where using existing address allocations that are in active use could harm other operations of the RIPE NCC. We can't easily just pull or revoke address assignments that are in use on the office network without the risk of damaging the office network and damaging operations.

CHAIR: I think Wilifred, Randy and then Jabber.

WILIFRIED WOEBER: Olaf has already used this magical word, "arbiters", so for a moment I will wear my arbiter's hat for an observation. I don't want to express any preference pro or con this request by the RIPE NCC; I would just like to give you the feedback that I consider this approach a much better and cleaner way to get the resources you want to have, or not, than the other attempt that was submitted a short while ago. So I think this is the right way to go, and the community should actually say consciously, yes, we are in favour, or we are not. That was the observation. And the second thing is more like a technical question:

You did present these two requests separately, and I guess you did it on purpose. However, I am just curious, technically, whether you thought about lumping the two things together, removing these artificial restrictions you are proposing, like not being in the routing plane and that sort of thing. Did you think about going all the way and creating sort of a RIPE NCC operational testing, whatever, thingy, which is actually a full-blown member on the Internet? Just curiosity. Thanks.

MARCO HOGEWONING: As I explained earlier, we did consider this; we thought about merging these requests, or using the space Wolfgang is asking for, for this purpose. Like I said, there are situations where we may run into revocation, but also, for instance, if we want to test something that has to do with reverse DNS, that could potentially harm the operations Wolfgang is talking about, at which point we would disturb the measurements. That is why we want to have two clearly distinct address blocks, so we know that whatever we do with this won't harm any of the Internet operations or any of the other ongoing NCC tasks.

RANDY BUSH: It's hard to be a remote participant.

AUDIENCE SPEAKER: There are two questions from Jabber. I will start with the second one. It's from a user from AS 29169: if these are for internal testing purposes only and not visibly routed on the Internet, assuming they are contained within RIPE's own network, why not use RFC 1918 space and a private ASN for this?

MARCO HOGEWONING: To be clear, these addresses will never live on a network, not even internally. They will only live in our software systems, in our databases, so they will never be configured on a computer.

We did think over the use of RFC 1918 space. Introducing RFC 1918 space into these processes and databases would mean we are building exceptions for it. We would have to teach the software that it's OK to see RFC 1918 space, at which point, whenever we start testing with this, we always have to ask ourselves the question: are we testing the exceptions we built in, or are we actually testing the software? That is the main reason for not using RFC 1918 space for this. As you can see in the RIPE test database, we are using RFC 1918 space for a lot of tests, but in this particular case, where we need to test the production environment, we want to be as close to a member, as close to a customer, as we can be, and that means having an honest PA block, a regular PA block that is no different from anybody else's.
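
That trade-off can be illustrated in a few lines (an editorial sketch, not the NCC's actual code): once production validation rejects RFC 1918 space, exercising the system with private addresses means adding a test-only escape hatch, and then every test has to wonder whether it exercised the software or the escape hatch.

    import ipaddress

    RFC1918 = [ipaddress.ip_network(n)
               for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

    def validate_assignment(prefix_str, allow_private_for_tests=False):
        """Mimics a production sanity check on a prefix being registered."""
        prefix = ipaddress.ip_network(prefix_str)
        is_rfc1918 = prefix.version == 4 and any(
            prefix.subnet_of(n) for n in RFC1918)
        if is_rfc1918 and not allow_private_for_tests:
            raise ValueError(f"{prefix} is RFC 1918 space, not registrable")
        return prefix

    validate_assignment("203.0.113.0/24")  # public-style input passes as normal
    validate_assignment("10.0.0.0/21", allow_private_for_tests=True)
    # The flag above is exactly the kind of built-in exception that leaves
    # you testing the exception rather than the production code path.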

AUDIENCE SPEAKER: The other question from Jabber, let me try to find it, it keeps scrolling up, is from Martin: "I would just like to know what they are concretely going to use this address space for."

MARCO HOGEWONING: Mostly for software testing and helping customers out. It can be questions, it can be troubleshooting, or finding bugs. These addresses, again, will just live in our databases, in our software systems, to be used to test our processes, for instance reverse delegation, but also revocation and the address assignment systems. Basically, we want to be able to push every button that is available in the LIR Portal, and we can use this address space for that.

RANDY BUSH: IIJ. Coming from another rule-obsessed culture, I am greatly amused to see you tangled up in your own bureaucracy. I think you might also want a certificate for that space.

MARCO HOGEWONING: We could use this space also for certification tests.

RANDY BUSH: Don't you have an experimental Address Policy under which the first request is covered?

MARCO HOGEWONING: It might be that if we did it your way, it would become valid. We chose to be as open, transparent and clear about this as we can; that is why we decided to just ask the plenary in the plain old way: this is what we want to use these addresses for. So we are trying not to sneak this in via the back door.

RANDY BUSH: So the whole thing is about the two-hat problem, left hand, right hand?

MARCO HOGEWONING: Yes.

RANDY BUSH: How much fuss can we make about it? I think the operational utility of these two things is obvious; surely you don't need an amendment to the Constitution?

MARCO HOGEWONING: Thank you.

Wolfgang: A last remark: the RIPE Labs article I mentioned was for feedback on my request. I believe you guys don't have one for yours at the moment, but I hear that if there is more feedback on this afterwards, we will be willing to take that at the RIPE NCC info desk out there, and the same goes for my request, just so we don't confuse the comments on the two of them.

CHAIR: Thank you, Wolfgang. That brings us to the end of the first session of this plenary. Thank you to all of the presenters. The next important point on the agenda is the coffee break, and the next session starts at around 11, I guess. Thank you.

LIVE CAPTIONING BY AOIFE DOWNES RPR

DOYLE COURT REPORTERS LTD, DUBLIN IRELAND.

WWW.DCR.IE


