These are unedited transcripts and may contain errors.

DNS Working Group, 3rd November, 2011:

CHAIR: So good morning everyone. Now, I'm awake at least. This is going to be the first of two sessions of the Domain Name System Working Group. If you're deeply interested in address policy, now is your time to change to the room next door, but I would recommend you think about that twice and have a look at the agenda; we'll go through some of these items. My name is Peter Koch. I'm one of the three co-chairs; the others are Jaap Akkerhuis and Jim Reid over there. If you have concerns or ideas, please come to us and talk to us during the coffee break, take the chance, use the microphone. Speaking of which, this is the same every time: the whole session is recorded and videotaped for the purpose of European security and the benefit of the remote people. So if you have something to say or ask, come to the aisle, state your name and, if you wish, your affiliation, and talk clearly into the microphones. Which brings me to finalizing the agenda. There are some changes that the Working Group Chairs would like to propose to cover a bit more content. Instead of having reports from the IETF, we've volunteered Richard Barnes, who is one of the workers in the DANE Working Group, to give a short status update. And after that, Sarah Dickinson will give us an update on DNSCCM, which concerns a configuration and control protocol; we've reported before about activity in the IETF which delivered the requirements for that. After that we...

If you manage to get the laser to do the scrolling. I'm looking for the upper part of that screen. What I want is this.

Yes, okay, this gesture thing works even on this screen. Thank you. With a bit of delay, but you know, new technology. After we've had these injections into the agenda, we'll give the microphone to Wolfgang to get an update, and after that two presentations of new authoritative domain name servers. The first one will be from Lubos Slovak and the second from Peter Janssen, and both will present their new products. Then we'll probably have the coffee break and go on with the agenda in the second slot. If I don't see any additional concerns or ideas about the agenda, I would like to move to point B, which is matters arising from the previous meeting and the review of action items. And I see Jim slowly moving to the microphone.

AUDIENCE SPEAKER: Good morning, everybody. Moving very slowly. This is far too early in the morning for me. There is one item that has been dangling over the working group for a while, and it's all my fault. Some of you might remember, going back now probably about five years, we set up a task force about an interim trust anchor repository, and this was to do with distributing DNSSEC keys in the event IANA or some other trusted party didn't do something. By the time of the following RIPE meeting, the need for the task force had pretty much gone away, because IANA had created a trust anchor repository, which at that time was an interim kind of thing. Since then things have moved on and we have the root signed, but we haven't killed the task force. It's still there, it hasn't done anything for a long time, and this is where we say the job is done, there's nothing more to do, and formally declare that's the end of it. Does anybody have a sense that that's a good idea or a bad idea, or does anybody have a strong feeling we should keep it alive?

CHAIR: Probably time for the morning gymnastics. Would everyone raise one hand please and see that you have hands and are listening. Would you, thank you. Wait a second. Now, we're doing the real exercise. That was the fake one. So, please, those of you who think that the task force has done their job and we can dissolve the task force, please raise your hand now. Thank you.

Anybody opposed? None. Anybody want to abstain? Oh, three. Okay. So that's unanimous with three abstentions. Thank you, thank you, Jim.

Jim, did we actually post the minutes? Yes, we did. Did anybody read the minutes? Does anybody have comments or clarifications on the minutes? One, two, gone, I guess. So I guess we can formally adopt the minutes, and I would like to move that we accept the minutes of the RIPE 62 DNS Working Group session as posted to the mailing list. Come on, be awake. At least somebody say "I second."

Thank you. Who's in favour of adopting the minutes? Great. Anybody opposed? Anybody abstain? No. Hereby unanimously approved. Thank you.

Review of action items: that can be a short walk-through, because there were no new action items arising from the previous meeting and everything else was covered before.

And we can immediately jump into the replaced agenda item C, which is Richard Barnes, whom I would like to invite to the microphone to give an update on the DANE working group in the IETF.

Richard Barnes: So good morning, everyone. My name is Richard Barnes. The Chair has asked me to give a quick update on DANE. It's about adding additional security features to TLS, the security protocol that secures a lot of the web and a lot of other applications such as VoIP, using DNSSEC. So, right now, most of what we have in terms of adding security, how we do authentication, especially in TLS, is based on PKI: you have some authority that is not part of the DNS, in almost all cases doesn't know about the DNS, and does some basic verification that someone holds a name before issuing a certificate. Here is an example. The idea of DANE is that we should be able to use DNSSEC to enable domain holders, in this example, to make some statement about their security properties, what public keys they have. The ultimate goal is to bind a public key to a domain name, so that I can know that I'm talking to the holder of this domain name. So we can establish bindings through PKIX, or corroborate those bindings through DNSSEC. So the working group went through and tried to organise its thoughts by writing a use cases document, which was published as RFC 6394. They came up with three different statements to make. The first one is CA lock: if I've gotten a certificate from some CA, I should be able to assert that no other CA should issue certs for my domains. This is pertinent because Gmail and the other services that were attacked already had certificates from another CA, and if these records had been present, relying parties would have known not to accept the ones that had been issued without authorisation. Similarly, with the cert lock use case, a domain operator can say: you should trust this specific cert as opposed to another one. The one people get excited about is using DNS to assert certificates directly.
Instead of having PKIX, you can put self-signed certificates in the DNS as trust anchors and, you know, have the trust established through DNS, and relying parties would trust that certificate without trying to chain back to a trusted CA.

So there's an initial protocol that's in the works right now; the name of the document is there at the top. The idea is to have a resource record with a number of fields that allow you to say: I'm making this statement about TLS, binding this cryptographic material to this name, and here is the signature. The values in the usage field map to the use cases here; there's a registry, so you can say I'm provisioning this cert to say it is the CA under which all my certs should be issued, or it's the certificate itself and you should trust it regardless of who issued it. You specify a usage in that field, and you specify some way for a client to decide: when I get this cert from the server, how do I match it against the DANE record? Do I compare it based on the whole cert or just the public key? Do I hash it, and with which algorithm? Finally, you have some cert data at the end. Where do we stand? We have the use cases document; the protocol document is about in this form, starting to get fairly well fleshed out, starting to close issues. This would be a good time for folks to review the documents and submit comments to the working group mailing list. If there are comments here I'll take them, or find me offline. That's a quick update.
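The field layout Richard describes (a usage, a selector for whole cert versus public key, a matching type for exact versus hashed comparison, and the certificate data) can be sketched as follows. This is an illustrative Python fragment based on the description above, not the actual record format or any real implementation; the constant names are assumptions for clarity.

```python
import hashlib

# Selector: which part of the certificate the record refers to.
SELECTOR_FULL_CERT = 0   # the whole DER-encoded certificate
SELECTOR_PUBKEY = 1      # just the public key

# Matching type: how the certificate-data field is derived.
MATCH_EXACT = 0          # raw bytes
MATCH_SHA256 = 1         # SHA-256 hash of the bytes
MATCH_SHA512 = 2         # SHA-512 hash of the bytes

def cert_association_data(cert_bytes: bytes, matching_type: int) -> bytes:
    """Derive the certificate-data field of a DANE-style record from the
    selected certificate bytes (whole cert or public key)."""
    if matching_type == MATCH_EXACT:
        return cert_bytes
    if matching_type == MATCH_SHA256:
        return hashlib.sha256(cert_bytes).digest()
    if matching_type == MATCH_SHA512:
        return hashlib.sha512(cert_bytes).digest()
    raise ValueError("unknown matching type")

def matches(record_data: bytes, presented_cert: bytes, matching_type: int) -> bool:
    """Client-side check: does the cert presented in the TLS handshake
    match the record retrieved (and DNSSEC-validated) from the DNS?"""
    return cert_association_data(presented_cert, matching_type) == record_data
```

For instance, a relying party that fetched a record with matching type 1 would hash the server's certificate and compare the digest against the record's data field.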

CHAIR: Any questions?

Wolfgang Nägele: There seems to be some confusion about the current implementation in Google Chrome; the stable release has some implementation, but it's not based on this draft?

Richard: It's based on one of the initial proposals that's a predecessor, it's conceptually similar but differs slightly.

Wolfgang: I'm referring to the current implementation, and it requires you to fold the DNSSEC chain into an extension on the server side and continuously refresh that. Is that still part of the protocol?

Richard: That's never been part of the protocol, that's a separate concept that the Google folks are putting forward. That's not something considered in DANE. It's come up a little bit in TLS, possibly to extend TLS to carry DNSSEC information. But DANE is focused on defining the record format that we would use in a system like that.

AUDIENCE SPEAKER: I'm going to ask: What is the current status of plans and known rumours about browser support for use case number three, it was two actually?

Richard: I know Firefox, in particular, have been involved in the working group, they're tracking things, and, as Wolfgang mentioned, Chrome has prototyped it and has in its stable release an implementation of use case number two. So the browser vendors are definitely interested. They look at it as increased security for their product and their users; they're trying to be on top of it.

AUDIENCE SPEAKER: There's no push back or political lobbying?

Richard: There have been active contributions from that community.

CHAIR: Can I get a quick sense of the room. How many people were aware of this effort before this morning? That's pretty cool, roughly a third or half of the room.

There's not only Richard Barnes here: I would like to point to one of the Working Group chairs of that group, and Jakob sitting there, being one of the editors of the protocol document that Richard asked you to review. They will probably also be available for further insight and discussion. And Richard himself. Thank you, Richard. (Applause)

CHAIR: That should bring us to the other previously unmentioned addition to agenda item C, which is Sarah Dickinson giving a presentation and live demo.

Sarah Dickinson: So thank you to the Chairs for accommodating the talk. I'm going to talk about DNSCCM; it's a progress report. My name is Sarah Dickinson, and Peter was spot on, the company name is Sinodun. DNSCCM stands for DNS configuration, control and monitoring. It's a software tool designed to provide those three functions. Behind it is NSCP, a single cross-platform and cross-implementation protocol for name servers. The motivation behind it: clearly, for DNS, high availability is a desirable thing, and one way to achieve that is genetic diversity. However, there are lots of things... hold on a second.

Okay. So the idea of NSCP is that it's a cross-platform, cross-implementation protocol, because genetic diversity is a good thing, and there are many name servers out there that look and quack like ducks but are difficult beasts to control and manage. One of the goals is to mitigate the operational complexity you get with that genetic diversity and make everything look like a plain duck, easier to control. As Peter mentioned, the Working Group has been looking at NSCP for a few years now; there's a requirements document and also an associated draft available, and the draft proposes using a combination of NETCONF and YANG for the solution. What does the data model side of that solution look like? It uses YANG, which is a data modelling language, to define a data model. So this is the model we implemented in DNSCCM, and the idea is it will become version 3 of the draft. It's a little different from what's in the current draft. The main concepts are that you define peers. Peers are any system that you talk to in any capacity. You can group your peers logically. There's a view. There's a set of zones, and in each zone there are roles to which you assign a single peer group. One of the nice things about YANG is that it gives you referential integrity. You also define RPC calls, and at the moment we have five implemented, which are the basic control statements for name servers.
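The referential integrity mentioned above (a zone role must reference a defined peer group, and a peer group may only contain defined peers) can be sketched in a few lines. This is an illustrative Python fragment with hypothetical field names, not the actual YANG model or DNSCCM code.

```python
def check_references(config: dict) -> list:
    """Return a list of dangling references in a DNSCCM-style config:
    peer groups must contain only defined peers, and zone roles must
    point at defined peer groups."""
    errors = []
    peers = set(config.get("peers", []))
    groups = config.get("peer_groups", {})        # group name -> list of peer names
    for group, members in groups.items():
        for member in members:
            if member not in peers:
                errors.append(f"peer group {group!r} references unknown peer {member!r}")
    for zone, roles in config.get("zones", {}).items():   # zone -> role -> group name
        for role, group in roles.items():
            if group not in groups:
                errors.append(f"zone {zone!r} role {role!r} references unknown peer group {group!r}")
    return errors
```

With peers ["homer", "master"], a group {"masters": ["master"]} and a zone role pointing at "masters", this returns no errors; pointing a role at an undefined group yields an error, which is exactly the class of mistake a YANG model with referential integrity rejects at commit time.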

On the NETCONF side, I'm not going to go into too much detail. The features are that it's secure, the messages are human readable, and it's an extensible protocol because it has the concept of capabilities. I'm just going to illustrate one aspect, which you'll see more of in the demo. One job of a NETCONF server is to manage a set of conceptual configuration databases, and I'm showing an example here of using two of them, a candidate database and a running database. The way we use it is that edits from the client are applied to the candidate database, and on commit those edits are transferred to the running database. The running database holds the active configuration: it always reflects the current state of the device that you're controlling. It can also contain state data, usually statistics; you can see it but can't configure it. And you can save your configuration out to disk (the NETCONF term is NV storage), so if you have to restart your server you can recover your previous configuration. Some other nice features: database locking, confirmed commit, and it does validation for you.
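The candidate/running behaviour described above can be modelled very simply; the sketch below is an illustration of the NETCONF datastore concept, not the real implementation.

```python
import copy

class ConfigStore:
    """Toy model of NETCONF's candidate/running configuration databases:
    edits go to the candidate; commit copies the candidate over running."""

    def __init__(self, initial: dict):
        self.running = copy.deepcopy(initial)    # active configuration
        self.candidate = copy.deepcopy(initial)  # staging area for edits

    def edit(self, key: str, value):
        """Apply an edit to the candidate; running is untouched."""
        self.candidate[key] = value

    def commit(self):
        """Make the staged edits active."""
        self.running = copy.deepcopy(self.candidate)

    def discard_changes(self):
        """Throw away uncommitted edits."""
        self.candidate = copy.deepcopy(self.running)
```

Creating a peer group in the candidate leaves the running database unchanged until the commit, which is exactly the behaviour shown later in the demo.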

So where are we with DNSCCM? Well, we're doing the implementation of NSCP with the support of NLnet. Although NSCP is still in draft stage, we believe doing an implementation is the right thing, partly because the draft is a bit of a slow burn, with not that much feedback; we need to have something for people to play with. Also, the name server implementations pre-date the control protocol, so we have a job of reverse engineering in a lot of cases, and that's quite hard to do in theory; that's why we're going ahead with an implementation. We're currently developing version 1.0. It's built on Yuma tools, which gives us 90% of what we need to handle YANG data models. We're implementing it for authoritative name servers, and the specific ones are NSD 3 and BIND 9. What we have at the moment is a prototype, not production ready, but the plan is for an alpha release towards year end. To dig into the architecture in a touch more detail: what you see here, the dark blue blobs, are what Yuma tools give you; on the right is the NETCONF server and the YANG handling. The DNSCCM SIL layer gets generated based on the YANG model, and is composed of user callbacks for all the events: when you validate nodes, edit nodes or invoke a call. The light square is the DNSCCM library, which is where we implement this. So it's effectively implementing a name server interface which correlates with the data model I described earlier. One thing to take away from this is the data flow: the way we've done it, the NETCONF server is the owner of the configuration data. When the client edits, that's first applied to the NETCONF databases; when that succeeds, it calls through to the user callbacks, and from our DNSCCM library we do a complete overwrite of the name server configuration file, which can then be reloaded into the name server. When you pull up a client and issue a command to see the configuration, it's not digging into the name server; it's looking directly into the NETCONF databases.

So I am going to attempt a demo, my second one this week. Not sure how it's going to work out; I got away with the first one. I've got a master running in the background and two test servers: Fred, who happens to be NSD 3, and Homer, running Simpsons, who happens to be BIND 9. The arrow represents the relationship, and I'm going to use the clients to talk to both servers.

So, very quickly, to show you how I'm going to initialize the name servers: this is a startup file that I prepared earlier, and it's a representation of the YANG data model. This is for the Fred server. So I have the peer Homer defined, I've got the peer master defined, I've got a couple of peer groups defined, which you'll see later, and I have the basics of a view defined: just a name and an IP address.

So this is Fred, and to start up my NETCONF server, I point it to a configuration file. I've got some information coming out here so you can see what's going on behind the implementation. We're reading in from our XML file, we write that out to the NSD config file, and we do rebuilds. So, moving over to the client side, I'm going to fire up yangcli here, and I've got some connection parameters: I'm using a password, but public/private keys are also supported. You get a lot of information which is NETCONF specific, which in future we'll screen out. First I'm going to check my service status; that sends a call and I get a notification back, and this part of the notification is defined in the YANG data model. That can be extended; it's quite brief at the moment. It shows me my server is running. I'm going to use the next command, which gets the data in the candidate database, as I said earlier. The top two sections are the state data, used for statistics display and some information here about what particular type of name server we're using. But the configuration data looks very similar to what's in the XML file: I've got a couple of peers defined, a couple of peer groups, and I've got the view as well. I'm firstly going to manually edit this and create a new peer group and put a peer in it. So the command I use is create, and then the path to the node I want to edit. I could create the peer group first and then the peer, but this is a shortcut way to do it. I say create a peer, and I get prompted for the peer name. I'm going to type master. Then it prompts me for the name of the peer group I want to add it to. If I specify an existing one here, master will be added to it; I'm going to specify a new one, and that gets created for me. So I've done that, and now I use a slightly different command to look into the database. Now I see I have a third peer group, as I just specified. If I look in my running database, it's not there yet.
But if I go ahead and do a commit, the commit itself is an RPC command. I get a notification back that that succeeded and I now look in the running database and master has appeared. I'm going to do some further configuration but I'm going to do it with a script.

So I'm not sure if you can see that. Okay. Here we go. It's a bit small, but hopefully what you can see is that the syntax is similar to what I did manually. There's an operation, create, which could also be an edit or merge; there's a path to a node that I want to alter; and there is a value, which is the new value to put in. So I can simply run that script, and I get an RPC okay for each of the individual lines that were in there. I could equally have defined that in XML and stored it in the database. I run that configuration script, I'm again going to do a commit, and have a look at my running database, and now I have some zone data in there. The top zone is the one in which Fred is the master, and I'll point out the master zone file location. This is just a path to a file; we intend it to become a URI so you can get zone files through other transfer mechanisms. And ultimately we'll extend it for dynamic updates, but it doesn't yet support putting data directly into the zone.
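A configuration script of the kind shown, a list of operation, path and value triples applied to a tree of configuration nodes, can be modelled like this. The path syntax and error handling here are simplified illustrations, not yangcli's actual grammar.

```python
def apply_edits(config: dict, edits: list) -> dict:
    """Apply a list of (operation, path, value) edits, where path is a
    slash-separated route into a nested dict, e.g. 'zones/example.com/master'."""
    for op, path, value in edits:
        *parents, leaf = path.split("/")
        node = config
        for part in parents:
            node = node.setdefault(part, {})   # create intermediate nodes
        if op == "create" and leaf in node:
            raise KeyError(f"{path} already exists")
        if op in ("create", "merge"):
            node[leaf] = value
        else:
            raise ValueError(f"unknown operation {op!r}")
    return config
```

For example, a two-line script creating a zone's master role and merging in a zone file path would build the nested structure in one pass, mirroring the RPC-okay-per-line behaviour of the demo.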

So right now, what I've done is rewrite the config file, so the zone isn't being served yet. From my client I'm going to do the restart, and in the background you'll see some things here, so it's actually restarting. And again I get a notification: my server is restarting and serving the zone. That's the basis behind it. I realise I'm a little short on time. There's half a dozen commands for doing the same thing on Homer; you do it the same way, do the commits, restart the server, so it doesn't matter from this perspective what flavour of name server you've got.

So, to talk about where we want to go, because this is clearly just a prototype: there are things we can do to the yangcli client. We'd rather it looked like the front end to a name server than a rich and full NETCONF tool, because there's stuff there that you don't need. Clearly this would lend itself nicely to a graphical interface; you could do monitoring of multiple name servers, cool visualisations, because you're getting statistics, and group management. And there are lots of nice ideas for the use cases on that side of it. In terms of the data model, it's basic at the moment: there's not a lot there, no DNSSEC, and we'd hope to extend it for resolvers. Another thought we've got is offering user customisation of the implementations of some of those RPC calls, so the user can define what happens when they issue, say, a restart, and that would make the system more flexible.

Finally, I would like to appeal for feedback, for use cases or for requirements. There's a project website available there, or contact me directly. This is a sort of tricky piece of work and the devil is in the detail; no one likes that feeling of thinking they've missed something. So I thank you for your time and attention, and I'll take any questions.

CHAIR: Thank you, Sarah. Questions? Insights? Volunteers? Beta testers?

AUDIENCE SPEAKER: This looks very useful. I won't take meeting time up but I'll grab somebody during coffee.

CHAIR: Thank you. I guess that's it. Thanks, Sarah, for that interesting demonstration.


CHAIR: With that, we are at the traditional report from the RIPE NCC, and it's your turn, Wolfgang.

WOLFGANG NÄGELE: Today, our traditional updates. Seeing the room is full, the Address Policy Working Group is apparently not that interesting anymore. Moving on. Some new stuff here: first and foremost in our DNS updates, the dedicated DNS department within the RIPE NCC doesn't exist anymore. It was merged at the beginning of this year into what we call the Global Information Infrastructure: all the information services that the RIPE NCC runs, in terms of IT, that are not in our traditional network within Amsterdam. I have a listing here of a couple of those services. I think most of you are familiar with them: Whois, TTM, DNSMON, something more exotic, the INRDB, RIS, and a storage and analysis infrastructure. I'm going to give a more detailed presentation on this in the working group this afternoon. In more detail, the main DNS services that we're operating for the community: the LIR reverse DNS, secondaries for other RIRs to support stability, K-root, since 1997 by now, and F-reverse. That's new; I mentioned it in the last update. This is about consolidating the quality of the reverse top-level tree in IPv4 and IPv6. And secondary DNS for ccTLDs: we do this for ccTLDs that do not yet have the funding to be able to afford this from a commercial provider. Also the ENUM Tier 0 zone, under agreement with the ITU, and AS112.

In more detail: in 2010 we started a project to move the DNS services off of the RIPE NCC's network. They caused more than 90 percent of the traffic on that network and made the scaling of that network unnecessarily complex. So we moved all of the DNS services, or we're in the course of moving all of them, into separate autonomous systems, one in London and one in Amsterdam. The majority of the services have migrated onto it. The only big migration left is the ccTLDs; that's due to the communication with all of those ccTLDs and very long lead times in such operations.

F-reverse, briefly mentioned already: the ultimate goal here was to move the reverse zones off of the root servers, and one of the new servers is operated by the RIPE NCC alongside K-root. The RIRs plus ICANN are currently operating these services under RFC 5855, and that brings the reverse tree up to the same quality of deployment.

One part here we've been working on for a while is the early registration space. It's a little bit challenging when it comes to provisioning. There's a mechanism: if you are in the RIPE NCC region and you have early registration space, for instance with AfriNIC, you can maintain that space via us. But since not all of the RIRs have DNSSEC in operation, we have to progressively roll this out. At this point in time we have successfully enabled this towards ARIN, and we're looking forward to APNIC.

This is an update on an action point that we had together with the Database Working Group. This was a couple of things. Number one, we wanted to get rid of something that was called the 3rd octet dash notation, which allowed you to specify a reverse domain with a dash range. The implementation was confusing because it allowed you to do so only on creation, not on updates. So the decision was to get rid of that implementation on the 3rd octet. But we introduced another one on the 4th octet, with a different implementation. The reason is PI space that is smaller than a /24. So we have that dash notation now, but it's really a plain object that happens to have a dash notation, so you can do any operation that the RIPE database supports, and our provisioning systems will take care of making sense of that and doing the CNAME provisioning for it. We concluded that a couple of weeks ago by migrating all of the zones that are less than a /24 and have this functionality, or had it before (only about 60 zones), into the RIPE database, so they can be maintained by those people themselves.
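Delegating reverse DNS for a block smaller than a /24 is conventionally done with the classless in-addr.arpa scheme of RFC 2317: the parent /24 zone holds one CNAME per address, pointing into a zone named after the dash range. Below is a sketch of that idea; the exact naming convention the RIPE NCC provisioning systems use may differ.

```python
def classless_cnames(reversed_slash24: str, first: int, last: int) -> list:
    """Generate RFC 2317-style CNAMEs for addresses first..last inside a /24.
    reversed_slash24 is the reversed /24, e.g. '2.0.192' for 192.0.2.0/24."""
    subzone = f"{first}-{last}.{reversed_slash24}.in-addr.arpa."
    return [
        f"{i}.{reversed_slash24}.in-addr.arpa. CNAME {i}.{subzone}"
        for i in range(first, last + 1)
    ]

# e.g. a /26 (192.0.2.0 - 192.0.2.63) delegated to its holder:
records = classless_cnames("2.0.192", 0, 63)
```

The holder then runs (or has run for them) the `0-63.2.0.192.in-addr.arpa` zone with the actual PTR records, while the parent /24 only carries the CNAMEs.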

Some data points; nothing here has changed since RIPE 62. This is a pie chart that shows you the parents that we as the RIPE NCC have zones in, and the blue part shows zones that are DNSSEC signed, chained from the root down, and that we are able to provision our DS records into. This is pretty good; there are only a few left here. If we look further ahead, by the end of the year it should look like this. The even better part is that all three of the zones remaining here are also going to go away soon; however, we didn't get a definite commitment, so I expect this presentation to show a full pie chart by next year. Reverse DNSSEC: these are the total records in all of the zones that we provision. You can see there's a steady uptake in general. After the root was signed, there was a natural uptake here. We see that we're progressing quite well and that has been going on for quite a while now. That's good news.

K-root: there's not much to report. There's going to be a separate presentation in the DNS Working Group sessions on one event we had this year. Other than that, the operations are steady: 18 anycast instances, our normal growth rates, peak rates of 25,000 queries a second now. The other part, in-addr.arpa, peaks at about 6,000 a second; also nothing too unusual here. ip6.arpa has a higher growth rate but is still quite slow, with 2,000 queries a second at peak, and that's natural given the growth rates of IPv6 right now.

IPv6: this graph is a little bit different, with an uptake and then a little bit of a slowdown; if we look at the total trend, it's definitely picking up. This is actually as we want it to be. What this slide shows you is the TCP traffic we receive at K-root. There was some concern when rolling out the signed zone that there was going to be considerable TCP traffic and that it would be a burden. We're looking at 40 queries a second over TCP on K-root and, if you remember, we're peaking at 25,000 queries a second. That's no concern. And my suggestion is not to present this slide any more next time, because I think we're satisfied now that this just works.

Another thing that we're working on a lot right now, which was an outcome from the root signing as well: during the root signing we saw that it was very bothersome for us to do analysis on the raw data from K-root and the other systems, so one of the things we've done is develop a library for Apache Hadoop to do distributed processing on this data. I'm going to give a detailed presentation and try to do a live demo as well; if you're interested in that, you can join us then.

The last part here is about the anycast systems; those are the last things we need to move. One is for the membership: that's trivial to migrate, and we intend to do so by the end of the year. The ccTLD part is technically a trivial migration too, but it takes a long time communicating with the ccTLDs and then communicating with IANA, and it's bureaucracy that slows it down.

DNSMON: we have a new DNSMON in the works, and we decided that we wanted to put into production two of the main features that you've been asking us for, and the top feature was to have the data more realtime. I'm quite happy to tell you (if you're a subscriber to DNSMON you might have noticed; if not, you might want to log in) that now in DNSMON you can get the data at about five-minute intervals: anything that happens on the Internet, you should see in about five minutes. If you're not a subscriber, we still have the normal artificial delay of two hours in there. We improved the performance of the plot generation; DNSMON is way more responsive when you do more complicated plots. And the new user interface we prioritised a little bit lower, because we're still not satisfied that it's such a substantial improvement that we want to move along with it. We'll go back and think about what we want to do about that.

Peter also asked me to give an update from the Database Working Group on the clean-up of forward domain data; there's an action point in the Database Working Group on this. Traditionally, there were 43 ccTLDs that had been provisioning, or at least reflecting, their whois data in the RIPE database. The Database Working Group decided that they didn't want to support this anymore, and the action point to phase this out was decided a while ago. By now we have reached the stage where only Monaco and the Israeli government ccTLDs were still in the RIPE database. But as I mentioned here, the government of Israel has been removed about a week ago, and Monaco has confirmed that by January 2012 we can remove those, which will eventually conclude that action point by changing the syntax in the RIPE database to not allow those objects anymore. With that, I think I'm at the end of my updates for this section. If you have any questions, I'm happy to take them now.

CHAIR: Thank you, Wolfgang. The government of Israel is still in place.

WOLFGANG NÄGELE: They removed it themselves.

CHAIR: The action item, or action point, is in the Database Working Group. The idea is to get rid of some of the attributes that work in forward space but not in reverse or in the ENUM tree, so there can be a clean-up.

Thanks for the update.

AUDIENCE SPEAKER: Robert. Regarding the DS records in the reverse tree, do you have any idea if people are using that, doing DNSSEC validation in places we might not know about?

WOLFGANG NÄGELE: We can definitely confirm that. As things go, I believe Steffann mentioned once that they were in the fortunate position of having a lot of TLDs to interface with, and people are doing that with their reverse space as well. It's not too bad if that breaks, but it would still be bad if it breaks.

AUDIENCE SPEAKER: The TCP graph seems very static in a very fluctuating environment. Is there any reason to believe that you've reached the capacity of the server or something?

WOLFGANG NÄGELE: No. We did extensive load testing before rolling out the signed root, and this is not the bottleneck. The bottleneck is a good couple of orders of magnitude higher.

CHAIR: I have one question, Wolfgang, unless anybody else rushes to the microphone. You mentioned the secondary service for developing top-level domains. Some people may expect new TLDs to develop over the next couple of months or so. Can you say anything about the position? Is there an expectation that the RIPE NCC will provide secondary services for those?

WOLFGANG NÄGELE: Relating to the new gTLD programme?


WOLFGANG NÄGELE: There are really two things. This service is about ccTLDs specifically, with a focus on developing countries, which usually don't have the funding. If a country has an IDN ccTLD, we're happy to give them a secondary service there too; we don't distinguish between those. New gTLDs are ruled out because they're not ccTLDs, and if you can afford the 185,000 dollars for the initial process, I think that covers it: you must have the funding to be able to run the infrastructure. So if somebody were to approach us there, we would very surely turn them down.

CHAIR: Thanks for that clarification. Any other questions to Wolfgang? Okay. Thank you.


CHAIR: So that brings us almost to the end of the first slot, but not without having two implementations, two new kids on the block; first Lubos Slovak.

Lubos Slovak: Hello. I work at CZ.NIC and I'm the chief developer of our new alternative DNS server, which is Knot DNS, as you can see on the slide. In the following presentation I will talk a bit about the history of the project, about some design choices we've made, and I will also present some benchmarks comparing our implementation to BIND and NSD.

We started the project at the end of 2009 when, actually, I was the only person doing all the work, so development went quite slowly. But then, in the second half of 2010, the team was expanded by two more people and development moved forward a lot. Until September this year we did a lot of work, and in September we made the first unofficial release. Since then it has already been deployed on some smaller or less important domains. As of today we are making the first public release; it's a beta version, and by the end of the year we will add some more features, but I'll talk about that later.

Our main goal with this project was to develop a usable alternative to the most widely used implementations, like BIND or NSD, which could be used for TLDs, so it has to support certain features, and it has to be quick, secure, stable and so on.

As of today, Knot DNS can be run on a variety of systems like Linux and the BSDs. We support the basic DNS protocols, such as full and incremental zone transfers, with some ACLs implemented to restrict access to the transfers. We support EDNS0, DNSSEC and NSEC3.

Currently we do not support TSIG or dynamic updates, but a lot of work has already been done on this, so we expect these features to be added in the following weeks.

Knot DNS uses a simple text-based configuration file where you can set network interfaces, the zones to be served, and so on.

An important feature of Knot DNS is that you can do run-time reconfiguration without the need to restart the server and without it stopping serving the zones: you can add or remove zones, add or remove network interfaces, without interrupting the service. Knot DNS uses a kind of zone compiler to pre-process zone files for faster loading and also to do some optimisations. We use a simple command-line interface to control the server, called knotc, with which you can start the server and so on.

As for the design, we tried to keep it object-oriented. We avoided the use of locks, so most of the server is lock-free, which adds to the performance. Here you can see a very simplified diagram of the project design; the boxes roughly represent the modules and the arrows represent communication through some API.

As for achieving our goals, we take two approaches to get good performance. One is to minimise the number of lookups needed to answer one query, and the second is to minimise the actual lookup time needed to find one domain name in the zone. You can see that we employed some clever data structures. And, as I have said, Knot DNS can run non-stop without restarting; we employed mechanisms such as read-copy-update which allow us to do so.

Here you can see a small example of the configuration file. You can see that it's very simple and straightforward; there is the definition of interfaces and the list of zones. It's a very simple example, just to give some idea.
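As an illustration of what such a file might look like, here is a hypothetical fragment in the spirit of the early Knot DNS text format; the exact keywords are assumptions of mine, so consult the shipped documentation for the real syntax.

```
# Hypothetical example only; real keyword names may differ.
interfaces {
  all_ipv4 {
    address 0.0.0.0@53;
  }
}

zones {
  example.com. {
    file "/var/lib/knot/example.com.zone";
  }
}
```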

We have made some benchmarks comparing our implementation to BIND and NSD. We used the same set-up but with two operating systems, Linux and FreeBSD, and we measured the response rate of the servers; the servers served a whole zone signed with NSEC3 as the data. The results are here. We have achieved our performance goal quite satisfyingly. I must mention that Knot DNS is really focused on scalability and on running on machines with more CPUs, so it's really important to look at the peaks in the chart, because they show the best possible performance on such a machine.

The next chart shows the same measurement done on FreeBSD. NSD behaved the same as Knot DNS, so the lines are even, and it seems that we hit some system or network boundary, so we can't see any scaling or anything. There was some limitation.

As of today, Knot DNS is already deployed on some smaller domains, but by the end of this year we intend to deploy it for the CZ zone, at least on some Anycast nodes. In the following weeks we want to finish the support for TSIG and dynamic updates, to add root zone support, which is currently not fully working, and features like NSID and some others, and we will also do a lot of testing and bug fixing, as you know.

For the long-term future, we want to work on the performance even more, because we think there are still places where we can improve it. We also want to reduce the memory footprint, which is currently relatively high compared to NSD or BIND, but we don't want to sacrifice speed, so it will be some kind of compromise.

We also want to add some other options to the command-line interface. It's quite simple now, but it's important to be able to easily control anything in the server, like reloading a zone, just one zone, and so on.

Well, to summarise what I have said: we have developed a new authoritative-only name server which achieves quite high performance and allows run-time reconfiguration, and, importantly, it was developed in the environment of a TLD operator, CZ.NIC, so we know what we want from the server and we are making sure that we can use it ourselves. It is also being actively developed, and it will be in the following years.

On the other hand, it's not yet fully complete, as I said. There are some features we want to add by the end of the year, and we need to reduce the memory footprint.

So it's important to know that it's still a beta version, but we really encourage you to go download it, test it, try it with your zones in your own environment; all feedback is very, very welcome. It's available as sources or in packages for some Linux distributions. There are some links here, but you can download the slides, so I will not go through them. And I will end my presentation with this educational image. Thank you for your time.

AUDIENCE SPEAKER: I have two quick questions. First, you mentioned an increased memory footprint. Could you give some numbers comparing?

Lubos Slovak: Of course. When you run the server with some zone, it consumes, I think, about four times the amount of memory that the zone takes on disk. It ranges from three times to five times, depending on the machine.

AUDIENCE SPEAKER: Due to a very quick hash algorithm?

Lubos Slovak: And quite complex structures, and so on.

AUDIENCE SPEAKER: And the second quick question: is there an estimate of when TSIG support will be available?

Lubos Slovak: In two or three weeks. We are really in the debugging stage now.

AUDIENCE SPEAKER: Daniel. At the NSD project, my role was to test the thing. There was a lot of scepticism among the TLD operators and the people that we actually aimed NSD at as to whether there were still bugs. What we did at the time was build a test lab that played exactly the same queries to both the reference implementation of the time, which was BIND, and to NSD, and we analysed the differences. I'm giving you advice: if you want to gain acceptance and confidence that it answers the same, or that you can explain the differences, that's an approach you might want to take. I know they still have the code for that. I'm not proud of it, it was just a hack, but you might want to reactivate it and compare the answers to exactly the same queries from your implementation, NSD and BIND.

Lubos Slovak: Yes, thank you.

AUDIENCE SPEAKER: I will answer that. We already spoke before, and we have also captured, I think, two months of traffic by now, and we plan to replay that against Knot DNS.

AUDIENCE SPEAKER: Thank you very much. I couldn't remember the name of it.

AUDIENCE SPEAKER: In fact, if anybody develops name server code and wants to use that, it is available.

AUDIENCE SPEAKER: It's not publicly posted anyway.

AUDIENCE SPEAKER: Don't post it anywhere.

AUDIENCE SPEAKER: This is not well maintained, but for people who want to test name servers, it's available.

CHAIR: Emile.

AUDIENCE SPEAKER: Emile: I have a question from the chat room, from the RIPE NCC: What are your plans for future development models? Do you plan to continue development within CZ.NIC or open it up? That was my screen saver again. And what is CZ.NIC's long-term plan to support the software?

Lubos Slovak: Maybe Andre will answer one of the questions.

CHAIR: With that, I take the liberty to close the microphones after Andre, to give Peter a chance to get to his presentation.

AUDIENCE SPEAKER: That's not the only project we do. We do other projects and are always ?? okay, sorry. So, that's not the first project we do, actually. We would like to take an open approach and open it up as much as possible, and we welcome other people to join us. Honestly speaking, the activity of the... is so huge that we want to support the project long-term, fixing bugs and helping others deploy it. That's the current plan. I think we implement... so we would like to continue.

CHAIR: Thanks. And thanks to Lubos Slovak for the presentation.


CHAIR: The microphone is handed over to Peter Janssen with another new implementation.

Peter Janssen: Good morning. My name is Peter Janssen. I'm a, what can I say? Anything that was said in the previous presentation, multiply it by two and that's it, so I can keep this presentation very short. I'll go through it and see if there's anything that's a bit different from our colleagues from the Czech Republic. Who are we, what have we done, why are we doing it, where are we going, when, and wow? That should be "how", but I couldn't find a synonym for that. We are a Belgian not-for-profit organisation, under contract with the European Commission, and the sole manager of the .eu TLD space. Interestingly enough, we do support in 23 official languages and currently 3.4 million names.

Two specific points here: we run name servers for that, both our own maintained unicast nodes and third-party Anycast meshes, and, as one of the few TLDs doing this, dynamic updates: when a change comes in, within a few minutes it gets fed to the name servers live. In the graphical representation, on the upper right, the registrar talks to the registration system at our side, which feeds input into the database; a dynamic process extracts from that and feeds a master, which then notifies the public slaves that we maintain ourselves as well as the slaves of the Anycast meshes.

Challenges, which is approaching why we have started this project: obviously, for the public authoritative slaves, geographical spread, and network, hardware and name server software diversification. Also, specifically for us, the zone update mechanism is a separate process listening on the core database; it gets events from that and sends dynamic update messages to BIND, which sends them on to the world. There's room for improvement, or decomplexification.

We started a new DNS implementation to mitigate the two issues mentioned on the previous slide. Here you will find an overview of the primary and secondary design goals. Obviously RFC compliant, and this already answers a question that Daniel raised on the previous presentation: our goal is to be as close as possible to the existing implementations, as long, of course, as they are in compliance. Portable, clean, fast, authoritative, with DNSSEC support. Secondary goals: we might do an extension to make it a recursive, caching, validating server. What we're looking at is to have generic backends, listening to an SQL database or any other store, or a message queue kind of communication; an update API; and dynamic provisioning, so that you can add and remove zones without having to go down.

1.0. There are a few things in italics; those are things we're working on as we approach 1.0. Supported platforms: Linux, BSD, OS X, and we are doing our best to run on some sort of Windows. It's authoritative, including the asterisk and the @ sign; you can see what we currently support. We do zone transfers, both as master and slave; we have some work to do on the notify part, but that's minor. We do support TSIG (thank you, Peter), and we do DNSSEC, algorithms 5 and 7. It does online signing, so when signatures expire it will re-sign them automatically.

Future releases: adding algorithms in the 1.2 release, adding clients; in a 2.0 we might go into a caching, validating resolver, and also have some libraries spun off of this to be helpful in other projects; and then, in the future, the backends that I talked about, code clean-up, bug fixing. The goal is to have it released in February next year. I'm going rather quickly here because time is running out.

We have done some performance measurements. This is the set-up: client on the left, server on the right, running on dual Xeon machines. What do we do? We pre-create pcap files that we replay with tcpreplay, we use tcpdump to catch the responses, and we see how many queries we send out and how many responses come back. If you do 50,000 queries per second for 30 seconds, it should be one and a half million packets. If you do that, tcpreplay comes back saying, hey, I took 29.7 seconds instead of 30 seconds. This is related to the one-microsecond resolution on the server, and if you go beyond that you get serious issues, as I will show. It doesn't really matter: as long as it's around the requested number of queries, it's fine to put it in a graph.

So this is what you get: four name server implementations that we graphed. You get the number of queries, and the number of responses per second in comparison to the queries. I'll go into detail a bit later. Going back: the individual measurement points are the dots. We wanted to fill them in to have a more curvy line, and this is what came out. This is where we hit the one-microsecond thing and all sorts of trouble. Interestingly enough, one of the big issues was that the tcpdump running on the client itself was holding the machine back, so we put a machine in between that does the tcpdump, to take it away from the client. And then you get the comparison: in blue and purplish the previous measurements, in greenish the new measurements. We gained performance and it became more stable. The next step was to take away the load on the client, which was receiving the responses and then throwing them away because it doesn't do anything with them. So we did some blocking on the machine in between, to stop the responses reaching the client. Now we got some really useful numbers.

This is what we got. The full lines are without blocking the responses to the client; the dotted lines are with blocking the responses to the client. Interestingly enough, there is not a whole lot of difference. What do we have? In yellow, BIND 9.8.something. Green is the latest version, running in full mode. Blue is a server that only takes the query, puts it in a response and sends it out: a theoretical maximum without doing a lot of work, not a useful name server, but a benchmark. Comparing with the previous speaker, we have about a 30% increase in comparison with NSD, and I get the feeling the Czechs have that as well. Did you steal our code or did we steal your code? That's the question here.

So performance is one thing; as I said, that's already done. Conformity testing: we want to be RFC compliant. We have been doing automated testing, also doing random bit flipping to see if it just dies, things like that, and we have some people doing manual testing to deliberately try to bring it down. The whole idea, indeed, is to use an input file, feed it to BIND, capture the responses, and do an automated comparison to see whether they are the same and, where they are different, why. One of the things we want to do with this is get community input: we'll make a node available over the internet where you can try to do your best to kill it.

Availability: when will it be available, and how? We're looking at some sort of open-source licence, not quite sure which one; that depends a lot on the forces upstairs. We will make available some test nodes, and we will make available some binary packages so you can do some home testing and playing for the moment. Some code and stats: it's written in C, and we have about two and a half years of development in this. Not very interesting otherwise.

The most important slide: does it have a name? All the good names were taken; we considered NSD or something like that. So yes, we have found a name and, for obvious reasons, we called it YADIFA. That's the logo, and we'll do a T-shirt hand-out after this session. Official launch in 2012, so it will have the logo, YADIFA, and "coming in 2012", as the world will come to an end as predicted. It's already live, and yes, we're eating our own dog food: two of the servers are running on YADIFA.



DANIEL KARRENBERG: You say two of your name servers are already running on it?

Peter: Yes.

AUDIENCE SPEAKER: Good. The problem was solved six or seven years ago; Distel, as I was reminded, is the thing. And yes, it's about actually capturing responses and comparing them. There's also been some work done there that you might want to capitalise on, and the very ugly code is available through labs.

AUDIENCE SPEAKER: Also to chime in on that: the K-root capacity testing that we did before rolling out the signed root zone; I think a white paper was published on that. You already did that work, but if anybody else has to do it, there's quite comprehensive information on that methodology.

Peter: As for where this is going: the whole intention is that some of our name servers will run on YADIFA in the future. This is a long-term project that we plan to commit to and keep running and alive, I would say.

DANIEL KARRENBERG: I didn't want to sound critical; that was friendly advice. And I like the gene pool expanding. Two is sort of okay, but if we get more it's nice.

Peter: That's the idea. For obvious reasons we call it YADIFA; anybody any idea why?

CHAIR: Yet another domain information for...

CHAIR: After the coffee break we'll have Wolfgang with an incident report, Olaf introducing DNSSEC-Trigger, an ICANN update, and Steffann with impressions from a previous conference, followed by a lively panel discussion.