[dns-wg] Analysis of NSD
Johan Ihrén
johani at autonomica.se
Thu Oct 28 15:38:46 CEST 2004
Hi Jørgen,

On Oct 28, 2004, at 14:09, Jørgen Hovland wrote:

>>> and different replies depending on the country origin of the source
>>> ip of the querying nameserver/client.
>>
>> Oops.
>>
>> What's the justification for that? And what about maintaining DNS
>> coherency? Any ideas on how to make this work with DNSSEC (I realize
>> you're not doing DNSSEC today, but this would seem to be
>> fundamentally incompatible with any DNSSEC use whatsoever).
>
> Load balancing. A bit similar to what Akamai is doing. Regarding
> DNSSEC we aren't quite there yet and don't have a solution to it.

Hmm. Urgl. But I see your point (much as I don't like it).

I think such things are rather evil (i.e. I'd rather have intelligent
clients making informed decisions than have just barely sentient
servers making decisions on my behalf based on assumptions about my
environment). But I can understand that there's a demand for that type
of service, given that the average client is not intelligent at all.

I want to see DNSSEC happen, so I'm naturally concerned when I see
designs that I believe to be incompatible with DNSSEC, but I can
understand that DNSSEC support has not been your most requested feature
so far, and I sympathize with the lack of a solution.

>> What happens to updates (to the master) that occur *during* the
>> reload? My guess is that they get added to "the tail" of the reload
>> to ensure that no change is left out during the reload. But in that
>> case it seems to me that you already have the zone sorted in
>> "transaction order" and the only thing needed to steer around the
>> complete reloads would be some sort of version stamp that is shared
>> between slave and master. Doesn't really have to be the SOA serial,
>> you can use whatever you want.
>
> The slave connects and registers with the master first, locks sql in
> read-only mode, clears unprocessed zone change messages from the
> master (there is about a 0.001% chance of any zone change messages at
> this stage anyway) and then reloads from sql. Changes sent by the
> master during this stage will be held in a queue and not processed
> before the reload is finished. This should guarantee that the local
> zone data is equal to the master's. If the slave should die during the
> lock, a certain timeout would unlock it.
>
> We do a complete reload since it only takes 3 seconds. This is where
> it becomes interesting.

Wait. Are you saying that a complete reload of the zone (where all the
data moves from the master to the slave) takes 3 seconds? For how large
a zone? Cannot be very large.

Or are you saying that the nameserver basically closes the connection
to the DB backend and then reopens it to read the data fresh (i.e.
there is massive data movement between nameserver and DB backend within
the slave, but only DB synchronization magic goes over the wire from
the master)?

Or are you saying that the slave does the sql reads over the wire from
the DB (i.e. the SQL DB is not locally replicated on the slave)?

I know next to nothing about DB machinery when it comes to stuff like
replication, so please excuse my ignorance.
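(Just to check that I've understood the sequence you describe above,
here it is as I picture it, written out as a little Python toy. Every
name below is made up by me for the example and has nothing to do with
your actual implementation, so please correct me where I have it
wrong; registration with the master is left out of the toy.)

import queue
import threading

class ReloadingSlave:
    """Toy model of the reload sequence as I picture it."""

    def __init__(self, sql_zones):
        self.sql_zones = sql_zones       # stand-in for the SQL backend
        self.zones = {}                  # zone data the slave answers from
        self.pending = queue.Queue()     # changes arriving during a reload
        self.reloading = False
        self.sql_lock = threading.Lock() # stands in for the read-only SQL
                                         # lock; a timeout would release it
                                         # if the slave died mid-reload

    def on_change_from_master(self, name, rrset):
        # During a reload, changes from the master are queued rather than
        # applied, so the freshly loaded snapshot stays consistent.
        if self.reloading:
            self.pending.put((name, rrset))
        else:
            self.zones[name] = rrset

    def full_reload(self):
        with self.sql_lock:
            self.reloading = True
            self.pending = queue.Queue()        # clear unprocessed messages
            self.zones = dict(self.sql_zones)   # complete reload ("raw dump")
            self.reloading = False
        while not self.pending.empty():         # now apply whatever the
            name, rrset = self.pending.get()    # master sent while we were
            self.zones[name] = rrset            # reloading

# e.g. slave = ReloadingSlave({"example.com.": "SOA ..."}); slave.full_reload()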
> I am quite confident that comparing SOA/zone changes would actually
> take longer. At least for us, using SQL, since a SQL query will take
> approximately 20 ms before a reply is given. Let's say 10% of the
> domains were altered. This is a pretty high number, though. You have
> to get the new SOAs from SQL. Let's just say that this has already
> been done. Now, 10% altered zones out of 30 million equals 1000
> minutes of latency just to deal with the sql zone retrieval calls, not
> the processing of the data. I am quite sure a raw dump would require
> less time and fewer cpu resources on the sql server, and perhaps even
> on the slave, depending

I agree about the efficiency of a raw dump. However, I'm really dense
today, so I don't really understand why you're using a number as high
as 30M *zones*. No offense, but no one ought to put that much
infrastructure into any single system, regardless of the underlying
technology. I think a more realistic example would be to look at 30K
zones and 1 minute.

And furthermore, I don't understand why it is not possible to
parallelize those calls, especially since they don't all go to the same
master. Or perhaps they do in your case? Is it possible to have a slave
serve multiple zones slaved from different masters, with multiple TCP
sessions in different directions?

> on the size of each zone. If you only have a few large zones then of
> course the result would not be the same. However, if you have frequent
> updates on these few large zones you would probably have to reload
> everything anyway. You could always try reducing the amount of sql

I mostly agree. If you have a few large zones (typically TLDs), my
guess would be that even with a rather high volume of changes, most of
the changes would concern a smaller part of the data, and hence IXFRs
still make sense as long as you're able to keep the transaction logs.
I.e. I have no idea whatsoever about the change frequency of .co.uk,
for example, but I'd be really surprised if not more than 60% stayed
unchanged for a year.

> calls by grouping them together, but that might look "ugly".
> Nameservers don't usually lose connectivity anyway, but of course they
> do from time to time.

Exactly. And that's the situation that interests me. That "ordinary
operation" works just fine doesn't surprise me at all.

> There is also a solution doing transaction logging when a slave gets
> disconnected. We skipped this because it is easier to add and delete
> new slaves without having to reconfigure the master. A transaction log
> could also get very large if the slave was down for a long time, and
> the general implications of knowing whether a slave actually performed
> the change or not made us skip this.

This is exactly the reasoning behind how IXFR works. I.e. a slave can
request an IXFR with all the transactions from version N until now, but
the master always has the right to respond with an AXFR. This way the
master may "jettison" the transaction log if it grows too large to be
convenient to keep. As to knowing whether a slave performed a change or
not, that is also taken care of by the SOA serial.

So, since you don't have that (or something similar OOB wrt DNS data,
as I suggested), I understand your reasoning.

Johan
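PS. To make the IXFR/AXFR point a bit more concrete, the master-side
decision I have in mind is roughly the following. This is only a toy
sketch in Python with names I made up on the spot; it is not NSD and
not your code, and it compresses the real IXFR bookkeeping (RFC 1995
keeps a chain of deltas between consecutive serials) into a single
lookup:

def answer_transfer(request_serial, current_serial, transaction_log, full_zone):
    # transaction_log: dict mapping an old SOA serial to the list of
    # changes that bring a slave from that serial up to current_serial.
    # (Deliberately flattened compared to the real protocol.)
    if request_serial == current_serial:
        return ("current", [])              # slave is already up to date
    if request_serial in transaction_log:
        # We still hold the deltas from the slave's version: answer with
        # an incremental transfer containing just those transactions.
        return ("IXFR", transaction_log[request_serial])
    # Otherwise the log was jettisoned (too old, grew too large, unknown
    # slave): the master always has the right to fall back to a full AXFR.
    return ("AXFR", full_zone)

# e.g. answer_transfer(2004102501, 2004102801,
#                      {2004102701: ["delete old NS", "add new NS"]},
#                      ["the whole zone"])
# -> ("AXFR", ["the whole zone"])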