I'm just confused about what happens with keys that are already in the database. Imagine that a new keyserver starts up with an out-of-date keydump that is missing some keys that have non-exportable certifications. When that keyserver reconciles with one that does have them, won't these lead to an exception being thrown somewhere?
Rather than throwing an exception, I would have thought the right thing to do would be to filter out such certifications when you see them, or when you do a merge. Then, if one SKS server ran a search-and-destroy mission to filter out such certifications in its local copy, it would trigger merges with everyone it gossiped with, thus squashing them across the whole network.
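To make the merge-time idea concrete, here is a minimal sketch of stripping non-exportable certifications from a key's packet list before merging. The packet type, the tag number, and the predicate are all assumptions for illustration; real code would parse the signature's Exportable Certification subpacket rather than peek at a byte.

```ocaml
(* Hypothetical sketch: drop non-exportable certification packets
   before a merge. These types are NOT the actual SKS types. *)

type packet = { tag : int; body : string }

(* Stand-in predicate: a signature packet (tag 2) whose first body
   byte is 0 plays the role of a non-exportable certification.
   Real parsing would inspect the signature's hashed subpackets. *)
let is_non_exportable p =
  p.tag = 2 && String.length p.body > 0 && p.body.[0] = '\000'

let filter_key packets =
  List.filter (fun p -> not (is_non_exportable p)) packets

let () =
  let key = [ { tag = 6; body = "pubkey" };
              { tag = 2; body = "\001exportable-sig" };
              { tag = 2; body = "\000local-sig" } ] in
  assert (List.length (filter_key key) = 2)
```

Because the filtered key is what gets merged and gossiped, the squashed certifications would not re-propagate from servers applying the same filter.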
That said, it's a little tricky during the period when some servers have this rule and some don't: the certification squashing will happen on one side and not the other, leading to inconsistent handling of keys and thus persistent differences.
I only vaguely remember how it works, but I think there is a filter mechanism that is meant to deal with exactly this case, where different servers support different sets of filters on the keyset.
All in all, I don't see how this patch could be doing the right thing.
Another very simple thing we could do: apply this kind of filtering only to keys submitted through the HTTP interface, not to reconciled keys. That would stop the problem for new keys without disrupting the replication mechanism.
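A sketch of that split, with a hypothetical `source` tag distinguishing the two ingestion paths (the type and function names are mine, not SKS's):

```ocaml
(* Sketch: filter only at HTTP submission; reconciled keys pass
   through untouched so replication stays byte-for-byte consistent. *)

type source = Http_submit | Recon

let ingest_key ~source filter key =
  match source with
  | Http_submit -> filter key   (* clean new submissions *)
  | Recon -> key                (* leave reconciled keys alone *)

let () =
  (* Toy filter: packets are plain strings here. *)
  let strip = List.filter (fun p -> p <> "local-sig") in
  assert (ingest_key ~source:Http_submit strip ["pub"; "local-sig"]
          = ["pub"]);
  assert (ingest_key ~source:Recon strip ["pub"; "local-sig"]
          = ["pub"; "local-sig"])
```

The trade-off is that existing non-exportable certifications already in the dataset keep circulating; only new HTTP submissions get cleaned.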
Can you propose an alternate patch that you think would do the right thing? I see the filter lists in fixkey.ml, but I don't see how to implement a new one in that framework.
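I don't know fixkey.ml's actual interface, so this is only a guess at the shape: assuming a filter is a named key-to-key transformation kept in an ordered list, a new "drop non-exportable" filter might slot in like this. Every name and type below is hypothetical.

```ocaml
(* Guessed shape only: NOT confirmed against fixkey.ml's real
   filter framework. *)

type key = string list   (* stand-in for a parsed key's packets *)

type filter = { name : string; apply : key -> key }

(* Hypothetical predicate: packets prefixed "local:" stand in for
   signatures carrying the non-exportable flag. *)
let drop_non_exportable k =
  List.filter
    (fun pkt ->
       not (String.length pkt >= 6 && String.sub pkt 0 6 = "local:"))
    k

let filters =
  [ { name = "drop_non_exportable"; apply = drop_non_exportable } ]

(* Apply every registered filter in order. *)
let run_filters key =
  List.fold_left (fun k f -> f.apply k) key filters

let () =
  assert (run_filters ["pub"; "local:sig"; "sig"] = ["pub"; "sig"])
```

If the real framework records which filter names each peer supports, that would also address the mixed-deployment awkwardness mentioned above.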
I agree that the synchronization phase across machines with and without this patch would be awkward, but it's better to have an awkward phase than to actively propagate data that is marked for non-propagation.