sled agent PUT /datasets should validate its constraint that (pool, kind) is unique #7311

We recently realized that because datasets never get deleted today, we can never put two datasets of a particular kind on the same zpool, even if one of those belonged to a zone that's now expunged. I wanted to test this out and see what happens. Unfortunately, the details don't fit in a GitHub issue description. The short version is:

I'm going to file a separate issue for the planner here. This issue covers having sled agent validate this constraint before accepting the request and committing the ledger. I'll comment below with details on how I tested this and how it went wrong.

Comments
This was on the same a4x2 deployment where I've been testing datasets, most recently described in #7304. My goal was to cause Reconfigurator to try to put an external DNS zone onto the same pool where an expunged external DNS zone had previously been. I first ran into #7305. I pulled in #7307 and tried again. So at the starting point, I had this:
Of note are the external DNS zones:
This is too few zones, so the next attempt to generate a blueprint would add a new external-dns zone. I want it to land on a disk with an expunged external-dns zone, so I'm first going to expunge zone 13f9e4ae-4f94-4718-af29-4a5da01bac2f. This is what led to #7305, but now I'm trying with the fix for that:
Execution took almost two minutes for some reason but did complete successfully:
At this point I expected that if I generated a new blueprint through the planner, it'd add an external DNS zone to the disk that it shouldn't use. That's what happened:
Notice that in the diff output above, under datasets, the newly added external-dns dataset has the same pool and kind as the expunged zone's still-present dataset, just with a different id. Thankfully, this did fail at execution time:
This makes sense: it found the dataset id associated with the durable dataset of the expunged zone (since we never deleted its dataset), but it was expecting to find the one from the newly-added zone. (This check prevented us from just re-using the old dataset for the new zone!) Unfortunately, it did update the ledger to reflect the new dataset config (generation 6), even though that config is invalid:
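For context, the failing check is doing something like the following. This is a hypothetical sketch with made-up names and types, just to illustrate the shape of the failure, not the real sled-agent code:

```rust
// Hypothetical sketch of the execution-time check that tripped here.
// The lookup by (pool, kind) finds the expunged zone's leftover
// dataset, whose id doesn't match the one carried by the new zone's
// config, so the ensure fails.
struct FoundDataset {
    id: String,   // dataset UUID currently occupying the (pool, kind) slot
    pool: String, // zpool name
    kind: String, // dataset kind, e.g. "external_dns"
}

fn check_expected_dataset_id(
    found: &FoundDataset,
    expected_id: &str,
) -> Result<(), String> {
    if found.id != expected_id {
        // The failure observed above: the (pool, kind) lookup returned
        // the expunged zone's dataset id instead of the new zone's.
        return Err(format!(
            "dataset of kind {} on pool {} has id {}, expected {}",
            found.kind, found.pool, found.id, expected_id,
        ));
    }
    Ok(())
}
```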
This will all go away once we do implement deleting datasets (#6177 + #7304). In the meantime, we may want the planner to avoid creating this problem; I'll file a separate issue for that. But in case this does ever happen, Sled Agent really shouldn't allow this. As long as Sled Agent requires that the (pool name, kind) tuple be unique among its datasets, it should validate that up front in the PUT /datasets handler, before accepting the request and committing the ledger.
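As a rough sketch of what that up-front check could look like (the types here are simplified stand-ins invented for illustration, not the real `DatasetsConfig`/`DatasetConfig` from sled agent):

```rust
use std::collections::HashSet;

// Simplified stand-ins for the real sled-agent config types.
#[derive(Clone, PartialEq, Eq, Hash, Debug)]
struct PoolKind {
    pool: String, // zpool name
    kind: String, // dataset kind, e.g. "external_dns"
}

struct DatasetConfigSketch {
    id: String, // dataset UUID
    pool_kind: PoolKind,
}

struct DatasetsConfigSketch {
    generation: u64,
    datasets: Vec<DatasetConfigSketch>,
}

/// Reject any config in which two datasets share a (pool, kind) pair.
/// Running this before committing the ledger means an invalid request
/// returns an error without persisting a broken generation.
fn validate_unique_pool_kind(config: &DatasetsConfigSketch) -> Result<(), String> {
    let mut seen: HashSet<&PoolKind> = HashSet::new();
    for dataset in &config.datasets {
        if !seen.insert(&dataset.pool_kind) {
            return Err(format!(
                "invalid dataset config (generation {}): dataset {} \
                 duplicates (pool {}, kind {}) already used by another dataset",
                config.generation,
                dataset.id,
                dataset.pool_kind.pool,
                dataset.pool_kind.kind,
            ));
        }
    }
    Ok(())
}
```

The important property is that the check runs (and fails) before the ledger write, so a bad request can't leave the sled with a persisted config it will never be able to satisfy, as happened with generation 6 above.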
Filed #7312 for the planner workaround.