New ZFS features landing in FreeBSD, MAP_STACK for OpenBSD, how to write safer C code with Clang’s address sanitizer, Michael W. Lucas on sponsor gifts, TCP blackbox recorder, and Dell disk system hacking.

Headlines

8188 increase size of dbuf cache to reduce indirect block decompression

With compressed ARC (6950) we use up to 25% of our CPU to decompress indirect blocks, under a workload of random cached reads. To reduce this decompression cost, we would like to increase the size of the dbuf cache so that more indirect blocks can be stored uncompressed. If we are caching entire large files of recordsize=8K, the indirect blocks use 1/64th as much memory as the data blocks (assuming they have the same compression ratio). We suggest making the dbuf cache be 1/32nd of all memory, so that in this scenario we should be able to keep all the indirect blocks decompressed in the dbuf cache. (We want it to be more than the 1/64th that the indirect blocks would use because we need to cache other stuff in the dbuf cache as well.) In real-world workloads, this won't help as dramatically as the example above, but we think it's still worth it because the risk of decreasing performance is low. The potential negative performance impact is that we will be slightly reducing the size of the ARC (by ~3%).

The idea of Storage Pool Checkpoint (aka zpool checkpoint) deals with exactly that. It can be thought of as a “pool-wide snapshot” (or a variation of extreme rewind that doesn’t corrupt your data).
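The 1/64th and 1/32nd figures above follow from simple arithmetic: in the ZFS on-disk format each data block is referenced by a 128-byte block pointer, so with recordsize=8K the indirect blocks take 128/8192 = 1/64th the space of the data they point to. A quick sketch of that math (the 64 GiB RAM figure is a made-up example, not from the notes):

```python
# Arithmetic behind the dbuf-cache sizing suggestion.
# Assumptions: 128-byte block pointers (ZFS blkptr_t) and 8K records,
# as in the example above; 64 GiB of RAM is purely illustrative.

BLKPTR_SIZE = 128        # bytes per block pointer in an indirect block
RECORDSIZE = 8 * 1024    # recordsize=8K data blocks

# Each 8K data block needs one 128-byte pointer:
indirect_ratio = BLKPTR_SIZE / RECORDSIZE
print(indirect_ratio)    # 0.015625, i.e. 1/64th

# Suggested dbuf cache of 1/32nd of all memory -- twice the indirect
# footprint, leaving headroom for other cached dbufs:
ram = 64 * 1024**3
dbuf_cache = ram // 32
print(dbuf_cache // 2**30, "GiB")   # 2 GiB on a 64 GiB machine
```

Making the cache 1/32nd rather than exactly 1/64th is the headroom the notes mention: the dbuf cache holds more than just decompressed indirect blocks.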
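In practice the checkpoint workflow looks roughly like the following sketch, based on the zpool-checkpoint(8) man page (the pool name "tank" is an assumption for illustration; not runnable without a real pool):

```shell
# Take a pool-wide checkpoint before a risky operation:
zpool checkpoint tank

# ... perform the upgrade / vdev change / other risky operation ...

# Happy with the result? Discard the checkpoint:
zpool checkpoint -d tank

# Unhappy? Rewind the entire pool back to the checkpointed state:
# zpool export tank
# zpool import --rewind-to-checkpoint tank
```

Unlike a dataset snapshot, the rewind reverts every dataset in the pool at once, which is what makes it useful as a safety net for pool-level changes.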