@ 153.0 set of ZFS storage pools
ZFS aggregates devices into a storage pool.  The storage pool
describes the physical characteristics of the storage (device
layout, data redundancy, and so on) and acts as an arbitrary
data store from which file systems can be created.
@ zfs.arc.hits ARC hits
Number of times a data block was in the ARC DRAM cache and returned.
@ zfs.arc.misses ARC misses
Number of times a data block was not in the ARC DRAM cache. It will
be read from the L2ARC cache devices (if available and the data is 
cached on them) or the pool disks.
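An overall cache hit ratio can be derived from these two counters.
A minimal sketch (illustration only, not part of the PMDA), assuming
a Linux OpenZFS system where the same counters appear as the "hits"
and "misses" fields of /proc/spl/kstat/zfs/arcstats:

    # Sketch: parse a kstat file and compute hits / (hits + misses).
    def kstats(path="/proc/spl/kstat/zfs/arcstats"):
        with open(path) as f:
            # skip the two kstat header lines; rows are "name type value"
            return {n: int(v) for n, _t, v in
                    (line.split() for line in f.readlines()[2:])}

    s = kstats()
    total = s["hits"] + s["misses"]
    if total:
        print("ARC hit ratio: %.2f%%" % (100.0 * s["hits"] / total))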
@ zfs.arc.demand.data_hits ARC demand data hits
@ zfs.arc.demand.data_misses ARC demand data misses
Demand misses mean that programs are waiting on disk IO.
@ zfs.arc.demand.metadata_hits ARC demand metadata hits
@ zfs.arc.demand.metadata_misses ARC demand metadata misses
@ zfs.arc.prefetch.data_hits ARC prefetch data hits
Prefetch identified a sequential workload, and requested that the data
be cached in the ARC ahead of time by performing ARC accesses for that
data.  As it turned out, the data was already in the ARC - so these
accesses returned as "hits" (and so the prefetch ARC access was not
actually needed).  This happens if cached data is repeatedly read in a
sequential manner.
@ zfs.arc.prefetch.data_misses ARC prefetch data misses
Prefetch identified a sequential workload, and requested that the data
be cached in the ARC ahead of time by performing ARC accesses for that data. 
The data was not in the cache already, and so this is a "miss" and the data
is read from disk.  This is normal, and is how prefetch populates the ARC
from disk.
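The same idea gives a prefetch hit fraction; a sketch reusing the
kstats() helper from the first sketch (the kstat field names
prefetch_data_hits and prefetch_data_misses are an assumption about
the running OpenZFS build):

    # Sketch: fraction of prefetch ARC accesses already satisfied in DRAM.
    s = kstats()
    issued = s["prefetch_data_hits"] + s["prefetch_data_misses"]
    if issued:
        print("prefetch data hits: %.2f%%" %
              (100.0 * s["prefetch_data_hits"] / issued))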
@ zfs.arc.prefetch.metadata_hits ARC prefetch metadata hits
@ zfs.arc.prefetch.metadata_misses ARC prefetch metadata misses
@ zfs.arc.mru.hits ARC most recently used data hits
@ zfs.arc.mru.ghost_hits ARC most recently used ghost list hits
@ zfs.arc.mfu.hits ARC most frequently used data hits
@ zfs.arc.mfu.ghost_hits ARC most frequently used ghost list hits
@ zfs.arc.deleted ARC deletions
@ zfs.arc.mutex_miss ARC mutex miss
Number of buffers that could not be evicted because the hash lock
was held by another thread.  The lock may not necessarily be held
by something using the same buffer, since hash locks are shared
by multiple buffers.
@ zfs.arc.access_skip ARC buffers skipped
Number of buffers skipped when updating the access state due to the
header having already been released after acquiring the hash lock.
@ zfs.arc.evict.skip ARC evicted buffers skipped
Number of buffers skipped because they have I/O in progress, are
indirect prefetch buffers that have not lived long enough, or are
not from the spa we're trying to evict from.
@ zfs.arc.evict.not_enough ARC buffers that could not be evicted
Number of times arc_evict_state() was unable to evict enough
buffers to reach its target amount.
@ zfs.arc.evict.l2_cached ARC evicted data cached in L2ARC
@ zfs.arc.evict.l2_eligible ARC evicted data eligible for L2ARC
@ zfs.arc.evict.l2_ineligible ARC evicted data not eligible for L2ARC
@ zfs.arc.evict.l2_skip ARC evicted data not cached in L2ARC
@ zfs.arc.hash.elements number of hash elements in ARC
@ zfs.arc.hash.elements_max maximum number of hash elements in ARC
@ zfs.arc.hash.collisions number of hash collisions in ARC
@ zfs.arc.hash.chains number of hash chains in ARC
@ zfs.arc.hash.chain_max maximum number of chains in ARC
@ zfs.arc.p target size of MRU
@ zfs.arc.c target size of cache
@ zfs.arc.c_min min target cache size
@ zfs.arc.c_max max target cache size
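zfs.arc.p is the portion of the overall target zfs.arc.c reserved for
the MRU list, so the implied MFU target is the remainder.  A sketch
using the matching arcstats fields (p, c, c_min, c_max) and the
kstats() helper from the first sketch:

    # Sketch: split the cache target between the MRU and MFU lists.
    s = kstats()
    mru_target = s["p"]
    mfu_target = s["c"] - s["p"]   # what remains of the target for MFU
    assert s["c_min"] <= s["c"] <= s["c_max"]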
@ zfs.arc.size size of ARC
Not updated directly; only synced in arc_kstat_update.
@ zfs.arc.compressed_size compressed size of entire ARC
Number of compressed bytes stored in the arc_buf_hdr_t's b_pabd.
Note that the compressed bytes may match the uncompressed bytes
if the block is either not compressed or compressed arc is disabled.
@ zfs.arc.uncompressed_size uncompressed size of entire ARC
Uncompressed size of the data stored in b_pabd. If compressed
arc is disabled then this value will be identical to the stat
above.
@ zfs.arc.overhead_size number of bytes in the ARC from arc_buf_t's
Number of bytes stored in all the arc_buf_t's. This is classified
as "overhead" since this data is typically short-lived and will
be evicted from the arc when it becomes unreferenced unless the
zfs_keep_uncompressed_metadata or zfs_keep_uncompressed_level
values have been set (see comment in dbuf.c for more information).
@ zfs.arc.hdr_size size of internal ARC structures
Number of bytes consumed by internal ARC structures necessary
for tracking purposes; these structures are not actually
backed by ARC buffers. This includes arc_buf_hdr_t structures
(allocated via arc_buf_hdr_t_full and arc_buf_hdr_t_l2only
caches), and arc_buf_t structures (allocated via arc_buf_t
cache).
Not updated directly; only synced in arc_kstat_update.
@ zfs.arc.data_size size of ARC buffers
Number of bytes consumed by ARC buffers of type equal to
ARC_BUFC_DATA. This is generally consumed by buffers backing
on disk user data (e.g. plain file contents).
Not updated directly; only synced in arc_kstat_update.
@ zfs.arc.metadata_size size of ARC metadata
Number of bytes consumed by ARC buffers of type equal to
ARC_BUFC_METADATA. This is generally consumed by buffers
backing on disk data that is used for internal ZFS
structures (e.g. ZAP, dnode, indirect blocks, etc).
Not updated directly; only synced in arc_kstat_update.
@ zfs.arc.dbuf_size size of dbufs
Number of bytes consumed by dmu_buf_impl_t objects.
Not updated directly; only synced in arc_kstat_update.
@ zfs.arc.dnode_size size of dnodes
Number of bytes consumed by dnode_t objects.
Not updated directly; only synced in arc_kstat_update.
@ zfs.arc.bonus_size bytes consumed by bonus buffers
Number of bytes consumed by bonus buffers.
Not updated directly; only synced in arc_kstat_update.
@ zfs.arc.anon_size size of buffers in anon state
Total number of bytes consumed by ARC buffers residing in the
arc_anon state. This includes *all* buffers in the arc_anon
state; e.g. data, metadata, evictable, and unevictable buffers
are all included in this value.
Not updated directly; only synced in arc_kstat_update.
@ zfs.arc.anon_evictable_data size of evictable anon data
Number of bytes consumed by ARC buffers that meet the
following criteria: backing buffers of type ARC_BUFC_DATA,
residing in the arc_anon state, and are eligible for eviction
(e.g. have no outstanding holds on the buffer).
Not updated directly; only synced in arc_kstat_update.
@ zfs.arc.anon_evictable_metadata size of evictable anon metadata
Number of bytes consumed by ARC buffers that meet the
following criteria: backing buffers of type ARC_BUFC_METADATA,
residing in the arc_anon state, and are eligible for eviction
(e.g. have no outstanding holds on the buffer).
Not updated directly; only synced in arc_kstat_update.
@ zfs.arc.mru.size size of most recently used data
Total number of bytes consumed by ARC buffers residing in the
arc_mru state. This includes *all* buffers in the arc_mru
state; e.g. data, metadata, evictable, and unevictable buffers
are all included in this value.
Not updated directly; only synced in arc_kstat_update.
@ zfs.arc.mru.evictable_data size of most recently used evictable data
Number of bytes consumed by ARC buffers that meet the
following criteria: backing buffers of type ARC_BUFC_DATA,
residing in the arc_mru state, and are eligible for eviction
(e.g. have no outstanding holds on the buffer).
Not updated directly; only synced in arc_kstat_update.
@ zfs.arc.mru.evictable_metadata size of most recently used evictable metadata
Number of bytes consumed by ARC buffers that meet the
following criteria: backing buffers of type ARC_BUFC_METADATA,
residing in the arc_mru state, and are eligible for eviction
(e.g. have no outstanding holds on the buffer).
Not updated directly; only synced in arc_kstat_update.
@ zfs.arc.mru.ghost_size size of most recently used ghost list
Total number of bytes that *would have been* consumed by ARC
buffers in the arc_mru_ghost state. The key thing to note here
is that this size doesn't actually indicate RAM consumption.
The ghost lists only consist of headers and don't actually have
ARC buffers linked off of these headers.
Thus, *if* the headers had associated ARC buffers, these
buffers *would have* consumed this number of bytes.
@ zfs.arc.mru.ghost_evictable_data size of most recently used evictable ghost data
Number of bytes that *would have been* consumed by ARC
buffers that are eligible for eviction, of type
ARC_BUFC_DATA, and linked off the arc_mru_ghost state.
Not updated directly; only synced in arc_kstat_update.
@ zfs.arc.mru.ghost_evictable_metadata size of most recently used evictable ghost metadata
Number of bytes that *would have been* consumed by ARC
buffers that are eligible for eviction, of type
ARC_BUFC_METADATA, and linked off the arc_mru_ghost state.
Not updated directly; only synced in arc_kstat_update.
@ zfs.arc.mfu.size size of most frequently used data
Total number of bytes consumed by ARC buffers residing in the
arc_mfu state. This includes *all* buffers in the arc_mfu
state; e.g. data, metadata, evictable, and unevictable buffers
are all included in this value.
Not updated directly; only synced in arc_kstat_update.
@ zfs.arc.mfu.evictable_data size of most frequently used evictable data
Number of bytes consumed by ARC buffers that are eligible for
eviction, of type ARC_BUFC_DATA, and reside in the arc_mfu
state.
Not updated directly; only synced in arc_kstat_update.
@ zfs.arc.mfu.evictable_metadata size of most frequently used evictable metadata
Number of bytes consumed by ARC buffers that are eligible for
eviction, of type ARC_BUFC_METADATA, and reside in the
arc_mfu state.
Not updated directly; only synced in arc_kstat_update.
@ zfs.arc.mfu.ghost_size size of most frequently used ghost list
Total number of bytes that *would have been* consumed by ARC
buffers in the arc_mfu_ghost state. See the comment above
arcstat_mru_ghost_size for more details.
Not updated directly; only synced in arc_kstat_update.
@ zfs.arc.mfu.ghost_evictable_data size of most frequently used evictable ghost data
Number of bytes that *would have been* consumed by ARC
buffers that are eligible for eviction, of type
ARC_BUFC_DATA, and linked off the arc_mfu_ghost state.
Not updated directly; only synced in arc_kstat_update.
@ zfs.arc.mfu.ghost_evictable_metadata size of most frequently used evictable ghost metadata
Number of bytes that *would have been* consumed by ARC
buffers that are eligible for eviction, of type
ARC_BUFC_METADATA, and linked off the arc_mfu_ghost state.
Not updated directly; only synced in arc_kstat_update.
@ zfs.arc.l2.hits L2ARC hits
@ zfs.arc.l2.misses L2ARC misses
@ zfs.arc.l2.feeds L2ARC feeds
@ zfs.arc.l2.rw_clash L2ARC read/write clashes
@ zfs.arc.l2.read_bytes L2ARC bytes read
@ zfs.arc.l2.write_bytes L2ARC bytes written
@ zfs.arc.l2.writes_sent number of writes sent to L2ARC
@ zfs.arc.l2.writes_done number of writes done in L2ARC
@ zfs.arc.l2.writes_error number of write errors in L2ARC
@ zfs.arc.l2.writes_lock_retry number of L2ARC write lock retries
@ zfs.arc.l2.evict_lock_retry number of L2ARC evict lock retries
@ zfs.arc.l2.evict_reading L2ARC evictions upon reading
@ zfs.arc.l2.evict_l1cached L2ARC evictions of L1-cached data
@ zfs.arc.l2.free_on_write size of L2ARC free on write data
@ zfs.arc.l2.abort_lowmem number of L2ARC low memory aborts
@ zfs.arc.l2.cksum_bad number of L2ARC bad checksum errors
@ zfs.arc.l2.io_error number of L2ARC IO errors
@ zfs.arc.l2.size L2ARC size
@ zfs.arc.l2.asize L2ARC aligned size
@ zfs.arc.l2.hdr_size size of L2ARC internal data structures
@ zfs.arc.memory.throttle_count ARC memory throttle count
@ zfs.arc.memory.direct_count ARC memory direct count
@ zfs.arc.memory.indirect_count ARC memory indirect count
@ zfs.arc.memory.all_bytes ARC all memory
@ zfs.arc.memory.free_bytes ARC free memory
@ zfs.arc.memory.available_bytes ARC available memory
@ zfs.arc.no_grow ARC no grow setting
@ zfs.arc.tempreserve ARC tempreserve space
@ zfs.arc.loaned_bytes ARC loaned bytes
@ zfs.arc.prune ARC objects to scan for prune
@ zfs.arc.meta_used ARC metadata buffers used
@ zfs.arc.meta_limit ARC metadata buffers limit
ARC will evict meta buffers that exceed arc_meta_limit. This
tunable makes arc_meta_limit adjustable for different workloads.
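On Linux OpenZFS builds of this era the limit is adjusted through the
zfs_arc_meta_limit module parameter; a sketch (the sysfs path is an
assumption, and the tunable is absent on releases that removed it):

    # Sketch: read an OpenZFS module tunable; 0 conventionally means
    # "use the built-in default".
    def zfs_tunable(name):
        with open("/sys/module/zfs/parameters/%s" % name) as f:
            return int(f.read().strip())

    meta_limit = zfs_tunable("zfs_arc_meta_limit")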
@ zfs.arc.dnode_limit ARC dnode limit
@ zfs.arc.meta_max max size of ARC metadata
@ zfs.arc.meta_min min size of ARC metadata
@ zfs.arc.async_upgrade_sync ARC async upgrade sync
This is a sync read that needs to wait for an in-flight async read.
Request that the zio have its priority upgraded.
@ zfs.arc.demand.hit_predictive_prefetch ARC demand hit predictive prefetch number
@ zfs.arc.demand.hit_prescient_prefetch ARC demand hit prescient prefetch number
@ zfs.arc.need_free ARC bytes to be freed
@ zfs.arc.sys_free ARC target system free bytes
@ zfs.arc.raw_size size of all b_rabd's in entire ARC
@ zfs.arc.l2.log.blk_writes L2ARC log block writes
@ zfs.arc.l2.log.blk_avg_asize L2ARC log block average size
@ zfs.arc.l2.log.blk_asize L2ARC log block aligned size
@ zfs.arc.l2.log.blk_count L2ARC log block count
@ zfs.arc.l2.data_to_meta_ratio L2ARC data to metadata ratio
@ zfs.arc.l2.rebuild.success number of successful L2ARC rebuilds
@ zfs.arc.l2.rebuild.unsupported L2ARC rebuild unsupported
Attempt to rebuild a device containing no actual dev hdr or containing a header
from some other pool or from another version of persistent L2ARC.
@ zfs.arc.l2.rebuild.io_errors L2ARC rebuild IO errors
@ zfs.arc.l2.rebuild.dh_errors L2ARC device header errors
@ zfs.arc.l2.rebuild.cksum_lb_errors L2ARC log block checksum errors
@ zfs.arc.l2.rebuild.lowmem L2ARC low memory errors
@ zfs.arc.l2.rebuild.size logical size of restored buffers in the L2ARC
@ zfs.arc.l2.rebuild.asize aligned size of restored buffers in the L2ARC
@ zfs.arc.l2.rebuild.bufs L2ARC rebuild buffers
@ zfs.arc.l2.rebuild.bufs_precached L2ARC rebuild buffers precached
@ zfs.arc.l2.rebuild.log_blks L2ARC rebuild log blocks
@ zfs.arc.cached_only_in_progress ARC cached only in progress
@ zfs.arc.abd_chunk_waste_size size of scattered ABD chunks in ARC
This includes space wasted by all scatter ABD's, not just those allocated
by the ARC.  But the vast majority of scatter ABD's come from the ARC,
because other users are very short-lived.
@ zfs.abd.struct_size size of allocated ABD structs
Amount of memory occupied by all of the abd_t struct allocations.
@ zfs.abd.linear_cnt number of linear ABDs
The number of linear ABDs which are currently allocated, excluding
ABDs which don't own their data (for instance the ones which were
allocated through abd_get_offset() and abd_get_from_buf()). If an
ABD takes ownership of its buf then it will become tracked.
@ zfs.abd.linear_data_size size of linear ABDs
Amount of data stored in all linear ABDs tracked by linear_cnt.
@ zfs.abd.scatter.cnt number of scatter ABDs
The number of scatter ABDs which are currently allocated, excluding
ABDs which don't own their data (for instance the ones which were
allocated through abd_get_offset()).
@ zfs.abd.scatter.data_size size of scatter ABDs
Amount of data stored in all scatter ABDs tracked by scatter_cnt.
@ zfs.abd.scatter.chunk_waste size of scatter ABD chunk waste
The amount of space wasted at the end of the last chunk across all
scatter ABDs tracked by scatter_cnt.
@ zfs.abd.scatter.order_0 number of 0th order scatter ABDs
The number of compound allocations of the 0th order.  These
allocations are spread over all currently allocated ABDs, and
act as a measure of memory fragmentation.
@ zfs.abd.scatter.order_1 number of 1st order scatter ABDs
The number of compound allocations of the 1st order.  These
allocations are spread over all currently allocated ABDs, and
act as a measure of memory fragmentation.
@ zfs.abd.scatter.order_2 number of 2nd order scatter ABDs
The number of compound allocations of the 2nd order.  These
allocations are spread over all currently allocated ABDs, and
act as a measure of memory fragmentation.
@ zfs.abd.scatter.order_3 number of 3rd order scatter ABDs
The number of compound allocations of the 3rd order.  These
allocations are spread over all currently allocated ABDs, and
act as a measure of memory fragmentation.
@ zfs.abd.scatter.order_4 number of 4th order scatter ABDs
The number of compound allocations of the 4th order.  These
allocations are spread over all currently allocated ABDs, and
act as a measure of memory fragmentation.
@ zfs.abd.scatter.order_5 number of 5th order scatter ABDs
The number of compound allocations of the 5th order.  These
allocations are spread over all currently allocated ABDs, and
act as a measure of memory fragmentation.
@ zfs.abd.scatter.order_6 number of 6th order scatter ABDs
The number of compound allocations of the 6th order.  These
allocations are spread over all currently allocated ABDs, and
act as a measure of memory fragmentation.
@ zfs.abd.scatter.order_7 number of 7th order scatter ABDs
The number of compound allocations of the 7th order.  These
allocations are spread over all currently allocated ABDs, and
act as a measure of memory fragmentation.
@ zfs.abd.scatter.order_8 number of 8th order scatter ABDs
The number of compound allocations of the 8th order.  These
allocations are spread over all currently allocated ABDs, and
act as a measure of memory fragmentation.
@ zfs.abd.scatter.order_9 number of 9th order scatter ABDs
The number of compound allocations of the 9th order.  These
allocations are spread over all currently allocated ABDs, and
act as a measure of memory fragmentation.
@ zfs.abd.scatter.order_10 number of 10th order scatter ABDs
The number of compound allocations of the 10th order.  These
allocations are spread over all currently allocated ABDs, and
act as a measure of memory fragmentation.
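Since fragmented memory starves the higher orders, the share of
order-0 allocations is a rough fragmentation indicator.  A sketch over
the matching abdstats fields (scatter_order_0 ... scatter_order_10 are
an assumption about the running OpenZFS build), reusing kstats() from
the first sketch:

    # Sketch: what fraction of compound allocations fell back to order 0?
    s = kstats("/proc/spl/kstat/zfs/abdstats")
    orders = [s["scatter_order_%d" % i] for i in range(11)]
    if sum(orders):
        print("order-0 share: %.2f%%" % (100.0 * orders[0] / sum(orders)))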
@ zfs.abd.scatter.page_multi_chunk number of multi-chunk scatter ABDs
The number of scatter ABDs which contain multiple chunks.
ABDs are preferentially allocated from the minimum number of
contiguous multi-page chunks; a single chunk is optimal.
@ zfs.abd.scatter.page_multi_zone number of split scatter ABDs
The number of scatter ABDs which are split across memory zones.
ABDs are preferentially allocated using pages from a single zone.
@ zfs.abd.scatter.page_alloc_retry number of scatter ABD allocation retries
The total number of retries encountered when attempting to
allocate the pages to populate the scatter ABD.
@ zfs.abd.scatter.sg_table_retry number of scatter ABD SG table allocation retries
The total number of retries encountered when attempting to
allocate the SG table for an ABD.
@ zfs.dbuf.cache.count number of elements in the dbuf cache
@ zfs.dbuf.cache.size_bytes size of the dbuf cache
@ zfs.dbuf.cache.size_bytes_max max size of the dbuf cache
@ zfs.dbuf.cache.target_bytes target size of the dbuf cache
@ zfs.dbuf.cache.lowater_bytes low water bound on the dbuf cache size
@ zfs.dbuf.cache.hiwater_bytes high water bound on the dbuf cache size
@ zfs.dbuf.cache.total_evicts total number of dbuf cache evictions that have occurred
@ zfs.dbuf.cache.level_0 number of level 0 dbufs in the cache
@ zfs.dbuf.cache.level_1 number of level 1 dbufs in the cache
@ zfs.dbuf.cache.level_2 number of level 2 dbufs in the cache
@ zfs.dbuf.cache.level_3 number of level 3 dbufs in the cache
@ zfs.dbuf.cache.level_4 number of level 4 dbufs in the cache
@ zfs.dbuf.cache.level_5 number of level 5 dbufs in the cache
@ zfs.dbuf.cache.level_6 number of level 6 dbufs in the cache
@ zfs.dbuf.cache.level_7 number of level 7 dbufs in the cache
@ zfs.dbuf.cache.level_8 number of level 8 dbufs in the cache
@ zfs.dbuf.cache.level_9 number of level 9 dbufs in the cache
@ zfs.dbuf.cache.level_10 number of level 10 dbufs in the cache
@ zfs.dbuf.cache.level_11 number of level 11 dbufs in the cache
@ zfs.dbuf.cache.level_0_bytes total size of level 0 dbufs in the cache
@ zfs.dbuf.cache.level_1_bytes total size of level 1 dbufs in the cache
@ zfs.dbuf.cache.level_2_bytes total size of level 2 dbufs in the cache
@ zfs.dbuf.cache.level_3_bytes total size of level 3 dbufs in the cache
@ zfs.dbuf.cache.level_4_bytes total size of level 4 dbufs in the cache
@ zfs.dbuf.cache.level_5_bytes total size of level 5 dbufs in the cache
@ zfs.dbuf.cache.level_6_bytes total size of level 6 dbufs in the cache
@ zfs.dbuf.cache.level_7_bytes total size of level 7 dbufs in the cache
@ zfs.dbuf.cache.level_8_bytes total size of level 8 dbufs in the cache
@ zfs.dbuf.cache.level_9_bytes total size of level 9 dbufs in the cache
@ zfs.dbuf.cache.level_10_bytes total size of level 10 dbufs in the cache
@ zfs.dbuf.cache.level_11_bytes total size of level 11 dbufs in the cache
@ zfs.dbuf.hash.hits dbuf hash table hits
@ zfs.dbuf.hash.misses dbuf hash table misses
@ zfs.dbuf.hash.collisions dbuf hash table collisions
@ zfs.dbuf.hash.elements number of dbuf hash table elements
@ zfs.dbuf.hash.elements_max max number of dbuf hash table elements
@ zfs.dbuf.hash.chains number of dbuf hash chains
Number of sublists containing more than one dbuf in the dbuf
hash table. Keep track of the longest hash chain.
@ zfs.dbuf.hash.chain_max max number of dbuf hash chains
@ zfs.dbuf.hash.insert_race number of dbuf hash insert races
Number of times a dbuf_create() discovers that a dbuf was
already created and in the dbuf hash table.
@ zfs.dbuf.metadata_cache.count number of elements in the metadata dbuf cache
@ zfs.dbuf.metadata_cache.size_bytes size of the metadata dbuf cache
@ zfs.dbuf.metadata_cache.size_bytes_max max size of the metadata dbuf cache
@ zfs.dbuf.metadata_cache.overflow number of metadata dbuf cache overflows
For diagnostic purposes, this is incremented whenever we can't add
something to the metadata cache because it's full, and instead put
the data in the regular dbuf cache.
@ zfs.dmu_tx.assigned number of DMU transactions assigned
@ zfs.dmu_tx.delay number of DMU transactions delayed
@ zfs.dmu_tx.error number of DMU transaction errors
@ zfs.dmu_tx.suspended number of DMU transactions suspended
@ zfs.dmu_tx.group number of DMU transaction groups
@ zfs.dmu_tx.memory_reserve size of memory reserved for DMU transactions
@ zfs.dmu_tx.memory_reclaim size of memory reclaimed for DMU transactions
@ zfs.dmu_tx.dirty_throttle number of DMU transaction throttles
@ zfs.dmu_tx.dirty_delay number of DMU transaction delays
@ zfs.dmu_tx.dirty_over_max number of DMU transactions over the maximum limit
@ zfs.dmu_tx.dirty_frees_delay number of DMU transaction frees delayed
@ zfs.dmu_tx.quota DMU transaction quota
@ zfs.dnode.hold.dbuf_hold number of failed dnode dbuf holds
Number of failed attempts to hold a meta dnode dbuf.
@ zfs.dnode.hold.dbuf_read number of failed dnode dbuf reads
Number of failed attempts to read a meta dnode dbuf.
@ zfs.dnode.hold.alloc_hits number of dnode allocation hits
Number of times dnode_hold(..., DNODE_MUST_BE_ALLOCATED) was able
to hold the requested object number which was allocated.  This is
the common case when looking up any allocated object number.
@ zfs.dnode.hold.alloc_misses number of dnode allocation misses
Number of times dnode_hold(..., DNODE_MUST_BE_ALLOCATED) was not
able to hold the requested object number because it was not allocated.
@ zfs.dnode.hold.alloc_interior number of failed dnode dbuf holds due to interior references
Number of times dnode_hold(..., DNODE_MUST_BE_ALLOCATED) was not
able to hold the requested object number because the object number
refers to an interior large dnode slot.
@ zfs.dnode.hold.alloc_lock_retry number of dnode dbuf lock retries
Number of times dnode_hold(..., DNODE_MUST_BE_ALLOCATED) needed
to retry acquiring slot zrl locks due to contention.
@ zfs.dnode.hold.alloc_lock_misses number of dnode dbuf lock misses
Number of times dnode_hold(..., DNODE_MUST_BE_ALLOCATED) did not
need to create the dnode because another thread did so after
dropping the read lock but before acquiring the write lock.
@ zfs.dnode.hold.alloc_type_none number of unallocated dnode dbuf holds
Number of times dnode_hold(..., DNODE_MUST_BE_ALLOCATED) found
a free dnode instantiated by dnode_create() but not yet allocated
by dnode_allocate().
@ zfs.dnode.hold.free_hits number of dnode dbuf hold hits
Number of times dnode_hold(..., DNODE_MUST_BE_FREE) was able
to hold the requested range of free dnode slots.
@ zfs.dnode.hold.free_misses number of dnode dbuf hold misses
Number of times dnode_hold(..., DNODE_MUST_BE_FREE) was not
able to hold the requested range of free dnode slots because
at least one slot was allocated.
@ zfs.dnode.hold.free_lock_misses number of dnode dbuf free lock misses
Number of times dnode_hold(..., DNODE_MUST_BE_FREE) was not
able to hold the requested range of free dnode slots because
after acquiring the zrl lock at least one slot was allocated.
@ zfs.dnode.hold.free_lock_retry number of dnode dbuf free lock retries
Number of times dnode_hold(..., DNODE_MUST_BE_FREE) needed
to retry acquiring slot zrl locks due to contention.
@ zfs.dnode.hold.free_refcount number of referenced dnode dbufs
Number of times dnode_hold(..., DNODE_MUST_BE_FREE) requested
a range of dnode slots which were held by another thread.
@ zfs.dnode.hold.free_overflow number of dnode dbuf free overflows
Number of times dnode_hold(..., DNODE_MUST_BE_FREE) requested
a range of dnode slots which would overflow the dnode_phys_t.
@ zfs.dnode.free_interior_lock_retry number of dnode dbuf free lock retries due to interior references
Number of times dnode_free_interior_slots() needed to retry
acquiring a slot zrl lock due to contention.
@ zfs.dnode.allocate number of new dnode allocations
Number of new dnodes allocated by dnode_allocate().
@ zfs.dnode.reallocate number of dnode reallocations
Number of dnodes re-allocated by dnode_reallocate().
@ zfs.dnode.buf_evict number of dnode dbuf evictions
Number of meta dnode dbufs evicted.
@ zfs.dnode.alloc.next_chunk number of dnode allocations to another chunk
Number of times dmu_object_alloc*() reached the end of the existing
object ID chunk and advanced to a new one.
@ zfs.dnode.alloc.race number of dnode allocation race conditions
Number of times multiple threads attempted to allocate a dnode
from the same block of free dnodes.
@ zfs.dnode.alloc.next_block number of dnode allocations to another block
Number of times dmu_object_alloc*() was forced to advance to the
next meta dnode dbuf due to an error from dmu_object_next().
@ zfs.dnode.move.invalid number of invalid dnode moves
Statistics for tracking dnodes which have been moved.
@ zfs.dnode.move.recheck1 number of recheck1 dnode moves
Statistics for tracking dnodes which have been moved.
@ zfs.dnode.move.recheck2 number of recheck2 dnode moves
Statistics for tracking dnodes which have been moved.
@ zfs.dnode.move.special number of special dnode moves
Statistics for tracking dnodes which have been moved.
@ zfs.dnode.move.handle number of handle dnode moves
Statistics for tracking dnodes which have been moved.
@ zfs.dnode.move.rwlock number of RW lock dnode moves
Statistics for tracking dnodes which have been moved.
@ zfs.dnode.move.active number of active dnode moves
Statistics for tracking dnodes which have been moved.
@ zfs.fm.erpt_dropped number of erpts dropped on post
@ zfs.fm.erpt_set_failed number of erpt set failures
@ zfs.fm.fmri_set_failed number of fmri set failures
@ zfs.fm.payload_set_failed number of payload set failures
@ zfs.vdev.cache.delegations vdev cache delegations
@ zfs.vdev.cache.hits vdev cache hits
@ zfs.vdev.cache.misses vdev cache misses
@ zfs.vdev.mirror.rotating_linear vdev mirror linear rotations
New IO directly follows the last IO.
@ zfs.vdev.mirror.rotating_offset vdev mirror offset rotations
New IO is within zfs_vdev_mirror_rotating_seek_offset of the last IO.
@ zfs.vdev.mirror.rotating_seek vdev mirror random seeks
New IO requires a random seek.
@ zfs.vdev.mirror.non_rotating_linear vdev mirror non-rotating linear accesses
New IO directly follows the last IO (non-rotating device).
@ zfs.vdev.mirror.non_rotating_seek vdev mirror non-rotating random seeks
New IO requires a random seek (non-rotating device).
@ zfs.vdev.mirror.preferred_found vdev mirror preferred found
Preferred child vdev found.
@ zfs.vdev.mirror.preferred_not_found vdev mirror preferred not found
Preferred child vdev not found or equal load.
@ zfs.xuio.onloan_read_buf number of loaned but not yet returned arc_bufs for reads
@ zfs.xuio.onloan_write_buf number of loaned but not yet returned arc_bufs for writes
@ zfs.xuio.read_buf_copied number of times a copy was made when loaning out a read buffer
@ zfs.xuio.read_buf_nocopy number of times a copy was avoided when loaning out a read buffer
@ zfs.xuio.write_buf_copied number of times a copy was made when assigning a write buffer
@ zfs.xuio.write_buf_nocopy number of times a copy was avoided when assigning a write buffer
@ zfs.zfetch.hits zfetch hits
@ zfs.zfetch.misses zfetch misses
@ zfs.zfetch.max_streams maximum number of streams per zfetch
@ zfs.zil.commit.count ZIL commit count
Number of times a ZIL commit (e.g. fsync) has been requested.
@ zfs.zil.commit.writer_count ZIL commit writer count
Number of times the ZIL has been flushed to stable storage.
This is less than zil_commit_count when commits are merged
(see the documentation above zil_commit()).
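The gap between the two counters shows how often commits were merged;
a sketch over the zil kstats, reusing kstats() from the first sketch
(the field names zil_commit_count and zil_commit_writer_count are an
assumption about the running OpenZFS build):

    # Sketch: fraction of requested commits merged into an in-flight flush.
    s = kstats("/proc/spl/kstat/zfs/zil")
    if s["zil_commit_count"]:
        merged = s["zil_commit_count"] - s["zil_commit_writer_count"]
        print("merged commits: %.2f%%" %
              (100.0 * merged / s["zil_commit_count"]))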
@ zfs.zil.itx.count ZIL transaction count
Number of transactions (reads, writes, renames, etc.)
that have been committed.
@ zfs.zil.itx.indirect_count ZIL indirect transaction count
The number of indirect transactions.
@ zfs.zil.itx.indirect_bytes ZIL indirect transaction data size
The size of the large data portions to write.
Note that bytes accumulates the length of the transactions
(i.e. data), not the actual log record sizes.
@ zfs.zil.itx.copied_count number of ZIL transactions copied
The number of transactions copied into lr_write_t.
@ zfs.zil.itx.copied_bytes size of ZIL data copied
The size of the data copied into lr_write_t.
Note that bytes accumulates the length of the transactions
(i.e. data), not the actual log record sizes.
@ zfs.zil.itx.needcopy_count number of ZIL transactions to be copied
The number of transactions that need to be copied if pushed.
@ zfs.zil.itx.needcopy_bytes size of ZIL transaction data to be copied
The size of the data that needs to be copied if pushed.
Note that bytes accumulates the length of the transactions
(i.e. data), not the actual log record sizes.
@ zfs.zil.itx.metaslab.normal_count number of ZIL transactions allocated
The number of transactions which have been allocated to the normal
(i.e. not slog) storage pool. Note that bytes accumulate
the actual log record sizes - which do not include the actual
data in case of indirect writes.
@ zfs.zil.itx.metaslab.normal_bytes size of ZIL transaction data allocated
The size of transactions which have been allocated to the normal
(i.e. not slog) storage pool. Note that bytes accumulate
the actual log record sizes - which do not include the actual
data in case of indirect writes.
@ zfs.zil.itx.metaslab.slog_count number of ZIL transactions in slog
The number of transactions which have been allocated to the slog storage pool.
If there are no separate log devices, this is the same as the
normal pool.
@ zfs.zil.itx.metaslab.slog_bytes size of ZIL transaction data in slog
The size of transactions which have been allocated to the slog storage pool.
If there are no separate log devices, this is the same as the
normal pool.
@ zfs.pool.state number corresponding to the pool state
OFFLINE = 0, ONLINE = 1, DEGRADED = 2, FAULTED = 3,
REMOVED = 4, UNAVAIL = 5, UNKNOWN = 13
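A small decoding table for these state codes, useful when rendering
the metric (names taken verbatim from the list above):

    # Sketch: map zfs.pool.state values back to their names.
    POOL_STATES = {0: "OFFLINE", 1: "ONLINE", 2: "DEGRADED", 3: "FAULTED",
                   4: "REMOVED", 5: "UNAVAIL", 13: "UNKNOWN"}
    def pool_state_name(code):
        return POOL_STATES.get(code, "UNKNOWN")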
@ zfs.pool.nread number of bytes read
@ zfs.pool.nwritten number of bytes written
@ zfs.pool.reads number of read operations
@ zfs.pool.writes number of write operations
@ zfs.pool.wtime cumulative wait (pre-service) time
@ zfs.pool.wlentime cumulative wait len*time product
@ zfs.pool.wupdate last time wait queue changed
@ zfs.pool.rtime cumulative run (service) time
@ zfs.pool.rlentime cumulative run length*time product
@ zfs.pool.rupdate last time run queue changed
@ zfs.pool.wcnt count of elements in wait state
@ zfs.pool.rcnt count of elements in run state
