18.03.2013, 20:05
holm
Since it is all the rage these days, I knotted three disks together as a raidz1 and installed my FreeBSD 9-stable on ZFS. So far I have no complaints about the performance or anything, but that disk rattling every 5 seconds is getting on my nerves... Can the ARC be throttled a bit?
Is anyone well versed in this? Olli?
Regards,
Holm
Source code:

------------------------------------------------------------------------
ZFS Subsystem Report                            Mon Mar 18 20:00:43 2013
------------------------------------------------------------------------

System Information:

        Kernel Version:                         901503 (osreldate)
        Hardware Platform:                      amd64
        Processor Architecture:                 amd64

        ZFS Storage pool Version:               5000
        ZFS Filesystem Version:                 5

FreeBSD 9.1-STABLE #5 r248172: Mon Mar 11 23:32:53 CET 2013 holm
 8:00PM  up 11:37, 6 users, load averages: 0.21, 0.43, 0.53

------------------------------------------------------------------------

System Memory:

        9.20%   719.80  MiB Active,     5.83%   456.01  MiB Inact
        56.64%  4.33    GiB Wired,      0.11%   8.65    MiB Cache
        28.19%  2.15    GiB Free,       0.04%   3.04    MiB Gap

        Real Installed:                         8.00    GiB
        Real Available:                 98.83%  7.91    GiB
        Real Managed:                   96.68%  7.64    GiB

        Logical Total:                          8.00    GiB
        Logical Used:                   67.39%  5.39    GiB
        Logical Free:                   32.61%  2.61    GiB

Kernel Memory:                                  3.19    GiB
        Data:                           99.07%  3.16    GiB
        Text:                           0.93%   30.49   MiB

Kernel Memory Map:                              7.63    GiB
        Size:                           38.75%  2.96    GiB
        Free:                           61.25%  4.68    GiB

------------------------------------------------------------------------

ARC Summary: (HEALTHY)
        Memory Throttle Count:                  0

ARC Misc:
        Deleted:                                10
        Recycle Misses:                         0
        Mutex Misses:                           0
        Evict Skips:                            0

ARC Size:                               48.16%  3.20    GiB
        Target Size: (Adaptive)         100.00% 6.64    GiB
        Min Size (Hard Limit):          12.50%  850.43  MiB
        Max Size (High Water):          8:1     6.64    GiB

ARC Size Breakdown:
        Recently Used Cache Size:       50.00%  3.32    GiB
        Frequently Used Cache Size:     50.00%  3.32    GiB

ARC Hash Breakdown:
        Elements Max:                           449.08k
        Elements Current:               100.00% 449.08k
        Collisions:                             984.38k
        Chain Max:                              13
        Chains:                                 112.94k

------------------------------------------------------------------------

ARC Efficiency:                                 6.55m
        Cache Hit Ratio:                99.53%  6.52m
        Cache Miss Ratio:               0.47%   30.88k
        Actual Hit Ratio:               99.53%  6.52m

        Data Demand Efficiency:         99.75%  4.58m

        CACHE HITS BY CACHE LIST:
          Anonymously Used:             0.00%   44
          Most Recently Used:           9.07%   591.61k
          Most Frequently Used:         90.93%  5.93m
          Most Recently Used Ghost:     0.00%   0
          Most Frequently Used Ghost:   0.00%   0

        CACHE HITS BY DATA TYPE:
          Demand Data:                  70.02%  4.57m
          Prefetch Data:                0.00%   0
          Demand Metadata:              29.98%  1.95m
          Prefetch Metadata:            0.00%   50

        CACHE MISSES BY DATA TYPE:
          Demand Data:                  37.64%  11.62k
          Prefetch Data:                0.00%   0
          Demand Metadata:              62.30%  19.24k
          Prefetch Metadata:            0.05%   16

------------------------------------------------------------------------

L2ARC is disabled

------------------------------------------------------------------------
------------------------------------------------------------------------

VDEV cache is disabled

------------------------------------------------------------------------
ZFS Tunables (sysctl):
        kern.maxusers                           384
        vm.kmem_size                            8207659008
        vm.kmem_size_scale                      1
        vm.kmem_size_min                        0
        vm.kmem_size_max                        329853485875
        vfs.zfs.l2c_only_size                   0
        vfs.zfs.mfu_ghost_data_lsize            3315314688
        vfs.zfs.mfu_ghost_metadata_lsize        247808
        vfs.zfs.mfu_ghost_size                  3315562496
        vfs.zfs.mfu_data_lsize                  1407816704
        vfs.zfs.mfu_metadata_lsize              15166464
        vfs.zfs.mfu_size                        1432965120
        vfs.zfs.mru_ghost_data_lsize            2323953152
        vfs.zfs.mru_ghost_metadata_lsize        9271296
        vfs.zfs.mru_ghost_size                  2333224448
        vfs.zfs.mru_data_lsize                  1394391552
        vfs.zfs.mru_metadata_lsize              202353664
        vfs.zfs.mru_size                        1714675200
        vfs.zfs.anon_data_lsize                 0
        vfs.zfs.anon_metadata_lsize             0
        vfs.zfs.anon_size                       16384
        vfs.zfs.l2arc_norw                      1
        vfs.zfs.l2arc_feed_again                1
        vfs.zfs.l2arc_noprefetch                1
        vfs.zfs.l2arc_feed_min_ms               200
        vfs.zfs.l2arc_feed_secs                 1
        vfs.zfs.l2arc_headroom                  2
        vfs.zfs.l2arc_write_boost               8388608
        vfs.zfs.l2arc_write_max                 8388608
        vfs.zfs.arc_meta_limit                  1783479296
        vfs.zfs.arc_meta_used                   633718544
        vfs.zfs.arc_min                         891739648
        vfs.zfs.arc_max                         7133917184
        vfs.zfs.dedup.prefetch                  1
        vfs.zfs.mdcomp_disable                  0
        vfs.zfs.nopwrite_enabled                1
        vfs.zfs.write_limit_override            0
        vfs.zfs.write_limit_inflated            25469177856
        vfs.zfs.write_limit_max                 1061215744
        vfs.zfs.write_limit_min                 33554432
        vfs.zfs.write_limit_shift               3
        vfs.zfs.no_write_throttle               0
        vfs.zfs.zfetch.array_rd_sz              1048576
        vfs.zfs.zfetch.block_cap                256
        vfs.zfs.zfetch.min_sec_reap             2
        vfs.zfs.zfetch.max_streams              8
        vfs.zfs.prefetch_disable                1
        vfs.zfs.no_scrub_prefetch               0
        vfs.zfs.no_scrub_io                     0
        vfs.zfs.resilver_min_time_ms            3000
        vfs.zfs.free_min_time_ms                1000
        vfs.zfs.scan_min_time_ms                1000
        vfs.zfs.scan_idle                       50
        vfs.zfs.scrub_delay                     4
        vfs.zfs.resilver_delay                  2
        vfs.zfs.top_maxinflight                 32
        vfs.zfs.write_to_degraded               0
        vfs.zfs.mg_alloc_failures               8
        vfs.zfs.check_hostid                    1
        vfs.zfs.recover                         0
        vfs.zfs.txg.synctime_ms                 1000
        vfs.zfs.txg.timeout                     5
        vfs.zfs.vdev.cache.bshift               16
        vfs.zfs.vdev.cache.size                 0
        vfs.zfs.vdev.cache.max                  16384
        vfs.zfs.vdev.write_gap_limit            4096
        vfs.zfs.vdev.read_gap_limit             32768
        vfs.zfs.vdev.aggregation_limit          131072
        vfs.zfs.vdev.ramp_rate                  2
        vfs.zfs.vdev.time_shift                 6
        vfs.zfs.vdev.min_pending                4
        vfs.zfs.vdev.max_pending                10
        vfs.zfs.vdev.bio_flush_disable          0
        vfs.zfs.cache_flush_disable             0
        vfs.zfs.zil_replay_disable              0
        vfs.zfs.sync_pass_rewrite               2
        vfs.zfs.sync_pass_dont_compress         5
        vfs.zfs.sync_pass_deferred_free         2
        vfs.zfs.zio.use_uma                     0
        vfs.zfs.snapshot_list_prefetch          0
        vfs.zfs.version.zpl                     5
        vfs.zfs.version.spa                     5000
        vfs.zfs.version.acl                     1
        vfs.zfs.debug                           0
        vfs.zfs.super_owner                     0
------------------------------------------------------------------------
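For what it's worth: the 5-second interval matches `vfs.zfs.txg.timeout` (5 s in the report above), which controls how often ZFS flushes a transaction group to disk, and the ARC itself can be capped with a loader tunable. A minimal sketch of the usual knobs, with example values only, not tested on this box:

```shell
# /boot/loader.conf -- cap the ARC so it cannot grow toward all of RAM
# ("2G" is an example value; pick what suits the workload, takes effect at boot)
vfs.zfs.arc_max="2G"

# At runtime, the transaction-group flush interval can be stretched,
# trading the 5-second write bursts for larger, less frequent ones:
sysctl vfs.zfs.txg.timeout=10
```

A longer txg timeout means more dirty data is buffered in RAM between flushes, so this shifts the rattling rather than removing it.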
-- float R,y=1.5,x,r,A,P,B;int u,h=80,n=80,s;main(c,v)int c;char **v; {s=(c>1?(h=atoi(v[1])):h)*h/2;for(R=6./h;s%h||(y-=R,x=-2),s;4<(P=B*B)+ (r=A*A)|++u==n&&putchar(*(((--s%h)?(u<n?--u%6:6):7)+"World! \n"))&& (A=B=P=u=r=0,x+=R/2))A=B*2*A+y,B=P+x-r;}

This post was edited on 18.03.2013 at 20:14 by holm.