Re: [LKP] [rcu] kernel BUG at include/linux/pagemap.h:149!
by Frederic Weisbecker
On Fri, Sep 11, 2015 at 10:19:47AM +0800, Boqun Feng wrote:
> Subject: [PATCH 01/27] rcu: Don't disable preemption for Tiny and Tree RCU
> readers
>
> Because preempt_disable() maps to barrier() for non-debug builds,
> it forces the compiler to spill and reload registers. Because Tree
> RCU and Tiny RCU now only appear in CONFIG_PREEMPT=n builds, these
> barrier() instances generate needless extra code for each instance of
> rcu_read_lock() and rcu_read_unlock(). This extra code slows down Tree
> RCU and bloats Tiny RCU.
>
> This commit therefore removes the preempt_disable() and preempt_enable()
> from the non-preemptible implementations of __rcu_read_lock() and
> __rcu_read_unlock(), respectively.
>
> For debug purposes, preempt_disable() and preempt_enable() are still
> kept if CONFIG_PREEMPT_COUNT=y, which makes the detection of sleeping
> inside atomic sections still work in non-preemptible kernels.
>
> Signed-off-by: Boqun Feng <boqun.feng(a)gmail.com>
> Signed-off-by: Paul E. McKenney <paulmck(a)linux.vnet.ibm.com>
> ---
> include/linux/rcupdate.h | 6 ++++--
> include/linux/rcutiny.h | 1 +
> kernel/rcu/tree.c | 9 +++++++++
> 3 files changed, 14 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> index d63bb77..6c3cece 100644
> --- a/include/linux/rcupdate.h
> +++ b/include/linux/rcupdate.h
> @@ -297,12 +297,14 @@ void synchronize_rcu(void);
>
> static inline void __rcu_read_lock(void)
> {
> - preempt_disable();
> + if (IS_ENABLED(CONFIG_PREEMPT_COUNT))
> + preempt_disable();
preempt_disable() is a no-op when !CONFIG_PREEMPT_COUNT, right?
Or rather it's a barrier(), which is anyway implied by rcu_read_lock().
So perhaps we can get rid of the IS_ENABLED() check?
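For reference, here is roughly what the two mappings look like in
include/linux/preempt.h -- a paraphrase of the definitions from kernels
of that era, not an exact quote:

	#ifdef CONFIG_PREEMPT_COUNT
	#define preempt_disable() \
	do { \
		preempt_count_inc(); \
		barrier(); \
	} while (0)
	#else
	/*
	 * Without CONFIG_PREEMPT_COUNT there is no count to maintain,
	 * so both operations degenerate to compiler barriers. That is
	 * the point of the question above: barrier() is already implied
	 * by rcu_read_lock() itself, so the IS_ENABLED() check may buy
	 * nothing in non-debug builds.
	 */
	#define preempt_disable()	barrier()
	#define preempt_enable()	barrier()
	#endif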
Test monitoring on custom github repo
by Thomas Garnier
Hi,
I am working on KASLR (PIE for x86_64). I previously used Kees' (CCed)
branches for lkp bot testing, but someone told me I could ask you to add a
custom github path to monitor all branches on it.
I pushed my changes to: https://github.com/thgarnie/linux (kasrl_pie_v2
right now)
Can you add it? Anything I need to do?
Thanks,
--
Thomas
[lkp-robot] [brd] 316ba5736c: aim7.jobs-per-min -11.2% regression
by kernel test robot
Greetings,
FYI, we noticed a -11.2% regression of aim7.jobs-per-min due to commit:
commit: 316ba5736c9caa5dbcd84085989862d2df57431d ("brd: Mark as non-rotational")
https://git.kernel.org/cgit/linux/kernel/git/axboe/linux-block.git for-4.18/block
in testcase: aim7
on test machine: 40 threads Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz with 384G memory
with following parameters:
disk: 1BRD_48G
fs: btrfs
test: disk_rw
load: 1500
cpufreq_governor: performance
test-description: AIM7 is a traditional UNIX system-level benchmark suite which is used to test and measure the performance of multiuser systems.
test-url: https://sourceforge.net/projects/aimbench/files/aim-suite7/
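For context, the commit under test is essentially a one-flag change; a
sketch of its effect (paraphrased -- see the commit itself for the exact
hunk):

	/* drivers/block/brd.c, at ramdisk allocation (sketch) */
	blk_queue_flag_set(QUEUE_FLAG_NONROT, brd->brd_queue);

Marking the ramdisk non-rotational changes how upper layers treat the
device (e.g. I/O scheduling and writeback heuristics), which is
presumably where the btrfs disk_rw behavior shift below comes from.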
Details are as below:
-------------------------------------------------------------------------------------------------->
=========================================================================================
compiler/cpufreq_governor/disk/fs/kconfig/load/rootfs/tbox_group/test/testcase:
gcc-7/performance/1BRD_48G/btrfs/x86_64-rhel-7.2/1500/debian-x86_64-2016-08-31.cgz/lkp-ivb-ep01/disk_rw/aim7
commit:
522a777566 ("block: consolidate struct request timestamp fields")
316ba5736c ("brd: Mark as non-rotational")
522a777566f56696 316ba5736c9caa5dbcd8408598
---------------- --------------------------
%stddev %change %stddev
\ | \
28321 -11.2% 25147 aim7.jobs-per-min
318.19 +12.6% 358.23 aim7.time.elapsed_time
318.19 +12.6% 358.23 aim7.time.elapsed_time.max
1437526 ± 2% +14.6% 1646849 ± 2% aim7.time.involuntary_context_switches
11986 +14.2% 13691 aim7.time.system_time
73.06 ± 2% -3.6% 70.43 aim7.time.user_time
2449470 ± 2% -25.0% 1837521 ± 4% aim7.time.voluntary_context_switches
20.25 ± 58% +1681.5% 360.75 ±109% numa-meminfo.node1.Mlocked
456062 -16.3% 381859 softirqs.SCHED
9015 ± 7% -21.3% 7098 ± 22% meminfo.CmaFree
47.50 ± 58% +1355.8% 691.50 ± 92% meminfo.Mlocked
5.24 ± 3% -1.2 3.99 ± 2% mpstat.cpu.idle%
0.61 ± 2% -0.1 0.52 ± 2% mpstat.cpu.usr%
16627 +12.8% 18762 ± 4% slabinfo.Acpi-State.active_objs
16627 +12.9% 18775 ± 4% slabinfo.Acpi-State.num_objs
57.00 ± 2% +17.5% 67.00 vmstat.procs.r
20936 -24.8% 15752 ± 2% vmstat.system.cs
45474 -1.7% 44681 vmstat.system.in
6.50 ± 59% +1157.7% 81.75 ± 75% numa-vmstat.node0.nr_mlock
242870 ± 3% +13.2% 274913 ± 7% numa-vmstat.node0.nr_written
2278 ± 7% -22.6% 1763 ± 21% numa-vmstat.node1.nr_free_cma
4.75 ± 58% +1789.5% 89.75 ±109% numa-vmstat.node1.nr_mlock
88018135 ± 3% -48.9% 44980457 ± 7% cpuidle.C1.time
1398288 ± 3% -51.1% 683493 ± 9% cpuidle.C1.usage
3499814 ± 2% -38.5% 2153158 ± 5% cpuidle.C1E.time
52722 ± 4% -45.6% 28692 ± 6% cpuidle.C1E.usage
9865857 ± 3% -40.1% 5905155 ± 5% cpuidle.C3.time
69656 ± 2% -42.6% 39990 ± 5% cpuidle.C3.usage
590856 ± 2% -12.3% 517910 cpuidle.C6.usage
46160 ± 7% -53.7% 21372 ± 11% cpuidle.POLL.time
1716 ± 7% -46.6% 916.25 ± 14% cpuidle.POLL.usage
197656 +4.1% 205732 proc-vmstat.nr_active_file
191867 +4.1% 199647 proc-vmstat.nr_dirty
509282 +1.6% 517318 proc-vmstat.nr_file_pages
2282 ± 8% -24.4% 1725 ± 22% proc-vmstat.nr_free_cma
357.50 +10.6% 395.25 ± 2% proc-vmstat.nr_inactive_file
11.50 ± 58% +1397.8% 172.25 ± 93% proc-vmstat.nr_mlock
970355 ± 4% +14.6% 1111549 ± 8% proc-vmstat.nr_written
197984 +4.1% 206034 proc-vmstat.nr_zone_active_file
357.50 +10.6% 395.25 ± 2% proc-vmstat.nr_zone_inactive_file
192282 +4.1% 200126 proc-vmstat.nr_zone_write_pending
7901465 ± 3% -14.0% 6795016 ± 16% proc-vmstat.pgalloc_movable
886101 +10.2% 976329 proc-vmstat.pgfault
2.169e+12 +15.2% 2.497e+12 perf-stat.branch-instructions
0.41 -0.1 0.35 perf-stat.branch-miss-rate%
31.19 ± 2% +1.6 32.82 perf-stat.cache-miss-rate%
9.116e+09 +8.3% 9.869e+09 perf-stat.cache-misses
2.924e+10 +2.9% 3.008e+10 ± 2% perf-stat.cache-references
6712739 ± 2% -15.4% 5678643 ± 2% perf-stat.context-switches
4.02 +2.7% 4.13 perf-stat.cpi
3.761e+13 +17.3% 4.413e+13 perf-stat.cpu-cycles
606958 -13.7% 523758 ± 2% perf-stat.cpu-migrations
2.476e+12 +13.4% 2.809e+12 perf-stat.dTLB-loads
0.18 ± 2% -0.0 0.16 ± 9% perf-stat.dTLB-store-miss-rate%
1.079e+09 ± 2% -9.6% 9.755e+08 ± 9% perf-stat.dTLB-store-misses
5.933e+11 +1.6% 6.029e+11 perf-stat.dTLB-stores
9.349e+12 +14.2% 1.068e+13 perf-stat.instructions
11247 ± 11% +19.8% 13477 ± 9% perf-stat.instructions-per-iTLB-miss
0.25 -2.6% 0.24 perf-stat.ipc
865561 +10.3% 954350 perf-stat.minor-faults
2.901e+09 ± 3% +9.8% 3.186e+09 ± 3% perf-stat.node-load-misses
3.682e+09 ± 3% +11.0% 4.088e+09 ± 3% perf-stat.node-loads
3.778e+09 +4.8% 3.959e+09 ± 2% perf-stat.node-store-misses
5.079e+09 +6.4% 5.402e+09 perf-stat.node-stores
865565 +10.3% 954352 perf-stat.page-faults
51.75 ± 5% -12.5% 45.30 ± 10% sched_debug.cfs_rq:/.load_avg.avg
316.35 ± 3% +17.2% 370.81 ± 8% sched_debug.cfs_rq:/.util_est_enqueued.stddev
15294 ± 30% +234.9% 51219 ± 76% sched_debug.cpu.avg_idle.min
299443 ± 3% -7.3% 277566 ± 5% sched_debug.cpu.avg_idle.stddev
1182 ± 19% -26.3% 872.02 ± 13% sched_debug.cpu.nr_load_updates.stddev
1.22 ± 8% +21.7% 1.48 ± 6% sched_debug.cpu.nr_running.avg
2.75 ± 10% +26.2% 3.47 ± 6% sched_debug.cpu.nr_running.max
0.58 ± 7% +24.2% 0.73 ± 6% sched_debug.cpu.nr_running.stddev
77148 -20.0% 61702 ± 7% sched_debug.cpu.nr_switches.avg
70024 -24.8% 52647 ± 8% sched_debug.cpu.nr_switches.min
6662 ± 6% +61.9% 10789 ± 24% sched_debug.cpu.nr_switches.stddev
80.45 ± 18% -19.1% 65.05 ± 6% sched_debug.cpu.nr_uninterruptible.stddev
76819 -19.3% 62008 ± 8% sched_debug.cpu.sched_count.avg
70616 -23.5% 53996 ± 8% sched_debug.cpu.sched_count.min
5494 ± 9% +85.3% 10179 ± 26% sched_debug.cpu.sched_count.stddev
16936 -52.9% 7975 ± 9% sched_debug.cpu.sched_goidle.avg
19281 -49.9% 9666 ± 7% sched_debug.cpu.sched_goidle.max
15417 -54.8% 6962 ± 10% sched_debug.cpu.sched_goidle.min
875.00 ± 6% -35.0% 569.09 ± 13% sched_debug.cpu.sched_goidle.stddev
40332 -23.5% 30851 ± 7% sched_debug.cpu.ttwu_count.avg
35074 -26.3% 25833 ± 6% sched_debug.cpu.ttwu_count.min
3239 ± 8% +67.4% 5422 ± 28% sched_debug.cpu.ttwu_count.stddev
5232 +27.4% 6665 ± 13% sched_debug.cpu.ttwu_local.avg
15877 ± 12% +77.5% 28184 ± 27% sched_debug.cpu.ttwu_local.max
2530 ± 10% +95.9% 4956 ± 27% sched_debug.cpu.ttwu_local.stddev
2.52 ± 7% -0.6 1.95 ± 3% perf-profile.calltrace.cycles-pp.btrfs_dirty_pages.__btrfs_buffered_write.btrfs_file_write_iter.__vfs_write.vfs_write
1.48 ± 12% -0.5 1.01 ± 4% perf-profile.calltrace.cycles-pp.btrfs_get_extent.btrfs_dirty_pages.__btrfs_buffered_write.btrfs_file_write_iter.__vfs_write
1.18 ± 16% -0.4 0.76 ± 7% perf-profile.calltrace.cycles-pp.btrfs_search_slot.btrfs_lookup_file_extent.btrfs_get_extent.btrfs_dirty_pages.__btrfs_buffered_write
1.18 ± 16% -0.4 0.76 ± 7% perf-profile.calltrace.cycles-pp.btrfs_lookup_file_extent.btrfs_get_extent.btrfs_dirty_pages.__btrfs_buffered_write.btrfs_file_write_iter
0.90 ± 17% -0.3 0.56 ± 4% perf-profile.calltrace.cycles-pp.__dentry_kill.dentry_kill.dput.__fput.task_work_run
0.90 ± 17% -0.3 0.56 ± 4% perf-profile.calltrace.cycles-pp.evict.__dentry_kill.dentry_kill.dput.__fput
0.90 ± 17% -0.3 0.56 ± 4% perf-profile.calltrace.cycles-pp.dentry_kill.dput.__fput.task_work_run.exit_to_usermode_loop
0.90 ± 18% -0.3 0.56 ± 4% perf-profile.calltrace.cycles-pp.btrfs_evict_inode.evict.__dentry_kill.dentry_kill.dput
0.90 ± 17% -0.3 0.57 ± 5% perf-profile.calltrace.cycles-pp.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.90 ± 17% -0.3 0.57 ± 5% perf-profile.calltrace.cycles-pp.task_work_run.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.90 ± 17% -0.3 0.57 ± 5% perf-profile.calltrace.cycles-pp.__fput.task_work_run.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.90 ± 17% -0.3 0.57 ± 5% perf-profile.calltrace.cycles-pp.dput.__fput.task_work_run.exit_to_usermode_loop.do_syscall_64
1.69 -0.1 1.54 ± 2% perf-profile.calltrace.cycles-pp.lock_and_cleanup_extent_if_need.__btrfs_buffered_write.btrfs_file_write_iter.__vfs_write.vfs_write
0.87 ± 4% -0.1 0.76 ± 2% perf-profile.calltrace.cycles-pp.__clear_extent_bit.clear_extent_bit.lock_and_cleanup_extent_if_need.__btrfs_buffered_write.btrfs_file_write_iter
0.87 ± 4% -0.1 0.76 ± 2% perf-profile.calltrace.cycles-pp.clear_extent_bit.lock_and_cleanup_extent_if_need.__btrfs_buffered_write.btrfs_file_write_iter.__vfs_write
0.71 ± 6% -0.1 0.61 ± 2% perf-profile.calltrace.cycles-pp.clear_state_bit.__clear_extent_bit.clear_extent_bit.lock_and_cleanup_extent_if_need.__btrfs_buffered_write
0.69 ± 6% -0.1 0.60 ± 2% perf-profile.calltrace.cycles-pp.btrfs_clear_bit_hook.clear_state_bit.__clear_extent_bit.clear_extent_bit.lock_and_cleanup_extent_if_need
96.77 +0.6 97.33 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
0.00 +0.6 0.56 ± 3% perf-profile.calltrace.cycles-pp.can_overcommit.reserve_metadata_bytes.btrfs_delalloc_reserve_metadata.__btrfs_buffered_write.btrfs_file_write_iter
96.72 +0.6 97.29 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
43.13 +0.8 43.91 perf-profile.calltrace.cycles-pp.btrfs_inode_rsv_release.__btrfs_buffered_write.btrfs_file_write_iter.__vfs_write.vfs_write
42.37 +0.8 43.16 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.block_rsv_release_bytes.btrfs_inode_rsv_release.__btrfs_buffered_write
43.11 +0.8 43.89 perf-profile.calltrace.cycles-pp.block_rsv_release_bytes.btrfs_inode_rsv_release.__btrfs_buffered_write.btrfs_file_write_iter.__vfs_write
42.96 +0.8 43.77 perf-profile.calltrace.cycles-pp._raw_spin_lock.block_rsv_release_bytes.btrfs_inode_rsv_release.__btrfs_buffered_write.btrfs_file_write_iter
95.28 +0.9 96.23 perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
95.22 +1.0 96.18 perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
94.88 +1.0 95.85 perf-profile.calltrace.cycles-pp.__vfs_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
94.83 +1.0 95.80 perf-profile.calltrace.cycles-pp.btrfs_file_write_iter.__vfs_write.vfs_write.ksys_write.do_syscall_64
94.51 +1.0 95.50 perf-profile.calltrace.cycles-pp.__btrfs_buffered_write.btrfs_file_write_iter.__vfs_write.vfs_write.ksys_write
42.44 +1.1 43.52 perf-profile.calltrace.cycles-pp._raw_spin_lock.reserve_metadata_bytes.btrfs_delalloc_reserve_metadata.__btrfs_buffered_write.btrfs_file_write_iter
42.09 +1.1 43.18 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.reserve_metadata_bytes.btrfs_delalloc_reserve_metadata.__btrfs_buffered_write
44.07 +1.2 45.29 perf-profile.calltrace.cycles-pp.btrfs_delalloc_reserve_metadata.__btrfs_buffered_write.btrfs_file_write_iter.__vfs_write.vfs_write
43.42 +1.3 44.69 perf-profile.calltrace.cycles-pp.reserve_metadata_bytes.btrfs_delalloc_reserve_metadata.__btrfs_buffered_write.btrfs_file_write_iter.__vfs_write
2.06 ± 18% -0.9 1.21 ± 6% perf-profile.children.cycles-pp.btrfs_search_slot
2.54 ± 7% -0.6 1.96 ± 3% perf-profile.children.cycles-pp.btrfs_dirty_pages
1.05 ± 24% -0.5 0.52 ± 9% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
1.50 ± 12% -0.5 1.03 ± 4% perf-profile.children.cycles-pp.btrfs_get_extent
1.22 ± 15% -0.4 0.79 ± 8% perf-profile.children.cycles-pp.btrfs_lookup_file_extent
0.81 ± 5% -0.4 0.41 ± 6% perf-profile.children.cycles-pp.btrfs_calc_reclaim_metadata_size
0.74 ± 24% -0.4 0.35 ± 9% perf-profile.children.cycles-pp.btrfs_lock_root_node
0.74 ± 24% -0.4 0.35 ± 9% perf-profile.children.cycles-pp.btrfs_tree_lock
0.90 ± 17% -0.3 0.56 ± 4% perf-profile.children.cycles-pp.__dentry_kill
0.90 ± 17% -0.3 0.56 ± 4% perf-profile.children.cycles-pp.evict
0.90 ± 17% -0.3 0.56 ± 4% perf-profile.children.cycles-pp.dentry_kill
0.90 ± 18% -0.3 0.56 ± 4% perf-profile.children.cycles-pp.btrfs_evict_inode
0.91 ± 18% -0.3 0.57 ± 4% perf-profile.children.cycles-pp.exit_to_usermode_loop
0.52 ± 20% -0.3 0.18 ± 14% perf-profile.children.cycles-pp.do_idle
0.90 ± 17% -0.3 0.57 ± 5% perf-profile.children.cycles-pp.task_work_run
0.90 ± 17% -0.3 0.57 ± 5% perf-profile.children.cycles-pp.__fput
0.90 ± 18% -0.3 0.57 ± 4% perf-profile.children.cycles-pp.dput
0.51 ± 20% -0.3 0.18 ± 14% perf-profile.children.cycles-pp.secondary_startup_64
0.51 ± 20% -0.3 0.18 ± 14% perf-profile.children.cycles-pp.cpu_startup_entry
0.50 ± 21% -0.3 0.17 ± 16% perf-profile.children.cycles-pp.start_secondary
0.47 ± 20% -0.3 0.16 ± 13% perf-profile.children.cycles-pp.cpuidle_enter_state
0.47 ± 19% -0.3 0.16 ± 13% perf-profile.children.cycles-pp.intel_idle
0.61 ± 20% -0.3 0.36 ± 11% perf-profile.children.cycles-pp.btrfs_tree_read_lock
0.47 ± 26% -0.3 0.21 ± 10% perf-profile.children.cycles-pp.prepare_to_wait_event
0.64 ± 18% -0.2 0.39 ± 9% perf-profile.children.cycles-pp.btrfs_read_lock_root_node
0.40 ± 22% -0.2 0.21 ± 5% perf-profile.children.cycles-pp.btrfs_clear_path_blocking
0.38 ± 23% -0.2 0.19 ± 13% perf-profile.children.cycles-pp.finish_wait
1.51 ± 3% -0.2 1.35 ± 2% perf-profile.children.cycles-pp.__clear_extent_bit
1.71 -0.1 1.56 ± 2% perf-profile.children.cycles-pp.lock_and_cleanup_extent_if_need
0.29 ± 25% -0.1 0.15 ± 10% perf-profile.children.cycles-pp.btrfs_orphan_del
0.27 ± 27% -0.1 0.12 ± 8% perf-profile.children.cycles-pp.btrfs_del_orphan_item
0.33 ± 18% -0.1 0.19 ± 9% perf-profile.children.cycles-pp.queued_read_lock_slowpath
0.33 ± 19% -0.1 0.20 ± 4% perf-profile.children.cycles-pp.__wake_up_common_lock
0.45 ± 15% -0.1 0.34 ± 2% perf-profile.children.cycles-pp.btrfs_alloc_data_chunk_ondemand
0.47 ± 16% -0.1 0.36 ± 4% perf-profile.children.cycles-pp.btrfs_check_data_free_space
0.91 ± 4% -0.1 0.81 ± 3% perf-profile.children.cycles-pp.clear_extent_bit
1.07 ± 5% -0.1 0.97 perf-profile.children.cycles-pp.__set_extent_bit
0.77 ± 6% -0.1 0.69 ± 3% perf-profile.children.cycles-pp.btrfs_clear_bit_hook
0.17 ± 20% -0.1 0.08 ± 10% perf-profile.children.cycles-pp.queued_write_lock_slowpath
0.16 ± 22% -0.1 0.08 ± 24% perf-profile.children.cycles-pp.btrfs_lookup_inode
0.21 ± 17% -0.1 0.14 ± 19% perf-profile.children.cycles-pp.__btrfs_update_delayed_inode
0.26 ± 12% -0.1 0.18 ± 13% perf-profile.children.cycles-pp.btrfs_async_run_delayed_root
0.52 ± 5% -0.1 0.45 perf-profile.children.cycles-pp.set_extent_bit
0.45 ± 5% -0.1 0.40 ± 3% perf-profile.children.cycles-pp.alloc_extent_state
0.11 ± 17% -0.1 0.06 ± 11% perf-profile.children.cycles-pp.btrfs_clear_lock_blocking_rw
0.28 ± 9% -0.0 0.23 ± 3% perf-profile.children.cycles-pp.btrfs_drop_pages
0.07 -0.0 0.03 ±100% perf-profile.children.cycles-pp.btrfs_set_lock_blocking_rw
0.39 ± 3% -0.0 0.34 ± 3% perf-profile.children.cycles-pp.get_alloc_profile
0.33 ± 7% -0.0 0.29 perf-profile.children.cycles-pp.btrfs_set_extent_delalloc
0.38 ± 2% -0.0 0.35 ± 4% perf-profile.children.cycles-pp.__set_page_dirty_nobuffers
0.49 ± 3% -0.0 0.46 ± 3% perf-profile.children.cycles-pp.pagecache_get_page
0.18 ± 4% -0.0 0.15 ± 2% perf-profile.children.cycles-pp.truncate_inode_pages_range
0.08 ± 5% -0.0 0.05 ± 9% perf-profile.children.cycles-pp.btrfs_set_path_blocking
0.08 ± 6% -0.0 0.06 ± 6% perf-profile.children.cycles-pp.truncate_cleanup_page
0.80 ± 4% +0.2 0.95 ± 2% perf-profile.children.cycles-pp.can_overcommit
96.84 +0.5 97.37 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
96.80 +0.5 97.35 perf-profile.children.cycles-pp.do_syscall_64
43.34 +0.8 44.17 perf-profile.children.cycles-pp.btrfs_inode_rsv_release
43.49 +0.8 44.32 perf-profile.children.cycles-pp.block_rsv_release_bytes
95.32 +0.9 96.26 perf-profile.children.cycles-pp.ksys_write
95.26 +0.9 96.20 perf-profile.children.cycles-pp.vfs_write
94.91 +1.0 95.88 perf-profile.children.cycles-pp.__vfs_write
94.84 +1.0 95.81 perf-profile.children.cycles-pp.btrfs_file_write_iter
94.55 +1.0 95.55 perf-profile.children.cycles-pp.__btrfs_buffered_write
86.68 +1.0 87.70 perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
44.08 +1.2 45.31 perf-profile.children.cycles-pp.btrfs_delalloc_reserve_metadata
43.49 +1.3 44.77 perf-profile.children.cycles-pp.reserve_metadata_bytes
87.59 +1.8 89.38 perf-profile.children.cycles-pp._raw_spin_lock
0.47 ± 19% -0.3 0.16 ± 13% perf-profile.self.cycles-pp.intel_idle
0.33 ± 6% -0.1 0.18 ± 6% perf-profile.self.cycles-pp.get_alloc_profile
0.27 ± 8% -0.0 0.22 ± 4% perf-profile.self.cycles-pp.btrfs_drop_pages
0.07 -0.0 0.03 ±100% perf-profile.self.cycles-pp.btrfs_set_lock_blocking_rw
0.14 ± 5% -0.0 0.12 ± 6% perf-profile.self.cycles-pp.clear_page_dirty_for_io
0.09 ± 5% -0.0 0.07 ± 10% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.17 ± 4% +0.1 0.23 ± 3% perf-profile.self.cycles-pp.reserve_metadata_bytes
0.31 ± 7% +0.1 0.45 ± 2% perf-profile.self.cycles-pp.can_overcommit
86.35 +1.0 87.39 perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
aim7.jobs-per-min
29000 +-+-----------------------------------------------------------------+
28500 +-+ +.. + +..+.. +.. |
|..+ +.+..+.. : .. + .+.+..+..+.+.. .+..+.. + + + |
28000 +-+ + .. : + +. + + + |
27500 +-+ + + |
| |
27000 +-+ |
26500 +-+ |
26000 +-+ |
| |
25500 +-+ O O O O O |
25000 +-+ O O O O O O O O O
| O O O O O O O O |
24500 O-+O O O O |
24000 +-+-----------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Xiaolong
[lkp-robot] [sched/fair] d519329f72: unixbench.score -9.9% regression
by kernel test robot
Greetings,
FYI, we noticed a -9.9% regression of unixbench.score due to commit:
commit: d519329f72a6f36bc4f2b85452640cfe583b4f81 ("sched/fair: Update util_est only on util_avg updates")
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master
in testcase: unixbench
on test machine: 8 threads Intel(R) Core(TM) i7 CPU 870 @ 2.93GHz with 6G memory
with following parameters:
runtime: 300s
nr_task: 100%
test: execl
test-description: UnixBench is the original BYTE UNIX benchmark suite; it aims to test the performance of Unix-like systems.
test-url: https://github.com/kdlucas/byte-unixbench
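For context, the commit under test stops refreshing a task's estimated
utilization on every dequeue and instead skips the update when PELT's
util_avg has not changed since enqueue. A rough sketch of the mechanism
(paraphrased from kernel/sched/fair.c of that era; names abbreviated):

	/* LSB of util_est.enqueued doubles as a "util_avg unchanged" flag */
	#define UTIL_AVG_UNCHANGED 0x1

	static void util_est_dequeue_sketch(struct task_struct *p)
	{
		struct util_est ue = READ_ONCE(p->se.avg.util_est);

		/* util_avg not updated since enqueue: skip the EWMA update */
		if (ue.enqueued & UTIL_AVG_UNCHANGED)
			return;

		/* ... otherwise update the util_est EWMA as before ... */
	}

The flag is cleared whenever util_avg itself is updated, so the cost of
the util_est update is only paid when there is new information.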
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/kconfig/nr_task/rootfs/runtime/tbox_group/test/testcase:
gcc-7/x86_64-rhel-7.2/100%/debian-x86_64-2016-08-31.cgz/300s/nhm-white/execl/unixbench
commit:
a07630b8b2 ("sched/cpufreq/schedutil: Use util_est for OPP selection")
d519329f72 ("sched/fair: Update util_est only on util_avg updates")
a07630b8b2c16f82 d519329f72a6f36bc4f2b85452
---------------- --------------------------
%stddev %change %stddev
\ | \
4626 -9.9% 4167 unixbench.score
3495362 ± 4% +70.4% 5957769 ± 2% unixbench.time.involuntary_context_switches
2.866e+08 -11.6% 2.534e+08 unixbench.time.minor_page_faults
666.75 -9.7% 602.25 unixbench.time.percent_of_cpu_this_job_got
1830 -9.7% 1653 unixbench.time.system_time
395.13 -5.2% 374.58 unixbench.time.user_time
8611715 -58.9% 3537314 ± 3% unixbench.time.voluntary_context_switches
6639375 -9.1% 6033775 unixbench.workload
26025 +3849.3% 1027825 interrupts.CAL:Function_call_interrupts
4856 ± 14% -27.4% 3523 ± 11% slabinfo.filp.active_objs
3534356 -8.8% 3223918 softirqs.RCU
77929 -11.2% 69172 vmstat.system.cs
19489 ± 2% +7.5% 20956 vmstat.system.in
9.05 ± 9% +11.0% 10.05 ± 8% boot-time.dhcp
131.63 ± 4% +8.6% 142.89 ± 7% boot-time.idle
9.07 ± 9% +11.0% 10.07 ± 8% boot-time.kernel_boot
76288 ± 3% -12.8% 66560 ± 3% meminfo.DirectMap4k
16606 -13.1% 14433 meminfo.Inactive
16515 -13.2% 14341 meminfo.Inactive(anon)
11.87 ± 5% +7.8 19.63 ± 4% mpstat.cpu.idle%
0.07 ± 35% -0.0 0.04 ± 17% mpstat.cpu.soft%
68.91 -6.1 62.82 mpstat.cpu.sys%
29291570 +325.4% 1.246e+08 cpuidle.C1.time
8629105 -36.1% 5513780 cpuidle.C1.usage
668733 ± 12% +11215.3% 75668902 ± 2% cpuidle.C1E.time
9763 ± 12% +16572.7% 1627882 ± 2% cpuidle.C1E.usage
1.834e+08 ± 9% +23.1% 2.258e+08 ± 11% cpuidle.C3.time
222674 ± 8% +133.4% 519690 ± 6% cpuidle.C3.usage
4129 -13.3% 3581 proc-vmstat.nr_inactive_anon
4129 -13.3% 3581 proc-vmstat.nr_zone_inactive_anon
2.333e+08 -12.2% 2.049e+08 proc-vmstat.numa_hit
2.333e+08 -12.2% 2.049e+08 proc-vmstat.numa_local
6625 -10.9% 5905 proc-vmstat.pgactivate
2.392e+08 -12.1% 2.102e+08 proc-vmstat.pgalloc_normal
2.936e+08 -12.6% 2.566e+08 proc-vmstat.pgfault
2.392e+08 -12.1% 2.102e+08 proc-vmstat.pgfree
2850 -15.3% 2413 turbostat.Avg_MHz
8629013 -36.1% 5513569 turbostat.C1
1.09 +3.5 4.61 turbostat.C1%
9751 ± 12% +16593.0% 1627864 ± 2% turbostat.C1E
0.03 ± 19% +2.8 2.80 turbostat.C1E%
222574 ± 8% +133.4% 519558 ± 6% turbostat.C3
6.84 ± 8% +1.5 8.34 ± 10% turbostat.C3%
2.82 ± 7% +250.3% 9.87 ± 2% turbostat.CPU%c1
6552773 ± 3% +23.8% 8111699 ± 2% turbostat.IRQ
2.02 ± 11% +28.3% 2.58 ± 9% turbostat.Pkg%pc3
7.635e+11 -12.5% 6.682e+11 perf-stat.branch-instructions
3.881e+10 -12.9% 3.381e+10 perf-stat.branch-misses
2.09 -0.3 1.77 ± 4% perf-stat.cache-miss-rate%
1.551e+09 -15.1% 1.316e+09 ± 4% perf-stat.cache-misses
26177920 -10.5% 23428188 perf-stat.context-switches
1.99 -2.8% 1.93 perf-stat.cpi
7.553e+12 -14.7% 6.446e+12 perf-stat.cpu-cycles
522523 ± 2% +628.3% 3805664 perf-stat.cpu-migrations
2.425e+10 ± 4% -14.3% 2.078e+10 perf-stat.dTLB-load-misses
1.487e+12 -11.3% 1.319e+12 perf-stat.dTLB-loads
1.156e+10 ± 3% -7.7% 1.066e+10 perf-stat.dTLB-store-misses
6.657e+11 -11.1% 5.915e+11 perf-stat.dTLB-stores
0.15 +0.0 0.15 perf-stat.iTLB-load-miss-rate%
5.807e+09 -11.0% 5.166e+09 perf-stat.iTLB-load-misses
3.799e+12 -12.1% 3.34e+12 perf-stat.iTLB-loads
3.803e+12 -12.2% 3.338e+12 perf-stat.instructions
654.99 -1.4% 646.07 perf-stat.instructions-per-iTLB-miss
0.50 +2.8% 0.52 perf-stat.ipc
2.754e+08 -11.6% 2.435e+08 perf-stat.minor-faults
1.198e+08 ± 7% +73.1% 2.074e+08 ± 4% perf-stat.node-stores
2.754e+08 -11.6% 2.435e+08 perf-stat.page-faults
572928 -3.4% 553258 perf-stat.path-length
unixbench.score
4800 +-+------------------------------------------------------------------+
|+ + + |
4700 +-+ + + :+ +. :+ + + |
| + + + +. : + + + + + + + .+++++ .+ +|
4600 +-+ +++ :+++ + ++: : :+ +++ ++.++++ + ++++ ++ |
| + + + ++ ++ + |
4500 +-+ |
| |
4400 +-+ |
| |
4300 +-+ |
O |
4200 +-O O O OOOO OO OOO OOOO OOOO O O |
|O OO OOOOO O O OO O O O O O OO |
4100 +-+------------------------------------------------------------------+
unixbench.workload
9e+06 +-+---------------------------------------------------------------+
| : |
8.5e+06 +-+ : |
| : |
8e+06 +-+ : |
| :: |
7.5e+06 +-+ : : + |
| +: : : + |
7e+06 +-+ + + :: : :: + + : + + + + + |
|:+ + + : :: : : :: : :+ : : ::+ :+ .+ :+ ++ ++ + ++ ::++|
6.5e+06 +-O+ +++ ++++ +++ + ++ +.+ + ++ + + + + + + + +.+++ + |
O O O O O O O |
6e+06 +O+OOO O OOOOOOOO OOOO OO OOOOOOOOO O O O OO |
| O |
5.5e+06 +-+---------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Xiaolong
[lkp-robot] [confidence: ] 6ce3dd6eec [ 10.247585] WARNING: CPU: 0 PID: 210 at block/blk-mq.c:647 blk_mq_start_request
by kernel test robot
Greetings,
0day kernel testing robot got the below dmesg and the first bad commit is
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
commit 6ce3dd6eec114930cf2035a8bcb1e80477ed79a8
Author: Ming Lei <ming.lei(a)redhat.com>
AuthorDate: Tue Jul 10 09:03:31 2018 +0800
Commit: Jens Axboe <axboe(a)kernel.dk>
CommitDate: Tue Jul 17 16:04:00 2018 -0600
blk-mq: issue directly if hw queue isn't busy in case of 'none'
In case of the 'none' io scheduler, when the hw queue isn't busy, it isn't
necessary to enqueue the request to the sw queue and dequeue it from the
sw queue, because the request may be submitted to the hw queue asap without
extra cost; meantime there shouldn't be many requests in the sw queue,
and we don't need to worry about the effect on IO merge.
There are still some single hw queue SCSI HBAs (HPSA, megaraid_sas, ...)
which may connect high performance devices, so 'none' is often required
for obtaining good performance.
This patch improves IOPS and decreases CPU utilization on megaraid_sas,
per Kashyap's test.
Cc: Kashyap Desai <kashyap.desai(a)broadcom.com>
Cc: Laurence Oberman <loberman(a)redhat.com>
Cc: Omar Sandoval <osandov(a)fb.com>
Cc: Christoph Hellwig <hch(a)lst.de>
Cc: Bart Van Assche <bart.vanassche(a)wdc.com>
Cc: Hannes Reinecke <hare(a)suse.de>
Reported-by: Kashyap Desai <kashyap.desai(a)broadcom.com>
Tested-by: Kashyap Desai <kashyap.desai(a)broadcom.com>
Signed-off-by: Ming Lei <ming.lei(a)redhat.com>
Signed-off-by: Jens Axboe <axboe(a)kernel.dk>
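The change itself is small; conceptually it short-circuits the sw-queue
round trip on the insert path. A sketch of the idea (the helper
hw_queue_busy() is illustrative, not the exact name used in
block/blk-mq-sched.c):

	/* blk_mq_sched_insert_request(), 'none' scheduler path (sketch) */
	if (!q->elevator && !hw_queue_busy(hctx)) {
		/* hw queue idle: issue directly instead of via the sw queue */
		blk_mq_try_issue_directly(hctx, rq, &cookie);
		return;
	}
	/* otherwise insert into the sw queue and run the hw queue as before */

The warning below fires in blk_mq_start_request(), which suggests a
request reaching the driver (nbd here) in an unexpected state via this
new direct-issue path.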
71e9690b59 blk-iolatency: truncate our current time
6ce3dd6eec blk-mq: issue directly if hw queue isn't busy in case of 'none'
89cf553533 Add linux-next specific files for 20180720
+-------------------------------------------------+------------+------------+---------------+
| | 71e9690b59 | 6ce3dd6eec | next-20180720 |
+-------------------------------------------------+------------+------------+---------------+
| boot_successes | 32 | 4 | 18 |
| boot_failures | 13 | 12 | 12 |
| BUG:workqueue_lockup-pool | 13 | 2 | 3 |
| WARNING:at_block/blk-mq.c:#blk_mq_start_request | 0 | 12 | 11 |
| RIP:blk_mq_start_request | 0 | 12 | 11 |
+-------------------------------------------------+------------+------------+---------------+
[ 10.225908] UDF-fs: error (device nbd11): udf_read_tagged: read failed, block=512, location=512
[ 10.230004] UDF-fs: error (device nbd6): udf_read_tagged: read failed, block=512, location=512
[ 10.234011] UDF-fs: error (device nbd15): udf_read_tagged: read failed, block=512, location=512
[ 10.238978] UDF-fs: error (device nbd11): udf_read_tagged: read failed, block=256, location=256
[ 10.243423] UDF-fs: error (device nbd6): udf_read_tagged: read failed, block=256, location=256
[ 10.247585] WARNING: CPU: 0 PID: 210 at block/blk-mq.c:647 blk_mq_start_request+0x57/0x84
[ 10.255769] CPU: 0 PID: 210 Comm: kworker/0:1H Not tainted 4.18.0-rc4-00068-g6ce3dd6 #1
[ 10.258705] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 10.261751] Workqueue: kblockd blk_mq_run_work_fn
[ 10.263469] RIP: 0010:blk_mq_start_request+0x57/0x84
[ 10.265231] Code: 73 1e e8 32 9c b9 ff 81 4b 18 00 00 02 00 48 89 83 b0 00 00 00 48 89 de 48 89 ef e8 4a e5 00 00 8b 83 d4 00 00 00 85 c0 74 02 <0f> 0b 48 89 df e8 65 dd ff ff c7 83 d4 00 00 00 01 00 00 00 83 bd
[ 10.271321] RSP: 0018:ffffc900002a7cb0 EFLAGS: 00010202
[ 10.273134] RAX: 0000000000000001 RBX: ffff88001c61ccc0 RCX: 00000000000000ff
[ 10.275339] RDX: 0000000000000000 RSI: ffff88001c5ae300 RDI: ffff88001c61ccc0
[ 10.277534] RBP: ffff88001c5f7008 R08: 0000000437ddd47e R09: 0000000000000000
[ 10.279729] R10: 0000000000000000 R11: ffff88001c61cd00 R12: 000000000000000a
[ 10.287969] R13: 0000000000000000 R14: ffff88001c5f7000 R15: ffff88001c308a00
[ 10.290141] FS: 0000000000000000(0000) GS:ffffffff82e98000(0000) knlGS:0000000000000000
[ 10.293062] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 10.294955] CR2: 00007f661eaba000 CR3: 0000000002e75002 CR4: 00000000000606b0
[ 10.297108] Call Trace:
[ 10.298313] nbd_queue_rq+0x105/0x3c5
[ 10.310816] ? kvm_sched_clock_read+0x5/0xd
[ 10.312427] ? __lock_acquire+0x217/0x7ae
[ 10.314125] blk_mq_dispatch_rq_list+0x2db/0x4ec
[ 10.315808] ? blk_mq_flush_busy_ctxs+0x5d/0x145
[ 10.317460] blk_mq_sched_dispatch_requests+0x127/0x165
[ 10.319212] __blk_mq_run_hw_queue+0x9d/0xc4
[ 10.320769] process_one_work+0x205/0x36f
[ 10.322277] ? process_one_work+0x1a4/0x36f
[ 10.327153] worker_thread+0x1ec/0x2bb
[ 10.329387] ? process_scheduled_works+0x27/0x27
[ 10.331067] kthread+0x117/0x11f
[ 10.332477] ? kthread_flush_work_fn+0x9/0x9
[ 10.348547] ret_from_fork+0x35/0x40
[ 10.349980] ---[ end trace 29f1a54dd1591b2f ]---
[ 10.352707] NILFS (nbd9): unable to read superblock
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start 01c75271c97333b09c56dc9faf56efab9bddd395 7daf201d7fe8334e2d2364d4e8ed3394ec9af819 --
git bisect bad 09ab3047b1a130d322a0d250fef359b1b910b2b5 # 01:45 B 0 1 15 0 Merge 'pcmoore/next' into devel-catchup-201807190533
git bisect bad d2f660d44ec337305d2015a59542b666ea098a80 # 02:09 B 2 3 2 2 Merge 'linux-review/Katsuhiro-Suzuki/media-dvb-frontends-add-Socionext-SC1501A-ISDB-S-T-demodulator-driver/20180719-045753' into devel-catchup-201807190533
git bisect good bb9f3d6e0cd4247dc8261f6045f27eef059c258c # 02:35 G 11 0 11 11 Merge 'linux-review/Ernesto-A-Fern-ndez/hfsplus-fix-decomposition-of-Hangul-characters/20180719-051928' into devel-catchup-201807190533
git bisect good 08d5773f0d3f83b5f402025cb74f1ad4f9be2c3a # 03:04 G 11 0 4 4 Merge 'vfio/for-linus' into devel-catchup-201807190533
git bisect good 1452167e29e13d331b052fb5e86af82aeb7fa30e # 03:39 G 11 0 4 4 Merge 'linux-review/Srinath-Mannam/mmc-host-iproc-Add-ACPI-support-to-IPROC-SDHCI/20180719-043256' into devel-catchup-201807190533
git bisect bad 4fa583c08a968c296b131f5bf4221b1f8a78b18c # 03:56 B 0 10 25 0 Merge 'linux-review/RAGHU-Halharvi/pktcdvd-checkpatch-remove-static-initialise-null/20180719-040038' into devel-catchup-201807190533
git bisect good b351f0c76c3eb94c9ccfb68d0b23899a35e47f27 # 04:17 G 12 0 6 6 Documentation: add a doc for blk-iolatency
git bisect good 884b031b288bae15397dd07b084a41ffb44f99e4 # 04:31 G 11 0 4 4 lightnvm: pblk: mark expected switch fall-through
git bisect bad bdca3c87fb7ad1cc61d231d37eb0d8f90d001e0c # 04:59 B 0 2 17 0 block: Track DISCARD statistics and output them in stat and diskstat
git bisect bad 6ce3dd6eec114930cf2035a8bcb1e80477ed79a8 # 05:12 B 0 6 21 0 blk-mq: issue directly if hw queue isn't busy in case of 'none'
git bisect good f6352103d2e0ad2d2066725eb19bfdfb8763239b # 05:28 G 11 0 4 4 lightnvm: pblk: assume that chunks are closed on 1.2 devices
git bisect good 71e9690b59e7349156025a514c29c29ef55b0175 # 05:48 G 11 0 5 5 blk-iolatency: truncate our current time
# first bad commit: [6ce3dd6eec114930cf2035a8bcb1e80477ed79a8] blk-mq: issue directly if hw queue isn't busy in case of 'none'
git bisect good 71e9690b59e7349156025a514c29c29ef55b0175 # 05:54 G 36 0 8 13 blk-iolatency: truncate our current time
# extra tests with debug options
git bisect bad 6ce3dd6eec114930cf2035a8bcb1e80477ed79a8 # 06:32 B 0 1 16 0 blk-mq: issue directly if hw queue isn't busy in case of 'none'
# extra tests on HEAD of linux-devel/devel-catchup-201807190533
git bisect bad 01c75271c97333b09c56dc9faf56efab9bddd395 # 06:33 B 0 319 383 46 0day head guard for 'devel-catchup-201807190533'
# extra tests on tree/branch linux-next/master
git bisect bad 89cf553533084a35b44f533d59198497d3319d69 # 07:02 B 1 1 1 1 Add linux-next specific files for 20180720
# extra tests with first bad commit reverted
git bisect good e1aef67c37bb7d5591d822552d4aff6c4b5ab0de # 07:29 G 12 0 2 2 Revert "blk-mq: issue directly if hw queue isn't busy in case of 'none'"
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/lkp Intel Corporation
[lkp-robot] [nfsd4] 517dc52baa: fsmark.files_per_sec 32.4% improvement
by kernel test robot
Greetings,
FYI, we noticed a 32.4% improvement of fsmark.files_per_sec due to commit:
commit: 517dc52baa2a508c82f68bbc7219b48169e6b29f ("nfsd4: shortern default lease period")
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master
in testcase: fsmark
on test machine: 48 threads Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70GHz with 64G memory
with following parameters:
iterations: 1x
nr_threads: 1t
disk: 1BRD_48G
fs: f2fs
fs2: nfsv4
filesize: 4M
test_size: 40G
sync_method: fsyncBeforeClose
cpufreq_governor: performance
test-description: fsmark is a file system benchmark to test synchronous write workloads, for example, a mail server's workload.
test-url: https://sourceforge.net/projects/fsmark/
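One plausible reading of the numbers: the commit halves the default
lease (and with it the grace period a freshly started server observes),
presumably something along these lines in fs/nfsd/nfs4state.c:

	/* sketch -- see the actual commit for the exact hunk */
	nn->nfsd4_leasetime = 45;	/* was 90 */
	nn->nfsd4_gracetime = 45;	/* was 90 */

That would line up with elapsed_time dropping by roughly 47 seconds
(192s -> 145s) below: the benchmark spends about 45 seconds less
waiting out the server's grace period, which shows up as a
files_per_sec "improvement".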
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/disk/filesize/fs2/fs/iterations/kconfig/nr_threads/rootfs/sync_method/tbox_group/test_size/testcase:
gcc-7/performance/1BRD_48G/4M/nfsv4/f2fs/1x/x86_64-rhel-7.2/1t/debian-x86_64-2016-08-31.cgz/fsyncBeforeClose/ivb44/40G/fsmark
commit:
c2993a1d7d ("nfsd4: extend reclaim period for reclaiming clients")
517dc52baa ("nfsd4: shortern default lease period")
c2993a1d7d6687fd 517dc52baa2a508c82f68bbc72
---------------- --------------------------
%stddev %change %stddev
\ | \
53.60 +32.4% 70.95 fsmark.files_per_sec
191.89 -24.4% 145.16 fsmark.time.elapsed_time
191.89 -24.4% 145.16 fsmark.time.elapsed_time.max
17.75 ± 2% +31.0% 23.25 ± 3% fsmark.time.percent_of_cpu_this_job_got
1.43 ± 2% +0.4 1.85 ± 3% mpstat.cpu.sys%
0.03 ± 3% +0.0 0.04 ± 3% mpstat.cpu.usr%
1333968 ± 3% -24.4% 1008280 ± 2% softirqs.SCHED
4580860 ± 3% -25.0% 3433796 ± 4% softirqs.TIMER
49621514 ± 50% -33.8% 32838257 ± 4% cpuidle.C3.time
8.87e+09 ± 3% -25.7% 6.588e+09 cpuidle.C6.time
9796946 ± 3% -24.8% 7369851 cpuidle.C6.usage
212766 ± 3% +33.3% 283568 vmstat.io.bo
13605317 +27.6% 17354458 vmstat.memory.cache
41139824 ± 2% -15.6% 34707711 vmstat.memory.free
0.00 +1e+102% 1.00 vmstat.procs.r
16158 ± 9% +33.0% 21495 ± 12% vmstat.system.cs
28279 ± 10% +23.0% 34796 ± 4% meminfo.Active(file)
13485862 +27.9% 17253726 meminfo.Cached
20655 ± 10% +24.7% 25748 meminfo.Dirty
12246598 +30.7% 16008540 meminfo.Inactive
12237146 +30.7% 15999087 meminfo.Inactive(file)
41246576 ± 2% -15.3% 34917557 meminfo.MemFree
123641 ± 2% +15.8% 143144 meminfo.SReclaimable
233101 +10.4% 257273 meminfo.Slab
13275 ± 28% +54.6% 20527 ± 15% numa-meminfo.node0.Active(file)
9394 ± 33% +76.6% 16592 ± 14% numa-meminfo.node0.Dirty
20060196 ± 8% -18.6% 16336481 ± 10% numa-meminfo.node0.MemFree
5768180 ± 2% +67.5% 9661137 ± 22% numa-meminfo.node1.FilePages
5162181 ± 3% +74.9% 9029558 ± 23% numa-meminfo.node1.Inactive
5158148 ± 3% +75.0% 9027345 ± 23% numa-meminfo.node1.Inactive(file)
21163215 ± 5% -12.3% 18564891 ± 6% numa-meminfo.node1.MemFree
367.00 ± 27% +82.6% 670.25 ± 40% numa-meminfo.node1.NFS_Unstable
624.00 ± 21% +95.3% 1218 ± 35% numa-meminfo.node1.Writeback
2.236e+09 ± 6% -21.0% 1.767e+09 ± 12% perf-stat.branch-misses
3.553e+09 ± 6% -12.3% 3.115e+09 ± 10% perf-stat.cache-misses
9.503e+09 ± 4% -15.0% 8.074e+09 ± 9% perf-stat.cache-references
8701 ± 13% -38.3% 5367 ± 26% perf-stat.cpu-migrations
1.037e+08 ± 5% -20.6% 82303955 ± 7% perf-stat.dTLB-store-misses
86.00 -1.2 84.80 perf-stat.iTLB-load-miss-rate%
1.33e+08 ± 5% -20.2% 1.062e+08 ± 11% perf-stat.iTLB-load-misses
543566 ± 3% -24.5% 410527 perf-stat.minor-faults
543567 ± 3% -24.5% 410533 perf-stat.page-faults
98.50 +15.0% 113.25 turbostat.Avg_MHz
2081 +10.2% 2292 ± 3% turbostat.Bzy_MHz
0.59 +0.1 0.73 ± 2% turbostat.C1%
0.15 ± 2% +0.1 0.20 ± 6% turbostat.C1E%
9795281 ± 3% -24.8% 7368291 turbostat.C6
58.70 +9.0% 64.01 ± 3% turbostat.CorWatt
9631901 ± 3% -24.9% 7237299 turbostat.IRQ
4.29 ± 3% -26.1% 3.17 ± 5% turbostat.Pkg%pc6
87.05 +6.4% 92.61 ± 2% turbostat.PkgWatt
8.29 +2.9% 8.54 turbostat.RAMWatt
3296 ± 29% +55.4% 5124 ± 16% numa-vmstat.node0.nr_active_file
2342 ± 33% +76.4% 4131 ± 15% numa-vmstat.node0.nr_dirty
5021740 ± 8% -18.7% 4085154 ± 9% numa-vmstat.node0.nr_free_pages
764.00 ± 57% +124.9% 1718 ± 28% numa-vmstat.node0.nr_vmscan_immediate_reclaim
3297 ± 29% +55.4% 5124 ± 16% numa-vmstat.node0.nr_zone_active_file
2472 ± 31% +72.1% 4254 ± 14% numa-vmstat.node0.nr_zone_write_pending
2418556 ± 8% +51.7% 3669187 ± 18% numa-vmstat.node1.nr_dirtied
1442200 ± 2% +67.4% 2413905 ± 22% numa-vmstat.node1.nr_file_pages
5299408 ± 5% -12.3% 4649246 ± 6% numa-vmstat.node1.nr_free_pages
1289719 ± 3% +74.9% 2255484 ± 23% numa-vmstat.node1.nr_inactive_file
85499 ± 21% +39.3% 119063 ± 22% numa-vmstat.node1.nr_indirectly_reclaimable
92.00 ± 23% +82.1% 167.50 ± 33% numa-vmstat.node1.nr_unstable
136.50 ± 25% +92.3% 262.50 ± 25% numa-vmstat.node1.nr_writeback
2415645 ± 8% +51.8% 3666735 ± 18% numa-vmstat.node1.nr_written
1289715 ± 3% +74.9% 2255489 ± 23% numa-vmstat.node1.nr_zone_inactive_file
7109 ± 10% +22.6% 8718 ± 4% proc-vmstat.nr_active_file
5188 ± 10% +24.3% 6450 proc-vmstat.nr_dirty
1326117 -4.8% 1262303 proc-vmstat.nr_dirty_background_threshold
2655481 -4.8% 2527699 proc-vmstat.nr_dirty_threshold
3372021 +27.9% 4311617 proc-vmstat.nr_file_pages
10302256 ± 2% -15.3% 8722920 proc-vmstat.nr_free_pages
3059627 +30.7% 3997741 proc-vmstat.nr_inactive_file
191536 ± 2% +31.6% 252107 proc-vmstat.nr_indirectly_reclaimable
3508 -6.4% 3283 proc-vmstat.nr_shmem
30984 ± 2% +15.6% 35811 proc-vmstat.nr_slab_reclaimable
27320 +4.5% 28553 ± 2% proc-vmstat.nr_slab_unreclaimable
275.25 ± 13% +63.5% 450.00 ± 20% proc-vmstat.nr_writeback
7109 ± 10% +22.6% 8716 ± 4% proc-vmstat.nr_zone_active_file
3059632 +30.7% 3997758 proc-vmstat.nr_zone_inactive_file
5414 ± 9% +25.8% 6812 proc-vmstat.nr_zone_write_pending
561847 ± 3% -24.2% 425887 ± 2% proc-vmstat.pgfault
951.53 ± 4% -15.3% 805.83 ± 7% sched_debug.cfs_rq:/.exec_clock.avg
195.00 ± 3% +37.1% 267.35 ± 7% sched_debug.cfs_rq:/.load_avg.avg
0.78 -11.5% 0.69 ± 2% sched_debug.cfs_rq:/.nr_spread_over.avg
0.75 -11.1% 0.67 sched_debug.cfs_rq:/.nr_spread_over.min
129.41 ± 10% -23.9% 98.49 ± 6% sched_debug.cfs_rq:/.runnable_load_avg.stddev
187.51 ± 4% +23.1% 230.86 sched_debug.cfs_rq:/.util_avg.avg
115822 -26.1% 85630 sched_debug.cpu.clock.avg
115824 -26.1% 85633 sched_debug.cpu.clock.max
115818 -26.1% 85627 sched_debug.cpu.clock.min
115822 -26.1% 85630 sched_debug.cpu.clock_task.avg
115824 -26.1% 85633 sched_debug.cpu.clock_task.max
115818 -26.1% 85627 sched_debug.cpu.clock_task.min
16.83 ± 13% +118.0% 36.67 ± 52% sched_debug.cpu.cpu_load[0].avg
18.88 ± 9% +71.7% 32.43 ± 20% sched_debug.cpu.cpu_load[1].avg
498.88 ± 9% +83.7% 916.50 ± 39% sched_debug.cpu.cpu_load[1].max
82.58 ± 7% +70.8% 141.02 ± 33% sched_debug.cpu.cpu_load[1].stddev
4240 -19.7% 3403 sched_debug.cpu.curr->pid.max
95563 -31.3% 65668 sched_debug.cpu.nr_load_updates.avg
100959 -30.6% 70040 sched_debug.cpu.nr_load_updates.max
93175 -32.1% 63230 sched_debug.cpu.nr_load_updates.min
600.39 ± 4% -16.6% 500.69 ± 2% sched_debug.cpu.ttwu_local.avg
115820 -26.1% 85628 sched_debug.cpu_clk
115820 -26.1% 85628 sched_debug.ktime
0.05 ± 10% +48.4% 0.08 ± 20% sched_debug.rt_rq:/.rt_time.avg
2.39 ± 10% +48.9% 3.56 ± 20% sched_debug.rt_rq:/.rt_time.max
0.34 ± 10% +48.9% 0.51 ± 20% sched_debug.rt_rq:/.rt_time.stddev
116302 -26.0% 86097 sched_debug.sched_clk
1036 ± 9% +27.7% 1322 ± 8% slabinfo.biovec-64.active_objs
1036 ± 9% +27.7% 1322 ± 8% slabinfo.biovec-64.num_objs
712.50 ± 8% +27.5% 908.50 ± 10% slabinfo.ext4_extent_status.active_objs
712.50 ± 8% +27.5% 908.50 ± 10% slabinfo.ext4_extent_status.num_objs
2640 ± 8% +25.5% 3313 ± 6% slabinfo.ext4_io_end.active_objs
2674 ± 8% +25.4% 3354 ± 6% slabinfo.ext4_io_end.num_objs
2709 ± 9% +34.4% 3641 ± 5% slabinfo.f2fs_extent_tree.active_objs
2750 ± 9% +32.8% 3651 ± 5% slabinfo.f2fs_extent_tree.num_objs
2255 ± 5% +31.6% 2969 ± 8% slabinfo.f2fs_inode_cache.active_objs
2286 ± 4% +31.2% 3000 ± 7% slabinfo.f2fs_inode_cache.num_objs
590.00 ± 8% +24.5% 734.75 ± 3% slabinfo.file_lock_cache.active_objs
590.00 ± 8% +24.5% 734.75 ± 3% slabinfo.file_lock_cache.num_objs
7627 +12.3% 8568 ± 5% slabinfo.free_nid.active_objs
7627 +12.3% 8568 ± 5% slabinfo.free_nid.num_objs
6251 ± 5% +26.0% 7875 ± 5% slabinfo.fscrypt_info.active_objs
6251 ± 5% +26.0% 7875 ± 5% slabinfo.fscrypt_info.num_objs
13894 ± 6% +16.3% 16158 slabinfo.kmalloc-128.active_objs
13901 ± 6% +16.3% 16161 slabinfo.kmalloc-128.num_objs
2184 ± 2% +30.5% 2851 slabinfo.nfs_inode_cache.active_objs
2209 ± 3% +30.1% 2874 ± 2% slabinfo.nfs_inode_cache.num_objs
636.50 ± 14% +46.5% 932.25 ± 24% slabinfo.nfsd4_stateids.active_objs
636.50 ± 14% +46.5% 932.25 ± 24% slabinfo.nfsd4_stateids.num_objs
10514 ± 8% +25.0% 13138 ± 2% slabinfo.nfsd_drc.active_objs
10514 ± 8% +25.0% 13138 ± 2% slabinfo.nfsd_drc.num_objs
984.00 ± 11% +38.0% 1358 ± 22% slabinfo.numa_policy.active_objs
984.00 ± 11% +38.0% 1358 ± 22% slabinfo.numa_policy.num_objs
2062 ± 15% +17.2% 2417 ± 8% slabinfo.pool_workqueue.active_objs
2062 ± 15% +18.1% 2435 ± 7% slabinfo.pool_workqueue.num_objs
121248 ± 4% +25.6% 152261 slabinfo.radix_tree_node.active_objs
2173 ± 4% +25.3% 2722 slabinfo.radix_tree_node.active_slabs
121725 ± 4% +25.3% 152514 slabinfo.radix_tree_node.num_objs
2173 ± 4% +25.3% 2722 slabinfo.radix_tree_node.num_slabs
2126 ± 6% +15.2% 2449 ± 4% slabinfo.sock_inode_cache.active_objs
2126 ± 6% +15.2% 2449 ± 4% slabinfo.sock_inode_cache.num_objs
18605 +19.2% 22180 ± 3% slabinfo.vm_area_struct.active_objs
18668 +18.9% 22193 ± 3% slabinfo.vm_area_struct.num_objs
7.75 ± 5% -22.6% 6.00 nfsstat.Client.nfs.v3.commit.percent
7.00 +14.3% 8.00 nfsstat.Client.nfs.v3.getattr.percent
3.00 -33.3% 2.00 nfsstat.Client.nfs.v3.write
8.00 -25.0% 6.00 nfsstat.Client.nfs.v3.write.percent
4.50 ± 11% -33.3% 3.00 nfsstat.Client.nfs.v4.access.percent
2546 ± 8% +24.2% 3164 nfsstat.Client.nfs.v4.close
5.00 +40.0% 7.00 nfsstat.Client.nfs.v4.close.percent
2546 ± 8% +24.3% 3164 nfsstat.Client.nfs.v4.commit
5.00 +40.0% 7.00 nfsstat.Client.nfs.v4.commit.percent
2.00 -50.0% 1.00 nfsstat.Client.nfs.v4.confirm.percent
3.00 -33.3% 2.00 nfsstat.Client.nfs.v4.getattr.percent
2551 ± 8% +24.2% 3169 nfsstat.Client.nfs.v4.lookup
1.00 -100.0% 0.00 nfsstat.Client.nfs.v4.lookup_root.percent
1.00 -100.0% 0.00 nfsstat.Client.nfs.v4.null.percent
2558 ± 8% +24.1% 3174 nfsstat.Client.nfs.v4.open
17.25 ± 2% -14.5% 14.75 ± 2% nfsstat.Client.nfs.v4.open.percent
2.00 -50.0% 1.00 nfsstat.Client.nfs.v4.pathconf.percent
6.00 -25.0% 4.50 ± 11% nfsstat.Client.nfs.v4.server_caps.percent
2.00 -50.0% 1.00 nfsstat.Client.nfs.v4.setclntid.percent
2.00 -50.0% 1.00 nfsstat.Client.nfs.v4.statfs.percent
10189 ± 8% +24.3% 12662 nfsstat.Client.nfs.v4.write
22.50 ± 3% +26.7% 28.50 nfsstat.Client.nfs.v4.write.percent
20460 ± 8% +24.1% 25401 nfsstat.Client.rpc.authrefrsh
20460 ± 8% +24.1% 25401 nfsstat.Client.rpc.calls
20422 ± 8% +24.2% 25365 nfsstat.Server.nfs.v4.compound
1.00 -100.0% 0.00 nfsstat.Server.nfs.v4.null.percent
2552 ± 8% +24.2% 3170 nfsstat.Server.nfs.v4.operations.access
2546 ± 8% +24.2% 3164 nfsstat.Server.nfs.v4.operations.close
2546 ± 8% +24.2% 3164 nfsstat.Server.nfs.v4.operations.commit
17858 ± 8% +24.2% 22185 nfsstat.Server.nfs.v4.operations.getattr
5101 ± 8% +24.2% 6337 nfsstat.Server.nfs.v4.operations.getfh
2551 ± 8% +24.2% 3169 nfsstat.Server.nfs.v4.operations.lookup
4.00 -25.0% 3.00 nfsstat.Server.nfs.v4.operations.lookup.percent
2558 ± 8% +24.1% 3174 nfsstat.Server.nfs.v4.operations.open
6.00 -16.7% 5.00 nfsstat.Server.nfs.v4.operations.open.percent
20417 ± 8% +24.2% 25359 nfsstat.Server.nfs.v4.operations.putfh
1.00 -100.0% 0.00 nfsstat.Server.nfs.v4.operations.setcltid.percent
1.00 -100.0% 0.00 nfsstat.Server.nfs.v4.operations.setcltidconf.percent
10189 ± 8% +24.3% 12662 nfsstat.Server.nfs.v4.operations.write
6.25 ± 6% +36.0% 8.50 ± 5% nfsstat.Server.nfs.v4.operations.write.percent
20423 ± 8% +24.2% 25366 nfsstat.Server.packet.packets
20423 ± 8% +24.2% 25366 nfsstat.Server.packet.tcp
10194 ± 8% +24.3% 12667 nfsstat.Server.reply.cache.misses
10229 ± 8% +24.1% 12699 nfsstat.Server.reply.cache.nocache
20423 ± 8% +24.2% 25366 nfsstat.Server.rpc.calls
19.63 ± 6% -1.8 17.83 ± 8% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle.cpu_startup_entry
10.22 ± 10% -1.8 8.43 ± 13% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle
7.83 ± 11% -1.5 6.33 ± 14% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state
4.69 ± 14% -1.0 3.70 ± 19% perf-profile.calltrace.cycles-pp.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
4.20 ± 14% -0.9 3.29 ± 18% perf-profile.calltrace.cycles-pp.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt
1.23 ± 20% -0.3 0.89 ± 23% perf-profile.calltrace.cycles-pp.rcu_check_callbacks.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues
1.43 ± 8% -0.3 1.16 ± 10% perf-profile.calltrace.cycles-pp.perf_mux_hrtimer_handler.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
1.04 ± 7% -0.1 0.90 ± 8% perf-profile.calltrace.cycles-pp.clockevents_program_event.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state
1.08 ± 4% -0.1 0.94 ± 7% perf-profile.calltrace.cycles-pp.run_timer_softirq.__softirqentry_text_start.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
1.19 ± 9% +0.2 1.38 ± 7% perf-profile.calltrace.cycles-pp.__next_timer_interrupt.get_next_timer_interrupt.tick_nohz_next_event.tick_nohz_get_sleep_length.menu_select
1.90 ± 5% +0.2 2.14 ± 6% perf-profile.calltrace.cycles-pp.get_next_timer_interrupt.tick_nohz_next_event.tick_nohz_get_sleep_length.menu_select.do_idle
0.99 ± 11% +0.2 1.23 ± 11% perf-profile.calltrace.cycles-pp.serial8250_console_putchar.uart_console_write.serial8250_console_write.console_unlock.vprintk_emit
0.99 ± 11% +0.2 1.23 ± 11% perf-profile.calltrace.cycles-pp.wait_for_xmitr.serial8250_console_putchar.uart_console_write.serial8250_console_write.console_unlock
0.93 ± 7% +0.3 1.18 ± 4% perf-profile.calltrace.cycles-pp._raw_spin_trylock.rebalance_domains.__softirqentry_text_start.irq_exit.smp_apic_timer_interrupt
1.00 ± 10% +0.3 1.26 ± 12% perf-profile.calltrace.cycles-pp.uart_console_write.serial8250_console_write.console_unlock.vprintk_emit.printk
1.00 ± 10% +0.3 1.26 ± 11% perf-profile.calltrace.cycles-pp.serial8250_console_write.console_unlock.vprintk_emit.printk.irq_work_run_list
1.03 ± 10% +0.3 1.29 ± 11% perf-profile.calltrace.cycles-pp.irq_work_run_list.irq_work_run.smp_irq_work_interrupt.irq_work_interrupt.cpuidle_enter_state
1.03 ± 10% +0.3 1.29 ± 11% perf-profile.calltrace.cycles-pp.irq_work_interrupt.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
1.03 ± 10% +0.3 1.29 ± 11% perf-profile.calltrace.cycles-pp.smp_irq_work_interrupt.irq_work_interrupt.cpuidle_enter_state.do_idle.cpu_startup_entry
1.03 ± 10% +0.3 1.29 ± 11% perf-profile.calltrace.cycles-pp.irq_work_run.smp_irq_work_interrupt.irq_work_interrupt.cpuidle_enter_state.do_idle
1.03 ± 10% +0.3 1.29 ± 11% perf-profile.calltrace.cycles-pp.printk.irq_work_run_list.irq_work_run.smp_irq_work_interrupt.irq_work_interrupt
1.03 ± 10% +0.3 1.29 ± 11% perf-profile.calltrace.cycles-pp.vprintk_emit.printk.irq_work_run_list.irq_work_run.smp_irq_work_interrupt
1.03 ± 10% +0.3 1.29 ± 11% perf-profile.calltrace.cycles-pp.console_unlock.vprintk_emit.printk.irq_work_run_list.irq_work_run
2.88 ± 3% +0.4 3.29 ± 4% perf-profile.calltrace.cycles-pp.tick_nohz_next_event.tick_nohz_get_sleep_length.menu_select.do_idle.cpu_startup_entry
3.66 ± 3% +0.4 4.11 ± 3% perf-profile.calltrace.cycles-pp.tick_nohz_get_sleep_length.menu_select.do_idle.cpu_startup_entry.start_secondary
3.05 ± 2% +0.5 3.54 ± 5% perf-profile.calltrace.cycles-pp.rebalance_domains.__softirqentry_text_start.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
91.67 +0.7 92.33 perf-profile.calltrace.cycles-pp.secondary_startup_64
7.46 ± 5% +1.0 8.46 ± 4% perf-profile.calltrace.cycles-pp.menu_select.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
56.78 +1.7 58.52 perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
10.53 ± 10% -1.9 8.60 ± 13% perf-profile.children.cycles-pp.hrtimer_interrupt
8.14 ± 11% -1.6 6.51 ± 14% perf-profile.children.cycles-pp.__hrtimer_run_queues
4.88 ± 13% -1.1 3.79 ± 18% perf-profile.children.cycles-pp.tick_sched_timer
4.32 ± 13% -1.0 3.35 ± 18% perf-profile.children.cycles-pp.tick_sched_handle
1.82 ± 12% -0.4 1.44 ± 20% perf-profile.children.cycles-pp.scheduler_tick
1.30 ± 17% -0.4 0.93 ± 22% perf-profile.children.cycles-pp.rcu_check_callbacks
1.54 ± 9% -0.3 1.25 ± 9% perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
1.25 ± 6% -0.2 1.03 ± 13% perf-profile.children.cycles-pp.ktime_get
1.15 ± 7% -0.2 0.97 ± 6% perf-profile.children.cycles-pp.clockevents_program_event
0.88 ± 6% -0.2 0.70 ± 6% perf-profile.children.cycles-pp.native_write_msr
1.17 ± 4% -0.2 1.01 ± 6% perf-profile.children.cycles-pp.run_timer_softirq
0.56 ± 9% -0.1 0.42 ± 18% perf-profile.children.cycles-pp.enqueue_hrtimer
0.16 ± 23% -0.1 0.08 ± 10% perf-profile.children.cycles-pp.__intel_pmu_enable_all
0.29 ± 15% -0.1 0.21 ± 29% perf-profile.children.cycles-pp.update_rq_clock
0.16 ± 18% -0.1 0.10 ± 27% perf-profile.children.cycles-pp.raise_softirq
0.08 ± 10% -0.0 0.04 ± 59% perf-profile.children.cycles-pp.calc_global_load_tick
0.22 ± 8% +0.1 0.27 ± 14% perf-profile.children.cycles-pp.pm_qos_read_value
1.42 ± 7% +0.2 1.59 ± 8% perf-profile.children.cycles-pp.__next_timer_interrupt
0.97 ± 6% +0.2 1.21 ± 3% perf-profile.children.cycles-pp._raw_spin_trylock
3.11 ± 5% +0.4 3.49 ± 4% perf-profile.children.cycles-pp.tick_nohz_next_event
3.81 ± 3% +0.4 4.23 ± 3% perf-profile.children.cycles-pp.tick_nohz_get_sleep_length
3.17 +0.5 3.62 ± 5% perf-profile.children.cycles-pp.rebalance_domains
91.87 +0.7 92.53 perf-profile.children.cycles-pp.do_idle
91.67 +0.7 92.33 perf-profile.children.cycles-pp.secondary_startup_64
91.67 +0.7 92.33 perf-profile.children.cycles-pp.cpu_startup_entry
7.71 ± 5% +0.9 8.65 ± 4% perf-profile.children.cycles-pp.menu_select
0.89 ± 16% -0.3 0.58 ± 23% perf-profile.self.cycles-pp.rcu_check_callbacks
0.88 ± 6% -0.2 0.70 ± 6% perf-profile.self.cycles-pp.native_write_msr
0.91 ± 6% -0.1 0.78 ± 5% perf-profile.self.cycles-pp.run_timer_softirq
0.42 ± 15% -0.1 0.29 ± 21% perf-profile.self.cycles-pp.timerqueue_add
0.39 ± 7% -0.1 0.28 ± 17% perf-profile.self.cycles-pp.scheduler_tick
0.29 ± 18% -0.1 0.23 ± 6% perf-profile.self.cycles-pp.clockevents_program_event
0.16 ± 18% -0.1 0.10 ± 27% perf-profile.self.cycles-pp.raise_softirq
0.08 ± 10% -0.0 0.04 ± 59% perf-profile.self.cycles-pp.calc_global_load_tick
0.10 ± 13% -0.0 0.06 ± 20% perf-profile.self.cycles-pp.rcu_bh_qs
0.07 ± 24% +0.0 0.11 ± 15% perf-profile.self.cycles-pp.rcu_nmi_enter
0.22 ± 8% +0.1 0.27 ± 14% perf-profile.self.cycles-pp.pm_qos_read_value
0.38 ± 3% +0.1 0.47 ± 6% perf-profile.self.cycles-pp.rebalance_domains
0.59 ± 11% +0.1 0.69 ± 5% perf-profile.self.cycles-pp.tick_nohz_next_event
0.78 ± 6% +0.1 0.92 ± 4% perf-profile.self.cycles-pp.__next_timer_interrupt
0.97 ± 6% +0.2 1.21 ± 3% perf-profile.self.cycles-pp._raw_spin_trylock
fsmark.files_per_sec
74 +-+--------------------------------------------------------------------+
72 +-O O O O O O O O O |
O O O O O |
70 +-+ O O O |
68 +-+ |
66 +-+ |
64 +-+ |
| |
62 +-+ |
60 +-+ |
58 +-+ |
56 +-+ |
|.+..+. .+..+. .+. .+.. .+. .+. |
54 +-+ +.+..+ +.+..+ +..+ + +..+.+.+..+.+.+. +..+.+.+..+.|
52 +-+--------------------------------------------------------------------+
fsmark.time.elapsed_time
200 +-+-------------------------------------------------------------------+
| .+.. |
190 +-+ .+.+..+. .+.+.+.. .+.+.. .+. .+.+.+..+.+.+..+. .+..+.+ +.|
| +..+ +.+. + + +. + |
| |
180 +-+ |
| |
170 +-+ |
| |
160 +-+ |
| |
| |
150 +-+ O O O O O O |
O O O O O O O O O |
140 +-O----------------O--------------------------------------------------+
fsmark.time.elapsed_time.max
200 +-+-------------------------------------------------------------------+
| .+.. |
190 +-+ .+.+..+. .+.+.+.. .+.+.. .+. .+.+.+..+.+.+..+. .+..+.+ +.|
| +..+ +.+. + + +. + |
| |
180 +-+ |
| |
170 +-+ |
| |
160 +-+ |
| |
| |
150 +-+ O O O O O O |
O O O O O O O O O |
140 +-O----------------O--------------------------------------------------+
nfsstat.Client.nfs.v4.write.percent
29 O-+--O-O----O-O------O--O-O-O----O--O----------------------------------+
| |
28 +-O O O O O O |
27 +-+ |
| |
26 +-+ |
| |
25 +-+ |
| |
24 +-+ |
23 +-+ :|
| :|
22 +-+ +.+..+ +.+..+ +..+ + +..+.+.+..+.+.+.. +..+.+.+..+ |
|+ + + + + + + .. + + + |
21 +-+--------------------------------------------------------------------+
nfsstat.Client.nfs.v4.commit.percent
7 O-O--O-O-O--O-O-O--O-O-O--O-O-O--O-O-O--------------------------------+
| |
| |
6.5 +-+ |
| |
| |
| |
6 +-+ |
| |
| |
5.5 +-+ |
| |
| |
| |
5 +-+-------------------------------------------------------------------+
nfsstat.Client.nfs.v4.open.percent
18 +-+------------------------------------------------------------------+
|: : : : : : : + : : : + : : |
17.5 +-+ : : : : : : + : : : + : : |
17 +-+ +.+..+ +..+.+ + + +.+..+.+.+..+.+ +.+ +..+.|
| |
16.5 +-+ |
| |
16 +-+ |
| |
15.5 +-+ |
15 O-O O O O O O O O O O O O O |
| |
14.5 +-+ |
| |
14 +-+--O--------O--------------------O---------------------------------+
nfsstat.Client.nfs.v4.close.percent
7 O-O--O-O-O--O-O-O--O-O-O--O-O-O--O-O-O--------------------------------+
| |
| |
6.5 +-+ |
| |
| |
| |
6 +-+ |
| |
| |
5.5 +-+ |
| |
| |
| |
5 +-+-------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Xiaolong
ad1754318c [ 7.413021] kernel BUG at include/linux/gfp.h:466!
by kernel test robot
Greetings,
0day kernel testing robot got the below dmesg and the first bad commit is
https://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespace.git siginfo-testing
commit ad1754318cc4dab034d959fcc8e2c34dec0e3d82
Author: Eric W. Biederman <ebiederm(a)xmission.com>
AuthorDate: Mon Jul 23 15:20:37 2018 -0500
Commit: Eric W. Biederman <ebiederm(a)xmission.com>
CommitDate: Tue Jul 24 16:26:33 2018 -0500
signal: Don't restart fork when signals come in.
Wen Yang <wen.yang99(a)zte.com.cn> and majiang <ma.jiang(a)zte.com.cn>
report that a periodic signal received during fork can cause fork to
continually restart preventing an application from making progress.
The code was being overly pessimistic. Fork needs to guarantee that a
signal sent to multiple processes is logically delivered before the
fork and just to the forking process, or logically delivered after the
fork to both the forking process and its newly spawned child. For
signals like periodic timers that are always delivered to a single
process, fork can safely complete and let them appear to be logically
delivered after the fork().
While examining this issue I also discovered that fork today will miss
signals delivered to multiple processes during the fork and handled by
another thread. Similarly, the current code will also miss blocked
signals that are delivered to multiple processes, as those signals will
not appear pending during fork.
Add a list of each thread that is currently forking, and keep on that
list a signal set that records all of the signals sent to multiple
processes. When fork completes, initialize the new process's
shared_pending signal set with it. The calculate_sigpending function
will see those signals and set TIF_SIGPENDING, causing the new task to
take the slow path to userspace to handle those signals, making it
appear as if those signals were received immediately after the fork.
It is not possible to send real time signals to multiple processes and
exceptions don't go to multiple processes, which means that there are
no signals sent to multiple processes that require siginfo. This
means it is safe to not bother collecting siginfo on signals sent
during fork.
The sigaction of a child of fork is initially the same as the
sigaction of the parent process. So a signal the parent ignores, the
child will also initially ignore. Therefore it is safe to ignore
signals sent to multiple processes and ignored by the forking process.
Signals sent to only a single process or only a single thread and delivered
during fork are treated as if they are received after the fork, and generally
not dealt with. They won't cause any problems.
V2: Added removal from the multiprocess list on failure.
V3: Use -ERESTARTNOINTR directly
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=200447
Reported-by: Wen Yang <wen.yang99(a)zte.com.cn> and
Reported-by: majiang <ma.jiang(a)zte.com.cn>
Fixes: 4a2c7a7837da ("[PATCH] make fork() atomic wrt pgrp/session signals")
Signed-off-by: "Eric W. Biederman" <ebiederm(a)xmission.com>
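A minimal userspace sketch of the mechanism described above: a list of
threads currently in fork(), each accumulating the signals that were sent
to multiple processes while the fork was in flight, which then seed the
child's shared_pending set. All names below are illustrative (this is not
the kernel patch itself), the model is single-threaded, and locking is
omitted:

#include <signal.h>
#include <stdio.h>
#include <string.h>

/* One entry per thread currently inside fork() (illustrative names). */
struct multiprocess_signals {
	sigset_t signal;                   /* multi-process signals seen so far */
	struct multiprocess_signals *next; /* link in the list of in-flight forks */
};

static struct multiprocess_signals *forking; /* list head */

/* A signal was delivered to more than one process: record it for
 * every fork that is currently in progress. */
static void record_multiprocess_signal(int sig)
{
	struct multiprocess_signals *m;

	for (m = forking; m; m = m->next)
		sigaddset(&m->signal, sig);
}

/* fork() completion: seed the child's shared pending set, so the child
 * takes the slow path and handles these signals "after the fork". */
static void seed_child_shared_pending(sigset_t *child_shared_pending,
				      const struct multiprocess_signals *mine)
{
	memcpy(child_shared_pending, &mine->signal, sizeof(sigset_t));
}

int main(void)
{
	struct multiprocess_signals me;
	sigset_t child_pending;

	sigemptyset(&me.signal);
	me.next = forking;
	forking = &me;                      /* register: this thread is forking */

	record_multiprocess_signal(SIGHUP); /* e.g. a group-wide SIGHUP arrives */

	seed_child_shared_pending(&child_pending, &me);
	forking = me.next;                  /* unregister on fork completion */

	printf("child starts with SIGHUP pending: %d\n",
	       sigismember(&child_pending, SIGHUP));
	return 0;
}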
159fb05651 fork: Have new threads join on-going signal group stops
ad1754318c signal: Don't restart fork when signals come in.
+------------------------------------------+------------+------------+
| | 159fb05651 | ad1754318c |
+------------------------------------------+------------+------------+
| boot_successes | 35 | 2 |
| boot_failures | 0 | 13 |
| kernel_BUG_at_include/linux/gfp.h | 0 | 13 |
| invalid_opcode:#[##] | 0 | 13 |
| RIP:copy_process | 0 | 13 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 13 |
+------------------------------------------+------------+------------+
[ 7.379068] Freeing unused kernel memory: 188K
[ 7.397925] x86/mm: Checked W+X mappings: passed, no W+X pages found.
[ 7.399618] rodata_test: all tests were successful
[ 7.409207] random: init: uninitialized urandom read (12 bytes read)
[ 7.411666] ------------[ cut here ]------------
[ 7.413021] kernel BUG at include/linux/gfp.h:466!
[ 7.414818] invalid opcode: 0000 [#1] PREEMPT SMP
[ 7.416134] CPU: 1 PID: 1 Comm: init Not tainted 4.18.0-rc1-00020-gad17543 #1
[ 7.417936] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 7.420211] RIP: 0010:copy_process+0x7cc/0x1bb0
[ 7.421485] Code: 8b 3d 90 d5 7e 01 be c0 00 60 00 89 45 18 e8 fb 3f 13 00 48 85 c0 49 89 c4 74 cf 83 7d 18 ff 0f 84 5e fa ff ff e9 4e fa ff ff <0f> 0b 4c 89 e7 e8 1a 53 0d 00 85 c0 0f 85 7a fe ff ff 48 89 de 4c
[ 7.426166] RSP: 0018:ffff88000fc6bd68 EFLAGS: 00010206
[ 7.427570] RAX: 000000000fc6be18 RBX: 0000000001200011 RCX: 0000000000000000
[ 7.429373] RDX: 0000000000000000 RSI: 000000000000005a RDI: ffff88001fd83e80
[ 7.431164] RBP: ffff88000fc6be50 R08: 0000000000000004 R09: 0000000000000000
[ 7.432960] R10: ffff88001fd81000 R11: 0000000000000000 R12: ffff88001fd81000
[ 7.434752] R13: 00007efc763c49d0 R14: 0000000000000000 R15: 0000000000000000
[ 7.436535] FS: 00007efc763c4700(0000) GS:ffff88001d600000(0000) knlGS:0000000000000000
[ 7.438753] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 7.440258] CR2: 00007efc763c9000 CR3: 000000001f1e6000 CR4: 00000000000006a0
[ 7.442055] Call Trace:
[ 7.442902] ? kvm_sched_clock_read+0x9/0x20
[ 7.444113] ? kvm_clock_read+0x25/0x40
[ 7.445259] ? kvm_sched_clock_read+0x9/0x20
[ 7.446477] _do_fork+0xa5/0x3f0
[ 7.447484] ? __might_sleep+0x50/0x90
[ 7.448599] ? __might_fault+0x29/0x30
[ 7.449713] ? _copy_to_user+0x66/0x80
[ 7.450831] __x64_sys_clone+0x22/0x30
[ 7.451937] do_syscall_64+0x5d/0x110
[ 7.453041] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 7.454404] RIP: 0033:0x7efc75420126
[ 7.455475] Code: f7 d8 64 89 04 25 d4 02 00 00 64 4c 8b 0c 25 10 00 00 00 31 d2 4d 8d 91 d0 02 00 00 31 f6 bf 11 00 20 01 b8 38 00 00 00 0f 05 <48> 3d 00 f0 ff ff 0f 87 41 01 00 00 85 c0 41 89 c4 0f 85 4b 01 00
[ 7.460175] RSP: 002b:00007ffdd995e510 EFLAGS: 00000246 ORIG_RAX: 0000000000000038
[ 7.462308] RAX: ffffffffffffffda RBX: 00007ffdd995e510 RCX: 00007efc75420126
[ 7.464099] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000001200011
[ 7.465902] RBP: 00007ffdd995e570 R08: 0000000000000001 R09: 00007efc763c4700
[ 7.467687] R10: 00007efc763c49d0 R11: 0000000000000246 R12: 0000000000000000
[ 7.469488] R13: 0000000000000001 R14: 0000000000000000 R15: 0000000000000000
[ 7.471275] Modules linked in:
[ 7.472995] ---[ end trace 46857ad44e271661 ]---
[ 7.474292] RIP: 0010:copy_process+0x7cc/0x1bb0
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start 336f4d5a0cf9a9ef5714c8e73290820b1e4a7799 acb1872577b346bd15ab3a3f8dff780d6cca4b70 --
git bisect good db2c12af4e82ee2fa8e4d28ef17c0d62a234c6d0 # 16:45 G 11 0 0 0 Merge 'linux-review/Gustavo-A-R-Silva/mailbox-xgene-slimpro-Fix-potential-NULL-pointer-dereference/20180728-160049' into devel-hourly-2018073011
git bisect good 15146f5bf013d7dda5e95c74fc0da160362814e2 # 16:56 G 11 0 0 0 Merge 'linux-review/Jia-Ju-Bai/net-nvidia-forcedeth-Replace-GFP_ATOMIC-with-GFP_KERNEL-in-nv_probe/20180728-012049' into devel-hourly-2018073011
git bisect good 93328c4026996becf9d856e4ed74672f6e8e78b5 # 17:07 G 11 0 0 0 Merge 'linux-review/Jia-Ju-Bai/video-fbdev-add-the-dependency-of-broadsheetfb-in-Kconfig/20180727-180657' into devel-hourly-2018073011
git bisect good afd12c321712240fc66709000a6e6af17ffc0dfe # 17:20 G 11 0 0 0 Merge 'kvms390/hlp_vsie' into devel-hourly-2018073011
git bisect good c28736279a15d7753bac09ba667df00ce10322b5 # 17:37 G 11 0 0 0 Merge 'kvm/master' into devel-hourly-2018073011
git bisect bad 048cebf2bc43ba21ee23caa67a0c1756e5fc0de5 # 17:50 B 0 2 17 0 Merge 'userns/siginfo-testing' into devel-hourly-2018073011
git bisect good 9eeaab07d31f43a3d0e2b3797e60df92b6e1c466 # 18:14 G 11 0 0 0 Merge 'random/master' into devel-hourly-2018073011
git bisect good 3137667d9bbcba39db1090b6a7c2ea6ec3329ff6 # 18:23 G 11 0 0 0 Merge 'hch-misc/net-poll-cleanup' into devel-hourly-2018073011
git bisect good 0102498083d58d8b17759642c602b525215e1a54 # 18:36 G 11 0 0 0 signal: Pass pid type into group_send_sig_info
git bisect good 0729614992c946f6e8ccb9ef260aea1f06993df0 # 18:45 G 11 0 0 0 signal: Push pid type down into complete_signal.
git bisect good 73ce6fe330cf55ee1b7de4bcdc6e0706f1ee436c # 18:58 G 11 0 0 0 signal: Add calculate_sigpending()
git bisect bad ad1754318cc4dab034d959fcc8e2c34dec0e3d82 # 19:12 B 0 2 17 0 signal: Don't restart fork when signals come in.
git bisect good 159fb056514ad8b1459128e8140ef6ce2af30478 # 19:26 G 11 0 0 0 fork: Have new threads join on-going signal group stops
# first bad commit: [ad1754318cc4dab034d959fcc8e2c34dec0e3d82] signal: Don't restart fork when signals come in.
git bisect good 159fb056514ad8b1459128e8140ef6ce2af30478 # 19:28 G 31 0 0 0 fork: Have new threads join on-going signal group stops
# extra tests with debug options
git bisect bad ad1754318cc4dab034d959fcc8e2c34dec0e3d82 # 19:38 B 0 1 16 0 signal: Don't restart fork when signals come in.
# extra tests on HEAD of linux-devel/devel-hourly-2018073011
git bisect bad 336f4d5a0cf9a9ef5714c8e73290820b1e4a7799 # 19:43 B 0 365 384 0 0day head guard for 'devel-hourly-2018073011'
# extra tests on tree/branch userns/siginfo-testing
git bisect bad ad1754318cc4dab034d959fcc8e2c34dec0e3d82 # 19:45 B 0 13 29 0 signal: Don't restart fork when signals come in.
# extra tests with first bad commit reverted
git bisect good f6d9006c6389f5141f78c0f7ee0d31f9a0dcb2ad # 19:55 G 11 0 0 0 Revert "signal: Don't restart fork when signals come in."
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/lkp Intel Corporation
[lkp-robot] [mm] 9092c71bb7: blogbench.write_score -12.3% regression
by kernel test robot
Greeting,
FYI, we noticed a -12.3% regression of blogbench.write_score and a +9.6% improvement
of blogbench.read_score due to commit:
commit: 9092c71bb724dba2ecba849eae69e5c9d39bd3d2 ("mm: use sc->priority for slab shrink targets")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: blogbench
on test machine: 16 threads Intel(R) Xeon(R) CPU D-1541 @ 2.10GHz with 8G memory
with following parameters:
disk: 1SSD
fs: btrfs
cpufreq_governor: performance
test-description: Blogbench is a portable filesystem benchmark that tries to reproduce the load of a real-world busy file server.
test-url: https://www.pureftpd.org/project/blogbench
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/disk/fs/kconfig/rootfs/tbox_group/testcase:
gcc-7/performance/1SSD/btrfs/x86_64-rhel-7.2/debian-x86_64-2016-08-31.cgz/lkp-bdw-de1/blogbench
commit:
fcb2b0c577 ("mm: show total hugetlb memory consumption in /proc/meminfo")
9092c71bb7 ("mm: use sc->priority for slab shrink targets")
fcb2b0c577f145c7 9092c71bb724dba2ecba849eae
---------------- --------------------------
old value ±%stddev      %change      new value ±%stddev      metric
3256 -12.3% 2854 blogbench.write_score
1235237 ± 2% +9.6% 1354163 blogbench.read_score
28050912 -10.1% 25212230 blogbench.time.file_system_outputs
6481995 ± 3% +25.0% 8105320 ± 2% blogbench.time.involuntary_context_switches
906.00 +13.7% 1030 blogbench.time.percent_of_cpu_this_job_got
2552 +14.0% 2908 blogbench.time.system_time
173.80 +8.4% 188.32 blogbench.time.user_time
19353936 +3.6% 20045728 blogbench.time.voluntary_context_switches
8719514 +13.0% 9850451 softirqs.RCU
2.97 ± 5% -0.7 2.30 ± 3% mpstat.cpu.idle%
24.92 -6.5 18.46 mpstat.cpu.iowait%
0.65 ± 2% +0.1 0.75 mpstat.cpu.soft%
67.76 +6.7 74.45 mpstat.cpu.sys%
50206 -10.7% 44858 vmstat.io.bo
49.25 -9.1% 44.75 ± 2% vmstat.procs.b
224125 -1.8% 220135 vmstat.system.cs
48903 +10.7% 54134 vmstat.system.in
3460654 +10.8% 3834883 meminfo.Active
3380666 +11.0% 3752872 meminfo.Active(file)
1853849 -17.4% 1530415 meminfo.Inactive
1836507 -17.6% 1513054 meminfo.Inactive(file)
551311 -10.3% 494265 meminfo.SReclaimable
196525 -12.6% 171775 meminfo.SUnreclaim
747837 -10.9% 666040 meminfo.Slab
8.904e+08 -24.9% 6.683e+08 cpuidle.C1.time
22971020 -12.8% 20035820 cpuidle.C1.usage
2.518e+08 ± 3% -31.7% 1.72e+08 cpuidle.C1E.time
821393 ± 2% -33.3% 548003 cpuidle.C1E.usage
75460078 ± 2% -23.3% 57903768 ± 2% cpuidle.C3.time
136506 ± 3% -25.3% 101956 ± 3% cpuidle.C3.usage
56892498 ± 4% -23.3% 43608427 ± 4% cpuidle.C6.time
85034 ± 3% -33.9% 56184 ± 3% cpuidle.C6.usage
24373567 -24.5% 18395538 cpuidle.POLL.time
449033 ± 2% -10.8% 400493 cpuidle.POLL.usage
1832 +9.3% 2002 turbostat.Avg_MHz
22967645 -12.8% 20032521 turbostat.C1
18.43 -4.6 13.85 turbostat.C1%
821328 ± 2% -33.3% 547948 turbostat.C1E
5.21 ± 3% -1.6 3.56 turbostat.C1E%
136377 ± 3% -25.3% 101823 ± 3% turbostat.C3
1.56 ± 2% -0.4 1.20 ± 3% turbostat.C3%
84404 ± 3% -34.0% 55743 ± 3% turbostat.C6
1.17 ± 4% -0.3 0.90 ± 4% turbostat.C6%
25.93 -26.2% 19.14 turbostat.CPU%c1
0.12 ± 3% -19.1% 0.10 ± 9% turbostat.CPU%c3
14813304 +10.7% 16398388 turbostat.IRQ
38.19 +3.6% 39.56 turbostat.PkgWatt
4.51 +4.5% 4.71 turbostat.RAMWatt
8111200 ± 13% -63.2% 2986242 ± 48% proc-vmstat.compact_daemon_free_scanned
1026719 ± 30% -81.2% 193485 ± 30% proc-vmstat.compact_daemon_migrate_scanned
2444 ± 21% -63.3% 897.50 ± 20% proc-vmstat.compact_daemon_wake
8111200 ± 13% -63.2% 2986242 ± 48% proc-vmstat.compact_free_scanned
755491 ± 32% -81.6% 138856 ± 28% proc-vmstat.compact_isolated
1026719 ± 30% -81.2% 193485 ± 30% proc-vmstat.compact_migrate_scanned
137.75 ± 34% +2.8e+06% 3801062 ± 2% proc-vmstat.kswapd_inodesteal
6749 ± 20% -53.6% 3131 ± 12% proc-vmstat.kswapd_low_wmark_hit_quickly
844991 +11.2% 939487 proc-vmstat.nr_active_file
3900576 -10.5% 3490567 proc-vmstat.nr_dirtied
459789 -17.8% 377930 proc-vmstat.nr_inactive_file
137947 -10.3% 123720 proc-vmstat.nr_slab_reclaimable
49165 -12.6% 42989 proc-vmstat.nr_slab_unreclaimable
1382 ± 11% -26.2% 1020 ± 20% proc-vmstat.nr_writeback
3809266 -10.7% 3403350 proc-vmstat.nr_written
844489 +11.2% 938974 proc-vmstat.nr_zone_active_file
459855 -17.8% 378121 proc-vmstat.nr_zone_inactive_file
7055 ± 18% -52.0% 3389 ± 11% proc-vmstat.pageoutrun
33764911 ± 2% +21.3% 40946445 proc-vmstat.pgactivate
42044161 ± 2% +12.1% 47139065 proc-vmstat.pgdeactivate
92153 ± 20% -69.1% 28514 ± 24% proc-vmstat.pgmigrate_success
15212270 -10.7% 13591573 proc-vmstat.pgpgout
42053817 ± 2% +12.1% 47151755 proc-vmstat.pgrefill
11297 ±107% +1025.4% 127138 ± 21% proc-vmstat.pgscan_direct
19930162 -24.0% 15141439 proc-vmstat.pgscan_kswapd
19423629 -24.0% 14758807 proc-vmstat.pgsteal_kswapd
10868768 +184.8% 30950752 proc-vmstat.slabs_scanned
3361780 ± 3% -22.9% 2593327 ± 3% proc-vmstat.workingset_activate
4994722 ± 2% -43.2% 2835020 ± 2% proc-vmstat.workingset_refault
316427 -9.3% 286844 slabinfo.Acpi-Namespace.active_objs
3123 -9.4% 2829 slabinfo.Acpi-Namespace.active_slabs
318605 -9.4% 288623 slabinfo.Acpi-Namespace.num_objs
3123 -9.4% 2829 slabinfo.Acpi-Namespace.num_slabs
220514 -40.7% 130747 slabinfo.btrfs_delayed_node.active_objs
9751 -25.3% 7283 slabinfo.btrfs_delayed_node.active_slabs
263293 -25.3% 196669 slabinfo.btrfs_delayed_node.num_objs
9751 -25.3% 7283 slabinfo.btrfs_delayed_node.num_slabs
6383 ± 8% -12.0% 5615 ± 2% slabinfo.btrfs_delayed_ref_head.num_objs
9496 +15.5% 10969 slabinfo.btrfs_extent_buffer.active_objs
9980 +20.5% 12022 slabinfo.btrfs_extent_buffer.num_objs
260933 -10.7% 233136 slabinfo.btrfs_extent_map.active_objs
9392 -10.6% 8396 slabinfo.btrfs_extent_map.active_slabs
263009 -10.6% 235107 slabinfo.btrfs_extent_map.num_objs
9392 -10.6% 8396 slabinfo.btrfs_extent_map.num_slabs
271938 -10.3% 243802 slabinfo.btrfs_inode.active_objs
9804 -10.6% 8768 slabinfo.btrfs_inode.active_slabs
273856 -10.4% 245359 slabinfo.btrfs_inode.num_objs
9804 -10.6% 8768 slabinfo.btrfs_inode.num_slabs
7085 ± 5% -5.5% 6692 ± 2% slabinfo.btrfs_path.num_objs
311936 -16.4% 260797 slabinfo.dentry.active_objs
7803 -9.6% 7058 slabinfo.dentry.active_slabs
327759 -9.6% 296439 slabinfo.dentry.num_objs
7803 -9.6% 7058 slabinfo.dentry.num_slabs
2289 -23.3% 1755 ± 6% slabinfo.proc_inode_cache.active_objs
2292 -19.0% 1856 ± 6% slabinfo.proc_inode_cache.num_objs
261546 -12.3% 229485 slabinfo.radix_tree_node.active_objs
9404 -11.9% 8288 slabinfo.radix_tree_node.active_slabs
263347 -11.9% 232089 slabinfo.radix_tree_node.num_objs
9404 -11.9% 8288 slabinfo.radix_tree_node.num_slabs
1140424 ± 12% +40.2% 1598980 ± 14% sched_debug.cfs_rq:/.MIN_vruntime.max
790.55 +13.0% 893.20 ± 3% sched_debug.cfs_rq:/.exec_clock.stddev
1140425 ± 12% +40.2% 1598982 ± 14% sched_debug.cfs_rq:/.max_vruntime.max
0.83 ± 10% +21.5% 1.00 ± 8% sched_debug.cfs_rq:/.nr_running.avg
3.30 ± 99% +266.3% 12.09 ± 13% sched_debug.cfs_rq:/.removed.load_avg.avg
153.02 ± 97% +266.6% 560.96 ± 13% sched_debug.cfs_rq:/.removed.runnable_sum.avg
569.93 ±102% +173.2% 1556 ± 14% sched_debug.cfs_rq:/.removed.runnable_sum.stddev
1.42 ± 60% +501.5% 8.52 ± 34% sched_debug.cfs_rq:/.removed.util_avg.avg
19.88 ± 59% +288.9% 77.29 ± 16% sched_debug.cfs_rq:/.removed.util_avg.max
5.05 ± 58% +342.3% 22.32 ± 22% sched_debug.cfs_rq:/.removed.util_avg.stddev
791.44 ± 3% +47.7% 1168 ± 8% sched_debug.cfs_rq:/.util_avg.avg
1305 ± 6% +33.2% 1738 ± 5% sched_debug.cfs_rq:/.util_avg.max
450.25 ± 11% +66.2% 748.17 ± 14% sched_debug.cfs_rq:/.util_avg.min
220.82 ± 8% +21.1% 267.46 ± 5% sched_debug.cfs_rq:/.util_avg.stddev
363118 ± 11% -23.8% 276520 ± 11% sched_debug.cpu.avg_idle.avg
726003 ± 8% -30.8% 502313 ± 4% sched_debug.cpu.avg_idle.max
202629 ± 3% -32.2% 137429 ± 18% sched_debug.cpu.avg_idle.stddev
31.96 ± 28% +54.6% 49.42 ± 14% sched_debug.cpu.cpu_load[3].min
36.21 ± 25% +64.0% 59.38 ± 6% sched_debug.cpu.cpu_load[4].min
1007 ± 5% +20.7% 1216 ± 7% sched_debug.cpu.curr->pid.avg
4.50 ± 5% +14.8% 5.17 ± 5% sched_debug.cpu.nr_running.max
2476195 -11.8% 2185022 sched_debug.cpu.nr_switches.max
212888 -26.6% 156172 ± 3% sched_debug.cpu.nr_switches.stddev
3570 ± 2% -58.7% 1474 ± 2% sched_debug.cpu.nr_uninterruptible.max
-803.67 -28.7% -573.38 sched_debug.cpu.nr_uninterruptible.min
1004 ± 2% -50.4% 498.55 ± 3% sched_debug.cpu.nr_uninterruptible.stddev
2478809 -11.7% 2189310 sched_debug.cpu.sched_count.max
214130 -26.5% 157298 ± 3% sched_debug.cpu.sched_count.stddev
489430 ± 2% -16.6% 408309 ± 2% sched_debug.cpu.sched_goidle.avg
724333 ± 2% -28.2% 520263 ± 2% sched_debug.cpu.sched_goidle.max
457611 -18.1% 374746 ± 3% sched_debug.cpu.sched_goidle.min
62957 ± 2% -47.4% 33138 ± 3% sched_debug.cpu.sched_goidle.stddev
676053 ± 2% -15.4% 571816 ± 2% sched_debug.cpu.ttwu_local.max
42669 ± 3% +22.3% 52198 sched_debug.cpu.ttwu_local.min
151873 ± 2% -18.3% 124118 ± 2% sched_debug.cpu.ttwu_local.stddev
[ trend plot omitted -- the ASCII chart was whitespace-collapsed by the
  archive and is no longer legible; panel: blogbench.write_score, with the
  bisect-bad (O) samples clearly below the bisect-good (*) samples ]
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Xiaolong
[lkp-robot] [shmem] 85766f621c: vm-scalability.throughput -15.5% regression
by kernel test robot
Greeting,
FYI, we noticed a -15.5% regression of vm-scalability.throughput due to commit:
commit: 85766f621c492a9aa7904050ad2f80893ab2a8fd ("shmem: Convert shmem_add_to_page_cache to XArray")
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master
in testcase: vm-scalability
on test machine: 4 threads Intel(R) Core(TM) i3-3220 CPU @ 3.30GHz with 8G memory
with following parameters:
runtime: 300s
size: 16G
test: shm-xread-rand
cpufreq_governor: performance
test-description: The motivation behind this suite is to exercise functions and regions of the mm/ subsystem of the Linux kernel which are of interest to us.
test-url: https://git.kernel.org/cgit/linux/kernel/git/wfg/vm-scalability.git/
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/size/tbox_group/test/testcase:
gcc-7/performance/x86_64-rhel-7.2/debian-x86_64-2018-04-03.cgz/300s/16G/lkp-ivb-d02/shm-xread-rand/vm-scalability
commit:
85b67aaaa5 ("shmem: Convert find_swap_entry to XArray")
85766f621c ("shmem: Convert shmem_add_to_page_cache to XArray")
85b67aaaa59cd76a 85766f621c492a9aa7904050ad
---------------- --------------------------
old value ±%stddev      %change      new value ±%stddev      metric
190175 -15.5% 160733 ± 21% vm-scalability.throughput
47700 +99.1% 94958 ± 3% vm-scalability.median
0.03 ± 15% +38.6% 0.05 ± 81% vm-scalability.stddev
96.91 -47.6% 50.75 ± 5% vm-scalability.time.elapsed_time
96.91 -47.6% 50.75 ± 5% vm-scalability.time.elapsed_time.max
878016 ± 2% -98.3% 14720 ± 53% vm-scalability.time.involuntary_context_switches
4024745 -0.0% 4024725 vm-scalability.time.maximum_resident_set_size
1497244 -14.3% 1283635 ± 5% vm-scalability.time.minor_page_faults
4096 +0.0% 4096 vm-scalability.time.page_size
375.00 -54.1% 172.25 ± 22% vm-scalability.time.percent_of_cpu_this_job_got
11.18 ± 3% -53.6% 5.19 ± 5% vm-scalability.time.system_time
352.62 -76.3% 83.60 ± 27% vm-scalability.time.user_time
18105273 -56.2% 7921056 ± 24% vm-scalability.workload
2677 ± 4% -42.1% 1549 ± 9% interrupts.CAL:Function_call_interrupts
74956 ± 11% -49.1% 38140 ± 19% softirqs.RCU
10283 ± 4% +166.8% 27439 ± 11% softirqs.SCHED
196048 -43.5% 110676 ± 6% softirqs.TIMER
79.95 -0.2% 79.79 ± 2% boot-time.boot
70.20 ± 2% -0.0% 70.17 ± 2% boot-time.dhcp
299.76 ± 2% -0.2% 299.06 ± 2% boot-time.idle
72.80 ± 2% -0.2% 72.62 ± 2% boot-time.kernel_boot
2.80 ± 6% +46.8 49.56 ± 20% mpstat.cpu.idle%
0.03 ± 34% -0.0 0.02 ± 28% mpstat.cpu.soft%
4.65 ± 4% +0.2 4.83 ± 34% mpstat.cpu.sys%
92.52 -46.9 45.59 ± 20% mpstat.cpu.usr%
1029 -3.3% 995.00 vmstat.memory.buff
4825193 -3.4% 4663456 vmstat.memory.cache
2875885 +6.4% 3059946 vmstat.memory.free
0.00 -100.0% 0.00 vmstat.procs.b
4.00 -50.0% 2.00 ± 35% vmstat.procs.r
20841 +49.8% 31217 vmstat.system.cs
13121 +37.1% 17983 vmstat.system.in
43219 ± 10% +755.6% 369768 ± 61% cpuidle.C1.time
1375 ± 15% +1197.9% 17855 ± 68% cpuidle.C1.usage
1256903 ± 68% +3335.5% 43180596 ± 3% cpuidle.C1E.time
21308 ± 69% +3424.9% 751082 ± 4% cpuidle.C1E.usage
39665 ± 25% +351.9% 179230 ± 40% cpuidle.C3.time
160.75 ± 18% +311.8% 662.00 ± 32% cpuidle.C3.usage
7135782 ± 18% +666.0% 54661249 ± 27% cpuidle.C6.time
7706 ± 17% +668.0% 59186 ± 23% cpuidle.C6.usage
301.25 ± 39% +1868.8% 5931 ± 37% cpuidle.POLL.time
29.00 ± 44% +3090.5% 925.25 ± 49% cpuidle.POLL.usage
2510 ± 20% +2.6% 2576 ± 16% slabinfo.anon_vma.active_objs
2607 ± 13% +0.7% 2627 ± 15% slabinfo.anon_vma.num_objs
1193 ± 10% +2.9% 1228 ± 9% slabinfo.cred_jar.active_objs
1193 ± 10% +2.9% 1228 ± 9% slabinfo.cred_jar.num_objs
107.75 ± 13% +107.2% 223.25 ± 54% slabinfo.dmaengine-unmap-16.active_objs
115.25 ± 19% +111.9% 244.25 ± 61% slabinfo.dmaengine-unmap-16.num_objs
7200 ± 12% +8.9% 7840 ± 6% slabinfo.kmalloc-32.active_objs
7200 ± 12% +8.9% 7840 ± 6% slabinfo.kmalloc-32.num_objs
5248 ± 8% -0.2% 5238 ± 4% slabinfo.kmalloc-8.active_objs
5248 ± 8% -0.2% 5238 ± 4% slabinfo.kmalloc-8.num_objs
3455 ± 5% +19.2% 4119 ± 4% slabinfo.pid.active_objs
4570 ± 7% +0.9% 4613 ± 5% slabinfo.pid.num_objs
24874 -1.4% 24521 slabinfo.radix_tree_node.active_objs
24874 -1.3% 24563 slabinfo.radix_tree_node.num_objs
3221 -46.7% 1717 ± 18% turbostat.Avg_MHz
97.84 -45.7 52.18 ± 18% turbostat.Busy%
3300 -0.0% 3299 turbostat.Bzy_MHz
1375 ± 15% +1198.2% 17856 ± 68% turbostat.C1
0.01 +0.2 0.18 ± 69% turbostat.C1%
21308 ± 69% +3425.0% 751108 ± 4% turbostat.C1E
0.32 ± 69% +20.5 20.78 ± 2% turbostat.C1E%
161.25 ± 17% +310.7% 662.25 ± 32% turbostat.C3
0.01 +0.1 0.09 ± 36% turbostat.C3%
7713 ± 17% +667.3% 59186 ± 23% turbostat.C6
1.82 ± 18% +24.9 26.73 ± 33% turbostat.C6%
1.38 ± 22% +3185.0% 45.33 ± 14% turbostat.CPU%c1
0.01 ± 34% -60.0% 0.01 ±100% turbostat.CPU%c3
0.77 ± 44% +220.4% 2.48 ±126% turbostat.CPU%c6
13.59 -3.8% 13.08 ± 6% turbostat.CorWatt
49.50 -3.5% 47.75 ± 2% turbostat.CoreTmp
350.00 +0.0% 350.00 turbostat.GFXMHz
1305924 -27.1% 952039 ± 4% turbostat.IRQ
0.04 ± 30% -100.0% 0.00 turbostat.Pkg%pc2
0.12 ± 10% -100.0% 0.00 turbostat.Pkg%pc6
47.00 ± 4% -1.6% 46.25 ± 2% turbostat.PkgTmp
30.99 -1.9% 30.41 ± 3% turbostat.PkgWatt
3292 -0.0% 3292 turbostat.TSC_MHz
5.878e+10 ± 3% -51.3% 2.861e+10 ± 53% perf-stat.branch-instructions
1.64 ± 14% +0.8 2.45 ± 86% perf-stat.branch-miss-rate%
9.698e+08 ± 18% -49.4% 4.904e+08 ±105% perf-stat.branch-misses
91.01 -18.6 72.41 ± 26% perf-stat.cache-miss-rate%
4.227e+09 -55.8% 1.868e+09 ± 87% perf-stat.cache-misses
4.645e+09 -53.7% 2.148e+09 ± 78% perf-stat.cache-references
2026235 -19.2% 1636525 ± 4% perf-stat.context-switches
4.83 ± 4% -50.7% 2.39 ± 32% perf-stat.cpi
1.255e+12 -72.6% 3.435e+11 ± 70% perf-stat.cpu-cycles
13403 ± 17% -85.7% 1914 ± 5% perf-stat.cpu-migrations
5.39 ± 3% -1.8 3.57 ± 58% perf-stat.dTLB-load-miss-rate%
3.877e+09 -57.0% 1.668e+09 ± 88% perf-stat.dTLB-load-misses
6.807e+10 ± 2% -50.5% 3.37e+10 ± 57% perf-stat.dTLB-loads
0.02 ± 20% +0.0 0.07 ± 69% perf-stat.dTLB-store-miss-rate%
6310413 ± 22% -6.8% 5881564 ± 31% perf-stat.dTLB-store-misses
2.53e+10 ± 2% -47.4% 1.331e+10 ± 49% perf-stat.dTLB-stores
89.37 -18.3 71.07 ± 14% perf-stat.iTLB-load-miss-rate%
10852282 ± 31% +98.0% 21484353 ± 50% perf-stat.iTLB-load-misses
1333714 ± 48% +446.8% 7292385 ± 6% perf-stat.iTLB-loads
2.601e+11 ± 3% -50.8% 1.278e+11 ± 51% perf-stat.instructions
25902 ± 23% -66.0% 8814 ± 95% perf-stat.instructions-per-iTLB-miss
0.21 ± 4% +126.5% 0.47 ± 33% perf-stat.ipc
1626010 -16.2% 1362182 ± 5% perf-stat.minor-faults
1626011 -16.2% 1362184 ± 5% perf-stat.page-faults
14363 ± 3% +4.0% 14938 ± 40% perf-stat.path-length
306033 ± 7% +501.9% 1842050 ± 51% meminfo.Active
305962 ± 7% +502.0% 1841980 ± 51% meminfo.Active(anon)
74019 -48.2% 38339 ± 5% meminfo.AnonHugePages
235389 -1.5% 231743 meminfo.AnonPages
1041 -2.2% 1017 meminfo.Buffers
4827393 -2.3% 4714606 meminfo.Cached
18164 ± 8% +32.3% 24031 ± 7% meminfo.CmaFree
204800 +0.0% 204800 meminfo.CmaTotal
4023392 +0.0% 4023392 meminfo.CommitLimit
4265286 -2.5% 4158052 meminfo.Committed_AS
8191029 +0.1% 8195721 meminfo.DirectMap2M
79997 ± 5% -5.9% 75305 ± 2% meminfo.DirectMap4k
2048 +0.0% 2048 meminfo.Hugepagesize
3877131 -42.6% 2224654 ± 41% meminfo.Inactive
3875926 -42.6% 2223471 ± 41% meminfo.Inactive(anon)
1205 -1.8% 1183 meminfo.Inactive(file)
3603 -1.7% 3541 meminfo.KernelStack
3932052 -2.8% 3822786 meminfo.Mapped
2715637 +5.0% 2850785 meminfo.MemAvailable
2824985 +4.8% 2960247 meminfo.MemFree
8046788 +0.0% 8046788 meminfo.MemTotal
34324 -50.5% 16985 ± 18% meminfo.PageTables
48267 -0.4% 48065 meminfo.SReclaimable
20732 -0.7% 20579 meminfo.SUnreclaim
3946053 -2.9% 3833269 meminfo.Shmem
69000 -0.5% 68645 meminfo.Slab
881118 -0.0% 881102 meminfo.Unevictable
3.436e+10 +0.0% 3.436e+10 meminfo.VmallocTotal
76751 ± 4% +500.6% 460938 ± 51% proc-vmstat.nr_active_anon
58905 -1.5% 58008 proc-vmstat.nr_anon_pages
67403 +4.7% 70593 proc-vmstat.nr_dirty_background_threshold
134972 +4.7% 141360 proc-vmstat.nr_dirty_threshold
1205133 -2.2% 1178739 proc-vmstat.nr_file_pages
4613 ± 4% +29.8% 5989 ± 7% proc-vmstat.nr_free_cma
708248 +4.5% 740205 proc-vmstat.nr_free_pages
966805 -42.6% 555343 ± 41% proc-vmstat.nr_inactive_anon
300.50 -2.0% 294.50 proc-vmstat.nr_inactive_file
3601 -1.7% 3542 proc-vmstat.nr_kernel_stack
978954 -2.9% 950733 proc-vmstat.nr_mapped
8544 -50.4% 4237 ± 18% proc-vmstat.nr_page_table_pages
984538 -2.7% 958151 proc-vmstat.nr_shmem
12063 -0.5% 12004 proc-vmstat.nr_slab_reclaimable
5182 -0.7% 5144 proc-vmstat.nr_slab_unreclaimable
220278 -0.0% 220275 proc-vmstat.nr_unevictable
76754 ± 4% +500.5% 460940 ± 51% proc-vmstat.nr_zone_active_anon
966805 -42.6% 555343 ± 41% proc-vmstat.nr_zone_inactive_anon
300.50 -2.0% 294.50 proc-vmstat.nr_zone_inactive_file
220278 -0.0% 220275 proc-vmstat.nr_zone_unevictable
1162127 -3.6% 1120485 proc-vmstat.numa_hit
1162127 -3.6% 1120485 proc-vmstat.numa_local
1006996 -5.2% 954325 ± 5% proc-vmstat.pgactivate
247442 -10.8% 220800 proc-vmstat.pgalloc_dma32
936515 -3.6% 903231 proc-vmstat.pgalloc_normal
1630736 -16.2% 1366671 ± 5% proc-vmstat.pgfault
595443 ± 62% +1.1% 601843 ± 83% proc-vmstat.pgfree
2173 +0.0% 2173 proc-vmstat.pgpgin
2048 +0.0% 2049 proc-vmstat.pgpgout
168.36 ±173% +248.0% 585.83 ±173% sched_debug.cfs_rq:/.MIN_vruntime.avg
673.46 ±173% +248.0% 2343 ±173% sched_debug.cfs_rq:/.MIN_vruntime.max
0.00 +0.0% 0.00 sched_debug.cfs_rq:/.MIN_vruntime.min
291.62 ±173% +248.0% 1014 ±173% sched_debug.cfs_rq:/.MIN_vruntime.stddev
29757 -99.9% 39.23 ± 25% sched_debug.cfs_rq:/.exec_clock.avg
30069 -99.9% 40.37 ± 24% sched_debug.cfs_rq:/.exec_clock.max
29419 -99.9% 37.80 ± 26% sched_debug.cfs_rq:/.exec_clock.min
248.80 ± 68% -99.6% 1.02 ± 13% sched_debug.cfs_rq:/.exec_clock.stddev
292413 ± 19% +13.2% 331037 ± 31% sched_debug.cfs_rq:/.load.avg
512518 ± 46% +11.9% 573270 ± 69% sched_debug.cfs_rq:/.load.max
187469 ± 7% +9.9% 205951 ± 11% sched_debug.cfs_rq:/.load.min
133934 ± 76% +10.8% 148390 ±112% sched_debug.cfs_rq:/.load.stddev
406.56 ± 15% +46.8% 596.94 ± 7% sched_debug.cfs_rq:/.load_avg.avg
634.38 ± 21% +85.7% 1178 ± 28% sched_debug.cfs_rq:/.load_avg.max
240.25 ± 11% +10.7% 266.00 ± 18% sched_debug.cfs_rq:/.load_avg.min
167.36 ± 42% +130.5% 385.79 ± 37% sched_debug.cfs_rq:/.load_avg.stddev
168.36 ±173% +248.0% 585.83 ±173% sched_debug.cfs_rq:/.max_vruntime.avg
673.46 ±173% +248.0% 2343 ±173% sched_debug.cfs_rq:/.max_vruntime.max
0.00 +0.0% 0.00 sched_debug.cfs_rq:/.max_vruntime.min
291.62 ±173% +248.0% 1014 ±173% sched_debug.cfs_rq:/.max_vruntime.stddev
131544 -93.1% 9126 sched_debug.cfs_rq:/.min_vruntime.avg
140526 ± 2% -89.2% 15201 ± 15% sched_debug.cfs_rq:/.min_vruntime.max
121759 ± 3% -97.1% 3486 ± 22% sched_debug.cfs_rq:/.min_vruntime.min
7074 ± 31% -36.0% 4528 ± 20% sched_debug.cfs_rq:/.min_vruntime.stddev
1.03 ± 5% +3.0% 1.06 ± 10% sched_debug.cfs_rq:/.nr_running.avg
1.12 ± 19% +11.1% 1.25 ± 34% sched_debug.cfs_rq:/.nr_running.max
1.00 +0.0% 1.00 sched_debug.cfs_rq:/.nr_running.min
0.05 ±173% +100.0% 0.11 ±173% sched_debug.cfs_rq:/.nr_running.stddev
3.41 ± 7% -100.0% 0.00 sched_debug.cfs_rq:/.nr_spread_over.avg
6.88 ± 13% -100.0% 0.00 sched_debug.cfs_rq:/.nr_spread_over.max
1.12 ± 72% -100.0% 0.00 sched_debug.cfs_rq:/.nr_spread_over.min
2.16 ± 27% -100.0% 0.00 sched_debug.cfs_rq:/.nr_spread_over.stddev
178.72 ± 3% -53.9% 82.38 ± 26% sched_debug.cfs_rq:/.runnable_load_avg.avg
274.75 ± 4% -50.9% 135.00 ± 38% sched_debug.cfs_rq:/.runnable_load_avg.max
141.12 ± 4% -70.8% 41.25 ± 44% sched_debug.cfs_rq:/.runnable_load_avg.min
56.87 ± 7% -35.1% 36.91 ± 66% sched_debug.cfs_rq:/.runnable_load_avg.stddev
212710 ± 24% -63.1% 78556 ± 28% sched_debug.cfs_rq:/.runnable_weight.avg
400157 ± 53% -69.9% 120568 ± 40% sched_debug.cfs_rq:/.runnable_weight.max
141898 ± 2% -70.5% 41804 ± 44% sched_debug.cfs_rq:/.runnable_weight.min
109826 ± 83% -72.0% 30704 ± 76% sched_debug.cfs_rq:/.runnable_weight.stddev
-956.92 -348.7% 2379 ±119% sched_debug.cfs_rq:/.spread0.avg
8024 ± 76% +5.3% 8451 ± 28% sched_debug.cfs_rq:/.spread0.max
-10742 -69.7% -3260 sched_debug.cfs_rq:/.spread0.min
7074 ± 31% -36.0% 4527 ± 20% sched_debug.cfs_rq:/.spread0.stddev
1028 ± 2% +0.5% 1033 ± 3% sched_debug.cfs_rq:/.util_avg.avg
1105 ± 5% +14.1% 1261 ± 6% sched_debug.cfs_rq:/.util_avg.max
929.62 ± 5% -8.2% 853.00 ± 14% sched_debug.cfs_rq:/.util_avg.min
67.44 ± 63% +130.4% 155.36 ± 41% sched_debug.cfs_rq:/.util_avg.stddev
222.56 ± 34% -71.7% 62.88 ± 58% sched_debug.cfs_rq:/.util_est_enqueued.avg
503.88 ± 33% -51.8% 242.75 ± 59% sched_debug.cfs_rq:/.util_est_enqueued.max
26.00 ±162% -95.2% 1.25 ± 34% sched_debug.cfs_rq:/.util_est_enqueued.min
195.39 ± 35% -46.8% 103.88 ± 60% sched_debug.cfs_rq:/.util_est_enqueued.stddev
473454 ± 14% +38.0% 653593 ± 9% sched_debug.cpu.avg_idle.avg
832956 ± 6% -3.1% 806762 ± 9% sched_debug.cpu.avg_idle.max
193874 ± 57% +124.3% 434865 ± 36% sched_debug.cpu.avg_idle.min
260441 ± 13% -45.2% 142845 ± 55% sched_debug.cpu.avg_idle.stddev
110437 -27.4% 80213 ± 2% sched_debug.cpu.clock.avg
110438 -27.4% 80214 ± 2% sched_debug.cpu.clock.max
110436 -27.4% 80211 ± 2% sched_debug.cpu.clock.min
0.83 ± 13% +25.6% 1.04 ± 12% sched_debug.cpu.clock.stddev
110437 -27.4% 80213 ± 2% sched_debug.cpu.clock_task.avg
110438 -27.4% 80214 ± 2% sched_debug.cpu.clock_task.max
110436 -27.4% 80211 ± 2% sched_debug.cpu.clock_task.min
0.83 ± 13% +25.6% 1.04 ± 12% sched_debug.cpu.clock_task.stddev
189.84 ± 4% -54.7% 85.94 ± 17% sched_debug.cpu.cpu_load[0].avg
289.12 ± 10% -44.1% 161.50 ± 23% sched_debug.cpu.cpu_load[0].max
141.12 ± 4% -71.1% 40.75 ± 17% sched_debug.cpu.cpu_load[0].min
63.08 ± 20% -21.9% 49.24 ± 30% sched_debug.cpu.cpu_load[0].stddev
185.59 ± 4% -52.2% 88.69 ± 12% sched_debug.cpu.cpu_load[1].avg
282.50 ± 6% -47.1% 149.50 ± 29% sched_debug.cpu.cpu_load[1].max
137.25 ± 4% -67.8% 44.25 ± 18% sched_debug.cpu.cpu_load[1].min
60.14 ± 14% -28.4% 43.04 ± 46% sched_debug.cpu.cpu_load[1].stddev
184.72 ± 3% -50.5% 91.38 ± 12% sched_debug.cpu.cpu_load[2].avg
275.38 ± 5% -47.7% 144.00 ± 30% sched_debug.cpu.cpu_load[2].max
138.75 ± 6% -62.3% 52.25 ± 24% sched_debug.cpu.cpu_load[2].min
55.48 ± 11% -30.6% 38.47 ± 50% sched_debug.cpu.cpu_load[2].stddev
186.38 ± 4% -47.5% 97.94 ± 14% sched_debug.cpu.cpu_load[3].avg
258.75 ± 4% -44.8% 142.75 ± 27% sched_debug.cpu.cpu_load[3].max
139.88 ± 6% -57.5% 59.50 ± 25% sched_debug.cpu.cpu_load[3].min
45.63 ± 13% -26.4% 33.58 ± 46% sched_debug.cpu.cpu_load[3].stddev
188.03 ± 5% -42.5% 108.06 ± 13% sched_debug.cpu.cpu_load[4].avg
240.62 ± 5% -40.2% 144.00 ± 23% sched_debug.cpu.cpu_load[4].max
142.50 ± 9% -51.9% 68.50 ± 18% sched_debug.cpu.cpu_load[4].min
36.69 ± 22% -21.2% 28.92 ± 43% sched_debug.cpu.cpu_load[4].stddev
966.06 ± 6% -25.7% 718.06 ± 9% sched_debug.cpu.curr->pid.avg
1299 -33.2% 868.25 sched_debug.cpu.curr->pid.max
705.75 ± 34% -40.1% 422.75 ± 37% sched_debug.cpu.curr->pid.min
237.21 ± 31% -24.8% 178.41 ± 35% sched_debug.cpu.curr->pid.stddev
325755 ± 34% -18.5% 265488 ± 5% sched_debug.cpu.load.avg
617677 ± 67% -44.1% 345304 ± 6% sched_debug.cpu.load.max
187469 ± 7% +9.9% 205951 ± 11% sched_debug.cpu.load.min
176372 ±100% -69.5% 53831 ± 27% sched_debug.cpu.load.stddev
500000 +0.0% 500000 sched_debug.cpu.max_idle_balance_cost.avg
500000 +0.0% 500000 sched_debug.cpu.max_idle_balance_cost.max
500000 +0.0% 500000 sched_debug.cpu.max_idle_balance_cost.min
4294 -0.0% 4294 sched_debug.cpu.next_balance.avg
4294 -0.0% 4294 sched_debug.cpu.next_balance.max
4294 -0.0% 4294 sched_debug.cpu.next_balance.min
0.00 ± 36% -10.2% 0.00 ± 65% sched_debug.cpu.next_balance.stddev
37966 -78.9% 8004 ± 4% sched_debug.cpu.nr_load_updates.avg
40554 ± 2% -70.4% 11997 ± 21% sched_debug.cpu.nr_load_updates.max
34281 ± 2% -87.1% 4427 ± 15% sched_debug.cpu.nr_load_updates.min
2438 ± 22% +19.2% 2905 ± 41% sched_debug.cpu.nr_load_updates.stddev
2.25 ± 17% +8.3% 2.44 ± 29% sched_debug.cpu.nr_running.avg
3.12 ± 20% +20.0% 3.75 ± 54% sched_debug.cpu.nr_running.max
1.62 ± 13% -23.1% 1.25 ± 34% sched_debug.cpu.nr_running.min
0.63 ± 22% +59.9% 1.00 ± 85% sched_debug.cpu.nr_running.stddev
254977 -96.8% 8201 ± 4% sched_debug.cpu.nr_switches.avg
674868 ± 21% -98.2% 11993 ± 22% sched_debug.cpu.nr_switches.max
37283 ± 51% -85.3% 5491 ± 17% sched_debug.cpu.nr_switches.min
258652 ± 28% -99.0% 2585 ± 58% sched_debug.cpu.nr_switches.stddev
0.03 ±173% +700.0% 0.25 ± 70% sched_debug.cpu.nr_uninterruptible.avg
22.25 ± 22% -34.8% 14.50 ± 18% sched_debug.cpu.nr_uninterruptible.max
-20.75 -45.8% -11.25 sched_debug.cpu.nr_uninterruptible.min
16.52 ± 24% -38.8% 10.12 ± 14% sched_debug.cpu.nr_uninterruptible.stddev
247705 -99.6% 886.94 ± 51% sched_debug.cpu.sched_count.avg
667148 ± 21% -99.6% 2595 ± 62% sched_debug.cpu.sched_count.max
30369 ± 59% -99.5% 143.75 ± 7% sched_debug.cpu.sched_count.min
258206 ± 27% -99.6% 1005 ± 66% sched_debug.cpu.sched_count.stddev
2815 ± 66% -100.0% 0.00 sched_debug.cpu.sched_goidle.avg
8181 ± 73% -100.0% 0.00 sched_debug.cpu.sched_goidle.max
29.38 ±173% -100.0% 0.00 sched_debug.cpu.sched_goidle.min
3344 ± 71% -100.0% 0.00 sched_debug.cpu.sched_goidle.stddev
123801 -99.6% 440.69 ± 52% sched_debug.cpu.ttwu_count.avg
334777 ± 21% -99.6% 1307 ± 61% sched_debug.cpu.ttwu_count.max
14411 ± 67% -99.4% 81.25 ± 9% sched_debug.cpu.ttwu_count.min
129727 ± 27% -99.6% 507.89 ± 64% sched_debug.cpu.ttwu_count.stddev
122356 -99.7% 411.38 ± 55% sched_debug.cpu.ttwu_local.avg
332928 ± 21% -99.6% 1279 ± 62% sched_debug.cpu.ttwu_local.max
12947 ± 71% -99.6% 50.50 ± 8% sched_debug.cpu.ttwu_local.min
129617 ± 27% -99.6% 508.69 ± 64% sched_debug.cpu.ttwu_local.stddev
110435 -27.4% 80211 ± 2% sched_debug.cpu_clk
996147 +0.0% 996147 sched_debug.dl_rq:.dl_bw->bw.avg
996147 +0.0% 996147 sched_debug.dl_rq:.dl_bw->bw.max
996147 +0.0% 996147 sched_debug.dl_rq:.dl_bw->bw.min
4.295e+09 -0.0% 4.295e+09 sched_debug.jiffies
110435 -27.4% 80211 ± 2% sched_debug.ktime
950.00 +0.0% 950.00 sched_debug.rt_rq:/.rt_runtime.avg
950.00 +0.0% 950.00 sched_debug.rt_rq:/.rt_runtime.max
950.00 +0.0% 950.00 sched_debug.rt_rq:/.rt_runtime.min
110441 -27.4% 80216 ± 2% sched_debug.sched_clk
1.00 +0.0% 1.00 sched_debug.sched_clock_stable()
4118331 +0.0% 4118331 sched_debug.sysctl_sched.sysctl_sched_features
18.00 +0.0% 18.00 sched_debug.sysctl_sched.sysctl_sched_latency
2.25 +0.0% 2.25 sched_debug.sysctl_sched.sysctl_sched_min_granularity
1.00 +0.0% 1.00 sched_debug.sysctl_sched.sysctl_sched_tunable_scaling
3.00 +0.0% 3.00 sched_debug.sysctl_sched.sysctl_sched_wakeup_granularity
[ trend plots omitted -- the ASCII charts were whitespace-collapsed by the
  archive and are no longer legible; panels shown:
    vm-scalability.workload
    vm-scalability.time.user_time
    vm-scalability.time.system_time
    vm-scalability.time.percent_of_cpu_this_job_got
    vm-scalability.time.involuntary_context_switches
  in each panel the bisect-bad (O) samples are clearly separated from the
  bisect-good (*) samples ]
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Xiaolong
[lkp-robot] [confidence: ] 7757d607c6 [ 56.996267] BUG: Bad page map in process trinity-c2 pte:0d755065 pmd:0d55b067
by kernel test robot
Greetings,
0day kernel testing robot got the below dmesg and the first bad commit is
https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git x86/pti
commit 7757d607c6b31867777de42e1fb0210b9c5d8b70
Author: Joerg Roedel <jroedel(a)suse.de>
AuthorDate: Wed Jul 18 11:41:14 2018 +0200
Commit: Thomas Gleixner <tglx(a)linutronix.de>
CommitDate: Fri Jul 20 01:11:48 2018 +0200
x86/pti: Allow CONFIG_PAGE_TABLE_ISOLATION for x86_32
Allow PTI to be compiled on x86_32.
Signed-off-by: Joerg Roedel <jroedel(a)suse.de>
Signed-off-by: Thomas Gleixner <tglx(a)linutronix.de>
Tested-by: Pavel Machek <pavel(a)ucw.cz>
Cc: "H . Peter Anvin" <hpa(a)zytor.com>
Cc: linux-mm(a)kvack.org
Cc: Linus Torvalds <torvalds(a)linux-foundation.org>
Cc: Andy Lutomirski <luto(a)kernel.org>
Cc: Dave Hansen <dave.hansen(a)intel.com>
Cc: Josh Poimboeuf <jpoimboe(a)redhat.com>
Cc: Juergen Gross <jgross(a)suse.com>
Cc: Peter Zijlstra <peterz(a)infradead.org>
Cc: Borislav Petkov <bp(a)alien8.de>
Cc: Jiri Kosina <jkosina(a)suse.cz>
Cc: Boris Ostrovsky <boris.ostrovsky(a)oracle.com>
Cc: Brian Gerst <brgerst(a)gmail.com>
Cc: David Laight <David.Laight(a)aculab.com>
Cc: Denys Vlasenko <dvlasenk(a)redhat.com>
Cc: Eduardo Valentin <eduval(a)amazon.com>
Cc: Greg KH <gregkh(a)linuxfoundation.org>
Cc: Will Deacon <will.deacon(a)arm.com>
Cc: aliguori(a)amazon.com
Cc: daniel.gruss(a)iaik.tugraz.at
Cc: hughd(a)google.com
Cc: keescook(a)google.com
Cc: Andrea Arcangeli <aarcange(a)redhat.com>
Cc: Waiman Long <llong(a)redhat.com>
Cc: "David H . Gutteridge" <dhgutteridge(a)sympatico.ca>
Cc: joro(a)8bytes.org
Link: https://lkml.kernel.org/r/1531906876-13451-38-git-send-email-joro@8bytes.org
6df934b92a x86/ldt: Enable LDT user-mapping for PAE
7757d607c6 x86/pti: Allow CONFIG_PAGE_TABLE_ISOLATION for x86_32
8c934e01a7 x86/pti: Check the return value of pti_user_pagetable_walk_pmd()
4edd5aa38c Merge branch 'linus'
+-----------------------------------------------+------------+------------+------------+------------+
| | 6df934b92a | 7757d607c6 | 8c934e01a7 | 4edd5aa38c |
+-----------------------------------------------+------------+------------+------------+------------+
| boot_successes | 1286 | 307 | 282 | 622 |
| boot_failures | 5 | 3 | 4 | 3 |
| Mem-Info | 5 | 1 | 2 | 1 |
| invoked_oom-killer:gfp_mask=0x | 3 | | | |
| BUG:Bad_page_map_in_process | 0 | 2 | 2 | 1 |
| BUG:Bad_page_state_in_process | 0 | 2 | 2 | 1 |
| BUG:Bad_rss-counter_state_mm:(ptrval)idx:#val | 0 | 1 | 2 | 1 |
| kernel_BUG_at_mm/filemap.c | 0 | 0 | 1 | |
| invalid_opcode:#[##] | 0 | 0 | 1 | |
| EIP:unaccount_page_cache_page | 0 | 0 | 1 | |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 0 | 1 | 1 |
| BUG:unable_to_handle_kernel | 0 | 0 | 0 | 1 |
| Oops:#[##] | 0 | 0 | 0 | 1 |
| EIP:put_pid | 0 | 0 | 0 | 1 |
+-----------------------------------------------+------------+------------+------------+------------+
[child3:1823] fdatasync (148) returned ENOSYS, marking as inactive.
[main] 12067 iterations. [F:7570 S:4509 HI:2917]
[ 28.550832] warning: process `trinity-c0' used the obsolete bdflush system call
[ 28.558277] Fix your initscripts?
[ 50.480036] trinity-c0 (1412) used greatest stack depth: 5416 bytes left
[ 56.996267] BUG: Bad page map in process trinity-c2 pte:0d755065 pmd:0d55b067
[ 56.997421] page:bfa00aa0 count:1 mapcount:-1 mapping:00000000 index:0x0
[ 56.998417] flags: 0xc000014(referenced|dirty)
[ 56.999087] raw: 0c000014 00000100 00000200 00000000 00000000 00000000 fffffffe 00000001
[ 57.000311] page dumped because: bad pte
[ 57.000909] addr:21632421 vm_flags:00100873 anon_vma:d90b494d mapping:7c8e0e7b index:3ec
[ 57.002102] file:trinity fault:filemap_fault mmap:generic_file_mmap readpage:simple_readpage
[ 57.003344] CPU: 0 PID: 1670 Comm: trinity-c2 Not tainted 4.18.0-rc4-00209-g7757d60 #1
[ 57.004505] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 57.005724] Call Trace:
[ 57.006112] dump_stack+0x75/0xa9
[ 57.006612] ? dcache_readdir+0x15a/0x15a
[ 57.007221] print_bad_pte+0x166/0x180
[ 57.007789] ? __lock_page_or_retry+0xa1/0xa1
[ 57.008436] ? read_cache_page_gfp+0x1c/0x1c
[ 57.009072] ? dcache_readdir+0x15a/0x15a
[ 57.009676] unmap_page_range+0x3e1/0x596
[ 57.010296] unmap_single_vma+0x8b/0x95
[ 57.010882] unmap_vmas+0x27/0x36
[ 57.011380] exit_mmap+0x93/0x121
[ 57.011905] mmput+0x44/0xbf
[ 57.012342] do_exit+0x31d/0x802
[ 57.012836] ? _raw_spin_unlock_irq+0x22/0x44
[ 57.013489] do_group_exit+0x30/0x86
[ 57.014032] get_signal+0x5e0/0x605
[ 57.014560] do_signal+0x24/0x4c1
[ 57.015071] ? _raw_spin_unlock_irqrestore+0x3a/0x5f
[ 57.015809] ? trace_hardirqs_on_caller+0x14b/0x166
[ 57.016549] ? trace_hardirqs_on_caller+0x14b/0x166
[ 57.017276] exit_to_usermode_loop+0x37/0x69
[ 57.017919] do_fast_syscall_32+0x217/0x249
[ 57.018542] entry_SYSENTER_32+0x70/0xc8
[ 57.019127] EIP: 0xa7fc8bf9
[ 57.019546] Code: ff 85 d2 74 02 89 02 5d c3 8b 04 24 c3 8b 1c 24 c3 8b 34 24 c3 90 90 90 90 90 90 90 90 90 90 90 90 51 52 55 89 e5 0f 34 cd 80 <5d> 5a 59 c3 90 90 90 90 eb 0d 90 90 90 90 90 90 90 90 90 90 90 90
[ 57.022438] EAX: fffffe00 EBX: a6d55000 ECX: 00085000 EDX: 00000002
[ 57.023356] ESI: 00000008 EDI: 00400000 EBP: fffffffc ESP: afe06e5c
[ 57.024269] DS: 007b ES: 007b FS: 0000 GS: 0033 SS: 007b EFLAGS: 00000296
[ 57.025347] Disabling lock debugging due to kernel taint
[main] trace_fd was -1
[main] kernel became tainted! (32/0) Last seed was 2560220593
trinity: Detected kernel tainting. Last seed was 2560220593
[main] exit_reason=7, but 3 children still running.
[child1:2134] trace_fd was -1
[main] trace_fd was -1
[main] kernel became tainted! (32/0) Last seed was 1446110596
trinity: Detected kernel tainting. Last seed was 1446110596
[ 57.096675] BUG: Bad page state in process trinity-c2 pfn:0d755
[ 57.097586] page:bfa00aa0 count:0 mapcount:-1 mapping:00000000 index:0x0
[ 57.098572] flags: 0xc000014(referenced|dirty)
[ 57.099238] raw: 0c000014 bdf45ce4 bdf45ce4 00000000 00000000 00000000 fffffffe 00000000
[ 57.104018] page dumped because: nonzero mapcount
[ 57.104732] Modules linked in:
[ 57.105200] CPU: 0 PID: 1670 Comm: trinity-c2 Tainted: G B 4.18.0-rc4-00209-g7757d60 #1
[ 57.106552] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 57.107782] Call Trace:
[ 57.108160] dump_stack+0x75/0xa9
[ 57.108658] bad_page+0xec/0x109
[ 57.109146] free_pages_check_bad+0x40/0x42
[ 57.109768] free_unref_page_prepare+0x4d/0xef
[ 57.110421] free_unref_page_list+0x3e/0x115
[ 57.111059] release_pages+0xd9/0x28f
[ 57.111604] free_pages_and_swap_cache+0x72/0x78
[ 57.112292] tlb_flush_mmu_free+0x20/0x33
[ 57.112886] unmap_page_range+0x56f/0x596
[ 57.113486] unmap_single_vma+0x8b/0x95
[ 57.114058] unmap_vmas+0x27/0x36
[ 57.114553] exit_mmap+0x93/0x121
[ 57.115056] mmput+0x44/0xbf
[ 57.115488] do_exit+0x31d/0x802
[ 57.115976] ? _raw_spin_unlock_irq+0x22/0x44
[ 57.116617] do_group_exit+0x30/0x86
[ 57.117154] get_signal+0x5e0/0x605
[ 57.117676] do_signal+0x24/0x4c1
[ 57.118175] ? _raw_spin_unlock_irqrestore+0x3a/0x5f
[ 57.118912] ? trace_hardirqs_on_caller+0x14b/0x166
[ 57.119631] ? trace_hardirqs_on_caller+0x14b/0x166
[ 57.120347] exit_to_usermode_loop+0x37/0x69
[ 57.120981] do_fast_syscall_32+0x217/0x249
[ 57.121596] entry_SYSENTER_32+0x70/0xc8
[ 57.122178] EIP: 0xa7fc8bf9
[ 57.122592] Code: Bad RIP value.
[ 57.123085] EAX: fffffe00 EBX: a6d55000 ECX: 00085000 EDX: 00000002
[ 57.124006] ESI: 00000008 EDI: 00400000 EBP: fffffffc ESP: afe06e5c
[ 57.124921] DS: 007b ES: 007b FS: 0000 GS: 0033 SS: 007b EFLAGS: 00000296
[main] exit_reason=7, but 2 children still running.
[main] trace_fd was -1
[main] kernel became tainted! (32/0) Last seed was 669750733
trinity: Detected kernel tainting. Last seed was 669750733
Failed to write post mortem log (Permission denied)
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start 8c934e01a7ce685d98e970880f5941d79272c654 37b5dca2898d1471729194f45e281c2443eb9d6c --
git bisect good 8372d66865deb45ee3ec21401a9c80f231b728c8 # 23:36 G 304 0 4 4 x86/pgtable: Move pgdp kernel/user conversion functions to pgtable.h
git bisect good b976690f5db26fbc7c2be413bfa0fbd270547a94 # 00:10 G 305 0 5 5 x86/mm/pti: Introduce pti_finalize()
git bisect good 9bae3197e15dd5e03ce8e237db6fe4486b08a775 # 00:49 G 308 0 3 4 x86/ldt: Split out sanity check in map_ldt_struct()
git bisect bad 5e8105950a8b3e03e805299b4d05020ee4eda31a # 01:12 B 131 1 4 4 x86/mm/pti: Add Warning when booting on a PCID capable CPU
git bisect good 6df934b92a549cb3badb6d576f71aeb133e2f110 # 01:47 G 310 0 7 10 x86/ldt: Enable LDT user-mapping for PAE
git bisect bad 7757d607c6b31867777de42e1fb0210b9c5d8b70 # 02:08 B 134 1 2 2 x86/pti: Allow CONFIG_PAGE_TABLE_ISOLATION for x86_32
# first bad commit: [7757d607c6b31867777de42e1fb0210b9c5d8b70] x86/pti: Allow CONFIG_PAGE_TABLE_ISOLATION for x86_32
git bisect good 6df934b92a549cb3badb6d576f71aeb133e2f110 # 03:14 G 900 0 9 19 x86/ldt: Enable LDT user-mapping for PAE
# extra tests on HEAD of tip/x86/pti
git bisect bad 8c934e01a7ce685d98e970880f5941d79272c654 # 03:15 B 279 2 0 6 x86/pti: Check the return value of pti_user_pagetable_walk_pmd()
# extra tests on tree/branch tip/x86/pti
git bisect bad 8c934e01a7ce685d98e970880f5941d79272c654 # 03:20 B 279 2 0 6 x86/pti: Check the return value of pti_user_pagetable_walk_pmd()
# extra tests with first bad commit reverted
git bisect good 90d0ce801fac8115d424e40a4a258aeed0e409dd # 03:58 G 302 0 7 7 Revert "x86/pti: Allow CONFIG_PAGE_TABLE_ISOLATION for x86_32"
# extra tests on tree/branch tip/master
git bisect bad 4edd5aa38cec47346e0d0a85fa43964828b982d0 # 04:37 B 257 1 5 6 Merge branch 'linus'
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/lkp Intel Corporation