Re: [LKP] [rcu] kernel BUG at include/linux/pagemap.h:149!
by Frederic Weisbecker
On Fri, Sep 11, 2015 at 10:19:47AM +0800, Boqun Feng wrote:
> Subject: [PATCH 01/27] rcu: Don't disable preemption for Tiny and Tree RCU
> readers
>
> Because preempt_disable() maps to barrier() for non-debug builds,
> it forces the compiler to spill and reload registers. Because Tree
> RCU and Tiny RCU now only appear in CONFIG_PREEMPT=n builds, these
> barrier() instances generate needless extra code for each instance of
> rcu_read_lock() and rcu_read_unlock(). This extra code slows down Tree
> RCU and bloats Tiny RCU.
>
> This commit therefore removes the preempt_disable() and preempt_enable()
> from the non-preemptible implementations of __rcu_read_lock() and
> __rcu_read_unlock(), respectively.
>
> For debug purposes, preempt_disable() and preempt_enable() are still
> kept if CONFIG_PREEMPT_COUNT=y, which makes the detection of sleeping
> inside atomic sections still work in non-preemptible kernels.
>
> Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> ---
> include/linux/rcupdate.h | 6 ++++--
> include/linux/rcutiny.h | 1 +
> kernel/rcu/tree.c | 9 +++++++++
> 3 files changed, 14 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> index d63bb77..6c3cece 100644
> --- a/include/linux/rcupdate.h
> +++ b/include/linux/rcupdate.h
> @@ -297,12 +297,14 @@ void synchronize_rcu(void);
>
> static inline void __rcu_read_lock(void)
> {
> - preempt_disable();
> + if (IS_ENABLED(CONFIG_PREEMPT_COUNT))
> + preempt_disable();
preempt_disable() is a no-op when !CONFIG_PREEMPT_COUNT, right?
Or rather it's a barrier(), which is anyway implied by rcu_read_lock().
So perhaps we can get rid of the IS_ENABLED() check?
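For reference, a rough sketch of the preempt_disable() mapping in question
(paraphrased from include/linux/preempt.h of this era; illustrative, not a
verbatim quote):

#ifdef CONFIG_PREEMPT_COUNT
#define preempt_disable() \
do { \
	preempt_count_inc(); \
	barrier(); \
} while (0)
#else
/* !CONFIG_PREEMPT_COUNT: no count to maintain, only a compiler barrier */
#define preempt_disable()	barrier()
#endif

With the IS_ENABLED(CONFIG_PREEMPT_COUNT) check, non-debug builds drop even
the barrier(); without it, they would keep a bare barrier() per reader.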
[btrfs] 302167c50b: fio.write_bw_MBps -12.4% regression
by kernel test robot
Greetings,
FYI, we noticed a -12.4% regression of fio.write_bw_MBps due to commit:
commit: 302167c50b32e7fccc98994a91d40ddbbab04e52 ("btrfs: don't end the transaction for delayed refs in throttle")
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git pending-fixes
in testcase: fio-basic
on test machine: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 64G memory
with the following parameters:
runtime: 300s
nr_task: 8t
disk: 1SSD
fs: btrfs
rw: randwrite
bs: 4k
ioengine: sync
test_size: 400g
cpufreq_governor: performance
ucode: 0xb00002e
test-description: Fio is a tool that will spawn a number of threads or processes doing a particular type of I/O action as specified by the user.
test-url: https://github.com/axboe/fio
Details are as follows:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
bs/compiler/cpufreq_governor/disk/fs/ioengine/kconfig/nr_task/rootfs/runtime/rw/tbox_group/test_size/testcase/ucode:
4k/gcc-7/performance/1SSD/btrfs/sync/x86_64-rhel-7.2/8t/debian-x86_64-2018-04-03.cgz/300s/randwrite/lkp-bdw-ep3b/400g/fio-basic/0xb00002e
commit:
a627947076 ("Btrfs: fix deadlock when allocating tree block during leaf/node split")
302167c50b ("btrfs: don't end the transaction for delayed refs in throttle")
a6279470762c19ba 302167c50b32e7fccc98994a91
---------------- --------------------------
fail:runs %reproduction fail:runs
| | |
:4 25% 1:4 dmesg.WARNING:at#for_ip_interrupt_entry/0x
%stddev %change %stddev
\ | \
0.02 ± 4% -0.0 0.01 fio.latency_100ms%
41.36 ± 2% -14.7 26.66 ± 12% fio.latency_100us%
0.85 ± 6% +0.3 1.14 ± 14% fio.latency_10us%
0.01 +0.0 0.02 ± 3% fio.latency_2000ms%
0.02 ± 18% -0.0 0.01 ± 5% fio.latency_20ms%
0.50 ± 11% +0.1 0.56 ± 11% fio.latency_20us%
0.03 ± 11% -0.0 0.01 ± 10% fio.latency_250ms%
8.90 ± 5% -2.1 6.80 ± 3% fio.latency_250us%
0.03 ± 7% -0.0 0.02 ± 7% fio.latency_500ms%
0.03 ± 15% -0.0 0.01 fio.latency_50ms%
41.49 ± 3% +16.2 57.73 ± 5% fio.latency_50us%
44895412 ± 2% -12.5% 39295860 fio.time.file_system_outputs
36.25 ± 3% -16.6% 30.25 fio.time.percent_of_cpu_this_job_got
98.06 ± 3% -18.2% 80.23 fio.time.system_time
5558064 ± 2% -12.7% 4851975 fio.time.voluntary_context_switches
5610728 ± 2% -12.5% 4909544 fio.workload
72.97 ± 2% -12.4% 63.91 fio.write_bw_MBps
427.18 ± 2% +14.2% 487.93 fio.write_clat_mean_us
13691 ± 2% +43.7% 19669 fio.write_clat_stddev
18680 ± 2% -12.4% 16360 fio.write_iops
0.97 -0.7 0.30 ± 2% mpstat.cpu.iowait%
3.94 ± 3% -1.5 2.40 mpstat.cpu.sys%
2875717 -13.4% 2489058 softirqs.BLOCK
5107622 ± 3% +27.5% 6510241 ± 4% softirqs.RCU
30695 ± 15% -30.2% 21424 ± 11% numa-meminfo.node0.Writeback
179069 ± 19% +134.0% 419038 ± 20% numa-meminfo.node1.Active
36182 ±105% +701.8% 290125 ± 30% numa-meminfo.node1.Active(file)
1.096e+09 ± 3% -22.2% 8.531e+08 ± 7% cpuidle.C1.time
57940399 -34.0% 38218420 ± 4% cpuidle.C1.usage
13565831 ± 7% -67.4% 4420507 ± 16% cpuidle.POLL.time
4064467 ± 5% -72.0% 1136676 ± 12% cpuidle.POLL.usage
124.33 ± 2% -59.2% 50.74 ± 3% iostat.sda.avgqu-sz
18410 -13.2% 15975 iostat.sda.w/s
300245 -21.0% 237217 iostat.sda.wkB/s
9.15 ± 10% -42.0% 5.31 ± 19% iostat.sda.wrqm/s
300252 -21.0% 237234 vmstat.io.bo
1.00 -100.0% 0.00 vmstat.procs.b
3.00 -33.3% 2.00 vmstat.procs.r
392814 -36.9% 247683 vmstat.system.cs
12975351 -10.0% 11683920 meminfo.Inactive
12742134 -10.1% 11450539 meminfo.Inactive(file)
1336423 -10.4% 1197060 meminfo.SUnreclaim
36875 ± 15% -35.8% 23682 ± 8% meminfo.Writeback
97963 ± 4% -9.3% 88890 ± 2% meminfo.max_used_kB
9315760 ± 11% -24.4% 7044222 ± 9% numa-vmstat.node0.nr_dirtied
7593 ± 15% -30.2% 5301 ± 8% numa-vmstat.node0.nr_writeback
9253810 ± 11% -24.4% 6992866 ± 9% numa-vmstat.node0.nr_written
9053 ±105% +699.4% 72375 ± 30% numa-vmstat.node1.nr_active_file
9053 ±105% +699.4% 72375 ± 30% numa-vmstat.node1.nr_zone_active_file
197.50 ± 2% -20.8% 156.50 ± 4% turbostat.Avg_MHz
7.59 ± 4% -1.1 6.45 ± 7% turbostat.Busy%
57935368 -34.0% 38214519 ± 4% turbostat.C1
3.97 ± 3% -0.9 3.10 ± 7% turbostat.C1%
117.34 ± 5% -10.4% 105.14 ± 3% turbostat.PkgWatt
6.93 -5.8% 6.53 ± 3% turbostat.RAMWatt
23703837 -21.2% 18668822 proc-vmstat.nr_dirtied
11565487 +2.6% 11866577 proc-vmstat.nr_free_pages
3186566 -10.0% 2867899 proc-vmstat.nr_inactive_file
14987 -2.0% 14683 proc-vmstat.nr_kernel_stack
203124 -2.2% 198730 proc-vmstat.nr_slab_reclaimable
334281 -10.4% 299452 proc-vmstat.nr_slab_unreclaimable
23643508 -21.2% 18622029 proc-vmstat.nr_written
3186566 -10.0% 2867899 proc-vmstat.nr_zone_inactive_file
9200220 ± 4% -16.8% 7655217 ± 2% proc-vmstat.numa_hit
9182883 ± 4% -16.8% 7637938 ± 2% proc-vmstat.numa_local
15866899 ± 3% -34.3% 10421136 ± 2% proc-vmstat.pgalloc_normal
15347481 -37.3% 9620050 ± 3% proc-vmstat.pgfree
94578712 -21.2% 74490196 proc-vmstat.pgpgout
1.653e+09 -28.2% 1.188e+09 ± 2% perf-stat.i.branch-instructions
16239810 ± 6% -20.2% 12960638 ± 7% perf-stat.i.cache-misses
1.771e+08 ± 4% -21.6% 1.389e+08 ± 6% perf-stat.i.cache-references
397106 -37.0% 250140 perf-stat.i.context-switches
1.75e+10 ± 5% -21.7% 1.37e+10 ± 6% perf-stat.i.cpu-cycles
8.56 ± 17% -55.8% 3.79 ± 15% perf-stat.i.cpu-migrations
2.408e+09 -24.3% 1.823e+09 ± 2% perf-stat.i.dTLB-loads
1.351e+09 ± 6% -18.8% 1.097e+09 ± 2% perf-stat.i.dTLB-stores
6077563 ± 3% -14.6% 5188983 ± 6% perf-stat.i.iTLB-loads
8.756e+09 -25.6% 6.518e+09 perf-stat.i.instructions
48.01 ± 18% +12.6 60.57 ± 7% perf-stat.i.node-load-miss-rate%
2697176 ± 11% -36.8% 1705410 ± 12% perf-stat.i.node-loads
50.90 ± 16% +12.8 63.72 ± 5% perf-stat.overall.node-load-miss-rate%
486504 ± 2% -15.1% 412869 ± 2% perf-stat.overall.path-length
1.648e+09 -28.2% 1.184e+09 ± 2% perf-stat.ps.branch-instructions
16185048 ± 6% -20.2% 12917198 ± 7% perf-stat.ps.cache-misses
1.765e+08 ± 4% -21.6% 1.384e+08 ± 6% perf-stat.ps.cache-references
395744 -37.0% 249290 perf-stat.ps.context-switches
1.744e+10 ± 5% -21.7% 1.365e+10 ± 6% perf-stat.ps.cpu-cycles
8.54 ± 17% -55.7% 3.78 ± 15% perf-stat.ps.cpu-migrations
2.4e+09 -24.3% 1.817e+09 ± 2% perf-stat.ps.dTLB-loads
1.347e+09 ± 6% -18.8% 1.094e+09 ± 2% perf-stat.ps.dTLB-stores
6056751 ± 3% -14.6% 5171616 ± 6% perf-stat.ps.iTLB-loads
8.727e+09 -25.6% 6.497e+09 perf-stat.ps.instructions
2688159 ± 11% -36.8% 1699709 ± 12% perf-stat.ps.node-loads
2.729e+12 -25.7% 2.026e+12 perf-stat.total.instructions
7679 ± 2% -37.9% 4771 ± 7% sched_debug.cfs_rq:/.exec_clock.avg
25109 ± 10% -20.3% 20001 ± 12% sched_debug.cfs_rq:/.exec_clock.max
6099 ± 20% -24.1% 4629 ± 7% sched_debug.cfs_rq:/.exec_clock.stddev
96721 ± 8% -43.2% 54939 ± 37% sched_debug.cfs_rq:/.load.avg
243210 ± 4% -27.0% 177643 ± 21% sched_debug.cfs_rq:/.load.stddev
105.27 ± 15% -43.2% 59.81 ± 22% sched_debug.cfs_rq:/.load_avg.avg
197.18 ± 11% -21.2% 155.31 ± 8% sched_debug.cfs_rq:/.load_avg.stddev
0.13 ± 6% -31.5% 0.09 ± 25% sched_debug.cfs_rq:/.nr_running.avg
49.64 ± 12% -49.3% 25.18 ± 28% sched_debug.cfs_rq:/.runnable_load_avg.avg
689.54 ± 4% -9.3% 625.71 ± 5% sched_debug.cfs_rq:/.runnable_load_avg.max
142.56 ± 7% -26.4% 104.98 ± 12% sched_debug.cfs_rq:/.runnable_load_avg.stddev
97240 ± 8% -46.4% 52094 ± 33% sched_debug.cfs_rq:/.runnable_weight.avg
243593 ± 4% -28.5% 174272 ± 19% sched_debug.cfs_rq:/.runnable_weight.stddev
147.89 ± 8% -27.2% 107.65 ± 13% sched_debug.cfs_rq:/.util_avg.avg
192.27 ± 8% -20.2% 153.44 ± 8% sched_debug.cfs_rq:/.util_avg.stddev
43.61 ± 16% -60.4% 17.27 ± 44% sched_debug.cfs_rq:/.util_est_enqueued.avg
493.75 ± 16% -44.6% 273.67 ± 33% sched_debug.cfs_rq:/.util_est_enqueued.max
120.70 ± 13% -52.0% 57.95 ± 35% sched_debug.cfs_rq:/.util_est_enqueued.stddev
26.69 ± 32% -43.1% 15.20 ± 13% sched_debug.cpu.cpu_load[0].avg
107.63 ± 13% -22.0% 84.01 ± 8% sched_debug.cpu.cpu_load[0].stddev
28.23 ± 30% -46.4% 15.13 ± 10% sched_debug.cpu.cpu_load[1].avg
96.80 ± 14% -19.5% 77.96 ± 4% sched_debug.cpu.cpu_load[1].stddev
28.35 ± 27% -50.8% 13.93 ± 13% sched_debug.cpu.cpu_load[2].avg
26.83 ± 28% -54.8% 12.13 ± 16% sched_debug.cpu.cpu_load[3].avg
76.35 ± 21% -27.2% 55.61 ± 9% sched_debug.cpu.cpu_load[3].stddev
24.61 ± 29% -58.0% 10.35 ± 18% sched_debug.cpu.cpu_load[4].avg
67.78 ± 23% -29.6% 47.73 ± 16% sched_debug.cpu.cpu_load[4].stddev
217.01 ± 9% -29.1% 153.85 ± 11% sched_debug.cpu.curr->pid.avg
65004 ± 18% -52.3% 31025 ± 31% sched_debug.cpu.load.avg
200774 ± 8% -31.1% 138243 ± 19% sched_debug.cpu.load.stddev
0.09 ± 12% -33.7% 0.06 ± 17% sched_debug.cpu.nr_running.avg
0.27 ± 5% -16.5% 0.23 ± 9% sched_debug.cpu.nr_running.stddev
735069 -32.2% 498554 sched_debug.cpu.nr_switches.avg
2860144 ± 11% -27.4% 2076064 ± 13% sched_debug.cpu.nr_switches.max
665483 ± 24% -31.6% 455234 ± 9% sched_debug.cpu.nr_switches.stddev
0.13 ± 7% -30.1% 0.09 ± 12% sched_debug.cpu.nr_uninterruptible.avg
735117 -32.2% 498430 sched_debug.cpu.sched_count.avg
2858539 ± 11% -27.4% 2076509 ± 13% sched_debug.cpu.sched_count.max
665356 ± 24% -31.6% 454947 ± 9% sched_debug.cpu.sched_count.stddev
366543 -32.2% 248579 sched_debug.cpu.sched_goidle.avg
1428344 ± 11% -27.4% 1036752 ± 13% sched_debug.cpu.sched_goidle.max
332365 ± 24% -31.6% 227301 ± 9% sched_debug.cpu.sched_goidle.stddev
368002 -32.2% 249386 sched_debug.cpu.ttwu_count.avg
3059342 -9.8% 2760232 slabinfo.Acpi-State.active_objs
60835 -10.0% 54758 slabinfo.Acpi-State.active_slabs
3102644 -10.0% 2792672 slabinfo.Acpi-State.num_objs
60835 -10.0% 54758 slabinfo.Acpi-State.num_slabs
40884 ± 7% -42.6% 23477 ± 21% slabinfo.avc_xperms_data.active_objs
323.25 ± 7% -41.8% 188.00 ± 21% slabinfo.avc_xperms_data.active_slabs
41459 ± 7% -41.8% 24144 ± 21% slabinfo.avc_xperms_data.num_objs
323.25 ± 7% -41.8% 188.00 ± 21% slabinfo.avc_xperms_data.num_slabs
1524 ± 18% -25.4% 1136 ± 11% slabinfo.biovec-128.active_objs
1536 ± 18% -24.8% 1155 ± 11% slabinfo.biovec-128.num_objs
1681 ± 7% -20.8% 1331 ± 13% slabinfo.biovec-64.active_objs
1681 ± 7% -20.8% 1331 ± 13% slabinfo.biovec-64.num_objs
2654 ± 10% -56.1% 1166 ± 13% slabinfo.biovec-max.active_objs
671.00 ± 10% -55.3% 300.00 ± 12% slabinfo.biovec-max.active_slabs
2685 ± 10% -55.3% 1201 ± 12% slabinfo.biovec-max.num_objs
671.00 ± 10% -55.3% 300.00 ± 12% slabinfo.biovec-max.num_slabs
21641 ± 9% -12.3% 18989 ± 7% slabinfo.btrfs_delayed_ref_head.active_objs
22866 ± 8% -10.1% 20556 ± 7% slabinfo.btrfs_delayed_ref_head.num_objs
67913 ± 4% -12.5% 59451 ± 3% slabinfo.btrfs_extent_buffer.active_objs
1237 ± 4% -14.7% 1055 ± 3% slabinfo.btrfs_extent_buffer.active_slabs
71775 ± 4% -14.7% 61246 ± 3% slabinfo.btrfs_extent_buffer.num_objs
1237 ± 4% -14.7% 1055 ± 3% slabinfo.btrfs_extent_buffer.num_slabs
6184518 -10.1% 5562477 slabinfo.btrfs_extent_map.active_objs
110462 -10.1% 99345 slabinfo.btrfs_extent_map.active_slabs
6185888 -10.1% 5563352 slabinfo.btrfs_extent_map.num_objs
110462 -10.1% 99345 slabinfo.btrfs_extent_map.num_slabs
26097 ± 3% -27.1% 19016 ± 9% slabinfo.btrfs_ordered_extent.active_objs
673.75 ± 4% -26.8% 493.50 ± 9% slabinfo.btrfs_ordered_extent.active_slabs
26301 ± 4% -26.8% 19264 ± 9% slabinfo.btrfs_ordered_extent.num_objs
673.75 ± 4% -26.8% 493.50 ± 9% slabinfo.btrfs_ordered_extent.num_slabs
13863 ± 5% -39.9% 8328 ± 17% slabinfo.btrfs_path.active_objs
387.25 ± 5% -39.4% 234.50 ± 16% slabinfo.btrfs_path.active_slabs
13954 ± 5% -39.3% 8467 ± 16% slabinfo.btrfs_path.num_objs
387.25 ± 5% -39.4% 234.50 ± 16% slabinfo.btrfs_path.num_slabs
13884 ± 9% -25.6% 10330 ± 15% slabinfo.kmalloc-128.active_objs
439.75 ± 8% -24.7% 331.25 ± 15% slabinfo.kmalloc-128.active_slabs
14089 ± 8% -24.6% 10617 ± 15% slabinfo.kmalloc-128.num_objs
439.75 ± 8% -24.7% 331.25 ± 15% slabinfo.kmalloc-128.num_slabs
1554 ± 3% -10.8% 1386 ± 5% slabinfo.kmalloc-rcl-96.active_objs
1554 ± 3% -10.8% 1386 ± 5% slabinfo.kmalloc-rcl-96.num_objs
10158 ± 8% -28.3% 7284 ± 15% slabinfo.mnt_cache.active_objs
10369 ± 8% -26.9% 7581 ± 14% slabinfo.mnt_cache.num_objs
1660 ± 7% -15.2% 1408 ± 11% slabinfo.skbuff_fclone_cache.active_objs
1660 ± 7% -15.2% 1408 ± 11% slabinfo.skbuff_fclone_cache.num_objs
17.20 ± 15% -10.1 7.14 ± 5% perf-profile.calltrace.cycles-pp.btrfs_mark_extent_written.btrfs_finish_ordered_io.normal_work_helper.process_one_work.worker_thread
19.67 ± 13% -9.6 10.08 ± 7% perf-profile.calltrace.cycles-pp.btrfs_finish_ordered_io.normal_work_helper.process_one_work.worker_thread.kthread
14.18 ± 16% -9.5 4.73 ± 6% perf-profile.calltrace.cycles-pp.btrfs_search_slot.btrfs_mark_extent_written.btrfs_finish_ordered_io.normal_work_helper.process_one_work
20.52 ± 13% -9.1 11.40 ± 5% perf-profile.calltrace.cycles-pp.normal_work_helper.process_one_work.worker_thread.kthread.ret_from_fork
27.59 ± 9% -8.7 18.88 ± 4% perf-profile.calltrace.cycles-pp.ret_from_fork
27.59 ± 9% -8.7 18.88 ± 4% perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
24.79 ± 10% -6.3 18.45 ± 4% perf-profile.calltrace.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
25.03 ± 9% -6.2 18.79 ± 4% perf-profile.calltrace.cycles-pp.worker_thread.kthread.ret_from_fork
5.57 ± 21% -4.2 1.36 ± 7% perf-profile.calltrace.cycles-pp.btrfs_lock_root_node.btrfs_search_slot.btrfs_mark_extent_written.btrfs_finish_ordered_io.normal_work_helper
5.55 ± 21% -4.2 1.35 ± 7% perf-profile.calltrace.cycles-pp.btrfs_tree_lock.btrfs_lock_root_node.btrfs_search_slot.btrfs_mark_extent_written.btrfs_finish_ordered_io
4.87 ± 20% -3.6 1.31 ± 10% perf-profile.calltrace.cycles-pp.btrfs_read_lock_root_node.btrfs_search_slot.btrfs_mark_extent_written.btrfs_finish_ordered_io.normal_work_helper
4.84 ± 20% -3.6 1.28 ± 10% perf-profile.calltrace.cycles-pp.btrfs_tree_read_lock.btrfs_read_lock_root_node.btrfs_search_slot.btrfs_mark_extent_written.btrfs_finish_ordered_io
3.84 ± 24% -3.1 0.75 ± 7% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.prepare_to_wait_event.btrfs_tree_lock.btrfs_lock_root_node.btrfs_search_slot
3.76 ± 24% -3.0 0.72 ± 7% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.prepare_to_wait_event.btrfs_tree_lock.btrfs_lock_root_node
3.60 ± 22% -2.8 0.81 ± 9% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.prepare_to_wait_event.btrfs_tree_read_lock.btrfs_read_lock_root_node.btrfs_search_slot
3.54 ± 22% -2.7 0.79 ± 10% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.prepare_to_wait_event.btrfs_tree_read_lock.btrfs_read_lock_root_node
3.47 ± 19% -2.7 0.80 ± 6% perf-profile.calltrace.cycles-pp.prepare_to_wait_event.btrfs_tree_lock.btrfs_lock_root_node.btrfs_search_slot.btrfs_mark_extent_written
3.25 ± 17% -2.4 0.85 ± 10% perf-profile.calltrace.cycles-pp.prepare_to_wait_event.btrfs_tree_read_lock.btrfs_read_lock_root_node.btrfs_search_slot.btrfs_mark_extent_written
1.85 ± 8% -1.2 0.65 ± 3% perf-profile.calltrace.cycles-pp.unlock_up.btrfs_search_slot.btrfs_mark_extent_written.btrfs_finish_ordered_io.normal_work_helper
1.83 ± 9% -1.2 0.63 ± 4% perf-profile.calltrace.cycles-pp.__wake_up_common_lock.unlock_up.btrfs_search_slot.btrfs_mark_extent_written.btrfs_finish_ordered_io
1.45 ± 17% -1.2 0.26 ±100% perf-profile.calltrace.cycles-pp.btrfs_search_slot.setup_leaf_for_split.btrfs_duplicate_item.btrfs_mark_extent_written.btrfs_finish_ordered_io
1.71 ± 8% -1.1 0.60 ± 5% perf-profile.calltrace.cycles-pp.__wake_up_common.__wake_up_common_lock.unlock_up.btrfs_search_slot.btrfs_mark_extent_written
1.69 ± 9% -1.1 0.59 ± 6% perf-profile.calltrace.cycles-pp.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.unlock_up.btrfs_search_slot
1.63 ± 9% -1.1 0.57 ± 7% perf-profile.calltrace.cycles-pp.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.unlock_up
2.12 ± 13% -0.7 1.43 ± 5% perf-profile.calltrace.cycles-pp.setup_leaf_for_split.btrfs_duplicate_item.btrfs_mark_extent_written.btrfs_finish_ordered_io.normal_work_helper
2.75 ± 10% -0.7 2.09 ± 5% perf-profile.calltrace.cycles-pp.btrfs_duplicate_item.btrfs_mark_extent_written.btrfs_finish_ordered_io.normal_work_helper.process_one_work
0.76 ± 5% -0.2 0.57 ± 8% perf-profile.calltrace.cycles-pp.schedule_idle.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
0.73 ± 5% -0.2 0.56 ± 8% perf-profile.calltrace.cycles-pp.__schedule.schedule_idle.do_idle.cpu_startup_entry.start_secondary
0.99 ± 7% -0.1 0.87 ± 6% perf-profile.calltrace.cycles-pp.__btrfs_cow_block.btrfs_cow_block.btrfs_search_slot.btrfs_mark_extent_written.btrfs_finish_ordered_io
0.99 ± 7% -0.1 0.87 ± 7% perf-profile.calltrace.cycles-pp.btrfs_cow_block.btrfs_search_slot.btrfs_mark_extent_written.btrfs_finish_ordered_io.normal_work_helper
0.88 ± 10% +0.3 1.21 ± 13% perf-profile.calltrace.cycles-pp.get_next_timer_interrupt.tick_nohz_next_event.tick_nohz_get_sleep_length.menu_select.do_idle
0.27 ±100% +0.4 0.62 ± 10% perf-profile.calltrace.cycles-pp.setup_items_for_insert.btrfs_insert_empty_items.btrfs_csum_file_blocks.add_pending_csums.btrfs_finish_ordered_io
0.99 ± 10% +0.4 1.38 ± 7% perf-profile.calltrace.cycles-pp.btrfs_insert_empty_items.btrfs_csum_file_blocks.add_pending_csums.btrfs_finish_ordered_io.normal_work_helper
1.71 ± 17% +0.5 2.17 ± 13% perf-profile.calltrace.cycles-pp.tick_nohz_get_sleep_length.menu_select.do_idle.cpu_startup_entry.start_secondary
0.29 ±100% +0.5 0.76 ± 12% perf-profile.calltrace.cycles-pp.__next_timer_interrupt.get_next_timer_interrupt.tick_nohz_next_event.tick_nohz_get_sleep_length.menu_select
1.50 ± 7% +0.5 2.00 ± 9% perf-profile.calltrace.cycles-pp.add_pending_csums.btrfs_finish_ordered_io.normal_work_helper.process_one_work.worker_thread
1.49 ± 7% +0.5 2.00 ± 9% perf-profile.calltrace.cycles-pp.btrfs_csum_file_blocks.add_pending_csums.btrfs_finish_ordered_io.normal_work_helper.process_one_work
0.31 ±103% +0.5 0.83 ± 12% perf-profile.calltrace.cycles-pp.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues
0.15 ±173% +0.6 0.75 ± 6% perf-profile.calltrace.cycles-pp.btrfs_search_slot.btrfs_insert_empty_items.btrfs_csum_file_blocks.add_pending_csums.btrfs_finish_ordered_io
0.14 ±173% +0.6 0.75 ± 12% perf-profile.calltrace.cycles-pp.split_leaf.setup_leaf_for_split.btrfs_duplicate_item.btrfs_mark_extent_written.btrfs_finish_ordered_io
0.29 ±100% +0.6 0.91 ± 27% perf-profile.calltrace.cycles-pp.clockevents_program_event.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state
1.08 ± 18% +0.6 1.71 ± 12% perf-profile.calltrace.cycles-pp.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt
0.00 +0.7 0.66 ± 13% perf-profile.calltrace.cycles-pp.push_leaf_right.split_leaf.setup_leaf_for_split.btrfs_duplicate_item.btrfs_mark_extent_written
1.21 ± 19% +0.7 1.92 ± 12% perf-profile.calltrace.cycles-pp.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt
1.37 ± 20% +0.8 2.20 ± 12% perf-profile.calltrace.cycles-pp.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
3.28 ± 11% +1.1 4.33 ± 10% perf-profile.calltrace.cycles-pp.menu_select.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
0.00 +1.1 1.13 ± 24% perf-profile.calltrace.cycles-pp.__filemap_fdatawrite_range.btrfs_write_marked_extents.btrfs_write_and_wait_transaction.btrfs_commit_transaction.flush_space
0.00 +1.1 1.14 ± 24% perf-profile.calltrace.cycles-pp.btrfs_write_marked_extents.btrfs_write_and_wait_transaction.btrfs_commit_transaction.flush_space.btrfs_async_reclaim_metadata_space
0.00 +1.1 1.15 ± 23% perf-profile.calltrace.cycles-pp.btrfs_write_and_wait_transaction.btrfs_commit_transaction.flush_space.btrfs_async_reclaim_metadata_space.process_one_work
2.54 ± 17% +1.3 3.81 ± 12% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state
0.00 +1.4 1.35 ± 16% perf-profile.calltrace.cycles-pp.btrfs_run_delayed_refs.btrfs_commit_transaction.flush_space.btrfs_async_reclaim_metadata_space.process_one_work
0.00 +1.4 1.35 ± 16% perf-profile.calltrace.cycles-pp.__btrfs_run_delayed_refs.btrfs_run_delayed_refs.btrfs_commit_transaction.flush_space.btrfs_async_reclaim_metadata_space
3.59 ± 12% +1.9 5.50 ± 8% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle
0.15 ±173% +2.5 2.67 ± 14% perf-profile.calltrace.cycles-pp.btrfs_async_reclaim_metadata_space.process_one_work.worker_thread.kthread.ret_from_fork
0.15 ±173% +2.5 2.67 ± 14% perf-profile.calltrace.cycles-pp.flush_space.btrfs_async_reclaim_metadata_space.process_one_work.worker_thread.kthread
0.00 +2.7 2.67 ± 14% perf-profile.calltrace.cycles-pp.btrfs_commit_transaction.flush_space.btrfs_async_reclaim_metadata_space.process_one_work.worker_thread
6.62 ± 16% +3.0 9.58 ± 5% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle.cpu_startup_entry
7.43 ± 10% +3.3 10.76 ± 2% perf-profile.calltrace.cycles-pp.apic_timer_interrupt.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
52.83 ± 4% +4.2 57.01 ± 4% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
62.44 ± 3% +7.2 69.61 ± 3% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
69.18 ± 3% +8.3 77.44 perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
69.26 ± 3% +8.3 77.52 perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
69.25 ± 3% +8.3 77.52 perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
69.97 ± 3% +8.8 78.74 perf-profile.calltrace.cycles-pp.secondary_startup_64
fio.write_clat_stddev
21000 +-+-----------------------------------------------------------------+
| O O O O |
20000 O-O O O O O O O O O O O O O O O O O O O |
19000 +-+ O O O |
| |
18000 +-+ |
17000 +-+ |
| |
16000 +-+ |
15000 +-+ |
| |
14000 +-+ .+ .+.. .+. .+. .+. .+.+.+.+.+.. .+.|
13000 +-+. .+. + .+.+.+.+.+ + +.+ + +..+.+.+ +.+.+ |
| + + |
12000 +-+-----------------------------------------------------------------+
fio.latency_2000ms_
0.019 O-+-----------------------------O---O------------O------------------+
| O O O O O O O O |
0.018 +-+ O O |
0.017 +-+ O O O O |
| O O O O O O O |
0.016 +-O |
0.015 +-+ O |
| |
0.014 +-+ |
0.013 +-+ |
| |
0.012 +-+ |
0.011 +-+ |
| |
0.01 +-+-----------------------------------------------------------------+
fio.time.voluntary_context_switches
5.8e+06 +-+---------------------------------------------------------------+
| + + |
5.6e+06 +-+ + +.. : + .+. .+ : : |
|. + : + : : +. + + : : : +.|
| + : + + : + +. .+. + : .+. .+.+.+ :+ |
5.4e+06 +-+ +.+ + + + + +.+.+..+.+ + |
| |
5.2e+06 +-+ |
| |
5e+06 +-+ O O O O O |
| O O O O O O |
O O O O O O O O O O O O O O |
4.8e+06 +-+ O O |
| |
4.6e+06 +-+---------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[mm] ac5b2c1891: vm-scalability.throughput -61.3% regression
by kernel test robot
Greetings,
FYI, we noticed a -61.3% regression of vm-scalability.throughput due to commit:
commit: ac5b2c18911ffe95c08d69273917f90212cf5659 ("mm: thp: relax __GFP_THISNODE for MADV_HUGEPAGE mappings")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: vm-scalability
on test machine: 72 threads Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz with 128G memory
with the following parameters:
runtime: 300
thp_enabled: always
thp_defrag: always
nr_task: 32
nr_ssd: 1
test: swap-w-seq
ucode: 0x3d
cpufreq_governor: performance
test-description: The motivation behind this suite is to exercise functions and regions of the mm/ of the Linux kernel which are of interest to us.
test-url: https://git.kernel.org/cgit/linux/kernel/git/wfg/vm-scalability.git/
Details are as follows:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_ssd/nr_task/rootfs/runtime/tbox_group/test/testcase/thp_defrag/thp_enabled/ucode:
gcc-7/performance/x86_64-rhel-7.2/1/32/debian-x86_64-2018-04-03.cgz/300/lkp-hsw-ep4/swap-w-seq/vm-scalability/always/always/0x3d
commit:
94e297c50b ("include/linux/notifier.h: SRCU: fix ctags")
ac5b2c1891 ("mm: thp: relax __GFP_THISNODE for MADV_HUGEPAGE mappings")
94e297c50b529f5d ac5b2c18911ffe95c08d692739
---------------- --------------------------
%stddev %change %stddev
\ | \
0.57 ± 35% +258.8% 2.05 ± 4% vm-scalability.free_time
146022 ± 14% -40.5% 86833 ± 2% vm-scalability.median
29.29 ± 40% -89.6% 3.06 ± 26% vm-scalability.stddev
7454656 ± 9% -61.3% 2885836 ± 3% vm-scalability.throughput
189.21 ± 10% +52.4% 288.34 ± 2% vm-scalability.time.elapsed_time
189.21 ± 10% +52.4% 288.34 ± 2% vm-scalability.time.elapsed_time.max
8768 ± 3% +11.6% 9781 ± 5% vm-scalability.time.involuntary_context_switches
20320196 ± 2% -33.4% 13531732 ± 3% vm-scalability.time.maximum_resident_set_size
425945 ± 9% +17.4% 499908 ± 4% vm-scalability.time.minor_page_faults
253.79 ± 6% +62.0% 411.07 ± 4% vm-scalability.time.system_time
322.52 +8.0% 348.18 vm-scalability.time.user_time
246150 ± 12% +50.3% 370019 ± 4% vm-scalability.time.voluntary_context_switches
7746519 ± 11% +49.0% 11538799 ± 4% cpuidle.C6.usage
192240 ± 10% +44.3% 277460 ± 8% interrupts.CAL:Function_call_interrupts
22.45 ± 85% -80.6% 4.36 ±173% sched_debug.cfs_rq:/.MIN_vruntime.avg
22.45 ± 85% -80.6% 4.36 ±173% sched_debug.cfs_rq:/.max_vruntime.avg
29.36 ± 13% +10.8% 32.52 ± 14% boot-time.boot
24.28 ± 16% +12.4% 27.30 ± 16% boot-time.dhcp
1597 ± 15% +10.2% 1760 ± 15% boot-time.idle
68.25 -9.8 58.48 ± 2% mpstat.cpu.idle%
27.12 ± 5% +10.3 37.42 ± 3% mpstat.cpu.iowait%
2.48 ± 11% -0.8 1.73 ± 2% mpstat.cpu.usr%
3422396 ± 14% +46.6% 5018492 ± 7% softirqs.RCU
1776561 ± 10% +52.4% 2707785 ± 2% softirqs.SCHED
5685772 ± 7% +54.3% 8774055 ± 3% softirqs.TIMER
7742519 ± 11% +49.0% 11534924 ± 4% turbostat.C6
29922317 ± 10% +55.0% 46366941 turbostat.IRQ
9.49 ± 64% -83.0% 1.62 ± 54% turbostat.Pkg%pc2
36878790 ± 27% -84.9% 5570259 ± 47% vmstat.memory.free
1.117e+08 ± 15% +73.5% 1.939e+08 ± 10% vmstat.memory.swpd
25.25 ± 4% +34.7% 34.00 vmstat.procs.b
513725 ± 7% +47.9% 759561 ± 12% numa-numastat.node0.local_node
35753 ±160% +264.7% 130386 ± 39% numa-numastat.node0.numa_foreign
519403 ± 6% +48.7% 772182 ± 12% numa-numastat.node0.numa_hit
35753 ±160% +264.7% 130386 ± 39% numa-numastat.node1.numa_miss
44315 ±138% +198.5% 132279 ± 40% numa-numastat.node1.other_node
32798032 ± 46% +80.6% 59228165 ± 2% numa-meminfo.node0.Active
32798009 ± 46% +80.6% 59228160 ± 2% numa-meminfo.node0.Active(anon)
33430762 ± 46% +80.8% 60429537 ± 2% numa-meminfo.node0.AnonHugePages
33572119 ± 46% +80.2% 60512777 ± 2% numa-meminfo.node0.AnonPages
1310559 ± 64% +86.4% 2442244 ± 2% numa-meminfo.node0.Inactive
1309969 ± 64% +86.4% 2442208 ± 2% numa-meminfo.node0.Inactive(anon)
30385359 ± 53% -90.7% 2821023 ± 44% numa-meminfo.node0.MemFree
35505047 ± 45% +77.6% 63055165 ± 2% numa-meminfo.node0.MemUsed
166560 ± 42% +130.2% 383345 ± 6% numa-meminfo.node0.PageTables
23702 ±105% -89.0% 2617 ± 44% numa-meminfo.node0.Shmem
1212 ± 65% +402.8% 6093 ± 57% numa-meminfo.node1.Shmem
8354144 ± 44% +77.1% 14798964 ± 2% numa-vmstat.node0.nr_active_anon
8552787 ± 44% +76.8% 15122222 ± 2% numa-vmstat.node0.nr_anon_pages
16648 ± 44% +77.1% 29492 ± 2% numa-vmstat.node0.nr_anon_transparent_hugepages
7436650 ± 53% -90.4% 712936 ± 44% numa-vmstat.node0.nr_free_pages
332106 ± 63% +83.8% 610268 ± 2% numa-vmstat.node0.nr_inactive_anon
41929 ± 41% +130.6% 96703 ± 6% numa-vmstat.node0.nr_page_table_pages
5900 ±106% -89.0% 650.75 ± 45% numa-vmstat.node0.nr_shmem
43336 ± 92% +151.2% 108840 ± 7% numa-vmstat.node0.nr_vmscan_write
43110 ± 92% +150.8% 108110 ± 7% numa-vmstat.node0.nr_written
8354142 ± 44% +77.1% 14798956 ± 2% numa-vmstat.node0.nr_zone_active_anon
332105 ± 63% +83.8% 610269 ± 2% numa-vmstat.node0.nr_zone_inactive_anon
321.50 ± 66% +384.9% 1559 ± 59% numa-vmstat.node1.nr_shmem
88815743 ± 10% +33.8% 1.188e+08 ± 2% meminfo.Active
88815702 ± 10% +33.8% 1.188e+08 ± 2% meminfo.Active(anon)
90446011 ± 11% +34.0% 1.212e+08 ± 2% meminfo.AnonHugePages
90613587 ± 11% +34.0% 1.214e+08 ± 2% meminfo.AnonPages
5.15e+08 ± 3% +22.2% 6.293e+08 ± 2% meminfo.Committed_AS
187419 ± 10% -19.6% 150730 ± 7% meminfo.DirectMap4k
3620693 ± 18% +35.3% 4897093 ± 2% meminfo.Inactive
3620054 ± 18% +35.3% 4896961 ± 2% meminfo.Inactive(anon)
36144979 ± 28% -87.0% 4715681 ± 57% meminfo.MemAvailable
36723121 ± 27% -85.9% 5179468 ± 52% meminfo.MemFree
395801 ± 2% +56.9% 620816 ± 4% meminfo.PageTables
178672 +15.1% 205668 ± 2% meminfo.SUnreclaim
249496 +12.6% 280897 meminfo.Slab
1751813 ± 2% +34.7% 2360437 ± 3% meminfo.SwapCached
6.716e+08 -12.4% 5.88e+08 ± 2% meminfo.SwapFree
3926 ± 17% +72.9% 6788 ± 13% meminfo.Writeback
1076 ± 3% +42.9% 1538 ± 4% slabinfo.biovec-max.active_objs
275.75 ± 2% +41.1% 389.00 ± 4% slabinfo.biovec-max.active_slabs
1104 ± 2% +41.0% 1557 ± 4% slabinfo.biovec-max.num_objs
275.75 ± 2% +41.1% 389.00 ± 4% slabinfo.biovec-max.num_slabs
588.25 ± 7% +17.9% 693.75 ± 7% slabinfo.file_lock_cache.active_objs
588.25 ± 7% +17.9% 693.75 ± 7% slabinfo.file_lock_cache.num_objs
13852 ± 3% +37.5% 19050 ± 3% slabinfo.kmalloc-4k.active_objs
1776 ± 3% +37.7% 2446 ± 3% slabinfo.kmalloc-4k.active_slabs
14217 ± 3% +37.7% 19577 ± 3% slabinfo.kmalloc-4k.num_objs
1776 ± 3% +37.7% 2446 ± 3% slabinfo.kmalloc-4k.num_slabs
158.25 ± 15% +54.0% 243.75 ± 18% slabinfo.nfs_read_data.active_objs
158.25 ± 15% +54.0% 243.75 ± 18% slabinfo.nfs_read_data.num_objs
17762 ± 4% +44.3% 25638 ± 4% slabinfo.pool_workqueue.active_objs
563.25 ± 3% +43.7% 809.25 ± 4% slabinfo.pool_workqueue.active_slabs
18048 ± 3% +43.5% 25906 ± 4% slabinfo.pool_workqueue.num_objs
563.25 ± 3% +43.7% 809.25 ± 4% slabinfo.pool_workqueue.num_slabs
34631 ± 3% +21.0% 41905 ± 2% slabinfo.radix_tree_node.active_objs
624.50 ± 3% +20.7% 753.75 ± 2% slabinfo.radix_tree_node.active_slabs
34998 ± 3% +20.7% 42228 ± 2% slabinfo.radix_tree_node.num_objs
624.50 ± 3% +20.7% 753.75 ± 2% slabinfo.radix_tree_node.num_slabs
9.727e+11 ± 8% +50.4% 1.463e+12 ± 12% perf-stat.branch-instructions
1.11 ± 12% +1.2 2.31 ± 8% perf-stat.branch-miss-rate%
1.078e+10 ± 13% +214.9% 3.395e+10 ± 17% perf-stat.branch-misses
3.17 ± 11% -1.5 1.65 ± 9% perf-stat.cache-miss-rate%
8.206e+08 ± 7% +49.4% 1.226e+09 ± 11% perf-stat.cache-misses
2.624e+10 ± 14% +187.0% 7.532e+10 ± 17% perf-stat.cache-references
1174249 ± 9% +52.3% 1788442 ± 3% perf-stat.context-switches
2.921e+12 ± 8% +71.1% 4.998e+12 ± 9% perf-stat.cpu-cycles
1437 ± 14% +85.1% 2661 ± 20% perf-stat.cpu-migrations
7.586e+08 ± 21% +134.7% 1.78e+09 ± 30% perf-stat.dTLB-load-misses
7.943e+11 ± 11% +84.3% 1.464e+12 ± 14% perf-stat.dTLB-loads
93963731 ± 22% +40.9% 1.324e+08 ± 10% perf-stat.dTLB-store-misses
3.394e+11 ± 6% +60.5% 5.449e+11 ± 10% perf-stat.dTLB-stores
1.531e+08 ± 22% +44.0% 2.204e+08 ± 11% perf-stat.iTLB-load-misses
1.688e+08 ± 23% +71.1% 2.888e+08 ± 12% perf-stat.iTLB-loads
3.267e+12 ± 7% +58.5% 5.177e+12 ± 13% perf-stat.instructions
3988 ± 43% +123.9% 8930 ± 22% perf-stat.major-faults
901474 ± 5% +34.2% 1209877 ± 2% perf-stat.minor-faults
31.24 ± 10% +21.7 52.91 ± 2% perf-stat.node-load-miss-rate%
1.135e+08 ± 16% +187.6% 3.264e+08 ± 13% perf-stat.node-load-misses
6.27 ± 17% +26.9 33.19 ± 4% perf-stat.node-store-miss-rate%
27354489 ± 15% +601.2% 1.918e+08 ± 13% perf-stat.node-store-misses
905482 ± 5% +34.6% 1218833 ± 2% perf-stat.page-faults
4254 ± 7% +58.5% 6741 ± 13% perf-stat.path-length
6364 ± 25% +84.1% 11715 ± 14% proc-vmstat.allocstall_movable
46439 ± 12% +100.4% 93049 ± 21% proc-vmstat.compact_migrate_scanned
22425696 ± 10% +29.0% 28932634 ± 6% proc-vmstat.nr_active_anon
22875703 ± 11% +29.2% 29560082 ± 6% proc-vmstat.nr_anon_pages
44620 ± 11% +29.2% 57643 ± 6% proc-vmstat.nr_anon_transparent_hugepages
879436 ± 28% -77.3% 199768 ± 98% proc-vmstat.nr_dirty_background_threshold
1761029 ± 28% -77.3% 400034 ± 98% proc-vmstat.nr_dirty_threshold
715724 +17.6% 841386 ± 3% proc-vmstat.nr_file_pages
8960545 ± 28% -76.4% 2111248 ± 93% proc-vmstat.nr_free_pages
904330 ± 18% +31.2% 1186458 ± 7% proc-vmstat.nr_inactive_anon
11137 ± 2% +27.1% 14154 ± 9% proc-vmstat.nr_isolated_anon
12566 +3.5% 13012 proc-vmstat.nr_kernel_stack
97491 ± 2% +52.6% 148790 ± 10% proc-vmstat.nr_page_table_pages
17674 +6.2% 18763 ± 2% proc-vmstat.nr_slab_reclaimable
44820 +13.0% 50645 ± 2% proc-vmstat.nr_slab_unreclaimable
135763 ± 9% +68.4% 228600 ± 6% proc-vmstat.nr_vmscan_write
1017 ± 10% +54.7% 1573 ± 14% proc-vmstat.nr_writeback
220023 ± 5% +73.5% 381732 ± 6% proc-vmstat.nr_written
22425696 ± 10% +29.0% 28932635 ± 6% proc-vmstat.nr_zone_active_anon
904330 ± 18% +31.2% 1186457 ± 7% proc-vmstat.nr_zone_inactive_anon
1018 ± 10% +55.3% 1581 ± 13% proc-vmstat.nr_zone_write_pending
145368 ± 48% +63.1% 237050 ± 17% proc-vmstat.numa_foreign
671.50 ± 96% +479.4% 3890 ± 71% proc-vmstat.numa_hint_faults
1122389 ± 9% +17.2% 1315380 ± 4% proc-vmstat.numa_hit
214722 ± 5% +21.6% 261076 ± 3% proc-vmstat.numa_huge_pte_updates
1108142 ± 9% +17.4% 1300857 ± 4% proc-vmstat.numa_local
145368 ± 48% +63.1% 237050 ± 17% proc-vmstat.numa_miss
159615 ± 44% +57.6% 251573 ± 16% proc-vmstat.numa_other
185.50 ± 81% +8278.6% 15542 ± 40% proc-vmstat.numa_pages_migrated
1.1e+08 ± 5% +21.6% 1.337e+08 ± 3% proc-vmstat.numa_pte_updates
688332 ±106% +177.9% 1913062 ± 3% proc-vmstat.pgalloc_dma32
72593045 ± 10% +51.1% 1.097e+08 ± 3% proc-vmstat.pgdeactivate
919059 ± 4% +35.1% 1241472 ± 2% proc-vmstat.pgfault
3716 ± 45% +120.3% 8186 ± 25% proc-vmstat.pgmajfault
7.25 ± 26% +4.2e+05% 30239 ± 25% proc-vmstat.pgmigrate_fail
5340 ±106% +264.0% 19438 ± 33% proc-vmstat.pgmigrate_success
2.837e+08 ± 10% +51.7% 4.303e+08 ± 3% proc-vmstat.pgpgout
211428 ± 6% +74.1% 368188 ± 4% proc-vmstat.pgrefill
219051 ± 5% +73.7% 380419 ± 5% proc-vmstat.pgrotated
559397 ± 8% +43.0% 800110 ± 11% proc-vmstat.pgscan_direct
32894 ± 59% +158.3% 84981 ± 23% proc-vmstat.pgscan_kswapd
207042 ± 8% +71.5% 355174 ± 5% proc-vmstat.pgsteal_direct
14745 ± 65% +104.3% 30121 ± 18% proc-vmstat.pgsteal_kswapd
70934968 ± 10% +51.7% 1.076e+08 ± 3% proc-vmstat.pswpout
5852284 ± 12% +145.8% 14382881 ± 5% proc-vmstat.slabs_scanned
13453 ± 24% +204.9% 41023 ± 8% proc-vmstat.thp_split_page_failed
138385 ± 10% +51.6% 209783 ± 3% proc-vmstat.thp_split_pmd
138385 ± 10% +51.6% 209782 ± 3% proc-vmstat.thp_swpout
4.61 ± 24% -1.2 3.37 ± 10% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle
2.90 ± 18% -0.7 2.19 ± 8% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state
1.86 ± 32% -0.6 1.22 ± 7% perf-profile.calltrace.cycles-pp.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
1.60 ± 31% -0.5 1.08 ± 10% perf-profile.calltrace.cycles-pp.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt
2.98 ± 8% -0.5 2.48 ± 10% perf-profile.calltrace.cycles-pp.menu_select.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
1.46 ± 32% -0.5 0.96 ± 9% perf-profile.calltrace.cycles-pp.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt
0.74 ± 27% -0.5 0.28 ±100% perf-profile.calltrace.cycles-pp.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues
1.03 ± 52% -0.4 0.63 ± 15% perf-profile.calltrace.cycles-pp.clockevents_program_event.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state
1.17 ± 19% -0.3 0.87 ± 11% perf-profile.calltrace.cycles-pp.tick_nohz_next_event.tick_nohz_get_sleep_length.menu_select.do_idle.cpu_startup_entry
0.80 ± 17% +0.4 1.22 ± 7% perf-profile.calltrace.cycles-pp.nvme_queue_rq.__blk_mq_try_issue_directly.blk_mq_try_issue_directly.blk_mq_make_request.generic_make_request
0.81 ± 16% +0.4 1.23 ± 8% perf-profile.calltrace.cycles-pp.blk_mq_try_issue_directly.blk_mq_make_request.generic_make_request.submit_bio.__swap_writepage
0.81 ± 16% +0.4 1.23 ± 8% perf-profile.calltrace.cycles-pp.__blk_mq_try_issue_directly.blk_mq_try_issue_directly.blk_mq_make_request.generic_make_request.submit_bio
0.52 ± 60% +0.5 1.03 ± 6% perf-profile.calltrace.cycles-pp.dma_pool_alloc.nvme_queue_rq.__blk_mq_try_issue_directly.blk_mq_try_issue_directly.blk_mq_make_request
1.64 ± 16% +0.6 2.21 ± 15% perf-profile.calltrace.cycles-pp.find_next_bit.blk_mq_queue_tag_busy_iter.blk_mq_in_flight.part_round_stats.blk_account_io_done
0.51 ± 64% +0.8 1.31 ± 17% perf-profile.calltrace.cycles-pp.bt_iter.blk_mq_queue_tag_busy_iter.blk_mq_in_flight.part_round_stats.blk_account_io_start
0.80 ± 25% +0.9 1.67 ± 19% perf-profile.calltrace.cycles-pp.blk_mq_queue_tag_busy_iter.blk_mq_in_flight.part_round_stats.blk_account_io_start.blk_mq_make_request
0.82 ± 24% +0.9 1.71 ± 19% perf-profile.calltrace.cycles-pp.blk_mq_in_flight.part_round_stats.blk_account_io_start.blk_mq_make_request.generic_make_request
0.82 ± 25% +0.9 1.73 ± 19% perf-profile.calltrace.cycles-pp.part_round_stats.blk_account_io_start.blk_mq_make_request.generic_make_request.submit_bio
0.87 ± 25% +1.0 1.87 ± 14% perf-profile.calltrace.cycles-pp.blk_account_io_start.blk_mq_make_request.generic_make_request.submit_bio.__swap_writepage
2.05 ± 15% +1.4 3.48 ± 7% perf-profile.calltrace.cycles-pp.generic_make_request.submit_bio.__swap_writepage.pageout.shrink_page_list
2.09 ± 15% +1.4 3.53 ± 7% perf-profile.calltrace.cycles-pp.__swap_writepage.pageout.shrink_page_list.shrink_inactive_list.shrink_node_memcg
2.05 ± 15% +1.4 3.49 ± 7% perf-profile.calltrace.cycles-pp.blk_mq_make_request.generic_make_request.submit_bio.__swap_writepage.pageout
2.06 ± 15% +1.4 3.50 ± 7% perf-profile.calltrace.cycles-pp.submit_bio.__swap_writepage.pageout.shrink_page_list.shrink_inactive_list
2.10 ± 15% +1.4 3.54 ± 6% perf-profile.calltrace.cycles-pp.pageout.shrink_page_list.shrink_inactive_list.shrink_node_memcg.shrink_node
3.31 ± 12% +1.5 4.83 ± 5% perf-profile.calltrace.cycles-pp.shrink_page_list.shrink_inactive_list.shrink_node_memcg.shrink_node.do_try_to_free_pages
3.33 ± 12% +1.5 4.86 ± 5% perf-profile.calltrace.cycles-pp.shrink_inactive_list.shrink_node_memcg.shrink_node.do_try_to_free_pages.try_to_free_pages
3.40 ± 12% +1.6 4.97 ± 6% perf-profile.calltrace.cycles-pp.shrink_node_memcg.shrink_node.do_try_to_free_pages.try_to_free_pages.__alloc_pages_slowpath
3.57 ± 12% +1.6 5.19 ± 7% perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault.__do_page_fault
3.48 ± 12% +1.6 5.13 ± 6% perf-profile.calltrace.cycles-pp.shrink_node.do_try_to_free_pages.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages_nodemask
3.48 ± 12% +1.6 5.13 ± 6% perf-profile.calltrace.cycles-pp.do_try_to_free_pages.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages_nodemask.do_huge_pmd_anonymous_page
3.49 ± 12% +1.6 5.13 ± 6% perf-profile.calltrace.cycles-pp.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages_nodemask.do_huge_pmd_anonymous_page.__handle_mm_fault
3.51 ± 12% +1.7 5.17 ± 6% perf-profile.calltrace.cycles-pp.__alloc_pages_slowpath.__alloc_pages_nodemask.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault
3.34 ± 14% +2.2 5.57 ± 21% perf-profile.calltrace.cycles-pp.blk_mq_check_inflight.bt_iter.blk_mq_queue_tag_busy_iter.blk_mq_in_flight.part_round_stats
9.14 ± 17% +5.5 14.66 ± 22% perf-profile.calltrace.cycles-pp.bt_iter.blk_mq_queue_tag_busy_iter.blk_mq_in_flight.part_round_stats.blk_account_io_done
12.10 ± 17% +6.8 18.89 ± 20% perf-profile.calltrace.cycles-pp.blk_mq_queue_tag_busy_iter.blk_mq_in_flight.part_round_stats.blk_account_io_done.blk_mq_end_request
13.88 ± 15% +7.1 21.01 ± 20% perf-profile.calltrace.cycles-pp.handle_irq.do_IRQ.ret_from_intr.cpuidle_enter_state.do_idle
13.87 ± 15% +7.1 21.00 ± 20% perf-profile.calltrace.cycles-pp.handle_edge_irq.handle_irq.do_IRQ.ret_from_intr.cpuidle_enter_state
13.97 ± 15% +7.1 21.10 ± 20% perf-profile.calltrace.cycles-pp.ret_from_intr.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
13.94 ± 15% +7.1 21.07 ± 20% perf-profile.calltrace.cycles-pp.do_IRQ.ret_from_intr.cpuidle_enter_state.do_idle.cpu_startup_entry
12.50 ± 17% +7.2 19.65 ± 20% perf-profile.calltrace.cycles-pp.blk_account_io_done.blk_mq_end_request.blk_mq_complete_request.nvme_irq.__handle_irq_event_percpu
12.70 ± 17% +7.2 19.86 ± 21% perf-profile.calltrace.cycles-pp.blk_mq_end_request.blk_mq_complete_request.nvme_irq.__handle_irq_event_percpu.handle_irq_event_percpu
12.48 ± 17% +7.2 19.65 ± 20% perf-profile.calltrace.cycles-pp.part_round_stats.blk_account_io_done.blk_mq_end_request.blk_mq_complete_request.nvme_irq
12.46 ± 17% +7.2 19.63 ± 21% perf-profile.calltrace.cycles-pp.blk_mq_in_flight.part_round_stats.blk_account_io_done.blk_mq_end_request.blk_mq_complete_request
14.78 ± 18% +8.1 22.83 ± 20% perf-profile.calltrace.cycles-pp.blk_mq_complete_request.nvme_irq.__handle_irq_event_percpu.handle_irq_event_percpu.handle_irq_event
14.87 ± 18% +8.1 22.95 ± 21% perf-profile.calltrace.cycles-pp.nvme_irq.__handle_irq_event_percpu.handle_irq_event_percpu.handle_irq_event.handle_edge_irq
14.89 ± 18% +8.1 22.98 ± 21% perf-profile.calltrace.cycles-pp.handle_irq_event_percpu.handle_irq_event.handle_edge_irq.handle_irq.do_IRQ
14.90 ± 18% +8.1 22.99 ± 21% perf-profile.calltrace.cycles-pp.handle_irq_event.handle_edge_irq.handle_irq.do_IRQ.ret_from_intr
14.88 ± 18% +8.1 22.97 ± 21% perf-profile.calltrace.cycles-pp.__handle_irq_event_percpu.handle_irq_event_percpu.handle_irq_event.handle_edge_irq.handle_irq
4.79 ± 22% -1.3 3.52 ± 9% perf-profile.children.cycles-pp.hrtimer_interrupt
3.04 ± 16% -0.7 2.30 ± 8% perf-profile.children.cycles-pp.__hrtimer_run_queues
1.98 ± 29% -0.7 1.29 ± 7% perf-profile.children.cycles-pp.tick_sched_timer
1.70 ± 27% -0.6 1.14 ± 9% perf-profile.children.cycles-pp.tick_sched_handle
1.57 ± 29% -0.5 1.03 ± 10% perf-profile.children.cycles-pp.update_process_times
3.02 ± 8% -0.5 2.52 ± 10% perf-profile.children.cycles-pp.menu_select
1.19 ± 19% -0.3 0.89 ± 10% perf-profile.children.cycles-pp.tick_nohz_next_event
0.81 ± 25% -0.2 0.56 ± 11% perf-profile.children.cycles-pp.scheduler_tick
0.42 ± 19% -0.1 0.30 ± 14% perf-profile.children.cycles-pp._raw_spin_lock
0.27 ± 13% -0.1 0.21 ± 15% perf-profile.children.cycles-pp.hrtimer_next_event_without
0.11 ± 35% -0.0 0.07 ± 19% perf-profile.children.cycles-pp.run_local_timers
0.10 ± 15% -0.0 0.07 ± 28% perf-profile.children.cycles-pp.cpu_load_update
0.14 ± 9% -0.0 0.11 ± 15% perf-profile.children.cycles-pp.perf_event_task_tick
0.07 ± 17% +0.0 0.10 ± 12% perf-profile.children.cycles-pp.blk_flush_plug_list
0.07 ± 17% +0.0 0.10 ± 12% perf-profile.children.cycles-pp.blk_mq_flush_plug_list
0.06 ± 26% +0.0 0.11 ± 17% perf-profile.children.cycles-pp.read
0.07 ± 17% +0.0 0.11 ± 36% perf-profile.children.cycles-pp.deferred_split_scan
0.15 ± 14% +0.1 0.21 ± 13% perf-profile.children.cycles-pp.blk_mq_sched_dispatch_requests
0.15 ± 16% +0.1 0.21 ± 15% perf-profile.children.cycles-pp.blk_mq_dispatch_rq_list
0.15 ± 14% +0.1 0.22 ± 14% perf-profile.children.cycles-pp.__blk_mq_run_hw_queue
0.08 ± 23% +0.1 0.14 ± 34% perf-profile.children.cycles-pp.shrink_slab
0.08 ± 23% +0.1 0.14 ± 34% perf-profile.children.cycles-pp.do_shrink_slab
0.72 ± 19% +0.3 1.00 ± 8% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.44 ± 18% +0.3 0.74 ± 14% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
0.86 ± 18% +0.4 1.30 ± 9% perf-profile.children.cycles-pp.blk_mq_try_issue_directly
0.88 ± 18% +0.5 1.34 ± 9% perf-profile.children.cycles-pp.__blk_mq_try_issue_directly
0.82 ± 19% +0.5 1.30 ± 9% perf-profile.children.cycles-pp.dma_pool_alloc
1.02 ± 15% +0.5 1.55 ± 9% perf-profile.children.cycles-pp.nvme_queue_rq
1.07 ± 15% +0.7 1.76 ± 21% perf-profile.children.cycles-pp.__indirect_thunk_start
2.42 ± 16% +0.7 3.16 ± 13% perf-profile.children.cycles-pp.find_next_bit
0.95 ± 26% +1.1 2.00 ± 15% perf-profile.children.cycles-pp.blk_account_io_start
2.19 ± 17% +1.5 3.72 ± 8% perf-profile.children.cycles-pp.blk_mq_make_request
2.20 ± 17% +1.5 3.73 ± 8% perf-profile.children.cycles-pp.submit_bio
2.20 ± 17% +1.5 3.73 ± 8% perf-profile.children.cycles-pp.generic_make_request
2.21 ± 17% +1.5 3.76 ± 8% perf-profile.children.cycles-pp.__swap_writepage
2.23 ± 17% +1.5 3.77 ± 7% perf-profile.children.cycles-pp.pageout
3.60 ± 13% +1.6 5.22 ± 7% perf-profile.children.cycles-pp.__alloc_pages_nodemask
3.51 ± 15% +1.6 5.15 ± 6% perf-profile.children.cycles-pp.shrink_page_list
3.48 ± 12% +1.6 5.13 ± 6% perf-profile.children.cycles-pp.do_try_to_free_pages
3.53 ± 14% +1.6 5.17 ± 6% perf-profile.children.cycles-pp.shrink_inactive_list
3.49 ± 12% +1.6 5.13 ± 6% perf-profile.children.cycles-pp.try_to_free_pages
3.51 ± 12% +1.7 5.19 ± 6% perf-profile.children.cycles-pp.__alloc_pages_slowpath
3.61 ± 14% +1.7 5.30 ± 6% perf-profile.children.cycles-pp.shrink_node_memcg
3.69 ± 15% +1.8 5.45 ± 7% perf-profile.children.cycles-pp.shrink_node
4.10 ± 17% +2.4 6.47 ± 18% perf-profile.children.cycles-pp.blk_mq_check_inflight
10.64 ± 16% +6.8 17.39 ± 17% perf-profile.children.cycles-pp.bt_iter
13.17 ± 16% +7.4 20.59 ± 18% perf-profile.children.cycles-pp.blk_account_io_done
13.39 ± 16% +7.4 20.81 ± 19% perf-profile.children.cycles-pp.blk_mq_end_request
15.67 ± 17% +8.2 23.84 ± 20% perf-profile.children.cycles-pp.handle_irq
15.66 ± 17% +8.2 23.83 ± 20% perf-profile.children.cycles-pp.handle_edge_irq
15.57 ± 17% +8.2 23.73 ± 20% perf-profile.children.cycles-pp.nvme_irq
15.60 ± 17% +8.2 23.77 ± 20% perf-profile.children.cycles-pp.handle_irq_event
15.58 ± 17% +8.2 23.75 ± 20% perf-profile.children.cycles-pp.__handle_irq_event_percpu
15.59 ± 17% +8.2 23.77 ± 20% perf-profile.children.cycles-pp.handle_irq_event_percpu
15.75 ± 17% +8.2 23.93 ± 20% perf-profile.children.cycles-pp.ret_from_intr
15.73 ± 17% +8.2 23.91 ± 20% perf-profile.children.cycles-pp.do_IRQ
15.54 ± 17% +8.2 23.73 ± 19% perf-profile.children.cycles-pp.blk_mq_complete_request
14.07 ± 16% +8.4 22.45 ± 16% perf-profile.children.cycles-pp.part_round_stats
14.05 ± 16% +8.4 22.45 ± 16% perf-profile.children.cycles-pp.blk_mq_queue_tag_busy_iter
14.05 ± 16% +8.4 22.45 ± 16% perf-profile.children.cycles-pp.blk_mq_in_flight
0.38 ± 20% -0.1 0.28 ± 16% perf-profile.self.cycles-pp._raw_spin_lock
0.10 ± 13% -0.0 0.07 ± 17% perf-profile.self.cycles-pp.idle_cpu
0.12 ± 14% -0.0 0.09 ± 7% perf-profile.self.cycles-pp.perf_mux_hrtimer_handler
0.10 ± 15% -0.0 0.07 ± 28% perf-profile.self.cycles-pp.cpu_load_update
0.14 ± 9% -0.0 0.11 ± 15% perf-profile.self.cycles-pp.perf_event_task_tick
0.62 ± 16% +0.3 0.89 ± 12% perf-profile.self.cycles-pp.dma_pool_alloc
0.44 ± 18% +0.3 0.74 ± 14% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
0.81 ± 16% +0.5 1.32 ± 25% perf-profile.self.cycles-pp.__indirect_thunk_start
2.07 ± 16% +0.6 2.69 ± 12% perf-profile.self.cycles-pp.find_next_bit
1.77 ± 16% +1.1 2.91 ± 15% perf-profile.self.cycles-pp.blk_mq_queue_tag_busy_iter
3.82 ± 16% +2.2 6.00 ± 18% perf-profile.self.cycles-pp.blk_mq_check_inflight
6.43 ± 15% +4.1 10.56 ± 16% perf-profile.self.cycles-pp.bt_iter
vm-scalability.time.system_time
600 +-+-------------------------------------------------------------------+
| |
500 +-O OO |
O O OO |
| O O O O O O |
400 +-+ O O O O |
| |
300 +-+ .+.++. .+ .+ .+ .+.+ +.+ .++.+.++ .++.+.++. |
| ++ + :.+ + + + : +.+ : +.+ + .++.+ +.+ .|
200 +-+ + + : : : : + + |
|: : : : : |
|: :: : : |
100 +-+ :: :: |
| : : |
0 +-+-------------------------------------------------------------------+
vm-scalability.time.maximum_resident_set_size
2.5e+07 +-+---------------------------------------------------------------+
| |
| ++.++.++.+. +. +. + +.++.+ +.++.+ .+.++. .++. .++.+ .+ |
2e+07 +-+ + .+ + +.+: : : : + ++ + + +.|
| : + : : : : |
| : : : : : |
1.5e+07 O-+O OO: : : : |
|: O O O O : : : : |
1e+07 +-+ O O O O OO O : : : : |
|:O O :: :: |
|: :: :: |
5e+06 +-+ : : |
| : : |
| : : |
0 +-+---------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[proc] 3f02daf340: kernel_selftests.proc.proc-pid-vm.fail
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: 3f02daf3406e77d938c20ebd37c2ca74e3779a85 ("[PATCH] proc: test /proc/*/maps, smaps, smaps_rollup, statm")
url: https://github.com/0day-ci/linux/commits/Alexey-Dobriyan/proc-test-proc-m...
base: https://git.kernel.org/cgit/linux/kernel/git/shuah/linux-kselftest.git next
in testcase: kernel_selftests
with the following parameters:
group: kselftests-02
test-description: The kernel contains a set of "self tests" under the tools/testing/selftests/ directory. These are intended to be small unit tests to exercise individual code paths in the kernel.
test-url: https://www.kernel.org/doc/Documentation/kselftest.txt
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 4G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
KERNEL SELFTESTS: linux_headers_dir is /usr/src/linux-headers-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85
2019-02-18 22:33:32 ln -sf /usr/bin/clang-7 /usr/bin/clang
2019-02-18 22:33:32 ln -sf /usr/bin/llc-7 /usr/bin/llc
media_tests test: not in Makefile
2019-02-18 22:33:32 make TARGETS=media_tests
make[1]: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/media_tests'
gcc -I../ -I../../../../usr/include/ media_device_test.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/media_tests/media_device_test
gcc -I../ -I../../../../usr/include/ media_device_open.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/media_tests/media_device_open
gcc -I../ -I../../../../usr/include/ video_device_test.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/media_tests/video_device_test
make[1]: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/media_tests'
ignored_by_lkp media_tests test
2019-02-18 22:33:32 make run_tests -C membarrier
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/membarrier'
gcc -g -I../../../../usr/include/ membarrier_test.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/membarrier/membarrier_test
TAP version 13
selftests: membarrier: membarrier_test
========================================
ok 1 sys_membarrier available
ok 2 sys membarrier invalid command test: command = -1, flags = 0, errno = 22. Failed as expected
ok 3 sys membarrier MEMBARRIER_CMD_QUERY invalid flags test: flags = 1, errno = 22. Failed as expected
ok 4 sys membarrier MEMBARRIER_CMD_GLOBAL test: flags = 0
ok 5 sys membarrier MEMBARRIER_CMD_PRIVATE_EXPEDITED not registered failure test: flags = 0, errno = 1
ok 6 sys membarrier MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED test: flags = 0
ok 7 sys membarrier MEMBARRIER_CMD_PRIVATE_EXPEDITED test: flags = 0
ok 8 sys membarrier MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE not registered failure test: flags = 0, errno = 1
ok 9 sys membarrier MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_SYNC_CORE test: flags = 0
ok 10 sys membarrier MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE test: flags = 0
ok 11 sys membarrier MEMBARRIER_CMD_GLOBAL_EXPEDITED test: flags = 0
ok 12 sys membarrier MEMBARRIER_CMD_REGISTER_GLOBAL_EXPEDITED test: flags = 0
ok 13 sys membarrier MEMBARRIER_CMD_GLOBAL_EXPEDITED test: flags = 0
Pass 13 Fail 0 Xfail 0 Xpass 0 Skip 0 Error 0
1..13
ok 1..1 selftests: membarrier: membarrier_test [PASS]
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/membarrier'
2019-02-18 22:33:32 make run_tests -C memfd
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/memfd'
gcc -D_FILE_OFFSET_BITS=64 -I../../../../include/uapi/ -I../../../../include/ -I../../../../usr/include/ -c -o common.o common.c
gcc -D_FILE_OFFSET_BITS=64 -I../../../../include/uapi/ -I../../../../include/ -I../../../../usr/include/ memfd_test.c common.o -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/memfd/memfd_test
memfd_test.c: In function ‘mfd_assert_get_seals’:
memfd_test.c:74:6: warning: implicit declaration of function ‘fcntl’ [-Wimplicit-function-declaration]
r = fcntl(fd, F_GET_SEALS);
^~~~~
memfd_test.c: In function ‘mfd_assert_open’:
memfd_test.c:197:6: warning: implicit declaration of function ‘open’ [-Wimplicit-function-declaration]
r = open(buf, flags, mode);
^~~~
memfd_test.c: In function ‘mfd_assert_write’:
memfd_test.c:328:6: warning: implicit declaration of function ‘fallocate’ [-Wimplicit-function-declaration]
r = fallocate(fd,
^~~~~~~~~
gcc -D_FILE_OFFSET_BITS=64 -I../../../../include/uapi/ -I../../../../include/ -I../../../../usr/include/ fuse_mnt.c -lfuse -pthread -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/memfd/fuse_mnt
gcc -D_FILE_OFFSET_BITS=64 -I../../../../include/uapi/ -I../../../../include/ -I../../../../usr/include/ fuse_test.c common.o -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/memfd/fuse_test
fuse_test.c: In function ‘mfd_assert_get_seals’:
fuse_test.c:67:6: warning: implicit declaration of function ‘fcntl’ [-Wimplicit-function-declaration]
r = fcntl(fd, F_GET_SEALS);
^~~~~
fuse_test.c: In function ‘main’:
fuse_test.c:261:7: warning: implicit declaration of function ‘open’ [-Wimplicit-function-declaration]
fd = open(argv[1], O_RDONLY | O_CLOEXEC);
^~~~
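The implicit-declaration warnings above typically just indicate missing
headers. A minimal sketch of the includes that would satisfy them (an
assumption about the fix, not a patch taken from this thread):

/* hypothetical header fix-up for memfd_test.c / fuse_test.c */
#define _GNU_SOURCE	/* fallocate(), and F_GET_SEALS in newer glibc */
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>	/* open(), fcntl(), fallocate() */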
TAP version 13
selftests: memfd: memfd_test
========================================
memfd: CREATE
memfd: BASIC
memfd: SEAL-WRITE
memfd: SEAL-SHRINK
memfd: SEAL-GROW
memfd: SEAL-RESIZE
memfd: SHARE-DUP
memfd: SHARE-MMAP
memfd: SHARE-OPEN
memfd: SHARE-FORK
memfd: SHARE-DUP (shared file-table)
memfd: SHARE-MMAP (shared file-table)
memfd: SHARE-OPEN (shared file-table)
memfd: SHARE-FORK (shared file-table)
memfd: DONE
ok 1..1 selftests: memfd: memfd_test [PASS]
selftests: memfd: run_fuse_test.sh
========================================
opening: ./mnt/memfd
fuse: DONE
ok 1..2 selftests: memfd: run_fuse_test.sh [PASS]
selftests: memfd: run_hugetlbfs_test.sh
========================================
memfd-hugetlb: CREATE
memfd-hugetlb: BASIC
memfd-hugetlb: SEAL-WRITE
memfd-hugetlb: SEAL-SHRINK
memfd-hugetlb: SEAL-GROW
memfd-hugetlb: SEAL-RESIZE
memfd-hugetlb: SHARE-DUP
memfd-hugetlb: SHARE-MMAP
memfd-hugetlb: SHARE-OPEN
memfd-hugetlb: SHARE-FORK
memfd-hugetlb: SHARE-DUP (shared file-table)
memfd-hugetlb: SHARE-MMAP (shared file-table)
memfd-hugetlb: SHARE-OPEN (shared file-table)
memfd-hugetlb: SHARE-FORK (shared file-table)
memfd: DONE
opening: ./mnt/memfd
fuse: DONE
ok 1..3 selftests: memfd: run_hugetlbfs_test.sh [PASS]
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/memfd'
2019-02-18 22:33:36 make run_tests -C memory-hotplug
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/memory-hotplug'
TAP version 13
selftests: memory-hotplug: mem-on-off-test.sh
========================================
Test scope: 2% hotplug memory
online all hot-pluggable memory in offline state:
SKIPPED - no hot-pluggable memory in offline state
offline 2% hot-pluggable memory in online state
trying to offline 1 out of 14 memory block(s):
online->offline memory1
online all hot-pluggable memory in offline state:
offline->online memory1
Test with memory notifier error injection
ok 1..1 selftests: memory-hotplug: mem-on-off-test.sh [PASS]
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/memory-hotplug'
2019-02-18 22:33:36 make run_tests -C mount
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/mount'
gcc -Wall -O2 unprivileged-remount-test.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/mount/unprivileged-remount-test
TAP version 13
selftests: mount: run_tests.sh
========================================
ok 1..1 selftests: mount: run_tests.sh [PASS]
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/mount'
2019-02-18 22:33:37 make run_tests -C mqueue
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/mqueue'
gcc -O2 mq_open_tests.c -lrt -lpthread -lpopt -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/mqueue/mq_open_tests
gcc -O2 mq_perf_tests.c -lrt -lpthread -lpopt -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/mqueue/mq_perf_tests
TAP version 13
selftests: mqueue: mq_open_tests
========================================
Using Default queue path - /test1
Initial system state:
Using queue path: /test1
RLIMIT_MSGQUEUE(soft): 819200
RLIMIT_MSGQUEUE(hard): 819200
Maximum Message Size: 8192
Maximum Queue Size: 10
Default Message Size: 8192
Default Queue Size: 10
Adjusted system state for testing:
RLIMIT_MSGQUEUE(soft): 819200
RLIMIT_MSGQUEUE(hard): 819200
Maximum Message Size: 8192
Maximum Queue Size: 10
Default Message Size: 8192
Default Queue Size: 10
Test series 1, behavior when no attr struct passed to mq_open:
Kernel supports setting defaults separately from maximums: PASS
Given sane values, mq_open without an attr struct succeeds: PASS
Kernel properly honors default setting knobs: PASS
Kernel properly limits default values to lesser of default/max: PASS
Kernel properly fails to create queue when defaults would
exceed rlimit: PASS
Test series 2, behavior when attr struct is passed to mq_open:
Queue open in excess of rlimit max when euid = 0 failed: PASS
Queue open with mq_maxmsg > limit when euid = 0 succeeded: PASS
Queue open with mq_msgsize > limit when euid = 0 succeeded: PASS
Queue open with total size > 2GB when euid = 0 failed: PASS
Queue open in excess of rlimit max when euid = 99 failed: PASS
Queue open with mq_maxmsg > limit when euid = 99 failed: PASS
Queue open with mq_msgsize > limit when euid = 99 failed: PASS
Queue open with total size > 2GB when euid = 99 failed: PASS
ok 1..1 selftests: mqueue: mq_open_tests [PASS]
selftests: mqueue: mq_perf_tests
========================================
Initial system state:
Using queue path: /mq_perf_tests
RLIMIT_MSGQUEUE(soft): 819200
RLIMIT_MSGQUEUE(hard): 819200
Maximum Message Size: 8192
Maximum Queue Size: 10
Nice value: 0
Adjusted system state for testing:
RLIMIT_MSGQUEUE(soft): (unlimited)
RLIMIT_MSGQUEUE(hard): (unlimited)
Maximum Message Size: 16777216
Maximum Queue Size: 65530
Nice value: -20
Continuous mode: (disabled)
CPUs to pin: 1
Queue /mq_perf_tests created:
mq_flags: O_NONBLOCK
mq_maxmsg: 65530
mq_msgsize: 16
mq_curmsgs: 0
Started mqueue performance test thread on CPU 1
Max priorities: 32768
Clock resolution: 1 nsec
Test #1: Time send/recv message, queue empty
(10000000 iterations)
Send msg: 19.813690196s total time
1981 nsec/msg
Recv msg: 20.304767964s total time
2030 nsec/msg
Test #2a: Time send/recv message, queue full, constant prio
:
(100000 iterations)
Filling queue...done. 0.60703526s
Testing...done.
Send msg: 0.213043727s total time
2130 nsec/msg
Recv msg: 0.217941154s total time
2179 nsec/msg
Draining queue...done. 0.60446833s
Test #2b: Time send/recv message, queue full, increasing prio
:
(100000 iterations)
Filling queue...done. 0.82697942s
Testing...done.
Send msg: 0.253747345s total time
2537 nsec/msg
Recv msg: 0.240003919s total time
2400 nsec/msg
Draining queue...done. 0.79813195s
Test #2c: Time send/recv message, queue full, decreasing prio
:
(100000 iterations)
Filling queue...done. 0.91069113s
Testing...done.
Send msg: 0.232305494s total time
2323 nsec/msg
Recv msg: 0.218167421s total time
2181 nsec/msg
Draining queue...done. 0.81395272s
Test #2d: Time send/recv message, queue full, random prio
:
(100000 iterations)
Filling queue...done. 0.106973941s
Testing...done.
Send msg: 0.293635488s total time
2936 nsec/msg
Recv msg: 0.271307473s total time
2713 nsec/msg
Draining queue...done. 0.88153043s
ok 1..2 selftests: mqueue: mq_perf_tests [PASS]
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/mqueue'
2019-02-18 22:34:30 make run_tests -C net
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/net'
make ARCH=x86 -C ../../../.. headers_install
make[1]: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85'
HOSTCC scripts/basic/fixdep
WRAP arch/x86/include/generated/uapi/asm/bpf_perf_event.h
WRAP arch/x86/include/generated/uapi/asm/poll.h
SYSTBL arch/x86/include/generated/asm/syscalls_32.h
SYSHDR arch/x86/include/generated/uapi/asm/unistd_32.h
SYSHDR arch/x86/include/generated/uapi/asm/unistd_64.h
SYSHDR arch/x86/include/generated/uapi/asm/unistd_x32.h
HOSTCC arch/x86/tools/relocs_32.o
HOSTCC arch/x86/tools/relocs_64.o
HOSTCC arch/x86/tools/relocs_common.o
HOSTLD arch/x86/tools/relocs
UPD include/generated/uapi/linux/version.h
HOSTCC scripts/unifdef
INSTALL usr/include/asm-generic/ (37 files)
INSTALL usr/include/drm/ (26 files)
INSTALL usr/include/linux/ (503 files)
INSTALL usr/include/linux/android/ (2 files)
INSTALL usr/include/linux/byteorder/ (2 files)
INSTALL usr/include/linux/caif/ (2 files)
INSTALL usr/include/linux/can/ (6 files)
INSTALL usr/include/linux/cifs/ (1 file)
INSTALL usr/include/linux/dvb/ (8 files)
INSTALL usr/include/linux/genwqe/ (1 file)
INSTALL usr/include/linux/hdlc/ (1 file)
INSTALL usr/include/linux/hsi/ (2 files)
INSTALL usr/include/linux/iio/ (2 files)
INSTALL usr/include/linux/isdn/ (1 file)
INSTALL usr/include/linux/mmc/ (1 file)
INSTALL usr/include/linux/netfilter/ (88 files)
INSTALL usr/include/linux/netfilter/ipset/ (4 files)
INSTALL usr/include/linux/netfilter_arp/ (2 files)
INSTALL usr/include/linux/netfilter_bridge/ (17 files)
INSTALL usr/include/linux/netfilter_ipv4/ (9 files)
INSTALL usr/include/linux/netfilter_ipv6/ (13 files)
INSTALL usr/include/linux/nfsd/ (5 files)
INSTALL usr/include/linux/raid/ (2 files)
INSTALL usr/include/linux/sched/ (1 file)
INSTALL usr/include/linux/spi/ (1 file)
INSTALL usr/include/linux/sunrpc/ (1 file)
INSTALL usr/include/linux/tc_act/ (15 files)
INSTALL usr/include/linux/tc_ematch/ (5 files)
INSTALL usr/include/linux/usb/ (13 files)
INSTALL usr/include/linux/wimax/ (1 file)
INSTALL usr/include/misc/ (2 files)
INSTALL usr/include/mtd/ (5 files)
INSTALL usr/include/rdma/ (25 files)
INSTALL usr/include/rdma/hfi/ (2 files)
INSTALL usr/include/scsi/ (5 files)
INSTALL usr/include/scsi/fc/ (4 files)
INSTALL usr/include/sound/ (16 files)
INSTALL usr/include/video/ (3 files)
INSTALL usr/include/xen/ (4 files)
INSTALL usr/include/asm/ (62 files)
make[1]: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85'
gcc -Wall -Wl,--no-as-needed -O2 -g -I../../../../usr/include/ reuseport_bpf.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/net/reuseport_bpf
gcc -Wall -Wl,--no-as-needed -O2 -g -I../../../../usr/include/ reuseport_bpf_cpu.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/net/reuseport_bpf_cpu
gcc -Wall -Wl,--no-as-needed -O2 -g -I../../../../usr/include/ -lnuma reuseport_bpf_numa.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/net/reuseport_bpf_numa
gcc -Wall -Wl,--no-as-needed -O2 -g -I../../../../usr/include/ reuseport_dualstack.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/net/reuseport_dualstack
gcc -Wall -Wl,--no-as-needed -O2 -g -I../../../../usr/include/ reuseaddr_conflict.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/net/reuseaddr_conflict
gcc -Wall -Wl,--no-as-needed -O2 -g -I../../../../usr/include/ tls.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/net/tls
gcc -Wall -Wl,--no-as-needed -O2 -g -I../../../../usr/include/ socket.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/net/socket
gcc -Wall -Wl,--no-as-needed -O2 -g -I../../../../usr/include/ psock_fanout.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/net/psock_fanout
gcc -Wall -Wl,--no-as-needed -O2 -g -I../../../../usr/include/ psock_tpacket.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/net/psock_tpacket
gcc -Wall -Wl,--no-as-needed -O2 -g -I../../../../usr/include/ msg_zerocopy.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/net/msg_zerocopy
gcc -Wall -Wl,--no-as-needed -O2 -g -I../../../../usr/include/ reuseport_addr_any.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/net/reuseport_addr_any
gcc -Wall -Wl,--no-as-needed -O2 -g -I../../../../usr/include/ -lpthread tcp_mmap.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/net/tcp_mmap
gcc -Wall -Wl,--no-as-needed -O2 -g -I../../../../usr/include/ -lpthread tcp_inq.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/net/tcp_inq
tcp_inq.c: In function ‘main’:
tcp_inq.c:178:4: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]
inq = *((int *) CMSG_DATA(cm));
^~~
gcc -Wall -Wl,--no-as-needed -O2 -g -I../../../../usr/include/ psock_snd.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/net/psock_snd
gcc -Wall -Wl,--no-as-needed -O2 -g -I../../../../usr/include/ txring_overwrite.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/net/txring_overwrite
gcc -Wall -Wl,--no-as-needed -O2 -g -I../../../../usr/include/ udpgso.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/net/udpgso
udpgso.c: In function ‘send_one’:
udpgso.c:484:3: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]
*((uint16_t *) CMSG_DATA(cm)) = gso_len;
^
gcc -Wall -Wl,--no-as-needed -O2 -g -I../../../../usr/include/ udpgso_bench_tx.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/net/udpgso_bench_tx
gcc -Wall -Wl,--no-as-needed -O2 -g -I../../../../usr/include/ udpgso_bench_rx.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/net/udpgso_bench_rx
gcc -Wall -Wl,--no-as-needed -O2 -g -I../../../../usr/include/ ip_defrag.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/net/ip_defrag
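The two -Wstrict-aliasing warnings above (tcp_inq.c:178 and udpgso.c:484) come from casting the byte pointer returned by CMSG_DATA() to int * / uint16_t * and dereferencing it. A strictly-conforming sketch of the usual fix is to copy through memcpy() instead of type-punning (variable names taken from the warnings themselves):

	#include <string.h>

	/* tcp_inq.c read side: replace inq = *((int *) CMSG_DATA(cm)); */
	memcpy(&inq, CMSG_DATA(cm), sizeof(inq));

	/* udpgso.c write side: replace *((uint16_t *) CMSG_DATA(cm)) = gso_len; */
	memcpy(CMSG_DATA(cm), &gso_len, sizeof(gso_len));

Compilers lower a fixed-size memcpy() like this to the same load/store, so the change costs nothing at -O2.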
TAP version 13
selftests: net: reuseport_bpf
========================================
---- IPv4 UDP ----
Testing EBPF mod 10...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 5: 5
Socket 6: 6
Socket 7: 7
Socket 8: 8
Socket 9: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 5: 15
Socket 6: 16
Socket 7: 17
Socket 8: 18
Socket 9: 19
Reprograming, testing mod 5...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 0: 5
Socket 1: 6
Socket 2: 7
Socket 3: 8
Socket 4: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 0: 15
Socket 1: 16
Socket 2: 17
Socket 3: 18
Socket 4: 19
Testing EBPF mod 20...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 5: 5
Socket 6: 6
Socket 7: 7
Socket 8: 8
Socket 9: 9
Socket 10: 10
Socket 11: 11
Socket 12: 12
Socket 13: 13
Socket 14: 14
Socket 15: 15
Socket 16: 16
Socket 17: 17
Socket 18: 18
Socket 19: 19
Socket 0: 20
Socket 1: 21
Socket 2: 22
Socket 3: 23
Socket 4: 24
Socket 5: 25
Socket 6: 26
Socket 7: 27
Socket 8: 28
Socket 9: 29
Socket 10: 30
Socket 11: 31
Socket 12: 32
Socket 13: 33
Socket 14: 34
Socket 15: 35
Socket 16: 36
Socket 17: 37
Socket 18: 38
Socket 19: 39
Reprograming, testing mod 10...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 5: 5
Socket 6: 6
Socket 7: 7
Socket 8: 8
Socket 9: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 5: 15
Socket 6: 16
Socket 7: 17
Socket 8: 18
Socket 9: 19
Socket 0: 20
Socket 1: 21
Socket 2: 22
Socket 3: 23
Socket 4: 24
Socket 5: 25
Socket 6: 26
Socket 7: 27
Socket 8: 28
Socket 9: 29
Socket 0: 30
Socket 1: 31
Socket 2: 32
Socket 3: 33
Socket 4: 34
Socket 5: 35
Socket 6: 36
Socket 7: 37
Socket 8: 38
Socket 9: 39
Testing CBPF mod 10...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 5: 5
Socket 6: 6
Socket 7: 7
Socket 8: 8
Socket 9: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 5: 15
Socket 6: 16
Socket 7: 17
Socket 8: 18
Socket 9: 19
Reprograming, testing mod 5...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 0: 5
Socket 1: 6
Socket 2: 7
Socket 3: 8
Socket 4: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 0: 15
Socket 1: 16
Socket 2: 17
Socket 3: 18
Socket 4: 19
Testing CBPF mod 20...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 5: 5
Socket 6: 6
Socket 7: 7
Socket 8: 8
Socket 9: 9
Socket 10: 10
Socket 11: 11
Socket 12: 12
Socket 13: 13
Socket 14: 14
Socket 15: 15
Socket 16: 16
Socket 17: 17
Socket 18: 18
Socket 19: 19
Socket 0: 20
Socket 1: 21
Socket 2: 22
Socket 3: 23
Socket 4: 24
Socket 5: 25
Socket 6: 26
Socket 7: 27
Socket 8: 28
Socket 9: 29
Socket 10: 30
Socket 11: 31
Socket 12: 32
Socket 13: 33
Socket 14: 34
Socket 15: 35
Socket 16: 36
Socket 17: 37
Socket 18: 38
Socket 19: 39
Reprograming, testing mod 10...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 5: 5
Socket 6: 6
Socket 7: 7
Socket 8: 8
Socket 9: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 5: 15
Socket 6: 16
Socket 7: 17
Socket 8: 18
Socket 9: 19
Socket 0: 20
Socket 1: 21
Socket 2: 22
Socket 3: 23
Socket 4: 24
Socket 5: 25
Socket 6: 26
Socket 7: 27
Socket 8: 28
Socket 9: 29
Socket 0: 30
Socket 1: 31
Socket 2: 32
Socket 3: 33
Socket 4: 34
Socket 5: 35
Socket 6: 36
Socket 7: 37
Socket 8: 38
Socket 9: 39
Testing too many filters...
Testing filters on non-SO_REUSEPORT socket...
---- IPv6 UDP ----
Testing EBPF mod 10...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 5: 5
Socket 6: 6
Socket 7: 7
Socket 8: 8
Socket 9: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 5: 15
Socket 6: 16
Socket 7: 17
Socket 8: 18
Socket 9: 19
Reprograming, testing mod 5...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 0: 5
Socket 1: 6
Socket 2: 7
Socket 3: 8
Socket 4: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 0: 15
Socket 1: 16
Socket 2: 17
Socket 3: 18
Socket 4: 19
Testing EBPF mod 20...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 5: 5
Socket 6: 6
Socket 7: 7
Socket 8: 8
Socket 9: 9
Socket 10: 10
Socket 11: 11
Socket 12: 12
Socket 13: 13
Socket 14: 14
Socket 15: 15
Socket 16: 16
Socket 17: 17
Socket 18: 18
Socket 19: 19
Socket 0: 20
Socket 1: 21
Socket 2: 22
Socket 3: 23
Socket 4: 24
Socket 5: 25
Socket 6: 26
Socket 7: 27
Socket 8: 28
Socket 9: 29
Socket 10: 30
Socket 11: 31
Socket 12: 32
Socket 13: 33
Socket 14: 34
Socket 15: 35
Socket 16: 36
Socket 17: 37
Socket 18: 38
Socket 19: 39
Reprograming, testing mod 10...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 5: 5
Socket 6: 6
Socket 7: 7
Socket 8: 8
Socket 9: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 5: 15
Socket 6: 16
Socket 7: 17
Socket 8: 18
Socket 9: 19
Socket 0: 20
Socket 1: 21
Socket 2: 22
Socket 3: 23
Socket 4: 24
Socket 5: 25
Socket 6: 26
Socket 7: 27
Socket 8: 28
Socket 9: 29
Socket 0: 30
Socket 1: 31
Socket 2: 32
Socket 3: 33
Socket 4: 34
Socket 5: 35
Socket 6: 36
Socket 7: 37
Socket 8: 38
Socket 9: 39
Testing CBPF mod 10...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 5: 5
Socket 6: 6
Socket 7: 7
Socket 8: 8
Socket 9: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 5: 15
Socket 6: 16
Socket 7: 17
Socket 8: 18
Socket 9: 19
Reprograming, testing mod 5...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 0: 5
Socket 1: 6
Socket 2: 7
Socket 3: 8
Socket 4: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 0: 15
Socket 1: 16
Socket 2: 17
Socket 3: 18
Socket 4: 19
Testing CBPF mod 20...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 5: 5
Socket 6: 6
Socket 7: 7
Socket 8: 8
Socket 9: 9
Socket 10: 10
Socket 11: 11
Socket 12: 12
Socket 13: 13
Socket 14: 14
Socket 15: 15
Socket 16: 16
Socket 17: 17
Socket 18: 18
Socket 19: 19
Socket 0: 20
Socket 1: 21
Socket 2: 22
Socket 3: 23
Socket 4: 24
Socket 5: 25
Socket 6: 26
Socket 7: 27
Socket 8: 28
Socket 9: 29
Socket 10: 30
Socket 11: 31
Socket 12: 32
Socket 13: 33
Socket 14: 34
Socket 15: 35
Socket 16: 36
Socket 17: 37
Socket 18: 38
Socket 19: 39
Reprograming, testing mod 10...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 5: 5
Socket 6: 6
Socket 7: 7
Socket 8: 8
Socket 9: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 5: 15
Socket 6: 16
Socket 7: 17
Socket 8: 18
Socket 9: 19
Socket 0: 20
Socket 1: 21
Socket 2: 22
Socket 3: 23
Socket 4: 24
Socket 5: 25
Socket 6: 26
Socket 7: 27
Socket 8: 28
Socket 9: 29
Socket 0: 30
Socket 1: 31
Socket 2: 32
Socket 3: 33
Socket 4: 34
Socket 5: 35
Socket 6: 36
Socket 7: 37
Socket 8: 38
Socket 9: 39
Testing too many filters...
Testing filters on non-SO_REUSEPORT socket...
---- IPv6 UDP w/ mapped IPv4 ----
Testing EBPF mod 20...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 5: 5
Socket 6: 6
Socket 7: 7
Socket 8: 8
Socket 9: 9
Socket 10: 10
Socket 11: 11
Socket 12: 12
Socket 13: 13
Socket 14: 14
Socket 15: 15
Socket 16: 16
Socket 17: 17
Socket 18: 18
Socket 19: 19
Socket 0: 20
Socket 1: 21
Socket 2: 22
Socket 3: 23
Socket 4: 24
Socket 5: 25
Socket 6: 26
Socket 7: 27
Socket 8: 28
Socket 9: 29
Socket 10: 30
Socket 11: 31
Socket 12: 32
Socket 13: 33
Socket 14: 34
Socket 15: 35
Socket 16: 36
Socket 17: 37
Socket 18: 38
Socket 19: 39
Reprograming, testing mod 10...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 5: 5
Socket 6: 6
Socket 7: 7
Socket 8: 8
Socket 9: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 5: 15
Socket 6: 16
Socket 7: 17
Socket 8: 18
Socket 9: 19
Socket 0: 20
Socket 1: 21
Socket 2: 22
Socket 3: 23
Socket 4: 24
Socket 5: 25
Socket 6: 26
Socket 7: 27
Socket 8: 28
Socket 9: 29
Socket 0: 30
Socket 1: 31
Socket 2: 32
Socket 3: 33
Socket 4: 34
Socket 5: 35
Socket 6: 36
Socket 7: 37
Socket 8: 38
Socket 9: 39
Testing EBPF mod 10...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 5: 5
Socket 6: 6
Socket 7: 7
Socket 8: 8
Socket 9: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 5: 15
Socket 6: 16
Socket 7: 17
Socket 8: 18
Socket 9: 19
Reprograming, testing mod 5...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 0: 5
Socket 1: 6
Socket 2: 7
Socket 3: 8
Socket 4: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 0: 15
Socket 1: 16
Socket 2: 17
Socket 3: 18
Socket 4: 19
Testing CBPF mod 10...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 5: 5
Socket 6: 6
Socket 7: 7
Socket 8: 8
Socket 9: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 5: 15
Socket 6: 16
Socket 7: 17
Socket 8: 18
Socket 9: 19
Reprograming, testing mod 5...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 0: 5
Socket 1: 6
Socket 2: 7
Socket 3: 8
Socket 4: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 0: 15
Socket 1: 16
Socket 2: 17
Socket 3: 18
Socket 4: 19
Testing CBPF mod 20...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 5: 5
Socket 6: 6
Socket 7: 7
Socket 8: 8
Socket 9: 9
Socket 10: 10
Socket 11: 11
Socket 12: 12
Socket 13: 13
Socket 14: 14
Socket 15: 15
Socket 16: 16
Socket 17: 17
Socket 18: 18
Socket 19: 19
Socket 0: 20
Socket 1: 21
Socket 2: 22
Socket 3: 23
Socket 4: 24
Socket 5: 25
Socket 6: 26
Socket 7: 27
Socket 8: 28
Socket 9: 29
Socket 10: 30
Socket 11: 31
Socket 12: 32
Socket 13: 33
Socket 14: 34
Socket 15: 35
Socket 16: 36
Socket 17: 37
Socket 18: 38
Socket 19: 39
Reprograming, testing mod 10...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 5: 5
Socket 6: 6
Socket 7: 7
Socket 8: 8
Socket 9: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 5: 15
Socket 6: 16
Socket 7: 17
Socket 8: 18
Socket 9: 19
Socket 0: 20
Socket 1: 21
Socket 2: 22
Socket 3: 23
Socket 4: 24
Socket 5: 25
Socket 6: 26
Socket 7: 27
Socket 8: 28
Socket 9: 29
Socket 0: 30
Socket 1: 31
Socket 2: 32
Socket 3: 33
Socket 4: 34
Socket 5: 35
Socket 6: 36
Socket 7: 37
Socket 8: 38
Socket 9: 39
---- IPv4 TCP ----
Testing EBPF mod 10...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 5: 5
Socket 6: 6
Socket 7: 7
Socket 8: 8
Socket 9: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 5: 15
Socket 6: 16
Socket 7: 17
Socket 8: 18
Socket 9: 19
Reprograming, testing mod 5...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 0: 5
Socket 1: 6
Socket 2: 7
Socket 3: 8
Socket 4: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 0: 15
Socket 1: 16
Socket 2: 17
Socket 3: 18
Socket 4: 19
Testing CBPF mod 10...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 5: 5
Socket 6: 6
Socket 7: 7
Socket 8: 8
Socket 9: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 5: 15
Socket 6: 16
Socket 7: 17
Socket 8: 18
Socket 9: 19
Reprograming, testing mod 5...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 0: 5
Socket 1: 6
Socket 2: 7
Socket 3: 8
Socket 4: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 0: 15
Socket 1: 16
Socket 2: 17
Socket 3: 18
Socket 4: 19
Testing too many filters...
Testing filters on non-SO_REUSEPORT socket...
---- IPv6 TCP ----
Testing EBPF mod 10...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 5: 5
Socket 6: 6
Socket 7: 7
Socket 8: 8
Socket 9: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 5: 15
Socket 6: 16
Socket 7: 17
Socket 8: 18
Socket 9: 19
Reprograming, testing mod 5...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 0: 5
Socket 1: 6
Socket 2: 7
Socket 3: 8
Socket 4: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 0: 15
Socket 1: 16
Socket 2: 17
Socket 3: 18
Socket 4: 19
Testing CBPF mod 10...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 5: 5
Socket 6: 6
Socket 7: 7
Socket 8: 8
Socket 9: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 5: 15
Socket 6: 16
Socket 7: 17
Socket 8: 18
Socket 9: 19
Reprograming, testing mod 5...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 0: 5
Socket 1: 6
Socket 2: 7
Socket 3: 8
Socket 4: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 0: 15
Socket 1: 16
Socket 2: 17
Socket 3: 18
Socket 4: 19
Testing too many filters...
Testing filters on non-SO_REUSEPORT socket...
---- IPv6 TCP w/ mapped IPv4 ----
Testing EBPF mod 10...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 5: 5
Socket 6: 6
Socket 7: 7
Socket 8: 8
Socket 9: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 5: 15
Socket 6: 16
Socket 7: 17
Socket 8: 18
Socket 9: 19
Reprograming, testing mod 5...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 0: 5
Socket 1: 6
Socket 2: 7
Socket 3: 8
Socket 4: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 0: 15
Socket 1: 16
Socket 2: 17
Socket 3: 18
Socket 4: 19
Testing CBPF mod 10...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 5: 5
Socket 6: 6
Socket 7: 7
Socket 8: 8
Socket 9: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 5: 15
Socket 6: 16
Socket 7: 17
Socket 8: 18
Socket 9: 19
Reprograming, testing mod 5...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 0: 5
Socket 1: 6
Socket 2: 7
Socket 3: 8
Socket 4: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 0: 15
Socket 1: 16
Socket 2: 17
Socket 3: 18
Socket 4: 19
Testing filter add without bind...
SUCCESS
ok 1..1 selftests: net: reuseport_bpf [PASS]
selftests: net: reuseport_bpf_cpu
========================================
---- IPv4 UDP ----
send cpu 0, receive socket 0
send cpu 1, receive socket 1
send cpu 1, receive socket 1
send cpu 0, receive socket 0
send cpu 0, receive socket 0
send cpu 1, receive socket 1
---- IPv6 UDP ----
send cpu 0, receive socket 0
send cpu 1, receive socket 1
send cpu 1, receive socket 1
send cpu 0, receive socket 0
send cpu 0, receive socket 0
send cpu 1, receive socket 1
---- IPv4 TCP ----
send cpu 0, receive socket 0
send cpu 1, receive socket 1
send cpu 1, receive socket 1
send cpu 0, receive socket 0
send cpu 0, receive socket 0
send cpu 1, receive socket 1
---- IPv6 TCP ----
send cpu 0, receive socket 0
send cpu 1, receive socket 1
send cpu 1, receive socket 1
send cpu 0, receive socket 0
send cpu 0, receive socket 0
send cpu 1, receive socket 1
SUCCESS
ok 1..2 selftests: net: reuseport_bpf_cpu [PASS]
selftests: net: reuseport_bpf_numa
========================================
---- IPv4 UDP ----
send node 0, receive socket 0
send node 0, receive socket 0
---- IPv6 UDP ----
send node 0, receive socket 0
send node 0, receive socket 0
---- IPv4 TCP ----
send node 0, receive socket 0
send node 0, receive socket 0
---- IPv6 TCP ----
send node 0, receive socket 0
send node 0, receive socket 0
SUCCESS
ok 1..3 selftests: net: reuseport_bpf_numa [PASS]
selftests: net: reuseport_dualstack
========================================
---- UDP IPv4 created before IPv6 ----
---- UDP IPv6 created before IPv4 ----
---- UDP IPv4 created before IPv6 (large) ----
---- UDP IPv6 created before IPv4 (large) ----
---- TCP IPv4 created before IPv6 ----
---- TCP IPv6 created before IPv4 ----
SUCCESS
ok 1..4 selftests: net: reuseport_dualstack [PASS]
selftests: net: reuseaddr_conflict
========================================
Opening 127.0.0.1:9999
Opening INADDR_ANY:9999
bind: Address already in use
Opening in6addr_any:9999
Opening INADDR_ANY:9999
bind: Address already in use
Opening INADDR_ANY:9999 after closing ipv6 socket
bind: Address already in use
Success
ok 1..5 selftests: net: reuseaddr_conflict [PASS]
selftests: net: tls
========================================
[==========] Running 29 tests from 2 test cases.
[ RUN ] tls.sendfile
[ OK ] tls.sendfile
[ RUN ] tls.send_then_sendfile
[ OK ] tls.send_then_sendfile
[ RUN ] tls.recv_max
[ OK ] tls.recv_max
[ RUN ] tls.recv_small
[ OK ] tls.recv_small
[ RUN ] tls.msg_more
[ OK ] tls.msg_more
[ RUN ] tls.sendmsg_single
[ OK ] tls.sendmsg_single
[ RUN ] tls.sendmsg_large
[ OK ] tls.sendmsg_large
[ RUN ] tls.sendmsg_multiple
[ OK ] tls.sendmsg_multiple
[ RUN ] tls.sendmsg_multiple_stress
[ OK ] tls.sendmsg_multiple_stress
[ RUN ] tls.splice_from_pipe
[ OK ] tls.splice_from_pipe
[ RUN ] tls.splice_from_pipe2
[ OK ] tls.splice_from_pipe2
[ RUN ] tls.send_and_splice
[ OK ] tls.send_and_splice
[ RUN ] tls.splice_to_pipe
[ OK ] tls.splice_to_pipe
[ RUN ] tls.recvmsg_single
[ OK ] tls.recvmsg_single
[ RUN ] tls.recvmsg_single_max
[ OK ] tls.recvmsg_single_max
[ RUN ] tls.recvmsg_multiple
[ OK ] tls.recvmsg_multiple
[ RUN ] tls.single_send_multiple_recv
[ OK ] tls.single_send_multiple_recv
[ RUN ] tls.multiple_send_single_recv
[ OK ] tls.multiple_send_single_recv
[ RUN ] tls.recv_partial
[ OK ] tls.recv_partial
[ RUN ] tls.recv_nonblock
[ OK ] tls.recv_nonblock
[ RUN ] tls.recv_peek
[ OK ] tls.recv_peek
[ RUN ] tls.recv_peek_multiple
[ OK ] tls.recv_peek_multiple
[ RUN ] tls.recv_peek_multiple_records
[ OK ] tls.recv_peek_multiple_records
[ RUN ] tls.recv_peek_large_buf_mult_recs
[ OK ] tls.recv_peek_large_buf_mult_recs
[ RUN ] tls.pollin
[ OK ] tls.pollin
[ RUN ] tls.poll_wait
[ OK ] tls.poll_wait
[ RUN ] tls.blocking
[ OK ] tls.blocking
[ RUN ] tls.nonblocking
[ OK ] tls.nonblocking
[ RUN ] tls.control_msg
[ OK ] tls.control_msg
[==========] 29 / 29 tests passed.
[ PASSED ]
ok 1..6 selftests: net: tls [PASS]
selftests: net: run_netsocktests
========================================
--------------------
running socket test
--------------------
[PASS]
ok 1..7 selftests: net: run_netsocktests [PASS]
selftests: net: run_afpackettests
========================================
--------------------
running psock_fanout test
--------------------
test: control single socket
test: control multiple sockets
test: unique ids
test: datapath 0x0 ports 8000,8002
info: count=0,0, expect=0,0
info: count=15,5, expect=15,5
info: count=20,5, expect=20,5
test: datapath 0x1000 ports 8000,8002
info: count=0,0, expect=0,0
info: count=15,5, expect=15,5
info: count=20,15, expect=20,15
test: datapath 0x1 ports 8000,8002
info: count=0,0, expect=0,0
info: count=10,10, expect=10,10
info: count=17,18, expect=18,17
test: datapath 0x3 ports 8000,8002
info: count=0,0, expect=0,0
info: count=15,5, expect=15,5
info: count=20,15, expect=20,15
test: datapath 0x6 ports 8000,8002
info: count=0,0, expect=0,0
info: count=5,15, expect=15,5
info: count=20,15, expect=15,20
test: datapath 0x7 ports 8000,8002
info: count=0,0, expect=0,0
info: count=5,15, expect=15,5
info: count=20,15, expect=15,20
test: datapath 0x2 ports 8000,8002
info: count=0,0, expect=0,0
info: count=20,0, expect=20,0
info: count=20,0, expect=20,0
test: datapath 0x2 ports 8000,8002
info: count=0,0, expect=0,0
info: count=0,20, expect=0,20
info: count=0,20, expect=0,20
test: datapath 0x2000 ports 8000,8002
info: count=0,0, expect=0,0
info: count=20,20, expect=20,20
info: count=20,20, expect=20,20
OK. All tests passed
[PASS]
--------------------
running psock_tpacket test
--------------------
test: TPACKET_V1 with PACKET_RX_RING .................... 100 pkts (14200 bytes)
test: TPACKET_V1 with PACKET_TX_RING .................... 100 pkts (14200 bytes)
test: TPACKET_V2 with PACKET_RX_RING .................... 100 pkts (14200 bytes)
test: TPACKET_V2 with PACKET_TX_RING .................... 100 pkts (14200 bytes)
test: TPACKET_V3 with PACKET_RX_RING .................... 100 pkts (14200 bytes)
test: TPACKET_V3 with PACKET_TX_RING .................... 100 pkts (14200 bytes)
OK. All tests passed
[PASS]
--------------------
running txring_overwrite test
--------------------
read: a (0x61)
read: b (0x62)
[PASS]
ok 1..8 selftests: net: run_afpackettests [PASS]
selftests: net: test_bpf.sh
========================================
test_bpf: ok
ok 1..9 selftests: net: test_bpf.sh [PASS]
selftests: net: netdevice.sh
========================================
SKIP: eth0: interface already up
Cannot get device udp-fragmentation-offload settings: Operation not supported
PASS: eth0: ethtool list features
PASS: eth0: ethtool dump
PASS: eth0: ethtool stats
SKIP: eth0: interface kept up
ok 1..10 selftests: net: netdevice.sh [PASS]
selftests: net: rtnetlink.sh
========================================
PASS: policy routing
PASS: route get
PASS: tc htb hierarchy
PASS: gre tunnel endpoint
PASS: gretap
RTNETLINK answers: Operation not supported
Cannot find device "ip6gretap00"
Cannot find device "ip6gretap00"
Cannot find device "ip6gretap00"
RTNETLINK answers: Operation not supported
Cannot find device "ip6gretap00"
FAIL: ip6gretap
PASS: erspan
RTNETLINK answers: Operation not supported
Cannot find device "ip6erspan00"
Cannot find device "ip6erspan00"
Cannot find device "ip6erspan00"
RTNETLINK answers: Operation not supported
Cannot find device "ip6erspan00"
Cannot find device "ip6erspan00"
Cannot find device "ip6erspan00"
RTNETLINK answers: Operation not supported
Cannot find device "ip6erspan00"
FAIL: ip6erspan
PASS: bridge setup
PASS: ipv6 addrlabel
PASS: set ifalias da7b7d10-b0de-4028-b3ae-13ed91e90bc9 for test-dummy0
PASS: vrf
PASS: vxlan
PASS: fou
PASS: macsec
PASS: ipsec
FAIL: ipsec_offload netdevsim doesn't support IPsec offload
SKIP: fdb get tests: iproute2 too old
SKIP: fdb get tests: iproute2 too old
ok 1..11 selftests: net: rtnetlink.sh [PASS]
selftests: net: xfrm_policy.sh
========================================
PASS: policy before exception matches
PASS: ping to .254 bypassed ipsec tunnel
PASS: direct policy matches
PASS: policy matches
ok 1..12 selftests: net: xfrm_policy.sh [PASS]
selftests: net: fib_tests.sh
========================================
Single path route test
Start point
TEST: IPv4 fibmatch [ OK ]
TEST: IPv6 fibmatch [ OK ]
Nexthop device deleted
TEST: IPv4 fibmatch - no route [ OK ]
TEST: IPv6 fibmatch - no route [ OK ]
Multipath route test
Start point
TEST: IPv4 fibmatch [ OK ]
TEST: IPv6 fibmatch [ OK ]
One nexthop device deleted
TEST: IPv4 - multipath route removed on delete [ OK ]
TEST: IPv6 - multipath down to single path [ OK ]
Second nexthop device deleted
TEST: IPv6 - no route [ OK ]
Single path, admin down
Start point
TEST: IPv4 fibmatch [ OK ]
TEST: IPv6 fibmatch [ OK ]
Route deleted on down
TEST: IPv4 fibmatch [ OK ]
TEST: IPv6 fibmatch [ OK ]
Admin down multipath
Verify start point
TEST: IPv4 fibmatch [ OK ]
TEST: IPv6 fibmatch [ OK ]
One device down, one up
TEST: IPv4 fibmatch on down device [ OK ]
TEST: IPv6 fibmatch on down device [ OK ]
TEST: IPv4 fibmatch on up device [ OK ]
TEST: IPv6 fibmatch on up device [ OK ]
TEST: IPv4 flags on down device [ OK ]
TEST: IPv6 flags on down device [ OK ]
TEST: IPv4 flags on up device [ OK ]
TEST: IPv6 flags on up device [ OK ]
Other device down and up
TEST: IPv4 fibmatch on down device [ OK ]
TEST: IPv6 fibmatch on down device [ OK ]
TEST: IPv4 fibmatch on up device [ OK ]
TEST: IPv6 fibmatch on up device [ OK ]
TEST: IPv4 flags on down device [ OK ]
TEST: IPv6 flags on down device [ OK ]
TEST: IPv4 flags on up device [ OK ]
TEST: IPv6 flags on up device [ OK ]
Both devices down
TEST: IPv4 fibmatch [ OK ]
TEST: IPv6 fibmatch [ OK ]
Local carrier tests - single path
Start point
TEST: IPv4 fibmatch [ OK ]
TEST: IPv6 fibmatch [ OK ]
TEST: IPv4 - no linkdown flag [ OK ]
TEST: IPv6 - no linkdown flag [ OK ]
Carrier off on nexthop
TEST: IPv4 fibmatch [ OK ]
TEST: IPv6 fibmatch [ OK ]
TEST: IPv4 - linkdown flag set [ OK ]
TEST: IPv6 - linkdown flag set [ OK ]
Route to local address with carrier down
TEST: IPv4 fibmatch [ OK ]
TEST: IPv6 fibmatch [ OK ]
TEST: IPv4 linkdown flag set [ OK ]
TEST: IPv6 linkdown flag set [ OK ]
Single path route carrier test
Start point
TEST: IPv4 fibmatch [ OK ]
TEST: IPv6 fibmatch [ OK ]
TEST: IPv4 no linkdown flag [ OK ]
TEST: IPv6 no linkdown flag [ OK ]
Carrier down
TEST: IPv4 fibmatch [ OK ]
TEST: IPv6 fibmatch [ OK ]
TEST: IPv4 linkdown flag set [ OK ]
TEST: IPv6 linkdown flag set [ OK ]
Second address added with carrier down
TEST: IPv4 fibmatch [ OK ]
TEST: IPv6 fibmatch [ OK ]
TEST: IPv4 linkdown flag set [ OK ]
TEST: IPv6 linkdown flag set [ OK ]
IPv4 nexthop tests
<<< write me >>>
IPv6 nexthop tests
TEST: Directly connected nexthop, unicast address [ OK ]
TEST: Directly connected nexthop, unicast address with device [ OK ]
TEST: Gateway is linklocal address [ OK ]
TEST: Gateway is linklocal address, no device [ OK ]
TEST: Gateway can not be local unicast address [ OK ]
TEST: Gateway can not be local unicast address, with device [ OK ]
TEST: Gateway can not be a local linklocal address [ OK ]
TEST: Gateway can be local address in a VRF [ OK ]
TEST: Gateway can be local address in a VRF, with device [ OK ]
TEST: Gateway can be local linklocal address in a VRF [ OK ]
TEST: Redirect to VRF lookup [ OK ]
TEST: VRF route, gateway can be local address in default VRF [ OK ]
TEST: VRF route, gateway can not be a local address [ OK ]
TEST: VRF route, gateway can not be a local addr with device [ OK ]
IPv6 route add / append tests
TEST: Attempt to add duplicate route - gw [ OK ]
TEST: Attempt to add duplicate route - dev only [ OK ]
TEST: Attempt to add duplicate route - reject route [ OK ]
TEST: Append nexthop to existing route - gw [ OK ]
TEST: Add multipath route [ OK ]
TEST: Attempt to add duplicate multipath route [ OK ]
TEST: Route add with different metrics [ OK ]
TEST: Route delete with metric [ OK ]
IPv6 route replace tests
TEST: Single path with single path [ OK ]
TEST: Single path with multipath [ OK ]
TEST: Single path with single path via multipath attribute [ OK ]
TEST: Invalid nexthop [ OK ]
TEST: Single path - replace of non-existent route [ OK ]
TEST: Multipath with multipath [ OK ]
TEST: Multipath with single path [ OK ]
TEST: Multipath with single path via multipath attribute [ OK ]
TEST: Multipath - invalid first nexthop [ OK ]
TEST: Multipath - invalid second nexthop [ OK ]
TEST: Multipath - replace of non-existent route [ OK ]
IPv4 route add / append tests
TEST: Attempt to add duplicate route - gw [ OK ]
TEST: Attempt to add duplicate route - dev only [ OK ]
TEST: Attempt to add duplicate route - reject route [ OK ]
TEST: Add new nexthop for existing prefix [ OK ]
TEST: Append nexthop to existing route - gw [ OK ]
TEST: Append nexthop to existing route - dev only [ OK ]
TEST: Append nexthop to existing route - reject route [ OK ]
TEST: Append nexthop to existing reject route - gw [ OK ]
TEST: Append nexthop to existing reject route - dev only [ OK ]
TEST: add multipath route [ OK ]
TEST: Attempt to add duplicate multipath route [ OK ]
TEST: Route add with different metrics [ OK ]
TEST: Route delete with metric [ OK ]
IPv4 route replace tests
TEST: Single path with single path [ OK ]
TEST: Single path with multipath [ OK ]
TEST: Single path with reject route [ OK ]
TEST: Single path with single path via multipath attribute [ OK ]
TEST: Invalid nexthop [ OK ]
TEST: Single path - replace of non-existent route [ OK ]
TEST: Multipath with multipath [ OK ]
TEST: Multipath with single path [ OK ]
TEST: Multipath with single path via multipath attribute [ OK ]
TEST: Multipath with reject route [ OK ]
TEST: Multipath - invalid first nexthop [ OK ]
TEST: Multipath - invalid second nexthop [ OK ]
TEST: Multipath - replace of non-existent route [ OK ]
IPv6 prefix route tests
TEST: Default metric [ OK ]
TEST: User specified metric on first device [ OK ]
TEST: User specified metric on second device [ OK ]
TEST: Delete of address on first device [ OK ]
TEST: Modify metric of address [ OK ]
Command line is not complete. Try option "help"
TEST: Prefix route removed on link down [ OK ]
TEST: Prefix route with metric on link up [ OK ]
IPv4 prefix route tests
TEST: Default metric [ OK ]
TEST: User specified metric on first device [ OK ]
TEST: User specified metric on second device [ OK ]
TEST: Delete of address on first device [ OK ]
TEST: Modify metric of address [ OK ]
Command line is not complete. Try option "help"
TEST: Prefix route removed on link down [ OK ]
TEST: Prefix route with metric on link up [ OK ]
IPv6 routes with metrics
TEST: Single path route with mtu metric [ OK ]
TEST: Multipath route via 2 single routes with mtu metric on first [ OK ]
TEST: Multipath route via 2 single routes with mtu metric on 2nd [ OK ]
TEST: MTU of second leg [ OK ]
TEST: Multipath route with mtu metric [ OK ]
TEST: Using route with mtu metric [ OK ]
TEST: Invalid metric (fails metric_convert) [ OK ]
IPv4 route add / append tests
TEST: Single path route with mtu metric [ OK ]
TEST: Multipath route with mtu metric [ OK ]
TEST: Using route with mtu metric [ OK ]
TEST: Invalid metric (fails metric_convert) [ OK ]
Tests passed: 141
Tests failed: 0
ok 1..13 selftests: net: fib_tests.sh [PASS]
selftests: net: fib-onlink-tests.sh
========================================
########################################
Configuring interfaces
RTNETLINK answers: File exists
not ok 1..14 selftests: net: fib-onlink-tests.sh [FAIL]
selftests: net: pmtu.sh
========================================
TEST: ipv4: PMTU exceptions [ OK ]
connect: Cannot assign requested address
connect: Cannot assign requested address
TEST: ipv6: PMTU exceptions [FAIL]
PMTU exception wasn't created after exceeding MTU
TEST: IPv4 over vxlan4: PMTU exceptions [ OK ]
TEST: IPv6 over vxlan4: PMTU exceptions [ OK ]
TEST: IPv4 over vxlan6: PMTU exceptions [ OK ]
connect: Cannot assign requested address
TEST: IPv6 over vxlan6: PMTU exceptions [FAIL]
PMTU exception wasn't created after exceeding link layer MTU on vxlan interface
RTNETLINK answers: Operation not supported
geneve4 not supported
TEST: IPv4 over geneve4: PMTU exceptions [SKIP]
RTNETLINK answers: Operation not supported
geneve4 not supported
TEST: IPv6 over geneve4: PMTU exceptions [SKIP]
RTNETLINK answers: Operation not supported
geneve6 not supported
TEST: IPv4 over geneve6: PMTU exceptions [SKIP]
RTNETLINK answers: Operation not supported
geneve6 not supported
TEST: IPv6 over geneve6: PMTU exceptions [SKIP]
TEST: IPv4 over fou4: PMTU exceptions [ OK ]
TEST: IPv6 over fou4: PMTU exceptions [ OK ]
TEST: IPv4 over fou6: PMTU exceptions [ OK ]
TEST: IPv6 over fou6: PMTU exceptions [ OK ]
TEST: IPv4 over gue4: PMTU exceptions [ OK ]
TEST: IPv6 over gue4: PMTU exceptions [ OK ]
TEST: IPv4 over gue6: PMTU exceptions [ OK ]
TEST: IPv6 over gue6: PMTU exceptions [FAIL]
found PMTU exception with incorrect MTU 4940, expected 3940, after exceeding link layer MTU on gue interface
TEST: vti6: PMTU exceptions [ OK ]
TEST: vti4: PMTU exceptions [ OK ]
TEST: vti4: default MTU assignment [ OK ]
TEST: vti6: default MTU assignment [ OK ]
TEST: vti4: MTU setting on link creation [ OK ]
TEST: vti6: MTU setting on link creation [ OK ]
TEST: vti6: MTU changes on link changes [ OK ]
not ok 1..15 selftests: net: pmtu.sh [FAIL]
selftests: net: udpgso.sh
========================================
ipv4 cmsg
device mtu (orig): 65536
device mtu (test): 1500
ipv4 tx:1 gso:0
ipv4 tx:1472 gso:0
ipv4 tx:1473 gso:0 (fail)
ipv4 tx:1472 gso:1472 (fail)
ipv4 tx:1473 gso:1472
ipv4 tx:2944 gso:1472
ipv4 tx:2945 gso:1472
ipv4 tx:64768 gso:1472
ipv4 tx:65507 gso:1472
ipv4 tx:65508 gso:1472 (fail)
ipv4 tx:1 gso:1 (fail)
ipv4 tx:2 gso:1
ipv4 tx:5 gso:2
ipv4 tx:36 gso:1
ipv4 tx:37 gso:1 (fail)
OK
ipv4 setsockopt
device mtu (orig): 65536
device mtu (test): 1500
ipv4 tx:1 gso:0
ipv4 tx:1472 gso:0
ipv4 tx:1473 gso:0 (fail)
ipv4 tx:1472 gso:1472 (fail)
ipv4 tx:1473 gso:1472
ipv4 tx:2944 gso:1472
ipv4 tx:2945 gso:1472
ipv4 tx:64768 gso:1472
ipv4 tx:65507 gso:1472
ipv4 tx:65508 gso:1472 (fail)
ipv4 tx:1 gso:1 (fail)
ipv4 tx:2 gso:1
ipv4 tx:5 gso:2
ipv4 tx:36 gso:1
ipv4 tx:37 gso:1 (fail)
OK
ipv6 cmsg
device mtu (orig): 65536
device mtu (test): 1500
ipv6 tx:1 gso:0
ipv6 tx:1452 gso:0
ipv6 tx:1453 gso:0 (fail)
ipv6 tx:1452 gso:1452 (fail)
ipv6 tx:1453 gso:1452
ipv6 tx:2904 gso:1452
ipv6 tx:2905 gso:1452
ipv6 tx:65340 gso:1452
ipv6 tx:65527 gso:1452
ipv6 tx:65528 gso:1452 (fail)
ipv6 tx:1 gso:1 (fail)
ipv6 tx:2 gso:1
ipv6 tx:5 gso:2
ipv6 tx:16 gso:1
ipv6 tx:17 gso:1 (fail)
OK
ipv6 setsockopt
device mtu (orig): 65536
device mtu (test): 1500
ipv6 tx:1 gso:0
ipv6 tx:1452 gso:0
ipv6 tx:1453 gso:0 (fail)
ipv6 tx:1452 gso:1452 (fail)
ipv6 tx:1453 gso:1452
ipv6 tx:2904 gso:1452
ipv6 tx:2905 gso:1452
ipv6 tx:65340 gso:1452
ipv6 tx:65527 gso:1452
ipv6 tx:65528 gso:1452 (fail)
ipv6 tx:1 gso:1 (fail)
ipv6 tx:2 gso:1
ipv6 tx:5 gso:2
ipv6 tx:16 gso:1
ipv6 tx:17 gso:1 (fail)
OK
ipv4 connected
device mtu (orig): 65536
device mtu (test): 1600
route mtu (test): 1500
path mtu (read): 1500
ipv4 tx:1 gso:0
ipv4 tx:1472 gso:0
ipv4 tx:1473 gso:0 (fail)
ipv4 tx:1472 gso:1472 (fail)
ipv4 tx:1473 gso:1472
ipv4 tx:2944 gso:1472
ipv4 tx:2945 gso:1472
ipv4 tx:64768 gso:1472
ipv4 tx:65507 gso:1472
ipv4 tx:65508 gso:1472 (fail)
ipv4 tx:1 gso:1 (fail)
ipv4 tx:2 gso:1
ipv4 tx:5 gso:2
ipv4 tx:36 gso:1
ipv4 tx:37 gso:1 (fail)
OK
ipv4 msg_more
device mtu (orig): 65536
device mtu (test): 1500
ipv4 tx:1 gso:0
ipv4 tx:1472 gso:0
ipv4 tx:1473 gso:0 (fail)
ipv4 tx:1472 gso:1472 (fail)
ipv4 tx:1473 gso:1472
ipv4 tx:2944 gso:1472
ipv4 tx:2945 gso:1472
ipv4 tx:64768 gso:1472
ipv4 tx:65507 gso:1472
ipv4 tx:65508 gso:1472 (fail)
ipv4 tx:1 gso:1 (fail)
ipv4 tx:2 gso:1
ipv4 tx:5 gso:2
ipv4 tx:36 gso:1
ipv4 tx:37 gso:1 (fail)
OK
ipv6 msg_more
device mtu (orig): 65536
device mtu (test): 1500
ipv6 tx:1 gso:0
ipv6 tx:1452 gso:0
ipv6 tx:1453 gso:0 (fail)
ipv6 tx:1452 gso:1452 (fail)
ipv6 tx:1453 gso:1452
ipv6 tx:2904 gso:1452
ipv6 tx:2905 gso:1452
ipv6 tx:65340 gso:1452
ipv6 tx:65527 gso:1452
ipv6 tx:65528 gso:1452 (fail)
ipv6 tx:1 gso:1 (fail)
ipv6 tx:2 gso:1
ipv6 tx:5 gso:2
ipv6 tx:16 gso:1
ipv6 tx:17 gso:1 (fail)
OK
ok 1..16 selftests: net: udpgso.sh [PASS]
selftests: net: ip_defrag.sh
========================================
ipv4 defrag
PASS
seed = 1550500573
ipv4 defrag with overlaps
seed = 1550500574
./ip_defrag: recv: expected timeout; got 1564
not ok 1..17 selftests: net: ip_defrag.sh [FAIL]
selftests: net: udpgso_bench.sh
========================================
ipv4
tcp
tcp rx: 4492 MB/s 74311 calls/s
tcp tx: 4492 MB/s 76190 calls/s 76190 msg/s
tcp tx: 3694 MB/s 62653 calls/s 62653 msg/s
tcp rx: 3694 MB/s 61543 calls/s
tcp rx: 3762 MB/s 63279 calls/s
tcp tx: 3762 MB/s 63811 calls/s 63811 msg/s
tcp zerocopy
tcp tx: 2467 MB/s 41843 calls/s 41843 msg/s
tcp rx: 2467 MB/s 31760 calls/s
tcp tx: 2317 MB/s 39311 calls/s 39311 msg/s
tcp rx: 2317 MB/s 32552 calls/s
tcp rx: 2252 MB/s 23121 calls/s
tcp tx: 2252 MB/s 38201 calls/s 38201 msg/s
udp
udp rx: 267 MB/s 190480 calls/s
udp tx: 280 MB/s 199584 calls/s 4752 msg/s
udp rx: 317 MB/s 225956 calls/s
udp tx: 319 MB/s 227892 calls/s 5426 msg/s
udp rx: 295 MB/s 210504 calls/s
udp tx: 306 MB/s 218442 calls/s 5201 msg/s
udp rx: 348 MB/s 247930 calls/s
udp gso
udp rx: 933 MB/s 665282 calls/s
udp tx: 1028 MB/s 17440 calls/s 17440 msg/s
udp rx: 886 MB/s 631833 calls/s
udp tx: 999 MB/s 16954 calls/s 16954 msg/s
udp rx: 859 MB/s 612456 calls/s
udp tx: 913 MB/s 15499 calls/s 15499 msg/s
udp rx: 757 MB/s 539526 calls/s
udp gso zerocopy
udp rx: 655 MB/s 466979 calls/s
udp tx: 685 MB/s 11619 calls/s 11619 msg/s
udp rx: 661 MB/s 471468 calls/s
udp tx: 691 MB/s 11725 calls/s 11725 msg/s
udp rx: 634 MB/s 452166 calls/s
udp tx: 646 MB/s 10972 calls/s 10972 msg/s
udp rx: 652 MB/s 464657 calls/s
ipv6
tcp
tcp rx: 2994 MB/s 46973 calls/s
tcp tx: 2994 MB/s 50797 calls/s 50797 msg/s
tcp rx: 2834 MB/s 42723 calls/s
tcp tx: 2834 MB/s 48071 calls/s 48071 msg/s
tcp rx: 2987 MB/s 47227 calls/s
tcp tx: 2990 MB/s 50714 calls/s 50714 msg/s
tcp zerocopy
tcp rx: 1815 MB/s 26120 calls/s
tcp tx: 1816 MB/s 30801 calls/s 30801 msg/s
tcp rx: 1950 MB/s 29090 calls/s
tcp tx: 1950 MB/s 33077 calls/s 33077 msg/s
tcp rx: 1486 MB/s 16815 calls/s
tcp tx: 1486 MB/s 25220 calls/s 25220 msg/s
udp
udp rx: 276 MB/s 201956 calls/s
udp tx: 313 MB/s 228545 calls/s 5315 msg/s
udp rx: 286 MB/s 208677 calls/s
udp tx: 306 MB/s 223643 calls/s 5201 msg/s
udp rx: 345 MB/s 252232 calls/s
udp tx: 361 MB/s 263934 calls/s 6138 msg/s
udp gso
udp rx: 828 MB/s 604264 calls/s
udp tx: 863 MB/s 14654 calls/s 14654 msg/s
udp rx: 823 MB/s 600814 calls/s
udp tx: 859 MB/s 14575 calls/s 14575 msg/s
udp rx: 776 MB/s 566122 calls/s
udp tx: 836 MB/s 14185 calls/s 14185 msg/s
udp rx: 821 MB/s 598858 calls/s
udp gso zerocopy
udp rx: 698 MB/s 509059 calls/s
udp tx: 713 MB/s 12108 calls/s 12108 msg/s
udp rx: 694 MB/s 506531 calls/s
udp tx: 723 MB/s 12268 calls/s 12268 msg/s
udp rx: 678 MB/s 494678 calls/s
udp tx: 701 MB/s 11892 calls/s 11892 msg/s
udp rx: 544 MB/s 396882 calls/s
ok 1..18 selftests: net: udpgso_bench.sh [PASS]
selftests: net: fib_rule_tests.sh
========================================
######################################################################
TEST SECTION: IPv4 fib rule
######################################################################
TEST: rule4 check: oif dummy0 [ OK ]
TEST: rule4 del by pref: oif dummy0 [ OK ]
RTNETLINK answers: No route to host
TEST: rule4 check: from 192.51.100.3 iif dummy0 [FAIL]
TEST: rule4 del by pref: from 192.51.100.3 iif dummy0 [ OK ]
TEST: rule4 check: tos 0x10 [ OK ]
TEST: rule4 del by pref: tos 0x10 [ OK ]
TEST: rule4 check: fwmark 0x64 [ OK ]
TEST: rule4 del by pref: fwmark 0x64 [ OK ]
TEST: rule4 check: uidrange 100-100 [ OK ]
TEST: rule4 del by pref: uidrange 100-100 [ OK ]
TEST: rule4 check: sport 666 dport 777 [ OK ]
TEST: rule4 del by pref: sport 666 dport 777 [ OK ]
TEST: rule4 check: ipproto tcp [ OK ]
TEST: rule4 del by pref: ipproto tcp [ OK ]
TEST: rule4 check: ipproto icmp [ OK ]
TEST: rule4 del by pref: ipproto icmp [ OK ]
######################################################################
TEST SECTION: IPv6 fib rule
######################################################################
TEST: rule6 check: oif dummy0 [ OK ]
TEST: rule6 del by pref: oif dummy0 [ OK ]
TEST: rule6 check: from 2001:db8:1::3 iif dummy0 [ OK ]
TEST: rule6 del by pref: from 2001:db8:1::3 iif dummy0 [ OK ]
TEST: rule6 check: tos 0x10 [ OK ]
TEST: rule6 del by pref: tos 0x10 [ OK ]
TEST: rule6 check: fwmark 0x64 [ OK ]
TEST: rule6 del by pref: fwmark 0x64 [ OK ]
TEST: rule6 check: uidrange 100-100 [ OK ]
TEST: rule6 del by pref: uidrange 100-100 [ OK ]
TEST: rule6 check: sport 666 dport 777 [ OK ]
TEST: rule6 del by pref: sport 666 dport 777 [ OK ]
TEST: rule6 check: ipproto tcp [ OK ]
TEST: rule6 del by pref: ipproto tcp [ OK ]
TEST: rule6 check: ipproto icmp [ OK ]
TEST: rule6 del by pref: ipproto icmp [ OK ]
ok 1..19 selftests: net: fib_rule_tests.sh [PASS]
selftests: net: msg_zerocopy.sh
========================================
ipv4 tcp -t 1
./msg_zerocopy: setaffinity 2
./msg_zerocopy: setaffinity 3
not ok 1..20 selftests: net: msg_zerocopy.sh [FAIL]
selftests: net: psock_snd.sh
========================================
dgram
tx: 128
rx: 142
rx: 100
OK
dgram bind
tx: 128
rx: 142
rx: 100
OK
raw
tx: 142
rx: 142
rx: 100
OK
raw bind
tx: 142
rx: 142
rx: 100
OK
raw qdisc bypass
tx: 142
rx: 142
rx: 100
OK
raw vlan
tx: 146
rx: 100
OK
raw vnet hdr
tx: 152
rx: 142
rx: 100
OK
raw csum_off
tx: 152
rx: 142
rx: 100
OK
raw csum_off with bad offset (fails)
./psock_snd: write: Invalid argument
raw min size
tx: 42
rx: 0
OK
raw mtu size
tx: 1514
rx: 1472
OK
raw mtu size + 1 (fails)
./psock_snd: write: Message too long
raw vlan mtu size + 1 (fails)
./psock_snd: write: Message too long
dgram mtu size
tx: 1500
rx: 1472
OK
dgram mtu size + 1 (fails)
./psock_snd: write: Message too long
raw truncate hlen (fails: does not arrive)
tx: 14
./psock_snd: recv: Resource temporarily unavailable
raw truncate hlen - 1 (fails: EINVAL)
./psock_snd: write: Invalid argument
raw gso min size
tx: 1525
rx: 1473
OK
raw gso min size - 1 (fails)
tx: 1524
./psock_snd: recv: Resource temporarily unavailable
raw gso max size
tx: 65559
rx: 65507
OK
raw gso max size + 1 (fails)
tx: 65560
./psock_snd: recv: Resource temporarily unavailable
OK. All tests passed
ok 1..21 selftests: net: psock_snd.sh [PASS]
selftests: net: udpgro_bench.sh
========================================
Missing xdp_dummy helper. Build bpf selftest first
not ok 1..22 selftests: net: udpgro_bench.sh [FAIL]
selftests: net: udpgro.sh
========================================
Missing xdp_dummy helper. Build bpf selftest first
not ok 1..23 selftests: net: udpgro.sh [FAIL]
selftests: net: test_vxlan_under_vrf.sh
========================================
Checking HV connectivity [ OK ]
Check VM connectivity through VXLAN (underlay in the default VRF) [ OK ]
Check VM connectivity through VXLAN (underlay in a VRF) [FAIL]
not ok 1..24 selftests: net: test_vxlan_under_vrf.sh [FAIL]
selftests: net: reuseport_addr_any.sh
========================================
UDP IPv4 ... pass
UDP IPv6 ... pass
UDP IPv4 mapped to IPv6 ... pass
TCP IPv4 ... pass
TCP IPv6 ... pass
TCP IPv4 mapped to IPv6 ... pass
DCCP IPv4 ... pass
DCCP IPv6 ... pass
DCCP IPv4 mapped to IPv6 ... pass
SUCCESS
ok 1..25 selftests: net: reuseport_addr_any.sh [PASS]
selftests: net: test_vxlan_fdb_changelink.sh
========================================
expected two remotes after fdb append [ OK ]
expected two remotes after link set [ OK ]
ok 1..26 selftests: net: test_vxlan_fdb_changelink.sh [PASS]
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/net'
2019-02-18 22:36:59 make run_tests -C netfilter
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/netfilter'
TAP version 13
selftests: netfilter: nft_trans_stress.sh
========================================
SKIP: Could not run test without nft tool
not ok 1..1 selftests: netfilter: nft_trans_stress.sh [SKIP]
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/netfilter'
2019-02-18 22:36:59 make run_tests -C nsfs
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/nsfs'
gcc -Wall -Werror owner.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/nsfs/owner
gcc -Wall -Werror pidns.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/nsfs/pidns
TAP version 13
selftests: nsfs: owner
========================================
ok 1..1 selftests: nsfs: owner [PASS]
selftests: nsfs: pidns
========================================
ok 1..2 selftests: nsfs: pidns [PASS]
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/nsfs'
ignored_by_lkp powerpc test
prctl test: not in Makefile
2019-02-18 22:36:59 make TARGETS=prctl
make[1]: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/prctl'
Makefile:14: warning: overriding recipe for target 'clean'
../lib.mk:137: warning: ignoring old recipe for target 'clean'
gcc disable-tsc-ctxt-sw-stress-test.c -o disable-tsc-ctxt-sw-stress-test
gcc disable-tsc-on-off-stress-test.c -o disable-tsc-on-off-stress-test
gcc disable-tsc-test.c -o disable-tsc-test
make[1]: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/prctl'
2019-02-18 22:37:00 make run_tests -C prctl
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/prctl'
Makefile:14: warning: overriding recipe for target 'clean'
../lib.mk:137: warning: ignoring old recipe for target 'clean'
TAP version 13
selftests: prctl: disable-tsc-ctxt-sw-stress-test
========================================
[No further output means we're allright]
ok 1..1 selftests: prctl: disable-tsc-ctxt-sw-stress-test [PASS]
selftests: prctl: disable-tsc-on-off-stress-test
========================================
[No further output means we're allright]
ok 1..2 selftests: prctl: disable-tsc-on-off-stress-test [PASS]
selftests: prctl: disable-tsc-test
========================================
rdtsc() == 594942551353
prctl(PR_GET_TSC, &tsc_val); tsc_val == PR_TSC_ENABLE
rdtsc() == 594942806222
prctl(PR_SET_TSC, PR_TSC_ENABLE)
rdtsc() == 594942882173
prctl(PR_SET_TSC, PR_TSC_SIGSEGV)
rdtsc() == [ SIG_SEGV ]
prctl(PR_GET_TSC, &tsc_val); tsc_val == PR_TSC_SIGSEGV
prctl(PR_SET_TSC, PR_TSC_ENABLE)
rdtsc() == 594943173310
ok 1..3 selftests: prctl: disable-tsc-test [PASS]
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/prctl'
2019-02-18 22:37:20 make run_tests -C proc
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/proc'
gcc -Wall -O2 -Wno-unused-function -D_GNU_SOURCE fd-001-lookup.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/proc/fd-001-lookup
gcc -Wall -O2 -Wno-unused-function -D_GNU_SOURCE fd-002-posix-eq.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/proc/fd-002-posix-eq
gcc -Wall -O2 -Wno-unused-function -D_GNU_SOURCE fd-003-kthread.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/proc/fd-003-kthread
gcc -Wall -O2 -Wno-unused-function -D_GNU_SOURCE proc-loadavg-001.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/proc/proc-loadavg-001
proc-loadavg-001.c:17:0: warning: "_GNU_SOURCE" redefined
#define _GNU_SOURCE
<command-line>:0:0: note: this is the location of the previous definition
gcc -Wall -O2 -Wno-unused-function -D_GNU_SOURCE proc-pid-vm.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/proc/proc-pid-vm
gcc -Wall -O2 -Wno-unused-function -D_GNU_SOURCE proc-self-map-files-001.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/proc/proc-self-map-files-001
gcc -Wall -O2 -Wno-unused-function -D_GNU_SOURCE proc-self-map-files-002.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/proc/proc-self-map-files-002
gcc -Wall -O2 -Wno-unused-function -D_GNU_SOURCE proc-self-syscall.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/proc/proc-self-syscall
proc-self-syscall.c:16:0: warning: "_GNU_SOURCE" redefined
#define _GNU_SOURCE
<command-line>:0:0: note: this is the location of the previous definition
gcc -Wall -O2 -Wno-unused-function -D_GNU_SOURCE proc-self-wchan.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/proc/proc-self-wchan
gcc -Wall -O2 -Wno-unused-function -D_GNU_SOURCE proc-uptime-001.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/proc/proc-uptime-001
gcc -Wall -O2 -Wno-unused-function -D_GNU_SOURCE proc-uptime-002.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/proc/proc-uptime-002
proc-uptime-002.c:18:0: warning: "_GNU_SOURCE" redefined
#define _GNU_SOURCE
<command-line>:0:0: note: this is the location of the previous definition
gcc -Wall -O2 -Wno-unused-function -D_GNU_SOURCE read.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/proc/read
gcc -Wall -O2 -Wno-unused-function -D_GNU_SOURCE self.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/proc/self
gcc -Wall -O2 -Wno-unused-function -D_GNU_SOURCE thread-self.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/proc/thread-self
TAP version 13
selftests: proc: fd-001-lookup
========================================
ok 1..1 selftests: proc: fd-001-lookup [PASS]
selftests: proc: fd-002-posix-eq
========================================
ok 1..2 selftests: proc: fd-002-posix-eq [PASS]
selftests: proc: fd-003-kthread
========================================
ok 1..3 selftests: proc: fd-003-kthread [PASS]
selftests: proc: proc-loadavg-001
========================================
ok 1..4 selftests: proc: proc-loadavg-001 [PASS]
selftests: proc: proc-pid-vm
========================================
proc-pid-vm: proc-pid-vm.c:277: main: Assertion `rv == strlen(buf0)' failed.
Aborted
not ok 1..5 selftests: proc: proc-pid-vm [FAIL]
selftests: proc: proc-self-map-files-001
========================================
ok 1..6 selftests: proc: proc-self-map-files-001 [PASS]
selftests: proc: proc-self-map-files-002
========================================
ok 1..7 selftests: proc: proc-self-map-files-002 [PASS]
selftests: proc: proc-self-syscall
========================================
ok 1..8 selftests: proc: proc-self-syscall [PASS]
selftests: proc: proc-self-wchan
========================================
ok 1..9 selftests: proc: proc-self-wchan [PASS]
selftests: proc: proc-uptime-001
========================================
ok 1..10 selftests: proc: proc-uptime-001 [PASS]
selftests: proc: proc-uptime-002
========================================
ok 1..11 selftests: proc: proc-uptime-002 [PASS]
selftests: proc: read
========================================
ok 1..12 selftests: proc: read [PASS]
selftests: proc: self
========================================
ok 1..13 selftests: proc: self [PASS]
selftests: proc: thread-self
========================================
ok 1..14 selftests: proc: thread-self [PASS]
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/proc'
2019-02-18 22:37:25 make run_tests -C pstore
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/pstore'
TAP version 13
selftests: pstore: pstore_tests
========================================
=== Pstore unit tests (pstore_tests) ===
UUID=50adacea-c174-405f-8ab2-f83e625bba6b
Checking pstore backend is registered ... ok
backend=ramoops
cmdline=ip=::::vm-snb-4G-733::dhcp root=/dev/ram0 user=lkp job=/lkp/jobs/scheduled/vm-snb-4G-733/kernel_selftests-kselftests-02-debian-x86_64-2018-04-03.cgz-3f02daf-20190218-22674-59eleu-6.yaml ARCH=x86_64 kconfig=x86_64-rhel-7.2 branch=linux-devel/devel-hourly-2019021706 commit=3f02daf3406e77d938c20ebd37c2ca74e3779a85 BOOT_IMAGE=/pkg/linux/x86_64-rhel-7.2/gcc-7/3f02daf3406e77d938c20ebd37c2ca74e3779a85/vmlinuz-5.0.0-rc1-00001-g3f02daf erst_disable max_uptime=3600 RESULT_ROOT=/result/kernel_selftests/kselftests-02/vm-snb-4G/debian-x86_64-2018-04-03.cgz/x86_64-rhel-7.2/gcc-7/3f02daf3406e77d938c20ebd37c2ca74e3779a85/8 LKP_SERVER=inn debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 net.ifnames=0 printk.devkmsg=on panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 drbd.minor_count=8 systemd.log_level=err ignore_loglevel console=tty0 earlyprintk=ttyS0,115200 console=ttyS0,115200 vga=normal rw rcuperf.shutdown=0
Checking pstore console is registered ... ok
Checking /dev/pmsg0 exists ... ok
Writing unique string to /dev/pmsg0 ... ok
ok 1..1 selftests: pstore: pstore_tests [PASS]
selftests: pstore: pstore_post_reboot_tests
========================================
=== Pstore unit tests (pstore_post_reboot_tests) ===
UUID=53f55baf-88ff-4a8e-a4c0-9141d1d48b5d
Checking pstore backend is registered ... ok
backend=ramoops
cmdline=ip=::::vm-snb-4G-733::dhcp root=/dev/ram0 user=lkp job=/lkp/jobs/scheduled/vm-snb-4G-733/kernel_selftests-kselftests-02-debian-x86_64-2018-04-03.cgz-3f02daf-20190218-22674-59eleu-6.yaml ARCH=x86_64 kconfig=x86_64-rhel-7.2 branch=linux-devel/devel-hourly-2019021706 commit=3f02daf3406e77d938c20ebd37c2ca74e3779a85 BOOT_IMAGE=/pkg/linux/x86_64-rhel-7.2/gcc-7/3f02daf3406e77d938c20ebd37c2ca74e3779a85/vmlinuz-5.0.0-rc1-00001-g3f02daf erst_disable max_uptime=3600 RESULT_ROOT=/result/kernel_selftests/kselftests-02/vm-snb-4G/debian-x86_64-2018-04-03.cgz/x86_64-rhel-7.2/gcc-7/3f02daf3406e77d938c20ebd37c2ca74e3779a85/8 LKP_SERVER=inn debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 net.ifnames=0 printk.devkmsg=on panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 drbd.minor_count=8 systemd.log_level=err ignore_loglevel console=tty0 earlyprintk=ttyS0,115200 console=ttyS0,115200 vga=normal rw rcuperf.shutdown=0
pstore_crash_test has not been executed yet. we skip further tests.
not ok 1..2 selftests: pstore: pstore_post_reboot_tests [SKIP]
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/pstore'
ptp test: not in Makefile
2019-02-18 22:37:26 make TARGETS=ptp
make[1]: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/ptp'
Makefile:10: warning: overriding recipe for target 'clean'
../lib.mk:137: warning: ignoring old recipe for target 'clean'
gcc -I../../../../usr/include/ testptp.c -lrt -o testptp
make[1]: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/ptp'
2019-02-18 22:37:26 make run_tests -C ptp
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/ptp'
Makefile:10: warning: overriding recipe for target 'clean'
../lib.mk:137: warning: ignoring old recipe for target 'clean'
TAP version 13
selftests: ptp: testptp
========================================
ok 1..1 selftests: ptp: testptp [PASS]
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/ptp'
2019-02-18 22:37:26 make run_tests -C ptrace
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/ptrace'
gcc -iquote../../../../include/uapi -Wall peeksiginfo.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/ptrace/peeksiginfo
TAP version 13
selftests: ptrace: peeksiginfo
========================================
PASS
ok 1..1 selftests: ptrace: peeksiginfo [PASS]
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/ptrace'
2019-02-18 22:37:26 make run_tests -C rseq
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/rseq'
gcc -O2 -Wall -g -I./ -I../../../../usr/include/ -L./ -Wl,-rpath=./ -shared -fPIC rseq.c -lpthread -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/rseq/librseq.so
gcc -O2 -Wall -g -I./ -I../../../../usr/include/ -L./ -Wl,-rpath=./ basic_test.c -lpthread -lrseq -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/rseq/basic_test
gcc -O2 -Wall -g -I./ -I../../../../usr/include/ -L./ -Wl,-rpath=./ basic_percpu_ops_test.c -lpthread -lrseq -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/rseq/basic_percpu_ops_test
gcc -O2 -Wall -g -I./ -I../../../../usr/include/ -L./ -Wl,-rpath=./ param_test.c -lpthread -lrseq -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/rseq/param_test
gcc -O2 -Wall -g -I./ -I../../../../usr/include/ -L./ -Wl,-rpath=./ -DBENCHMARK param_test.c -lpthread -lrseq -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/rseq/param_test_benchmark
gcc -O2 -Wall -g -I./ -I../../../../usr/include/ -L./ -Wl,-rpath=./ -DRSEQ_COMPARE_TWICE param_test.c -lpthread -lrseq -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/rseq/param_test_compare_twice
TAP version 13
selftests: rseq: basic_test
========================================
testing current cpu
ok 1..1 selftests: rseq: basic_test [PASS]
selftests: rseq: basic_percpu_ops_test
========================================
spinlock
percpu_list
ok 1..2 selftests: rseq: basic_percpu_ops_test [PASS]
selftests: rseq: param_test
========================================
ok 1..3 selftests: rseq: param_test [PASS]
selftests: rseq: param_test_benchmark
========================================
ok 1..4 selftests: rseq: param_test_benchmark [PASS]
selftests: rseq: param_test_compare_twice
========================================
ok 1..5 selftests: rseq: param_test_compare_twice [PASS]
selftests: rseq: run_param_test.sh
========================================
Default parameters
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Loop injection: 10000 loops
Injecting at <1>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Injecting at <2>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Injecting at <3>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Injecting at <4>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Injecting at <5>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Injecting at <6>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Injecting at <7>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Injecting at <8>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Injecting at <9>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Yield injection (25%)
Injecting at <7>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Injecting at <8>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Injecting at <9>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Yield injection (50%)
Injecting at <7>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Injecting at <8>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Injecting at <9>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Yield injection (100%)
Injecting at <7>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Injecting at <8>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Injecting at <9>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Kill injection (25%)
Injecting at <7>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Injecting at <8>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Injecting at <9>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Kill injection (50%)
Injecting at <7>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Injecting at <8>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Injecting at <9>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Kill injection (100%)
Injecting at <7>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Injecting at <8>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Injecting at <9>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Sleep injection (1ms, 25%)
Injecting at <7>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Injecting at <8>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Injecting at <9>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Sleep injection (1ms, 50%)
Injecting at <7>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Injecting at <8>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Injecting at <9>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Sleep injection (1ms, 100%)
Injecting at <7>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Injecting at <8>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Injecting at <9>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
ok 1..6 selftests: rseq: run_param_test.sh [PASS]
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/rseq'
2019-02-18 23:09:23 make run_tests -C rtc
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/rtc'
gcc -O3 -Wl,-no-as-needed -Wall -lrt -lpthread -lm rtctest.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/rtc/rtctest
gcc -O3 -Wl,-no-as-needed -Wall -lrt -lpthread -lm setdate.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/rtc/setdate
TAP version 13
selftests: rtc: rtctest
========================================
rtctest.c:49:rtc.date_read:Current RTC date/time is 18/02/2019 23:09:24.
rtctest.c:137:rtc.alarm_alm_set:Alarm time now set to 23:09:33.
rtctest.c:198:rtc.alarm_wkalm_set:Alarm time now set to 18/02/2019 23:09:36.
[==========] Running 5 tests from 2 test cases.
[ RUN ] rtc.date_read
[ OK ] rtc.date_read
[ RUN ] rtc.uie_read
[ OK ] rtc.uie_read
[ RUN ] rtc.uie_select
[ OK ] rtc.uie_select
[ RUN ] rtc.alarm_alm_set
[ OK ] rtc.alarm_alm_set
[ RUN ] rtc.alarm_wkalm_set
[ OK ] rtc.alarm_wkalm_set
[==========] 5 / 5 tests passed.
[ PASSED ]
ok 1..1 selftests: rtc: rtctest [PASS]
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/rtc'
2019-02-18 23:09:35 make run_tests -C seccomp
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/seccomp'
gcc -Wl,-no-as-needed -Wall -lpthread seccomp_bpf.c -o seccomp_bpf
gcc -Wl,-no-as-needed -Wall seccomp_benchmark.c -o seccomp_benchmark
TAP version 13
selftests: seccomp: seccomp_bpf
========================================
[==========] Running 72 tests from 1 test cases.
[ RUN ] global.mode_strict_support
[ OK ] global.mode_strict_support
[ RUN ] global.mode_strict_cannot_call_prctl
[ OK ] global.mode_strict_cannot_call_prctl
[ RUN ] global.no_new_privs_support
[ OK ] global.no_new_privs_support
[ RUN ] global.mode_filter_support
[ OK ] global.mode_filter_support
[ RUN ] global.mode_filter_without_nnp
[ OK ] global.mode_filter_without_nnp
[ RUN ] global.filter_size_limits
[ OK ] global.filter_size_limits
[ RUN ] global.filter_chain_limits
[ OK ] global.filter_chain_limits
[ RUN ] global.mode_filter_cannot_move_to_strict
[ OK ] global.mode_filter_cannot_move_to_strict
[ RUN ] global.mode_filter_get_seccomp
[ OK ] global.mode_filter_get_seccomp
[ RUN ] global.ALLOW_all
[ OK ] global.ALLOW_all
[ RUN ] global.empty_prog
[ OK ] global.empty_prog
[ RUN ] global.log_all
[ OK ] global.log_all
[ RUN ] global.unknown_ret_is_kill_inside
[ OK ] global.unknown_ret_is_kill_inside
[ RUN ] global.unknown_ret_is_kill_above_allow
[ OK ] global.unknown_ret_is_kill_above_allow
[ RUN ] global.KILL_all
[ OK ] global.KILL_all
[ RUN ] global.KILL_one
[ OK ] global.KILL_one
[ RUN ] global.KILL_one_arg_one
[ OK ] global.KILL_one_arg_one
[ RUN ] global.KILL_one_arg_six
[ OK ] global.KILL_one_arg_six
[ RUN ] global.KILL_thread
[ OK ] global.KILL_thread
[ RUN ] global.KILL_process
[ OK ] global.KILL_process
[ RUN ] global.arg_out_of_range
[ OK ] global.arg_out_of_range
[ RUN ] global.ERRNO_valid
[ OK ] global.ERRNO_valid
[ RUN ] global.ERRNO_zero
[ OK ] global.ERRNO_zero
[ RUN ] global.ERRNO_capped
[ OK ] global.ERRNO_capped
[ RUN ] global.ERRNO_order
[ OK ] global.ERRNO_order
[ RUN ] TRAP.dfl
[ OK ] TRAP.dfl
[ RUN ] TRAP.ign
[ OK ] TRAP.ign
[ RUN ] TRAP.handler
[ OK ] TRAP.handler
[ RUN ] precedence.allow_ok
[ OK ] precedence.allow_ok
[ RUN ] precedence.kill_is_highest
[ OK ] precedence.kill_is_highest
[ RUN ] precedence.kill_is_highest_in_any_order
[ OK ] precedence.kill_is_highest_in_any_order
[ RUN ] precedence.trap_is_second
[ OK ] precedence.trap_is_second
[ RUN ] precedence.trap_is_second_in_any_order
[ OK ] precedence.trap_is_second_in_any_order
[ RUN ] precedence.errno_is_third
[ OK ] precedence.errno_is_third
[ RUN ] precedence.errno_is_third_in_any_order
[ OK ] precedence.errno_is_third_in_any_order
[ RUN ] precedence.trace_is_fourth
[ OK ] precedence.trace_is_fourth
[ RUN ] precedence.trace_is_fourth_in_any_order
[ OK ] precedence.trace_is_fourth_in_any_order
[ RUN ] precedence.log_is_fifth
[ OK ] precedence.log_is_fifth
[ RUN ] precedence.log_is_fifth_in_any_order
[ OK ] precedence.log_is_fifth_in_any_order
[ RUN ] TRACE_poke.read_has_side_effects
[ OK ] TRACE_poke.read_has_side_effects
[ RUN ] TRACE_poke.getpid_runs_normally
[ OK ] TRACE_poke.getpid_runs_normally
[ RUN ] TRACE_syscall.ptrace_syscall_redirected
[ OK ] TRACE_syscall.ptrace_syscall_redirected
[ RUN ] TRACE_syscall.ptrace_syscall_dropped
[ OK ] TRACE_syscall.ptrace_syscall_dropped
[ RUN ] TRACE_syscall.syscall_allowed
[ OK ] TRACE_syscall.syscall_allowed
[ RUN ] TRACE_syscall.syscall_redirected
[ OK ] TRACE_syscall.syscall_redirected
[ RUN ] TRACE_syscall.syscall_dropped
[ OK ] TRACE_syscall.syscall_dropped
[ RUN ] TRACE_syscall.skip_after_RET_TRACE
[ OK ] TRACE_syscall.skip_after_RET_TRACE
[ RUN ] TRACE_syscall.kill_after_RET_TRACE
[ OK ] TRACE_syscall.kill_after_RET_TRACE
[ RUN ] TRACE_syscall.skip_after_ptrace
[ OK ] TRACE_syscall.skip_after_ptrace
[ RUN ] TRACE_syscall.kill_after_ptrace
[ OK ] TRACE_syscall.kill_after_ptrace
[ RUN ] global.seccomp_syscall
[ OK ] global.seccomp_syscall
[ RUN ] global.seccomp_syscall_mode_lock
[ OK ] global.seccomp_syscall_mode_lock
[ RUN ] global.detect_seccomp_filter_flags
[ OK ] global.detect_seccomp_filter_flags
[ RUN ] global.TSYNC_first
[ OK ] global.TSYNC_first
[ RUN ] TSYNC.siblings_fail_prctl
[ OK ] TSYNC.siblings_fail_prctl
[ RUN ] TSYNC.two_siblings_with_ancestor
[ OK ] TSYNC.two_siblings_with_ancestor
[ RUN ] TSYNC.two_sibling_want_nnp
[ OK ] TSYNC.two_sibling_want_nnp
[ RUN ] TSYNC.two_siblings_with_no_filter
[ OK ] TSYNC.two_siblings_with_no_filter
[ RUN ] TSYNC.two_siblings_with_one_divergence
[ OK ] TSYNC.two_siblings_with_one_divergence
[ RUN ] TSYNC.two_siblings_not_under_filter
[ OK ] TSYNC.two_siblings_not_under_filter
[ RUN ] global.syscall_restart
[ OK ] global.syscall_restart
[ RUN ] global.filter_flag_log
[ OK ] global.filter_flag_log
[ RUN ] global.get_action_avail
[ OK ] global.get_action_avail
[ RUN ] global.get_metadata
[ OK ] global.get_metadata
[ RUN ] global.user_notification_basic
[ OK ] global.user_notification_basic
[ RUN ] global.user_notification_kill_in_middle
[ OK ] global.user_notification_kill_in_middle
[ RUN ] global.user_notification_signal
[ OK ] global.user_notification_signal
[ RUN ] global.user_notification_closed_listener
[ OK ] global.user_notification_closed_listener
[ RUN ] global.user_notification_child_pid_ns
[ OK ] global.user_notification_child_pid_ns
[ RUN ] global.user_notification_sibling_pid_ns
[ OK ] global.user_notification_sibling_pid_ns
[ RUN ] global.user_notification_fault_recv
[ OK ] global.user_notification_fault_recv
[ RUN ] global.seccomp_get_notif_sizes
[ OK ] global.seccomp_get_notif_sizes
[==========] 72 / 72 tests passed.
[ PASSED ]
ok 1..1 selftests: seccomp: seccomp_bpf [PASS]
selftests: seccomp: seccomp_benchmark
========================================
Calibrating reasonable sample size...
1550502579.086986297 - 1550502579.086967332 = 18965
1550502579.087037020 - 1550502579.087001658 = 35362
1550502579.087122463 - 1550502579.087038994 = 83469
1550502579.087292562 - 1550502579.087124550 = 168012
1550502579.087580922 - 1550502579.087295074 = 285848
1550502579.088167774 - 1550502579.087583841 = 583933
1550502579.090848755 - 1550502579.088170771 = 2677984
1550502579.093857351 - 1550502579.090852608 = 3004743
1550502579.099778056 - 1550502579.093861714 = 5916342
1550502579.113724187 - 1550502579.099784109 = 13940078
1550502579.139023309 - 1550502579.113731275 = 25292034
1550502579.186753920 - 1550502579.139031294 = 47722626
1550502579.332515361 - 1550502579.186758869 = 145756492
1550502579.625585123 - 1550502579.332520789 = 293064334
1550502580.143342433 - 1550502579.625590844 = 517751589
1550502580.871555419 - 1550502580.143350111 = 728205308
1550502582.387082205 - 1550502580.871560277 = 1515521928
1550502585.267233082 - 1550502582.387089749 = 2880143333
1550502591.259264196 - 1550502585.267241613 = 5992022583
1550502602.690666042 - 1550502591.259271730 = 11431394312
Benchmarking 16777216 samples...
29.038142463 - 19.381334957 = 9656807506
getpid native: 575 ns
42.588324538 - 29.038240824 = 13550083714
getpid RET_ALLOW: 807 ns
Estimated seccomp overhead per syscall: 232 ns
ok 1..2 selftests: seccomp: seccomp_benchmark [PASS]
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/seccomp'
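For reference, the seccomp_benchmark output above is simple arithmetic: the calibration loop roughly doubles the sample count until a run takes long enough, and the final overhead figure is the difference of the two per-call averages over 16777216 samples. Below is a minimal sketch that reproduces the logged numbers; all constants are copied from the log, and this is not the selftest source.

/* Reproduces the seccomp_benchmark arithmetic from the log above.
 * All constants are copied from the logged output. */
#include <stdio.h>

int main(void)
{
	unsigned long long samples  = 16777216ULL;
	unsigned long long native   = 9656807506ULL;	/* 29.038142463 - 19.381334957 s, in ns */
	unsigned long long filtered = 13550083714ULL;	/* 42.588324538 - 29.038240824 s, in ns */

	unsigned long long per_native   = native / samples;	/* 575 ns */
	unsigned long long per_filtered = filtered / samples;	/* 807 ns */

	printf("getpid native: %llu ns\n", per_native);
	printf("getpid RET_ALLOW: %llu ns\n", per_filtered);
	printf("Estimated seccomp overhead per syscall: %llu ns\n",
	       per_filtered - per_native);	/* 232 ns */
	return 0;
}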
2019-02-18 23:10:42 make run_tests -C sigaltstack
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/sigaltstack'
gcc -Wall sas.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/sigaltstack/sas
TAP version 13
selftests: sigaltstack: sas
========================================
ok 1 Initial sigaltstack state was SS_DISABLE
# [RUN] signal USR1
ok 2 sigaltstack is disabled in sighandler
# [RUN] switched to user ctx
# [RUN] signal USR2
# [OK] Stack preserved
ok 3 sigaltstack is still SS_AUTODISARM after signal
Pass 3 Fail 0 Xfail 0 Xpass 0 Skip 0 Error 0
1..3
ok 1..1 selftests: sigaltstack: sas [PASS]
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/sigaltstack'
2019-02-18 23:10:43 make run_tests -C size
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/size'
gcc -static -ffreestanding -nostartfiles -s get_size.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/size/get_size
TAP version 13
selftests: size: get_size
========================================
TAP version 13
# Testing system size.
ok 1 get runtime memory use
# System runtime memory report (units in Kilobytes):
---
Total: 4033160
Free: 1906412
Buffer: 0
In use: 2126748
...
1..1
ok 1..1 selftests: size: get_size [PASS]
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/size'
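As a sanity check, the get_size report above is internally consistent: Free + Buffer + In use equals Total exactly (1906412 + 0 + 2126748 = 4033160 kB). A trivial check of that identity, with the values copied from the log:

/* Checks the identity Total = Free + Buffer + "In use" from the
 * get_size report above; values are copied from the log. */
#include <stdio.h>

int main(void)
{
	long total = 4033160, free_kb = 1906412, buffer = 0, in_use = 2126748;

	printf("%s\n", free_kb + buffer + in_use == total ?
	       "report consistent" : "report inconsistent");
	return 0;
}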
2019-02-18 23:10:43 make run_tests -C sparc64
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/sparc64'
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/sparc64'
2019-02-18 23:10:43 make run_tests -C splice
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/splice'
gcc default_file_splice_read.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/splice/default_file_splice_read
TAP version 13
selftests: splice: default_file_splice_read.sh
========================================
ok 1..1 selftests: splice: default_file_splice_read.sh [PASS]
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/splice'
2019-02-18 23:10:44 make run_tests -C static_keys
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/static_keys'
TAP version 13
selftests: static_keys: test_static_keys.sh
========================================
static_key: ok
ok 1..1 selftests: static_keys: test_static_keys.sh [PASS]
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/static_keys'
2019-02-18 23:10:44 make run_tests -C sync
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/sync'
gcc -c sync_alloc.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/sync/sync_alloc.o
gcc -c sync_fence.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/sync/sync_fence.o
gcc -c sync_merge.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/sync/sync_merge.o
gcc -c sync_wait.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/sync/sync_wait.o
gcc -c sync_stress_parallelism.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/sync/sync_stress_parallelism.o
gcc -c sync_stress_consumer.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/sync/sync_stress_consumer.o
gcc -c sync_stress_merge.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/sync/sync_stress_merge.o
gcc -c sync_test.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/sync/sync_test.o -O2 -g -std=gnu89 -pthread -Wall -Wextra -I../../../../usr/include/
gcc -c sync.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/sync/sync.o -O2 -g -std=gnu89 -pthread -Wall -Wextra -I../../../../usr/include/
gcc -o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/sync/sync_test /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/sync/sync_test.o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/sync/sync.o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/sync/sync_alloc.o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/sync/sync_fence.o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/sync/sync_merge.o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/sync/sync_wait.o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/sync/sync_stress_parallelism.o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/sync/sync_stress_consumer.o /usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/sync/sync_stress_merge.o -O2 -g -std=gnu89 -pthread -Wall -Wextra -I../../../../usr/include/ -pthread
TAP version 13
selftests: sync: sync_test
========================================
# [RUN] Testing sync framework
ok 1 [RUN] test_alloc_timeline
ok 2 [RUN] test_alloc_fence
ok 3 [RUN] test_alloc_fence_negative
ok 4 [RUN] test_fence_one_timeline_wait
ok 5 [RUN] test_fence_one_timeline_merge
ok 6 [RUN] test_fence_merge_same_fence
ok 7 [RUN] test_fence_multi_timeline_wait
ok 8 [RUN] test_stress_two_threads_shared_timeline
ok 9 [RUN] test_consumer_stress_multi_producer_single_consumer
ok 10 [RUN] test_merge_stress_random_merge
Pass 10 Fail 0 Xfail 0 Xpass 0 Skip 0 Error 0
1..10
ok 1..1 selftests: sync: sync_test [PASS]
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/sync'
2019-02-18 23:10:50 make run_tests -C sysctl
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/sysctl'
TAP version 13
selftests: sysctl: sysctl.sh
========================================
Checking production write strict setting ... ok
Mon Feb 18 23:10:50 CST 2019
Running test: sysctl_test_0001 - run #0
== Testing sysctl behavior against /proc/sys/debug/test_sysctl/int_0001 ==
Writing test file ... ok
Checking sysctl is not set to test value ... ok
Writing sysctl from shell ... ok
Resetting sysctl to original value ... ok
Writing entire sysctl in single write ... ok
Writing middle of sysctl after synchronized seek ... ok
Writing beyond end of sysctl ... ok
Writing sysctl with multiple long writes ... ok
Checking ignoring spaces up to PAGE_SIZE works on write ...ok
Checking passing PAGE_SIZE of spaces fails on write ...ok
Mon Feb 18 23:10:50 CST 2019
Running test: sysctl_test_0002 - run #0
== Testing sysctl behavior against /proc/sys/debug/test_sysctl/string_0001 ==
Writing test file ... ok
Checking sysctl is not set to test value ... ok
Writing sysctl from shell ... ok
Resetting sysctl to original value ... ok
Writing entire sysctl in single write ... ok
Writing middle of sysctl after synchronized seek ... ok
Writing beyond end of sysctl ... ok
Writing sysctl with multiple long writes ... ok
Writing entire sysctl in short writes ... ok
Writing middle of sysctl after unsynchronized seek ... ok
Checking sysctl maxlen is at least 65 ... ok
Checking sysctl keeps original string on overflow append ... ok
Checking sysctl stays NULL terminated on write ... ok
Checking sysctl stays NULL terminated on overwrite ... ok
Mon Feb 18 23:10:50 CST 2019
Running test: sysctl_test_0003 - run #0
== Testing sysctl behavior against /proc/sys/debug/test_sysctl/int_0002 ==
Writing test file ... ok
Checking sysctl is not set to test value ... ok
Writing sysctl from shell ... ok
Resetting sysctl to original value ... ok
Writing entire sysctl in single write ... ok
Writing middle of sysctl after synchronized seek ... ok
Writing beyond end of sysctl ... ok
Writing sysctl with multiple long writes ... ok
Checking ignoring spaces up to PAGE_SIZE works on write ...ok
Checking passing PAGE_SIZE of spaces fails on write ...ok
Testing INT_MAX works ...ok
Testing INT_MAX + 1 will fail as expected...ok
Testing negative values will work as expected...ok
Mon Feb 18 23:10:51 CST 2019
Running test: sysctl_test_0004 - run #0
== Testing sysctl behavior against /proc/sys/debug/test_sysctl/uint_0001 ==
Writing test file ... ok
Checking sysctl is not set to test value ... ok
Writing sysctl from shell ... ok
Resetting sysctl to original value ... ok
Writing entire sysctl in single write ... ok
Writing middle of sysctl after synchronized seek ... ok
Writing beyond end of sysctl ... ok
Writing sysctl with multiple long writes ... ok
Checking ignoring spaces up to PAGE_SIZE works on write ...ok
Checking passing PAGE_SIZE of spaces fails on write ...ok
Testing UINT_MAX works ...ok
Testing UINT_MAX + 1 will fail as expected...ok
Testing negative values will not work as expected ...ok
Mon Feb 18 23:10:51 CST 2019
Running test: sysctl_test_0005 - run #0
Testing array works as expected ... ok
Testing skipping trailing array elements works ... ok
Testing PAGE_SIZE limit on array works ... ok
Testing exceeding PAGE_SIZE limit fails as expected ... Files - and /proc/sys/debug/test_sysctl/int_0003 differ
ok
Mon Feb 18 23:10:51 CST 2019
Running test: sysctl_test_0005 - run #1
Testing array works as expected ... ok
Testing skipping trailing array elements works ... ok
Testing PAGE_SIZE limit on array works ... ok
Testing exceeding PAGE_SIZE limit fails as expected ... Files - and /proc/sys/debug/test_sysctl/int_0003 differ
ok
Mon Feb 18 23:10:51 CST 2019
Running test: sysctl_test_0005 - run #2
Testing array works as expected ... ok
Testing skipping trailing array elements works ... ok
Testing PAGE_SIZE limit on array works ... ok
Testing exceeding PAGE_SIZE limit fails as expected ... Files - and /proc/sys/debug/test_sysctl/int_0003 differ
ok
ok 1..1 selftests: sysctl: sysctl.sh [PASS]
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-3f02daf3406e77d938c20ebd37c2ca74e3779a85/tools/testing/selftests/sysctl'
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
Rong Chen
[page cache] eb797a8ee0: vm-scalability.throughput -16.5% regression
by kernel test robot
Greetings,
FYI, we noticed a -16.5% regression of vm-scalability.throughput due to commit:
commit: eb797a8ee0ab4cd03df556980ce7bf167cadaa50 ("page cache: Rearrange address_space")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: vm-scalability
on test machine: 80 threads Skylake with 64G memory
with the following parameters:
runtime: 300s
test: small-allocs
cpufreq_governor: performance
test-description: The motivation behind this suite is to exercise functions and regions of the mm/ subsystem of the Linux kernel which are of interest to us.
test-url: https://git.kernel.org/cgit/linux/kernel/git/wfg/vm-scalability.git/
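For context, the bisected commit only moves fields around inside struct address_space; no logic changes. What a reorder like that can change is which hot fields end up sharing a 64-byte cache line, and that alone can move throughput in either direction depending on the access pattern. A minimal userspace sketch of the effect — with made-up field names (as_v1/as_v2, nrpages/i_pages stand-ins), not the real struct layout, and assuming 64-byte lines:

#include <stdio.h>
#include <stddef.h>

/* Two toy orderings of the same fields; only the layout differs. */
struct as_v1 {
	void *host;		/* offset 0 */
	unsigned long nrpages;	/* offset 8: shares line 0 with i_pages */
	void *i_pages;		/* offset 16 */
	long pad[6];		/* filler standing in for other fields */
	unsigned long flags;
};

struct as_v2 {
	void *host;		/* offset 0 */
	void *i_pages;		/* offset 8 */
	long pad[6];
	unsigned long nrpages;	/* offset 64: now on its own cache line */
	unsigned long flags;
};

static void dump(const char *name, size_t off)
{
	printf("%-12s offset %3zu -> cache line %zu\n", name, off, off / 64);
}

int main(void)
{
	dump("v1.i_pages", offsetof(struct as_v1, i_pages));
	dump("v1.nrpages", offsetof(struct as_v1, nrpages));
	dump("v2.i_pages", offsetof(struct as_v2, i_pages));
	dump("v2.nrpages", offsetof(struct as_v2, nrpages));
	return 0;
}

On a 64-bit build, v1 keeps both fields on cache line 0 while v2 pushes nrpages onto line 1; pahole(1) can show the corresponding before/after layout of the real struct address_space.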
In addition to that, the commit also has a significant impact on the following tests:
+------------------+----------------------------------------------------------------------+
| testcase: change | unixbench: unixbench.score 20.9% improvement |
| test machine | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 64G memory |
| test parameters | cpufreq_governor=performance |
| | nr_task=100% |
| | runtime=300s |
| | test=shell8 |
| | ucode=0xb00002e |
+------------------+----------------------------------------------------------------------+
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/tbox_group/test/testcase:
gcc-7/performance/x86_64-rhel-7.2/debian-x86_64-2018-04-03.cgz/300s/lkp-skl-2sp2/small-allocs/vm-scalability
commit:
f32f004cdd ("ida: Convert to XArray")
eb797a8ee0 ("page cache: Rearrange address_space")
f32f004cddf86d63 eb797a8ee0ab4cd03df556980c
---------------- --------------------------
fail:runs %reproduction fail:runs
| | |
3:4 -13% 3:4 perf-profile.calltrace.cycles-pp.error_entry
3:4 -12% 3:4 perf-profile.children.cycles-pp.error_entry
1:4 -6% 1:4 perf-profile.self.cycles-pp.error_entry
%stddev %change %stddev
\ | \
235891 -16.5% 197085 vm-scalability.median
18881481 -16.5% 15769081 vm-scalability.throughput
316.19 +14.4% 361.58 vm-scalability.time.elapsed_time
316.19 +14.4% 361.58 vm-scalability.time.elapsed_time.max
22924 +15.9% 26576 vm-scalability.time.system_time
3254041 ± 9% +36.4% 4437311 ± 3% vm-scalability.time.voluntary_context_switches
277831 ± 3% +9.5% 304359 interrupts.CAL:Function_call_interrupts
102367 ± 2% +10.1% 112694 ± 2% meminfo.Shmem
6.67 ± 5% -0.9 5.76 ± 2% mpstat.cpu.usr%
0.49 -5.0% 0.46 pmeter.Average_Active_Power
38678749 -12.1% 34005251 pmeter.performance_per_watt
2621420 ± 38% +59.2% 4173292 ± 3% turbostat.C1
62964314 ± 10% +18.7% 74735975 turbostat.IRQ
20821 ± 10% +20.6% 25103 ± 2% vmstat.system.cs
192700 ± 8% +5.9% 204006 vmstat.system.in
76742 +3.7% 79550 proc-vmstat.nr_active_anon
25578 ± 2% +10.1% 28154 ± 2% proc-vmstat.nr_shmem
76742 +3.7% 79550 proc-vmstat.nr_zone_active_anon
34211075 ± 14% +28.7% 44023085 ± 3% cpuidle.C1.time
2628057 ± 37% +59.1% 4179955 ± 3% cpuidle.C1.usage
200488 ± 20% +75.5% 351836 ± 13% cpuidle.POLL.time
57706 ± 49% +93.1% 111419 ± 18% cpuidle.POLL.usage
2.08 ± 14% +20.7% 2.51 ± 4% sched_debug.cfs_rq:/.nr_spread_over.avg
50.05 ± 36% -38.0% 31.04 ± 24% sched_debug.cpu.cpu_load[1].max
0.25 ± 14% +17.4% 0.29 ± 7% sched_debug.cpu.nr_running.stddev
42905 ± 12% +26.2% 54143 ± 3% sched_debug.cpu.nr_switches.avg
19996 ± 17% +43.0% 28586 ± 7% sched_debug.cpu.nr_switches.min
43774 ± 12% +26.5% 55370 ± 3% sched_debug.cpu.sched_count.avg
20285 ± 17% +43.1% 29024 ± 8% sched_debug.cpu.sched_count.min
19949 ± 12% +28.0% 25530 ± 3% sched_debug.cpu.sched_goidle.avg
9260 ± 16% +44.9% 13422 ± 8% sched_debug.cpu.sched_goidle.min
23108 ± 12% +26.0% 29110 ± 3% sched_debug.cpu.ttwu_count.avg
25905 ± 11% +25.8% 32593 ± 2% sched_debug.cpu.ttwu_count.max
21683 ± 12% +26.0% 27323 ± 3% sched_debug.cpu.ttwu_count.min
73.74 -3.6 70.12 perf-stat.cache-miss-rate%
2.83e+10 ± 2% +5.5% 2.985e+10 perf-stat.cache-misses
3.838e+10 +10.9% 4.257e+10 perf-stat.cache-references
6787110 ± 9% +35.5% 9197959 ± 3% perf-stat.context-switches
3.17 +11.1% 3.52 perf-stat.cpi
7.609e+13 +14.5% 8.715e+13 perf-stat.cpu-cycles
21380 +11.1% 23753 perf-stat.cpu-migrations
0.08 -0.0 0.07 perf-stat.dTLB-load-miss-rate%
6.634e+12 +3.8% 6.888e+12 perf-stat.dTLB-loads
0.13 -0.0 0.12 perf-stat.dTLB-store-miss-rate%
1.135e+09 -5.6% 1.071e+09 perf-stat.dTLB-store-misses
2.184e+09 -2.3% 2.135e+09 perf-stat.iTLB-load-misses
2.404e+13 +3.1% 2.479e+13 perf-stat.instructions
11006 +5.5% 11614 perf-stat.instructions-per-iTLB-miss
0.32 -10.0% 0.28 perf-stat.ipc
2.898e+09 +15.2% 3.337e+09 ± 2% perf-stat.node-load-misses
7.235e+08 +19.1% 8.615e+08 ± 4% perf-stat.node-store-misses
4975 +5.3% 5236 perf-stat.path-length
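(For reference, the derived perf-stat rows are plain ratios of the raw counters above:
 ipc = instructions / cpu-cycles = 2.404e+13 / 7.609e+13 ≈ 0.32 -> 2.479e+13 / 8.715e+13 ≈ 0.28
 cache-miss-rate% = cache-misses / cache-references = 2.83e+10 / 3.838e+10 ≈ 73.7% -> 2.985e+10 / 4.257e+10 ≈ 70.1%
 i.e. cycles grew 14.5% while instructions grew only 3.1%, consistent with the 16.5% throughput drop.)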
1.10 -0.1 0.97 ± 2% perf-profile.calltrace.cycles-pp.rwsem_spin_on_owner.rwsem_down_write_failed.call_rwsem_down_write_failed.down_write.vma_link
0.68 ± 2% -0.1 0.59 perf-profile.calltrace.cycles-pp.vma_interval_tree_insert.vma_link.mmap_region.do_mmap.vm_mmap_pgoff
0.59 ± 3% -0.0 0.55 ± 2% perf-profile.calltrace.cycles-pp.swapgs_restore_regs_and_return_to_usermode
1.04 ± 5% +0.1 1.19 ± 3% perf-profile.calltrace.cycles-pp.task_numa_work.task_work_run.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.04 ± 5% +0.1 1.19 ± 3% perf-profile.calltrace.cycles-pp.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.04 ± 5% +0.1 1.19 ± 3% perf-profile.calltrace.cycles-pp.task_work_run.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.64 ± 2% +0.4 2.00 perf-profile.calltrace.cycles-pp.page_fault
1.57 ± 2% +0.4 1.93 perf-profile.calltrace.cycles-pp.__do_page_fault.do_page_fault.page_fault
1.59 ± 2% +0.4 1.96 perf-profile.calltrace.cycles-pp.do_page_fault.page_fault
0.65 +0.5 1.19 ± 3% perf-profile.calltrace.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
0.00 +1.0 1.03 ± 4% perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
1.10 -0.1 0.97 ± 2% perf-profile.children.cycles-pp.rwsem_spin_on_owner
0.68 ± 2% -0.1 0.59 perf-profile.children.cycles-pp.vma_interval_tree_insert
0.62 ± 2% -0.1 0.55 ± 2% perf-profile.children.cycles-pp.native_irq_return_iret
0.47 -0.1 0.41 perf-profile.children.cycles-pp.sync_regs
0.32 ± 4% -0.1 0.27 ± 3% perf-profile.children.cycles-pp.get_unmapped_area
0.26 ± 3% -0.1 0.21 ± 3% perf-profile.children.cycles-pp.__perf_sw_event
0.21 ± 4% -0.0 0.17 ± 5% perf-profile.children.cycles-pp.find_vma
0.27 ± 3% -0.0 0.22 ± 3% perf-profile.children.cycles-pp.unmapped_area_topdown
0.11 ± 42% -0.0 0.07 ± 7% perf-profile.children.cycles-pp.syscall_return_via_sysret
0.29 ± 2% -0.0 0.24 ± 3% perf-profile.children.cycles-pp.arch_get_unmapped_area_topdown
0.19 ± 4% -0.0 0.15 ± 2% perf-profile.children.cycles-pp.___perf_sw_event
0.13 ± 3% -0.0 0.10 ± 4% perf-profile.children.cycles-pp.vma_policy_mof
0.59 ± 3% -0.0 0.56 ± 2% perf-profile.children.cycles-pp.swapgs_restore_regs_and_return_to_usermode
0.16 ± 2% -0.0 0.13 perf-profile.children.cycles-pp.__rb_insert_augmented
0.19 ± 2% -0.0 0.17 ± 4% perf-profile.children.cycles-pp.perf_event_mmap
0.08 ± 10% -0.0 0.06 ± 6% perf-profile.children.cycles-pp.vmacache_find
0.08 ± 10% -0.0 0.06 ± 6% perf-profile.children.cycles-pp.__fget
0.08 -0.0 0.07 ± 6% perf-profile.children.cycles-pp.d_path
0.09 ± 4% +0.0 0.13 ± 3% perf-profile.children.cycles-pp.__vma_link_rb
0.01 ±173% +0.1 0.12 ± 4% perf-profile.children.cycles-pp.osq_unlock
1.19 ± 3% +0.2 1.36 ± 2% perf-profile.children.cycles-pp.exit_to_usermode_loop
1.19 ± 3% +0.2 1.36 ± 2% perf-profile.children.cycles-pp.task_work_run
1.19 ± 3% +0.2 1.36 ± 2% perf-profile.children.cycles-pp.task_numa_work
1.66 ± 2% +0.4 2.01 perf-profile.children.cycles-pp.page_fault
1.59 ± 2% +0.4 1.96 perf-profile.children.cycles-pp.do_page_fault
1.58 ± 2% +0.4 1.95 perf-profile.children.cycles-pp.__do_page_fault
0.66 +0.5 1.20 ± 4% perf-profile.children.cycles-pp.handle_mm_fault
0.47 +0.6 1.04 ± 4% perf-profile.children.cycles-pp.__handle_mm_fault
1.09 -0.1 0.96 ± 2% perf-profile.self.cycles-pp.rwsem_spin_on_owner
0.67 ± 2% -0.1 0.59 ± 2% perf-profile.self.cycles-pp.vma_interval_tree_insert
0.62 ± 2% -0.1 0.55 ± 3% perf-profile.self.cycles-pp.native_irq_return_iret
0.46 -0.1 0.40 perf-profile.self.cycles-pp.sync_regs
0.43 -0.1 0.37 ± 2% perf-profile.self.cycles-pp.swapgs_restore_regs_and_return_to_usermode
0.30 ± 5% -0.1 0.25 ± 3% perf-profile.self.cycles-pp.__do_page_fault
0.11 ± 42% -0.0 0.07 ± 7% perf-profile.self.cycles-pp.syscall_return_via_sysret
0.27 ± 3% -0.0 0.22 ± 3% perf-profile.self.cycles-pp.unmapped_area_topdown
0.17 ± 3% -0.0 0.13 ± 3% perf-profile.self.cycles-pp.___perf_sw_event
0.14 ± 8% -0.0 0.11 perf-profile.self.cycles-pp.mmap_region
0.18 ± 2% -0.0 0.15 ± 2% perf-profile.self.cycles-pp.handle_mm_fault
0.15 -0.0 0.12 ± 4% perf-profile.self.cycles-pp.__rb_insert_augmented
0.12 ± 4% -0.0 0.10 ± 7% perf-profile.self.cycles-pp.find_vma
0.08 ± 5% -0.0 0.06 perf-profile.self.cycles-pp.vma_policy_mof
0.08 ± 8% -0.0 0.06 ± 6% perf-profile.self.cycles-pp.vmacache_find
0.08 ± 10% -0.0 0.06 ± 6% perf-profile.self.cycles-pp.__fget
0.07 ± 6% -0.0 0.05 ± 8% perf-profile.self.cycles-pp.__perf_sw_event
0.06 -0.0 0.05 perf-profile.self.cycles-pp.d_path
0.08 ± 5% +0.0 0.10 ± 7% perf-profile.self.cycles-pp.down_write
0.11 ± 4% +0.0 0.16 ± 2% perf-profile.self.cycles-pp.up_write
0.00 +0.1 0.08 ± 5% perf-profile.self.cycles-pp.__vma_link_rb
0.18 ± 9% +0.1 0.28 ± 6% perf-profile.self.cycles-pp.rwsem_down_write_failed
0.01 ±173% +0.1 0.12 ± 4% perf-profile.self.cycles-pp.osq_unlock
1.05 ± 4% +0.2 1.26 ± 2% perf-profile.self.cycles-pp.task_numa_work
0.35 ± 2% +0.6 0.95 ± 5% perf-profile.self.cycles-pp.__handle_mm_fault
vm-scalability.time.system_time
30000 +-+-----------------------------------------------------------------+
| O O |
25000 O-O O O O O O O O O O O O O O O O O O O O O |
|.+..+.+..+.+..+.+.+..+.+..+.+..+.+ .+..+.+.+..+.+..+.+..+.|
| : + + |
20000 +-+ : : : |
| : :: : |
15000 +-+ : : : : |
| : : : : |
10000 +-+ : : : : |
| : : : : |
| : : : : |
5000 +-+ :: :: |
| : : |
0 +-+-----------------------------------------------------------------+
vm-scalability.time.elapsed_time
400 +-+-------------------------------------------------------------------+
O O O O O O O O O O O O O O O O O O O O O O O O |
350 +-+ |
300 +-++.+..+.+..+.+..+.+..+.+..+.+..+.+ + +..+.+..+.+..+.+..+.+..+.|
| : : : |
250 +-+ : :: : |
| : : : : |
200 +-+ : : : : |
| : : : : |
150 +-+ : : : : |
100 +-+ : : : : |
| : : : : |
50 +-+ :: :: |
| : : |
0 +-+-------------------------------------------------------------------+
vm-scalability.time.elapsed_time.max
400 +-+-------------------------------------------------------------------+
O O O O O O O O O O O O O O O O O O O O O O O O |
350 +-+ |
300 +-++.+..+.+..+.+..+.+..+.+..+.+..+.+ + +..+.+..+.+..+.+..+.+..+.|
| : : : |
250 +-+ : :: : |
| : : : : |
200 +-+ : : : : |
| : : : : |
150 +-+ : : : : |
100 +-+ : : : : |
| : : : : |
50 +-+ :: :: |
| : : |
0 +-+-------------------------------------------------------------------+
vm-scalability.throughput
2e+07 +-+---------------------------------------------------------------+
1.8e+07 +-+..+.+.+..+.+..+.+.+..+.+.+..+.+ + +..+.+..+.+.+..+.+.+..+.|
| : : : |
1.6e+07 O-O O O O O O O O O O O O O O O O O O O O O O O |
1.4e+07 +-+ : :: : |
| : : : : |
1.2e+07 +-+ : : : : |
1e+07 +-+ : : : : |
8e+06 +-+ : : : : |
| : : : : |
6e+06 +-+ : : : : |
4e+06 +-+ :: : |
| : : |
2e+06 +-+ : : |
0 +-+---------------------------------------------------------------+
vm-scalability.median
250000 +-+----------------------------------------------------------------+
|.+..+.+..+.+.+..+.+..+.+.+..+ + + +.+..+.+..+.+.+..+.+..+.|
| : : : |
200000 O-O O O O O O O O O O O O O O O O O O O O O O O |
| : :: : |
| : : : : |
150000 +-+ : : : : |
| : : : : |
100000 +-+ : : : : |
| : : : : |
| : : : : |
50000 +-+ : :: |
| : : |
| : : |
0 +-+----------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
***************************************************************************************************
lkp-bdw-ep3b: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 64G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_task/rootfs/runtime/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.2/100%/debian-x86_64-2018-04-03.cgz/300s/lkp-bdw-ep3b/shell8/unixbench/0xb00002e
commit:
f32f004cdd ("ida: Convert to XArray")
eb797a8ee0 ("page cache: Rearrange address_space")
f32f004cddf86d63 eb797a8ee0ab4cd03df556980c
---------------- --------------------------
fail:runs %reproduction fail:runs
| | |
:4 25% 1:4 dmesg.WARNING:at_ip___perf_sw_event/0x
:4 25% 1:4 dmesg.WARNING:at_ip__slab_free/0x
:4 25% 1:4 dmesg.WARNING:at_ip_filename_lookup/0x
1:4 -25% :4 dmesg.WARNING:at_ip_fsnotify/0x
1:4 -25% :4 dmesg.WARNING:at_ip_perf_event_mmap_output/0x
1:4 9% 2:4 perf-profile.children.cycles-pp.error_entry
1:4 7% 1:4 perf-profile.self.cycles-pp.error_entry
%stddev %change %stddev
\ | \
39940 +20.9% 48288 unixbench.score
641.38 +1.0% 647.51 unixbench.time.elapsed_time
641.38 +1.0% 647.51 unixbench.time.elapsed_time.max
24220392 +29.1% 31263367 ± 2% unixbench.time.involuntary_context_switches
1.836e+09 +20.8% 2.217e+09 unixbench.time.minor_page_faults
7052 +2.1% 7200 unixbench.time.percent_of_cpu_this_job_got
36337 -2.7% 35347 unixbench.time.system_time
8893 +26.8% 11279 ± 2% unixbench.time.user_time
85632909 +4.3% 89281018 unixbench.time.voluntary_context_switches
15301176 +21.7% 18622462 unixbench.workload
42884872 +19.0% 51033794 softirqs.RCU
43387938 ± 6% -12.9% 37797790 ± 2% cpuidle.C1.usage
1155156 ± 4% -24.0% 878414 ± 4% cpuidle.POLL.usage
152008 ± 9% +15.2% 175048 ± 7% meminfo.DirectMap4k
255196 +13.6% 289806 meminfo.Shmem
100.25 +26.7% 127.00 ± 2% vmstat.procs.r
337268 +3.6% 349315 vmstat.system.cs
19.30 -1.7 17.59 mpstat.cpu.idle%
0.01 ± 30% +0.0 0.02 ± 35% mpstat.cpu.iowait%
15.54 +3.6 19.11 mpstat.cpu.usr%
7.086e+08 +20.8% 8.56e+08 ± 2% numa-numastat.node0.local_node
7.086e+08 +20.8% 8.561e+08 ± 2% numa-numastat.node0.numa_hit
7.053e+08 +21.3% 8.558e+08 numa-numastat.node1.local_node
7.053e+08 +21.3% 8.558e+08 numa-numastat.node1.numa_hit
2274 +2.1% 2322 turbostat.Avg_MHz
43383925 ± 6% -12.9% 37793914 ± 2% turbostat.C1
14.41 ± 2% -17.4% 11.90 ± 7% turbostat.CPU%c1
243.24 +2.2% 248.48 turbostat.PkgWatt
14.16 +8.1% 15.30 turbostat.RAMWatt
6807 ± 4% +48.1% 10084 ± 5% slabinfo.files_cache.active_objs
6807 ± 4% +48.2% 10090 ± 5% slabinfo.files_cache.num_objs
3616 +23.8% 4475 ± 5% slabinfo.kmalloc-256.active_objs
3616 +23.8% 4475 ± 5% slabinfo.kmalloc-256.num_objs
17134 ± 4% +13.8% 19493 ± 2% slabinfo.proc_inode_cache.active_objs
17134 ± 4% +14.0% 19524 ± 2% slabinfo.proc_inode_cache.num_objs
3393 ± 6% +17.7% 3994 ± 3% slabinfo.sock_inode_cache.active_objs
3393 ± 6% +17.7% 3994 ± 3% slabinfo.sock_inode_cache.num_objs
1241 ± 8% +15.0% 1427 ± 6% slabinfo.task_group.active_objs
1241 ± 8% +15.0% 1427 ± 6% slabinfo.task_group.num_objs
713963 ± 14% -20.9% 564405 ± 4% numa-meminfo.node0.FilePages
20539 ± 9% -19.7% 16496 ± 15% numa-meminfo.node0.Mapped
1281514 ± 9% -10.6% 1145886 ± 4% numa-meminfo.node0.MemUsed
683.75 ± 11% +54.1% 1053 ± 12% numa-meminfo.node0.Mlocked
48239 ± 4% -19.8% 38686 ± 17% numa-meminfo.node0.SReclaimable
669804 ± 15% +27.8% 856173 ± 3% numa-meminfo.node1.FilePages
4719 ± 56% +227.5% 15456 ± 43% numa-meminfo.node1.Inactive
4644 ± 57% +232.0% 15419 ± 43% numa-meminfo.node1.Inactive(anon)
14609 ± 12% +47.2% 21499 ± 11% numa-meminfo.node1.Mapped
40126 ± 6% +29.6% 52000 ± 12% numa-meminfo.node1.SReclaimable
119605 ± 96% +131.3% 276593 ± 3% numa-meminfo.node1.Shmem
130207 +18.0% 153583 ± 5% numa-meminfo.node1.Slab
547652 ± 4% +5.4% 577200 ± 4% numa-meminfo.node1.Unevictable
178549 ± 14% -21.0% 141123 ± 4% numa-vmstat.node0.nr_file_pages
5179 ± 9% -20.3% 4130 ± 16% numa-vmstat.node0.nr_mapped
170.50 ± 12% +54.5% 263.50 ± 13% numa-vmstat.node0.nr_mlock
12061 ± 4% -19.8% 9670 ± 17% numa-vmstat.node0.nr_slab_reclaimable
3.542e+08 +21.5% 4.304e+08 ± 2% numa-vmstat.node0.numa_hit
3.542e+08 +21.5% 4.304e+08 ± 2% numa-vmstat.node0.numa_local
167459 ± 15% +27.9% 214111 ± 3% numa-vmstat.node1.nr_file_pages
1150 ± 57% +234.3% 3845 ± 44% numa-vmstat.node1.nr_inactive_anon
3673 ± 11% +49.4% 5488 ± 13% numa-vmstat.node1.nr_mapped
29911 ± 96% +131.3% 69188 ± 3% numa-vmstat.node1.nr_shmem
10036 ± 6% +29.5% 13000 ± 12% numa-vmstat.node1.nr_slab_reclaimable
22541 +12.7% 25401 ± 3% numa-vmstat.node1.nr_slab_unreclaimable
136898 ± 4% +5.4% 144317 ± 4% numa-vmstat.node1.nr_unevictable
1150 ± 57% +234.3% 3845 ± 44% numa-vmstat.node1.nr_zone_inactive_anon
136898 ± 4% +5.4% 144317 ± 4% numa-vmstat.node1.nr_zone_unevictable
3.525e+08 +22.0% 4.301e+08 ± 2% numa-vmstat.node1.numa_hit
3.523e+08 +22.0% 4.299e+08 ± 2% numa-vmstat.node1.numa_local
149893 +6.3% 159383 proc-vmstat.nr_active_anon
345938 +2.7% 355212 proc-vmstat.nr_file_pages
6075 +9.3% 6643 proc-vmstat.nr_inactive_anon
30490 +2.2% 31159 proc-vmstat.nr_kernel_stack
8870 +8.1% 9585 proc-vmstat.nr_mapped
9254 +2.3% 9471 proc-vmstat.nr_page_table_pages
63803 +13.6% 72460 proc-vmstat.nr_shmem
22097 +2.6% 22671 proc-vmstat.nr_slab_reclaimable
47965 +6.9% 51274 proc-vmstat.nr_slab_unreclaimable
149893 +6.3% 159383 proc-vmstat.nr_zone_active_anon
6075 +9.3% 6643 proc-vmstat.nr_zone_inactive_anon
1.414e+09 +21.1% 1.712e+09 proc-vmstat.numa_hit
1.414e+09 +21.1% 1.712e+09 proc-vmstat.numa_local
47447 +10.8% 52569 proc-vmstat.pgactivate
1.487e+09 +21.1% 1.801e+09 proc-vmstat.pgalloc_normal
1.846e+09 +20.7% 2.228e+09 proc-vmstat.pgfault
1.487e+09 +21.1% 1.801e+09 proc-vmstat.pgfree
63795 +20.6% 76962 proc-vmstat.thp_deferred_split_page
63790 +20.6% 76940 proc-vmstat.thp_fault_alloc
26894638 +20.9% 32515246 proc-vmstat.unevictable_pgs_culled
1.556e+13 +4.9% 1.632e+13 perf-stat.branch-instructions
1.54 +0.2 1.75 perf-stat.branch-miss-rate%
2.399e+11 +19.4% 2.863e+11 perf-stat.branch-misses
1.034e+11 +17.1% 1.21e+11 ± 2% perf-stat.cache-misses
9.784e+11 +18.6% 1.161e+12 perf-stat.cache-references
2.18e+08 +4.6% 2.281e+08 perf-stat.context-switches
1.60 -1.0% 1.59 perf-stat.cpi
1.281e+14 +3.2% 1.322e+14 perf-stat.cpu-cycles
46095973 +22.0% 56220581 perf-stat.cpu-migrations
0.19 +0.0 0.23 ± 2% perf-stat.dTLB-load-miss-rate%
4.148e+10 +23.3% 5.113e+10 ± 2% perf-stat.dTLB-load-misses
2.213e+13 +2.1% 2.261e+13 perf-stat.dTLB-loads
5.537e+09 +22.2% 6.767e+09 ± 2% perf-stat.dTLB-store-misses
8.001e+12 +20.3% 9.625e+12 perf-stat.dTLB-stores
2.034e+10 +20.1% 2.443e+10 perf-stat.iTLB-load-misses
1.3e+10 +17.1% 1.522e+10 perf-stat.iTLB-loads
7.991e+13 +4.3% 8.332e+13 perf-stat.instructions
3929 -13.2% 3411 perf-stat.instructions-per-iTLB-miss
1.81e+09 +20.7% 2.184e+09 perf-stat.minor-faults
91.68 -2.3 89.39 perf-stat.node-load-miss-rate%
3.395e+10 +14.5% 3.889e+10 perf-stat.node-load-misses
3.083e+09 +49.7% 4.616e+09 ± 3% perf-stat.node-loads
1.018e+10 +20.2% 1.224e+10 ± 2% perf-stat.node-store-misses
8.896e+09 +21.3% 1.079e+10 ± 2% perf-stat.node-stores
1.81e+09 +20.7% 2.184e+09 perf-stat.page-faults
5222880 -14.3% 4475405 perf-stat.path-length
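(Here path-length = instructions / workload: 7.991e+13 / 15301176 ≈ 5222880 -> 8.332e+13 / 18622462 ≈ 4475405, i.e. each unit of unixbench work now takes ~14% fewer instructions, consistent with the 20.9% score improvement.)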
1942 ± 7% +37.3% 2665 ± 18% sched_debug.cfs_rq:/.exec_clock.stddev
47.72 ± 7% +18.8% 56.67 ± 2% sched_debug.cfs_rq:/.load_avg.avg
4.95 ± 9% +44.0% 7.14 ± 7% sched_debug.cfs_rq:/.load_avg.min
265960 ± 8% +36.3% 362493 ± 15% sched_debug.cfs_rq:/.min_vruntime.stddev
0.33 ± 6% -11.5% 0.29 ± 5% sched_debug.cfs_rq:/.nr_running.stddev
2.77 ± 7% +88.4% 5.23 ± 3% sched_debug.cfs_rq:/.nr_spread_over.avg
7.18 ± 5% +79.1% 12.86 ± 15% sched_debug.cfs_rq:/.nr_spread_over.max
0.27 ± 52% +358.3% 1.25 ± 25% sched_debug.cfs_rq:/.nr_spread_over.min
1.57 ± 2% +40.9% 2.21 ± 2% sched_debug.cfs_rq:/.nr_spread_over.stddev
27.36 ± 5% +13.2% 30.98 ± 6% sched_debug.cfs_rq:/.removed.load_avg.avg
1263 ± 5% +14.1% 1442 ± 6% sched_debug.cfs_rq:/.removed.runnable_sum.avg
265962 ± 8% +36.3% 362499 ± 15% sched_debug.cfs_rq:/.spread0.stddev
8169 ± 28% +64.2% 13411 ± 21% sched_debug.cpu.avg_idle.min
298.80 ± 11% +57.4% 470.43 ± 28% sched_debug.cpu.cpu_load[0].max
7.46 ± 5% +34.4% 10.03 ± 12% sched_debug.cpu.cpu_load[1].avg
212.55 ± 15% +52.5% 324.14 ± 17% sched_debug.cpu.cpu_load[1].max
25.69 ± 9% +52.4% 39.15 ± 19% sched_debug.cpu.cpu_load[1].stddev
7.18 ± 3% +32.7% 9.53 ± 5% sched_debug.cpu.cpu_load[2].avg
151.27 ± 16% +51.9% 229.80 ± 11% sched_debug.cpu.cpu_load[2].max
19.45 ± 11% +50.2% 29.22 ± 10% sched_debug.cpu.cpu_load[2].stddev
6.92 ± 2% +29.5% 8.96 ± 4% sched_debug.cpu.cpu_load[3].avg
106.45 ± 23% +52.6% 162.50 ± 15% sched_debug.cpu.cpu_load[3].max
14.94 ± 15% +48.0% 22.10 ± 10% sched_debug.cpu.cpu_load[3].stddev
6.45 ± 2% +25.9% 8.12 ± 6% sched_debug.cpu.cpu_load[4].avg
79.82 ± 34% +56.2% 124.64 ± 25% sched_debug.cpu.cpu_load[4].max
11.70 ± 23% +47.3% 17.23 ± 17% sched_debug.cpu.cpu_load[4].stddev
422565 ± 24% +22.7% 518276 ± 15% sched_debug.cpu.load.max
0.00 ± 3% +23.4% 0.00 ± 14% sched_debug.cpu.next_balance.stddev
1.14 ± 5% +31.4% 1.50 ± 7% sched_debug.cpu.nr_running.avg
3.16 ± 2% +38.1% 4.36 ± 9% sched_debug.cpu.nr_running.max
0.71 ± 2% +33.7% 0.95 ± 7% sched_debug.cpu.nr_running.stddev
39284 ± 7% +47.6% 57979 ± 20% sched_debug.cpu.nr_switches.stddev
1658 ± 12% +73.9% 2884 ± 11% sched_debug.cpu.nr_uninterruptible.max
-1902 +212.1% -5938 sched_debug.cpu.nr_uninterruptible.min
716.66 ± 9% +107.5% 1487 ± 30% sched_debug.cpu.nr_uninterruptible.stddev
43221 ± 7% +46.5% 63340 ± 18% sched_debug.cpu.sched_count.stddev
351785 ± 3% -14.1% 302212 sched_debug.cpu.sched_goidle.min
21765 ± 7% +51.5% 32969 ± 24% sched_debug.cpu.sched_goidle.stddev
148360 +23.9% 183753 ± 2% sched_debug.cpu.ttwu_local.avg
152765 +24.4% 190065 ± 2% sched_debug.cpu.ttwu_local.max
2051 ± 10% +87.1% 3838 ± 32% sched_debug.cpu.ttwu_local.stddev
0.00 ± 25% +2.1e+05% 0.56 ± 46% sched_debug.rt_rq:/.rt_time.avg
0.02 ± 25% +2.1e+05% 49.46 ± 46% sched_debug.rt_rq:/.rt_time.max
0.00 ± 25% +2.1e+05% 5.24 ± 46% sched_debug.rt_rq:/.rt_time.stddev
20.84 ± 2% -4.4 16.42 ± 3% perf-profile.calltrace.cycles-pp.osq_lock.rwsem_down_write_failed.call_rwsem_down_write_failed.down_write.unlink_file_vma
21.69 ± 2% -4.4 17.32 ± 2% perf-profile.calltrace.cycles-pp.rwsem_down_write_failed.call_rwsem_down_write_failed.down_write.unlink_file_vma.free_pgtables
17.27 ± 2% -3.4 13.86 ± 3% perf-profile.calltrace.cycles-pp.call_rwsem_down_write_failed.down_write.unlink_file_vma.free_pgtables.exit_mmap
17.44 -3.4 14.05 ± 3% perf-profile.calltrace.cycles-pp.down_write.unlink_file_vma.free_pgtables.exit_mmap.mmput
36.96 -3.1 33.88 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
36.94 -3.1 33.87 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
12.69 -2.2 10.53 ± 2% perf-profile.calltrace.cycles-pp.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
12.71 -2.2 10.55 ± 2% perf-profile.calltrace.cycles-pp.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
13.36 -2.0 11.41 perf-profile.calltrace.cycles-pp.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64
8.96 ± 2% -1.9 7.03 ± 3% perf-profile.calltrace.cycles-pp.rwsem_down_write_failed.call_rwsem_down_write_failed.down_write.__vma_adjust.__split_vma
8.63 ± 2% -1.9 6.71 ± 3% perf-profile.calltrace.cycles-pp.osq_lock.rwsem_down_write_failed.call_rwsem_down_write_failed.down_write.__vma_adjust
13.61 -1.9 11.71 perf-profile.calltrace.cycles-pp.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
8.11 -1.8 6.36 ± 3% perf-profile.calltrace.cycles-pp.osq_lock.rwsem_down_write_failed.call_rwsem_down_write_failed.down_write.copy_process
8.44 -1.7 6.70 ± 2% perf-profile.calltrace.cycles-pp.call_rwsem_down_write_failed.down_write.copy_process._do_fork.do_syscall_64
8.44 -1.7 6.70 ± 2% perf-profile.calltrace.cycles-pp.rwsem_down_write_failed.call_rwsem_down_write_failed.down_write.copy_process._do_fork
8.50 -1.7 6.76 ± 2% perf-profile.calltrace.cycles-pp.down_write.copy_process._do_fork.do_syscall_64.entry_SYSCALL_64_after_hwframe
8.70 ± 2% -1.7 7.01 ± 3% perf-profile.calltrace.cycles-pp.unlink_file_vma.free_pgtables.exit_mmap.mmput.flush_old_exec
9.06 -1.6 7.47 ± 2% perf-profile.calltrace.cycles-pp.unlink_file_vma.free_pgtables.exit_mmap.mmput.do_exit
9.36 ± 2% -1.6 7.77 ± 2% perf-profile.calltrace.cycles-pp.free_pgtables.exit_mmap.mmput.flush_old_exec.load_elf_binary
9.71 -1.5 8.21 ± 2% perf-profile.calltrace.cycles-pp.free_pgtables.exit_mmap.mmput.do_exit.do_group_exit
6.71 ± 2% -1.5 5.26 ± 3% perf-profile.calltrace.cycles-pp.call_rwsem_down_write_failed.down_write.__vma_adjust.__split_vma.mprotect_fixup
6.77 ± 2% -1.4 5.32 ± 3% perf-profile.calltrace.cycles-pp.down_write.__vma_adjust.__split_vma.mprotect_fixup.do_mprotect_pkey
9.54 -1.3 8.23 ± 3% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
10.17 -1.3 8.86 ± 4% perf-profile.calltrace.cycles-pp.secondary_startup_64
9.31 -1.3 8.01 ± 4% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
10.04 -1.3 8.74 ± 3% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
7.63 ± 2% -1.3 6.33 ± 2% perf-profile.calltrace.cycles-pp.__vma_adjust.__split_vma.mprotect_fixup.do_mprotect_pkey.__x64_sys_mprotect
10.04 -1.3 8.75 ± 3% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
10.04 -1.3 8.75 ± 3% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
7.34 -1.3 6.05 ± 2% perf-profile.calltrace.cycles-pp.do_munmap.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff
7.71 ± 2% -1.3 6.43 ± 2% perf-profile.calltrace.cycles-pp.__split_vma.mprotect_fixup.do_mprotect_pkey.__x64_sys_mprotect.do_syscall_64
10.84 -1.2 9.62 perf-profile.calltrace.cycles-pp.copy_process._do_fork.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_fork
8.01 ± 2% -1.2 6.80 ± 2% perf-profile.calltrace.cycles-pp.mprotect_fixup.do_mprotect_pkey.__x64_sys_mprotect.do_syscall_64.entry_SYSCALL_64_after_hwframe
8.09 ± 2% -1.2 6.90 ± 2% perf-profile.calltrace.cycles-pp.do_mprotect_pkey.__x64_sys_mprotect.do_syscall_64.entry_SYSCALL_64_after_hwframe
8.10 ± 2% -1.2 6.90 ± 2% perf-profile.calltrace.cycles-pp.__x64_sys_mprotect.do_syscall_64.entry_SYSCALL_64_after_hwframe
11.17 -1.2 9.99 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__libc_fork
11.16 -1.2 9.99 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_fork
11.15 -1.2 9.98 perf-profile.calltrace.cycles-pp._do_fork.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_fork
11.80 -1.1 10.73 perf-profile.calltrace.cycles-pp.__libc_fork
4.28 -1.1 3.21 ± 3% perf-profile.calltrace.cycles-pp.osq_lock.rwsem_down_write_failed.call_rwsem_down_write_failed.down_write.vma_link
4.44 ± 2% -1.0 3.47 ± 2% perf-profile.calltrace.cycles-pp.call_rwsem_down_write_failed.down_write.unlink_file_vma.free_pgtables.unmap_region
4.45 -1.0 3.49 ± 2% perf-profile.calltrace.cycles-pp.down_write.unlink_file_vma.free_pgtables.unmap_region.do_munmap
11.18 -1.0 10.22 perf-profile.calltrace.cycles-pp.exit_mmap.mmput.flush_old_exec.load_elf_binary.search_binary_handler
4.48 -1.0 3.52 ± 2% perf-profile.calltrace.cycles-pp.unlink_file_vma.free_pgtables.unmap_region.do_munmap.mmap_region
11.19 -1.0 10.23 perf-profile.calltrace.cycles-pp.mmput.flush_old_exec.load_elf_binary.search_binary_handler.__do_execve_file
4.43 -0.9 3.48 ± 8% perf-profile.calltrace.cycles-pp.call_rwsem_down_write_failed.down_write.vma_link.mmap_region.do_mmap
4.43 -0.9 3.48 ± 8% perf-profile.calltrace.cycles-pp.rwsem_down_write_failed.call_rwsem_down_write_failed.down_write.vma_link.mmap_region
4.67 -0.9 3.73 ± 2% perf-profile.calltrace.cycles-pp.free_pgtables.unmap_region.do_munmap.mmap_region.do_mmap
4.45 -0.9 3.51 ± 9% perf-profile.calltrace.cycles-pp.down_write.vma_link.mmap_region.do_mmap.vm_mmap_pgoff
11.30 -0.9 10.38 perf-profile.calltrace.cycles-pp.flush_old_exec.load_elf_binary.search_binary_handler.__do_execve_file.__x64_sys_execve
4.85 -0.9 3.98 ± 2% perf-profile.calltrace.cycles-pp.unmap_region.do_munmap.mmap_region.do_mmap.vm_mmap_pgoff
5.40 -0.8 4.57 ± 2% perf-profile.calltrace.cycles-pp.vma_link.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff
2.25 -0.5 1.78 ± 3% perf-profile.calltrace.cycles-pp.down_write.__vma_adjust.__split_vma.do_munmap.mmap_region
2.25 -0.5 1.78 ± 3% perf-profile.calltrace.cycles-pp.call_rwsem_down_write_failed.down_write.__vma_adjust.__split_vma.do_munmap
2.42 -0.4 1.98 ± 2% perf-profile.calltrace.cycles-pp.__vma_adjust.__split_vma.do_munmap.mmap_region.do_mmap
2.43 -0.4 1.99 ± 2% perf-profile.calltrace.cycles-pp.__split_vma.do_munmap.mmap_region.do_mmap.vm_mmap_pgoff
13.17 -0.3 12.82 perf-profile.calltrace.cycles-pp.exit_mmap.mmput.do_exit.do_group_exit.__x64_sys_exit_group
13.17 -0.3 12.83 perf-profile.calltrace.cycles-pp.mmput.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
2.69 ± 2% -0.1 2.56 perf-profile.calltrace.cycles-pp.filemap_map_pages.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
0.56 +0.2 0.72 ± 2% perf-profile.calltrace.cycles-pp._IO_default_xsputn
0.63 +0.2 0.78 ± 3% perf-profile.calltrace.cycles-pp.copy_strings.__do_execve_file.__x64_sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.75 +0.2 0.92 ± 2% perf-profile.calltrace.cycles-pp.wp_page_copy.do_wp_page.__handle_mm_fault.handle_mm_fault.__do_page_fault
0.82 +0.2 1.01 ± 2% perf-profile.calltrace.cycles-pp.do_wp_page.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
0.94 ± 2% +0.2 1.16 perf-profile.calltrace.cycles-pp._dl_addr
0.64 ± 2% +0.2 0.87 ± 4% perf-profile.calltrace.cycles-pp.ret_from_fork
0.63 ± 2% +0.2 0.87 ± 4% perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
0.67 ± 2% +0.2 0.91 ± 4% perf-profile.calltrace.cycles-pp.page_remove_rmap.unmap_page_range.unmap_vmas.exit_mmap.mmput
0.87 +0.2 1.12 ± 2% perf-profile.calltrace.cycles-pp.path_openat.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.88 +0.2 1.13 perf-profile.calltrace.cycles-pp.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe.setlocale
0.70 ± 2% +0.3 0.96 ± 3% perf-profile.calltrace.cycles-pp.arch_tlb_finish_mmu.tlb_finish_mmu.exit_mmap.mmput.flush_old_exec
1.10 +0.3 1.37 ± 2% perf-profile.calltrace.cycles-pp.__strcoll_l
0.70 ± 2% +0.3 0.97 ± 3% perf-profile.calltrace.cycles-pp.tlb_finish_mmu.exit_mmap.mmput.flush_old_exec.load_elf_binary
1.04 +0.3 1.32 ± 2% perf-profile.calltrace.cycles-pp.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe.setlocale
0.89 +0.3 1.17 ± 3% perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.exit_mmap.mmput.flush_old_exec
0.95 +0.3 1.25 ± 3% perf-profile.calltrace.cycles-pp.unmap_vmas.exit_mmap.mmput.flush_old_exec.load_elf_binary
1.15 +0.3 1.46 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.setlocale
4.73 +0.3 5.04 perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
1.15 +0.3 1.46 ± 2% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.setlocale
1.07 +0.3 1.39 perf-profile.calltrace.cycles-pp.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe.mmap64
1.09 ± 2% +0.3 1.42 perf-profile.calltrace.cycles-pp.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe.mmap64
1.11 ± 2% +0.3 1.45 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.mmap64
1.11 +0.3 1.44 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.mmap64
0.26 ±100% +0.3 0.60 perf-profile.calltrace.cycles-pp.copy_page_range.copy_process._do_fork.do_syscall_64.entry_SYSCALL_64_after_hwframe
4.85 +0.3 5.19 perf-profile.calltrace.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
0.39 ± 57% +0.4 0.74 ± 3% perf-profile.calltrace.cycles-pp.wait4
1.20 +0.4 1.56 perf-profile.calltrace.cycles-pp.mmap64
13.89 +0.4 14.25 perf-profile.calltrace.cycles-pp.load_elf_binary.search_binary_handler.__do_execve_file.__x64_sys_execve.do_syscall_64
13.91 +0.4 14.28 perf-profile.calltrace.cycles-pp.search_binary_handler.__do_execve_file.__x64_sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.78 ± 2% +0.4 2.19 perf-profile.calltrace.cycles-pp.vfprintf.__vsnprintf_chk
0.55 ± 3% +0.4 0.99 ± 3% perf-profile.calltrace.cycles-pp.mmap_region.do_mmap.vm_mmap_pgoff.elf_map.load_elf_binary
0.60 ± 4% +0.5 1.06 ± 3% perf-profile.calltrace.cycles-pp.do_mmap.vm_mmap_pgoff.elf_map.load_elf_binary.search_binary_handler
0.66 ± 3% +0.5 1.12 ± 2% perf-profile.calltrace.cycles-pp.vm_mmap_pgoff.elf_map.load_elf_binary.search_binary_handler.__do_execve_file
5.43 +0.5 5.91 perf-profile.calltrace.cycles-pp.__do_page_fault.do_page_fault.page_fault
5.45 +0.5 5.92 perf-profile.calltrace.cycles-pp.do_page_fault.page_fault
0.00 +0.5 0.51 perf-profile.calltrace.cycles-pp.page_fault.setlocale
0.00 +0.5 0.51 ± 3% perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
5.58 +0.5 6.09 perf-profile.calltrace.cycles-pp.page_fault
0.00 +0.5 0.52 ± 4% perf-profile.calltrace.cycles-pp.smpboot_thread_fn.kthread.ret_from_fork
0.00 +0.5 0.53 ± 2% perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
1.37 ± 2% +0.5 1.90 ± 5% perf-profile.calltrace.cycles-pp.arch_tlb_finish_mmu.tlb_finish_mmu.exit_mmap.mmput.do_exit
1.38 ± 2% +0.5 1.90 ± 5% perf-profile.calltrace.cycles-pp.tlb_finish_mmu.exit_mmap.mmput.do_exit.do_group_exit
0.00 +0.5 0.53 ± 3% perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
0.00 +0.5 0.54 ± 2% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
1.73 ± 2% +0.5 2.26 ± 3% perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.exit_mmap.mmput.do_exit
0.00 +0.5 0.54 ± 3% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.write
0.00 +0.5 0.54 ± 3% perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
2.69 +0.5 3.24 perf-profile.calltrace.cycles-pp.__vsnprintf_chk
1.80 ± 2% +0.6 2.35 ± 3% perf-profile.calltrace.cycles-pp.unmap_vmas.exit_mmap.mmput.do_exit.do_group_exit
0.00 +0.6 0.56 ± 2% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
0.00 +0.6 0.56 ± 3% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.read
2.46 +0.6 3.03 ± 2% perf-profile.calltrace.cycles-pp.setlocale
0.00 +0.6 0.58 ± 3% perf-profile.calltrace.cycles-pp.write
0.00 +0.6 0.58 ± 2% perf-profile.calltrace.cycles-pp.do_faccessat.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.12 ±173% +0.6 0.71 ± 4% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.wait4
0.12 ±173% +0.6 0.71 ± 4% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.wait4
0.00 +0.6 0.61 ± 3% perf-profile.calltrace.cycles-pp.read
0.00 +0.6 0.61 ± 5% perf-profile.calltrace.cycles-pp.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.6 0.62 ± 5% perf-profile.calltrace.cycles-pp.kernel_wait4.__do_sys_wait4.do_syscall_64.entry_SYSCALL_64_after_hwframe.wait4
0.00 +0.6 0.63 ± 4% perf-profile.calltrace.cycles-pp.__do_sys_wait4.do_syscall_64.entry_SYSCALL_64_after_hwframe.wait4
1.63 ± 2% +0.7 2.31 ± 5% perf-profile.calltrace.cycles-pp.release_pages.tlb_flush_mmu_free.arch_tlb_finish_mmu.tlb_finish_mmu.exit_mmap
0.00 +0.7 0.74 ± 3% perf-profile.calltrace.cycles-pp.vma_link.mmap_region.do_mmap.vm_mmap_pgoff.elf_map
0.12 ±173% +0.8 0.91 ± 3% perf-profile.calltrace.cycles-pp.do_munmap.vm_munmap.elf_map.load_elf_binary.search_binary_handler
2.05 ± 2% +0.8 2.84 ± 4% perf-profile.calltrace.cycles-pp.tlb_flush_mmu_free.arch_tlb_finish_mmu.tlb_finish_mmu.exit_mmap.mmput
0.13 ±173% +0.8 0.92 ± 4% perf-profile.calltrace.cycles-pp.vm_munmap.elf_map.load_elf_binary.search_binary_handler.__do_execve_file
15.67 +0.8 16.48 perf-profile.calltrace.cycles-pp.__do_execve_file.__x64_sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe.execve
15.68 +0.8 16.50 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.execve
15.68 +0.8 16.50 perf-profile.calltrace.cycles-pp.__x64_sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe.execve
15.68 +0.8 16.50 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.execve
15.71 +0.8 16.53 perf-profile.calltrace.cycles-pp.execve
1.16 ± 3% +0.9 2.05 ± 3% perf-profile.calltrace.cycles-pp.elf_map.load_elf_binary.search_binary_handler.__do_execve_file.__x64_sys_execve
42.48 -8.6 33.85 ± 3% perf-profile.children.cycles-pp.osq_lock
44.24 -8.5 35.76 ± 2% perf-profile.children.cycles-pp.rwsem_down_write_failed
44.25 -8.5 35.78 ± 2% perf-profile.children.cycles-pp.call_rwsem_down_write_failed
44.77 -8.4 36.37 ± 2% perf-profile.children.cycles-pp.down_write
22.58 -4.1 18.50 ± 2% perf-profile.children.cycles-pp.unlink_file_vma
24.10 -3.9 20.25 ± 2% perf-profile.children.cycles-pp.free_pgtables
13.81 -1.8 11.97 perf-profile.children.cycles-pp.ksys_mmap_pgoff
69.44 -1.8 67.68 perf-profile.children.cycles-pp.do_syscall_64
69.50 -1.8 67.74 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
10.31 ± 2% -1.5 8.80 ± 2% perf-profile.children.cycles-pp.__vma_adjust
13.93 -1.5 12.43 perf-profile.children.cycles-pp.mmap_region
10.40 -1.5 8.90 ± 2% perf-profile.children.cycles-pp.__split_vma
14.23 -1.4 12.80 perf-profile.children.cycles-pp.do_mmap
14.44 -1.4 13.06 perf-profile.children.cycles-pp.vm_mmap_pgoff
9.67 -1.3 8.35 ± 4% perf-profile.children.cycles-pp.cpuidle_enter_state
24.58 -1.3 23.26 perf-profile.children.cycles-pp.mmput
24.55 -1.3 23.24 perf-profile.children.cycles-pp.exit_mmap
9.44 -1.3 8.12 ± 4% perf-profile.children.cycles-pp.intel_idle
10.17 -1.3 8.86 ± 4% perf-profile.children.cycles-pp.secondary_startup_64
10.17 -1.3 8.86 ± 4% perf-profile.children.cycles-pp.cpu_startup_entry
10.17 -1.3 8.86 ± 4% perf-profile.children.cycles-pp.do_idle
10.04 -1.3 8.75 ± 3% perf-profile.children.cycles-pp.start_secondary
11.07 -1.2 9.82 perf-profile.children.cycles-pp.copy_process
8.02 ± 2% -1.2 6.80 ± 2% perf-profile.children.cycles-pp.mprotect_fixup
11.38 -1.2 10.18 perf-profile.children.cycles-pp._do_fork
8.10 ± 2% -1.2 6.90 ± 2% perf-profile.children.cycles-pp.do_mprotect_pkey
8.10 ± 2% -1.2 6.90 ± 2% perf-profile.children.cycles-pp.__x64_sys_mprotect
11.84 -1.1 10.77 perf-profile.children.cycles-pp.__libc_fork
11.51 -0.9 10.57 perf-profile.children.cycles-pp.flush_old_exec
8.20 -0.8 7.44 perf-profile.children.cycles-pp.do_munmap
5.36 -0.6 4.79 perf-profile.children.cycles-pp.unmap_region
5.70 ± 2% -0.5 5.21 perf-profile.children.cycles-pp.filemap_map_pages
5.81 -0.4 5.39 perf-profile.children.cycles-pp.vma_link
0.29 ± 3% -0.1 0.15 perf-profile.children.cycles-pp.radix_tree_next_chunk
0.39 ± 3% -0.0 0.36 perf-profile.children.cycles-pp.time
0.16 ± 4% -0.0 0.13 perf-profile.children.cycles-pp.find_busiest_group
0.30 -0.0 0.29 perf-profile.children.cycles-pp.load_balance
0.23 -0.0 0.21 ± 2% perf-profile.children.cycles-pp.__strcasecmp
0.17 -0.0 0.15 ± 3% perf-profile.children.cycles-pp.fopen
0.05 +0.0 0.06 perf-profile.children.cycles-pp.__put_task_struct
0.05 +0.0 0.06 perf-profile.children.cycles-pp.vm_brk_flags
0.05 +0.0 0.06 perf-profile.children.cycles-pp.__perf_event__output_id_sample
0.05 +0.0 0.06 perf-profile.children.cycles-pp.security_mmap_addr
0.05 +0.0 0.06 perf-profile.children.cycles-pp.selinux_file_open
0.06 +0.0 0.07 perf-profile.children.cycles-pp.__switch_to
0.07 ± 5% +0.0 0.09 ± 5% perf-profile.children.cycles-pp.__x64_sys_pipe
0.07 ± 5% +0.0 0.09 ± 5% perf-profile.children.cycles-pp.do_pipe2
0.07 ± 5% +0.0 0.09 ± 5% perf-profile.children.cycles-pp.unmap_single_vma
0.07 +0.0 0.08 ± 5% perf-profile.children.cycles-pp.__errno_location
0.05 +0.0 0.06 ± 6% perf-profile.children.cycles-pp.mem_cgroup_throttle_swaprate
0.05 ± 8% +0.0 0.07 ± 7% perf-profile.children.cycles-pp.perf_event_task_output
0.05 ± 8% +0.0 0.07 ± 7% perf-profile.children.cycles-pp.load_elf_phdrs
0.17 ± 2% +0.0 0.18 ± 2% perf-profile.children.cycles-pp.sched_ttwu_pending
0.06 ± 6% +0.0 0.08 ± 6% perf-profile.children.cycles-pp.cpumask_next
0.06 +0.0 0.07 ± 5% perf-profile.children.cycles-pp.kfree
0.08 ± 6% +0.0 0.09 perf-profile.children.cycles-pp.do_signal
0.08 ± 6% +0.0 0.09 perf-profile.children.cycles-pp.memcpy
0.08 ± 5% +0.0 0.09 ± 4% perf-profile.children.cycles-pp.__pipe
0.08 +0.0 0.10 ± 5% perf-profile.children.cycles-pp.vfs_getattr
0.08 ± 5% +0.0 0.09 ± 4% perf-profile.children.cycles-pp.security_inode_getattr
0.08 ± 5% +0.0 0.09 ± 4% perf-profile.children.cycles-pp.avc_has_perm
0.07 ± 6% +0.0 0.08 ± 5% perf-profile.children.cycles-pp.fsnotify
0.07 +0.0 0.09 ± 5% perf-profile.children.cycles-pp.__tsearch
0.07 ± 7% +0.0 0.08 perf-profile.children.cycles-pp.memchr
0.05 ± 8% +0.0 0.07 ± 6% perf-profile.children.cycles-pp.file_has_perm
0.05 ± 8% +0.0 0.07 ± 6% perf-profile.children.cycles-pp.__alloc_fd
0.05 +0.0 0.07 ± 7% perf-profile.children.cycles-pp.free_unref_page_commit
0.05 +0.0 0.07 ± 7% perf-profile.children.cycles-pp.up_read
0.29 +0.0 0.30 ± 2% perf-profile.children.cycles-pp.save_stack_trace_tsk
0.13 ± 3% +0.0 0.14 ± 3% perf-profile.children.cycles-pp.pipe_wait
0.06 +0.0 0.08 ± 6% perf-profile.children.cycles-pp.__put_anon_vma
0.06 ± 11% +0.0 0.08 ± 6% perf-profile.children.cycles-pp.wake_up_page_bit
0.05 ± 9% +0.0 0.07 perf-profile.children.cycles-pp.dup_fd
0.19 ± 2% +0.0 0.21 perf-profile.children.cycles-pp.dequeue_entity
0.10 ± 5% +0.0 0.11 ± 3% perf-profile.children.cycles-pp.move_queued_task
0.10 ± 7% +0.0 0.12 ± 3% perf-profile.children.cycles-pp.__list_add_valid
0.08 ± 6% +0.0 0.09 ± 4% perf-profile.children.cycles-pp.find_next_bit
0.07 ± 5% +0.0 0.09 ± 7% perf-profile.children.cycles-pp.arch_dup_task_struct
0.11 +0.0 0.13 ± 3% perf-profile.children.cycles-pp.find_get_entry
0.07 ± 10% +0.0 0.09 ± 4% perf-profile.children.cycles-pp.cp_new_stat
0.07 ± 7% +0.0 0.08 ± 5% perf-profile.children.cycles-pp.load_new_mm_cr3
0.07 ± 7% +0.0 0.08 ± 5% perf-profile.children.cycles-pp.set_next_entity
0.07 ± 6% +0.0 0.09 ± 5% perf-profile.children.cycles-pp.__install_special_mapping
0.06 ± 6% +0.0 0.08 perf-profile.children.cycles-pp._IO_setb
0.07 ± 6% +0.0 0.09 ± 5% perf-profile.children.cycles-pp.trailing_symlink
0.06 ± 6% +0.0 0.08 perf-profile.children.cycles-pp.setup_new_exec
0.06 ± 6% +0.0 0.08 perf-profile.children.cycles-pp.perf_event_task
0.05 +0.0 0.07 ± 6% perf-profile.children.cycles-pp.__legitimize_mnt
0.05 ± 8% +0.0 0.07 perf-profile.children.cycles-pp.vma_gap_callbacks_rotate
0.05 +0.0 0.07 ± 6% perf-profile.children.cycles-pp.security_file_free
0.05 ± 8% +0.0 0.07 perf-profile.children.cycles-pp.down_write_killable
0.08 ± 5% +0.0 0.10 ± 4% perf-profile.children.cycles-pp.available_idle_cpu
0.08 ± 5% +0.0 0.10 ± 4% perf-profile.children.cycles-pp._vm_normal_page
0.08 ± 5% +0.0 0.10 perf-profile.children.cycles-pp.expand_downwards
0.06 ± 7% +0.0 0.08 ± 6% perf-profile.children.cycles-pp._copy_to_user
0.06 +0.0 0.08 ± 5% perf-profile.children.cycles-pp.__unlock_page_memcg
0.06 ± 7% +0.0 0.08 ± 6% perf-profile.children.cycles-pp.security_file_alloc
0.17 ± 2% +0.0 0.19 ± 4% perf-profile.children.cycles-pp.osq_unlock
0.10 ± 5% +0.0 0.11 ± 4% perf-profile.children.cycles-pp.finish_fault
0.09 ± 4% +0.0 0.11 ± 3% perf-profile.children.cycles-pp.may_open
0.09 ± 4% +0.0 0.11 ± 3% perf-profile.children.cycles-pp.perf_output_copy
0.10 ± 4% +0.0 0.12 ± 3% perf-profile.children.cycles-pp.selinux_mmap_file
0.07 ± 5% +0.0 0.09 ± 4% perf-profile.children.cycles-pp.try_charge
0.07 ± 5% +0.0 0.09 ± 4% perf-profile.children.cycles-pp.___slab_alloc
0.08 ± 6% +0.0 0.10 ± 5% perf-profile.children.cycles-pp.__pthread_once_slow
0.07 +0.0 0.09 perf-profile.children.cycles-pp.move_page_tables
0.07 +0.0 0.09 perf-profile.children.cycles-pp.touch_atime
0.06 ± 6% +0.0 0.08 ± 5% perf-profile.children.cycles-pp.queue_work_on
0.09 ± 7% +0.0 0.11 perf-profile.children.cycles-pp.filp_close
0.08 ± 8% +0.0 0.10 perf-profile.children.cycles-pp.selinux_vm_enough_memory
0.09 ± 4% +0.0 0.11 ± 4% perf-profile.children.cycles-pp.sync_regs
0.08 +0.0 0.10 perf-profile.children.cycles-pp.__inode_security_revalidate
0.07 ± 6% +0.0 0.09 ± 4% perf-profile.children.cycles-pp.prepend_name
0.06 ± 6% +0.0 0.08 ± 5% perf-profile.children.cycles-pp.complete_walk
0.06 +0.0 0.08 ± 8% perf-profile.children.cycles-pp.unlock_page_memcg
0.06 ± 11% +0.0 0.08 perf-profile.children.cycles-pp.cred_has_capability
0.06 +0.0 0.08 perf-profile.children.cycles-pp.security_file_open
0.06 +0.0 0.08 perf-profile.children.cycles-pp.__queue_work
0.05 ± 8% +0.0 0.07 ± 11% perf-profile.children.cycles-pp.copyin
0.06 ± 9% +0.0 0.08 ± 6% perf-profile.children.cycles-pp.entry_SYSCALL_64_stage2
0.06 ± 9% +0.0 0.08 ± 6% perf-profile.children.cycles-pp.simple_write_begin
0.06 ± 7% +0.0 0.08 ± 5% perf-profile.children.cycles-pp.security_file_permission
0.14 ± 5% +0.0 0.16 ± 2% perf-profile.children.cycles-pp.switch_mm_irqs_off
0.13 +0.0 0.15 ± 2% perf-profile.children.cycles-pp.__anon_vma_prepare
0.10 ± 7% +0.0 0.12 ± 3% perf-profile.children.cycles-pp.migration_cpu_stop
0.09 ± 7% +0.0 0.11 ± 3% perf-profile.children.cycles-pp._IO_file_close
0.09 +0.0 0.11 ± 3% perf-profile.children.cycles-pp.__call_rcu
0.07 +0.0 0.09 ± 4% perf-profile.children.cycles-pp.kmem_cache_alloc_trace
0.11 ± 4% +0.0 0.13 perf-profile.children.cycles-pp.do_brk_flags
0.10 ± 8% +0.0 0.12 ± 4% perf-profile.children.cycles-pp.prepare_creds
0.09 +0.0 0.11 ± 3% perf-profile.children.cycles-pp.put_task_stack
0.08 ± 5% +0.0 0.11 ± 4% perf-profile.children.cycles-pp.__slab_alloc
0.09 ± 5% +0.0 0.11 ± 4% perf-profile.children.cycles-pp.__x64_sys_brk
0.08 ± 5% +0.0 0.10 perf-profile.children.cycles-pp.map_vdso
0.11 ± 7% +0.0 0.14 ± 3% perf-profile.children.cycles-pp.get_zeroed_page
0.08 ± 5% +0.0 0.11 ± 4% perf-profile.children.cycles-pp.anon_vma_interval_tree_remove
0.12 ± 3% +0.0 0.15 ± 2% perf-profile.children.cycles-pp.__libc_sigaction
0.11 ± 4% +0.0 0.13 ± 5% perf-profile.children.cycles-pp.cpu_stopper_thread
0.10 ± 5% +0.0 0.12 ± 5% perf-profile.children.cycles-pp.prepare_binprm
0.09 +0.0 0.11 ± 4% perf-profile.children.cycles-pp.__d_alloc
0.09 ± 4% +0.0 0.11 ± 3% perf-profile.children.cycles-pp.simple_lookup
0.08 ± 5% +0.0 0.11 ± 4% perf-profile.children.cycles-pp.vm_area_alloc
0.04 ± 57% +0.0 0.06 ± 6% perf-profile.children.cycles-pp.alloc_pid
0.11 ± 4% +0.0 0.13 ± 3% perf-profile.children.cycles-pp.brk
0.08 ± 5% +0.0 0.11 ± 4% perf-profile.children.cycles-pp.memset_erms
0.11 ± 3% +0.0 0.14 ± 6% perf-profile.children.cycles-pp.__get_user_8
0.09 ± 5% +0.0 0.11 ± 3% perf-profile.children.cycles-pp.evict
0.15 ± 3% +0.0 0.17 ± 2% perf-profile.children.cycles-pp.avc_has_perm_noaudit
0.13 ± 3% +0.0 0.15 ± 3% perf-profile.children.cycles-pp.sched_move_task
0.11 ± 7% +0.0 0.14 ± 3% perf-profile.children.cycles-pp.__dentry_kill
0.10 ± 4% +0.0 0.13 ± 5% perf-profile.children.cycles-pp.unmapped_area_topdown
0.10 ± 4% +0.0 0.12 ± 4% perf-profile.children.cycles-pp.__snprintf_chk
0.09 ± 4% +0.0 0.11 ± 7% perf-profile.children.cycles-pp.select_idle_sibling
0.08 ± 5% +0.0 0.11 ± 8% perf-profile.children.cycles-pp.lockref_get_not_dead
0.07 +0.0 0.10 ± 4% perf-profile.children.cycles-pp.d_add
0.04 ± 57% +0.0 0.07 ± 7% perf-profile.children.cycles-pp.do_huge_pmd_anonymous_page
0.11 +0.0 0.14 ± 3% perf-profile.children.cycles-pp.fput
0.11 ± 6% +0.0 0.14 ± 3% perf-profile.children.cycles-pp._exit
0.12 ± 6% +0.0 0.15 ± 5% perf-profile.children.cycles-pp.security_vm_enough_memory_mm
0.12 ± 3% +0.0 0.15 ± 2% perf-profile.children.cycles-pp.free_pgd_range
0.11 ± 4% +0.0 0.14 ± 3% perf-profile.children.cycles-pp.vm_area_dup
0.10 ± 7% +0.0 0.13 perf-profile.children.cycles-pp.vma_interval_tree_augment_rotate
0.10 ± 5% +0.0 0.12 ± 4% perf-profile.children.cycles-pp.rcu_all_qs
0.09 ± 4% +0.0 0.12 ± 3% perf-profile.children.cycles-pp.munmap
0.08 ± 6% +0.0 0.11 ± 4% perf-profile.children.cycles-pp.__mmdrop
0.07 ± 6% +0.0 0.10 ± 4% perf-profile.children.cycles-pp.truncate_inode_pages_range
0.04 ± 57% +0.0 0.07 ± 6% perf-profile.children.cycles-pp.grab_cache_page_write_begin
0.12 ± 3% +0.0 0.15 ± 2% perf-profile.children.cycles-pp.__lru_cache_add
0.11 ± 4% +0.0 0.15 ± 3% perf-profile.children.cycles-pp.prepend_path
0.14 ± 3% +0.0 0.17 ± 6% perf-profile.children.cycles-pp.security_mmap_file
0.14 ± 3% +0.0 0.17 ± 7% perf-profile.children.cycles-pp.mem_cgroup_try_charge
0.12 ± 3% +0.0 0.15 ± 9% perf-profile.children.cycles-pp._raw_spin_unlock_irqrestore
0.13 +0.0 0.16 ± 2% perf-profile.children.cycles-pp.lockref_put_or_lock
0.14 ± 6% +0.0 0.17 ± 4% perf-profile.children.cycles-pp.kernel_read
0.12 ± 3% +0.0 0.15 ± 3% perf-profile.children.cycles-pp.__put_user_4
0.09 +0.0 0.12 ± 8% perf-profile.children.cycles-pp.file_free_rcu
0.11 ± 4% +0.0 0.15 ± 5% perf-profile.children.cycles-pp.__x64_sys_close
0.20 ± 3% +0.0 0.23 ± 2% perf-profile.children.cycles-pp.update_load_avg
0.16 +0.0 0.20 ± 5% perf-profile.children.cycles-pp.vfs_statx_fd
0.17 ± 3% +0.0 0.20 ± 3% perf-profile.children.cycles-pp.d_path
0.15 ± 3% +0.0 0.19 ± 3% perf-profile.children.cycles-pp.stop_one_cpu
0.16 +0.0 0.20 ± 2% perf-profile.children.cycles-pp.anon_vma_clone
0.16 ± 2% +0.0 0.20 ± 2% perf-profile.children.cycles-pp.vma_compute_subtree_gap
0.16 ± 2% +0.0 0.20 ± 2% perf-profile.children.cycles-pp.getenv
0.15 ± 2% +0.0 0.18 ± 2% perf-profile.children.cycles-pp._init
0.14 ± 5% +0.0 0.18 ± 2% perf-profile.children.cycles-pp.__pud_alloc
0.12 ± 4% +0.0 0.16 ± 4% perf-profile.children.cycles-pp.__getrlimit
0.09 +0.0 0.12 ± 4% perf-profile.children.cycles-pp.wake_q_add
0.30 ± 2% +0.0 0.34 ± 3% perf-profile.children.cycles-pp.strchrnul
0.18 ± 2% +0.0 0.21 ± 3% perf-profile.children.cycles-pp.open_exec
0.14 +0.0 0.18 ± 2% perf-profile.children.cycles-pp.shift_arg_pages
0.14 ± 5% +0.0 0.18 ± 2% perf-profile.children.cycles-pp.preempt_schedule_common
0.19 ± 4% +0.0 0.23 perf-profile.children.cycles-pp.getname_flags
0.17 ± 4% +0.0 0.21 ± 2% perf-profile.children.cycles-pp.arch_get_unmapped_area_topdown
0.14 ± 3% +0.0 0.18 ± 5% perf-profile.children.cycles-pp.__might_fault
0.15 ± 2% +0.0 0.18 ± 2% perf-profile.children.cycles-pp.perf_event_mmap_output
0.13 ± 6% +0.0 0.17 ± 5% perf-profile.children.cycles-pp.dentry_kill
0.13 ± 5% +0.0 0.17 ± 4% perf-profile.children.cycles-pp.copy_user_generic_unrolled
0.13 ± 3% +0.0 0.17 ± 3% perf-profile.children.cycles-pp.do_unlinkat
0.11 +0.0 0.15 ± 2% perf-profile.children.cycles-pp.inode_permission
0.14 ± 5% +0.0 0.18 perf-profile.children.cycles-pp.free_pcppages_bulk
0.08 +0.0 0.12 ± 11% perf-profile.children.cycles-pp.rcu_cblist_dequeue
0.23 ± 2% +0.0 0.27 ± 4% perf-profile.children.cycles-pp.ptep_clear_flush
0.18 ± 4% +0.0 0.22 perf-profile.children.cycles-pp.change_protection_range
0.16 ± 2% +0.0 0.20 ± 4% perf-profile.children.cycles-pp.memcpy_erms
0.15 ± 3% +0.0 0.20 ± 4% perf-profile.children.cycles-pp.pagecache_get_page
0.14 ± 3% +0.0 0.18 ± 4% perf-profile.children.cycles-pp.legitimize_path
0.39 ± 2% +0.0 0.43 ± 3% perf-profile.children.cycles-pp.malloc
0.54 +0.0 0.58 perf-profile.children.cycles-pp.ttwu_do_activate
0.25 ± 3% +0.0 0.29 perf-profile.children.cycles-pp.dequeue_task_fair
0.19 ± 2% +0.0 0.23 ± 3% perf-profile.children.cycles-pp._copy_from_user
0.18 ± 2% +0.0 0.22 ± 3% perf-profile.children.cycles-pp.vmacache_find
0.18 ± 2% +0.0 0.22 perf-profile.children.cycles-pp.open64
0.14 ± 3% +0.0 0.18 perf-profile.children.cycles-pp.lock_page_memcg
0.01 ±173% +0.0 0.05 ± 9% perf-profile.children.cycles-pp.__follow_mount_rcu
0.21 ± 6% +0.0 0.26 ± 3% perf-profile.children.cycles-pp._IO_file_xsputn
0.17 ± 3% +0.0 0.21 ± 3% perf-profile.children.cycles-pp.signal_wake_up_state
0.14 ± 3% +0.0 0.18 ± 3% perf-profile.children.cycles-pp.count
0.13 ± 3% +0.0 0.18 ± 2% perf-profile.children.cycles-pp.unlinkat
0.11 ± 7% +0.0 0.15 ± 9% perf-profile.children.cycles-pp.__get_vm_area_node
0.09 ± 4% +0.0 0.14 ± 13% perf-profile.children.cycles-pp.alloc_vmap_area
0.20 ± 2% +0.0 0.24 ± 3% perf-profile.children.cycles-pp.__vma_link_rb
0.17 ± 2% +0.0 0.21 ± 3% perf-profile.children.cycles-pp.schedule_tail
0.14 ± 3% +0.0 0.19 ± 4% perf-profile.children.cycles-pp.__rb_erase_color
0.01 ±173% +0.0 0.06 ± 7% perf-profile.children.cycles-pp.insert_vm_struct
0.01 ±173% +0.0 0.06 ± 7% perf-profile.children.cycles-pp.put_prev_entity
0.19 ± 2% +0.0 0.24 ± 2% perf-profile.children.cycles-pp.copy_strings_kernel
0.19 ± 2% +0.0 0.24 ± 5% perf-profile.children.cycles-pp.setup_arg_pages
0.17 ± 4% +0.0 0.22 perf-profile.children.cycles-pp.__pmd_alloc
0.15 ± 3% +0.0 0.19 ± 5% perf-profile.children.cycles-pp.unlazy_walk
0.14 ± 5% +0.0 0.19 ± 4% perf-profile.children.cycles-pp.__get_free_pages
0.01 ±173% +0.0 0.06 perf-profile.children.cycles-pp.__free_pages_ok
0.15 +0.0 0.20 ± 4% perf-profile.children.cycles-pp.mark_page_accessed
0.15 ± 2% +0.0 0.20 ± 3% perf-profile.children.cycles-pp.d_alloc
0.57 +0.0 0.61 perf-profile.children.cycles-pp.enqueue_entity
0.23 +0.0 0.28 ± 4% perf-profile.children.cycles-pp.__do_sys_newfstat
0.16 ± 5% +0.0 0.21 ± 6% perf-profile.children.cycles-pp.__generic_file_write_iter
0.15 ± 7% +0.0 0.20 ± 5% perf-profile.children.cycles-pp.generic_perform_write
0.00 +0.1 0.05 perf-profile.children.cycles-pp.vma_merge
0.00 +0.1 0.05 perf-profile.children.cycles-pp.free_one_page
0.00 +0.1 0.05 perf-profile.children.cycles-pp.rcu_segcblist_enqueue
0.00 +0.1 0.05 perf-profile.children.cycles-pp.__fget_light
0.17 ± 7% +0.1 0.22 ± 5% perf-profile.children.cycles-pp.generic_file_write_iter
0.24 ± 2% +0.1 0.30 ± 3% perf-profile.children.cycles-pp.anon_vma_fork
0.19 ± 3% +0.1 0.24 ± 4% perf-profile.children.cycles-pp.mem_cgroup_try_charge_delay
0.19 +0.1 0.24 ± 3% perf-profile.children.cycles-pp.finish_task_switch
0.18 ± 2% +0.1 0.24 ± 3% perf-profile.children.cycles-pp.__send_signal
0.18 ± 2% +0.1 0.23 ± 2% perf-profile.children.cycles-pp.__pagevec_lru_add_fn
0.00 +0.1 0.05 ± 8% perf-profile.children.cycles-pp.__pagevec_release
0.00 +0.1 0.05 ± 8% perf-profile.children.cycles-pp.__perf_event_header__init_id
0.00 +0.1 0.05 ± 8% perf-profile.children.cycles-pp.check_preempt_curr
0.21 ± 2% +0.1 0.27 ± 5% perf-profile.children.cycles-pp.autoremove_wake_function
0.21 ± 3% +0.1 0.27 ± 3% perf-profile.children.cycles-pp.generic_file_read_iter
0.30 ± 2% +0.1 0.36 ± 3% perf-profile.children.cycles-pp.wake_up_new_task
0.25 +0.1 0.30 perf-profile.children.cycles-pp.get_unmapped_area
0.19 ± 3% +0.1 0.24 ± 4% perf-profile.children.cycles-pp.get_user_arg_ptr
0.17 ± 2% +0.1 0.22 ± 3% perf-profile.children.cycles-pp.pgd_alloc
0.00 +0.1 0.05 ± 9% perf-profile.children.cycles-pp.page_add_new_anon_rmap
0.21 ± 2% +0.1 0.26 ± 4% perf-profile.children.cycles-pp.__might_sleep
0.31 +0.1 0.36 perf-profile.children.cycles-pp.native_flush_tlb_one_user
0.26 ± 3% +0.1 0.31 ± 2% perf-profile.children.cycles-pp.do_open_execat
0.20 ± 4% +0.1 0.25 ± 7% perf-profile.children.cycles-pp.__vmalloc_node_range
0.00 +0.1 0.06 ± 7% perf-profile.children.cycles-pp.__mod_node_page_state
0.00 +0.1 0.06 ± 7% perf-profile.children.cycles-pp.anon_vma_interval_tree_insert
0.22 +0.1 0.28 ± 4% perf-profile.children.cycles-pp.__wake_up_common_lock
0.22 ± 3% +0.1 0.28 ± 4% perf-profile.children.cycles-pp.do_notify_parent
0.28 ± 2% +0.1 0.34 ± 5% perf-profile.children.cycles-pp.__rb_insert_augmented
0.23 ± 2% +0.1 0.29 perf-profile.children.cycles-pp.strlen
0.19 ± 2% +0.1 0.25 ± 5% perf-profile.children.cycles-pp.copyout
0.19 ± 4% +0.1 0.25 ± 3% perf-profile.children.cycles-pp.mm_init
0.64 +0.1 0.70 perf-profile.children.cycles-pp.enqueue_task_fair
0.29 ± 2% +0.1 0.35 ± 3% perf-profile.children.cycles-pp.___perf_sw_event
0.24 ± 2% +0.1 0.30 ± 3% perf-profile.children.cycles-pp.terminate_walk
0.26 ± 3% +0.1 0.33 perf-profile.children.cycles-pp.free_unref_page_list
0.23 +0.1 0.30 ± 5% perf-profile.children.cycles-pp.__wake_up_common
0.32 +0.1 0.38 perf-profile.children.cycles-pp.__perf_sw_event
0.28 ± 2% +0.1 0.35 ± 4% perf-profile.children.cycles-pp.get_user_pages_remote
0.25 ± 2% +0.1 0.32 ± 4% perf-profile.children.cycles-pp.pipe_write
0.22 ± 3% +0.1 0.29 ± 5% perf-profile.children.cycles-pp.copy_page_to_iter
0.23 ± 2% +0.1 0.29 ± 2% perf-profile.children.cycles-pp._IO_file_open
0.32 ± 3% +0.1 0.38 ± 2% perf-profile.children.cycles-pp._fini
0.28 +0.1 0.34 ± 4% perf-profile.children.cycles-pp.__get_user_pages
0.34 ± 2% +0.1 0.41 perf-profile.children.cycles-pp.unlock_page
0.30 ± 2% +0.1 0.36 ± 4% perf-profile.children.cycles-pp.__pte_alloc
0.32 +0.1 0.39 perf-profile.children.cycles-pp.vma_interval_tree_remove
0.29 ± 2% +0.1 0.36 ± 3% perf-profile.children.cycles-pp.pipe_read
0.23 +0.1 0.30 ± 2% perf-profile.children.cycles-pp.do_dentry_open
0.10 ± 5% +0.1 0.16 ± 10% perf-profile.children.cycles-pp.run_ksoftirqd
0.22 ± 3% +0.1 0.29 ± 3% perf-profile.children.cycles-pp.d_alloc_parallel
0.39 +0.1 0.46 ± 3% perf-profile.children.cycles-pp.__fxstat64
0.34 +0.1 0.41 perf-profile.children.cycles-pp.flush_tlb_func_common
0.27 ± 3% +0.1 0.34 ± 3% perf-profile.children.cycles-pp.__d_lookup_rcu
0.34 ± 2% +0.1 0.41 ± 2% perf-profile.children.cycles-pp._IO_padn
0.16 ± 2% +0.1 0.23 ± 3% perf-profile.children.cycles-pp.release_task
0.24 +0.1 0.31 ± 3% perf-profile.children.cycles-pp.__lookup_slow
0.00 +0.1 0.07 ± 11% perf-profile.children.cycles-pp.do_prlimit
0.32 +0.1 0.40 ± 2% perf-profile.children.cycles-pp.selinux_inode_permission
0.39 ± 2% +0.1 0.46 ± 2% perf-profile.children.cycles-pp.find_vma
0.00 +0.1 0.08 ± 6% perf-profile.children.cycles-pp.__x64_sys_getrlimit
0.59 +0.1 0.67 ± 2% perf-profile.children.cycles-pp.schedule
0.28 +0.1 0.36 ± 2% perf-profile.children.cycles-pp.__list_del_entry_valid
0.35 ± 2% +0.1 0.42 ± 2% perf-profile.children.cycles-pp.security_inode_permission
0.32 ± 3% +0.1 0.40 ± 3% perf-profile.children.cycles-pp.__do_sys_newstat
0.27 ± 3% +0.1 0.35 ± 2% perf-profile.children.cycles-pp.lookup_slow
0.33 ± 2% +0.1 0.41 ± 2% perf-profile.children.cycles-pp._cond_resched
0.32 +0.1 0.40 ± 3% perf-profile.children.cycles-pp.remove_vma
0.30 ± 2% +0.1 0.39 ± 2% perf-profile.children.cycles-pp.perf_iterate_sb
0.13 ± 5% +0.1 0.21 ± 14% perf-profile.children.cycles-pp.find_vmap_area
0.33 ± 3% +0.1 0.41 ± 3% perf-profile.children.cycles-pp.vfs_statx
0.39 ± 2% +0.1 0.47 ± 2% perf-profile.children.cycles-pp._IO_fwrite
0.29 ± 2% +0.1 0.38 ± 3% perf-profile.children.cycles-pp.exit_to_usermode_loop
0.18 ± 8% +0.1 0.27 ± 12% perf-profile.children.cycles-pp.__vunmap
0.43 +0.1 0.52 ± 2% perf-profile.children.cycles-pp.flush_tlb_mm_range
0.37 ± 3% +0.1 0.46 ± 4% perf-profile.children.cycles-pp.___might_sleep
0.40 ± 3% +0.1 0.49 ± 2% perf-profile.children.cycles-pp.__slab_free
0.29 +0.1 0.38 ± 2% perf-profile.children.cycles-pp.create_elf_tables
0.18 ± 7% +0.1 0.27 ± 10% perf-profile.children.cycles-pp.free_work
0.37 ± 2% +0.1 0.46 perf-profile.children.cycles-pp.swapgs_restore_regs_and_return_to_usermode
0.18 ± 4% +0.1 0.28 ± 4% perf-profile.children.cycles-pp.wait_consider_task
0.39 ± 2% +0.1 0.48 ± 3% perf-profile.children.cycles-pp.__xstat64
0.39 +0.1 0.48 ± 2% perf-profile.children.cycles-pp.pte_alloc_one
0.29 ± 2% +0.1 0.39 ± 2% perf-profile.children.cycles-pp.__alloc_file
0.40 ± 2% +0.1 0.49 ± 2% perf-profile.children.cycles-pp.unlink_anon_vmas
0.32 ± 2% +0.1 0.41 ± 3% perf-profile.children.cycles-pp.alloc_empty_file
0.44 +0.1 0.54 perf-profile.children.cycles-pp.page_add_file_rmap
0.41 ± 2% +0.1 0.51 ± 2% perf-profile.children.cycles-pp.sched_exec
0.39 +0.1 0.48 perf-profile.children.cycles-pp.copy_page
0.31 ± 3% +0.1 0.41 ± 3% perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
0.52 +0.1 0.62 ± 3% perf-profile.children.cycles-pp.native_irq_return_iret
0.20 ± 6% +0.1 0.30 ± 9% perf-profile.children.cycles-pp.process_one_work
0.36 ± 2% +0.1 0.47 ± 3% perf-profile.children.cycles-pp.__fput
0.51 +0.1 0.62 ± 2% perf-profile.children.cycles-pp.copy_page_range
0.40 ± 2% +0.1 0.50 ± 3% perf-profile.children.cycles-pp.__clear_user
0.36 ± 2% +0.1 0.47 perf-profile.children.cycles-pp.__x64_sys_munmap
0.35 +0.1 0.46 perf-profile.children.cycles-pp.strnlen_user
0.23 ± 5% +0.1 0.34 ± 9% perf-profile.children.cycles-pp.worker_thread
0.48 ± 2% +0.1 0.59 ± 2% perf-profile.children.cycles-pp.syscall_return_via_sysret
0.40 ± 2% +0.1 0.51 ± 3% perf-profile.children.cycles-pp.dput
0.44 +0.1 0.55 ± 4% perf-profile.children.cycles-pp.lookup_fast
1.14 +0.1 1.25 ± 4% perf-profile.children.cycles-pp.__softirqentry_text_start
0.42 ± 2% +0.1 0.54 ± 2% perf-profile.children.cycles-pp.free_pages_and_swap_cache
1.08 ± 2% +0.1 1.20 ± 2% perf-profile.children.cycles-pp.__sched_text_start
0.51 +0.1 0.63 ± 2% perf-profile.children.cycles-pp.ksys_read
0.40 +0.1 0.52 ± 4% perf-profile.children.cycles-pp.smpboot_thread_fn
0.46 +0.1 0.58 ± 3% perf-profile.children.cycles-pp.write
0.49 +0.1 0.62 ± 3% perf-profile.children.cycles-pp.read
0.42 ± 2% +0.1 0.55 ± 3% perf-profile.children.cycles-pp.__vfs_write
0.44 +0.1 0.57 ± 2% perf-profile.children.cycles-pp.task_work_run
0.99 ± 2% +0.1 1.12 ± 4% perf-profile.children.cycles-pp.rcu_process_callbacks
0.55 ± 2% +0.1 0.67 ± 2% perf-profile.children.cycles-pp.kmem_cache_free
0.54 +0.1 0.66 ± 3% perf-profile.children.cycles-pp.__vfs_read
0.62 +0.1 0.75 ± 3% perf-profile.children.cycles-pp.select_task_rq_fair
0.46 +0.1 0.59 ± 4% perf-profile.children.cycles-pp.vfs_write
0.45 ± 2% +0.1 0.58 ± 2% perf-profile.children.cycles-pp.do_faccessat
0.47 +0.1 0.60 ± 3% perf-profile.children.cycles-pp.ksys_write
0.15 ± 6% +0.1 0.28 ± 7% perf-profile.children.cycles-pp.queued_write_lock_slowpath
0.14 ± 8% +0.1 0.27 ± 9% perf-profile.children.cycles-pp.queued_read_lock_slowpath
0.51 ± 3% +0.1 0.65 perf-profile.children.cycles-pp.kmem_cache_alloc
0.58 +0.1 0.72 ± 2% perf-profile.children.cycles-pp.__entry_SYSCALL_64_trampoline
0.62 +0.1 0.77 perf-profile.children.cycles-pp.vfs_read
0.63 +0.2 0.78 ± 2% perf-profile.children.cycles-pp.perf_event_mmap
0.62 +0.2 0.77 ± 2% perf-profile.children.cycles-pp.alloc_pages_vma
0.35 ± 4% +0.2 0.51 ± 4% perf-profile.children.cycles-pp.lru_add_drain
0.35 ± 3% +0.2 0.52 ± 5% perf-profile.children.cycles-pp.lru_add_drain_cpu
0.60 ± 2% +0.2 0.78 ± 2% perf-profile.children.cycles-pp.path_lookupat
0.62 ± 2% +0.2 0.80 ± 2% perf-profile.children.cycles-pp.filename_lookup
0.43 ± 2% +0.2 0.61 ± 3% perf-profile.children.cycles-pp.pagevec_lru_move_fn
0.64 +0.2 0.83 ± 3% perf-profile.children.cycles-pp.walk_component
0.75 ± 2% +0.2 0.95 ± 2% perf-profile.children.cycles-pp._IO_default_xsputn
0.86 +0.2 1.07 ± 3% perf-profile.children.cycles-pp.clear_page_erms
0.84 +0.2 1.04 ± 3% perf-profile.children.cycles-pp.copy_strings
0.41 ± 4% +0.2 0.62 ± 5% perf-profile.children.cycles-pp.do_wait
0.43 ± 5% +0.2 0.64 ± 5% perf-profile.children.cycles-pp.__do_sys_wait4
0.42 ± 4% +0.2 0.64 ± 5% perf-profile.children.cycles-pp.kernel_wait4
0.95 ± 2% +0.2 1.16 perf-profile.children.cycles-pp._dl_addr
0.95 +0.2 1.17 ± 2% perf-profile.children.cycles-pp.wp_page_copy
0.80 +0.2 1.02 ± 2% perf-profile.children.cycles-pp.link_path_walk
0.93 +0.2 1.16 perf-profile.children.cycles-pp.vma_interval_tree_insert
0.95 +0.2 1.18 perf-profile.children.cycles-pp.alloc_set_pte
0.51 ± 3% +0.2 0.74 ± 3% perf-profile.children.cycles-pp.wait4
0.63 ± 2% +0.2 0.87 ± 4% perf-profile.children.cycles-pp.kthread
1.04 +0.2 1.27 ± 2% perf-profile.children.cycles-pp.do_wp_page
1.29 +0.2 1.54 ± 3% perf-profile.children.cycles-pp.wake_up_q
2.06 +0.3 2.33 ± 2% perf-profile.children.cycles-pp.up_write
9.02 +0.3 9.29 perf-profile.children.cycles-pp.handle_mm_fault
1.53 +0.3 1.79 ± 2% perf-profile.children.cycles-pp.rwsem_wake
0.81 +0.3 1.08 ± 3% perf-profile.children.cycles-pp.ret_from_fork
1.53 +0.3 1.81 ± 2% perf-profile.children.cycles-pp.call_rwsem_wake
1.10 +0.3 1.38 ± 2% perf-profile.children.cycles-pp.__strcoll_l
0.81 ± 2% +0.3 1.08 ± 4% perf-profile.children.cycles-pp._raw_spin_lock
0.93 +0.3 1.23 perf-profile.children.cycles-pp.rwsem_spin_on_owner
1.24 +0.3 1.54 ± 3% perf-profile.children.cycles-pp.get_page_from_freelist
1.35 +0.3 1.69 ± 3% perf-profile.children.cycles-pp.__alloc_pages_nodemask
1.68 +0.4 2.04 ± 3% perf-profile.children.cycles-pp.try_to_wake_up
1.22 +0.4 1.58 perf-profile.children.cycles-pp.mmap64
14.16 +0.4 14.53 perf-profile.children.cycles-pp.load_elf_binary
1.05 ± 2% +0.4 1.42 ± 4% perf-profile.children.cycles-pp.page_remove_rmap
14.19 +0.4 14.56 perf-profile.children.cycles-pp.search_binary_handler
2.09 ± 2% +0.4 2.52 perf-profile.children.cycles-pp.vfprintf
9.80 +0.5 10.26 perf-profile.children.cycles-pp.__do_page_fault
9.82 +0.5 10.29 perf-profile.children.cycles-pp.do_page_fault
1.27 ± 2% +0.5 1.79 ± 5% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
1.96 +0.5 2.49 ± 2% perf-profile.children.cycles-pp.path_openat
2.01 +0.5 2.54 perf-profile.children.cycles-pp.do_sys_open
1.98 +0.5 2.52 ± 2% perf-profile.children.cycles-pp.do_filp_open
0.86 ± 2% +0.5 1.40 ± 2% perf-profile.children.cycles-pp.vm_munmap
10.22 +0.6 10.78 perf-profile.children.cycles-pp.page_fault
2.73 +0.6 3.29 perf-profile.children.cycles-pp.__vsnprintf_chk
2.76 +0.6 3.39 ± 2% perf-profile.children.cycles-pp.setlocale
1.77 ± 2% +0.7 2.50 ± 4% perf-profile.children.cycles-pp.release_pages
1.47 +0.8 2.27 ± 6% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
2.12 ± 2% +0.8 2.93 ± 4% perf-profile.children.cycles-pp.tlb_flush_mmu_free
15.71 +0.8 16.53 perf-profile.children.cycles-pp.execve
16.01 +0.8 16.84 perf-profile.children.cycles-pp.__do_execve_file
16.02 +0.8 16.86 perf-profile.children.cycles-pp.__x64_sys_execve
2.24 ± 2% +0.8 3.09 ± 4% perf-profile.children.cycles-pp.arch_tlb_finish_mmu
2.25 ± 2% +0.8 3.10 ± 4% perf-profile.children.cycles-pp.tlb_finish_mmu
2.77 +0.9 3.62 ± 3% perf-profile.children.cycles-pp.unmap_page_range
2.89 +0.9 3.77 ± 3% perf-profile.children.cycles-pp.unmap_vmas
1.18 ± 3% +0.9 2.09 ± 2% perf-profile.children.cycles-pp.elf_map
41.69 -8.5 33.23 ± 3% perf-profile.self.cycles-pp.osq_lock
9.44 -1.3 8.12 ± 4% perf-profile.self.cycles-pp.intel_idle
4.14 ± 2% -0.6 3.53 perf-profile.self.cycles-pp.filemap_map_pages
0.54 ± 2% -0.2 0.37 perf-profile.self.cycles-pp.rwsem_down_write_failed
0.28 ± 3% -0.1 0.15 ± 2% perf-profile.self.cycles-pp.radix_tree_next_chunk
0.13 ± 6% -0.0 0.11 ± 4% perf-profile.self.cycles-pp.find_busiest_group
0.05 +0.0 0.06 perf-profile.self.cycles-pp.anon_vma_fork
0.05 +0.0 0.06 perf-profile.self.cycles-pp.__call_rcu
0.05 +0.0 0.06 perf-profile.self.cycles-pp.__unlock_page_memcg
0.06 +0.0 0.07 perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
0.06 +0.0 0.07 perf-profile.self.cycles-pp.do_wp_page
0.06 +0.0 0.07 perf-profile.self.cycles-pp.kfree
0.06 +0.0 0.07 perf-profile.self.cycles-pp.__switch_to
0.07 ± 5% +0.0 0.08 ± 5% perf-profile.self.cycles-pp.find_next_bit
0.07 +0.0 0.08 ± 5% perf-profile.self.cycles-pp.__tsearch
0.05 ± 8% +0.0 0.07 ± 7% perf-profile.self.cycles-pp.path_openat
0.05 +0.0 0.06 ± 6% perf-profile.self.cycles-pp.d_alloc_parallel
0.05 +0.0 0.06 ± 6% perf-profile.self.cycles-pp.unlock_page_memcg
0.05 +0.0 0.06 ± 6% perf-profile.self.cycles-pp.up_read
0.06 +0.0 0.07 ± 5% perf-profile.self.cycles-pp._raw_spin_unlock_irqrestore
0.06 ± 6% +0.0 0.08 ± 6% perf-profile.self.cycles-pp.__perf_sw_event
0.06 ± 7% +0.0 0.07 perf-profile.self.cycles-pp.strncpy_from_user
0.09 ± 4% +0.0 0.10 ± 4% perf-profile.self.cycles-pp.update_load_avg
0.07 +0.0 0.09 ± 5% perf-profile.self.cycles-pp.find_get_entry
0.05 +0.0 0.07 ± 7% perf-profile.self.cycles-pp.__d_alloc
0.05 ± 9% +0.0 0.07 perf-profile.self.cycles-pp.create_elf_tables
0.17 ± 3% +0.0 0.18 ± 2% perf-profile.self.cycles-pp.osq_unlock
0.07 ± 11% +0.0 0.09 perf-profile.self.cycles-pp.__alloc_pages_nodemask
0.07 ± 5% +0.0 0.09 perf-profile.self.cycles-pp.getenv
0.07 ± 11% +0.0 0.09 perf-profile.self.cycles-pp._vm_normal_page
0.11 ± 7% +0.0 0.12 ± 4% perf-profile.self.cycles-pp.__sched_text_start
0.09 ± 4% +0.0 0.11 ± 6% perf-profile.self.cycles-pp._cond_resched
0.09 ± 4% +0.0 0.11 perf-profile.self.cycles-pp.__list_add_valid
0.08 ± 5% +0.0 0.10 perf-profile.self.cycles-pp.perf_iterate_sb
0.07 ± 7% +0.0 0.08 ± 5% perf-profile.self.cycles-pp.__vma_adjust
0.08 ± 8% +0.0 0.10 ± 4% perf-profile.self.cycles-pp.sync_regs
0.07 ± 7% +0.0 0.08 ± 5% perf-profile.self.cycles-pp.load_new_mm_cr3
0.06 ± 6% +0.0 0.08 perf-profile.self.cycles-pp._IO_setb
0.05 +0.0 0.07 ± 6% perf-profile.self.cycles-pp.__legitimize_mnt
0.08 ± 5% +0.0 0.10 ± 4% perf-profile.self.cycles-pp.__libc_fork
0.08 ± 5% +0.0 0.10 ± 4% perf-profile.self.cycles-pp.change_protection_range
0.06 +0.0 0.08 ± 5% perf-profile.self.cycles-pp.memchr
0.08 ± 6% +0.0 0.10 ± 5% perf-profile.self.cycles-pp.anon_vma_interval_tree_remove
0.07 ± 12% +0.0 0.09 ± 9% perf-profile.self.cycles-pp.try_charge
0.08 ± 5% +0.0 0.10 ± 4% perf-profile.self.cycles-pp.inode_permission
0.07 +0.0 0.09 perf-profile.self.cycles-pp.avc_has_perm
0.08 +0.0 0.10 ± 7% perf-profile.self.cycles-pp.free_pgd_range
0.07 ± 7% +0.0 0.09 ± 5% perf-profile.self.cycles-pp.prepend_name
0.05 ± 8% +0.0 0.07 ± 5% perf-profile.self.cycles-pp.entry_SYSCALL_64_stage2
0.08 ± 10% +0.0 0.10 ± 4% perf-profile.self.cycles-pp.available_idle_cpu
0.10 ± 8% +0.0 0.12 ± 4% perf-profile.self.cycles-pp.unlink_anon_vmas
0.10 ± 7% +0.0 0.12 ± 6% perf-profile.self.cycles-pp.copy_user_generic_unrolled
0.08 ± 8% +0.0 0.10 ± 4% perf-profile.self.cycles-pp.memset_erms
0.07 ± 5% +0.0 0.10 ± 5% perf-profile.self.cycles-pp.rcu_all_qs
0.08 ± 6% +0.0 0.10 ± 4% perf-profile.self.cycles-pp.lockref_get_not_dead
0.06 ± 9% +0.0 0.08 ± 5% perf-profile.self.cycles-pp.memcpy
0.08 ± 5% +0.0 0.10 perf-profile.self.cycles-pp.fput
0.12 ± 6% +0.0 0.15 ± 4% perf-profile.self.cycles-pp.do_syscall_64
0.07 ± 5% +0.0 0.10 ± 4% perf-profile.self.cycles-pp.__snprintf_chk
0.04 ± 57% +0.0 0.06 ± 6% perf-profile.self.cycles-pp.vm_area_dup
0.11 ± 4% +0.0 0.13 ± 3% perf-profile.self.cycles-pp.__get_user_8
0.08 ± 5% +0.0 0.11 ± 7% perf-profile.self.cycles-pp.file_free_rcu
0.08 +0.0 0.11 ± 4% perf-profile.self.cycles-pp.__vma_link_rb
0.13 ± 5% +0.0 0.16 ± 5% perf-profile.self.cycles-pp.copy_page_range
0.11 ± 3% +0.0 0.14 ± 3% perf-profile.self.cycles-pp.perf_event_mmap
0.18 ± 2% +0.0 0.21 ± 2% perf-profile.self.cycles-pp.find_vma
0.14 +0.0 0.17 ± 2% perf-profile.self.cycles-pp.copy_process
0.10 ± 4% +0.0 0.13 perf-profile.self.cycles-pp.link_path_walk
0.08 +0.0 0.11 ± 4% perf-profile.self.cycles-pp.__fput
0.08 ± 8% +0.0 0.11 ± 4% perf-profile.self.cycles-pp.free_pcppages_bulk
0.04 ± 57% +0.0 0.07 ± 7% perf-profile.self.cycles-pp.do_mmap
0.04 ± 57% +0.0 0.07 ± 7% perf-profile.self.cycles-pp.kmem_cache_alloc_trace
0.10 ± 4% +0.0 0.13 ± 3% perf-profile.self.cycles-pp.vma_interval_tree_augment_rotate
0.15 ± 2% +0.0 0.18 ± 2% perf-profile.self.cycles-pp.vma_compute_subtree_gap
0.14 ± 3% +0.0 0.17 ± 4% perf-profile.self.cycles-pp.avc_has_perm_noaudit
0.12 +0.0 0.15 ± 2% perf-profile.self.cycles-pp.lock_page_memcg
0.11 ± 4% +0.0 0.15 ± 2% perf-profile.self.cycles-pp._init
0.10 ± 5% +0.0 0.13 ± 3% perf-profile.self.cycles-pp.unmapped_area_topdown
0.11 +0.0 0.14 ± 3% perf-profile.self.cycles-pp.lockref_put_or_lock
0.03 ±100% +0.0 0.06 perf-profile.self.cycles-pp.unmap_vmas
0.18 +0.0 0.21 ± 4% perf-profile.self.cycles-pp.handle_mm_fault
0.09 +0.0 0.12 ± 4% perf-profile.self.cycles-pp.wake_q_add
0.15 ± 5% +0.0 0.18 ± 2% perf-profile.self.cycles-pp.kmem_cache_free
0.08 ± 5% +0.0 0.12 ± 5% perf-profile.self.cycles-pp.queued_write_lock_slowpath
0.18 ± 3% +0.0 0.22 ± 3% perf-profile.self.cycles-pp._IO_file_xsputn
0.17 ± 2% +0.0 0.21 ± 3% perf-profile.self.cycles-pp.strchrnul
0.16 ± 5% +0.0 0.20 ± 4% perf-profile.self.cycles-pp.memcpy_erms
0.12 ± 3% +0.0 0.16 ± 4% perf-profile.self.cycles-pp.__rb_erase_color
0.12 ± 7% +0.0 0.15 ± 3% perf-profile.self.cycles-pp.__pagevec_lru_add_fn
0.17 ± 2% +0.0 0.21 ± 2% perf-profile.self.cycles-pp.vmacache_find
0.08 +0.0 0.12 ± 11% perf-profile.self.cycles-pp.rcu_cblist_dequeue
0.16 +0.0 0.20 perf-profile.self.cycles-pp.mmap_region
0.14 +0.0 0.18 ± 2% perf-profile.self.cycles-pp.mark_page_accessed
0.07 ± 6% +0.0 0.11 ± 11% perf-profile.self.cycles-pp.queued_read_lock_slowpath
0.21 ± 2% +0.0 0.26 ± 4% perf-profile.self.cycles-pp.get_page_from_freelist
0.17 ± 2% +0.0 0.22 perf-profile.self.cycles-pp.malloc
0.16 ± 2% +0.0 0.21 ± 2% perf-profile.self.cycles-pp.strlen
0.01 ±173% +0.0 0.06 ± 7% perf-profile.self.cycles-pp.down_read_trylock
0.01 ±173% +0.0 0.06 ± 7% perf-profile.self.cycles-pp.__errno_location
0.17 ± 2% +0.0 0.22 ± 3% perf-profile.self.cycles-pp.selinux_inode_permission
0.26 +0.0 0.30 ± 5% perf-profile.self.cycles-pp.__rb_insert_augmented
0.18 ± 2% +0.0 0.23 ± 5% perf-profile.self.cycles-pp.__might_sleep
0.00 +0.1 0.05 perf-profile.self.cycles-pp.load_elf_binary
0.00 +0.1 0.05 perf-profile.self.cycles-pp.page_fault
0.00 +0.1 0.05 perf-profile.self.cycles-pp.pagevec_lru_move_fn
0.00 +0.1 0.05 perf-profile.self.cycles-pp.flush_tlb_mm_range
0.00 +0.1 0.05 perf-profile.self.cycles-pp.__inode_security_revalidate
0.00 +0.1 0.05 perf-profile.self.cycles-pp.dup_fd
0.00 +0.1 0.05 perf-profile.self.cycles-pp.acct_collect
0.00 +0.1 0.05 perf-profile.self.cycles-pp.vma_merge
0.00 +0.1 0.05 perf-profile.self.cycles-pp.rcu_segcblist_enqueue
0.24 ± 2% +0.1 0.30 ± 2% perf-profile.self.cycles-pp.___perf_sw_event
0.00 +0.1 0.05 ± 8% perf-profile.self.cycles-pp.mem_cgroup_throttle_swaprate
0.00 +0.1 0.05 ± 8% perf-profile.self.cycles-pp.anon_vma_interval_tree_insert
0.44 ± 2% +0.1 0.49 ± 4% perf-profile.self.cycles-pp.down_write
0.00 +0.1 0.05 ± 9% perf-profile.self.cycles-pp.__might_fault
0.00 +0.1 0.05 ± 9% perf-profile.self.cycles-pp.__mod_node_page_state
0.22 +0.1 0.28 ± 3% perf-profile.self.cycles-pp.__vsnprintf_chk
0.15 ± 2% +0.1 0.21 ± 5% perf-profile.self.cycles-pp.__alloc_file
0.30 +0.1 0.36 perf-profile.self.cycles-pp.native_flush_tlb_one_user
0.18 ± 4% +0.1 0.24 ± 3% perf-profile.self.cycles-pp.__do_page_fault
0.31 ± 2% +0.1 0.37 perf-profile.self.cycles-pp._IO_padn
0.33 ± 3% +0.1 0.40 perf-profile.self.cycles-pp.unlock_page
0.21 ± 3% +0.1 0.28 perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.20 ± 2% +0.1 0.27 ± 4% perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
0.36 +0.1 0.43 ± 2% perf-profile.self.cycles-pp.page_add_file_rmap
0.31 +0.1 0.38 perf-profile.self.cycles-pp.vma_interval_tree_remove
0.27 ± 3% +0.1 0.34 ± 3% perf-profile.self.cycles-pp.__d_lookup_rcu
0.28 +0.1 0.35 ± 3% perf-profile.self.cycles-pp.__list_del_entry_valid
0.41 +0.1 0.49 ± 3% perf-profile.self.cycles-pp.select_task_rq_fair
0.34 +0.1 0.42 perf-profile.self.cycles-pp.swapgs_restore_regs_and_return_to_usermode
0.35 ± 3% +0.1 0.43 ± 3% perf-profile.self.cycles-pp.___might_sleep
0.29 ± 3% +0.1 0.38 perf-profile.self.cycles-pp.kmem_cache_alloc
0.36 ± 3% +0.1 0.45 perf-profile.self.cycles-pp._IO_fwrite
0.44 +0.1 0.53 perf-profile.self.cycles-pp.__handle_mm_fault
0.39 ± 3% +0.1 0.48 ± 2% perf-profile.self.cycles-pp.__slab_free
0.32 ± 2% +0.1 0.41 perf-profile.self.cycles-pp.alloc_set_pte
0.38 +0.1 0.47 perf-profile.self.cycles-pp.copy_page
0.52 +0.1 0.62 ± 3% perf-profile.self.cycles-pp.native_irq_return_iret
0.34 ± 2% +0.1 0.45 ± 2% perf-profile.self.cycles-pp.strnlen_user
0.41 ± 2% +0.1 0.52 ± 2% perf-profile.self.cycles-pp.free_pages_and_swap_cache
0.47 ± 2% +0.1 0.59 ± 2% perf-profile.self.cycles-pp.syscall_return_via_sysret
0.51 +0.1 0.64 ± 4% perf-profile.self.cycles-pp._raw_spin_lock
0.53 +0.1 0.66 perf-profile.self.cycles-pp.__entry_SYSCALL_64_trampoline
0.65 ± 2% +0.2 0.82 perf-profile.self.cycles-pp._IO_default_xsputn
0.84 +0.2 1.05 ± 3% perf-profile.self.cycles-pp.clear_page_erms
0.91 ± 2% +0.2 1.13 perf-profile.self.cycles-pp._dl_addr
0.91 +0.2 1.14 perf-profile.self.cycles-pp.vma_interval_tree_insert
1.08 +0.3 1.35 ± 2% perf-profile.self.cycles-pp.__strcoll_l
0.91 +0.3 1.20 perf-profile.self.cycles-pp.rwsem_spin_on_owner
0.92 ± 2% +0.3 1.26 ± 4% perf-profile.self.cycles-pp.page_remove_rmap
1.33 +0.4 1.72 ± 2% perf-profile.self.cycles-pp.unmap_page_range
1.75 ± 2% +0.4 2.18 perf-profile.self.cycles-pp.vfprintf
1.23 ± 2% +0.5 1.73 ± 5% perf-profile.self.cycles-pp.release_pages
1.47 +0.8 2.26 ± 6% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[sched/fair] 2c83362734: pft.faults_per_sec_per_cpu -41.4% regression
by kernel test robot
Greeting,
FYI, we noticed a -41.4% regression of pft.faults_per_sec_per_cpu due to commit:
commit: 2c83362734dad8e48ccc0710b5cd2436a0323893 ("sched/fair: Consider SD_NUMA when selecting the most idle group to schedule on")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: pft
on test machine: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 64G memory
with following parameters:
runtime: 300s
nr_task: 50%
cpufreq_governor: performance
ucode: 0xb00002e
test-description: Pft is the page fault test microbenchmark.
test-url: https://github.com/gormanm/pft
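For readers unfamiliar with the benchmark: pft's workers map anonymous memory and touch it page by page, so the score is dominated by how fast the kernel can service anonymous page faults. Below is a minimal single-threaded sketch of that kind of workload -- this is not the pft source (see the test-url above), and the 1 GiB buffer size is an arbitrary assumption:

/*
 * Illustrative page-fault microbenchmark sketch, not the actual pft
 * code. Map anonymous memory, write one byte per page so every page
 * must be faulted in, and report pages touched per second. Note that
 * with THP enabled a single fault populates a 2 MiB huge page, which
 * is why clear_huge_page()/clear_page_erms() dominate the profiles
 * further down in this report.
 */
#include <stdio.h>
#include <sys/mman.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
	size_t sz = 1UL << 30;			/* 1 GiB, arbitrary */
	long page = sysconf(_SC_PAGESIZE);
	struct timespec t0, t1;
	size_t off;
	char *buf;

	buf = mmap(NULL, sz, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (off = 0; off < sz; off += page)
		buf[off] = 1;			/* first write faults the page in */
	clock_gettime(CLOCK_MONOTONIC, &t1);

	double secs = (t1.tv_sec - t0.tv_sec) +
		      (t1.tv_nsec - t0.tv_nsec) / 1e9;
	printf("%.0f pages touched/sec\n", (double)(sz / page) / secs);
	munmap(buf, sz);
	return 0;
}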
In addition, the commit also has a significant impact on the following tests:
+------------------+--------------------------------------------------------------------------+
| testcase: change | stream: |
| test machine | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory |
| test parameters | array_size=10000000 |
| | cpufreq_governor=performance |
| | nr_threads=25% |
| | omp=true |
| | ucode=0xb00002e |
+------------------+--------------------------------------------------------------------------+
| testcase: change | reaim: reaim.jobs_per_min 1.3% improvement |
| test machine | 72 threads Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz with 256G memory |
| test parameters | cpufreq_governor=performance |
| | nr_job=3000 |
| | nr_task=100% |
| | runtime=300s |
| | test=custom |
| | ucode=0x3d |
+------------------+--------------------------------------------------------------------------+
| testcase: change | stream: stream.add_bandwidth_MBps -32.0% regression |
| test machine | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory |
| test parameters | array_size=10000000 |
| | cpufreq_governor=performance |
| | nr_threads=50% |
| | omp=true |
| | ucode=0xb00002e |
+------------------+--------------------------------------------------------------------------+
| testcase: change | plzip: |
| test machine | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory |
| test parameters | cpufreq_governor=performance |
| | nr_threads=100% |
| | ucode=0xb00002e |
+------------------+--------------------------------------------------------------------------+
| testcase: change | reaim: reaim.jobs_per_min -11.9% regression |
| test machine | 192 threads Intel(R) Xeon(R) CPU E7-8890 v4 @ 2.20GHz with 512G memory |
| test parameters | cpufreq_governor=performance |
| | nr_task=100% |
| | runtime=300s |
| | test=all_utime |
| | ucode=0xb00002e |
+------------------+--------------------------------------------------------------------------+
| testcase: change | hackbench: hackbench.throughput -7.3% regression |
| test machine | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 64G memory |
| test parameters | cpufreq_governor=performance |
| | ipc=pipe |
| | mode=process |
| | nr_threads=1600% |
| | ucode=0xb00002e |
+------------------+--------------------------------------------------------------------------+
| testcase: change | reaim: reaim.std_dev_percent 11.4% undefined |
| test machine | 104 threads Intel(R) Xeon(R) Platinum 8170 CPU @ 2.10GHz with 64G memory |
| test parameters | cpufreq_governor=performance |
| | nr_task=100% |
| | runtime=300s |
| | test=custom |
| | ucode=0x200004d |
+------------------+--------------------------------------------------------------------------+
| testcase: change | reaim: boot-time.boot 95.3% regression |
| test machine | 104 threads Intel(R) Xeon(R) Platinum 8170 CPU @ 2.10GHz with 64G memory |
| test parameters | cpufreq_governor=performance |
| | nr_task=100% |
| | runtime=300s |
| | test=alltests |
| | ucode=0x200004d |
+------------------+--------------------------------------------------------------------------+
| testcase: change | pft: pft.faults_per_sec_per_cpu -42.7% regression |
| test machine | 104 threads Intel(R) Xeon(R) Platinum 8170 CPU @ 2.10GHz with 64G memory |
| test parameters | cpufreq_governor=performance |
| | nr_task=50% |
| | runtime=300s |
| | ucode=0x200004d |
+------------------+--------------------------------------------------------------------------+
| testcase: change | stream: stream.add_bandwidth_MBps -28.8% regression |
| test machine | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory |
| test parameters | array_size=50000000 |
| | cpufreq_governor=performance |
| | nr_threads=50% |
| | omp=true |
+------------------+--------------------------------------------------------------------------+
| testcase: change | stream: stream.add_bandwidth_MBps -30.6% regression |
| test machine | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory |
| test parameters | array_size=10000000 |
| | cpufreq_governor=performance |
| | nr_threads=50% |
| | omp=true |
+------------------+--------------------------------------------------------------------------+
| testcase: change | pft: pft.faults_per_sec_per_cpu -42.5% regression |
| test machine | 104 threads Intel(R) Xeon(R) Platinum 8170 CPU @ 2.10GHz with 64G memory |
| test parameters | cpufreq_governor=performance |
| | nr_task=50% |
| | runtime=300s |
+------------------+--------------------------------------------------------------------------+
| testcase: change | reaim: reaim.child_systime -1.4% undefined |
| test machine | 144 threads Intel(R) Xeon(R) CPU E7-8890 v3 @ 2.50GHz with 512G memory |
| test parameters | cpufreq_governor=performance |
| | iterations=30 |
| | nr_task=1600% |
| | test=compute |
+------------------+--------------------------------------------------------------------------+
| testcase: change | stress-ng: stress-ng.fifo.ops_per_sec 76.2% improvement |
| test machine | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory |
| test parameters | class=pipe |
| | cpufreq_governor=performance |
| | nr_threads=100% |
| | testtime=1s |
+------------------+--------------------------------------------------------------------------+
| testcase: change | stress-ng: stress-ng.tsearch.ops_per_sec -17.1% regression |
| test machine | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory |
| test parameters | class=cpu |
| | cpufreq_governor=performance |
| | nr_threads=100% |
| | testtime=1s |
+------------------+--------------------------------------------------------------------------+
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_task/rootfs/runtime/tbox_group/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.2/50%/debian-x86_64-2018-04-03.cgz/300s/lkp-bdw-ep3/pft/0xb00002e
commit:
24d0c1d6e6 ("sched/fair: Do not migrate due to a sync wakeup on exit")
2c83362734 ("sched/fair: Consider SD_NUMA when selecting the most idle group to schedule on")
24d0c1d6e65f635b 2c83362734dad8e48ccc0710b5c
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
2:4 -50% :4 dmesg.WARNING:stack_going_in_the_wrong_direction?ip=_cond_resched/0x
:4 25% 1:4 dmesg.WARNING:stack_going_in_the_wrong_direction?ip=schedule_tail/0x
%stddev %change %stddev
\ | \
250875 -41.4% 146900 pft.faults_per_sec_per_cpu
7127548 -32.9% 4779214 pft.time.minor_page_faults
3533 +14.5% 4047 pft.time.percent_of_cpu_this_job_got
10444 +13.3% 11828 pft.time.system_time
189.79 +84.8% 350.70 ± 5% pft.time.user_time
105380 ± 2% +31.5% 138536 ± 5% pft.time.voluntary_context_switches
6180331 ± 9% -60.3% 2451607 ± 34% numa-numastat.node1.local_node
6187616 ± 9% -60.3% 2455998 ± 34% numa-numastat.node1.numa_hit
58.50 -9.8% 52.75 vmstat.cpu.id
35.75 +14.0% 40.75 vmstat.procs.r
59.07 -6.1 52.99 mpstat.cpu.idle%
39.94 +5.5 45.45 mpstat.cpu.sys%
0.93 +0.6 1.54 ± 5% mpstat.cpu.usr%
147799 ± 6% +13.9% 168315 ± 5% cpuidle.C3.usage
1.532e+10 ± 3% -11.9% 1.35e+10 cpuidle.C6.time
15901515 ± 3% -12.4% 13931328 cpuidle.C6.usage
1.062e+08 ± 9% +123.1% 2.369e+08 ± 14% cpuidle.POLL.time
2491 ± 8% -28.0% 1793 ± 10% slabinfo.eventpoll_epi.active_objs
2491 ± 8% -28.0% 1793 ± 10% slabinfo.eventpoll_epi.num_objs
4332 ± 7% -27.9% 3125 ± 10% slabinfo.eventpoll_pwq.active_objs
4332 ± 7% -27.9% 3125 ± 10% slabinfo.eventpoll_pwq.num_objs
4601 ± 2% -16.0% 3866 ± 2% slabinfo.mm_struct.active_objs
4662 ± 2% -15.7% 3930 ± 2% slabinfo.mm_struct.num_objs
5396205 ± 2% +18.2% 6378310 meminfo.Active
5333263 ± 2% +18.4% 6316138 meminfo.Active(anon)
2054416 ± 5% +22.7% 2521118 ± 16% meminfo.AnonHugePages
5180518 ± 2% +18.7% 6147856 meminfo.AnonPages
4.652e+08 ± 2% +16.4% 5.416e+08 meminfo.Committed_AS
6863577 +13.8% 7807920 meminfo.Memused
9381 ± 2% +11.7% 10480 ± 7% meminfo.PageTables
1203 +15.0% 1383 turbostat.Avg_MHz
43.02 +6.3 49.30 turbostat.Busy%
147383 ± 6% +14.1% 168181 ± 5% turbostat.C3
15900108 ± 3% -12.4% 13931193 turbostat.C6
56.94 -6.2 50.74 turbostat.C6%
33.20 -47.1% 17.57 ± 4% turbostat.CPU%c1
23.69 ± 4% +39.5% 33.05 ± 2% turbostat.CPU%c6
186.47 -9.2% 169.38 turbostat.PkgWatt
18.43 -19.6% 14.82 turbostat.RAMWatt
715561 ± 15% +70.3% 1218740 ± 16% numa-vmstat.node0.nr_active_anon
701119 ± 15% +69.9% 1190948 ± 15% numa-vmstat.node0.nr_anon_pages
544.25 ± 20% +80.0% 979.50 ± 20% numa-vmstat.node0.nr_anon_transparent_hugepages
1240 ± 15% +37.7% 1709 ± 16% numa-vmstat.node0.nr_page_table_pages
715526 ± 15% +70.3% 1218728 ± 16% numa-vmstat.node0.nr_zone_active_anon
636795 ± 10% -44.9% 350831 ± 53% numa-vmstat.node1.nr_active_anon
614709 ± 11% -45.1% 337557 ± 54% numa-vmstat.node1.nr_anon_pages
636843 ± 11% -44.9% 350830 ± 53% numa-vmstat.node1.nr_zone_active_anon
3520461 ± 10% -41.3% 2066299 ± 34% numa-vmstat.node1.numa_hit
3380522 ± 10% -42.9% 1929309 ± 37% numa-vmstat.node1.numa_local
2860563 ± 14% +73.7% 4970140 ± 14% numa-meminfo.node0.Active
2829095 ± 14% +74.6% 4938791 ± 14% numa-meminfo.node0.Active(anon)
1029787 ± 15% +90.1% 1957511 ± 16% numa-meminfo.node0.AnonHugePages
2782901 ± 14% +73.2% 4820952 ± 13% numa-meminfo.node0.AnonPages
3598217 ± 11% +59.0% 5721834 ± 12% numa-meminfo.node0.MemUsed
4779 ± 14% +39.3% 6658 ± 13% numa-meminfo.node0.PageTables
2711469 ± 11% -48.3% 1401167 ± 53% numa-meminfo.node1.Active
2679996 ± 11% -48.9% 1370343 ± 55% numa-meminfo.node1.Active(anon)
1104830 ± 11% -52.3% 527490 ± 43% numa-meminfo.node1.AnonHugePages
2592998 ± 13% -50.4% 1285649 ± 54% numa-meminfo.node1.AnonPages
3441628 ± 9% -39.6% 2078804 ± 37% numa-meminfo.node1.MemUsed
1348502 ± 3% +15.9% 1562690 ± 2% proc-vmstat.nr_active_anon
1313538 ± 3% +16.0% 1523248 ± 2% proc-vmstat.nr_anon_pages
993.00 ± 7% +20.4% 1195 ± 4% proc-vmstat.nr_anon_transparent_hugepages
1484488 -1.4% 1464054 proc-vmstat.nr_dirty_background_threshold
2972608 -1.4% 2931689 proc-vmstat.nr_dirty_threshold
14732846 -1.4% 14528187 proc-vmstat.nr_free_pages
2334 ± 4% +11.9% 2611 ± 3% proc-vmstat.nr_page_table_pages
39493 -2.2% 38606 proc-vmstat.nr_slab_unreclaimable
1348499 ± 3% +15.9% 1562687 ± 2% proc-vmstat.nr_zone_active_anon
11707 ± 19% -79.6% 2390 ±105% proc-vmstat.numa_hint_faults
5736 ± 68% -68.8% 1789 ±122% proc-vmstat.numa_hint_faults_local
12846700 -31.1% 8854558 proc-vmstat.numa_hit
834.00 ± 15% -55.0% 375.50 ± 29% proc-vmstat.numa_huge_pte_updates
12829442 -31.1% 8837365 proc-vmstat.numa_local
29698 ± 16% -71.4% 8480 ± 72% proc-vmstat.numa_pages_migrated
464744 ± 17% -57.4% 197920 ± 31% proc-vmstat.numa_pte_updates
2.591e+09 -33.0% 1.736e+09 proc-vmstat.pgalloc_normal
7958915 -29.7% 5591702 proc-vmstat.pgfault
2.589e+09 -33.0% 1.735e+09 proc-vmstat.pgfree
29698 ± 16% -71.4% 8480 ± 72% proc-vmstat.pgmigrate_success
5041287 -33.0% 3378764 proc-vmstat.thp_deferred_split_page
5044208 -33.0% 3379878 proc-vmstat.thp_fault_alloc
495.50 ± 58% -64.8% 174.50 ± 4% interrupts.35:PCI-MSI.3145732-edge.eth0-TxRx-3
3476 ± 10% -14.1% 2986 interrupts.CPU1.CAL:Function_call_interrupts
40310 ± 8% -50.4% 20001 ± 37% interrupts.CPU1.RES:Rescheduling_interrupts
3458 ± 12% -13.5% 2992 ± 2% interrupts.CPU13.CAL:Function_call_interrupts
495.50 ± 58% -64.8% 174.50 ± 4% interrupts.CPU14.35:PCI-MSI.3145732-edge.eth0-TxRx-3
232.75 ± 37% +199.9% 698.00 ± 59% interrupts.CPU17.RES:Rescheduling_interrupts
372.75 ± 61% +226.2% 1215 ± 41% interrupts.CPU19.RES:Rescheduling_interrupts
3428 ± 10% -12.0% 3016 ± 3% interrupts.CPU2.CAL:Function_call_interrupts
16318 ± 14% -79.4% 3366 ± 18% interrupts.CPU2.RES:Rescheduling_interrupts
112.50 ± 39% +573.1% 757.25 ± 42% interrupts.CPU20.RES:Rescheduling_interrupts
103.75 ± 37% +1322.2% 1475 ± 34% interrupts.CPU21.RES:Rescheduling_interrupts
1046 ± 41% +3220.0% 34735 ± 46% interrupts.CPU22.RES:Rescheduling_interrupts
485.25 ± 36% +3116.5% 15608 ±125% interrupts.CPU23.RES:Rescheduling_interrupts
404.75 ± 48% -81.6% 74.50 ± 55% interrupts.CPU29.RES:Rescheduling_interrupts
12888 ± 17% -73.6% 3399 ± 57% interrupts.CPU3.RES:Rescheduling_interrupts
341.25 ± 47% -78.7% 72.75 ± 77% interrupts.CPU31.RES:Rescheduling_interrupts
290.75 ± 29% -65.9% 99.25 ± 99% interrupts.CPU34.RES:Rescheduling_interrupts
3520 ± 7% -28.8% 2507 ± 30% interrupts.CPU35.CAL:Function_call_interrupts
238.75 ± 50% -75.6% 58.25 ± 35% interrupts.CPU35.RES:Rescheduling_interrupts
285.50 ± 66% -87.3% 36.25 ± 70% interrupts.CPU36.RES:Rescheduling_interrupts
3520 ± 9% -22.8% 2716 ± 16% interrupts.CPU37.CAL:Function_call_interrupts
303.00 ± 55% -81.2% 57.00 ±101% interrupts.CPU38.RES:Rescheduling_interrupts
261.75 ± 68% -83.2% 44.00 ± 81% interrupts.CPU39.RES:Rescheduling_interrupts
9633 ± 7% -79.9% 1935 ± 41% interrupts.CPU4.RES:Rescheduling_interrupts
169.75 ± 47% -71.4% 48.50 ± 67% interrupts.CPU41.RES:Rescheduling_interrupts
230.25 ± 52% -73.4% 61.25 ± 92% interrupts.CPU42.RES:Rescheduling_interrupts
3426 ± 10% -11.4% 3036 interrupts.CPU6.CAL:Function_call_interrupts
4643 ± 20% -77.7% 1036 ± 13% interrupts.CPU6.RES:Rescheduling_interrupts
237.75 ± 49% -79.7% 48.25 ± 82% interrupts.CPU66.RES:Rescheduling_interrupts
231.00 ± 64% -89.0% 25.50 ± 28% interrupts.CPU69.RES:Rescheduling_interrupts
3432 ± 10% -15.0% 2918 ± 4% interrupts.CPU7.CAL:Function_call_interrupts
4134 ± 13% -64.2% 1481 ± 43% interrupts.CPU7.RES:Rescheduling_interrupts
96.75 ± 51% -80.4% 19.00 ± 46% interrupts.CPU75.RES:Rescheduling_interrupts
3453 ± 11% -12.7% 3015 interrupts.CPU8.CAL:Function_call_interrupts
18.33 ± 5% -13.3 5.08 ± 4% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
19.33 ± 3% -12.0 7.38 ± 4% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
19.70 ± 3% -11.9 7.76 ± 7% perf-profile.calltrace.cycles-pp.secondary_startup_64
19.40 ± 3% -11.9 7.50 ± 4% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
19.40 ± 3% -11.9 7.50 ± 4% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
19.40 ± 3% -11.9 7.50 ± 4% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
0.55 ± 4% +0.1 0.64 ± 2% perf-profile.calltrace.cycles-pp.clear_huge_page
0.95 +0.1 1.06 ± 2% perf-profile.calltrace.cycles-pp.___might_sleep.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault
0.56 ± 4% +0.3 0.84 ± 8% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.clear_page_erms.clear_huge_page
0.62 ± 3% +0.3 0.94 ± 8% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.clear_page_erms.clear_huge_page.do_huge_pmd_anonymous_page
0.60 ± 4% +0.3 0.93 perf-profile.calltrace.cycles-pp._cond_resched.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault
0.63 ± 3% +0.3 0.96 ± 8% perf-profile.calltrace.cycles-pp.apic_timer_interrupt.clear_page_erms.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault
1.17 ± 3% +0.3 1.50 ± 2% perf-profile.calltrace.cycles-pp.__free_pages_ok.release_pages.tlb_flush_mmu_free.unmap_page_range.unmap_vmas
0.38 ± 57% +0.4 0.76 ± 9% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.clear_page_erms
1.24 ± 3% +0.4 1.65 perf-profile.calltrace.cycles-pp.release_pages.tlb_flush_mmu_free.unmap_page_range.unmap_vmas.exit_mmap
1.24 ± 2% +0.4 1.66 perf-profile.calltrace.cycles-pp.tlb_flush_mmu_free.unmap_page_range.unmap_vmas.exit_mmap.mmput
1.31 ± 2% +0.5 1.79 perf-profile.calltrace.cycles-pp.unmap_vmas.exit_mmap.mmput.do_exit.do_group_exit
1.31 ± 2% +0.5 1.78 perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.exit_mmap.mmput.do_exit
1.34 ± 2% +0.5 1.81 perf-profile.calltrace.cycles-pp.mmput.do_exit.do_group_exit.__wake_up_parent.do_syscall_64
1.34 ± 2% +0.5 1.81 perf-profile.calltrace.cycles-pp.exit_mmap.mmput.do_exit.do_group_exit.__wake_up_parent
1.35 ± 3% +0.5 1.83 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
1.35 ± 3% +0.5 1.83 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.34 ± 3% +0.5 1.82 perf-profile.calltrace.cycles-pp.__wake_up_parent.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.34 ± 3% +0.5 1.82 perf-profile.calltrace.cycles-pp.do_group_exit.__wake_up_parent.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.34 ± 3% +0.5 1.82 perf-profile.calltrace.cycles-pp.do_exit.do_group_exit.__wake_up_parent.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.6 0.57 ± 2% perf-profile.calltrace.cycles-pp.___might_sleep
0.00 +0.6 0.64 ± 10% perf-profile.calltrace.cycles-pp.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt
0.00 +0.7 0.67 ± 10% perf-profile.calltrace.cycles-pp.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt
0.00 +0.7 0.68 ± 10% perf-profile.calltrace.cycles-pp.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
1.80 ± 2% +0.7 2.54 ± 2% perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault.__do_page_fault
1.80 ± 2% +0.7 2.54 ± 2% perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault
0.00 +0.9 0.88 ± 5% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.get_page_from_freelist.__alloc_pages_nodemask.do_huge_pmd_anonymous_page
0.00 +0.9 0.90 ± 5% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.get_page_from_freelist.__alloc_pages_nodemask.do_huge_pmd_anonymous_page.__handle_mm_fault
0.66 ± 62% +1.2 1.88 ± 34% perf-profile.calltrace.cycles-pp.poll_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
69.36 +8.7 78.09 perf-profile.calltrace.cycles-pp.clear_page_erms.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault
72.75 +10.1 82.87 perf-profile.calltrace.cycles-pp.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault.__do_page_fault
74.99 +10.9 85.91 perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
74.91 +10.9 85.84 perf-profile.calltrace.cycles-pp.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
75.01 +10.9 85.95 perf-profile.calltrace.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
75.10 +11.0 86.06 perf-profile.calltrace.cycles-pp.do_page_fault.page_fault
75.10 +11.0 86.06 perf-profile.calltrace.cycles-pp.__do_page_fault.do_page_fault.page_fault
75.11 +11.0 86.08 perf-profile.calltrace.cycles-pp.page_fault
103.91 -34.9% 67.64 ± 3% perf-stat.i.MPKI
1.482e+09 +13.6% 1.684e+09 ± 5% perf-stat.i.branch-instructions
9762787 +14.7% 11201300 ± 2% perf-stat.i.branch-misses
7.38 ± 2% +1.0 8.41 perf-stat.i.cache-miss-rate%
45987730 ± 3% -19.9% 36831773 ± 2% perf-stat.i.cache-misses
6.184e+08 -29.2% 4.378e+08 perf-stat.i.cache-references
1.035e+11 +16.0% 1.2e+11 perf-stat.i.cpu-cycles
35.49 ± 2% -24.7% 26.73 ± 8% perf-stat.i.cpu-migrations
1053398 +19.7% 1260826 perf-stat.i.dTLB-load-misses
1.728e+09 +15.0% 1.987e+09 ± 5% perf-stat.i.dTLB-loads
0.04 ± 3% +0.0 0.04 ± 2% perf-stat.i.dTLB-store-miss-rate%
592773 -13.0% 515699 perf-stat.i.dTLB-store-misses
1.69e+09 -22.2% 1.314e+09 perf-stat.i.dTLB-stores
49.34 ± 2% +24.3 73.67 perf-stat.i.iTLB-load-miss-rate%
462159 +19.0% 550118 perf-stat.i.iTLB-load-misses
481420 ± 3% -56.1% 211333 ± 3% perf-stat.i.iTLB-loads
6.07e+09 +11.9% 6.793e+09 ± 4% perf-stat.i.instructions
13676 -8.6% 12500 ± 4% perf-stat.i.instructions-per-iTLB-miss
25994 -29.2% 18413 perf-stat.i.minor-faults
12.62 ± 5% -3.1 9.52 ± 10% perf-stat.i.node-load-miss-rate%
813594 ± 8% -45.3% 445188 ± 10% perf-stat.i.node-load-misses
6036541 ± 3% -29.9% 4234277 perf-stat.i.node-loads
2.63 ± 15% -2.0 0.66 ± 32% perf-stat.i.node-store-miss-rate%
488029 ± 37% -68.5% 153814 ± 34% perf-stat.i.node-store-misses
23876394 -3.5% 23039440 perf-stat.i.node-stores
25995 -29.2% 18414 perf-stat.i.page-faults
101.89 -36.6% 64.58 ± 3% perf-stat.overall.MPKI
7.44 +1.0 8.41 perf-stat.overall.cache-miss-rate%
2251 +44.8% 3258 perf-stat.overall.cycles-between-cache-misses
0.04 +0.0 0.04 ± 2% perf-stat.overall.dTLB-store-miss-rate%
48.98 ± 2% +23.3 72.25 perf-stat.overall.iTLB-load-miss-rate%
11.86 ± 4% -2.3 9.51 ± 10% perf-stat.overall.node-load-miss-rate%
1.99 ± 36% -1.3 0.66 ± 33% perf-stat.overall.node-store-miss-rate%
1.477e+09 +13.6% 1.677e+09 ± 5% perf-stat.ps.branch-instructions
9696637 +14.7% 11121372 ± 2% perf-stat.ps.branch-misses
45824683 ± 3% -19.9% 36699436 ± 2% perf-stat.ps.cache-misses
6.162e+08 -29.2% 4.362e+08 perf-stat.ps.cache-references
1.031e+11 +16.0% 1.195e+11 perf-stat.ps.cpu-cycles
35.37 ± 2% -24.7% 26.64 ± 8% perf-stat.ps.cpu-migrations
1049561 +19.7% 1256350 perf-stat.ps.dTLB-load-misses
1.721e+09 +15.0% 1.979e+09 ± 5% perf-stat.ps.dTLB-loads
590625 -13.0% 513863 perf-stat.ps.dTLB-store-misses
1.684e+09 -22.2% 1.31e+09 perf-stat.ps.dTLB-stores
460416 +19.1% 548246 perf-stat.ps.iTLB-load-misses
479698 ± 3% -56.1% 210661 ± 3% perf-stat.ps.iTLB-loads
6.047e+09 +11.9% 6.766e+09 ± 4% perf-stat.ps.instructions
25909 -29.2% 18350 perf-stat.ps.minor-faults
810622 ± 7% -45.3% 443783 ± 10% perf-stat.ps.node-load-misses
6015816 ± 3% -29.9% 4219461 perf-stat.ps.node-loads
486412 ± 37% -68.5% 153252 ± 33% perf-stat.ps.node-store-misses
23792569 -3.5% 22959184 perf-stat.ps.node-stores
25909 -29.2% 18349 perf-stat.ps.page-faults
1.843e+12 +10.8% 2.043e+12 ± 4% perf-stat.total.instructions
40.36 ±173% -100.0% 0.00 ± 5% sched_debug.cfs_rq:/.MIN_vruntime.stddev
49736 +54.0% 76581 ± 9% sched_debug.cfs_rq:/.exec_clock.avg
81364 ± 8% +74.4% 141938 ± 4% sched_debug.cfs_rq:/.exec_clock.max
10337 ± 14% +266.4% 37876 ± 42% sched_debug.cfs_rq:/.exec_clock.stddev
11225 ± 10% +25.9% 14136 ± 13% sched_debug.cfs_rq:/.load.avg
40.36 ±173% -100.0% 0.00 ± 5% sched_debug.cfs_rq:/.max_vruntime.stddev
2067637 +61.7% 3344320 ± 9% sched_debug.cfs_rq:/.min_vruntime.avg
3187109 ± 6% +93.2% 6157097 ± 4% sched_debug.cfs_rq:/.min_vruntime.max
402561 ± 13% +311.1% 1655080 ± 42% sched_debug.cfs_rq:/.min_vruntime.stddev
0.42 ± 8% +26.9% 0.53 ± 4% sched_debug.cfs_rq:/.nr_running.avg
1.53 ± 10% +58.3% 2.42 ± 14% sched_debug.cfs_rq:/.nr_spread_over.avg
1.13 ± 24% +63.7% 1.85 ± 19% sched_debug.cfs_rq:/.nr_spread_over.stddev
3.41 ± 33% -57.7% 1.45 ±111% sched_debug.cfs_rq:/.removed.load_avg.avg
201.45 -58.0% 84.54 ±100% sched_debug.cfs_rq:/.removed.load_avg.max
25.53 ± 17% -57.7% 10.80 ±103% sched_debug.cfs_rq:/.removed.load_avg.stddev
156.58 ± 33% -57.5% 66.54 ±111% sched_debug.cfs_rq:/.removed.runnable_sum.avg
9231 -57.8% 3894 ±100% sched_debug.cfs_rq:/.removed.runnable_sum.max
1170 ± 17% -57.6% 496.50 ±103% sched_debug.cfs_rq:/.removed.runnable_sum.stddev
9.87 ± 2% +27.1% 12.55 ± 6% sched_debug.cfs_rq:/.runnable_load_avg.avg
11202 ± 10% +25.8% 14092 ± 13% sched_debug.cfs_rq:/.runnable_weight.avg
1717275 ± 24% +109.4% 3595584 ± 21% sched_debug.cfs_rq:/.spread0.max
-313871 +414.7% -1615516 sched_debug.cfs_rq:/.spread0.min
402568 ± 13% +311.1% 1655086 ± 42% sched_debug.cfs_rq:/.spread0.stddev
441.98 ± 8% +19.5% 528.20 ± 7% sched_debug.cfs_rq:/.util_avg.avg
152317 +19.8% 182542 sched_debug.cpu.clock.avg
152324 +19.8% 182549 sched_debug.cpu.clock.max
152309 +19.8% 182535 sched_debug.cpu.clock.min
152317 +19.8% 182542 sched_debug.cpu.clock_task.avg
152324 +19.8% 182549 sched_debug.cpu.clock_task.max
152309 +19.8% 182535 sched_debug.cpu.clock_task.min
9.55 +7.5% 10.27 ± 4% sched_debug.cpu.cpu_load[1].avg
89744 +27.7% 114579 sched_debug.cpu.nr_load_updates.avg
105109 ± 2% +46.9% 154390 ± 7% sched_debug.cpu.nr_load_updates.max
7386 ± 13% +245.2% 25495 ± 34% sched_debug.cpu.nr_load_updates.stddev
6176 +32.2% 8164 ± 4% sched_debug.cpu.nr_switches.avg
33527 ± 10% +167.9% 89818 ± 23% sched_debug.cpu.nr_switches.max
5721 ± 7% +107.1% 11850 ± 23% sched_debug.cpu.nr_switches.stddev
6.70 ± 19% +122.0% 14.88 ± 40% sched_debug.cpu.nr_uninterruptible.max
2.62 ± 8% +36.4% 3.58 ± 7% sched_debug.cpu.nr_uninterruptible.stddev
5865 +35.3% 7937 ± 4% sched_debug.cpu.sched_count.avg
45223 ± 23% +103.6% 92061 ± 23% sched_debug.cpu.sched_count.max
6871 ± 10% +83.8% 12631 ± 20% sched_debug.cpu.sched_count.stddev
2580 ± 2% +33.7% 3449 ± 6% sched_debug.cpu.sched_goidle.avg
15649 ± 9% +169.3% 42137 ± 30% sched_debug.cpu.sched_goidle.max
2697 ± 7% +106.9% 5582 ± 28% sched_debug.cpu.sched_goidle.stddev
2494 ± 2% +40.6% 3507 ± 4% sched_debug.cpu.ttwu_count.avg
20565 ± 18% +174.4% 56434 ± 39% sched_debug.cpu.ttwu_count.max
3385 ± 9% +112.6% 7196 ± 31% sched_debug.cpu.ttwu_count.stddev
856.38 +60.8% 1377 ± 3% sched_debug.cpu.ttwu_local.avg
2955 ± 13% +147.0% 7301 ± 28% sched_debug.cpu.ttwu_local.max
550.87 ± 4% +61.0% 886.64 ± 19% sched_debug.cpu.ttwu_local.stddev
152311 +19.8% 182536 sched_debug.cpu_clk
152311 +19.8% 182536 sched_debug.ktime
155210 +19.5% 185434 sched_debug.sched_clk
30209 ± 6% -41.7% 17601 ± 15% softirqs.CPU1.SCHED
94445 ± 9% +28.2% 121092 ± 5% softirqs.CPU1.TIMER
8720 ± 16% -30.8% 6038 ± 15% softirqs.CPU10.SCHED
91259 ± 7% +45.3% 132597 ± 12% softirqs.CPU10.TIMER
9600 ± 10% -37.6% 5988 ± 24% softirqs.CPU11.SCHED
93321 ± 6% +40.7% 131338 ± 14% softirqs.CPU11.TIMER
93395 ± 8% +39.0% 129851 ± 12% softirqs.CPU12.TIMER
9114 ± 11% -34.2% 6001 ± 13% softirqs.CPU13.SCHED
92702 ± 8% +44.7% 134175 ± 10% softirqs.CPU13.TIMER
98912 ± 8% +31.0% 129598 ± 11% softirqs.CPU14.TIMER
91279 ± 7% +35.5% 123650 ± 8% softirqs.CPU15.TIMER
3119 ± 6% +152.6% 7878 ± 73% softirqs.CPU16.RCU
94832 ± 7% +55.9% 147813 ± 9% softirqs.CPU16.TIMER
94741 ± 7% +38.2% 130953 ± 13% softirqs.CPU17.TIMER
89710 ± 7% +48.8% 133512 ± 12% softirqs.CPU18.TIMER
95107 ± 7% +41.2% 134251 ± 11% softirqs.CPU19.TIMER
17569 ± 3% -51.5% 8516 ± 17% softirqs.CPU2.SCHED
101951 ± 8% +34.6% 137209 ± 7% softirqs.CPU2.TIMER
95682 ± 5% +43.4% 137183 ± 8% softirqs.CPU20.TIMER
90560 ± 18% +47.5% 133620 ± 10% softirqs.CPU21.TIMER
8858 ± 9% +171.4% 24041 ± 37% softirqs.CPU22.SCHED
82796 ± 11% -32.9% 55594 ± 22% softirqs.CPU22.TIMER
92560 ± 8% -30.4% 64376 ± 14% softirqs.CPU23.TIMER
89890 ± 11% -31.9% 61223 ± 18% softirqs.CPU25.TIMER
82728 ± 11% -28.7% 58954 ± 21% softirqs.CPU26.TIMER
7585 ± 5% +65.0% 12517 ± 50% softirqs.CPU27.SCHED
87967 ± 7% -22.7% 68014 ± 20% softirqs.CPU29.TIMER
3847 ± 11% +78.0% 6847 ± 48% softirqs.CPU3.RCU
15118 ± 6% -51.1% 7400 ± 19% softirqs.CPU3.SCHED
99241 ± 8% +33.0% 131954 ± 13% softirqs.CPU3.TIMER
5273 ± 70% -52.9% 2481 ± 13% softirqs.CPU30.RCU
86738 ± 4% -26.9% 63403 ± 20% softirqs.CPU30.TIMER
87717 ± 4% -19.7% 70426 ± 11% softirqs.CPU33.TIMER
91009 ± 8% -26.6% 66826 ± 30% softirqs.CPU37.TIMER
91238 ± 6% -34.6% 59629 ± 35% softirqs.CPU39.TIMER
13630 ± 6% -48.8% 6975 ± 14% softirqs.CPU4.SCHED
98325 ± 9% +36.5% 134220 ± 10% softirqs.CPU4.TIMER
7882 ± 58% -37.7% 4910 ± 76% softirqs.CPU40.RCU
91734 ± 7% -41.1% 54044 ± 24% softirqs.CPU40.TIMER
7837 ± 57% -64.0% 2820 ± 18% softirqs.CPU42.RCU
90928 ± 7% -29.9% 63731 ± 23% softirqs.CPU42.TIMER
92238 ± 5% -36.2% 58803 ± 27% softirqs.CPU43.TIMER
9727 ± 9% -39.0% 5931 ± 17% softirqs.CPU45.SCHED
95857 ± 5% +35.5% 129846 ± 10% softirqs.CPU45.TIMER
9358 ± 8% -36.5% 5941 ± 21% softirqs.CPU46.SCHED
92711 ± 10% +43.7% 133268 ± 10% softirqs.CPU46.TIMER
9920 ± 9% -37.9% 6161 ± 22% softirqs.CPU47.SCHED
97785 ± 9% +36.4% 133427 ± 9% softirqs.CPU47.TIMER
9119 ± 13% -31.4% 6255 ± 18% softirqs.CPU48.SCHED
96161 ± 12% +39.8% 134472 ± 9% softirqs.CPU48.TIMER
95489 ± 12% +36.7% 130507 ± 10% softirqs.CPU49.TIMER
95172 ± 8% +34.9% 128353 ± 5% softirqs.CPU5.TIMER
95302 ± 9% +39.9% 133360 ± 11% softirqs.CPU50.TIMER
93574 ± 13% +47.3% 137842 ± 8% softirqs.CPU51.TIMER
3339 ± 6% +102.3% 6754 ± 50% softirqs.CPU52.RCU
96278 ± 5% +35.0% 129973 ± 10% softirqs.CPU53.TIMER
3243 ± 11% +95.5% 6339 ± 60% softirqs.CPU54.RCU
7818 ± 22% -35.3% 5060 ± 11% softirqs.CPU54.SCHED
97165 ± 4% +35.0% 131219 ± 12% softirqs.CPU54.TIMER
99534 ± 8% +33.4% 132761 ± 11% softirqs.CPU55.TIMER
93342 ± 6% +37.6% 128456 ± 13% softirqs.CPU56.TIMER
3104 ± 5% +176.2% 8574 ± 51% softirqs.CPU57.RCU
8598 ± 19% -43.3% 4871 ± 17% softirqs.CPU57.SCHED
96358 ± 5% +34.3% 129405 ± 14% softirqs.CPU57.TIMER
90709 ± 6% +45.1% 131643 ± 10% softirqs.CPU58.TIMER
94807 ± 6% +31.0% 124229 ± 10% softirqs.CPU59.TIMER
10284 ± 8% -34.9% 6694 ± 12% softirqs.CPU6.SCHED
94038 ± 11% +44.9% 136217 ± 10% softirqs.CPU6.TIMER
95144 ± 5% +37.2% 130541 ± 13% softirqs.CPU60.TIMER
95772 ± 8% +33.9% 128220 ± 13% softirqs.CPU61.TIMER
96450 ± 8% +34.0% 129241 ± 13% softirqs.CPU62.TIMER
2935 ± 8% +104.4% 5999 ± 58% softirqs.CPU63.RCU
93061 ± 7% +43.2% 133226 ± 9% softirqs.CPU63.TIMER
95785 ± 10% +38.0% 132228 ± 10% softirqs.CPU64.TIMER
87927 ± 20% +51.6% 133269 ± 8% softirqs.CPU65.TIMER
87499 ± 10% -38.1% 54161 ± 26% softirqs.CPU66.TIMER
91805 ± 9% -29.9% 64360 ± 25% softirqs.CPU67.TIMER
92714 ± 8% -29.6% 65310 ± 23% softirqs.CPU68.TIMER
87340 ± 7% -31.8% 59600 ± 28% softirqs.CPU69.TIMER
3716 ± 5% +86.8% 6942 ± 61% softirqs.CPU7.RCU
10113 ± 11% -41.6% 5907 ± 9% softirqs.CPU7.SCHED
92161 ± 9% +43.3% 132028 ± 11% softirqs.CPU7.TIMER
91161 ± 17% -27.6% 66000 ± 24% softirqs.CPU73.TIMER
85233 ± 9% -19.1% 68973 ± 14% softirqs.CPU75.TIMER
94828 ± 2% -32.7% 63848 ± 22% softirqs.CPU77.TIMER
89287 ± 7% -29.9% 62592 ± 27% softirqs.CPU78.TIMER
90874 ± 9% -24.0% 69092 ± 24% softirqs.CPU80.TIMER
7403 ± 58% -68.8% 2307 ± 12% softirqs.CPU82.RCU
90855 ± 10% -27.9% 65495 ± 25% softirqs.CPU82.TIMER
90678 ± 11% -28.8% 64580 ± 12% softirqs.CPU84.TIMER
90188 ± 15% -21.7% 70622 ± 3% softirqs.CPU85.TIMER
7438 ± 57% -37.8% 4626 ± 74% softirqs.CPU86.RCU
89728 ± 12% -35.7% 57665 ± 17% softirqs.CPU86.TIMER
5097 ± 72% -52.3% 2430 ± 5% softirqs.CPU87.RCU
88927 ± 9% -34.6% 58201 ± 25% softirqs.CPU87.TIMER
3488 ± 8% +98.4% 6921 ± 45% softirqs.CPU9.RCU
88673 ± 12% +43.7% 127443 ± 12% softirqs.CPU9.TIMER
pft.faults_per_sec_per_cpu
300000 +-+----------------------------------------------------------------+
| |
250000 +-+.++.+.+.+.++.+ +.+.++.+.+.+.+ |
| : : |
| : : |
200000 +-+ : : |
| : : |
150000 O-O O O O O OO O:O:O O OO O O OO O O O OO O O O O OO O O O OO O O
| : : |
100000 +-+ : : |
| : : |
| :: |
50000 +-+ : |
| : |
0 +-+-O--------------------------O-----------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
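Note on reading the tables in this report: the %change column is the delta of the "bad" (patched-commit) sample mean relative to the "good" (parent-commit) mean. A minimal sketch of that convention, assuming it from the printed numbers (this is not the lkp-tests implementation):

    def pct_change(good_mean, bad_mean):
        # %change as printed in the comparison tables: delta of the
        # patched-commit mean relative to the parent-commit mean.
        return (bad_mean - good_mean) / good_mean * 100.0

    # e.g. sched_debug.cpu.ttwu_local.avg above: 856.38 -> 1377
    print("%+.1f%%" % pct_change(856.38, 1377))   # prints +60.8%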
***************************************************************************************************
lkp-hsw-ep2: 72 threads Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz with 256G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_job/nr_task/rootfs/runtime/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.2/3000/100%/debian-x86_64-2018-04-03.cgz/300s/lkp-hsw-ep2/custom/reaim/0x3d
commit:
24d0c1d6e6 ("sched/fair: Do not migrate due to a sync wakeup on exit")
2c83362734 ("sched/fair: Consider SD_NUMA when selecting the most idle group to schedule on")
24d0c1d6e65f635b 2c83362734dad8e48ccc0710b5c
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
1:4 -25% :4 dmesg.WARNING:stack_going_in_the_wrong_direction?ip=sched_slice/0x
%stddev %change %stddev
\ | \
389.13 -2.7% 378.77 reaim.child_systime
540687 +1.3% 547673 reaim.jobs_per_min
7509 +1.3% 7606 reaim.jobs_per_min_child
544939 +1.2% 551502 reaim.max_jobs_per_min
23.92 -1.3% 23.62 reaim.parent_time
0.68 ± 2% +21.6% 0.83 reaim.std_dev_percent
0.16 ± 2% +20.1% 0.19 reaim.std_dev_time
311.40 -1.2% 307.71 reaim.time.elapsed_time
311.40 -1.2% 307.71 reaim.time.elapsed_time.max
5096562 -16.0% 4280492 reaim.time.involuntary_context_switches
4694 -2.7% 4569 reaim.time.system_time
21203641 +2.1% 21642097 reaim.time.voluntary_context_switches
169548 +1.9% 172739 vmstat.system.cs
5.805e+08 +14.5% 6.645e+08 cpuidle.C1E.time
5649746 +14.9% 6493131 cpuidle.C1E.usage
114430 ± 4% -6.4% 107089 softirqs.CPU33.TIMER
116034 -8.8% 105833 ± 2% softirqs.CPU35.TIMER
13118 +13.4% 14873 meminfo.PageTables
49287 ± 5% +21.9% 60093 ± 7% meminfo.Shmem
7913 +12.1% 8874 meminfo.max_used_kB
7969 ± 42% +115.6% 17180 ± 5% numa-meminfo.node0.Inactive(anon)
12094 ± 15% +35.1% 16333 ± 3% numa-meminfo.node0.Mapped
10024 ± 34% -82.7% 1739 ± 48% numa-meminfo.node1.Inactive(anon)
6709 ± 5% +30.3% 8745 ± 8% numa-meminfo.node1.PageTables
5649886 +14.9% 6493215 turbostat.C1E
2.58 +0.4 2.98 turbostat.C1E%
0.29 -58.6% 0.12 turbostat.CPU%c3
2.18 ± 2% +10.3% 2.41 ± 2% turbostat.Pkg%pc2
13.25 -3.8% 12.76 turbostat.RAMWatt
1993 ± 42% +115.1% 4288 ± 4% numa-vmstat.node0.nr_inactive_anon
3125 ± 15% +32.9% 4154 ± 2% numa-vmstat.node0.nr_mapped
1993 ± 42% +115.1% 4288 ± 4% numa-vmstat.node0.nr_zone_inactive_anon
2498 ± 34% -82.9% 426.75 ± 47% numa-vmstat.node1.nr_inactive_anon
1670 ± 5% +31.9% 2202 ± 9% numa-vmstat.node1.nr_page_table_pages
2498 ± 34% -82.9% 426.75 ± 47% numa-vmstat.node1.nr_zone_inactive_anon
2027 ± 2% -21.6% 1590 ± 8% slabinfo.eventpoll_epi.active_objs
2027 ± 2% -21.6% 1590 ± 8% slabinfo.eventpoll_epi.num_objs
3548 ± 2% -22.0% 2769 ± 9% slabinfo.eventpoll_pwq.active_objs
3548 ± 2% -22.0% 2769 ± 9% slabinfo.eventpoll_pwq.num_objs
662.25 ± 6% -19.6% 532.50 ± 3% slabinfo.file_lock_cache.active_objs
662.25 ± 6% -19.6% 532.50 ± 3% slabinfo.file_lock_cache.num_objs
933.75 ± 2% -23.5% 714.00 ± 2% slabinfo.names_cache.active_objs
938.50 ± 2% -23.9% 714.00 ± 2% slabinfo.names_cache.num_objs
93741 +3.5% 97068 proc-vmstat.nr_active_anon
84153 +1.7% 85581 proc-vmstat.nr_anon_pages
235621 +1.1% 238317 proc-vmstat.nr_file_pages
4495 +5.3% 4732 proc-vmstat.nr_inactive_anon
15557 +2.7% 15981 proc-vmstat.nr_kernel_stack
6662 +5.9% 7053 proc-vmstat.nr_mapped
3245 +13.7% 3692 proc-vmstat.nr_page_table_pages
12326 ± 5% +21.9% 15022 ± 7% proc-vmstat.nr_shmem
44278 -1.3% 43703 proc-vmstat.nr_slab_unreclaimable
93741 +3.5% 97068 proc-vmstat.nr_zone_active_anon
4495 +5.3% 4732 proc-vmstat.nr_zone_inactive_anon
158417 ± 11% -20.5% 125975 ± 3% proc-vmstat.numa_pte_updates
2340 -100.0% 0.00 sched_debug.cfs_rq:/.load.min
0.74 -17.1% 0.61 ± 3% sched_debug.cfs_rq:/.nr_running.avg
0.17 -100.0% 0.00 sched_debug.cfs_rq:/.nr_running.min
1.83 ± 22% -100.0% 0.00 sched_debug.cfs_rq:/.runnable_load_avg.min
25.56 ± 51% +131.9% 59.28 ± 32% sched_debug.cfs_rq:/.runnable_load_avg.stddev
2339 -100.0% 0.00 sched_debug.cfs_rq:/.runnable_weight.min
741.21 -19.7% 595.16 ± 2% sched_debug.cfs_rq:/.util_avg.avg
144.54 ± 16% +38.7% 200.54 ± 27% sched_debug.cpu.cpu_load[2].max
15.92 ± 13% +33.3% 21.22 ± 18% sched_debug.cpu.cpu_load[3].stddev
3.08 ± 18% -44.6% 1.71 ± 46% sched_debug.cpu.cpu_load[4].min
23537 ± 2% -16.0% 19781 ± 4% sched_debug.cpu.curr->pid.avg
4236 -100.0% 0.00 sched_debug.cpu.curr->pid.min
2340 -100.0% 0.00 sched_debug.cpu.load.min
0.89 ± 2% -15.7% 0.75 ± 5% sched_debug.cpu.nr_running.avg
0.17 -100.0% 0.00 sched_debug.cpu.nr_running.min
128.17 ± 5% +10.5% 141.67 ± 4% sched_debug.cpu.nr_uninterruptible.max
-143.46 -9.9% -129.21 sched_debug.cpu.nr_uninterruptible.min
7407 ± 7% +16.7% 8643 ± 6% sched_debug.cpu.sched_count.stddev
0.00 ± 5% -44.0% 0.00 ± 14% sched_debug.rt_rq:/.rt_time.avg
0.04 ± 7% -42.0% 0.02 ± 57% sched_debug.rt_rq:/.rt_time.max
0.00 ± 8% -36.3% 0.00 ± 40% sched_debug.rt_rq:/.rt_time.stddev
2.198e+10 +1.1% 2.223e+10 perf-stat.i.branch-instructions
2.02 -0.0 1.99 perf-stat.i.branch-miss-rate%
2.44 -0.5 1.89 perf-stat.i.cache-miss-rate%
84674052 -20.1% 67690215 perf-stat.i.cache-misses
3.972e+09 +1.0% 4.013e+09 perf-stat.i.cache-references
171277 +1.8% 174409 perf-stat.i.context-switches
4613 ± 3% +20.8% 5574 ± 3% perf-stat.i.cycles-between-cache-misses
1.248e+10 +1.3% 1.264e+10 perf-stat.i.dTLB-loads
10489856 ± 2% -6.8% 9775230 ± 2% perf-stat.i.dTLB-store-misses
61.17 -0.5 60.69 perf-stat.i.iTLB-load-miss-rate%
1.04e+11 +1.1% 1.052e+11 perf-stat.i.instructions
79.97 -1.9 78.03 perf-stat.i.node-load-miss-rate%
54374000 -26.7% 39866795 perf-stat.i.node-load-misses
5262627 -5.0% 4998608 perf-stat.i.node-loads
47.13 -1.5 45.63 perf-stat.i.node-store-miss-rate%
11256870 -13.1% 9780571 perf-stat.i.node-store-misses
12952272 -6.5% 12106552 perf-stat.i.node-stores
2.13 -0.4 1.69 perf-stat.overall.cache-miss-rate%
1806 +25.2% 2262 perf-stat.overall.cycles-between-cache-misses
0.08 ± 2% -0.0 0.07 ± 3% perf-stat.overall.dTLB-store-miss-rate%
57.16 -0.4 56.72 perf-stat.overall.iTLB-load-miss-rate%
6126 +2.6% 6282 perf-stat.overall.instructions-per-iTLB-miss
0.68 +1.0% 0.69 perf-stat.overall.ipc
91.17 -2.3 88.86 perf-stat.overall.node-load-miss-rate%
46.50 -1.8 44.68 perf-stat.overall.node-store-miss-rate%
2.191e+10 +1.1% 2.215e+10 perf-stat.ps.branch-instructions
84380063 -20.1% 67448069 perf-stat.ps.cache-misses
3.959e+09 +1.0% 4e+09 perf-stat.ps.cache-references
170674 +1.8% 173772 perf-stat.ps.context-switches
1.244e+10 +1.3% 1.259e+10 perf-stat.ps.dTLB-loads
10454233 ± 2% -6.8% 9740884 ± 2% perf-stat.ps.dTLB-store-misses
1.037e+11 +1.1% 1.048e+11 perf-stat.ps.instructions
54182937 -26.7% 39721849 perf-stat.ps.node-load-misses
5244550 -5.0% 4980894 perf-stat.ps.node-loads
11218214 -13.1% 9746115 perf-stat.ps.node-store-misses
12908974 -6.5% 12064855 perf-stat.ps.node-stores
252.25 ± 40% -32.0% 171.50 ± 4% interrupts.40:PCI-MSI.1572869-edge.eth0-TxRx-5
10875 ± 4% -15.8% 9161 ± 8% interrupts.CPU0.RES:Rescheduling_interrupts
7594 ± 2% -23.1% 5838 ± 6% interrupts.CPU1.RES:Rescheduling_interrupts
7393 -35.6% 4762 ± 3% interrupts.CPU10.RES:Rescheduling_interrupts
7422 ± 13% -33.1% 4965 ± 8% interrupts.CPU11.RES:Rescheduling_interrupts
7029 ± 2% -35.2% 4558 ± 2% interrupts.CPU12.RES:Rescheduling_interrupts
7067 ± 2% -36.0% 4526 interrupts.CPU13.RES:Rescheduling_interrupts
7201 ± 2% -34.6% 4709 interrupts.CPU14.RES:Rescheduling_interrupts
7104 ± 2% -36.1% 4537 ± 5% interrupts.CPU15.RES:Rescheduling_interrupts
7208 -35.5% 4646 ± 4% interrupts.CPU16.RES:Rescheduling_interrupts
7127 -33.9% 4709 ± 9% interrupts.CPU17.RES:Rescheduling_interrupts
7395 ± 3% -36.3% 4709 ± 2% interrupts.CPU18.RES:Rescheduling_interrupts
7005 -34.4% 4596 ± 3% interrupts.CPU19.RES:Rescheduling_interrupts
7853 ± 3% -31.2% 5406 ± 8% interrupts.CPU2.RES:Rescheduling_interrupts
7205 -31.2% 4958 ± 8% interrupts.CPU20.RES:Rescheduling_interrupts
7137 -34.3% 4686 ± 3% interrupts.CPU21.RES:Rescheduling_interrupts
6968 -32.7% 4691 interrupts.CPU22.RES:Rescheduling_interrupts
6914 -30.8% 4787 ± 3% interrupts.CPU23.RES:Rescheduling_interrupts
6922 -32.8% 4651 ± 2% interrupts.CPU24.RES:Rescheduling_interrupts
7184 ± 2% -35.8% 4615 ± 2% interrupts.CPU25.RES:Rescheduling_interrupts
7341 ± 2% -36.9% 4635 ± 3% interrupts.CPU26.RES:Rescheduling_interrupts
7230 -34.7% 4720 ± 4% interrupts.CPU27.RES:Rescheduling_interrupts
7205 ± 2% -31.4% 4939 ± 7% interrupts.CPU28.RES:Rescheduling_interrupts
7064 -33.9% 4667 ± 3% interrupts.CPU29.RES:Rescheduling_interrupts
8138 ± 11% -36.3% 5183 ± 6% interrupts.CPU3.RES:Rescheduling_interrupts
7003 -33.7% 4644 ± 2% interrupts.CPU30.RES:Rescheduling_interrupts
7189 ± 3% -37.2% 4512 ± 3% interrupts.CPU31.RES:Rescheduling_interrupts
7236 ± 4% -35.7% 4654 interrupts.CPU32.RES:Rescheduling_interrupts
7071 -35.4% 4571 interrupts.CPU33.RES:Rescheduling_interrupts
7111 ± 2% -36.1% 4540 interrupts.CPU34.RES:Rescheduling_interrupts
7196 ± 2% -33.4% 4791 ± 4% interrupts.CPU35.RES:Rescheduling_interrupts
6758 -34.0% 4462 interrupts.CPU36.RES:Rescheduling_interrupts
6853 -35.7% 4403 ± 4% interrupts.CPU37.RES:Rescheduling_interrupts
7193 ± 3% -34.8% 4691 interrupts.CPU38.RES:Rescheduling_interrupts
7197 ± 5% -35.2% 4660 ± 3% interrupts.CPU39.RES:Rescheduling_interrupts
7753 -31.2% 5333 ± 7% interrupts.CPU4.RES:Rescheduling_interrupts
7031 -33.0% 4711 ± 3% interrupts.CPU40.RES:Rescheduling_interrupts
6888 -35.7% 4431 interrupts.CPU41.RES:Rescheduling_interrupts
6914 ± 2% -32.8% 4644 ± 3% interrupts.CPU42.RES:Rescheduling_interrupts
7186 ± 6% -37.4% 4498 interrupts.CPU43.RES:Rescheduling_interrupts
7018 ± 2% -36.5% 4458 ± 2% interrupts.CPU44.RES:Rescheduling_interrupts
7083 ± 3% -37.4% 4430 ± 2% interrupts.CPU45.RES:Rescheduling_interrupts
7044 ± 3% -35.8% 4521 ± 2% interrupts.CPU46.RES:Rescheduling_interrupts
6916 -34.3% 4546 interrupts.CPU47.RES:Rescheduling_interrupts
7012 ± 3% -36.6% 4443 ± 2% interrupts.CPU48.RES:Rescheduling_interrupts
6966 -35.3% 4506 ± 3% interrupts.CPU49.RES:Rescheduling_interrupts
252.25 ± 40% -32.0% 171.50 ± 4% interrupts.CPU5.40:PCI-MSI.1572869-edge.eth0-TxRx-5
7180 ± 2% -33.7% 4760 ± 3% interrupts.CPU5.RES:Rescheduling_interrupts
6969 -35.1% 4523 interrupts.CPU50.RES:Rescheduling_interrupts
7000 -34.5% 4587 interrupts.CPU51.RES:Rescheduling_interrupts
7039 ± 2% -34.0% 4643 ± 3% interrupts.CPU52.RES:Rescheduling_interrupts
6917 -35.8% 4443 ± 2% interrupts.CPU53.RES:Rescheduling_interrupts
7059 ± 2% -37.2% 4430 ± 2% interrupts.CPU54.RES:Rescheduling_interrupts
6918 -33.8% 4580 ± 2% interrupts.CPU55.RES:Rescheduling_interrupts
7187 -35.1% 4662 ± 3% interrupts.CPU56.RES:Rescheduling_interrupts
6875 -33.5% 4568 interrupts.CPU57.RES:Rescheduling_interrupts
7008 -34.7% 4574 ± 3% interrupts.CPU58.RES:Rescheduling_interrupts
6891 -32.0% 4683 ± 3% interrupts.CPU59.RES:Rescheduling_interrupts
7151 ± 4% -33.2% 4780 ± 2% interrupts.CPU6.RES:Rescheduling_interrupts
6903 -30.9% 4767 ± 7% interrupts.CPU60.RES:Rescheduling_interrupts
7063 -33.4% 4706 ± 3% interrupts.CPU61.RES:Rescheduling_interrupts
6855 ± 3% -33.9% 4532 ± 4% interrupts.CPU62.RES:Rescheduling_interrupts
7054 -35.1% 4578 ± 2% interrupts.CPU63.RES:Rescheduling_interrupts
7195 ± 2% -37.0% 4531 ± 2% interrupts.CPU64.RES:Rescheduling_interrupts
7040 ± 2% -35.3% 4556 ± 4% interrupts.CPU65.RES:Rescheduling_interrupts
6848 -33.7% 4538 ± 2% interrupts.CPU66.RES:Rescheduling_interrupts
7026 ± 2% -35.4% 4538 interrupts.CPU67.RES:Rescheduling_interrupts
7055 -34.2% 4645 ± 2% interrupts.CPU68.RES:Rescheduling_interrupts
6876 -33.4% 4576 interrupts.CPU69.RES:Rescheduling_interrupts
7598 ± 5% -35.5% 4898 ± 2% interrupts.CPU7.RES:Rescheduling_interrupts
7097 ± 2% -35.9% 4548 ± 5% interrupts.CPU70.RES:Rescheduling_interrupts
6982 -35.7% 4488 interrupts.CPU71.RES:Rescheduling_interrupts
7378 ± 2% -36.2% 4705 ± 2% interrupts.CPU8.RES:Rescheduling_interrupts
7349 -34.9% 4784 interrupts.CPU9.RES:Rescheduling_interrupts
83.75 ± 11% +52.5% 127.75 ± 18% interrupts.IWI:IRQ_work_interrupts
516736 -34.1% 340755 interrupts.RES:Rescheduling_interrupts
22.25 -1.3 20.95 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
22.22 -1.3 20.93 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.69 ± 5% -1.1 2.60 ± 3% perf-profile.calltrace.cycles-pp.search_binary_handler.do_execveat_common.sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.69 ± 5% -1.1 2.59 ± 3% perf-profile.calltrace.cycles-pp.load_elf_binary.search_binary_handler.do_execveat_common.sys_execve.do_syscall_64
1.71 ± 4% -0.8 0.88 ± 4% perf-profile.calltrace.cycles-pp.apic_timer_interrupt
1.64 ± 4% -0.8 0.84 ± 4% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt
1.02 -0.8 0.27 ±100% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
1.25 ± 3% -0.6 0.60 ± 9% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
3.74 ± 2% -0.5 3.20 perf-profile.calltrace.cycles-pp.copy_page_range.copy_process._do_fork.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.06 ± 13% -0.5 1.56 ± 3% perf-profile.calltrace.cycles-pp.flush_old_exec.load_elf_binary.search_binary_handler.do_execveat_common.sys_execve
1.86 ± 3% -0.5 1.36 ± 2% perf-profile.calltrace.cycles-pp._do_fork.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.97 ± 12% -0.5 1.51 ± 3% perf-profile.calltrace.cycles-pp.mmput.flush_old_exec.load_elf_binary.search_binary_handler.do_execveat_common
1.96 ± 12% -0.5 1.50 ± 3% perf-profile.calltrace.cycles-pp.exit_mmap.mmput.flush_old_exec.load_elf_binary.search_binary_handler
1.70 ± 3% -0.4 1.27 ± 2% perf-profile.calltrace.cycles-pp.copy_process._do_fork.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.29 ± 7% -0.4 0.87 ± 16% perf-profile.calltrace.cycles-pp.apic_timer_interrupt.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
1.29 ± 7% -0.4 0.87 ± 16% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle.cpu_startup_entry
4.20 ± 3% -0.4 3.80 ± 3% perf-profile.calltrace.cycles-pp.execve
1.21 ± 6% -0.4 0.85 ± 2% perf-profile.calltrace.cycles-pp.setlocale
3.81 ± 3% -0.4 3.46 ± 3% perf-profile.calltrace.cycles-pp.do_execveat_common.sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe.execve
3.84 ± 3% -0.4 3.49 ± 3% perf-profile.calltrace.cycles-pp.sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe.execve
3.84 ± 3% -0.3 3.50 ± 3% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.execve
3.84 ± 3% -0.3 3.50 ± 3% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.execve
0.73 ± 6% -0.2 0.54 perf-profile.calltrace.cycles-pp._dl_addr
0.91 ± 3% -0.2 0.75 ± 4% perf-profile.calltrace.cycles-pp.ret_from_fork
0.89 ± 3% -0.1 0.74 ± 4% perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
0.68 ± 4% -0.1 0.60 ± 5% perf-profile.calltrace.cycles-pp.__pte_alloc.copy_page_range.copy_process._do_fork.do_syscall_64
0.72 ± 3% +0.1 0.79 ± 4% perf-profile.calltrace.cycles-pp.read
0.56 ± 7% +0.1 0.66 perf-profile.calltrace.cycles-pp.do_brk_flags.sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe.brk
0.54 ± 6% +0.1 0.64 ± 4% perf-profile.calltrace.cycles-pp.kmem_cache_free.unlink_anon_vmas.free_pgtables.exit_mmap.mmput
0.91 ± 3% +0.1 1.03 perf-profile.calltrace.cycles-pp.anon_vma_interval_tree_insert.anon_vma_clone.anon_vma_fork.copy_process._do_fork
0.56 ± 3% +0.1 0.68 ± 4% perf-profile.calltrace.cycles-pp.close
0.70 ± 6% +0.1 0.82 ± 4% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.kill
0.71 ± 6% +0.1 0.83 ± 4% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.kill
0.96 ± 6% +0.1 1.11 ± 3% perf-profile.calltrace.cycles-pp.kill
0.83 ± 3% +0.2 0.98 ± 2% perf-profile.calltrace.cycles-pp.free_pages_and_swap_cache.tlb_flush_mmu_free.arch_tlb_finish_mmu.tlb_finish_mmu.exit_mmap
0.94 +0.2 1.09 ± 2% perf-profile.calltrace.cycles-pp.__vfs_write.vfs_write.sys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.05 +0.2 1.22 ± 2% perf-profile.calltrace.cycles-pp.vfs_write.sys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
0.54 ± 4% +0.2 0.71 ± 3% perf-profile.calltrace.cycles-pp.tlb_finish_mmu.unmap_region.do_munmap.sys_brk.do_syscall_64
0.54 ± 4% +0.2 0.71 ± 3% perf-profile.calltrace.cycles-pp.arch_tlb_finish_mmu.tlb_finish_mmu.unmap_region.do_munmap.sys_brk
1.12 +0.2 1.29 ± 3% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
1.10 +0.2 1.27 ± 3% perf-profile.calltrace.cycles-pp.sys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
1.13 +0.2 1.31 ± 2% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.write
0.71 ± 6% +0.2 0.91 ± 3% perf-profile.calltrace.cycles-pp.unmap_region.do_munmap.sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.75 ± 5% +0.2 0.96 ± 3% perf-profile.calltrace.cycles-pp.do_munmap.sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe.brk
1.86 ± 2% +0.2 2.08 ± 3% perf-profile.calltrace.cycles-pp.anon_vma_clone.anon_vma_fork.copy_process._do_fork.do_syscall_64
3.38 ± 3% +0.3 3.65 perf-profile.calltrace.cycles-pp.free_pgtables.exit_mmap.mmput.do_exit.do_group_exit
1.72 ± 2% +0.3 1.98 perf-profile.calltrace.cycles-pp.write
1.93 ± 2% +0.3 2.23 perf-profile.calltrace.cycles-pp.page_remove_rmap.unmap_page_range.unmap_vmas.exit_mmap.mmput
0.28 ±100% +0.3 0.60 ± 4% perf-profile.calltrace.cycles-pp.do_notify_parent.do_exit.do_group_exit.__wake_up_parent.do_syscall_64
2.17 ± 4% +0.3 2.49 ± 2% perf-profile.calltrace.cycles-pp.unlink_anon_vmas.free_pgtables.exit_mmap.mmput.do_exit
0.26 ±100% +0.3 0.59 ± 4% perf-profile.calltrace.cycles-pp.__slab_free.kmem_cache_free.unlink_anon_vmas.free_pgtables.exit_mmap
1.44 ± 6% +0.3 1.78 perf-profile.calltrace.cycles-pp.sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe.brk
1.47 ± 6% +0.3 1.81 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.brk
2.81 ± 3% +0.4 3.16 perf-profile.calltrace.cycles-pp.release_pages.tlb_flush_mmu_free.arch_tlb_finish_mmu.tlb_finish_mmu.exit_mmap
1.48 ± 6% +0.4 1.84 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.brk
2.75 +0.4 3.10 ± 3% perf-profile.calltrace.cycles-pp.anon_vma_fork.copy_process._do_fork.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.71 ± 4% +0.4 2.09 perf-profile.calltrace.cycles-pp.path_openat.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
5.92 ± 2% +0.4 6.30 perf-profile.calltrace.cycles-pp.unmap_vmas.exit_mmap.mmput.do_exit.do_group_exit
1.73 ± 4% +0.4 2.11 perf-profile.calltrace.cycles-pp.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe.creat
1.86 ± 4% +0.4 2.25 perf-profile.calltrace.cycles-pp.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe.creat
5.77 +0.4 6.16 perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.exit_mmap.mmput.do_exit
1.89 ± 4% +0.4 2.28 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.creat
0.13 ±173% +0.4 0.53 ± 2% perf-profile.calltrace.cycles-pp.remove_vma.exit_mmap.mmput.do_exit.do_group_exit
1.89 ± 4% +0.4 2.30 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.creat
0.27 ±100% +0.4 0.69 perf-profile.calltrace.cycles-pp.wait_consider_task.do_wait.kernel_wait4.SYSC_wait4.do_syscall_64
1.97 ± 6% +0.4 2.40 perf-profile.calltrace.cycles-pp.brk
2.36 ± 4% +0.5 2.81 perf-profile.calltrace.cycles-pp.creat
0.13 ±173% +0.5 0.60 ± 8% perf-profile.calltrace.cycles-pp.down_write.anon_vma_fork.copy_process._do_fork.do_syscall_64
0.67 ± 4% +0.5 1.16 ± 4% perf-profile.calltrace.cycles-pp.queued_read_lock_slowpath.do_wait.kernel_wait4.SYSC_wait4.do_syscall_64
3.65 ± 2% +0.5 4.15 perf-profile.calltrace.cycles-pp.tlb_flush_mmu_free.arch_tlb_finish_mmu.tlb_finish_mmu.exit_mmap.mmput
0.13 ±173% +0.5 0.63 perf-profile.calltrace.cycles-pp.release_task.wait_consider_task.do_wait.kernel_wait4.SYSC_wait4
3.68 ± 2% +0.5 4.20 perf-profile.calltrace.cycles-pp.tlb_finish_mmu.exit_mmap.mmput.do_exit.do_group_exit
3.67 ± 2% +0.5 4.19 perf-profile.calltrace.cycles-pp.arch_tlb_finish_mmu.tlb_finish_mmu.exit_mmap.mmput.do_exit
1.44 ± 2% +0.7 2.13 perf-profile.calltrace.cycles-pp.do_wait.kernel_wait4.SYSC_wait4.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.45 ± 2% +0.7 2.15 perf-profile.calltrace.cycles-pp.kernel_wait4.SYSC_wait4.do_syscall_64.entry_SYSCALL_64_after_hwframe.wait
1.47 ± 2% +0.7 2.17 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.wait
1.46 ± 2% +0.7 2.17 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.wait
1.45 ± 2% +0.7 2.16 perf-profile.calltrace.cycles-pp.SYSC_wait4.do_syscall_64.entry_SYSCALL_64_after_hwframe.wait
1.69 ± 2% +0.7 2.43 perf-profile.calltrace.cycles-pp.wait
11.98 +0.7 12.73 perf-profile.calltrace.cycles-pp.__libc_fork
9.99 ± 2% +0.8 10.77 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__libc_fork
9.99 ± 2% +0.8 10.77 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_fork
9.98 ± 2% +0.8 10.77 perf-profile.calltrace.cycles-pp._do_fork.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_fork
9.35 +0.8 10.18 perf-profile.calltrace.cycles-pp.copy_process._do_fork.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_fork
27.80 +0.8 28.64 perf-profile.calltrace.cycles-pp.secondary_startup_64
0.00 +0.9 0.86 ± 3% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath.queued_read_lock_slowpath.do_wait.kernel_wait4.SYSC_wait4
27.36 +0.9 28.29 perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
27.36 +0.9 28.29 perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
27.35 +0.9 28.29 perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
26.38 +1.0 27.42 perf-profile.calltrace.cycles-pp.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
13.81 +1.2 14.97 perf-profile.calltrace.cycles-pp.exit_mmap.mmput.do_exit.do_group_exit.__wake_up_parent
13.84 +1.2 15.00 perf-profile.calltrace.cycles-pp.mmput.do_exit.do_group_exit.__wake_up_parent.do_syscall_64
15.92 +1.3 17.21 perf-profile.calltrace.cycles-pp.__wake_up_parent.do_syscall_64.entry_SYSCALL_64_after_hwframe
15.92 +1.3 17.21 perf-profile.calltrace.cycles-pp.do_group_exit.__wake_up_parent.do_syscall_64.entry_SYSCALL_64_after_hwframe
15.92 +1.3 17.21 perf-profile.calltrace.cycles-pp.do_exit.do_group_exit.__wake_up_parent.do_syscall_64.entry_SYSCALL_64_after_hwframe
24.25 +1.3 25.56 perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
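The node-load/node-store miss-rate columns in the perf-stat block above are consistent with misses / (misses + hits); a quick check under that assumption (formula inferred from the numbers, not taken from the lkp-tests source), using the per-second values from this table:

    # perf-stat.ps.node-store-misses and perf-stat.ps.node-stores
    misses, stores = 11218214, 12908974
    rate = misses / (misses + stores) * 100.0
    print("%.2f%%" % rate)   # 46.50%, matching node-store-miss-rate%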
***************************************************************************************************
lkp-bdw-ep4: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
=========================================================================================
array_size/compiler/cpufreq_governor/kconfig/nr_threads/omp/rootfs/tbox_group/testcase/ucode:
10000000/gcc-7/performance/x86_64-rhel-7.2/50%/true/debian-x86_64-2018-04-03.cgz/lkp-bdw-ep4/stream/0xb00002e
commit:
24d0c1d6e6 ("sched/fair: Do not migrate due to a sync wakeup on exit")
2c83362734 ("sched/fair: Consider SD_NUMA when selecting the most idle group to schedule on")
24d0c1d6e65f635b 2c83362734dad8e48ccc0710b5c
---------------- ---------------------------
%stddev %change %stddev
\ | \
70910 ± 3% -32.0% 48230 ± 2% stream.add_bandwidth_MBps
76061 -39.0% 46386 ± 3% stream.copy_bandwidth_MBps
67937 ± 2% -35.7% 43683 ± 2% stream.scale_bandwidth_MBps
52.42 +47.9% 77.52 ± 2% stream.time.user_time
74940 -33.4% 49941 ± 2% stream.triad_bandwidth_MBps
19.95 ± 9% +3.4 23.35 ± 2% mpstat.cpu.usr%
3192799 ± 31% +124.6% 7172184 ± 16% cpuidle.C1E.time
23982 ± 9% +47.3% 35332 ± 28% cpuidle.C1E.usage
91178 ± 8% +43.2% 130560 ± 4% meminfo.AnonHugePages
669014 ± 11% -20.3% 533101 meminfo.max_used_kB
643.25 ± 15% +408.3% 3269 ±129% softirqs.CPU16.RCU
10783 ± 4% +13.6% 12250 ± 10% softirqs.CPU46.TIMER
84.00 -4.2% 80.50 vmstat.cpu.id
15.00 ± 8% +23.3% 18.50 ± 2% vmstat.cpu.us
5888 +3.3% 6081 proc-vmstat.nr_mapped
985.50 ± 2% +7.6% 1060 ± 2% proc-vmstat.nr_page_table_pages
81826 +2.9% 84193 proc-vmstat.numa_hit
64731 +3.6% 67074 proc-vmstat.numa_local
20069 ± 12% +60.3% 32178 ± 32% turbostat.C1E
1.15 ± 31% +1.2 2.33 ± 17% turbostat.C1E%
34.77 ± 4% -24.8% 26.13 ± 2% turbostat.CPU%c1
30.82 ± 16% +25.9% 38.80 ± 9% turbostat.CPU%c6
289644 ± 6% +17.8% 341157 ± 7% turbostat.IRQ
24.77 ± 6% -10.6% 22.14 ± 2% turbostat.RAMWatt
672.00 ± 9% +46.4% 984.00 ± 22% slabinfo.dmaengine-unmap-16.active_objs
672.00 ± 9% +46.4% 984.00 ± 22% slabinfo.dmaengine-unmap-16.num_objs
2128 ± 9% -25.9% 1576 ± 9% slabinfo.eventpoll_epi.active_objs
2128 ± 9% -25.9% 1576 ± 9% slabinfo.eventpoll_epi.num_objs
3724 ± 9% -25.9% 2758 ± 9% slabinfo.eventpoll_pwq.active_objs
3724 ± 9% -25.9% 2758 ± 9% slabinfo.eventpoll_pwq.num_objs
760.00 ± 6% -17.1% 630.00 ± 2% slabinfo.file_lock_cache.active_objs
760.00 ± 6% -17.1% 630.00 ± 2% slabinfo.file_lock_cache.num_objs
7771 ± 60% +112.3% 16501 numa-meminfo.node0.Inactive(anon)
8035 ± 58% +111.6% 17005 ± 2% numa-meminfo.node0.Shmem
93367 ± 2% +12.8% 105291 numa-meminfo.node0.Slab
21669 +8.8% 23569 ± 4% numa-meminfo.node1.Active(file)
9225 ± 51% -94.7% 493.50 ± 62% numa-meminfo.node1.Inactive(anon)
31028 ± 6% -10.9% 27656 numa-meminfo.node1.SReclaimable
62893 ± 4% -13.6% 54313 ± 7% numa-meminfo.node1.SUnreclaim
9625 ± 49% -92.8% 690.00 ± 51% numa-meminfo.node1.Shmem
93922 -12.7% 81969 ± 5% numa-meminfo.node1.Slab
1942 ± 61% +112.4% 4125 numa-vmstat.node0.nr_inactive_anon
2007 ± 59% +111.7% 4250 ± 2% numa-vmstat.node0.nr_shmem
1942 ± 61% +112.4% 4125 numa-vmstat.node0.nr_zone_inactive_anon
5390 +9.3% 5892 ± 4% numa-vmstat.node1.nr_active_file
2305 ± 51% -94.7% 123.25 ± 62% numa-vmstat.node1.nr_inactive_anon
2404 ± 49% -92.8% 172.00 ± 51% numa-vmstat.node1.nr_shmem
15666 ± 4% -13.3% 13579 ± 7% numa-vmstat.node1.nr_slab_unreclaimable
5390 +9.3% 5892 ± 4% numa-vmstat.node1.nr_zone_active_file
2305 ± 51% -94.7% 123.25 ± 62% numa-vmstat.node1.nr_zone_inactive_anon
11.33 ±120% -11.3 0.00 perf-profile.calltrace.cycles-pp.__vfs_read.vfs_read.sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
9.83 ±140% -9.8 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.read
9.83 ±140% -9.8 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
9.83 ±140% -9.8 0.00 perf-profile.calltrace.cycles-pp.read
9.83 ±140% -9.8 0.00 perf-profile.calltrace.cycles-pp.sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
9.83 ±140% -9.8 0.00 perf-profile.calltrace.cycles-pp.vfs_read.sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
20.83 ±103% -8.3 12.50 ±173% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
20.83 ±103% -8.3 12.50 ±173% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
20.83 ±103% -8.3 12.50 ±173% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
20.33 ±103% -7.8 12.50 ±173% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
20.33 ±103% -7.8 12.50 ±173% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
12.83 ±108% -0.3 12.50 ±173% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
12.83 ±108% -0.3 12.50 ±173% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
27016 ± 6% -12.9% 23538 ± 5% sched_debug.cfs_rq:/.min_vruntime.max
2243 ± 40% -47.7% 1173 ± 35% sched_debug.cfs_rq:/.min_vruntime.min
-692.28 +943.1% -7221 sched_debug.cfs_rq:/.spread0.avg
19808 ± 11% -47.9% 10321 ± 61% sched_debug.cfs_rq:/.spread0.max
-5010 +140.7% -12059 sched_debug.cfs_rq:/.spread0.min
2.18 ± 9% -12.5% 1.91 ± 8% sched_debug.cpu.cpu_load[0].avg
160.00 ± 22% -42.2% 92.50 ± 25% sched_debug.cpu.cpu_load[4].max
18.25 ± 22% -37.0% 11.50 ± 14% sched_debug.cpu.cpu_load[4].stddev
282.50 ± 11% -16.2% 236.72 ± 6% sched_debug.cpu.curr->pid.avg
642.46 ± 4% -7.3% 595.31 sched_debug.cpu.curr->pid.stddev
229.75 -19.0% 186.00 ± 9% sched_debug.cpu.nr_switches.min
1754 ± 11% +30.2% 2284 ± 8% sched_debug.cpu.nr_switches.stddev
5.25 ± 15% +128.6% 12.00 ± 20% sched_debug.cpu.nr_uninterruptible.max
2.38 ± 13% +50.4% 3.57 ± 19% sched_debug.cpu.nr_uninterruptible.stddev
1.56 ±171% +602.6% 10.93 ± 52% sched_debug.cpu.ttwu_count.avg
9.75 ±161% +2407.7% 244.50 ± 51% sched_debug.cpu.ttwu_count.max
3.53 ±167% +1270.9% 48.37 ± 50% sched_debug.cpu.ttwu_count.stddev
3.14 ± 12% +41.1% 4.43 ± 17% perf-stat.i.cpi
0.32 ± 13% -24.8% 0.24 ± 12% perf-stat.i.ipc
14784 ± 4% -25.8% 10970 ± 16% perf-stat.i.minor-faults
37.43 ± 11% -32.1 5.36 ± 27% perf-stat.i.node-load-miss-rate%
1700226 ± 23% -86.3% 233356 ± 9% perf-stat.i.node-load-misses
2814735 ± 20% +183.3% 7973698 ± 61% perf-stat.i.node-loads
36.40 ± 15% -34.6 1.80 ± 79% perf-stat.i.node-store-miss-rate%
3649361 ± 35% -95.2% 175235 ± 54% perf-stat.i.node-store-misses
14790 ± 4% -25.8% 10971 ± 16% perf-stat.i.page-faults
3.14 ± 12% +45.8% 4.58 ± 20% perf-stat.overall.cpi
0.32 ± 13% -29.6% 0.23 ± 20% perf-stat.overall.ipc
37.43 ± 11% -33.1 4.30 ± 57% perf-stat.overall.node-load-miss-rate%
36.40 ± 15% -34.8 1.58 ± 99% perf-stat.overall.node-store-miss-rate%
7440 ± 4% -15.4% 6296 ± 6% perf-stat.ps.minor-faults
855845 ± 24% -83.9% 138073 ± 21% perf-stat.ps.node-load-misses
1416749 ± 20% +257.9% 5070429 ± 69% perf-stat.ps.node-loads
1837180 ± 35% -94.4% 103194 ± 56% perf-stat.ps.node-store-misses
7443 ± 4% -15.4% 6297 ± 6% perf-stat.ps.page-faults
2645 ± 6% +21.1% 3205 ± 8% interrupts.CPU0.LOC:Local_timer_interrupts
2659 ± 5% +19.9% 3189 ± 8% interrupts.CPU1.LOC:Local_timer_interrupts
2690 ± 4% +21.8% 3278 ± 10% interrupts.CPU10.LOC:Local_timer_interrupts
2649 ± 5% +22.0% 3233 ± 8% interrupts.CPU11.LOC:Local_timer_interrupts
2665 ± 5% +20.0% 3198 ± 8% interrupts.CPU12.LOC:Local_timer_interrupts
2697 ± 7% +18.6% 3200 ± 8% interrupts.CPU13.LOC:Local_timer_interrupts
2668 ± 5% +19.6% 3192 ± 8% interrupts.CPU14.LOC:Local_timer_interrupts
2656 ± 5% +21.3% 3223 ± 7% interrupts.CPU15.LOC:Local_timer_interrupts
2652 ± 5% +20.6% 3199 ± 8% interrupts.CPU16.LOC:Local_timer_interrupts
2680 ± 4% +20.4% 3227 ± 8% interrupts.CPU17.LOC:Local_timer_interrupts
2653 ± 5% +21.5% 3224 ± 9% interrupts.CPU18.LOC:Local_timer_interrupts
2685 ± 5% +20.2% 3228 ± 8% interrupts.CPU19.LOC:Local_timer_interrupts
2676 ± 5% +20.7% 3229 ± 9% interrupts.CPU2.LOC:Local_timer_interrupts
2665 ± 5% +20.6% 3215 ± 9% interrupts.CPU20.LOC:Local_timer_interrupts
2681 ± 5% +19.3% 3198 ± 8% interrupts.CPU21.LOC:Local_timer_interrupts
2653 ± 5% +20.8% 3206 ± 8% interrupts.CPU22.LOC:Local_timer_interrupts
2643 ± 6% +22.8% 3245 ± 8% interrupts.CPU23.LOC:Local_timer_interrupts
2654 ± 5% +21.7% 3231 ± 7% interrupts.CPU24.LOC:Local_timer_interrupts
2649 ± 5% +21.8% 3226 ± 7% interrupts.CPU25.LOC:Local_timer_interrupts
2707 ± 7% +20.0% 3248 ± 7% interrupts.CPU26.LOC:Local_timer_interrupts
2701 ± 6% +20.2% 3247 ± 8% interrupts.CPU27.LOC:Local_timer_interrupts
2701 ± 6% +18.8% 3208 ± 8% interrupts.CPU28.LOC:Local_timer_interrupts
2702 ± 5% +18.4% 3201 ± 8% interrupts.CPU29.LOC:Local_timer_interrupts
2688 ± 6% +18.8% 3193 ± 8% interrupts.CPU30.LOC:Local_timer_interrupts
2641 ± 6% +21.5% 3208 ± 8% interrupts.CPU31.LOC:Local_timer_interrupts
2666 ± 4% +20.4% 3210 ± 8% interrupts.CPU32.LOC:Local_timer_interrupts
2639 ± 5% +21.4% 3204 ± 8% interrupts.CPU33.LOC:Local_timer_interrupts
2656 ± 5% +22.2% 3246 ± 8% interrupts.CPU34.LOC:Local_timer_interrupts
2664 ± 5% +21.8% 3245 ± 8% interrupts.CPU35.LOC:Local_timer_interrupts
2680 ± 6% +20.9% 3241 ± 8% interrupts.CPU36.LOC:Local_timer_interrupts
2650 ± 6% +22.4% 3245 ± 7% interrupts.CPU37.LOC:Local_timer_interrupts
2653 ± 6% +20.8% 3206 ± 8% interrupts.CPU38.LOC:Local_timer_interrupts
2653 ± 5% +21.6% 3226 ± 8% interrupts.CPU39.LOC:Local_timer_interrupts
2650 ± 5% +23.6% 3276 ± 6% interrupts.CPU4.LOC:Local_timer_interrupts
2679 ± 6% +19.6% 3204 ± 8% interrupts.CPU40.LOC:Local_timer_interrupts
2649 ± 5% +20.9% 3204 ± 8% interrupts.CPU41.LOC:Local_timer_interrupts
2666 ± 6% +20.2% 3204 ± 8% interrupts.CPU42.LOC:Local_timer_interrupts
2699 ± 6% +18.9% 3209 ± 8% interrupts.CPU43.LOC:Local_timer_interrupts
2685 ± 8% +21.2% 3254 ± 9% interrupts.CPU44.LOC:Local_timer_interrupts
2718 ± 8% +18.9% 3233 ± 8% interrupts.CPU45.LOC:Local_timer_interrupts
2644 ± 6% +21.0% 3200 ± 9% interrupts.CPU46.LOC:Local_timer_interrupts
2672 ± 5% +19.8% 3201 ± 9% interrupts.CPU47.LOC:Local_timer_interrupts
2662 ± 5% +21.0% 3220 ± 9% interrupts.CPU48.LOC:Local_timer_interrupts
2676 ± 5% +20.2% 3216 ± 8% interrupts.CPU49.LOC:Local_timer_interrupts
2655 ± 5% +20.5% 3200 ± 8% interrupts.CPU5.LOC:Local_timer_interrupts
623.75 +14.3% 712.75 ± 11% interrupts.CPU50.CAL:Function_call_interrupts
2672 ± 6% +19.4% 3191 ± 8% interrupts.CPU50.LOC:Local_timer_interrupts
2653 ± 5% +20.4% 3194 ± 8% interrupts.CPU51.LOC:Local_timer_interrupts
2640 ± 6% +20.9% 3193 ± 8% interrupts.CPU52.LOC:Local_timer_interrupts
2665 ± 6% +19.9% 3196 ± 8% interrupts.CPU53.LOC:Local_timer_interrupts
2657 ± 5% +20.0% 3189 ± 8% interrupts.CPU54.LOC:Local_timer_interrupts
2653 ± 5% +21.1% 3212 ± 9% interrupts.CPU55.LOC:Local_timer_interrupts
2659 ± 5% +20.0% 3192 ± 8% interrupts.CPU56.LOC:Local_timer_interrupts
208.75 ± 59% -84.7% 32.00 ±173% interrupts.CPU56.TLB:TLB_shootdowns
2657 ± 5% +20.2% 3194 ± 8% interrupts.CPU57.LOC:Local_timer_interrupts
2638 ± 6% +21.0% 3192 ± 8% interrupts.CPU58.LOC:Local_timer_interrupts
2667 ± 7% +19.7% 3194 ± 8% interrupts.CPU59.LOC:Local_timer_interrupts
2649 ± 5% +21.0% 3205 ± 8% interrupts.CPU6.LOC:Local_timer_interrupts
236.00 ± 66% -86.2% 32.50 ±169% interrupts.CPU6.TLB:TLB_shootdowns
2677 ± 6% +19.0% 3186 ± 8% interrupts.CPU60.LOC:Local_timer_interrupts
2674 ± 6% +19.3% 3191 ± 8% interrupts.CPU61.LOC:Local_timer_interrupts
2658 ± 5% +20.2% 3194 ± 8% interrupts.CPU62.LOC:Local_timer_interrupts
2666 ± 5% +20.8% 3221 ± 9% interrupts.CPU63.LOC:Local_timer_interrupts
2690 ± 6% +19.9% 3225 ± 10% interrupts.CPU64.LOC:Local_timer_interrupts
2653 ± 5% +20.4% 3194 ± 8% interrupts.CPU65.LOC:Local_timer_interrupts
652.50 ± 9% +21.9% 795.25 ± 18% interrupts.CPU66.CAL:Function_call_interrupts
2669 ± 5% +19.7% 3196 ± 8% interrupts.CPU66.LOC:Local_timer_interrupts
2655 ± 5% +20.2% 3191 ± 8% interrupts.CPU67.LOC:Local_timer_interrupts
2665 ± 5% +20.4% 3207 ± 8% interrupts.CPU68.LOC:Local_timer_interrupts
614.50 +27.0% 780.50 ± 25% interrupts.CPU69.CAL:Function_call_interrupts
2666 ± 5% +21.2% 3232 ± 9% interrupts.CPU69.LOC:Local_timer_interrupts
2678 ± 6% +21.1% 3243 ± 7% interrupts.CPU7.LOC:Local_timer_interrupts
657.50 ± 9% +24.1% 815.75 ± 21% interrupts.CPU70.CAL:Function_call_interrupts
2676 ± 6% +19.9% 3208 ± 8% interrupts.CPU70.LOC:Local_timer_interrupts
657.00 ± 9% +24.2% 815.75 ± 21% interrupts.CPU71.CAL:Function_call_interrupts
2674 ± 6% +19.5% 3194 ± 8% interrupts.CPU71.LOC:Local_timer_interrupts
2653 ± 5% +21.4% 3221 ± 9% interrupts.CPU72.LOC:Local_timer_interrupts
2670 ± 6% +20.0% 3204 ± 8% interrupts.CPU73.LOC:Local_timer_interrupts
2669 ± 6% +19.8% 3197 ± 9% interrupts.CPU74.LOC:Local_timer_interrupts
652.00 ± 9% +25.3% 816.75 ± 21% interrupts.CPU75.CAL:Function_call_interrupts
2641 ± 5% +21.4% 3205 ± 8% interrupts.CPU75.LOC:Local_timer_interrupts
625.00 +25.8% 786.25 ± 16% interrupts.CPU76.CAL:Function_call_interrupts
2660 ± 5% +20.6% 3208 ± 8% interrupts.CPU76.LOC:Local_timer_interrupts
656.75 ± 9% +20.6% 791.75 ± 17% interrupts.CPU77.CAL:Function_call_interrupts
2665 ± 5% +19.6% 3187 ± 8% interrupts.CPU77.LOC:Local_timer_interrupts
657.00 ± 9% +21.3% 796.75 ± 18% interrupts.CPU78.CAL:Function_call_interrupts
2666 ± 5% +19.7% 3190 ± 8% interrupts.CPU78.LOC:Local_timer_interrupts
2660 ± 5% +19.9% 3190 ± 8% interrupts.CPU79.LOC:Local_timer_interrupts
648.75 ± 7% +19.2% 773.00 ± 16% interrupts.CPU8.CAL:Function_call_interrupts
2647 ± 5% +21.3% 3211 ± 7% interrupts.CPU8.LOC:Local_timer_interrupts
2671 ± 6% +19.5% 3191 ± 8% interrupts.CPU80.LOC:Local_timer_interrupts
2660 ± 5% +20.9% 3217 ± 8% interrupts.CPU81.LOC:Local_timer_interrupts
2639 ± 6% +21.6% 3209 ± 8% interrupts.CPU82.LOC:Local_timer_interrupts
2652 ± 5% +21.5% 3223 ± 7% interrupts.CPU83.LOC:Local_timer_interrupts
2656 ± 5% +20.5% 3201 ± 8% interrupts.CPU84.LOC:Local_timer_interrupts
2678 ± 4% +19.8% 3207 ± 8% interrupts.CPU85.LOC:Local_timer_interrupts
2659 ± 5% +20.7% 3209 ± 8% interrupts.CPU86.LOC:Local_timer_interrupts
656.25 ± 9% +24.8% 819.00 ± 22% interrupts.CPU87.CAL:Function_call_interrupts
2656 ± 5% +20.5% 3200 ± 8% interrupts.CPU87.LOC:Local_timer_interrupts
2655 ± 5% +20.9% 3211 ± 8% interrupts.CPU9.LOC:Local_timer_interrupts
234559 ± 5% +20.5% 282643 ± 8% interrupts.LOC:Local_timer_interrupts
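The aggregate interrupts.LOC row above appears to be the sum of the 88 per-CPU LOC rows; a one-line sanity check under that assumption:

    # 88 CPUs at roughly 2665 local timer interrupts each should land
    # near the reported aggregate of 234559.
    print(88 * 2665)   # 234520, close to the reported 234559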
***************************************************************************************************
lkp-bdw-ep6: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_threads/rootfs/tbox_group/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.2/100%/debian-x86_64-2018-04-03.cgz/lkp-bdw-ep6/plzip/0xb00002e
commit:
24d0c1d6e6 ("sched/fair: Do not migrate due to a sync wakeup on exit")
2c83362734 ("sched/fair: Consider SD_NUMA when selecting the most idle group to schedule on")
24d0c1d6e65f635b 2c83362734dad8e48ccc0710b5c
---------------- ---------------------------
%stddev %change %stddev
\ | \
4674851 -5.8% 4402603 plzip.time.minor_page_faults
187.93 -1.7% 184.66 plzip.time.system_time
236915 +3.3% 244744 plzip.time.voluntary_context_switches
20188778 ± 7% +62.1% 32719513 ± 32% cpuidle.POLL.time
0.00 -0.0 0.00 ± 24% mpstat.cpu.soft%
11424 ± 49% -62.2% 4321 ±169% numa-numastat.node0.other_node
2223 -2.7% 2163 vmstat.system.cs
199507 -6.3% 186973 vmstat.system.in
11254 +22.5% 13784 ± 16% numa-meminfo.node0.Mapped
75.00 +544.0% 483.00 ± 81% numa-meminfo.node0.Mlocked
75.00 +544.0% 483.00 ± 81% numa-meminfo.node0.Unevictable
38772 ± 5% -11.6% 34263 ± 10% numa-meminfo.node1.SReclaimable
2793 +25.4% 3502 ± 14% numa-vmstat.node0.nr_mapped
18.50 ± 2% +550.0% 120.25 ± 82% numa-vmstat.node0.nr_mlock
18.50 ± 2% +550.0% 120.25 ± 82% numa-vmstat.node0.nr_unevictable
18.50 ± 2% +550.0% 120.25 ± 82% numa-vmstat.node0.nr_zone_unevictable
11543 ± 49% -60.8% 4519 ±159% numa-vmstat.node0.numa_other
9693 ± 5% -11.6% 8566 ± 10% numa-vmstat.node1.nr_slab_reclaimable
3008454 -9.0% 2737405 proc-vmstat.numa_hint_faults
2421331 -7.5% 2240371 proc-vmstat.numa_hint_faults_local
185494 -11.9% 163404 proc-vmstat.numa_huge_pte_updates
5591805 ± 3% -12.4% 4899638 ± 3% proc-vmstat.numa_pages_migrated
98610869 -11.8% 86933972 proc-vmstat.numa_pte_updates
5570737 -4.9% 5295155 proc-vmstat.pgfault
5591805 ± 3% -12.4% 4899638 ± 3% proc-vmstat.pgmigrate_success
2358 ± 3% -24.9% 1770 ± 14% slabinfo.eventpoll_epi.active_objs
2358 ± 3% -24.9% 1770 ± 14% slabinfo.eventpoll_epi.num_objs
4127 ± 3% -25.6% 3071 ± 15% slabinfo.eventpoll_pwq.active_objs
4127 ± 3% -25.6% 3071 ± 15% slabinfo.eventpoll_pwq.num_objs
767.50 ± 5% -16.6% 640.00 ± 10% slabinfo.file_lock_cache.active_objs
767.50 ± 5% -16.6% 640.00 ± 10% slabinfo.file_lock_cache.num_objs
548.00 ± 12% -13.4% 474.75 ± 17% slabinfo.skbuff_fclone_cache.active_objs
548.00 ± 12% -13.4% 474.75 ± 17% slabinfo.skbuff_fclone_cache.num_objs
3424 -13.7% 2955 ± 8% slabinfo.sock_inode_cache.active_objs
3424 -13.7% 2955 ± 8% slabinfo.sock_inode_cache.num_objs
1.38 +8.7% 1.50 ± 6% sched_debug.cfs_rq:/.nr_spread_over.avg
123441 ± 32% -87.5% 15420 ± 31% sched_debug.cfs_rq:/.spread0.avg
310956 ± 19% -40.1% 186343 ± 7% sched_debug.cfs_rq:/.spread0.max
29.67 ± 5% -7.4% 27.46 ± 5% sched_debug.cpu.cpu_load[1].max
53.50 ± 8% -16.1% 44.88 ± 16% sched_debug.cpu.cpu_load[4].max
4.96 ± 7% -16.0% 4.17 ± 17% sched_debug.cpu.cpu_load[4].stddev
3942 ± 15% -36.1% 2519 ± 42% sched_debug.cpu.curr->pid.min
312.17 ± 4% -12.6% 272.83 ± 4% sched_debug.cpu.sched_goidle.stddev
5485 +31.8% 7231 ± 15% sched_debug.cpu.ttwu_count.max
829.33 ± 6% -10.5% 742.58 ± 12% sched_debug.cpu.ttwu_count.min
3789 ± 3% +38.5% 5249 ± 18% sched_debug.cpu.ttwu_local.max
361.67 ± 16% -24.7% 272.42 ± 9% sched_debug.cpu.ttwu_local.min
42735 ± 4% -9.5% 38655 ± 4% softirqs.CPU1.RCU
43986 -11.1% 39106 ± 6% softirqs.CPU11.RCU
138451 ± 3% -6.2% 129877 ± 3% softirqs.CPU11.TIMER
43616 ± 3% -11.3% 38683 ± 8% softirqs.CPU14.RCU
46391 ± 3% -15.0% 39420 ± 4% softirqs.CPU2.RCU
46679 ± 2% -10.4% 41843 ± 2% softirqs.CPU35.RCU
54405 ± 9% -13.0% 47347 ± 11% softirqs.CPU36.RCU
53662 ± 5% -16.5% 44815 ± 5% softirqs.CPU37.RCU
51563 ± 11% -17.6% 42505 ± 5% softirqs.CPU39.RCU
146287 ± 3% -7.7% 135015 ± 3% softirqs.CPU4.TIMER
51810 ± 8% -12.9% 45103 ± 11% softirqs.CPU41.RCU
144965 ± 3% -7.2% 134488 ± 2% softirqs.CPU5.TIMER
36399 ± 2% +19.8% 43611 ± 2% softirqs.CPU66.RCU
54540 -17.2% 45175 ± 2% softirqs.CPU68.RCU
45027 -10.8% 40167 ± 6% softirqs.CPU7.RCU
46747 ± 10% -20.2% 37293 ± 10% softirqs.CPU77.RCU
42610 ± 2% -10.8% 38011 ± 7% softirqs.CPU83.RCU
43534 ± 3% -10.6% 38935 ± 9% softirqs.CPU87.RCU
45144 ± 4% -8.2% 41426 ± 5% softirqs.CPU9.RCU
2207 -2.7% 2149 perf-stat.i.context-switches
132.50 -6.8% 123.43 ± 2% perf-stat.i.cpu-migrations
0.04 -0.0 0.03 ± 2% perf-stat.i.dTLB-store-miss-rate%
2870039 -7.1% 2665015 perf-stat.i.dTLB-store-misses
69.91 +2.8 72.76 ± 2% perf-stat.i.iTLB-load-miss-rate%
593399 -13.4% 513752 ± 3% perf-stat.i.iTLB-loads
15830 -5.4% 14979 perf-stat.i.minor-faults
5.76 ± 6% -1.1 4.63 ± 12% perf-stat.i.node-load-miss-rate%
29280168 ± 5% -22.1% 22809166 ± 7% perf-stat.i.node-load-misses
5.96 ± 8% -1.4 4.52 ± 11% perf-stat.i.node-store-miss-rate%
4992121 ± 7% -27.5% 3621719 ± 7% perf-stat.i.node-store-misses
15830 -5.4% 14979 perf-stat.i.page-faults
0.04 -0.0 0.03 perf-stat.overall.dTLB-store-miss-rate%
5.46 ± 5% -1.2 4.24 ± 7% perf-stat.overall.node-load-miss-rate%
5.68 ± 7% -1.6 4.12 ± 8% perf-stat.overall.node-store-miss-rate%
2201 -2.6% 2143 perf-stat.ps.context-switches
132.11 -6.8% 123.07 ± 2% perf-stat.ps.cpu-migrations
2861551 -7.1% 2657323 perf-stat.ps.dTLB-store-misses
591695 -13.4% 512284 ± 3% perf-stat.ps.iTLB-loads
15785 -5.4% 14937 perf-stat.ps.minor-faults
29194092 ± 5% -22.1% 22742509 ± 7% perf-stat.ps.node-load-misses
4977398 ± 7% -27.4% 3611184 ± 7% perf-stat.ps.node-store-misses
15785 -5.4% 14937 perf-stat.ps.page-faults
37.67 -3.5 34.22 ± 3% perf-profile.calltrace.cycles-pp.apic_timer_interrupt
36.20 -3.2 32.98 ± 3% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt
30.64 -2.6 28.08 ± 3% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
15.44 ± 8% -2.2 13.23 ± 5% perf-profile.calltrace.cycles-pp.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt
16.59 ± 7% -2.2 14.40 ± 4% perf-profile.calltrace.cycles-pp.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
14.94 ± 8% -2.1 12.84 ± 5% perf-profile.calltrace.cycles-pp.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt
23.31 ± 2% -2.1 21.23 ± 3% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
11.29 ± 5% -1.7 9.60 ± 5% perf-profile.calltrace.cycles-pp.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues
8.27 ± 2% -1.0 7.23 ± 4% perf-profile.calltrace.cycles-pp.task_tick_fair.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer
12.36 -0.6 11.71 ± 4% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
12.36 -0.6 11.71 ± 4% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.88 -0.6 3.31 ± 4% perf-profile.calltrace.cycles-pp.vfs_write.sys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.86 ± 2% -0.6 3.30 ± 4% perf-profile.calltrace.cycles-pp.pipe_write.__vfs_write.vfs_write.sys_write.do_syscall_64
3.88 -0.6 3.32 ± 4% perf-profile.calltrace.cycles-pp.sys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.86 ± 2% -0.6 3.31 ± 3% perf-profile.calltrace.cycles-pp.__vfs_write.vfs_write.sys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
4.08 -0.5 3.56 ± 2% perf-profile.calltrace.cycles-pp.update_load_avg.task_tick_fair.scheduler_tick.update_process_times.tick_sched_handle
2.81 ± 2% -0.4 2.39 ± 7% perf-profile.calltrace.cycles-pp.copy_page_from_iter.pipe_write.__vfs_write.vfs_write.sys_write
2.62 ± 4% -0.4 2.21 ± 6% perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.copyin.copy_page_from_iter.pipe_write.__vfs_write
2.62 ± 4% -0.4 2.21 ± 6% perf-profile.calltrace.cycles-pp.copyin.copy_page_from_iter.pipe_write.__vfs_write.vfs_write
0.87 ± 6% -0.4 0.47 ± 57% perf-profile.calltrace.cycles-pp.native_apic_msr_eoi_write.smp_apic_timer_interrupt.apic_timer_interrupt
0.83 ± 7% -0.4 0.44 ± 58% perf-profile.calltrace.cycles-pp.account_user_time.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues
3.40 ± 3% -0.3 3.12 ± 2% perf-profile.calltrace.cycles-pp.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
3.00 ± 4% -0.3 2.71 perf-profile.calltrace.cycles-pp.__softirqentry_text_start.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
1.44 ± 2% -0.2 1.29 ± 10% perf-profile.calltrace.cycles-pp.update_curr.task_tick_fair.scheduler_tick.update_process_times.tick_sched_handle
0.77 ± 3% -0.1 0.62 ± 12% perf-profile.calltrace.cycles-pp.__remove_hrtimer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
545.00 ± 25% -41.9% 316.75 ± 47% interrupts.36:IR-PCI-MSI.1572867-edge.eth0-TxRx-2
203.50 ± 9% +45.3% 295.75 ± 24% interrupts.39:IR-PCI-MSI.1572870-edge.eth0-TxRx-5
38166429 -11.0% 33961585 interrupts.CAL:Function_call_interrupts
432615 -10.9% 385659 interrupts.CPU0.CAL:Function_call_interrupts
434222 -11.2% 385591 interrupts.CPU1.CAL:Function_call_interrupts
432253 ± 2% -10.6% 386505 interrupts.CPU10.CAL:Function_call_interrupts
432492 ± 2% -11.0% 384996 interrupts.CPU11.CAL:Function_call_interrupts
450925 ± 2% -7.4% 417361 ± 2% interrupts.CPU11.TLB:TLB_shootdowns
431152 -10.4% 386198 interrupts.CPU12.CAL:Function_call_interrupts
7893 -37.4% 4939 ± 34% interrupts.CPU12.NMI:Non-maskable_interrupts
7893 -37.4% 4939 ± 34% interrupts.CPU12.PMI:Performance_monitoring_interrupts
431989 ± 2% -10.8% 385478 interrupts.CPU13.CAL:Function_call_interrupts
432141 -10.8% 385372 interrupts.CPU14.CAL:Function_call_interrupts
430309 -10.2% 386216 interrupts.CPU15.CAL:Function_call_interrupts
545.00 ± 25% -41.9% 316.75 ± 47% interrupts.CPU16.36:IR-PCI-MSI.1572867-edge.eth0-TxRx-2
432466 -10.4% 387465 interrupts.CPU16.CAL:Function_call_interrupts
432582 -11.0% 384925 interrupts.CPU17.CAL:Function_call_interrupts
434853 -11.0% 386855 interrupts.CPU18.CAL:Function_call_interrupts
203.50 ± 9% +45.3% 295.75 ± 24% interrupts.CPU19.39:IR-PCI-MSI.1572870-edge.eth0-TxRx-5
432706 -10.6% 386950 interrupts.CPU19.CAL:Function_call_interrupts
434387 -12.4% 380661 interrupts.CPU2.CAL:Function_call_interrupts
453229 -9.2% 411516 interrupts.CPU2.TLB:TLB_shootdowns
435346 -11.3% 386104 interrupts.CPU20.CAL:Function_call_interrupts
454122 -7.9% 418229 ± 2% interrupts.CPU20.TLB:TLB_shootdowns
434493 -10.9% 387125 interrupts.CPU21.CAL:Function_call_interrupts
621.00 ± 11% +64.5% 1021 ± 20% interrupts.CPU21.RES:Rescheduling_interrupts
425823 ± 3% -9.4% 385897 interrupts.CPU22.CAL:Function_call_interrupts
433468 -11.0% 385821 interrupts.CPU23.CAL:Function_call_interrupts
434913 -11.2% 386097 interrupts.CPU24.CAL:Function_call_interrupts
434374 -11.2% 385692 interrupts.CPU25.CAL:Function_call_interrupts
433527 -10.6% 387596 interrupts.CPU26.CAL:Function_call_interrupts
434308 -11.3% 385115 interrupts.CPU27.CAL:Function_call_interrupts
431181 -10.9% 384326 interrupts.CPU28.CAL:Function_call_interrupts
434448 -11.5% 384490 interrupts.CPU29.CAL:Function_call_interrupts
453474 ± 2% -8.0% 417056 interrupts.CPU29.TLB:TLB_shootdowns
433938 -11.1% 385867 interrupts.CPU3.CAL:Function_call_interrupts
2739 ± 14% -41.8% 1594 ± 27% interrupts.CPU3.RES:Rescheduling_interrupts
433773 -11.0% 386014 interrupts.CPU30.CAL:Function_call_interrupts
434250 -11.1% 385917 interrupts.CPU31.CAL:Function_call_interrupts
436173 -11.6% 385778 interrupts.CPU32.CAL:Function_call_interrupts
455271 ± 2% -8.0% 418794 ± 2% interrupts.CPU32.TLB:TLB_shootdowns
435787 -11.4% 386189 interrupts.CPU33.CAL:Function_call_interrupts
436438 -11.5% 386321 interrupts.CPU34.CAL:Function_call_interrupts
433917 -11.4% 384653 interrupts.CPU35.CAL:Function_call_interrupts
434362 -10.8% 387306 interrupts.CPU36.CAL:Function_call_interrupts
434686 -11.3% 385416 interrupts.CPU37.CAL:Function_call_interrupts
453983 ± 2% -7.8% 418472 interrupts.CPU37.TLB:TLB_shootdowns
433245 -10.7% 386871 interrupts.CPU38.CAL:Function_call_interrupts
430705 -10.0% 387585 interrupts.CPU39.CAL:Function_call_interrupts
434411 -11.2% 385547 interrupts.CPU4.CAL:Function_call_interrupts
1251 ± 9% +90.3% 2380 ± 32% interrupts.CPU4.RES:Rescheduling_interrupts
434631 -11.1% 386349 interrupts.CPU40.CAL:Function_call_interrupts
436453 -11.6% 385886 interrupts.CPU41.CAL:Function_call_interrupts
436514 -11.5% 386455 interrupts.CPU42.CAL:Function_call_interrupts
432629 ± 2% -11.0% 385234 interrupts.CPU43.CAL:Function_call_interrupts
1982 ± 40% -39.4% 1201 ± 44% interrupts.CPU43.RES:Rescheduling_interrupts
438403 -11.0% 390257 interrupts.CPU44.CAL:Function_call_interrupts
437673 -10.6% 391240 interrupts.CPU45.CAL:Function_call_interrupts
436956 -10.0% 393070 interrupts.CPU46.CAL:Function_call_interrupts
436535 ± 2% -10.3% 391586 interrupts.CPU47.CAL:Function_call_interrupts
436404 ± 2% -10.3% 391331 interrupts.CPU48.CAL:Function_call_interrupts
440924 -11.0% 392351 interrupts.CPU49.CAL:Function_call_interrupts
455336 ± 2% -8.0% 418988 ± 2% interrupts.CPU49.TLB:TLB_shootdowns
429597 -10.2% 385817 interrupts.CPU5.CAL:Function_call_interrupts
2657 ± 16% -57.7% 1125 ± 38% interrupts.CPU5.RES:Rescheduling_interrupts
432290 -10.8% 385504 interrupts.CPU50.CAL:Function_call_interrupts
430785 ± 2% -10.2% 386960 interrupts.CPU51.CAL:Function_call_interrupts
433428 ± 2% -11.0% 385627 interrupts.CPU52.CAL:Function_call_interrupts
452795 ± 2% -7.6% 418273 interrupts.CPU52.TLB:TLB_shootdowns
432275 -10.8% 385758 interrupts.CPU53.CAL:Function_call_interrupts
432401 -10.8% 385721 interrupts.CPU54.CAL:Function_call_interrupts
433685 -11.2% 384984 interrupts.CPU55.CAL:Function_call_interrupts
434907 -11.3% 385865 interrupts.CPU56.CAL:Function_call_interrupts
454111 ± 2% -7.7% 419097 ± 2% interrupts.CPU56.TLB:TLB_shootdowns
433831 ± 2% -11.6% 383393 interrupts.CPU57.CAL:Function_call_interrupts
452988 ± 2% -8.1% 416171 interrupts.CPU57.TLB:TLB_shootdowns
434640 -11.0% 386694 interrupts.CPU58.CAL:Function_call_interrupts
433779 -11.1% 385790 ± 2% interrupts.CPU59.CAL:Function_call_interrupts
433594 -11.3% 384456 ± 2% interrupts.CPU6.CAL:Function_call_interrupts
433902 -11.5% 383949 interrupts.CPU60.CAL:Function_call_interrupts
453607 -8.1% 416661 interrupts.CPU60.TLB:TLB_shootdowns
432735 -10.7% 386644 interrupts.CPU61.CAL:Function_call_interrupts
430100 ± 2% -10.3% 385648 interrupts.CPU62.CAL:Function_call_interrupts
1361 ± 17% -46.0% 734.75 ± 21% interrupts.CPU62.RES:Rescheduling_interrupts
430135 ± 2% -10.4% 385243 interrupts.CPU63.CAL:Function_call_interrupts
433506 -11.4% 384298 interrupts.CPU64.CAL:Function_call_interrupts
428442 -10.6% 383084 interrupts.CPU65.CAL:Function_call_interrupts
934.00 +41.7% 1323 ± 6% interrupts.CPU65.RES:Rescheduling_interrupts
429851 -10.2% 385861 ± 2% interrupts.CPU66.CAL:Function_call_interrupts
432662 -10.6% 386768 interrupts.CPU67.CAL:Function_call_interrupts
1137 ± 30% -37.3% 713.25 ± 22% interrupts.CPU67.RES:Rescheduling_interrupts
435998 -11.2% 387345 interrupts.CPU68.CAL:Function_call_interrupts
433910 -11.1% 385547 ± 2% interrupts.CPU69.CAL:Function_call_interrupts
433665 -11.1% 385376 interrupts.CPU7.CAL:Function_call_interrupts
433964 -11.2% 385200 interrupts.CPU70.CAL:Function_call_interrupts
434280 -11.2% 385800 interrupts.CPU71.CAL:Function_call_interrupts
436755 -12.1% 383884 interrupts.CPU72.CAL:Function_call_interrupts
434534 -11.1% 386259 interrupts.CPU73.CAL:Function_call_interrupts
435838 -11.3% 386437 interrupts.CPU74.CAL:Function_call_interrupts
849.50 +22.9% 1044 ± 15% interrupts.CPU74.RES:Rescheduling_interrupts
431545 -11.2% 383340 interrupts.CPU75.CAL:Function_call_interrupts
435028 -11.6% 384709 interrupts.CPU76.CAL:Function_call_interrupts
434623 -11.7% 383907 interrupts.CPU77.CAL:Function_call_interrupts
432569 -11.0% 384848 interrupts.CPU78.CAL:Function_call_interrupts
433842 -12.4% 379948 ± 3% interrupts.CPU79.CAL:Function_call_interrupts
453571 -9.3% 411502 ± 3% interrupts.CPU79.TLB:TLB_shootdowns
434907 -11.6% 384337 ± 2% interrupts.CPU8.CAL:Function_call_interrupts
435116 -11.4% 385431 interrupts.CPU80.CAL:Function_call_interrupts
677.50 ± 25% -24.1% 514.00 ± 28% interrupts.CPU80.RES:Rescheduling_interrupts
432192 -11.0% 384794 interrupts.CPU81.CAL:Function_call_interrupts
433320 -10.7% 387090 interrupts.CPU82.CAL:Function_call_interrupts
1174 ± 9% -50.1% 586.50 ± 29% interrupts.CPU82.RES:Rescheduling_interrupts
435824 -11.9% 384143 interrupts.CPU83.CAL:Function_call_interrupts
434477 -11.0% 386568 interrupts.CPU84.CAL:Function_call_interrupts
432705 -10.4% 387511 interrupts.CPU85.CAL:Function_call_interrupts
434758 ± 2% -11.5% 384778 interrupts.CPU86.CAL:Function_call_interrupts
454777 ± 2% -7.9% 418646 interrupts.CPU86.TLB:TLB_shootdowns
432224 -11.3% 383338 interrupts.CPU87.CAL:Function_call_interrupts
432930 -11.2% 384537 interrupts.CPU9.CAL:Function_call_interrupts
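For readers triaging these reports, a hypothetical helper (not part of lkp-tests; the line pattern and threshold are assumptions) that scans saved report text for large regressions:

    import re

    # Matches lines such as "753802  -11.9%  663774  reaim.jobs_per_min",
    # with optional "± N%" stddev annotations around either mean.
    PAT = re.compile(r'([\d.e+]+)\s+(?:±\s*\d+%\s+)?'   # good-commit mean
                     r'([+-][\d.]+)%\s+'                # %change
                     r'([\d.e+]+)\s+(?:±\s*\d+%\s+)?'   # bad-commit mean
                     r'(\S+)$')                         # metric name

    def regressions(report_text, threshold=-10.0):
        """Yield (metric, %change) pairs at or below the threshold."""
        for line in report_text.splitlines():
            m = PAT.search(line.strip())
            if m and float(m.group(2)) <= threshold:
                yield m.group(4), float(m.group(2))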
***************************************************************************************************
lkp-bdw-ex1: 192 threads Intel(R) Xeon(R) CPU E7-8890 v4 @ 2.20GHz with 512G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_task/rootfs/runtime/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.2/100%/debian-x86_64-2018-04-03.cgz/300s/lkp-bdw-ex1/all_utime/reaim/0xb00002e
commit:
24d0c1d6e6 ("sched/fair: Do not migrate due to a sync wakeup on exit")
2c83362734 ("sched/fair: Consider SD_NUMA when selecting the most idle group to schedule on")
24d0c1d6e65f635b 2c83362734dad8e48ccc0710b5c
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
1:4 -25% :4 dmesg.WARNING:stack_going_in_the_wrong_direction?ip=schedule_tail/0x
%stddev %change %stddev
\ | \
0.08 -10.2% 0.07 reaim.child_systime
753802 -11.9% 663774 reaim.jobs_per_min
3926 -11.9% 3457 reaim.jobs_per_min_child
97.65 -3.8% 93.97 reaim.jti
806247 -6.3% 755738 reaim.max_jobs_per_min
1.56 +13.6% 1.78 reaim.parent_time
1.88 ± 3% +194.4% 5.53 ± 4% reaim.std_dev_percent
0.03 ± 3% +217.2% 0.09 ± 4% reaim.std_dev_time
104138 +86.0% 193748 reaim.time.involuntary_context_switches
1074884 +2.8% 1105377 reaim.time.minor_page_faults
7721 -6.3% 7236 reaim.time.percent_of_cpu_this_job_got
23369 -6.5% 21847 reaim.time.user_time
66048 -1.7% 64924 reaim.time.voluntary_context_switches
1612800 -5.7% 1521600 reaim.workload
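
For readers new to these comparison tables: each row lists the mean on the base
commit, the percent change, and the mean on the patched commit, with "± N%"
giving the relative standard deviation across the runs. Taking
reaim.jobs_per_min above as a worked example:

    (663774 - 753802) / 753802 * 100 ~= -11.9%

which is exactly the throughput change reported in that row.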
1.448e+08 ± 4% +75.9% 2.548e+08 ± 27% cpuidle.POLL.time
449918 ± 4% -25.3% 336278 ± 15% numa-numastat.node1.local_node
462355 ± 2% -23.2% 354953 ± 14% numa-numastat.node1.numa_hit
59.00 +5.1% 62.00 vmstat.cpu.id
40.00 -7.5% 37.00 vmstat.cpu.us
2033 +18.8% 2415 vmstat.system.cs
1092 -5.8% 1029 turbostat.Avg_MHz
12.57 +17.7% 14.79 turbostat.CPU%c1
2.17 ± 5% -26.9% 1.59 ± 13% turbostat.Pkg%pc6
20.95 +7.4% 22.50 ± 4% turbostat.RAMWatt
22388 ± 34% -62.8% 8322 ± 85% numa-vmstat.node1.nr_active_anon
6845 ± 15% -38.8% 4189 ± 10% numa-vmstat.node1.nr_slab_reclaimable
17978 ± 3% -26.0% 13303 ± 10% numa-vmstat.node1.nr_slab_unreclaimable
22388 ± 34% -62.8% 8322 ± 85% numa-vmstat.node1.nr_zone_active_anon
2740 ± 62% -95.1% 135.50 ± 77% numa-vmstat.node3.nr_inactive_anon
2921 ± 59% -91.1% 259.00 ± 31% numa-vmstat.node3.nr_shmem
2740 ± 62% -95.1% 135.50 ± 77% numa-vmstat.node3.nr_zone_inactive_anon
101194 ± 30% -53.7% 46870 ± 62% numa-meminfo.node1.Active
89611 ± 34% -62.9% 33266 ± 85% numa-meminfo.node1.Active(anon)
566915 ± 3% -18.3% 463403 ± 8% numa-meminfo.node1.MemUsed
27379 ± 15% -38.8% 16759 ± 10% numa-meminfo.node1.SReclaimable
71912 ± 3% -26.0% 53214 ± 10% numa-meminfo.node1.SUnreclaim
99292 ± 5% -29.5% 69973 ± 8% numa-meminfo.node1.Slab
10962 ± 62% -95.0% 543.25 ± 77% numa-meminfo.node3.Inactive(anon)
11687 ± 59% -91.1% 1037 ± 31% numa-meminfo.node3.Shmem
77999 -3.5% 75239 proc-vmstat.nr_slab_unreclaimable
38308 ± 13% +242.0% 131006 ± 2% proc-vmstat.numa_hint_faults
780.75 ±100% +571.6% 5243 ± 75% proc-vmstat.numa_hint_faults_local
1910995 +2.1% 1950408 proc-vmstat.numa_hit
1854894 +2.1% 1894321 proc-vmstat.numa_local
40180 ± 21% +204.9% 122514 ± 4% proc-vmstat.numa_pages_migrated
145766 ± 44% +152.1% 367445 proc-vmstat.numa_pte_updates
2082332 +1.5% 2113859 proc-vmstat.pgalloc_normal
2106944 +1.5% 2139589 proc-vmstat.pgfault
2018610 +1.6% 2050051 proc-vmstat.pgfree
40180 ± 21% +204.9% 122514 ± 4% proc-vmstat.pgmigrate_success
9772 ± 7% +48.0% 14465 ± 12% softirqs.CPU0.SCHED
62268 +13.2% 70466 softirqs.CPU0.TIMER
14263 ± 6% +19.6% 17059 ± 7% softirqs.CPU148.RCU
53614 +10.2% 59094 ± 6% softirqs.CPU148.TIMER
15228 ± 3% +19.8% 18246 ± 14% softirqs.CPU37.RCU
13686 ± 9% +19.4% 16346 ± 7% softirqs.CPU53.RCU
14747 ± 9% +15.4% 17023 ± 7% softirqs.CPU54.RCU
14975 ± 4% +20.3% 18017 ± 18% softirqs.CPU55.RCU
14654 ± 4% +12.7% 16510 ± 9% softirqs.CPU59.RCU
14608 ± 4% +9.0% 15927 ± 8% softirqs.CPU60.RCU
14141 +13.6% 16067 ± 7% softirqs.CPU61.RCU
13788 ± 3% +28.1% 17667 ± 19% softirqs.CPU63.RCU
14460 ± 7% -16.3% 12105 ± 9% softirqs.CPU96.RCU
3871 ± 3% -23.7% 2953 ± 7% slabinfo.biovec-64.active_objs
3871 ± 3% -23.7% 2953 ± 7% slabinfo.biovec-64.num_objs
4456 ± 4% -66.5% 1490 ± 3% slabinfo.buffer_head.active_objs
4456 ± 4% -66.5% 1490 ± 3% slabinfo.buffer_head.num_objs
9327 ± 7% -49.8% 4681 ± 20% slabinfo.eventpoll_epi.active_objs
9327 ± 7% -49.8% 4681 ± 20% slabinfo.eventpoll_epi.num_objs
8161 ± 7% -49.8% 4096 ± 20% slabinfo.eventpoll_pwq.active_objs
8161 ± 7% -49.8% 4096 ± 20% slabinfo.eventpoll_pwq.num_objs
870.00 ± 10% -22.0% 678.25 ± 5% slabinfo.file_lock_cache.active_objs
870.00 ± 10% -22.0% 678.25 ± 5% slabinfo.file_lock_cache.num_objs
11826 -12.3% 10377 ± 4% slabinfo.shmem_inode_cache.active_objs
11826 -12.3% 10377 ± 4% slabinfo.shmem_inode_cache.num_objs
7919 ± 2% -16.7% 6596 ± 2% slabinfo.sighand_cache.active_objs
8073 ± 2% -18.0% 6616 ± 2% slabinfo.sighand_cache.num_objs
13382 -20.2% 10683 slabinfo.signal_cache.active_objs
13441 -20.4% 10695 slabinfo.signal_cache.num_objs
9837 +26.3% 12423 slabinfo.sigqueue.active_objs
9837 +26.3% 12423 slabinfo.sigqueue.num_objs
6269 ± 3% -20.0% 5015 ± 6% slabinfo.sock_inode_cache.active_objs
6269 ± 3% -20.0% 5015 ± 6% slabinfo.sock_inode_cache.num_objs
16.88 ± 23% -8.9 7.94 ± 20% perf-profile.calltrace.cycles-pp.apic_timer_interrupt.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
16.79 ± 23% -8.9 7.88 ± 20% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle.cpu_startup_entry
11.45 ± 34% -7.5 3.90 ± 28% perf-profile.calltrace.cycles-pp.irq_enter.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle
11.06 ± 35% -7.4 3.63 ± 29% perf-profile.calltrace.cycles-pp.tick_irq_enter.irq_enter.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state
9.53 ± 42% -6.6 2.90 ± 22% perf-profile.calltrace.cycles-pp.tick_do_update_jiffies64.tick_irq_enter.irq_enter.smp_apic_timer_interrupt.apic_timer_interrupt
9.49 ± 42% -6.6 2.88 ± 22% perf-profile.calltrace.cycles-pp._raw_spin_lock.tick_do_update_jiffies64.tick_irq_enter.irq_enter.smp_apic_timer_interrupt
8.59 ± 41% -6.2 2.44 ± 29% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.tick_do_update_jiffies64.tick_irq_enter.irq_enter
6.58 ± 4% -1.5 5.11 ± 11% perf-profile.calltrace.cycles-pp.apic_timer_interrupt
6.38 ± 4% -1.4 4.93 ± 11% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt
5.84 ± 4% -1.4 4.48 ± 12% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
4.18 ± 6% -1.0 3.16 ± 12% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
2.96 ± 3% -0.8 2.14 ± 12% perf-profile.calltrace.cycles-pp.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
2.70 ± 4% -0.8 1.91 ± 13% perf-profile.calltrace.cycles-pp.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt
3.17 ± 3% -0.8 2.40 ± 14% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle
2.67 ± 4% -0.8 1.90 ± 13% perf-profile.calltrace.cycles-pp.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt
1.98 ± 7% -0.8 1.20 ± 13% perf-profile.calltrace.cycles-pp.task_tick_fair.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer
2.38 ± 5% -0.7 1.65 ± 13% perf-profile.calltrace.cycles-pp.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues
1.97 ± 4% -0.5 1.49 ± 12% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state
1.59 ± 7% -0.5 1.13 ± 17% perf-profile.calltrace.cycles-pp.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle
1.39 ± 5% -0.4 0.97 ± 17% perf-profile.calltrace.cycles-pp.__tick_nohz_idle_enter.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state
0.68 ± 8% -0.4 0.29 ±100% perf-profile.calltrace.cycles-pp.ktime_get_update_offsets_now.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
1.19 ± 6% -0.4 0.81 ± 18% perf-profile.calltrace.cycles-pp.tick_nohz_stop_sched_tick.__tick_nohz_idle_enter.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
0.82 ± 6% -0.2 0.62 ± 11% perf-profile.calltrace.cycles-pp.cpuidle_enter_state
86.66 +3.0 89.68 perf-profile.calltrace.cycles-pp.secondary_startup_64
86.09 +3.1 89.20 perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
86.09 +3.1 89.20 perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
86.09 +3.1 89.20 perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
84.14 +3.5 87.62 perf-profile.calltrace.cycles-pp.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
9.63 ± 15% +5.5 15.11 ± 20% perf-profile.calltrace.cycles-pp.poll_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
56.81 ± 6% +7.1 63.92 ± 2% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
0.00 ± 22% +4.8e+24% 71.37 ±105% sched_debug.cfs_rq:/.MIN_vruntime.stddev
612.74 ± 12% +597.4% 4272 ± 9% sched_debug.cfs_rq:/.exec_clock.stddev
433198 ± 28% -50.6% 214019 ± 58% sched_debug.cfs_rq:/.load.max
0.00 ± 22% +4.8e+24% 71.37 ±105% sched_debug.cfs_rq:/.max_vruntime.stddev
72913 ± 11% +344.0% 323724 ± 12% sched_debug.cfs_rq:/.min_vruntime.stddev
370.32 ± 23% -45.8% 200.85 ± 64% sched_debug.cfs_rq:/.runnable_load_avg.max
432768 ± 28% -52.6% 204920 ± 65% sched_debug.cfs_rq:/.runnable_weight.max
70393 ±132% -862.4% -536655 sched_debug.cfs_rq:/.spread0.avg
333239 ± 42% -70.2% 99452 ± 81% sched_debug.cfs_rq:/.spread0.max
-139349 +878.6% -1363651 sched_debug.cfs_rq:/.spread0.min
72941 ± 11% +344.6% 324292 ± 12% sched_debug.cfs_rq:/.spread0.stddev
276.49 ± 35% +61.9% 447.55 ± 28% sched_debug.cfs_rq:/.util_avg.avg
109.18 ± 13% +77.2% 193.51 ± 19% sched_debug.cfs_rq:/.util_avg.stddev
24.69 ± 30% -43.1% 14.05 ± 18% sched_debug.cpu.clock.stddev
24.69 ± 30% -43.1% 14.05 ± 18% sched_debug.cpu.clock_task.stddev
304.28 ± 18% -42.8% 173.94 ± 59% sched_debug.cpu.cpu_load[3].max
22.83 ± 18% -40.2% 13.66 ± 54% sched_debug.cpu.cpu_load[3].stddev
2.88 ± 9% +14.8% 3.30 ± 7% sched_debug.cpu.cpu_load[4].avg
231.47 ± 14% -35.1% 150.19 ± 45% sched_debug.cpu.cpu_load[4].max
17.23 ± 14% -31.6% 11.79 ± 43% sched_debug.cpu.cpu_load[4].stddev
433198 ± 28% -50.6% 214019 ± 58% sched_debug.cpu.load.max
37141 ± 29% -47.0% 19685 ± 56% sched_debug.cpu.load.stddev
0.00 ± 29% -26.6% 0.00 ± 16% sched_debug.cpu.next_balance.stddev
1243 ± 7% +228.3% 4082 ± 12% sched_debug.cpu.nr_load_updates.stddev
0.15 ± 21% +37.7% 0.20 ± 31% sched_debug.cpu.nr_running.stddev
926.31 ± 7% -15.7% 780.48 ± 4% sched_debug.cpu.nr_switches.min
1739 ± 8% +24.6% 2167 ± 8% sched_debug.cpu.nr_switches.stddev
-6.12 +51.0% -9.25 sched_debug.cpu.nr_uninterruptible.min
3408 ± 12% +120.8% 7525 ± 20% sched_debug.cpu.sched_goidle.max
187.60 ± 10% -23.7% 143.10 ± 10% sched_debug.cpu.sched_goidle.min
501.83 ± 2% +49.4% 749.50 ± 9% sched_debug.cpu.sched_goidle.stddev
4105 ± 11% +145.9% 10094 ± 12% sched_debug.cpu.ttwu_count.max
591.57 ± 5% +66.5% 985.13 ± 11% sched_debug.cpu.ttwu_count.stddev
177.60 ± 11% +60.5% 285.13 ± 7% sched_debug.cpu.ttwu_local.stddev
0.02 ± 27% +33.0% 0.03 ± 12% sched_debug.rt_rq:/.rt_time.max
0.00 ± 20% +51.7% 0.00 ± 17% sched_debug.rt_rq:/.rt_time.stddev
6.249e+09 -4.9% 5.943e+09 perf-stat.i.branch-instructions
733533 ± 4% -12.9% 638917 ± 3% perf-stat.i.cache-misses
2001 +19.5% 2392 perf-stat.i.context-switches
2.108e+11 -5.9% 1.983e+11 perf-stat.i.cpu-cycles
124.23 +51.2% 187.86 perf-stat.i.cpu-migrations
404084 ± 2% -17.4% 333693 ± 3% perf-stat.i.cycles-between-cache-misses
4.2e+10 -5.1% 3.984e+10 perf-stat.i.dTLB-loads
0.06 ± 3% -0.0 0.06 ± 2% perf-stat.i.dTLB-store-miss-rate%
787597 -2.2% 770036 perf-stat.i.dTLB-store-misses
2.75e+10 -5.2% 2.606e+10 perf-stat.i.dTLB-stores
83.86 -4.9 78.94 perf-stat.i.iTLB-load-miss-rate%
989375 +1.9% 1008403 perf-stat.i.iTLB-load-misses
153254 ± 6% +31.7% 201890 ± 2% perf-stat.i.iTLB-loads
1.251e+11 -5.3% 1.184e+11 perf-stat.i.instructions
406474 ± 2% -13.6% 351010 perf-stat.i.instructions-per-iTLB-miss
6821 +1.9% 6953 perf-stat.i.minor-faults
279978 -9.6% 253077 ± 2% perf-stat.i.node-load-misses
6824 +1.9% 6954 perf-stat.i.page-faults
0.63 ± 2% +7.1% 0.68 perf-stat.overall.MPKI
0.26 +0.0 0.28 ± 3% perf-stat.overall.branch-miss-rate%
0.94 ± 6% -0.1 0.80 ± 3% perf-stat.overall.cache-miss-rate%
284356 ± 4% +8.2% 307605 ± 3% perf-stat.overall.cycles-between-cache-misses
0.01 +0.0 0.01 perf-stat.overall.dTLB-load-miss-rate%
0.00 +0.0 0.00 perf-stat.overall.dTLB-store-miss-rate%
86.63 -3.2 83.38 perf-stat.overall.iTLB-load-miss-rate%
124752 -7.0% 115987 perf-stat.overall.instructions-per-iTLB-miss
6.168e+09 -4.8% 5.869e+09 perf-stat.ps.branch-instructions
734409 ± 4% -13.1% 638302 ± 2% perf-stat.ps.cache-misses
1996 +19.2% 2380 perf-stat.ps.context-switches
2.084e+11 -5.9% 1.962e+11 perf-stat.ps.cpu-cycles
123.49 +50.6% 185.98 perf-stat.ps.cpu-migrations
4.154e+10 -5.1% 3.942e+10 perf-stat.ps.dTLB-loads
789367 -2.3% 771264 perf-stat.ps.dTLB-store-misses
2.72e+10 -5.2% 2.579e+10 perf-stat.ps.dTLB-stores
990459 +1.9% 1008890 perf-stat.ps.iTLB-load-misses
152951 ± 6% +31.5% 201083 ± 2% perf-stat.ps.iTLB-loads
1.236e+11 -5.3% 1.17e+11 perf-stat.ps.instructions
6818 +1.9% 6949 perf-stat.ps.minor-faults
280027 -9.8% 252575 ± 2% perf-stat.ps.node-load-misses
6819 +1.9% 6949 perf-stat.ps.page-faults
3.747e+13 -5.6% 3.536e+13 perf-stat.total.instructions
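
The derived perf-stat metrics follow from the raw counters; for instance, up to
interval-averaging differences between the sampled (.i), per-second (.ps) and
overall views, for the base commit:

    instructions-per-iTLB-miss ~= perf-stat.ps.instructions / perf-stat.ps.iTLB-load-misses
                               ~= 1.236e+11 / 990459 ~= 1.25e+05

which matches the 124752 reported in perf-stat.overall.instructions-per-iTLB-miss.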
156.25 ± 4% +253.4% 552.25 ± 78% interrupts.132:PCI-MSI.1574914-edge.eth3-TxRx-2
2522 ± 24% +31.5% 3317 ± 3% interrupts.CPU0.NMI:Non-maskable_interrupts
2522 ± 24% +31.5% 3317 ± 3% interrupts.CPU0.PMI:Performance_monitoring_interrupts
9419 ± 4% +114.1% 20162 ± 15% interrupts.CPU0.RES:Rescheduling_interrupts
2454 ± 23% +27.4% 3126 ± 5% interrupts.CPU10.NMI:Non-maskable_interrupts
2454 ± 23% +27.4% 3126 ± 5% interrupts.CPU10.PMI:Performance_monitoring_interrupts
2438 ± 23% +44.2% 3516 ± 22% interrupts.CPU11.NMI:Non-maskable_interrupts
2438 ± 23% +44.2% 3516 ± 22% interrupts.CPU11.PMI:Performance_monitoring_interrupts
2807 ± 2% +11.9% 3141 ± 4% interrupts.CPU119.NMI:Non-maskable_interrupts
2807 ± 2% +11.9% 3141 ± 4% interrupts.CPU119.PMI:Performance_monitoring_interrupts
2800 ± 2% +12.0% 3135 ± 5% interrupts.CPU12.NMI:Non-maskable_interrupts
2800 ± 2% +12.0% 3135 ± 5% interrupts.CPU12.PMI:Performance_monitoring_interrupts
2800 +13.9% 3190 ± 6% interrupts.CPU120.NMI:Non-maskable_interrupts
2800 +13.9% 3190 ± 6% interrupts.CPU120.PMI:Performance_monitoring_interrupts
62.25 ± 47% +101.6% 125.50 ± 39% interrupts.CPU121.RES:Rescheduling_interrupts
2800 ± 2% +12.2% 3143 ± 5% interrupts.CPU13.NMI:Non-maskable_interrupts
2800 ± 2% +12.2% 3143 ± 5% interrupts.CPU13.PMI:Performance_monitoring_interrupts
2770 +14.5% 3171 ± 6% interrupts.CPU131.NMI:Non-maskable_interrupts
2770 +14.5% 3171 ± 6% interrupts.CPU131.PMI:Performance_monitoring_interrupts
63.00 ± 46% +174.2% 172.75 ± 33% interrupts.CPU133.RES:Rescheduling_interrupts
73.75 ± 27% +1000.7% 811.75 ±142% interrupts.CPU134.RES:Rescheduling_interrupts
118.25 ± 22% +35.7% 160.50 ± 16% interrupts.CPU137.RES:Rescheduling_interrupts
108.50 ± 35% +241.0% 370.00 ± 44% interrupts.CPU141.RES:Rescheduling_interrupts
2791 ± 2% +22.5% 3420 ± 5% interrupts.CPU145.NMI:Non-maskable_interrupts
2791 ± 2% +22.5% 3420 ± 5% interrupts.CPU145.PMI:Performance_monitoring_interrupts
2776 ± 2% +14.8% 3188 ± 4% interrupts.CPU15.NMI:Non-maskable_interrupts
2776 ± 2% +14.8% 3188 ± 4% interrupts.CPU15.PMI:Performance_monitoring_interrupts
2813 ± 2% +15.5% 3248 ± 4% interrupts.CPU16.NMI:Non-maskable_interrupts
2813 ± 2% +15.5% 3248 ± 4% interrupts.CPU16.PMI:Performance_monitoring_interrupts
124.50 ± 40% +134.5% 292.00 ± 38% interrupts.CPU16.RES:Rescheduling_interrupts
2801 ± 2% +11.4% 3119 ± 3% interrupts.CPU17.NMI:Non-maskable_interrupts
2801 ± 2% +11.4% 3119 ± 3% interrupts.CPU17.PMI:Performance_monitoring_interrupts
2802 ± 2% +17.4% 3288 ± 9% interrupts.CPU183.NMI:Non-maskable_interrupts
2802 ± 2% +17.4% 3288 ± 9% interrupts.CPU183.PMI:Performance_monitoring_interrupts
44.00 ± 31% +383.5% 212.75 ± 77% interrupts.CPU183.RES:Rescheduling_interrupts
2809 +17.5% 3301 ± 10% interrupts.CPU189.NMI:Non-maskable_interrupts
2809 +17.5% 3301 ± 10% interrupts.CPU189.PMI:Performance_monitoring_interrupts
2848 ± 3% +11.6% 3179 interrupts.CPU19.NMI:Non-maskable_interrupts
2848 ± 3% +11.6% 3179 interrupts.CPU19.PMI:Performance_monitoring_interrupts
82.00 ± 45% +145.7% 201.50 ± 53% interrupts.CPU190.RES:Rescheduling_interrupts
2797 ± 2% +25.5% 3509 ± 8% interrupts.CPU191.NMI:Non-maskable_interrupts
2797 ± 2% +25.5% 3509 ± 8% interrupts.CPU191.PMI:Performance_monitoring_interrupts
156.25 ± 4% +253.4% 552.25 ± 78% interrupts.CPU2.132:PCI-MSI.1574914-edge.eth3-TxRx-2
668.50 ± 16% +290.0% 2607 ± 31% interrupts.CPU2.RES:Rescheduling_interrupts
2852 +14.2% 3256 ± 7% interrupts.CPU23.NMI:Non-maskable_interrupts
2852 +14.2% 3256 ± 7% interrupts.CPU23.PMI:Performance_monitoring_interrupts
190.50 ± 18% +169.8% 514.00 ± 91% interrupts.CPU26.RES:Rescheduling_interrupts
570.25 ± 19% +109.6% 1195 ± 15% interrupts.CPU3.RES:Rescheduling_interrupts
87.25 ± 29% +769.6% 758.75 ±119% interrupts.CPU35.RES:Rescheduling_interrupts
96.00 ± 27% +182.0% 270.75 ± 33% interrupts.CPU38.RES:Rescheduling_interrupts
1619 ± 96% -90.1% 160.00 ±101% interrupts.CPU49.RES:Rescheduling_interrupts
157.75 ± 25% -38.5% 97.00 ± 29% interrupts.CPU59.RES:Rescheduling_interrupts
140.50 ± 33% -46.6% 75.00 ± 67% interrupts.CPU62.RES:Rescheduling_interrupts
152.25 ± 39% +160.8% 397.00 ± 34% interrupts.CPU72.RES:Rescheduling_interrupts
176.25 ± 29% -50.1% 88.00 ± 33% interrupts.CPU86.RES:Rescheduling_interrupts
60.50 ± 47% +114.0% 129.50 ± 35% interrupts.CPU87.RES:Rescheduling_interrupts
2799 ± 2% +23.8% 3466 ± 7% interrupts.CPU89.NMI:Non-maskable_interrupts
2799 ± 2% +23.8% 3466 ± 7% interrupts.CPU89.PMI:Performance_monitoring_interrupts
2813 ± 3% +17.3% 3300 ± 8% interrupts.CPU95.NMI:Non-maskable_interrupts
2813 ± 3% +17.3% 3300 ± 8% interrupts.CPU95.PMI:Performance_monitoring_interrupts
2834 +14.9% 3256 ± 3% interrupts.CPU96.NMI:Non-maskable_interrupts
2834 +14.9% 3256 ± 3% interrupts.CPU96.PMI:Performance_monitoring_interrupts
51108 +36.3% 69684 ± 2% interrupts.RES:Rescheduling_interrupts
***************************************************************************************************
lkp-bdw-ep3: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 64G memory
=========================================================================================
compiler/cpufreq_governor/ipc/kconfig/mode/nr_threads/rootfs/tbox_group/testcase/ucode:
gcc-7/performance/pipe/x86_64-rhel-7.2/process/1600%/debian-x86_64-2018-04-03.cgz/lkp-bdw-ep3/hackbench/0xb00002e
commit:
24d0c1d6e6 ("sched/fair: Do not migrate due to a sync wakeup on exit")
2c83362734 ("sched/fair: Consider SD_NUMA when selecting the most idle group to schedule on")
24d0c1d6e65f635b 2c83362734dad8e48ccc0710b5c
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
:4 25% 1:4 dmesg.WARNING:stack_going_in_the_wrong_direction?ip=sched_slice/0x
1:4 -25% :4 dmesg.WARNING:stack_going_in_the_wrong_direction?ip=schedule/0x
1:4 -25% :4 dmesg.WARNING:stack_going_in_the_wrong_direction?ip=schedule_tail/0x
%stddev %change %stddev
\ | \
494809 -7.3% 458896 hackbench.throughput
612.95 +1.8% 624.20 hackbench.time.elapsed_time
612.95 +1.8% 624.20 hackbench.time.elapsed_time.max
4.886e+08 +88.4% 9.206e+08 ± 2% hackbench.time.involuntary_context_switches
48035824 -4.2% 46000152 hackbench.time.minor_page_faults
7176 +1.1% 7255 hackbench.time.percent_of_cpu_this_job_got
36530 +3.8% 37917 hackbench.time.system_time
7459 -1.2% 7372 hackbench.time.user_time
8.862e+08 +62.7% 1.442e+09 hackbench.time.voluntary_context_switches
2.534e+09 -4.2% 2.429e+09 hackbench.workload
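
The ipc=pipe/mode=process configuration above means each hackbench task pair
bounces small messages through a pipe, which is why the profile and
context-switch data below are dominated by pipe_read()/pipe_write() and the
wakeup path. As a minimal illustration only (this is not the hackbench source;
the 100-byte message size mirrors hackbench's default but is an assumption
here), the hot pattern looks like:

    /*
     * Sketch of the pipe ping-pong pattern exercised by the
     * pipe/process hackbench mode: a writer and a reader process
     * moving small messages through a pipe, so most cycles land
     * in pipe_read()/pipe_write() and the __wake_up_common_lock()
     * path seen in the profile below.
     */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    #define MSG_SIZE 100        /* assumed; hackbench's default message size */
    #define LOOPS    100000

    int main(void)
    {
            int fds[2];
            char buf[MSG_SIZE];

            if (pipe(fds) < 0) {
                    perror("pipe");
                    return 1;
            }

            if (fork() == 0) {
                    /* reader child: blocks in pipe_read() until the
                     * writer wakes it, driving the wakeup path */
                    close(fds[1]);
                    while (read(fds[0], buf, sizeof(buf)) > 0)
                            ;
                    _exit(0);
            }

            /* writer parent: each write may wake the sleeping reader,
             * inflating context-switch and RES interrupt counts */
            close(fds[0]);
            memset(buf, 0, sizeof(buf));
            for (int i = 0; i < LOOPS; i++)
                    write(fds[1], buf, sizeof(buf));
            close(fds[1]);
            wait(NULL);
            return 0;
    }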
2794 ± 8% -9.4% 2530 ± 2% boot-time.idle
62869 ± 8% -27.5% 45585 ± 26% numa-meminfo.node1.SReclaimable
15712 ± 8% -27.4% 11404 ± 26% numa-vmstat.node1.nr_slab_reclaimable
66.75 +2.6% 68.50 vmstat.cpu.sy
2236124 +69.2% 3782640 vmstat.system.cs
151493 +64.8% 249683 vmstat.system.in
25739582 +28.3% 33017943 ± 17% cpuidle.C1.time
2600683 +89.6% 4931044 ± 21% cpuidle.C1.usage
1.177e+08 -23.5% 90006774 ± 3% cpuidle.C3.time
400081 -26.3% 294861 cpuidle.C3.usage
28889782 ± 37% +210.6% 89736479 ± 39% cpuidle.POLL.time
32231 ± 2% +139.5% 77205 ± 21% cpuidle.POLL.usage
2274 +1.7% 2314 turbostat.Avg_MHz
2598175 +89.7% 4928921 ± 21% turbostat.C1
399873 -26.3% 294659 turbostat.C3
0.22 -0.1 0.16 ± 2% turbostat.C3%
5.81 ± 2% -10.8% 5.18 turbostat.CPU%c1
0.12 -33.3% 0.08 ± 15% turbostat.CPU%c3
93461424 +67.4% 1.564e+08 ± 2% turbostat.IRQ
2.65 ± 6% +52.8% 4.05 ± 8% turbostat.Pkg%pc2
1541845 +14.3% 1762636 meminfo.Active
1480497 +14.9% 1701349 meminfo.Active(anon)
1440429 ± 2% +11.9% 1612079 meminfo.AnonPages
37373539 ± 2% +16.5% 43542569 meminfo.Committed_AS
18778 +11.2% 20883 ± 3% meminfo.Inactive(anon)
510694 ± 2% +14.5% 584945 meminfo.KernelStack
6655144 +12.7% 7497435 meminfo.Memused
1212773 +16.3% 1410085 meminfo.PageTables
1202917 +12.2% 1349397 meminfo.SUnreclaim
54584 ± 12% +64.5% 89814 ± 10% meminfo.Shmem
1318488 +11.2% 1465692 meminfo.Slab
55493 ± 2% -28.9% 39468 ± 2% meminfo.max_used_kB
369842 ± 2% +14.7% 424031 proc-vmstat.nr_active_anon
359829 ± 2% +11.7% 401857 proc-vmstat.nr_anon_pages
1491001 -1.4% 1470310 proc-vmstat.nr_dirty_background_threshold
2985649 -1.4% 2944217 proc-vmstat.nr_dirty_threshold
234629 +3.7% 243333 proc-vmstat.nr_file_pages
14800700 -1.4% 14593509 proc-vmstat.nr_free_pages
4698 +10.9% 5209 ± 3% proc-vmstat.nr_inactive_anon
510506 ± 2% +14.3% 583465 proc-vmstat.nr_kernel_stack
6228 +2.9% 6408 proc-vmstat.nr_mapped
303038 ± 2% +15.9% 351178 proc-vmstat.nr_page_table_pages
13683 ± 13% +63.7% 22404 ± 11% proc-vmstat.nr_shmem
301079 ± 2% +12.1% 337558 proc-vmstat.nr_slab_unreclaimable
369842 ± 2% +14.7% 424031 proc-vmstat.nr_zone_active_anon
4698 +10.9% 5209 ± 3% proc-vmstat.nr_zone_inactive_anon
403.75 ±123% +1057.7% 4674 ±122% proc-vmstat.numa_hint_faults
47.25 ± 81% +4297.4% 2077 ±112% proc-vmstat.numa_hint_faults_local
6.147e+08 -7.3% 5.701e+08 proc-vmstat.numa_hit
6.147e+08 -7.3% 5.701e+08 proc-vmstat.numa_local
11583 ± 16% +71.6% 19875 ± 14% proc-vmstat.pgactivate
6.224e+08 -7.2% 5.774e+08 proc-vmstat.pgalloc_normal
48581147 -3.7% 46773973 proc-vmstat.pgfault
6.221e+08 -7.2% 5.773e+08 proc-vmstat.pgfree
400811 ± 7% -48.8% 205202 ± 51% sched_debug.cfs_rq:/.load.max
63923 ± 21% -56.9% 27565 ± 49% sched_debug.cfs_rq:/.load.stddev
473.56 ± 9% -34.1% 311.94 ± 18% sched_debug.cfs_rq:/.load_avg.max
70.01 ± 13% -36.1% 44.72 ± 20% sched_debug.cfs_rq:/.load_avg.stddev
24050559 +66.1% 39942558 ± 2% sched_debug.cfs_rq:/.min_vruntime.avg
27186152 +150.9% 68197622 ± 4% sched_debug.cfs_rq:/.min_vruntime.max
22417469 -26.5% 16476834 ± 5% sched_debug.cfs_rq:/.min_vruntime.min
934692 ± 9% +2357.1% 22966409 ± 6% sched_debug.cfs_rq:/.min_vruntime.stddev
0.42 ± 4% -28.3% 0.30 ± 9% sched_debug.cfs_rq:/.nr_running.stddev
10.48 ± 24% -37.7% 6.53 ± 11% sched_debug.cfs_rq:/.runnable_load_avg.avg
343.08 ± 13% -66.5% 114.81 ± 78% sched_debug.cfs_rq:/.runnable_load_avg.max
43.75 ± 18% -64.0% 15.76 ± 57% sched_debug.cfs_rq:/.runnable_load_avg.stddev
399576 ± 7% -49.9% 200210 ± 54% sched_debug.cfs_rq:/.runnable_weight.max
63786 ± 21% -57.9% 26851 ± 52% sched_debug.cfs_rq:/.runnable_weight.stddev
3479430 ± 23% +1081.6% 41111920 ± 35% sched_debug.cfs_rq:/.spread0.max
934513 ± 9% +2358.3% 22973201 ± 6% sched_debug.cfs_rq:/.spread0.stddev
329009 ± 9% +75.8% 578522 ± 9% sched_debug.cpu.avg_idle.avg
100.19 ± 17% +339.0% 439.87 ± 71% sched_debug.cpu.clock.stddev
100.19 ± 17% +339.0% 439.87 ± 71% sched_debug.cpu.clock_task.stddev
255.50 ± 31% -43.7% 143.92 ± 61% sched_debug.cpu.cpu_load[0].max
31.10 ± 29% -40.5% 18.51 ± 48% sched_debug.cpu.cpu_load[0].stddev
239.47 ± 25% -49.9% 119.92 ± 31% sched_debug.cpu.cpu_load[2].max
29.44 ± 24% -46.2% 15.84 ± 24% sched_debug.cpu.cpu_load[2].stddev
225.33 ± 23% -50.8% 110.81 ± 30% sched_debug.cpu.cpu_load[3].max
27.43 ± 23% -45.6% 14.91 ± 22% sched_debug.cpu.cpu_load[3].stddev
210.61 ± 21% -45.9% 114.03 ± 30% sched_debug.cpu.cpu_load[4].max
25.26 ± 21% -40.7% 14.98 ± 20% sched_debug.cpu.cpu_load[4].stddev
22185 ± 8% -20.7% 17584 ± 12% sched_debug.cpu.curr->pid.stddev
401289 ± 7% -38.3% 247619 ± 32% sched_debug.cpu.load.max
63993 ± 21% -51.8% 30820 ± 28% sched_debug.cpu.load.stddev
0.00 ± 25% +314.4% 0.00 ± 65% sched_debug.cpu.next_balance.stddev
2161 ± 6% +73.4% 3748 ± 3% sched_debug.cpu.nr_load_updates.stddev
1.80 ± 44% +1219.7% 23.78 ± 53% sched_debug.cpu.nr_running.avg
18.56 ± 32% +447.0% 101.50 ± 63% sched_debug.cpu.nr_running.max
3.54 ± 40% +673.8% 27.43 ± 58% sched_debug.cpu.nr_running.stddev
7777523 +69.6% 13189331 sched_debug.cpu.nr_switches.avg
8892283 +187.0% 25524847 sched_debug.cpu.nr_switches.max
6863956 ± 2% -34.3% 4508011 ± 4% sched_debug.cpu.nr_switches.min
399499 ± 6% +1992.9% 8360950 ± 2% sched_debug.cpu.nr_switches.stddev
0.25 ± 52% +8572.4% 21.46 ± 62% sched_debug.cpu.nr_uninterruptible.avg
889.83 ± 4% +243.9% 3060 ± 4% sched_debug.cpu.nr_uninterruptible.max
-802.22 +323.8% -3399 sched_debug.cpu.nr_uninterruptible.min
348.15 ± 3% +506.4% 2111 ± 3% sched_debug.cpu.nr_uninterruptible.stddev
584593 ± 2% +14.5% 669643 slabinfo.anon_vma.active_objs
13084 ± 2% +14.5% 14982 slabinfo.anon_vma.active_slabs
601913 ± 2% +14.5% 689224 slabinfo.anon_vma.num_objs
13084 ± 2% +14.5% 14982 slabinfo.anon_vma.num_slabs
63008 +10.7% 69740 slabinfo.cred_jar.active_objs
1505 +10.9% 1669 slabinfo.cred_jar.active_slabs
63231 +10.9% 70129 slabinfo.cred_jar.num_objs
1505 +10.9% 1669 slabinfo.cred_jar.num_slabs
2494 ± 7% -20.4% 1985 ± 17% slabinfo.eventpoll_epi.active_objs
2494 ± 7% -20.4% 1985 ± 17% slabinfo.eventpoll_epi.num_objs
4365 ± 7% -20.4% 3474 ± 17% slabinfo.eventpoll_pwq.active_objs
4365 ± 7% -20.4% 3474 ± 17% slabinfo.eventpoll_pwq.num_objs
777.00 ± 8% -22.1% 605.00 ± 11% slabinfo.file_lock_cache.active_objs
777.00 ± 8% -22.1% 605.00 ± 11% slabinfo.file_lock_cache.num_objs
4667 ± 5% -9.5% 4226 ± 2% slabinfo.kmalloc-128.active_objs
4667 ± 5% -9.5% 4226 ± 2% slabinfo.kmalloc-128.num_objs
63873 ± 2% +10.2% 70416 ± 3% slabinfo.kmalloc-96.active_objs
36770 ± 2% +13.3% 41645 slabinfo.mm_struct.active_objs
2448 ± 2% +13.3% 2774 slabinfo.mm_struct.active_slabs
39182 ± 2% +13.3% 44397 slabinfo.mm_struct.num_objs
2448 ± 2% +13.3% 2774 slabinfo.mm_struct.num_slabs
1239866 ± 2% +15.4% 1430445 slabinfo.pid.active_objs
23099 ± 2% +16.9% 27003 slabinfo.pid.active_slabs
1478380 ± 2% +16.9% 1728215 slabinfo.pid.num_objs
23099 ± 2% +16.9% 27003 slabinfo.pid.num_slabs
347835 -12.6% 304030 ± 2% slabinfo.selinux_file_security.active_objs
1364 -12.7% 1191 ± 3% slabinfo.selinux_file_security.active_slabs
349266 -12.6% 305089 ± 3% slabinfo.selinux_file_security.num_objs
1364 -12.7% 1191 ± 3% slabinfo.selinux_file_security.num_slabs
41509 +16.7% 48454 slabinfo.sighand_cache.active_objs
2772 +17.0% 3243 slabinfo.sighand_cache.active_slabs
41590 +17.0% 48658 slabinfo.sighand_cache.num_objs
2772 +17.0% 3243 slabinfo.sighand_cache.num_slabs
44791 ± 2% +15.0% 51511 slabinfo.signal_cache.active_objs
1498 ± 2% +15.3% 1726 slabinfo.signal_cache.active_slabs
44952 ± 2% +15.3% 51811 slabinfo.signal_cache.num_objs
1498 ± 2% +15.3% 1726 slabinfo.signal_cache.num_slabs
39448 ± 2% +17.6% 46374 slabinfo.task_struct.active_objs
13155 ± 2% +17.7% 15484 slabinfo.task_struct.active_slabs
39467 ± 2% +17.7% 46452 slabinfo.task_struct.num_objs
13155 ± 2% +17.7% 15484 slabinfo.task_struct.num_slabs
886246 ± 2% +16.0% 1027892 slabinfo.vm_area_struct.active_objs
22632 ± 2% +15.6% 26163 slabinfo.vm_area_struct.active_slabs
905325 ± 2% +15.6% 1046529 slabinfo.vm_area_struct.num_objs
22632 ± 2% +15.6% 26163 slabinfo.vm_area_struct.num_slabs
36.79 ± 6% -36.8 0.00 perf-profile.calltrace.cycles-pp.__GI___libc_write
29.71 ± 8% -29.7 0.00 perf-profile.calltrace.cycles-pp.__GI___libc_read
24.00 ± 6% -24.0 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__GI___libc_write
23.48 ± 6% -23.5 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_write
22.75 ± 6% -22.8 0.00 perf-profile.calltrace.cycles-pp.sys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_write
21.26 ± 6% -21.3 0.00 perf-profile.calltrace.cycles-pp.vfs_write.sys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_write
16.73 ± 8% -16.7 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__GI___libc_read
16.22 ± 8% -16.2 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_read
15.47 ± 8% -15.5 0.00 perf-profile.calltrace.cycles-pp.sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_read
14.40 ± 8% -14.4 0.00 perf-profile.calltrace.cycles-pp.vfs_read.sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_read
3.88 ± 3% -1.7 2.15 ± 6% perf-profile.calltrace.cycles-pp.mutex_unlock.pipe_write.__vfs_write.vfs_write.sys_write
0.65 ± 11% +0.3 0.97 ± 28% perf-profile.calltrace.cycles-pp.enqueue_entity.enqueue_task_fair.ttwu_do_activate.try_to_wake_up.autoremove_wake_function
0.68 ± 11% +0.4 1.05 ± 7% perf-profile.calltrace.cycles-pp.__fget_light.__fdget_pos.sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.22 ± 7% +0.4 2.65 ± 9% perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.copyout.copy_page_to_iter.pipe_read.__vfs_read
1.10 ± 8% +0.5 1.55 ± 11% perf-profile.calltrace.cycles-pp.touch_atime.pipe_read.__vfs_read.vfs_read.sys_read
0.42 ± 59% +0.6 0.97 ± 25% perf-profile.calltrace.cycles-pp.switch_mm_irqs_off.__schedule.schedule.pipe_wait.pipe_read
0.28 ±101% +0.6 0.86 ± 25% perf-profile.calltrace.cycles-pp.switch_mm_irqs_off.__schedule.schedule.exit_to_usermode_loop.do_syscall_64
1.18 ± 10% +0.6 1.79 ± 36% perf-profile.calltrace.cycles-pp.dequeue_task_fair.__schedule.schedule.pipe_wait.pipe_read
1.37 ± 11% +0.6 2.00 ± 31% perf-profile.calltrace.cycles-pp.enqueue_task_fair.ttwu_do_activate.try_to_wake_up.autoremove_wake_function.__wake_up_common
0.00 +0.6 0.64 ± 7% perf-profile.calltrace.cycles-pp.__lock_text_start.__wake_up_common_lock.pipe_write.__vfs_write.vfs_write
1.47 ± 11% +0.7 2.17 ± 30% perf-profile.calltrace.cycles-pp.ttwu_do_activate.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
0.30 ±101% +0.7 0.99 ± 29% perf-profile.calltrace.cycles-pp.__switch_to
0.00 +0.7 0.73 ± 20% perf-profile.calltrace.cycles-pp.pick_next_task_fair.__schedule.schedule.exit_to_usermode_loop.do_syscall_64
0.00 +0.7 0.73 ± 20% perf-profile.calltrace.cycles-pp.cpumask_next_wrap.select_idle_sibling.select_task_rq_fair.try_to_wake_up.autoremove_wake_function
0.00 +0.8 0.78 ± 10% perf-profile.calltrace.cycles-pp.__indirect_thunk_start
0.00 +1.0 0.96 ± 10% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_stage2
1.28 ± 15% +1.0 2.28 ± 26% perf-profile.calltrace.cycles-pp.__schedule.schedule.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.29 ± 15% +1.0 2.29 ± 26% perf-profile.calltrace.cycles-pp.schedule.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.32 ± 15% +1.0 2.35 ± 26% perf-profile.calltrace.cycles-pp.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +1.1 1.13 ± 8% perf-profile.calltrace.cycles-pp.__fdget_pos.sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +1.1 1.15 ± 11% perf-profile.calltrace.cycles-pp.__fdget_pos.sys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.71 ± 9% +1.5 5.17 ± 35% perf-profile.calltrace.cycles-pp.__schedule.schedule.pipe_wait.pipe_read.__vfs_read
3.80 ± 9% +1.5 5.28 ± 35% perf-profile.calltrace.cycles-pp.schedule.pipe_wait.pipe_read.__vfs_read.vfs_read
4.22 ± 9% +1.7 5.91 ± 35% perf-profile.calltrace.cycles-pp.pipe_wait.pipe_read.__vfs_read.vfs_read.sys_read
2.54 ± 18% +1.8 4.29 ± 34% perf-profile.calltrace.cycles-pp.idle_cpu.select_idle_sibling.select_task_rq_fair.try_to_wake_up.autoremove_wake_function
4.11 ± 9% +2.2 6.30 ± 32% perf-profile.calltrace.cycles-pp.select_task_rq_fair.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
14.37 +2.4 16.75 ± 5% perf-profile.calltrace.cycles-pp.pipe_read.__vfs_read.vfs_read.sys_read.do_syscall_64
3.42 ± 17% +2.5 5.88 ± 31% perf-profile.calltrace.cycles-pp.select_idle_sibling.select_task_rq_fair.try_to_wake_up.autoremove_wake_function.__wake_up_common
15.21 +2.5 17.68 ± 4% perf-profile.calltrace.cycles-pp.__vfs_read.vfs_read.sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
9.46 ± 8% +2.5 11.97 ± 28% perf-profile.calltrace.cycles-pp.__wake_up_common_lock.pipe_write.__vfs_write.vfs_write.sys_write
7.24 ± 11% +2.8 10.03 ± 33% perf-profile.calltrace.cycles-pp.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.pipe_write
7.63 ± 10% +2.8 10.43 ± 33% perf-profile.calltrace.cycles-pp.__wake_up_common.__wake_up_common_lock.pipe_write.__vfs_write.vfs_write
7.29 ± 10% +2.8 10.12 ± 33% perf-profile.calltrace.cycles-pp.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.pipe_write.__vfs_write
1.45 ± 29% +8.8 10.29 ± 14% perf-profile.calltrace.cycles-pp._entry_trampoline
1.54 ± 27% +9.0 10.56 ± 14% perf-profile.calltrace.cycles-pp.syscall_return_via_sysret
6.32 ± 18% +17.4 23.75 perf-profile.calltrace.cycles-pp.vfs_read.sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
6.63 ± 19% +18.7 25.34 perf-profile.calltrace.cycles-pp.sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
10.10 ± 17% +20.3 30.45 ± 3% perf-profile.calltrace.cycles-pp.vfs_write.sys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
10.36 ± 17% +21.7 32.02 ± 2% perf-profile.calltrace.cycles-pp.sys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
19.46 ± 17% +42.8 62.30 ± 2% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
19.69 ± 17% +44.0 63.67 ± 2% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
17.10 -21.3% 13.46 ± 2% perf-stat.i.MPKI
4.035e+10 -17.5% 3.33e+10 ± 3% perf-stat.i.branch-instructions
2.14 -0.2 1.95 perf-stat.i.branch-miss-rate%
7.194e+08 -18.6% 5.855e+08 ± 3% perf-stat.i.branch-misses
12.70 -1.4 11.33 perf-stat.i.cache-miss-rate%
1.014e+08 -11.3% 89967376 ± 3% perf-stat.i.cache-misses
3519724 +39.3% 4901677 ± 3% perf-stat.i.context-switches
2.06 ± 2% -6.7% 1.92 perf-stat.i.cpi
3.139e+11 -16.5% 2.622e+11 ± 3% perf-stat.i.cpu-cycles
113205 ± 2% +72.7% 195554 ± 5% perf-stat.i.cpu-migrations
0.47 +0.1 0.53 ± 3% perf-stat.i.dTLB-load-miss-rate%
1.716e+08 +16.6% 2.001e+08 ± 5% perf-stat.i.dTLB-load-misses
6.393e+10 -18.1% 5.234e+10 ± 3% perf-stat.i.dTLB-loads
8147741 +33.5% 10877323 ± 3% perf-stat.i.dTLB-store-misses
4.046e+10 -18.6% 3.294e+10 ± 3% perf-stat.i.dTLB-stores
55.06 +1.1 56.14 perf-stat.i.iTLB-load-miss-rate%
2.161e+08 -8.0% 1.989e+08 ± 3% perf-stat.i.iTLB-load-misses
1.927e+08 -14.7% 1.643e+08 ± 4% perf-stat.i.iTLB-loads
2.102e+11 -17.7% 1.731e+11 ± 3% perf-stat.i.instructions
1069 ± 2% -8.4% 979.54 ± 3% perf-stat.i.instructions-per-iTLB-miss
0.56 +3.8% 0.58 perf-stat.i.ipc
124005 -22.0% 96666 ± 3% perf-stat.i.minor-faults
277317 -17.9% 227661 ± 3% perf-stat.i.msec
63.61 -8.3 55.35 ± 2% perf-stat.i.node-load-miss-rate%
14378837 ± 2% -5.7% 13566343 ± 3% perf-stat.i.node-loads
21.74 ± 2% -4.3 17.40 perf-stat.i.node-store-miss-rate%
5331671 -16.6% 4445492 ± 3% perf-stat.i.node-store-misses
124005 -22.0% 96666 ± 3% perf-stat.i.page-faults
3.76 +23.4% 4.64 perf-stat.overall.MPKI
1.78 -0.0 1.76 perf-stat.overall.branch-miss-rate%
12.84 -1.6 11.22 perf-stat.overall.cache-miss-rate%
1.49 +1.5% 1.52 perf-stat.overall.cpi
0.27 +0.1 0.38 ± 5% perf-stat.overall.dTLB-load-miss-rate%
0.02 +0.0 0.03 perf-stat.overall.dTLB-store-miss-rate%
52.87 +1.9 54.77 perf-stat.overall.iTLB-load-miss-rate%
972.65 -10.6% 869.88 perf-stat.overall.instructions-per-iTLB-miss
0.67 -1.4% 0.66 perf-stat.overall.ipc
55.35 +1.8 57.18 perf-stat.overall.node-load-miss-rate%
23.25 -2.9 20.38 perf-stat.overall.node-store-miss-rate%
32464 +6.1% 34453 perf-stat.overall.path-length
1.579e+13 +2.0% 1.61e+13 perf-stat.total.branch-instructions
3.969e+10 +9.6% 4.352e+10 perf-stat.total.cache-misses
3.09e+11 +25.5% 3.879e+11 perf-stat.total.cache-references
1.378e+09 +72.1% 2.371e+09 perf-stat.total.context-switches
1.229e+14 +3.2% 1.268e+14 perf-stat.total.cpu-cycles
44320452 ± 2% +113.8% 94752867 ± 7% perf-stat.total.cpu-migrations
6.719e+10 +44.1% 9.685e+10 ± 5% perf-stat.total.dTLB-load-misses
2.502e+13 +1.1% 2.531e+13 perf-stat.total.dTLB-loads
3.189e+09 +65.0% 5.262e+09 perf-stat.total.dTLB-store-misses
8.459e+10 +13.7% 9.621e+10 perf-stat.total.iTLB-load-misses
7.542e+10 +5.3% 7.945e+10 perf-stat.total.iTLB-loads
8.228e+13 +1.7% 8.368e+13 perf-stat.total.instructions
48538538 -3.7% 46749483 perf-stat.total.minor-faults
1.086e+08 +1.4% 1.101e+08 perf-stat.total.msec
6.976e+09 +25.7% 8.768e+09 ± 3% perf-stat.total.node-load-misses
5.627e+09 +16.6% 6.562e+09 perf-stat.total.node-loads
2.087e+09 +3.0% 2.15e+09 perf-stat.total.node-store-misses
6.891e+09 +21.9% 8.4e+09 perf-stat.total.node-stores
48538482 -3.7% 46749452 perf-stat.total.page-faults
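
As a consistency check on the switch counts, the vmstat rate multiplied by the
elapsed time reproduces the perf-stat totals:

    2236124/s * 612.95 s ~= 1.37e+09  (base,    vs. 1.378e+09 above)
    3782640/s * 624.20 s ~= 2.36e+09  (patched, vs. 2.371e+09 above)

so the +69.2% in vmstat.system.cs and the +72.1% in
perf-stat.total.context-switches describe the same jump in scheduling activity.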
44380 ± 9% +36.4% 60536 ± 3% softirqs.CPU0.RCU
42700 +26.8% 54157 softirqs.CPU0.SCHED
41219 ± 3% +51.8% 62583 ± 3% softirqs.CPU1.RCU
40291 ± 2% +53.7% 61915 ± 2% softirqs.CPU10.RCU
39712 ± 2% +60.0% 63530 ± 2% softirqs.CPU11.RCU
39626 +58.3% 62721 softirqs.CPU12.RCU
39746 ± 3% +56.8% 62327 softirqs.CPU13.RCU
40013 ± 2% +62.5% 65013 ± 4% softirqs.CPU14.RCU
44176 ± 8% +49.0% 65821 ± 3% softirqs.CPU15.RCU
43495 ± 8% +50.4% 65412 ± 2% softirqs.CPU16.RCU
41320 +53.6% 63484 softirqs.CPU17.RCU
41735 +60.7% 67060 ± 7% softirqs.CPU18.RCU
44197 ± 9% +51.0% 66754 ± 2% softirqs.CPU19.RCU
40193 ± 3% +52.0% 61104 ± 2% softirqs.CPU2.RCU
44564 ± 10% +52.5% 67952 ± 6% softirqs.CPU20.RCU
42527 +55.1% 65966 ± 3% softirqs.CPU21.RCU
42153 +58.6% 66857 ± 5% softirqs.CPU22.RCU
42327 ± 3% +68.9% 71495 ± 5% softirqs.CPU23.RCU
42306 ± 2% +59.1% 67318 ± 9% softirqs.CPU24.RCU
41475 ± 2% +66.9% 69214 ± 14% softirqs.CPU25.RCU
41655 ± 2% +60.5% 66866 ± 3% softirqs.CPU26.RCU
41766 ± 2% +51.7% 63373 softirqs.CPU27.RCU
41548 ± 2% +60.0% 66490 ± 5% softirqs.CPU28.RCU
44279 ± 6% +50.4% 66601 ± 9% softirqs.CPU29.RCU
42001 ± 8% +51.4% 63605 ± 5% softirqs.CPU3.RCU
40890 ± 6% +50.8% 61645 ± 2% softirqs.CPU30.RCU
40491 ± 2% +53.8% 62280 ± 3% softirqs.CPU31.RCU
41471 ± 2% +56.5% 64894 ± 4% softirqs.CPU32.RCU
40349 ± 2% +62.0% 65360 ± 5% softirqs.CPU33.RCU
40479 ± 2% +62.5% 65781 ± 3% softirqs.CPU34.RCU
42319 ± 10% +48.6% 62876 ± 3% softirqs.CPU35.RCU
41799 +55.1% 64841 ± 3% softirqs.CPU36.RCU
43731 ± 8% +54.5% 67558 ± 3% softirqs.CPU37.RCU
41031 ± 3% +73.3% 71116 ± 8% softirqs.CPU38.RCU
42978 ± 6% +53.1% 65803 ± 7% softirqs.CPU39.RCU
39754 ± 2% +55.0% 61636 ± 3% softirqs.CPU4.RCU
46367 ± 9% +41.4% 65558 ± 9% softirqs.CPU40.RCU
40343 ± 2% +59.9% 64513 ± 3% softirqs.CPU41.RCU
42227 ± 3% +55.1% 65511 ± 4% softirqs.CPU42.RCU
40727 ± 3% +65.6% 67457 ± 3% softirqs.CPU43.RCU
40253 ± 4% +45.8% 58700 ± 2% softirqs.CPU44.RCU
40121 ± 2% +52.7% 61278 ± 2% softirqs.CPU45.RCU
40516 ± 3% +51.1% 61219 ± 2% softirqs.CPU46.RCU
41106 ± 10% +48.5% 61035 ± 6% softirqs.CPU47.RCU
39554 +58.7% 62780 ± 6% softirqs.CPU48.RCU
38899 +53.9% 59879 ± 2% softirqs.CPU49.RCU
39677 ± 2% +59.7% 63369 ± 6% softirqs.CPU5.RCU
42427 ± 9% +47.9% 62754 softirqs.CPU50.RCU
41751 ± 7% +47.8% 61711 ± 3% softirqs.CPU51.RCU
39992 +65.4% 66128 ± 7% softirqs.CPU52.RCU
39026 +54.8% 60393 ± 3% softirqs.CPU53.RCU
42161 ± 10% +47.6% 62218 ± 7% softirqs.CPU54.RCU
40080 ± 3% +60.2% 64206 ± 6% softirqs.CPU55.RCU
39427 ± 3% +57.5% 62094 softirqs.CPU56.RCU
39673 ± 3% +65.7% 65735 ± 7% softirqs.CPU57.RCU
40865 +53.4% 62669 ± 3% softirqs.CPU58.RCU
40346 +55.4% 62713 ± 3% softirqs.CPU59.RCU
39668 ± 2% +58.5% 62879 ± 2% softirqs.CPU6.RCU
39930 ± 2% +68.1% 67122 ± 5% softirqs.CPU60.RCU
40045 ± 2% +55.1% 62093 ± 2% softirqs.CPU61.RCU
40057 +57.6% 63145 ± 5% softirqs.CPU62.RCU
40579 +57.4% 63889 ± 7% softirqs.CPU63.RCU
40947 ± 2% +51.6% 62075 ± 2% softirqs.CPU64.RCU
42989 ± 8% +45.6% 62592 softirqs.CPU65.RCU
44393 ± 8% +53.9% 68301 ± 6% softirqs.CPU66.RCU
41370 ± 3% +57.9% 65307 ± 5% softirqs.CPU67.RCU
41532 +59.4% 66200 ± 2% softirqs.CPU68.RCU
40604 ± 3% +58.8% 64470 ± 6% softirqs.CPU69.RCU
39822 +53.2% 61021 softirqs.CPU7.RCU
40977 ± 2% +58.1% 64805 ± 6% softirqs.CPU70.RCU
42698 ± 7% +52.4% 65068 ± 6% softirqs.CPU71.RCU
41420 +70.9% 70802 ± 3% softirqs.CPU72.RCU
41458 +52.1% 63040 ± 2% softirqs.CPU73.RCU
41010 ± 2% +64.8% 67576 ± 7% softirqs.CPU74.RCU
38997 ± 9% +58.5% 61793 ± 5% softirqs.CPU75.RCU
37004 ± 3% +55.5% 57533 ± 2% softirqs.CPU76.RCU
37324 ± 2% +61.2% 60153 ± 6% softirqs.CPU77.RCU
40007 ± 8% +45.9% 58373 ± 3% softirqs.CPU78.RCU
37264 ± 4% +52.2% 56698 ± 3% softirqs.CPU79.RCU
39957 ± 2% +54.1% 61588 ± 2% softirqs.CPU8.RCU
38811 ± 2% +51.8% 58931 ± 2% softirqs.CPU80.RCU
40186 ± 6% +53.6% 61730 ± 6% softirqs.CPU81.RCU
37439 ± 2% +77.3% 66389 ± 6% softirqs.CPU82.RCU
37454 ± 3% +62.9% 61016 ± 8% softirqs.CPU83.RCU
39574 ± 7% +49.7% 59255 ± 3% softirqs.CPU84.RCU
36603 ± 2% +77.5% 64978 ± 7% softirqs.CPU85.RCU
40354 ± 8% +46.8% 59245 ± 5% softirqs.CPU86.RCU
37744 ± 3% +65.6% 62495 ± 5% softirqs.CPU87.RCU
39090 +58.6% 62000 softirqs.CPU9.RCU
3594895 +56.1% 5612323 softirqs.RCU
500558 +17.5% 588306 ± 2% softirqs.SCHED
555.75 ± 24% -31.4% 381.50 ± 16% interrupts.34:PCI-MSI.3145731-edge.eth0-TxRx-2
352056 +19.6% 421007 ± 2% interrupts.CAL:Function_call_interrupts
3999 ± 3% +18.8% 4752 ± 6% interrupts.CPU0.CAL:Function_call_interrupts
4038 +19.2% 4812 ± 2% interrupts.CPU1.CAL:Function_call_interrupts
3898 ± 7% +23.6% 4816 ± 3% interrupts.CPU10.CAL:Function_call_interrupts
2771 ± 32% +160.5% 7220 ± 17% interrupts.CPU10.NMI:Non-maskable_interrupts
2771 ± 32% +160.5% 7220 ± 17% interrupts.CPU10.PMI:Performance_monitoring_interrupts
3957 ± 4% +17.7% 4657 ± 7% interrupts.CPU11.CAL:Function_call_interrupts
2930 ± 28% +145.8% 7204 ± 18% interrupts.CPU11.NMI:Non-maskable_interrupts
2930 ± 28% +145.8% 7204 ± 18% interrupts.CPU11.PMI:Performance_monitoring_interrupts
3996 +21.3% 4848 ± 2% interrupts.CPU12.CAL:Function_call_interrupts
2777 ± 32% +159.5% 7207 ± 18% interrupts.CPU12.NMI:Non-maskable_interrupts
2777 ± 32% +159.5% 7207 ± 18% interrupts.CPU12.PMI:Performance_monitoring_interrupts
555.75 ± 24% -31.4% 381.50 ± 16% interrupts.CPU13.34:PCI-MSI.3145731-edge.eth0-TxRx-2
4059 +18.7% 4819 ± 3% interrupts.CPU13.CAL:Function_call_interrupts
2781 ± 32% +159.2% 7207 ± 18% interrupts.CPU13.NMI:Non-maskable_interrupts
2781 ± 32% +159.2% 7207 ± 18% interrupts.CPU13.PMI:Performance_monitoring_interrupts
3994 ± 2% +21.0% 4834 ± 3% interrupts.CPU14.CAL:Function_call_interrupts
2773 ± 32% +168.6% 7448 ± 12% interrupts.CPU14.NMI:Non-maskable_interrupts
2773 ± 32% +168.6% 7448 ± 12% interrupts.CPU14.PMI:Performance_monitoring_interrupts
4047 +19.2% 4823 ± 3% interrupts.CPU15.CAL:Function_call_interrupts
2774 ± 32% +159.6% 7201 ± 18% interrupts.CPU15.NMI:Non-maskable_interrupts
2774 ± 32% +159.6% 7201 ± 18% interrupts.CPU15.PMI:Performance_monitoring_interrupts
4056 +19.4% 4844 ± 3% interrupts.CPU16.CAL:Function_call_interrupts
2781 ± 32% +158.8% 7198 ± 18% interrupts.CPU16.NMI:Non-maskable_interrupts
2781 ± 32% +158.8% 7198 ± 18% interrupts.CPU16.PMI:Performance_monitoring_interrupts
3921 ± 5% +23.2% 4829 ± 2% interrupts.CPU17.CAL:Function_call_interrupts
3303 ± 31% +118.2% 7209 ± 18% interrupts.CPU17.NMI:Non-maskable_interrupts
3303 ± 31% +118.2% 7209 ± 18% interrupts.CPU17.PMI:Performance_monitoring_interrupts
4048 +18.8% 4808 ± 3% interrupts.CPU18.CAL:Function_call_interrupts
3315 ± 31% +117.3% 7204 ± 18% interrupts.CPU18.NMI:Non-maskable_interrupts
3315 ± 31% +117.3% 7204 ± 18% interrupts.CPU18.PMI:Performance_monitoring_interrupts
4056 +18.4% 4803 ± 3% interrupts.CPU19.CAL:Function_call_interrupts
3316 ± 31% +117.2% 7204 ± 18% interrupts.CPU19.NMI:Non-maskable_interrupts
3316 ± 31% +117.2% 7204 ± 18% interrupts.CPU19.PMI:Performance_monitoring_interrupts
4021 ± 2% +20.5% 4844 ± 2% interrupts.CPU2.CAL:Function_call_interrupts
3307 ± 31% +117.9% 7204 ± 18% interrupts.CPU20.NMI:Non-maskable_interrupts
3307 ± 31% +117.9% 7204 ± 18% interrupts.CPU20.PMI:Performance_monitoring_interrupts
4039 +19.5% 4824 ± 3% interrupts.CPU21.CAL:Function_call_interrupts
3307 ± 31% +117.9% 7206 ± 18% interrupts.CPU21.NMI:Non-maskable_interrupts
3307 ± 31% +117.9% 7206 ± 18% interrupts.CPU21.PMI:Performance_monitoring_interrupts
4018 +17.6% 4725 ± 4% interrupts.CPU22.CAL:Function_call_interrupts
3306 ± 31% +117.1% 7179 ± 19% interrupts.CPU22.NMI:Non-maskable_interrupts
3306 ± 31% +117.1% 7179 ± 19% interrupts.CPU22.PMI:Performance_monitoring_interrupts
4060 +18.1% 4794 interrupts.CPU23.CAL:Function_call_interrupts
3853 ± 23% +86.0% 7169 ± 19% interrupts.CPU23.NMI:Non-maskable_interrupts
3853 ± 23% +86.0% 7169 ± 19% interrupts.CPU23.PMI:Performance_monitoring_interrupts
4009 ± 2% +20.7% 4838 ± 3% interrupts.CPU24.CAL:Function_call_interrupts
3840 ± 23% +86.7% 7172 ± 19% interrupts.CPU24.NMI:Non-maskable_interrupts
3840 ± 23% +86.7% 7172 ± 19% interrupts.CPU24.PMI:Performance_monitoring_interrupts
3886 ± 5% +22.9% 4775 ± 3% interrupts.CPU25.CAL:Function_call_interrupts
3874 ± 23% +85.2% 7175 ± 19% interrupts.CPU25.NMI:Non-maskable_interrupts
3874 ± 23% +85.2% 7175 ± 19% interrupts.CPU25.PMI:Performance_monitoring_interrupts
4008 +19.2% 4779 ± 4% interrupts.CPU26.CAL:Function_call_interrupts
3848 ± 23% +86.4% 7173 ± 19% interrupts.CPU26.NMI:Non-maskable_interrupts
3848 ± 23% +86.4% 7173 ± 19% interrupts.CPU26.PMI:Performance_monitoring_interrupts
4051 +19.6% 4846 ± 2% interrupts.CPU27.CAL:Function_call_interrupts
3307 ± 31% +116.9% 7175 ± 19% interrupts.CPU27.NMI:Non-maskable_interrupts
3307 ± 31% +116.9% 7175 ± 19% interrupts.CPU27.PMI:Performance_monitoring_interrupts
3885 ± 6% +23.4% 4793 interrupts.CPU28.CAL:Function_call_interrupts
3305 ± 31% +117.4% 7186 ± 18% interrupts.CPU28.NMI:Non-maskable_interrupts
3305 ± 31% +117.4% 7186 ± 18% interrupts.CPU28.PMI:Performance_monitoring_interrupts
3303 ± 31% +117.4% 7182 ± 19% interrupts.CPU29.NMI:Non-maskable_interrupts
3303 ± 31% +117.4% 7182 ± 19% interrupts.CPU29.PMI:Performance_monitoring_interrupts
4041 +19.6% 4831 ± 2% interrupts.CPU3.CAL:Function_call_interrupts
3753 ± 13% +30.1% 4882 ± 2% interrupts.CPU30.CAL:Function_call_interrupts
3848 ± 23% +86.4% 7173 ± 19% interrupts.CPU30.NMI:Non-maskable_interrupts
3848 ± 23% +86.4% 7173 ± 19% interrupts.CPU30.PMI:Performance_monitoring_interrupts
3954 +22.5% 4845 ± 2% interrupts.CPU31.CAL:Function_call_interrupts
3851 ± 24% +86.1% 7165 ± 19% interrupts.CPU31.NMI:Non-maskable_interrupts
3851 ± 24% +86.1% 7165 ± 19% interrupts.CPU31.PMI:Performance_monitoring_interrupts
4032 +19.4% 4814 ± 2% interrupts.CPU32.CAL:Function_call_interrupts
3898 ± 6% +23.8% 4825 ± 3% interrupts.CPU33.CAL:Function_call_interrupts
3919 ± 2% +20.4% 4718 ± 6% interrupts.CPU34.CAL:Function_call_interrupts
3863 ± 24% +85.7% 7173 ± 19% interrupts.CPU34.NMI:Non-maskable_interrupts
3863 ± 24% +85.7% 7173 ± 19% interrupts.CPU34.PMI:Performance_monitoring_interrupts
3993 +16.2% 4641 ± 5% interrupts.CPU35.CAL:Function_call_interrupts
3855 ± 24% +85.9% 7168 ± 19% interrupts.CPU35.NMI:Non-maskable_interrupts
3855 ± 24% +85.9% 7168 ± 19% interrupts.CPU35.PMI:Performance_monitoring_interrupts
3911 ± 2% +23.2% 4817 ± 2% interrupts.CPU36.CAL:Function_call_interrupts
3856 ± 24% +85.9% 7171 ± 19% interrupts.CPU36.NMI:Non-maskable_interrupts
3856 ± 24% +85.9% 7171 ± 19% interrupts.CPU36.PMI:Performance_monitoring_interrupts
3928 ± 4% +22.3% 4802 ± 2% interrupts.CPU37.CAL:Function_call_interrupts
4026 +15.0% 4629 ± 6% interrupts.CPU38.CAL:Function_call_interrupts
4034 +19.7% 4829 interrupts.CPU39.CAL:Function_call_interrupts
4036 +19.1% 4805 ± 3% interrupts.CPU4.CAL:Function_call_interrupts
4031 +20.5% 4856 ± 2% interrupts.CPU40.CAL:Function_call_interrupts
4035 +17.9% 4758 ± 5% interrupts.CPU41.CAL:Function_call_interrupts
3919 ± 4% +20.0% 4704 ± 6% interrupts.CPU42.CAL:Function_call_interrupts
3950 +19.2% 4710 ± 4% interrupts.CPU43.CAL:Function_call_interrupts
3931 ± 7% +22.5% 4816 ± 3% interrupts.CPU44.CAL:Function_call_interrupts
2836 ± 32% +119.7% 6232 ± 28% interrupts.CPU44.NMI:Non-maskable_interrupts
2836 ± 32% +119.7% 6232 ± 28% interrupts.CPU44.PMI:Performance_monitoring_interrupts
4018 +20.5% 4841 ± 2% interrupts.CPU45.CAL:Function_call_interrupts
2770 ± 32% +124.5% 6218 ± 28% interrupts.CPU45.NMI:Non-maskable_interrupts
2770 ± 32% +124.5% 6218 ± 28% interrupts.CPU45.PMI:Performance_monitoring_interrupts
4043 +19.1% 4817 ± 3% interrupts.CPU46.CAL:Function_call_interrupts
2778 ± 32% +102.1% 5616 ± 42% interrupts.CPU46.NMI:Non-maskable_interrupts
2778 ± 32% +102.1% 5616 ± 42% interrupts.CPU46.PMI:Performance_monitoring_interrupts
4064 +19.3% 4846 ± 3% interrupts.CPU47.CAL:Function_call_interrupts
2771 ± 32% +138.2% 6600 ± 35% interrupts.CPU47.NMI:Non-maskable_interrupts
2771 ± 32% +138.2% 6600 ± 35% interrupts.CPU47.PMI:Performance_monitoring_interrupts
4037 +19.5% 4825 ± 3% interrupts.CPU48.CAL:Function_call_interrupts
2774 ± 32% +102.6% 5622 ± 42% interrupts.CPU48.NMI:Non-maskable_interrupts
2774 ± 32% +102.6% 5622 ± 42% interrupts.CPU48.PMI:Performance_monitoring_interrupts
4054 +18.6% 4809 ± 3% interrupts.CPU49.CAL:Function_call_interrupts
2769 ± 32% +102.9% 5618 ± 42% interrupts.CPU49.NMI:Non-maskable_interrupts
2769 ± 32% +102.9% 5618 ± 42% interrupts.CPU49.PMI:Performance_monitoring_interrupts
4036 +15.9% 4680 ± 6% interrupts.CPU5.CAL:Function_call_interrupts
4056 +19.0% 4826 ± 3% interrupts.CPU50.CAL:Function_call_interrupts
2770 ± 32% +138.3% 6600 ± 35% interrupts.CPU50.NMI:Non-maskable_interrupts
2770 ± 32% +138.3% 6600 ± 35% interrupts.CPU50.PMI:Performance_monitoring_interrupts
4039 +19.6% 4832 ± 3% interrupts.CPU51.CAL:Function_call_interrupts
2774 ± 32% +137.9% 6602 ± 35% interrupts.CPU51.NMI:Non-maskable_interrupts
2774 ± 32% +137.9% 6602 ± 35% interrupts.CPU51.PMI:Performance_monitoring_interrupts
4059 +18.5% 4811 ± 3% interrupts.CPU52.CAL:Function_call_interrupts
2768 ± 32% +138.5% 6602 ± 35% interrupts.CPU52.NMI:Non-maskable_interrupts
2768 ± 32% +138.5% 6602 ± 35% interrupts.CPU52.PMI:Performance_monitoring_interrupts
4040 +18.2% 4775 ± 3% interrupts.CPU53.CAL:Function_call_interrupts
2768 ± 32% +138.4% 6599 ± 35% interrupts.CPU53.NMI:Non-maskable_interrupts
2768 ± 32% +138.4% 6599 ± 35% interrupts.CPU53.PMI:Performance_monitoring_interrupts
3982 ± 4% +21.5% 4839 ± 2% interrupts.CPU54.CAL:Function_call_interrupts
4036 ± 2% +16.0% 4681 ± 7% interrupts.CPU55.CAL:Function_call_interrupts
4002 ± 2% +17.5% 4701 ± 7% interrupts.CPU56.CAL:Function_call_interrupts
3319 ± 31% +117.0% 7201 ± 18% interrupts.CPU56.NMI:Non-maskable_interrupts
3319 ± 31% +117.0% 7201 ± 18% interrupts.CPU56.PMI:Performance_monitoring_interrupts
4049 +18.9% 4816 ± 4% interrupts.CPU57.CAL:Function_call_interrupts
3312 ± 31% +117.3% 7197 ± 18% interrupts.CPU57.NMI:Non-maskable_interrupts
3312 ± 31% +117.3% 7197 ± 18% interrupts.CPU57.PMI:Performance_monitoring_interrupts
3973 ± 3% +22.3% 4861 ± 2% interrupts.CPU58.CAL:Function_call_interrupts
3314 ± 31% +139.8% 7948 interrupts.CPU58.NMI:Non-maskable_interrupts
3314 ± 31% +139.8% 7948 interrupts.CPU58.PMI:Performance_monitoring_interrupts
4046 +20.2% 4861 ± 2% interrupts.CPU59.CAL:Function_call_interrupts
2779 ± 32% +158.8% 7191 ± 18% interrupts.CPU59.NMI:Non-maskable_interrupts
2779 ± 32% +158.8% 7191 ± 18% interrupts.CPU59.PMI:Performance_monitoring_interrupts
4030 +20.2% 4845 ± 2% interrupts.CPU6.CAL:Function_call_interrupts
3315 ± 32% +117.2% 7199 ± 18% interrupts.CPU6.NMI:Non-maskable_interrupts
3315 ± 32% +117.2% 7199 ± 18% interrupts.CPU6.PMI:Performance_monitoring_interrupts
4047 +19.0% 4817 ± 3% interrupts.CPU60.CAL:Function_call_interrupts
2782 ± 32% +158.6% 7194 ± 18% interrupts.CPU60.NMI:Non-maskable_interrupts
2782 ± 32% +158.6% 7194 ± 18% interrupts.CPU60.PMI:Performance_monitoring_interrupts
3935 ± 5% +22.3% 4812 ± 3% interrupts.CPU61.CAL:Function_call_interrupts
2775 ± 32% +159.4% 7200 ± 18% interrupts.CPU61.NMI:Non-maskable_interrupts
2775 ± 32% +159.4% 7200 ± 18% interrupts.CPU61.PMI:Performance_monitoring_interrupts
4055 +19.9% 4861 ± 2% interrupts.CPU62.CAL:Function_call_interrupts
2776 ± 32% +159.3% 7198 ± 18% interrupts.CPU62.NMI:Non-maskable_interrupts
2776 ± 32% +159.3% 7198 ± 18% interrupts.CPU62.PMI:Performance_monitoring_interrupts
4055 +19.0% 4827 ± 3% interrupts.CPU63.CAL:Function_call_interrupts
2781 ± 32% +158.8% 7197 ± 18% interrupts.CPU63.NMI:Non-maskable_interrupts
2781 ± 32% +158.8% 7197 ± 18% interrupts.CPU63.PMI:Performance_monitoring_interrupts
4044 +16.9% 4729 ± 5% interrupts.CPU64.CAL:Function_call_interrupts
2770 ± 32% +159.7% 7195 ± 18% interrupts.CPU64.NMI:Non-maskable_interrupts
2770 ± 32% +159.7% 7195 ± 18% interrupts.CPU64.PMI:Performance_monitoring_interrupts
4040 +19.6% 4833 ± 3% interrupts.CPU65.CAL:Function_call_interrupts
2782 ± 32% +158.9% 7204 ± 18% interrupts.CPU65.NMI:Non-maskable_interrupts
2782 ± 32% +158.9% 7204 ± 18% interrupts.CPU65.PMI:Performance_monitoring_interrupts
4047 +15.9% 4691 ± 6% interrupts.CPU66.CAL:Function_call_interrupts
2778 ± 32% +158.0% 7168 ± 19% interrupts.CPU66.NMI:Non-maskable_interrupts
2778 ± 32% +158.0% 7168 ± 19% interrupts.CPU66.PMI:Performance_monitoring_interrupts
4069 +13.7% 4626 ± 4% interrupts.CPU67.CAL:Function_call_interrupts
2783 ± 31% +157.4% 7165 ± 19% interrupts.CPU67.NMI:Non-maskable_interrupts
2783 ± 31% +157.4% 7165 ± 19% interrupts.CPU67.PMI:Performance_monitoring_interrupts
4045 +19.9% 4851 ± 2% interrupts.CPU68.CAL:Function_call_interrupts
2779 ± 32% +157.8% 7166 ± 19% interrupts.CPU68.NMI:Non-maskable_interrupts
2779 ± 32% +157.8% 7166 ± 19% interrupts.CPU68.PMI:Performance_monitoring_interrupts
3897 ± 5% +23.9% 4828 ± 3% interrupts.CPU69.CAL:Function_call_interrupts
2784 ± 31% +157.6% 7171 ± 19% interrupts.CPU69.NMI:Non-maskable_interrupts
2784 ± 31% +157.6% 7171 ± 19% interrupts.CPU69.PMI:Performance_monitoring_interrupts
4045 +19.7% 4841 ± 2% interrupts.CPU7.CAL:Function_call_interrupts
2774 ± 32% +159.5% 7200 ± 18% interrupts.CPU7.NMI:Non-maskable_interrupts
2774 ± 32% +159.5% 7200 ± 18% interrupts.CPU7.PMI:Performance_monitoring_interrupts
4040 +17.1% 4730 ± 5% interrupts.CPU70.CAL:Function_call_interrupts
2784 ± 32% +157.7% 7174 ± 18% interrupts.CPU70.NMI:Non-maskable_interrupts
2784 ± 32% +157.7% 7174 ± 18% interrupts.CPU70.PMI:Performance_monitoring_interrupts
4051 +19.3% 4833 ± 2% interrupts.CPU71.CAL:Function_call_interrupts
2784 ± 32% +157.5% 7169 ± 19% interrupts.CPU71.NMI:Non-maskable_interrupts
2784 ± 32% +157.5% 7169 ± 19% interrupts.CPU71.PMI:Performance_monitoring_interrupts
2785 ± 33% +159.1% 7217 ± 17% interrupts.CPU72.NMI:Non-maskable_interrupts
2785 ± 33% +159.1% 7217 ± 17% interrupts.CPU72.PMI:Performance_monitoring_interrupts
3999 ± 3% +17.1% 4682 ± 5% interrupts.CPU73.CAL:Function_call_interrupts
2780 ± 32% +157.8% 7169 ± 19% interrupts.CPU73.NMI:Non-maskable_interrupts
2780 ± 32% +157.8% 7169 ± 19% interrupts.CPU73.PMI:Performance_monitoring_interrupts
3834 ± 8% +26.8% 4861 ± 2% interrupts.CPU74.CAL:Function_call_interrupts
2778 ± 32% +158.4% 7177 ± 19% interrupts.CPU74.NMI:Non-maskable_interrupts
2778 ± 32% +158.4% 7177 ± 19% interrupts.CPU74.PMI:Performance_monitoring_interrupts
4063 +19.1% 4840 ± 3% interrupts.CPU75.CAL:Function_call_interrupts
3331 ± 32% +115.3% 7171 ± 19% interrupts.CPU75.NMI:Non-maskable_interrupts
3331 ± 32% +115.3% 7171 ± 19% interrupts.CPU75.PMI:Performance_monitoring_interrupts
4021 +19.4% 4801 ± 3% interrupts.CPU76.CAL:Function_call_interrupts
3461 ± 26% +108.5% 7214 ± 18% interrupts.CPU76.NMI:Non-maskable_interrupts
3461 ± 26% +108.5% 7214 ± 18% interrupts.CPU76.PMI:Performance_monitoring_interrupts
3984 ± 3% +20.9% 4819 ± 3% interrupts.CPU77.CAL:Function_call_interrupts
3653 ± 39% +96.4% 7176 ± 19% interrupts.CPU77.NMI:Non-maskable_interrupts
3653 ± 39% +96.4% 7176 ± 19% interrupts.CPU77.PMI:Performance_monitoring_interrupts
4024 +18.9% 4786 ± 4% interrupts.CPU78.CAL:Function_call_interrupts
2782 ± 32% +158.1% 7181 ± 19% interrupts.CPU78.NMI:Non-maskable_interrupts
2782 ± 32% +158.1% 7181 ± 19% interrupts.CPU78.PMI:Performance_monitoring_interrupts
3897 ± 3% +22.8% 4787 ± 2% interrupts.CPU79.CAL:Function_call_interrupts
2784 ± 32% +157.7% 7174 ± 19% interrupts.CPU79.NMI:Non-maskable_interrupts
2784 ± 32% +157.7% 7174 ± 19% interrupts.CPU79.PMI:Performance_monitoring_interrupts
4024 ± 2% +19.5% 4808 ± 3% interrupts.CPU8.CAL:Function_call_interrupts
2770 ± 32% +159.9% 7201 ± 18% interrupts.CPU8.NMI:Non-maskable_interrupts
2770 ± 32% +159.9% 7201 ± 18% interrupts.CPU8.PMI:Performance_monitoring_interrupts
3710 ± 10% +30.5% 4840 ± 2% interrupts.CPU80.CAL:Function_call_interrupts
3323 ± 32% +115.8% 7172 ± 19% interrupts.CPU80.NMI:Non-maskable_interrupts
3323 ± 32% +115.8% 7172 ± 19% interrupts.CPU80.PMI:Performance_monitoring_interrupts
3997 +18.4% 4733 ± 2% interrupts.CPU81.CAL:Function_call_interrupts
3311 ± 32% +117.4% 7198 ± 18% interrupts.CPU81.NMI:Non-maskable_interrupts
3311 ± 32% +117.4% 7198 ± 18% interrupts.CPU81.PMI:Performance_monitoring_interrupts
4049 +17.6% 4763 ± 2% interrupts.CPU82.CAL:Function_call_interrupts
3319 ± 32% +116.1% 7172 ± 19% interrupts.CPU82.NMI:Non-maskable_interrupts
3319 ± 32% +116.1% 7172 ± 19% interrupts.CPU82.PMI:Performance_monitoring_interrupts
4013 +20.3% 4826 ± 2% interrupts.CPU83.CAL:Function_call_interrupts
3318 ± 32% +116.9% 7199 ± 18% interrupts.CPU83.NMI:Non-maskable_interrupts
3318 ± 32% +116.9% 7199 ± 18% interrupts.CPU83.PMI:Performance_monitoring_interrupts
4040 +19.6% 4834 ± 2% interrupts.CPU84.CAL:Function_call_interrupts
3345 ± 32% +114.5% 7175 ± 19% interrupts.CPU84.NMI:Non-maskable_interrupts
3345 ± 32% +114.5% 7175 ± 19% interrupts.CPU84.PMI:Performance_monitoring_interrupts
3991 ± 2% +19.8% 4781 ± 4% interrupts.CPU86.CAL:Function_call_interrupts
3350 ± 33% +113.7% 7162 ± 19% interrupts.CPU86.NMI:Non-maskable_interrupts
3350 ± 33% +113.7% 7162 ± 19% interrupts.CPU86.PMI:Performance_monitoring_interrupts
3326 ± 32% +117.0% 7216 ± 17% interrupts.CPU87.NMI:Non-maskable_interrupts
3326 ± 32% +117.0% 7216 ± 17% interrupts.CPU87.PMI:Performance_monitoring_interrupts
4058 +19.1% 4835 ± 3% interrupts.CPU9.CAL:Function_call_interrupts
2782 ± 32% +159.5% 7220 ± 18% interrupts.CPU9.NMI:Non-maskable_interrupts
2782 ± 32% +159.5% 7220 ± 18% interrupts.CPU9.PMI:Performance_monitoring_interrupts
51.00 ± 59% +150.0% 127.50 ± 23% interrupts.IWI:IRQ_work_interrupts
283310 ± 21% +114.7% 608405 ± 20% interrupts.NMI:Non-maskable_interrupts
283310 ± 21% +114.7% 608405 ± 20% interrupts.PMI:Performance_monitoring_interrupts
38196883 +161.0% 99690427 ± 2% interrupts.RES:Rescheduling_interrupts
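A note on reading these tables: %change is the relative difference between the
two commits' run averages, and the ± figures give run-to-run standard deviation
as a percent of each average. A minimal Python sketch of the arithmetic (the
helper name is illustrative, not part of lkp-tests), checked against the
interrupts.RES summary line above:

    def pct_change(old_avg, new_avg):
        # relative change of the tested commit's average vs. the parent's
        return (new_avg - old_avg) / old_avg * 100.0

    # interrupts.RES:Rescheduling_interrupts totals from the line above
    print("%+.1f%%" % pct_change(38196883, 99690427))   # -> +161.0%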
***************************************************************************************************
lkp-skl-2sp4: 104 threads Intel(R) Xeon(R) Platinum 8170 CPU @ 2.10GHz with 64G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_task/rootfs/runtime/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.2/100%/debian-x86_64-2018-04-03.cgz/300s/lkp-skl-2sp4/custom/reaim/0x200004d
commit:
24d0c1d6e6 ("sched/fair: Do not migrate due to a sync wakeup on exit")
2c83362734 ("sched/fair: Consider SD_NUMA when selecting the most idle group to schedule on")
24d0c1d6e65f635b 2c83362734dad8e48ccc0710b5c
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
2:4 -50% :4 kmsg.do_IRQ:#No_irq_handler_for_vector
:4 25% 1:4 dmesg.WARNING:at#for_ip_error_entry/0x
:4 25% 1:4 dmesg.WARNING:at#for_ip_ret_from_intr/0x
:4 25% 1:4 dmesg.WARNING:stack_going_in_the_wrong_direction?ip=__slab_free/0x
2:4 17% 2:4 perf-profile.calltrace.cycles-pp.error_entry
2:4 24% 3:4 perf-profile.children.cycles-pp.error_entry
%stddev %change %stddev
\ | \
4.77 +11.4% 5.32 ± 2% reaim.std_dev_percent
0.03 +8.2% 0.03 reaim.std_dev_time
2025938 -10.6% 1811037 reaim.time.involuntary_context_switches
8797312 +1.0% 8887670 reaim.time.voluntary_context_switches
9078 +12.0% 10169 meminfo.PageTables
2269 +11.6% 2531 proc-vmstat.nr_page_table_pages
41893 +6.7% 44683 ± 4% softirqs.CPU98.RCU
4.65 ± 2% -15.3% 3.94 ± 10% turbostat.Pkg%pc6
178.24 +2.0% 181.79 turbostat.PkgWatt
12123 ± 18% +33.1% 16134 ± 3% numa-meminfo.node0.Mapped
41009 ± 5% +12.4% 46109 ± 4% numa-meminfo.node0.SReclaimable
131166 ± 5% +4.8% 137483 ± 5% numa-meminfo.node0.SUnreclaim
172176 ± 2% +6.6% 183593 ± 3% numa-meminfo.node0.Slab
3014 ± 18% +34.4% 4052 ± 2% numa-vmstat.node0.nr_mapped
10257 ± 6% +12.4% 11530 ± 4% numa-vmstat.node0.nr_slab_reclaimable
32780 ± 5% +4.9% 34383 ± 5% numa-vmstat.node0.nr_slab_unreclaimable
3664 ± 16% -25.6% 2725 ± 2% numa-vmstat.node1.nr_mapped
1969 ± 4% +2.7% 2022 ± 5% slabinfo.kmalloc-4096.active_objs
5584 +9.8% 6130 ± 3% slabinfo.mm_struct.active_objs
5614 +9.7% 6158 ± 3% slabinfo.mm_struct.num_objs
877.00 +11.5% 978.25 ± 3% slabinfo.names_cache.active_objs
877.00 +11.5% 978.25 ± 3% slabinfo.names_cache.num_objs
1495 -9.8% 1348 ± 5% slabinfo.nsproxy.active_objs
1495 -9.8% 1348 ± 5% slabinfo.nsproxy.num_objs
1313 ± 15% +27.1% 1668 ± 2% sched_debug.cpu.nr_load_updates.stddev
2664 +44.4% 3847 ± 14% sched_debug.cpu.nr_switches.stddev
0.02 ± 77% -89.5% 0.00 ±223% sched_debug.cpu.nr_uninterruptible.avg
81.22 ± 12% +30.5% 106.02 ± 8% sched_debug.cpu.nr_uninterruptible.max
30.10 +53.8% 46.29 ± 7% sched_debug.cpu.nr_uninterruptible.stddev
2408 ± 3% +52.2% 3665 ± 18% sched_debug.cpu.sched_count.stddev
1066 ± 6% +68.1% 1792 ± 17% sched_debug.cpu.sched_goidle.stddev
1413 ± 3% +36.5% 1929 ± 19% sched_debug.cpu.ttwu_count.stddev
553.65 ± 2% -8.6% 505.96 ± 2% sched_debug.cpu.ttwu_local.stddev
3.19 -0.2 2.99 ± 2% perf-stat.i.branch-miss-rate%
7.45 ± 2% -0.7 6.79 ± 2% perf-stat.i.cache-miss-rate%
56644639 -11.5% 50150131 perf-stat.i.cache-misses
5.795e+10 +1.5% 5.883e+10 perf-stat.i.cpu-cycles
0.33 -0.0 0.29 ± 3% perf-stat.i.dTLB-load-miss-rate%
0.10 -0.0 0.09 ± 3% perf-stat.i.dTLB-store-miss-rate%
44.46 -1.5 42.99 ± 2% perf-stat.i.iTLB-load-miss-rate%
3761774 +5.5% 3969797 perf-stat.i.iTLB-load-misses
79.67 -4.1 75.54 perf-stat.i.node-load-miss-rate%
16697015 -17.2% 13819973 perf-stat.i.node-load-misses
2830435 -2.3% 2764205 perf-stat.i.node-loads
60.87 ± 2% -6.0 54.85 ± 3% perf-stat.i.node-store-miss-rate%
3631713 -15.0% 3088593 ± 3% perf-stat.i.node-store-misses
2047418 +7.2% 2195246 perf-stat.i.node-stores
6.61 -0.9 5.71 perf-stat.overall.cache-miss-rate%
44.36 +0.7 45.06 perf-stat.overall.iTLB-load-miss-rate%
11359 -3.1% 11011 perf-stat.overall.instructions-per-iTLB-miss
85.51 -2.2 83.33 perf-stat.overall.node-load-miss-rate%
63.95 -5.5 58.44 perf-stat.overall.node-store-miss-rate%
1.782e+10 -13.1% 1.548e+10 perf-stat.total.cache-misses
5160156 -1.5% 5081874 perf-stat.total.cpu-migrations
1.184e+09 +3.5% 1.225e+09 perf-stat.total.iTLB-load-misses
5.254e+09 -18.8% 4.266e+09 perf-stat.total.node-load-misses
8.905e+08 -4.2% 8.532e+08 perf-stat.total.node-loads
1.143e+09 -16.6% 9.537e+08 ± 3% perf-stat.total.node-store-misses
6.442e+08 +5.2% 6.776e+08 perf-stat.total.node-stores
2.51 -0.3 2.23 perf-profile.calltrace.cycles-pp.copy_page_range.copy_process._do_fork.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.07 ± 2% -0.2 0.84 ± 4% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.release_pages.tlb_flush_mmu_free.arch_tlb_finish_mmu.tlb_finish_mmu
1.04 ± 2% -0.2 0.81 ± 4% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.release_pages.tlb_flush_mmu_free.arch_tlb_finish_mmu
1.37 ± 3% -0.2 1.21 ± 9% perf-profile.calltrace.cycles-pp.irq_enter.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle
0.88 ± 6% -0.1 0.77 ± 6% perf-profile.calltrace.cycles-pp.unmap_vmas.exit_mmap.mmput.flush_old_exec.load_elf_binary
0.81 ± 6% -0.1 0.72 ± 6% perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.exit_mmap.mmput.flush_old_exec
0.60 ± 8% +0.1 0.72 ± 16% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath.queued_read_lock_slowpath.do_wait.kernel_wait4.SYSC_wait4
0.17 ±141% +0.4 0.57 ± 3% perf-profile.calltrace.cycles-pp.queued_write_lock_slowpath.do_exit.do_group_exit.__wake_up_parent.do_syscall_64
1.29 ± 10% +0.4 1.70 ± 12% perf-profile.calltrace.cycles-pp.do_wait.kernel_wait4.SYSC_wait4.do_syscall_64.entry_SYSCALL_64_after_hwframe
25.70 +1.0 26.66 perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
4.15 ± 23% -0.9 3.20 ± 7% perf-profile.children.cycles-pp._raw_spin_lock
14.46 -0.5 14.00 ± 2% perf-profile.children.cycles-pp.exit_mmap
10.89 -0.5 10.43 perf-profile.children.cycles-pp._do_fork
2.46 -0.4 2.09 ± 3% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
3.90 -0.3 3.56 ± 5% perf-profile.children.cycles-pp.apic_timer_interrupt
3.87 -0.3 3.53 ± 5% perf-profile.children.cycles-pp.smp_apic_timer_interrupt
10.20 -0.3 9.88 perf-profile.children.cycles-pp.copy_process
2.53 -0.3 2.24 ± 2% perf-profile.children.cycles-pp.copy_page_range
1.64 ± 5% -0.2 1.41 ± 2% perf-profile.children.cycles-pp.get_page_from_freelist
1.73 ± 4% -0.2 1.51 perf-profile.children.cycles-pp.__alloc_pages_nodemask
1.29 ± 3% -0.2 1.10 ± 4% perf-profile.children.cycles-pp.__schedule
1.22 ± 8% -0.2 1.04 ± 5% perf-profile.children.cycles-pp.ret_from_fork
0.59 ± 3% -0.1 0.45 ± 5% perf-profile.children.cycles-pp.pick_next_task_fair
0.43 ± 2% -0.1 0.30 ± 6% perf-profile.children.cycles-pp.free_pgd_range
0.64 -0.1 0.51 ± 4% perf-profile.children.cycles-pp.wake_up_new_task
0.30 -0.1 0.18 ± 6% perf-profile.children.cycles-pp.free_unref_page
0.98 ± 2% -0.1 0.86 ± 2% perf-profile.children.cycles-pp.rcu_process_callbacks
0.50 ± 4% -0.1 0.38 ± 4% perf-profile.children.cycles-pp.load_balance
0.51 ± 4% -0.1 0.40 ± 5% perf-profile.children.cycles-pp.__list_del_entry_valid
1.22 -0.1 1.10 ± 4% perf-profile.children.cycles-pp.__softirqentry_text_start
0.39 ± 9% -0.1 0.28 ± 7% perf-profile.children.cycles-pp.schedule_tail
0.29 -0.1 0.18 ± 4% perf-profile.children.cycles-pp.free_pcppages_bulk
0.35 ± 4% -0.1 0.25 ± 3% perf-profile.children.cycles-pp.do_task_dead
0.88 -0.1 0.78 ± 2% perf-profile.children.cycles-pp.select_task_rq_fair
0.35 -0.1 0.26 ± 4% perf-profile.children.cycles-pp.free_unref_page_commit
0.13 ± 7% -0.1 0.04 ± 58% perf-profile.children.cycles-pp.sched_ttwu_pending
1.04 -0.1 0.96 ± 3% perf-profile.children.cycles-pp.kmem_cache_free
0.70 ± 2% -0.1 0.62 ± 3% perf-profile.children.cycles-pp.__pte_alloc
0.21 ± 3% -0.1 0.14 ± 7% perf-profile.children.cycles-pp.idle_cpu
0.32 ± 6% -0.1 0.25 ± 6% perf-profile.children.cycles-pp.find_busiest_group
0.36 ± 4% -0.1 0.29 ± 5% perf-profile.children.cycles-pp.finish_task_switch
0.97 -0.1 0.91 perf-profile.children.cycles-pp.anon_vma_interval_tree_insert
0.32 -0.1 0.25 ± 6% perf-profile.children.cycles-pp.mm_init
0.28 ± 3% -0.1 0.22 ± 4% perf-profile.children.cycles-pp.pgd_alloc
0.14 ± 10% -0.1 0.08 ± 8% perf-profile.children.cycles-pp.free_one_page
0.56 ± 4% -0.1 0.50 ± 4% perf-profile.children.cycles-pp.schedule
0.23 ± 2% -0.1 0.18 ± 8% perf-profile.children.cycles-pp.__get_free_pages
0.81 ± 2% -0.1 0.75 ± 3% perf-profile.children.cycles-pp.__slab_free
0.31 ± 9% -0.1 0.25 ± 7% perf-profile.children.cycles-pp.__put_user_4
0.19 ± 2% -0.1 0.14 ± 5% perf-profile.children.cycles-pp.dup_userfaultfd
0.15 ± 6% -0.1 0.10 ± 9% perf-profile.children.cycles-pp.__free_pages_ok
2.25 -0.0 2.20 perf-profile.children.cycles-pp.anon_vma_clone
0.08 ± 5% -0.0 0.04 ± 60% perf-profile.children.cycles-pp.unfreeze_partials
1.00 -0.0 0.96 perf-profile.children.cycles-pp.sys_write
0.20 ± 4% -0.0 0.16 ± 13% perf-profile.children.cycles-pp.devkmsg_write
0.20 ± 4% -0.0 0.16 ± 13% perf-profile.children.cycles-pp.printk_emit
0.21 ± 3% -0.0 0.18 ± 6% perf-profile.children.cycles-pp.update_curr
0.89 -0.0 0.86 ± 2% perf-profile.children.cycles-pp.__vfs_write
0.09 -0.0 0.06 ± 11% perf-profile.children.cycles-pp.new_slab
0.16 ± 7% -0.0 0.13 ± 11% perf-profile.children.cycles-pp.__mmdrop
0.09 ± 9% -0.0 0.06 ± 13% perf-profile.children.cycles-pp.put_cpu_partial
0.44 -0.0 0.41 perf-profile.children.cycles-pp.remove_vma
0.52 -0.0 0.49 ± 3% perf-profile.children.cycles-pp.enqueue_task_fair
0.07 -0.0 0.04 ± 57% perf-profile.children.cycles-pp.lapic_next_deadline
0.20 ± 4% -0.0 0.18 ± 6% perf-profile.children.cycles-pp.__put_task_struct
0.17 ± 7% -0.0 0.15 ± 5% perf-profile.children.cycles-pp.__lock_text_start
0.14 ± 5% -0.0 0.12 ± 5% perf-profile.children.cycles-pp.do_send_sig_info
0.09 -0.0 0.07 perf-profile.children.cycles-pp.entry_SYSCALL_64_stage2
0.13 ± 6% -0.0 0.11 ± 3% perf-profile.children.cycles-pp.put_task_stack
0.09 ± 9% -0.0 0.07 ± 5% perf-profile.children.cycles-pp.__queue_work
0.14 ± 3% -0.0 0.12 ± 5% perf-profile.children.cycles-pp.unmap_single_vma
0.10 -0.0 0.09 ± 4% perf-profile.children.cycles-pp.tcp_sendmsg_locked
0.28 ± 4% +0.0 0.31 ± 2% perf-profile.children.cycles-pp.vfs_statx
0.26 ± 4% +0.0 0.29 ± 3% perf-profile.children.cycles-pp.SYSC_newstat
0.56 ± 3% +0.0 0.59 ± 3% perf-profile.children.cycles-pp.elf_map
0.45 ± 5% +0.0 0.48 ± 3% perf-profile.children.cycles-pp.__wake_up_common
1.12 +0.2 1.32 ± 3% perf-profile.children.cycles-pp.queued_read_lock_slowpath
1.26 ± 2% +0.2 1.49 ± 3% perf-profile.children.cycles-pp.queued_write_lock_slowpath
2.16 ± 2% +0.2 2.39 ± 2% perf-profile.children.cycles-pp.do_wait
2.19 ± 2% +0.2 2.43 ± 2% perf-profile.children.cycles-pp.SYSC_wait4
2.18 ± 2% +0.2 2.42 ± 2% perf-profile.children.cycles-pp.kernel_wait4
25.95 +1.0 26.92 perf-profile.children.cycles-pp.intel_idle
1.46 -0.1 1.31 ± 3% perf-profile.self.cycles-pp.copy_page_range
0.51 ± 4% -0.1 0.40 ± 5% perf-profile.self.cycles-pp.__list_del_entry_valid
0.21 ± 3% -0.1 0.14 ± 7% perf-profile.self.cycles-pp.idle_cpu
0.96 -0.1 0.89 perf-profile.self.cycles-pp.anon_vma_interval_tree_insert
0.19 -0.1 0.14 ± 3% perf-profile.self.cycles-pp.dup_userfaultfd
0.35 ± 2% -0.0 0.30 ± 5% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.25 ± 5% -0.0 0.20 ± 6% perf-profile.self.cycles-pp.find_busiest_group
0.28 ± 2% -0.0 0.24 ± 3% perf-profile.self.cycles-pp.unlink_anon_vmas
0.57 -0.0 0.53 ± 3% perf-profile.self.cycles-pp.select_task_rq_fair
0.15 ± 3% -0.0 0.11 ± 7% perf-profile.self.cycles-pp.free_pcppages_bulk
0.55 -0.0 0.52 ± 2% perf-profile.self.cycles-pp.__slab_free
0.09 -0.0 0.07 perf-profile.self.cycles-pp.entry_SYSCALL_64_stage2
0.07 ± 11% -0.0 0.05 perf-profile.self.cycles-pp.update_rq_clock
0.14 ± 6% +0.0 0.16 perf-profile.self.cycles-pp.handle_mm_fault
25.94 +1.0 26.91 perf-profile.self.cycles-pp.intel_idle
2592 ± 11% -20.4% 2062 ± 11% interrupts.CPU1.RES:Rescheduling_interrupts
1584 -20.9% 1254 ± 8% interrupts.CPU10.RES:Rescheduling_interrupts
1405 ± 3% -21.0% 1110 ± 2% interrupts.CPU100.RES:Rescheduling_interrupts
1420 ± 4% -23.8% 1082 ± 10% interrupts.CPU101.RES:Rescheduling_interrupts
1421 ± 2% -19.7% 1141 ± 11% interrupts.CPU102.RES:Rescheduling_interrupts
1501 ± 27% +35.5% 2033 ± 8% interrupts.CPU103.NMI:Non-maskable_interrupts
1501 ± 27% +35.5% 2033 ± 8% interrupts.CPU103.PMI:Performance_monitoring_interrupts
1394 -23.0% 1074 ± 5% interrupts.CPU103.RES:Rescheduling_interrupts
1566 -19.1% 1266 ± 11% interrupts.CPU11.RES:Rescheduling_interrupts
1531 -17.2% 1267 ± 7% interrupts.CPU12.RES:Rescheduling_interrupts
1559 ± 2% -22.6% 1206 ± 8% interrupts.CPU13.RES:Rescheduling_interrupts
1503 -23.4% 1151 ± 6% interrupts.CPU15.RES:Rescheduling_interrupts
1584 ± 3% -24.2% 1201 ± 8% interrupts.CPU16.RES:Rescheduling_interrupts
1528 ± 6% -18.8% 1240 ± 13% interrupts.CPU17.RES:Rescheduling_interrupts
1518 ± 3% -21.1% 1197 ± 6% interrupts.CPU18.RES:Rescheduling_interrupts
2303 ± 11% -19.5% 1854 interrupts.CPU19.NMI:Non-maskable_interrupts
2303 ± 11% -19.5% 1854 interrupts.CPU19.PMI:Performance_monitoring_interrupts
1457 ± 4% -18.0% 1194 ± 4% interrupts.CPU19.RES:Rescheduling_interrupts
1884 ± 5% -15.3% 1596 ± 4% interrupts.CPU2.RES:Rescheduling_interrupts
1543 ± 4% -22.9% 1189 ± 7% interrupts.CPU20.RES:Rescheduling_interrupts
1480 -19.4% 1193 ± 5% interrupts.CPU21.RES:Rescheduling_interrupts
1492 ± 2% -17.5% 1231 ± 6% interrupts.CPU22.RES:Rescheduling_interrupts
1482 ± 2% -17.0% 1230 ± 7% interrupts.CPU24.RES:Rescheduling_interrupts
1434 ± 3% -17.4% 1184 ± 6% interrupts.CPU25.RES:Rescheduling_interrupts
1568 ± 4% -12.7% 1368 ± 4% interrupts.CPU26.RES:Rescheduling_interrupts
1544 -16.5% 1289 ± 3% interrupts.CPU27.RES:Rescheduling_interrupts
1486 ± 3% -16.6% 1238 ± 5% interrupts.CPU29.RES:Rescheduling_interrupts
1856 ± 3% -14.3% 1591 ± 8% interrupts.CPU3.RES:Rescheduling_interrupts
1507 -18.8% 1224 ± 9% interrupts.CPU30.RES:Rescheduling_interrupts
1561 ± 2% -19.9% 1250 ± 3% interrupts.CPU31.RES:Rescheduling_interrupts
1551 -23.4% 1187 ± 3% interrupts.CPU32.RES:Rescheduling_interrupts
1449 ± 4% -16.6% 1208 ± 9% interrupts.CPU33.RES:Rescheduling_interrupts
1521 ± 2% -21.6% 1193 ± 6% interrupts.CPU34.RES:Rescheduling_interrupts
1644 ± 15% -26.2% 1212 ± 7% interrupts.CPU35.RES:Rescheduling_interrupts
1498 -22.5% 1161 ± 2% interrupts.CPU36.RES:Rescheduling_interrupts
1487 ± 3% -19.8% 1192 ± 4% interrupts.CPU37.RES:Rescheduling_interrupts
1538 ± 4% -25.0% 1153 ± 5% interrupts.CPU38.RES:Rescheduling_interrupts
1486 -20.6% 1181 ± 7% interrupts.CPU39.RES:Rescheduling_interrupts
1488 ± 2% -20.2% 1187 ± 3% interrupts.CPU40.RES:Rescheduling_interrupts
1503 -22.3% 1168 ± 10% interrupts.CPU41.RES:Rescheduling_interrupts
1560 ± 23% +24.5% 1942 ± 4% interrupts.CPU43.NMI:Non-maskable_interrupts
1560 ± 23% +24.5% 1942 ± 4% interrupts.CPU43.PMI:Performance_monitoring_interrupts
1654 ± 7% -26.5% 1216 ± 4% interrupts.CPU43.RES:Rescheduling_interrupts
1501 ± 5% -23.5% 1148 ± 4% interrupts.CPU44.RES:Rescheduling_interrupts
1473 ± 3% -21.0% 1164 ± 7% interrupts.CPU45.RES:Rescheduling_interrupts
1424 ± 3% -18.0% 1167 ± 6% interrupts.CPU46.RES:Rescheduling_interrupts
1481 -25.1% 1109 ± 4% interrupts.CPU47.RES:Rescheduling_interrupts
1436 -19.8% 1152 ± 4% interrupts.CPU48.RES:Rescheduling_interrupts
1688 ± 2% -20.2% 1347 ± 8% interrupts.CPU5.RES:Rescheduling_interrupts
1440 ± 2% -20.9% 1139 ± 7% interrupts.CPU50.RES:Rescheduling_interrupts
1462 -23.5% 1118 ± 7% interrupts.CPU51.RES:Rescheduling_interrupts
1410 ± 2% -14.7% 1203 ± 5% interrupts.CPU52.RES:Rescheduling_interrupts
1524 ± 2% -24.6% 1149 ± 5% interrupts.CPU53.RES:Rescheduling_interrupts
1438 -16.5% 1201 ± 9% interrupts.CPU54.RES:Rescheduling_interrupts
1454 ± 2% -19.5% 1170 ± 6% interrupts.CPU55.RES:Rescheduling_interrupts
1468 -20.1% 1173 ± 4% interrupts.CPU56.RES:Rescheduling_interrupts
1461 -20.6% 1159 ± 4% interrupts.CPU57.RES:Rescheduling_interrupts
1410 ± 2% -18.1% 1155 ± 4% interrupts.CPU58.RES:Rescheduling_interrupts
1452 ± 3% -19.0% 1176 ± 6% interrupts.CPU59.RES:Rescheduling_interrupts
1621 ± 4% -16.3% 1357 ± 5% interrupts.CPU6.RES:Rescheduling_interrupts
1455 ± 2% -22.7% 1124 ± 8% interrupts.CPU60.RES:Rescheduling_interrupts
1491 ± 3% -25.8% 1106 ± 11% interrupts.CPU61.RES:Rescheduling_interrupts
1401 -18.4% 1143 ± 3% interrupts.CPU62.RES:Rescheduling_interrupts
1429 -19.4% 1152 ± 9% interrupts.CPU63.RES:Rescheduling_interrupts
1437 -22.8% 1109 ± 8% interrupts.CPU64.RES:Rescheduling_interrupts
1499 -26.4% 1104 ± 7% interrupts.CPU65.RES:Rescheduling_interrupts
1485 ± 4% -23.0% 1144 ± 7% interrupts.CPU66.RES:Rescheduling_interrupts
1405 ± 3% -19.0% 1138 ± 8% interrupts.CPU67.RES:Rescheduling_interrupts
1492 ± 2% -22.4% 1159 ± 12% interrupts.CPU68.RES:Rescheduling_interrupts
1435 ± 4% -19.9% 1149 ± 14% interrupts.CPU69.RES:Rescheduling_interrupts
1625 ± 3% -15.6% 1371 ± 6% interrupts.CPU7.RES:Rescheduling_interrupts
1480 ± 3% -21.4% 1164 ± 12% interrupts.CPU70.RES:Rescheduling_interrupts
2355 ± 10% -30.9% 1627 ± 25% interrupts.CPU71.NMI:Non-maskable_interrupts
2355 ± 10% -30.9% 1627 ± 25% interrupts.CPU71.PMI:Performance_monitoring_interrupts
1428 ± 3% -19.4% 1151 ± 8% interrupts.CPU71.RES:Rescheduling_interrupts
1427 ± 2% -19.7% 1145 ± 9% interrupts.CPU72.RES:Rescheduling_interrupts
1452 ± 4% -17.5% 1198 ± 7% interrupts.CPU73.RES:Rescheduling_interrupts
1419 ± 2% -19.0% 1149 ± 6% interrupts.CPU74.RES:Rescheduling_interrupts
1441 ± 2% -18.4% 1176 ± 9% interrupts.CPU75.RES:Rescheduling_interrupts
1435 ± 3% -16.1% 1204 ± 6% interrupts.CPU76.RES:Rescheduling_interrupts
1445 -22.2% 1124 ± 6% interrupts.CPU77.RES:Rescheduling_interrupts
1481 ± 4% -23.8% 1128 ± 8% interrupts.CPU78.RES:Rescheduling_interrupts
1392 -20.7% 1104 ± 9% interrupts.CPU79.RES:Rescheduling_interrupts
1621 ± 4% -22.7% 1252 ± 9% interrupts.CPU8.RES:Rescheduling_interrupts
1478 ± 2% -24.3% 1118 ± 6% interrupts.CPU80.RES:Rescheduling_interrupts
1481 ± 4% -23.2% 1137 ± 8% interrupts.CPU81.RES:Rescheduling_interrupts
1453 ± 2% -20.8% 1151 ± 4% interrupts.CPU82.RES:Rescheduling_interrupts
1431 -22.5% 1110 ± 10% interrupts.CPU83.RES:Rescheduling_interrupts
1477 ± 4% -25.9% 1094 ± 7% interrupts.CPU84.RES:Rescheduling_interrupts
1467 ± 2% -21.4% 1153 ± 6% interrupts.CPU85.RES:Rescheduling_interrupts
1427 ± 3% -20.1% 1140 ± 12% interrupts.CPU86.RES:Rescheduling_interrupts
1512 ± 5% -25.5% 1126 ± 5% interrupts.CPU87.RES:Rescheduling_interrupts
1409 -20.8% 1115 ± 5% interrupts.CPU88.RES:Rescheduling_interrupts
1408 ± 2% -18.8% 1143 ± 5% interrupts.CPU89.RES:Rescheduling_interrupts
1659 ± 2% -22.0% 1294 ± 11% interrupts.CPU9.RES:Rescheduling_interrupts
1475 ± 3% -23.7% 1126 ± 5% interrupts.CPU90.RES:Rescheduling_interrupts
1444 -22.8% 1114 ± 7% interrupts.CPU91.RES:Rescheduling_interrupts
1442 ± 6% -21.3% 1135 ± 7% interrupts.CPU92.RES:Rescheduling_interrupts
1466 ± 2% -21.7% 1148 ± 2% interrupts.CPU93.RES:Rescheduling_interrupts
1413 ± 2% -17.7% 1163 ± 6% interrupts.CPU94.RES:Rescheduling_interrupts
1611 ± 3% -29.8% 1131 ± 5% interrupts.CPU95.RES:Rescheduling_interrupts
1406 -20.1% 1123 ± 5% interrupts.CPU96.RES:Rescheduling_interrupts
1386 ± 3% -20.0% 1109 ± 9% interrupts.CPU97.RES:Rescheduling_interrupts
1406 ± 4% -21.6% 1102 ± 7% interrupts.CPU98.RES:Rescheduling_interrupts
1379 ± 4% -18.9% 1118 ± 8% interrupts.CPU99.RES:Rescheduling_interrupts
163356 -19.1% 132229 interrupts.RES:Rescheduling_interrupts
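Each block's parameter header pairs a slash-separated key line with a matching
value line, so splitting both on '/' and zipping them recovers the test
configuration. A quick illustrative Python sketch using the reaim header of the
block above (lkp-tests has its own job-file machinery; this is only a reading
aid):

    keys = "compiler/cpufreq_governor/kconfig/nr_task/rootfs/runtime/tbox_group/test/testcase/ucode"
    vals = ("gcc-7/performance/x86_64-rhel-7.2/100%/"
            "debian-x86_64-2018-04-03.cgz/300s/lkp-skl-2sp4/custom/reaim/0x200004d")

    # both lines carry the same number of '/'-separated fields
    config = dict(zip(keys.split("/"), vals.split("/")))
    assert config["testcase"] == "reaim" and config["nr_task"] == "100%"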
***************************************************************************************************
lkp-skl-2sp4: 104 threads Intel(R) Xeon(R) Platinum 8170 CPU @ 2.10GHz with 64G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_task/rootfs/runtime/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.2/100%/debian-x86_64-2018-04-03.cgz/300s/lkp-skl-2sp4/alltests/reaim/0x200004d
commit:
24d0c1d6e6 ("sched/fair: Do not migrate due to a sync wakeup on exit")
2c83362734 ("sched/fair: Consider SD_NUMA when selecting the most idle group to schedule on")
24d0c1d6e65f635b 2c83362734dad8e48ccc0710b5c
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
1:4 -25% :4 dmesg.WARNING:at#for_ip_error_entry/0x
:4 25% 1:4 dmesg.WARNING:stack_going_in_the_wrong_direction?ip=__slab_free/0x
1:4 9% 1:4 perf-profile.children.cycles-pp.error_entry
0:4 4% 0:4 perf-profile.self.cycles-pp.error_entry
%stddev %change %stddev
\ | \
8.37 -4.0% 8.03 reaim.child_systime
0.42 +2.4% 0.43 reaim.std_dev_percent
294600 -7.7% 271774 reaim.time.involuntary_context_switches
329.71 -4.0% 316.42 reaim.time.system_time
1675279 +1.3% 1696674 reaim.time.voluntary_context_switches
4.516e+08 ± 6% +17.8% 5.322e+08 ± 10% cpuidle.POLL.time
200189 ± 3% +10.4% 220923 meminfo.AnonHugePages
154.84 +0.8% 156.13 turbostat.PkgWatt
64.74 ± 20% +95.3% 126.45 ± 19% boot-time.boot
6459 ± 21% +99.3% 12874 ± 20% boot-time.idle
40.50 ± 43% +96.3% 79.50 ± 32% numa-vmstat.node1.nr_anon_transparent_hugepages
12122 ± 7% -20.2% 9674 ± 14% numa-vmstat.node1.nr_slab_reclaimable
153053 ± 3% +10.6% 169298 ± 7% numa-meminfo.node0.Slab
84083 ± 42% +95.4% 164331 ± 32% numa-meminfo.node1.AnonHugePages
48467 ± 7% -20.2% 38679 ± 14% numa-meminfo.node1.SReclaimable
97.25 ± 3% +10.8% 107.75 proc-vmstat.nr_anon_transparent_hugepages
26303 -1.8% 25827 proc-vmstat.nr_kernel_stack
4731 -5.2% 4485 ± 3% proc-vmstat.nr_page_table_pages
2463 ±134% -91.1% 220.50 ± 98% proc-vmstat.numa_hint_faults
16885 ± 6% -9.6% 15258 ± 4% softirqs.CPU32.RCU
30855 ± 13% -14.3% 26453 softirqs.CPU4.TIMER
17409 ± 7% -12.5% 15231 softirqs.CPU51.RCU
18157 ± 9% -12.9% 15809 ± 2% softirqs.CPU55.RCU
16588 ± 10% -11.6% 14662 ± 2% softirqs.CPU78.RCU
17307 ± 7% -12.6% 15130 ± 2% softirqs.CPU87.RCU
17146 ± 9% -13.2% 14880 ± 2% softirqs.CPU90.RCU
2593 ± 6% -17.3% 2145 ± 10% slabinfo.Acpi-ParseExt.active_objs
2593 ± 6% -17.3% 2145 ± 10% slabinfo.Acpi-ParseExt.num_objs
682.50 ± 6% -17.1% 565.50 ± 5% slabinfo.bdev_cache.active_objs
682.50 ± 6% -17.1% 565.50 ± 5% slabinfo.bdev_cache.num_objs
6102 ± 5% -30.6% 4234 ± 13% slabinfo.eventpoll_epi.active_objs
6102 ± 5% -30.6% 4234 ± 13% slabinfo.eventpoll_epi.num_objs
5340 ± 5% -30.6% 3704 ± 13% slabinfo.eventpoll_pwq.active_objs
5340 ± 5% -30.6% 3704 ± 13% slabinfo.eventpoll_pwq.num_objs
1018 ± 9% -17.6% 839.00 ± 2% slabinfo.file_lock_cache.active_objs
1018 ± 9% -17.6% 839.00 ± 2% slabinfo.file_lock_cache.num_objs
3359 ± 7% -12.7% 2933 ± 8% slabinfo.fsnotify_mark_connector.active_objs
3359 ± 7% -12.7% 2933 ± 8% slabinfo.fsnotify_mark_connector.num_objs
1485 ± 3% -10.9% 1323 ± 6% slabinfo.nsproxy.active_objs
1485 ± 3% -10.9% 1323 ± 6% slabinfo.nsproxy.num_objs
1.67 -0.0 1.64 perf-stat.branch-miss-rate%
1.383e+10 -1.6% 1.361e+10 perf-stat.branch-misses
7.52 -1.0 6.54 perf-stat.cache-miss-rate%
3.062e+09 -13.7% 2.643e+09 perf-stat.cache-misses
1.041e+13 +2.4% 1.066e+13 perf-stat.cpu-cycles
744682 -1.2% 735818 perf-stat.cpu-migrations
4.046e+08 +3.2% 4.177e+08 perf-stat.iTLB-load-misses
4.821e+08 +3.3% 4.98e+08 perf-stat.iTLB-loads
16668 -2.8% 16200 perf-stat.instructions-per-iTLB-miss
87.92 -2.1 85.80 perf-stat.node-load-miss-rate%
8.016e+08 -20.8% 6.351e+08 perf-stat.node-load-misses
1.102e+08 -4.6% 1.051e+08 perf-stat.node-loads
59.53 -6.5 53.07 perf-stat.node-store-miss-rate%
1.435e+08 -24.7% 1.081e+08 ± 2% perf-stat.node-store-misses
97539783 -2.0% 95550500 perf-stat.node-stores
552.25 ± 27% +67.3% 923.75 ± 24% interrupts.CPU10.NMI:Non-maskable_interrupts
552.25 ± 27% +67.3% 923.75 ± 24% interrupts.CPU10.PMI:Performance_monitoring_interrupts
455.50 +32.9% 605.50 ± 19% interrupts.CPU15.RES:Rescheduling_interrupts
361.75 ± 6% +58.9% 574.75 ± 23% interrupts.CPU26.RES:Rescheduling_interrupts
321.25 ± 7% +22.0% 392.00 ± 6% interrupts.CPU30.RES:Rescheduling_interrupts
278.25 ± 9% +18.1% 328.75 ± 13% interrupts.CPU41.RES:Rescheduling_interrupts
746.75 ± 11% +60.5% 1198 ± 37% interrupts.CPU44.NMI:Non-maskable_interrupts
746.75 ± 11% +60.5% 1198 ± 37% interrupts.CPU44.PMI:Performance_monitoring_interrupts
645.25 ± 32% +43.0% 922.50 ± 13% interrupts.CPU47.NMI:Non-maskable_interrupts
645.25 ± 32% +43.0% 922.50 ± 13% interrupts.CPU47.PMI:Performance_monitoring_interrupts
631.25 ± 23% +37.4% 867.25 ± 12% interrupts.CPU58.NMI:Non-maskable_interrupts
631.25 ± 23% +37.4% 867.25 ± 12% interrupts.CPU58.PMI:Performance_monitoring_interrupts
713.50 ± 12% +22.2% 871.75 ± 10% interrupts.CPU65.NMI:Non-maskable_interrupts
713.50 ± 12% +22.2% 871.75 ± 10% interrupts.CPU65.PMI:Performance_monitoring_interrupts
620.00 ± 14% +95.4% 1211 ± 56% interrupts.CPU72.NMI:Non-maskable_interrupts
620.00 ± 14% +95.4% 1211 ± 56% interrupts.CPU72.PMI:Performance_monitoring_interrupts
620.75 ± 30% +72.8% 1072 ± 33% interrupts.CPU83.NMI:Non-maskable_interrupts
620.75 ± 30% +72.8% 1072 ± 33% interrupts.CPU83.PMI:Performance_monitoring_interrupts
779.83 ± 4% +53.9% 1200 ± 16% sched_debug.cfs_rq:/.exec_clock.stddev
43531 ± 4% +69.1% 73628 ± 7% sched_debug.cfs_rq:/.min_vruntime.stddev
1.00 +9.3% 1.09 ± 3% sched_debug.cfs_rq:/.nr_spread_over.avg
2.39 ± 75% +256.7% 8.54 ± 53% sched_debug.cfs_rq:/.removed.load_avg.avg
16.68 ± 62% +171.4% 45.26 ± 35% sched_debug.cfs_rq:/.removed.load_avg.stddev
110.67 ± 75% +254.3% 392.14 ± 53% sched_debug.cfs_rq:/.removed.runnable_sum.avg
771.19 ± 62% +170.1% 2082 ± 35% sched_debug.cfs_rq:/.removed.runnable_sum.stddev
43530 ± 4% +69.1% 73628 ± 7% sched_debug.cfs_rq:/.spread0.stddev
216468 ± 5% +36.6% 295649 ± 6% sched_debug.cpu.clock.avg
216487 ± 5% +36.6% 295669 ± 6% sched_debug.cpu.clock.max
216210 ± 5% +36.6% 295436 ± 6% sched_debug.cpu.clock.min
216468 ± 5% +36.6% 295649 ± 6% sched_debug.cpu.clock_task.avg
216487 ± 5% +36.6% 295669 ± 6% sched_debug.cpu.clock_task.max
216210 ± 5% +36.6% 295436 ± 6% sched_debug.cpu.clock_task.min
1593 ± 11% -7.6% 1473 ± 5% sched_debug.cpu.curr->pid.avg
34247 +11.9% 38310 ± 6% sched_debug.cpu.nr_switches.max
2145 ± 2% +19.3% 2559 ± 5% sched_debug.cpu.nr_switches.stddev
31588 +16.2% 36707 ± 6% sched_debug.cpu.sched_count.max
1791 ± 3% +27.7% 2288 ± 7% sched_debug.cpu.sched_count.stddev
13661 +18.6% 16205 ± 8% sched_debug.cpu.sched_goidle.max
838.65 ± 4% +25.7% 1054 ± 9% sched_debug.cpu.sched_goidle.stddev
12457 ± 3% +33.1% 16586 ± 13% sched_debug.cpu.ttwu_count.max
887.04 ± 3% +37.4% 1218 ± 13% sched_debug.cpu.ttwu_count.stddev
264.20 ± 4% +25.9% 332.56 ± 11% sched_debug.cpu.ttwu_local.stddev
216473 ± 5% +36.6% 295655 ± 6% sched_debug.cpu_clk
216473 ± 5% +36.6% 295655 ± 6% sched_debug.ktime
217713 ± 5% +36.3% 296841 ± 6% sched_debug.sched_clk
0.54 ± 4% +0.1 0.67 ± 8% perf-profile.calltrace.cycles-pp.ktime_get_update_offsets_now.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state
1.60 ± 5% +0.3 1.89 ± 9% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle
1.40 ± 12% +0.9 2.29 ± 17% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.tick_do_update_jiffies64.tick_irq_enter.irq_enter
1.60 ± 12% +1.0 2.60 ± 17% perf-profile.calltrace.cycles-pp._raw_spin_lock.tick_do_update_jiffies64.tick_irq_enter.irq_enter.smp_apic_timer_interrupt
1.61 ± 12% +1.0 2.62 ± 17% perf-profile.calltrace.cycles-pp.tick_do_update_jiffies64.tick_irq_enter.irq_enter.smp_apic_timer_interrupt.apic_timer_interrupt
3.06 ± 6% +1.3 4.36 ± 12% perf-profile.calltrace.cycles-pp.tick_irq_enter.irq_enter.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state
3.13 ± 5% +1.3 4.45 ± 12% perf-profile.calltrace.cycles-pp.irq_enter.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle
5.58 ± 2% +1.7 7.32 ± 12% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle.cpu_startup_entry
5.60 ± 2% +1.8 7.36 ± 12% perf-profile.calltrace.cycles-pp.apic_timer_interrupt.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
0.07 ± 17% +0.0 0.10 ± 8% perf-profile.children.cycles-pp.copy_user_generic_unrolled
0.10 ± 4% +0.0 0.13 ± 11% perf-profile.children.cycles-pp.arch_get_unmapped_area_topdown
0.10 ± 4% +0.0 0.14 ± 12% perf-profile.children.cycles-pp.vfs_statx
0.09 ± 4% +0.0 0.12 ± 13% perf-profile.children.cycles-pp.SYSC_newstat
0.06 ± 14% +0.0 0.09 ± 24% perf-profile.children.cycles-pp.__vmalloc_node_range
0.11 ± 14% +0.0 0.14 ± 25% perf-profile.children.cycles-pp.tlb_flush_mmu_tlbonly
0.06 ± 63% +0.0 0.10 ± 18% perf-profile.children.cycles-pp.terminate_walk
0.00 +0.1 0.06 ± 14% perf-profile.children.cycles-pp.find_vmap_area
0.01 ±173% +0.1 0.08 ± 11% perf-profile.children.cycles-pp.remove_vm_area
0.00 +0.1 0.07 ± 38% perf-profile.children.cycles-pp.__get_vm_area_node
0.18 ± 11% +0.1 0.26 ± 15% perf-profile.children.cycles-pp.creat
0.58 ± 5% +0.1 0.73 ± 8% perf-profile.children.cycles-pp.ktime_get_update_offsets_now
1.57 ± 11% +1.0 2.52 ± 16% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
1.66 ± 14% +1.0 2.67 ± 16% perf-profile.children.cycles-pp.tick_do_update_jiffies64
2.28 ± 7% +1.1 3.35 ± 14% perf-profile.children.cycles-pp._raw_spin_lock
3.15 ± 7% +1.3 4.44 ± 12% perf-profile.children.cycles-pp.tick_irq_enter
3.21 ± 6% +1.3 4.53 ± 12% perf-profile.children.cycles-pp.irq_enter
6.39 +1.8 8.22 ± 11% perf-profile.children.cycles-pp.smp_apic_timer_interrupt
6.42 +1.9 8.29 ± 11% perf-profile.children.cycles-pp.apic_timer_interrupt
0.07 ± 13% +0.0 0.08 ± 13% perf-profile.self.cycles-pp.copy_user_generic_unrolled
0.12 ± 22% +0.0 0.16 ± 21% perf-profile.self.cycles-pp.do_syscall_64
0.27 ± 8% +0.1 0.35 ± 12% perf-profile.self.cycles-pp.tick_irq_enter
0.55 ± 4% +0.1 0.69 ± 9% perf-profile.self.cycles-pp.ktime_get_update_offsets_now
1.57 ± 11% +1.0 2.52 ± 16% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
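The ± columns come from the spread across the repeated runs (four here, per the
fail:runs counts). A sketch of the computation with made-up per-run samples
(the values are hypothetical, and whether lkp-tests uses sample or population
stddev is an implementation detail; only the shape of the formula matters):

    import statistics

    runs = [216468.0, 230000.0, 205000.0, 214000.0]   # hypothetical per-run averages
    mean = statistics.mean(runs)
    stddev_pct = statistics.pstdev(runs) / mean * 100.0   # reported as "± N%"
    print("%.0f ± %.0f%%" % (mean, stddev_pct))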
***************************************************************************************************
lkp-skl-2sp4: 104 threads Intel(R) Xeon(R) Platinum 8170 CPU @ 2.10GHz with 64G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_task/rootfs/runtime/tbox_group/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.2/50%/debian-x86_64-2018-04-03.cgz/300s/lkp-skl-2sp4/pft/0x200004d
commit:
24d0c1d6e6 ("sched/fair: Do not migrate due to a sync wakeup on exit")
2c83362734 ("sched/fair: Consider SD_NUMA when selecting the most idle group to schedule on")
24d0c1d6e65f635b 2c83362734dad8e48ccc0710b5c
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
:4 25% 1:4 dmesg.WARNING:at#for_ip_error_entry/0x
1:4 -25% :4 dmesg.WARNING:at#for_ip_ret_from_intr/0x
%stddev %change %stddev
\ | \
230070 -42.7% 131828 pft.faults_per_sec_per_cpu
24154 -4.1% 23160 ± 3% pft.time.involuntary_context_switches
7817260 -32.7% 5262150 pft.time.minor_page_faults
4048 +18.0% 4778 pft.time.percent_of_cpu_this_job_got
11917 +16.9% 13933 pft.time.system_time
250.86 +73.7% 435.77 ± 10% pft.time.user_time
142515 +59.4% 227173 ± 5% pft.time.voluntary_context_switches
7334244 ± 7% -64.3% 2618153 ± 65% numa-numastat.node1.local_node
7344576 ± 7% -64.2% 2628307 ± 65% numa-numastat.node1.numa_hit
59.50 -11.3% 52.75 vmstat.cpu.id
39.75 +20.1% 47.75 vmstat.procs.r
59.92 -6.7 53.19 mpstat.cpu.idle%
0.00 ± 26% +0.0 0.01 ± 7% mpstat.cpu.soft%
39.08 +6.1 45.22 mpstat.cpu.sys%
0.99 +0.6 1.58 ± 10% mpstat.cpu.usr%
4673420 ± 11% +30.2% 6084101 ± 13% cpuidle.C1.time
2.712e+08 ± 5% +30.1% 3.529e+08 ± 2% cpuidle.C1E.time
624111 ± 2% +36.3% 850658 ± 4% cpuidle.C1E.usage
1.791e+10 -12.1% 1.574e+10 cpuidle.C6.time
18607622 -13.1% 16169546 cpuidle.C6.usage
1.361e+08 ± 19% +118.3% 2.971e+08 ± 13% cpuidle.POLL.time
5103004 ± 2% +26.6% 6459744 meminfo.Active
5067740 ± 2% +27.0% 6436925 meminfo.Active(anon)
756326 ± 8% +124.1% 1695158 ± 25% meminfo.AnonHugePages
4887369 ± 3% +26.7% 6193791 meminfo.AnonPages
5.41e+08 +19.0% 6.437e+08 meminfo.Committed_AS
6892 +33.2% 9179 ± 10% meminfo.PageTables
574999 ± 9% +115.7% 1240273 ± 28% numa-vmstat.node0.nr_active_anon
554302 ± 7% +116.5% 1200168 ± 29% numa-vmstat.node0.nr_anon_pages
574609 ± 9% +115.8% 1240281 ± 28% numa-vmstat.node0.nr_zone_active_anon
10839 ± 9% -25.3% 8092 ± 8% numa-vmstat.node1.nr_slab_reclaimable
4140994 ± 7% -49.5% 2091526 ± 62% numa-vmstat.node1.numa_hit
3996598 ± 7% -51.3% 1947593 ± 68% numa-vmstat.node1.numa_local
2314942 ± 12% +115.3% 4985067 ± 27% numa-meminfo.node0.Active
2297543 ± 13% +116.5% 4973510 ± 27% numa-meminfo.node0.Active(anon)
359259 ± 13% +249.8% 1256563 ± 42% numa-meminfo.node0.AnonHugePages
2191741 ± 12% +118.8% 4794936 ± 28% numa-meminfo.node0.AnonPages
3351366 ± 9% +80.5% 6050637 ± 27% numa-meminfo.node0.MemUsed
3133 ± 9% +91.1% 5990 ± 32% numa-meminfo.node0.PageTables
43356 ± 9% -25.3% 32370 ± 8% numa-meminfo.node1.SReclaimable
1177 +15.6% 1360 turbostat.Avg_MHz
42.29 +6.6 48.84 turbostat.Busy%
602767 ± 3% +35.6% 817443 ± 2% turbostat.C1E
0.86 ± 6% +0.3 1.11 turbostat.C1E%
18604760 -13.1% 16166704 turbostat.C6
57.00 -6.9 50.14 turbostat.C6%
56.09 -12.8% 48.90 ± 4% turbostat.CPU%c1
255.40 -8.2% 234.42 turbostat.PkgWatt
65.03 -17.0% 53.98 turbostat.RAMWatt
6202 ± 2% -28.7% 4424 ± 6% slabinfo.eventpoll_epi.active_objs
6202 ± 2% -28.7% 4424 ± 6% slabinfo.eventpoll_epi.num_objs
5427 ± 2% -28.7% 3871 ± 6% slabinfo.eventpoll_pwq.active_objs
5427 ± 2% -28.7% 3871 ± 6% slabinfo.eventpoll_pwq.num_objs
14969 ± 4% -21.6% 11739 ± 3% slabinfo.files_cache.active_objs
15026 ± 4% -21.3% 11821 ± 3% slabinfo.files_cache.num_objs
5869 -25.8% 4356 ± 11% slabinfo.mm_struct.active_objs
5933 -25.8% 4402 ± 12% slabinfo.mm_struct.num_objs
6573 ± 2% -19.3% 5303 ± 12% slabinfo.sighand_cache.active_objs
6602 ± 2% -19.1% 5339 ± 12% slabinfo.sighand_cache.num_objs
1424 ± 6% -17.5% 1175 ± 3% slabinfo.skbuff_fclone_cache.active_objs
1424 ± 6% -17.5% 1175 ± 3% slabinfo.skbuff_fclone_cache.num_objs
1804 ± 4% -12.8% 1573 ± 8% slabinfo.task_group.active_objs
1804 ± 4% -12.8% 1573 ± 8% slabinfo.task_group.num_objs
74.90 -7.1 67.78 perf-stat.cache-miss-rate%
1.917e+10 -30.7% 1.329e+10 ± 2% perf-stat.cache-misses
2.559e+10 -23.4% 1.96e+10 ± 2% perf-stat.cache-references
3.608e+13 +15.1% 4.153e+13 perf-stat.cpu-cycles
10929 -21.3% 8606 ± 15% perf-stat.cpu-migrations
0.08 -0.0 0.05 ± 13% perf-stat.dTLB-store-miss-rate%
1.187e+08 -48.3% 61361329 ± 7% perf-stat.dTLB-store-misses
1.41e+11 -12.2% 1.238e+11 ± 6% perf-stat.dTLB-stores
2.149e+08 -30.2% 1.5e+08 ± 7% perf-stat.iTLB-loads
8598974 -29.6% 6054449 perf-stat.minor-faults
1.421e+08 ± 3% -21.4% 1.117e+08 ± 6% perf-stat.node-load-misses
1.5e+09 -27.5% 1.088e+09 perf-stat.node-loads
0.85 ± 70% -0.5 0.38 ± 13% perf-stat.node-store-miss-rate%
98747708 ± 70% -67.5% 32046709 ± 13% perf-stat.node-store-misses
1.146e+10 -26.0% 8.481e+09 perf-stat.node-stores
8598995 -29.6% 6054463 perf-stat.page-faults
1300724 ± 2% +22.6% 1594074 proc-vmstat.nr_active_anon
1256932 ± 2% +22.5% 1539865 proc-vmstat.nr_anon_pages
406.25 ± 8% +79.9% 731.00 ± 14% proc-vmstat.nr_anon_transparent_hugepages
1461780 -2.2% 1429319 proc-vmstat.nr_dirty_background_threshold
2927135 -2.2% 2862134 proc-vmstat.nr_dirty_threshold
14564117 -1.9% 14293825 proc-vmstat.nr_free_pages
8545 +6.6% 9109 ± 2% proc-vmstat.nr_mapped
1771 ± 2% +23.5% 2188 ± 5% proc-vmstat.nr_page_table_pages
1300723 ± 2% +22.6% 1594070 proc-vmstat.nr_zone_active_anon
13839779 -30.8% 9576432 proc-vmstat.numa_hit
13819218 -30.9% 9555852 proc-vmstat.numa_local
2.725e+09 -32.6% 1.837e+09 proc-vmstat.pgalloc_normal
8634919 -29.6% 6080480 proc-vmstat.pgfault
2.725e+09 -32.6% 1.836e+09 proc-vmstat.pgfree
5304368 -32.6% 3574443 proc-vmstat.thp_deferred_split_page
5305872 -32.6% 3575998 proc-vmstat.thp_fault_alloc
402915 +1.3% 408011 interrupts.CAL:Function_call_interrupts
186176 ± 6% -21.8% 145644 ± 12% interrupts.CPU0.RES:Rescheduling_interrupts
12448 ± 16% -71.5% 3551 ± 58% interrupts.CPU2.RES:Rescheduling_interrupts
334.25 ± 58% +181.7% 941.50 ± 24% interrupts.CPU21.RES:Rescheduling_interrupts
202.75 ± 50% +1023.3% 2277 ± 98% interrupts.CPU22.RES:Rescheduling_interrupts
138.50 ± 59% +1086.3% 1643 ± 36% interrupts.CPU23.RES:Rescheduling_interrupts
179.00 ± 55% +910.3% 1808 ±106% interrupts.CPU25.RES:Rescheduling_interrupts
485.50 ± 29% +8854.8% 43475 ± 37% interrupts.CPU26.RES:Rescheduling_interrupts
248.75 ± 38% +1876.2% 4915 ± 52% interrupts.CPU27.RES:Rescheduling_interrupts
116.75 ± 12% +297.9% 464.50 ± 11% interrupts.CPU29.RES:Rescheduling_interrupts
8061 ± 28% -54.5% 3669 ± 61% interrupts.CPU3.RES:Rescheduling_interrupts
3674 ± 6% -57.9% 1546 ± 95% interrupts.CPU31.NMI:Non-maskable_interrupts
3674 ± 6% -57.9% 1546 ± 95% interrupts.CPU31.PMI:Performance_monitoring_interrupts
79.25 ± 40% -56.8% 34.25 ± 97% interrupts.CPU50.RES:Rescheduling_interrupts
86.75 ± 56% -77.8% 19.25 ± 71% interrupts.CPU51.RES:Rescheduling_interrupts
669.25 ± 78% -73.4% 177.75 ±155% interrupts.CPU53.RES:Rescheduling_interrupts
498.25 ± 80% -95.1% 24.50 ± 37% interrupts.CPU55.RES:Rescheduling_interrupts
238.00 ± 58% -82.7% 41.25 ± 81% interrupts.CPU58.RES:Rescheduling_interrupts
278.50 ± 28% -92.8% 20.00 ± 52% interrupts.CPU59.RES:Rescheduling_interrupts
256.75 ± 47% -90.4% 24.75 ± 50% interrupts.CPU60.RES:Rescheduling_interrupts
225.25 ± 71% -91.2% 19.75 ± 27% interrupts.CPU61.RES:Rescheduling_interrupts
236.00 ± 92% -88.2% 27.75 ± 80% interrupts.CPU63.RES:Rescheduling_interrupts
171.25 ± 73% -91.2% 15.00 ± 22% interrupts.CPU64.RES:Rescheduling_interrupts
239.00 ± 36% -76.4% 56.50 ±130% interrupts.CPU65.RES:Rescheduling_interrupts
196.75 ± 51% -89.8% 20.00 ± 15% interrupts.CPU66.RES:Rescheduling_interrupts
196.50 ± 53% -78.1% 43.00 ±111% interrupts.CPU70.RES:Rescheduling_interrupts
191.00 ± 45% -90.2% 18.75 ± 45% interrupts.CPU71.RES:Rescheduling_interrupts
203.25 ± 81% -93.7% 12.75 ± 23% interrupts.CPU72.RES:Rescheduling_interrupts
103.25 ± 59% -78.9% 21.75 ± 24% interrupts.CPU73.RES:Rescheduling_interrupts
111.25 ± 79% -80.9% 21.25 ± 76% interrupts.CPU74.RES:Rescheduling_interrupts
93.50 ±106% -78.1% 20.50 ± 67% interrupts.CPU78.RES:Rescheduling_interrupts
400.50 ±155% -97.0% 12.00 ± 21% interrupts.CPU81.RES:Rescheduling_interrupts
347.50 ±156% -96.3% 13.00 ± 75% interrupts.CPU82.RES:Rescheduling_interrupts
285.00 ±149% -96.7% 9.50 ± 49% interrupts.CPU83.RES:Rescheduling_interrupts
265.00 ±136% -94.2% 15.25 ± 48% interrupts.CPU84.RES:Rescheduling_interrupts
153.50 ±145% -89.7% 15.75 ± 63% interrupts.CPU85.RES:Rescheduling_interrupts
167.00 ±101% -91.0% 15.00 ± 54% interrupts.CPU87.RES:Rescheduling_interrupts
81.50 ± 79% -87.4% 10.25 ± 78% interrupts.CPU88.RES:Rescheduling_interrupts
114.00 ± 92% -79.2% 23.75 ±103% interrupts.CPU98.RES:Rescheduling_interrupts
54567 ± 11% +54.4% 84277 ± 12% sched_debug.cfs_rq:/.exec_clock.avg
102845 ± 10% +41.8% 145786 ± 2% sched_debug.cfs_rq:/.exec_clock.max
12134 ± 15% +286.2% 46865 ± 36% sched_debug.cfs_rq:/.exec_clock.stddev
190887 ± 88% -83.1% 32327 ± 43% sched_debug.cfs_rq:/.load.max
26347 ± 56% -61.1% 10240 ± 19% sched_debug.cfs_rq:/.load.stddev
2704219 ± 11% +61.5% 4367328 ± 12% sched_debug.cfs_rq:/.min_vruntime.avg
4650229 ± 8% +60.8% 7478802 ± 3% sched_debug.cfs_rq:/.min_vruntime.max
546421 ± 15% +343.4% 2422973 ± 37% sched_debug.cfs_rq:/.min_vruntime.stddev
0.41 ± 4% +38.1% 0.56 ± 8% sched_debug.cfs_rq:/.nr_running.avg
1.65 ± 14% +40.8% 2.32 ± 21% sched_debug.cfs_rq:/.nr_spread_over.avg
1.07 ± 23% +49.1% 1.59 ± 20% sched_debug.cfs_rq:/.nr_spread_over.stddev
8.96 ± 5% +21.6% 10.90 ± 6% sched_debug.cfs_rq:/.runnable_load_avg.avg
12.33 ± 22% -28.3% 8.84 ± 6% sched_debug.cfs_rq:/.runnable_load_avg.stddev
188871 ± 89% -83.6% 30944 ± 47% sched_debug.cfs_rq:/.runnable_weight.max
26233 ± 57% -61.0% 10219 ± 20% sched_debug.cfs_rq:/.runnable_weight.stddev
3417321 ± 11% +37.6% 4701239 ± 20% sched_debug.cfs_rq:/.spread0.max
-122012 +1597.6% -2071325 sched_debug.cfs_rq:/.spread0.min
546405 ± 15% +343.4% 2422990 ± 37% sched_debug.cfs_rq:/.spread0.stddev
425.53 ± 5% +34.3% 571.59 ± 6% sched_debug.cfs_rq:/.util_avg.avg
172409 ± 17% -51.9% 83000 ± 34% sched_debug.cpu.avg_idle.min
15.80 ± 35% -55.0% 7.11 ± 31% sched_debug.cpu.clock.stddev
15.80 ± 35% -55.0% 7.11 ± 31% sched_debug.cpu.clock_task.stddev
11.99 ± 20% -22.2% 9.34 ± 4% sched_debug.cpu.cpu_load[0].stddev
25155 ± 9% -18.1% 20603 sched_debug.cpu.curr->pid.max
11971 ± 8% -16.4% 10012 ± 2% sched_debug.cpu.curr->pid.stddev
139384 ± 77% -46.1% 75152 ±117% sched_debug.cpu.load.max
110514 ± 11% +33.6% 147689 ± 4% sched_debug.cpu.nr_load_updates.max
9429 ± 13% +240.3% 32086 ± 33% sched_debug.cpu.nr_load_updates.stddev
6703 ± 11% +31.8% 8832 ± 9% sched_debug.cpu.nr_switches.avg
42299 ± 4% +297.2% 168004 ± 44% sched_debug.cpu.nr_switches.max
6781 ± 7% +161.4% 17726 ± 37% sched_debug.cpu.nr_switches.stddev
16.50 ± 22% +35.1% 22.29 ± 9% sched_debug.cpu.nr_uninterruptible.max
5906 ± 12% +37.3% 8110 ± 8% sched_debug.cpu.sched_count.avg
41231 +301.8% 165649 ± 45% sched_debug.cpu.sched_count.max
6649 ± 4% +162.1% 17430 ± 38% sched_debug.cpu.sched_count.stddev
2794 ± 14% +37.8% 3849 ± 11% sched_debug.cpu.sched_goidle.avg
19770 ± 5% +318.5% 82741 ± 45% sched_debug.cpu.sched_goidle.max
3237 ± 7% +168.2% 8684 ± 39% sched_debug.cpu.sched_goidle.stddev
2707 ± 14% +42.0% 3845 ± 10% sched_debug.cpu.ttwu_count.avg
24144 ± 4% +222.3% 77807 ± 43% sched_debug.cpu.ttwu_count.max
3710 ± 6% +149.1% 9243 ± 35% sched_debug.cpu.ttwu_count.stddev
890.11 ± 12% +63.4% 1454 ± 3% sched_debug.cpu.ttwu_local.avg
3405 ± 20% +65.9% 5649 ± 9% sched_debug.cpu.ttwu_local.max
599.71 ± 10% +47.1% 881.99 ± 12% sched_debug.cpu.ttwu_local.stddev
101950 ± 5% -22.2% 79276 ± 13% softirqs.CPU0.SCHED
76082 ± 8% +65.3% 125732 ± 20% softirqs.CPU10.TIMER
87879 ± 13% -33.2% 58674 ± 30% softirqs.CPU102.TIMER
84916 ± 12% -28.8% 60445 ± 28% softirqs.CPU103.TIMER
79117 ± 7% +60.4% 126925 ± 19% softirqs.CPU11.TIMER
80478 ± 7% +57.9% 127096 ± 19% softirqs.CPU12.TIMER
79602 ± 8% +58.9% 126500 ± 19% softirqs.CPU13.TIMER
76463 ± 9% +63.9% 125308 ± 21% softirqs.CPU14.TIMER
71283 ± 15% +76.9% 126120 ± 21% softirqs.CPU15.TIMER
75177 ± 10% +71.9% 129197 ± 19% softirqs.CPU16.TIMER
75260 ± 13% +69.9% 127848 ± 20% softirqs.CPU17.TIMER
78227 ± 14% +49.7% 117133 ± 28% softirqs.CPU18.TIMER
76725 ± 15% +60.8% 123342 ± 24% softirqs.CPU19.TIMER
14186 ± 12% -60.0% 5675 ± 37% softirqs.CPU2.SCHED
78243 ± 12% +57.5% 123220 ± 23% softirqs.CPU20.TIMER
74069 ± 13% +66.7% 123496 ± 24% softirqs.CPU21.TIMER
72445 ± 8% +71.5% 124276 ± 23% softirqs.CPU24.TIMER
71429 ± 7% +69.8% 121306 ± 23% softirqs.CPU25.TIMER
6838 ± 11% +291.4% 26765 ± 31% softirqs.CPU26.SCHED
84914 ± 11% -44.7% 46972 ± 35% softirqs.CPU26.TIMER
85536 ± 11% -37.9% 53112 ± 36% softirqs.CPU27.TIMER
92319 ± 11% -39.7% 55695 ± 41% softirqs.CPU29.TIMER
11818 ± 11% -50.4% 5865 ± 27% softirqs.CPU3.SCHED
91562 ± 6% -37.7% 57040 ± 41% softirqs.CPU30.TIMER
94268 ± 6% -43.9% 52887 ± 45% softirqs.CPU31.TIMER
93396 ± 4% -40.8% 55292 ± 45% softirqs.CPU32.TIMER
6065 ± 24% +65.3% 10023 ± 19% softirqs.CPU34.SCHED
5558 ± 18% +49.6% 8313 ± 19% softirqs.CPU35.SCHED
5181 ± 19% +91.7% 9930 ± 23% softirqs.CPU36.SCHED
10638 ± 5% -45.1% 5843 ± 17% softirqs.CPU4.SCHED
82385 ± 7% -30.8% 57037 ± 20% softirqs.CPU42.TIMER
85276 ± 10% -42.1% 49344 ± 47% softirqs.CPU45.TIMER
90182 ± 11% -38.8% 55156 ± 54% softirqs.CPU48.TIMER
9407 ± 11% -33.4% 6268 ± 36% softirqs.CPU5.SCHED
86739 ± 5% +50.2% 130246 ± 16% softirqs.CPU5.TIMER
90646 ± 10% -40.1% 54319 ± 46% softirqs.CPU51.TIMER
8726 ± 21% -54.2% 3998 ± 40% softirqs.CPU55.SCHED
77984 ± 6% +59.3% 124232 ± 21% softirqs.CPU55.TIMER
8399 ± 18% -53.5% 3905 ± 25% softirqs.CPU57.SCHED
8031 ± 17% -39.5% 4859 ± 35% softirqs.CPU58.SCHED
77083 ± 15% +56.7% 120805 ± 18% softirqs.CPU58.TIMER
8371 ± 17% -50.9% 4107 ± 37% softirqs.CPU59.SCHED
80820 ± 10% +54.3% 124710 ± 21% softirqs.CPU59.TIMER
84277 ± 10% +51.3% 127533 ± 11% softirqs.CPU6.TIMER
77675 ± 7% +57.8% 122547 ± 19% softirqs.CPU60.TIMER
79746 ± 7% +56.4% 124719 ± 19% softirqs.CPU61.TIMER
75401 ± 16% +64.4% 123975 ± 21% softirqs.CPU62.TIMER
80622 ± 9% +54.4% 124509 ± 20% softirqs.CPU63.TIMER
78498 ± 7% +57.4% 123570 ± 22% softirqs.CPU64.TIMER
78854 ± 11% +58.7% 125157 ± 19% softirqs.CPU65.TIMER
78398 ± 10% +56.5% 122679 ± 22% softirqs.CPU66.TIMER
70518 ± 14% +77.2% 124944 ± 21% softirqs.CPU67.TIMER
76676 ± 18% +65.8% 127094 ± 20% softirqs.CPU68.TIMER
79804 ± 13% +59.8% 127489 ± 19% softirqs.CPU69.TIMER
87607 ± 7% +47.0% 128769 ± 16% softirqs.CPU7.TIMER
72877 ± 11% +64.6% 119922 ± 24% softirqs.CPU74.TIMER
72901 ± 8% +69.5% 123554 ± 22% softirqs.CPU77.TIMER
89804 ± 9% -49.5% 45318 ± 45% softirqs.CPU78.TIMER
83519 ± 4% +44.1% 120383 ± 16% softirqs.CPU8.TIMER
88956 ± 14% -37.8% 55317 ± 39% softirqs.CPU81.TIMER
85839 ± 10% -40.4% 51145 ± 51% softirqs.CPU83.TIMER
5980 ± 25% +65.2% 9881 ± 13% softirqs.CPU86.SCHED
5232 ± 15% +68.0% 8790 ± 17% softirqs.CPU87.SCHED
79808 ± 6% +57.2% 125474 ± 20% softirqs.CPU9.TIMER
19.41 ± 4% -16.0 3.44 ± 9% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
21.19 ± 2% -15.3 5.84 ± 17% perf-profile.calltrace.cycles-pp.secondary_startup_64
20.82 ± 2% -15.2 5.66 ± 19% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
20.82 ± 2% -15.2 5.66 ± 19% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
20.77 ± 2% -15.2 5.61 ± 19% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
20.81 ± 2% -15.2 5.66 ± 19% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
1.23 ± 2% +0.2 1.46 ± 2% perf-profile.calltrace.cycles-pp.tlb_flush_mmu_free.unmap_page_range.unmap_vmas.exit_mmap.mmput
1.22 ± 2% +0.2 1.45 ± 2% perf-profile.calltrace.cycles-pp.release_pages.tlb_flush_mmu_free.unmap_page_range.unmap_vmas.exit_mmap
0.59 ± 5% +0.4 0.95 ± 2% perf-profile.calltrace.cycles-pp.___might_sleep
1.39 +0.5 1.89 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
1.39 +0.5 1.89 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.38 +0.5 1.88 perf-profile.calltrace.cycles-pp.__wake_up_parent.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.38 +0.5 1.88 perf-profile.calltrace.cycles-pp.do_group_exit.__wake_up_parent.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.38 +0.5 1.88 perf-profile.calltrace.cycles-pp.do_exit.do_group_exit.__wake_up_parent.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.37 +0.5 1.87 perf-profile.calltrace.cycles-pp.exit_mmap.mmput.do_exit.do_group_exit.__wake_up_parent
1.37 +0.5 1.88 perf-profile.calltrace.cycles-pp.mmput.do_exit.do_group_exit.__wake_up_parent.do_syscall_64
1.33 +0.5 1.84 perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.exit_mmap.mmput.do_exit
1.34 +0.5 1.85 perf-profile.calltrace.cycles-pp.unmap_vmas.exit_mmap.mmput.do_exit.do_group_exit
1.45 ± 2% +0.5 2.00 perf-profile.calltrace.cycles-pp._cond_resched.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault
0.93 ± 2% +0.6 1.50 ± 3% perf-profile.calltrace.cycles-pp.clear_huge_page
0.28 ±173% +1.0 1.25 ± 44% perf-profile.calltrace.cycles-pp.poll_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
1.71 ± 2% +1.0 2.73 perf-profile.calltrace.cycles-pp.___might_sleep.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault
0.70 ± 9% +1.1 1.81 ± 2% perf-profile.calltrace.cycles-pp.clear_page_erms
0.83 ± 7% +1.8 2.62 ± 6% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.get_page_from_freelist.__alloc_pages_nodemask.do_huge_pmd_anonymous_page
0.83 ± 7% +1.8 2.63 ± 6% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.get_page_from_freelist.__alloc_pages_nodemask.do_huge_pmd_anonymous_page.__handle_mm_fault
2.51 ± 2% +2.2 4.69 ± 4% perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault
2.52 ± 2% +2.2 4.71 ± 4% perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault.__do_page_fault
61.29 +8.6 69.94 perf-profile.calltrace.cycles-pp.clear_page_erms.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault
69.75 +10.3 80.06 perf-profile.calltrace.cycles-pp.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault.__do_page_fault
72.75 +12.6 85.32 perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
72.78 +12.6 85.35 perf-profile.calltrace.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
72.64 +12.6 85.23 perf-profile.calltrace.cycles-pp.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
72.84 +12.6 85.45 perf-profile.calltrace.cycles-pp.__do_page_fault.do_page_fault.page_fault
72.84 +12.6 85.45 perf-profile.calltrace.cycles-pp.do_page_fault.page_fault
72.84 +12.6 85.47 perf-profile.calltrace.cycles-pp.page_fault
19.73 ± 3% -16.2 3.58 ± 10% perf-profile.children.cycles-pp.intel_idle
21.19 ± 2% -15.3 5.84 ± 17% perf-profile.children.cycles-pp.secondary_startup_64
21.19 ± 2% -15.3 5.84 ± 17% perf-profile.children.cycles-pp.cpu_startup_entry
21.19 ± 2% -15.3 5.84 ± 17% perf-profile.children.cycles-pp.do_idle
21.16 ± 2% -15.3 5.82 ± 17% perf-profile.children.cycles-pp.cpuidle_enter_state
20.82 ± 2% -15.2 5.66 ± 19% perf-profile.children.cycles-pp.start_secondary
0.37 ± 18% -0.2 0.18 ± 74% perf-profile.children.cycles-pp.start_kernel
0.10 ± 15% -0.1 0.04 ±107% perf-profile.children.cycles-pp.read
0.18 ± 6% -0.0 0.14 ± 5% perf-profile.children.cycles-pp.native_irq_return_iret
0.07 ± 7% +0.0 0.08 perf-profile.children.cycles-pp.__list_del_entry_valid
0.08 ± 5% +0.0 0.11 ± 4% perf-profile.children.cycles-pp.__lru_cache_add
0.08 ± 5% +0.0 0.11 ± 4% perf-profile.children.cycles-pp.pagevec_lru_move_fn
0.07 ± 13% +0.0 0.09 ± 11% perf-profile.children.cycles-pp.cmd_stat
0.07 ± 13% +0.0 0.10 ± 9% perf-profile.children.cycles-pp.__run_perf_stat
0.07 ± 13% +0.0 0.10 ± 9% perf-profile.children.cycles-pp.process_interval
0.07 ± 13% +0.0 0.10 ± 9% perf-profile.children.cycles-pp.read_counters
0.07 ± 13% +0.0 0.10 ± 9% perf-profile.children.cycles-pp.perf_evsel__read_counter
0.07 ± 13% +0.0 0.10 ± 9% perf-profile.children.cycles-pp.__read_nocancel
0.06 ± 15% +0.0 0.09 ± 7% perf-profile.children.cycles-pp.perf_event_read
0.06 ± 20% +0.0 0.10 ± 9% perf-profile.children.cycles-pp.perf_read
0.06 ± 15% +0.0 0.09 ± 4% perf-profile.children.cycles-pp.smp_call_function_single
0.03 ±100% +0.0 0.07 ± 7% perf-profile.children.cycles-pp.__pagevec_lru_add_fn
0.09 ± 9% +0.0 0.14 ± 3% perf-profile.children.cycles-pp.pte_alloc_one
0.00 +0.1 0.05 perf-profile.children.cycles-pp.___perf_sw_event
0.06 ± 20% +0.1 0.11 ± 25% perf-profile.children.cycles-pp.ktime_get
0.00 +0.1 0.06 ± 7% perf-profile.children.cycles-pp.__perf_sw_event
0.01 ±173% +0.1 0.08 ± 11% perf-profile.children.cycles-pp.__perf_event_read_value
0.13 ± 6% +0.1 0.23 ± 5% perf-profile.children.cycles-pp.__put_compound_page
0.40 +0.1 0.50 perf-profile.children.cycles-pp.rcu_all_qs
0.11 ± 7% +0.1 0.22 ± 5% perf-profile.children.cycles-pp.__page_cache_release
0.03 ±100% +0.1 0.17 ± 4% perf-profile.children.cycles-pp.free_transhuge_page
0.22 ± 6% +0.2 0.42 perf-profile.children.cycles-pp.free_one_page
1.23 ± 2% +0.2 1.46 ± 2% perf-profile.children.cycles-pp.tlb_flush_mmu_free
1.23 ± 2% +0.2 1.47 ± 2% perf-profile.children.cycles-pp.release_pages
0.06 ± 11% +0.3 0.33 ± 11% perf-profile.children.cycles-pp.deferred_split_huge_page
0.09 ± 4% +0.3 0.37 ± 9% perf-profile.children.cycles-pp.zap_huge_pmd
1.38 +0.5 1.88 perf-profile.children.cycles-pp.__wake_up_parent
1.38 +0.5 1.88 perf-profile.children.cycles-pp.do_group_exit
1.38 +0.5 1.88 perf-profile.children.cycles-pp.do_exit
1.37 +0.5 1.88 perf-profile.children.cycles-pp.mmput
1.37 +0.5 1.88 perf-profile.children.cycles-pp.exit_mmap
1.33 ± 2% +0.5 1.84 perf-profile.children.cycles-pp.unmap_page_range
1.34 +0.5 1.85 perf-profile.children.cycles-pp.unmap_vmas
1.75 +0.6 2.37 perf-profile.children.cycles-pp._cond_resched
0.54 ± 63% +0.7 1.25 ± 44% perf-profile.children.cycles-pp.poll_idle
2.31 ± 3% +1.4 3.69 perf-profile.children.cycles-pp.___might_sleep
2.61 ± 2% +2.2 4.83 ± 4% perf-profile.children.cycles-pp.get_page_from_freelist
2.62 ± 2% +2.2 4.85 ± 4% perf-profile.children.cycles-pp.__alloc_pages_nodemask
1.06 ± 6% +2.3 3.36 ± 4% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
1.69 ± 11% +2.5 4.17 ± 10% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
62.42 +9.8 72.18 perf-profile.children.cycles-pp.clear_page_erms
70.69 +10.9 81.56 perf-profile.children.cycles-pp.clear_huge_page
72.77 +12.6 85.33 perf-profile.children.cycles-pp.__handle_mm_fault
72.79 +12.6 85.36 perf-profile.children.cycles-pp.handle_mm_fault
72.64 +12.6 85.23 perf-profile.children.cycles-pp.do_huge_pmd_anonymous_page
72.86 +12.6 85.47 perf-profile.children.cycles-pp.__do_page_fault
72.86 +12.6 85.47 perf-profile.children.cycles-pp.do_page_fault
72.86 +12.6 85.48 perf-profile.children.cycles-pp.page_fault
19.73 ± 3% -16.2 3.58 ± 10% perf-profile.self.cycles-pp.intel_idle
0.78 ± 3% -0.2 0.60 ± 6% perf-profile.self.cycles-pp.__free_pages_ok
0.18 ± 6% -0.0 0.14 ± 5% perf-profile.self.cycles-pp.native_irq_return_iret
0.07 ± 7% +0.0 0.08 perf-profile.self.cycles-pp.__list_del_entry_valid
0.06 ± 15% +0.0 0.09 ± 4% perf-profile.self.cycles-pp.smp_call_function_single
0.03 ±102% +0.1 0.10 ± 29% perf-profile.self.cycles-pp.ktime_get
0.39 +0.1 0.49 perf-profile.self.cycles-pp.rcu_all_qs
1.62 +0.3 1.97 ± 2% perf-profile.self.cycles-pp.get_page_from_freelist
1.50 +0.6 2.08 perf-profile.self.cycles-pp._cond_resched
6.08 +0.7 6.78 ± 2% perf-profile.self.cycles-pp.clear_huge_page
0.54 ± 64% +0.7 1.25 ± 44% perf-profile.self.cycles-pp.poll_idle
2.29 ± 3% +1.4 3.66 perf-profile.self.cycles-pp.___might_sleep
1.69 ± 11% +2.5 4.17 ± 10% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
61.81 +9.9 71.69 perf-profile.self.cycles-pp.clear_page_erms
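One caveat when comparing profiles: the perf-profile deltas above (e.g. the
+12.6 against page_fault) are absolute percentage-point differences in cycle
share, not relative changes. A one-line sanity check against the calltrace
lines above:

    # page_fault cycle share went from 72.84% to 85.47%; the printed
    # delta is the absolute percentage-point difference
    print("%+.1f" % (85.47 - 72.84))   # -> +12.6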
***************************************************************************************************
lkp-bdw-ep4: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
=========================================================================================
array_size/compiler/cpufreq_governor/kconfig/nr_threads/omp/rootfs/tbox_group/testcase:
50000000/gcc-7/performance/x86_64-rhel-7.2/50%/true/debian-x86_64-2018-04-03.cgz/lkp-bdw-ep4/stream
commit:
24d0c1d6e6 ("sched/fair: Do not migrate due to a sync wakeup on exit")
2c83362734 ("sched/fair: Consider SD_NUMA when selecting the most idle group to schedule on")
24d0c1d6e65f635b 2c83362734dad8e48ccc0710b5c
---------------- ---------------------------
%stddev %change %stddev
\ | \
70867 ± 16% -28.8% 50456 ± 6% stream.add_bandwidth_MBps
11358 ± 9% +13.4% 12879 stream.time.minor_page_faults
263.61 ± 16% +36.1% 358.87 stream.time.user_time
9477 ± 14% -22.8% 7312 ± 6% stream.time.voluntary_context_switches
34.65 -2.3% 33.85 ± 2% boot-time.boot
14973 ± 5% -7.9% 13783 ± 6% softirqs.CPU6.TIMER
67.50 -3.7% 65.00 vmstat.cpu.id
31.50 ± 2% +7.9% 34.00 ± 3% vmstat.cpu.us
3619 ± 15% -29.4% 2555 ± 7% vmstat.system.cs
2863126 ± 6% -15.5% 2419307 ± 2% cpuidle.C3.time
8360 ± 6% -15.7% 7050 ± 2% cpuidle.C3.usage
4.188e+08 ± 14% +32.5% 5.548e+08 ± 2% cpuidle.C6.time
428117 ± 14% +31.9% 564534 ± 2% cpuidle.C6.usage
3694 ± 23% +130.5% 8515 ± 73% proc-vmstat.numa_hint_faults
98740 ± 2% +8.8% 107468 ± 3% proc-vmstat.numa_hit
81603 ± 2% +10.7% 90367 ± 4% proc-vmstat.numa_local
48999 ± 5% +25.0% 61261 ± 11% proc-vmstat.pgfault
7862 ± 6% -14.0% 6758 ± 2% turbostat.C3
0.38 ± 17% -0.1 0.24 ± 4% turbostat.C3%
428197 ± 14% +32.2% 566126 ± 2% turbostat.C6
19.93 ± 28% +53.7% 30.64 ± 5% turbostat.CPU%c6
960324 ± 12% +23.7% 1187618 turbostat.IRQ
2392 ± 3% -26.4% 1760 ± 8% slabinfo.eventpoll_epi.active_objs
2392 ± 3% -26.4% 1760 ± 8% slabinfo.eventpoll_epi.num_objs
4186 ± 3% -26.4% 3080 ± 8% slabinfo.eventpoll_pwq.active_objs
4186 ± 3% -26.4% 3080 ± 8% slabinfo.eventpoll_pwq.num_objs
650.00 ± 7% -13.2% 564.50 ± 3% slabinfo.file_lock_cache.active_objs
650.00 ± 7% -13.2% 564.50 ± 3% slabinfo.file_lock_cache.num_objs
28794 ± 14% -24.0% 21889 ± 5% sched_debug.cfs_rq:/.min_vruntime.max
4667 ± 33% -60.6% 1837 ± 65% sched_debug.cfs_rq:/.min_vruntime.min
3081 ± 24% -73.8% 808.08 ±142% sched_debug.cfs_rq:/.spread0.avg
-2493 +70.7% -4255 sched_debug.cfs_rq:/.spread0.min
295.31 ± 3% -16.9% 245.52 ± 11% sched_debug.cfs_rq:/.util_avg.stddev
0.00 ± 8% +19.3% 0.00 ± 7% sched_debug.cpu.next_balance.stddev
240.50 ± 4% -14.9% 204.75 ± 11% sched_debug.cpu.nr_switches.min
2.62 ± 3% +32.0% 3.46 ± 7% sched_debug.cpu.nr_uninterruptible.stddev
3.769e+10 ± 5% -7.8% 3.476e+10 ± 4% perf-stat.branch-instructions
9.742e+09 +6.0% 1.032e+10 ± 4% perf-stat.cache-references
28360 ± 6% -14.5% 24239 ± 8% perf-stat.context-switches
4.31 ± 20% +40.5% 6.06 perf-stat.cpi
8.342e+11 ± 17% +34.0% 1.118e+12 ± 4% perf-stat.cpu-cycles
451.50 ± 10% -27.8% 326.00 ± 8% perf-stat.cpu-migrations
0.02 ± 19% +0.0 0.03 ± 8% perf-stat.dTLB-load-miss-rate%
9450196 ± 21% +41.5% 13368418 ± 4% perf-stat.dTLB-load-misses
0.24 ± 17% -31.3% 0.17 perf-stat.ipc
47611 ± 7% +22.9% 58520 ± 10% perf-stat.minor-faults
47365 ± 6% +23.6% 58532 ± 10% perf-stat.page-faults
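
Note that perf-stat.cpi and perf-stat.ipc are reciprocals by definition, and the two
rows above are mutually consistent: 1/4.31 ~ 0.23 against the reported 0.24, and
1/6.06 ~ 0.165 against the reported 0.17, so both rows describe the same
per-instruction slowdown rather than two independent effects.
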
1.19 ±173% +5.6 6.76 ± 54% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.read
1.19 ±173% +5.6 6.76 ± 54% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
1.19 ±173% +5.6 6.76 ± 54% perf-profile.calltrace.cycles-pp.sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
1.19 ±173% +5.6 6.76 ± 54% perf-profile.calltrace.cycles-pp.vfs_read.sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
0.00 +6.6 6.55 ± 54% perf-profile.calltrace.cycles-pp.kstat_irqs_cpu.show_interrupts.seq_read.proc_reg_read.__vfs_read
1.19 ±173% +6.7 7.84 ± 43% perf-profile.calltrace.cycles-pp.read
1.19 ±173% +10.0 11.20 ± 56% perf-profile.calltrace.cycles-pp.sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.19 ±173% +10.0 11.20 ± 56% perf-profile.calltrace.cycles-pp.vfs_read.sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.78 ±173% +10.2 11.99 ± 52% perf-profile.calltrace.cycles-pp.show_interrupts.seq_read.proc_reg_read.__vfs_read.vfs_read
2.38 ±173% +14.5 16.87 ± 38% perf-profile.calltrace.cycles-pp.proc_reg_read.__vfs_read.vfs_read.sys_read.do_syscall_64
2.38 ±173% +14.5 16.87 ± 38% perf-profile.calltrace.cycles-pp.seq_read.proc_reg_read.__vfs_read.vfs_read.sys_read
2.38 ±173% +15.6 17.96 ± 38% perf-profile.calltrace.cycles-pp.__vfs_read.vfs_read.sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +6.5 6.55 ± 54% perf-profile.children.cycles-pp.kstat_irqs_cpu
1.19 ±173% +6.7 7.84 ± 43% perf-profile.children.cycles-pp.read
1.78 ±173% +10.2 11.98 ± 52% perf-profile.children.cycles-pp.show_interrupts
2.38 ±173% +14.5 16.87 ± 38% perf-profile.children.cycles-pp.proc_reg_read
2.38 ±173% +15.6 17.95 ± 38% perf-profile.children.cycles-pp.sys_read
2.38 ±173% +15.6 17.95 ± 38% perf-profile.children.cycles-pp.vfs_read
2.38 ±173% +15.6 17.95 ± 38% perf-profile.children.cycles-pp.__vfs_read
2.38 ±173% +15.6 17.95 ± 38% perf-profile.children.cycles-pp.seq_read
17.64 ± 77% +28.9 46.59 ± 26% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
17.64 ± 77% +28.9 46.59 ± 26% perf-profile.children.cycles-pp.do_syscall_64
8103 ± 15% +30.9% 10608 interrupts.CPU0.LOC:Local_timer_interrupts
8095 ± 15% +30.5% 10561 interrupts.CPU1.LOC:Local_timer_interrupts
8078 ± 15% +31.3% 10610 interrupts.CPU10.LOC:Local_timer_interrupts
8026 ± 14% +32.0% 10598 interrupts.CPU11.LOC:Local_timer_interrupts
8103 ± 15% +30.9% 10603 interrupts.CPU12.LOC:Local_timer_interrupts
8070 ± 15% +31.3% 10593 interrupts.CPU13.LOC:Local_timer_interrupts
8068 ± 15% +32.4% 10678 ± 2% interrupts.CPU14.LOC:Local_timer_interrupts
8067 ± 15% +31.5% 10609 interrupts.CPU15.LOC:Local_timer_interrupts
8103 ± 15% +30.7% 10588 interrupts.CPU16.LOC:Local_timer_interrupts
8192 ± 16% +29.3% 10596 interrupts.CPU17.LOC:Local_timer_interrupts
8099 ± 15% +31.2% 10628 interrupts.CPU18.LOC:Local_timer_interrupts
8061 ± 15% +31.6% 10604 interrupts.CPU19.LOC:Local_timer_interrupts
8114 ± 15% +30.8% 10610 interrupts.CPU2.LOC:Local_timer_interrupts
8086 ± 15% +31.2% 10610 interrupts.CPU20.LOC:Local_timer_interrupts
8088 ± 15% +31.4% 10629 interrupts.CPU21.LOC:Local_timer_interrupts
8064 ± 16% +31.3% 10591 interrupts.CPU22.LOC:Local_timer_interrupts
8069 ± 15% +30.5% 10530 interrupts.CPU23.LOC:Local_timer_interrupts
8079 ± 15% +31.3% 10611 interrupts.CPU24.LOC:Local_timer_interrupts
8089 ± 15% +31.0% 10598 interrupts.CPU25.LOC:Local_timer_interrupts
8114 ± 15% +30.6% 10597 interrupts.CPU26.LOC:Local_timer_interrupts
8093 ± 15% +31.3% 10630 interrupts.CPU27.LOC:Local_timer_interrupts
8092 ± 15% +30.9% 10589 interrupts.CPU28.LOC:Local_timer_interrupts
8084 ± 15% +31.1% 10599 interrupts.CPU29.LOC:Local_timer_interrupts
8083 ± 15% +30.5% 10547 interrupts.CPU3.LOC:Local_timer_interrupts
8096 ± 15% +31.2% 10624 interrupts.CPU30.LOC:Local_timer_interrupts
8154 ± 15% +31.0% 10680 ± 2% interrupts.CPU31.LOC:Local_timer_interrupts
8129 ± 15% +30.7% 10628 interrupts.CPU32.LOC:Local_timer_interrupts
8096 ± 15% +31.9% 10676 ± 2% interrupts.CPU33.LOC:Local_timer_interrupts
8119 ± 15% +30.8% 10620 interrupts.CPU34.LOC:Local_timer_interrupts
8085 ± 15% +31.2% 10612 interrupts.CPU35.LOC:Local_timer_interrupts
8083 ± 15% +31.2% 10602 interrupts.CPU36.LOC:Local_timer_interrupts
819.25 ± 35% +162.8% 2153 ± 47% interrupts.CPU37.CAL:Function_call_interrupts
8100 ± 15% +31.5% 10655 interrupts.CPU37.LOC:Local_timer_interrupts
32.75 ±173% +4387.0% 1469 ± 71% interrupts.CPU37.TLB:TLB_shootdowns
8085 ± 15% +31.2% 10612 interrupts.CPU38.LOC:Local_timer_interrupts
985.25 ± 24% +124.5% 2211 ± 52% interrupts.CPU39.CAL:Function_call_interrupts
8154 ± 15% +30.3% 10625 interrupts.CPU39.LOC:Local_timer_interrupts
162.75 ±110% +852.4% 1550 ± 74% interrupts.CPU39.TLB:TLB_shootdowns
8171 ± 14% +29.8% 10603 interrupts.CPU4.LOC:Local_timer_interrupts
8088 ± 15% +31.2% 10609 interrupts.CPU40.LOC:Local_timer_interrupts
8091 ± 15% +31.6% 10652 ± 2% interrupts.CPU41.LOC:Local_timer_interrupts
8097 ± 15% +30.9% 10601 interrupts.CPU42.LOC:Local_timer_interrupts
8105 ± 15% +30.7% 10595 ± 2% interrupts.CPU43.LOC:Local_timer_interrupts
8069 ± 15% +31.3% 10593 interrupts.CPU44.LOC:Local_timer_interrupts
8075 ± 15% +31.7% 10631 interrupts.CPU45.LOC:Local_timer_interrupts
8070 ± 15% +31.3% 10596 interrupts.CPU46.LOC:Local_timer_interrupts
8093 ± 15% +31.2% 10617 interrupts.CPU47.LOC:Local_timer_interrupts
8085 ± 15% +30.9% 10586 interrupts.CPU48.LOC:Local_timer_interrupts
8093 ± 15% +31.8% 10668 interrupts.CPU49.LOC:Local_timer_interrupts
2611 ± 26% -41.5% 1526 ± 15% interrupts.CPU5.CAL:Function_call_interrupts
8078 ± 15% +31.1% 10592 interrupts.CPU5.LOC:Local_timer_interrupts
1801 ± 23% -51.2% 878.50 ± 27% interrupts.CPU5.TLB:TLB_shootdowns
8082 ± 15% +31.1% 10596 interrupts.CPU50.LOC:Local_timer_interrupts
8093 ± 15% +31.3% 10628 interrupts.CPU51.LOC:Local_timer_interrupts
8075 ± 15% +31.3% 10600 interrupts.CPU52.LOC:Local_timer_interrupts
8057 ± 15% +31.6% 10602 interrupts.CPU53.LOC:Local_timer_interrupts
8079 ± 15% +30.8% 10566 interrupts.CPU54.LOC:Local_timer_interrupts
8068 ± 15% +31.3% 10596 interrupts.CPU55.LOC:Local_timer_interrupts
8085 ± 15% +31.0% 10593 interrupts.CPU56.LOC:Local_timer_interrupts
8059 ± 15% +31.6% 10605 interrupts.CPU57.LOC:Local_timer_interrupts
8069 ± 15% +31.3% 10599 interrupts.CPU58.LOC:Local_timer_interrupts
8065 ± 15% +31.5% 10608 interrupts.CPU59.LOC:Local_timer_interrupts
8077 ± 15% +31.2% 10600 interrupts.CPU6.LOC:Local_timer_interrupts
8079 ± 15% +31.3% 10607 interrupts.CPU60.LOC:Local_timer_interrupts
8155 ± 15% +30.3% 10628 interrupts.CPU61.LOC:Local_timer_interrupts
8090 ± 15% +31.4% 10633 interrupts.CPU62.LOC:Local_timer_interrupts
8127 ± 16% +30.5% 10603 interrupts.CPU63.LOC:Local_timer_interrupts
8091 ± 15% +31.8% 10664 interrupts.CPU64.LOC:Local_timer_interrupts
8090 ± 16% +31.1% 10604 interrupts.CPU65.LOC:Local_timer_interrupts
8078 ± 15% +32.6% 10714 interrupts.CPU66.LOC:Local_timer_interrupts
8090 ± 15% +31.2% 10611 interrupts.CPU67.LOC:Local_timer_interrupts
8087 ± 15% +31.1% 10606 interrupts.CPU68.LOC:Local_timer_interrupts
8059 ± 15% +31.4% 10588 interrupts.CPU69.LOC:Local_timer_interrupts
7999 ± 16% +32.6% 10610 interrupts.CPU7.LOC:Local_timer_interrupts
8069 ± 15% +31.7% 10625 interrupts.CPU70.LOC:Local_timer_interrupts
8074 ± 15% +31.1% 10586 interrupts.CPU71.LOC:Local_timer_interrupts
8076 ± 15% +31.3% 10602 interrupts.CPU72.LOC:Local_timer_interrupts
8112 ± 16% +30.9% 10618 interrupts.CPU73.LOC:Local_timer_interrupts
8075 ± 15% +31.7% 10635 interrupts.CPU74.LOC:Local_timer_interrupts
8075 ± 15% +31.2% 10595 interrupts.CPU75.LOC:Local_timer_interrupts
8077 ± 15% +31.2% 10598 interrupts.CPU76.LOC:Local_timer_interrupts
8191 ± 14% +30.4% 10682 interrupts.CPU77.LOC:Local_timer_interrupts
8025 ± 16% +32.2% 10609 interrupts.CPU78.LOC:Local_timer_interrupts
8086 ± 15% +31.3% 10615 interrupts.CPU79.LOC:Local_timer_interrupts
8072 ± 15% +31.3% 10600 interrupts.CPU8.LOC:Local_timer_interrupts
3.00 ±137% +1808.3% 57.25 ± 87% interrupts.CPU8.RES:Rescheduling_interrupts
8085 ± 15% +32.1% 10681 interrupts.CPU80.LOC:Local_timer_interrupts
8088 ± 15% +31.2% 10608 interrupts.CPU81.LOC:Local_timer_interrupts
8082 ± 15% +31.5% 10625 interrupts.CPU82.LOC:Local_timer_interrupts
8092 ± 15% +31.2% 10620 interrupts.CPU83.LOC:Local_timer_interrupts
8110 ± 15% +30.7% 10600 interrupts.CPU84.LOC:Local_timer_interrupts
8082 ± 15% +31.4% 10623 interrupts.CPU85.LOC:Local_timer_interrupts
8088 ± 15% +31.0% 10597 interrupts.CPU86.LOC:Local_timer_interrupts
8081 ± 15% +31.2% 10600 interrupts.CPU87.LOC:Local_timer_interrupts
8082 ± 15% +31.1% 10598 interrupts.CPU9.LOC:Local_timer_interrupts
207.25 ±170% +301.3% 831.75 ± 52% interrupts.CPU9.TLB:TLB_shootdowns
711846 ± 15% +31.2% 933907 interrupts.LOC:Local_timer_interrupts
5731 ± 21% +26.1% 7229 ± 16% interrupts.RES:Rescheduling_interrupts
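
For reference, the regressed stream.add_bandwidth_MBps above is measured from the
STREAM "add" kernel, which per the job parameters runs over 50000000-element double
arrays with OpenMP (omp=true). A minimal sketch in C, assuming those parameters
(illustrative only, not the benchmark source; build with gcc -O2 -fopenmp):

#include <stddef.h>

#define N 50000000UL	/* mirrors the array_size=50000000 job parameter */

static double a[N], b[N], c[N];

void stream_add(void)
{
	size_t i;

#pragma omp parallel for
	for (i = 0; i < N; i++)		/* two streaming loads, one store */
		c[i] = a[i] + b[i];
}

STREAM charges the add kernel 3 * N * sizeof(double) bytes per pass (two arrays in,
one out), so the -28.8% add_bandwidth figure tracks achieved memory bandwidth rather
than compute throughput.
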
***************************************************************************************************
lkp-bdw-ep4: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
=========================================================================================
array_size/compiler/cpufreq_governor/kconfig/nr_threads/omp/rootfs/tbox_group/testcase:
10000000/gcc-7/performance/x86_64-rhel-7.2/50%/true/debian-x86_64-2018-04-03.cgz/lkp-bdw-ep4/stream
commit:
24d0c1d6e6 ("sched/fair: Do not migrate due to a sync wakeup on exit")
2c83362734 ("sched/fair: Consider SD_NUMA when selecting the most idle group to schedule on")
24d0c1d6e65f635b 2c83362734dad8e48ccc0710b5c
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
:4 25% 1:4 dmesg.WARNING:at#for_ip_swapgs_restore_regs_and_return_to_usermode/0x
:4 25% 1:4 dmesg.WARNING:stack_recursion
%stddev %change %stddev
\ | \
68287 -30.6% 47413 ± 3% stream.add_bandwidth_MBps
74377 ± 3% -38.8% 45504 ± 3% stream.copy_bandwidth_MBps
71751 -39.8% 43172 ± 4% stream.scale_bandwidth_MBps
51.82 +51.5% 78.49 ± 3% stream.time.user_time
221.50 ± 45% +431.6% 1177 ± 35% stream.time.voluntary_context_switches
74941 -34.4% 49156 ± 3% stream.triad_bandwidth_MBps
1792 ± 9% +28.3% 2299 ± 14% numa-meminfo.node0.PageTables
447.25 ± 9% +31.8% 589.50 ± 15% numa-vmstat.node0.nr_page_table_pages
983.50 +4.8% 1031 ± 2% proc-vmstat.nr_page_table_pages
18576 ± 24% -29.5% 13093 ± 19% softirqs.CPU44.TIMER
6562 ± 14% -31.8% 4477 ± 44% turbostat.C1
35.21 ± 11% -31.6% 24.09 ± 9% turbostat.CPU%c1
2376 ± 12% -27.3% 1728 ± 10% slabinfo.eventpoll_epi.active_objs
2376 ± 12% -27.3% 1728 ± 10% slabinfo.eventpoll_epi.num_objs
4158 ± 12% -27.3% 3024 ± 10% slabinfo.eventpoll_pwq.active_objs
4158 ± 12% -27.3% 3024 ± 10% slabinfo.eventpoll_pwq.num_objs
3904 ± 5% -9.2% 3544 ± 3% slabinfo.skbuff_head_cache.active_objs
1475 ± 4% -13.1% 1282 ± 10% slabinfo.task_group.active_objs
1475 ± 4% -13.1% 1282 ± 10% slabinfo.task_group.num_objs
40.50 ± 50% +210.5% 125.75 ± 52% interrupts.CPU1.RES:Rescheduling_interrupts
1.00 +7075.0% 71.75 ±141% interrupts.CPU22.RES:Rescheduling_interrupts
150.25 ± 12% -57.4% 64.00 ±100% interrupts.CPU27.TLB:TLB_shootdowns
166.50 ± 19% -61.6% 64.00 ±100% interrupts.CPU28.TLB:TLB_shootdowns
160.50 ± 21% -60.1% 64.00 ±100% interrupts.CPU40.TLB:TLB_shootdowns
174.25 ± 19% -62.7% 65.00 ±100% interrupts.CPU74.TLB:TLB_shootdowns
149.25 ± 5% -57.1% 64.00 ±100% interrupts.CPU82.TLB:TLB_shootdowns
7124 ± 14% -29.4% 5030 ± 3% interrupts.TLB:TLB_shootdowns
4984 ± 45% -78.7% 1060 ± 46% sched_debug.cfs_rq:/.min_vruntime.min
25.89 ± 57% -66.2% 8.74 ±111% sched_debug.cfs_rq:/.removed.load_avg.avg
152.68 ± 29% -57.6% 64.71 ±103% sched_debug.cfs_rq:/.removed.load_avg.stddev
1199 ± 56% -66.4% 402.43 ±111% sched_debug.cfs_rq:/.removed.runnable_sum.avg
7076 ± 29% -57.9% 2981 ±103% sched_debug.cfs_rq:/.removed.runnable_sum.stddev
2404 ± 88% -368.5% -6456 sched_debug.cfs_rq:/.spread0.avg
20903 ± 15% -52.0% 10029 ± 29% sched_debug.cfs_rq:/.spread0.max
-2008 +507.1% -12193 sched_debug.cfs_rq:/.spread0.min
225.00 ± 8% -16.7% 187.50 ± 15% sched_debug.cpu.nr_switches.min
1.356e+10 ± 9% -28.4% 9.705e+09 ± 17% perf-stat.branch-instructions
3.35 ± 4% +69.5% 5.67 ± 9% perf-stat.cpi
1694005 ± 15% -32.0% 1152591 ± 21% perf-stat.iTLB-loads
6.601e+10 ± 9% -24.9% 4.959e+10 ± 17% perf-stat.instructions
0.30 ± 4% -40.5% 0.18 ± 10% perf-stat.ipc
39.73 ± 4% -36.0 3.70 ± 35% perf-stat.node-load-miss-rate%
18391383 ± 6% -90.9% 1670191 ± 34% perf-stat.node-load-misses
27864581 ± 2% +56.6% 43648277 ± 3% perf-stat.node-loads
40.45 ± 4% -38.9 1.51 ± 85% perf-stat.node-store-miss-rate%
44193260 ± 5% -96.7% 1470898 ± 85% perf-stat.node-store-misses
65038165 ± 3% +48.9% 96838822 ± 4% perf-stat.node-stores
7.68 ±133% -4.9 2.78 ±173% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__ioctl.perf_evlist__disable.cmd_record.run_builtin
7.68 ±133% -4.9 2.78 ±173% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__ioctl.perf_evlist__disable.cmd_record
7.68 ±133% -4.9 2.78 ±173% perf-profile.calltrace.cycles-pp.__ioctl.perf_evlist__disable.cmd_record.run_builtin.main
7.68 ±133% -4.9 2.78 ±173% perf-profile.calltrace.cycles-pp.sys_ioctl.do_syscall_64.entry_SYSCALL_64_after_hwframe.__ioctl.perf_evlist__disable
7.68 ±133% -4.9 2.78 ±173% perf-profile.calltrace.cycles-pp.perf_evlist__disable.cmd_record.run_builtin.main.generic_start_main
4.37 ±111% -4.4 0.00 perf-profile.calltrace.cycles-pp.__mutex_lock.show_interrupts.seq_read.proc_reg_read.__vfs_read
4.37 ±111% -4.4 0.00 perf-profile.calltrace.cycles-pp.mutex_spin_on_owner.__mutex_lock.show_interrupts.seq_read.proc_reg_read
4.33 ±109% -4.3 0.00 perf-profile.calltrace.cycles-pp.seq_printf.show_interrupts.seq_read.proc_reg_read.__vfs_read
4.33 ±109% -4.3 0.00 perf-profile.calltrace.cycles-pp.seq_vprintf.seq_printf.show_interrupts.seq_read.proc_reg_read
4.33 ±109% -4.3 0.00 perf-profile.calltrace.cycles-pp.vsnprintf.seq_vprintf.seq_printf.show_interrupts.seq_read
13.76 ± 91% -3.8 10.00 ±173% perf-profile.calltrace.cycles-pp.proc_reg_read.__vfs_read.vfs_read.sys_read.do_syscall_64
13.76 ± 91% -3.8 10.00 ±173% perf-profile.calltrace.cycles-pp.seq_read.proc_reg_read.__vfs_read.vfs_read.sys_read
12.31 ± 90% -2.3 10.00 ±173% perf-profile.calltrace.cycles-pp.show_interrupts.seq_read.proc_reg_read.__vfs_read.vfs_read
3.51 ±103% -1.2 2.31 ±173% perf-profile.calltrace.cycles-pp.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe.open64
3.51 ±103% -1.2 2.31 ±173% perf-profile.calltrace.cycles-pp.path_openat.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
7.27 ±104% +2.7 10.00 ±173% perf-profile.calltrace.cycles-pp.sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
7.27 ±104% +2.7 10.00 ±173% perf-profile.calltrace.cycles-pp.vfs_read.sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
5.78 ±105% -5.8 0.00 perf-profile.children.cycles-pp.seq_printf
5.78 ±105% -5.8 0.00 perf-profile.children.cycles-pp.seq_vprintf
5.78 ±105% -5.8 0.00 perf-profile.children.cycles-pp.vsnprintf
7.68 ±133% -4.9 2.78 ±173% perf-profile.children.cycles-pp.perf_evlist__disable
14.46 ± 83% -4.5 10.00 ±173% perf-profile.children.cycles-pp.seq_read
4.37 ±111% -4.4 0.00 perf-profile.children.cycles-pp.__mutex_lock
4.37 ±111% -4.4 0.00 perf-profile.children.cycles-pp.mutex_spin_on_owner
13.76 ± 91% -3.8 10.00 ±173% perf-profile.children.cycles-pp.proc_reg_read
12.31 ± 90% -2.3 10.00 ±173% perf-profile.children.cycles-pp.show_interrupts
3.51 ±103% -1.2 2.31 ±173% perf-profile.children.cycles-pp.do_filp_open
3.51 ±103% -1.2 2.31 ±173% perf-profile.children.cycles-pp.path_openat
4.37 ±111% -4.4 0.00 perf-profile.self.cycles-pp.mutex_spin_on_owner
***************************************************************************************************
lkp-skl-2sp4: 104 threads Intel(R) Xeon(R) Platinum 8170 CPU @ 2.10GHz with 64G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_task/rootfs/runtime/tbox_group/testcase:
gcc-7/performance/x86_64-rhel-7.2/50%/debian-x86_64-2018-04-03.cgz/300s/lkp-skl-2sp4/pft
commit:
24d0c1d6e6 ("sched/fair: Do not migrate due to a sync wakeup on exit")
2c83362734 ("sched/fair: Consider SD_NUMA when selecting the most idle group to schedule on")
24d0c1d6e65f635b 2c83362734dad8e48ccc0710b5c
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
1:4 -25% :4 dmesg.WARNING:at#for_ip_apic_timer_interrupt/0x
%stddev %change %stddev
\ | \
230747 -42.5% 132607 pft.faults_per_sec_per_cpu
7861494 -33.0% 5265420 pft.time.minor_page_faults
4057 +17.5% 4767 pft.time.percent_of_cpu_this_job_got
11947 +16.2% 13877 pft.time.system_time
249.55 +86.6% 465.61 ± 7% pft.time.user_time
143006 +47.0% 210176 ± 4% pft.time.voluntary_context_switches
6501752 ± 9% -40.5% 3870149 ± 34% numa-numastat.node0.local_node
6515543 ± 9% -40.4% 3881040 ± 34% numa-numastat.node0.numa_hit
59.50 -10.1% 53.50 vmstat.cpu.id
39.75 ± 2% +20.1% 47.75 vmstat.procs.r
5101 ± 2% +31.8% 6723 ± 9% vmstat.system.cs
59.84 -6.1 53.74 mpstat.cpu.idle%
0.00 ± 61% +0.0 0.01 ± 33% mpstat.cpu.soft%
39.14 +5.4 44.54 mpstat.cpu.sys%
0.99 +0.7 1.67 ± 7% mpstat.cpu.usr%
32208 ± 14% +37.3% 44213 ± 4% numa-meminfo.node0.SReclaimable
411790 ± 16% +161.6% 1077435 ± 17% numa-meminfo.node1.AnonHugePages
3769 ± 13% +29.0% 4860 ± 9% numa-meminfo.node1.PageTables
41429 ± 11% -31.3% 28458 ± 7% numa-meminfo.node1.SReclaimable
45429 ± 49% -59.8% 18251 ±108% numa-meminfo.node1.Shmem
14247 -16.1% 11952 ± 4% slabinfo.files_cache.active_objs
14326 -15.5% 12108 ± 4% slabinfo.files_cache.num_objs
5797 -18.6% 4718 ± 8% slabinfo.mm_struct.active_objs
5863 -18.2% 4797 ± 8% slabinfo.mm_struct.num_objs
1193 ± 3% -10.7% 1066 ± 2% slabinfo.nsproxy.active_objs
1193 ± 3% -10.7% 1066 ± 2% slabinfo.nsproxy.num_objs
6180787 ± 3% -15.2% 5242821 ± 14% cpuidle.C1.time
342482 ± 4% +37.0% 469078 ± 7% cpuidle.C1.usage
2.939e+08 ± 3% +31.0% 3.85e+08 ± 3% cpuidle.C1E.time
848558 ± 6% +30.9% 1110683 ± 2% cpuidle.C1E.usage
1.796e+10 -10.1% 1.615e+10 ± 2% cpuidle.C6.time
18678065 -10.9% 16650357 cpuidle.C6.usage
1.116e+08 ± 11% +128.8% 2.554e+08 ± 10% cpuidle.POLL.time
6702 ± 3% +38.5% 9283 cpuidle.POLL.usage
5173027 ± 2% +19.2% 6166000 ± 2% meminfo.Active
5128144 ± 2% +19.4% 6121960 ± 2% meminfo.Active(anon)
733868 +130.3% 1690255 ± 33% meminfo.AnonHugePages
4988775 ± 2% +19.2% 5948484 ± 3% meminfo.AnonPages
5.318e+08 +18.8% 6.319e+08 meminfo.Committed_AS
1082 ± 84% +87.0% 2024 ± 9% meminfo.Mlocked
7070 +26.5% 8941 ± 12% meminfo.PageTables
1083 ± 84% +87.1% 2027 ± 9% meminfo.Unevictable
8051 ± 14% +37.3% 11053 ± 4% numa-vmstat.node0.nr_slab_reclaimable
3703595 ± 7% -43.4% 2095655 ± 15% numa-vmstat.node0.numa_hit
3689371 ± 7% -43.5% 2084541 ± 15% numa-vmstat.node0.numa_local
191.75 ± 16% +164.5% 507.25 ± 34% numa-vmstat.node1.nr_anon_transparent_hugepages
11353 ± 49% -59.8% 4565 ±108% numa-vmstat.node1.nr_shmem
10358 ± 11% -31.3% 7114 ± 7% numa-vmstat.node1.nr_slab_reclaimable
4161642 ± 6% -12.2% 3653839 ± 9% numa-vmstat.node1.numa_hit
4020032 ± 6% -12.7% 3508992 ± 9% numa-vmstat.node1.numa_local
1177 +14.0% 1341 turbostat.Avg_MHz
42.29 +5.9 48.23 turbostat.Busy%
340371 ± 4% +37.3% 467253 ± 7% turbostat.C1
842290 ± 6% +31.3% 1105626 ± 2% turbostat.C1E
0.93 ± 3% +0.3 1.21 ± 4% turbostat.C1E%
18676469 -10.9% 16648172 turbostat.C6
56.92 -6.2 50.68 turbostat.C6%
56.18 -13.0% 48.89 ± 3% turbostat.CPU%c1
255.16 -8.8% 232.68 turbostat.PkgWatt
65.13 -17.8% 53.51 turbostat.RAMWatt
25420 ± 11% -32.7% 17115 ± 18% softirqs.CPU1.SCHED
16168 ± 10% -56.0% 7113 ± 22% softirqs.CPU2.SCHED
91499 ± 10% +18.0% 107985 ± 15% softirqs.CPU29.TIMER
13561 ± 16% -51.4% 6584 ± 16% softirqs.CPU3.SCHED
84774 ± 6% +27.2% 107811 ± 18% softirqs.CPU30.TIMER
82435 ± 8% +24.5% 102670 ± 21% softirqs.CPU31.TIMER
77710 ± 13% +35.6% 105402 ± 20% softirqs.CPU34.TIMER
11043 ± 8% -41.2% 6493 ± 19% softirqs.CPU4.SCHED
87767 ± 4% +22.1% 107187 ± 15% softirqs.CPU48.TIMER
10229 ± 10% -40.4% 6098 ± 21% softirqs.CPU5.SCHED
86042 ± 4% +22.1% 105097 ± 17% softirqs.CPU80.TIMER
86912 ± 5% +23.7% 107494 ± 16% softirqs.CPU81.TIMER
82985 ± 10% +32.6% 110071 ± 13% softirqs.CPU87.TIMER
79387 ± 12% +40.0% 111162 ± 13% softirqs.CPU90.TIMER
82466 ± 13% +39.0% 114662 ± 11% softirqs.CPU93.TIMER
6797 ± 31% -45.9% 3674 ± 34% softirqs.CPU99.SCHED
355628 ± 2% +14.3% 406464 ± 6% softirqs.RCU
8446116 +9.7% 9267049 softirqs.TIMER
1262046 +22.3% 1542932 proc-vmstat.nr_active_anon
1228899 +22.1% 1500715 proc-vmstat.nr_anon_pages
356.25 ± 9% +100.1% 712.75 ± 3% proc-vmstat.nr_anon_transparent_hugepages
1473503 -1.8% 1446646 proc-vmstat.nr_dirty_background_threshold
2950611 -1.8% 2896831 proc-vmstat.nr_dirty_threshold
14627049 -1.8% 14358270 proc-vmstat.nr_free_pages
269.75 ± 84% +87.8% 506.50 ± 8% proc-vmstat.nr_mlock
1758 +21.4% 2136 ± 2% proc-vmstat.nr_page_table_pages
18410 -1.3% 18167 proc-vmstat.nr_slab_reclaimable
52708 -2.6% 51354 proc-vmstat.nr_slab_unreclaimable
270.00 ± 84% +87.8% 507.00 ± 8% proc-vmstat.nr_unevictable
1262043 +22.3% 1542927 proc-vmstat.nr_zone_active_anon
270.00 ± 84% +87.8% 507.00 ± 8% proc-vmstat.nr_zone_unevictable
13918060 -31.1% 9583409 proc-vmstat.numa_hit
13897019 -31.2% 9562209 proc-vmstat.numa_local
2.743e+09 -32.9% 1.841e+09 proc-vmstat.pgalloc_normal
8687624 -30.0% 6084756 proc-vmstat.pgfault
2.742e+09 -32.9% 1.841e+09 proc-vmstat.pgfree
5339231 -32.9% 3584118 proc-vmstat.thp_deferred_split_page
5339481 -32.9% 3584076 proc-vmstat.thp_fault_alloc
4.798e+11 +20.9% 5.802e+11 ± 7% perf-stat.branch-instructions
0.45 ± 4% -0.1 0.37 ± 3% perf-stat.branch-miss-rate%
73.80 -8.9 64.94 perf-stat.cache-miss-rate%
1.921e+10 -30.4% 1.337e+10 ± 2% perf-stat.cache-misses
2.603e+10 -20.9% 2.059e+10 ± 2% perf-stat.cache-references
1527973 ± 2% +34.0% 2047421 ± 9% perf-stat.context-switches
3.625e+13 +15.2% 4.178e+13 perf-stat.cpu-cycles
11029 -14.8% 9402 ± 5% perf-stat.cpu-migrations
5.477e+11 +18.3% 6.481e+11 ± 5% perf-stat.dTLB-loads
0.08 -0.0 0.05 ± 8% perf-stat.dTLB-store-miss-rate%
1.202e+08 -48.5% 61867822 ± 4% perf-stat.dTLB-store-misses
1.521e+11 -11.0% 1.354e+11 ± 4% perf-stat.dTLB-stores
22.62 +10.0 32.67 ± 10% perf-stat.iTLB-load-miss-rate%
63435492 +20.8% 76661143 ± 8% perf-stat.iTLB-load-misses
2.17e+08 -26.9% 1.586e+08 ± 7% perf-stat.iTLB-loads
1.91e+12 +16.2% 2.219e+12 ± 4% perf-stat.instructions
8642807 -30.0% 6049660 perf-stat.minor-faults
8.59 ± 2% -0.7 7.90 ± 6% perf-stat.node-load-miss-rate%
1.413e+08 ± 3% -34.4% 92653965 ± 6% perf-stat.node-load-misses
1.504e+09 -28.1% 1.081e+09 perf-stat.node-loads
1.152e+10 -27.4% 8.37e+09 perf-stat.node-stores
8642829 -30.0% 6049673 perf-stat.page-faults
611362 ± 11% +11.6% 682342 interrupts.CAL:Function_call_interrupts
34040 ± 11% -33.7% 22553 ± 16% interrupts.CPU1.RES:Rescheduling_interrupts
94.00 ±115% -82.7% 16.25 ± 84% interrupts.CPU103.RES:Rescheduling_interrupts
5539 ± 10% +18.1% 6544 interrupts.CPU16.CAL:Function_call_interrupts
5560 ± 9% +18.5% 6589 interrupts.CPU17.CAL:Function_call_interrupts
15417 ± 18% -66.0% 5237 ± 47% interrupts.CPU2.RES:Rescheduling_interrupts
150.00 ± 74% +346.3% 669.50 ± 32% interrupts.CPU23.RES:Rescheduling_interrupts
145.25 ± 65% +430.1% 770.00 ± 61% interrupts.CPU24.RES:Rescheduling_interrupts
522.00 ± 75% +3786.5% 20287 ± 52% interrupts.CPU26.RES:Rescheduling_interrupts
268.00 ± 57% +1782.2% 5044 ± 80% interrupts.CPU27.RES:Rescheduling_interrupts
129.25 ± 48% +713.9% 1052 ± 82% interrupts.CPU28.RES:Rescheduling_interrupts
101.75 ± 31% +397.8% 506.50 ± 41% interrupts.CPU29.RES:Rescheduling_interrupts
9029 ± 10% -72.3% 2498 ± 65% interrupts.CPU3.RES:Rescheduling_interrupts
5206 ± 16% +28.1% 6672 interrupts.CPU36.CAL:Function_call_interrupts
5821 ± 12% -73.2% 1561 ± 51% interrupts.CPU4.RES:Rescheduling_interrupts
83.00 ± 17% -32.5% 56.00 ± 29% interrupts.CPU40.RES:Rescheduling_interrupts
4349 ± 30% -80.8% 836.00 ± 50% interrupts.CPU5.RES:Rescheduling_interrupts
2458 ± 44% -68.3% 780.00 ±114% interrupts.CPU55.NMI:Non-maskable_interrupts
2458 ± 44% -68.3% 780.00 ±114% interrupts.CPU55.PMI:Performance_monitoring_interrupts
3153 ± 48% -78.7% 673.25 ± 74% interrupts.CPU56.NMI:Non-maskable_interrupts
3153 ± 48% -78.7% 673.25 ± 74% interrupts.CPU56.PMI:Performance_monitoring_interrupts
3769 ± 13% -80.7% 728.75 ± 48% interrupts.CPU6.RES:Rescheduling_interrupts
336.75 ±126% -91.2% 29.50 ± 95% interrupts.CPU74.RES:Rescheduling_interrupts
1766 ± 54% -65.7% 606.50 ± 54% interrupts.CPU8.RES:Rescheduling_interrupts
30.00 ± 28% +432.5% 159.75 ± 92% interrupts.CPU85.RES:Rescheduling_interrupts
320.65 ±173% +2126.1% 7138 ±100% sched_debug.cfs_rq:/.MIN_vruntime.avg
60894 +24.1% 75590 ± 14% sched_debug.cfs_rq:/.exec_clock.avg
13190 ± 9% +144.9% 32297 ± 29% sched_debug.cfs_rq:/.exec_clock.stddev
320.65 ±173% +2126.1% 7138 ±100% sched_debug.cfs_rq:/.max_vruntime.avg
3022932 +29.2% 3906103 ± 13% sched_debug.cfs_rq:/.min_vruntime.avg
5123973 ± 16% +40.1% 7181169 ± 4% sched_debug.cfs_rq:/.min_vruntime.max
593877 ± 8% +180.2% 1664130 ± 28% sched_debug.cfs_rq:/.min_vruntime.stddev
0.40 ± 6% +45.8% 0.59 ± 3% sched_debug.cfs_rq:/.nr_running.avg
1.71 ± 2% +67.3% 2.86 ± 19% sched_debug.cfs_rq:/.nr_spread_over.avg
1.29 ± 21% +52.4% 1.97 ± 15% sched_debug.cfs_rq:/.nr_spread_over.stddev
9.02 ± 3% +24.5% 11.23 ± 3% sched_debug.cfs_rq:/.runnable_load_avg.avg
28.38 ± 16% -30.8% 19.62 sched_debug.cfs_rq:/.runnable_load_avg.max
11.55 ± 11% -26.2% 8.53 ± 3% sched_debug.cfs_rq:/.runnable_load_avg.stddev
3730816 ± 19% +54.5% 5765499 sched_debug.cfs_rq:/.spread0.max
593850 ± 8% +180.2% 1664097 ± 28% sched_debug.cfs_rq:/.spread0.stddev
406.87 ± 7% +48.3% 603.22 ± 3% sched_debug.cfs_rq:/.util_avg.avg
28.33 ± 16% -30.6% 19.67 sched_debug.cpu.cpu_load[0].max
11.23 ± 9% -19.2% 9.07 sched_debug.cpu.cpu_load[0].stddev
48.62 ± 23% -59.1% 19.88 sched_debug.cpu.cpu_load[1].max
12.53 ± 7% -28.2% 9.00 sched_debug.cpu.cpu_load[1].stddev
67.50 ± 44% -68.6% 21.21 ± 11% sched_debug.cpu.cpu_load[2].max
14.14 ± 24% -36.6% 8.97 sched_debug.cpu.cpu_load[2].stddev
121.54 ±111% -81.0% 23.08 ± 12% sched_debug.cpu.cpu_load[3].max
19.34 ± 71% -53.4% 9.01 sched_debug.cpu.cpu_load[3].stddev
139.17 ±118% -80.4% 27.33 ± 14% sched_debug.cpu.cpu_load[4].max
20.88 ± 79% -55.4% 9.30 ± 3% sched_debug.cpu.cpu_load[4].stddev
27826 -25.9% 20621 sched_debug.cpu.curr->pid.max
12953 ± 3% -21.4% 10185 sched_debug.cpu.curr->pid.stddev
0.00 ± 17% +24.4% 0.00 ± 12% sched_debug.cpu.next_balance.stddev
122120 ± 3% +15.7% 141347 ± 2% sched_debug.cpu.nr_load_updates.max
10094 ± 7% +127.8% 22993 ± 25% sched_debug.cpu.nr_load_updates.stddev
0.37 ± 6% +15.8% 0.43 sched_debug.cpu.nr_running.avg
7428 ± 2% +23.6% 9182 ± 7% sched_debug.cpu.nr_switches.avg
42101 ± 7% +105.4% 86497 ± 25% sched_debug.cpu.nr_switches.max
6842 ± 3% +82.1% 12458 ± 10% sched_debug.cpu.nr_switches.stddev
6724 +24.8% 8389 ± 7% sched_debug.cpu.sched_count.avg
40147 ± 7% +109.1% 83942 ± 26% sched_debug.cpu.sched_count.max
6608 ± 3% +83.2% 12108 ± 11% sched_debug.cpu.sched_count.stddev
3230 +19.5% 3861 ± 6% sched_debug.cpu.sched_goidle.avg
19924 ± 7% +110.0% 41838 ± 26% sched_debug.cpu.sched_goidle.max
3301 ± 3% +78.9% 5906 ± 12% sched_debug.cpu.sched_goidle.stddev
3123 ± 2% +29.1% 4032 ± 7% sched_debug.cpu.ttwu_count.avg
27111 ± 20% +105.9% 55815 ± 34% sched_debug.cpu.ttwu_count.max
4227 ± 10% +83.8% 7771 ± 11% sched_debug.cpu.ttwu_count.stddev
992.95 +56.0% 1548 ± 10% sched_debug.cpu.ttwu_local.avg
3551 ± 15% +159.8% 9228 ± 48% sched_debug.cpu.ttwu_local.max
194.04 ± 71% +90.8% 370.25 ± 9% sched_debug.cpu.ttwu_local.min
603.41 ± 7% +114.2% 1292 ± 45% sched_debug.cpu.ttwu_local.stddev
19.92 -16.7 3.21 ± 15% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
21.42 -16.0 5.42 ± 8% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
21.46 -16.0 5.49 ± 8% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
21.46 -16.0 5.49 ± 8% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
21.46 -16.0 5.49 ± 8% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
21.74 ± 2% -15.9 5.86 ± 6% perf-profile.calltrace.cycles-pp.secondary_startup_64
1.17 ± 3% +0.3 1.43 ± 2% perf-profile.calltrace.cycles-pp.release_pages.tlb_flush_mmu_free.unmap_page_range.unmap_vmas.exit_mmap
1.17 ± 3% +0.3 1.44 ± 2% perf-profile.calltrace.cycles-pp.tlb_flush_mmu_free.unmap_page_range.unmap_vmas.exit_mmap.mmput
0.55 +0.4 0.95 ± 2% perf-profile.calltrace.cycles-pp.___might_sleep
1.43 ± 2% +0.5 1.97 perf-profile.calltrace.cycles-pp._cond_resched.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault
1.31 ± 3% +0.6 1.88 ± 3% perf-profile.calltrace.cycles-pp.__wake_up_parent.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.31 ± 3% +0.6 1.88 ± 3% perf-profile.calltrace.cycles-pp.do_group_exit.__wake_up_parent.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.31 ± 3% +0.6 1.88 ± 3% perf-profile.calltrace.cycles-pp.do_exit.do_group_exit.__wake_up_parent.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.30 ± 3% +0.6 1.88 ± 3% perf-profile.calltrace.cycles-pp.mmput.do_exit.do_group_exit.__wake_up_parent.do_syscall_64
1.29 ± 3% +0.6 1.88 ± 3% perf-profile.calltrace.cycles-pp.exit_mmap.mmput.do_exit.do_group_exit.__wake_up_parent
1.32 ± 3% +0.6 1.90 ± 3% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
1.32 ± 3% +0.6 1.90 ± 3% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.26 ± 3% +0.6 1.85 ± 3% perf-profile.calltrace.cycles-pp.unmap_vmas.exit_mmap.mmput.do_exit.do_group_exit
1.26 ± 3% +0.6 1.85 ± 3% perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.exit_mmap.mmput.do_exit
0.92 ± 2% +0.6 1.54 perf-profile.calltrace.cycles-pp.clear_huge_page
0.55 ±102% +0.9 1.42 ± 11% perf-profile.calltrace.cycles-pp.poll_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
0.69 ± 4% +1.0 1.69 ± 2% perf-profile.calltrace.cycles-pp.clear_page_erms
1.66 ± 2% +1.1 2.72 perf-profile.calltrace.cycles-pp.___might_sleep.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault
0.77 ± 4% +1.7 2.44 ± 4% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.get_page_from_freelist.__alloc_pages_nodemask.do_huge_pmd_anonymous_page.__handle_mm_fault
0.76 ± 4% +1.7 2.43 ± 4% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.get_page_from_freelist.__alloc_pages_nodemask.do_huge_pmd_anonymous_page
2.43 ± 2% +2.1 4.53 ± 2% perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault
2.44 ± 2% +2.1 4.55 ± 2% perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault.__do_page_fault
60.91 +9.2 70.07 perf-profile.calltrace.cycles-pp.clear_page_erms.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault
69.30 +10.9 80.25 perf-profile.calltrace.cycles-pp.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault.__do_page_fault
72.23 +13.1 85.35 perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
72.25 +13.1 85.38 perf-profile.calltrace.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
72.11 +13.1 85.25 perf-profile.calltrace.cycles-pp.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
72.31 +13.2 85.48 perf-profile.calltrace.cycles-pp.do_page_fault.page_fault
72.31 +13.2 85.48 perf-profile.calltrace.cycles-pp.__do_page_fault.do_page_fault.page_fault
72.32 +13.2 85.50 perf-profile.calltrace.cycles-pp.page_fault
20.15 ± 2% -16.6 3.55 ± 4% perf-profile.children.cycles-pp.intel_idle
21.46 -16.0 5.49 ± 8% perf-profile.children.cycles-pp.start_secondary
21.71 ± 2% -15.9 5.82 ± 6% perf-profile.children.cycles-pp.cpuidle_enter_state
21.74 ± 2% -15.9 5.86 ± 6% perf-profile.children.cycles-pp.do_idle
21.74 ± 2% -15.9 5.86 ± 6% perf-profile.children.cycles-pp.secondary_startup_64
21.74 ± 2% -15.9 5.86 ± 6% perf-profile.children.cycles-pp.cpu_startup_entry
0.14 ± 16% -0.1 0.08 ± 10% perf-profile.children.cycles-pp.read
0.17 ± 6% -0.0 0.15 ± 4% perf-profile.children.cycles-pp.native_irq_return_iret
0.07 ± 6% +0.0 0.08 ± 5% perf-profile.children.cycles-pp.__list_del_entry_valid
0.08 ± 17% +0.0 0.11 ± 9% perf-profile.children.cycles-pp.cmd_stat
0.08 ± 17% +0.0 0.11 ± 9% perf-profile.children.cycles-pp.__run_perf_stat
0.08 ± 17% +0.0 0.11 ± 9% perf-profile.children.cycles-pp.process_interval
0.08 ± 17% +0.0 0.11 ± 9% perf-profile.children.cycles-pp.read_counters
0.07 ± 17% +0.0 0.10 ± 10% perf-profile.children.cycles-pp.perf_event_read
0.08 ± 19% +0.0 0.11 ± 9% perf-profile.children.cycles-pp.perf_evsel__read_counter
0.08 ± 19% +0.0 0.11 ± 9% perf-profile.children.cycles-pp.__read_nocancel
0.08 ± 16% +0.0 0.11 ± 11% perf-profile.children.cycles-pp.perf_read
0.07 ± 15% +0.0 0.11 ± 7% perf-profile.children.cycles-pp.smp_call_function_single
0.03 ±100% +0.0 0.06 ± 6% perf-profile.children.cycles-pp.__pagevec_lru_add_fn
0.57 ± 5% +0.0 0.61 ± 4% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.00 +0.1 0.05 perf-profile.children.cycles-pp.___perf_sw_event
0.08 ± 8% +0.1 0.13 ± 3% perf-profile.children.cycles-pp.pte_alloc_one
0.00 +0.1 0.06 ± 7% perf-profile.children.cycles-pp.__perf_sw_event
0.05 ± 62% +0.1 0.12 ± 19% perf-profile.children.cycles-pp.ktime_get
0.40 ± 2% +0.1 0.49 ± 2% perf-profile.children.cycles-pp.rcu_all_qs
0.13 ± 3% +0.1 0.25 ± 17% perf-profile.children.cycles-pp.__put_compound_page
0.10 ± 8% +0.1 0.23 ± 18% perf-profile.children.cycles-pp.__page_cache_release
0.00 +0.2 0.16 ± 4% perf-profile.children.cycles-pp.free_transhuge_page
0.20 ± 4% +0.2 0.41 ± 5% perf-profile.children.cycles-pp.free_one_page
1.17 ± 3% +0.3 1.44 ± 2% perf-profile.children.cycles-pp.tlb_flush_mmu_free
1.18 ± 3% +0.3 1.46 ± 2% perf-profile.children.cycles-pp.release_pages
0.07 ± 6% +0.3 0.39 ± 11% perf-profile.children.cycles-pp.zap_huge_pmd
0.00 +0.3 0.34 ± 12% perf-profile.children.cycles-pp.deferred_split_huge_page
1.30 ± 3% +0.6 1.88 ± 3% perf-profile.children.cycles-pp.mmput
1.31 ± 3% +0.6 1.89 ± 3% perf-profile.children.cycles-pp.__wake_up_parent
1.31 ± 3% +0.6 1.89 ± 3% perf-profile.children.cycles-pp.do_group_exit
1.31 ± 3% +0.6 1.89 ± 3% perf-profile.children.cycles-pp.do_exit
1.30 ± 3% +0.6 1.88 ± 3% perf-profile.children.cycles-pp.exit_mmap
1.27 ± 3% +0.6 1.85 ± 3% perf-profile.children.cycles-pp.unmap_vmas
1.26 ± 3% +0.6 1.85 ± 3% perf-profile.children.cycles-pp.unmap_page_range
1.71 +0.6 2.34 perf-profile.children.cycles-pp._cond_resched
0.67 ± 73% +0.8 1.42 ± 11% perf-profile.children.cycles-pp.poll_idle
2.22 ± 2% +1.5 3.67 perf-profile.children.cycles-pp.___might_sleep
2.54 ± 2% +2.2 4.69 ± 2% perf-profile.children.cycles-pp.__alloc_pages_nodemask
2.52 ± 2% +2.2 4.68 ± 2% perf-profile.children.cycles-pp.get_page_from_freelist
0.97 ± 4% +2.2 3.19 ± 5% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
1.57 ± 14% +2.3 3.85 ± 10% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
62.03 +10.2 72.23 perf-profile.children.cycles-pp.clear_page_erms
70.21 +11.6 81.78 perf-profile.children.cycles-pp.clear_huge_page
72.25 +13.1 85.37 perf-profile.children.cycles-pp.__handle_mm_fault
72.27 +13.1 85.40 perf-profile.children.cycles-pp.handle_mm_fault
72.11 +13.1 85.26 perf-profile.children.cycles-pp.do_huge_pmd_anonymous_page
72.33 +13.2 85.50 perf-profile.children.cycles-pp.do_page_fault
72.33 +13.2 85.50 perf-profile.children.cycles-pp.__do_page_fault
72.34 +13.2 85.51 perf-profile.children.cycles-pp.page_fault
20.15 ± 2% -16.6 3.55 ± 4% perf-profile.self.cycles-pp.intel_idle
0.78 ± 4% -0.2 0.59 ± 4% perf-profile.self.cycles-pp.__free_pages_ok
0.17 ± 6% -0.0 0.15 ± 4% perf-profile.self.cycles-pp.native_irq_return_iret
0.07 ± 6% +0.0 0.08 ± 5% perf-profile.self.cycles-pp.__list_del_entry_valid
0.07 ± 15% +0.0 0.11 ± 8% perf-profile.self.cycles-pp.smp_call_function_single
0.03 ±100% +0.1 0.11 ± 23% perf-profile.self.cycles-pp.ktime_get
0.39 ± 2% +0.1 0.48 ± 2% perf-profile.self.cycles-pp.rcu_all_qs
1.61 +0.4 2.00 perf-profile.self.cycles-pp.get_page_from_freelist
1.46 +0.6 2.07 perf-profile.self.cycles-pp._cond_resched
0.66 ± 73% +0.8 1.42 ± 11% perf-profile.self.cycles-pp.poll_idle
6.08 +0.8 6.88 perf-profile.self.cycles-pp.clear_huge_page
2.19 +1.5 3.65 perf-profile.self.cycles-pp.___might_sleep
1.57 ± 14% +2.3 3.85 ± 10% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
61.40 +10.3 71.66 perf-profile.self.cycles-pp.clear_page_erms
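
The pft (page fault test) workload behind faults_per_sec_per_cpu is a fault-rate
microbenchmark: tasks map anonymous memory and touch it, so every first access takes
a minor fault. That shape matches the profile above, where the faults resolve in
do_huge_pmd_anonymous_page() and most cycles sit in clear_huge_page()/clear_page_erms.
A rough sketch of such a fault loop, using a hypothetical touch_anon() helper
(illustrative only, not the pft source):

#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

/* Touch freshly mapped anonymous memory one page at a time; each
 * write below takes a minor fault, and with THP enabled it is served
 * by do_huge_pmd_anonymous_page() -> clear_huge_page(). */
static int touch_anon(size_t bytes)
{
	long page = sysconf(_SC_PAGESIZE);
	char *p;
	size_t off;

	p = mmap(NULL, bytes, PROT_READ | PROT_WRITE,
		 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED)
		return -1;
	for (off = 0; off < bytes; off += page)
		p[off] = 1;
	return munmap(p, bytes);
}

Since the metric is fault throughput per CPU, extra time spent contending on the
zone lock during allocation (the native_queued_spin_lock_slowpath growth above)
shows up directly in the -42.5% drop.
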
***************************************************************************************************
lkp-hsx04: 144 threads Intel(R) Xeon(R) CPU E7-8890 v3 @ 2.50GHz with 512G memory
=========================================================================================
compiler/cpufreq_governor/iterations/kconfig/nr_task/rootfs/tbox_group/test/testcase:
gcc-7/performance/30/x86_64-rhel-7.2/1600%/debian-x86_64-2016-08-31.cgz/lkp-hsx04/compute/reaim
commit:
24d0c1d6e6 ("sched/fair: Do not migrate due to a sync wakeup on exit")
2c83362734 ("sched/fair: Consider SD_NUMA when selecting the most idle group to schedule on")
24d0c1d6e65f635b 2c83362734dad8e48ccc0710b5c
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
1:4 -25% :4 dmesg.WARNING:at#for_ip_error_entry/0x
1:4 -25% :4 dmesg.WARNING:at#for_ip_retint_user/0x
1:4 -25% :4 dmesg.WARNING:stack_going_in_the_wrong_direction?ip=__slab_free/0x
:4 50% 2:4 dmesg.WARNING:stack_going_in_the_wrong_direction?ip=schedule_tail/0x
6:4 0% 6:4 perf-profile.calltrace.cycles-pp.error_entry
7:4 -1% 7:4 perf-profile.children.cycles-pp.error_entry
5:4 1% 5:4 perf-profile.self.cycles-pp.error_entry
%stddev %change %stddev
\ | \
124.00 -1.4% 122.25 reaim.child_systime
87.40 -9.5% 79.08 reaim.jti
12.09 ± 2% +69.5% 20.49 ± 2% reaim.std_dev_percent
1.53 +50.7% 2.30 ± 2% reaim.std_dev_time
17534460 -5.9% 16498575 reaim.time.involuntary_context_switches
3734 -1.4% 3683 reaim.time.system_time
49689016 ± 17% +26.1% 62671778 ± 6% cpuidle.POLL.time
0.39 ± 3% -15.5% 0.33 ± 10% turbostat.Pkg%pc6
70.96 -2.2% 69.38 turbostat.RAMWatt
1695 -10.0% 1525 vmstat.procs.r
62117 -3.2% 60144 vmstat.system.cs
7903 ± 4% -56.9% 3405 ± 24% slabinfo.eventpoll_epi.active_objs
7903 ± 4% -56.9% 3405 ± 24% slabinfo.eventpoll_epi.num_objs
6915 ± 4% -56.9% 2979 ± 24% slabinfo.eventpoll_pwq.active_objs
6915 ± 4% -56.9% 2979 ± 24% slabinfo.eventpoll_pwq.num_objs
1872863 -15.2% 1587464 ± 2% meminfo.Active
1789974 -15.8% 1506890 meminfo.Active(anon)
1680128 -17.4% 1388186 meminfo.AnonPages
3268127 -13.1% 2840098 meminfo.Committed_AS
1855 -58.7% 766.75 meminfo.Mlocked
1861 -58.5% 773.25 meminfo.Unevictable
1.69 -0.2 1.52 perf-stat.cache-miss-rate%
2.587e+10 -10.6% 2.313e+10 perf-stat.cache-misses
30767306 -3.3% 29760895 perf-stat.context-switches
5874734 -14.8% 5007295 perf-stat.cpu-migrations
71.33 -0.8 70.58 perf-stat.node-load-miss-rate%
8.155e+09 -10.6% 7.291e+09 perf-stat.node-load-misses
3.277e+09 -7.3% 3.039e+09 perf-stat.node-loads
2.765e+09 -11.5% 2.448e+09 ± 2% perf-stat.node-store-misses
1.519e+10 -13.1% 1.32e+10 perf-stat.node-stores
396395 ± 4% -18.4% 323621 ± 7% numa-meminfo.node1.AnonPages
281797 ± 8% -9.8% 254296 ± 3% numa-meminfo.node1.Inactive(file)
521557 ± 6% -17.7% 429170 ± 5% numa-meminfo.node2.Active
500694 ± 7% -18.6% 407636 ± 5% numa-meminfo.node2.Active(anon)
478540 ± 8% -17.2% 396013 ± 12% numa-meminfo.node3.Active
459279 ± 7% -17.9% 377244 ± 13% numa-meminfo.node3.Active(anon)
32735 ± 41% -97.8% 716.00 ±100% numa-meminfo.node3.AnonHugePages
434193 ± 4% -23.8% 330780 ± 11% numa-meminfo.node3.AnonPages
285808 ± 5% -10.7% 255254 ± 4% numa-meminfo.node3.Inactive
280289 ± 6% -9.2% 254380 ± 3% numa-meminfo.node3.Inactive(file)
445067 -15.6% 375712 proc-vmstat.nr_active_anon
417525 -17.1% 346011 proc-vmstat.nr_anon_pages
56599 -5.0% 53752 proc-vmstat.nr_kernel_stack
463.50 -58.9% 190.50 proc-vmstat.nr_mlock
100103 -2.7% 97448 proc-vmstat.nr_slab_unreclaimable
465.00 -58.7% 192.25 proc-vmstat.nr_unevictable
445067 -15.6% 375712 proc-vmstat.nr_zone_active_anon
465.00 -58.7% 192.25 proc-vmstat.nr_zone_unevictable
195.25 ± 56% +729.4% 1619 ± 87% proc-vmstat.numa_hint_faults_local
7687 ± 3% +4.5% 8036 ± 2% proc-vmstat.numa_pte_updates
634.12 ± 6% -26.8% 464.15 ± 12% sched_debug.cfs_rq:/.exec_clock.stddev
49884777 ± 5% +21.8% 60745074 sched_debug.cfs_rq:/.min_vruntime.avg
56196226 ± 6% +25.2% 70352699 sched_debug.cfs_rq:/.min_vruntime.max
43890810 ± 4% +15.7% 50767701 ± 3% sched_debug.cfs_rq:/.min_vruntime.min
2523222 ± 6% +98.7% 5013049 ± 19% sched_debug.cfs_rq:/.min_vruntime.stddev
1.56 ± 5% +12.6% 1.76 ± 4% sched_debug.cfs_rq:/.nr_spread_over.avg
-5929434 +78.4% -10580036 sched_debug.cfs_rq:/.spread0.min
2487301 ± 6% +101.3% 5007639 ± 19% sched_debug.cfs_rq:/.spread0.stddev
243.96 ± 2% -10.0% 219.50 sched_debug.cfs_rq:/.util_avg.stddev
670504 ± 5% -15.4% 566996 ± 6% sched_debug.cpu.avg_idle.min
58664 ± 11% +40.9% 82632 ± 12% sched_debug.cpu.avg_idle.stddev
470.51 ± 17% -47.7% 246.06 ± 20% sched_debug.cpu.clock.stddev
470.51 ± 17% -47.7% 246.06 ± 20% sched_debug.cpu.clock_task.stddev
7233 ± 9% +18.2% 8551 ± 12% sched_debug.cpu.load.avg
137222 ± 62% +119.4% 301049 ± 35% sched_debug.cpu.load.max
15787 ± 43% +85.8% 29335 ± 30% sched_debug.cpu.load.stddev
0.00 ± 17% -35.3% 0.00 ± 16% sched_debug.cpu.next_balance.stddev
169.75 ± 13% -25.0% 127.31 ± 8% sched_debug.cpu.sched_goidle.min
3.38 +39.4% 4.71 ± 15% sched_debug.rt_rq:/.rt_runtime.stddev
131.75 ± 23% -58.3% 55.00 ± 28% numa-vmstat.node0.nr_mlock
132.00 ± 23% -58.1% 55.25 ± 27% numa-vmstat.node0.nr_unevictable
132.00 ± 23% -58.1% 55.25 ± 27% numa-vmstat.node0.nr_zone_unevictable
98455 ± 4% -18.2% 80516 ± 6% numa-vmstat.node1.nr_anon_pages
319.25 ±142% -100.0% 0.00 numa-vmstat.node1.nr_dirtied
70449 ± 8% -9.8% 63574 ± 3% numa-vmstat.node1.nr_inactive_file
99.50 ± 3% -60.1% 39.75 numa-vmstat.node1.nr_mlock
100.00 ± 4% -60.2% 39.75 numa-vmstat.node1.nr_unevictable
319.25 ±142% -100.0% 0.00 numa-vmstat.node1.nr_written
70449 ± 8% -9.8% 63574 ± 3% numa-vmstat.node1.nr_zone_inactive_file
100.00 ± 4% -60.2% 39.75 numa-vmstat.node1.nr_zone_unevictable
124407 ± 7% -18.3% 101653 ± 5% numa-vmstat.node2.nr_active_anon
124405 ± 7% -18.3% 101652 ± 5% numa-vmstat.node2.nr_zone_active_anon
114276 ± 7% -17.8% 93899 ± 12% numa-vmstat.node3.nr_active_anon
108032 ± 4% -23.8% 82312 ± 11% numa-vmstat.node3.nr_anon_pages
70071 ± 6% -9.2% 63594 ± 3% numa-vmstat.node3.nr_inactive_file
134.00 ± 21% -64.6% 47.50 ± 28% numa-vmstat.node3.nr_mlock
135.00 ± 21% -64.6% 47.75 ± 28% numa-vmstat.node3.nr_unevictable
114275 ± 7% -17.8% 93901 ± 12% numa-vmstat.node3.nr_zone_active_anon
70071 ± 6% -9.2% 63594 ± 3% numa-vmstat.node3.nr_zone_inactive_file
135.00 ± 21% -64.6% 47.75 ± 28% numa-vmstat.node3.nr_zone_unevictable
1.411e+09 ± 8% -3.3e+08 1.077e+09 ± 14% syscalls.sys_brk.noise.100%
1.42e+09 ± 7% -3.3e+08 1.086e+09 ± 14% syscalls.sys_brk.noise.2%
1.416e+09 ± 7% -3.3e+08 1.082e+09 ± 14% syscalls.sys_brk.noise.25%
1.42e+09 ± 7% -3.3e+08 1.085e+09 ± 14% syscalls.sys_brk.noise.5%
1.414e+09 ± 8% -3.3e+08 1.08e+09 ± 14% syscalls.sys_brk.noise.50%
1.413e+09 ± 8% -3.3e+08 1.079e+09 ± 14% syscalls.sys_brk.noise.75%
4.046e+09 ± 13% -1.3e+09 2.793e+09 ± 6% syscalls.sys_newstat.noise.100%
4.119e+09 ± 12% -1.3e+09 2.868e+09 ± 6% syscalls.sys_newstat.noise.2%
4.101e+09 ± 12% -1.3e+09 2.849e+09 ± 6% syscalls.sys_newstat.noise.25%
4.117e+09 ± 12% -1.3e+09 2.866e+09 ± 6% syscalls.sys_newstat.noise.5%
4.08e+09 ± 12% -1.3e+09 2.828e+09 ± 6% syscalls.sys_newstat.noise.50%
4.062e+09 ± 12% -1.3e+09 2.811e+09 ± 6% syscalls.sys_newstat.noise.75%
1.541e+11 ± 10% -4.7e+10 1.072e+11 ± 15% syscalls.sys_read.noise.100%
1.541e+11 ± 10% -4.7e+10 1.073e+11 ± 15% syscalls.sys_read.noise.2%
1.541e+11 ± 10% -4.7e+10 1.073e+11 ± 15% syscalls.sys_read.noise.25%
1.541e+11 ± 10% -4.7e+10 1.073e+11 ± 15% syscalls.sys_read.noise.5%
1.541e+11 ± 10% -4.7e+10 1.073e+11 ± 15% syscalls.sys_read.noise.50%
1.541e+11 ± 10% -4.7e+10 1.072e+11 ± 15% syscalls.sys_read.noise.75%
130453 ± 16% -69.2% 40150 ±103% syscalls.sys_rt_sigaction.max
19777092 ± 4% -1.3e+07 6543904 ±100% syscalls.sys_rt_sigaction.noise.100%
27560343 ± 2% -1.7e+07 10538623 ±100% syscalls.sys_rt_sigaction.noise.2%
26718095 ± 3% -1.7e+07 9971159 ±100% syscalls.sys_rt_sigaction.noise.25%
27550355 ± 2% -1.7e+07 10510393 ±100% syscalls.sys_rt_sigaction.noise.5%
24718035 ± 3% -1.6e+07 8356079 ±100% syscalls.sys_rt_sigaction.noise.50%
22249116 ± 4% -1.5e+07 7149959 ±100% syscalls.sys_rt_sigaction.noise.75%
27266292 ± 11% -1.6e+07 11532735 ±100% syscalls.sys_times.noise.100%
32337364 ± 9% -1.8e+07 14209280 ±100% syscalls.sys_times.noise.2%
31159606 ± 9% -1.8e+07 13621578 ±100% syscalls.sys_times.noise.25%
32279406 ± 9% -1.8e+07 14182805 ±100% syscalls.sys_times.noise.5%
30086951 ± 9% -1.7e+07 13027260 ±100% syscalls.sys_times.noise.50%
28978220 ± 10% -1.7e+07 12426543 ±100% syscalls.sys_times.noise.75%
4922 ± 13% -12.0% 4333 interrupts.CPU102.CAL:Function_call_interrupts
15763 ± 12% -17.4% 13021 ± 3% interrupts.CPU104.RES:Rescheduling_interrupts
4930 ± 14% -12.3% 4325 interrupts.CPU11.CAL:Function_call_interrupts
4910 ± 13% -12.1% 4318 interrupts.CPU118.CAL:Function_call_interrupts
4935 ± 14% -17.6% 4064 ± 10% interrupts.CPU119.CAL:Function_call_interrupts
4926 ± 13% -12.0% 4332 interrupts.CPU120.CAL:Function_call_interrupts
4838 ± 32% +34.0% 6483 ± 21% interrupts.CPU122.NMI:Non-maskable_interrupts
4838 ± 32% +34.0% 6483 ± 21% interrupts.CPU122.PMI:Performance_monitoring_interrupts
4913 ± 14% -17.4% 4058 ± 8% interrupts.CPU123.CAL:Function_call_interrupts
4907 ± 13% -11.7% 4330 interrupts.CPU124.CAL:Function_call_interrupts
14843 ± 3% -10.8% 13246 ± 3% interrupts.CPU126.RES:Rescheduling_interrupts
15606 -17.6% 12854 ± 4% interrupts.CPU130.RES:Rescheduling_interrupts
15198 ± 8% -14.3% 13028 interrupts.CPU131.RES:Rescheduling_interrupts
4902 ± 14% -12.6% 4286 interrupts.CPU134.CAL:Function_call_interrupts
4878 ± 14% -12.2% 4285 interrupts.CPU14.CAL:Function_call_interrupts
4945 ± 14% -13.1% 4297 interrupts.CPU140.CAL:Function_call_interrupts
15827 ± 4% -13.6% 13669 ± 5% interrupts.CPU141.RES:Rescheduling_interrupts
4786 ± 12% -14.4% 4097 ± 7% interrupts.CPU18.CAL:Function_call_interrupts
15216 ± 12% -15.3% 12883 ± 2% interrupts.CPU18.RES:Rescheduling_interrupts
15525 ± 5% -15.0% 13200 ± 2% interrupts.CPU19.RES:Rescheduling_interrupts
14771 ± 3% -7.8% 13620 interrupts.CPU2.RES:Rescheduling_interrupts
15066 ± 7% -10.6% 13468 ± 5% interrupts.CPU20.RES:Rescheduling_interrupts
4822 ± 10% -9.7% 4352 interrupts.CPU21.CAL:Function_call_interrupts
4908 ± 14% -12.3% 4305 interrupts.CPU26.CAL:Function_call_interrupts
4944 ± 31% +54.2% 7623 ± 5% interrupts.CPU27.NMI:Non-maskable_interrupts
4944 ± 31% +54.2% 7623 ± 5% interrupts.CPU27.PMI:Performance_monitoring_interrupts
4947 ± 13% -14.2% 4246 ± 4% interrupts.CPU3.CAL:Function_call_interrupts
4031 ± 5% +73.1% 6977 ± 22% interrupts.CPU3.NMI:Non-maskable_interrupts
4031 ± 5% +73.1% 6977 ± 22% interrupts.CPU3.PMI:Performance_monitoring_interrupts
15219 ± 4% -10.9% 13568 ± 5% interrupts.CPU3.RES:Rescheduling_interrupts
4849 ± 32% +54.9% 7510 ± 8% interrupts.CPU30.NMI:Non-maskable_interrupts
4849 ± 32% +54.9% 7510 ± 8% interrupts.CPU30.PMI:Performance_monitoring_interrupts
15365 ± 5% -10.0% 13833 ± 6% interrupts.CPU33.RES:Rescheduling_interrupts
4758 ± 21% +63.9% 7797 interrupts.CPU5.NMI:Non-maskable_interrupts
4758 ± 21% +63.9% 7797 interrupts.CPU5.PMI:Performance_monitoring_interrupts
4937 ± 14% -12.5% 4321 interrupts.CPU56.CAL:Function_call_interrupts
4932 ± 14% -12.0% 4340 interrupts.CPU58.CAL:Function_call_interrupts
4935 ± 14% -11.9% 4347 interrupts.CPU60.CAL:Function_call_interrupts
4836 ± 32% +35.3% 6542 ± 21% interrupts.CPU60.NMI:Non-maskable_interrupts
4836 ± 32% +35.3% 6542 ± 21% interrupts.CPU60.PMI:Performance_monitoring_interrupts
4867 ± 14% -12.5% 4260 interrupts.CPU62.CAL:Function_call_interrupts
4922 ± 14% -12.0% 4333 interrupts.CPU64.CAL:Function_call_interrupts
15118 ± 7% -14.0% 13008 ± 8% interrupts.CPU64.RES:Rescheduling_interrupts
4922 ± 13% -12.1% 4329 interrupts.CPU65.CAL:Function_call_interrupts
15324 ± 9% -12.5% 13415 ± 3% interrupts.CPU67.RES:Rescheduling_interrupts
4884 ± 14% -13.0% 4248 interrupts.CPU71.CAL:Function_call_interrupts
4890 ± 14% -11.8% 4311 interrupts.CPU77.CAL:Function_call_interrupts
4889 ± 13% -11.4% 4330 interrupts.CPU80.CAL:Function_call_interrupts
14898 ± 3% -9.4% 13504 ± 2% interrupts.CPU80.RES:Rescheduling_interrupts
15793 ± 12% -13.6% 13651 ± 3% interrupts.CPU83.RES:Rescheduling_interrupts
14835 ± 3% -9.2% 13466 ± 3% interrupts.CPU85.RES:Rescheduling_interrupts
4831 ± 32% +41.0% 6809 ± 15% interrupts.CPU86.NMI:Non-maskable_interrupts
4831 ± 32% +41.0% 6809 ± 15% interrupts.CPU86.PMI:Performance_monitoring_interrupts
15141 ± 11% -14.7% 12921 ± 2% interrupts.CPU91.RES:Rescheduling_interrupts
4919 ± 13% -12.0% 4328 interrupts.CPU94.CAL:Function_call_interrupts
15256 ± 11% -10.1% 13721 ± 6% interrupts.CPU94.RES:Rescheduling_interrupts
15869 ± 7% -13.2% 13771 ± 4% interrupts.CPU96.RES:Rescheduling_interrupts
4919 ± 13% -12.3% 4316 interrupts.CPU97.CAL:Function_call_interrupts
4.50 ± 3% -0.3 4.23 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.execve
4.50 ± 3% -0.3 4.23 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.execve
4.48 ± 3% -0.3 4.22 perf-profile.calltrace.cycles-pp.do_execveat_common.sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe.execve
4.55 ± 3% -0.2 4.35 perf-profile.calltrace.cycles-pp._do_fork.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.45 ± 2% -0.1 2.31 perf-profile.calltrace.cycles-pp.__libc_fork
1.67 ± 2% -0.1 1.55 ± 2% perf-profile.calltrace.cycles-pp.free_pgtables.exit_mmap.mmput.flush_old_exec.load_elf_binary
1.22 ± 6% -0.1 1.13 ± 6% perf-profile.calltrace.cycles-pp.__softirqentry_text_start.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
2.17 -0.1 2.10 perf-profile.calltrace.cycles-pp.filemap_map_pages.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
1.60 -0.1 1.53 perf-profile.calltrace.cycles-pp.path_openat.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.69 ± 4% -0.1 0.62 ± 2% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.sync
0.69 ± 4% -0.1 0.63 ± 3% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.sync
0.68 ± 4% -0.1 0.62 ± 2% perf-profile.calltrace.cycles-pp.sys_sync.do_syscall_64.entry_SYSCALL_64_after_hwframe.sync
0.62 ± 4% -0.1 0.56 ± 3% perf-profile.calltrace.cycles-pp.iterate_supers.sys_sync.do_syscall_64.entry_SYSCALL_64_after_hwframe.sync
0.92 ± 2% -0.0 0.88 perf-profile.calltrace.cycles-pp.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.71 ± 3% -0.0 0.67 perf-profile.calltrace.cycles-pp.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe.setlocale
0.72 -0.0 0.68 ± 3% perf-profile.calltrace.cycles-pp.rcu_process_callbacks.__softirqentry_text_start.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
0.66 ± 5% +0.0 0.71 ± 2% perf-profile.calltrace.cycles-pp.__pagevec_lru_add_fn.pagevec_lru_move_fn.__lru_cache_add.__handle_mm_fault.handle_mm_fault
1.23 ± 3% +0.1 1.30 ± 2% perf-profile.calltrace.cycles-pp.__lru_cache_add.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
1.06 ± 4% +0.1 1.16 perf-profile.calltrace.cycles-pp.__perf_sw_event.__do_page_fault.do_page_fault.page_fault
1.50 ± 3% +0.1 1.61 ± 4% perf-profile.calltrace.cycles-pp.task_tick_fair.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer
1.89 +0.1 2.02 ± 2% perf-profile.calltrace.cycles-pp.syscall_return_via_sysret
12.85 +0.2 13.05 perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
13.47 +0.2 13.67 perf-profile.calltrace.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
16.66 +0.2 16.91 perf-profile.calltrace.cycles-pp.page_fault
16.49 +0.3 16.75 perf-profile.calltrace.cycles-pp.do_page_fault.page_fault
16.46 +0.3 16.72 perf-profile.calltrace.cycles-pp.__do_page_fault.do_page_fault.page_fault
0.26 ±100% +0.3 0.54 ± 3% perf-profile.calltrace.cycles-pp.selinux_vm_enough_memory.security_vm_enough_memory_mm.do_brk_flags.sys_brk.do_syscall_64
0.00 +0.5 0.51 ± 2% perf-profile.calltrace.cycles-pp.__xstat64
1.24 ± 16% -0.3 0.90 ± 10% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
4.98 ± 2% -0.2 4.80 perf-profile.children.cycles-pp.unmap_vmas
1.90 ± 4% -0.1 1.76 ± 5% perf-profile.children.cycles-pp.__softirqentry_text_start
2.45 ± 2% -0.1 2.31 perf-profile.children.cycles-pp.__libc_fork
2.38 -0.1 2.27 perf-profile.children.cycles-pp.do_mmap
2.73 -0.1 2.63 perf-profile.children.cycles-pp.free_pgtables
2.10 -0.1 2.00 perf-profile.children.cycles-pp.mmap_region
2.54 -0.1 2.45 perf-profile.children.cycles-pp.vm_mmap_pgoff
0.74 ± 3% -0.1 0.66 ± 3% perf-profile.children.cycles-pp.wake_up_new_task
2.11 -0.1 2.03 perf-profile.children.cycles-pp.path_openat
2.12 -0.1 2.04 perf-profile.children.cycles-pp.do_filp_open
0.69 ± 3% -0.1 0.62 ± 2% perf-profile.children.cycles-pp.sys_sync
0.36 ± 4% -0.1 0.30 ± 5% perf-profile.children.cycles-pp.update_rq_clock
0.62 ± 4% -0.1 0.57 ± 3% perf-profile.children.cycles-pp.iterate_supers
0.15 ± 6% -0.0 0.11 ± 9% perf-profile.children.cycles-pp.mark_page_accessed
0.09 ± 11% -0.0 0.06 ± 13% perf-profile.children.cycles-pp.alloc_vmap_area
0.78 ± 2% -0.0 0.75 perf-profile.children.cycles-pp.lookup_fast
0.11 ± 6% -0.0 0.08 ± 17% perf-profile.children.cycles-pp.__get_vm_area_node
0.15 ± 7% -0.0 0.12 ± 5% perf-profile.children.cycles-pp.__vunmap
0.15 ± 5% -0.0 0.12 ± 8% perf-profile.children.cycles-pp.__d_alloc
0.15 ± 5% -0.0 0.13 ± 5% perf-profile.children.cycles-pp.free_work
0.31 ± 3% -0.0 0.29 perf-profile.children.cycles-pp.__update_load_avg_se
0.10 ± 7% +0.0 0.12 ± 7% perf-profile.children.cycles-pp.generic_permission
0.10 ± 8% +0.0 0.12 ± 6% perf-profile.children.cycles-pp.percpu_counter_add_batch
0.10 ± 14% +0.0 0.14 ± 10% perf-profile.children.cycles-pp.__do_fault
0.47 ± 3% +0.0 0.51 ± 2% perf-profile.children.cycles-pp.__xstat64
0.01 ±173% +0.0 0.05 ± 9% perf-profile.children.cycles-pp.__alloc_fd
0.68 +0.0 0.73 ± 3% perf-profile.children.cycles-pp.sys_read
0.18 ± 4% +0.1 0.23 ± 16% perf-profile.children.cycles-pp.get_mem_cgroup_from_mm
0.41 ± 6% +0.1 0.48 ± 8% perf-profile.children.cycles-pp.mem_cgroup_try_charge
2.12 ± 3% +0.1 2.20 perf-profile.children.cycles-pp.vfs_statx
1.60 ± 2% +0.1 1.70 ± 3% perf-profile.children.cycles-pp.pagevec_lru_move_fn
1.64 ± 2% +0.1 1.75 ± 3% perf-profile.children.cycles-pp.task_tick_fair
1.21 ± 4% +0.1 1.31 perf-profile.children.cycles-pp.__perf_sw_event
0.35 ± 9% +0.1 0.46 ± 4% perf-profile.children.cycles-pp.wait_consider_task
2.10 ± 2% +0.1 2.22 perf-profile.children.cycles-pp.syscall_return_via_sysret
3.41 ± 2% +0.1 3.54 perf-profile.children.cycles-pp.tlb_flush_mmu_free
0.75 ± 4% +0.1 0.89 ± 4% perf-profile.children.cycles-pp.SYSC_wait4
0.75 ± 4% +0.1 0.89 ± 4% perf-profile.children.cycles-pp.kernel_wait4
0.70 ± 5% +0.1 0.84 ± 5% perf-profile.children.cycles-pp.do_wait
5.25 +0.2 5.42 perf-profile.children.cycles-pp.alloc_pages_vma
19.59 +0.2 19.82 perf-profile.children.cycles-pp.do_page_fault
1.23 ± 16% -0.3 0.90 ± 10% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
3.01 -0.1 2.90 perf-profile.self.cycles-pp.unmap_page_range
0.32 ± 6% -0.1 0.26 ± 6% perf-profile.self.cycles-pp.update_rq_clock
0.19 ± 13% -0.0 0.15 ± 7% perf-profile.self.cycles-pp.scheduler_tick
1.20 ± 2% -0.0 1.16 perf-profile.self.cycles-pp._dl_addr
0.84 -0.0 0.81 perf-profile.self.cycles-pp.copy_page_range
0.15 ± 7% -0.0 0.11 ± 9% perf-profile.self.cycles-pp.mark_page_accessed
0.09 -0.0 0.07 ± 16% perf-profile.self.cycles-pp.__d_alloc
0.15 ± 3% -0.0 0.14 ± 6% perf-profile.self.cycles-pp.lru_cache_add_active_or_unevictable
0.22 ± 8% +0.0 0.26 ± 5% perf-profile.self.cycles-pp.__perf_sw_event
0.18 ± 4% +0.0 0.23 ± 16% perf-profile.self.cycles-pp.get_mem_cgroup_from_mm
0.07 +0.1 0.12 ± 4% perf-profile.self.cycles-pp.queued_write_lock_slowpath
0.69 ± 5% +0.1 0.76 ± 2% perf-profile.self.cycles-pp.__pagevec_lru_add_fn
0.69 ± 7% +0.1 0.78 ± 4% perf-profile.self.cycles-pp.__vma_adjust
0.05 ± 62% +0.1 0.14 ± 20% perf-profile.self.cycles-pp.wait_consider_task
2.10 ± 2% +0.1 2.22 perf-profile.self.cycles-pp.syscall_return_via_sysret
***************************************************************************************************
lkp-bdw-ep6: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
=========================================================================================
class/compiler/cpufreq_governor/kconfig/nr_threads/rootfs/tbox_group/testcase/testtime:
pipe/gcc-7/performance/x86_64-rhel-7.2/100%/debian-x86_64-2016-08-31.cgz/lkp-bdw-ep6/stress-ng/1s
commit:
24d0c1d6e6 ("sched/fair: Do not migrate due to a sync wakeup on exit")
2c83362734 ("sched/fair: Consider SD_NUMA when selecting the most idle group to schedule on")
24d0c1d6e65f635b 2c83362734dad8e48ccc0710b5c
---------------- ---------------------------
%stddev %change %stddev
\ | \
3463075 ± 10% +76.2% 6101037 ± 32% stress-ng.fifo.ops
3460753 ± 10% +76.2% 6096158 ± 32% stress-ng.fifo.ops_per_sec
22252433 ± 4% -41.9% 12934864 ± 38% cpuidle.C1.time
1240735 ± 5% -41.6% 724182 ± 38% cpuidle.C1.usage
9537 ± 12% +72.0% 16402 ± 19% sched_debug.cpu.nr_switches.max
1567 ± 5% +28.0% 2006 ± 11% sched_debug.cpu.nr_switches.stddev
1239038 ± 5% -41.7% 722719 ± 38% turbostat.C1
3.32 ± 5% -1.4 1.89 ± 41% turbostat.C1%
696934 ± 3% +7.9% 751814 ± 6% turbostat.IRQ
2473 ± 9% -25.8% 1834 ± 10% slabinfo.eventpoll_epi.active_objs
2473 ± 9% -25.8% 1834 ± 10% slabinfo.eventpoll_epi.num_objs
4267 ± 9% -25.8% 3165 ± 10% slabinfo.eventpoll_pwq.active_objs
4267 ± 9% -25.8% 3165 ± 10% slabinfo.eventpoll_pwq.num_objs
316.70 ± 42% +286.3% 1223 ± 88% interrupts.CPU1.RES:Rescheduling_interrupts
214.30 ± 35% +225.2% 697.00 ± 67% interrupts.CPU14.RES:Rescheduling_interrupts
249.60 ± 36% +241.5% 852.30 ± 82% interrupts.CPU15.RES:Rescheduling_interrupts
280.70 ± 45% +166.1% 746.90 ± 68% interrupts.CPU17.RES:Rescheduling_interrupts
290.10 ± 31% +178.8% 808.70 ± 88% interrupts.CPU2.RES:Rescheduling_interrupts
221.70 ± 27% +239.7% 753.10 ± 63% interrupts.CPU20.RES:Rescheduling_interrupts
256.40 ± 28% +174.5% 703.90 ± 97% interrupts.CPU49.RES:Rescheduling_interrupts
22379 ± 11% +107.5% 46443 ± 20% interrupts.RES:Rescheduling_interrupts
***************************************************************************************************
lkp-bdw-ep6: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
=========================================================================================
class/compiler/cpufreq_governor/kconfig/nr_threads/rootfs/tbox_group/testcase/testtime:
cpu/gcc-7/performance/x86_64-rhel-7.2/100%/debian-x86_64-2016-08-31.cgz/lkp-bdw-ep6/stress-ng/1s
commit:
24d0c1d6e6 ("sched/fair: Do not migrate due to a sync wakeup on exit")
2c83362734 ("sched/fair: Consider SD_NUMA when selecting the most idle group to schedule on")
24d0c1d6e65f635b 2c83362734dad8e48ccc0710b5c
---------------- ---------------------------
%stddev %change %stddev
\ | \
189975 -25.8% 140873 ± 7% stress-ng.context.ops
190032 -25.9% 140895 ± 7% stress-ng.context.ops_per_sec
180580 -15.3% 152874 ± 9% stress-ng.hsearch.ops
180604 -15.4% 152852 ± 9% stress-ng.hsearch.ops_per_sec
47965 +6.3% 50971 stress-ng.time.involuntary_context_switches
4076749 +6.0% 4319630 stress-ng.time.minor_page_faults
6259 ± 3% -8.8% 5706 ± 2% stress-ng.time.percent_of_cpu_this_job_got
1601 -8.3% 1468 stress-ng.time.user_time
836.00 -17.3% 691.50 ± 3% stress-ng.tsearch.ops
806.28 -17.1% 668.40 ± 3% stress-ng.tsearch.ops_per_sec
103796 ± 48% -49.0% 52979 ± 9% meminfo.AnonHugePages
54.87 ± 3% -6.2 48.67 ± 6% mpstat.cpu.usr%
64134 ± 45% -62.1% 24298 ± 56% numa-meminfo.node0.AnonHugePages
3.66 ±105% -3.7 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
3.66 ±105% -3.7 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
338.42 ± 4% -10.1% 304.30 sched_debug.cfs_rq:/.util_avg.avg
10253 ± 21% +66.7% 17088 ± 6% sched_debug.cpu.nr_switches.max
1602 ± 7% +25.6% 2013 ± 5% sched_debug.cpu.nr_switches.stddev
2071 ± 15% -18.7% 1683 ± 8% slabinfo.eventpoll_epi.num_objs
3605 ± 15% -19.2% 2912 ± 8% slabinfo.eventpoll_pwq.num_objs
490.00 ± 10% -28.0% 353.00 ± 3% slabinfo.file_lock_cache.num_objs
11628 ± 3% -8.3% 10665 ± 2% slabinfo.kmalloc-96.active_objs
33103068 ± 5% -14.1% 28438786 ± 7% cpuidle.C3.time
141695 ± 5% -8.0% 130380 ± 6% cpuidle.C3.usage
6.907e+08 ± 10% +29.0% 8.912e+08 ± 11% cpuidle.C6.time
642637 ± 11% +20.4% 773937 ± 8% cpuidle.C6.usage
3162428 ± 53% +81.0% 5724968 ± 26% cpuidle.POLL.time
22657 ± 2% -9.2% 20570 softirqs.CPU46.TIMER
22637 ± 3% -7.4% 20972 ± 4% softirqs.CPU5.TIMER
21862 ± 3% -7.9% 20139 ± 2% softirqs.CPU52.TIMER
22165 ± 4% -9.8% 19993 softirqs.CPU54.TIMER
21851 ± 4% -8.6% 19977 ± 2% softirqs.CPU59.TIMER
9771 +2.0% 9966 proc-vmstat.nr_mapped
2767 ± 2% -5.0% 2629 ± 2% proc-vmstat.nr_page_table_pages
1602 ± 13% +188.0% 4614 ± 27% proc-vmstat.numa_hint_faults
379391 ± 2% -38.9% 231826 ± 16% proc-vmstat.numa_pte_updates
4840033 +4.8% 5074161 proc-vmstat.pgfault
2502 ± 80% -81.7% 457.50 ± 7% proc-vmstat.thp_fault_alloc
2011 ± 2% -8.6% 1839 ± 3% turbostat.Avg_MHz
1.12 ± 7% -0.2 0.95 ± 11% turbostat.C3%
644225 ± 10% +20.2% 774089 ± 8% turbostat.C6
23.43 ± 9% +6.3 29.72 ± 7% turbostat.C6%
12.89 ± 10% +17.2% 15.10 ± 7% turbostat.CPU%c1
14.33 ± 6% +28.9% 18.47 ± 11% turbostat.CPU%c6
220.42 -4.7% 210.07 ± 2% turbostat.PkgWatt
1.206e+11 ± 7% -22.6% 9.329e+10 ± 9% perf-stat.branch-instructions
4.75 ± 5% +0.5 5.25 perf-stat.branch-miss-rate%
5.708e+09 -14.0% 4.908e+09 ± 10% perf-stat.branch-misses
1.676e+09 ± 19% -24.5% 1.266e+09 ± 4% perf-stat.cache-references
1.017e+12 ± 3% -16.3% 8.519e+11 ± 3% perf-stat.cpu-cycles
0.02 ± 3% +0.0 0.03 ± 5% perf-stat.dTLB-load-miss-rate%
1.196e+11 ± 5% -16.7% 9.964e+10 ± 5% perf-stat.dTLB-loads
0.01 ± 4% +0.0 0.01 ± 4% perf-stat.dTLB-store-miss-rate%
5273411 ± 5% +9.0% 5746770 ± 3% perf-stat.dTLB-store-misses
4.876e+10 ± 8% -21.0% 3.853e+10 ± 4% perf-stat.dTLB-stores
42.05 ± 2% -4.0 38.06 perf-stat.iTLB-load-miss-rate%
2.264e+08 ± 3% -31.9% 1.542e+08 ± 7% perf-stat.iTLB-load-misses
3.119e+08 -19.6% 2.508e+08 ± 6% perf-stat.iTLB-loads
6.53e+11 ± 6% -19.6% 5.253e+11 ± 5% perf-stat.instructions
13642943 ± 9% -14.0% 11738663 ± 4% perf-stat.node-load-misses
3805381 ± 34% -32.1% 2584140 ± 8% perf-stat.node-stores
30946 ± 6% -15.8% 26047 ± 7% interrupts.CPU0.LOC:Local_timer_interrupts
1326 ± 8% +22.0% 1618 ± 12% interrupts.CPU1.RES:Rescheduling_interrupts
271.50 ± 4% -9.8% 245.00 ± 5% interrupts.CPU12.RES:Rescheduling_interrupts
31204 ± 4% -13.8% 26901 ± 3% interrupts.CPU13.LOC:Local_timer_interrupts
407.50 ± 25% -41.7% 237.75 ± 17% interrupts.CPU13.RES:Rescheduling_interrupts
37.75 ± 61% +341.7% 166.75 ± 68% interrupts.CPU14.34:IR-PCI-MSI.1572865-edge.eth0-TxRx-0
527.00 ± 24% +45.3% 765.75 ± 13% interrupts.CPU2.RES:Rescheduling_interrupts
359.50 ± 14% +67.1% 600.75 ± 18% interrupts.CPU22.RES:Rescheduling_interrupts
350.50 ± 25% -42.4% 201.75 ± 3% interrupts.CPU27.RES:Rescheduling_interrupts
318.25 ± 23% +90.7% 606.75 ± 25% interrupts.CPU3.RES:Rescheduling_interrupts
6321 ± 7% +36.9% 8655 ± 4% interrupts.CPU30.CAL:Function_call_interrupts
3534 ± 13% +74.1% 6154 ± 6% interrupts.CPU30.TLB:TLB_shootdowns
287.75 ± 11% -28.7% 205.25 ± 6% interrupts.CPU31.RES:Rescheduling_interrupts
326.75 ± 13% -26.9% 239.00 ± 16% interrupts.CPU32.RES:Rescheduling_interrupts
265.25 ± 7% -22.9% 204.50 ± 14% interrupts.CPU35.RES:Rescheduling_interrupts
6704 ± 13% +21.5% 8146 ± 7% interrupts.CPU36.CAL:Function_call_interrupts
3893 ± 20% +46.3% 5696 ± 8% interrupts.CPU36.TLB:TLB_shootdowns
6798 ± 18% +25.9% 8555 ± 5% interrupts.CPU37.CAL:Function_call_interrupts
273.00 ± 19% -30.1% 190.75 ± 11% interrupts.CPU37.RES:Rescheduling_interrupts
4075 ± 29% +48.7% 6058 ± 7% interrupts.CPU37.TLB:TLB_shootdowns
303.75 ± 12% -38.5% 186.75 ± 22% interrupts.CPU41.RES:Rescheduling_interrupts
6346 ± 9% +46.8% 9315 ± 12% interrupts.CPU42.CAL:Function_call_interrupts
3690 ± 15% +82.6% 6736 ± 20% interrupts.CPU42.TLB:TLB_shootdowns
255.75 ± 10% -37.8% 159.00 ± 12% interrupts.CPU43.RES:Rescheduling_interrupts
7579 ± 9% -25.8% 5620 ± 33% interrupts.CPU50.CAL:Function_call_interrupts
398.75 ± 47% -50.3% 198.00 ± 22% interrupts.CPU51.RES:Rescheduling_interrupts
31904 ± 5% -13.3% 27660 ± 9% interrupts.CPU52.LOC:Local_timer_interrupts
7487 ± 11% -19.2% 6048 ± 9% interrupts.CPU54.CAL:Function_call_interrupts
4747 ± 18% -26.8% 3474 ± 16% interrupts.CPU54.TLB:TLB_shootdowns
7486 ± 17% -30.7% 5187 ± 11% interrupts.CPU57.CAL:Function_call_interrupts
270.75 ± 11% -16.9% 225.00 ± 11% interrupts.CPU57.RES:Rescheduling_interrupts
4782 ± 28% -46.9% 2537 ± 31% interrupts.CPU57.TLB:TLB_shootdowns
238.25 ± 7% +12.7% 268.50 ± 5% interrupts.CPU63.RES:Rescheduling_interrupts
286.75 ± 10% -29.6% 202.00 ± 10% interrupts.CPU64.RES:Rescheduling_interrupts
290.25 ± 14% -41.3% 170.25 ± 29% interrupts.CPU65.RES:Rescheduling_interrupts
6570 ± 31% +30.0% 8538 ± 4% interrupts.CPU66.CAL:Function_call_interrupts
3829 ± 58% +60.1% 6130 ± 7% interrupts.CPU66.TLB:TLB_shootdowns
267.75 ± 18% -34.9% 174.25 ± 22% interrupts.CPU67.RES:Rescheduling_interrupts
257.00 ± 12% -30.4% 178.75 ± 20% interrupts.CPU68.RES:Rescheduling_interrupts
4377 ± 28% +47.4% 6450 ± 17% interrupts.CPU68.TLB:TLB_shootdowns
244.50 ± 17% -22.4% 189.75 interrupts.CPU69.RES:Rescheduling_interrupts
4263 ± 30% +41.9% 6050 ± 7% interrupts.CPU69.TLB:TLB_shootdowns
262.50 ± 17% -28.9% 186.75 ± 9% interrupts.CPU71.RES:Rescheduling_interrupts
6230 ± 10% +43.2% 8922 ± 12% interrupts.CPU72.CAL:Function_call_interrupts
30763 ± 6% -11.1% 27340 ± 7% interrupts.CPU72.LOC:Local_timer_interrupts
3562 ± 18% +84.8% 6584 ± 19% interrupts.CPU72.TLB:TLB_shootdowns
259.25 ± 2% -19.7% 208.25 ± 10% interrupts.CPU74.RES:Rescheduling_interrupts
237.75 ± 21% -24.5% 179.50 ± 11% interrupts.CPU76.RES:Rescheduling_interrupts
297.25 ± 29% -40.2% 177.75 ± 9% interrupts.CPU79.RES:Rescheduling_interrupts
271.75 ± 14% -33.0% 182.00 ± 10% interrupts.CPU80.RES:Rescheduling_interrupts
221.25 ± 18% -23.1% 170.25 ± 13% interrupts.CPU81.RES:Rescheduling_interrupts
323.75 ± 25% -34.7% 211.50 ± 9% interrupts.CPU86.RES:Rescheduling_interrupts
308.00 ± 17% -26.2% 227.25 ± 31% interrupts.CPU87.RES:Rescheduling_interrupts
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[mm/gup] cdaa813278: kernel_selftests.memfd.run_fuse_test.sh.fail
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: cdaa813278ddc616ee201eacda77f63996b5dd2d ("[PATCH 4/6] mm/gup: track gup-pinned pages")
url: https://github.com/0day-ci/linux/commits/john-hubbard-gmail-com/RFC-v2-mm...
in testcase: kernel_selftests
with following parameters:
group: kselftests-02
test-description: The kernel contains a set of "self tests" under the tools/testing/selftests/ directory. These are intended to be small unit tests to exercise individual code paths in the kernel.
test-url: https://www.kernel.org/doc/Documentation/kselftest.txt
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 4G
caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
KERNEL SELFTESTS: linux_headers_dir is /usr/src/linux-headers-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d
2019-02-19 00:14:39 ln -sf /usr/bin/clang-7 /usr/bin/clang
2019-02-19 00:14:39 ln -sf /usr/bin/llc-7 /usr/bin/llc
media_tests test: not in Makefile
2019-02-19 00:14:39 make TARGETS=media_tests
make[1]: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/media_tests'
gcc -I../ -I../../../../usr/include/ media_device_test.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/media_tests/media_device_test
gcc -I../ -I../../../../usr/include/ media_device_open.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/media_tests/media_device_open
gcc -I../ -I../../../../usr/include/ video_device_test.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/media_tests/video_device_test
make[1]: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/media_tests'
ignored_by_lkp media_tests test
2019-02-19 00:14:40 make run_tests -C membarrier
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/membarrier'
gcc -g -I../../../../usr/include/ membarrier_test.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/membarrier/membarrier_test
TAP version 13
selftests: membarrier: membarrier_test
========================================
ok 1 sys_membarrier available
ok 2 sys membarrier invalid command test: command = -1, flags = 0, errno = 22. Failed as expected
ok 3 sys membarrier MEMBARRIER_CMD_QUERY invalid flags test: flags = 1, errno = 22. Failed as expected
ok 4 sys membarrier MEMBARRIER_CMD_GLOBAL test: flags = 0
ok 5 sys membarrier MEMBARRIER_CMD_PRIVATE_EXPEDITED not registered failure test: flags = 0, errno = 1
ok 6 sys membarrier MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED test: flags = 0
ok 7 sys membarrier MEMBARRIER_CMD_PRIVATE_EXPEDITED test: flags = 0
ok 8 sys membarrier MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE not registered failure test: flags = 0, errno = 1
ok 9 sys membarrier MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_SYNC_CORE test: flags = 0
ok 10 sys membarrier MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE test: flags = 0
ok 11 sys membarrier MEMBARRIER_CMD_GLOBAL_EXPEDITED test: flags = 0
ok 12 sys membarrier MEMBARRIER_CMD_REGISTER_GLOBAL_EXPEDITED test: flags = 0
ok 13 sys membarrier MEMBARRIER_CMD_GLOBAL_EXPEDITED test: flags = 0
Pass 13 Fail 0 Xfail 0 Xpass 0 Skip 0 Error 0
1..13
ok 1..1 selftests: membarrier: membarrier_test [PASS]
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/membarrier'
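For context, the "not registered failure" lines above (tests 5 and 8) reflect the register-then-issue rule for expedited barriers. A minimal sketch of that pattern, assuming the raw syscall since glibc provides no wrapper (illustrative only, not the selftest code):

#include <linux/membarrier.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <stdio.h>

static int membarrier(int cmd, int flags)
{
	return syscall(__NR_membarrier, cmd, flags);
}

int main(void)
{
	int supported = membarrier(MEMBARRIER_CMD_QUERY, 0);

	if (supported < 0 ||
	    !(supported & MEMBARRIER_CMD_PRIVATE_EXPEDITED))
		return 1;
	/* Issuing before registering fails with EPERM (the "errno = 1"
	 * seen in test 5 above), so register first. */
	membarrier(MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED, 0);
	membarrier(MEMBARRIER_CMD_PRIVATE_EXPEDITED, 0);
	printf("private expedited membarrier issued\n");
	return 0;
}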
2019-02-19 00:14:40 make run_tests -C memfd
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/memfd'
gcc -D_FILE_OFFSET_BITS=64 -I../../../../include/uapi/ -I../../../../include/ -I../../../../usr/include/ -c -o common.o common.c
gcc -D_FILE_OFFSET_BITS=64 -I../../../../include/uapi/ -I../../../../include/ -I../../../../usr/include/ memfd_test.c common.o -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/memfd/memfd_test
memfd_test.c: In function ‘mfd_assert_get_seals’:
memfd_test.c:74:6: warning: implicit declaration of function ‘fcntl’ [-Wimplicit-function-declaration]
r = fcntl(fd, F_GET_SEALS);
^~~~~
memfd_test.c: In function ‘mfd_assert_open’:
memfd_test.c:197:6: warning: implicit declaration of function ‘open’ [-Wimplicit-function-declaration]
r = open(buf, flags, mode);
^~~~
memfd_test.c: In function ‘mfd_assert_write’:
memfd_test.c:328:6: warning: implicit declaration of function ‘fallocate’ [-Wimplicit-function-declaration]
r = fallocate(fd,
^~~~~~~~~
gcc -D_FILE_OFFSET_BITS=64 -I../../../../include/uapi/ -I../../../../include/ -I../../../../usr/include/ fuse_mnt.c -lfuse -pthread -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/memfd/fuse_mnt
gcc -D_FILE_OFFSET_BITS=64 -I../../../../include/uapi/ -I../../../../include/ -I../../../../usr/include/ fuse_test.c common.o -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/memfd/fuse_test
fuse_test.c: In function ‘mfd_assert_get_seals’:
fuse_test.c:67:6: warning: implicit declaration of function ‘fcntl’ [-Wimplicit-function-declaration]
r = fcntl(fd, F_GET_SEALS);
^~~~~
fuse_test.c: In function ‘main’:
fuse_test.c:261:7: warning: implicit declaration of function ‘open’ [-Wimplicit-function-declaration]
fd = open(argv[1], O_RDONLY | O_CLOEXEC);
^~~~
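The implicit-declaration warnings above do not affect the test verdicts, but they point at missing headers in memfd_test.c and fuse_test.c. A likely fix (an assumption on my part, not a patch from this thread) is simply:

/* At the top of memfd_test.c / fuse_test.c: */
#define _GNU_SOURCE		/* fallocate() is a GNU extension */
#include <fcntl.h>		/* open(), fcntl(), fallocate() */
#include <unistd.h>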
TAP version 13
selftests: memfd: memfd_test
========================================
memfd: CREATE
memfd: BASIC
memfd: SEAL-WRITE
memfd: SEAL-SHRINK
memfd: SEAL-GROW
memfd: SEAL-RESIZE
memfd: SHARE-DUP
memfd: SHARE-MMAP
memfd: SHARE-OPEN
memfd: SHARE-FORK
memfd: SHARE-DUP (shared file-table)
memfd: SHARE-MMAP (shared file-table)
memfd: SHARE-OPEN (shared file-table)
memfd: SHARE-FORK (shared file-table)
memfd: DONE
ok 1..1 selftests: memfd: memfd_test [PASS]
selftests: memfd: run_fuse_test.sh
========================================
Aborted
not ok 1..2 selftests: memfd: run_fuse_test.sh [FAIL]
selftests: memfd: run_hugetlbfs_test.sh
========================================
memfd-hugetlb: CREATE
memfd-hugetlb: BASIC
memfd-hugetlb: SEAL-WRITE
memfd-hugetlb: SEAL-SHRINK
memfd-hugetlb: SEAL-GROW
memfd-hugetlb: SEAL-RESIZE
memfd-hugetlb: SHARE-DUP
memfd-hugetlb: SHARE-MMAP
memfd-hugetlb: SHARE-OPEN
memfd-hugetlb: SHARE-FORK
memfd-hugetlb: SHARE-DUP (shared file-table)
memfd-hugetlb: SHARE-MMAP (shared file-table)
memfd-hugetlb: SHARE-OPEN (shared file-table)
memfd-hugetlb: SHARE-FORK (shared file-table)
memfd: DONE
opening: ./mnt/memfd
fuse: DONE
ok 1..3 selftests: memfd: run_hugetlbfs_test.sh [PASS]
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/memfd'
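For context, the CREATE/BASIC/SEAL-* stages logged above walk the memfd sealing API; note that memfd_test itself passes and only the fuse interaction aborts. The sealing flow, reduced to a minimal sketch (simplified, not the selftest code):

#define _GNU_SOURCE
#include <sys/mman.h>		/* memfd_create(), MFD_ALLOW_SEALING */
#include <fcntl.h>		/* F_ADD_SEALS, F_GET_SEALS, F_SEAL_* */
#include <unistd.h>
#include <stdio.h>

int main(void)
{
	int fd = memfd_create("demo", MFD_CLOEXEC | MFD_ALLOW_SEALING);

	if (fd < 0)
		return 1;
	if (ftruncate(fd, 4096) < 0)
		return 1;
	/* Once sealed, write() and ftruncate() on fd must fail; the
	 * SEAL-WRITE/SEAL-SHRINK/SEAL-GROW stages verify exactly that. */
	fcntl(fd, F_ADD_SEALS, F_SEAL_SHRINK | F_SEAL_GROW | F_SEAL_WRITE);
	printf("seals: 0x%x\n", fcntl(fd, F_GET_SEALS));
	return 0;
}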
2019-02-19 00:14:44 make run_tests -C memory-hotplug
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/memory-hotplug'
TAP version 13
selftests: memory-hotplug: mem-on-off-test.sh
========================================
Test scope: 2% hotplug memory
online all hot-pluggable memory in offline state:
SKIPPED - no hot-pluggable memory in offline state
offline 2% hot-pluggable memory in online state
trying to offline 1 out of 14 memory block(s):
online->offline memory1
online all hot-pluggable memory in offline state:
offline->online memory1
Test with memory notifier error injection
ok 1..1 selftests: memory-hotplug: mem-on-off-test.sh [PASS]
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/memory-hotplug'
2019-02-19 00:14:45 make run_tests -C mount
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/mount'
gcc -Wall -O2 unprivileged-remount-test.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/mount/unprivileged-remount-test
TAP version 13
selftests: mount: run_tests.sh
========================================
ok 1..1 selftests: mount: run_tests.sh [PASS]
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/mount'
2019-02-19 00:14:46 make run_tests -C mqueue
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/mqueue'
gcc -O2 mq_open_tests.c -lrt -lpthread -lpopt -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/mqueue/mq_open_tests
gcc -O2 mq_perf_tests.c -lrt -lpthread -lpopt -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/mqueue/mq_perf_tests
TAP version 13
selftests: mqueue: mq_open_tests
========================================
Using Default queue path - /test1
Initial system state:
Using queue path: /test1
RLIMIT_MSGQUEUE(soft): 819200
RLIMIT_MSGQUEUE(hard): 819200
Maximum Message Size: 8192
Maximum Queue Size: 10
Default Message Size: 8192
Default Queue Size: 10
Adjusted system state for testing:
RLIMIT_MSGQUEUE(soft): 819200
RLIMIT_MSGQUEUE(hard): 819200
Maximum Message Size: 8192
Maximum Queue Size: 10
Default Message Size: 8192
Default Queue Size: 10
Test series 1, behavior when no attr struct passed to mq_open:
Kernel supports setting defaults separately from maximums: PASS
Given sane values, mq_open without an attr struct succeeds: PASS
Kernel properly honors default setting knobs: PASS
Kernel properly limits default values to lesser of default/max: PASS
Kernel properly fails to create queue when defaults would
exceed rlimit: PASS
Test series 2, behavior when attr struct is passed to mq_open:
Queue open in excess of rlimit max when euid = 0 failed: PASS
Queue open with mq_maxmsg > limit when euid = 0 succeeded: PASS
Queue open with mq_msgsize > limit when euid = 0 succeeded: PASS
Queue open with total size > 2GB when euid = 0 failed: PASS
Queue open in excess of rlimit max when euid = 99 failed: PASS
Queue open with mq_maxmsg > limit when euid = 99 failed: PASS
Queue open with mq_msgsize > limit when euid = 99 failed: PASS
Queue open with total size > 2GB when euid = 99 failed: PASS
ok 1..1 selftests: mqueue: mq_open_tests [PASS]
selftests: mqueue: mq_perf_tests
========================================
Initial system state:
Using queue path: /mq_perf_tests
RLIMIT_MSGQUEUE(soft): 819200
RLIMIT_MSGQUEUE(hard): 819200
Maximum Message Size: 8192
Maximum Queue Size: 10
Nice value: 0
Adjusted system state for testing:
RLIMIT_MSGQUEUE(soft): (unlimited)
RLIMIT_MSGQUEUE(hard): (unlimited)
Maximum Message Size: 16777216
Maximum Queue Size: 65530
Nice value: -20
Continuous mode: (disabled)
CPUs to pin: 1
Queue /mq_perf_tests created:
mq_flags: O_NONBLOCK
mq_maxmsg: 65530
mq_msgsize: 16
mq_curmsgs: 0
Started mqueue performance test thread on CPU 1
Max priorities: 32768
Clock resolution: 1 nsec
Test #1: Time send/recv message, queue empty
(10000000 iterations)
Send msg: 20.769811987s total time
2076 nsec/msg
Recv msg: 21.57018776s total time
2105 nsec/msg
Test #2a: Time send/recv message, queue full, constant prio:
(100000 iterations)
Filling queue...done. 0.69102850s
Testing...done.
Send msg: 0.207365476s total time
2073 nsec/msg
Recv msg: 0.208501734s total time
2085 nsec/msg
Draining queue...done. 0.68079024s
Test #2b: Time send/recv message, queue full, increasing prio:
(100000 iterations)
Filling queue...done. 0.86926345s
Testing...done.
Send msg: 0.234621580s total time
2346 nsec/msg
Recv msg: 0.223862917s total time
2238 nsec/msg
Draining queue...done. 0.72429194s
Test #2c: Time send/recv message, queue full, decreasing prio:
(100000 iterations)
Filling queue...done. 0.85459740s
Testing...done.
Send msg: 0.238305053s total time
2383 nsec/msg
Recv msg: 0.223585734s total time
2235 nsec/msg
Draining queue...done. 0.78872582s
Test #2d: Time send/recv message, queue full, random prio:
(100000 iterations)
Filling queue...done. 0.91103960s
Testing...done.
Send msg: 0.238946474s total time
2389 nsec/msg
Recv msg: 0.217763880s total time
2177 nsec/msg
Draining queue...done. 0.77818648s
ok 1..2 selftests: mqueue: mq_perf_tests [PASS]
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/mqueue'
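The nsec/msg figures above come from tight send/receive loops around a POSIX message queue. A minimal sketch of that measurement pattern (queue name and sizes are placeholders; build with -lrt):

#include <mqueue.h>
#include <fcntl.h>
#include <time.h>
#include <stdio.h>

int main(void)
{
	struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 16 };
	mqd_t q = mq_open("/mq_demo", O_RDWR | O_CREAT, 0600, &attr);
	struct timespec t0, t1;
	char msg[16] = "ping";
	long i, iters = 100000;

	if (q == (mqd_t)-1)
		return 1;
	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (i = 0; i < iters; i++) {
		mq_send(q, msg, sizeof(msg), 0);	/* queue empty, */
		mq_receive(q, msg, sizeof(msg), NULL);	/* so no block  */
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);
	printf("%ld nsec/msg\n",
	       ((t1.tv_sec - t0.tv_sec) * 1000000000L +
		(t1.tv_nsec - t0.tv_nsec)) / iters);
	mq_close(q);
	mq_unlink("/mq_demo");
	return 0;
}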
2019-02-19 00:15:43 make run_tests -C net
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/net'
make ARCH=x86 -C ../../../.. headers_install
make[1]: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d'
HOSTCC scripts/basic/fixdep
WRAP arch/x86/include/generated/uapi/asm/bpf_perf_event.h
WRAP arch/x86/include/generated/uapi/asm/poll.h
SYSTBL arch/x86/include/generated/asm/syscalls_32.h
SYSHDR arch/x86/include/generated/uapi/asm/unistd_32.h
SYSHDR arch/x86/include/generated/uapi/asm/unistd_64.h
SYSHDR arch/x86/include/generated/uapi/asm/unistd_x32.h
HOSTCC arch/x86/tools/relocs_32.o
HOSTCC arch/x86/tools/relocs_64.o
HOSTCC arch/x86/tools/relocs_common.o
HOSTLD arch/x86/tools/relocs
UPD include/generated/uapi/linux/version.h
HOSTCC scripts/unifdef
INSTALL usr/include/asm-generic/ (37 files)
INSTALL usr/include/drm/ (26 files)
INSTALL usr/include/linux/ (503 files)
INSTALL usr/include/linux/android/ (2 files)
INSTALL usr/include/linux/byteorder/ (2 files)
INSTALL usr/include/linux/caif/ (2 files)
INSTALL usr/include/linux/can/ (6 files)
INSTALL usr/include/linux/cifs/ (1 file)
INSTALL usr/include/linux/dvb/ (8 files)
INSTALL usr/include/linux/genwqe/ (1 file)
INSTALL usr/include/linux/hdlc/ (1 file)
INSTALL usr/include/linux/hsi/ (2 files)
INSTALL usr/include/linux/iio/ (2 files)
INSTALL usr/include/linux/isdn/ (1 file)
INSTALL usr/include/linux/mmc/ (1 file)
INSTALL usr/include/linux/netfilter/ (88 files)
INSTALL usr/include/linux/netfilter/ipset/ (4 files)
INSTALL usr/include/linux/netfilter_arp/ (2 files)
INSTALL usr/include/linux/netfilter_bridge/ (17 files)
INSTALL usr/include/linux/netfilter_ipv4/ (9 files)
INSTALL usr/include/linux/netfilter_ipv6/ (13 files)
INSTALL usr/include/linux/nfsd/ (5 files)
INSTALL usr/include/linux/raid/ (2 files)
INSTALL usr/include/linux/sched/ (1 file)
INSTALL usr/include/linux/spi/ (1 file)
INSTALL usr/include/linux/sunrpc/ (1 file)
INSTALL usr/include/linux/tc_act/ (15 files)
INSTALL usr/include/linux/tc_ematch/ (5 files)
INSTALL usr/include/linux/usb/ (13 files)
INSTALL usr/include/linux/wimax/ (1 file)
INSTALL usr/include/misc/ (2 files)
INSTALL usr/include/mtd/ (5 files)
INSTALL usr/include/rdma/ (25 files)
INSTALL usr/include/rdma/hfi/ (2 files)
INSTALL usr/include/scsi/ (5 files)
INSTALL usr/include/scsi/fc/ (4 files)
INSTALL usr/include/sound/ (16 files)
INSTALL usr/include/video/ (3 files)
INSTALL usr/include/xen/ (4 files)
INSTALL usr/include/asm/ (62 files)
make[1]: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d'
gcc -Wall -Wl,--no-as-needed -O2 -g -I../../../../usr/include/ reuseport_bpf.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/net/reuseport_bpf
gcc -Wall -Wl,--no-as-needed -O2 -g -I../../../../usr/include/ reuseport_bpf_cpu.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/net/reuseport_bpf_cpu
gcc -Wall -Wl,--no-as-needed -O2 -g -I../../../../usr/include/ -lnuma reuseport_bpf_numa.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/net/reuseport_bpf_numa
gcc -Wall -Wl,--no-as-needed -O2 -g -I../../../../usr/include/ reuseport_dualstack.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/net/reuseport_dualstack
gcc -Wall -Wl,--no-as-needed -O2 -g -I../../../../usr/include/ reuseaddr_conflict.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/net/reuseaddr_conflict
gcc -Wall -Wl,--no-as-needed -O2 -g -I../../../../usr/include/ tls.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/net/tls
gcc -Wall -Wl,--no-as-needed -O2 -g -I../../../../usr/include/ socket.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/net/socket
gcc -Wall -Wl,--no-as-needed -O2 -g -I../../../../usr/include/ psock_fanout.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/net/psock_fanout
gcc -Wall -Wl,--no-as-needed -O2 -g -I../../../../usr/include/ psock_tpacket.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/net/psock_tpacket
gcc -Wall -Wl,--no-as-needed -O2 -g -I../../../../usr/include/ msg_zerocopy.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/net/msg_zerocopy
gcc -Wall -Wl,--no-as-needed -O2 -g -I../../../../usr/include/ reuseport_addr_any.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/net/reuseport_addr_any
gcc -Wall -Wl,--no-as-needed -O2 -g -I../../../../usr/include/ -lpthread tcp_mmap.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/net/tcp_mmap
gcc -Wall -Wl,--no-as-needed -O2 -g -I../../../../usr/include/ -lpthread tcp_inq.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/net/tcp_inq
tcp_inq.c: In function ‘main’:
tcp_inq.c:178:4: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]
inq = *((int *) CMSG_DATA(cm));
^~~
gcc -Wall -Wl,--no-as-needed -O2 -g -I../../../../usr/include/ psock_snd.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/net/psock_snd
gcc -Wall -Wl,--no-as-needed -O2 -g -I../../../../usr/include/ txring_overwrite.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/net/txring_overwrite
gcc -Wall -Wl,--no-as-needed -O2 -g -I../../../../usr/include/ udpgso.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/net/udpgso
udpgso.c: In function ‘send_one’:
udpgso.c:484:3: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]
*((uint16_t *) CMSG_DATA(cm)) = gso_len;
^
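Both -Wstrict-aliasing warnings above flag the same pattern: casting CMSG_DATA() to a pointer of another type and dereferencing it. The conventional remedy (a suggestion, not a patch from this report) is to memcpy() through a correctly typed local, which the compiler optimizes to the same load:

#include <string.h>
#include <sys/socket.h>

/* Hypothetical helper: read an int-sized cmsg payload without the
 * type-punned dereference gcc complains about in tcp_inq.c:178. */
static int cmsg_read_int(struct cmsghdr *cm)
{
	int v;

	memcpy(&v, CMSG_DATA(cm), sizeof(v));	/* alias-safe */
	return v;
}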
gcc -Wall -Wl,--no-as-needed -O2 -g -I../../../../usr/include/ udpgso_bench_tx.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/net/udpgso_bench_tx
gcc -Wall -Wl,--no-as-needed -O2 -g -I../../../../usr/include/ udpgso_bench_rx.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/net/udpgso_bench_rx
gcc -Wall -Wl,--no-as-needed -O2 -g -I../../../../usr/include/ ip_defrag.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/net/ip_defrag
TAP version 13
selftests: net: reuseport_bpf
========================================
---- IPv4 UDP ----
Testing EBPF mod 10...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 5: 5
Socket 6: 6
Socket 7: 7
Socket 8: 8
Socket 9: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 5: 15
Socket 6: 16
Socket 7: 17
Socket 8: 18
Socket 9: 19
Reprograming, testing mod 5...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 0: 5
Socket 1: 6
Socket 2: 7
Socket 3: 8
Socket 4: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 0: 15
Socket 1: 16
Socket 2: 17
Socket 3: 18
Socket 4: 19
Testing EBPF mod 20...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 5: 5
Socket 6: 6
Socket 7: 7
Socket 8: 8
Socket 9: 9
Socket 10: 10
Socket 11: 11
Socket 12: 12
Socket 13: 13
Socket 14: 14
Socket 15: 15
Socket 16: 16
Socket 17: 17
Socket 18: 18
Socket 19: 19
Socket 0: 20
Socket 1: 21
Socket 2: 22
Socket 3: 23
Socket 4: 24
Socket 5: 25
Socket 6: 26
Socket 7: 27
Socket 8: 28
Socket 9: 29
Socket 10: 30
Socket 11: 31
Socket 12: 32
Socket 13: 33
Socket 14: 34
Socket 15: 35
Socket 16: 36
Socket 17: 37
Socket 18: 38
Socket 19: 39
Reprograming, testing mod 10...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 5: 5
Socket 6: 6
Socket 7: 7
Socket 8: 8
Socket 9: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 5: 15
Socket 6: 16
Socket 7: 17
Socket 8: 18
Socket 9: 19
Socket 0: 20
Socket 1: 21
Socket 2: 22
Socket 3: 23
Socket 4: 24
Socket 5: 25
Socket 6: 26
Socket 7: 27
Socket 8: 28
Socket 9: 29
Socket 0: 30
Socket 1: 31
Socket 2: 32
Socket 3: 33
Socket 4: 34
Socket 5: 35
Socket 6: 36
Socket 7: 37
Socket 8: 38
Socket 9: 39
Testing CBPF mod 10...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 5: 5
Socket 6: 6
Socket 7: 7
Socket 8: 8
Socket 9: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 5: 15
Socket 6: 16
Socket 7: 17
Socket 8: 18
Socket 9: 19
Reprograming, testing mod 5...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 0: 5
Socket 1: 6
Socket 2: 7
Socket 3: 8
Socket 4: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 0: 15
Socket 1: 16
Socket 2: 17
Socket 3: 18
Socket 4: 19
Testing CBPF mod 20...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 5: 5
Socket 6: 6
Socket 7: 7
Socket 8: 8
Socket 9: 9
Socket 10: 10
Socket 11: 11
Socket 12: 12
Socket 13: 13
Socket 14: 14
Socket 15: 15
Socket 16: 16
Socket 17: 17
Socket 18: 18
Socket 19: 19
Socket 0: 20
Socket 1: 21
Socket 2: 22
Socket 3: 23
Socket 4: 24
Socket 5: 25
Socket 6: 26
Socket 7: 27
Socket 8: 28
Socket 9: 29
Socket 10: 30
Socket 11: 31
Socket 12: 32
Socket 13: 33
Socket 14: 34
Socket 15: 35
Socket 16: 36
Socket 17: 37
Socket 18: 38
Socket 19: 39
Reprograming, testing mod 10...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 5: 5
Socket 6: 6
Socket 7: 7
Socket 8: 8
Socket 9: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 5: 15
Socket 6: 16
Socket 7: 17
Socket 8: 18
Socket 9: 19
Socket 0: 20
Socket 1: 21
Socket 2: 22
Socket 3: 23
Socket 4: 24
Socket 5: 25
Socket 6: 26
Socket 7: 27
Socket 8: 28
Socket 9: 29
Socket 0: 30
Socket 1: 31
Socket 2: 32
Socket 3: 33
Socket 4: 34
Socket 5: 35
Socket 6: 36
Socket 7: 37
Socket 8: 38
Socket 9: 39
Testing too many filters...
Testing filters on non-SO_REUSEPORT socket...
---- IPv6 UDP ----
Testing EBPF mod 10...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 5: 5
Socket 6: 6
Socket 7: 7
Socket 8: 8
Socket 9: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 5: 15
Socket 6: 16
Socket 7: 17
Socket 8: 18
Socket 9: 19
Reprograming, testing mod 5...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 0: 5
Socket 1: 6
Socket 2: 7
Socket 3: 8
Socket 4: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 0: 15
Socket 1: 16
Socket 2: 17
Socket 3: 18
Socket 4: 19
Testing EBPF mod 20...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 5: 5
Socket 6: 6
Socket 7: 7
Socket 8: 8
Socket 9: 9
Socket 10: 10
Socket 11: 11
Socket 12: 12
Socket 13: 13
Socket 14: 14
Socket 15: 15
Socket 16: 16
Socket 17: 17
Socket 18: 18
Socket 19: 19
Socket 0: 20
Socket 1: 21
Socket 2: 22
Socket 3: 23
Socket 4: 24
Socket 5: 25
Socket 6: 26
Socket 7: 27
Socket 8: 28
Socket 9: 29
Socket 10: 30
Socket 11: 31
Socket 12: 32
Socket 13: 33
Socket 14: 34
Socket 15: 35
Socket 16: 36
Socket 17: 37
Socket 18: 38
Socket 19: 39
Reprograming, testing mod 10...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 5: 5
Socket 6: 6
Socket 7: 7
Socket 8: 8
Socket 9: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 5: 15
Socket 6: 16
Socket 7: 17
Socket 8: 18
Socket 9: 19
Socket 0: 20
Socket 1: 21
Socket 2: 22
Socket 3: 23
Socket 4: 24
Socket 5: 25
Socket 6: 26
Socket 7: 27
Socket 8: 28
Socket 9: 29
Socket 0: 30
Socket 1: 31
Socket 2: 32
Socket 3: 33
Socket 4: 34
Socket 5: 35
Socket 6: 36
Socket 7: 37
Socket 8: 38
Socket 9: 39
Testing CBPF mod 10...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 5: 5
Socket 6: 6
Socket 7: 7
Socket 8: 8
Socket 9: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 5: 15
Socket 6: 16
Socket 7: 17
Socket 8: 18
Socket 9: 19
Reprograming, testing mod 5...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 0: 5
Socket 1: 6
Socket 2: 7
Socket 3: 8
Socket 4: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 0: 15
Socket 1: 16
Socket 2: 17
Socket 3: 18
Socket 4: 19
Testing CBPF mod 20...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 5: 5
Socket 6: 6
Socket 7: 7
Socket 8: 8
Socket 9: 9
Socket 10: 10
Socket 11: 11
Socket 12: 12
Socket 13: 13
Socket 14: 14
Socket 15: 15
Socket 16: 16
Socket 17: 17
Socket 18: 18
Socket 19: 19
Socket 0: 20
Socket 1: 21
Socket 2: 22
Socket 3: 23
Socket 4: 24
Socket 5: 25
Socket 6: 26
Socket 7: 27
Socket 8: 28
Socket 9: 29
Socket 10: 30
Socket 11: 31
Socket 12: 32
Socket 13: 33
Socket 14: 34
Socket 15: 35
Socket 16: 36
Socket 17: 37
Socket 18: 38
Socket 19: 39
Reprograming, testing mod 10...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 5: 5
Socket 6: 6
Socket 7: 7
Socket 8: 8
Socket 9: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 5: 15
Socket 6: 16
Socket 7: 17
Socket 8: 18
Socket 9: 19
Socket 0: 20
Socket 1: 21
Socket 2: 22
Socket 3: 23
Socket 4: 24
Socket 5: 25
Socket 6: 26
Socket 7: 27
Socket 8: 28
Socket 9: 29
Socket 0: 30
Socket 1: 31
Socket 2: 32
Socket 3: 33
Socket 4: 34
Socket 5: 35
Socket 6: 36
Socket 7: 37
Socket 8: 38
Socket 9: 39
Testing too many filters...
Testing filters on non-SO_REUSEPORT socket...
---- IPv6 UDP w/ mapped IPv4 ----
Testing EBPF mod 20...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 5: 5
Socket 6: 6
Socket 7: 7
Socket 8: 8
Socket 9: 9
Socket 10: 10
Socket 11: 11
Socket 12: 12
Socket 13: 13
Socket 14: 14
Socket 15: 15
Socket 16: 16
Socket 17: 17
Socket 18: 18
Socket 19: 19
Socket 0: 20
Socket 1: 21
Socket 2: 22
Socket 3: 23
Socket 4: 24
Socket 5: 25
Socket 6: 26
Socket 7: 27
Socket 8: 28
Socket 9: 29
Socket 10: 30
Socket 11: 31
Socket 12: 32
Socket 13: 33
Socket 14: 34
Socket 15: 35
Socket 16: 36
Socket 17: 37
Socket 18: 38
Socket 19: 39
Reprograming, testing mod 10...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 5: 5
Socket 6: 6
Socket 7: 7
Socket 8: 8
Socket 9: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 5: 15
Socket 6: 16
Socket 7: 17
Socket 8: 18
Socket 9: 19
Socket 0: 20
Socket 1: 21
Socket 2: 22
Socket 3: 23
Socket 4: 24
Socket 5: 25
Socket 6: 26
Socket 7: 27
Socket 8: 28
Socket 9: 29
Socket 0: 30
Socket 1: 31
Socket 2: 32
Socket 3: 33
Socket 4: 34
Socket 5: 35
Socket 6: 36
Socket 7: 37
Socket 8: 38
Socket 9: 39
Testing EBPF mod 10...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 5: 5
Socket 6: 6
Socket 7: 7
Socket 8: 8
Socket 9: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 5: 15
Socket 6: 16
Socket 7: 17
Socket 8: 18
Socket 9: 19
Reprograming, testing mod 5...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 0: 5
Socket 1: 6
Socket 2: 7
Socket 3: 8
Socket 4: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 0: 15
Socket 1: 16
Socket 2: 17
Socket 3: 18
Socket 4: 19
Testing CBPF mod 10...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 5: 5
Socket 6: 6
Socket 7: 7
Socket 8: 8
Socket 9: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 5: 15
Socket 6: 16
Socket 7: 17
Socket 8: 18
Socket 9: 19
Reprograming, testing mod 5...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 0: 5
Socket 1: 6
Socket 2: 7
Socket 3: 8
Socket 4: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 0: 15
Socket 1: 16
Socket 2: 17
Socket 3: 18
Socket 4: 19
Testing CBPF mod 20...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 5: 5
Socket 6: 6
Socket 7: 7
Socket 8: 8
Socket 9: 9
Socket 10: 10
Socket 11: 11
Socket 12: 12
Socket 13: 13
Socket 14: 14
Socket 15: 15
Socket 16: 16
Socket 17: 17
Socket 18: 18
Socket 19: 19
Socket 0: 20
Socket 1: 21
Socket 2: 22
Socket 3: 23
Socket 4: 24
Socket 5: 25
Socket 6: 26
Socket 7: 27
Socket 8: 28
Socket 9: 29
Socket 10: 30
Socket 11: 31
Socket 12: 32
Socket 13: 33
Socket 14: 34
Socket 15: 35
Socket 16: 36
Socket 17: 37
Socket 18: 38
Socket 19: 39
Reprograming, testing mod 10...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 5: 5
Socket 6: 6
Socket 7: 7
Socket 8: 8
Socket 9: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 5: 15
Socket 6: 16
Socket 7: 17
Socket 8: 18
Socket 9: 19
Socket 0: 20
Socket 1: 21
Socket 2: 22
Socket 3: 23
Socket 4: 24
Socket 5: 25
Socket 6: 26
Socket 7: 27
Socket 8: 28
Socket 9: 29
Socket 0: 30
Socket 1: 31
Socket 2: 32
Socket 3: 33
Socket 4: 34
Socket 5: 35
Socket 6: 36
Socket 7: 37
Socket 8: 38
Socket 9: 39
---- IPv4 TCP ----
Testing EBPF mod 10...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 5: 5
Socket 6: 6
Socket 7: 7
Socket 8: 8
Socket 9: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 5: 15
Socket 6: 16
Socket 7: 17
Socket 8: 18
Socket 9: 19
Reprograming, testing mod 5...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 0: 5
Socket 1: 6
Socket 2: 7
Socket 3: 8
Socket 4: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 0: 15
Socket 1: 16
Socket 2: 17
Socket 3: 18
Socket 4: 19
Testing CBPF mod 10...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 5: 5
Socket 6: 6
Socket 7: 7
Socket 8: 8
Socket 9: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 5: 15
Socket 6: 16
Socket 7: 17
Socket 8: 18
Socket 9: 19
Reprograming, testing mod 5...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 0: 5
Socket 1: 6
Socket 2: 7
Socket 3: 8
Socket 4: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 0: 15
Socket 1: 16
Socket 2: 17
Socket 3: 18
Socket 4: 19
Testing too many filters...
Testing filters on non-SO_REUSEPORT socket...
---- IPv6 TCP ----
Testing EBPF mod 10...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 5: 5
Socket 6: 6
Socket 7: 7
Socket 8: 8
Socket 9: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 5: 15
Socket 6: 16
Socket 7: 17
Socket 8: 18
Socket 9: 19
Reprograming, testing mod 5...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 0: 5
Socket 1: 6
Socket 2: 7
Socket 3: 8
Socket 4: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 0: 15
Socket 1: 16
Socket 2: 17
Socket 3: 18
Socket 4: 19
Testing CBPF mod 10...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 5: 5
Socket 6: 6
Socket 7: 7
Socket 8: 8
Socket 9: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 5: 15
Socket 6: 16
Socket 7: 17
Socket 8: 18
Socket 9: 19
Reprograming, testing mod 5...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 0: 5
Socket 1: 6
Socket 2: 7
Socket 3: 8
Socket 4: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 0: 15
Socket 1: 16
Socket 2: 17
Socket 3: 18
Socket 4: 19
Testing too many filters...
Testing filters on non-SO_REUSEPORT socket...
---- IPv6 TCP w/ mapped IPv4 ----
Testing EBPF mod 10...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 5: 5
Socket 6: 6
Socket 7: 7
Socket 8: 8
Socket 9: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 5: 15
Socket 6: 16
Socket 7: 17
Socket 8: 18
Socket 9: 19
Reprograming, testing mod 5...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 0: 5
Socket 1: 6
Socket 2: 7
Socket 3: 8
Socket 4: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 0: 15
Socket 1: 16
Socket 2: 17
Socket 3: 18
Socket 4: 19
Testing CBPF mod 10...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 5: 5
Socket 6: 6
Socket 7: 7
Socket 8: 8
Socket 9: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 5: 15
Socket 6: 16
Socket 7: 17
Socket 8: 18
Socket 9: 19
Reprograming, testing mod 5...
Socket 0: 0
Socket 1: 1
Socket 2: 2
Socket 3: 3
Socket 4: 4
Socket 0: 5
Socket 1: 6
Socket 2: 7
Socket 3: 8
Socket 4: 9
Socket 0: 10
Socket 1: 11
Socket 2: 12
Socket 3: 13
Socket 4: 14
Socket 0: 15
Socket 1: 16
Socket 2: 17
Socket 3: 18
Socket 4: 19
Testing filter add without bind...
SUCCESS
ok 1..1 selftests: net: reuseport_bpf [PASS]
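The mod-N fan-out shown above is implemented by attaching a socket filter whose return value selects the reuseport group member. A minimal classic-BPF sketch of the idea (close to, but not copied from, the selftest; SO_ATTACH_REUSEPORT_CBPF needs a 4.5+ kernel):

#include <linux/filter.h>
#include <sys/socket.h>

/* Return (first four payload bytes) % mod as the socket index. */
static int attach_cbpf_mod(int fd, unsigned int mod)
{
	struct sock_filter code[] = {
		{ BPF_LD  | BPF_W   | BPF_ABS, 0, 0, 0 },   /* A = pkt[0..3] */
		{ BPF_ALU | BPF_MOD | BPF_K,   0, 0, mod }, /* A %= mod      */
		{ BPF_RET | BPF_A,             0, 0, 0 },   /* return A      */
	};
	struct sock_fprog prog = {
		.len = sizeof(code) / sizeof(code[0]),
		.filter = code,
	};

	return setsockopt(fd, SOL_SOCKET, SO_ATTACH_REUSEPORT_CBPF,
			  &prog, sizeof(prog));
}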
selftests: net: reuseport_bpf_cpu
========================================
---- IPv4 UDP ----
send cpu 0, receive socket 0
send cpu 1, receive socket 1
send cpu 1, receive socket 1
send cpu 0, receive socket 0
send cpu 0, receive socket 0
send cpu 1, receive socket 1
---- IPv6 UDP ----
send cpu 0, receive socket 0
send cpu 1, receive socket 1
send cpu 1, receive socket 1
send cpu 0, receive socket 0
send cpu 0, receive socket 0
send cpu 1, receive socket 1
---- IPv4 TCP ----
send cpu 0, receive socket 0
send cpu 1, receive socket 1
send cpu 1, receive socket 1
send cpu 0, receive socket 0
send cpu 0, receive socket 0
send cpu 1, receive socket 1
---- IPv6 TCP ----
send cpu 0, receive socket 0
send cpu 1, receive socket 1
send cpu 1, receive socket 1
send cpu 0, receive socket 0
send cpu 0, receive socket 0
send cpu 1, receive socket 1
SUCCESS
ok 1..2 selftests: net: reuseport_bpf_cpu [PASS]
selftests: net: reuseport_bpf_numa
========================================
---- IPv4 UDP ----
send node 0, receive socket 0
send node 0, receive socket 0
---- IPv6 UDP ----
send node 0, receive socket 0
send node 0, receive socket 0
---- IPv4 TCP ----
send node 0, receive socket 0
send node 0, receive socket 0
---- IPv6 TCP ----
send node 0, receive socket 0
send node 0, receive socket 0
SUCCESS
ok 1..3 selftests: net: reuseport_bpf_numa [PASS]
selftests: net: reuseport_dualstack
========================================
---- UDP IPv4 created before IPv6 ----
---- UDP IPv6 created before IPv4 ----
---- UDP IPv4 created before IPv6 (large) ----
---- UDP IPv6 created before IPv4 (large) ----
---- TCP IPv4 created before IPv6 ----
---- TCP IPv6 created before IPv4 ----
SUCCESS
ok 1..4 selftests: net: reuseport_dualstack [PASS]
selftests: net: reuseaddr_conflict
========================================
Opening 127.0.0.1:9999
Opening INADDR_ANY:9999
bind: Address already in use
Opening in6addr_any:9999
Opening INADDR_ANY:9999
bind: Address already in use
Opening INADDR_ANY:9999 after closing ipv6 socket
bind: Address already in use
Success
ok 1..5 selftests: net: reuseaddr_conflict [PASS]
selftests: net: tls
========================================
[==========] Running 29 tests from 2 test cases.
[ RUN ] tls.sendfile
[ OK ] tls.sendfile
[ RUN ] tls.send_then_sendfile
[ OK ] tls.send_then_sendfile
[ RUN ] tls.recv_max
[ OK ] tls.recv_max
[ RUN ] tls.recv_small
[ OK ] tls.recv_small
[ RUN ] tls.msg_more
[ OK ] tls.msg_more
[ RUN ] tls.sendmsg_single
[ OK ] tls.sendmsg_single
[ RUN ] tls.sendmsg_large
[ OK ] tls.sendmsg_large
[ RUN ] tls.sendmsg_multiple
[ OK ] tls.sendmsg_multiple
[ RUN ] tls.sendmsg_multiple_stress
[ OK ] tls.sendmsg_multiple_stress
[ RUN ] tls.splice_from_pipe
[ OK ] tls.splice_from_pipe
[ RUN ] tls.splice_from_pipe2
[ OK ] tls.splice_from_pipe2
[ RUN ] tls.send_and_splice
[ OK ] tls.send_and_splice
[ RUN ] tls.splice_to_pipe
[ OK ] tls.splice_to_pipe
[ RUN ] tls.recvmsg_single
[ OK ] tls.recvmsg_single
[ RUN ] tls.recvmsg_single_max
[ OK ] tls.recvmsg_single_max
[ RUN ] tls.recvmsg_multiple
[ OK ] tls.recvmsg_multiple
[ RUN ] tls.single_send_multiple_recv
[ OK ] tls.single_send_multiple_recv
[ RUN ] tls.multiple_send_single_recv
[ OK ] tls.multiple_send_single_recv
[ RUN ] tls.recv_partial
[ OK ] tls.recv_partial
[ RUN ] tls.recv_nonblock
[ OK ] tls.recv_nonblock
[ RUN ] tls.recv_peek
[ OK ] tls.recv_peek
[ RUN ] tls.recv_peek_multiple
[ OK ] tls.recv_peek_multiple
[ RUN ] tls.recv_peek_multiple_records
[ OK ] tls.recv_peek_multiple_records
[ RUN ] tls.recv_peek_large_buf_mult_recs
[ OK ] tls.recv_peek_large_buf_mult_recs
[ RUN ] tls.pollin
[ OK ] tls.pollin
[ RUN ] tls.poll_wait
[ OK ] tls.poll_wait
[ RUN ] tls.blocking
[ OK ] tls.blocking
[ RUN ] tls.nonblocking
[ OK ] tls.nonblocking
[ RUN ] tls.control_msg
[ OK ] tls.control_msg
[==========] 29 / 29 tests passed.
[ PASSED ]
ok 1..6 selftests: net: tls [PASS]
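These 29 cases drive the kernel TLS ULP. Enabling it on an already-handshaked TCP socket follows this shape (a sketch with placeholder key material and guarded fallback defines, not the selftest code):

#include <linux/tls.h>
#include <netinet/in.h>		/* IPPROTO_TCP */
#include <sys/socket.h>
#include <string.h>

#ifndef TCP_ULP
#define TCP_ULP 31		/* attach an upper layer protocol */
#endif
#ifndef SOL_TLS
#define SOL_TLS 282
#endif

static int enable_ktls_tx(int sock)
{
	struct tls12_crypto_info_aes_gcm_128 ci;

	if (setsockopt(sock, IPPROTO_TCP, TCP_ULP, "tls", sizeof("tls")))
		return -1;
	memset(&ci, 0, sizeof(ci));
	ci.info.version = TLS_1_2_VERSION;
	ci.info.cipher_type = TLS_CIPHER_AES_GCM_128;
	/* ci.key / ci.iv / ci.salt / ci.rec_seq come from the TLS
	 * handshake; zeroes here only make the sketch compile. */
	return setsockopt(sock, SOL_TLS, TLS_TX, &ci, sizeof(ci));
}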
selftests: net: run_netsocktests
========================================
--------------------
running socket test
--------------------
[PASS]
ok 1..7 selftests: net: run_netsocktests [PASS]
selftests: net: run_afpackettests
========================================
--------------------
running psock_fanout test
--------------------
test: control single socket
test: control multiple sockets
test: unique ids
test: datapath 0x0 ports 8000,8002
info: count=0,0, expect=0,0
info: count=5,15, expect=15,5
info: count=5,20, expect=20,5
test: datapath 0x1000 ports 8000,8002
info: count=0,0, expect=0,0
info: count=5,15, expect=15,5
info: count=15,20, expect=20,15
test: datapath 0x1 ports 8000,8002
info: count=0,0, expect=0,0
info: count=10,10, expect=10,10
info: count=17,18, expect=18,17
test: datapath 0x3 ports 8000,8002
info: count=0,0, expect=0,0
info: count=15,5, expect=15,5
info: count=20,15, expect=20,15
test: datapath 0x6 ports 8000,8002
info: count=0,0, expect=0,0
info: count=5,15, expect=15,5
info: count=20,15, expect=15,20
test: datapath 0x7 ports 8000,8002
info: count=0,0, expect=0,0
info: count=5,15, expect=15,5
info: count=20,15, expect=15,20
test: datapath 0x2 ports 8000,8002
info: count=0,0, expect=0,0
info: count=20,0, expect=20,0
info: count=20,0, expect=20,0
test: datapath 0x2 ports 8000,8002
info: count=0,0, expect=0,0
info: count=0,20, expect=0,20
info: count=0,20, expect=0,20
test: datapath 0x2000 ports 8000,8002
info: count=0,0, expect=0,0
info: count=20,20, expect=20,20
info: count=20,20, expect=20,20
OK. All tests passed
[PASS]
--------------------
running psock_tpacket test
--------------------
test: TPACKET_V1 with PACKET_RX_RING .................... 100 pkts (14200 bytes)
test: TPACKET_V1 with PACKET_TX_RING .................... 100 pkts (14200 bytes)
test: TPACKET_V2 with PACKET_RX_RING .................... 100 pkts (14200 bytes)
test: TPACKET_V2 with PACKET_TX_RING .................... 100 pkts (14200 bytes)
test: TPACKET_V3 with PACKET_RX_RING .................... 100 pkts (14200 bytes)
test: TPACKET_V3 with PACKET_TX_RING .................... 100 pkts (14200 bytes)
OK. All tests passed
[PASS]
--------------------
running txring_overwrite test
--------------------
read: a (0x61)
read: b (0x62)
[PASS]
ok 1..8 selftests: net: run_afpackettests [PASS]
selftests: net: test_bpf.sh
========================================
test_bpf: ok
ok 1..9 selftests: net: test_bpf.sh [PASS]
selftests: net: netdevice.sh
========================================
SKIP: eth0: interface already up
Cannot get device udp-fragmentation-offload settings: Operation not supported
PASS: eth0: ethtool list features
PASS: eth0: ethtool dump
PASS: eth0: ethtool stats
SKIP: eth0: interface kept up
ok 1..10 selftests: net: netdevice.sh [PASS]
selftests: net: rtnetlink.sh
========================================
PASS: policy routing
PASS: route get
PASS: tc htb hierarchy
PASS: gre tunnel endpoint
PASS: gretap
RTNETLINK answers: Operation not supported
Cannot find device "ip6gretap00"
Cannot find device "ip6gretap00"
Cannot find device "ip6gretap00"
RTNETLINK answers: Operation not supported
Cannot find device "ip6gretap00"
FAIL: ip6gretap
PASS: erspan
RTNETLINK answers: Operation not supported
Cannot find device "ip6erspan00"
Cannot find device "ip6erspan00"
Cannot find device "ip6erspan00"
RTNETLINK answers: Operation not supported
Cannot find device "ip6erspan00"
Cannot find device "ip6erspan00"
Cannot find device "ip6erspan00"
RTNETLINK answers: Operation not supported
Cannot find device "ip6erspan00"
FAIL: ip6erspan
PASS: bridge setup
PASS: ipv6 addrlabel
PASS: set ifalias 58bb7090-b9ad-46cd-ba02-d0c3cc611fa5 for test-dummy0
PASS: vrf
PASS: vxlan
PASS: fou
PASS: macsec
PASS: ipsec
FAIL: ipsec_offload netdevsim doesn't support IPsec offload
SKIP: fdb get tests: iproute2 too old
SKIP: fdb get tests: iproute2 too old
ok 1..11 selftests: net: rtnetlink.sh [PASS]
selftests: net: xfrm_policy.sh
========================================
PASS: policy before exception matches
PASS: ping to .254 bypassed ipsec tunnel
PASS: direct policy matches
PASS: policy matches
ok 1..12 selftests: net: xfrm_policy.sh [PASS]
selftests: net: fib_tests.sh
========================================
Single path route test
Start point
TEST: IPv4 fibmatch [ OK ]
TEST: IPv6 fibmatch [ OK ]
Nexthop device deleted
TEST: IPv4 fibmatch - no route [ OK ]
TEST: IPv6 fibmatch - no route [ OK ]
Multipath route test
Start point
TEST: IPv4 fibmatch [ OK ]
TEST: IPv6 fibmatch [ OK ]
One nexthop device deleted
TEST: IPv4 - multipath route removed on delete [ OK ]
TEST: IPv6 - multipath down to single path [ OK ]
Second nexthop device deleted
TEST: IPv6 - no route [ OK ]
Single path, admin down
Start point
TEST: IPv4 fibmatch [ OK ]
TEST: IPv6 fibmatch [ OK ]
Route deleted on down
TEST: IPv4 fibmatch [ OK ]
TEST: IPv6 fibmatch [ OK ]
Admin down multipath
Verify start point
TEST: IPv4 fibmatch [ OK ]
TEST: IPv6 fibmatch [ OK ]
One device down, one up
TEST: IPv4 fibmatch on down device [ OK ]
TEST: IPv6 fibmatch on down device [ OK ]
TEST: IPv4 fibmatch on up device [ OK ]
TEST: IPv6 fibmatch on up device [ OK ]
TEST: IPv4 flags on down device [ OK ]
TEST: IPv6 flags on down device [ OK ]
TEST: IPv4 flags on up device [ OK ]
TEST: IPv6 flags on up device [ OK ]
Other device down and up
TEST: IPv4 fibmatch on down device [ OK ]
TEST: IPv6 fibmatch on down device [ OK ]
TEST: IPv4 fibmatch on up device [ OK ]
TEST: IPv6 fibmatch on up device [ OK ]
TEST: IPv4 flags on down device [ OK ]
TEST: IPv6 flags on down device [ OK ]
TEST: IPv4 flags on up device [ OK ]
TEST: IPv6 flags on up device [ OK ]
Both devices down
TEST: IPv4 fibmatch [ OK ]
TEST: IPv6 fibmatch [ OK ]
Local carrier tests - single path
Start point
TEST: IPv4 fibmatch [ OK ]
TEST: IPv6 fibmatch [ OK ]
TEST: IPv4 - no linkdown flag [ OK ]
TEST: IPv6 - no linkdown flag [ OK ]
Carrier off on nexthop
TEST: IPv4 fibmatch [ OK ]
TEST: IPv6 fibmatch [ OK ]
TEST: IPv4 - linkdown flag set [ OK ]
TEST: IPv6 - linkdown flag set [ OK ]
Route to local address with carrier down
TEST: IPv4 fibmatch [ OK ]
TEST: IPv6 fibmatch [ OK ]
TEST: IPv4 linkdown flag set [ OK ]
TEST: IPv6 linkdown flag set [ OK ]
Single path route carrier test
Start point
TEST: IPv4 fibmatch [ OK ]
TEST: IPv6 fibmatch [ OK ]
TEST: IPv4 no linkdown flag [ OK ]
TEST: IPv6 no linkdown flag [ OK ]
Carrier down
TEST: IPv4 fibmatch [ OK ]
TEST: IPv6 fibmatch [ OK ]
TEST: IPv4 linkdown flag set [ OK ]
TEST: IPv6 linkdown flag set [ OK ]
Second address added with carrier down
TEST: IPv4 fibmatch [ OK ]
TEST: IPv6 fibmatch [ OK ]
TEST: IPv4 linkdown flag set [ OK ]
TEST: IPv6 linkdown flag set [ OK ]
IPv4 nexthop tests
<<< write me >>>
IPv6 nexthop tests
TEST: Directly connected nexthop, unicast address [ OK ]
TEST: Directly connected nexthop, unicast address with device [ OK ]
TEST: Gateway is linklocal address [ OK ]
TEST: Gateway is linklocal address, no device [ OK ]
TEST: Gateway can not be local unicast address [ OK ]
TEST: Gateway can not be local unicast address, with device [ OK ]
TEST: Gateway can not be a local linklocal address [ OK ]
TEST: Gateway can be local address in a VRF [ OK ]
TEST: Gateway can be local address in a VRF, with device [ OK ]
TEST: Gateway can be local linklocal address in a VRF [ OK ]
TEST: Redirect to VRF lookup [ OK ]
TEST: VRF route, gateway can be local address in default VRF [ OK ]
TEST: VRF route, gateway can not be a local address [ OK ]
TEST: VRF route, gateway can not be a local addr with device [ OK ]
IPv6 route add / append tests
TEST: Attempt to add duplicate route - gw [ OK ]
TEST: Attempt to add duplicate route - dev only [ OK ]
TEST: Attempt to add duplicate route - reject route [ OK ]
TEST: Append nexthop to existing route - gw [ OK ]
TEST: Add multipath route [ OK ]
TEST: Attempt to add duplicate multipath route [ OK ]
TEST: Route add with different metrics [ OK ]
TEST: Route delete with metric [ OK ]
IPv6 route replace tests
TEST: Single path with single path [ OK ]
TEST: Single path with multipath [ OK ]
TEST: Single path with single path via multipath attribute [ OK ]
TEST: Invalid nexthop [ OK ]
TEST: Single path - replace of non-existent route [ OK ]
TEST: Multipath with multipath [ OK ]
TEST: Multipath with single path [ OK ]
TEST: Multipath with single path via multipath attribute [ OK ]
TEST: Multipath - invalid first nexthop [ OK ]
TEST: Multipath - invalid second nexthop [ OK ]
TEST: Multipath - replace of non-existent route [ OK ]
IPv4 route add / append tests
TEST: Attempt to add duplicate route - gw [ OK ]
TEST: Attempt to add duplicate route - dev only [ OK ]
TEST: Attempt to add duplicate route - reject route [ OK ]
TEST: Add new nexthop for existing prefix [ OK ]
TEST: Append nexthop to existing route - gw [ OK ]
TEST: Append nexthop to existing route - dev only [ OK ]
TEST: Append nexthop to existing route - reject route [ OK ]
TEST: Append nexthop to existing reject route - gw [ OK ]
TEST: Append nexthop to existing reject route - dev only [ OK ]
TEST: add multipath route [ OK ]
TEST: Attempt to add duplicate multipath route [ OK ]
TEST: Route add with different metrics [ OK ]
TEST: Route delete with metric [ OK ]
IPv4 route replace tests
TEST: Single path with single path [ OK ]
TEST: Single path with multipath [ OK ]
TEST: Single path with reject route [ OK ]
TEST: Single path with single path via multipath attribute [ OK ]
TEST: Invalid nexthop [ OK ]
TEST: Single path - replace of non-existent route [ OK ]
TEST: Multipath with multipath [ OK ]
TEST: Multipath with single path [ OK ]
TEST: Multipath with single path via multipath attribute [ OK ]
TEST: Multipath with reject route [ OK ]
TEST: Multipath - invalid first nexthop [ OK ]
TEST: Multipath - invalid second nexthop [ OK ]
TEST: Multipath - replace of non-existent route [ OK ]
IPv6 prefix route tests
TEST: Default metric [ OK ]
TEST: User specified metric on first device [ OK ]
TEST: User specified metric on second device [ OK ]
TEST: Delete of address on first device [ OK ]
TEST: Modify metric of address [ OK ]
Command line is not complete. Try option "help"
TEST: Prefix route removed on link down [ OK ]
TEST: Prefix route with metric on link up [ OK ]
IPv4 prefix route tests
TEST: Default metric [ OK ]
TEST: User specified metric on first device [ OK ]
TEST: User specified metric on second device [ OK ]
TEST: Delete of address on first device [ OK ]
TEST: Modify metric of address [ OK ]
Command line is not complete. Try option "help"
TEST: Prefix route removed on link down [ OK ]
TEST: Prefix route with metric on link up [ OK ]
IPv6 routes with metrics
TEST: Single path route with mtu metric [ OK ]
TEST: Multipath route via 2 single routes with mtu metric on first [ OK ]
TEST: Multipath route via 2 single routes with mtu metric on 2nd [ OK ]
TEST: MTU of second leg [ OK ]
TEST: Multipath route with mtu metric [ OK ]
TEST: Using route with mtu metric [ OK ]
TEST: Invalid metric (fails metric_convert) [ OK ]
IPv4 route add / append tests
TEST: Single path route with mtu metric [ OK ]
TEST: Multipath route with mtu metric [ OK ]
TEST: Using route with mtu metric [ OK ]
TEST: Invalid metric (fails metric_convert) [ OK ]
Tests passed: 141
Tests failed: 0
ok 1..13 selftests: net: fib_tests.sh [PASS]
selftests: net: fib-onlink-tests.sh
========================================
########################################
Configuring interfaces
RTNETLINK answers: File exists
not ok 1..14 selftests: net: fib-onlink-tests.sh [FAIL]
selftests: net: pmtu.sh
========================================
TEST: ipv4: PMTU exceptions [ OK ]
connect: Cannot assign requested address
connect: Cannot assign requested address
TEST: ipv6: PMTU exceptions [FAIL]
PMTU exception wasn't created after exceeding MTU
TEST: IPv4 over vxlan4: PMTU exceptions [ OK ]
connect: Cannot assign requested address
TEST: IPv6 over vxlan4: PMTU exceptions [FAIL]
PMTU exception wasn't created after exceeding link layer MTU on vxlan interface
TEST: IPv4 over vxlan6: PMTU exceptions [ OK ]
TEST: IPv6 over vxlan6: PMTU exceptions [ OK ]
RTNETLINK answers: Operation not supported
geneve4 not supported
TEST: IPv4 over geneve4: PMTU exceptions [SKIP]
RTNETLINK answers: Operation not supported
geneve4 not supported
TEST: IPv6 over geneve4: PMTU exceptions [SKIP]
RTNETLINK answers: Operation not supported
geneve6 not supported
TEST: IPv4 over geneve6: PMTU exceptions [SKIP]
RTNETLINK answers: Operation not supported
geneve6 not supported
TEST: IPv6 over geneve6: PMTU exceptions [SKIP]
TEST: IPv4 over fou4: PMTU exceptions [ OK ]
TEST: IPv6 over fou4: PMTU exceptions [ OK ]
TEST: IPv4 over fou6: PMTU exceptions [ OK ]
TEST: IPv6 over fou6: PMTU exceptions [ OK ]
TEST: IPv4 over gue4: PMTU exceptions [ OK ]
TEST: IPv6 over gue4: PMTU exceptions [ OK ]
TEST: IPv4 over gue6: PMTU exceptions [ OK ]
TEST: IPv6 over gue6: PMTU exceptions [ OK ]
TEST: vti6: PMTU exceptions [ OK ]
TEST: vti4: PMTU exceptions [ OK ]
TEST: vti4: default MTU assignment [ OK ]
TEST: vti6: default MTU assignment [ OK ]
TEST: vti4: MTU setting on link creation [ OK ]
TEST: vti6: MTU setting on link creation [ OK ]
TEST: vti6: MTU changes on link changes [ OK ]
not ok 1..15 selftests: net: pmtu.sh [FAIL]
selftests: net: udpgso.sh
========================================
ipv4 cmsg
device mtu (orig): 65536
device mtu (test): 1500
ipv4 tx:1 gso:0
ipv4 tx:1472 gso:0
ipv4 tx:1473 gso:0 (fail)
ipv4 tx:1472 gso:1472 (fail)
ipv4 tx:1473 gso:1472
ipv4 tx:2944 gso:1472
ipv4 tx:2945 gso:1472
ipv4 tx:64768 gso:1472
ipv4 tx:65507 gso:1472
ipv4 tx:65508 gso:1472 (fail)
ipv4 tx:1 gso:1 (fail)
ipv4 tx:2 gso:1
ipv4 tx:5 gso:2
ipv4 tx:36 gso:1
ipv4 tx:37 gso:1 (fail)
OK
ipv4 setsockopt
device mtu (orig): 65536
device mtu (test): 1500
ipv4 tx:1 gso:0
ipv4 tx:1472 gso:0
ipv4 tx:1473 gso:0 (fail)
ipv4 tx:1472 gso:1472 (fail)
ipv4 tx:1473 gso:1472
ipv4 tx:2944 gso:1472
ipv4 tx:2945 gso:1472
ipv4 tx:64768 gso:1472
ipv4 tx:65507 gso:1472
ipv4 tx:65508 gso:1472 (fail)
ipv4 tx:1 gso:1 (fail)
ipv4 tx:2 gso:1
ipv4 tx:5 gso:2
ipv4 tx:36 gso:1
ipv4 tx:37 gso:1 (fail)
OK
ipv6 cmsg
device mtu (orig): 65536
device mtu (test): 1500
ipv6 tx:1 gso:0
ipv6 tx:1452 gso:0
ipv6 tx:1453 gso:0 (fail)
ipv6 tx:1452 gso:1452 (fail)
ipv6 tx:1453 gso:1452
ipv6 tx:2904 gso:1452
ipv6 tx:2905 gso:1452
ipv6 tx:65340 gso:1452
ipv6 tx:65527 gso:1452
ipv6 tx:65528 gso:1452 (fail)
ipv6 tx:1 gso:1 (fail)
ipv6 tx:2 gso:1
ipv6 tx:5 gso:2
ipv6 tx:16 gso:1
ipv6 tx:17 gso:1 (fail)
OK
ipv6 setsockopt
device mtu (orig): 65536
device mtu (test): 1500
ipv6 tx:1 gso:0
ipv6 tx:1452 gso:0
ipv6 tx:1453 gso:0 (fail)
ipv6 tx:1452 gso:1452 (fail)
ipv6 tx:1453 gso:1452
ipv6 tx:2904 gso:1452
ipv6 tx:2905 gso:1452
ipv6 tx:65340 gso:1452
ipv6 tx:65527 gso:1452
ipv6 tx:65528 gso:1452 (fail)
ipv6 tx:1 gso:1 (fail)
ipv6 tx:2 gso:1
ipv6 tx:5 gso:2
ipv6 tx:16 gso:1
ipv6 tx:17 gso:1 (fail)
OK
ipv4 connected
device mtu (orig): 65536
device mtu (test): 1600
route mtu (test): 1500
path mtu (read): 1500
ipv4 tx:1 gso:0
ipv4 tx:1472 gso:0
ipv4 tx:1473 gso:0 (fail)
ipv4 tx:1472 gso:1472 (fail)
ipv4 tx:1473 gso:1472
ipv4 tx:2944 gso:1472
ipv4 tx:2945 gso:1472
ipv4 tx:64768 gso:1472
ipv4 tx:65507 gso:1472
ipv4 tx:65508 gso:1472 (fail)
ipv4 tx:1 gso:1 (fail)
ipv4 tx:2 gso:1
ipv4 tx:5 gso:2
ipv4 tx:36 gso:1
ipv4 tx:37 gso:1 (fail)
OK
ipv4 msg_more
device mtu (orig): 65536
device mtu (test): 1500
ipv4 tx:1 gso:0
ipv4 tx:1472 gso:0
ipv4 tx:1473 gso:0 (fail)
ipv4 tx:1472 gso:1472 (fail)
ipv4 tx:1473 gso:1472
ipv4 tx:2944 gso:1472
ipv4 tx:2945 gso:1472
ipv4 tx:64768 gso:1472
ipv4 tx:65507 gso:1472
ipv4 tx:65508 gso:1472 (fail)
ipv4 tx:1 gso:1 (fail)
ipv4 tx:2 gso:1
ipv4 tx:5 gso:2
ipv4 tx:36 gso:1
ipv4 tx:37 gso:1 (fail)
OK
ipv6 msg_more
device mtu (orig): 65536
device mtu (test): 1500
ipv6 tx:1 gso:0
ipv6 tx:1452 gso:0
ipv6 tx:1453 gso:0 (fail)
ipv6 tx:1452 gso:1452 (fail)
ipv6 tx:1453 gso:1452
ipv6 tx:2904 gso:1452
ipv6 tx:2905 gso:1452
ipv6 tx:65340 gso:1452
ipv6 tx:65527 gso:1452
ipv6 tx:65528 gso:1452 (fail)
ipv6 tx:1 gso:1 (fail)
ipv6 tx:2 gso:1
ipv6 tx:5 gso:2
ipv6 tx:16 gso:1
ipv6 tx:17 gso:1 (fail)
OK
ok 1..16 selftests: net: udpgso.sh [PASS]
selftests: net: ip_defrag.sh
========================================
ipv4 defrag
PASS
seed = 1550506667
ipv4 defrag with overlaps
PASS
seed = 1550506668
ipv6 defrag
PASS
seed = 1550506699
ipv6 defrag with overlaps
PASS
seed = 1550506699
all tests done
ok 1..17 selftests: net: ip_defrag.sh [PASS]
selftests: net: udpgso_bench.sh
========================================
ipv4
tcp
tcp rx: 2473 MB/s 35430 calls/s
tcp tx: 2473 MB/s 41947 calls/s 41947 msg/s
tcp rx: 2778 MB/s 35862 calls/s
tcp tx: 2819 MB/s 47828 calls/s 47828 msg/s
tcp rx: 3248 MB/s 53210 calls/s
tcp tx: 3227 MB/s 54739 calls/s 54739 msg/s
tcp zerocopy
tcp rx: 2206 MB/s 34504 calls/s
tcp tx: 2206 MB/s 37420 calls/s 37420 msg/s
tcp tx: 1834 MB/s 31109 calls/s 31109 msg/s
tcp rx: 1834 MB/s 23760 calls/s
tcp tx: 1675 MB/s 28413 calls/s 28413 msg/s
tcp rx: 1677 MB/s 22661 calls/s
udp
udp rx: 273 MB/s 195140 calls/s
udp tx: 277 MB/s 197652 calls/s 4706 msg/s
udp rx: 361 MB/s 257264 calls/s
udp tx: 369 MB/s 262920 calls/s 6260 msg/s
udp rx: 349 MB/s 248638 calls/s
udp tx: 355 MB/s 253470 calls/s 6035 msg/s
udp rx: 302 MB/s 215415 calls/s
udp gso
udp rx: 678 MB/s 483509 calls/s
udp tx: 703 MB/s 11935 calls/s 11935 msg/s
udp rx: 839 MB/s 598198 calls/s
udp tx: 909 MB/s 15422 calls/s 15422 msg/s
udp rx: 799 MB/s 569595 calls/s
udp tx: 812 MB/s 13779 calls/s 13779 msg/s
udp rx: 739 MB/s 526925 calls/s
udp gso zerocopy
udp rx: 633 MB/s 450933 calls/s
udp tx: 643 MB/s 10909 calls/s 10909 msg/s
udp rx: 727 MB/s 518471 calls/s
udp tx: 734 MB/s 12450 calls/s 12450 msg/s
udp rx: 724 MB/s 516082 calls/s
udp tx: 731 MB/s 12402 calls/s 12402 msg/s
ipv6
tcp
tcp tx: 2819 MB/s 47825 calls/s 47825 msg/s
tcp rx: 2819 MB/s 46683 calls/s
tcp rx: 3329 MB/s 56088 calls/s
tcp tx: 3329 MB/s 56476 calls/s 56476 msg/s
tcp rx: 3614 MB/s 60462 calls/s
tcp tx: 3614 MB/s 61311 calls/s 61311 msg/s
tcp zerocopy
tcp rx: 2205 MB/s 33711 calls/s
tcp tx: 2205 MB/s 37414 calls/s 37414 msg/s
tcp tx: 2740 MB/s 46484 calls/s 46484 msg/s
tcp rx: 2743 MB/s 33023 calls/s
tcp tx: 2247 MB/s 38126 calls/s 38126 msg/s
tcp rx: 2248 MB/s 35172 calls/s
udp
udp rx: 300 MB/s 219186 calls/s
udp tx: 305 MB/s 222525 calls/s 5175 msg/s
udp tx: 281 MB/s 205282 calls/s 4774 msg/s
udp rx: 244 MB/s 178121 calls/s
udp tx: 298 MB/s 217967 calls/s 5069 msg/s
udp rx: 265 MB/s 193917 calls/s
udp gso
udp rx: 349 MB/s 254957 calls/s
udp tx: 727 MB/s 12333 calls/s 12333 msg/s
udp tx: 685 MB/s 11629 calls/s 11629 msg/s
udp rx: 300 MB/s 219356 calls/s
udp tx: 675 MB/s 11451 calls/s 11451 msg/s
udp rx: 273 MB/s 199736 calls/s
udp gso zerocopy
udp rx: 291 MB/s 212843 calls/s
udp tx: 540 MB/s 9170 calls/s 9170 msg/s
udp rx: 198 MB/s 144606 calls/s
udp tx: 337 MB/s 5720 calls/s 5720 msg/s
udp tx: 604 MB/s 10255 calls/s 10255 msg/s
udp rx: 251 MB/s 183111 calls/s
ok 1..18 selftests: net: udpgso_bench.sh [PASS]
selftests: net: fib_rule_tests.sh
========================================
######################################################################
TEST SECTION: IPv4 fib rule
######################################################################
TEST: rule4 check: oif dummy0 [ OK ]
TEST: rule4 del by pref: oif dummy0 [ OK ]
RTNETLINK answers: No route to host
TEST: rule4 check: from 192.51.100.3 iif dummy0 [FAIL]
TEST: rule4 del by pref: from 192.51.100.3 iif dummy0 [ OK ]
TEST: rule4 check: tos 0x10 [ OK ]
TEST: rule4 del by pref: tos 0x10 [ OK ]
TEST: rule4 check: fwmark 0x64 [ OK ]
TEST: rule4 del by pref: fwmark 0x64 [ OK ]
TEST: rule4 check: uidrange 100-100 [ OK ]
TEST: rule4 del by pref: uidrange 100-100 [ OK ]
TEST: rule4 check: sport 666 dport 777 [ OK ]
TEST: rule4 del by pref: sport 666 dport 777 [ OK ]
TEST: rule4 check: ipproto tcp [ OK ]
TEST: rule4 del by pref: ipproto tcp [ OK ]
TEST: rule4 check: ipproto icmp [ OK ]
TEST: rule4 del by pref: ipproto icmp [ OK ]
######################################################################
TEST SECTION: IPv6 fib rule
######################################################################
TEST: rule6 check: oif dummy0 [ OK ]
TEST: rule6 del by pref: oif dummy0 [ OK ]
TEST: rule6 check: from 2001:db8:1::3 iif dummy0 [ OK ]
TEST: rule6 del by pref: from 2001:db8:1::3 iif dummy0 [ OK ]
TEST: rule6 check: tos 0x10 [ OK ]
TEST: rule6 del by pref: tos 0x10 [ OK ]
TEST: rule6 check: fwmark 0x64 [ OK ]
TEST: rule6 del by pref: fwmark 0x64 [ OK ]
TEST: rule6 check: uidrange 100-100 [ OK ]
TEST: rule6 del by pref: uidrange 100-100 [ OK ]
TEST: rule6 check: sport 666 dport 777 [ OK ]
TEST: rule6 del by pref: sport 666 dport 777 [ OK ]
TEST: rule6 check: ipproto tcp [ OK ]
TEST: rule6 del by pref: ipproto tcp [ OK ]
TEST: rule6 check: ipproto icmp [ OK ]
TEST: rule6 del by pref: ipproto icmp [ OK ]
ok 1..19 selftests: net: fib_rule_tests.sh [PASS]
selftests: net: msg_zerocopy.sh
========================================
ipv4 tcp -t 1
./msg_zerocopy: setaffinity 2
./msg_zerocopy: setaffinity 3
not ok 1..20 selftests: net: msg_zerocopy.sh [FAIL]
selftests: net: psock_snd.sh
========================================
dgram
tx: 128
rx: 142
rx: 100
OK
dgram bind
tx: 128
rx: 142
rx: 100
OK
raw
tx: 142
rx: 142
rx: 100
OK
raw bind
tx: 142
rx: 142
rx: 100
OK
raw qdisc bypass
tx: 142
rx: 142
rx: 100
OK
raw vlan
tx: 146
rx: 100
OK
raw vnet hdr
tx: 152
rx: 142
rx: 100
OK
raw csum_off
tx: 152
rx: 142
rx: 100
OK
raw csum_off with bad offset (fails)
./psock_snd: write: Invalid argument
raw min size
tx: 42
rx: 0
OK
raw mtu size
tx: 1514
rx: 1472
OK
raw mtu size + 1 (fails)
./psock_snd: write: Message too long
raw vlan mtu size + 1 (fails)
./psock_snd: write: Message too long
dgram mtu size
tx: 1500
rx: 1472
OK
dgram mtu size + 1 (fails)
./psock_snd: write: Message too long
raw truncate hlen (fails: does not arrive)
tx: 14
./psock_snd: recv: Resource temporarily unavailable
raw truncate hlen - 1 (fails: EINVAL)
./psock_snd: write: Invalid argument
raw gso min size
tx: 1525
rx: 1473
OK
raw gso min size - 1 (fails)
tx: 1524
./psock_snd: recv: Resource temporarily unavailable
raw gso max size
tx: 65559
rx: 65507
OK
raw gso max size + 1 (fails)
tx: 65560
./psock_snd: recv: Resource temporarily unavailable
OK. All tests passed
ok 1..21 selftests: net: psock_snd.sh [PASS]
selftests: net: udpgro_bench.sh
========================================
Missing xdp_dummy helper. Build bpf selftest first
not ok 1..22 selftests: net: udpgro_bench.sh [FAIL]
selftests: net: udpgro.sh
========================================
Missing xdp_dummy helper. Build bpf selftest first
not ok 1..23 selftests: net: udpgro.sh [FAIL]
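Both udpgro failures above share one missing prerequisite: the scripts look for the xdp_dummy BPF helper that is built as part of the bpf selftests. A minimal sketch of clearing the failure, assuming the usual in-tree kselftest layout (exact paths and toolchain requirements, such as clang for the BPF objects, may vary by tree):

    # run from tools/testing/selftests
    make -C bpf              # builds the BPF test programs, including the xdp_dummy helper
    make run_tests -C net    # re-run the net scripts that depend on it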
selftests: net: test_vxlan_under_vrf.sh
========================================
Checking HV connectivity [ OK ]
Check VM connectivity through VXLAN (underlay in the default VRF) [ OK ]
Check VM connectivity through VXLAN (underlay in a VRF) [FAIL]
not ok 1..24 selftests: net: test_vxlan_under_vrf.sh [FAIL]
selftests: net: reuseport_addr_any.sh
========================================
UDP IPv4 ... pass
UDP IPv6 ... pass
UDP IPv4 mapped to IPv6 ... pass
TCP IPv4 ... pass
TCP IPv6 ... pass
TCP IPv4 mapped to IPv6 ... pass
DCCP IPv4 ... pass
DCCP IPv6 ... pass
DCCP IPv4 mapped to IPv6 ... pass
SUCCESS
ok 1..25 selftests: net: reuseport_addr_any.sh [PASS]
selftests: net: test_vxlan_fdb_changelink.sh
========================================
expected two remotes after fdb append [ OK ]
expected two remotes after link set [ OK ]
ok 1..26 selftests: net: test_vxlan_fdb_changelink.sh [PASS]
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/net'
2019-02-19 00:19:34 make run_tests -C netfilter
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/netfilter'
TAP version 13
selftests: netfilter: nft_trans_stress.sh
========================================
SKIP: Could not run test without nft tool
not ok 1..1 selftests: netfilter: nft_trans_stress.sh [SKIP]
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/netfilter'
2019-02-19 00:19:35 make run_tests -C nsfs
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/nsfs'
gcc -Wall -Werror owner.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/nsfs/owner
gcc -Wall -Werror pidns.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/nsfs/pidns
TAP version 13
selftests: nsfs: owner
========================================
ok 1..1 selftests: nsfs: owner [PASS]
selftests: nsfs: pidns
========================================
ok 1..2 selftests: nsfs: pidns [PASS]
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/nsfs'
ignored_by_lkp powerpc test
prctl test: not in Makefile
2019-02-19 00:19:35 make TARGETS=prctl
make[1]: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/prctl'
Makefile:14: warning: overriding recipe for target 'clean'
../lib.mk:137: warning: ignoring old recipe for target 'clean'
gcc disable-tsc-ctxt-sw-stress-test.c -o disable-tsc-ctxt-sw-stress-test
gcc disable-tsc-on-off-stress-test.c -o disable-tsc-on-off-stress-test
gcc disable-tsc-test.c -o disable-tsc-test
make[1]: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/prctl'
2019-02-19 00:19:36 make run_tests -C prctl
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/prctl'
Makefile:14: warning: overriding recipe for target 'clean'
../lib.mk:137: warning: ignoring old recipe for target 'clean'
TAP version 13
selftests: prctl: disable-tsc-ctxt-sw-stress-test
========================================
[No further output means we're allright]
ok 1..1 selftests: prctl: disable-tsc-ctxt-sw-stress-test [PASS]
selftests: prctl: disable-tsc-on-off-stress-test
========================================
[No further output means we're allright]
ok 1..2 selftests: prctl: disable-tsc-on-off-stress-test [PASS]
selftests: prctl: disable-tsc-test
========================================
rdtsc() == 802151371749
prctl(PR_GET_TSC, &tsc_val); tsc_val == PR_TSC_ENABLE
rdtsc() == 802151561910
prctl(PR_SET_TSC, PR_TSC_ENABLE)
rdtsc() == 802151570169
prctl(PR_SET_TSC, PR_TSC_SIGSEGV)
rdtsc() == [ SIG_SEGV ]
prctl(PR_GET_TSC, &tsc_val); tsc_val == PR_TSC_SIGSEGV
prctl(PR_SET_TSC, PR_TSC_ENABLE)
rdtsc() == 802151606539
ok 1..3 selftests: prctl: disable-tsc-test [PASS]
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/prctl'
2019-02-19 00:19:57 make run_tests -C proc
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/proc'
gcc -Wall -O2 -Wno-unused-function -D_GNU_SOURCE fd-001-lookup.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/proc/fd-001-lookup
gcc -Wall -O2 -Wno-unused-function -D_GNU_SOURCE fd-002-posix-eq.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/proc/fd-002-posix-eq
gcc -Wall -O2 -Wno-unused-function -D_GNU_SOURCE fd-003-kthread.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/proc/fd-003-kthread
gcc -Wall -O2 -Wno-unused-function -D_GNU_SOURCE proc-loadavg-001.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/proc/proc-loadavg-001
proc-loadavg-001.c:17:0: warning: "_GNU_SOURCE" redefined
#define _GNU_SOURCE
<command-line>:0:0: note: this is the location of the previous definition
gcc -Wall -O2 -Wno-unused-function -D_GNU_SOURCE proc-self-map-files-001.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/proc/proc-self-map-files-001
gcc -Wall -O2 -Wno-unused-function -D_GNU_SOURCE proc-self-map-files-002.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/proc/proc-self-map-files-002
gcc -Wall -O2 -Wno-unused-function -D_GNU_SOURCE proc-self-syscall.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/proc/proc-self-syscall
proc-self-syscall.c:16:0: warning: "_GNU_SOURCE" redefined
#define _GNU_SOURCE
<command-line>:0:0: note: this is the location of the previous definition
gcc -Wall -O2 -Wno-unused-function -D_GNU_SOURCE proc-self-wchan.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/proc/proc-self-wchan
gcc -Wall -O2 -Wno-unused-function -D_GNU_SOURCE proc-uptime-001.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/proc/proc-uptime-001
gcc -Wall -O2 -Wno-unused-function -D_GNU_SOURCE proc-uptime-002.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/proc/proc-uptime-002
proc-uptime-002.c:18:0: warning: "_GNU_SOURCE" redefined
#define _GNU_SOURCE
<command-line>:0:0: note: this is the location of the previous definition
gcc -Wall -O2 -Wno-unused-function -D_GNU_SOURCE read.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/proc/read
gcc -Wall -O2 -Wno-unused-function -D_GNU_SOURCE self.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/proc/self
gcc -Wall -O2 -Wno-unused-function -D_GNU_SOURCE thread-self.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/proc/thread-self
TAP version 13
selftests: proc: fd-001-lookup
========================================
ok 1..1 selftests: proc: fd-001-lookup [PASS]
selftests: proc: fd-002-posix-eq
========================================
ok 1..2 selftests: proc: fd-002-posix-eq [PASS]
selftests: proc: fd-003-kthread
========================================
ok 1..3 selftests: proc: fd-003-kthread [PASS]
selftests: proc: proc-loadavg-001
========================================
ok 1..4 selftests: proc: proc-loadavg-001 [PASS]
selftests: proc: proc-self-map-files-001
========================================
ok 1..5 selftests: proc: proc-self-map-files-001 [PASS]
selftests: proc: proc-self-map-files-002
========================================
ok 1..6 selftests: proc: proc-self-map-files-002 [PASS]
selftests: proc: proc-self-syscall
========================================
ok 1..7 selftests: proc: proc-self-syscall [PASS]
selftests: proc: proc-self-wchan
========================================
ok 1..8 selftests: proc: proc-self-wchan [PASS]
selftests: proc: proc-uptime-001
========================================
ok 1..9 selftests: proc: proc-uptime-001 [PASS]
selftests: proc: proc-uptime-002
========================================
ok 1..10 selftests: proc: proc-uptime-002 [PASS]
selftests: proc: read
========================================
ok 1..11 selftests: proc: read [PASS]
selftests: proc: self
========================================
ok 1..12 selftests: proc: self [PASS]
selftests: proc: thread-self
========================================
ok 1..13 selftests: proc: thread-self [PASS]
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/proc'
2019-02-19 00:20:02 make run_tests -C pstore
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/pstore'
TAP version 13
selftests: pstore: pstore_tests
========================================
=== Pstore unit tests (pstore_tests) ===
UUID=0cd9b2af-a682-4d12-9ebe-9c3f346cfad3
Checking pstore backend is registered ... ok
backend=ramoops
cmdline=ip=::::vm-snb-4G-746::dhcp root=/dev/ram0 user=lkp job=/lkp/jobs/scheduled/vm-snb-4G-746/kernel_selftests-kselftests-02-debian-x86_64-2018-04-03.cgz-cdaa813-20190219-32418-xvb5w7-8.yaml ARCH=x86_64 kconfig=x86_64-rhel-7.2 branch=linux-devel/devel-hourly-2019021713 commit=cdaa813278ddc616ee201eacda77f63996b5dd2d BOOT_IMAGE=/pkg/linux/x86_64-rhel-7.2/gcc-7/cdaa813278ddc616ee201eacda77f63996b5dd2d/vmlinuz-5.0.0-rc4-00004-gcdaa8132 erst_disable max_uptime=3600 RESULT_ROOT=/result/kernel_selftests/kselftests-02/vm-snb-4G/debian-x86_64-2018-04-03.cgz/x86_64-rhel-7.2/gcc-7/cdaa813278ddc616ee201eacda77f63996b5dd2d/8 LKP_SERVER=inn debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 net.ifnames=0 printk.devkmsg=on panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 drbd.minor_count=8 systemd.log_level=err ignore_loglevel console=tty0 earlyprintk=ttyS0,115200 console=ttyS0,115200 vga=normal rw rcuperf.shutdown=0
Checking pstore console is registered ... ok
Checking /dev/pmsg0 exists ... ok
Writing unique string to /dev/pmsg0 ... ok
ok 1..1 selftests: pstore: pstore_tests [PASS]
selftests: pstore: pstore_post_reboot_tests
========================================
=== Pstore unit tests (pstore_post_reboot_tests) ===
UUID=071a0f6f-7d44-4c16-adf0-70e0245025c7
Checking pstore backend is registered ... ok
backend=ramoops
cmdline=ip=::::vm-snb-4G-746::dhcp root=/dev/ram0 user=lkp job=/lkp/jobs/scheduled/vm-snb-4G-746/kernel_selftests-kselftests-02-debian-x86_64-2018-04-03.cgz-cdaa813-20190219-32418-xvb5w7-8.yaml ARCH=x86_64 kconfig=x86_64-rhel-7.2 branch=linux-devel/devel-hourly-2019021713 commit=cdaa813278ddc616ee201eacda77f63996b5dd2d BOOT_IMAGE=/pkg/linux/x86_64-rhel-7.2/gcc-7/cdaa813278ddc616ee201eacda77f63996b5dd2d/vmlinuz-5.0.0-rc4-00004-gcdaa8132 erst_disable max_uptime=3600 RESULT_ROOT=/result/kernel_selftests/kselftests-02/vm-snb-4G/debian-x86_64-2018-04-03.cgz/x86_64-rhel-7.2/gcc-7/cdaa813278ddc616ee201eacda77f63996b5dd2d/8 LKP_SERVER=inn debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 net.ifnames=0 printk.devkmsg=on panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 drbd.minor_count=8 systemd.log_level=err ignore_loglevel console=tty0 earlyprintk=ttyS0,115200 console=ttyS0,115200 vga=normal rw rcuperf.shutdown=0
pstore_crash_test has not been executed yet. we skip further tests.
not ok 1..2 selftests: pstore: pstore_post_reboot_tests [SKIP]
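The skip above is expected on a fresh boot: the post-reboot checks only have pstore records to verify after pstore_crash_test has been run. A minimal sketch of the intended two-step sequence, assuming a configured pstore backend (ramoops here) and a disposable test machine, since the crash test deliberately panics the kernel:

    # from tools/testing/selftests/pstore, as root
    ./pstore_crash_test           # writes markers, then crashes the kernel
    # ... machine reboots ...
    ./pstore_post_reboot_tests    # verifies the records pstore preserved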
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/pstore'
ptp test: not in Makefile
2019-02-19 00:20:02 make TARGETS=ptp
make[1]: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/ptp'
Makefile:10: warning: overriding recipe for target 'clean'
../lib.mk:137: warning: ignoring old recipe for target 'clean'
gcc -I../../../../usr/include/ testptp.c -lrt -o testptp
make[1]: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/ptp'
2019-02-19 00:20:02 make run_tests -C ptp
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/ptp'
Makefile:10: warning: overriding recipe for target 'clean'
../lib.mk:137: warning: ignoring old recipe for target 'clean'
TAP version 13
selftests: ptp: testptp
========================================
ok 1..1 selftests: ptp: testptp [PASS]
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/ptp'
2019-02-19 00:20:02 make run_tests -C ptrace
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/ptrace'
gcc -iquote../../../../include/uapi -Wall peeksiginfo.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/ptrace/peeksiginfo
TAP version 13
selftests: ptrace: peeksiginfo
========================================
PASS
ok 1..1 selftests: ptrace: peeksiginfo [PASS]
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/ptrace'
2019-02-19 00:20:02 make run_tests -C rseq
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/rseq'
gcc -O2 -Wall -g -I./ -I../../../../usr/include/ -L./ -Wl,-rpath=./ -shared -fPIC rseq.c -lpthread -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/rseq/librseq.so
gcc -O2 -Wall -g -I./ -I../../../../usr/include/ -L./ -Wl,-rpath=./ basic_test.c -lpthread -lrseq -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/rseq/basic_test
gcc -O2 -Wall -g -I./ -I../../../../usr/include/ -L./ -Wl,-rpath=./ basic_percpu_ops_test.c -lpthread -lrseq -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/rseq/basic_percpu_ops_test
gcc -O2 -Wall -g -I./ -I../../../../usr/include/ -L./ -Wl,-rpath=./ param_test.c -lpthread -lrseq -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/rseq/param_test
gcc -O2 -Wall -g -I./ -I../../../../usr/include/ -L./ -Wl,-rpath=./ -DBENCHMARK param_test.c -lpthread -lrseq -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/rseq/param_test_benchmark
gcc -O2 -Wall -g -I./ -I../../../../usr/include/ -L./ -Wl,-rpath=./ -DRSEQ_COMPARE_TWICE param_test.c -lpthread -lrseq -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/rseq/param_test_compare_twice
TAP version 13
selftests: rseq: basic_test
========================================
testing current cpu
ok 1..1 selftests: rseq: basic_test [PASS]
selftests: rseq: basic_percpu_ops_test
========================================
spinlock
percpu_list
ok 1..2 selftests: rseq: basic_percpu_ops_test [PASS]
selftests: rseq: param_test
========================================
ok 1..3 selftests: rseq: param_test [PASS]
selftests: rseq: param_test_benchmark
========================================
ok 1..4 selftests: rseq: param_test_benchmark [PASS]
selftests: rseq: param_test_compare_twice
========================================
ok 1..5 selftests: rseq: param_test_compare_twice [PASS]
selftests: rseq: run_param_test.sh
========================================
Default parameters
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Loop injection: 10000 loops
Injecting at <1>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Injecting at <2>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Injecting at <3>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Injecting at <4>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Injecting at <5>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Injecting at <6>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Injecting at <7>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Injecting at <8>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Injecting at <9>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Yield injection (25%)
Injecting at <7>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Injecting at <8>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Injecting at <9>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Yield injection (50%)
Injecting at <7>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Injecting at <8>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Injecting at <9>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Yield injection (100%)
Injecting at <7>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Injecting at <8>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Injecting at <9>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Kill injection (25%)
Injecting at <7>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Injecting at <8>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Injecting at <9>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Kill injection (50%)
Injecting at <7>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Injecting at <8>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Injecting at <9>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Kill injection (100%)
Injecting at <7>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Injecting at <8>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Injecting at <9>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Sleep injection (1ms, 25%)
Injecting at <7>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Injecting at <8>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Injecting at <9>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Sleep injection (1ms, 50%)
Injecting at <7>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Injecting at <8>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Injecting at <9>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Sleep injection (1ms, 100%)
Injecting at <7>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Injecting at <8>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
Injecting at <9>
Running test spinlock
Running compare-twice test spinlock
Running test list
Running compare-twice test list
Running test buffer
Running compare-twice test buffer
Running test buffer with barrier
Running compare-twice test buffer with barrier
Running test memcpy
Running compare-twice test memcpy
Running test memcpy with barrier
Running compare-twice test memcpy with barrier
Running test increment
Running compare-twice test increment
ok 1..6 selftests: rseq: run_param_test.sh [PASS]
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/rseq'
2019-02-19 00:53:46 make run_tests -C rtc
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/rtc'
gcc -O3 -Wl,-no-as-needed -Wall -lrt -lpthread -lm rtctest.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/rtc/rtctest
gcc -O3 -Wl,-no-as-needed -Wall -lrt -lpthread -lm setdate.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/rtc/setdate
TAP version 13
selftests: rtc: rtctest
========================================
rtctest.c:49:rtc.date_read:Current RTC date/time is 19/02/2019 00:53:47.
rtctest.c:137:rtc.alarm_alm_set:Alarm time now set to 00:53:56.
rtctest.c:156:rtc.alarm_alm_set:data: 1a0
rtctest.c:195:rtc.alarm_wkalm_set:Alarm time now set to 19/02/2019 00:53:59.
rtctest.c:239:rtc.alarm_alm_set_minute:Alarm time now set to 00:54:00.
rtctest.c:258:rtc.alarm_alm_set_minute:data: 1a0
rtctest.c:297:rtc.alarm_wkalm_set_minute:Alarm time now set to 19/02/2019 00:55:00.
[==========] Running 7 tests from 2 test cases.
[ RUN ] rtc.date_read
[ OK ] rtc.date_read
[ RUN ] rtc.uie_read
[ OK ] rtc.uie_read
[ RUN ] rtc.uie_select
[ OK ] rtc.uie_select
[ RUN ] rtc.alarm_alm_set
[ OK ] rtc.alarm_alm_set
[ RUN ] rtc.alarm_wkalm_set
[ OK ] rtc.alarm_wkalm_set
[ RUN ] rtc.alarm_alm_set_minute
[ OK ] rtc.alarm_alm_set_minute
[ RUN ] rtc.alarm_wkalm_set_minute
[ OK ] rtc.alarm_wkalm_set_minute
[==========] 7 / 7 tests passed.
[ PASSED ]
ok 1..1 selftests: rtc: rtctest [PASS]
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/rtc'
2019-02-19 00:54:59 make run_tests -C seccomp
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/seccomp'
gcc -Wl,-no-as-needed -Wall seccomp_bpf.c -lpthread -o seccomp_bpf
gcc -Wl,-no-as-needed -Wall seccomp_benchmark.c -o seccomp_benchmark
TAP version 13
selftests: seccomp: seccomp_bpf
========================================
[==========] Running 72 tests from 1 test cases.
[ RUN ] global.mode_strict_support
[ OK ] global.mode_strict_support
[ RUN ] global.mode_strict_cannot_call_prctl
[ OK ] global.mode_strict_cannot_call_prctl
[ RUN ] global.no_new_privs_support
[ OK ] global.no_new_privs_support
[ RUN ] global.mode_filter_support
[ OK ] global.mode_filter_support
[ RUN ] global.mode_filter_without_nnp
[ OK ] global.mode_filter_without_nnp
[ RUN ] global.filter_size_limits
[ OK ] global.filter_size_limits
[ RUN ] global.filter_chain_limits
[ OK ] global.filter_chain_limits
[ RUN ] global.mode_filter_cannot_move_to_strict
[ OK ] global.mode_filter_cannot_move_to_strict
[ RUN ] global.mode_filter_get_seccomp
[ OK ] global.mode_filter_get_seccomp
[ RUN ] global.ALLOW_all
[ OK ] global.ALLOW_all
[ RUN ] global.empty_prog
[ OK ] global.empty_prog
[ RUN ] global.log_all
[ OK ] global.log_all
[ RUN ] global.unknown_ret_is_kill_inside
[ OK ] global.unknown_ret_is_kill_inside
[ RUN ] global.unknown_ret_is_kill_above_allow
[ OK ] global.unknown_ret_is_kill_above_allow
[ RUN ] global.KILL_all
[ OK ] global.KILL_all
[ RUN ] global.KILL_one
[ OK ] global.KILL_one
[ RUN ] global.KILL_one_arg_one
[ OK ] global.KILL_one_arg_one
[ RUN ] global.KILL_one_arg_six
[ OK ] global.KILL_one_arg_six
[ RUN ] global.KILL_thread
[ OK ] global.KILL_thread
[ RUN ] global.KILL_process
[ OK ] global.KILL_process
[ RUN ] global.arg_out_of_range
[ OK ] global.arg_out_of_range
[ RUN ] global.ERRNO_valid
[ OK ] global.ERRNO_valid
[ RUN ] global.ERRNO_zero
[ OK ] global.ERRNO_zero
[ RUN ] global.ERRNO_capped
[ OK ] global.ERRNO_capped
[ RUN ] global.ERRNO_order
[ OK ] global.ERRNO_order
[ RUN ] TRAP.dfl
[ OK ] TRAP.dfl
[ RUN ] TRAP.ign
[ OK ] TRAP.ign
[ RUN ] TRAP.handler
[ OK ] TRAP.handler
[ RUN ] precedence.allow_ok
[ OK ] precedence.allow_ok
[ RUN ] precedence.kill_is_highest
[ OK ] precedence.kill_is_highest
[ RUN ] precedence.kill_is_highest_in_any_order
[ OK ] precedence.kill_is_highest_in_any_order
[ RUN ] precedence.trap_is_second
[ OK ] precedence.trap_is_second
[ RUN ] precedence.trap_is_second_in_any_order
[ OK ] precedence.trap_is_second_in_any_order
[ RUN ] precedence.errno_is_third
[ OK ] precedence.errno_is_third
[ RUN ] precedence.errno_is_third_in_any_order
[ OK ] precedence.errno_is_third_in_any_order
[ RUN ] precedence.trace_is_fourth
[ OK ] precedence.trace_is_fourth
[ RUN ] precedence.trace_is_fourth_in_any_order
[ OK ] precedence.trace_is_fourth_in_any_order
[ RUN ] precedence.log_is_fifth
[ OK ] precedence.log_is_fifth
[ RUN ] precedence.log_is_fifth_in_any_order
[ OK ] precedence.log_is_fifth_in_any_order
[ RUN ] TRACE_poke.read_has_side_effects
[ OK ] TRACE_poke.read_has_side_effects
[ RUN ] TRACE_poke.getpid_runs_normally
[ OK ] TRACE_poke.getpid_runs_normally
[ RUN ] TRACE_syscall.ptrace_syscall_redirected
[ OK ] TRACE_syscall.ptrace_syscall_redirected
[ RUN ] TRACE_syscall.ptrace_syscall_dropped
[ OK ] TRACE_syscall.ptrace_syscall_dropped
[ RUN ] TRACE_syscall.syscall_allowed
[ OK ] TRACE_syscall.syscall_allowed
[ RUN ] TRACE_syscall.syscall_redirected
[ OK ] TRACE_syscall.syscall_redirected
[ RUN ] TRACE_syscall.syscall_dropped
[ OK ] TRACE_syscall.syscall_dropped
[ RUN ] TRACE_syscall.skip_after_RET_TRACE
[ OK ] TRACE_syscall.skip_after_RET_TRACE
[ RUN ] TRACE_syscall.kill_after_RET_TRACE
[ OK ] TRACE_syscall.kill_after_RET_TRACE
[ RUN ] TRACE_syscall.skip_after_ptrace
[ OK ] TRACE_syscall.skip_after_ptrace
[ RUN ] TRACE_syscall.kill_after_ptrace
[ OK ] TRACE_syscall.kill_after_ptrace
[ RUN ] global.seccomp_syscall
[ OK ] global.seccomp_syscall
[ RUN ] global.seccomp_syscall_mode_lock
[ OK ] global.seccomp_syscall_mode_lock
[ RUN ] global.detect_seccomp_filter_flags
[ OK ] global.detect_seccomp_filter_flags
[ RUN ] global.TSYNC_first
[ OK ] global.TSYNC_first
[ RUN ] TSYNC.siblings_fail_prctl
[ OK ] TSYNC.siblings_fail_prctl
[ RUN ] TSYNC.two_siblings_with_ancestor
[ OK ] TSYNC.two_siblings_with_ancestor
[ RUN ] TSYNC.two_sibling_want_nnp
[ OK ] TSYNC.two_sibling_want_nnp
[ RUN ] TSYNC.two_siblings_with_no_filter
[ OK ] TSYNC.two_siblings_with_no_filter
[ RUN ] TSYNC.two_siblings_with_one_divergence
[ OK ] TSYNC.two_siblings_with_one_divergence
[ RUN ] TSYNC.two_siblings_not_under_filter
[ OK ] TSYNC.two_siblings_not_under_filter
[ RUN ] global.syscall_restart
[ OK ] global.syscall_restart
[ RUN ] global.filter_flag_log
[ OK ] global.filter_flag_log
[ RUN ] global.get_action_avail
[ OK ] global.get_action_avail
[ RUN ] global.get_metadata
[ OK ] global.get_metadata
[ RUN ] global.user_notification_basic
[ OK ] global.user_notification_basic
[ RUN ] global.user_notification_kill_in_middle
[ OK ] global.user_notification_kill_in_middle
[ RUN ] global.user_notification_signal
comp_syscall
[ OK ] global.seccomp_syscall
[ RUN ] global.seccomp_syscall_mode_lock
[ OK ] global.seccomp_syscall_mode_lock
[ RUN ] global.detect_seccomp_filter_flags
[ OK ] global.detect_seccomp_filter_flags
[ RUN ] global.TSYNC_first
[ OK ] global.TSYNC_first
[ RUN ] TSYNC.siblings_fail_prctl
[ OK ] TSYNC.siblings_fail_prctl
[ RUN ] TSYNC.two_siblings_with_ancestor
[ OK ] TSYNC.two_siblings_with_ancestor
[ RUN ] TSYNC.two_sibling_want_nnp
[ OK ] TSYNC.two_sibling_want_nnp
[ RUN ] TSYNC.two_siblings_with_no_filter
[ OK ] TSYNC.two_siblings_with_no_filter
[ RUN ] TSYNC.two_siblings_with_one_divergence
[ OK ] TSYNC.two_siblings_with_one_divergence
[ RUN ] TSYNC.two_siblings_not_under_filter
[ OK ] TSYNC.two_siblings_not_under_filter
[ RUN ] global.syscall_restart
[ OK ] global.syscall_restart
[ RUN ] global.filter_flag_log
[ OK ] global.filter_flag_log
[ RUN ] global.get_action_avail
[ OK ] global.get_action_avail
[ RUN ] global.get_metadata
[ OK ] global.get_metadata
[ RUN ] global.user_notification_basic
[ OK ] global.user_notification_basic
[ RUN ] global.user_notification_kill_in_middle
[ OK ] global.user_notification_kill_in_middle
[ RUN ] global.user_notification_signal
[ OK ] global.user_notification_signal
[ RUN ] global.user_notification_closed_listener
comp_syscall
[ OK ] global.seccomp_syscall
[ RUN ] global.seccomp_syscall_mode_lock
[ OK ] global.seccomp_syscall_mode_lock
[ RUN ] global.detect_seccomp_filter_flags
[ OK ] global.detect_seccomp_filter_flags
[ RUN ] global.TSYNC_first
[ OK ] global.TSYNC_first
[ RUN ] TSYNC.siblings_fail_prctl
[ OK ] TSYNC.siblings_fail_prctl
[ RUN ] TSYNC.two_siblings_with_ancestor
[ OK ] TSYNC.two_siblings_with_ancestor
[ RUN ] TSYNC.two_sibling_want_nnp
[ OK ] TSYNC.two_sibling_want_nnp
[ RUN ] TSYNC.two_siblings_with_no_filter
[ OK ] TSYNC.two_siblings_with_no_filter
[ RUN ] TSYNC.two_siblings_with_one_divergence
[ OK ] TSYNC.two_siblings_with_one_divergence
[ RUN ] TSYNC.two_siblings_not_under_filter
[ OK ] TSYNC.two_siblings_not_under_filter
[ RUN ] global.syscall_restart
[ OK ] global.syscall_restart
[ RUN ] global.filter_flag_log
[ OK ] global.filter_flag_log
[ RUN ] global.get_action_avail
[ OK ] global.get_action_avail
[ RUN ] global.get_metadata
[ OK ] global.get_metadata
[ RUN ] global.user_notification_basic
[ OK ] global.user_notification_basic
[ RUN ] global.user_notification_kill_in_middle
[ OK ] global.user_notification_kill_in_middle
[ RUN ] global.user_notification_signal
[ OK ] global.user_notification_signal
[ RUN ] global.user_notification_closed_listener
[ OK ] global.user_notification_closed_listener
[ RUN ] global.user_notification_child_pid_ns
comp_syscall
[ OK ] global.seccomp_syscall
[ RUN ] global.seccomp_syscall_mode_lock
[ OK ] global.seccomp_syscall_mode_lock
[ RUN ] global.detect_seccomp_filter_flags
[ OK ] global.detect_seccomp_filter_flags
[ RUN ] global.TSYNC_first
[ OK ] global.TSYNC_first
[ RUN ] TSYNC.siblings_fail_prctl
[ OK ] TSYNC.siblings_fail_prctl
[ RUN ] TSYNC.two_siblings_with_ancestor
[ OK ] TSYNC.two_siblings_with_ancestor
[ RUN ] TSYNC.two_sibling_want_nnp
[ OK ] TSYNC.two_sibling_want_nnp
[ RUN ] TSYNC.two_siblings_with_no_filter
[ OK ] TSYNC.two_siblings_with_no_filter
[ RUN ] TSYNC.two_siblings_with_one_divergence
[ OK ] TSYNC.two_siblings_with_one_divergence
[ RUN ] TSYNC.two_siblings_not_under_filter
[ OK ] TSYNC.two_siblings_not_under_filter
[ RUN ] global.syscall_restart
[ OK ] global.syscall_restart
[ RUN ] global.filter_flag_log
[ OK ] global.filter_flag_log
[ RUN ] global.get_action_avail
[ OK ] global.get_action_avail
[ RUN ] global.get_metadata
[ OK ] global.get_metadata
[ RUN ] global.user_notification_basic
[ OK ] global.user_notification_basic
[ RUN ] global.user_notification_kill_in_middle
[ OK ] global.user_notification_kill_in_middle
[ RUN ] global.user_notification_signal
[ OK ] global.user_notification_signal
[ RUN ] global.user_notification_closed_listener
[ OK ] global.user_notification_closed_listener
[ RUN ] global.user_notification_child_pid_ns
[ OK ] global.user_notification_child_pid_ns
[ RUN ] global.user_notification_sibling_pid_ns
comp_syscall
[ OK ] global.seccomp_syscall
[ RUN ] global.seccomp_syscall_mode_lock
[ OK ] global.seccomp_syscall_mode_lock
[ RUN ] global.detect_seccomp_filter_flags
[ OK ] global.detect_seccomp_filter_flags
[ RUN ] global.TSYNC_first
[ OK ] global.TSYNC_first
[ RUN ] TSYNC.siblings_fail_prctl
[ OK ] TSYNC.siblings_fail_prctl
[ RUN ] TSYNC.two_siblings_with_ancestor
[ OK ] TSYNC.two_siblings_with_ancestor
[ RUN ] TSYNC.two_sibling_want_nnp
[ OK ] TSYNC.two_sibling_want_nnp
[ RUN ] TSYNC.two_siblings_with_no_filter
[ OK ] TSYNC.two_siblings_with_no_filter
[ RUN ] TSYNC.two_siblings_with_one_divergence
[ OK ] TSYNC.two_siblings_with_one_divergence
[ RUN ] TSYNC.two_siblings_not_under_filter
[ OK ] TSYNC.two_siblings_not_under_filter
[ RUN ] global.syscall_restart
[ OK ] global.syscall_restart
[ RUN ] global.filter_flag_log
[ OK ] global.filter_flag_log
[ RUN ] global.get_action_avail
[ OK ] global.get_action_avail
[ RUN ] global.get_metadata
[ OK ] global.get_metadata
[ RUN ] global.user_notification_basic
[ OK ] global.user_notification_basic
[ RUN ] global.user_notification_kill_in_middle
[ OK ] global.user_notification_kill_in_middle
[ RUN ] global.user_notification_signal
[ OK ] global.user_notification_signal
[ RUN ] global.user_notification_closed_listener
[ OK ] global.user_notification_closed_listener
[ RUN ] global.user_notification_child_pid_ns
[ OK ] global.user_notification_child_pid_ns
[ RUN ] global.user_notification_sibling_pid_ns
comp_syscall
[ OK ] global.seccomp_syscall
[ RUN ] global.seccomp_syscall_mode_lock
[ OK ] global.seccomp_syscall_mode_lock
[ RUN ] global.detect_seccomp_filter_flags
[ OK ] global.detect_seccomp_filter_flags
[ RUN ] global.TSYNC_first
[ OK ] global.TSYNC_first
[ RUN ] TSYNC.siblings_fail_prctl
[ OK ] TSYNC.siblings_fail_prctl
[ RUN ] TSYNC.two_siblings_with_ancestor
[ OK ] TSYNC.two_siblings_with_ancestor
[ RUN ] TSYNC.two_sibling_want_nnp
[ OK ] TSYNC.two_sibling_want_nnp
[ RUN ] TSYNC.two_siblings_with_no_filter
[ OK ] TSYNC.two_siblings_with_no_filter
[ RUN ] TSYNC.two_siblings_with_one_divergence
[ OK ] TSYNC.two_siblings_with_one_divergence
[ RUN ] TSYNC.two_siblings_not_under_filter
[ OK ] TSYNC.two_siblings_not_under_filter
[ RUN ] global.syscall_restart
[ OK ] global.syscall_restart
[ RUN ] global.filter_flag_log
[ OK ] global.filter_flag_log
[ RUN ] global.get_action_avail
[ OK ] global.get_action_avail
[ RUN ] global.get_metadata
[ OK ] global.get_metadata
[ RUN ] global.user_notification_basic
[ OK ] global.user_notification_basic
[ RUN ] global.user_notification_kill_in_middle
[ OK ] global.user_notification_kill_in_middle
[ RUN ] global.user_notification_signal
[ OK ] global.user_notification_signal
[ RUN ] global.user_notification_closed_listener
[ OK ] global.user_notification_closed_listener
[ RUN ] global.user_notification_child_pid_ns
[ OK ] global.user_notification_child_pid_ns
[ RUN ] global.user_notification_sibling_pid_ns
comp_syscall
[ OK ] global.seccomp_syscall
[ RUN ] global.seccomp_syscall_mode_lock
[ OK ] global.seccomp_syscall_mode_lock
[ RUN ] global.detect_seccomp_filter_flags
[ OK ] global.detect_seccomp_filter_flags
[ RUN ] global.TSYNC_first
[ OK ] global.TSYNC_first
[ RUN ] TSYNC.siblings_fail_prctl
[ OK ] TSYNC.siblings_fail_prctl
[ RUN ] TSYNC.two_siblings_with_ancestor
[ OK ] TSYNC.two_siblings_with_ancestor
[ RUN ] TSYNC.two_sibling_want_nnp
[ OK ] TSYNC.two_sibling_want_nnp
[ RUN ] TSYNC.two_siblings_with_no_filter
[ OK ] TSYNC.two_siblings_with_no_filter
[ RUN ] TSYNC.two_siblings_with_one_divergence
[ OK ] TSYNC.two_siblings_with_one_divergence
[ RUN ] TSYNC.two_siblings_not_under_filter
[ OK ] TSYNC.two_siblings_not_under_filter
[ RUN ] global.syscall_restart
[ OK ] global.syscall_restart
[ RUN ] global.filter_flag_log
[ OK ] global.filter_flag_log
[ RUN ] global.get_action_avail
[ OK ] global.get_action_avail
[ RUN ] global.get_metadata
[ OK ] global.get_metadata
[ RUN ] global.user_notification_basic
[ OK ] global.user_notification_basic
[ RUN ] global.user_notification_kill_in_middle
[ OK ] global.user_notification_kill_in_middle
[ RUN ] global.user_notification_signal
[ OK ] global.user_notification_signal
[ RUN ] global.user_notification_closed_listener
[ OK ] global.user_notification_closed_listener
[ RUN ] global.user_notification_child_pid_ns
[ OK ] global.user_notification_child_pid_ns
[ RUN ] global.user_notification_sibling_pid_ns
[ OK ] global.user_notification_sibling_pid_ns
[ RUN ] global.user_notification_fault_recv
comp_syscall
[ OK ] global.seccomp_syscall
[ RUN ] global.seccomp_syscall_mode_lock
[ OK ] global.seccomp_syscall_mode_lock
[ RUN ] global.detect_seccomp_filter_flags
[ OK ] global.detect_seccomp_filter_flags
[ RUN ] global.TSYNC_first
[ OK ] global.TSYNC_first
[ RUN ] TSYNC.siblings_fail_prctl
[ OK ] TSYNC.siblings_fail_prctl
[ RUN ] TSYNC.two_siblings_with_ancestor
[ OK ] TSYNC.two_siblings_with_ancestor
[ RUN ] TSYNC.two_sibling_want_nnp
[ OK ] TSYNC.two_sibling_want_nnp
[ RUN ] TSYNC.two_siblings_with_no_filter
[ OK ] TSYNC.two_siblings_with_no_filter
[ RUN ] TSYNC.two_siblings_with_one_divergence
[ OK ] TSYNC.two_siblings_with_one_divergence
[ RUN ] TSYNC.two_siblings_not_under_filter
[ OK ] TSYNC.two_siblings_not_under_filter
[ RUN ] global.syscall_restart
[ OK ] global.syscall_restart
[ RUN ] global.filter_flag_log
[ OK ] global.filter_flag_log
[ RUN ] global.get_action_avail
[ OK ] global.get_action_avail
[ RUN ] global.get_metadata
[ OK ] global.get_metadata
[ RUN ] global.user_notification_basic
[ OK ] global.user_notification_basic
[ RUN ] global.user_notification_kill_in_middle
[ OK ] global.user_notification_kill_in_middle
[ RUN ] global.user_notification_signal
[ OK ] global.user_notification_signal
[ RUN ] global.user_notification_closed_listener
[ OK ] global.user_notification_closed_listener
[ RUN ] global.user_notification_child_pid_ns
[ OK ] global.user_notification_child_pid_ns
[ RUN ] global.user_notification_sibling_pid_ns
[ OK ] global.user_notification_sibling_pid_ns
[ RUN ] global.user_notification_fault_recv
[ OK ] global.user_notification_fault_recv
[ RUN ] global.seccomp_get_notif_sizes
[ OK ] global.seccomp_get_notif_sizes
[==========] 72 / 72 tests passed.
[ PASSED ]
ok 1..1 selftests: seccomp: seccomp_bpf [PASS]
selftests: seccomp: seccomp_benchmark
========================================
Calibrating reasonable sample size...
1550508902.529610763 - 1550508902.529593018 = 17745
1550508902.529659235 - 1550508902.529624715 = 34520
1550508902.529750301 - 1550508902.529661101 = 89200
1550508902.529911844 - 1550508902.529752278 = 159566
1550508902.530194944 - 1550508902.529914636 = 280308
1550508902.530767388 - 1550508902.530197631 = 569757
1550508902.531934050 - 1550508902.530769912 = 1164138
1550508902.534279121 - 1550508902.531937387 = 2341734
1550508902.539050194 - 1550508902.534282589 = 4767605
1550508902.548742785 - 1550508902.539055346 = 9687439
1550508902.568579306 - 1550508902.548746982 = 19832324
1550508902.606993431 - 1550508902.568584574 = 38408857
1550508902.683909763 - 1550508902.606998476 = 76911287
1550508902.836080783 - 1550508902.683925539 = 152155244
1550508903.135341904 - 1550508902.836086328 = 299255576
1550508903.888593466 - 1550508903.135346985 = 753246481
1550508905.248434850 - 1550508903.888599337 = 1359835513
1550508907.771756305 - 1550508905.248442445 = 2523313860
1550508912.820201621 - 1550508907.771763889 = 5048437732
1550508923.024991061 - 1550508912.820209425 = 10204781636
Benchmarking 16777216 samples...
28.407345730 - 19.373343798 = 9034001932
getpid native: 538 ns
40.654192508 - 28.407456313 = 12246736195
getpid RET_ALLOW: 729 ns
Estimated seccomp overhead per syscall: 191 ns
ok 1..2 selftests: seccomp: seccomp_benchmark [PASS]
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/seccomp'
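A side note on the arithmetic above: the benchmark divides each elapsed-nanosecond total by the 16777216 samples and then subtracts the native getpid cost from the RET_ALLOW cost. A minimal standalone C sketch of that calculation, with the constants copied from the log and illustrative variable names:

#include <stdio.h>

int main(void)
{
	unsigned long long samples   = 16777216ULL;     /* from "Benchmarking ... samples" */
	unsigned long long native_ns = 9034001932ULL;   /* 28.407345730 - 19.373343798 */
	unsigned long long allow_ns  = 12246736195ULL;  /* 40.654192508 - 28.407456313 */

	unsigned long long native = native_ns / samples;  /* 538 ns */
	unsigned long long allow  = allow_ns / samples;   /* 729 ns */

	printf("getpid native: %llu ns\n", native);
	printf("getpid RET_ALLOW: %llu ns\n", allow);
	printf("estimated seccomp overhead per syscall: %llu ns\n", allow - native);
	return 0;
}

The calibration lines before the benchmark appear to double the sample count until a single run takes on the order of ten seconds, so the per-call figures are averaged over a long enough interval to be stable at nanosecond granularity.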
2019-02-19 00:55:44 make run_tests -C sigaltstack
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/sigaltstack'
gcc -Wall sas.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/sigaltstack/sas
TAP version 13
selftests: sigaltstack: sas
========================================
ok 1 Initial sigaltstack state was SS_DISABLE
# [RUN] signal USR1
ok 2 sigaltstack is disabled in sighandler
# [RUN] switched to user ctx
# [RUN] signal USR2
# [OK] Stack preserved
ok 3 sigaltstack is still SS_AUTODISARM after signal
Pass 3 Fail 0 Xfail 0 Xpass 0 Skip 0 Error 0
1..3
ok 1..1 selftests: sigaltstack: sas [PASS]
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/sigaltstack'
2019-02-19 00:55:44 make run_tests -C size
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/size'
gcc -static -ffreestanding -nostartfiles -s get_size.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/size/get_size
TAP version 13
selftests: size: get_size
========================================
TAP version 13
# Testing system size.
ok 1 get runtime memory use
# System runtime memory report (units in Kilobytes):
---
Total: 4033148
Free: 837776
Buffer: 0
In use: 3195372
...
1..1
ok 1..1 selftests: size: get_size [PASS]
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/size'
2019-02-19 00:55:44 make run_tests -C sparc64
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/sparc64'
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/sparc64'
2019-02-19 00:55:44 make run_tests -C splice
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/splice'
gcc default_file_splice_read.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/splice/default_file_splice_read
TAP version 13
selftests: splice: default_file_splice_read.sh
========================================
ok 1..1 selftests: splice: default_file_splice_read.sh [PASS]
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/splice'
2019-02-19 00:55:45 make run_tests -C static_keys
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/static_keys'
TAP version 13
selftests: static_keys: test_static_keys.sh
========================================
static_key: ok
ok 1..1 selftests: static_keys: test_static_keys.sh [PASS]
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/static_keys'
2019-02-19 00:55:45 make run_tests -C sync
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/sync'
gcc -c sync_alloc.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/sync/sync_alloc.o
gcc -c sync_fence.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/sync/sync_fence.o
gcc -c sync_merge.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/sync/sync_merge.o
gcc -c sync_wait.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/sync/sync_wait.o
gcc -c sync_stress_parallelism.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/sync/sync_stress_parallelism.o
gcc -c sync_stress_consumer.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/sync/sync_stress_consumer.o
gcc -c sync_stress_merge.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/sync/sync_stress_merge.o
gcc -c sync_test.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/sync/sync_test.o -O2 -g -std=gnu89 -pthread -Wall -Wextra -I../../../../usr/include/
gcc -c sync.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/sync/sync.o -O2 -g -std=gnu89 -pthread -Wall -Wextra -I../../../../usr/include/
gcc -o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/sync/sync_test /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/sync/sync_test.o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/sync/sync.o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/sync/sync_alloc.o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/sync/sync_fence.o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/sync/sync_merge.o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/sync/sync_wait.o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/sync/sync_stress_parallelism.o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/sync/sync_stress_consumer.o /usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/sync/sync_stress_merge.o -O2 -g -std=gnu89 -pthread -Wall -Wextra -I../../../../usr/include/ -pthread
TAP version 13
selftests: sync: sync_test
========================================
# [RUN] Testing sync framework
ok 1 [RUN] test_alloc_timeline
ok 2 [RUN] test_alloc_fence
ok 3 [RUN] test_alloc_fence_negative
ok 4 [RUN] test_fence_one_timeline_wait
ok 5 [RUN] test_fence_one_timeline_merge
ok 6 [RUN] test_fence_merge_same_fence
ok 7 [RUN] test_fence_multi_timeline_wait
ok 8 [RUN] test_stress_two_threads_shared_timeline
ok 9 [RUN] test_consumer_stress_multi_producer_single_consumer
ok 10 [RUN] test_merge_stress_random_merge
Pass 10 Fail 0 Xfail 0 Xpass 0 Skip 0 Error 0
1..10
ok 1..1 selftests: sync: sync_test [PASS]
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/sync'
2019-02-19 00:55:48 make run_tests -C sysctl
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/sysctl'
TAP version 13
selftests: sysctl: sysctl.sh
========================================
Checking production write strict setting ... ok
Tue Feb 19 00:55:48 CST 2019
Running test: sysctl_test_0001 - run #0
== Testing sysctl behavior against /proc/sys/debug/test_sysctl/int_0001 ==
Writing test file ... ok
Checking sysctl is not set to test value ... ok
Writing sysctl from shell ... ok
Resetting sysctl to original value ... ok
Writing entire sysctl in single write ... ok
Writing middle of sysctl after synchronized seek ... ok
Writing beyond end of sysctl ... ok
Writing sysctl with multiple long writes ... ok
Checking ignoring spaces up to PAGE_SIZE works on write ...ok
Checking passing PAGE_SIZE of spaces fails on write ...ok
Tue Feb 19 00:55:48 CST 2019
Running test: sysctl_test_0002 - run #0
== Testing sysctl behavior against /proc/sys/debug/test_sysctl/string_0001 ==
Writing test file ... ok
Checking sysctl is not set to test value ... ok
Writing sysctl from shell ... ok
Resetting sysctl to original value ... ok
Writing entire sysctl in single write ... ok
Writing middle of sysctl after synchronized seek ... ok
Writing beyond end of sysctl ... ok
Writing sysctl with multiple long writes ... ok
Writing entire sysctl in short writes ... ok
Writing middle of sysctl after unsynchronized seek ... ok
Checking sysctl maxlen is at least 65 ... ok
Checking sysctl keeps original string on overflow append ... ok
Checking sysctl stays NULL terminated on write ... ok
Checking sysctl stays NULL terminated on overwrite ... ok
Tue Feb 19 00:55:48 CST 2019
Running test: sysctl_test_0003 - run #0
== Testing sysctl behavior against /proc/sys/debug/test_sysctl/int_0002 ==
Writing test file ... ok
Checking sysctl is not set to test value ... ok
Writing sysctl from shell ... ok
Resetting sysctl to original value ... ok
Writing entire sysctl in single write ... ok
Writing middle of sysctl after synchronized seek ... ok
Writing beyond end of sysctl ... ok
Writing sysctl with multiple long writes ... ok
Checking ignoring spaces up to PAGE_SIZE works on write ...ok
Checking passing PAGE_SIZE of spaces fails on write ...ok
Testing INT_MAX works ...ok
Testing INT_MAX + 1 will fail as expected...ok
Testing negative values will work as expected...ok
Tue Feb 19 00:55:48 CST 2019
Running test: sysctl_test_0004 - run #0
== Testing sysctl behavior against /proc/sys/debug/test_sysctl/uint_0001 ==
Writing test file ... ok
Checking sysctl is not set to test value ... ok
Writing sysctl from shell ... ok
Resetting sysctl to original value ... ok
Writing entire sysctl in single write ... ok
Writing middle of sysctl after synchronized seek ... ok
Writing beyond end of sysctl ... ok
Writing sysctl with multiple long writes ... ok
Checking ignoring spaces up to PAGE_SIZE works on write ...ok
Checking passing PAGE_SIZE of spaces fails on write ...ok
Testing UINT_MAX works ...ok
Testing UINT_MAX + 1 will fail as expected...ok
Testing negative values will not work as expected ...ok
Tue Feb 19 00:55:48 CST 2019
Running test: sysctl_test_0005 - run #0
Testing array works as expected ... ok
Testing skipping trailing array elements works ... ok
Testing PAGE_SIZE limit on array works ... ok
Testing exceeding PAGE_SIZE limit fails as expected ... Files - and /proc/sys/debug/test_sysctl/int_0003 differ
ok
Tue Feb 19 00:55:48 CST 2019
Running test: sysctl_test_0005 - run #1
Testing array works as expected ... ok
Testing skipping trailing array elements works ... ok
Testing PAGE_SIZE limit on array works ... ok
Testing exceeding PAGE_SIZE limit fails as expected ... Files - and /proc/sys/debug/test_sysctl/int_0003 differ
ok
Tue Feb 19 00:55:48 CST 2019
Running test: sysctl_test_0005 - run #2
Testing array works as expected ... ok
Testing skipping trailing array elements works ... ok
Testing PAGE_SIZE limit on array works ... ok
Testing exceeding PAGE_SIZE limit fails as expected ... Files - and /proc/sys/debug/test_sysctl/int_0003 differ
ok
ok 1..1 selftests: sysctl: sysctl.sh [PASS]
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-cdaa813278ddc616ee201eacda77f63996b5dd2d/tools/testing/selftests/sysctl'
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
Rong Chen
[blk] 510c824ac6: WARNING:possible_recursive_locking_detected
by kernel test robot
FYI, we noticed the following commit (built with gcc-8):
commit: 510c824ac6cce98962e5909efe6c4ad17ba1771e ("blk-cgroup: Convert to XArray")
git://git.infradead.org/users/willy/linux-dax.git xarray-conv
in testcase: trinity
with following parameters:
runtime: 300s
test-description: Trinity is a Linux system call fuzz tester.
test-url: http://codemonkey.org.uk/projects/trinity/
on test machine: qemu-system-i386 -enable-kvm -cpu SandyBridge -smp 2 -m 1G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
+------------------------------------------------------------------------------------+------------+------------+
| | f38fbc92dc | 510c824ac6 |
+------------------------------------------------------------------------------------+------------+------------+
| boot_successes | 0 | 0 |
| boot_failures | 52 | 13 |
| WARNING:at_drivers/gpu/drm/drm_mode_config.c:#drm_mode_config_cleanup | 52 | 13 |
| EIP:drm_mode_config_cleanup | 52 | 13 |
| kobject(#):tried_to_init_an_initialized_object,something_is_seriously_wrong | 52 | 13 |
| WARNING:suspicious_RCU_usage | 52 | |
| include/linux/xarray.h:#suspicious_rcu_dereference_check()usage | 52 | |
| kobject(fd#efeb3):tried_to_init_an_initialized_object,something_is_seriously_wrong | 1 | |
| WARNING:possible_recursive_locking_detected | 0 | 13 |
+------------------------------------------------------------------------------------+------------+------------+
[ 630.593508] WARNING: possible recursive locking detected
[ 630.593508] 5.0.0-rc5-00027-g510c824 #1 Tainted: G W
[ 630.593508] --------------------------------------------
[ 630.593508] swapper/1 is trying to acquire lock:
[ 630.593508] c06af2e5 (&(&xa->xa_lock)->rlock#5){....}, at: xa_erase+0x10/0x2a
[ 630.593508]
[ 630.593508] but task is already holding lock:
[ 630.593508] c06af2e5 (&(&xa->xa_lock)->rlock#5){....}, at: blkcg_exit_queue+0x46/0x7c
[ 630.593508]
[ 630.593508] other info that might help us debug this:
[ 630.593508] Possible unsafe locking scenario:
[ 630.593508]
[ 630.593508] CPU0
[ 630.593508] ----
[ 630.593508] lock(&(&xa->xa_lock)->rlock#5);
[ 630.593508] lock(&(&xa->xa_lock)->rlock#5);
[ 630.593508]
[ 630.593508] *** DEADLOCK ***
[ 630.593508]
[ 630.593508] May be due to missing lock nesting notation
[ 630.593508]
[ 630.593508] 4 locks held by swapper/1:
[ 630.593508] #0: bb576619 (&dev->mutex){....}, at: __driver_attach+0x55/0x89
[ 630.593508] #1: eb75eb22 (ide_cfg_mtx){+.+.}, at: ide_host_remove+0x3d/0xe2
[ 630.593508] #2: 9b475a3a (&(&q->queue_lock)->rlock){....}, at: blkcg_exit_queue+0x15/0x7c
[ 630.593508] #3: c06af2e5 (&(&xa->xa_lock)->rlock#5){....}, at: blkcg_exit_queue+0x46/0x7c
[ 630.593508]
[ 630.593508] stack backtrace:
[ 630.593508] CPU: 0 PID: 1 Comm: swapper Tainted: G W 5.0.0-rc5-00027-g510c824 #1
[ 630.593508] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 630.593508] Call Trace:
[ 630.593508] dump_stack+0x16/0x18
[ 630.593508] __lock_acquire+0xbc7/0x1233
[ 630.593508] ? __bfs+0x12/0x19f
[ 630.593508] ? __lock_acquire+0xd9d/0x1233
[ 630.593508] lock_acquire+0xe1/0xff
[ 630.593508] ? xa_erase+0x10/0x2a
[ 630.593508] _raw_spin_lock+0x21/0x30
[ 630.593508] ? xa_erase+0x10/0x2a
[ 630.593508] xa_erase+0x10/0x2a
[ 630.593508] blkg_destroy+0x162/0x1c4
[ 630.593508] ? _raw_spin_lock+0x28/0x30
[ 630.593508] blkcg_exit_queue+0x50/0x7c
[ 630.593508] blk_exit_queue+0x29/0x34
[ 630.593508] blk_cleanup_queue+0x7e/0xa2
[ 630.593508] drive_release_dev+0x2a/0x51
[ 630.593508] device_release+0x42/0x6b
[ 630.593508] kobject_put+0x65/0x79
[ 630.593508] put_device+0xf/0x12
[ 630.593508] device_unregister+0x12/0x15
[ 630.593508] __ide_port_unregister_devices+0x27/0x3e
[ 630.593508] ide_host_remove+0x4d/0xe2
[ 630.593508] ide_pci_remove+0x44/0x71
[ 630.593508] pci_device_remove+0x27/0x79
[ 630.593508] really_probe+0x146/0x291
[ 630.593508] driver_probe_device+0xdf/0x119
[ 630.593508] __driver_attach+0x64/0x89
[ 630.593508] bus_for_each_dev+0x49/0x63
[ 630.593508] driver_attach+0x14/0x16
[ 630.593508] ? driver_probe_device+0x119/0x119
[ 630.593508] bus_add_driver+0xb9/0x182
[ 630.593508] ? opti621_ide_init+0x16/0x16
[ 630.593508] driver_register+0x87/0xb9
[ 630.593508] ? opti621_ide_init+0x16/0x16
[ 630.593508] __pci_register_driver+0x4b/0x4e
[ 630.593508] piix_ide_init+0x8f/0x94
[ 630.593508] do_one_initcall+0xaa/0x1b7
[ 630.593508] ? trace_hardirqs_on_thunk+0xc/0x10
[ 630.593508] ? restore_all_kernel+0xf/0x98
[ 630.593508] kernel_init_freeable+0x1d7/0x260
[ 630.593508] ? rest_init+0x108/0x108
[ 630.593508] kernel_init+0x8/0xd0
[ 630.593508] ret_from_fork+0x19/0x30
[ 631.069177] piix 0000:00:01.1: IDE controller (0x8086:0x7010 rev 0x00)
[ 631.077684] piix 0000:00:01.1: not 100% native mode: will probe irqs later
[ 631.092861] ide0: BM-DMA at 0xc0c0-0xc0c7
[ 631.097720] ide1: BM-DMA at 0xc0c8-0xc0cf
[ 631.101995] Probing IDE interface ide0...
[ 631.753461] Probing IDE interface ide1...
[ 632.512416] hdc: QEMU DVD-ROM, ATAPI CD/DVD-ROM drive
[ 633.204687] hdc: host max PIO4 wanted PIO255(auto-tune) selected PIO0
[ 633.220187] hdc: MWDMA2 mode selected
[ 633.233799] ide0 at 0x1f0-0x1f7,0x3f6 on irq 14
[ 633.246376] ide1 at 0x170-0x177,0x376 on irq 15
[ 633.277330] ide_generic: please use "probe_mask=0x3f" module parameter for probing all legacy ISA IDE ports
[ 633.306793] ide-gd driver 1.18
[ 633.319396] ide-cd driver 5.00
[ 633.340601] ide-cd: hdc: ATAPI 4X DVD-ROM drive, 512kB Cache
[ 633.355018] cdrom: Uniform CD-ROM driver Revision: 3.20
[ 633.413938] ide-cd: hdc: ATAPI 4X DVD-ROM drive, 512kB Cache
[ 633.452222] SCSI Media Changer driver v0.25
[ 633.461721] osd: LOADED open-osd 0.2.1
[ 633.486984] Rounding down aligned max_sectors from 4294967295 to 4294967288
[ 633.499070] db_root: cannot open: /etc/target
[ 633.509133] mtdoops: mtd device (mtddev=name/number) must be supplied
[ 633.516161] L440GX flash mapping: failed to find PIIX4 ISA bridge, cannot continue
[ 633.526103] device id = 2440
[ 633.530592] device id = 2480
[ 633.534596] device id = 24c0
[ 633.543077] device id = 24d0
[ 633.547751] device id = 25a1
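The held-locks list above explains the splat: blkcg_exit_queue() already holds the blkg XArray's xa_lock (lock #3) when blkg_destroy() calls xa_erase(), and xa_erase() takes xa_lock internally, so the same lock class is acquired twice on one CPU. A minimal sketch of the pattern follows; the function and parameter names are illustrative, not the actual blk-cgroup code, and the locked __xa_erase() variant (which expects the caller to already hold xa_lock) is the usual way to avoid this kind of recursion:

#include <linux/xarray.h>

/* Hypothetical reconstruction of the recursion lockdep reports above. */
static void blkg_erase_recursive_sketch(struct xarray *blkgs, unsigned long id)
{
	xa_lock(blkgs);        /* held across teardown, as in blkcg_exit_queue() */
	xa_erase(blkgs, id);   /* xa_erase() takes xa_lock itself: recursive */
	xa_unlock(blkgs);
}

static void blkg_erase_locked_sketch(struct xarray *blkgs, unsigned long id)
{
	xa_lock(blkgs);
	__xa_erase(blkgs, id); /* locked variant: caller must hold xa_lock */
	xa_unlock(blkgs);
}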
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
Rong Chen
[xfs] 4ff1f5b3d9: aim7.jobs-per-min -30.4% regression
by kernel test robot
Greeting,
FYI, we noticed a -30.4% regression of aim7.jobs-per-min due to commit:
commit: 4ff1f5b3d991dd50d49e7772952c53f947f833fb ("xfs: parallelize inode inactivation")
https://git.kernel.org/cgit/linux/kernel/git/djwong/xfs-linux.git djwong-wtf
in testcase: aim7
on test machine: 40 threads Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz with 384G memory
with following parameters:
disk: 1BRD_48G
fs: xfs
test: disk_src
load: 3000
cpufreq_governor: performance
test-description: AIM7 is a traditional UNIX system-level benchmark suite used to test and measure the performance of multiuser systems.
test-url: https://sourceforge.net/projects/aimbench/files/aim-suite7/
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/disk/fs/kconfig/load/rootfs/tbox_group/test/testcase:
gcc-7/performance/1BRD_48G/xfs/x86_64-rhel-7.2/3000/debian-x86_64-2018-04-03.cgz/lkp-ivb-ep01/disk_src/aim7
commit:
d21621840e ("xfs: force inactivation before fallocate when space is low")
4ff1f5b3d9 ("xfs: parallelize inode inactivation")
d21621840e601d57 4ff1f5b3d991dd50d49e7772952
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
1:4 0% 1:4 dmesg.WARNING:at#for_ip_interrupt_entry/0x
:4 25% 1:4 dmesg.WARNING:at_ip___perf_sw_event/0x
:4 100% 4:4 kmsg.ACPI_Error:Method_parse/execution_failed~_SB.PMI0._GHL,AE_NOT_EXIST(#/psparse-#)
:4 100% 4:4 kmsg.ACPI_Error:Method_parse/execution_failed~_SB.PMI0._PMC,AE_NOT_EXIST(#/psparse-#)
:4 100% 4:4 kmsg.ACPI_Error:No_handler_for_Region[SYSI](#)[IPMI](#/evregion-#)
:4 100% 4:4 kmsg.ACPI_Error:Region_IPMI(ID=#)has_no_handler(#/exfldio-#)
2:4 -50% :4 kmsg.XFS(ram#):xlog_verify_grant_tail:space>BBTOB(tail_blocks)
%stddev %change %stddev
\ | \
14529 -30.4% 10116 aim7.jobs-per-min
1239 +43.6% 1780 aim7.time.elapsed_time
1239 +43.6% 1780 aim7.time.elapsed_time.max
5548 +101.5% 11181 ± 3% aim7.time.involuntary_context_switches
3435 +48.7% 5108 aim7.time.system_time
104.79 +14.0% 119.45 aim7.time.user_time
27741617 +48.9% 41302729 aim7.time.voluntary_context_switches
92.23 -1.2% 91.12 iostat.cpu.idle
7.52 +15.2% 8.66 ± 2% iostat.cpu.system
7.51 +1.1 8.66 ± 2% mpstat.cpu.sys%
0.25 -0.0 0.21 mpstat.cpu.usr%
3004482 ± 6% -20.3% 2394055 numa-numastat.node1.local_node
3006284 ± 6% -20.2% 2398052 numa-numastat.node1.numa_hit
92.00 -1.4% 90.75 vmstat.cpu.id
12421 -25.6% 9239 vmstat.io.bo
3949806 -71.5% 1126257 vmstat.memory.cache
54607 +10.1% 60109 vmstat.system.cs
1.072e+09 +24.6% 1.336e+09 ± 11% cpuidle.C1.time
12450429 +36.1% 16942123 ± 8% cpuidle.C1.usage
1.458e+09 +285.5% 5.621e+09 ± 52% cpuidle.C1E.time
9648938 +272.1% 35902807 ± 61% cpuidle.C1E.usage
3.099e+10 ± 6% +34.4% 4.164e+10 ± 9% cpuidle.C6.time
1079386 +70.6% 1841065 ± 2% cpuidle.POLL.time
70217 +56.5% 109919 cpuidle.POLL.usage
2976541 -94.9% 150749 meminfo.KReclaimable
10894926 -70.8% 3186076 meminfo.Memused
2976541 -94.9% 150749 meminfo.SReclaimable
2189071 -90.3% 212524 meminfo.SUnreclaim
34172 +17.5% 40138 meminfo.Shmem
5165613 -93.0% 363274 meminfo.Slab
16354 -86.5% 2214 meminfo.max_used_kB
2759 ± 27% +69.4% 4673 ± 31% numa-vmstat.node0.nr_shmem
373298 -94.5% 20349 ± 7% numa-vmstat.node0.nr_slab_reclaimable
278964 ± 4% -90.3% 27152 ± 38% numa-vmstat.node0.nr_slab_unreclaimable
45797 ± 24% +22.2% 55956 ± 18% numa-vmstat.node1.nr_active_anon
43827 ± 26% +22.1% 53502 ± 19% numa-vmstat.node1.nr_anon_pages
370653 -95.3% 17336 ± 8% numa-vmstat.node1.nr_slab_reclaimable
268165 ± 4% -90.3% 25976 ± 39% numa-vmstat.node1.nr_slab_unreclaimable
45797 ± 24% +22.2% 55956 ± 18% numa-vmstat.node1.nr_zone_active_anon
1776154 ± 4% -15.0% 1509746 ± 6% numa-vmstat.node1.numa_hit
159.67 +4.0% 166.00 turbostat.Avg_MHz
12442054 +36.1% 16934916 ± 8% turbostat.C1
9648658 +272.1% 35902542 ± 61% turbostat.C1E
2.94 +4.9 7.85 ± 51% turbostat.C1E%
8.25 ± 15% -63.4% 3.01 ± 85% turbostat.CPU%c3
56.33 ± 2% -13.0% 49.00 turbostat.CoreTmp
1.008e+08 +42.9% 1.441e+08 turbostat.IRQ
0.67 ± 9% -56.9% 0.29 ± 11% turbostat.Pkg%pc2
56.67 ± 2% -13.5% 49.00 turbostat.PkgTmp
99306 +43.5% 142520 turbostat.SMI
1493417 -94.5% 81402 ± 7% numa-meminfo.node0.KReclaimable
5538082 -70.1% 1658246 ± 13% numa-meminfo.node0.MemUsed
1493417 -94.5% 81402 ± 7% numa-meminfo.node0.SReclaimable
1116040 ± 4% -90.3% 108602 ± 38% numa-meminfo.node0.SUnreclaim
11039 ± 27% +69.4% 18697 ± 31% numa-meminfo.node0.Shmem
2609458 -92.7% 190004 ± 21% numa-meminfo.node0.Slab
183176 ± 24% +22.2% 223893 ± 18% numa-meminfo.node1.Active
183172 ± 24% +22.2% 223825 ± 18% numa-meminfo.node1.Active(anon)
175292 ± 26% +22.1% 214007 ± 19% numa-meminfo.node1.AnonPages
1482818 -95.3% 69351 ± 8% numa-meminfo.node1.KReclaimable
5356088 -71.5% 1527854 ± 13% numa-meminfo.node1.MemUsed
1482818 -95.3% 69351 ± 8% numa-meminfo.node1.SReclaimable
1072807 ± 4% -90.3% 103903 ± 39% numa-meminfo.node1.SUnreclaim
2555625 -93.2% 173254 ± 24% numa-meminfo.node1.Slab
113079 +3.9% 117521 proc-vmstat.nr_active_anon
109959 +2.7% 112943 proc-vmstat.nr_anon_pages
98.00 +6.1% 104.00 proc-vmstat.nr_anon_transparent_hugepages
354.67 ± 2% +3.4% 366.75 proc-vmstat.nr_dirtied
9577429 +2.0% 9769871 proc-vmstat.nr_dirty_background_threshold
19178276 +2.0% 19563630 proc-vmstat.nr_dirty_threshold
96329054 +2.0% 98256289 proc-vmstat.nr_free_pages
5250 +5.5% 5541 proc-vmstat.nr_inactive_anon
340.00 +1.5% 345.00 proc-vmstat.nr_inactive_file
8543 +17.5% 10034 proc-vmstat.nr_shmem
744139 -94.9% 37686 proc-vmstat.nr_slab_reclaimable
547263 -90.3% 53129 proc-vmstat.nr_slab_unreclaimable
353.67 ± 2% +3.6% 366.25 proc-vmstat.nr_written
113079 +3.9% 117521 proc-vmstat.nr_zone_active_anon
5250 +5.5% 5541 proc-vmstat.nr_zone_inactive_anon
340.00 +1.5% 345.00 proc-vmstat.nr_zone_inactive_file
2284 ± 8% +166.9% 6098 ± 2% proc-vmstat.numa_hint_faults_local
5647278 -16.5% 4715166 proc-vmstat.numa_hit
5639393 -16.5% 4707260 proc-vmstat.numa_local
27823 ± 31% -33.6% 18466 ± 4% proc-vmstat.numa_pages_migrated
1036 ± 8% +30.9% 1356 ± 8% proc-vmstat.pgactivate
8609506 -37.7% 5365970 proc-vmstat.pgalloc_normal
3296783 +40.3% 4625166 proc-vmstat.pgfault
4012655 +22.4% 4913301 proc-vmstat.pgfree
27823 ± 31% -33.6% 18466 ± 4% proc-vmstat.pgmigrate_success
15372540 ± 2% +7.1% 16457304 proc-vmstat.pgpgout
25.92 -33.7% 17.19 ± 3% perf-stat.i.MPKI
7.471e+08 ± 6% +27.6% 9.53e+08 perf-stat.i.branch-instructions
30969748 ± 2% +23.0% 38088600 ± 2% perf-stat.i.branch-misses
14.89 ± 8% -3.6 11.33 perf-stat.i.cache-miss-rate%
13934723 ± 15% -34.8% 9086049 ± 2% perf-stat.i.cache-misses
91416041 ± 5% -11.8% 80671348 ± 2% perf-stat.i.cache-references
54668 +10.0% 60134 perf-stat.i.context-switches
2.03 ± 2% -27.0% 1.48 ± 2% perf-stat.i.cpi
483.98 ± 3% +168.0% 1297 ± 2% perf-stat.i.cpu-migrations
560.95 ± 6% +36.1% 763.38 perf-stat.i.cycles-between-cache-misses
0.78 ± 4% -0.3 0.44 ± 7% perf-stat.i.dTLB-load-miss-rate%
7006483 ± 7% -19.6% 5631048 ± 6% perf-stat.i.dTLB-load-misses
9.075e+08 ± 7% +41.7% 1.286e+09 perf-stat.i.dTLB-loads
0.11 ± 3% -0.0 0.07 ± 9% perf-stat.i.dTLB-store-miss-rate%
682889 ± 2% -26.8% 499664 ± 7% perf-stat.i.dTLB-store-misses
6.047e+08 ± 6% +24.7% 7.542e+08 ± 7% perf-stat.i.dTLB-stores
50.96 ± 5% +6.9 57.84 perf-stat.i.iTLB-load-miss-rate%
1725697 +16.0% 2001978 perf-stat.i.iTLB-load-misses
3.605e+09 ± 6% +32.8% 4.786e+09 perf-stat.i.instructions
2080 ± 4% +14.9% 2389 ± 2% perf-stat.i.instructions-per-iTLB-miss
0.50 ± 2% +36.6% 0.69 ± 2% perf-stat.i.ipc
2598 -2.1% 2542 perf-stat.i.minor-faults
43.11 ± 2% +2.5 45.60 perf-stat.i.node-load-miss-rate%
5915330 ± 36% -53.1% 2776569 perf-stat.i.node-loads
39.85 +1.8 41.64 perf-stat.i.node-store-miss-rate%
1498790 ± 6% -10.5% 1341835 ± 2% perf-stat.i.node-store-misses
2255272 ± 7% -17.0% 1872796 perf-stat.i.node-stores
2598 -2.1% 2542 perf-stat.i.page-faults
25.38 -33.6% 16.86 ± 2% perf-stat.overall.MPKI
15.15 ± 10% -3.9 11.26 perf-stat.overall.cache-miss-rate%
1.97 ± 3% -26.4% 1.45 ± 2% perf-stat.overall.cpi
518.24 ± 12% +47.2% 762.76 perf-stat.overall.cycles-between-cache-misses
0.77 ± 4% -0.3 0.44 ± 7% perf-stat.overall.dTLB-load-miss-rate%
0.11 ± 3% -0.0 0.07 ± 9% perf-stat.overall.dTLB-store-miss-rate%
50.39 ± 5% +7.2 57.62 perf-stat.overall.iTLB-load-miss-rate%
2087 ± 5% +14.6% 2390 ± 2% perf-stat.overall.instructions-per-iTLB-miss
0.51 ± 3% +35.9% 0.69 ± 2% perf-stat.overall.ipc
40.95 ± 6% +4.6 45.59 perf-stat.overall.node-load-miss-rate%
39.94 +1.8 41.74 perf-stat.overall.node-store-miss-rate%
7.465e+08 ± 6% +27.6% 9.525e+08 perf-stat.ps.branch-instructions
30944263 ± 2% +23.0% 38067137 ± 2% perf-stat.ps.branch-misses
13922480 ± 15% -34.8% 9080937 ± 2% perf-stat.ps.cache-misses
91340090 ± 5% -11.7% 80625818 ± 2% perf-stat.ps.cache-references
54623 +10.0% 60100 perf-stat.ps.context-switches
483.58 ± 3% +168.1% 1296 ± 2% perf-stat.ps.cpu-migrations
7000707 ± 7% -19.6% 5627875 ± 6% perf-stat.ps.dTLB-load-misses
9.067e+08 ± 7% +41.8% 1.286e+09 perf-stat.ps.dTLB-loads
682332 ± 2% -26.8% 499382 ± 7% perf-stat.ps.dTLB-store-misses
6.042e+08 ± 6% +24.8% 7.538e+08 ± 7% perf-stat.ps.dTLB-stores
1724289 +16.0% 2000848 perf-stat.ps.iTLB-load-misses
3.602e+09 ± 6% +32.8% 4.783e+09 perf-stat.ps.instructions
2596 -2.1% 2541 perf-stat.ps.minor-faults
5909580 ± 36% -53.0% 2775010 perf-stat.ps.node-loads
1497558 ± 6% -10.4% 1341077 ± 2% perf-stat.ps.node-store-misses
2253406 ± 7% -16.9% 1871742 perf-stat.ps.node-stores
2596 -2.1% 2541 perf-stat.ps.page-faults
4.467e+12 ± 6% +90.7% 8.517e+12 perf-stat.total.instructions
51255 +64.5% 84302 sched_debug.cfs_rq:/.exec_clock.avg
209012 -57.0% 89792 ± 2% sched_debug.cfs_rq:/.exec_clock.max
41408 +95.1% 80786 sched_debug.cfs_rq:/.exec_clock.min
25327 -90.5% 2417 ± 12% sched_debug.cfs_rq:/.exec_clock.stddev
2420 ± 8% -40.9% 1429 ± 34% sched_debug.cfs_rq:/.load_avg.max
0.25 ± 61% -73.8% 0.07 ±173% sched_debug.cfs_rq:/.load_avg.min
412.79 ± 4% -35.4% 266.72 ± 31% sched_debug.cfs_rq:/.load_avg.stddev
216580 +42.3% 308167 sched_debug.cfs_rq:/.min_vruntime.avg
360515 ± 11% +34.9% 486486 ± 5% sched_debug.cfs_rq:/.min_vruntime.max
186432 +53.7% 286556 sched_debug.cfs_rq:/.min_vruntime.min
0.14 ± 2% +11.2% 0.15 ± 5% sched_debug.cfs_rq:/.nr_running.avg
14.77 ± 8% +26.0% 18.61 ± 18% sched_debug.cfs_rq:/.util_est_enqueued.avg
611829 ± 2% +10.4% 675598 sched_debug.cpu.avg_idle.avg
164110 ± 7% +19.1% 195481 ± 9% sched_debug.cpu.avg_idle.min
236480 -11.0% 210366 ± 2% sched_debug.cpu.avg_idle.stddev
635922 +42.7% 907749 sched_debug.cpu.clock.avg
635962 +42.7% 907796 sched_debug.cpu.clock.max
635881 +42.7% 907703 sched_debug.cpu.clock.min
23.70 +15.9% 27.46 ± 9% sched_debug.cpu.clock.stddev
635922 +42.7% 907749 sched_debug.cpu.clock_task.avg
635962 +42.7% 907796 sched_debug.cpu.clock_task.max
635881 +42.7% 907703 sched_debug.cpu.clock_task.min
23.70 +15.9% 27.46 ± 9% sched_debug.cpu.clock_task.stddev
668.98 ± 37% -57.0% 287.96 ± 23% sched_debug.cpu.cpu_load[1].max
753.68 ± 40% -62.3% 284.23 ± 21% sched_debug.cpu.cpu_load[2].max
121.44 ± 39% -56.6% 52.70 ± 18% sched_debug.cpu.cpu_load[2].stddev
23.72 ± 25% -34.5% 15.54 ± 13% sched_debug.cpu.cpu_load[3].avg
708.89 ± 36% -62.4% 266.19 ± 16% sched_debug.cpu.cpu_load[3].max
114.92 ± 34% -56.5% 49.97 ± 14% sched_debug.cpu.cpu_load[3].stddev
21.19 ± 15% -30.5% 14.73 ± 14% sched_debug.cpu.cpu_load[4].avg
626.62 ± 23% -60.2% 249.32 ± 14% sched_debug.cpu.cpu_load[4].max
101.81 ± 22% -54.0% 46.80 ± 14% sched_debug.cpu.cpu_load[4].stddev
780.25 ± 2% +32.5% 1034 ± 5% sched_debug.cpu.curr->pid.avg
19307 +34.1% 25896 sched_debug.cpu.curr->pid.max
3186 +37.5% 4380 sched_debug.cpu.curr->pid.stddev
608234 +44.5% 878900 sched_debug.cpu.nr_load_updates.avg
615952 +43.8% 885845 sched_debug.cpu.nr_load_updates.max
605651 +44.7% 876407 sched_debug.cpu.nr_load_updates.min
839549 +55.0% 1301467 sched_debug.cpu.nr_switches.avg
4512683 -69.5% 1377858 sched_debug.cpu.nr_switches.max
622663 ± 2% +97.4% 1228956 sched_debug.cpu.nr_switches.min
588922 -90.6% 55523 ± 6% sched_debug.cpu.nr_switches.stddev
163.90 ± 4% +20.2% 196.95 ± 5% sched_debug.cpu.nr_uninterruptible.max
837970 +55.1% 1299788 sched_debug.cpu.sched_count.avg
4511470 -69.5% 1376198 sched_debug.cpu.sched_count.max
621018 ± 2% +97.6% 1227438 sched_debug.cpu.sched_count.min
588959 -90.6% 55619 ± 7% sched_debug.cpu.sched_count.stddev
405140 +57.7% 638835 sched_debug.cpu.sched_goidle.avg
2057532 -67.1% 677064 sched_debug.cpu.sched_goidle.max
304696 +98.0% 603265 sched_debug.cpu.sched_goidle.min
264975 -89.4% 28187 ± 8% sched_debug.cpu.sched_goidle.stddev
419432 +55.2% 651053 sched_debug.cpu.ttwu_count.avg
2222748 -68.8% 692406 sched_debug.cpu.ttwu_count.max
322383 ± 2% +90.2% 613211 sched_debug.cpu.ttwu_count.min
289153 -89.7% 29888 ± 10% sched_debug.cpu.ttwu_count.stddev
13408 +16.4% 15611 sched_debug.cpu.ttwu_local.avg
30957 ± 3% -28.7% 22087 ± 4% sched_debug.cpu.ttwu_local.max
6697 ± 15% +57.8% 10567 ± 5% sched_debug.cpu.ttwu_local.min
3892 ± 2% -32.0% 2648 ± 10% sched_debug.cpu.ttwu_local.stddev
635879 +42.7% 907700 sched_debug.cpu_clk
633439 +42.9% 905254 sched_debug.ktime
636354 +42.7% 908173 sched_debug.sched_clk
1549 +45.0% 2246 slabinfo.Acpi-ParseExt.active_objs
1549 +45.0% 2246 slabinfo.Acpi-ParseExt.num_objs
269.67 ± 6% +26.1% 340.00 ± 15% slabinfo.biovec-128.active_objs
269.67 ± 6% +26.1% 340.00 ± 15% slabinfo.biovec-128.num_objs
272.00 +32.3% 359.75 ± 8% slabinfo.buffer_head.active_objs
272.00 +32.3% 359.75 ± 8% slabinfo.buffer_head.num_objs
3018310 -98.6% 40945 slabinfo.dmaengine-unmap-16.active_objs
72008 -98.3% 1211 slabinfo.dmaengine-unmap-16.active_slabs
3024351 -98.3% 50889 slabinfo.dmaengine-unmap-16.num_objs
72008 -98.3% 1211 slabinfo.dmaengine-unmap-16.num_slabs
16983 ± 2% -71.8% 4785 slabinfo.kmalloc-128.active_objs
532.33 ± 2% -72.0% 149.25 slabinfo.kmalloc-128.active_slabs
17058 ± 2% -71.9% 4785 slabinfo.kmalloc-128.num_objs
532.33 ± 2% -72.0% 149.25 slabinfo.kmalloc-128.num_slabs
3024970 -96.7% 100606 slabinfo.kmalloc-16.active_objs
11827 -96.6% 396.25 slabinfo.kmalloc-16.active_slabs
3028004 -96.6% 101626 slabinfo.kmalloc-16.num_objs
11827 -96.6% 396.25 slabinfo.kmalloc-16.num_slabs
8063 -33.1% 5397 ± 2% slabinfo.kmalloc-1k.active_objs
8178 -34.0% 5399 ± 2% slabinfo.kmalloc-1k.num_objs
4370 -13.2% 3793 slabinfo.kmalloc-2k.active_objs
4441 -13.1% 3858 slabinfo.kmalloc-2k.num_objs
108697 -82.1% 19496 ± 11% slabinfo.kmalloc-32.active_objs
849.67 -81.9% 153.75 ± 10% slabinfo.kmalloc-32.active_slabs
108850 -81.9% 19730 ± 10% slabinfo.kmalloc-32.num_objs
849.67 -81.9% 153.75 ± 10% slabinfo.kmalloc-32.num_slabs
2664947 -98.6% 36570 slabinfo.kmalloc-512.active_objs
83752 -98.2% 1471 slabinfo.kmalloc-512.active_slabs
2680087 -98.2% 47097 slabinfo.kmalloc-512.num_objs
83752 -98.2% 1471 slabinfo.kmalloc-512.num_slabs
1205 -22.7% 931.50 slabinfo.kmalloc-8k.active_objs
1411 -25.3% 1054 slabinfo.kmalloc-8k.num_objs
10001 ± 2% +10.5% 11055 ± 2% slabinfo.kmalloc-96.active_objs
10508 +10.0% 11556 ± 2% slabinfo.kmalloc-96.num_objs
95480 -95.4% 4387 slabinfo.mnt_cache.active_objs
2276 -95.4% 104.00 slabinfo.mnt_cache.active_slabs
95629 -95.4% 4387 slabinfo.mnt_cache.num_objs
2276 -95.4% 104.00 slabinfo.mnt_cache.num_slabs
528.00 ± 2% +27.1% 671.25 ± 6% slabinfo.nfs_commit_data.active_objs
528.00 ± 2% +27.1% 671.25 ± 6% slabinfo.nfs_commit_data.num_objs
410.00 ± 9% +45.5% 596.75 ± 4% slabinfo.nfs_read_data.active_objs
410.00 ± 9% +45.5% 596.75 ± 4% slabinfo.nfs_read_data.num_objs
69156 -75.7% 16774 slabinfo.radix_tree_node.active_objs
2472 -75.7% 601.50 slabinfo.radix_tree_node.active_slabs
69238 -75.7% 16850 slabinfo.radix_tree_node.num_objs
2472 -75.7% 601.50 slabinfo.radix_tree_node.num_slabs
429.67 ± 16% +18.8% 510.25 ± 15% slabinfo.skbuff_ext_cache.active_objs
429.67 ± 16% +18.8% 510.25 ± 15% slabinfo.skbuff_ext_cache.num_objs
540.33 ± 12% +31.3% 709.25 ± 18% slabinfo.skbuff_fclone_cache.active_objs
540.33 ± 12% +31.3% 709.25 ± 18% slabinfo.skbuff_fclone_cache.num_objs
9615 -34.8% 6268 slabinfo.xfs_buf_item.active_objs
325.00 -34.8% 212.00 slabinfo.xfs_buf_item.active_slabs
9760 -34.6% 6378 slabinfo.xfs_buf_item.num_objs
325.00 -34.8% 212.00 slabinfo.xfs_buf_item.num_slabs
10325 ± 5% -75.8% 2503 ± 6% slabinfo.xfs_efd_item.active_objs
280.33 ± 5% -76.1% 67.00 ± 7% slabinfo.xfs_efd_item.active_slabs
10394 ± 5% -75.9% 2503 ± 6% slabinfo.xfs_efd_item.num_objs
280.33 ± 5% -76.1% 67.00 ± 7% slabinfo.xfs_efd_item.num_slabs
3018096 -98.7% 40718 slabinfo.xfs_inode.active_objs
88945 -98.3% 1490 slabinfo.xfs_inode.active_slabs
3024161 -98.3% 50678 slabinfo.xfs_inode.num_objs
88945 -98.3% 1490 slabinfo.xfs_inode.num_slabs
69.15 ± 2% -5.9 63.20 perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
69.15 ± 2% -5.9 63.20 perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
69.07 ± 2% -5.9 63.16 perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
62.33 ± 3% -5.5 56.87 perf-profile.calltrace.cycles-pp.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
70.98 ± 3% -5.0 65.99 perf-profile.calltrace.cycles-pp.secondary_startup_64
51.64 ± 3% -2.5 49.13 perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
9.20 ± 4% -2.4 6.84 ± 2% perf-profile.calltrace.cycles-pp.apic_timer_interrupt.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
8.71 ± 4% -2.3 6.41 ± 3% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle.cpu_startup_entry
3.03 ± 4% -1.0 2.04 ± 14% perf-profile.calltrace.cycles-pp.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle
4.53 ± 9% -0.9 3.58 ± 7% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle
2.88 ± 4% -0.9 1.99 ± 11% perf-profile.calltrace.cycles-pp.menu_select.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
2.55 ± 3% -0.8 1.70 ± 19% perf-profile.calltrace.cycles-pp.__softirqentry_text_start.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state
1.07 ± 7% -0.7 0.40 ± 57% perf-profile.calltrace.cycles-pp.tick_nohz_get_sleep_length.menu_select.do_idle.cpu_startup_entry.start_secondary
3.07 ± 8% -0.6 2.44 ± 5% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state
0.80 ± 3% -0.5 0.31 ±101% perf-profile.calltrace.cycles-pp.irq_enter.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle
0.80 ± 7% -0.5 0.31 ±102% perf-profile.calltrace.cycles-pp.run_rebalance_domains.__softirqentry_text_start.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
0.77 ± 9% -0.5 0.30 ±102% perf-profile.calltrace.cycles-pp.update_blocked_averages.run_rebalance_domains.__softirqentry_text_start.irq_exit.smp_apic_timer_interrupt
0.68 ± 3% -0.4 0.30 ±100% perf-profile.calltrace.cycles-pp.rebalance_domains.__softirqentry_text_start.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
1.60 ± 9% -0.3 1.32 ± 5% perf-profile.calltrace.cycles-pp.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
1.05 ± 9% -0.3 0.78 ± 7% perf-profile.calltrace.cycles-pp.xfs_dir2_node_removename.xfs_dir_removename.xfs_remove.xfs_vn_unlink.vfs_unlink
1.04 ± 7% -0.3 0.79 ± 7% perf-profile.calltrace.cycles-pp.xfs_dir2_block_addname.xfs_dir_createname.xfs_create.xfs_generic_create.path_openat
0.83 ± 8% -0.2 0.59 ± 9% perf-profile.calltrace.cycles-pp.__xfs_dir3_data_check.xfs_dir3_data_check.xfs_dir2_block_addname.xfs_dir_createname.xfs_create
0.84 ± 9% -0.2 0.61 ± 10% perf-profile.calltrace.cycles-pp.xfs_dir3_data_check.xfs_dir2_block_addname.xfs_dir_createname.xfs_create.xfs_generic_create
1.22 ± 3% -0.2 0.99 ± 5% perf-profile.calltrace.cycles-pp.down_write.do_unlinkat.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.86 ± 7% -0.2 0.63 ± 9% perf-profile.calltrace.cycles-pp.rwsem_spin_on_owner.rwsem_down_write_failed.call_rwsem_down_write_failed.down_write.do_unlinkat
1.20 ± 3% -0.2 0.98 ± 5% perf-profile.calltrace.cycles-pp.rwsem_down_write_failed.call_rwsem_down_write_failed.down_write.do_unlinkat.do_syscall_64
1.20 ± 3% -0.2 0.98 ± 5% perf-profile.calltrace.cycles-pp.call_rwsem_down_write_failed.down_write.do_unlinkat.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.75 ± 3% -0.2 1.54 ± 9% perf-profile.calltrace.cycles-pp.xfs_dir2_block_removename.xfs_dir_removename.xfs_remove.xfs_vn_unlink.vfs_unlink
1.29 ± 9% -0.2 1.08 ± 6% perf-profile.calltrace.cycles-pp.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt
1.13 ± 7% -0.1 0.99 ± 4% perf-profile.calltrace.cycles-pp.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt
0.57 ± 5% +0.1 0.69 ± 10% perf-profile.calltrace.cycles-pp.xfs_trans_free_items.xfs_log_commit_cil.__xfs_trans_commit.xfs_remove.xfs_vn_unlink
1.10 ± 8% +0.2 1.25 ± 8% perf-profile.calltrace.cycles-pp.xfs_iunlink.xfs_remove.xfs_vn_unlink.vfs_unlink.do_unlinkat
0.67 ± 17% +0.3 0.92 ± 4% perf-profile.calltrace.cycles-pp.__schedule.schedule_idle.do_idle.cpu_startup_entry.start_secondary
0.68 ± 15% +0.3 0.95 ± 4% perf-profile.calltrace.cycles-pp.schedule_idle.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
0.37 ± 71% +0.3 0.63 ± 5% perf-profile.calltrace.cycles-pp.xfs_trans_free_items.xfs_log_commit_cil.__xfs_trans_commit.xfs_create.xfs_generic_create
6.59 ± 3% +0.3 6.88 ± 2% perf-profile.calltrace.cycles-pp.xfs_remove.xfs_vn_unlink.vfs_unlink.do_unlinkat.do_syscall_64
6.60 ± 3% +0.3 6.89 ± 2% perf-profile.calltrace.cycles-pp.xfs_vn_unlink.vfs_unlink.do_unlinkat.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.77 ± 14% +0.3 1.08 ± 5% perf-profile.calltrace.cycles-pp.__save_stack_trace.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity.enqueue_task_fair
6.77 ± 3% +0.3 7.09 ± 2% perf-profile.calltrace.cycles-pp.vfs_unlink.do_unlinkat.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.79 ± 14% +0.3 1.11 ± 5% perf-profile.calltrace.cycles-pp.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.ttwu_do_activate
0.69 ± 9% +0.3 1.03 ± 4% perf-profile.calltrace.cycles-pp.xfs_btree_check_sblock.xfs_btree_get_rec.xfs_inobt_get_rec.xfs_check_agi_freecount.xfs_dialloc_ag
1.84 ± 4% +0.4 2.20 ± 22% perf-profile.calltrace.cycles-pp.rwsem_down_write_failed.call_rwsem_down_write_failed.down_write.path_openat.do_filp_open
0.89 ± 17% +0.4 1.25 ± 6% perf-profile.calltrace.cycles-pp.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.ttwu_do_activate.sched_ttwu_pending
1.86 ± 4% +0.4 2.22 ± 22% perf-profile.calltrace.cycles-pp.down_write.path_openat.do_filp_open.do_sys_open.do_syscall_64
1.85 ± 4% +0.4 2.21 ± 22% perf-profile.calltrace.cycles-pp.call_rwsem_down_write_failed.down_write.path_openat.do_filp_open.do_sys_open
0.70 ± 6% +0.4 1.08 ± 3% perf-profile.calltrace.cycles-pp.xfs_btree_check_sblock.xfs_btree_increment.xfs_check_agi_freecount.xfs_dialloc_ag.xfs_dialloc
0.18 ±141% +0.4 0.56 ± 8% perf-profile.calltrace.cycles-pp.xfs_buf_item_unlock.xfs_trans_free_items.xfs_log_commit_cil.__xfs_trans_commit.xfs_remove
0.43 ± 75% +0.4 0.82 ± 9% perf-profile.calltrace.cycles-pp.xfs_iunlink_remove.xfs_ifree.xfs_inactive_ifree.xfs_inactive.xfs_inactive_inode
0.18 ±141% +0.4 0.58 ± 8% perf-profile.calltrace.cycles-pp.xfs_trans_read_buf_map.xfs_read_agi.xfs_iunlink.xfs_remove.xfs_vn_unlink
0.51 ± 74% +0.4 0.93 ± 9% perf-profile.calltrace.cycles-pp.xfs_btree_check_sblock.xfs_btree_increment.xfs_check_agi_freecount.xfs_difree_inobt.xfs_difree
0.37 ± 70% +0.4 0.80 ± 7% perf-profile.calltrace.cycles-pp.__xfs_btree_check_sblock.xfs_btree_check_sblock.xfs_btree_increment.xfs_check_agi_freecount.xfs_dialloc_ag
0.17 ±141% +0.5 0.62 ± 5% perf-profile.calltrace.cycles-pp.xfs_ialloc_read_agi.xfs_dialloc.xfs_ialloc.xfs_dir_ialloc.xfs_create
1.11 ± 15% +0.5 1.57 ± 4% perf-profile.calltrace.cycles-pp.enqueue_entity.enqueue_task_fair.ttwu_do_activate.sched_ttwu_pending.do_idle
0.22 ±141% +0.5 0.68 ± 16% perf-profile.calltrace.cycles-pp.__xfs_btree_check_sblock.xfs_btree_check_sblock.xfs_btree_increment.xfs_check_agi_freecount.xfs_difree_inobt
1.15 ± 13% +0.5 1.64 ± 5% perf-profile.calltrace.cycles-pp.enqueue_task_fair.ttwu_do_activate.sched_ttwu_pending.do_idle.cpu_startup_entry
1.17 ± 13% +0.5 1.66 ± 5% perf-profile.calltrace.cycles-pp.ttwu_do_activate.sched_ttwu_pending.do_idle.cpu_startup_entry.start_secondary
1.02 ± 2% +0.5 1.56 ± 4% perf-profile.calltrace.cycles-pp.xfs_btree_increment.xfs_check_agi_freecount.xfs_dialloc_ag.xfs_dialloc.xfs_ialloc
1.30 ± 11% +0.5 1.85 ± 5% perf-profile.calltrace.cycles-pp.sched_ttwu_pending.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
0.00 +0.6 0.55 ± 6% perf-profile.calltrace.cycles-pp.xfs_trans_read_buf_map.xfs_read_agi.xfs_ialloc_read_agi.xfs_dialloc.xfs_ialloc
1.00 ± 7% +0.6 1.57 ± 6% perf-profile.calltrace.cycles-pp.xfs_btree_get_rec.xfs_inobt_get_rec.xfs_check_agi_freecount.xfs_dialloc_ag.xfs_dialloc
0.00 +0.6 0.58 ± 6% perf-profile.calltrace.cycles-pp.xfs_read_agi.xfs_ialloc_read_agi.xfs_dialloc.xfs_ialloc.xfs_dir_ialloc
0.00 +0.6 0.64 ± 9% perf-profile.calltrace.cycles-pp.unwind_next_frame.__save_stack_trace.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity
0.19 ±141% +0.7 0.85 ± 25% perf-profile.calltrace.cycles-pp.xfs_dir2_leaf_removename.xfs_dir_removename.xfs_remove.xfs_vn_unlink.vfs_unlink
0.59 ± 79% +0.7 1.33 ± 4% perf-profile.calltrace.cycles-pp.xfs_log_commit_cil.__xfs_trans_commit.xfs_inactive_ifree.xfs_inactive.xfs_inactive_inode
0.60 ± 79% +0.8 1.39 ± 4% perf-profile.calltrace.cycles-pp.__xfs_trans_commit.xfs_inactive_ifree.xfs_inactive.xfs_inactive_inode.xfs_reclaim_inodes_pag
1.65 ± 8% +0.8 2.48 ± 5% perf-profile.calltrace.cycles-pp.xfs_inobt_get_rec.xfs_check_agi_freecount.xfs_dialloc_ag.xfs_dialloc.xfs_ialloc
0.46 ± 74% +1.1 1.52 ± 6% perf-profile.calltrace.cycles-pp.__xfs_btree_check_sblock.xfs_btree_check_sblock.xfs_btree_get_rec.xfs_inobt_get_rec.xfs_check_agi_freecount
3.73 ± 4% +1.4 5.15 ± 2% perf-profile.calltrace.cycles-pp.xfs_dialloc_ag.xfs_dialloc.xfs_ialloc.xfs_dir_ialloc.xfs_create
8.77 ± 3% +1.4 10.20 perf-profile.calltrace.cycles-pp.xfs_generic_create.path_openat.do_filp_open.do_sys_open.do_syscall_64
8.57 ± 3% +1.4 10.01 perf-profile.calltrace.cycles-pp.xfs_create.xfs_generic_create.path_openat.do_filp_open.do_sys_open
4.82 ± 4% +1.5 6.30 ± 2% perf-profile.calltrace.cycles-pp.xfs_dir_ialloc.xfs_create.xfs_generic_create.path_openat.do_filp_open
4.79 ± 4% +1.5 6.29 ± 2% perf-profile.calltrace.cycles-pp.xfs_ialloc.xfs_dir_ialloc.xfs_create.xfs_generic_create.path_openat
3.09 ± 4% +1.5 4.61 ± 3% perf-profile.calltrace.cycles-pp.xfs_check_agi_freecount.xfs_dialloc_ag.xfs_dialloc.xfs_ialloc.xfs_dir_ialloc
4.33 ± 3% +1.5 5.85 ± 2% perf-profile.calltrace.cycles-pp.xfs_dialloc.xfs_ialloc.xfs_dir_ialloc.xfs_create.xfs_generic_create
2.36 ± 75% +1.7 4.08 ± 7% perf-profile.calltrace.cycles-pp.xfs_difree_inobt.xfs_difree.xfs_ifree.xfs_inactive_ifree.xfs_inactive
13.23 ± 2% +1.8 15.04 ± 2% perf-profile.calltrace.cycles-pp.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.49 ± 75% +1.8 4.31 ± 6% perf-profile.calltrace.cycles-pp.xfs_difree.xfs_ifree.xfs_inactive_ifree.xfs_inactive.xfs_inactive_inode
13.07 ± 2% +1.8 14.89 ± 2% perf-profile.calltrace.cycles-pp.path_openat.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
13.08 ± 2% +1.8 14.91 ± 2% perf-profile.calltrace.cycles-pp.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
23.05 +2.0 25.04 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
23.07 +2.0 25.07 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
2.94 ± 75% +2.2 5.19 ± 4% perf-profile.calltrace.cycles-pp.xfs_ifree.xfs_inactive_ifree.xfs_inactive.xfs_inactive_inode.xfs_reclaim_inodes_pag
4.48 ± 59% +3.0 7.48 ± 3% perf-profile.calltrace.cycles-pp.ret_from_fork
4.47 ± 59% +3.0 7.48 ± 3% perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
4.25 ± 61% +3.0 7.28 ± 3% perf-profile.calltrace.cycles-pp.worker_thread.kthread.ret_from_fork
4.23 ± 61% +3.0 7.27 ± 3% perf-profile.calltrace.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
3.68 ± 76% +3.1 6.81 ± 2% perf-profile.calltrace.cycles-pp.xfs_inactive_worker.process_one_work.worker_thread.kthread.ret_from_fork
0.00 +6.7 6.68 ± 3% perf-profile.calltrace.cycles-pp.xfs_inactive_ifree.xfs_inactive.xfs_inactive_inode.xfs_reclaim_inodes_pag.xfs_inactive_worker
0.00 +6.7 6.70 ± 2% perf-profile.calltrace.cycles-pp.xfs_inactive.xfs_inactive_inode.xfs_reclaim_inodes_pag.xfs_inactive_worker.process_one_work
0.00 +6.8 6.79 ± 2% perf-profile.calltrace.cycles-pp.xfs_inactive_inode.xfs_reclaim_inodes_pag.xfs_inactive_worker.process_one_work.worker_thread
0.00 +6.8 6.81 ± 2% perf-profile.calltrace.cycles-pp.xfs_reclaim_inodes_pag.xfs_inactive_worker.process_one_work.worker_thread.kthread
350472 ± 2% +47.0% 515216 ± 4% softirqs.CPU0.RCU
178137 ± 4% +45.2% 258701 ± 3% softirqs.CPU0.SCHED
475912 ± 9% +26.6% 602627 ± 8% softirqs.CPU0.TIMER
351031 ± 2% +46.9% 515648 ± 3% softirqs.CPU1.RCU
177205 ± 4% +43.6% 254440 ± 3% softirqs.CPU1.SCHED
477011 ± 10% +29.5% 617834 ± 9% softirqs.CPU1.TIMER
355813 ± 2% +46.1% 519913 ± 5% softirqs.CPU10.RCU
176887 ± 4% +45.7% 257643 ± 2% softirqs.CPU10.SCHED
483096 ± 9% +34.2% 648174 ± 9% softirqs.CPU10.TIMER
351205 ± 2% +48.0% 519792 ± 3% softirqs.CPU11.RCU
178176 ± 3% +44.5% 257381 ± 2% softirqs.CPU11.SCHED
509243 ± 5% +22.5% 623578 ± 9% softirqs.CPU11.TIMER
354284 ± 3% +46.6% 519224 ± 5% softirqs.CPU12.RCU
178183 ± 3% +44.5% 257434 ± 2% softirqs.CPU12.SCHED
485506 ± 9% +33.8% 649681 ± 10% softirqs.CPU12.TIMER
350181 ± 3% +49.2% 522540 ± 4% softirqs.CPU13.RCU
178505 ± 3% +44.5% 257948 ± 2% softirqs.CPU13.SCHED
511564 ± 5% +23.4% 631349 ± 7% softirqs.CPU13.TIMER
358014 ± 2% +46.7% 525091 ± 5% softirqs.CPU14.RCU
178876 ± 3% +44.4% 258277 ± 2% softirqs.CPU14.SCHED
487637 ± 9% +36.3% 664878 ± 6% softirqs.CPU14.TIMER
376767 ± 8% +37.9% 519729 ± 3% softirqs.CPU15.RCU
179212 ± 3% +46.0% 261688 ± 3% softirqs.CPU15.SCHED
501815 ± 6% +19.0% 597149 ± 13% softirqs.CPU15.TIMER
342231 +51.1% 517281 ± 6% softirqs.CPU16.RCU
174376 +48.1% 258219 ± 2% softirqs.CPU16.SCHED
439278 ± 10% +51.2% 664047 ± 8% softirqs.CPU16.TIMER
349494 ± 3% +47.5% 515663 ± 5% softirqs.CPU17.RCU
178311 ± 3% +45.1% 258733 ± 2% softirqs.CPU17.SCHED
513865 ± 5% +27.3% 654392 ± 8% softirqs.CPU17.TIMER
350783 ± 3% +47.9% 518746 ± 5% softirqs.CPU18.RCU
182358 ± 2% +41.9% 258705 ± 2% softirqs.CPU18.SCHED
459578 ± 18% +49.3% 686336 ± 2% softirqs.CPU18.TIMER
351545 ± 2% +48.1% 520706 ± 4% softirqs.CPU19.RCU
178437 ± 3% +45.0% 258724 ± 2% softirqs.CPU19.SCHED
512878 ± 4% +28.2% 657397 ± 8% softirqs.CPU19.TIMER
357605 ± 2% +45.9% 521686 ± 4% softirqs.CPU2.RCU
176320 ± 4% +48.1% 261181 ± 2% softirqs.CPU2.SCHED
477369 ± 9% +44.0% 687253 ± 18% softirqs.CPU2.TIMER
353070 ± 2% +48.5% 524194 ± 5% softirqs.CPU20.RCU
174940 ± 4% +45.2% 253945 ± 3% softirqs.CPU20.SCHED
475767 ± 9% +28.4% 610744 ± 6% softirqs.CPU20.TIMER
353397 ± 2% +47.8% 522147 ± 4% softirqs.CPU21.RCU
176012 ± 4% +44.1% 253708 ± 3% softirqs.CPU21.SCHED
476021 ± 10% +30.8% 622474 ± 7% softirqs.CPU21.TIMER
362004 ± 2% +46.6% 530709 ± 5% softirqs.CPU22.RCU
176322 ± 4% +48.1% 261159 ± 2% softirqs.CPU22.SCHED
478634 ± 10% +19.9% 574057 ± 13% softirqs.CPU22.TIMER
363725 +46.6% 533242 ± 3% softirqs.CPU23.RCU
177800 ± 3% +43.4% 254985 ± 3% softirqs.CPU23.SCHED
478565 ± 11% +29.4% 619240 ± 8% softirqs.CPU23.TIMER
365199 +46.8% 536212 ± 4% softirqs.CPU24.RCU
176725 ± 4% +45.5% 257046 ± 2% softirqs.CPU24.SCHED
478986 ± 10% +29.6% 620820 ± 8% softirqs.CPU24.TIMER
359975 +48.7% 535443 ± 2% softirqs.CPU25.RCU
181025 ± 4% +40.6% 254450 ± 3% softirqs.CPU25.SCHED
447316 ± 19% +37.7% 615960 ± 9% softirqs.CPU25.TIMER
361455 ± 2% +47.8% 534276 ± 5% softirqs.CPU26.RCU
176157 ± 4% +47.9% 260536 ± 3% softirqs.CPU26.SCHED
479646 ± 9% +25.9% 603755 ± 12% softirqs.CPU26.TIMER
360462 +47.0% 529894 ± 2% softirqs.CPU27.RCU
178260 ± 3% +43.3% 255441 ± 3% softirqs.CPU27.SCHED
505255 ± 6% +23.1% 621841 ± 8% softirqs.CPU27.TIMER
348842 ± 6% +53.1% 533960 ± 5% softirqs.CPU28.RCU
172723 ± 7% +48.9% 257257 ± 2% softirqs.CPU28.SCHED
500520 ± 5% +32.2% 661543 ± 5% softirqs.CPU28.TIMER
358679 ± 2% +48.1% 531097 ± 3% softirqs.CPU29.RCU
178070 ± 3% +43.4% 255300 ± 3% softirqs.CPU29.SCHED
504558 ± 5% +24.0% 625466 ± 7% softirqs.CPU29.TIMER
357266 ± 2% +47.2% 526003 ± 3% softirqs.CPU3.RCU
178003 ± 3% +43.2% 254968 ± 3% softirqs.CPU3.SCHED
475849 ± 10% +32.1% 628681 ± 6% softirqs.CPU3.TIMER
359014 ± 2% +47.3% 528798 ± 6% softirqs.CPU30.RCU
177534 ± 3% +45.2% 257845 ± 2% softirqs.CPU30.SCHED
480857 ± 9% +36.9% 658480 ± 7% softirqs.CPU30.TIMER
352751 ± 2% +49.4% 527040 ± 4% softirqs.CPU31.RCU
178325 ± 3% +44.2% 257065 ± 3% softirqs.CPU31.SCHED
506996 ± 5% +24.2% 629661 ± 7% softirqs.CPU31.TIMER
309246 +46.3% 452366 ± 3% softirqs.CPU32.RCU
178408 ± 3% +44.6% 258044 ± 2% softirqs.CPU32.SCHED
487386 ± 9% +35.9% 662460 ± 7% softirqs.CPU32.TIMER
307793 ± 2% +45.3% 447116 softirqs.CPU33.RCU
178346 ± 3% +44.6% 257893 ± 2% softirqs.CPU33.SCHED
510065 ± 5% +22.5% 624959 ± 9% softirqs.CPU33.TIMER
317885 +45.5% 462518 softirqs.CPU34.RCU
179003 ± 3% +44.1% 257979 ± 2% softirqs.CPU34.SCHED
491374 ± 9% +32.7% 652163 ± 10% softirqs.CPU34.TIMER
310841 ± 2% +46.3% 454843 softirqs.CPU35.RCU
171543 ± 7% +52.8% 262108 ± 3% softirqs.CPU35.SCHED
503129 ± 5% +38.4% 696471 ± 19% softirqs.CPU35.TIMER
342548 ± 10% +35.8% 465062 softirqs.CPU36.RCU
182025 ± 4% +41.7% 257873 ± 2% softirqs.CPU36.SCHED
522374 ± 17% +24.9% 652676 ± 11% softirqs.CPU36.TIMER
317881 ± 2% +46.2% 464732 softirqs.CPU37.RCU
179258 ± 3% +44.0% 258173 ± 2% softirqs.CPU37.SCHED
515687 ± 5% +25.4% 646835 ± 10% softirqs.CPU37.TIMER
315664 ± 2% +46.9% 463821 softirqs.CPU38.RCU
182092 ± 2% +41.9% 258304 ± 2% softirqs.CPU38.SCHED
555470 ± 8% +34.1% 745113 ± 11% softirqs.CPU38.TIMER
318718 ± 2% +44.6% 460755 softirqs.CPU39.RCU
178974 ± 3% +44.3% 258318 ± 2% softirqs.CPU39.SCHED
515014 ± 4% +26.0% 649046 ± 11% softirqs.CPU39.TIMER
363141 +46.5% 532039 ± 4% softirqs.CPU4.RCU
175995 ± 4% +46.1% 257188 ± 2% softirqs.CPU4.SCHED
477697 ± 9% +32.2% 631353 ± 6% softirqs.CPU4.TIMER
355561 ± 2% +49.6% 531834 ± 2% softirqs.CPU5.RCU
181315 ± 4% +40.2% 254271 ± 3% softirqs.CPU5.SCHED
544589 ± 12% +14.4% 622872 ± 7% softirqs.CPU5.TIMER
359996 +46.9% 528818 ± 5% softirqs.CPU6.RCU
176407 ± 4% +48.2% 261494 ± 3% softirqs.CPU6.SCHED
478671 ± 9% +48.3% 709673 ± 16% softirqs.CPU6.TIMER
360493 +47.7% 532348 ± 2% softirqs.CPU7.RCU
178141 ± 3% +43.4% 255425 ± 3% softirqs.CPU7.SCHED
503929 ± 5% +25.2% 631075 ± 5% softirqs.CPU7.TIMER
376377 ± 4% +40.4% 528491 ± 4% softirqs.CPU8.RCU
176203 ± 4% +45.8% 256922 ± 2% softirqs.CPU8.SCHED
498117 ± 5% +29.3% 644298 ± 9% softirqs.CPU8.TIMER
356558 ± 2% +48.3% 528683 ± 3% softirqs.CPU9.RCU
177928 ± 3% +43.6% 255576 ± 3% softirqs.CPU9.SCHED
505291 ± 6% +24.6% 629841 ± 6% softirqs.CPU9.TIMER
13967990 ± 2% +46.7% 20497699 ± 3% softirqs.RCU
7112543 ± 3% +44.8% 10302369 ± 2% softirqs.SCHED
19732539 ± 7% +30.1% 25676278 ± 7% softirqs.TIMER
1965 +19.4% 2345 interrupts.10:IR-IO-APIC.10-edge.ipmi_si
1775 ± 31% +45.8% 2589 ± 26% interrupts.37:IR-PCI-MSI.524289-edge.eth0-TxRx-0
715.33 +205.9% 2188 ± 3% interrupts.41:IR-PCI-MSI.524293-edge.eth0-TxRx-4
606.00 +43.5% 869.75 interrupts.76:IR-PCI-MSI.512000-edge.ahci[0000:00:1f.2]
399993 +43.8% 575091 interrupts.CAL:Function_call_interrupts
10146 ± 8% +46.2% 14830 ± 6% interrupts.CPU0.CAL:Function_call_interrupts
2482784 +43.5% 3563654 interrupts.CPU0.LOC:Local_timer_interrupts
13988 ± 13% +91.3% 26758 ± 18% interrupts.CPU0.RES:Rescheduling_interrupts
10395 ± 6% +38.8% 14425 ± 6% interrupts.CPU1.CAL:Function_call_interrupts
2482642 +43.5% 3563164 interrupts.CPU1.LOC:Local_timer_interrupts
14273 ± 16% +68.4% 24036 ± 21% interrupts.CPU1.RES:Rescheduling_interrupts
9224 ± 3% +62.1% 14955 ± 4% interrupts.CPU10.CAL:Function_call_interrupts
2482714 +43.5% 3563439 interrupts.CPU10.LOC:Local_timer_interrupts
10652 ± 3% +144.4% 26030 ± 16% interrupts.CPU10.RES:Rescheduling_interrupts
9328 ± 2% +60.4% 14965 ± 5% interrupts.CPU11.CAL:Function_call_interrupts
2482726 +43.5% 3563349 interrupts.CPU11.LOC:Local_timer_interrupts
158.33 ±141% +269.3% 584.75 ± 71% interrupts.CPU11.NMI:Non-maskable_interrupts
158.33 ±141% +269.3% 584.75 ± 71% interrupts.CPU11.PMI:Performance_monitoring_interrupts
10654 +155.4% 27214 ± 16% interrupts.CPU11.RES:Rescheduling_interrupts
10732 ± 2% +41.7% 15206 ± 4% interrupts.CPU12.CAL:Function_call_interrupts
2482697 +43.5% 3563237 interrupts.CPU12.LOC:Local_timer_interrupts
16311 ± 2% +77.1% 28895 ± 3% interrupts.CPU12.RES:Rescheduling_interrupts
10469 ± 5% +40.6% 14722 ± 5% interrupts.CPU13.CAL:Function_call_interrupts
2482942 +43.5% 3563379 interrupts.CPU13.LOC:Local_timer_interrupts
14521 ± 19% +73.2% 25149 ± 21% interrupts.CPU13.RES:Rescheduling_interrupts
10235 ± 4% +44.9% 14835 ± 7% interrupts.CPU14.CAL:Function_call_interrupts
2482777 +43.5% 3563414 interrupts.CPU14.LOC:Local_timer_interrupts
14459 ± 16% +83.8% 26575 ± 17% interrupts.CPU14.RES:Rescheduling_interrupts
10475 ± 5% +42.2% 14893 ± 5% interrupts.CPU15.CAL:Function_call_interrupts
2482263 +43.5% 3563256 interrupts.CPU15.LOC:Local_timer_interrupts
10270 ± 4% +39.2% 14297 ± 8% interrupts.CPU16.CAL:Function_call_interrupts
2482715 +43.5% 3563527 interrupts.CPU16.LOC:Local_timer_interrupts
13353 ± 17% +81.0% 24169 ± 20% interrupts.CPU16.RES:Rescheduling_interrupts
10596 ± 3% +32.4% 14031 ± 5% interrupts.CPU17.CAL:Function_call_interrupts
2482547 +43.5% 3563381 interrupts.CPU17.LOC:Local_timer_interrupts
15715 ± 8% +41.7% 22268 ± 17% interrupts.CPU17.RES:Rescheduling_interrupts
10162 ± 4% +38.0% 14021 ± 8% interrupts.CPU18.CAL:Function_call_interrupts
2482670 +43.5% 3563564 interrupts.CPU18.LOC:Local_timer_interrupts
15192 ± 17% +59.1% 24167 ± 21% interrupts.CPU18.RES:Rescheduling_interrupts
10490 ± 3% +33.9% 14051 ± 5% interrupts.CPU19.CAL:Function_call_interrupts
2482487 +43.5% 3563418 interrupts.CPU19.LOC:Local_timer_interrupts
16413 ± 6% +51.3% 24830 ± 19% interrupts.CPU19.RES:Rescheduling_interrupts
10153 ± 4% +41.1% 14330 ± 6% interrupts.CPU2.CAL:Function_call_interrupts
2482585 +43.5% 3563309 interrupts.CPU2.LOC:Local_timer_interrupts
14386 ± 19% +65.2% 23766 ± 22% interrupts.CPU2.RES:Rescheduling_interrupts
9830 ± 6% +40.7% 13834 ± 4% interrupts.CPU20.CAL:Function_call_interrupts
2482474 +43.6% 3563715 interrupts.CPU20.LOC:Local_timer_interrupts
12446 ± 20% +67.4% 20840 ± 19% interrupts.CPU20.RES:Rescheduling_interrupts
10022 ± 4% +46.2% 14658 ± 6% interrupts.CPU21.CAL:Function_call_interrupts
2482761 +43.5% 3563322 interrupts.CPU21.LOC:Local_timer_interrupts
12414 ± 19% +110.6% 26145 ± 15% interrupts.CPU21.RES:Rescheduling_interrupts
9641 ± 10% +48.7% 14334 ± 8% interrupts.CPU22.CAL:Function_call_interrupts
2482862 +43.5% 3563405 interrupts.CPU22.LOC:Local_timer_interrupts
12562 ± 18% +93.4% 24296 ± 24% interrupts.CPU22.RES:Rescheduling_interrupts
9796 ± 4% +50.0% 14698 ± 7% interrupts.CPU23.CAL:Function_call_interrupts
2482838 +43.5% 3563157 interrupts.CPU23.LOC:Local_timer_interrupts
11819 ± 19% +109.1% 24709 ± 18% interrupts.CPU23.RES:Rescheduling_interrupts
10259 ± 6% +41.5% 14515 ± 6% interrupts.CPU24.CAL:Function_call_interrupts
2482791 +43.5% 3563576 interrupts.CPU24.LOC:Local_timer_interrupts
15199 ± 19% +53.5% 23324 ± 20% interrupts.CPU24.RES:Rescheduling_interrupts
9824 ± 4% +53.3% 15063 ± 6% interrupts.CPU25.CAL:Function_call_interrupts
2482696 +43.5% 3563590 interrupts.CPU25.LOC:Local_timer_interrupts
157.67 ±141% +301.5% 633.00 ± 65% interrupts.CPU25.NMI:Non-maskable_interrupts
157.67 ±141% +301.5% 633.00 ± 65% interrupts.CPU25.PMI:Performance_monitoring_interrupts
12009 ± 16% +120.3% 26455 ± 15% interrupts.CPU25.RES:Rescheduling_interrupts
1775 ± 31% +45.8% 2589 ± 26% interrupts.CPU26.37:IR-PCI-MSI.524289-edge.eth0-TxRx-0
9659 ± 10% +47.7% 14267 ± 5% interrupts.CPU26.CAL:Function_call_interrupts
2482909 +43.5% 3563595 interrupts.CPU26.LOC:Local_timer_interrupts
12858 ± 16% +85.8% 23889 ± 20% interrupts.CPU26.RES:Rescheduling_interrupts
9768 ± 6% +40.6% 13739 ± 8% interrupts.CPU27.CAL:Function_call_interrupts
2482824 +43.5% 3562529 interrupts.CPU27.LOC:Local_timer_interrupts
12876 ± 22% +73.5% 22335 ± 14% interrupts.CPU27.RES:Rescheduling_interrupts
9889 ± 8% +36.8% 13526 ± 7% interrupts.CPU28.CAL:Function_call_interrupts
2482643 +43.5% 3563431 interrupts.CPU28.LOC:Local_timer_interrupts
11749 ± 27% +81.9% 21373 ± 23% interrupts.CPU28.RES:Rescheduling_interrupts
9717 ± 10% +52.3% 14797 ± 5% interrupts.CPU29.CAL:Function_call_interrupts
2482876 +43.5% 3563145 interrupts.CPU29.LOC:Local_timer_interrupts
182.33 ±107% +191.0% 530.50 ± 22% interrupts.CPU29.NMI:Non-maskable_interrupts
182.33 ±107% +191.0% 530.50 ± 22% interrupts.CPU29.PMI:Performance_monitoring_interrupts
12399 ± 19% +111.8% 26258 ± 15% interrupts.CPU29.RES:Rescheduling_interrupts
9915 ± 8% +45.1% 14388 ± 5% interrupts.CPU3.CAL:Function_call_interrupts
2482914 +43.5% 3563108 interrupts.CPU3.LOC:Local_timer_interrupts
12900 ± 21% +87.5% 24189 ± 19% interrupts.CPU3.RES:Rescheduling_interrupts
10273 ± 9% +33.7% 13735 ± 10% interrupts.CPU30.CAL:Function_call_interrupts
2482440 +43.5% 3563278 interrupts.CPU30.LOC:Local_timer_interrupts
14696 ± 12% +46.5% 21527 ± 16% interrupts.CPU30.RES:Rescheduling_interrupts
10671 ± 2% +30.3% 13903 ± 5% interrupts.CPU31.CAL:Function_call_interrupts
2482690 +43.5% 3563421 interrupts.CPU31.LOC:Local_timer_interrupts
15897 +36.0% 21625 ± 19% interrupts.CPU31.RES:Rescheduling_interrupts
9336 ± 2% +42.8% 13334 ± 2% interrupts.CPU32.CAL:Function_call_interrupts
2482536 +43.5% 3563218 interrupts.CPU32.LOC:Local_timer_interrupts
11007 ± 4% +68.1% 18500 ± 2% interrupts.CPU32.RES:Rescheduling_interrupts
9807 ± 9% +47.2% 14438 ± 8% interrupts.CPU33.CAL:Function_call_interrupts
2482381 +43.5% 3563052 interrupts.CPU33.LOC:Local_timer_interrupts
159.33 ±141% +372.3% 752.50 ± 37% interrupts.CPU33.NMI:Non-maskable_interrupts
159.33 ±141% +372.3% 752.50 ± 37% interrupts.CPU33.PMI:Performance_monitoring_interrupts
12320 ± 19% +93.3% 23809 ± 18% interrupts.CPU33.RES:Rescheduling_interrupts
715.33 +205.9% 2188 ± 3% interrupts.CPU34.41:IR-PCI-MSI.524293-edge.eth0-TxRx-4
9817 ± 8% +39.7% 13714 ± 4% interrupts.CPU34.CAL:Function_call_interrupts
2482766 +43.5% 3563310 interrupts.CPU34.LOC:Local_timer_interrupts
12476 ± 18% +66.0% 20710 ± 18% interrupts.CPU34.RES:Rescheduling_interrupts
9850 ± 9% +41.9% 13976 ± 6% interrupts.CPU35.CAL:Function_call_interrupts
2482956 +43.5% 3563268 interrupts.CPU35.LOC:Local_timer_interrupts
11311 ± 29% +87.9% 21257 ± 17% interrupts.CPU35.RES:Rescheduling_interrupts
9784 ± 9% +45.2% 14206 ± 5% interrupts.CPU36.CAL:Function_call_interrupts
2482486 +43.5% 3563409 interrupts.CPU36.LOC:Local_timer_interrupts
9305 ± 3% +55.2% 14440 ± 8% interrupts.CPU37.CAL:Function_call_interrupts
2482564 +43.5% 3562982 interrupts.CPU37.LOC:Local_timer_interrupts
11000 ± 7% +117.0% 23870 ± 21% interrupts.CPU37.RES:Rescheduling_interrupts
9502 ± 11% +51.0% 14352 ± 6% interrupts.CPU38.CAL:Function_call_interrupts
2482635 +43.5% 3563178 interrupts.CPU38.LOC:Local_timer_interrupts
11903 ± 21% +92.5% 22913 ± 19% interrupts.CPU38.RES:Rescheduling_interrupts
9345 ± 2% +55.4% 14520 ± 8% interrupts.CPU39.CAL:Function_call_interrupts
2482870 +43.5% 3563092 interrupts.CPU39.LOC:Local_timer_interrupts
96.33 ±141% +224.4% 312.50 ± 98% interrupts.CPU39.NMI:Non-maskable_interrupts
96.33 ±141% +224.4% 312.50 ± 98% interrupts.CPU39.PMI:Performance_monitoring_interrupts
10384 ± 3% +127.6% 23634 ± 21% interrupts.CPU39.RES:Rescheduling_interrupts
9743 ± 4% +46.2% 14242 ± 10% interrupts.CPU4.CAL:Function_call_interrupts
2482637 +43.5% 3563660 interrupts.CPU4.LOC:Local_timer_interrupts
12474 ± 16% +91.7% 23919 ± 19% interrupts.CPU4.RES:Rescheduling_interrupts
10284 ± 7% +35.7% 13954 ± 6% interrupts.CPU5.CAL:Function_call_interrupts
2482985 +43.5% 3563234 interrupts.CPU5.LOC:Local_timer_interrupts
14377 ± 16% +51.1% 21730 ± 20% interrupts.CPU5.RES:Rescheduling_interrupts
606.00 +43.5% 869.75 interrupts.CPU6.76:IR-PCI-MSI.512000-edge.ahci[0000:00:1f.2]
10159 ± 4% +45.7% 14803 ± 7% interrupts.CPU6.CAL:Function_call_interrupts
2482577 +43.5% 3563712 interrupts.CPU6.LOC:Local_timer_interrupts
14706 ± 18% +79.6% 26418 ± 17% interrupts.CPU6.RES:Rescheduling_interrupts
10361 ± 7% +43.1% 14831 ± 5% interrupts.CPU7.CAL:Function_call_interrupts
2482952 +43.5% 3563417 interrupts.CPU7.LOC:Local_timer_interrupts
14163 ± 18% +89.0% 26764 ± 16% interrupts.CPU7.RES:Rescheduling_interrupts
1965 +19.4% 2345 interrupts.CPU8.10:IR-IO-APIC.10-edge.ipmi_si
10416 ± 5% +42.8% 14873 ± 7% interrupts.CPU8.CAL:Function_call_interrupts
2482862 +43.5% 3563671 interrupts.CPU8.LOC:Local_timer_interrupts
10333 ± 5% +39.1% 14374 ± 7% interrupts.CPU9.CAL:Function_call_interrupts
2482570 +43.5% 3563526 interrupts.CPU9.LOC:Local_timer_interrupts
14052 ± 18% +71.2% 24058 ± 20% interrupts.CPU9.RES:Rescheduling_interrupts
99308053 +43.5% 1.425e+08 interrupts.LOC:Local_timer_interrupts
120.00 +66.7% 200.00 interrupts.MCP:Machine_check_polls
1078626 -10.5% 964955 interrupts.RES:Rescheduling_interrupts
336.67 ± 5% +51.2% 509.00 ± 4% interrupts.TLB:TLB_shootdowns
aim7.jobs-per-min
16000 +-+-----------------------------------------------------------------+
| +.+.+ .+ +.|
14000 +-+ .+. + + : : |
12000 +-+.+.+.++.+.+ +.+.+.+.++.+.+.+ +.+.+.++.+.+. .+.+ : : |
| : : + : : |
10000 O-O O O O O O O O O O O OO O O : : |
| : : : : |
8000 +-+ : : : : |
| : : : : |
6000 +-+ : : : : |
4000 +-+ : : : : |
| : : : |
2000 +-+ : : |
| : : |
0 +-+------O----------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
7b0a7531cd: WARNING:possible_irq_lock_inversion_dependency_detected
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: 7b0a7531cd210e510e4bcf40a732060ec7503107 ("debug")
https://git.kernel.org/cgit/linux/kernel/git/frederic/linux-dynticks.git softirq/soft-interruptible-v2-0day
in testcase: trinity
with following parameters:
runtime: 300s
test-description: Trinity is a linux system call fuzz tester.
test-url: http://codemonkey.org.uk/projects/trinity/
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 2G
caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
+---------------------------------------------------------+------------+------------+
|                                                         | 62fd64b7f1 | 7b0a7531cd |
+---------------------------------------------------------+------------+------------+
| boot_successes                                          | 44         | 1          |
| boot_failures                                           | 2          | 23         |
| BUG:kernel_in_stage                                     | 2          | 2          |
| WARNING:possible_irq_lock_inversion_dependency_detected | 0          | 21         |
+---------------------------------------------------------+------------+------------+
[ 15.123927] WARNING: possible irq lock inversion dependency detected
[ 15.125747] 5.0.0-rc2-00264-g7b0a753 #168 Not tainted
[ 15.127213] --------------------------------------------------------
[ 15.129053] systemd/1 just changed the state of lock:
[ 15.130454] (____ptrval____) (__prout_spinB){+.+.+.+.+.+.+.+.+.+.+.}, at: proc_pid_cmdline_read+0xcd/0x32a
[ 15.133232] but this lock was taken by another, BLOCK_SOFTIRQ-safe lock in the past:
[ 15.135467] (__prout_spinA){+.+.+.+.+.-.+.+.+.+.+.}
[ 15.135471]
[ 15.135471]
[ 15.135471] and interrupts could create inverse lock ordering between them.
[ 15.135471]
[ 15.140008]
[ 15.140008] other info that might help us debug this:
[ 15.141878] Possible interrupt unsafe locking scenario:
[ 15.141878]
[ 15.143788]        CPU0                    CPU1
[ 15.145117]        ----                    ----
[ 15.146416]   lock(__prout_spinB);
[ 15.147438]                                local_irq_disable();
[ 15.149152]                                lock(__prout_spinA);
[ 15.150871]                                lock(__prout_spinB);
[ 15.152586]   <Interrupt>
[ 15.153342]     lock(__prout_spinA);
[ 15.154401]
[ 15.154401] *** DEADLOCK ***
[ 15.154401]
[ 15.156147] no locks held by systemd/1.
[ 15.157268]
[ 15.157268] the shortest dependencies between 2nd lock and 1st lock:
[ 15.159503] -> (__prout_spinA){+.+.+.+.+.-.+.+.+.+.+.} {
[ 15.161111] HARDIRQ-ON-W at:
[ 15.162053] _raw_spin_lock+0x2e/0x5f
[ 15.163641] proc_pid_cmdline_read+0x64/0x32a
[ 15.165372] __vfs_read+0x1e/0xa2
[ 15.166961] vfs_read+0xa4/0xc0
[ 15.168372] ksys_read+0x4b/0x79
[ 15.169879] do_syscall_64+0x14f/0x160
[ 15.171411] entry_SYSCALL_64_after_hwframe+0x49/0xbe
[ 15.173371] HI_SOFTIRQ-ON-W at:
[ 15.174440] _raw_spin_lock_bh_mask+0x3a/0x71
[ 15.176351] proc_pid_cmdline_read+0x99/0x32a
[ 15.178199] __vfs_read+0x1e/0xa2
[ 15.179889] vfs_read+0xa4/0xc0
[ 15.181391] ksys_read+0x4b/0x79
[ 15.182955] do_syscall_64+0x14f/0x160
[ 15.184623] entry_SYSCALL_64_after_hwframe+0x49/0xbe
[ 15.186777] TIMER_SOFTIRQ-ON-W at:
[ 15.187878] _raw_spin_lock_bh_mask+0x3a/0x71
[ 15.189784] proc_pid_cmdline_read+0x99/0x32a
[ 15.191705] __vfs_read+0x1e/0xa2
[ 15.193305] vfs_read+0xa4/0xc0
[ 15.194940] ksys_read+0x4b/0x79
[ 15.196533] do_syscall_64+0x14f/0x160
[ 15.198269] entry_SYSCALL_64_after_hwframe+0x49/0xbe
[ 15.200426] NET_TX_SOFTIRQ-ON-W at:
[ 15.201577] _raw_spin_lock_bh_mask+0x3a/0x71
[ 15.203468] proc_pid_cmdline_read+0x99/0x32a
[ 15.205435] __vfs_read+0x1e/0xa2
[ 15.207202] vfs_read+0xa4/0xc0
[ 15.208853] ksys_read+0x4b/0x79
[ 15.210596] do_syscall_64+0x14f/0x160
[ 15.212307] entry_SYSCALL_64_after_hwframe+0x49/0xbe
[ 15.214334] NET_RX_SOFTIRQ-ON-W at:
[ 15.215442] _raw_spin_lock_bh_mask+0x3a/0x71
[ 15.217410] proc_pid_cmdline_read+0x99/0x32a
[ 15.219434] __vfs_read+0x1e/0xa2
[ 15.221162] vfs_read+0xa4/0xc0
[ 15.222747] ksys_read+0x4b/0x79
[ 15.224346] do_syscall_64+0x14f/0x160
[ 15.226067] entry_SYSCALL_64_after_hwframe+0x49/0xbe
[ 15.228155] IN-BLOCK_SOFTIRQ-W at:
[ 15.229292] _raw_spin_lock+0x2e/0x5f
[ 15.230958] proc_pid_cmdline_read+0x64/0x32a
[ 15.232867] __vfs_read+0x1e/0xa2
[ 15.234418] vfs_read+0xa4/0xc0
[ 15.235933] ksys_read+0x4b/0x79
[ 15.237551] do_syscall_64+0x14f/0x160
[ 15.239314] entry_SYSCALL_64_after_hwframe+0x49/0xbe
[ 15.241419] IRQ_POLL_SOFTIRQ-ON-W at:
[ 15.242666] _raw_spin_lock_bh_mask+0x3a/0x71
[ 15.244626] proc_pid_cmdline_read+0x99/0x32a
[ 15.246665] __vfs_read+0x1e/0xa2
[ 15.248330] vfs_read+0xa4/0xc0
[ 15.249974] ksys_read+0x4b/0x79
[ 15.251593] do_syscall_64+0x14f/0x160
[ 15.253388] entry_SYSCALL_64_after_hwframe+0x49/0xbe
[ 15.255653] TASKLET_SOFTIRQ-ON-W at:
[ 15.256752] _raw_spin_lock_bh_mask+0x3a/0x71
[ 15.258747] proc_pid_cmdline_read+0x99/0x32a
[ 15.260689] __vfs_read+0x1e/0xa2
[ 15.262363] vfs_read+0xa4/0xc0
[ 15.264145] ksys_read+0x4b/0x79
[ 15.265795] do_syscall_64+0x14f/0x160
[ 15.267570] entry_SYSCALL_64_after_hwframe+0x49/0xbe
[ 15.269670] SCHED_SOFTIRQ-ON-W at:
[ 15.270746] _raw_spin_lock_bh_mask+0x3a/0x71
[ 15.272641] proc_pid_cmdline_read+0x99/0x32a
[ 15.274667] __vfs_read+0x1e/0xa2
[ 15.276222] vfs_read+0xa4/0xc0
[ 15.277717] ksys_read+0x4b/0x79
[ 15.279266] do_syscall_64+0x14f/0x160
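
For readers who want the shape of the bug rather than the raw splat: the report
boils down to __prout_spinB being taken both with softirqs enabled and nested
under the softirq-safe __prout_spinA, so a softirq arriving while B is held can
spin on A and invert the ordering. Below is a minimal, hypothetical sketch of
that pattern as a kernel module — lock_a, lock_b and the tasklet are
illustrative stand-ins written against the plain 5.0-era spin_lock_bh() API,
not the per-vector masking primitives this branch introduces, and this is not
the code from the "debug" commit itself:

    /*
     * Hypothetical sketch (not the code under test): two locks wired up
     * the way lockdep sees __prout_spinA/__prout_spinB in the splat above.
     */
    #include <linux/module.h>
    #include <linux/spinlock.h>
    #include <linux/interrupt.h>

    static DEFINE_SPINLOCK(lock_a);     /* plays the role of __prout_spinA */
    static DEFINE_SPINLOCK(lock_b);     /* plays the role of __prout_spinB */

    /* Softirq context: records lock_a as softirq-safe (IN-SOFTIRQ-W). */
    static void demo_tasklet_fn(unsigned long data)
    {
            spin_lock(&lock_a);
            spin_unlock(&lock_a);
    }
    static DECLARE_TASKLET(demo_tasklet, demo_tasklet_fn, 0);

    static int __init demo_init(void)
    {
            tasklet_schedule(&demo_tasklet);

            /* Dependency lock_a -> lock_b, softirqs disabled throughout. */
            spin_lock_bh(&lock_a);
            spin_lock(&lock_b);
            spin_unlock(&lock_b);
            spin_unlock_bh(&lock_a);

            /*
             * lock_b taken with softirqs still enabled (SOFTIRQ-ON-W).
             * If the tasklet fired here while another CPU held lock_a and
             * spun on lock_b, the softirq would spin on lock_a: the inverse
             * ordering lockdep warns about.  Run sequentially like this it
             * cannot actually deadlock, but lockdep still connects the
             * usages and emits the same class of report.
             */
            spin_lock(&lock_b);
            spin_unlock(&lock_b);
            return 0;
    }

    static void __exit demo_exit(void)
    {
            tasklet_kill(&demo_tasklet);
    }

    module_init(demo_init);
    module_exit(demo_exit);
    MODULE_LICENSE("GPL");

With CONFIG_PROVE_LOCKING=y, loading a module shaped like this should trigger
a "possible irq lock inversion dependency detected" warning of the same class
as the one quoted above, since lockdep sees lock_a used in softirq context,
lock_b used with softirqs enabled, and an a-before-b dependency between them.
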
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
Rong Chen