Re: [LKP] [rcu] kernel BUG at include/linux/pagemap.h:149!
by Frederic Weisbecker
On Fri, Sep 11, 2015 at 10:19:47AM +0800, Boqun Feng wrote:
> Subject: [PATCH 01/27] rcu: Don't disable preemption for Tiny and Tree RCU
> readers
>
> Because preempt_disable() maps to barrier() for non-debug builds,
> it forces the compiler to spill and reload registers. Because Tree
> RCU and Tiny RCU now only appear in CONFIG_PREEMPT=n builds, these
> barrier() instances generate needless extra code for each instance of
> rcu_read_lock() and rcu_read_unlock(). This extra code slows down Tree
> RCU and bloats Tiny RCU.
>
> This commit therefore removes the preempt_disable() and preempt_enable()
> from the non-preemptible implementations of __rcu_read_lock() and
> __rcu_read_unlock(), respectively.
>
> For debug purposes, preempt_disable() and preempt_enable() are still
> kept if CONFIG_PREEMPT_COUNT=y, which makes the detection of sleeping
> inside atomic sections still work in non-preemptible kernels.
>
> Signed-off-by: Boqun Feng <boqun.feng(a)gmail.com>
> Signed-off-by: Paul E. McKenney <paulmck(a)linux.vnet.ibm.com>
> ---
> include/linux/rcupdate.h | 6 ++++--
> include/linux/rcutiny.h | 1 +
> kernel/rcu/tree.c | 9 +++++++++
> 3 files changed, 14 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> index d63bb77..6c3cece 100644
> --- a/include/linux/rcupdate.h
> +++ b/include/linux/rcupdate.h
> @@ -297,12 +297,14 @@ void synchronize_rcu(void);
>
> static inline void __rcu_read_lock(void)
> {
> - preempt_disable();
> + if (IS_ENABLED(CONFIG_PREEMPT_COUNT))
> + preempt_disable();
preempt_disable() is a no-op when !CONFIG_PREEMPT_COUNT, right?
Or rather it's a barrier(), which is anyway implied by rcu_read_lock().
So perhaps we can get rid of the IS_ENABLED() check?
Extending the 0-day system with syzkaller?
by David Drysdale
Hi Fengguang / LKP-folk,
Quick question -- how easy is it to add extra builds/tests/checks to
your marvellous 0-day kbuild system?
The reason I ask is that I've recently been exploring syzkaller [1],
which is a system call fuzzer written by some of my colleagues here at
Google (cc'ed). Although it's fairly new, it has uncovered a bunch of
kernel bugs already [2] so I wondered if it might be a good candidate
for inclusion in the 0-day checks at some point.
(As an aside, I'm in the process of writing an article about syzkaller
for LWN, which might also expose it to more folk.)
What do you think?
Thanks,
David
[1] https://github.com/google/syzkaller
[2] https://github.com/google/syzkaller/wiki/Found-Bugs
[lkp] [mm, oom] faad2185f4: vm-scalability.throughput -11.8% regression
by kernel test robot
FYI, we noticed a -11.8% vm-scalability.throughput regression with the following commit:
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
commit faad2185f482578d50d363746006a1b95dde9d0a ("mm, oom: rework oom detection")
on test machine: lkp-hsw-ep2: 72 threads Brickland Haswell-EP with 128G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_pmem/nr_task/rootfs/tbox_group/test/testcase/thp_defrag/thp_enabled:
gcc-4.9/performance/x86_64-rhel-pmem/1/16/debian-x86_64-2015-02-07.cgz/lkp-hsw-ep2/swap-w-rand/vm-scalability/never/never
commit:
0da9597ac9c0adb8a521b9935fbe43d8b0e8cc64
faad2185f482578d50d363746006a1b95dde9d0a
0da9597ac9c0adb8 faad2185f482578d50d3637460
---------------- --------------------------
fail:runs %reproduction fail:runs
%stddev %change %stddev
\ | \
43802 ± 0% -11.8% 38653 ± 0% vm-scalability.throughput
310.35 ± 0% -100.0% 0.00 ± -1% vm-scalability.time.elapsed_time
310.35 ± 0% -100.0% 0.00 ± -1% vm-scalability.time.elapsed_time.max
234551 ± 6% -100.0% 0.00 ± -1% vm-scalability.time.involuntary_context_switches
44654748 ± 9% -100.0% 0.00 ± -1% vm-scalability.time.major_page_faults
2442686 ± 11% -100.0% 0.00 ± -1% vm-scalability.time.maximum_resident_set_size
34477365 ± 0% -100.0% 0.00 ± -1% vm-scalability.time.minor_page_faults
4096 ± 0% -100.0% 0.00 ± -1% vm-scalability.time.page_size
1595 ± 0% -100.0% 0.00 ± -1% vm-scalability.time.percent_of_cpu_this_job_got
4935 ± 0% -100.0% 0.00 ± -1% vm-scalability.time.system_time
19.08 ± 6% -100.0% 0.00 ± -1% vm-scalability.time.user_time
342.89 ± 0% -71.7% 96.99 ± -1% uptime.boot
18719 ± 1% -70.3% 5555 ± 0% uptime.idle
227271 ± 3% -68.0% 72623 ± 0% softirqs.RCU
208173 ± 7% -69.7% 63118 ± 0% softirqs.SCHED
3204631 ± 1% -73.0% 866292 ± 0% softirqs.TIMER
739.50 ± 0% -1.6% 728.00 ± 0% turbostat.Avg_MHz
61.50 ± 3% +20.3% 74.00 ± -1% turbostat.CoreTmp
0.07 ± 57% +1092.7% 0.82 ±-121% turbostat.Pkg%pc2
64.75 ± 2% +14.3% 74.00 ± -1% turbostat.PkgTmp
51.45 ± 0% +1.8% 52.39 ± -1% turbostat.RAMWatt
789322 ± 4% +49.2% 1177649 ± 0% vmstat.memory.free
53141272 ± 1% -45.8% 28781900 ± 0% vmstat.memory.swpd
0.00 ± 0% +Inf% 1.00 ±-100% vmstat.procs.b
780938 ± 7% +66.2% 1297589 ± 0% vmstat.swap.so
4217 ± 6% +103.4% 8576 ± 0% vmstat.system.cs
204460 ± 6% +62.0% 331270 ± 0% vmstat.system.in
9128034 ± 43% -85.7% 1306182 ± 0% cpuidle.C1E-HSW.time
5009 ± 52% -88.9% 557.00 ± 0% cpuidle.C1E-HSW.usage
9110 ±130% -93.3% 611.00 ± 0% cpuidle.C3-HSW.usage
1.655e+10 ± 0% -79.5% 3.397e+09 ± 0% cpuidle.C6-HSW.time
621881 ± 2% -71.5% 177398 ± 0% cpuidle.C6-HSW.usage
53981965 ± 58% -80.4% 10553789 ± 0% cpuidle.POLL.time
85773 ± 9% -18.4% 69982 ± 0% cpuidle.POLL.usage
2925199 ± 94% -75.8% 706866 ± 0% numa-numastat.node0.local_node
2931002 ± 93% -75.6% 716120 ± 0% numa-numastat.node0.numa_hit
12041792 ± 24% -67.4% 3919657 ± 0% numa-numastat.node0.numa_miss
12047595 ± 24% -67.4% 3928911 ± 0% numa-numastat.node0.other_node
64592910 ± 10% -66.5% 21635175 ± 0% numa-numastat.node1.local_node
12041716 ± 24% -67.5% 3919210 ± 0% numa-numastat.node1.numa_foreign
64601023 ± 10% -66.5% 21639833 ± 0% numa-numastat.node1.numa_hit
4730 ± 13% +290.9% 18491 ± 0% meminfo.Inactive(file)
12978 ± 8% +46.3% 18985 ± 0% meminfo.Mapped
703327 ± 9% +72.4% 1212584 ± 0% meminfo.MemAvailable
732344 ± 8% +65.0% 1208500 ± 0% meminfo.MemFree
99286 ± 4% +30.3% 129348 ± 0% meminfo.SReclaimable
3920 ± 21% +332.5% 16955 ± 0% meminfo.Shmem
206164 ± 2% +14.7% 236528 ± 0% meminfo.Slab
1113 ± 10% +23.6% 1377 ± 0% meminfo.SwapCached
47130509 ± 3% +53.1% 72150055 ± 0% meminfo.SwapFree
1012 ± 12% +60.9% 1628 ± 0% slabinfo.blkdev_requests.active_objs
1012 ± 12% +60.9% 1628 ± 0% slabinfo.blkdev_requests.num_objs
1531 ± 5% +12.5% 1722 ± 0% slabinfo.mnt_cache.active_objs
1531 ± 5% +12.5% 1722 ± 0% slabinfo.mnt_cache.num_objs
9719 ± 9% -16.8% 8087 ± 0% slabinfo.proc_inode_cache.num_objs
92960 ± 6% +69.6% 157683 ± 0% slabinfo.radix_tree_node.active_objs
9336 ± 9% +35.2% 12624 ± 0% slabinfo.radix_tree_node.active_slabs
95203 ± 6% +66.0% 158075 ± 0% slabinfo.radix_tree_node.num_objs
9336 ± 9% +35.2% 12624 ± 0% slabinfo.radix_tree_node.num_slabs
310.35 ± 0% -100.0% 0.00 ± -1% time.elapsed_time
310.35 ± 0% -100.0% 0.00 ± -1% time.elapsed_time.max
600.00 ± 27% -100.0% 0.00 ± -1% time.file_system_inputs
234551 ± 6% -100.0% 0.00 ± -1% time.involuntary_context_switches
44654748 ± 9% -100.0% 0.00 ± -1% time.major_page_faults
2442686 ± 11% -100.0% 0.00 ± -1% time.maximum_resident_set_size
34477365 ± 0% -100.0% 0.00 ± -1% time.minor_page_faults
4096 ± 0% -100.0% 0.00 ± -1% time.page_size
1595 ± 0% -100.0% 0.00 ± -1% time.percent_of_cpu_this_job_got
4935 ± 0% -100.0% 0.00 ± -1% time.system_time
19.08 ± 6% -100.0% 0.00 ± -1% time.user_time
390.50 ± 34% -100.0% 0.00 ± -1% time.voluntary_context_switches
914507 ± 7% -13.3% 792912 ± 0% numa-meminfo.node0.Active
913915 ± 7% -13.5% 790259 ± 0% numa-meminfo.node0.Active(anon)
592.00 ± 31% +348.1% 2653 ± 0% numa-meminfo.node0.Active(file)
1217059 ± 7% -13.7% 1049893 ± 0% numa-meminfo.node0.AnonPages
306384 ± 7% -12.0% 269631 ± 0% numa-meminfo.node0.Inactive
304389 ± 7% -14.4% 260426 ± 0% numa-meminfo.node0.Inactive(anon)
1995 ± 8% +361.3% 9204 ± 0% numa-meminfo.node0.Inactive(file)
5801 ± 4% +16.7% 6772 ± 0% numa-meminfo.node0.Mapped
32196 ± 7% +36.4% 43932 ± 0% numa-meminfo.node0.MemFree
55651 ± 5% +10.6% 61563 ± 0% numa-meminfo.node0.SUnreclaim
2966 ± 15% +232.9% 9875 ± 0% numa-meminfo.node1.Inactive(file)
7446 ± 13% +67.7% 12486 ± 0% numa-meminfo.node1.Mapped
679948 ± 6% +76.7% 1201231 ± 0% numa-meminfo.node1.MemFree
66811 ± 7% +48.5% 99246 ± 0% numa-meminfo.node1.SReclaimable
51227 ± 5% -11.0% 45616 ± 0% numa-meminfo.node1.SUnreclaim
3090 ± 39% +415.6% 15932 ± 0% numa-meminfo.node1.Shmem
118039 ± 3% +22.7% 144863 ± 0% numa-meminfo.node1.Slab
0.00 ± -1% +Inf% 1.58 ±-63% perf-profile.cycles-pp.__alloc_pages_slowpath.constprop.93.__alloc_pages_nodemask.alloc_kmem_pages_node.copy_process._do_fork
0.00 ± -1% +Inf% 26.40 ± -3% perf-profile.cycles-pp.__alloc_pages_slowpath.constprop.93.__alloc_pages_nodemask.alloc_pages_vma.__read_swap_cache_async.read_swap_cache_async
0.00 ± -1% +Inf% 39.64 ± -2% perf-profile.cycles-pp.__alloc_pages_slowpath.constprop.93.__alloc_pages_nodemask.alloc_pages_vma.handle_mm_fault.__do_page_fault
5.20 ±140% -100.0% 0.00 ± -1% perf-profile.cycles-pp.do_try_to_free_pages.try_to_free_pages.__alloc_pages_nodemask.alloc_kmem_pages_node.copy_process
25.02 ± 10% -100.0% 0.00 ± -1% perf-profile.cycles-pp.do_try_to_free_pages.try_to_free_pages.__alloc_pages_nodemask.alloc_pages_vma.__read_swap_cache_async
38.03 ± 9% -100.0% 0.00 ± -1% perf-profile.cycles-pp.do_try_to_free_pages.try_to_free_pages.__alloc_pages_nodemask.alloc_pages_vma.handle_mm_fault
0.00 ± -1% +Inf% 1.59 ±-62% perf-profile.cycles-pp.do_try_to_free_pages.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages_nodemask.alloc_kmem_pages_node
0.00 ± -1% +Inf% 65.24 ± -1% perf-profile.cycles-pp.do_try_to_free_pages.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages_nodemask.alloc_pages_vma
5.20 ±140% -100.0% 0.00 ± -1% perf-profile.cycles-pp.shrink_zone.do_try_to_free_pages.try_to_free_pages.__alloc_pages_nodemask.alloc_kmem_pages_node
63.09 ± 8% -100.0% 0.00 ± -1% perf-profile.cycles-pp.shrink_zone.do_try_to_free_pages.try_to_free_pages.__alloc_pages_nodemask.alloc_pages_vma
0.00 ± -1% +Inf% 67.08 ± -1% perf-profile.cycles-pp.shrink_zone.do_try_to_free_pages.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages_nodemask
69.00 ± 2% -100.0% 0.00 ± -1% perf-profile.cycles-pp.shrink_zone_memcg.shrink_zone.do_try_to_free_pages.try_to_free_pages.__alloc_pages_nodemask
0.00 ± -1% +Inf% 66.87 ± -1% perf-profile.cycles-pp.shrink_zone_memcg.shrink_zone.do_try_to_free_pages.try_to_free_pages.__alloc_pages_slowpath
5.20 ±140% -100.0% 0.00 ± -1% perf-profile.cycles-pp.try_to_free_pages.__alloc_pages_nodemask.alloc_kmem_pages_node.copy_process._do_fork
25.05 ± 11% -100.0% 0.00 ± -1% perf-profile.cycles-pp.try_to_free_pages.__alloc_pages_nodemask.alloc_pages_vma.__read_swap_cache_async.read_swap_cache_async
38.06 ± 9% -100.0% 0.00 ± -1% perf-profile.cycles-pp.try_to_free_pages.__alloc_pages_nodemask.alloc_pages_vma.handle_mm_fault.__do_page_fault
0.00 ± -1% +Inf% 1.59 ±-62% perf-profile.cycles-pp.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages_nodemask.alloc_kmem_pages_node.copy_process
0.00 ± -1% +Inf% 26.22 ± -3% perf-profile.cycles-pp.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages_nodemask.alloc_pages_vma.__read_swap_cache_async
0.00 ± -1% +Inf% 39.01 ± -2% perf-profile.cycles-pp.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages_nodemask.alloc_pages_vma.handle_mm_fault
228466 ± 7% -13.4% 197776 ± 0% numa-vmstat.node0.nr_active_anon
147.50 ± 31% +334.6% 641.00 ± 0% numa-vmstat.node0.nr_active_file
304255 ± 7% -13.6% 262822 ± 0% numa-vmstat.node0.nr_anon_pages
8062 ± 8% +34.3% 10829 ± 0% numa-vmstat.node0.nr_free_pages
76095 ± 7% -14.3% 65250 ± 0% numa-vmstat.node0.nr_inactive_anon
498.00 ± 8% +347.8% 2230 ± 0% numa-vmstat.node0.nr_inactive_file
1466 ± 5% +15.0% 1686 ± 0% numa-vmstat.node0.nr_mapped
13912 ± 5% +10.6% 15390 ± 0% numa-vmstat.node0.nr_slab_unreclaimable
7585474 ± 5% -73.6% 2005989 ± 0% numa-vmstat.node0.nr_vmscan_write
7585495 ± 5% -73.6% 2006038 ± 0% numa-vmstat.node0.nr_written
2042806 ± 73% -60.3% 810553 ± 0% numa-vmstat.node0.numa_hit
1973969 ± 76% -62.6% 737625 ± 0% numa-vmstat.node0.numa_local
6640606 ± 22% -72.4% 1834872 ± 0% numa-vmstat.node0.numa_miss
6709443 ± 22% -71.6% 1907800 ± 0% numa-vmstat.node0.numa_other
169806 ± 5% +71.9% 291868 ± 0% numa-vmstat.node1.nr_free_pages
740.75 ± 15% +223.1% 2393 ± 0% numa-vmstat.node1.nr_inactive_file
1860 ± 13% +69.4% 3153 ± 0% numa-vmstat.node1.nr_mapped
767.88 ± 39% +415.4% 3958 ± 0% numa-vmstat.node1.nr_shmem
16698 ± 7% +46.9% 24534 ± 0% numa-vmstat.node1.nr_slab_reclaimable
12806 ± 5% -10.9% 11405 ± 0% numa-vmstat.node1.nr_slab_unreclaimable
27889038 ± 6% -68.4% 8818477 ± 0% numa-vmstat.node1.nr_vmscan_write
27889106 ± 6% -68.4% 8818479 ± 0% numa-vmstat.node1.nr_written
6640458 ± 22% -72.4% 1834609 ± 0% numa-vmstat.node1.numa_foreign
38283180 ± 9% -71.4% 10962630 ± 0% numa-vmstat.node1.numa_hit
38265243 ± 9% -71.4% 10948754 ± 0% numa-vmstat.node1.numa_local
539498 ± 6% -66.2% 182224 ± 0% proc-vmstat.allocstall
144.38 ± 22% -96.5% 5.00 ±-20% proc-vmstat.compact_fail
15889726 ± 25% -87.2% 2027142 ± 0% proc-vmstat.compact_free_scanned
7424 ± 21% -95.5% 337.00 ± 0% proc-vmstat.compact_isolated
18421 ±120% -98.3% 310.00 ± 0% proc-vmstat.compact_migrate_scanned
192.00 ± 21% -96.4% 7.00 ±-14% proc-vmstat.compact_stall
49525 ± 43% +154.6% 126090 ± 0% proc-vmstat.kswapd_low_wmark_hit_quickly
17344 ± 4% +73.0% 30013 ± 0% proc-vmstat.nr_dirty_background_threshold
34690 ± 4% +73.0% 60026 ± 0% proc-vmstat.nr_dirty_threshold
180484 ± 4% +67.9% 303083 ± 0% proc-vmstat.nr_free_pages
1227 ± 10% +276.8% 4623 ± 0% proc-vmstat.nr_inactive_file
3303 ± 6% +43.0% 4722 ± 0% proc-vmstat.nr_mapped
1012 ± 17% +321.7% 4270 ± 0% proc-vmstat.nr_shmem
24900 ± 4% +29.6% 32265 ± 0% proc-vmstat.nr_slab_reclaimable
35587470 ± 5% -69.7% 10775004 ± 0% proc-vmstat.nr_vmscan_write
61007414 ± 6% -65.6% 21016129 ± 0% proc-vmstat.nr_written
16970144 ± 12% -37.0% 10686065 ± 0% proc-vmstat.numa_foreign
10074000 ± 1% -45.2% 5519749 ± 0% proc-vmstat.numa_hint_faults
9673661 ± 5% -44.4% 5377833 ± 0% proc-vmstat.numa_hint_faults_local
67528367 ± 6% -67.0% 22278204 ± 0% proc-vmstat.numa_hit
67514451 ± 6% -67.0% 22264292 ± 0% proc-vmstat.numa_local
16969897 ± 12% -37.0% 10686272 ± 0% proc-vmstat.numa_miss
16983813 ± 12% -37.0% 10700184 ± 0% proc-vmstat.numa_other
41943046 ± 1% -43.9% 23535513 ± 0% proc-vmstat.numa_pte_updates
49539 ± 43% +154.5% 126102 ± 0% proc-vmstat.pageoutrun
45300466 ± 9% -79.2% 9418945 ± 0% proc-vmstat.pgactivate
558557 ± 14% -34.1% 367818 ± 0% proc-vmstat.pgalloc_dma
14967174 ± 3% -69.1% 4626484 ± 0% proc-vmstat.pgalloc_dma32
71037855 ± 7% -57.6% 30119030 ± 0% proc-vmstat.pgalloc_normal
62292933 ± 6% -65.2% 21706559 ± 0% proc-vmstat.pgdeactivate
79824509 ± 5% -56.2% 34976920 ± 0% proc-vmstat.pgfault
86163698 ± 6% -68.5% 27162999 ± 0% proc-vmstat.pgfree
44685673 ± 9% -79.4% 9192073 ± 0% proc-vmstat.pgmajfault
13765509 ± 7% -69.7% 4168976 ± 0% proc-vmstat.pgrefill_dma32
48547731 ± 6% -63.8% 17561899 ± 0% proc-vmstat.pgrefill_normal
12122138 ± 7% -69.7% 3675632 ± 0% proc-vmstat.pgscan_direct_dma32
67953310 ± 7% -66.4% 22842830 ± 0% proc-vmstat.pgscan_direct_normal
11915527 ± 10% -79.3% 2460668 ± 0% proc-vmstat.pgscan_kswapd_dma32
15559179 ± 9% -80.7% 2995996 ± 0% proc-vmstat.pgscan_kswapd_normal
8844259 ± 8% -70.8% 2582588 ± 0% proc-vmstat.pgsteal_direct_dma32
43061102 ± 7% -64.0% 15515081 ± 0% proc-vmstat.pgsteal_direct_normal
4732303 ± 6% -69.0% 1469200 ± 0% proc-vmstat.pgsteal_kswapd_dma32
4380170 ± 7% -66.6% 1462100 ± 0% proc-vmstat.pgsteal_kswapd_normal
44709819 ± 9% -79.4% 9217280 ± 0% proc-vmstat.pswpin
61007674 ± 6% -65.6% 21016726 ± 0% proc-vmstat.pswpout
37.61 ± 8% -39.3% 22.83 ± -4% sched_debug.cfs_rq:/.load.avg
884.52 ± 5% -36.7% 559.50 ± 0% sched_debug.cfs_rq:/.load.max
146.88 ± 5% -38.3% 90.64 ± -1% sched_debug.cfs_rq:/.load.stddev
47.93 ± 5% +28.5% 61.60 ± -1% sched_debug.cfs_rq:/.load_avg.avg
1095 ± 10% +52.2% 1667 ± 0% sched_debug.cfs_rq:/.load_avg.max
170.96 ± 7% +39.6% 238.66 ± 0% sched_debug.cfs_rq:/.load_avg.stddev
578829 ± 2% -80.5% 112739 ± 0% sched_debug.cfs_rq:/.min_vruntime.avg
2507544 ± 0% -80.8% 482665 ± 0% sched_debug.cfs_rq:/.min_vruntime.max
998179 ± 1% -82.2% 177613 ± 0% sched_debug.cfs_rq:/.min_vruntime.stddev
0.24 ± 2% -37.1% 0.15 ±-654% sched_debug.cfs_rq:/.nr_running.avg
0.41 ± 1% -22.1% 0.32 ±-312% sched_debug.cfs_rq:/.nr_running.stddev
34.69 ± 0% -38.3% 21.40 ± -4% sched_debug.cfs_rq:/.runnable_load_avg.avg
849.33 ± 0% -37.9% 527.50 ± 0% sched_debug.cfs_rq:/.runnable_load_avg.max
138.61 ± 0% -38.4% 85.43 ± -1% sched_debug.cfs_rq:/.runnable_load_avg.stddev
444145 ± 30% -87.1% 57376 ± 0% sched_debug.cfs_rq:/.spread0.avg
2372869 ± 5% -82.0% 427303 ± 0% sched_debug.cfs_rq:/.spread0.max
998183 ± 1% -82.2% 177613 ± 0% sched_debug.cfs_rq:/.spread0.stddev
242.15 ± 1% -28.8% 172.45 ± 0% sched_debug.cfs_rq:/.util_avg.avg
392.49 ± 0% -21.4% 308.56 ± 0% sched_debug.cfs_rq:/.util_avg.stddev
184988 ± 1% -64.1% 66460 ± 0% sched_debug.cpu.clock.avg
184996 ± 1% -64.1% 66466 ± 0% sched_debug.cpu.clock.max
184978 ± 1% -64.1% 66453 ± 0% sched_debug.cpu.clock.min
5.60 ± 18% -30.8% 3.88 ±-25% sched_debug.cpu.clock.stddev
184988 ± 1% -64.1% 66460 ± 0% sched_debug.cpu.clock_task.avg
184996 ± 1% -64.1% 66466 ± 0% sched_debug.cpu.clock_task.max
184978 ± 1% -64.1% 66453 ± 0% sched_debug.cpu.clock_task.min
5.60 ± 18% -30.8% 3.88 ±-25% sched_debug.cpu.clock_task.stddev
36.54 ± 4% -42.2% 21.11 ± -4% sched_debug.cpu.cpu_load[0].avg
950.98 ± 7% -44.5% 527.50 ± 0% sched_debug.cpu.cpu_load[0].max
151.40 ± 6% -43.7% 85.22 ± -1% sched_debug.cpu.cpu_load[0].stddev
35.91 ± 2% -41.2% 21.10 ± -4% sched_debug.cpu.cpu_load[1].avg
899.77 ± 3% -41.4% 527.50 ± 0% sched_debug.cpu.cpu_load[1].max
145.18 ± 3% -41.3% 85.22 ± -1% sched_debug.cpu.cpu_load[1].stddev
35.61 ± 2% -40.7% 21.12 ± -4% sched_debug.cpu.cpu_load[2].avg
877.87 ± 2% -39.9% 527.50 ± 0% sched_debug.cpu.cpu_load[2].max
142.54 ± 2% -40.2% 85.23 ± -1% sched_debug.cpu.cpu_load[2].stddev
35.37 ± 2% -40.1% 21.20 ± -4% sched_debug.cpu.cpu_load[3].avg
867.60 ± 2% -39.2% 527.50 ± 0% sched_debug.cpu.cpu_load[3].max
141.21 ± 2% -39.6% 85.33 ± -1% sched_debug.cpu.cpu_load[3].stddev
35.16 ± 1% -39.6% 21.24 ± -4% sched_debug.cpu.cpu_load[4].avg
858.88 ± 2% -38.6% 527.50 ± 0% sched_debug.cpu.cpu_load[4].max
140.16 ± 2% -39.0% 85.43 ± -1% sched_debug.cpu.cpu_load[4].stddev
456.75 ± 2% -41.9% 265.40 ± 0% sched_debug.cpu.curr->pid.avg
5331 ± 1% -53.3% 2491 ± 0% sched_debug.cpu.curr->pid.max
912.17 ± 2% -38.3% 562.54 ± 0% sched_debug.cpu.curr->pid.stddev
37.90 ± 7% -39.8% 22.83 ± -4% sched_debug.cpu.load.avg
904.00 ± 7% -38.1% 559.50 ± 0% sched_debug.cpu.load.max
149.42 ± 6% -39.3% 90.64 ± -1% sched_debug.cpu.load.stddev
0.00 ± 5% -31.3% 0.00 ±-4394145% sched_debug.cpu.next_balance.stddev
72445 ± 0% -75.8% 17509 ± 0% sched_debug.cpu.nr_load_updates.avg
155443 ± 0% -77.5% 34902 ± 0% sched_debug.cpu.nr_load_updates.max
19501 ± 33% -61.4% 7530 ± 0% sched_debug.cpu.nr_load_updates.min
46290 ± 1% -80.6% 8995 ± 0% sched_debug.cpu.nr_load_updates.stddev
0.25 ± 3% -34.8% 0.16 ±-626% sched_debug.cpu.nr_running.avg
1.12 ± 6% +33.3% 1.50 ±-66% sched_debug.cpu.nr_running.max
0.42 ± 2% -14.1% 0.36 ±-276% sched_debug.cpu.nr_running.stddev
9841 ± 4% -54.7% 4459 ± 0% sched_debug.cpu.nr_switches.avg
323.71 ± 11% -26.8% 237.00 ± 0% sched_debug.cpu.nr_switches.min
10606 ± 6% -26.9% 7748 ± 0% sched_debug.cpu.nr_switches.stddev
0.00 ± 68% +1100.0% 0.05 ±-2057% sched_debug.cpu.nr_uninterruptible.avg
184975 ± 1% -64.1% 66455 ± 0% sched_debug.cpu_clk
181780 ± 1% -65.3% 63027 ± 0% sched_debug.ktime
0.12 ± 5% +205.5% 0.36 ±-275% sched_debug.rt_rq:/.rt_time.avg
4.33 ± 7% +211.4% 13.48 ± -7% sched_debug.rt_rq:/.rt_time.max
0.60 ± 6% +208.6% 1.85 ±-54% sched_debug.rt_rq:/.rt_time.stddev
184975 ± 1% -64.1% 66455 ± 0% sched_debug.sched_clk
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
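For reference, the %change columns in the comparison table above are plain relative differences of the head commit against the base commit. A minimal sketch (illustrative only, not lkp's actual reporting code):

```python
def pct_change(base, head):
    """Relative change of head vs. base, as shown in lkp comparison tables."""
    return (head - base) / base * 100.0

# vm-scalability.throughput: 43802 (base 0da9597) -> 38653 (head faad218)
print(round(pct_change(43802, 38653), 1))  # -11.8
```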
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Xiaolong
[lkp] [ASoC] 24164c2d37: BUG: unable to handle kernel
by Ye, Xiaolong
FYI, we noticed the following commit:
https://github.com/0day-ci/linux vedang-patel-intel-com/ASoC-Reduce-audio-related-kernel-spew/20160427-070807
commit 24164c2d37ef871e14780078cb1ff4fe90f76e78 ("ASoC: Intel: Skylake: Increase loglevel of debug messages.")
on test machine: vm-kbuild-1G: 2 threads qemu-system-x86_64 -enable-kvm -cpu Haswell,+smep,+smap with 1G memory
caused the following changes:
+----------------+------------+------------+
| | e9a585dde9 | 24164c2d37 |
+----------------+------------+------------+
| boot_successes | 2 | 0 |
+----------------+------------+------------+
[ 1.250173] Performance Events: unsupported p6 CPU model 60 no PMU driver, software events only.
[ 1.286862] BUG: unable to handle kernel NULL pointer dereference at (null)
[ 1.302122] IP: [<ffffffff8800c361>] dynamic_debug_init+0x15a/0x669
[ 1.308529] PGD 0
[ 1.309125] Oops: 0000 [#1]
[ 1.311225] CPU: 0 PID: 1 Comm: swapper Not tainted 4.6.0-rc5-00127-g24164c2 #2
[ 1.313394] task: ffff88000011a000 ti: ffff88000011c000 task.ti: ffff88000011c000
[ 1.325487] RIP: 0010:[<ffffffff8800c361>] [<ffffffff8800c361>] dynamic_debug_init+0x15a/0x669
[ 1.338189] RSP: 0000:ffff88000011fde0 EFLAGS: 00010286
[ 1.339640] RAX: 0000000000000000 RBX: 0000000000000004 RCX: ffffffffffffffff
[ 1.345523] RDX: ffffffffffffffc4 RSI: ffffffff84875105 RDI: 0000000000000000
[ 1.350793] RBP: ffff88000011fe30 R08: ffffffffffffffef R09: ffffffffffffffe9
[ 1.352935] R10: ffff88000011fd90 R11: 0000000000008001 R12: ffffffff8772b1e8
[ 1.366765] R13: 00000000000001d1 R14: 0000000000000000 R15: ffffffff84875105
[ 1.371474] FS: 0000000000000000(0000) GS:ffffffff84b4c000(0000) knlGS:0000000000000000
[ 1.376489] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 1.381509] CR2: 0000000000000000 CR3: 0000000004a0c000 CR4: 00000000000406f0
[ 1.383648] Stack:
[ 1.390087] 00000000787097a8 0000000000000282 0000000000000aec ffffffff8772b148
[ 1.397447] 00000cbe00052d61 ffffffff8800c207 0000000000000000 0000000000000000
[ 1.402442] 0000000000000002 000000000000000c ffff88000011fec8 ffffffff87f4ab40
[ 1.412369] Call Trace:
[ 1.413129] [<ffffffff8800c207>] ? dynamic_debug_init_debugfs+0x1e0/0x1e0
[ 1.420767] [<ffffffff87f4ab40>] do_one_initcall+0x430/0x7f4
[ 1.422502] [<ffffffff81139e00>] ? native_irq_disable+0x7/0x7
[ 1.431393] [<ffffffff83d31d10>] ? _raw_spin_unlock_irq+0x53/0xe4
[ 1.433155] [<ffffffff87f4affe>] kernel_init_freeable+0xfa/0x499
[ 1.441677] [<ffffffff83d13b83>] kernel_init+0x17/0x3fa
[ 1.447926] [<ffffffff83d331d2>] ret_from_fork+0x22/0x50
[ 1.449567] [<ffffffff83d13b6c>] ? rest_init+0x243/0x243
[ 1.457948] Code: f7 f2 ae 49 8b 7c 24 08 49 89 c8 48 83 c9 ff f2 ae 49 8b 7c 24 10 49 89 c9 48 83 c9 ff f2 ae 49 8b 7c 24 18 48 89 ca 48 83 c9 ff <f2> ae 4c 89 ff 48 89 c8 8b 4d d0 48 f7 d0 44 29 c9 44 29 c1 29
[ 1.479779] RIP [<ffffffff8800c361>] dynamic_debug_init+0x15a/0x669
[ 1.488480] RSP <ffff88000011fde0>
[ 1.489588] CR2: 0000000000000000
[ 1.497304] ---[ end trace 13f41c3d0b577fa5 ]---
FYI, raw QEMU command line is:
qemu-system-x86_64 -enable-kvm -cpu Haswell,+smep,+smap -kernel /pkg/linux/x86_64-randconfig-v0-04271113/gcc-5/24164c2d37ef871e14780078cb1ff4fe90f76e78/vmlinuz-4.6.0-rc5-00127-g24164c2 -append 'root=/dev/ram0 user=lkp job=/lkp/scheduled/vm-kbuild-1G-10/bisect_boot-1-debian-x86_64-2015-02-07.cgz-x86_64-randconfig-v0-04271113-24164c2d37ef871e14780078cb1ff4fe90f76e78-20160427-74012-1a1njq3-0.yaml ARCH=x86_64 kconfig=x86_64-randconfig-v0-04271113 branch=linux-devel/devel-spot-201604271010 commit=24164c2d37ef871e14780078cb1ff4fe90f76e78 BOOT_IMAGE=/pkg/linux/x86_64-randconfig-v0-04271113/gcc-5/24164c2d37ef871e14780078cb1ff4fe90f76e78/vmlinuz-4.6.0-rc5-00127-g24164c2 max_uptime=600 RESULT_ROOT=/result/boot/1/vm-kbuild-1G/debian-x86_64-2015-02-07.cgz/x86_64-randconfig-v0-04271113/gcc-5/24164c2d37ef871e14780078cb1ff4fe90f76e78/0 LKP_SERVER=inn earlyprintk=ttyS0,115200 systemd.log_level=err debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal rw ip=::::vm-kbuild-1G-10::dhcp' -initrd /fs/sde1/initrd-vm-kbuild-1G-10 -m 1024 -smp 2 -device e1000,netdev=net0 -netdev user,id=net0,hostfwd=tcp::23009-:22 -boot order=nc -no-reboot -watchdog i6300esb -rtc base=localtime -device virtio-scsi-pci,id=scsi0 -drive file=/fs/sde1/disk0-vm-kbuild-1G-10,if=none,id=hd0,media=disk,aio=native,cache=none -device scsi-hd,bus=scsi0.0,drive=hd0,scsi-id=1,lun=0 -drive file=/fs/sde1/disk1-vm-kbuild-1G-10,if=none,id=hd1,media=disk,aio=native,cache=none -device scsi-hd,bus=scsi0.0,drive=hd1,scsi-id=1,lun=1 -drive file=/fs/sde1/disk2-vm-kbuild-1G-10,if=none,id=hd2,media=disk,aio=native,cache=none -device scsi-hd,bus=scsi0.0,drive=hd2,scsi-id=1,lun=2 -drive file=/fs/sde1/disk3-vm-kbuild-1G-10,if=none,id=hd3,media=disk,aio=native,cache=none -device scsi-hd,bus=scsi0.0,drive=hd3,scsi-id=1,lun=3 -drive 
file=/fs/sde1/disk4-vm-kbuild-1G-10,if=none,id=hd4,media=disk,aio=native,cache=none -device scsi-hd,bus=scsi0.0,drive=hd4,scsi-id=1,lun=4 -pidfile /dev/shm/kboot/pid-vm-kbuild-1G-10 -serial file:/dev/shm/kboot/serial-vm-kbuild-1G-10 -daemonize -display none -monitor null
Thanks,
Kernel Test Robot
[lkp] [DO NOT MERGE] cf3d34a671: [drm:intel_guc_ucode_init [i915]] *ERROR* Failed to fetch GuC firmware from i915/skl_guc_ver6.bin (error -2)
by kernel test robot
FYI, we noticed the following commit:
https://github.com/0day-ci/linux Tvrtko-Ursulin/GuC-premature-LRC-unpin/20160415-195642
commit cf3d34a6714809910988aaebf2323c2d2cd60a02 ("DO NOT MERGE: drm/i915: Enable GuC submission")
on test machine: lkp-skl-d01: 8 threads Skylake with 8G memory
caused the following changes:
[ 234.579681] i915 0000:00:02.0: Failed to load DMC firmware [https://01.org/linuxgraphics/intel-linux-graphics-firmwares], disabling runtime power management.
[ 234.583097] vgaarb: device changed decodes: PCI:0000:00:02.0,olddecodes=io+mem,decodes=io+mem:owns=io+mem
[ 234.583700] i915 0000:00:02.0: Direct firmware load for i915/skl_guc_ver6.bin failed with error -2
[ 234.583725] [drm:intel_guc_ucode_init [i915]] *ERROR* Failed to fetch GuC firmware from i915/skl_guc_ver6.bin (error -2)
[ 234.594023] [drm:intel_guc_ucode_load [i915]] *ERROR* GuC firmware load failed, err -5
[ 234.594041] [drm:i915_gem_init_hw [i915]] *ERROR* Failed to initialize GuC, error -5
[ 234.594053] [drm:i915_gem_init [i915]] *ERROR* Failed to initialize GPU, declaring it wedged
[ 234.605392] ACPI: Video Device [GFX0] (multi-head: yes rom: no post: no)
[ 234.605859] acpi device:0f: registered as cooling_device13
[ 234.605925] input: Video Bus as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/LNXVIDEO:00/input/input10
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Thanks,
Xiaolong
[lkp] [x86/efi] 9d8f983fe7: BUG: sleeping function called from invalid context at kernel/locking/mutex.c:97
by kernel test robot
FYI, we noticed the following commit:
https://git.kernel.org/pub/scm/linux/kernel/git/mfleming/efi.git next
commit 9d8f983fe74ecbc411f4824e42e7901df8e7cd85 ("x86/efi: Force EFI reboot to process pending capsules")
on test machine: vm-lkp-wsx03-1G: 1 threads qemu-system-x86_64 -enable-kvm -cpu host with 1G memory
caused the following changes:
[ 18.069005] BUG: sleeping function called from invalid context at kernel/locking/mutex.c:97
[ 18.071639] in_atomic(): 0, irqs_disabled(): 1, pid: 7, name: rcu_sched
[ 18.073274] CPU: 0 PID: 7 Comm: rcu_sched Tainted: G D 4.6.0-rc4-00028-g9d8f983 #1
[ 18.075685] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 18.078135] ffff88003436f9a0 ffff88003436f9a0 ffffffff8142e53a ffff880034370000
[ 18.080700] 0000000000000061 ffff88003436f9b8 ffffffff810a1048 ffffffff81c8cdd0
[ 18.083325] ffff88003436f9e0 ffffffff810a10d9 ffffffff81f30be0 0000000000000000
[ 18.085940] Call Trace:
[ 18.086911] [<ffffffff8142e53a>] dump_stack+0x63/0x89
[ 18.088260] [<ffffffff810a1048>] ___might_sleep+0xd8/0x120
[ 18.089860] [<ffffffff810a10d9>] __might_sleep+0x49/0x80
[ 18.091272] [<ffffffff818f5110>] mutex_lock+0x20/0x50
[ 18.092636] [<ffffffff81771edd>] efi_capsule_pending+0x1d/0x60
[ 18.094272] [<ffffffff8104e749>] native_machine_emergency_restart+0x59/0x280
[ 18.095975] [<ffffffff8104e5d9>] machine_emergency_restart+0x19/0x20
[ 18.097685] [<ffffffff8109d4b8>] emergency_restart+0x18/0x20
[ 18.099303] [<ffffffff81172d6d>] panic+0x1ba/0x217
[ 18.100631] [<ffffffff81030a22>] oops_end+0xc2/0xd0
[ 18.102078] [<ffffffff810654a2>] no_context+0x112/0x380
[ 18.117867] [<ffffffff81065f40>] ? vmalloc_fault+0x340/0x340
[ 18.119315] [<ffffffff8106578c>] __bad_area_nosemaphore+0x7c/0x200
[ 18.120886] [<ffffffff81065915>] ? bad_area_nosemaphore+0x5/0x20
[ 18.122374] [<ffffffff81065924>] bad_area_nosemaphore+0x14/0x20
[ 18.123909] [<ffffffff81065fcb>] __do_page_fault+0x8b/0x4d0
[ 18.125327] [<ffffffff81065f45>] ? __do_page_fault+0x5/0x4d0
[ 18.126832] [<ffffffff810664d3>] trace_do_page_fault+0x43/0x140
[ 18.128385] [<ffffffff8105f6ba>] do_async_page_fault+0x1a/0xa0
[ 18.130434] [<ffffffff818f96f8>] async_page_fault+0x28/0x30
[ 18.131933] [<ffffffff818f61e8>] ? schedule_timeout+0x158/0x2d0
[ 18.133486] [<ffffffff818f9ca8>] ? ftrace_epilogue+0x2/0x2
[ 18.135512] [<ffffffff811475be>] ? ftrace_return_to_handler+0x8e/0x100
[ 18.137597] [<ffffffff818f3060>] ? __schedule+0x8b0/0x8b0
[ 18.139604] [<ffffffff818f9ebd>] return_to_handler+0x15/0x27
[ 18.141556] [<ffffffff818f9ea8>] ? ftrace_graph_caller+0xa8/0xa8
[ 18.143176] [<ffffffff810e3781>] rcu_gp_kthread+0x421/0x970
[ 18.145048] [<ffffffff810e7bc0>] ? trace_raw_output_tick_stop+0x80/0x80
[ 18.146682] [<ffffffff818f9ea8>] ftrace_graph_caller+0xa8/0xa8
[ 18.148193] [<ffffffff810e3360>] ? rcu_process_callbacks+0x610/0x610
[ 18.149766] [<ffffffff8109aef4>] kthread+0xd4/0xf0
[ 18.151185] [<ffffffff818f7682>] ret_from_fork+0x22/0x40
[ 18.152852] [<ffffffff8109ae20>] ? kthread_park+0x60/0x60
FYI, raw QEMU command line is:
qemu-system-x86_64 -enable-kvm -cpu host -kernel /pkg/linux/x86_64-rhel/gcc-4.9/9d8f983fe74ecbc411f4824e42e7901df8e7cd85/vmlinuz-4.6.0-rc4-00028-g9d8f983 -append 'root=/dev/ram0 user=lkp job=/lkp/scheduled/vm-lkp-wsx03-1G-10/bisect_kernel_selftests-defaults-debian-x86_64-2015-02-07.cgz-x86_64-rhel-9d8f983fe74ecbc411f4824e42e7901df8e7cd85-20160425-37341-1dgwxmg-0.yaml ARCH=x86_64 kconfig=x86_64-rhel branch=linux-devel/devel-spot-201604251010 commit=9d8f983fe74ecbc411f4824e42e7901df8e7cd85 BOOT_IMAGE=/pkg/linux/x86_64-rhel/gcc-4.9/9d8f983fe74ecbc411f4824e42e7901df8e7cd85/vmlinuz-4.6.0-rc4-00028-g9d8f983 max_uptime=3600 RESULT_ROOT=/result/kernel_selftests/defaults/vm-lkp-wsx03-1G/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/9d8f983fe74ecbc411f4824e42e7901df8e7cd85/0 LKP_SERVER=inn earlyprintk=ttyS0,115200 systemd.log_level=err debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal rw ip=::::vm-lkp-wsx03-1G-10::dhcp' -initrd /fs/sdc1/initrd-vm-lkp-wsx03-1G-10 -m 1024 -smp 1 -device e1000,netdev=net0 -netdev user,id=net0,hostfwd=tcp::23609-:22 -boot order=nc -no-reboot -watchdog i6300esb -rtc base=localtime -drive file=/fs/sdc1/disk0-vm-lkp-wsx03-1G-10,media=disk,if=virtio -drive file=/fs/sdc1/disk1-vm-lkp-wsx03-1G-10,media=disk,if=virtio -pidfile /dev/shm/kboot/pid-vm-lkp-wsx03-1G-10 -serial file:/dev/shm/kboot/serial-vm-lkp-wsx03-1G-10 -daemonize -display none -monitor null
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Thanks,
Xiaolong
[lkp] [Add kset to file_system_type] dfac59d60d: WARNING: CPU: 0 PID: 0 at fs/sysfs/dir.c:31 sysfs_warn_dup+0x64/0x80
by kernel test robot
FYI, we noticed the following commit:
https://github.com/goldwynr/linux kobjectify-fs
commit dfac59d60d602849663e045af372ce9222d7672e ("Add kset to file_system_type")
on test machine: vm-kbuild-1G: 2 threads qemu-system-x86_64 -enable-kvm -cpu Haswell,+smep,+smap with 1G memory
caused below changes:
[ 0.332120] Mount-cache hash table entries: 2048 (order: 2, 16384 bytes)
[ 0.334294] Mountpoint-cache hash table entries: 2048 (order: 2, 16384 bytes)
[ 0.336683] ------------[ cut here ]------------
[ 0.338551] WARNING: CPU: 0 PID: 0 at fs/sysfs/dir.c:31 sysfs_warn_dup+0x64/0x80
[ 0.341857] sysfs: cannot create duplicate filename '/fs/cgroup'
[ 0.343838] Modules linked in:
[ 0.345451] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.6.0-rc5-00001-gdfac59d #1
[ 0.348441] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 0.351800] 0000000000000000 ffffffff81e03d68 ffffffff8142e5aa ffffffff81e03db8
[ 0.355385] 0000000000000000 ffffffff81e03da8 ffffffff8107b9a1 0000001f00000073
[ 0.359553] ffff880035823000 ffffffff81cc71bf ffff880035a470f0 0000000000000000
[ 0.363055] Call Trace:
[ 0.364358] [<ffffffff8142e5aa>] dump_stack+0x63/0x89
[ 0.367373] [<ffffffff8107b9a1>] __warn+0xd1/0xf0
[ 0.369126] [<ffffffff8107ba0f>] warn_slowpath_fmt+0x4f/0x60
[ 0.371126] [<ffffffff8127a3b4>] sysfs_warn_dup+0x64/0x80
[ 0.373048] [<ffffffff8127a497>] sysfs_create_dir_ns+0x77/0x90
[ 0.374994] [<ffffffff814313a5>] kobject_add_internal+0xb5/0x340
[ 0.377034] [<ffffffff814311be>] ? kobject_set_name+0x3e/0x40
[ 0.379066] [<ffffffff814316f9>] kset_register+0x49/0x70
[ 0.380955] [<ffffffff8143178b>] kset_create_and_add+0x6b/0xa0
[ 0.382880] [<ffffffff81219f27>] register_filesystem+0x97/0xa0
[ 0.384803] [<ffffffff82004661>] cgroup_init+0x356/0x3c9
[ 0.386783] [<ffffffff81fdaf44>] start_kernel+0x3f1/0x436
[ 0.388622] [<ffffffff81fda120>] ? early_idt_handler_array+0x120/0x120
[ 0.390790] [<ffffffff81fda4ec>] x86_64_start_reservations+0x2a/0x2c
[ 0.392822] [<ffffffff81fda629>] x86_64_start_kernel+0x13b/0x14a
[ 0.394913] ---[ end trace bbb5bd06d9154d79 ]---
[ 0.396643] ------------[ cut here ]------------
FYI, raw QEMU command line is:
qemu-system-x86_64 -enable-kvm -cpu Haswell,+smep,+smap -kernel /pkg/linux/x86_64-rhel/gcc-4.9/dfac59d60d602849663e045af372ce9222d7672e/vmlinuz-4.6.0-rc5-00001-gdfac59d -append 'root=/dev/ram0 user=lkp job=/lkp/scheduled/vm-kbuild-1G-10/bisect_boot-1-debian-x86_64-2015-02-07.cgz-x86_64-rhel-dfac59d60d602849663e045af372ce9222d7672e-20160427-91533-idhx15-1.yaml ARCH=x86_64 kconfig=x86_64-rhel branch=linux-devel/devel-catchup-201604262209 commit=dfac59d60d602849663e045af372ce9222d7672e BOOT_IMAGE=/pkg/linux/x86_64-rhel/gcc-4.9/dfac59d60d602849663e045af372ce9222d7672e/vmlinuz-4.6.0-rc5-00001-gdfac59d max_uptime=600 RESULT_ROOT=/result/boot/1/vm-kbuild-1G/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/dfac59d60d602849663e045af372ce9222d7672e/0 LKP_SERVER=inn earlyprintk=ttyS0,115200 systemd.log_level=err debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal rw ip=::::vm-kbuild-1G-10::dhcp' -initrd /fs/sde1/initrd-vm-kbuild-1G-10 -m 1024 -smp 2 -device e1000,netdev=net0 -netdev user,id=net0,hostfwd=tcp::23009-:22 -boot order=nc -no-reboot -watchdog i6300esb -rtc base=localtime -device virtio-scsi-pci,id=scsi0 -drive file=/fs/sde1/disk0-vm-kbuild-1G-10,if=none,id=hd0,media=disk,aio=native,cache=none -device scsi-hd,bus=scsi0.0,drive=hd0,scsi-id=1,lun=0 -drive file=/fs/sde1/disk1-vm-kbuild-1G-10,if=none,id=hd1,media=disk,aio=native,cache=none -device scsi-hd,bus=scsi0.0,drive=hd1,scsi-id=1,lun=1 -drive file=/fs/sde1/disk2-vm-kbuild-1G-10,if=none,id=hd2,media=disk,aio=native,cache=none -device scsi-hd,bus=scsi0.0,drive=hd2,scsi-id=1,lun=2 -drive file=/fs/sde1/disk3-vm-kbuild-1G-10,if=none,id=hd3,media=disk,aio=native,cache=none -device scsi-hd,bus=scsi0.0,drive=hd3,scsi-id=1,lun=3 -drive file=/fs/sde1/disk4-vm-kbuild-1G-10,if=none,id=hd4,media=disk,aio=native,cache=none -device scsi-hd,bus=scsi0.0,drive=hd4,scsi-id=1,lun=4 -pidfile /dev/shm/kboot/pid-vm-kbuild-1G-10 -serial file:/dev/shm/kboot/serial-vm-kbuild-1G-10 -daemonize -display none -monitor null
Thanks,
Xiaolong
[lkp] [IDR/IDA] dbb5d8f2ad: BUG: unable to handle kernel paging request at 0000000082403e5c
by kernel test robot
FYI, we noticed the following commit:
git://git.infradead.org/users/willy/linux-dax.git idr-2016-04-25
commit dbb5d8f2adff4465b3ff4d9bae970e01f9eb7940 ("Reimplement IDR and IDA using the radix tree")
on test machine: vm-lkp-wsx03-quantal-x86_64: 2 threads qemu-system-x86_64 -enable-kvm -cpu Haswell,+smep,+smap with 360M memory
caused below changes:
+------------------------------------------+------------+------------+
| | 411ca2f674 | dbb5d8f2ad |
+------------------------------------------+------------+------------+
| boot_successes | 6 | 0 |
| boot_failures | 0 | 7 |
| BUG:unable_to_handle_kernel | 0 | 7 |
| Oops | 0 | 7 |
| RIP:ida_simple_get | 0 | 7 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 7 |
+------------------------------------------+------------+------------+
[ 0.144964]
[ 0.145224] Mount-cache hash table entries: 1024 (order: 1, 8192 bytes)
[ 0.146039] Mountpoint-cache hash table entries: 1024 (order: 1, 8192 bytes)
[ 0.146869] BUG: unable to handle kernel paging request at 0000000082403e5c
[ 0.147696] IP: [<ffffffff813f3e0f>] ida_simple_get+0x69/0xae
[ 0.148370] PGD 0
[ 0.148622] Oops: 0000 [#1] PREEMPT SMP
[ 0.149097] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.6.0-rc2-00066-gdbb5d8f #1
[ 0.149954] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 0.150887] task: ffffffff82412500 ti: ffffffff82400000 task.ti: ffffffff82400000
[ 0.151683] RIP: 0010:[<ffffffff813f3e0f>] [<ffffffff813f3e0f>] ida_simple_get+0x69/0xae
[ 0.152554] RSP: 0000:ffffffff82403e50 EFLAGS: 00010046
[ 0.153097] RAX: 0000000000000000 RBX: ffff8800000820d0 RCX: ffffffff82403e40
[ 0.154062] RDX: ffff8800000820d8 RSI: 0000000000000001 RDI: ffff8800000820d0
[ 0.154931] RBP: 0000000082403e88 R08: 0000000000000001 R09: 0000000000000037
[ 0.155814] R10: ffffffff82403e50 R11: 0000000000000000 R12: 0000000000000001
[ 0.156684] R13: 00000000024000c0 R14: 0000000080000000 R15: 0000000000000202
[ 0.157550] FS: 0000000000000000(0000) GS:ffff880013c00000(0000) knlGS:0000000000000000
[ 0.158599] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 0.159294] CR2: 0000000082403e5c CR3: 000000000240b000 CR4: 00000000000006b0
[ 0.160177] Stack:
[ 0.160443] 0000000000000000 ffffffff82403e88 ffff88000002f000 ffffffff8230ec72
[ 0.161418] ffff8800000820c0 0000000000000001 000000000000416d ffffffff82403ec0
[ 0.162443] ffffffff811ad167 ffff8800000820c0 fffffffffffffff4 0000000000000002
[ 0.163408] Call Trace:
[ 0.163718] [<ffffffff811ad167>] ? __kernfs_new_node+0x5b/0xb7
[ 0.164443] [<ffffffff811ae874>] ? kernfs_create_root+0x7c/0xec
[ 0.165174] [<ffffffff8277070a>] ? sysfs_init+0x13/0x51
[ 0.165894] [<ffffffff8276fbe4>] ? mnt_init+0xf6/0x1f8
[ 0.166552] [<ffffffff8276f934>] ? vfs_caches_init+0x5f/0x6b
[ 0.167256] [<ffffffff8274beff>] ? start_kernel+0x47c/0x4da
[ 0.167950] [<ffffffff8274b120>] ? early_idt_handler_array+0x120/0x120
[ 0.168756] [<ffffffff8274b2f6>] ? x86_64_start_reservations+0x2a/0x2c
[ 0.169563] [<ffffffff8274b3de>] ? x86_64_start_kernel+0xe6/0xf3
[ 0.170357] Code: e8 bf 4a 00 00 85 c0 74 55 48 c7 c7 20 b9 45 82 e8 a0 48 89 00 48 8d 55 d4 44 89 e6 48 89 df 49 89 c7 e8 04 4b 00 00 85 c0 75 17 <8b> 45 d4 41 39 c6 73 0f 89 c6 48 89 df e8 16 4c 00 00 b8 e4 ff
[ 0.173783] RIP [<ffffffff813f3e0f>] ida_simple_get+0x69/0xae
[ 0.174520] RSP <ffffffff82403e50>
[ 0.174946] CR2: 0000000082403e5c
[ 0.175361] ---[ end trace f4b42fa6fb26a58c ]---
[ 0.175931] Kernel panic - not syncing: Fatal exception
FYI, raw QEMU command line is:
qemu-system-x86_64 -enable-kvm -cpu Haswell,+smep,+smap -kernel /pkg/linux/x86_64-acpi-redef/gcc-5/dbb5d8f2adff4465b3ff4d9bae970e01f9eb7940/vmlinuz-4.6.0-rc2-00066-gdbb5d8f -append 'root=/dev/ram0 user=lkp job=/lkp/scheduled/vm-lkp-wsx03-quantal-x86_64-1/bisect_boot-1-quantal-core-x86_64.cgz-x86_64-acpi-redef-dbb5d8f2adff4465b3ff4d9bae970e01f9eb7940-20160426-86997-t5gvxe-0.yaml ARCH=x86_64 kconfig=x86_64-acpi-redef branch=linux-devel/devel-catchup-201604260608 commit=dbb5d8f2adff4465b3ff4d9bae970e01f9eb7940 BOOT_IMAGE=/pkg/linux/x86_64-acpi-redef/gcc-5/dbb5d8f2adff4465b3ff4d9bae970e01f9eb7940/vmlinuz-4.6.0-rc2-00066-gdbb5d8f max_uptime=600 RESULT_ROOT=/result/boot/1/vm-lkp-wsx03-quantal-x86_64/quantal-core-x86_64.cgz/x86_64-acpi-redef/gcc-5/dbb5d8f2adff4465b3ff4d9bae970e01f9eb7940/0 LKP_SERVER=inn earlyprintk=ttyS0,115200 systemd.log_level=err debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal rw ip=::::vm-lkp-wsx03-quantal-x86_64-1::dhcp drbd.minor_count=8' -initrd /fs/sdc1/initrd-vm-lkp-wsx03-quantal-x86_64-1 -m 360 -smp 2 -device e1000,netdev=net0 -netdev user,id=net0 -boot order=nc -no-reboot -watchdog i6300esb -rtc base=localtime -pidfile /dev/shm/kboot/pid-vm-lkp-wsx03-quantal-x86_64-1 -serial file:/dev/shm/kboot/serial-vm-lkp-wsx03-quantal-x86_64-1 -daemonize -display none -monitor null
Thanks,
Kernel Test Robot
[lkp] [of] dda0e3dc86: __of_overlay_create: of_build_overlay_info() failed for tree@/testcase-data/overlay19
by kernel test robot
FYI, we noticed the following commit:
https://github.com/pantoniou/linux-beagle-track-mainline.git bbb-overlays
commit dda0e3dc86735058b0bd8f88c00d978257072395 ("of: unittest: Unit-tests for target root overlays.")
on test machine: vm-vp-quantal-x86_64: 2 threads qemu-system-x86_64 -enable-kvm with 360M memory
caused below changes:
[ 8.327386] of_overlay_destroy: removal check failed for overlay #5
[ 8.327386] of_overlay_destroy: removal check failed for overlay #5
[ 8.332972] find_target_node_direct: target "/testcase-data/overlay-node/test-bus/test-unittest18" not under target_root "/testcase-data/overlay-node/test-bus/test-unittest19"
[ 8.332972] find_target_node_direct: target "/testcase-data/overlay-node/test-bus/test-unittest18" not under target_root "/testcase-data/overlay-node/test-bus/test-unittest19"
[ 8.335870] __of_overlay_create: of_build_overlay_info() failed for tree@/testcase-data/overlay19
[ 8.335870] __of_overlay_create: of_build_overlay_info() failed for tree@/testcase-data/overlay19
FYI, raw QEMU command line is:
qemu-system-x86_64 -enable-kvm -kernel /pkg/linux/x86_64-randconfig-r0-04260040/gcc-5/dda0e3dc86735058b0bd8f88c00d978257072395/vmlinuz-4.6.0-rc4-00040-gdda0e3d -append 'root=/dev/ram0 user=lkp job=/lkp/scheduled/vm-vp-quantal-x86_64-57/bisect_boot-1-quantal-core-x86_64.cgz-x86_64-randconfig-r0-04260040-dda0e3dc86735058b0bd8f88c00d978257072395-20160426-54281-1qcjv0e-0.yaml ARCH=x86_64 kconfig=x86_64-randconfig-r0-04260040 branch=linux-devel/devel-catchup-201604260232 commit=dda0e3dc86735058b0bd8f88c00d978257072395 BOOT_IMAGE=/pkg/linux/x86_64-randconfig-r0-04260040/gcc-5/dda0e3dc86735058b0bd8f88c00d978257072395/vmlinuz-4.6.0-rc4-00040-gdda0e3d max_uptime=600 RESULT_ROOT=/result/boot/1/vm-vp-quantal-x86_64/quantal-core-x86_64.cgz/x86_64-randconfig-r0-04260040/gcc-5/dda0e3dc86735058b0bd8f88c00d978257072395/0 LKP_SERVER=inn earlyprintk=ttyS0,115200 systemd.log_level=err debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal rw ip=::::vm-vp-quantal-x86_64-57::dhcp drbd.minor_count=8' -initrd /fs/sdh1/initrd-vm-vp-quantal-x86_64-57 -m 360 -smp 2 -device e1000,netdev=net0 -netdev user,id=net0 -boot order=nc -no-reboot -watchdog i6300esb -rtc base=localtime -pidfile /dev/shm/kboot/pid-vm-vp-quantal-x86_64-57 -serial file:/dev/shm/kboot/serial-vm-vp-quantal-x86_64-57 -daemonize -display none -monitor null
Thanks,
Xiaolong