On 9/3/19 9:25 PM, Josef Bacik wrote:
> On Tue, Sep 03, 2019 at 04:06:33PM +0800, kernel test robot wrote:
>> FYI, we noticed the following commit (built with gcc-7):
>> commit: 3ae92b3782182d282a92573abe95c96d34ca6e73 ("btrfs: change the minimum global reserve size")
>> https://kernel.googlesource.com/pub/scm/linux/kernel/git/next/linux-next.git master
>> in testcase: xfstests
>> with following parameters:
>> disk: 4HDD
>> fs: btrfs
>> test: generic-group13
>> test-description: xfstests is a regression test suite for xfs and other file systems.
>> test-url: git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git
>> on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 4G
>> caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
> It would help if you could capture generic/269.full, but this is likely a
> problem with fsck that I fixed a few weeks ago where we're seeing nbytes of an
> inode is wrong, but there's an orphan item so it doesn't matter. This patch
> just made it more likely for us to have a still being iput'ed inode after a
> transaction commit. Thanks,
I've attached the generic/269.full file for your reference.
Hello 0-day devs,
[Hope I figured out the correct email addresses]
I have some questions about the state of the 0-day project, to better
understand the state of upstream kernel testing.
1. What kernel configs does the 0-day bot use? In particular, I am
interested in various runtime debugging features. Is there a list of
all the configs? I found one in a recent "[bpf] 9fe4f05d33:
kernel_selftests.bpf.test_verifier.fail" report, but it does not
include CONFIG_KASAN, though I am sure 0-day uses KASAN. So is that a
different config? Or was the config "minimized" to exclude KASAN because
it was not relevant to that failure? I am interested in the following
debug features: KASAN, LOCKDEP, KMEMLEAK, FAULT_INJECTION,
DEBUG_OBJECTS, DEBUG_VM. Are these used? Are there any other notable
debug features in use?
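For concreteness, this is the kind of Kconfig fragment I have in mind (option names are the standard mainline ones; whether 0-day actually sets them is exactly my question):

```
# Debug features I am asking about (mainline Kconfig option names):
CONFIG_KASAN=y            # Kernel Address Sanitizer
CONFIG_PROVE_LOCKING=y    # enables LOCKDEP lock-order/deadlock checking
CONFIG_DEBUG_KMEMLEAK=y   # kernel memory leak detector
CONFIG_FAULT_INJECTION=y  # fault-injection framework
CONFIG_DEBUG_OBJECTS=y    # object lifetime debugging
CONFIG_DEBUG_VM=y         # extra VM sanity checks
```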
2. What tests are being run? Is there a list somewhere?
I know that 0-day now runs a pretty extensive set of tests, but is
there any quantitative characterization? Are you done onboarding test
suites, or do you want to onboard more? How would you estimate your
progress on test onboarding: 10%, 50%, 90%?
3. On a related note, do you know the test code coverage? Are there
4. As far as I understand, there is no automated reporting and a human
is involved in sending each report. Is that correct? What is the
reason? Do you want to double-check? Is it not automated? Something else?
5. Do you intercept all incoming upstream patches, or do you know that
some are missing? If you don't intercept all of them, what are the main
missed sources? I know some people send pull requests to Linus from their
GitHub trees, and these may skip most of the common process.
6. Does it ever happen that 0-day fails to parse/apply a patch? I mean
things like parsing problems, where 0-day just can't make sense of the
email text, or cases where you can't figure out the base tree/branch.
7. As far as I understand, 0-day has some heuristics to figure out the
base git repo/branch. How frequently do they fail? How much tuning and
maintenance do they require?
8. What are your major pain points? Where is the time going? What are the major TODO items?
9. How many people are working on 0-day? Or, since the project started
a long time ago, a more relevant question is probably: what is your
estimate of the engineer-years spent on 0-day? Are there any major
areas where the human time goes?
Thanks in advance