[mythtv] Null pointer dereference warnings when compiling with -O3
mythtv at love2code.net
Thu Sep 24 15:34:34 UTC 2020
On Wed, 2020-09-23 at 19:58 +0200, Klaas de Waal wrote:
> I've recently started to build with "--extra-cppflags=-O3", to
> investigate if incidental loss of transport stream packets can be
> related to runtime performance.
You could also try "--toolchain=hardened". That will get you -O2
optimization and a couple of other things.
> To my surprise I do then get a lot of "null pointer dereference" and
> "possible null pointer dereference" compilation warnings. This is
> with Fedora-32 and gcc 10.2.1.
> I have the impression that these warnings are just wrong.
> This is for the following reasons:
> - The code that is flagged runs already for years and years. If there
> was a null pointer dereference I think that somebody would have
> noticed this. The word "segfault" comes to the mind.
You are misunderstanding what the compiler is saying. It is telling you
that after inlining and optimization, based on what it can infer from
the code, there are definitely code paths that will lead to a segfault.
Here's a scenario that may help you understand why the compiler is
complaining:
Function A operates on pointer "foo" and has a priori knowledge that
"foo" will never be set to nullptr, so it can dereference the pointer
without testing its validity. Function A calls function B, and function
B (being called from many places) performs a validity test of the
passed in "foo" before using it. This code will compile cleanly when
there is no optimization.
When compiled with optimization, the above code will spew null pointer
dereference warnings about function A. This occurs because the compiler
is allowed to completely embed the source code of function B into
function A to eliminate overhead of calling an extra function. When
processing this new combined function A', the compiler notices the
validity test for the variable "foo" (from the former B), and it
scribbles a note for itself that says "the developer must know that
"foo" can be null because she wrote code that performs a validity
check". When the compiler finds a later dereference of "foo" (from the
former A) and can't find a validity check prior to that dereference, it
prints a warning message.
As a quick rule of thumb: In any given function, if one dereference of
a pointer is protected then they must all be protected. This rule of
thumb explains why the unoptimized two-function code compiles cleanly
(each function is self-consistent in whether or not it checks the
pointer), and why the optimized single combined function doesn't
(it is inconsistent in whether or not it checks the pointer).
If there was a way to embed into the source code the a priori knowledge
that the argument to function A will never be null, then the compiler
shouldn't complain at all about the optimized code. Unfortunately,
there's no way to label that function parameter in standard C++. Adding
an actual null pointer test at the top of function A would codify that
knowledge in a way that the compiler understands.
> - There are several warnings about class DTVChannelInfo declared in
> file dtvconfparser.h. The interesting bit is that when I change the
> order of the class variables, e.g. move "QString m_name" two lines
> up, that some of the "null pointer dereference" warnings disappear.
> This is not logical; the order of the class members should not
> determine if code is valid or not. As far as I know of course.
I agree, that's weird behavior. But then again, this is the optimizer
we're talking about. Everything about the optimizer seems to be a maze
of twisty little passages, all alike.
> - When compiled with "-O3 -fno-inline" then all "null pointer
> dereference" warnings but one do disappear. This is again not logical
> to me. Possibly only when "-fno-inline" is given the "null pointer
> dereference" warnings are correct.
Disabling inlines is disabling one of the major optimizations the
compiler can make.
When you inline one (or more) function(s) into another, you are now
compiling a new function that is potentially as large as the sum of
the old function and every function it calls. The compiler's metadata
for this new function will be different: the decisions about which
code paths require a branch to be taken and which don't could change,
the choices of which variable goes into which register could change,
and so on.
> Considering this I do not think it is a good idea to rewrite the code
> to satisfy the compiler.
I disagree. It's very likely a big rewrite isn't necessary, but all
these warnings should be investigated to see exactly what is
happening and why the compiler is complaining. All the example
above would need is a validity check of "foo" at the top of function A,
and the null dereference warnings would go away. It's very likely that
the solution to your warnings will be a similar fix.
> One way to deal with this is to remove the "-Wnull-dereference"
> option from the standard set of compiler flags.
To quote from the GCC documentation:
Warn if the compiler detects paths that trigger erroneous or
undefined behavior due to dereferencing a null pointer.
Why in the world would you ever want to remove that check? Compiler
and other tool developers have spent a lot of time implementing these
kinds of checks to help other developers improve code robustness,
speed, readability, etc. We should be taking advantage of that effort
wherever we can.
> What I want to achieve is to build fully optimized without getting
> warnings, so that if there is a warning it is something that needs to
> be looked at.
I absolutely agree with this statement! However, I believe that
disabling warnings instead of fixing them is the wrong approach.