lua-l archive



My opinion is worthwhile, because you spoke about GCC and I assumed you used Linux for it.

(There are very few projects that use GCC on Windows, given that MSVC is now free to use (even if its licence is not free/libre and its source is not open) and well supported, and there are many other compilers on Windows. Most projects for Windows use the Microsoft or Intel compilers: the Intel compilers can be integrated as a backend in the Dev Studio, which itself is now also free, and they are among the best optimizing compilers, but their licence is quite costly.)

GCC on Windows is quite badly supported: it is built itself through an informal process, as with MinGW, which has a lot of issues. And now that Windows 10 supports a native Linux environment (from Ubuntu), I think that when you speak about GCC on Windows, you actually use GCC on Ubuntu for Windows (which Microsoft initially called "Bash for Windows", though its name changes regularly). Microsoft openly admits that its development is shared between Microsoft and Ubuntu, which have cooperated on adjustments to the Windows kernel to better support the Linux APIs and to improve security and compatibility, including in core components like the console host, multithreading and scheduling, memory models, event queues, and filesystem support. This has even led to patches being developed for Linux itself, to debug some of its components and reduce the overhead of hypervisors like Hyper-V and others, which now also better support Linux and Windows VMs running concurrently, with these VMs also easier to migrate between hypervisors based on Linux, Windows, or other systems.

With virtualization of OSes in rapid progress now, the underlying OS will no longer matter (except for performance or security goals). But today, security is a major need because of the severe impact of bugs (performance is no longer a problem, and even storage space or memory is less critical: we have tons of them at a cheap price). All developers now want compilers that help them secure their code. Many of them have abandoned C and prefer working with much more managed or scripting languages (Java, Python, JavaScript, even PHP) because these offer wide advantages, and are much easier to deploy and scale on many more platforms, rather than being "optimized" for a single static architecture. JIT compilers, or install-time compilers, now frequently offer better performance at run time on the final target host than statically compiled C code, which may be perfect on a given machine but not optimized at all on another: this explains the success of Java, C#, and JavaScript. The other reason is mobile development, with Java-based VMs on Android and iOS.

And there is now a valid use of C compilers that no longer target native machine code, but a virtual "bytecode" machine: the program is recompiled to native code on the final target host at deployment time, or by a JIT compiler, and the final performance is good everywhere.

On Sun, 12 May 2019 at 19:33, Andrew Gierth <andrew@tao11.riddles.org.uk> wrote:
>>>>> "Philippe" == Philippe Verdy <verdy_p@wanadoo.fr> writes:

 >> Possibly unfortunately, this is not a bug. The program invokes
 >> undefined behavior when it does n=-n on the value shown; the
 >> compiler is entitled to produce any output at all as a result.
 >>
 >> In this case it's fairly obvious what happened: after seeing the line
 >>
 >> if (n<0) n=-n;
 >>
 >> the compiler is entitled to assume that n>=0 is true regardless of
 >> the prior value of n, since only undefined behavior could cause that
 >> not to be the case and the compiler is allowed to assume that
 >> undefined behavior never occurs. So in the second fprintf, the
 >> compiler can optimize (n>=0) to a constant 1, and (n<0) to a
 >> constant 0, while still computing the remaining tests.

 Philippe> Such compiler assumption is wrong

The language standards disagree. Now you might choose to disagree with
those standards, but they are what they are.

 Philippe> But with your code I do not get the same result as you,

What exactly did you compile and how, and with what?

I don't keep a large supply of compilers on hand, but putting the code
on godbolt and looking at the output for various compiler versions, it's
clear that the optimization I described above is in effect for gcc
versions back to at least 4.9, and clang from 7.0 onwards (notably,
clang 6.0 does _not_ do it). See for example line 46 of the assembler
output of https://godbolt.org/z/vE8qCi where the "pushq $1" shows that
a constant 1 was pushed for the result of the >= test.

It _looks_ like neither icc nor msvc does this optimization (but I'm
not sure I passed sufficient optimization flags; I don't use either of
those myself).

 Philippe> Finally, not just the compiler version is important here: you
 Philippe> may have used some advanced (but experimental) optimizing
 Philippe> options on its command line

For both clang and gcc a simple -O is sufficient to show the issue.
Obviously if optimization is entirely disabled, the problem does not
occur.

 Philippe> My opinion is that your GCC compiler version has a bug or you
 Philippe> used some unsupported options or non-standard/untested patch,
 Philippe> or your Linux distribution is not reliable and delivers you
 Philippe> very badly compiled products: change your Linux update source
 Philippe> or switch to a reliable distribution.

Since I'm not even using linux, and the issue is easily demonstrated
with compiler comparison tools, your "opinion" here is based on false
assumptions and therefore worthless.

--
Andrew.