Those things aren’t up to the language; they’re up to the hardware. C is a portable language that runs on many different platforms. Some platforms have protected memory and trap on out-of-bounds memory access. Other platforms have a single, flat address space where an out-of-bounds access is not an error: it just reads whatever is there, since your program has full access to all memory.
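A minimal sketch of that difference, purely as an illustration (the exact outcome depends entirely on the platform, and the access is undefined behaviour either way):

```c
#include <stdio.h>

int main(void) {
    int a[4] = {1, 2, 3, 4};
    /* Undefined behaviour: reads far past the end of the array.
       On a protected-memory platform this will likely trap (e.g. a
       segfault); on a flat, unprotected address space it simply
       returns whatever bytes happen to live at that address. */
    printf("%d\n", a[1000000]);
    return 0;
}
```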
The same goes for integer overflow. Some platforms use one’s-complement signed integers, others use two’s complement, so signed overflow would simply give different answers on different platforms. The standards committee long ago decided that there’s no sensible answer that covers all cases, so they declared it undefined behaviour, which lets compilers assume it never happens in practice and make lots of optimizations.
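Here’s a small example of the kind of optimization that assumption enables; the exact result depends on the compiler and optimization level:

```c
#include <limits.h>
#include <stdio.h>

/* Because signed overflow is undefined, a compiler is allowed to assume
   that x + 1 > x always holds for a signed int and fold this function
   to "return 1", with no add and no comparison at runtime. */
int plus_one_is_bigger(int x) {
    return x + 1 > x;
}

int main(void) {
    /* With optimizations on, many compilers print 1 here, even though
       INT_MAX + 1 would wrap to INT_MIN on two's-complement hardware. */
    printf("%d\n", plus_one_is_bigger(INT_MAX));
    return 0;
}
```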
Forcing signed overflow to have a defined behaviour means forcing every single signed arithmetic operation through that defined path (wrapping, trapping, or whatever the standard picks), removing the compiler’s ability to combine, reorder, or elide operations. That makes a lot of optimizations impossible.
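As a rough sketch of what a checked path looks like in practice, here’s the GCC/Clang builtin `__builtin_add_overflow`; the -1 error convention is just for illustration, not anyone’s standard:

```c
#include <limits.h>
#include <stdio.h>

/* Every addition becomes an add plus an overflow check plus a branch,
   and the compiler can no longer freely combine or reorder the
   surrounding arithmetic. */
int checked_add(int a, int b, int *out) {
    if (__builtin_add_overflow(a, b, out)) {
        return -1; /* overflow detected */
    }
    return 0;
}

int main(void) {
    int r;
    printf("%d\n", checked_add(INT_MAX, 1, &r)); /* prints -1 */
    printf("%d\n", checked_add(2, 3, &r));       /* prints 0, r == 5 */
    return 0;
}
```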
Most old computer architectures had a much more complete set of hardware exceptions, including cases like integer overflow or out-of-bounds access.
In modern superscalar pipelined CPUs, implementing all the desirable hardware exceptions without reducing performance remains possible (through speculative execution), but it is more expensive than in simple CPUs.
Because of that, hardware designers have taken advantage of the popularity of languages like C and C++ (and of almost all modern programming languages), which no longer specify the behavior of various errors: they omit the hardware needed to detect those errors in order to reduce CPU cost, and justify the omission by pointing to the existing language standards.
The correct way to solve this would have been for all programming language standards to include well-defined, uniform behaviors for all erroneous conditions. That would have forced CPU designers to provide efficient means of detecting such conditions, just as they are forced to implement the IEEE standard for floating-point arithmetic despite their desire to provide unreliable arithmetic, which is cheaper and could win benchmarks by cheating.
CPU designers don't like having their hand forced like that. If you create a new standard that forces them to add extra hardware to their designs, they'll skip your standard and target the older one (which has far more software market share anyway). They will absolutely bend over backwards to save a few cycles here and a few transistors there, just so they can cram in an extra feature or claim a better score on some microbenchmark. They do not care at all about making life easier for low-level programmers, hardware testers, or compiler writers.
> In modern superscalar pipelined CPUs, implementing all the desirable hardware exceptions without reducing performance remains possible (through speculative execution), but it is more expensive than in simple CPUs.
Yeah, and that's how you get security vulnerabilities! Spectre and Meltdown both came straight out of speculative execution.