
I'm all for the evolution of C, but this list...

1) has some downright idiotic things (exceptions, operator overloading)

2) has a few reasonable, but mostly inconsequential things (declaration inside if, case ranges)

3) is missing a few real improvements (closures, although it is not clear whether the "nested routines" can be returned)



Agree 100%. Improvements to C would be things like removing "undefined behavior", not adding more syntax sugar. If anything, C's grammar is already too bloated. (I'm looking at you, function pointer type declarations inside anonymized unions inside parameter definition lists.)


> Improvements to C would be things like removing "undefined behavior"

This nonsense again. I don't get this "undefined behavior" cliche. It seems it became fashionable for some people to parrot it like a mantra as a form of signaling. Undefined behavior just refers to something that is not covered by the international standard, and therefore doesn't exist and shouldn't be used, though an implementation may offer implementation-specific behavior.


When people talk about "removing undefined behavior", they usually mean requiring that compile-time-detectable undefined behaviors be converted into explicit errors.

For example, there are quite a few people who would like to see a C where you can't actually write this:

    // int x, y;
    if(++x < y) { ... }
...because, well, the behavior of integer overflow is undefined in C, so that code could technically do anything, even though it seems perfectly innocent, especially when coming from a checked language.

Of course, you can't do anything in the C standard to require that this code work as-is, because the C standard applies to architectures where mutually-exclusive things happen on integer overflow. But you can always just disallow it completely, and require that people use intrinsics that are explicit about what overflow behavior they expect (where that behavior compiles down to a plain instruction on target architectures that match it, and to a shim on those that don't. You know, like floating-point support, or atomics.)


> ...because, well, the behavior of integer overflow is undefined in C, so that code could technically do anything

No compiler is going to go out of its way to compile an increment into the machine's usual increment instruction and an additional overflow check that does whatever, just because it can. It's going to compile it into the machine's usual increment instruction and what happens on overflow is what happens naturally.

It's as absurd as claiming that even "x + y" can invoke undefined behaviour: while the standard allows it, any compiler that compiles such an addition to anything other than the machine's addition instruction (i.e. with implementation-defined effects), to say nothing of adding additional(!) checks to deliberately do something else, is clearly not benefiting anyone.

"In theory, there is no difference between theory and practice. In practice, there is." Pure fearmongering, IMHO.


> No compiler is going to go out of its way to compile an increment into the machine's usual increment instruction and an additional overflow check that does whatever, just because it can

What people are actually worried about is when the compiler starts removing - not adding - seemingly unrelated code in a hard-to-reason-about fashion. And compilers absolutely will go out of their way to do this in the name of optimization and performance. They do it because they got smart enough to prove that the "unrelated" code can't run without first technically invoking undefined behavior, at which point they can jump to the wild conclusion that it must never actually execute (or that they can remove the code even if it does, because it's legal for the compiler to do anything after undefined behavior is invoked - including not executing that code!)

Sometimes the removed code is important security checks, leading to CVEs, hotpatches, etc. - this is not theoretical, and is not remotely new at this point: https://www.grsecurity.net/~spender/exploits/cheddar_bay/exp...

It also makes reporting compiler bugs annoying, as you first have to definitively prove to yourself and the compiler guys that you've actually got a compiler bug, rather than a compiler "feature" of aggressive optimization within the letter of the C++ standard. It's only out of pure stubbornness that https://gcc.gnu.org/bugzilla/show_bug.cgi?id=84658 got reported upstream, I was assuming it was UB in our codebase most of the way down and thus INVALID as a compiler bug...

But perhaps CVEs and expected behavior being borderline indistinguishable from compiler bugs to most C and C++ programmers I know is just "fear mongering" as you say. IMNSHO, it's not \o/


It's not about the compiler doing "extra checks" to deliberately do something different. The real issue is with aggressive optimizations.

In order to get maximum performance, the compiler is allowed to assume that the programmer doesn't invoke undefined behavior. In other words, it can replace code with something that is equivalent in the presence of UB, but does something totally different in the absence of UB. See e.g. https://blog.regehr.org/archives/767 for some examples of how this can go wrong. (My favorite is the third one.)


Modern C compilers do some very tricky things with undefined behavior.

See https://blogs.msdn.microsoft.com/oldnewthing/20140627-00/?p=... for a very detailed example.

See https://blog.regehr.org/archives/1307 for some strict aliasing examples.


Compilers deciding to elide code paths that contain undefined behavior is weird, especially when they silently elide your checks for division by zero or overflow. It's not weird that actually dividing by zero can do anything; what's weird is that the mere possibility of a division by zero can allow the compiler to decide that (divisor==0) is false and drop the check.


> they usually mean requiring that compile-time-detectable undefined behaviors

I don't believe that's the case for plenty of reasons, such as:

- compilers already do that (yeah, it's one of those RTFM things. See, for example, GCC's undefined behavior sanitizer)

- the standard already specifies exactly what is left undefined, thus it's a compiler-related issue (see point above)

Let's face it: some people mindlessly parrot the "undefined behaviour" mantra just for show.


> compilers already do that (yeah, it's one of those RTFM things)

I believe 99% of what people care about the C standard doing, re: UB handling, is requiring compilers to make certain behaviours the default, rather than hidden behind different flags that C newbies who don't understand UB (who thus code most of the bad C!) won't ever set.


> - compilers already do that ( yeah, it's one of those RTFM things. See for example GCC's undefined behavior sanitizer)

well, no they don't. UBSan is at run-time because most UB is impossible to catch at compile-time.


Implementation defined behavior is not the same thing as undefined behavior.

Undefined is out of the scope of the language entirely. Using a non-existent index into an array, for example: while you might reasonably expect the program to just read past the end, there is no guarantee it will do so. Optimizing compilers in particular will assume such a thing cannot happen, and can assume a code branch that does something like this is impossible to reach and discard it entirely.

No one will "fix" such an optimization bug, because the optimization is valid for ASTs that may have been produced from conforming code - say, generated by macros, with the offending branches never actually called. There's nothing to fix.

You're telling it to do something impossible, and it's assuming it can't happen.


There's also behavior which is undefined for hardware reasons.

An example is what the C standard calls "trap representations": Bit patterns which fit into the space occupied by a specific type, but which will cause a hardware trap (exception, interrupt, what have you) if you actually store them in a variable of that type. The only type which cannot have trap representations is unsigned char. Basically, what it amounts to is this: C compilers don't compile to a runtime, they compile to raw machine code with, perhaps, a standard library. If you do something the hardware doesn't like when your program runs, well, the C compiler is long gone by that point and the C standard makes no guarantees.

More prosaically, storing to a location beyond the end of an array might not cause a segfault. It might corrupt some other array, it might cause a hardware crash, it might even corrupt the program's machine code. Because C is explicitly a language for embedded hardware, with no MMUs, no W^X protection, and no OSes, the C standard can say very little about such things.


You're mixing up "undefined behavior" and "implementation-defined" behavior. Implementation-defined behavior is fine. "Undefined behavior", as the term is used in the C spec, means that literally anything can happen. The compiler is allowed to assume that UB never happens, so if it does the program can produce random results.

See http://en.cppreference.com/w/cpp/language/ub


> You're mixing up "undefined behavior" and "implementation-defined" behavior. Implementation-defined behavior is fine. "Undefined behavior", as the term is used in the C spec, means that literally anything can happen.

Actually I didn't. My point was rather obvious: the whole point of the standards specifying UB is precisely to let implementations define the behavior themselves.


> the whole point of the standards specifying UB is precisely to let implementations define the behavior themselves.

This used to be the case. Signed integer overflow, for instance, is undefined because some CPUs go bananas when you try it. Other platforms performed 2's complement wraparound just fine, and we used to be able to rely on this.

No longer.

See, the standard doesn't say "implementation defined". It doesn't say "undefined on platforms that go bananas, implementation defined otherwise". It says "undefined" period.

Signed integer overflow is undefined on all platforms, even your modern x86-64 CPU. Compiler writers interpreted it as a licence to assume it never happens, to help optimisations. For instance:

  int x = whatever;
  x += small_increment;
  if (x < 0) { // check for overflow
      abort(); // security shut down
  }
  proceed(x);  // overflow didn't happen, we're safe!
Here's what the compiler thinks:

  int x = whatever;
  x += small_increment;
  if (x < 0) { // only true if signed overflow -> false
      abort(); // dead code
  }
  proceed(x);
Then the compiler simply deletes your security check:

  int x = whatever;
  x += small_increment;
  proceed(x);
Don't listen to Chandler Carruth, nasal demons are real. Some undefined behaviours can encrypt your whole hard drive, assuming they're exploitable by malicious inputs.


This is the sort of emotionally-charged fearmongering around UB that really makes any discussion pointless. That example is wrong. Signed integers can be negative. If a compiler cannot prove x >= 0, then it simply cannot remove that code.

Now, if you used

    unsigned int x = whatever;
    ...
    if(x < 0)
There would be an obvious case for removing that if.


A very simple test case demonstrates that GCC can remove tests in the presence of signed overflow, even in ways that change a program's behavior.

    $ cat undefined.c
    #include <limits.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main() {
        int x = INT_MAX;
        if (x+1 > x) {
            printf("%d > %d\n", x+1, x);
        } else {
            printf("overflow!\n");
        }
    }
    
    $ gcc --version
    gcc (Ubuntu 7.2.0-8ubuntu3.2) 7.2.0
    Copyright (C) 2017 Free Software Foundation, Inc.
    This is free software; see the source for copying conditions.  There is NO
    warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
    
    $ gcc undefined.c && ./a.out
    overflow!
    
    $ gcc -O3 undefined.c && ./a.out
    -2147483648 > 2147483647


Yes, that example is well-known but different; here, the compiler is assuming that x + 1 will always be greater than x, which is something else entirely from the parent's assertion that x + small_increment will always be positive.


The difference doesn't matter, you would know better if you weren't clinging so hard to your beliefs. Here's the "difference":

  $ cat undefined.c
  #include <limits.h>
  #include <stdio.h>
  #include <stdlib.h>

  int main() {
      int x = INT_MAX;
      if (x+1 < 0) {
          printf("%d < 0\n", x+1);
      } else {
          printf("overflow!\n");
      }
  }

  $ gcc --version
  gcc (Ubuntu 5.4.0-6ubuntu1~16.04.9) 5.4.0 20160609
  Copyright (C) 2015 Free Software Foundation, Inc.
  This is free software; see the source for copying conditions.  There is NO
  warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

  $ gcc undefined.c && ./a.out
  -2147483648 < 0

  $ gcc -O3 undefined.c && ./a.out
  overflow!
The security check is gone all the same.


Your "small_increment" needs to be at least large enough to turn the smallest negative int into a non-negative number, and that makes it unrepresentable as a signed int itself.

There are cases where the compiler removes misguided overflow checks, since "perform UB, then check whether it happened" doesn't actually work, but your example is not such a case.


> "perform UB, then check whether it happened" doesn't actually work

This raises an interesting point, because in many cases, assuming 2's complement and wraparound, checking for overflow after the fact, as opposed to preventing it from happening in the first place, is actually easier. (And it actually works if you use the `-fwrapv` flag.)

The right thing should be easier to do than the wrong thing. It's a shame this is not the case here.


I was of course assuming that `whatever` was positive, and the compiler knew it. See this sibling thread: https://news.ycombinator.com/item?id=16664546


> let implementations define the behavior themselves

That's literally the definition of implementation-defined behavior.

Undefined behavior really means undefined; in terms of the C language, there are no constraints on behavior. Sure, you might get a result one way on one implementation, but if you rely on that you're technically writing a dialect of C, and need to let the compiler know using flags.


Implementation defined behavior does the same thing for a given implementation. The compiler can produce totally different code every time you compile a program with UB. It can even affect code that is totally unrelated to the UB.


I'm not sure I buy that second part.

Legalistically, I guess it could (the behavior isn't defined, after all), but typically the optimizer makes some valid-only-if-the-code-is-UB-free deductions and things snowball from there....


That's not the point, though. In the case of undefined behavior, a compiler doesn't have to define the behavior or even act consistently (most evident by the behavior of the compiler's optimizer).

This is entirely different than implementation defined in that a conforming compiler has to document the behavior they implement and do it consistently.


I'll just refer you to a comment I wrote last week, which covers a specific case where undefined behavior, and how a compiler chose to handle it, caused a security problem in Cap'n Proto.[1] I think it's about as condensed as I've seen it explained, while linking to further information.

1: https://news.ycombinator.com/item?id=16596409


I've noticed the same thing. A strongly adversarial relationship between compiler writers and users, justified by religious adherence to The Holy Standard and a complete lack of understanding of how people actually want and expect the language to behave in practice.

> Undefined behavior just refers to something that is not covered by the international standard, and therefore doesn't exist nor should be used, but an implementation may offer implementation-specific behavior

Indeed. Even the standard itself agrees; to quote its definition of undefined behaviour:

"behavior, upon use of a nonportable or erroneous program construct or of erroneous data, for which this International Standard imposes no requirements. NOTE: Possible undefined behavior ranges from ignoring the situation completely with unpredictable results, to behaving during translation or program execution in a documented manner characteristic of the environment (with or without the issuance of a diagnostic message), to terminating a translation or execution (with the issuance of a diagnostic message)."

The fact that the standard "imposes no requirements" should not be taken as carte blanche to completely ignore the intent of the programmers and do entirely unreasonable things "just because you can", yet unfortunately quite a few in the compiler/programming language community think that way.

It's why I'm very encouraging of more programmers writing their own compilers and exploring new techniques, to move away from that suffocating and divisive culture. Compilers should be helping users, not acting aggressively against them just because it's allowed by one standards committee.


The thing is that you only see this compiler writer culture among C and C++ devs.

Other language communities that put correctness before performance at any cost, don't share this mentality, including the compiler writers.


> I don't get this "undefined behavior" cliche.

The problem is that you can do everything in portable C that you can do with undefined behavior in C, and often in a more straightforward fashion.[1] The compiler won't tell you that you are doing it wrong; there are many examples, tutorials, and books that encourage you to do the wrong thing. The undefined behavior will work until you switch to a different system. Why allow the wrong thing to continue to happen?

[1] A very good example of this is endianness:

https://news.ycombinator.com/item?id=16189110

https://commandcenter.blogspot.se/2012/04/byte-order-fallacy...


You are conflating undefined with implementation defined. If you use undefined behavior the compiler might simply delete your code since it's not defined.


You and a lot of people in this discussion seem confused the other way. The amount of things in C that are implementation-defined is relatively small and most has to do with character encodings and byte size (storage unit). The kind of behavior people are pointing to is definitely labeled as undefined, and there is a lot of it. Take a look at Harbison and Steele for example.


Or it's something undesirable and you parroting back the definition of UB doesn't add anything.


I think many people overlook that "undefined behavior" mainly limits portability. Invoking undefined behavior won't cause your program to do random things; it just might not do the same thing when you switch to a different compiler. This is still a big problem, but not as big as people make it out to be, in my opinion. I think undefined behavior problems in the standard could be fixed, but compilers would need special flags for backwards compatibility with programs that rely on it.


> Invoking undefined behavior won't cause your program to do random things

According to the spec, it literally can. The compiler is free to replace your entire program with unrelated functionality. It can even do a different thing each time it compiles the program.

There are implementation specific behaviors (# of bits in a char), which are different.


Undefined behavior is undefined even using the same version of the same compiler. If it does produce the same repeatable behavior, consider it luck.

Implementation-defined behavior limits portability in the way you describe, but UB's not the same thing. IB should behave the same way in the same implementation. UB doesn't have to.


> "undefined behavior" mainly limits portability

I had the impression that some things are UB because they can't specify a single behavior that would be efficient across all platforms C targets.


No, that's implementation-defined. Undefined means code that is logically incorrect, but too expensive for the compiler/runtime to check and reject. A simple example is an out-of-bounds array access. It's too expensive to demand that the compiler prevent it or guarantee to trigger some kind of signal at runtime. But if you are willing to pay the cost, you can use a language or compiler that is safer and promises to throw an exception or provides safe statically sized arrays.


> function pointer type declarations inside anonymized unions inside parameter definition lists

Could you please provide a code snippet of this kind? Hard for me to visualize otherwise. Thanks.


Sure, lets do it in pieces

    typedef int (*my_func_ptr_t)(const void*,const void*);

    typedef union { my_func_ptr_t func_ptr; int* data; } my_union_t;

    void my_func (my_union_t param);

    // Now exploding it

    void my_func (union { int (*func_ptr)(const void*,const void*); int* data; } param);
Of course this is a very basic example, but there is hope to make it into an IOCCC entry.


Can you explain why exceptions and operator overloading are "idiotic" things? Are you from the Go school of boilerplate-error-checking-code design, or something?


Exceptions work great in garbage collected languages. Reasoning about exceptional control flow w.r.t. manual memory management is a total nightmare.


> Reasoning about exceptional control flow w.r.t. manual memory management is a total nightmare.

Exceptions make manual memory management easier because a proper exception system has unwind-protect[1]. Exceptions are just movements up the stack - exceptions combine naturally with dynamic scoping for memory allocation (memory regions/pools). This kind of memory management was used in some Lisp systems in the 1980s, and made its way into C++ in the form of RAII. By extending the compiler you can add further memory management conveniences like smart pointers to this scheme.

Now if you want to talk about something that actually makes manual memory management a total nightmare, look at the OP's suggestion for adding closures to C.

[1] http://www.lispworks.com/documentation/HyperSpec/Body/s_unwi...


C does not have RAII-like memory management in any way. Exceptions work beautifully with memory management like that, but if it's not there, you can't just say it should work because memory management should work like that.

So basically you're saying, before adding exceptions, add RAII-like memory management, and then actually add exceptions. I like both features, but am not sure how you'd wedge RAII into C. Any ideas on that?


> C does not have RAII-like memory management in any way.

Yes it does, as language extension on gcc and clang.

It is called cleanup attribute.

https://gcc.gnu.org/onlinedocs/gcc/Common-Variable-Attribute...

https://clang.llvm.org/docs/LanguageExtensions.html#non-stan...


That's not C, _the language_, those are compiler features.


I explicitly mentioned they are extensions.


Thanks! I learned something. Didn't know about that.


> C does not have RAII-like memory management in any way.

C does not have memory management in any way, period. The C standard library does. The way you get to something with dynamic scoping like RAII in C is to use a different library for managing memory. For example, Thinlisp[1] and Ravenbrook's Memory Pool System[2] both provide dynamically-scoped region/pool allocation schemes.

[1] https://github.com/vsedach/Thinlisp-1.1

[2] https://www.ravenbrook.com/project/mps/


[On the Cforall team] For what it's worth, one of the features Cforall adds to C is RAII.

The exception implementation isn't done yet, but it's waiting on (limited) run-time type information, it already respects RAII.


I have often wanted c99 with destructors for RAII purposes.


you do not need destructors if you put your stuff on the stack


just putting stuff on the stack in C won't magically call `fclose` or `pthread_mutex_unlock`, unlike destructors


I might have missed this.. but how is Cforall implemented?

A new GCC or LLVM frontend, or is it a transpiles-to-C implementation à la Nim or Vala?


Transpiles-to-(GNU-)C -- it was first written before LLVM, if we were starting the project today it would likely be a Clang fork.


Clang harks back to 2007 and LLVM 2003.. is this a research project that was recently taken back up?

I was curious about the implementation because I've had rough experiences with Vala and Nim's approach. Unlike with "transpiles-to-js" languages, transpiling to C has some tooling gaps (debugging being the big one). I admittedly don't have a ton of experience with either language but I couldn't find a plugin that gave me a step-through debugger for something like CLion or VS Code. You can debug the C output directly but this will turn off newcomers and assumes the C output is clean.


The initial implementation was finished in '03, and we revived the project somewhere around '15, so your guess about a research project that was recently taken back up is correct.

We intend to write a "proper compiler" at some point (probably either a Clang fork or a Cforall front-end on LLVM), but it hasn't been a priority for our limited engineering staff yet. I think we are getting a summer student to work on our debugging story (at least in GDB -- setting it up so it knows how to talk to our threading runtime and demangle our names), and improving our debugging capabilities has been a major focus of our pre-beta-release push.


What I would like is more strict compile-time checks. Most C pros have to rely on external tooling for that.


It's maybe not quite what you're looking for, but Cforall's polymorphic functions can eliminate nearly all the unsafety of void-pointer-based polymorphism at little-to-no extra runtime cost (in fact, microbenchmarks in our as-yet-unpublished paper show a speedup over void-pointer-based C in most cases due to more efficient generic type layout). As an example:

    forall(dtype T | sized(T))
    T* malloc() {  // in our stdlib
        return (T*)malloc(sizeof(T)); // calls libc malloc
    }

    int* i = malloc(); // infers T from return type


Excuse me for my lamerism, but can you tell me what is a polymorphic function?

My idea was that it is better to do as many compile-time checks as possible before you introduce run-time checks. Does that void pointer protection run faster than code that was checked at compile time? How?


A polymorphic function is one that can operate on different types[1]. You would maybe be familiar with them as template functions in C++, though where C++ compiles different versions of the template functions based on the parameters, we pass extra implicit parameters. The example above translates to something like the following in pure C:

    void* malloc_T(size_t sizeof_T, size_t alignof_T) {
        return malloc(sizeof_T);
    }

    int* i = (int*)malloc_T(sizeof(int), alignof(int));
In this case, since the compiler verifies that int is actually a type with known size (fulfilling `sized(T)`), it can generate all the casts and size parameters above, knowing they're correct.

[1] To anyone inclined to bash my definition of polymorphism, I'm mostly talking about parametric polymorphism here, though Cforall also supports ad-hoc polymorphism (name-overloading). The phrasing I used accounts for both, and I simplified it for pedagogical reasons.


Good point.


Sum types are the new trend.


Exceptions, because in embedded contexts they may not always be a good idea (and C targets such contexts). Overloading, because it is too easy to abuse, and as such it gets abused a lot by those who do not know better. The rest of us are then stuck decoding what the hell "operator +" means when applied to a "serial port driver" object.


> 3) is missing a few real improvements (closures, although it is not clear whether the "nested routines" can be returned)

Ah, I wish Blocks[0] would have made to into the C language as a standard†... Although you can use them with clang already:

    $ clang -fblocks blocks-test.c # Mac OS X
    $ clang -fblocks blocks-test.c -lBlocksRuntime # Linux
Since closures are a poor man's objects, I had some fun with them to fake object-orientedness[1].

† or at least that the copyright dispute between Apple and the FSF for integration into GCC would have been resolved (copyright transferred to the FSF being required in spite of a compatible license).

[0]: https://en.wikipedia.org/wiki/Blocks_%28C_language_extension...

[1]: https://github.com/lloeki/cblocks-clobj/blob/master/main.c#L...


Constructs like closures come at a cost. Function call abstraction and loss of locality mean hardware cannot easily prefetch: instruction cache misses, data cache misses, memory copying; basically, a lot of the slowness you see in dynamic languages. The point of C is to map as closely to hardware as possible, so unless these constructs are free, we're better off without them and sticking to what CPUs can actually run at full speed.


Closures are logical abstractions and cost nothing, since they are logical. Naive runtime implementations of closures can of course be a bit slower than native functions, but so can be everything.


Closures cost a lot if we are talking about real closures that capture variables from the scope where they are defined, because you need to save that information somewhere, so you need to allocate an object, with all the complexity associated.

And it can easily get very tricky in a language like C where you don't have garbage collection and you do have manual memory management: it's easy to capture things in a closure and then deallocate them. Imagine if a closure captures a struct or an array that is allocated on the stack of a function, for example.

I think we don't need closures in C. The only thing we would need is a form of syntactic sugar for anonymous functions, which cannot capture anything, of course. That would cover most of the things people use closures for, without any performance problems or added runtime complexity.


> so you need to alloc an object

Not always! Rust and C++ closures don't need to allocate in every case. I can speak more definitively about Rust's, but as long as you aren't trying to move them around in certain ways, there's no allocation, even if you close over something.

Consider this sum function, which also adds in an extra factor on each summation:

  pub fn sum(nums: &[i32]) -> i32 {
      let factor = 5;

      nums.iter().fold(0, |a, b| a + b + factor)
  }
The closure here closes over factor. There's zero allocations being done here.

If you want to return a closure, you may need to allocate. Rust will let you know, and the cost will be explicit (with Box). That's where my sibling's comment comes into play.


If a closure cannot be optimized out, i.e. the scope of the closure outlives the scope of the function it captures a variable from, then this closure is equivalent to a heap-allocated struct, which cannot be allocated on the stack either if it outlives its scope. So the cost is still the same.


Couldn't agree more, and I'll add another:

The suggested syntax is ridiculous. What is this punctuation soup?

   void ?{}( S & s, int asize ) with( s ) { // constructor operator
   void ^?{}( S & s ) with( s ) {          // destructor operator
   ^x{};  ^y{};                              // explicit calls to de-initialize



