It leaves a bad taste in my mouth when an argument in that area starts using fake math to drive a point home. Top-tier engineers spend six months to get a 20% improvement that could be achieved by spending 20% more on hardware?
Where IS all this coming from? Why is it acceptable to pull a few random numbers out of our ass and conclude that it's always best to buy more hardware?
I know Google literally invented SPDY and QUIC, which became HTTP/2 and HTTP/3, to shave a few % off their hardware costs. And they had to not only implement this in Chrome and Google.com but make it a worldwide standard. So surely they must be the idiots putting top-tier engineers on such optimization efforts for 6 months (or more), instead of spending more on hardware? I don't know.
20% more hardware for Google isn't the same as for your local IT biz rocking a small data center. If Google can save 1% on hardware costs, they can feed an army of devs for 6 months.
I wanna see a language where the type system and the actual language are the same language.
I'm not talking about a self-hosted compiler, which TS already is. I mean that what runs statically and what is compiled for runtime should be the same language, used in two contexts.
C++ is moving in that direction with constexpr expansion, but it also has to carry the legacy of macros, its existing type system, templates, and so on.
This sounds to me like dependently-typed languages might fit the bill. Look at Agda (more fancy-features focused) or Idris (more real-world programming focused).
Their idea is that you can lift value-level computations to the type level one-to-one. This allows you to do tricks with them that are normally reserved for type-level things: things like equality proofs that are verified at compile time. In fact, the main reason dependently-typed languages are developed is to exploit the Curry-Howard correspondence, which says that every first-order logic statement corresponds to a _type_ in a dependently-typed language, and that the validity of the statement is equivalent to the existence of an inhabitant of that corresponding type (= a value that has that type). This allows you to prove arbitrary properties about the functions in your program without ever leaving your programming language!
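To make this concrete, here is a minimal sketch in Lean 4 syntax (the same ideas apply in Agda and Idris); the `Vec` type and the theorem name are illustrative, not from any particular library:

```lean
-- A proposition is a type; a proof is a value inhabiting it (Curry-Howard).
-- The equality below is verified entirely at compile time.
theorem two_plus_two : 2 + 2 = 4 := rfl

-- A value-level number lifted to the type level: vectors indexed by length.
-- The type checker itself guarantees that `cons` grows the length by one.
inductive Vec (α : Type) : Nat → Type where
  | nil  : Vec α 0
  | cons : α → Vec α n → Vec α (n + 1)
```

A term of type `Vec α 3` can only ever hold exactly three elements; the "length" never exists as a runtime check, only as a compile-time fact.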
Maybe not exactly what you were looking for, but it technically fits your bill :)
Curry-Howard isn’t that specific, nor do dependent types correspond to first-order logic. Curry-Howard is a broad correspondence between logics and type theories. System F, which is weaker than dependent type theory, corresponds to second-order propositional logic. First-order logic doesn’t actually correspond to a popular type system, though the correspondence of course exists.
Dependently-typed languages often work this way! See for instance "Inductively Defined Propositions" in Coq. (remembering, of course, that propositions are types [2], which proof assistants take quite literally)
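For flavor, a tiny example of an inductively defined proposition; this sketch uses Lean 4 syntax rather than Coq's, but the idea is the same:

```lean
-- `Even n` is a proposition defined inductively: a type whose inhabitants
-- are exactly the derivations showing that n is even.
inductive Even : Nat → Prop where
  | zero : Even 0
  | step : Even n → Even (n + 2)

-- A proof is literally a value of that type, built from the constructors.
theorem four_even : Even 4 := Even.step (Even.step Even.zero)
```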
Totally agree. I think that something with some sort of "partial" evaluation, basically merging values and types, would be so powerful and would catch many interesting bugs.
Instead of having to do type-level programming, how about relying on the fact that the type checker is also running on a computing machine? The fact that we don't even have super simple stuff like first-class union types (let alone nice type manipulation logic) in most languages is really painful.
An aside: I think one way Typescript gets away with this (beyond soundness stuff), is that it doesn't have to actually build some high performance evaluation model for its type system. It doesn't care, it's here to check the validity of your code, not to run it! It's a very powerful concept that opened up so much without having to futz about with like... how to efficiently represent all these unions.
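A minimal sketch of what I mean (the `Status` union and all the names here are made up for illustration): the checker evaluates the type-level computation, while the emitted JavaScript only ever sees plain strings:

```typescript
// A union the checker reasons about entirely at compile time.
type Status = "draft" | "published" | "archived";

// Type-level computation: a mapped type derived from the union.
// The checker "evaluates" this; none of it exists at runtime.
type StatusFlags = { [S in Status]: boolean };

const flags: StatusFlags = { draft: true, published: false, archived: false };

// Exhaustiveness: adding a member to Status makes this fail to compile,
// because the function would no longer return a string on every path.
function describe(s: Status): string {
  switch (s) {
    case "draft": return "not yet visible";
    case "published": return "live";
    case "archived": return "hidden";
  }
}
```

The checker never needs to represent `Status` efficiently or run `describe` at all; it only has to decide validity once, per compile.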
How many times have you stared at a thing, thinking "if I could open up the compiler/type checker at the end, I could totally verify some property of my code, but it is not expressible in the type system"?
As others have mentioned (only partially though, hence this summary), there are several ways (with varying trade-offs) to achieve this.
Multi-stage programming languages have, well, multiple stages of compilation, and you can manipulate types as first-class values in higher stages. This is a natural extension of type-level evaluation: the only thing that changes is that the type-level evaluation uses the same fragment of the language as the value-level evaluation. Of course we would then have a type of types, a type of types of types, and so on; some languages do not distinguish them and use a single meta-type as a catch-all. This model is powerful, but you don't get stronger guarantees for lower stages; in particular, this model is not a substitute for generic types, which typically guarantee that every instance of a generic type is instantiable.
(By the way, it is a common misconception that being able to write something like `function f<T>() { return T.size; }` means types are first-class values; the type parameter T and the lowered value T are different things in different stages.)
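A sketch of that distinction in TypeScript (the `SizeOf` descriptor is hypothetical, just for illustration): the type parameter is erased at runtime, so any "size" has to travel as an ordinary value alongside it:

```typescript
// T exists only at compile time; `T.size` is meaningless at runtime.
// A value-level stand-in must be passed explicitly instead.
interface SizeOf<T> {
  bytes: number;
  readonly _phantom?: T; // ties the descriptor to T in the type system only
}

const sizeOfInt32: SizeOf<number> = { bytes: 4 };

// The lowered function receives only `size`; T is gone after compilation.
function f<T>(size: SizeOf<T>): number {
  return size.bytes;
}
```

Here `f(sizeOfInt32)` returns 4: the type argument and the value argument live in different stages, which is exactly the point above.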
Dependently typed languages extend the type system itself to handle mixed types and values. This can be a much bigger undertaking than multi-stage programming and harder to reconcile with type inference and the other goodies typically available for strong (enough) type systems, but done correctly it can provide very strong guarantees that were previously possible only via static analyses, and only to a limited extent. Note that dependent typing is historically associated with proof assistants but is still possible without them; its usefulness and usability would be limited, though. (You might have guessed, but this is why I don't think dependent typing at its current stage is the future.)
Conventional metaprogramming methods like macros or code generation are huge wrenches, always available for throwing. (Multi-stage programming is also a kind of metaprogramming, but is not included in the colloquial sense of "metaprogramming".) They can be significantly limited in scope and/or inconvenient to use, but depending on the scope they might still be appropriate.
I really don't like Jai's presentation because essentially no text material is available, which makes it much harder to discuss anything. For posterity, the actual demonstration is available here [1]; it shows a multi-stage compilation model like Zig's, which is very useful but also quite limited.
The language in my head has an effect system, a proof assistant, lets you run code at compile time iff you prove it to be total and pure, and lets you generate new code and proofs with it. This is just dependent types (and it's similar to Jai and Rust) but I think you could market them in a more imperative way to make them usable.
Stephen Chang, Michael Ballantyne, Milo Turner, and William J. Bowman. 2020. Dependent Type Systems as Macros. Proc. ACM Program. Lang. 4, POPL, Article 3 (January 2020). https://dl.acm.org/ft_gateway.cfm?id=3371071 (redirects to download the PDF)
> We present Turnstile+, a high-level, macros-based metaDSL for building dependently typed languages
> This meme about Apple products deriving most of their demand from being status symbols is a decade or two out of date.
Both of you are arguing on which side of an ideal abstraction (Veblen goods) Apple is. Seems silly to me.
Apple products ARE status symbols in many countries, that's what most of the Chinese market is for Apple presently.
They're also positioned as utilitarian tech products with mass appeal. Apple has always had this hybrid strategy; it's not one OR the other.
A quick look at their profit margins vs. the competition will show that they're marking up their products significantly more. And I mean profit margin specifically, so that we don't have this argument about whether their software and hardware are more expensive in principle (they are, but most of it is profit margin).
And also, sorry, but this is a $15 battery sold for $99; even as an Apple user, they're losing me here. Some of their accessories are literally for people who want to spend money on Apple shit.
Indeed, the vast majority of comments written here maintain an extremely high standard of grammar, and can thus be trusted as first-class news and information.
I don't care much about tracking, but this ad is very effective. It's kinda dishonest about what tracking is (it's not a bunch of people making job and purchase decisions for you, come on). But effective.
No one made any decision for the main character. A shop employee recommended an anti-itch cream. It looked like he asked her to. The others snooped and gave out his information.
There's a limit to how much we can hypothesize about the nature of other universes, given we don't know if they actually exist. It'd be foolish to claim "QED" either way.
Since we can't possibly know anything about these universes, for all intents and purposes we can say that they don't exist. This QED is more than acceptable.
If you look at all sorts of limitations on the M1 Macs (fewer ports, only one external screen, except on the Mini where it's one DP and one HDMI), it's plain that it's paying the price for using a phone SoC in a general-purpose computer: no generic expansion buses, connectors optimised for the phone use case, etc.
And they are banking hard on the unified memory model in all sorts of marketing, but it's unclear whether that's the cause or the effect of being unable to use external GPUs.
The M1 was hailed as an entry level chip, and merely the start of greater things to come. If that were true, logic would hold that they would have a higher end version of the existing M1 chip available by now.
Not sure about "hailed as", but that perception is probably driven largely by the market position of the Macs Apple put it into first. There is no Mac Mini or MacBook Air that isn't entry level.
I think this person has no idea how much work goes into bringing a product up. It's an incredible amount of work and time. And that's with code from Intel/AMD/Qualcomm plus reference designs. Designing the silicon, the board, and the firmware in under a few years is absolutely ridiculous. To do so in 8 months is ludicrous.
I do believe there is something behind the scenes that is not going according to plan. But one thing I've found a little interesting: since Apple now controls all aspects of the system, it would have been an incredible bit of showmanship and a PR coup to announce the refresh of the entire Mac lineup at the November 2020 launch event with a Jobsian "One more thing..." kind of surprise.