Yes, it's all well and good that your types are sound and you don't have panics, but I feel like this could get you in trouble in the real world (Gleam also uses this division convention, and people very much use Gleam for "real world" things). Suppose you took an average over an unintentionally empty list (maybe your streaming data source just didn't send anything over the last minute because a backhoe hit a fiber line at your external data source's data center) and took some downstream action based on what you think is the rolling average. You could get royally fucked if money is involved.
Crashing would have been preferable.
1/0 = 0 is unsuitable and dangerous for anyone doing anything in the real world.
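To make the failure mode concrete, here's a minimal Rust sketch. The `gleam_div` helper is a made-up stand-in for Gleam's `/` convention, and the rolling-average setup is invented for illustration:

```rust
// Hypothetical stand-in for a division operator that yields 0 on
// division by zero, as Gleam's `/` does.
fn gleam_div(a: i64, b: i64) -> i64 {
    if b == 0 { 0 } else { a / b }
}

fn rolling_average(window: &[i64]) -> i64 {
    let sum: i64 = window.iter().sum();
    gleam_div(sum, window.len() as i64)
}

fn main() {
    // The upstream feed went silent, so the window is empty.
    let window: Vec<i64> = vec![];
    let avg = rolling_average(&window);
    // avg is 0 -- indistinguishable from a genuine average of 0, so a
    // downstream "act if average < threshold" rule fires with no error
    // raised anywhere.
    assert_eq!(avg, 0);
    assert_eq!(rolling_average(&[4, 6]), 5);
}
```

The dangerous part is that the zero is a perfectly plausible value, so nothing downstream can tell "no data" apart from "the data averaged to zero".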
People are too scared of crashes. Sure, crashing is not ideal. Best is to do what the program is supposed to do, and if you can’t, then it’s better to produce a friendly error message than to crash. But there are far worse outcomes than crashing. Avoiding a crash by assigning some arbitrary behavior to an edge case is not the right approach.
Strongly agree here. IMO libraries should try hard to return sensible error codes (within reason; e.g. a null pointer access is unrecoverable), but application code should just crash. And when a library returns an error code, default to just crashing on failure until you have a compelling reason to do something more complicated.
Yes but here's the conflict. You design a typed language, you want the primary operators to be type stable so you can compose them. Then there's no room to return an error from a basic operation. So if your language also makes it a priority to NEVER CRASH, you are stuck.
Right, for arithmetic operations, you must have one of:
1. Might crash.
2. Result may not be what you’d expect from conventional math.
3. Inputs and outputs are different types.
4. Nonlinear control flow i.e. exceptions.
Division isn’t even particularly special here. If you have fixed-width integer types (as most languages seem to) then this is a problem for all the basic operators.
3 and 4 are attractive solutions but can get annoying or cause more bugs. (How many catch blocks out there have zero test coverage?) Between 1 and 2, 1 is usually much better.
For cases where the programmer wants 2, you can provide alternate operators. For example, Swift crashes on overflow with the standard operators, but has variants like &+ for modular arithmetic.
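Rust's integer API spells out the same menu of options by name, which makes a handy illustration of the list above (a sketch using the standard `wrapping_add`/`checked_add`/`saturating_add` methods):

```rust
fn main() {
    let x: u8 = 250;

    // Option 2 spelled out explicitly: modular arithmetic, like Swift's &+.
    assert_eq!(x.wrapping_add(10), 4);

    // Option 3: the output type changes to carry the failure case.
    assert_eq!(x.checked_add(10), None);
    assert_eq!(x.checked_add(5), Some(255));

    // Saturation: yet another flavor of "not what conventional math says".
    assert_eq!(x.saturating_add(10), 255);
}
```

The plain `+` picks option 1 in debug builds (panic on overflow) and option 2 in release builds (wrap), so the named variants exist precisely so the call site can choose.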
Yes, it's absolutely better to crash if you're in an unexpected state. I had to deal with a service once which had a top-level exception handler that ensured that all exceptions would simply log and let the service keep running. That's great for the majority of exceptions which reach that point because most of them are no big deal to push through.
But one time an exception came at just the right time to cause the internal state and database state to be out of sync. That caused data updates in the service from that point on to start saving bad data into the database. It took a few hours to notice the issue and by that point a lot of the persisted data was trashed. We had to take down the service, restore the database from a backup, and reconstruct the correct data for the entire day.
Fortunately the data issues here were low impact, but it could just as easily have been critical data that was bad. And having a business operate on incorrect data like that could cause far bigger issues than a bit of downtime while the service restarts.
OP didn’t say Gleam is dangerous in general. They said it’s dangerous anywhere around physical or financial values. Your app isn’t critically dealing with either, so it’s not really a retort to their point.
> keeping my client information's integrity is as important to me as keeping the financials
Nobody is questioning your intentions. People writing apps in memory-unsafe languages don’t give fewer shits. They’re just more prone to certain classes of errors.
> how the `1/0=0` problem can be entirely avoided
1/0 problems are generally expected to be entirely avoided. The concern is about cases where the system behaves unexpectedly, whether due to human error or the computer being weird.
Correct, these are all trade-offs we make when building a product. Choosing between the "1/0 crashes your program" problem and the "1/0 returns 0" problem is one such tradeoff.
All I was doing was clarifying the impression OP gave.
Now that we all know the details we can make whatever tradeoff we prefer.
Let's be clear. Gleam is still a bit of an esolang. If you had a company and onboarded a junior onto it, would you expect them to know that 1/0 == 0? As a senior doing code review for said junior, would you be confident that you would correctly think through every corner case when you encounter the / operator?
It's the year of our Lord 2024; why is a new language putting such a huge footgun in its stdlib out of the box?
> Gleam offers division functions that return an error type, and you can use those if you need that check.
Yes, but is that what any given developer will reach for first? Especially considering that an error-returning division is not composable?
The language puts people into a position where the instinctive design can cause very dangerous outcomes that are hard to see in a code review, unless someone on the team is a language lawyer. And you probably don't want one of those on your team.
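For a sense of what the error-returning variant costs at the call site, here's a rough Rust analogue using the standard `checked_div` (the `ratio_*` functions are invented for illustration):

```rust
// With plain operators: reads like math, but a 0 denominator panics.
fn ratio_plain(a: i64, b: i64, c: i64) -> i64 {
    a / b / c
}

// With checked division: total, but every step must thread the Option,
// and it no longer reads like arithmetic.
fn ratio_checked(a: i64, b: i64, c: i64) -> Option<i64> {
    a.checked_div(b)?.checked_div(c)
}

fn main() {
    assert_eq!(ratio_plain(100, 5, 2), 10);
    assert_eq!(ratio_checked(100, 5, 2), Some(10));
    assert_eq!(ratio_checked(100, 0, 2), None);
}
```

So "not composable" is perhaps too strong (Option composes via `?` and `and_then`), but the friction is real, which is why developers reach for the bare operator first.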
I think there's a reasonable argument for gleam to have an operator that does division resulting in zero but at the very least that should NOT be "/"
As so often, the really preferable solution would be to make it impossible to code the wrong thing from the start:
- a sum type (or some wrapper type) `number | DIVISION_BY_ZERO` forces you to explicitly handle the case of having divided by zero
- alternatively, if the division operator only accepted the set `number - 0` as type for the denominator you'd have to explicitly handle it ahead of the division. Probably better as you don't even try to divide by zero, but not sure how many languages can represent `number - 0` as a type.
All Rust's primitive integer types have a corresponding non-zero variant, NonZeroU8, NonZeroI32, NonZeroU64, NonZeroI128 etc. and indeed NonZero<T> is the corresponding type, for any primitive type T if that's useful in your generic code.
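A small sketch of how that plays out in practice (using std's `NonZeroU32`; the `safe_div` wrapper is invented for illustration):

```rust
use std::num::NonZeroU32;

// The denominator's type rules out zero, so this division can never hit
// the divide-by-zero case (std implements `Div<NonZeroU32>` for u32).
fn safe_div(a: u32, b: NonZeroU32) -> u32 {
    a / b
}

fn main() {
    // The zero check happens once, at the boundary, as an Option...
    let denom = NonZeroU32::new(5).expect("denominator must be nonzero");
    assert_eq!(safe_div(10, denom), 2);

    // ...so with a zero input the caller is forced to decide up front
    // what that case means, before any division is attempted.
    assert!(NonZeroU32::new(0).is_none());
}
```

This is exactly the "handle it ahead of the division" approach from the parent comment: the only way to obtain a `NonZeroU32` is through a constructor that returns `Option`, so the zero case can't be silently forgotten.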
Remember that "almost all" of the Reals are unrepresentable using finite sequences of symbols, since the latter are "only" countably infinite. The next logical step is probably the Radicals (i.e. nth roots, or fractional powers).
I know that nested radicals can't always be un-nested, so I don't think larger sets (like the Algebraic numbers) can be reduced to a unique normal form. That makes comparing them for equality harder, since we can't just compare them syntactically.
For large sets like the Computable numbers, many of their operations become undecidable. For example, say we represent Computable numbers as functions from N -> Q, where calling such a function with argument x will return a rational approximation with error smaller than 1/x. We can write an addition function for these numbers (which, given some precision argument, calls the two summand functions with ever-larger arguments, i.e. ever-tighter error bounds, until their sum is within the requested bound), but we can't write an equality function or even a comparison function, since we don't know when to "give up" comparing numbers like 0.000... == 0.000....
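The representation above can be sketched in Rust, with `f64` standing in for exact rationals (an assumption that only holds up to float precision; a real implementation needs arbitrary-precision rationals, and all names here are invented for illustration):

```rust
// A computable real, represented as a function: given x, it returns an
// approximation whose error is smaller than 1/x.
type Computable = Box<dyn Fn(u32) -> f64>;

// Addition is computable: ask each summand for error < 1/(2x), so the
// sum's total error is < 1/x. No equality test is possible in general.
fn add(f: Computable, g: Computable) -> Computable {
    Box::new(move |x| f(2 * x) + g(2 * x))
}

// 1/3 as a computable number: truncate the decimal expansion to enough
// digits that the error drops below 1/x.
fn third() -> Computable {
    Box::new(|x| {
        let digits = (x as f64).log10().ceil() as i32 + 1;
        ((1.0 / 3.0) * 10f64.powi(digits)).trunc() / 10f64.powi(digits)
    })
}

fn main() {
    let two_thirds = add(third(), third());
    // Within 1/1000 of 2/3, as the contract promises -- but there is no
    // general way to decide two_thirds == some other Computable.
    assert!((two_thirds(1000) - 2.0 / 3.0).abs() < 1.0 / 1000.0);
}
```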
It's funny, I hold the exact opposite opinion, but from the same example: In the course of my programming career, I've had at least 3 different instances where I crashed stuff in production because I was computing an average and forgot to handle the case of the empty list. Everything would have been just fine if dividing by zero yielded zero.
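For what it's worth, the empty-list case can also be surfaced in the return type rather than either crashing or yielding zero; a minimal Rust sketch (the `average` helper is invented for illustration):

```rust
// Returning Option makes "no data yet" a separate, visible case instead
// of a crash (or a silent 0 that looks like a real average).
fn average(xs: &[f64]) -> Option<f64> {
    if xs.is_empty() {
        None
    } else {
        Some(xs.iter().sum::<f64>() / xs.len() as f64)
    }
}

fn main() {
    assert_eq!(average(&[2.0, 4.0]), Some(3.0));
    // The forgotten case, now one the compiler forces callers to handle.
    assert_eq!(average(&[]), None);
}
```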
What was the problem with crashing? Surely you had Kubernetes/GCP/ECS restart your container, or if you're using a BEAM based language, it would have just restarted
> Everything would have been just fine if dividing by zero yielded zero
perhaps you weren't making business decisions based on the reported average, just logging it for metrics or something, in which case I can see how a crash/restart would be annoying.
I imagine the problem was that it crashed the whole process, and so the processing of other, completely fine data that was happening in parallel, was aborted as well. Did that lead to that data being dropped on the floor? Who knows — but probably yes.
And process restarts are not instantaneous, just so you know, and that's even without talking about bringing the application into the "stable stream processing" state, which includes establishing streaming connections with other up- and downstream services.
It also seems more mathematically appropriate because it is as close to the limit of the reciprocal as one can get with that representation. Now please allow me to duck before being struck by the tomatoes of mathematicians.
It's probably due to how division is implemented, by shifting the divisor and subtracting it from the remainder. Subtracting (0 << n) leaves the remainder the same as it was and the corresponding bit in the quotient will be set at every step.
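That shift-and-subtract behavior can be sketched as a restoring divider in Rust (a toy model of the schoolbook algorithm, not any particular hardware implementation):

```rust
// Restoring division, one quotient bit per step. With d == 0 the trial
// subtraction never fails (rem >= 0 always holds), so every bit of q
// gets set: n / 0 comes out as all-ones.
fn hw_div(n: u32, d: u32) -> u32 {
    let mut q = 0u32;
    let mut rem = 0u32;
    for i in (0..32).rev() {
        rem = (rem << 1) | ((n >> i) & 1); // shift in the next dividend bit
        if rem >= d {
            rem -= d;
            q |= 1 << i;
        }
    }
    q
}

fn main() {
    assert_eq!(hw_div(7, 2), 3);
    assert_eq!(hw_div(1, 0), u32::MAX); // all quotient bits set
    assert_eq!(hw_div(0, 0), u32::MAX);
}
```

That all-ones result matches the convention RISC-V's M extension documents for unsigned division by zero.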
Intel's 80186 produced a result like that in one special case, because of a missing check in the microcode. This could be called a bug or an optimization: the "AAM" instruction was only documented as dividing by 10, but in fact takes a divisor as part of its opcode (D4 0A = divide by 10, as listed in the documentation; D4 00 = divide by zero). The normal divide instruction - as well as AAM on all other x86 processors - checks for zero and throws an exception.
> It's probably due to how division is implemented, [...]
Or rather how division could be implemented. RISC-V is an abstract instruction set architecture, not born from a concrete chip like x86 was; but they are trying to make things easy on the hardware.