To be pedantic about this complexity estimate for the 4 x 4 case: both reduce to O(1). If N == 4, N is a constant, so O(4^2.778) == O(4^2.8074) == O(1).
Talking about scaling for a problem that has no scaling factor is a bit odd.
Yeah, the paper uses O notation for complexity, but never when N is constant. For constant N and M, the paper uses the number of scalar multiplications performed as the complexity measure. According to Figure 3, their algorithm decreased the best known count from 49 to 47 for 4x4 matrices, and from 98 to 96 for 5x5 matrices (along with some further state-of-the-art improvements).
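The new 47-multiplication scheme isn't reproduced here, but the previous best of 49 is easy to verify: apply Strassen's 7-multiplication 2x2 scheme recursively to a 4x4 matrix viewed as a 2x2 matrix of 2x2 blocks, giving 7 * 7 = 49 scalar multiplications. A quick sketch with a multiplication counter (plain Python lists, just to illustrate the count):

```python
# Count scalar multiplications in recursive Strassen on a 4x4 matrix.
# Strassen multiplies 2x2 (block) matrices with 7 multiplications instead
# of 8; applied recursively to 4x4, that's 7 * 7 = 49 scalar mults,
# versus 4^3 = 64 for the naive algorithm.

mult_count = 0

def add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def strassen(A, B):
    global mult_count
    n = len(A)
    if n == 1:  # base case: one scalar multiplication
        mult_count += 1
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    A11 = [r[:h] for r in A[:h]]; A12 = [r[h:] for r in A[:h]]
    A21 = [r[:h] for r in A[h:]]; A22 = [r[h:] for r in A[h:]]
    B11 = [r[:h] for r in B[:h]]; B12 = [r[h:] for r in B[:h]]
    B21 = [r[:h] for r in B[h:]]; B22 = [r[h:] for r in B[h:]]
    M1 = strassen(add(A11, A22), add(B11, B22))
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))
    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(add(sub(M1, M2), M3), M6)
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bot = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bot

A = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
B = [[17, 18, 19, 20], [21, 22, 23, 24], [25, 26, 27, 28], [29, 30, 31, 32]]
C = strassen(A, B)
naive = [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
         for i in range(4)]
print(mult_count)  # 49
assert C == naive  # same product as the naive algorithm
```

Shaving two multiplications off that 49 is exactly the kind of improvement the paper is claiming.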
This is for matrix multiplication where elements are themselves 4x4 matrices. So yes, indeed this is about multiplying many many 4x4 matrices where N is the size of the outer matrix.
It used to be that one of the best things about working at Google was the "blameless postmortem". As long as you learned from the incident and weren't attempting to look at private data, you could write up a postmortem document and actually use it as part of a promotion packet. Google would lose a key part of its soul if it were to change that.
I'm inclined to agree with you here. The solutions suggested here seem to rely on heuristics that depend on the type of distribution.
A tangential question: real numbers are uncountably infinite. Are probability distributions over the real numbers likewise uncountably infinite, or do they form a higher infinity?
First, a point of terminology: uncountably infinite refers to any infinite cardinality other than countably infinite, so if the number of probability distributions is at least the cardinality of the real numbers, then it's already uncountable.
As for which uncountable cardinality they form, it's the same as that of the real numbers [0]. Roughly, a probability distribution is determined by the countable collection of real numbers P(X <= q) with q rational. That means the cardinality is no more than that of R^Q, which has the same cardinality as R. (The cardinality is at least that of R due to, say, the uniform distributions on [0, x] for x real.)
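Spelling out the cardinal arithmetic behind that upper bound (a sketch, using the standard identification of |R| with 2^aleph_0):

```latex
\#\{\text{distributions}\}
  \;\le\; |\mathbb{R}|^{|\mathbb{Q}|}
  = \bigl(2^{\aleph_0}\bigr)^{\aleph_0}
  = 2^{\aleph_0 \cdot \aleph_0}
  = 2^{\aleph_0}
  = |\mathbb{R}|.
```

Combined with the lower bound from the uniform-distribution family, the set of probability distributions on R has exactly the cardinality of R.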
Is there actually some way to guess with better than 50% chance?
I'm pretty sure this could be reduced to the secretary problem where there's a pool of 2 candidates, which yields an optimal hire with probability 50%.
Maybe you play word games and say, "I guess the other number is not higher," or "I guess the other number is not lower." Since the number you observed was produced once, there is some non-zero probability that it was produced twice, in which case the other number is neither higher nor lower.
The reduction is leaky in that it requires you to assume that there is nothing to learn from looking at the first number.
Here's a hint that is not a full solution. If you knew that the numbers in both envelopes were independently sampled from the same Gaussian (but not necessarily which Gaussian), then there is a simple strategy that wins more than 50% of the time: pick an envelope to peek at uniformly at random and guess that it is the larger one if and only if it shows a positive number.
Why does this work? If both numbers are positive, your strategy is equivalent to "pick a random envelope" and you're left with a uniformly random choice of envelope. If both numbers are negative, you're also left with a uniformly random choice of envelope. But if one is positive and the other is negative, you win 100% of the time.
What are the odds of the third case? Well, every Gaussian puts at least some mass on positive numbers and some mass on negative numbers, so there is some nonzero chance, even if you don't know what it is. So while most of the time the strategy is just a coin toss, some positive fraction of the time you win with certainty, and that gives you a strict advantage over random guessing.
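This is easy to check empirically. A quick simulation sketch (assuming standard normal draws for concreteness; for N(0, 1) the win rate works out to exactly 3/4, while for other Gaussians it is smaller but still above 1/2):

```python
import random

# Two numbers drawn iid from a Gaussian. Peek at a random envelope and
# guess "mine is larger" iff the peeked number is positive.
# For N(0, 1) the exact win probability is 3/4.
random.seed(0)
trials = 200_000
wins = 0
for _ in range(trials):
    a, b = random.gauss(0, 1), random.gauss(0, 1)
    x, other = (a, b) if random.random() < 0.5 else (b, a)
    guess_mine_is_larger = x > 0
    if guess_mine_is_larger == (x > other):
        wins += 1
print(wins / trials)  # close to 0.75
```

The 3/4 decomposes exactly as described above: 1/2 in the two same-sign cases (probability 1/2 combined) plus 1 in the mixed-sign case (probability 1/2 for a zero-mean Gaussian).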
The question now is how to transfer such an approach to the original problem without the Gaussian assumption. Somehow the two problems are less different than they initially seem.
I didn't even think of the possibility of negative numbers. I can't ever imagine playing this x, 2x envelope game if there was any chance at all that the numbers would be negative. Like you open an envelope and it says I owe them either one million or two million dollars. Yeah, screw that game.
But couldn't you make the same argument for guessing that it's higher iff the number is greater than 42? Therefore, if the number is 9, both strategies tell you to do different things, yet in each case, your probability of winning is >½. There must be a hole in the logic somewhere…
> But couldn't you make the same argument for guessing that it's higher iff the number is greater than 42?
Yep.
> Therefore, if the number is 9, both strategies tell you to do different things, yet in each case, your probability of winning is >½. There must be a hole in the logic somewhere…
For any nonnegative value C, the probability of Gaussian draw x being larger than (iid) Gaussian draw y is strictly bigger than 1/2 when conditioned solely on x > C.
The particular choice of C might change which specific draws of x and y the strategy succeeds with, but as you noticed for every C a thresholding strategy of the above form does give you some pointwise nonzero advantage over random guessing.
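A sketch that sweeps a few thresholds (iid N(0, 1) draws assumed for concreteness). Every C gives a win rate above 1/2; the edge is largest at C = 0 for a zero-mean Gaussian and shrinks, without ever vanishing, as C moves away from the mean:

```python
import random

# Same game, but guess "mine is larger" iff the peeked number exceeds C.
# Any fixed threshold C beats random guessing; the advantage is the chance
# that C falls strictly between the two draws.
random.seed(1)

def win_rate(c, trials=200_000):
    wins = 0
    for _ in range(trials):
        a, b = random.gauss(0, 1), random.gauss(0, 1)
        x, other = (a, b) if random.random() < 0.5 else (b, a)
        if (x > c) == (x > other):
            wins += 1
    return wins / trials

for c in (-1.0, 0.0, 1.0):
    print(c, win_rate(c))  # roughly 0.63, 0.75, 0.63
```

With C = 42 the advantage is still strictly positive, but it is on the order of P(draw > 42), far too small to see in a simulation of this size.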
A very general solution, which works regardless of how the numbers in the envelope are picked:
Choose a strictly increasing function f with values strictly between 0 and 1. Choose one of the envelopes at random and look at the number x inside. Then, with probability f(x), say that you have the larger envelope; otherwise, say the other envelope is larger.
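A simulation sketch of this strategy using the logistic function f(x) = 1 / (1 + e^(-x)) as one concrete choice (any strictly increasing f into (0, 1) works). For envelope values a < b, the win probability is 1/2 + (f(b) - f(a))/2, which is strictly above 1/2 no matter how a and b were chosen:

```python
import math
import random

# Cover-style randomized strategy: peek at a random envelope, then declare
# it the larger one with probability f(x), where f is strictly increasing
# with values in (0, 1).
random.seed(2)

def f(x):
    return 1.0 / (1.0 + math.exp(-x))  # logistic, one valid choice of f

def play(a, b, trials=200_000):
    """Envelopes hold a and b (a != b, chosen however you like). Win rate."""
    wins = 0
    for _ in range(trials):
        x, other = (a, b) if random.random() < 0.5 else (b, a)
        say_mine_is_larger = random.random() < f(x)
        if say_mine_is_larger == (x > other):
            wins += 1
    return wins / trials

# Expected win rate: 0.5 + (f(2) - f(-1)) / 2, about 0.806
print(play(-1.0, 2.0))
```

The advantage comes from f being strictly increasing: you are more likely to "keep" the larger number than the smaller one, no matter what the two numbers are.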
There's an unfortunate issue with software engineering: Perception is Reality.
The technical struggles you've described sound like they're about par for a junior dev. You made some costly mistakes, and you think you could have completed some tasks faster? Honestly, I would find it suspicious if a junior dev weren't experiencing those kinds of issues.
However, if you get put on a PIP, it's because someone in your management chain has the perception that you're underperforming. You'll have a lot more context about the political environment -- it's hard to gauge why senior devs might be brushing you off. If they have the perception that you're not performing up to their expectations, that's going to show up in their own private conversations with your manager (or whomever handles your evaluation). If your manager has a negative perception of you, the healthiest thing to do is to find a different team.
Perception is Reality for your manager if they put you on a PIP or give you a bad performance rating, even if you are performing well within expectations. Perception is Reality for your coworkers when they either do or don't support you as a team mate. Unfortunately, when it comes to imposter syndrome, Perception is Reality once again.
An exercise you can do that will likely help is to look at how others are performing. Actually check to see how much code others are landing, or how many bugs others are fixing, or how many stories others are completing. You might have the perception that you're not performing up to the same level, but you can look at some statistics and get a better idea of what the Truth really is.
It looks like the concept here is similar to me observing that the average number of hairs on a human head is about 1e5. Also, the average number of meals a human consumes in their lifetime is about 1e5. Conclusion: we can fix hair loss if we have people eat more meals on average.
In seriousness though, the idea of "physical constants" not being actually constant is fascinating. I think the only sci fi book that I've read that explored this much was Carl Sagan's Contact, but it's an idea that must have been used in other sci fi. The implications of what might happen if you can change physical constants is the sort of thing sci fi is made from.
As for the calculations you put in the colab... Based on Wikipedia, the Large Number Hypothesis is focused on dimensionless quantities. That is, no kilograms, meters, or other "arbitrary" units. The number you found that was close to the Hubble Constant has units (meters, grams, and seconds). You could argue that when the scientists of yesteryear made up those units, they started with the Hubble Constant and worked backwards so their units would produce the proper coincidence. Of course they didn't, but since they could have, it's hard to see this as anything but a coincidence.
Everything Paul wrote about in this post resonated with me. He avoided any examples, and looking through these HN comments, it's clear why. A lot of people assume that his experiences are specific, but no, there really is a culture of shooting down crazy new ideas.
I can think of a few examples of crazy ideas being dismissed. I remember seeing this post where someone was trying to figure out if it was possible to verify that a photo was either undoctored or else that someone went through a lot of trouble to hack the camera hardware of a phone. The post started off in the vein of, "Here's this idea, and even though it sounds crazy, I can't convince myself that it's a bad idea." The peanut gallery had all sorts of reasons that it was a bad idea -- I think my favorite reason was that it would be immoral to try to provide this capability. Paul's essay suggests a few reasons that this crazy idea might have generated such a personal attack.
Crazy new ideas are uncomfortable. I'm reminded of "Pitch Anything" by Oren Klaff, which describes the "croc brain" that protects the higher functioning parts of our brain by trying to discard anything that is uncomfortable. A crazy new idea challenges our worldview, so it's going to be uncomfortable.
There's a class of ideas that engineers are comfortable with: incremental improvements. If there's a framework in place to evaluate an idea, it doesn't make people so uncomfortable. By extension, if an idea is non-incremental -- a crazy new idea -- it makes us uncomfortable, so it gets rejected immediately.
I remember a conversation about the value of an idea. There's the school of thought that ideas are worthless -- a monkey with a typewriter can hammer out ten ideas before their breakfast banana. Another viewpoint recognizes that good ideas are important starting points, but after that the only thing that matters is hard work.
My feeling is that crazy new ideas are more like lottery tickets: probably worthless. I don't know, maybe the peanut gallery is right. Since a lottery ticket is probably worthless, the easiest thing to do is to toss them all in the trash before ever checking them against the winning numbers.