To me the neat bit isn't that it got the exponential decay right - that's pretty standard - it's that it realised there were two different timescales for the decay and got ball-park numbers for them pretty well.
This is the kind of model you would expect from a simple cylindrical model of the coffee cup with some inbuilt heat capacity of its own.
However, those decay coefficients are going to be very dependent on the physical parameters of your coffee cup - in particular the geometry and thermal parameters of the porcelain. There are a lot of assumptions and a lot of variability that the models will have to deal with.
I would think that the starting temperature and ambient temperature are controlling. Boiling water is three times ambient temperature. 150°F is twice ambient temperature.
The exponential decay is obvious because he started the readings at boiling. If he had started at 150°F, it might not have been as obvious that the readings were on an exponential curve.
Is that right? I didn't get much sleep and don't drink coffee. Lol
While this is a great article, I feel it buries the lede.
For me, the key insight was from the last paragraph of the article:
C++23 introduces "deducing this", which is a way to avoid the performance
cost of dynamic dispatch without needing to use tricks like CRTP, by writing:
    class Base {
    public:
        auto foo(this auto&& self) -> int { return 77 + self.bar(); }
    };

    class Derived : public Base {
    public:
        auto bar() -> int { return 88; }
    };
I wish the article had gone into more detail on how this works, when you can use it, and what its limitations are.
My mind was exploded by this somewhat similar technique https://en.wikipedia.org/wiki/Tanh-sinh_quadrature - it uses a similar transformation of domain, but uses some properties of optimal quadrature for infinite sequences in the complex plane to produce quickly convergent integrals for many simple and pathological cases.
If we then scale this by some value, such that A y_i = z_i we can write this as
z_{i+1} = dt e^(k dt) z_i + A x_i
Here the `dt e^(k dt)` plays a similar role to (1-alpha) and A is similar to P alpha - the difference being that P changes over time, while A is constant.
We can write `z_i = e^{w dt i} r_i`, where w is the imaginary part of k. Substituting into the recurrence gives
r_{i+1} = dt e^{(k - w) dt} r_i + p_i x_i
where p_i = e^{-w dt (i+1)} A = e^{-w dt} p_{i-1}
Which is exactly the result from the resonate web-page.
The neat thing about recognising this as a convolution integral is that we can use shapes other than exponential decay - we can implement a box filter using only two states, or a triangular filter (a bit trickier, and it takes more states). While they're tricky to derive, they tend to run really quickly.
This formulation is close to that of the Sliding Windowed Infinite Fourier Transform (SWIFT), of which I became aware only yesterday.
For me the main motivation for developing Resonate was interactive systems: very simple, no buffering, no window... Also, no need to compute all the FFT bins, so in that sense it's more efficient!
IMO the reason the compiler doesn't add special cases for the simplest version is that it doesn't know which of its _many_ special cases to use. If you actually use the unoptimised version of the code, then it inlines it and optimises each call site correctly, since it has context about which special cases apply. (It doesn't even need the `inline` keyword for this at `-O2`.)
This is possible if the call site can see the implementation, but you can't count on it for separate translation units or larger functions.
My goal was to not rely on site-specific optimization and instead have one separately compiled function body that can be improved for common cases. Certainly, once the compiler has a full view of everything it can take advantage of information as it pleases but this is less controllable. If I were really picky about optimizing for each use I would make it a template.
>Doesn't even need the `inline` keyword for this at `-O2`
The inline keyword means little in terms of actually causing inlining to happen. I would expect that most of the inlining compilers perform happens automatically, on functions that lack the `inline` keyword. Conversely, programmers probably add `inline` as an incantation all over the place, not knowing that compilers often ignore it.
>Conversely, programmers probably add "inline" as an incantation all over the place not knowing that compilers often ignore it.
Funnily, the inline keyword actually has a use, but that use isn't to tell the compiler to inline a function. The use is to allow a function (or variable) to be defined in multiple translations units without being an ODR violation.
MSVC treats inline as a hint [0], GCC's documentation is ambiguous [1] but I read it as also treating it as a hint, and my understanding of clang [2] is that it tries to match GCC, which would imply the same.
What the OP is saying is that it has a use in satisfying the one-definition rule (https://en.cppreference.com/w/cpp/language/definition), and in that case it is not optional and not ignored by the compiler. Whether the compiler actually inlines the function is another matter; that part is optional.
Would it have made a difference if the function was static? The compiler would then be able to deduce that it isn't used anywhere else, and thus could do this inline optimisation.
You can also force it by using extensions like `[[gnu::always_inline]]` or `__forceinline`. I've actually used this technique to generate an auto-vectorizable function whenever it's possible, without any code duplication [1].
On modern hardware, it's also not clear that a special case that makes things faster from a purely CPU-oriented perspective makes things faster overall. Adding additional code to handle special cases makes the code larger, and making the code larger can make it slower because of the memory hierarchy.
There's a CPPCast interview with one of the people who worked on the Intel C Compiler where he talks about the things they do to get it producing such fast binaries. At least from what he said, it sounds like the vast majority of the unique optimizations they put into the compiler were about minimizing cache misses, not CPU cycles.
The "IMO" here is used instead of "I think" or "I believe" to indicate that this is not a fact claim but a (presumably educated) guess. Not a very correct use of "IMO", technically, but a fairly common one nonetheless.
For all sorts of reasons, I think that being pedantic about the meanings of acronyms probably does more to inhibit understanding each other than it does to facilitate it.
In this particular case, regardless of how often anyone has personally seen the term used to mean something more like "I think that...", taking that to be the intended meaning is the most friendly interpretation, and therefore the preferable one. It's an informal forum, we type fast, and don't necessarily carefully proof-read comments before hitting reply. Nor should we -- this place is a more pleasant space to occupy for everyone involved if we give each other the benefit of the doubt about choice of words.
> being pedantic about the meanings of acronyms[...] taking that to be the intended meaning is the most friendly interpretation
Why are you responding as if it's a foregone conclusion that my intent was to be uncharitable/unfriendly?
Someone wrote a comment with unusual phrasing, and I asked for help understanding the message. It contains neither an explicit nor an underhanded attempt to be an asshole to someone. The adversarial reading you're going with is exactly that—something that you're bringing to it, and is (perversely) wholly uncharitable in and of itself...
Whoa, that’s wild! IMO == “In My Opinion”, and it’s all over the internet, along with its cousin IMHO (“In My Humble Opinion”). Are you sure you haven’t seen it before? Try searching for it on HN or Google. HN search shows me almost 100k comments with IMO, and the first page of hits is all this meaning.