Hacker News | markgall's comments

I only skimmed the article, but I think the idea is to use some variation on:

f(a,b,c,d,e) = the largest real solution x of the quintic equation x^5 + ax^4 + bx^3 + cx^2 + dx + e = 0

There's not a simple formula for this function (which is the basic point), but certainly it is a function: you feed it five real numbers as input, and it spits out one number as output. The proof that you can't generate this function using the single one given looks like some fairly routine Galois theory.
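To make the function concrete, here is a sketch in Python (my choice of numpy's generic root-finder, not anything from the article; since an odd-degree real polynomial always has at least one real root, the maximum is well-defined):

```python
import numpy as np

def f(a, b, c, d, e):
    # Largest real solution x of x^5 + a x^4 + b x^3 + c x^2 + d x + e = 0.
    # A degree-5 polynomial with real coefficients has at least one real root.
    roots = np.roots([1, a, b, c, d, e])
    real_roots = roots.real[np.abs(roots.imag) < 1e-9]
    return real_roots.max()
```

No finite formula in radicals computes this in general, but numerically it's as tame as a square root.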

Whether this function is "considered elementary" depends on who you ask. Most people would not say this is elementary, but the author would like to redefine the term to include it, which would make the theorem not true anymore.

Why any of this would shake the foundations of computer engineering I do not know.


I've thought something like that, but I'm interested more in details of the argument.

As for why this could be important... we sometimes find new ways of solving old problems when we formulate them in a different language. I remember how surprised I was to learn that representing numbers as tuples (ordered lists) of remainders modulo pairwise coprime dividers, one remainder per divider, reduces the size of the lookup tables for the division operation, so hardware that does the operation using those tables may use significantly less memory. Here we might have some other interesting advantages.
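That representation is the residue number system, built on the Chinese remainder theorem. A minimal sketch (moduli 3, 5, 7 chosen purely for illustration; real designs pick larger coprime moduli) showing why tables stay small: multiplication acts componentwise on small residues, so a table-driven implementation needs per-modulus tables of m² entries rather than one table over the full range.

```python
from math import prod

MODULI = (3, 5, 7)                 # pairwise coprime -> unique range 0..104

def to_rns(x):
    # Represent x by its remainders modulo each divider.
    return tuple(x % m for m in MODULI)

def rns_mul(a, b):
    # Multiplication works componentwise on the small residues.
    return tuple((x * y) % m for x, y, m in zip(a, b, MODULI))

def from_rns(r):
    # Chinese remainder theorem reconstruction.
    M = prod(MODULI)
    return sum(ri * (M // m) * pow(M // m, -1, m)
               for ri, m in zip(r, MODULI)) % M
```

With these moduli, the per-modulus multiplication tables total 3² + 5² + 7² = 83 entries, versus 105² = 11025 for a direct table over the whole range.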


But can you even express this function with the elementary operator symbols, exp, log, power and trig functions? It seems to me like no, you can't express "largest real solution" with those (and what's the intended result for complex inputs?)

At least EML can express the quintic itself, just as the above-mentioned operators can


The author and EML are using different definitions of elementary functions, EML's definition being the school textbooks' one (polynomials, sin, exp, log, arcsin, arctan, closed under multiplication, division and composition). I've never encountered the author's definition before; it apparently includes some multi-valued functions, which are quite unusual.


Wikipedia says:

> More generally, in modern mathematics, elementary functions comprise the set of functions previously enumerated, all algebraic functions (not often encountered by beginners), and all functions obtained by roots of a polynomial whose coefficients are elementary. [...] This list of elementary functions was originally set forth by Joseph Liouville in 1833.

which seems to be what the blog post references.


I feel that saying that EML can't generate all the elementary functions because it can't express the solution of the quintic is like saying that NAND gates can't be the basis of modern computing because they can't be used to solve Turing's halting problem.


As is usual with these kinds of "structure theorems" (as they're often called), we need to precisely define what set of things we seek to express.

A function which solves a quintic is reasonably ordinary. We can readily compute it to arbitrary precision using any number of methods, just as we can do with square roots or cosines. Not just the quintic, but any polynomial with rational coefficients can be solved. But the solutions can't be expressed with a finite number of draws from a small repertoire of functions like {+, -, *, /}.
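For instance, x^5 - x - 1 is a standard example of a quintic unsolvable in radicals, yet its single real root is trivially computed to arbitrary precision. A plain bisection sketch:

```python
def bisect(g, lo, hi, tol=1e-12):
    # Halve a sign-change bracket [lo, hi] until it's narrower than tol.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# g(1) = -1 < 0 < 29 = g(2), so the bracket [1, 2] contains the root.
root = bisect(lambda x: x**5 - x - 1, 1.0, 2.0)  # ≈ 1.1673
```

The point: "computable to any precision" and "expressible by a finite formula over a fixed repertoire" are very different properties.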

So the question is, does admitting a new function into our "repertoire" allow us to express new things? That's what a structure theorem might tell us.

The blog post is exploring this question: Does a repertoire of just the EML function, which has been shown by the original author to be able to express a great variety of functions (like + or cosine or ...) also allow us to express polynomial roots?


That’s a poor analogy because all polynomials can be solved to arbitrary precision with efficient algorithms.

Can anyone provide a link that "Some are going as far as to suggest that the entire foundations of computer engineering and machine learning should be re-built as a result of this", or anything similarly grandiose?

I am a professional mathematician, though nowhere near this kind of thing. The result seems amusing enough, but it doesn't really strike me as something that would be surprising. I confess that this thread is the first I've heard of it...


It's a fun, but unsurprising undergrad-level result. It got picked up and overhyped on HN [1] and /r/math [2] earlier this week.

Some of my favorites:

DoctorOetker: "I'm still reading this, but if this checks out, this is one of the most significant discoveries in years."

cryptonektor: "Given this amazing work, an efficient EML operator HW implementation could revolutionize a bunch of things."

zephen: "This is about continuous math, not ones and zeroes. Assuming peer review proves it out, this is outstanding."

[1] https://news.ycombinator.com/item?id=47746610

[2] https://www.reddit.com/r/math/comments/1sk63n5/all_elementar...


:)

I still consider the article important, as it demonstrates techniques to conduct searches, and emphasizes the very early stage of the research (establishes non-uniqueness for example), openly wonders which other binary operators exist and which would have more desirable properties, etc.

Sometimes articles are important not for their immediate result, but for the tools and techniques developed to solve (often artificial or constrained) problems. The history of mathematics is filled with mathematicians studying at-the-time-rather-useless constructions which centuries or millennia later become profoundly relevant to everyday life. Think of the "value" of Euclid's greatest common divisor algorithm: what started out as a curiosity with zero immediate relevance for society is now routinely used by everyone who enjoys the world wide web without their government or others MitM'ing a webpage.

If the result was the main claimed importance for the article, there would be more emphasis on it than on the methodology used to find and verify candidates, but the emphasis throughout the article is on the methodology.

It is far from obvious that the tricks used would have converged at all. Before this result, a lot of people would have been skeptical that it is even possible to search for candidates this way. While the gradual early-out tightening in verification could speed up the results, many might have argued that the approach offers no assurance that the false positive rate wouldn't be excessively high (i.e. many would have said "verifying candidates does not ensure finding a solution; reality may turn out that 99.99999999999999999% of candidates fail deeper inspection").

It is certainly noteworthy to publish these results as they establish the machinery for automated search of such operations.


This result itself is being described in those terms[1]:

> If this is true, then this blog post debunking EML is going to up-end all of mathematics for the next century.

This is very concerning for mathematics in general.

1: https://news.ycombinator.com/item?id=47775105


Why on earth would it upend all of mathematics? Secondly, even if it did that, why would that be concerning for mathematics?

Yes or no, it's too early to tell.

I didn't know most of this, but eagerly await my pre-order!


> Polynomial growth (t^n) never reaches infinity at finite time. You could wait until heat death and t^47 would still be finite. Polynomials are for people who think AGI is "decades away."

> Exponential growth reaches infinity at t=∞. Technically a singularity, but an infinitely patient one. Moore's Law was exponential. We are no longer on Moore's Law.

Huh? I don't get it. e^t would also still be finite at heat death.


exponential = mañana


What? I am living in one of these places right now. The rate of road deaths is vastly higher than even in the USA. This is not a good model.


Charlie Kirk was literally in the middle of talking about the need to accept "taking one for the team" when he "took one for the team".

Similar principle here.


Is this really true? I played a few games with it in August. It's not very good.

It's one of those old programs where 95% of the moves are pretty strong. But if you just do nothing and sit back it will occasionally make a random blunder and then you grind it out. I figured it's how they were able to weaken a chess engine back in the day; can't adjust the overall strength, so add random blunders.

I'm only about 2000 on lichess but I beat it pretty much every time, especially once I realized there is no reason to try anything sharp.


My suspicion is that the bot was a fairly standard chess bot, but the difficulties were set based on computation time. As airplane computers got better, it turned into a beast.

As a result, if you tried this on older planes, it might have been “easier”
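That mechanism is easy to sketch (all names here are hypothetical, not the actual in-flight software): fix the thinking time, and search depth becomes whatever the CPU can manage in that window.

```python
import time

def pick_move(moves, score, time_budget_s):
    # "Difficulty" = a fixed thinking-time budget. The engine re-searches
    # one ply deeper each pass until the clock runs out, so the same
    # "easy" setting reaches far greater depth on faster hardware.
    deadline = time.monotonic() + time_budget_s
    depth, best = 0, moves[0]
    while time.monotonic() < deadline:
        depth += 1
        best = max(moves, key=lambda m: score(m, depth))
    return best, depth
```

Same budget, newer seat-back computer: many more plies, and a noticeably meaner opponent.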


One of my first paid iOS dev jobs was porting a Go game from iPad to iPhone; I don't even think the 4 was out yet. It also used computation-time-based difficulties. By the time I was done writing it, I knew a few tricks I could eke a win out with on 19x19.

When the iPhone 5S came out, I tried it on a whim to check the UI scaling etc... the beginner difficulty on a 9x9 board deleted me. It was grabbing something like 64x more samples per go; the lowest difficulty on the 5S (instant responses) never lost a single game against the highest difficulty on the 3GS (15-second turns).

iPhones had a lot of moments like that. Silly bullshit like "what if every pixel was a cell in a collection view" would go from "oh it can barely do 128" to "more responsive than that was, with 2 million" in a few gens.


One of the minor weird things about iOS development early on was just how fast the transition was from the simulator being dramatically faster than actual devices to the simulator being slower than devices. When I started out you’d get things working nicely in the simulator and then discover it’s an order of magnitude too slow on a phone. Just a few years later and my phone was faster than my laptop until thermal throttling kicked in.


Chess on M series Macs has the same issue. Even level 1 is easily 2000+ Elo because of the same thing.


Oh, this led me down a rabbit hole…

I was maintainer of the Chess app from the early 2000s to about 2015. We first noticed in 2004 that level 1 (which was then "Computer thinks for 1 second per move") was getting stronger with each hardware generation (and in fact stronger than myself).

So we introduced 3 new levels, with the Computer thinking 1, 2, or 3 moves ahead. This solved the problem of the engine getting stronger (though the jump from "3 moves ahead" to "1 second" got worse and worse).

A few years after I had handed off the project, somebody decided to meddle with the level setting code (I was not privy to that decision). The time based levels were entirely replaced with depth based levels (which eliminates the strength inflation problem, but unfortunately was not accompanied by UI changes). But for some reason, parsing of the depth setting was broken as well, so the engine now always plays at depth 40 (stronger than ever).

This should be an easy fix, if Apple gets around to making it (Chess was always a side project for the maintainers). I filed feedback report 21609379.

It seems that somebody else had already discovered this and fixed it in a fork of the open source project: https://github.com/aglee/Chess/commit/dfb16b3f32e5a6633d2119...


I found a used copy of Warcraft 3 at the store about ten years after it came out, proudly brought it home, fired it up, and didn't recall the graphics being quite that awful. But the first time I tried to scroll the map sideways, it shot to the far end because they didn't build a timing loop into the animation, and I shut it down, disappointed.

Unfortunately they never released a remastered version of it. They seem to have made some clone of it called “reforged” whatever the fuck that means.


Yeah, Reforged was received very poorly so they basically end of life'd the franchise.

There is a thriving community with a couple different choices for servers to play on. So I'm sure there's a fix for your mouse speed issue.

Check Twitch for people streaming it: https://www.twitch.tv/directory/category/warcraft-iii

Grubby, one of the early esports stars, still streams it regularly and hosts his own for fun tournaments with other streamers.


Reforged was received poorly because it was a lazy, half-assed job and a blatant cash grab, not because we have culturally moved on or the game has aged beyond being fun.

You probably knew this, but I wanted to make sure others knew that the reason they ended the franchise is not that there was no market; it was pure unadulterated greed that led to that situation. In an alternate reality they would have actually done the remake justice and there would be a lively competitive scene


Sorry for the aside but,

> SOLAR_FIELDS

Panoramic Greetings!


There are various hacks and tools for games (especially DOS games, but for W3 there may exist the same) which delayloop various calls to slow things down enough "to work".

The Dolphin emulator has run into similar things; usually doing things "too fast" just gets you more FPS but sometimes it causes the game to go insane.


Also, some DOS games were coded so that they ran correctly no matter the speed of the hardware, like Alley Cat :)


This is pretty much the experience of trying to play any game from the '90s on modern hardware. It always requires a bit of tinkering and usually a patch from the modding community. Funniest one I've found is Fallout Tactics. The random encounter frequency is somehow tied to clock speed so you'll basically get hit with random encounters during map travel about once every half second.


I've been enjoying Total Annihilation since 1997. Still works fine on fairly modern hardware with Windows 11. No modifications other than some additional maps that I downloaded decades ago.


Interesting. Assuming it did not use DirectDraw -- that's often a major pain point.


Sorry if this is a dumb question but did you patch it to the latest version? I don't know if the in-game updater still works but from memory you could download some sort of patch exe file and update it that way.


The original Wing Commander was like that. Playable on 286s/386s, then Pentiums and beyond showed up and it was unplayable. The game started in the "simulator" to show you the controls, and you'd get blown out of space in about 0.5 seconds.


Oh man, I remember that: on a newer computer, I'd tap the left arrow to turn and the Hornet would do a 360.

I suppose, technically, that's one way to make the Scimitar feel more responsive...


The original Wing Commander brings back memories! I remember being amazed by the graphics and the story.

These days I cannot stand games with cliched storyline and tend to skip the cutscenes, but back then it all seemed so amazing... like a cross between a movie and a game.

I remember playing it later and running into speed issues too, but usually there was a way to tweak the emulator in order to fix this.


> they didn’t build a timing loop onto the animation

Wow.

The 1984 (!!!) IBM PC (DOS) port of the game Alley Cat had timing built in. They actually used the system clock, if I remember correctly, so it would always run at the correct pace no matter how fast the computer. Last I checked, decades later, it still ran at the correct speed!

I guess some lessons don't get passed on?
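The lesson, for anyone who missed it: advance the simulation by elapsed wall-clock time, not by a fixed step per rendered frame. A minimal sketch (names are illustrative):

```python
def simulate(speed, frame_dt, n_frames):
    # Each frame advances by speed * dt, so total distance depends only on
    # elapsed time, not on how many frames the hardware managed to draw.
    pos = 0.0
    for _ in range(n_frames):
        pos += speed * frame_dt
    return pos
```

A 1984 PC drawing 10 fps and a modern machine drawing 1000 fps cover the same distance in the same second; omit the dt scaling and the fast machine scrolls 100x as far.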


There's an SC2 custom campaign that reimplements the wc3 campaign that is worth a look.


I think it means gcc -O0


AFAIK the only reason Chess even ships at all anymore is as a burn utility. They'll set it to AI vs AI at max difficulty to stress the system and make sure the cooling/power management works.


Never heard that one (it may indeed be used that way, but if it were the only reason Apple would probably keep it in the Apple internal parts of their OS installs).

It would also be of limited use, as the engine is purely CPU based; it is single threaded and does not even use SIMD AFAIK, let alone GPU features or the neural engines.


It can't be deleted because it's part of the system tools :)


> I'm only about 2000 on lichess

That puts you in the top 7% of players on the site. I have a hard time believing you could get to that rating without knowing that.


They aren't talking about the site, they're talking about their strength (as measured by that site) so it can be compared to the numbers in the article.


> I figured it's how they were able to weaken a chess engine back in the day; can't adjust the overall strength, so add random blunders.

In tom7’s Elo World, he does this (“dilutes” strong Chess AIs with a certain percentage of random moves) to smooth the gradient since otherwise it would be impossible to evaluate his terrible chess bots against something like Stockfish since they’d just lose every time. https://youtu.be/DpXy041BIlA?si=z7g1a_TX_QoPYN9b
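The dilution itself is nearly a one-liner; a hedged sketch (function names are made up here, not tom7's actual code):

```python
import random

def diluted_move(position, engine_move, legal_moves, p_random, rng=random):
    # With probability p_random, discard the engine's choice and play a
    # uniformly random legal move; otherwise defer to the strong engine.
    # Sweeping p_random from 0 to 1 interpolates smoothly between full
    # strength and a purely random player.
    if rng.random() < p_random:
        return rng.choice(legal_moves)
    return engine_move(position)
```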


Such a great video.


1. Uh, isn't 2000 like extremely fucking good?

2. I played a chess bot on Delta on easy and it was really bad, felt like random moves. I beat it trivially and I am actually bad at chess, ~1000 on chess.com. I wonder if this one is different?


Yeah, he just casually said he had an elo that high, as if that doesn't blow 90% of people out of the water.


Note that 2000 on lichess is probably weaker than 2000 on chess.com (or USCF or FIDE)


That's true, I'm 2050-2100 lichess, around 1800 on chess.com. Never played a rated tournament but played some rated players who were 1400-1500 rated USCF, and they were roughly my strength, maybe a bit better. Still the Delta bot, easy mode, was much, much better than me.


Casually dropping that you're in the top 2-3 percent of chess players worldwide: humble brag. "I'm not that good at it, just a little bit!"


I think it depends on the pool to which you're comparing. Being top 2% of all programmers is not so impressive if you include everyone who's ever taken an Intro class. Top 2% of people who do it for a living is much more significant.

I'm in a similar boat as the other posters (2050-2100 lichess, 1400 USCF). The median active rating for USCF is around 1200 and likely much higher if you don't include scholastic players, so if we compare against the OTB pool, "2000 lichess" is probably closer to top 50% than 2%


I mean, if you’re in the top 3 percent of anything, yes that’s pretty good, but not unbelievably so, especially in the field of chess. If for instance you randomly put together a classroom full of chess players, there’s decent odds one of them is better than top 3%. Two classrooms and it’s almost a certainty.

Put another way, looking at chess.com users, there are ~6 million people who would count as the top 3 percent. Difficult to achieve, yes, but if 6 million people can achieve it, it’s not really a “humble brag,” it’s just a statement.


It made me smile to hear "I'm only 97th percentile" isn't a humblebrag. You may be employing an old saw of mine: you can make people* react however you want by leaning on either percentages or whole numbers when you shouldn't.

* who don’t have strong numeracy and time to think


It's still significantly stronger than the average online chess player


I heard they were never intended to be the same, since the initial ratings on Lichess and chess.com are 1500 and 1200 respectively. So they should have a 300-rating difference on average, which fits what the other commenter claims, actually.


I don’t think it would average out to a 300 elo difference simply based on the starting rating being 300 apart.

If everything else was the same, and people play enough games they will average out to the same elo.

The difference is caused by many factors. People don’t play enough games to sink to their real elo, the player pool is different, and you gain/lose fewer points per game with Lichess’s elo algorithm.


ELO is relative: there's no reason a GM's ELO should be 2800 rather than 280 or 28000; it's all determined by the ELO of every other person. So if the ELO gain/loss calculation and the audience of Lichess and chess.com were exactly the same, I don't think they'd converge to the same ELO because of the different starting positions; they'd differ by the starting-position difference.

Also, I can't really prove it mathematically, but I'd guess the average ELO hovers around the starting ELO, since I can't see why it would hover anywhere else: any ELO gained is lost by someone else.
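That zero-sum argument is visible directly in the standard Elo update rule; a small sketch:

```python
def elo_update(ra, rb, score_a, k=32):
    # Expected score of A from the rating gap; the update is zero-sum,
    # so every point A gains, B loses, and the pool's average rating
    # stays pinned wherever the starting rating put it.
    expected_a = 1 / (1 + 10 ** ((rb - ra) / 400))
    delta = k * (score_a - expected_a)
    return ra + delta, rb - delta
```

Since each game conserves the sum of the two ratings, a pool seeded at 1500 keeps a mean of 1500 (ignoring joiners, leavers, and provisional-rating tweaks).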


On further thought yes, I think you're correct.

When I started playing I believe chess.com let you select whether you’re beginner, intermediate or advanced and your start elo was based on that. Could be wrong, and it could’ve changed since.


This was my experience on a long Delta flight, I don't remember if I picked easy or not but it was laughably bad. I took its lunch money for a game and then turned the screen off. I was mostly irritated by the horrible touch interface, it felt so laggy among other issues. (I don't have a ranking, I barely play these days and usually just in person, but my memory says around 1400 back in the yahoo chess days as a teen but it's probably closer to 1000 now.)


I wonder if it's different on different planes? I can easily beat my friend and he won a few games on a flight, I played on a different flight and got crushed for two hours straight. I'm probably 1400-ish


What's your name on lichess? Wanna play me?


I still have my old PowerBook G4 from 2005, with some not-that-old Debian currently installed. Every time my main laptop goes out of commission, I get the G4 back out and use it for a few days. It's good enough for most of my work, though modern web-browsing is a challenge. (Maybe one that somebody has solved, I haven't dug at all.)


> though modern web-browsing is a challenge. (Maybe one that somebody has solved, I haven't dug at all.)

The usual solution is to run the real browser somewhere else and remote into it, eg. https://github.com/tenox7/wrp or https://www.brow.sh/


Does there exist a person who would make this argument straight-faced? I am a professional mathematician and have yet to hear of anyone coaxing an even slightly interesting new theorem out of AI. I think the day is clearly coming but it's not here.


Fair enough. I suppose I'm a believer that the seeds are planted and the day is soon. And I must say, it seems more worthwhile to try to figure out how to fine-tune an LLM / implement reinforcement learning that could do some form of pure math than to try to do new pure math by hand.


I do research in this field. LLMs can be used as (ridiculously inefficient) implementations of some search algorithms that we haven't yet identified and implemented in software ourselves, but which can be inferred from a statistical analysis of the literature. Sometimes those search algorithms generalise to new areas, but more often than not, they flail. The primary advantage of a language model is that it's one big algorithm: when one subcomponent would flail, another (more "confident") subcomponent becomes dominant; but that doesn't solve the case where none of the subcomponents are competent and confident in equal measure. In short: to the extent it's useful, it's a research dead-end. Any potential improvements we understand are better implemented as actual search algorithms.

You've probably seen that thing where ChatGPT cracked Enigma[0]. It used several orders of magnitude more computational power than a Bombe (even given Moore's Law, still thousands of times more electrical power), and still took two dozen times longer. You would literally be better off doing brute-force search with a German dictionary. So it is with mathematics: a brute-force search is usually cheaper and better than trying to use a language model.

Terry Tao is one of the staunchest knowledgeable advocates of GPT models in mathematical research, and afaik he doesn't even bother trying to use the models for proof search. It's like trying to build a house with a box of shoes: sure, the shoe is technically more versatile because you can't use a hammer for tightening bolts (the shoe's sole has enough friction to do this) or foot protection (the shoe is the right shape for this) or electrical isolation (the bottom surface of the shoe is largely rubber), but please just use a hammer if you want to manipulate nails.

[0]: https://www.techradar.com/news/we-watched-an-ai-crack-the-en... – and I know that's not the original ChatGPT®, but I am not rewarding this company for such a wasteful and pointless publicity stunt.


You are talking about current capabilities versus what is probably possible. Sure, it's not possible to write proofs with an LLM right now.

Somewhere inside OpenAI is a reinforcement learning loop that looks like:

    current_code = read("code_base_from_file.txt")
    modify_prompt = "some prompt to modify code with expected outcome"
    result = model.run(modify_prompt, current_code)
    if check(result):
        provide_positive_feedback(model)
    else:
        provide_negative_feedback(model)

and it's clear that not only does this work, it may be the future of what unravels software engineering.

The current models are being trained for coding, but there's no reason this couldn't be tried in other domains, like pure math.


It's not possible to write (usefully novel) proofs with an LLM, but we have other algorithms that can do that. Perhaps a reinforcement learning component could improve upon the search strategy in some way, but there's no compelling reason to use a predictive text model. (There's not even good reason to believe that naïve reinforcement learning would improve the mathematical ability of a system: RL says "that was good: do more of that", and mathematics is about discovery: thinking thoughts that nobody has ever thought before.)


Wow, so easy. Why didn’t anybody think of that before? How shall I inscribe your Fields Medal, sir?



only low iq people talk about the dunning kruger effect


Only low iq people talk confidently about fields of study they barely comprehend.


Yeah, I have been teaching calculus for ... several years, and it's very clear that the curriculum has been dumbed down over the years across the board. The last place I was at removed infinite series from the curriculum altogether. Epsilon/delta proofs were removed from the AP exam not so long ago. Some calculus classes don't use any transcendental functions because students can't handle it. At the same time grades have gone up, I suspect mostly because exams are usually weighted less than they were in the old days.

It is probably true however that there is more of a baseline expectation that you take it in high school. Whether this is sensible I do not know. It is also true that the very best students are better than ever, because the materials on the internet have gotten better and better over time.


I agree with the first two of these, they are great. (And I bet the third is too, I've just never needed it.)

If I had to submit one tip it would be to set everything up with a Makefile or similar. I keep my transactions spread across quite a few separate files for different accounts, and the actual commands I issue to include the right files are very long. Similarly I have various plotting and summary scripts whose exact syntax I don't usually remember. But with make I can just "make cashflow" "make balance 'A=Checking'" "make balance-plot 'A=retirement'" and so on.
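A minimal sketch of such a Makefile (the file layout, variable names, and use of hledger are placeholders for illustration; ledger works similarly, and recipe lines must be tab-indented):

```make
JOURNALS := $(wildcard accounts/*.journal)
FILES    := $(foreach j,$(JOURNALS),-f $(j))

cashflow:
	hledger $(FILES) cashflow

balance:
	hledger $(FILES) balance $(A)
```

Then `make balance A=Checking` expands to the full multi-file hledger invocation without my having to remember it.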


> If I had to submit one tip it would be to set everything up with a Makefile or similar.

If I had to submit a tip on top of yours, it's to use justfiles instead of Makefiles :-)

https://github.com/casey/just


+1 from this long time make user. https://hledger.org/just.html has some PTA-related examples.

