It is interesting to watch. The movements of the robot are robot-like. I mean, wtf, there were no robots playing tennis before, but I had an idea of what a robot playing tennis would look like, and this video confirms my expectations. Sharp, unsure movements, a lot of hesitation, ...
Movies pictured robots like this long before it became possible, but how did the producers guess it?
Or maybe movies rendered different kinds of robots, but this video brings to mind only those that look like this. A kind of confirmation bias?
I agree that the movements look quite robotic (though not as much as you might expect), but I don't think any movies have depicted robots moving like that. A much more common depiction is moving only a single joint at a time.
> Sharp, unsure movements, a lot of hesitation, ...
I like these particular descriptors. Another I would add is holding poses unnaturally still. While waiting for the ball, the robot holds its racket extremely consistently relative to its body even while sharply turning.
People tend to choose extremes: either total globalization or zero-sum games only. Everything in between carries a risk of cognitive overload, so it is avoided at all costs.
Aluminum smelters use the Hall-Héroult process, where alumina is dissolved in molten cryolite and reduced in massive “pots”, which are large electrolytic cells. Each pot contains a carbon cathode lining that must be kept at around 950 °C during operation. If the pot cools down, the frozen electrolyte and solidified aluminum contract at different rates than the carbon and steel shell, cracking the lining.
Once it’s cracked, the pot has to be completely cleaned out and relined which takes weeks. A smelter usually has hundreds of pots so this alone takes a while as the liner and anything in it are basically frozen solid and need to be broken apart and torn out. Once relined the pots must be brought back up slowly and the chemistry balanced. The pots also draw a ton of power and are wired in series so they have to all be brought up slowly together (or in batches).
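To get a feel for why the series wiring makes shutdown and restart such an all-or-nothing affair, here is a rough back-of-the-envelope sketch. All the figures (pot count, cell voltage, line current) are illustrative assumptions, not data from the comment above or any real smelter:

```python
# Back-of-the-envelope for a series-wired potline.
# Every figure below is an assumed, typical-order-of-magnitude value.
N_POTS = 300          # pots wired in series on one line (assumed)
POT_VOLTAGE = 4.2     # volts dropped across each cell (assumed)
LINE_CURRENT = 350e3  # amperes; the same current flows through every pot

# Because the pots share one current loop, the line voltage is the sum of
# the per-pot drops, and you cannot energize one pot without the others.
line_voltage = N_POTS * POT_VOLTAGE
line_power_mw = line_voltage * LINE_CURRENT / 1e6

print(f"line voltage: {line_voltage:.0f} V")   # ~1.3 kV across the line
print(f"line power:   {line_power_mw:.0f} MW")  # hundreds of MW per line
```

The series topology is the whole point: the rectifiers drive one current through the entire loop, which is why the pots have to be brought back up together or in carefully staged batches rather than one at a time.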
That assumes it was a clean shutdown with nothing else clogged up in the system. “Cleaning” in smelting means that the hardware involved needs to be replaced because it fused to molten metal while cooling down.
How much of this process is cleaning up from the previous run and how much is purely for starting up the process again? Does it make sense to clean up the system as soon as you can after shutdown, in preparation for restart, whenever that may be?
It’s one and the same. The sodium and other atoms from the molten cryolite intercalate into the carbon cathode structure and swell it by a few percent. Once in use, a cathode is held together by the steel shell and thermal equilibrium of the running pot. Once it cools the cracking is inevitable.
You also can’t fully drain a pot. You can siphon most of the aluminum and cryolite off but at those temperatures they behave like a proper liquid with surface tension and the metal wicks into the pot like solder instead of flowing with gravity.
Anything made of steel or aluminum is recyclable because they can just melt it down and easily separate the metals, but the carbon lining and anything nonmetal is basically slag afterwards. Aluminum, electrolyte, and random atoms seep in everywhere and destroy it.
The smelting process I described above is actually the more expensive process used to produce aluminum from raw bauxite. Recycling aluminum is cheaper, and a significant fraction of the world’s aluminum produced every year is from recycled feedstock (over two thirds in the US, last I checked). Same goes for steel and most other metals.
I'm sure, like any metal at an industrial scale, it is profitably recyclable. But that is beside the point. This is akin to asking: "My car's engine just threw a rod and is seized. Is it recyclable?" Hopefully you see in this analogy that the car (engine) costs way, way more than the sum of its parts (the constituent metals).
I'm not sure in this instance, but for industrial plants, the expectation is for them to run 24/7/365 without disruption. They're not designed to be turned off and then on again. When you shut something down, how do you "reset" it to a clean state so production can start again? Think about all the existing stuff still in the pipes, residual, etc.
Not sure how that impacts fertilizer demand, but it certainly screws up planting season.
The ground will be dry in a week or two, and they’re predicting the worst spring snowpack on record (after the wettest Christmas in Southern California on record).
The bitter lesson has no utility function, but it has predictive power. Decision theory, Bayesian networks and causality will see niche applications while LLMs are getting all the money. If the former live up to their promises, their proponents will keep developing tools and accruing knowledge of how to use them and which problems they are good for. That will last until LLMs hit a local maximum and can't move any further. They will try to eat even more resources to overcome it, but they'll just get more evidence that LLMs are trapped in a local maximum. Stocks will crash, the market will correct itself, and a lot of smart unemployed people will start looking for ways to get away from the local maximum.
At that moment things will become really interesting. If decision theory, Bayesianism and causality can show something that can be combined with LLMs to create something marketable, then they will have their big chance. Or maybe those smart people will devise some other way out of the local maximum.
Bayesian methods and causality have their applications, and there are tools to use them, but you can't just feed news into them and get back the most likely structure of a secret global government run by interdimensional lizard people. Maybe if you combine them with an LLM, the resulting tool will be able to perform this task?
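To be concrete about the division of labor here: given a fixed model, Bayesian updating over a stream of evidence is mechanical; it is discovering the structure itself that these tools can't do from raw news. A minimal sketch with made-up priors and likelihoods (nothing here comes from a real dataset):

```python
# Minimal sequential Bayesian updating, pure stdlib.
# The hypothesis, prior, and likelihood pairs are invented for illustration.

def update(prior, p_e_given_h, p_e_given_not_h):
    """One step of Bayes' rule: P(H|E) from P(H) and the two likelihoods."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

belief = 0.01  # assumed prior that the hypothesis is true
# Each "news item" is a likelihood pair (P(E|H), P(E|not H)).
for p_h, p_not_h in [(0.8, 0.3), (0.7, 0.4), (0.9, 0.2)]:
    belief = update(belief, p_h, p_not_h)

print(f"posterior after three observations: {belief:.3f}")
```

Each likelihood pair plays the role of one observation; the hard part alluded to above, learning which variables and causal edges exist at all, is exactly what this sketch takes as given.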
What irks me a bit at the way the Bitter Lesson is interpreted is that seemingly it didn't just throw out handcrafted model/feature generation, but also any attempt to interpret the learned models and features.
Like, in theory, this should be the absolute best time for people interested in analyzing unstructured data: Here there is this wealth of open-weight models, trained on half the internet that must have developed all kinds of absolutely insane feature detectors for all kinds of media: Programming languages, human-language prose, images, audio, video, whatever you want!
In practice, the models are mostly treated as black boxes and the weights as inscrutable. Which is why we now have the weird situation that our models are able to understand incredibly subtle and abstract semantic concepts in text - but the pre- and postprocessing is still on the level of regexes and string heuristics like 50 years ago. There doesn't seem to be any inbetween.
Not as awful as it may seem. It would be even more awful if the election cycle had no influence over the decision to wage one more war. "Democracy is the worst form of Government except for all those other forms that have been tried from time to time".
> Which also renders the entire paradox somewhat moot because there is no choice for you to be made.
Not quite. You did choose your decision-making methods at some point in your life, and you could have changed them multiple times before you arrived at the setup of Newcomb's paradox. If we look at your past life as a variable in the problem, then changing this variable changes the outcome: it changes the prediction made by the predictor.
> The existence of a flawless predictor means that you do not have a choice after the predictor made its prediction
I believe that if your definition of a choice stops working once we assume a deterministic Universe, then you need a better definition of a choice. In a deterministic Universe it becomes glaringly obvious that the whole framework of free will and choice is just an abstraction, one that abstracts away things that are not really needed to make a decision.
Moreover, I think I can hint at how to deal with it: relativity. Different observers cannot agree on whether an observed agent has free will or not. Accept that as fundamental, like relativity accepts that universal time doesn't exist, and all the logical paradoxes will go away.
> I believe that if your definition of a choice stops working once we assume a deterministic Universe, then you need a better definition of a choice. In a deterministic Universe it becomes glaringly obvious that the whole framework of free will and choice is just an abstraction, one that abstracts away things that are not really needed to make a decision.
Indeed, I think of concepts like "agency", "choice", "free will", etc. as aspects of a particular sort of scientific model. That sort of model can make good predictions about people, organisations, etc. which would be intractable to many other approaches. It can also be useful in situations that we have more sophisticated models for, e.g. treating a physical system as "wanting" to minimise its energy can give a reasonable prediction of its behaviour very quickly.
That sort of model has also been applied to systems where its predictive powers aren't very good; e.g. modelling weather, agriculture, etc. as being determined by some "will of the gods", and attempting to infer the desires of those gods based on their observed "choices".
It baffles me that some people might think a model of this sort might have any relevance at a fundamental level.
This is a compatibilist view. However, we can tell that most people don't adhere to a compatibilist view of free will, because it tends to make people very upset if you suggest they have no "genuine" free will or agency, and the moral implications behind the assumption of genuine agency are baked into everything from our welfare systems to our justice systems, which assume people have an actual choice in what they do.
For that reason I strongly disagree with the compatibilist view - language is defined by use, and most people act in ways that clearly signal a non-compatibilist view of free will.
I personally don't see any problems with this. For example, in most cases we can agree on whether someone had free will when they committed a crime. Even if I prefer the compatibilist view and you don't, we'll agree in most cases. In the cases where we might not agree, the reasons for disagreement won't stem from this fundamental difference, but from things like whether we should treat a state of affect as a state in which a human has no free will.
So a compatibilist view is not just compatible with the world we live in; it is needed to keep our world functioning. The world we live in is mostly artificially constructed. Welfare and justice systems are not "genuine", they are artificial constructs. They play a role in our society, and the ideas of "free will" and "guilt" are constructed too, tweaked to make our systems work better. If you assume that free will and guilt are "genuine" or God-given, then you can't tune them to better match their purposes. You lose agency this way, lose part of your own free will: you can't consciously and reasonably discuss whether a state of affect should be an exception to the rule "any person has free will". You'll be forced either to skip the discussion or to resort to some kind of theological argument.
But if you accept that "free will" is a social construct, then you can easily identify the affected variables: it is all about punishing people for crimes or rewarding them for their pro-social deeds. You can think about how "state of affect inhibits free will" influences all these goals; you can think about the possibility of people simulating a state of affect (or even nurturing personal traits that increase the probability of entering one) to avoid punishment. You can think rationally and logically, and pick the solution that benefits society the best. That very society with the idea of free will baked in. Or you can choose to believe "free will" is God-given, because of an irrelevant linguistic argument, and lose the ability to make our world better.
> most people act in ways that clearly signal a non-compatibilist view of free will.
Of course. We are not living in quantum mechanics; we live in a world that is constructed by people. I mean, all of this is built on top of QM, but QM laws do not manifest themselves directly for us. We have other explanatory structures to deal with everyday physics. And even physics doesn't matter that much: I flip a switch and voilà, I have light, and to heck with conservation of energy. I can talk to you even though we are on different continents; thousands of km don't matter. If I want to eat, I do not try to kill some animal, nor do I gather seeds and roots in the wild. I go to work and do something, get my salary, and buy food in a local store. We are living in an artificial world with artificial rules. Free will is part of this world. Of course we talk about it like it exists. We talk about it like it is a universal truth. The relativity I mentioned above doesn't show itself most of the time, because the world is constructed in a way where we can agree about someone having free will. Situations where this is not the case are very strange and can even be punished: manipulation (which comes close to taking people's agency away from them) is deemed immoral.
The world is constructed so that we can ignore that free will is just an illusion; moreover, it is constructed for us to think in terms of free will, so you'll have issues thinking about it in other terms. Like you'd have a lot of issues trying to calculate the aerodynamics of a plane from the equations of quantum mechanics.
A compatibilist view, to me, is usually immoral, because it seems to maintain the pretence of agency while admitting it's an illusion, and so persists in treating people as if they have agency.
People who at least genuinely believe in free will and agency have an excuse if they e.g. support punishment that is not strictly aimed at minimising harm, including to the perpetrator. A compatibilist has no excuse.
It is of course possible to hold a compatibilist view and still argue we should restructure society to treat people as if they do not have agency, but then the point of holding onto the illusion drops to near zero.
> A compatibilist view, to me, is usually immoral, because it seems to maintain the pretence of agency while admitting it's an illusion, and so persists in treating people as if they have agency.
Is thermodynamics immoral? You see, there is nothing fundamental about pressure or temperature, they are just statistical averages, they are all in imagination, it is an illusion. But we still pretend that pressure and temperature exist.
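The "statistical average" point is easy to make concrete in code. Here is a toy sketch (simplified units, a 1-D Gaussian velocity distribution standing in for the Maxwell-Boltzmann case): no single particle has a temperature, only the ensemble average does.

```python
# Temperature as a statistical average: a toy 1-D ideal-gas sketch.
# Units and constants are deliberately simplified; this only illustrates
# that the macroscopic number is a mean over many microscopic states.
import random

random.seed(0)  # deterministic for reproducibility
N = 100_000

# Draw particle velocities from a unit Gaussian (so E[v^2] = 1).
velocities = [random.gauss(0.0, 1.0) for _ in range(N)]

# The "temperature-like" quantity is the mean kinetic energy per particle,
# which should come out near 0.5 in these units.
mean_kinetic = sum(0.5 * v * v for v in velocities) / N
print(f"mean kinetic energy per particle: {mean_kinetic:.3f}")
```

Any individual `velocities[i]` is just a number; asking for its temperature is a category error, yet the average over the ensemble is a perfectly real, measurable, predictive quantity.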
Or what about biological species? If you look into it, you'll see that there is no clear way to define what a species is; all the definitions are imperfect projections of our high-level illusions onto the underlying biology and biochemistry. But we (and biologists too, who are much more aware of the issues) still pretend that species exist. Are biologists immoral?
Nothing wrong with it. Nothing immoral; it is just a regular mental tool. You see, the question is what it means for a thing to exist. Some things are easy: there is a car, we can see it, we can touch it, we can drive it, therefore we agree that the car exists. But some things are not so easy, especially when we talk about immaterial things. And to make things even more interesting, some things seem to exist on some levels and not on others. Like life, for example. There is no life in an atom of carbon or hydrogen or nitrogen, but a bunch of such atoms connected just right can be alive. And that normally doesn't make people jumpy. At the same time, some people have issues with the idea that free will exists on some levels but not others.
> People who at least genuinely believe in free will and agency have an excuse if they e.g. support punishment that is not strictly aimed at minimising harm, including to the perpetrator. A compatibilist has no excuse.
Yea. I don't believe in free will and agency "genuinely", so I have no excuse. But I believe that any such excuses are borderline immoral. If people allow their emotions and animal instincts to take over and make them act against the greater good of society, that is immoral. I mean, if they do it for their own gain, it may not be immoral; there is a tradeoff between the interests of a society and the interests of an individual, and sometimes we should prefer the former and sometimes the latter. So going against society's interests is not inherently bad. But doing it because of uncontrollable emotions and animal instincts is bad. It still counts as an excuse, but I'm not sure that is a good thing. I should believe it is a good thing, because I don't know how to test it experimentally without risking harming people even more. But still, while I can accept excuses from others, I don't accept such excuses from myself. I just don't let my emotions drive without any oversight from me (whatever this "me" is: that is one more interesting question without any good answers).
The point is: my "non-genuine" belief in free will makes me much more free-willed than a genuine belief would. If I succumbed to my emotions and didn't control myself for three seconds, I'd see it as my personal failure. In my head I'm in control, not someone or something else.
> It is of course possible to hold a compatibilist view and still argue we should restructure society to treat people as if they do not have agency
No point in it. It is like arguing "let's forget thermodynamics and resort to pure QM because it is closer to the fundamental laws of the Universe". We need the idea of free will, even if it doesn't hold at the fundamental level.
> a word doc editor of some sort (Word, Gdocs, LibreOffice, etc); then enable Review Changes and annotate your post for 10 minutes; then, review and accept your changes individually and re-read what you’ve written.
Pffff... I'm not going to install LibreOffice for that, or figure out how to make Gdocs work with uBlock.
There is a much easier way. Open an LLM chat, type "Proofread please for grammar, keep the wording and the tone as it is, if it doesn't mess with grammar. Explain yourself." and then paste your text. I don't really know what the tools you mentioned do, but any "free" LLM on the Internet will point out things like missing articles or messed-up tenses in complex sentences.
You recommend choosing self-improvement, but I just don't believe I can figure out how to use articles. With tenses I think I could learn, but I'm not going to. I remember there is some obscure rule for choosing the right tenses, but I was never able to remember the rule itself. I'm bad with rules; that's the reason I chose math as my major. There are almost no rules in math: you make your own rules. The grammars of languages are not like that; they have rules which can't be easily inferred, you need to remember them. Grammars have exceptions to rules, and exceptions to exceptions, and in any case they are not really rules but more like guidelines, because people normally don't think about rules when they are talking or writing.
There's no way I'm starting to learn the rules now; I'd rather continue to rely on my skills. But LLMs can help me see when my skills fail me.
> It’s your right if you desire to join the Collective and be a uniform lego brick like the others, but then your no-longer-fully-human posts are no longer welcome at HN.
I believe you (like most fervent supporters of the rule here) have gone too far into philosophy with this, too far from reality and practice. You can't detect AI in my messages, because they are mine. Even when I ask an LLM to find words for me, it is me who picks one of the proposed alternatives, but mostly I manage without wording changes. I transfer the LLM's edits by hand, editing the source message, so nothing can slip into the final result unnoticed. If I took the effort to ask an LLM to proofread, it means I care about the result more than usual, so I'm investing more effort into it, not less.
An AI may be able to teach you basic grammar but it’s not going to teach you to develop your voice. By design and content training set, an AI today can only pressure you towards the mean of whatever criteria you specify, not away from it. Developing your voice by doing your own proofreading pressures you away from the mean, by helping you double down on what you value most and by choosing which grammatical rules to disregard and when disregarding them is more in-tone for yourself than adherence. I can’t stop you and I won’t remember your handle after an hour has passed (being nameblind is interesting online), so you’ll probably go unnoticed by me, sure. But I still won’t equate regressing to the AI mean with personal growth away from the average masses.
> An AI may be able to teach you basic grammar but it’s not going to teach you to develop your voice.
Well, no one can help you develop your voice. If it is your voice, then it has to be your own creation. I think we are in agreement here.
> Developing your voice by doing your own proofreading pressures you away from the mean, by helping you double down on what you value most and by choosing which grammatical rules to disregard and when disregarding them is more in-tone for yourself than adherence.
Oh... If I wanted to become a professional writer, then I'd agree with you. Maybe...
You see, I don't use an LLM to fix my writing in Russian, because with Russian I'm totally in control of my grammar: I know when I deviate from it, and if I do, I do it consciously. But with English I don't know. Sometimes I can see that I don't know how to follow English grammar in some particular case, and sometimes I don't even notice that I don't know.
So, returning to your argument: if I wanted to become a famous English writer, I think I'd choose to write a lot and discuss my writing with an LLM, and I'd do it for hundreds of hours. LLMs are unbelievably useful for digging into language nuances. Before LLMs I had urbandictionary, but it could only help with specific phrases, not with choosing between "I took the effort to ask an LLM" and "I took the effort of asking an LLM". I wouldn't have a clue that there is any semantic difference. But an LLM can point it out, explain the difference, and give me more examples of it. Or it can point out that "you recommend to choose" is not good, because of something-something I don't remember what, but it boils down to "you just have to remember that the right way to use the verb 'recommend' is 'recommend choosing'". I don't see the difference; I can't choose to disregard it, because I have no opinion on whether it is good or bad.
If I wanted to become an English writer, I'd spend hundreds of hours with an LLM, just to gain the ability to see as many of those differences as possible, to get an idea of what I value most and which grammatical rules I like to disregard. But even after that, I think I'd continue to use an LLM. It can provide unexpected takes on what you feed into it. ... Hmm... I should try it with Russian. In Russian I can pick a style for my writing and follow it (in English I can't control the style consciously); I can (and sometimes do) turn grammar inside-out, make it alien, readable for a native speaker, but readable in weird ways (a bit like the letters written by Terry Pratchett characters like Granny Weatherwax or Carrot)... I wonder if I can employ an LLM to make it even more weird.
> I still won’t equate regressing to the AI mean with personal growth away from the average masses.
Obviously I can't judge in which direction LLMs are changing my English, so I can't even give you anecdotal counter-evidence to your statements about regression to the AI mean, but I'm still sure that I'm not regressing to the mean. You see, I pick when to follow the LLM's advice and when not to. I'm choosing what to change. The regression to the mean you are talking about happens in a high-dimensional space: you can regress on some dimensions and continue to deviate from the mean on others as much as you like. I don't like to deviate on the grammar dimensions (at least without knowing about my deviations). I was born into the family of a teacher and an engineer, who were all for being educated, and familiarity with grammar was an important part of that; and I was born in the USSR, where proper grammar was enforced in all media to an extent that made me laugh and rebel against grammar (after all the decades passed, lol). But I can't allow myself to just ignore grammar; I was taught to use it properly. So I decided to use an LLM. I'm too lazy to do it each time, or even every second time, but still I use it and learn from it.
The prospect of regressing to the mean by using an LLM seems very unlikely to me. I don't regress under all the propaganda around me, when regressing is really the safest thing to do, so a mere LLM stands no chance of achieving it.
IIRC there was some psychological paper claiming that periods at the end of chat messages carry some emotional signal. I don't remember what it was exactly. I suppose your manager was too much into shitty psychological research.
> I wouldn’t admit to this level of frankly incompetence.
Well yeah. It reminds me of how I wrote an addon for WoW while having no clue how to write GUI code, learning Lua and the Blizzard API as I went, and having no tools except a text editor. It took 3-4 sharp ideological shifts until I got around to reading about the Elm architecture and refactored all the code into it, while using addons that help with debugging issues, using a scaffold to create throwaway addons for testing the details of how WoW API functions/objects work, using the Ace library for messages and some other things, and using another addon of mine to track events, to learn when and which events WoW fires... Near the end I was a pretty competent addon developer, but for most of the way there I was just trying a lot of things to see what worked.
> Why would you be so loud and proud about all this.
Oh, I also like to tell the story of how it went. When I finally got it to work on a clean Elm architecture, with a clear separation of state, view and update, I was proud, obviously, but even before that I was proud, thanks to Dunning-Kruger. My code was way better than the original addon, and it became better and better with each sharp turn. It is funny in hindsight.