
The article seems to evaluate Wasm as if it were a framework upon which apps are built. It's not that; it's an orthogonal technology allowing CPU optimisations and reuse of native code in the browser. Against that expectation, it has been a huge success, despite not yet reaching bare-metal levels of performance and energy efficiency.

One such example: audio time stretch in the browser, based upon a C++ library [1]. If this were implemented in JS, there is no way it could deliver (a) similar performance or (b) source-code portability to native apps.

[1] https://bungee.parabolaresearch.com/change-audio-speed-pitch


>despite not yet reaching bare-metal levels of performance and energy efficiency.

"Not yet"? It will never reach "bare-metal levels of performance and energy efficiency".


FWIW the native and WASM versions of my home computer emulators are within about 5% of each other (on an ARM Mac), e.g. more or less 'measuring noise':

https://floooh.github.io/tiny8bit/

You can squeeze out a bit more by building with -march=native, but then there's no reason that a WASM engine couldn't do the same.


SIMD and multithreading support really helped with closing the performance gap.

Still surprised about the 5%, though; I've generally seen quite a bit more of a gap.


Maybe the emulator code is particularly WASM friendly ... it's mostly bit twiddling on 64-bit integers with very little regular integer math (except incrementing counters) and relatively few memory load/stores.


I'd have to take a contrary view on that. It'll take some time for the technologies to be developed, but ultimately managed JIT compilation has the potential to exceed native compiled speeds. It'll be a fun journey getting there though.

The initial order-of-magnitude jump in perf that JITs provided took us from the 5x-2x overhead for managed runtimes down to some (1 + delta)x. That was driven by runtime type inference combined with a type-aware JIT compiler.

I expect that there's another significant, but smaller perf jump that we haven't really plumbed out - mostly to be gained from dynamic _value_ inference that's sensitive to _transient_ meta-stability in values flowing through the program.

Basically you can gather actual values flowing through code at runtime, look for patterns, and then inline / type-specialize those by deriving runtime types that are _tighter_ than the annotated types.
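A toy sketch of what I mean, in plain JavaScript (all names made up for illustration; `new Function` stands in for a JIT emitting specialized code):

```javascript
// Toy value-profiling sketch: watch a "hot" argument, and once one
// value dominates, install a guarded fast path specialized for it.
function makeProfiledScale() {
  const counts = new Map();
  let calls = 0;
  let fast = null;
  let fastFactor;
  const generic = (x, factor) => x * factor;
  return function scale(x, factor) {
    // Guard: the fast path is only valid for the specialized value.
    if (fast && factor === fastFactor) return fast(x);
    counts.set(factor, (counts.get(factor) || 0) + 1);
    if (++calls === 100) {
      for (const [value, n] of counts) {
        if (n / calls >= 0.9) {
          // 90%+ of calls used this value: constant-fold it in.
          fastFactor = value;
          fast = new Function("x", `return x * ${value};`);
        }
      }
    }
    return generic(x, factor);
  };
}
```

The interesting part is the guard plus fallback: a value miss doesn't break correctness, it just takes the slow generic path.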

I think there's a reasonable amount of juice left in combining those techniques with partial specialization and JIT compilation, and that should get us over the hump from "slightly slower than native" to "slightly faster than native".

I get it's an outlier viewpoint though. Whenever I hear "managed jitcode will never be as fast as native", I interpret that as a friendly bet :)


> JIT compilation has the potential to exceed native compiled speeds

The battlecry of Java developers riding their tortoises.

Don’t we have decades of real-world experience showing native code almost always performs better?

For most things it doesn’t matter, but it always rubs me the wrong way when people mention this about JIT since it almost never works that way in the real world (you can look at web framework benchmarks as an easy example)


It's not that surprising to people who are old enough to have lived through the "reality" of "interpreted languages will never be faster than about 2x compiled languages".

The idea that an absurdly dynamic language like JS, where all objects are arbitrary property bags with prototype chains that are mutable at runtime, would execute at under 2x the cost of raw native performance was simply a matter-of-fact impossibility.

Until it wasn't. And the technology reason it ended up happening was research that was done in the 80s.

It's not surprising to me that it hasn't happened yet. This stuff is not easy to engineer and implement. Even the research isn't really there yet. Most of the modern dynamic-language JIT ideas that came to the fore in the mid-2000s were directly adapted from research work on Self from about two decades prior.

Dynamic runtime optimization isn't too hot in research right now, and it never was to be honest. Most of the language theory folks tend to lean more in the type theory direction.

The industry attention too has shifted away. Browsers were cutting edge a while back and there was a lot of investment in core research tech associated with that, but that's shifting more to the AI space now.

Overall the market value prop and the landscape for it just doesn't quite exist yet. Hard things are hard.


You nailed it -- the tech enabling JS to match native speed was Self research from the 80s, adapted two decades later. Let me fill in some specifics from people whose papers I highly recommend, and who I've asked questions of and had interesting discussions with!

Vanessa Freudenberg [1], Craig Latta [2], Dave Ungar [3], Dan Ingalls, and Alan Kay had some great historical and fresh insights. Vanessa passed recently -- here's a thread where we discussed these exact issues:

https://news.ycombinator.com/item?id=40917424

Vanessa had this exactly right. I asked her what she thought of using WASM with its new GC support for her SqueakJS [1] Smalltalk VM.

Everyone keeps asking why we don't just target WebAssembly instead of JavaScript. Vanessa's answer -- backed by real systems, not thought experiments -- was: why would you throw away the best dynamic runtime ever built?

To understand why, you need to know where V8 came from -- and it's not where JavaScript came from.

David Ungar and Randall B. Smith created Self [3] in 1986. Self was radical, but the radicalism was in service of simplicity: no classes, just objects with slots. Objects delegate to parent objects -- multiple parents, dynamically added and removed at runtime. That's it.

The Self team -- Ungar, Craig Chambers, Urs Hoelzle, Lars Bak -- invented most of what makes dynamic languages fast: maps (hidden classes), polymorphic inline caches, adaptive optimization, dynamic deoptimization [4], on-stack replacement. Hoelzle's 1992 deoptimization paper blew my mind -- they delivered simplicity AND performance AND debugging.

That team built Strongtalk [5] (high-performance Smalltalk), got acquired by Sun and built HotSpot (why Java got fast), then Lars Bak went to Google and built V8 [6] (why JavaScript got fast). Same playbook: hidden classes, inline caching, tiered compilation. Self's legacy is inside every browser engine.

Brendan Eich claims JavaScript was inspired by Self. This is an exaggeration based on a deep misunderstanding that borders on insult. The whole point of Self was simplicity -- objects with slots, multiple parents, dynamic delegation, everything just another object.

JavaScript took "prototypes" and made them harder than classes: __proto__ vs .prototype (two different things that sound the same), constructor functions you must call with "new" (forget it and "this" binds wrong -- silent corruption), only one constructor per prototype, single inheritance only. And of course == -- type coercion so broken you need a separate === operator to get actual equality. Brendan has a pattern of not understanding equality.
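If you haven't hit these footguns yourself, here's a small illustration (the results in the comments are what a sloppy-mode JS engine gives you):

```javascript
// The footguns above, in miniature.
function Point(x, y) { this.x = x; this.y = y; }

// __proto__ vs .prototype: Point.prototype is what instances inherit
// from; Point's own prototype is Function.prototype. Different objects:
//   Point.prototype !== Object.getPrototypeOf(Point)

const p = new Point(1, 2);   // with `new`, `this` is a fresh object
// const q = Point(1, 2);    // without `new` (sloppy mode), `this` is
//                           // the global object and q is undefined:
//                           // silent corruption, no error thrown

// == coerces; === doesn't. Coercion isn't even transitive:
//   ""  == 0    -> true
//   "0" == 0    -> true
//   ""  == "0"  -> false
//   ""  === 0   -> false
```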

The ES6 "class" syntax was basically an admission that the prototype model was too confusing for anyone to use correctly. They bolted classes back on top -- but it's just syntax sugar over the same broken constructor/prototype mess underneath. Twenty years to arrive back at what Smalltalk had in 1980, except worse.

Self's simplicity was the point. JavaScript's prototype system is more complicated than classes, not less. It's prototype theater. The engines are brilliant -- Self's legacy. The language design fumbled the thing it claimed to borrow.

Vanessa Freudenberg worked for over two decades on live, self-supporting systems [9]. She contributed to Squeak EToys, Scratch, and Lively. She was co-founder of Croquet Corp and principal engineer of the Teatime client/server architecture that makes Croquet's replicated computation work. She brought Alan Kay's vision of computing into browsers and multiplayer worlds.

SqueakJS [7] was her masterpiece -- a bit-compatible Squeak/Smalltalk VM written entirely in JavaScript. Not a port, not a subset -- the real thing, running in your browser, with the image, the debugger, the inspector, live all the way down. It received the Dynamic Languages Symposium Most Notable Paper Award in 2024, ten years after publication [1].

The genius of her approach was the garbage collection integration. It amazed me how she pulled a rabbit out of a hat -- representing Squeak objects as plain JavaScript objects and cooperating with the host GC instead of fighting it. Most VM implementations end up with two garbage collectors in a knife fight over the heap. She made them cooperate through a hybrid scheme that allowed Squeak object enumeration without a dedicated object table. No dueling collectors. Just leverage the machinery you've already paid for.

But it wasn't just technical cleverness -- it was philosophy. She wrote:

"I just love coding and debugging in a dynamic high-level language. The only thing we could potentially gain from WASM is speed, but we would lose a lot in readability, flexibility, and to be honest, fun."

"I'd much rather make the SqueakJS JIT produce code that the JavaScript JIT can optimize well. That would potentially give us more speed than even WASM."

Her guiding principle: do as little as necessary to leverage the enormous engineering achievements in modern JS runtimes [8]. Structure your generated code so the host JIT can optimize it. Don't fight the platform -- ride it.

She was clear-eyed about WASM: yes, it helps for tight inner loops like BitBlt. But for the VM as a whole? You gain some speed and lose readability, flexibility, debuggability, and joy. Bad trade.

This wasn't conservatism. It was confidence.

Vanessa understood that JS-the-engine isn't the enemy -- it's the substrate. Work with it instead of against it, and you can go faster than "native" while keeping the system alive and humane. Keep the debugger working. Keep the image snapshotable. Keep programming joyful. Vanessa knew that, and proved it!

[1] Freudenberg et al. SqueakJS paper (DLS 2014, Most Notable Paper Award 2024). https://freudenbergs.de/vanessa/publications/Freudenberg-201...

[2] Craig Latta, Caffeine. Smalltalk livecoding in the browser. https://thiscontext.com/

[3] Self programming language. Prototype-based OO with multiple inheritance. https://selflanguage.org/

[4] Hoelzle, Chambers & Ungar. Debugging Optimized Code with Dynamic Deoptimization (1992). https://bibliography.selflanguage.org/dynamic-deoptimization...

[5] Strongtalk. High-performance Smalltalk with optional types. http://strongtalk.org/

[6] Lars Bak. Architect of Self VM, Strongtalk, HotSpot, V8. https://en.wikipedia.org/wiki/Lars_Bak_(computer_programmer)

[7] SqueakJS. Bit-compatible Squeak/Smalltalk VM in pure JavaScript. https://squeak.js.org/

[8] SqueakJS JIT design notes. Leveraging the host JS JIT. https://squeak.js.org/docs/jit.md.html

[9] Vanessa Freudenberg. Profile and contributions. https://conf.researchr.org/profile/vanessafreudenberg


Only if it doesn't make use of dynamic linking or reflection, and is written to take advantage of value types.

AOT compilers without PGO data tend to perform worse when those conditions aren't met.

Which is why the best of both worlds is using JIT caches that survive execution runs.


Yeah I've heard this my whole career, and while it sounds great it's been long enough that we'd be able to list some major examples by now.

What are the real world chances that a) one's compiled code benefits strongly from runtime data flow analysis AND b) no one did that analysis at the compilation stage?

Some sort of crazy off label use is the only situation I think qualifies and that's not enough.


Compiled Lua vs LuaJIT is a major example imho, but maybe it's not especially pertinent given the looseness of the Lua language. I do think it demonstrates that the concept that it is possible to have a tighter type-system at runtime than at compile time (that can in turn result in real performant benefits) is a sound concept, however.


The major Javascript engines already have the concept of a type system that applies at runtime. Their JITs will learn the 'shapes' of objects that commonly go through hot-path functions and will JIT against those with appropriate bailout paths to slower dynamic implementations in case a value with an unexpected 'shape' ends up being used instead.

There's a lot of lore you pick up with Javascript when you start getting into serious optimization with it; and one of the first things you learn in that area is to avoid changing the shapes of your objects because it invalidates JIT assumptions and results in your code running slower -- even though it's 100% valid Javascript.
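A sketch of what that lore looks like in practice (illustrative only; the actual shape transitions are engine internals):

```javascript
// Two ways to build the same logical object. The first gives every
// object one stable shape; the second transitions through several
// shapes, making hot call sites polymorphic and defeating the
// inline caches described above.

function makeStable(x, y) {
  return { x, y };               // every object: shape {x, y}
}

function makeShifting(x, y) {
  const o = {};                  // shape {}
  o.x = x;                       // transition: {x}
  o.y = y;                       // transition: {x, y}
  if (x > y) o.swapped = true;   // some objects get a third shape
  return o;
}

// A hot loop like this stays monomorphic with makeStable, but sees a
// mix of shapes (and slower property loads) with makeShifting.
function sum(points) {
  let s = 0;
  for (const p of points) s += p.x + p.y;
  return s;
}
```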


Totally agree on JS, but it doesn't offer the same easy same-language comparison that you get from compiled Lua vs LuaJIT. I suppose you could pre-compile JavaScript to a binary with e.g. QuickJS, but I don't think that's as apples-to-apples a comparison as compiled Lua vs LuaJIT.


Any optimizations discovered at runtime by a JIT can also be applied to precompiled code. The precompiled code is then not spending runtime cycles looking for patterns, or only doing so in the minimally necessary way. So for projects which are maximally sensitive to performance, native will always be capable of outperforming JIT.

It's then just a matter of how your team values runtime performance vs other considerations such as workflow, binary portability, etc. Virtually all projects have an acceptable range of these competing values, which is where JIT shines, in giving you almost all of the performance with much better dev economics.


I think you can capture that constraint as "anything that requires finely deterministic high performance is out of reach of JIT-compiled outputs".

Obviously JITting means you'll have a compiler executing sometimes along with the program which implies a runtime by construction, and some notion of warmup to get to a steady state.

Where I think there's probably untapped opportunity is in identifying these meta-stable situations in program execution. My expectation is that there are execution "modes" that cluster together more finely than static typing would allow you to infer. This would apply to runtimes like wasm too - where the modes of execution would be characterized by the actual clusters of numeric values flowing to different code locations and influencing different code-paths to pick different control flows.

You're right that, on the balance of things, trying to, say, allocate registers at runtime will necessarily allow for less optimization scope than doing it ahead of time.

But, if you can be clever enough to identify, at runtime, preferred code-paths with higher resolution than what (generic) PGO allows (because now you can respond to temporal changes in those code-path profiles), then you can actually eliminate entire codepaths from the compiler's consideration. That tends to greatly affect the register pressure (for the better).

It might be interesting just to profile some wasm executions of common programs to see if there are transient clusterings of control-flow paths that manifest during execution. It'd be a fun exercise...


Why? My only guess is that the instructions don't match x86 instructions well (way too few Wasm instructions) and the runtime doesn't have enough time to compile them to x86 instructions as well as, say, GCC could.


To be fair, x86 instructions don't match internal x86 processor architecture either.


How don't they? Most x86 instructions map to just one or two uops as you can see at https://uops.info


Yes there is, WebGPU compute shaders, or misusing WebGL fragment shaders.




Why are so many parents completely OK with abdicating responsibility like this? Dealing with peer pressure and adolescence is something every parent needs to figure out how to do, sans governmental intervention.


In many European countries, parents have already handed the majority of their child raising duties to the state.

If your kid is in state-subsidized daycare from 8am-5pm every day and bedtime is at 7:30pm, your child is effectively interfacing with, and being raised by, a life set up by the state for bulk processing far more than any bespoke one set up by you as a parent.

When you've already distanced your involvement so much, it's only natural that these same people would want to take additional parenting duties off their plate so they can further lead a fully automated life devoid of responsibility or anything uncomfortable, like setting limits for your child. Because who wants to take care of their children or their parents? Yuck. Let the immigrant brown people do it while we simultaneously scapegoat them for our own economic failings.


We absolutely do need regulation of this harm by the law. It's how we stand together as a society, otherwise one child's rules will seem draconian against their friend's lax parents. There's plenty of precedent in other threshold ages at which children can start indulging in other potentially harmful vices.

The vulnerable elder population is more difficult to define by a simple age threshold. We all decline at different ages and different rates.


> There's plenty of precedent in other threshold ages at which children can start indulging in other potentially harmful vices.

Yeah but, there's no precedent for regulating something that parents are opting into (by buying their kids devices and then turning them loose with no oversight).

We should be punishing liquor stores when a parent willingly buys their child alcohol, then?


I disagree

> one child's rules will seem draconian against their friend's lax parents.

So what is wrong with that? Parenting is not equal among all parents in the UK, so why should only this aspect be normalized?

> The vulnerable elder population is more difficult to define by a simple age threshold. We all decline at different ages and different rates.

This is a hypocritical statement. For children, we are more than willing to normalize and enforce whatever rules we adults want, because we assume all children grow up at the same age and the same rate; but when it comes to policing adults, the line is gray and more difficult because everyone is different.


"Can I learn to drive?" "No, you're not old enough" "But my friend is already driving and he's 12" "OK, when you turn 12 you can too."


The parent in your conversation is just stupid and no matter how many laws we pass we cannot fix stupidity.

In that case, the only thing I can suggest is to pass a law to assess the eligibility and maturity of people who want to have children, and issue a permit only to those suitable to have and raise children; otherwise they cannot have children.


I'm sorry but the parent in this conversation is just stupid.

As they say, you can't fix stupid.


> one child's rules will seem draconian against their friend's lax parents

That's how it has been for most everything. Someone else's parents let their kids watch TV on a school night, or stay up past 10pm, or set a curfew of 1am instead of midnight, or let them drink soda at the dinner table. The response from my parents to me, and from me to my kids, has always been to point out that families are different, they have different rules, and that in this house we do X.


> We absolutely do need regulation of this harm by the law

> There's plenty of precedent in other threshold ages at which children can start indulging in other potentially harmful vices.

In those other vices, we have various other regulations in order to reduce their harm as much as possible. Yet, there has been no similar push for the purported harm done by social media - or, apparently, the Internet in general. It's like we've tried nothing and are surprised it's still an issue.


> one child's rules will seem draconian against their friend's lax parent.

And that would be a great opportunity to teach that child that those measures exist for a reason.

The government is and must always be a subsidiary actor.

Not every risk must be addressed, otherwise zebra crossings would not exist, or driving would be prohibited.


Driving is prohibited until a certain age. Parents don't get to discuss this with their child and decide when they're old enough.


Complete digression I know...

Driving on public roads is prohibited until a certain age.

That age is 17 here in the UK but me and many of my friends growing up in a rural area learned to drive from the age of 14 or 15 on private land. Our parents would take us there/back, provide the car and be our "instructor". Some friends who lived on farms had cars/trucks/etc of their own that they could use to drive around and their parents were fine to let us try too. But we knew that we were never allowed on public roads.

By the time we all got to 17 we applied for our tests and had a few lessons with a real instructor on real public roads. We still had to learn all of the rules/etiquette/etc but most of us were completely happy with the physical aspects of controlling the vehicle, and that saved us a huge amount of time.

My kid is 15 and if a suitable opportunity arises I'll let them have a go behind the wheel (not illegally obviously). Unfortunately I live in a city not a rural area, and don't own a car, so there hasn't been the chance yet.

(In the UK land like a supermarket car park is still considered as public roads despite being privately owned. Generally anywhere where the public can access it easily is not considered "private" in terms of the Road Traffic Act.)


If we need regulation of "this harm", then what we need to be regulating is the social media networks, not the children (and adults!) that use them.

We need to be banning algorithmic feeds. We need to be banning promotion of hateful content. We need to be banning moderation that is biased against marginalized groups, or against criticism of the platform.

If they weren't being subjected to feeds specifically designed to create maximum "engagement" with fear, hate, and self-doubt, most young people using social media would be interacting in similar ways to how they interact offline. Perhaps there would be a little less inhibition due to the feeling of anonymity, but overall, anything harmful they might be doing or saying to each other on there is very similar to what they would be doing or saying to each other in person, regardless of what social media you let them access.


You can't use spaces to align because you can't assume a monospaced font will always be used. You can't use tabs either for that matter. If you need structure, use the language's punctuation and line breaks.


I can and do assume a monospaced font when using spaces to align code. Folks using variable width fonts will get what they deserve.


Really please stop aligning with spaces!


You can use tabs, that's exactly their role, but only in theory, since in practice elastic tab stops, which would make tabs work with proportional fonts, aren't implemented anywhere in code editors, only in word processors
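For the curious, the core idea fits in a few lines, at least in a monospace approximation (a real implementation would measure pixel widths and recompute per block of consecutive lines; this sketch only shows the idea):

```javascript
// Monospace approximation of elastic tab stops: each tab column grows
// to fit the widest cell in it (plus a 2-char gap), so edits in one
// cell re-align the whole column instead of breaking it.
function elasticAlign(text) {
  const rows = text.split("\n").map((line) => line.split("\t"));
  const widths = [];
  for (const cells of rows)
    for (let c = 0; c < cells.length - 1; c++)  // last cell: no padding
      widths[c] = Math.max(widths[c] || 0, cells[c].length);
  return rows
    .map((cells) =>
      cells
        .map((cell, c) => (c < cells.length - 1 ? cell.padEnd(widths[c] + 2) : cell))
        .join(""))
    .join("\n");
}
```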


no, i can't. i want to be able to align on arbitrary columns, not on tab stops. and when i add or remove character that break the alignment, tabs mess everything up. spaces don't.


No, you can't do that with spaces because spaces have a fixed width, which can't always be aligned to a variable width column. Only in the primitive environment of fixed width fonts does this work, but even there the tabstops can also be placed at an arbitrary column position, check Word out


anyone writing code with variable width font deserves the hell they put themselves in.


Sticking to historical limitations can be kind of silly. I used to have a computer with 40-column text display, and I don't feel any need to limit myself to that anymore.


i agree with you in principle, but i am not sure that this is simply a technical limitation. it feels to me more than just a preference. i can't explain it, but looking at the apple systems font example in this article: https://storck.io/posts/proportional-programming-code-fonts/ i find it much harder to parse (visually scanning the structure) than the monospace one below. perhaps it is simply what i am used to. but i feel like i am having a more violent reaction than i should have if mere preference and habit were the issue.

perhaps the issue is the specific choice of fonts. the author in that article exclaims that if you allow proportional fonts then any font can be a programming font, while the author of the following post claims otherwise and set out to make their own proportional font that is suitable: https://timgord.com/2024-01/lisnoti-a-proportional-font-that... . there are other such projects like https://input.djr.com/info/ and https://go.dev/blog/go-fonts

one issue is that of distinguishing similar characters, which the above projects try to address while some people claim it's not an issue at all: https://alexkutas.com/posts/non-monospace-font

there is also the issue of alignment mentioned in the first article, that could possibly be addressed by limiting character widths to multiples of the smallest width.

but there is still the issue of interacting with other programmers, mentioned in this article: https://nelsonslog.wordpress.com/2021/09/12/proportional-fon... we would all have to agree on the same font if there is any kind of alignment needed apart from indentation. right now we can say that you can choose any monospace font, and things will look as intended, but with a proportional font things will look different depending on the font choice. whether that is an issue or not needs further study i think.

more discussion of pros and cons can be found here: https://stackoverflow.com/questions/218623/why-use-monospace...


I definitely do not enjoy the font choice in your first link. The Go font has been pretty good for me. My editor is configured to use "Go" (not "Go Mono"), I don't even notice when I switch from editing prose to editing code.

"Distinguishing similar characters" is just as much of a concern for me when I'm editing technical documentation as when I'm editing source code. Once again, the Go font has served me well. Previously I used some other humanist-style font from the Ubuntu people that had that classic IBM slash-through zero. Some fonts have overly-aggressive ligatures and such, but those fonts should not be used in any technical context, source code is not special.

Alignment is a non-issue because wanting alignment is bad. There, solved that ;-) Arbitrary diff noise over multiple lines because you added one line at the end is a problem, not a goal!

I interact with other programmers just fine. Your linked article seems to be re-engaging in the tabs-vs-spaces flamewar in an era of gofmt rustfmt et al. It's a total non-issue.

Give it two weeks.


Can it handle "nodes" that emit a different number of audio samples than they consume?

I'm thinking of time stretch effects like mine https://github.com/bungee-audio-stretch/bungee


It's basically just a wrapper around WebAudio, I've generally just used the builtin nodes, but I think you could do sample-level processing with this? https://developer.mozilla.org/en-US/docs/Web/API/AudioWorkle...
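A rough sketch of how a worklet could handle a node that produces a different number of samples than it consumes: a FIFO decouples the two rates, with a toy 2x "stretcher" standing in for a real algorithm (all names here are made up):

```javascript
// Sketch of the buffering an AudioWorkletProcessor's process() would
// wrap when output and input sample counts differ per callback.

class SampleFifo {
  constructor() { this.chunks = []; this.length = 0; }
  push(samples) { this.chunks.push(samples); this.length += samples.length; }
  pull(n) {
    if (this.length < n) return null;  // underrun: wait for more input
    const out = new Float32Array(n);
    let filled = 0;
    while (filled < n) {
      const head = this.chunks[0];
      const take = Math.min(head.length, n - filled);
      out.set(head.subarray(0, take), filled);
      if (take === head.length) this.chunks.shift();
      else this.chunks[0] = head.subarray(take);
      filled += take;
    }
    this.length -= n;
    return out;
  }
}

// Toy half-speed stretcher: every input sample is emitted twice.
function stretch2x(input) {
  const out = new Float32Array(input.length * 2);
  for (let i = 0; i < input.length; i++) {
    out[2 * i] = input[i];
    out[2 * i + 1] = input[i];
  }
  return out;
}

// Per callback: consume one 128-sample render quantum, queue the
// stretched samples, and emit 128 whenever enough are buffered.
function processBlock(fifo, inputBlock) {
  fifo.push(stretch2x(inputBlock));
  return fifo.pull(128);
}
```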

love the demo https://bungee.parabolaresearch.com/change-audio-speed-pitch

have you thought about wrapping it as an audio unit or vst via juce/clap/iplug so it's usable in a daw?

https://juce.com/ https://cleveraudio.org/developers-getting-started/ https://github.com/iPlug2/iPlug2


Nice analysis, for 1x playback speed. If you're playing back at a different speed, for example, for music practice, YouTube audio is awful.

Why doesn't this huge AV platform use a better audio time stretch algorithm?


I think time stretching is done natively by your browser, not by Youtube at all. I use https://github.com/igrigorik/videospeed on sites... it allows any media to be stretched. Did you try another browser?


That's no excuse for YouTube because (a) audio processing can be done in JS/WASM and (b) they have the influence to improve browser playbackRate implementations to something better [1].

Besides, their Android and iOS apps slow music down as badly, if not worse, than the web version.

[1] https://bungee.parabolaresearch.com/compare-audio-stretch-te...
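For a sense of what the simplest technique in that comparison does, here's a toy overlap-add (OLA) stretch; real algorithms like WSOLA or a phase vocoder add grain alignment and phase handling on top to reduce the artifacts this naive version has:

```javascript
// Toy overlap-add (OLA) time stretch: Hann-windowed grains are read
// from the input at one hop size and written to the output at
// another, then normalized by the summed window. ratio > 1 slows
// playback down.
function olaStretch(input, ratio, frame = 512) {
  const hopOut = frame / 2;                  // 50% output overlap
  const hopIn = Math.round(hopOut / ratio);  // read hop sets the speed
  const frames = Math.floor((input.length - frame) / hopIn) + 1;
  const out = new Float32Array((frames - 1) * hopOut + frame);
  const norm = new Float32Array(out.length);
  for (let f = 0; f < frames; f++) {
    for (let i = 0; i < frame; i++) {
      const w = 0.5 - 0.5 * Math.cos((2 * Math.PI * i) / frame); // Hann
      out[f * hopOut + i] += input[f * hopIn + i] * w;
      norm[f * hopOut + i] += w;
    }
  }
  for (let i = 0; i < out.length; i++) if (norm[i] > 0) out[i] /= norm[i];
  return out;
}
```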


> audio processing can be done in JS/WASM

If there's reason to believe this is a useful way to handle time stretching, then there's reason to believe the same browser could do it natively just fine.


There is no reason to believe that browsers do it just fine if you have evidence for the contrary.


Which one in that list is better than the default one in Chrome? What media serving sites that you know of intentionally don't use native browser APIs?


Because this is such a niche use, the number of users taking advantage of this "feature" would be so small as to not get anybody a promotion.


Changing playback speed is a heavily used feature. They’ve also refined it several times, recently adding 0.05 speed increments not just 0.25.


took me a while to recognize you can long press on mobile yt to 2x, maybe even more than 2x but I ain’t figured out the finger incantation


Keyboard commands still make it jump in 0.25 steps.

Does anyone know a YouTube frontend that lets me change the playback speed in smaller steps using the keyboard?


How did you become this convinced that it's some niche, unused feature? I'd go as far as to say it's essential. So many videos, especially of courses, out there that really need that 1.25x playback rate boost.


The niche was for the people using YT to learn to play a song. You're now applying it to something else, so that's a bit of goalpost moving.

Also, your use of "need" is odd, and you seem to have convinced yourself that the world is wrong and only you're right. If it were needed, the creators would have made it that way.


> Also, your use of need is odd as well, and seem to have convinced yourself that the world is wrong and only you’re right. If it needed, the creators would have made it that way

It's called having personal experiences and an opinion. You should try them sometime. It's almost as if appropriate tempo was in the "eye" of the "viewer", and so I was very clearly not suggesting my needs and essentials are universal objective truths in the first place. Getting extremely tired of having to insert "I think", "I believe", "in my opinion" to signal subjectivity in what - I think - are ostensibly subjective contexts, just so that I can avoid subsequent bikeshedding like this.

> You’re now applying to something else, so that’s a bit of goal post moving.

This will be crazy I know, but instead of this delightfully malice-assuming explanation, I simply missed the words where they said "music practice". Didn't help that "timestretch for music practice" is not a feature of YouTube, only timestretch is (as part of the playback rate adjustment feature), so when you were (according to my personal impression of the wording of your previous comment) generally addressing the feature, I replied in kind.

If I was being extra prickly, I'd accuse you of intentionally writing in a way so that you could accuse me of strawmanning you later (goalpost moving is a very loose fit here) for an easy dunk, but of course as someone who reaches for fallacies immediately, you wouldn't do that, right?

This is what distrust sown between people, as well as just plain not being able to know your discussion partner looks like. It's been increasingly frustrating me, and it looks like it's having an effect on you too.


> If I was being extra prickly, I'd accuse you of intentionally writing in a way so that you could accuse me of strawmanning you later

That’s the most bizarre comment I think I’ve ever seen. You didn’t have to reply to my comment. You’re now saying that I assume that the reading comprehension of everyone is so bad that I make comments specifically as gotchas. WTF is that logic? People that post comments in threads without taking the whole thread into consideration are like people that butt their way into a conversation based on the last sentence heard. It never goes well. It’s called social etiquette.

Just admit you didn’t read the full thread, and that now that you understand the full context of the conversation, your comment is out of place. Have a nice day


It's also basic etiquette to not assume malice or that the other party is lying, but you explicitly and proudly continue to fail at that. So it's a bit tough for me to accept criticism from you regarding this.

> the reading comprehension of everyone is so bad

No, I am not saying this. This is just your headcanon, it is not even a remotely necessary presumption to have.

> You didn’t have to reply to my comment.

I felt compelled to after being told that I'm "moving goalposts". Obviously. Again, a subjectively perceived need.

> based on now understanding the full context of the conversation that your comment is out of place and have a nice day

Even with the additional context your original comment still rings unreasonable and self-absorbed. It is true however that I do not need you to explain why anymore. Have a nice day indeed.


It's the browser that does this on the client, not YouTube on the server.


Let's be fair, for anything music related, you'll be doing 1x speed... higher than that is usually speech only, where it doesn't matter as much.


If you're learning guitar, drums, piano, trumpet, etc., you'll want to start playing along at about 75% speed, then work up until you can play at about 110% of the target tempo. YouTube's built-in audio time stretch makes this a painful exercise.
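A minimal sketch of how a page can do this with the standard media element API (the helper name is mine; `preservesPitch` is the standard property, though older browsers only had the `webkitPreservesPitch` prefix):

```javascript
// Hypothetical helper: slow down (or speed up) an <audio>/<video> element
// for practice, keeping the original pitch (time stretch, not resampling).
function setPracticeSpeed(media, rate) {
  media.preservesPitch = true; // pitch stays constant while tempo changes
  media.playbackRate = rate;   // e.g. 0.75 for 75% speed
}

// Start at 75% speed, then step the rate up as the part becomes comfortable:
// setPracticeSpeed(document.querySelector('video'), 0.75);
```

The quality of the resulting audio at non-1x rates depends entirely on the browser's built-in time-stretch implementation, which is exactly the complaint above.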


What's the benefit of learning to play faster than you need to? I understand wanting to play slower, but why faster?


You want to play within your abilities, not right at the very edge of them. This technique nudges the edges out.


Fair enough. I'd never heard of this technique before. Note that it wouldn't translate to other skills like dancing, singing, or sawing a piece of wood. I can't imagine doing those things faster would help improve how you do them at normal speed.


Reliability builds on skill redundancy.


> Fun fact: Unlicensed stations began streaming jungle music from onboard ships off the coast of Britain, hence the expression "pirate radios."

Didn't pirate radio broadcasts from boats predate Jungle by about 20 years?


It's weird things like using "streaming" instead of "broadcasting". And by the time Jungle was invented, pirate radio was no longer happening at sea: it had moved to having the "studio" in one high-rise block of flats, with a point-to-point link to the actual broadcast site on another. That, coupled with the general structure and other fantasies, makes me question how much of this was LLM-generated. I couldn't continue reading after a while.


Hi there. Author here. Thanks for the heads up. I would never even consider using LLMs for this; it's just a subject that has always interested me, so I thought it would be a good idea to write some of it down. Some of these slips are probably due to English not being my first language. I'll review the piece and change the words and terms you suggested.


From the mid 60s


Two thoughts.

Suppose I deem it safe and useful for my 6yo to drive for a while, using his Xbox controller from the passenger seat.

It is illegal in many countries for a device (or anything else) to obscure any part of the driver's forward view (area swept by wipers). So even without actually controlling the car, we have an unlawful vehicle.


I'd be even more gleeful if I was in car insurance. "Openpilot's not covered, your insurance is invalidated."


Most likely not legal to invalidate in the EU. There are laws that say anything you can do manually, you are allowed to automate. Any rule against that is null and void.

The “horse winning race” case is a known one where they go into this.


Geico will cover you. You can disclose ahead of time that you have an aftermarket ADAS if you want. If it drives you off the road, it will be as if you drove off the road and you will be declared at-fault, of course.


Insurance is a non-issue with openpilot.


Has that been tested in court? In my jurisdiction?


Many OP users have asked insurance about it and gotten the a-okay. Believe it or not insurance likes safety devices.


Neat, love it.

Now try synchronising the music to the game.

You could use our Bungee library for the audio processing.

