Hacker News | sempron64's comments

Diagrams! So much documentation lacks diagrams because they are hard to make

True! Though I’d argue diagrams-as-code formats like PlantUML or Mermaid are better than an image!

Agreed. From a text-search perspective alone, Mermaid or even ASCII diagrams are usually preferable.

I think the discrepancy here is that almost all these crashes would not have resulted in an insurance claim, e.g. backing into a pole at 1 mph -- this is not enough damage to report for an average driver.

That said, really bad numbers for an autonomous system which is supposed to be way better than humans.


It depends on what part of the car is crumpled, dented, scratched, or misaligned and what your deductible is. It doesn’t take much body work to hit $250, $500, or even $2000.

A hyperbolic curve doesn't model any underlying process; it's just a curve that goes vertical at a chosen point. That makes it a bad curve to fit to a process. Exponentials, by contrast, make sense as models of compounding or self-improving processes.


You have not read far enough.


But this is a phase change process.

Also, the temptation to shitpost in this thread ...


I read TFA. They found a best fit to a hyperbola. Great. One more data point will break the fit. Because it's not modeling a process, it's assigning an arbitrary zero point. Bad model.
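To make the objection concrete, here is a toy sketch (all numbers invented purely for illustration) of why a hyperbola y = a/(t0 - t) is a strange thing to fit: the blow-up point t0 is baked into the curve itself, while an exponential stays finite everywhere:

```python
import math

def hyperbola(t, a=1.0, t0=10.0):
    # Goes vertical as t -> t0; undefined for t >= t0.
    # t0 is the "arbitrary zero point": any data at t >= t0 breaks the fit.
    return a / (t0 - t)

def exponential(t, a=1.0, b=0.3):
    # A compounding/self-improving process: finite for every t.
    return a * math.exp(b * t)

for t in (8.0, 9.0, 9.9, 9.99):
    print(f"t={t}: hyperbola={hyperbola(t):8.1f}  exponential={exponential(t):.1f}")
```

The hyperbola's divergence is a property of the chosen functional form, not of the data; one more observation past t0 and there is no curve left to fit.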


This is an excellent example to illustrate an S-curve. There is a certain amount of energy in a photon; it cannot be emitted with less. There is a 100% efficiency barrier that cannot be surpassed no matter how smart you are.


Sure, but the technology lifetimes and adoption rates have compressed exponentially despite that.

Efficiency is not the only relevant metric, there's also cost, flexibility, lifetime/durability, CRI, etc...

For example, OLEDs are (literally) flexible, but burn out faster than LEDs and are less efficient.

As another example, the light sources for televisions have undergone nearly annual changes! They started with CCFL backlights, then side-illumination with white LEDs, then blue LEDs with quantum dots, OLED panels, backlights as controllable grids of LEDs, mini-LED, micro-LED, RGB micro-LED, etc.

We're up to something like 10K dimming zones with the latest TCL panels and 100K is just around the corner.


There's still plenty of room to improve lighting.

If you want to light an indoor room to be as bright as the outdoors on a sunny day, you're going to need a lot of heavy, expensive equipment that produces a lot of waste heat (LEDs produce way less than incandescent, but still a significant amount). It's also not going to be a full continuous spectrum of light the way that sunlight is.
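A rough back-of-envelope run (every figure here is a ballpark public estimate, not a measurement) shows the scale of the problem:

```python
# Ballpark: how much LED power would it take to light a room
# to full-sun brightness? All numbers are rough estimates.
sunny_day_lux = 100_000          # direct-sun illuminance, lux (lm/m^2)
room_area_m2 = 20                # a modest living room
led_efficacy_lm_per_w = 150      # good commodity white LEDs

lumens_needed = sunny_day_lux * room_area_m2       # 2,000,000 lm
watts_in = lumens_needed / led_efficacy_lm_per_w   # ~13 kW of input power
heat_w = watts_in * (1 - 0.45)                     # even at ~45% efficiency

print(f"{lumens_needed:,} lm -> ~{watts_in/1000:.1f} kW of LEDs, "
      f"~{heat_w/1000:.1f} kW of it as waste heat")
```

Around 13 kW of lighting hardware for one room: hence the heavy, expensive equipment and the significant waste heat.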


I think we can stop building new streetlights at the moment we have full daylight illumination on the visible spectrum 24x7 in urban areas. We’ll probably settle for much less and be happy with that.

If we need more light, we can deploy more power generators.


I think the mistake here is overlooking that there is a rate of progress beyond which humanity can no longer collectively process the progress being made, and that is effectively equivalent to infinite progress. That point is the singularity, and reaching it requires non-human-driven progress; full automation is a prerequisite. We may or may not get there. We may hit a hard wall and devolve to an S-curve, settle at some maximum linear rate, be bounded by population growth and the growth of human capability (a much slower exponential), or pass the 1/epsilon-slope point where we throw up our hands (singularity). Or we may have a dark age where progress goes negative. Time will tell.


I think we are on the cusp of it, and that growing sense of chaos and acceleration and fear, and at the same time gravitational attraction toward it, is the beginning.


I think they're probably best known for making money.


This is for Latin. The Dead Sea Scrolls have clear spacing between the words. https://www.imj.org.il/en/wings/shrine-book/dead-sea-scrolls

The Talmud discusses the spacing between the words of the Bible: https://www.bible-researcher.com/hebrewtext1.html


It's amusing to me that in the 90s you could easily play Quake or Doom with your friends by calling their phone number over a modem, whereas now setting up any sort of multiplayer essentially requires a server, unless you use some very user-unfriendly NAT busting.


Glad you mentioned DOOM! Sometimes people forget that DOOM supported multiplayer as early as December 1993 via a serial line, and February 1994 for IPX networking. 4-player games on a LAN in 1994! On release, TCP/IP wasn't supported at all, but as the Internet took off, that was solved as well. I remember testing an early-ish version of the 3rd-party iDOOM TCP setup driver from my dorm room (10BASE-T connection) when I was supposed to be in class, and it was a true game changer.


What was even more amazing is that you could daisy-chain serial ports across computers to get multiplayer Doom running. One or more of those links could even be a phone modem.

The downside was that your framerate was capped by the slowest computer in the game, and there was always that guy with the 486SX/25 who got invited to play.


Or slave two copies to yours and get "side view" which was only supported for a few releases IIRC - https://doomwiki.org/wiki/Three_screen_mode


Yes, Doom with multi-monitors! There's (at least) one video on YouTube showing it in action with 3 monitors plus a fourth showing the map: https://www.youtube.com/watch?v=q3NQQ7bPf6U#t=1798.333333


Someone tried running that in one of the campus computer labs when I was a student, and the (probably misconfigured) IPX routers amplified it into... a campus-wide outage. Seems weird to me, but that's what the big sign on the door said the next day.

The perpetrator was never caught.


You usually just need to forward a port or two on your router. That gets through the NAT because you specify which destination IP to forward it to. You also need to open that port in your Windows firewall in most cases.

Some configuration, but you don't have to update the port forwarding as often as you would expect.

The reason you can't just play games with your friends anymore is that game companies make way too much money from skins, and they do not want you to be able to run a version of the server that doesn't check whether you paid real money for those skins. Weirdly, despite literally inventing loot boxes, Valve sometimes doesn't suffer from this: TF2 had a robust custom-server community with dummied-out checks, so you could wear and use whatever you wanted. Similarly, Minecraft still allows you to turn off authentication so you can play with friends who have a pirated copy.
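For reference, the Minecraft toggle mentioned above is a single line in the server's server.properties file:

```
# server.properties
# Disables authentication against Mojang's session servers, so
# unauthenticated ("offline mode") clients can join.
online-mode=false
```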


Starcraft could only do internet play through battle.net, which required a legit copy. Pirated copies could still do LAN IPX play though, and with IPX over IP software you could theoretically do remote play with your internet buddies.

By the way, this is why bnetd is illegal to distribute and was ruled such in a court of law: authenticating with battle.net counts as an "effective" copy protection measure under the DMCA, and providing an alternate implementation that skips that therefore counts as "circumvention technology".


Multiplayer started with Doom 2; the original Doom was single-player only. Doom 2 supported 4 players, which I used in my mod ArsDoom. Quake then scaled it up via a dedicated Quake server.


You can kinda solve that problem with STUN servers. Most games on Steam use Steam Datagram Relay, which also solves this: https://partner.steamgames.com/doc/features/multiplayer/stea...

It's like in-engine Hamachi. Works really well with P2P games.
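As a sketch of what a STUN client actually sends (following RFC 5389; this is an illustrative fragment, not any particular game's code), the first step of hole punching is a 20-byte Binding Request asking the server "what public IP:port do I look like from outside?":

```python
import os
import struct

MAGIC_COOKIE = 0x2112A442   # fixed value identifying RFC 5389 STUN messages
BINDING_REQUEST = 0x0001

def stun_binding_request() -> bytes:
    """Build a STUN Binding Request header with a random 96-bit transaction ID."""
    txn_id = os.urandom(12)
    # message type (16 bits), body length (16 bits), magic cookie (32 bits)
    return struct.pack("!HHI", BINDING_REQUEST, 0, MAGIC_COOKIE) + txn_id

msg = stun_binding_request()
print(len(msg), msg[:2].hex())  # -> 20 0001
```

The server's reply carries the client's public address, which the peers then exchange (via a matchmaking channel) so both sides can fire UDP packets at each other and punch through their NATs.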


I played a lot of Brood War with my friends like this.


I wonder if there is a way to use tailscale to make it easy again?


Quite literally folks have done this for decades using Hamachi.


Hamachi and STUN were what I was thinking of when I referred to user-unfriendly NAT busting. It's true that these are not much harder to get working than a modem, but they don't match up with modern consumer expectations of ease-of-use and reliability on firewalled networks. It would be nice if Internet standards could keep up with industry so that these expectations could be met. It's totally understandable where we've landed due to modern security requirements, but I still feel something has been lost.


But how are you going to circumvent the user's firewall? They still have to open ports there, even using STUN or Steam Relay or Hamachi.


Hamachi, by its nature, does not require you to open any ports on your firewall. Except maybe the local firewall (likely the Windows firewall), which apps automatically prompt about when they try to use a port.


I mean, internet standards kept up. IPv6 is a thing, and some form of dynamic IPv6 stateful firewall hole punching a la UPnP would be useful here. Particularly if the application used the temporary address for the hole punch--because once the address lifetime ends, it's basically not going to get used again (64-bit address space). So that effectively nullifies any longer term concerns about security vulnerabilities.


This is just not true. You can still write GTK2 or SDL apps, you just need to package your app for the target distro or open source it because it's an open-source-first ecosystem.

If you're looking for binary stability and to ship your app as a file, ELF is extremely stable. If your app accesses files, accesses the network through sockets, and uses stable libraries like SDL or GTK, it will work fine as a regular binary and be easy to ship. People just don't want to write their apps in C, when the operating system is designed for that.

Many native apps like Blender, Firefox, etc. ship portable Linux x64 and arm64 binaries as tar.gz files. This works fine. You can also use Flatpak if you want automatic cross-platform updates, but yes, the format is unfortunately bloated.

It's not that easy to ship a JavaScript app on other OSes either, and Electron apps abound there too.


What does ELF being stable or people not writing apps in C have to do with Linux binary compatibility? No matter what language you use, it’s either dynamically linking to the distro’s libc or using Linux system calls directly.

Also, I recommend taking a gander at what the Linux build process/linking looks like for large apps that “just work” out of the box, like Firefox or Chromium. There are games they have to play just to get consistent glibc symbol versions, and basically anything graphics/GUI related has to do a bunch of `dlopen`s at runtime.

Flatpak and similar cop out by bundling their own copies of glibc and other libraries, and then doing a bunch of hacks to get the system’s userspace graphics libraries to work inside the container.
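The dlopen-at-runtime pattern mentioned above fits in a few lines; here sketched via Python's ctypes purely as an illustration (libm stands in for libGL or a GUI library, and the library name is platform-dependent):

```python
# Sketch of runtime library loading: instead of linking at build time
# (and inheriting that library's versioned symbols), locate and load
# the library when the program runs. ctypes.CDLL is dlopen() underneath.
import ctypes
import ctypes.util

# find_library may fail without ldconfig; fall back to the glibc soname.
path = ctypes.util.find_library("m") or "libm.so.6"
libm = ctypes.CDLL(path)

libm.cos.restype = ctypes.c_double      # declare the symbol's signature
libm.cos.argtypes = [ctypes.c_double]   # before calling through it

print(libm.cos(0.0))  # -> 1.0
```

A C app does the same dance with dlopen()/dlsym(), which is exactly why those "just works" binaries defer their graphics dependencies to runtime rather than link them.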



Betteridge's law applies to editors who add question marks to cover for articles with weak claims, not to bloggers begging the question.

