Hacker News | rjinman's comments

I wasn't asking it to define it. I came up with the list of principles first, then spent ages trying to think of a suitable name for them. It was quite gratifying when ChatGPT, without any context, when asked to guess what the term "freehold" might mean with respect to software, came up with almost the exact same set of principles. That told me that the "freehold" term is a pretty good fit. It would be an incredible coincidence otherwise.


Oh, I see, almost rubber-ducking the semantic meaning of the term. That makes more sense to me. Apologies for my knee-jerk LLM skepticism.


The more interstellar objects we find that resemble comets, the weirder Oumuamua is.


Maybe. I think it's more likely that an alien probe - assuming there are aliens and they fly probes - would be the size of a cubesat, and we wouldn't even notice it.

Perhaps Oumuamua was the mothership and the solar system is now swarming with cubesats we're not noticing.


>I think it's more likely that an alien probe - assuming there are aliens and they fly probes - would be the size of a cubesat

Or maybe the size of a sub-atomic particle, as in the sci-fi novel 'The Three-Body Problem'.

https://three-body-problem.fandom.com/wiki/Sophons


Does anyone else see a timer ticking down in their vision or is it just me?

Time to quit my job at the LHC and be a baker.


The Ramans do everything in threes.


Thank you! Finally a good Rama reference in the wild.


I really hope someone sends a probe to catch Oumuamua. When Starship is flying regularly it should be doable, just barely.


It’s news to me that Starship flying is doable.



The chances that it's a rare type of interstellar object are incredibly small.


Can we get Musk to pilot it?


When I posted about this project here and on reddit a few months ago I got a lot of people asking for advice and learning resources. I promised I'd one day provide a detailed write-up explaining everything, so here it is :)


unique_ptr is much better because each object then has a sole owner, which makes object lifetimes much easier to reason about, and you can't end up with cyclic references causing memory leaks.


I wrote a game of Tetris in JavaScript with SVG many years ago. It had nice graphics and was smoothly animated. I hadn’t heard of anyone else using SVG like that at the time.

I also made a game called Pro Office Calculator (available on Steam), which includes a Doom-style 3D engine for which I used Inkscape as my map editor. Here’s an example of a map: https://github.com/robjinman/pro_office_calc/blob/develop/da...


Reminds me of Avara which used MacDraw as a level editor. Very cool!


As someone who is terrified of agentic ASI, I desperately hope this is true. We need more time to figure out alignment.


I'm not sure this will ever be solved. It requires both a technical solution and social consensus. I don't see consensus on "alignment" happening any time soon. I think it'll boil down to "aligned with the goals of the nation-state", and lots of nation states have incompatible goals.


I agree unfortunately. I might be a bit of an extremist on this issue. I genuinely think that building agentic ASI is suicidally stupid and we just shouldn’t do it. All the utopian visions we hear from the optimists describe unstable outcomes. A world populated by super-intelligent agents will be incredibly dangerous even if it appears initially to have gone well. We’ll have built a paradise in which we can never relax.


What's the difference between your "agentic AIs" and, say, "script kiddies" or "expert anarchist/black-hat hackers"?

It's been obvious for a while that the narrow-waist APIs between things matter, and it's apparent that agentic AI is leaning into adaptive API consumption. But I don't see how that gives the agentic client some super-power we don't already need to defend against: before AGI we already had HGI (human general intelligence) motivated to "do bad things" to/through those APIs, both self-interested and nation-state sponsored.

We're seeing more corporate investment in this interplay, trending us towards Snow Crash, but "all you have to do" is have some "I" in API be "dual key human in the loop" to enable a scenario where AGI/HGI "presses the red button" in the oval office, nuclear war still doesn't happen, WarGames or Crimson Tide style.

I'm not saying dual key is the answer to everything, I'm saying, defenses against adversaries already matter, and will continue to. We have developed concepts like air gaps or modality changes, and need more, but thinking in terms of interfaces (APIs) in the general rather than the literal gives a rich territory for guardrails and safeguards.


> What's the difference between your "agentic AIs" and, say, "script kiddies" or "expert anarchist/black-hat hackers"?

Intelligence. I'm talking about super-intelligence. If you want to know what it feels like to be intellectually outclassed by a machine, download the latest Go engine and have fun losing again and again while not understanding why. Now imagine an ASI that isn't confined to the Go board, but operating out in the world. It's doing things you don't like at speeds you can scarcely comprehend and there's not a thing you can do about it.


But the world is not a game where you "win" by intelligence; very far from it. Just look at who is currently in the White House.


> Now imagine an ASI that isn't confined to the Go board, but operating out in the world.

I don't think it's reasonable at all to look at a system's capability in games with perfect and easily-ingested information and extrapolate about its future capabilities interacting with the real world. What makes you confident that these problem domains are compatible?


That’s not what I was saying at all. I was using Go as an example of what the experience of being helplessly outclassed by a superior intelligence is like: you are losing and you don’t know why and there’s nothing you can do.


I completely agree with you. Chess, Go, and poker have shown that these systems can become so advanced that it's impossible for a human to understand why the AI chose a move.

Talk to the best chess players in the world and they'll tell you flat out that they can't begin to understand some of the engine's moves.

It won't be any different with ASI. It will do things for reasons we are incapable of understanding. Some of those things will certainly be harmful to humans.


> What's the difference between your "agentic AIs" and, say, "script kiddies" or "expert anarchist/black-hat hackers"?

The difference is that a highly intelligent human adversary is still limited by human constraints. The smartest and most dangerous human adversary is still one we can understand and keep up with. AI is a different ball game. It's more similar to the difference in intelligence between a human and a dog.


> we just shouldn’t do it.

I think what Accelerationism gets right is that capitalism is just doing it - autonomizing itself - and that our agency is very limited, especially given the arms race dynamics and the rise of decentralized blockchain infrastructure.

As Nick Land puts it, in his characteristically detached style, in A Quick-and-Dirty Introduction to Accelerationism:

"As blockchains, drone logistics, nanotechnology, quantum computing, computational genomics, and virtual reality flood in, drenched in ever-higher densities of artificial intelligence, accelerationism won't be going anywhere, unless ever deeper into itself. To be rushed by the phenomenon, to the point of terminal institutional paralysis, is the phenomenon. Naturally — which is to say completely inevitably — the human species will define this ultimate terrestrial event as a problem. To see it is already to say: We have to do something. To which accelerationism can only respond: You're finally saying that now? Perhaps we ought to get started? In its colder variants, which are those that win out, it tends to laugh." [0]

[0] https://retrochronic.com/#a-quick-and-dirty-introduction-to-...


It doesn't do anyone any good to stress over non-existent things. ASI is a sci-fi trope, a pure fantasy in the context of the present day. AGI doesn't exist either, and AFAIK there's no agreement on what it even means beyond the very vague "no worse than a human".

In other words, I'm sure you're terrified of a modern fairy tale.


"alignment" is a bs term made up to deflect blame from the overpromises the AI companies made to hype up their product to obtain their valuations.


Big take given how much AI companies hate alignment folks.


What the frell! This is cool.


Couldn't agree more with the necessity for fast feedback loops. I've experienced the opposite, and it's not fun.

I worked with Clojure/ClojureScript (mostly ClojureScript) for a couple of years many years ago. It was the first time I'd worked professionally with a functional language, so I made a game of minesweeper in my free time to help get to grips with it: https://github.com/robjinman/cljsmines

Back then, I fully bought into the idea that functional languages like Clojure were the future, especially on the web. The way application state is managed is perhaps the key virtue of functional programming: if you get it right, you can design your program to consist mostly of completely pure functions. I remember how enlightening that was once I understood it.


Interesting, I’ve only tried it on two machines - my (fairly old) laptop with integrated graphics and my desktop with an RTX4060. I’ve also tried it on the desktop on Windows in a Linux VM running Ubuntu 24.04. It runs way smoother on the laptop.

Thanks for trying it :)

BTW, is there a problem with the JavaScript on my website?


Now I've at least got it working on an old laptop with 1366x768 resolution :)

Well, not immediately. At first there was an entirely different problem: everything on the screen was garbled!

However it was easy to figure out why and fix it: depending on the hardware and resolution, lines in the framebuffer can have extra padding beyond what's needed for the visible pixels. That information is available in another info structure:

              ; Get size of line in framebuffer
              mov eax, 16                   ; sys_ioctl
              mov edi, [drw_fbfd]
              mov esi, 0x4602               ; FBIOGET_FSCREENINFO
              mov rdx, rsp
              syscall
              mov eax, [rsp+0x30]           ; bytes per line
              shr eax, 2                    ; convert to pixels
              mov [drw_fb_pitch], eax
There are only a few places in draw.asm that then need to be changed to multiply by this new variable instead of drw_fb_w.


Hi, thanks for the update. I’ll make this change when I get a chance. Thanks


I've now also figured out a solution to my first problem, though still not much of an idea of what really causes it. In drw_flush, after writing out the screen buffer, do this:

              push 0                        ; put X,Y offsets on stack
              sub rsp, 16                   ; 4 more dwords (don't care)
              mov eax, 16                   ; sys_ioctl
              mov edi, [drw_fbfd]
              mov esi, 0x4606               ; FBIOPAN_DISPLAY
              mov rdx, rsp                  ; ptr to structure on stack
              syscall
              add rsp, 24                   ; drop it
              ret
From what little documentation there is, this IOCTL takes a fb_var_screeninfo structure, but only pays attention to the x/y offset fields (so allocating 24 bytes for it should be enough). With the offsets set to zero, it now works perfectly on my machine.


Thanks, I've made a new branch called display_fixes. Let me know if it works


I might try it out on other machines or without running X at the same time. Maybe some conflict with the framebuffer console + X?

>BTW, is there a problem with the JavaScript on my website?

I just block all JavaScript by default, and strongly believe that websites should remain usable that way.


In case you're still reading this, I tried a more mature program using the framebuffer (fbi image viewer), and it had exactly the same problem! I now suspect it has something to do with the Intel graphics driver (i915), either a bug or more likely some undocumented interaction with the console.

It's really too bad that the Linux kernel provides these APIs but barely any documentation, and that the only way you're supposed to interact with them is through some specially "blessed" abstraction layer running in userspace (X11/Wayland/SDL, PulseAudio, etc.), which inevitably demands linking in C libraries.


I learnt to program on a Psion as a ten-year-old.

