phippsytech's comments (Hacker News)

This sounds like it's more efficient, however... (and ignoring the US-specific stuff)

Even in forms where I've designed this myself, it breaks my flow as a user. I'm used to suburb, state, postcode order (I'm an Aussie). It's how we were taught to write an address in school. It's been a pattern for a long time, and reversing the fields slows my brain down.

IMHO the better user experience isn't changing the order of the fields. It's honouring auto-complete so the user doesn't have to enter this data in the first place.
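For what it's worth, the mechanism here is the standard HTML `autocomplete` tokens; if a form labels its fields with them, the browser can fill the address regardless of field order. A minimal sketch (field names are illustrative):

```html
<!-- WHATWG autocomplete tokens: address-level2 is the city/suburb,
     address-level1 is the state/province -->
<input name="street"   autocomplete="street-address">
<input name="suburb"   autocomplete="address-level2">
<input name="state"    autocomplete="address-level1">
<input name="postcode" autocomplete="postal-code">
<input name="country"  autocomplete="country-name">
```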


The only thing I've decided to not bother trying to self-host is email. IP reputation and managing deliverability is hard.


I'm surprised that zero-knowledge proofs (ZKPs) almost never get mentioned when it comes to age verification. They seem like a solution that does protect PII. There is a learning curve for the general public, but having watched the hoops a mother recently had to jump through so her kids could play Mario Kart on Nintendo Switch, I think it is not that difficult.
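ZKPs can feel abstract, so here is a toy Schnorr proof of knowledge in Python. It only shows the shape of the idea ("prove a fact about a secret without revealing the secret"), which is the property age verification would lean on; the parameters are far too small for real use, and a real deployment would use a vetted library and a proper range/credential proof rather than this primitive.

```python
import hashlib
import secrets

# Toy parameters: p = 2q + 1 with p, q prime; g generates the order-q subgroup.
p, q, g = 2039, 1019, 4

def prove(x: int) -> tuple[int, int, int]:
    """Prove knowledge of x with y = g^x mod p, without revealing x."""
    y = pow(g, x, p)
    r = secrets.randbelow(q)
    t = pow(g, r, p)                                        # commitment
    c = int(hashlib.sha256(f"{g}{y}{t}".encode()).hexdigest(), 16) % q
    s = (r + c * x) % q                                     # response
    return y, t, s

def verify(y: int, t: int, s: int) -> bool:
    """Check g^s == t * y^c (mod p); learns nothing about x."""
    c = int(hashlib.sha256(f"{g}{y}{t}".encode()).hexdigest(), 16) % q
    return pow(g, s, p) == (t * pow(y, c, p)) % p

y, t, s = prove(secrets.randbelow(q))
assert verify(y, t, s)
```

The verifier ends up convinced the prover knows `x`, but the transcript `(y, t, s)` reveals nothing about it; an age-verification scheme would prove a statement like "my credential's birthdate is more than 18 years ago" in the same spirit.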


The thing I dislike about smartphone keyboards is the amount of screen real estate they use. This keyboard seems to take more screen real estate rather than less.


I used the Minuum keyboard a long time ago and it was actually good: 1-2 cm of keyboard height. Sadly, I think it was discontinued long ago, but you can see a couple of screenshots here:

http://minuum.com/


Yeah, the example is pretty bad. This layout also seems to be hard to squash vertically without increasing the error rate a lot compared to a normal one. The error rate at smaller sizes is something a lot of novel touchscreen keyboards should probably have focused on instead.


I think the more realistic direction is exposing API / MCP-style interfaces for agents to interact with a product’s functionality, rather than shipping UI components that an AI client would render.

The "AI renders your components inside chat" idea feels very similar to Facebook’s old canvas apps. That model disappeared for good reasons: abuse, security, and loss of platform control.

It seems far more likely that AI platforms will provide their own interaction primitives (forms, pickers, confirmations, etc.) and simply call third-party tools behind the scenes. That lets the platform retain control over UX and safety, and avoids the risks of embedding arbitrary third-party UI.
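To make the "expose tools, not UI" idea concrete, here is a hypothetical sketch of what a product might register for an agent platform to call. All names (`tool`, `book_table`, the schema) are invented for illustration; the point is that the product returns structured data and the platform renders its own confirmation UI around it.

```python
from typing import Any, Callable

# Registry of agent-callable tools: name -> description, JSON schema, handler.
TOOLS: dict[str, dict[str, Any]] = {}

def tool(name: str, description: str, schema: dict[str, Any]):
    """Decorator registering a function as a tool with an input schema."""
    def register(fn: Callable[..., Any]):
        TOOLS[name] = {"description": description,
                       "input_schema": schema,
                       "handler": fn}
        return fn
    return register

@tool("book_table", "Reserve a restaurant table",
      {"type": "object",
       "properties": {"party_size": {"type": "integer"},
                      "time": {"type": "string"}},
       "required": ["party_size", "time"]})
def book_table(party_size: int, time: str) -> dict[str, Any]:
    # Return structured state; the AI platform shows its own
    # confirmation primitive instead of our UI.
    return {"status": "pending_confirmation",
            "party_size": party_size, "time": time}

result = TOOLS["book_table"]["handler"](party_size=2, time="19:00")
```

The platform never embeds third-party markup; it only sees the schema and the structured result, which is what keeps UX and safety under its control.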


I hoard information that I find interesting but cannot act on right now, mostly because engaging with it immediately would derail me. The problem is that I almost never come back to what I have saved.

A simple example is YouTube. I save videos to watch later because I am not in the right headspace at the time. Then I avoid them completely. I think part of the resistance is that I know watching them properly will demand attention and probably lead to follow-up work, and I am rarely in a mode where I want that interruption.

I have thought about the whole “second brain” idea, but I worry it would just become a dumping ground. Nothing would really resurface when it actually matters. I would mostly be relying on myself to remember that I once made a note about something when I happen to be working on a related problem.

Lately I have been thinking about the idea of a passive, radio style feed that summarises the information I have collected and plays it back to me, so I can at least consume each item once.

You see those TV shows about people who hoard. They cannot throw things away because they might be important one day. This feels uncomfortably similar.

Maybe the real problem is not how we store information. Maybe it is that we aren't filtering hard enough on what is actually worth keeping in the first place.


This resonates a lot. “Save now so I don’t derail myself” is rational in the moment, but it creates a pile that later feels like a commitment (attention + follow-up work), so you avoid it.

I think you’re pointing at two separate problems that get tangled:

1. Re-entry: how to resurface the right item when you’re actually in the right mode

2. Filtering: deciding what’s worth keeping so the backlog doesn’t become guilt

The “radio-style passive feed” is interesting because it changes the contract: you’re not promising yourself you’ll do deep work, you’re just letting the system replay what you captured at a low cognitive cost. If it worked, it could also become a filter: only the stuff that still feels valuable on playback deserves a second pass.

One question: if you had a “listen/read later” mode, would you prefer it to be time-based (10 minutes a day) or context-based (only when you mark yourself as in a “curious/exploration” headspace)?

Details in my HN profile/bio if you want to compare this to the “active projects + pull-based resurfacing” angle I’m validating.


I am mostly imagining using the radio while driving, so I would not be able to actually act on anything in the moment. For me, “listen later” or “read later” only really works when I am in a curious or exploratory headspace.

That headspace is not always there. A good example is when I sit down to watch something on a streaming service and end up browsing for ages instead of committing to anything. In theory, that would be a perfect moment to actively review things I have saved, but in practice I am not convinced my neurodiverse brain would reliably cooperate.

So to your question, I think I would lean much more toward a context-based mode than a time-based one. A fixed daily slot would quickly turn into another obligation. A lightweight “I am in curiosity mode right now” switch feels closer to how my brain actually works, especially if the radio-style playback keeps the cost of re-entry low.


A fixed daily slot really can turn into another obligation (and then the backlog becomes guilt again), whereas a lightweight “I’m in curiosity mode” toggle matches how attention actually works. The driving “radio” use case also suggests the output should be low-friction: short, self-contained, and not demanding follow-up right now.

If I were to design around your constraints, it would look like:

* a manual toggle for “curiosity mode”

* a queue that plays 1–3 small “snack” insights (not full summaries)

* and a single “save this to revisit” action that you can do in 1 second, so you don’t lose it while driving
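Those three constraints are simple enough to sketch. This is a toy model of the design (all names hypothetical, not a real product): a manual curiosity toggle gates playback, playback is capped at a few snack-sized items, and "save to revisit" is a single cheap action.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class SnackRadio:
    saved: deque = field(default_factory=deque)   # captured items, FIFO
    revisit: list = field(default_factory=list)   # flagged for a second pass
    curious: bool = False                         # manual curiosity-mode toggle

    def capture(self, item: str) -> None:
        self.saved.append(item)

    def play(self, n: int = 3) -> list[str]:
        """Play back at most n snack-sized items, only in curiosity mode."""
        if not self.curious:
            return []
        return [self.saved.popleft() for _ in range(min(n, len(self.saved)))]

    def save_to_revisit(self, item: str) -> None:
        self.revisit.append(item)                 # the one-second action

radio = SnackRadio()
radio.capture("video: spaced repetition")
radio.capture("article: ZK proofs")
assert radio.play() == []        # outside curiosity mode, nothing plays
radio.curious = True
played = radio.play()
```

The key design choice is that the default state plays nothing, so the backlog can never turn into an obligation on its own.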

One question: when you hear something interesting in that mode, what's the most natural next step for you later: open the original link/video, add it to an "active project/topic", or capture a single note like "try X / look up Y"? (More context on the direction I'm validating is in my HN profile/bio if you want to compare.)

