I have a MAD I got online as well, and the experience is a bit subpar: lots of parts broke (the bands) and I can’t easily get replacement parts. My MAD itself also broke, and I can’t replace it easily (they sent me a new one but it doesn’t fit). They also aren’t testing me to see if there’s any difference (I think it works because I wake up less, and definitely wake up less to pee). Figuring out which band to use is also an esoteric process. It’s amazing that this exists as an alternative to a CPAP, but we are really beta testers at this point. Oh, and I learned later that they don’t replace your retainers.
The cool thing about noq (and Quinn, where we inherited this from) is that you can implement your own "Session" trait. So that can be either TLS or nQUIC.
For coding use cases you may want a way to search for symbols themselves, or do a plain-text exact match on the name of a symbol, to find the relevant documents to include. There is more to searching than building a basic similarity search.
Sorry, but who mentioned coding as a use case? My comment was general, not specific to coding, and I don't understand where you got the idea that I was arguing that a similarity search engine would be a substitute for a symbol-search engine, or that symbol search is inferior to similarity search. Please don't put words in my mouth. My question was genuine and made no presumptions.
Even with the coding use-case you would still likely want to build a similarity search engine because searching through plain symbols isn't enough to build a contextual understanding of higher-level concepts in the code.
I mentioned coding as a use case in my comment you replied to. You were asking for an example for when one wouldn't use vector search and I provided one. I did not say similarity search would be a substitute. I said that for the coding case you do not need it.
>you would still likely want to build a similarity search engine
In practice, tools like Claude Code, Codex, Gemini, Kimi Code, etc. are getting away with searching for code with grep / find and understanding it by loading a sufficient amount of code into the context window. That is enough to understand higher-level concepts in the code. Maintaining a vector database on top of this is not free and adds complexity.
In your comment you said "There is more to searching than building a basic similarity search", which assumed and implied all kinds of things, and which was completely unnecessary.
> In practice tools like Claude Code, Codex, Gemini, Kimi Code, etc are getting away with searching for code with grep / find and understanding code by loading a sufficient amount of code into the context window
Getting away is the formulation I would use as well. "Sufficient amount", OTOH, is arguable and subjective. What suffices in one scenario does not in another, so the perception of how sufficient it really is depends on the usage patterns, e.g. the type and size of the codebase and the actual queries asked.
The crux of the problem is what amount and what parts of the codebase do you want to load into the context while not blowing up the context and while still maintaining the capability of the model to be able to reason about the codebase correctly.
And I find it hard to argue that building the vector database would not help with exactly that problem.
And yet your blog says you think NFTs are alive. Curious.
But seriously, RAG/retrieval is thriving. It'll be part of the mix alongside long context, reranking, and tool-based context assembly for the foreseeable future.
I don't think RAG is dead, and I don't think NFTs have any use and think that they are completely dead.
But the OP's blog is more about ZK than about NFTs, and crypto is the only place funding work on ZK. It's kind of a devil's bargain, but I've taken crypto money to work on privacy preserving tech before and would again.
The issue I had with RAG when I tried building our own internal chat/knowledge bot was pulling in the relevant knowledge before sending it to the LLM. Domain questions like "What is Cat Block B?" are common and, for a human in our org, carry all the context needed to answer. But vectorizing that and then finding matching knowledge produced so many false positives. I tried to work around that by adding custom weighting based on keywords and source (Confluence, Teams, email), but it just seemed unreliable. This was probably a year ago and, admittedly, I was diving in head first without truly understanding RAG end to end.
Being able to just train a model on all of our domain knowledge would, I imagine, produce much better results.
I have no interest in anything crypto, but they are making a proposal about NFTs tied to AI (LLMs and verifiable machine learning) so they can make ownership decisions.
So it'd be alive in the making decisions sense, not in a "the technology is thriving" sense.
> Of course you would have to set a temperature of 0 to prevent abuse from the operator, and also assume that an operator has access to the pre-prompt
Doesn't the fact that LLMs are still non-deterministic at temperature 0 render all of this moot? And why was I compelled to read a random blog post on the unsolved issue of validating natural language? It's SQL injection, except without a predetermined syntax to validate against, and thus an NP problem we've yet to solve.
Just after that extremely gentle poke about a grift that died many years ago, you'll be pleased to see that I address the very silly claim about RAG in a straightforward, ad rem way.
The 400g doesn’t bother me personally. I agree that not being able to turn it off sucks; I switched to Android for some time and it would just leave the pair on until the battery died. I think it’s basically a feature to force users to stay within the iOS ecosystem.
The problem is a lot of very strong engineers are also very difficult to work with. I worked at Meta too and can tell you the other side of the coin is that people who were too toxic could get canned as well!
Yes, I have worked with the strong but arrogant/snarky engineers. Luckily most of them got canned or forced out because the environment they create around themselves more than negates the positive impact they have. The strongest engineers I have worked with are all humble and kind.
It is their loss, I cannot imagine letting a minor work quarrel live rent free in my head for over a decade. I feel bad enough when something is stuck in my mind for a week.
Maybe. I am also not saying they need to say where the dollars came from, where they went, or what they were for; just aggregate daily flows. Could you do some deductive reasoning to make an informed guess, especially when large sums are involved? Perhaps.
I am also of the (perhaps wrong) opinion that the majority of the important stuff leaks anyways, just not on a level playing field.
Financials aren't like technology or IP, where having the information open to all (perhaps with limited monopolies on usage a la patents) is essential for the betterment of all mankind; they can be more like order of battle in a war zone.
If your competitors know that your Florida subsidiary is running inefficiently and being subsidized by your successful business elsewhere, they can target their own operations at Florida, undercut you more deeply than you can sustain, and force you to exit that market entirely so that they can monopolize it.
Sure, but others can also do that to your competitor. Hence my comment that everyone's in the same boat. The playing field would be level and the players would adapt to the new environment.
Of course I realize it's possible it might introduce systemic problems that I'm unaware of.
Isn't this exactly what we should want from a market system? If your division in Florida is inefficient, then from the market perspective we should absolutely want competitors to enter the market and crush them.
I think the problem is that people have gotten so used to seeing capitalism from the companies' perspective (i.e.: profits good), and forgot that it is supposed to be all about the collective good. So if you think sustained high profits are good... then you have missed the whole point (the market should always be driving them towards near-zero).
I really think that Go makes it easy to read code and Rust makes it easy to write code. If Go had sum types it would be a much nicer language to write complex applications in.
I find Go code mind numbing to read. There's just _so much of it_ that the parts of the code that should jump out at me for requiring greater attention get lost in the noise. Interfaces also make reading Go more difficult than it could be without LSP - there's no `impl Xyz for` to grep for.
It's the complete opposite for me. Rust code, especially async Rust code, is just full of noise whose only purpose is to make the borrow checker shut up.
Go does have sum types, but the syntax is awkward and easy to miss, so many don't recognize that it's there, and those that do don't love using it.
This forms a closed set of types (A, B, nil -- don't forget nil!) but the compiler doesn't understand it as such and complains that the following type-switch is not exhaustive ("missing return"):
    func Foo(s SumType) bool {
        switch s.(type) {
        case A: return true
        case B: return true
        case nil: return true
        }
    }
Also, you, the package author, may know what constitutes SumType, but consumers of your package don't, at least not without the source. Moreover, you can spread A, B, and any other implementations of SumType across many source files, making the question hard to answer even with the source. This is even a problem for the standard library: consider go/ast and its Decl, Expr, and Stmt interfaces, none of which document what types actually implement them.
Right. While it does have sum types, it doesn't have some other features found in other languages.
But, of course, if one wanted those features they would talk about those features. In this discussion, we're talking specifically about sum types, which Go most definitely does have.
> nil -- don't forget nil!
This is why alternative syntax has never been added. Nobody can figure out how to eliminate nil or make it clear that nil is always part of the set in a way that improves upon the current sum types.
Interfaces in Go aren't types, so no, that's not a sum type; it's just an interface.
The set of objects that can fulfill that interface is not just string and int, it’s anything in the world that someone might decide to write an isSumType function for.
Interfaces in Go are structurally typed but they're still types. A variable of an interface type has two components: a pointer to its dynamic type information, including virtual method table, and a pointer to its value. When you consider that any compiled Go program has a finite set of known types concretely implementing each of its interfaces, they essentially become discriminated unions, albeit without Rust's compact inline representation (unless the dynamic type is itself a thin pointer).
> it’s anything in the world that someone might decide to write an isSumType function for.
No. Notice the lowercase tag name. It is impossible for anyone else to add an arbitrary type to the closed set.
Unless your argument is that sum types fundamentally cannot exist? Obviously given a more traditional syntax like,
    type SumType tagged {
        A | B
    }
...one can come along and add C just the same. I guess that is true in some natural properties of the universe way. It is a poor take in context, however.