I mean, it's a bit hard to draw conclusions from just monthly sales across the whole of Europe. 10% is a lot, but far from overwhelming, and individual popular models can create big swings in the overall stats.
For example, the Brits love their SUVs, and the Jaecoo undercut the market, so that particular model has been selling very well.
I think we should wait for the stats to stabilize over time. Chinese smartphones did something similar: initially they were way better for way cheaper, but other manufacturers adapted, so they failed to grab huge chunks of the market.
The report uses the number of newly registered vehicles AFAIU and derives the total % of Chinese vehicles sold from that, so it's not really monthly sales but the general trend, no?
Market share amounts to 10% as an absolute figure, but that isn't what I find most important in the article. It's the trend I find intriguing: it doubles or triples each year, so if that or a similar trend continues, Chinese manufacturers will penetrate the market substantially more.
They innovate at a much higher pace, at much lower price points, and at pretty high quality, as the evidence we have so far suggests. So IMO it's going to be a hell of a ride for European manufacturers to adapt: they need to start moving faster and deliver at much lower cost. That means a complete restructuring, which I find hard to believe will happen any time soon.
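As a toy illustration of why the trend matters more than the current share (the starting share and doubling rate are assumptions from the thread, ignoring competitive response and saturation):

```python
# Toy extrapolation: a 10% share that doubles each year (assumed rate,
# not a forecast; real markets push back well before saturation).
share = 0.10
for year in range(1, 4):
    share = min(share * 2, 1.0)
    print(f"year {year}: {share:.0%}")
```

Even a couple of years of compounding like this would change the picture entirely, which is exactly why the doubling matters more than the 10%.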
Well, I wouldn't extrapolate much from this. Look at what happened to entry-level EVs: brands like BYD and MG undercut VW, Kia and others, and their cars came fully loaded with extras missing from Western models.
In response VW and co. dropped prices, so Chinese cars are not that much cheaper again.
Just to understand where I'm coming from: if the Chinese put out a better car for less money, I would buy it in a heartbeat. But from listening to reviews, as well as expert opinion, you don't get better stuff for the same money just by buying Chinese.
Interesting article. According to it, the missing piece is scaling the conversion facilities from 8% to x%, and then scaling the uranium enrichment process from 30% to x%. With that in place, the heavy dependency on Russia+China would be solved, no?
It's wrong to assume incompetence, which is what you did to a comment that displays a much deeper chain of thought about the subject. A better approach would be to reflect on your own opinions and critically assess them, as the comment points out. To be more specific, what makes you think that the person you're replying to is not aware that LLMs can give false information, and is not taking that into account?
They do not find it favorable all of the time. If you look at the "What people are concerned about" section, these same people call out "Unreliability" as their top concern. So you can be excited about and critical of the technology at the same time. To me this is a more telling indicator than people at either extreme, highly critical of the tech or not critical at all.
For coding use cases you may want a way to search for symbols themselves or do a plain text exact match for the name of a symbol to find the relevant documents to include. There is more to searching than building a basic similarity search.
Sorry, but who mentioned coding as a use case? My comment was general and not specific to coding, and I don't understand where you got the idea that I am arguing that a similarity-search engine would be a substitute for a symbol-search engine, or that symbol search is inferior to similarity search. Please don't put words in my mouth. My question was genuine and made no presumptions.
Even with the coding use-case you would still likely want to build a similarity search engine because searching through plain symbols isn't enough to build a contextual understanding of higher-level concepts in the code.
I mentioned coding as a use case in my comment you replied to. You were asking for an example for when one wouldn't use vector search and I provided one. I did not say similarity search would be a substitute. I said that for the coding case you do not need it.
>you would still likely want to build a similarity search engine
In practice, tools like Claude Code, Codex, Gemini, Kimi Code, etc. get away with searching for code via grep/find and understanding code by loading a sufficient amount of it into the context window. That is enough to understand higher-level concepts in the code. Maintaining a vector database on top of this is not free and adds extra complexity.
In your reply you said "There is more to searching than building a basic similarity search," which assumed and implied all kinds of things and was completely unnecessary.
> In practice tools like Claude Code, Codex, Gemini, Kimi Code, etc are getting away with searching for code with grep / find and understanding code by loading a sufficient amount of code into the context window
"Getting away" is the formulation I would use as well. "Sufficient amount", OTOH, is arguable and subjective. What suffices in one case does not in another, so how sufficient it really is depends on the usage patterns, e.g. the type and size of the codebase and the actual queries asked.
The crux of the problem is which parts of the codebase, and how much of it, to load into the context without blowing up the context window, while still keeping the model able to reason about the codebase correctly.
And I find it hard to argue that building the vector database would not help exactly in that problem.
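To make the contrast in this thread concrete, here is a minimal sketch. All file names and contents are made up, and a bag-of-words cosine stands in for real embeddings: exact symbol lookup only finds literal matches, while similarity search can rank a file by conceptual overlap with the query.

```python
import math
import re
from collections import Counter

# Toy corpus standing in for source files (hypothetical names and content).
docs = {
    "auth.py": "def verify_token(token): check signature and expiry of the token",
    "db.py": "def connect(url): open a pooled database connection",
    "session.py": "def refresh_session(user): renew login credentials before expiry",
}

def symbol_search(symbol, corpus):
    """Exact symbol lookup, the grep-style approach."""
    pattern = re.compile(r"\b" + re.escape(symbol) + r"\b")
    return [name for name, text in corpus.items() if pattern.search(text)]

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def similarity_search(query, corpus):
    """Bag-of-words cosine, a crude stand-in for embedding similarity."""
    qv = Counter(query.lower().split())
    scored = [(cosine(qv, Counter(text.lower().split())), name)
              for name, text in corpus.items()]
    return [name for score, name in sorted(scored, reverse=True) if score > 0]

print(symbol_search("verify_token", docs))
print(similarity_search("renew expiring login", docs))
```

The first query needs the exact identifier; the second finds session.py without naming any symbol in it, which is the kind of "higher-level concept" lookup grep alone can't do.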
Perf counters are only indicative of certain performance characteristics at the uarch level; improving one or more of those metrics does not necessarily translate into measurable performance gains in end-to-end workloads deployed on a system.
That said, one of the comments above suggests that the HW change was a switch to Ivy Bridge, when zeroing memory became cheaper, which is a bit unexpected (to me). So you might be more right when you say that the improvement was the result of memory allocation patterns and jemalloc.
Then it should be pretty easy to demonstrate that 20% "faster for free", no? But as always, the devil is in the details. I experimented a lot with huge pages, and although in theory you should see a performance boost, the workloads I used to test this hypothesis did not produce anything statistically significant/measurable. So my conclusion was... it depends.
Yes, I understand that. It is implied that there's a high TLB miss rate. However, I'm wondering whether the penalty, up to 4 extra memory accesses for a 4-level page table, which amounts to ~20 cycles if the page-table entries are already in L1 cache, or ~60-200 cycles if they are in L2/L3, would be noticeable in workloads which are IO bound. In other words, would such workloads benefit from switching to huge pages when the CPU mostly sits waiting for data to arrive from storage anyway?
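A quick back-of-envelope, using the cycle estimates above plus assumed rates for everything else, shows why the effect can vanish in IO-bound runs:

```python
# Back-of-envelope only; every number here is an assumption, not a measurement.
cpu_ghz = 3.0                  # assumed clock
walk_cycles = 100              # assumed 4-level walk cost with PTEs in L2/L3
tlb_miss_rate = 0.01           # assumed 1% of accesses miss the TLB
accesses_per_sec = 100e6       # assumed memory access rate while the CPU is busy

# Fraction of each CPU-busy second spent in page walks.
walk_sec = accesses_per_sec * tlb_miss_rate * walk_cycles / (cpu_ghz * 1e9)
print(f"page walks: {walk_sec * 100:.1f}% of each busy second")

# If the workload is IO bound and the CPU is busy only 10% of the time,
# the end-to-end impact shrinks proportionally.
cpu_busy_fraction = 0.10
print(f"end-to-end impact: {walk_sec * cpu_busy_fraction * 100:.2f}%")
```

Under these assumptions a few percent of busy time turns into well under 1% end to end, which matches the "it depends" experience with huge pages above.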
I am trying to understand the reason behind why "zeroing got cheaper" circa 2012-2014. Do you have some plausible explanations that you can share?
Haswell (2013) doubled store throughput to 32 bytes/cycle per core, and Sandy Bridge (2011) doubled load throughput to the same, but the dataset being operated on at FB is most likely much larger than what L1+L2+L3 can fit, so I wonder how much effect the vector units could have had: bulk zeroing of large datasets is going to be bottlenecked by single-core memory bandwidth anyway, which at the time was ~20 GB/s.
Perhaps the operation became cheaper simply because of moving to another CPU uarch with a higher clock and more memory bandwidth, rather than because of vectorization.
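A rough sanity check of the bandwidth argument above (the clock and DRAM figures are assumptions; the per-cycle store widths are the ones mentioned in this thread):

```python
# Is bulk zeroing capped by store throughput or by DRAM bandwidth?
# All figures assumed for illustration, circa a ~2012 core.
clock_hz = 3.0e9
store_bw_sandy = 16 * clock_hz    # 16 B/cycle stores -> ~48 GB/s peak
store_bw_haswell = 32 * clock_hz  # 32 B/cycle stores -> ~96 GB/s peak
dram_bw = 20e9                    # ~20 GB/s sustained single-core DRAM bandwidth

# For data far larger than the caches, the achievable zeroing rate is
# the smaller of the two limits.
print(min(store_bw_sandy, dram_bw) / 1e9, "GB/s on the older core")
print(min(store_bw_haswell, dram_bw) / 1e9, "GB/s on the newer core")
```

Under these assumptions both cores are DRAM-limited at the same ~20 GB/s, so doubling store throughput alone shouldn't make cache-exceeding zeroing cheaper; any gain would have to come from clock, memory bandwidth, or something else in the uarch move.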
https://www.carscoops.com/2026/01/chinese-car-sales-europe-2... says pretty much the opposite. Maybe more detailed analysis is provided by https://chinaevhome.com/2026/01/20/chinese-automakers-europe....
Both of them align with what I personally see on the streets: more and more Chinese brands, especially BYD, MG and Geely.