Hacker News | lhecker's comments

The plan is indeed to make it extensible if time and popularity permit. LSP would be an extension itself, however, instead of being built into the editor. We want to retain a lean core editor so people can ship it everywhere (e.g. even into small Docker images).


That’s excellent. Thanks for sharing, and best wishes for the team’s success and the popularity of the new editor!


It's indeed Maple Mono. I love that font!


Abstracting away the `assume_init` is a great idea! I think I could use something like that for the editor. The only concern I have is that the `read` function is generic over the parameter type. I'd ideally _really_ prefer it if I didn't need two copies of the same function just to switch between `[u8]` and `[MaybeUninit<u8>]` due to different return types. [^1] I guess the approach could be tuned to avoid this?

Personally, I also like the simpler approach overall, compared to the `BorrowedBuf` trait, for the same reasons outlined in the article.

While this possibly solves part of the pain points I had, what I meant to write is that in an ideal world I could write Rust while mostly not thinking about this issue much, if at all. Even with this approach, I'd still need to decide whether my API needs to take a `[u8]` or a `Buffer`, just on the off chance that a caller further up the call chain may want to pass in an uninitialized array. This then requires making the call path generic over the buffer parameter, which may end up duplicating any of the functions along the path, even though that's not really my intention when marking it as `Buffer`.

I think if there was a way to modify Rust so we can boldly state in writing "You may cast a `[MaybeUninit<T>]` into a `[T]` and pass it into a call _if_ you're absolutely certain that nothing reads from the slice", it would already go a long way. It may not make this more comfortable yet, but it would definitely take away a large part of my worries when writing such unsafe casts. That's basically what I meant with "occupy my mind": It's not that I wouldn't think about it at all, rather it just wouldn't be a larger concern for me anymore, for code where I know for sure that this requirement is fulfilled (i.e. similar to how I know it when writing equivalent C code).
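For illustration, this is the kind of cast I mean (a minimal sketch; the helper name is made up, and under today's rules this is exactly the cast whose soundness is in question):

    use std::mem::MaybeUninit;

    /// Hypothetical helper: reinterpret an uninitialized buffer as initialized so it
    /// can be handed to a write-only callee. The contract I'd like to be able to state
    /// is "sound if and only if the callee never reads from the slice".
    unsafe fn assume_write_only(buf: &mut [MaybeUninit<u8>]) -> &mut [u8] {
        // Same layout either way; the worry today is that merely creating this
        // `&mut [u8]` to uninitialized memory may already be UB, however it's used.
        std::slice::from_raw_parts_mut(buf.as_mut_ptr().cast::<u8>(), buf.len())
    }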

Edit: jcranmer's suggestion of write-only references would solve this, I think? https://news.ycombinator.com/item?id=44048450

[^1]: This is of course not a problem for a simple `read` syscall, but may be an issue for more complex functions, e.g. the UTF8 <> UTF16 converter API I suggested elsewhere in this thread, particularly if it's accelerated, the way simdutf is.


What I meant is that if I write a UTF8 --> UTF16 conversion function for my editor in C I can write

  size_t convert(state_t* state, const void* inp, void* out)
This function now works with both initialized and uninitialized data in practice. It's also transparent over whether the output buffer is `u8` (a byte buffer to write out into a `File`) or `u16` (a buffer for then using the UTF16). I've never had to think about whether this might not work (in this particular context; let's ignore any alignment concerns for writes into `out` in this example) and I don't recall running into any issues writing such code in a long long time.

If I write the equivalent code in Rust I may write

  fn convert(&mut self, inp: &[u8], out: &mut [MaybeUninit<u8>]) -> usize
The problem is now obvious to me, but at least my intention is clear: "Come here! Give me your uninitialized arrays! I don't care!". But this is not the end of the problem, because calling this function is theoretically unsafe. If you have a `[u8]` slice for `out` you have to convert it to `[MaybeUninit<u8>]`, but then the function could theoretically write uninitialized data into it, and that's UB, isn't it? So now I have to think about this problem and write this instead:

  fn convert(&mut self, inp: &[u8], out: &mut [u8]) -> usize
...and that will also be unsafe, because now I have to convert my actual `[MaybeUninit<u8>]` buffer (for file writes) to `[u8]` for calls to this API.

Long story short, this is a problem that occupies my mind when writing in Rust, but not in C. That doesn't mean that C's many unsafeties don't worry me, it just means that this _particular_ problem type described above doesn't come up as an issue in C code that I write.

Edit: Also, what usefulcat said.


Why wouldn’t you accept a &mut [MaybeUninit<T>] and return a &mut [u8], hiding the unsafe bits that transmute the underlying reference?

Something like:

  fn convert<'i, 'o>(inp: &'i [u8], buf: &'o mut [MaybeUninit<u8>]) -> &'o mut [u8]
(Honest question, actually… because the above may be impossible to write and I’m on my phone and can’t try it.)

Edit: it works: https://play.rust-lang.org/?version=stable&mode=debug&editio...
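Roughly what I had in mind, fleshed out (a sketch only; the "conversion" is just a byte copy so it compiles standalone):

    use std::mem::MaybeUninit;

    fn convert<'o>(inp: &[u8], buf: &'o mut [MaybeUninit<u8>]) -> &'o mut [u8] {
        let mut written = 0;
        for (dst, &src) in buf.iter_mut().zip(inp) {
            dst.write(src); // stand-in for the real UTF-8 -> UTF-16 work
            written += 1;
        }
        // SAFETY: exactly the first `written` elements were initialized above.
        unsafe { std::slice::from_raw_parts_mut(buf.as_mut_ptr().cast::<u8>(), written) }
    }

    fn main() {
        let mut buf = [MaybeUninit::<u8>::uninit(); 8];
        let out = convert(b"hi", &mut buf);
        assert_eq!(&out[..], b"hi");
    }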


That's a fair workaround for my specific example. But I believe it's possible to contrive a different example where such a solution would not be possible. Put differently, I only tried to convey the overall idea of what I think is a shortcoming in Rust at the moment.

Edit: Also, I believe your code would fail my second section, as the `convert` function would have difficulty accepting a `[u8]` slice. Converting `[u8]` to `[MaybeUninit<u8>]` is not safe per se.


Yeah, you’d need to do something like accept an enum that is either &mut [u8] or &mut [MaybeUninit<u8>], and make a couple of impl From<>’s so callers can .into() whatever they want to pass…
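Roughly (an untested sketch of the shape I mean; all names are made up):

    use std::mem::MaybeUninit;

    // One parameter type, two ways to construct it.
    enum OutBuf<'a> {
        Init(&'a mut [u8]),
        Uninit(&'a mut [MaybeUninit<u8>]),
    }

    impl<'a> From<&'a mut [u8]> for OutBuf<'a> {
        fn from(b: &'a mut [u8]) -> Self {
            OutBuf::Init(b)
        }
    }

    impl<'a> From<&'a mut [MaybeUninit<u8>]> for OutBuf<'a> {
        fn from(b: &'a mut [MaybeUninit<u8>]) -> Self {
            OutBuf::Uninit(b)
        }
    }

    fn convert<'a>(inp: &[u8], out: impl Into<OutBuf<'a>>) -> usize {
        match out.into() {
            OutBuf::Init(b) => inp.len().min(b.len()),   // write into `b` here
            OutBuf::Uninit(b) => inp.len().min(b.len()), // write into `b`, track what's initialized
        }
    }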

But I don’t think this is really a shortcoming, so much as a simple consequence of strong typing. If you want to take “whatever” as a parameter, you have to spell out the types that satisfy it, whether it’s via a trait, or an enum with specific variants, etc. You don’t get to just cast things to void and hope for the best, and still call the result safe.


Hey all! I made this! I really hope you like it and if you don't, please open an issue: https://github.com/microsoft/edit

To respond to some of the questions or those parts I personally find interesting:

The custom TUI library is so that I can write a plugin model around a C ABI. The existing TUI frameworks I found that were popular usually didn't map well to plain C. Others were just too large. The arena allocator exists primarily because building trees in Rust is quite annoying otherwise. It doesn't use bumpalo, because I took quite a liking to "scratch arenas" (https://nullprogram.com/blog/2023/09/27/) and it's really not that difficult to write such an allocator.
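To give a rough idea of the concept (a heavily simplified sketch, not the editor's actual allocator: it ignores alignment and typed allocations, and the `&mut self` borrow means only one allocation can be live at a time):

    /// Bump arena: allocation is a pointer bump, freeing is resetting a single
    /// offset, e.g. once per render frame ("scratch" usage).
    struct Arena {
        buf: Vec<u8>,
        offset: usize,
    }

    impl Arena {
        fn with_capacity(cap: usize) -> Self {
            Self { buf: vec![0; cap], offset: 0 }
        }

        /// Hands out a slice from the arena, or None if it's full.
        fn alloc(&mut self, len: usize) -> Option<&mut [u8]> {
            let start = self.offset;
            let end = start.checked_add(len)?;
            if end > self.buf.len() {
                return None;
            }
            self.offset = end;
            Some(&mut self.buf[start..end])
        }

        fn reset(&mut self) {
            self.offset = 0;
        }
    }

    fn main() {
        let mut frame = Arena::with_capacity(4096);
        let line = frame.alloc(80).unwrap();
        line.fill(b' ');
        frame.reset(); // "frees" everything at once for the next frame
    }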

Regarding the choice of Rust, I actually wrote the prototype in C, C++, Zig, and Rust! Out of these 4 I personally liked Zig the most, followed by C, Rust, and C++ in that order. Since Zig is not internally supported at Microsoft just yet (chain of trust, etc.), I continued writing it in C, but after a while I became quite annoyed by the lack of features that I came to like about Zig. So, I ported it to Rust over a few days, as it is internally supported and really not all that bad either. The reason I didn't like Rust so much is because of the rather weak allocator support and how difficult building trees was. I also found the lack of cursors for linked lists in stable Rust rather irritating if I'm honest. But I would say that I enjoyed it overall.

We decided against nano, kilo, micro, yori, and others for various reasons. What we wanted was a small binary so we can ship it with all variants of Windows without extra justifications for the added binary size. It also needed to have decent Unicode support. It should've also been one built around VT output as opposed to Console APIs to allow for seamless integration with SSH. Lastly, first class support for Windows was obviously also quite important. I think out of the listed editors, micro was probably the one we wanted to use the most, but... it's just too large. I proposed building our own editor and while it took me roughly twice as long as I had planned, it was still only about 4 months (and a bit for prototyping last year).

As GuinansEyebrows put it, there's definitely quite a bit of "NIH" in the project, but I also spent all of my weekends on it and I think all of Christmas, simply because I had fun working on it. So, why not have fun learning something new, writing most things myself? I definitely learned tons working on this, which I can now use in other projects as well.

If you have any questions, let me know!


I’d love to hear about the use of nightly features. I haven’t had time to dig into the usage, but that was something I was surprised by!


Up until around 2 months ago the project actually built with stable Rust. But as I had to get the project ready for release it became a recurring annoyance to write shims for things I needed (e.g. `maybe_uninit_fill` to conveniently fill the return value of my arena allocator). My breaking point was the aforementioned `LinkedList` API and its lack of cursors in stable Rust. I know it's silly, but this, combined with the time pressure, and combined with the lack of `allocator_api` in stable, just kind of broke me. I deleted all my shims the same day (or sometime around it at least), switched to nightly Rust and called it a day.

It definitely helped me with my development speed, because I had a much larger breadth of APIs available to me all at once. Now that the project is released, I'll probably stay with the nightly version for another few months until after `let_chains` is out in stable, because I genuinely love that quality-of-life feature so much and just don't want to live without it anymore. Afterward, I'll make sure it builds in stable Rust. There's not really any genuine reason it needs nightly, except for... time.
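A tiny made-up example of the kind of code I mean (not from the editor; on stable today you'd need nested `if let`s or a `match` for the same check):

    #![feature(let_chains)]

    fn first_word(line: Option<&str>, enabled: bool) -> Option<&str> {
        // One flat condition instead of two nested `if let`s plus an extra `if`.
        if let Some(line) = line
            && enabled
            && let Some(word) = line.split_whitespace().next()
        {
            Some(word)
        } else {
            None
        }
    }

    fn main() {
        assert_eq!(first_word(Some("hello world"), true), Some("hello"));
        assert_eq!(first_word(Some("hello world"), false), None);
    }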

Apropos custom helpers, I think it may be worth optimizing `Vec::splice`. I wrote myself a custom splice function to reduce the binary size: https://github.com/microsoft/edit/blob/e8d40f6e7a95a6e19765f...

The differences can be quite significant: https://godbolt.org/z/GeoEnf5M7
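For anyone who hasn't used it, a quick sketch of what `Vec::splice` does semantically (the concern above is purely about the amount of code the generic implementation expands to):

    fn main() {
        let mut v = vec![1, 2, 3, 4, 5];
        // Replace elements 1..3 in place with the contents of another iterator.
        let removed: Vec<_> = v.splice(1..3, [9, 9, 9]).collect();
        assert_eq!(removed, [2, 3]);
        assert_eq!(v, [1, 9, 9, 9, 4, 5]);
    }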


Thank you!

> I know it's silly,

Nah, what's silly is LinkedList.

> I think it may be worth optimizing `Vec::splice`.

If this is upstream-able, you should try! Generally upstream is interested in optimizations. sort and float parsing are two things I remember changing significantly over the years. I didn't check to see what the differences are and how easy that actually would be...


The quirkiness of Zig is real. I'd love for Zig to win out but it's just too weird, and it's not progressing in a consistent direction. I can appreciate you falling back to Rust.


> it's not progressing in a consistent direction

I've maintained a project in zig since either 0.4 or 0.5 and i don't think this is the case at all. supporting 0.12 -> 0.13 took no lines of code, iirc, and 0.13 -> 0.14 was just making sure my zig parser could handle the new case feature (that lets you write a duff's device).

zig may seem quirky but it's highly internally consistent, and not far off from C. every difference with c was made for good reasons (e.g. `var x:u8` vs `char x` gives you context free parsing)

i would say my gripes are:

1. losing async

2. not making functions const declarations like javascript (but i get why they want the sugar)


Thanks for this. Don't ask why, but I just defaulted to Edit on Linux. I noticed there's no locking for edited files. Not even a notification saying "This file has been modified elsewhere since you opened it. Do you still want to save?"

Can you confirm this? Is it something you intend to add? Curious to know why not, if the answer is no.


I checked the git history to see if you included the Zig version, but it looks like the first revision is Rust...

In the Zig version did you use my zigwin32 project or did you go with something else? Also, how did you like the Zig build system vs Rust's?


Back then (a year ago?) I simply included the Windows.h header into a Zig file. Is that not possible anymore? It worked great back then for me IIRC!

Overall, I liked the build system. What I found annoying is that I had to manually search for the Windows SDK path in build.zig just so I could addIncludePath it. I needed that so I could add ICU as a dependency.

The only thing that bothered me apart from that was that producing LTO'd, stripped release builds while retaining debug symbols in a separate file was seemingly impossible. This was extra bad for Windows, where conventionally debug information is always kept in a separate file (a PDB). That just didn't work, and it'd be great if that has been fixed since then (or gets fixed in the near term).


> Since Zig is not internally supported at Microsoft just yet (chain of trust, etc.)

Is there something about Zig in particular that makes this the case, or is it just an internal politics thing?


I don't know why, but I'm quite certain it's neither of the two. If anything, it probably has to do with commitment: When a company as large as MS adopts a new language internally, it's like spinning up an entire startup internally, dedicated to developing and supporting just that new language, due to the scale at which things are run across so many engineers and projects.


1. What do you like about Zig more than Rust?

2. How did you ensure your Zig/C memory was freed properly?

3. What do you not like about Rust?


> What do you like about Zig more than Rust?

It's been quite a while now, but:

- Great allocator support

- Comptime is better than macros

- Better interop with C

- In the context of the editor, raw byte slices work way better than validated strings (i.e. `str` in Rust) even for things I know are valid UTF8

- Constructing structs with .{} is neat

- Try/catch is kind of neat (try blocks in Rust will make this roughly equivalent I think, but that's unstable so it doesn't count)

- Despite being less complete, somehow the utility functions in Zig just "clicked" better with me - it somehow just felt nice reading the code

There's probably more. But overall, Zig feels like a good fit for writing low-level code, which is something I personally simply enjoy. Rust sometimes feels like the opposite, particularly due to the lack of allocators in most of its types, and because of the many barriers in place to writing performant code safely. Example: The `Read` trait doesn't work on `MaybeUninit<u8>` yet, and some people online suggest just zero-initializing the read buffer because the cost is lower than the syscall's. Well, they aren't entirely wrong, yet this isn't an attitude I often encounter in the Zig space.
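For context, this is the pattern I mean (a trivial sketch): with std's `Read` today, the easy, safe route is to pay for zeroing the whole buffer before every read.

    use std::io::Read;

    fn read_chunk(mut src: impl Read) -> std::io::Result<Vec<u8>> {
        let mut buf = vec![0u8; 64 * 1024]; // the zeroing I'd like to avoid
        let n = src.read(&mut buf)?;
        buf.truncate(n);
        Ok(buf)
    }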

> How did you ensure your Zig/C memory was freed properly?

Most allocations happened either in the text buffer (= one huge linear allocator) or in arenas (also linear allocators), so freeing was a matter of resetting the allocator in a few strategic places (i.e. once per render frame). This is actually very similar to the current Rust code, which performs no heap allocations in a steady state either. Even though my Zig/C code had bugs, I don't remember having memory issues in particular.

> What do you not like about Rust?

I don't yet understand the value of forbidding multiple mutable aliases, particularly at a compiler level. My understanding was that the difference is only a few percent in benchmarks. Is that correct? There are huge risks you run into when writing unsafe Rust: If you accidentally create aliasing mutable pointers, you can break your code quite badly. I thought the language's goal is to be safe. Is the assumption that no one should need to write unsafe code outside of the stdlib and a few others? I understand if that's the case, but then the language isn't a perfect fit for me, because I like writing performant code and that often requires writing unsafe code, yet I don't want to write actual literal unsafe code. If what I said is correct, I think I'd personally rather have an unsafe attribute to mark certain references as `noalias` explicitly.

Another thing is the difficulty of using uninitialized data in Rust. I do understand that this involves an attribute in clang which can then perform quite drastic optimizations based on it, but this makes my life as a programmer kind of difficult at times. When it comes to `MaybeUninit`, or the old `mem::uninitialized()`, I feel like the complexity of compiler engineering is leaking into the programming language itself and I'd like to be shielded from that if possible. At the end of the day, what I'd love to do is declare an array in Rust, assign it no value, `read()` into it, and magically reading from said array is safe. That's roughly how it works in C, and I know that it's also UB there if you do it wrong, but one thing is different: It doesn't really ever occupy my mind as a problem. In Rust it does.

Also, as I mentioned, `split_off` and `remove` from `LinkedList` use numeric indices and are O(n), right? `linked_list_cursors` is still marked as unstable. That's kind of irritating if I'm honest, even if it's kind of silly to complain about this in particular.

In all fairness, what bothers me the most when it comes to Zig is that the language itself often feels like it's being obtuse for no reason. Loops for instance read vastly different to most other modern languages and it's unclear to me why that's useful. Files-as-structs is also quite confusing. I'm not a big fan of this "quirkiness" and I'd rather use a language that's more similar to the average.

At the end of the day, both Zig and Rust do a fine job in their own right.


The design intent of unsafe Rust is that its usage should be rare and well-encapsulated, but supported in any domain. Alleviating a performance bottleneck is a fine reason to use unsafe, as long as it only appears at the site of the bottleneck and doesn't unnecessarily leak into the rest of the codebase.

The most basic reason why you can't have unrestricted mutable aliasing is because then the following code, which contains a use-after-free bug, would be legal:

    let mut val = Some("Hello".to_owned());
    let outer_mut = &mut val;
    let inner_mut = val.as_mut().unwrap(); // second &mut into the same place
    *outer_mut = None;                     // drops the String...
    println!("{}", inner_mut);             // ...which inner_mut still points into
If, as is sometimes the case, you need some kind of mutable aliasing in your program, the intended solution is to use an interior-mutability API (which under the hood causes LLVM's noalias attribute to be omitted). Which one to use depends on the precise details of your use case; some (e.g., RefCell) carry performance costs, while others (e.g., Cell) are zero-cost but work only for certain types or access patterns. Having to figure this out is annoying, but such is the price of memory safety without runtime garbage collection. In the worst-case scenario you can use UnsafeCell, which as the name suggests is unsafe, but works with any type with no performance cost. UnsafeCell is also a little bit heavy on boilerplate/syntactic salt, which people used to C sometimes find annoying; there isn't that much drive to fix this because, as per above, it's supposed to be rarely used.
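To make the Cell case concrete, a minimal sketch: two shared references to the same value can both mutate it, with no `&mut` aliasing involved.

    use std::cell::Cell;

    fn main() {
        let counter = Cell::new(0u32);
        let a = &counter;
        let b = &counter; // aliasing is fine: these are plain shared references
        a.set(a.get() + 1);
        b.set(b.get() + 1);
        assert_eq!(counter.get(), 2);
    }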

The "few percent in benchmarks" thing sounds like it's referring to the rule that it's UB to use unsafe code to make aliased &mut references even if you don't actually use those references in a problematic way. Lifting that rule would preclude certain compiler optimizations, and as per above would not fix the real problem; you still couldn't have unrestricted mutable aliasing. It would only alleviate the verbosity cost, and that could be done in a different way without the performance cost (like by adding special concise syntax for UnsafeCell) if it were deemed important enough.

The uninitialized-memory situation is pretty widely agreed to be unsatisfactory. Unfortunately it is hard to fix. Ideally the compiler would do control-flow analysis so that you can read from memory that was uninitialized only if it has definitely been written to since then. Unfortunately this would be a big complicated difficult-to-implement type system feature, and the need to make it unwind-safe (analogous to the concept of exception safety in C++) adds much, much more complication and difficulty on top of that. You could imagine an intermediate solution, wherein reading from uninitialized memory gets you a valid-but-unspecified value of the applicable type instead of UB, but that also has some difficulties, such as unsoundness in conjunction with MADV_FREE; see https://internals.rust-lang.org/t/freeze-maybeuninit-t-maybe... if you're curious for more details.

Again, the point here is not "the current design is optimal", it's "improving on the current design is a difficult engineering problem that no one has solved yet".

I think people who need cursors over linked lists use a third-party library from crates.io for this, but it's quite reasonable to think that the standard library should have this. Most of the time when a smallish feature like that remains unstable it's because nobody has cared enough about it to shepherd it through the stabilization process (perhaps because it's not a hard blocker if you can use the third-party library instead). Possibly that process is too slow and heavyweight, but of course enacting a big process change in a massively multiplayer engineering project that's governed by consensus is an even harder problem.


I think files as structs make lots of sense, as the language doesn't have to treat files in any special way then.


i had been dreaming about files as structs for about two decades and Zig made it happen.

i will say the struct files (files which are not a namespace struct.. that is, they have field values) are a bit weird, but at least the syntax is consistent with an implicit surrounding bracket.


I wonder if you used GitHub Copilot or some other LLM-based code generation tool to write any of the code. If not, that's a lot of code to write from scratch while presumably under pressure to ship, and I'm impressed.


I did use Copilot a lot, just not its edit/agent modes. I found that they perform quite poorly on this type of project. What I use primarily is its autocompletion - it genuinely cured my RSI - and sometimes the chat to ask a couple questions.

What you expressed is a sentiment I've seen in quite a few places now. I think people would be shocked to learn how much time I spent on just the editing model (= cursor movement and similar behavior that's unique to this editor = a small part of the app) VS everything else. It's really not all that difficult to write a few FFI abstractions, or a UI framework, compared to that. "Pressure to ship" is definitely true, but it's not like Microsoft is holding a gun to my chest, telling me to complete the editor in 2 months flat. I also consider it important to not neglect one's own progress at becoming more experienced. How would one do that if not by challenging oneself with learning new things, right? Basically, I think I struck a balance that was... "alright".


It was very interesting to me that you liked Zig the most. Thank you for making this!


Thanks for undertaking this project, as well as making WT such an awesome app!

I already expressed my appreciation on the repo, but was promptly shushed by your colleague for Incitement Of A Language War, hehe.

I'm impressed by the architecture and implementation choices, especially the gap buffer and cursor movement. It seems we've independently arrived at the same kinds of conclusions on how to min-max a text editor: minimal concepts with maximal functionality.

Others have asked about Zig. I would love to hear more about the work you did in C. Did you start in C? What are some reasons why you didn't continue with C? If you had continued in C, with hindsight, what would have been most annoying? What was clearly better in C? Again, with hindsight, what would have been the best parts of following through with C? I see that you are C-cultured as well (Chris Wellons' blog), and I would have guessed that some of the upsides of Zig you mention could have been elegantly solved in C using Chris' insights. I'm very curious how, with such expert advice available, you still looked elsewhere and preferred it. Looking forward to hearing about the C side of the story.

Good luck with the project, and see you on the repo :)


Can you say more about the chain of trust issue? Does Rust also not have that problem? Or are you using mrustc to bootstrap rustc?


Indeed, we have our own bootstrapped Rust toolchain internally. I think this has to do with (legal) certifications, but I'm not entirely sure about that.


BTW, are you aware of the Bootstrappable Builds folks' achievements? Starting with only an MBR's worth of commented machine code, plus a ton of source code, they build up to a Linux distro.

https://bootstrappable.org/ https://lwn.net/Articles/983340/ https://github.com/fosslinux/live-bootstrap/blob/master/part... https://stagex.tools/


I'm really glad this wasn't killed off with the recent Microsoft layoffs!


Why not webassembly ABI?


I'm not familiar with that, so I can't say. If you have any links on that topic, I'd appreciate it.

Generally speaking, the requirement on my end is that whatever we use is as minimal as it gets: minimal binary size overhead and minimal performance overhead. It also needs to be cross-platform, of course. This, for instance, precludes the WinRT ABI that's widely used on Windows nowadays.


Maybe something like https://www.codecentric.de/en/knowledge-hub/blog/plug-in-arc... ?

WebAssembly is the binary instruction format for the web, but now everyone is using it because it's portable and lightweight.

The idea is you can create a plugin using any language that compiles to WebAssembly: C, Rust, Pascal, Go, C++. Compile once and it should work on Windows, Linux, and Mac. No need to compile for multiple architectures.

Performance should be great, near native, but I guess there's going to be a problem with the added WebAssembly runtime size. Here is a runtime with estimated sizes: https://github.com/bytecodealliance/wasm-micro-runtime

And it's sandboxed too, so should be secure.


And I guess forcing FFI for plugins is going to be a headache for many plugin authors.


What I imagined is that people could load runtimes like node.js as a plugin itself in order to then load plugins written in non-native languages (JavaScript, Lua, etc.; node.js being an extreme example). I wonder if WASM could also be one such "adapter plugin"?

But plugins are still a long way off. Until then, I updated the issue to include your suggestion (thank you for it!): https://github.com/microsoft/edit/issues/17


Yeah, the choice is usually JS or Lua. WebAssembly is just a new option.


Why Rust over a compiled .NET lang? (e.g. C#)


Pretty much exclusively binary size. Even with AOT C# is still too large. Otherwise, I wouldn't have minded using it. I believe SIMD is a requirement for writing a performant editor, but outside of that, it really doesn't need to be a language like C or Rust.


Is there any context in which .NET runtime wouldn't be available on Windows (even if an older version, e.g. 4.x)? Because when you can rely on that and thus you don't need to do AOT, the .exe size for C# would likely be an order of magnitude smaller.


I intended for this editor to be cross-platform and didn't want to take on a large runtime dependency that small Docker images or similar may not be willing to bundle.


Will you implement theming?


> So, why not have fun learning something new, writing most things myself? I definitely learned tons working on this, which I can now use in other projects as well.

Because presumably you should have been doing it mostly for the benefit of Windows users, and wasting time because it’s a fun personal learning exercise means those users would suffer getting an underpowered app


Fun-shaming a passionate developer who, beyond their job description, delivered an editor that checks all the required boxes (small binary, fast, ssh-support, etc.) in just 4 months, while working on weekends and even Christmas, and calling it "wasting time" is incredibly upsetting. I'm grateful to work with people who value that kind of initiative.


You're making it all up, he's a terminal product manager, so it is not beyond job description.

Of course the app doesn't check all the boxes; plenty of features other editors have had years to add simply couldn't be added even if you work on Christmas (which, by the way, is also entirely driven by the NIH decision to write from scratch, not to mention doing a 3-language mock).

And I'm not shaming the fun, I'm saying that it's not a good justification for shipping a worse app to millions of users in a professional setting.


Wow, what a disheartening comment. Did GP not explain why they did what they did? This is an open-source project, the author expressed joy in working on it, and you have the heart to tell him off. This is far below what I expect of HN.


Did the comment not explain what the issue with that explanation is?

But maybe if you didn't misrepresent the situation so much you wouldn't lose your heart. This is not some tiny personal open source project where fun can be the only valid reason, but "will ship as part of Windows 11!", so millions of devices in a professional OS. Are your expectations so poorly calibrated that you have none in both cases? Why are they higher re. a forum comment?


What led you to say that the author did not have users' interests at heart? What led you to imply that there's something wrong with reimplementing something, or having fun, or whatever it is you disliked so much? What leads you to think that a person working on something delivered with Windows 11 deserves less respect than a person working on a less used system? Or, do you consider what you said neutral, well-argued criticism?


What led you to continue to misrepresent... everything?

Why did you make up a point about a person deserving respect and pass it off as my thought? Could you not come up with a more coherent difference between those two situations yourself?

Why are you asking a question about the motivation if you don't even understand "whatever" it is I disliked?

Why did you make up the implication that rejects having fun?

Why are you making it personal in the first place?

How can criticism be neutral when it's... critical?

What kind of well argued thing do you expect in a... single sentence to even ask such a question?

Again, why is there such a huge mismatch in your expectations re. a comment and a professional app?


I didn't intentionally misrepresent your comment, but I am open to having misunderstood it. Also, me answering with questions didn't help.

> > So, why not have fun learning something new, writing most things myself? I definitely learned tons working on this, which I can now use in other projects as well.

> Because presumably you should have been doing it mostly for the benefit of Windows users, and wasting time because it’s a fun personal learning exercise means those users would suffer getting an underpowered app

Would you care to elaborate on what intention you understood the author to have, which aspect of author's work you deemed as a waste of time, and why do you think the resulting app is underpowered for Windows 11 users?

I would also be interested in what mismatch you saw in my expectations as regards your original comment and a professional app.

I realise that we got off on a bad foot, but, if you care to, we can try and restart the conversation.


At the same time I think some of the most brilliant things to come from Microsoft are products of individual initiative, and when the project ends up compromised for some reason I get the idea that it's some kind of institutional higher-ups that do the damage after the fact.

Maybe just some residual instinct left over from times past when more people like Ballmer were still prominent, and they were not as user-enabling as today in some ways?


Unfortunately, none of these are responsible for the startup delay. Since version 1.18 effectively ~90% of the startup duration is spent starting up WinUI and having it draw the tab bar and window frame. It still needs a second to start. If it still used GDI like Windows NT did, it would start in well under 100ms even on an extremely old CPU.

Fixing this situation is essentially impossible because it requires rewriting almost everything that modern Windows is built on. Someone else in this thread said you couldn't sell 4 quarters' worth of work to fix this, but the reality is that it requires infinite quarters, because it requires throwing away the last 10 years of Windows shell and UI work, and that will never happen. You could paper over it by applying performance spot-fixes here and there, but it'll never go back to how it could be. At a minimum, you'd essentially have to throw away WinRT, which has an almost viral negative impact on performance. Never before have high-latency, yet still synchronous, cross-process RPCs been this prevalent, and everything's a heap-allocated object, even if it's within the same binary. It's JuniorFootgunRT.


> none of these are responsible for the startup delay

> effectively ~90% of the startup duration is spent starting up WinUI and having it draw the tab bar and window frame

I listed "Display scaling support", "Tabbed interface", and "transparency". Is none of that related to WinUI and drawing the tab bar?


Yeah, you're right, they're related to WinUI. But what I meant is that such features aren't inherently expensive, they're just made expensive due to the choice of UI framework.

Display scaling is very fast in GDI apps and has no impact on launch time, a tab bar is essentially just an array of buttons (minimal impact on launch time?) and transparency is a virtually cost-free feature coming from DWM. I wrote a WinUI lookalike using its underlying technology (Direct2D and DirectComposition) directly one time and that results in an application that starts up within ~10ms of CPU time on my laptop, quite unlike the 450ms I'm seeing for WinUI. That is including UIA, localization and auto-layout support.


> They're equivalent except that asynchronous callbacks is what actually happens [...]

Neither stackful nor stackless coroutines work like this in practice. The former suspends coroutines by saving and restoring the CPU state (and stack) and the latter compiles down to state machines, as mentioned in the article. Coroutines are functionally not equivalent to callbacks at all.
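To illustrate the state-machine point, a hand-written sketch of the rough shape a stackless coroutine lowers to (not literally what any compiler emits):

    // An enum of the states between suspension points, plus a resume function.
    enum Countdown {
        Running { remaining: u32 },
        Done,
    }

    enum Step {
        Yielded(u32),
        Finished,
    }

    impl Countdown {
        fn resume(&mut self) -> Step {
            match *self {
                Countdown::Running { remaining } if remaining > 0 => {
                    *self = Countdown::Running { remaining: remaining - 1 };
                    Step::Yielded(remaining - 1)
                }
                _ => {
                    *self = Countdown::Done;
                    Step::Finished
                }
            }
        }
    }

    fn main() {
        let mut co = Countdown::Running { remaining: 3 };
        loop {
            match co.resume() {
                Step::Yielded(n) => println!("yielded {n}"),
                Step::Finished => break,
            }
        }
    }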


Which is exactly what happens when you use asynchronous callbacks except that you have to do the storing of state explicitly. Stackless coroutines even typically compile to (or are defined as equivalent to) callback based code.


Stackless coroutines are literally the same thing.

Stackful coroutines are just a poor man's threads.


Hi! I'm the person who wrote that blog post. If you have any questions whatsoever, please let me know!

English is not my native language and "The solution is trivial" was meant sarcastically. The solution isn't trivial at all, nor is it complete, because solving the corner cases is complex.

I'd also like to mention ahead of time that DirectWrite/Direct2D is very fast at drawing western (Latin) text without coloring (>2000 FPS). In other words, in most situations Casey's suggestion doesn't help much, but it does address his termbench issue. At the time we assumed we were using the API incorrectly, after all Direct2D already had everything Casey suggested we should implement and it ran very fast outside of termbench. The new solution is largely identical to regular Direct2D with the only major difference being that ClearType/Grayscale gamma-correction is run on the GPU instead.

Casey's suggestion wasn't unique to me, as a significant number of other terminals do it exactly that way in OpenGL, something I was quite familiar with already. Originally I credited Alacritty in the blog, because that's how I knew how to do it, but I removed it to keep the article as succinct as possible.

Update: I'm very sorry for publishing the blog post without giving proper credit where it is due, and it's been updated since. I'm sorry for my continued, overly defensive behavior.


Removing a credit or a reference is probably the worst way to add succinctness to an article.

It's literally saying "the contributions of people whose work I based my work on is less important than the least useful sentence in describing my work".


Yeah, Leonard inadvertently gave away the game with that comment. Removing credit from a blogpost because you want it to be "succinct" is not justifiable. I think he's trying hard to pretend he didn't know any better; that excuse seems increasingly thin given the developments of this thread.

He just didn't care about other people.

Edit: it looks like Dustin from the same team at Microsoft shares the same hostility https://news.ycombinator.com/item?id=31284857


I'm not sure editing the article would be the proper thing to do at this point. Someone else suggested to edit in 1-2 weeks for possible future readers.

I can add back the sentence where I credit Alacritty for the general, underlying algorithm/idea then, because that's where I heard about it first. There isn't really any other performant way to implement GPU-accelerated terminals, so I don't think hearing about it again from Casey changed my perception of what the only alternative to Direct2D would be, in case Direct2D turned out to be fundamentally flawed for our purpose - which it did.


Sometimes logic isn't enough. YOU caused a huge amount of anger among the community by being condescending, and insisting "I learned this from alacritty, not the person I was condescending towards" isn't going to make anyone less angry. I'm just telling it like it is here. Humility is, unfortunately, what you need, and you can't fake humility.


> I'm not sure editing the article would be the proper thing to do at this point. Someone else suggested to edit in 1-2 weeks for possible future readers.

There's nothing improper with adding "Edit: credit to the <name of the person> who suggested the solution. While the solution seems trivial, there are certain technical challenges to overcome".

> because that's where I heard about it first.

No. That's not where you heard it first. Otherwise you wouldn't have written "this needs doctoral research on performance" in the original issue.

Edit: I misattributed this quote, you didn't say it.

> I don't think hearing about it again from Casey

There's an actual, verifiable screenshot of your reaction to his words. That was not an "again".


> There's nothing improper with adding

The point of the waiting-to-edit suggestion is to avoid a bunch of reactionary edits during the most heated period. Immediately implementing every edit demanded of you when different people are demanding different edits all at once seems improper to me, and waiting a short period seems a level-headed approach. It also avoids the look of just trying to cover everything up the quickest way possible instead of focusing on well-thought-out, sincere reactions.

> No. That's not where you heard it first. Otherwise you wouldn't have written "this needs doctoral research on performance" in the original issue.

You've attributed the quote to the wrong person, it was not lhecker who said that.

> There's an actual, verifiable screenshot of your reaction to his words. That was not an "again".

There is nothing in lhecker's response that suggests he hadn't heard of the alternative. His comment argued it should all be possible to handle in DirectWrite without using the alternative, not that the alternative hadn't been thought of.


> You've attributed the quote to the wrong person, it was not lhecker who said that.

Ah, true. I definitely misattributed this statement. For this I apologize.

> His comment argued it should all be able to handled in DirectWrite without using the alternative not that the alternative hadn't been thought of.

His statement was "it isn't worth it" and "we're already doing it via the framework we're using", only to state a year later [1] "We actually took the same approach Casey suggested" while insisting that "it doesn't help much".

If he'd heard about this approach before and seen it in, say, Alacritty, he has a very shitty way of showing it.

[1] https://news.ycombinator.com/item?id=31285723


I'm not sure I'm much of one for untangling interpretations of chunks taken from comments spread over literal years, but I think the distinction missing here is that previously the WT team thought they could get away with using DirectWrite's built-in approach to handle the precise issue. It was never that they didn't know there was an alternative to DirectWrite, just that they thought DirectWrite had a way to do it that would work just as well. They then found out this wasn't the case, completely backpedaled on which was the right architectural way to achieve the goals, and wrote this blog post about having implemented it the other way. This is the history the full blog post tries to detail.

It might be a good olive branch to extend thanks to cmuratori for trying to push that it was the right architectural choice but it was never about being the source of the idea. In either case it probably is a good idea to credit the other terminals that had already proven the implementation can work prior to all of this even starting though.


I totally agree with you


https://www.youtube.com/watch?v=nDO24U3hMkU

(it's a bit long-winded but the general message is sound)


The problem that the poster has is that his input is basically being dismissed.

He was too 'overconfident' with his 'opinion'. But it is exactly what has been implemented.

It's basically this scenario:

> "You don't understand, it's very very hard, next to impossible, to create a website where you can leave an email address. i think you should reread all the RFCs and a bunch of programming books. You have no right to say anything, WE are Microsoft, not you, so you have NOOOOOOOOOO idea"

> 1 year later: "We created a website where you can leave an email address. It was actually not that difficult. But look, WE are Microsoft, so let us explain how smart we were to figure it out".


That's not how I interpreted the github thread and blog post. It seemed like the MS team was saying "yes we know this is one way to solve it, but it's incredibly difficult", and one year later they post "we finally did it and this is how difficult it was". As the author of the blog post mentioned here, the sentence about how "trivial" it was, was actually sarcasm, and the rest of the blog post highlights the technical challenges.


That's not how I interpreted it... there's way too much elitism in tech. And also exaggerations about how easy or difficult things are.

"AI will solve things" -> they have no clue, and just trust it's gonna work itself.

"It's difficult to add a field to that form, it'll take 2 weeks" -> usually bs.

I read the thread, and I fully agree that it should be a weekend work. So yeah, it's just other devs mismanaging their time, and therefore saying that things take way longer.

The issue created 2 days afterwards (https://github.com/microsoft/terminal/issues/10461) starts with the previously suggested solution which didn't make any sense because the original post "didn't understand how DirectWrite works, and yadayadayada". The solution is as trivial as "but just optimize our render pass instead"

No sarcasm there.

The whole sarcasm is just covering up their own ass

1 day later implementation starts. 2 days after that it was implemented..

> Yep! And I'm entirely on it!

So eager to work on an issue created by himself instead of someone else!

> Terminal cannot turn away valuable performance work simply on ideological grounds.

The "Atlas-Engine" label hasn't been added to the original issue either.

It just stinks like people using politics to improve their HR performance review.


I find it weird that github user cmuratori is willing to write in so much detail about how to do it and how it's only a weekend of work, but doesn't submit a pull request themself. To me, it's rather easy to make requests and speculate about different approaches, but actually implementing the solution can be much more expensive.

Not to mention, as some of the other HackerNews threads mention [1], they very well could have already gotten cmuratori's idea from another person (in fact they say they already heard it from alacritty), and it could have already been an idea in consideration but they hadn't fully explored it yet because they chose a different path to try first. The github thread you linked seems to indicate the same thing. For cmuratori to just write a few github comments and then demand credit seems a bit much.

Also note that the blog post has been updated to mention cmuratori, although that could just be to appease the outcry.

[1]: https://news.ycombinator.com/item?id=31287644



Your team needs to get the story straight. The other person in your team indicated it wasn't sarcasm.

I can't imagine how badly your team would treat team members with less authority when your team treats your own users like this.

Non native speaking skill is really no excuse here.


The disconnect here is pretty clear: there's one side saying "Do this optimization, it's trivial" and the other side going "We have to write an entire text renderer, that's far from trivial", and the result was this blog post saying "Hey, we did this thing and it was trivial! All we had to do was write an entire text renderer first". So in a straightforward sense yes, the concept is trivial, but in a more real sense it's not, since your straightforward idea is straightforward only once you've done something difficult.


okay. so. a bit offtopic, but ... how come it's 2022 and Microsoft, with all its glory and industry leading best practices and trillions of microdollars of Azure-colored dollars valuation ... doesn't have a library for this? you know, the company that makes the thing, the suite that is used worldwide, underground, spacestationside, all the 365 days of the year.

I mean it's no wonder the introduction of the computer doesn't show up on the GDP charts when the industry is in fucking shambles and isn't even ashamed for it ... https://www.youtube.com/watch?v=5IUj1EZwpJY&t=35m40s

:|


No disconnect whatsoever... Just someone who wants to get a promotion by writing a blog post, and dismissing someone else's input.

Politics and ego that's all


Why did you join this person's discord under a pseudonym?

Edit: I ask because if you're taking a pessimistic view of this, here is the rough order of occurrences:

1. You insulted Casey, knowingly or not, intentionally or not, but in public.

2. You join his discord, under a fake name

3. You write this blog post, without credit to Casey (other than "a community member").

I hope you can see how someone could very easily take this as something more sinister than it probably was.


1) When did this person insult him? Note that there was more than one microsoft person in that issue.

2) I don't get why so many people talk about this as if it were some horrible thing. I don't use my real name on discord the vast majority of the time, I don't see why you expect others to do so. The person in question admitted who they were when it was mentioned and later apologized for not stating it when they first joined.

3) I'm personally more mad about it not crediting the other terminals that have that kind of approach like wezterm, or whatever prior art they used.

My understanding is they thought DirectWrite did caching internally, which made the suggestion unnecessary, but it turned out that wasn't working quite as they thought it was. I didn't think the terminal people were combative at all until Casey himself was.

Ultimately, I think a big part of the outrage is just people assuming the worst interpretation of anything Microsoft.


I don't think your name, pc86, is a real name either. That's the life of the internet. So much for pseudonyms. So if the very first statement is already that questionable, how can I believe the rest of what you write has any truth?


That this may or may not be pseudonymous is mostly a coincidence. I've talked about current (at the time) and past employers, I've posted my emails in comments and my profile before, my LinkedIn, used my first and last name separately, etc. This is not an anonymous profile by virtue of any effort on my part. I think just about every account - including my discord - is associated with my real identity and uses my real name.

The GP seems to use similar naming conventions for all their accounts. GitHub, HN, Twitter, are all "lhecker." It's a pretty standard naming convention - I'm pcopley on a ton of services. So I can see how it could raise an eyebrow when you get an aberration of that norm or habit or brand or whatever you want to call it. The simple (and honestly maybe even likely) explanation is just "it's not an anonymous account I just have a different username on discord." But if you're inclined to pick up a pitch fork every now and then it's easy to take that as some sort of nefarious act.


Besides what pc86 wrote: there’s a difference between ‘not a real name’ and ‘a fake name’. A dog is not a fake cat. Being a fake entails pretending to be the real thing, which ‘pc86’ - as should hopefully go without saying - is not.


This is way above my pay grade.

Just don’t take anything personal. Being defensive / know best is in this industry everywhere at all levels and organizations.

Being humble in general is hard, being humble in code reviews is harder, being humble to Internet comments in any forum… sometimes impossible. Then repeat that with two super genius experts - eek.

You find everyone is usually trying their best if you ever have face to face (except server admins, they are stone-cold psychopaths, stay away).

Would bet you both are helpful and open minded at the end of the day.


Oh yeah, for sure. I can't tell you how much I regret being that defensive about something that benign. This was certainly a life lesson. I've apologized to Casey in person before (well, to my best ability) and I'll apologize again if given the opportunity to do so.

I just wish people wouldn't say I'm taking credit, when it's an old, widely used idea and when I spent such an incredible amount of time reading and understanding DirectWrite/Direct2D's code in order to ship this renderer in the state it is.


It's not that you were taking credit, or that you were somehow wrong, it's that you initially put down Casey's idea so vehemently and so publicly. I think the lesson is, if you're going to step into the public arena, be humble, or be very, very right.


Maybe you should post in that github thread pointing out to the people originally arguing with Casey that the solution is in fact trivial and that he was right all along. It sounds like you and he are both on the same side in this discussion.


This is great advice (except the server admin part :-) !!)

It's sometimes tough to dissociate the problem with the people involved. Hope this will have a positive resolution as most of the people seem to be experts at what they do.


Most people are just performing the narrative that they use to shape their life in front of you (and the rest of the world). When someone comes at you overwhelmed with a disempowering emotion (e.g. locked up by anger, envy, snark, etc), how they behave has very little to do with who you are or what you did.

The first time this really hit home with me was when I was dating a girl who laughed at inappropriate times (e.g. it was extremely amusing to her how much my ski boots would squish my feet the first few days of the season). It was only after I broke up with her (for unrelated reasons) and saw her go through her own struggles that I learned that her family of origin was horrible and there was no one to comfort her when she would get hurt. Laughing was her way of dealing with the emotions of physical injury because she didn't know how to soothe herself or others.

Anyone who has the misfortune of having a friend or family member get ensnarled by the Qovid / Trump / populist zeitgeist can see that most of these people have gone through immense emotional suffering (often involving betrayal by a government agency, big business, or spouse), and are just bereft of the self awareness to understand what is going on with their emotions.

So yes, don't take anything personal. It almost never is.


>You find everyone is usually trying their best if you ever have face to face (except server admins, they are stone-cold psychopaths, stay away).

The server admin is a reflection of the union of the community of devs, maintainers, designers of their hardware, and users/benefactors of their boxen.

If your server admin is a psychopath, you may want to take a hard look at what goes into keeping their boxen ticking along smoothly, and the demands of those using it.

Sometimes you get to be a peaceful, serene shepherd.

Sometimes, you're trying to keep that last critical link to reinforcements alive under siege from all threats, foreign and domestic, all at the same time.

Occupational hazard I guess.


You need to handle your PR better, dude. Whenever you speak out, always take the chance to acknowledge the help/ideas of others, unless they are your competitors. Make sure your work isn't understated, though. Like "Turns out, many users complained about <...> and even a few shared patches (<link>, <link>). We even had a conversation with Mr. Smith suggesting <..>, so we finally took a good look at it and made a universal solution that covers all the bases".

It's not like you will get less credit for mentioning it - you still delivered the solution and that is what counts. But if you mention other people, it opens a few doors:

* They will like you and will be more likely to share further suggestions or code snippets (that will help you build your promotion package).

* The management will see you as a person who knows how to work with people, so you'll get a higher chance to get into the management track.

* Once you discuss a technical solution with somebody, and don't leave them in the dust like an asshole, you get basic mutual trust. If you ever lose your job, they might gladly recommend you to their company, or vice versa.


If you don't mind a comment about your English, I'd suggest that what you wrote seems like it's better described as "tongue in cheek" than "sarcastic". "Sarcastic" can have connotations of meanness or snarkiness whereas "tongue in cheek" suggests good-natured joking, which seems to match the tone of the original post better.


> English is not my native language and "The solution is trivial" was meant sarcastic. The solution isn't trivial at all, nor is it complete, because solving the corner cases is complex.

I think it's really difficult to convey sarcasm well in text, and what seems obvious to you as a writer might not come across as sarcasm to readers. I certainly didn't understand that as sarcastic right away.

It might be better to address it directly and in a friendly way, like this:

> Like I did, you might be wondering right now why we don't just create our own, much larger lookup table and wrap it around Direct2D. Well, we can’t just tell Direct2D to use our custom cache [...]


Thank you for your suggestion! I believe I can't just edit the article now, since that would be disingenuous, but I'll try to avoid sarcasm in the future and use something like you suggested instead.


You could edit the article in a week or two, once things are calmer (maybe leave the original text in crossed-out form, or a link to the original via archive.org); it might help future readers who don't get there via this controversy.


"Sarcasm" unfortunately has a negative connotation. It makes it sound like you were being spiteful.

In practice, I interpreted the text as "There's a simple solution... but we cannot use it", which is a perfectly fine thing to say, in my opinion.


Perhaps worth considering is that sarcasm doesn't transmit easily over text, and certainly not in contexts where it's not expected. If you want to be clear about it, a common convention is to write /s after your sarcastic statement.

It might also help to acknowledge that Casey's feedback and the interaction with them was constructive and instrumental to this improvement being considered and deployed, even if the team ultimately took another approach than the one that Casey suggested.


We actually took the same approach Casey suggested, at least when it comes to the drawing part which is what this is about. A lot of terminals implement it that way. We have a long way ahead of us to improve the performance of our text ingestion now though.

I'm not sure what role Casey's feedback had in this being considered for implementation. His original termbench tool was _incredibly_ valuable for sure, but I'm not sure later discussions changed the outcome of that. If we had figured out a way to solve the issue while continuing to use Direct2D, we would've definitely stuck with Direct2D. Since there was no solution, there was only one other way to solve it, and it's the way many other terminals already do it.


It’s a good idea to have a native-English-speaking editor (and sometimes more than one!) — ideally one who has full situational context — review draft blog posts before they are published, and this is a good example of why. Doubly so if you’re representing a big company.


Just going to add that referring to someone as a community member [of Microsoft] is such a self-aggrandizing term for "someone who reported it on the project's GitHub issues page".


I read it as member of the open source community rather than the Microsoft community.


Did you participate in the molly rocket discord?


> Casey's suggestion wasn't unique to me, as a significant number of other terminals do it exactly that way in OpenGl, something I was quite familiar with already.

In your reply to Casey you made it sound like you were unaware, when you say things like "I'm somewhat surprised that other accelerated terminals aren't already doing this". Is OpenGL not accelerated? What am I missing here?


You got the wrong person, I didn't write that. Once it became obvious that it's the only way to solve the issue and that a different use of Direct2D can't help with this, I implemented a prototype based on my pre-existing knowledge and explained how it works to the team.


Hi! Are you also the person who wrote the first, insulting, reply?


The names of the commenters may not be highlighted but they are still in the images. If "the first, insulting, reply" refers to "of the screenshots on Twitter" lhecker would be the second to reply.


The sarcasm was evident to me, FWIW. I don’t think you have anything to apologize for.


Sarcasm is usually very easy to spot if you assume the best in people. If you let cynicism take over then things said can be interpreted in the worst possible light.


There's no need for highly opinionated discussions. Differences in opinion should be a trigger to find a common understanding, it shouldn't result in defensive behavior. Everyone's just a person trying to help.


Honestly?! Your article is pretty good; this is a great example of how misreading and the Twitter cancel bubble play out :D

Do not leave your sarcasm/irony out of the articles, I enjoyed it.


Seems like the solution was to convert a TrueType font to a bitmap font. I remember doing the same thing for my video games when I was a kid in the '90s; it's good to know I wasn't the only one who found it challenging.
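
The core idea is a glyph atlas: rasterize each glyph once, remember where it landed in a texture, and let the GPU reuse that rectangle on every subsequent frame. Here's a minimal sketch of the caching side in Rust, purely for illustration — the names, the naive packing, and the stubbed-out rasterizer are made up and are not the Terminal's actual implementation:

  use std::collections::HashMap;

  /// Where a rasterized glyph lives inside the atlas texture.
  #[derive(Clone, Copy)]
  struct AtlasSlot { x: u32, y: u32, width: u32, height: u32 }

  /// A real key would also include font face, style, subpixel offset, etc.
  #[derive(Clone, Copy, PartialEq, Eq, Hash)]
  struct GlyphKey { glyph_id: u32, font_size_px: u32 }

  struct GlyphCache {
      slots: HashMap<GlyphKey, AtlasSlot>,
      next_x: u32, // naive left-to-right packing; real atlases use shelf/skyline packers
  }

  impl GlyphCache {
      fn new() -> Self {
          Self { slots: HashMap::new(), next_x: 0 }
      }

      /// Rasterizes a glyph only the first time it is seen; afterwards the
      /// renderer just samples the cached rectangle from the atlas texture.
      fn get_or_insert(&mut self, key: GlyphKey) -> AtlasSlot {
          if let Some(slot) = self.slots.get(&key) {
              return *slot; // cache hit: no rasterization work at all
          }
          let (width, height) = rasterize_into_atlas(key); // stubbed out below
          let slot = AtlasSlot { x: self.next_x, y: 0, width, height };
          self.next_x += width;
          self.slots.insert(key, slot);
          slot
      }
  }

  /// Placeholder for the actual rasterizer (DirectWrite, FreeType, ...).
  fn rasterize_into_atlas(key: GlyphKey) -> (u32, u32) {
      (key.font_size_px / 2, key.font_size_px)
  }

  fn main() {
      let mut cache = GlyphCache::new();
      let key = GlyphKey { glyph_id: 42, font_size_px: 24 };
      let first = cache.get_or_insert(key);  // rasterizes once
      let second = cache.get_or_insert(key); // pure hashmap lookup
      assert_eq!((first.x, second.x), (0, 0));
  }

The drawing side then only needs per-cell atlas coordinates, which is why this scales so much better than laying out and rasterizing text from scratch every frame.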


Casey's suggestions were pointless because Direct2D is fast at drawing western text without coloring? Didn't Casey open an issue because the terminal was slow with coloring?

It seems like you managed to optimize Direct2D with your 'glyph cache' pixel shader, but that doesn't mean Casey's suggestions were pointless.


I didn't say they were pointless anywhere. They were anything but.

I did say, however: If we had figured out a way to solve the issue while continuing to use Direct2D, we would've definitely stuck with Direct2D. Since there was no solution, there was only one other way to solve it, and it's the way many other terminals already do it.


You didn't use that word, but you did say "in most situations Casey's suggestion doesn't help much", which is probably what your parent post was responding to.

Despite the apologies you or your team have given, you continue to denigrate and downplay. First, Casey's suggestion "doesn't help much"; then, "Casey's suggestion wasn't unique". And so on. Casey suggested a solution, spent time proving it was easy, and you even used the suggested solution to good effect, and you're still trying to downplay Casey's contribution or involvement.

If Casey hadn't kept pushing, it's doubtful you'd even have gotten around to doing this, so some actual credit is warranted, along with perhaps a real apology.


Perhaps leave sarcasm out of technical blog posts, or add a footnote to indicate sarcasm?

I couldn't tell you were being sarcastic and it seems others couldn't as well. I only found out through this comment.

Text is the worst medium for sarcasm since sarcasm is often signaled by vocal inflection or body language, neither of which are present in your post.


Baffles me that people think sarcasm is a good way to communicate on the internet. You are begging to be misunderstood, either accidentally, or maliciously.


It's not really sarcasm (which usually connotes contempt or mockery, which I suspect the blog writer didn't intend). It's irony, and irony has been widely used in written communication since we stopped using writing exclusively for record-keeping.

It can and does work on the internet, provided (i) the writer is prepared to accept that a subset of readers won't "get it"; they'll fail to pick up on contextual clues that signal irony and (ii) you have to be a good enough writer to include those clues so that at least your intended audience knows not to take it literally.

EDIT: To clarify, in this case, irony was a bad idea because it was badly executed. The context that would allow readers to interpret "The solution is trivial" as ironic was only available to people who were privy to the original conversation, while the blog post was intended to be read and understood by a much wider audience who lacked that context.


Irony should not really be used in technical communication at all, though. The goal of technical writing is that as many readers "get it" as possible. Therefore, any rhetorical technique for which you have to "accept that a subset of readers won't 'get it'" is a bad technique for technical writing.


Writing is hard. Writing for a multi-lingual audience is harder. Writing in your non-native tongue is harder still.

> EDIT: To clarify, in this case, irony was a bad idea because it was badly executed.

That was exactly how I felt reading it. Had it been the last statement of a long and complex explanation, it would have landed differently and warranted a chuckle.


imo, sarcasm only works over text if your text can be taken sarcastically and non-sarcastically.


Everyone knows you’re supposed to denote sarcasm with </sarcasm>. </sarcasm>


There's a (renewed?) push from Autistic/Ally TikTok users to use Tone Tags / Tone Indicators, especially when communicating with ND people.

https://tonetags.carrd.co/#masterlist

A number of creators, especially cosplayers, have recently shared posts encouraging their use.

I don't know how widespread it is, because TikTok and IG both tend to feed you content relating to your niches. So I may be seeing a disproportionate number.

(I had to Google what NBH was about, it means "This isn't aimed at anyone specific reading this".)


Gosh this seems like a brilliantly effective next step in the TikTokers’ campaign to drain life of any and all colour and playfulness and spontaneity. As an autistic person, I’d cast my vote for ‘occasionally misunderstand things’ over ‘have ridiculous sarcasm warnings on everything so that you can’t actually be sarcastic, or anything but grimly solemn and annoyingly earnest 100% of the time’.

(Also, Lord save me from people who call themselves “allies”. It’s just called being a normal decent person, but that doesn’t let you brag about it or use autistic/black/etc people as fashion accessories, so I s’pose that’s off the menu..)


I think it's about writing for one's audience, and being inclusive.

I've got autistic friends who struggle with open questions. They strongly dislike opening greetings without quickly taking the conversation somewhere.

They often miss stuff that's implied in conversation. It has to be explicitly stated.

They suck at gauging tone and intent in written language. They worry about the feelings and opinions of others. It can be upsetting and stressful for them.

But they're smart, capable, fun, artistic and creative, kind, thoughtful and inclusive.

They are certainly not lacking in colour or playfulness. (Possibly lacking in spontaneity to a degree, but I don't think that's a deal-breaker.)

And it's great that you're comfortable enough with misunderstanding things for it to be preferable to an alternative.

I choose to change my language to suit them. If using /s and /nbh or whatever helps them to correctly parse what I write, and assists in me communicating, why would I choose not to do that?

When using spoken language, I denote sarcasm through tone of voice. Does that render it pointless? If not, why would using /s?


Wait! Where's your opening tag? You monster!


It's like those people that open a parenthesis (for a short sentence, and then go 5 paragraphs without bothering to close it!


It is a good way to communicate though.


Beautiful comment because I can't tell if you're being sarcastic, and your comment works either way.


Yes. It’s even formally known as Poe’s Law:

> Poe's law is an adage of Internet culture stating that, without a clear indicator of the author's intent, every parody of extreme views can be mistaken by some readers for a sincere expression of the views being parodied.

https://en.m.wikipedia.org/wiki/Poe%27s_law


What's different about the internet when it comes to written sarcasm?


The internet lends itself to immediate/premature responses. People read headlines without reading the article. People stop reading in the middle as soon as they feel they have something to say about what they’ve read. People don’t take the time to think about what they’ve read before responding.

And that’s before you even get to the internet’s tendency to read what’s written in the least generous way possible in order to score internet points with a response to something wholly divorced from what the author intended.

Now add sarcasm to that mix. Pulling sarcasm out of context often leads to quick-draw responses to the exact opposite of the point the author was making.


Nothing, obviously.


Communication isn't always the primary goal of things put on the internet.


If you limit your communication online to a subset that cannot be willingly or unwillingly misunderstood, then you will say nothing at all. People's capacity to misread is infinite.


"It's generally not good communication to use intentionally ambiguous language" != "It's only good communication if you literally can't misunderstand it"


Re-read what you just wrote and break down what you said:

> Baffles me that people think sarcasm is a good way to communicate on the internet.

The subtext here is that the idea of using sarcasm on the internet should be obviously stupid to everyone, thus you're stating anyone who exercises it is stupid. That or you're clearly smarter than everyone else.

> You are begging to be misunderstood, either accidentally, or maliciously.

Thus, if you're using sarcasm you're either stupid or evil.

Was your intent to insult people?

Sarcasm and irony can be effective means of communication, the same as exaggeration. Communication is hard and people posting on the internet don't(*) invest much thought most of the time.

* Typo


> people posting on the internet do invest much thought most of the time.

i would like to sign up for the internet you're on. where can i send my money?


That'd be nice right? Sorry that was a typo.


I'll try to avoid sarcasm in the future.

What I haven't understood yet is what significance "The solution is trivial" being sarcastic or not has on the article itself. I understand it's reflecting poorly on it due to what I said earlier, but is there something that makes the article harder to understand due to this? (I realize this question sounds a bit rude, but I'm genuinely curious and I don't know how else to phrase it.)


Firstly, I think people are making a mountain out of a molehill. The article remains comprehensible on the whole. It’s a good article.

What is presumably meant is the following: sarcasm and irony require a lot of skill and nuance to get right, precisely because they can be understood as saying something along with its inverse. More often than not, a sarcastic written comment will divide the audience into people who understood it to say P and those who understood it to say !P. This is especially true when your audience is large, culturally diverse, and (on average) rather literal-minded.

But don’t beat yourself up too hard. It’s a good article by any measure, and even more so for someone who is not a native English speaker.


Thank you for being reasonable here. A lot of the "feedback" saying the author should "avoid personal stuff" in technical articles is really just people expressing their own taste, rather than actually helping the writer express his irony better. Which I understood when reading the "trivial" that was not so trivial xD


I’ll be your mentor for 5 mins.

> What I haven't understood yet is what significance "The solution is trivial" being sarcastic or not has on the article itself.

If there’s no significance either way, it shouldn’t be part of the blog. It adds no value and takes away from your goal of being succinct, which you stated in one of your responses.

General (well-meaning) advice from a stranger:

1. Always leave personal feelings out of blogs and out of technical and professional communication - especially the broadcast type of communication. We tend to think about a small number of people, but a larger number of people without context will interpret things very differently.

2. Sarcasm, irony, etc. need context and are sometimes perceived differently by people from different cultural backgrounds. Your goal is to represent your and your team’s efforts while helping your users. Everything else will detract from it.

3. When faced with feedback, take it gracefully even if you disagree completely or it makes you mad. You don’t need to get defensive and explain ‘your side of the story’. It almost never goes well.

Also, why the hell were there such rude responses in the community post in the first place? I’ve worked at Microsoft before and I’d have roasted my team if one of them had responded in that disrespectful manner - even if the community member may have trivialized the work.


I think there is a lot of overreacting here to your blog post. That’s just the final piece. What rubbed me the wrong way was how your colleague castigated this person in the issue. I know people can be rough around the edges. I know they can be blunt and sometimes rude. But I think as MSFT you have a duty to rise above that by not engaging in it.

In any event, this whole thing is blown out of proportion and doesn’t deserve 300+ comments, let alone another from me.


"The solution is trivial: [possible solution]! Well, unfortunately, [problem with solution]."

The issue here isn't so much sarcasm as that "is" should be read as "seems" here. But I don't see any other idiomatic reading that sincerely calls the solution trivial.


> But I don't see any other idiomatic reading that is sincerely calling the solution trivial.

Agreed - it may not be obvious on first glance, but there's no other reasonable reading.


I for one caught on that the writing was analogous to, but more subtle than, "It sounds simple, you can just do blah, right?! But actually you can't! So we had to do this complex thing to get it to work."

So yeah, as noted, sarcasm is a tricky tool in writing. While I enjoy it in technical writing often, it's definitely not as common on a company-associated blog post (for various good reasons).

My own takeaway is a reminder: be careful crafting snark/jokes/sarcasm. The length of the statement probably increases the chances of being misinterpreted.


Nooo... I love sarcasm and/or irony in texts; they make me chuckle all the time. It actually keeps things lighter and more delightful to read.


It’s a bit silly to berate a customer for their use of ‘it’s easy’ and then use it yourself. ‘It was sarcastic’ and ‘it was a language issue’ are weak excuses.

It’s just poor form, why make an excuse in the first place? You can just say this was a mistake and try to be better next time.


Some very important goods are almost exclusively produced in rural areas (food, for instance, being produced by few people). These goods might be just as important as the goods produced by people living in large cities (services, for instance, being produced by many people).

If we start considering the people living in an area as the likely maintainers and experts of their field, I could imagine that we might want to assign the importance of votes based on the total importance of the "produced goods" in that area. That way these "maintainers" have a say in politics relative to their actual importance in society. I guess?


You might be interested in Hong Kong's "functional constituencies" system: https://en.wikipedia.org/wiki/Functional_constituency_(Hong_... Half of their legislative council is selected by voters in specific constituencies. Among them is specific rural constituency ("Heung Yee Kuk") and one for agriculture and fisheries, but also various more-urban constituencies like tourism, IT, and healthcare.

I don't have a good sense of how well this works in practice, and of course it is confounded by Hong Kong being a very unusual jurisdiction to start with. But yeah, I think it would be a more principled approach than having this one feature in our government to nominally give extra voting rights to rural populations in particular. It's difficult to figure out what those importances really are and it's unlikely everyone will agree, but I think everyone can agree that the answer is not that rural voters are the only special constituency.

Alternatively, there's a simpler argument - those things that are important in society are important because everyone cares about it, and therefore an urban legislator is unlikely to say "We don't need farms," because the urban legislator needs to eat. If the agricultural constituency says they need some measure, they already have the ability to convince the general voting public in proportion to their importance in society.

(Also, it's not like our current system effectively gives rural voters an additional voice. California is our top agricultural exporter, but it tends to vote in the opposite way from the smaller-population states. And even if you did give a specific voice to California's agricultural interests, it's highly likely that they'd disagree with the policies advocated by smaller-population states to reduce immigration and increase deportations, for instance.)


>That way these "maintainers" have a say in politics relative to their actual importance in society. I guess?

Who decides what's important? Who decides the relative importance of "maintainers?"

Even if that were a good idea (I don't think it is), there's no reasonable way to make something like that work.


> Also, like k8s & Bazel, [gRPC] is less advanced and lower performance than their internal technology.

Are you sure about that? As far as I know gRPC is literally just as good as Bazel, which is why Google is even migrating to it internally.

For instance this comment agrees with me: https://news.ycombinator.com/item?id=12348286


Note that Bazel is a build system (https://bazel.build). I believe you are confusing it with Stubby. That's not to say that the general thrust of what you're saying is necessarily wrong though.


I think you are misreading that comment. He’s saying that Stubby is fast, not gRPC. In fact, performance is the big unknown with gRPC adoption within Google. It definitely isn’t on par with Stubby today, and it has to get there before anyone significant will switch to it.


I can't comment on raw numbers (because I simply don't have them) but at least for the service I work on, replacing Stubby with gRPC wouldn't really move the needle even if it was 2-3x slower (it might be faster, this is just for illustration) -- we spend our time waiting on IO from other services or crunching numbers in the CPU. Being a Java service, gRPC/Java might well be just as fast or faster than Stubby Java, but I could understand that Stubby C++ has been hyperoptimized over the years vs. gRPC C core which might have a ways to go. By the latest performance dashboard [1, 2], gRPC/Java is leading the pack but gRPC C++ doesn't seem like it's slouching too much either. I seem to remember the C++ impl crushing Java at performance a while back, so I'm sure that'll change in the future.

Honestly though? It'd take a _very_ demanding workload such that your RPC system was the bottleneck (so long as they're within constant factors of each other). There are services like that, but they're the exception and not the norm. Most services don't need to do 100kQPS/task. Even then, at that point you're spending a lot of time on serialization/deserialization, auth, logging, etc. Your service is more than its communication layer; even if that's important to optimize, it's still just a minor constant factor.
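
As a back-of-envelope illustration of that point (every number here is made up purely for the example): if a request spends most of its time waiting on backends or crunching numbers, even tripling the RPC layer's cost barely moves end-to-end latency.

  fn main() {
      // Hypothetical request breakdown, in milliseconds.
      let other_work_ms = 9.5;   // backend IO waits, CPU number crunching, ...
      let rpc_overhead_ms = 0.5; // serialization, framing, auth, logging

      let before = other_work_ms + rpc_overhead_ms;
      let after = other_work_ms + rpc_overhead_ms * 3.0; // pretend the RPC layer got 3x slower

      println!(
          "before: {before:.1} ms, after: {after:.1} ms ({:.0}% slower overall)",
          (after / before - 1.0) * 100.0
      );
      // Prints roughly: before: 10.0 ms, after: 11.0 ms (10% slower overall)
  }

The split flips for the truly RPC-bound services mentioned above, but those are exactly the exception rather than the norm.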

The real problem is inertia. There's a lot of code/tools/patterns built up around Stubby and the semantics of Stubby (including all its features which likely haven't been ported to gRPC yet) and that's difficult to overcome.

Our #1 use of gRPC so far I would imagine is at the edge. gRPC is making its way into Android apps since it's pretty trivial for translating proxies to more or less 1:1 convert gRPC to Stubby calls.

[1] https://performance-dot-grpc-testing.appspot.com/explore?das...

[2] https://performance-dot-grpc-testing.appspot.com/explore?das...


You and I seem to be using a different denominator to quantify "most" services. I'm thinking of it as "most" in terms of who has all the resources / budget. You seem to be thinking of it in terms of sheer number of services or engineers working on them. The fact is that the highly demanding services have the huge majority of the resources, and are the most sensitive to performance issues. If your service uses 10% of Google's datacenter space, you won't accept a 5% or even 1% regression just so you can port to gRPC, because at that scale your team can just staff someone or even several people to maintain the pre-gRPC system forever and still come out ahead on the budget.

Totally agree that world-facing APIs will all be gRPC and that makes perfect sense to me.


> You seem to be thinking of it in terms of sheer number of services or engineers working on them.

I'm not sure where I said that, but yes, that's part of the switching cost.

> The fact is that the highly demanding services have the huge majority of the resources, and are the most sensitive to performance issues. If your service uses 10% of Google's datacenter space, you won't accept a 5% or even 1% regression just so you can port to gRPC,

The thrust of my statement was that for many services, RPC overhead is minimal. So even a 2x or 3x increase in RPC overhead is still minimal. I agree, a 5% increase in resource utilization for a large service is something that would be weighed. But let's explore that idea for a moment:

> because at that scale your team can just staff someone or even several people to maintain the pre-gRPC system forever and still come out ahead on the budget.

Not necessarily. Engineers are expensive and becoming ever more expensive while computing resources are becoming increasingly cheaper. Not only that, but engineers tend to be more specialized and so you can't just task anyone to maintain the previous system, it tends to be people with deep expertise already. And those people also have career aims to do more than long-term support of a deprecated system, so there's retention to be considered.

Pretending for a moment that all your services except a small handful moved on to some system B from some system A: if the maintenance burden of keeping system A alive starts to eclipse the resource cost of moving to system B (a cost which decreases all the time due to improvements in system B, the increasing cost of maintaining system A, and the monotonic reduction in computing resource cost), then you might well just swallow the 5%-10% increase in resources either permanently or temporarily and come out ahead in the end.

Additionally, as system B moves on, staying on system A becomes increasingly risky: security improvements, features, layers which don't know about system A anymore all threaten the stability of your service. If you've checked out the SRE book, you'll know that our SLOs are more important than any one resource. If nobody trusts your service to operate, then they won't use it and then you won't have to worry about resources anymore since the users will have moved on.

> because at that scale your team can just staff someone or even several people to maintain the pre-gRPC system forever and still come out ahead on the budget.

To reiterate the point above, these roles tend to be fairly specialized and hard to staff. Arguably these same engineers are better tasked making system B good enough to switch to so you can thank system A for its service and show it the door.

Bringing this back to Stubby vs. gRPC, it's a pretty academic argument so far. They're both here to stay. And honestly, when we say "Stubby" there's already different versions of Stubby which interoperate with each other and gRPC will not be any different. Likewise, we still use proto1 in addition to proto2 and proto3 (the public versions) since that just takes time and energy to fix.

We do make these kinds of decisions every day, and it's not always in favor of reduced resources. If we cared for nothing other than resource utilization, we'd be completely C++, no Java, no Python. Realistically, the cost of maintaining systems with equivalent roles can often lead to one or the other winning out, usually in favor of maintainability so long as their feature sets are roughly equivalent. We're fortunate to be in a position that we can choose code health and uniformity of vision over absolute minimum resource utilization. And again, even if we choose system B (higher resources) over system A, perhaps due to the differences in architecture or design choices the absolute bar for performance of that system will be greater than system A, despite starting lower. Sometimes it takes a critical mass of adopters to really shake out all those issues.

I know that quotes from Knuth are often trotted out during these kinds of discussions, but it's true: "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%."

That 3% is where we choose to spend our effort, and that critical 3% includes the ability of our engineering force to make forward progress and not be hindered by too much debt. It also includes real data; check out Google-Wide Profiling [1].

> Totally agree that world-facing APIs will all be gRPC and that makes perfect sense to me.

Probably not all. We still fully support HTTP/JSON APIs, but at least in our little corner of the world we've chosen to take full advantage of gRPC.

Anyways, thanks for letting me stand on my soapbox for a bit.

[1] https://storage.googleapis.com/pub-tools-public-publication-...


Interesting that you allude to the coexistence of C++, Java, Python, and Go, because I think this bolsters my point. The overwhelming majority of services at Google are in C++. There are individual C++ services that consume more resources than all Java products combined. I think this speaks to the appetite for performance and efficiency within the company, since C++ is demonstrably the most difficult of these languages.

