I haven't tried this project, but I did switch from shell-mode to vterm a while back because it fixed most of shell-mode's paper cuts. I also used to create a lot of custom compilation buffers because the file links they produce were helpful, though that has become less useful to me over time. At the end of the day, there were paper cuts that made shell-mode and compilation buffers less than ideal, and most folks were focusing on traditional terminal support.
This kind of framework optimizes a bit for returning hypertext (in other words, HTML snippets) rather than leveraging a frontend system that only interfaces with the backend via an API. From that perspective, you need to be able to return HTML snippets precisely and manage more URLs to serve them. React already has a pretty strong abstraction around HTML with JSX, which has generally morphed into web components. Writing the HTML components on the server using a library that maintains valid HTML is convenient, and it also means you can deploy an application without having to bundle a bunch of template files.
I will say I think some opinions on how to structure URLs that return snippets could be valuable. Some of these frameworks leverage the headers htmx sends to render just part of a page, but I think it is easier to have individual URLs for many use cases. I've used Go and templ in a similar fashion, and one benefit with templ is that the snippets are effectively functions, so returning a specific section and passing args is reasonably natural when breaking something out of a page.
Overall though, the goal is to avoid duplicating your data model and abstractions in the UI in favor of relying on better networks, faster browsers, and HTML improvements to create interesting interfaces with simpler code.
Writing post-mortems is generally pretty kludgy. You might have a Slack bot that records the big-picture items, but ideally, a post-mortem would include connections to the nitty-gritty details while maintaining a good high-level overview. The other thing most post-mortems miss is communicating the discovery process. You'll get a description of how an engineer suspected some problem, but you rarely get details on how they validated it such that others can learn new techniques. At a previous job, I worked with a great sysadmin/DevOps engineer who would go through a concise set of steps when debugging things. We all sat down as a team, and he showed us the commands he ran to confirm network transport in different scenarios. It was an enlightening experience. I talked to him and other DevOps folks about Rundeck, and it was clear that the problem isn't whether something can be automated, but rather whether the variables involved are limited enough to be represented in code. When you do the math, the time it would take to write code to solve some issues is not worth the benefit.
Iterating on the manual work to better communicate and formalize the debugging process could fit well into the notebook paradigm. You can show the scripts and commands you're running to debug while still composing a quality post-mortem as the incident is happening, while things are fresh.
The other thing to consider is how often you get incidents and how quickly you need to get people up to speed. In a small org, devs can keep most things in their heads and use docs, but when things get larger, you need to think about how to offload systems and operational duties. If a team starts by iterating on operational tasks in notebooks, you can hand those off to an operations team over time. A quality, small operations team can take on a lot of work and free up dev time for optimizations or feature development. The key is that devs have a good workflow for handing off operational tasks, which are often fuzzier than code.
The one gotcha with a hosted service, IMO, is that translating local scripts into hosted ones takes a lot of work. On my laptop, I'm on a VPN and can access things directly, whereas with a hosted service you need to figure out how to allow a third party to connect to production backend systems. That can be a sticky problem that makes it hard to clarify the value.
After dealing with the current state of affairs on the front end, I think this is pretty interesting. I've been writing Go web apps with templ and htmx. I've punted on dealing with CSS by using a CDN and Bulma. I did get Tailwind and friends working, but it required a lot of work being outside a React/JS framework like Next.js. It also felt really weird using npm to install CSS.
The nice thing about this is that you can get a Tailwind Go lib that builds the CSS programmatically into your binary. There are no extra files, one build step, and one binary output. After trying out Fly and Vercel, I moved to running things with Docker on a VM, and a single binary in a container makes things much simpler IMO.
This looks pretty cool and I look forward to seeing folks do interesting things with it.
I do this too. Using a hot reload server and having live reload in the browser as I'm changing Tailwind classes, Go files, or templ files is a very efficient workflow.
Then for the actual build, everything is built and embedded in a single binary by conditionally using embed with build tags.
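The conditional embed pattern can be sketched with build tags like this (the file names and the `embed` tag are illustrative, not the exact setup described):

```go
// file: assets_dev.go
// Default build: serve assets from disk so live reload picks up changes.
//go:build !embed

package main

import "net/http"

var assets http.FileSystem = http.Dir("static")
```

```go
// file: assets_embed.go
// Release build (`go build -tags embed`): bake the files into the binary.
//go:build embed

package main

import (
	"embed"
	"io/fs"
	"net/http"
)

//go:embed static
var staticFiles embed.FS

var assets http.FileSystem = func() http.FileSystem {
	// Strip the leading "static/" so paths match the dev version.
	sub, _ := fs.Sub(staticFiles, "static")
	return http.FS(sub)
}()
```

Either way, the rest of the program just serves `assets`, e.g. with `http.FileServer(assets)`, and the release binary carries everything with it.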
I think that live feedback development cycle is probably the most satisfying way to code something by oneself.
It did take quite a bit of messing around with tooling, and my Makefiles are pretty big, but at least I can reuse that in every new project.
I use the "live reload with other tools" as it is described in the templ docs [0].
Basically, templ will start a live reload server in "proxy mode". Other tools can then request a reload by notifying the proxy. I started with the Makefile described in the docs, but I also have Tailwind in watch mode, esbuild builds, and more.
While I can't say this is true for vim, in Emacs I found that the customizability helps with a lot of different programming tasks. I run my terminals in Emacs, and they are associated with my projects. Magit (the Emacs git package) helps me do complex rebases with diffs alongside creating branches and everything else you might do in git (even the reflog when things get rough). There is even a handy restclient-mode that lets me write and save HTTP sessions. I connect to my database in a buffer as well. What makes all this so handy is that I can move the buffers around to compare things side by side, use a single large buffer, etc. While VS Code has splits and terminals, I found that in Emacs I can access everything from my keyboard, and now that I've gotten used to it, I don't even think about it.
I've heard that a lot of vim folks get similar behavior via tmux and leverage other shell tools.
I'm not going to argue you should switch, because it is an investment. It is like owning a house: you have autonomy, but you're also on the hook to fix the air conditioner. You also can't just drop it and move to the next editor. Your hands and workflows become tied to your editor. Keybindings may be similar, but they are not the full story. Either way, it is a journey getting good with your tools, so enjoy it!
I don't know if I fully grok the web components described by the author. I did find that using templ with Go makes it reasonable to have components you can inject some logic and state into for sending HTML back to htmx where it makes sense. The key is that we generate the HTML once, and there is no need to redefine models in the frontend. This is important because it means we don't take HTML, convert it to a data structure, validate the data, send it to a server that has to validate it again, and get back some resulting data that once again gets rendered as HTML. Each data transition is expensive from a complexity perspective and requires caching state where caching is hard.
I'll admit that I'm not a frontend guru so my bias for writing apps like I did 20 years ago likely shows here.
Still, when I consider all the different abstractions and details around using something like Next.js + TypeScript, where the division between server and client becomes blurred, reducing complexity feels advantageous.
Tailwind is another good example where the explicit nature of attaching styles makes sense if you are writing React components: you've encapsulated all that noise behind a simple JSX tag. Using Tailwind with something that returns HTML then requires a similar way to encapsulate the noise. Again, I found using templ with Go made that feasible, but I have still avoided Tailwind because I don't want to introduce the noise too soon if possible.
It all makes me wonder if we lost the idea of a fullstack engineer, not because the problems became so challenging that we needed the extra complexity, but rather because we split our applications into frontend and backend organizationally when we should have been more diligent about maintaining teams that could do everything.
The real tl;dr is that Templ for Go is pretty handy for writing components :)
I had a similar thought when I first looked at it, but then I thought about my browser history and URL bar. It is a lot of work to open files to write scripts, keep them organized, and make them accessible, just to make some commands simpler to run. I wrote https://github.com/ionrock/we for this very reason. I moved most args to env vars and made loading different sets of env vars easy via files. Maybe the history is a better way to make these things reproducible and useful by avoiding the indirection scripts require?
While I agree it may not work with everyone's workflow, maybe it could be a powerful change to some folks' workflows. I'm going to try it out and see for myself!
Chet is meant to be a helpful big brother that keeps track of how long commands take to run. It stores the timings in a local SQLite database so you can run queries to find opportunities to optimize. There is also a simple service where you can post timings so your whole team can measure things.
I know there are other similar tools out there, but I hope this one might be helpful or interesting to someone!
Personally, I think it is more convenient to think about things like this (i.e., firewall rules) as data, which makes an API a convenient way to work with them. The converse, in my mind, is that I'd have to configure each node and ensure a text representation of my firewall rules is correct. That opens the door to concurrency concerns that I thankfully get to avoid with an API like this.
That said, I can see your point that you are hiding some details from the OS that might be helpful such as what hosts you can talk to.
Fortunately, just because you might configure a firewall with an API rather than some Ansible plays doesn't mean you can't continue to use Ansible to fill in the gaps. For example, if you previously used Ansible to configure your iptables, you might change the playbook to call the API based on some YAML. You might use the same YAML to write information to the host that your application can use to understand the firewall rules in place.
The point is that it is always good to remember these are not either/or decisions.
Lastly, I'll also speak up for those folks who don't know much about firewalls and iptables. I understand the principles, but I'm far from feeling confident managing that system myself. In my case, I'm really glad to have an option that lets me get the benefits without forcing me to operate a system I'm not well equipped to run.
Most services that might be considered "backends" (i.e., databases, queues, cloud services) end up being written in languages that can safely use async techniques for I/O while still using real threading or some other method for managing CPU-bound problems.
Many of the applications people think of for Node.js end up being "glue", much like Python, and they can live within this constraint for a very long time, with the I/O optimization as a nice benefit.