But I think it could be more straightforward if the first interpreter didn't use Java, but a modern mainstream language that can avoid code generation and the Visitor pattern. Those parts feel like an unnecessary detour.
> but a modern mainstream language that can avoid code generation and the Visitor pattern.
Yeah, a language with nice algebraic datatypes and pattern matching does eliminate the need for the Visitor pattern.
But the reality is that many programmers are working in object-oriented languages where the Visitor pattern is useful, so I deliberately picked Java in order to teach that approach. The book tries to meet you where you are and not where you might wish you were.
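To make the trade-off concrete, here's a sketch (not from the book; the expression shape is invented) of how a language with algebraic datatypes handles an interpreter without the Visitor pattern. In TypeScript, a discriminated union plus a `switch` replaces the whole accept/visit machinery:

```typescript
// The expression type is a tagged union instead of a class hierarchy.
type Expr =
  | { kind: "literal"; value: number }
  | { kind: "binary"; op: "+" | "*"; left: Expr; right: Expr };

// Evaluation is one function with a switch, not a Visitor with one
// method per node class. The compiler checks exhaustiveness for us.
function evaluate(e: Expr): number {
  switch (e.kind) {
    case "literal":
      return e.value;
    case "binary": {
      const l = evaluate(e.left);
      const r = evaluate(e.right);
      return e.op === "+" ? l + r : l * r;
    }
  }
}

// (1 + 2) * 4
const expr: Expr = {
  kind: "binary",
  op: "*",
  left: {
    kind: "binary",
    op: "+",
    left: { kind: "literal", value: 1 },
    right: { kind: "literal", value: 2 },
  },
  right: { kind: "literal", value: 4 },
};
console.log(evaluate(expr)); // 12
```

In Java (at least pre-sealed-interfaces Java, which the book targets), adding a new operation over the tree means either `instanceof` chains or a Visitor; here it's just another function over `Expr`.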
> You want to sell Javascript based solutions, go for it. But don’t push this nonsense like “No language like Javascript”. The only reason Javascript has a wider adoption than others is that it’s forced upon us.
That it's forced upon us doesn't mean it can't be universal. On the contrary, it usually is. It's like how the USD is kind of forced on the world, but it's universal nonetheless.
Thank you, GitHub, this is one of the best things!
No, it cannot make me write code I couldn't write before. It doesn't autopilot and do all the coding by itself. But it still boosts my productivity greatly, letting me relax while coding and focus on the important parts rather than the chores.
I've been using it for a while now. When I occasionally forget some syntax I'll switch this on instead of searching the documentation or Google, but more often than not my IDE can get me unstuck with less overhead.
Also, if there are repetitive sections of code I need to bang out quickly, it will auto-fill the repetitive pattern (although I'd argue this is usually a sign that the code should be cleaned up).
I avoid letting it fill in large swaths of code though. I have no idea where that code is coming from (license infringement?) and it tends to go way off the rails.
Additionally I feel that it makes me a worse programmer if I allow it to take over too much.
I've been programming for 20 years (more if you count my time as a kid) and have a certain flow. Part of that flow is the natural pause between thinking of solutions and typing. When the computer beat me to the typing portion (and oftentimes made mistakes), I would find myself doing more code review than code writing. A few bugs popped up thanks to Copilot (or was it me failing to correct Copilot's mistakes?).
I found my brain sort of switching into a different mode. Rather than thinking about my next steps I was thinking about the steps the computer just took and how I needed to clean them up.
Rather than the AI being my reviewer during a paired programming session, I was the computer's reviewer.
Additionally: when I allowed Copilot to do heavier coding for me, I found myself returning later and feeling somewhat unfamiliar with the code. That's really bad for maintenance, project pace, etc. I don't want to have to re-learn, fix, remember, and maintain code that someone else (a computer, in this case) wrote. It's hard enough doing so reliably in group settings (work); injecting that into my daily coding life feels like a solution I didn't ask for.
I will say that I'm not averse to change and do appreciate the new tools available to us: starting on an x386 writing QBASIC as a kid versus using JetBrains Rider today is an indescribably different experience.
That said, I'm not ready to move to the backseat and let the computer take over yet. In small doses copilot is fine, but I wouldn't lean heavily on it for large projects or to do the thinking for me.
What did I feel during the transition? I felt insulted by the under-expressiveness. Every day I'd find ideas that were natural to me, and even to non-tech people, yet couldn't be codified directly.
Although statically typed languages are getting better each year, I still think there's a trade-off among three camps:
1. Dynamic (Ruby, JavaScript)
2. Under-expressive static (Java, C#)
3. Expressive static (Scala, TypeScript)
I worked on a full-stack project where people had different backgrounds and had to switch from time to time; here's my impression:
- Most "under-expressive static -> dynamic" people hate dynamic programming, because they mostly lose typing for nothing: they usually don't leverage the expressiveness, since that's how they'd been coding all along.
- Some "dynamic -> under-expressive static" people liked the discipline, while others hated the under-expressiveness; it depends.
- "Expressive static" was the new thing for everybody, so everybody needed to learn some part of it. There's usually a learning curve; some people like the challenge and some hate it. It also depends on how steep the curve is (TypeScript's is very smooth; Scala's is not).
Even though I think it's still a trade-off, I can see the balance shifting toward the "expressive static" camp:
- People get more familiar once they're exposed to the concepts. Higher-order functions like map/filter/reduce were considered exotic in the 90s, but now they're everywhere. Monad-like programming with Promise or Optional has also been popularized; it's not hard at all once "everybody else" has learned it. Advanced type systems are no different.
- Language creators have more experience choosing trade-offs that make sense. Scala got a bad rap in its day, but Kotlin didn't. In other words, creators are getting better at spending their "novelty budgets"[0] to yield optimal results.
P.S. I still can't wrap my head around the fact that Dart spent NO novelty budget whatsoever... It would have zero chance against TypeScript if there were no Flutter or other Google platforms using Dart.
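The "once exotic, now everyday" features mentioned above fit in a few lines of TypeScript (the data here is made up for illustration):

```typescript
// Higher-order functions: map/filter/reduce as everyday tools.
const orders = [
  { total: 30, paid: true },
  { total: 70, paid: false },
  { total: 20, paid: true },
];
const paidRevenue = orders
  .filter((o) => o.paid)        // keep paid orders
  .map((o) => o.total)          // project out the totals
  .reduce((sum, t) => sum + t, 0); // 30 + 20 = 50

// Monad-like optional handling: compose steps that may each be absent,
// without a pyramid of null checks.
type User = { profile?: { email?: string } };
const emailOf = (u: User): string =>
  u.profile?.email ?? "no email on file";

console.log(paidRevenue, emailOf({}));
```

None of this would have looked "mainstream" in the 90s, yet today it reads as the obvious way to write it.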
If you’re interested in something that creates a great developer experience on top of container runtimes but which supports more complex workflows and apps than docker swarm, check out withcoherence.com (I’m a cofounder). We orchestrate containers from dev to prod in your own cloud, without abstracting away what’s happening under the hood, but also without forcing you to deal with all the operational complexity of “doing it right”
> On one end of the spectrum are either neat platforms like Heroku or Vercel, or ssh and bare metal with simple scripts.
> On the other end of the spectrum, we have Kubernetes.
Someone else mentioned Docker Swarm, but allow me to offer my own thoughts.
The simplest option (what you allude to) is probably running containers through the "docker run" command, which can sometimes work for a limited set of circumstances, but doesn't really scale.
For single node setups, Docker Compose can also work really nicely, where you have a description of your entire environment in a YAML file and you can "orchestrate" as many containers as you need, as long as you don't need to scale out.
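For reference, a single-node Compose environment is just a small YAML file; the service names and app image here are placeholders:

```yaml
# docker-compose.yml: one app container plus its database.
version: "3.8"
services:
  web:
    image: myorg/myapp:latest   # hypothetical application image
    ports:
      - "8080:8080"
    depends_on:
      - db
  db:
    image: postgres:15
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```

`docker compose up -d` brings the whole environment up, and the same file format (with a few extra `deploy` keys) carries over to Swarm's `docker stack deploy`.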
The aforementioned Docker Swarm is a simple step up that lets you use the Compose syntax (which is way easier than what K8s would have you use) while orchestrating containers across multiple nodes, with networking built in. It's simple to set up (it comes preinstalled with Docker), simple to use and maintain (a CLI like kubectl's, but smaller), performance and resource usage are good, and the feature set is stable. It's an excellent option, as long as you don't need lots of third-party integrations and whatever Docker offers is enough.
From there, you might look into the likes of HashiCorp Nomad: its HCL is a bit more complicated than the Compose format, though still simpler than Kubernetes. The setup is a bit more involved (if you need Consul and TLS-encrypted traffic), but overall it's still just a single binary that can be set up as a client or server depending on your needs. As an added bonus, you can orchestrate things other than containers, much like Apache Mesos back in the day, which supported different types of workloads (e.g. you can launch Java apps or even native processes on the nodes you manage, not just containers).
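To give a feel for the HCL in question, here's a minimal Nomad job sketch (job, group, and image names are invented), sitting between Compose's brevity and Kubernetes' verbosity:

```hcl
# Hypothetical job; submitted with `nomad job run web.nomad.hcl`.
job "web" {
  datacenters = ["dc1"]

  group "app" {
    count = 2   # two replicas across the cluster

    network {
      port "http" {
        to = 8080   # container port
      }
    }

    task "server" {
      driver = "docker"

      config {
        image = "myorg/myapp:latest"   # placeholder image
        ports = ["http"]
      }

      resources {
        cpu    = 200   # MHz
        memory = 256   # MB
      }
    }
  }
}
```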
Of course, even when you get into Kubernetes, there are projects like K3s and k0s, maybe paired with tools like Portainer or Rancher, which let you manage it in a simpler manner: either through an easy-to-use UI, or via those K8s distributions being slightly cut down, both in the plugins they come with and in their resource usage and data storage choices (e.g. SQLite instead of etcd for smaller deployments).
In my eyes, betting on OCI is a pretty reasonable option and it allows you to run your containers on whatever you need, depending on what your org would be best suited to.
> The skill is not likely to transfer to the next job
I'd argue that if you need to read a book to figure out how services are deployed in any given environment, then you probably should have a DevOps/Ops team and not have to worry about it as a dev. Or, if you're a part of said team, you should still mostly just use OCI containers under the hood and the rest should be much like learning a new programming language for the job (e.g. like going from Java to .NET, which are reasonably similar).
> The ecosystem is not as complete as Kubernetes
Kubernetes largely won the container wars. No other toolchain will ever have as complete of an ecosystem, but at the same time you also dodge the risks of betting on some SaaS solution that will milk your wallet dry with paid tiers and will fold a few years down the line. I'd say that all of the aforementioned technologies support all of the basic concerns (deployments, monitoring, storage, resource limits etc.).
Of course, things might change somewhat in the next few years; we've seen new tooling be developed and become viable, like Lens (https://k8slens.dev/) and some nice CLI tooling like k9s (https://k9scli.io/), as well as numerous other options.
Though I guess things won't change as much for Docker Swarm (which is feature complete and doesn't see much new development) or HashiCorp Nomad (because their "HashiStack" already covers most of what you need).
This matches my experience. People who say otherwise might need to put more effort into writing better tests.
That said, static typing usually catches minor problems like typos faster, whereas with dynamic typing they're caught by tests a bit later. But tests usually run faster with dynamic languages, so it's a trade-off.
My favorite approach is the mixed approach like TypeScript:
1. Faster feedback loops: Statically typed languages usually compile slower, and thus have slower feedback loops. But languages like TypeScript can skip the type check and emit only the runtime code, making the test watcher very fast: as soon as I hit Cmd-S I can see the result.
2. Optional typing: Sometimes a function signature would be 10x larger than the body, which is a real hassle. Sometimes I just skip it, or skip typing module-private functions as long as they're properly tested.
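A small sketch of that second point, with invented names: annotate the public surface, and let inference carry the private internals.

```typescript
// The public contract is fully typed...
interface Item {
  price: number;
  qty: number;
}

function orderTotal(items: Item[]): number {
  // ...but inside, the callback's parameter types are inferred from
  // context; no annotations needed, and tests cover the behavior.
  return items.reduce((sum, it) => sum + it.price * it.qty, 0);
}

console.log(orderTotal([{ price: 2, qty: 3 }, { price: 1, qty: 4 }])); // 10
```

You still get a checked API boundary, without paying the annotation tax on every helper.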
I like types, but I like types even more when they are at the edge of two systems and both systems understand them. For instance, in the past we ran Rails APIs and a lot of mobile apps consumed them, and there was a lot of math involved. We put Google's protocol buffers between them. The data's type info is thus shared. I liked this pattern a lot, and it did reduce A LOT of bugs (for instance ints vs floats).
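As a sketch of what "types at the edge" looks like (the message and field names here are invented, not from that project): both the Rails side and the mobile apps generate code from one schema, so "is this an int or a float?" is answered once, in one place.

```protobuf
syntax = "proto3";

message Invoice {
  int64 id = 1;
  // Money as integer cents sidesteps float rounding differences
  // between the server and client languages.
  int64 total_cents = 2;
  // Explicitly a float, by declaration rather than by accident.
  double exchange_rate = 3;
}
```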
Another thing: people working with dynamic typing often forget that they're frequently interacting with a system with extreme opinions about types, namely their relational database, which is oftentimes literally the heart of the business. If your language of choice has typing, you can sync that type info (smart ORMs, codegen) and reduce friction. That said, I develop in Ruby, and my sense of where there might be "type friction and potential for bugs" has improved a lot over the years, so I don't mind the lack of types until we get optional typing sorted out.
It says that "Nintendo" or "任天堂", the brand name, refers to an old Chinese saying "謀事在人,成事在天" which means "Man proposes, God disposes", or "do your best and leave the rest to fate".
The idea is not far from the Stoic one: try your best and stop worrying. It's like an idea from this classic debate on TDD [0]: unit tests make you confident. You write them, you forget them. They let you sleep at night.
In the first few years of my career, I was constantly and unconsciously thinking about the code, the business requirements, the team, even outside working hours. That stressed me out and drained me a lot. Eventually I realized I shouldn't have done that. If I need to worry about those things, I book a slot on my calendar and go over them within that time-box. If I need more time, I book another block for the next day; as simple as that.
Now write that "unit test", forget it, and sleep well at night.
If you really need to worry about them, have a "quarterly/monthly worrying day" to reflect on your life, your career, your strategies, and all that. Outside that day, just live your happy life.
This. I find open source projects written in Go or Rust are usually more pleasant to work with than Java, Django, or Rails. They have fewer clunky dependencies, are less resource-hungry, and can ship as single executables, which makes people's lives much easier.
Not sure why you include java in that, as you mostly get a standalone file. No such thing as a jre in modern java deployment.
As for Python, at least getting a Dockerfile helps a lot. Otherwise it's a huge mess to get running, yes.
Python is still a hassle anyway, since the lack of true multithreading means you often need multiple deployments, as the Celery usage here shows, for instance.
> Not sure why you include java in that, as you mostly get a standalone file. No such thing as a jre in modern java deployment.
Maybe I'm behind the times, but I can't figure out what you mean here. As far as I know, 'java -jar' or servlets are still the most common ways of running a Java app. Are you talking about Graal and native image?
For deploying your own stuff, most people do as before, yes. But even then, it's at least still only a single jar file containing all dependencies. Not like a typical Python project, where they ask you to run some command to fetch dependencies and you have to pray it works on your system.
But using jlink for Java, one can package everything into a smaller runtime distributed together with the application. Then it's not much different from a Go executable.
> The generated JRE with your sample application does not have any other dependencies...
> You can distribute your application bundled with the custom runtime in custom-runtime. It includes your application.
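Roughly, that workflow looks like this (`app.jar` and the module list are placeholders for your own application):

```shell
# Find which platform modules the app actually uses.
jdeps --print-module-deps app.jar

# Build a trimmed runtime containing only those modules.
jlink --add-modules java.base,java.sql \
      --strip-debug --no-header-files --no-man-pages \
      --output custom-runtime

# Ship custom-runtime/ alongside the jar; no system JRE required.
./custom-runtime/bin/java -jar app.jar
```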
Python application deployments are all fun and games until suddenly the documentation starts unironically suggesting that you should “write your configuration as a Python script” that should get mounted to some random specific directory within the app as if that could ever be a sane and rational idea.
No, Go modules implement a global TOFU checksum database. Obviously a compromised upstream at initial pull would not be affected, but distros (other than the well-scoped commercial ones) don’t do anything close to security audits of every module they package either. Real-world untargeted SCAs come from compromised upstreams, not long-term bad faith actors. Go modules protects against that (as well as other forms of upstream incompetence that break immutable artifacts / deterministic builds).
MVS also prevents unexpected upgrades just because someone deleted a lockfile.
Generally I prefer humans in the loop: someone to actually test things. This is why some distros are stable compared to others that are more bleeding-edge.
For SC security, the fewer points of attack between me and the source the better.
For other kinds of quality, I have my own tests which are much more relevant to my use cases than whatever the distro maintainers are doing.
I've been a DD (Debian Developer), and while distros do work to integrate disparate upstreams as well as possible, they rarely reject packages for being fundamentally low quality, or make significant quality judgements qua their role as maintainer (only when they're a maintainer because they're also a direct user). Other distributions do even less than Debian.
Fedora currently packages 10646 crates. It's implausible that they're manually auditing each one at each upgrade for anything other than "test suites pass", let alone something like obfuscated security vulnerabilities.
In the end most distros will be saved by the fact they don't upgrade quickly. Which is also accomplished by MVS without putting another attack vector in the pipeline.
I think I don't want "more than a hundred" additional points of trust, especially if each of them is trying to audit 50+ projects with varying levels of familiarity. And no, I don't believe one person can give a real audit to 50 packages per release even if it was their actual job.
To paraphrase, all "more than a hundred" of those people need to be lucky every time.
The wok is very versatile. I use it for most things by default.
The skillet is great for pan frying.
The dutch oven is great for braising without me stirring it too often.
Cast iron is situational because it has a high heat capacity: you can think of it as a pizza stone for pan frying.
The advantage is that you can compensate for an inferior stove with a long preheat.
The disadvantage is that it's slow to respond: when you turn the heat up there's a delay, and when you want to lower the temperature you still have to wait.