In my experience, it's waaaay more efficient for developers to be in the same room if you do it right, though I'm starting to think I might have a minority opinion on that.
As a developer, a lot of my projects are not something that I need the rest of the team to do and most of the time the team doesn't need me. Hanging out in the office together is fine, but a lot of developers don't need to be in the same room to write great code as a team.
There are plenty of cases where better documentation, training, or other means of communication would provide a higher bus factor than always relying on pestering another dev. Also, with video chat, it's harder to believe that it is so difficult to meet or plan without being in the same building.
I literally write more than 90% of my code away from the office. Between people walking by, random questions and discussions, pointless meetings that make people feel like they accomplished something even though they didn't, and people just wanting to burn a few minutes, office hours are the least productive hours of my day. I've avoided being there as much as possible because I'd rather show features than show face.
There are some big-picture, architectural design discussions covering many interrelated components that can benefit from in-person meetings, but those kinds of meetings are the exception, not the norm, in my experience.
It's actually pretty silly to write node.js apps in a crash-only manner. Since they often handle thousands of connections concurrently, a failure in one client's processing is pretty horrendous if it brings down a whole server (even if the server thread gets immediately restarted). This project doesn't try to solve that either though.
In the tradeoff between a short post and a long elaboration, I erred on the side of being too terse.
Node applications can be split between stateful servers (ex. chat) and stateless (ex. API for mobile clients).
It's the stateless servers where some developers write fault-tolerant / fail-fast / fast-restart applications. Doing so in a stateful server would be counterproductive.
Also, I don't mean to imply that developers are writing sloppy code that fails constantly, or that they fail to implement proper error handling. What I was stating is that unhandled exceptions are unexpected, but when they do occur they indicate something is seriously wrong. Importantly, they leave the application in an unknown state, which is difficult to recover from. In such situations, a robust approach is to let Node fail, restart fast, and come back with clean state.
The reasons this is a sound approach are:
1. If the failure is due to a memory leak, the memory graphs will highlight the leak clearly
2. The unhandled exception indicates that something is very wrong. A server restart is easily seen in the logs and is a warning that deeper inspection is necessary
3. Recovering state after an unhandled exception is difficult. In a stateless server it's better to just restart from a clean state. This assumes the engineers wrote the application to work from a clean state (i.e. after a restart there is no need to recreate state)
4. A fault tolerant architecture is good practice as disks can fail, CPUs can fail, network connections can fail, etc. In a cluster failure is expected and applications are architected to continue operation in the face of failure
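The fail-fast approach described in point 3 can be sketched in a few lines. This is an assumption-laden sketch, not anyone's production code: it presumes some external supervisor (systemd, forever, a cluster master, etc.) restarts the process after it exits.

```javascript
// Minimal fail-fast sketch for a stateless Node server.
// Assumes an external supervisor restarts the process after exit.
process.on('uncaughtException', (err) => {
  // The application is now in an unknown state: log and die fast,
  // rather than limping along with possibly corrupted state.
  console.error('unhandled exception, exiting for a clean restart:', err);
  process.exit(1);
});
```

The whole point is that the restart, not the handler, is the recovery mechanism.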
I actually think I understood you, but I'm saying that in that case where statelessness should be an advantage, node.js is actually a much less fault tolerant environment when you compare it to most other web application servers. Most other web app servers offer
(1) request isolation (so most failures in one request can't break other requests) and
(2) a way to catch all exceptions/errors in a single request (and domains don't accomplish this, unless you know what to expect errors from, or wrap everything).
Since node.js doesn't offer those features, it's not even as fault tolerant as PHP was 15 years ago. I'm a huge fan of node.js, but one of the hardest things to do on a large application with a large number of users is to keep an instance of the server from restarting and dropping all the other in-progress requests. If you write your node.js code to be crash-only (like one might do with erlang) your clients are going to have a terrible time.
We had the problem you're describing for a while, but have since figured out how to avoid processes going down and interrupting other requests. Essentially, you attach a global domain, and when that domain catches an error you stop accepting new connections (obviously you have to be load-balancing between procs) and start a countdown. Some reasonable amount of time later (I think we wait 30 seconds?) you assume that any in-progress request is done and restart the process. We've found this to be very successful.
We do this too, attaching req and res objects to a domain, as well as databases and other network related objects (like smtp clients, etc). This is a huge improvement, but I'm still seeing occasional uncaught error events in our logs on a very large codebase and only in production. Some of them are just ECONNRESET events with no details given, so their origins are REALLY hard to track down. Have you got some magic for catching everything without explicitly having to find all objects that could be emitting? I'd love to hear it if so...
As soon as possible during startup, create a domain and enter it. Because entered domains form a stack, this will be a fallback if an error occurs at a place that isn't covered by any other domains.
Nonsense. There's a huge, and very obvious, difference in the overall level of experience, ignorance and competence within the various programming language communities.
The JavaScript, PHP and Ruby communities have an abundance of ignorance, often due to a severe lack of experience. These are the most-hyped languages, and the ones that new developers often flock to. It's quite obvious why so many bad decisions (like using these languages in the first place, or using NoSQL databases) and so much bad code comes out of these communities; their members often just don't know any better, and often aren't willing to learn.
This is much less of an issue within the communities that attract experienced and competent developers. We're talking about C, C++, Python, Haskell, Erlang, Scala, and even Java and C#. Thanks to the wider and deeper experience that the developers in these communities tend to have, we see far fewer blatantly obvious mistakes being made. That's not to say they don't happen; they do. But the quality of the software that is produced is generally much better than what we see produced by the JavaScript, PHP and Ruby crowd.
It's easy to pretend that these very real differences don't exist, but the reality is that they do.
I consider that to be a bright spot in this post. What would a performance comparison for this very specific app prove? The way I read it, the only goal of the rewrite was a more easily maintainable code-base with performance that was "good enough". It sounds like he was successful in meeting that goal.
With that said, comparing Python and Erlang on the basis of performance is a bit like looking for the world's tallest midget isn't it? Performance isn't the point at all, within a certain threshold.
That's actually a solved problem (in node.js at least). npm builds a node_modules tree where each package gets its own copies (with the correct versions) of its dependencies. Gone are the days when people took on dependency hell to save a few MB of disk space.
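For illustration, the layout looks roughly like this (package names invented; newer npm versions dedupe shared versions, but packages needing conflicting versions still each get their own copy):

```
myapp/
└── node_modules/
    ├── pkg-a/
    │   └── node_modules/
    │       └── lodash/    (the version pkg-a depends on)
    └── pkg-b/
        └── node_modules/
            └── lodash/    (the version pkg-b depends on)
```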
There are benefits, but most people won't notice them without first actually using a hypermedia API.
* It's self-documenting. Client developers can find all the endpoints just by clicking around (instead of reading mountains of docs).
* Client apps don't need to keep a list of hard-coded urls for random access, removing one of the most brittle parts of client apps (they should know about rels of course, but those end up being easier to keep track of).
Once you actually use an API like this, other APIs feel like they're in the stone age and how to do things with them seems like a continual guessing game. And it's still simple as hell -- remember it's just json with links. It's not like that requires a lot of extra effort.
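To make "json with links" concrete, here's a hedged sketch in the HAL style (the resource, fields, and rels are all invented; the exact link convention varies by API):

```javascript
// A response body is ordinary JSON plus link relations.
const order = {
  id: 42,
  status: 'open',
  _links: {
    self:   { href: '/orders/42' },
    cancel: { href: '/orders/42/cancel' },
    items:  { href: '/orders/42/items' }
  }
};

// Clients follow rels instead of hard-coding URL structure:
const cancelUrl = order._links.cancel.href;  // '/orders/42/cancel'
```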
Fair enough - I haven't actually used a hypermedia API (aside from OData). I really can't see the documentation point, though. Sure, you get a list of actions you can perform, but you get those in a response object. Some people compare this to "intellisense documentation", but really it's more like having to decompile the library to figure out what's going on. You have to do something extra to figure out what the next step is, rather than just having it available. Also, I'm pretty sure any API that is currently "discoverable" probably also has a full suite of documentation. Why do the same thing twice?
Not sure about other languages, but most of the client apps I make don't use hardcoded urls - they use a base API url, and then modifications for specific resources. The RestSharp library for C# is a great example of how this works, and even in javascript it's not hard to refactor things so they use a base url and append path & parameters based on the models you're working with.
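The base-URL-plus-path approach can be sketched like this (all names and the URL are invented for illustration):

```javascript
// Clients keep one base URL and derive resource paths from models.
const BASE = 'https://api.example.com/v1';

function resourceUrl(resource, id, params) {
  const qs = params ? new URLSearchParams(params).toString() : '';
  return BASE + '/' + resource +
         (id != null ? '/' + id : '') +
         (qs ? '?' + qs : '');
}

resourceUrl('users', 7);                 // 'https://api.example.com/v1/users/7'
resourceUrl('users', null, { page: 2 }); // 'https://api.example.com/v1/users?page=2'
```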
I'm not arguing the simplicity of it, although it would be more simple to implement if we agreed on a standard like in the original article. I'm just arguing the value.
Looks more like JS the java way. Singletons make no sense in a language that has global variables and never lets you stop an object from being duplicated. The command pattern makes no sense in a language with first-class functions. I'd steer clear of this resource if you want to write javascript the right way.
You can't just explain every single design pattern available in a language and then say, "You now know how to write that language the right way!"
That's the feeling I got from this: it goes into way too much detail about way too many things, and forgets to start with the basics, which are all you need to know to write JavaScript the right way.
How can you write any language the right way without knowing the basics?
This book could be an absolute disaster in the hands of people who don't know any better.
Yeah. I was expecting things like "use semicolons", "global variables are bad", "=== is often what you want, not ==". These are the things that many devs will need to learn. A bunch of design patterns was not what I expected.
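For anyone landing here who hasn't seen why == is a trap, a few of the classic coercion surprises:

```javascript
// == coerces types before comparing; === does not.
console.log(0 == '');            // true: '' coerces to 0
console.log(0 === '');           // false: different types
console.log(null == undefined);  // true: special-cased by ==
console.log(null === undefined); // false
console.log('1' == 1);           // true: string coerced to number
console.log('1' === 1);          // false
```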
While I agree with the fact that the title is misleading, I would have been even more disappointed if it was just a restating of "JavaScript: The Good Parts".
Intermediate JS devs need online resources too. Imo, an in-depth explanation of inheritance and the module pattern would be the best possible resources for people at that level of the learning curve. In other words, just read Doug Crockford.
Almost every javascript library you use is a singleton, so your second sentence makes no sense. Check the example, it is just the module pattern. If a programmer wants to deviate away from a pattern, the language lock-in won't stop them.
Also, the command pattern is a great tool for handling undo/redo on a UI. I think you are being way too dismissive of some good advice. Just because someone is sharing a better way of doing things based on ideas created in another language does not detract from their value.
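A hedged sketch of that undo/redo use case (the editor and its API are invented for illustration): each action is an object carrying its own do/undo, so history is just two stacks of commands.

```javascript
// Command pattern for undo/redo: actions are objects with do/undo.
function makeEditor() {
  let text = '';
  const undoStack = [];
  const redoStack = [];

  function execute(cmd) {
    cmd.do();
    undoStack.push(cmd);
    redoStack.length = 0;   // a new action invalidates the redo history
  }

  return {
    type(chars) {
      execute({
        do:   () => { text += chars; },
        undo: () => { text = text.slice(0, -chars.length); }
      });
    },
    undo() {
      const cmd = undoStack.pop();
      if (cmd) { cmd.undo(); redoStack.push(cmd); }
    },
    redo() {
      const cmd = redoStack.pop();
      if (cmd) { cmd.do(); undoStack.push(cmd); }
    },
    value() { return text; }
  };
}

const ed = makeEditor();
ed.type('hello');
ed.type(' world');
ed.undo();          // back to 'hello'
ed.redo();          // 'hello world' again
```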
It is not a traditional module, I'll agree. However, I see it as a hybrid class/interface/module, and in this role it works very well and allows you a tremendous amount of flexibility thanks to the dynamism of JS.
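For reference, the module pattern being discussed boils down to an immediately-invoked function that closes over private state and returns a public interface (a generic sketch, not the book's exact example):

```javascript
// Module pattern: the IIFE's locals are private; the returned object
// is the public interface, and it's effectively a singleton.
const counter = (function () {
  let count = 0;                       // private, reachable only via the closure

  return {
    increment() { return ++count; },
    value()     { return count; }
  };
})();

counter.increment();
counter.increment();
counter.value();   // 2
```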
Design patterns have completely failed. Sometimes they appear in my code, but knowing them has never helped me to code better.
What helps me is to know language idioms and code patterns.
It is a pity that so many teachers give such a devotion to design patterns.
I used to think as you do, and I still think that too much emphasis is placed on patterns, and too many poor coders stitch them together in complex ways to achieve exceedingly simple tasks.
However, when part of your design begins to take a form similar to a well-known pattern, there is a good reason to convert your design to use the specific pattern if possible: communicating with other programmers (including yourself, six weeks later). It means that instead of bogging down in documentation of exactly how that component works, you can just say "it's this pattern", and everyone familiar with the pattern will instantly know how it works, minus a few details.
In addition, knowing about patterns and seeing how they're used in practice (rather than toy examples that make them seem masturbatory) can help you to reason about problems in the future. Not because you necessarily use those specific patterns, but because you have understood how they achieve what they achieve.
I've shipped very little JS code, so take this with a grain of salt. However I have had to ship JS code, and since I'd heard horror stories, I tried to find some solid resources to make sure I wasn't shipping garbage. I found these really helpful:
Eloquent JavaScript, a freely available book with an in-browser REPL and a lot of fun exercises
Douglas Crockford's JS master class, which covers a lot of the same material as his "JavaScript: The Good Parts" but in video format, and I think in a much more digestible way. The Good Parts book is very, very dense and probably best used as a reference. (Why he thought starting with a formal definition of the language's grammar was a good idea completely escapes me.)
I don't actually know if these are really up-to-date, and all the cool kids are using CoffeeScript anyway, so anyone with more experience please correct me if these aren't any good. They definitely point out how to avoid the most painful quirks of the language, and how to leverage the core that is actually decent.
Continuous integration doesn't mean all development happens in master. A policy that works well for me is that anything that will require more than 2 minor commits to accomplish should happen in a feature branch. If tests pass in the branch (and you probably want to insert code review/sign-off at this stage as well), merge to master. If tests pass in master, deploy.
From what I can tell, this just makes it easy to run tests in branches. My team uses something similar that kicks off a build whenever someone comments "build it" in a GitHub pull request, and posts the results to the pull request discussion.
It sounds like we at least agree that long-lived branches are not continuous integration. We probably differ in that I think short-lived branches and pull requests are more overhead than most teams require. Keep in mind that CI does expect daily check-ins to master; that's the longest those branches can live and still satisfy the requirement. Anything else is not CI... which is not the end of the world either, but then I don't see the point of using a CI server at all. Just build/test your stuff locally.
So what benefits are you saying are lost/given up by this model that another model preserves?
Also, feature branch development is just how we're using it. The code monitors git for any new branches and automatically clones jobs for new branches. This would support a more traditional master branch with release branches created periodically. Those release branches would automatically get Jenkins jobs created for them instead of someone manually having to do it.
I'm saying a CI server is unnecessary for these branches that aren't ready to integrate with master. These are short-lived branches with 1 or 2 contributors, right? Just run the build/tests locally. There's no advantage to this overhead with branches that won't last longer than a day.
There are lots of advantages for us.
What if the full build and test suite takes longer to run than you want to occupy your computer?
Unit tests are quick, but what about the full set of unit/integration/functional tests? Plus ensuring that artifacts like .war files can be created successfully, and that the app runs fine on the target system rather than just on your dev laptop (which for most devs is not the same platform they deploy to).
What if you want to show off your work-in-progress by having a job auto-deploy it to a test server so that your customers or QA can play with it before you're ready to merge to master? Sometimes a day is too long to wait to show something off. Commit and publish as a branch and you can keep working while someone else looks at it.
We have another dev code review every pull request to master to ensure code quality as well as spreading knowledge of what's happening across the team. The overhead is surprisingly low. Branches literally take about 15 seconds to make, and pull requests are no more than a minute or two unless you really go into testing instructions.
Well every team is different, so I shouldn't try to say I know the right thing for your team. What you're talking about though is farming out your branch builds/tests/deploys to jenkins, which has nothing to do with CI, but still sounds like your team finds it useful. Carry on if that's the case. While I wouldn't personally solve the problem that way, I'm wrong to suggest that jenkins should only be used for CI.
The normal usage of the term "Lean" (ie from Toyota) actually has little to do with validated learning and more to do with the elimination of wasteful aspects of production that don't directly lead to customer value. Usually that means "pulling" work out of a team based on need rather than trying to anticipate need that might never materialize (like the need to scale workforce and infrastructure, which is the case here). FTA: "The idea, he says, is to be “global from day one and have scalability built in.”". Of course in any pull system, validated learning is built-in, because new customer demand is new information, but the focus of Lean is actually the reduction of wasteful production. He'll have wasted a lot of time/money if demand doesn't match the infrastructure he's built.
You are spot on. Unfortunately the 'Lean Startup' book basically overloaded a lot of words around Lean. I love the concept in the 'Lean Startup' book, I also love Lean methodology in manufacturing.
But it is confusing as hell to talk to people in both fields because a lot of words have different meanings now.