erkok's comments | Hacker News

Yes, the assets are the property of a defunct company, so if you want to do it properly, I wouldn't advise distributing the project with their assets unless you can track down the rights holders and license the IP.

This project was mainly developed as a modernisation project for the Helbreath community, a community which has run private servers (illegally) for as long as licensed servers have existed. The original developer never seemed to care about illegal use of their IP, even when they were in business. I'm aware of 2 cease and desist requests being sent to HB Olympia (commercial) and HB Nemesis (Steam listed) private servers, and both are still operating. Either they managed to license the IP, which I'm not aware of, or the request came from the HB Korea server, which is the only licensed server left, and as a licensee they probably don't have the rights to the actual IP, hence the requests were ignored. I'm just speculating here. I'm not trying to encourage anyone to do the same, just trying to explain the situation around the IP.


I have now run the client against a server written in C# with 500 simulated players all running around on the same map, all nearby too. On the M1 Mac that ran the server, the client, and the simulator, my FPS dropped only 5 below my monitor's max refresh rate, so it seems browsers should be able to handle a pretty good load for such games. More about the test here: https://discord.com/channels/1472308778031386729/14723087787...


So far I haven't run into performance issues, but this version doesn't have networking, so I only tried spawning lots of monsters on a small map. That way they have to recalculate their movement quite often, and each monster also runs an AI loop for its actions. It didn't drop a single FPS on the M1 Mac I use for development, which is quite beefy in terms of computing power. Anyone can retry this simulation: open the demo, load the "Shop" map for example (which is small), spawn lots of monsters, and see how it performs on your hardware.
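
To give a feel for the kind of load that test creates, here's a hypothetical sketch of per-monster AI ticks in Python. The class, numbers, and random-walk behaviour are illustrative only, not taken from the actual project:

```python
import random
import time

class Monster:
    """Toy stand-in for a monster running a tiny per-tick AI loop."""

    def __init__(self, x, y):
        self.x, self.y = x, y

    def tick(self):
        # On a small map, collisions force frequent re-pathing;
        # approximated here by one random step per tick.
        dx, dy = random.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
        self.x += dx
        self.y += dy

def simulate(num_monsters, ticks):
    """Run every monster's AI loop for a number of ticks and time it."""
    monsters = [Monster(0, 0) for _ in range(num_monsters)]
    start = time.perf_counter()
    for _ in range(ticks):
        for m in monsters:
            m.tick()
    elapsed = time.perf_counter() - start
    return monsters, elapsed

monsters, elapsed = simulate(1000, 60)  # 1000 monsters, 60 simulated ticks
print(f"{len(monsters)} monsters x 60 ticks in {elapsed * 1000:.1f} ms")
```

Even in interpreted Python this finishes in milliseconds, which is consistent with the observation that simple AI loops are rarely the bottleneck.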

One thing to note is that, at least in terms of rendering, monsters are far simpler to draw, since each originates from a single sprite. Players, at least in the multiplayer setting, have each equipment sprite drawn separately, which means that having lots of player characters on a single screen carries a higher performance penalty; this scenario cannot be simulated with the current version, though.
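
The cost difference can be sketched as simple draw-call counting. This is an illustrative model, not the project's actual renderer: a monster is one pre-composed sprite, while a player is a stack of layers, one per equipped item:

```python
# Illustrative cost model: draw calls per entity on screen.
MONSTER_LAYERS = 1     # single pre-composed sprite
PLAYER_BASE_LAYERS = 1  # body sprite; equipment adds one layer each

def draw_calls(entities):
    """Count draw calls for a mixed list of (kind, equipment) entities."""
    calls = 0
    for kind, equipment in entities:
        if kind == "monster":
            calls += MONSTER_LAYERS
        else:  # player: body plus one layer per equipped item
            calls += PLAYER_BASE_LAYERS + len(equipment)
    return calls

screen = [
    ("monster", []),
    ("player", ["helmet", "armor", "weapon", "shield", "cape"]),
    ("player", ["armor", "weapon"]),
]
print(draw_calls(screen))  # 1 + 6 + 3 = 10
```

So a screen full of fully equipped players can cost several times the draw calls of the same number of monsters, which is why the monster test above doesn't fully predict multiplayer performance.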


Having been around the industry for a while, I see abstractions being misused all the time. I was guilty of it myself when I was younger.

For me, the only purpose of an abstraction is to reduce complexity, but I often see it used to reduce repetitiveness instead, which frequently replaces well-understood, more verbose code with a less understood, less verbose, and less flexible alternative. For me, as a team lead, easy-to-read code is far more important than a subjectively elegant abstraction that everyone then has to learn how to use, and potentially fight with.

In many cases I have noticed people jumping into abstracting away a complexity right away, often ending up with a leaky or inflexible abstraction. To those people I say: do that painful thing at least 10 times, then think about abstracting it away. By then you'll probably have some level of understanding of the pain you're trying to alleviate and all the nuances that come with the domain.
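
A hypothetical example of the failure mode, with an entirely made-up helper: extracted after two call sites, it accumulates a flag for every later call site that doesn't quite fit, until it is harder to read than the repetition it replaced:

```python
# Hypothetical over-eager abstraction: every keyword argument below is
# scar tissue from a call site that didn't fit the original helper.
def save_record(record, validate=True, upsert=False,
                skip_audit=False, legacy_ids=False):
    steps = []
    if validate:
        steps.append("validate")
    steps.append("upsert" if upsert else "insert")
    if not skip_audit:
        steps.append("audit")
    if legacy_ids:
        steps.append("remap-ids")
    return steps

# Reading any call site now requires knowing all four flags:
print(save_record({"id": 1}))                                # ['validate', 'insert', 'audit']
print(save_record({"id": 2}, upsert=True, skip_audit=True))  # ['validate', 'upsert']
```

Three explicit, slightly repetitive call sites would be longer, but each could be read, and changed, in isolation.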


Everywhere I've worked that uses DynamoDB, someone invariably writes a bunch of abstraction functions that become very annoying to debug, are hard to add functionality to, and can break an entire app if changed.

DynamoDB is admittedly very verbose, but it's almost always worth it to keep your CRUD operations written directly against the SDK rather than behind an abstraction.
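
Much of that verbosity is DynamoDB's explicitly typed attribute-value format. Here's a pure-Python sketch mimicking that wire format (no AWS SDK required; real code would call PutItem/GetItem in the SDK directly with items in this shape). The point is that the verbosity carries type information a home-grown wrapper tends to hide, which is exactly what you need visible when debugging:

```python
def to_attribute_value(value):
    """Marshal a Python value into a DynamoDB-style typed representation."""
    if isinstance(value, str):
        return {"S": value}
    if isinstance(value, bool):  # bool before int: bool is an int subclass
        return {"BOOL": value}
    if isinstance(value, (int, float)):
        return {"N": str(value)}  # DynamoDB transmits numbers as strings
    if isinstance(value, list):
        return {"L": [to_attribute_value(v) for v in value]}
    if isinstance(value, dict):
        return {"M": {k: to_attribute_value(v) for k, v in value.items()}}
    raise TypeError(f"unsupported type: {type(value)!r}")

item = {"pk": "user#42", "age": 30, "active": True}
print({k: to_attribute_value(v) for k, v in item.items()})
# {'pk': {'S': 'user#42'}, 'age': {'N': '30'}, 'active': {'BOOL': True}}
```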


A better abstraction would be a better SDK then.

BTW, repetitiveness is not free: it's cognitive load that a developer must deal with. An abstraction is also a bit of cognitive load, one that grows with the abstraction's complexity; the point is to find the balance that minimizes the total.


Code golfers think they’re helping.


Exactly


I tend to agree with the author. GraphQL has its use cases, but it is often overused, and simplicity is sacrificed for perceived elegance or efficiency that is frequently not needed. "Premature optimisation is the root of all evil" comes to mind when GraphQL is picked for efficiency gains that may never become a problem in the first place.

Facebook invented GraphQL back in 2012 to solve a very specific problem for mobile devices. Having to make multiple queries to assemble the data the frontend needs is costly in bandwidth (back then, over 3G networks) and harmful to battery life on mobile clients, so the technology solved that problem neatly. These days, however, when server-to-server communication over an API is needed, none of the problems Facebook invented the protocol for apply in the first place. If you really want maximum efficiency or speed, you probably ought to ditch HTTP entirely and communicate over some lower-level binary protocol.

REST is not perfect either. One thing I liked about SOAP was its strong schema support, and that you got to name RPCs the way you liked and didn't have to wrangle everything around the concept of a "resource" and CRUD operations, which often becomes cumbersome when you need to support an RPC that "just does magic with multiple resources". These are things I like about GraphQL too; on the other hand, REST is just HTTP with some conventions, which you don't necessarily have to follow if they get in your way, and it is generally simpler by design.
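
The core idea GraphQL adds, letting the client declare exactly the shape it wants in one round trip, can be sketched in a few lines. This is a toy field-selection function, not a real GraphQL implementation:

```python
def select(data, selection):
    """Apply a GraphQL-style field selection to nested dict data.

    selection maps field names to None (leaf) or a nested selection.
    """
    result = {}
    for field, sub in selection.items():
        value = data[field]
        if sub is None:          # leaf field: return as-is
            result[field] = value
        elif isinstance(value, list):
            result[field] = [select(item, sub) for item in value]
        else:
            result[field] = select(value, sub)
    return result

user = {
    "id": 1,
    "name": "Ada",
    "email": "ada@example.com",
    "posts": [{"id": 10, "title": "Hello", "body": "..."},
              {"id": 11, "title": "World", "body": "..."}],
}
# Roughly the query: { user { name posts { title } } }
print(select(user, {"name": None, "posts": {"title": None}}))
# {'name': 'Ada', 'posts': [{'title': 'Hello'}, {'title': 'World'}]}
```

With resource-shaped REST you'd typically fetch /users/1 and /users/1/posts separately and over-fetch fields like body, which is exactly the mobile-bandwidth problem GraphQL was built for.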

The only thing I wish for with REST is stronger vendor support for Swagger/OpenAPI specs. One of the things my team supports is a concept of Managed APIs for our product: https://docs.adaptavist.com/src/latest/managed-apis and we primarily support RESTful APIs, plus a couple of GraphQL-based ones. The issue we face is that the REST API specs for many products are either missing, incomplete, or simply outdated, so we have to fix them ourselves before we generate our Managed API clients, or write them by hand if the specs don't exist.

It's becoming easier with AI these days, but one thing I personally regret about our transition from SOAP to REST as an industry is that strong schema support became a secondary concern. We could no longer just throw an API client generator at SOAP's WSDL; with REST we had to start handcrafting clients ourselves, which is still an issue to this day unless a perfect spec exists, and in my experience that is a rather rare occurrence.
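
For context, this is roughly how little OpenAPI it takes to make client generation mechanical again, the way WSDL once did for SOAP. The API, paths, and schema below are entirely hypothetical:

```yaml
openapi: "3.0.3"
info:
  title: Example Issue API   # hypothetical product API
  version: "1.0.0"
paths:
  /issues/{issueId}:
    get:
      operationId: getIssue  # RPC-style name, no CRUD wrangling needed
      parameters:
        - name: issueId
          in: path
          required: true
          schema: { type: string }
      responses:
        "200":
          description: The requested issue
          content:
            application/json:
              schema: { $ref: "#/components/schemas/Issue" }
components:
  schemas:
    Issue:
      type: object
      required: [id, summary]
      properties:
        id: { type: string }
        summary: { type: string }
```

When vendors ship (and keep current) a spec like this, generators can produce typed clients automatically; when they don't, you're back to handcrafting.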


I commend you for trying to tackle such a challenging domain, something that feels like it should be handled on state level.

As a citizen of Estonia, we pretty much have any government service available over the web, and yes, we also get to enjoy state provided health care, which makes things simpler when it comes to having a single unified system for all health care workers. We have had such a system for probably a decade or more, and it works: patients can also log into it to check any data collected on their behalf.


Thank you - it's a difficult problem for sure, but that makes it all the more fun and rewarding.

> should be handled on state level

Many of the aforementioned HIEs in the US are actually offshoots of state or federal government initiatives like TEFCA. We didn't go into detail in the post, but the main HIEs are definitely not privately held startups; they're mostly nonprofit, state-sponsored organizations.

> we pretty much have any government service available over the web, and yes, we also get to enjoy state provided health care

There are pros and cons to state-run, centralized government systems for sure. With Metriport as a communication layer, we're hoping to bring providers in the US the best of both worlds for data exchange.

