Hacker News | markasoftware's comments

The thing is, Modal is running untrusted containers, so there's not really a concept of "some front-facing" containers. Any container running an untrusted workload is at high risk / is "front-facing".

If Modal's customers' workloads are mainly GPU-bound, then the performance hit of gVisor isn't as big as it might be for other workloads. GPU activity does have to go through the fairly heavyweight nvproxy to be executed on the host, but most GPU activity consists of longer-lived async calls like running kernels, so a bit of overhead in launching those calls and retrieving their results can be tolerated.


Well, if someone is gonna use Modal exactly for GPU purposes then I guess it's okay, but anything compute-related just feels like it would have some performance issues.

So I can agree that Modal might make sense for LLMs, but they position themselves as a sandbox, including for things like running Python code, and some of those workloads may be more compute-intensive than others, so I just wanted to point it out.

Fly.io uses Firecracker, so I kinda like Firecracker-related applications (I tried to run Firecracker myself; it's way too hard to build your own Firecracker-based provider or anything like that), and they recently released https://sprites.dev/

E2B is another well-known solution out there. I talked to their developers once and they mentioned that they run it on top of GCP.

I am really interested in Kata Containers as well, because I think Kata runs on top of Firecracker and can hook into Docker rather quickly.


If you're not looking for GPU snapshotting, the ecosystem is relatively mature. Specifically, CPU-only VM-based snapshotting techniques are pretty well understood. However, if you need GPUs, this is a notoriously hard problem. IIRC Fly was also planning on using gVisor (EDIT: cloud-hypervisor) for their GPU cloud, but abandoned the effort [1].

Kata runs atop many things, but is a little awkward because it creates a "pod" (VM) inside which it creates 1+ containers (runc/gVisor). Firecracker is also awkward because GPU support is pretty hard / impossible.

[1] https://fly.io/blog/wrong-about-gpu/


Ohh, this makes sense now. Firecracker is good for compute-related workflows, but gVisor is better suited for GPU-related workflows, gotcha.

For my use cases it's usually Firecracker, but I can now see why a company like Modal would use gVisor, because they focus a lot (and I mean a lot) on providing GPU access. I think that's one of their largest selling points; for them compute is secondary, and gVisor's compute performance hit is a trade-off well worth making.

Thanks for trying to explain the situation!


I have a friend who did similar tunneling a while ago. It also works on cruise ships.

He discovered that on some airlines (I think American?), they use an advanced Fortinet firewall that doesn't just look at the SNI -- it also checks that the certificate presented by the server has the correct hostname and is issued by a legit certificate authority.

My friend got around that restriction by making the tunnel present the aa.com SNI and then forward a real ServerHello and certificate from aa.com (in fact I think he forwards the entire TLS 1.2 handshake to/from aa.com). But as soon as the protocol would normally switch to encrypted application data, he ignores whatever he sent in the handshake and just uses the connection as an encrypted tunnel.

(The modern solution is just to use TLS 1.3, which encrypts the server certificate and hence prevents the firewall from inspecting the cert, reducing the problem back to just spoofing the SNI).
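
To make the TLS 1.3 case concrete, here's a rough sketch of what the client side can look like, assuming OpenSSL 1.1.1+ (the IP address and hostname below are placeholders, not anyone's real setup): connect to the tunnel server's address, present a whitelisted hostname in the SNI, and skip certificate verification, since the firewall can no longer inspect the certificate either.

    // Rough sketch only; 203.0.113.10 stands in for "your tunnel server" and
    // aa.com for "a hostname the firewall trusts". Link with -lssl -lcrypto.
    #include <openssl/ssl.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());
        SSL_CTX_set_min_proto_version(ctx, TLS1_3_VERSION); // cert stays encrypted on the wire
        SSL_CTX_set_verify(ctx, SSL_VERIFY_NONE, nullptr);  // tunnel's cert won't match the fake SNI

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(443);
        inet_pton(AF_INET, "203.0.113.10", &addr.sin_addr); // your tunnel server, not aa.com
        connect(fd, (sockaddr *)&addr, sizeof addr);

        SSL *ssl = SSL_new(ctx);
        SSL_set_fd(ssl, fd);
        SSL_set_tlsext_host_name(ssl, "aa.com");            // the only plaintext the DPI box sees
        if (SSL_connect(ssl) == 1)
            printf("tunnel up; everything after the handshake is opaque to the firewall\n");

        SSL_free(ssl);
        SSL_CTX_free(ctx);
        close(fd);
        return 0;
    }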


This is basically what Xray [1] does. For any connection request matching a particular SNI but not presenting a secret key, it proxies the entire TLS handshake and data to a camouflage website. Otherwise it can be used as a regular proxy disguised as TLS traffic to that website (with the camouflage website set as the SNI host, so for all intents and purposes it looks like legit traffic to that host to an external observer).

It's meant to get around the Great Firewall in China, so it has to evade the GFW's active probers that check to make sure the external site really is a legit host. However, a friend was able to get it to work on American's in-flight firewall with the proxy SNI set to Google Analytics.

[1] https://github.com/XTLS/Xray-core


Someone was using Xray, proxying to my employer, and it was detected in our attack surface management tool (Censys). I had a quite stressful few minutes before I realised what was going on: "how the hell has our TLS cert leaked to some random VPS hoster in Vietnam!?"

Thankfully for my blood pressure, whoever had set it up had left some kind of management portal accessible on a random high port number and it contained some strings which led me back to the Xray project.


> I have a friend who did similar tunneling a while ago. It also works on cruise ships.

Hah, I was just about to say the same thing! I just got home from a ~3 week cruise. Internet on the ship was absurdly expensive ($50/day). And it's weird - they have WiFi and a phone app that works over the internet even if you don't pay. Google Maps seemed to work. And my phone could receive notifications from Apple just fine. But that was about it.

I spent some time staring at Wireshark traces. It looks like every TCP connection is allowed to send and receive a couple of packets normally. Then they take a close look at those packets to see if the connection should be allowed or blocked & reset. I'm not sure about other protocols, but for TLS, they look for a ClientHello. If present, the SNI domain is checked against a whitelist. Anything on their whitelist is allowed even if you aren't paying for internet. Whitelisted domains include the website of the cruise company and a few countries' visa offices. The cruise app works by whitelisting the company's own domain name. (Though I'm still not sure how my phone was getting notifications.)
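
For the curious, this is roughly the check I imagine their box is doing, sketched in C++ (my reconstruction from the traces, not their code; the whitelist entry is a made-up example):

    // Pull the SNI hostname out of the first ClientHello and compare it against
    // a whitelist. Offsets follow the ClientHello layout in RFC 8446; error
    // handling is minimal.
    #include <cstddef>
    #include <cstdint>
    #include <set>
    #include <string>

    std::string extract_sni(const uint8_t *p, size_t n) {
        auto u16 = [&](size_t i) { return (size_t(p[i]) << 8) | p[i + 1]; };
        if (n < 44 || p[0] != 0x16 || p[5] != 0x01) return ""; // not a TLS ClientHello
        size_t i = 43;                  // past record header, handshake header, version, random
        i += 1 + p[i];                  // session id
        if (i + 2 > n) return "";
        i += 2 + u16(i);                // cipher suites
        if (i + 1 > n) return "";
        i += 1 + p[i];                  // compression methods
        if (i + 2 > n) return "";
        size_t ext_end = i + 2 + u16(i);
        i += 2;
        if (ext_end > n) return "";
        while (i + 4 <= ext_end) {      // walk the extensions
            size_t type = u16(i), len = u16(i + 2);
            i += 4;
            if (type == 0 && len >= 5 && i + len <= ext_end) { // server_name extension
                size_t name_len = u16(i + 3);                  // skip list length and name type
                if (5 + name_len <= len)
                    return std::string(reinterpret_cast<const char *>(p) + i + 5, name_len);
            }
            i += len;
        }
        return "";
    }

    bool sni_whitelisted(const std::string &host) {
        static const std::set<std::string> whitelist = {"www.example-cruise-line.com"};
        return whitelist.count(host) > 0;
    }

Anything that doesn't turn up a whitelisted name presumably gets the block-and-reset treatment described above.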

They clearly know about the problem. There are tools that make it easy to work around a block like this. But the websites for those tools are themselves blocked, even if you pay for internet. :)

If you figure out how to take advantage of this loophole, please don't abuse it too much or advertise the workaround. If it gets too well known or widely abused, they'll need to plug this little hole. And that would be a great pity indeed.


$50 a day for internet is criminal, I don't care if you're at sea or in outer space.


Your sea communications literally do go to outer space. That's why it's so expensive.


10 years ago that was a valid excuse.


Starlink does not cost $50 per day


What does a Starlink installation cost (upfront and ongoing) to service 3000-5000 daily users at expected speeds?

Don't forget to price in the costs of installing and maintaining a WiFi network that works consistently in a metal ship whose interior is composed from prefab metal modules. (Hint: every cabin, every space, has one or more APs).

I haven't done the math, and I'm sure they profit on the offering, but I doubt it's as egregious as these replies make it sound.

(I thought about this a bit when I was on a cruise that offered Starlink this past summer.)

Edit: also don't forget that everyone gets free WiFi, it's just that internet access is restricted for guests who don't pay. So it does need to support the ship's full complement of crew and passengers.


Presumably they maintain all those WiFi access points regardless of whether or not anyone buys the WiFi package. That lets the cruise app work. And the staff use WiFi too.

I'm sure servicing thousands of people via Starlink is expensive. But the cost is amortised over the number of people using it. Thousands of users should make internet access cheaper, not more expensive.

They also don’t provide “normal” internet speeds. I was usually getting about 20kBps - which is painfully slow. I tried to have a zoom call on the one day I paid for internet, and every minute or two we would get a latency spike of 10+ seconds. Those latency spikes went away on other days, but the speed never improved much.

The ship I was on is apparently quite old by modern standards. Maybe they don't have enough Starlink dishes installed or something. (It was definitely Starlink.) But if that's the case, it makes the price they're asking all the more outrageous. For $50/day I could probably bring my own Starlink dish on board and it would come out cheaper.


That is very different to my experience using it on the ship we were on. I was able to stream TV shows in full quality with no issues, took phone calls from work a few times over WiFi too.

I have never used Starlink otherwise and, frankly, expected much worse service - especially on a cruise ship.

I'd definitely be unhappy paying $50/day for what you described. But I paid less (there was a discount for buying a package ahead of time for my family's devices) and got better service it sounds like.


If a cruise line wanted to offer WiFi at reasonable rates, they wouldn't be charging for it separately.


IIRC the cost of Starlink for ships is actually very high. Starts at $5k per month for a commercial vessel I think. Can’t imagine what it is for a passenger ship, but Musk is making his money to be sure.


So about $1 per passenger per month, or roughly 3 cents per passenger per day. The network and access points were likely there already.

Starlink hardware (aka a community hub) is $1.25M. Actual bandwidth cost is $75k per Gbps per month.


For enterprise mobility venues like a commercial aircraft or a cruise ship it costs far more.


Perhaps they allowed Apple Push Notification service so their own app can receive notifications?


Allowing inbound messages is pressure on people to buy service so they can respond. I'd guess it was for evil marketing reasons.


Ah yeah, that makes sense. They have messaging built into their app so you can message friends and family while onboard the ship. I didn't use it - but of course, if they blocked APNs, messages wouldn't be able to show up on the lock screen.


I bet there's some IT team at the cruise line that leaves these back doors in their systems deliberately as an "on-board activity" for their hacker customers.


Hah! Well it worked for me! It kept me entertained for the better part of a day.

I never figured out a way to route internet on my phone through my laptop. But it was probably for the best. It was lovely spending a few weeks with no internet connection on my phone, even though it was within arm's reach at all times.


> Though I'm still not sure how my phone was getting notifications.

Almost all of these special pricing/zero-rating schemes will include platform push in the zero-rated traffic. You can't use anything without it, and most of the platforms have public pages describing how to identify their traffic, because there are lots of networks that want to allow it.


The modern cruise ship techie Internet solution is a Starlink Mini. The cost of the dish plus service, and the middle finger to the cruise ship company your family dragged you onto, is worth more than the number of dollars it cost to go on the cruise. (The alternative, having a healthy family dynamic, is a whole other can of worms.)



Oh, the travel router trick. As a techie with too many devices, plus family, you use the travel router to buy the Internet package and then everyone else associates to the travel router and you don’t have to pay for Internet access six different times.


I've heard of cruise lines banning travel routers as well.


How do they know you are using a travel router?


Travel routers open a secondary WiFi network for their clients, don't they?


The same way they know you’re bringing your own liquor.


Security now confiscates those when you board the ship alongside your bottles of “mouthwash”.


Why do people continue to go on cruises? I've never been on one and have no desire to go.


Why do people comment on HN? Different strokes for different folks.

But basically you get to see a bunch of destinations while all your travel is organized for you, you never have to switch rooms and constantly pack/unpack, and the actual travel part is infinitely more comfortable.

A room and sundeck and pool beats a plane seat or train seat any day.

I'm not into cruises myself, but the appeal seems pretty understandable in terms of convenience.


Downside is you don’t see that much - you get 4-6 hours each day in some city and are offered incredibly expensive day tours (kinda worth it because you have so little time).


People who are older or with limited mobility find it far easier to see multiple destinations without having to unpack/pack, navigate difficult airports, etc. I have been on a few, and while I'm not the biggest fan, they're not terrible if you are traveling with folks who have mobility issues. I would not go on a cruise after COVID, though.

They’re also far less expensive than many other vacations, especially if you have kids and are considering Disney stuff.

Still a human Petri dish.


I doubt this is a legitimate question, but I'll bite: It is cheap.

Go price out hotels and food in any major destination for one week. Now go price out a cruise for one week which also includes entertainment and a travel component. Somehow, the cruise is CHEAPER and offers more.

That's it. That's the whole answer.


One reason it's cheap ...

Long hours and low pay - some workers face shifts of more than 12 hours a day, seven days a week, often without overtime pay. Wages can be very low, sometimes below $20 per day, though tips can supplement income. Workers often live in small, shared cabins with limited personal space. Ships are often registered in countries with lax regulations. And there is no pay between contracts.

These are ONLY some of the reasons ....


It helps to be on a Panamanian registered vessel in international waters.


>Why do people continue to go on cruises?

There is a level of convenience that is hard to get elsewhere.

I went on a Disney cruise 2 summers ago. All restaurants were in walking distance. All of deck 5 was dedicated to child care. They took you straight to excursions. Family was close, but not too close.

There were some downsides, too, but let's not focus on those. I think the "king" reason we went is because the grandparents were paying and they wanted everyone to be "there" and not leaving. I think the main reason we aren't going again is cost.


At least cruises are temporary. There’s a whole not-so-secret society among us who buy RVs. I really don’t understand the appeal at all.


I’m literally about to hop on a cruise ship tomorrow and trying to figure out how to solve for this, so this is timely.


You could just relax and unplug


If the world were all XHTML, then you wouldn't put an ad on your site that wasn't valid XHTML, the same way you wouldn't import a Python library that's not valid Python.


>, then you wouldn't put an ad on your site that wasn't valid XHTML,

You're overlooking how incentives and motivations work. The gp (and their employer) wants to integrate the advertisement snippet -- even with broken XHTML -- because they receive money for it.

The semantic data ("advertiser's message") is more important than the format ("purity of perfect XHTML").

Same incentives would happen with a jobs listing website like Monster.com. Consider that it currently has lots of red errors with incorrect HTML: https://validator.w3.org/nu/?doc=https%3A%2F%2Fwww.monster.c...

If there was a hypothetical browser that refused to load that Monster.com webpage full of errors because it's for the users' own good and the "good of the ecosystem"... the websurfers would perceive that web browser as user-hostile and would choose another browser that would be forgiving of those errors and just load the page. Job hunters care more about the raw data of the actual job listings so they can get a paycheck rather than invalid <style> tags nested inside <div> tags.

Those situations above are a different category (semantic_content-overrides-fileformatsyntax) than a developer trying to import a Python library with invalid syntax (fileformatsyntax-Is-The-Semantic_Content).

EDIT reply to: >Make the advertisement block an iframe [...] If the advertiser delivers invalid XHTML code, only the advertisement won't render.

You're proposing a "technical solution" to avoid errors instead of a "business solution" to achieve a desired monetary objective. To re-iterate, they want to render the invalid XHTML code so your idea to just not render it is the opposite of the goal.

In other words, if rendering imperfect-HTML helps the business goal more than blanking out invalid XHTML in an iframe, that means HTML "wins" in the marketplace of ideas.


If XHTML had really taken off, there would just be server-side linting/HTML tidy. It's not that hard a problem to solve. Lots of websites already do this for user-generated HTML, because even if an unclosed div doesn't take down the whole thing, it's still ugly.

The real problem is that the benefits of XHTML are largely imaginary, so there isn't really a motivation to do that work.


Or you just wouldn't create XHTML with string interpolation.


> You're overlooking how incentives and motivations work. The gp (and their employer) wants to integrate the advertisement snippet -- even with broken XHTML -- because they receive money for it.

Make the advertisement block an iframe with the src attribute set to the advertiser's URL. If the advertiser delivers invalid XHTML code, only the advertisement won't render.


But all it takes in that world is for a single browser vendor to decide - hey, we will even render broken XHTML, because we would rather show something than nothing - and you’re back to square one.

I know which I, as a user, would prefer. I want to use a browser which lets me see the website, not just a parse error. I don’t care if the code is correct.


In practice things like that did happen, though - e.g. this story of someone's website displaying user-generated content with a character outside their declared character set: https://web.archive.org/web/20060420051806/http://diveintoma...


Yes, you would be able to put an ad on your site that wasn't XHTML, because XHTML is just text parsed in the browser at runtime. And yes, that would fail, silently or with a cryptic error.


Their example of why Ada has better strong typing than Rust is that you can have floats for miles and floats for kilometers and not get them mixed up. News flash, Rust has newtype structs, and you can also do basically the same thing in C++.

I don't know much about Ada. Is its type system any better than Rust's?


This was posted here about a day ago: https://github.com/johnperry-math/AoC2023/blob/master/More_D...

But a noteworthy excerpt:

> Ada programs tend to define types of the problem to be solved. The compiler then adapts the low-level type to match what is requested. Rust programs tend to rely on low-level types.

> That may not be clear, so two examples may help:

> Ada programmers prefer to specify integer types in terms of the ranges of values they may take and/or the precision of floating-point types in terms of digits. I ended up doing this at least once, where on Day 23 I specified a floating-point type in terms of the number of digits it should reproduce accurately: Digits 18. The compiler automatically chose the most appropriate machine type for that.

> Ada arrays don't have to start from 0, nor even do they have to be indexed by integers. An example of this appears below.

> By contrast, the Rust programs I've seen tend to specify types in terms of low-level, machine types. Thus, I tried to address the same problem using an f64. In this particular case, there were repercussions, but usually that works fine as long as you know what the machine types can do. You can index Rust types with non-integers, but it takes quite a bit more work than Ada.


> By contrast, the Rust programs I've seen tend to specify types in terms of low-level, machine types.

This seems to be an artifact of the domain that Rust is currently being used in. I don't think it's anathema to Rust to evolve to add some of these features. char-indexed arrays are something I've used a lot (mostly via `c - 'a'` on a char, but native support for it would be nice).


You can use TiVec to index with something other than usize. https://crates.io/crates/typed-index-collections


You would very rarely actually want scalar types that don't map directly to hardware-supported ones anyway.


Ada's mechanism is what Fortran has been using and doing for decades.


F'77 added arbitrary lower bounds on arrays, including explicit-shape and assumed-size dummy arrays. It is a useful and portable feature, though somewhat confusing to newcomers when they try to pass an array with non-default lower bounds as an actual argument and it doesn't work as one would expect.

F'90 added arbitrary lower bounds on assumed-shape dummy arrays, as well as on allocatables and pointers. Still pretty portable, though more confusing cases were added. F'2003 then added automatic (re)allocation of allocatables, and the results continue to astonish users. And only two compilers get them right, so they're not portable, either.

Ada's array indexing is part of its type system. Fortran's is not (for variables).


You can actually do this in C as well. The Windows API has all sorts of handle types that were originally all one type, HANDLE; but by wrapping a HANDLE in various one-member structs they were able to derive distinct handle types that can't be intermixed with each other without some casting jiggery-pokery.

It's just much, much easier and more ergonomic in Ada.


Fun fact that many are not aware of, mostly because this is Windows 3.x knowledge and one needed the right source to learn about it.

There was a header-only library in the Windows SDK that would wrap those HANDLEs into more specific types that would still be compatible, while providing a more high-level API to use them from C.

Unfortunately there is not much left on the Internet about it, but this article provides some insight:

https://www.codeguru.com/windows/using-message-crackers-in-t...

Naturally it was saner just to use TP/C++ alongside OWL, C++ with MFC back then, or VB.


Yes and no; you need to look deeper into Ada to find that it can give compile-time guarantees beyond what you get from one struct named km and another named miles.


There is no elegant solution in Rust to make something like

  type Temperature_K is digits 6 range 0 .. <whatever is reasonable upper bound in your domain>;


At least that one is unsigned (though there are no unsigned hardware floats). If you used temperature in Celsius there, your range would start at -273.15, and you'd want errors of some sort to happen if you go below that.


Ideally, the program would freeze


Newtypes are not as good as native low-level types. After typing a lot of code, one finds out that nightly is needed to get decent integration and to avoid casting to the low-level type and back all the time.


I'm super interested in how you can do this in C++. Say I need an aggregate struct with a few 16- and 32-bit fields, some little-endian and some big-endian. I do not want C++ to let me mix up endianness. How do I do it?


C: struct be32_t { uint32_t _; }; struct le32_t { uint32_t _; };

C++: That, but with a billion operator overloads and conversion operators so they feel just like native integers.
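
Something like this minimal sketch, assuming C++23 for std::byteswap (a real version would add the arithmetic and comparison overloads on top):

    // Distinct wrapper types for big- and little-endian 32-bit fields: assigning
    // one to the other won't compile, reads convert to native byte order, and
    // both stay aggregates so they work with brace/designated initialization.
    #include <bit>      // std::endian, std::byteswap (the latter is C++23)
    #include <cstdint>

    struct be32_t {
        uint32_t raw;   // stored big-endian
        constexpr uint32_t value() const {
            return std::endian::native == std::endian::big ? raw : std::byteswap(raw);
        }
        static constexpr be32_t from_native(uint32_t v) {
            return {std::endian::native == std::endian::big ? v : std::byteswap(v)};
        }
    };

    struct le32_t {
        uint32_t raw;   // stored little-endian
        constexpr uint32_t value() const {
            return std::endian::native == std::endian::little ? raw : std::byteswap(raw);
        }
        static constexpr le32_t from_native(uint32_t v) {
            return {std::endian::native == std::endian::little ? v : std::byteswap(v)};
        }
    };

    // Usable directly in an aggregate:
    struct Header {
        be32_t magic;        // on-wire big-endian
        le32_t payload_len;  // on-disk little-endian
    };

    // Header h{be32_t::from_native(0xCAFEBABEu), le32_t::from_native(128)};
    // uint32_t n = h.payload_len.value();
    // h.magic = h.payload_len;   // error: no conversion between be32_t and le32_t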


In C++ you probably could even make a templated class that implements all possible operators for any type that supports it with concepts. Then you can just `using kilometer = unique_type<uint32_t, "kilometer">` without needing to create a custom type each time.
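
A sketch of that idea under C++20, using a tag type instead of a string literal for brevity (a string NTTP needs a small fixed_string helper); only +, comparisons, and an explicit unwrap are spelled out:

    // A strong-typedef template: same representation underneath, but mixing
    // tags is a compile error.
    #include <compare>
    #include <concepts>
    #include <cstdint>

    template <std::regular T, typename Tag>
    struct unique_type {
        T value{};

        constexpr explicit unique_type(T v) : value(v) {}
        constexpr explicit operator T() const { return value; }   // no silent unwrap

        // same-tag arithmetic only, and only if T itself supports +
        friend constexpr unique_type operator+(unique_type a, unique_type b)
            requires requires(T x) { x + x; }
        { return unique_type(a.value + b.value); }

        friend auto operator<=>(const unique_type &, const unique_type &) = default;
    };

    struct kilometer_tag {};
    struct mile_tag {};
    using kilometer = unique_type<uint32_t, kilometer_tag>;
    using mile      = unique_type<uint32_t, mile_tag>;

    // kilometer a{3}, b{4};
    // kilometer c = a + b;   // fine
    // mile m{2};
    // auto d = a + m;        // error: no operator+ for mixed tags

A scale parameter (std::ratio, the way std::chrono::duration does it) could presumably be bolted on the same way.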


Though if you do that, km times km isn't km, it's an area - so your custom type would be wrong to have all operations. What type km times km should produce isn't clear.


These libraries already exist. God how people underestimate C++ all the time.

Of course you can use a unit type that handles conversions AND mathematical operations. Feet to meter cubed and you get m³, and the library will throw a compile error if you try to assign it to anything it doesn't work with (liters would be fine, for example)


I know of about 7 different libraries, 5 of them private to my company (of which 4 are not in use). Every one takes a fundamentally different approach to the problem.

> Feet to meter cubed and you get m³, and the library will throw a compile error if you try to assign it to anything it doesn't work with (liters would be fine, for example)

Liters would not be fine if you are using standard floating point values since you lose precision moving decimal points in some cases. Maybe for your application the values are such that this doesn't matter, but without understanding your problem in depth you cannot make the generic statement.

I could write books (I won't but I could) on all the compromises and trade offs in building a unit type library.


As a more general rant - people who have maybe used 5% of the feature set of C++ come along and explain why language X is superior because it has feature Y and Z.

News flash: C++ has every conceivable feature; it's the reason why it is so unwieldy. You can even plug in a fucking GC if you so desire, let alone stuff like basic metaprogramming.


GC was removed from the C++ standard in C++23 because all the compilers were like "hell no" and it was an optional feature so they could get away with not adding it. So this optional feature never actually existed and they removed it in later standards.


The C++ standard has never included a garbage collector. It only provided mechanisms intended to facilitate the implementation of a GC, but they were useless.


There are ways to do GC without language support. They are harder, but have been around in various forms for decades. They have never caught on though.


Do they really? Their types really have no custom constructors and you can use designated initializers for your data? I would very much like to have been underestimating C++; could you show an example of such a library?


Thankfully some folks already thought that out, one possible library,

https://mpusz.github.io/mp-units/latest/


I have seen several versions. I wrote two different ones myself - both not in use, because the real world of units turned out to be far more complex. The multiplication thing is one simple example of the issues, but not a complete list.


I guess I should have said `unique_type<uint32_t, "meter", std::ratio<1000, 1>>` then :) Then you can do the same as std::chrono::duration does.


Now write a book on the various tradeoffs from that decision. There is no perfect general answer; some domains have specific needs that are different. Depending on your domain that might be a good choice or it might be terrible.


Who said `operator*` needs to return the same type as its parameters?


operator* can return exactly one type. You can choose which, but metric offers many possible choices, and with floating-point math on computers you will lose precision converting between them in some cases, so you need to take care to get the right one for your users - which will not be the same for all users.


One return type, for any given combination of parameter types, not to mention the possibility of templating to manipulate the return type….


See, more trade offs...


honestly, I’m not seeing the problem you’re seeing


C++ really needs something like `using explicit kilometer = uint32_t;`

The 'explicit' would force you to use static_cast


There are several libraries, including some supporting units and mathematical operations yielding the correct result types.

And as usual, it mostly comes with zero overhead, beyond optional runtime range checking and unit conversions.

But C++ is a meta-programming language. Making up your own types with full operator overloading and implicit and explicit conversions is rather easy.

And the Ada feature of automatically selecting a suitable type under the hood isn't actually that useful, since computers don't really handle that many basic types at the hardware level. (And just to be clear, C++ templates can do the same either way.)


But do these libraries allow using values in aggregates (i.e. structs that can be initialized by listing members in {}) while preventing endianness errors?


Aside from technical factors, there are social factors involved. For example, both Python and C++ have operator overloading. But in C++ that's horrible and you run screaming from it, while in Python land it's perfectly fine. What is the difference? Culture and taste.


It isn't the same operator overloading.

In C++ operator overloading can easily mess with fundamental mechanisms, often intentionally; in Python it is usually no more dangerous than defining regular functions and is typically employed purposefully for types that form nice algebraic structures.


I hardly see the difference, given the capabilities of operator overloading in Python,

    class MyNum(int):
        def __add__ (self, other):
            return super().__add__(other) * 10
        
        
    n = MyNum(12)
    a = 45
    print(n + a) # oops


This type confusion would have been identical with a plain function; __add__ is only syntactic sugar:

    class MyNum(int):
        def ten_times_sum (self, other):
            return (self+other)* 10

    n = MyNum(12)
    a = 45
    print(n.ten_times_sum(a)) 
Compare with, for example, fouling the state of a stream from operator>> in C++.


Hardly any different, trying to pretend Python is somehow better.

Operator overload is indeed syntactic sugar for function calls, regardless of the language.

By the way, you can overload >> in Python via the __rshift__() method.


Of course I can overload >> in Python, but I cannot foul up stream state, because that machinery doesn't exist. Formally there is little difference between C++ and Python operator overloading, and both languages have good syntax for it, but C++ has many rough edges in the standard library and intrinsic complications that can make operator overloading much more interesting in practice. For instance, overload resolution is rarely trivial.


It is only one pip install away, if anyone bothers to make one such set of overloads.


People don't though. That's the big difference. There's a certain taste in the Python community.


It's the exact same thing, except in Python the community largely has taste. In C++, `cout << "foo"` exists in the standard library.


I love how among a certain set the word "taste" has become an all-purpose substitute for having an argument or making a case. It basically means "I have more social media followers than you do, so I'm right".


I believe the C++ community as a whole is quite convinced that overloading << for stdout was a mistake.


> Culture and taste.

You mean accumulated prejudices, myths, and superstitions that most in any given community (programming language related or not) won't challenge for fear of being cast out of the group for heresy.


Err... no, I mean the good taste not to overload << for console output. There's no fear of being cast out, don't be silly.


I mostly agree that, as someone who cooks something intricate <10 times per year, a sharp knife is in fact more dangerous than a dull one.

Scott in the video makes the argument that sharp knives are safer because you don't have to use as much force. But the only times I've ever cut myself with a knife in the kitchen have been with very sharp knives, e.g. once while washing the blade.


As soon as an LLM makes a significant mistake in a chat (in this case, when it identified the text as Romanian), throw away the chat (or delete/edit the LLM's response if your chat system allows this). The context is poisoned at that point.


per application, so per person


same thing, but a game: https://brantagames.itch.io/motus


I'm pretty confused about why it's beneficial to wait to read the whole compressed file before decompressing. Surely the benefit of beginning decompression before the download is complete outweighs having to copy the memory around a few extra times as the vector is resized?


Streaming prevents many optimizations because the code can’t assume it’s done when run once, so it has to suspend / resume, clone extra data for longer, and handle boundary cases more carefully.

It's usually only worth it after ~tens of megabytes, but the vast majority of npm packages are much smaller than that. So if you can skip it, it's better.
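
To illustrate, here's a rough sketch of the two paths side by side, assuming raw zlib data (a real npm tarball is gzip-framed, which would need inflateInit2 instead):

    #include <zlib.h>
    #include <cstddef>
    #include <vector>

    // One-shot: trivial once the whole compressed buffer is in memory.
    std::vector<unsigned char> decompress_all(const std::vector<unsigned char> &in,
                                              size_t out_size) {
        std::vector<unsigned char> out(out_size);
        uLongf len = out_size;
        uncompress(out.data(), &len, in.data(), in.size());
        out.resize(len);
        return out;
    }

    // Streaming: state has to survive between chunks, and every call can stop
    // early because input ran out or the output buffer filled up.
    struct StreamingInflater {
        z_stream strm{};
        StreamingInflater()  { inflateInit(&strm); }
        ~StreamingInflater() { inflateEnd(&strm); }

        // Feed one downloaded chunk; append whatever decompresses to `out`.
        bool feed(const unsigned char *chunk, size_t n, std::vector<unsigned char> &out) {
            strm.next_in  = const_cast<Bytef *>(chunk);
            strm.avail_in = static_cast<uInt>(n);
            unsigned char buf[16384];
            int ret;
            do {                                    // classic zpipe.c-style inner loop
                strm.next_out  = buf;
                strm.avail_out = sizeof buf;
                ret = inflate(&strm, Z_NO_FLUSH);   // may suspend mid-block
                if (ret != Z_OK && ret != Z_STREAM_END && ret != Z_BUF_ERROR)
                    return false;
                out.insert(out.end(), buf, buf + (sizeof buf - strm.avail_out));
            } while (strm.avail_out == 0);          // keep going while output kept filling
            return true;
        }
    };

With the whole file on hand you also know the final size up front (gzip stores it in the trailer), so the output can be allocated once rather than grown through repeated resizes.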


Streaming compression with a large buffer size handles everything in a single batch for small files.


if 5% grows exponentially, it'll become 0.025%, then 0.00125%, ...

