Hacker News | turingbike's comments

People want to go back to monoliths, but don't want to lose the upsides of microservices. Once you get a taste of serverless, it's really hard to go back to overprovisioning. At the same time, distributed state and logic that crosses service boundaries is hard. I don't know what comes next, but what I _want_ to come next is powerful static analysis that works across languages and services, and that can "compile down" to serverless services.


Maybe something like ballistacompute.org? I can see a language-agnostic distributed solution powered by async/await, with type definitions constantly and automatically compiled to the target language, all wrapped up in an IDE with team-level versioning. Native support for event-driven, streaming, and batch data with little ceremony. Throw in distributed IPFS-like storage with persistent references and eager caching, and a strongly-typed graph database for good measure.


I bounced a bunch of times too for similar reasons.

Is it worth checking out? An emphatic yes.

Summarize? I will try, and probably not do a great job; I'm still digesting it.

The world is both "nebulous" [1] and patterned; observed patterns aren't really out there so much as a combination of "there" and in our mind. Some of these patterns are incredibly useful. One pattern is "the rational / modern world view", which lets science work. It is very good, but not complete.

Post-modernism noticed that no pattern is a perfect fit on the partially-nebulous world, and went nowhere with that, and ended up in a nihilistic, bad place. That isn't good.

The goal is to see patterns as useful, conventional truths, not ultimate truths. Then you can pick the right pattern / conceptual framework with which to approach a given situation.

[1] https://meaningness.com/nebulosity


So, "all models are wrong, some models are useful" spun out into a whole philosophy?

In all seriousness, that is becoming one of my guiding principles (along with "divisions into categories are models"), so maybe I should pay more attention to this dude, except that he really is kind of hard to follow sometimes...


I think a better way to put it is that all models are right given a situation / context / set of assumptions. It is important to understand the latter to apply the model.

I would highly recommend reading Chapters 3-5 (Epistemology) of Objectivism by Leonard Peikoff. It's a much more logical and easy-to-understand take on these same ideas.


Thanks for a good summary. If you'd like an alternative take on these same ideas in a bit more systematic framework, consider reading Objectivism by Leonard Peikoff.


The Fed's policies (at least since 2008) could be described as "socialism for the management upper class" - they got free money that they gave to themselves as bonuses. Their companies were buoyed up, regardless of what the market wanted or whether the business was sound.


There was a great article on the front page yesterday, When to Assume Neural Networks Can Solve a Problem, https://news.ycombinator.com/item?id=22717367 . Case 2 is very relevant here, basically: if you have solved the problem with access to lots of data, usually you can adapt to a lower-data regime.

I am way out of my element talking about brain surgery and sensors. However, one thing that I do well is say "you shouldn't bet against neural networks", which is a great way to be right on a few-year time horizon.
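The "adapt to a lower-data regime" idea usually means transfer learning: reuse a feature extractor trained on plentiful data and fit only a small head on the scarce data. A toy NumPy sketch of just that idea (the "pretrained" weights, shapes, and labels here are all made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a network pretrained on plentiful data: a frozen feature extractor.
W_pretrained = rng.standard_normal((16, 8))

def features(X):
    return np.tanh(X @ W_pretrained)  # frozen; never updated

# Low-data regime: only 10 labeled examples for the new task.
X_small = rng.standard_normal((10, 16))
true_head = rng.standard_normal(8)
y_small = features(X_small) @ true_head  # hypothetical labels

# Adapt by fitting only the small linear head (least squares),
# instead of training the whole network from scratch.
head, *_ = np.linalg.lstsq(features(X_small), y_small, rcond=None)

print(np.allclose(features(X_small) @ head, y_small))  # True
```

With so few parameters to fit, a handful of examples suffices; that's the sense in which a solution in the data-rich setting carries over.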


Given this perspective, I would postpone investment as long as possible then.


I think you will also like this quote/thought then:

> The theory is this: Infinite Jest is Wallace's attempt to both manifest and dramatize a revolutionary fiction style that he called for in his essay "E Unibus Pluram: Television and U.S. Fiction." The style is one in which a new sincerity will overturn the ironic detachment that hollowed out contemporary fiction towards the end of the 20th century. Wallace was trying to write an antidote to the cynicism that had pervaded and saddened so much of American culture in his lifetime. He was trying to create an entertainment that would get us talking again.

- http://fictionadvocate.com/2012/09/19/the-infinite-jest-live...


I assume you have never explored your own literal blind spot [0] and how the brain just plasters over it with some hacky texture filling...

[0] https://en.wikipedia.org/wiki/Blind_spot_(vision)


Why would you assume that?


Google's AutoML produces black-box models that are only available over a network call. This service seems to produce downloadable models, and a notebook with Python code that creates the model. If that is the case, this is substantially better than GCP's offering.

AWS consistently releases similar products after GCP... but they are much more well-thought-out, as AWS has to support them indefinitely...


I’m not sure why the other post was marked dead, but AutoML models can be exported - you can export them to TFLite format[0] and then run them on edge devices, such as a SparkFun Edge[1] or a Coral SoC / board.

[0]: https://cloud.google.com/vision/automl/docs/export-edge#expo... [1]: https://www.sparkfun.com/categories/tags/tensorflow


You can also export your model as a server within a Docker container:

https://cloud.google.com/automl-tables/docs/model-export


GCP supports training scikit / xgboost/ Tensorflow models and exporting them to Cloud Storage for use elsewhere.

More here: https://cloud.google.com/ml-engine/docs/scikit/custom-pipeli... https://cloud.google.com/ml-engine/docs/algorithms/xgboost-s... https://cloud.google.com/ml-engine/docs/tensorflow/getting-s...
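A hypothetical sketch of that train-then-export workflow. The training and serialization half runs locally; the copy to Cloud Storage is shown only as a comment, since it needs GCP credentials, and the bucket name is made up:

```python
import os
import tempfile

import joblib
from sklearn.linear_model import LinearRegression

# Train a toy model locally (stands in for an AI Platform training job).
X = [[0.0], [1.0], [2.0], [3.0]]
y = [0.0, 2.0, 4.0, 6.0]
model = LinearRegression().fit(X, y)

# Export the fitted model to a file...
path = os.path.join(tempfile.mkdtemp(), "model.joblib")
joblib.dump(model, path)

# ...then copy it to Cloud Storage for use elsewhere, e.g.:
#   gsutil cp model.joblib gs://your-bucket/model/model.joblib   (bucket hypothetical)

# Anyone with the file can load and reuse the model, no network call required.
restored = joblib.load(path)
print(restored.predict([[4.0]]))  # ≈ [8.0]
```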

Creating custom notebook containers is now also supported with AI Platform: https://cloud.google.com/ai-platform/notebooks/docs/custom-c...

Disclaimer: I work for Google.


Right, but your AutoML solution is using TensorFlow, no? (even for AutoML Tables).


I agree, it was good to see the option to generate Python code that can be reused, tweaked, etc.


Can we stop with the Google FUD re shutdowns? Browse Hacker News to any post about Google at any time in the past year and you'll only ever see discussions on how "Google shuts things down".

I get the trope, you want upvotes, it's an easy joke, and you can say it without real affliction at this point, but can we instead discuss the technical merit at hand — in this case AWS?

In this specific case, as noted by several other comments, you can also export GCP models.


Unfortunately, what you consider a joke for upvotes is something I see as a major business threat, and a downside for any future deal with Google.


This is called regulatory capture. Once you know the term, you can find some good content on it, like this https://www.thisamericanlife.org/536/the-secret-recordings-o...


They give examples of problems their model could solve that Mathematica couldn't (within a 30-second timeout) - and that's awesome. Destroy Mathematica. But did anyone notice whether there were problems it couldn't solve that Mathematica could?


I'm also curious whether there are problems that Mathematica can solve but this system cannot.

More importantly, I'm curious if there are problems that Mathematica knows it can't solve but for which this system silently gives wrong answers.

Another interesting extension to the experiments would be a longer timeout -- 30 seconds seems a bit arbitrary and quite low for a CAS. However, I suspect the reason for that timeout is the fact that Mathematica licenses are insanely expensive. Otherwise the 5,000 (actually, only 500) test problems could be run for at least a few minutes at pretty trivial cost. Maybe there's a Mathematica employee here who can suggest Wolfram donate some compute (or at least limited licenses) for a small evaluation cluster. Especially if the authors decide to do follow-up work.

In any case, this is really interesting work. I think deep learning for symbolic mathematics is going to be a super interesting area to watch for at least the next few years. Good work, anonymous author(s).


Verifying a candidate solution for these problems is relatively easy, so wrong answers aren't so bad.
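Concretely: to check a model's proposed antiderivative F of f, differentiate F (symbolically or numerically) and compare with f, which is far cheaper than finding F in the first place. A minimal numeric sketch, with a made-up candidate for illustration:

```python
import math

def is_antiderivative(F, f, xs, h=1e-6, tol=1e-4):
    """Check numerically that F' ≈ f at sample points via central differences."""
    return all(abs((F(x + h) - F(x - h)) / (2 * h) - f(x)) < tol for x in xs)

# Candidate produced by a model: F(x) = x*sin(x) + cos(x),
# claimed to be an antiderivative of f(x) = x*cos(x).
F = lambda x: x * math.sin(x) + math.cos(x)
f = lambda x: x * math.cos(x)

print(is_antiderivative(F, f, [0.1 * k for k in range(1, 20)]))  # True
```

A symbolic check (e.g. with a CAS) would be stronger, but even this cheap filter catches a model that silently emits a wrong expression.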


I understand.

To explain: the thing that's super interesting to me about this paper (i.e., "strong result" vs. "best paper contender") is not integration per se. It's the possible applications of the method to problems with much, much, much higher computational complexity than integration. On those problems, validating the correctness of a solution is also intractable. In those cases, a sound function approximation approach would be an absolute game changer for symbolic methods.

(Not that integration isn't interesting as well.)


How are they going to generate training data if verifying solutions is hard?


Some of these decision problems have thousands of examples because they correspond to industrially relevant problems. So, not automatically generated all at once, but gleaned from people who have been using CAS for decades to solve specific problems.

Still, I fear, the numbers are currently too small to get past the information bottleneck (mere thousands). We'll see.


Are these gathered in one place anywhere? I and probably many others, including the authors of this paper, would be interested in these as a test set for models like this.


Why not just use the wolfram engine for developers? It’s available for the “insanely expensive” cost of $0. (See: https://www.wolfram.com/engine/)


I've had a lot of trouble getting permission to use Wolfram Engine. If authors are at a BigCorp, might be true for them as well.


"we report the accuracy of our models on the three different tasks, on a holdout test set composed of 5000 equations."

I had trouble finding the test cases they used. Where'd they list them?


The cases where Mathematica solves integration by spitting out all kinds of exotic functions (Bessel functions, all kinds of weird elliptic integral functions, and so on). They don't have these kinds of integrals in their training data.

