Hacker News
Engineer.ai raises $29.5M Series A for its AI+Humans software building platform (techcrunch.com)
95 points by Sequenza on Nov 6, 2018 | hide | past | favorite | 89 comments


Anyone remember "The Grid"? I wrote a post about it a few years ago.[0] Essentially it was a scam claiming "AI will replace web developers!" that raised a few million in VC. Be very careful of buzzwords and companies claiming to replace engineers with AI.

[0] https://medium.com/@seibelj/the-grid-over-promise-under-deli...


Has anyone actually used their builder product? I signed up, but apparently I have to 'invite' 5 or more 'friends' to get early access. Feels like vaporware.


Hey man, I am the CEO at Engineer.ai. It's not vapourware, but it's also not the holy grail. Marketing put up a gate simply because we want to make sure we deliver a strong experience; Dropbox also had a gate, so I think we took a leaf out of their book. As far as customers go, we have built a number of projects (186 so far through the platform), including the BBC Click audience app and the in-game auction SDK for the SF Giants (a startup was our customer); other clients include Virgin, Future Group, and a bunch of other SMEs, or the SME units inside big companies. You can email me at s at engineer dot ai and I'll make sure to get your account activated so you can see. Here is also a video demo without any marketing blah blah on it: https://youtu.be/dCk66hrlLmM


I used it (a year later than I was expecting) and it was so bad it was worthless. I came in with low expectations and it didn't even meet them. It was slow and unresponsive, and it cranked out sites 10x worse than what you could do in Squarespace or Wix if you invested 20 minutes or bought a template.


Hey Matt, please email me s at engineer dot ai and I'd love to see how we messed this up. Clearly we didn't do a good enough job and I want to make sure we make it right.


>Engineer.ai’s “Builder” product breaks projects into small ‘building blocks’ of re-usable features that are customized by human engineers all over the world, making the process cheaper than the average process

There doesn't appear to be AI involved. A very good business model, but no AI.

What I expected the founder to say was "we've proven people want our product, now we can scale it even further by building the ai tool we always wanted to build," but I don't see that.


This is the part that got me: "... everyone can build an idea without learning to code"

I went to my first tech conference when I was 13. One of the hot items was a tool that made programmers unnecessary. It was targeted at cheapskate businesspeople. Decades later the company is long dead. But the suckers are still out there. They think that coding is the hard part, in the same way that they think the hard part about building a house is nailing things together. But in both cases, the part you're really paying for is expertise: good development firms and good homebuilders know how to turn hazy human desires into very specific implementations, while also shaping those desires to be reasonable and achievable.


Personally, I don't believe anyone will be able to achieve full code synthesis for a very long time. I'm a firm believer that we'll always need engineers in order to code, product managers to help refine customer requirements, and so on. Having been a developer on the front lines for almost a decade, I've always believed that good software development is more akin to art than an exact science ("good code is like poetry"). That being said, I do think a lot of the stuff we end up doing as part of the SDLC is incredibly repetitive from product to product; boilerplate code, setting up basic architecture, etc. Those are the areas that we're trying to automate as much as possible so that humans can focus only on the custom bits.

- Disclaimer, I'm a VP E at Engineer.ai


Not just expertise but experience: when you hire someone ideally it’s so they can do something they have already done.


Ideas are a dime a dozen. The hard part is always asking the right questions and collecting the right data to filter out bad ideas and refine good ones into practical products.


You've hit the nail on the head. Assembling code or building a programmatically controlled ESB is not rocket science; asking the right questions, and getting people to a spec without them knowing how to code or understanding tech, is much harder. This is where we have spent a large portion of our time: building out the "Studio", where you can choose templates or problem sets, and we organize "features" and "workflow" logic behind them. The entire lifecycle is designed around the idea of an assembly process rather than a consulting engagement, so it's more prescriptive (we ask a lot of questions upfront). It's important to note that you still get connected to a human product manager, and there are designers from the capacity side (we work with over 100 dev shops around the world that give us designers and developers).


Here is where we use AI/expert systems/heuristics (please note I am not the AI expert, but I'm trying to be as transparent as possible):

- Pricing is a Supervised Learning Model.

- Custom Features are a Convolutional Neural Net + NLP.

- Resource Allocation (we tap into capacity of other dev shops) is an OR/ML combination.

- Sequencing of what to do is an ML/SL problem.

- Complexity is a Clustering Problem.

- Grading Devs is a Static Code Analysis (industry standard) + NLP Problem.

- Quality Early Warning is a Supervised Learning + Heuristics problem (we identify potential problems early based on an analysis of developer + feature-set history).

- Templates are updated based on features added by subsequent customers.

---> Building Blocks

- Features are one or many building blocks

- They communicate through an ESB that allows a smarter way of messaging between individual areas.

- The ESB allows us to "plug 'n' play" -> today it still needs human stitching, but that's a scale problem we are looking to fix.

We are at step 5 of 12 toward the final vision, and the above are at varying stages of deployment (some early, some more established).
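
None of this is Engineer.ai's actual code, but as a toy illustration of what "pricing is a supervised learning model" can mean in practice, here is a minimal k-nearest-neighbour price estimator over a hypothetical history of past projects (all names and numbers invented for the sketch):

```python
# Hypothetical sketch: "pricing as a supervised learning model".
# Train on past projects (building-block count, custom-work fraction) -> price,
# then estimate a new project's price from its nearest historical neighbours.

from math import dist
from statistics import mean

# (num_building_blocks, fraction_of_custom_work) -> final price in USD
HISTORY = [
    ((5, 0.1), 8_000),
    ((8, 0.2), 15_000),
    ((12, 0.5), 40_000),
    ((20, 0.7), 90_000),
    ((6, 0.15), 10_000),
]

def estimate_price(project, k=2):
    """k-nearest-neighbour regression over past project prices."""
    neighbours = sorted(HISTORY, key=lambda row: dist(row[0], project))[:k]
    return mean(price for _, price in neighbours)

print(estimate_price((7, 0.18)))
```

A real pricing model would obviously use richer features and a proper learner, but the shape of the problem (labelled historical projects in, price estimate out) is the same.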


> "They charge you for every line of code, we bill you for what’s unique."*

What this means to their clients is that if the client hires Engineer.ai to build something, their next client can get the same product for free. Good luck with that, Engineer.ai.

* https://www.engineer.ai/how-it-works


To be clear, the next client doesn't actually get it for free. The way it works is that we use pre-fabricated components in conjunction with an ESB in order to create what is essentially a base for developers to then fill in with customization and business logic. Think of it as a set of well structured libraries that can be automatically stitched together (with dependency management and merge conflict handling). We don't reuse a customer's business logic - that's what makes their app unique. So for every customer we build for, there will always be that human element of customization.
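
As a hedged sketch of the idea described above (not the real stack), the "pre-fabricated components + ESB + per-customer business logic" split could look like this: reusable components subscribe to a shared bus, and only the business-logic handler is written fresh for each customer. All names here are illustrative.

```python
# Toy message bus: reusable components and per-customer business logic
# both subscribe to topics and react to published events.

from collections import defaultdict

class Bus:
    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.handlers[topic].append(handler)

    def publish(self, topic, payload):
        # fan the event out to every subscribed handler, in order
        return [h(payload) for h in self.handlers[topic]]

# Reusable "building block": identical for every customer.
def email_component(payload):
    return f"email sent to {payload['user']}"

# Per-customer business logic: the unique part that is never reused.
def loyalty_discount(payload):
    return 0.1 if payload["orders"] > 10 else 0.0

bus = Bus()
bus.subscribe("order.placed", email_component)
bus.subscribe("order.placed", loyalty_discount)

results = bus.publish("order.placed", {"user": "alice", "orders": 12})
```

The point of the pattern is that swapping in a different customer's logic means replacing one handler, not touching the shared components.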

- Disclaimer, I'm a VP E at Engineer.ai


Thanks for weighing in, Rohan. I see what you're trying to do here: you are building an app/website builder and you're leveraging some of the bespoke work you've done in the past to make future work less expensive for future clients.

That's what most agencies do (even IBM, CGI, etc) but you're doing it through an online interface. If I'm not mistaken, you are leveraging economies of scale to make your offerings cheaper than those of smaller agencies, effectively trying to squeeze them out.

My comment was specific to the marketing. You make it seem like I can get a copy of someone else's app if I just want the exact same thing that someone else got, for next to nothing.


I would be astonished if the next client gets it for free. The next client might get it for less than it cost the first client, but Engineer.ai has to pay back those investors.


It was meant as a joke. I'm just pointing out how silly this all sounds. E.ai looks like just another agency with really fancy marketing and an inflated valuation.


False - venture capital is not debt.

They can do whatever they feel like doing.


You're confusing investment with constraint-free donation.


I assure you that I have never confused the two. :)

Venture investment at this level of sophistication typically involves the purchase of equity in a company with the general expectation that the value of the investment will increase over time, as valued by other actors in the market. However, at this stage there are little to no financial or legal negative repercussions for a company failing to meet this expectation as long as they've acted ethically. (I mean, outside of the value of equity going to zero and having to shut down the company.)

There are forms of investment, or vehicles of financing - venture debt and specific types of convertible notes - that have the expectation of repayment given a timeframe, but they're not typically what we're talking about when we talk about modern venture investment and straightforward equity transactions (like a Series A here).

In fact, YC explicitly invented the SAFE (Simple Agreement for Future Equity) as a way to prevent "bad actor" investors from asking for returns from convertible notes ("repayment") and sinking companies early. The unwritten rule in venture investing in SV was literally "don't ask for your money back"; some angels were screwing companies over by doing exactly that, so the SAFE codified protection against it into a low-friction investment vehicle.

tl;dr: Investment is not constraint-free, but there is certainly not an expectation of "repayment." It might seem a little semantic (versus, say, "generating returns") but it's important people understand the difference between equity transactions and debt - the constraints exist but they differ and, as such, incentivize behavior, growth and spend differently as well.


How is this AI? It looks like they're just building interfaces and then outsourcing implementations of them. I'm not saying it's not a smart strategy, but what makes it AI?


It's just to attract investors. See how the top comment is about this being the future of the universe or some other TED-talk-level BS? It works.

They are just an outsource provider, plain and simple.


Small correction: we are not an outsourced provider. About 40-60% of the building process is machine operated; the rest is human, and those humans come from our network of dev shops. But we don't outsource it to them; we pick the individual engineers we want to work on the problem based on our rating system. This screenshot (https://snag.gy/XgJfny.jpg) shows what experience our Capacity Partners see.


You described something that is an outsource operator (from the end client's POV).

Not trying to pick on you (especially as I have nothing to gain by doing so), but this is exactly how I deal with them, with the option to have the engineers hosted or remote. Add a beefy ORM with a client UI and they could also claim 50% of the work is done by machines...


Replace "AI" with "software" (which for all intents and purposes it is).

> Software is the centre of every business today and the market has been waiting for a solution that eliminates technical barriers to build software so that everyone can engage in the new economy,” said Manu Gupta, Partner at Lakestar. “By creating a software powered assembly line combined with the best global human talent, Engineer.ai’s Builder bridges the gap between an idea and a software product to enable it.”


I greatly enjoy AI startup press release bingo.

  - .ai domain name
  - No mention of how they use or define AI
  - "AI + Humans"
  - Banner picture from science fiction film (Prometheus)
(Disclaimer: I used to work at an AI startup and am generally interested in how companies market themselves)


"The most valuable businesses of the coming decades will be built by entrepreneurs who seek to empower people rather than try to make them obsolete."

"When a cheap laptop beats the smartest mathematicians at some tasks, but even a supercomputer with 16,000 CPUs can’t beat a child at others, you can tell that humans and computers are not just more or less powerful than each other – they’re categorically different."

"Palantir takes a hybrid approach: the computer would flag the most suspicious transactions on a well designed user interface, and human operators would make the final judgement as to their legitimacy."

- Peter Thiel, "Zero to One", Chapter 12

Thiel is going long on these AI+human startups.


This quote, and the kind of thinking that engenders it (and is engendered by it), bother me.

Yeah, a cheap laptop can do arithmetic faster than 'the smartest mathematicians'. And the smartest mathematicians can do arithmetic faster than many children. Does that make mathematicians and children 'categorically different things' as well? In some trivial sense, sure, but it doesn't preclude any kind of connection between the two, or require some deep new ontological commitments to model.

I'm all for the current practical approach of using 'AI technology' as a human supplement. But I'd rather not frame it as (what I perceive as) some kind of mystic, dualistic argument. At least not until we know more about both.

I also don't really think Peter Thiel is worthy of being the keystone of any kind of argumentum ab auctoritate in this particular field.


We can train children to become mathematicians.


Yes, that's precisely my point.

I've read the rest of what you have to say on this, and it is precisely the kind of mysticism that bothers me. There are no means of disproving it, it reduces to dogma in the end, but it's philosophically disingenuous to assert that because computers and people feel like different things in some cases, they must be, and then to argue backwards from there; it's question-begging in the original sense.


I mean, it's clearly obvious that today, "AI" on computers != human. Unsupervised learning is making leaps and bounds. Even supervised learning isn't able to solve all problems better than humans, even if it can do so on a staggeringly large number of problems.

I don't think anyone is positing that that's true forever and for all time. It seems reasonable to bet on AI at some point becoming sophisticated enough to outperform AI+human. I think it'll happen shortly after the point where AI can identify a new problem (or class of problems) by itself that it hasn't been taught about, and then build new tools to help itself tackle that problem. After that it's the singularity, because that process repeats ad infinitum. The only value-add of humans after that is if our creativity is somehow better than or different from the AI's and can explore problems in ways the AI can't (and even that feels like a very short-lived advantage unless there's some crucial physical/mathematical impossibility standing in the way).


Seems pretty plausible under a max-entropy + Bayesian hypothesis-selection framework. And it has empirical implications:

https://www.am-nat.org/site/halting-oracles-as-intelligent-a...

If I'm right, then all AI-only companies will be beaten by an AI+human company, so we can make market predictions and propose research directions based on the hypothesis. It doesn't seem entirely without technical/financial merit. I make a note of the economic implications at the bottom of the following proof:

https://www.am-nat.org/site/law-of-information-non-growth/

Why do you say it's a form of mysticism?



They also have a VP of Blockchain on their team [0], despite no apparent use of blockchain tech anywhere. You can't make this shit up.

[0]: https://www.engineer.ai/about-us


You're absolutely correct in that we don't currently use blockchain tech anywhere in our stack yet! It's something we're currently exploring as a way to augment our business and deal with complex problems like identity management, royalty payouts, and escrowed payments. Additionally, today we can only work with developers in dev shops; in the future we want to expand into the freelancer market. Dealing with freelancers at scale is a complicated proposition; managing them has been super hit and miss without a well-structured workflow. We're hoping we can solve that problem with a mix of automation and blockchain.

- Disclaimer, I am the aforementioned VP Blockchain at Engineer.ai


Juicero is on their list of customers, too. (Same page.)


I now realize that we clearly need to fix that page - the logos mentioned are some of the companies that our senior team have come from (hence why it's in the "Our Team" section)! Apologies if it was misleading; I'm getting that updated right away!

- Disclaimer, I'm a VP E at Engineer.ai


I'm now seriously considering adding that Prometheus photo to our press kit ;)

Jokes aside, my colleague @sachmans has posted a comment above that should hopefully answer your question on how we use AI. As for how we define AI, as an engineer myself, I'm a little annoyed by how it's become an umbrella term for everything from basic statistical models to ANNs. Unfortunately that is the reality of it - and so we've consciously decided to use it as an umbrella term for the various applications of ML/NLP/NNs that we use internally.

- Disclaimer, I'm a VP E at Engineer.ai


Yup, we should term this "the .ai bubble".


Wouldn't this technically be like the 4th AI bubble? I remember reading these bubbles have been happening since the 60s.

edit: Okay, I'm slightly wrong. I looked it up out of curiosity. This would qualify as the 3rd major AI bubble, but there have been a few minor ones. The busts are also known as "AI winters": https://en.wikipedia.org/wiki/AI_winter

"AI winter" refers to the lack of funding for AI projects rather than a collapse of companies.


The bad side of me wouldn't mind another AI winter. Today I heard on the radio what was essentially data aggregated and related with SQL joins referred to as "AI". I've never heard another term become as meaningless as AI. Even blockchain is better defined.


Insert old man voice "If I had things my way, making hype about anything would be punishable by death."

Not going to lie: if someone ran for president on just that platform alone, where it's illegal to make any overblown promise or hype about tech, the contenders would really need to come up with some good arguments for me not to vote for the hype killer.

I'm just afraid of what the next hype train will be. The blockchain hype train was just plain dumb. It's a super-niche piece of tech with really limited uses. Still cool for that, but it will never revolutionize the world. This AI bandwagon is getting out of hand. People think all AI tech is perfect 100% of the time, all the time. It's more: works 93% of the time, 23% of the time.

The next hype tech... it scares me what it could be.


This is how I feel as well. Hype takes on a life of its own as it becomes fashionable to repeat other people's talking points about a technology you know nothing about, have never used, and have never critically considered.

But that’s sort of humans in a nutshell, isn’t it?


Do you also think it comes down to "belief" in something new and fantastic? I might be reaching on this... and the idea is a grand total of 2 minutes old in my head.

But take crop circles. Over and god damn over, they're proven to be the work of some bored asshole who went out with a board, some rope, and free time to make shit in a field. But every time a new one pops up: "Oh my god... is it aliens!?!?!"

Every time in tech. Overblown claims are made and shown to be false or limited in scope. By "smart people" towards "smart people". But we still get "No! This will change the world!". "It might change the way we take dumps on the shitter, but not the world." "IT WILL CHANGE THE WORLD!".

Or "Making the world a better place through statistically blockchain distributed interconnected AI designed silicon chips... 2.0"

I mean... looking at it in another light, it's all a religion. Heavy prayer and belief in things you probably shouldn't believe in. It makes tech people more religious than those who are "religious".

I might be reaching and ranting at this point.


I strongly suspect something conceptually similar to SQL joins is going on in a human brain.


Quite hilarious, yet quite frustrating for us: we have to be careful about how we market ourselves while also explaining that all the nonsense going on out there is not really AI?!


I guess that's why US media is hyping how China is surpassing the US in AI.


Why would you write code using AI? If the AI is able to figure out requirements, why not just let it do it directly, simply speaking?

I always have a hard time to see the benefit of code-generation tools in general. If you can generate the code for a piece of functionality, you may as well abstract away primitives for that and make it a one-liner operation in the code you're writing. If that's not possible, it's probably a shortcoming of the language, framework or whatever system you're using (which is admittedly the case sometimes in the real world, because things evolve slowly).


Code generation can be useful if it produces provably correct code. Any abstraction you write yourself you still have to test carefully. For example, it's a lot easier and safer to use something that generates state machines from descriptions than to write your own state-machine interpreter.
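
To make the state-machine point concrete, here is a minimal sketch (my own toy example, not any particular generator's output): the machine is described purely as data, and one small, easily tested interpreter drives every machine described that way.

```python
# Describe a state machine as data instead of hand-writing transition logic.
# The traffic-light example is purely illustrative.

TRANSITIONS = {
    ("green", "timer"): "yellow",
    ("yellow", "timer"): "red",
    ("red", "timer"): "green",
}

def run(start, events, table=TRANSITIONS):
    """Feed events through the transition table, returning the final state."""
    state = start
    for event in events:
        try:
            state = table[(state, event)]
        except KeyError:
            raise ValueError(f"no transition for {event!r} in state {state!r}")
    return state

final = run("green", ["timer", "timer", "timer"])  # cycles back to "green"
```

Because the interpreter is generic, correctness arguments (every state reachable, no dangling transitions) can be made once about the table format rather than re-proven for each hand-written machine.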


> AI+Humans

Sounds like just marketing bull. It sounds like they have libraries and they are using them to spin up apps faster. That might have value if the libraries are good but that's clearly not AI.


Or perhaps they use Humans first (like Mechanical Turks), and plan to replace them later by AI when the technology is there.


What if the Mechanical Turks are like the MC simulations used for training the AlphaGo networks?

What if you stuff the Qt docs into a DL model and use the Qt source code as training data? Could the network produce usable source code based on the docs?


"Pray, Mr. Babbage, if you put into the machine [random code fragments], will the right [code] come out?"

https://en.wikiquote.org/wiki/Charles_Babbage#Passages_from_...


Tl;dr.

But Babbage's machine was not differentiable.


Getting a Theranos-type vibe here, where they claim some advanced technology but are just using normal stuff to get the work done. This sounds essentially like a consulting firm.


I believe what this startup does is impressive:

Train a series of neural networks to produce buzzwords at a fast enough rate to trick actual people with more money than brains into giving them money.

Getting almost 30 million with just vaporware is always impressive. I wouldn't be capable of doing anything similar myself.


Their web site[0] claims they've already had $60MM in revenue. Curious why that impressive pre-A-round number wasn't included in this article.

[0]: https://www.engineer.ai/customers


I’m pretty confused by the “AI” part of this. It looks like they built a library of components and templates, and then have people snap the components together. Clever, useful to a point, but not AI.


Lots of red flags: the founders supposedly exited nivio (the 'first cloud computing company') at $100m, but the company seems to have disappeared.


They claim to have raised $21m (from Videocon and AEC Partners) in Feb 2012, but by the end of the year the founder(s) claim to have stepped down with a $100m payout (according to engineer.ai), while the company carried on and died in 2014.

I'm not an expert in this, but one of the founders is a nephew of Indian billionaire Venugopal Dhoot (chairman of Videocon); according to Wikipedia and news sources, he has been wanted by the police since April 2018 over irregular loans between his companies amounting to hundreds of millions.

Videocon filed for bankruptcy proceedings in June 2018. That connection is very sketchy.


Beyond Saurabh's family connection, Engineer.ai has no relationship, financial or otherwise, with Videocon or Venugopal Dhoot.

- Disclaimer, I'm a VP E at Engineer.ai


I was referring to your founder's previous company, not to Engineer.ai; I didn't mean to imply there was a connection in the new company.


Our founder, Sachin, exited Nivio at a valuation of $100m, and used some of the proceeds to bootstrap Engineer.ai.

From my understanding, Nivio was merged into one of the investor's companies.

- Disclaimer, I'm a VP E at Engineer.ai


In my experience, there is a tradeoff between high re-usability and performance.

Building modules that can interact with any one of hundreds of other modules usually requires a fair amount of adapter code. This code comes with a performance penalty.

Maybe their market is ok with lesser performance. But I am highly skeptical that apps built this way will ever compete with apps built for a single, specific purpose.

I do hope they can advance the state of the art in some way though, because software development still feels way more tedious to me than it should be.
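
The adapter overhead mentioned above can be illustrated with a toy example (entirely invented names, not anyone's real modules): two reusable components disagree on field names and units, so a translation layer has to sit between them, and every hop through it copies and converts data.

```python
# Two reusable modules that don't share a data shape, glued by an adapter.

def payments_module(record):
    # this module expects keys: "amount_cents", "currency"
    return record["amount_cents"] / 100

def adapt_order_to_payment(order):
    # glue code: rename and convert fields so the modules interoperate;
    # each call allocates a fresh dict, which is where the overhead lives
    return {"amount_cents": int(round(order["total"] * 100)),
            "currency": order["ccy"]}

order = {"total": 19.99, "ccy": "USD"}
paid = payments_module(adapt_order_to_payment(order))
```

One adapter is cheap; a deep chain of them on a hot path is not, which is the performance tradeoff of highly reusable modules.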


People run JS everywhere and use Ruby/Python/PHP on the server. None of these are efficient.


We need a new type of object that isn't just a clever, reusable pattern but understands how to adapt to the program it's in: a smart object. I was hoping that's what this company was doing with AI, but I guess not.


That's the best idea I have heard this year.


And this already exists to a certain extent - people selling Windows UI widgets, people selling React components... most of this stuff has gone away and been replaced by open SDKs or services.

This just seems like the gig economy + the 90s Windows widget business.


> We created Engineer.ai so that everyone can build an idea without learning to code

I'll believe that when I see it.


Where is the AI in this?


No offense, but one of the founders' middle name is Dev.


You know in all these years that I've worked at Engineer.ai, I never actually made that connection!


I'm surprised no one has mentioned security yet. This seems like a great strategy for building software systems with plentiful security issues.


Security is definitely a topic we're super concerned about. There's an interesting tradeoff with using reusable components: if there's a security vulnerability in one component, then all apps using it are affected (for example, the notable npm incident: https://www.theregister.co.uk/2018/07/12/npm_eslint/)! The flip side is that one can easily detect and patch all applications affected by that vulnerability. I'd love to chat with anyone who has thoughts on how to deal with this problem more effectively (my email is in my description).
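
The "easily detect" half of that tradeoff can be sketched in a few lines (purely illustrative component names and versions, not real platform data): if the platform keeps an index of which app uses which shared component, one lookup finds everything affected by a newly reported vulnerability.

```python
# Illustrative dependency index: app -> set of pinned shared components.

APP_DEPENDENCIES = {
    "hr-workflow":      {"auth@1.2", "email@2.0", "payments@3.1"},
    "order-management": {"auth@1.2", "payments@3.1"},
    "click-live":       {"auth@1.1", "email@2.0"},
}

def apps_affected_by(vulnerable_component):
    """Return every app that pins the vulnerable component, sorted by name."""
    return sorted(app for app, deps in APP_DEPENDENCIES.items()
                  if vulnerable_component in deps)

affected = apps_affected_by("auth@1.2")
```

The same index then doubles as the work queue for a mass patch-and-redeploy, which is exactly what the npm-style blast radius makes necessary.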

- Disclaimer, I'm a VP E at Engineer.ai


I am not an engineer. I have great respect for engineers. Engineers, however, generally hold the view that other professions can easily be wholly disrupted by AI. But god forbid someone suggests that engineers could be replaced by AI, and we all lose our shit. :thinking: lol.


Because when someone who doesn't understand ML and AI makes that assertion, they're just ignorantly speculating in a pointy-haired-boss sort of way, whereas an engineer who actually works with AI will have much more realistic views on what can and can't be automated in the near future.


Their video is so cheesy I could only make it halfway through.


They literally have a "VP of blockchain" >.<


Blockchain is something we're currently exploring as a way to augment our business and deal with complex problems like identity management, royalty payouts, and escrowed payments. Additionally, today we can only work with developers in dev shops; in the future we want to expand into the freelancer market. Dealing with freelancers at scale is a complicated proposition; managing them has been super hit and miss without a well-structured workflow. We're hoping we can solve that problem with a mix of automation and blockchain.

- Disclaimer, I am the aforementioned VP Blockchain at Engineer.ai


Okay that's actually more reasonable, thanks


Oh yeah, the Grid 2.0 it sounds like.

What exactly is AI about this?

What did the BBC use?


Well, there's a pretty big difference between us and the Grid - namely that we don't believe humans will ever be fully replaced in software development. Instead we're looking to simply automate the repetitive parts of the SDLC in order to bring down the cost and time of developing bespoke software. Also, we're super clear that there are limitations on the kind of software our platform can help us automate. We're specifically interested in applications that have a high degree of re-use, and we actually turn away customers who don't fit that model. For example, we'd never build your enterprise ERP system, though we'd be great at building your HR workflow app or your order management system.

Our platform is actually made up of a collection of tools and microservices including everything from a user story management system (similar to Jira/PivotalTracker), to the assembler itself which stitches components together and creates scaffolding for applications (infrastructure and code). We use AI in a variety of ways throughout this ecosystem (my colleague @sachmans has touched upon a few of those ways above).

The BBC asked us to build their BBC Click Live app. They were launching Click in India for the first time ever and wanted an application to allow for audience participation. Since their app was relatively simple and composed primarily of reusable components, we were able to do it at a fraction of the price and timeline it would have taken if it had been created entirely by a human team.

- Disclaimer, I'm a VP E at Engineer.ai


The "building block" industry is already a decade-plus old - it's called AWS. Most development is already relegated to just doing the masonry between the blocks.

"Building blocks" without operational support are useless. How do you provide support for some closed "building block" contributed by someone you will never meet? Open source solves this by making "building blocks" freely available to everyone. Clouds solve this by selling you a service so that you don't worry about the code.


These sorts of companies are the way of the future if humans are indeed partial halting oracles:

https://news.ycombinator.com/item?id=18381723

Formal proof:

https://www.am-nat.org/site/law-of-information-non-growth/

Discussion of proof:

https://news.ycombinator.com/item?id=18377525


You mean bullshit companies?


I mean human+AI companies.

"When a cheap laptop beats the smartest mathematicians at some tasks, but even a supercomputer with 16,000 CPUs can’t beat a child at others, you can tell that humans and computers are not just more or less powerful than each other – they’re categorically different."

- Peter Thiel, "Zero to One", Chapter 12

Further discussion: https://news.ycombinator.com/item?id=18381723


Wow, I see a lot of knee-jerk reactions here and I think most of these miss the point - even if AI helps and writes some of the code, it will only fill out some gaps at certain levels.

Think about this: we engineers are writers of specs, the same way as product owners are writers of specs, except they do it at a much higher level.

The high-level spec (let's say the top 5%) gets passed down and we fill out the ~middle 50%. What's the rest at the bottom? It's the shoulders of giants we stand on.

So the same way as we welcome the increasing abstraction levels, from machine code to C, from C to TypeScript and GraphQL, why don't we welcome this development too?

It's not probable that we will get automated away anytime soon, and if we are, well that means AI has truly advanced, certainly something to celebrate, even if it comes at a financial loss for us.

Perhaps, once we won't get paid anymore to improve ad networks, we will have the time to do something that actually improves the human condition...

Edit: can the downvoter please elaborate?


I have no doubt that better app builders will emerge in the future, but I expect them from cloud providers, not gig agencies. Look at something like Aurora from AWS - what are you paying for? You're paying partially for the functionality, but mostly for the operational support (someone else keeps the SLA intact).

My guess is they are being built with a cloud acquisition in mind.


Fair enough, I don't doubt they are trying to bullshit their way to riches, my comment was in reply to people who say it can't be done because of performance issues and such.



