Man, if they can solve that "trust" problem, OpenAI could really have a big advantage. Imagine if they were nonprofit, open source, documented all of the data they trained on, or published all of their boardroom documents. That'd be a real distinguishing advantage. Somebody should start an organization like that.
The cybersecurity gatekeepers care very little about that kind of stuff. They care only about what doesn't get them in trouble, and in many enterprises AI is still viewed as a cyber threat.
One of the things I find remarkable at work is that they block ChatGPT because they're afraid of data leaking, yet Google Translate has been promoted for years even though we don't really do business with Google. We're a Microsoft shop. Kind of a double standard.
I mean, it was probably a jibe at OpenAI's transition to for-profit, but you're absolutely right.
Enterprise decision makers care about compliance, certifications, and "general market image" (there's probably a proper English term for that). OpenAI has none of that, and it will be competing with companies that do.