Exploring the 4 types of generative AI startups

Many entrepreneurs, enterprises, and #investors seem confused, even panicked. One investor recently asked me, "How can I tell whether a #generativeai startup is just an interface over OpenAI?"

What started as a gold rush has turned into frustrated FOMO.

Earlier, I discussed the importance of data, but intellectual property goes beyond data. In ML, having IP is about more than defending it with a patent; it also means the work is difficult to copy, because the IP largely embodies hard work. Data can stand on its own, but it can also be combined with technology in nontrivial ways. Strong IP can also protect Gen-AI ventures from potential risks.

But what about "the interface"? Let me tell you a little secret: building a GPT model is simpler than you may think.

OpenAI and AI21 Labs perfected it, but multiple companies and organizations are doing it, including Meta, Salesforce, BigScience, and NVIDIA. Building your own GPT infrastructure doesn't mean you have a great tool in hand, since it can be unstable and produce low-quality results.

The real challenge is fine-tuning a GPT model to excel. To do so, you must build a great augmented language model using significant amounts of high-value data and a solid understanding of the problem. Prompt engineering can also make a big difference. If that's all a startup does, I won't invest in it, but understanding LLMs is vital to building a good product.
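To make the prompt-engineering point concrete, here is a minimal sketch of how framing a task with instructions and worked examples changes what a base model sees. All names and the example task are illustrative, not from any specific product:

```python
# Minimal few-shot prompt builder: the same base model behaves very
# differently depending on how the task is framed for it.
# Everything here is an illustrative sketch, not a specific vendor's API.

def build_prompt(task: str, examples: list[tuple[str, str]], query: str) -> str:
    """Assemble a few-shot prompt: task instruction, worked examples, then the query."""
    lines = [f"Task: {task}", ""]
    for inp, out in examples:
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
        lines.append("")
    lines.append(f"Input: {query}")
    lines.append("Output:")  # the model continues from here
    return "\n".join(lines)

prompt = build_prompt(
    "Classify the sentiment of a customer review as positive or negative.",
    [("Great service, will return!", "positive"),
     ("The package arrived broken.", "negative")],
    "Fast shipping and friendly support.",
)
print(prompt)
```

The value here is not the code, which is trivial, but the curated examples and task framing fed into it; that curation is where the hard-to-copy work lives.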

Let's get back to the "only interface" question. Do "only interface" startups fine-tune GPT models? No. A fine-tuned model is entirely different from the vanilla model. The real questions are how difficult that work would be to copy and what its market potential is.

How do we handle the risk of relying on API providers? Good question. Startups need the flexibility to choose which GPT model to use at every stage. GPT-3.5 is incredibly easy to work with but extremely expensive. Hosting a model on a dedicated server reduces costs and risks, but not always.
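One way to keep that choice open is to hide the provider behind a small abstraction, so a hosted API and a self-hosted model are interchangeable per request. The backends, costs, and routing rule below are illustrative assumptions, not real pricing or any particular vendor's SDK:

```python
# A hedged sketch of keeping the model provider swappable, so a startup is
# not locked into a single API. Costs and backend names are made up.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Backend:
    name: str
    cost_per_1k_tokens: float       # assumed cost, illustration only
    generate: Callable[[str], str]  # the actual completion call

def api_backend(prompt: str) -> str:
    # In production this would call a hosted API via its official SDK.
    return f"[api completion for: {prompt[:30]}]"

def local_backend(prompt: str) -> str:
    # In production this would run a self-hosted fine-tuned model.
    return f"[local completion for: {prompt[:30]}]"

BACKENDS = {
    "hosted-api":  Backend("hosted-api", 0.02, api_backend),
    "self-hosted": Backend("self-hosted", 0.002, local_backend),
}

def pick_backend(needs_top_quality: bool) -> Backend:
    """Route high-stakes requests to the strongest (priciest) model,
    everything else to the cheaper self-hosted one."""
    return BACKENDS["hosted-api" if needs_top_quality else "self-hosted"]

print(pick_backend(False).name)
```

With this seam in place, swapping providers or moving a workload to a dedicated server is a one-line routing change rather than a rewrite.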

I see four archetypes of startups in the field:

  1. "The hackers" - no proprietary data, hosting a fine-tuned open-source model. They can make good money but may be at risk of IP infringement.

  2. "The newbies" - using the #OpenAI API. They may have a great product, but it is easy to copy and expensive to monetize, making theirs a high-risk venture.

  3. "Good fellows" - solid IP, but relying on the infrastructure of the big players. If they can train smaller models, they can save time and money on infrastructure.

  4. "The lone wolf" - solid IP and a model hosted on their own servers. As long as it doesn't require too much handling, this can be cost-effective and protected in the long run, but developing and operating it should be well thought out.

As for Inpris HumAIns, we built a "cognitive architecture." Sometimes we run our well-differentiated, trained #Dialouge2Action models; sometimes we run our own algorithms; and in other cases we make a simple API call. If you can't tell how and when we do each, that's just fine.
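That mixed approach can be sketched as a dispatcher that sends each request to a trained model, a deterministic algorithm, or an external API call. This is a hypothetical illustration of the pattern, not Inpris's actual implementation, and the keyword routing is deliberately naive:

```python
# Hypothetical sketch: route each request to a trained model, a local
# algorithm, or an external API, in the spirit of the mixed architecture
# described above. Not anyone's real production code.

def handle_with_algorithm(query: str) -> str:
    # e.g. date math or lookups that need no model at all
    return "algorithm: computed locally"

def handle_with_trained_model(query: str) -> str:
    # e.g. a self-hosted, fine-tuned dialogue model
    return "trained model: conversational reply"

def handle_with_api(query: str) -> str:
    # fallback to a general-purpose hosted LLM
    return "api: forwarded to a hosted LLM"

def route(query: str) -> str:
    """Naive keyword routing; real systems use classifiers or planners."""
    q = query.lower()
    if any(w in q for w in ("days", "date", "how many")):
        return handle_with_algorithm(query)
    if q.endswith("?") or "help" in q:
        return handle_with_trained_model(query)
    return handle_with_api(query)
```

The competitive edge sits in the routing policy and the handlers behind it, which is exactly why it is hard to tell from the outside which path served a given answer.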
