May 28, 2024

Exploring Vector Databases and API Models in Generative AI: Applications of Generative AI in Various Industries

Reading time: 6 minutes


We’ve talked with Kiryl Halozhyn, Solutions Architect at Databricks, about how Generative AI will change businesses and the technical aspects of implementing Generative AI within different infrastructures.

Read the interview!

Key Takeaways:

Main Applications of Generative AI in Companies

  1. Current Trends: Companies are advancing beyond basic chatbots to more sophisticated MVP solutions for specific scenarios that enhance business processes, leveraging vector databases with both unstructured and structured data.
  2. Integration with Business Knowledge: Foundational models are now enriched with company-specific knowledge to address particular business problems and reduce risks like hallucinations, combining conversational abilities with domain-specific expertise.
  3. Initial Use of API-based Models: Companies start with OpenAI’s API-based models for proofs of concept (PoCs) due to low costs and high-quality responses.
  4. Transition to Open-Source Models: During the production phase, many companies prefer open-source models for better cost performance and lower latencies.
  5. Fine-Tuning: Although initially popular, fine-tuning is complex and costly, requiring extensive infrastructure, large datasets, and GPU power. Many companies outsource this task to specialized firms like MosaicML.

Future of Domain-Specific Generative AI

  6. Favoring Smaller, Open-Source Models: The future is likely to favor smaller, faster, and cheaper models tailored to specific use cases, offering the best cost performance by combining multiple open-source models.
  7. Shift from SaaS to In-House Solutions: Although SaaS models were initially popular due to the lack of implementation knowledge, companies are now better equipped to develop their own solutions using either SaaS APIs or self-hosted models.
  8. Data Privacy Concerns: Companies need to ensure data privacy, especially when using third-party APIs like OpenAI’s, which involves verifying the data processing practices of these services.
  9. Company Culture and Data Quality: Many companies are still establishing foundational data platforms (like data lakehouses) and need to improve the quality of unstructured data before prioritizing Gen AI use cases.
  10. Preparation for Future Usage: Companies need to organize and prepare internal data from platforms like SharePoint, Confluence, and Jira for effective future Gen AI applications.

 


The Interview with Kiryl Halozhyn, Solutions Architect at Databricks

Addepto: What are the main applications of Generative AI in companies?

Kiryl Halozhyn, Solutions Architect at Databricks: Initially, Gen AI was mainly used to create various types of chatbots, support bots, and simple Q&A engines. Today, these solutions seem quite basic, but in the beginning they required courage because the technology was quite new, so the risk was significant. Now, however, we see more and more advanced MVP solutions being applied to specific scenarios to improve companies’ processes, and they are usually based on vector databases utilizing both unstructured and structured data.

Thanks to these databases, the so-called foundational models are enriched with very specific knowledge, which allows them to address more specific business problems and decreases the risk of, for example, hallucinations. You could say that we combine the impressive conversational abilities known from ChatGPT with the appropriate company-specific knowledge.
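To make this pattern concrete, here is a minimal sketch of retrieval over a vector database feeding a foundational model, assuming the OpenAI Python SDK for embeddings and chat and a local FAISS index as the vector store; the documents, model names, and question are placeholder assumptions, not details from the interview.

```python
# Minimal RAG sketch: a vector database enriches a foundational model with
# company-specific knowledge. Assumes `pip install openai faiss-cpu numpy`
# and an OPENAI_API_KEY in the environment; model names are illustrative.
import numpy as np
import faiss
from openai import OpenAI

client = OpenAI()

# Company-specific snippets that the foundational model has never seen.
documents = [
    "Refund requests above 500 EUR require approval from the finance team.",
    "Our enterprise support SLA guarantees a first response within 4 hours.",
]

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data], dtype="float32")

# Build the vector index over the internal documents.
doc_vectors = embed(documents)
index = faiss.IndexFlatL2(doc_vectors.shape[1])
index.add(doc_vectors)

def answer(question: str) -> str:
    # Retrieve the most relevant snippet and ground the model's answer in it,
    # which is what reduces hallucinations compared to asking the bare model.
    _, ids = index.search(embed([question]), 1)
    context = documents[ids[0][0]]
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": f"Answer using only this context: {context}"},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(answer("What is our first-response SLA for enterprise customers?"))
```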

What models are used as the basis for such solutions?

API-based models from OpenAI are usually used first, in PoCs, because calling a ready-made API in a small test solution does not cost much and the model’s response quality is very high. However, many companies decide to switch to an open-source model at a later stage, during the productionization phase, looking for lower latencies and better cost-performance metrics. For a product that is to be rolled out across the whole company, the cost of using a smaller single open-source model, or multiple models in a chain, is lower.
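One way this switch is often kept cheap is to leave the client code identical and only change the endpoint, since a self-hosted open-source model can sit behind an OpenAI-compatible server (vLLM, for example, provides one). A sketch under those assumptions, with the local URL and model names as placeholders:

```python
# Sketch: the same client code serves the PoC (hosted API) and production
# (self-hosted open-source model behind an OpenAI-compatible endpoint such
# as vLLM's). The URL and model names below are assumptions for the example.
import os
from openai import OpenAI

USE_SELF_HOSTED = os.getenv("USE_SELF_HOSTED", "0") == "1"

if USE_SELF_HOSTED:
    # Production: smaller open-source model, lower latency and per-token cost.
    client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")
    model = "meta-llama/Llama-3.1-8B-Instruct"
else:
    # PoC: hosted API, minimal setup cost, high response quality.
    client = OpenAI()
    model = "gpt-4o-mini"

resp = client.chat.completions.create(
    model=model,
    messages=[{"role": "user", "content": "Summarize our Q2 support tickets."}],
)
print(resp.choices[0].message.content)
```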

In the initial phase, there was a lot of talk about fine-tuning, but it doesn’t seem to be the solution that companies are eager to reach for at the moment.

Fine-tuning is the next stage for Gen AI use cases, and it is even more complicated and costly. Using vector databases does not require preparing data for fine-tuning or a large amount of GPU computing power: RAG can be done with OpenAI by simply paying for the tokens retrieved from the API, which is quite simple. To do fine-tuning, however, you need your own model training infrastructure, well-prepared, large datasets of custom data, and a large number of provisioned GPU instances. There are several companies on the market that specialize in fine-tuning models on behalf of others and take part of that complexity away, such as MosaicML.
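For contrast with the pay-per-token RAG path, the sketch below shows roughly what even a lightweight fine-tuning setup looks like, assuming Hugging Face transformers with a LoRA adapter from peft; the base model, toy dataset, and hyperparameters are placeholders, and a real run still requires curated data and provisioned GPUs.

```python
# Sketch of a lightweight (LoRA) fine-tuning setup with Hugging Face
# transformers + peft. Model name, data, and hyperparameters are placeholders;
# a realistic run still needs curated datasets and provisioned GPU instances.
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "facebook/opt-125m"  # tiny stand-in for a real open-source LLM
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Attach a LoRA adapter so only a small fraction of the weights is trained.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))
model.print_trainable_parameters()

# Toy custom dataset; in practice preparing this data is the hard, costly part.
texts = ["Q: What is our refund policy? A: Refunds above 500 EUR need approval."]
ds = Dataset.from_dict({"text": texts}).map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=128)
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```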

Read more: Fine-Tuning LLMs. Benefits, Costs, Challenges

Will the future of domain-specific Gen AI solutions be based on open-source models?

Most likely, yes. It is difficult to use one large model for every scenario and every company, so in my opinion the future will favor smaller, faster, cheaper models focused on specific use cases. Combining multiple open-source models in a single use case will provide the best cost-performance outcome.
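As an illustration of combining several small open-source models in one use case, the sketch below chains a zero-shot classifier that routes a request with a task-specific model that handles it; the model choices and routing labels are assumptions made for the example.

```python
# Sketch: chaining two small open-source models instead of one large one.
# A zero-shot classifier routes the request; a small task model handles it.
# Model names and routing labels are illustrative assumptions.
from transformers import pipeline

router = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")
sentiment = pipeline("sentiment-analysis",
                     model="distilbert-base-uncased-finetuned-sst-2-english")

def handle(request: str) -> str:
    labels = ["summarize a document", "analyze customer sentiment"]
    route = router(request, candidate_labels=labels)["labels"][0]
    if route == "summarize a document":
        return summarizer(request, max_length=60, min_length=10)[0]["summary_text"]
    return str(sentiment(request)[0])

print(handle("The new release is fantastic, support resolved my issue in minutes!"))
```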

However, most solutions based on LLMs are interfaces that essentially offer the same thing as ChatGPT.

Yes, many such SaaS products have been created, but it seems to me they have a fairly limited lifespan. In the beginning they might have made sense, because there was a lack of knowledge about how to implement these scenarios using LLMs and Gen AI in general, how to define their parameters, and so on. Now the major tech companies on the market have their own foundational models, and companies overall have more knowledge of how to implement their own solutions, whether using SaaS APIs or self-hosted models.

Do you think the development of Generative AI will focus on finding and addressing these areas that are specific to a given domain and a given company?

Using publicly available models means companies have to give the green light to send their data to a third-party SaaS API. It seems to me that this will play a major role in the development of smaller open-source models hosted on companies’ own infrastructure.

By using the OpenAI API, our data is theoretically not used to train the model.

Theoretically, yes, but this is a Microsoft product, so, for example, someone who does not use Azure, or is not in the cloud at all, has to verify how Microsoft processes personal data.

What other barriers are blocking the introduction of AI?

Company culture and the quality of unstructured data. Many companies are not yet thinking about AI because they are still implementing foundational data platforms, such as data lakehouses, and it will take time before they start prioritizing Gen AI use cases. Moreover, whereas before LLMs companies usually focused on the quality of structured data and creating a single source of truth for it, unstructured data now plays a crucial role in the success of Gen AI solutions. Therefore, many companies first have to do their homework and get SharePoint, Confluence, Jira, and other internal data prepared for future usage.

But I believe that, with time, LLM use cases will become a crucial part of companies’ data strategies, and right now we are just seeing the beginning of this new trend.
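As a rough illustration of that homework, here is a small sketch that normalizes exported internal documents into clean, chunked records with metadata, ready to be embedded later; the folder layout, file format, and field names are assumptions made for the example.

```python
# Sketch: normalizing exported internal docs (e.g. Confluence/Jira/SharePoint
# dumps) into clean, chunked records with metadata for later embedding.
# The folder layout and JSON fields below are assumptions for the example.
import json
import re
from pathlib import Path

def clean(text: str) -> str:
    # Strip leftover HTML tags and collapse whitespace from wiki exports.
    text = re.sub(r"<[^>]+>", " ", text)
    return re.sub(r"\s+", " ", text).strip()

def chunk(text: str, size: int = 800, overlap: int = 100) -> list[str]:
    # Fixed-size character chunks with overlap; a simple, common baseline.
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text), 1), step)]

def prepare(export_dir: str, out_path: str) -> None:
    records = []
    for path in Path(export_dir).glob("**/*.json"):
        doc = json.loads(path.read_text(encoding="utf-8"))
        for i, piece in enumerate(chunk(clean(doc.get("body", "")))):
            records.append({
                "source": doc.get("source", path.name),  # e.g. "confluence"
                "title": doc.get("title", ""),
                "chunk_id": i,
                "text": piece,
            })
    Path(out_path).write_text(
        "\n".join(json.dumps(r, ensure_ascii=False) for r in records),
        encoding="utf-8",
    )

prepare("exports/", "prepared_chunks.jsonl")
```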

Check out other insights about Gen AI usage in business practice, as well as the technical, legal, and business challenges of AI implementation, in our report.

Read more in Gen AI in Business: Global Trends Report 2024




Category: Generative AI