Vectara Portal helps non-developers build AI apps to chat with data: How to use it

Vectara just made developing generative AI a breeze. The Palo Alto, California-based company, an early pioneer in retrieval augmented generation (RAG), has announced Vectara Portal, an open-source environment that anyone can use to build AI applications to communicate with their data.

While many commercial offerings let users get instant answers from their documents, what sets Vectara Portal apart is its accessibility and ease of use. In just a few simple steps, anyone, regardless of technical skill, can stand up a search, summarization or chat app grounded in their own data sets, without writing a single line of code.

The offering has the potential to enable non-developers to implement multiple use cases in their organization, from policy to invoice search. However, it is important to note that performance cannot be conclusively assessed yet as the tool is still very new and only a handful of customers are testing it in beta.

Ofer Mendelevitch, head of developer relations at Vectara, tells VentureBeat that since Portal is built on Vectara’s proprietary RAG-as-a-service platform, they expect massive adoption by non-developers, leading to greater adoption of the company’s comprehensive offerings at the enterprise level.

“We look forward to seeing what users will create with Vectara Portal. We hope the level of accuracy and relevance enhanced by their documents will demonstrate the full power of (Vectara’s) RAG systems for enterprises,” he said.

How does Vectara Portal work?

The portal is available both as an app hosted by Vectara and as an open-source offering under the Apache 2.0 license. Vectara Portal is built around a simple idea: users create portals (custom applications) and then make them available for their audience to use.

First, the user needs to create a portal account using their main Vectara account credentials and set up this profile with their Vectara ID, API key and OAuth client ID. Once the profile is ready, the user just needs to click on the “Create Portal” button and provide basic details such as the name of the proposed app, its description and whether it will act as a semantic search tool, summary app or conversational chat assistant. Then, clicking on the “Create” button will add it to the tool’s portal management page.
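Portal itself is point-and-click, but the profile and "Create Portal" form described above amount to a small amount of configuration. A minimal sketch of how those records might be represented — all field and class names here are illustrative assumptions, not Vectara's actual schema:

```python
from dataclasses import dataclass

# Illustrative sketch only -- field names are assumptions, not Vectara's schema.
@dataclass
class PortalProfile:
    vectara_customer_id: str   # the user's Vectara ID
    api_key: str               # Vectara API key
    oauth_client_id: str       # OAuth client ID

@dataclass
class Portal:
    name: str
    description: str
    mode: str                  # "search", "summary" or "chat"
    profile: PortalProfile

# Mirrors the fields of the "Create Portal" form
profile = PortalProfile("cust-123", "api-key-xyz", "oauth-abc")
portal = Portal(
    name="Policy Q&A",
    description="Ask questions about company policy documents",
    mode="chat",
    profile=profile,
)
```

The three modes map directly to the semantic search, summary and conversational chat options the form offers.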

Creation of a Vectara portal. Image credit: Vectara.

From the portal management screen, the user opens the created portal, goes to its settings and uploads any number of documents to ground the app in their data. As these files are uploaded, they are indexed by Vectara's RAG-as-a-service platform, which powers the portal's backend, to provide accurate answers with minimal hallucinations.

“This (platform) means a strong query engine, our state-of-the-art Boomerang embedding model, a multilingual reranker, fewer hallucinations and overall a much higher quality of answers to user questions in the portal. Since it is a no-code product, developers can quickly build new generative AI products with just a few clicks,” said Mendelevitch.

The head of developer relations noted that when a user creates a portal and adds documents, the tool’s backend creates a “corpus” specifically for that data in the user’s main Vectara account. This corpus serves as the store for all documents associated with the portal. When a user asks a question on the portal, Vectara’s RAG API runs that query against the associated corpus to find the most relevant answer.
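The corpus-per-portal design described above can be sketched as a simple mapping: each portal gets its own dedicated document store, and a question asked on a portal is routed only to that portal's corpus. This is a toy in-memory model, not Vectara's actual implementation; naive keyword overlap stands in for real retrieval:

```python
class CorpusStore:
    """Toy in-memory model of the corpus-per-portal design (not Vectara's API)."""

    def __init__(self):
        self.corpora = {}  # portal name -> list of document texts

    def create_portal(self, portal_name):
        # The backend creates a dedicated corpus when a portal is created
        self.corpora[portal_name] = []

    def add_document(self, portal_name, text):
        # Uploaded files are indexed into that portal's corpus only
        self.corpora[portal_name].append(text)

    def query(self, portal_name, question):
        # A question on a portal runs only against its own corpus;
        # keyword overlap is a stand-in for Vectara's retrieval.
        q_words = set(question.lower().split())
        docs = self.corpora[portal_name]
        if not docs:
            return None
        return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

store = CorpusStore()
store.create_portal("invoice-search")
store.add_document("invoice-search", "Invoice 42 was paid on March 3")
store.add_document("invoice-search", "The vacation policy allows 20 days")
best = store.query("invoice-search", "when was invoice 42 paid")
```

Because each portal only ever queries its own corpus, documents uploaded to one portal cannot leak into another portal's answers.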

Demo Vectara portal. Image credit: Vectara.

The platform first selects the most relevant parts of the documents (in the retrieval step) required to answer the user’s question and then feeds them into the large language model (LLM). Vectara offers users the option to choose from various LLMs, including the company’s own Mockingbird LLM as well as those from OpenAI.
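That two-step flow — retrieve the most relevant passages, then feed them to an LLM — is the standard RAG pattern. A minimal sketch of it, with naive keyword-overlap retrieval and a stubbed model standing in for Mockingbird or an OpenAI model (all function names here are illustrative):

```python
def retrieve(question, passages, top_k=2):
    """Retrieval step: rank passages by keyword overlap with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(passages,
                    key=lambda p: len(q_words & set(p.lower().split())),
                    reverse=True)
    return ranked[:top_k]

def generate(question, context, llm):
    """Generation step: feed the retrieved context plus the question to the LLM."""
    prompt = "Answer using only this context:\n" + "\n".join(context) + "\nQ: " + question
    return llm(prompt)

# Stub LLM for illustration -- Portal would call Mockingbird or an OpenAI model here.
def stub_llm(prompt):
    return "Answer based on: " + prompt.split("\n")[1]

passages = [
    "The refund window is 30 days from purchase.",
    "Support is available on weekdays only.",
]
top = retrieve("how long is the refund window", passages, top_k=1)
answer = generate("how long is the refund window", top, stub_llm)
```

The retrieval step keeps the prompt small and grounded in the user's own documents, which is what lets the LLM answer from the portal's data rather than from its training set.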

“For Vectara Scale (the company’s larger plan) customers, Portal leverages the best of Vectara’s capabilities, including the most powerful LLMs,” Mendelevitch added. The apps are public and shareable via links by default, but users can also restrict them to a select group of users.

The aim is to increase the number of corporate customers

With this no-code offering, available as both a hosted and open-source product, Vectara aims to empower more enterprise users to build powerful generative AI apps for various use cases. The company hopes it will increase sign-ups and promote its core RAG-as-a-Service offering, ultimately leading to better conversion.

“RAG is a very strong use case for many enterprise developers and we wanted to make this accessible to no-code developers so they can understand the power of Vectara’s end-to-end platform. Portal does just that and we believe it will be a valuable tool for product managers, business leaders and other C-level executives to understand how Vectara can help with their AI use cases,” said Mendelevitch.

The company has raised more than $50 million in funding to date and has around 50 production customers, including Obeikan Group, Juniper Networks, Sonosim and Qumulo.
