Building Natural Experiences Over Existing Components with LLMs

Hülya Pamukçu Crowell
5 min read · Dec 19, 2023


Large language models (LLMs) can be used to build natural language interfaces for existing applications, reducing development time and effort while producing more intuitive, user-friendly experiences. A common pattern is to provide “on-demand assistant widgets” that help answer domain-specific questions. Another pattern is to embed these experiences seamlessly into the components users already use in the applications, such as a table with filtering and sorting capabilities or a chart with visualizations. This article explores adding an intuitive LLM integration to an existing UI component without changing the component’s implementation, data retrieval, or rendering mechanisms.

Approach

For this article, we use a well-known open-source library, Cloudscape, and its Table component, but you can apply similar concepts to other components. The component lets users create “Property” filters using selectors or free-text input, which in turn drives query generation for the backend to retrieve the relevant items and display them to the user. To enable LLM integration, we create an interception point in the component’s property filter handler: depending on the input mode, we rely either on the filter generated from the selectors or on the filter generated by the LLM, in what we call “smart filter” mode.

To understand this better, let’s look at the indirection we add. When the filter property changes in the handler, the logic in the hook retrieves the items from the backend. In “smart filter” mode, we first call OpenAI’s chat completion API with the gpt-3.5-turbo model and the prompts needed to generate the filter from the user’s query, and then set the query state to the returned JSON. After the query state is set, the rest of the flow is unchanged. This pattern therefore has two additional advantages: users can alternate between the two interfaces, and our original guardrails stay in place because we make the same API calls to our backend for data retrieval.
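As a rough sketch of that interception point (assuming Cloudscape’s PropertyFilter component and its onChange event; the smartFilterEnabled flag, setQuery, and genFilter names below are illustrative, with genFilter sketched later in the article):

```typescript
import * as React from "react";
import PropertyFilter, { PropertyFilterProps } from "@cloudscape-design/components/property-filter";

interface SmartPropertyFilterProps {
  propertyFilterProps: PropertyFilterProps;          // props from the collection hook, passed through
  smartFilterEnabled: boolean;                       // the explicit "smart filter" opt-in
  setQuery: (q: PropertyFilterProps.Query) => void;  // unchanged query state setter used for data retrieval
  genFilter: (prompt: string) => void;               // LLM path, see the hook sketch below
}

// Thin wrapper that intercepts the property filter's onChange before it reaches the collection.
export function SmartPropertyFilter(props: SmartPropertyFilterProps) {
  return (
    <PropertyFilter
      {...props.propertyFilterProps}
      onChange={({ detail }) => {
        // Free-text tokens have no propertyKey; in smart mode we treat the typed phrase as the prompt.
        const freeText = detail.tokens.find((t) => !t.propertyKey)?.value;
        if (props.smartFilterEnabled && freeText) {
          props.genFilter(String(freeText));
        } else {
          props.setQuery(detail);
        }
      }}
    />
  );
}
```

In selector mode the component’s own query passes through untouched; in smart mode the free text the user typed becomes the prompt for the LLM.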

LLM integration for filter generation

The component

Let’s look at the component’s current selector-generated filter.

For example, if the table is displaying a collection of distributions with ID, State, Domain Name, Delivery Method, SSL certificate, and Logging properties, and we want to “Display active distributions with logging disabled,” we would set up the following filter:

In JSON format, this would generate:
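The exact output depends on the configured filtering properties; with hypothetical property keys it would look roughly like the following, matching the shape of Cloudscape’s PropertyFilterProps.Query:

```json
{
  "operation": "and",
  "tokens": [
    { "propertyKey": "state", "operator": "=", "value": "Active" },
    { "propertyKey": "logging", "operator": "=", "value": "Disabled" }
  ]
}
```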

The LLM integration

We are introducing a “smart filter” option to switch to natural language mode. (We could detect this and switch automatically, but while rolling out any new experience, it is better to offer an explicit opt-in, which keeps the behavior predictable and obvious to the user.) When the user types a question or phrase in this mode, we use the LLM chat completion API to generate the corresponding JSON filter and pass it to the component. This triggers the backend fetch that retrieves and displays the related items to the user. The user can provide follow-up statements to refine the results further.

Smart Filter

The basic implementation of the smart filter hook is as follows.

  • We prepare a system prompt (see the sketch after this list). We start by setting the outline and general instructions; then we provide distinct parts separated with tags as delimiters. The code section provides the type definitions used by property filters; these types define the allowed operators, the token structure, and the operation for combining tokens into a query. The distributions section is our items subset, which helps the model determine the fields and their potential values. In the last section, we provide example queries and their output JSONs. Note that we follow prompt tactics mentioned in the OpenAI documentation: assign a persona, create distinct parts with tags such as “<distribution>”, “<items>”, or “<json>”, and provide examples.
  • While selecting the subset of items, we ensure good coverage of potential field values. It is also essential to keep this list stable during the same “query session” for consistent results.
  • We maintain a “query session context” because the completion APIs we use are stateless. When the user enters a prompt, we append it to the context and send the context with subsequent calls. A few things are important when managing this context. It needs to stay bounded; in the implementation, we added eviction logic to remove old prompts. Depending on the application, more complex logic might be required to limit the context length while preserving the information the model needs. You can find more on strategies such as summarizing or filtering in the OpenAI documentation.
  • When genFilter is invoked after the user enters a prompt, we prepare the messages with the appropriate roles (system, assistant, and user) and make the model API call.
  • When a response is returned, we add it to our context with the assistant role for subsequent calls. This turned out to be critical in some use cases. In the last query shown below, where we “invert” the previous query, the message with the assistant role containing the model’s previous response had to be in the context for the inversion to work. For the other queries, providing only the user-role prompts as context was sufficient.
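The hook below is a minimal sketch of these steps under a few assumptions: it uses the OpenAI Node SDK’s chat.completions API and Cloudscape’s PropertyFilterProps.Query type, and names such as useSmartFilter, MAX_CONTEXT_MESSAGES, and distributionsSample are hypothetical. The prompt content and sample items are illustrative, and a production app would route the call through its own backend rather than exposing an API key to the client.

```typescript
import { useCallback, useRef } from "react";
import OpenAI from "openai";
import type { PropertyFilterProps } from "@cloudscape-design/components";

// In a real application, proxy this call through your backend instead of
// shipping the API key to the browser.
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Keep only the most recent exchanges so the query session context stays bounded.
const MAX_CONTEXT_MESSAGES = 10;

// Hypothetical subset of table items used to ground field names and values.
const distributionsSample = JSON.stringify([
  {
    id: "E1WG1ZNPRXT0D4",
    state: "Active",
    domainName: "example.com",
    deliveryMethod: "RTMP",
    sslCertificate: "Default",
    logging: "Disabled",
  },
]);

// System prompt: persona, tagged sections for type definitions, sample items, and examples.
const systemPrompt = `
You generate filters for a table of CloudFront distributions.
Respond only with a JSON object matching the Query type.

<code>
type Token = { propertyKey: string; operator: "=" | "!=" | ":" | "!:"; value: string };
type Query = { tokens: Token[]; operation: "and" | "or" };
</code>

<distributions>
${distributionsSample}
</distributions>

<examples>
User: Display active distributions with logging disabled
JSON: {"tokens":[{"propertyKey":"state","operator":"=","value":"Active"},{"propertyKey":"logging","operator":"=","value":"Disabled"}],"operation":"and"}
</examples>
`;

export function useSmartFilter(
  setQuery: (query: PropertyFilterProps.Query) => void
) {
  // Query session context; the chat completion API is stateless, so we resend it on every call.
  const context = useRef<OpenAI.Chat.ChatCompletionMessageParam[]>([]);

  const genFilter = useCallback(
    async (userPrompt: string) => {
      context.current = [...context.current, { role: "user", content: userPrompt }];
      // Simple eviction: drop the oldest messages once the context grows too large.
      if (context.current.length > MAX_CONTEXT_MESSAGES) {
        context.current = context.current.slice(-MAX_CONTEXT_MESSAGES);
      }

      const completion = await openai.chat.completions.create({
        model: "gpt-3.5-turbo",
        messages: [{ role: "system", content: systemPrompt }, ...context.current],
      });
      const content = completion.choices[0].message.content ?? "";

      // Keep the model's answer in the context so follow-up prompts can build on it.
      context.current = [...context.current, { role: "assistant", content }];

      // Hand the generated filter to the component; the rest of the data flow is unchanged.
      setQuery(JSON.parse(content) as PropertyFilterProps.Query);
    },
    [setQuery]
  );

  return { genFilter };
}
```

The eviction here simply drops the oldest messages, which works because each generated filter is self-contained; applications with longer sessions might summarize the context instead.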

A User Session

The following recording illustrates the execution of the smart filter.

Queries:

  1. Querying different fields:

The first user query is “Show all RTMP dist with default SSL cert.” Note that the items we provided and a single example were sufficient for the model to relate the query to the Delivery Method and SSL Certificate fields.
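With the field keys from the sketch above (again, hypothetical), the generated filter would look something like:

```json
{
  "operation": "and",
  "tokens": [
    { "propertyKey": "deliveryMethod", "operator": "=", "value": "RTMP" },
    { "propertyKey": "sslCertificate", "operator": "=", "value": "Default" }
  ]
}
```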

2. Refining the results:

One natural next step for the user is to refine the results. The next user query is “Please remove if they are not active.” We did not repeat the previous criteria, but since we passed the context to the model at each API call, the generated JSON was built on the last result, and we got the correct updated filter.
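Because the earlier prompts travel with each request, the refined filter keeps the previous tokens and adds the new condition, roughly:

```json
{
  "operation": "and",
  "tokens": [
    { "propertyKey": "deliveryMethod", "operator": "=", "value": "RTMP" },
    { "propertyKey": "sslCertificate", "operator": "=", "value": "Default" },
    { "propertyKey": "state", "operator": "=", "value": "Active" }
  ]
}
```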

3. Inverting the results:

The user may want to see the opposite results during the session. Our last query is “Now please show dist that is not shown.” Two things were needed for this to work: (1) the previous response had to be included in the messages sent to the API as assistant-role content, and (2) the system prompt had to contain an example query and response JSON that inverts the first example (see the system prompt above).
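Schematically, the messages sent for this third query look like the following (contents abbreviated; the wording of the assistant entries is illustrative):

```typescript
// Hypothetical message sequence for the inversion query.
const messages = [
  { role: "system", content: systemPrompt },
  { role: "user", content: "Show all RTMP dist with default SSL cert" },
  { role: "assistant", content: '{"operation":"and","tokens":[/* RTMP, Default SSL */]}' },
  { role: "user", content: "Please remove if they are not active" },
  // This assistant message is the critical piece: it is the filter being inverted.
  { role: "assistant", content: '{"operation":"and","tokens":[/* RTMP, Default SSL, Active */]}' },
  { role: "user", content: "Now please show dist that is not shown" },
];
```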

Recap

This article explored an approach to embedding natural language interfaces into existing UI components. We walked through a few examples of user queries and the LLM-generated filters a component needs. Beyond filtering, other capabilities such as sorting, paging, and customizing existing visualizations are natural next steps. As a footnote, like any other user experience, LLM-integrated experiences should be carefully studied through user research and extensively tested.

Photo by author @qulia
