The growing capabilities of AI-supported technologies have significantly simplified their adoption, providing numerous benefits that strengthen the final products we deliver to our customers. This is why, at Ensolvers, we have chosen to incorporate several AI tools into our projects, with (so far) very positive results. In this note we describe a particular experience we had with ChatGPT for one of our customers in the telehealth space.
Due to the nature of our client's business, which is legally required to safeguard user data under HIPAA regulations, we needed to use a private instance of ChatGPT hosted in Azure Cloud for information analysis. This introduced additional complexity compared to using the ChatGPT API as-is: we had to configure the instance, customize the information to be analyzed, manage different models and API addresses, and use a private Azure SDK. While most of these configuration adjustments were advised and implemented in collaboration with our client, focusing on changes to API responses and management strategies, our primary focus remained on orchestrating the various elements involved.
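As a rough sketch of what this setup involves, the client construction below uses the azure-ai-openai Java SDK; the resource endpoint and key handling are placeholders, not our client's actual configuration:

```java
import com.azure.ai.openai.OpenAIClient;
import com.azure.ai.openai.OpenAIClientBuilder;
import com.azure.core.credential.AzureKeyCredential;

// The endpoint points to the private Azure OpenAI resource rather than the
// public OpenAI API; both values below are placeholders.
OpenAIClient client = new OpenAIClientBuilder()
        .endpoint("https://<our-private-resource>.openai.azure.com/")
        .credential(new AzureKeyCredential(System.getenv("AZURE_OPENAI_KEY")))
        .buildClient();
```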
GPT processes information through a structured text input referred to as a "prompt". The prompt establishes context, provides illustrative examples, sets response length constraints, defines expected data formats, and can even assign a persona-like role to the generated output. The prompt is split into discrete units called "tokens", which are subject to a predefined limit. Consequently, it is important to implement a preliminary filtering step to guarantee that the accumulated input does not exceed the token limit imposed by the model.
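As an illustration of such a pre-check (the tokenizer library is our assumption here, not something described in the original setup), the open-source jtokkit encoder can count tokens before a prompt is sent:

```java
import com.knuddels.jtokkit.Encodings;
import com.knuddels.jtokkit.api.Encoding;
import com.knuddels.jtokkit.api.EncodingRegistry;
import com.knuddels.jtokkit.api.ModelType;

public final class TokenGuard {

    private static final EncodingRegistry REGISTRY = Encodings.newDefaultEncodingRegistry();
    private static final Encoding ENCODING = REGISTRY.getEncodingForModel(ModelType.GPT_3_5_TURBO);

    /** Returns true if the accumulated prompt fits within the given token budget. */
    public static boolean fits(String prompt, int maxPromptTokens) {
        return ENCODING.countTokens(prompt) <= maxPromptTokens;
    }
}
```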
For example, we control the default token limit (4,096 for the model we used) through the Java snippet described below.
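Since the original snippet is not reproduced here, the following is a minimal equivalent using the Azure OpenAI Java SDK; the deployment name and token budget are placeholders, and API shapes vary slightly across SDK versions:

```java
import com.azure.ai.openai.models.ChatCompletionsOptions;
import com.azure.ai.openai.models.ChatRequestMessage;
import com.azure.ai.openai.models.ChatRequestUserMessage;
import java.util.List;

List<ChatRequestMessage> messages = List.of(new ChatRequestUserMessage(prompt));

// Cap the completion size so prompt + response stay inside the context window.
ChatCompletionsOptions options = new ChatCompletionsOptions(messages)
        .setMaxTokens(1024)    // hypothetical budget for the generated answer
        .setTemperature(0.2);  // low temperature for analysis-style tasks

String reply = client
        .getChatCompletions("our-gpt-deployment", options) // Azure deployment name (placeholder)
        .getChoices().get(0).getMessage().getContent();
```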
It's worth noting that configurable topics within the prompt can determine the content that may or may not be included, categorized by different levels of tolerance. GPT pre-analyzes the prompt, and if it includes any restricted topics, it returns an error message indicating that analysis is impossible. Therefore, it's crucial to filter the information to be reviewed beforehand to prevent error responses, or to handle them in the output. For example, we handled this by prioritizing messages, so our client could address the medical issues that required their attention due to the urgency of the content. Also, to avoid losing messages to possible rejections, we force GPT to mark them with a special tag so we don't lose them.
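A sketch of that fallback follows; the tag name is hypothetical, and the exact error surface depends on the SDK version (Azure's content filter typically rejects requests with an HTTP 400 whose body carries a content_filter code):

```java
import com.azure.core.exception.HttpResponseException;

String analyzeOrFlag(String deployment, ChatCompletionsOptions options) {
    try {
        return client.getChatCompletions(deployment, options)
                .getChoices().get(0).getMessage().getContent();
    } catch (HttpResponseException e) {
        boolean filtered = e.getResponse() != null
                && e.getResponse().getStatusCode() == 400
                && e.getMessage() != null
                && e.getMessage().contains("content_filter");
        if (filtered) {
            // Tag instead of dropping, so the message survives for manual review.
            return "FLAGGED_FOR_REVIEW"; // hypothetical tag
        }
        throw e;
    }
}
```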
Crafting the context is another critical aspect. If the prompt's context is not correctly provided, there is a high chance that the response generated by GPT will be erroneous or imprecise. Often, it is necessary to explicitly specify what should not be included in the response, such as non-existent names or incorrect formats. Good practices for getting a proper answer include providing accurate and relevant context, explicitly stating what to exclude, and constraining the expected output format.
A prompt example comes from our "Help me Write" use case, in which we attempt to create a written response to a client on behalf of a member of the company.
We add some additional context ({ADD_CONTEXT}) to give GPT more information about the client and to suggest a kind of answer, obtained from the chat messages between the aforementioned company member and the user. We also fill in the "gaps" between brackets with the relevant data used to create the completion.
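While we cannot reproduce the exact prompt here, a hypothetical template following the same bracket convention, filled in with plain string replacement, could look like this ({ADD_CONTEXT} appears in the real prompt; the other placeholders are illustrative):

```java
String template = """
        You are {AGENT_NAME}, a representative of {COMPANY}.
        Using the conversation below, draft a courteous written reply to the client.
        Do not invent names, dates, or medical details that are not in the conversation.
        {ADD_CONTEXT}
        Conversation:
        {CHAT_HISTORY}
        """;

String prompt = template
        .replace("{AGENT_NAME}", agentName)
        .replace("{COMPANY}", companyName)
        .replace("{ADD_CONTEXT}", additionalContext) // extra client info from chat history
        .replace("{CHAT_HISTORY}", chatHistory);
```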
Another use case for ChatGPT is message classification: we process messages, tagging and categorizing them with priorities so they are managed and reviewed with the proper relevance and urgency. In this case, we use a prompt tailored to that goal.
Here the prompt needs to be more precise about how tagging should be handled, to prevent unexpected categories, and it even specifies the response format so we can parse it reliably. After receiving the reply, we tag every message by matching the returned information against our categories and priorities. Messages that are not properly tagged receive the "OTHER" tag, so we don't miss any processed messages.
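A minimal sketch of that fallback, with hypothetical category names:

```java
import java.util.Set;

private static final Set<String> KNOWN_TAGS =
        Set.of("URGENT_MEDICAL", "BILLING", "SCHEDULING", "GENERAL"); // hypothetical taxonomy

String normalizeTag(String modelOutput) {
    String tag = modelOutput == null ? "" : modelOutput.trim().toUpperCase();
    // Anything outside our taxonomy falls back to OTHER so no processed message is dropped.
    return KNOWN_TAGS.contains(tag) ? tag : "OTHER";
}
```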
Another use case we had in this project was playlist suggestion for the internal music player, matching the "mood" of its users. Here the prompt is built from the user's mood selections and our playlist catalog.
So, as the user picks a few predefined moods, we give GPT the list of playlists by name along with the selected moods, and expect to receive the ids of the playlists stored in our DB that the AI finds appropriate for the person's feelings at that moment.
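A sketch of this round trip, with hypothetical data structures on our side:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Playlists come from our DB as id -> name pairs; moods are the user's selections.
String buildMoodPrompt(Map<Long, String> playlists, List<String> moods) {
    String catalog = playlists.entrySet().stream()
            .map(e -> e.getKey() + ": " + e.getValue())
            .collect(Collectors.joining("\n"));
    return "Given these playlists (id: name):\n" + catalog
            + "\nReturn only a comma-separated list of the ids that best match these moods: "
            + String.join(", ", moods);
}

// Parse the completion back into ids, ignoring anything that is not numeric.
List<Long> parsePlaylistIds(String completion) {
    return Arrays.stream(completion.split(","))
            .map(String::trim)
            .filter(s -> s.matches("\\d+"))
            .map(Long::parseLong)
            .collect(Collectors.toList());
}
```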
The integration of AI tools such as ChatGPT into our projects at Ensolvers has highlighted not only the incredible potential of artificial intelligence but also the importance of fine-grained configuration and contextual framing. As we navigated the complexities of working with private instances and fine-tuned our prompts, we discovered that success lies in both controlling the technology and clearly defining the objectives to fulfill. Through examples like message classification and playlist recommendations, we have harnessed the power of AI to better serve our customers.