In this lesson, we will explore the basics of building chat applications in .NET using language model completions and functions. We will also explore how to use Semantic Kernel and Microsoft Extensions AI (MEAI) to create chatbots, and how to use Semantic Kernel to create plugins: functionality the chatbot calls based on the user's input.
Text completions might be the most basic form of interaction with the language model in an AI application. A text completion is a single response generated by the model based on the input, or prompt, given to the model.

A text completion by itself is not a chat application; it's a one-and-done interaction. You might use text completions for tasks such as content summarization or sentiment analysis.
Let's see how you would use text completions using the Microsoft.Extensions.AI library in .NET.
🧑💻Sample code: Here is a working example of this application you can follow along with.
```csharp
// this example illustrates using a model hosted on GitHub Models
IChatClient client = new ChatCompletionsClient(
        endpoint: new Uri("https://models.inference.ai.azure.com"),
        new AzureKeyCredential(githubToken)) // githubToken is retrieved from the environment variables
    .AsChatClient("gpt-4o-mini");

// here we're building the prompt
StringBuilder prompt = new StringBuilder();
prompt.AppendLine("You will analyze the sentiment of the following product reviews. Each line is its own review. Output the sentiment of each review in a bulleted list and then provide a general sentiment of all reviews.");
prompt.AppendLine("I bought this product and it's amazing. I love it!");
prompt.AppendLine("This product is terrible. I hate it.");
prompt.AppendLine("I'm not sure about this product. It's okay.");
prompt.AppendLine("I found this product based on the other reviews. It worked for a bit, and then it didn't.");

// send the prompt to the model and wait for the text completion
var response = await client.GetResponseAsync(prompt.ToString());

// display the response
Console.WriteLine(response.Message);
```
🗒️Note: This example uses GitHub Models as the hosting service. If you want to use Ollama, check out this example (it instantiates a different `IChatClient`). If you want to use Azure AI Foundry, you can use the same code, but you will need to change the endpoint and the credentials.
🙋 Need help?: If you encounter any issues, open an issue in the repository.
Building a chat application is a bit more complex. There will be a conversation with the model, where the user sends prompts and the model responds. And like any conversation, you need to keep the context, or history, of the conversation so everything makes sense.
During a chat with the model the messages sent to the model can be of different types. Here are some examples:
- System: The system message guides the behavior of the model's responses. It serves as the initial instruction or prompt that sets the context, tone, and boundaries of the conversation. The end-user of that chat usually doesn't see this message, but it's very important in shaping the conversation.
- User: The user message is the input or prompt from the end-user. It can be a question, a statement, or a command. The model uses this message to generate a response.
- Assistant: The assistant message is the response generated by the model. These messages are based on the system and user messages and are generated by the model. The end-user sees these messages.
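The roles above map directly onto MEAI's `ChatRole` values. As a minimal sketch (the message text here is purely illustrative), a conversation history containing one message of each type might look like this:

```csharp
// a conversation history with one message of each role
List<ChatMessage> history =
[
    // shapes the model's behavior; the end-user typically never sees this
    new(ChatRole.System, "You are a helpful assistant that answers briefly."),
    // the end-user's input
    new(ChatRole.User, "What is the capital of France?"),
    // a previous model reply, stored so later turns keep their context
    new(ChatRole.Assistant, "The capital of France is Paris.")
];
```

Sending the whole list back to the model on each turn is what preserves the conversational context.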
During the chat with the model, you will need to keep track of the chat history. This is important because the model generates each response based on the system message plus all of the back-and-forth between the user and assistant messages. Each additional message adds more context that the model uses to generate the next response.
Let's take a look at how you would build a chat application using MEAI.
```csharp
// assume IChatClient is instantiated as before

List<ChatMessage> conversation =
[
    new(ChatRole.System, "You are a product review assistant. Your job is to help people write great product reviews. Keep asking questions on the person's experience with the product until you have enough information to write a review. Then write the review for them and ask if they are happy with it.")
];

Console.Write("Start typing a review (type 'q' to quit): ");

// loop to read messages from the console
while (true)
{
    string? message = Console.ReadLine();

    if (message is null || message.ToLower() == "q")
    {
        break;
    }

    // add the user's message to the history
    conversation.Add(new ChatMessage(ChatRole.User, message));

    // send the whole conversation so the model has the full context
    var response = await client.GetResponseAsync(conversation);

    // add the assistant's reply to the history for the next turn
    conversation.Add(response.Message);

    Console.WriteLine(response.Message.Text);
}
```
🗒️Note: This can also be done with Semantic Kernel. Check out the code here.
🙋 Need help?: If you encounter any issues, open an issue in the repository.
When building AI applications you are not limited to text-based interactions. It is possible to extend the functionality of the chatbot by calling pre-defined functions in your code based on user input. In other words, function calls serve as a bridge between the model and external systems.
🧑💻Sample code: Here is a working example of this application you can follow along with.
There are a couple of setup steps you need to take in order to call functions with MEAI.
- First, of course, define the function that you want the chatbot to be able to call. In this example we're going to get the weather forecast.

  ```csharp
  [Description("Get the weather")]
  static string GetTheWeather()
  {
      var temperature = Random.Shared.Next(5, 20);

      // Next's upper bound is exclusive, so (0, 2) gives a 50/50 split
      var conditions = Random.Shared.Next(0, 2) == 0 ? "sunny" : "rainy";

      return $"The weather is {temperature} degrees C and {conditions}.";
  }
  ```
- Next we're going to create a `ChatOptions` object that tells MEAI which functions are available to it.

  ```csharp
  var chatOptions = new ChatOptions
  {
      Tools = [AIFunctionFactory.Create(GetTheWeather)]
  };
  ```
- When we instantiate the `IChatClient` object, we'll want to specify that we'll be using function invocation.

  ```csharp
  IChatClient client = new ChatCompletionsClient(
          endpoint: new Uri("https://models.inference.ai.azure.com"),
          new AzureKeyCredential(githubToken)) // githubToken is retrieved from the environment variables
      .AsChatClient("gpt-4o-mini")
      .AsBuilder()
      .UseFunctionInvocation() // here we're saying that we could be invoking functions!
      .Build();
  ```
- Then, finally, when we interact with the model, we send the `ChatOptions` object that specifies the function the model can call if it needs the weather info.

  ```csharp
  var responseOne = await client.GetResponseAsync("What is today's date?", chatOptions); // won't call the function

  var responseTwo = await client.GetResponseAsync("Should I bring an umbrella with me today?", chatOptions); // will call the function
  ```
🙋 Need help?: If you encounter any issues, open an issue in the repository.
In this lesson, we learned how to use text completions, start and manage a chat conversation, and call functions in chat applications.
In the next lesson you'll see how to start chatting with data and build what's known as a Retrieval Augmented Generation (RAG) model chatbot - and work with vision and audio in an AI application!