If you want to use Azure AI Foundry models for your .NET AI apps in this course, follow the steps in this guide.
Don't want to use Azure OpenAI?
👉 To use GitHub Models, this is the guide for you. 👉 Here are the steps for Ollama.
To use Azure AI Foundry models, you need to create a hub and project in the Azure AI Foundry portal. Then you'll need to deploy a model. This section will show you how to do that.
- Go to the Azure AI Foundry Portal.
- Sign in with your Azure account.
- Select All hubs + projects from the left-hand menu and then click + New hub in the dropdown. (Note: You may have to click + New project first to see the + New hub option.)
- A new window will open. Fill in the details for your hub:
- Give your hub a name (e.g., "MyAIHub").
- Choose a region closest to you.
- Select the appropriate subscription and resource group.
- You can leave the rest of the settings as they are.
- Click Next.
- Review the details and click Create.
- Once your hub is created, the portal will open its details page. Click the Create Project button.
- Give your project a name (e.g., "GenAINET") or accept the default.
- Click Create.
🎉 Done! You’ve just created your first project in Azure AI Foundry.
Now, let’s deploy a gpt-4o-mini model to your project:
- In the Azure AI Foundry portal, navigate to your project (it should automatically open after creating it).
- Click Models and Endpoints in the left-hand menu and then click the Deploy Model button.
- Select Deploy base model from the dropdown.
- Search for gpt-4o-mini in the model catalog.
- Select the model and click the Confirm button.
- Specify a deployment name (e.g., "gpt-4o-mini"). You can leave the rest of the options as they are.
- Click Deploy and wait for the model to be provisioned.
- Once deployed, note the Model Name, Target URI, and API Key from the model details page.
🎉 Done! You’ve deployed your first Large Language Model in Azure AI Foundry.
📝 Note: The full Target URI may look similar to `https://< your hub name >.openai.azure.com/openai/deployments/gpt-4o-mini/chat/completions?api-version=2024-08-01-preview`. The endpoint we need is only the base portion: `https://< your hub name >.openai.azure.com/`.
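If you only copied the full Target URI, here's a minimal sketch, not part of the course sample, showing one way to derive the base endpoint from it with .NET's built-in Uri class. The hub name my-ai-hub below is a made-up placeholder.

```csharp
// Minimal sketch: derive the base endpoint from a full Azure OpenAI Target URI.
// "my-ai-hub" is a placeholder hub name - substitute the Target URI you noted above.
var targetUri = new Uri(
    "https://my-ai-hub.openai.azure.com/openai/deployments/gpt-4o-mini/chat/completions?api-version=2024-08-01-preview");

// GetLeftPart(UriPartial.Authority) keeps only the scheme and host,
// e.g. "https://my-ai-hub.openai.azure.com"
var baseEndpoint = targetUri.GetLeftPart(UriPartial.Authority);

Console.WriteLine(baseEndpoint);
```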
To keep it secure, let's add the API key you just noted to your repository's Codespaces secrets.

- Make sure you have forked this repository to your GitHub account.
- Go to the Settings tab of your forked repository, then expand Secrets and variables in the left-hand menu and select Codespaces.
- Click New repository secret.
- Name your secret AZURE_AI_SECRET (this exact name is what the sample code reads later in this lesson).
- Paste the API key you copied from the Azure AI Foundry portal into the Secret field, then click Add secret.
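Codespaces exposes repository secrets to the container as environment variables. Once your Codespace is running (you'll create it in the next step), a quick sanity check like the sketch below, which is not part of the course sample, can confirm the secret is visible to .NET; if you add the secret after the Codespace was created, restart the Codespace so it gets picked up.

```csharp
// Minimal sketch (not part of the course sample): confirm the Codespaces secret
// is exposed to the container as an environment variable.
// AZURE_AI_SECRET is the secret name used throughout this lesson.
var apiKey = Environment.GetEnvironmentVariable("AZURE_AI_SECRET");

Console.WriteLine(string.IsNullOrWhiteSpace(apiKey)
    ? "AZURE_AI_SECRET is not set - check your Codespaces secrets and restart the Codespace."
    : "AZURE_AI_SECRET is available.");
```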
Let's create a GitHub Codespace to develop with for the rest of this course.
- Open this repository's main page in a new window by right-clicking here and selecting Open in new window from the context menu
- Fork this repo into your GitHub account by clicking the Fork button in the top right corner of the page
- Click the Code dropdown button and then select the Codespaces tab
- Select the ... option (the three dots) and choose New with options...
- From the Dev container configuration dropdown, select one of the following options:
  - Option 1: C# (.NET): Use this option if you plan to use GitHub Models or Azure OpenAI. It has all the core .NET development tools needed for the rest of the course and a fast startup time.
  - Option 2: C# (.NET) - Ollama: Ollama allows you to run the demos without needing to connect to GitHub Models or Azure OpenAI. It includes all the core .NET development tools in addition to Ollama, but has a slower start-up time (five minutes on average). Follow this guide if you want to use Ollama.
You can leave the rest of the settings as they are. Click the Create codespace button to start the Codespace creation process.
Now let’s update the code to use the newly deployed model. First we'll need to add some NuGet packages to work with Azure OpenAI.
- Open the terminal and switch to the project directory:

  ```bash
  cd 02-SetupDevEnvironment/src/BasicChat-01MEAI/
  ```
- Run the following commands to add the required packages:

  ```bash
  dotnet add package Azure.AI.OpenAI --version 2.2.0-beta.2
  dotnet add package Microsoft.Extensions.AI.OpenAI --version 9.3.0-preview.1.25114.11
  ```

  More information about Azure.AI.OpenAI.
- Open `/workspaces/Generative-AI-for-beginners-dotnet/02-SetupDevEnvironment/src/BasicChat-01MEAI/Program.cs` and add the following using statements at the top of the file:

  ```csharp
  using System.ClientModel;
  using Azure.AI.OpenAI;
  using Microsoft.Extensions.AI;
  ```
- Create new variables to hold the deployment name, endpoint, and API key:

  ```csharp
  var deploymentName = "< deployment name >"; // e.g. "gpt-4o-mini"
  var endpoint = new Uri("< endpoint >"); // e.g. "https://< your hub name >.openai.azure.com/"
  var apiKey = new ApiKeyCredential(Environment.GetEnvironmentVariable("AZURE_AI_SECRET"));
  ```

  Make sure to replace `< deployment name >` and `< endpoint >` with the values you noted above.
- Replace the `IChatClient` creation with the following code (a sketch of how the complete file might look appears after these steps):

  ```csharp
  IChatClient client = new AzureOpenAIClient(endpoint, apiKey)
      .AsChatClient(deploymentName);
  ```
- Run the following command in the terminal:

  ```bash
  dotnet run
  ```
- You should see output similar to the following:

  ```text
  Artificial Intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to think and learn like humans. AI encompasses a variety of technologies and approaches that enable computers and systems to perform tasks that typically require human intelligence. These tasks include:

  1. **Learning**: The ability to improve performance based on experience, often through algorithms that analyze data.

  ...
  ```
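Putting the pieces together, here's a minimal sketch of how the completed Program.cs might look. It is not the exact course sample: the prompt "What is AI?" is an assumption based on the output above, my-ai-hub is a placeholder hub name, and the GetResponseAsync call reflects the Microsoft.Extensions.AI preview pinned earlier (older previews named it CompleteAsync), so adjust to match your actual sample file.

```csharp
using System.ClientModel;
using Azure.AI.OpenAI;
using Microsoft.Extensions.AI;

// Values noted from the Azure AI Foundry portal.
var deploymentName = "gpt-4o-mini"; // your deployment name
var endpoint = new Uri("https://my-ai-hub.openai.azure.com/"); // "my-ai-hub" is a placeholder hub name

// The API key comes from the Codespaces secret configured earlier in this lesson.
var apiKey = new ApiKeyCredential(Environment.GetEnvironmentVariable("AZURE_AI_SECRET")
    ?? throw new InvalidOperationException("AZURE_AI_SECRET is not set."));

// Wrap the Azure OpenAI client in the Microsoft.Extensions.AI abstraction.
IChatClient client = new AzureOpenAIClient(endpoint, apiKey)
    .AsChatClient(deploymentName);

// "What is AI?" is an assumed prompt; replace it with whatever the sample asks.
var response = await client.GetResponseAsync("What is AI?");

Console.WriteLine(response);
```

Reading the key from the AZURE_AI_SECRET environment variable keeps the credential out of source control, which is the whole point of the Codespaces secret you configured earlier.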
🙋 Need help? Something not working? Open an issue and we'll help you out.
In this lesson, you learned how to set up your development environment for the rest of the course. You created a GitHub Codespace and configured it to use Azure OpenAI. You also updated the sample code to use the newly deployed model in Azure AI Foundry.
- Azure AI Foundry Documentation
- Working with GitHub Codespaces
- How to Deploy Models in Azure AI Foundry
- Azure.AI.OpenAI NuGet Package
Next, we'll explore how to create your first AI application! 🚀