This guide will walk through the steps required to deploy GraphRAG in Azure.
The deployment process requires the following tools to be installed:
- Azure CLI >= v2.55.0
- awk - a standard Linux utility
- cut - a standard Linux utility
- sed - a standard Linux utility
- curl - command line data transfer tool
- Docker Desktop
- helm - k8s package manager
- jq >= v1.6
- kubectl - k8s command line tool
- kubelogin - client-go credential (exec) plugin implementing azure authentication
- yq >= v4.40.7 - yaml file parser
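Before starting, it can help to confirm all of these tools are on your `PATH`. The snippet below is a hypothetical helper (not part of the deployment script) that reports any missing tools:

```shell
# Report any of the required deployment tools that are missing from PATH
required_tools="az awk cut sed curl docker helm jq kubectl kubelogin yq"
missing=""
for tool in $required_tools; do
  command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
done
if [ -n "$missing" ]; then
  echo "Missing tools:$missing"
else
  echo "All required tools found"
fi
```

Note that this only checks for the presence of each tool, not its version; compare versions manually against the minimums listed above.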
TIP: If you open this repository as a devcontainer (i.e. VSCode Dev Containers or Codespaces), all required tools for deployment will already be available. Opening a devcontainer using VS Code requires Docker to be installed.
The setup/deployment process has been mostly automated with a shell script and Bicep files (infrastructure as code). Azure CLI will deploy all necessary Azure resources using these Bicep files. The deployment is configurable using values defined in infra/deploy.parameters.json. We have provided default values wherever possible, but users are still expected to modify some values.
You will need the following Azure Role Based Access Control (RBAC) permissions to deploy the GraphRAG solution accelerator. By default, Azure resources will be deployed with Azure Managed Identities, following security best practices. Due to this enhanced security configuration, higher-level permissions are required to deploy the necessary Azure resources:
| Permission | Scope |
| --- | --- |
| Contributor | Subscription |
| Role Based Access Control (RBAC) Administrator | Subscription |
| Owner | Resource Group |
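To confirm you hold these roles before deploying, one option (assuming an authenticated Azure CLI session) is to list your current role assignments. This is a sketch, not part of the deployment script, and it skips itself if `az` is unavailable or not logged in:

```shell
# List the signed-in user's role assignments (skipped when az is unavailable or not logged in)
if command -v az >/dev/null 2>&1 && az account show >/dev/null 2>&1; then
  assignee="$(az ad signed-in-user show --query id -o tsv)"
  role_check="$(az role assignment list --assignee "$assignee" --all \
    --query "[].{role:roleDefinitionName, scope:scope}" -o table)"
else
  role_check="Azure CLI not available or not logged in - skipping role check"
fi
echo "$role_check"
```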
The Azure subscription that you deploy this solution accelerator in requires several resource providers to be registered (if they aren't already). They include:
Microsoft.OperationsManagement
Microsoft.Compute
Microsoft.AlertsManagement
This can be accomplished via the Azure Portal or with the following Azure CLI commands:
```shell
# register providers
az provider register --namespace Microsoft.OperationsManagement
az provider register --namespace Microsoft.AlertsManagement
az provider register --namespace Microsoft.Compute

# verify providers were registered
az provider show --namespace Microsoft.OperationsManagement -o table
az provider show --namespace Microsoft.AlertsManagement -o table
az provider show --namespace Microsoft.Compute -o table
```
As a prerequisite to deployment, you will either need access to an already deployed Azure OpenAI (AOAI) resource or have available quota. If an existing AOAI resource is not used, the deployment code in this accelerator will deploy an AOAI resource with some default model choices. Documentation on how to deploy an AOAI service can be found here.
Take note of the model deployment name and model name. Note that the AOAI instance must be in the same subscription that you plan to deploy this solution accelerator in.
As a starting point, we recommend the following quota thresholds for running this solution accelerator.
| Model Name | TPM Threshold |
| --- | --- |
| gpt-4 turbo | 80K |
| text-embedding-ada-002 | 300K |
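If you are unsure of your available quota in a region, one way to inspect it (assuming an authenticated Azure CLI session) is with `az cognitiveservices usage list`. The region value and the JMESPath filter below are illustrative assumptions, and the check skips itself if `az` is unavailable or not logged in:

```shell
# Inspect Azure OpenAI quota usage in a target region (skipped without an az login)
region="eastus2"  # placeholder: replace with your target region
if command -v az >/dev/null 2>&1 && az account show >/dev/null 2>&1; then
  quota_info="$(az cognitiveservices usage list -l "$region" \
    --query "[?contains(name.value, 'OpenAI')].{name:name.value, current:currentValue, limit:limit}" \
    -o table)"
else
  quota_info="Azure CLI not available or not logged in - skipping quota check"
fi
echo "$quota_info"
```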
Login with Azure CLI and set the appropriate Azure subscription.
```shell
# login to Azure - may need to use the "--use-device-code" flag if using a remote host/virtual machine
az login

# check what subscription you are logged into
az account show

# set appropriate subscription
az account set --subscription "<subscription_name> or <subscription id>"
```
A resource group can be created via the Azure Portal or Azure CLI.
```shell
az group create --name <my_resource_group> --location <my_location>
```
In the deploy.parameters.json file, provide values for the following required variables, if not already filled out.
| Variable | Expected/Default Value | Required | Description |
| --- | --- | --- | --- |
| `LOCATION` | `<my_location>` | Yes | The Azure cloud region to deploy GraphRAG resources to (can be different than the location of your AOAI instance). Please use the compressed form of a cloud region name (i.e. `eastus2`). |
| `RESOURCE_GROUP` | `<my_resource_group>` | Yes | The resource group that GraphRAG will be deployed in. Will get created automatically if the resource group does not exist. |
| `GRAPHRAG_API_BASE` | `https://<my_openai_name>.openai.azure.com` | No | An existing Azure OpenAI service endpoint. |
| `GRAPHRAG_API_VERSION` | `2023-03-15-preview` | No | OpenAI API version. |
| `GRAPHRAG_LLM_MODEL` | `gpt-4` | No | Name of the Azure OpenAI LLM model to use (or deploy). |
| `GRAPHRAG_LLM_MODEL_VERSION` | `turbo-2024-04-09` | No | Model version of the LLM model to use (or deploy). Only required if deploying a new AOAI instance (i.e. `GRAPHRAG_API_BASE` is left undefined). |
| `GRAPHRAG_LLM_DEPLOYMENT_NAME` | `gpt-4` | No | Deployment name of the LLM model to use (or deploy). |
| `GRAPHRAG_LLM_MODEL_QUOTA` | `80` | No | TPM quota of the LLM model in units of 1,000 (i.e. 10 = 10,000 TPM). Only required if deploying a new AOAI instance (i.e. `GRAPHRAG_API_BASE` is left undefined). |
| `GRAPHRAG_EMBEDDING_MODEL` | `text-embedding-ada-002` | No | Name of the Azure OpenAI embedding model. |
| `GRAPHRAG_EMBEDDING_MODEL_VERSION` | `2` | No | Model version of the embedding model to use (or deploy). Only required if deploying a new AOAI instance (i.e. `GRAPHRAG_API_BASE` is left undefined). |
| `GRAPHRAG_EMBEDDING_DEPLOYMENT_NAME` | `text-embedding-ada-002` | No | Deployment name of the embedding model to use (or deploy). |
| `GRAPHRAG_EMBEDDING_MODEL_QUOTA` | `300` | No | TPM quota of the embedding model in units of 1,000 (i.e. 10 = 10,000 TPM). Only required if deploying a new AOAI instance (i.e. `GRAPHRAG_API_BASE` is left undefined). |
| `GRAPHRAG_IMAGE` | `graphrag:backend` | No | The name and tag of the graphrag docker image in the container registry. Will default to `graphrag:backend` and be hosted at `<my_container_registry_name>.azurecr.io/graphrag:backend`. |
| `CONTAINER_REGISTRY_LOGIN_SERVER` | `<container_registry_name>.azurecr.io` | No | Endpoint of an existing Azure Container Registry where the `GRAPHRAG_IMAGE` docker image is hosted. If not provided, a unique name will be generated (recommended). |
| `COGNITIVE_SERVICES_AUDIENCE` | `https://cognitiveservices.azure.com/.default` | No | Endpoint for cognitive services identity authorization. Should be explicitly set for deployments in other Azure clouds. |
| `APIM_NAME` | `<auto_generated_unique_name>` | No | Hostname of the graphrag API. Must be a globally unique name. The API will be available at `https://<APIM_NAME>.azure-api.net`. |
| `APIM_TIER` | `Developer` | No | The APIM tier to use. Can be either `Developer` or `StandardV2`. `StandardV2` costs more but will deploy faster. |
| `RESOURCE_BASE_NAME` | | No | Suffix to apply to all Azure resource names. If not provided, a unique suffix will be generated. |
| `AISEARCH_ENDPOINT_SUFFIX` | `search.windows.net` | No | Suffix to apply to the AI Search endpoint. Should be overridden for deployments in other Azure clouds. |
| `AISEARCH_AUDIENCE` | `https://search.azure.com/` | No | AAD audience for AI Search. Should be overridden for deployments in other Azure clouds. |
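As an illustration, a minimal parameters file that sets only the required values and points at an existing AOAI deployment might look like the following. All values here are placeholders, and the authoritative schema is the infra/deploy.parameters.json file shipped in the repository:

```json
{
  "LOCATION": "eastus2",
  "RESOURCE_GROUP": "my-graphrag-rg",
  "GRAPHRAG_API_BASE": "https://my-openai.openai.azure.com",
  "GRAPHRAG_LLM_DEPLOYMENT_NAME": "gpt-4",
  "GRAPHRAG_EMBEDDING_DEPLOYMENT_NAME": "text-embedding-ada-002"
}
```

If `GRAPHRAG_API_BASE` is omitted, the deployment will instead create a new AOAI instance, in which case the model version and quota variables from the table above become relevant.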
```shell
cd infra
bash deploy.sh -h # view help menu for additional options
bash deploy.sh -p deploy.parameters.json
```
When deploying for the first time, it may take ~40-50 minutes to deploy all resources. If a deployment error occurs (e.g. not enough quota), subsequent runs of this command against the same resource group will be faster.
TIP: The choice of `APIM_TIER` is a major contributing factor to the overall deployment time.
Once the deployment has finished, check out our Quickstart notebook for a demonstration of how to use the GraphRAG API. To access the API documentation, visit `<APIM_gateway_url>/manpage/docs` in your browser. You can find the `APIM_gateway_url` by looking in the settings of the deployed APIM instance.
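Alternatively, the gateway URL can be retrieved with the Azure CLI. The resource names below are placeholders, and the lookup skips itself when `az` is unavailable or not logged in:

```shell
# Look up the APIM gateway URL from the deployed APIM instance (skipped without an az login)
resource_group="my-graphrag-rg"    # placeholder: your resource group
apim_name="my-apim-instance"       # placeholder: your APIM instance name
if command -v az >/dev/null 2>&1 && az account show >/dev/null 2>&1; then
  gateway_url="$(az apim show -g "$resource_group" -n "$apim_name" --query gatewayUrl -o tsv)"
else
  gateway_url="<APIM_gateway_url>"  # Azure CLI not available - fill in manually
fi
echo "API docs: ${gateway_url}/manpage/docs"
```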