This project is an Azure-based Retrieval-Augmented Generation (RAG) web application that answers infrastructure questions about servers, incidents, and ownership. It combines Azure OpenAI Service (GPT + Embeddings), Azure AI Search (multiple indexes), Azure Blob Storage, Azure SQL (Azure Arc inventory), and the Log Analytics API. The app is deployed to Azure App Service using the Azure Developer CLI (azd) and Bicep IaC.
We strongly advise against running this demo code in production without implementing or enabling additional well-architected features (e.g., security, resiliency). See the Azure Well-Architected Framework guidance for tips, and consult the Azure OpenAI Landing Zone reference architecture for additional best practices.
- Multiple Azure AI Search indexes (inventories, incidents) unified at query time (see the sketch after this list)
- Parameterized system prompt engineered for infra Q&A and typo-tolerant normalization
- One-command infra provisioning via `azd up` (Azure OpenAI, Search, Storage, App Service, Log Analytics)
- Managed Identity-based auth (no API keys in code)
- Scripted data ingest for Blob/Search and Arc data into Azure SQL
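A minimal sketch of that query-time unification, assuming the inventories/incidents index names from this repo; the merge-by-score logic is illustrative, not the app's exact implementation:

```python
# Minimal sketch: query two Azure AI Search indexes and merge hits by score.
# The endpoint placeholder and the merge strategy are assumptions.
from azure.identity import DefaultAzureCredential
from azure.search.documents import SearchClient

endpoint = "https://<search-service>.search.windows.net"
credential = DefaultAzureCredential()

def search_all(query: str, index_names=("inventories", "incidents"), top: int = 5):
    hits = []
    for name in index_names:
        client = SearchClient(endpoint, name, credential)
        for doc in client.search(search_text=query, top=top):
            hits.append((doc["@search.score"], name, doc))
    # Unify at query time: rank the combined hits by relevance score.
    hits.sort(key=lambda h: h[0], reverse=True)
    return hits[:top]
```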
| Component | Purpose |
|---|---|
| App Service (Linux, Python) | Hosts FastAPI / Uvicorn app |
| Azure OpenAI Service (GPT + Embeddings) | Text generation & embedding vectorization |
| Azure AI Search | Hybrid/semantic retrieval across multiple indexes |
| Storage Account (Blob) | Source documents (inventories / incidents / Arc) |
| Azure SQL Database | Azure Arc VM/NIC/installed software ingestion for SQL-based Q&A |
| Log Analytics Workspace | Centralized diagnostics & logs |
| Managed Identities | Secure inter-service auth (no secrets) |
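The keyless pattern behind the Managed Identities row typically looks like the following (a minimal sketch; endpoint, deployment name, and API version are placeholders):

```python
# Minimal sketch: keyless auth to Azure OpenAI via azure-identity.
# Works with Managed Identity on App Service and az login locally.
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI

token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)
client = AzureOpenAI(
    azure_endpoint="https://<aoai-resource>.openai.azure.com",
    azure_ad_token_provider=token_provider,  # no API key anywhere
    api_version="2024-06-01",
)
response = client.chat.completions.create(
    model="<gpt-deployment-name>",
    messages=[{"role": "user", "content": "Which servers had incidents last week?"}],
)
print(response.choices[0].message.content)
```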
```
.
├── app/            # FastAPI application (entry: app/main.py)
├── infra/          # Bicep IaC (main.bicep provisions Azure OpenAI, Search, SQL, Storage, App Service, Log Analytics)
├── scripts/        # Data & index bootstrap, Arc→SQL ingest, env setup
├── sample-data/    # Sample data sources
│   ├── incidents/  # Incident markdown sample data
│   └── arc/        # Azure Arc sample data
├── docs/           # Additional guides (e.g., GitHub Actions OIDC setup)
├── requirements.txt
├── azure.yaml      # azd project definition
└── README.md
```
> [!IMPORTANT]
> This project currently supports Windows only. macOS and Linux are not supported at this time.
- An Azure subscription with Owner role, or Contributor + User Access Administrator roles (needed for resource provisioning and Managed Identity role assignments)
- Python 3.11+ (App Service uses 3.12; local 3.11/3.12 are fine)
- Azure CLI (`az`) and a signed-in subscription
- Latest Azure Developer CLI (`azd`)
- Git
- ODBC Driver 18 for SQL Server
- Microsoft Visual C++ Redistributable
Dev Container (optional):
- `.devcontainer/devcontainer.json` includes the Azure CLI and `azd` features
```powershell
# 1. Clone
git clone https://github.com/Azure-Samples/infra-support-copilot && cd infra-support-copilot

# 2. Python venv
python -m venv .venv
./.venv/Scripts/Activate.ps1

# 3. Install deps
pip install -r requirements.txt

# 4. (First time) Provision Azure infra (creates OpenAI/Search/Storage/SQL/App Service)
az login
azd auth login
azd up   # or: azd provision (infra) + azd deploy (code)

# 5. Run locally
uvicorn app.main:app --reload
```

Local URL: http://127.0.0.1:8000
Important: Ensure the target Azure resource group exists before running `azd up` or `azd provision`. `azd` validates the deployment against the configured resource group and will fail if it doesn't exist. Create the resource group with the Azure CLI before provisioning:
```bash
az group create -n <resource-group-name> -l <location>
```

Alternatively, set `AZURE_RESOURCE_GROUP` (as a local env value or a CI environment secret) to the name of an existing resource group. If you prefer `azd` to create resources automatically, ensure your `azure.yaml`/environment configuration does not point to a pre-existing group.
Initial full provision + deploy:

```bash
azd auth login
azd up
```

Subsequent code-only deployments:

```bash
azd deploy
```

Infra changes only:

```bash
azd provision
```

Multiple environments:

```bash
azd env new stg
azd up
```

Inspect environment values:

```bash
azd env get-values
```

Logs (App Service via Log Analytics): use the Portal or `az monitor log-analytics query` (the workspace is defined in Bicep).
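If you prefer querying from Python instead of the CLI, here is a minimal sketch using the azure-monitor-query package; the workspace ID placeholder and the table/filter are assumptions about this deployment's diagnostic settings:

```python
# Minimal sketch: pull recent App Service logs from the Log Analytics workspace.
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())
result = client.query_workspace(
    workspace_id="<workspace-id>",
    query="AppServiceConsoleLogs | take 20",
    timespan=timedelta(hours=1),
)
for table in result.tables:
    for row in table.rows:
        print(row)
```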
> [!NOTE]
> It can take up to about 30 minutes for data to appear in Log Analytics.
This repository includes a GitHub Actions workflow that can deploy the same azd project to multiple Azure subscriptions in parallel by using a matrix of GitHub Environments. To use it safely and predictably, create one GitHub Environment per target subscription and set subscription-scoped secrets/variables in those environments.
Recommended setup:
- In the repo settings, go to Settings → Environments and create an environment for each target subscription (for example: `rukasakurai-env`, `<github-username>-env`).
- For each environment, set:
  - Secrets: `AZURE_SUBSCRIPTION_ID`, `AZURE_TENANT_ID`
  - Variables (non-confidential values): `AZURE_CLIENT_ID`, `AZURE_PRINCIPAL_ID`, `AZURE_PRINCIPAL_TYPE` (value: `ServicePrincipal`), `AZURE_RESOURCE_GROUP`, `AZURE_ENV_NAME`, `AZURE_LOCATION`
- Protect the environment if you require approvals before its secrets and variables are used. See docs/oidc-setup-github-actions.md for CLI-first setup steps.
- Assign the `Directory Readers` role to the SQL server's principal ID.
Notes:
- Environment-scoped secrets are available only to workflow runs that specify `environment: <env>`; the CI workflow is configured to use `environment: ${{ matrix.env }}`, so each matrix job automatically uses the matching environment's secrets.
- To change which environments the workflow deploys to, edit `.github/workflows/cicd.yml` and update the `matrix.env` array.
- The workflow runs with `strategy.fail-fast: false`, so deployments to different subscriptions run independently; failures in one environment don't cancel others.
Example: the workflow currently contains a simple matrix:
```yaml
strategy:
  fail-fast: false
  matrix:
    env: [rukasakurai-env, <github-username>-env]
environment: ${{ matrix.env }}
```

This makes it easy to add more environments (rows) for handover, staging, or multi-tenant deployments. Ensure each GitHub Environment has the correct secrets for its target subscription before running the workflow.
Sample data is stored under the /sample-data directory. /scripts contains the scripts that insert this sample data; they are executed automatically by `azd up` / `azd provision`. The following are the steps for running the scripts manually.
Blob upload + Search index creation:
```bash
python scripts/upload_data_to_blob_storage.py
python scripts/create_index.py
```

Details:
- `scripts/upload_data_to_blob_storage.py` uploads:
  - inventories: sample-data/Sample_Server_Inventories.json
  - incidents: sample-data/incidents/*.md (excludes inc_format.md)
- `scripts/create_index.py` creates/updates Azure AI Search indexes (idempotent; see the sketch below).
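A minimal sketch of that idempotent create-or-update pattern with azure-search-documents; the field schema here is illustrative, not the repo's actual index definition:

```python
# Minimal sketch: create-or-update is safe to re-run (idempotent).
from azure.identity import DefaultAzureCredential
from azure.search.documents.indexes import SearchIndexClient
from azure.search.documents.indexes.models import (
    SearchableField, SearchFieldDataType, SearchIndex, SimpleField,
)

client = SearchIndexClient("https://<search-service>.search.windows.net",
                           DefaultAzureCredential())
index = SearchIndex(
    name="incidents",
    fields=[
        SimpleField(name="id", type=SearchFieldDataType.String, key=True),
        SearchableField(name="content", type=SearchFieldDataType.String),
    ],
)
client.create_or_update_index(index)  # updates in place if the index exists
```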
Ingest Azure Arc inventory into Azure SQL for SQL-based queries:
```bash
python scripts/upload_arc_data_to_azure_sql.py
```

Tables created (if missing):
- dbo.virtual_machines (with indexes)
- dbo.network_interfaces (with indexes)
- dbo.installed_software (with indexes)
The app can generate safe, read-only SQL for these tables via `app.services.sql_query_service.SQLQueryService`.
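The actual guardrails live in `SQLQueryService`; the following is a minimal sketch of the read-only idea only, with an allow-list and checks that are assumptions rather than the service's exact rules:

```python
# Minimal sketch: accept only single SELECT statements against the Arc tables.
# The allow-list and checks are illustrative, not SQLQueryService's real logic.
import re

ALLOWED_TABLES = {"virtual_machines", "network_interfaces", "installed_software"}
FORBIDDEN = re.compile(r"\b(insert|update|delete|drop|alter|truncate|merge|exec)\b", re.I)

def is_safe_select(sql: str) -> bool:
    stripped = sql.strip().rstrip(";")
    if ";" in stripped or FORBIDDEN.search(stripped):
        return False            # multiple statements or a write verb
    if not stripped.lower().startswith("select"):
        return False
    tables = set(t.lower() for t in re.findall(r"\bdbo\.(\w+)", stripped, re.I))
    return bool(tables) and tables <= ALLOWED_TABLES
```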
All-in-one helper:
```powershell
# Exports azd env → .env, uploads blob data, creates indexes, uploads SQL data
pwsh -NoProfile -ExecutionPolicy Bypass -File .\scripts\set_up_environment.ps1 -ForceSqlcmd
```

Edit `systemPrompt` in infra/main.bicep, then:
```bash
azd provision
azd deploy   # optional
```

Or override at runtime (Portal → App Service → Configuration → Application settings → `SYSTEM_PROMPT`) and restart.
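The runtime override works because App Service exposes application settings to the app as environment variables. A minimal sketch of the pattern (the fallback default below is hypothetical, not the repo's actual prompt):

```python
import os

# Hypothetical fallback; the real default comes from systemPrompt in infra/main.bicep.
DEFAULT_SYSTEM_PROMPT = "You are an infrastructure support assistant."
system_prompt = os.environ.get("SYSTEM_PROMPT", DEFAULT_SYSTEM_PROMPT)
```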
| Issue | Action |
|---|---|
| Auth errors to OpenAI | Verify role assignments (propagation can take a few minutes) and restart |
| Missing env var | Confirm App Service Configuration or redeploy |
| Index not found | Re-run `scripts/create_index.py` and verify index names |
| Slow first response | Cold start/model warm-up; send a warm-up query post-deploy |
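For the cold-start row, a warm-up can be as simple as one scripted request after deployment; the URL is a placeholder, and hitting the root path is an assumption (adjust to the app's actual route):

```python
import requests

BASE_URL = "https://<appServiceName>.azurewebsites.net"

# Forces the worker to start and load its clients before real users arrive.
resp = requests.get(BASE_URL, timeout=120)
print(resp.status_code)
```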
Tail app logs:
```bash
az webapp log tail -n <appServiceName> -g <resourceGroup>
```

Remove all provisioned resources:

```bash
azd down
```

Or delete the resource group in the Azure Portal.


