Intro
In this part of the series, I want to ensure that all permissions are given to the correct groups. I could provide access to individual users, but I try to stick to groups as they are easier to manage across an enterprise. I have identified the following permissions I need to grant to user groups.
- Azure AD group to add to the AVD application groups
- Azure AD group to add to the “Virtual Machine User Login” RBAC role for the session hosts
- Azure AD group to add to the “Storage File Data SMB Share Contributor” role for the profile storage account
There are a few steps to get the storage account configured, and I will go through this process in depth.
Documentation used for writing this post can be found here:
https://docs.microsoft.com/en-us/azure/virtual-desktop/deploy-azure-ad-joined-vm
https://docs.microsoft.com/en-us/azure/virtual-desktop/create-profile-container-azure-ad
https://registry.terraform.io/providers/hashicorp/azuread/latest/docs/resources/group
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/role_assignment
Azure AD Group
First, I will look up an Azure AD group that I will use for all permission assignments. The group is called “ACC_AVD_Users.” The group is synced from my on-premises AD to Azure AD; this hybrid identity is a requirement for the Azure AD Kerberos-based FSLogix setup later in this post.
I have added the code below to the main.tf file under the “rg-avd-cloudninja-001” folder.
data "azuread_client_config" "AzureAD" {}
data "azuread_group" "AVDGroup" {
display_name = "ACC_AVD_Users"
}
The first line looks up the Azure AD client configuration; the block below it looks up my group of AVD users.
Add group to AVD Application groups
I can now use the group to provide access to the AVD environment. I have two AVD application groups, one for the desktop and one for remote applications. I will add the group to both of these application groups since I want my users to be able to start a single application or log on to the full desktop. There is no dedicated Terraform resource for adding a group to an application group, but I can use the azurerm_role_assignment resource with the “Desktop Virtualization User” role instead.
Below is the code I have added to the main.tf file under the “rg-avd-cloudninja-001” folder.
resource "azurerm_role_assignment" "AVDGroupDesktopAssignment" {
scope = azurerm_virtual_desktop_application_group.desktopapp.id
role_definition_name = "Desktop Virtualization User"
principal_id = data.azuread_group.AVDGroup.object_id
}
resource "azurerm_role_assignment" "AVDGroupRemoteAppAssignment" {
scope = azurerm_virtual_desktop_application_group.remoteapp.id
role_definition_name = "Desktop Virtualization User"
principal_id = data.azuread_group.AVDGroup.object_id
}
I can verify the code above using the Azure portal. The result is shown in the picture below.
Assign RBAC permissions for the session hosts
I am using Azure AD joined session hosts; for this to work, I need to add my group to the “Virtual Machine User Login” role or the “Virtual Machine Administrator Login” role. I will use the first role since I don’t want the users to have administrative rights on the session hosts. I am using the resource group as the scope for this permission; this ensures that any session host added in the future inherits the permission from the resource group. If you want to narrow the scope to each session host, that is possible as well; a quick sketch of a per-VM assignment follows below.
Below is the code I have added to the main.tf file under the “rg-avd-cloudninja-001” folder.
resource "azurerm_role_assignment" "RBACAssignment" {
scope = azurerm_resource_group.resourcegroup.id
role_definition_name = "Virtual Machine User Login"
principal_id = data.azuread_group.AVDGroup.object_id
}
I can verify the assignment under the IAM section in the Azure portal.
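If you prefer the per-session-host scope, the only Terraform change is pointing the scope argument at the individual virtual machine instead of the resource group. Purely as an illustration outside Terraform, a hypothetical one-off assignment with Az PowerShell could look roughly like the sketch below; the subscription ID and session host name are placeholders.
# Hypothetical sketch: assign "Virtual Machine User Login" on a single session host
# instead of the whole resource group. Subscription ID and VM name are placeholders.
$group = Get-AzADGroup -DisplayName "ACC_AVD_Users"

New-AzRoleAssignment -ObjectId $group.Id `
    -RoleDefinitionName "Virtual Machine User Login" `
    -Scope "/subscriptions/<subscription-id>/resourceGroups/rg-avd-cloudninja-001/providers/Microsoft.Compute/virtualMachines/<session-host-name>"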
Storage account configuration
To allow Azure AD users to access FSLogix profiles, I will join the storage account to Azure AD. In the last part of this blog series, I created the storage account, but I have decided to move it into a separate resource group. The reason for the move is that I don’t want the storage account to be deployed with a GitHub action, since joining the storage account to Azure AD can’t be done with Terraform. I will use Terraform to create the account and PowerShell to join it to Azure AD and configure the remaining settings.
I have removed the below code from main.tf under the “rg-avd-cloudninja-001” folder.
resource "azurerm_storage_account" "FSLogixStorageAccount" {
name = var.FSLogixStorageAccount
location = azurerm_resource_group.resourcegroup.location
resource_group_name = azurerm_resource_group.resourcegroup.name
account_tier = "Premium"
account_replication_type = "LRS"
account_kind = "StorageV2"
}
I have added the following code to main.tf under the “rg-avd-cloudninja-storage-001” folder.
resource "azurerm_resource_group" "resourcegroup" {
name = var.ResourceGroup
location = var.Location
}
resource "azurerm_storage_account" "FSLogixStorageAccount" {
name = var.FSLogixStorageAccount
location = azurerm_resource_group.resourcegroup.location
resource_group_name = azurerm_resource_group.resourcegroup.name
account_tier = "Premium"
account_replication_type = "LRS"
account_kind = "FileStorage"
}
As you might have noticed, I also changed the account_kind property from StorageV2 to FileStorage, since premium file shares require the FileStorage account kind.
With the storage account created, I want to create a service endpoint and allow administration from my public IP.
I have added the following code to main.tf under the “rg-avd-cloudninja-storage-001” folder.
data "azurerm_virtual_network" "SharedServicesvNet" {
name = "vnet-sharedservices-001"
resource_group_name = "rg-sharedservices-network-001"
}
data "azurerm_subnet" "SharedServicesSubnets" {
name = "snet-sharedservices-adds-001"
virtual_network_name = data.azurerm_virtual_network.SharedServicesvNet.name
resource_group_name = data.azurerm_virtual_network.SharedServicesvNet.resource_group_name
}
data "azurerm_virtual_network" "AVDvNet" {
name = "vnet-avd-001"
resource_group_name = "rg-avd-network-001"
}
data "azurerm_subnet" "AVDSubnets" {
name = "snet-avd-hostpool-001"
virtual_network_name = data.azurerm_virtual_network.AVDvNet.name
resource_group_name = data.azurerm_virtual_network.AVDvNet.resource_group_name
}
data "azurerm_key_vault" "kv-cloudninja-vpn-002" {
name = "kv-cloudninja-vpn-002"
resource_group_name = "rg-keyvault-001"
}
data "azurerm_key_vault_secret" "PublicIP" {
name = "PublicIP"
key_vault_id = data.azurerm_key_vault.kv-cloudninja-vpn-002.id
}
resource "azurerm_storage_account_network_rules" "NetworkRules" {
storage_account_id = azurerm_storage_account.FSLogixStorageAccount.id
default_action = "Deny"
virtual_network_subnet_ids = [data.azurerm_subnet.SharedServicesSubnets.id,data.azurerm_subnet.AVDSubnets.id]
bypass = ["AzureServices"]
ip_rules = [ data.azurerm_key_vault_secret.PublicIP.value ]
}
The code above performs a lookup for the shared services vNet, the AVD vNet, the subnet for my domain controller, the subnet for the host pool, and my public IP, which is stored in a key vault. I am using these properties to configure the network settings for the storage account, so all traffic not coming from the domain controller subnet, host pool subnet, or my public IP is denied.
Now I can create my file share. The minimum quota on the file share is 100 GB on a premium storage account.
I have added the following code to main.tf under the “rg-avd-cloudninja-storage-001” folder.
resource "azurerm_storage_share" "AVDProfileShare" {
name = "avdprofiles"
storage_account_name = azurerm_storage_account.FSLogixStorageAccount.name
quota = 100
}
The last part of the Terraform code for this storage account is to add RBAC permissions allowing the AVD users to access the file share.
I have added the following code to main.tf under the “rg-avd-cloudninja-storage-001” folder.
data "azuread_client_config" "AzureAD" {}
data "azuread_group" "AVDGroup" {
display_name = "ACC_AVD_Users"
}
resource "azurerm_role_assignment" "RBACStorageAccount" {
scope = azurerm_storage_account.FSLogixStorageAccount.id
role_definition_name = "Storage File Data SMB Share Contributor"
principal_id = data.azuread_group.AVDGroup.object_id
}
Since I will not use a GitHub Action for this deployment, I will execute the Terraform code from my PC. I will do this the same way the action does it. First, I will init Terraform, then create a plan, and lastly apply the Terraform plan to the environment.
Init.
terraform init
Plan.
terraform plan -out storageaccount.tfplan
Apply.
terraform apply "storageaccount.tfplan"
Now I can see that my storage account is created, and I have a share with a 100 GB quota.
I now need to join the storage account to Azure AD, and there are a few steps to do this. First, I want to reference the Microsoft documentation I used to create my scripts. I basically copied the commands from the docs site, using parameters at the top of the script instead of variables. I have also combined the individual commands from the docs into scripts instead of running them one by one.
Microsoft docs: https://docs.microsoft.com/en-us/azure/virtual-desktop/create-profile-container-azure-ad
Configure-AzureADonStorageAccount.ps1
param (
    $tenantId,
    $subscriptionId,
    $resourceGroupName,
    $storageAccountName
)
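# Enable Azure AD Kerberos authentication (AADKERB) on the storage account through the Azure management REST API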
$Uri = ('https://management.azure.com/subscriptions/{0}/resourceGroups/{1}/providers/Microsoft.Storage/storageAccounts/{2}?api-version=2021-04-01' -f $subscriptionId, $resourceGroupName, $storageAccountName);
$json = @{properties=@{azureFilesIdentityBasedAuthentication=@{directoryServiceOptions="AADKERB"}}};
$json = $json | ConvertTo-Json -Depth 99
$token = $(Get-AzAccessToken).Token
$headers = @{ Authorization="Bearer $token" }
try {
    Invoke-RestMethod -Uri $Uri -ContentType 'application/json' -Method PATCH -Headers $Headers -Body $json;
} catch {
    Write-Host $_.Exception.ToString()
    Write-Error -Message "Caught exception setting Storage Account directoryServiceOptions=AADKERB: $_" -ErrorAction Stop
}
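# Create the kerb1 key on the storage account and derive the Azure AD service principal password from it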
New-AzStorageAccountKey -ResourceGroupName $resourceGroupName -Name $storageAccountName -KeyName kerb1 -ErrorAction Stop
$kerbKey1 = Get-AzStorageAccountKey -ResourceGroupName $resourceGroupName -Name $storageAccountName -ListKerbKey | Where-Object { $_.KeyName -like "kerb1" }
$aadPasswordBuffer = [System.Linq.Enumerable]::Take([System.Convert]::FromBase64String($kerbKey1.Value), 32);
$password = "kk:" + [System.Convert]::ToBase64String($aadPasswordBuffer);
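# Look up the tenant's default domain and register an application and service principal with SPNs for the storage account's file endpoint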
$azureAdTenantDetail = Get-AzureADTenantDetail;
$azureAdPrimaryDomain = ($azureAdTenantDetail.VerifiedDomains | Where-Object {$_._Default -eq $true}).Name
$servicePrincipalNames = New-Object string[] 3
$servicePrincipalNames[0] = 'HTTP/{0}.file.core.windows.net' -f $storageAccountName
$servicePrincipalNames[1] = 'CIFS/{0}.file.core.windows.net' -f $storageAccountName
$servicePrincipalNames[2] = 'HOST/{0}.file.core.windows.net' -f $storageAccountName
$application = New-AzureADApplication -DisplayName $storageAccountName -IdentifierUris $servicePrincipalNames -GroupMembershipClaims "All";
$servicePrincipal = New-AzureADServicePrincipal -AccountEnabled $true -AppId $application.AppId -ServicePrincipalType "Application";
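# Set the derived password on the service principal through the Azure AD Graph API; the placeholders in the JSON below are replaced with the password and a six-month validity window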
$Token = ([Microsoft.Open.Azure.AD.CommonLibrary.AzureSession]::AccessTokens['AccessToken']).AccessToken
$Uri = ('https://graph.windows.net/{0}/{1}/{2}?api-version=1.6' -f $azureAdPrimaryDomain, 'servicePrincipals', $servicePrincipal.ObjectId)
$json = @'
{
  "passwordCredentials": [
    {
      "customKeyIdentifier": null,
      "endDate": "<STORAGEACCOUNTENDDATE>",
      "value": "<STORAGEACCOUNTPASSWORD>",
      "startDate": "<STORAGEACCOUNTSTARTDATE>"
    }
  ]
}
'@
$now = [DateTime]::UtcNow
$json = $json -replace "<STORAGEACCOUNTSTARTDATE>", $now.AddHours(-12).ToString("s")
$json = $json -replace "<STORAGEACCOUNTENDDATE>", $now.AddMonths(6).ToString("s")
$json = $json -replace "<STORAGEACCOUNTPASSWORD>", $password
$Headers = @{'authorization' = "Bearer $($Token)"}
try {
    Invoke-RestMethod -Uri $Uri -ContentType 'application/json' -Method Patch -Headers $Headers -Body $json
    Write-Host "Success: Password is set for $storageAccountName"
} catch {
    Write-Host $_.Exception.ToString()
    Write-Host "StatusCode: " $_.Exception.Response.StatusCode.value__
    Write-Host "StatusDescription: " $_.Exception.Response.StatusDescription
}
I will run the script with parameters as shown below.
Connect-AzAccount
Connect-AzureAD
.\Configure-AzureADonStorageAccount.ps1 -tenantId "c8000000-0000-0000-0000-3d9100000000" -subscriptionId "da000000-0000-0000-0000-000000000000" -resourceGroupName "rg-avd-cloudninja-storage-001" -storageAccountName "cloudninjafsl11072022"
The script above creates a new app registration in my Azure AD. I need to add API permissions to this new app registration. I will do this in the portal this time, but I also intend to create a script to perform this action.
First, I will go to the Azure portal and then to Azure AD and App registrations. Here I will click on the new app registration, go to “API permissions,” and add the delegated permissions “openid,” “profile,” and “User.Read.”
I will give admin consent by clicking “Grant admin consent for MTH-Consulting” and then “Yes.”
I can see that my consent has been given.
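I have not written the script for this step yet, but a rough sketch with the AzureAD module could look like the code below. The Microsoft Graph permission GUIDs for openid, profile, and User.Read are hard-coded assumptions in this sketch and should be verified before use, and admin consent still has to be granted separately afterwards.
# Rough sketch: add the delegated Microsoft Graph permissions (openid, profile, User.Read)
# to the app registration created for the storage account.
# The permission GUIDs below are assumptions and should be verified before use.
$application = Get-AzureADApplication -Filter "DisplayName eq 'cloudninjafsl11072022'"

$graphAccess = New-Object -TypeName "Microsoft.Open.AzureAD.Model.RequiredResourceAccess"
$graphAccess.ResourceAppId = "00000003-0000-0000-c000-000000000000" # Microsoft Graph

$graphAccess.ResourceAccess = @(
    New-Object -TypeName "Microsoft.Open.AzureAD.Model.ResourceAccess" -ArgumentList "37f7f235-527c-4136-accd-4a02d197296e", "Scope" # openid (assumed ID)
    New-Object -TypeName "Microsoft.Open.AzureAD.Model.ResourceAccess" -ArgumentList "14dad69e-099b-42c9-810b-d002981feec1", "Scope" # profile (assumed ID)
    New-Object -TypeName "Microsoft.Open.AzureAD.Model.ResourceAccess" -ArgumentList "e1fe6dd8-ba31-4d61-89e7-88639da4683d", "Scope" # User.Read (assumed ID)
)

Set-AzureADApplication -ObjectId $application.ObjectId -RequiredResourceAccess $graphAccess

# Admin consent is not granted by this script; grant it in the portal as shown above.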
Next, I can add the Azure AD properties to my storage account.
Set-AzureADProperties.ps1
param (
    $tenantId,
    $subscriptionId,
    $resourceGroupName,
    $storageAccountName
)
$AdModule = Get-Module ActiveDirectory;
if ($null -eq $AdModule) {
    Write-Error "Please install and/or import the ActiveDirectory PowerShell module." -ErrorAction Stop;
}
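# Collect the on-premises AD domain details needed on the storage account; azureStorageSid is a synthetic SID built from the domain SID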
$domainInformation = Get-ADDomain
$domainGuid = $domainInformation.ObjectGUID.ToString()
$domainName = $domainInformation.DnsRoot
$domainSid = $domainInformation.DomainSID.Value
$forestName = $domainInformation.Forest
$netBiosDomainName = $domainInformation.DnsRoot
$azureStorageSid = $domainSid + "-123454321";
Write-Verbose "Setting AD properties on $storageAccountName in $resourceGroupName : `
EnableActiveDirectoryDomainServicesForFile=$true, ActiveDirectoryDomainName=$domainName, `
ActiveDirectoryNetBiosDomainName=$netBiosDomainName, ActiveDirectoryForestName=$($domainInformation.Forest) `
ActiveDirectoryDomainGuid=$domainGuid, ActiveDirectoryDomainSid=$domainSid, `
ActiveDirectoryAzureStorageSid=$azureStorageSid"
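# Send the domain properties to the storage account through the Azure management REST API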
$Uri = ('https://management.azure.com/subscriptions/{0}/resourceGroups/{1}/providers/Microsoft.Storage/storageAccounts/{2}?api-version=2021-04-01' -f $subscriptionId, $resourceGroupName, $storageAccountName);
$json = @{
    properties = @{
        azureFilesIdentityBasedAuthentication = @{
            directoryServiceOptions   = "AADKERB";
            activeDirectoryProperties = @{
                domainName        = "$($domainName)";
                netBiosDomainName = "$($netBiosDomainName)";
                forestName        = "$($forestName)";
                domainGuid        = "$($domainGuid)";
                domainSid         = "$($domainSid)";
                azureStorageSid   = "$($azureStorageSid)"
            }
        }
    }
};
$json = $json | ConvertTo-Json -Depth 99
$token = $(Get-AzAccessToken).Token
$headers = @{ Authorization="Bearer $token" }
try {
    Invoke-RestMethod -Uri $Uri -ContentType 'application/json' -Method PATCH -Headers $Headers -Body $json
} catch {
    Write-Host $_.Exception.ToString()
    Write-Host "Error setting Storage Account AD properties. StatusCode:" $_.Exception.Response.StatusCode.value__
    Write-Host "Error setting Storage Account AD properties. StatusDescription:" $_.Exception.Response.StatusDescription
    Write-Error -Message "Caught exception setting Storage Account AD properties: $_" -ErrorAction Stop
}
I need to run this script on a machine that has a line of sight to the Active Directory domain and has the Active Directory PowerShell module installed. I have run this script on my domain controller.
Connect-AzAccount
.\Set-AzureADProperties.ps1 -tenantId "c8000000-0000-0000-0000-3d9100000000" -subscriptionId "da000000-0000-0000-0000-000000000000" -resourceGroupName "rg-avd-cloudninja-storage-001" -storageAccountName "cloudninjafsl11072022"
It provides the output as shown below.
Now my storage account is almost ready for use. I just need to set the NTFS permissions on the file share. I am doing this from the domain controller, where I have mapped the file share using the storage account key.
I can map the drive with PowerShell, and Microsoft has been kind enough to provide an easy way to do this.
I can go to the storage account and click on “File shares,” and from here, I can click on “…” and select “Connect.”
Now I can select “Storage account key” and copy the script provided by Microsoft.
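For reference, the generated script looks roughly like the sketch below; the storage account key is a placeholder to be replaced with the value from the portal.
# Rough outline of the portal-generated "Connect" script (the key is a placeholder)
$storageAccountName = "cloudninjafsl11072022"
$fileShareName = "avdprofiles"
$storageAccountKey = "<storage-account-key>"

# Port 445 must be reachable for SMB
$connectTestResult = Test-NetConnection -ComputerName "$storageAccountName.file.core.windows.net" -Port 445
if ($connectTestResult.TcpTestSucceeded) {
    # Store the credentials so the mapped drive survives reboots
    cmd.exe /C "cmdkey /add:`"$storageAccountName.file.core.windows.net`" /user:`"localhost\$storageAccountName`" /pass:`"$storageAccountKey`""
    # Map the file share as the Z: drive
    New-PSDrive -Name Z -PSProvider FileSystem -Root "\\$storageAccountName.file.core.windows.net\$fileShareName" -Persist
} else {
    Write-Error "Unable to reach $storageAccountName.file.core.windows.net over port 445."
}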
I can run this script from my domain controller, and I will then have a Z: drive attached. I am only running this on the domain controller because it is my lab environment; in production, a managed server or workstation should be used for this purpose.
I can use icacls or File Explorer to set the permissions. I will be using File Explorer to show what is happening; a rough icacls equivalent is sketched after these steps.
On the Z-drive, I right-click and select “Properties.”
I will go to the “Security” tab and click “Advanced.” Here I will remove the “Authenticated Users” and “Users” groups from the list.
I will then add the group “ACC_AVD_USERS” and set the properties as shown below.
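For those who prefer the command line, a rough icacls equivalent of these File Explorer steps could look like the commands below. The domain name is a placeholder, and the exact rights should match the settings shown in the screenshot above.
# Rough icacls equivalent of the File Explorer steps (run against the mapped Z: drive).
# "CLOUDNINJA" is a placeholder for your own NetBIOS domain name.
icacls Z: /grant "CLOUDNINJA\ACC_AVD_Users:(M)"
# Optionally let users keep modify rights on the profile folders they create themselves
icacls Z: /grant "Creator Owner:(OI)(CI)(IO)(M)"
# Remove the broad default groups
icacls Z: /remove "Authenticated Users"
icacls Z: /remove "Users"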
With the settings done, my storage account is finally ready for FSLogix profiles using Azure AD users.
Summary
I have set all the permissions needed for the AVD environment in this part. Users just need to be added to a group, and this group has all the required permissions to log in, create an FSLogix profile, and choose whether they want to start a desktop or a remote application. I must admit that the storage account configuration took quite a bit more work than I expected. Still, I hope this configuration becomes easier in the future, for instance if it could all be done with Terraform.
The next part of this series will be about adding session hosts to the AVD environment to enable users to start working on the systems.
Any feedback is welcome; reach out on Twitter or LinkedIn so I can fix any errors or optimize the code I am using.
Links to other parts of the blog series
Part 1: https://www.cloudninja.nu/post/2022/06/github-terraform-azure-part1/
Part 2: https://www.cloudninja.nu/post/2022/06/github-terraform-azure-part2/
Part 3: https://www.cloudninja.nu/post/2022/06/github-terraform-azure-part3/
Part 4: https://www.cloudninja.nu/post/2022/06/github-terraform-azure-part4/
Part 5: https://www.cloudninja.nu/post/2022/07/github-terraform-azure-part5/
Part 6: https://www.cloudninja.nu/post/2022/07/github-terraform-azure-part6/
Part 8: https://www.cloudninja.nu/post/2022/08/github-terraform-azure-part8/
Link for all the code in this post
I have put all the code used in this blog post on my GitHub repository, so you can download or fork the repository if you want to.