Using Microsoft Teams for Personal Accounts

With the rapid uptake of Microsoft Teams in the corporate space over the past few years, Microsoft has been working hard to deliver a free, personal version of the collaboration platform. Launching first on mobile devices earlier this year, the home and family version of Microsoft Teams takes the power of, and lessons learned from, the enterprise offering, simplifies it a bit and gives personal users a fantastic platform for social communication and collaboration.

Using Microsoft Teams With A Personal Account

The mobile Teams app now fully supports personal Microsoft account sign-in, with switching between personal and work accounts available. The same functionality is rolling out to the desktop client this month, meaning corporate Teams users will be able to switch easily between personal and work accounts on the same machine. Support for multiple corporate accounts is still on the roadmap, but there is no release date yet.

To sign in to personal Teams on the web, log in to https://teams.live.com with your personal Microsoft account and you’ll be brought to the personal version of the Teams web client.

From here you can manage meetings via your Outlook.com calendar and chat / meet with other personal account users in a stripped down version of the familiar Teams interface. You can also share and collaborate on OneDrive files within Teams.

Teams for personal use is a stripped-down version of the full product, but more functionality will be brought across over time. For now, it is a great way to communicate quickly and efficiently with family and friends through the familiar interface we’ve all grown to love (or hate) over the past few years.

OneDrive File Structure and Sharing Report – Graph API & PowerShell

I’ve previously posted a PowerShell script I put together to report on the file and folder structure in OneDrive. That script used PowerShell and the Graph API to loop through all files and folders and output the information to a CSV. I recently had a requirement to extend it: I didn’t just need the file structure, but also details of any sharing that was in place.

This information is not easy to get, as the reports in the Microsoft 365 Admin Center focus on recent activity rather than the ‘as-is’ state. I’ve enhanced the earlier report and created a new script to add the additional details required. The script, which is available on GitHub here, can be used as a template for similar requirements.

To run the script, import the module as before and run as below:

getonedrivereport -ClientID <Application ID> -TenantID <AAD Directory ID> -ClientSecret <App registration Secret> -UserListCSV <a CSV of users>

The parameters required are:

  • ClientID – This is the client ID of the application registration detailed below
  • TenantID – This is the AAD Directory ID
  • ClientSecret – This is a secret generated for the application registration
  • UserListCSV – This is a CSV with the heading UserPrincipalName, listing the UPN of each user you want to check

For the Application Registration, you will need the User.Read.All and Files.Read.All Graph permissions assigned.

The script contents are detailed below. As always, please don’t run online scripts directly in your production environment until you understand them and have tailored them to your needs; this script is just an example.

##Author: Sean McAvinue
##Details: Used as a Graph/PowerShell example, 
##          NOT FOR PRODUCTION USE! USE AT YOUR OWN RISK
##          Returns a report of OneDrive file and folder structure along with any sharing permissions to CSV file
function GetGraphToken {

    <#
    .SYNOPSIS
    Azure AD OAuth Application Token for Graph API
    Get OAuth token for an AAD Application (returned as $token)
    
    #>

    # Application (client) ID, tenant ID and secret
    Param(
        [parameter(Mandatory = $true)]
        $clientId,
        [parameter(Mandatory = $true)]
        $tenantId,
        [parameter(Mandatory = $true)]
        $clientSecret

    )
    
    
    
    # Construct URI
    $uri = "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token"
     
    # Construct Body
    $body = @{
        client_id     = $clientId
        scope         = "https://graph.microsoft.com/.default"
        client_secret = $clientSecret
        grant_type    = "client_credentials"
    }
     
    # Get OAuth 2.0 Token
    $tokenRequest = Invoke-WebRequest -Method Post -Uri $uri -ContentType "application/x-www-form-urlencoded" -Body $body -UseBasicParsing
     
    # Access Token
    $token = ($tokenRequest.Content | ConvertFrom-Json).access_token
    
    #Returns token
    return $token
}
    

function expandfolders {
    <#
    .SYNOPSIS
    Expands folder structure and sends files to be written and folders to be expanded
  
    .PARAMETER folder
    -Folder is the folder being passed
    
    .PARAMETER FilePath
    -filepath is the current tracked path to the file
    
    #>
    Param(
        [parameter(Mandatory = $true)]
        $folder,
        [parameter(Mandatory = $true)]
        $FilePath

    )

    write-host retrieved $filePath -ForegroundColor green
    $filepath = ($filepath + '/' + $folder.name)
    write-host $filePath -ForegroundColor yellow
    ##Note: $user and $token are inherited from the calling scope (getonedrivereport)
    $apiUri = ('https://graph.microsoft.com/beta/users/' + $user.UserPrincipalName + '/drive/root:' + $FilePath + ':/children')

    $Data = RunQueryandEnumerateResults -ApiUri $apiUri -Token $token

    ##Loop through Root folders
    foreach ($item in $data) {

        ##IF Folder
        if ($item.folder) {

            write-host $item.name is a folder, passing $filePath as path
            expandfolders   -folder $item -filepath $filepath

            
        }##ELSE NOT Folder
        else {

            write-host $item.name is a file
            writeTofile -file $item -filepath $filePath

        }

    }


}
   
function writeTofile {
    <#
    .SYNOPSIS
    Writes files and paths to export file

    
    .PARAMETER File
    -file is the file name found
    
    .PARAMETER FilePath
    -filepath is the current tracked path
    
    #>
    Param(
        [parameter(Mandatory = $true)]
        $File,
        [parameter(Mandatory = $true)]
        $FilePath

    )

    ##If shared, get the permissions (using the $File parameter rather than relying on $item from the caller's scope)
    if ($File.shared) {

        $Permissions = GetSharedFilePermissions -itemID $File.id -Token $token -itemname $File.name -Username $user.userprincipalname

        write-host "found $($permissions.roles) permissions for $($File.name)" -ForegroundColor blue
    }##Else blank out permission variable
    else {
        $permissions = $null
    }
    ##If there are multiple, build multiple objects and export each
    if ($Permissions -is [array]) {
        foreach ($permission in $permissions) {
            ##Build file object
            $object = [PSCustomObject]@{
                User              = $user.userprincipalname
                ID                = $File.id
                FileName          = $File.name
                shared            = $File.shared
                LastModified      = $File.lastModifiedDateTime
                Filepath          = $filepath
                ItemID            = $permission.itemID
                ItemName          = $permission.itemName
                hasPassword       = $permission.haspassword
                roles             = $permission.roles
                DirectPermissions = $permission.DirectPermissions
                LinkPermissions   = $permission.LinkPermissions
            }

            ##Export File Object
            $datestamp = (get-date).tostring('yyMMdd')
            $object | export-csv "OneDriveSharingReport-$datestamp.csv" -NoClobber -NoTypeInformation -Append


        }
    }
    else {
        ##Build file object
        $object = [PSCustomObject]@{
            User              = $user.userprincipalname
            ID                = $File.id
            FileName          = $File.name
            shared            = $File.shared
            LastModified      = $File.lastModifiedDateTime
            Filepath          = $filepath
            ItemID            = $permissions.itemID
            ItemName          = $permissions.itemName
            hasPassword       = $permissions.haspassword
            roles             = $permissions.roles
            DirectPermissions = $permissions.DirectPermissions
            LinkPermissions   = $permissions.LinkPermissions
        }

        ##Export File Object
        $datestamp = (get-date).tostring('yyMMdd')
        $object | export-csv "OneDriveSharingReport-$datestamp.csv" -NoClobber -NoTypeInformation -Append
    }
}

function RunQueryandEnumerateResults {
    <#
    .SYNOPSIS
    Runs Graph Query and if there are any additional pages, parses them and appends to a single variable
    
    .PARAMETER apiUri
    -APIURi is the apiUri to be passed
    
    .PARAMETER token
    -token is the auth token
    
    #>
    Param(
        [parameter(Mandatory = $true)]
        [String]
        $apiUri,
        [parameter(Mandatory = $true)]
        $token

    )

    #Run Graph Query
    $Results = (Invoke-RestMethod -Headers @{Authorization = "Bearer $($Token)" } -Uri $apiUri -Method Get)
    #Output Results for debug checking
    #write-host $results

    #Begin populating results
    $ResultsValue = $Results.value

    #If there is a next page, query the next page until there are no more pages and append results to existing set
    if ($results."@odata.nextLink" -ne $null) {
        write-host enumerating pages -ForegroundColor yellow
        $NextPageUri = $results."@odata.nextLink"
        ##While there is a next page, query it and loop, append results
        While ($NextPageUri -ne $null) {
            $NextPageRequest = (Invoke-RestMethod -Headers @{Authorization = "Bearer $($Token)" } -Uri $NextPageURI -Method Get)
            $NxtPageData = $NextPageRequest.Value
            $NextPageUri = $NextPageRequest."@odata.nextLink"
            $ResultsValue = $ResultsValue + $NxtPageData
        }
    }

    ##Return completed results
    return $ResultsValue

    
}

function GetSharedFilePermissions {
    <#
    .SYNOPSIS
    Returns sharing details for input item
    
    .PARAMETER itemID
    -itemID is the ID of the current item
    
    .PARAMETER itemName
    -itemName is the name of the item to be processed
    
    .PARAMETER token
    -token is the auth token

    .PARAMETER username
    -username is the UPN of the user currently being processed
    #>
    Param(
        [parameter(Mandatory = $true)]
        [String]
        $itemID,
        [parameter(Mandatory = $true)]
        [String]
        $itemName,
        [parameter(Mandatory = $true)]
        [String]
        $Token,
        [parameter(Mandatory = $true)]
        [String]
        $Username
    )
    ##Build Query
    $apiuri = "https://graph.microsoft.com/beta/users/$username/drive/items/$itemID/permissions"
    ##Pass to query function
    $Permissions = RunQueryandEnumerateResults -token $token -apiUri $apiuri
    ##Build an array to hold results
    $PermissionArray = @()
    ##Loop through permissions and create an object for each. If there are multiple, each is appended to the array
    foreach ($permission in $permissions) {
        $PermissionObject = New-Object PSObject -Property @{
            ItemID            = $itemID
            ItemName          = $itemName
            hasPassword       = $permission.haspassword
            roles             = $permission.roles[0]
            DirectPermissions = $permission.grantedto.user.email -join (' ')
            LinkPermissions   = $permission.grantedtoidentities.user.email -join (' ')
        }
        $PermissionArray += $PermissionObject
    }
    ##Return outside the loop so all permissions are captured (the original returned inside the loop, dropping all but the first)
    return $PermissionArray

}


function getonedrivereport {
    <#
    .SYNOPSIS
    Main function, reports on file and folder structure in OneDrive for all imported users

    #>
    Param(
        [parameter(Mandatory = $true)]
        $clientId,
        [parameter(Mandatory = $true)]
        $tenantId,
        [parameter(Mandatory = $true)]
        $clientSecret,
        [parameter(Mandatory = $true)]
        $UserListCSV
    )

    ##Get in scope Users from CSV file##
    $Users = import-csv $UserListCSV


    #Loop Through Users
    foreach ($User in $Users) {
    
        #Generate Token
        $token = GetGraphToken -clientID $clientId -TenantID $tenantId -clientSecret $clientSecret

        ##Query the user's OneDrive root for its child items
        $apiUri = 'https://graph.microsoft.com/v1.0/users/' + $User.userprincipalname + '/drive/root/children'
        $Data = RunQueryandEnumerateResults -ApiUri $apiUri -Token $token

        ##Loop through Root folders
        ForEach ($item in $data) {

            ##IF Folder, then expand folder
            if ($item.folder) {

            
                write-host $item.name is a folder
                $filepath = ""
                expandfolders -folder $item -filepath $filepath

                ##ELSE NOT Folder, then it's a file, sent to write output
            }
            else {

                write-host $item.name is a file
                $filepath = ""
                writeTofile -file $item -filepath $filepath

            }

        }


    
    }
    
    
    
}

Using Logic Apps to Trigger Key Vault Rotation

Previously I’ve written about how we can use Azure Key Vault and PIM Groups as a secure password management solution. Something I didn’t cover at the time is the requirement in large environments to rotate passwords regularly. To achieve this rotation, we can leverage Azure Logic Apps to trigger email requests to rotate keys. The setup in my previous post will be used as a basis for the configuration below.

Configure Logic App

We can create a new Logic App in the Azure Portal, all we need is an Azure subscription (which we should have from setting up the Key Vault previously)

Create the App in the Azure Portal as below.

When the Logic App is provisioned, navigate to the “Identity” section, enable the System Assigned Identity and click save.

Next, we need to grant the Logic App identity permission on our Key Vault. Open the Key Vault and open up Access Policies. Create a new Access Policy and assign the appropriate access to the service principal. For full control, use the “Key, Secret & Certificate Management” template.

Now, back in our Logic App, we can start building out our logic. Firstly, add a trigger such as a recurrence pattern to schedule the app to run.

As we want to use our managed identity, we can’t use the default Key Vault connector, so we will instead send an API request directly. Choose the HTTP connector, then select the HTTP action.

Fill in the HTTP connector with the following values:

  • Method – Get
  • URI – https://<KeyVaultName>.vault.azure.net/secrets/<secretname>/?api-version=7.1
  • Authentication Type – Managed Identity
  • Managed Identity – System Assigned Managed Identity
  • Audience – https://vault.azure.net
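
If you want to sanity-check the call outside of the Logic App, the same request can be made from PowerShell. Below is a rough sketch using the Az module and your own signed-in account rather than the managed identity (the vault and secret names are placeholders):

# Acquire a Key Vault access token for the signed-in account (requires the Az module)
Connect-AzAccount
$token = (Get-AzAccessToken -ResourceUrl "https://vault.azure.net").Token

# The same GET request the HTTP action performs
$uri = "https://<KeyVaultName>.vault.azure.net/secrets/<secretname>/?api-version=7.1"
$secret = Invoke-RestMethod -Method Get -Uri $uri -Headers @{ Authorization = "Bearer $token" }

# The "updated" attribute is a Unix timestamp of the last secret update
[DateTimeOffset]::FromUnixTimeSeconds($secret.attributes.updated).DateTime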

Now we can store our results in a variable for sending. Add an action for “Initialize Variable”. Here we can specify which values from the query response we want; we are specifically looking for the “Updated” attribute, but we can capture the secret itself among other attributes in the same way.
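
For reference, assuming the HTTP action kept its default name of “HTTP”, the expression to pull the “Updated” attribute out of the response body would look something like this:

body('HTTP')?['attributes']?['updated']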

Now that we have the last updated date, we have a lot of options. We can add a Planner task, send a webhook to our Service Desk tool, send an email and so on. I’ll leave this final step up to you, but Logic Apps is extremely flexible and can interact with a massive number of systems.

Overall, adding this on top of the Key Vault functionality is a really easy way to add value at a very low cost.

Using PIM Groups and Azure Key Vault as a Secure, Just in Time, Password Management Solution

As an MSP, CSP, general IT Service Provider or even a regular IT department, we generate a huge number of login credentials for different systems to keep everything running. While it is best practice to maintain a single source of identity using LDAP integrations, ADFS and delegation, sometimes the systems we work with don’t support this. Things like Certificate Authority logins, root passwords to non-Windows machines, local admin passwords etc. all need to be stored securely and potentially made available to a large team when needed.

This problem is very common, with many third party solutions such as LastPass having decent offerings around credential access management. One relatively low cost tool we have at our disposal in the Microsoft Cloud is Azure Key Vault. Key Vault allows us to securely store a range of sensitive credentials like secrets/passwords, keys and certificates and allow the other technologies in Azure to help us with access management. It also allows for logging of activity, backup and versioning of credentials which goes a long way towards making the solution scalable and secure.

We can create a Key Vault in our Azure subscription very easily and start using it to store credentials for anyone with access. This is fine, but persistent access to this sensitive data probably isn’t required and shouldn’t be in place.
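
If you prefer scripting to the portal, a minimal sketch of the vault setup with the Az PowerShell module looks like the below (resource names and region are hypothetical):

# Create a resource group and the Key Vault (names are examples only)
New-AzResourceGroup -Name "rg-credential-vault" -Location "northeurope"
New-AzKeyVault -VaultName "contoso-team-passwords" -ResourceGroupName "rg-credential-vault" -Location "northeurope"

# Store a credential as a secret
$password = Read-Host -AsSecureString -Prompt "Password to store"
Set-AzKeyVaultSecret -VaultName "contoso-team-passwords" -Name "CA-Root-Login" -SecretValue $password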

To provide access to the keys we want to publish, we create a security group in Azure which will be granted permissions, via the access policy, to just the keys required. As we want to make access available just in time, make sure to tick the option “Azure AD roles can be assigned to the group (Preview)”.

When our group is created we can enable it for privileged access from the group settings under “Activity”.

Once enabled, we add our users who require access as eligible to the group.

Now that our group is ready, the last step is to assign access to the group. From the Key Vault we created, create a new access policy, granting appropriate access to the Key Vault for our group. For this group I am granting the “Reader” permissions to the vault, but this can be tailored to the specific group’s needs.
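
This assignment can also be scripted. A minimal sketch using the classic access policy model, with a hypothetical group name, would be:

# Look up the PIM-enabled group and grant it read access to secrets only
$group = Get-AzADGroup -DisplayName "KV-PasswordReaders"
Set-AzKeyVaultAccessPolicy -VaultName "contoso-team-passwords" -ObjectId $group.Id -PermissionsToSecrets Get, List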

If required, we can customize the settings of the eligible role to put in any approval or notification requirements.

Now that the role is assigned, we can test with our user account.

Requesting Access

When our user requires access, they can navigate to the Privileged Identity Management section of the Azure portal and under Privileged Access Groups can request the access to the group providing justification for logging purposes and subject to any approval / notification settings etc.

The request will process with any notifications or approval we have specified.

Once activated, the role will allow our user to access the Key Vault for the time specified in the request. When this time expires, they will need to activate the role again.

Summary

Key Vault and PIM work together to form an extremely secure workflow for secret management. While permissions can’t be made granular for individual objects within a vault, a tiered approach using multiple vaults is recommended to ensure segregation of permissions. Key Vault is also extremely cheap to run and, using this model, highly scalable.

Some other ideas would be to integrate something like Logic Apps to remind the Key Vault owners to rotate any stale passwords and update them in the vault. For certain services this rotation can even be automated with Logic Apps and/or Azure automation.

Onboarding Windows 10 Devices to the Microsoft 365 Compliance Portal

The Microsoft 365 Compliance Portal has a huge number of nice features which can be used with cloud services. I’ve previously posted about the new Compliance Manager tool and how it can help to assess the controls in place in the tenancy while also recommending improvements. There are also tools such as DLP, Unified Labelling and Trainable Classifiers which provide some really flexible ways of protecting data.

These features so far relate to how a user operates within the Microsoft 365 service, but we also have some cool functionality available which we can extend to the end user’s device. We can leverage tools like Insider Risk Management and Endpoint DLP to extend our protection even further.

Prerequisites

To enable the device functionality, we first need to ensure we meet the prerequisites. Microsoft have published the below list for us to verify on our devices:

  1. Must be running Windows 10 x64 build 1809 or later.
  2. Antimalware Client Version is 4.18.2009.7 or newer. Check your current version by opening the Windows Security app, selecting the Settings icon, and then selecting About; the version number is listed under Antimalware Client Version. Update to the latest version by installing Windows Update KB4052623. Note: none of the Windows Security components needs to be active; you can run Endpoint DLP independent of Windows Security status. (A quick PowerShell spot-check for the first two prerequisites is shown after this list.)
  3. The following Windows updates are installed. Note: these updates are not a prerequisite to onboard a device to Endpoint DLP, but they contain fixes for important issues and should be installed before using the product.
    • For Windows 10 1809 – KB4559003, KB4577069, KB4580390
    • For Windows 10 1903 or 1909 – KB4559004, KB4577062, KB4580386
    • For Windows 10 2004 – KB4568831, KB4577063
    • For devices running Office 2016 (and not any other Office version) – KB4577063
  4. All devices must be Azure Active Directory (Azure AD) joined, or Hybrid Azure AD joined.
  5. Install Microsoft Chromium Edge browser on the endpoint device to enforce policy actions for the upload to cloud activity. See, Download the new Microsoft Edge based on Chromium.
  6. If you are on Monthly Enterprise Channel of Microsoft 365 Apps versions 2004-2008, there is a known issue with Endpoint DLP classifying Office content and you need to update to version 2009 or later. See Update history for Microsoft 365 Apps (listed by date) for current versions. To learn more about this issue, see the Office Suite section of Release notes for Current Channel releases in 2020.
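
A quick way to spot-check the first two prerequisites on a device is from PowerShell (Get-MpComputerStatus ships with the built-in Defender module):

# OS build - Windows 10 1809 corresponds to build 17763
[System.Environment]::OSVersion.Version

# Antimalware Client Version - should be 4.18.2009.7 or newer
(Get-MpComputerStatus).AMProductVersion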

Enable Device Onboarding

When we have met the prerequisites in our environment, we can now enable Device Onboarding from the Compliance Portal. Navigate to https://compliance.microsoft.com and open up “Settings” then “Device Onboarding”.

From here, we turn on device onboarding and we’ll see that any of our devices already onboarded to Microsoft Defender for Endpoint will already be included… more on this in a bit. For now, click OK to enable Onboarding.

We might need to wait a few minutes for everything to kick in, but once it has, we are ready to onboard machines.

In the onboarding section, we can see the list of onboarding options available to us, you might notice that the list looks kind of familiar. For now we’ll select Local Script as we are testing on a small scale but there is a lot of flexibility in how we can deploy.

Select Local Script and download the package. Once it’s downloaded let’s open it up and see what it’s doing.

Opening up the downloaded script confirms the feeling of déjà vu we might have been having. The onboarding process isn’t unique to the Compliance Portal; we are enrolling in Microsoft Defender for Endpoint, which we may have already done in our tenancy. This makes sense, as Defender is the agent on the machine which actually enforces our controls.

Onboard a Device

Ok, now that we have our onboarding script (or whatever method we chose earlier), we just need to run it on the device. For the script, we just copy it to the machine and run it as an admin.

We get the standard warning which we accept and the script will continue and onboard the machine for us.
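
If you want to confirm locally that the onboarding completed, the Defender for Endpoint sensor runs as the “Sense” service and records its onboarding state in the registry, so a quick check looks like this:

# The Defender for Endpoint sensor service should be running
Get-Service -Name Sense

# OnboardingState should read 1 once the device is onboarded
Get-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Windows Advanced Threat Protection\Status" -Name OnboardingState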

On a larger scale I recommend using Microsoft Endpoint Manager / Intune for onboarding but for this demo the script has worked fine.

Verify The Machine Has Been Onboarded

After a minute or two we can hop back over to the Compliance portal and see our machine has been onboarded.

If we have the licensing, we will also see the device in the Windows Defender for Endpoint page.

Now that the device is onboarded, we can use some of the device-based features of the Compliance center. I’ll be going through some of these in subsequent posts!

Exchange Online Native Tenant to Tenant Migrations (Preview)

With the proliferation of Microsoft 365 as the collaboration platform of choice in the enterprise space, it’s rare to find a large organization that hasn’t undergone some form of tenant to tenant migration. This can be a result of mergers, acquisitions or divestitures. Microsoft have not previously had any native tooling to facilitate this and third parties such as BitTitan and Quest have built up some really slick products to help organizations manage this technical transition.

This has slowly begun to change with the Microsoft acquisition of Mover in 2019 to help facilitate file migrations to Office 365. Microsoft seem to be making more native migration functionality available as part of the service. The most mature of the migration tools is also the oldest, the native Exchange on-premises migration tools using Exchange MRS functionality. This has also been improved recently with the availability of the Exchange modern hybrid configuration, removing the need to open up on-premises endpoints to the cloud by leveraging application proxy technology.

This Exchange functionality has now been extended to cross-tenant migrations allowing the migration of mailboxes from one tenancy to another using the familiar Exchange migration tools.

Prepare for Migration

First we need to set up our environments for the tenant to tenant migration. To understand the configuration, Microsoft have published the below diagram which explains the process in detail:

From this diagram, we can see the high-level components of the migration infrastructure are:

  • A Tenant relationship application registration in the destination tenancy with the below API permissions
    • Exchange: Mailbox.Migration
    • Graph: Directory.ReadWrite.All
  • An Azure Key Vault which stores the app secret details for this app
  • The source tenant grants consent to the tenant relationship app created in the destination tenant
  • A two-way Organization Relationship
  • A mail-enabled security group in the source tenant to scope mailboxes for migration

Luckily, Microsoft have automated a lot of this setup with PowerShell scripts located on GitHub:

Source – SetupCrossTenantRelationshipForResourceTenant.ps1

Target – SetupCrossTenantRelationshipForTargetTenant.ps1

Prepare the Target Tenant

To prepare the target tenant, download the SetupCrossTenantRelationshipForTargetTenant.ps1 script.

To run the setup script, ensure you have the ExchangeOnlineManagement, AzureAD (the preview module doesn’t seem to work) and AzureRM PowerShell modules installed. If you don’t, you can install them with the below commands:

Install-Module ExchangeOnlineManagement
Install-Module AzureRM
Install-Module AzureAD

Once the modules are installed, connect to Exchange Online with:

Connect-ExchangeOnline

Now we can finally run the first script. The following parameters are required:

  • -ResourceTenantDomain The mail domain of the source tenant
  • -ResourceTenantAdminEmail The email address for the admin account in the source tenant. Ensure this account has a valid mailbox.
  • -TargetTenantDomain The mail domain of the target tenant
  • -ResourceTenantId The source tenant Azure AD Directory ID
  • -SubscriptionId The Subscription ID to create the KeyVault in
  • -ResourceGroup A name for the KeyVault Resource Group
  • -KeyVaultName A name for the KeyVault
  • -CertificateName A name for the certificate
  • -CertificateSubject A certificate subject name: “CN=admin_seanmc”
  • -AzureAppPermissions The permissions to grant: Exchange, MSGraph
  • -UseAppAndCertGeneratedForSendingInvitation A switch indicating the newly generated app and certificate should be used to send the B2B invitation to the source tenant admin

.\SetupCrossTenantRelationshipForTargetTenant.ps1 -ResourceTenantDomain <Source Tenant mail domain> -ResourceTenantAdminEmail <Source Tenant Admin Account Email> -TargetTenantDomain <Target tenant domain> -ResourceTenantId <Source Tenant Directory ID> -SubscriptionId <Azure Subscription ID> -ResourceGroup "CrossTenantMoveRG" -KeyVaultName "adminseanmc-Cross-TenantMovesVault" -CertificateName "adminseanmc-cert" -CertificateSubject "CN=admin_seanmc" -AzureAppPermissions Exchange, MSGraph -UseAppAndCertGeneratedForSendingInvitation

This script will prompt for destination tenant credentials twice during its run and then pause, asking you to grant consent to the new app registration. In Azure AD App Registrations, open the new app and grant consent to the API permissions.

When consent is granted, hit enter on the script to continue and set up the Organization relationship.

Finally, note down the Application ID that is saved to the $AppID variable in the PowerShell session. If you miss this you can get it from the Azure AD app registrations page also.

Prepare the Source Tenant

Now that the destination tenant is configured, we can move on to the source tenant. When running the previous script, we were asked for an admin email address in the source tenant. When we log into this account we will find a B2B invitation from the destination tenant admin. Open this mail and accept the invitation.

Next, accept the permission request from the application to allow it to pull mailbox data.

With the permissions in place, we now create a mail-enabled security group to manage our migration scope. All mailboxes to be migrated must be members of this group. To create the group, run the below Exchange Online PowerShell command in the source tenant.

New-DistributionGroup t2tmigrationscope -Type security

Then add any in-scope mailboxes to the group with the below command.

Add-DistributionGroupMember -Identity t2tmigrationscope -Member <Mailbox to add>
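
If there are a lot of mailboxes in scope, they can be bulk-added from a CSV instead. A quick sketch, assuming a CSV with a UserPrincipalName column:

# Add every user listed in the CSV to the migration scope group
Import-Csv .\MigrationUsers.csv | ForEach-Object {
    Add-DistributionGroupMember -Identity t2tmigrationscope -Member $_.UserPrincipalName
}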

With our scope in place, we can now run the source tenant preparation script. The script needs the following parameters:

  • SourceMailboxMovePublishedScopes – This is our mail enabled security group created previously
  • ResourceTenantDomain – This is our source tenant mail domain
  • TargetTenantDomain – This is our target tenant mail domain
  • ApplicationId – This is the Application ID we noted during the target configuration
  • TargetTenantId – Azure AD Directory ID of the target tenant

With all of this information to hand, run the script SetupCrossTenantRelationshipForResourceTenant.ps1 as below:

.\SetupCrossTenantRelationshipForResourceTenant.ps1 -SourceMailboxMovePublishedScopes <security group identity> -ResourceTenantDomain <source tenant mail domain> -TargetTenantDomain <target tenant domain> -ApplicationId <AppID> -TargetTenantId <target tenant directory ID>

When this is complete, all permissions and the Organization Relationship are in place, so we can move on to preparing our users.

Prepare Destination User Accounts

To migrate a mailbox cross-tenant, we need a valid mail user in the destination tenant, and there are several attributes we need to ensure align between the two tenants for the migration to be successful. To gather the required data, run the below command against the mailbox(es) you wish to move in the source tenant.

get-mailbox <mailbox> |select exchangeguid,archiveguid,legacyexchangedn,userprincipalname,primarysmtpaddress 

This will give an output similar to the below.

Use this output to create a new mail user in the destination tenant. This setup can vary depending on whether your destination environment is synchronized with Active Directory, but for a non-synchronized environment, the below commands in Exchange Online PowerShell should create the user with the appropriate attributes.

New-MailUser <alias> -ExternalEmailAddress <source tenant email> -PrimarySmtpAddress <destination tenant email> -MicrosoftOnlineServicesID <destination tenant username>    

Set-MailUser <alias> -ExchangeGuid <ExchangeGuid from source> -ArchiveGuid <ArchiveGuid from source> -EmailAddresses @{Add="x500:<LegacyExchangeDN from source>"}

Finally, once these attributes are present, give the new user(s) a valid Exchange Online license. If everything was done correctly, no Exchange Online mailbox will be provisioned when the user is licensed.
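
The license can be assigned in the admin center or, if you prefer to script it, with the MSOnline module. The SKU name below is hypothetical (list the real ones with Get-MsolAccountSku), and the user needs a usage location set first:

# Set a usage location, then assign an Exchange Online-capable license
Set-MsolUser -UserPrincipalName <destination tenant username> -UsageLocation IE
Set-MsolUserLicense -UserPrincipalName <destination tenant username> -AddLicenses "contoso:ENTERPRISEPACK"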

With the account(s) created, all the prep work is finally done and we can move on to testing migrations.

Start Cross-Tenant Migration Batch

Before starting the migration, we create a CSV file to import our batch. The CSV only needs a single column named ‘EmailAddress’, listing the target tenant email address of each user in the batch.
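
For example, with placeholder addresses, the file contents would look like this:

EmailAddress
debra.berger@targettenant.onmicrosoft.com
alex.wilber@targettenant.onmicrosoft.com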

To create a new cross-tenant migration request, we navigate to the new Exchange Admin Center at https://admin.exchange.microsoft.com from the destination tenant and open up the “Migration” section. From here we create a new migration batch and select “Migration to Exchange Online”.

Next we select the migration type “Cross tenant migration”

We can see the prerequisites we’ve worked through listed on the next page, since we’ve done all the work already, we can hit next.

On the next page, we select the migration endpoint our script configured and hit next.

Next, upload the CSV file we prepared earlier.

Finalize the standard move configuration settings.

Configure any scheduling we need to perform and finally hit “save” to kick off the migration batch.

When the batch is created, we’ll see the success page below and then we can check the status throughout via the migration batches or by PowerShell.

After a little while the migration is synced. We can complete it as we would with any other migration batch.

We have now successfully migrated from one Exchange Online Tenant to another with native tools. When this functionality goes GA, it could really change the way a lot of Organizations approach multi-tenant configurations and migrations. For more information on Tenant to Tenant migrations, see the official Microsoft documentation here: Cross-tenant mailbox migration – Microsoft 365 Enterprise | Microsoft Docs

Project Oakdale Renamed to Microsoft Dataverse for Teams

In a previous post I went through how we can use Project Oakdale to create some pretty flexible apps in Microsoft Teams. The platform, which allows a subset of CDS functionality to be used inside Microsoft Teams, has just gone GA! With the GA release, we’ve also got a new name.

In line with the renaming of CDS (Common Data Service) to Microsoft Dataverse, Project Oakdale (essentially CDS lite) has been rebranded as Microsoft Dataverse for Teams. Personally, I always thought Project Oakdale was a bit of a stupid name… Long live Dataverse.

Cheesy names aside, Dataverse for Teams provides a great set of features that were previously locked behind a full Dataverse (or CDS) SKU. While it’s not on the level of full Dataverse, it’s a fantastic tool for small, team-scale, low-code solutions.

For more information on the functionality available in Dataverse and Dataverse for Teams, see the launch notes here:

Reshape the future of work with Microsoft Dataverse for Teams—now generally available | Microsoft Power Apps

Using Delegated Access Permissions in PowerShell to Manage all Microsoft 365 Services

I recently posted about how we can use Delegated Access Permissions via a partner relationship to connect to an Exchange Online organization through PowerShell. This is a fantastic piece of functionality for MSPs and CSPs, allowing them to manage multiple tenancies securely without having to manage a set of admin identities for all of their customers.

To expand on the previous post, I thought I would put together each of the PowerShell modules that support delegated admin permissions in one place and also highlight any that I feel are missing.

In this post I will go through the connection methods (where available) using DAP for each of the below modules:

  • ExchangeOnline
  • MSOnline
  • Azure AD
  • MicrosoftTeams
  • Skype for Business
  • SharePoint Online
  • Security & Compliance Center

Exchange Online Module (v2)

I’ve gone through this one recently in another post, so full information is available there. In short, we can connect to Exchange Online PowerShell using the Exchange Online (v2) PowerShell module by specifying the tenant domain in our connection command.

First, install the module as normal:

Install-Module ExchangeOnlineManagement

Once installed, restart PowerShell and connect using the customer tenancy domain:

Connect-ExchangeOnline -DelegatedOrganization <customerdomain.onmicrosoft.com>

MS Online Module

The MS Online module works a little differently in that we don’t connect directly to our customer tenancy; instead, we specify the tenancy in each command we run.

We install the module with:

Install-Module MSOnline

Then we connect to our own service as normal:

Connect-MsolService

Once we are connected, we need to locate the Tenant ID of our target organization. If we don’t have it to hand we can find it using the tenant domain in the below command:

Get-MsolPartnerContract -DomainName <customerdomain.onmicrosoft.com> | Select-Object TenantID

Once we have the TenantID output (which will be a GUID), we can run commands against the tenant as below, using the -TenantID flag:

Get-MsolUser -All -TenantId <TenantID>
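
Because the tenant is specified per command, it’s easy to loop over every customer tenancy. A minimal sketch that reports the user count per tenant:

# Enumerate all partner contracts and count the users in each tenancy
Get-MsolPartnerContract -All | ForEach-Object {
    $users = Get-MsolUser -All -TenantId $_.TenantId
    Write-Host "$($_.DefaultDomainName): $($users.Count) users"
}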

Azure AD Module

To connect to Azure AD, we need the Tenant ID from above to use in our connection. We can install the AzureADPreview Module:

Install-Module AzureADPreview

We then connect using our Tenant ID with the below command:

Connect-AzureAD -TenantId <TenantID>

Microsoft Teams Module

For Microsoft Teams we use the Tenant ID again. Install with:

Install-Module MicrosoftTeams

And then we connect with the Tenant ID as below:

Connect-MicrosoftTeams -TenantId <TenantID>

Skype for Business Module

The Skype for Business module is interesting in that a lot of organizations have moved off Skype for Business to Microsoft Teams, but the Skype module is still required to manage certain aspects of Teams. The connection method is equally strange: once we have connected to Teams as above, we then need to create our connection to Skype using the below commands to create the session and then import it:

$session = New-CsOnlineSession
Import-PSSession $session

This will connect our existing Teams session to the Skype for Business module!

SharePoint Online Module

Unfortunately the SharePoint Online Module does not support DAP at the moment. I will update this post when/if it becomes available.

Security & Compliance Center Module

The Security and Compliance Center Module is installed as part of the Exchange Online (v2) module and allows connection to services such as DLP and Information Protection.

To connect to the Security & Compliance Center we can install the Exchange Online (v2) module as above and use the -DelegatedOrganization flag to specify our customer domain:

Connect-IPPSSession -DelegatedOrganization <CustomerDomain>

And that’s it; that’s pretty much all the modules I use on a daily basis. I will update this post as and when more updates or modules become available.

Bring Yammer into Microsoft Teams with the Communities App

The current version of Yammer is almost unrecognizable from the initial release years back. It really has earned its place in the Microsoft 365 ecosystem as an enterprise-grade communication tool right alongside Teams and SharePoint. As part of a digital transformation project, there are tons of use cases for Yammer and, when used correctly, it can be really effective for communication and collaboration.

The flexibility and overall familiar “social media” feel of Yammer means adoption is actually much easier than you might expect. As with much of the Office 365 stack now, Yammer can be brought directly into Teams to give users easy access to the cool features it provides. This is where the “Communities” app comes in. Communities is the name of the Yammer app in Teams. A user or admin can deploy the app to the Teams client to give access to all of a user’s Yammer communities directly from the client itself, cutting out switching between different web pages. This can be done manually or via Teams app policies.

This is a nice way to use Teams to abstract away the back-end tools and bring the functionality right to the user. While this is a nice feature on its own, the real power of the Communities app comes when it is added to a Team channel.

To add communities to a channel, we simply add a new tab to an existing or new channel in Teams and select ‘Communities’.

Then we can select which Yammer community or topic we want to link in that channel. For instance, we can link a particular project Team to the relevant community or topic to read or provide updates on project status to the business.

When we link a topic, we can follow any posts relating to the topic and reply. This works really well for tracking companywide updates on particular workstreams.

When we link a community, we can post and interact within the community. This works for Project Teams to provide updates to the business and discuss within the community.

These concepts aren’t new in the Yammer world, but linking them in Teams, similar to the Tasks app, really shows the power that comes with integrating the different tools in the platform. I find that as more tools are used together in Office 365, the value of each to the business increases exponentially.

Presenting each tool as part of the whole collaboration platform is key to user training and adoption. Our users don’t really care that there is an app called Yammer that does one thing really well; however, when we present the platform as a whole, the back-end Yammer functionality becomes part of the ecosystem they work in every day. This allows users to leverage this functionality with minimal effort and hides the various back-end considerations that IT need to be aware of.

As with any of the Office 365 tools, training end users on the use cases for each and helping to show the benefits of each communication tool is key to any deployment. Don’t underestimate the value of a true adoption & change management program.

For more information on the Communities app, check out the official Microsoft documentation here.

Managing Office 365 Integrated Apps From The Admin Center

For all the cool features of Office 365 and the Office suite, there are always use cases for third-party integrations. These apps extend the Office platform, adding specific functionality that might not be something Microsoft can, or wants to, deliver to the entire platform.

These apps are hosted in the AppSource catalog, where they can be searched for and deployed to users by an admin. This functionality has now been given a new home directly in the Admin Center, under the Settings section.

Deploy An Integrated App

In this section we will deploy the Outlook “Report Message” add-in from Microsoft. I tend to deploy this for almost all modern Office 365 builds as it allows users to directly report spam and phishing attempts to Microsoft, helping to improve the overall message-filtering back end while also cutting down on support tickets by giving the power directly to end users.

To deploy our first app, click the “Get apps” option to open the AppSource menu.

From here we can search for the app we want and get ready to deploy by clicking “Get it now”

Now we can configure our deployment scope. For the Report Message add-in, I’ll deploy to all users by selecting “Entire organization”.

Finally, we verify the permissions we will be giving the app and deploy it when we are happy.

Now with the app deployed, we can return to modify it any time from the integrated apps section.