Setting up an Auto Attendant in Microsoft Teams

Microsoft Teams calling features offer great flexibility for standard calling requirements, and for more bespoke requirements there are great integrations available through a number of different contact center ISVs. One of the cool built-in features is the Microsoft Teams Auto Attendant. Auto Attendants have been around for a while now, coming from Skype for Business, and can help organizations put a front end on customer-facing numbers. In this post I'll go through the steps to configure a Microsoft Teams Auto Attendant.

Set up a Resource Account

The first requirement for an Auto Attendant is a resource account to act as the 'entry point' and hold the associated front-end phone number. Every Auto Attendant must have at least one Resource Account associated with it, and can have more than one if multiple numbers are required.

To set up a resource account, open the Teams Admin Center and go to ‘Org Settings‘ -> ‘Resource Accounts‘ and add a new account as below:
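If you prefer scripting, the resource account can also be created with PowerShell. Below is a minimal sketch using New-CsOnlineApplicationInstance; the UPN and display name are placeholders, and the ApplicationId shown is the documented Auto Attendant application ID:

##Create an Auto Attendant resource account (UPN and display name are placeholders)
New-CsOnlineApplicationInstance -UserPrincipalName receptionlineRA@M365X142973.onmicrosoft.com -DisplayName "Reception Line" -ApplicationId "ce933385-9390-45d1-9512-c8d228074e07"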

Next, we assign a phone number to our resource account to enable it for dial-in. Depending on whether you are using Microsoft Calling Plans or Direct Routing, this is slightly different.

For Calling Plans, hit the assign button and assign an online number as below:

Screenshot of the Assign/unassign options

For Direct Routing, this currently needs to be done via the Skype for Business Online Management Shell. Use the command below to set the number on the resource account:

Set-CsOnlineVoiceApplicationInstance -Identity receptionlineRA@M365X142973.onmicrosoft.com -TelephoneNumber +0000000000
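To confirm the number has applied, the account can be checked with Get-CsOnlineApplicationInstance (a quick verification step, using the same account as above):

Get-CsOnlineApplicationInstance -Identity receptionlineRA@M365X142973.onmicrosoft.com | Select-Object DisplayName, PhoneNumber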

Set up an Auto Attendant

Once we have our number set we can set up our Auto Attendant. Open the ‘Voice‘ -> ‘Auto Attendant‘ section of the Admin Center and create a new Auto Attendant.

On the creation page, select:

  • Operator – Who gets called if a caller doesn't want to work through the Auto Attendant. This can be a real person, another Resource Account or a number that is external to Teams
  • Time Zone – This is for mapping opening hours to your local time zone
  • Language – The language the Auto Attendant should use

On the next page, we can set up an initial greeting. This can be a recorded audio file, or the Teams system can read out a typed message.

Next, we choose what our Auto Attendant does with calls. We can disconnect, reroute the call to another Resource Account or play a menu with associated triggers. If we choose to play menu options, we again have the option to record the message or have the system read a written message.

We then assign dial keys to trigger one of the options below; we can also add voice commands for each of the dial keys.

  • Operator – Speak to our chosen operator
  • Person in Organization – Search Directory (We’ll get to this in a later section)
  • Voice App – Another Resource Account linked to another Auto Attendant or Call Queue
  • Voicemail – Leave a Voicemail for an Office 365 Group
  • External Phone Number – A Number External to Teams

For our directory search option, we can allow users to search by name or extension.

Next we set our Business hours and choose what happens when we are outside business hours.

We can then configure what happens during holidays:

We also configure who is discoverable via directory search from this line; this can be a particular group of users, or all users with particular exclusions.

Finally, we link our Resource Account to our Auto Attendant and we're all set. After a little time for changes to propagate, our Resource Account number will route directly to our Auto Attendant flow.
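For reference, the association step can also be scripted. A hedged sketch below, assuming an Auto Attendant named 'Reception' and the resource account created earlier:

##Look up the Auto Attendant and resource account, then associate them (names are examples)
$aa = Get-CsAutoAttendant -NameFilter "Reception"
$ra = Get-CsOnlineApplicationInstance -Identity receptionlineRA@M365X142973.onmicrosoft.com
New-CsOnlineApplicationInstanceAssociation -Identities @($ra.ObjectId) -ConfigurationId $aa.Identity -ConfigurationType AutoAttendant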

Auto Attendants and Call Queues add some great functionality for receiving and routing calls in a lot of basic use cases. While more complex cases, such as call center management, can be fulfilled with Microsoft partner integrations, the default functionality provides a lot of out-of-the-box flexibility.

For more on Auto Attendants, Call Queues and licensing required see the Official Microsoft Documentation.

Conditional Access for Office 365 Apps Goes GA

Conditional Access is one of the first steps any organization should take when protecting user identities in Azure AD. The flexibility available through Conditional Access policies is fantastic for meeting sign-in requirements and, depending on licensing, can even provide some proactive mitigation of breaches using user risk and sign-in risk policies.

Office 365 relies heavily on Azure AD to service authentication for users, and Conditional Access is often a minimum requirement for users to securely access Office 365 services, enforcing protection on the sign-in activity. Previously, it was hard to manage Conditional Access policies that target only Office 365 apps: the platform constantly expands, and newly published apps were not automatically included in existing Conditional Access policies.

Last year, Microsoft made this much easier by including the ‘Office 365’ app in Conditional Access as a preview feature. This meant that the different components of Office 365 no longer had to be included separately.

Previously Applying Blanket Protection to Office 365 Apps Was Cumbersome
With the Office 365 App Available, This is Much Easier

This week marks the official General Availability of the Office 365 app in Conditional Access. There should now be no excuse not to be using this app to provide holistic protection to Office 365 users.
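To illustrate, here is roughly how a policy targeting the Office 365 app can be created with the AzureADPreview module. This is a sketch rather than a recommended baseline: the display name and grant control are examples, and the policy is created in report-only mode so nothing is enforced:

##Requires the AzureADPreview module and an existing Connect-AzureAD session
$conditions = New-Object -TypeName Microsoft.Open.MSGraph.Model.ConditionalAccessConditionSet
$conditions.Applications = New-Object -TypeName Microsoft.Open.MSGraph.Model.ConditionalAccessApplicationCondition
$conditions.Applications.IncludeApplications = "Office365" ##The Office 365 app group
$conditions.Users = New-Object -TypeName Microsoft.Open.MSGraph.Model.ConditionalAccessUserCondition
$conditions.Users.IncludeUsers = "All"
$controls = New-Object -TypeName Microsoft.Open.MSGraph.Model.ConditionalAccessGrantControls
$controls._Operator = "OR"
$controls.BuiltInControls = @("mfa")
New-AzureADMSConditionalAccessPolicy -DisplayName "Require MFA for Office 365" -State "enabledForReportingButNotEnforced" -Conditions $conditions -GrantControls $controls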

For more information on what the Office 365 app in Conditional Access applies to, see the Official Microsoft Documentation.

Customize Microsoft Teams Meeting Invitations

Something I find is missed in a lot of organizations is the branding features available in Microsoft 365. Branding is a really nice thing for users to see when logging in to cloud services, but we also tend not to realize the security benefits of having appropriate branding on our sign-in page. When we brand correctly, it doesn't just look nice, it gives users more confidence that they are in the right place.

We could even potentially prevent users from being phished via generic Microsoft 365 login pages, as a fake page will look different to their normal branded page. It's a small thing, but it could help prevent an attack when all else fails.

Another piece of branding people tend to miss out on is branding Teams meeting requests. This helps make our meeting requests look professional, lets us add legal or support links, and may also help protect against malicious requests going to our partners and customers.

Branding is available in the Teams Admin Portal and it's really easy to set up. Simply add your logos and links to the meeting settings page to update your meeting request branding.

You can then preview the change before deploying.
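If you'd rather script the change, the same invite fields can be set with Set-CsTeamsMeetingConfiguration; the URLs and footer text below are placeholders:

##Placeholder URLs and footer text - swap in your own branding
Set-CsTeamsMeetingConfiguration -Identity Global -LogoURL "https://contoso.com/images/logo.png" -LegalURL "https://contoso.com/legal" -HelpURL "https://contoso.com/support" -CustomFooterText "Contoso Ltd."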

Branding the Teams meeting invites is a small thing and doesn't take a lot of effort, but it can really improve the look of the invites and potentially help prevent phishing.

Microsoft Teams Meeting Recordings Moving to OneDrive and SharePoint

Microsoft Teams recordings are a great feature for when people miss meetings and need to catch up, or when they just need to review the content. We've used them in the past for technical demos and project handovers. Recordings were, until now, saved into Microsoft Stream and available to view in the Stream app. This is changing in the near future.

Going forward, Teams recordings will be saved into OneDrive and SharePoint by default instead. For any Channel meetings, recordings will be stored in the appropriate SharePoint library and for regular, non-channel meetings, they will be saved in the OneDrive of the user who hits the record button.

The current schedule for this rollout as issued by the Microsoft Message Center is:

  • mid-October (October 19, 2020) – You can enable the Teams Meeting policy to have meeting recordings saved to OneDrive and SharePoint instead of Microsoft Stream (Classic)
  • End of October (October 31, 2020) – Meeting recordings in OneDrive and SharePoint will have support for English captions via the Teams transcription feature.
  • Early to mid-November (rolling out between November 1-15, 2020) – All new Teams meeting recordings will be saved to OneDrive and SharePoint unless you delay this change by modifying your organization's Teams Meeting policies and explicitly setting them to "Stream"
  • Q1 2021 – No new meeting recordings can be saved to Microsoft Stream (Classic); all customers will automatically have meeting recordings saved to OneDrive and SharePoint even if they've changed their Teams meeting policies to "Stream"

To delay this change until Q1 2021, you can run the command below in the Skype for Business Online Management Shell (yes, it's still around and very much needed to manage Teams; however, it's now installed as part of the Microsoft Teams PowerShell Module):

Set-CsTeamsMeetingPolicy -Identity Global -RecordingStorageMode "Stream"

The above command will allow you to defer this change into 2021 but will not prevent it entirely.
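Conversely, if you want to opt in ahead of the rollout, the same setting accepts "OneDriveForBusiness" (shown here against the Global policy; per-user policies can be updated the same way):

Set-CsTeamsMeetingPolicy -Identity Global -RecordingStorageMode "OneDriveForBusiness"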

For more information on the benefits of this change – and the limitations after moving, see the below Microsoft Article:

https://docs.microsoft.com/en-us/MicrosoftTeams/tmr-meeting-recording-change

eDiscovery Functionality Moves to Microsoft 365 Compliance Center

eDiscovery and content search have been a staple of Microsoft 365 compliance since the early days of Office 365. Providing extremely flexible and efficient searching and actioning of data that resides anywhere in Microsoft 365, the toolset has improved over time with a lot of extra functionality and is one of the most widely used compliance tools in the Microsoft 365 platform.

eDiscovery, which first appeared in the Exchange Online Admin Center for mail discovery, was subsequently moved to the Microsoft 365 Security & Compliance Center (https://protection.office.com). The Security & Compliance Center itself has undergone a lot of changes recently and is nearing its end of life, being replaced by the Microsoft 365 Security Center (https://security.microsoft.com) and the Microsoft 365 Compliance Center (https://compliance.microsoft.com), which cater to security tools and data governance/compliance tools respectively.

The splitting of the SCC into two different portals makes sense as, a lot of the time in enterprise scenarios, these aspects of the tenancy are managed by two completely separate teams. There will often be a dedicated security team, who deal with the identity protection and security aspects of the tenancy, and a dedicated data protection team, who are more concerned with the information governance side of things.

As of Oct 30th 2020, the eDiscovery suite of tools is moving fully to the Microsoft 365 Compliance Center, and the Security & Compliance Center links will redirect to the new page. This is the next step in the process of moving all the features from the old portal to the new model, so if you haven't checked out the two new portals, see below for more information.

Microsoft 365 Compliance Center: https://docs.microsoft.com/en-us/microsoft-365/compliance/microsoft-365-compliance-center?view=o365-worldwide

Microsoft 365 Security Center: https://docs.microsoft.com/en-us/microsoft-365/security/mtp/overview-security-center?view=o365-worldwide

Direct links:

SCC: https://protection.office.com

MCC: https://compliance.microsoft.com

MSC: https://security.microsoft.com

Using Graph API in PowerShell Example – OneDrive File Structure Report

Due to an issue on a file migration, I recently had a requirement to compare source and destination OneDrive structures. The easiest way I could come up with to do this was to use Graph API to expand the folder structure and export it to CSV. I've always been a big PowerShell user, so that is usually the basis for my Graph scripts.

I decided to share this basic script to help anyone who is trying to figure out how it works. The source for this script can be found on GitHub here.

The below script is intended to illustrate how you can use PowerShell and Graph calls together, not to serve as a production script.

##Author: Sean McAvinue
##Details: Used as a Graph/PowerShell example, 
##          NOT FOR PRODUCTION USE! USE AT YOUR OWN RISK
##          Returns a report of OneDrive file and folder structure to CSV file
function GetGraphToken {
    <#
    .SYNOPSIS
    Azure AD OAuth Application Token for Graph API
    Get OAuth token for an AAD Application (returned as $token)
    
    #>

    # Application (client) ID, tenant ID and secret
    $clientId = ""
    $tenantId = ""
    $clientSecret = ""
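    # NOTE: Fill these in from an Azure AD app registration. To read every user's
    # OneDrive app-only, the registration is assumed to need the Files.Read.All
    # application permission with admin consent granted.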
    
    
    # Construct URI
    $uri = "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token"
     
    # Construct Body
    $body = @{
        client_id     = $clientId
        scope         = "https://graph.microsoft.com/.default"
        client_secret = $clientSecret
        grant_type    = "client_credentials"
    }
     
    # Get OAuth 2.0 Token
    $tokenRequest = Invoke-WebRequest -Method Post -Uri $uri -ContentType "application/x-www-form-urlencoded" -Body $body -UseBasicParsing
     
    # Access Token
    $token = ($tokenRequest.Content | ConvertFrom-Json).access_token
    
    #Returns token
    return $token
}
    

function expandfolders {
    <#
    .SYNOPSIS
    Expands folder structure and sends files to be written and folders to be expanded
  
    .PARAMETER folder
    -Folder is the folder being passed
    
    .PARAMETER FilePath
    -filepath is the current tracked path to the file
    
    .NOTES
    General notes
    #>
    Param(
        [parameter(Mandatory = $true)]
        $folder,
        [parameter(Mandatory = $true)]
        $FilePath

    )
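    ##NOTE: relies on $user and $token being available from the calling scope (both are set in main)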

    write-host retrieved $filePath -ForegroundColor green
    $filepath = ($filepath + '/' + $folder.name)
    write-host $filePath -ForegroundColor yellow
    $apiUri = ('https://graph.microsoft.com/beta/users/' + $user.UserPrincipalName + '/drive/root:' + $FilePath + ':/children')

    $Data = RunQueryandEnumerateResults -ApiUri $apiUri -Token $token

    ##Loop through Root folders
    foreach ($item in $data) {

        ##IF Folder
        if ($item.folder) {

            write-host $item.name is a folder, passing $filePath as path
            expandfolders   -folder $item -filepath $filepath

            
        }##ELSE NOT Folder
        else {

            write-host $item.name is a file
            writeTofile -file $item -filepath $filePath

        }

    }


}
   
function writeTofile {
    <#
    .SYNOPSIS
    Writes files and paths to export file

    
    .PARAMETER File
    -file is the file name found
    
    .PARAMETER FilePath
    -filepath is the current tracked path
    
    #>
    Param(
        [parameter(Mandatory = $true)]
        $File,
        [parameter(Mandatory = $true)]
        $FilePath

    )

    ##Build file object
    $object = [PSCustomObject]@{
        User         = $user.userprincipalname
        FileName     = $File.name
        LastModified = $File.lastModifiedDateTime
        Filepath     = $filepath
    }

    ##Export File Object
    $object | export-csv OneDriveReport.csv -NoClobber -NoTypeInformation -Append

}

function RunQueryandEnumerateResults {
    <#
    .SYNOPSIS
    Runs Graph Query and if there are any additional pages, parses them and appends to a single variable
    
    .PARAMETER apiUri
    -APIURi is the apiUri to be passed
    
    .PARAMETER token
    -token is the auth token
    
    #>
    Param(
        [parameter(Mandatory = $true)]
        [String]
        $apiUri,
        [parameter(Mandatory = $true)]
        $token

    )

    #Run Graph Query
    $Results = (Invoke-RestMethod -Headers @{Authorization = "Bearer $($Token)" } -Uri $apiUri -Method Get)
    #Output Results for debug checking
    #write-host $results

    #Begin populating results
    $ResultsValue = $Results.value

    #If there is a next page, query the next page until there are no more pages and append results to existing set
    if ($results."@odata.nextLink" -ne $null) {
        write-host enumerating pages -ForegroundColor yellow
        $NextPageUri = $results."@odata.nextLink"
        ##While there is a next page, query it and loop, append results
        While ($NextPageUri -ne $null) {
            $NextPageRequest = (Invoke-RestMethod -Headers @{Authorization = "Bearer $($Token)" } -Uri $NextPageURI -Method Get)
            $NxtPageData = $NextPageRequest.Value
            $NextPageUri = $NextPageRequest."@odata.nextLink"
            $ResultsValue = $ResultsValue + $NxtPageData
        }
    }

    ##Return completed results
    return $ResultsValue

    
}


function main {
    <#
    .SYNOPSIS
    Main function, reports on file and folder structure in OneDrive for all imported users

    #>

    ##Get in scope Users from CSV file##
    $Users = import-csv userlist.csv
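    ##userlist.csv is assumed to contain a "UserPrincipalName" column, one user per row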


    #Loop Through Users
    foreach ($User in $Users) {
    
        #Generate Token
        $token = GetGraphToken

        ##Query the root of the user's OneDrive for top-level items
        $apiUri = 'https://graph.microsoft.com/v1.0/users/' + $User.userprincipalname + '/drive/root/children'
        $Data = RunQueryandEnumerateResults -ApiUri $apiUri -Token $token

        ##Loop through Root folders
        ForEach ($item in $data) {

            ##IF Folder, then expand folder
            if ($item.folder) {

                write-host $item.name is a folder
                $filepath = ""
                expandfolders -folder $item -filepath $filepath

                ##ELSE NOT Folder, then it's a file, sent to write output
            }
            else {

                write-host $item.name is a file
                $filepath = ""
                writeTofile -file $item -filepath $filepath

            }

        }

    }

}

##Run the report for each user in userlist.csv
main

Office 365 Outlook Insider Build – ‘Pin Email’ Feature

On my personal laptop, I run the Office Insider build so that I can assess new features before they reach production. One of the cool new features that has been released to Beta recently is the ability to pin an email in Outlook.

If you're anything like me, you can spend hours sifting through emails and adding follow-up actions so you don't lose track of the multiple tasks that have come in by mail. Personally, having zero unread emails in my inbox stopped being an option for me years ago.

The new pin email functionality is a life saver for me, as it essentially "pins" an email to the very top of your inbox. This puts it right in your face until you remove it, forcing you to get back to that demanding co-worker who is just full of questions.

The pin option is available on the ribbon menu, and on the context menu by right-clicking an email and selecting Pin/Unpin.

Once pinned, you'll see the mail at the top of your inbox in Outlook. This also works within individual folders, so it can match your mailbox structure no matter how granular it is.

More information on this feature and other Insider features is available on the Insider website: https://insider.office.com/en-us/blog/pin-important-emails-to-top-of-your-mailbox

SharePoint Syntex – Unlocking The Power Of Your Data with a Document Understanding Model

At Ignite 2019, Microsoft announced an ambitious new addition to the Microsoft 365 platform – Project Cortex. Cortex promised to bring the powerful AI features available in Azure to Microsoft 365, aiming to provide some really powerful automation and data insights. A year on, Microsoft has carried out private previews of Cortex and announced at Ignite 2020 that, rather than one large deployment, Project Cortex will be split into smaller components, the first of which is finally here: SharePoint Syntex.

At a high level, SharePoint Syntex allows organizations to unlock some powerful insights from their data using AI services. Tasks such as applying metadata and classifying/retaining data, which previously had to be done manually (if they were done at all), can now be automated for some really cool results.

To see the power of SharePoint Syntex in action, once the licensing has been added, navigate to Setup -> Organizational Knowledge in the Microsoft 365 Admin Center. Select Content Understanding Setup and configure the libraries you would like to enable for Form Processing. For now, we'll select 'All SharePoint Libraries'. For Document Understanding, we name our Content Center site and finish setup by clicking Activate. Additional Content Centers can then be created from the SharePoint Online Admin Portal.

When our Content Center is built, we open it up to see some of the cool features available to us. To begin content understanding, we work through the following tasks.

First, let’s open the Models page and create a new document understanding model. We’ll create a model to assess Event Management Contracts on one of our document libraries. We’ll also create a new Content Type as part of the model creation.

Now let's add some example files to begin training our new document understanding model. I've downloaded some sample contract files for a fictional event management company as Word documents that we can use to train the model. We'll also upload a file that does not match our classification (Document 6) so the classifier has a negative example.

Now that the training files are uploaded, let’s train our classifier.

We train the classifier by manually classifying the training files we uploaded.

Once we have processed our files, we can either add more (the more data provided, the more accurate the model) or proceed to training the model. For now, we'll proceed.

On the training page, we’ll give the model some understanding of our classification by using Explanations. Explanations provide the model with some reasoning for decisions and enhance the accuracy of predictions.

We add several explanations to our model to help it accurately predict classifications. Here we'll go with some simple currency, date and time templates, as we know all event management contracts will contain all three in some format.

Finally, we test our classifier by uploading some more documents; a mixture of matching and non-matching data should be uploaded.

If our training was sufficient, we should see our content accurately predicted on the test page. If everything looks good, we can Exit Training.

Next, we need to define what we extract from documents that are classified successfully. To do this, we create entity extractors, which essentially become the metadata for our files, extracted directly from the file itself.

For our contract example, let’s extract the following data that we expect for each contract:

  • Client Name
  • Contract Start Date
  • Event Date
  • Event Start Time
  • Event Finish Time
  • Total Fee
  • Deposit

To extract this information, we create an extractor for each item and identify the relevant piece of information in each of our training documents.

After we've labelled at least five examples, we can move on to training as before, entering templates or starting from a blank context to add explanations. This is quite a basic example, but the more data given to the model, the more accurate it will be across different data sets and document structures.

When all of our extractors are in place, we train the model once more. We will see that all of the explanations we added for our extractors are also added to the model to help with identification of data.

Finally, with all the setup done, we can apply the model to our library to see it in action!

We'll apply the model to our Global Sales site, on the Event Management Contracts library. When this is applied, a new content type is created for our documents and a new view of the library is created, including columns for our extractors.

Our new view is now in place, so it's time to test all our work and upload documents. On first upload, we will see that analysis is taking place. After a minute or two, we can refresh the page and see our data automatically assessed, with our extractors pulling the valuable information out of the documents!

When classification has finished, we see all our hard work paying off: data is automatically classified and extracted from our documents!

SharePoint Syntex, when set up correctly, can help save organizations both time and money by giving insights into data automatically, cutting down on manual processing and making the document management process much more efficient.

As the first component of Project Cortex to see release, this is already a massive step for Microsoft 365 and is no doubt the first in a long line of exciting tools available in the platform.

For more information on SharePoint Syntex: https://docs.microsoft.com/en-us/microsoft-365/contentunderstanding/?view=o365-worldwide

For more on Project Cortex: https://resources.techcommunity.microsoft.com/project-cortex-microsoft-365/

Sample Contract Files for this blog post were obtained from http://www.hloom.com/

Quick and Easy Exchange Online Mailbox Permissions Report

Over the years, I've built up a library of handy PowerShell scripts, which I am now reviewing to add functionality and see where I can improve them. While doing that, I thought I'd share some of them.

I've updated this script to use the new, faster, Graph-based EXO cmdlets. The script returns a list of all mailboxes and the permissions assigned to each. It gives a nice tidy email reference for both the mailbox and the delegate, and a summary of all permissions assigned.

It’s a pretty simple PowerShell script but comes in handy more than you’d think as a quick reference when assessing permissions, particularly during a migration.
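The script assumes the ExchangeOnlineManagement (EXO V2) module is installed and that you are already connected; if not, run something like the below first:

#Import the EXO V2 module and connect (assumes the module is installed)
Import-Module ExchangeOnlineManagement
Connect-ExchangeOnline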

#Retrieve all mailboxes
$mailboxes = Get-EXOMailbox -ResultSize Unlimited
#Loop through mailboxes
foreach ($mailbox in $mailboxes) {

    #Get Mailbox Permissions
    $Permissions = Get-EXOMailboxPermission $mailbox.PrimarySmtpAddress
    #Loop through permission set
    foreach ($permission in $Permissions) {
        #Build Object
        $object = [pscustomobject]@{
            'UserEmail'     = $permission.User
            'MailboxEmail'  = $mailbox.PrimarySmtpAddress
            'PermissionSet' = [String]$permission.AccessRights
        }

        #Export Details
        $object | Export-Csv MailboxPermissions.csv -NoClobber -NoTypeInformation -Append
        #Remove Object
        Remove-Variable object
    }
}
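One optional refinement, not part of the original script: most mailboxes carry a default 'NT AUTHORITY\SELF' entry, so if you only want real delegates in the report you could filter it out when gathering permissions:

$Permissions = Get-EXOMailboxPermission $mailbox.PrimarySmtpAddress | Where-Object { $_.User -ne "NT AUTHORITY\SELF" }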

Project Oakdale (Preview) – Bringing the Power Platform into Microsoft Teams

With the massive rise in Microsoft Teams usage in the past year, Microsoft are really investing in making it the market leader for productivity. New features were deployed rapidly to help organizations deal with enforced remote-working scenarios, and they are still being released at an amazing pace.

One of the more exciting features announced earlier this year, and currently in Preview, is Project Oakdale. Project Oakdale is a cool-sounding name and all, but what actually is it? Well, it essentially brings the power of the Power Platform directly into Microsoft Teams! It does this by facilitating, within the Teams interface, the building of extremely flexible low/no-code solutions leveraging the Common Data Service, allowing for the use of relational data storage in our Teams apps.

Let’s take a look at how we can put all of this into use. First, from our Teams Client, we add and open the PowerApps Teams app. Once open, we can create an app in a Team as below. There are also some premade apps and lots of learning material to go through if needed.

We can choose which Team to add a PowerApp to:

Once added, if this is the first app in the Team, it will take a moment to set up the back-end environment to host our app.

When our environment is ready, we can start building our app. The first thing we will see is a familiar new, empty app, so let's start by creating a table to store our data by clicking the "Create new table" button on the left pane.

I want to create an app that tracks internal training courses that the members of the Teams apply for so let’s create a table called “Training Course” and add a plural name of “Training Courses”.

Now that we have our table created, let's populate it with some training courses available to our users, adding the details that we want to store for each. We'll add in Course ID as an autonumber field to use as a unique identifier so we can associate applications with courses. We'll also add in a few courses that are available.

For this app I’m going to add a second table to store user applications to training.

Next, we’ll set up a relationship between the two tables. Navigate to the PowerApps tile, open the “Build” tab and select “See all”.

Now select "Tables", open the "Relationships" tab of the 'Application' table, and add the CourseID attribute that we created on the "Training Courses" table in a Many-to-One relationship with the "Training Course" table. Don't forget to 'save table' after adding.

Now let's add a frontend for our users, as we would in PowerApps, and finish out the app. I won't go through creating the app in detail, as that's a discussion for another day.

When the frontend and any logic are ready, we can publish the app to the Team easily by clicking the 'Publish to Team' button. This will allow all members of the Team, including guests (subject to licensing), to see and use the app!

We pick a channel to add the app to and hit Save and we’re done.

Now when our users access the Team, they’ll see the app we have published and can use the app in the Teams context without being granted specific permissions.

While this is an extremely basic app, the advanced features of the Power Platform including Flows and chatbots are available to publish to a Team in the same manner.

This is a massive step for low/no-code app development and for Teams itself. For more information on Project Oakdale and what it can do, see the below links.

https://powerapps.microsoft.com/en-us/blog/introducing-project-oakdale-a-new-low-code-data-platform-for-microsoft-teams/

https://docs.microsoft.com/en-us/powerapps/teams/overview-data-platform