eDiscovery Functionality Moves to Microsoft 365 Compliance Center

eDiscovery and content search have been staples of Microsoft 365 compliance since the early days of Office 365. Providing extremely flexible and efficient searching and actioning of data that resides anywhere in Microsoft 365, the toolset has gained a lot of extra functionality over time and is one of the most widely used compliance tools in the Microsoft 365 platform.

eDiscovery, which was first found in the Exchange Online Admin Center for mail discovery, was subsequently moved to the Microsoft 365 Security & Compliance Center (https://protection.office.com). The Security & Compliance Center itself has undergone a lot of changes recently and is nearing its end of life, being replaced by the Microsoft 365 Security Center (https://security.microsoft.com) and the Microsoft 365 Compliance Center (https://compliance.microsoft.com), which cater to security tools and data governance/compliance tools respectively.

The splitting of the SCC into two different portals makes sense, as these aspects of the tenancy are often managed by two completely separate teams in enterprise scenarios. There will often be a dedicated security team, who deal with the identity protection and security aspects of the tenancy, and a dedicated data protection team, who are more concerned with the information governance side of things.

As of October 30th 2020, the eDiscovery suite of tools is moving fully to the Microsoft 365 Compliance Center, and the Security & Compliance Center links will redirect to the new page. This is the next step in the process of moving all the features from the old portal to the new model, so if you haven't checked out the two new portals, see below for more information.

Microsoft 365 Compliance Center: https://docs.microsoft.com/en-us/microsoft-365/compliance/microsoft-365-compliance-center?view=o365-worldwide

Microsoft 365 Security Center: https://docs.microsoft.com/en-us/microsoft-365/security/mtp/overview-security-center?view=o365-worldwide

Direct links:

SCC: https://protection.office.com

MCC: https://compliance.microsoft.com

MSC: https://security.microsoft.com

Using Graph API in PowerShell Example – OneDrive File Structure Report

Due to an issue on a file migration, I recently had a requirement to compare source and destination OneDrive structures. The easiest way I could come up with to do this was to use Graph API to expand the folder structure and export it to CSV. I've always been a big PowerShell user, so that is usually the basis for my Graph scripts.

I decided to share this basic script to help anyone who is trying to figure out how it works. The source for this script can be found on GitHub here.

The below script is intended to illustrate how you can use PowerShell and Graph calls together, not to serve as a production script.

##Author: Sean McAvinue
##Details: Used as a Graph/PowerShell example, 
##          NOT FOR PRODUCTION USE! USE AT YOUR OWN RISK
##          Returns a report of OneDrive file and folder structure to CSV file
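##          Note: assumes an Azure AD app registration with an application
##          permission that can read all users' drives (e.g. Files.Read.All),
##          granted admin consent, as the token below is acquired app-only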
function GetGraphToken {
    <#
    .SYNOPSIS
    Azure AD OAuth Application Token for Graph API
    Get OAuth token for a AAD Application (returned as $token)
    
    #>

    # Application (client) ID, tenant ID and secret
    $clientId = ""
    $tenantId = ""
    $clientSecret = ""
    
    
    # Construct URI
    $uri = "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token"
     
    # Construct Body
    $body = @{
        client_id     = $clientId
        scope         = "https://graph.microsoft.com/.default"
        client_secret = $clientSecret
        grant_type    = "client_credentials"
    }
     
    # Get OAuth 2.0 Token
    $tokenRequest = Invoke-WebRequest -Method Post -Uri $uri -ContentType "application/x-www-form-urlencoded" -Body $body -UseBasicParsing
     
    # Access Token
    $token = ($tokenRequest.Content | ConvertFrom-Json).access_token
    
    #Returns token
    return $token
}
    

function expandfolders {
    <#
    .SYNOPSIS
    Expands folder structure and sends files to be written and folders to be expanded
  
    .PARAMETER folder
    -Folder is the folder being passed
    
    .PARAMETER FilePath
    -filepath is the current tracked path to the file
    
    .NOTES
    General notes
    #>
    Param(
        [parameter(Mandatory = $true)]
        $folder,
        [parameter(Mandatory = $true)]
        $FilePath

    )

    write-host "retrieved $filePath" -ForegroundColor green
    $filepath = ($filepath + '/' + $folder.name)
    write-host $filePath -ForegroundColor yellow
    ##Note: relies on $user and $token being set in the calling script scope
    $apiUri = ('https://graph.microsoft.com/beta/users/' + $user.UserPrincipalName + '/drive/root:' + $FilePath + ':/children')

    $Data = RunQueryandEnumerateResults -ApiUri $apiUri -Token $token

    ##Loop through Root folders
    foreach ($item in $data) {

        ##IF Folder
        if ($item.folder) {

            write-host "$($item.name) is a folder, passing $filepath as path"
            expandfolders -folder $item -filepath $filepath

            
        }##ELSE NOT Folder
        else {

            write-host "$($item.name) is a file"
            writeTofile -file $item -filepath $filePath

        }

    }


}
   
function writeTofile {
    <#
    .SYNOPSIS
    Writes files and paths to export file

    
    .PARAMETER File
    -file is the file name found
    
    .PARAMETER FilePath
    -filepath is the current tracked path
    
    #>
    Param(
        [parameter(Mandatory = $true)]
        $File,
        [parameter(Mandatory = $true)]
        $FilePath

    )

    ##Build file object
    $object = [PSCustomObject]@{
        User         = $user.userprincipalname
        FileName     = $File.name
        LastModified = $File.lastModifiedDateTime
        Filepath     = $filepath
    }

    ##Export File Object
    $object | export-csv OneDriveReport.csv -NoClobber -NoTypeInformation -Append

}

function RunQueryandEnumerateResults {
    <#
    .SYNOPSIS
    Runs Graph Query and if there are any additional pages, parses them and appends to a single variable
    
    .PARAMETER apiUri
    -APIURi is the apiUri to be passed
    
    .PARAMETER token
    -token is the auth token
    
    #>
    Param(
        [parameter(Mandatory = $true)]
        [String]
        $apiUri,
        [parameter(Mandatory = $true)]
        $token

    )

    #Run Graph Query (note: no throttling/retry handling here, so keep runs small)
    $Results = (Invoke-RestMethod -Headers @{Authorization = "Bearer $($Token)" } -Uri $apiUri -Method Get)
    #Output Results for debug checking
    #write-host $results

    #Begin populating results
    $ResultsValue = $Results.value

    #If there is a next page, query each page until there are no more, appending results to the existing set
    if ($null -ne $Results."@odata.nextLink") {
        write-host "enumerating pages" -ForegroundColor yellow
        $NextPageUri = $Results."@odata.nextLink"
        ##While there is a next page, query it, append the results and loop
        While ($null -ne $NextPageUri) {
            $NextPageRequest = (Invoke-RestMethod -Headers @{Authorization = "Bearer $($Token)" } -Uri $NextPageUri -Method Get)
            $NextPageData = $NextPageRequest.Value
            $NextPageUri = $NextPageRequest."@odata.nextLink"
            $ResultsValue = $ResultsValue + $NextPageData
        }
    }

    ##Return completed results
    return $ResultsValue

    
}


function main {
    <#
    .SYNOPSIS
    Main function, reports on file and folder structure in OneDrive for all imported users

    #>

    ##Get in scope Users from CSV file##
    $Users = import-csv userlist.csv


    #Loop Through Users
    foreach ($User in $Users) {
    
        #Generate Token
        $token = GetGraphToken

        ##Query the root of the user's OneDrive for its top-level items
        $apiUri = 'https://graph.microsoft.com/v1.0/users/' + $User.userprincipalname + '/drive/root/children'
        $Data = RunQueryandEnumerateResults -ApiUri $apiUri -Token $token

        ##Loop through Root folders
        ForEach ($item in $data) {

            ##IF Folder, then expand folder
            if ($item.folder) {

                write-host "$($item.name) is a folder"
                $filepath = ""
                expandfolders -folder $item -filepath $filepath

                ##ELSE NOT Folder, then it's a file, sent to write output
            }
            else {

                write-host "$($item.name) is a file"
                $filepath = ""
                writeTofile -file $item -filepath $filepath

            }

        }


    
    }

}
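To run the report, the script expects a userlist.csv in the working directory containing a userprincipalname column (the only property the script reads), and the main function must be called once the functions are loaded. A minimal sketch, with illustrative file contents:

##userlist.csv (illustrative contents):
##userprincipalname
##adele.vance@contoso.com
##patti.fernandez@contoso.com

##Kick off the report
main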

Office 365 Outlook Insider Build – ‘Pin Email’ Feature

On my personal laptop, I run the Office Insider Build so that I can assess new features before they reach production. One of the cool new features released to Beta recently is the ability to pin an email in Outlook.

If you’re anything like me you can spend hours sifting through emails and trying to add follow up actions so you don’t lose track of multiple tasks that have come in by mail. Personally, having zero unread emails in my inbox stopped being an option for me years ago.

The new pin email functionality is a life saver for me, as it essentially "pins" an email to the very top of your inbox. This puts it right in your face until you remove it, forcing you to get back to that demanding co-worker who is just full of questions.

The pin option is available on the ribbon menu and on the context menu, by right-clicking an email and selecting Pin/Unpin.

Once pinned, you'll see the mail at the top of your inbox in Outlook. This also works within individual folders, so it can match your mailbox structure no matter how granular it is.

More information on this feature and other Insider features is available on the Insider website: https://insider.office.com/en-us/blog/pin-important-emails-to-top-of-your-mailbox

SharePoint Syntex – Unlocking The Power Of Your Data with a Document Understanding Model

At Ignite 2019, Microsoft announced an ambitious new addition to the Microsoft 365 platform – Project Cortex. Cortex promised to bring the powerful AI features available in Azure to Microsoft 365, aiming to provide some really powerful automation and data insights. A year on, Microsoft have carried out private previews of Cortex and announced at Ignite 2020 that, rather than one large deployment, Project Cortex will be split into smaller components, the first of which is finally here: SharePoint Syntex.

At a high level, SharePoint Syntex allows organizations to unlock some powerful insights from their data using AI services. Tasks such as applying metadata and classifying/retaining data that previously had to be done manually (if they were done at all) can now be automated for some really cool results.

To see the power of SharePoint Syntex in action, once the licensing has been added, navigate to Setup -> Organizational Knowledge in the Microsoft 365 Admin Center. Select Content Understanding Setup and configure the libraries you would like to enable for Form Processing; for now, we'll select 'All SharePoint Libraries'. For Document Understanding, we name our Content Center site and finish setup by clicking Activate. Additional Content Centers can then be created from the SharePoint Online Admin Portal.

When our Content Center is built, we open it up to see some of the cool features available to us. The tasks we need to complete to begin content understanding are described below.

First, let’s open the Models page and create a new document understanding model. We’ll create a model to assess Event Management Contracts on one of our document libraries. We’ll also create a new Content Type as part of the model creation.

Now let’s add some example files to begin training our new document understanding model. I’ve downloaded some sample contract files for a fictional event management company as Word documents that we can use to train the model. We’ll also upload a file that does not match our classification (Document 6) so we have a negative classifier.

Now that the training files are uploaded, let’s train our classifier.

We train the classifier by manually classifying the training files we uploaded.

Once we have processed our files, we can either add more (the more data provided, the more accurate the model) or proceed to training the model. For now, we'll proceed.

On the training page, we’ll give the model some understanding of our classification by using Explanations. Explanations provide the model with some reasoning for decisions and enhance the accuracy of predictions.

We add several explanations to our model to help it accurately predict classifications. Here we'll go with some simple currency, date and time templates, as we know all event management contracts will contain all three in some format.

Finally, we test our classifier by uploading some more documents; a mixture of matching and non-matching data should be uploaded.

If our training was sufficient, we should see our content accurately predicted on the test page. If everything looks good, we can Exit Training.

Next, we need to define what we extract from the documents that are classified successfully. To do this, we create entity extractors, which essentially become the metadata for our files, extracted directly from the file itself.

For our contract example, let’s extract the following data that we expect for each contract:

  • Client Name
  • Contract Start Date
  • Event Date
  • Event Start Time
  • Event Finish Time
  • Total Fee
  • Deposit

To extract this information we will create extractors for each. We create our extractors and identify the relevant piece of information in each of our training documents.

After we've labelled at least five examples, we can move on to training as before. Enter templates or start from a blank context to add explanations. This is quite a basic example, but the more data given to the model, the more accurate it will be across different data sets and document structures.

When all of our extractors are in place, we train the model once more. We will see that all of the explanations we added for our extractors are also added to the model to help with the identification of data.

Finally, with all the setup done, we can apply the model to our library to see it in action!

We'll apply the model to our Global Sales site, on the Event Management Contracts library. When this is applied, a new content type is created for our documents, along with a new view of the library that includes our extractors.

Our new view is now in place, so it's time to test all our work and upload documents. When we first upload, we will see that analysis is taking place. After a minute or two, we can refresh the page and see our data automatically assessed and our extractors pulling the valuable information out of the document!

When classification has finished, we see all our hard work paying off: data is automatically classified and extracted from our documents!

SharePoint Syntex, when set up correctly, can help save organizations both time and money by providing insights into data automatically, cutting down on manual processing and making the document management process much more efficient.

As the first component of Project Cortex to see release, this is already a massive step for Microsoft 365 and is no doubt the first in a long line of exciting tools available in the platform.

For more information on SharePoint Syntex: https://docs.microsoft.com/en-us/microsoft-365/contentunderstanding/?view=o365-worldwide

For more on Project Cortex: https://resources.techcommunity.microsoft.com/project-cortex-microsoft-365/

Sample Contract Files for this blog post were obtained from http://www.hloom.com/

Quick and Easy Exchange Online Mailbox Permissions Report

Over the years, I've built up a library of handy PowerShell scripts, which I'm now reviewing to add extra functionality and see where I can improve them. While doing that, I thought I'd share some of them.

I've updated this script to use the new, faster, Graph-based EXO cmdlets. The script returns a list of all mailboxes and the permissions assigned to each. It gives a nice tidy email reference for both the mailbox and the delegate, plus a summary of all permissions assigned.

It’s a pretty simple PowerShell script but comes in handy more than you’d think as a quick reference when assessing permissions, particularly during a migration.

#Requires the ExchangeOnlineManagement module - connect first if not already connected
#Connect-ExchangeOnline

#Retrieve all mailboxes
$mailboxes = Get-EXOMailbox -ResultSize unlimited
#Loop through mailboxes
foreach ($mailbox in $mailboxes) {
    
    #Get Mailbox Permissions
    $Permissions = Get-EXOMailboxPermission $mailbox.primarysmtpaddress
    #Loop through permission set
    foreach ($permission in $permissions) {
        #Build object
        $object = [pscustomobject]@{
            'UserEmail'     = $permission.user
            'MailboxEmail'  = $mailbox.primarysmtpaddress
            'PermissionSet' = [String]$permission.accessrights
        }

        #Export details
        $object | export-csv MailboxPermissions.csv -NoClobber -NoTypeInformation -Append
        #Remove object
        Remove-Variable object
    }
}
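One optional tweak: Get-EXOMailboxPermission also returns each mailbox's own NT AUTHORITY\SELF entry, which is usually just noise in a delegate report. A variant of the permissions line that filters that out, along with any inherited entries:

#Optional: skip the mailbox's own SELF entry and any inherited permissions
$Permissions = Get-EXOMailboxPermission $mailbox.primarysmtpaddress | Where-Object { $_.User -ne "NT AUTHORITY\SELF" -and $_.IsInherited -eq $false }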

Project Oakdale (Preview) – Bringing the Power Platform into Microsoft Teams

With the massive rise in Microsoft Teams usage in the past year, Microsoft are really investing in making it the market leader for productivity. New features were deployed rapidly to help organizations deal with enforced remote working scenarios and they are still being released at an amazing pace.

One of the more exciting features announced earlier this year, and currently in Preview, is Project Oakdale. Project Oakdale is a cool-sounding name and all, but what actually is it? Well, it essentially brings the power of the Power Platform directly into Microsoft Teams! It does this by facilitating, within the Teams interface, the building of extremely flexible low/no-code solutions leveraging the Common Data Service, allowing for the use of relational data storage in our Teams apps.

Let's take a look at how we can put all of this to use. First, from our Teams client, we add and open the PowerApps Teams app. Once open, we can create an app in a Team as below. There are also some premade apps and lots of learning material to go through if needed.

We can choose which Team to add a PowerApp to:

Once added, if this is the first app in the Team, it will take a moment to set up the back-end environment that hosts our app.

When our environment is ready, we can start building our app. The first thing we will see is a familiar new, empty app, so let's start by creating a table to store our data. Do this by clicking the "Create new table" button on the left pane.

I want to create an app that tracks internal training courses that the members of the Teams apply for so let’s create a table called “Training Course” and add a plural name of “Training Courses”.

Now that we have our table created, let's populate it with some training courses available to our users, adding the details that we want to store for each. Let's add Course ID as an auto number to use as a unique identifier, so we can associate applications with courses. We'll also add in a few courses that are available.

For this app I’m going to add a second table to store user applications to training.

Next, we’ll set up a relationship between the two tables. Navigate to the PowerApps tile, open the “Build” tab and select “See all”.

Now select "Tables" and view the "Relationships" tab of the 'Application' table, where we can add the CourseID attribute that we created on the "Training Courses" table in a Many-to-One relationship with the "Training Course" table. Don't forget to 'save table' after adding.

Now let's add a frontend for our users, as we would in PowerApps, and finish out the app. I won't go through creating the app in detail, as that's another discussion.

When the frontend and any logic are in place, we can publish the app to the Team easily by clicking the 'Publish to Team' button. This will allow all members of the Team, including guests (subject to licensing), to see and use the app!

We pick a channel to add the app to and hit Save and we’re done.

Now when our users access the Team, they’ll see the app we have published and can use the app in the Teams context without being granted specific permissions.

While this is an extremely basic app, the more advanced features of the Power Platform, including Flows and chatbots, can be published to a Team in the same manner.

This is a massive step for low/no-code app development and for Teams itself. For more information on Project Oakdale and what it can do, see the below links.

https://powerapps.microsoft.com/en-us/blog/introducing-project-oakdale-a-new-low-code-data-platform-for-microsoft-teams/

https://docs.microsoft.com/en-us/powerapps/teams/overview-data-platform

Protecting Office 365 Groups and Microsoft Teams with Sensitivity Labels (Preview)

I often end up in one of two conversations around Microsoft Teams governance with customers, the “Users can manage them themselves so we don’t need to worry” group, and the “Nobody gets a Team unless we follow this 20 step approval process and our service desk needs to set them up and lock them down” group.

Both options have their merits, but also their pitfalls. If we let everyone create teams we end up with sprawl and have no idea where our data is stored (why are there six “HR Teams”, and which one contains the right data?). On the other hand, if we don’t let our users use Teams without jumping through hoops while saying the alphabet backwards, in Latin, then we are preventing people from using some of the most powerful collaboration features available to them.

We usually end up finding a good middle ground in these discussions that leverages automation and some of the cool Information Protection features of Microsoft 365. My opinion on the Teams provisioning process has been the same as it was for SharePoint sites: "I don't care if we have ten thousand Teams; as long as they are named and protected correctly, the number doesn't matter".

This opinion was idealistic in the early days of Microsoft Teams, as the governance features just weren't where I wanted them to be. In the past year, Microsoft have taken strides in the features available, and I'm pretty happy with what's there (albeit with a few features I'd still like). I might even do a follow-up post where I can rant about my Teams security and governance opinions down the line.

One feature that has made my life much easier since it was made available (in Preview) is the ability to apply Sensitivity Labels to Office 365 Groups/Teams/Sites. This feature allows us to define Sensitivity Labels which, when applied to a Group, can control the privacy level of the Group, the level of functionality available to unmanaged devices, and even the external access configuration.

During some early Teams projects, I had automated scripts which changed these group settings in Azure AD and the SharePoint sharing settings at the site level at provisioning time. That was a nightmare, as it had to be maintained and the knowledge transferred to the incumbent IT teams.

When this feature is enabled, we just need to specify the settings in our sensitivity label and it’s all taken care of for us.

The site and group settings tab

To enable this feature for your tenant now, connect to the Azure AD Preview PowerShell Module and run the below to update the Directory Setting and enable MIP Labelling in Office 365 Groups.

##Copy the current settings to the $Setting variable
$Setting = Get-AzureADDirectorySetting -Id (Get-AzureADDirectorySetting | where -Property DisplayName -Value "Group.Unified" -EQ).id

##Change the EnableMIPLabels setting
$Setting["EnableMIPLabels"] = "True"

##Write back changes to directory
Set-AzureADDirectorySetting -Id $Setting.Id -DirectorySetting $Setting
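
To sanity-check that the change took effect, you can re-read the directory setting and inspect the value, along these lines:

##Verify the new value of EnableMIPLabels
$Setting = Get-AzureADDirectorySetting -Id $Setting.Id
$Setting.Values | where -Property Name -Value "EnableMIPLabels" -EQ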

Next, connect to the Security & Compliance PowerShell Module and run the below to start the synchronization process between MIP and AAD.

Execute-AzureAdLabelSync

After a little time to replicate, you will be able to see the above page when configuring a new sensitivity label, and can then apply labels to Teams/Groups/Sites. Once the labels are deployed (usually 24 hours after creation), you'll be able to apply them at provisioning time!

Using Sensitivity Labels as Conditions in DLP Policies (Preview)

Sensitivity Labels have quickly become a core component of any enterprise-level Microsoft 365 tenancy. Information governance is becoming more and more relevant in the cloud space, where we potentially have an enormous amount of disparate, unstructured data in various Teams, SharePoint sites, OneDrive etc.

Depending on the license SKU, Sensitivity Labels can allow manual or automated protection of data – at an item level – based on predefined or even machine-learning-modelled classifications (Trainable Classifiers). We can even classify Office 365 Groups/Microsoft Teams for some cool features.

Another piece of the information governance puzzle is Data Loss Prevention (DLP). DLP allows us to prevent data leaving the organization via file sharing and/or email. We typically define DLP policies to protect data based on contents such as keywords or particular patterns.

As of this month, we get access (in Preview) to the ability to use Sensitivity Labels as criteria in DLP policies! This not only prevents doubling up of work when aligning sensitivity and DLP policies, but allows us to protect data that users classify themselves, or even data classified by our Trainable Classifiers.

In the below example, we are creating a custom DLP policy which we will configure to detect a particular sensitivity label added to our content.

We exclude Teams chat and channel messages from the policy, as sensitivity labels can't be applied to them on their own, so they are not supported for this policy. (Teams file sharing comes from SharePoint, so we're still covered there.)

And that's it, we've protected our labelled content with DLP! All that's left is for us to assign our actions and we're done!
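
For those who prefer scripting, the same condition can be configured through Security & Compliance PowerShell. The below is a minimal sketch rather than the exact policy from the walkthrough above; the policy name, rule name and the "Confidential" label name are placeholders:

##Illustrative only - policy, rule and label names are placeholders
New-DlpCompliancePolicy -Name "Protect Labelled Content" -ExchangeLocation All -SharePointLocation All -OneDriveLocation All -Mode TestWithoutNotifications

New-DlpComplianceRule -Name "Block Labelled Content" -Policy "Protect Labelled Content" -BlockAccess $true `
    -ContentContainsSensitiveInformation @{operator = "And"; groups = @(@{operator = "Or"; name = "Default"; labels = @(@{name = "Confidential"; type = "Sensitivity"})})}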

This is a really nice addition to the Microsoft 365 Compliance feature set and helps tie together some really powerful features, giving admins better control over data egress. If you haven't already, it's definitely worthwhile checking out some of the other Information Protection features available in Microsoft 365.

Microsoft 365 Compliance Manager goes GA!

On the 22nd of September, Microsoft 365 Compliance Manager reached General Availability. Available in the Compliance Portal, Compliance Manager helps organizations using Microsoft 365 gain control over their compliance and risk state. In this post, we'll look at some of the features available in Compliance Manager.

The first thing you'll notice when opening the tool is the Compliance Score rating. This rating, which is based on a range of both Microsoft and organizational controls, gives a quick metric of compliance performance in the organization.

Like the traditional Secure Score, this score is relative to Microsoft 365 and may not reflect the overall security posture of the organization. For example, you can gain 27 points by implementing a spam filter; however, Microsoft's tools will not be aware of a third-party spam filter that sits in front of Exchange Online. Note: automated tests are not available for all actions, so it's important to review them manually. We can enable automated testing on all or a subset of actions in the Compliance Manager settings.

When we open a particular action, we can see the associated regulatory requirements, details on how to go about implementing the action, and any supporting documentation and notes uploaded by the action's owner.

From the action itself, we can edit the status to assign someone to update it, or add the detail ourselves.

Once updated, if completed, this will adjust our Compliance Score appropriately.

From the Solutions page in Compliance Manager, we can see how the different tools can help us address our open actions. We can even open the relevant tool directly from this page, allowing quick navigation and resolution.

Finally, we can create assessments based on existing or custom templates, such as the EU GDPR assessment, to group controls and actions to reach compliance with particular standards.

We can track progress against these assessments separately from our overall Compliance Score and also view any associated controls/actions.

Compliance Manager brings some really powerful functionality to Microsoft 365 and, as more automated tests become available, will help to simplify what can be a tedious process of working towards compliance with various standards.

While in Compliance Manager, it's also worth looking at some of the other great tools in the Microsoft 365 Compliance Portal, as it gains more and more functionality and separates itself from the legacy Security & Compliance Portal.

Easily Enhance Microsoft Search with Organizational Data

Microsoft Search, particularly when effort is put into building metadata and organizational information, is an extremely useful tool that doesn't always get the credit it deserves (probably partially stemming from the consumer experience with Bing over the years).

Organizations can leverage Microsoft Search to help their users locate information extremely quickly and efficiently, provided the right change management and training plans have been put in place to help change user behavior.

Microsoft have recently made available a host of cool admin tools to assist with configuring Microsoft Search to give users the best experience. Features like the Graph Connectors for Microsoft Search unlock a host of possibilities for extending Microsoft Search beyond Microsoft 365 itself.

An even easier method of enriching the search experience is to add some organizational data in the Microsoft 365 Admin Portal. In the below screenshots, we have added a few options to the Microsoft Search config:

An Acronym:

A bookmark:

A Q&A entry:

We can even enhance some of our options by linking to PowerApps or external sites, and by scoping the results to specific user groups.

Our users can then search within Office 365 and find our predefined results:

That's really cool: our users can search in the Office Portal, but it could probably be fancier. This is where Edge (Chromium) comes into play. The built-in Microsoft Search functionality allows these results to surface as part of a regular web search:

Once the user is signed into Edge with their corporate account, they will see the "Work" option in their search results. The search functionality can be used to search our entire organization's Microsoft 365 data, including any Graph connectors to external systems, and can even be triggered from a Windows 10 machine's local search bar. This is an extremely powerful tool, letting users navigate very quickly and find the answers they need.
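
The same Microsoft Search index is also exposed programmatically through the Graph search API, so results like these can be pulled into your own tooling. A minimal sketch, assuming a delegated-permission Graph token in $token (the endpoint doesn't support app-only tokens) and using "contract" as an example query string:

##Query Microsoft Search via Graph for files matching "contract" (illustrative)
$body = @{
    requests = @(
        @{
            entityTypes = @("driveItem")
            query       = @{ queryString = "contract" }
        }
    )
} | ConvertTo-Json -Depth 5

$results = Invoke-RestMethod -Method Post -Uri "https://graph.microsoft.com/v1.0/search/query" -Headers @{Authorization = "Bearer $token" } -ContentType "application/json" -Body $body
##List the names of the matching files
$results.value.hitsContainers.hits.resource.name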