Category Archives: Microsoft

Microsoft Azure Operational Insights Preview Series – General Availability (Part 18)

Previously on Microsoft Azure Operational Insights Preview Series:

This will be the last blog post in this series, simply because I’ve received an e-mail announcing that Azure Operational Insights will become generally available on May 4th. This date matches my prediction that it would happen around the Microsoft Ignite conference. I will still continue to write blog posts on the service, but they will no longer be part of a series. Here is the official Azure e-mail we all received about the service:

image
I hope this series has been helpful to you.

Microsoft Azure Operational Insights Preview Series – Security and Audit (Part 17)

Previously on Microsoft Azure Operational Insights Preview Series:

Security and Audit Intelligence Pack is probably the most powerful IP of all. It gathers a lot of logs: basically every security log on every machine you are monitoring with Operational Insights. If you have tried doing that with SCOM Audit Collection Services in the past, you know it is not an easy job. Azure Operational Insights solves that problem: you just enable the IP, and the OpInsights team takes care of supporting the infrastructure for all this data and of updating the IP itself. You just consume the end result and do your analysis based on the data.

Before enabling this IP, keep in mind that it uploads a lot of data. For 57 machines, where we have 1 SMB storage server, 4 Hyper-V servers, and the rest are virtual machines, we have seen up to 44 GB of uploaded data per day. As this information is based on the preview, always look for the latest data on this topic.

You can find the Security and Audit IP in the Gallery:

image

Until data has been gathered you will not be able to click on the tile and dive deeper:

image

After several hours you will see data:

image

Clicking on the tile opens a lot of information:

image

You will notice that this data is scoped to the last day. The reason for this is to show you day-to-day trends.

I will not focus on every single tile you see in this dashboard, because clicking on each one leads you to a search query. Those predefined queries are there to help you explore, so you can build your own queries that make sense in your environment.

We can find a KB article with some security Event IDs and search by them:

http://support.microsoft.com/kb/977519

Let’s say I choose event 4720, which should show me which user accounts have been created:

Type=SecurityEvent  EventID=4720 | Select TargetUserName,UserPrincipalName,TargetSid,TargetDomainName,SubjectAccount

It is a very simple query that I can execute, and I can scope it, for example, to the last 7 days:

image

Although that query is very simple, it gives me powerful results, as I search across every domain controller I am monitoring. Imagine situations where I have two separate environments but want to see results from both of them in one place. That is the power of Operational Insights.
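Similarly, event 4726 from the same KB article indicates that a user account was deleted, so a variation of the query above (the field list mirrors the previous query and may need adjusting for your environment) shows account deletions across all monitored domain controllers:

Type=SecurityEvent  EventID=4726 | Select TargetUserName,TargetSid,SubjectAccount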

When you are trying this service, I would recommend converting every audit you had in the past related to Windows security logs into search queries. Keep in mind the service is still in preview, so there might be some glitches or missing scenarios. Whether this service is GA or in preview, it is always good to express your suggestions on UserVoice to help improve it.

Azure Automation is Faster Than Me

The original title of this post was supposed to be “Strategy on Running Runbooks in Azure Automation”, and I had wanted to write it for more than a month. Why did the title change during this period? Simple: there was a limitation on running runbooks in Azure Automation, and that limitation was significantly increased, which pushed me to change the title. Let me explain in detail what the limitation was, what has happened, and what strategy I had that you might still need and is always good to implement.

Azure Automation had a limitation: if one runbook ran for more than 30 minutes, the job would be restarted, either from the beginning of the runbook or from the last checkpoint you had made in it. Those 30 minutes have now been increased to 3 hours. I found that out just before writing this article; I always verify sources before writing a blog post. A month ago this limit was 30 minutes, and I know because I hit it. I was a little busy and couldn’t write a blog post on it right away, and after a month things had changed. This limitation is described here, under the Fair Share section.

Still, even with 3 hours, I guess in some extreme cases you can hit the limit. My proposal is to have one root runbook that starts other child runbooks one after another. In each child runbook you will have tasks that you make sure run in under 3 hours, or tasks that execute asynchronously. Asynchronous tasks usually return a job ID that you can query, every 5 seconds for example, to check the status. Before doing the check you can create a checkpoint (Checkpoint-Workflow) in your runbook, so that in case the runbook takes longer than 3 hours and is restarted, it will continue right from your last safe point. In fact, in your root runbook you can use the same approach: start the child runbooks, get their job IDs, make a checkpoint, and wait for the status of the jobs. There is a good example of starting a runbook and checking its job status here.

Here is sample code from me to illustrate the logic:

workflow ExampleRoot
{
    param (
        [Parameter(Mandatory=$true)][string]$Param1
    )

    $Creds = Get-AutomationPSCredential -Name 'Creds'

    ########################### RUNBOOK 1 #############################################################
    # Connect to the Azure subscription
    $AzureAccount = Add-AzureAccount -Credential $Creds

    $job1 = Start-AzureAutomationRunbook -AutomationAccountName CloudAdmin `
                                         -Name                  "ExampleChild1" `
                                         -Parameters            @{ "Param1" = $Param1 }

    # Checkpoints cannot serialize credential objects, so clear them first
    $Creds = $null
    #Create Checkpoint
    Checkpoint-Workflow
    # Re-acquire the credentials from Assets and reconnect after the checkpoint
    $Creds = Get-AutomationPSCredential -Name 'Creds'
    $AzureAccount = Add-AzureAccount -Credential $Creds

    # Poll the job every 5 seconds until it reaches a terminal state
    $doLoop1 = $true
    While ($doLoop1)
    {
        Start-Sleep -Seconds 5
        $job1 = Get-AzureAutomationJob -AutomationAccountName CloudAdmin `
                                       -Id                    $job1.Id
        $status1 = $job1.Status
        $doLoop1 = (($status1 -ne "Completed") -and ($status1 -ne "Failed") -and ($status1 -ne "Suspended") -and ($status1 -ne "Stopped"))
    }

    $Output1 = Get-AzureAutomationJobOutput -AutomationAccountName CloudAdmin `
                                            -Id                    $job1.Id `
                                            -Stream                Output

    $RemovedAzureAccount = Remove-AzureAccount -Name          $AzureAccount.ID `
                                               -Force `
                                               -WarningAction SilentlyContinue

    ########################### RUNBOOK 2 #############################################################
    # Connect to the Azure subscription again
    $AzureAccount = Add-AzureAccount -Credential $Creds

    $job2 = Start-AzureAutomationRunbook -AutomationAccountName CloudAdmin `
                                         -Name                  "ExampleChild2" `
                                         -Parameters            @{ "Param1" = $Param1 }

    # Clear credentials before the checkpoint, then re-acquire them
    $Creds = $null
    #Create Checkpoint
    Checkpoint-Workflow
    $Creds = Get-AutomationPSCredential -Name 'Creds'
    $AzureAccount = Add-AzureAccount -Credential $Creds

    $doLoop2 = $true
    While ($doLoop2)
    {
        Start-Sleep -Seconds 5
        $job2 = Get-AzureAutomationJob -AutomationAccountName CloudAdmin `
                                       -Id                    $job2.Id
        $status2 = $job2.Status
        $doLoop2 = (($status2 -ne "Completed") -and ($status2 -ne "Failed") -and ($status2 -ne "Suspended") -and ($status2 -ne "Stopped"))
    }

    $Output2 = Get-AzureAutomationJobOutput -AutomationAccountName CloudAdmin `
                                            -Id                    $job2.Id `
                                            -Stream                Output

    $RemovedAzureAccount = Remove-AzureAccount -Name          $AzureAccount.ID `
                                               -Force `
                                               -WarningAction SilentlyContinue
}

Logically this would look something like this:

image

Now there is one tip I want to share. You will notice that before making a checkpoint I clear the credentials variable I am using, with $Creds = $null. Currently a checkpoint cannot handle credentials, and your runbook will probably fail if you do not clear all the credential variables you are using before making a checkpoint. This issue is described here. After clearing the credentials and making the checkpoint, you can get your credentials again from Assets. Example:

    $Creds = $null
    #Create Checkpoint
    Checkpoint-Workflow
    $Creds = Get-AutomationPSCredential -Name 'Creds'

 

If you are using more than one credential, obviously you will need to clear every one of them and get them again after the checkpoint.
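For example, if a runbook uses two credential assets (the asset names ‘CredsA’ and ‘CredsB’ below are hypothetical), the pattern would look like this:

    # Clear every credential variable before the checkpoint
    $CredsA = $null
    $CredsB = $null
    Checkpoint-Workflow
    # Re-acquire both credentials from Assets after the checkpoint
    $CredsA = Get-AutomationPSCredential -Name 'CredsA'
    $CredsB = Get-AutomationPSCredential -Name 'CredsB'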

Whether Azure Automation has that time limit or not, it is always good to make checkpoints after certain actions. This will make your runbooks more resilient.

I hope this was helpful.

Microsoft Azure Operational Insights Preview Series – Collecting Logs from Azure Diagnostics (Part 16)

Previously on Microsoft Azure Operational Insights Preview Series:

This blog post is about a feature of OpInsights that you may or may not know about. Besides ingesting data through agents or SCOM, OpInsights can ingest data through Azure Storage as well. And you can place data in Azure Storage through an Azure feature like Azure Diagnostics. So let’s see how all this works.

First you will need to link your OpInsights workspace to your Azure subscription and add an Azure Storage account to it. You can check Part 10 of my series for this, but in Azure you should have the following configured for the storage:

image

Now that we have this in place, let’s see what we can actually ingest. Azure Diagnostics can collect different types of data, but currently OpInsights can ingest only some of it. The current matrix of which logs can be ingested and from which sources is the following:

image

Now let’s see how to configure Windows Event logs for a VM.

To do this you will need to go to the Azure Preview portal:

https://portal.azure.com

Click Browse –> Virtual Machines

image

Select one of the virtual machines for which you want to activate Azure Diagnostics:

image

Click on the monitoring tile:

image

Select Diagnostics Settings and change status from Off to On:

image

Basically, for a virtual machine, if you enable every Windows event log you can gather all of them. In my case I’ve also selected to collect everything from Verbose to Critical; you can of course decide to collect only anything above Warning.

You will also need to write these logs to the same storage account that is used by Operational Insights. When you are ready, click Save.

After around one hour if you execute the following query:

*  | Measure count() by SourceSystem

You should see Events from source Azure Storage showing up:

image

Of course you can also enable Azure Diagnostics with Azure PowerShell. You can find an example of this, along with how to enable Azure Diagnostics on web roles and worker roles, on the Azure Operational Insights documentation site.
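As a rough sketch of the PowerShell route for a classic VM, using the service management Azure module of that era (the service, VM, and storage account names below are placeholders, and the diagnostics configuration XML is assumed to exist already):

    # Placeholders: MyService, MyVM, opinsightsstorage and $storageKey
    $storageCtx = New-AzureStorageContext -StorageAccountName "opinsightsstorage" `
                                          -StorageAccountKey  $storageKey

    # Apply the diagnostics extension to the VM and point it at the same
    # storage account that Operational Insights reads from
    Get-AzureVM -ServiceName "MyService" -Name "MyVM" |
        Set-AzureVMDiagnosticsExtension -DiagnosticsConfigurationPath "C:\diag\config.xml" `
                                        -StorageContext $storageCtx |
        Update-AzureVM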

Microsoft Azure Operational Insights Preview Series – Plans and Retention (Part 15)

Previously on Microsoft Azure Operational Insights Preview Series:

Currently in the Azure Operational Insights preview we have 3 plans available:

  • Free
  • Standard
  • Premium

Two of them have associated prices, and you can read more here.

One of the differences between these plans is the retention period, meaning how far in the past you can see your data when you use Search. If you’ve signed up for the service through https://preview.opinsights.azure.com/ and you haven’t linked your account, you need to know that you are automatically assigned to the Free plan. You can change that at any time, but you first need to link your Azure Operational Insights workspace to your Azure subscription. You can see how to do this in Part 10 of my series, and in Part 14, with the new onboarding experience, you will also see which plan you are currently using. In fact, even if you do not want to change your plan but you have an Azure subscription, it is better to link them. Switching between plans does not take effect immediately, meaning that if you have switched from Premium to Free you will not lose your data right away, and you can return to Premium. As part of the preview, if you have a connection to a SCOM management group, the daily limit and retention period will not always be applied. Keep in mind that some of this information relates to the preview and will probably change with GA.

Microsoft Azure Operational Insights Preview Series – New Onboarding User Experience (Part 14)

Previously on Microsoft Azure Operational Insights Preview Series:

I was checking my Operational Insights workspace today and noticed there is a new tile named Settings:

image

The Settings tile leads you to a page that guides you through the steps to take in order to get started with Azure Operational Insights.

image

As I have had this workspace for a while, I’ve already completed all the getting-started steps. As we can see, Data Sources is our first step. Data sources are basically your direct agents, SCOM management groups, and Azure Storage. You have the information for the direct agent right on this page. Connect to SCOM leads you to a guide on how to do that, and Connect to Azure Storage does the same. The last two are guides because there are more steps to perform in order to enable them.

You will also see a Logs tab, which is where the Add Logs step leads you. Logs are enabled by default on every new workspace, but no data is gathered automatically, because you still need to add the logs you want to ingest and analyze in Azure Operational Insights.

image

I’ve already added some.

The last step will lead you to Intelligence Pack Gallery:

image

 

The last improvement I want to show you is that the Operational Insights portal now shows which data plan you are on:

image

This is important, and I will have another post on that soon.

GRE Tunneling with NVGRE Gateways and SCVMM 2012 R2 UR5

The GRE tunneling option was enabled with Update Rollup 5 for SCVMM 2012 R2, but to fully enable it you had to install an update on your NVGRE gateways. I predicted that such a hotfix would be available soon, and now it is out. You can find it here and enable the full scenario with VMM and NVGRE gateways. Here are some of the scenarios you can use this feature for. The documentation is for vNext, but the feature is now enabled in Windows Server 2012 R2 and System Center 2012 R2.

Update:

To enable it: download and install the hotfix on your NVGRE gateways (a restart will be required). Make sure your SCVMM 2012 R2 server is on Update Rollup 5. Refresh your gateways in the VMM console: Fabric pane -> Networking -> Network Service -> right-click the gateway and refresh. Open the properties of a gateway, go to the Provider tab, and click Test. After that, for a VM network, you will be able to add a GRE tunnel when you have a gateway attached to that network.
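The refresh step can also be scripted; here is a rough sketch with the VMM cmdlets (the network service name “GatewayService” is a placeholder, and the cmdlet names are worth verifying against your VMM module):

    # Assumes the VMM PowerShell module is loaded and connected to the VMM server
    $gw = Get-SCNetworkService | Where-Object { $_.Name -eq "GatewayService" }

    # Refresh the network service so VMM picks up the updated gateway capabilities
    Read-SCNetworkService -NetworkService $gw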

GRE