Publish your Module to PowerShell Gallery with Azure Automation Connection

The PowerShell Gallery recently received a feature that lets you deploy modules from the Gallery directly to Azure Automation (part of OMS). You can upload PowerShell modules to Azure Automation with little or no change to the module. If you want to make your PowerShell module more appealing and easier to use in Azure Automation, you can create a so-called Connection. You can read more here. Recently I was able to confirm that if you publish your PowerShell module with a Connection to the PowerShell Gallery, that Connection will be imported as well when you import the module into Azure Automation. My fellow MVP Tao Yang helped me confirm this. Basically, when you publish your module to the PowerShell Gallery, make sure that the JSON file needed for the connection is included in the files for upload.

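For illustration, here is a minimal sketch of shipping such a connection with a module. The metadata file name and format follow the Azure Automation integration module convention, but the connection type name and fields below are made up for the example (they are not the actual OMSSearch ones):

# Azure Automation reads connection types from a <ModuleName>-Automation.json
# file placed in the module folder. The fields below are examples only.
$metadata = @'
{
  "ConnectionFields": [
    { "IsEncrypted": false, "IsOptional": false, "Name": "SubscriptionID", "TypeName": "System.String" },
    { "IsEncrypted": true, "IsOptional": false, "Name": "APIKey", "TypeName": "System.String" }
  ],
  "ConnectionTypeName": "MyModuleConnection",
  "IntegrationModuleVersion": "1.0.0.0"
}
'@
Set-Content -Path '.\MyModule\MyModule-Automation.json' -Value $metadata

# Publishing the module folder uploads the JSON file together with the module files.
Publish-Module -Path '.\MyModule' -NuGetApiKey $galleryApiKey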

Interestingly, I believe I have included that JSON file in the OMSSearch module since its first release on the PowerShell Gallery, even before the Deploy to Azure Automation feature was available.

When you import the OMSSearch module from the PowerShell Gallery into your Azure Automation account, you will be able to create an OMSSearch connection.

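Once a connection asset is created from that connection type, runbooks in the Automation account can consume it. A minimal sketch, assuming a connection asset named OMSCon whose fields match the connection type (both the asset name and the field are illustrative):

# Inside an Azure Automation runbook: Get-AutomationConnection returns a
# hashtable exposing the fields defined by the connection type.
$connection = Get-AutomationConnection -Name 'OMSCon'
$subscriptionId = $connection.SubscriptionID
Write-Output "Working against subscription $subscriptionId"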

I hope this was useful, and I hope more PowerShell module authors will include connections in their modules.

Windows 10 Mail App Cannot Sync / Data Sharing Service Cannot Start

I usually do not write about client stuff, but this issue really annoyed me for a long time, and now that I have fixed it, it is a relief.

Since the fiasco with the Windows Store in Windows 10 not working for some users, I was not able to sync mail with any of my three mail accounts in the Windows 10 Mail app. I tried almost everything to fix it over a long period and even logged my problem in the forums (check the forum thread for more info), but I could not fix it until today.

While searching for the problem I found this forum post. My Data Sharing Service also could not start, and I was getting this error when trying to start it:

Windows could not start the Data Sharing Service service on Local Computer. Error 0xc1130004.

I also found a lot of these errors in the Application log:

Event ID: 454, Source: ESENT

svchost (720) DS_Token_DB: Database recovery/restore failed with unexpected error -1216.

Event ID: 494, Source: ESENT

svchost (720) DS_Token_DB: Database recovery failed with error -1216 because it encountered references to a database, 'C:\Windows\system32\config\systemprofile\AppData\Local\DataSharing\Storage\DSTokenDB2.dat', which is no longer present. The database was not brought to a Clean Shutdown state before it was removed (or possibly moved or renamed). The database engine will not permit recovery to complete for this instance until the missing database is re-instated. If the database is truly no longer available and no longer required, procedures for recovering from this error are available in the Microsoft Knowledge Base or by following the "more information" link at the bottom of this message.

So I went to 'C:\Windows\system32\config\systemprofile\AppData\Local\DataSharing\Storage\', made a backup of the files in the folder, and then deleted all the files in it. After that I was able to start the Data Sharing Service, and suddenly the Mail app started to work flawlessly.
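If you prefer to script the workaround, my manual steps look roughly like this from an elevated PowerShell session. This is only a sketch: DsSvc is the service name the Data Sharing Service uses on my machine, so verify the service name and paths on yours before running anything:

# Back up the DataSharing storage files, clear them, then start the service.
$storage = 'C:\Windows\system32\config\systemprofile\AppData\Local\DataSharing\Storage'
$backup = "$env:TEMP\DataSharingBackup"
New-Item -ItemType Directory -Path $backup -Force | Out-Null
Copy-Item -Path "$storage\*" -Destination $backup -Force
Remove-Item -Path "$storage\*" -Force
Start-Service -Name DsSvc   # Data Sharing Service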

Use this workaround at your own risk. I hope it helps if you encounter this problem.

Operations Management Suite – Performance Monitoring

OMS today released a new feature: performance monitoring. I am not sure if that is the official name, but it basically allows you to gather performance data from servers by adding performance counters.

The feature is located under Logs, as it represents a different kind of log gathering. Keep in mind that this performance data gathering feature is different from the Capacity solution: it does not require SCOM-VMM integration and works with agent-only connected computers.

When you first navigate to Logs, you will see a box which helps you add the most common performance counters.


Of course, you have the option to add all of them, select only a few, or add other counters.


When you click Add, you will see that besides choosing which performance counters to add, you can also choose the interval at which they are gathered. 10 seconds is the minimum; I do not know what the maximum is, but I have tried 10 minutes and it works. How frequently performance data is gathered affects the accuracy of the data: if you gather data every 2 minutes, for example, you will not be able to see small spikes if you have them on your servers.

If you want to add other counters, just type a word and the performance counters matching that word will appear. Very easy and convenient.


You can even add your own custom counters if you have them on your servers. Just follow the format of the other counters if you are unsure how to add them.

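For reference, performance counters follow the usual Windows Object(Instance)\Counter notation. A few illustrative examples, the last one being a hypothetical custom counter exposed by your own application:

Processor(_Total)\% Processor Time
Memory(*)\Available MBytes
LogicalDisk(*)\Avg. Disk sec/Read
MyApplication(*)\Requests Queued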

In summary, the configuration is very flexible and convenient. Some performance counters may not be updated with new data every second, so you can gather those at a longer interval and save money. I had different initial expectations for the configuration of this feature, and now that it is GA it exceeds them by a lot.

Once you have enabled a few counters, go to Search and type "Type:Perf".


Below the search you will see a view for Metrics. Click on it.


This brings up nice graphs of the performance data that has been gathered. When you start, you can narrow down the results to the last 6 hours.


This view gives you a nice overview when you need to see your performance data overall, as it allows you to see more graphs on one page. Each graph can be expanded, which shows you more details about that performance counter.


And you can actually expand all the graphs on the page if needed.

Depending on your data gathering interval, if you stay on the same page without refreshing it, you will soon see new data being visualized live.


The lighter blue color shows the new data coming in live.

Along with the graphs, you will see the Last and Average values displayed on the right.

This performance data is now available in OMS as aggregated logs that you can actually use in queries.

Example query:

Type:Perf (CounterName="% Processor Time") | measure Avg(Average) as AVGCPUTime by Computer | Where AVGCPUTime>10 | Sort AVGCPUTime desc | Top 5


The more servers you have, the more interesting the results of that query will be.
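You can follow the same pattern for other counters. For example, a query along these lines should show the average available memory per computer (a sketch mirroring the query above, which I have not verified):

Type:Perf (CounterName="Available MBytes") | measure Avg(Average) as AVGAvailableMB by Computer | Sort AVGAvailableMB asc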

As this feature is part of Log Management, you will see Log Management consuming more data in Usage.


But the feature is very light on cost from my perspective, and you can always increase the data gathering interval. In my case Log Management is big because I have enabled a lot of logs.

I hope you will like this feature. I am pretty excited about it, and it far exceeds my initial expectations. Of course I had some feedback as well, which I have already given to the product group. If you have any feedback too, feel free to share it on UserVoice.

Operations Management Suite – Custom Fields / Extract Data Feature

I have been waiting for this OMS feature with anticipation. At first sight you might think this is a feature not worth being excited about, but quite the contrary. This feature allows you to extract additional insights from your logs. Why? Many logs, like Syslog and the Event Log, stuff much of their data into one field, which makes that data hard to search once ingested. Inside that field the data is structured in some form, like XML or just text. With this feature, a few easy clicks can turn parts of that data into searchable fields. The OMS team has already explained how to do this in a detailed blog post. What I want is to walk through this feature again by providing a couple more examples.

In a previous post I showed you how you can audit PowerShell with OMS, but let's expand on that and make our audit more granular by using this new feature.

First step: Execute the query:

EventID=4103 EventLog="Microsoft-Windows-PowerShell/Operational"


Second step: Click the hamburger menu next to the field you want to extract data from.


Third step: Make sure that you have selected the fields you want to filter on. Then highlight the area that you want to serve as the example for your data. In my case I have highlighted the value after "Command Name". When the value is highlighted, give the field a name that will be easy for everyone working with OMS to understand. Click Extract.


Fourth step: Clicking Extract leads you to samples of the results you will see if you save the extraction. If some results do not look the way they should, you can edit them individually or ignore them; that will help the extraction algorithm provide better results. Once you are OK with the results, you can click Save extraction. Be careful, as currently there is no way to delete an extraction that has already been saved. Not that it will somehow tamper with your data, but it might be confusing for other people working in your OMS workspace.


Fifth step: Remember that this extraction is applied only to new results, so depending on the velocity with which that log is generated, you will have to wait some time. Eventually you will see the results.


Sixth step: Now that you have that field available you can search on it.

EventID=4103 EventLog="Microsoft-Windows-PowerShell/Operational" | measure count() by PowerShellCommand_CF


We can go through the same procedure for the same log and also extract the user who was executing the commands.


EventID=4103 EventLog="Microsoft-Windows-PowerShell/Operational" | measure count() by PowerShellUser_CF


The second example takes the log for successfully installed updates:

Type=Event (EventLog=System) (Source="Microsoft-Windows-WindowsUpdateClient") (EventID=19)


And of course, the results appear once new events come in.

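With a custom field extracted from those events (for example, the name of the installed update), you can then aggregate on it just like in the PowerShell example. The field name below is hypothetical; use whatever name you gave your extraction:

Type=Event (EventLog=System) (Source="Microsoft-Windows-WindowsUpdateClient") (EventID=19) | measure count() by UpdateTitle_CF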

I hope these examples will inspire you to think of more data that you can extract and use to provide value to your company.

Remember that a couple of MVPs and I have a PowerShell module for OMS on GitHub that can be imported into Azure Automation and used for more advanced scenarios. The module is also available on the PowerShell Gallery, which now allows you to import PowerShell modules into Azure Automation.


I hope that was useful information for you.

Spend Your Money Wisely

With this post I would like to support my friend and fellow MVP Tao Yang. The text below is written by him but I fully support it. Read carefully.

As someone I would like to consider a seasoned System Center specialist, I have benefitted from many awesome resources from the community during my career in System Center. These resources consist of blogs, whitepapers, training videos, management packs and various tools and utilities. Although some of them are not free (and in my opinion, they are not free for a good reason), a large percentage of the resources I value the most are free of charge.

This is what I like the most about the System Center community. Over the last few years, I have gotten to know many unselfish people and organisations in the System Center space who have made their valuable work completely free and open source for the broader community. Due to what I am going to talk about in this post, I am not going to mention any names (unless I absolutely have to). But if anyone is interested to know my opinion, I am happy to write a separate post introducing what I believe are valuable resources.

First of all, I am just going to put it out there: I am not upset, this is not going to be a rant, and I am trying to stay positive.

I started working on System Center around 2007-2008 (ConfigMgr and OpsMgr at that time). I started working on OpsMgr because my then colleague and now fellow SCCDM MVP (like I mentioned, not going to mention names) had left the company we were working for, and I had to pick up the MOM 2005 to OpsMgr 2007 project he left behind. The very first task for me was to figure out a way to pass the server's NetBIOS name to the help desk ticketing system, and I managed to achieve this by creating a PowerShell script and utilising the command notification channel to execute the script when alerts were raised. I then used the same concept and developed a PowerShell script to be used in the command notification to send content-rich notification emails covering much information not available from the native email notification channel.

When I started blogging 5 years ago, this script was one of the very first posts I published here. I named this solution "Enhanced SCOM Alert Notification Emails". Since it was published, it has received much positive feedback and many recommendations. I have since published the updated version (2.0) here:

http://blog.tyang.org/2012/08/16/scom-enhanced-email-notification-script-version-2/

After version 2.0 was published, a fellow member of the System Center community, Mr. Tyson Paul, contacted me and told me he had updated my script. I was really happy to see my work carried on by other members of the community, and since then Tyson has made several updates to this script and published them on his blog (for free, of course):

Version 2.1: http://blogs.msdn.com/b/tysonpaul/archive/2014/08/04/scom-enhanced-email-notification-script-version-2-1.aspx

Version 2.2: http://blogs.msdn.com/b/tysonpaul/archive/2015/01/30/scom-enhanced-email-notification-script-version-2-2.aspx

This morning I received an email from a person I had never heard of. He told me his organisation has developed a commercial solution called "Enhanced Notification Service for SCOM" and that I can request an NFR license by filling out a form on his website. As the name suggests (and I had a look at the website), it does exactly what mine and Tyson's script does: sending HTML-based notification emails which include content-rich information such as associated knowledge articles.

Well, to be fair, on their website they did mention a limitation of running command notifications: the AsyncProcessLimit of 5. But there is a way to increase this limit, and if your environment is still hitting the limit after you have increased it, I believe you have a more serious issue to fix (i.e. an alert storm) rather than enjoying reading those "sexy" notification emails. Anyway, I do not want to get into a technical argument here; that is not the intention of this post.

So, do I think someone took the idea and work from Tyson and myself? It is pretty obvious; make your own judgement. Am I upset? Not really. If I wanted to make a profit from this solution, I would not have published it on my blog in the first place. And believe me, there are many solutions and proofs of concept I have developed in the past that I sincerely hope some software vendor will pick up and develop into a commercial solution for the community, simply because I do not have the time and resources to do it all by myself (i.e. my recently published post on managing ConfigMgr log files using OMS would make a good commercial solution).

In the past, I have also seen people take scripts I published on my blog, replace my name with theirs in the comment section, and publish them on social media without mentioning me whatsoever. I knew it was my script because other comments in the script were identical to my initial version. When I saw it, I decided not to let this kind of behaviour get under my skin, and I believe the best way to handle it is to let it go. So I was not upset when I read this email today. Instead, I laughed! Hey, if this organisation can get people to pay $2 per OpsMgr agent per year (which means a fully loaded OpsMgr management group would cost $30k per year for "sexy" notification emails), all I am going to say is…


However, I do want to advise the broader System Center community: Please spend your money wisely!

There is only so much honey in the pot; you all have a budget. This is what economists would call opportunity cost. If you have a certain need or requirement and you can satisfy it using free solutions, you can spend your budget on something that has a higher price-performance ratio. If you think there is a gap between the free and paid solutions, please ask yourself these questions:

  • Do these gaps really cost me this much?
  • Are there any ways to overcome these gaps?
  • Have I reached out to the SMEs and confirmed whether this is a reasonable price?
  • How much would it cost me to develop an in-house solution?

Lastly, I receive many emails from people in the community asking me for advice and providing feedback on the tools I have published. I try my best to answer all the emails (apologies if I have missed any). So if there is anything in the future you would like my opinion on, please feel free to contact me. And I am certain that not only myself but other SMEs and activists in the System Center community would also love to help a fellow community member.

Error (2912) When You Try to Update VMM Agent on Hosts

Recently I stumbled upon the following error when trying to update the VMM agent on hosts:

Error (2912)
An internal error has occurred trying to contact the hv01.contoso.com server: NO_PARAM: NO_PARAM.

WinRM: URL: [http://hv01.contoso.com:5985], Verb: [INVOKE], Method: [GetError], Resource: [http://schemas.microsoft.com/wbem/wsman/1/wmi/root/microsoft/bits/BitsClientJob?JobId={9D5C4B47-E79E-4090-BC3B-552578D0EC8C}]

Unknown error (0x80072f0d)

Recommended Action
Check that WS-Management service is installed and running on server hv01.contoso.com. For more information use the command “winrm helpmsg hresult”. If hv01.contoso.com is a host/library/update server or a PXE server role then ensure that VMM agent is installed and running. Refer to http://support.microsoft.com/kb/2742275 for more details.

At some point I found a workaround for this issue. Here it is:

  1. Install the DHCP extension manually on every host. The installer is usually located in C:\Program Files\Microsoft System Center 2012 R2\Virtual Machine Manager\SwExtn on the VMM server, depending on where VMM is installed; a scripted take on this step is sketched below.
  2. Initiate the Update Agent task from the VMM console on every host. This time the task should finish successfully.
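If you have many hosts, step 1 can be scripted. A rough sketch; both the share path and the DHCPExtn.msi file name are assumptions, so check what actually sits in your SwExtn folder first:

# Copy the DHCP extension installer from the VMM server and install it silently.
$source = '\\vmmserver\c$\Program Files\Microsoft System Center 2012 R2\Virtual Machine Manager\SwExtn\DHCPExtn.msi'
Copy-Item -Path $source -Destination 'C:\Temp\DHCPExtn.msi'
Start-Process -FilePath msiexec.exe -ArgumentList '/i C:\Temp\DHCPExtn.msi /qn' -Wait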

I was too lazy to find the proper resolution, but the workaround should be sufficient if you encounter this issue. Hope this was helpful.

Operations Management Suite – Wire Data Solution

The Wire Data solution has been in Coming Soon status for some time, but now it is available to all.

You will find it in the Solution Gallery, where you can enable it.


In my experience, the solution consumes more data in my environment than most other solutions, but not that much compared to my top data-consuming solutions.


But what does the Wire Data solution actually represent?

In the gallery you will find a description, but my short version is: an overview of your network and the data on the traffic flowing through it. Before continuing further, I should mention that the solution works only on Windows Server 2012 R2, Windows 8.1 and higher operating systems. In my opinion this makes it very limited, as there are still a lot of Windows Server 2012 and 2008 R2 machines out there. But even with that limit, you can get very useful information from it.

Now let's continue by first looking at how the solution appears on its main page.


Here you will see general information about your network. This just gives you a glimpse of how your network looks according to the data gathered by Wire Data.

Before starting any investigation or analysis, I would suggest looking at the built-in examples.


These give you a good start on which queries can show interesting results that will help you fix or prevent problems in your environment.

The next step is to go to Search and just type:

Type:WireData

This will return all your wire data. You can expand a few results and have a look at what data is collected for every record.


That is the beauty of OMS – you can explore data very easily. By exploring the data you can figure out more queries that will help you extract value from it.

For example, with a query like this:

Type:WireData (ApplicationServiceName=http) | measure count() by TimeGenerated interval 1HOUR

I can see the trend of how many HTTP sessions were established over the past 6 hours.

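Following the same pattern, you can group the wire data by any of the fields you saw in the expanded records. For example, a query like this should show which processes generate the most records (ProcessName is one of the fields visible in the records; adjust to what your data exposes):

Type:WireData | measure count() by ProcessName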

Of course, as with any other OMS solution, correlating data between solutions makes it very powerful for investigating and analyzing.

But how does this solution actually work? Let's have a deeper look.

Looking at the management pack for the solution, we can see that it relies on a DLL assembly for getting and parsing the data.

Looking inside the assembly, we can see that the solution relies on ETW (Event Tracing for Windows) for getting the actual data.

I hope you find this post useful, and if you have any feedback, remember that UserVoice is open 24/7.
