
Tell Microsoft To Fix The Sql Server Management Studio “Connect to Server” Dialog Position

October 17th, 2014 No comments

If you use SQL Server Management Studio (SSMS) with multiple monitors, you have likely run into the issue where the “Connect to Server” dialog window opens either half or completely off the screen when SSMS is opened on a monitor other than the primary one (see screenshot below).

Several bugs have been reported for this, and apparently Microsoft doesn’t consider it a real issue, since they have closed all of the bugs related to it as “Won’t Fix”. Here’s a quote:

We took a look at this bug and triaged it against several others and unfortunately, it did not meet the bar to be fixed and we are closing it as ‘won’t fix’.

Why they admit that it is a problem and close it as “Won’t Fix” instead of just leaving it open with a low priority is beyond me.

What’s even more surprising is that these issues currently have fewer than 10 upvotes!  Let’s fix that. Like many people, I use SSMS daily, and this is easily my biggest beef with it, especially since the fix is so simple (literally 3 clicks in a Windows Forms or WPF app).

Please go to the following 3 Connect bugs and up-vote them so MS reconsiders fixing this.

1. https://connect.microsoft.com/SQLServer/feedback/details/755689/sql-server-management-studio-connect-to-server-popup-dialog

2. https://connect.microsoft.com/SQLServer/feedback/details/724909/connection-dialog-appears-off-screen

3. https://connect.microsoft.com/SQLServer/feedback/details/389165/sql-server-management-studio-gets-confused-dealing-with-multiple-displays

 

Here’s a screenshot of the problem. Here my secondary monitors are above my primary one, but the same problem occurs even if all monitors are horizontal to one another.

Sql Management Studio Multi-Monitor Bug

Create Unique Strong Passwords That Are Easy To Remember For All Your Accounts, Without Using A Password Manager

October 11th, 2014 5 comments

The Problem

We’ve all heard the warnings that we should use a strong password to prevent others from guessing our password, and that we should use a different password for every account we have.

A strong password is simply a password that meets a set of requirements, such as being at least X characters long and including a mix of numbers, lowercase letters, uppercase letters, and/or symbols. Many websites and services require that a strong password be used.

If you don’t use a strong password, it’s likely that your password can be brute-forced almost instantly.  Check how secure your passwords are here.

If you do use a strong password, it’s very likely that you use the same strong password (or set of strong passwords) for all of the services you use, simply because remembering lots of passwords, and which one is for which service, is hard. This is very bad practice though, since if somebody gets your password they can access all of your services. There are many ways for somebody to get your password, from simply guessing it to software vulnerabilities like the Heartbleed bug, so you should always try to use a unique password for each service.

 

The Solution

My super smart coworker Nathan Storms posted a very short blog about his solution to this problem, which I’ll repeat and expand on here.

The basic idea is that instead of remembering a whole bunch of crazy passwords, you calculate them using an algorithm/formula. So instead of using one password for all of your accounts, you use one formula to generate all of your passwords; that means instead of remembering a password, you just remember a formula. The formula can be as simple or complex as you like. Like most people, I prefer a simple one, but you don’t want it to be so simple that another person could easily guess it if they got ahold of one or two of your passwords.

The key to creating a unique password for each service that you use is to include part of the service’s name in your formula, such as the company name or website domain name.

The key to creating a strong password is to use a common strong phrase (or “salt” in security-speak) in all of your generated passwords.

The last piece to consider is that you want your salt + formula to generate a password that is not too short or too long.  Longer passwords are always more secure, but many services have different min and max length requirements, so I find that aiming for about 12 characters satisfies most services while still generating a nice strong password.

 

Examples

So the things we need are:

  1. The service you are using. Let’s say you are creating an account at Google.com, so the service name is Google.
  2. A strong salt phrase. Let’s use: 1Qaz!   (notice it includes a number, small letter, capital letter, and symbol)

A Too Simple Formula Example:

A simple formula might be to simply combine the first 3 characters of the service name with our salt, so we get: Goo1Qaz!

That’s not bad, but howsecureismypassword.net tells us that it can be cracked within 3 days, which isn’t that great. We could simply make our salt a bit longer, such as 1Qaz!23>, which would make our password Goo1Qaz!23>. This puts our password at 11 characters and takes up to 50 thousand years to brute force, which is much better; longer, stronger salts are always better.

There’s still a problem with this formula though; it’s too simple. To illustrate the point, for Yahoo.com the calculated password would be Yah1Qaz!23>. Now, if somebody got ahold of these two passwords and knew which services they were for, how long do you think it would take them to figure out your formula and be able to calculate all of your passwords? Probably not very long at all.
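To see why those three extra characters make such a huge difference to the crack times above, here is a rough back-of-the-envelope sketch in Python. The 95-character alphabet (printable ASCII) and the guess rate are my own assumptions for illustration, not numbers from howsecureismypassword.net:

```python
# Back-of-the-envelope brute-force estimate (assumed numbers): 95 printable
# ASCII characters per position, and a hypothetical attacker testing
# 10 billion guesses per second.
CHARSET = 95
GUESSES_PER_SECOND = 10_000_000_000

def years_to_search(length):
    """Years needed to try every possible password of the given length."""
    combinations = CHARSET ** length
    return combinations / GUESSES_PER_SECOND / (60 * 60 * 24 * 365)

# Each extra character multiplies the search space by 95, so going from
# 8 characters (Goo1Qaz!) to 11 characters (Goo1Qaz!23>) multiplies the
# attacker's work by 95^3 = 857,375.
print(years_to_search(8))   # ~0.02 years, i.e. about a week
print(years_to_search(11))  # ~18,000 years
```

The exact numbers depend entirely on the assumed guess rate, but the ratio between the two lengths does not, which is why adding a few characters to your salt helps so much.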

Better Formula Examples:

The problem with the formula above is that it’s easy for a human to recognize the pattern of how we use the service name; we just took the first 3 letters. Some better alternatives would be:

Service Name Rule (using Google) [using StackOverflow], with the resulting Google and StackOverflow passwords:

  • Use last 3 letters backwards (elgooG) [wolfrevOkcatS]: Google = elg1Qaz!23>, StackOverflow = wol1Qaz!23>
  • Use every 2nd letter, max 4 letters (Google) [StackOverflow]: Google = oge1Qaz!23>, StackOverflow = tcOe1Qaz!23>
  • Use next letter of first 3 letters (G + 1 = H, o + 1 = p) [S + 1 = T, t + 1 = u, a + 1 = b]: Google = Hpp1Qaz!23>, StackOverflow = Tub1Qaz!23>
  • Use number of vowels and total length (3 vowels, length of 6) [4 vowels, length of 13]: Google = 361Qaz!23>, StackOverflow = 4131Qaz!23>
  • Number of vowels in front, length at end: Google = 31Qaz!23>6, StackOverflow = 41Qaz!23>13
  • Number of vowels in front, length minus number of vowels at end (3 vowels, 6 – 3 = 3) [4 vowels, 13 – 4 = 9]: Google = 31Qaz!23>3, StackOverflow = 41Qaz!23>9
  • Number of vowels squared in front, length squared at end (3 × 3 = 9 and 6 × 6 = 36) [4 × 4 = 16 and 13 × 13 = 169]: Google = 91Qaz!23>36, StackOverflow = 161Qaz!23>169

You can see that once we introduce scrambling letters in the service name, or using numbers calculated from the service name, it becomes much harder for a human to spot the pattern and decode our formula. You want to be careful that your formula doesn’t get too complex for yourself though; StackOverflow is 13 characters long and I’ll admit that I broke out the calculator to see that 13 squared was 169.

You can also see how easy it is to come up with your own unique formula. You don’t have to stick to the rules I’ve shown here (counting vowels and length). Maybe instead of counting the number of vowels, you count the number of letters that the Service name has in common with your name. For example, my name is Daniel, so “Google” shares one letter in common with my name (the “l”), and “StackOverflow” shares 3 (“ael”). Maybe instead of squaring the numbers you multiply or add them. Maybe instead of using the numbers in your password, you use the symbols on the respective numbers. If you don’t like doing math, then avoid using math in your formula; it shouldn’t be a long or tedious process for you to calculate your password. Be creative and come up with your own formula that is fast and easy for you, and/or mix the components together in different ways.
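To make the idea concrete, here is a small Python sketch of two of the rules from the examples above. The function names are my own, and of course the whole point is that you keep your formula in your head rather than in code; this is purely to illustrate that a formula is just a tiny function of the service name and your salt:

```python
SALT = "1Qaz!23>"  # the shared strong salt from the examples above

def last_three_backwards(service, salt=SALT):
    # "Use last 3 letters backwards": Google -> elg, StackOverflow -> wol
    return service[-3:][::-1] + salt

def vowels_and_remainder(service, salt=SALT):
    # "Number of vowels in front, length minus number of vowels at end"
    vowels = sum(1 for c in service.lower() if c in "aeiou")
    return f"{vowels}{salt}{len(service) - vowels}"

print(last_three_backwards("Google"))         # elg1Qaz!23>
print(vowels_and_remainder("StackOverflow"))  # 41Qaz!23>9
```

Any similarly small rule of your own invention works just as well, as long as you can compute it quickly in your head.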

 

More Tips and Considerations

  • In all of my examples I placed my calculated characters before or after my salt, but you could also place them in the middle of your salt, or have your formula modify the salt.
  • Since some services restrict the use of symbols, you may want to have another salt that does not contain symbols, or a formula that does not generate symbols. When you try to log in using your usual salt and it fails, try the password generated using your secondary symbol-free salt.
  • For extra security, include the year in your formula somehow and change your passwords every year. If you are extra paranoid, or have to change your password very frequently (e.g. for work), you can do the same thing with the month too and change your passwords monthly. An alternative to this would be to change your salt phrase or formula every year/month.
  • Similarly to how you may have had a different password for sites you don’t really care about, sites you do care about, and critical sites (e.g. bank websites), you could have different salts or formulas for each.
  • If you are wary of using this formula approach for ALL of your passwords, thinking that it is too much effort, then don’t use it for ALL of your passwords. Probably 85% of the accounts you create you don’t really care about; they don’t have any sensitive information, and you couldn’t care less if somebody hacked them. For those, you can still use a shared strong password. Just use this approach for the remaining 15% of your accounts that you do really care about. This is a much better alternative than sharing a strong password among these 15%.
  • Some characters are “stronger” than others. For example, symbols are typically harder to guess/crack than letters or numbers, and some symbols are stronger than other symbols (e.g. < is stronger than $). It’s best to have a mix of all types of characters for your salt, but you might want to have more symbols in your salt, or when choosing the symbols for your salt you might opt for ones not on the 0 – 9 keys (i.e. !@#$%^&*()).

    Why Not Just Use A Password Manager

    With a password manager you can easily have unique passwords for all of your accounts, but there are a few reasons why I like this formula approach over using password management software:

    1. With password management software you are dependent on having the software installed and on hand; you can’t log into your accounts on your friend’s/co-worker’s/public PC since the password manager is not installed there. By using a formula instead, you ALWAYS know your passwords when you need them.
    2. Most password managers are not free, or else they are free on some platforms and not others, or they don’t support all of the platforms you use; if you want to use it on all of your devices you either can’t or you have to pay.
    3. Typically you need a password to access your account on the password manager. These types of “master passwords” are a bad idea. If somebody gets the “master password” for your password manager, they now have access to all of your passwords for all of your accounts. So even if you have a super strong master password that you never share with anybody, vulnerabilities like the Heartbleed bug make it possible for others to get your “master password”.
    4. Most password manager companies today store your passwords on their own servers in order to sync your passwords across all of your devices. This potentially makes them a large target for hackers, since if they can hack the company’s servers they get access to millions of passwords for millions of different services.

      Summary

      So instead of memorizing a password or set of passwords for all of the services you use, memorize a strong salt and a formula to calculate the passwords. Your formula doesn’t need to be overly complicated or involve a lot of hard math; just be creative with it and ensure that the formula is not obvious when looking at a few of the generated passwords. Also, you may want to have a couple different salts or formulas to help meet different strong password requirements on different services.

      Happy password generating!

      Find Largest (Or Smallest) Files In A Directory Or Drive With PowerShell

      September 8th, 2014 No comments

      One of our SQL servers was running low on disk space and I needed to quickly find the largest files on the drive to know what was eating up all of the disk space, so I wrote this PowerShell line that I thought I would share:

      # Get all files sorted by size.
      Get-ChildItem -Path 'C:\SomeFolder' -Recurse -Force -File | Select-Object -Property FullName,@{Name='SizeGB';Expression={$_.Length / 1GB}},@{Name='SizeMB';Expression={$_.Length / 1MB}},@{Name='SizeKB';Expression={$_.Length / 1KB}} | Sort-Object { $_.SizeKB } -Descending | Out-GridView
      

      Just change ‘C:\SomeFolder’ to the folder/drive that you want scanned, and it will show you all of the files in the directory and subdirectories in a GridView sorted by size, along with their size in GB, MB, and KB. The nice thing about using a GridView is that it has built in filtering, so you can quickly do things like filter for certain file types, child directories, etc.

      Here is a screenshot of the resulting GridView:

      FilesSortedBySize

       

      And again with filtering applied (i.e. the .bak at the top to only show backup files):

      FilesSortedBySizeAndFiltered

      All done with PowerShell; no external tools required.

      Happy Sys-Adminning!

      Keep PowerShell Console Window Open After Script Finishes Running

      July 7th, 2014 No comments

      I originally included this as a small bonus section at the end of my other post about fixing the issue of not being able to run a PowerShell script whose path contains a space, but thought this deserved its own dedicated post.

      When running a script by double-clicking it, or by right-clicking it and choosing Run With PowerShell or Open With Windows PowerShell, if the script completes very quickly the user will see the PowerShell console appear very briefly and then disappear.  If the script gives output that the user wants to see, or if it throws an error, the user won’t have time to read the text.  We have 3 solutions to fix this so that the PowerShell console stays open after the script has finished running:

      1. One-time solution

      Open a PowerShell console and manually run the script from the command line. I show how to do this a bit in this post, as the PowerShell syntax to run a script from the command line is not straightforward if you’ve never done it before.

      The other way is to launch the PowerShell process from the Run box (Windows Key + R) or command prompt using the -NoExit switch and passing in the path to the PowerShell file.
      For example: PowerShell -NoExit "C:\SomeFolder\MyPowerShellScript.ps1"

      2. Per-script solution

      Add a line like this to the end of your script:

      Read-Host -Prompt "Press Enter to exit"
      

      I typically use this following bit of code instead so that it only prompts for input when running from the PowerShell Console, and not from the PS ISE or other PS script editors (as they typically have a persistent console window integrated into the IDE).  Use whatever you prefer.

      # If running in the console, wait for input before closing.
      if ($Host.Name -eq "ConsoleHost")
      {
      	Write-Host "Press any key to continue..."
      	$Host.UI.RawUI.ReadKey("NoEcho,IncludeKeyUp") > $null
      }
      

      I typically use this approach for scripts that other people might end up running; if it’s a script that only I will ever be running, I rely on the global solution below.

      3. Global solution

      Adjust the registry keys used to run a PowerShell script to include the -NoExit switch to prevent the console window from closing.  Here are the two registry keys we will target, along with their default value and the value we want them to have:

      Registry Key: HKEY_CLASSES_ROOT\Applications\powershell.exe\shell\open\command
      Description: Key used when you right-click a .ps1 file and choose Open With -> Windows PowerShell.
      Default Value: "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" "%1"
      Desired Value: "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" "& \"%1\""
      
      Registry Key: HKEY_CLASSES_ROOT\Microsoft.PowerShellScript.1\Shell\0\Command
      Description: Key used when you right-click a .ps1 file and choose Run with PowerShell (shows up depending on which Windows OS and Updates you have installed).
      Default Value: "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" "-Command" "if((Get-ExecutionPolicy ) -ne 'AllSigned') { Set-ExecutionPolicy -Scope Process Bypass }; & '%1'"
      Desired Value: "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" -NoExit "-Command" "if((Get-ExecutionPolicy ) -ne 'AllSigned') { Set-ExecutionPolicy -Scope Process Bypass }; & \"%1\""
      

      The Desired Values add the -NoExit switch, as well as wrap the %1 in double quotes to allow the script to still run even if its path contains spaces.

      If you want to open the registry and manually make the change you can, or here is the registry script that we can run to make the change automatically for us:

      Windows Registry Editor Version 5.00
      
      [HKEY_CLASSES_ROOT\Applications\powershell.exe\shell\open\command]
      @="\"C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe\" -NoExit \"& \\\"%1\\\"\""
      
      [HKEY_CLASSES_ROOT\Microsoft.PowerShellScript.1\Shell\0\Command]
      @="\"C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe\" -NoExit \"-Command\" \"if((Get-ExecutionPolicy ) -ne 'AllSigned') { Set-ExecutionPolicy -Scope Process Bypass }; & \\\"%1\\\"\""
      

      You can copy and paste the text above into a file with a .reg extension.

      Simply double-click the .reg file and click OK on the prompt to have the registry keys updated.  Now, by default, when you run a PowerShell script from File Explorer (i.e. Windows Explorer), the console window will stay open even after the script has finished executing.  From there you can just type exit and hit Enter to close the window, or use the mouse to click the window’s X in the top-right corner.

      If I have missed other common registry keys or any other information, please leave a comment to let me know.  I hope you find this useful.

      Happy coding!

      Browser Extensions To Expand GitHub Code Pages To Fill The Full Width Of Your Browser

      May 27th, 2014 No comments

      The problem

      I love GitHub, but one thing that I and most developers hate is that the pages that show source code (pull requests, commits, blobs) are locked to a fixed width of only about 900 pixels.  Most developers have widescreen monitors, so their code lines are typically longer than 900 pixels.  This can make viewing code on GitHub painful because you have to constantly scroll horizontally to see a whole line of code.  I honestly can’t believe that after years GitHub still hasn’t fixed this.  It either means that the GitHub developers don’t dogfood their own product, or the website designers (not programmers) have the final say on how the site looks, in which case they don’t know their target audience very well.  Anyway, I digress.

      My solution

      To solve this problem, I wrote a GreaseMonkey user script 2 years ago that expands the code section on GitHub to fill the width of your browser, and it works great. The problem was that GreaseMonkey is a Firefox-only extension.  Luckily, these days most browsers have a GreaseMonkey equivalent:

      Internet Explorer has one called Trixie.

      Chrome has one called TamperMonkey. Chrome also supports user scripts natively so you can install them without TamperMonkey, but TamperMonkey helps with the install/uninstall/managing of them.

      So if you have GreaseMonkey or an equivalent installed, then you can simply go ahead and install my user script for free and start viewing code on GitHub in widescreen glory.

      Alternatively, I have also released a free Chrome extension in the Chrome Web Store called Make GitHub Pages Full Width.  When you install it from the store you get all of the added Store benefits, such as having the extension sync across all of your PCs, automatically getting it installed again after you format your PC, etc.

      Results

      If you install the extension and a code page doesn’t expand its width to fit your page, just refresh the page.  If anybody knows how to fix this issue, please let me know.

      And to give you an idea of what the result looks like, here are 2 screenshots; one without the extension installed (top, notice some text goes out of view), and one with it (bottom).

      WithoutFullWidth

      WithFullWidth

      Happy coding!

      Adding a WPF Settings Page To The Tools Options Dialog Window For Your Visual Studio Extension

      April 25th, 2014 No comments

      I recently created my first Visual Studio extension, Diff All Files, which allows you to quickly compare the changes to all files in a TFS changeset, shelveset, or pending changes (Git support coming soon). One of the first challenges I faced when I started the project was where to display my extension’s settings to the user, and where to save them.  My first instinct was to create a new Menu item to launch a page with all of the settings, since the wizard you go through to create the project has an option to automatically add a new Menu item to the Tools menu.  After some Googling though, I found that the more accepted solution is to create a new section within the Tools -> Options window for your extension, as this also allows the user to import and export your extension’s settings.

      Adding a grid or custom Windows Forms settings page

      Luckily I found this Stack Overflow answer that shows a Visual Basic example of how to do this, and links to the MSDN page that also shows how to do it in C#.  The MSDN page is a great resource, and it shows you everything you need to create your settings page as either a Grid Page, or a Custom Page using Windows Forms (FYI: when it says to add a UserControl, it means a System.Windows.Forms.UserControl, not a System.Windows.Controls.UserControl).  My extension’s settings page needed buttons on it to perform some operations, which is something the Grid Page doesn’t support, so I had to make a Custom Page.  I first made it using Windows Forms as the page shows, but it quickly reminded me how outdated Windows Forms is (no binding!), and my settings page would have to be a fixed width and height, rather than expanding to the size of the user’s Options dialog window, which I didn’t like.

      Adding a custom WPF settings page

      The steps to create a Custom WPF settings page are the same as for creating a Custom Windows Forms page, except that instead of having your settings class inherit from Microsoft.VisualStudio.Shell.DialogPage (steps 1 and 2 on that page), it needs to inherit from Microsoft.VisualStudio.Shell.UIElementDialogPage.  And when you create your User Control for the settings page’s UI, it will be a WPF System.Windows.Controls.UserControl.  Also, instead of overriding the Window property of the DialogPage class, you will override the Child property of the UIElementDialogPage class.

      Here’s a sample of what the Settings class might look like:

      using System.Collections.Generic;
      using System.ComponentModel;
      using System.Linq;
      using System.Runtime.InteropServices;
      using Microsoft.VisualStudio.Shell;
      
      namespace VS_DiffAllFiles.Settings
      {
      	[ClassInterface(ClassInterfaceType.AutoDual)]
      	[Guid("1D9ECCF3-5D2F-4112-9B25-264596873DC9")]	// Special guid to tell it that this is a custom Options dialog page, not the built-in grid dialog page.
      	public class DiffAllFilesSettings : UIElementDialogPage, INotifyPropertyChanged
      	{
      		#region Notify Property Changed
      		/// <summary>
      		/// Inherited event from INotifyPropertyChanged.
      		/// </summary>
      		public event PropertyChangedEventHandler PropertyChanged;
      
      		/// <summary>
      		/// Fires the PropertyChanged event of INotifyPropertyChanged with the given property name.
      		/// </summary>
      		/// <param name="propertyName">The name of the property to fire the event against</param>
      		public void NotifyPropertyChanged(string propertyName)
      		{
      			if (PropertyChanged != null)
      				PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
      		}
      		#endregion
      
      		/// <summary>
      		/// Get / Set if new files being added to source control should be compared.
      		/// </summary>
      		public bool CompareNewFiles { get { return _compareNewFiles; } set { _compareNewFiles = value; NotifyPropertyChanged("CompareNewFiles"); } }
      		private bool _compareNewFiles = false;
      
      		#region Overridden Functions
      
      		/// <summary>
      		/// Gets the Windows Presentation Foundation (WPF) child element to be hosted inside the Options dialog page.
      		/// </summary>
      		/// <returns>The WPF child element.</returns>
      		protected override System.Windows.UIElement Child
      		{
      			get { return new DiffAllFilesSettingsPageControl(this); }
      		}
      
      		/// <summary>
      		/// Should be overridden to reset settings to their default values.
      		/// </summary>
      		public override void ResetSettings()
      		{
      			CompareNewFiles = false;
      			base.ResetSettings();
      		}
      
      		#endregion
      	}
      }
      

       

      And what the code-behind for the User Control might look like:

      using System;
      using System.Diagnostics;
      using System.Linq;
      using System.Windows;
      using System.Windows.Controls;
      using System.Windows.Input;
      using System.Windows.Navigation;
      
      namespace VS_DiffAllFiles.Settings
      {
      	/// <summary>
      	/// Interaction logic for DiffAllFilesSettingsPageControl.xaml
      	/// </summary>
      	public partial class DiffAllFilesSettingsPageControl : UserControl
      	{
      		/// <summary>
      		/// A handle to the Settings instance that this control is bound to.
      		/// </summary>
      		private DiffAllFilesSettings _settings = null;
      
      		public DiffAllFilesSettingsPageControl(DiffAllFilesSettings settings)
      		{
      			InitializeComponent();
      			_settings = settings;
      			this.DataContext = _settings;
      		}
      
      		private void btnRestoreDefaultSettings_Click(object sender, RoutedEventArgs e)
      		{
      			_settings.ResetSettings();
      		}
      
      		private void UserControl_LostKeyboardFocus(object sender, KeyboardFocusChangedEventArgs e)
      		{
      			// Find all TextBoxes in this control and force their Text bindings to fire, to make sure all changes have been saved.
      			// This is required because if the user changes some text and then clicks the Options window's OK button, the window
      			// closes before the TextBox's Text bindings fire, so the new value would not be saved.
      			foreach (var textBox in DiffAllFilesHelper.FindVisualChildren<TextBox>(sender as UserControl))
      			{
      				var bindingExpression = textBox.GetBindingExpression(TextBox.TextProperty);
      				if (bindingExpression != null) bindingExpression.UpdateSource();
      			}
      		}
      	}
      }
      

       

      And here’s the corresponding xaml for the UserControl:

      <UserControl x:Class="VS_DiffAllFiles.Settings.DiffAllFilesSettingsPageControl"
      						 xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
      						 xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
      						 xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" 
      						 xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
      						 xmlns:xctk="http://schemas.xceed.com/wpf/xaml/toolkit"
      						 xmlns:QC="clr-namespace:QuickConverter;assembly=QuickConverter"
      						 mc:Ignorable="d" 
      						 d:DesignHeight="350" d:DesignWidth="400" LostKeyboardFocus="UserControl_LostKeyboardFocus">
      	<UserControl.Resources>
      	</UserControl.Resources>
      
      	<Grid>
      		<StackPanel Orientation="Vertical">
      			<CheckBox Content="Compare new files" IsChecked="{Binding Path=CompareNewFiles}" ToolTip="If files being added to source control should be compared." />
      			<Button Content="Restore Default Settings" Click="btnRestoreDefaultSettings_Click" />
      		</StackPanel>
      	</Grid>
      </UserControl>
      

      You can see that I am binding the CheckBox directly to the CompareNewFiles property on the instance of my Settings class; yay, no messing around with Checked events :)

      This is a complete, but very simple example. If you want a more detailed example that shows more controls, check out the source code for my Diff All Files extension.

      A minor problem

      One problem I found was that when using a TextBox on my Settings Page UserControl, if I edited text in a TextBox and then hit the OK button on the Options dialog to close the window, the new text would not actually get applied.  This was because the window would get closed before the TextBox bindings had a chance to fire; so if I instead clicked out of the TextBox before clicking the OK button, everything worked correctly.  I know you can change the binding’s UpdateSourceTrigger to PropertyChanged, but I perform some additional logic when some of my textbox text is changed, and I didn’t want that logic firing after every key press while the user typed in the TextBox.

      To solve this problem I added a LostKeyboardFocus event to the UserControl, and in that event I find all TextBox controls on the UserControl and force their bindings to update.  You can see the code for this in the snippets above.  The one piece of code that’s not shown is the FindVisualChildren<TextBox> method, so here it is:

      /// <summary>
      /// Recursively finds the visual children of the given control.
      /// </summary>
      /// <typeparam name="T">The type of control to look for.</typeparam>
      /// <param name="dependencyObject">The dependency object.</param>
      public static IEnumerable<T> FindVisualChildren<T>(DependencyObject dependencyObject) where T : DependencyObject
      {
      	if (dependencyObject != null)
      	{
      		for (int index = 0; index < VisualTreeHelper.GetChildrenCount(dependencyObject); index++)
      		{
      			DependencyObject child = VisualTreeHelper.GetChild(dependencyObject, index);
      			if (child != null && child is T)
      			{
      				yield return (T)child;
      			}
      
      			foreach (T childOfChild in FindVisualChildren<T>(child))
      			{
      				yield return childOfChild;
      			}
      		}
      	}
      }
      

       

      And that’s it.  Now you know how to make a nice Settings Page for your Visual Studio extension using WPF, instead of the archaic Windows Forms.

      Happy coding!

      Template Solution For Deploying TFS Checkin Policies To Multiple Versions Of Visual Studio And Having Them Automatically Work From “TF.exe Checkin” Too

      March 24th, 2014 No comments

      Get the source code

      Let’s get right to it by giving you the source code.  You can get it from the MSDN samples here.

       

      Explanation of source code and adding new checkin policies

      If you open the Visual Studio (VS) solution the first thing you will likely notice is that there are 5 projects.  CheckinPolicies.VS2012 simply references all of the files in CheckinPolicies.VS2013 as links (i.e. shortcut files); this is because we need to compile the CheckinPolicies.VS2012 project against the TFS 2012 assemblies and the CheckinPolicies.VS2013 project against the TFS 2013 assemblies, but want both projects to contain all of the same checkin policies.  So the projects contain all of the same files; just a few of their references are different.  A copy of the references that differ between the two projects is stored in each project’s “Dependencies” folder; these are the Team Foundation assemblies that are specific to VS 2012 and 2013.  Having these assemblies stored in the solution allows us to still build the VS 2012 checkin policies even if you (or a colleague) only have VS 2013 installed.

      Update: To avoid having multiple CheckinPolicies.VS* projects, we could use the MSBuild targets technique that P. Kelly shows on his blog. However, I believe we would still need multiple deployment projects, as described below, in order to have the checkin policies work outside of Visual Studio.

      The other projects are CheckinPolicyDeployment.VS2012 and CheckinPolicyDeployment.VS2013 (both of which are VSPackage projects), and CheckinPolicyDeploymentShared.  The CheckinPolicyDeployment.VS2012/VS2013 projects will generate the VSIX files that are used to distribute the checkin policies, and CheckinPolicyDeploymentShared contains files/code that are common to both of the projects (the projects reference the files by linking to them).

      Basically everything is ready to go.  Just start adding new checkin policy classes to the CheckinPolicies.VS2013 project, and then also add them to the CheckinPolicies.VS2012 project as links.  You can add a file as a link in two different ways from the Solution Explorer:

      1. Right-click on the CheckinPolicies.VS2012 project and choose Add -> Existing Item…, and then navigate to the new class file that you added to the CheckinPolicies.VS2013 project.  Instead of clicking the Add button though, click the little down arrow on the side of the Add button and then choose Add As Link.
      2. Drag and drop the file from the CheckinPolicies.VS2013 project to the CheckinPolicies.VS2012 project, but hold down the Alt key while releasing the left mouse button to drop the file; this changes the operation from adding a copy of the file to adding a shortcut file that links back to the original.

      There is a DummyCheckinPolicy.cs file in the CheckinPolicies.VS2013 project that shows you an example of how to create a new checkin policy.  Basically you just need to create a new public, serializable class that extends the CheckinPolicyBase class.  The actual logic for your checkin policy goes in the Evaluate() function.  If there is a policy violation in the code being checked in, just add a new PolicyFailure instance to the failures list with the message that you want the user to see.

          Building a new version of your checkin policies

          Once you are ready to deploy your policies, you will want to update the version number in the source.extension.vsixmanifest file in both the CheckinPolicyDeployment.VS2012 and CheckinPolicyDeployment.VS2013 projects.  Since these projects will both contain the same policies, I recommend giving them the same version number as well.  Once you have updated the version number, build the solution in Release mode.  From there you will find the new VSIX files at "CheckinPolicyDeployment.VS2012\bin\Release\TFS Checkin Policies VS2012.vsix" and "CheckinPolicyDeployment.VS2013\bin\Release\TFS Checkin Policies VS2013.vsix".  You can then distribute them to your team; I recommend setting up an internal VS Extension Gallery, but the poor-man’s solution is to just email the vsix file out to everyone on your team.
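
          As a reference point, for VS 2012/2013 VSIX packages the version to bump lives on the Identity element of each source.extension.vsixmanifest (VSIX 2.0 schema).  The Id and Publisher values below are placeholders of my own, not from the template solution:

```xml
<!-- Hypothetical excerpt; only the Version attribute needs to change per release. -->
<PackageManifest Version="2.0.0" xmlns="http://schemas.microsoft.com/developer/vsx-schema/2011">
  <Metadata>
    <Identity Id="TfsCheckinPolicies.VS2013" Version="1.1.0" Language="en-US" Publisher="YourCompany" />
    <DisplayName>TFS Checkin Policies VS2013</DisplayName>
  </Metadata>
</PackageManifest>
```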

          Having the policies automatically work outside of Visual Studio

          This is already hooked up and working in the template solution, so nothing needs to be changed there, but I will explain how it works here.  A while back I blogged about how to get your Team Foundation Server (TFS) checkin policies to still work when checking code in from the command line via the “tf checkin” command; by default, when installing your checkin policies via a VSIX package (the MS recommended approach) you can only get them to work in Visual Studio.  I hated that I would need to manually run the script I provided each time the checkin policies were updated, so I posted a question on Stack Overflow about how to run a script automatically after the VSIX package installs the extension.  It turns out that you can’t do that, but what you can do is use a VSPackage instead, which still uses VSIX to deploy the extension, but also allows us to hook into Visual Studio events to run our script when VS starts up or exits.

          Here is the VSPackage class code to hook up the events and call our UpdateCheckinPoliciesInRegistry() function:

          /// <summary>
          /// This is the class that implements the package exposed by this assembly.
          ///
          /// The minimum requirement for a class to be considered a valid package for Visual Studio
          /// is to implement the IVsPackage interface and register itself with the shell.
          /// This package uses the helper classes defined inside the Managed Package Framework (MPF)
          /// to do it: it derives from the Package class that provides the implementation of the 
          /// IVsPackage interface and uses the registration attributes defined in the framework to 
          /// register itself and its components with the shell.
          /// </summary>
          // This attribute tells the PkgDef creation utility (CreatePkgDef.exe) that this class is
          // a package.
          [PackageRegistration(UseManagedResourcesOnly = true)]
          // This attribute is used to register the information needed to show this package
          // in the Help/About dialog of Visual Studio.
          [InstalledProductRegistration("#110", "#112", "1.0", IconResourceID = 400)]
          // Auto Load our assembly even when no solution is open (by using the Microsoft.VisualStudio.VSConstants.UICONTEXT_NoSolution guid).
          [ProvideAutoLoad("ADFC4E64-0397-11D1-9F4E-00A0C911004F")]
          public abstract class CheckinPolicyDeploymentPackage : Package
          {
          	private EnvDTE.DTEEvents _dteEvents;
          
          	/// <summary>
          	/// Initialization of the package; this method is called right after the package is sited, so this is the place
          	/// where you can put all the initialization code that relies on services provided by Visual Studio.
          	/// </summary>
          	protected override void Initialize()
          	{
          		base.Initialize();
          
          		var dte = (DTE2)GetService(typeof(SDTE));
          		_dteEvents = dte.Events.DTEEvents;
          		_dteEvents.OnBeginShutdown += OnBeginShutdown;
          
          		UpdateCheckinPoliciesInRegistry();
          	}
          
          	private void OnBeginShutdown()
          	{
          		_dteEvents.OnBeginShutdown -= OnBeginShutdown;
          		_dteEvents = null;
          
          		UpdateCheckinPoliciesInRegistry();
          	}
          
          	private void UpdateCheckinPoliciesInRegistry()
          	{
          		var dte = (DTE2)GetService(typeof(SDTE));
          		string visualStudioVersionNumber = dte.Version;
          		string customCheckinPolicyEntryName = "CheckinPolicies";
          
          		// Create the paths to the registry keys that contains the values to inspect.
          		string desiredRegistryKeyPath = string.Format("HKEY_CURRENT_USER\\Software\\Microsoft\\VisualStudio\\{0}_Config\\TeamFoundation\\SourceControl\\Checkin Policies", visualStudioVersionNumber);
          		string currentRegistryKeyPath = string.Empty;
          		if (Environment.Is64BitOperatingSystem)
          			currentRegistryKeyPath = string.Format("HKEY_LOCAL_MACHINE\\SOFTWARE\\Wow6432Node\\Microsoft\\VisualStudio\\{0}\\TeamFoundation\\SourceControl\\Checkin Policies", visualStudioVersionNumber);
          		else
          			currentRegistryKeyPath = string.Format("HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\VisualStudio\\{0}\\TeamFoundation\\SourceControl\\Checkin Policies", visualStudioVersionNumber);
          
          		// Get the value that the registry should have, and the value that it currently has.
          		var desiredRegistryValue = Registry.GetValue(desiredRegistryKeyPath, customCheckinPolicyEntryName, null);
          		var currentRegistryValue = Registry.GetValue(currentRegistryKeyPath, customCheckinPolicyEntryName, null);
          
          		// If the registry value is already up to date, just exit without updating the registry.
          		if (desiredRegistryValue == null || desiredRegistryValue.Equals(currentRegistryValue))
          			return;
          
          		// Get the path to the PowerShell script to run.
          		string powerShellScriptFilePath = Path.Combine(Path.GetDirectoryName(System.Reflection.Assembly.GetAssembly(typeof(CheckinPolicyDeploymentPackage)).Location),
          			"FilesFromShared", "UpdateCheckinPolicyInRegistry.ps1");
          
          		// Start a new process to execute the batch file script, which calls the PowerShell script to do the actual work.
          		var process = new Process
          		{
          			StartInfo =
          			{
          				FileName = "PowerShell",
          				Arguments = string.Format("-NoProfile -ExecutionPolicy Bypass -File \"{0}\" -VisualStudioVersion \"{1}\" -CustomCheckinPolicyEntryName \"{2}\"", powerShellScriptFilePath, visualStudioVersionNumber, customCheckinPolicyEntryName),
          
          				// Hide the PowerShell window while we run the script.
          				CreateNoWindow = true,
          				UseShellExecute = false
          			}
          		};
          		process.Start();
          	}
          }
          

          All of the attributes on the class are put there by default, except for the “[ProvideAutoLoad("ADFC4E64-0397-11D1-9F4E-00A0C911004F")]” one; this attribute is the one that actually allows the Initialize() function to get called when Visual Studio starts.  You can see in the Initialize method that we hook up an event so that the UpdateCheckinPoliciesInRegistry() function gets called when VS is closed, and we also call that function from Initialize(), which is called when VS starts up.

          You might have noticed that this class is abstract.  This is because the VS 2012 and VS 2013 classes need to have a unique Guid attribute, so the actual VSPackage class just inherits from this one.  Here is what the VS 2013 one looks like:

          [Guid(GuidList.guidCheckinPolicyDeployment_VS2013PkgString)]
          public sealed class CheckinPolicyDeployment_VS2013Package : CheckinPolicyDeploymentShared.CheckinPolicyDeploymentPackage
          { }
          

          The UpdateCheckinPoliciesInRegistry() function checks to see if the appropriate registry key has already been updated to allow the checkin policies to run from the “tf checkin” command.  If it has, the function simply exits; otherwise it calls a PowerShell script to set the key for us.  A PowerShell script is used because modifying this part of the registry requires admin permissions, and we can easily run a new PowerShell process as admin (assuming the logged-in user is an admin on their local machine, which is the case for everyone in our company).

          The one variable to note here is the customCheckinPolicyEntryName. This corresponds to the registry key name that I’ve specified in the RegistryKeyToAdd.pkgdef file, so if you change it be sure to change it in both places.  This is what the RegistryKeyToAdd.pkgdef file contains:

          // We use "\..\" in the value because the projects that include this file place it in a "FilesFromShared" folder, and we want it to look for the dll in the root directory.
          [$RootKey$\TeamFoundation\SourceControl\Checkin Policies]
          "CheckinPolicies"="$PackageFolder$\..\CheckinPolicies.dll"
          

          And here are the contents of the UpdateCheckinPolicyInRegistry.ps1 PowerShell file.  This is basically just a refactored version of the script I posted on my old blog post:

          # This script copies the required registry value so that the checkin policies will work when doing a TFS checkin from the command line.
          param
          (
          	[parameter(Mandatory=$true,HelpMessage="The version of Visual Studio to update in the registry (i.e. '11.0' for VS 2012, '12.0' for VS 2013)")]
          	[string]$VisualStudioVersion,
          
          	[parameter(HelpMessage="The name of the Custom Checkin Policy Entry in the Registry Key.")]
          	[string]$CustomCheckinPolicyEntryName = 'CheckinPolicies'
          )
          
          # Turn on Strict Mode to help catch syntax-related errors.
          # 	This must come after a script's/function's param section.
          # 	Forces a function to be the first non-comment code to appear in a PowerShell Module.
          Set-StrictMode -Version Latest
          
          $ScriptBlock = {
          	function UpdateCheckinPolicyInRegistry([parameter(Mandatory=$true)][string]$VisualStudioVersion, [string]$CustomCheckinPolicyEntryName)
          	{
          		$status = 'Updating registry to allow checkin policies to work outside of Visual Studio version ' + $VisualStudioVersion + '.'
          		Write-Output $status
          
          		# Get the Registry Key Entry that holds the path to the Custom Checkin Policy Assembly.
          		$HKCUKey = 'HKCU:\Software\Microsoft\VisualStudio\' + $VisualStudioVersion + '_Config\TeamFoundation\SourceControl\Checkin Policies'
          		$CustomCheckinPolicyRegistryEntry = Get-ItemProperty -Path $HKCUKey -Name $CustomCheckinPolicyEntryName
          		$CustomCheckinPolicyEntryValue = $CustomCheckinPolicyRegistryEntry.($CustomCheckinPolicyEntryName)
          
          		# Create a new Registry Key Entry for the iQ Checkin Policy Assembly so they will work from the command line (as well as from Visual Studio).
          		if ([Environment]::Is64BitOperatingSystem)
          		{ $HKLMKey = 'HKLM:\SOFTWARE\Wow6432Node\Microsoft\VisualStudio\' + $VisualStudioVersion + '\TeamFoundation\SourceControl\Checkin Policies' }
          		else
          		{ $HKLMKey = 'HKLM:\SOFTWARE\Microsoft\VisualStudio\' + $VisualStudioVersion + '\TeamFoundation\SourceControl\Checkin Policies' }
          		Set-ItemProperty -Path $HKLMKey -Name $CustomCheckinPolicyEntryName -Value $CustomCheckinPolicyEntryValue
          	}
          }
          
          # Run the script block as admin so it has permissions to modify the registry.
          Start-Process -FilePath PowerShell -Verb RunAs -ArgumentList "-NoProfile -ExecutionPolicy Bypass -Command & {$ScriptBlock UpdateCheckinPolicyInRegistry -VisualStudioVersion ""$VisualStudioVersion"" -CustomCheckinPolicyEntryName ""$CustomCheckinPolicyEntryName""}"
          

          While I could have just used a much smaller PowerShell script that simply set a given registry key to a given value, I chose to have some code duplication between the C# code and this script so that this script can still be used as a stand-alone script if needed.

          The slight downside to using a VSPackage is that this script still won’t get called until the user closes or opens a new instance of Visual Studio, so the checkin policies won’t work immediately from the “tf checkin” command after updating the checkin policies extension, but this still beats having to remember to manually run the script.

           

          Conclusion

          So I’ve given you a template solution that you can use without any modification to start creating your VS 2012 and VS 2013 compatible checkin policies; just add new class files to the CheckinPolicies.VS2013 project, and then add them to the CheckinPolicies.VS2012 project as links.  Using links means you only have to modify each checkin policy file once, and the changes flow to both the 2012 and 2013 VSIX packages.  Hopefully this template solution helps you get your TFS checkin policies up and running faster.

          Happy Coding!

          Saving And Loading A C# Object’s Data To An Xml, Json, Or Binary File

          March 14th, 2014 2 comments

          I love creating tools, particularly ones for myself and other developers to use.  A common situation that I run into is needing to save the user’s settings to a file so that I can load them up the next time the tool is run.  I find that the easiest way to accomplish this is to create a Settings class to hold all of the user’s settings, and then use serialization to save and load the class instance to/from a file.  I mention a Settings class here, but you can use this technique to save any object (or list of objects) to a file.

          There are tons of different formats that you may want to save your object instances in, but the big three are Binary, XML, and Json.  Each of these formats has its pros and cons, which I won’t go into.  Below I present functions that can be used to save and load any object instance to / from a file, as well as the different aspects to be aware of when using each method.

          The following code (without examples of how to use it) is also available here, and can be used directly from my NuGet package.

           

          Writing and Reading an object to / from a Binary file

          • Writes and reads ALL object properties and variables to / from the file (i.e. public, protected, internal, and private).
          • The data saved to the file is not human readable, and thus cannot be edited outside of your application.
          • Have to decorate class (and all classes that it contains) with a [Serializable] attribute.
          • Use the [NonSerialized] attribute to exclude a variable from being written to the file; there is no way to prevent an auto-property from being serialized besides making it use a backing variable and putting the [NonSerialized] attribute on that.
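
          The backing-variable workaround from the last bullet point looks like this (a hypothetical Settings class of my own, not from the snippets below):

```csharp
using System;

[Serializable]
public class Settings
{
	// [NonSerialized] cannot be applied to an auto-property directly, so use
	// an explicit backing field and decorate the field instead.
	[NonSerialized]
	private string _sessionToken;

	public string SessionToken
	{
		get { return _sessionToken; }
		set { _sessionToken = value; }
	}

	public string UserName { get; set; }	// This auto-property WILL be serialized.
}
```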
          /// <summary>
          /// Functions for performing common binary Serialization operations.
          /// <para>All properties and variables will be serialized.</para>
          /// <para>Object type (and all child types) must be decorated with the [Serializable] attribute.</para>
          /// <para>To prevent a variable from being serialized, decorate it with the [NonSerialized] attribute; cannot be applied to properties.</para>
          /// </summary>
          public static class BinarySerialization
          {
          	/// <summary>
          	/// Writes the given object instance to a binary file.
          	/// <para>Object type (and all child types) must be decorated with the [Serializable] attribute.</para>
          	/// <para>To prevent a variable from being serialized, decorate it with the [NonSerialized] attribute; cannot be applied to properties.</para>
          	/// </summary>
          	/// <typeparam name="T">The type of object being written to the binary file.</typeparam>
          	/// <param name="filePath">The file path to write the object instance to.</param>
          	/// <param name="objectToWrite">The object instance to write to the binary file.</param>
          	/// <param name="append">If false the file will be overwritten if it already exists. If true the contents will be appended to the file.</param>
          	public static void WriteToBinaryFile<T>(string filePath, T objectToWrite, bool append = false)
          	{
          		using (Stream stream = File.Open(filePath, append ? FileMode.Append : FileMode.Create))
          		{
          			var binaryFormatter = new System.Runtime.Serialization.Formatters.Binary.BinaryFormatter();
          			binaryFormatter.Serialize(stream, objectToWrite);
          		}
          	}
          
          	/// <summary>
          	/// Reads an object instance from a binary file.
          	/// </summary>
          	/// <typeparam name="T">The type of object to read from the binary file.</typeparam>
          	/// <param name="filePath">The file path to read the object instance from.</param>
          	/// <returns>Returns a new instance of the object read from the binary file.</returns>
          	public static T ReadFromBinaryFile<T>(string filePath)
          	{
          		using (Stream stream = File.Open(filePath, FileMode.Open))
          		{
          			var binaryFormatter = new System.Runtime.Serialization.Formatters.Binary.BinaryFormatter();
          			return (T)binaryFormatter.Deserialize(stream);
          		}
          	}
          }
          

           

          And here is an example of how to use it:

          [Serializable]
          public class Person
          {
          	public string Name { get; set; }
          	public int Age = 20;
          	public Address HomeAddress { get; set;}
          	private string _thisWillGetWrittenToTheFileToo = "even though it is a private variable.";
          
          	[NonSerialized]
          	public string ThisWillNotBeWrittenToTheFile = "because of the [NonSerialized] attribute.";
          }
          
          [Serializable]
          public class Address
          {
          	public string StreetAddress { get; set; }
          	public string City { get; set; }
          }
          
          // And then in some function.
          Person person = new Person() { Name = "Dan", Age = 30, HomeAddress = new Address() { StreetAddress = "123 My St", City = "Regina" } };
          List<Person> people = GetListOfPeople();
          BinarySerialization.WriteToBinaryFile<Person>(@"C:\person.bin", person);
          BinarySerialization.WriteToBinaryFile<List<Person>>(@"C:\people.bin", people);
          
          // Then in some other function.
          Person person = BinarySerialization.ReadFromBinaryFile<Person>(@"C:\person.bin");
          List<Person> people = BinarySerialization.ReadFromBinaryFile<List<Person>>(@"C:\people.bin");
          

           

          Writing and Reading an object to / from an XML file (Using System.Xml.Serialization.XmlSerializer in the System.Xml assembly)

          • Only writes and reads the Public properties and variables to / from the file.
          • Classes to be serialized must contain a public parameterless constructor.
          • The data saved to the file is human readable, so it can easily be edited outside of your application.
          • Use the [XmlIgnore] attribute to exclude a public property or variable from being written to the file.
          /// <summary>
          /// Functions for performing common XML Serialization operations.
          /// <para>Only public properties and variables will be serialized.</para>
          /// <para>Use the [XmlIgnore] attribute to prevent a property/variable from being serialized.</para>
          /// <para>Object to be serialized must have a parameterless constructor.</para>
          /// </summary>
          public static class XmlSerialization
          {
          	/// <summary>
          	/// Writes the given object instance to an XML file.
          	/// <para>Only Public properties and variables will be written to the file. These can be any type though, even other classes.</para>
          	/// <para>If there are public properties/variables that you do not want written to the file, decorate them with the [XmlIgnore] attribute.</para>
          	/// <para>Object type must have a parameterless constructor.</para>
          	/// </summary>
          	/// <typeparam name="T">The type of object being written to the file.</typeparam>
          	/// <param name="filePath">The file path to write the object instance to.</param>
          	/// <param name="objectToWrite">The object instance to write to the file.</param>
          	/// <param name="append">If false the file will be overwritten if it already exists. If true the contents will be appended to the file.</param>
          	public static void WriteToXmlFile<T>(string filePath, T objectToWrite, bool append = false) where T : new()
          	{
          		TextWriter writer = null;
          		try
          		{
          			var serializer = new XmlSerializer(typeof(T));
          			writer = new StreamWriter(filePath, append);
          			serializer.Serialize(writer, objectToWrite);
          		}
          		finally
          		{
          			if (writer != null)
          				writer.Close();
          		}
          	}
          
          	/// <summary>
          	/// Reads an object instance from an XML file.
          	/// <para>Object type must have a parameterless constructor.</para>
          	/// </summary>
          	/// <typeparam name="T">The type of object to read from the file.</typeparam>
          	/// <param name="filePath">The file path to read the object instance from.</param>
          	/// <returns>Returns a new instance of the object read from the XML file.</returns>
          	public static T ReadFromXmlFile<T>(string filePath) where T : new()
          	{
          		TextReader reader = null;
          		try
          		{
          			var serializer = new XmlSerializer(typeof(T));
          			reader = new StreamReader(filePath);
          			return (T)serializer.Deserialize(reader);
          		}
          		finally
          		{
          			if (reader != null)
          				reader.Close();
          		}
          	}
          }
          

           

          And here is an example of how to use it:

          public class Person
          {
          	public string Name { get; set; }
          	public int Age = 20;
          	public Address HomeAddress { get; set;}
          	private string _thisWillNotGetWrittenToTheFile = "because it is not public.";
          
          	[XmlIgnore]
          	public string ThisWillNotBeWrittenToTheFile = "because of the [XmlIgnore] attribute.";
          }
          
          public class Address
          {
          	public string StreetAddress { get; set; }
          	public string City { get; set; }
          }
          
          // And then in some function.
          Person person = new Person() { Name = "Dan", Age = 30, HomeAddress = new Address() { StreetAddress = "123 My St", City = "Regina" } };
          List<Person> people = GetListOfPeople();
          XmlSerialization.WriteToXmlFile<Person>(@"C:\person.txt", person);
          XmlSerialization.WriteToXmlFile<List<Person>>(@"C:\people.txt", people);
          
          // Then in some other function.
          Person person = XmlSerialization.ReadFromXmlFile<Person>(@"C:\person.txt");
          List<Person> people = XmlSerialization.ReadFromXmlFile<List<Person>>(@"C:\people.txt");
          

           

          Writing and Reading an object to / from a Json file (using the Newtonsoft.Json assembly in the Json.NET NuGet package)

          • Only writes and reads the Public properties and variables to / from the file.
          • Classes to be serialized must contain a public parameterless constructor.
          • The data saved to the file is human readable, so it can easily be edited outside of your application.
          • Use the [JsonIgnore] attribute to exclude a public property or variable from being written to the file.

          /// <summary>
          /// Functions for performing common Json Serialization operations.
          /// <para>Requires the Newtonsoft.Json assembly (Json.Net package in NuGet Gallery) to be referenced in your project.</para>
          /// <para>Only public properties and variables will be serialized.</para>
          /// <para>Use the [JsonIgnore] attribute to ignore specific public properties or variables.</para>
          /// <para>Object to be serialized must have a parameterless constructor.</para>
          /// </summary>
          public static class JsonSerialization
          {
          	/// <summary>
          	/// Writes the given object instance to a Json file.
          	/// <para>Object type must have a parameterless constructor.</para>
          	/// <para>Only Public properties and variables will be written to the file. These can be any type though, even other classes.</para>
          	/// <para>If there are public properties/variables that you do not want written to the file, decorate them with the [JsonIgnore] attribute.</para>
          	/// </summary>
          	/// <typeparam name="T">The type of object being written to the file.</typeparam>
          	/// <param name="filePath">The file path to write the object instance to.</param>
          	/// <param name="objectToWrite">The object instance to write to the file.</param>
          	/// <param name="append">If false the file will be overwritten if it already exists. If true the contents will be appended to the file.</param>
          	public static void WriteToJsonFile<T>(string filePath, T objectToWrite, bool append = false) where T : new()
          	{
          		TextWriter writer = null;
          		try
          		{
          			var contentsToWriteToFile = Newtonsoft.Json.JsonConvert.SerializeObject(objectToWrite);
          			writer = new StreamWriter(filePath, append);
          			writer.Write(contentsToWriteToFile);
          		}
          		finally
          		{
          			if (writer != null)
          				writer.Close();
          		}
          	}
          
          	/// <summary>
          	/// Reads an object instance from a Json file.
          	/// <para>Object type must have a parameterless constructor.</para>
          	/// </summary>
          	/// <typeparam name="T">The type of object to read from the file.</typeparam>
          	/// <param name="filePath">The file path to read the object instance from.</param>
          	/// <returns>Returns a new instance of the object read from the Json file.</returns>
          	public static T ReadFromJsonFile<T>(string filePath) where T : new()
          	{
          		TextReader reader = null;
          		try
          		{
          			reader = new StreamReader(filePath);
          			var fileContents = reader.ReadToEnd();
          			return Newtonsoft.Json.JsonConvert.DeserializeObject<T>(fileContents);
          		}
          		finally
          		{
          			if (reader != null)
          				reader.Close();
          		}
          	}
          }
          

          And here is an example of how to use it:

          public class Person
          {
          	public string Name { get; set; }
          	public int Age = 20;
          	public Address HomeAddress { get; set;}
          	private string _thisWillNotGetWrittenToTheFile = "because it is not public.";
          
          	[JsonIgnore]
          	public string ThisWillNotBeWrittenToTheFile = "because of the [JsonIgnore] attribute.";
          }
          
          public class Address
          {
          	public string StreetAddress { get; set; }
          	public string City { get; set; }
          }
          
          // And then in some function.
          Person person = new Person() { Name = "Dan", Age = 30, HomeAddress = new Address() { StreetAddress = "123 My St", City = "Regina" } };
          List<Person> people = GetListOfPeople();
          JsonSerialization.WriteToJsonFile<Person>(@"C:\person.txt", person);
          JsonSerialization.WriteToJsonFile<List<Person>>(@"C:\people.txt", people);
          
          // Then in some other function.
          Person person = JsonSerialization.ReadFromJsonFile<Person>(@"C:\person.txt");
          List<Person> people = JsonSerialization.ReadFromJsonFile<List<Person>>(@"C:\people.txt");
          

           

          As you can see, the Json example is almost identical to the Xml example, with the exception of using the [JsonIgnore] attribute instead of [XmlIgnore].

           

          Writing and Reading an object to / from a Json file (using the JavaScriptSerializer in the System.Web.Extensions assembly)

          There are many Json serialization libraries out there.  I mentioned the Newtonsoft.Json one because it is very popular, and I am also mentioning this JavaScriptSerializer one because it is built into the .Net framework.  The catch with this one though is that it requires the Full .Net 4.0 framework, not just the .Net Framework 4.0 Client Profile.

          The caveats to be aware of are the same between the Newtonsoft.Json and JavaScriptSerializer libraries, except instead of using [JsonIgnore] you would use [ScriptIgnore].
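For example, a trimmed-down Person class using [ScriptIgnore] might look like the following. This is just a minimal sketch: it assumes the full .Net Framework with a reference to the System.Web.Extensions assembly, and the Demo class and field names are purely for illustration.

```csharp
using System;
using System.Web.Script.Serialization; // requires a reference to System.Web.Extensions

public class Person
{
	public string Name { get; set; }

	// [ScriptIgnore] plays the same role here that [JsonIgnore] plays for Newtonsoft.Json.
	[ScriptIgnore]
	public string ThisWillNotBeWrittenToTheFile = "because of the [ScriptIgnore] attribute.";
}

public static class Demo
{
	public static void Main()
	{
		var serializer = new JavaScriptSerializer();
		string json = serializer.Serialize(new Person() { Name = "Dan" });
		Console.WriteLine(json); // the [ScriptIgnore] field is not included in the output

		Person person = serializer.Deserialize<Person>(json);
		Console.WriteLine(person.Name);
	}
}
```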

          Be aware that the JavaScriptSerializer is in the System.Web.Extensions assembly, but in the System.Web.Script.Serialization namespace.  Here is the code from the Newtonsoft.Json code snippet that needs to be replaced in order to use the JavaScriptSerializer:

// In WriteToJsonFile<T>() function replace:
          var contentsToWriteToFile = Newtonsoft.Json.JsonConvert.SerializeObject(objectToWrite);
          // with:
          var contentsToWriteToFile = new System.Web.Script.Serialization.JavaScriptSerializer().Serialize(objectToWrite);
          
          // In ReadFromJsonFile<T>() function replace:
          return Newtonsoft.Json.JsonConvert.DeserializeObject<T>(fileContents);
          // with:
          return new System.Web.Script.Serialization.JavaScriptSerializer().Deserialize<T>(fileContents);
          

           

          Happy Coding!

Categories: C#, Json, XML

          “Agent lost communication with Team Foundation Server” TFS Build Server Error

          March 12th, 2014 No comments

          We had recently started getting lots of error messages similar to the following on our TFS Build Servers:

          Exception Message: The build failed because the build server that hosts build agent TFS-BuildController001 - Agent4 lost communication with Team Foundation Server. (type FaultException`1) 
          

          This error message would appear randomly; some builds would pass, others would fail, and when they did fail with this error message it was often at different parts in the build process.

          After a bit of digging I found this post and this one, which discussed different error messages around their build process failing with some sort of error around the build controller losing connection to the TFS server.  They talked about different fixes relating to DNS issues and load balancing, so we had our network team update our DNS records and flush the cache, but were still getting the same errors.

          We have several build controllers, and I noticed that the problem was only happening on two of the three, so our network team updated the hosts file on the two with the problem to match the entries in the one that was working fine, and boom, everything started working properly again :)

          So the problem was that the hosts file on those two build controller machines somehow got changed.

          The hosts file can typically be found at "C:\Windows\System32\Drivers\etc\hosts", and here is an example of what we now have in our hosts file for entries (just the two entries):

          12.345.67.89	TFS-Server.OurDomain.local
          12.345.67.89	TFS-Server
          

          If you too are running into this TFS Build Server error I hope this helps.

          If You Like Using Macros or AutoHotkey, You Might Want To Try The Enterpad AHK Keyboard

          February 12th, 2014 No comments

          If you follow my blog then you already know I’m a huge fan of AutoHotkey (AHK), and that I created the AHK Command Picker to allow me to have a limitless number of AHK macros quickly and easily accessible from my keyboard, without having a bunch of hotkeys (i.e. keyboard shortcuts) to remember.  The team over at CEDEQ saw my blog posts and were kind enough to send me an Enterpad AHK Keyboard for free :)

           

          What is the Enterpad AHK Keyboard?

          The Enterpad AHK keyboard is a physical device with 120 different touch spots on it, each of which can be used to trigger a different AHK macro/script.  Here’s a picture of it:

          While macro keyboards/controllers are nothing new, there are a number of things that separate the Enterpad AHK keyboard from your typical macro keyboard:

          1. The touch spots are not physical buttons; instead it uses a simple flat surface with 120 different positions that respond to touch.  Think of it almost as a touch screen, but instead of having a screen to touch, you just touch a piece of paper.
2. This leads to my next point: you can use any overlay you want on the surface of the Enterpad AHK keyboard; the overlay is just a piece of paper.  The default overlay (piece of paper) that it ships with simply has 120 squares on it, each labeled with its number (as shown in the picture above).  Because the overlay is just a piece of paper, you can write (or draw) on it, allowing you to create custom labels for each of your 120 buttons; something that you can’t do with physical buttons.  So what if you add or remap your macros after a month or a year? Just erase and re-write your labels (if you wrote them in pencil), or simply print off a new overlay.  Also, you don’t need to have 120 different buttons; if you only require 12, then you could map 10 touch spots to each of your 12 commands, giving each command a larger touch spot to launch it.
          3. It integrates directly with AHK.  This means that you can easily write your macros/scripts in an awesome language that you (probably) already know.  While you could technically have any old macro keyboard launch AHK scripts, it would mean mapping a keyboard shortcut for each script that you want to launch, which means cluttering up your keyboard shortcuts and potentially running them unintentionally.  With the Enterpad AHK keyboard, AHK simply sees the 120 touch spots as an additional 120 keys on your keyboard, so you don’t have to clutter up your real keyboard’s hotkeys.  Here is an example of a macro that displays a message box when the first touch spot is pressed:
001:
MsgBox, "You pressed touch spot #1."
Return
            

          What do you mean when you say use it to launch a macro or script?

          A macro or script is just a series of operations; basically they can be used to do ANYTHING that you can manually do on your computer.  So some examples of things you can do are:

          • Open an application or file.
          • Type specific text (such as your home address).
          • Click on specific buttons or areas of a window.

          For example, you could have a script that opens Notepad, types “This text was written by an AHK script.”, saves the file to the desktop, and then closes Notepad.  Macros are useful for automating things that you do repeatedly, such as visiting specific websites, entering usernames and passwords, typing out canned responses to emails, and much more.

The AHK community is very large and very active.  You can find a script to do almost anything you want, and when you can’t (or if you need to customize an existing script) you are very likely to get answers to any questions that you post online.  The Enterpad team also has a bunch of general-purpose scripts/examples available for you to use, such as having 10 custom clipboards, where button 1 copies to a custom clipboard and button 11 pastes from it, button 2 copies to a different custom clipboard and button 12 pastes from it, and so on.

           

          Why would I want the Enterpad AHK Keyboard?

          If you are a fan of AutoHotkey and would like a separate physical device to launch your macros/scripts, the Enterpad AHK Keyboard is definitely a great choice.  If you don’t want a separate physical device, be sure to check out AHK Command Picker, as it provides many of the same benefits without requiring a new piece of hardware.

          Some reasons you might want an Enterpad AHK Keyboard:

          • You use (or want to learn) AutoHotkey and prefer a separate physical device to launch your scripts.
          • You want to be able to launch your scripts with a single button.
          • You don’t want to clutter up your keyboard shortcuts.
          • You want to be able to label all of your hotkeys.

          Some reasons you may want a different macro keyboard:

          • It does not use physical buttons.  This is great for some situations, but not for others.  For example, if you are a gamer looking for a macro keyboard then you might prefer one with physical buttons so that you do not have to look away from the screen to be sure about which button you are pressing.  Since the overlay is just a piece of paper though, you could perhaps do something like use little pieces of sticky-tac to mark certain buttons, so you could know which button your finger is on simply by feeling it.
          • The price. At nearly $300 US, the Enterpad AHK keyboard is more expensive than many other macro keyboards.  That said, those keyboards also don’t provide all of the benefits that the Enterpad AHK keyboard does.

Even if you don’t want to use the Enterpad AHK keyboard yourself, you may want to get it for a friend or relative; especially a very non-technical one.  For example, you could hook it up to your grandma’s computer and write an AHK script that calls your computer via Skype, and then label a button (or 10 buttons to make it nice and big) on the Enterpad AHK keyboard so it is clear which button to press in order to call you.

One market where I think the Enterpad AHK keyboard could really be useful is the corporate world, where many people do the same job and all follow a set of instructions to do some processing; for example, a call center where tens or hundreds of employees use the same software and perform the same tasks.  One of their duties might be placing new product orders for a caller, which may involve clicking through 10 different menus or screens in order to get to the correct place to enter the customer’s information.  This whole process could be automated down to a single button press on the Enterpad AHK keyboard.  You are probably thinking that the software should be redesigned to make the process of submitting orders less cumbersome, and you are right, but most companies don’t develop the software that they use, so they are at the mercy of the 3rd party software provider.  In these cases AHK can be a real time-saver: by deploying Enterpad AHK keyboards with a custom-labeled overlay to all of its staff, and having the IT department write the AHK scripts, a company lets all of its employees benefit without them needing to know anything about AHK.

           

          Conclusion

          So should you go buy an Enterpad AHK Keyboard?  That is really up to you.  I have one, but find that I don’t use it very often because I tend to prefer to use the AHK Command Picker software so that my fingers never leave my keyboard.  Some of my co-workers have tried it out though and really love it, so if you prefer to have a separate physical device for launching your macros then the Enterpad AHK Keyboard might be perfect for you.

Categories: AutoHotkey

          Don’t Write WPF Converters; Write C# Inline In Your XAML Instead Using QuickConverter

          December 13th, 2013 1 comment

If you’ve used binding at all in WPF then you more than likely have also written a converter.  There are lots of tutorials on creating converters, so I’m not going to discuss them at length here.  Instead I want to spread the word about a little-known gem called QuickConverter.  QuickConverter is awesome because it allows you to write C# code directly in your XAML; this means no need for creating an explicit converter class.  And it’s available on NuGet so it’s a snap to get it into your project.

           

          A simple inverse boolean converter example

          As a simple example, let’s do an inverse boolean converter; something that is so basic I’m surprised that it is still not included out of the box with Visual Studio (and why packages like WPF Converters exist).  So basically if the property we are binding to is true, we want it to return false, and if it’s false, we want it to return true.

          The traditional approach

          This post shows the code for how you would traditionally accomplish this.  Basically you:

          1) add a new file to your project to hold your new converter class,

          2) have the class implement IValueConverter,

          3) add the class as a resource in your xaml file, and then finally

          4) use it in the Converter property of the xaml control.  Man, that is a lot of work to flip a bit!
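For reference, the converter class from steps 1 and 2 might look something like this; a typical sketch of an inverse boolean converter, not necessarily the exact code from the linked post:

```csharp
using System;
using System.Globalization;
using System.Windows.Data;

// Steps 1 and 2: a new class file whose class implements IValueConverter.
public class InverseBooleanConverter : IValueConverter
{
	// Called when the value flows from the source property to the UI.
	public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
	{
		return !(bool)value;
	}

	// Called when the value flows from the UI back to the source property.
	public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
	{
		return !(bool)value;
	}
}
```

Step 3 would then be declaring it as a resource in your xaml, e.g. `<converters:InverseBooleanConverter x:Key="InverseBooleanConverter" />` (where the `converters:` xmlns prefix is whatever you mapped to the converter’s namespace).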

          Just for reference, this is what step 4 might look like in the xaml:

<CheckBox IsEnabled="{Binding Path=ViewModel.SomeBooleanProperty, Converter={StaticResource InverseBooleanConverter}}" />
          

           

          Using QuickConverter

          This is what you would do using QuickConverter:

          <CheckBox IsEnabled="{qc:Binding '!$P', P={Binding Path=ViewModel.SomeBooleanProperty}}" />
          

That’s it! 1 step! How freaking cool is that!  Basically we bind our SomeBooleanProperty to the variable $P, and then write our C# expression against $P, all in xaml! This also allows us to skip steps 1, 2, and 3 of the traditional approach, allowing you to get more done.

           

          More examples using QuickConverter

          The QuickConverter documentation page shows many more examples, such as a Visibility converter:

          Visibility="{qc:Binding '$P ? Visibility.Visible : Visibility.Collapsed', P={Binding ShowElement}}"
          

           

          Doing a null check:

IsEnabled="{qc:Binding '$P != null', P={Binding Path=SomeProperty}}"
          

           

          Checking a class instance’s property values:

IsEnabled="{qc:Binding '$P.IsValid || $P.ForceAlways', P={Binding Path=SomeClassInstance}}"
          

           

          Doing two-way binding:

          Height="{qc:Binding '$P * 10', ConvertBack='$value * 0.1', P={Binding TestWidth, Mode=TwoWay}}"
          

           

          Doing Multi-binding:

          Angle="{qc:MultiBinding 'Math.Atan2($P0, $P1) * 180 / 3.14159', P0={Binding ActualHeight, ElementName=rootElement}, P1={Binding ActualWidth, ElementName=rootElement}}"
          

           

          Declaring and using local variables in your converter expression:

          IsEnabled="{qc:Binding '(Loc = $P.Value, A = $P.Show) => $Loc != null &amp;&amp; $A', P={Binding Obj}}"
          

          * Note that the "&&" operator must be written as "&amp;&amp;" in XML.

           

          And there is even limited support for using lambdas, which allows LINQ to be used:

          ItemsSource="{qc:Binding '$P.Where(( (int)i ) => (bool)($i % 2 == 0))', P={Binding Source}}"
          

           

          Quick Converter Setup

          As mentioned above, Quick Converter is available via NuGet.  Once you have it installed in your project, there are 2 things you need to do:

1. Register the namespaces of the types that you plan to use in your quick converters

For example, if you want to use the Visibility converter shown above, you need to register the System.Windows namespace, since that is where the Visibility enum being referenced lives.  You can register it with QuickConverter using this line:

          QuickConverter.EquationTokenizer.AddNamespace(typeof(System.Windows.Visibility));
          

In order to avoid a XamlParseException at run-time, this line needs to execute before the quick converter does.  To make this easy, I just register all of the namespaces with QuickConverter in my application’s constructor.  That way I know they have been registered before any quick converter expressions are evaluated.

          So my App.xaml.cs file contains this:

          public partial class App : Application
          {
          	public App() : base()
          	{
          		// Setup Quick Converter.
          		QuickConverter.EquationTokenizer.AddNamespace(typeof(object));
          		QuickConverter.EquationTokenizer.AddNamespace(typeof(System.Windows.Visibility));
          	}
          }
          

Here I also registered the System namespace (using “typeof(object)”) in order to make the primitive types (like bool) available.

           

          2. Add the QuickConverter namespace to your Xaml files

As with all controls in xaml, before you can use a control you must reference the namespace that the control lives in.  So to be able to access and use QuickConverter in your xaml file, you must include its namespace, which can be done using:

          xmlns:qc="clr-namespace:QuickConverter;assembly=QuickConverter"
          

           

          So should I go delete all my existing converters?

          As crazy awesome as QuickConverter is, it’s not a complete replacement for converters.  Here are a few scenarios where you would likely want to stick with traditional converters:

          1. You need some very complex logic that is simply easier to write using a traditional converter.  For example, we have some converters that access our application cache and lock resources and do a lot of other logic, where it would be tough (impossible?) to write all of that logic inline with QuickConverter.  Also, by writing it using the traditional approach you get things like VS intellisense and compile-time error checking.

          2. If the converter logic that you are writing is very complex, you may want it enclosed in a converter class to make it more easily reusable; this allows for a single reusable object and avoids copy-pasting complex logic all over the place.  Perhaps the first time you write it you might do it as a QuickConverter, but if you find yourself copy-pasting that complex logic a lot, move it into a traditional converter.

          3. If you need to debug your converter, that can’t be done with QuickConverter (yet?).

           

          Summary

          So QuickConverter is super useful and can help speed up development time by allowing most, if not all, of your converters to be written inline.  In my experience 95% of converters are doing very simple things (null checks, to strings, adapting one value type to another, etc.), which are easy to implement inline.  This means fewer files and classes cluttering up your projects.  If you need to do complex logic or debug your converters though, then you may want to use traditional converters for those few cases.

          So, writing C# inline in your xaml; how cool is that!  I can’t believe Microsoft didn’t think of and implement this.  One of the hardest things to believe is that Johannes Moersch came up with this idea and implemented it while on a co-op work term in my office!  A CO-OP STUDENT WROTE QUICKCONVERTER!  Obviously Johannes is a very smart guy, and he’s no longer a co-op student; he’ll be finishing up his bachelor’s degree in the coming months.

          I hope you find QuickConverter as helpful as I have, and if you have any suggestions for improvements, be sure to leave Johannes a comment on the CodePlex page.

          Happy coding!

Categories: C#, WPF, XAML

          Get AutoHotkey To Interact With Admin Windows Without Running AHK Script As Admin

          November 21st, 2013 3 comments

A while back I posted about AutoHotkey not being able to interact with Windows 8 windows and other applications that were Ran As Admin.  My solution was to run your AutoHotkey (AHK) script as admin as well, and I also showed how to have your AHK script start automatically with Windows, but not as an admin.  Afterwards I followed that up with a post about how to get your AHK script to run as admin on startup, so life was much better, but still not perfect.

           

          Problems with running your AHK script as admin

          1. You may have to deal with the annoying UAC prompt every time you launch your script.
          2. Any programs the script launches also receive administrative privileges.

#1 is only a problem if you haven’t set your AHK script to run as admin on startup as I showed in my other blog post (i.e. you are still manually launching your script), or if you haven’t changed your UAC settings to never prompt you with notifications, which some companies restrict (see screenshot to the right).

#2 was a problem for me. I use AHK Command Picker every day. A lot. I’m a developer, and in order for Visual Studio to interact with IIS it requires admin privileges, which meant that if I wanted to be able to use AHK Command Picker in Visual Studio, I had to run it as admin as well.  The problem for me was that I use AHK Command Picker to launch almost all of my applications, which meant that most of my apps were now also running as an administrator.  For the most part this was fine, but a couple of programs gave me problems when running as admin.  E.g. with PowerShell ISE, when I double-clicked a PowerShell file to edit it, it would open a new ISE instance instead of opening in the current (admin) instance.

            There is a solution

            Today I stumbled across this post on the AHK community forums.  Lexikos has provided an AHK script that will digitally sign the AutoHotkey executable, allowing it to interact with applications running as admin, even when your AHK script isn’t.

            Running his script is pretty straight forward:

            1. Download and unzip his EnableUIAccess.zip file.
            2. Double-click the EnableUIAccess.ahk script to run it, and it will automatically prompt you.
            3. Read the disclaimer and click OK.
            4. On the Select Source File prompt choose the C:\Program Files\AutoHotkey\AutoHotkey.exe file.  This was already selected by default for me. (Might be Program Files (x86) if you have 32-bit AHK installed on 64-bit Windows)
            5. On the Select Destination File prompt choose the same C:\Program Files\AutoHotkey\AutoHotkey.exe file again.  Again, this was already selected by default for me.
            6. Click Yes to replace the existing file.
            7. Click Yes when prompted to Run With UI Access.

            That’s it.  (Re)Start your AHK scripts and they should now be able to interact with Windows 8 windows and applications running as admin :)

            This is a great solution if you want your AHK script to interact with admin windows, but don’t want to run your script as an admin.

             

            Did you know

            If you do want to launch an application as admin, but don’t want to run your AHK script as admin, you can use the RunAs command.

             

            I hope you found this article useful.  Feel free to leave a comment.

            Happy coding!

            Provide A Batch File To Run Your PowerShell Script From; Your Users Will Love You For It

            November 16th, 2013 46 comments

A while ago in one of my older posts I included a little gem that I think deserves its own dedicated post: calling PowerShell scripts from a batch file.

            Why call my PowerShell script from a batch file?

            When I am writing a script for other people to use (in my organization, or for the general public) or even for myself sometimes, I will often include a simple batch file (i.e. *.bat or *.cmd file) that just simply calls my PowerShell script and then exits.  I do this because even though PowerShell is awesome, not everybody knows what it is or how to use it; non-technical folks obviously, but even many of the technical folks in our organization have never used PowerShell.

Let’s list the problems with sending somebody the PowerShell script alone.  The first two points below are hurdles that every user stumbles over the first time they encounter PowerShell (they are there for security purposes):

            1. When you double-click a PowerShell script (*.ps1 file) the default action is often to open it up in an editor, not to run it (you can change this for your PC).
2. When you do figure out you need to right-click the .ps1 file and choose Open With –> Windows PowerShell to run the script, it will fail with a warning saying that the execution policy is currently configured to not allow scripts to be run.
            3. My script may require admin privileges in order to run correctly, and it can be tricky to run a PowerShell script as admin without going into a PowerShell console and running the script from there, which a lot of people won’t know how to do.
            4. A potential problem that could affect PowerShell Pros is that it’s possible for them to have variables or other settings set in their PowerShell profile that could cause my script to not perform correctly; this is pretty unlikely, but still a possibility.
So imagine you’ve written a PowerShell script that you want your grandma to run (or an HR employee, or an executive, or your teenage daughter, etc.). Do you think they’re going to be able to do it?  Maybe, maybe not.

            You should be kind to your users and provide a batch file to call your PowerShell script.

The beauty of batch file scripts is that by default the script is run when it is double-clicked (solving problem #1), and all of the other problems can be overcome by using a few arguments in our batch file.

            Ok, I see your point. So how do I call my PowerShell script from a batch file?

            First, the code I provide assumes that the batch file and PowerShell script are in the same directory.  So if you have a PowerShell script called “MyPowerShellScript.ps1” and a batch file called “RunMyPowerShellScript.cmd”, this is what the batch file would contain:

            @ECHO OFF
            SET ThisScriptsDirectory=%~dp0
            SET PowerShellScriptPath=%ThisScriptsDirectory%MyPowerShellScript.ps1
            PowerShell -NoProfile -ExecutionPolicy Bypass -Command "& '%PowerShellScriptPath%'";
            

            Line 1 just prevents the contents of the batch file from being printed to the command prompt (so it’s optional).  Line 2 gets the directory that the batch file is in.  Line 3 just appends the PowerShell script filename to the script directory to get the full path to the PowerShell script file, so this is the only line you would need to modify; replace MyPowerShellScript.ps1 with your PowerShell script’s filename.  The 4th line is the one that actually calls the PowerShell script and contains the magic.

            The –NoProfile switch solves problem #4 above, and the –ExecutionPolicy Bypass argument solves problem #2.  But that still leaves problem #3 above, right?

            Call your PowerShell script from a batch file with Administrative permissions (i.e. Run As Admin)

            If your PowerShell script needs to be run as an admin for whatever reason, the 4th line of the batch file will need to change a bit:

            @ECHO OFF
            SET ThisScriptsDirectory=%~dp0
            SET PowerShellScriptPath=%ThisScriptsDirectory%MyPowerShellScript.ps1
            PowerShell -NoProfile -ExecutionPolicy Bypass -Command "& {Start-Process PowerShell -ArgumentList '-NoProfile -ExecutionPolicy Bypass -File ""%PowerShellScriptPath%""' -Verb RunAs}";
            

            We can’t call the PowerShell script as admin from the command prompt, but we can from PowerShell; so we essentially start a new PowerShell session, and then have that session call the PowerShell script using the –Verb RunAs argument to specify that the script should be run as an administrator.

            And voila, that’s it.  Now all anybody has to do to run your PowerShell script is double-click the batch file; something that even your grandma can do (well, hopefully).  So will your users really love you for this; well, no.  Instead they just won’t be cursing you for sending them a script that they can’t figure out how to run.  It’s one of those things that nobody notices until it doesn’t work.

            So take the extra 10 seconds to create a batch file and copy/paste the above text into it; it’ll save you time in the long run when you don’t have to repeat to all your users the specific instructions they need to follow to run your PowerShell script.

            I typically use this trick for myself too when my script requires admin rights, as it just makes running the script faster and easier.

            Bonus

            One more tidbit that I often include at the end of my PowerShell scripts is the following code:

            # If running in the console, wait for input before closing.
            if ($Host.Name -eq "ConsoleHost")
            { 
            	Write-Host "Press any key to continue..."
            	$Host.UI.RawUI.ReadKey("NoEcho,IncludeKeyUp") > $null
            }
            

            This will prompt the user for keyboard input before closing the PowerShell console window.  This is useful because it allows users to read any errors that your PowerShell script may have thrown before the window closes, or even just so they can see the “Everything completed successfully” message that your script spits out so they know that it ran correctly.  Related side note: you can change your PC to always leave the PowerShell console window open after running a script, if that is your preference.

            I hope you find this useful.  Feel free to leave comments.

            Happy coding!

            Update

            Several people have left comments asking how to pass parameters into the PowerShell script from the batch file.

            Here is how to pass in ordered parameters:

            PowerShell -NoProfile -ExecutionPolicy Bypass -Command "& '%PowerShellScriptPath%' 'First Param Value' 'Second Param Value'";
            

            And here is how to pass in named parameters:

            PowerShell -NoProfile -ExecutionPolicy Bypass -Command "& '%PowerShellScriptPath%' -Param1Name 'Param 1 Value' -Param2Name 'Param 2 Value'"
            

            And if you are running the admin version of the script, here is how to pass in ordered parameters:

            PowerShell -NoProfile -ExecutionPolicy Bypass -Command "& {Start-Process PowerShell -ArgumentList '-NoProfile -ExecutionPolicy Bypass -File """"%PowerShellScriptPath%"""" """"First Param Value"""" """"Second Param Value"""" ' -Verb RunAs}"
            
            And here is how to pass in named parameters:
            PowerShell -NoProfile -ExecutionPolicy Bypass -Command "& {Start-Process PowerShell -ArgumentList '-NoProfile -ExecutionPolicy Bypass -File """"%PowerShellScriptPath%"""" -Param1Name """"Param 1 Value"""" -Param2Name """"Param 2 value"""" ' -Verb RunAs}";
            
            And yes, the PowerShell script name and parameters need to be wrapped in 4 double quotes in order to properly handle paths/values with spaces.
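For reference, the PowerShell script on the receiving end declares the parameters it accepts in a param block; here is a minimal sketch, where the Param1Name and Param2Name names simply match the hypothetical examples above:

```powershell
# MyPowerShellScript.ps1
param
(
	[string] $Param1Name,
	[string] $Param2Name
)

Write-Host "Param1Name = '$Param1Name', Param2Name = '$Param2Name'"
```

Ordered (positional) values from the batch file are bound to these parameters in the order they are declared, while named values are bound by parameter name.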

            Problems Caused By Installing Windows 8.1 Update

            November 8th, 2013 No comments

A few co-workers and I have updated from Windows 8 to Windows 8.1 and have run into some weird problems.  After a bit of Googling I have found that we are not alone.  This is just a quick list of some things that the Windows 8.1 Update seems to have broken.  I’ll update this post as I find more issues.

             

            IE 11 breaks some websites

            • I found that some of the links in the website our office uploads our Escrow deposits to no longer worked in IE 11 (which 8.1 installs).  Turning on the developer tools showed that it was throwing a JavaScript error about an undefined function.  Everything works fine in IE 10 though, and no undefined errors are thrown.
            • I have also noticed that after doing a search on Google and clicking one of the links, in order to get back to the Google results page you have to click the Back button twice; the first Back-click just takes you to a blank page (when you click the Google link it directs you to an empty page, which then forwards you to the correct page).
            • Others have complained that they are experiencing problems with GMail and Silverlight after the 8.1 update.
              So it may just be that IE 11 updated its standards to be more compliant and many websites no longer meet the new requirements (I’m not sure); either way, you may find that some of your favorite websites no longer work properly in IE 11, and you’ll have to wait for either IE 11 or the website to be updated.

             

            VPN stopped working

            We use the SonicWall VPN client at my office, and I found that it no longer worked after updating to Windows 8.1.  The solution was a simple uninstall, reinstall, but still, it’s just one more issue to add to the list.

             

            More?

            Have you noticed other things broken after doing the Windows 8.1 update? Share them in the comments below!

            In my personal opinion, I would wait a while longer before updating to Windows 8.1; give Microsoft more time to fix some of these issues.  Many of the new features in Windows 8.1 aren’t even noticeable yet, as many apps don’t yet take advantage of them.  Also, while MS did put a Start button back in, it’s not nearly as powerful as the Windows 7 Start button, so if that’s your reason for upgrading to 8.1 just go get Classic Shell instead.

            Hopefully Microsoft will be releasing hotfixes to get these issues addressed sooner than later.

            Always Explicitly Set Your Parameter Set Variables For PowerShell v2.0 Compatibility

            October 28th, 2013 2 comments

            What are parameter sets anyways?

            Parameter sets were introduced in PowerShell v2.0 and are useful for enforcing mutually exclusive parameters on a cmdlet.  Ed Wilson has a good little article explaining what parameter sets are and how to use them.  Essentially they allow us to write a single cmdlet that might otherwise have to be written as 2 or more cmdlets that took different parameters.  For example, instead of having to create Process-InfoFromUser, Process-InfoFromFile, and Process-InfoFromUrl cmdlets, we could create a single Process-Info cmdlet that has 3 mutually exclusive parameters, [switch]$PromptUser, [string]$FilePath, and [string]$Url.  If the cmdlet is called with more than one of these parameters, it throws an error.

            You could just be lazy and not use parameter sets, allowing all 3 parameters to be specified and then just using the first one, but the user won’t know which of the 3 parameters they provided will be used; they might assume that all 3 will be used.  This would also force the user to read the documentation (assuming you have provided it).  Using parameter sets makes it clear to the user which parameters can be used together.  Also, most PowerShell editors process parameter sets so that intellisense properly shows which parameters can be used with each other.
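
            For example, here is a sketch of how the hypothetical Process-Info cmdlet above might declare its three mutually exclusive parameters (the parameter set names are my own invention):

            function Process-Info
            {
                [CmdletBinding(DefaultParameterSetName="PromptUser")]
                param
                (
                    [Parameter(ParameterSetName="PromptUser")]
                    [switch] $PromptUser,

                    [Parameter(ParameterSetName="File", Mandatory=$true)]
                    [string] $FilePath,

                    [Parameter(ParameterSetName="Url", Mandatory=$true)]
                    [string] $Url
                )

                # PowerShell throws an error if parameters from more than one set are supplied.
                Write-Host "Using parameter set '$($PsCmdlet.ParameterSetName)'."
            }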

             

            Ok, parameter sets sound awesome, I want to use them! What’s the problem?

            The problem I ran into was in my Invoke-MsBuild module that I put on CodePlex: I had a [switch]$PassThru parameter that was part of a parameter set.  Within the module I had:

            if ($PassThru) { do something... }
            else { do something else... }
            

            This worked great for me during my testing since I was using PowerShell v3.0.  The problem arose once I released my code to the public; I received an issue from a user who was getting the following error message:

            Invoke-MsBuild : Unexpect error occured while building "<path>\my.csproj": The variable ‘$PassThru’ cannot be retrieved because it has not been set.

            At build.ps1:84 char:25

            • $result = Invoke-MsBuild <<<< -Path "<path>\my.csproj" -BuildLogDirectoryPath "$scriptPath" -Pa

              rams "/property:Configuration=Release"

            After some investigation I determined the problem was that they were using PowerShell v2.0, and that my script uses Strict Mode.  I use Set-StrictMode -Version Latest in all of my scripts to help me catch any syntax related errors and to make sure my scripts will in fact do what I intend them to do.  While you could simply not use strict mode and you wouldn’t have a problem, I don’t recommend that; if others are going to call your cmdlet (or you call it from a different script), there’s a good chance they may have Strict Mode turned on and your cmdlet may break for them.

             

            So should I not use parameter sets with PowerShell v2.0? Is there a fix?

            You absolutely SHOULD use parameter sets whenever you can and it makes sense, and yes, there is a fix.  If you require your script to run on PowerShell v2.0, there is just one extra step you need to take: explicitly set the values of any parameter set variables that were not supplied (and therefore do not exist).  Luckily we can use the Test-Path cmdlet to test whether a variable has been defined in a specific scope or not.

            Here is an example of how to detect if a variable is not defined in the Private scope and set its default value.  We specify the scope in case a variable with the same name exists outside of the cmdlet in the global scope or an inherited scope.

            # Default the ParameterSet variables that may not have been set depending on which parameter set is being used. This is required for PowerShell v2.0 compatibility.
            if (!(Test-Path Variable:Private:SomeStringParameter)) { $SomeStringParameter = $null }
            if (!(Test-Path Variable:Private:SomeIntegerParameter)) { $SomeIntegerParameter = 0 }
            if (!(Test-Path Variable:Private:SomeSwitchParameter)) { $SomeSwitchParameter = $false }
            

            If you prefer, instead of setting a default value for the parameter, you could just check whether it is defined before using it in your script.  I prefer this approach, however, because I can put this code right after my cmdlet’s parameters, so all of my parameter set variables are handled in one place and I don’t have to remember to check whether a variable is defined later when writing the body of the cmdlet; otherwise I’m likely to forget the “is defined” check and miss the problem, since I do most of my testing in PowerShell v3.0.

            Another approach rather than checking if a parameter is defined or not, is to check which Parameter Set Name is being used; this will implicitly let you know which parameters are defined.

            switch ($PsCmdlet.ParameterSetName)
            {
            	"SomeParameterSetName"  { Write-Host "You supplied the Some variable."; break}
            	"OtherParameterSetName"  { Write-Host "You supplied the Other variable."; break}
            } 
            

            I still prefer to default all of my parameters, but you may prefer this method.

            I hope you find this useful.  Check out my other article for more PowerShell v2.0 vs. v3.0 differences.

            Happy coding!

            PowerShell Code To Ensure Client Is Using At Least The Minimum Required PowerShell Version

            October 25th, 2013 2 comments

            Here’s some simple code that will throw an exception if the client running your script is not using the version of PowerShell (or greater) that is required; just change the $REQUIRED_POWERSHELL_VERSION variable value to the minimum version that the script requires.

            # Throw an exception if client is not using the minimum required PowerShell version.
            $REQUIRED_POWERSHELL_VERSION = 3.0	# The minimum Major.Minor PowerShell version that is required for the script to run.
            $POWERSHELL_VERSION = $PSVersionTable.PSVersion.Major + ($PSVersionTable.PSVersion.Minor / 10)
            if ($REQUIRED_POWERSHELL_VERSION -gt $POWERSHELL_VERSION)
            { throw "PowerShell version $REQUIRED_POWERSHELL_VERSION is required for this script; You are only running version $POWERSHELL_VERSION. Please update PowerShell to at least version $REQUIRED_POWERSHELL_VERSION." }
            

            – UPDATE {

            Thanks to Robin M for pointing out that PowerShell has the built-in #Requires statement for this purpose, so you do not need to use the code above. Instead, simply place the following code anywhere in your script to enforce the desired PowerShell version required to run the script:

            #Requires -Version 3.0
            

            If the user does not have the minimum required version of PowerShell installed, they will see an error message like this:

            The script ‘foo.ps1’ cannot be run because it contained a "#requires" statement at line 1 for Windows PowerShell version 3.0 which is incompatible with the installed Windows PowerShell version of 2.0.

            } UPDATE –

            So if your script requires, for example, PowerShell v3.0, just put this at the start of your script to have it error out right away with a meaningful error message; otherwise your script may throw other errors that mask the real issue, potentially leading the user to spend many hours troubleshooting your script, or to give up on it altogether.

            I’ve been bitten by this in the past a few times now, where people report issues on my CodePlex scripts where the error message seems ambiguous.  So now any scripts that I release to the general public will have this check in them to give users a proper error message.  I have also created a page on PowerShell v2 vs. v3 differences that I’m going to use to keep track of the differences that I encounter, so that I can have confidence in the minimum PowerShell version that I set on my scripts.  I also plan on creating a v3 vs. v4 page once I start using PS v4 features more.  Of course, the best test is to actually run your script in the minimum PowerShell version that you set, which I mention how to do on my PS v2 vs. v3 page.

            Happy coding!

            PowerShell Script To Get Path Lengths

            October 24th, 2013 5 comments

            A while ago I created a Path Length Checker tool in C# that has a “nice” GUI, and put it up on CodePlex.  One of the users reported that he was trying to use it to scan his entire C: drive, but that it was crashing.  It turns out that the System.IO.Directory.GetFileSystemEntries() call was throwing a permissions exception when trying to access the “C:\Documents and Settings” directory; it throws this exception even when the app runs as admin.  In the meantime, while I work on implementing a workaround in the app, I wrote up a quick PowerShell script the user could use to get all of the path lengths.  That is what I present to you here.

            $pathToScan = "C:\Some Folder"	# The path to scan and get the path lengths for (sub-directories will be scanned as well).
            $outputFilePath = "C:\temp\PathLengths.txt"	# This must be a file in a directory that exists and does not require admin rights to write to.
            $writeToConsoleAsWell = $true	# Writing to the console will be much slower.
            
            # Open a new file stream (nice and fast) and write all the paths and their lengths to it.
            $outputFileDirectory = Split-Path $outputFilePath -Parent
            if (!(Test-Path $outputFileDirectory)) { New-Item $outputFileDirectory -ItemType Directory }
            $stream = New-Object System.IO.StreamWriter($outputFilePath, $false)
            Get-ChildItem -Path $pathToScan -Recurse -Force | Select-Object -Property FullName, @{Name="FullNameLength";Expression={($_.FullName.Length)}} | Sort-Object -Property FullNameLength -Descending | ForEach-Object {
                $filePath = $_.FullName
                $length = $_.FullNameLength
                $string = "$length : $filePath"
                
                # Write to the Console.
                if ($writeToConsoleAsWell) { Write-Host $string }
             
                #Write to the file.
                $stream.WriteLine($string)
            }
            $stream.Close()
            

            Happy coding!

            PowerShell Functions To Convert, Remove, and Delete IIS Web Applications

            October 23rd, 2013 No comments

            I recently refactored some of our PowerShell scripts that we use to publish and remove IIS 7 web applications, creating some general functions that can be used anywhere.  In this post I show these functions along with how I structure our scripts to make creating, removing, and deleting web applications for our various products fully automated and tidy.  Note that these scripts require at least PowerShell v3.0 and use the IIS Admin Cmdlets that I believe require IIS v7.0; the IIS Admin Cmdlet calls can easily be replaced though by calls to appcmd.exe, msdeploy, or any other tool for working with IIS that you want.

            I’ll blast you with the first file’s code and explain it below (ApplicationServiceUtilities.ps1).

            # Turn on Strict Mode to help catch syntax-related errors.
            # 	This must come after a script's/function's param section.
            # 	Forces a function to be the first non-comment code to appear in a PowerShell Module.
            Set-StrictMode -Version Latest
            
            # Define the code block that will add the ApplicationServiceInformation class to the PowerShell session.
            # NOTE: If this class is modified you will need to restart your PowerShell session to see the changes.
            $AddApplicationServiceInformationTypeScriptBlock = {
                # Wrap in a try-catch in case we try to add this type twice.
                try {
                # Create a class to hold an IIS Application Service's Information.
                Add-Type -TypeDefinition "
                    using System;
                
                    public class ApplicationServiceInformation
                    {
                        // The name of the Website in IIS.
                        public string Website { get; set;}
                    
                        // The path to the Application, relative to the Website root.
                        public string ApplicationPath { get; set; }
            
                        // The Application Pool that the application is running in.
                        public string ApplicationPool { get; set; }
            
                        // Whether this application should be published or not.
                        public bool ConvertToApplication { get; set; }
            
                        // Implicit Constructor.
                        public ApplicationServiceInformation() { this.ConvertToApplication = true; }
            
                        // Explicit constructor.
                        public ApplicationServiceInformation(string website, string applicationPath, string applicationPool, bool convertToApplication = true)
                        {
                            this.Website = website;
                            this.ApplicationPath = applicationPath;
                            this.ApplicationPool = applicationPool;
                            this.ConvertToApplication = convertToApplication;
                        }
                    }
                "
                } catch {}
            }
            # Add the ApplicationServiceInformation class to this PowerShell session.
            & $AddApplicationServiceInformationTypeScriptBlock
            
            <#
                .SYNOPSIS
                Converts the given files to application services on the given Server.
            
                .PARAMETER Server
                The Server Host Name to connect to and convert the applications on.
            
                .PARAMETER ApplicationServicesInfo
                The [ApplicationServiceInformation[]] containing the files to convert to application services.
            #>
            function ConvertTo-ApplicationServices
            {
                [CmdletBinding()]
                param
                (
                    [string] $Server,
                    [ApplicationServiceInformation[]] $ApplicationServicesInfo
                )
            
                $block = {
            	    param([PSCustomObject[]] $ApplicationServicesInfo)
                    $VerbosePreference = $Using:VerbosePreference
            	    Write-Verbose "Converting To Application Services..."
            
                    # Import the WebAdministration module to make sure we have access to the required cmdlets and the IIS: drive.
                    Import-Module WebAdministration 4> $null	# Don't write the verbose output.
            	
            	    # Create all of the Web Applications, making sure to first try and remove them in case they already exist (in order to avoid a PS error).
            	    foreach ($appInfo in [PSCustomObject[]]$ApplicationServicesInfo)
                    {
                        $website = $appInfo.Website
                        $applicationPath = $appInfo.ApplicationPath
                        $applicationPool = $appInfo.ApplicationPool
            		    $fullPath = Join-Path $website $applicationPath
            
                        # If this application should not be converted, continue onto the next one in the list.
                        if (!$appInfo.ConvertToApplication) { Write-Verbose "Skipping publish of '$fullPath'"; continue }
            		
            		    Write-Verbose "Checking if we need to remove '$fullPath' before converting it..."
            		    if (Get-WebApplication -Site "$website" -Name "$applicationPath")
            		    {
            			    Write-Verbose "Removing '$fullPath'..."
            			    Remove-WebApplication -Site "$website" -Name "$applicationPath"
            		    }
            
                        Write-Verbose "Converting '$fullPath' to an application with Application Pool '$applicationPool'..."
                        ConvertTo-WebApplication "IIS:\Sites\$fullPath" -ApplicationPool "$applicationPool"
                    }
                }
            
                # Connect to the host Server and run the commands directly on that computer.
                # Before we run our script block we first have to add the ApplicationServiceInformation class type into the PowerShell session.
                $session = New-PSSession -ComputerName $Server
                Invoke-Command -Session $session -ScriptBlock $AddApplicationServiceInformationTypeScriptBlock
                Invoke-Command -Session $session -ScriptBlock $block -ArgumentList (,$ApplicationServicesInfo)
                Remove-PSSession -Session $session
            }
            
            <#
                .SYNOPSIS
                Removes the given application services from the given Server.
            
                .PARAMETER Server
                The Server Host Name to connect to and remove the applications from.
            
                .PARAMETER ApplicationServicesInfo
                The [ApplicationServiceInformation[]] containing the applications to remove.
            #>
            function Remove-ApplicationServices
            {
                [CmdletBinding()]
                param
                (
                    [string] $Server,
                    [ApplicationServiceInformation[]] $ApplicationServicesInfo
                )
            
                $block = {
            	    param([ApplicationServiceInformation[]] $ApplicationServicesInfo)
                    $VerbosePreference = $Using:VerbosePreference
            	    Write-Verbose "Removing Application Services..."
            
                    # Import the WebAdministration module to make sure we have access to the required cmdlets and the IIS: drive.
                    Import-Module WebAdministration 4> $null	# Don't write the verbose output.
            
            	    # Remove all of the Web Applications, making sure they exist first (in order to avoid a PS error).
            	    foreach ($appInfo in [ApplicationServiceInformation[]]$ApplicationServicesInfo)
                    {
                        $website = $appInfo.Website
                        $applicationPath = $appInfo.ApplicationPath
            		    $fullPath = Join-Path $website $applicationPath
            		
            		    Write-Verbose "Checking if we need to remove '$fullPath'..."
            		    if (Get-WebApplication -Site "$website" -Name "$applicationPath")
            		    {
            			    Write-Verbose "Removing '$fullPath'..."
            			    Remove-WebApplication -Site "$website" -Name "$applicationPath"
            		    }
                    }
                }
            
                # Connect to the host Server and run the commands directly on that computer.
                # Before we run our script block we first have to add the ApplicationServiceInformation class type into the PowerShell session.
                $session = New-PSSession -ComputerName $Server
                Invoke-Command -Session $session -ScriptBlock $AddApplicationServiceInformationTypeScriptBlock
                Invoke-Command -Session $session -ScriptBlock $block -ArgumentList (,$ApplicationServicesInfo)
                Remove-PSSession -Session $session
            }
            
            <#
                .SYNOPSIS
                Removes the given application services from the given Server and deletes all associated files.
            
                .PARAMETER Server
                The Server Host Name to connect to and delete the applications from.
            
                .PARAMETER ApplicationServicesInfo
                The [ApplicationServiceInformation[]] containing the applications to delete.
            
                .PARAMETER OnlyDeleteIfNotConvertedToApplication
                If this switch is supplied and the application services are still running (i.e. have not been removed yet), the services will not be removed and the files will not be deleted.
            
                .PARAMETER DeleteEmptyParentDirectories
                If this switch is supplied, after the application services folder has been removed, it will recursively check parent folders and remove them if they are empty, until the Website root is reached.
            #>
            function Delete-ApplicationServices
            {
                [CmdletBinding()]
                param
                (
                    [string] $Server,
                    [ApplicationServiceInformation[]] $ApplicationServicesInfo,
                    [switch] $OnlyDeleteIfNotConvertedToApplication,
                    [switch] $DeleteEmptyParentDirectories
                )
                
                $block = {
            	    param([ApplicationServiceInformation[]] $ApplicationServicesInfo)
                    $VerbosePreference = $Using:VerbosePreference
            	    Write-Verbose "Deleting Application Services..."
            
                    # Import the WebAdministration module to make sure we have access to the required cmdlets and the IIS: drive.
                    Import-Module WebAdministration 4> $null	# Don't write the verbose output.
            
            	    # Remove all of the Web Applications and delete their files from disk.
            	    foreach ($appInfo in [ApplicationServiceInformation[]]$ApplicationServicesInfo)
                    {
                        $website = $appInfo.Website
                        $applicationPath = $appInfo.ApplicationPath
            		    $fullPath = Join-Path $website $applicationPath
                        $iisSitesDirectory = "IIS:\Sites\"
            		
            		    Write-Verbose "Checking if we need to remove '$fullPath'..."
            		    if (Get-WebApplication -Site "$website" -Name "$applicationPath")
            		    {
                            # If we should only delete the files when they're not currently running as a Web Application, continue on to the next one in the list.
                            if ($Using:OnlyDeleteIfNotConvertedToApplication) { Write-Verbose "'$fullPath' is still running as a Web Application, so its files will not be deleted."; continue }
            
            			    Write-Verbose "Removing '$fullPath'..."
            			    Remove-WebApplication -Site "$website" -Name "$applicationPath"
            		    }
                        
                        Write-Verbose "Deleting the directory '$fullPath'..."
                        Remove-Item -Path "$iisSitesDirectory$fullPath" -Recurse -Force
            
                        # If we should delete empty parent directories of this application.
                        if ($Using:DeleteEmptyParentDirectories)
                        {
                            Write-Verbose "Deleting empty parent directories..."
                            $parent = Split-Path -Path $fullPath -Parent
            
                            # Only delete the parent directory if it is not the Website directory, and it is empty.
                            while (($parent -ne $website) -and (Test-Path -Path "$iisSitesDirectory$parent") -and ((Get-ChildItem -Path "$iisSitesDirectory$parent") -eq $null))
                            {
                                $path = $parent
                                Write-Verbose "Deleting empty parent directory '$path'..."
                                Remove-Item -Path "$iisSitesDirectory$path" -Force
                                $parent = Split-Path -Path $path -Parent
                            }
                        }
                    }
                }
            
                # Connect to the host Server and run the commands directly on that computer.
                # Before we run our script block we first have to add the ApplicationServiceInformation class type into the PowerShell session.
                $session = New-PSSession -ComputerName $Server
                Invoke-Command -Session $session -ScriptBlock $AddApplicationServiceInformationTypeScriptBlock
                Invoke-Command -Session $session -ScriptBlock $block -ArgumentList (,$ApplicationServicesInfo)
                Remove-PSSession -Session $session
            }
            

            This first file contains all of the meat.  At the top it declares (in C#) the ApplicationServiceInformation class that is used to hold the information about a web application; mainly the Website that the application should go in, the ApplicationPath (where within the website the application should be created), and the Application Pool that the application should run under.  Notice that the $AddApplicationServiceInformationTypeScriptBlock script block is executed right below where it is declared, in order to actually import the ApplicationServiceInformation class type into the current PowerShell session.

            There is one extra property on this class that I found I needed, but that you may be able to ignore: the ConvertToApplication boolean.  This is inspected by the ConvertTo-ApplicationServices function to tell it whether the application should actually be published or not.  I required this field because we have some web services that should only be “converted to applications” in specific environments (or only on a developer’s local machine), but whose files we still want to delete when using the Delete-ApplicationServices function.  While I could just create 2 separate lists of ApplicationServiceInformation objects depending on which function I was calling (see below), I decided to instead just include this one extra property.

            Below the class declaration are our functions to perform the actual work:

            • ConvertTo-ApplicationServices: Converts the files to an application using the ConvertTo-WebApplication cmdlet.
            • Remove-ApplicationServices: Converts the application back to regular files using the Remove-WebApplication cmdlet.
            • Delete-ApplicationServices: First removes any applications, and then deletes the files from disk.
              The Delete-ApplicationServices function includes a couple of additional switches.  The $OnlyDeleteIfNotConvertedToApplication switch can be used as a bit of a safety net to ensure that you only delete files for application services that are not currently running as a web application (i.e. the web application has already been removed).  If this switch is omitted, the web application will be removed and the files deleted.  The $DeleteEmptyParentDirectories switch may be used to remove empty parent directories once the application files have been deleted. This is useful for us because we version our services, so they are all placed in a directory corresponding to a version number, e.g. \Website\[VersionNumber]\App1 and \Website\[VersionNumber]\App2. This switch allows the [VersionNumber] directory to be deleted automatically once the App1 and App2 directories have been deleted.
              Note that I don’t have a function to copy files to the server (i.e. publish them); I assume that the files have already been copied to the server, as we currently have this as a separate step in our deployment process.
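
            As an example, a cleanup call using both switches might look like this ("WebServer01" and $services are placeholders for your own server name and service information):

            # Only delete files for services whose web applications have already been removed,
            # and clean up any version directories left empty afterwards.
            Delete-ApplicationServices -Server "WebServer01" -ApplicationServicesInfo $services -OnlyDeleteIfNotConvertedToApplication -DeleteEmptyParentDirectories -Verbose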

            My 2nd file (ApplicationServiceLibrary.ps1) is optional and is really just a collection of functions used to return the ApplicationServiceInformation instances that I require as an array, depending on which projects I want to convert/remove/delete.

            # Get the directory that this script is in.
            $THIS_SCRIPTS_DIRECTORY = Split-Path $script:MyInvocation.MyCommand.Path
            
            # Include the required ApplicationServiceInformation type.
            . (Join-Path $THIS_SCRIPTS_DIRECTORY ApplicationServiceUtilities.ps1)
            
            #=================================
            # Replace all of the functions below with your own.
            # These are provided as examples.
            #=================================
            
            function Get-AllApplicationServiceInformation([string] $Release)
            {
                [ApplicationServiceInformation[]] $appServiceInfo = @()
            
                $appServiceInfo += Get-RqApplicationServiceInformation -Release $Release
                $appServiceInfo += Get-PublicApiApplicationServiceInformation -Release $Release
                $appServiceInfo += Get-IntraApplicationServiceInformation -Release $Release
            
                return $appServiceInfo    
            }
            
            function Get-RqApplicationServiceInformation([string] $Release)
            {
                return [ApplicationServiceInformation[]] @(
            	    (New-Object ApplicationServiceInformation -Property @{Website = "Application Services"; ApplicationPath = "$Release/Core.Reporting.Services"; ApplicationPool = "RQ Services .NET4"}),
            	    (New-Object ApplicationServiceInformation -Property @{Website = "Application Services"; ApplicationPath = "$Release/Core.Services"; ApplicationPool = "RQ Core Services .NET4"}),
            	    (New-Object ApplicationServiceInformation -Property @{Website = "Application Services"; ApplicationPath = "$Release/DeskIntegration.Services"; ApplicationPool = "RQ Services .NET4"}),
            	    (New-Object ApplicationServiceInformation -Property @{Website = "Application Services"; ApplicationPath = "$Release/Retail.Integration.Services"; ApplicationPool = "RQ Services .NET4"}),
            
                    # Simulator Services that are only for Dev; we don't want to convert them to an application, but do want to remove their files that got copied to the web server.
                    (New-Object ApplicationServiceInformation -Property @{Website = "Application Services"; ApplicationPath = "$Release/Simulator.Services"; ApplicationPool = "Simulator Services .NET4"; ConvertToApplication = $false}))
            }
            
            function Get-PublicApiApplicationServiceInformation([string] $Release)
            {
                return [ApplicationServiceInformation[]] @(
                    (New-Object ApplicationServiceInformation -Property @{Website = "API Services"; ApplicationPath = "$Release/PublicAPI.Host"; ApplicationPool = "API Services .NET4"}),
            	    (New-Object ApplicationServiceInformation -Property @{Website = "API Services"; ApplicationPath = "$Release/PublicAPI.Documentation"; ApplicationPool = "API Services .NET4"}))
            }
            
            function Get-IntraApplicationServiceInformation([string] $Release)
            {
                return [ApplicationServiceInformation[]] @(
                    (New-Object ApplicationServiceInformation -Property @{Website = "Intra Services"; ApplicationPath = "$Release"; ApplicationPool = "Intra Services .NET4"}))
            }
            

            You can see the first thing it does is dot-source the ApplicationServiceUtilities.ps1 file (I assume all these scripts are in the same directory).  This is done to bring the ApplicationServiceInformation type into the PowerShell session.  Next I just have functions that return the application service information for our various projects.  I break them apart by project so that I’m able to easily publish one project separately from another, but I also have a Get-All function that returns all of the service information for when we deploy all services together.  We deploy many of our projects in lock-step, so having a Get-All function makes sense for us, but it may not for you.  We have many more projects and services than I show here; these are just an example of how you can set yours up if you choose.

            One other thing you may notice is that my Get-*ApplicationServiceInformation functions take a $Release parameter that is used in the ApplicationPath; this is because our services are versioned.  Yours may not be though, in which case you can omit that parameter for your Get functions (or add any additional parameters that you do need).

            Lastly, to make things nice and easy, I create ConvertTo, Remove, and Delete scripts for each of our projects, as well as scripts to do all of the projects at once.  Here’s an example of what one of these scripts would look like:

            param
            (
            	[parameter(Position=0,Mandatory=$true,HelpMessage="The 3-part version number of the release (x.x.x).")]
            	[ValidatePattern("^\d{1,5}\.\d{1,5}\.\d{1,5}$")]
            	[string] $Release
            )
            
            # Get the directory that this script is in.
            $THIS_SCRIPTS_DIRECTORY = Split-Path $script:MyInvocation.MyCommand.Path
            
            # Include the functions used to perform the actual operations.
            . (Join-Path $THIS_SCRIPTS_DIRECTORY ApplicationServiceLibrary.ps1)
            
            ConvertTo-ApplicationServices -Server "Our.WebServer.local" -ApplicationServicesInfo (Get-RqApplicationServiceInformation -Release $Release) -Verbose
            

            The first thing the script does is declare a mandatory $Release parameter (PowerShell will prompt for the version number if it isn’t supplied); again, if you don’t version your services then you can omit that.

            The next thing it does is dot-source the ApplicationServiceLibrary.ps1 script to make available all of the Get-*ApplicationServiceInformation functions that we defined in the previous file.  I prefer to use the ApplicationServiceLibrary.ps1 file to keep all of our services in a common place, and to avoid copy/pasting the ApplicationServiceInformation for each project into each Convert/Remove/Delete script; but that’s my personal choice, and if you prefer to copy-paste the code into a few different files instead of having a central library file, go hard.  If you omit the library script though, then you will need to dot-source the ApplicationServiceUtilities.ps1 file here instead, since our library script currently dot-sources it for us.

            The final line is the one that actually calls our utility function to perform the operation.  It provides the web server hostname to connect to, and calls the library’s Get-*ApplicationServiceInformation to retrieve the information for the web applications that should be created.  Notice too that it also provides the -Verbose switch.  Some of the IIS operations can take quite a while to run and don’t generate any output, so I like to see the verbose output so I can gauge the progress of the script, but feel free to omit it.

            So this sample script creates all of the web applications for our Rq product and can be run very easily.  To make the corresponding Remove and Delete scripts, I would just copy this file and replace “ConvertTo-” with “Remove-” and “Delete-” respectively.  This allows you to have separate scripts for creating and removing each of your products that can easily be run automatically or manually, fully automating the process of creating and removing your web applications in IIS.
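            For example, a Remove-RqServices.ps1 made this way would look nearly identical to the ConvertTo script above, with only the last line changed (this assumes the utilities file defines a matching Remove-ApplicationServices function, as described earlier):

            param
            (
            	[parameter(Position=0,Mandatory=$true,HelpMessage="The 3-part version number of the release (x.x.x).")]
            	[ValidatePattern("^\d{1,5}\.\d{1,5}\.\d{1,5}$")]
            	[string] $Release
            )
            
            # Get the directory that this script is in.
            $THIS_SCRIPTS_DIRECTORY = Split-Path $script:MyInvocation.MyCommand.Path
            
            # Include the functions used to perform the actual operations.
            . (Join-Path $THIS_SCRIPTS_DIRECTORY ApplicationServiceLibrary.ps1)
            
            # Identical to the ConvertTo script, except for the function called here.
            Remove-ApplicationServices -Server "Our.WebServer.local" -ApplicationServicesInfo (Get-RqApplicationServiceInformation -Release $Release) -Verbose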

            If I need to remove the services for a bunch of versions, here is an example of how I can just create a quick script that calls my Remove Services script for each version that needs to be removed:

            # Get the directory that this script is in.
            $thisScriptsDirectory = Split-Path $script:MyInvocation.MyCommand.Path
            
            # Remove Rq application services for versions 4.11.33 to 4.11.43.
            $majorMinorVersion = "4.11"
            33..43 | foreach {
                $Release = "$majorMinorVersion.$_"
                Write-Host "Removing Rq '$Release' services..."
                & "$thisScriptsDirectory\Remove-RqServices.ps1" $Release
            }
            

            If you have any questions or suggestions feel free to leave a comment.  I hope you find this useful.

            Happy coding!

            PowerShell 2.0 vs. 3.0 Syntax Differences And More

            October 22nd, 2013 No comments

            I’m fortunate enough to work for a great company that tries to stay ahead of the curve and use newer technologies.  This means that when I’m writing my PowerShell (PS) scripts I typically don’t have to worry about only using PS v2.0 compatible syntax and cmdlets, as all of our PCs have v3.0 (soon to have v4.0).  This is great, until I release these scripts (or snippets from them) for the general public to use; then I have to keep in mind that many other people are still running older versions of Windows, or are not allowed to upgrade PowerShell.  So to help myself release PS v2.0 compatible scripts, I’m going to use this as a living document of the differences between PowerShell 2.0 and 3.0 that I encounter.  It will continue to grow over time, so bookmark it.  Of course other sites have some of this info, but I’m going to try to compile a list of the differences that are relevant to me, in a nice simple format.

            Before we get to the differences, here are some things you may want to know relating to PowerShell versions.

            How to check which version of PowerShell you are running

            All PS versions:

            $PSVersionTable.PSVersion
            

             

            How to run/test your script against an older version of PowerShell (source)

            All PS versions:  use PowerShell.exe -Version [version] to start a new PowerShell session, where [version] is the PowerShell version that you want the session to use, then run your script in this new session.  Shorthand is PowerShell -v [version]

            PowerShell.exe -Version 2.0
            

            Note: You can’t run PowerShell ISE in an older version of PowerShell; only the Windows PowerShell console.
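            Going the other direction, if your script depends on v3.0 features, you can declare that requirement at the top of the script with a #Requires statement so it fails fast with a clear error on older hosts:

            # Refuse to run on anything older than PowerShell 3.0.
            #Requires -Version 3.0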

             

            PowerShell v2 and v3 Differences:

             

            Where-Object no longer requires braces (source)

            PS v2.0:

            Get-Service | Where { $_.Status -eq 'running' }
            

            PS v3.0:

            Get-Service | Where Status -eq 'running'
            

            PS V2.0 Error Message:

            Where : Cannot bind parameter ‘FilterScript’. Cannot convert the “[PropertyName]” value of the type “[Type]” to type “System.Management.Automation.ScriptBlock”.

             

            Using local variables in remote sessions (source)

            PS v2.0:

            $class = "win32_bios"
            Invoke-Command -cn dc3 {param($class) gwmi -class $class} -ArgumentList $class
            

            PS v3.0:

            $class = "win32_bios"
            Invoke-Command -cn dc3 {gwmi -class $Using:class}
            

             

            Variable validation attributes (source)

            PS v2.0: Validation only available on cmdlet/function/script parameters.

            PS v3.0: Validation available on cmdlet/function/script parameters, and on variables.

            [ValidateRange(1,5)][int]$someLocalVariable = 1
            

             

            Stream redirection (source)

            The Windows PowerShell redirection operators use the following characters to represent each output type:
                    *   All output
                    1   Success output
                    2   Errors
                    3   Warning messages
                    4   Verbose output
                    5   Debug messages
            
            NOTE: The All (*), Warning (3), Verbose (4) and Debug (5) redirection operators were introduced
                   in Windows PowerShell 3.0. They do not work in earlier versions of Windows PowerShell.

             

            PS v2.0: Could only redirect Success and Error output.

            # Sends errors (2) and success output (1) to the success output stream.
            Get-Process none, PowerShell 2>&1
            

            PS v3.0: Can also redirect Warning, Verbose, Debug, and All output.

            # Function to generate each kind of output.
            function Test-Output { Get-Process PowerShell, none; Write-Warning "Test!"; Write-Verbose "Test Verbose"; Write-Debug "Test Debug"}
            
            # Write every output stream to a text file.
            Test-Output *> Test-Output.txt
            
            

             

            Explicitly set parameter set variable values when not defined (source)

            PS v2.0 will throw an error if you try to access a parameter from a parameter set that has not been defined.  The solution is to give it a default value when it is not defined.  Specify the Private scope in case a variable with the same name exists in the global scope or an inherited scope:

            # Default the ParameterSet variables that may not have been set depending on which parameter set is being used. This is required for PowerShell v2.0 compatibility.
            if (!(Test-Path Variable:Private:SomeStringParameter)) { $SomeStringParameter = $null }
            if (!(Test-Path Variable:Private:SomeIntegerParameter)) { $SomeIntegerParameter = 0 }
            if (!(Test-Path Variable:Private:SomeSwitchParameter)) { $SomeSwitchParameter = $false }
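            
            For context, here is a minimal sketch (the function and parameter names are made up for illustration) of a function with two parameter sets where those defaults are needed:

            function Test-ParameterSets
            {
            	[CmdletBinding(DefaultParameterSetName="ByString")]
            	param
            	(
            		[parameter(ParameterSetName="ByString")]
            		[string] $SomeStringParameter,
            
            		[parameter(ParameterSetName="ByInteger")]
            		[int] $SomeIntegerParameter
            	)
            
            	# In v2.0, the parameter from the set that was NOT used is undefined,
            	# so give it a default value before touching it.
            	if (!(Test-Path Variable:Private:SomeStringParameter)) { $SomeStringParameter = $null }
            	if (!(Test-Path Variable:Private:SomeIntegerParameter)) { $SomeIntegerParameter = 0 }
            
            	Write-Output "'$SomeStringParameter' '$SomeIntegerParameter'"
            }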
            

            PS v2.0 Error Message:

            The variable ‘$[VariableName]’ cannot be retrieved because it has not been set.

             

            Parameter attributes require the equals sign

            PS v2.0:

            [parameter(Position=1,Mandatory=$true)] [string] $SomeParameter
            

            PS v3.0:

            [parameter(Position=1,Mandatory)] [string] $SomeParameter
            

            PS v2.0 Error Message:

            The “=” operator is missing after a named argument.

             

            Cannot use String.IsNullOrWhiteSpace (or any other post-.NET 3.5 functionality)

            PS v2.0:

            [string]::IsNullOrEmpty($SomeString)
            

            PS v3.0:

            [string]::IsNullOrWhiteSpace($SomeString)
            

            PS v2.0 Error Message:

            IsNullOrWhitespace : Method invocation failed because [System.String] doesn’t contain a method named ‘IsNullOrWhiteSpace’.

            PS v2.0 compatible version of IsNullOrWhitespace function:

            # PowerShell v2.0 compatible version of [string]::IsNullOrWhitespace.
            function StringIsNullOrWhitespace([string] $string)
            {
                if ($string -ne $null) { $string = $string.Trim() }
                return [string]::IsNullOrEmpty($string)
            }
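            
            It behaves the same as the built-in method:

            StringIsNullOrWhitespace "   "	# True - whitespace only.
            StringIsNullOrWhitespace $null	# True.
            StringIsNullOrWhitespace "abc"	# False.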
            

             

            Get-ChildItem cmdlet’s –Directory and –File switches were introduced in PS v3.0

            PS v2.0:

            Get-ChildItem -Path $somePath | Where-Object { $_.PSIsContainer }	# Get directories only.
            Get-ChildItem -Path $somePath | Where-Object { !$_.PSIsContainer }	# Get files only.
            

            PS v3.0:

            Get-ChildItem -Path $somePath -Directory
            Get-ChildItem -Path $somePath -File
            

             

             


            Creating Strongly Typed Objects In PowerShell, Rather Than Using An Array Or PSCustomObject

            October 21st, 2013 No comments

            I recently read a great article that explained how to create hashtables, dictionaries, and PowerShell objects.  I already knew a bit about these, but this article gives a great comparison between them, when to use each of them, and how to create them in the different versions of PowerShell.

            Right now I’m working on refactoring some existing code into some general functions for creating, removing, and deleting IIS applications (read about it here).  At first, I thought that this would be a great place to use PSCustomObject, as in order to perform these operations I needed 3 pieces of information about a website: the Website name, the Application Name (essentially the path to the application under the Website root), and the Application Pool that the application should run in.

             

            Using an array

            So initially the code I wrote just used an array to hold the 3 properties of each application service:

            # Store app service info as an array of arrays.
            $AppServices = @(
            	("MyWebsite", "$Version/Reporting.Services", "Services .NET4"),
            	("MyWebsite", "$Version/Core.Services", "Services .NET4"),
            	...
            )
            
            # Remove all of the Web Applications.
            foreach ($appInfo in $AppServices )
            {
            	$website = $appInfo[0]
            	$appName = $appInfo[1]
            	$appPool = $appInfo[2]
            	...
            }
            
            

            There is nothing “wrong” with using an array to store the properties; it works.  However, now that I am refactoring the functions to be general purpose and usable by other people/scripts, this approach has one very undesirable limitation: the properties must always be stored in the correct order in the array (i.e. Website in position 0, App Name in 1, and App Pool in 2).  Since the list of app services will be passed into my functions, the calling script would have to know to put the properties in this order.  Boo.

            Another option that I didn’t consider when I originally wrote the script was to use an associative array, but it has the same drawbacks as using a PSCustomObject discussed below.
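            For reference, that associative-array version might look like the sketch below; the keys are named, but nothing forces a caller to supply all three of them:

            # Store app service info as an array of hashtables.
            $AppServices = @(
            	@{Website = "MyWebsite"; ApplicationPath = "$Version/Reporting.Services"; ApplicationPool = "Services .NET4"},
            	@{Website = "MyWebsite"; ApplicationPath = "$Version/Core.Services"; ApplicationPool = "Services .NET4"}
            )
            
            foreach ($appInfo in $AppServices)
            {
            	$website = $appInfo.Website	# Or $appInfo["Website"].
            	$appName = $appInfo.ApplicationPath
            	$appPool = $appInfo.ApplicationPool
            }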

             

            Using PSCustomObject

            So I thought let’s use a PSCustomObject instead, as that way the client does not have to worry about the order of the information; as long as their PSCustomObject has Website, ApplicationPath, and ApplicationPool properties, we’ll be able to process it.  So I had this:

            [PSCustomObject[]] $applicationServicesInfo = @(
            	[PSCustomObject]@{Website = "MyWebsite"; ApplicationPath = "$Version/Reporting.Services"; ApplicationPool = "Services .NET4"},
            	[PSCustomObject]@{Website = "MyWebsite"; ApplicationPath = "$Version/Core.Services"; ApplicationPool = "Services .NET4"},
            	...
            )
            
            function Remove-ApplicationServices
            {
            	param([PSCustomObject[]] $ApplicationServicesInfo)
            
            	# Remove all of the Web Applications.
            	foreach ($appInfo in [PSCustomObject[]]$ApplicationServicesInfo)
            	{
            		$website = $appInfo.Website
            		$appPath = $appInfo.ApplicationPath
            		$appPool = $appInfo.ApplicationPool
            		...
            	}
            }
            

            I liked this better as the properties are explicitly named, so there’s no guesswork about which information each property contains, but it’s still not great.  One thing that I don’t have here (and really should) is validation to make sure that the passed in PSCustomObjects actually have Website, ApplicationPath, and ApplicationPool properties on them; otherwise accessing a missing property will silently return $null (or throw an error under Set-StrictMode).  So with this approach I would still need documentation and validation to ensure that the client passes in a PSCustomObject with those properties.
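            If you do stick with PSCustomObject, one way to guard against missing properties is a small check like this (a sketch; adjust the error handling to taste):

            foreach ($appInfo in $ApplicationServicesInfo)
            {
            	# Fail fast if any of the expected properties are missing.
            	foreach ($propertyName in "Website", "ApplicationPath", "ApplicationPool")
            	{
            		if (-not ($appInfo.PSObject.Properties.Match($propertyName).Count))
            		{ throw "An ApplicationServicesInfo item is missing the '$propertyName' property." }
            	}
            }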

             

            Using a new strongly typed object

            I frequently read other PowerShell blog posts and recently stumbled across this one.  In the article he mentions creating a new compiled type by passing a string to the Add-Type cmdlet; essentially writing C# code in his PowerShell script to create a new class.  I knew that you could use Add-Type to load existing assemblies, but never realized that you could use it to compile and load a brand new type straight from a string in your PowerShell script.  This is freaking amazing! So here is what my new solution looks like:

            try {	# Wrap in a try-catch in case we try to add this type twice.
            # Create a class to hold an IIS Application Service's Information.
            Add-Type -TypeDefinition @"
            	using System;
            	
            	public class ApplicationServiceInformation
            	{
            		// The name of the Website in IIS.
            		public string Website { get; set;}
            		
            		// The path to the Application, relative to the Website root.
            		public string ApplicationPath { get; set; }
            
            		// The Application Pool that the application is running in.
            		public string ApplicationPool { get; set; }
            
            		// Implicit Constructor.
            		public ApplicationServiceInformation() { }
            
            		// Explicit constructor.
            		public ApplicationServiceInformation(string website, string applicationPath, string applicationPool)
            		{
            			this.Website = website;
            			this.ApplicationPath = applicationPath;
            			this.ApplicationPool = applicationPool;
            		}
            	}
            "@
            } catch {}
            
            $anotherService = New-Object ApplicationServiceInformation
            $anotherService.Website = "MyWebsite"
            $anotherService.ApplicationPath = "$Version/Payment.Services"
            $anotherService.ApplicationPool = "Services .NET4"
            	
            [ApplicationServiceInformation[]] $applicationServicesInfo = @(
            	(New-Object ApplicationServiceInformation("MyWebsite", "$Version/Reporting.Services", "Services .NET4")),
            	(New-Object ApplicationServiceInformation -Property @{Website = "MyWebsite"; ApplicationPath = "$Version/Core.Services"; ApplicationPool = "Services .NET4"}),
            	$anotherService,
            	...
            )
            
            function Remove-ApplicationServices
            {
            	param([ApplicationServiceInformation[]] $ApplicationServicesInfo)
            
            	# Remove all of the Web Applications.
            	foreach ($appInfo in [ApplicationServiceInformation[]]$ApplicationServicesInfo)
            	{
            		$website = $appInfo.Website
            		$appPath = $appInfo.ApplicationPath
            		$appPool = $appInfo.ApplicationPool
            		...
            	}
            }
            

            I first create a simple container class to hold the application service information.  Now all of my properties are explicitly named, as with the PSCustomObject, but I’m also guaranteed that the properties exist on any object that is passed into my function.  From there I declare my array of ApplicationServiceInformation objects, and the function that we can pass them into.  Note that I wrap each New-Object call in parentheses, otherwise PowerShell parses it incorrectly and will throw an error.

            As you can see from the snippets above and below, there are several different ways that we can initialize a new instance of our ApplicationServiceInformation class:

            $service1 = New-Object ApplicationServiceInformation("Explicit Constructor", "Core.Services", ".NET4")
            
            $service2 = New-Object ApplicationServiceInformation -ArgumentList ("Explicit Constructor ArgumentList", "Core.Services", ".NET4")
            
            $service3 = New-Object ApplicationServiceInformation -Property @{Website = "Using Property"; ApplicationPath = "Core.Services"; ApplicationPool = ".NET4"}
            
            $service4 = New-Object ApplicationServiceInformation
            $service4.Website = "Properties added individually"
            $service4.ApplicationPath = "Core.Services"
            $service4.ApplicationPool = "Services .NET4"
            

             

            Caveats

            • Note that I wrapped the call to Add-Type in a Try-Catch block.  This is to prevent PowerShell from throwing an error if the script tries to add the type twice in the same session.  It’s sort of a hacky workaround, but there aren’t many good alternatives, since you cannot unload an assembly once it has been loaded.
            • This also means that while developing, if you make any changes to the class you’ll have to restart your PowerShell session for them to be applied, since the Add-Type call will only take effect the first time it runs in a session.
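            A slightly cleaner alternative to the empty Catch block (in PS v3.0 and later) is to test whether the type already exists before calling Add-Type.  This sketch assumes the C# here-string from above has been stored in a $typeDefinition variable first:

            # Only compile the type if it isn't already loaded in this session.
            if (-not ([System.Management.Automation.PSTypeName]'ApplicationServiceInformation').Type)
            {
            	Add-Type -TypeDefinition $typeDefinition
            }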

            I hope you found something in here useful.

            Happy coding!