
Keep PowerShell Console Window Open After Script Finishes Running

July 7th, 2014

I originally included this as a small bonus section at the end of my other post about fixing the issue of not being able to run a PowerShell script whose path contains a space, but thought this deserved its own dedicated post.

When running a script by double-clicking it, or by right-clicking it and choosing Run With PowerShell or Open With Windows PowerShell, if the script completes very quickly the user will see the PowerShell console appear very briefly and then disappear.  If the script produces output that the user wants to see, or if it throws an error, the user won’t have time to read the text.  Here are three solutions to keep the PowerShell console open after the script has finished running:

1. One-time solution

Open a PowerShell console and manually run the script from the command line. I show how to do this a bit in my other post, as the PowerShell syntax to run a script from the command line is not straightforward if you’ve never done it before.
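For instance, from an open PowerShell console you can use the call operator (&) to run a script, which works even when the path contains spaces (the path below is just a placeholder):

```powershell
# The call operator (&) runs the script at the quoted path.
# Quoting the path means this works even when the path contains spaces.
& "C:\Some Folder\MyPowerShellScript.ps1"
```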

The other way is to launch the PowerShell process from the Run box (Windows Key + R) or command prompt using the -NoExit switch and passing in the path to the PowerShell file.
For example: PowerShell -NoExit "C:\SomeFolder\MyPowerShellScript.ps1"

2. Per-script solution

Add a line like this to the end of your script:

Read-Host -Prompt "Press Enter to exit"

I typically use the following bit of code instead, so that it only prompts for input when running from the PowerShell console, and not from the PowerShell ISE or other PS script editors (as they typically have a persistent console window integrated into the IDE).  Use whichever you prefer.

# If running in the console, wait for input before closing.
if ($Host.Name -eq "ConsoleHost")
{
	Write-Host "Press any key to continue..."
	$Host.UI.RawUI.ReadKey("NoEcho,IncludeKeyUp") > $null
}

I typically use this approach for scripts that other people might end up running; if it’s a script that only I will ever be running, I rely on the global solution below.

3. Global solution

Adjust the registry keys used to run a PowerShell script to include the -NoExit switch to prevent the console window from closing.  Here are the two registry keys we will target, along with their default value and the value we want them to have:

Registry Key: HKEY_CLASSES_ROOT\Applications\powershell.exe\shell\open\command
Description: Key used when you right-click a .ps1 file and choose Open With -> Windows PowerShell.
Default Value: "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" "%1"
Desired Value: "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" "& \"%1\""

Registry Key: HKEY_CLASSES_ROOT\Microsoft.PowerShellScript.1\Shell\0\Command
Description: Key used when you right-click a .ps1 file and choose Run with PowerShell (shows up depending on which Windows OS and Updates you have installed).
Default Value: "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" "-Command" "if((Get-ExecutionPolicy ) -ne 'AllSigned') { Set-ExecutionPolicy -Scope Process Bypass }; & '%1'"
Desired Value: "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" -NoExit "-Command" "if((Get-ExecutionPolicy ) -ne 'AllSigned') { Set-ExecutionPolicy -Scope Process Bypass }; & \"%1\""

The Desired Values add the -NoExit switch, as well as wrap the %1 in double quotes so that the script still runs even if its path contains spaces.

If you want, you can open the registry and make the change manually, or you can run the following registry script to make the change automatically:

Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\Applications\powershell.exe\shell\open\command]
@="\"C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe\" -NoExit \"& \\\"%1\\\"\""

[HKEY_CLASSES_ROOT\Microsoft.PowerShellScript.1\Shell\0\Command]
@="\"C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe\" -NoExit \"-Command\" \"if((Get-ExecutionPolicy ) -ne 'AllSigned') { Set-ExecutionPolicy -Scope Process Bypass }; & \\\"%1\\\"\""

You can copy and paste the text into a file with a .reg extension.  Then simply double-click the .reg file and click OK on the prompt to have the registry keys updated.  Now by default when you run a PowerShell script from File Explorer (i.e. Windows Explorer), the console window will stay open even after the script is finished executing.  From there you can just type exit and hit Enter to close the window, or use the mouse to click the window’s X in the top-right corner.
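Alternatively, if you would rather make the change from a PowerShell console than a .reg file, something like the following should set the same two values (this is a sketch; it must be run from an elevated console, since writing to HKEY_CLASSES_ROOT requires admin rights):

```powershell
# Run from an elevated PowerShell console.
# Set the default value of each key to include the -NoExit switch,
# mirroring the Desired Values shown above.
Set-ItemProperty -Path 'Registry::HKEY_CLASSES_ROOT\Applications\powershell.exe\shell\open\command' `
    -Name '(default)' `
    -Value '"C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" -NoExit "& \"%1\""'

Set-ItemProperty -Path 'Registry::HKEY_CLASSES_ROOT\Microsoft.PowerShellScript.1\Shell\0\Command' `
    -Name '(default)' `
    -Value '"C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" -NoExit "-Command" "if((Get-ExecutionPolicy) -ne ''AllSigned'') { Set-ExecutionPolicy -Scope Process Bypass }; & \"%1\""'
```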

If I have missed other common registry keys or any other information, please leave a comment to let me know.  I hope you find this useful.

Happy coding!

Browser Extensions To Expand GitHub Code Pages To Fill The Full Width Of Your Browser

May 27th, 2014

The problem

I love GitHub, but one thing that I and most developers hate is that the pages that show source code (Pull requests, commits, blobs) are locked to a fixed width, and it’s only about 900 pixels.  Most developers have widescreen monitors, so their code lines are typically longer than 900 pixels.  This can make viewing code on GitHub painful because you have to constantly horizontally scroll to see a whole line of code.  I honestly can’t believe that after years GitHub still hasn’t fixed this.  It either means that the GitHub developers don’t dogfood their own product, or the website designers (not programmers) have the final say on how the site looks, in which case they don’t know their target audience very well.  Anyways, I digress.

My solution

To solve this problem, I wrote a GreaseMonkey user script 2 years ago that expands the code section on GitHub to fill the width of your browser, and it works great.  The problem was that GreaseMonkey is a Firefox-only extension.  Luckily, these days most browsers have a GreaseMonkey equivalent:

Internet Explorer has one called Trixie.

Chrome has one called TamperMonkey.  Chrome also supports user scripts natively, so you can install them without TamperMonkey, but TamperMonkey helps with installing, uninstalling, and managing them.

So if you have GreaseMonkey or an equivalent installed, then you can simply go ahead and install my user script for free and start viewing code on GitHub in widescreen glory.

Alternatively, I have also released a free Chrome extension in the Chrome Web Store called Make GitHub Pages Full Width.  When you install it from the store you get all of the added Store benefits, such as having the extension sync across all of your PCs, automatically getting it installed again after you format your PC, etc.

Results

If you install the extension and a code page doesn’t expand its width to fit your page, just refresh the page.  If anybody knows how to fix this issue, please let me know.

And to give you an idea of what the result looks like, here are 2 screenshots; one without the extension installed (top, notice some text goes out of view), and one with it (bottom).

[Screenshot: WithoutFullWidth]

[Screenshot: WithFullWidth]

Happy coding!

Adding a WPF Settings Page To The Tools Options Dialog Window For Your Visual Studio Extension

April 25th, 2014

I recently created my first Visual Studio extension, Diff All Files, which allows you to quickly compare the changes to all files in a TFS changeset, shelveset, or pending changes (Git support coming soon). One of the first challenges I faced when I started the project was where to display my extension’s settings to the user, and where to save them.  My first instinct was to create a new Menu item to launch a page with all of the settings to display, since the wizard you go through to create the project has an option to automatically add a new Menu item to the Tools menu.  After some Googling though, I found the more acceptable solution is to create a new section within the Tools -> Options window for your extension, as this will also allow the user to import and export your extension’s settings.

Adding a grid or custom Windows Forms settings page

Luckily I found this Stack Overflow answer that shows a Visual Basic example of how to do this, and links to the MSDN page that also shows how to do this in C#.  The MSDN page is a great resource, and it shows you everything you need to create your settings page as either a Grid Page, or a Custom Page using Windows Forms (FYI: when it says to add a UserControl, it means a System.Windows.Forms.UserControl, not a System.Windows.Controls.UserControl).  My extension’s settings page needed to have buttons on it to perform some operations, which is something the Grid Page doesn’t support, so I had to make a Custom Page.  I first made it using Windows Forms as the page shows, but it quickly reminded me how outdated Windows Forms is (no binding!), and my settings page would have to be a fixed width and height, rather than expanding to the size of the user’s Options dialog window, which I didn’t like.

Adding a custom WPF settings page

The steps to create a Custom WPF settings page are the same as for creating a Custom Windows Forms Page, except that instead of having your settings class inherit from Microsoft.VisualStudio.Shell.DialogPage (steps 1 and 2 on that page), it needs to inherit from Microsoft.VisualStudio.Shell.UIElementDialogPage.  And when you create your User Control for the settings page’s UI, it will be a WPF System.Windows.Controls.UserControl.  Also, instead of overriding the Window property of the DialogPage class, you will override the Child property of the UIElementDialogPage class.

Here’s a sample of what the Settings class might look like:

using System.Collections.Generic;
using System.ComponentModel;
using System.Linq;
using System.Runtime.InteropServices;
using Microsoft.VisualStudio.Shell;

namespace VS_DiffAllFiles.Settings
{
	[ClassInterface(ClassInterfaceType.AutoDual)]
	[Guid("1D9ECCF3-5D2F-4112-9B25-264596873DC9")]	// Special guid to tell it that this is a custom Options dialog page, not the built-in grid dialog page.
	public class DiffAllFilesSettings : UIElementDialogPage, INotifyPropertyChanged
	{
		#region Notify Property Changed
		/// <summary>
		/// Inherited event from INotifyPropertyChanged.
		/// </summary>
		public event PropertyChangedEventHandler PropertyChanged;

		/// <summary>
		/// Fires the PropertyChanged event of INotifyPropertyChanged with the given property name.
		/// </summary>
		/// <param name="propertyName">The name of the property to fire the event against</param>
		public void NotifyPropertyChanged(string propertyName)
		{
			if (PropertyChanged != null)
				PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
		}
		#endregion

		/// <summary>
		/// Get / Set if new files being added to source control should be compared.
		/// </summary>
		public bool CompareNewFiles { get { return _compareNewFiles; } set { _compareNewFiles = value; NotifyPropertyChanged("CompareNewFiles"); } }
		private bool _compareNewFiles = false;

		#region Overridden Functions

		/// <summary>
		/// Gets the Windows Presentation Foundation (WPF) child element to be hosted inside the Options dialog page.
		/// </summary>
		/// <returns>The WPF child element.</returns>
		protected override System.Windows.UIElement Child
		{
			get { return new DiffAllFilesSettingsPageControl(this); }
		}

		/// <summary>
		/// Should be overridden to reset settings to their default values.
		/// </summary>
		public override void ResetSettings()
		{
			CompareNewFiles = false;
			base.ResetSettings();
		}

		#endregion
	}
}

 

And what the code-behind for the User Control might look like:

using System;
using System.Diagnostics;
using System.Linq;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Input;
using System.Windows.Navigation;

namespace VS_DiffAllFiles.Settings
{
	/// <summary>
	/// Interaction logic for DiffAllFilesSettingsPageControl.xaml
	/// </summary>
	public partial class DiffAllFilesSettingsPageControl : UserControl
	{
		/// <summary>
		/// A handle to the Settings instance that this control is bound to.
		/// </summary>
		private DiffAllFilesSettings _settings = null;

		public DiffAllFilesSettingsPageControl(DiffAllFilesSettings settings)
		{
			InitializeComponent();
			_settings = settings;
			this.DataContext = _settings;
		}

		private void btnRestoreDefaultSettings_Click(object sender, RoutedEventArgs e)
		{
			_settings.ResetSettings();
		}

		private void UserControl_LostKeyboardFocus(object sender, KeyboardFocusChangedEventArgs e)
		{
		// Find all TextBoxes in this control and force their Text bindings to fire to make sure all changes have been saved.
			// This is required because if the user changes some text, then clicks on the Options Window's OK button, it closes 
			// the window before the TextBox's Text bindings fire, so the new value will not be saved.
			foreach (var textBox in DiffAllFilesHelper.FindVisualChildren<TextBox>(sender as UserControl))
			{
				var bindingExpression = textBox.GetBindingExpression(TextBox.TextProperty);
				if (bindingExpression != null) bindingExpression.UpdateSource();
			}
		}
	}
}

 

And here’s the corresponding xaml for the UserControl:

<UserControl x:Class="VS_DiffAllFiles.Settings.DiffAllFilesSettingsPageControl"
						 xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
						 xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
						 xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" 
						 xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
						 xmlns:xctk="http://schemas.xceed.com/wpf/xaml/toolkit"
						 xmlns:QC="clr-namespace:QuickConverter;assembly=QuickConverter"
						 mc:Ignorable="d" 
						 d:DesignHeight="350" d:DesignWidth="400" LostKeyboardFocus="UserControl_LostKeyboardFocus">
	<UserControl.Resources>
	</UserControl.Resources>

	<Grid>
		<StackPanel Orientation="Vertical">
			<CheckBox Content="Compare new files" IsChecked="{Binding Path=CompareNewFiles}" ToolTip="If files being added to source control should be compared." />
			<Button Content="Restore Default Settings" Click="btnRestoreDefaultSettings_Click" />
		</StackPanel>
	</Grid>
</UserControl>

You can see that I am binding the CheckBox directly to the CompareNewFiles property on the instance of my Settings class; yay, no messing around with Checked events :)

This is a complete, but very simple example. If you want a more detailed example that shows more controls, check out the source code for my Diff All Files extension.

A minor problem

One problem I found was that when using a TextBox on my Settings Page UserControl, if I edited text in a TextBox and then hit the OK button on the Options dialog to close the window, the new text would not actually get applied.  This was because the window would get closed before the TextBox bindings had a chance to fire; so if I instead clicked out of the TextBox before clicking the OK button, everything worked correctly.  I know you can change the binding’s UpdateSourceTrigger to PropertyChanged, but I perform some additional logic when some of my textbox text is changed, and I didn’t want that logic firing after every key press while the user typed in the TextBox.

To solve this problem I added a LostKeyboardFocus event handler to the UserControl, and in that handler I find all TextBox controls on the UserControl and force their bindings to update.  You can see the code for this in the snippets above.  The one piece of code that’s not shown is the FindVisualChildren<TextBox> method, so here it is:

/// <summary>
/// Recursively finds the visual children of the given control.
/// </summary>
/// <typeparam name="T">The type of control to look for.</typeparam>
/// <param name="dependencyObject">The dependency object.</param>
public static IEnumerable<T> FindVisualChildren<T>(DependencyObject dependencyObject) where T : DependencyObject
{
	if (dependencyObject != null)
	{
		for (int index = 0; index < VisualTreeHelper.GetChildrenCount(dependencyObject); index++)
		{
			DependencyObject child = VisualTreeHelper.GetChild(dependencyObject, index);
			if (child != null && child is T)
			{
				yield return (T)child;
			}

			foreach (T childOfChild in FindVisualChildren<T>(child))
			{
				yield return childOfChild;
			}
		}
	}
}

 

And that’s it.  Now you know how to make a nice Settings Page for your Visual Studio extension using WPF, instead of the archaic Windows Forms.

Happy coding!

Template Solution For Deploying TFS Checkin Policies To Multiple Versions Of Visual Studio And Having Them Automatically Work From “TF.exe Checkin” Too

March 24th, 2014

Get the source code

Let’s get right to it by giving you the source code.  You can get it from the MSDN samples here.

 

Explanation of source code and adding new checkin policies

If you open the Visual Studio (VS) solution the first thing you will likely notice is that there are 5 projects.  CheckinPolicies.VS2012 simply references all of the files in CheckinPolicies.VS2013 as links (i.e. shortcut files); this is because we need to compile the CheckinPolicies.VS2012 project using TFS 2012 assemblies, and the CheckinPolicies.VS2013 project using TFS2013 assemblies, but want both projects to have all of the same checkin policies.  So the projects contain all of the same files; just a few of their references are different.  A copy of the references that are different between the two projects are stored in the project’s “Dependencies” folder; these are the Team Foundation assemblies that are specific to VS 2012 and 2013.  Having these assemblies stored in the solution allows us to still build the VS 2012 checkin policies, even if you (or a colleague) only has VS 2013 installed.

Update: To avoid having multiple CheckinPolicy.VS* projects, we could use the msbuild targets technique that P. Kelly shows on his blog. However, I believe we would still need multiple deployment projects, as described below, in order to have the checkin policies work outside of Visual Studio.

The other projects are CheckinPolicyDeployment.VS2012 and CheckinPolicyDeployment.VS2013 (both of which are VSPackage projects), and CheckinPolicyDeploymentShared.  The CheckinPolicyDeployment.VS2012/VS2013 projects will generate the VSIX files that are used to distribute the checkin policies, and CheckinPolicyDeploymentShared contains files/code that are common to both of the projects (the projects reference the files by linking to them).

Basically everything is ready to go.  Just start adding new checkin policy classes to the CheckinPolicies.VS2013 project, and then also add them to the CheckinPolicies.VS2012 project as links.  You can add a file as a link in 2 different ways in the Solution Explorer:

  1. Right-click on the CheckinPolicies.VS2012 project and choose Add -> Existing Item…, and then navigate to the new class file that you added to the CheckinPolicies.VS2013 project.  Instead of clicking the Add button though, click the little down arrow on the side of the Add button and then choose Add As Link.
  2. Drag and drop the file from the CheckinPolicies.VS2013 project to the CheckinPolicies.VS2012 project, but while releasing the left mouse button to drop the file, hold down the Alt key; this will change the operation from adding a copy of the file to that project, to adding a shortcut file that links back to the original file.
There is a DummyCheckinPolicy.cs file in the CheckinPolicies.VS2013 project that shows you an example of how to create a new checkin policy.  Basically you just need to create a new public, serializable class that extends the CheckinPolicyBase class.  The actual logic for your checkin policy to perform goes in the Evaluate() function.  If there is a policy violation in the code that is trying to be checked in, just add a new PolicyFailure instance to the failures list with the message that you want the user to see.
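To illustrate, a new policy class might look something like the following.  The class name and the check it performs are hypothetical, and the exact CheckinPolicyBase and Evaluate() signatures may differ slightly from what’s in the template solution, so treat this as a sketch rather than copy-paste code:

```csharp
using System;
using System.Collections.Generic;

namespace CheckinPolicies
{
	// Hypothetical example policy; see DummyCheckinPolicy.cs in the
	// template solution for the authoritative example.
	[Serializable]
	public class RequireCheckinCommentPolicy : CheckinPolicyBase
	{
		// Inspect the pending checkin and report any violations.
		protected override void Evaluate(List<PolicyFailure> failures)
		{
			// Example check: fail the checkin if no comment was provided.
			if (string.IsNullOrWhiteSpace(PendingCheckin.PendingChanges.Comment))
			{
				failures.Add(new PolicyFailure("Please provide a checkin comment."));
			}
		}
	}
}
```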

Building a new version of your checkin policies

Once you are ready to deploy your policies, you will want to update the version number in the source.extension.vsixmanifest file in both the CheckinPolicyDeployment.VS2012 and CheckinPolicyDeployment.VS2013 projects.  Since these projects will both contain the same policies, I recommend giving them the same version number as well.  Once you have updated the version number, build the solution in Release mode.  From there you will find the new VSIX files at "CheckinPolicyDeployment.VS2012\bin\Release\TFS Checkin Policies VS2012.vsix" and "CheckinPolicyDeployment.VS2013\bin\Release\TFS Checkin Policies VS2013.vsix".  You can then distribute them to your team; I recommend setting up an internal VS Extension Gallery, but the poor-man’s solution is to just email the vsix file out to everyone on your team.

Having the policies automatically work outside of Visual Studio

This is already hooked up and working in the template solution, so nothing needs to be changed there, but I will explain how it works here.  A while back I blogged about how to get your Team Foundation Server (TFS) checkin policies to still work when checking code in from the command line via the “tf checkin” command; by default when installing your checkin policies via a VSIX package (the MS recommended approach) you can only get them to work in Visual Studio.  I hated that I would need to manually run the script I provided each time the checkin policies were updated, so I posted a question on Stack Overflow about how to run a script automatically after the VSIX package installs the extension.  It turns out that you can’t do that, but what you can do is use a VSPackage instead, which still uses VSIX to deploy the extension, but then also allows us to hook into Visual Studio events to run our script when VS starts up or exits.

Here is the VSPackage class code to hook up the events and call our UpdateCheckinPoliciesInRegistry() function:

/// <summary>
/// This is the class that implements the package exposed by this assembly.
///
/// The minimum requirement for a class to be considered a valid package for Visual Studio
/// is to implement the IVsPackage interface and register itself with the shell.
/// This package uses the helper classes defined inside the Managed Package Framework (MPF)
/// to do it: it derives from the Package class that provides the implementation of the
/// IVsPackage interface and uses the registration attributes defined in the framework to
/// register itself and its components with the shell.
/// </summary>
// This attribute tells the PkgDef creation utility (CreatePkgDef.exe) that this class is
// a package.
[PackageRegistration(UseManagedResourcesOnly = true)]
// This attribute is used to register the information needed to show this package
// in the Help/About dialog of Visual Studio.
[InstalledProductRegistration("#110", "#112", "1.0", IconResourceID = 400)]
// Auto Load our assembly even when no solution is open (by using the Microsoft.VisualStudio.VSConstants.UICONTEXT_NoSolution guid).
[ProvideAutoLoad("ADFC4E64-0397-11D1-9F4E-00A0C911004F")]
public abstract class CheckinPolicyDeploymentPackage : Package
{
	private EnvDTE.DTEEvents _dteEvents;

	/// <summary>
	/// Initialization of the package; this method is called right after the package is sited, so this is the place
	/// where you can put all the initialization code that relies on services provided by Visual Studio.
	/// </summary>
	protected override void Initialize()
	{
		base.Initialize();

		var dte = (DTE2)GetService(typeof(SDTE));
		_dteEvents = dte.Events.DTEEvents;
		_dteEvents.OnBeginShutdown += OnBeginShutdown;

		UpdateCheckinPoliciesInRegistry();
	}

	private void OnBeginShutdown()
	{
		_dteEvents.OnBeginShutdown -= OnBeginShutdown;
		_dteEvents = null;

		UpdateCheckinPoliciesInRegistry();
	}

	private void UpdateCheckinPoliciesInRegistry()
	{
		var dte = (DTE2)GetService(typeof(SDTE));
		string visualStudioVersionNumber = dte.Version;
		string customCheckinPolicyEntryName = "CheckinPolicies";

		// Create the paths to the registry keys that contain the values to inspect.
		string desiredRegistryKeyPath = string.Format("HKEY_CURRENT_USER\\Software\\Microsoft\\VisualStudio\\{0}_Config\\TeamFoundation\\SourceControl\\Checkin Policies", visualStudioVersionNumber);
		string currentRegistryKeyPath = string.Empty;
		if (Environment.Is64BitOperatingSystem)
			currentRegistryKeyPath = string.Format("HKEY_LOCAL_MACHINE\\SOFTWARE\\Wow6432Node\\Microsoft\\VisualStudio\\{0}\\TeamFoundation\\SourceControl\\Checkin Policies", visualStudioVersionNumber);
		else
			currentRegistryKeyPath = string.Format("HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\VisualStudio\\{0}\\TeamFoundation\\SourceControl\\Checkin Policies", visualStudioVersionNumber);

		// Get the value that the registry should have, and the value that it currently has.
		var desiredRegistryValue = Registry.GetValue(desiredRegistryKeyPath, customCheckinPolicyEntryName, null);
		var currentRegistryValue = Registry.GetValue(currentRegistryKeyPath, customCheckinPolicyEntryName, null);

		// If the registry value is already up to date, just exit without updating the registry.
		if (desiredRegistryValue == null || desiredRegistryValue.Equals(currentRegistryValue))
			return;

		// Get the path to the PowerShell script to run.
		string powerShellScriptFilePath = Path.Combine(Path.GetDirectoryName(System.Reflection.Assembly.GetAssembly(typeof(CheckinPolicyDeploymentPackage)).Location),
			"FilesFromShared", "UpdateCheckinPolicyInRegistry.ps1");

		// Start a new process to execute the PowerShell script, which does the actual work.
		var process = new Process
		{
			StartInfo =
			{
				FileName = "PowerShell",
				Arguments = string.Format("-NoProfile -ExecutionPolicy Bypass -File \"{0}\" -VisualStudioVersion \"{1}\" -CustomCheckinPolicyEntryName \"{2}\"", powerShellScriptFilePath, visualStudioVersionNumber, customCheckinPolicyEntryName),

				// Hide the PowerShell window while we run the script.
				CreateNoWindow = true,
				UseShellExecute = false
			}
		};
		process.Start();
	}
}

All of the attributes on the class are put there by default, except for the [ProvideAutoLoad("ADFC4E64-0397-11D1-9F4E-00A0C911004F")] one; this attribute is what actually allows the Initialize() function to get called when Visual Studio starts.  You can see in the Initialize() method that we hook up an event so that the UpdateCheckinPoliciesInRegistry() function gets called when VS is closed, and we also call that function from Initialize(), which is called when VS starts up.

You might have noticed that this class is abstract.  This is because the VS 2012 and VS 2013 classes need to have a unique ID attribute, so the actual VSPackage class just inherits from this one.  Here is what the VS 2013 one looks like:

[Guid(GuidList.guidCheckinPolicyDeployment_VS2013PkgString)]
public sealed class CheckinPolicyDeployment_VS2013Package : CheckinPolicyDeploymentShared.CheckinPolicyDeploymentPackage
{ }

The UpdateCheckinPoliciesInRegistry() function checks to see if the appropriate registry key has been updated to allow the checkin policies to run from the “tf checkin” command-line command.  If it has, the function simply exits; otherwise it calls a PowerShell script to set the keys for us.  A PowerShell script is used because modifying the registry requires admin permissions, and we can easily run a new PowerShell process as admin (assuming the logged-in user is an admin on their local machine, which is the case for everyone in our company).

The one variable to note here is customCheckinPolicyEntryName.  This corresponds to the registry key name that I’ve specified in the RegistryKeyToAdd.pkgdef file, so if you change it, be sure to change it in both places.  This is what the RegistryKeyToAdd.pkgdef file contains:

// We use "\..\" in the value because the projects that include this file place it in a "FilesFromShared" folder, and we want it to look for the dll in the root directory.
[$RootKey$\TeamFoundation\SourceControl\Checkin Policies]
"CheckinPolicies"="$PackageFolder$\..\CheckinPolicies.dll"

And here are the contents of the UpdateCheckinPolicyInRegistry.ps1 PowerShell file.  This is basically just a refactored version of the script I posted on my old blog post:

# This script copies the required registry value so that the checkin policies will work when doing a TFS checkin from the command line.
param
(
	[parameter(Mandatory=$true,HelpMessage="The version of Visual Studio to update in the registry (i.e. '11.0' for VS 2012, '12.0' for VS 2013)")]
	[string]$VisualStudioVersion,

	[parameter(HelpMessage="The name of the Custom Checkin Policy Entry in the Registry Key.")]
	[string]$CustomCheckinPolicyEntryName = 'CheckinPolicies'
)

# Turn on Strict Mode to help catch syntax-related errors.
# 	This must come after a script's/function's param section.
Set-StrictMode -Version Latest

$ScriptBlock = {
	function UpdateCheckinPolicyInRegistry([parameter(Mandatory=$true)][string]$VisualStudioVersion, [string]$CustomCheckinPolicyEntryName)
	{
		$status = 'Updating registry to allow checkin policies to work outside of Visual Studio version ' + $VisualStudioVersion + '.'
		Write-Output $status

		# Get the Registry Key Entry that holds the path to the Custom Checkin Policy Assembly.
		$HKCUKey = 'HKCU:\Software\Microsoft\VisualStudio\' + $VisualStudioVersion + '_Config\TeamFoundation\SourceControl\Checkin Policies'
		$CustomCheckinPolicyRegistryEntry = Get-ItemProperty -Path $HKCUKey -Name $CustomCheckinPolicyEntryName
		$CustomCheckinPolicyEntryValue = $CustomCheckinPolicyRegistryEntry.($CustomCheckinPolicyEntryName)

		# Create a new Registry Key Entry for the custom Checkin Policy Assembly so it will work from the command line (as well as from Visual Studio).
		if ([Environment]::Is64BitOperatingSystem)
		{ $HKLMKey = 'HKLM:\SOFTWARE\Wow6432Node\Microsoft\VisualStudio\' + $VisualStudioVersion + '\TeamFoundation\SourceControl\Checkin Policies' }
		else
		{ $HKLMKey = 'HKLM:\SOFTWARE\Microsoft\VisualStudio\' + $VisualStudioVersion + '\TeamFoundation\SourceControl\Checkin Policies' }
		Set-ItemProperty -Path $HKLMKey -Name $CustomCheckinPolicyEntryName -Value $CustomCheckinPolicyEntryValue
	}
}

# Run the script block as admin so it has permissions to modify the registry.
Start-Process -FilePath PowerShell -Verb RunAs -ArgumentList "-NoProfile -ExecutionPolicy Bypass -Command & {$ScriptBlock UpdateCheckinPolicyInRegistry -VisualStudioVersion ""$VisualStudioVersion"" -CustomCheckinPolicyEntryName ""$CustomCheckinPolicyEntryName""}"

      While I could have just used a much smaller PowerShell script that simply set a given registry key to a given value, I chose to have some code duplication between the C# code and this script so that this script can still be used as a stand-alone script if needed.

The slight downside to using a VSPackage is that this script won’t get called until the user closes Visual Studio or opens a new instance, so the checkin policies won’t work from the “tf checkin” command immediately after updating the checkin policies extension.  Still, this beats having to remember to run the script manually.

       

      Conclusion

So I’ve given you a template solution that you can use without any modification to start creating your VS 2012 and VS 2013 compatible checkin policies; just add new class files to the CheckinPolicies.VS2013 project, and then add them to the CheckinPolicies.VS2012 project as links.  Using links means you only have to modify each checkin policy file once, and the changes flow into both the 2012 and 2013 VSIX packages.  Hopefully this template solution helps you get your TFS checkin policies up and running faster.

      Happy Coding!

      Saving And Loading A C# Object’s Data To An Xml, Json, Or Binary File

      March 14th, 2014 No comments

I love creating tools, particularly ones for myself and other developers to use.  A common situation that I run into is needing to save the user’s settings to a file so that I can load them up the next time the tool is run.  I find that the easiest way to accomplish this is to create a Settings class to hold all of the user’s settings, and then use serialization to save and load the class instance to/from a file.  I mention a Settings class here, but you can use this technique to save any object (or list of objects) to a file.

There are tons of different formats that you may want to save your object instances as, but the big three are Binary, XML, and Json.  Each of these formats has its pros and cons, which I won’t go into.  Below I present functions that can be used to save and load any object instance to / from a file, as well as the different aspects to be aware of when using each method.

The following code (without examples of how to use it) is also available here, and can be used directly from my NuGet package.

       

      Writing and Reading an object to / from a Binary file

      • Writes and reads ALL object properties and variables to / from the file (i.e. public, protected, internal, and private).
      • The data saved to the file is not human readable, and thus cannot be edited outside of your application.
      • Have to decorate class (and all classes that it contains) with a [Serializable] attribute.
      • Use the [NonSerialized] attribute to exclude a variable from being written to the file; there is no way to prevent an auto-property from being serialized besides making it use a backing variable and putting the [NonSerialized] attribute on that.
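The auto-property caveat in the last bullet can be sketched like this (the class and member names here are hypothetical, just for illustration):

```csharp
using System;

[Serializable]
public class Settings
{
	// A normal auto-property; its compiler-generated backing field WILL be serialized.
	public string UserName { get; set; }

	// To exclude a value, switch the auto-property to an explicit backing field
	// and mark that field with [NonSerialized].
	[NonSerialized]
	private string _sessionToken;

	public string SessionToken
	{
		get { return _sessionToken; }
		set { _sessionToken = value; }
	}
}
```

After a serialize/deserialize round-trip, UserName keeps its value while SessionToken comes back null.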
      /// <summary>
      /// Functions for performing common binary Serialization operations.
      /// <para>All properties and variables will be serialized.</para>
      /// <para>Object type (and all child types) must be decorated with the [Serializable] attribute.</para>
      /// <para>To prevent a variable from being serialized, decorate it with the [NonSerialized] attribute; cannot be applied to properties.</para>
      /// </summary>
      public static class BinarySerialization
      {
      	/// <summary>
      	/// Writes the given object instance to a binary file.
      	/// <para>Object type (and all child types) must be decorated with the [Serializable] attribute.</para>
      	/// <para>To prevent a variable from being serialized, decorate it with the [NonSerialized] attribute; cannot be applied to properties.</para>
      	/// </summary>
	/// <typeparam name="T">The type of object being written to the binary file.</typeparam>
	/// <param name="filePath">The file path to write the object instance to.</param>
	/// <param name="objectToWrite">The object instance to write to the binary file.</param>
      	/// <param name="append">If false the file will be overwritten if it already exists. If true the contents will be appended to the file.</param>
      	public static void WriteToBinaryFile<T>(string filePath, T objectToWrite, bool append = false)
      	{
      		using (Stream stream = File.Open(filePath, append ? FileMode.Append : FileMode.Create))
      		{
      			var binaryFormatter = new System.Runtime.Serialization.Formatters.Binary.BinaryFormatter();
      			binaryFormatter.Serialize(stream, objectToWrite);
      		}
      	}
      
      	/// <summary>
      	/// Reads an object instance from a binary file.
      	/// </summary>
	/// <typeparam name="T">The type of object to read from the binary file.</typeparam>
      	/// <param name="filePath">The file path to read the object instance from.</param>
      	/// <returns>Returns a new instance of the object read from the binary file.</returns>
      	public static T ReadFromBinaryFile<T>(string filePath)
      	{
      		using (Stream stream = File.Open(filePath, FileMode.Open))
      		{
      			var binaryFormatter = new System.Runtime.Serialization.Formatters.Binary.BinaryFormatter();
      			return (T)binaryFormatter.Deserialize(stream);
      		}
      	}
      }
      

       

      And here is an example of how to use it:

      [Serializable]
      public class Person
      {
      	public string Name { get; set; }
      	public int Age = 20;
      	public Address HomeAddress { get; set;}
      	private string _thisWillGetWrittenToTheFileToo = "even though it is a private variable.";
      
      	[NonSerialized]
      	public string ThisWillNotBeWrittenToTheFile = "because of the [NonSerialized] attribute.";
      }
      
      [Serializable]
      public class Address
      {
      	public string StreetAddress { get; set; }
      	public string City { get; set; }
      }
      
      // And then in some function.
Person person = new Person() { Name = "Dan", Age = 30, HomeAddress = new Address() { StreetAddress = "123 My St", City = "Regina" } };
List<Person> people = GetListOfPeople();
BinarySerialization.WriteToBinaryFile<Person>(@"C:\person.bin", person);
BinarySerialization.WriteToBinaryFile<List<Person>>(@"C:\people.bin", people);
      
      // Then in some other function.
Person person = BinarySerialization.ReadFromBinaryFile<Person>(@"C:\person.bin");
List<Person> people = BinarySerialization.ReadFromBinaryFile<List<Person>>(@"C:\people.bin");
      

       

      Writing and Reading an object to / from an XML file (Using System.Xml.Serialization.XmlSerializer in the System.Xml assembly)

      • Only writes and reads the Public properties and variables to / from the file.
      • Classes to be serialized must contain a public parameterless constructor.
      • The data saved to the file is human readable, so it can easily be edited outside of your application.
      • Use the [XmlIgnore] attribute to exclude a public property or variable from being written to the file.
      /// <summary>
      /// Functions for performing common XML Serialization operations.
      /// <para>Only public properties and variables will be serialized.</para>
      /// <para>Use the [XmlIgnore] attribute to prevent a property/variable from being serialized.</para>
      /// <para>Object to be serialized must have a parameterless constructor.</para>
      /// </summary>
      public static class XmlSerialization
      {
      	/// <summary>
      	/// Writes the given object instance to an XML file.
      	/// <para>Only Public properties and variables will be written to the file. These can be any type though, even other classes.</para>
      	/// <para>If there are public properties/variables that you do not want written to the file, decorate them with the [XmlIgnore] attribute.</para>
      	/// <para>Object type must have a parameterless constructor.</para>
      	/// </summary>
      	/// <typeparam name="T">The type of object being written to the file.</typeparam>
      	/// <param name="filePath">The file path to write the object instance to.</param>
      	/// <param name="objectToWrite">The object instance to write to the file.</param>
      	/// <param name="append">If false the file will be overwritten if it already exists. If true the contents will be appended to the file.</param>
      	public static void WriteToXmlFile<T>(string filePath, T objectToWrite, bool append = false) where T : new()
      	{
      		TextWriter writer = null;
      		try
      		{
      			var serializer = new XmlSerializer(typeof(T));
      			writer = new StreamWriter(filePath, append);
      			serializer.Serialize(writer, objectToWrite);
      		}
      		finally
      		{
      			if (writer != null)
      				writer.Close();
      		}
      	}
      
      	/// <summary>
      	/// Reads an object instance from an XML file.
      	/// <para>Object type must have a parameterless constructor.</para>
      	/// </summary>
      	/// <typeparam name="T">The type of object to read from the file.</typeparam>
      	/// <param name="filePath">The file path to read the object instance from.</param>
      	/// <returns>Returns a new instance of the object read from the XML file.</returns>
      	public static T ReadFromXmlFile<T>(string filePath) where T : new()
      	{
      		TextReader reader = null;
      		try
      		{
      			var serializer = new XmlSerializer(typeof(T));
      			reader = new StreamReader(filePath);
      			return (T)serializer.Deserialize(reader);
      		}
      		finally
      		{
      			if (reader != null)
      				reader.Close();
      		}
      	}
      }
      

       

      And here is an example of how to use it:

      public class Person
      {
      	public string Name { get; set; }
      	public int Age = 20;
      	public Address HomeAddress { get; set;}
      	private string _thisWillNotGetWrittenToTheFile = "because it is not public.";
      
      	[XmlIgnore]
      	public string ThisWillNotBeWrittenToTheFile = "because of the [XmlIgnore] attribute.";
      }
      
      public class Address
      {
      	public string StreetAddress { get; set; }
      	public string City { get; set; }
      }
      
      // And then in some function.
Person person = new Person() { Name = "Dan", Age = 30, HomeAddress = new Address() { StreetAddress = "123 My St", City = "Regina" } };
List<Person> people = GetListOfPeople();
XmlSerialization.WriteToXmlFile<Person>(@"C:\person.txt", person);
XmlSerialization.WriteToXmlFile<List<Person>>(@"C:\people.txt", people);
      
      // Then in some other function.
Person person = XmlSerialization.ReadFromXmlFile<Person>(@"C:\person.txt");
List<Person> people = XmlSerialization.ReadFromXmlFile<List<Person>>(@"C:\people.txt");
      

       

      Writing and Reading an object to / from a Json file (using the Newtonsoft.Json assembly in the Json.NET NuGet package)

      • Only writes and reads the Public properties and variables to / from the file.
      • Classes to be serialized must contain a public parameterless constructor.
      • The data saved to the file is human readable, so it can easily be edited outside of your application.
      • Use the [JsonIgnore] attribute to exclude a public property or variable from being written to the file.

      /// <summary>
      /// Functions for performing common Json Serialization operations.
      /// <para>Requires the Newtonsoft.Json assembly (Json.Net package in NuGet Gallery) to be referenced in your project.</para>
      /// <para>Only public properties and variables will be serialized.</para>
      /// <para>Use the [JsonIgnore] attribute to ignore specific public properties or variables.</para>
      /// <para>Object to be serialized must have a parameterless constructor.</para>
      /// </summary>
      public static class JsonSerialization
      {
      	/// <summary>
      	/// Writes the given object instance to a Json file.
      	/// <para>Object type must have a parameterless constructor.</para>
      	/// <para>Only Public properties and variables will be written to the file. These can be any type though, even other classes.</para>
      	/// <para>If there are public properties/variables that you do not want written to the file, decorate them with the [JsonIgnore] attribute.</para>
      	/// </summary>
      	/// <typeparam name="T">The type of object being written to the file.</typeparam>
      	/// <param name="filePath">The file path to write the object instance to.</param>
      	/// <param name="objectToWrite">The object instance to write to the file.</param>
      	/// <param name="append">If false the file will be overwritten if it already exists. If true the contents will be appended to the file.</param>
      	public static void WriteToJsonFile<T>(string filePath, T objectToWrite, bool append = false) where T : new()
      	{
      		TextWriter writer = null;
      		try
      		{
      			var contentsToWriteToFile = Newtonsoft.Json.JsonConvert.SerializeObject(objectToWrite);
      			writer = new StreamWriter(filePath, append);
      			writer.Write(contentsToWriteToFile);
      		}
      		finally
      		{
      			if (writer != null)
      				writer.Close();
      		}
      	}
      
      	/// <summary>
	/// Reads an object instance from a Json file.
      	/// <para>Object type must have a parameterless constructor.</para>
      	/// </summary>
      	/// <typeparam name="T">The type of object to read from the file.</typeparam>
      	/// <param name="filePath">The file path to read the object instance from.</param>
      	/// <returns>Returns a new instance of the object read from the Json file.</returns>
      	public static T ReadFromJsonFile<T>(string filePath) where T : new()
      	{
      		TextReader reader = null;
      		try
      		{
      			reader = new StreamReader(filePath);
      			var fileContents = reader.ReadToEnd();
      			return Newtonsoft.Json.JsonConvert.DeserializeObject<T>(fileContents);
      		}
      		finally
      		{
      			if (reader != null)
      				reader.Close();
      		}
      	}
      }
      

      And here is an example of how to use it:

      public class Person
      {
      	public string Name { get; set; }
      	public int Age = 20;
      	public Address HomeAddress { get; set;}
      	private string _thisWillNotGetWrittenToTheFile = "because it is not public.";
      
      	[JsonIgnore]
      	public string ThisWillNotBeWrittenToTheFile = "because of the [JsonIgnore] attribute.";
      }
      
      public class Address
      {
      	public string StreetAddress { get; set; }
      	public string City { get; set; }
      }
      
      // And then in some function.
Person person = new Person() { Name = "Dan", Age = 30, HomeAddress = new Address() { StreetAddress = "123 My St", City = "Regina" } };
List<Person> people = GetListOfPeople();
JsonSerialization.WriteToJsonFile<Person>(@"C:\person.txt", person);
JsonSerialization.WriteToJsonFile<List<Person>>(@"C:\people.txt", people);
      
      // Then in some other function.
Person person = JsonSerialization.ReadFromJsonFile<Person>(@"C:\person.txt");
List<Person> people = JsonSerialization.ReadFromJsonFile<List<Person>>(@"C:\people.txt");
      

       

      As you can see, the Json example is almost identical to the Xml example, with the exception of using the [JsonIgnore] attribute instead of [XmlIgnore].

       

      Writing and Reading an object to / from a Json file (using the JavaScriptSerializer in the System.Web.Extensions assembly)

      There are many Json serialization libraries out there.  I mentioned the Newtonsoft.Json one because it is very popular, and I am also mentioning this JavaScriptSerializer one because it is built into the .Net framework.  The catch with this one though is that it requires the Full .Net 4.0 framework, not just the .Net Framework 4.0 Client Profile.

      The caveats to be aware of are the same between the Newtonsoft.Json and JavaScriptSerializer libraries, except instead of using [JsonIgnore] you would use [ScriptIgnore].
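For example, the attribute swap looks like this (the class and property names here are hypothetical; requires a reference to the System.Web.Extensions assembly):

```csharp
using System.Web.Script.Serialization;

public class Person
{
	public string Name { get; set; }

	// Excluded from the JavaScriptSerializer output,
	// just like [JsonIgnore] does with Json.NET.
	[ScriptIgnore]
	public string TemporaryNotes { get; set; }
}
```

Serializing an instance of this class produces Json containing Name but no TemporaryNotes entry.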

      Be aware that the JavaScriptSerializer is in the System.Web.Extensions assembly, but in the System.Web.Script.Serialization namespace.  Here is the code from the Newtonsoft.Json code snippet that needs to be replaced in order to use the JavaScriptSerializer:

// In WriteToJsonFile<T>() function replace:
      var contentsToWriteToFile = Newtonsoft.Json.JsonConvert.SerializeObject(objectToWrite);
      // with:
      var contentsToWriteToFile = new System.Web.Script.Serialization.JavaScriptSerializer().Serialize(objectToWrite);
      
      // In ReadFromJsonFile<T>() function replace:
      return Newtonsoft.Json.JsonConvert.DeserializeObject<T>(fileContents);
      // with:
      return new System.Web.Script.Serialization.JavaScriptSerializer().Deserialize<T>(fileContents);
      

       

      Happy Coding!

Categories: C#, Json, XML

      “Agent lost communication with Team Foundation Server” TFS Build Server Error

      March 12th, 2014 No comments

      We had recently started getting lots of error messages similar to the following on our TFS Build Servers:

      Exception Message: The build failed because the build server that hosts build agent TFS-BuildController001 - Agent4 lost communication with Team Foundation Server. (type FaultException`1) 
      

      This error message would appear randomly; some builds would pass, others would fail, and when they did fail with this error message it was often at different parts in the build process.

      After a bit of digging I found this post and this one, which discussed different error messages around their build process failing with some sort of error around the build controller losing connection to the TFS server.  They talked about different fixes relating to DNS issues and load balancing, so we had our network team update our DNS records and flush the cache, but were still getting the same errors.

      We have several build controllers, and I noticed that the problem was only happening on two of the three, so our network team updated the hosts file on the two with the problem to match the entries in the one that was working fine, and boom, everything started working properly again :)

      So the problem was that the hosts file on those two build controller machines somehow got changed.

The hosts file can typically be found at "C:\Windows\System32\Drivers\etc\hosts", and here are the entries we now have in ours (just these two):

      12.345.67.89	TFS-Server.OurDomain.local
      12.345.67.89	TFS-Server
      

      If you too are running into this TFS Build Server error I hope this helps.

      If You Like Using Macros or AutoHotkey, You Might Want To Try The Enterpad AHK Keyboard

      February 12th, 2014 No comments

      If you follow my blog then you already know I’m a huge fan of AutoHotkey (AHK), and that I created the AHK Command Picker to allow me to have a limitless number of AHK macros quickly and easily accessible from my keyboard, without having a bunch of hotkeys (i.e. keyboard shortcuts) to remember.  The team over at CEDEQ saw my blog posts and were kind enough to send me an Enterpad AHK Keyboard for free :)

       

      What is the Enterpad AHK Keyboard?

      The Enterpad AHK keyboard is a physical device with 120 different touch spots on it, each of which can be used to trigger a different AHK macro/script.  Here’s a picture of it:

      While macro keyboards/controllers are nothing new, there are a number of things that separate the Enterpad AHK keyboard from your typical macro keyboard:

      1. The touch spots are not physical buttons; instead it uses a simple flat surface with 120 different positions that respond to touch.  Think of it almost as a touch screen, but instead of having a screen to touch, you just touch a piece of paper.
2. This leads to my next point, which is that you can use any overlay you want on the surface of the Enterpad AHK keyboard; the overlay is just a piece of paper.  The default overlay (piece of paper) that it ships with just has 120 squares on it, each labeled with their number (as shown in the picture above).  Because the overlay is just a piece of paper, you can write (or draw) on it, allowing you to create custom labels for each of your 120 buttons; something that you can’t do with physical buttons.  So what if you add or remap your macros after a month or a year?  Just erase and re-write your labels (if you wrote them in pencil), or simply print off a new overlay.  Also, you don’t need to have 120 different buttons; if you only require 12, you could map 10 touch spots to each of your 12 commands, giving each command a larger touch spot for launching its script.
      3. It integrates directly with AHK.  This means that you can easily write your macros/scripts in an awesome language that you (probably) already know.  While you could technically have any old macro keyboard launch AHK scripts, it would mean mapping a keyboard shortcut for each script that you want to launch, which means cluttering up your keyboard shortcuts and potentially running them unintentionally.  With the Enterpad AHK keyboard, AHK simply sees the 120 touch spots as an additional 120 keys on your keyboard, so you don’t have to clutter up your real keyboard’s hotkeys.  Here is an example of a macro that displays a message box when the first touch spot is pressed:
        001:
  MsgBox, "You pressed touch spot #1."
        Return
        

      What do you mean when you say use it to launch a macro or script?

      A macro or script is just a series of operations; basically they can be used to do ANYTHING that you can manually do on your computer.  So some examples of things you can do are:

      • Open an application or file.
      • Type specific text (such as your home address).
      • Click on specific buttons or areas of a window.

      For example, you could have a script that opens Notepad, types “This text was written by an AHK script.”, saves the file to the desktop, and then closes Notepad.  Macros are useful for automating things that you do repeatedly, such as visiting specific websites, entering usernames and passwords, typing out canned responses to emails, and much more.

The AHK community is very large and very active.  You can find a script to do almost anything you want, and when you can’t (or if you need to customize an existing script) you are very likely to get answers to any questions that you post online.  The Enterpad team also has a bunch of general-purpose scripts/examples available for you to use, such as having 10 custom clipboards, where button 1 copies to a custom clipboard and button 11 pastes from it, button 2 copies to a different custom clipboard and button 12 pastes from it, and so on.

       

      Why would I want the Enterpad AHK Keyboard?

      If you are a fan of AutoHotkey and would like a separate physical device to launch your macros/scripts, the Enterpad AHK Keyboard is definitely a great choice.  If you don’t want a separate physical device, be sure to check out AHK Command Picker, as it provides many of the same benefits without requiring a new piece of hardware.

      Some reasons you might want an Enterpad AHK Keyboard:

      • You use (or want to learn) AutoHotkey and prefer a separate physical device to launch your scripts.
      • You want to be able to launch your scripts with a single button.
      • You don’t want to clutter up your keyboard shortcuts.
      • You want to be able to label all of your hotkeys.

      Some reasons you may want a different macro keyboard:

      • It does not use physical buttons.  This is great for some situations, but not for others.  For example, if you are a gamer looking for a macro keyboard then you might prefer one with physical buttons so that you do not have to look away from the screen to be sure about which button you are pressing.  Since the overlay is just a piece of paper though, you could perhaps do something like use little pieces of sticky-tac to mark certain buttons, so you could know which button your finger is on simply by feeling it.
      • The price. At nearly $300 US, the Enterpad AHK keyboard is more expensive than many other macro keyboards.  That said, those keyboards also don’t provide all of the benefits that the Enterpad AHK keyboard does.

Even if you don’t want to use the Enterpad AHK keyboard yourself, you may want to get it for a friend or relative; especially a very non-technical one.  For example, you could hook it up to your grandma’s computer and write an AHK script that calls your computer via Skype, and then label a button (or 10 buttons to make it nice and big) on the Enterpad AHK keyboard so it is clear which button to press in order to call you.

One market where I think the Enterpad AHK keyboard could really be useful is the corporate world, where you have many people doing the same job, all following a set of instructions to do some processing.  For example, at a call center you might have tens or hundreds of employees using the same software and performing the same job.  One of their duties might be placing new orders of a product for a caller, and this may involve clicking through 10 different menus or screens in order to get to the correct place to enter the customer’s information.  This whole process could be automated down to a single button press on the Enterpad AHK keyboard.  You are probably thinking that the software should be redesigned to make the process of submitting orders less cumbersome, and you are right, but most companies don’t develop the software that they use, so they are at the mercy of the 3rd party software provider.  In these cases AHK can be a real time-saver: the company deploys an Enterpad AHK keyboard with a custom-labeled overlay to all of its staff, the IT department writes the AHK scripts, and all of the staff benefit without needing to know anything about AHK.

       

      Conclusion

      So should you go buy an Enterpad AHK Keyboard?  That is really up to you.  I have one, but find that I don’t use it very often because I tend to prefer to use the AHK Command Picker software so that my fingers never leave my keyboard.  Some of my co-workers have tried it out though and really love it, so if you prefer to have a separate physical device for launching your macros then the Enterpad AHK Keyboard might be perfect for you.

Categories: AutoHotkey

      Don’t Write WPF Converters; Write C# Inline In Your XAML Instead Using QuickConverter

      December 13th, 2013 1 comment

If you’ve used binding at all in WPF then you more than likely have also written a converter.  There are lots of tutorials on creating converters, so I’m not going to discuss that at length here.  Instead I want to spread the word about a little-known gem called QuickConverter.  QuickConverter is awesome because it allows you to write C# code directly in your XAML; this means no need for creating an explicit converter class.  And it’s available on NuGet so it’s a snap to get it into your project.

       

      A simple inverse boolean converter example

      As a simple example, let’s do an inverse boolean converter; something that is so basic I’m surprised that it is still not included out of the box with Visual Studio (and why packages like WPF Converters exist).  So basically if the property we are binding to is true, we want it to return false, and if it’s false, we want it to return true.

      The traditional approach

      This post shows the code for how you would traditionally accomplish this.  Basically you:

      1) add a new file to your project to hold your new converter class,

      2) have the class implement IValueConverter,

      3) add the class as a resource in your xaml file, and then finally

      4) use it in the Converter property of the xaml control.  Man, that is a lot of work to flip a bit!
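Steps 1 and 2 produce a small class like the following (a minimal sketch of the usual IValueConverter pattern; the linked post has the full version):

```csharp
using System;
using System.Globalization;
using System.Windows.Data;

// Steps 1 and 2: a new class file that implements IValueConverter.
public class InverseBooleanConverter : IValueConverter
{
	// Flip the bound value on the way to the control.
	public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
	{
		return !(bool)value;
	}

	// Flip it back on the way to the source (inverting is its own inverse).
	public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
	{
		return !(bool)value;
	}
}
```

Step 3 then declares an instance of it as a resource in the xaml file, e.g. `<local:InverseBooleanConverter x:Key="InverseBooleanConverter" />` (where `local` is whatever xmlns prefix you mapped to the converter’s namespace).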

      Just for reference, this is what step 4 might look like in the xaml:

<CheckBox IsEnabled="{Binding Path=ViewModel.SomeBooleanProperty, Converter={StaticResource InverseBooleanConverter}}" />
      

       

      Using QuickConverter

      This is what you would do using QuickConverter:

      <CheckBox IsEnabled="{qc:Binding '!$P', P={Binding Path=ViewModel.SomeBooleanProperty}}" />
      

That’s it! 1 step! How freaking cool is that!  Basically we bind our SomeBooleanProperty to the variable $P, and then write our C# expression against $P, all in xaml!  This also allows us to skip steps 1, 2, and 3 of the traditional approach, letting you get more done.

       

      More examples using QuickConverter

      The QuickConverter documentation page shows many more examples, such as a Visibility converter:

      Visibility="{qc:Binding '$P ? Visibility.Visible : Visibility.Collapsed', P={Binding ShowElement}}"
      

       

      Doing a null check:

IsEnabled="{qc:Binding '$P != null', P={Binding Path=SomeProperty}}"
      

       

      Checking a class instance’s property values:

IsEnabled="{qc:Binding '$P.IsValid || $P.ForceAlways', P={Binding Path=SomeClassInstance}}"
      

       

      Doing two-way binding:

      Height="{qc:Binding '$P * 10', ConvertBack='$value * 0.1', P={Binding TestWidth, Mode=TwoWay}}"
      

       

      Doing Multi-binding:

      Angle="{qc:MultiBinding 'Math.Atan2($P0, $P1) * 180 / 3.14159', P0={Binding ActualHeight, ElementName=rootElement}, P1={Binding ActualWidth, ElementName=rootElement}}"
      

       

      Declaring and using local variables in your converter expression:

      IsEnabled="{qc:Binding '(Loc = $P.Value, A = $P.Show) => $Loc != null &amp;&amp; $A', P={Binding Obj}}"
      

      * Note that the "&&" operator must be written as "&amp;&amp;" in XML.

       

      And there is even limited support for using lambdas, which allows LINQ to be used:

      ItemsSource="{qc:Binding '$P.Where(( (int)i ) => (bool)($i % 2 == 0))', P={Binding Source}}"
      

       

      Quick Converter Setup

      As mentioned above, Quick Converter is available via NuGet.  Once you have it installed in your project, there are 2 things you need to do:

      1. Register assemblies for the types that you plan to use in your quick converters

      For example, if you want to use the Visibility converter shown above, you need to register the System.Windows assembly, since that is where the System.Windows.Visibility enum being referenced lives.  You can register the System.Windows assembly with QuickConverter using this line:

      QuickConverter.EquationTokenizer.AddNamespace(typeof(System.Windows.Visibility));
      

      In order to avoid a XamlParseException at run-time, this line needs to be executed before the quick converter executes.  To make this easy, I just register all of the assemblies with QuickConverter in my application’s constructor.  That way I know they have been registered before any quick converter expressions are evaluated.

      So my App.xaml.cs file contains this:

      public partial class App : Application
      {
      	public App() : base()
      	{
      		// Setup Quick Converter.
      		QuickConverter.EquationTokenizer.AddNamespace(typeof(object));
      		QuickConverter.EquationTokenizer.AddNamespace(typeof(System.Windows.Visibility));
      	}
      }
      

Here I also registered the System namespace (using “typeof(object)”) in order to make the primitive types (like bool) available.

       

      2. Add the QuickConverter namespace to your Xaml files

As with all controls in xaml, before you can use a control you must create a reference to the namespace that the control is in.  So to be able to access and use QuickConverter in your xaml file, you must include its namespace, which can be done using:

      xmlns:qc="clr-namespace:QuickConverter;assembly=QuickConverter"
      

       

      So should I go delete all my existing converters?

      As crazy awesome as QuickConverter is, it’s not a complete replacement for converters.  Here are a few scenarios where you would likely want to stick with traditional converters:

      1. You need some very complex logic that is simply easier to write using a traditional converter.  For example, we have some converters that access our application cache and lock resources and do a lot of other logic, where it would be tough (impossible?) to write all of that logic inline with QuickConverter.  Also, by writing it using the traditional approach you get things like VS intellisense and compile-time error checking.

      2. If the converter logic that you are writing is very complex, you may want it enclosed in a converter class to make it more easily reusable; this allows for a single reusable object and avoids copy-pasting complex logic all over the place.  Perhaps the first time you write it you might do it as a QuickConverter, but if you find yourself copy-pasting that complex logic a lot, move it into a traditional converter.

      3. If you need to debug your converter, that can’t be done with QuickConverter (yet?).

       

      Summary

      So QuickConverter is super useful and can help speed up development time by allowing most, if not all, of your converters to be written inline.  In my experience 95% of converters are doing very simple things (null checks, to strings, adapting one value type to another, etc.), which are easy to implement inline.  This means fewer files and classes cluttering up your projects.  If you need to do complex logic or debug your converters though, then you may want to use traditional converters for those few cases.

      So, writing C# inline in your xaml; how cool is that!  I can’t believe Microsoft didn’t think of and implement this.  One of the hardest things to believe is that Johannes Moersch came up with this idea and implemented it while on a co-op work term in my office!  A CO-OP STUDENT WROTE QUICKCONVERTER!  Obviously Johannes is a very smart guy, and he’s no longer a co-op student; he’ll be finishing up his bachelor’s degree in the coming months.

      I hope you find QuickConverter as helpful as I have, and if you have any suggestions for improvements, be sure to leave Johannes a comment on the CodePlex page.

      Happy coding!

Categories: C#, WPF, XAML

      Get AutoHotkey To Interact With Admin Windows Without Running AHK Script As Admin

      November 21st, 2013 3 comments

A while back I posted about AutoHotkey not being able to interact with Windows 8 windows and other applications that were run as admin.  My solution was to run your AutoHotkey (AHK) script as admin as well, and I also showed how to have your AHK script start automatically with Windows, but not as an admin.  Afterwards I followed that up with a post about how to get your AHK script to run as admin on startup, so life was much better, but still not perfect.

       

      Problems with running your AHK script as admin

      1. You may have to deal with the annoying UAC prompt every time you launch your script.
      2. Any programs the script launches also receive administrative privileges.

      #1 is only a problem if you haven’t set your AHK script to run as admin on startup as I showed in my other blog post (i.e. you are still manually launching your script) or you haven’t changed your UAC settings to never prompt you with notifications (which some companies restrict) (see screenshot to the right).

      #2 was a problem for me. I use AHK Command Picker every day. A lot. I’m a developer and in order for Visual Studio to interact with IIS it requires admin privileges, which meant that if I wanted to be able to use AHK Command Picker in Visual Studio, I had to run it as admin as well.  The problem for me was that I use AHK Command Picker to launch almost all of my applications, which meant that most of my apps were now also running as an administrator.  For the most part this was fine, but there were a couple programs that gave me problems running as admin. E.g. With PowerShell ISE when I double clicked on a PowerShell file to edit it, instead of opening in the current (admin) ISE instance, it would open a new ISE instance.

        There is a solution

        Today I stumbled across this post on the AHK community forums.  Lexikos has provided an AHK script that will digitally sign the AutoHotkey executable, allowing it to interact with applications running as admin, even when your AHK script isn’t.

        Running his script is pretty straight forward:

        1. Download and unzip his EnableUIAccess.zip file.
        2. Double-click the EnableUIAccess.ahk script to run it, and it will automatically prompt you.
        3. Read the disclaimer and click OK.
        4. On the Select Source File prompt choose the C:\Program Files\AutoHotkey\AutoHotkey.exe file.  This was already selected by default for me. (Might be Program Files (x86) if you have 32-bit AHK installed on 64-bit Windows)
        5. On the Select Destination File prompt choose the same C:\Program Files\AutoHotkey\AutoHotkey.exe file again.  Again, this was already selected by default for me.
        6. Click Yes to replace the existing file.
        7. Click Yes when prompted to Run With UI Access.

        That’s it.  (Re)Start your AHK scripts and they should now be able to interact with Windows 8 windows and applications running as admin :)

        This is a great solution if you want your AHK script to interact with admin windows, but don’t want to run your script as an admin.

         

        Did you know

        If you do want to launch an application as admin, but don’t want to run your AHK script as admin, you can use the RunAs command.

         

        I hope you found this article useful.  Feel free to leave a comment.

        Happy coding!

        Provide A Batch File To Run Your PowerShell Script From; Your Users Will Love You For It

        November 16th, 2013 42 comments

A while ago in one of my older posts I included a little gem that I think deserves its own dedicated post: calling PowerShell scripts from a batch file.

        Why call my PowerShell script from a batch file?

        When I am writing a script for other people to use (in my organization, or for the general public) or even for myself sometimes, I will often include a simple batch file (i.e. *.bat or *.cmd file) that just simply calls my PowerShell script and then exits.  I do this because even though PowerShell is awesome, not everybody knows what it is or how to use it; non-technical folks obviously, but even many of the technical folks in our organization have never used PowerShell.

Let’s list the problems with sending somebody the PowerShell script alone.  The first two points below are hurdles that every user stumbles over the first time they encounter PowerShell (they are there for security purposes):

        1. When you double-click a PowerShell script (*.ps1 file) the default action is often to open it up in an editor, not to run it (you can change this for your PC).
2. When you do figure out you need to right-click the .ps1 file and choose Open With –> Windows PowerShell to run the script, it will fail with a warning saying that the execution policy is currently configured to not allow scripts to be run.
        3. My script may require admin privileges in order to run correctly, and it can be tricky to run a PowerShell script as admin without going into a PowerShell console and running the script from there, which a lot of people won’t know how to do.
        4. A potential problem that could affect PowerShell Pros is that it’s possible for them to have variables or other settings set in their PowerShell profile that could cause my script to not perform correctly; this is pretty unlikely, but still a possibility.

So imagine you’ve written a PowerShell script that you want your grandma to run (or an HR employee, or an executive, or your teenage daughter, etc.). Do you think they’re going to be able to do it?  Maybe, maybe not.

        You should be kind to your users and provide a batch file to call your PowerShell script.

The beauty of batch file scripts is that by default the script is run when it is double-clicked (solving problem #1), and all of the other problems can be overcome by using a few arguments in our batch file.

        Ok, I see your point. So how do I call my PowerShell script from a batch file?

        First, the code I provide assumes that the batch file and PowerShell script are in the same directory.  So if you have a PowerShell script called “MyPowerShellScript.ps1” and a batch file called “RunMyPowerShellScript.cmd”, this is what the batch file would contain:

        @ECHO OFF
        SET ThisScriptsDirectory=%~dp0
        SET PowerShellScriptPath=%ThisScriptsDirectory%MyPowerShellScript.ps1
        PowerShell -NoProfile -ExecutionPolicy Bypass -Command "& '%PowerShellScriptPath%'";
        

        Line 1 just prevents the contents of the batch file from being printed to the command prompt (so it’s optional).  Line 2 gets the directory that the batch file is in.  Line 3 just appends the PowerShell script filename to the script directory to get the full path to the PowerShell script file, so this is the only line you would need to modify; replace MyPowerShellScript.ps1 with your PowerShell script’s filename.  The 4th line is the one that actually calls the PowerShell script and contains the magic.

        The –NoProfile switch solves problem #4 above, and the –ExecutionPolicy Bypass argument solves problem #2.  But that still leaves problem #3 above, right?

        Call your PowerShell script from a batch file with Administrative permissions (i.e. Run As Admin)

        If your PowerShell script needs to be run as an admin for whatever reason, the 4th line of the batch file will need to change a bit:

        @ECHO OFF
        SET ThisScriptsDirectory=%~dp0
        SET PowerShellScriptPath=%ThisScriptsDirectory%MyPowerShellScript.ps1
        PowerShell -NoProfile -ExecutionPolicy Bypass -Command "& {Start-Process PowerShell -ArgumentList '-NoProfile -ExecutionPolicy Bypass -File ""%PowerShellScriptPath%""' -Verb RunAs}";
        

        We can’t call the PowerShell script as admin from the command prompt, but we can from PowerShell; so we essentially start a new PowerShell session, and then have that session call the PowerShell script using the –Verb RunAs argument to specify that the script should be run as an administrator.
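If you are already sitting in a PowerShell console, the same elevation trick works directly without the batch file wrapper (the script path below is just an illustrative placeholder):

```powershell
# Launch a new, elevated PowerShell process to run the script (this triggers a UAC prompt).
Start-Process PowerShell -Verb RunAs -ArgumentList '-NoProfile -ExecutionPolicy Bypass -File "C:\SomeFolder\MyPowerShellScript.ps1"'
```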

        And voila, that’s it.  Now all anybody has to do to run your PowerShell script is double-click the batch file; something that even your grandma can do (well, hopefully).  So will your users really love you for this; well, no.  Instead they just won’t be cursing you for sending them a script that they can’t figure out how to run.  It’s one of those things that nobody notices until it doesn’t work.

        So take the extra 10 seconds to create a batch file and copy/paste the above text into it; it’ll save you time in the long run when you don’t have to repeat to all your users the specific instructions they need to follow to run your PowerShell script.

        I typically use this trick for myself too when my script requires admin rights, as it just makes running the script faster and easier.

        Bonus

        One more tidbit that I often include at the end of my PowerShell scripts is the following code:

        # If running in the console, wait for input before closing.
        if ($Host.Name -eq "ConsoleHost")
        { 
        	Write-Host "Press any key to continue..."
        	$Host.UI.RawUI.ReadKey("NoEcho,IncludeKeyUp") > $null
        }
        

        This will prompt the user for keyboard input before closing the PowerShell console window.  This is useful because it allows users to read any errors that your PowerShell script may have thrown before the window closes, or even just so they can see the “Everything completed successfully” message that your script spits out so they know that it ran correctly.  Related side note: you can change your PC to always leave the PowerShell console window open after running a script, if that is your preference.

        I hope you find this useful.  Feel free to leave comments.

        Happy coding!

        Update

        Several people have left comments asking how to pass parameters into the PowerShell script from the batch file.

        Here is how to pass in ordered parameters:

        PowerShell -NoProfile -ExecutionPolicy Bypass -Command "& '%PowerShellScriptPath%' 'First Param Value' 'Second Param Value'";
        

        And here is how to pass in named parameters:

        PowerShell -NoProfile -ExecutionPolicy Bypass -Command "& '%PowerShellScriptPath%' -Param1Name 'Param 1 Value' -Param2Name 'Param 2 Value'"
        

        And if you are running the admin version of the script, here is how to pass in ordered parameters:

        PowerShell -NoProfile -ExecutionPolicy Bypass -Command "& {Start-Process PowerShell -ArgumentList '-NoProfile -ExecutionPolicy Bypass -File """"%PowerShellScriptPath%"""" """"First Param Value"""" """"Second Param Value"""" ' -Verb RunAs}"
        

And here is how to pass in named parameters:

        PowerShell -NoProfile -ExecutionPolicy Bypass -Command "& {Start-Process PowerShell -ArgumentList '-NoProfile -ExecutionPolicy Bypass -File """"%PowerShellScriptPath%"""" -Param1Name """"Param 1 Value"""" -Param2Name """"Param 2 value"""" ' -Verb RunAs}";
        
        And yes, the PowerShell script name and parameters need to be wrapped in 4 double quotes in order to properly handle paths/values with spaces.

        Problems Caused By Installing Windows 8.1 Update

        November 8th, 2013 No comments

A few co-workers and I have updated from Windows 8 to Windows 8.1 and have run into some weird problems.  After a bit of Googling I have found that we are not alone.  This is just a quick list of some things that the Windows 8.1 Update seems to have broken.  I’ll update this post as I find more issues.

         

        IE 11 breaks some websites

        • I found that some of the links in the website our office uploads our Escrow deposits to no longer worked in IE 11 (which 8.1 installs).  Turning on the developer tools showed that it was throwing a Javascript error about an undefined function.  Everything works fine in IE 10 though and no undefined errors are thrown.
        • I have also noticed that after doing a search on Google and clicking one of the links, in order to get back to the Google results page you have to click the Back button twice; the first Back-click just takes you to a blank page (when you click the Google link it directs you to an empty page, which then forwards you to the correct page).
        • Others have complained that they are experiencing problems with GMail and Silverlight after the 8.1 update.

So it may just be that IE 11 updated its standards to be more compliant and now many websites don’t meet the new requirements (I’m not sure); but either way, you may find that some of your favorite websites no longer work properly in IE 11, and you’ll have to wait for IE 11 or the website to be updated.

         

        VPN stopped working

        We use the SonicWall VPN client at my office, and I found that it no longer worked after updating to Windows 8.1.  The solution was a simple uninstall, reinstall, but still, it’s just one more issue to add to the list.

         

        More?

        Have you noticed other things broken after doing the Windows 8.1 update? Share them in the comments below!

        In my personal opinion, I would wait a while longer before updating to Windows 8.1; give Microsoft more time to fix some of these issues.  Many of the new features in Windows 8.1 aren’t even noticeable yet, as many apps don’t yet take advantage of them.  Also, while MS did put a Start button back in, it’s not nearly as powerful as the Windows 7 Start button, so if that’s your reason for upgrading to 8.1 just go get Classic Shell instead.

        Hopefully Microsoft will be releasing hotfixes to get these issues addressed sooner than later.

        Always Explicitly Set Your Parameter Set Variables For PowerShell v2.0 Compatibility

        October 28th, 2013 2 comments

        What are parameter sets anyways?

        Parameter sets were introduced in PowerShell v2.0 and are useful for enforcing mutually exclusive parameters on a cmdlet.  Ed Wilson has a good little article explaining what parameter sets are and how to use them.  Essentially they allow us to write a single cmdlet that might otherwise have to be written as 2 or more cmdlets that took different parameters.  For example, instead of having to create Process-InfoFromUser, Process-InfoFromFile, and Process-InfoFromUrl cmdlets, we could create a single Process-Info cmdlet that has 3 mutually exclusive parameters, [switch]$PromptUser, [string]$FilePath, and [string]$Url.  If the cmdlet is called with more than one of these parameters, it throws an error.

You could just be lazy and not use parameter sets, allowing all 3 parameters to be specified and then just using the first one, but then the user won’t know which of the 3 they provided will be used; they might assume that all 3 will be used.  This would also force the user to read the documentation (assuming you have provided it).  Using parameter sets makes it clear to the user which parameters can be used together.  Also, most PowerShell editors process parameter sets so that intellisense properly shows which parameters can be used with each other.
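To make the Process-Info example above concrete, here is a minimal sketch of what its parameter declarations might look like (the parameter set names and body are just illustrative):

```powershell
function Process-Info
{
	[CmdletBinding(DefaultParameterSetName="FromUser")]
	param
	(
		[Parameter(ParameterSetName="FromUser")]
		[switch] $PromptUser,

		[Parameter(Mandatory=$true, ParameterSetName="FromFile")]
		[string] $FilePath,

		[Parameter(Mandatory=$true, ParameterSetName="FromUrl")]
		[string] $Url
	)

	# PowerShell throws an error if parameters from different sets are supplied together.
	Write-Host "Parameter set in use: $($PsCmdlet.ParameterSetName)"
}
```

Calling `Process-Info -FilePath "info.txt" -Url "http://example.com"` would fail with a parameter set resolution error, which is exactly the mutual exclusion we want.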

         

        Ok, parameter sets sound awesome, I want to use them! What’s the problem?

The problem I ran into was in my Invoke-MsBuild module that I put on CodePlex: I had a [switch]$PassThru parameter that was part of a parameter set.  Within the module I had:

        if ($PassThru) { do something... }
        else { do something else... }
        

        This worked great for me during my testing since I was using PowerShell v3.0.  The problem arose once I released my code to the public; I received an issue from a user who was getting the following error message:

        Invoke-MsBuild : Unexpect error occured while building "<path>\my.csproj": The variable ‘$PassThru’ cannot be retrieved because it has not been set.

At build.ps1:84 char:25

+ $result = Invoke-MsBuild <<<< -Path "<path>\my.csproj" -BuildLogDirectoryPath "$scriptPath" -Params "/property:Configuration=Release"

        After some investigation I determined the problem was that they were using PowerShell v2.0, and that my script uses Strict Mode.  I use Set-StrictMode -Version Latest in all of my scripts to help me catch any syntax related errors and to make sure my scripts will in fact do what I intend them to do.  While you could simply not use strict mode and you wouldn’t have a problem, I don’t recommend that; if others are going to call your cmdlet (or you call it from a different script), there’s a good chance they may have Strict Mode turned on and your cmdlet may break for them.

         

        So should I not use parameter sets with PowerShell v2.0? Is there a fix?

You absolutely SHOULD use parameter sets whenever you can and it makes sense, and yes there is a fix.  If you require your script to run on PowerShell v2.0, there is just one extra step you need to take, which is to explicitly set the values for any parameters that use a parameter set and may not have been defined.  Luckily we can use the Test-Path cmdlet to test whether a variable has been defined in a specific scope or not.

        Here is an example of how to detect if a variable is not defined in the Private scope and set its default value.  We specify the scope in case a variable with the same name exists outside of the cmdlet in the global scope or an inherited scope.

        # Default the ParameterSet variables that may not have been set depending on which parameter set is being used. This is required for PowerShell v2.0 compatibility.
        if (!(Test-Path Variable:Private:SomeStringParameter)) { $SomeStringParameter = $null }
        if (!(Test-Path Variable:Private:SomeIntegerParameter)) { $SomeIntegerParameter = 0 }
        if (!(Test-Path Variable:Private:SomeSwitchParameter)) { $SomeSwitchParameter = $false }
        

        If you prefer, instead of setting a default value for the parameter you could just check if it is defined first when using it in your script.  I like this approach however, because I can put this code right after my cmdlet parameters so I’m modifying all of my parameter set properties in one place, and I don’t have to remember to check if the variable is defined later when writing the body of my cmdlet; otherwise I’m likely to forget to do the “is defined” check, and will likely miss the problem since I do most of my testing in PowerShell v3.0.

        Another approach rather than checking if a parameter is defined or not, is to check which Parameter Set Name is being used; this will implicitly let you know which parameters are defined.

        switch ($PsCmdlet.ParameterSetName)
        {
        	"SomeParameterSetName"  { Write-Host "You supplied the Some variable."; break}
        	"OtherParameterSetName"  { Write-Host "You supplied the Other variable."; break}
        } 
        

        I still prefer to default all of my parameters, but you may prefer this method.

        I hope you find this useful.  Check out my other article for more PowerShell v2.0 vs. v3.0 differences.

        Happy coding!

        PowerShell Code To Ensure Client Is Using At Least The Minimum Required PowerShell Version

        October 25th, 2013 2 comments

Here’s some simple code that will throw an exception if the client running your script is not using at least the required version of PowerShell; just change the $REQUIRED_POWERSHELL_VERSION variable value to the minimum version that the script requires.

        # Throw an exception if client is not using the minimum required PowerShell version.
        $REQUIRED_POWERSHELL_VERSION = 3.0	# The minimum Major.Minor PowerShell version that is required for the script to run.
        $POWERSHELL_VERSION = $PSVersionTable.PSVersion.Major + ($PSVersionTable.PSVersion.Minor / 10)
        if ($REQUIRED_POWERSHELL_VERSION -gt $POWERSHELL_VERSION)
        { throw "PowerShell version $REQUIRED_POWERSHELL_VERSION is required for this script; You are only running version $POWERSHELL_VERSION. Please update PowerShell to at least version $REQUIRED_POWERSHELL_VERSION." }
        

        – UPDATE {

        Thanks to Robin M for pointing out that PowerShell has the built-in #Requires statement for this purpose, so you do not need to use the code above. Instead, simply place the following code anywhere in your script to enforce the desired PowerShell version required to run the script:

        #Requires -Version 3.0
        

        If the user does not have the minimum required version of PowerShell installed, they will see an error message like this:

The script ‘foo.ps1’ cannot be run because it contained a "#requires" statement at line 1 for Windows PowerShell version 3.0 which is incompatible with the installed Windows PowerShell version of 2.0.

        } UPDATE –

So if your script requires, for example, PowerShell v3.0, just put this at the start of your script to have it error out right away with a meaningful error message; otherwise your script may throw other errors that mask the real issue, potentially leading the user to spend many hours troubleshooting your script, or to give up on it altogether.

I’ve been bitten by this in the past a few times now, where people report issues on my CodePlex scripts and the error message seems ambiguous.  So now any scripts that I release to the general public will have this check in them to give users a proper error message.  I have also created a page on PowerShell v2 vs. v3 differences that I’m going to use to keep track of the differences that I encounter, so that I can have confidence in the minimum PowerShell version that I set on my scripts.  I also plan on creating a v3 vs. v4 page once I start using PS v4 features more.  Of course, the best test is to actually run your script in the minimum PowerShell version that you set, which I mention how to do on my PS v2 vs. v3 page.
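For quick reference, you can start an older PowerShell engine from the command prompt or Run box using the -Version switch (the v2.0 engine must be installed on the machine, and the script path below is just a placeholder):

```powershell
PowerShell.exe -Version 2.0 -NoProfile -File "C:\SomeFolder\MyScript.ps1"
```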

        Happy coding!

        PowerShell Script To Get Path Lengths

        October 24th, 2013 4 comments

A while ago I created a Path Length Checker tool in C# that has a “nice” GUI, and put it up on CodePlex.  One of the users reported that he was trying to use it to scan his entire C: drive, but that it was crashing.  Turns out that the System.IO.Directory.GetFileSystemEntries() call was throwing a permissions exception when trying to access the “C:\Documents and Settings” directory; it throws this exception even when running the app as admin.  In the meantime, while I work on a workaround for the app, I wrote up a quick PowerShell script that the user could use to get all of the path lengths.  That is what I present to you here.

$pathToScan = "C:\Some Folder"	# The path to scan and get the path lengths for (sub-directories will be scanned as well).
        $outputFilePath = "C:\temp\PathLengths.txt"	# This must be a file in a directory that exists and does not require admin rights to write to.
        $writeToConsoleAsWell = $true	# Writing to the console will be much slower.
        
        # Open a new file stream (nice and fast) and write all the paths and their lengths to it.
        $outputFileDirectory = Split-Path $outputFilePath -Parent
        if (!(Test-Path $outputFileDirectory)) { New-Item $outputFileDirectory -ItemType Directory }
        $stream = New-Object System.IO.StreamWriter($outputFilePath, $false)
        Get-ChildItem -Path $pathToScan -Recurse -Force | Select-Object -Property FullName, @{Name="FullNameLength";Expression={($_.FullName.Length)}} | Sort-Object -Property FullNameLength -Descending | ForEach-Object {
            $filePath = $_.FullName
            $length = $_.FullNameLength
            $string = "$length : $filePath"
            
            # Write to the Console.
            if ($writeToConsoleAsWell) { Write-Host $string }
         
    # Write to the file.
            $stream.WriteLine($string)
        }
        $stream.Close()
        

        Happy coding!

        PowerShell Functions To Convert, Remove, and Delete IIS Web Applications

        October 23rd, 2013 No comments

        I recently refactored some of our PowerShell scripts that we use to publish and remove IIS 7 web applications, creating some general functions that can be used anywhere.  In this post I show these functions along with how I structure our scripts to make creating, removing, and deleting web applications for our various products fully automated and tidy.  Note that these scripts require at least PowerShell v3.0 and use the IIS Admin Cmdlets that I believe require IIS v7.0; the IIS Admin Cmdlet calls can easily be replaced though by calls to appcmd.exe, msdeploy, or any other tool for working with IIS that you want.

        I’ll blast you with the first file’s code and explain it below (ApplicationServiceUtilities.ps1).

        # Turn on Strict Mode to help catch syntax-related errors.
        # 	This must come after a script's/function's param section.
        # 	Forces a function to be the first non-comment code to appear in a PowerShell Module.
        Set-StrictMode -Version Latest
        
        # Define the code block that will add the ApplicationServiceInformation class to the PowerShell session.
        # NOTE: If this class is modified you will need to restart your PowerShell session to see the changes.
        $AddApplicationServiceInformationTypeScriptBlock = {
            # Wrap in a try-catch in case we try to add this type twice.
            try {
            # Create a class to hold an IIS Application Service's Information.
            Add-Type -TypeDefinition "
                using System;
            
                public class ApplicationServiceInformation
                {
                    // The name of the Website in IIS.
                    public string Website { get; set;}
                
                    // The path to the Application, relative to the Website root.
                    public string ApplicationPath { get; set; }
        
                    // The Application Pool that the application is running in.
                    public string ApplicationPool { get; set; }
        
                    // Whether this application should be published or not.
                    public bool ConvertToApplication { get; set; }
        
                    // Implicit Constructor.
                    public ApplicationServiceInformation() { this.ConvertToApplication = true; }
        
                    // Explicit constructor.
                    public ApplicationServiceInformation(string website, string applicationPath, string applicationPool, bool convertToApplication = true)
                    {
                        this.Website = website;
                        this.ApplicationPath = applicationPath;
                        this.ApplicationPool = applicationPool;
                        this.ConvertToApplication = convertToApplication;
                    }
                }
            "
            } catch {}
        }
        # Add the ApplicationServiceInformation class to this PowerShell session.
        & $AddApplicationServiceInformationTypeScriptBlock
        
        <#
            .SYNOPSIS
            Converts the given files to application services on the given Server.
        
            .PARAMETER Server
            The Server Host Name to connect to and convert the applications on.
        
            .PARAMETER ApplicationServicesInfo
            The [ApplicationServiceInformation[]] containing the files to convert to application services.
        #>
        function ConvertTo-ApplicationServices
        {
            [CmdletBinding()]
            param
            (
                [string] $Server,
                [ApplicationServiceInformation[]] $ApplicationServicesInfo
            )
        
            $block = {
        	    param([PSCustomObject[]] $ApplicationServicesInfo)
                $VerbosePreference = $Using:VerbosePreference
        	    Write-Verbose "Converting To Application Services..."
        
                # Import the WebAdministration module to make sure we have access to the required cmdlets and the IIS: drive.
                Import-Module WebAdministration 4> $null	# Don't write the verbose output.
        	
        	    # Create all of the Web Applications, making sure to first try and remove them in case they already exist (in order to avoid a PS error).
        	    foreach ($appInfo in [PSCustomObject[]]$ApplicationServicesInfo)
                {
                    $website = $appInfo.Website
                    $applicationPath = $appInfo.ApplicationPath
                    $applicationPool = $appInfo.ApplicationPool
        		    $fullPath = Join-Path $website $applicationPath
        
                    # If this application should not be converted, continue onto the next one in the list.
                    if (!$appInfo.ConvertToApplication) { Write-Verbose "Skipping publish of '$fullPath'"; continue }
        		
        		    Write-Verbose "Checking if we need to remove '$fullPath' before converting it..."
        		    if (Get-WebApplication -Site "$website" -Name "$applicationPath")
        		    {
        			    Write-Verbose "Removing '$fullPath'..."
        			    Remove-WebApplication -Site "$website" -Name "$applicationPath"
        		    }
        
                    Write-Verbose "Converting '$fullPath' to an application with Application Pool '$applicationPool'..."
                    ConvertTo-WebApplication "IIS:\Sites\$fullPath" -ApplicationPool "$applicationPool"
                }
            }
        
            # Connect to the host Server and run the commands directly on that computer.
            # Before we run our script block we first have to add the ApplicationServiceInformation class type into the PowerShell session.
            $session = New-PSSession -ComputerName $Server
            Invoke-Command -Session $session -ScriptBlock $AddApplicationServiceInformationTypeScriptBlock
            Invoke-Command -Session $session -ScriptBlock $block -ArgumentList (,$ApplicationServicesInfo)
            Remove-PSSession -Session $session
        }
        
        <#
            .SYNOPSIS
            Removes the given application services from the given Server.
        
            .PARAMETER Server
            The Server Host Name to connect to and remove the applications from.
        
            .PARAMETER ApplicationServicesInfo
            The [ApplicationServiceInformation[]] containing the applications to remove.
        #>
        function Remove-ApplicationServices
        {
            [CmdletBinding()]
            param
            (
                [string] $Server,
                [ApplicationServiceInformation[]] $ApplicationServicesInfo
            )
        
            $block = {
        	    param([ApplicationServiceInformation[]] $ApplicationServicesInfo)
                $VerbosePreference = $Using:VerbosePreference
        	    Write-Verbose "Removing Application Services..."
        
                # Import the WebAdministration module to make sure we have access to the required cmdlets and the IIS: drive.
                Import-Module WebAdministration 4> $null	# Don't write the verbose output.
        
        	    # Remove all of the Web Applications, making sure they exist first (in order to avoid a PS error).
        	    foreach ($appInfo in [ApplicationServiceInformation[]]$ApplicationServicesInfo)
                {
                    $website = $appInfo.Website
                    $applicationPath = $appInfo.ApplicationPath
        		    $fullPath = Join-Path $website $applicationPath
        		
        		    Write-Verbose "Checking if we need to remove '$fullPath'..."
        		    if (Get-WebApplication -Site "$website" -Name "$applicationPath")
        		    {
        			    Write-Verbose "Removing '$fullPath'..."
        			    Remove-WebApplication -Site "$website" -Name "$applicationPath"
        		    }
                }
            }
        
            # Connect to the host Server and run the commands directly on that computer.
            # Before we run our script block we first have to add the ApplicationServiceInformation class type into the PowerShell session.
            $session = New-PSSession -ComputerName $Server
            Invoke-Command -Session $session -ScriptBlock $AddApplicationServiceInformationTypeScriptBlock
            Invoke-Command -Session $session -ScriptBlock $block -ArgumentList (,$ApplicationServicesInfo)
            Remove-PSSession -Session $session
        }
        
        <#
            .SYNOPSIS
            Removes the given application services from the given Server and deletes all associated files.
        
            .PARAMETER Server
            The Server Host Name to connect to and delete the applications from.
        
            .PARAMETER ApplicationServicesInfo
            The [ApplicationServiceInformation[]] containing the applications to delete.
        
            .PARAMETER OnlyDeleteIfNotConvertedToApplication
            If this switch is supplied and the application services are still running (i.e. have not been removed yet), the services will not be removed and the files will not be deleted.
        
            .PARAMETER DeleteEmptyParentDirectories
            If this switch is supplied, after the application services folder has been removed, it will recursively check parent folders and remove them if they are empty, until the Website root is reached.
        #>
        function Delete-ApplicationServices
        {
            [CmdletBinding()]
            param
            (
                [string] $Server,
                [ApplicationServiceInformation[]] $ApplicationServicesInfo,
                [switch] $OnlyDeleteIfNotConvertedToApplication,
                [switch] $DeleteEmptyParentDirectories
            )
            
            $block = {
        	    param([ApplicationServiceInformation[]] $ApplicationServicesInfo)
                $VerbosePreference = $Using:VerbosePreference
        	    Write-Verbose "Deleting Application Services..."
        
                # Import the WebAdministration module to make sure we have access to the required cmdlets and the IIS: drive.
                Import-Module WebAdministration 4> $null	# Don't write the verbose output.
        
        	    # Remove all of the Web Applications and delete their files from disk.
        	    foreach ($appInfo in [ApplicationServiceInformation[]]$ApplicationServicesInfo)
                {
                    $website = $appInfo.Website
                    $applicationPath = $appInfo.ApplicationPath
        		    $fullPath = Join-Path $website $applicationPath
                    $iisSitesDirectory = "IIS:\Sites\"
        		
        		    Write-Verbose "Checking if we need to remove '$fullPath'..."
        		    if (Get-WebApplication -Site "$website" -Name "$applicationPath")
        		    {
                        # If we should only delete the files when they're not running as a Web Application, skip this one since it still is.
                        if ($Using:OnlyDeleteIfNotConvertedToApplication) { Write-Verbose "'$fullPath' is still running as a Web Application, so its files will not be deleted."; continue }
        
        			    Write-Verbose "Removing '$fullPath'..."
        			    Remove-WebApplication -Site "$website" -Name "$applicationPath"
        		    }
                    
                    Write-Verbose "Deleting the directory '$fullPath'..."
                    Remove-Item -Path "$iisSitesDirectory$fullPath" -Recurse -Force
        
                    # If we should delete empty parent directories of this application.
                    if ($Using:DeleteEmptyParentDirectories)
                    {
                        Write-Verbose "Deleting empty parent directories..."
                        $parent = Split-Path -Path $fullPath -Parent
        
                        # Only delete the parent directory if it is not the Website directory, and it is empty.
                        while (($parent -ne $website) -and (Test-Path -Path "$iisSitesDirectory$parent") -and ((Get-ChildItem -Path "$iisSitesDirectory$parent") -eq $null))
                        {
                            $path = $parent
                            Write-Verbose "Deleting empty parent directory '$path'..."
                            Remove-Item -Path "$iisSitesDirectory$path" -Force
                            $parent = Split-Path -Path $path -Parent
                        }
                    }
                }
            }
        
            # Connect to the host Server and run the commands directly on that computer.
            # Before we run our script block we first have to add the ApplicationServiceInformation class type into the PowerShell session.
            $session = New-PSSession -ComputerName $Server
            Invoke-Command -Session $session -ScriptBlock $AddApplicationServiceInformationTypeScriptBlock
            Invoke-Command -Session $session -ScriptBlock $block -ArgumentList (,$ApplicationServicesInfo)
            Remove-PSSession -Session $session
        }
        

        This first file contains all of the meat.  At the top it declares (in C#) the ApplicationServiceInformation class that is used to hold the information about a web application; mainly the Website that the application should go in, the ApplicationPath (where within the website the application should be created), and the Application Pool that the application should run under.  Notice that the $AddApplicationServiceInformationTypeScriptBlock script block is executed right below where it is declared, in order to actually import the ApplicationServiceInformation class type into the current PowerShell session.

        There is one extra property on this class that I found I needed, but you may be able to ignore; that is the ConvertToApplication boolean.  This is inspected by our ConvertTo-ApplicationServices function to tell it whether the application should actually be published or not.  I required this field because we have some web services that should only be “converted to applications” in specific environments (or only on a developers local machine), but whose files we still want to delete when using the Delete-ApplicationServices function.  While I could just create 2 separate lists of ApplicationServiceInformation objects depending on which function I was calling (see below), I decided to instead just include this one extra property.

        Below the class declaration are our functions to perform the actual work:

        • ConvertTo-ApplicationServices: Converts the files to an application using the ConvertTo-WebApplication cmdlet.
        • Remove-ApplicationServices: Converts the application back to regular files using the Remove-WebApplication cmdlet.
        • Delete-ApplicationServices: First removes any applications, and then deletes the files from disk.
          The Delete-ApplicationServices function includes a couple of additional switches.  The $OnlyDeleteIfNotConvertedToApplication switch can be used as a bit of a safety net to ensure that you only delete files for application services that are not currently running as a web application (i.e. the web application has already been removed).  If this switch is omitted, the web application will be removed and the files deleted.  The $DeleteEmptyParentDirectories switch may be used to remove parent directories once the application files have been deleted. This is useful for us because we version our services, so they are all placed in a directory corresponding to a version number. e.g. \Website\[VersionNumber]\App1 and \Website\[VersionNumber]\App2. This switch allows the [VersionNumber] directory to be deleted automatically once the App1 and App2 directories have been deleted.
          Note that I don’t have a function to copy files to the server (i.e. publish them); I assume that the files have already been copied to the server, as we currently have this as a separate step in our deployment process.
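To show how the three functions fit together, here is a minimal usage sketch of my own (not from the original scripts; the server name, website, and application names are placeholders, and ApplicationServiceUtilities.ps1 is assumed to be in the current directory):

```powershell
# Hypothetical usage of the three utility functions; all names below are
# placeholder assumptions, not values from the original scripts.
. .\ApplicationServiceUtilities.ps1

[ApplicationServiceInformation[]] $services = @(
    (New-Object ApplicationServiceInformation -Property @{
        Website = "My Website"; ApplicationPath = "1.0.0/Example.Services"; ApplicationPool = "Example .NET4" }))

# Create the web applications on the server...
ConvertTo-ApplicationServices -Server "web01.example.local" -ApplicationServicesInfo $services -Verbose

# ...later, convert them back to plain folders...
Remove-ApplicationServices -Server "web01.example.local" -ApplicationServicesInfo $services -Verbose

# ...or remove them AND delete their files, cleaning up empty version folders too.
Delete-ApplicationServices -Server "web01.example.local" -ApplicationServicesInfo $services -DeleteEmptyParentDirectories -Verbose
```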

        My 2nd file (ApplicationServiceLibrary.ps1) is optional and is really just a collection of functions used to return the ApplicationServiceInformation instances that I require as an array, depending on which projects I want to convert/remove/delete.

        # Get the directory that this script is in.
        $THIS_SCRIPTS_DIRECTORY = Split-Path $script:MyInvocation.MyCommand.Path
        
        # Include the required ApplicationServiceInformation type.
        . (Join-Path $THIS_SCRIPTS_DIRECTORY ApplicationServiceUtilities.ps1)
        
        #=================================
        # Replace all of the functions below with your own.
        # These are provided as examples.
        #=================================
        
        function Get-AllApplicationServiceInformation([string] $Release)
        {
            [ApplicationServiceInformation[]] $appServiceInfo = @()
        
            $appServiceInfo += Get-RqApplicationServiceInformation -Release $Release
            $appServiceInfo += Get-PublicApiApplicationServiceInformation -Release $Release
            $appServiceInfo += Get-IntraApplicationServiceInformation -Release $Release
        
            return $appServiceInfo    
        }
        
        function Get-RqApplicationServiceInformation([string] $Release)
        {
            return [ApplicationServiceInformation[]] @(
        	    (New-Object ApplicationServiceInformation -Property @{Website = "Application Services"; ApplicationPath = "$Release/Core.Reporting.Services"; ApplicationPool = "RQ Services .NET4"}),
        	    (New-Object ApplicationServiceInformation -Property @{Website = "Application Services"; ApplicationPath = "$Release/Core.Services"; ApplicationPool = "RQ Core Services .NET4"}),
        	    (New-Object ApplicationServiceInformation -Property @{Website = "Application Services"; ApplicationPath = "$Release/DeskIntegration.Services"; ApplicationPool = "RQ Services .NET4"}),
        	    (New-Object ApplicationServiceInformation -Property @{Website = "Application Services"; ApplicationPath = "$Release/Retail.Integration.Services"; ApplicationPool = "RQ Services .NET4"}),
        
                # Simulator Services that are only for Dev; we don't want to convert them to an application, but do want to remove their files that got copied to the web server.
                (New-Object ApplicationServiceInformation -Property @{Website = "Application Services"; ApplicationPath = "$Release/Simulator.Services"; ApplicationPool = "Simulator Services .NET4"; ConvertToApplication = $false}))
        }
        
        function Get-PublicApiApplicationServiceInformation([string] $Release)
        {
            return [ApplicationServiceInformation[]] @(
                (New-Object ApplicationServiceInformation -Property @{Website = "API Services"; ApplicationPath = "$Release/PublicAPI.Host"; ApplicationPool = "API Services .NET4"}),
        	    (New-Object ApplicationServiceInformation -Property @{Website = "API Services"; ApplicationPath = "$Release/PublicAPI.Documentation"; ApplicationPool = "API Services .NET4"}))
        }
        
        function Get-IntraApplicationServiceInformation([string] $Release)
        {
            return [ApplicationServiceInformation[]] @(
                (New-Object ApplicationServiceInformation -Property @{Website = "Intra Services"; ApplicationPath = "$Release"; ApplicationPool = "Intra Services .NET4"}))
        }
        

        You can see the first thing it does is dot source the ApplicationServiceUtilities.ps1 file (I assume all these scripts are in the same directory).  This is done in order to include the ApplicationServiceInformation type into the PowerShell session.  Next I just have functions that return the various application service information that our various projects specify.  I break them apart by project so that I’m able to easily publish one project separately from another, but also have a Get-All function that returns back all of the service information for when we deploy all services together.  We deploy many of our projects in lock-step, so having a Get-All function makes sense for us, but it may not for you.  We have many more projects and services than I show here; I just show these as an example of how you can set yours up if you choose.

        One other thing you may notice is that my Get-*ApplicationServiceInformation functions take a $Release parameter that is used in the ApplicationPath; this is because our services are versioned.  Yours may not be though, in which case you can omit that parameter for your Get functions (or add any additional parameters that you do need).

        Lastly, to make things nice and easy, I create ConvertTo, Remove, and Delete scripts for each of our projects, as well as a scripts to do all of the projects at once.  Here’s an example of what one of these scripts would look like:

        param
        (
        	[parameter(Position=0,Mandatory=$true,HelpMessage="The 3-part version number of the release (x.x.x).")]
        	[ValidatePattern("^\d{1,5}\.\d{1,5}\.\d{1,5}$")]
        	[string] $Release
        )
        
        # Get the directory that this script is in.
        $THIS_SCRIPTS_DIRECTORY = Split-Path $script:MyInvocation.MyCommand.Path
        
        # Include the functions used to perform the actual operations.
        . (Join-Path $THIS_SCRIPTS_DIRECTORY ApplicationServiceLibrary.ps1)
        
        ConvertTo-ApplicationServices -Server "Our.WebServer.local" -ApplicationServicesInfo (Get-RqApplicationServiceInformation -Release $Release) -Verbose
        

        The first thing it does is prompt for the $Release version number; again, if you don’t version your services then you can omit that.

        The next thing it does is dot-source the ApplicationServiceLibrary.ps1 script to make all of the Get-*ApplicationServiceInformation functions that we defined in the previous file available.  I prefer to use the ApplicationServiceLibrary.ps1 file to place all of our services in a common place, and to avoid copy/pasting the ApplicationServiceInformation for each project into each Convert/Remove/Delete script; but that's my personal choice and if you prefer to copy-paste the code into a few different files instead of having a central library file, go hard.  If you omit the Library script though, then you will instead need to dot-source the ApplicationServiceUtilities.ps1 file here, since our Library script currently dot-sources it in for us.

        The final line is the one that actually calls our utility function to perform the operation.  It provides the web server hostname to connect to, and calls the library's Get-*ApplicationServiceInformation to retrieve the information for the web applications that should be created.  Notice too that it also provides the -Verbose switch.  Some of the IIS operations can take quite a while to run and don't generate any output, so I like to see the verbose output so I can gauge the progress of the script, but feel free to omit it.

        So this sample script creates all of the web applications for our Rq product and can be run very easily.  To make the corresponding Remove and Delete scripts, I would just copy this file and replace "ConvertTo-" with "Remove-" and "Delete-" respectively.  This allows you to have separate scripts for creating and removing each of your products that can easily be run automatically or manually, fully automating the process of creating and removing your web applications in IIS.
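For example, the matching Remove script would look something like this (a sketch; it simply swaps the function call, using the same placeholder server name as the example above):

```powershell
# Remove-RqServices.ps1 - hypothetical counterpart to the ConvertTo script above.
param
(
	[parameter(Position=0,Mandatory=$true,HelpMessage="The 3-part version number of the release (x.x.x).")]
	[ValidatePattern("^\d{1,5}\.\d{1,5}\.\d{1,5}$")]
	[string] $Release
)

# Get the directory that this script is in.
$THIS_SCRIPTS_DIRECTORY = Split-Path $script:MyInvocation.MyCommand.Path

# Include the functions used to perform the actual operations.
. (Join-Path $THIS_SCRIPTS_DIRECTORY ApplicationServiceLibrary.ps1)

Remove-ApplicationServices -Server "Our.WebServer.local" -ApplicationServicesInfo (Get-RqApplicationServiceInformation -Release $Release) -Verbose
```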

        If I need to remove the services for a bunch of versions, here is an example of how I can just create a quick script that calls my Remove Services script for each version that needs to be removed:

        # Get the directory that this script is in.
        $thisScriptsDirectory = Split-Path $script:MyInvocation.MyCommand.Path
        
        # Remove Rq application services for versions 4.11.33 to 4.11.43.
        $majorMinorVersion = "4.11"
        33..43 | foreach {
            $Release = "$majorMinorVersion.$_"
            Write-Host "Removing Rq '$Release' services..."
            & "$thisScriptsDirectory\Remove-RqServices.ps1" $Release
        }
        

        If you have any questions or suggestions feel free to leave a comment.  I hope you find this useful.

        Happy coding!

        PowerShell 2.0 vs. 3.0 Syntax Differences And More

        October 22nd, 2013 No comments

        I’m fortunate enough to work for a great company that tries to stay ahead of the curve and use newer technologies.  This means that when I’m writing my PowerShell (PS) scripts I typically don’t have to worry about only using PS v2.0 compatible syntax and cmdlets, as all of our PCs have v3.0 (soon to have v4.0).  This is great, until I release these scripts (or snippets from the scripts) for the general public to use; I have to keep in mind that many other people are still stuck running older versions of Windows, or not allowed to upgrade PowerShell.  So to help myself release PS v2.0 compatible scripts to the general public, I’m going to use this as a living document of the differences between PowerShell 2.0 and 3.0 that I encounter (so it will continue to grow over time; read as, bookmark it).  Of course there are other sites that have some of this info, but I’m going to try and compile a list of the ones that are relevant to me, in a nice simple format.

        Before we get to the differences, here are some things you may want to know relating to PowerShell versions.

        How to check which version of PowerShell you are running

        All PS versions:

        $PSVersionTable.PSVersion
        

         

        How to run/test your script against an older version of PowerShell (source)

        All PS versions:  use PowerShell.exe -Version [version] to start a new PowerShell session, where [version] is the PowerShell version that you want the session to use, then run your script in this new session.  Shorthand is PowerShell -v [version]

        PowerShell.exe -Version 2.0
        

        Note: You can’t run PowerShell ISE in an older version of PowerShell; only the Windows PowerShell console.
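Relatedly, if a script depends on newer syntax you can declare the minimum version it needs with a #Requires statement, so an older host refuses to run it with a clear error instead of failing part-way through (the script body below is just an illustrative example):

```powershell
#Requires -Version 3.0

# Under PowerShell 2.0 the host will not run this script at all,
# so the v3.0-only simplified Where syntax below is never reached.
Get-Service | Where Status -eq 'running'
```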

         

        PowerShell v2 and v3 Differences:

         

        Where-Object no longer requires braces (source)

        PS v2.0:

        Get-Service | Where { $_.Status -eq 'running' }
        

        PS v3.0:

        Get-Service | Where Status -eq 'running'
        

        PS V2.0 Error Message:

        Where : Cannot bind parameter 'FilterScript'. Cannot convert the "[PropertyName]" value of the type "[Type]" to type "System.Management.Automation.ScriptBlock".

         

        Using local variables in remote sessions (source)

        PS v2.0:

        $class = "win32_bios"
        Invoke-Command -cn dc3 {param($class) gwmi -class $class} -ArgumentList $class
        

        PS v3.0:

        $class = "win32_bios"
        Invoke-Command -cn dc3 {gwmi -class $Using:class}
        

         

        Variable validation attributes (source)

        PS v2.0: Validation only available on cmdlet/function/script parameters.

        PS v3.0: Validation available on cmdlet/function/script parameters, and on variables.

        [ValidateRange(1,5)][int]$someLocalVariable = 1
        

         

        Stream redirection (source)

        The Windows PowerShell redirection operators use the following characters to represent each output type:
                *   All output
                1   Success output
                2   Errors
                3   Warning messages
                4   Verbose output
                5   Debug messages
        
        NOTE: The All (*), Warning (3), Verbose (4) and Debug (5) redirection operators were introduced
               in Windows PowerShell 3.0. They do not work in earlier versions of Windows PowerShell.

         

        PS v2.0: Could only redirect Success and Error output.

        # Sends errors (2) and success output (1) to the success output stream.
        Get-Process none, Powershell 2>&1
        

        PS v3.0: Can also redirect Warning, Verbose, Debug, and All output.

        # Function to generate each kind of output.
        function Test-Output { Get-Process PowerShell, none; Write-Warning "Test!"; Write-Verbose "Test Verbose"; Write-Debug "Test Debug"}
        
        # Write every output stream to a text file.
        Test-Output *> Test-Output.txt
        
        

         

        Explicitly set parameter set variable values when not defined (source)

        PS v2.0 will throw an error if you try and access a parameter set parameter that has not been defined.  The solution is to give it a default value when it is not defined. Specify the Private scope in case a variable with the same name exists in the global scope or an inherited scope:

        # Default the ParameterSet variables that may not have been set depending on which parameter set is being used. This is required for PowerShell v2.0 compatibility.
        if (!(Test-Path Variable:Private:SomeStringParameter)) { $SomeStringParameter = $null }
        if (!(Test-Path Variable:Private:SomeIntegerParameter)) { $SomeIntegerParameter = 0 }
        if (!(Test-Path Variable:Private:SomeSwitchParameter)) { $SomeSwitchParameter = $false }
        

        PS v2.0 Error Message:

        The variable '$[VariableName]' cannot be retrieved because it has not been set.
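As a fuller sketch (my own example, not from the original post), here is a function with two parameter sets showing where that defaulting trick is needed:

```powershell
# Hypothetical function with two parameter sets.
function Set-Thing
{
    [CmdletBinding(DefaultParameterSetName="ByName")]
    param
    (
        [parameter(ParameterSetName="ByName")] [string] $Name,
        [parameter(ParameterSetName="ById")] [int] $Id
    )

    # Default the variables from the unused parameter set; without this,
    # referencing $Id when the "ByName" set was used throws in PS v2.0.
    if (!(Test-Path Variable:Private:Name)) { $Name = $null }
    if (!(Test-Path Variable:Private:Id)) { $Id = 0 }

    return "$Name/$Id"
}

Set-Thing -Name "widget"    # Returns "widget/0" on both v2.0 and v3.0+.
```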

         

        Parameter attributes require the equals sign

        PS v2.0:

        [parameter(Position=1,Mandatory=$true)] [string] $SomeParameter
        

        PS v3.0:

        [parameter(Position=1,Mandatory)] [string] $SomeParameter
        

        PS v2.0 Error Message:

        The "=" operator is missing after a named argument.

         

        Cannot use String.IsNullOrWhitespace (or any other post .Net 3.5 functionality)

        PS v2.0:

        [string]::IsNullOrEmpty($SomeString)
        

        PS v3.0:

        [string]::IsNullOrWhiteSpace($SomeString)
        

        PS v2.0 Error Message:

        IsNullOrWhitespace : Method invocation failed because [System.String] doesn't contain a method named 'IsNullOrWhiteSpace'.

        PS v2.0 compatible version of IsNullOrWhitespace function:

        # PowerShell v2.0 compatible version of [string]::IsNullOrWhitespace.
        function StringIsNullOrWhitespace([string] $string)
        {
            if ($string -ne $null) { $string = $string.Trim() }
            return [string]::IsNullOrEmpty($string)
        }
        

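A few quick sanity checks on that compatibility function (my own examples; the function is repeated here so the snippet is self-contained):

```powershell
# PowerShell v2.0 compatible version of [string]::IsNullOrWhitespace.
function StringIsNullOrWhitespace([string] $string)
{
    if ($string -ne $null) { $string = $string.Trim() }
    return [string]::IsNullOrEmpty($string)
}

StringIsNullOrWhitespace $null     # True
StringIsNullOrWhitespace "   "     # True
StringIsNullOrWhitespace " hi "    # False
```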
         

        Get-ChildItem cmdlet's -Directory and -File switches were introduced in PS v3.0

        PS v2.0:

        Get-ChildItem -Path $somePath | Where-Object { $_.PSIsContainer }	# Get directories only.
        Get-ChildItem -Path $somePath | Where-Object { !$_.PSIsContainer }	# Get files only.
        

        PS v3.0:

        Get-ChildItem -Path $somePath -Directory
        Get-ChildItem -Path $somePath -File
        

         

         

        Other Links

        Creating Strongly Typed Objects In PowerShell, Rather Than Using An Array Or PSCustomObject

        October 21st, 2013 No comments

        I recently read a great article that explained how to create hashtables, dictionaries, and PowerShell objects.  I already knew a bit about these, but this article gives a great comparison between them, when to use each of them, and how to create them in the different versions of PowerShell.

        Right now I’m working on refactoring some existing code into some general functions for creating, removing, and destroying IIS applications (read about it here).  At first, I thought that this would be a great place to use PSCustomObject, as in order to perform these operations I needed 3 pieces of information about a website; the Website name, the Application Name (essentially the path to the application under the Website root), and the Application Pool that the application should run in.

         

        Using an array

        So initially the code I wrote just used an array to hold the 3 properties of each application service:

        # Store app service info as an array of arrays.
        $AppServices = @(
        	("MyWebsite", "$Version/Reporting.Services", "Services .NET4"),
        	("MyWebsite", "$Version/Core.Services", "Services .NET4"),
        	...
        )
        
        # Remove all of the Web Applications.
        foreach ($appInfo in $AppServices )
        {
        	$website = $appInfo[0]
        	$appName = $appInfo[1]
        	$appPool = $appInfo[2]
        	...
        }
        
        

        There is nothing "wrong" with using an array to store the properties; it works.  However, now that I am refactoring the functions to make them general purpose to be used by other people/scripts, this does have one very undesirable limitation: the properties must always be stored in the correct order in the array (i.e. Website in position 0, App Name in 1, and App Pool in 2).  Since the list of app services will be passed into my functions, this would require the calling script to know to put the properties in this order.  Boo.
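For example, nothing stops a caller from getting the order wrong; the values are just silently misassigned (a contrived example of my own):

```powershell
# The caller accidentally puts the Application Pool first.
$appInfo = @("Services .NET4", "MyWebsite", "1.0.0/Core.Services")

# The consuming code still "works": no error, just wrong values.
$website = $appInfo[0]   # "Services .NET4" - actually the Application Pool!
$appName = $appInfo[1]   # "MyWebsite"      - actually the Website.
$appPool = $appInfo[2]   # The Application Path.
```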

        Another option that I didn’t consider when I originally wrote the script was to use an associative array, but it has the same drawbacks as using a PSCustomObject discussed below.

         

        Using PSCustomObject

        So I thought let’s use a PSCustomObject instead, as that way the client does not have to worry about the order of the information; as long as their PSCustomObject has Website, ApplicationPath, and ApplicationPool properties then we’ll be able to process it.  So I had this:

        [PSCustomObject[]] $applicationServicesInfo = @(
        	[PSCustomObject]@{Website = "MyWebsite"; ApplicationPath = "$Version/Reporting.Services"; ApplicationPool = "Services .NET4"},
        	[PSCustomObject]@{Website = "MyWebsite"; ApplicationPath = "$Version/Core.Services"; ApplicationPool = "Services .NET4"},
        	...
        )
        
        function Remove-ApplicationServices
        {
        	param([PSCustomObject[]] $ApplicationServicesInfo)
        
        	# Remove all of the Web Applications.
        	foreach ($appInfo in [PSCustomObject[]]$ApplicationServicesInfo)
        	{
        		$website = $appInfo.Website
        		$appPath = $appInfo.ApplicationPath
        		$appPool = $appInfo.ApplicationPool
        		...
        	}
        }
        

        I liked this better as the properties are explicitly named, so there’s no guess work about which information the property contains, but it’s still not great.  One thing that I don’t have here (and really should), is validation to make sure that the passed in PSCustomObjects actually have Website, ApplicationPath, and ApplicationPool properties on them, otherwise an exception will be thrown when I try to access them.  So with this approach I would still need to have documentation and validation to ensure that the client passes in a PSCustomObject with those properties.
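One way to add that validation would be something like this (a sketch of my own; the helper function name is made up and is not part of the original scripts):

```powershell
# Throw a descriptive error if a passed-in object is missing any of the
# properties we require (hypothetical helper).
function Assert-HasProperties([PSCustomObject] $Object, [string[]] $PropertyNames)
{
    foreach ($name in $PropertyNames)
    {
        if (-not ($Object.PSObject.Properties.Name -contains $name))
        {
            throw "The object is missing the required '$name' property."
        }
    }
}

$appInfo = [PSCustomObject]@{ Website = "MyWebsite"; ApplicationPath = "1.0.0/Core.Services"; ApplicationPool = "Services .NET4" }
Assert-HasProperties $appInfo @("Website", "ApplicationPath", "ApplicationPool")   # Passes silently.
```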

         

        Using a new strongly typed object

        I frequently read other PowerShell blog posts and recently stumbled across this one.  In the article he mentions creating a new compiled type by passing a string to the Add-Type cmdlet; essentially writing C# code in his PowerShell script to create a new class.  I knew that you could use Add-Type to import other assemblies, but never realized that you could use it to import an assembly that doesn’t actually exist (i.e. a string in your PowerShell script).  This is freaking amazing! So here is what my new solution looks like:

        try {	# Wrap in a try-catch in case we try to add this type twice.
        # Create a class to hold an IIS Application Service's Information.
        Add-Type -TypeDefinition @"
        	using System;
        	
        	public class ApplicationServiceInformation
        	{
        		// The name of the Website in IIS.
        		public string Website { get; set;}
        		
        		// The path to the Application, relative to the Website root.
        		public string ApplicationPath { get; set; }
        
        		// The Application Pool that the application is running in.
        		public string ApplicationPool { get; set; }
        
        		// Implicit Constructor.
        		public ApplicationServiceInformation() { }
        
        		// Explicit constructor.
        		public ApplicationServiceInformation(string website, string applicationPath, string applicationPool)
        		{
        			this.Website = website;
        			this.ApplicationPath = applicationPath;
        			this.ApplicationPool = applicationPool;
        		}
        	}
        "@
        } catch {}
        
        $anotherService = New-Object ApplicationServiceInformation
        $anotherService.Website = "MyWebsite"
        $anotherService.ApplicationPath = "$Version/Payment.Services"
        $anotherService.ApplicationPool = "Services .NET4"
        	
        [ApplicationServiceInformation[]] $applicationServicesInfo = @(
        	(New-Object ApplicationServiceInformation("MyWebsite", "$Version/Reporting.Services", "Services .NET4")),
        	(New-Object ApplicationServiceInformation -Property @{Website = "MyWebsite"; ApplicationPath = "$Version/Core.Services"; ApplicationPool = "Services .NET4"}),
        	$anotherService,
        	...
        )
        
        function Remove-ApplicationServices
        {
        	param([ApplicationServiceInformation[]] $ApplicationServicesInfo)
        
        	# Remove all of the Web Applications.
        	foreach ($appInfo in [ApplicationServiceInformation[]]$ApplicationServicesInfo)
        	{
        		$website = $appInfo.Website
        		$appPath = $appInfo.ApplicationPath
        		$appPool = $appInfo.ApplicationPool
        		...
        	}
        }
        

        I first create a simple container class to hold the application service information, and now all of my properties are explicit like with the PSCustomObject, but I’m also guaranteed that the properties will exist on any object that is passed into my function.  From there I declare my array of ApplicationServiceInformation objects, and the function that we can pass them into. Note that I wrap each New-Object call in parentheses, otherwise PowerShell parses the array incorrectly and will throw an error.

        As you can see from the snippets above and below, there are several different ways that we can initialize a new instance of our ApplicationServiceInformation class:

        $service1 = New-Object ApplicationServiceInformation("Explicit Constructor", "Core.Services", ".NET4")
        
        $service2 = New-Object ApplicationServiceInformation -ArgumentList ("Explicit Constructor ArgumentList", "Core.Services", ".NET4")
        
        $service3 = New-Object ApplicationServiceInformation -Property @{Website = "Using Property"; ApplicationPath = "Core.Services"; ApplicationPool = ".NET4"}
        
        $service4 = New-Object ApplicationServiceInformation
        $service4.Website = "Properties added individually"
        $service4.ApplicationPath = "Core.Services"
        $service4.ApplicationPool = "Services .NET4"
        

         

        Caveats

        • Note that I wrapped the call to Add-Type in a Try-Catch block.  This is to prevent PowerShell from throwing an error if the type tries to get added twice.  It’s sort of a hacky workaround, but there aren’t many good alternatives, since you cannot unload an assembly.
        • This means that while developing if you make any changes to the class, you’ll have to restart your PowerShell session for the changes to be applied, since the Add-Type cmdlet will only work properly the first time that it is called in a session.
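        If the try-catch feels too hacky, one alternative is to check whether the type already exists in the session before calling Add-Type.  This is just a sketch of that idea (not the approach used above), and the PSTypeName cast it relies on requires PowerShell 3.0 or later:

```powershell
# Sketch: only call Add-Type if the type is not already loaded in this
# session, instead of swallowing the "type already exists" error.
if (-not ([System.Management.Automation.PSTypeName]'ApplicationServiceInformation').Type)
{
    Add-Type -TypeDefinition @"
public class ApplicationServiceInformation
{
    public string Website { get; set; }
    public string ApplicationPath { get; set; }
    public string ApplicationPool { get; set; }
}
"@
}
```

        Note that this only avoids the error; it doesn’t get around the caveat above, since the already-loaded version of the class is still the one the session will use.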

        I hope you found something in here useful.

        Happy coding!

        PowerShell Functions To Delete Old Files And Empty Directories

        October 15th, 2013 7 comments

        I thought I’d share some PowerShell (PS) functions that I wrote for some clean-up scripts at work.  I use these functions to delete files older than a certain date. Note that these functions require PS v3.0; slower PS v2.0 compatible functions are given at the end of this article.

        # Function to remove all empty directories under the given path.
        # If -DeletePathIfEmpty is provided the given Path directory will also be deleted if it is empty.
        # If -OnlyDeleteDirectoriesCreatedBeforeDate is provided, empty folders will only be deleted if they were created before the given date.
        # If -OnlyDeleteDirectoriesNotModifiedAfterDate is provided, empty folders will only be deleted if they have not been written to after the given date.
        function Remove-EmptyDirectories([parameter(Mandatory)][ValidateScript({Test-Path $_})][string] $Path, [switch] $DeletePathIfEmpty, [DateTime] $OnlyDeleteDirectoriesCreatedBeforeDate = [DateTime]::MaxValue, [DateTime] $OnlyDeleteDirectoriesNotModifiedAfterDate = [DateTime]::MaxValue)
        {
            Get-ChildItem -Path $Path -Recurse -Force -Directory | Where-Object { (Get-ChildItem -Path $_.FullName -Recurse -Force -File) -eq $null } | 
                Where-Object { $_.CreationTime -lt $OnlyDeleteDirectoriesCreatedBeforeDate -and $_.LastWriteTime -lt $OnlyDeleteDirectoriesNotModifiedAfterDate } | 
                Remove-Item -Force -Recurse
        
            # If we should delete the given path when it is empty, and it is a directory, and it is empty, and it meets the date requirements, then delete it.
            if ($DeletePathIfEmpty -and (Test-Path -Path $Path -PathType Container) -and (Get-ChildItem -Path $Path -Force) -eq $null -and
                ((Get-Item $Path).CreationTime -lt $OnlyDeleteDirectoriesCreatedBeforeDate) -and ((Get-Item $Path).LastWriteTime -lt $OnlyDeleteDirectoriesNotModifiedAfterDate))
            { Remove-Item -Path $Path -Force }
        }
        
        # Function to remove all files in the given Path that were created before the given date, as well as any empty directories that may be left behind.
        function Remove-FilesCreatedBeforeDate([parameter(Mandatory)][ValidateScript({Test-Path $_})][string] $Path, [parameter(Mandatory)][DateTime] $DateTime, [switch] $DeletePathIfEmpty)
        {
            Get-ChildItem -Path $Path -Recurse -Force -File | Where-Object { $_.CreationTime -lt $DateTime } | Remove-Item -Force
            Remove-EmptyDirectories -Path $Path -DeletePathIfEmpty:$DeletePathIfEmpty -OnlyDeleteDirectoriesCreatedBeforeDate $DateTime
        }
        
        # Function to remove all files in the given Path that have not been modified after the given date, as well as any empty directories that may be left behind.
        function Remove-FilesNotModifiedAfterDate([parameter(Mandatory)][ValidateScript({Test-Path $_})][string] $Path, [parameter(Mandatory)][DateTime] $DateTime, [switch] $DeletePathIfEmpty)
        {
            Get-ChildItem -Path $Path -Recurse -Force -File | Where-Object { $_.LastWriteTime -lt $DateTime } | Remove-Item -Force
            Remove-EmptyDirectories -Path $Path -DeletePathIfEmpty:$DeletePathIfEmpty -OnlyDeleteDirectoriesNotModifiedAfterDate $DateTime
        }
        
        

        The Remove-EmptyDirectories function removes all empty directories under the given path, and optionally (via the DeletePathIfEmpty switch) the path directory itself if it is empty after cleaning up the other directories. It also takes a couple parameters that may be specified if you only want to delete the empty directories that were created before a certain date, or that haven’t been written to since a certain date.

        The Remove-FilesCreatedBeforeDate and Remove-FilesNotModifiedAfterDate functions are very similar to each other.  They delete all files under the given path whose Created Date or Last Written To Date, respectively, is less than the given DateTime.  They then call the Remove-EmptyDirectories function with the provided date to clean up any leftover empty directories.

        To call the last 2 functions, just provide the path to the file/directory that you want it to delete if older than the given date-time.  Here are some examples of calling all the functions:

        # Delete all files created more than 2 days ago.
        Remove-FilesCreatedBeforeDate -Path "C:\Some\Directory" -DateTime ((Get-Date).AddDays(-2)) -DeletePathIfEmpty
        
        # Delete all files that have not been updated in 8 hours.
        Remove-FilesNotModifiedAfterDate -Path "C:\Another\Directory" -DateTime ((Get-Date).AddHours(-8))
        
        # Delete a single file if it is more than 30 minutes old.
        Remove-FilesCreatedBeforeDate -Path "C:\Another\Directory\SomeFile.txt" -DateTime ((Get-Date).AddMinutes(-30))
        
        # Delete all empty directories in the Temp folder, as well as the Temp folder itself if it is empty.
        Remove-EmptyDirectories -Path "C:\SomePath\Temp" -DeletePathIfEmpty
        
        # Delete all empty directories created before Jan 1, 2014 3 PM.
        Remove-EmptyDirectories -Path "C:\SomePath\WithEmpty\Directories" -OnlyDeleteDirectoriesCreatedBeforeDate ([DateTime]::Parse("Jan 1, 2014 15:00:00"))
        
        

        Notice that I am using Get-Date to get the current date and time, and then subtracting the specified amount of time from it in order to get a date-time relative to the current time; you can use any valid DateTime though, such as a hard-coded date of January 1st, 2014 3PM.

        I use these functions in some scripts that we run nightly via a scheduled task in Windows.  Hopefully you find them useful too.

         

        PowerShell v2.0 Compatible Functions

        As promised, here are the slower PS v2.0 compatible functions.  The main difference is that they use $_.PSIsContainer in the Where-Object clause rather than using the –File / –Directory Get-ChildItem switches.  The Measure-Command cmdlet shows that using the switches is about 3x faster than using the where clause, but since we are talking about milliseconds here you likely won’t notice the difference unless you are traversing a large file tree (which I happen to be for my scripts that we use to clean up TFS builds).
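        As a rough sketch of how such a comparison can be done with Measure-Command (the path here is just an example, and your timings will vary with the size of the directory tree):

```powershell
# Compare the -Directory switch (PS v3.0+) against the PS v2.0 compatible
# PSIsContainer where clause over the same directory tree.
$path = 'C:\Windows\System32'   # example path; substitute your own tree

$withSwitch = Measure-Command { Get-ChildItem -Path $path -Force -Directory }
$withWhere  = Measure-Command { Get-ChildItem -Path $path -Force | Where-Object { $_.PSIsContainer } }

"-Directory switch:   {0:N0} ms" -f $withSwitch.TotalMilliseconds
"PSIsContainer where: {0:N0} ms" -f $withWhere.TotalMilliseconds
```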

        # Function to remove all empty directories under the given path.
        # If -DeletePathIfEmpty is provided the given Path directory will also be deleted if it is empty.
        # If -OnlyDeleteDirectoriesCreatedBeforeDate is provided, empty folders will only be deleted if they were created before the given date.
        # If -OnlyDeleteDirectoriesNotModifiedAfterDate is provided, empty folders will only be deleted if they have not been written to after the given date.
        function Remove-EmptyDirectories([parameter(Mandatory=$true)][ValidateScript({Test-Path $_})][string] $Path, [switch] $DeletePathIfEmpty, [DateTime] $OnlyDeleteDirectoriesCreatedBeforeDate = [DateTime]::MaxValue, [DateTime] $OnlyDeleteDirectoriesNotModifiedAfterDate = [DateTime]::MaxValue)
        {
            Get-ChildItem -Path $Path -Recurse -Force | Where-Object { $_.PSIsContainer -and (Get-ChildItem -Path $_.FullName -Recurse -Force | Where-Object { !$_.PSIsContainer }) -eq $null } | 
                Where-Object { $_.CreationTime -lt $OnlyDeleteDirectoriesCreatedBeforeDate -and $_.LastWriteTime -lt $OnlyDeleteDirectoriesNotModifiedAfterDate } | 
                Remove-Item -Force -Recurse
        
            # If we should delete the given path when it is empty, and it is a directory, and it is empty, and it meets the date requirements, then delete it.
            if ($DeletePathIfEmpty -and (Test-Path -Path $Path -PathType Container) -and (Get-ChildItem -Path $Path -Force) -eq $null -and
                ((Get-Item $Path).CreationTime -lt $OnlyDeleteDirectoriesCreatedBeforeDate) -and ((Get-Item $Path).LastWriteTime -lt $OnlyDeleteDirectoriesNotModifiedAfterDate))
            { Remove-Item -Path $Path -Force }
        }
        
        # Function to remove all files in the given Path that were created before the given date, as well as any empty directories that may be left behind.
        function Remove-FilesCreatedBeforeDate([parameter(Mandatory=$true)][ValidateScript({Test-Path $_})][string] $Path, [parameter(Mandatory=$true)][DateTime] $DateTime, [switch] $DeletePathIfEmpty)
        {
            Get-ChildItem -Path $Path -Recurse -Force | Where-Object { !$_.PSIsContainer -and $_.CreationTime -lt $DateTime } | Remove-Item -Force
            Remove-EmptyDirectories -Path $Path -DeletePathIfEmpty:$DeletePathIfEmpty -OnlyDeleteDirectoriesCreatedBeforeDate $DateTime
        }
        
        # Function to remove all files in the given Path that have not been modified after the given date, as well as any empty directories that may be left behind.
        function Remove-FilesNotModifiedAfterDate([parameter(Mandatory=$true)][ValidateScript({Test-Path $_})][string] $Path, [parameter(Mandatory=$true)][DateTime] $DateTime, [switch] $DeletePathIfEmpty)
        {
            Get-ChildItem -Path $Path -Recurse -Force | Where-Object { !$_.PSIsContainer -and $_.LastWriteTime -lt $DateTime } | Remove-Item -Force
            Remove-EmptyDirectories -Path $Path -DeletePathIfEmpty:$DeletePathIfEmpty -OnlyDeleteDirectoriesNotModifiedAfterDate $DateTime
        }
        
        

        Happy coding!

        Windows Phone Developers: Do not renew your subscription until the expiry DAY or else Microsoft steals your money

        October 1st, 2013 No comments

        The Problem

        So as I found out today, if you renew your Windows Phone Developer subscription early, it does not renew it for a year from the expiry date, it renews it for a year from the date you paid to have it renewed.  So essentially you pay for a 12 month subscription, but receive less than 12 months.  I’m not sure if the Windows Store subscription has the same problem or not, but beware.

         

        The Story

        After this happened I started up a support request chat with MS to have them extend the expiry date to what it should be, but was told that they are not able to do this.  Here is our chat transcript:

        General Info

        Chat start time

        Oct 1, 2013 6:16:27 PM EST

        Chat end time

        Oct 1, 2013 6:40:49 PM EST

        Duration (actual chatting time)

        00:24:21

        Operator

        Adrian

        Chat Transcript

        info: Please wait for an agent to respond.  You are currently ‘1’ in the queue.
        info: Privacy Statement
        You are now chatting with ‘Adrian’.
        Adrian: Hello, Dan
        Dan: Hi there, I just renewed my Windows Phone Developer subscription today
        Dan: My old expiry date was 10/24/2013, but when I renewed it the new expiry date is 10/1/2014
        Dan: but it should be 10/24/2014
        Adrian: So I understand that you have renewed your subscription before the expiration date and it seems that you have lost several days on your subscription.
        Dan: yup, that’s basically it
        Dan: I got the email notification about it expiring soon today, so I thought I would do it now before I forgot about it
        Adrian: As it turns out, renewing your subscription manually is only available within 30 days of the expiration, but currently it does not stack the subscription.
        Adrian: We recommend that you wait closer to your renewal date or let the account auto-renew (on by default) so that you do not lose any days.
        Dan: so can you manually adjust my expiration date to be 10/24/2014 like it should, and submit a bug for them to fix that?
        Adrian: I apologize for the inconvenience since I currently do not have a way to extend the subscription or modify the expiration date. Our engineers are already aware of the renewal behavior, but there is no estimated date on when a change will be implemented.
        Dan: so can you guys credit my credit card for the difference then? You know that’s essentially jut stealing then…
        Dan: Or just escalate me to a supervisor/manager/engineer who does have the ability to change the expiration date?
        Adrian: I apologize as there is not a way to modify the expiration date within the system. My team works here as peers so there is no escalation path. The prorated amount for 24 days would amount to 1.28 USD out of 19 USD for a years subscription however we do not offer partial refunds per Microsoft policy.
        Adrian: At best, I can refund the full amount and cancel your current subscription.  
        Adrian: If there is any other way that I can offer my assistance I will be glad to help.
        Adrian: Hello, are you there? I have not received any response from you.
        Adrian: I have not yet received any response from you. I’ll wait for a minute before closing this chat session. Please feel free to initiate a new chat session so that we can assist you further.
        Adrian: For some reason, possibly due to technical difficulty, I have not received a response from you. I will update the case notes and end this session. Please feel free to initiate a new chat session so that we can assist you further. Thank you for contacting MarketPlace Chat Support. Have a great day!

        I started this chat up while at work and unfortunately had to leave my desk for 15 minutes, so Adrian closed our chat before I could reply.  The Windows Phone Developer subscription is now only $19/year, but I actually used a promo code that I received many months ago when it was still $100/year; so while Adrian mentioned that the missed days only added up to $1.28, the cost would actually be closer to $10.28 for the people who gave me the promo code.  Also, by this time next year the price may go back up to $100/year, in which case I’ll be forking over the $10.28 to pay for the month of October 2014.

        Also, while Adrian admits that “Our engineers are already aware of the renewal behavior, but there is no estimated date on when a change will be implemented.”, this behaviour is not stated anywhere on the web page when renewing your account.  This seems pretty irresponsible to me, especially when it directly affects payments.  Can you imagine if your internet provider was allowed to just charge you for an extra month of service without warning or agreement?

        Adrian mentioned that the recommended thing to do is to just let your annual subscription auto-renew.  This is likely the ideal situation, but I’m often paranoid that automatic transactions won’t go through and will go unnoticed (I’ve been bitten by this in the past), or that by the time the renewal comes around my credit card info will have changed, etc., so I often manually renew my annual subscriptions; especially since the consequence of not renewing your membership is that your apps are removed from the store and you stop making money off of them.  MS is basically stealing money from those who choose to manually renew their subscription.

        I’m not going to bother pursuing this with MS as $10 isn’t worth the time or stress, but I wanted to try and let others know so that you don’t get burned as well.

        WLW Post Fails With Error “The underlying connection was closed: An unexpected error occurred on a receive.”

        September 27th, 2013 1 comment

        When trying to upload my last blog post from Windows Live Writer (WLW) to WordPress (WP) I received the following error:

        ————————————————————————-

        Network Connection Error

        Error attempting to connect to blog at:

        http://blog.danskingdom.com/xmlrpc.php

        The underlying connection was closed. An unexpected error occurred on a receive.

        ————————————————————————-

        WLWNetworkConnectionError

         

        I had no problem uploading to my blog a couple weeks earlier and hadn’t done any updates or changed anything, so I thought this was strange.  After waiting a day, thinking maybe GoDaddy (my WP host) was having issues, I was still getting the error.  After Googling I found many others reporting this error with varying degrees of success fixing it.  So after trying some suggestions that worked for others (change WLW blog URL from http to https, edit the WP xmlrpc.php file, delete and recreate blog account in WLW, reboot, etc.) I was still getting this same error.

        So I decided to try posting a new “test” post, and lo and behold it worked.  So it appeared the problem was something with the content of my article.  I started removing chunks of content from the article and trying to post.  Eventually I found that the problem was being caused by the string “In that post” in the first paragraph of the post.  I thought that maybe some weird hidden characters had got in there somehow, but after reviewing the article’s Source I could see that it was just plain old text.  I deleted the sentence and retyped it, but it still didn’t work.  If I just removed “In that post” from the sentence then everything worked fine; very strange.  After more playing around, I found that if I just added a comma to the end and made it “In that post,”, that also fixed the problem.  So that’s how I’ve left it.

        I don’t know what is special about the string “In that post”;  I created another test article with that string in it and was able to post it without any problems.  Just a weird one-off WLW-WP problem I guess.

         

        Moral of the story

        If you run into this same error, before you go muddling with config files and recreating your blog account, just try posting a quick “test” article.  If it works, then the problem is somewhere in your article’s content, so start stripping pieces away until you are able to get it to post successfully and narrow down the culprit.  Also, if you don’t want to publish a half-baked article while you are tracking down the problem, you can do a Save Post Draft To Blog instead of a full Publish to see if you are still getting the error.

        Happy coding!

         

        – Update –

        I’ve run into this problem again when trying to post this article.  3 different spots in the article were causing the problem.  Here is the source of the article with what broke it, and what worked:

        1. This broke:

        <li>Click Yes when prompted to < strong > Run With UI Access < / strong > . </li>

        (I had to add spaces around all of the 3 characters <, >, and / in the strong tags to get it to post here)

        This worked:

        <li>Click Yes when prompted to Run With UI Access.</li>

         

        2. This broke:

        <p>Today I stumbled across <a href="http://www.autohotkey.com/board/topic/70449-enable-interaction-with-administrative-programs/">this post on the AHK community forums < / a > .&#160;

        (I had to add spaces around the each character of the closing </a> tag to get it to post here)

        This worked:

        <p>Today I stumbled across <a href="http://www.autohotkey.com/board/topic/70449-enable-interaction-with-administrative-programs/">this post</a> on the AHK community forums.&#160;

         

        3. This broke:

        the <a href="http://www.autohotkey.com/docs/commands/RunAs.htm">RunAs command < / a > .</p>

        (Again, I had to add spaces around each character in the closing </a> tag to get it to post here)

        This worked:

        the <a href="http://www.autohotkey.com/docs/commands/RunAs.htm">RunAs</a> command.</p>

         

        I can reproduce this issue every time on that article, and also on this one (which is why I had to change the problem code slightly so I could get it to post here).  So unlike my first encounter with this problem, these ones all seem to be problems parsing html markup tags; specifically the </> characters.  I’m not sure if this is a problem with Windows Live Writer or WordPress, but it is definitely a frustrating bug.  I’m running Windows 8 x64 and the latest versions of WLW and WP.

        If you have any thoughts please comment below.