
Archive for the ‘PowerShell’ Category

Module to Synchronously Zip and Unzip using PowerShell 2.0

May 2nd, 2015

If you search for ways to zip and unzip files using PowerShell, you will find that there are a lot of different methods.  Some people invoke .Net 4.5 assembly methods, others call a 3rd party executable (I've shown how to do this in one of my other posts).  For my needs this time around I required a method that didn't involve using 3rd party tools, and I wanted my PowerShell script to work on any Windows OS, not just ones that had .Net 4.5 installed (which isn't available for older OSs like Windows XP).

I quickly found what I was after; you can use the OS-native Shell.Application COM object.  This is what Windows/File Explorer uses behind the scenes when you copy/cut/paste a file.  The problem though was that the operations all happen asynchronously, so there was no way for me to determine when the Zip operation actually completed.  This was a problem, as I wanted my PowerShell script to copy the zip file to a network location once all of the files had been zipped up, and perform other operations on files once they were unzipped from a different zip file, and if I'm zipping/unzipping many MBs or GBs of data, the operation might take several minutes.  Most examples I found online worked around this by just putting a Start-Sleep -Seconds 10 after the call to create or extract the Zip files.  That's a super simple solution, and it works, but I wasn't always sure how large the directory that I wanted to zip/unzip was going to be, and didn't want to have my script sleep for 5 minutes when the zip/unzip operation sometimes only takes half a second.  This is what led me to create the PowerShell module below.
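
For reference, the typical snippet you'll find online looks something like the following sketch (the paths are just examples, and it assumes the zip file already exists); the fixed Start-Sleep at the end is the part I wanted to avoid:

# Typical asynchronous approach: kick off the copy into the zip file, then sleep and hope it was long enough.
$shell = New-Object -ComObject Shell.Application
$zipShell = $shell.NameSpace('C:\Temp\ZipFile.zip')	# Assumes ZipFile.zip already exists.
$zipShell.CopyHere('C:\Test\ZipMeUp')
Start-Sleep -Seconds 10	# A fixed guess at how long the zip operation will take.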

This module allows you to add files and directories to a new or existing zip file, as well as to extract the contents of a zip file.  Also, it will block script execution until the zip/unzip operation completes.

Here is an example of how to call the 2 public module functions, Compress-ZipFile (i.e. Zip) and Expand-ZipFile (i.e. Unzip):

# If you place the psm1 file in the global PowerShell Modules directory then you could reference it just by name, not by the entire file path like we do here (assumes psm1 file is in same directory as your script).
$THIS_SCRIPTS_DIRECTORY_PATH = Split-Path $script:MyInvocation.MyCommand.Path
$SynchronousZipAndUnzipModulePath = Join-Path $THIS_SCRIPTS_DIRECTORY_PATH 'Synchronous-ZipAndUnzip.psm1'

# Import the Synchronous-ZipAndUnzip module.
Import-Module -Name $SynchronousZipAndUnzipModulePath

# Variables used to test the functions.
$zipFilePath = "C:\Temp\ZipFile.zip"
$filePath = "C:\Test.txt"
$directoryPath = "C:\Test\ZipMeUp"
$destinationDirectoryPath = "C:\Temp\UnzippedContents"

# Create a new Zip file that contains only Test.txt.
Compress-ZipFile -ZipFilePath $zipFilePath -FileOrDirectoryPathToAddToZipFile $filePath -OverwriteWithoutPrompting

# Add the ZipMeUp directory to the zip file.
Compress-ZipFile -ZipFilePath $zipFilePath -FileOrDirectoryPathToAddToZipFile $directoryPath -OverwriteWithoutPrompting

# Unzip the Zip file to a new UnzippedContents directory.
Expand-ZipFile -ZipFilePath $zipFilePath -DestinationDirectoryPath $destinationDirectoryPath -OverwriteWithoutPrompting

 

And here is the Synchronous-ZipAndUnzip.psm1 module code itself:

#Requires -Version 2.0

# Recursive function to calculate the total number of files and directories in the Zip file.
function GetNumberOfItemsInZipFileItems($shellItems)
{
	[int]$totalItems = $shellItems.Count
	foreach ($shellItem in $shellItems)
	{
		if ($shellItem.IsFolder)
		{ $totalItems += GetNumberOfItemsInZipFileItems -shellItems $shellItem.GetFolder.Items() }
	}
	$totalItems
}

# Recursive function to move a directory into a Zip file, since we can move files out of a Zip file, but not directories, and copying a directory into a Zip file when it already exists is not allowed.
function MoveDirectoryIntoZipFile($parentInZipFileShell, $pathOfItemToCopy)
{
	# Get the name of the file/directory to copy, and the item itself.
	$nameOfItemToCopy = Split-Path -Path $pathOfItemToCopy -Leaf
	if ($parentInZipFileShell.IsFolder)
	{ $parentInZipFileShell = $parentInZipFileShell.GetFolder }
	$itemToCopyShell = $parentInZipFileShell.ParseName($nameOfItemToCopy)
	
	# If this item does not exist in the Zip file yet, or it is a file, move it over.
	if ($itemToCopyShell -eq $null -or !$itemToCopyShell.IsFolder)
	{
		$parentInZipFileShell.MoveHere($pathOfItemToCopy)
		
		# Wait for the file to be moved before continuing, to avoid errors about the zip file being locked or a file not being found.
		while (Test-Path -Path $pathOfItemToCopy)
		{ Start-Sleep -Milliseconds 10 }
	}
	# Else this is a directory that already exists in the Zip file, so we need to traverse it and copy each file/directory within it.
	else
	{
		# Copy each file/directory in the directory to the Zip file.
		foreach ($item in (Get-ChildItem -Path $pathOfItemToCopy -Force))
		{
			MoveDirectoryIntoZipFile -parentInZipFileShell $itemToCopyShell -pathOfItemToCopy $item.FullName
		}
	}
}

# Recursive function to move all of the files that start with the File Name Prefix to the Directory To Move Files To.
function MoveFilesOutOfZipFileItems($shellItems, $directoryToMoveFilesToShell, $fileNamePrefix)
{
	# Loop through every item in the file/directory.
	foreach ($shellItem in $shellItems)
	{
		# If this is a directory, recursively call this function to iterate over all files/directories within it.
		if ($shellItem.IsFolder)
		{ 
			MoveFilesOutOfZipFileItems -shellItems $shellItem.GetFolder.Items() -directoryToMoveFilesToShell $directoryToMoveFilesToShell -fileNamePrefix $fileNamePrefix
		}
		# Else this is a file.
		else
		{
			# If this file name starts with the File Name Prefix, move it to the specified directory.
			if ($shellItem.Name.StartsWith($fileNamePrefix))
			{
				$directoryToMoveFilesToShell.MoveHere($shellItem)
			}
		}			
	}
}

function Expand-ZipFile
{
	[CmdletBinding()]
	param
	(
		[parameter(Position=1,Mandatory=$true)]
		[ValidateScript({(Test-Path -Path $_ -PathType Leaf) -and $_.EndsWith('.zip', [StringComparison]::OrdinalIgnoreCase)})]
		[string]$ZipFilePath, 
		
		[parameter(Position=2,Mandatory=$false)]
		[string]$DestinationDirectoryPath, 
		
		[Alias("Force")]
		[switch]$OverwriteWithoutPrompting
	)
	
	BEGIN { }
	END { }
	PROCESS
	{	
		# If a Destination Directory was not given, create one in the same directory as the Zip file, with the same name as the Zip file.
		if ($DestinationDirectoryPath -eq $null -or $DestinationDirectoryPath.Trim() -eq [string]::Empty)
		{
			$zipFileDirectoryPath = Split-Path -Path $ZipFilePath -Parent
			$zipFileNameWithoutExtension = [System.IO.Path]::GetFileNameWithoutExtension($ZipFilePath)
			$DestinationDirectoryPath = Join-Path -Path $zipFileDirectoryPath -ChildPath $zipFileNameWithoutExtension
		}
		
		# If the directory to unzip the files to does not exist yet, create it.
		if (!(Test-Path -Path $DestinationDirectoryPath -PathType Container)) 
		{ New-Item -Path $DestinationDirectoryPath -ItemType Container > $null }

		# Flags and values found at: https://msdn.microsoft.com/en-us/library/windows/desktop/bb759795%28v=vs.85%29.aspx
		$FOF_SILENT = 0x0004
		$FOF_NOCONFIRMATION = 0x0010
		$FOF_NOERRORUI = 0x0400

		# Set the flag values based on the parameters provided.
		$copyFlags = 0
		if ($OverwriteWithoutPrompting)
		{ $copyFlags = $FOF_NOCONFIRMATION }
	#	{ $copyFlags = $FOF_SILENT + $FOF_NOCONFIRMATION + $FOF_NOERRORUI }

		# Get the Shell object, Destination Directory, and Zip file.
	    $shell = New-Object -ComObject Shell.Application
		$destinationDirectoryShell = $shell.NameSpace($DestinationDirectoryPath)
	    $zipShell = $shell.NameSpace($ZipFilePath)
		
		# Start copying the Zip files into the destination directory, using the flags specified by the user. This is an asynchronous operation.
	    $destinationDirectoryShell.CopyHere($zipShell.Items(), $copyFlags)

		# Get the number of files and directories in the Zip file.
		$numberOfItemsInZipFile = GetNumberOfItemsInZipFileItems -shellItems $zipShell.Items()
		
		# The Copy (i.e. unzip) operation is asynchronous, so wait until it is complete before continuing. That is, sleep until the Destination Directory has the same number of files as the Zip file.
		while ((Get-ChildItem -Path $DestinationDirectoryPath -Recurse -Force).Count -lt $numberOfItemsInZipFile)
		{ Start-Sleep -Milliseconds 100 }
	}
}

function Compress-ZipFile
{
	[CmdletBinding()]
	param
	(
		[parameter(Position=1,Mandatory=$true)]
		[ValidateScript({Test-Path -Path $_})]
		[string]$FileOrDirectoryPathToAddToZipFile, 
	
		[parameter(Position=2,Mandatory=$false)]
		[string]$ZipFilePath,
		
		[Alias("Force")]
		[switch]$OverwriteWithoutPrompting
	)
	
	BEGIN { }
	END { }
	PROCESS
	{
		# If a Zip File Path was not given, create one in the same directory as the file/directory being added to the zip file, with the same name as the file/directory.
		if ($ZipFilePath -eq $null -or $ZipFilePath.Trim() -eq [string]::Empty)
		{ $ZipFilePath = $FileOrDirectoryPathToAddToZipFile + '.zip' }
		
		# If the Zip file to create does not have an extension of .zip (which is required by the shell.application), add it.
		if (!$ZipFilePath.EndsWith('.zip', [StringComparison]::OrdinalIgnoreCase))
		{ $ZipFilePath += '.zip' }
		
		# If the Zip file to add the file to does not exist yet, create it.
		if (!(Test-Path -Path $ZipFilePath -PathType Leaf))
		{ New-Item -Path $ZipFilePath -ItemType File > $null }

		# Get the Name of the file or directory to add to the Zip file.
		$fileOrDirectoryNameToAddToZipFile = Split-Path -Path $FileOrDirectoryPathToAddToZipFile -Leaf

		# Get the number of files and directories to add to the Zip file.
		$numberOfFilesAndDirectoriesToAddToZipFile = (Get-ChildItem -Path $FileOrDirectoryPathToAddToZipFile -Recurse -Force).Count
		
		# Get if we are adding a file or directory to the Zip file.
		$itemToAddToZipIsAFile = Test-Path -Path $FileOrDirectoryPathToAddToZipFile -PathType Leaf

		# Get Shell object and the Zip File.
		$shell = New-Object -ComObject Shell.Application
		$zipShell = $shell.NameSpace($ZipFilePath)

		# We will want to check if we can do a simple copy operation into the Zip file or not. Assume that we can't to start with.
		# We can if the file/directory does not exist in the Zip file already, or it is a file and the user wants to be prompted on conflicts.
		$canPerformSimpleCopyIntoZipFile = $false

		# If the file/directory does not already exist in the Zip file, or it does exist, but it is a file and the user wants to be prompted on conflicts, then we can perform a simple copy into the Zip file.
		$fileOrDirectoryInZipFileShell = $zipShell.ParseName($fileOrDirectoryNameToAddToZipFile)
		$itemToAddToZipIsAFileAndUserWantsToBePromptedOnConflicts = ($itemToAddToZipIsAFile -and !$OverwriteWithoutPrompting)
		if ($fileOrDirectoryInZipFileShell -eq $null -or $itemToAddToZipIsAFileAndUserWantsToBePromptedOnConflicts)
		{
			$canPerformSimpleCopyIntoZipFile = $true
		}
		
		# If we can perform a simple copy operation to get the file/directory into the Zip file.
		if ($canPerformSimpleCopyIntoZipFile)
		{
			# Start copying the file/directory into the Zip file since there won't be any conflicts. This is an asynchronous operation.
			$zipShell.CopyHere($FileOrDirectoryPathToAddToZipFile)	# Copy Flags are ignored when copying files into a zip file, so can't use them like we did with the Expand-ZipFile function.
			
			# The Copy operation is asynchronous, so wait until it is complete before continuing.
			# Wait until we can see that the file/directory has been created.
			while ($zipShell.ParseName($fileOrDirectoryNameToAddToZipFile) -eq $null)
			{ Start-Sleep -Milliseconds 100 }
			
			# If we are copying a directory into the Zip file, we want to wait until all of the files/directories have been copied.
			if (!$itemToAddToZipIsAFile)
			{
				# Get the number of files and directories that should be copied into the Zip file.
				$numberOfItemsToCopyIntoZipFile = (Get-ChildItem -Path $FileOrDirectoryPathToAddToZipFile -Recurse -Force).Count
			
				# Get a handle to the new directory we created in the Zip file.
				$newDirectoryInZipFileShell = $zipShell.ParseName($fileOrDirectoryNameToAddToZipFile)
				
				# Wait until the new directory in the Zip file has the expected number of files and directories in it.
				while ((GetNumberOfItemsInZipFileItems -shellItems $newDirectoryInZipFileShell.GetFolder.Items()) -lt $numberOfItemsToCopyIntoZipFile)
				{ Start-Sleep -Milliseconds 100 }
			}
		}
		# Else we cannot do a simple copy operation. We instead need to move the files out of the Zip file so that we can merge the directory, or overwrite the file without the user being prompted.
		# We cannot move a directory into the Zip file if a directory with the same name already exists, as a MessageBox warning is thrown, not a conflict resolution prompt like with files.
		# We cannot silently overwrite an existing file in the Zip file, as the flags passed to the CopyHere/MoveHere functions seem to be ignored when copying into a Zip file.
		else
		{
			# Create a temp directory to hold our file/directory.
			$tempDirectoryPath = $null
			$tempDirectoryPath = Join-Path -Path ([System.IO.Path]::GetTempPath()) -ChildPath ([System.IO.Path]::GetRandomFileName())
			New-Item -Path $tempDirectoryPath -ItemType Container > $null
		
			# If we will be moving a directory into the temp directory.
			$numberOfItemsInZipFilesDirectory = 0
			if ($fileOrDirectoryInZipFileShell.IsFolder)
			{
				# Get the number of files and directories in the Zip file's directory.
				$numberOfItemsInZipFilesDirectory = GetNumberOfItemsInZipFileItems -shellItems $fileOrDirectoryInZipFileShell.GetFolder.Items()
			}
		
			# Start moving the file/directory out of the Zip file and into a temp directory. This is an asynchronous operation.
			$tempDirectoryShell = $shell.NameSpace($tempDirectoryPath)
			$tempDirectoryShell.MoveHere($fileOrDirectoryInZipFileShell)
			
			# If we are moving a directory, we need to wait until all of the files and directories in that Zip file's directory have been moved.
			$fileOrDirectoryPathInTempDirectory = Join-Path -Path $tempDirectoryPath -ChildPath $fileOrDirectoryNameToAddToZipFile
			if ($fileOrDirectoryInZipFileShell.IsFolder)
			{
				# The Move operation is asynchronous, so wait until it is complete before continuing. That is, sleep until the Destination Directory has the same number of files as the directory in the Zip file.
				while ((Get-ChildItem -Path $fileOrDirectoryPathInTempDirectory -Recurse -Force).Count -lt $numberOfItemsInZipFilesDirectory)
				{ Start-Sleep -Milliseconds 100 }
			}
			# Else we are just moving a file, so we just need to check for when that one file has been moved.
			else
			{
				# The Move operation is asynchronous, so wait until it is complete before continuing.
				while (!(Test-Path -Path $fileOrDirectoryPathInTempDirectory))
				{ Start-Sleep -Milliseconds 100 }
			}
			
			# We want to copy the file/directory to add to the Zip file to the same location in the temp directory, so that files/directories are merged.
			# If we should automatically overwrite files, do it.
			if ($OverwriteWithoutPrompting)
			{ Copy-Item -Path $FileOrDirectoryPathToAddToZipFile -Destination $tempDirectoryPath -Recurse -Force }
			# Else the user should be prompted on each conflict.
			else
			{ Copy-Item -Path $FileOrDirectoryPathToAddToZipFile -Destination $tempDirectoryPath -Recurse -Confirm -ErrorAction SilentlyContinue }	# SilentlyContinue errors to avoid an error for every directory copied.

			# For whatever reason the zip.MoveHere() function is not able to move empty directories into the Zip file, so we have to put dummy files into these directories 
			# and then remove the dummy files from the Zip file after.
			# If we are copying a directory into the Zip file.
			$dummyFileNamePrefix = 'Dummy.File'
			[int]$numberOfDummyFilesCreated = 0
			if ($fileOrDirectoryInZipFileShell.IsFolder)
			{
				# Place a dummy file in each of the empty directories so that it gets copied into the Zip file without an error.
				$emptyDirectories = Get-ChildItem -Path $fileOrDirectoryPathInTempDirectory -Recurse -Force | Where-Object { $_.PSIsContainer -and ((Get-ChildItem -Path $_.FullName -Force) -eq $null) }	# The -Directory switch requires PowerShell 3.0, so filter on PSIsContainer for 2.0 compatibility.
				foreach ($emptyDirectory in $emptyDirectories)
				{
					$numberOfDummyFilesCreated++
					New-Item -Path (Join-Path -Path $emptyDirectory.FullName -ChildPath "$dummyFileNamePrefix$numberOfDummyFilesCreated") -ItemType File -Force > $null
				}
			}		

			# If we need to copy a directory back into the Zip file.
			if ($fileOrDirectoryInZipFileShell.IsFolder)
			{
				MoveDirectoryIntoZipFile -parentInZipFileShell $zipShell -pathOfItemToCopy $fileOrDirectoryPathInTempDirectory
			}
			# Else we need to copy a file back into the Zip file.
			else
			{
				# Start moving the merged file back into the Zip file. This is an asynchronous operation.
				$zipShell.MoveHere($fileOrDirectoryPathInTempDirectory)
			}
			
			# The Move operation is asynchronous, so wait until it is complete before continuing.
			# Sleep until all of the files have been moved into the zip file. The MoveHere() function leaves empty directories behind, so we only need to watch for files.
			do
			{
				Start-Sleep -Milliseconds 100
				$files = Get-ChildItem -Path $fileOrDirectoryPathInTempDirectory -Force -Recurse | Where-Object { !$_.PSIsContainer }
			} while ($files -ne $null)
			
			# If there are dummy files that need to be moved out of the Zip file.
			if ($numberOfDummyFilesCreated -gt 0)
			{
				# Move all of the dummy files out of the supposed-to-be empty directories in the Zip file.
				MoveFilesOutOfZipFileItems -shellItems $zipShell.items() -directoryToMoveFilesToShell $tempDirectoryShell -fileNamePrefix $dummyFileNamePrefix
				
				# The Move operation is asynchronous, so wait until it is complete before continuing.
				# Sleep until all of the dummy files have been moved out of the zip file.
				do
				{
					Start-Sleep -Milliseconds 100
					[Object[]]$files = Get-ChildItem -Path $tempDirectoryPath -Force -Recurse | Where-Object { !$_.PSIsContainer -and $_.Name.StartsWith($dummyFileNamePrefix) }
				} while ($files -eq $null -or $files.Count -lt $numberOfDummyFilesCreated)
			}
			
			# Delete the temp directory that we created.
			Remove-Item -Path $tempDirectoryPath -Force -Recurse > $null
		}
	}
}

# Specify which functions should be publicly accessible.
Export-ModuleMember -Function Expand-ZipFile
Export-ModuleMember -Function Compress-ZipFile

 

Of course, if you don't want to reference an external module you could always just copy and paste the functions from the module directly into your script and call them that way.

Happy coding!

Disclaimer: At the time of this writing I have only tested the module on Windows 8.1, so if you discover problems running it on another version of Windows please let me know.

Find Largest (Or Smallest) Files In A Directory Or Drive With PowerShell

September 8th, 2014

One of our SQL servers was running low on disk space and I needed to quickly find the largest files on the drive to know what was eating up all of the disk space, so I wrote this PowerShell line that I thought I would share:

# Get all files sorted by size.
Get-ChildItem -Path 'C:\SomeFolder' -Recurse -Force -File | Select-Object -Property FullName,@{Name='SizeGB';Expression={$_.Length / 1GB}},@{Name='SizeMB';Expression={$_.Length / 1MB}},@{Name='SizeKB';Expression={$_.Length / 1KB}} | Sort-Object { $_.SizeKB } -Descending | Out-GridView

If you are still only running PowerShell 2.0, it will complain that it doesn’t know what the -File switch is, so here’s the PowerShell 2.0 compatible version (which is a bit slower):

# Get all files sorted by size.
Get-ChildItem -Path 'C:\SomeFolder' -Recurse -Force | Where-Object { !$_.PSIsContainer } | Select-Object -Property FullName,@{Name='SizeGB';Expression={$_.Length / 1GB}},@{Name='SizeMB';Expression={$_.Length / 1MB}},@{Name='SizeKB';Expression={$_.Length / 1KB}} | Sort-Object { $_.SizeKB } -Descending | Out-GridView

Just change ‘C:\SomeFolder’ to the folder/drive that you want scanned, and it will show you all of the files in the directory and subdirectories in a GridView sorted by size, along with their size in GB, MB, and KB. The nice thing about using a GridView is that it has built-in filtering, so you can quickly do things like filter for certain file types, child directories, etc.

Here is a screenshot of the resulting GridView:

FilesSortedBySize

 

And again with filtering applied (i.e. the .bak at the top to only show backup files):

FilesSortedBySizeAndFiltered

All done with PowerShell; no external tools required.
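
As an aside, if you'd rather keep the results around than browse them in a GridView, a small variation (the output file path here is just an example) writes the same data to a CSV file instead:

# Get all files sorted by size and export the results to a CSV file.
Get-ChildItem -Path 'C:\SomeFolder' -Recurse -Force -File | Select-Object -Property FullName,@{Name='SizeMB';Expression={$_.Length / 1MB}} | Sort-Object { $_.SizeMB } -Descending | Export-Csv -Path 'C:\Temp\FilesSortedBySize.csv' -NoTypeInformation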

Happy Sys-Adminning!

Keep PowerShell Console Window Open After Script Finishes Running

July 7th, 2014

I originally included this as a small bonus section at the end of my other post about fixing the issue of not being able to run a PowerShell script whose path contains a space, but thought this deserved its own dedicated post.

When running a script by double-clicking it, or by right-clicking it and choosing Run With PowerShell or Open With Windows PowerShell, if the script completes very quickly the user will see the PowerShell console appear very briefly and then disappear.  If the script gives output that the user wants to see, or if it throws an error, the user won’t have time to read the text.  We have 3 solutions to fix this so that the PowerShell console stays open after the script has finished running:

1. One-time solution

Open a PowerShell console and manually run the script from the command line. I show how to do this a bit in this post, as the PowerShell syntax to run a script from the command line is not straightforward if you've never done it before.
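
For reference, running a script from a console that is already open looks something like this (the path is just an example):

# Run the script using the call operator; the quotes allow the path to contain spaces.
& "C:\SomeFolder\MyPowerShellScript.ps1"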

The other way is to launch the PowerShell process from the Run box (Windows Key + R) or command prompt using the -NoExit switch and passing in the path to the PowerShell file.
For example: PowerShell -NoExit "C:\SomeFolder\MyPowerShellScript.ps1"

2. Per-script solution

Add a line like this to the end of your script:

Read-Host -Prompt "Press Enter to exit"

I typically use the following bit of code instead so that it only prompts for input when running from the PowerShell console, and not from the PowerShell ISE or other PowerShell script editors (as they typically have a persistent console window integrated into the IDE).  Use whatever you prefer.

# If running in the console, wait for input before closing.
if ($Host.Name -eq "ConsoleHost")
{
	Write-Host "Press any key to continue..."
	$Host.UI.RawUI.ReadKey("NoEcho,IncludeKeyUp") > $null
}

I typically use this approach for scripts that other people might end up running; if it’s a script that only I will ever be running, I rely on the global solution below.

3. Global solution

Adjust the registry keys used to run a PowerShell script to include the -NoExit switch to prevent the console window from closing.  Here are the two registry keys we will target, along with their default value and the value we want them to have:

Registry Key: HKEY_CLASSES_ROOT\Applications\powershell.exe\shell\open\command
Description: Key used when you right-click a .ps1 file and choose Open With -> Windows PowerShell.
Default Value: "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" "%1"
Desired Value: "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" "& \"%1\""

Registry Key: HKEY_CLASSES_ROOT\Microsoft.PowerShellScript.1\Shell\0\Command
Description: Key used when you right-click a .ps1 file and choose Run with PowerShell (shows up depending on which Windows OS and Updates you have installed).
Default Value: "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" "-Command" "if((Get-ExecutionPolicy ) -ne 'AllSigned') { Set-ExecutionPolicy -Scope Process Bypass }; & '%1'"
Desired Value: "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" -NoExit "-Command" "if((Get-ExecutionPolicy ) -ne 'AllSigned') { Set-ExecutionPolicy -Scope Process Bypass }; & \"%1\""

The Desired Values add the -NoExit switch, as well as wrap the %1 in double quotes to allow the script to still run even if its path contains spaces.

If you want to open the registry and manually make the change you can, or here is the registry script that we can run to make the change automatically for us:

Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\Applications\powershell.exe\shell\open\command]
@="\"C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe\" -NoExit \"& \\\"%1\\\"\""

[HKEY_CLASSES_ROOT\Microsoft.PowerShellScript.1\Shell\0\Command]
@="\"C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe\" -NoExit \"-Command\" \"if((Get-ExecutionPolicy ) -ne 'AllSigned') { Set-ExecutionPolicy -Scope Process Bypass }; & \\\"%1\\\"\""

You can copy and paste the text into a file with a .reg extension, then simply double-click the .reg file and click OK on the prompt to have the registry keys updated.  Now by default when you run a PowerShell script from File Explorer (i.e. Windows Explorer), the console window will stay open even after the script is finished executing.  From there you can just type exit and hit enter to close the window, or use the mouse to click the window's X in the top right corner.
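
If you would rather make the same change from PowerShell than merge a .reg file, a sketch along these lines sets both default values (run it from an elevated PowerShell console; it assumes the default powershell.exe location shown above):

# Set the default value of both keys so that scripts are run with -NoExit and the %1 script path is quoted.
$openWithValue = '"C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" -NoExit "& \"%1\""'
$runWithValue = '"C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" -NoExit "-Command" "if((Get-ExecutionPolicy ) -ne ''AllSigned'') { Set-ExecutionPolicy -Scope Process Bypass }; & \"%1\""'
Set-ItemProperty -Path 'Registry::HKEY_CLASSES_ROOT\Applications\powershell.exe\shell\open\command' -Name '(default)' -Value $openWithValue
Set-ItemProperty -Path 'Registry::HKEY_CLASSES_ROOT\Microsoft.PowerShellScript.1\Shell\0\Command' -Name '(default)' -Value $runWithValue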

If I have missed other common registry keys or any other information, please leave a comment to let me know.  I hope you find this useful.

Happy coding!

Template Solution For Deploying TFS Checkin Policies To Multiple Versions Of Visual Studio And Having Them Automatically Work From “TF.exe Checkin” Too

March 24th, 2014

Get the source code

Let’s get right to it by giving you the source code.  You can get it from the MSDN samples here.

 

Explanation of source code and adding new checkin policies

If you open the Visual Studio (VS) solution the first thing you will likely notice is that there are 5 projects.  CheckinPolicies.VS2012 simply references all of the files in CheckinPolicies.VS2013 as links (i.e. shortcut files); this is because we need to compile the CheckinPolicies.VS2012 project using TFS 2012 assemblies, and the CheckinPolicies.VS2013 project using TFS2013 assemblies, but want both projects to have all of the same checkin policies.  So the projects contain all of the same files; just a few of their references are different.  A copy of the references that are different between the two projects are stored in the project’s “Dependencies” folder; these are the Team Foundation assemblies that are specific to VS 2012 and 2013.  Having these assemblies stored in the solution allows us to still build the VS 2012 checkin policies, even if you (or a colleague) only has VS 2013 installed.

Update: To avoid having multiple CheckinPolicy.VS* projects, we could use the msbuild targets technique that P. Kelly shows on his blog. However, I believe we would still need multiple deployment projects, as described below, in order to have the checkin policies work outside of Visual Studio.

The other projects are CheckinPolicyDeployment.VS2012 and CheckinPolicyDeployment.VS2013 (both of which are VSPackage projects), and CheckinPolicyDeploymentShared.  The CheckinPolicyDeployment.VS2012/VS2013 projects will generate the VSIX files that are used to distribute the checkin policies, and CheckinPolicyDeploymentShared contains files/code that are common to both of the projects (the projects reference the files by linking to them).

Basically everything is ready to go.  Just start adding new checkin policy classes to the CheckinPolicy.VS2013 project, and then also add them to the CheckinPolicy.VS2012 project as a link.  You can add a file as a link in 2 different ways in the Solution Explorer:

  1. Right-click on the CheckinPolicies.VS2012 project and choose Add -> Existing Item…, and then navigate to the new class file that you added to the CheckinPolicy.VS2013 project.  Instead of clicking the Add button though, click the little down arrow on the side of the Add button and then choose Add As Link.
  2. Drag and drop the file from the CheckinPolicy.VS2013 project to the CheckinPolicy.VS2012 project, but while releasing the left mouse button to drop the file, hold down the Alt key; this will change the operation from adding a copy of the file to that project, to adding a shortcut file that links back to the original file.
    There is a DummyCheckinPolicy.cs file in the CheckinPolicies.VS2013 project that shows you an example of how to create a new checkin policy.  Basically you just need to create a new public, serializable class that extends the CheckinPolicyBase class.  The actual logic for your checkin policy to perform goes in the Evaluate() function. If there is a policy violation in the code that is trying to be checked in, just add a new PolicyFailure instance to the failures list with the message that you want the user to see.

      Building a new version of your checkin policies

      Once you are ready to deploy your policies, you will want to update the version number in the source.extension.vsixmanifest file in both the CheckinPolicyDeployment.VS2012 and CheckinPolicyDeployment.VS2013 projects.  Since these projects will both contain the same policies, I recommend giving them the same version number as well.  Once you have updated the version number, build the solution in Release mode.  From there you will find the new VSIX files at "CheckinPolicyDeployment.VS2012\bin\Release\TFS Checkin Policies VS2012.vsix" and "CheckinPolicyDeployment.VS2013\bin\Release\TFS Checkin Policies VS2013.vsix".  You can then distribute them to your team; I recommend setting up an internal VS Extension Gallery, but the poor-man’s solution is to just email the vsix file out to everyone on your team.

      Having the policies automatically work outside of Visual Studio

      This is already hooked up and working in the template solution, so nothing needs to be changed there, but I will explain how it works here.  A while back I blogged about how to get your Team Foundation Server (TFS) checkin policies to still work when checking code in from the command line via the "tf checkin" command; by default when installing your checkin policies via a VSIX package (the MS recommended approach) you can only get them to work in Visual Studio.  I hated that I would need to manually run the script I provided each time the checkin policies were updated, so I posted a question on Stack Overflow about how to run a script automatically after the VSIX package installs the extension.  It turns out that you can't do that, but what you can do is use a VSPackage instead, which still uses VSIX to deploy the extension, but also allows us to hook into Visual Studio events to run our script when VS starts up or exits.

      Here is the VSPackage class code to hook up the events and call our UpdateCheckinPoliciesInRegistry() function:

      /// <summary>
      /// This is the class that implements the package exposed by this assembly.
      ///
      /// The minimum requirement for a class to be considered a valid package for Visual Studio
      /// is to implement the IVsPackage interface and register itself with the shell.
      /// This package uses the helper classes defined inside the Managed Package Framework (MPF)
      /// to do it: it derives from the Package class that provides the implementation of the 
      /// IVsPackage interface and uses the registration attributes defined in the framework to 
      /// register itself and its components with the shell.
      /// </summary>
      // This attribute tells the PkgDef creation utility (CreatePkgDef.exe) that this class is
      // a package.
      [PackageRegistration(UseManagedResourcesOnly = true)]
      // This attribute is used to register the information needed to show this package
      // in the Help/About dialog of Visual Studio.
      [InstalledProductRegistration("#110", "#112", "1.0", IconResourceID = 400)]
      // Auto Load our assembly even when no solution is open (by using the Microsoft.VisualStudio.VSConstants.UICONTEXT_NoSolution guid).
      [ProvideAutoLoad("ADFC4E64-0397-11D1-9F4E-00A0C911004F")]
      public abstract class CheckinPolicyDeploymentPackage : Package
      {
      	private EnvDTE.DTEEvents _dteEvents;
      
      	/// <summary>
      	/// Initialization of the package; this method is called right after the package is sited, so this is the place
      	/// where you can put all the initialization code that rely on services provided by VisualStudio.
      	/// </summary>
      	protected override void Initialize()
      	{
      		base.Initialize();
      
      		var dte = (DTE2)GetService(typeof(SDTE));
      		_dteEvents = dte.Events.DTEEvents;
      		_dteEvents.OnBeginShutdown += OnBeginShutdown;
      
      		UpdateCheckinPoliciesInRegistry();
      	}
      
      	private void OnBeginShutdown()
      	{
      		_dteEvents.OnBeginShutdown -= OnBeginShutdown;
      		_dteEvents = null;
      
      		UpdateCheckinPoliciesInRegistry();
      	}
      
      	private void UpdateCheckinPoliciesInRegistry()
      	{
      		var dte = (DTE2)GetService(typeof(SDTE));
      		string visualStudioVersionNumber = dte.Version;
      		string customCheckinPolicyEntryName = "CheckinPolicies";
      
      		// Create the paths to the registry keys that contains the values to inspect.
      		string desiredRegistryKeyPath = string.Format("HKEY_CURRENT_USER\\Software\\Microsoft\\VisualStudio\\{0}_Config\\TeamFoundation\\SourceControl\\Checkin Policies", visualStudioVersionNumber);
      		string currentRegistryKeyPath = string.Empty;
      		if (Environment.Is64BitOperatingSystem)
      			currentRegistryKeyPath = string.Format("HKEY_LOCAL_MACHINE\\SOFTWARE\\Wow6432Node\\Microsoft\\VisualStudio\\{0}\\TeamFoundation\\SourceControl\\Checkin Policies", visualStudioVersionNumber);
      		else
      			currentRegistryKeyPath = string.Format("HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\VisualStudio\\{0}\\TeamFoundation\\SourceControl\\Checkin Policies", visualStudioVersionNumber);
      
      		// Get the value that the registry should have, and the value that it currently has.
      		var desiredRegistryValue = Registry.GetValue(desiredRegistryKeyPath, customCheckinPolicyEntryName, null);
      		var currentRegistryValue = Registry.GetValue(currentRegistryKeyPath, customCheckinPolicyEntryName, null);
      
      		// If the registry value is already up to date, just exit without updating the registry.
      		if (desiredRegistryValue == null || desiredRegistryValue.Equals(currentRegistryValue))
      			return;
      
      		// Get the path to the PowerShell script to run.
      		string powerShellScriptFilePath = Path.Combine(Path.GetDirectoryName(System.Reflection.Assembly.GetAssembly(typeof(CheckinPolicyDeploymentPackage)).Location),
      			"FilesFromShared", "UpdateCheckinPolicyInRegistry.ps1");
      
      		// Start a new process to execute the batch file script, which calls the PowerShell script to do the actual work.
      		var process = new Process
      		{
      			StartInfo =
      			{
      				FileName = "PowerShell",
      				Arguments = string.Format("-NoProfile -ExecutionPolicy Bypass -File \"{0}\" -VisualStudioVersion \"{1}\" -CustomCheckinPolicyEntryName \"{2}\"", powerShellScriptFilePath, visualStudioVersionNumber, customCheckinPolicyEntryName),
      
      				// Hide the PowerShell window while we run the script.
      				CreateNoWindow = true,
      				UseShellExecute = false
      			}
      		};
      		process.Start();
      	}
      }
      

      All of the attributes on the class are put there by default, except for the “[ProvideAutoLoad("ADFC4E64-0397-11D1-9F4E-00A0C911004F")]” one; this attribute is the one that actually allows the Initialize() function to get called when Visual Studio starts.  You can see in the Initialize method that we hook up an event so that the UpdateCheckinPoliciesInRegistry() function gets called when VS is closed, and we also call that function from Initialize(), which is called when VS starts up.

      You might have noticed that this class is abstract.  This is because the VS 2012 and VS 2013 classes need to have a unique ID attribute, so the actual VSPackage class just inherits from this one.  Here is what the VS 2013 one looks like:

      [Guid(GuidList.guidCheckinPolicyDeployment_VS2013PkgString)]
      public sealed class CheckinPolicyDeployment_VS2013Package : CheckinPolicyDeploymentShared.CheckinPolicyDeploymentPackage
      { }
      

      The UpdateCheckinPoliciesInRegistry() function checks to see if the appropriate registry key has been updated to allow the checkin policies to run from the “tf checkin” command prompt command.  If they have, then it simply exits, otherwise it calls a PowerShell script to set the keys for us.  A PowerShell script is used because modifying the registry requires admin permissions, and we can easily run a new PowerShell process as admin (assuming the logged in user is an admin on their local machine, which is the case for everyone in our company).

      The one variable to note here is the customCheckinPolicyEntryName. This corresponds to the registry key name that I’ve specified in the RegistryKeyToAdd.pkgdef file, so if you change it be sure to change it in both places.  This is what the RegistryKeyToAdd.pkgdef file contains:

      // We use "\..\" in the value because the projects that include this file place it in a "FilesFromShared" folder, and we want it to look for the dll in the root directory.
      [$RootKey$\TeamFoundation\SourceControl\Checkin Policies]
      "CheckinPolicies"="$PackageFolder$\..\CheckinPolicies.dll"
      

      And here are the contents of the UpdateCheckinPolicyInRegistry.ps1 PowerShell file.  This is basically just a refactored version of the script I posted on my old blog post:

      # This script copies the required registry value so that the checkin policies will work when doing a TFS checkin from the command line.
      param
      (
      	[parameter(Mandatory=$true,HelpMessage="The version of Visual Studio to update in the registry (i.e. '11.0' for VS 2012, '12.0' for VS 2013)")]
      	[string]$VisualStudioVersion,
      
      	[parameter(HelpMessage="The name of the Custom Checkin Policy Entry in the Registry Key.")]
      	[string]$CustomCheckinPolicyEntryName = 'CheckinPolicies'
      )
      
      # Turn on Strict Mode to help catch syntax-related errors.
      # 	This must come after a script's/function's param section.
      # 	Forces a function to be the first non-comment code to appear in a PowerShell Module.
      Set-StrictMode -Version Latest
      
      $ScriptBlock = {
      	function UpdateCheckinPolicyInRegistry([parameter(Mandatory=$true)][string]$VisualStudioVersion, [string]$CustomCheckinPolicyEntryName)
      	{
      		$status = 'Updating registry to allow checkin policies to work outside of Visual Studio version ' + $VisualStudioVersion + '.'
      		Write-Output $status
      
      		# Get the Registry Key Entry that holds the path to the Custom Checkin Policy Assembly.
      		$HKCUKey = 'HKCU:\Software\Microsoft\VisualStudio\' + $VisualStudioVersion + '_Config\TeamFoundation\SourceControl\Checkin Policies'
      		$CustomCheckinPolicyRegistryEntry = Get-ItemProperty -Path $HKCUKey -Name $CustomCheckinPolicyEntryName
      		$CustomCheckinPolicyEntryValue = $CustomCheckinPolicyRegistryEntry.($CustomCheckinPolicyEntryName)
      
      		# Create a new Registry Key Entry for the iQ Checkin Policy Assembly so they will work from the command line (as well as from Visual Studio).
      		if ([Environment]::Is64BitOperatingSystem)
      		{ $HKLMKey = 'HKLM:\SOFTWARE\Wow6432Node\Microsoft\VisualStudio\' + $VisualStudioVersion + '\TeamFoundation\SourceControl\Checkin Policies' }
      		else
      		{ $HKLMKey = 'HKLM:\SOFTWARE\Microsoft\VisualStudio\' + $VisualStudioVersion + '\TeamFoundation\SourceControl\Checkin Policies' }
      		Set-ItemProperty -Path $HKLMKey -Name $CustomCheckinPolicyEntryName -Value $CustomCheckinPolicyEntryValue
      	}
      }
      
      # Run the script block as admin so it has permissions to modify the registry.
      Start-Process -FilePath PowerShell -Verb RunAs -ArgumentList "-NoProfile -ExecutionPolicy Bypass -Command & {$ScriptBlock UpdateCheckinPolicyInRegistry -VisualStudioVersion ""$VisualStudioVersion"" -CustomCheckinPolicyEntryName ""$CustomCheckinPolicyEntryName""}"
      

      While I could have just used a much smaller PowerShell script that simply set a given registry key to a given value, I chose to have some code duplication between the C# code and this script so that this script can still be used as a stand-alone script if needed.

      The slight downside to using a VSPackage is that this script still won’t get called until the user closes or opens a new instance of Visual Studio, so the checkin policies won’t work immediately from the “tf checkin” command after updating the checkin policies extension, but this still beats having to remember to manually run the script.

       

      Conclusion

      So I've given you a template solution that you can use without any modification to start creating your VS 2012 and VS 2013 compatible checkin policies; just add new class files to the CheckinPolicies.VS2013 project, and then add them to the CheckinPolicies.VS2012 project as links as well.  Using links allows you to modify each checkin policy file only once and have the changes go into both the 2012 and 2013 VSIX packages.  Hopefully this template solution helps you get your TFS checkin policies up and running faster.

      Happy Coding!

      Provide A Batch File To Run Your PowerShell Script From; Your Users Will Love You For It

      November 16th, 2013

      A while ago in one of my older posts I included a little gem that I think deserves its own dedicated post: calling PowerShell scripts from a batch file.

      Why call my PowerShell script from a batch file?

      When I am writing a script for other people to use (in my organization, or for the general public) or even for myself sometimes, I will often include a simple batch file (i.e. *.bat or *.cmd file) that just simply calls my PowerShell script and then exits.  I do this because even though PowerShell is awesome, not everybody knows what it is or how to use it; non-technical folks obviously, but even many of the technical folks in our organization have never used PowerShell.

      Let's list the problems with sending somebody the PowerShell script alone; the first two points below are hurdles that every user stumbles over the first time they encounter PowerShell (they are there for security purposes):

      1. When you double-click a PowerShell script (*.ps1 file) the default action is often to open it up in an editor, not to run it (you can change this for your PC).
      2. When you do figure out you need to right-click the .ps1 file and choose Open With –> Windows PowerShell to run the script, it will fail with a warning saying that the execution policy is currently configured to not allow scripts to be ran.
      3. My script may require admin privileges in order to run correctly, and it can be tricky to run a PowerShell script as admin without going into a PowerShell console and running the script from there, which a lot of people won’t know how to do.
      4. A potential problem that could affect PowerShell Pros is that it’s possible for them to have variables or other settings set in their PowerShell profile that could cause my script to not perform correctly; this is pretty unlikely, but still a possibility.
          So imagine you’ve written a PowerShell script that you want your grandma to run (or an HR employee, or an executive, or your teenage daughter, etc.). Do you think they’re going to be able to do it?  Maybe, maybe not.

      You should be kind to your users and provide a batch file to call your PowerShell script.

      The beauty of batch file scripts is that by default the script is run when it is double-clicked (solves problem #1), and all of the other problems can be overcome by using a few arguments in our batch file.

      Ok, I see your point. So how do I call my PowerShell script from a batch file?

      First, the code I provide assumes that the batch file and PowerShell script are in the same directory.  So if you have a PowerShell script called “MyPowerShellScript.ps1” and a batch file called “RunMyPowerShellScript.cmd”, this is what the batch file would contain:

      @ECHO OFF
      SET ThisScriptsDirectory=%~dp0
      SET PowerShellScriptPath=%ThisScriptsDirectory%MyPowerShellScript.ps1
      PowerShell -NoProfile -ExecutionPolicy Bypass -Command "& '%PowerShellScriptPath%'";
      

      Line 1 just prevents the contents of the batch file from being printed to the command prompt (so it’s optional).  Line 2 gets the directory that the batch file is in.  Line 3 just appends the PowerShell script filename to the script directory to get the full path to the PowerShell script file, so this is the only line you would need to modify; replace MyPowerShellScript.ps1 with your PowerShell script’s filename.  The 4th line is the one that actually calls the PowerShell script and contains the magic.

      The -NoProfile switch solves problem #4 above, and the -ExecutionPolicy Bypass argument solves problem #2.  But that still leaves problem #3 above, right?

      Call your PowerShell script from a batch file with Administrative permissions (i.e. Run As Admin)

      If your PowerShell script needs to be run as an admin for whatever reason, the 4th line of the batch file will need to change a bit:

      @ECHO OFF
      SET ThisScriptsDirectory=%~dp0
      SET PowerShellScriptPath=%ThisScriptsDirectory%MyPowerShellScript.ps1
      PowerShell -NoProfile -ExecutionPolicy Bypass -Command "& {Start-Process PowerShell -ArgumentList '-NoProfile -ExecutionPolicy Bypass -File ""%PowerShellScriptPath%""' -Verb RunAs}";
      

      We can't call the PowerShell script as admin from the command prompt, but we can from PowerShell; so we essentially start a new PowerShell session, and then have that session call the PowerShell script using the -Verb RunAs argument to specify that the script should be run as an administrator.

      And voila, that’s it.  Now all anybody has to do to run your PowerShell script is double-click the batch file; something that even your grandma can do (well, hopefully).  So will your users really love you for this; well, no.  Instead they just won’t be cursing you for sending them a script that they can’t figure out how to run.  It’s one of those things that nobody notices until it doesn’t work.

      So take the extra 10 seconds to create a batch file and copy/paste the above text into it; it’ll save you time in the long run when you don’t have to repeat to all your users the specific instructions they need to follow to run your PowerShell script.

      I typically use this trick for myself too when my script requires admin rights, as it just makes running the script faster and easier.

      Bonus

      One more tidbit that I often include at the end of my PowerShell scripts is the following code:

      # If running in the console, wait for input before closing.
      if ($Host.Name -eq "ConsoleHost")
      { 
      	Write-Host "Press any key to continue..."
      	$Host.UI.RawUI.FlushInputBuffer()	# Make sure buffered input doesn't "press a key" and skip the ReadKey().
      	$Host.UI.RawUI.ReadKey("NoEcho,IncludeKeyUp") > $null
      }
      

      This will prompt the user for keyboard input before closing the PowerShell console window.  This is useful because it allows users to read any errors that your PowerShell script may have thrown before the window closes, or even just so they can see the “Everything completed successfully” message that your script spits out so they know that it ran correctly.  Related side note: you can change your PC to always leave the PowerShell console window open after running a script, if that is your preference.

      I hope you find this useful.  Feel free to leave comments.

      Happy coding!

      Update

      Several people have left comments asking how to pass parameters into the PowerShell script from the batch file.

      Here is how to pass in ordered parameters:

      PowerShell -NoProfile -ExecutionPolicy Bypass -Command "& '%PowerShellScriptPath%' 'First Param Value' 'Second Param Value'";
      

      And here is how to pass in named parameters:

      PowerShell -NoProfile -ExecutionPolicy Bypass -Command "& '%PowerShellScriptPath%' -Param1Name 'Param 1 Value' -Param2Name 'Param 2 Value'"
      

      And if you are running the admin version of the script, here is how to pass in ordered parameters:

      PowerShell -NoProfile -ExecutionPolicy Bypass -Command "& {Start-Process PowerShell -ArgumentList '-NoProfile -ExecutionPolicy Bypass -File """"%PowerShellScriptPath%"""" """"First Param Value"""" """"Second Param Value"""" ' -Verb RunAs}"
      
      And here is how to pass in named parameters:

      PowerShell -NoProfile -ExecutionPolicy Bypass -Command "& {Start-Process PowerShell -ArgumentList '-NoProfile -ExecutionPolicy Bypass -File """"%PowerShellScriptPath%"""" -Param1Name """"Param 1 Value"""" -Param2Name """"Param 2 value"""" ' -Verb RunAs}";
      
      And yes, the PowerShell script name and parameters need to be wrapped in 4 double quotes in order to properly handle paths/values with spaces.

      Always Explicitly Set Your Parameter Set Variables For PowerShell v2.0 Compatibility

      October 28th, 2013

      What are parameter sets anyways?

      Parameter sets were introduced in PowerShell v2.0 and are useful for enforcing mutually exclusive parameters on a cmdlet.  Ed Wilson has a good little article explaining what parameter sets are and how to use them.  Essentially they allow us to write a single cmdlet that might otherwise have to be written as 2 or more cmdlets that took different parameters.  For example, instead of having to create Process-InfoFromUser, Process-InfoFromFile, and Process-InfoFromUrl cmdlets, we could create a single Process-Info cmdlet that has 3 mutually exclusive parameters, [switch]$PromptUser, [string]$FilePath, and [string]$Url.  If the cmdlet is called with more than one of these parameters, it throws an error.
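
      To make that example concrete, here is a minimal sketch of what such a Process-Info cmdlet's parameter declaration might look like (the function, parameter, and parameter set names are hypothetical, taken from the example above):

      function Process-Info
      {
      	[CmdletBinding(DefaultParameterSetName = 'FromUser')]
      	param
      	(
      		[Parameter(ParameterSetName = 'FromUser')]
      		[switch]$PromptUser,

      		[Parameter(ParameterSetName = 'FromFile', Mandatory = $true)]
      		[string]$FilePath,

      		[Parameter(ParameterSetName = 'FromUrl', Mandatory = $true)]
      		[string]$Url
      	)

      	# $PSCmdlet.ParameterSetName tells us which set of parameters the caller used.
      	Write-Host "Parameter set in use: $($PSCmdlet.ParameterSetName)"
      }

      Calling it with parameters from two different sets (e.g. both -FilePath and -Url) fails with a "parameter set cannot be resolved" error, which is exactly the mutual exclusion we are after.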

      You could just be lazy and not use parameter sets and allow all 3 parameters to be specified and then just use the first one, but the user won't know which one of the 3 they provided will be used; they might assume that all 3 will be used.  This would also force the user to have to read the documentation (assuming you have provided it).  Using parameter sets makes it clear to the user which parameters can be used together.  Also, most PowerShell editors process parameter sets so that intellisense properly shows the parameters that can be used with each other.

       

      Ok, parameter sets sound awesome, I want to use them! What’s the problem?

      The problem I ran into was in my Invoke-MsBuild module that I put on CodePlex: I had a [switch]$PassThru parameter that was part of a parameter set.  Within the module I had:

      if ($PassThru) { do something... }
      else { do something else... }
      

      This worked great for me during my testing since I was using PowerShell v3.0.  The problem arose once I released my code to the public; I received an issue from a user who was getting the following error message:

      Invoke-MsBuild : Unexpect error occured while building "<path>\my.csproj": The variable ‘$PassThru’ cannot be retrieved because it has not been set.

      At build.ps1:84 char:25

      + $result = Invoke-MsBuild <<<< -Path "<path>\my.csproj" -BuildLogDirectoryPath "$scriptPath" -Params "/property:Configuration=Release"

      After some investigation I determined the problem was that they were using PowerShell v2.0, and that my script uses Strict Mode.  I use Set-StrictMode -Version Latest in all of my scripts to help me catch any syntax related errors and to make sure my scripts will in fact do what I intend them to do.  While you could simply not use strict mode and you wouldn’t have a problem, I don’t recommend that; if others are going to call your cmdlet (or you call it from a different script), there’s a good chance they may have Strict Mode turned on and your cmdlet may break for them.
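
      To make the failure mode concrete, here is a small repro sketch (the function and parameter names are made up for illustration); as described above, under PowerShell v2.0 with strict mode on, referencing $Url when only -FilePath was supplied throws the "has not been set" error, while v3.0 simply treats it as $null:

      Set-StrictMode -Version Latest

      function Test-ParameterSets
      {
      	[CmdletBinding(DefaultParameterSetName = 'File')]
      	param
      	(
      		[Parameter(ParameterSetName = 'File')]
      		[string]$FilePath,

      		[Parameter(ParameterSetName = 'Url')]
      		[string]$Url
      	)

      	# On PowerShell v2.0 this line throws when the caller only used -FilePath, because $Url was never set.
      	if ($Url) { Write-Host "A Url was supplied." }
      }

      Test-ParameterSets -FilePath 'C:\Temp\SomeFile.txt'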

       

      So should I not use parameter sets with PowerShell v2.0? Is there a fix?

      You absolutely SHOULD use parameter sets whenever you can and it makes sense, and yes there is a fix.  If you require your script to run on PowerShell v2.0, there is just one extra step you need to take, which is to explicitly set the values for any parameters that use a parameter set and don’t exist.  Luckily we can use the Test-Path cmdlet to test if a variable has been defined in a specific scope or not.

      Here is an example of how to detect if a variable is not defined in the Private scope and set its default value.  We specify the scope in case a variable with the same name exists outside of the cmdlet in the global scope or an inherited scope.

      # Default the ParameterSet variables that may not have been set depending on which parameter set is being used. This is required for PowerShell v2.0 compatibility.
      if (!(Test-Path Variable:Private:SomeStringParameter)) { $SomeStringParameter = $null }
      if (!(Test-Path Variable:Private:SomeIntegerParameter)) { $SomeIntegerParameter = 0 }
      if (!(Test-Path Variable:Private:SomeSwitchParameter)) { $SomeSwitchParameter = $false }
      

      If you prefer, instead of setting a default value for the parameter you could just check if it is defined first when using it in your script.  I like this approach however, because I can put this code right after my cmdlet parameters so I’m modifying all of my parameter set properties in one place, and I don’t have to remember to check if the variable is defined later when writing the body of my cmdlet; otherwise I’m likely to forget to do the “is defined” check, and will likely miss the problem since I do most of my testing in PowerShell v3.0.

      Another approach rather than checking if a parameter is defined or not, is to check which Parameter Set Name is being used; this will implicitly let you know which parameters are defined.

      switch ($PsCmdlet.ParameterSetName)
      {
      	"SomeParameterSetName"  { Write-Host "You supplied the Some variable."; break}
      	"OtherParameterSetName"  { Write-Host "You supplied the Other variable."; break}
      } 
      

      I still prefer to default all of my parameters, but you may prefer this method.

      I hope you find this useful.  Check out my other article for more PowerShell v2.0 vs. v3.0 differences.

      Happy coding!

      PowerShell Code To Ensure Client Is Using At Least The Minimum Required PowerShell Version

      October 25th, 2013

      Here’s some simple code that will throw an exception if the client running your script is not using the version of PowerShell (or greater) that is required; just change the $REQUIRED_POWERSHELL_VERSION variable value to the minimum version that the script requires.

      # Throw an exception if client is not using the minimum required PowerShell version.
      $REQUIRED_POWERSHELL_VERSION = 3.0	# The minimum Major.Minor PowerShell version that is required for the script to run.
      $POWERSHELL_VERSION = $PSVersionTable.PSVersion.Major + ($PSVersionTable.PSVersion.Minor / 10)
      if ($REQUIRED_POWERSHELL_VERSION -gt $POWERSHELL_VERSION)
      { throw "PowerShell version $REQUIRED_POWERSHELL_VERSION is required for this script; You are only running version $POWERSHELL_VERSION. Please update PowerShell to at least version $REQUIRED_POWERSHELL_VERSION." }
      

      — UPDATE {

      Thanks to Robin M for pointing out that PowerShell has the built-in #Requires statement for this purpose, so you do not need to use the code above. Instead, simply place the following code anywhere in your script to enforce the desired PowerShell version required to run the script:

      #Requires -Version 3.0
      

      If the user does not have the minimum required version of PowerShell installed, they will see an error message like this:

      The script ‘foo.ps1’ cannot be run because it contained a "#requires" statement at line 1 for Windows PowerShell version 3.0 which is incompatible with the installed Windows PowerShell version of 2.0.

      } UPDATE —

So if your script requires, for example, PowerShell v3.0, just put this at the start of your script to have it error out right away with a meaningful error message; otherwise your script may throw other errors that mask the real issue, potentially leading the user to spend many hours troubleshooting your script, or to give up on it altogether.

I’ve been bitten by this a few times now, with people reporting issues on my CodePlex scripts because the error message seemed ambiguous.  So now any scripts that I release to the general public will have this check in them to give users a proper error message.  I have also created a page on PowerShell v2 vs. v3 differences that I’m going to use to keep track of the differences that I encounter, so that I can have confidence in the minimum PowerShell version that I set on my scripts.  I also plan on creating a v3 vs. v4 page once I start using PS v4 features more.  Of course, the best test is to actually run your script in the minimum PowerShell version that you set, which I mention how to do on my PS v2 vs. v3 page.
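For example, assuming the 2.0 engine (and .NET 3.5) is still installed on your machine, something like this should run a script (MyScript.ps1 is just a placeholder name) under PowerShell v2.0 so you can verify it behaves correctly:

# Start a console running the PowerShell 2.0 engine and execute the script in it.
PowerShell.exe -Version 2.0 -File .\MyScript.ps1
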

      Happy coding!

      PowerShell Script To Get Path Lengths

      October 24th, 2013 6 comments

A while ago I created a Path Length Checker tool in C# that has a “nice” GUI, and put it up on CodePlex.  One of the users reported that he was trying to use it to scan his entire C: drive, but it was crashing.  It turns out that the System.IO.Directory.GetFileSystemEntries() call was throwing a permissions exception when trying to access the “C:\Documents and Settings” directory, even when running the app as admin.  While I work on implementing a workaround in the app, I wrote up a quick PowerShell script that the user could use to get all of the path lengths, and that is what I present to you here.

$pathToScan = "C:\Some Folder"	# The path to scan and get the path lengths for (sub-directories will be scanned as well).
      $outputFilePath = "C:\temp\PathLengths.txt"	# This must be a file in a directory that exists and does not require admin rights to write to.
      $writeToConsoleAsWell = $true	# Writing to the console will be much slower.
      
      # Open a new file stream (nice and fast) and write all the paths and their lengths to it.
      $outputFileDirectory = Split-Path $outputFilePath -Parent
      if (!(Test-Path $outputFileDirectory)) { New-Item $outputFileDirectory -ItemType Directory }
      $stream = New-Object System.IO.StreamWriter($outputFilePath, $false)
      Get-ChildItem -Path $pathToScan -Recurse -Force | Select-Object -Property FullName, @{Name="FullNameLength";Expression={($_.FullName.Length)}} | Sort-Object -Property FullNameLength -Descending | ForEach-Object {
          $filePath = $_.FullName
          $length = $_.FullNameLength
          $string = "$length : $filePath"
          
          # Write to the Console.
          if ($writeToConsoleAsWell) { Write-Host $string }
       
    # Write to the file.
          $stream.WriteLine($string)
      }
      $stream.Close()
      

      Happy coding!

      PowerShell Functions To Convert, Remove, and Delete IIS Web Applications

      October 23rd, 2013 No comments

I recently refactored some of our PowerShell scripts that we use to publish and remove IIS 7 web applications, creating some general functions that can be used anywhere.  In this post I show these functions along with how I structure our scripts to make creating, removing, and deleting web applications for our various products fully automated and tidy.  Note that these scripts require at least PowerShell v3.0 and use the IIS Admin Cmdlets, which I believe require IIS v7.0; the IIS Admin Cmdlet calls can easily be replaced, though, with calls to appcmd.exe, msdeploy, or any other tool for working with IIS that you want.

      I’ll blast you with the first file’s code and explain it below (ApplicationServiceUtilities.ps1).

      # Turn on Strict Mode to help catch syntax-related errors.
      # 	This must come after a script's/function's param section.
      # 	Forces a function to be the first non-comment code to appear in a PowerShell Module.
      Set-StrictMode -Version Latest
      
      # Define the code block that will add the ApplicationServiceInformation class to the PowerShell session.
      # NOTE: If this class is modified you will need to restart your PowerShell session to see the changes.
      $AddApplicationServiceInformationTypeScriptBlock = {
          # Wrap in a try-catch in case we try to add this type twice.
          try {
          # Create a class to hold an IIS Application Service's Information.
          Add-Type -TypeDefinition "
              using System;
          
              public class ApplicationServiceInformation
              {
                  // The name of the Website in IIS.
                  public string Website { get; set;}
              
                  // The path to the Application, relative to the Website root.
                  public string ApplicationPath { get; set; }
      
                  // The Application Pool that the application is running in.
                  public string ApplicationPool { get; set; }
      
                  // Whether this application should be published or not.
                  public bool ConvertToApplication { get; set; }
      
                  // Implicit Constructor.
                  public ApplicationServiceInformation() { this.ConvertToApplication = true; }
      
                  // Explicit constructor.
                  public ApplicationServiceInformation(string website, string applicationPath, string applicationPool, bool convertToApplication = true)
                  {
                      this.Website = website;
                      this.ApplicationPath = applicationPath;
                      this.ApplicationPool = applicationPool;
                      this.ConvertToApplication = convertToApplication;
                  }
              }
          "
          } catch {}
      }
      # Add the ApplicationServiceInformation class to this PowerShell session.
      & $AddApplicationServiceInformationTypeScriptBlock
      
      <#
          .SYNOPSIS
          Converts the given files to application services on the given Server.
      
          .PARAMETER Server
          The Server Host Name to connect to and convert the applications on.
      
          .PARAMETER ApplicationServicesInfo
          The [ApplicationServiceInformation[]] containing the files to convert to application services.
      #>
      function ConvertTo-ApplicationServices
      {
          [CmdletBinding()]
          param
          (
              [string] $Server,
              [ApplicationServiceInformation[]] $ApplicationServicesInfo
          )
      
          $block = {
      	    param([PSCustomObject[]] $ApplicationServicesInfo)
              $VerbosePreference = $Using:VerbosePreference
      	    Write-Verbose "Converting To Application Services..."
      
              # Import the WebAdministration module to make sure we have access to the required cmdlets and the IIS: drive.
              Import-Module WebAdministration 4> $null	# Don't write the verbose output.
      	
      	    # Create all of the Web Applications, making sure to first try and remove them in case they already exist (in order to avoid a PS error).
      	    foreach ($appInfo in [PSCustomObject[]]$ApplicationServicesInfo)
              {
                  $website = $appInfo.Website
                  $applicationPath = $appInfo.ApplicationPath
                  $applicationPool = $appInfo.ApplicationPool
      		    $fullPath = Join-Path $website $applicationPath
      
                  # If this application should not be converted, continue onto the next one in the list.
                  if (!$appInfo.ConvertToApplication) { Write-Verbose "Skipping publish of '$fullPath'"; continue }
      		
      		    Write-Verbose "Checking if we need to remove '$fullPath' before converting it..."
      		    if (Get-WebApplication -Site "$website" -Name "$applicationPath")
      		    {
      			    Write-Verbose "Removing '$fullPath'..."
      			    Remove-WebApplication -Site "$website" -Name "$applicationPath"
      		    }
      
                  Write-Verbose "Converting '$fullPath' to an application with Application Pool '$applicationPool'..."
                  ConvertTo-WebApplication "IIS:\Sites\$fullPath" -ApplicationPool "$applicationPool"
              }
          }
      
    # Connect to the host Server and run the commands directly on that computer.
          # Before we run our script block we first have to add the ApplicationServiceInformation class type into the PowerShell session.
          $session = New-PSSession -ComputerName $Server
          Invoke-Command -Session $session -ScriptBlock $AddApplicationServiceInformationTypeScriptBlock
          Invoke-Command -Session $session -ScriptBlock $block -ArgumentList (,$ApplicationServicesInfo)
          Remove-PSSession -Session $session
      }
      
      <#
          .SYNOPSIS
          Removes the given application services from the given Server.
      
          .PARAMETER Server
          The Server Host Name to connect to and remove the applications from.
      
          .PARAMETER ApplicationServicesInfo
          The [ApplicationServiceInformation[]] containing the applications to remove.
      #>
      function Remove-ApplicationServices
      {
          [CmdletBinding()]
          param
          (
              [string] $Server,
              [ApplicationServiceInformation[]] $ApplicationServicesInfo
          )
      
          $block = {
      	    param([ApplicationServiceInformation[]] $ApplicationServicesInfo)
              $VerbosePreference = $Using:VerbosePreference
      	    Write-Verbose "Removing Application Services..."
      
              # Import the WebAdministration module to make sure we have access to the required cmdlets and the IIS: drive.
              Import-Module WebAdministration 4> $null	# Don't write the verbose output.
      
      	    # Remove all of the Web Applications, making sure they exist first (in order to avoid a PS error).
      	    foreach ($appInfo in [ApplicationServiceInformation[]]$ApplicationServicesInfo)
              {
                  $website = $appInfo.Website
                  $applicationPath = $appInfo.ApplicationPath
      		    $fullPath = Join-Path $website $applicationPath
      		
      		    Write-Verbose "Checking if we need to remove '$fullPath'..."
      		    if (Get-WebApplication -Site "$website" -Name "$applicationPath")
      		    {
      			    Write-Verbose "Removing '$fullPath'..."
      			    Remove-WebApplication -Site "$website" -Name "$applicationPath"
      		    }
              }
          }
      
          # Connect to the host Server and run the commands directly on that computer.
          # Before we run our script block we first have to add the ApplicationServiceInformation class type into the PowerShell session.
          $session = New-PSSession -ComputerName $Server
          Invoke-Command -Session $session -ScriptBlock $AddApplicationServiceInformationTypeScriptBlock
          Invoke-Command -Session $session -ScriptBlock $block -ArgumentList (,$ApplicationServicesInfo)
          Remove-PSSession -Session $session
      }
      
      <#
          .SYNOPSIS
          Removes the given application services from the given Server and deletes all associated files.
      
          .PARAMETER Server
          The Server Host Name to connect to and delete the applications from.
      
          .PARAMETER ApplicationServicesInfo
          The [ApplicationServiceInformation[]] containing the applications to delete.
      
          .PARAMETER OnlyDeleteIfNotConvertedToApplication
          If this switch is supplied and the application services are still running (i.e. have not been removed yet), the services will not be removed and the files will not be deleted.
      
          .PARAMETER DeleteEmptyParentDirectories
          If this switch is supplied, after the application services folder has been removed, it will recursively check parent folders and remove them if they are empty, until the Website root is reached.
      #>
      function Delete-ApplicationServices
      {
          [CmdletBinding()]
          param
          (
              [string] $Server,
              [ApplicationServiceInformation[]] $ApplicationServicesInfo,
              [switch] $OnlyDeleteIfNotConvertedToApplication,
              [switch] $DeleteEmptyParentDirectories
          )
          
          $block = {
      	    param([ApplicationServiceInformation[]] $ApplicationServicesInfo)
              $VerbosePreference = $Using:VerbosePreference
      	    Write-Verbose "Deleting Application Services..."
      
              # Import the WebAdministration module to make sure we have access to the required cmdlets and the IIS: drive.
              Import-Module WebAdministration 4> $null	# Don't write the verbose output.
      
      	    # Remove all of the Web Applications and delete their files from disk.
      	    foreach ($appInfo in [ApplicationServiceInformation[]]$ApplicationServicesInfo)
              {
                  $website = $appInfo.Website
                  $applicationPath = $appInfo.ApplicationPath
      		    $fullPath = Join-Path $website $applicationPath
                  $iisSitesDirectory = "IIS:\Sites\"
      		
      		    Write-Verbose "Checking if we need to remove '$fullPath'..."
      		    if (Get-WebApplication -Site "$website" -Name "$applicationPath")
      		    {
                # If we should only delete files for services that are not currently running as a Web Application, skip this one since it still is, and continue on to the next one in the list.
                      if ($Using:OnlyDeleteIfNotConvertedToApplication) { Write-Verbose "'$fullPath' is still running as a Web Application, so its files will not be deleted."; continue }
      
      			    Write-Verbose "Removing '$fullPath'..."
      			    Remove-WebApplication -Site "$website" -Name "$applicationPath"
      		    }
                  
                  Write-Verbose "Deleting the directory '$fullPath'..."
                  Remove-Item -Path "$iisSitesDirectory$fullPath" -Recurse -Force
      
                  # If we should delete empty parent directories of this application.
                  if ($Using:DeleteEmptyParentDirectories)
                  {
                      Write-Verbose "Deleting empty parent directories..."
                      $parent = Split-Path -Path $fullPath -Parent
      
                      # Only delete the parent directory if it is not the Website directory, and it is empty.
                      while (($parent -ne $website) -and (Test-Path -Path "$iisSitesDirectory$parent") -and ((Get-ChildItem -Path "$iisSitesDirectory$parent") -eq $null))
                      {
                          $path = $parent
                          Write-Verbose "Deleting empty parent directory '$path'..."
                          Remove-Item -Path "$iisSitesDirectory$path" -Force
                          $parent = Split-Path -Path $path -Parent
                      }
                  }
              }
          }
      
          # Connect to the host Server and run the commands directly on that computer.
          # Before we run our script block we first have to add the ApplicationServiceInformation class type into the PowerShell session.
          $session = New-PSSession -ComputerName $Server
          Invoke-Command -Session $session -ScriptBlock $AddApplicationServiceInformationTypeScriptBlock
          Invoke-Command -Session $session -ScriptBlock $block -ArgumentList (,$ApplicationServicesInfo)
          Remove-PSSession -Session $session
      }
      

      This first file contains all of the meat.  At the top it declares (in C#) the ApplicationServiceInformation class that is used to hold the information about a web application; mainly the Website that the application should go in, the ApplicationPath (where within the website the application should be created), and the Application Pool that the application should run under.  Notice that the $AddApplicationServiceInformationTypeScriptBlock script block is executed right below where it is declared, in order to actually import the ApplicationServiceInformation class type into the current PowerShell session.

There is one extra property on this class that I found I needed, but that you may be able to ignore: the ConvertToApplication boolean.  This is inspected by our ConvertTo-ApplicationServices function to tell it whether the application should actually be published or not.  I required this field because we have some web services that should only be “converted to applications” in specific environments (or only on a developer's local machine), but whose files we still want to delete when using the Delete-ApplicationServices function.  While I could just create 2 separate lists of ApplicationServiceInformation objects depending on which function I was calling (see below), I decided to instead just include this one extra property.

      Below the class declaration are our functions to perform the actual work:

      • ConvertTo-ApplicationServices: Converts the files to an application using the ConvertTo-WebApplication cmdlet.
      • Remove-ApplicationServices: Converts the application back to regular files using the Remove-WebApplication cmdlet.
      • Delete-ApplicationServices: First removes any applications, and then deletes the files from disk.
  The Delete-ApplicationServices function includes a couple of additional switches (a usage sketch follows this list).  The $OnlyDeleteIfNotConvertedToApplication switch can be used as a bit of a safety net to ensure that you only delete files for application services that are not currently running as a web application (i.e. the web application has already been removed).  If this switch is omitted, the web application will be removed and the files deleted.  The $DeleteEmptyParentDirectories switch may be used to remove parent directories once the application files have been deleted. This is useful for us because we version our services, so they are all placed in a directory corresponding to a version number, e.g. \Website\[VersionNumber]\App1 and \Website\[VersionNumber]\App2. This switch allows the [VersionNumber] directory to be deleted automatically once the App1 and App2 directories have been deleted.
        Note that I don’t have a function to copy files to the server (i.e. publish them); I assume that the files have already been copied to the server, as we currently have this as a separate step in our deployment process.
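For reference, here is a rough usage sketch of those switches (the server name and the Get-RqApplicationServiceInformation function come from the example library file shown below; the version number is arbitrary):

# Remove the Rq web applications, delete their files from disk, and clean up any empty version directories left behind.
Delete-ApplicationServices -Server "Our.WebServer.local" -ApplicationServicesInfo (Get-RqApplicationServiceInformation -Release "1.2.3") -DeleteEmptyParentDirectories -Verbose
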

      My 2nd file (ApplicationServiceLibrary.ps1) is optional and is really just a collection of functions used to return the ApplicationServiceInformation instances that I require as an array, depending on which projects I want to convert/remove/delete.

      # Get the directory that this script is in.
      $THIS_SCRIPTS_DIRECTORY = Split-Path $script:MyInvocation.MyCommand.Path
      
      # Include the required ApplicationServiceInformation type.
      . (Join-Path $THIS_SCRIPTS_DIRECTORY ApplicationServiceUtilities.ps1)
      
      #=================================
      # Replace all of the functions below with your own.
      # These are provided as examples.
      #=================================
      
      function Get-AllApplicationServiceInformation([string] $Release)
      {
          [ApplicationServiceInformation[]] $appServiceInfo = @()
      
          $appServiceInfo += Get-RqApplicationServiceInformation -Release $Release
          $appServiceInfo += Get-PublicApiApplicationServiceInformation -Release $Release
          $appServiceInfo += Get-IntraApplicationServiceInformation -Release $Release
      
          return $appServiceInfo    
      }
      
      function Get-RqApplicationServiceInformation([string] $Release)
      {
          return [ApplicationServiceInformation[]] @(
      	    (New-Object ApplicationServiceInformation -Property @{Website = "Application Services"; ApplicationPath = "$Release/Core.Reporting.Services"; ApplicationPool = "RQ Services .NET4"}),
      	    (New-Object ApplicationServiceInformation -Property @{Website = "Application Services"; ApplicationPath = "$Release/Core.Services"; ApplicationPool = "RQ Core Services .NET4"}),
      	    (New-Object ApplicationServiceInformation -Property @{Website = "Application Services"; ApplicationPath = "$Release/DeskIntegration.Services"; ApplicationPool = "RQ Services .NET4"}),
      	    (New-Object ApplicationServiceInformation -Property @{Website = "Application Services"; ApplicationPath = "$Release/Retail.Integration.Services"; ApplicationPool = "RQ Services .NET4"}),
      
              # Simulator Services that are only for Dev; we don't want to convert them to an application, but do want to remove their files that got copied to the web server.
              (New-Object ApplicationServiceInformation -Property @{Website = "Application Services"; ApplicationPath = "$Release/Simulator.Services"; ApplicationPool = "Simulator Services .NET4"; ConvertToApplication = $false}))
      }
      
      function Get-PublicApiApplicationServiceInformation([string] $Release)
      {
          return [ApplicationServiceInformation[]] @(
              (New-Object ApplicationServiceInformation -Property @{Website = "API Services"; ApplicationPath = "$Release/PublicAPI.Host"; ApplicationPool = "API Services .NET4"}),
      	    (New-Object ApplicationServiceInformation -Property @{Website = "API Services"; ApplicationPath = "$Release/PublicAPI.Documentation"; ApplicationPool = "API Services .NET4"}))
      }
      
      function Get-IntraApplicationServiceInformation([string] $Release)
      {
          return [ApplicationServiceInformation[]] @(
              (New-Object ApplicationServiceInformation -Property @{Website = "Intra Services"; ApplicationPath = "$Release"; ApplicationPool = "Intra Services .NET4"}))
      }
      

      You can see the first thing it does is dot source the ApplicationServiceUtilities.ps1 file (I assume all these scripts are in the same directory).  This is done in order to include the ApplicationServiceInformation type into the PowerShell session.  Next I just have functions that return the various application service information that our various projects specify.  I break them apart by project so that I’m able to easily publish one project separately from another, but also have a Get-All function that returns back all of the service information for when we deploy all services together.  We deploy many of our projects in lock-step, so having a Get-All function makes sense for us, but it may not for you.  We have many more projects and services than I show here; I just show these as an example of how you can set yours up if you choose.

      One other thing you may notice is that my Get-*ApplicationServiceInformation functions take a $Release parameter that is used in the ApplicationPath; this is because our services are versioned.  Yours may not be though, in which case you can omit that parameter for your Get functions (or add any additional parameters that you do need).

      Lastly, to make things nice and easy, I create ConvertTo, Remove, and Delete scripts for each of our projects, as well as a scripts to do all of the projects at once.  Here’s an example of what one of these scripts would look like:

      param
      (
      	[parameter(Position=0,Mandatory=$true,HelpMessage="The 3 hex-value version number of the release (x.x.x).")]
      	[ValidatePattern("^\d{1,5}\.\d{1,5}\.\d{1,5}$")]
      	[string] $Release
      )
      
      # Get the directory that this script is in.
      $THIS_SCRIPTS_DIRECTORY = Split-Path $script:MyInvocation.MyCommand.Path
      
      # Include the functions used to perform the actual operations.
      . (Join-Path $THIS_SCRIPTS_DIRECTORY ApplicationServiceLibrary.ps1)
      
      ConvertTo-ApplicationServices -Server "Our.WebServer.local" -ApplicationServicesInfo (Get-RqApplicationServiceInformation -Release $Release) -Verbose
      

      The first thing it does is prompt for the $Release version number; again, if you don’t version your services then you can omit that.

The next thing it does is dot-source the ApplicationServiceLibrary.ps1 script to make all of the Get-*ApplicationServiceInformation functions that we defined in the previous file available.  I prefer to use the ApplicationServiceLibrary.ps1 file to keep all of our services in a common place, and to avoid copy/pasting the ApplicationServiceInformation for each project into each Convert/Remove/Delete script; but that’s my personal choice, and if you prefer to copy-paste the code into a few different files instead of having a central library file, go hard.  If you omit the Library script though, then you will instead need to dot-source the ApplicationServiceUtilities.ps1 file here, since our Library script currently dot-sources it in for us.

The final line is the one that actually calls our utility function to perform the operation.  It provides the web server hostname to connect to, and calls the library’s Get-*ApplicationServiceInformation to retrieve the information for the web applications that should be created.  Notice too that it also provides the -Verbose switch.  Some of the IIS operations can take quite a while to run and don’t generate any output, so I like to see the verbose output so I can gauge the progress of the script, but feel free to omit it.

So this sample script creates all of the web applications for our Rq product and can be run very easily.  To make the corresponding Remove and Delete scripts, I would just copy this file and replace “ConvertTo-” with “Remove-” and “Delete-” respectively.  This allows you to have separate scripts for creating and removing each of your products that can easily be run automatically or manually, fully automating the process of creating and removing your web applications in IIS.
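For example, the final line of the corresponding Remove script would simply swap the function name (everything above it stays the same):

Remove-ApplicationServices -Server "Our.WebServer.local" -ApplicationServicesInfo (Get-RqApplicationServiceInformation -Release $Release) -Verbose
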

      If I need to remove the services for a bunch of versions, here is an example of how I can just create a quick script that calls my Remove Services script for each version that needs to be removed:

      # Get the directory that this script is in.
      $thisScriptsDirectory = Split-Path $script:MyInvocation.MyCommand.Path
      
      # Remove Rq application services for versions 4.11.33 to 4.11.43.
      $majorMinorVersion = "4.11"
      33..43 | foreach {
          $Release = "$majorMinorVersion.$_"
          Write-Host "Removing Rq '$Release' services..."
          & "$thisScriptsDirectory\Remove-RqServices.ps1" $Release
      }
      

      If you have any questions or suggestions feel free to leave a comment.  I hope you find this useful.

      Happy coding!

      PowerShell 2.0 vs. 3.0 Syntax Differences And More

      October 22nd, 2013 1 comment

      I’m fortunate enough to work for a great company that tries to stay ahead of the curve and use newer technologies.  This means that when I’m writing my PowerShell (PS) scripts I typically don’t have to worry about only using PS v2.0 compatible syntax and cmdlets, as all of our PCs have v3.0 (soon to have v4.0).  This is great, until I release these scripts (or snippets from the scripts) for the general public to use; I have to keep in mind that many other people are still stuck running older versions of Windows, or not allowed to upgrade PowerShell.  So to help myself release PS v2.0 compatible scripts to the general public, I’m going to use this as a living document of the differences between PowerShell 2.0 and 3.0 that I encounter (so it will continue to grow over time; read as, bookmark it).  Of course there are other sites that have some of this info, but I’m going to try and compile a list of the ones that are relevant to me, in a nice simple format.

      Before we get to the differences, here are some things you may want to know relating to PowerShell versions.

      How to check which version of PowerShell you are running

      All PS versions:

      $PSVersionTable.PSVersion
      

       

      How to run/test your script against an older version of PowerShell (source)

All PS versions:  use PowerShell.exe -Version [version] to start a new PowerShell session, where [version] is the PowerShell version that you want the session to use, then run your script in this new session.  Shorthand is PowerShell -v [version]

      PowerShell.exe -Version 2.0
      

      Note: You can’t run PowerShell ISE in an older version of PowerShell; only the Windows PowerShell console.

       

      PowerShell v2 and v3 Differences:

       

      Where-Object no longer requires braces (source)

      PS v2.0:

Get-Service | Where { $_.Status -eq 'running' }
      

      PS v3.0:

Get-Service | Where Status -eq 'running'
      

      PS V2.0 Error Message:

      Where : Cannot bind parameter ‘FilterScript’. Cannot convert the “[PropertyName]” value of the type “[Type]” to type “System.Management.Automation.ScriptBlock”.

       

      Using local variables in remote sessions (source)

      PS v2.0:

      $class = "win32_bios"
      Invoke-Command -cn dc3 {param($class) gwmi -class $class} -ArgumentList $class
      

      PS v3.0:

      $class = "win32_bios"
      Invoke-Command -cn dc3 {gwmi -class $Using:class}
      

       

      Variable validation attributes (source)

      PS v2.0: Validation only available on cmdlet/function/script parameters.

      PS v3.0: Validation available on cmdlet/function/script parameters, and on variables.

      [ValidateRange(1,5)][int]$someLocalVariable = 1
      

       

      Stream redirection (source)

      The Windows PowerShell redirection operators use the following characters to represent each output type:
              *   All output
              1   Success output
              2   Errors
              3   Warning messages
              4   Verbose output
              5   Debug messages
      
      NOTE: The All (*), Warning (3), Verbose (4) and Debug (5) redirection operators were introduced
             in Windows PowerShell 3.0. They do not work in earlier versions of Windows PowerShell.

       

      PS v2.0: Could only redirect Success and Error output.

      # Sends errors (2) and success output (1) to the success output stream.
      Get-Process none, Powershell 2>&1
      

      PS v3.0: Can also redirect Warning, Verbose, Debug, and All output.

      # Function to generate each kind of output.
      function Test-Output { Get-Process PowerShell, none; Write-Warning "Test!"; Write-Verbose "Test Verbose"; Write-Debug "Test Debug"}
      
      # Write every output stream to a text file.
      Test-Output *> Test-Output.txt
      
      

       

      Explicitly set parameter set variable values when not defined (source)

      PS v2.0 will throw an error if you try and access a parameter set parameter that has not been defined.  The solution is to give it a default value when it is not defined. Specify the Private scope in case a variable with the same name exists in the global scope or an inherited scope:

      # Default the ParameterSet variables that may not have been set depending on which parameter set is being used. This is required for PowerShell v2.0 compatibility.
      if (!(Test-Path Variable:Private:SomeStringParameter)) { $SomeStringParameter = $null }
      if (!(Test-Path Variable:Private:SomeIntegerParameter)) { $SomeIntegerParameter = 0 }
      if (!(Test-Path Variable:Private:SomeSwitchParameter)) { $SomeSwitchParameter = $false }
      

      PS v2.0 Error Message:

      The variable ‘$[VariableName]’ cannot be retrieved because it has not been set.

       

      Parameter attributes require the equals sign

      PS v2.0:

      [parameter(Position=1,Mandatory=$true)] [string] $SomeParameter
      

      PS v3.0:

      [parameter(Position=1,Mandatory)] [string] $SomeParameter
      

      PS v2.0 Error Message:

      The “=” operator is missing after a named argument.

       

      Cannot use String.IsNullOrWhitespace (or any other post .Net 3.5 functionality)

      PS v2.0:

      [string]::IsNullOrEmpty($SomeString)
      

      PS v3.0:

      [string]::IsNullOrWhiteSpace($SomeString)
      

      PS v2.0 Error Message:

      IsNullOrWhitespace : Method invocation failed because [System.String] doesn’t contain a method named ‘IsNullOrWhiteSpace’.

      PS v2.0 compatible version of IsNullOrWhitespace function:

      # PowerShell v2.0 compatible version of [string]::IsNullOrWhitespace.
      function StringIsNullOrWhitespace([string] $string)
      {
          if ($string -ne $null) { $string = $string.Trim() }
          return [string]::IsNullOrEmpty($string)
      }
      

       

Get-ChildItem cmdlet’s -Directory and -File switches were introduced in PS v3.0

      PS v2.0:

      Get-ChildItem -Path $somePath | Where-Object { $_.PSIsContainer }	# Get directories only.
      Get-ChildItem -Path $somePath | Where-Object { !$_.PSIsContainer }	# Get files only.
      

      PS v3.0:

      Get-ChildItem -Path $somePath -Directory
      Get-ChildItem -Path $somePath -File
      

       

       

      Other Links

      Creating Strongly Typed Objects In PowerShell, Rather Than Using An Array Or PSCustomObject

      October 21st, 2013 1 comment

      I recently read a great article that explained how to create hashtables, dictionaries, and PowerShell objects.  I already knew a bit about these, but this article gives a great comparison between them, when to use each of them, and how to create them in the different versions of PowerShell.

Right now I’m working on refactoring some existing code into some general functions for creating, removing, and destroying IIS applications (read about it here).  At first, I thought that this would be a great place to use PSCustomObject, as in order to perform these operations I needed 3 pieces of information about a website: the Website name, the Application Name (essentially the path to the application under the Website root), and the Application Pool that the application should run in.

       

      Using an array

      So initially the code I wrote just used an array to hold the 3 properties of each application service:

      # Store app service info as an array of arrays.
      $AppServices = @(
      	("MyWebsite", "$Version/Reporting.Services", "Services .NET4"),
      	("MyWebsite", "$Version/Core.Services", "Services .NET4"),
      	...
      )
      
      # Remove all of the Web Applications.
      foreach ($appInfo in $AppServices )
      {
      	$website = $appInfo[0]
      	$appName = $appInfo[1]
      	$appPool = $appInfo[2]
      	...
      }
      
      

There is nothing “wrong” with using an array to store the properties; it works.  However, now that I am refactoring the functions to make them general purpose so they can be used by other people/scripts, this does have one very undesirable limitation: the properties must always be stored in the correct order in the array (i.e. Website in position 0, App Name in 1, and App Pool in 2).  Since the list of app services will be passed into my functions, this would require the calling script to know to put the properties in this order.  Boo.

      Another option that I didn’t consider when I originally wrote the script was to use an associative array, but it has the same drawbacks as using a PSCustomObject discussed below.
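For illustration, here is roughly what the associative array (hashtable) version would look like (mirroring the array example above); it reads better than the positional array, but just like the PSCustomObject approach below, nothing guarantees that the caller supplies all three keys or spells them correctly:

# Store app service info as an array of hashtables.
$AppServices = @(
	@{Website = "MyWebsite"; ApplicationPath = "$Version/Reporting.Services"; ApplicationPool = "Services .NET4"},
	@{Website = "MyWebsite"; ApplicationPath = "$Version/Core.Services"; ApplicationPool = "Services .NET4"}
)

# Remove all of the Web Applications.
foreach ($appInfo in $AppServices)
{
	$website = $appInfo.Website
	$appPath = $appInfo.ApplicationPath
	$appPool = $appInfo.ApplicationPool
}
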

       

      Using PSCustomObject

      So I thought let’s use a PSCustomObject instead, as that way the client does not have to worry about the order of the information; as long as their PSCustomObject has Website, ApplicationPath, and ApplicationPool properties then we’ll be able to process it.  So I had this:

      [PSCustomObject[]] $applicationServicesInfo = @(
      	[PSCustomObject]@{Website = "MyWebsite"; ApplicationPath = "$Version/Reporting.Services"; ApplicationPool = "Services .NET4"},
      	[PSCustomObject]@{Website = "MyWebsite"; ApplicationPath = "$Version/Core.Services"; ApplicationPool = "Services .NET4},
      	...
      )
      
      function Remove-ApplicationServices
      {
      	param([PSCustomObject[]] $ApplicationServicesInfo)
      
      	# Remove all of the Web Applications.
      	foreach ($appInfo in [PSCustomObject[]]$ApplicationServicesInfo)
      	{
      		$website = $appInfo.Website
      		$appPath = $appInfo.ApplicationPath
      		$appPool = $appInfo.ApplicationPool
      		...
      	}
      }
      

I liked this better as the properties are explicitly named, so there’s no guesswork about which information each property contains, but it’s still not great.  One thing that I don’t have here (and really should) is validation to make sure that the passed-in PSCustomObjects actually have Website, ApplicationPath, and ApplicationPool properties on them; otherwise an exception will be thrown when I try to access them.  So with this approach I would still need to have documentation and validation to ensure that the client passes in a PSCustomObject with those properties.
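A minimal sketch of what that validation could look like (using the property names from above), placed at the top of the function before the objects are used:

	# Verify that each passed-in object has the properties we are going to access.
	foreach ($appInfo in $ApplicationServicesInfo)
	{
		foreach ($requiredProperty in @('Website', 'ApplicationPath', 'ApplicationPool'))
		{
			if ($appInfo.PSObject.Properties[$requiredProperty] -eq $null)
			{ throw "The provided application service information is missing the required '$requiredProperty' property." }
		}
	}
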

       

      Using a new strongly typed object

      I frequently read other PowerShell blog posts and recently stumbled across this one.  In the article he mentions creating a new compiled type by passing a string to the Add-Type cmdlet; essentially writing C# code in his PowerShell script to create a new class.  I knew that you could use Add-Type to import other assemblies, but never realized that you could use it to import an assembly that doesn’t actually exist (i.e. a string in your PowerShell script).  This is freaking amazing! So here is what my new solution looks like:

      try {	# Wrap in a try-catch in case we try to add this type twice.
      # Create a class to hold an IIS Application Service's Information.
      Add-Type -TypeDefinition @"
      	using System;
      	
      	public class ApplicationServiceInformation
      	{
      		// The name of the Website in IIS.
      		public string Website { get; set;}
      		
      		// The path to the Application, relative to the Website root.
      		public string ApplicationPath { get; set; }
      
      		// The Application Pool that the application is running in.
      		public string ApplicationPool { get; set; }
      
      		// Implicit Constructor.
      		public ApplicationServiceInformation() { }
      
      		// Explicit constructor.
      		public ApplicationServiceInformation(string website, string applicationPath, string applicationPool)
      		{
      			this.Website = website;
      			this.ApplicationPath = applicationPath;
      			this.ApplicationPool = applicationPool;
      		}
      	}
      "@
      } catch {}
      
      $anotherService = New-Object ApplicationServiceInformation
      $anotherService.Website = "MyWebsite"
      $anotherService.ApplicationPath = "$Version/Payment.Services"
      $anotherService.ApplicationPool = "Services .NET4"
      	
      [ApplicationServiceInformation[]] $applicationServicesInfo = @(
      	(New-Object ApplicationServiceInformation("MyWebsite", "$Version/Reporting.Services", "Services .NET4")),
	(New-Object ApplicationServiceInformation -Property @{Website = "MyWebsite"; ApplicationPath = "$Version/Core.Services"; ApplicationPool = "Services .NET4"}),
      	$anotherService,
      	...
      )
      
      function Remove-ApplicationServices
      {
      	param([ApplicationServiceInformation[]] $ApplicationServicesInfo)
      
      	# Remove all of the Web Applications.
      	foreach ($appInfo in [ApplicationServiceInformation[]]$ApplicationServicesInfo)
      	{
      		$website = $appInfo.Website
      		$appPath = $appInfo.ApplicationPath
      		$appPool = $appInfo.ApplicationPool
      		...
      	}
      }
      

I first create a simple container class to hold the application service information, and now all of my properties are explicit like with the PSCustomObject, but I’m also guaranteed the properties will exist on the object that is passed into my function.  From there I declare my array of ApplicationServiceInformation objects, and the function that we can pass them into. Note that I wrap each New-Object call in parentheses, otherwise PowerShell parses it incorrectly and will throw an error.

      As you can see from the snippets above and below, there are several different ways that we can initialize a new instance of our ApplicationServiceInformation class:

      $service1 = New-Object ApplicationServiceInformation("Explicit Constructor", "Core.Services", ".NET4")
      
      $service2 = New-Object ApplicationServiceInformation -ArgumentList ("Explicit Constructor ArgumentList", "Core.Services", ".NET4")
      
      $service3 = New-Object ApplicationServiceInformation -Property @{Website = "Using Property"; ApplicationPath = "Core.Services"; ApplicationPool = ".NET4"}
      
      $service4 = New-Object ApplicationServiceInformation
      $service4.Website = "Properties added individually"
      $service4.ApplicationPath = "Core.Services"
      $service4.ApplicationPool = "Services .NET4"
      

       

      Caveats

• Note that I wrapped the call to Add-Type in a Try-Catch block.  This is to prevent PowerShell from throwing an error if the type tries to get added twice.  It’s sort of a hacky workaround, but there aren’t many good alternatives, since you cannot unload an assembly (an alternative check is sketched just below this list).
• This means that while developing, if you make any changes to the class, you’ll have to restart your PowerShell session for the changes to be applied, since the Add-Type cmdlet will only work properly the first time that it is called in a session.
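Here is a sketch of that alternative check; it assumes you have stored the C# class definition string in a variable first ($typeDefinition is a name I made up, not something from the code above):

# Only call Add-Type if the ApplicationServiceInformation type is not already loaded in this session.
if (!('ApplicationServiceInformation' -as [Type]))
{
	Add-Type -TypeDefinition $typeDefinition
}
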

      I hope you found something in here useful.

      Happy coding!

      PowerShell Functions To Delete Old Files And Empty Directories

      October 15th, 2013 20 comments

      I thought I’d share some PowerShell (PS) functions that I wrote for some clean-up scripts at work.  I use these functions to delete files older than a certain date. Note that these functions require PS v3.0; slower PS v2.0 compatible functions are given at the end of this article.

      # Function to remove all empty directories under the given path.
      # If -DeletePathIfEmpty is provided the given Path directory will also be deleted if it is empty.
      # If -OnlyDeleteDirectoriesCreatedBeforeDate is provided, empty folders will only be deleted if they were created before the given date.
      # If -OnlyDeleteDirectoriesNotModifiedAfterDate is provided, empty folders will only be deleted if they have not been written to after the given date.
      function Remove-EmptyDirectories([parameter(Mandatory)][ValidateScript({Test-Path $_})][string] $Path, [switch] $DeletePathIfEmpty, [DateTime] $OnlyDeleteDirectoriesCreatedBeforeDate = [DateTime]::MaxValue, [DateTime] $OnlyDeleteDirectoriesNotModifiedAfterDate = [DateTime]::MaxValue, [switch] $OutputDeletedPaths, [switch] $WhatIf)
      {
          Get-ChildItem -Path $Path -Recurse -Force -Directory | Where-Object { (Get-ChildItem -Path $_.FullName -Recurse -Force -File) -eq $null } | 
              Where-Object { $_.CreationTime -lt $OnlyDeleteDirectoriesCreatedBeforeDate -and $_.LastWriteTime -lt $OnlyDeleteDirectoriesNotModifiedAfterDate } | 
              ForEach-Object { if ($OutputDeletedPaths) { Write-Output $_.FullName } Remove-Item -Path $_.FullName -Force -WhatIf:$WhatIf }
      
          # If we should delete the given path when it is empty, and it is a directory, and it is empty, and it meets the date requirements, then delete it.
          if ($DeletePathIfEmpty -and (Test-Path -Path $Path -PathType Container) -and (Get-ChildItem -Path $Path -Force) -eq $null -and
              ((Get-Item $Path).CreationTime -lt $OnlyDeleteDirectoriesCreatedBeforeDate) -and ((Get-Item $Path).LastWriteTime -lt $OnlyDeleteDirectoriesNotModifiedAfterDate))
          { if ($OutputDeletedPaths) { Write-Output $Path } Remove-Item -Path $Path -Force -WhatIf:$WhatIf }
      }
      
      # Function to remove all files in the given Path that were created before the given date, as well as any empty directories that may be left behind.
      function Remove-FilesCreatedBeforeDate([parameter(Mandatory)][ValidateScript({Test-Path $_})][string] $Path, [parameter(Mandatory)][DateTime] $DateTime, [switch] $DeletePathIfEmpty, [switch] $OutputDeletedPaths, [switch] $WhatIf)
      {
          Get-ChildItem -Path $Path -Recurse -Force -File | Where-Object { $_.CreationTime -lt $DateTime } | 
      		ForEach-Object { if ($OutputDeletedPaths) { Write-Output $_.FullName } Remove-Item -Path $_.FullName -Force -WhatIf:$WhatIf }
          Remove-EmptyDirectories -Path $Path -DeletePathIfEmpty:$DeletePathIfEmpty -OnlyDeleteDirectoriesCreatedBeforeDate $DateTime -OutputDeletedPaths:$OutputDeletedPaths -WhatIf:$WhatIf
      }
      
      # Function to remove all files in the given Path that have not been modified after the given date, as well as any empty directories that may be left behind.
      function Remove-FilesNotModifiedAfterDate([parameter(Mandatory)][ValidateScript({Test-Path $_})][string] $Path, [parameter(Mandatory)][DateTime] $DateTime, [switch] $DeletePathIfEmpty, [switch] $OutputDeletedPaths, [switch] $WhatIf)
      {
          Get-ChildItem -Path $Path -Recurse -Force -File | Where-Object { $_.LastWriteTime -lt $DateTime } | 
      	ForEach-Object { if ($OutputDeletedPaths) { Write-Output $_.FullName } Remove-Item -Path $_.FullName -Force -WhatIf:$WhatIf }
          Remove-EmptyDirectories -Path $Path -DeletePathIfEmpty:$DeletePathIfEmpty -OnlyDeleteDirectoriesNotModifiedAfterDate $DateTime -OutputDeletedPaths:$OutputDeletedPaths -WhatIf:$WhatIf
      }
      
      

      The Remove-EmptyDirectories function removes all empty directories under the given path, and optionally (via the DeletePathIfEmpty switch) the path directory itself if it is empty after cleaning up the other directories. It also takes a couple parameters that may be specified if you only want to delete the empty directories that were created before a certain date, or that haven’t been written to since a certain date.

The Remove-FilesCreatedBeforeDate and Remove-FilesNotModifiedAfterDate functions are very similar to each other.  They delete all files under the given path whose Created Date or Last Written To Date, respectively, is less than the given DateTime.  They then call the Remove-EmptyDirectories function with the provided date to clean up any left-over empty directories.

      To call the last 2 functions, just provide the path to the file/directory that you want it to delete if older than the given date-time.  Here are some examples of calling all the functions:

      # Delete all files created more than 2 days ago.
      Remove-FilesCreatedBeforeDate -Path "C:\Some\Directory" -DateTime ((Get-Date).AddDays(-2)) -DeletePathIfEmpty
      
      # Delete all files that have not been updated in 8 hours.
      Remove-FilesNotModifiedAfterDate -Path "C:\Another\Directory" -DateTime ((Get-Date).AddHours(-8))
      
      # Delete a single file if it is more than 30 minutes old.
      Remove-FilesCreatedBeforeDate -Path "C:\Another\Directory\SomeFile.txt" -DateTime ((Get-Date).AddMinutes(-30))
      
      # Delete all empty directories in the Temp folder, as well as the Temp folder itself if it is empty.
      Remove-EmptyDirectories -Path "C:\SomePath\Temp" -DeletePathIfEmpty
      
# Delete all empty directories that were created before Jan 1, 2014 3PM.
      Remove-EmptyDirectories -Path "C:\SomePath\WithEmpty\Directories" -OnlyDeleteDirectoriesCreatedBeforeDate ([DateTime]::Parse("Jan 1, 2014 15:00:00"))
      
      # See what files and directories would be deleted if we ran the command.
      Remove-FilesCreatedBeforeDate -Path "C:\SomePath\Temp" -DateTime (Get-Date) -DeletePathIfEmpty -WhatIf
      
      # Delete all files and directories in the Temp folder, as well as the Temp folder itself if it is empty, and output all paths that were deleted.
      Remove-FilesCreatedBeforeDate -Path "C:\SomePath\Temp" -DateTime (Get-Date) -DeletePathIfEmpty -OutputDeletedPaths
      
      

      Notice that I am using Get-Date to get the current date and time, and then subtracting the specified amount of time from it in order to get a date-time relative to the current time; you can use any valid DateTime though, such as a hard-coded date of January 1st, 2014 3PM.

      I use these functions in some scripts that we run nightly via a scheduled task in Windows.  Hopefully you find them useful too.

       

      PowerShell v2.0 Compatible Functions

As promised, here are the slower PS v2.0 compatible functions.  The main difference is that they use $_.PSIsContainer in the Where-Object clause rather than using the -File / -Directory Get-ChildItem switches.  The Measure-Command cmdlet shows that using the switches is about 3x faster than using the where clause, but since we are talking about milliseconds here you likely won’t notice the difference unless you are traversing a large file tree (which I happen to be for my scripts that we use to clean up TFS builds).

      # Function to remove all empty directories under the given path.
      # If -DeletePathIfEmpty is provided the given Path directory will also be deleted if it is empty.
      # If -OnlyDeleteDirectoriesCreatedBeforeDate is provided, empty folders will only be deleted if they were created before the given date.
      # If -OnlyDeleteDirectoriesNotModifiedAfterDate is provided, empty folders will only be deleted if they have not been written to after the given date.
      function Remove-EmptyDirectories([parameter(Mandatory=$true)][ValidateScript({Test-Path $_})][string] $Path, [switch] $DeletePathIfEmpty, [DateTime] $OnlyDeleteDirectoriesCreatedBeforeDate = [DateTime]::MaxValue, [DateTime] $OnlyDeleteDirectoriesNotModifiedAfterDate = [DateTime]::MaxValue, [switch] $OutputDeletedPaths, [switch] $WhatIf)
      {
          Get-ChildItem -Path $Path -Recurse -Force | Where-Object { $_.PSIsContainer -and (Get-ChildItem -Path $_.FullName -Recurse -Force | Where-Object { !$_.PSIsContainer }) -eq $null } | 
              Where-Object { $_.CreationTime -lt $OnlyDeleteDirectoriesCreatedBeforeDate -and $_.LastWriteTime -lt $OnlyDeleteDirectoriesNotModifiedAfterDate } | 
              ForEach-Object { if ($OutputDeletedPaths) { Write-Output $_.FullName } Remove-Item -Path $_.FullName -Force -WhatIf:$WhatIf }
      
          # If we should delete the given path when it is empty, and it is a directory, and it is empty, and it meets the date requirements, then delete it.
          if ($DeletePathIfEmpty -and (Test-Path -Path $Path -PathType Container) -and (Get-ChildItem -Path $Path -Force) -eq $null -and
              ((Get-Item $Path).CreationTime -lt $OnlyDeleteDirectoriesCreatedBeforeDate) -and ((Get-Item $Path).LastWriteTime -lt $OnlyDeleteDirectoriesNotModifiedAfterDate))
          { if ($OutputDeletedPaths) { Write-Output $Path } Remove-Item -Path $Path -Force -WhatIf:$WhatIf }
      }
      
      # Function to remove all files in the given Path that were created before the given date, as well as any empty directories that may be left behind.
function Remove-FilesCreatedBeforeDate([parameter(Mandatory=$true)][ValidateScript({Test-Path $_})][string] $Path, [parameter(Mandatory=$true)][DateTime] $DateTime, [switch] $DeletePathIfEmpty, [switch] $OutputDeletedPaths, [switch] $WhatIf)
      {
          Get-ChildItem -Path $Path -Recurse -Force | Where-Object { !$_.PSIsContainer -and $_.CreationTime -lt $DateTime } | 
      		ForEach-Object { if ($OutputDeletedPaths) { Write-Output $_.FullName } Remove-Item -Path $_.FullName -Force -WhatIf:$WhatIf }
          Remove-EmptyDirectories -Path $Path -DeletePathIfEmpty:$DeletePathIfEmpty -OnlyDeleteDirectoriesCreatedBeforeDate $DateTime -OutputDeletedPaths:$OutputDeletedPaths -WhatIf:$WhatIf
      }
      
      # Function to remove all files in the given Path that have not been modified after the given date, as well as any empty directories that may be left behind.
function Remove-FilesNotModifiedAfterDate([parameter(Mandatory=$true)][ValidateScript({Test-Path $_})][string] $Path, [parameter(Mandatory=$true)][DateTime] $DateTime, [switch] $DeletePathIfEmpty, [switch] $OutputDeletedPaths, [switch] $WhatIf)
      {
          Get-ChildItem -Path $Path -Recurse -Force | Where-Object { !$_.PSIsContainer -and $_.LastWriteTime -lt $DateTime } | 
      	ForEach-Object { if ($OutputDeletedPaths) { Write-Output $_.FullName } Remove-Item -Path $_.FullName -Force -WhatIf:$WhatIf }
          Remove-EmptyDirectories -Path $Path -DeletePathIfEmpty:$DeletePathIfEmpty -OnlyDeleteDirectoriesNotModifiedAfterDate $DateTime -OutputDeletedPaths:$OutputDeletedPaths -WhatIf:$WhatIf
      }
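
If you want to reproduce the Measure-Command comparison mentioned above on your own file tree, something like this works (the path is just a placeholder); each call returns a TimeSpan whose TotalMilliseconds you can compare:

# PS v3.0+ -Directory switch vs. the PS v2.0 compatible Where-Object clause.
Measure-Command { Get-ChildItem -Path "C:\SomePath" -Recurse -Force -Directory }
Measure-Command { Get-ChildItem -Path "C:\SomePath" -Recurse -Force | Where-Object { $_.PSIsContainer } }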
      
      

      Happy coding!

      Have Your NuGet Package Install Itself As A Development Dependency

      September 18th, 2013 3 comments

      The Problem

      I recently blogged about a NuGet package I made that allows you to easily turn your own projects into a NuGet package, making it easy to share your work with the world.  One problem I ran into with this was that if somebody used my NuGet package to create their package, their NuGet package listed my NuGet package as a dependency.  This meant that when they distributed their package to others, it would install both their package and mine.  Obviously this is undesirable, since their library has no dependency on my package; my package was meant purely to help them with the development process.

      Unfortunately there wasn’t much I could do about this; that is, until the release of NuGet 2.7 which came out a few weeks ago.  You can see from the release notes that they added a new developmentDependency attribute that can be used.  This made things a bit better because it allowed users who installed my package to go into their project’s packages.config file, find the element corresponding to my package, and add the developmentDependency=”true” attribute to it.

So this was better, but it still kinda sucked because it required users to do this step manually, and most of them likely aren’t even aware of the problem or that there is a fix for it.  When users (myself included) install a package, they want it to just work; which is why I created a fix for this.

       

      The Fix

      Update – As of NuGet 2.8 there is a built-in way to do the fix below. See this post for more info.

      The nice thing about NuGet packages is that you can define PowerShell scripts that can run when users install and uninstall your packages, as is documented near the bottom of this page.  I’ve created a PowerShell script that will automatically go in and adjust the project’s packages.config file to mark your package as a development dependency.  This means there is no extra work for the user to do.

      The first thing you need to do (if you haven’t already) is include an Install.ps1 script in your NuGet package’s .nuspec file.  If you don’t currently use a .nuspec file, check out this page for more information.  I also include a sample .nuspec file at the end of this post for reference.  The line to add to your .nuspec file will look something like this:

<file src="NuGetFiles\Install.ps1" target="tools\Install.ps1" />

      and then the contents of Install.ps1 should look like this:

      param($installPath, $toolsPath, $package, $project)
      
      # Edits the project's packages.config file to make sure the reference to the given package uses the developmentDependency="true" attribute.
      function Set-PackageToBeDevelopmentDependency($PackageId, $ProjectDirectoryPath)
      {
          function Get-XmlNamespaceManager($XmlDocument, [string]$NamespaceURI = "")
          {
              # If a Namespace URI was not given, use the Xml document's default namespace.
      	    if ([string]::IsNullOrEmpty($NamespaceURI)) { $NamespaceURI = $XmlDocument.DocumentElement.NamespaceURI }	
      
      	    # In order for SelectSingleNode() to actually work, we need to use the fully qualified node path along with an Xml Namespace Manager, so set them up.
      	    [System.Xml.XmlNamespaceManager]$xmlNsManager = New-Object System.Xml.XmlNamespaceManager($XmlDocument.NameTable)
      	    $xmlNsManager.AddNamespace("ns", $NamespaceURI)
              return ,$xmlNsManager		# Need to put the comma before the variable name so that PowerShell doesn't convert it into an Object[].
          }
      
          function Get-FullyQualifiedXmlNodePath([string]$NodePath, [string]$NodeSeparatorCharacter = '.')
          {
              return "/ns:$($NodePath.Replace($($NodeSeparatorCharacter), '/ns:'))"
          }
      
          function Get-XmlNodes($XmlDocument, [string]$NodePath, [string]$NamespaceURI = "", [string]$NodeSeparatorCharacter = '.')
          {
      	    $xmlNsManager = Get-XmlNamespaceManager -XmlDocument $XmlDocument -NamespaceURI $NamespaceURI
      	    [string]$fullyQualifiedNodePath = Get-FullyQualifiedXmlNodePath -NodePath $NodePath -NodeSeparatorCharacter $NodeSeparatorCharacter
      
      	    # Try and get the nodes, then return them. Returns $null if no nodes were found.
      	    $nodes = $XmlDocument.SelectNodes($fullyQualifiedNodePath, $xmlNsManager)
      	    return $nodes
          }
      
          # Get the path to the project's packages.config file.
          Write-Debug "Project directory is '$ProjectDirectoryPath'."
          $packagesConfigFilePath = Join-Path $ProjectDirectoryPath "packages.config"
      
          # If we found the packages.config file, try and update it.
          if (Test-Path -Path $packagesConfigFilePath)
          {
              Write-Debug "Found packages.config file at '$packagesConfigFilePath'."
      
              # Load the packages.config xml document and grab all of the <package> elements.
              $xmlFile = New-Object System.Xml.XmlDocument
              $xmlFile.Load($packagesConfigFilePath)
              $packageElements = Get-XmlNodes -XmlDocument $xmlFile -NodePath "packages.package"
      
              Write-Debug "Packages.config contents before modification are:`n$($xmlFile.InnerXml)"
      
              if (!($packageElements))
              {
                  Write-Debug "Could not find any <package> elements in the packages.config xml file '$packagesConfigFilePath'."
                  return
              }
      
              # Add the developmentDependency attribute to the NuGet package's entry.
              $packageElements | Where-Object { $_.id -eq $PackageId } | ForEach-Object { $_.SetAttribute("developmentDependency", "true") }
      
              # Save the packages.config file back now that we've changed it.
              $xmlFile.Save($packagesConfigFilePath)
          }
          # Else we couldn't find the packages.config file for some reason, so just log it and exit.
          else
          {
              Write-Debug "Could not find packages.config file at '$packagesConfigFilePath'."
          }
      }
      
      # Set this NuGet Package to be installed as a Development Dependency.
      Set-PackageToBeDevelopmentDependency -PackageId $package.Id -ProjectDirectoryPath ([System.IO.Directory]::GetParent($project.FullName))
      

      And that’s it.  Basically this script will be run after your package is installed; it will parse the project’s packages.config xml file looking for the element with your package’s ID, and then add the developmentDependency="true" attribute to that element.  And of course, if you want to add more code to the end of the file to do additional work, go ahead.

      So now your users won’t have to manually edit their packages.config file, and your users’ users won’t have additional, unnecessary dependencies installed.

       

      More Info

      As promised, here is a sample .nuspec file for those of you that are not familiar with them and what they should look like.  This is actually the .nuspec file I use for my package mentioned at the start of this post.  You can see that I include the Install.ps1 file near the bottom of the file.

      <?xml version="1.0" encoding="utf-8"?>
      <package xmlns="http://schemas.microsoft.com/packaging/2011/08/nuspec.xsd">
        <metadata>
          <id>CreateNewNuGetPackageFromProjectAfterEachBuild</id>
          <version>1.4.2</version>
          <title>Create New NuGet Package From Project After Each Build</title>
          <authors>Daniel Schroeder,iQmetrix</authors>
          <owners>Daniel Schroeder,iQmetrix</owners>
          <licenseUrl>https://newnugetpackage.codeplex.com/license</licenseUrl>
          <projectUrl>https://newnugetpackage.codeplex.com/wikipage?title=NuGet%20Package%20To%20Create%20A%20NuGet%20Package%20From%20Your%20Project%20After%20Every%20Build</projectUrl>
          <requireLicenseAcceptance>false</requireLicenseAcceptance>
          <description>Automatically creates a NuGet package from your project each time it builds. The NuGet package is placed in the project's output directory.
      	If you want to use a .nuspec file, place it in the same directory as the project's project file (e.g. .csproj, .vbproj, .fsproj).
      	This adds a PostBuildScripts folder to your project to house the PowerShell script that is called from the project's Post-Build event to create the NuGet package.
      	If it does not seem to be working, check the Output window for any errors that may have occurred.</description>
          <summary>Automatically creates a NuGet package from your project each time it builds.</summary>
          <releaseNotes>Updated to use latest version of New-NuGetPackage.ps1.</releaseNotes>
          <copyright>Daniel Schroeder 2013</copyright>
          <tags>Auto Automatic Automatically Build Pack Create New NuGet Package From Project After Each Build On PowerShell Power Shell .nupkg new nuget package NewNuGetPackage New-NuGetPackage</tags>
        </metadata>
        <files>
          <file src="..\New-NuGetPackage.ps1" target="content\PostBuildScripts\New-NuGetPackage.ps1" />
          <file src="Content\NuGet.exe" target="content\PostBuildScripts\NuGet.exe" />
          <file src="Content\BuildNewPackage-RanAutomatically.ps1" target="content\PostBuildScripts\BuildNewPackage-RanAutomatically.ps1" />
          <file src="Content\UploadPackage-RunManually.ps1" target="content\PostBuildScripts\UploadPackage-RunManually.ps1" />
          <file src="Content\UploadPackage-RunManually.bat" target="content\PostBuildScripts\UploadPackage-RunManually.bat" />
          <file src="tools\Install.ps1" target="tools\Install.ps1" />
          <file src="tools\Uninstall.ps1" target="tools\Uninstall.ps1" />
        </files>
      </package>
      

       

      Happy coding!

      PowerShell Needs A Centralized Package Management Solution

      September 9th, 2013 4 comments

      TL;DR – PowerShell needs centralized package management.  Please go up-vote this request to have it added to PowerShell.


      I love PowerShell, and I love writing reusable PowerShell modules.  They work great when I am writing scripts for myself.  The problem comes in when I write a script that depends on some modules, and I then want to share that script with others.  I basically have 2 options:

      1. Track down all of the module files that the script depends on, zip them all up, and send them to the recipient along with instructions such as, “Navigate to this folder on your PC, create a new folder with this name, copy file X to this location, rinse, repeat…”.
      2. Track down all of the module files that the script depends on and copy-paste their contents directly into the top of the script file, so I just send the user one very large file.

      Neither of these solutions is ideal.  Maybe I’m missing something?  In my opinion, PowerShell really needs centralized package management; something similar to Ruby Gems would be great.  Basically, a website where users can upload their modules with a unique ID, and then at the top of your PowerShell script you just list the modules that the script depends on.  If those modules are not installed on that PC yet, they would automatically be downloaded and installed.  This would make PowerShell so much more convenient, and I believe it would help drive more users to write reusable modules and avoid duplicating modules that others have already written (likely better).
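
      To make the idea concrete, here is a purely hypothetical sketch of what such a dependency declaration and bootstrap step might look like.  The module names and the download step are made up for illustration; nothing like this existed in PowerShell at the time of writing.

      # Hypothetical module IDs that this script depends on (made-up names).
      $requiredModules = @('StringUtilities', 'LoggingHelpers')

      foreach ($moduleName in $requiredModules)
      {
          # If the module is not installed on this PC yet, a central repository could download and install it here.
          if (-not (Get-Module -ListAvailable -Name $moduleName))
          {
              Write-Host "Module '$moduleName' is not installed; it would be downloaded and installed automatically."
          }
          else
          {
              Import-Module -Name $moduleName
          }
      }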

      In order for this to work though, it has to be baked directly into the PowerShell architecture by the PowerShell team; it’s not something that a 3rd party could do.  So to try and bring this feature request to Microsoft’s attention, I have created a Suggestion on the MS Connect site.  Please go up-vote it.

      Before thinking to create a feature request for this (duh), I spammed some of my favourite PowerShell Twitter accounts (@JamesBru @ShayLevy @dfinke @PowerShellMag @StevenMurawski @JeffHicks @ScriptingGuys) to bring it to their attention and get their thoughts; sorry about that guys!  This blog’s comments are a better forum than Twitter for discussing these types of things.

      If you have thoughts on centralized package management for PowerShell, or have a better solution for dealing with distributing scripts that depend on modules, please leave a comment below. Thanks.

      Happy coding!

      [Update]

      While PowerShell does not provide a native module management solution, Joel “Jaykul” Bennett has written one and all of the modules are hosted at http://poshcode.org/, although I believe it can download modules from other sources as well (e.g. GitHub or any other URL).  One place that it cannot download files from is CodePlex, since CodePlex does not provide direct download links to the latest versions of files (the downloads are served through Javascript).  Please go up-vote this issue and this issue to try and get this restriction removed.

      Getting Custom TFS Checkin Policies To Work When Committing From The Command Line (i.e. tf checkin)

      September 6th, 2013 1 comment

      Update – In this newer blog post I show how to have your checkin policies automatically update the registry keys shown below. If you are not the person creating the checkin policies though, then you will still need to use the technique shown in this post.

      I frequently check code into TFS from the command line, instead of from Visual Studio (VS), for a number of reasons:

      1. I prefer the VS 2010 style of checkin window over the VS 2012 one, and the 2010 style window is still displayed when checking in from the command line.
      2. I use AutoHotkey to pop the checkin window via a keyboard shortcut, so I don’t need to have VS open to check files in (or navigate to the pending changes window within VS).
        – Aside: Just add this one line to your AutoHotkey script for this functionality. This sets the hotkey to Ctrl+Windows+C to pop the checkin window, but feel free to change it to something else.
        ^#C UP::Run, tf checkin
        
      3. Other programs, such as Git-Tf and the Windows Explorer shell extension, call the TFS checkin window via the command line, so you don’t have the option to use the VS checkin pending changes window.

              The Problem

            The problem is that if you are using a VSIX package to deploy your custom checkin policies, the custom checkin policies will only work when checking code in via the VS GUI, and not when doing it via the command line.  If you try to do it via the command line, the checkin window spits out an “Internal error” for each custom checkin policy that you have, so your policies don’t run and you have to override them.

            (Screenshot: the checkin window showing an “Internal error” for each custom checkin policy)
            P. Kelly mentions this problem on his blog post, and has some other great information around custom checkin policies in TFS.
            The old TFS 2010 Power Tools had a feature for automatically distributing the checkin policies to your team, but unfortunately this feature was removed from the TFS 2012 Power Tools.  Instead, the Microsoft-recommended way to distribute your custom checkin policies is now through a VSIX package, which is nice because it can use the Extensions and Updates functionality built into VS and automatically notify users of updates (without requiring users to install the TFS Power Tools).  The problem is that VSIX packages are sandboxed and are not able to update the necessary registry key to make custom checkin policies work from the command line.  I originally posted this question on the MSDN forums, then I logged a bug about this on the Connect site, but MS closed it as “By Design”. Maybe if it gets enough up-votes they will re-open it (so please go up-vote it).

           

          The Workaround

          The good news though is that there is a work around.  You simply need to copy your custom checkin policy entry from the key:

          "HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\11.0_Config\TeamFoundation\SourceControl\Checkin Policies"

          to:

          "HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\VisualStudio\11.0\TeamFoundation\SourceControl\Checkin Policies" (omit the Wow6432Node on 32-bit Windows).

           

          Not Perfect, but Better

          The bad news is that every developer (who uses the command line checkin window) will need to copy this registry value on their local machine.  Furthermore, they will need to do it every time they update their checkin policies to a new version.

          While this sucks, I’ve made it a bit better by creating a little PowerShell script to automate this task for you; here it is:

          # This script copies the required registry value so that the checkin policies will work when doing a TFS checkin from the command line.
          
          # Turn on Strict Mode to help catch syntax-related errors.
          # 	This must come after a script's/function's param section.
          # 	Forces a function to be the first non-comment code to appear in a PowerShell Module.
          Set-StrictMode -Version Latest
          
          $ScriptBlock = {
              # The name of the Custom Checkin Policy Entry in the Registry Key.
              $CustomCheckinPolicyEntryName = 'YourCustomCheckinPolicyEntryNameGoesHere'
          
              # Get the Registry Key Entry that holds the path to the Custom Checkin Policy Assembly.
              $CustomCheckinPolicyRegistryEntry = Get-ItemProperty -Path 'HKCU:\Software\Microsoft\VisualStudio\11.0_Config\TeamFoundation\SourceControl\Checkin Policies' -Name $CustomCheckinPolicyEntryName
              $CustomCheckinPolicyEntryValue = $CustomCheckinPolicyRegistryEntry.($CustomCheckinPolicyEntryName)
          
              # Create a new Registry Key Entry for the iQ Checkin Policy Assembly so they will work from the command line (as well as from Visual Studio).
              if ([Environment]::Is64BitOperatingSystem)
              { $HKLMKey = 'HKLM:\SOFTWARE\Wow6432Node\Microsoft\VisualStudio\11.0\TeamFoundation\SourceControl\Checkin Policies' }
              else
              { $HKLMKey = 'HKLM:\SOFTWARE\Microsoft\VisualStudio\11.0\TeamFoundation\SourceControl\Checkin Policies' }
              Set-ItemProperty -Path $HKLMKey -Name $CustomCheckinPolicyEntryName -Value $CustomCheckinPolicyEntryValue
          }
          
          # Run the script block as admin so it has permissions to modify the registry.
          Start-Process -FilePath PowerShell -Verb RunAs -ArgumentList "-Command $ScriptBlock"
          

          Note that you will need to update the script to change YourCustomCheckinPolicyEntryNameGoesHere to your specific entry’s name.  Also, the [Environment]::Is64BitOperatingSystem check requires PowerShell v3; if you have a version lower than v3, there are other ways to check whether the machine is 64-bit or not.
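
          For example, here are two possible alternatives that work on older PowerShell versions; treat this as a sketch and verify it in your own environment:

          # Option 1: Ask WMI for the OS architecture (available on Windows Vista / Server 2008 and later).
          $is64BitOperatingSystem = (Get-WmiObject -Class Win32_OperatingSystem).OSArchitecture -eq '64-bit'

          # Option 2: The 'ProgramFiles(x86)' environment variable only exists on 64-bit Windows.
          $is64BitOperatingSystem = Test-Path 'env:ProgramFiles(x86)'

          if ($is64BitOperatingSystem)
          { $HKLMKey = 'HKLM:\SOFTWARE\Wow6432Node\Microsoft\VisualStudio\11.0\TeamFoundation\SourceControl\Checkin Policies' }
          else
          { $HKLMKey = 'HKLM:\SOFTWARE\Microsoft\VisualStudio\11.0\TeamFoundation\SourceControl\Checkin Policies' }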

          If you have developers that aren’t familiar with how to run a PowerShell script, then you can include the following batch script (.cmd/.bat file extension) in the same directory as the PowerShell script, and they can run this instead by simply double-clicking it to call the PowerShell script:

          SET ThisScriptsDirectory=%~dp0
          SET PowerShellScriptPath=%ThisScriptsDirectory%UpdateCheckinPolicyInRegistry.ps1
          
          :: Run the powershell script to copy the registry key into other areas of the registry so that the custom checkin policies will work when checking in from the command line.
          PowerShell -NoProfile -ExecutionPolicy Bypass -Command "& '%PowerShellScriptPath%'"
          

          Note that this batch script assumes you named the PowerShell script “UpdateCheckinPolicyInRegistry.ps1”, so if you use a different file name be sure to update it here too.

          Your developers will still need to run this script every time after they update their checkin policies, but it’s easier and less error-prone than manually editing the registry.  If they want to take it a step further they could even set up a Scheduled Task to run the script once a day or so, or even implement it as a Group Policy so it automatically happens for everyone, depending on how often your company updates their checkin policies and how many developers you have.
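
          For example, here is a rough sketch (using the built-in schtasks.exe) of how such a daily Scheduled Task could be registered; the task name, time, and script path are placeholders, and the command should be run from an elevated prompt:

          # Placeholder path to the registry-update script; adjust as needed.
          $scriptPath = 'C:\Scripts\UpdateCheckinPolicyInRegistry.ps1'

          # Create a task that runs the script every day at 9:00 AM with elevated rights.
          schtasks.exe /Create /F /RL HIGHEST /SC DAILY /ST 09:00 /TN "UpdateCheckinPolicyInRegistry" `
              /TR "PowerShell -NoProfile -ExecutionPolicy Bypass -File $scriptPath"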

          Ideally I would like to simply be able to run this script during/after the VSIX installer.  I have posted a question on Stack Overflow to see if this is possible, but from everything I’ve read so far it doesn’t look like it; maybe in the next generation of VSIX though.  If you have any other ideas on how to automate this, I would love to hear them.

          Happy coding!

          Accessing PowerShell Properties and Variables with Periods (and other special characters) in their Name

          September 5th, 2013 No comments

          TL;DR

          If your PowerShell variable name contains special characters, wrap it in curly braces to get/set its value.  If your PowerShell property name contains special characters, wrap it in curly braces or in quotes:

          # Variable name with special characters
          $VariableName.That.Contains.Periods			# This will NOT work.
          ${VariableName.That.Contains.Periods}		# This will work.
          
          $env:ProgramFiles(x86)			# This will NOT work, because parentheses are special characters.
          ${env:ProgramFiles(x86)}		# This will work.
          
          # Property name with special characters
          $SomeObject.APropertyName.That.ContainsPeriods		# This will NOT work.
          $SomeObject.{APropertyName.That.ContainsPeriods}	# This will work.
          $SomeObject.'APropertyName.That.ContainsPeriods'	# This will also work.
          $SomeObject."APropertyName.That.ContainsPeriods"	# This will work too.
          
          # Property name with special characters stored in a variable
          $APropertyNameWithSpecialCharacters = 'APropertyName.That.ContainsPeriods'
          $SomeObject.$APropertyNameWithSpecialCharacters		# This will NOT work.
          $SomeObject.{$APropertyNameWithSpecialCharacters}	# This will NOT work.
          $SomeObject.($APropertyNameWithSpecialCharacters)	# This will work.
          $SomeObject."$APropertyNameWithSpecialCharacters"	# This will also work.
          $SomeObject.'$APropertyNameWithSpecialCharacters'	# This will NOT work.
          

           

          More Information

          I was recently working on a PowerShell script to get the values of some entries in the registry.  This is simple enough:

          Get-ItemProperty -Path 'HKCU:\Software\Microsoft\VisualStudio\11.0_Config\TeamFoundation\SourceControl\Checkin Policies' -Name 'TF.iQmetrix.CheckinPolicies'
          

          If we run this command, this is what we get back:

          TF.iQmetrix.CheckinPolicies : C:\Users\Dan Schroeder\AppData\Local\Microsoft\VisualStudio\11.0\Extensions\mwlu1noz.4t5\TF.iQmetrix.CheckinPolicies.dll
          PSPath                      : Microsoft.PowerShell.Core\Registry::HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\11.0_Config\TeamFoundation\SourceControl\Checkin Policies
          PSParentPath                : Microsoft.PowerShell.Core\Registry::HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\11.0_Config\TeamFoundation\SourceControl
          PSChildName                 : Checkin Policies
          PSDrive                     : HKCU
          PSProvider                  : Microsoft.PowerShell.Core\Registry
          

          So the actual value I’m after is stored in the “TF.iQmetrix.CheckinPolicies” property of the object returned by Get-ItemProperty; notice that this property name has periods in it.  So let’s store this object in a variable to make it easier to access its properties, and do a quick Get-Member on it just to show some more details:

          $RegistryEntry = Get-ItemProperty -Path 'HKCU:\Software\Microsoft\VisualStudio\11.0_Config\TeamFoundation\SourceControl\Checkin Policies' -Name 'TF.iQmetrix.CheckinPolicies'
          $RegistryEntry | Get-Member
          

          And this is what Get-Member shows us:

             TypeName: System.Management.Automation.PSCustomObject
          
          Name                        MemberType   Definition                                                                                                                                                          
          ----                        ----------   ----------                                                                                                                                                          
          Equals                      Method       bool Equals(System.Object obj)                                                                                                                                      
          GetHashCode                 Method       int GetHashCode()                                                                                                                                                   
          GetType                     Method       type GetType()                                                                                                                                                      
          ToString                    Method       string ToString()                                                                                                                                                   
          PSChildName                 NoteProperty System.String PSChildName=Checkin Policies                                                                                                                          
          PSDrive                     NoteProperty System.Management.Automation.PSDriveInfo PSDrive=HKCU                                                                                                               
          PSParentPath                NoteProperty System.String PSParentPath=Microsoft.PowerShell.Core\Registry::HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\11.0_Config\TeamFoundation\SourceControl           
          PSPath                      NoteProperty System.String PSPath=Microsoft.PowerShell.Core\Registry::HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\11.0_Config\TeamFoundation\SourceControl\Checkin Policies
          PSProvider                  NoteProperty System.Management.Automation.ProviderInfo PSProvider=Microsoft.PowerShell.Core\Registry                                                                             
          TF.iQmetrix.CheckinPolicies NoteProperty System.String TF.iQmetrix.CheckinPolicies=C:\Users\Dan Schroeder\AppData\Local\Microsoft\VisualStudio\11.0\Extensions\mwlu1noz.4t5\TF.iQmetrix.CheckinPolicies.dll 
          

           

          So in PowerShell ISE I type “$RegistryEntry.” and intellisense pops up showing me that TF.iQmetrix.CheckinPolicies is indeed a property on this object that I can access.

          (Screenshot: PowerShell ISE intellisense listing TF.iQmetrix.CheckinPolicies as a property)

          So I try and display the value of that property to the console using:

          $RegistryEntry = Get-ItemProperty -Path 'HKCU:\Software\Microsoft\VisualStudio\11.0_Config\TeamFoundation\SourceControl\Checkin Policies' -Name 'TF.iQmetrix.CheckinPolicies'
          $RegistryEntry.TF.iQmetrix.CheckinPolicies
          

          But nothing is displayed.

          While PowerShell ISE does color-code the line “$RegistryEntry.TF.iQmetrix.CheckinPolicies” so that the object color is different than the property color, if you just look at it in plain text, something clearly looks off about it.  How does PowerShell know that the property name is “TF.iQmetrix.CheckinPolicies”, and not that “TF” is a property with an “iQmetrix” property on it, with a “CheckinPolicies” property on that?  Well, it doesn’t.

          I did some Googling and looked on Stack Overflow, but couldn’t find a solution to this problem.  I found slightly related posts involving environment variables with periods in their name, but that solution did not work in this case.  So after some random trial-and-error I stumbled onto the solution: you have to wrap the property name in curly braces:

          $RegistryEntry.TF.iQmetrix.CheckinPolicies		# This is WRONG. Nothing will be returned.
          $RegistryEntry.{TF.iQmetrix.CheckinPolicies}	# This is RIGHT. The property's value will returned.
          

           

          I later refactored my script to store the “TF.iQmetrix.CheckinPolicies” name in a variable and found that I couldn’t use the curly braces anymore.  After more trial-and-error I discovered that using parentheses instead works:

          $EntryName = 'TF.iQmetrix.CheckinPolicies'
          
          $RegistryEntry.$EntryName		# This is WRONG. Nothing will be returned.
          $RegistryEntry.{$EntryName}		# This is WRONG. Nothing will be returned.
          $RegistryEntry.($EntryName)		# This is RIGHT. The property's value will be returned.
          $RegistryEntry."$EntryName"		# This is RIGHT too. The property's value will be returned.
          

           

          So there you have it.  If for some reason you have a variable or property name that contains periods, wrap it in curly braces; if the name is stored in a variable, wrap the variable in parentheses (or double quotes) instead.

          Hopefully this makes its way to the top of the Google search results so you don’t waste as much time on it as I did.

          Happy coding!

          Add ability to add tabs to the end of a line in Windows PowerShell ISE

          June 24th, 2013 1 comment

          In the preamble of an earlier post I mentioned that one of the little things that bugs me about Windows PowerShell ISE is that you can add tabs to the start of a line, but not to the end of a line.  This is likely because it would interfere with the tab-completion feature.  I still like to be able to put tabs on the end of my code lines though so that I can easily line up my comments, like this:

          $processes = Get-Process										# Get all of the processes.
          $myProcesses = $processes | Where {$_.Company -eq "MyCompany" }	# Get my company's processes.
          

           

          We can add the functionality to allow us to insert a tab at the end of a line, but it involves modifying the PowerShell ISE profile, so opening that file for editing is the first step.

          To edit your PowerShell ISE profile:

          1. Open Windows PowerShell ISE (not Windows PowerShell, as we want to edit the ISE profile instead of the regular PowerShell profile).
          2. In the Command window type: psedit $profile

            If you get an error that it cannot find the path, then first type the following to create the file before trying #2 again: New-Item $profile -ItemType File -Force

          And now that you have your PowerShell ISE profile file open for editing, you can append the following code to it:

          # Add a new option in the Add-ons menu to insert a tab.
          if (!($psISE.CurrentPowerShellTab.AddOnsMenu.Submenus | Where-Object { $_.DisplayName -eq "Insert Tab" }))
          {
              $psISE.CurrentPowerShellTab.AddOnsMenu.Submenus.Add("Insert Tab",{$psISE.CurrentFile.Editor.InsertText("`t")},"Ctrl+Shift+T")
          }
          

           

          This will allow you to use Ctrl+Shift+T to insert a tab anywhere in the editor, including at the end of a line.  I wanted to use Shift+Tab, but apparently that shortcut is already used by the editor somewhere, even though it doesn’t seem to do anything when I press it.  Feel free to change the keyboard shortcut to something else if you like.
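
          For example, if you would rather use a different shortcut, only the last argument needs to change (Ctrl+Alt+T here is an arbitrary choice):

          # Same Add-ons menu entry as above, just bound to a different keyboard shortcut.
          $psISE.CurrentPowerShellTab.AddOnsMenu.Submenus.Add("Insert Tab",{$psISE.CurrentFile.Editor.InsertText("`t")},"Ctrl+Alt+T")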

          I hope this helps make your PowerShell ISE experience a little better.

          Happy coding!

          Automatically Create Your Project’s NuGet Package Every Time It Builds, Via NuGet

          June 22nd, 2013 15 comments

          So you’ve got a super awesome library/assembly that you want to share with others, but you’re too lazy to actually use NuGet to package it up and upload it to the gallery; or maybe you don’t know how to create a NuGet package and don’t have the time or desire to learn.  Well, my friends, now this can all be handled for you automatically.

          A couple weeks ago I posted about a new PowerShell script that I wrote and put up on CodePlex, called New-NuGetPackage PowerShell Script, to make creating new NuGet packages quick and easy.  Well, I’ve taken that script one step further and use it in a new NuGet package called Create New NuGet Package From Project After Each Build (real creative name, right) that you can add to your Visual Studio projects.  The NuGet package will, you guessed it, pack your project and its dependencies up into a NuGet package (i.e. .nupkg file) and place it in your project’s output directory beside the generated dll/exe file.  Now creating your own NuGet package is as easy as adding a NuGet package to your project, which if you’ve never done before is dirt simple.

          I show how to add the NuGet package to your Visual Studio project in the New-NuGetPackage PowerShell Script documentation (hint: search for “New NuGet Package” (include quotes) to find it in the VS NuGet Package Manager search results), as well as how you can push your package to the NuGet Gallery in just a few clicks.

          Here’s a couple screenshots from the documentation on installing the NuGet Package:

          (Screenshots: navigating to Manage NuGet Packages, and installing the NuGet package from the Package Manager)

          Here you can see the new PostBuildScripts folder it adds to your project, and that when you build your project, a new .nupkg file is created in the project’s Output directory alongside the dll/exe.

          (Screenshots: the files added to the project, and the NuGet package in the output directory)

          So now that packaging your project up in a NuGet package can be fully automated with about 30 seconds of effort, and you can push it to the NuGet Gallery in a few clicks, there is no reason for you to not share all of the awesome libraries you write.

          Happy coding!

          PowerShell ISE: Multi-line Comment and Uncomment Done Right, and other ISE GUI must haves

          June 19th, 2013 16 comments

          I’ve written some code that you can add to your ISE profile that adds keyboard shortcuts to quickly comment and uncomment lines in PowerShell ISE.  So you can quickly turn this:

          This is some
          	code and here is
          some more code.
          

          into this:

          #This is some
          #	code and here is
          #some more code.
          

          and back again.

          Feel free to skip the Preamble and get right to the good stuff.

           

          Preamble

          I’ve only been writing PowerShell (PS) for about 6 months now, and have a love-hate relationship with it.  It is simply a wonderful tool…once you understand how it works and have learnt some of the nuances.  I’ve gotten hung up for hours on end with things that should be simple, but aren’t.  For example, if you have an array of strings, but the array actually only contains a single string, when you go to iterate over the array, instead of giving you the string it will iterate over the characters in the string…but if you have multiple strings in your array then everything works fine (btw the trick is you have to explicitly cast your array to a string array when iterating over it).  This is only one small example, but I’ve found I’ve hit many little gotchas like this since I started with PS.  So PS is a great tool, but it has a deceptively steep learning curve in my opinion; it’s easy to get started with, especially if you have a .Net background, but there are many small roadblocks that just shouldn’t be there.  Luckily, we have Stack Overflow.
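
          Here is a rough sketch of that single-string gotcha; once the one-element array gets unrolled into a plain string, operations that expect an array (like indexing) start working on characters instead (the function and file name are made up):

          function Get-FileNames { return @('OnlyOneFile.txt') }   # Intended to return an array of file names.

          $fileNames = Get-FileNames       # PowerShell unrolls the one-element array, so $fileNames is now a plain string.
          $fileNames[0]                    # Returns the character 'O' instead of the first file name.

          [string[]]$fileNames = Get-FileNames   # Explicitly casting to a string array keeps it an array.
          $fileNames[0]                          # Returns 'OnlyOneFile.txt' as expected.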

          Anyways, as a PS newb one of the first things I did was go look for a nice editor to work in; intellisense was a must.  First I tried PowerShell ISE v3 since it comes with Windows out of the box, but was quickly turned off by how featureless the GUI is.  Here’s a quick list of missing UI features that immediately turned me off of ISE’s Script Pane:

          1. No keyboard shortcut to quickly comment/uncomment code (go up-vote to get this added).
          2. No “Save All Files” keyboard shortcut (go up-vote to get this added).
          3. No ability to automatically reopen files that were open when I closed ISE; there’s the Recent Documents menu, but that’s an extra 10 clicks every time I open ISE (go up-vote to get this added).
          4. Can not split the tab windows to show two files side by side (go up-vote to get this added).
          5. Can not drag a tab out of ISE to show it on another monitor (go up-vote to get this added).
          6. Can not enter tabs on the end of lines; I do this all of the time to line up my comments placed on the end of the code line. I’m guessing this is “by design” though to allow the tab-completion to work (I show a workaround for this in this post).
          7. Find/Replace window does not have an option to wrap around the end of the file; it will only search down or up depending on if the Search Up checkbox is checked (go up-vote to get this added).
          8. Can’t simply use Ctrl+F3 to search for the current/selected word/text; you have to use the actual Find window (go up-vote to get this added).
          9. When you perform an undo/redo, the caret and view don’t jump to the text being undone/redone, so if the text being changed is outside of the viewable area you can’t see what is being changed (up-vote to get this fixed).
          10.   Can not re-arrange tabs; you have to close and reopen them if you want to change their order (go up-vote to get this added).
          11.   The intellisense sometimes becomes intermittent or stops all together and you have to restart ISE (go up-vote to get this fixed).
          12.   Double-clicking a cmdlet or variable name does not select the entire cmdlet/variable name; e.g. doesn’t fully select “Get-Help” or “$variable” (go up-vote to get this added).

          It took me all of 5 minutes to say “ISE is not a mature enough editor for me”; I guess I’ve been spoiled by working in Visual Studio for so many years.  So I went and found PowerGUI, which was pretty good and I liked it quite a bit at first.  It’s been a while since I’ve used it so honestly I can’t remember all of the reasons why I decided to switch away from it.  I remember one problem was having to constantly start a new PS session in order to pick up changes to functions that I made (I think they had a button for that at least), as well as intellisense not being reliable, and having problems with debugging.  Anyways, I decided to switch to PowerShellPlus and was much happier with it.  It wasn’t perfect either; I still had problems with intellisense and debugging, but I was happy overall.  I especially liked that I could search for and download other people’s scripts easily from it, which is great for learning.  As I kept using it though, it kept taking longer and longer to load.  After about 3 months I found myself waiting about a minute for it to open, and then once it was open, another 15 seconds or so to open all of my previously open tabs; and I have an SSD.  So I thought I would give ISE another shot, mainly because it is already installed by default and I now know that I can customize it somewhat with the add-ons.

           

          Other Must Have ISE GUI Add-ons

          After looking for not too long, I found posts on the PowerShell Team’s blog which address the Save All and Save/Restore ISE State issues (#2 and #3 in my list above).  These are must haves, and I provide them alongside my code in the last section below.
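
          For reference, here is a minimal sketch of the “Save All” idea; this is my own simplified version, not the PowerShell Team’s original code, so see their posts for the full versions:

          # Add a "Save All Files" option to the Add-ons menu that saves every open, unsaved file.
          # Change the keyboard shortcut if it conflicts with one you already use.
          if (!($psISE.CurrentPowerShellTab.AddOnsMenu.Submenus | Where-Object { $_.DisplayName -eq "Save All Files" }))
          {
              $psISE.CurrentPowerShellTab.AddOnsMenu.Submenus.Add("Save All Files",
                  { $psISE.CurrentPowerShellTab.Files | Where-Object { !$_.IsSaved -and !$_.IsUntitled } | ForEach-Object { $_.Save() } },
                  "Ctrl+Shift+S")
          }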

           

          Why My Implementation Is Better

          Other solutions and why they suck:

          So of course before writing my own multiline comment/uncomment code I went searching for an existing solution, and I did find two.  The first one was recommended by Ed Wilson (aka Hey, Scripting Guy!) at the bottom of this post.  He recommended using the PowerShellPack.  I downloaded it, added it to my PS profile, and gave it a try.  I was instantly disappointed.  The other solution I found was by Clatonh (a Microsoft employee).  Again, I added his code to my ISE profile to try it out, and was disappointed.

          Here are the problems with their solutions:

          1. If you only have part of a line selected, it places the comment character at the beginning of your selection, not at the beginning of the line (undesirable, both).
          2. If you don’t have any text selected, nothing gets commented out (undesirable, both).
          3. If you have any blank lines selected in your multiline selection, it removes them (unacceptable, PowerShellPack only).
          4. It uses block comments (i.e. <# … #>)! (unacceptable; block comments are the devil; Clatonh’s solution only)  I’m not sure if the PowerShellPack problems are because it was written for PS v2 and I’m using v3 on Windows 8, but either way it was unacceptable for me.
            You might be wondering why #4 is on my list and why I hate block comments so much.  Block comments themselves aren’t entirely a bad idea; the problem is that 99% of editors (including PS ISE) don’t handle nested block comments properly.  For example, if I comment out 3 lines in a function using block comments, and then later go and comment out the entire function using block comments, I’ll get a compiler error (or in PS’s case, a run-time error); this is because the first closing “#>” tag encountered closes the comment opened by the first “<#” tag, so everything between the first and second closing “#>” tags isn’t actually commented out (see the sketch below).  Because of this it is just easier to avoid block comments altogether, even for that paragraph of comment text you are about to write (you do comment your code, right?).
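
            To make the problem concrete, here is a small sketch of the nested block comment scenario (the function is just an example):

            <#  Outer block comment, added later to try to comment out the whole function.
            function Do-Something
            {
                <# Inner block comment that was already in the function. #>   # The first '#>' found ends the OUTER comment here.
                Write-Host "This code is no longer inside any comment."       # So this is live code again...
            }                                                                 # ...and this stray '}' now causes a parse error.
            #>  # Outside of a block comment, '#' just starts a regular line comment, so this leftover tag does nothing useful.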

          My Solution:

          1. Uses single line comments (no block comments!).
          2. Places the comment character at the beginning of the line, even if you have middle of line selected.
          3. Comments out the line that the caret is on if no text is selected.
          4. Preserves blank lines, and doesn’t comment them out.

            Show Me The Code

            Before I give you the code, we are going to want to add it to your PowerShell ISE profile, so we need to open that file.

            To edit your PowerShell ISE profile:

            1. Open Windows PowerShell ISE (not Windows PowerShell, as we want to edit the ISE profile instead of the regular PowerShell profile).
            2. In the Command window type: psedit $profile

              If you get an error that it cannot find the path, then first type the following to create the file before trying #2 again: New-Item $profile -ItemType File -Force

            And now that you have your PowerShell ISE profile file open for editing, here’s the code to append to it in order to get the comment/uncomment commands and keyboard shortcuts (or keep reading and get ALL the code from further down).  You will then need to restart PowerShell ISE for the new commands to show up and work.  I’ll mention too that I’ve only tested this on Windows 8 with PowerShell v3.0.

            # Define our constant variables.
            [string]$NEW_LINE_STRING = "`r`n"
            [string]$COMMENT_STRING = "#"
            
            function Select-EntireLinesInIseSelectedTextAndReturnFirstAndLastSelectedLineNumbers([bool]$DoNothingWhenNotCertainOfWhichLinesToSelect = $false)
            {
            <#
                .SYNOPSIS
                Expands the selected text to make sure the entire lines are selected.
                Returns $null if we can't determine with certainty which lines to select and the DoNothingWhenNotCertainOfWhichLinesToSelect switch is provided.
            
                .DESCRIPTION
                Expands the selected text to make sure the entire lines are selected.
            
                .PARAMETER DoNothingWhenNotCertainOfWhichLinesToSelect
                Under the following edge case we can't determine for sure which lines in the file are selected.
                If this switch is not provided and the edge case is encountered, we will guess and attempt to select the entire selected lines, but we may guess wrong and select the lines above/below the selected lines.
                If this switch is provided and the edge case is encountered, no lines will be selected.
            
                Edge Case:
                - When the selected text occurs multiple times in the document, directly above or below the selected text.
            
                Example:
                abc
                abc
                abc
            
                - If only the first two lines are selected, when you run this command it may comment out the 1st and 2nd lines correctly, or it may comment out the 2nd and 3rd lines, depending on
                if the caret is on the 1st line or 2nd line when selecting the text (i.e. the text is selected bottom-to-top vs. top-to-bottom).
                - Since the lines are typically identical for this edge case to occur, you likely won't really care which 2 of the 3 lines get selected, so it shouldn't be a big deal.
                But if it bugs you, you can provide this switch.
            
                .OUTPUTS
                PSObject. Returns a PSObject with the properties FirstLineNumber and LastLineNumber, which correspond to the first and last line numbers of the selected text.
            #>
            
                # Backup all of the original info before we modify it.
                [int]$originalCaretLine = $psISE.CurrentFile.Editor.CaretLine
                [string]$originalSelectedText = $psISE.CurrentFile.Editor.SelectedText
                [string]$originalCaretLineText = $psISE.CurrentFile.Editor.CaretLineText
            
                # Assume only one line is selected.
                [int]$textToSelectFirstLine = $originalCaretLine
                [int]$textToSelectLastLine = $originalCaretLine
            
                #------------------------
                # Before we process the selected text, we need to make sure all selected lines are fully selected (i.e. the entire line is selected).
                #------------------------
            
                # If no text is selected, OR the selection is contained within a single line, select the entire line that the caret is currently on.
                if (($psISE.CurrentFile.Editor.SelectedText.Length -le 0) -or !$psISE.CurrentFile.Editor.SelectedText.Contains($NEW_LINE_STRING))
                {
                    $psISE.CurrentFile.Editor.SelectCaretLine()
                }
                # Else the first part of one line (or the entire line), or multiple lines are selected.
                else
                {
                    # Get the number of lines in the originally selected text.
                    [string[]] $originalSelectedTextArray = $originalSelectedText.Split([string[]]$NEW_LINE_STRING, [StringSplitOptions]::None)
                    [int]$numberOfLinesInSelectedText = $originalSelectedTextArray.Length
            
                    # If only one line is selected, make sure it is fully selected.
                    if ($numberOfLinesInSelectedText -le 1)
                    {
                        $psISE.CurrentFile.Editor.SelectCaretLine()
                    }
                    # Else there are multiple lines selected, so make sure the first character of the top line is selected (so that we put the comment character at the start of the top line, not in the middle).
                    # The first character of the bottom line will always be selected when multiple lines are selected, so we don't have to worry about making sure it is selected; only the top line.
                    else
                    {
                        # Determine if the caret is on the first or last line of the selected text.
                        [bool]$isCaretOnFirstLineOfSelectedText = $false
                        [string]$firstLineOfOriginalSelectedText = $originalSelectedTextArray[0]
                        [string]$lastLineOfOriginalSelectedText = $originalSelectedTextArray[$originalSelectedTextArray.Length - 1]
            
                        # If the caret is definitely on the first line.
                        if ($originalCaretLineText.EndsWith($firstLineOfOriginalSelectedText) -and !$originalCaretLineText.StartsWith($lastLineOfOriginalSelectedText))
                        {
                            $isCaretOnFirstLineOfSelectedText = $true
                        }
                        # Else if the caret is definitely on the last line.
                        elseif ($originalCaretLineText.StartsWith($lastLineOfOriginalSelectedText) -and !$originalCaretLineText.EndsWith($firstLineOfOriginalSelectedText))
                        {
                            $isCaretOnFirstLineOfSelectedText = $false
                        }
                        # Else we need to do further analysis to determine if the caret is on the first or last line of the selected text.
                        else
                        {
                            [int]$numberOfLinesInFile = $psISE.CurrentFile.Editor.LineCount
            
                            [string]$caretOnFirstLineText = [string]::Empty
                            [int]$caretOnFirstLineArrayStartIndex = ($originalCaretLine - 1) # -1 because array starts at 0 and file lines start at 1.
                            [int]$caretOnFirstLineArrayStopIndex = $caretOnFirstLineArrayStartIndex + ($numberOfLinesInSelectedText - 1) # -1 because the starting line is inclusive (i.e. if we want 1 line the start and stop lines should be the same).
            
                            [string]$caretOnLastLineText = [string]::Empty
                            [int]$caretOnLastLineArrayStopIndex = ($originalCaretLine - 1)  # -1 because array starts at 0 and file lines start at 1.
                            [int]$caretOnLastLineArrayStartIndex = $caretOnLastLineArrayStopIndex - ($numberOfLinesInSelectedText - 1) # -1 because the stopping line is inclusive (i.e. if we want 1 line the start and stop lines should be the same).
            
                            # If the caret being on the first line would cause us to go "off the file", then we know the caret is on the last line.
                            if (($caretOnFirstLineArrayStartIndex -lt 0) -or ($caretOnFirstLineArrayStopIndex -ge $numberOfLinesInFile))
                            {
                                $isCaretOnFirstLineOfSelectedText = $false
                            }
                            # If the caret being on the last line would cause us to go "off the file", then we know the caret is on the first line.
                            elseif (($caretOnLastLineArrayStartIndex -lt 0) -or ($caretOnLastLineArrayStopIndex -ge $numberOfLinesInFile))
                            {
                                $isCaretOnFirstLineOfSelectedText = $true
                            }
                            # Else we still don't know where the caret is.
                            else
                            {
                                [string[]]$filesTextArray = $psISE.CurrentFile.Editor.Text.Split([string[]]$NEW_LINE_STRING, [StringSplitOptions]::None)
            
                                # Get the text of the lines where the caret is on the first line of the selected text.
                                [string[]]$caretOnFirstLineTextArray = @([string]::Empty) * $numberOfLinesInSelectedText # Declare an array with the number of elements required.
                                [System.Array]::Copy($filesTextArray, $caretOnFirstLineArrayStartIndex, $caretOnFirstLineTextArray, 0, $numberOfLinesInSelectedText)
                                $caretOnFirstLineText = $caretOnFirstLineTextArray -join $NEW_LINE_STRING
            
                                # Get the text of the lines where the caret is on the last line of the selected text.
                                [string[]]$caretOnLastLineTextArray = @([string]::Empty) * $numberOfLinesInSelectedText # Declare an array with the number of elements required.
                                [System.Array]::Copy($filesTextArray, $caretOnLastLineArrayStartIndex, $caretOnLastLineTextArray, 0, $numberOfLinesInSelectedText)
                                $caretOnLastLineText = $caretOnLastLineTextArray -join $NEW_LINE_STRING
            
                                [bool]$caretOnFirstLineTextContainsOriginalSelectedText = $caretOnFirstLineText.Contains($originalSelectedText)
                                [bool]$caretOnLastLineTextContainsOriginalSelectedText = $caretOnLastLineText.Contains($originalSelectedText)
            
                                # If the selected text is only within the text of when the caret is on the first line, then we know for sure the caret is on the first line.
                                if ($caretOnFirstLineTextContainsOriginalSelectedText -and !$caretOnLastLineTextContainsOriginalSelectedText)
                                {
                                    $isCaretOnFirstLineOfSelectedText = $true
                                }
                                # Else if the selected text is only within the text of when the caret is on the last line, then we know for sure the caret is on the last line.
                                elseif ($caretOnLastLineTextContainsOriginalSelectedText -and !$caretOnFirstLineTextContainsOriginalSelectedText)
                                {
                                    $isCaretOnFirstLineOfSelectedText = $false
                                }
                                # Else if the selected text is in both sets of text, then we don't know for sure if the caret is on the first or last line.
                                elseif ($caretOnFirstLineTextContainsOriginalSelectedText -and $caretOnLastLineTextContainsOriginalSelectedText)
                                {
                                    # If we shouldn't do anything since we might comment out text that is not selected by the user, just exit this function and return null.
                                    if ($DoNothingWhenNotCertainOfWhichLinesToSelect)
                                    {
                                        return $null
                                    }
                                }
                                # Else something went wrong and there is a flaw in this logic, since the selected text should be in at least one of our two strings, so error out and bail.
                                else
                                {
                                    Write-Error "WHAT HAPPENED?!?! This line should never be reached. There is a flaw in our logic!"
                                    return $null
                                }
                            }
                        }
            
                        # Assume the caret is on the first line of the selected text, so we want to select text from the caret's line downward.
                        $textToSelectFirstLine = $originalCaretLine
                        $textToSelectLastLine = $originalCaretLine + ($numberOfLinesInSelectedText - 1) # -1 because the starting line is inclusive (i.e. if we want 1 line the start and stop lines should be the same).
            
                        # If the caret is actually on the last line of the selected text, we want to select text from the caret's line upward.
                        if (!$isCaretOnFirstLineOfSelectedText)
                        {
                            $textToSelectFirstLine = $originalCaretLine - ($numberOfLinesInSelectedText - 1) # -1 because the stopping line is inclusive (i.e. if we want 1 line the start and stop lines should be the same).
                            $textToSelectLastLine = $originalCaretLine
                        }
            
                        # Re-select the text, making sure the entire first and last lines are selected. +1 on EndLineWidth because column starts at 1, not 0.
                        $psISE.CurrentFile.Editor.Select($textToSelectFirstLine, 1, $textToSelectLastLine, $psISE.CurrentFile.Editor.GetLineLength($textToSelectLastLine) + 1)
                    }
                }
            
                # Return the first and last line numbers selected.
                $selectedTextFirstAndLastLineNumbers = New-Object PSObject -Property @{
                    FirstLineNumber = $textToSelectFirstLine
                    LastLineNumber = $textToSelectLastLine
                }
                return $selectedTextFirstAndLastLineNumbers
            }
            
            function CommentOrUncommentIseSelectedLines([bool]$CommentLines = $false, [bool]$DoNothingWhenNotCertainOfWhichLinesToSelect = $false)
            {
                $selectedTextFirstAndLastLineNumbers = Select-EntireLinesInIseSelectedTextAndReturnFirstAndLastSelectedLineNumbers $DoNothingWhenNotCertainOfWhichLinesToSelect
            
                # If we couldn't determine which lines to select, just exit without changing anything.
                if ($selectedTextFirstAndLastLineNumbers -eq $null) { return }
            
                # Get the text lines selected.
                [int]$selectedTextFirstLineNumber = $selectedTextFirstAndLastLineNumbers.FirstLineNumber
                [int]$selectedTextLastLineNumber = $selectedTextFirstAndLastLineNumbers.LastLineNumber
            
                # Get the Selected Text and convert it into an array of strings so we can easily process each line.
                [string]$selectedText = $psISE.CurrentFile.Editor.SelectedText
                [string[]] $selectedTextArray = $selectedText.Split([string[]]$NEW_LINE_STRING, [StringSplitOptions]::None)
            
                # Process each line of the Selected Text, and save the modified lines into a text array.
                [string[]]$newSelectedTextArray = @()
                $selectedTextArray | foreach {
                    # If the line is blank, keep it as-is; otherwise add or remove a comment character at the start of it.
                    [string]$lineText = $_
                    if ([string]::IsNullOrWhiteSpace($lineText)) { $newSelectedTextArray += $lineText }
                    else 
                    {
                        # If we should be commenting the lines out, add a comment character to the start of the line.
                        if ($CommentLines) 
                        { $newSelectedTextArray += "$COMMENT_STRING$lineText" }
                        # Else we should be uncommenting, so remove a comment character from the start of the line if it exists.
                        else 
                        {
                            # If the line begins with a comment, remove one (and only one) comment character.
                            if ($lineText.StartsWith($COMMENT_STRING))
                            {
                                $lineText = $lineText.Substring($COMMENT_STRING.Length)
                            } 
                            $newSelectedTextArray += $lineText
                        }
                    }
                }
            
                # Join the text array back together to get the new Selected Text string.
                [string]$newSelectedText = $newSelectedTextArray -join $NEW_LINE_STRING
            
                # Overwrite the currently Selected Text with the new Selected Text.
                $psISE.CurrentFile.Editor.InsertText($newSelectedText)
            
                # Fully select all of the lines that were modified. +1 on End Line's Width because column starts at 1, not 0.
                $psISE.CurrentFile.Editor.Select($selectedTextFirstLineNumber, 1, $selectedTextLastLineNumber, $psISE.CurrentFile.Editor.GetLineLength($selectedTextLastLineNumber) + 1)
            }
            
            function Comment-IseSelectedLines([switch]$DoNothingWhenNotCertainOfWhichLinesToComment)
            {
            <#
                .SYNOPSIS
                Places a comment character at the start of each line of the selected text in the current PS ISE file.
                If no text is selected, it will comment out the line that the caret is on.
            
                .DESCRIPTION
                Places a comment character at the start of each line of the selected text in the current PS ISE file.
                If no text is selected, it will comment out the line that the caret is on.
            
                .PARAMETER DoNothingWhenNotCertainOfWhichLinesToComment
                Under the following edge case we can't determine for sure which lines in the file are selected.
                If this switch is not provided and the edge case is encountered, we will guess and attempt to comment out the selected lines, but we may guess wrong and comment out the lines above/below the selected lines.
                If this switch is provided and the edge case is encountered, no lines will be commented out.
            
                Edge Case:
                - When the selected text occurs multiple times in the document, directly above or below the selected text.
            
                Example:
                abc
                abc
                abc
            
                - If only the first two lines are selected, when you run this command it may comment out the 1st and 2nd lines correctly, or it may comment out the 2nd and 3rd lines, depending on
                if the caret is on the 1st line or 2nd line when selecting the text (i.e. the text is selected bottom-to-top vs. top-to-bottom).
                - Since the lines are typically identical for this edge case to occur, you likely won't really care which 2 of the 3 lines get commented out, so it shouldn't be a big deal.
                But if it bugs you, you can provide this switch.
            #>
                CommentOrUncommentIseSelectedLines -CommentLines $true -DoNothingWhenNotCertainOfWhichLinesToSelect $DoNothingWhenNotCertainOfWhichLinesToComment
            }
            
            function Uncomment-IseSelectedLines([switch]$DoNothingWhenNotCertainOfWhichLinesToUncomment)
            {
            <#
                .SYNOPSIS
                Removes the comment character from the start of each line of the selected text in the current PS ISE file (if it is commented out).
                If no text is selected, it will uncomment the line that the caret is on.
            
                .DESCRIPTION
                Removes the comment character from the start of each line of the selected text in the current PS ISE file (if it is commented out).
                If no text is selected, it will uncomment the line that the caret is on.
            
                .PARAMETER DoNothingWhenNotCertainOfWhichLinesToUncomment
                Under the following edge case we can't determine for sure which lines in the file are selected.
                If this switch is not provided and the edge case is encountered, we will guess and attempt to uncomment the selected lines, but we may guess wrong and uncomment the lines above/below the selected lines.
                If this switch is provided and the edge case is encountered, no lines will be uncommented.
            
                Edge Case:
                - When the selected text occurs multiple times in the document, directly above or below the selected text.
            
                Example:
                abc
                abc
                abc
            
                - If only the first two lines are selected, when you run this command it may uncomment the 1st and 2nd lines correctly, or it may uncomment the 2nd and 3rd lines, depending on
                if the caret is on the 1st line or 2nd line when selecting the text (i.e. the text is selected bottom-to-top vs. top-to-bottom).
                - Since the lines are typically identical for this edge case to occur, you likely won't really care which 2 of the 3 lines get uncommented, so it shouldn't be a big deal.
                But if it bugs you, you can provide this switch.
            #>
                CommentOrUncommentIseSelectedLines -CommentLines $false -DoNothingWhenNotCertainOfWhichLinesToSelect $DoNothingWhenNotCertainOfWhichLinesToUncomment
            }
            
            
            #==========================================================
            # Add ISE Add-ons.
            #==========================================================
            
            # Add a new option in the Add-ons menu to comment all selected lines.
            if (!($psISE.CurrentPowerShellTab.AddOnsMenu.Submenus | Where-Object { $_.DisplayName -eq "Comment Selected Lines" }))
            {
                $psISE.CurrentPowerShellTab.AddOnsMenu.Submenus.Add("Comment Selected Lines",{Comment-IseSelectedLines},"Ctrl+K")
            }
            
            # Add a new option in the Add-ons menu to uncomment all selected lines.
            if (!($psISE.CurrentPowerShellTab.AddOnsMenu.Submenus | Where-Object { $_.DisplayName -eq "Uncomment Selected Lines" }))
            {
                $psISE.CurrentPowerShellTab.AddOnsMenu.Submenus.Add("Uncomment Selected Lines",{Uncomment-IseSelectedLines},"Ctrl+Shift+K")
            }
            

            As you can see from the code at the bottom, the keyboard shortcut to comment lines is Ctrl+K and the shortcut to uncomment them is Ctrl+Shift+K.  Feel free to change these if you like.  I wanted to use the Visual Studio keyboard shortcuts of Ctrl+K,Ctrl+C and Ctrl+K,Ctrl+U, but it looks like multi-sequence keyboard shortcuts aren’t supported.  I figured that anybody who uses Visual Studio or SQL Server Management Studio would be able to stumble across these shortcuts and would like them.
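
            For example, if Ctrl+K or Ctrl+Shift+K clashes with something else in your setup, you only need to change the shortcut string (the third argument) passed to Submenus.Add when the add-on is registered.  A quick sketch, using Ctrl+Q purely as a placeholder choice:

            # Sketch: same registration as above, just with a different (placeholder) keyboard shortcut.
            # Keep the existence check so re-running your profile doesn't add duplicate menu entries.
            if (!($psISE.CurrentPowerShellTab.AddOnsMenu.Submenus | Where-Object { $_.DisplayName -eq "Comment Selected Lines" }))
            {
                $psISE.CurrentPowerShellTab.AddOnsMenu.Submenus.Add("Comment Selected Lines",{Comment-IseSelectedLines},"Ctrl+Q")
            }

            The Uncomment Selected Lines entry works the same way; just give it a different shortcut string.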

             

            Ok, it’s not perfect:

            If you’re still reading then you deserve to know about the edge case bug with my implementation.  If you actually read through the functions’ documentation in the code you will see this mentioned there as well.

            Edge Case:
                - When the selected text occurs multiple times in the document, directly above or below the selected text.
            
                Example:
                abc
                abc
                abc
            
                - If only the first two lines are selected, when you run this command it may comment out the 1st and 2nd lines correctly, or it may comment out the 2nd and 3rd lines, depending on
                if the caret is on the 1st line or 2nd line when selecting the text (i.e. the text is selected bottom-to-top vs. top-to-bottom).
                - Since the lines are typically identical for this edge case to occur, you likely won't really care which 2 of the 3 lines get commented out, so it shouldn't be a big deal.
            

            Basically the problem is that I change the selected text to ensure that entire lines are selected (so that I can put the comment character at the start of each line).  The PS ISE API doesn’t tell me the selected text’s starting and ending lines, so I have to try to infer them from the line the caret is on, but the caret can be on either the first or the last line of the selected text.  So if text that is identical to the selected text appears directly above or below it, I can’t know for sure whether the caret is on the first line of the selected text or the last line, so I just make a guess.  If this bothers you, there is a switch you can provide so that it won’t comment out any lines at all when this edge case is hit.
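
            If you would rather have the add-ons do nothing in that edge case instead of guessing, you can pass the switch right inside the menu registrations.  A minimal sketch, using the same Submenus.Add calls as the code above with the switches added:

            # Sketch: register the add-ons so they do nothing (rather than guess) when the edge case is hit.
            if (!($psISE.CurrentPowerShellTab.AddOnsMenu.Submenus | Where-Object { $_.DisplayName -eq "Comment Selected Lines" }))
            {
                $psISE.CurrentPowerShellTab.AddOnsMenu.Submenus.Add("Comment Selected Lines",{Comment-IseSelectedLines -DoNothingWhenNotCertainOfWhichLinesToComment},"Ctrl+K")
            }

            if (!($psISE.CurrentPowerShellTab.AddOnsMenu.Submenus | Where-Object { $_.DisplayName -eq "Uncomment Selected Lines" }))
            {
                $psISE.CurrentPowerShellTab.AddOnsMenu.Submenus.Add("Uncomment Selected Lines",{Uncomment-IseSelectedLines -DoNothingWhenNotCertainOfWhichLinesToUncomment},"Ctrl+Shift+K")
            }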

             

            Show Me ALL The Code

            Ok, so I mentioned a couple of other must-have ISE add-ons above.  Here’s the code to add to your ISE profile; it includes my comment/uncomment code, as well as the Save All files and Save/Restore ISE State functionality provided by the PowerShell Team.  This includes a couple of customizations that I made, namely adding a Save ISE State And Exit command (Alt+Shift+E) and having the ISE state automatically load when PS ISE starts (I didn’t change the functions they provided that do the actual work at all).  So if you want your last session to be automatically reloaded, you just have to get in the habit of closing ISE with Alt+Shift+E (again, you can change this keyboard shortcut if you want).

            #==========================================================
            # Functions used by the script.
            #==========================================================
            
            function Save-AllISEFiles
            {
            <#
            .SYNOPSIS
                Saves all ISE files except for untitled files. If you have multiple PowerShell tabs, saves the files in all tabs.
            #>
                foreach($tab in $psISE.PowerShellTabs)
                {
                    foreach($file in $tab.Files)
                    {
                        if(!$file.IsUntitled)
                        {
                            $file.Save()
                        }
                    }
                }
            }
            
            function Export-ISEState
            {
            <#
            .SYNOPSIS
                Stores the opened files in a serialized xml so that later the same set can be opened
             
            .DESCRIPTION
                Creates an xml file with all PowerShell tabs and file information
               
            .PARAMETER fileName
                The path of the file in which to store the ISE state.
             
            .EXAMPLE
                Stores current state into c:\temp\files.isexml
                Export-ISEState c:\temp\files.isexml
            #>
             
                Param
                (
                    [Parameter(Position=0, Mandatory=$true)]
                    [ValidateNotNullOrEmpty()]
                    [string]$fileName
                )
               
                # We are exporting a "tree" worth of information like this:
                #
                #  SelectedTabDisplayName: PowerShellTab 1
                #  SelectedFilePath: c:\temp\a.ps1
                #  TabInformation:
                #      PowerShellTab 1:
                #           File 1:
                #                FullPath:     c:\temp\a.ps1
                #                FileContents: $null
                #           File 2:
                #                FullPath:     Untitled.ps1
                #                FileContents: $a=0...
                #       PowerShellTab 2:
                #       ...
                #  Hashtables and arraylists serialize rather well with export-clixml
                #  We will keep the list of PowerShellTabs in one ArrayList and the list of files
            #  and contents (for untitled files) inside each tab in a couple of ArrayLists.
                #  We will use Hashtables to group the information.
                $tabs=new-object collections.arraylist
               
                # before getting file information, save all untitled files to make sure their latest
                # text is on disk
                Save-AllISEFiles
             
                foreach ($tab in $psISE.PowerShellTabs)
                {
                    $files=new-object collections.arraylist
                    $filesContents=new-object collections.arraylist
                    foreach($file in $tab.Files)
                    {
                        # $null = will avoid $files.Add from showing in the output
                        $null = $files.Add($file.FullPath)
                       
                        if($file.IsUntitled)
                        {
                            # untitled files are not yet on disk so we will save the file contents inside the xml
                            # export-clixml performs the appropriate escaping for the contents to be inside the xml
                            $null = $filesContents.Add($file.Editor.Text)
                        }
                        else
                        {
                            # titled files get their content from disk
                            $null = $filesContents.Add($null)  
                        }
                    }
                    $simpleTab=new-object collections.hashtable
                   
                    # The DisplayName of a PowerShellTab can only be changed with scripting,
                    # so we want to maintain the chosen name.
                    $simpleTab["DisplayName"]=$tab.DisplayName
                   
                    # $files and $filesContents are the information gathered in the foreach $file above
                    $simpleTab["Files"]=$files
                    $simpleTab["FilesContents"]=$filesContents
                   
                    # add to the list of tabs
                    $null = $tabs.Add($simpleTab)
                   
                }
               
                # $tabToSerialize will be a hashtable with all the information we want;
                # it is the "root" of the information to be serialized. In the hashtable we store...
                $tabToSerialize=new-object collections.hashtable
               
                # the $tabs information gathered in the foreach $tab above...
                $tabToSerialize["TabInformation"] = $tabs
               
                # ...and the selected tab and file.
                $tabToSerialize["SelectedTabDisplayName"] = $psISE.CurrentPowerShellTab.DisplayName
                $tabToSerialize["SelectedFilePath"] = $psISE.CurrentFile.FullPath
               
                # now we just export it to $fileName
                $tabToSerialize | export-clixml -path $fileName
            }
             
             
            function Import-ISEState
            {
            <#
            .SYNOPSIS
                Reads a file with ISE state information about which files to open and opens them
             
            .DESCRIPTION
                Reads a file created by Export-ISEState with the PowerShell tabs and files to open
               
            .PARAMETER fileName
                The name of the file created with Export-ISEState
             
            .EXAMPLE
                Restores current state from c:\temp\files.isexml
                Import-ISEState c:\temp\files.isexml
            #>
             
                Param
                (
                    [Parameter(Position=0, Mandatory=$true)]
                    [ValidateNotNullOrEmpty()]
                    [string]$fileName
                )
               
               
                # currentTabs is used to keep track of the tabs currently opened.
                # If "PowerShellTab 1" is opened and $fileName contains files for it, we
                # want to open them in "PowerShellTab 1"
                $currentTabs=new-object collections.hashtable
                foreach ($tab in $psISE.PowerShellTabs)
                {
                    $currentTabs[$tab.DisplayName]=$tab
                }
               
                $tabs=import-cliXml -path $fileName
             
                # those will keep track of selected tab and files   
                $selectedTab=$null
                $selectedFile=$null
             
                foreach ($tab in $tabs.TabInformation)
                {
                    $newTab=$currentTabs[$tab.DisplayName]
                    if($newTab -eq $null)
                    {
                        $newTab=$psISE.PowerShellTabs.Add()
                        $newTab.DisplayName=$tab.DisplayName
                    }
                    #newTab now has a brand new or a previously existing PowerShell tab with the same name as the one in the file
                   
                    # if the tab is the selected tab save it for later selection
                    if($newTab.DisplayName -eq $tabs.SelectedTabDisplayName)
                    {
                        $selectedTab=$newTab
                    }
                   
                    # currentUntitledFileContents keeps track of the contents for untitled files
                    # if you already have the content in one of your untitled files
                    # there is no reason to add the same content again
                    # this will make sure calling import-ISEState multiple times
                    # does not keep on adding untitled files
                    $currentUntitledFileContents=new-object collections.hashtable
                    foreach ($newTabFile in $newTab.Files)
                    {
                        if($newTabFile.IsUntitled)
                        {
                            $currentUntitledFileContents[$newTabFile.Editor.Text]=$newTabFile
                        }
                    }
                   
                    # since we will want both file and fileContents we need to use a for instead of a foreach
                    for($i=0;$i -lt $tab.Files.Count;$i++)
                    {
                        $file = $tab.Files[$i]
                        $fileContents = $tab.FilesContents[$i]
             
                        #fileContents will be $null for titled files
                        if($fileContents -eq $null)
                        {
                            # the overload of Add taking one string opens the file identified by the string
                            $newFile = $newTab.Files.Add($file)
                        }
                        else # the file is untitled
                        {
                            #see if the content is already present in $newTab
                            $newFile=$currentUntitledFileContents[$fileContents]
                           
                            if($newFile -eq $null)
                            {
                                # the overload of Add taking no arguments creates a new untitled file
                                # The number for untitled files is determined by the application so we
                                # don't try to keep the untitled number, we just create a new untitled.
                                $newFile = $newTab.Files.Add()
                           
                                # and here we restore the contents
                                $newFile.Editor.Text=$fileContents
                            }
                        }
                   
                        # if the file is the selected file in the selected tab save it for later selection   
                        if(($selectedTab -eq $newTab) -and ($tabs.SelectedFilePath -eq $file))
                        {
                            $selectedFile = $newFile
                        }
                    }
                }
               
                #finally we select the PowerShellTab that was selected and the file that was selected on it.
                $psISE.PowerShellTabs.SetSelectedPowerShellTab($selectedTab)
                if($selectedFile -ne $null)
                {
                    $selectedTab.Files.SetSelectedFile($selectedFile)
                }
            }
            
            # Define our constant variables.
            [string]$NEW_LINE_STRING = "`r`n"
            [string]$COMMENT_STRING = "#"
            
            function Select-EntireLinesInIseSelectedTextAndReturnFirstAndLastSelectedLineNumbers([bool]$DoNothingWhenNotCertainOfWhichLinesToSelect = $false)
            {
            <#
                .SYNOPSIS
                Expands the selected text to make sure the entire lines are selected.
                Returns $null if we can't determine with certainty which lines to select and DoNothingWhenNotCertainOfWhichLinesToSelect is true.

                .DESCRIPTION
                Expands the selected text to make sure the entire lines are selected.
            
                .PARAMETER DoNothingWhenNotCertainOfWhichLinesToSelect
                Under the following edge case we can't determine for sure which lines in the file are selected.
                If this switch is not provided and the edge case is encountered, we will guess and attempt to select the entire selected lines, but we may guess wrong and select the lines above/below the selected lines.
                If this switch is provided and the edge case is encountered, no lines will be selected.
            
                Edge Case:
                - When the selected text occurs multiple times in the document, directly above or below the selected text.
            
                Example:
                abc
                abc
                abc
            
                - If only the first two lines are selected, when you run this command it may select the 1st and 2nd lines correctly, or it may select the 2nd and 3rd lines, depending on
                if the caret is on the 1st line or 2nd line when selecting the text (i.e. the text is selected bottom-to-top vs. top-to-bottom).
                - Since the lines are typically identical for this edge case to occur, you likely won't really care which 2 of the 3 lines get selected, so it shouldn't be a big deal.
                But if it bugs you, you can provide this switch.
            
                .OUTPUTS
                PSObject. Returns a PSObject with the properties FirstLineNumber and LastLineNumber, which correspond to the first and last line numbers of the selected text.
            #>
            
                # Backup all of the original info before we modify it.
                [int]$originalCaretLine = $psISE.CurrentFile.Editor.CaretLine
                [string]$originalSelectedText = $psISE.CurrentFile.Editor.SelectedText
                [string]$originalCaretLineText = $psISE.CurrentFile.Editor.CaretLineText
            
                # Assume only one line is selected.
                [int]$textToSelectFirstLine = $originalCaretLine
                [int]$textToSelectLastLine = $originalCaretLine
            
                #------------------------
                # Before we process the selected text, we need to make sure all selected lines are fully selected (i.e. the entire line is selected).
                #------------------------
            
                # If no text is selected, OR only part of one line is selected (and it doesn't include the start of the line), select the entire line that the caret is currently on.
                if (($psISE.CurrentFile.Editor.SelectedText.Length -le 0) -or !$psISE.CurrentFile.Editor.SelectedText.Contains($NEW_LINE_STRING))
                {
                    $psISE.CurrentFile.Editor.SelectCaretLine()
                }
                # Else the first part of one line (or the entire line), or multiple lines are selected.
                else
                {
                    # Get the number of lines in the originally selected text.
                    [string[]] $originalSelectedTextArray = $originalSelectedText.Split([string[]]$NEW_LINE_STRING, [StringSplitOptions]::None)
                    [int]$numberOfLinesInSelectedText = $originalSelectedTextArray.Length
            
                    # If only one line is selected, make sure it is fully selected.
                    if ($numberOfLinesInSelectedText -le 1)
                    {
                        $psISE.CurrentFile.Editor.SelectCaretLine()
                    }
                    # Else there are multiple lines selected, so make sure the first character of the top line is selected (so that we put the comment character at the start of the top line, not in the middle).
                    # The first character of the bottom line will always be selected when multiple lines are selected, so we don't have to worry about making sure it is selected; only the top line.
                    else
                    {
                        # Determine if the caret is on the first or last line of the selected text.
                        [bool]$isCaretOnFirstLineOfSelectedText = $false
                        [string]$firstLineOfOriginalSelectedText = $originalSelectedTextArray[0]
                        [string]$lastLineOfOriginalSelectedText = $originalSelectedTextArray[$originalSelectedTextArray.Length - 1]
            
                        # If the caret is definitely on the first line.
                        if ($originalCaretLineText.EndsWith($firstLineOfOriginalSelectedText) -and !$originalCaretLineText.StartsWith($lastLineOfOriginalSelectedText))
                        {
                            $isCaretOnFirstLineOfSelectedText = $true
                        }
                        # Else if the caret is definitely on the last line.
                        elseif ($originalCaretLineText.StartsWith($lastLineOfOriginalSelectedText) -and !$originalCaretLineText.EndsWith($firstLineOfOriginalSelectedText))
                        {
                            $isCaretOnFirstLineOfSelectedText = $false
                        }
                        # Else we need to do further analysis to determine if the caret is on the first or last line of the selected text.
                        else
                        {
                            [int]$numberOfLinesInFile = $psISE.CurrentFile.Editor.LineCount
            
                            [string]$caretOnFirstLineText = [string]::Empty
                            [int]$caretOnFirstLineArrayStartIndex = ($originalCaretLine - 1) # -1 because array starts at 0 and file lines start at 1.
                            [int]$caretOnFirstLineArrayStopIndex = $caretOnFirstLineArrayStartIndex + ($numberOfLinesInSelectedText - 1) # -1 because the starting line is inclusive (i.e. if we want 1 line the start and stop lines should be the same).
            
                            [string]$caretOnLastLineText = [string]::Empty
                            [int]$caretOnLastLineArrayStopIndex = ($originalCaretLine - 1)  # -1 because array starts at 0 and file lines start at 1.
                            [int]$caretOnLastLineArrayStartIndex = $caretOnLastLineArrayStopIndex - ($numberOfLinesInSelectedText - 1) # -1 because the stopping line is inclusive (i.e. if we want 1 line the start and stop lines should be the same).
            
                            # If the caret being on the first line would cause us to go "off the file", then we know the caret is on the last line.
                            if (($caretOnFirstLineArrayStartIndex -lt 0) -or ($caretOnFirstLineArrayStopIndex -ge $numberOfLinesInFile))
                            {
                                $isCaretOnFirstLineOfSelectedText = $false
                            }
                            # If the caret being on the last line would cause us to go "off the file", then we know the caret is on the first line.
                            elseif (($caretOnLastLineArrayStartIndex -lt 0) -or ($caretOnLastLineArrayStopIndex -ge $numberOfLinesInFile))
                            {
                                $isCaretOnFirstLineOfSelectedText = $true
                            }
                            # Else we still don't know where the caret is.
                            else
                            {
                                [string[]]$filesTextArray = $psISE.CurrentFile.Editor.Text.Split([string[]]$NEW_LINE_STRING, [StringSplitOptions]::None)
            
                                # Get the text of the lines where the caret is on the first line of the selected text.
                                [string[]]$caretOnFirstLineTextArray = @([string]::Empty) * $numberOfLinesInSelectedText # Declare an array with the number of elements required.
                                [System.Array]::Copy($filesTextArray, $caretOnFirstLineArrayStartIndex, $caretOnFirstLineTextArray, 0, $numberOfLinesInSelectedText)
                                $caretOnFirstLineText = $caretOnFirstLineTextArray -join $NEW_LINE_STRING
            
                                # Get the text of the lines where the caret is on the last line of the selected text.
                                [string[]]$caretOnLastLineTextArray = @([string]::Empty) * $numberOfLinesInSelectedText # Declare an array with the number of elements required.
                                [System.Array]::Copy($filesTextArray, $caretOnLastLineArrayStartIndex, $caretOnLastLineTextArray, 0, $numberOfLinesInSelectedText)
                                $caretOnLastLineText = $caretOnLastLineTextArray -join $NEW_LINE_STRING
            
                                [bool]$caretOnFirstLineTextContainsOriginalSelectedText = $caretOnFirstLineText.Contains($originalSelectedText)
                                [bool]$caretOnLastLineTextContainsOriginalSelectedText = $caretOnLastLineText.Contains($originalSelectedText)
            
                                # If the selected text is only within the text of when the caret is on the first line, then we know for sure the caret is on the first line.
                                if ($caretOnFirstLineTextContainsOriginalSelectedText -and !$caretOnLastLineTextContainsOriginalSelectedText)
                                {
                                    $isCaretOnFirstLineOfSelectedText = $true
                                }
                                # Else if the selected text is only within the text of when the caret is on the last line, then we know for sure the caret is on the last line.
                                elseif ($caretOnLastLineTextContainsOriginalSelectedText -and !$caretOnFirstLineTextContainsOriginalSelectedText)
                                {
                                    $isCaretOnFirstLineOfSelectedText = $false
                                }
                                # Else if the selected text is in both sets of text, then we don't know for sure if the caret is on the first or last line.
                                elseif ($caretOnFirstLineTextContainsOriginalSelectedText -and $caretOnLastLineTextContainsOriginalSelectedText)
                                {
                                    # If we shouldn't do anything since we might comment out text that is not selected by the user, just exit this function and return null.
                                    if ($DoNothingWhenNotCertainOfWhichLinesToSelect)
                                    {
                                        return $null
                                    }
                                }
                                # Else something went wrong and there is a flaw in this logic, since the selected text should be in one of our two strings; write an error and give up.
                                else
                                {
                                    Write-Error "WHAT HAPPENED?!?! This line should never be reached. There is a flaw in our logic!"
                                    return $null
                                }
                            }
                        }
            
                        # Assume the caret is on the first line of the selected text, so we want to select text from the caret's line downward.
                        $textToSelectFirstLine = $originalCaretLine
                        $textToSelectLastLine = $originalCaretLine + ($numberOfLinesInSelectedText - 1) # -1 because the starting line is inclusive (i.e. if we want 1 line the start and stop lines should be the same).
            
                        # If the caret is actually on the last line of the selected text, we want to select text from the caret's line upward.
                        if (!$isCaretOnFirstLineOfSelectedText)
                        {
                            $textToSelectFirstLine = $originalCaretLine - ($numberOfLinesInSelectedText - 1) # -1 because the stopping line is inclusive (i.e. if we want 1 line the start and stop lines should be the same).
                            $textToSelectLastLine = $originalCaretLine
                        }
            
                        # Re-select the text, making sure the entire first and last lines are selected. +1 on EndLineWidth because column starts at 1, not 0.
                        $psISE.CurrentFile.Editor.Select($textToSelectFirstLine, 1, $textToSelectLastLine, $psISE.CurrentFile.Editor.GetLineLength($textToSelectLastLine) + 1)
                    }
                }
            
                # Return the first and last line numbers selected.
                $selectedTextFirstAndLastLineNumbers = New-Object PSObject -Property @{
                    FirstLineNumber = $textToSelectFirstLine
                    LastLineNumber = $textToSelectLastLine
                }
                return $selectedTextFirstAndLastLineNumbers
            }
            
            function CommentOrUncommentIseSelectedLines([bool]$CommentLines = $false, [bool]$DoNothingWhenNotCertainOfWhichLinesToSelect = $false)
            {
                $selectedTextFirstAndLastLineNumbers = Select-EntireLinesInIseSelectedTextAndReturnFirstAndLastSelectedLineNumbers $DoNothingWhenNotCertainOfWhichLinesToSelect
            
                # If we couldn't determine which lines to select, just exit without changing anything.
                if ($selectedTextFirstAndLastLineNumbers -eq $null) { return }
            
                # Get the text lines selected.
                [int]$selectedTextFirstLineNumber = $selectedTextFirstAndLastLineNumbers.FirstLineNumber
                [int]$selectedTextLastLineNumber = $selectedTextFirstAndLastLineNumbers.LastLineNumber
            
                # Get the Selected Text and convert it into an array of strings so we can easily process each line.
                [string]$selectedText = $psISE.CurrentFile.Editor.SelectedText
                [string[]] $selectedTextArray = $selectedText.Split([string[]]$NEW_LINE_STRING, [StringSplitOptions]::None)
            
                # Process each line of the Selected Text, and save the modified lines into a text array.
                [string[]]$newSelectedTextArray = @()
                $selectedTextArray | foreach {
                    # Leave blank lines untouched; otherwise add or remove a comment character at the start of the line.
                    [string]$lineText = $_
                    if ([string]::IsNullOrWhiteSpace($lineText)) { $newSelectedTextArray += $lineText }
                    else 
                    {
                        # If we should be commenting the lines out, add a comment character to the start of the line.
                        if ($CommentLines) 
                        { $newSelectedTextArray += "$COMMENT_STRING$lineText" }
                        # Else we should be uncommenting, so remove a comment character from the start of the line if it exists.
                        else 
                        {
                            # If the line begins with a comment, remove one (and only one) comment character.
                            if ($lineText.StartsWith($COMMENT_STRING))
                            {
                                $lineText = $lineText.Substring($COMMENT_STRING.Length)
                            } 
                            $newSelectedTextArray += $lineText
                        }
                    }
                }
            
                # Join the text array back together to get the new Selected Text string.
                [string]$newSelectedText = $newSelectedTextArray -join $NEW_LINE_STRING
            
                # Overwrite the currently Selected Text with the new Selected Text.
                $psISE.CurrentFile.Editor.InsertText($newSelectedText)
            
                # Fully select all of the lines that were modified. +1 on End Line's Width because column starts at 1, not 0.
                $psISE.CurrentFile.Editor.Select($selectedTextFirstLineNumber, 1, $selectedTextLastLineNumber, $psISE.CurrentFile.Editor.GetLineLength($selectedTextLastLineNumber) + 1)
            }
            
            function Comment-IseSelectedLines([switch]$DoNothingWhenNotCertainOfWhichLinesToComment)
            {
            <#
                .SYNOPSIS
                Places a comment character at the start of each line of the selected text in the current PS ISE file.
                If no text is selected, it will comment out the line that the caret is on.
            
                .DESCRIPTION
                Places a comment character at the start of each line of the selected text in the current PS ISE file.
                If no text is selected, it will comment out the line that the caret is on.
            
                .PARAMETER DoNothingWhenNotCertainOfWhichLinesToComment
                Under the following edge case we can't determine for sure which lines in the file are selected.
                If this switch is not provided and the edge case is encountered, we will guess and attempt to comment out the selected lines, but we may guess wrong and comment out the lines above/below the selected lines.
                If this switch is provided and the edge case is encountered, no lines will be commented out.
            
                Edge Case:
                - When the selected text occurs multiple times in the document, directly above or below the selected text.
            
                Example:
                abc
                abc
                abc
            
                - If only the first two lines are selected, when you run this command it may comment out the 1st and 2nd lines correctly, or it may comment out the 2nd and 3rd lines, depending on
                if the caret is on the 1st line or 2nd line when selecting the text (i.e. the text is selected bottom-to-top vs. top-to-bottom).
                - Since the lines are typically identical for this edge case to occur, you likely won't really care which 2 of the 3 lines get commented out, so it shouldn't be a big deal.
                But if it bugs you, you can provide this switch.
            #>
                CommentOrUncommentIseSelectedLines -CommentLines $true -DoNothingWhenNotCertainOfWhichLinesToSelect $DoNothingWhenNotCertainOfWhichLinesToComment
            }
            
            function Uncomment-IseSelectedLines([switch]$DoNothingWhenNotCertainOfWhichLinesToUncomment)
            {
            <#
                .SYNOPSIS
                Removes the comment character from the start of each line of the selected text in the current PS ISE file (if it is commented out).
                If no text is selected, it will uncomment the line that the caret is on.
            
                .DESCRIPTION
                Removes the comment character from the start of each line of the selected text in the current PS ISE file (if it is commented out).
                If no text is selected, it will uncomment the line that the caret is on.
            
                .PARAMETER DoNothingWhenNotCertainOfWhichLinesToUncomment
                Under the following edge case we can't determine for sure which lines in the file are selected.
                If this switch is not provided and the edge case is encountered, we will guess and attempt to uncomment the selected lines, but we may guess wrong and uncomment the lines above/below the selected lines.
                If this switch is provided and the edge case is encountered, no lines will be uncommented.
            
                Edge Case:
                - When the selected text occurs multiple times in the document, directly above or below the selected text.
            
                Example:
                abc
                abc
                abc
            
                - If only the first two lines are selected, when you run this command it may uncomment the 1st and 2nd lines correctly, or it may uncomment the 2nd and 3rd lines, depending on
                if the caret is on the 1st line or 2nd line when selecting the text (i.e. the text is selected bottom-to-top vs. top-to-bottom).
                - Since the lines are typically identical for this edge case to occur, you likely won't really care which 2 of the 3 lines get uncommented, so it shouldn't be a big deal.
                But if it bugs you, you can provide this switch.
            #>
                CommentOrUncommentIseSelectedLines -CommentLines $false -DoNothingWhenNotCertainOfWhichLinesToSelect $DoNothingWhenNotCertainOfWhichLinesToUncomment
            }
            
            
            #==========================================================
            # Add ISE Add-ons.
            #==========================================================
            
            # Add a new option in the Add-ons menu to save all files.
            if (!($psISE.CurrentPowerShellTab.AddOnsMenu.Submenus | Where-Object { $_.DisplayName -eq "Save All" }))
            {
                $psISE.CurrentPowerShellTab.AddOnsMenu.Submenus.Add("Save All",{Save-AllISEFiles},"Ctrl+Shift+S")
            }
            
            $ISE_STATE_FILE_PATH = Join-Path (Split-Path $profile -Parent) "IseState.xml"
            
            # Add a new option in the Add-ons menu to export the current ISE state.
            if (!($psISE.CurrentPowerShellTab.AddOnsMenu.Submenus | Where-Object { $_.DisplayName -eq "Save ISE State" }))
            {
                $psISE.CurrentPowerShellTab.AddOnsMenu.Submenus.Add("Save ISE State",{Export-ISEState $ISE_STATE_FILE_PATH},"Alt+Shift+S")
            }
            
            # Add a new option in the Add-ons menu to export the current ISE state and exit.
            if (!($psISE.CurrentPowerShellTab.AddOnsMenu.Submenus | Where-Object { $_.DisplayName -eq "Save ISE State And Exit" }))
            {
                $psISE.CurrentPowerShellTab.AddOnsMenu.Submenus.Add("Save ISE State And Exit",{Export-ISEState $ISE_STATE_FILE_PATH; exit},"Alt+Shift+E")
            }
            
            # Add a new option in the Add-ons menu to import the ISE state.
            if (!($psISE.CurrentPowerShellTab.AddOnsMenu.Submenus | Where-Object { $_.DisplayName -eq "Load ISE State" }))
            {
                $psISE.CurrentPowerShellTab.AddOnsMenu.Submenus.Add("Load ISE State",{Import-ISEState $ISE_STATE_FILE_PATH},"Alt+Shift+L")
            }
            
            # Add a new option in the Add-ons menu to comment all selected lines.
            if (!($psISE.CurrentPowerShellTab.AddOnsMenu.Submenus | Where-Object { $_.DisplayName -eq "Comment Selected Lines" }))
            {
                $psISE.CurrentPowerShellTab.AddOnsMenu.Submenus.Add("Comment Selected Lines",{Comment-IseSelectedLines},"Ctrl+K")
            }
            
            # Add a new option in the Add-ons menu to uncomment all selected lines.
            if (!($psISE.CurrentPowerShellTab.AddOnsMenu.Submenus | Where-Object { $_.DisplayName -eq "Uncomment Selected Lines" }))
            {
                $psISE.CurrentPowerShellTab.AddOnsMenu.Submenus.Add("Uncomment Selected Lines",{Uncomment-IseSelectedLines},"Ctrl+Shift+K")
            }
            
            #==========================================================
            # Perform script tasks.
            #==========================================================
            
            # Automatically load our saved session if we just opened ISE and have a default blank session.
            # Because this may remove the default "Untitled1.ps1" file, try and have this execute before any other code so the file is removed before the user can start typing in it.
            if (($psISE.PowerShellTabs.Count -eq 1) -and ($psISE.CurrentPowerShellTab.Files.Count -eq 1) -and ($psISE.CurrentPowerShellTab.Files[0].IsUntitled))
            {
                # Remove the default "Untitled1.ps1" file and then load the session.
                if (!$psISE.CurrentPowerShellTab.Files[0].IsRecovered) { $psISE.CurrentPowerShellTab.Files.RemoveAt(0) }
                Import-ISEState $ISE_STATE_FILE_PATH
            }
            
            # Clear the screen so we don't see any output when opening a new session.
            Clear-Host
            

             

            Hopefully this post makes your ISE experience a little better.  Feel free to comment and let me know if you like this or find any problems with it.  Know of any other must-have ISE add-ons? Let me know.

            Happy coding!

            Create and publish your NuGet package in one click with the New-NuGetPackage PowerShell script

            June 7th, 2013 No comments

            I’ve spent a good chunk of time investigating how nuget.exe works and creating a PowerShell script called New-NuGetPackage to make it dirt simple to pack and push new NuGet packages.

            Here’s a list of some of the script’s features:

            • Create the .nupkg package file and optionally push the package to the NuGet Gallery (or a custom gallery).
            • Can be run from Windows Explorer (i.e. double-click it) or called via PowerShell if you want to pass in specific parameters or suppress prompts.
            • Can prompt the user for the version number and release notes (prompts are prefilled with the previous version number and release notes), or can suppress all prompts.

            This makes packing and pushing your NuGet packages quick and easy, whether doing it manually or integrating it into your build system.  Creating NuGet packages wasn’t overly complicated before, but this makes it even simpler and less tedious.
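
            To give a feel for the scripted usage, here is a rough sketch of calling it from a PowerShell build step.  The file path and parameter names below are illustrative assumptions only; see the CodePlex documentation for the script’s actual parameters.

            # Rough sketch only: the path and parameter names are assumptions for illustration, not the script's documented interface.
            & "C:\Dev\Tools\New-NuGetPackage.ps1" -ProjectFilePath "C:\Dev\MyLibrary\MyLibrary.csproj" -VersionNumber "1.2.3" -ReleaseNotes "Bug fixes." -NoPrompt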

            Go to the CodePlex page to download the script and start automating your NuGet package creation today.  The CodePlex documentation describes the script in much more detail, and provides step-by-step instructions on how to get set up and start using it.

            [UPDATE] I have also used this script in a new NuGet package that will automatically create a NuGet package for your own projects without you having to do anything. Read about it here. [/UPDATE]

             

            Additional NuGet Information

            During my investigation I compiled a list of what happens when running "nuget spec" and "nuget pack" against the various file types (e.g. dll vs. project vs. nuspec).  Someone else may find this information useful, so here it is:

            Spec a Project or DLL directly (e.g. "nuget spec PathToFile"):
            - Creates a partial .nuspec; still has placeholder info for some fields (e.g. Id, Dependencies).
            - Creates [full file name with extension].nuspec file.
            - The generated .nuspec file is meant to still be manually updated before making a package from it.
            
            // TestProject.csproj.nuspec
            <?xml version="1.0"?>
            <package >
              <metadata>
                <id>C:\dev\TFS\RQ\Dev\Tools\DevOps\New-NuGetPackage\TestProject\TestProject\TestProject.csproj</id>
                <version>1.0.0</version>
                <authors>Dan Schroeder</authors>
                <owners>Dan Schroeder</owners>
                <licenseUrl>http://LICENSE_URL_HERE_OR_DELETE_THIS_LINE</licenseUrl>
                <projectUrl>http://PROJECT_URL_HERE_OR_DELETE_THIS_LINE</projectUrl>
                <iconUrl>http://ICON_URL_HERE_OR_DELETE_THIS_LINE</iconUrl>
                <requireLicenseAcceptance>false</requireLicenseAcceptance>
                <description>Package description</description>
                <releaseNotes>Summary of changes made in this release of the package.</releaseNotes>
                <copyright>Copyright 2013</copyright>
                <tags>Tag1 Tag2</tags>
                <dependencies>
                  <dependency id="SampleDependency" version="1.0" />
                </dependencies>
              </metadata>
            </package>
            =====================================================================
            Spec a DLL using "nuget spec" from the same directory:
            - Creates a partial .nuspec; still has placeholder info for some fields (e.g. Id, Dependencies).
            - Creates "Package.nuspec" file.
            - The generated .nuspec file is meant to still be manually updated before making a package from it.
            
            // Package.nuspec
            <?xml version="1.0"?>
            <package >
              <metadata>
                <id>Package</id>
                <version>1.0.0</version>
                <authors>Dan Schroeder</authors>
                <owners>Dan Schroeder</owners>
                <licenseUrl>http://LICENSE_URL_HERE_OR_DELETE_THIS_LINE</licenseUrl>
                <projectUrl>http://PROJECT_URL_HERE_OR_DELETE_THIS_LINE</projectUrl>
                <iconUrl>http://ICON_URL_HERE_OR_DELETE_THIS_LINE</iconUrl>
                <requireLicenseAcceptance>false</requireLicenseAcceptance>
                <description>Package description</description>
                <releaseNotes>Summary of changes made in this release of the package.</releaseNotes>
                <copyright>Copyright 2013</copyright>
                <tags>Tag1 Tag2</tags>
                <dependencies>
                  <dependency id="SampleDependency" version="1.0" />
                </dependencies>
              </metadata>
            </package>
            =====================================================================
            Spec a Project using "nuget spec" from the same directory:
            - Creates a template .nuspec using the proper properties and dependencies pulled from the file.
            - Creates [file name without extension].nuspec file.
            - The generated .nuspec file can be used to pack with, assuming you are packing the Project and not the .nuspec directly.
            
            // TestProject.nuspec
            <?xml version="1.0"?>
            <package >
              <metadata>
                <id>$id$</id>
                <version>$version$</version>
                <title>$title$</title>
                <authors>$author$</authors>
                <owners>$author$</owners>
                <licenseUrl>http://LICENSE_URL_HERE_OR_DELETE_THIS_LINE</licenseUrl>
                <projectUrl>http://PROJECT_URL_HERE_OR_DELETE_THIS_LINE</projectUrl>
                <iconUrl>http://ICON_URL_HERE_OR_DELETE_THIS_LINE</iconUrl>
                <requireLicenseAcceptance>false</requireLicenseAcceptance>
                <description>$description$</description>
                <releaseNotes>Summary of changes made in this release of the package.</releaseNotes>
                <copyright>Copyright 2013</copyright>
                <tags>Tag1 Tag2</tags>
              </metadata>
            </package>
            =====================================================================
            Pack a Project (without accompanying template .nuspec):
            - Does not generate a .nuspec file; just creates the .nupkg file with proper properties and dependencies pulled from project file.
            - Throws warnings about any missing data in the project file (e.g. Description, Author), but still generates the package.
            
            =====================================================================
            Pack a Project (with accompanying template .nuspec):
            - Expects the [file name without extension].nuspec file to exist in the same directory as the project file; otherwise it doesn't use a .nuspec file for the packing.
            - Throws errors about any missing data in the project file if the .nuspec uses tokens (e.g. $description$, $author$) and these aren't defined in the project, so the package is not generated.
            
            =====================================================================
            Cannot pack a .dll directly
            
            =====================================================================
            Pack a .nuspec:
            - Creates the .nupkg file with properties and dependencies defined in .nuspec file.
            - .nuspec file cannot have any placeholder values (e.g. $id$, $version$).
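
            For reference, the commands behind the scenarios above look roughly like the following when run from a PowerShell prompt in the relevant directory (file names are placeholders, and nuget.exe is assumed to be on your path):

            # Spec and pack scenarios from the notes above (file names are placeholders).
            nuget spec MyProject.csproj      # Spec a project file directly -> creates MyProject.csproj.nuspec with placeholder metadata.
            nuget spec                       # Run beside the project file  -> creates MyProject.nuspec as a tokenized template ($id$, $version$, etc.).
            nuget pack MyProject.csproj      # Pack the project; uses MyProject.nuspec if it sits beside the project file.
            nuget pack MyPackage.nuspec      # Pack a standalone .nuspec; it must not contain $token$ placeholders.
            nuget push MyPackage.1.0.0.nupkg # Push the package to the NuGet Gallery (after setting your API key with "nuget setApiKey").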