PowerShell is an extremely *ahem* powerful programming language. Since 2017, it has been my language of choice whenever it comes to scripting and automation on Windows, including small-scale systems integrations.
PowerShell has two key benefits over the alternatives:
- .NET integration: PowerShell has direct access to the whole .NET class library. If you already know .NET (for example, from a C# background), you will find this immensely helpful.
- First-class support: Microsoft is investing heavily in PowerShell as the successor to VBScript and Batch scripts. Almost any Windows administration task can be automated through PowerShell… not to mention Exchange Online, etc. If you're a Windows system administrator and still using VBScript and Batch, you're living in the ancient past†.
But PowerShell isn't without its warts. Notably, it suffered from an awful OOP implementation for 10 years before classes were added in PowerShell 5. Beyond that, it has a number of quirks and gotchas—unexpected behaviours that can take you by surprise if you aren't aware of them, especially if you are used to other .NET languages such as C#.
Here are a few things to watch out for when working with PowerShell.
By Default, String Comparisons are Case-Insensitive
PowerShell is case-insensitive by default:
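For instance, both of the following comparisons evaluate to $true, because the standard operators ignore case:

```powershell
# -eq and -like ignore case by default.
'PowerShell' -eq 'POWERSHELL'   # True
'PowerShell' -like 'power*'     # True
```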
To perform a case-sensitive comparison, you must use the case-sensitive comparison operators (those prefixed with c, such as -ceq). Refer to about_Comparison_Operators for a full list of available operators.
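As a brief illustration, the c-prefixed operators compare case-sensitively:

```powershell
# -ceq is the case-sensitive counterpart of -eq.
'PowerShell' -ceq 'POWERSHELL'  # False
'PowerShell' -ceq 'PowerShell'  # True
```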
$null Comparisons can be Inconsistent
If you write your $null comparisons with $null on the right-hand side (as is common in most programming languages), you may encounter inconsistent results. For example:
This is due to differences in how the comparison operators work for scalars (individual values) versus arrays. A detailed explanation can be found in the PSScriptAnalyzer documentation.
To avoid this problem, consider placing $null on the left-hand side of comparisons. This way, $null (a scalar) is the subject of the comparison, and the behaviour will be consistent regardless of whether it's compared to another scalar or to an array:
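With $null on the left, both comparisons behave as scalar comparisons and return a boolean:

```powershell
$value = $null
$array = @()

# $null on the left always performs a scalar comparison.
$null -eq $value    # True
$null -eq $array    # False
```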
By Default, PowerShell Continues after Non-Fatal Errors
By default, if PowerShell encounters a non-fatal error, it will automatically continue execution. This can be undesirable for scripts and automations, where you generally want to stop if an unexpected error occurs.
If you don't want PowerShell to blindly continue after a non-fatal error occurs, you have three options:
- Use try/catch blocks. A non-fatal error is treated as an exception for the purpose of try/catch blocks, and can be handled however you please.
- Use the -ErrorAction parameter on individual cmdlets. This can be used to cause the script to stop on a per-cmdlet basis.
- Set $ErrorActionPreference, which controls PowerShell's default behaviour when a non-fatal error occurs.
For example, to make a PowerShell script automatically stop when an unhandled non-fatal error occurs, set $ErrorActionPreference to Stop:
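A minimal sketch (the path below is just a placeholder for a cmdlet that fails):

```powershell
# Treat all non-fatal (non-terminating) errors as terminating.
$ErrorActionPreference = 'Stop'

# If this cmdlet fails, the script stops here instead of continuing.
Get-Item -Path 'C:\does\not\exist'
Write-Host 'This line is never reached if the cmdlet above fails.'
```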
Note that -ErrorAction and $ErrorActionPreference do not affect the handling of errors caught in a try/catch block; the block will still catch and handle errors regardless of the configured error action.
Adding Array Elements with += Recreates the Array
If you've been using PowerShell for a while, you may be accustomed to adding elements to an array using the += operator:
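For example:

```powershell
$array = @(1, 2, 3)
$array += 4    # Looks like an in-place append, but it isn't.
$array         # 1 2 3 4
```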
However, the array += operator comes at a significant performance cost because it recreates the array every time. As explained in the documentation:
When you use the += operator, PowerShell actually creates a new array with the values of the original array and the added value. This might cause performance issues if the operation is repeated several times or the size of the array is too big.
If you need to grow an array inside of a loop, consider using the .NET List&lt;T&gt; class instead. New elements can be added to a List&lt;T&gt; without having to recreate the array every time:
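A minimal sketch:

```powershell
# Create a List[T] and grow it inside a loop. Add() appends in place
# (amortised), so repeated additions stay cheap.
$list = [System.Collections.Generic.List[int]]::new()
foreach ($i in 1..10000) {
    $list.Add($i)
}
$list.Count    # 10000
```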
So why does PowerShell need to clone the array when the += operator is used? Like many programming languages, PowerShell allocates a single contiguous block of memory for arrays and their elements. All elements of an array are located next to each other in memory, arranged in order of their index (index 0 being the first element, index 1 being the second, and so on).
Since array memory is allocated as a contiguous block, it isn't possible to expand the array because the memory immediately following it could be in-use by something else. Instead, PowerShell finds a new contiguous block of memory large enough to store the original array plus the new element, allocates it, copies the elements across, then de-allocates the original array—a task that can be very slow when performed frequently, such as inside of a loop.
Internally, the .NET List&lt;T&gt; class also uses an array to store its elements. So how does it avoid the performance penalty of recreating the array whenever an element is added? Simple: by allocating space for extra array elements beyond what's initially required. By maintaining a buffer of spare array elements, new data can be added to a List&lt;T&gt; without having to recreate the array every time. This is why adding new values to a List&lt;T&gt; is significantly faster than adding new elements to an array.
By Default, Accessing Uninitialized Variables is not an Error
By default, uninitialized variables are considered equivalent to $null, and referencing them is not considered an error:
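For instance (the variable name below is purely illustrative):

```powershell
# $neverAssigned was never initialized, yet this is not an error;
# it silently evaluates to $null.
$neverAssigned -eq $null    # True
```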
This can cause all kinds of unexpected issues. For example, it makes it easier for typos to go unnoticed:
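A quick sketch of how a typo slips through:

```powershell
$timeout = 30

# Typo: $timeuot was never assigned, so the condition silently
# compares $null against 60 instead of raising an error.
if ($timeuot -gt 60) { Write-Host 'Long timeout' }
```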
To change this behaviour, use Set-StrictMode to set strict mode to version 1.0 or greater.
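For example:

```powershell
Set-StrictMode -Version 1.0

# Referencing an uninitialized variable now throws an error
# instead of silently returning $null.
$neverAssigned
```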
Similarly, exceeding the bounds of an array is also not considered an error by default. To change this behaviour, set strict mode to 3.0 or greater.
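A brief illustration:

```powershell
Set-StrictMode -Version 3.0

$array = @(1, 2, 3)
$array[10]    # Now an error instead of silently returning $null.
```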
PowerShell Scoping is not Strictly Lexical
PowerShell's scoping isn't strictly lexical—variables declared inside of if statements and for loops can be accessed outside of them:
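For example:

```powershell
if ($true) {
    $insideIf = 'hello'
}

# $insideIf remains accessible outside the if block:
$insideIf    # hello
```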
I prefer not to use this functionality because it can be confusing. Be wary of it during code reviews—what looks like a reference to an uninitialized variable may actually be valid.
Piping to Out-Null can be Slow
Out-Null can be used to discard the output of a cmdlet via the pipeline. However, using the pipeline adds a performance overhead. This overhead can become significant when piping a large number of cmdlets to Out-Null inside of a loop. Instead, consider assigning the cmdlet's output to $null, which is much faster:
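For example:

```powershell
# Slower: discards output by sending it through the pipeline.
Get-Process | Out-Null

# Faster: assigning to $null avoids the pipeline overhead entirely.
$null = Get-Process
```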
That covers all the gotchas I've encountered as a PowerShell developer so far. What kind of gotchas have you encountered?
† Unless you're maintaining a pre-PowerShell version of Windows, in which case, you have bigger problems than PowerShell could ever solve.