Field Note

Data-driven PowerShell Tests with Pester

powershell data-driven-tests pester
Logo of Pester testing framework for PowerShell.
Posted on Tuesday, January the 13th, 2026
5 min read

Tl;dr (if you currently need to catch your train)

  • Writing tests for automation scripts in CI/CD and system administration is important once code reaches a certain complexity threshold.
  • PowerShell has some advantages over other scripting languages commonly used for CI/CD and system administration, such as Bash, due to its comparatively high-level feature set.
  • Pester is a behavior-driven development (BDD) testing framework for PowerShell that also supports data-driven testing (DDT) and generates coverage reports out of the box.
  • A simple example of a DDT setup using Pester is shown below (full code available on GitHub).

Writing tests for automation scripts

Testing PowerShell scripts may seem a bit over the top given that it is not commonly used to write large programs. If you are, however, writing automation scripts, e.g., for CI/CD pipelines or administration use cases, PowerShell has a couple of advantages over other regularly used scripting languages like Bash: support for explicit typing, object orientation, exceptions, a module system, and an extensive standard library (not to mention access to the .NET standard libraries). It is also the suggested choice of scripting language on modern Windows systems.

Tests for CI/CD or system administration scripts—at least from my experience—are not that common even though those scripts sometimes may perform mission-critical duties and bugs may have severely adverse consequences. (Think a CI/CD pipeline breaking due to a script bug while you’re trying to deploy an urgent hotfix. Not a good time.)

Naturally, the more intricate scripts become, the more likely it is that they will benefit from writing tests. Well-designed tests allow us to prevent bugs and safeguard against regressions.

A short introduction to Pester

Now, writing tests, however necessary it may be, involves careful thinking, since tests are the usual safeguard for code quality. A testing framework will not decide for us which tests to write for what, but it can at least reduce the mental load of writing and maintaining test suites. Pester is a behavior-driven development (BDD) testing and mocking framework for PowerShell that fits this bill of providing a developer-friendly way of dealing with automation script tests.

Pester does that by, among other things, adopting some conventions from other testing frameworks. It uses, for example, the intuitive Describe, It, and Should pattern for structuring test code. This style of writing tests should be familiar to people coming from other frameworks like Vitest. Pester also supports writing in a domain-specific language (DSL) style, using script blocks (closures) to pass the test logic. This tends to make tests more readable and accessible.

Pester also has integrated support for code-coverage metric generation to detect missing test cases.
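For illustration, coverage collection can be switched on via Pester 5's configuration object; the file paths below are placeholder assumptions for the sake of the example:

```powershell
# Sketch: run tests with code coverage enabled (Pester 5 configuration API).
# The paths are placeholders; point them at your own module and test files.
$config = New-PesterConfiguration
$config.Run.Path = './StringFunctions.Tests.ps1'      # test file(s) to run
$config.CodeCoverage.Enabled = $true                  # collect coverage metrics
$config.CodeCoverage.Path = './StringFunctions.psm1'  # code under test
Invoke-Pester -Configuration $config
```

Pester then reports per-command coverage and can emit a JaCoCo-style report for CI tooling.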

For more info on Pester, please follow the link below.

An example Pester test

A simple Pester test looks something like this:

Describe "Guaranteed to succeed" {
  It "Is true" {
    $True | Should -Be $True
  }
}

Data-driven tests

The data-driven testing (DDT) approach means repeating the same test over a set of test data. DDT can be very convenient for reducing the amount of test code and, potentially, for collaborating with non-technical teams that write the test cases.

Pester has built-in support for data-driven testing via the -ForEach option of It. That is, we can run a single test for multiple test cases as follows:

Describe "The test" {
  It "holds that" -ForEach $testCases {
    # ...
  }
}
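For a concrete, made-up example, test data can also be supplied inline as an array of hashtables; Pester exposes each hashtable's keys as variables inside the test:

```powershell
Describe "Addition" {
  It "adds <A> and <B> to get <Expected>" -ForEach @(
    @{ A = 1; B = 2; Expected = 3 }
    @{ A = -1; B = 1; Expected = 0 }
  ) {
    # Each key of the current test case (A, B, Expected) is available as a variable.
    ($A + $B) | Should -Be $Expected
  }
}
```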

An example data-driven test with Pester

Assume we have a PowerShell function like the following that tests whether an input string is entirely uppercase.

# ./StringFunctions.psm1
function Test-IsUpperCase {
  param(
    [Parameter(Mandatory = $True)]
    [string]$TheString
  )

  return "$TheString".ToUpper() -ceq "$TheString"
}
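Called directly, the function behaves as follows; the case-sensitive -ceq comparison does the actual work:

```powershell
Import-Module ./StringFunctions.psm1

Test-IsUpperCase -TheString 'AA'  # $True:  'AA'.ToUpper() equals 'AA' case-sensitively
Test-IsUpperCase -TheString 'aA'  # $False: 'aA'.ToUpper() is 'AA', which differs from 'aA'
```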

And let's further assume that we want to check that the following test data set passes verification.

{
  "Data": [
    { "Value": "aA", "Expected": false },
    { "Value": "AA", "Expected": true }
  ]
}

Then, a simple data-driven test suite in Pester may look like this:

# Test-IsUppercase.Tests.ps1
param(
    [Parameter(Mandatory = $True)]
    [hashtable[]]$TestData
)

BeforeAll {
    Import-Module ./StringFunctions.psm1 -Force -Function Test-IsUpperCase -Verbose
}

Describe "Data-driven tests" {
    It "Returns <Expected> for input `"<Value>`"." -ForEach $TestData {
        Test-IsUpperCase -TheString "$Value" | Should -Be $Expected
    }
}

Notes

  • Test files are only accepted by Pester if they match the pattern *.Tests.ps1 (Test-IsUppercase.Tests.ps1 in this case).
  • Test data are passed to the test script Test-IsUppercase.Tests.ps1 as a mandatory parameter and iterated over in the It test by passing them to the -ForEach option.
  • The description of the It test (its first parameter) may also contain interpolation expressions referencing the input data fields, for more expressive output messages.

To run a data-driven test in Pester, we need to create a Pester container object and pass it to Invoke-Pester:

$container = New-PesterContainer -Path './Test-IsUppercase.Tests.ps1' -Data @{ TestData = $testData }
Invoke-Pester -Container $container -Output Detailed
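The $testData variable has to be populated first. Assuming the JSON test data from earlier is stored in a file named, say, test-cases.json (an assumed name), it could be loaded like this on PowerShell 6 or later, where ConvertFrom-Json supports -AsHashtable:

```powershell
# Load the JSON test data into an array of hashtables, matching the
# [hashtable[]]$TestData parameter of the test script.
$json = Get-Content -Path './test-cases.json' -Raw | ConvertFrom-Json -AsHashtable
$testData = $json.Data
```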

The output for the test data set above should be something like this

Pester v5.7.1

Starting discovery in 1 files.
Discovery found 2 tests in 74ms.
Running tests.

Running tests from '/opt/local/Test-IsUppercase.Tests.ps1'
VERBOSE: Loading module from path '/opt/local/StringFunctions.psm1'.
VERBOSE: Exporting function 'Test-IsUpperCase'.
VERBOSE: Importing function 'Test-IsUpperCase'.
Describing Data-driven tests
  [+] Returns False for input "aA". 42ms (28ms|15ms)
  [+] Returns True for input "AA". 1ms (1ms|0ms)
Tests completed in 232ms
Tests Passed: 2, Failed: 0, Skipped: 0, Inconclusive: 0, NotRun: 0

A logical next step would then be to bundle test cases and tests together in directories within a common root like tests and then auto-discover and run DDT tests for each subdirectory.

tests
├── ...
├── is-uppercase
│   ├── test-cases.json
│   ├── Tests.ps1
│   └── ...
└── ...
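Under that layout, discovery and execution could be sketched like this (assuming each subdirectory holds a test-cases.json and a Tests.ps1 as shown above):

```powershell
# Sketch: build one Pester container per test-case directory and run them all.
$containers = Get-ChildItem -Path ./tests -Directory | ForEach-Object {
  $cases = Get-Content -Path (Join-Path $_.FullName 'test-cases.json') -Raw |
    ConvertFrom-Json -AsHashtable
  New-PesterContainer -Path (Join-Path $_.FullName 'Tests.ps1') `
    -Data @{ TestData = $cases.Data }
}
Invoke-Pester -Container $containers -Output Detailed
```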

Full code

Follow the link below for the full proof-of-concept code on my GitHub.

friedrichkurz.me

© 2026 Friedrich Kurz
