A Better PowerShell Get Scheduled Job Results

Yesterday I posted a quick update on my code to get the most recent scheduled job result in PowerShell. I had been using a simple script, but the more I thought about it, the more I realized I really did need to turn it into a function with more flexibility. When creating a PowerShell-based tool you need to think about who might be using it. Even though I only wanted the last result for all enabled jobs, there might be situations where I wanted, say, the last two or three. And maybe I wanted results for only a specific job. Or maybe all jobs. The bottom line is that I needed more flexibility. Now I have the Get-ScheduledJobResult function.

The function takes parameters that I can pass to Get-ScheduledJob and Get-Job. For the most part the core functionality remains the same: I get the X most recent results for my scheduled jobs. The difference is that now these values are controlled by parameters. The other benefit of my revision is error handling. Before, I had a single pipelined expression, but now I use a Try/Catch block to get the scheduled job by name, using * as the default. You'll also notice I'm using my own error variable. If there is an error, I can handle it more gracefully. I'll let you test it with a bogus scheduled job name to see what happens.
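A minimal sketch of what that might look like. The function name comes from the post; the parameter names (other than -Name) and the body are my assumptions based on the description:

[cc lang="PowerShell"]
#requires -version 3.0

Function Get-ScheduledJobResult {
    [cmdletbinding()]
    Param(
        # * is the default so all scheduled jobs are matched
        [string]$Name = "*",
        # how many of the most recent results to retrieve
        [int]$Newest = 1
    )

    Try {
        # get matching scheduled jobs, trapping a bogus name
        $jobs = Get-ScheduledJob -Name $Name -ErrorAction Stop -ErrorVariable ev
    }
    Catch {
        Write-Warning "Failed to find a scheduled job matching $Name"
        Return
    }

    # for each scheduled job, get the newest result(s)
    foreach ($job in $jobs) {
        Get-Job -Name $job.Name -Newest $Newest
    }
}
[/cc]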

But perhaps the biggest change is that I define my own object type, based on the Job object.

For every job result I insert a new typename. Because I might have multiple objects, I need to insert the typename for each one. As I was working on this I originally was inserting my typename into the Microsoft.PowerShell.ScheduledJob.ScheduledJob object, but my custom formatting wasn't working properly. I found that by selecting all properties, which has the effect of creating a Selected.Microsoft.PowerShell.ScheduledJob.ScheduledJob object, my changes worked.
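In code, the per-object typename insertion might look like this. The typename My.ScheduledJobResult is an invented placeholder:

[cc lang="PowerShell"]
# selecting all properties creates a Selected.* wrapper object,
# which is what allows the custom formatting to work
$results = Get-Job -Newest 1 | Select-Object -Property *

foreach ($result in $results) {
    # insert the custom typename at the front of the type name list
    $result.PSObject.TypeNames.Insert(0, "My.ScheduledJobResult")
}
$results
[/cc]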

Inserting the typename is only part of the process. In the script file that defines the function, I included code to take advantage of Update-TypeData. In PowerShell 3.0 we no longer need to deal with XML files; type updates can be done on the fly. So instead of creating my custom properties with Select-Object and custom hash tables, I add them as alias properties. I do something similar to create the Run property.
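Here is a sketch of that on-the-fly type extension. The member names, and the guess that Run is calculated from the job's start and end times, are my assumptions:

[cc lang="PowerShell"]
# add an alias property so the job's Location shows up
# under a friendlier name
Update-TypeData -TypeName "My.ScheduledJobResult" `
 -MemberType AliasProperty -MemberName "Computername" `
 -Value "Location" -Force

# something similar for Run, here as a script property
# calculating how long the job ran
Update-TypeData -TypeName "My.ScheduledJobResult" `
 -MemberType ScriptProperty -MemberName "Run" `
 -Value { $this.PSEndTime - $this.PSBeginTime } -Force
[/cc]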

The last part of my revision is to define the default display property set. The effect is that when I run my function, if I don’t specify any other formatting, by default I’ll see the properties I want.
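Defining the default display property set also happens with Update-TypeData. The properties chosen here are illustrative:

[cc lang="PowerShell"]
# without any explicit formatting, only these properties display
Update-TypeData -TypeName "My.ScheduledJobResult" `
 -DefaultDisplayPropertySet "Name","Computername","Run","State" -Force
[/cc]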

And I still have access to all of the other properties as well.


Now I have a tool, complete with an alias, with defaults that work for me, but if I need to see something else I can adjust the output based on my parameters. If you want to try this, save the function and type information to the same script file.

MSDevWNY PowerShell Advanced Functions

Last night I presented for the MSDevWNY user group in the Buffalo, NY area. They were an interested and enthusiastic audience and I think we could have spent another few hours talking about PowerShell. My presentation was one I’ve given before on Advanced PowerShell functions. I promised the group a copy of my slides and demos, including the scripts we didn’t have time to get to. But the material is open to anyone.

If you want to learn more about PowerShell scripting and toolmaking then naturally the best book is Learn PowerShell Toolmaking in a Month of Lunches.

Download the PowerShell Advanced Functions v3 zip file.

Thanks to Rich and everyone in Buffalo; I look forward to a return visit.

Why Doesn’t My Pipeline Work?

I saw a little discussion thread on Twitter this morning which I felt needed a little more room to explain. Plus, since we’re in Scripting Games season, beginners might like a few pointers. I always talk about PowerShell, objects, and the pipeline. But sometimes what looks like a pipelined expression in the PowerShell ISE doesn’t behave the way you might expect.

Here’s an example.
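Something along these lines, reconstructed from the description that follows:

[cc lang="PowerShell"]
# write the numbers 1 through 5 from inside a While loop
$i = 1
While ($i -le 5) {
    $i
    $i++
}
[/cc]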

If you run this, you’ll see the numbers 1 to 5 written to the pipeline. But if you try something like this, it will fail.
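The failing version pipes the loop itself, along these lines:

[cc lang="PowerShell"]
# this fails: While is not a cmdlet, so nothing is written
# to the pipeline at the closing brace
$i = 1
While ($i -le 5) {
    $i
    $i++
} | Measure-Object -Sum
[/cc]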

You’ll get an error about an empty pipe. In fact, in the PowerShell ISE you’ll get a red squiggle under the | indicating this is not going to work. That’s because PowerShell isn’t writing to the pipeline at the end of the scriptblock, but rather within it. Another way to think about it is that the While operator is not a cmdlet, so the only thing writing objects to the pipeline is whatever commands are within the While loop.

What you can do is something like this:
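A minimal sketch; the variable name $numbers is illustrative:

[cc lang="PowerShell"]
# assign the loop's pipeline output to a variable
$i = 1
$numbers = While ($i -le 5) {
    $i
    $i++
}
# now there are objects to work with
$numbers | Measure-Object -Sum
[/cc]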

Here, I’m capturing the pipeline output from the scriptblock and saving it to a variable. Then I have objects I can use. Or if you wanted to be clever, you could use a subexpression.
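The subexpression version might be sketched like this:

[cc lang="PowerShell"]
# a subexpression turns the statement's output into pipeline input
$i = 1
$(While ($i -le 5) { $i ; $i++ }) | Measure-Object -Sum
[/cc]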

This same behavior also applies to Do and the ForEach enumerator. The latter trips people up all the time.
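For example, something like this looks reasonable but fails with the same empty pipe error (the file path is illustrative):

[cc lang="PowerShell"]
# the ForEach enumerator is a language keyword, not a cmdlet,
# so nothing here writes to the pipeline
ForEach ($i in 1..5) {
    $i * 2
} | Out-File c:\work\numbers.txt
[/cc]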

You think you’ll get the output of ForEach saved to the file, but you’ll run into the empty pipeline again. You could use a variable and then pipe the variable to the file or use a subexpression. Even better, use a pipelined expression.
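A pipelined version along these lines works (path illustrative):

[cc lang="PowerShell"]
# ForEach-Object is a cmdlet, so its output goes to the pipeline
1..5 | ForEach-Object { $_ * 2 } | Out-File c:\work\numbers.txt
[/cc]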

Here I’m using the cmdlet ForEach-Object, which unfortunately has an alias of ForEach, and that confuses PowerShell beginners. So don’t assume that just because you see a set of { } you get pipelined output. Remember, cmdlets write objects to the pipeline, not operators.

Filter Left

When writing WMI query expressions in Windows PowerShell, it is recommended to use WMI filtering, as opposed to getting all of the objects and then filtering with Where-Object. I see expressions like this quite often:
[cc lang="PowerShell"]
get-wmiobject win32_process -computer $c | where {$_.name -eq "notepad.exe"}
[/cc]
In this situation, ALL process objects are retrieved and THEN filtered. The better performing approach is to use a WMI filter:
[cc lang="PowerShell"]
get-wmiobject win32_process -filter "name='notepad.exe'" -computer $c
[/cc]
The WMI service on the remote computer filters in place and you only get back the item you want. Don’t believe me? Measure for yourself. Start up Notepad, then define these script blocks.
[cc lang="PowerShell"]
PS C:\> $a={gwmi win32_process | where {$_.name -eq "notepad.exe"}}
PS C:\> $b={gwmi win32_process -filter "name='notepad.exe'"}
[/cc]
Now measure how long it takes the first to run:
[cc lang="PowerShell"]
PS C:\> Measure-Command $a
[/cc]
WMI caches results, so wait about 10 minutes and then measure the second script block.
[cc lang="PowerShell"]
PS C:\> Measure-Command $b
[/cc]
For me, the second expression took half as long. Granted this is a small data set and I’m not going to quibble over 100ms. But when you think about querying many computers with the potential for larger data sets, the performance gains are significant. So get in the habit of filtering as far to the left as you can in your PowerShell expressions.

[this was originally posted in my Google+ account.]

Verbose or Debug?

This morning there was some discussion on Twitter about when to use Write-Verbose and when to use Write-Debug. They both can provide additional information about what your script or function is doing, although you have to write the code. Typically, I use Write-Verbose to provide trace and flow messages. When enabled, it makes it easier to follow what the script is doing, and often the messages include variable information. Write-Debug is helpful for providing detailed debug messages, but it also has the effect of turning on debugging when you include it.

Here’s a sample script that uses both cmdlets.

[cc lang="PowerShell"]

#requires -version 2.0

[cmdletbinding()]
Param([string]$computername=$env:computername)

$start=Get-Date
Write-Verbose "Starting $($myinvocation.mycommand)"
Write-Debug "`$computername is $computername"
Write-Verbose "Connecting to $computername"

Try {
    Write-Debug "Trying WMI"
    $cs=Get-WmiObject -Class win32_computersystem -ComputerName $computername -ErrorAction Stop
}
Catch {
    Write-Debug "Exception caught"
    Write-Warning ("Failed to get WMI information from $computername. {0}" -f $_.Exception.Message)
}

if ($cs) {
    Write-Verbose "Processing Results"
    Write-Debug ($cs | Select * | Out-String)
    $cs | Select Model,Manufacturer
}
$end=Get-Date
Write-Debug ("Total processing time {0}" -f ($end-$start).ToString())
Write-Verbose "Ending $($myinvocation.mycommand)"
[/cc]

The script can use the -Verbose and -Debug common parameters because I include the [cmdletbinding()] attribute. You don’t need to define the parameters yourself. When I run the script normally, it runs as expected.

[cc lang="DOS"]
PS S:\> .\debugdemo.ps1

Model                          Manufacturer
-----                          ------------
Qosmio X505                    TOSHIBA
[/cc]

When I use -Verbose, all the Write-Verbose commands “work”.
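Output along these lines; the verbose messages come straight from the script above, with MYCOMPUTER standing in for your computer name:

[cc lang="DOS"]
PS S:\> .\debugdemo.ps1 -Verbose
VERBOSE: Starting debugdemo.ps1
VERBOSE: Connecting to MYCOMPUTER
VERBOSE: Processing Results

Model                          Manufacturer
-----                          ------------
Qosmio X505                    TOSHIBA

VERBOSE: Ending debugdemo.ps1
[/cc]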

The -Debug parameter does the same thing for Write-Debug, but it also turns on debugging:
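Illustrative output, showing the prompt that appears for each Write-Debug call (MYCOMPUTER stands in for your computer name):

[cc lang="DOS"]
PS S:\> .\debugdemo.ps1 -Debug
DEBUG: $computername is MYCOMPUTER

Confirm
Continue with this operation?
[Y] Yes  [A] Yes to All  [H] Halt Command  [S] Suspend  [?] Help
(default is "Y"):
[/cc]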

Not only do I get the Write-Debug messages, but I get a prompt for every command. I can drop into the debug prompt using the Suspend option and look at variables or run any other PowerShell commands. Some 3rd party script editors, like PrimalScript, also take advantage of Write-Debug messages. I can load the script into the editor and run it with Debug turned on (F7 or F5) and the debug messages show in the Debug window.

This ability to step through a script is very handy, but often I personally just need to see where I’m at in the script, and Write-Verbose suffices. As you can see, you can have both types of commands, and you can certainly run the script with both parameters. One last note: both cmdlets require that the message be a string. When I want to write objects using either Write-Debug or Write-Verbose, I use expressions like this:

[cc lang="PowerShell"]
Write-Debug ($cs | Select * | Out-String)
[/cc]

I encourage you to include Verbose/Debug messages from the very beginning of your script development. You only see the messages when you use the appropriate parameter. It may seem like a lot of work up front, but when the time comes to debug or trace a problem, you’ll realize it was time well spent.