An Improved Get-Verb Command

A recommended best practice for PowerShell scripting, especially when developing functions, is to follow the standard Verb-Noun naming convention. The verb should be a value from the list of approved .NET verbs. The easy way to see that list is with the Get-Verb cmdlet. The result also indicates the verb group or category, such as Security or Lifecycle. If you want to filter for a particular group, you need to use a Where-Object expression. It really isn’t that difficult, but I decided to make my own version of Get-Verb that lets you specify one or more groups.

The function is essentially a wrapper for Get-Verb. It uses the same parameters plus my addition. The help is also from Get-Verb, although I modified it slightly to reflect my version of the command. You can use Get-MyVerb as you would Get-Verb or you can specify one or more groups.
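Here is a simplified sketch of the function (the full version also carries the modified comment-based help; the group names come from Get-Verb’s own output):

```powershell
Function Get-MyVerb {
    # A simplified sketch; the full version also includes
    # comment-based help adapted from Get-Verb.
    [cmdletbinding()]
    Param(
        [Parameter(Position = 0)]
        [string[]]$Verb = "*",
        # my addition: filter on one or more verb groups
        [ValidateSet("Common","Communications","Data",
            "Diagnostic","Lifecycle","Other","Security")]
        [string[]]$Group
    )
    $verbs = Get-Verb -Verb $Verb
    if ($Group) {
        # keep only verbs belonging to the requested group(s)
        $verbs = $verbs | Where-Object { $Group -contains $_.Group }
    }
    $verbs
}
```

A command like `Get-MyVerb -Group Security,Lifecycle` then shows only the verbs in those groups.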


The last improvement is that if you run help for Get-MyVerb with -Online, you’ll get the MSDN documentation about .NET verbs, which goes into detail. I think you will find it helpful.

Enjoy and as always, comments or questions are welcome.

Scary PowerShell

In honor of today’s festivities, at least in the United States, I thought we’d look at some scary PowerShell. I have seen a lot of scary things in blog posts, tweets and forum discussions. Often these scary things are from people just getting started with PowerShell who simply haven’t learned enough yet to know better, although I have seen some of them from people who appear to be a bit more experienced. I find these things scary because I think they are “bad” examples of how to use PowerShell. Often these examples will work and provide the desired result, but the journey is through a terrifying forest. Please note that PowerShell style is subjective, but I think many of the core concepts I want to show you are sound.

First, here is a chunk of scary PowerShell code. This is something I wrote based on a number of “bad” techniques I’ve encountered.
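It looks something like this (a reconstruction of the style, not an exact listing; the folder, size and date match the description that follows):

```powershell
# the "scary" style: manual headers, string concatenation,
# text parsing and legacy redirection
"Name,Size,Extension" > C:\Work\report.csv
$files = Get-ChildItem C:\Work | Where-Object { $_.Length -gt 1048576 } |
    Where-Object { $_.LastWriteTime -gt "1/1/2014" }
foreach ($file in $files) {
    # parse the extension out of the file name as text
    $ext = $file.Name.Substring($file.Name.LastIndexOf(".") + 1)
    $line = $file.Name + "," + $file.Length.ToString() + "," + $ext
    $line >> C:\Work\report.csv
}
```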

This code will create a CSV file with files in C:\Work that are over a certain size and modified after January 1, 2014. When I see code like this I am pretty confident the author is coming from a VBScript background. To me, I see a lot of energy working with text. I’ll come back to that point in a moment.

First, the filtering is a bit ugly. Unless you have a specific byte value in mind, use shortcuts like 1MB, which is easier to understand than 1048576. You can also combine filters into one.
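For example, using the same folder and date:

```powershell
# the 1MB shortcut plus combined conditions in one Where-Object
Get-ChildItem C:\Work |
    Where-Object { $_.Length -gt 1MB -and $_.LastWriteTime -gt "1/1/2014" }
```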

I cringe when I see people trying to concatenate values in PowerShell. In this case it simply isn’t needed. But for the sake of learning, if you really needed to build a string, take advantage of variable expansion. Here I will need to use a sub-expression.
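For example, assuming $file holds a file object (the path here is just for illustration):

```powershell
$file = Get-Item C:\Work\report.doc   # any file object will do
# variable expansion with sub-expressions instead of concatenation
"$($file.Name) is $($file.Length) bytes"
```

The $( ) sub-expression is what lets you expand a property of an object inside a double-quoted string.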

But in the example of bad code, instead of manually creating a CSV file, PowerShell can do that for you with Export-CSV. In the code sample, the header is different from the property names, but that’s OK. We can use a hashtable with Select-Object and create something new.
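Putting Select-Object and Export-CSV together looks something like this (Size is the custom header, mapped to the Length property):

```powershell
Get-ChildItem C:\Work |
    Where-Object { $_.Length -gt 1MB -and $_.LastWriteTime -gt "1/1/2014" } |
    Select-Object Name,
        @{Name = "Size"; Expression = { $_.Length }},
        Extension |
    Export-Csv -Path C:\Work\report.csv -NoTypeInformation
```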

The original code used text parsing to get the file extension. But if you pipe a file object to Get-Member, you would discover there is a property called Extension. When building scripts, pipe objects to Get-Member or Select-Object with all properties to discover what you have to work with. The original code is written with the assumption that the CSV file will be used outside of PowerShell. If that is the case, then export it without type information. But if you think you will import the data back into PowerShell, include the type information because it will help PowerShell reconstruct the objects. You would learn all of this by looking at help and examples for Export-CSV.

The example above is technically a one-line command. But it doesn’t have to be. It might make more sense to break things up into discrete steps.
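A step-by-step version might look like this:

```powershell
# the same task broken into discrete steps
$path = "C:\Work"
$cutoff = [datetime]"1/1/2014"

# get the files of interest
$files = Get-ChildItem -Path $path |
    Where-Object { $_.Length -gt 1MB -and $_.LastWriteTime -gt $cutoff }

# shape the output with a custom Size header
$data = $files | Select-Object Name,
    @{Name = "Size"; Expression = { $_.Length }},
    Extension

# export, protecting any existing file
$data | Export-Csv -Path C:\Work\report.csv -NoTypeInformation -NoClobber
```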

This code is sort of a compromise and isn’t too difficult to follow, even if you are new to PowerShell. This would give you the same result.

Sometimes using the ForEach enumerator is faster than using ForEach-Object in a pipeline. You have to test with Measure-Command. If you are running PowerShell v4, you can take advantage of the new Where() method which can dramatically improve performance.
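To compare, wrap each approach in Measure-Command. The v4 Where() method looks like this:

```powershell
$files = Get-ChildItem -Path C:\Work

# pipeline approach
Measure-Command {
    $files | Where-Object { $_.Length -gt 1MB }
}

# PowerShell v4 Where() method on the collection
Measure-Command {
    $files.Where({ $_.Length -gt 1MB })
}
```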

By now I hope you can see how this code is taking advantage of cmdlets and the pipeline. For example, Export-CSV has a -NoClobber parameter which prevents you from overwriting an existing file. You can’t do that with legacy redirection operators like > and >> without additional commands.

The final step with my scary code, which now isn’t quite so scary I hope, is to turn this into something re-usable. If you put the above into a script as-is, you would have to edit the file every time you wanted to check a different folder or export to a different file. This is where we can turn it from a pumpkin into a fantastic carriage, that won’t revert at midnight.
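A sketch of such a function, using Get-FileData as a placeholder name of my own choosing:

```powershell
Function Get-FileData {
    # a reusable tool: Get-FileData is a placeholder name
    [cmdletbinding()]
    Param(
        [Parameter(Position = 0)]
        [string]$Path = ".",
        # a filtering script block; the default displays everything
        [scriptblock]$Filter = { $true }
    )
    Get-ChildItem -Path $Path -File |
        Where-Object -FilterScript $Filter |
        Select-Object Name,
            @{Name = "Size"; Expression = { $_.Length }},
            Extension, LastWriteTime
}
```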

This function gives me a flexible tool. All I need to do is specify a path, although it defaults to the current directory, and some sort of filtering script block. The default is to display everything. Now I can run a command like this:
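Using Get-FileData as a placeholder name for the function, the command might be:

```powershell
# the path and the filtering script block are the only inputs
Get-FileData -Path C:\Work -Filter {
    $_.Length -gt 1MB -and $_.LastWriteTime -gt "1/1/2014"
}
```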

You’ll notice that my function doesn’t export anything to a CSV file. Of course not. The function’s only purpose is to get files and display a subset of properties. Because the function writes to the pipeline I can do whatever I need. Perhaps today I need to export to a CSV file, but tomorrow I want to create a formatted table saved to a text file.
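Because the output is objects, either destination is a one-liner (again using Get-FileData as a placeholder name):

```powershell
# today: export to a CSV file
Get-FileData -Path C:\Work | Export-Csv -Path report.csv -NoTypeInformation

# tomorrow: a formatted table saved to a text file
Get-FileData -Path C:\Work | Format-Table -AutoSize | Out-File report.txt
```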

I hope PowerShell in general doesn’t frighten you. As the saying goes, we fear that which we don’t understand. But once you understand some basic PowerShell principles and concepts I think you’ll find it not quite as terrifying.

What scary PowerShell have you come across?

So you need to write a PowerShell script

So…you have decided to write a PowerShell script or have at least identified a need. What do you do first? If you say “Google or Bing”, I’d say you are wrong. In my opinion, when you are developing a PowerShell script, searching for an existing script is not the first step. Sure, you will likely find something, but….

There are many online sources of PowerShell scripts and code samples. However, from my experience the quality is all over the board. Sure, you might find a great example. But unless you have a great deal of PowerShell experience, how will you judge? Sadly, many script and code samples I see don’t follow community accepted best practices, don’t follow the PowerShell paradigm or are simply bad scripts. There is also absolutely no guarantee that the script or code sample you download will work correctly (and safely) in your environment.

Personally, I would recommend that you open your script editor and start laying out a series of comments about what it is that you need to accomplish. If you are using the ISE you might even use regions to outline your script. This task helps you organize your work, and when you are finished, the script is documented. All you have to do is write the code to fulfill the comments. Yes, that might be difficult and maybe even a little time consuming at first, but that’s the point. You will be learning much more than by simply copying and pasting something of dubious quality you found online. Even more importantly, you will be developing something that you know will work in your environment.
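The outline might start as nothing but comments, something like this (the tasks are purely illustrative):

```powershell
#region Get the data
#  e.g. query the servers or files you care about
#endregion

#region Process
#  e.g. filter, test connectivity, calculate values
#endregion

#region Output
#  write objects to the pipeline for export or formatting
#endregion
```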

Get stuck? Then sure, look online to find examples of how someone used a particular cmdlet or function. But try to find several examples and “average” them out. Perhaps even better would be to post in the PowerShell forums and ask for specific help on a sticky problem. You’ll likely get several responses. And knowing the quality of the average forum member, I’d feel very comfortable with their responses.

Over time, as you gain more PowerShell experience, you will be better able to assess the quality of online PowerShell scripts and samples. I still think you should develop on your own from scratch, but I also think you’ll find the process goes much faster.

Reflections on the PowerShell Scripting Games

During the most recent PowerShell Scripting Games, I was fortunate enough to be one of the judges. Now that the games have concluded I thought I’d share my reflections on the entries. Naturally these are merely my opinions but they are drawn from years of experience with PowerShell and almost 25 years as an IT Pro. All I can hope is that you’ll consider some of these things on your next PowerShell development project.

Let me also be clear, for those of you still new to PowerShell, that when using it interactively at a PowerShell prompt, anything goes if it helps you get the job done efficiently. But when you are creating a scripted PowerShell solution, something that will live on, then the guidelines are a little different.

It is true that the challenges for this past set of games were complex. Frankly, I’m not sure how I would have even started on a few of them. That said, we’re still talking about PowerShell scripts in the end. One thing that struck me about the entries I judged was how often I felt I was reading source code for a Visual Studio project and not a PowerShell script. On one hand that could be considered a good thing: PowerShell is flexible enough to create anything from basic scripts to the types of entries I reviewed. But then again, if you have the technical chops to come up with code like I read, you probably could create a “real” set of compiled tools. At some point in your development process you might ask yourself if you are creating a PowerShell script or have moved beyond that, and maybe a script isn’t the right solution. Yes, the game entries needed to be PowerShell scripts and modules, but what about what you’re working on?

Even so, this brings me to my next point: maintainability. The best way to test this is to hand your script files to someone else, ideally with a little less PowerShell knowledge than you, and see if they can understand what your scripts do and how. Many of the entries I read were sprawling, complicated masses of functions and scripts and .NET code. I was challenged in many cases to try and figure out what was going on. If *I* have to struggle to figure out how all the pieces of your script or module work, think about your co-worker who gets handed your solution to maintain or troubleshoot when you get promoted. Or it might be you coming back to it in 6 months.

There is no penalty for comments. I encourage beginners to write the comments first so that they can organize and plan their work. For a module, you can also write your own About help topics. In my opinion, information like this is just as important as the PowerShell commands you are using.

Lastly, I have the usual concerns about syntax and language choices. Yes, I know there are always exceptions. Don’t resort to using .NET classes when a cmdlet will do. Use standard verbs and meaningful nouns in your command names. But perhaps the biggest peeve for me is the continued use of the Return keyword.

Whenever I see the Return keyword, I feel the script author has not fully grasped the PowerShell paradigm. To my way of thinking we don’t return values; we write objects to the pipeline. The only time I use Return is when I intentionally want to bail out of a script or function and gracefully terminate everything. Usually when I see the use of Return I also see a lot of unnecessary helper functions, each intended to “return” something. This only adds to the (unnecessary) complexity of your work. Add in a lack of documentation and you have a mess of, let’s call it, sub-optimal PowerShell.
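The difference in miniature (the function name is illustrative):

```powershell
# "returning" a value: collect everything, then hand it back at the end
Function Get-BigFile {
    $results = Get-ChildItem -Path . | Where-Object { $_.Length -gt 1MB }
    return $results
}

# the PowerShell paradigm: write objects to the pipeline as they appear
Function Get-BigFile {
    Get-ChildItem -Path . | Where-Object { $_.Length -gt 1MB }
}
```

Both produce the same output, but the second version streams objects as they are found and reads the way PowerShell is meant to read.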

A well-crafted PowerShell script file can be a thing of beauty.

The pipeline opens,
objects blossom sending joy
formatted as bliss

What do you think? What have you seen in your company or in the wild that is good, bad or ugly?

PowerShell Deep Dives First Sales

Last year I had the pleasure of editing PowerShell Deep Dives, published by Manning. This book is a community project with chapters contributed from MVPs and leading members of the PowerShell community. You won’t find this content anywhere else.

Anyway, I have the first royalty report from Q3 2013. Looks like we sold a little under 1500 copies. The important thing, in case you missed the original news about this project, is that all proceeds are given to charity. For this book, Save the Children received a check for $3,338.30. That’s nice but I know we can do better.

So if you’ve put off getting a copy, what are you waiting for? If you have a copy, thank you. Now spread the word and tell your colleagues to buy a copy. You can get the title in print or ebook formats. You can also buy the book from Amazon. In fact, if you’ve read the book, a posted review would also help. The more reviews, the more attention the book can get, which should lead to more sales and continued charitable contributions.

Thank you again to all of the authors and editors on this project and to those of you who have a copy on your shelf. I hope you found a few things that made it worth your investment. For the rest of you, well, you know what you need to do.