Building Better PowerShell Code: Applying Proven Practices One Tip at a Time
Adam Bertram
Building Better PowerShell Code: Applying Proven Practices One Tip at a Time
Adam Bertram
Evansville, IN, USA
ISBN-13 (pbk): 978-1-4842-6387-7
ISBN-13 (electronic): 978-1-4842-6388-4
https://doi.org/10.1007/978-1-4842-6388-4
Copyright © 2020 by Adam Bertram

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

Trademarked names, logos, and images may appear in this book. Rather than use a trademark symbol with every occurrence of a trademarked name, logo, or image, we use the names, logos, and images only in an editorial fashion and to the benefit of the trademark owner, with no intention of infringement of the trademark. The use in this publication of trade names, trademarks, service marks, and similar terms, even if they are not identified as such, is not to be taken as an expression of opinion as to whether or not they are subject to proprietary rights.

While the advice and information in this book are believed to be true and accurate at the date of publication, neither the authors nor the editors nor the publisher can accept any legal responsibility for any errors or omissions that may be made. The publisher makes no warranty, express or implied, with respect to the material contained herein.

Managing Director, Apress Media LLC: Welmoed Spahr
Acquisitions Editor: Smriti Srivastava
Development Editor: Matthew Moodie
Coordinating Editor: Shrikant Vishwakarma
Cover designed by eStudioCalamar
Cover image designed by Pexels

Distributed to the book trade worldwide by Springer Science+Business Media LLC, 1 New York Plaza, Suite 4600, New York, NY 10004. Phone 1-800-SPRINGER, fax (201) 348-4505, e-mail [email protected], or visit www.springeronline.com. Apress Media, LLC is a California LLC and the sole member (owner) is Springer Science + Business Media Finance Inc (SSBM Finance Inc). SSBM Finance Inc is a Delaware corporation.

For information on translations, please e-mail [email protected]; for reprint, paperback, or audio rights, please e-mail [email protected].

Apress titles may be purchased in bulk for academic, corporate, or promotional use. eBook versions and licenses are also available for most titles. For more information, reference our Print and eBook Bulk Sales web page at http://www.apress.com/bulk-sales.

Any source code or other supplementary material referenced by the author in this book is available to readers on GitHub via the book’s product page, located at www.apress.com/978-1-4842-6387-7. For more detailed information, please visit http://www.apress.com/source-code.

Printed on acid-free paper
This book is dedicated to all of the tech professionals out there who have been intrigued by PowerShell and have taken the time to dig in, learn, and better themselves with knowledge.
Table of Contents

About the Author
About the Technical Reviewer
Acknowledgments
Introduction

Chapter 1: Do the Basics
    Plan Before You Code
    Don’t Reinvent the Wheel
    Build Functions As Building Blocks
    Build Reusable Tools
    Don’t Focus Purely on Performance
    Build Pester Tests
    Implement Error Handling
    Build Manageable Code
    Don’t Skimp on Security
    Log Script Activity
    Parameterize Everything
    Limit Script and Function Input
    Maintain Coding Standards
    Code in Context
    Return Informational Output
    Understand Your Code
    Use Version Control
    Write for Cross-Platform
    Write for the Next Person
    Use Visual Studio Code

Chapter 2: Don’t Reinvent the Wheel
    Use Community Modules
    Leverage Others’ Work

Chapter 3: Use Visual Studio Code
    Install the PowerShell Extension
    Integrate VS Code with Git

Chapter 4: Plan Before You Code
    Write Comments Before Coding
    Use Your Code As a Todo List

Chapter 5: Create Building Blocks with Functions
    Write Functions with One, Single Goal
    Build Functions with Pipeline Support
    Save Commonly Used, Interactive Functions to Your User Profile

Chapter 6: Parameterize Everything
    Don’t Hardcode. Always Use Parameters
    Use Parameter Sets When All Parameters Should Not Be Used at Once
    Use a PSCredential Object Rather Than a Separate Username and Password

Chapter 7: Log Script Activity
    Use a Logging Function
    Clean Up Verbose Messages

Chapter 8: Build with Manageability in Mind
    DRY: Don’t Repeat Yourself
    Don’t Store Configuration Items in Code
    Always Remove Dead Code

Chapter 9: Be Specific
    Explicitly Type All Parameters
    Always Use Parameter Validation When Possible
    Always Define a Function’s OutputType
    Write Specific Regular Expressions

Chapter 10: Write for the Next Person
    Give Your Variables Meaningful Names
    String Substitution
    Keep Aliases to the Console Only, Not in Scripts
    Put Functions in Alphabetical Order in a Module
    Explain Regular Expressions with Comments
    Write Comment-Based Help
    Weigh the Difference Between Performance and Readability

Chapter 11: Handle Errors Gracefully
    Force Hard-Terminating Errors
    Avoid Using $?

Chapter 12: Don’t Skimp on Security
    Sign Scripts
    Use Scriptblock Logging
    Never Store Sensitive Information in Clear Text in Code
    Don’t Use Invoke-Expression
    Use PowerShell Constrained Language Mode

Chapter 13: Stick to PowerShell
    Use Native PowerShell Where Possible
    Use Approved, Standard Function Names

Chapter 14: Build Tools
    Think Ahead and Build Abstraction “Layers”
    Wrap Command-Line Utilities in Functions
    Make Module Functions Return Common Object Types
    Ensure Module Functions Cover All the Verbs

Chapter 15: Return Standardized, Informational Output
    Use Progress Bars Wisely
    Leave the Format Cmdlets to the Console
    Use Write-Verbose
    Use Write-Information
    Ensure a Command Returns One Type of Object
    Only Return Necessary Information to the Pipeline

Chapter 16: Build Scripts for Speed
    Don’t Use Write-Host in Bulk
    Don’t Use the Pipeline
    Use the foreach Statement in PowerShell Core
    Use Parallel Processing
    Use the .NET StreamReader Class When Reading Large Text Files

Chapter 17: Use Version Control
    Create Repositories Based on a Purpose
    Commit Code Changes Based on Small Goals
    Create a Branch Based on a Feature
    Use a Distributed Version Control Service

Chapter 18: Build and Run Tests
    Learn the Pester Basics
    Leverage Infrastructure Tests
    Automate Pester Tests
    Use PSScriptAnalyzer

Chapter 19: Miscellaneous Tips
    Write for Cross-Platform
    Don’t Query the Win32_Product CIM Class
    Create a Shortcut to Run PowerShell As Administrator
    Store “Formattable” Strings for Later Use
    Use Out-GridView for GUI-Based Sorting and Filtering
    Don’t Make Automation Scripts Interactive

Chapter 20: Summary

Index
About the Author

Adam Bertram is a 22-year veteran of IT and an experienced online business professional. He’s an entrepreneur, Microsoft MVP, blogger at adamtheautomator.com, trainer, and writer for multiple technology companies. Catch up on Adam’s articles at adamtheautomator.com, connect on linkedin.com/in/AdamBertram, or follow him on twitter.com/adbertram.
About the Technical Reviewer

Vikas Sukhija has over 16 years of IT infrastructure experience. He is certified in, and has worked on, various Microsoft and related technologies. He has been awarded the Microsoft Most Valuable Professional title five times (thrice in Cloud and Datacenter Management (PowerShell) and twice in the Office 365 category). With his experience in messaging and collaboration technologies, he has assisted clients in migrating from one messaging platform to another. He has utilized PowerShell to automate various monotonous tasks as well as to create self-service solutions for users. He has been recognized many times by clients for automations that resulted in direct or indirect cost avoidance. He plays key roles with various large clients in the implementation and adoption of Office 365. He is the owner and author of the TechWizard.cloud and SysCloudPro.com blog sites, as well as the facebook.com/TechWizard.cloud Facebook page.
Acknowledgments

This book, along with all of my other career projects, could not have been possible without my wife, Miranda. She’s the rock of our household; she has allowed me to pursue projects regardless of how crazy they have been and has supported me for nearly 20 years now.

I also want to acknowledge all of those who have reached out and let me know how much my work means to them. It may mean a lot to you, but trust me, it means more to me to hear stories of how I’ve helped throughout your career.
Introduction

This book was created out of necessity. There are many books out there on how to learn PowerShell. You’ll also find thousands of articles and blog posts on PowerShell best practices. But until now, there hasn’t been a single collection that brings PowerShell learning and best practices together.

This book is broken down into chapters, each with multiple “tips” inside. Each chapter is a bucket for the kinds of tips you can expect to read about. Each tip is a best practice: a short, actionable step you can take today to help improve your PowerShell scripts. Tips do not go into major detail; there are other resources out there for that. The tips in this book are not meant to be exhaustive how-tos but rather to act as a checklist of actions to take. With each tip, you will typically find an example to solidify your understanding.

All tips within this book should be treated as universal across all PowerShell versions and platforms, from Windows PowerShell 5.1 and later, including all PowerShell Core versions. If you see an example using code, assume that it will work in your PowerShell version of choice. All examples were written to be as generic as possible.

All tips in this book were written by me, but many were contributed by the PowerShell community. If a tip did come from the community, the community member is referenced.
Who Is This Book For?

This book is for anyone wanting to learn how to write better PowerShell code. The book’s examples are primarily targeted at the IT professional, although anyone writing PowerShell for any purpose can get a lot from this book.
This book is not meant to be “training,” per se. It’s not specifically targeted at any one level of PowerShell expertise. You will find tips in this book ranging from the basic level all the way up to the advanced level. It’s up to you to skip the tips that don’t apply to you and soak up the ones that do. Read over this book periodically throughout your career. Each tip applies to specific contexts, use cases, and expertise levels; once you find yourself at a given level, you’ll be able to understand and get more out of those tips.
Book Resources

You will find all code referenced in this book in the GitHub repository called PowerShellTipsToWriteBy.
CHAPTER 1
Do the Basics

When it comes to code, there are a lot of opinions out there about “best practices.” What one developer thinks is a must, another will refute. But these disagreements typically happen around specific, nuanced situations like tabs vs. spaces and whether a curly brace should go on a new line. There are larger categories of tips and best practices that are universal. Everyone can agree on broad tips like “write tests,” “make code reusable,” and “don’t store passwords in clear text.”

In this chapter, we’re going to hit those broad strokes. We’re going to cover the basic truths that almost everyone can agree on. In the later chapters, we’ll dive deeper into each of these areas to provide more specific tips the community and I have come up with. Without further ado, let’s get to the tips!
Plan Before You Code

Don’t automatically jump into coding. Instead, take a little bit of time to do a “back of the napkin” plan on what your code will do. Scaffold out code in comments, briefly outlining what the code will do in those spots. Write pseudocode. The practice of writing pseudocode will take your brain through the mental steps of what you need to do.
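For example, a “back of the napkin” plan might be nothing more than comments scaffolding out the work to be done. The task and names below are hypothetical; the point is the structure, not the specifics:

```powershell
# Goal: archive IIS logs older than 30 days on each web server
# 1. Read the list of web servers from a text file
# 2. For each server:
#    a. Find log files older than 30 days
#    b. Compress them to the archive share
#    c. Remove the originals
# 3. Email a summary of what was archived

# Now fill in each numbered step with real code, one at a time.
```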
Further Learning

• How to Write a Pseudocode?
© Adam Bertram 2020 A. Bertram, Building Better PowerShell Code, https://doi.org/10.1007/978-1-4842-6388-4_1
Don’t Reinvent the Wheel

Leverage the hard work of others. Don’t completely write a new solution if one already exists. Issue pull requests to existing open source projects if an existing PowerShell module doesn’t quite fit your needs; don’t fork it and build your own. Look to the PowerShell Gallery first before embarking on a new script. Someone may have already solved that problem.
Further Learning

• Get Started with the PowerShell Gallery
Build Functions As Building Blocks

As you begin to build more complex PowerShell code, begin to think in functions, not lines of code. As you write code, consider whether what you’re writing could stand on its own. Consider what the default commands in PowerShell do already: Get-Content reads a text file. Test-Connection checks the status of a network connection. Copy-Item copies a file. If a script does more than one “thing,” consider breaking it up into one or more functions. If you begin to collect a library of functions, create a module.
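As a sketch, assuming a hypothetical configuration file and server name, each function below does one “thing” and can then be composed just like the built-in cmdlets:

```powershell
function Get-AppConfig {
    param([Parameter(Mandatory)][string]$Path)
    # One job: read and parse the configuration file
    Get-Content -Path $Path -Raw | ConvertFrom-Json
}

function Test-AppServerConnection {
    param([Parameter(Mandatory)][string]$ComputerName)
    # One job: check reachability, return $true/$false
    Test-Connection -ComputerName $ComputerName -Quiet -Count 1
}

# Compose the building blocks in a script
$config = Get-AppConfig -Path 'C:\App\config.json'
if (Test-AppServerConnection -ComputerName $config.Server) {
    Write-Output "Server $($config.Server) is reachable."
}
```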
Further Learning

• Building Advanced PowerShell Functions and Modules
Build Reusable Tools

Similar to the “Build Functions As Building Blocks” tip, build PowerShell tools, not code. Focus on building reusable scripts, functions, or modules that you can reuse consistently. You should strive to build a library of tools that can be called from other code. Instead of rewriting code over and over again, you should naturally begin to write code that calls tools. Eventually, you’ll find the majority of your code will be making calls to your tools rather than re-creating the wheel.
Further Learning

• Learn PowerShell Toolmaking in a Month of Lunches
Don’t Focus Purely on Performance

Jeffrey Snover, the father of PowerShell, once said that (paraphrasing) PowerShell was meant for humans, not computers. It was built not for blazing performance but to be human readable. It was built to be approachable for non-developers: the IT admins who need to automate tasks but can’t, and would rather not, develop software applications. If you’re trying to eke out every last bit of speed from a PowerShell script for no reason other than to satisfy your own OCD tendencies, you’re doing it wrong.
Further Learning

• Writing PowerShell Code for Performance
Build Pester Tests

If you build scripts that make their way into production, always include Pester tests with them. Pester tests:

• Ensure that your code (at all angles) “works”
• Allow you to make changes and confirm new bugs weren’t introduced
• Help you trust your code instead of being afraid you’re going to break it by introducing changes

Build Pester unit tests to test your code. Build Pester integration/acceptance/infrastructure tests to confirm the code you wrote changed what you expected.
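A minimal Pester test might look like this (Pester v5 syntax; Get-Greeting is a hypothetical function under test):

```powershell
Describe 'Get-Greeting' {
    It 'includes the name in the greeting' {
        Get-Greeting -Name 'Adam' | Should -BeLike '*Adam*'
    }

    It 'throws when the name is empty' {
        { Get-Greeting -Name '' } | Should -Throw
    }
}
```

Run the tests with Invoke-Pester against the file containing them.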
Further Learning

• The Pester Book
Implement Error Handling

Never let your code throw an exception you didn’t account for. Use $ErrorActionPreference = 'Stop' religiously to ensure all errors are hard-terminating errors. Wrap all code in try/catch blocks and handle thrown exceptions. Don’t leave any possibility for your code to exit without you already expecting it.
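A sketch of the pattern, assuming a hypothetical settings file path:

```powershell
$ErrorActionPreference = 'Stop'  # promote all errors to hard-terminating errors

try {
    $settings = Get-Content -Path 'C:\App\settings.json' -Raw | ConvertFrom-Json
} catch [System.Management.Automation.ItemNotFoundException] {
    # Handle the specific exception you anticipated (missing file)
    Write-Warning 'Settings file not found; falling back to defaults.'
    $settings = [pscustomobject]@{ LogLevel = 'Info' }
} catch {
    # Anything unanticipated still surfaces deliberately rather than silently
    throw $_
}
```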
Further Learning

• The Big Book of PowerShell Error Handling
Build Manageable Code

Build code for the future. Build PowerShell code that won’t leave you wondering WTF this thing does a year from now. Ensure your code can be managed in the long term. Practice the DRY (don’t repeat yourself) principle: write code once and refer to it rather than copying and pasting. The fewer places code duplication exists, the simpler the code and the easier that code is to manage.
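For instance, a notification snippet copied into several scripts can become one function they all call. The addresses and server below are placeholders:

```powershell
function Send-OpsNotification {
    param([Parameter(Mandatory)][string]$Subject)
    $mailParams = @{
        To         = '[email protected]'
        From       = '[email protected]'
        Subject    = $Subject
        SmtpServer = 'smtp01.company.local'
    }
    Send-MailMessage @mailParams
}

# Every script now refers to the one implementation
Send-OpsNotification -Subject 'Nightly backup completed'
```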
Further Learning

• Clean Code: A Handbook of Agile Software Craftsmanship
Don’t Skimp on Security

Many PowerShell developers disregard security implications. It’s not because they don’t care (well, some don’t); it’s because they don’t know any better. Infosec and IT have always been separate wheelhouses. You may not need to know the ins and outs of vulnerability assessments, rootkits, ransomware, encryption, or Trojans. But you do need to practice common security sense. Don’t put plaintext passwords in your scripts. Sign your scripts. Don’t set your execution policy to unrestricted. We’ll cover these and more in the security chapter.
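As a quick illustration of the plaintext password point, compare the following. The credential file path is hypothetical:

```powershell
# Never this:
$password = 'P@ssw0rd!'

# Instead, prompt for a credential at runtime...
$credential = Get-Credential -Message 'Service account credentials'

# ...or import a SecureString exported earlier with ConvertFrom-SecureString
# (DPAPI-protected on Windows: readable only by the same user on the same machine)
$secure = Get-Content -Path 'C:\Secure\svc.cred' | ConvertTo-SecureString
```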
Further Learning

• PowerShell Security at Enterprise Customers
Log Script Activity

Record script activity across your organization. PowerShell is a powerful scripting language. PowerShell can automate tasks in no time, but it doesn’t discriminate between test and production. It can also be run by nefarious individuals. Log script activity. Log it to a text file, a database, or some other data source; just record and audit the activity somehow. Logging is like backups: you might not need them now, but when you do, you’ll be glad you did.
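A bare-bones logging function is often enough to start with. The log path here is just an assumption; point it wherever makes sense in your environment:

```powershell
function Write-Log {
    param(
        [Parameter(Mandatory)][string]$Message,
        [string]$Path = 'C:\Logs\ScriptActivity.log'
    )
    # Timestamp + user + message, appended to a plain text file
    $line = '{0} [{1}] {2}' -f (Get-Date -Format 'yyyy-MM-dd HH:mm:ss'), $env:USERNAME, $Message
    Add-Content -Path $Path -Value $line
}

Write-Log -Message 'Beginning service restart on WEB01'
```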
Further Learning

• Greater Visibility Through PowerShell Logging
Parameterize Everything

This tip is related to the functions and tools tips. When building scripts and functions, add as many parameters as necessary. Always add a parameter for any value that might one day be different. Never add “static” values to your scripts. If you need to define a value in code, create a parameter and assign it a default value; don’t assign that value in the code. Creating parameters allows you to pass different values to your functions or scripts at runtime instead of changing the code.
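A sketch of the idea; the function, server, and app pool names are hypothetical, and the remote server is assumed to have the IIS cmdlets available:

```powershell
function Restart-WebAppPoolOnServer {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory)]
        [string]$ComputerName,

        # A default value on a parameter instead of a value buried in the body
        [string]$AppPoolName = 'DefaultAppPool'
    )
    Invoke-Command -ComputerName $ComputerName -ScriptBlock {
        Restart-WebAppPool -Name $using:AppPoolName
    }
}

# Override the default at runtime without touching the code
Restart-WebAppPoolOnServer -ComputerName 'WEB01' -AppPoolName 'IntranetAppPool'
```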
Further Learning

• The PowerShell Parameter Demystified
Limit Script and Function Input

When you open up your scripts to input, limit what can be input as tightly as possible. Be sure to account for as many different ways as possible that values may be passed into your code at runtime. Validate as much input as possible with parameter validation attributes, conditional checks, and so on. Try to never allow a value or type of input into your code you didn’t expect.
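Parameter validation attributes reject bad input before any of your code runs. For example (the server naming pattern is just an assumption about one environment’s convention):

```powershell
function Set-ServerState {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory)]
        [ValidatePattern('^[A-Z]+\d{2}$')]   # e.g., WEB01, SQL03
        [string]$ComputerName,

        [Parameter(Mandatory)]
        [ValidateSet('Started', 'Stopped', 'Maintenance')]
        [string]$State
    )
    # By the time execution reaches here, both values are known-good.
}
```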
Further Learning

• The PowerShell Parameter Demystified
Maintain Coding Standards

Come up with a standard and stick to it for everything. Don’t name a variable $Var1 in one script and $var1 (note the case) in another. Don’t put a bracket on the same line in one script and on a new line in the next. Maintain a consistent coding methodology for everything. If your code is consistent, others (and you) will be able to understand it much better.
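PSScriptAnalyzer can enforce much of this automatically:

```powershell
# One-time install from the PowerShell Gallery
Install-Module -Name PSScriptAnalyzer -Scope CurrentUser

# Flag rule violations and inconsistencies in a script
Invoke-ScriptAnalyzer -Path .\MyScript.ps1
```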
Further Learning

• How to Use the PowerShell Script Analyzer to Clean Up Your Code
Code in Context

Write a solution specific to the context in which it will run. Don’t write a script that expects 10 parameters and give it to your help desk staff; write a GUI instead. Don’t quickly bang out a script without much thought if it’s going to be used in production. Worry about performance if the action your script is taking is time sensitive. How you code always depends on the context in which it will run. Don’t assume the code you write and test on your workstation is going to run fine in another environment. Code in the context that the script will run in.
Further Learning

• Clean Code: A Handbook of Agile Software Craftsmanship
Return Informational Output

Even though you can return anything you want to the output stream, do it wisely. Define what’s verbose, informational, and error output, and only show that output in the appropriate streams. Don’t show unnecessary object properties. Instead, use PowerShell formatting rules to hide properties from being seen by default.
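One way to see the streams in action (Get-Thing is a hypothetical function):

```powershell
function Get-Thing {
    [CmdletBinding()]
    param()

    Write-Verbose 'Connecting to the service...'   # verbose stream; only shown with -Verbose
    # Only the object below goes to the output stream
    [pscustomobject]@{
        Name   = 'Thing1'
        Status = 'OK'
    }
}

Get-Thing            # returns just the object
Get-Thing -Verbose   # also shows the verbose message
```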
Further Learning

• PowerShell Format Output View
Understand Your Code

If you don’t know what a line of code does, remove it or write a Pester test. Understand what each command or function you call is capable of at all times. Don’t simply run a command under a single situation and expect it to run the same way every time. Test as many scenarios as possible to understand what your code is capable of under various circumstances.
Further Learning

• 10 Steps to Plan Better so You Can Write Less Code
Use Version Control

File names like myscript.ps1.bak and myscript.ps1.bak2 shouldn’t exist. Instead, use tools like Git and GitHub to put your scripts under version control. Version control allows you to audit and roll back changes to your code if necessary. Version control becomes even more important in a team environment. If you’re serious at all about PowerShell scripting, you must use version control.

Tip Source: https://twitter.com/Dus10
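Getting a script under version control takes only a few commands from a PowerShell prompt (the script name and remote URL below are placeholders):

```powershell
git init
git add .\Get-ServerReport.ps1
git commit -m 'Add initial server report script'

# Optionally push to a hosted remote such as GitHub
git remote add origin https://github.com/yourname/powershell-scripts.git
git push -u origin main
```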
Further Learning

• Git Basics for IT Pros: Using Git with Your PowerShell Scripts
Write for Cross-Platform

PowerShell isn’t just on Windows anymore. PowerShell Core is cross-platform, and so should be your scripts. If you can foresee a time when your scripts will need to run on other operating systems, account and test for that now. If you’re sharing scripts via the PowerShell Gallery or some other community repository, cross-platform support is especially important. Don’t let others find out the hard way that your script only runs on Windows.
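PowerShell Core exposes automatic variables you can branch on. Note that $IsWindows doesn’t exist in Windows PowerShell 5.1 (it evaluates to $null there), and 5.1 itself only runs on Windows:

```powershell
if ($IsWindows -or $null -eq $IsWindows) {
    # Windows PowerShell 5.1 or PowerShell Core on Windows
    $tempDir = $env:TEMP
} elseif ($IsLinux -or $IsMacOS) {
    $tempDir = '/tmp'
}

# Join-Path builds the path with the correct separator on every platform
$logPath = Join-Path -Path $tempDir -ChildPath 'output.log'
```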
Further Learning

• Tips for Writing Cross-Platform PowerShell Code
Write for the Next Person

Be sure other people understand your code. Write your code (and comments) in a clear, concise manner. A layperson should be able to look at your code and understand what it’s doing. Don’t get fancy just because you can. Don’t use aliases. Instead, write understandable code, include detailed help content, and comment code heavily. Ensure the next person can easily digest your code. You never know: that next person might be you!
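Compare these two equivalent pipelines; the first is fine typed at the console, but the second belongs in a script:

```powershell
# Terse, alias-heavy: fast to type, slow to read
gci | ? { $_.Length -gt 1mb } | % { $_.FullName }

# The same logic written for the next person
Get-ChildItem |
    Where-Object { $_.Length -gt 1MB } |
    ForEach-Object { $_.FullName }
```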
Further Learning

• Getting Fancy with Code Just Makes You Look Stupid
Use Visual Studio Code

You can’t get away with a plain text editor anymore. Your code is too important to sloppily throw together and hope that it runs in a terminal. Using an integrated development environment (IDE) is a must if you want your code to be taken seriously. Use Visual Studio Code with the PowerShell extension. You’ll get all the benefits of a full ISE, linting functionality, autocompletion, and more.
Further Learning

• PowerShell Tip: Use a Code Editor
CHAPTER 2
Don’t Reinvent the Wheel

You are surrounded by a community of coding professionals that has, for a very long time, given a lot of its work away for free. By searching back through all of the community work that has been written, you can find a treasure trove of quality code that has been used by thousands of people.

Although it gives us warm fuzzies to think that our code is so unique and so special that no one has ever thought to write what we’ve written, a simple search will show how many people in the community have already accomplished (albeit with varying degrees of success) the same thing you’re trying to accomplish. I don’t say this to put you down or shame you for not having ideas that are good enough; all I’m trying to do is get you to reuse the code that was written before you and to leverage others’ work. This will save you time, brainpower, and all of those creative juices that you need to write your best code possible.
Use Community Modules

The PowerShell Gallery is a great place to find community modules that are available to the public. The PowerShell Gallery is a repository of packages where you can discover and download modules, read public documentation, and even update the modules from the PowerShell
terminal! Once you’re comfortable, you can even contribute back to the community by uploading your own modules and scripts for public consumption. The next time you’re looking for some PowerShell code to perform a task, do a search on the PowerShell Gallery first. A module may already be out there that takes care of the task you’re after. In your PowerShell console, use the Find-Module command and specify a name with a wildcard. For example, perhaps you need to find a module that automates something with the popular Trello service. In your PowerShell console, run Find-Module -Name *Trello*. You’ll see a few different modules available. If you see one you’d like to try out, pipe it to Install-Module. This will immediately install it and allow you to begin exploring its commands. Find-Module -Name PowerTrello | Install-Module
Further Learning

• PowerShell Gallery: Getting Started
Leverage Others' Work

Code reuse is becoming just as much of an art form as it is a skill. Thankfully, as more time goes on, this is becoming more the norm than the exception. By practicing and developing your code reuse skills, you'll be able to integrate and operate more code than you would otherwise. In turn, this will help you create code with a higher degree of complexity or solve more high-level problems than you'd be able to if you were trying to code everything yourself.
One of the best ways to leverage others' work is by searching GitHub. Go to the GitHub Advanced Search page and provide some criteria around the work you're doing. Since you'll be working with PowerShell, be sure to limit the search to the PS1 or PSM1 file extensions to find only PowerShell scripts and modules.
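As an illustration, a search using GitHub's extension qualifier might look like the following (the command name being searched for is just an example):

```
Invoke-RestMethod extension:ps1
```

This restricts results to files ending in .ps1 that contain the search term.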
Further Learning

• 10 Tips on Writing Reusable Code
CHAPTER 3
Use Visual Studio Code

After all, code is just text. You could get by opening up a Windows Notepad window and just start typing away. At some point though, you're going to go nuts trying to find code buried in multiple files, wondering where you defined a function, and struggling to rename that variable in 100 different places. You need an IDE.

IDEs have been around for decades as a staple for every software developer out there. But IDEs have always been thought of as a developer-only tool. Those of us writing PowerShell don't typically think of ourselves as "developers," but that's not true at all. If you write code, you are a developer, and PowerShell is code. Since you're a developer, you need a tool that helps you manage all of your code and assists you to not only write code but also run tests, debug, and run code in your environment. An IDE allows you to perform not only the coding itself but all of the tasks around coding, so that you never have to leave a single tool.

Back when PowerShell was just a part of Windows, many of us chose to use the PowerShell Integrated Scripting Environment (ISE) tool. The ISE came with Windows and was a great tool at the time for writing and managing PowerShell scripts. It still is a good tool for beginner developers, but you will soon see its limitations once you require more from your IDE.
Nowadays, we have Visual Studio (VS) Code. Microsoft has publicly confirmed VS Code is the editor of choice for PowerShell developers. VS Code is routinely updated and always getting new features. The PowerShell extension for VS Code is also under active development, meaning bugs are being fixed and features added regularly. There are many reasons to use Visual Studio Code over the PowerShell ISE, which include:
• The PowerShell ISE is purposefully built for PowerShell and PowerShell only. Although that's not necessarily a bad thing, nowadays it's rare that someone works only in PowerShell. Especially on larger projects, there is always JSON, YAML, or maybe a C# application involved. You need a tool that supports all languages, which VS Code does.

• VS Code is cross-platform. If you write PowerShell code but prefer to be on a Mac or Linux, you can write code on your platform of choice and then execute it in the target environment. You aren't tied down to development only in Windows.

• VS Code has a huge extension marketplace. VS Code is just a foundation for extensions. You can find an extension to help you do just about anything, from managing your Azure subscription to creating AWS CloudFormation templates to helping you write better PowerShell code with linting.

• VS Code has extensive code-styling abilities. Using VS Code, you can customize whether you'd rather use tabs or spaces, whether you want an opening curly brace on the same line or a new line, code colors, and a whole lot more. VS Code has a much richer ability to be customized to your liking than the ISE does.
Overall, the PowerShell ISE will do just fine for beginners writing a little PowerShell here and there, but even if you’re a beginner, you might as well start on a tool that will be with you from beginner to expert.
Further Learning

• Getting Started with Visual Studio Code
Install the PowerShell Extension

If you're going to write PowerShell code in VS Code, you must install and use the PowerShell extension. VS Code has a huge selection of available extensions to improve upon its functionality. For PowerShell, the PowerShell extension from Microsoft is required. It provides improvements such as syntax highlighting, time-saving snippets, quick navigation of PowerShell scripts and modules, linting, and more. The first task after installing VS Code should be installing the PowerShell extension.
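If you have VS Code's code command-line tool on your PATH, you can also install the extension from a terminal. The extension ID below is the identifier the Microsoft-published PowerShell extension uses in the Marketplace:

```shell
# Install the PowerShell extension from the VS Code Marketplace
code --install-extension ms-vscode.powershell

# List installed extensions to confirm it's there
code --list-extensions
```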
Further Learning

• PowerShell Editing with Visual Studio Code
Integrate VS Code with Git

VS Code has native Git integration. You should be storing all of your code in some kind of version/source control anyway, and if you're using VS Code, you should take advantage of this integration. Using VS Code, you can make commits, manage branches, push and pull from remote Git repos, and more. If you're scared of the Git command line, no worries: VS Code has GUI options for nearly every Git command you can think of.

Begin storing code in individual Git repositories (repos) and opening up Git repos as folders in VS Code. It will get you in the habit of storing scripts in version control by default. Since Git is so tightly integrated with VS Code, you'll soon become accustomed to using version control as a daily habit.
Further Learning

• Version Control in Visual Studio Code
CHAPTER 4
Plan Before You Code

As a coder and avid scripter, it's hard not to just get down to coding right away. Planning and documentation are boring. You just want to get to the fun stuff! Although a laissez-faire attitude may work for simple scripts, you'll soon find yourself in a world of hurt if no planning is done on larger projects. Putting together a plan, whether it be a simple back-of-the-napkin sketch or an entire project outline, is critical for ensuring success on larger projects. If you don't plan ahead, you'll find yourself putting fixes in code to cover up previous problems, introducing unnecessary performance degradation, and ending up with a plate of spaghetti code. Poor planning will force you to accrue technical debt and will always result in a management nightmare.
Write Comments Before Coding

Large software projects require extensive planning before diving into code. Why shouldn't your important PowerShell scripts get the same treatment, albeit at a much smaller scale? Although it may be more fun to start coding away immediately, refrain. Ask yourself what the end goal of the script is and document it with comments. Think through what the script will do before writing a single line of code. Break down each task in your head and document it with simple comments in the script.
Regions are a great commenting feature to use when planning. Regions allow you to easily collapse and expand parts of a script, letting you pay attention to the task at hand. Don't worry about using a particular comment structure. Use whatever structure is most comfortable for you to guide you through the coding process when the time comes. The following is an example of how to create a "code scaffolding" with comments:

## Find the file here
## Set the file's ACL to read/write for all users
## Read the file and parse the computer name from it
## Attempt to connect to the computer and if successful, find the service

Alternatively, you could start off with regions that you can then add code into. Regions give you little "buckets" to put code into, preventing you from getting overwhelmed with all of the coding you have to do.

#region Find the file here
#endregion

#region Set the file's ACL to read/write for all users
#endregion

#region Read the file and parse the computer name from it
#endregion

#region Attempt to connect to the computer and if successful, find the service
#endregion
Tip Source: https://twitter.com/duckgoop
Further Learning

• Organizing Your Code Using Regions
Use Your Code As a Todo List

We've all had those times when you're in the middle of a script and get interrupted somehow. You may be in just the right frame of mind and on the cusp of solving a problem that's been plaguing you for days. Or you might be building a script as fast as possible but don't want to forget to come back to a certain area. Either way, using your code as a todo list will help.

Perhaps you know the code will break under certain circumstances, see a way to improve performance, or you're working on a team and need to assign a junior developer a piece of code. In any of those cases, try adding ## TODO to the code at that particular line. The comment doesn't have to specifically be ## TODO. The point here is to create a "comment flag" that points to areas that need to be addressed at some point. At some point in the future, you can then perform a search on the codebase for that flag and find all of the instances to address.

Let's say you're under a tight deadline and you've got to get something done quickly. You decide to take some shortcuts.

## Run some program
& C:\Program.exe

## Find the log file it generates and send it back to me
Get-ChildItem -Path 'C:\Program Files\Program\Logs' -Filter '*.log' | Get-Content

This little script performs its intended purpose now, but you realize that you can improve a lot here. When you have time to come back to this, you perform a search for ## TODO across your entire project and come across this:

## TODO: This needs to run with Start-Process. More control that way
## Run some program
& C:\Program.exe

## TODO: Sometimes this program creates a .txt file as the log. Need to account for this edge case.
## Find the log file it generates and send it back to me
Get-ChildItem -Path 'C:\Program Files\Program\Logs' -Filter '*.log' | Get-Content

Tip Source: https://twitter.com/guyrleech
Further Learning

• 5 Ways Using TODO Comments Will Make You a Better Programmer
CHAPTER 5
Create Building Blocks with Functions

Once you create a few PowerShell scripts, you're bound to start feeling like you're re-creating the wheel. You will inevitably begin seeing patterns in the solutions you build with PowerShell. One of the most important concepts when building great PowerShell code is treating code like building blocks. Don't build unique solutions for all problems. Instead, build blocks in the form of functions and modules and then reuse those blocks. Over time, you'll find that you're saving tons of time by not rewriting the same code you wrote months ago, and you only have one piece of code to maintain rather than ten copies.
Write Functions with One, Single Goal

Functions are supposed to be small, bite-sized code snippets that perform a single action. Build functions not to boil the ocean but to increase the temperature one degree at a time. Functions should serve one primary purpose and should be easily describable with a quick glance at the code. If you find it hard to describe a function without saying the word "and," it probably needs to be split into multiple functions.
Make functions small and easily callable from other functions. Functions are your building blocks. Build solutions with blocks, not by pouring a solid building of concrete all at once.

For example, perhaps you're writing a script that places a file on multiple remote servers. Perhaps it's some configuration file meant for another application. You want to copy this file to a variable number of servers in the same location. Since you need to perform the same task multiple times (on multiple servers), this would be a great case for a function with a single goal: getting a file reliably onto a server.

When you build a function with a single goal, managing many functions becomes a lot easier over time. In this example, check out the function in the following code snippet called Copy-ConfigurationFile. This function does one "thing": it copies a file to a server. However, notice the ServerName parameter is using [string[]] as the type. This allows you to pass in multiple server names at once. Then, using the foreach loop, you can copy that configuration file to many servers with one function call. Overall though, the function still has one purpose and one purpose only.

function Copy-ConfigurationFile {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory)]
        [string]$FilePath,

        [Parameter(Mandatory)]
        [string[]]$ServerName
    )

    foreach ($server in $ServerName) {
        if (Test-Connection -ComputerName $server -Quiet -Count 1) {
            Copy-Item -Path $FilePath -Destination "\\$server\c$"
        } else {
            Write-Warning "The server [$server] is offline. Unable to copy file."
        }
    }
}

Tip Source: https://twitter.com/pgroene
Further Learning

• PowerShell Functions Introduction
Build Functions with Pipeline Support

PowerShell wouldn't be PowerShell without the pipeline. Be a good PowerShell citizen and include pipeline support in your functions. Passing command output to other commands via the pipeline is an intuitive way to run functions. It's also easier for newcomers to understand. When you build functions with pipeline support, you can begin "linking" your functions to other PowerShell commands. You can run functions simply by piping output from one command to another.

For example, let's say you're building a tool to help you back up and restore a set of important files that some third-party line-of-business application uses. When this software is installed on a user's computer, it creates a directory called C:\Files. This folder contains crucial files that must get backed up on a regular basis. You decide to create a module with a couple of functions called Start-AppBackup to kick off the backup, Get-AppBackup to pull the set of files from a backup server somewhere, and Restore-AppBackup to put the files back on a machine. Your little module looks like this:

function Start-AppBackup {
    param(
        [Parameter()]
        [string]$ComputerName,

        [Parameter()]
        [string]$BackupLocation = '\\MYBACKUPSERVER\Backups'
    )

    Copy-Item -Path "\\$ComputerName\c$\Program Files\SomeApp\Database\*" -Destination "$BackupLocation\$ComputerName"
}

function Restore-AppBackup {
    param(
        [Parameter(Mandatory)]
        [string]$ComputerName,

        [Parameter()]
        [string]$BackupLocation = '\\MYBACKUPSERVER\Backups'
    )

    Copy-Item -Path "$BackupLocation\$ComputerName\*" -Destination "\\$ComputerName\c$\Program Files\SomeApp\Database"
}

Now, let's say you're using this tool as is, without pipeline support. You would run each function above like this:

Start-AppBackup -ComputerName FOO
Restore-AppBackup -ComputerName FOO
This appears to be fine but does require two lines. Add in pipeline support and you may get:

function Start-AppBackup {
    param(
        [Parameter()]
        [string]$ComputerName,

        [Parameter()]
        [string]$BackupLocation = '\\MYBACKUPSERVER\Backups'
    )

    Copy-Item -Path "\\$ComputerName\c$\Program Files\SomeApp\Database\*" -Destination "$BackupLocation\$ComputerName"
}

function Get-AppBackup {
    param(
        [Parameter(Mandatory)]
        [string]$ComputerName,

        [Parameter()]
        [string]$BackupLocation = '\\MYBACKUPSERVER\Backups'
    )

    [pscustomobject]@{
        'ComputerName' = $ComputerName
        'Backup'       = (Get-ChildItem -Path "$BackupLocation\$ComputerName")
    }
}

function Restore-AppBackup {
    param(
        [Parameter(Mandatory,ValueFromPipelineByPropertyName)]
        [string]$ComputerName,

        [Parameter()]
        [string]$BackupLocation = '\\MYBACKUPSERVER\Backups'
    )

    Copy-Item -Path "$BackupLocation\$ComputerName\*" -Destination "\\$ComputerName\c$\Program Files\SomeApp\Database"
}

You can then easily find existing backups and restore them using the pipeline on a single line. This method is also a little easier to understand.

Get-AppBackup -ComputerName FOO | Restore-AppBackup
Further Learning

• The PowerShell Parameter Demystified
Save Commonly Used, Interactive Functions to Your User Profile

If you're working at the PowerShell console interactively, it's always handy to have a function created and available as a shortcut to performing a certain task. Think about all of the common tasks you perform while working in the PowerShell console. To ensure these "console" functions are available to you across all sessions, add them to your PowerShell profile. This will allow you to always have them available.
For example, maybe you have a handy function that changes what your PowerShell prompt looks like. The default is PS [working directory]> but you have your own prompt function that makes it look like PS>. You're tired of seeing the working directory. Your function looks like this:

function prompt { 'PS>' }

When you close out of the current PowerShell console though, the default prompt is back. You need to add it to your profile. To do that, first figure out where PowerShell stores your user profile.

$Profile

Once you find the path to your profile script, create or edit that file and then place your function inside. Restart your console and you'll see the prompt gets changed every time. PowerShell reads this profile every time it starts up, allowing you to keep persistent settings.

Tip Source: https://www.reddit.com/user/TheDinosaurSmuggler/
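If the profile file doesn't exist yet, you can create it and append the function to it from the console. This is a sketch using the built-in $PROFILE variable, which PowerShell populates with the current user's profile path:

```powershell
# Create the profile script if it doesn't exist yet
if (-not (Test-Path -Path $PROFILE)) {
    New-Item -Path $PROFILE -ItemType File -Force
}

# Append the custom prompt function so it loads in every new session
Add-Content -Path $PROFILE -Value "function prompt { 'PS>' }"
```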
Further Learning

• Understanding the Six PowerShell Profiles
CHAPTER 6
Parameterize Everything

One of the key differences between a simple script and a PowerShell tool is parameters. Parameters allow developers to write scripts that are reusable. Parameters don't force developers to edit their scripts or functions every time they need to run them. They allow users to modify how the script or function works without modifying the code. Parameters are an integral component of building a reusable PowerShell tool that turns ad hoc scripts into building blocks. In this chapter, you'll learn many different tips on how to properly use parameters in your daily life.
Don't Hardcode. Always Use Parameters

You should make it your mission to reuse as many scripts and functions as possible. There's no need to re-create the wheel. One of the easiest ways to do that is to define parameters for everything. Before you finish up a script, think about how it could be reused for other similar purposes. Which components may need to be changed next time? Is it a script to run against a remote computer? Make the computer name a parameter. How about referencing a file that could be anywhere? Create a FilePath parameter.
If, most of the time, the value is the same, set a default parameter value and override it as necessary. Don't hardcode values that may change in scripts or functions.

Let's say that you have a need for a script that restarts some services on one particular server, like the following code snippet:

## restart-serverservice.ps1
Get-Service -ComputerName FOO -Name 'service1','service2' | Restart-Service

On the surface, there's nothing wrong with the preceding line. It does what you need it to do. But one of the most important concepts when coming from the PowerShell kiddie pool to the deep end is thinking ahead. Ask yourself: "Do I ever see a need where I'll need to restart different services or restart services on a different server?" If you answered yes to that question, you need to provide parameters. Parameters help you build reusable tools, not just static scripts. Instead of the preceding script, build one that looks like the following:

## restart-serverservice.ps1
[CmdletBinding()]
param(
    [Parameter()]
    [string]$ServerName,

    [Parameter()]
    [string[]]$ServiceName
)

Get-Service -ComputerName $ServerName -Name $ServiceName | Restart-Service

You'd then call this script using parameters.

.\restart-serverservice.ps1 -ServerName FOO -ServiceName 'service1','service2'

Perhaps, for now, you know you will always use the same server and services. You don't have to keep passing in the same server and services every time with parameters. Instead, assign default values to the parameters. You can still run .\restart-serverservice.ps1 like you did before without parameters, but now you can change the functionality of the script simply by passing different values to the script instead of modifying the script itself.

## restart-serverservice.ps1
[CmdletBinding()]
param(
    [Parameter()]
    [string]$ServerName = 'FOO',

    [Parameter()]
    [string[]]$ServiceName = @('service1','service2')
)

Get-Service -ComputerName $ServerName -Name $ServiceName | Restart-Service
Further Learning

• PowerShell Parameters: Everything You Ever Wanted to Know
Use Parameter Sets When All Parameters Should Not Be Used at Once

If you have a script or function with parameters that cannot be used at the same time, create parameter sets. Parameter sets are there to help you control which parameters are used together when you have similar ways of calling a script or function.

For example, perhaps you have a function that takes input via an object or a simple name. Let's call it Reboot-Server. It looks like the following:

function Reboot-Server {
    param(
        [Parameter(Mandatory)]
        [string]$ServerName
    )

    if (Test-Connection -ComputerName $ServerName -Count 1 -Quiet) {
        Restart-Computer -ComputerName $ServerName
    }
}

You call this function by passing a string value to the ServerName parameter as shown in the following:

Reboot-Server -ServerName FOO

All is well, but you've been getting better at PowerShell and want to create a module called RebootServer that has a function called Get-Server to find a server in Active Directory (AD) and use the pipeline to pass that server object to Reboot-Server. You're creating your own objects now too.

## RebootServer.psm1
#requires -Module ActiveDirectory

function Get-Server {
    param(
        [Parameter()]
        [string]$ServerName
    )

    ## If the ServerName parameter was not used, pull computer names from AD
    if (-not ($PSBoundParameters.ContainsKey('ServerName'))) {
        $serverNames = Get-AdComputer -Filter * | Select-Object -ExpandProperty Name
    } else {
        $serverNames = $ServerName
    }

    ## Send a custom object out to the pipeline for each server name found
    $serverNames | ForEach-Object {
        [pscustomobject]@{
            'ServerName' = $_
        }
    }
}

When you run this function, it will return one or more custom objects with a ServerName property. You then want to pass this object directly to the Reboot-Server function using the pipeline to reboot all servers that are passed to it, like the following:

## Reboots all computers in AD. Ruh roh!
Get-Server | Reboot-Server
But this scenario won't work because, as is, the Reboot-Server function only has a ServerName string parameter that doesn't accept pipeline input. You want to still be able to run Reboot-Server like so:

Reboot-Server -ServerName FOO

But you also want the ability to use the pipeline, so you create another parameter called InputObject that accepts the custom object that Get-Server returns over the pipeline, as shown in the following. You then add the necessary logic to test which parameter is used.

## RebootServer.psm1
function Reboot-Server {
    param(
        [Parameter(Mandatory)]
        [string]$ServerName,

        [Parameter(Mandatory, ValueFromPipeline)]
        [pscustomobject]$InputObject
    )

    process {
        if ($PSBoundParameters.ContainsKey('InputObject')) {
            $ServerName = $InputObject.ServerName
        }
        if (Test-Connection -ComputerName $ServerName -Count 1 -Quiet) {
            Restart-Computer -ComputerName $ServerName
        }
    }
}

Notice that both parameters are now mandatory. You want to be sure one of them is used, but when you run Get-Server | Reboot-Server, you'll find that you're prompted for the ServerName parameter. You're using the pipeline and it should be using InputObject instead, you think. But PowerShell is confused. It doesn't know which parameter you want to use. You have two scenarios here: using the ServerName parameter or using the InputObject parameter, not both.

To rectify this situation, use parameter sets. Put the ServerName and InputObject parameters in different sets. This way, you can still make each mandatory, but now PowerShell won't be confused, and you are only allowed to use one parameter at a time. PowerShell won't let you use both parameters together, which is exactly what you want.

## RebootServer.psm1
function Reboot-Server {
    param(
        [Parameter(Mandatory, ParameterSetName = 'ServerName')]
        [string]$ServerName,

        [Parameter(Mandatory, ValueFromPipeline, ParameterSetName = 'InputObject')]
        [pscustomobject]$InputObject
    )

    process {
        if ($PSBoundParameters.ContainsKey('InputObject')) {
            $ServerName = $InputObject.ServerName
        }
        if (Test-Connection -ComputerName $ServerName -Count 1 -Quiet) {
            Restart-Computer -ComputerName $ServerName
        }
    }
}
Further Learning

• PowerShell Parameters: Everything You Ever Wanted to Know
Use a PSCredential Object Rather Than a Separate Username and Password

PowerShell has a type of object called PSCredential. This object stores a username and password with the password securely encrypted. When you write a new script or function, use a [pscredential]$Credential parameter rather than UserName and Password parameters. It's cleaner, ubiquitously common in the PowerShell world, and a more secure way to pass sensitive information to a function.

Perhaps you have a script that authenticates to some services. To do that, the service needs a plaintext username and password, so you oblige by creating two parameters, UserName and Password, both as string values.

## somescript.ps1
[CmdletBinding()]
param(
    [Parameter()]
    [string]$UserName,

    [Parameter()]
    [string]$Password
)

.\someapp.exe $UserName $Password

You would then call this script like so:

.\somescript.ps1 -UserName 'adam' -Password 'MySuperS3ct!pw!'

This works fine, but you have to store this sensitive information somewhere and, when you pass the information, it's all in clear text. If you're writing PowerShell in Visual Studio Code (which you should be), you'll see a yellow squiggly line underneath the Password parameter telling you this is a no-no.

Figure 6-1. Code linting

Instead of creating two parameters, just create one parameter that will securely hold both the username and password in the form of a PSCredential object. You can then decrypt and extract the username and password from the object using the GetNetworkCredential() method.

## somescript.ps1
[CmdletBinding()]
param(
    [Parameter()]
    [pscredential]$Credential
)

## You'll still have to decrypt the password here but you at least keep it
## secure for as long as you can
$cred = $Credential.GetNetworkCredential()
.\someapp.exe $cred.UserName $cred.Password

Once the script has a Credential parameter and the code inside to extract the username and password, you can then create a PSCredential object and pass it securely to the script. In the following, Get-Credential prompts the user for a username and password, preventing the script from holding the sensitive information:

.\somescript.ps1 -Credential (Get-Credential)

Tip Source: https://www.reddit.com/user/thedean_801/
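For unattended scenarios where Get-Credential's prompt isn't an option, a PSCredential object can also be constructed in code. This is a sketch for demonstration only; in real automation, the password should come from a secret store rather than a literal string:

```powershell
# Build a SecureString from plaintext (demonstration only; -Force
# acknowledges that the plaintext is exposed in the script)
$password = ConvertTo-SecureString 'MySuperS3ct!pw!' -AsPlainText -Force

# Construct the PSCredential object from a username and the SecureString
$credential = [pscredential]::new('adam', $password)

# The username is readable; the password stays encrypted until needed
$credential.UserName
$credential.GetNetworkCredential().Password
```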
Further Learning

• Using the PowerShell Get-Credential Cmdlet and All Things Credentials
CHAPTER 7
Log Script Activity

If you don't know what your code is doing, how are you supposed to troubleshoot it? How are you supposed to optimize it? To show what it changed in an environment? Log everything! Especially on long-running scripts or scripts executed in production, you must bake logging routines into your code. Logging is not only helpful for monitoring activity when things go well; it's especially helpful when problems arise that need investigation. In this chapter, you'll learn some tips on how to best implement logging in your PowerShell scripts.
Use a Logging Function

Logging to a text file is an extremely common need. Great scripts log everything and do it a lot. This seems like a good opportunity to build a little tool to do that for us! You should have or be building up an arsenal of small helper functions. A Write-Log function needs to be in that arsenal somewhere. Creating a function with a few parameters like Message, Severity, etc., which then records information to a structured text file, is a must-have.

For example, let's say you find yourself constantly copying/pasting the same line of code over and over again in your scripts, like the following. You want to record the status of your scripts using Add-Content.

Add-Content -Path 'C:\activity.log' -Value "Doing something here"
This method gets the job done but has a few problems. For starters, you're repeating yourself. Don't do that. Second, what happens if you suddenly want to change up how you're logging information? Maybe you want to include a timestamp in the output or even create a structure allowing you to define messages by severity, category, and other criteria.

Logging information is a great opportunity to create a Write-Log function. The choice is yours how you'd like your Write-Log function to behave, but in the following you'll find a good example. Rather than calling Add-Content over and over, you can instead load the following function into your session and run Write-Log. This function is a good example of a simple yet handy logging function. It:

• Creates and appends lines to a CSV-formatted file called activity.log. By creating a CSV file, you can then easily parse it with Import-Csv and other utilities.

• Adds a Severity parameter. By passing different values to the Severity parameter, you can differentiate message importance and filter on severity when reading the log later.

• Adds a Source property. This is an advanced technique which shows you the name of the script or module the function was invoked from as well as the line number.

• Adds a Time property. Timestamps are critical in log files. Now you will always get a timestamp written to the log file.
function Write-Log {
    <#
    .EXAMPLE
    Write-Log -Message 'Value1' -LogLevel 2
    This example shows how to call the Write-Log function with named parameters.
    #>
    [CmdletBinding()]
    param (
        [Parameter(Mandatory)]
        [ValidateNotNullOrEmpty()]
        [string]$Message,

        [Parameter()]
        [ValidateNotNullOrEmpty()]
        [string]$FilePath = "$PSScriptRoot\activity.log",

        [Parameter()]
        [ValidateSet(1, 2, 3)]
        [int]$LogLevel = 1
    )

    [pscustomobject]@{
        'Time'     = (Get-Date -Format 'MM-dd-yy HH:mm:sstt')
        'Message'  = $Message
        'Source'   = "$($MyInvocation.ScriptName | Split-Path -Leaf):$($MyInvocation.ScriptLineNumber)"
        'Severity' = $LogLevel
    } | Export-Csv -Path $FilePath -Append -NoTypeInformation
}
Further Learning

• How to Build a Logging Function in PowerShell
Clean Up Verbose Messages

Think of a PowerShell script as a piece of software. Software has three rough stages of creation: development, testing, and production. It’s a shame so many scripts out there are “in production” but still have development code throughout. One example of “development code” in a script is all of those Write-Verbose lines. Developers use Write-Verbose to troubleshoot and debug code while it’s being developed. That code isn’t necessary when the code is solid. You don’t need to return the minute, granular details you required when debugging in the final product. Remember to clean all of those debugging messages up before signing off.

Let’s say you’ve been working on a script for a few days and needed to track a variable’s value at a specific point in time. Take the following script, for example. Perhaps you call this script in a larger automation workflow.

[CmdletBinding()]
param(
    [Parameter()]
    [string]$ComputerName,

    [Parameter()]
    [string]$FilePath
)

Get-Content -Path "\\$ComputerName\$FilePath"

You’ve noticed that this script sometimes returns an error trying to access a bogus file. You need to know what values are being passed to the FilePath parameter, so you decide to add a Write-Verbose reference.

## read-file.ps1
[CmdletBinding()]
param(
    [Parameter()]
    [string]$ComputerName,

    [Parameter()]
    [string]$FilePath
)

Write-Verbose -Message "Attempting to read file at [\\$ComputerName\$FilePath]..."
Get-Content -Path "\\$ComputerName\$FilePath"

You then call this script using the Verbose parameter.

./read-file.ps1 -ComputerName FOO -FilePath 'bar.txt' -Verbose

You will then see the familiar verbose line as shown here:
Verbose messaging isn’t necessarily a bad thing. After all, you can control when and when not to show this stream using the Verbose parameter or the VerbosePreference automatic variable. However, this behavior can become a problem when you have random lines throughout your scripts that make no sense to anyone other than your current self. Ensure verbose messages are clear and concise, and don’t overdo them. Tip Source: https://twitter.com/JimMoyle
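As a rough illustration of the difference, compare a leftover debug message with one that would still make sense in production. Both messages here are invented for this example, not taken from any particular script:

```powershell
# Unclear: only meaningful to the original author mid-debugging.
Write-Verbose -Message "here 3: $fp"

# Clear: states the action being attempted and the values involved.
Write-Verbose -Message "Attempting to read file at [\\$ComputerName\$FilePath]..."
```

The first line is the kind of breadcrumb that should be deleted before a script ships; the second is worth keeping because it explains itself to any reader.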
Further Learning

• Quick and Efficient PowerShell Script Debugging with Breakpoints
CHAPTER 8
Build with Manageability in Mind You can write code all day to solve all the things, but if you can’t manage it over time, you’re sunk. It’s important to not only solve the problems of today but think about how those solutions will be maintained over time.
DRY: Don’t Repeat Yourself

Notice when you’re repeating the same code snippets. Be cognizant of when you’re following the same patterns over and over again. Being great at coding is about pattern recognition and improving efficiency. Don’t type out the same command ten times to process ten different parameter values. Use a loop. Write “helper” functions that can be called from other functions to eliminate writing that same code again.

Let’s say you have a script that checks for Windows features and, if they don’t exist, installs various Windows features. Currently, your code looks like this to install a single feature.

if (-not (Get-WindowsFeature -Name 'Web-Server').Installed) {
    Install-WindowsFeature -Name 'Web-Server'
}
You eventually need to install other features. You think that since you already have the code to install one feature, it’d be OK to just copy/paste this code for each new feature.

if (-not (Get-WindowsFeature -Name 'Web-Server').Installed) {
    Install-WindowsFeature -Name 'Web-Server'
}
if (-not (Get-WindowsFeature -Name 'SNMP-Server').Installed) {
    Install-WindowsFeature -Name 'SNMP-Server'
}
if (-not (Get-WindowsFeature -Name 'SNMP-Services').Installed) {
    Install-WindowsFeature -Name 'SNMP-Services'
}
if (-not (Get-WindowsFeature -Name 'Backup-Features').Installed) {
    Install-WindowsFeature -Name 'Backup-Features'
}
if (-not (Get-WindowsFeature -Name 'XPS-Viewer').Installed) {
    Install-WindowsFeature -Name 'XPS-Viewer'
}
if (-not (Get-WindowsFeature -Name 'Wireless-Networking').Installed) {
    Install-WindowsFeature -Name 'Wireless-Networking'
}

Although the preceding code gets the job done, you can tell this practice isn’t sustainable. What if you must change how you detect the feature or install it? You’d have to do a lot of find/replace work. There’s a better way: separate the elements that do change (the feature name) from the code that will not typically change (the code that checks for and installs the feature).
$featureNames = @(
    'Web-Server'
    'SNMP-Server'
    'SNMP-Services'
    'Backup-Features'
    'XPS-Viewer'
    'Wireless-Networking'
)

foreach ($name in $featureNames) {
    if (-not (Get-WindowsFeature -Name $name).Installed) {
        Install-WindowsFeature -Name $name
    }
}

Not only is the code much shorter, it’s also much more manageable in the future. To add new features, simply add them to the $featureNames array. To change how a feature’s status is checked or how a feature is installed, just change the code inside of the foreach loop. The preceding example is only one way to implement the DRY mentality. Constantly look for repeating patterns and address them using logic.
Further Learning

• The DRY Principle: How to Write Better PowerShell Code
Don’t Store Configuration Items in Code

Always treat configuration items that are required in your code as separate entities. Items like usernames, passwords, API keys, IP addresses, hostnames, etc. are all considered configuration
items should then be referenced in your code. These artifacts are static values that should be injected into your code, not stored in the code. Separating configuration items from your code allows the code to be more flexible. It allows you to make a change at a global level and have that change immediately consumed by the code. Whenever you write PowerShell code, you should always build with reuse in mind. Storing configuration items outside of the code is one way to do that.

Taking the previous tip “DRY: Don’t Repeat Yourself” to the next level, let’s build that code using a configuration file. Instead of defining the items that may change over time (Windows feature names) inside of the code, store them in an external data store. This data store can be anything like JSON, XML, YAML, or even a SQL database. The point is to separate the data from the code logic itself. Let’s use a PowerShell data file as the data store called ScriptConfiguration.psd1 in the same directory as the Windows feature script as shown here:

@{
    'WindowsFeatures' = @(
        'Web-Server'
        'SNMP-Server'
        'SNMP-Services'
        'Backup-Features'
        'XPS-Viewer'
        'Wireless-Networking'
    )
}

The original script would then read the PowerShell data file and use it as input as shown in the following code snippet:

$configuration = Import-PowerShellDataFile "$PSScriptRoot\ScriptConfiguration.psd1"
foreach ($name in $configuration.WindowsFeatures) {
    if (-not (Get-WindowsFeature -Name $name).Installed) {
        Install-WindowsFeature -Name $name
    }
}

This practice sets up your script for other configuration values down the road and even allows non-developers to easily change the behavior of the script simply by adding or removing Windows features in the configuration file.
Further Learning

• Configuration PowerShell Module
Always Remove Dead Code

Although not critical, leaving code in scripts that will never be executed is bad practice. It clutters up the important code and makes scripts harder to understand and troubleshoot. Remove it. Look into using the code coverage options in Pester to discover all of the unused code in your scripts. If you’re using Visual Studio Code with the PowerShell extension, you can also spot and remove dead code by paying attention to unused variables.

Perhaps you have a script that connects to a remote computer. At the time you built it, your test computer was not in an Active Directory domain, and you had to pass a PSCredential object to the remote computer. To support that use case, you added a Credential parameter. But things have changed, and this script is only used in a domain environment without needing an alternate credential.
You also had a variable inside of the script that served a purpose at one time, but you have since changed the value.

[CmdletBinding()]
param(
    [Parameter(Mandatory)]
    [string]$ComputerName,

    [Parameter()]
    [pscredential]$Credential
)

$someLostForgottenVariable = 'bar'

$service = Get-Service -Name 'foo' -ComputerName $ComputerName
if ($service.Status -eq 'Running') {
    Write-Host 'Do something here'
} elseif ($someLostForgottenVariable -eq 'baz') {
    Write-Host 'do something else here'
}

The preceding code has two pieces of “dead” code: the Credential parameter and the $someLostForgottenVariable variable. Why is this code “dead”? Because it will never be used in the script. You can invoke the script using the Credential parameter, but it’s not going to be used. Also, as is, with the value of $someLostForgottenVariable set statically to bar, the Write-Host 'do something else here' line will never execute because it depends on a condition that will never be true. Tip Source: https://twitter.com/JimMoyle
Further Learning

• Testing Pester Code Coverage
CHAPTER 9
Be Specific

We humans tend to take a lot of shortcuts and assume many things, and for good reason. We wouldn’t survive in this world if our brains had to pay attention to every input we receive on a daily basis. We naturally and automatically assign context to situations, conversations, and more. This is why a coworker can just begin talking about something that happened the previous day that you were involved with and you can quickly pick up context. You don’t have to start a conversation like: “At 7:01 PM, you and I were in our office working late. I then rose from my chair and yelled over the cube….” Your coworker gets it already.

Although artificial intelligence (AI) is trying, computers don’t assume. They aren’t smart. They can’t automatically do things. We humans must tell them exactly what to do. Sure, you can get away with taking shortcuts here and there, but if that random edge case comes along that you never coded for, the computer isn’t going to handle it; it’s going to fail miserably.

In this chapter, you’re going to learn some tips on when to be specific in your code. You will learn about opportunities to choose the more defined, explicit, specific route rather than relying on the off chance that your code will never encounter an edge case you didn’t account for.
Explicitly Type All Parameters

PowerShell is called a loosely typed language. This means that you don’t have to statically assign types to variables. Statically typed languages don’t allow you to simply define a variable or parameter like $var = 2
and have the $var value automatically treated as an integer, or the $var2 variable treated as a string in the code $var2 = 'string'. PowerShell makes assumptions based on various “patterns” it sees in the value. Loosely typed languages like PowerShell are handy because they take care of some extra work, but this hand-holding can sometimes backfire. When defining function or script parameters, you should always statically assign a type to each one.

When creating a parameter, you have a lot of control. You can, for example, simply define a parameter name. PowerShell accepts this.

function Get-Thing {
    param(
        $Param
    )
}

I don’t advise you to define parameters like this even though you can. What happens if you pass in a string? How about a Windows service object? Are you handling those times when someone tries to use the pipeline to pass in a Boolean somehow? You get the point. There are many different types of parameters PowerShell will accept in this situation. You’re probably not accounting for all of them in the code.

To make the parameter better, you should statically type the parameter to whatever type you’re supporting in the function code. Perhaps you expect a pscustomobject with a property of Foo. Your code will reference the Foo property on the Param object.

function Get-Thing {
    param(
        $Param
    )

    Write-Host "The property Foo's value is [$($Param.Foo)]"
}
Instead of assuming this will work, explicitly cast the Param parameter to the type of object you’re expecting. Make this a habit.

function Get-Thing {
    param(
        [pscustomobject]$Param
    )

    Write-Host "The property Foo's value is [$($Param.Foo)]"
}

By explicitly assigning a type, you’re telling PowerShell that this function should only accept this type. Sometimes explicitly defining the type isn’t as important as at other times, but it’s a great habit to get into.
Further Learning

• The PowerShell Parameter Demystified
Always Use Parameter Validation When Possible

Along the same lines as explicitly typing a parameter, you should also narrow the input a parameter will accept by using parameter validation. PowerShell provides many different ways to limit the values passed to parameters.

For example, maybe you have a function that is part of a larger automation routine that provisions servers. One function in that routine comes up with a random server name. The first part of that random server name is supposed to be the department that owns the server.
You have defined three department names that should be part of the server name.

$departments = 'ACCT', 'HR', 'ENG'

You then have a function to come up with the server name, which accepts the department label and assigns a random number to the name.

function New-ServerName {
    param(
        [Parameter()]
        [string]$Department
    )

    "$Department-$(Get-Random -Maximum 99)"
}

The New-ServerName function works as intended for a while because you just know you should only use one of the three department labels. You then share this script with someone else who doesn’t know that and instead tries New-ServerName -Department 'Software Engineering', which creates the server name Software Engineering-88. This name doesn’t fit your schema and isn’t even a valid name. You need to ensure everyone only uses the values you want. You need a ValidateSet validation attribute in this case.

function New-ServerName {
    param(
        [Parameter()]
        [ValidateSet('ACCT','HR','ENG')]
        [string]$Department
    )

    "$Department-$(Get-Random -Maximum 99)"
}
Now when anyone tries to pass a value that isn’t allowed, PowerShell will deny it and tell them which values they can use.

PS> New-ServerName -Department 'Software Engineering'
New-ServerName: Cannot validate argument on parameter 'Department'. The argument "Software Engineering" does not belong to the set "ACCT,HR,ENG" specified by the ValidateSet attribute. Supply an argument that is in the set and then try the command again.
Further Learning

• Parameter Validation
Always Define a Function’s OutputType

The OutputType keyword is a relatively unknown PowerShell construct, but one that not only shows, at a glance, what kind of object a function returns but also helps the function’s user with some handy tab completion. Let’s say you have a function that returns a file object in the form of a System.IO.FileInfo object like this:

function Get-File {
    param(
        [Parameter()]
        [string]$FilePath
    )

    Get-Item -Path $FilePath
}

This function returns a single file object as you can see in Figure 9-1.
Figure 9-1. Get-File returning a single object

Now let’s say you need to reference the time that file was created but don’t quite remember what the property name is. You pipe the output to Get-Member and begin looking.

Get-File -Path 'C:\myfile.txt' | Get-Member

You wouldn’t have to use Get-Member if you knew what type of object is coming from that function. How do you do that? Use the OutputType keyword like the following:

function Get-File {
    [OutputType([System.IO.FileInfo])]
    param(
        [Parameter()]
        [string]$FilePath
    )

    Get-Item -Path $FilePath
}

Not only is this helpful when reading the code, it’s helpful when inspecting the object the function returns. Since PowerShell knows what type of object will be returned, you don’t even have to run the function to discover the properties and methods on that object.
Enclose the function reference in parentheses or assign the output to a variable, type a dot, and start hitting the Tab key. You’ll see that PowerShell cycles through all of the available members on that object.

(Get-File -FilePath C:\Test-PendingReboot.ps1).
Write Specific Regular Expressions

If you have to match a string with a regular expression (regex), be sure that regular expression is as specific as possible. Regex could fill another book in and of itself, but for this book, just try to build each expression as specifically as possible. If you don’t, you’re bound to match strings you didn’t intend.

Perhaps you need to match a globally unique identifier (GUID). An accurate regex to match a GUID would look like the following. That’s a lot of characters!

(\{){0,1}[0-9a-fA-F]{8}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{12}(\}){0,1}

Compare that long string to something like this that would also match GUIDs.

'^.{8}-.{4}-.{4}-.{4}-.{12}$'

The difference between these two regular expressions is not only the length but the accuracy. The first expression matches all string patterns a GUID could possibly be represented in, and only those. The second example simply matches any characters separated by dashes. The second regex is less specific and could lead to matches you do not intend.

Also, pay attention to case sensitivity. Maybe you have an expression that’s only supposed to match exactly the name Michael. You decide to hurriedly put in a regex and just use the match operator like the following.
It works for now, but in some scenario down the road, the michael string comes along, is matched, and screws up your script.

'michael' -match 'Michael'

Instead of using the case-insensitive match operator, consider using the case-sensitive cmatch operator. Using the cmatch operator ensures the string you match is exactly the one you intend.

'michael' -cmatch 'Michael'
CHAPTER 10
Write for the Next Person

Have you ever come across a script on the Internet that looks exactly like what you want, but you can’t understand it? It has no comments, no help content, and you can’t follow the code? This situation is all too common. Don’t be that script author. Write your code so the next person can understand it. Writing clear, concise code also means you’ll end up helping yourself. More often than not, after not touching a piece of code for six months or more, you are the “next person” who ends up having to come back through and clean up what you previously wrote. By taking the following precautions, you’ll be able to save yourself and the next coder countless hours on your projects.
Give Your Variables Meaningful Names

Variable names should be meaningful. When you go back through your code, you shouldn’t be wondering what $x is or how you are supposed to use it. You should be able to pick up exactly what it’s supposed to do the first time through. With PowerShell, there’s no practical limit to how long your variable names can be, so feel free to be as descriptive as possible.

Let’s say you have a list of server names in a text file. You’d like to write a script that reads this text file and connects to each of these servers.
Technically, something like the following code snippet would work. PowerShell doesn’t care how you name variables, but humans do!

$t = 'servers.txt'
$x = Get-Content -Path $t
foreach ($r in $x) {
    Invoke-Command -ComputerName $r -ScriptBlock {'Doing something on this computer'}
}

There is nothing syntactically wrong with the preceding snippet and, as is, someone with intermediate PowerShell skills could probably read what it’s doing, but with unnecessary difficulty. Compare that snippet with the following one:

$serversFilePath = 'servers.txt'
$serverNames = Get-Content -Path $serversFilePath
foreach ($serverName in $serverNames) {
    Invoke-Command -ComputerName $serverName -ScriptBlock {'Doing something on this computer'}
}

Notice how just a simple change of variable names makes it so much clearer what’s going on. If, for example, you’re looking at the Invoke-Command reference, you don’t have to painstakingly decipher the code like this:
• Notice the $r variable.

• Look up to see that $r is the foreach loop iterator variable, part of the $x collection.

• Continue to follow where $x came from to find that it’s the contents of some file pointed to by $t.

• Then finally see that $t points to servers.txt.
You could have saved four steps just by changing the $r variable name to what it actually is ($serverName). Tip Source: https://twitter.com/lh_aranda
Further Learning

• The PowerShell Variable - Naming, Value, Data Type
String Substitution

When working with strings, you sometimes need to insert variable values inside of other strings. One way that is a bit clearer than other methods is string formatting using the -f string operator. The -f string operator lets you define placeholders in a string and then insert values into them.

Let’s say you have a script that provisions your company’s servers. You’re using the script to create a standardized naming convention with server names that start with the department name they’re being used for, like HR-SRV, ACCT-SRV, and ENG-SRV. You have all of the departments defined in an array like the following:

$departments = @(
    'HR'
    'ACCT'
    'ENG'
)

You then need to concatenate strings together to come up with each server name. You could merge the department name and the rest of the server name label using variable expansion as shown in the following:

foreach ($dept in $departments) {
    $srvName = "$dept-SRV"
    $srvName
}
Variable expansion inside of a double-quoted string works fine, but you can also use the -f format operator to remove the variable from inside of the string completely. The same functionality can be achieved using the following snippet:

foreach ($dept in $departments) {
    $srvName = '{0}-SRV' -f $dept
    $srvName
}

Now you have a placeholder ({0}) representing the place where the value of $dept will be. Most people consider string formatting a bit clearer, especially when it comes to inserting many different variables inside of a single string. Tip Source: https://twitter.com/harringg
Further Learning

• Keep Your Hands Clean: Use PowerShell to Glue Strings Together
Keep Aliases to the Console Only, Not in Scripts

PowerShell has a way to refer to commands by names other than their actual command name, called aliases. Aliases are typically used to shorten the number of characters you have to type for a command. If you run Get-Alias, you’ll see all of the default aliases available to you. Even though you can save some keystrokes, there are a few major problems with aliases:

1. Lack of clarity – You’re adding another layer of complexity by using a reference to a command rather than calling the command by its own name.
2. Inconsistent syntax – You are free to use aliases or the actual command name whenever you like. As humans, we are terribly inconsistent, which probably means you’re going to use the command alias sometimes and the actual command name other times.

3. Aliases can vary by system – If you create a custom alias on your computer and then share that script with others, the script will break. You are including an unnecessary dependency in your script that you will then have to manage.

4. Some aliases are not cross-platform – ls, for example, is an alias for the Get-ChildItem cmdlet on Windows, but ls is an actual Bash command on Linux. Invoking ls on Windows and Linux invokes different commands altogether.

Don’t use aliases in your code. Aliases are fine as shortcuts in your console session, but keep them there. Don’t be tempted to use aliases in scripts or modules. Your code will be simpler and clearer. Tip Source: https://twitter.com/TheTomLilly
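As a quick illustration, the two pipelines below do the same thing, but only the second reads unambiguously in a script. The C:\Logs path and the 1MB threshold are invented for this example:

```powershell
# Console-only style: terse aliases save keystrokes interactively.
ls C:\Logs | ? { $_.Length -gt 1MB } | % { $_.FullName }

# Script style: full cmdlet and parameter names spell out the intent
# for the next person reading the code.
Get-ChildItem -Path 'C:\Logs' |
    Where-Object { $_.Length -gt 1MB } |
    ForEach-Object { $_.FullName }
```

Both use only default aliases (ls, ?, %), so they happen to work everywhere PowerShell runs on Windows, but the second version carries no hidden dependencies and no guesswork.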
Further Learning

• When You Should Use PowerShell Aliases
Put Functions in Alphabetical Order in a Module

If you have a module file (PSM1) with many different functions, consider defining them in alphabetical order. If you have a module with dozens of functions, you should make it as easy as possible to find a function. One way to do that is through alphabetical ordering.
Alphabetical ordering will come in handy too when you’re working in an editor like Visual Studio Code (VS Code). In many editors, you can bring up a list of all functions defined in a file. In VS Code, you can press Ctrl+Shift+O while you have a module file open, and you’ll immediately see all of the functions defined in that file. If all functions are in alphabetical order, you can quickly scan through the list to find the function you’re looking for.
Figure 10-1. Out-of-order module function listing in VS Code

Tip Source: https://twitter.com/raychatt
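If you want to check the ordering without opening an editor, a small sketch like the following can compare a module’s function names in file order against alphabetical order using PowerShell’s parser. The MyModule.psm1 path is hypothetical:

```powershell
# Parse the module file without executing it.
$ast = [System.Management.Automation.Language.Parser]::ParseFile(
    '.\MyModule.psm1', [ref]$null, [ref]$null)

# Find every function definition, in the order it appears in the file.
$functionNames = $ast.FindAll({
    $args[0] -is [System.Management.Automation.Language.FunctionDefinitionAst]
}, $true) | ForEach-Object { $_.Name }

# Compare the file order to alphabetical order, position by position.
if (Compare-Object $functionNames ($functionNames | Sort-Object) -SyncWindow 0) {
    Write-Warning 'Functions are not in alphabetical order.'
}
```

This is just a convenience check; the Ctrl+Shift+O symbol list in VS Code gives you the same information interactively.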
Explain Regular Expressions with Comments

There are times when you need to match and parse strings in PowerShell and simple pattern matching just won’t cut it. You must use regular expressions. If you have to write a complicated regular expression in a script, be sure to provide a comment above it indicating exactly what kind of string it matches. There aren’t many regular expression gurus out there who can read a regex string like reading a command name. Make the code as easy to understand as possible. Include some example strings the regex matches too. Examples are always appreciated.
For example, let’s say you need to match a globally unique ID (GUID). In the following you will see the regular expression to match a GUID. It’s not intuitive at all, to say the least. No one can take a quick glance at that string and know its purpose is to match a GUID.

$regexString = '(\{){0,1}[0-9a-fA-F]{8}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{12}(\}){0,1}'

However, if you have a small comment above that string, it gives the reader an idea of what that regular expression’s purpose is.

## Matches all GUIDs
$regex = '(\{){0,1}[0-9a-fA-F]{8}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{12}(\}){0,1}'

This is a simple tip but one that will help bring more clarity to those regular expressions. Tip Source: https://twitter.com/guyrleech
Write Comment-Based Help

Always include comment-based help, especially if you’re sharing scripts with other people. Comment-based help is the easiest kind of help to write. It shows up when a user runs Get-Help and acts as comments in the code too. Comment-based help should be standard across all production-ready scripts you write. Anyone can write comment-based help for scripts or for your own functions.

Maybe you have a function that reads a configuration file for some line-of-business application. You can create comment-based help as shown in the following to define what this function is doing and its purpose:

function Get-ConfigurationFile {
    <#
    .EXAMPLE
    Get-ConfigurationFile -FilePath 'C:\Program Files\AcmeApp\config.xml'

    This example will look for the configuration file at C:\Program
    Files\AcmeApp\config.xml and, if found, will return a file object.

    .OUTPUTS
    System.IO.FileInfo
    #>
    param(
        [Parameter()]
        [string]$FilePath
    )

    ## Some code here that looks for the configuration file
}

Once this function is loaded into your session, you can then use Get-Help -Name Get-ConfigurationFile to see all of the help content without even going into the code.
Further Learning

• Building Advanced PowerShell Functions and Modules
Weigh the Difference Between Performance and Readability

When writing code, you’re forced to weigh decisions based on many factors. Two factors that sometimes collide are performance and readability. It’s important for code to accomplish a task as quickly as possible, but not at the cost of readability. Just because a computer doesn’t need white space, comments, and long command names doesn’t mean humans don’t.

There are many instances where you can increase performance at the expense of readability, but let’s narrow it down to a simple example: adding items to a collection. You can add items to a collection many different ways in PowerShell. Some ways are simple and succinct, yet others are faster at the cost of creating your own .NET objects and calling methods.

Let’s say you have a small collection of a hundred or so items in an array that you’ve defined as the following. In this case, we’re using the range operator to quickly create an array of the numbers zero to 100 for simplicity.

$array = 0..100

When you’re working with small arrays like this, you can add items to the array using the += operator as shown in the following. You can see this method is simple, succinct, and pretty intuitive.

$array += 101
However, this simple method is actually tearing down the collection in the background and creating a new one. PowerShell is fast enough that you wouldn’t notice for collections with even a few thousand items in them. A faster, yet more complex, way to create an array and add items to it is to use an ArrayList as shown in the following. This method forces you to explicitly cast your basic array to a System.Collections.ArrayList type. Then, to add an item, you must call the Add() method.

$arrayList = [System.Collections.ArrayList](0..100)
$arrayList.Add(101)

This method is clearly more obtuse than the earlier example, but it’s faster, especially with large collections. Don’t use the fastest method all of the time if it’s going to affect readability. If you don’t ever intend to process more than a few thousand items in the collection, just use the += operator; otherwise, use an array list.
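To see the trade-off for yourself, a rough timing sketch like the following compares the two approaches with Measure-Command. The item count of 10000 is an arbitrary choice for illustration:

```powershell
# Time appending with += (rebuilds the array on every add).
$plusEqualsTime = Measure-Command {
    $array = @()
    foreach ($i in 0..10000) { $array += $i }
}

# Time appending with an ArrayList (mutates the collection in place).
$arrayListTime = Measure-Command {
    $list = [System.Collections.ArrayList]@()
    foreach ($i in 0..10000) { $null = $list.Add($i) }
}

"+= took $($plusEqualsTime.TotalMilliseconds) ms; " +
"ArrayList took $($arrayListTime.TotalMilliseconds) ms"
```

The exact numbers vary by machine, but the gap widens quickly as the item count grows, which is the point of weighing performance against readability rather than defaulting to either.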
Further Learning

• Consider Trade-offs Between Performance and Readability
CHAPTER 11
Handle Errors Gracefully

Believe it or not, your PowerShell script isn’t going to work right all the time. It will fail, and fail hard, sometimes. A novice developer typically doesn’t worry much about error handling. Error handling is one of those topics that separates the novices from the professionals. To create a production-ready, robust PowerShell solution, you must ensure all the ways your scripts can fail are properly handled. Inevitably, they will throw an error when you least expect it, and it’s important to catch that error to either fix the problem mid-run or exit gracefully.
Force Hard-Terminating Errors

Unlike many other languages, PowerShell has two types of exceptions/errors – terminating and non-terminating. When encountered, a non-terminating error does not stop script execution. A hard-terminating error, on the other hand, does. You should not rely on non-terminating errors under most circumstances. Why? Because you have much more control with hard-terminating errors. Non-terminating errors are essentially just red text on the screen. Just like with the Write-Host cmdlet, you don’t have much, if any, control over those errors. Always use hard-terminating errors.
© Adam Bertram 2020 A. Bertram, Building Better PowerShell Code, https://doi.org/10.1007/978-1-4842-6388-4_11
There are a couple of different ways to force a hard-terminating error: using the common ErrorAction parameter or using the global $ErrorActionPreference automatic variable. Let's say you have a script that copies an ACL from one file to another, but you accidentally pass a nonexistent file to it.

$someFile = Get-Item -Path 'a file that does not exist'
$acl = Get-Acl -Path $someFile.FullName
Set-Acl -Path 'a file that exists' -AclObject $acl

If you run the preceding script, you'll see three different errors, as you can see in Figure 11-1. This happens because Get-Item couldn't find the file. It told you so but didn't stop the script; it returned a non-terminating error. Since the script wasn't terminated, the code just kept going unnecessarily. Lines 2–3 depend on line 1, so if line 1 fails, there's no reason to run lines 2–3.
Figure 11-1. Three non-terminating errors

Instead of relying on a non-terminating error, "convert" that error to a hard-terminating one. One way to do that on a per-command basis is to use the ErrorAction parameter and set it to Stop as shown here:
$someFile = Get-Item -Path 'a file that does not exist' -ErrorAction Stop
$acl = Get-Acl -Path $someFile.FullName
Set-Acl -Path 'a file that exists' -AclObject $acl

Now when you run the script, you only receive one error. PowerShell terminated the script before it had the chance to execute lines 2–3. Not only that, but you can now run code based on whether the error/exception is thrown by putting this in a try/catch block.

try {
    $someFile = Get-Item -Path 'a file that does not exist' -ErrorAction Stop
    $acl = Get-Acl -Path $someFile.FullName
    Set-Acl -Path 'a file that exists' -AclObject $acl
} catch {
    Write-Warning -Message $_.Exception.Message
}

When you run the script now, you receive a nice warning. PowerShell threw the exception that Get-Item created, and the catch block caught it and ran the code inside. You could tack the -ErrorAction Stop parameter onto each of your commands or, if you want to globally turn non-terminating errors into hard-terminating errors, you can use $ErrorActionPreference. By setting $ErrorActionPreference to Stop as the first line of the script, you're forcing all commands below that line to return only hard-terminating errors.

$ErrorActionPreference = 'Stop'
try {
    $someFile = Get-Item -Path 'a file that does not exist'
    $acl = Get-Acl -Path $someFile.FullName
    Set-Acl -Path 'a file that exists' -AclObject $acl
} catch {
    Write-Warning -Message $_.Exception.Message
}

Use $ErrorActionPreference and the ErrorAction parameter as much as possible to ensure you can control those exceptions!
Further Learning •
Error Handling: Two Types of Errors
Avoid Using $?

It's important to write code as simply as possible. Convoluted, complex code does no one any good, regardless of whether you think you look smart writing it or it saves you a few characters of typing. You should always focus on writing code that others can read and understand.

PowerShell has a default variable called $? which some are tempted to use. Just looking at this variable, what do you think it does? You undoubtedly have no idea. There's no name or indication of what this variable's purpose is. The only way to find out is to read some documentation. This is not the type of code you want to write. To save you some time scouring the documentation: this variable indicates whether the last command executed was successful by returning True or False. Not only is this variable's name unintuitive, it also just returns a single Boolean value if an error occurred. It provides no other information.

Using part of the preceding script as a demo, let's say you instead went the lazy way and decided to silence all non-terminating errors completely by setting the ErrorAction parameter to Ignore. You're then using the $? variable to decide whether or not to run the other code.
$someFile = Get-Item -Path 'a file that does not exist' -ErrorAction Ignore
if (-not $?) {
    ## $_ holds no exception outside a catch block, so write a static message
    Write-Warning -Message 'Could not find the file'
} else {
    $acl = Get-Acl -Path $someFile.FullName
    Set-Acl -Path 'a file that exists' -AclObject $acl
}

At the end of the day, the preceding code accomplishes the same goal as the earlier try/catch example with a hard-terminating error. But, by using the much less known $? variable, the script isn't nearly as readable.
Further Learning •
About Automatic Variables
CHAPTER 12
Don’t Skimp on Security

As developers and system administrators come together to form DevOps, it’s important not to exclude security. Security is an extremely important topic, especially today, and one that we can begin to include in our everyday PowerShell code. Injecting security into PowerShell code is a deep topic, one that could not be covered completely in a single book, let alone a single chapter. But, in this chapter, you’ll learn some tips to take care of some of the low-hanging fruit easily obtained by using some best practices.
Sign Scripts

PowerShell has a built-in method to cryptographically sign scripts to ensure they have not been tampered with. Signing a script adds a cryptographic hash at the bottom of the script. When the execution policy is set to RemoteSigned or AllSigned, PowerShell will not allow the script to run if it detects the code has been modified since it was last signed. Let’s quickly cover how you would sign a PowerShell script. To do so, you must first have a certificate capable of code signing. You can either use a public certificate or a self-signed one. Let’s create a self-signed certificate for simplicity using the New-SelfSignedCertificate cmdlet.
Using the New-SelfSignedCertificate cmdlet, you can only create the certificate in the My store, so go ahead and do that. Be sure to note the thumbprint. You’ll need it in the next step. (The DnsName value below is a placeholder; substitute whatever name you’d like.)

New-SelfSignedCertificate -DnsName 'SomeCodeSigningCert' -CertStoreLocation Cert:\CurrentUser\My -Type CodeSigning

Next, export the created certificate and import it into the Trusted Root Certification Authorities and Trusted Publishers certificate stores, substituting the thumbprint you noted earlier.

Export-Certificate -FilePath codesigning.cer -Cert Cert:\CurrentUser\My\<thumbprint>
Import-Certificate -CertStoreLocation Cert:\LocalMachine\Root\ -FilePath .\codesigning.cer
Import-Certificate -CertStoreLocation Cert:\LocalMachine\TrustedPublisher\ -FilePath .\codesigning.cer

Now you can sign a script. Take any ol’ PowerShell script and sign it with the Set-AuthenticodeSignature cmdlet.

## This assumes there is only one code signing cert in the store
$cert = Get-ChildItem -Path Cert:\CurrentUser\My\ -CodeSigningCert
Set-AuthenticodeSignature -FilePath C:\Temp\script1.ps1 -Certificate $cert

If you open up the script you just signed, you’ll see the large signature block at the bottom. Now if you run this script on a computer with an execution policy requiring signed scripts, and that computer trusts your generated certificate, PowerShell will execute the script.

Tip Source: https://twitter.com/brentblawat
Further Learning •
PowerShell Basics – Execution Policy and Code Signing
Use Scriptblock Logging

It’s critical to know what code is being executed in your environment. Unfortunately, if you download a script from the Internet, click a malicious link, or a user gets a phishing email, that code may be malicious. By enabling scriptblock logging in your environment, you can see, down to the scriptblock level, exactly what that code is doing and develop a proper audit trail.

To enable scriptblock logging, you must first enable it at the local policy or group policy level by going to Computer Configuration ➤ Administrative Templates ➤ Windows Components ➤ Windows PowerShell. Once there, double-click Turn on PowerShell Script Block Logging. You can see what the menu item looks like in Figure 12-1.
Figure 12-1. Navigating a local policy

Once you have the Turn on PowerShell Script Block Logging box open, click Enabled and OK to confirm as shown in Figure 12-2.
Figure 12-2. Enabling scriptblock logging

Now when PowerShell executes code, you’ll begin to see events written to the Microsoft/Windows/PowerShell event log as shown in Figure 12-3.
Figure 12-3. Windows PowerShell events
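Once logging is enabled, you can also query those scriptblock events from PowerShell itself rather than clicking through Event Viewer. A quick sketch, assuming the default Microsoft-Windows-PowerShell/Operational log and the standard scriptblock logging event ID of 4104:

```powershell
Get-WinEvent -FilterHashtable @{
    LogName = 'Microsoft-Windows-PowerShell/Operational'
    Id      = 4104
} -MaxEvents 10 | Select-Object TimeCreated, Message
```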
Further Learning •
Greater Visibility Through PowerShell Logging
Never Store Sensitive Information in Clear Text in Code

This should go without saying, but saving passwords, private keys, API keys, or any other sensitive information in code is a bad idea. PowerShell gives you many different ways to encrypt this information and decrypt it when necessary.
Use the Export-CliXml and Import-CliXml commands to encrypt and decrypt objects with sensitive information like credentials. Use secure strings rather than plaintext strings, generate your own cryptographic keys, and more with PowerShell. There are lots of ways to encrypt and decrypt sensitive information with PowerShell.

Perhaps you need to pass a PSCredential object to a service to authenticate. Typically, you’ve been using the Get-Credential cmdlet to create this object by prompting for a username and password. You’d now like to automate this process and authenticate without any user interaction. Browsing the Web trying to find a way to prevent that username/password prompt, you come across the following code. This code allows you to statically assign a username and password in the code and create a PSCredential object.

$password = ConvertTo-SecureString 'MySecretPassword' -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential ('root', $password)

You now have MySecretPassword stored in plaintext inside of the script. Don’t do this. Instead, save the PSCredential object to disk encrypted using Export-CliXml. Rather than creating the PSCredential object on the fly, save the entire object in an XML file that automatically encrypts the password. To do that, first create the PSCredential object using whatever means you’d like.

$credential = Get-Credential

Next, export that object to an XML file on disk with Export-CliXml.

$credential | Export-CliXml -Path .\credential.xml

Take a look at the credential.xml file in Figure 12-4 that you just created and you’ll see the password is encrypted.
Figure 12-4. Encrypted password in credential.xml

You’ll then need to “convert” that credential.xml file back to a PSCredential object your script can use. To do that, use Import-CliXml.

$credential = Import-CliXml -Path .\credential.xml

You now have a PSCredential object stored in $credential that’s exactly the same as if you obtained it via other means. And, as an added bonus, it requires no interaction and is encrypted. Note that on Windows the password is encrypted with the Data Protection API, so only the same user account on the same computer can decrypt the file.
Further Learning •
How to Encrypt Passwords in PowerShell
Don’t Use Invoke-Expression

Using the Invoke-Expression command can open up your code to code injection attacks. The Invoke-Expression command allows you to treat any string as executable code. This means that whatever expression you pass to Invoke-Expression, PowerShell will gladly execute it under whatever context it’s running in. Although executing the expression you intend works great, what happens when you open up the code to others? Perhaps you’re accepting input to a script from some external source. Maybe you have a script that takes input from a web form and does some processing that looks like the following:

[CmdletBinding()]
param(
    [Parameter()]
    [string]$FormInput
)
## Do something with $FormInput

You expect the value of FormInput to be some specific string that you can just execute. You consider yourself clever because you’re saving lines of code by not validating the value of FormInput, but at the risk of running malicious code.

[CmdletBinding()]
param(
    [Parameter()]
    [string]$FormInput
)
Invoke-Expression -Command $FormInput

Perhaps that web page is accidentally exposed to the Internet and someone puts this in the form that gets sent to your script:

Remove-Item -Path 'C:\' -Recurse -Force

You’ve got a bad day on your hands. By invoking expressions blindly and allowing PowerShell to run any code passed to it, you’re allowing any kind of code to run, which is a terrible practice!
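Instead of executing whatever string arrives, one safer pattern is to treat external input as data and map it to code you wrote yourself. The parameter values and command names below are hypothetical; the point is that only actions you defined in advance can ever run:

```powershell
[CmdletBinding()]
param(
    [Parameter()]
    ## Reject anything that isn't one of these known values
    [ValidateSet('GetStatus','GetVersion')]
    [string]$FormInput
)

## The input selects a predefined action; it is never executed as code
switch ($FormInput) {
    'GetStatus'  { Get-Service -Name 'SomeService' }
    'GetVersion' { $PSVersionTable.PSVersion }
}
```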
Further Learning •
Invoke-Expression Considered Harmful
Use PowerShell Constrained Language Mode

If you need to give junior users, employees in your company, or service applications the ability to run PowerShell commands, you need to ensure that access is least privilege. There’s no need to allow running commands others do not need and that may accidentally or purposefully introduce security issues. PowerShell has a mode called constrained language mode that allows you to provide access to a PowerShell environment but not allow access to all commands and modules. Constrained language mode allows you to granularly define activities as allowed and disallowed, giving you tight control over what can be done.

Maybe you’d like a junior admin to run some scripts or execute some commands on a server. But you don’t want them executing any ol’ command. You can enable constrained language mode just by setting a property value.

$ExecutionContext.SessionState.LanguageMode = "ConstrainedLanguage"

Once a PowerShell session is in constrained language mode, PowerShell does not allow you to execute certain commands or perform certain functions. You can see in Figure 12-5 an error message PowerShell will display when you hit one of these restrictions.
Figure 12-5. PowerShell constrained language mode preventing command execution
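A quick way to see the restriction in action: after setting the language mode, attempt something constrained language mode disallows, such as creating an arbitrary .NET object. (The specific type here is just one example of a disallowed call.)

```powershell
$ExecutionContext.SessionState.LanguageMode = 'ConstrainedLanguage'

## Confirm the session is now constrained
$ExecutionContext.SessionState.LanguageMode

## Arbitrary .NET types can no longer be instantiated; this now throws an error
New-Object -TypeName System.Net.WebClient
```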
Further Learning •
PowerShell Constrained Language Mode
CHAPTER 13
Stick to PowerShell

PowerShell is a forgiving scripting language, and that’s part of its appeal. When you’re working with PowerShell, you can borrow code from C#, .NET, or even COM objects and integrate it seamlessly into your code. This sounds great in theory, but when it comes to working on your code in a team of PowerShell developers, it’s best to stick to PowerShell. By straying from it, you could end up leaving your team confused, making your code less understandable, and really giving the next person who needs to edit your scripts a hard time.
Use Native PowerShell Where Possible

With PowerShell, you have a nearly limitless toolset. If you need to accomplish something you can’t find PowerShell commands for, you have the option to use COM objects, .NET classes, and so on. That being said, don’t immediately jump to using these “non-PowerShell” methods. When there’s an existing PowerShell way of doing something, use it! There’s a strong chance that it’ll process better, faster, or give you an output that’s easier to use in the end.

Let’s say you’re a seasoned C# developer in charge of writing a script for your team. Your team consists of system administrators and perhaps a junior dev here and there. No one quite understands C# as you do. You come across a need to read a file. Easy enough, you think. You know how
you’d do this in C# and you know you can do a lot of “C sharpy” things in PowerShell, so you write a line like this:

$text = [System.IO.File]::ReadAllText('C:\textfile.txt')
## Do some stuff with the string in $text here

You test the script, it reads the file as expected, and you move on. However, a few days go by and maybe someone else on your team opens up the script, or you decide to share it with the PowerShell community. Soon someone finds a bug, begins to troubleshoot the problem, and sees this line. Not having any programming knowledge at all, they are immediately stuck and must dive into Google to figure out what this means. By simply replacing the preceding line with Get-Content, this problem wouldn’t have happened. You are now reading this file the “PowerShell way,” making it much easier to understand for a programming layperson.

$text = Get-Content -Path 'C:\textfile.txt' -Raw
## Do some stuff with the string in $text here

Only revert to calling .NET methods rather than PowerShell commands if you require the highest performance possible or there is no PowerShell command available to do what you need.

Tip Source: https://twitter.com/JimMoyle
Further Learning •
Comparing WMI and Native PowerShell
Use Approved, Standard Function Names

PowerShell allows you to assign any name to a function you’d like. You can call a function foo, do-thing, or Get-Thing. PowerShell doesn’t care as long as you call that function by the assigned name. But just because you can assign whatever name you’d like doesn’t mean you should. PowerShell has a preferred naming standard you should follow for all of your functions. This standard specifies that all commands should consist of a verb, a dash, and a singular noun, such as Get-Thing, Start-Service, Invoke-Command, and so on.

You are free to use whatever noun makes sense, but you should only use “approved verbs” – any verb you see when you run the Get-Verb cmdlet. Use any of these verbs that make sense for your situation. Once you find a verb that fits, assign a singular noun. Don’t assign a plural noun like Get-Things. Although your function may, in fact, get things (plural), does it always? When you use the pipeline, your function isn’t technically getting multiple things; it’s processing them one at a time. Always use a singular noun to provide an accurate name of what your function does.
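You can check a candidate verb against the approved list right from the console. For example:

```powershell
## List every approved verb along with its group
Get-Verb | Sort-Object Verb

## Check a specific verb; no output means it isn't approved
if (Get-Verb -Verb 'Fetch') {
    "'Fetch' is approved"
} else {
    "'Fetch' is not approved; use 'Get' instead"
}
```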
Further Learning •
Approved Verbs for PowerShell Commands
CHAPTER 14
Build Tools

As you begin to write more PowerShell, you’ll probably find that you keep reinventing the wheel. This is natural. You keep repeating yourself over and over again because you’re not building upon the code you had previously created. You’re not building script libraries, modules, and tools. You’re essentially creating disposable code. Stop that! Instead of wasting time writing a brand new script from scratch, write code in a way you can reuse it. Build tools you can then put to use to build larger automation frameworks.
Think Ahead and Build Abstraction “Layers”

This tip isn’t necessarily PowerShell specific, but it’s one of the most important tips in this entire book. When you build reusable tools in PowerShell, always think and build with abstraction in mind. What’s abstraction? Abstraction is a term software developers are familiar with, but you may not be. Abstraction means writing code that interacts with something, just not directly. It means writing code in “layers” that refer back to one another, eventually building an entire framework around a concept. Building in abstraction layers means you’re coding not for the now but for the future. You’re writing code that can be reused across many different use cases in various situations without actually changing the code; you just call it differently.
Think about the code you write now. The code may work in the current context, but will it work if you run it from another computer, another environment, or even move it to the cloud? Does it need to have that kind of resiliency? If so, and your code breaks when it’s moved out of its pristine, current environment, you’re probably not building in enough abstraction.

I know the term abstraction is a bit vague, so let’s put some context and examples behind it when you’re building tools in PowerShell. Let’s say you’ve built some scripts that start, stop, and manage Hyper-V virtual machines (VMs). Your team manages one of the company’s hypervisors which, in this case, is Hyper-V. Since the VMs are Hyper-V, you decide to use the HyperV PowerShell module. This module is solely focused on Hyper-V VMs. Perhaps you have some code that looks like the following:

#requires -Module HyperV
[CmdletBinding()]
param(
    [Parameter(Mandatory)]
    [string]$VmName
)

## Use the Start-Vm cmdlet in the HyperV module to start a VM
Start-Vm -Name $VmName

## Do some stuff to the VM here

## Use the Stop-Vm cmdlet in the HyperV module to stop the VM
Stop-Vm -Name $VmName

This script works just fine. You’ve already created a parameter, which is a great first start. This parameter allows you to run this script against many different VMs without changing any code inside of the script. Good job! With this script, you’ve already created one layer of abstraction. Instead of running Start-Vm and Stop-Vm directly on the PowerShell
console, you decided you needed a script to do those things and run some other code in between. You created it and now, instead of typing Start-VM and Stop-VM at the console, you’re simply running .\Set-MyVM.ps1 or something like that. You’re not calling the Start-VM and Stop-VM cmdlets directly. You’ve abstracted away the need to do that and instead just interface with the script and not the cmdlets themselves.

One day, your boss comes along and tells you your team has now inherited management of the company’s VmWare cluster. Uh oh! Now you think you need to create an entirely new set of scripts or a module to manage VmWare VMs. You might have to, but you could decide to integrate VmWare functionality into your current solution. A VM is a VM, right? VmWare VMs and Hyper-V VMs are similar; they just run on different hypervisors. What are the commonalities between them? You can start and stop both kinds of VMs.

Instead of creating another script, why not integrate them into your current generic Set-MyVM.ps1 script? One way to do that would be to add a parameter called Hypervisor and build the logic needed to determine whether your script should run some VmWare code or HyperV code. Notice that the following code snippet has a required module called VmWare. You’ll need to ensure your script now supports both scenarios. Also, it now has some conditional logic in the form of a switch statement that does different things based on the type of hypervisor chosen.

#requires -Module HyperV, VmWare
[CmdletBinding()]
param(
    [Parameter(Mandatory)]
    [string]$VmName,

    [Parameter()]
    [ValidateSet('HyperV','VmWare')]
    [string]$Hypervisor
)
switch ($Hypervisor) {
    'HyperV' {
        ## Use the Start-Vm cmdlet in the HyperV module to start a VM
        Start-Vm -Name $VmName

        ## Do some stuff to the VM here

        ## Use the Stop-Vm cmdlet in the HyperV module to stop the VM
        Stop-Vm -Name $VmName
        break
    }
    'VmWare' {
        ## Use whatever command the VmWare module has in it to start the VM
        Start-VmWareVm -Name $VmName

        ## do stuff to the VmWare VM

        ## Use whatever command the VmWare module has in it to stop the VM
        Stop-VmWareVm -Name $VmName
        break
    }
    default {
        "The hypervisor you passed [$_] is not supported"
    }
}

Alternatively, and an altogether better approach if possible, you could dynamically figure out what type of VM was passed and then decide which code to run based on the result. See in
the following an example of how to do that. Notice that now you leave the decision-making up to the code (which is always better). The script now automatically knows what hypervisor the VM is running on.

#requires -Module HyperV, VmWare
[CmdletBinding()]
param(
    [Parameter(Mandatory)]
    [string]$VmName
)

function Get-Hypervisor {
    ## simple helper function to determine the hypervisor
    param(
        [Parameter()]
        [string]$VmName
    )
    ## Some code here to figure out what type of hypervisor this VM is running on
    ## Return either 'HyperV' or 'VmWare'
}

$hypervisor = Get-Hypervisor -VmName $VmName

switch ($hypervisor) {
    'HyperV' {
        ## Use the Start-Vm cmdlet in the HyperV module to start a VM
        Start-Vm -Name $VmName

        ## Do some stuff to the VM here
        ## Use the Stop-Vm cmdlet in the HyperV module to stop the VM
        Stop-Vm -Name $VmName
        break
    }
    'VmWare' {
        ## Use whatever command the VmWare module has in it to start the VM
        Start-VmWareVm -Name $VmName

        ## do stuff to the VmWare VM

        ## Use whatever command the VmWare module has in it to stop the VM
        Stop-VmWareVm -Name $VmName
        break
    }
    default {
        "The hypervisor you passed [$_] is not supported"
    }
}

You’ve now discovered the commonalities between Hyper-V and VmWare VMs and have built a solution to manage them both! You’ve created a layer of abstraction that allows you to manage two different VM types with a single script.
Further Learning •
Building Advanced PowerShell Functions and Modules
Wrap Command-Line Utilities in Functions

If you have to use a command-line utility, wrap it in a PowerShell function. If it returns output, parse the output and make it return a pscustomobject. This allows a CLI utility to act like any other PowerShell command. Once you’ve abstracted away all of the “CLIness,” the command can be easily integrated with other PowerShell tools.

This tip is great for standardization. When building tools, it’s important to standardize everything you can. It makes your code cleaner and easier to understand and work with. A great example of this is the command-line utility netstat, a utility that tells you all of the ports that are closed, listening, and so on, on your local Windows computer. This utility, like all command-line utilities, returns a bunch of text. This text has no structure to it, as you can see in Figure 14-1. You can’t pipe this output to another command or check on object properties.
Figure 14-1. No object structure to netstat
To build reusable tools, all commands, whether PowerShell cmdlets or command-line tools, should return similar if not the same kind of object. Check out this script called Get-LocalPort.ps1. You’ll notice this script uses various methods to parse the output from netstat. It then returns a standard PowerShell pscustomobject for each entry, as you can see in Figure 14-2.
Figure 14-2. The netstat command-line output in objects

Once you have a script or function returning a PowerShell object, you can treat that command’s output just like you would any other object.
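As a simplified sketch of the same idea (not the full Get-LocalPort.ps1 script), the function below wraps netstat and emits one pscustomobject per connection. The function name and the exact parsing are illustrative; real netstat output needs more careful handling:

```powershell
function Get-NetstatEntry {
    [CmdletBinding()]
    param()

    ## Skip netstat's header lines, then split each row on whitespace
    netstat -an | Select-Object -Skip 4 | ForEach-Object {
        $fields = $_.Trim() -split '\s+'
        [pscustomobject]@{
            Protocol       = $fields[0]
            LocalAddress   = $fields[1]
            ForeignAddress = $fields[2]
            State          = if ($fields.Count -ge 4) { $fields[3] } else { $null }
        }
    }
}

## The output now pipes like any other PowerShell command
Get-NetstatEntry | Where-Object { $_.State -eq 'LISTENING' }
```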
Further Learning •
Get-LocalPort: netstat PowerShellified and Text Parsing Shenanigans
Make Module Functions Return Common Object Types

If you have different PowerShell functions that work together, be sure they always return the same type of object. This object type is usually a pscustomobject because it’s generic and easy to create. If functions are all returning a pscustomobject type, you’ll know what to expect when each returns information. It also makes troubleshooting easier.
Perhaps you have a module with a couple of functions like in the following code snippet. What those functions do is irrelevant. Regardless of what you do inside of each function, ensure each function that returns an object to the pipeline returns the same type of object.

function Get-Thing {
    [CmdletBinding()]
    param()

    ## Do some stuff

    [pscustomobject]@{
        'Name' = 'XXXX'
        'Type' = 'XXXX'
    }
}

function Get-OtherThing {
    [CmdletBinding()]
    param()

    [pscustomobject]@{
        'Name'      = 'XXXX'
        'Type'      = 'XXXX'
        'Property1' = 'Value1'
        'Property2' = 'Value2'
    }
}

When you or someone else now runs Get-Thing, they’ll then expect Get-OtherThing to return the same type of object. The properties may be different because they are, after all, different functions that perform different tasks, but you should always strive to return objects of the same type and as similar as possible.
Further Learning •
PowerShell: Everything You Wanted to Know About PSCustomObject
Ensure Module Functions Cover All the Verbs

If you have an immediate need to accomplish a task like creating a user account, removing a file, or modifying a database record, don’t just create a single function. Instead, create four functions that cover the complete life cycle of that object – New, Set, Get, and Remove.

For example, perhaps you’re creating a module for a monitoring appliance that has an API. You decide on an “object” noun of Monitor. If you need to create a new monitor in an automation script, don’t just create the New-Monitor function. Instead, also create Get-Monitor, Set-Monitor, and Remove-Monitor to ensure you have support for the monitor’s full life cycle.

Even if you just have the need to create a new monitor at this point, always at least create a Get-Monitor function. You’re inevitably going to need to confirm that the monitor you think you just created actually exists, or ensure the monitor does not exist before attempting to create it. If you’re tight on time, create the function you immediately need to perform some change on an object like New, Set, or Remove. Then, once you have built that function, build the Get function alongside it. Eventually, you should go back and build out the entire set of functions, which ensures you have a tool that can manage whatever thing you’re working with through its entire life cycle.

Tip Source: https://twitter.com/JimMoyle
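A skeleton of what that full set might look like for the hypothetical Monitor noun (the function bodies are placeholders for whatever API calls your appliance requires):

```powershell
function Get-Monitor {
    [CmdletBinding()]
    param([Parameter()][string]$Name)
    ## Query the appliance's API for one or all monitors here
}

function New-Monitor {
    [CmdletBinding()]
    param([Parameter(Mandatory)][string]$Name)
    ## Use Get-Monitor to ensure the monitor doesn't already exist
    if (Get-Monitor -Name $Name) {
        throw "Monitor [$Name] already exists."
    }
    ## API call to create the monitor here
}

function Set-Monitor {
    [CmdletBinding()]
    param([Parameter(Mandatory)][string]$Name)
    ## API call to modify an existing monitor here
}

function Remove-Monitor {
    [CmdletBinding()]
    param([Parameter(Mandatory)][string]$Name)
    ## API call to delete the monitor here
}
```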
Further Learning •
How to Design a PowerShell Module
CHAPTER 15
Return Standardized, Informational Output

Have you ever run a script or function you received from someone else and wondered if it worked? It ran without showing an error, but then again, it returned nothing at all! You don’t have a clue what it did, nor could you see its progress as it was executing. Because it didn’t return any object to the pipeline, you also can’t use its outcome in other commands. You are forced to write more code to check whether it did its job, which results in wasted time and added complexity. In this chapter, you’re going to learn some tips on what to return to your user, how often, and how to return output in many different ways.
Use Progress Bars Wisely

PowerShell has a handy cmdlet called Write-Progress. This cmdlet displays a progress bar in the PowerShell console. It’s never a good idea to leave the user of your script staring at a blinking cursor. The user has no idea if the task should take 10 seconds or 10 minutes. They also might think the script has halted or crashed in some manner. It’s important you provide some visual cues as to what’s going on. Use Write-Progress for any task that takes more than 10 seconds or so; the importance increases with time. For more complicated scripts with many transactions, be sure to use Write-Progress at a higher level.
Use the progress bar as the main indicator of progress and leave the smaller steps to a verbose message, for example. Perhaps you have a script that connects to a list of servers and parses through a bunch of files on each server. Each server has a few thousand files that take 10–20 seconds each to read and parse. Take a look here for an example of what this script might look like:

## Pull server computer names from AD
$serverNames = Get-AdComputer -Filter '*' -SearchBase 'OU=Servers,DC=company,DC=local' | Select-Object -ExpandProperty Name

foreach ($name in $serverNames) {
    foreach ($file in Get-ChildItem -Path "\\$name\c$\SomeFolderWithABunchOfFiles" -File) {
        ## Read and do stuff to each one of these files
    }
}

In the preceding example, you essentially have two “progress layers,” each represented with a foreach loop – processing a server at a time and processing a file at a time. Try to add a progress bar at the “higher-level layer,” or processing the server. You wouldn’t want tens of thousands of file paths flying by on the progress bar. Instead, a message saying “Processing files on ABC server” that continually updates would be better.

## Pull server computer names from AD
$serverNames = Get-AdComputer -Filter '*' -SearchBase 'OU=Servers,DC=company,DC=local' | Select-Object -ExpandProperty Name

## Process each server name
for ($i = 0; $i -lt $serverNames.Count; $i++) {
    Write-Progress -Activity 'Processing files' -Status "Server [$($serverNames[$i])]" -PercentComplete (($i / $serverNames.Count) * 100)
Chapter 15
Return Standardized, Informational Output
## Process each file on the server foreach ($file in Get-ChildItem -Path "\\$($serverNames[$i])\c$\SomeFolderWithABunchOfFiles" -File) { ## Read and do stuff to each one of these files } } Now if you run the example script, you’d see a progress bar in Figure 15-1 that just displays each server that’s being processed. Tip Source: https://twitter.com/brentblawat and https:// twitter.com/danielclasson
Figure 15-1. Using Write-Progress
Further Learning
• Add a Progress Bar to Your PowerShell Script
Leave the Format Cmdlets to the Console

For any script that returns some kind of output, a well-developed script contains two "layers": processing and presentation. The processing layer contains all of the code necessary to perform whatever task is at hand. The presentation layer displays what the script outputs to the console. Never combine the two. If you need to change what the output looks like in the console, do it outside the script.

For example, never use a Format-* cmdlet inside of a script. You should treat scripts like reusable tools. Unless you are 100% certain that a script will never need to send output to another script or function, don't attempt to format the output in the script. Instead, pipe objects from the script into a formatting command at the console or perhaps another "formatting" script.

Perhaps you have a script that reads some files. To make the output easier to look at, you decide to pipe the output of Get-ChildItem to Format-Table.

## readsomefiles.ps1
Get-ChildItem -Path 'somepath\here' | Format-Table

The output looks fine, but you then need to perform some action on each of those files; maybe it's removing them. Knowing that you can pipe the output of Get-ChildItem to Remove-Item, you add Remove-Item onto the end.

## readsomefiles.ps1
Get-ChildItem -Path 'somepath\here' | Format-Table | Remove-Item

You'll soon find out that PowerShell doesn't like that at all. Why? Because Format-Table, just like any of the Format-* cmdlets, does not return objects to the pipeline. Instead of using a formatting cmdlet inside of the script, remove the Format-Table reference from the script and use the cmdlet in the console instead.

PS> .\readsomefiles.ps1 | Format-Table
Further Learning
• Using Format Commands to Change Output View
Use Write-Verbose

Verbose messaging comes in handy in many different scenarios, from troubleshooting and script progress indication to debugging. Use the Write-Verbose cmdlet as much as possible to return granular information about what's happening in a script. Use verbose messages to display variable values at runtime, indicate which path code takes in a conditional statement like if/then, or indicate when a function starts and stops.

There are no defined rules for when to return a verbose message. Since verbose messaging is off by default, you don't need to worry about spewing text to the console. It's better to have more information than less when it comes to verbose messaging.

Perhaps you have a script that gathers ACLs for DNS records stored in Active Directory (AD).

#requires -Module ActiveDirectory
[CmdletBinding()]
param(
    [Parameter(Mandatory)]
    [string[]]$DnsHostname,

    [Parameter()]
    [string]$DomainName = (Get-ADDomain).Forest
)

$Path = "AD:\DC=$DomainName,CN=MicrosoftDNS,DC=ForestDnsZones,DC=$($DomainName.Split('.') -join ',DC=')"
foreach ($Record in (Get-ChildItem -Path $Path)) {
    if ($DnsHostname -contains $Record.Name) {
        Get-Acl -Path "ActiveDirectory:://RootDSE/$($Record.DistinguishedName)"
    }
}

When you run this script, it doesn't return any messaging letting you know what's going on. It only returns ACLs matching the DnsHostname parameter. You could make this script better by adding some verbose messages. In the following version, verbose messaging has been added. To see these verbose messages when you run the script, use the Verbose parameter like .\Get-DnsAdAcl.ps1 -Verbose.

#requires -Module ActiveDirectory
[CmdletBinding()]
param(
    [Parameter(Mandatory)]
    [string[]]$DnsHostname,

    [Parameter()]
    [string]$DomainName = (Get-ADDomain).Forest
)

Write-Verbose -Message 'Starting script...'
$Path = "AD:\DC=$DomainName,CN=MicrosoftDNS,DC=ForestDnsZones,DC=$($DomainName.Split('.') -join ',DC=')"
foreach ($Record in (Get-ChildItem -Path $Path)) {
    if ($DnsHostname -contains $Record.Name) {
        Write-Verbose -Message "Getting ACL for [$($Record.Name)]..."
        Get-Acl -Path "ActiveDirectory:://RootDSE/$($Record.DistinguishedName)"
        Write-Verbose -Message "Finished getting ACL for [$($Record.Name)]."
    }
}
Write-Verbose -Message 'Script ending...'

Tip Source: https://twitter.com/UTBlizzard
Further Learning
• Write-Verbose Help Cmdlet
Use Write-Information

PowerShell has six streams. You can think about three of those streams in terms of verbosity: Debug, Verbose, and Information. The debug stream is at the bottom and should return lots of granular information about code activity, while the information stream should contain high-level messages.

Use the Write-Information cmdlet to display top-level information similar to what would be shown in a progress bar. The user doesn't need to see variables, if/then logic, or any of that. Use Write-Information to display basic, high-level status messages about script activity.

Using the example from the "Use Write-Verbose" tip, you could improve it a bit by using Write-Information instead of Write-Verbose when the script starts and stops.

#requires -Module ActiveDirectory
[CmdletBinding()]
param(
    [Parameter(Mandatory)]
    [string[]]$DnsHostname,

    [Parameter()]
    [string]$DomainName = (Get-ADDomain).Forest
)

Write-Information -MessageData 'Starting script...'
$Path = "AD:\DC=$DomainName,CN=MicrosoftDNS,DC=ForestDnsZones,DC=$($DomainName.Split('.') -join ',DC=')"
foreach ($Record in (Get-ChildItem -Path $Path)) {
    if ($DnsHostname -contains $Record.Name) {
        Write-Verbose -Message "Getting ACL for [$($Record.Name)]..."
        Get-Acl -Path "ActiveDirectory:://RootDSE/$($Record.DistinguishedName)"
        Write-Verbose -Message "Finished getting ACL for [$($Record.Name)]."
    }
}
Write-Information -MessageData 'Script ending...'

When and where to use Write-Information vs. Write-Verbose is completely up to you; the use cases vary wildly. Just remember to use Write-Information to display higher-level activity and Write-Verbose to display more granular activity.
Further Learning
• Welcome to the PowerShell Information Stream
Ensure a Command Returns One Type of Object

Nothing will confuse a PowerShell developer more than a script or function that returns different types of objects. Keep it simple and ensure that, regardless of the circumstances, the command returns only one type. When a script or function returns different types of objects based on various scenarios, it becomes hard to write code that takes input from that command.

Perhaps you're writing a server inventory script. You have a script that queries a remote server and returns information like service status and user profiles on that server.

$servers = ('SRV1','SRV2','SRV3','SRV4')
foreach ($server in $servers) {
    Get-Service -ComputerName $server
    Get-ChildItem -Path "\\$server\c$\Users" -Directory
}

The preceding script returns two different types of objects: a System.ServiceProcess.ServiceController object via Get-Service and a System.IO.DirectoryInfo object via Get-ChildItem. Even though you want to see the output of each cmdlet, do not leave the script as is. Instead, consolidate these two types of output into your own object type, preferably a PSCustomObject. The following example returns one PSCustomObject per server:

$servers = ('SRV1','SRV2','SRV3','SRV4')
foreach ($server in $servers) {
    $services = Get-Service -ComputerName $server
    $userProfiles = Get-ChildItem -Path "\\$server\c$\Users" -Directory

    [pscustomobject]@{
        Services     = $services
        UserProfiles = $userProfiles
    }
}

Allowing your scripts and functions to return only one object type forces standardization. It also makes it easier to chain functions together. If you can expect a certain object type to be returned by each function in a module, for example, you can write other functions that accept that input much more easily than if you had to code around many different types.
Further Learning
• About Functions OutputTypeAttribute
Only Return Necessary Information to the Pipeline

If a command returns information you have no use for, don't allow it to return objects to the pipeline. Instead, assign the output to $null or pipe the output to the Out-Null cmdlet. If the command should return the output sometimes but not all the time, create a PassThru parameter. By creating a PassThru parameter, you give the user the power to decide whether to return information or not.

Maybe you have a script that, as part of it, creates a directory using New-Item. Some cmdlets that do not have the Get verb return unnecessary information; the New-Item cmdlet is one of them.

New-Item -Path 'C:\path\to\folder' -ItemType Directory

You can see in Figure 15-2 that when you run New-Item, it returns an object. If you just need to create a directory, you probably don't need that object. If you left this command in your script, the script would return this object. Ensure that doesn't happen.

Figure 15-2. New-Item returning an unnecessary object

Instead of allowing cmdlets and other functions to place unnecessary objects on the pipeline, send the output to $null. Assigning output to $null essentially removes it entirely and prevents anything from going to the pipeline.

$null = New-Item -Path 'C:\path\to\folder' -ItemType Directory

You can also pipe output to Out-Null, but I prefer assigning output to the $null variable. Why? Because when working with large collections, you should avoid the pipeline entirely where possible for performance reasons.

Tip Source: https://twitter.com/JimMoyle
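As a sketch of the PassThru pattern mentioned above (the function name and parameter set here are hypothetical, invented for illustration), a function can suppress output by default and return the created object only when the caller asks for it:

```
function New-AppFolder {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory)]
        [string]$Path,

        ## Opt-in switch: return the created folder object only when requested
        [Parameter()]
        [switch]$PassThru
    )

    $folder = New-Item -Path $Path -ItemType Directory

    if ($PassThru.IsPresent) {
        $folder   ## caller asked for the object
    }
    ## otherwise, nothing goes to the pipeline
}

## Returns nothing:
New-AppFolder -Path 'C:\path\to\folder'

## Returns the DirectoryInfo object:
New-AppFolder -Path 'C:\path\to\folder2' -PassThru
```

This mirrors how many built-in cmdlets (Set-*, New-* in some modules) behave when they expose a PassThru switch.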
Further Learning
• The PassThru Parameter: Gimme Output
CHAPTER 16
Build Scripts for Speed

Although this chapter may seem to conflict with earlier tips about not focusing purely on performance, there's a fine line to walk. On the one hand, you don't need to get bogged down shaving microseconds off runtime. On the other hand, you shouldn't completely disregard script performance. There is a gray area you need to stay within to ensure a well-built PowerShell script.
D on’t Use Write-Host in Bulk Although some would tell you never to use the Write-Host cmdlet, it still has its place. But, with the functionality, it brings also a small performance hit. Write-Host does nothing “functional.” The cmdlet outputs text to the PowerShell console. Don’t add Write-Host references in your scripts without thought. For example, don’t put Write-Host references in a loop with a million items in it. You’ll never read all of that information, and you’re slowing down the script unnecessarily. If you must write information to the PowerShell console, use [Console]::WriteLine() instead. Tip Source: https://twitter.com/brentblawat
Further Learning
• PowerShell Performance: Write-Host
Don’t Use the Pipeline The PowerShell pipeline, although a wonderful feature, is slow. The pipeline must perform the magic behind the scenes to bind the output of one command to the input of another command. All of that magic is overhead that takes time to process. You can see in the following an example of the pipeline’s speed. Using the foreach method on an array of 1,000,000 items is three times as fast as the pipeline.
The pipeline simply has a lot more going on behind the scenes. When the pipeline isn’t necessary, don’t use it.
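A minimal comparison sketch (exact timings will vary by machine and PowerShell version):

```
$array = 1..1000000

## Pipeline: ForEach-Object binds each item as it streams through
$pipeline = Measure-Command {
    $array | ForEach-Object { $_ * 2 }
}

## No pipeline: the foreach statement iterates the array directly
$noPipeline = Measure-Command {
    foreach ($item in $array) { $item * 2 }
}

"Pipeline:    $($pipeline.TotalMilliseconds) ms"
"No pipeline: $($noPipeline.TotalMilliseconds) ms"
```

Run both a few times; the pipeline version consistently pays the parameter-binding overhead on every item.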
Further Learning
• Quick Hits: Speed Up Some of Your Commands by Avoiding the Pipeline
Use the foreach Statement in PowerShell Core

PowerShell has a few different ways to iterate through collections. The fastest is the foreach statement in PowerShell Core. The speed of the foreach statement varies across versions, but in PowerShell Core, the PowerShell team has really made it fly. Consider iterating through an array of 1,000,000 strings with the foreach statement vs. the foreach() method on the collection. Both approaches perform the exact same function.
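A sketch of that comparison (the string array is illustrative, and timings depend on your machine and PowerShell version):

```
$array = foreach ($i in 1..1000000) { "string$i" }

## foreach statement
$statement = Measure-Command {
    foreach ($item in $array) { $item }
}

## .foreach() method on the collection
$method = Measure-Command {
    $array.foreach({ $_ })
}

"foreach statement: $($statement.TotalMilliseconds) ms"
".foreach() method: $($method.TotalMilliseconds) ms"
```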
With an array that size, the foreach() method can take four times as long! When you need to process large collections, consider using the foreach statement rather than the foreach() method. You could alternatively use the ForEach-Object cmdlet, but do so without using the pipeline:

ForEach-Object -InputObject $array -Process { $_ }
Further Learning
• PowerShell V4: Where() and ForEach() Methods
Use Parallel Processing

Leveraging PowerShell background jobs and .NET runspaces, you can significantly speed up processing through parallelization. Background jobs are a native PowerShell feature that allows you to run code in the background as a job. A runspace is a similar .NET concept but requires a deeper understanding of .NET. Luckily, the PoshRSJob PowerShell module makes runspaces easier to work with.

Let's say you have a script that attempts to connect to hundreds of servers. These servers can be in all different states: offline completely, misconfigured so that no connection succeeds, or connectable. If you process each connection in serial, you'll have to wait for each of the offline or misconfigured servers to time out before attempting to connect to the next. Instead, you could put each connection attempt in a background job, which starts them all at nearly the same time, and then wait on all of them to finish.

Maybe you have a text file full of server names with one server name per line like this:

SRV1
SRV2
SRV3
....

You then read these server names with Get-Content -Path C:\Servers.txt. Connecting to each of these servers in serial may look like the following. This code snippet uses the Invoke-Command command's ability to process multiple computers at once by passing an array of server names to the ComputerName parameter:

$servers = Get-Content -Path C:\Servers.txt
Invoke-Command -ComputerName $servers -ScriptBlock {
    ## Do something on the server here
}

The ability to process many servers at once is handy, but the default behavior is serial. Instead, you can use the AsJob parameter on Invoke-Command to run each instance in an immediately created background job while it keeps processing more servers.

$servers = Get-Content -Path C:\Servers.txt
Invoke-Command -ComputerName $servers -AsJob -ScriptBlock {
    ## Do something on the server here
}

When Invoke-Command has finished starting all of the background jobs, it releases control of the console back to you. At that point, you can check on the status of the jobs using the Get-Job command and retrieve the output using the Receive-Job command.
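A sketch of checking on and collecting job output (job IDs and names will differ in your session):

```
## List all jobs and their states (Running, Completed, Failed, ...)
Get-Job

## Block until every job finishes, then collect all of the output
$results = Get-Job | Wait-Job | Receive-Job

## Clean up the finished jobs when you're done with them
Get-Job | Remove-Job
```

Note that Receive-Job drains a job's output by default; use its Keep parameter if you want to read the output again later.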
Further Learning
• Parallel Processing with PowerShell
Use the .NET StreamReader Class When Reading Large Text Files

The Get-Content PowerShell cmdlet works well for most files, but if you've got a large multi-hundred-megabyte or multi-gigabyte file, drop down to .NET. Perhaps you have a large file called C:\MyHugeFile.txt that is several gigabytes. You need to read the entire file, so you immediately reach for the cmdlet you're most familiar with: Get-Content.

Get-Content -Path C:\MyHugeFile.txt

You'll find that Get-Content takes a long time to read the entire file. Instead, consider the System.IO.StreamReader .NET class. Create an instance of System.IO.StreamReader for the file you wish to read. Then, using the Peek() method to check for remaining content, read each line with the ReadLine() method inside a while loop, as shown in the following:

$sr = New-Object -Type System.IO.StreamReader -ArgumentList 'C:\MyHugeFile.txt'
while ($sr.Peek() -ge 0) {
    $sr.ReadLine()
}
$sr.Dispose() ## close the file handle when finished

Using the Get-Content cmdlet is much simpler but much slower. If you need speed, though, the StreamReader approach is much faster.
Further Learning
• PERF-02 Consider Trade-offs Between Performance and Readability
CHAPTER 17
Use Version Control

Have you ever overwritten an important PowerShell script and wished you could go back 5 minutes in time? Has a coworker ever edited one of your scripts and broken an important server? These questions and more can be answered and resolved with one practice: version control. Version control, or source control, allows you to control changes to code rather than making changes on a whim without much thought. Version control has more benefits than you can count over keeping scripts stored on some file share somewhere and making backups by renaming scripts to .bak. Version control can truly be a game-changer once you and your team learn how to leverage it properly.
Create Repositories Based on a Purpose

Although there are many different strategies for creating repositories (repos), one of the best is to create a repo per project. Perhaps you have a set of PowerShell scripts to manage things in Active Directory (AD) and a script or two that helps you build virtual machines (VMs). Rather than creating one big repo called Scripts and throwing them both in there, break that out into two repos called ADManager and VMManager, for instance. Assign a label to the effort; put a name on it. Associating a repo with a specific purpose not only helps you keep scripts separated for manageability but also encourages tool-building. When you start labeling your "buckets" with tool names like software developers do, it puts you into the mindset of reuse instead of just dropping another script into a generic folder somewhere.
Commit Code Changes Based on Small Goals

Regardless of what version control tool you use, always try to commit code to your repository (repo) of choice based on a small goal. What's a "small goal"? Think about the last time you made a change to one of your existing scripts that took a couple of hours. What was the purpose? What was the desired outcome? That's probably a single commit.

Consider, for example, that you have a script that installs some software for a particular application. During your testing, you find that the script fails when it encounters a certain use case. The code to account for this use case will take 1-2 hours at most. You make the changes, test locally on your computer, and then create a commit with an informational message like "fixed that issue where X happens when Y."

You should commit changes to your repo several times a day. If you don't, your working code isn't "backed up" anywhere. If your local repo is wiped, you will lose that code forever.
Create a Branch Based on a Feature

If a commit is the code to fix a single edge case that may take you an hour, a branch is a group of these commits. Think of a branch as a single feature, or perhaps a fix for a problem that may affect other areas of the code.

Maybe you have a module and a set of scripts that all coordinate with each other to manage a large line-of-business application called PSAcmeApp. You've spent many months on this project, and it helps automate and manage many aspects of this application. Let's say your manager comes to you and tells you they're getting ready to roll out the next major version of this application, but you still need to maintain support for v1. You have no idea how this new version will work with your project.

A branch would be a good strategy here. You could create a branch called v2 off of your main branch and perform all of the work necessary for v2, knowing you're not affecting the stable v1 version. Once you're satisfied, you can merge your v2 code back with v1, enabling your tool to support both versions.
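Assuming Git as the version control tool (the advice itself is tool-agnostic, and the repository setup and branch name here are illustrative), the branching workflow sketched above looks like this at the command line:

```shell
# Start from a repo (an empty one is initialized here purely for illustration)
cd "$(mktemp -d)"
git init -q
git config user.email "you@example.com"
git config user.name "Your Name"
git commit -q --allow-empty -m "v1 baseline"

# Branch off to work on v2 support without touching stable v1 code
git checkout -q -b v2
git branch --show-current
```

Commits made on the v2 branch stay isolated from the original branch until you deliberately merge them back (for example, with git merge v2 from your main branch).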
Use a Distributed Version Control Service

Version control, in and of itself, can be an independent endeavor: a single person commits changes, rolls back changes, and manages code by themself. Working alone, you may not need distributed version control. It's when you begin to work on a team, need to share code, or integrate your code with other systems that a distributed service becomes necessary.

Distributed version control services like GitHub are extremely popular, and for good reason. Once you sync your local repos with GitHub, you can easily share your code with others online, collaborate with teammates on a shared project, integrate with other tools like continuous integration or various testing tools, and more. Using a distributed version control service is the next step in PowerShell development and should be the default for every new script and tool you create.
CHAPTER 18
Build and Run Tests

If you're writing PowerShell for personal, random reasons to save yourself some time, writing tests for your scripts probably isn't worth it. But if you're writing PowerShell for a business in a production environment, testing is a requirement. Tests, especially automated tests, are quality control for your code. Tests not only ensure you publish code that won't nuke your production environment; they also help you trust your code. They open up an entirely new, more professional way of PowerShell development.
Learn the Pester Basics

The dominant testing framework for PowerShell is Pester. If you're developing PowerShell scripts for an organization and don't have the first clue about Pester, stop right now and check out The Pester Book. Pester is a PowerShell module that allows you to build unit and integration tests for your code. You can also create infrastructure tests with it to run after your code runs to ensure it made the appropriate changes.

At its most basic level, a Pester test is just a specially designed PowerShell script. Pester tests are built using a domain-specific language (DSL). A DSL is a specific way of writing code for a particular area, testing in this case.

If you have Pester installed (and you will if you are on any modern Windows operating system), you can create a simple tests file right now. Open up your favorite code editor and insert the following code snippet:

describe 'My computer tests' {
    it 'my computer name is what I expect it to be' {
        $env:COMPUTERNAME | should be 'PSGOD'
    }
}

Save that script as mycomputer.tests.ps1 in ~\Tests, for example. Now run Pester and point it at that file. You'll see that the test runs and compares your actual computer name with the one you defined.

Invoke-Pester -Path '~\Tests\mycomputer.tests.ps1'

Pester is all about comparing what you expect something to be vs. what it actually is.
Further Learning
• The Pester Book
Leverage Infrastructure Tests

Pester is a unit-testing framework but has been adapted to allow you to write infrastructure tests. Infrastructure tests give you two significant benefits:

• Confirming the environment is how you'd expect
• Ensuring the scripts you write made the changes you expect

By writing infrastructure tests with Pester, you can get a bird's-eye view of your entire environment boiled down to simple red/green checks. Once the tests are written, they can be executed at any time.

Pester infrastructure tests also allow you to verify that the changes your scripts are supposed to make to your environment actually happen. When's the last time you ran a script, it didn't return any errors, but it didn't change what you wanted it to? It happens all the time. If you have a set of tests to run before and after your script runs, you can easily see whether the changes you expected actually happened.

You can either build your own infrastructure tests or use existing projects. Two open source projects I can recommend are PoshSpec and the Operation Validation Framework.
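A minimal sketch of what an infrastructure test looks like (the service name and paths are illustrative, and the assertion syntax matches the older Pester style used elsewhere in this book):

```
describe 'Web server configuration' {

    it 'the W3SVC service is running' {
        (Get-Service -Name 'W3SVC').Status | should be 'Running'
    }

    it 'the website root folder exists' {
        Test-Path -Path 'C:\inetpub\wwwroot' | should be $true
    }
}
```

Run a file of checks like this before and after a deployment script, and the red/green results tell you immediately whether the environment ended up in the state you expected.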
Further Learning
• How I Learned Pester by Infrastructure Testing a Domain Controller Build
Automate Pester Tests

Once you've learned about Pester, created some tests, and can run them on demand, it's time to automate them. Automating Pester tests consists of building some way to automatically run tests once a change is made to code. This step is most commonly implemented in a continuous integration/continuous delivery (CI/CD) pipeline.

To automate tests upon code change, you'll need, at a minimum, some version control like Git. You'll then need an automation engine to detect when a change has occurred in version control and kick off a test run. There are many automated build/release pipelines like this. Some popular ones include Jenkins, AppVeyor, and Azure DevOps.
Further Learning
• Hitchhikers Guide to the PowerShell Module Pipeline
Use PSScriptAnalyzer

PSScriptAnalyzer is a free tool from Microsoft that performs code linting. Code linting runs checks against your code to ensure you're meeting best practices and building efficient code. Download the PSScriptAnalyzer module from the PowerShell Gallery and run it against your scripts and modules to see what problems it finds. If you don't already have PSScriptAnalyzer, you can download it by running Install-Module -Name PSScriptAnalyzer.

Once you have it installed, create a simple script with a single function inside and call it Gather-OperatingSystem.ps1. You can use the following example if you'd like. This simple script defines a function and then calls that function.

Function Gather-OperatingSystem {
    [CmdletBinding()]
    Param (
        [Parameter(Mandatory)]
        $ComputerName = $env:COMPUTERNAME
    )

    Get-CimInstance -ClassName Win32_OperatingSystem -ComputerName $ComputerName
}

Gather-OperatingSystem

Even though this function will work fine, there are numerous problems with it based on Microsoft-recommended best practices. To discover those issues, run PSScriptAnalyzer against the script.

Invoke-ScriptAnalyzer -Path ~\PathToScript\Gather-OperatingSystem.ps1
Figure 18-1. PSScriptAnalyzer has caught various issues

You can see in Figure 18-1 that PSScriptAnalyzer found multiple issues with this script. This is a handy tool to run against all of your scripts to ensure they consistently meet best practices.

Tip Source: https://twitter.com/DavePinkawa
Further Learning
• How to Use the PowerShell Script Analyzer to Clean Up Your Code
CHAPTER 19
Miscellaneous Tips

There will inevitably be tips that don't fit the mold: tips that don't belong to any particular chapter and weren't numerous enough to warrant a chapter of their own. You'll find them here. This chapter is a smorgasbord of tips ranging from string and array best practices to interactive prompting, creating PowerShell script shortcuts, and more.
Write for Cross-Platform

At one time, PowerShell was called Windows PowerShell; PowerShell only existed on Windows. Not anymore. Nowadays, PowerShell is available on just about every platform out there. As a result, scripts are being run on Windows, Linux, macOS, and other platforms every day. If you're writing PowerShell for the community or others in your organization, it's a good idea to ensure those scripts are cross-platform compatible. IT workloads are constantly moving around on-prem and to/from the cloud on different operating systems. A web server can just as easily run on nginx as it can on IIS. You should write scripts to account for this potential change even when there are no current plans to run on another platform.
Even though the Microsoft PowerShell Team has made every effort to maintain parity between Windows PowerShell and PowerShell, there will be differences, especially across different operating systems. If you believe your script may run on both versions, be sure to write and test for both.

There is an endless number of examples for this tip, but to solidify things, let's use one. Perhaps you have a script that works with an API. This script queries a REST API and performs some tasks based on the result of various API calls. In one line of your code, the script reads a directory. You usually work on Windows, and you've been getting a bit lazy lately with how you code. You decide to save a few keystrokes and use the alias ls to list files in a directory. This code works just fine on Windows.

ls -Path 'C:\SomeFolder' | ForEach-Object { $_.Encrypt() }

Now let's say you've been working hard on this script and want to share it with others. This script doesn't necessarily need to run on Windows; there is no dependency on Windows because you're querying an API with the Invoke-RestMethod cmdlet, which is available cross-platform. Someone else runs this script on macOS and it fails. Why? There are two reasons:

1. ls is a shell command on macOS, while on Windows, ls is an alias for the Get-ChildItem cmdlet. You're running two separate commands expecting both to have the same functionality.

2. The Encrypt() method is not available on macOS; it's only available on Windows. If you need to encrypt the files, you need to find a cross-platform way to do it.

You don't always have to write code to be cross-platform, but it can never hurt!
Tip Source: https://twitter.com/migreene
Further Learning
• Tips for Writing Cross-Platform PowerShell Code
Don’t Query the Win32_Product CIM Class One common use for PowerShell is to find installed software on a Windows machine. Installed software can be found in various places in both WMI, the registry and the file system. One place that installed software is located is within the Win32_ Product CIM class. It may be tempting to use Get-CimInstance to query this class but don’t! This class is special and forces Windows to run external processes behind the scenes. It’s slower to respond and will instruct Windows to run msiexec for each instance when you don’t expect it. If you need to find what software is installed on a Windows computer, don’t do this or anything like it: Get-CimInstance -Class Win32_Product If so, you’ll find the performance is extremely slow. If you dig in a bit to figure out why, you’ll find that you’re filling up the Windows Application event log with messages as shown in Figure 19-1.
Figure 19-1. MsiInstaller event log messages

Instead of querying the WMI Win32_Product class, use the registry. You can query the registry yourself, but to save time, check out the PSSoftware community module or any other module that queries the registry instead.
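As a sketch of the registry approach, you can read the standard uninstall keys directly. The two paths below cover 64-bit and 32-bit installers; property names like DisplayName are standard uninstall-key values, though not every entry populates them:

```
$uninstallKeys = @(
    'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*'
    'HKLM:\SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall\*'
)

Get-ItemProperty -Path $uninstallKeys |
    Where-Object { $_.DisplayName } |   ## skip entries without a display name
    Select-Object DisplayName, DisplayVersion, Publisher
```

This returns in a fraction of a second and, unlike Win32_Product, triggers no msiexec activity.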
Further Learning
• Why Win32_Product Is Bad News!
Create a Shortcut to Run PowerShell As Administrator

Much of the time, you need to run PowerShell "as administrator" in Windows. Doing so allows you to perform many system tasks you wouldn't typically have access to.
To run any command “as administrator,” you can right-click the program and click Run as administrator as shown in Figure 19-2. But this is too much work!
Figure 19-2. Manually running PowerShell "as administrator"

Instead of navigating to a program in a menu, you can save a few seconds by creating a shortcut that always opens up PowerShell "as administrator." To make this happen, create a Windows shortcut on your desktop or another place on disk using the following command. This command starts PowerShell and then immediately invokes a child Windows PowerShell process "as administrator."

PowerShell -NoProfile -ExecutionPolicy Bypass -Command "& {Start-Process PowerShell -Verb Runas}"

Replace Start-Process PowerShell with Start-Process pwsh to launch PowerShell Core "as administrator."

Tip Source: https://www.reddit.com/user/Wugz/
Further Learning

• 9 Ways to Launch PowerShell in Windows
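If you'd rather script the shortcut's creation than click through Windows, a sketch using the WScript.Shell COM object could look like the following. The shortcut name and desktop location are arbitrary choices for illustration, not anything the book prescribes.

```powershell
# Create a desktop shortcut that launches an elevated PowerShell session.
$shell    = New-Object -ComObject WScript.Shell
$shortcut = $shell.CreateShortcut("$env:USERPROFILE\Desktop\Admin PowerShell.lnk")
$shortcut.TargetPath = 'PowerShell'
$shortcut.Arguments  = '-NoProfile -ExecutionPolicy Bypass -Command "& {Start-Process PowerShell -Verb Runas}"'
$shortcut.Save()
```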
Store "Formattable" Strings for Later Use

You can store "formattable" strings without actually using them until you need to. What's a "formattable" string? As mentioned in the "Write for the Next Person" chapter, PowerShell allows you to create placeholders in strings to replace via the -f operator. As shown in the following code snippet and the result in Figure 19-3, using string formatting with the -f operator, you can replace a value inside of another string by using an incrementing placeholder:

'Hi, I am {0} {1}' -f 'Adam','Bertram'
Figure 19-3. Replacing placeholders with values in a string

You can use this feature to your advantage by storing an entire string with placeholders in a variable and then replacing those placeholders later. For example, perhaps you need to store a common SQL query in code, but one value in that query will change. You could create the string with placeholders, assign it to a variable, and invoke the code later. The following code snippet stores the SQL query string with a placeholder as $FString. It then uses the pipeline variable ($_) as the replacement value. You can see that the "formattable" string's placeholder is replaced by different values.
$FString = "SELECT * FROM some.dbo WHERE Hostname = '{0}'"
"comp1","comp2" | ForEach-Object { Invoke-SQLcmd -Query ($FString -f $_) }

Tip Source: https://www.reddit.com/user/Vortex100/
Further Learning

• Learn the PowerShell String Format and Expanding Strings
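The same pattern works without a SQL server in the picture. The sketch below simply builds the query strings so you can see each placeholder replaced; Invoke-SQLcmd is left out so the example runs anywhere.

```powershell
# Store a "formattable" string once, then fill its placeholder per item.
$FString = "SELECT * FROM some.dbo WHERE Hostname = '{0}'"
$queries = 'comp1', 'comp2' | ForEach-Object { $FString -f $_ }

$queries[0]   # SELECT * FROM some.dbo WHERE Hostname = 'comp1'
$queries[1]   # SELECT * FROM some.dbo WHERE Hostname = 'comp2'
```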
Use Out-GridView for GUI-Based Sorting and Filtering

Sometimes the command line isn't enough. Whether you're feeling mouse-happy or you're giving a script to a user who needs a GUI, you can easily display objects using a cmdlet called Out-GridView.

The Out-GridView cmdlet takes one or more objects as input and then displays them in a window with a grid. You can then filter, sort, and click around the rows as much as you'd like.

Maybe you'd like to present a list of Windows services to a user. Just pipe Get-Service to Out-GridView and you'll get a nice window as shown in Figure 19-4.

Get-Service | Out-GridView
Figure 19-4. Viewing Windows services in a GUI-based grid layout

Not only can you create a nice grid layout of objects, you can also provide input to other commands via the grid by passing selected objects using the PassThru parameter. Perhaps you need to restart some Windows services but want to select them via the grid layout you saw earlier. In that case, you'd use the PassThru parameter, which forces Out-GridView to pass all selected objects to the pipeline where another command picks them up (Restart-Service in this case).

Get-Service | Out-GridView -PassThru | Restart-Service

The Out-GridView cmdlet is an easy and simple way to create an interactive, GUI-based grid layout.

Tip Source: https://www.reddit.com/user/alphanimal/
Further Learning

• Fun with PowerShell's Out-GridView
Don't Make Automation Scripts Interactive

When you're building scripts to automatically perform tasks, don't introduce any kind of interactivity. The point of automation is complete, hands-off-the-keyboard behavior. The last thing you want is to be woken up in the middle of the night because a mission-critical script didn't run; it was waiting on you to type something in.

Every time a script prompts for input, it's paused. The entire script just waits for a human to come along and type some characters on the keyboard. Don't create this. Instead, think through the input you need to pass at runtime and provide that information automatically.

Don't define a mandatory parameter and then not pass a value to that parameter:

function Do-Thing {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory)]
        [string]$Foo
    )
}
PS> Do-Thing

Don't prompt for input inside the function either:

function Do-Thing {
    [CmdletBinding()]
    param(
        [Parameter()]
        [string]$Foo
    )
    $thing = Read-Host -Prompt 'I need some input!'
    ## Do stuff with $thing
}
PS> Do-Thing

And don't call cmdlets that prompt on their own:

$credential = Get-Credential -Message 'Needs some love here'

The only place to prompt for anything is when someone intends to be in front of that code when it runs.
Further Learning

• Working with Interactive Prompts in PowerShell
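Putting the pieces together, a non-interactive version of the hypothetical Do-Thing function might look like the sketch below: every value arrives through a parameter, so the function never pauses for a human.

```powershell
# All input comes in through parameters; nothing prompts at runtime.
function Do-Thing {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory)]
        [string]$Foo,

        [Parameter()]
        [pscredential]$Credential   # passed in, never gathered via a prompt
    )
    "Processing $Foo"
}

# A scheduled task or CI job supplies everything up front.
Do-Thing -Foo 'bar'   # Processing bar
```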
CHAPTER 20
Summary

I hope you learned a few tips and tricks in this book. It was created to bring together all of the little gotchas and tips collected by myself and the PowerShell community over the years. It was written not necessarily as a best practices guide but more as a source to confirm you're designing scripts according to community guidelines.

PowerShell is a forgiving language, and that forgiveness is a double-edged sword. On one hand, you can write PowerShell code that gets a job done quickly in many different ways. On the other, that kind of flexibility makes good design decisions all the more important. Just because you can do something doesn't mean you should!

I hope this book has taught you how to write better PowerShell code and helped you go from a simple script-maker to a tool-building guru!
© Adam Bertram 2020 A. Bertram, Building Better PowerShell Code, https://doi.org/10.1007/978-1-4842-6388-4_20
Index

A
Abstraction
    Hyper-V, 92
    hypervisor, 93, 95, 96
    software developers, 91
    VMware, 93
Active Directory (AD), 121
Aliases, 66, 67
Alphabetical order, 67, 68
Artificial intelligence (AI), 55
Automating Pester tests, 127
Automation scripts, 139, 140

B
Building blocks, 2, 25–30
Building functions
    pipeline, 27–29
    Restore-AppBackup, 30

C
Case-sensitive cmatch operator, 62
Clean up verbose messages, 46–48
Code scaffolding, 22
Coercion, 57
Command-line utility, 97, 98
Comment-based help, 69, 70
Community modules, 13
Configuration items, 52
Constrained language mode, 86
Context, 8
Credential parameter, 42
Cross-platform, 10
    macOS, 132
    REST API, 132
    web server, 131

D
Debug stream, 107
Distributed version control services, 123
DnsHostName parameter, 106
Domain-specific language (DSL), 125
Don't Repeat Yourself (DRY), 49, 50

E
Encrypt() method, 132
Error handling, 4, 73

F
FilePath parameter, 33
Find-Module command, 14
Format cmdlets, 103, 104
"Formattable" strings, 136
-f string operator, 65
Functions
    foreach loop, 26, 27
    ServerName parameter, 26
    single action, 25

G
Get-Content cmdlet, 119
Get-Server returns, 38
GitHub Advanced Search page, 15
Globally unique ID (GUID), 69

H
Hard-terminating error
    ErrorAction parameter, 74
    $ErrorActionPreference, 76
    try/catch block, 75
    Write-Host cmdlet, 73
Hyper-V virtual machines (VMs), 92

I, J, K
Infrastructure tests, 126
InputObject parameters, 39
Integrated development environment (IDE), 11
Invoke-Expression command, 84, 85

L
Limit script/function input, 7
Logging function, 43, 44, 46
Log script activity, 6
Loosely typed languages, 56

M
Manageable code, 5
Module functions
    get function, 100
    New-Monitor function, 100
    pscustomobject type, 98, 99

N
.NET StreamReader class, 119
$null variable, 111

O
Out-GridView, GUI-based sorting/filtering, 137, 138
OutputType keyword, 59, 60

P, Q
Parallel processing
    background jobs, 117
    Get-Job command, 118
    Invoke-Command, 118
    runspace, 117
Parameter sets, 36, 37
Parameter validation, 57–59
Param parameter, 57
PassThru parameter, 110
Peek() method, 119
Performance vs. readability, 71, 72
Pester
    definition, 125
    DSL, 125
    infrastructure tests, 126, 127
    PSScriptAnalyzer, 128
Pester tests, 4, 9
PoshRSJob PowerShell module, 117
PowerShell, 141
    console, 30, 31
    foreach statement, 115, 116
    Get-Verb cmdlet, 89
    naming standard, 89
    .NET methods, 88
    pipeline, 114, 115
PowerShell "as administrator", 134, 135
PowerShell extension, 19
PowerShell Gallery, 13
PowerShell Integrated Scripting Environment (ISE), 17
PowerShell module, 2
PowerShell scripts, 21
PSAcmeApp, 122
PSCredential object, 40, 42, 83, 84
PSCustomObject object type, 109

R
ReadLine() method, 119
Reboot-Server function, 37
Regular expression (regex), 61, 62, 68, 69
RemoteSigned or AllSigned, 79
Remove dead code, 53, 54
Return informative output, 8
Re-usable tools, 3

S
Scriptblock logging, 81, 82
Security, 5, 79
ServerName property, 37
Set-AuthenticodeSignature cmdlet, 80
Sign scripts, 79, 80
Soft-terminating errors, 76, 77
String substitution, 65, 66
System.IO.FileInfo object, 59

T
Todo list, 23
Trusted Root Certification Authorities, 80

U
Unit-testing framework, 126

V
Variable names, 63, 65
Version/source control
    .bak, 121
    branch-based feature, 122, 123
    commit code, 122
    creating repositories, 121
Visual Studio (VS) Code
    cross-platform, 18
    extensions, 18
    Git integration, 19, 20
    PowerShell extension, 18

W, X, Y, Z
Windows PowerShell, 131
Win32_Product CIM class, 133, 134
Write-Host cmdlet, 73, 113
Write-Information cmdlet, 107, 108
Write-Progress, 101–103
Write-Verbose cmdlet, 105, 106