
This issue was moved to a discussion.

You can continue the conversation there. Go to discussion →

Best Practices for Module Development and Building #59


Closed
Jaykul opened this issue Jun 23, 2016 · 39 comments

Comments

@Jaykul
Member

Jaykul commented Jun 23, 2016

Hey, anyone interested in trading success stories about how you "build" modules?

I just tried something new this week on the SecretServer module, thanks to @bushe and @RamblingCookieMonster ... where the functions in the module are organized in "Public" and "Private" folders, and the psm1 dot-sources them, but the build script combines them, copying all the content into the psm1 -- so when it's shipped, the module is just the .psd1 and the .psm1

The result is somewhat easier for code navigation and debugging during dev (at least in Visual Studio, Code, and Sublime) and faster loading of the 'built' module.
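As a minimal sketch of that build step (the function name, folder layout, and paths here are invented for illustration and follow the Public/Private convention described above; this is not the actual SecretServer build script):

```powershell
# Hypothetical build step: merge every function file into one shipping psm1
function Build-MergedPsm1 {
    param([string]$Source, [string]$Output)
    # Make sure the output directory exists
    New-Item -ItemType Directory -Path (Split-Path $Output) -Force | Out-Null
    # The "one-liner": read each script in Private and Public,
    # then write all of the content into a single psm1
    Get-ChildItem -Path (Join-Path $Source 'Private'), (Join-Path $Source 'Public') -Filter *.ps1 |
        Get-Content -Raw |
        Set-Content -Path $Output -Encoding UTF8
}
```

Something like `Build-MergedPsm1 -Source .\Source -Output .\Output\MyModule.psm1` then leaves only the psd1 to copy alongside the merged psm1.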

I like it so much, I'm wondering if it's worth teaching others to do the same...

The build script for that is something that's gone through many revisions on other projects, and I'm starting to wonder if there's a way we can ever stop all the projects on github from having their own unique build/test systems.

See Also: ModuleBuilder, PSake, PSDeploy, CodeFlow, Pester, etc...

@gerane

gerane commented Jun 23, 2016 via email

@gerane

gerane commented Jun 23, 2016

I have a plaster template written up using the format I have now and was going to see if it could be added into the examples.

Edit: Reading comprehension fail.

I have been thinking about the Tests folder as well. Should it be in Root\Tests or Root\ModuleName\Tests?

@gerane

gerane commented Jun 23, 2016

What I run into is modules using different Tests directories, which means reusable tests usually require some sort of alteration.

@zloeber

zloeber commented Jun 23, 2016

I'd love to see how you are doing releases. I've slowly been working on an Invoke-Build script for my handful of module projects to help turn the source into modules in a more streamlined manner. Invoke-Build is an exceptional, pure-PowerShell build engine which you might appreciate. I run code formatting against the source, among other things. Here is the list of tasks I've created so far. I'm not completely done with the build script, but the overarching steps are:

  • Clean/Create the build temp directory
  • In the temp directory:
  1. Create project folder structure
  2. Copy over project files
  3. Format PowerShell files with the FormatPowerShellCode module
  4. Create module markdown files (PlatyPS)
  5. Convert markdown files to HTML documents for online help (PlatyPS)
  6. Update module manifest with exported functions and new version information.
  7. Update the current release directory with the temp directory contents
  • Create a zipped copy of the current release directory contents with the current version name (thus keeping an ongoing history of releases).
  • Push the release with a tag to GitHub
  • Push the release to PSGallery (using some custom functions)

I hope to have a working (releasable) .build.ps1 script as an example done soon. I'm sure someone like yourself could take it and make it 10x better.
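As a rough illustration of how a task list like the one above maps onto Invoke-Build syntax (all task names, paths, and bodies here are invented -- this is a sketch, not zloeber's actual .build.ps1, and it requires the InvokeBuild module to run):

```powershell
# Hypothetical .build.ps1 mirroring the steps above
task Clean {
    if (Test-Path temp) { Remove-Item temp -Recurse -Force }
    New-Item temp -ItemType Directory | Out-Null
}
task Stage Clean, {
    Copy-Item Source\* temp -Recurse        # copy project files into the temp dir
}
task UpdateManifest Stage, {
    # e.g. refresh FunctionsToExport and bump the version in the psd1 here
}
task Archive UpdateManifest, {
    # version stamp would come from the manifest in a real script
    Compress-Archive temp\* releases\MyModule-1.0.0.zip
}
task Publish Archive, {
    # push a tagged release to GitHub, then Publish-Module to the PSGallery
}
task . Publish                              # '.' is the default task
```

Invoking `Invoke-Build` with no arguments runs the `.` task; `Invoke-Build Stage` runs just that task and its dependencies.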

@gerane

gerane commented Jun 23, 2016

I think it would be interesting to create a PSDeploy deployment type that handles a module like this. I really like the PSDeploy DSL for configuration. It lets you abstract away the complicated and messy code in your build script and replace it with a single command. You just have a human-readable config file. It makes for easier auditing and reading of what is going on.

@gerane

gerane commented Jun 23, 2016

This could also allow for variations in folder structure. The config file could have paths for public\private\tests, etc. The reason I mention PSDeploy is that it also covers more than just building the module; while it could have this integrated in, it can also cover things like building PlatyPS help, cloud services, or any other sort of deployment.

@bushe

bushe commented Jun 23, 2016

I am currently using https://github.com/martin9700/ConvertTo-Module to build modules. I have a partially working DSL wrapped around it to make it a simple process, using a .build.ps1 file and just running Invoke-Build.

It might make more sense to have ConvertTo-Module plug into PSDeploy as mentioned by @gerane

@rkeithhill

For "build" tasks, I prefer PSake. It seems to have reasonable momentum (16 releases, 47 contributors) behind it and it works well. Like the old sayings go: "use the right tool for the job" and "don't try to fit a square peg into a round hole". :-)

@Jaykul
Member Author

Jaykul commented Jun 23, 2016

@bushe yeah, I did not like that one. It's adding all this weirdness about #publish blocks and stuff. Since during development the psm1 is just dot-sourcing all the scripts, there's no need to do any weird magic at build time: just copy the file content. It's literally a one-liner.

I want to keep my dev psm1, and put the release one in the output directory.
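For context, the dev-time psm1 in this pattern really can be that simple. A sketch (folder names follow the Public/Private convention above; the contents belong inside a .psm1 file, since Export-ModuleMember only works from within a module):

```powershell
# Hypothetical dev-time MyModule.psm1: dot-source every function file
$public  = Get-ChildItem -Path (Join-Path $PSScriptRoot 'Public')  -Filter *.ps1 -ErrorAction SilentlyContinue
$private = Get-ChildItem -Path (Join-Path $PSScriptRoot 'Private') -Filter *.ps1 -ErrorAction SilentlyContinue
foreach ($file in @($public) + @($private)) {
    . $file.FullName        # load each function into module scope
}
# Export only the public functions, assuming one function per file named after it
Export-ModuleMember -Function $public.BaseName
```

At build time, the same file list gets concatenated into the shipped psm1, so nothing changes between dev and release except the file count.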

And finally, it wanted to re-create the psd1 every time with all the lame "in case you don't know how modules work" comments -- blowing away my comments, and making the PSD1 UTF-16 encoded. That final bit made me so angry, I deleted it. 😠

@gerane I'm increasingly confused by PSDeploy vs. PSake and what the goal is 😉

@zloeber

zloeber commented Jun 23, 2016

I went ahead and finished the invoke-build script I've been working on for my script formatting module and pushed it up, if anyone is interested in it. The build process is pretty simple to run: the last two lines are the defined task lists (. is the default task when no tasks are given to Invoke-Build), but you can call any one task individually. I'm no build expert, but now that I have it working I'm absolutely going to be using this for all my future projects :)

@zloeber

zloeber commented Jun 23, 2016

I should be honest and say that I didn't actually finish the tasks for pushing to the PSGallery (just the underlying functions I'm going to use within them are done; I'll publish them soon).

@Jaykul
Member Author

Jaykul commented Jun 23, 2016

It looks like I need a whole conversation about "build systems" ...

PSake vs PSDeploy vs Invoke-Build.

MSBuild vs Grunt.

Why is everyone trying to create these non-deterministic "task" systems? What do you see as the payoff, versus a simple, linear build script?

@zloeber

zloeber commented Jun 23, 2016

I personally was looking for a completely stand-alone (portable) method for repeating my build releases. I was going to just script out the process but putting the whole thing into the invoke-build framework allows a lot more flexibility for running the individual tasks. If you look at my small(ish) example I have one big linear task list as my default build action but I also have a task called 'TestModule' to kick off a test run of the code formatting module instead of a full release. It also forces you to break your build tasks out into discrete tasks I suppose.

@nightroman

@Jaykul

Why is everyone trying to create these non-deterministic "task" systems? What do you see as the payoff, versus a simple, linear build script?

I tried to answer this question: Invoke-Build/wiki/Concepts.

@Jaykul
Member Author

Jaykul commented Jun 23, 2016

@nightroman but your tasks don't have "inputs" (source) and "outputs" (destination) -- so doing two builds in a row does all the work twice?

@nightroman

@Jaykul Tasks have "inputs" and "outputs", see Invoke-Build wiki
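For readers following along: an incremental Invoke-Build task declares -Inputs and -Outputs, and the engine skips the task body when the outputs are newer than every input, so a second build in a row does no work. A sketch (paths invented; requires the InvokeBuild module):

```powershell
# Hypothetical incremental task: rebuild the psm1 only when a source file changed
task BuildPsm1 -Inputs { Get-ChildItem Source\Public, Source\Private -Filter *.ps1 } -Outputs 'Output\MyModule.psm1' {
    # $Inputs and $Outputs are provided automatically inside incremental tasks
    Get-Content -Path $Inputs -Raw | Set-Content -Path $Outputs
}
```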

@bcdady

bcdady commented Jun 23, 2016

It seems this thread is starting to wander down side-trail(s) of defending existing approaches/projects/repos. I'd like to respond to the initial nomination and confirm that the idea of making the module 'build' process simple, consistent, and IDE/ISE/tool agnostic is definitely worthwhile.

Those who already have an established process and preferred tool are unlikely to change nonchalantly, so the benefits of sharing notes on how to most efficiently author, test, and distribute new modules will appeal to those who are open to trying a new method.

I've been using PowerShell ISE with the ISESteroids module, which adds some nice navigation features that make managing a large psm1 more feasible. But I'm all for learning a technique that would work well, and be a bit more accessible for colleagues that aren't excited about investing in the ISESteroids add-on, or any other number of sources of friction.

@Jaykul
Member Author

Jaykul commented Jun 23, 2016

So, yeah, @bcdady, I've been telling people that the reason they want to break things up into many files is because they don't have a good editor with "goto" syntax awareness ... and I've been sticking with one file because some editors don't have a "folder tree" view.

But after trying it for a few days, honestly, it's nice, when debugging large modules, to be able to easily open side-by-side views of three functions to visually see a stack trace -- and to never have to type a 4 digit "goto line" number 👀

For what it's worth, I've tried several of the build tools, and always end up just using a build.ps1 like what's in the SecretServer module I mentioned earlier. I had a go at combining that with my test.ps1 and so on just now, so if anyone cares, that is here but it's still really just a straight-forward linear script.

My ideal tool would be like gruntjs -- able to watch folder trees and re-run build steps to constantly keep outputs up to date. Ideally, I'd like to use a simple powershell-like syntax for that. Something like:

concat @{
    inputs = "${source}\Public\*.ps1", "${source}\Private\*.ps1"
    output = "${build}\${modulename}.psm1"
    separator = "`n`n"
}
pester @{
    inputs = "${source}\Tests\"
    outputs = "${logs}\TestResults.xml"
}
watch @{
    inputs = $concat.inputs
    tasks = concat, pester
}

Or even (since I'm day-dreaming here), something simpler:

concat -input "${source}\Public\*.ps1", "${source}\Private\*.ps1" -output "${build}\${modulename}.psm1"
copy -input "${source}\**\*.ps1" -exclude $concat.inputs -output "${build}\**"
pester -input "${source}\Tests\" -output "${logs}\TestResults.xml"

watch -inputs $concat.inputs -output concat, pester
watch -inputs $copy.inputs -output copy, pester

@RamblingCookieMonster

RamblingCookieMonster commented Jun 23, 2016

Hi!

On combining function definitions into the psm1 at the build stage: agreed, that might be a handy option, although I wouldn't advocate for using it all the time. Would be handy to have it abstracted into a function, at the very least : )

Aside:

So, this isn't the most common use case, but I occasionally move modules or similar projects across different build systems - AppVeyor, Jenkins, GitLab CI, etc.

Having a modular, cross platform solution makes this a bit easier. I also tend to prefer abstraction, to an extent; it leads to more readable code, IMHO, and makes it easier to change/fix the underlying code without changing what the abstracted build looks like.

For example, I tend to use:

BuildHelpers (forgive the name): This includes a variety of helper functions you might use in a build process. For example:

  • Reading environment variables to determine details like the repo path, commit message, and branch name
  • Reading the folder/file structure to determine the actual project path within the repo (assumptions are made)
  • Not finalized, but steps like bumping module version, or for projects with FunctionsToExport='*', loading the module, reading exported functions, and updating the psd1 (will try to get dependency on your Configuration module working for this)

psake: This is used to help organize tasks, and many folks can use the Tags feature to differentiate what tasks run, perhaps for different build environments - e.g. maybe I could have a 'local' tag that would restrict some of the build-system-specific tasks.

I do agree that psake adds a bit of overhead -- now you have a build.ps1 and a psake.ps1. I still think it's beneficial though: it's a common tool, and I can quickly skim it and make assumptions from what (little) I know about the tool, rather than trying to parse a random build script, even if you organized it as you mentioned (functions as stages, call those functions at close -- awesome idea, but I worry that you might see wonky implementations or that folks would skip it). Cidney seems like an interesting alternative, and Robert is very receptive, but I went with what other folks were using and contributing to.

Pester: At this point, if it's bundled in the OS, not going to look for a different testing solution : )

PSDeploy: This helps simplify and standardize deployments. I can simply look for a *.psdeploy.ps1 file and get an idea of what deployments will result from a project. It might be overkill, but is particularly helpful when you have more than one deployment (e.g. maybe you're also building docs with PlatyPS or MkDocs), or doing infrastructure or other deployments.

With this set of tools, it's pretty straightforward to not only move between build systems, but to not have project-specific bits that I need to change every time, much of that is abstracted out and normalized.

Open to change though, if it makes sense and meets needs!

Cheers!

@kilasuit

@Jaykul Personally I think it's better to leave it in the segregated style of Private and Public folders than combining them into a single psm1 -- this allows for a better code reading and debugging experience, IMO.

Perhaps it's because I can't see the actual need, nor the benefit, in adding this layer of additional work to be done in the publish stage.

Could you define an example that you think would help me to understand the use case behind doing this?

@gerane

gerane commented Jun 23, 2016

@kilasuit faster module loading is one of the major benefits.
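The difference is easy to measure yourself. A self-contained sketch (file count and names invented) comparing dot-sourcing many small files against importing one merged file:

```powershell
# Generate 100 small function files in a temp folder, then time both approaches
$dir = Join-Path ([IO.Path]::GetTempPath()) ([Guid]::NewGuid().ToString())
New-Item -ItemType Directory -Path $dir | Out-Null
1..100 | ForEach-Object {
    Set-Content -Path (Join-Path $dir "Get-Thing$_.ps1") -Value "function Get-Thing$_ { $_ }"
}
# Concatenate all the files into one psm1, the way a merge-style build would
$merged = Join-Path $dir 'Merged.psm1'
Get-ChildItem -Path $dir -Filter *.ps1 | Get-Content -Raw | Set-Content -Path $merged

$many = Measure-Command { Get-ChildItem -Path $dir -Filter *.ps1 | ForEach-Object { . $_.FullName } }
$one  = Measure-Command { Import-Module $merged -Force }
'{0:n1} ms dot-sourced vs {1:n1} ms merged' -f $many.TotalMilliseconds, $one.TotalMilliseconds
```

Actual numbers vary by machine and disk cache; the gap tends to grow with file count, since each dot-source pays per-file open overhead.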

@rkeithhill

This is the build script that is currently in the Plaster New Module template example. It uses PSake but, as Joel says, its main job is to just manage a Release\<ModuleName> folder in the module's root dir. Every "build" consists of cleaning the Release dir and copying only the necessary files into the Release dir.

Maybe it's just me, but when I see a module that just publishes everything that is in its GH root dir, I think a little less of it. I don't need its .gitattributes/.gitignore files. I don't need to see its build script. Frankly, I don't even care about its tests. If I want those sorts of source files, I'll fork their repo and get the benefit of having those files.

So this build script has an Exclude property that is prepopulated with most of what you don't need in the published module but you can customize it. There are also Pre/PostPublish tasks as well. On a regular build, the PSake script will not publish. However when this project scaffolds into VSCode it tweaks the project's tasks.json file to add a "publish" task.

There are some things it doesn't do (or do well) that I think need addressing. There is no "automatic" versioning. Is this something it should do? Can folks agree on an automatic versioning scheme? I'm not sure how such a scheme can know things like "breaking change" or "added features" vs "bug fixes". But hey, if something can be automated, I'm all for it. :-) Right now it uses the ReleaseNotes parameter on the Publish-Module task because Update-ModuleManifest is broken: it can't handle a ' in the release notes text.

If you want to play with this script it ships with the PowerShell extension for VSCode. Just open the extension's examples dir and press Ctrl+Shift+B or look at the tasks with Ctrl+P and type task<space>.

@kilasuit

@gerane how much faster are we talking? Minutes/seconds/milliseconds/ticks?? -- especially seeing as, from v3 onward, there is typically little need to forcibly import a module, thanks to module auto-loading when it's installed in a PSModulePath location (and more so when it's in a default location).

Personally I prefer seeing the module structure as it would be in development when I've downloaded the module, rather than an essentially mushed-together version, mainly because I think it's easier for me to do localised debugging when it may be required.

Other than some speed benefits are there any further benefits that are easily realised?

@rkeithhill from the tests point of view, going forward I would personally prefer to see the tests included, so that they can be run locally without the need to find and fork the source -- especially as there isn't always a ProjectURI link in the PowerShell Gallery (because it's not added in the psd1 manifest file), so it can be painful to find where the source is located.

@rkeithhill

rkeithhill commented Jun 23, 2016

@kilasuit I think you, and most of the folks participating in this discussion, are several sigma out from the "typical" module user. Thinking of the folks I work with -- devs and system admins -- I can confidently say none of them would ever look in the module's folder, if they even knew where it was located.

I know the extra files aren't huge but then again, they are extra files that are not required. As we consider modules for footprint sensitive environments like Nano server, I see no need to install unnecessary files especially when you can typically access those files very easily.

BTW you can run those files locally without doing a fork. You can always just clone the project's repo. The fork comment was made because you mentioned running tests. If you're running tests, you're likely changing something and if you are changing something, you are obviously going to contribute that back to the original project, right? :-)

@Jaykul
Member Author

Jaykul commented Jun 23, 2016

For what it's worth, if the module's not open source (at least within your company), some of these things are a substantially different conversation. Assuming it is open source, if you're going to change the module, you SHOULD fork the repository and go to the original source. Published modules SHOULD specify their project repository.

I agree with @rkeithhill that I don't think tests should be in "published" modules. They actually lead people down the wrong path --I want people to go to the repo if they want the tests-- otherwise they aren't helping me, because they can't file bug reports or pull requests. 😉

@kilasuit Speed is the main benefit of merging all the scripts, and is, after all, the main concern aside from correctness (which is the same either way). The time difference is on the order of seconds when the module is imported -- it's definitely noticeable. Editing and debugging is just NOT a priority for the shipped module. If you want to debug or edit, clone the repo.

However, there are a few other upsides:
  1. PSScriptAnalyzer only runs on the module code. I don't know about you, but my tests won't pass 😦
  2. Discoverability. In the auto-loading world, having the manifest explicitly updated is important. This pattern results in the manifest being updated, and doesn't rely on the module being loaded to discover things -- but if it did, it would work better merged.
  3. Cleanliness. Despite the size of the module file, the file layout is much simpler: psd1, psm1. Optionally, help files and format files. No nested trees of scripts -- a quick glance in the file manager tells you everything you need to know.

@RamblingCookieMonster

RamblingCookieMonster commented Jul 25, 2016

Finally had time to write a bit longer-form on the process I tend to use. More to come (e.g. BuildHelpers should query PSGallery to bump the version), but I've found it helpful to abstract out the build steps where possible.

For example, rather than a swath of script to package up and deploy a module to AppVeyor, it's a small snippet in the *.psdeploy.ps1 file:

# Publish to AppVeyor if we're in AppVeyor
if(
    $env:BHPSModulePath -and
    $env:BHBuildSystem -eq 'AppVeyor'
   )
{
    Deploy DeveloperBuild {
        By AppVeyorModule {
            FromSource $ENV:BHPSModulePath
            To AppVeyor
            WithOptions @{
                Version = $env:APPVEYOR_BUILD_VERSION
            }
        }
    }
}

I also like the idea of breaking these down into tasks. Whether using psake, cidney, or invoke-build, you get a pretty standard way to see how things are grouped, how things flow, etc. Random PowerShell code, even if you clearly document it, is going to force folks to read through things more carefully to figure out when and why (e.g. is it for the build? the deployment?) a line of code is used.

Also, on abstraction.... After reading through various build files and the many ways to handle common build needs (whether they're in a function, or just a bunch of lines of PowerShell), I'm in favor of using something like BuildHelpers to abstract out the various helper functions and avoid clutter.

Whew! I need some coffee.

Regardless of which modules are chosen, I think we would all benefit from seeing more abstraction in build processes:

  • Avoid random build scripts with hard coded project specific details
  • Avoid cases where a bug or other necessary change requires an update to all build scripts in all projects, rather than to a module they depend on
  • Build conventions. I'm all for reading PowerShell, but having a common set of conventions and standard modules to work with, where they fit, would help with readability and moving between various module projects

Cheers!

@Jaykul
Member Author

Jaykul commented Aug 1, 2016

So, from @RamblingCookieMonster's PSDeploy example I will say this: Written properly, PowerShell is wonderfully self-documenting. I don't care about pseudo-configuration DSLs vs straight PowerShell, but I definitely agree that packaging away the repeatable logic and leaving behind just calls is clearly easier to follow ...

Deploy-Project -Scope "Developer" -From $ENV:BHPSModulePath -To AppVeyor -Version $env:APPVEYOR_BUILD_VERSION

Apparently, the only way to get people in the PowerShell community to stop writing their own modules and use other people's is to get Microsoft to pick a winner at random and ship it in the ever larger "WMF" operating system. That makes me angry. However, that behavior also makes it likely that Microsoft will eventually have to pick one, so there's no way I'm going to try to recommend a "standard" module that could be extinct tomorrow if Microsoft picks a different one.

Since practically everyone in this conversation has their own modules for this, we aren't going to have a vote -- but I'd suggest that you guys should consider this a warning of your imminent demise and start talking to each other about collaborating and consolidating, because the project that grows and mutates and absorbs the most collaborators will inevitably be the winner.

@michaeltlombardi

michaeltlombardi commented Aug 1, 2016

I think there's two different conversations happening with some crosstalk:

  1. How should PowerShell module projects be organized - the source which one can fork from a repository.
  2. How should PowerShell module packages be organized - the artifacts which are installed from package management.

They're related and important but we seem to be trying to discuss a solution that solves for both problems rather than approaching them separately.

Edit:

Actually, there's a third conversation intertwined with these and it's "How do we go from 1 to 2" as far as I can tell.

@Jaykul
Member Author

Jaykul commented Aug 18, 2016

To be fair, the practice of having a different organization in the project than in the package was my original point.

A year or more ago we talked in #22 about how the files should be organized, and I assumed that the discussion was about the package and argued that I didn't like having the files separate. That conversation didn't go anywhere, because we essentially just disagreed.

Now a year later we came back to it with one core difference:

There have grown up a plethora of PowerShell build tools, and a convention of sorts to have a build.ps1 script which allows us to essentially have it both ways. So let's wrap this thread up by returning to the original question with an attempt to get an up or down vote:

Is anyone opposed to a recommendation (or at least a non-binding recognition 😉) that open source PowerShell projects are normally organized one of two ways:

  1. Such that the project root folder (the folder which contains the .git folder) is an importable module.
  2. Such that the project root folder contains basically readme and license and build.ps1 and the build script produces an importable module.

In this second case, my proposal is that the project be organized as in #22

@JustinGrote

JustinGrote commented Mar 18, 2019

Curious what people think about Private Function naming convention.

I know the common way in other languages is to do _myPrivateFunction, but Powershell style discourages this.

So far I've thought using the Verb-Noun method but removing the hyphen so it is VerbNoun seems to work best, e.g. GetMyInternalValues.

The main reason is that I can still distinguish between my private and public functions (public functions defined as ones I export with Export-ModuleMember) when just looking at the code. If I decide to later make a private function public, however, the search-replace to add the hyphen is very straightforward and has minimal chance of breaking anything.

Thoughts?
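To make the convention concrete, a small invented example of the hyphen-as-visibility-marker idea:

```powershell
# Public command: Verb-Noun, would be exported from the module
function Get-Widget {
    GetWidgetCache          # the missing hyphen flags this as a private helper
}
# Private helper: VerbNoun, never exported
function GetWidgetCache {
    'cached'
}
# In the psm1 you would then export only the hyphenated functions:
# Export-ModuleMember -Function Get-Widget
```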

@martin9700

Curious what people think about Private Function naming convention.

I know the common way in other languages is to do _myPrivateFunction, but Powershell style discourages this.

So far I've thought using the Verb-Noun method but removing the hyphen so it is VerbNoun seems to work best, e.g. GetMyInternalValues.

The main reason is that I can still distinguish between my private and public functions (public functions defined as ones I export with Export-ModuleMember) when just looking at the code. If I decide to later make a private function public, however, the search-replace to add the hyphen is very straightforward and has minimal chance of breaking anything.

Thoughts?

Yeah, I usually remove the hyphen and I don't necessarily stick to the Verb-Noun convention as hard (non-approved verbs, mostly). The underscore is good, makes it very obvious what's happening. I haven't tested, though -- is PowerShell ok with that?

@ChrisLGardner

ChrisLGardner commented Mar 18, 2019 via email

@JustinGrote

JustinGrote commented Mar 18, 2019

@martin9700

The underscore is good, makes it very obvious what's happening. I haven't tested though, is PowerShell ok with that?

Yep, works totally fine, you see it all the time in Powershell Modules that were clearly written by developers with a C# methodology. It seems a bit arcane to non-programmers I think though, even if private commands are never "exposed" to the end user.

@ChrisLGardner

Having them named the same makes it more discoverable for maintainers and future users should you decide to make functions public or private from the other state.

Having private functions use a different style was particularly for maintainers who are reviewing code and see a function they don't recognize as standard, e.g. "this command is a local private command that users don't see, and doesn't come from some other module". As for changing functions from private to public, that's pretty easily done with a search-replace if you follow either the underscore or "remove-hyphen" method, with little risk.

I still like the VerbNoun format rather than arbitrary naming because it still contextually makes it clear this is a PowerShell private function and not some command-line tool like nmap, etc. (though those should almost always be prefaced with the call operator; not everyone follows that style)

@martin9700

I stick with Verb-Noun naming for both public and private functions. I can see some benefits to having them named differently but I don't see any real difference between private and public beyond how easy it is for users to run them. Having them named the same makes it more discoverable for maintainers and future users should you decide to make functions public or private from the other state.

Have to disagree here. If it has the exact same naming standard I could see someone spending hours rewriting a function then loading the module and the damn thing won't run--oh turns out it's private. By using a different naming standard (and I love the _ prefix) the difference is obvious without looking through the manifest.

@JustinGrote

JustinGrote commented Mar 18, 2019

@martin9700

It's not the same naming standard though.

Invoke-MyPublicMethod is Public
InvokeMyPrivateMethod is Private

basically instead of _ being the private delimiter, the hyphen is, in reverse :)

I've been doing this practice for years with a broad team and no one has really ever had an issue. It also doesn't hurt that we organize with "Private" and "Public" folders in our PowerShell modules, like most do per this style guide.

That said, lots of other languages' style guides (C#, Python) use the underscore to delineate private resources (as a matter of convention; it's not programmed into the language or anything), so do we just "do what they do", or does this method make more sense from a readability perspective by being more "PowerShell-y"?

@martin9700

Understood, I use the same naming convention. I just like the _ to augment it, takes even less brain cycles to recognize the difference. PowerShell has a long history of borrowing what's best from other languages so I don't think that should enter into the thinking.

@JustinGrote

JustinGrote commented Mar 18, 2019

@martin9700 So your recommendation would be to do both just to make it extra clear:
Invoke-MyPublicMethod is Public
_InvokeMyPrivateMethod is Private

I see your point on the "less brain cycles", especially for a developer seasoned on other languages, but I disagree, probably primarily on preference I suppose.

Both options are "fine" I think, but I think we can agree that we should actively discourage things like _values as a private function name; they should still follow PascalCase and VerbNoun :)

Curious what others style preferences are, maybe we can build a consensus.

@MartinSGill

The first time I saw the approach that the OP mentioned I was hooked and I've been using it myself ever since.

The convention I use for internal vs public functions is fairly simple. External functions comply with PowerShell cmdlet naming conventions (Verb-Noun), whereas internal/private functions follow a more traditional approach, i.e. MyInternalFunction. Makes it very easy for me to tell at a glance what's internal vs external.

I have used underscore-style conventions in the past and I can see some of the advantages in them, but with PowerShell, just leaving out the hyphen makes it fairly easy to spot them at a glance.

@Jaykul
Member Author

Jaykul commented Mar 20, 2019

I recommend VerbNoun (without a dash) for private functions, in a private folder. This is primarily to help the reader, when you're looking at other functions which call it, so that you know you can't just copy that code out of the module 😉

I have never liked _ on the front of things, even in C. And underscores are definitely not commonly used by .NET -- in fact, Microsoft's default StyleCop and FxCop rules would yell at you. Furthermore, even in languages where private variables are named with underscores, you don't necessarily name methods that way unless you're working in a language where it's impossible to hide your methods (e.g. Python).

@PoshCode PoshCode locked and limited conversation to collaborators Jul 22, 2023
@Jaykul Jaykul converted this issue into discussion #171 Jul 22, 2023

