invoke-command instead of invoke-expression

In my previous post (which you can find here) I used the Invoke-Expression cmdlet to run a PowerShell script that was downloaded with Invoke-WebRequest.
And this was a good solution. The downloaded code was a PowerShell script that would run a private function, structured with the three scriptblocks Begin, Process and End.
Parameters were downloaded with the same construct from a Git repository and placed in a PowerShell object called $P.
With this approach I separated the parameters from the actual code.
Using Git I was able to version my parameters file separately from my script code. This setup works great, and it gives flexibility: parameters can change while the code stays untouched.


Yeah, a but… I still needed a way to pass parameters / arguments on the command line. With Invoke-Expression… well, that wasn't possible.
So I looked into Invoke-Command, which has an -ArgumentList parameter, making it possible to pass one or more arguments to the script. Passing named parameters isn't possible with it, though, which is what I was really looking for.
So to support named parameters, I decided to introduce just one parameter. That parameter is a JSON string, making it possible to pass multiple parameters merged into a single JSON object.

The only challenge with this is that all the interpreters the code passes through should leave the JSON string intact, including the quotes. And I didn't want to escape any quotes; that would be messy and error-prone. Encoding the string solves this issue: the argument is a base64-encoded JSON string.
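
For illustration, a minimal sketch of how the calling side could build such an argument; the property names (universe, action) are purely illustrative:

# Minimal sketch: encode a set of named parameters as a base64 (UTF-8) JSON string.
$json    = @{ universe = "42"; action = "shutdown" } | ConvertTo-Json -Compress
$encoded = [System.Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($json))
# $encoded now passes through any shell layer without quoting issues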

Is it secure?

Well, no… not at all… it is just base64 encoding. My goal was not to make it more secure, but to make sure the string wouldn't be changed by the different shell interpreters.
Of course, you can make it more secure by using private/public key pairs. You could use a docker volume containing the keys, or other secure methods. When using plain base64 encoding, just don't pass any sensitive data (passwords) with it. There are other, more secure, approaches for this with containers.

parameter approaches

This setup gives me different approaches to pass parameters to the script. The more static parameters are stored in a .json file in a Git repository.
The more dynamic parameters (like names of VMs to start) are passed via the base64-encoded JSON string.

What changed?

I changed the following items:

  • changed the entrypoint string
    • using Invoke-Command instead of Invoke-Expression
    • placing Invoke-WebRequest inside a scriptblock
    • using -ArgumentList to pass a base64-encoded JSON string
  • changed the PowerShell wrapper script to decode the input

Docker Entrypoint

The previous docker entrypoint was something like:

pwsh -Command invoke-expression '$(Invoke-WebRequest -SkipCertificateCheck -uri ' + <git URI> + ' -Headers @{"Cache-Control"="no-store"} )'

The new entrypoint looks like this:

pwsh -Command invoke-command -scriptblock ([scriptblock]::Create( (Invoke-WebRequest -SkipCertificateCheck -uri <git URI> -Headers @{"Cache-Control"="no-store"} ).content ) ) -ArgumentList <base64 coded JSON string>

As you can see, the one-liner has grown.
I used the -ScriptBlock and the -ArgumentList parameters of Invoke-Command. The scriptblock contains the Invoke-WebRequest cmdlet, which downloads the raw version of the PowerShell script from the Git repository.
The Invoke-Command cmdlet then executes this scriptblock, passing the argument from the argument list to the script.
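
To make the mechanism clear, here is a minimal local sketch: -ArgumentList maps positionally onto the script block's param() declaration.

# Minimal sketch: the argument lands in $inputObject via param().
$sb = [scriptblock]::Create('param($inputObject) "received: $inputObject"')
Invoke-Command -ScriptBlock $sb -ArgumentList $encoded   # $encoded from the earlier sketch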

Script Layer

The script has a wrapper layer, a main layer (containing the Begin, Process and End blocks), and the Process block containing the specific code to run.

    template.ps1

<#
.PARAMETER inputObject
    A JSON string base64 (UTF-8) encoded
#>
param(
    [Parameter(
        ValueFromPipeline = $true,
        ValueFromPipelineByPropertyName = $true,
        HelpMessage="JSON string base64 (UTF-8) encoded.")]$inputObject=""
)

function main {
    # ... main layer with the Begin, Process and End scriptblocks, see below ...
}

#-- calling the real powershell code to run
main

main layer (function)

I chose the function method to preserve my code structure. For most of my PowerShell code I use the Begin, Process and End scriptblocks to structure the code, and I didn't want to step away from that approach.

function main {
    Begin {
        $uri = <url to RAW version of parameter file>
        #-- trying to load parameters into $P object, preferably json style
        try { $webResult = Invoke-WebRequest -SkipCertificateCheck -Uri ($scriptrootURI+$scriptName+".json") -Headers @{"Cache-Control"="no-store"} }
        catch {
            write-host ("uri : " + $scriptrootURI)
            throw ("Request failed for loading parameters.json with uri: " + $webResult)
        }
        # validate answer
        if ($webResult.StatusCode -match "^2\d{2}" ) {
            # statuscode is 2xx, so convert content into object $P
            $P = $webResult.content | ConvertFrom-Json
        } else {
            throw ("Failed to load parameters.json from repository. Got statuscode " + $webResult.StatusCode)
        }

        #-- private functions
        function exit-script {
            param($exitcode)
            # ... write log footer, clean up and exit with $exitcode ...
        }

        #-- process inputObject (the argument passed via the cmd line)
        # decode the inputObject as UTF-8 base64 and convert it to a powershell object
        $A = ConvertFrom-Json -InputObject ([System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($inputObject))) -ErrorAction SilentlyContinue -ErrorVariable err1
        if ($err1) {
            write-host "Failed to process input object"
        }
    }
    End {
        exit-script -exitcode 0
    }
    Process {
        #-- the code that is doing the real work
        write-host ($P.world)    #-- from the parameter.json file (property name illustrative)
        write-host ($A.universe) #-- passed via the cmd line argument
    }
}


So I hope this blog gives you some ideas for your code challenges.
I'm going to write a more structured set of articles, a deep dive into my FaaS-like setup. So keep following this blog if you're interested.
And comments are always welcome.

Dynamically executing powershell code from GIT – a FaaS way

In this post I mentioned that I was dipping my toes into vRA.
And… yeah, it is not dipping anymore, it is a deep dive… 🙂 but that is for another blog.

This blog is about a challenge I solved by using a FaaS approach. FaaS stands for Function as a Service.
I have a small UPS running for my home environment, and found out that it could support my home environment for about 30 minutes. And then poof… power off… So I was searching for a solution to shut down my vSphere environment, triggered from a Home Assistant / Node-RED automation flow that monitors my UPS status.

My first thought was to use the REST API of vSphere / VMware ESXi. But not all the actions I need are published (like the shutdown of an ESXi host).
And I want to be as dynamic as possible.
I want to shut down / suspend all VMs except VMs that are tagged as coreVM. So the code doesn't contain VM names or IDs, but filters out all VMs that have the vSphere tag coreVM.

The Idea

As mentioned, the idea is that the automation finds all VMs that need to be suspended or shut down (depending on whether VMware Tools is running). VMs that have the vSphere tag UPS/coreVM are ignored.
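
A minimal PowerCLI sketch of that selection logic, assuming an existing vCenter connection; the tag lookup and the shutdown/suspend choice are my illustration of the description above:

# Hedged sketch: suspend or shut down all powered-on VMs except those tagged UPS/coreVM.
$coreVMs = Get-VM -Tag (Get-Tag -Category 'UPS' -Name 'coreVM')
foreach ($vm in (Get-VM | Where-Object { $_.PowerState -eq 'PoweredOn' -and $coreVMs -notcontains $_ })) {
    if ($vm.Guest.State -eq 'Running') {
        Shutdown-VMGuest -VM $vm -Confirm:$false   # VMware Tools running: clean guest shutdown
    } else {
        Suspend-VM -VM $vm -Confirm:$false         # no running VMware Tools: suspend instead
    }
}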

These core VMs are controlled with the startup/shutdown feature of my ESXi host.
The total automation flow is a three-phase flow. Phase 1: shut down all but the core VMs. Phase 2: shut down the ESXi host, which automatically shuts down the VMs controlled via the startup/shutdown feature. Phase 3: shutdown of the Synology NAS and Home Assistant.
The core VMs are my vCenter, Log Insight and router VM.

The tools

The tools I use are:

  • Home Assistant with
    • NUT integration (monitoring my UPS)
    • Node-RED flow automation (using HTTP REST API calls to control docker via Portainer)
  • Docker CE with Portainer CE running on a Synology NAS
    • Docker image vmware/powerCLI
    • Portainer controlling the docker environment and leveraging control via its APIs
  • Gitea GIT server running locally (containing my script and parameter files)

the FaaS way

For phase 2 I use a FaaS approach, meaning I have a function for shutting down / suspending VMs. This function runs in a temporary runtime environment (a docker container) and is only available at runtime. The function is not part of the runtime environment; at runtime it is downloaded from a Git repository and executed.
This gives the advantage of maintaining the runtime environment separately from the scripts (or the other way around). And every execution starts with a clean environment.
For the function I use PowerCLI, because I haven't found an API call in vSphere 7.0 that will shut down an ESXi host. And filtering VMs on vSphere tags via the API was (for me) a bridge too far.
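
In PowerCLI the host shutdown itself is a one-liner; a sketch, where the host name is illustrative and -Force is needed because the host is not in maintenance mode:

# Hedged sketch: shut down the ESXi host itself.
Stop-VMHost -VMHost 'esxi01.lab.local' -Force -Confirm:$false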

Using a container also gives me the possibility to separate my vSphere credentials from my script, by saving them in a docker volume which is only mounted at runtime. The function itself contains no credentials; it reads them from a file in that docker volume.
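
As a hedged sketch of what reading those credentials could look like (the path /secrets/vcenter.json and its JSON layout are assumptions for illustration, not my actual file format):

# Hedged sketch: build a PSCredential from a file on the runtime-mounted volume.
$secret = Get-Content '/secrets/vcenter.json' -Raw | ConvertFrom-Json
$pass   = ConvertTo-SecureString $secret.password -AsPlainText -Force
$cred   = New-Object System.Management.Automation.PSCredential($secret.username, $pass)
Connect-VIServer -Server $secret.server -Credential $cred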

The hurdles I encountered were:

  • how to run the script from a Git repo
    • by using a PowerShell one-liner: Invoke-Expression with Invoke-WebRequest, using the URL to the raw representation of a file (script) in the Git repo
    • using this one-liner as the entrypoint parameter when creating the container
  • how to bypass caching by the web browser
    • using "Cache-Control"="no-store" in the header
  • running a full PowerShell script via Invoke-Expression
    • writing the full code block as a PowerShell function (with Begin, Process and End blocks) and calling that function within the script

The PowerShell one-liner is:

pwsh -Command invoke-expression '$(Invoke-WebRequest -SkipCertificateCheck -uri ' + <git URI> + ' -Headers @{"Cache-Control"="no-store"} )'

The <git URI> is the URL to the raw representation of the script file in the Git repo. When leveraging the Portainer/docker API to create the container, you need to use the JSON notation for the entrypoint. The one-liner will look something like:

["pwsh", "-Command", "invoke-expression '$(Invoke-WebRequest -SkipCertificateCheck -uri ' + <git URI> + ' -Headers @{\"Cache-Control\"=\"no-store\"} )'"]

Powershell script format / the snag

The snag with using Invoke-Expression is that it can't handle a PowerShell script that has Begin, Process and End code blocks. Since this is how I write my code, and I wouldn't like to deviate from it, I had a snag.
The solution was to write a PowerShell script that contains a private function and then executes that function, like this:

function main {
    <#
    .DESCRIPTION
        Example code for running a full script from an URI.
        Example code to run a powershell script with the scriptblocks Begin, Process and End.
        Loading parameters from a JSON file into a $P object.
        By wrapping the code in the function main, we can use the Begin, Process and End scriptblocks when calling with Invoke-Expression.
        The Process block contains the main code to execute.
        The Begin and End blocks are mainly used for setting up the environment and closing it.
    .EXAMPLE
        Run the following cmdline in a powershell session:
        Invoke-Expression (Invoke-WebRequest <URL>).content
    #>
    Begin {
        #=== Script parameters

        #-- GIT repository parameters for loading the parameters.json
        $scriptGitServer = "https://....."   # IPv4 or FQDN of GIT server
        $scriptGitRepository = "organisation/repo/" # uri part containing git repository
        $scriptBranch = "master/" # GIT branch
        $scriptrootURI = $scriptGitServer+$scriptGitRepository+"raw/branch/"+$scriptBranch

        #==== No editing beyond this point !! ====
        $ts_start = Get-Date #-- Save current time for performance measurement

        #--- write log header to console
        write-host "================================================================="
        write-host ""
        write-host "script: $scriptName.ps1"
        write-host ""
        write-host "-----------------------------------------------------------------"

        #-- trying to load parameters into $P object, preferably json style
        try { $webResult = Invoke-WebRequest -SkipCertificateCheck -Uri ($scriptrootURI+$scriptName+".json") -Headers @{"Cache-Control"="no-store"} }
        catch {
            write-host ("uri : " + $scriptrootURI)
            throw ("Request failed for loading parameters.json with uri: " + $webResult)
        }
        # validate answer
        if ($webResult.StatusCode -match "^2\d{2}" ) {
            # statuscode is 2xx, so convert content into object $P
            $P = $webResult.content | ConvertFrom-Json
        } else {
            throw ("Failed to load parameters.json from repository. Got statuscode " + $webResult.StatusCode)
        }
    }
    End {
        # ... closing the environment ...
    }
    Process {
        # ... the main code to execute ...
    }
}

#-- run the private function
main

What it does is: the script runs the function main, and that function is basically the full PowerShell script. In the Begin block, Invoke-WebRequest is used to load a .json file and convert it to a PowerShell object called $P.
This object contains all the parameters used in the rest of the script (like the vCenter FQDN).


The result is that, from a monitoring trigger, Node-RED does some REST API calls to Portainer to create, run, and delete a docker container based on a vmware/powerCLI image. During the lifetime of this container, a volume is mounted with authorisation information and a PowerShell one-liner is executed which runs PowerShell code loaded directly from a Git repository.
With this setup I can run on demand any PowerShell script that doesn't need user interaction, maintained in a Git repository.
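
For illustration, a hedged sketch of the kind of API calls Node-RED performs, written here in PowerShell; the Portainer URL, endpoint ID, JWT, image name and volume name are all illustrative (Portainer proxies the Docker Engine API under /api/endpoints/<id>/docker):

# Hedged sketch of the create / start / delete calls against the Portainer API.
$api     = "https://portainer.lab.local:9443/api/endpoints/1/docker"
$headers = @{ Authorization = "Bearer <JWT>" }

$body = @{
    Image      = "vmware/powerclicore"                     # illustrative image name
    Entrypoint = @("pwsh", "-Command", "<one-liner>")      # the one-liner from above
    HostConfig = @{ Binds = @("credentials:/secrets:ro") } # illustrative volume mount
} | ConvertTo-Json -Depth 5

$c = Invoke-RestMethod -Method Post -Uri "$api/containers/create?name=faas-run" -Headers $headers -Body $body -ContentType 'application/json'
Invoke-RestMethod -Method Post   -Uri "$api/containers/$($c.Id)/start" -Headers $headers
Invoke-RestMethod -Method Delete -Uri "$api/containers/$($c.Id)?v=1"   -Headers $headers   # remove the container afterwards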

I hope you enjoyed this post. If you have questions or comments, please leave them below or reach out to me on Twitter.

truncate docker container log

Sometimes you just want to clear the logs of a docker container.
For instance, I'm running dnsmasq in a docker container and I'm troubleshooting Raspberry Pi PXE boot (yes, to run ESXi-ARM stateless…)
And dnsmasq is serving my homelab its domain.
Then it can be an annoyance when you run docker logs <container name> and see all the log entries since the container started.
And you just want to start with a clean slate.

It is the way

Yes, there is a way. And when you google, you'll find more blog posts giving you the solution, which is:

pi@raspberrypi~ $ sudo docker inspect --format='{{.LogPath}}' <container name>
pi@raspberrypi~ $ sudo truncate -s 0 <path presented by previous command>

And it works great… but… typing two lines of code… copy/pasting the log path into the second command… way too difficult 🙂
So what do you think about this?

pi@raspberrypi~$ sudo truncate -s 0 $(docker inspect --format='{{.LogPath}}' <container name>)

Yes, it will erase all logging, but that is the purpose of this whole exercise.


If the one-liner isn't helping, how about creating a small script just for this one purpose… What if you could do something like

pi@raspberrypi~$ truncate-log dnsmasq

To truncate the docker logs of the dnsmasq docker container. How? Well, easy.
Just create a new file in /usr/local/bin:

pi@raspberrypi~$ nano /usr/local/bin/truncate-log

and copy / paste this content into the file

#!/bin/bash
# truncate the log file of the docker container given as the first argument
CONTAINER=$1
sudo truncate -s 0 $(docker inspect --format='{{.LogPath}}' $CONTAINER)

After saving the file (Ctrl-X), add the execution bit:

pi@raspberrypi~$ chmod +x /usr/local/bin/truncate-log

And voila, the next time you need to truncate a docker container log, you just type truncate-log <docker container name>

Ansible in a (docker) container


A year ago, after following a few sessions at the Dutch VMATechcon, I decided to dive into the world of Ansible. With a background in industrial automation (programming PLCs) I have a soft spot for writing code, and I saw the benefits of declarative languages.
But where to start? And what should I use it for?


I'm working on a small VM which provides DHCP, DNS, TFTP, FTP, HTTP and NTP services so I can easily (PXE) boot nested ESXi hosts on VMware Workstation and in my homelab, all running in docker containers on Photon OS.
For maintenance (and learning) I decided to use Ansible for building and running these containers.

One of the issues I had was that I had to maintain the VM and the software installed on it… I'm quite proficient with Linux, but it can still be a challenge.
And keeping installation notes up to date about how software was installed… yeah… that is an issue. I embraced the solutions docker, git and ansible are giving me (and vagrant, when using VMware Workstation).

Using docker to run the services, ansible to automate workflows and git to keep track of changes and store my code in my github repositories.
With this approach I can easily rebuild the VM when it breaks down. And I see the VM more as a processing entity than as a virtual computer that needs to be maintained.

The Ansible container

Browsing the internet, I found this blog post about running ansible inside a docker container and using it interactively on the docker host. This gave me the possibility to have a 'portable' ansible installation, without any ties to the VM OS.
In my opinion it is great. By adding some small bash scripts, so I don't need to type something like 'docker run --rm -it …' every time, I have an ideal solution.
The original code for the container is based on alpine:3.7 and python 2. Every time I built the docker image I received a warning that python 2 is outdated… so I (finally) updated the dockerfile, using alpine:3.11 as the base image and python 3.
You can find my code here on github, and the docker image here on dockerhub.

How to use

Simple. There are two ways to use it.

  1. clone git repository
  2. pull image from dockerhub

Option 1, clone git repository

I've created a small installation script to make things easier. It will run docker build to build the ansible image, and it will copy the scripts under ./scripts/ to /usr/local/bin/.
And you're all set… well, almost.

Option 2, pull image from dockerhub

Instead of building the image, you can pull it from dockerhub into your local docker environment. The scripts assume that the docker image is tagged as ansible:2.9.0, but when you pull it from dockerhub, the name is brtlvrs/ansible-helper:v.0.3. So you need to retag it (or edit the scripts).

docker image tag brtlvrs/ansible-helper:v.0.3 ansible:2.9.0
docker image tag ansible:2.9.0 ansible:latest

The wrapper scripts are stored in the image, in the folder /2InstallOnHost.
To copy them out of the image, do the following:

docker create --name test ansible:2.9.0
docker cp test:/2InstallOnHost/ah* /usr/local/bin/
chmod +x /usr/local/bin/ah*
docker rm test

This will create a docker container named test, using the imported image ansible:2.9.0.
Then it copies the files from inside the container to /usr/local/bin/ and sets the executable bit on these files.
Finally, it removes the created docker container.
Now you should be able to run ah or ah-playbook.

Both scripts will mount the following volumes into the container:

  • ~/.ssh/id_rsa → /root/.ssh/id_rsa : needed to access localhost via ssh (ansible)
  • ~/.ssh/ : to access localhost via ssh (ansible)
  • $(pwd) → /ansible/playbooks : this mountpoint is the default location inside the container
  • /var/log/ansible/ansible.log : making the ansible logs persistent

Finishing touch

To make it work you need to do the following:

  • install docker-py on the docker host (if you want ansible to be able to control docker)
  • add the content of your public key (~/.ssh/id_rsa.pub) to the ~/.ssh/authorized_keys file, so you don't need to store the user password in the ansible inventory file.


And now it should work. Type 'ah' and you'll enter a temporary docker container running ansible. Type 'ah-playbook' and you'll execute the command ansible-playbook inside a temporary docker container.
Have fun.
