vExpert 2023


Yes, for the fifth year in a row I got the VMware vExpert status.
Being a VMware enthusiast, I blog about my experiences (not as much as I would like to) and try to be an active community member of several VMware expert groups on Facebook.

For this year I hope to blog more about my experiences transitioning into CNA (Cloud Native Applications), focusing on the platform technology that supports CNA. Think VMware Tanzu TAS and supporting software like Confluence, BOSH, Cloud Foundry and Kubernetes…..

vExpert NUC 2022

As mentioned in my previous blog, I received (as many other vExperts did) a ‘NUC’ (sponsored by Cohesity) to be used for a home lab. We were asked to blog about how we are going to use it in our homelab. So this is part 1 of …. (I have no clue how many) posts about my adventures with this NUC.

What are you?

To drop the b… it is not a NUC.
A NUC is a small form factor PC manufactured by Intel. This device is manufactured by Maxtang. It is a small form factor PC, a NUC look-alike.
And it has its own page on their site, titled ‘Intel Elkhart Lake J6412 Processor based compact fanless mini pc’.

The specs

CPU: Intel Elkhart Lake Celeron J6412 (4 cores)
Memory: SO-DIMM DDR4 max 32 GB
Ethernet: 2x nic – Realtek RTL8111H – 10/100/1000 Mbps
Storage: 1x M.2 for 2242/2280 SSD (SATA)
I/O: 2x USB 2.0, 2x USB 3.2, 1x USB-C
Display: 2x HDMI 2.0

Full specs can be found here

Usage

My ‘NUC’ is going to be used as a pfSense router. At the moment I use pfSense in a VM for my firewalling and routing at home. A VM is great, but in my homelab server (a Supermicro) memory is scarce, and having the router virtualised demands too much memory. At first my pfSense VM was only used for my homelab, but it became more and more the main firewall and router for my home network. Which introduced some challenges, namely losing internet when updating ESXi.
So moving it to dedicated hardware means no internet loss in my home for my family when I update my homelab.

Challenges

Installing pfSense: easy. And it looked like it was running fine. But once in a while my Wi-Fi was gone and my LAN networks weren’t available.
Scrolling through the logs I first found the UUID error.
Also, googling for the Realtek NICs and pfSense, I found out that the Realtek drivers aren’t updated in the FreeBSD image.

UUID error

For the UUID error I had to search the source code of pfSense (which is available on GitHub) for the error message, and I looked on the pfSense community forum (here). I found out that pfSense checks the UUID against a blacklist of UUIDs. And guess what….
My UUID was blacklisted….
The solution was to change the UUID. For this I needed to boot into the EFI shell and run the DMI edit tool for AMI. After a long search I found the needed software here.
I downloaded dmi-edit-efi-ami.zip, unpacked it to a USB flash drive, read the documentation and changed some DMI settings.
This solved the UUID error. But it didn’t solve the network issue.

Realtek NIC

I already knew that the Realtek NICs aren’t supported by ESXi, but I hoped pfSense (or FreeBSD for that matter) would support them.
But no. pfSense would work for a week, and then my network dropped, especially my VLANs.
After some searching I found that you can update the Realtek drivers for FreeBSD (here).

To install them I followed these steps in the pfSense shell.

Adding the package:

pkg add https://pkg.freebsd.org/FreeBSD:12:amd64/latest/All/realtek-re-kmod-197.00.pkg

Letting pfSense know it should load the Realtek drivers on boot by creating/editing the file loader.conf.local in /boot/:

if_re_load="YES"
if_re_name="/boot/modules/if_re.ko"

Then do a cold reboot of pfSense and check in the OS boot log that the re0/re1 interfaces show the correct driver version (1.97.00).

This solved the issue; I haven’t seen it since.
So now I have my firewall/router running on dedicated hardware, making it independent from my ESXi homelab server and leaving me with more available resources in my homelab to experiment with BOSH and CloudFoundry.

Back from Explore EU 2022 – first thoughts

This week I attended VMware Explore Europe. Finally a live event, and back in Barcelona.
And for the first time I was part of an orange ‘army’, the ITQ workforce.
I had a great time: getting to know my colleagues, learning some interesting stuff, meeting customers.

Multi cloud

Multi cloud, yeah, you couldn’t have missed it during Explore. It was the buzzword (and even the Wi-Fi password…..). It was no surprise of course. VMware has been advertising for years that their vision is to run any application on any infrastructure on any cloud. And even when you think you are not using bits and bytes from VMware, well, there is a good chance you still do. Does spring.io ring a bell?

Even though VMware Explore was a bit toned down compared to the last VMworld in Barcelona, there was still more than enough to do. It just depends on what your focus is. For me this year, I wanted to focus more on cloud native sessions, plus some sessions about new features for vSphere, and just immerse myself in the vExpert and Code communities.
Being part of the vExpert community, you had a treat this year if you signed up for a barebone NUC. The NUC, sponsored by Cohesity, is going to be a great addition to my homelab and will make life a bit easier for my ESXi host.

I will run pfSense on the NUC, so my ESXi host is dedicated to the homelab and no longer also running pfSense. This gives me more flexibility when upgrading ESXi (at the moment the impact would be that internet access is down, which is not appreciated at my home 🙂 ). And I don’t have to reserve CPU and memory resources anymore.
But that is for another time, another blog(story).

key takeaways / highlights

My key takeaways / highlights of Explore 2022 are:

  1. 90DaysOfDevOps by Michael Cade
    Michael shared his journey of discovering DevOps and documented it on GitHub.
    My intention is to start the 90DaysOfDevOps journey myself, but also to check whether it is interesting for the sysadmins I encounter. There is a gap to bridge for sysadmins, because the way of working is becoming more dev-like, and the dev world can look overwhelming. What I expect from this journey is a better understanding of, and hands-on experience with, the DevOps tooling.
  2. Accelerating Your Career: The power of Mentoring (PCB2454EUR)
    A great panel discussion about mentorship in a company and the benefits it brings to both the company and its employees.
  3. Keeping a University Medical Centre Running During a VCF Transition (MCLB1452EUR)
    An honest story about the successes and pitfalls of the transition. Co-presented by a buddy of mine.
  4. Building and Running Enterprise-Grade Spring Applications in the Cloud (CNAB1953EUR)
    I’ve heard of the Spring framework, but never knew it was a VMware product, and never knew what it was about. Now I do 🙂
  5. How to Be Successful at Modernizing Apps and Data, and the Adoption of Cloud (CNAB2113EUR).
  6. And off topic …. There is a nice grill restaurant somewhere in Barcelona… I’ve marked the spot …..

What are your key takeaways of VMware Explore 2022, and why?

VMware cloud account validation failed – alternative workaround on KB88531

Recently we had an issue with the validation of a VMware cloud account in vRA: the validation didn’t work.
A colleague at the customer site found the VMware KB article that addresses this issue (KB article 88531).
This post is about an alternative approach to the same workaround mentioned in the KB article. This is not an in-depth article; you should have knowledge of vRA and using REST API calls.

Issue

The issue is that the certificate info has not been stored with the cloud account.
This can happen when the vCenter SSL certificate is renewed and in vRA you accept the new certificate, but you don’t hit the ‘SAVE’ button.
When accepting the new certificate, you store it in the certificate store of vRA. But because you didn’t hit save, that info wasn’t stored with the cloud account registration.
According to the article this has been solved in vRA 8.9; for prior versions there is a workaround.

Workaround

The workaround is correct. But what I don’t like about it is that there is no explanation of what you are doing.
The workaround uses the REST API interface of vRA to store the correct certificate with the cloud account. And if they had mentioned that, maybe the reader would think… wait…. REST API…. can I use the Swagger interface…..?

Yes you can.
The API calls we need are:

  • GET /iaas/api/cloud-accounts
    To find the cloud-account-id of the vCenter Cloud Account we are going to update
  • PATCH /iaas/api/cloud-accounts/{cloud-account-id}
    To store the new certificate information with the vCenter Cloud account

Alternative

  1. Store the vCenter certificate (including the chain) as a PEM file.
  2. Go to the Swagger UI (which can be found at {root-url}/automation-ui/api-docs/).
  3. Go to the ‘Infrastructure as a Service’ section.
    In this section you will find the API calls we need.
  4. Authenticate with vRA using a Bearer token.
    (Tip: you can get a Bearer token using the REST API calls on the swagger ui and/or create a vRO action to get a Bearer token)
  5. Search for the cloud account id using the 'GET /iaas/api/cloud-accounts' call.
  6. Convert the PEM file to a single line where the line endings are replaced with \n.
  7. Update the cloud account with the new certificate information using the 'PATCH /iaas/api/cloud-accounts/{cloud-account-id}?apiVersion=2021-07-15' API call (see the sketch after this list).
    (note: using the url parameter apiVersion is crucial)
    You should get an HTTP status code of 202 as confirmation.
  8. Run the validation of the cloud account in vRA; it should work now.
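
To make the REST calls a bit more concrete, here is a minimal PowerShell sketch of steps 5 to 7. It assumes you already have a Bearer token; the vRA hostname, the cloud account name and the certificateInfo payload field are assumptions based on the IaaS API documentation, so verify the exact schema in your own Swagger UI.

#-- Sketch only: hostname, token, account name and payload field names are assumptions.
$vra     = "https://vra.example.local"      #-- hypothetical vRA FQDN
$token   = "<bearer token>"                 #-- obtained via the login API or a vRO action
$headers = @{ Authorization = "Bearer $token" }

#-- step 5: find the cloud account id of the vCenter cloud account
$accounts = Invoke-RestMethod -Method Get -Uri "$vra/iaas/api/cloud-accounts" -Headers $headers -SkipCertificateCheck
$account  = $accounts.content | Where-Object { $_.name -eq "my-vcenter" }   #-- hypothetical account name

#-- step 6: read the PEM (including the chain) and join it into one newline-separated string
$pem = (Get-Content -Path ".\vcenter-chain.pem") -join "`n"   #-- ConvertTo-Json escapes the newlines to \n

#-- step 7: PATCH the cloud account with the certificate; the apiVersion parameter is crucial
$body = @{ certificateInfo = @{ certificate = $pem } } | ConvertTo-Json -Depth 5
Invoke-RestMethod -Method Patch -Headers $headers -ContentType "application/json" -Body $body `
    -Uri "$vra/iaas/api/cloud-accounts/$($account.id)?apiVersion=2021-07-15" -SkipCertificateCheck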

invoke-command instead of invoke-expression

In my previous post (which you can find here) I used the Invoke-Expression cmdlet to run a PowerShell script that was downloaded with Invoke-WebRequest.
And this was a good solution. The code that was downloaded and executed was a PowerShell script that would run a private function. This private function was formatted with the three scriptblocks Begin, End and Process.
Parameters were downloaded with the same construct from a git repository and placed in a PowerShell object called $P.
With this approach I separated the parameters from the actual code.
Using GIT I was able to version my parameters file separately from my script code. This setup is working great, and it gives flexibility by leaving the code untouched when changing parameters.

But…

Yeah, a but…. I still needed a way to pass parameters/arguments on the command line. Using Invoke-Expression… well, that wasn’t possible.
So I looked into Invoke-Command, which has an -ArgumentList parameter, making it possible to pass one or more arguments to the script. Using named parameters isn’t possible, though, which is not what I was looking for.
So to support named parameters, I decided to introduce just one parameter. This parameter is a JSON string, making it possible to pass multiple parameters merged into a JSON object.

The only challenge with this is that all the interpreters the code passes through should leave the JSON string intact, including the quotes. And I didn’t want to escape any quotes; that would be messy and error-prone. Encoding it solves this issue: the argument is a base64 encoded JSON string.
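
As an illustration, building such an argument could look like the minimal sketch below (the parameter names are made up for the example).

#-- Sketch: build a JSON string from a hashtable and base64 (UTF-8) encode it.
$params  = @{ universe = "42"; vmNames = @("vm01","vm02") }   #-- hypothetical parameters
$json    = $params | ConvertTo-Json -Compress
$encoded = [System.Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($json))
$encoded   #-- pass this string via -ArgumentList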

Is it secure?

Well, no…. not at all… it is just base64 encoding. My goal was not to make it more secure, but to make sure the string wouldn’t be changed by the different shell interpreters.
Of course, you can make it more secure by using private/public key pairs. You could use a docker volume containing the keys, or other secure methods. When using base64 encoding, just don’t pass any sensitive data (passwords) with it. There are other, more secure, approaches for this with containers.

parameter approaches

This setup gives me different approaches to pass parameters to the script. The more static parameters are stored in a .json file in a GIT repository.
And the more dynamic parameters (like VM names to start) are passed via the base64 encoded JSON string.

What changed?

I changed the following items:

  • changed the entrypoint string
    • using Invoke-Command instead of Invoke-Expression
    • placing Invoke-WebRequest inside a scriptblock
    • using -ArgumentList to pass a base64 encoded JSON string
  • changed the PowerShell wrapper script to decode the input

Docker Entrypoint

The previous docker entrypoint was something like

pwsh -Command invoke-expression '$(Invoke-WebRequest -SkipCertificateCheck -uri ' + <git URI> + ' -Headers @{"Cache-Control"="no-store"} )'

The new entrypoint looks like this:

pwsh -Command invoke-command -scriptblock ([scriptblock]::Create( (Invoke-WebRequest -SkipCertificateCheck -uri <git URI> -Headers @{"Cache-Control"="no-store"} ).content ) ) -ArgumentList <base64 coded JSON string>

As you can see, the one-liner has grown.
I used the -ScriptBlock and -ArgumentList parameters of Invoke-Command. The scriptblock contains the Invoke-WebRequest cmdlet, which downloads the RAW version of the PowerShell script from the GIT repository.
The Invoke-Command cmdlet then executes this scriptblock, passing the arguments from -ArgumentList to the script.
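
To see the same pattern without docker in between, here is a minimal local test sketch; template.ps1 stands for a local copy of the wrapper script shown in the next section.

#-- Sketch: run a script as a scriptblock and hand it a base64 encoded JSON argument.
$json    = '{"universe":"hello"}'
$encoded = [System.Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($json))
$script  = Get-Content -Path ".\template.ps1" -Raw
Invoke-Command -ScriptBlock ([scriptblock]::Create($script)) -ArgumentList $encoded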

Script Layer

The script has a wrapper layer and a main layer (containing the Begin, End and Process blocks), with the Process block containing the specific code to run.

<#
.SYNOPSIS
    template.ps1 powershell
.PARAMETER inputObject 
    A JSON string base64 (UTF-8) encoded
#>

param(
    [string][Parameter(
        ValueFromPipeline = $true, 
        ValueFromPipelineByPropertyName = $true,
        HelpMessage="JSON string base64 (UTF-8) encoded.")]$inputObject=""
)

function main {

}
#-- calling the real powershell code to run
main

main layer (function)

I chose to use the function method to preserve my code structure. For most of my PowerShell code I use the Begin, End and Process scriptblocks to structure the code, and I didn’t want to step away from that approach.

function main {
    <#
    .SYNOPSIS

    #>
    Begin{
        $uri = <url to RAW version of parameter file>
        #-- trying to load parameters into $P object, preferably json style
        try { $webResult= Invoke-WebRequest -SkipCertificateCheck -Uri $uri -Headers @{"Cache-Control"="no-store"} }
        catch  {
            write-host ("uri : " + $uri)
            throw ("Request failed for loading parameters.json from uri: " + $uri)
        }
        # validate answer
        if ($webResult.StatusCode -match "^2\d{2}" ) {
            # statuscode is 2.. so convert content into object $P
            $P = $webResult.content | ConvertFrom-Json 
        } else {
            throw ("Failed to load parameter.json from repository. Got statuscode "+ $webRequest.statusCode)
        }

    #-- private functions
        function exit-script {
            ...
        }

    #--- process inputObject (the argument passed via the cmd line)
        # decode that inputObject as UTF-8 base64 and convert it to a powershell object
        $A= ConvertFrom-Json -InputObject ([System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($($inputObject)))) -ErrorAction SilentlyContinue -ErrorVariable err1
        if ($err1) {
            write-host "Failed to process input object"
            exit-script
        }
    }
    End {
        exit-script -exitcode 0
    }
    Process {
        #-- the code that is doing the real work
        write-host ($P.world) #-- from the parameter.json file
        write-host ($A.universe) #-- passed via the cmd line argument
    }
}

Final

So I hope this blog gives you some ideas for your own code challenges.
I’m going to write a more structured set of articles, a deep dive into my FaaS-like setup. So keep following this blog if you’re interested.
And comments are always welcome.

Dynamically executing powershell code from GIT – a FaaS way

In this post I mentioned that I was dipping my toes into vRA.
And…. yeah, it is not dipping anymore, it is a deep dive… 🙂 but that is for another blog.

This blog is about a challenge I solved by using a FaaS approach. FaaS stands for Function as a Service.
I have a small UPS running for my home environment, and I found out that it can support my home environment for about 30 minutes. And then poof… power off…. So I was searching for a solution to shut down my vSphere environment, triggered from a Home Assistant / Node-Red automation flow that monitors my UPS status.

My first thought was to use the REST API of vSphere / VMware ESXi. But not all the actions I need are published (like shutting down an ESXi host).
And I want to be as dynamic as possible.
I want to shut down / suspend all VMs except the VMs that are tagged as coreVM. So the code I write doesn’t contain VM names or IDs, but filters out all VMs that have the vSphere tag coreVM.

The Idea

As mentioned, the automation should find all VMs that need to be suspended or shut down (depending on whether VMware Tools is running). VMs that have the vSphere tag UPS/coreVM are ignored.

These VMs are controlled with the startup/shutdown feature of my ESXi host.
The total automation flow is a three-phase flow. Phase 1: shut down all but the core VMs. Phase 2: shut down the ESXi host, which automatically shuts down the VMs controlled via the startup/shutdown feature. Phase 3: shut down the Synology NAS and Home Assistant.
The core VMs are my vCenter, Log Insight and router VM.
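
As an illustration of phase 1, a minimal PowerCLI sketch could look like the following. It assumes an existing Connect-VIServer session and a tag named coreVM in a category named UPS; it is a sketch of the idea, not the exact script I run.

#-- Sketch of phase 1: shut down or suspend every powered-on VM that is not tagged UPS/coreVM.
$coreVMs = Get-TagAssignment -Category 'UPS' |
           Where-Object { $_.Tag.Name -eq 'coreVM' } |
           Select-Object -ExpandProperty Entity

Get-VM | Where-Object { $_.PowerState -eq 'PoweredOn' -and $_.Name -notin $coreVMs.Name } |
    ForEach-Object {
        if ($_.ExtensionData.Guest.ToolsRunningStatus -eq 'guestToolsRunning') {
            #-- VMware Tools is running: request a clean guest shutdown
            Shutdown-VMGuest -VM $_ -Confirm:$false | Out-Null
        } else {
            #-- no VMware Tools: suspend instead of a hard power-off
            Suspend-VM -VM $_ -Confirm:$false | Out-Null
        }
    }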

The tools

The tools I use are:

  • Home Assistant with
    • NUT integration (monitoring my UPS)
    • Node-Red flow automation (using HTTP REST API calls to control docker via Portainer)
  • Docker CE with Portainer CE running on a Synology NAS
    • Docker image vmware/powerCLI
    • Portainer controlling the docker environment and leveraging control via its APIs
  • Gitea GIT server running locally (containing my script and parameter files)

the FaaS way

For phase 2 I try to use a FaaS approach, meaning I have a function for shutting down / suspending VMs. This function runs in a temporary runtime environment (a docker container) and is only available at runtime. The function is not part of the runtime environment; at runtime it is downloaded from a GIT repository and executed.
This gives the advantage of maintaining the runtime environment separately from the scripts (or the other way around). And every execution starts with a clean environment.
For the function I use PowerCLI, because I haven’t found an API call in vSphere 7.0 that will shut down an ESXi host, and filtering VMs on vSphere tags via the API was (for me) a bridge too far.

Using a container also gives me the possibility to separate my vSphere credentials from my script, by saving them in a docker volume which is only mounted at runtime. The function itself contains no credentials; it looks for a file in that docker volume.
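
Inside the function, reading those credentials could look like this minimal sketch; the mount path /secrets and the JSON layout are assumptions, not my actual file.

#-- Sketch: read credentials from a file in a volume that is only mounted at runtime.
$cred = Get-Content -Raw -Path '/secrets/vcenter.json' | ConvertFrom-Json   #-- hypothetical path and layout
Connect-VIServer -Server $cred.vcenter -User $cred.user -Password $cred.password | Out-Null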

The hurdles I encountered were:

  • how to run the script from a git repo
    • by using a PowerShell one-liner: Invoke-Expression with Invoke-WebRequest, using the URL to the raw representation of a file (script) in the git repo
    • using this one-liner as the entrypoint parameter when creating the container
  • how to bypass caching by the web browser
    • using "Cache-Control"="no-store" in the request header
  • running a full PowerShell script via Invoke-Expression
    • write the full code block as a PowerShell function (with Begin, End and Process blocks) and call that function within the script

The PowerShell one-liner is:

pwsh -Command invoke-expression '$(Invoke-WebRequest -SkipCertificateCheck -uri ' + <git URI> + ' -Headers @{"Cache-Control"="no-store"} )'

The <git URI> is the URL to the raw representation of the script file in the GIT repo. When leveraging the Portainer/docker API to create the container, you need to use the JSON notation for the entrypoint. The one-liner will look something like:

["pwsh",
"-Command",
"invoke-expression",
'$(Invoke-WebRequest -SkipCertificateCheck -uri ' + <git URI> + ' -Headers @{"Cache-Control"="no-store"} )']
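
For reference, creating (and starting) such a container through Portainer's Docker API proxy could look roughly like the PowerShell sketch below. In my setup Node-Red performs these calls; the endpoint id, image tag, volume name and X-API-Key header are assumptions, so check them against the Portainer and Docker API documentation.

#-- Sketch: create and start a short-lived PowerCLI container via Portainer's Docker API proxy.
$portainer = "https://portainer.example.local:9443"      #-- hypothetical Portainer URL
$headers   = @{ "X-API-Key" = "<portainer access token>" }

$body = @{
    Image      = "vmware/powerclicore"                    #-- hypothetical image tag
    Entrypoint = @("pwsh", "-Command", "invoke-expression",
                   '$(Invoke-WebRequest -SkipCertificateCheck -uri <git URI> -Headers @{"Cache-Control"="no-store"} )')
    HostConfig = @{ Binds = @("secrets:/secrets:ro") }    #-- volume with credentials, mounted read-only
} | ConvertTo-Json -Depth 5

$container = Invoke-RestMethod -Method Post -Headers $headers -ContentType "application/json" -Body $body `
    -Uri "$portainer/api/endpoints/1/docker/containers/create?name=ups-shutdown" -SkipCertificateCheck

Invoke-RestMethod -Method Post -Headers $headers `
    -Uri "$portainer/api/endpoints/1/docker/containers/$($container.Id)/start" -SkipCertificateCheck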

Powershell script format / the snag

The snag with using Invoke-Expression is that it can’t handle a PowerShell script that has Begin, End and Process code blocks. Since this is how I write my code and I didn’t want to deviate from it, I had a snag.
The solution was to write a PowerShell script that contains a private function plus the execution of that function, like this:

function main {
    <#
    .SYNOPSIS
        Example code for running a full script from a URI.
    .DESCRIPTION
        Example code to run a powershell script with dynamic blocks Begin, Process and End.
        Loading parameters from a JSON file into a $P object.
        By wrapping the code in the function main, we can use the Begin, Process and End scriptblocks when calling it with Invoke-Expression.
        The Process block contains the main code to execute.
        The Begin and End blocks are mainly used for setting up the environment and closing it.
    .EXAMPLE
    Run the following cmdline in a powershell session.
    Invoke-Expression (Invoke-webrequest <URL>).content
    .NOTES    
    #>
    Begin {
        #=== Script parameters

        #-- GIT repository parameters for loading the parameters.json
        $scriptName="startVMs"
        $scriptGitServer = "https://....."   # IPv4 or FQDN of GIT server
        $scriptGitRepository = "organisation/repo/" # uri part containing git repository
        $scriptBranch = "master/" # GIT branch
        $scriptrootURI = $scriptGitServer+$scriptGitRepository+"raw/branch/"+$scriptBranch

        #==== No editing beyond this point !! ====
        $ts_start=get-date #-- Save current time for performance measurement

        #--- write log header to console
        write-host
        write-host "================================================================="
        write-host ""
        write-host "script: $scriptName.ps1"
        write-host ""
        write-host "-----------------------------------------------------------------"

        #-- trying to load parameters into $P object, preferably json style
        try { $webResult= Invoke-WebRequest -SkipCertificateCheck  -Uri ($scriptrootURI+$scriptName+".json") -Headers @{"Cache-Control"="no-store"}  }
        catch  {
            write-host "uri : " + $scriptrootURI
            throw "Request failed for loading parameters.json with uri: " + $webResult 
        }
        # validate answer
        if ($webResult.StatusCode -match "^2\d{2}" ) {
            # statuscode is 2.. so convert content into object $P
            $P = $webResult.content | ConvertFrom-Json 
        } else {
            throw ("Failed to load parameter.json from repository. Got statuscode "+ $webRequest.statusCode)
        }
    }
    End {

    }
    Process {

    }
}
main

What it does: the script runs the function main. That function is basically the full PowerShell script. In the Begin code block, Invoke-WebRequest is used to load a .json file and convert it to a PowerShell object called $P.
This object contains all the parameters used in the rest of the script (like the vCenter FQDN).

Result

The result is that, from a monitoring trigger, Node-Red does some REST API calls to Portainer to create, run and delete a docker container based on a vmware/powerCLI image. During the lifetime of this container a volume with authorisation information is mounted, and a PowerShell one-liner is executed which runs PowerShell code loaded directly from a GIT repository.
With this setup I can run, on demand, any PowerShell script that doesn’t need user interaction, maintained in a GIT repository.

I hope you enjoyed this post. Do you have questions or comments? Please leave them below or reach out to me on twitter.

dipping my toes in vRA 8

re-visiting an old friend?

I’ve been quiet on my blog for the last half year. That is what moving to a new house, seeing my son growing up, renovating a bathroom, setting up my home automation etc… will do…
So on the last day of 2021, looking at the morning sky from my office at home, I decided to blog about my reintroduction experience with vRA 8.

view from my attic

Yeah, reintroduction.

A few years ago I followed the VMware ICM (installation, configuration and management) course for vRA 7. Five days of learning about fabric groups, blueprints, Orchestrator, and the several VMs needed to build a vRealize Automation platform. By trade I’m a developer / automation engineer. The course was more a means to an end. But that end never happened. Yes, I’ve built a vRA 7 platform, and on another assignment I rescued a vRA 7 platform (it was falling apart)… But really developing automation with vRA… never happened.
And my warm feelings for vRA 7 went away. A complex platform, a memory game of finding the correct terminology and links in the vRA portal… and that awful Java client for Orchestrator. No success stories for me.

Until

Until this year…
I’m working with a customer who already has a vRealize platform in place, but needs support in developing: in helping their administrators develop a developer mindset. And they have vRA 8.

vRealize automation 8

Of course I had seen VMware’s presentations about vRA 8. And to be honest, I started to become positive about vRA. No mix of appliances and Windows VMs, no MS SQL VM… just one type of appliance, running Kubernetes and distributing the several services over Kubernetes pods.
Under the hood, vRA 8 is completely different from its predecessor. And no Java client. And the principle that everything should be code…

In the last two weeks I’ve (with some help) deployed a new vRA 8 platform and developed some automation, using the existing vRA 7 platform as a reference. We decided not to use the migration tool, but to rebuild the functionality. The main reason for this is the different approaches between vRA 7 and 8.
vRA 8 is focused on tag-based, policy-driven placement, meaning you tag resources with several kinds of metadata, like environment (DMZ, Dev, Production, Test), OS, storage SLA, backup SLA. And you use constraints in blueprints and projects to guide the deployment.
You use vRO actions as external sources for option values and default values in the vRA request forms, to help the end user make selections.
You don’t develop one big monolithic automation, but need to slice it up into smaller parts, thinking ‘can I reuse it’, ‘can I make it more general and dynamic’, etc.
And in the end you have some simple workflows and blueprints, but you build a catalog with dynamic items, helping the administrators and/or developers in deploying VMs.

My experience with developing in vRA 8 is a positive one. As with any language or automation platform, the main points to take away are:

  • find out how common programming constructs are set up, like if…then…else, for loops, regular expressions etc…
  • learn to code in a structured way. Use readable names for variables, constants, objects etc… Use comments in your code to highlight and explain what the next lines of instructions do.
  • The more high level your code is, the more you should use comments to explain what that code or workflow should do.
  • Learn by failing… take one function you are trying to use, build a new workflow around it, and test it… yes, it can be time consuming… but you’ll learn.
  • work with other developers; learn from their way of structuring code and from their approach to automation.
  • First try to get a picture of what needs to be automated; describe it in manual actions. How would an administrator solve this task by hand?
    Try to get a broader sense of how the administrators are working, and ask why things are as they are, what the reason / decision behind the way of working is…..
    This is one of the hard parts of automation. Don’t start right away on the keyboard, but try to understand what you are being asked to automate.

Final

As you can tell, my experiences with vRA 8 are positive. You need to invest time to understand the platform… but it makes more sense than vRA 7 did. And it is completely different.
One of the main challenges with automation is selling it to the organisation… and making it foolproof… it takes time… take small steps so the chances for success are bigger. And celebrate them.

Code repository maintenance.

I’ve changed my code repository on GitHub.

And as someone already noticed, this breaks some links on my blog.
I’m in the process of fixing them, but if you find a link that is no longer working, please leave a comment. I’ll try to fix it as soon as possible.

The new code repository can be found here

IFO: remote-ssh VSC on photonOS

I’m using Visual Studio Code a lot. It has a lot of extensions that will make your life a bit easier, and one of my top favorites has become Remote-SSH.

With Remote-SSH you can use VSC against a remote server/VM. It uses the SSH protocol to connect to the remote server and install the remote component, after which you can use the remote system as if it were local.

What am I using it for?

Well, I have a homelab to get accustomed to and explore VMware software. I use it to fiddle around with software solutions and get accustomed to solutions like GIT, Docker, Ansible and Kubernetes. And all those solutions are text based… Yes, of course there are GUI shells for these solutions, but those will not help you become proficient with them.

So you need an IDE, and my choice is VSC with Remote-SSH.
As a VMware fanboy, I like to use photonOS for my Linux VMs.

How to configure photonOS for VSC & remote-ssh

I make the following assumptions:

  • it is a test/dev environment (we log in as root), not a production environment
  • tdnf updateinfo and tdnf -y update have been run
  • the photonOS VM has an internet connection

To make Remote-SSH work with photonOS you need to do the following:

  1. install tar: tdnf -y install tar
    Remote-ssh uses tar to extract its remote server software
  2. edit sshd_config at /etc/ssh/sshd_config and set the following settings
    1. PermitRootLogin yes
    2. AllowTcpForwarding yes
  3. (bonus / optional) Add your public SSH key to the <user>/.ssh/authorized_keys file.
    When using root to log in, the location is /root/.ssh/authorized_keys
    else it is /home/<username>/.ssh/authorized_keys


IFO: allow apps downloaded from anywhere on mac OS


So, Apple is big on security, which is a good thing.
But sometimes it is too strict.
I’m busy remodelling my homelab, and one of the actions is reinstalling a clean vCenter appliance.
And I thought, let’s do it from the CLI!!!


Yeah…. so I ran vcsa-deploy and got the error that the app is downloaded from the internet and not to be trusted.
You can allow it via the System Preferences, but the macOS Gatekeeper keeps irritating you with warnings about all the libraries that are loaded.

After some googling around I found this site
3 Ways to Allow Installation of Apps from Anywhere in macOS Catalina (techsviewer.com)
And the CLI option to allow apps downloaded from anywhere was winking at me. Yes, that was the option I wanted. So even though it was written for macOS Catalina, why not try it on macOS Big Sur.

And it worked… To allow vcsa-deploy to function properly, just do the following:

  1. open terminal
  2. execute the command: sudo spctl --master-disable
  3. go to System Preferences -> Security & Privacy
  4. tick the ‘anywhere’ option under Allow apps downloaded from:
  5. Run vcsa-deploy

Well, to be security aware, the best practice is to remove the anywhere option afterwards; just follow these steps:

  1. open terminal
  2. execute the command: sudo spctl --master-enable

and you’re done.

Making these changes is (of course) at your own risk.
