dipping my toes in vRA 8

re-visiting an old friend?

I’ve been quiet on my blog for the last half year. That is what moving to a new house, seeing my son grow up, renovating a bathroom, setting up my home automation etc… will do…
So on the last day of 2021, looking at the morning sky from my home office, I decided to blog about my reintroduction to vRA 8.

view from my attic

Yeah, reintroduction.

A few years ago I followed the VMware ICM (installation, configuration and management) course for vRA 7. Five days of learning about fabric groups, blueprints, Orchestrator, and the several VMs needed to build a vRealize Automation platform. By trade I’m a developer / automation engineer. The course was more a means to an end. But that end never happened. Yeah, I’ve built a vRA 7 platform. And on another assignment, rescued a vRA 7 platform (it was falling apart)… But really developing automation with vRA… never happened.
And my warm feelings for vRA 7 went away. A complex platform, a memory game of finding the correct terminology and links in the vRA portal… and that awful Java client for Orchestrator. No success stories for me.

Until

Until this year…
I’m working with a customer who already has a vRealize platform in place, but needs support in developing: in helping their administrators develop a developer mindset. And they have vRA 8.

vRealize automation 8

Of course I had seen VMware’s presentations about vRA 8. And to be honest, I started to become positive about vRA. No mix of appliances and Windows VMs, no MS SQL VM… just one type of appliance, running Kubernetes and distributing its services over Kubernetes pods.
Under the hood, vRA 8 is completely different from its predecessor. And no Java client. And the principle that everything should be code…

In the last two weeks I’ve (with some help) deployed a new vRA 8 platform, and developed some automation, starting from the existing vRA 7 platform. We decided not to use the migration tool, but to rebuild the functionality. The main reason for this is the different approach between vRA 7 and 8.
vRA 8 is focused on tag and policy driven placement. Meaning you tag resources with several kinds of metadata, like environment (DMZ, dev, test, production), OS, storage SLA, backup SLA. And you use constraints in blueprints and projects to guide the deployment.
You use vRO actions as external sources for option values and default values in the vRA request forms, to help the end user make selections.
You don’t develop one big monolithic automation, but need to slice it up into smaller parts. Thinking of ‘can I reuse it?’, ‘can I make it more generic and dynamic?’, etc.
And in the end you have some simple workflows and blueprints, but you build a catalog with dynamic items, helping the administrators and/or developers in deploying VMs.
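To give an idea of what that tag driven placement looks like, here is a minimal sketch of a vRA 8 cloud template with a constraint tag. The input values, image and flavor names are illustrative, not taken from a real environment.

formatVersion: 1
inputs:
  env:
    type: string
    enum:
      - dev
      - test
      - production
resources:
  vm1:
    type: Cloud.Machine
    properties:
      image: ubuntu20          # image mapping name, an assumption
      flavor: small            # flavor mapping name, an assumption
      constraints:
        - tag: 'env:${input.env}'   # placement follows the env tag on the tagged resources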

My experience with developing in vRA 8 is a positive one. As for any language or automation platform, the main points to take away are:

  • find out how common programming constructs should be set up, like if…then…else, for loops, regular expressions etc…
  • learn to write structured code. Use readable names for variables, constants, objects etc… Use comments in your code to highlight and explain what the next lines of instructions do.
  • The more high level your code is, the more you should use comments to explain what that code or workflow should do.
  • Learn by failing… take one function you are trying to use, build a new workflow around it, and test it… yes, it can be time consuming… but you’ll learn.
  • work with other developers, learn from the way they structure code and from their approach to automation.
  • First try to get a picture of what needs to be automated; describe it in manual actions. How would an administrator solve this task by hand?
    Try to get a broader sense of how the administrators are working, and ask why things are as they are, what the reason / decision behind the way of working is…
    This is one of the hard parts of automation. Don’t start right away on the keyboard, but try to understand what you are being asked to automate.

Final

As you can tell, my experiences with vRA 8 are positive. You need to invest time to understand the platform… but it makes more sense than vRA 7 did. And it is completely different.
One of the main challenges with automation is selling it to the organisation… and making it donkey proof… it takes time… take small steps so the chances for success are bigger. And celebrate them.

Code repository maintenance.

I’ve changed my code repository on github.

And as someone already noticed, this breaks some links on my blog.
I’m in the process of fixing them, but if you find a link that no longer works, please leave a comment. I’ll try to fix it as soon as possible.

The new code repository can be found here

IFO: remote-ssh VSC on photonOS

I’m using Visual Studio Code a lot. It has a lot of extensions that will make your life a bit easier, and one of my top favorites has become remote-ssh.

With remote-ssh you can use VSC against a remote server/VM. It uses the ssh protocol to connect to the remote server and install the remote component, after which you can work on the remote system as if it were a local one.
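Remote-ssh picks up hosts from your ~/.ssh/config file, so a minimal entry like the sketch below is all it takes to make a VM show up in the connect dialog (host name, IP and key path are made-up examples):

Host photon-lab
    HostName 192.168.100.10
    User root
    IdentityFile ~/.ssh/id_rsa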

What am I using it for?

Well, I have a homelab to explore VMware software, and I use it to fiddle around with solutions like Git, Docker, Ansible and Kubernetes. And all those solutions are text based… Yes, of course there are GUI shells for these solutions, but those will not help you become proficient with them.

So you need an IDE, and my choice is VSC with remote-ssh.
As a VMware fanboy, I like to use photonOS for my linux VMs.

How to configure photonOS for VSC & remote-ssh

I make the following assumptions:

  • it is a test/dev environment (we log in as root), not a production environment
  • tdnf updateinfo and tdnf -y update have been run
  • the photonOS VM has an internet connection

To make remote-ssh work with photonOS you need to do these things (a condensed command sketch follows the list):

  1. install tar: tdnf -y install tar
    remote-ssh uses tar to extract its remote server software
  2. edit /etc/ssh/sshd_config and set the following settings
    1. PermitRootLogin yes
    2. AllowTcpForwarding yes
  3. (bonus / optional) Add your public SSH key to the <user>/.ssh/authorized_keys file.
    When using root to log in, the location is /root/.ssh/authorized_keys,
    else it is /home/<username>/.ssh/authorized_keys
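Condensed into commands it looks roughly like this; the sed patterns assume both directives are already present (possibly commented out) in sshd_config:

# on the photonOS VM
tdnf -y install tar
sed -i -E 's/^#?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
sed -i -E 's/^#?AllowTcpForwarding.*/AllowTcpForwarding yes/' /etc/ssh/sshd_config
systemctl restart sshd

# optional, run from your workstation to add your public key
ssh-copy-id root@<photonOS VM ip>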

See also

IFO: allow apps downloaded from anywhere on mac OS

.

So, Apple is big on security. Which is a good thing.
But sometimes, it is too strict.
I’m busy remodelling my homelab, and one of the actions is reinstalling a clean vCenter appliance.
And I thought: let’s do it from the CLI!!!


Yeah… so I ran vcsa-deploy and got the error that the app was downloaded from the internet and is not to be trusted.
You can allow it via the system preferences, but the macOS Gatekeeper keeps irritating you with warnings about all the libraries that are loaded.

After some googling around I found this site
3 Ways to Allow Installation of Apps from Anywhere in macOS Catalina (techsviewer.com)
And the CLI option to allow apps downloaded from anywhere was winking at me. Yes, that was the option I wanted. So even though it was written for macOS Catalina, why not try it on macOS Big Sur?

And it worked… to allow vcsa-deploy to function properly, just do the following:

  1. open terminal
  2. execute the command: sudo spctl --master-disable
  3. go to System Preferences -> Security & Privacy
  4. tick the ‘Anywhere’ option under ‘Allow apps downloaded from:’
  5. Run vcsa-deploy

Well, to be security aware, the best practice is to remove the anywhere option again; just follow these steps:

  1. open terminal
  2. execute the command: sudo spctl --master-enable

and you’re done.

Making these changes is (of course) at your own risk.

IFO: max. lifetime SSL cert is 1 year

Yes, certificate misery.
In the wisdom of great corporations, for our safety, it has been decided that the maximum SSL/TLS certificate validity is one year.
Yes, really… don’t believe me? Just search for it:
ssl lifetime 1 year at DuckDuckGo

From the security side of things this is a good thing, because it mitigates the risk of a compromised certificate.
But from an administration point of view….. HEADACHE.
Especially for certificates that are used internally in your production sites.
Now you need to replace the certificates on your servers every year.
At least for those servers that run web services, because your browser is going to nag you that the site isn’t safe anymore. Yes, another warning.
And you know what happens with warnings: in the end they will think for you, and won’t allow you to access the website anymore.

So SSL certificate monitoring becomes more important, and having a plan / replacement strategy for SSL certificates would be a good thing to have.
Do you have an up-to-date overview of all the SSL certificates in your network?
Maybe it’s a good idea to keep it up-to-date and monitor them.
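For a quick check, or as the basis of a monitoring script, openssl can show the expiry date of an endpoint’s certificate. The host name below is just an example:

# print the certificate expiry date of a web service
echo | openssl s_client -connect myserver.lab.local:443 2>/dev/null | openssl x509 -noout -enddate

# exit with a non-zero code when the certificate expires within 30 days (2592000 seconds)
echo | openssl s_client -connect myserver.lab.local:443 2>/dev/null | openssl x509 -noout -checkend 2592000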

truncate docker container log

Sometimes you just want to clear the logs of a docker container.
For instance, I’m running dnsmasq in a docker container and I’m troubleshooting Raspberry Pi PXE boot (yes, to run ESXi-ARM stateless…).
And dnsmasq is also serving my homelab’s domain.
Then it can be an annoyance to see all the log entries since the container started when you run docker logs <container name>.
And you just want to start with a clean slate.

It is the way

Yes, there is a way. And when you google it, you’ll find several blog posts giving you the solution, which is:

pi@raspberrypi~ $ sudo docker inspect --format='{{.LogPath}}' <container name>
pi@raspberrypi~ $ sudo truncate -s 0 <path presented by previous command>

And it works great… but… typing 2 lines of code… copy / pasting the log path into the second command… way too difficult 🙂
So what do you think about this?

pi@raspberrypi~$ sudo truncate -s 0 "$(docker inspect --format='{{.LogPath}}' <container name>)"

Yes, it will erase all logging, but that is the purpose of this whole exercise.

Addendum

If the one-liner isn’t helping, how about creating a small script for just one purpose… What if you could do something like

pi@raspberrypi~$ truncate-log dnsmasq

To truncate the docker logs of the dnsmasq docker container. How? Well, easy.
Just create a new file in /usr/local/bin:

pi@raspberrypi~$ nano /usr/local/bin/truncate-log

and copy / paste this content into the file

#!/bin/sh
# truncate-log: empty the log file of the given docker container
CONTAINER=$1
truncate -s 0 "$(docker inspect --format='{{.LogPath}}' "$CONTAINER")"

After saving the file (Ctrl-X), add the execution bit:

pi@raspberrypi~$ chmod +x /usr/local/bin/truncate-log

And voila, the next time you need to truncate a docker container log, you just type truncate-log <docker container name>

rPI adventures bits III

Some of the default installations have a graphical desktop. But when you use your rPIs headless, what’s the point of these interfaces?
Of course you can remove them, but you can also use remote connections.
By default you have VNC, for which you need to start the service first.
But an RDP service, wouldn’t that be useful, and is it possible?
Yes it is.

XRDP

It is called xrdp.
xrdp is an open-source remote desktop service for Linux. And yes, you can run it on a rPI. To find more info about xrdp, check out their site http://xrdp.org/

Install XRDP

To install xrdp on a rPI run the following steps (assuming you are running Raspberry Pi OS):

  • log in to the rPI, either with a terminal or via ssh
  • update software
pi@raspberrypi:~$ sudo apt update
pi@raspberrypi:~$ sudo apt full-upgrade
  • install xrdp
pi@raspberrypi:~$ sudo apt-get install xrdp
  • start a RDP client and connect to the rPI

And easy as pi 🙂
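If the connection does not work right away, a minimal check (assuming systemd and the ss tool, both present on Raspberry Pi OS) is whether the xrdp service is running and listening on the default RDP port 3389:

pi@raspberrypi:~$ sudo systemctl status xrdp
pi@raspberrypi:~$ sudo ss -tlnp | grep 3389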

Purpose

Well, for me it is a stepping stone into my homelab.
I have a rPI running dns, tftp, dhcp, samba, http, ntp and unifi controller services in docker containers.
With this setup I can control my homelab from this stepping stone, and also access the IPMI interface of my Supermicro server hosting my vSphere homelab.

rPI adventures bits II

ssh access on first boot

I’m using my rPI headless, meaning no monitor, mouse or keyboard. Just an ethernet connection.
And that is great, but then you need ssh access… and by default that is not running.
There are some small steps to have ssh running on boot; these are the steps
(the assumption is that your network has a DHCP service running):

  1. insert SD card into your Windows / MacOS / Linux system
  2. create a file named ‘ssh’ on the boot partition (see the sketch after this list)
  3. Insert the SD card into the rPI
  4. boot the rPI
  5. check your DHCP service log for a newly created IP lease
  6. SSH to the IP found in the previous step
    Default username: pi, password: raspberry
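Step 2 from a terminal, as a sketch; the mount point of the SD card’s boot partition is an assumption and differs per OS:

# macOS, assuming the boot partition mounts at /Volumes/boot
touch /Volumes/boot/ssh

# Linux, assuming it mounts under /media/$USER
touch /media/$USER/boot/ssh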

rPI adventures bits I

Yes, I’ve stepped into the realm of the raspberry pi.
Adding this to my homelab setup.

For now I’ll just scribble some stuff I need to remember. Later on I’ll write detailed blogs about the setup of my homelab.

Temperature check

rPIs are getting hot. Especially the rPI 4B+.
To check its core temp you can run the following command:

root@raspberrypi:~# /opt/vc/bin/vcgencmd measure_temp

To make life easier I created a small script called ‘temp’ and placed it in /usr/local/bin.

#!/bin/sh
# print the SoC core temperature via the VideoCore tool
/opt/vc/bin/vcgencmd measure_temp

Also make sure it is executable.

chmod +x /usr/local/bin/temp

Now you can check the temp by running the command temp.

root@raspberrypi:~# temp
temp=47.8'C

Ansible in a (docker) container


A year ago, after following a few sessions at the Dutch VMATechcon, I decided to dive into the world of Ansible. With a background in industrial automation (programming PLCs) I have a soft spot for writing code, and I saw the benefits of declarative languages.
But where to start? And what should I use it for?


I’m working on a small VM which provides DHCP, DNS, TFTP, FTP, HTTP and NTP services, so I can easily (PXE) boot nested ESXi hosts on VMware Workstation and in my homelab. All running in docker containers on Photon OS.
For maintenance (and learning) I decided to use Ansible for building and running these containers.

One of the issues I had was that I had to maintain the VM and the software installed on it… I’m quite proficient with Linux, but it can still be a challenge.
And keeping installation logs up to date about how to install software… yeah… that is an issue. I embraced the solutions docker, git and ansible are giving me (and vagrant, when using VMware Workstation).

Using docker to run the services, ansible to automate workflows and git to keep track of changes and store my code in my github repositories.
With this approach I can easily rebuild the VM when it breaks down. And I see the VM more as a processing entity than as a virtual computer that needs to be maintained.

The Ansible container

Browsing the internet I found this blog post about running ansible inside a docker container and using it interactively on the docker host. This gave me the possibility of a ‘portable’ ansible installation, without any ties to the VM OS.
In my opinion it is great. By adding some small bash scripts so I don’t need to type something like ‘docker run --rm -it …’ every time, I have an ideal solution.
The original code for the container is based on alpine:3.7 and python 2. Every time I built the docker image I received a warning that python 2 is outdated… so I (finally) updated the dockerfile, using alpine:3.11 as the base image and python 3.
You can find my code here on github, and the docker image here on dockerhub.

How to use

Simple. There are two ways to use it:

  1. clone git repository
  2. pull image from dockerhub

Option 1, clone git repository

I’ve created a small installation script to make things easier. It will run docker build to build the ansible image, and it will copy the scripts under ./scripts/ to /usr/local/bin/.
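In essence the script boils down to something like this sketch (the exact script is in the repository; the image tag matches what the wrapper scripts expect):

# build the image and install the wrapper scripts
docker build -t ansible:2.9.0 .
cp ./scripts/ah* /usr/local/bin/
chmod +x /usr/local/bin/ah*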
And you’re all set… well, almost.

Option 2, pull image from dockerhub

Instead of building the image, you can pull it from dockerhub into your local docker environment. The scripts assume that the docker image is tagged ansible:2.9.0, but when you pull it from dockerhub, the name is brtlvrs/ansible-helper:v.0.3. So you need to retag it (or edit the scripts):

docker image tag brtlvrs/ansible-helper:v.0.3 ansible:2.9.0
docker image tag ansible:2.9.0 ansible:latest

The wrapper scripts are stored in the image, in the folder /2InstallOnHost.
To copy them out of the image, do the following:

docker create --name test ansible:2.9.0
docker cp test:/2InstallOnHost/. /usr/local/bin/
chmod +x /usr/local/bin/ah*
docker rm test

This will create a docker container named test, using the pulled image ansible:2.9.0.
It then copies the files from inside the container to /usr/local/bin/ and sets the executable bit on these files.
Finally it removes the created docker container.
Now you should be able to run ah or ah-playbook.

Both scripts will mount the following volumes into the container:

  • ~/.ssh/id_rsa → /root/.ssh/id_rsa (needed to access localhost via ssh (ansible))
  • ~/.ssh/id_rsa.pub → /root/.ssh/id_rsa.pub (needed to access localhost via ssh (ansible))
  • $(pwd) → /ansible/playbooks (the mount point is the default location inside the container)
  • /var/log/ansible/ansible.log (making the ansible logs persistent)

Finishing touch

To make it work you need to do the following (a command sketch follows the list):

  • install docker-py on the docker host (if you want ansible to be able to control docker)
  • add the ~/.ssh/id_rsa.pub content to the ~/.ssh/authorized_keys file, so you don’t need to store the user password in the ansible inventory file.
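A sketch of those two steps on the docker host; note that on current python versions the PyPI package for controlling docker is published as ‘docker’ rather than docker-py:

# let ansible control docker on the host
pip3 install docker

# allow key based ssh to localhost for ansible
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys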

Final

And now it should work. Type ah and you’ll enter a temporary docker container running ansible. Type ah-playbook and you’ll execute the command ansible-playbook inside a temporary docker container.
Have fun.
