
Code repository maintenance.

I’ve changed my code repository on github.

And as someone already noticed, this breaks some links on my blog.
I’m in the process of fixing them, but if you find a link that no longer works, please leave a comment. I’ll try to fix it as soon as possible.

The new code repository can be found here

IFO: remote-ssh VSC on photonOS

I’m using Visual Studio Code a lot. It has a lot of extensions that will make your life a bit easier. And one of my top favorites has become remote-ssh.

With remote-ssh you can use VSC on a remote server/VM. It uses the ssh protocol to connect to the remote server and install the remote component, and then you can work on the remote system as if it were a local one.
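
Remote-ssh picks up hosts from your local ~/.ssh/config, so an entry like the sketch below is enough to make a VM show up as a connection target (the host alias, IP address and user are placeholders):

cat >> ~/.ssh/config <<'EOF'
Host photon-vm
    HostName 192.168.10.50
    User root
EOF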

What am I using it for?

Well, I have a homelab to explore and get accustomed to VMware software. I use it to fiddle around with software solutions and get familiar with things like Git, Docker, Ansible and Kubernetes. And all those solutions are text based… Yes, of course there are GUI shells for these solutions, but those won’t help you become proficient with them.

So you need an IDE, and my choice is VSC with remote-ssh.
As a VMware fanboy, I like to use photonOS for my Linux VMs.

How to configure photonOS for VSC & remote-ssh

I make the following assumptions:

  • it is a test/dev environment (we log in as root), not a production environment
  • tdnf updateinfo and tdnf -y update have been run
  • the photonOS VM has an internet connection

To make remote-ssh work with photonOS you need to do the following (a combined command sketch follows the list):

  1. install tar: tdnf -y install tar
    Remote-ssh uses tar to extract its remote server software.
  2. edit /etc/ssh/sshd_config and set the following settings
    1. PermitRootLogin yes
    2. AllowTcpForwarding yes
  3. (bonus / optional) Add your public SSH key to the <user>/.ssh/authorized_keys file.
    When using root to log in, the location is /root/.ssh/authorized_keys,
    else it is /home/<username>/.ssh/authorized_keys
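
Put together, the steps look roughly like this on the photonOS VM (a sketch; the sed patterns assume the default sshd_config layout, and the public key path is a placeholder):

tdnf -y install tar
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
sed -i 's/^#\?AllowTcpForwarding.*/AllowTcpForwarding yes/' /etc/ssh/sshd_config
systemctl restart sshd
cat /tmp/id_rsa.pub >> /root/.ssh/authorized_keys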


IFO: allow apps downloaded from anywhere on macOS


So, Apple is big on security. Which is a good thing.
But sometimes it is too strict.
I’m busy remodelling my homelab, and one of the actions is reinstalling a clean vCenter appliance.
And I thought: let’s do it from the CLI!!!


Yeah… so I ran vcsa-deploy and got the error that the app was downloaded from the internet and is not to be trusted.
You can allow it via System Preferences, but the macOS Gatekeeper keeps irritating you with warnings about all the libraries that are loaded.

After some googling around I found this site:
3 Ways to Allow Installation of Apps from Anywhere in macOS Catalina (techsviewer.com)
And the CLI option to allow apps downloaded from anywhere was winking at me. Yes, that was the option I wanted. So even though it was written for macOS Catalina, why not try it on macOS Big Sur.

And it worked… To allow vcsa-deploy to function properly, just do the following:

  1. open Terminal
  2. execute the command: sudo spctl --master-disable
  3. go to System Preferences -> Security & Privacy
  4. tick the ‘Anywhere’ option under ‘Allow apps downloaded from:’
  5. run vcsa-deploy

To stay security aware, the best practice is to remove the ‘Anywhere’ option afterwards; just follow these steps:

  1. open Terminal
  2. execute the command: sudo spctl --master-enable

and you’re done.
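
Condensed into one terminal session it looks like this (the vcsa-deploy arguments are omitted, they depend on your deployment template):

sudo spctl --master-disable     # allow apps downloaded from anywhere
# ... run vcsa-deploy here ...
sudo spctl --master-enable      # re-enable Gatekeeper when finished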

Making these changes is (of course) at your own risk.

IFO: max. lifetime SSL cert is 1 year

Yes, certificate misery.
In the wisdom of the great corporations, for our safety, it has been decided that the maximum SSL/TLS certificate validity is one year.
Yes, really… don’t believe me? Just search for it:
ssl lifetime 1 year at DuckDuckGo

From the security side of things this is a good thing, because it mitigates the risk of a compromised certificate.
But from an administration point of view… HEADACHE.
Especially for certificates that are used internally on your production sites.
Now you need to replace the certificates for your servers every year.
At least for those servers that run web services, because your browser is going to nag you that the site isn’t safe anymore. Yes, another warning.
And you know what happens with warnings: in the end the browser will think for you and won’t allow you to access the website anymore.

So SSL certificate monitoring becomes more important, and having a plan / replacement strategy for SSL certificates would be a good thing.
Do you have an up-to-date overview of all the SSL certificates in your network?
It might be a good idea to keep that overview up to date and monitor them.
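
A quick way to check the expiry date of a certificate from the command line, for example (hostname and port are placeholders):

echo | openssl s_client -connect vcenter.lab.local:443 -servername vcenter.lab.local 2>/dev/null \
  | openssl x509 -noout -enddate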

truncate docker container log

Sometimes you just want to clear the logs of a docker container.
For instance, I’m running dnsmasq in a docker container and I’m troubleshooting Raspberry Pi PXE boot (yes, to run ESXi-ARM stateless…)
And dnsmasq is also serving my homelab’s domain.
Then it can be an annoyance when you run docker logs <container name> and see all the log entries since the container started.
And you just want to start with a clean slate.

It is the way

Yes, there is a way. And when you google it you’ll find several blog posts giving you the solution, which is:

pi@raspberrypi~ $ sudo docker inspect --format='{{.LogPath}}' <container name>
pi@raspberrypi~ $ sudo truncate -s 0 <path presented by previous command>

And it works great… but… typing 2 lines of code… copy/pasting the log path into the second command… way too difficult 🙂
So what do you think about this?

pi@raspberrypi~$ sudo truncate -s 0 $(docker inspect --format='{{.LogPath}}' <container name>)

Yes, it will erase all logging, but that is the purpose of this whole exercise.

Addendum

If the one-liner isn’t helping, how about creating a small script for just this one purpose… What if you could do something like

pi@raspberrypi~$ truncate-log dnsmasq

To truncate the docker logs of the dnsmasq docker container. How? Well, easy.
Just create a new file in /usr/local/bin

pi@raspberrypi~$ nano /usr/local/bin/truncate-log

and copy / paste this content into the file

#!/bin/sh
# truncate-log: empty the log file of a docker container
# usage: truncate-log <container name>
CONTAINER="$1"
truncate -s 0 "$(docker inspect --format='{{.LogPath}}' "$CONTAINER")"

After saving the file (Ctrl-X), add the execution bit

pi@raspberrypi~$ chmod +x /usr/local/bin/truncate-log

And voila, the next time you need to truncate a docker container log, you just type truncate-log <docker container name>

rPI adventures bits III

Some of the default installations come with a graphical desktop. But when you use your rPIs headless, what’s the point of these interfaces?
Of course you can remove them, but you can also use remote connections.
By default you have VNC, for which you need to start the service first.
But an RDP service, wouldn’t that be useful, and is it possible?
Yes it is.

XRDP

It is called xrdp.
xrdp is an open-source remote desktop service for Linux. And yes, you can run it on an rPI. To find more info about xrdp, check out their site http://xrdp.org/

Install XRDP

To install xrdp on an rPI run the following steps (a quick service check is shown after the list), assuming you are running Raspberry Pi OS:

  • log in to the rPI, either with a terminal or via ssh
  • update the software
pi@raspberrypi:~$ sudo apt update
pi@raspberrypi:~$ sudo apt full-upgrade
  • install xrdp
pi@raspberrypi:~$ sudo apt-get install xrdp
  • start a RDP client and connect to the rPI
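
To check that the xrdp service came up before you connect, a quick look at its status helps (assuming it was registered with systemd during installation):

pi@raspberrypi:~$ systemctl status xrdp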

And easy as pi 🙂

Purpose

Well, for me it is a stepping stone into my homelab.
I have an rPI running DNS, TFTP, DHCP, Samba, HTTP, NTP and UniFi controller services in docker containers.
With this setup I can control my homelab from this stepping stone, and also access the IPMI interface of the Supermicro server hosting my vSphere homelab.

rPI adventures bits II

ssh access on first boot

I’m using my rPI headless, meaning no monitor, mouse or keyboard. Just an ethernet connection.
And that is great, but then you need ssh access… and by default that is not running.
There are a few small steps to get ssh running on boot:
(the assumption is that your network has a DHCP service running)

  1. insert the SD card into your Windows / macOS / Linux system
  2. create an empty file named ‘ssh’ on the boot partition (the partition that is visible on Windows / macOS; see the example after this list)
  3. Insert the SD card into the rPI
  4. boot the rPI
  5. check your DHCP service log for a newly created IP lease
  6. SSH to the IP found in the previous step
    Default username: pi, password: raspberry
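
On macOS, for example, step 2 comes down to this (the mount point /Volumes/boot is an assumption, it depends on how the card is mounted):

touch /Volumes/boot/ssh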

rPI adventures bits I

Yes, I’ve stepped into the realm of Raspberry Pi.
Adding this to my homelab setup.

For now I’ll just scribble some stuff I need to remember. Later on I’ll write detailed blogs about the setup of my homelab.

Temperature check

rPIs get hot. Especially the rPI 4B+.
To check its core temp you can run the following command

root@raspberrypi:~# /opt/vc/bin/vcgencmd measure_temp

To make life easier I created a small script called ‘temp’ and placed it in /usr/local/bin.

#!/bin/sh
# print the rPI core temperature
/opt/vc/bin/vcgencmd measure_temp

Also make sure it is executable.

chmod +x /usr/local/bin/temp

Now you can check the temp by running the command temp.

root@raspberrypi:~# temp
temp=47.8'C

Ansible in a (docker) container


A year ago, after following a few sessions at the Dutch VMATechcon, I decided to dive into the world of Ansible. With a background in industrial automation (programming PLCs) I have a soft spot for writing code, and I saw the benefits of declarative languages.
But where to start? And what should I use it for?


I’m working on a small VM which provides DHCP, DNS, TFTP, FTP, HTTP and NTP services so I can easily (PXE) boot nested ESXi hosts on VMware Workstation and in my homelab. All running in docker containers on Photon OS.
For maintenance (and learning) I decided to use Ansible for building and running these containers.

One of the issues I had was that I had to maintain the VM and the software installed on it… I’m quite proficient with Linux, but it can still be a challenge.
And keeping installation logs up to date about how to install software… yeah… that is an issue. So I embraced the solutions docker, git and ansible are giving me. (And Vagrant, when using VMware Workstation.)

Using docker to run the services, ansible to automate workflows, and git to keep track of changes and store my code in my github repositories.
With this approach I can easily rebuild the VM when it breaks down. And I see the VM more as a processing entity than as a virtual computer that needs to be maintained.

The Ansible container

Browsing the internet I found this blog post about running ansible inside a docker container and using it interactively on the docker host. This gave me the possibility to have a ‘portable’ ansible installation, without any ties to the VM OS.
In my opinion it is great. By adding some small bash scripts so I don’t need to type something like ‘docker run --rm -it …’ every time, I have an ideal solution.
The original code for the container is based on alpine:3.7 and python 2. Every time I built the docker image I received a warning that python 2 is outdated… so I (finally) updated the Dockerfile to use alpine:3.11 as the base image and python 3.
You can find my code here on github, and the docker image here on dockerhub.

How to use

Simple. There are two ways to use it.

  1. clone git repository
  2. pull image from dockerhub

Option 1, clone git repository

I’ve created a small installation script to make things easier. It will run docker build to build the ansible image and copy the scripts under ./scripts/ to /usr/local/bin/
And you’re all set… well, almost.

Option 2, pull image from dockerhub

Instead of building the image, you can pull it from dockerhub into your local docker environment. The scripts assume that the docker image is tagged ansible:2.9.0, but when you pull it from dockerhub the name is brtlvrs/ansible-helper:v.0.3. So you need to retag it (or edit the scripts).

docker image tag brtlvrs/ansible-helper:v.0.3 ansible:2.9.0
docker image tag ansible:2.9.0 ansible:latest

The wrapper scripts are stored in the image, in the folder /2InstallOnHost.
To copy them out of the image, do the following:

docker create --name test ansible:2.9.0
docker cp test:/2InstallOnHost/. /usr/local/bin/
chmod +x /usr/local/bin/ah*
docker rm test

This will create a docker container named test, using the imported image ansible:2.9.0.
It then copies the files from inside the container to /usr/local/bin/ and sets the executable bit on those files.
Finally it removes the created docker container.
Now you should be able to run ah or ah-playbook.

Both scripts will mount the following volumes into the container (a sketch of such a wrapper follows the table):

local              mountpoint                    remark
~/.ssh/id_rsa      /root/.ssh/id_rsa             needed to access localhost via ssh (ansible)
~/.ssh/id_rsa.pub  /root/.ssh/id_rsa.pub         needed to access localhost via ssh (ansible)
$(pwd)             /ansible/playbooks            Mountpoint is the default location inside the container.
                   /var/log/ansible/ansible.log  Making the ansible logs persistent.
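
As an illustration, a minimal sketch of what such a wrapper could look like, based on the mounts listed above (the actual scripts in the repository may differ; the log mount is left out because its host path depends on your setup):

#!/bin/sh
# hypothetical 'ah'-style wrapper: run ansible in a throw-away container
docker run --rm -it \
  -v ~/.ssh/id_rsa:/root/.ssh/id_rsa \
  -v ~/.ssh/id_rsa.pub:/root/.ssh/id_rsa.pub \
  -v "$(pwd)":/ansible/playbooks \
  ansible:latest "$@"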

Finishing touch

To make it work you need to do the following:

  • install docker-py on the docker host (if you want ansible to be able to control docker)
  • add the content of ~/.ssh/id_rsa.pub to the ~/.ssh/authorized_keys file, so you don’t need to store the user password in the ansible inventory file (see the one-liner below)
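
For the second point, a one-liner like this does the trick (assuming the key pair already exists on the docker host):

cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys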

Final

And now it should work. Type ‘ah’ and you’ll enter a temporary docker container running ansible. Type ah-playbook and you’ll execute the command ansible-playbook inside a temporary docker container.
Have fun.

Custom Welcome DCUI screen

Recently I discovered you can change the default ESXi DCUI screen.
This feature has already been around for a long time… just check one of my favorite blog sites (virtuallyGhetto) for this item here. Yes, he blogged about it in 2010… I know.

Why would you want to change it

  1. your company wants a warning shown on the DCUI
  2. security: don’t show hostname, IP, etc.
  3. showing off ASCII art, just for fun
  4. …

For me, it was point 3…
I run VMware Workstation on my laptop and sometimes need to spin up nested ESXi hosts of different versions. For this reason I built a provisioning VM, so I can network boot empty VMs and choose the ESXi version I want to work with from an iPXE menu (maybe interesting stuff for a blog series). And this setup wouldn’t be complete without a custom DCUI screen.

How does it work

Up to ESXi 7.0 there is a text file /etc/vmware/welcome, which you can edit. When it is successfully edited, the DCUI will show the result.
My screen shows: VMware product and version, license, memory, IP, SSL thumbprint and ASCII art 🙂

You can also edit the logon banner, shown when you access the host via SSH, and the support info.

Until ESXi 7?

Yes, until… well, almost. Up to version 7.x you could simply edit the file /etc/vmware/welcome. But it isn’t there anymore.
But there is a different way, called the CLI. You can edit the welcome message with esxcli and PowerCLI. Information is found in the VMware knowledge base article kb2046347. And this works for a lot of ESXi versions; I’ve tested it successfully from 5.5 to 7.0.

ASCII-ART

Well, you can show your creative side. Some guidelines I found are:

  1. screen width is 125 characters
  2. max. rows is about 25
  3. watch out for the TAB character (ASCII nr 9)
  4. use a site for creating art, like this one

Especially number 3 is a killer. A lot of text editors will replace a certain number of spaces with a TAB character… and this will mess up your screen.
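
A quick way to check a draft for stray TAB characters before uploading it (assuming a bash shell on your workstation; the file path is a placeholder):

grep -n "$(printf '\t')" /tmp/welcome.txt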

ESXCLI method

The syntax is easy, just

esxcli system welcomemsg set -m 'your welcome'

And that is what is shown in the kb article. But how do you create a complete screen? Well, there are some undocumented parameters you can use. The article by William Lam (see above) shows some of them, but I haven’t found a site that lists all of them.
With those parameters you create your screen in a text editor… like this one

Every line that uses a background color other than black is 125 characters long, not counting the length of any fields that are used, like {color:white}.

After you have created your file, you can place it in /etc/vmware or use esxcli to set the welcome message.
If the file is placed in /tmp/welcome.txt, the instruction looks like this:

esxcli system welcomemsg set -m "$(cat /tmp/welcome.txt)"

And voila, you have a custom welcome screen.

DCUI with SSH

The DCUI is not only shown on the console screen of ESXi; it can also be accessed via SSH by running the dcui command.
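
For example (the hostname is a placeholder):

ssh root@esxi01.lab.local
dcui
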
The result is shown below.

DCUI screen when accessed in SSH

Enjoy.