Repeatable Deployments 5 – NVMe Base Duo

October 21st, 2024 • Electronics

NVMe Base Duo Banner

A few weeks ago we looked at using the Pimoroni NVMe Base to add 500 GBytes of storage to a Raspberry Pi, first manually and then using Ansible.

A few days ago I started to look at doing the same but this time with the NVMe Base Duo. This would allow the addition of two drives to the Raspberry Pi. Should be simple, right?

TL;DR The scripts and instructions for running them can be found in the AnsibleNVMe GitHub repository.

Setting up the Hardware

As with the NVMe Base, setting up the NVMe Base Duo was simple: just follow the installation instructions on the product page. Again, the most difficult part was connecting the NVMe Base Duo board to the Raspberry Pi using the flat flex connector.

A quick check of the /dev directory shows the two drives as devices:

clusteruser@TestServer:~ $ ls /dev/nv*
/dev/nvme0  /dev/nvme0n1  /dev/nvme1  /dev/nvme1n1

Setting up the two drives should be a case of running the configuration script from the previous post with two different device / mount point names, namely:

  • nvme0 / nvme0n1
  • nvme1 / nvme1n1

Ansible task files should allow us to reuse the commands used to mount a single drive on the NVMe Base without resorting to copy and paste.

Ansible Tasks

The first thing to do is to identify the tasks that should be executed for both drives. These are found in the ConfigureNVMeBase.yml file, where they make up the bulk of the file. These tasks are:

- name: Format the NVMe drive {{ nvmebase_device_name }}
  command: mkfs.ext4 /dev/{{ nvmebase_device_name }} -L Data
  when: format_nvmebase | bool

- name: Make the mount point for {{ nvmebase_device_name }}
  command: mkdir /mnt/{{ nvmebase_device_name }}

- name: Mount the newly formatted drive ({{ nvmebase_device_name }})
  command: mount /dev/{{ nvmebase_device_name }} /mnt/{{ nvmebase_device_name }}

- name: Make sure that {{ ansible_user }} can read and write to the mount point
  command: chown -R {{ ansible_user }}:{{ ansible_user }} /mnt/{{ nvmebase_device_name }}

- name: Get the UUID of {{ nvmebase_device_name }}
  command: blkid /dev/{{ nvmebase_device_name }}
  register: blkid_output

- name: Extract UUID from blkid output
  set_fact:
    device_uuid: "{{ blkid_output.stdout | regex_search('UUID=\"([^\"]+)\"', '\\1') }}"

- name: Clean the extracted UUID
  set_fact:
    clean_uuid: "{{ device_uuid | regex_replace('\\[', '') | regex_replace(']', '') | regex_replace(\"'\", '') }}"

- name: Add UUID entry for {{ nvmebase_device_name }} to /etc/fstab
  lineinfile:
    path: /etc/fstab
    line: "UUID={{ clean_uuid }} /mnt/{{ nvmebase_device_name }} ext4 defaults,auto,users,rw,nofail,noatime 0 0"
    state: present
    create: yes

Breaking these instructions out into a new file, ConfigureNVMeDriveTasks.yml, gives us the consolidated task list. The device and drive mount point are derived from the Ansible variable nvmebase_device_name, which is set in the calling script as follows:

- include_tasks: ConfigureNVMeDriveTasks.yml
  vars:
    nvmebase_device_name: nvme0n1

The ansible_user variable (used in the chown command in the tasks file) is taken from the group_vars/all.yml file.

And for the second drive we would use:

- include_tasks: ConfigureNVMeDriveTasks.yml
  vars:
    nvmebase_device_name: nvme1n1
  when: nvme_duo == true

Note the addition of the when clause so that the tasks in the ConfigureNVMeDriveTasks.yml file are only executed if the nvme_duo variable is true. This clause will also be used when Samba is configured.

Install Samba (Optional)

The installation of Samba follows similar steps to those detailed in the Adding NVMe Base using Ansible post. The only addition is to add the second drive to the configuration section of the script.
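
The full script is in the repository; a sketch of the extra Samba task, assuming a share name of share1 for the second drive and a distinct blockinfile marker so the two configuration blocks do not overwrite each other, might look like this:

- name: Set up Samba configuration for the second drive
  become: yes
  blockinfile:
    path: /etc/samba/smb.conf
    marker: "# {mark} ANSIBLE MANAGED BLOCK (second drive)"
    block: |
      [share1]
          path = /mnt/nvme1n1
          read only = no
          public = yes
          writable = yes
          force user = {{ ansible_user }}
  when: nvme_duo | bool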

Change Hostname (Optional)

One final, optional step is to change the name of the Raspberry Pi. This is useful when a number of devices are being configured. This step requires the hostname variable to be set in the group_vars/all.yml file. Execute the ChangeHostname.yml script once the variable has been changed. Note that the script may fail following the reboot step as Ansible tries to reconnect to the Raspberry Pi using the old host name.
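
The ChangeHostname.yml script itself is in the repository; a minimal sketch of the approach, assuming the hostname variable described above, might look like this:

---
- name: Change the hostname of the Raspberry Pi
  hosts: raspberrypi
  become: true
  tasks:
    - name: Set the new hostname
      hostname:
        name: "{{ hostname }}"

    - name: Update /etc/hosts with the new name
      lineinfile:
        path: /etc/hosts
        regexp: '^127\.0\.1\.1'
        line: "127.0.1.1 {{ hostname }}"

    - name: Reboot so the new name takes effect
      reboot:
        reboot_timeout: 600

The final reboot task is where the reconnection failure mentioned above can occur.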

Lessons Learned

For some reason that was never tracked down, the installation sometimes failed on the NVMe Base Duo with access permission issues. The access issue presents itself on the Samba shares and also when attempting to use the drive while logged into the Raspberry Pi. This was resolved by setting the ansible_user variable in the group_vars/all.yml file.
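
For reference, the fix amounts to a single entry in group_vars/all.yml, using the user name from this series (substitute your own):

---
ansible_user: clusteruser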

The first installation of the NVMe Base Duo added PCIe Generation 3 support. This caused some issues accessing the devices in the /dev directory; removing this support allowed both of the drives to be accessed.

Conclusion

Breaking the drive installation and configuration into a new tasks file allows Ansible to reuse the same steps for two different devices. Coupling this with when clauses allows the control flow to change when deploying to two different types of device.

Repeatable Deployments 4 – Variables and Samba

October 14th, 2024 • Ansible, Raspberry Pi

Repeatable Deployments 4 Banner

The previous post, Adding NVMe Drive Automatically, used Ansible to automatically add and format an SSD mounted on the Pimoroni NVMe Base attached to a Raspberry Pi. This post extends that work by using Ansible variables and configuration files to allow the following:

  • Control the name of the mount point for the SSD
  • Determine if the SSD should be formatted before mounting
  • Add configuration to simplify the command line

Additionally, this post covers the installation of Samba to make the contents of the mounted drive available to an external computer.

The code for this post is available on GitHub, with the code for this article tagged as Repeatable Deployments 4.

Configuration

Ansible has a number of options for configuration files and the one being used here is the ansible.cfg file in the base directory for the ansible scripts. In this simple case, the ansible.cfg file contains the following:

[defaults]
inventory = ./hosts.ini

These two lines allow the simplification of the command line and also the Ansible scripts.

inventory = ./hosts.ini

Name of the file containing the list of host systems being targeted. In the previous post this was inventory.yml; it has been changed here to hosts.ini to fit in with more conventional Ansible usage. Adding this line to the file allows us to remove -i inventory.yml from the command line.

The hosts.ini contents remain the same as the inventory.yml file in the previous post.
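
For example, the playbook invocation from the previous post:

ansible-playbook -i inventory.yml ConfigureNVMeBase.yml

now becomes:

ansible-playbook ConfigureNVMeBase.yml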

Variables

The group_vars/all.yml file contains the variable definitions that will be applied to all of the scripts. The variables are as follows:

---
default_user: clusteruser
nvmebase_mount_point: nvme0
format_nvmebase: false

default_user

Name of the default user for the device. This is used as part of the Samba installation. This should match the user name used to install and update the operating system on the Raspberry Pi.

Default: clusteruser

nvmebase_mount_point

Name of the mount point for the NVMe drive.

Default: nvme0

format_nvmebase

This determines if the NVMe drive should be formatted (have a new file system created on the drive) as part of the installation process.

Set this to true for new drives (or drives where the data is to be discarded). Set this to false to preserve the contents of the drive. Note that setting format_nvmebase to false for an unformatted (new) drive will cause the ConfigureNVMeBase.yml script to fail.

Default: false
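
Inside the playbook this variable gates the format task with a when clause; a sketch based on the task shown in the NVMe Base Duo post above:

- name: Format the NVMe drive
  command: mkfs.ext4 /dev/nvme0n1 -L Data
  when: format_nvmebase | bool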

Install Samba

The next addition is the installation of Samba allowing sharing of files on the Pi to the local network.

- name: Samba
  hosts: raspberrypi

  tasks:
  - name: Install Samba
    become: yes
    apt:
      pkg:
        - samba
        - samba-common-bin
        - cifs-utils 
      state: present
      update_cache: true
  
  - name: Set up Samba configuration
    become: yes
    blockinfile:
      path: /etc/samba/smb.conf
      block: |
        [share]
            path = /mnt/{{ nvmebase_mount_point }}
            read only = no
            public = yes
            writable = yes
            force user={{ default_user }}

This breaks down into two steps:

  • Install Samba
  • Configure Samba

The configuration puts the Samba share on the drive connected to the NVMe Base board. This allows for the USB system drive to be reflashed whilst preserving any data that has been created on the NVMe Base drive.

The share will be writable and accessible to any user on the network. Normally this is not desirable, but it is acceptable for a device on a small restricted network.
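
Should a locked-down share ever be required, Samba can restrict access to named users; a hypothetical variation (not used in the scripts for this series) might look like this:

[share]
    path = /mnt/{{ nvmebase_mount_point }}
    read only = no
    public = no
    valid users = {{ default_user }}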

Lessons Learned

One issue initially encountered with the Samba share was accessibility. The share was visible over the local network but only in read-only mode. Adding the following to the configuration corrected this issue:

force user={{ default_user }}

Where default_user is defined in the group_vars/all.yml file.

This script is run in the same manner as the other Ansible scripts: ansible-playbook InstallSamba.yml.

Conclusion

Using variables allows flexibility in the installation of the NVMe drive, namely data preservation across installations. This makes the system ideal for use as a monitoring system gathering data from devices on the network, allowing the Raspberry Pi to be updated with new software configurations while still preserving the ability to set up a new system.

Repeatable Deployments 3 – Adding NVMe Drive Automatically

July 1st, 2024 • Ansible, Raspberry Pi

Repeatable Deployments Part 3 - Adding NVMe Drive Automatically Banner

This latest chapter in the Repeatable Deployments series looks at automating the steps discussed in the Raspberry Pi and NVMe Base post. The previous post investigated the manual steps necessary to add a PCIe drive, here we will look at automating the setup process.

TL;DR The scripts and instructions for running them can be found in the AnsibleNVMe GitHub repository with the code for this article available tagged as Repeatable Deployments 3.

Automation Options

The two main (obvious) options for automation in this case would be:

  • Shell scripts
  • Ansible

In this post we will look at using Ansible as it allows remote installation and configuration without having to install scripts on the target machine.

The Hardware

The installation hardware will be based around the Raspberry Pi 5 as the PCIe bus is required for the NVMe Base and associated SSD:

  • Raspberry Pi 5
  • 256 GByte SATA SSD
  • SATA to USB adapter
  • Cooling fan (for the Raspberry Pi 5)
  • NVMe Base and 500GByte M2 drive
  • Power Supply
  • Ethernet cable
  • 3D printed mounts to bring everything together

For the purpose of this post we will be configuring the system with the following credentials:

  • Hostname: TestServer500
  • User: clusteruser

The password will be stored in an environment variable; on a Mac this is set up by executing the following command:

export CLUSTER_PASSWORD=your-password

The remainder of this post will assume the above names and credentials but feel free to change these as desired.

Step 1 – Install the Base Operating System

The operating system is installed following the same method as described in part 1 of this series. Simply ensure that the hostname, user name and password parameters are set to those noted above (or with your substitutions).

Step 2 – Ensure SSH Works

The next thing we need to do is to check that the Raspberry Pi boots and that we can log into the system. So apply power to the board and wait for the board to boot. This normally takes a minute or two as the system will boot initially and then expand the file system before booting two more times.

Time for the first log on to the Raspberry Pi with the command:

ssh testserver500.local

If this is the first time this device has been set up with the server name then you will be asked to accept the host key for the machine and then prompted for the password for the Raspberry Pi.

The authenticity of host 'testserver500.local (fe80::67b6:b7f4:b285:2599%en17)' can't be established.
ED25519 key fingerprint is SHA256:gfttQ9vr7CeWfjyPLUdf5h2Satxr/pRrP2EjbmW2BKA.
This key is not known by any other names.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'testserver500.local' (ED25519) to the list of known hosts.
clusteruser@testserver500.local's password:
Linux TestServer500 6.6.20+rpt-rpi-2712 #1 SMP PREEMPT Debian 1:6.6.20-1+rpt1 (2024-03-07) aarch64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
clusteruser@TestServer500:~ $

Answer yes to accept the host key and then enter the password at the prompt.

If the machine name has been used before, or if you are attempting repeat deployments, then you will receive a message about the host key having changed:

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ED25519 key sent by the remote host is
SHA256:<sha256-key>
Please contact your system administrator.
Add correct host key in /Users/username/.ssh/known_hosts to get rid of this message.
Offending ECDSA key in /Users/username/.ssh/known_hosts:35
Host key for testserver500.local has changed and you have requested strict checking.
Host key verification failed.

This can be resolved by editing the ~/.ssh/known_hosts file, removing the entries for testserver500.local, saving the file and retrying.
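
Alternatively, OpenSSH can remove the stale entry directly:

ssh-keygen -R testserver500.local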

A final step is to copy the local machine ssh keys to the Raspberry Pi. This can be done with the command:

ssh-copy-id testserver500.local

This command provides access to the Raspberry Pi from the current machine without needing to enter the password, so keep the security implications in mind. Executing the above command will result in output something like:

/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
clusteruser@testserver500.local's password: <enter your password here>

Number of key(s) added:        1

Now try logging into the machine, with:   "ssh 'clusteruser@testserver500.local'"

Enter the password for the Raspberry Pi user when prompted. These two steps are required for Ansible to work correctly (at least from a Mac).

Step 3 – Update the System

At this point the Raspberry Pi has the base operating system installed and we have confirmed that the system can be accessed from the local host computer. The next step is to ensure that the operating system is updated with the current patches and the EEPROM is updated to the latest to allow access to the NVMe Base.

From the previous post, Raspberry Pi and NVMe Base, we know that we can do this with the commands:

sudo apt-get update -y
sudo apt-get dist-upgrade -y
sudo raspi-config nonint do_boot_rom E1 1
sudo reboot now

The first thing to note is that the Ansible scripts will use privilege elevation to execute commands with root privileges. This means that we do not need to use sudo in the Ansible scripts. So we start by creating a YAML file with the following contents:

---
- name: Update the Raspberry Pi 5 OS and reboot
  hosts: raspberrypi
  become: true
  tasks:
    - name: Update apt caches and the distribution
      apt:
        update_cache: yes
        upgrade: dist
        cache_valid_time: 3600
        autoclean: yes
        autoremove: yes
        
    - name: Update the EEPROM
      command: raspi-config nonint do_boot_rom E1 1

    - name: Reboot the Raspberry Pi
      reboot:
        msg: "Immediate reboot initiated by Ansible"
        reboot_timeout: 600
        pre_reboot_delay: 0
        post_reboot_delay: 0

There are a number of websites discussing Ansible scripts including the Ansible Documentation site so we will just look at the pertinent elements of the script.

hosts: raspberrypi defines the host names / entries that this section of the script applies to. We will define this later in the inventory.yml file.

become: true is the entry that tells Ansible that we want to execute the tasks with elevated privileges.

tasks defines a group of tasks to be executed on the Raspberry Pi. These tasks will be a combination of actions that Ansible is aware of as well as commands to be executed on the Raspberry Pi. In this script the tasks are:

  • Use apt to update the distribution
  • Execute the command to update the Raspberry Pi EEPROM
  • Reboot the Raspberry Pi to ensure the updates are applied

Now that we have the definition of the tasks we want to execute, we need to define the systems to run the script against. This is done using an inventory file. For a single Raspberry Pi this is a simple file and looks like this:

[raspberrypi]
testserver500.local ansible_user=clusteruser ansible_ssh_pass=$CLUSTER_PASSWORD ansible_python_interpreter=/usr/bin/python3

The above contains a number of familiar entries: the server and user names. ansible_ssh_pass references the environment variable set earlier. The final entry, ansible_python_interpreter=/usr/bin/python3, prevents a warning from Ansible about the Python version deployed on the Raspberry Pi. This warning looks something like:

[WARNING]: Platform linux on host testserver500.local is using the discovered Python interpreter at /usr/bin/python3.11, but future installation of another Python interpreter could change the meaning of that path. See https://docs.ansible.com/ansible-core/2.17/reference_appendices/interpreter_discovery.html for more
information.

We are now ready to use Ansible to update the Raspberry Pi using the following command:

ansible-playbook -i inventory.yml UpdateAndRebootRaspberryPi.yml

If all goes well, then you should see something like the following:

PLAY [Update the Raspberry Pi 5 OS and reboot] ******************************************************************************************************************************

TASK [Gathering Facts] ******************************************************************************************************************************
ok: [testserver500.local]

TASK [Update apt caches and the distribution] ******************************************************************************************************************************
changed: [testserver500.local]

TASK [Update the EEPROM] ******************************************************************************************************************************
changed: [testserver500.local]

TASK [Reboot the Raspberry Pi] ******************************************************************************************************************************
changed: [testserver500.local]

PLAY RECAP *******************************************************************************************************************
testserver500.local        : ok=4    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Step 4 – Format and Configure the Drives

We can now move on to configuring the system to access the NVMe drives.

Configure PCIe Gen 3 support

The NVMe Base can be run using PCIe Gen 3 support. This is experimental and not guaranteed to work, although in my experience there have been no issues with the drive supplied with the NVMe Base. The following task adds the appropriate entries to the /boot/firmware/config.txt file:

- name: Ensure pciex1_gen3 is enabled in /boot/firmware/config.txt
  blockinfile:
    path: /boot/firmware/config.txt
    marker: "# {mark} ANSIBLE MANAGED BLOCK"
    block: |
      dtparam=pciex1_gen=3
    insertafter: '^\[all\]$'
    create: yes

Format the Drive and Mount the File System

The next few tasks execute the commands necessary to format the drive and mount it, ensuring that the clusteruser can access the drive:

- name: Format the NVMe drive nvme0n1
  command: mkfs.ext4 /dev/nvme0n1 -L Data

- name: Make the mount point for the NVMe drive
  command: mkdir /mnt/nvme0

- name: Mount the newly formatted drive
  command: mount /dev/nvme0n1 /mnt/nvme0

- name: Make sure that the user can read and write to the mount point
  command: chown -R {{ ansible_user }}:{{ ansible_user }} /mnt/nvme0

Make the Drive Accessible Through Reboots

At this point the drive will be available in the /mnt directory and the clusteruser is able to access it. If we were to reboot now then the drive would still be available and formatted, but it would not be mounted following the reboot. The final step is to update the /etc/fstab file so the drive is mounted automatically at startup.

- name: Get the UUID of the device
  command: blkid /dev/nvme0n1
  register: blkid_output

- name: Extract UUID from blkid output
  set_fact:
    device_uuid: "{{ blkid_output.stdout | regex_search('UUID=\"([^\"]+)\"', '\\1') }}"

- name: Clean the extracted UUID
  set_fact:
    clean_uuid: "{{ device_uuid | regex_replace('\\[', '') | regex_replace(']', '') | regex_replace(\"'\", '') }}"

- name: Add UUID entry to /etc/fstab
  lineinfile:
    path: /etc/fstab
    line: "UUID={{ clean_uuid }} /mnt/nvme0 ext4 defaults,auto,users,rw,nofail,noatime 0 0"
    state: present
    create: yes

There is a small complication as the UUID in device_uuid is surrounded by [' and '] characters. These delimiters need to be removed, and the clean-up steps do this before the entry is added to the /etc/fstab file.

The only thing left to do is to run the playbook with the command:

ansible-playbook -i inventory.yml ConfigureNVMeBase.yml

If all goes well then we should see something similar to:

PLAY [Configure Raspberry Pi 5 to use the drive attached to the NVMe Base.] ******************************************************************************************************************************

TASK [Gathering Facts] ******************************************************************************************************************************
ok: [testserver500.local]

TASK [Ensure pciex1_gen3 is enabled in /boot/firmware/config.txt] ******************************************************************************************************************************
changed: [testserver500.local]

TASK [Format the NVMe drive nvme0n1] ******************************************************************************************************************************
changed: [testserver500.local]

TASK [Make the mount point for the NVMe drive] ******************************************************************************************************************************
changed: [testserver500.local]

TASK [Make sure that the user can read and write to the mount point] ******************************************************************************************************************************
changed: [testserver500.local]

TASK [Mount the newly formatted drive] ******************************************************************************************************************************
changed: [testserver500.local]

TASK [Get the UUID of the device] ******************************************************************************************************************************
changed: [testserver500.local]

TASK [Extract UUID from blkid output] ******************************************************************************************************************************
ok: [testserver500.local]

TASK [Clean the extracted UUID] ******************************************************************************************************************************
ok: [testserver500.local]

TASK [Add UUID entry to /etc/fstab] ******************************************************************************************************************************
changed: [testserver500.local]

TASK [Reboot the Raspberry Pi] ******************************************************************************************************************************
changed: [testserver500.local]

PLAY RECAP ******************************************************************************************************************************
testserver500.local        : ok=11   changed=8    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Now to see if it has worked.

Step 5 – Test the Deployment

There are a few things we can check to verify that the system is configured correctly:

  • Check the drive appears in /dev
  • Ensure the drive has been mounted correctly in /mnt
  • Check that the clusteruser can create files and directories

Starting an SSH session on the Raspberry Pi, we can manually check the system:

Linux TestServer500 6.6.31+rpt-rpi-2712 #1 SMP PREEMPT Debian 1:6.6.31-1+rpt1 (2024-05-29) aarch64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Sun Jun 30 10:15:00 2024 from fe80::1013:a383:fe75:54e6%eth0
clusteruser@TestServer500:~ $ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            3.8G     0  3.8G   0% /dev
tmpfs           806M  5.3M  800M   1% /run
/dev/sda2       229G  2.3G  215G   2% /
tmpfs           4.0G     0  4.0G   0% /dev/shm
tmpfs           5.0M   48K  5.0M   1% /run/lock
/dev/nvme0n1    469G   28K  445G   1% /mnt/nvme0
/dev/sda1       510M   64M  447M  13% /boot/firmware
tmpfs           806M     0  806M   0% /run/user/1000
clusteruser@TestServer500:~ $ cd /mnt
clusteruser@TestServer500:/mnt $ ls -l
total 4
drwxr-xr-x 3 clusteruser clusteruser 4096 Jun 30 10:14 nvme0
clusteruser@TestServer500:/mnt $ cd nvme0
clusteruser@TestServer500:/mnt/nvme0 $ mkdir Test
clusteruser@TestServer500:/mnt/nvme0 $ echo "Hello, world" > hello.txt
clusteruser@TestServer500:/mnt/nvme0 $ cat < hello.txt
Hello, world
clusteruser@TestServer500:/mnt/nvme0 $ ls -l
total 24
-rw-r--r-- 1 clusteruser clusteruser    13 Jun 30 10:16 hello.txt
drwx------ 2 clusteruser clusteruser 16384 Jun 30 10:14 lost+found
drwxr-xr-x 2 clusteruser clusteruser  4096 Jun 30 10:15 Test

Looking good.

Lessons Learned

There were a few things that caused issues along the way.

Raspberry Pi – Access Denied

As mentioned in Step 2 – Ensure SSH Works, we need to log in to the Raspberry Pi before Ansible can connect and run the playbook. Missing the first step, logging on to the Raspberry Pi, will result in the following error:

TASK [Gathering Facts] ******************************************************************************************************************************
fatal: [testserver500.local]: FAILED! => {"msg": "Using a SSH password instead of a key is not possible because Host Key checking is enabled and sshpass does not support this.  Please add this host's fingerprint to your known_hosts file to manage this host."}

Missing the second step, copying the SSH ID, will result in the following error:

TASK [Gathering Facts] ******************************************************************************************************************************
fatal: [testserver500.local]: UNREACHABLE! => {"changed": false, "msg": "Invalid/incorrect password: Permission denied, please try again.", "unreachable": true}

Permission Denied

In the Step 4 – Format and Configure the Drives script we have the following:

- name: Mount the newly formatted drive
  command: mount /dev/nvme0n1 /mnt/nvme0

- name: Make sure that the user can read and write to the mount point
  command: chown -R {{ ansible_user }}:{{ ansible_user }} /mnt/nvme0

Switching these two tasks results in the user not being able to write to the file system on the /mnt/nvme0 drive. Read and execute access are allowed but write access is denied.

Conclusion

The scripts presented here allow a new Raspberry Pi to be configured with a newly formatted NVMe SSD in only a few minutes. The method does present one small issue: the NVMe drive is formatted as part of the setup process, so any data on the drive will be lost. This is, however, easy to resolve.

Repeatable Deployments Part 2 – NVMe Base

June 24th, 2024 • Electronics, Raspberry Pi

NVMe Base on Raspberry Pi

A while ago, I started a series about creating Repeatable Deployments documenting the first step of the process, namely getting an OS onto a Raspberry Pi equipped with a USB SSD drive.

In this post we will look at the steps required to install a NVMe SSD into the environment and automate configuring the system. At the end of the post we should have two drives installed:

  • USB SSD drive holding the operating system and any applications used
  • NVMe drive for holding persistent data

The first of these drives will be considered transient with the operating system and applications changing but always installed in a repeatable manner. The second drive will be used to hold data which should persist even if the operating system changes.

The Hardware

The release of the Raspberry Pi 5 gave us supported access to the high-speed PCIe bus, which means we have the option of installing SSD drives using PCIe to give high-speed disc access to the Raspberry Pi. A number of manufacturers have developed boards to support the addition of PCIe devices, from SSDs (which we will look at) to AI coprocessor boards.

There have been a number of reports concerning the compatibility of various drives with the boards offering access to the PCIe bus on the Raspberry Pi. For this reason I decided to use a board that is supplied with a known working SSD. Luckily, Pimoroni has two boards on offer, the NVMe Base and the NVMe Base Duo, which can be purchased standalone or with a known working SSD.

For this post we will look at the single drive option with a 500GB SSD.

As usual with Pimoroni, ordering was easy and delivery was quick with the unit arriving the next day.

Pimoroni also provide a list of alternative known working drives along with some that may work. This list can be found on the corresponding product page (links above). There is also a link to the Pi Benchmarks site that provides speed information for the various drives.

Assembly

Following the assembly guide was fairly painless. The most difficult step was to install the flat flex connector between the NVMe Base and the Raspberry Pi 5.

Configuration

The product page for the NVMe Base contains a good installation guide. This guide assumes that the SSD installed on the NVMe base is going to be used as the main boot drive for the system and that the OS etc. will be copied from existing bootable media. This is not the case for this installation.

System Update

As with all Pi setups, the first thing we should do is make sure that we have the most recent OS and software.

sudo apt-get update -y
sudo apt-get dist-upgrade -y
sudo reboot now

This should ensure that we have the latest and greatest deployed to the board.

Firmware Update and Setting the PCIe Mode

The first two steps are common to both the bootable scenario and this scenario. The firmware will need to be checked and, if necessary, updated, and then the experimental PCIe mode enabled.

Firstly, choose the boot ROM version, set this to the latest and reboot the system. The following command will configure the system to use the latest boot ROM:

sudo raspi-config nonint do_boot_rom E1 1
sudo reboot now

Further information on using the raspi-config tool in command mode can be found in the Raspi-Config documentation. Note, however, that at the time of writing the documentation was slightly out of date as there was no mention of the need for the number 1 at the end of the command. This is required to answer No to the question about resetting the bootloader to the default configuration.

Next up, check the bootloader version and update this if necessary by using the rpi-eeprom-update command:

sudo rpi-eeprom-update

This generated the following output:

BOOTLOADER: up to date
   CURRENT: Fri 16 Feb 15:28:41 UTC 2024 (1708097321)
    LATEST: Fri 16 Feb 15:28:41 UTC 2024 (1708097321)
   RELEASE: latest (/lib/firmware/raspberrypi/bootloader-2712/latest)
            Use raspi-config to change the release.

PCIe Experimental Mode (Optional)

The last step is optional and turns on an experimental high speed feature, namely PCIe mode 3. Edit the /boot/firmware/config.txt file and add the following to the end of the file:

[all]
dtparam=pciex1_gen=3

This step is optional and without this entry the system will run in PCIe mode 2.

Reboot

So the boot loader is up to date, time for another reboot using the command sudo reboot now.

Mounting the Drive

We should now be at the point where the system has the correct configuration to allow access to the drive mounted on the NVMe Base. The drive should appear in the /dev directory:

ls /dev/nv*

shows:

/dev/nvme0  /dev/nvme0n1

This looks good, as the drive shows up as two devices. Now we need to put a file system on the drive; the following formats the drive using the ext4 file system:

sudo mkfs.ext4 /dev/nvme0n1 -L Data

For the 500GB ADATA drive supplied with the NVMe Base kit, this command generates the following:

mke2fs 1.47.0 (5-Feb-2023)
Discarding device blocks: done
Creating filesystem with 125026900 4k blocks and 31260672 inodes
Filesystem UUID: 008efe62-93f2-4875-bf52-5843953db8d0
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
	4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
	102400000

Allocating group tables: done
Writing inode tables: done
Creating journal (262144 blocks): done
Writing superblocks and filesystem accounting information: done

Next up we need to mount the file system and make it accessible to the user. Assuming the default (current) user is pi and the user is in the group pi then the following commands will make the drive available to the current session:

sudo mkdir /mnt/nvme0
sudo mount /dev/nvme0n1 /mnt/nvme0
sudo chown -R pi:pi /mnt/nvme0

Firstly, we make the mount point for the partition and mount the partition /dev/nvme0n1 as /mnt/nvme0. Finally, we ensure that the pi user has access to the mounted file system; note that the chown must come after the mount, otherwise write access to the mounted drive is denied. The availability of the drive can be checked with the df command. Running:

df -h

should result in something like:

Filesystem      Size  Used Avail Use% Mounted on
udev            3.8G     0  3.8G   0% /dev
tmpfs           806M  5.2M  801M   1% /run
/dev/sda2       229G  1.8G  216G   1% /
tmpfs           4.0G     0  4.0G   0% /dev/shm
tmpfs           5.0M   48K  5.0M   1% /run/lock
/dev/sda1       510M   63M  448M  13% /boot/firmware
tmpfs           806M     0  806M   0% /run/user/1000
/dev/nvme0n1    469G   28K  445G   1% /mnt/nvme0

Access should now be checked by copying some files or creating a directory on the drive.
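
For example, the following quick test creates a directory and a file on the new drive:

cd /mnt/nvme0
mkdir Test
echo "Hello, world" > hello.txt
cat hello.txt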

Mounting the Partition Automatically

The final step to take is to mount the partition and make the file system available as soon as the system boots. If the Raspberry Pi is rebooted now and the df -h command is run then something like the following is shown:

Filesystem      Size  Used Avail Use% Mounted on
udev            3.8G     0  3.8G   0% /dev
tmpfs           806M  5.2M  801M   1% /run
/dev/sda2       229G  1.8G  216G   1% /
tmpfs           4.0G     0  4.0G   0% /dev/shm
tmpfs           5.0M   48K  5.0M   1% /run/lock
/dev/sda1       510M   63M  448M  13% /boot/firmware
tmpfs           806M     0  806M   0% /run/user/1000

Note that the drive /mnt/nvme0 has not appeared in the list of file systems available. Executing ls /dev/nvm* shows the device is available but it has not been mounted.

The first step in mounting the drive automatically is to find the unique identifier (UUID) for the device using the command:

sudo blkid /dev/nvme0n1

This resulted in the following output:

/dev/nvme0n1: LABEL="Data" UUID="008efe62-93f2-4875-bf52-5843953db8d0" BLOCK_SIZE="4096" TYPE="ext4"

Make a note of the UUID shown for your drive. The final step is to add this to the /etc/fstab file, so edit this file with the command:

sudo nano /etc/fstab

and add the following line to the end of the file:

UUID=008efe62-93f2-4875-bf52-5843953db8d0 /mnt/nvme0 ext4 defaults,auto,users,rw,nofail,noatime 0 0

Remember to substitute the UUID for the drive on your system for the UUID above.

Time for one final reboot of the system; then log on to the Raspberry Pi and check the availability of the file system with the df -h command. The drive should now be available and accessible to the pi user as well as any users in the pi group.

Conclusion

The NVMe Base boards offer a way to add M-key NVMe SSD drives to the Raspberry Pi using PCIe gen 2 and even experimental access using PCIe gen 3. This results in high speed access to data stored on these drives as well as increased reliability over SD card boot devices.

The standard installation documentation is excellent, but it assumes that the NVMe Base is going to host the OS as well as user data on the drive installed on the NVMe Base. It also assumes that the user will be using the Raspberry Pi desktop and not running headless.

Hopefully the above shows how to install the NVMe Base and SSD drive as data storage only media, with a second drive acting as the OS boot device.

In the next post we will look at taking the above steps and automating them to allow for reliable, repeated installations.

Docker File Sharing

June 11th, 2024 • Aide-memoir, Docker, Software Development

Docker File Sharing Banner

I use Docker fairly often for a number of projects. I find it useful for getting access to tools that are not available on my host system (Mac running on Apple silicon) and also for duplicating the environments I use for CI on GitHub. I also use it to access some legacy tools that I can no longer install locally on my system but where the vendor has provided a Docker image containing those tools.

Docker Image

The docker containers I use most often are the espressif/idf containers, specifically the espressif/idf:release-v4.2 container. This is used to access the ESP-IDF release 4.2 tools and libraries to support maintenance of legacy code whilst it is being ported to a newer version of the SDK.

Access to the tools is through the docker command:

docker run --platform linux/amd64 --rm -it -v $PWD:/project -w /project espressif/idf:release-v4.2

Running this command starts the container and sets the PATH etc., allowing interactive access to the tools.

Detecting the Python interpreter
Checking "python" ...
Python 3.6.9
"python" has been detected
Adding ESP-IDF tools to PATH...
Using Python interpreter in /opt/esp/python_env/idf4.2_py3.6_env/bin/python
Checking if Python packages are up to date...
Python requirements from /opt/esp/idf/requirements.txt are satisfied.
Added the following directories to PATH:
  /opt/esp/idf/components/esptool_py/esptool
  /opt/esp/idf/components/espcoredump
  /opt/esp/idf/components/partition_table
  /opt/esp/idf/components/app_update
  /opt/esp/tools/xtensa-esp32-elf/esp-2020r3-8.4.0/xtensa-esp32-elf/bin
  /opt/esp/tools/xtensa-esp32s2-elf/esp-2020r3-8.4.0/xtensa-esp32s2-elf/bin
  /opt/esp/tools/esp32ulp-elf/2.28.51-esp-20191205/esp32ulp-elf-binutils/bin
  /opt/esp/tools/esp32s2ulp-elf/2.28.51-esp-20191205/esp32s2ulp-elf-binutils/bin
  /opt/esp/tools/cmake/3.16.4/bin
  /opt/esp/tools/openocd-esp32/v0.11.0-esp32-20220706/openocd-esp32/bin
  /opt/esp/python_env/idf4.2_py3.6_env/bin
  /opt/esp/idf/tools
Done! You can now compile ESP-IDF projects.
Go to the project directory and run:

  idf.py build

root@a76f973ca31c:/project#

It is now possible to use all of the usual Espressif tools from the command prompt with the usual caveats for USB port access on Mac systems. Still, development remains possible even if deployment requires some additional steps.

Workflow

For a while now the working workflow has been as follows:

  • Edit the code, build scripts in VS Code on Mac
  • Build the code in the docker container running in a terminal session
  • Deploy the code from a second terminal session where the latest libraries are installed and configured

This worked flawlessly for several months.

Until the last few days.

The Problem

A recent requirement change necessitated the modification of the source code for the application built using the workflow described above. All seemed to start well, the code was edited, the docker container started and the code hit the first compilation of the day for a syntax check.

This was closely followed by some code changes and a second compilation. All seemed well, and the code was committed to source control and rebuilt. At this point I noticed something odd: the automatically generated build number did not increase. The system used for this repository changes the build number based upon the number of commits. The first two compilations would not have increased the build number as there was no commit, but the build after the commit would normally increment the build number and it clearly was not doing so.

Let’s introduce a syntax error, deleting a semicolon should do it, and try rebuilding. The code compiled with no errors. How strange.

Investigating further, using more to compare the contents of the files on the host machine against those in the docker container, revealed that the changes on the host system were not being reflected in the docker image.

Docker File System Access

Something odd is happening to the file system. Changes on the host are clearly not being reflected in the mounted volume in the docker container. Time to try a few things with the syntax error still in place:

  • Exit the docker container and restart it and build the code – no change, the code still compiles
  • Exit the docker container and run it non-interactively – still no change
  • Change the mount method from a volume to a mount option – the code still compiles
  • Delete the docker image and restart (this will rebuild from scratch) – compilation gives a syntax error

So finally, the code change was reflected in the docker volume. Now we need to remove the syntax error by reverting the file change so we can recompile and move on. Doing this resulted in a compilation failure; the change had once again not been applied to the mounted volume.

At this point a colleague checked on their system that they could change files and see the changes reflected in the mounted volume, and they could. Time for a comparison of system settings, and the obvious one to pick up is the file system settings. A quick check showed a difference between the two systems. Changing my system to match theirs and restarting everything resolved the issue.

The difference was in the file sharing implementation used for the containers.

File Sharing Selection

My local system had this configured for gRPC FUSE. Changing to the above setting, VirtioFS, and restarting Docker Desktop and the docker container seems to have fixed the issue.

Conclusion

I still do not know why the file sharing implementation setting changed, or why the system stopped reflecting changes to the files in the mounted volume. I don't think I will ever find out, but maybe this note will help others (maybe even myself in the future).

Test Environments

May 28th, 2024 • Raspberry Pi

Pi vs. Mac Mini

Not really a technical article this time; instead, a review of setting up a test environment with a Raspberry Pi vs. a used Mac Mini.

The introduction of the Raspberry Pi brought low cost computing to the market. These small machines can help set up test environments where high speed and resource hungry systems are not required. I have used them for many years to provide a self contained environment, one which can also allow a debugger to be attached to an embedded development board.

So why the Pi vs. Mac Mini question?

Raspberry Pi 5

The increasing power of the Raspberry Pi single board computer has been accompanied by an increasing cost for the boards. The latest model (at the time of writing), the Pi 5, can cost over £80 plus shipping, and this is for the Raspberry Pi alone. A number of other components are also required:

  • Power supply – £10
  • Storage – £10+ for SD card, more for SSD

The following optional components would also be recommended for a self contained test environment:

  • Active cooler – £5
  • Case – £10

Personally, I recommend using a SSD with a USB to SATA adapter as a minimum for storage as these drives tend to be faster and more reliable than SD cards.

Adding all of this together brings the base cost for a usable system that can be used as a test controller to around £140 including shipping (assuming a 256GB SSD).

Alternatives?

CEX is a chain of shops in the UK that buy and sell used electronic equipment. Any equipment sold by CEX has a 2 year warranty so there is some peace of mind about the quality of the goods being sold.

So why mention them?

As mentioned at the top of this article, the test controller does not necessarily have to be very powerful. This means that a 3 (or more) year old machine will be more than capable of doing the job.

Enter the Intel range of Mac Mini computers. These can be picked up from as little as £80 for a low spec Intel edition, with higher specification Intel machines available for around £150. These prices more than match those of a current Raspberry Pi 5 with the accessories needed to make the system useful. This is especially true when you consider that the Mac Mini comes with storage and a power supply built into the system.

Long Term Support

The Raspberry Pi Foundation has a history of long term support. The latest operating system releases still include support for the original Raspberry Pi released over a decade ago. This is fairly impressive and something not matched by many (if any) other hardware vendors.

The Intel range of Mac Mini computers are fairly well supported but older hardware has ceased to be supported for new operating system releases. Security releases are still made available for older systems but new features are not.

So the question is: does this really matter for an isolated test system? Probably not, but if software support is a concern then the Raspberry Pi is the better choice at this price point.

Conclusion

Modern software development requires the use of fairly powerful computers. This article is being written on a modern Mac laptop with a powerful M2 processor and 32 GB of RAM. This sort of power is needed for compilation but is overkill for execution and supervision of test environment for embedded development.

Using second-hand equipment keeps these machines alive and prevents them going to recycling (at least for a few years), making them environmentally friendly and cost effective.

KiCAD ERC and DRC

April 23rd, 2024 • KiCad

KiCAD ERC DRC Checks Status

2024 saw the release of KiCAD version 8. This release brings a number of new features to both the user interface and the command line interface. One useful feature has been made easier to use, namely the ability to run the Electrical Rules Checks (ERC) and Design Rules Checks (DRC).

What are ERC and DRC?

ERC checks are run on the schematic and the checks verify the schematic against a number of common mistakes. Examples include:

  • Pins not connected to a net (but not marked as not connected)
  • Pin conflicts
  • Missing drivers

DRC runs against the PCB layout and checks the design rules specified in the design. These rules will vary between fabrication houses and include properties such as:

  • Minimum track width
  • Minimum track spacing
  • Hole-to-track spacing limits

Both types of check can be run through the user interface and from the command line. It is the command line interface that we will look at here as this will allow the checks to be run as part of a continuous integration pipeline (in this case a GitHub Action).

KiCAD Docker Container

My initial thought about running these checks was to use a docker container. This allows the use of a script to run the same commands from the desktop as well as through CI. Luckily, the KiCAD project maintains a docker container for just this purpose.

First step, create a script file containing the two commands that we need to run the checks. Our script file will look something like this:

#!/bin/bash

set -e
scriptdir="$( cd "$(dirname "$0")" ; pwd -P )"

kicad-cli sch erc --severity-all --exit-code-violations -o "$scriptdir/erc.rpt" Meadow-Feather-OLED.kicad_sch
kicad-cli pcb drc --severity-all --exit-code-violations -o "$scriptdir/drc.rpt" Meadow-Feather-OLED.kicad_pcb

If we save this file as, say, run-erc-drc.sh then we can run the checks from the command line, using docker to run the script. The command to run the script would look something like this:

docker run --platform linux/amd64 --rm -v $PWD:/project -w /project kicad/kicad:8.0.1 ./run-erc-drc.sh

Note that the --platform option allows the container to run on Apple Silicon Macs as well as Intel platforms. Running the above command will show the following output:

Running ERC...
Checking sheet names...
Checking bus conflicts...
Checking conflicts...
Checking units...
Checking footprints...
Checking pins...
Checking labels...
Checking for unresolved variables...
Checking no connect pins for connections...
Checking for library symbol issues...
Checking for off grid pins and wires...
Checking for undefined netclasses...
Found 0 violations
Saved ERC Report to /project/erc.rpt
Loading board
Running DRC...
Found 0 violations
Found 0 unconnected items
Saved DRC Report to /project/drc.rpt

The run-erc-drc.sh script will generate two report files, one for the ERC and one for the DRC. Checking the erc.rpt file contents will, for a successful run, show something like this:

ERC report (2024-04-22T18:38:28+0000, Encoding UTF8)

***** Sheet /

 ** ERC messages: 0  Errors 0  Warnings 0

Great, so we now have the docker container created and the report is being generated based upon the current design and PCB layout. The next step is to integrate this into a GitHub Action.

GitHub Action

The first attempt at performing the checks in a GitHub Action resulted in the following yaml file:

name: Run ERC and DRC checks

on:
  repository_dispatch:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
    - name: Checkout source code
      uses: actions/checkout@v4

    - name: Run the checks
      run: docker run --platform linux/amd64 --rm -v $PWD:/project -w /project kicad/kicad:8.0.0 ./run-erc-drc.sh
      id: run-the-checks

The theory was to use the same script file in the GitHub Action as was used on the desktop. This would ensure that the checks performed are consistent on both platforms. It would also mean that only one script would need to be maintained.

“Houston We Have a Problem”

Saving a new version of the schematic and pushing the changes to GitHub ran the action and resulted in the following log entries:

11:33:55: Error: can't open file 'erc.rpt' (error 13: Permission denied)
Running ERC...
Checking sheet names...
Checking bus conflicts...
Checking conflicts...
Checking units...
Checking footprints...
Checking pins...
Checking labels...
Checking for unresolved variables...
Checking no connect pins for connections...
Checking for library symbol issues...
Checking for off grid pins and wires...
Checking for undefined netclasses...
Found 0 violations
Unable to save ERC report to erc.rpt
Error: Process completed with exit code 4.

Some time later…

Some time spent with Google and the GitHub help system yielded nothing too helpful for solving the issue and removing the error. It was, however, discovered that someone had already created a GitHub Action to run the checks. The sparkengineering/kicad-action@v1 action can be used, and the following yaml file will run the action and perform the ERC and DRC checks:

name: Run ERC and DRC checks

on:
  repository_dispatch:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  RunChecks:
    runs-on: ubuntu-latest

    steps:
    - name: Checkout source code
      uses: actions/checkout@v4

    - name: Run KiCad ERC
      id: erc
      uses: sparkengineering/kicad-action@v1
      if: '!cancelled()'
      with:
        kicad_sch: Meadow-Feather-OLED.kicad_sch
        sch_erc: true

    - name: Run KiCad DRC
      id: drc
      uses: sparkengineering/kicad-action@v1
      if: '!cancelled()'
      with:
        kicad_pcb: Meadow-Feather-OLED.kicad_pcb
        pcb_drc: true

The results of the checks can be shown on the repository home page by creating a badge for the workflow and adding it to the README.md file. The badge shows the status of the checks and is updated each time the checks are run.
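
GitHub builds the badge URL from the workflow file name; the Markdown added to README.md follows this pattern, where the owner, repository and workflow file names below are placeholders:

![Run ERC and DRC checks](https://github.com/OWNER/REPOSITORY/actions/workflows/WORKFLOW_FILE.yml/badge.svg)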

Conclusion

The ready built action does perform the checks required, and the badge shows the results of the check workflow. While the methods of running the checks differ, both will run the checks and give an indication of success or failure. The desktop script has the advantage of producing report files that can be reviewed, along with the schematic and PCB layout, to resolve any design issues.

It was disappointing to find that the docker image could not be used within the GitHub Action. There is surely a simple solution; it is just a matter of finding the time to track it down.

An example of the GitHub Action script and a PCB design and layout can be found in the Meadow OLED Feather board repository.

Repeatable Deployments (Part 1)

March 19th, 2024 • Ansible, Raspberry Pi, Software Development

Repeatable Deployment Banner

A common problem in the IT world is to create a consistent environment in a repeatable manner. This is important in a number of use cases:

  • Development
  • Testing
  • Training

This series of posts will investigate using Ansible to create a consistent test environment, one that can be setup and torn down quickly and easily.

The starting point is setting up the hardware and installing the operating system (OS) which will be covered here. Subsequent posts will use Ansible to configure the system and deploy additional tools.

The Hardware

The test environment will be based around the Raspberry Pi 5 (although any version of the Pi hardware could be used). The system will be built around the following components:

  • Raspberry Pi (3, 4 or 5)
  • 256 GByte SATA SSD
  • SATA to USB adapter
  • Cooling fan (for the Raspberry Pi 5)
  • Power Supply
  • Ethernet cable
  • 3D printed mounts to bring everything together

Grabbing a Raspberry Pi 5 and putting all of this together yields something like this:

Raspberry Pi Setup

SATA SSDs have been chosen for the OS and data storage as they are both faster and more reliable than SD cards. From a cost perspective they are not too much more expensive than a quality SD card. It should be noted that third party addon boards are becoming available that allow one or two NVMe drives to be added to the Raspberry Pi 5 using the PCIe bus.

Write OS Image

The easiest way to create a bootable Raspberry Pi system is to use the Raspberry Pi Imager. This is a free tool that allows the selection of one of the many operating systems available for the Raspberry Pi, and it can then write the operating system to an SD card or HDD/SSD.

The process starts by connecting the SATA to USB adapter to the SSD and then connecting the drive to the host computer. This makes the drive appear as an external USB drive.

Now start Raspberry Pi Imager:

Raspberry Pi Imager

Select the device we are going to create the image for, in this case this is the Raspberry Pi 5:

Select Device

The next step is to decide which operating system should be installed on the SSD. There are a large number of options and the selection will depend upon what you want to achieve. In this case we can use a basic system such as Raspberry Pi OS Lite. Firstly, select the Raspberry Pi OS (64-bit) operating system:

Select Operating System

Now refine this selection and select the Raspberry Pi OS Lite (64-bit):

Select Raspberry Pi Lite

A basic system will be adequate as the device is intended to be run headless and so the desktop environment and applications are not required.

The next step is to select the storage device that the image will be written to. Once this is done we can move on to providing some configuration options for the operating system.

Ready For Configuration

Click the Next button to move on to the next step, editing the configuration.

Edit Settings

Clicking Edit Settings starts the editing process. The General options are presented first; here we can set the following:

  • Hostname
  • User name and password
  • WiFi access point details

Customise General Settings

SSH should be enabled in order to run the system headless. This is enabled on the Services tab:

Customise Services

Clicking on Save now gives the option of applying the settings and starting to write the image to the SSD:

Apply Settings

The final step is to confirm that the SSD can be erased:

Confirm Media Erase

Control now passes back to the main window where the write and verification progress can be monitored:

Writing OS

After a short while the process will complete and Raspberry Pi Imager will confirm that the image has been written successfully. The drive can now be disconnected from the host computer and connected to the Raspberry Pi 5:

OS Write Successful

Conclusion

The whole process of creating the image is straightforward and only takes a few minutes. At the end of the process the Raspberry Pi is ready to boot.
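
As a quick first-boot check (assuming the hostname and user entered in the Imager were TestServer and clusteruser, and that mDNS name resolution is available on the local network), the new system should be reachable over SSH:

user@host:~ $ ssh clusteruser@TestServer.local
clusteruser@TestServer:~ $ uname -a

If the .local name does not resolve, the IP address handed out by the router can be used instead.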

The next step will be to start the installation and configuration of additional software tools and components. Something for the next post in this series.

KiCAD Relative Positioning

March 10th, 2024 • KiCad, TipsComments Off on KiCAD Relative Positioning

KiCAD Positioning Banner

Most of the PCBs I make have mounting holes in the final layout to allow the boards to be firmly attached to 3D printed cases or mounts. When I first started using KiCAD I found it difficult to position the arc edge cuts around the mounting holes accurately. The positioning error was minor and difficult to see, so it was never critical, but it was a little annoying and worth fixing.

This was something I finally worked out in the last design I sent to manufacture and thought it would be something others might want to know about.

Board Layout

Most of my designs result in a square or rectangular board. The boards are simple and don’t really need to fit an irregularly shaped case. So most of the time I am trying to place a hole at the corner of, say, a square and then place an edge cut around the hole, something like this:

PCB with two mounting holes

Placing the mounting holes is a simple case of editing the x and y positions of each hole and ensuring that the holes are lined up correctly. The edge cuts are a little more difficult to position consistently when placing them by hand.

Accurate Edge Cuts

As noted above, the first stage is to place the mounting holes on a rectangular grid, using the x and y positions to place the holes. The next step is to create an arc centred on one of the mounting holes with the appropriate radius. This can be done using the centre of the mounting hole as the starting point and then sweeping an arc through 90 degrees around the hole:

Arc and Hole
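
As a concrete sketch (the coordinates are invented purely for illustration), for a mounting hole centred at (10, 10) mm with a 3 mm corner radius the arc would be defined as:

Arc centre: (10, 10) mm   (the centre of the mounting hole)
Arc start:  (13, 10) mm   (centre x plus the radius)
Arc end:    (10, 13) mm   (after the 90 degree sweep)

Entering these values directly in the arc properties dialog places the arc exactly, and the round numbers make it easy to spot when a duplicated arc has drifted off a hole centre.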

Next up we duplicate the arc, rotate it through 90 degrees and move it to one of the opposite mounting holes:

Duplicated arcs

As you can see, the duplicated arc is not centred on the opposite mounting hole. We now use the positioning tools to align the arc with the mounting hole. Start by selecting the arc, then right click and select Position Relative To… from the context menu:

Positioning context menu

From the positioning dialog click on the Select Item button:

Select item button

Next, select the reference item, in this case it is the mounting hole:

Select the reference item

The positioning dialog will now reappear with the reference item selected. Ensure that the Offset X and Offset Y are both set to 0 and click OK.

Position dialog box

The arc should now move and be centred on the mounting hole.

Final arc position

Finally, repeat for the remaining two mounting holes.

Conclusion

This method allows the board outline to be defined more accurately than lining up the arcs by eye. It is simple to do and only takes a few minutes to complete. The arcs can then be used as the anchors for the linear edges of the board.

This technique is also useful for positioning other parts on any design.

Photo to Pencil Drawing With Affinity Photo

February 11th, 2024 • Affinity Photo, TipsComments Off on Photo to Pencil Drawing With Affinity Photo

Pencil Drawing Banner

Another aide-mémoire: how to convert a photo into a pencil sketch using Affinity Photo. I don’t do this often and so I always forget the steps.

In the following I will refer to the Mac keystrokes, which use the CMD key; on the PC use the CTRL key.

Starting Point

This example will use a photo of a working cocker spaniel:

Original Image

The image has a reasonable amount of detail and will be a challenge.

Essential Steps

For me, the first step when working with any photograph is to create a duplicate of the original and make sure that the original is locked.

Shortcut: CMD+J

Duplicate Layer

Next step, invert the image on the duplicate layer.

Shortcut: CMD+I

Invert duplicate layer

Next up, change the blend mode of the duplicate layer to Colour Dodge.

Colour Dodge

The image should now turn white. Now add a Gaussian blur to the duplicate layer.

Gaussian Blur

Use the slider to change the radius until you are happy with the effect.
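
For anyone curious why this works: colour dodge brightens the base layer by dividing it by the inverse of the blend layer, roughly result = base / (1 - blend). Because the blend layer is an exact inverted copy, (1 - blend) equals the base and every pixel divides out to pure white. The blur makes the two layers disagree near edges and textured areas, and those differences are what show through as pencil strokes; a larger radius gives heavier, softer shading.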

Change Blur Radius

At this point the image still has some colour in it. Adding an HSL adjustment and reducing the saturation to 0% will remove the colour.

Add HSL Adjustment

Set saturation to 0%

The final step is to add a Levels Adjustment:

Adjust Levels

and adjust the black level:

Change the Black Level

Here is a zoomed in section of the final image:

Image post levels adjustment

Optional Additional Adjustments

There are some additional adjustments that can be made to give the image the appearance of an actual pencil drawing:

  • Add a paper-like canvas to the image
  • Use a mask layer to paint out some of the background around the edges giving a blurred edge
  • If the edge of the image is predominantly white then maybe use the inpainting tool to remove any slightly grey areas in the background

Conclusion

Here is the full image with just the essential adjustments:

Final Image