Archive for the ‘Software Development’ Category

Docker File Sharing

Tuesday, June 11th, 2024

Docker File Sharing Banner

I use Docker fairly often for a number of projects. I find it useful for getting access to tools that are not available on my host system (Mac running on Apple silicon) and also for duplicating the environments I use for CI on GitHub. I also use it to access some legacy tools that I can no longer install locally on my system but where the vendor has provided a Docker image containing those tools.

Docker Image

The docker containers I use most often are the espressif/idf containers, specifically the espressif/idf:release-v4.2 container. This gives access to the ESP-IDF release 4.2 tools and libraries, supporting maintenance of legacy code whilst it is being ported to a newer version of the SDK.

Access to the tools is through the docker command:

docker run --platform linux/amd64 --rm -it -v $PWD:/project -w /project espressif/idf:release-v4.2

Running this command starts the container and sets up the PATH etc., allowing interactive access to the tools.

Detecting the Python interpreter
Checking "python" ...
Python 3.6.9
"python" has been detected
Adding ESP-IDF tools to PATH...
Using Python interpreter in /opt/esp/python_env/idf4.2_py3.6_env/bin/python
Checking if Python packages are up to date...
Python requirements from /opt/esp/idf/requirements.txt are satisfied.
Added the following directories to PATH:
  /opt/esp/idf/components/esptool_py/esptool
  /opt/esp/idf/components/espcoredump
  /opt/esp/idf/components/partition_table
  /opt/esp/idf/components/app_update
  /opt/esp/tools/xtensa-esp32-elf/esp-2020r3-8.4.0/xtensa-esp32-elf/bin
  /opt/esp/tools/xtensa-esp32s2-elf/esp-2020r3-8.4.0/xtensa-esp32s2-elf/bin
  /opt/esp/tools/esp32ulp-elf/2.28.51-esp-20191205/esp32ulp-elf-binutils/bin
  /opt/esp/tools/esp32s2ulp-elf/2.28.51-esp-20191205/esp32s2ulp-elf-binutils/bin
  /opt/esp/tools/cmake/3.16.4/bin
  /opt/esp/tools/openocd-esp32/v0.11.0-esp32-20220706/openocd-esp32/bin
  /opt/esp/python_env/idf4.2_py3.6_env/bin
  /opt/esp/idf/tools
Done! You can now compile ESP-IDF projects.
Go to the project directory and run:

  idf.py build

root@a76f973ca31c:/project#

It is now possible to use all of the usual Espressif tools from the command prompt with the usual caveats for USB port access on Mac systems. Still, development remains possible even if deployment requires some additional steps.
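
The same container can also be run non-interactively, which is handy for scripted builds. A minimal sketch, assuming the project uses the standard idf.py build target and the same volume mapping as above:

docker run --platform linux/amd64 --rm -v $PWD:/project -w /project espressif/idf:release-v4.2 idf.py build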

Workflow

For a while now the workflow has been as follows:

  • Edit the code and build scripts in VS Code on the Mac
  • Build the code in the docker container running in a terminal session
  • Deploy the code from a second terminal session where the latest libraries are installed and configured

This worked flawlessly for several months.

Until the last few days.

The Problem

A recent requirement change necessitated the modification of the source code for the application built using the workflow described above. All seemed to start well: the code was edited, the docker container started and the code hit the first compilation of the day for a syntax check.

This was closely followed by some code changes and a second compilation. All seemed well and the code was committed to source control and rebuilt. At this point I noticed something odd: the automatically generated build number did not increase. The system used for this repository changes the build number based upon the number of commits. The first two compilations would not have increased the build number as there was no commit. The build after the commit would normally generate an increment in the build number and it clearly was not doing so.

Let’s introduce a syntax error, deleting a semicolon should do it, and try rebuilding. The code compiled with no errors. How strange.

Investigating further, using more to compare the contents of the files on the host machine against those in the docker container, revealed that the changes on the host system were not being reflected in the container.
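
A quick way to confirm this kind of drift is to compare checksums of the same file on the host and inside the running container; a rough sketch (the container ID and file path here are only placeholders):

md5 main/app_main.c
docker exec <container-id> md5sum /project/main/app_main.c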

Docker File System Access

Something odd is happening to the file system. Changes on the host are clearly not being reflected in the mounted volume in the docker container. Time to try a few things with the syntax error still in place:

  • Exit the docker container and restart it and build the code – no change, the code still compiles
  • Exit the docker container and run it non-interactively – still no change
  • Change the mount method from a volume to a mount option – the code still compiles
  • Delete the docker image and restart (this will rebuild from scratch) – compilation gives a syntax error

So finally, the code change is reflected in the docker volume. Now we need to remove the syntax error by reverting the file change and we can recompile and move on. Doing this resulted in a compilation failure; the change had once again not been applied to the mounted volume.

At this point a colleague checked on their system that they could change files and see the changes reflected in the mounted volume, and yes they could. Time for a comparison of the system settings, and the obvious one to pick up on is the file sharing setting. A quick check showed a difference between the two systems. Changing my system to match theirs and restarting everything resolved the issue.

The difference was in the ‘File sharing implementation for your containers’ setting.

File Sharing Selection

My local system had this configured for gRPC FUSE. Changing to the above setting, VirtioFS, and restarting Docker Desktop and the docker container seems to have fixed the issue.

Conclusion

I still do not know why the file sharing implementation changed, or why the system stopped reflecting changes to the files in the mounted volume. I don’t think I will ever find out, but maybe this note will help others (maybe even myself in the future).

Repeatable Deployments (Part 1)

Tuesday, March 19th, 2024

Repeatable Deployment Banner

A common problem in the IT world is to create a consistent environment in a repeatable manner. This is important in a number of use cases:

  • Development
  • Testing
  • Training

This series of posts will investigate using Ansible to create a consistent test environment, one that can be setup and torn down quickly and easily.

The starting point is setting up the hardware and installing the operating system (OS) which will be covered here. Subsequent posts will use Ansible to configure the system and deploy additional tools.

The Hardware

The test environment will be based around the Raspberry Pi 5 (although any version of the Pi hardware could be used). The system will be built around the following components:

  • Raspberry Pi (3, 4 or 5)
  • 256 GByte SATA SSD
  • SATA to USB adapter
  • Cooling fan (for the Raspberry Pi 5)
  • Power Supply
  • Ethernet cable
  • 3D printed mounts to bring everything together

Grabbing a Raspberry Pi 5 and putting all of this together yields something like this:

Raspberry Pi Setup

SATA SSDs have been chosen for the OS and data storage as they are both faster and more reliable than SD cards. From a cost perspective they are not too much more expensive than a quality SD card. It should be noted that third-party add-on boards are becoming available that allow one or two NVMe drives to be added to the Raspberry Pi 5 using the PCIe bus.

Write OS Image

The easiest way to create a bootable Raspberry Pi system is to use the Raspberry Pi Imager. This is a free tool that allows the selection of one of the many operating systems available for the Raspberry Pi and it can then be used to write the operating system to an SD card or HDD/SSD.

The process starts by connecting the SATA to USB adapter to the SSD and then connecting the drive to the host computer. This makes the drive appear as an external USB drive.

Now start Raspberry Pi Imager:

Raspberry Pi Imager

Select the device we are going to create the image for, in this case this is the Raspberry Pi 5:

Select Device

The next step is to decide which operating system should be installed on the SSD. There are a large number of options and the selection will depend upon what you want to achieve. In this case we can use a basic system such as Raspberry Pi OS Lite. Firstly, select the Raspberry Pi (64-bit) operating system:

Select Operating System

Now refine this selection and select the Raspberry Pi OS Lite (64-bit):

Select Raspberry Pi Lite

A basic system will be adequate as the device is intended to be run headless and so the desktop environment and applications are not required.

The next step is to select the storage device that the image will be written to. Once this is done we can move on to providing some configuration options for the operating system.

Ready For Configuration

Click the Next button to move on to the next step, editing the configuration.

Edit Settings

Clicking Edit Settings starts the editing process. The General options are presented first; here we can set the following:

  • Hostname
  • User name and password
  • WiFi access point details

Customise General Settings

SSH should be enabled in order to run the system headless. This is enabled on the Services tab:

Customise Services

Clicking on Save now gives the option of applying the settings and starting to write the image to the SSD:

Apply Settings

The final step is to confirm that the SSD can be erased:

Confirm Media Erase

Control now passes back to the main window where the write and verification progress can be monitored:

Writing OS

After a short while the process will complete and Raspberry Pi Imager will confirm that the image has been written successfully. The drive can now be disconnected from the host computer and connected to the Raspberry Pi 5:

OS Write Successful

Conclusion

The whole process of creating the image is straightforward and only takes a few minutes. At the end of the process the Raspberry Pi is ready to boot.
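
Once the Pi has booted, a quick way to confirm that the headless setup is working is to connect over SSH using the hostname and user entered in the Imager settings (the names below are placeholders; the .local form relies on mDNS being available on the network):

ssh <username>@<hostname>.local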

The next step will be to start the installation and configuration of additional software tools and components. Something for the next post in this series.

Mac Remote Access

Sunday, October 15th, 2023

SSH Login Command

This blog serves two purposes:

  • Sharing information that I hope is useful to others
  • Aide-Memoire for yours truly

This post falls into the second group, something I’ve done in the past but forgotten.

Background

The current project requires the test environments to be expanded. Several of the environments are running on Raspberry Pi SBCs, which is feasible now that they are available in volume once again. There is one exception, a Mac Mini with an M1 processor. This environment allows the usual tests to be run in the same manner as on the Raspberry Pi boards. It also gives the ability to build the code and attach a debugger to the board, invaluable for tests that are known to be failing and need to run for an extended period of time.

This sort of setup is ideal for running headless, no monitor, keyboard or mouse; we can just use MacOS screen sharing and ssh.

What is Wrong?

Enter a new (well secondhand) Mac Mini. Setup went well, attached a keyboard and mouse and ran through the setup process with no issues. Logged on to the Mac and all is well. A few configuration tweaks to enable screen sharing and remote login were required, nothing too complex, just a case of setting the right permissions.

Next step, test the remote connection. Screen sharing started OK and the Mac appeared on the network with file sharing enabled. Time for a reboot.

System rebooted OK, time to browse the network.

The new machine was not showing in the network browser and ssh was unable to establish a connection.

Back to the still connected keyboard and mouse to log on. Once logged in the system once again appeared in the network browser and screen sharing and ssh worked flawlessly.

Time for another reboot and the same thing happened, machine booted OK but nothing appeared on the network until a successful login through the attached keyboard and mouse.

The Solution

This is where it gets odd. Apparently, you have to turn FileVault off. That’s right, you have to turn the disc encryption off in order to enable fully remote logon.

FileVault is turned on automatically during the macOS installation process, which makes sense. Disc encryption will make it harder for a malicious actor to recover sensitive information from a machine, so disc encryption on modern machines is good. The side effect of this is that you must log on to the Mac via an attached keyboard before it will turn up on the network.
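
For reference, FileVault can also be checked and, if appropriate, turned off from the command line using the standard macOS fdesetup tool; a quick sketch:

fdesetup status
sudo fdesetup disable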

Conclusion

I have a solution of sorts but I do find it odd that disc encryption must be disabled before remote services can be enabled on the Mac. After all, if you require remote access to a system then you are likely to be putting the physical machine in a location where access is going to be difficult.

Raspberry Pi Pico and Pico W Project Templates

Tuesday, September 5th, 2023

Pico Template Build Complete

There are some things in software development that we do not do very often and, because we perform the task infrequently, we often forget the nuances (well I do). One of these tasks is creating a new project.

In this post we will look at one option for automating this process, GitHub Templates. This work is partially my own work but also a case of bringing together elements of other blog posts and GitHub repositories.

All of the code discussed below can be found in the PicoTemplate repository and this should be used as a reference throughout this post.

Starting a New Project

Several days ago, I was in the process of starting a new project for the Raspberry Pi Pico W using the picotool. For those who are not familiar with this tool, it is meant to automate the project creation process for you. You simply tell the tool which features you are going to be using and a few other parameters like board type and it will generate a directory containing all of the necessary code and project files.

Sounds too good to be true. For me it was: no matter which feature list I requested, I could not get an application to deploy to the board and talk to the desktop computer over UART or USB.

At this point I decided to take this back to basics and get Blinky up and running. The problem definition became:

  1. C++ application
  2. Blink the onboard LED
  3. Support both Pico and Pico W boards
  4. Standard IO output over UART or USB

It would also be desirable to automate as much of the process as possible.

Blinky

One complication with the Pico boards is that the two boards use different mechanisms to access the onboard LED. The Pico has the LED wired directly to GPIO 25 whilst the Pico W uses a GPIO on the onboard WiFi / Bluetooth chip (hereafter referred to as just wireless). This means we have different versions of Blinky depending upon which board is selected. It also means that the Pico W board needs an additional library to support the wireless chip even if we are not using any wireless features.

The simplest way of doing this is to use a template directory containing the main application code and the project CMakeLists.txt file for each board. We can then copy the template files for the appropriate board type into the sources directory.

Keeping with the theme of forgetting the nuances, we will add some scripts to perform some common tasks:

  1. Configure the system (copying the files to the correct locations)
  2. Build the application
  3. Flash a board using openocd

Our first task will be to set the system up with the correct main.cpp and CMakeLists.txt files and put the files in the correct place. Checking the default CMakeLists.txt file we see that the project is named projectname, which is not very informative, so the configuration script will also rename the project.
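
As an illustration only (this is not the repository’s actual configure.sh; the templates directory layout and the sed-based rename are assumptions), the copy-and-rename step could look something like this:

# Copy the board specific template files into place (layout assumed)
cp templates/picow/main.cpp src/main.cpp
cp templates/picow/CMakeLists.txt CMakeLists.txt
# Rename the project from the placeholder name (keeps a .bak backup)
sed -i.bak "s/projectname/TestApplication/g" CMakeLists.txt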

The purpose of the build script should be obvious: it will build the application code and also offer the ability to perform a full rebuild if necessary.

The final script will flash the board using openocd and another Raspberry Pi Pico configured as a debug probe. This was discussed a few weeks ago in the post PicoDebugger – Bringing Picoprobe and the PicoW Together. This setup also has the advantage of exposing the default UART to the host computer.
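
For reference, flashing over the debug probe typically boils down to a single openocd command along these lines (a sketch rather than the repository’s flash.sh; the ELF path is an assumption):

openocd -f interface/cmsis-dap.cfg -f target/rp2040.cfg -c "adapter speed 5000" -c "program build/TestApplication.elf verify reset exit"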

Using the Scripts

Using the scripts is a three step process, one of which is only really performed once.

The first step is to use the configure.sh script to copy the files and rename the project:

./configure.sh -b=picow -n=TestApplication

After running this command the src/main.cpp file will contain the code to blink the LED on a Pico W board and the project will have been renamed TestApplication.

Secondly, we can build the application using the build.sh script:

./build.sh

Finally, the application can be deployed to the board using the flash.sh script.

./flash.sh

The UF2 file (in the case above, TestApplication.uf2) can also be copied to the board if a debug probe is not available.

Scope Creep

The story was supposed to end here but this is not going to be the case.

There are some improvements that we can look at to make the system a little more comprehensive. These include:

  1. Add a testing framework
  2. Create a Docker file for build and testing

Both of these items are currently work in progress.

Acknowledgements

This project has been inspired by several others on GitHub:

  • Raspberry Pi Pico Examples (https://github.com/raspberrypi/pico-examples)
  • Raspberry Pi Pico Template (https://github.com/cathiele/raspberrypi-pico-cpp-template)

Conclusion

The initial requirement of creating a template for Pico and Pico W development could have been achieved simply by creating two different templates, one for each board type. This would arguably have been quicker and less complex.

The addition of the testing framework and docker container into the requirement would have resulted in some duplication of work in each template. This made bringing the two templates together in one repository more logical as errors or additions in one project type are automatically part of the other board template.

Getting Started with Ansible

Monday, August 28th, 2023


Recent work has involved reviewing some test environments for an IoT development board. The aim is to improve some of the components used for testing as well as adding new functionality. The requirements are:

  • Provide an updated version of existing functionality
  • Single board environment with all functionality deployed for quick testing
  • Cluster distributing the test environment for load testing

The most cost effective way to do this is to use a number of Raspberry Pi single board computers. These boards are now becoming available in quantity again after several years of limited availability.

The Problem

How to set up the environment in such a way that a fresh environment can be created reliably.

Enter Ansible.

Ping

First step: try to contact a board, and this is where ping comes in. The ping module will verify that Ansible can connect to a board. The following command will test the connection to each board:

ansible cluster -m ping -i hosts

This command requires a text file hosts containing the list of boards to be contacted. The file is simple and may contain only two lines:

[cluster]
node

In the above example, the file defines a group of machines to be contacted, named cluster; in this case the group contains only one machine, named node. The group name cluster is also the one used in the ansible command above.

Additional machines can be named under the cluster entry by simply placing each additional entry on a new line in the file.
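
For example, a three-node cluster might use a hosts file like the following (node2 and node3 are just hypothetical names):

[cluster]
node
node2
node3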

So far this is nothing new and it is covered in the Ansible documentation.

What Happened

The first step was to use the Raspberry Pi Imager application to create a new image on a new SSD. Nothing complex:

  • Raspberry Pi 64-bit Lite OS
  • Set the machine name to be node
  • Enable SSH
  • Set the user name to clusteruser and give the user a secure password

The password was then stored on the local machine in an environment variable CLUSTER_PASSWORD to allow the scripts to be stored in source control without giving away any secrets.
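
Setting the variable is simply a case of exporting it in the shell before running any ansible commands (the value shown is obviously a placeholder):

export CLUSTER_PASSWORD='not-a-real-password'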

Time to test the connection with the following command:

ansible cluster -m ping -i hosts --extra-vars "ansible_user=clusteruser ansible_password=$CLUSTER_PASSWORD"

Breaking this down, we want to ping all of the machines defined in the cluster group. The group is defined in the file hosts and we are going to log on to the machines with the user name clusteruser and with the password contained in the CLUSTER_PASSWORD environment variable.

Now running the above command results in the following:

node | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}

Conclusion

A good start to the project, now on to something more complex, time to install and configure some software.

And I can’t believe I’ve missed Ansible for so long.

20×4 LCD Display and NuttX

Sunday, July 23rd, 2023

20x4 LCD Display

Time for some more experimentation with NuttX, today serial LCDs.

Serial LCDs

Small LCD displays can be found in many scientific instruments as they provide a simple way to display a small amount of information to the user. Typically these displays are 16×2 (2 lines of 16 characters) or 20×4 (4 lines of 20 characters) displays. The header to this article shows part of the output from a 20×4 display.

Communication with these displays is normally through a 4 or 8 bit interface when talking directly to the controller chip for the LCD. Using 4 or 8 data lines for communication with the LCD is a large burden on a small microcontroller. To overcome this, several vendors have produced a backpack that can be attached to the display. The backpack uses a smaller number of microcontroller lines for communication while still being able to talk to the LCD controller chip.

This post looks at using such an LCD and backpack with NuttX running on the Raspberry Pi Pico W.

NuttX Channel (YouTube)

A great place to start with NuttX is to have a look at the NuttX Channel on YouTube as there are a number of quick getting-started tutorials covering a number of subjects. In fact there is one covering a 16×2 LCD display which is similar to what will be used in this tutorial, with a small difference.

The video linked above covers a lot of what is needed to get a 16×2 LCD up and running. There are some small changes that are needed as NuttX has moved on since the video was released.

Hardware

The major changes compared to the video above are:

  • Microcontroller used will be the Raspberry Pi Pico W
  • LCD display will be 20×4 display

The LCD display used here will be a larger physical display (20×4 instead of 16×2) but it will still use the same interface on the backpack, namely the PCF8574 GPIO expander. This uses I2C as a communication protocol so reduces the number of GPIO lines required from 8 to 2.

There are two I2C buses on the Pico W, I2C0 and I2C1, and in this case I2C1 will be the chosen interface. This caused some issues, but more on that later.

For now we start with a base configuration with the LCD connected to GPIO6 and GPIO7 on the Pico W.

Configuring NuttX

Configuration followed much of the video linked above enabling the following:

  • Enable I2C master and I2C1 in the System Type menu
  • I2C Driver Support in the Device Drivers menu
  • PCF8574 support through Device Drivers, Alphanumeric Drivers menus
  • Segment LCD CODEC in the Library Routines menu
  • Segment LCD Test in the Application Configuration, Examples menu

Time to build, deploy and run.

  • make clean followed by make -j built the system
  • The application was then deployed to the Pico W and board was reset
  • The application can be run by connecting a terminal/serial application to the board and running the command slcd

Nothing appears on the display. Time to check the connections and break out the debugger.

Troubleshooting

Checking the connections showed that everything looked to be in order. The display was showing a faint pixel pattern, which is typical of these displays when they have been powered correctly but there is no communication. Double checking the I2C connections showed everything in theory was good.

Over to the software. Running through the configuration again and all looks good here. So let’s try I2C0 instead of I2C1; a quick change of the configuration in the software and moving some cables around and it works!

So let’s go back to the I2C1 configuration, recompile and deploy to the board and it works. What!

It turns out that I had not moved the connections from I2C0 back to I2C1.

The default application was also only displaying 1 line of text. So let’s expand this to display 4 lines of text, namely:

  • Hello
  • Line1
  • Line2
  • Line3

Running the application gives only two lines of text:

  • Hello
  • Line3

How odd.

Let’s Read the Sources

After a few hours of tracing through the sources we find ourselves looking in the file rp2040_common_bringup.c where there is this block of code:

#ifdef CONFIG_LCD_BACKPACK
    /* slcd:0, i2c:0, rows=2, cols=16 */

    ret = board_lcd_backpack_init(0, 0, 2, 16);
    if (ret < 0)
    {
        syslog(LOG_ERR, "Failed to initialize PCF8574 LCD, error %d\n", ret);
        return ret;
    }
#endif

This suggests that the serial LCD test example is always configured to use a 16×2 LCD display on I2C0. This explains why we saw only two lines of output on the display and also why the code did not work on I2C1.

Changing ret = board_lcd_backpack_init(0, 0, 2, 16); to ret = board_lcd_backpack_init(0, 1, 4, 20); and recompiling generated the output we see at the top of this post.

Navigating to the System Type menu also allowed the I2C1 pins to be changed to 26 and 27 and the system continued to generate the expected results.

Conclusion

This piece of work took a little more time than expected and it would have been nice to have had the options within the configuration system to change the display and I2C parameters. As it stands at the moment using a 20×4 display requires changes to the OS source code. This is a trivial change but it does make merging / rebasing with later versions of the system a little more problematic.

SSEM Program Execution Complete

Wednesday, July 19th, 2023

SSEM Program Execution Complete

A while ago I put together an emulator for the Small Scale Experimental Machine (SSEM), also known as the Manchester Baby. This was a basic console application allowing a program written for the Manchester Baby to be run on a modern computer. As things turned out, I now spend most of my time working in either C or C++. This has left me with a piece of code that is difficult to maintain as I have to relearn Python every time I want to make any improvements.

Time to rewrite the application in C++.

SSEM Simulator

The simulator provides a number of features:

  • Assembler/compiler to take source files and generate the binary to be executed
  • Console interface to control the execution of the application
  • Simulated display of the registers and memory

More information about the Python version of the simulator can be found in this blog and on the Small Scale Experimental Machine web site with full source code available on GitHub.

Porting the Simulator

The aim of the initial port is to provide the same functionality as the original application, with any changes necessary to provide additional robustness as we are undoubtedly going to be seeing pointers in the C++ port.

Where possible, the structure of the original Python code has been maintained to keep a 1:1 mapping with the original code and test suite. This will provide an easy way to validate the unit tests in the port against the original Python code. The original Python code was validated against David Sharp’s Java simulator.

The long term aim of this port is to provide a way of running the application on a Raspberry Pi Pico connected to hardware which will emulate the original SSEM. The application on the Raspberry Pi Pico will target the NuttX RTOS. As we will see later, compiling and running on NuttX will present some interesting issues.

Initial Port

The first stage of the port is to reproduce the core functionality of the SSEM, showing the application output in a console interface targeting C++17. The only real complication here is ensuring the user interface and platform-specific code are abstracted to keep as much functionality as possible common to the desktop and NuttX implementations.

The original Python code and the C++ port can be found in the Manchester Baby GitHub repository. A quick check of the source code shows that the 1:1 mapping has been kept where possible. The only real significant difference between the two code bases is the separation of the unit tests from the class implementation. The Python code keeps the unit test code in the class definitions themselves, the C++ code implements the unit tests in their own files.

Memory Checks

The switch from Python to C++ brings a new danger, memory access issues and memory leaks.

One memory issue that we can address relatively easily is memory leaks. If we can abstract the core functionality into a self contained group of files then we can use valgrind to check for memory leaks. A small glitch with using valgrind is that the application is not available for Mac from the key repositories. There is an informal project on GitHub.

The issue of valgrind not running on the Mac was resolved by putting together a Dockerfile containing common development tools. The memory check could then be run on the desktop using Docker.
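
The check itself then becomes a single command; a rough sketch, assuming the image has been built and tagged locally as dev-tools (a placeholder name) and that the code is rebuilt inside the container before being checked:

docker run --rm -v $PWD:/src -w /src dev-tools bash -c "make && valgrind --leak-check=full ./ssem_main"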

Running the Emulator

The emulator can be run on both a desktop computer as well as a board running NuttX.

Run from the Desktop

Running the application on the desktop is the simplest way to test the emulator:

  • Open a command console and change to the Desktop directory in the repository
  • Build the emulator with the command make
  • Run the emulator with the command ./ssem_main

Run on NuttX

Running on NuttX is a little more complex as we need to build the application and the operating system and then deploy the binary to a board. The process of adding the SSEM application to a Raspberry Pi PicoW board has already been documented in the article Adding a User Application to NuttX. The first step is to follow the steps in the article to add the SSEM basic application.

The next stage is to copy the contents of the NuttX directory over the application directory created in the above article. The code should then be rebuilt with the command make clean && make -j. The application can now be deployed to the board.

Now that we have the OS and the application deployed to the Raspberry Pi (or your board of choice), we can connect a serial adapter to the board and press the enter key twice. This will bring up the NuttX shell. Typing help should show the ssem application deployed to the board. Simply execute this by entering the command ssem.

Application Output

In both cases the emulator should run the hfr989.ssem application (the source for this can be found in the SSEMApps folder in the repository). Both the desktop and the NuttX versions of the emulator will run the SSEM application and will show the start and end state of the SSEM on the console / serial port. The first output will show the SSEM application loaded into the store lines:

NuttShell (NSH) NuttX-10.4.0
nsh> ssem
                   00000000001111111111222222222233
                   01234567890123456789012345678901
   0: 0x00000000 - 00000000000000000000000000000000 JMP 0            ; 0
   1: 0x48020000 - 01001000000000100000000000000000 LDN 18           ; 16402
   2: 0xc8020000 - 11001000000000100000000000000000 LDN 19           ; 16403
   3: 0x28010000 - 00101000000000010000000000000000 SUB 20           ; 32788
   4: 0x00030000 - 00000000000000110000000000000000 CMP              ; 49152
   5: 0xa8040000 - 10101000000001000000000000000000 JPR 21           ; 8213
   6: 0x68010000 - 01101000000000010000000000000000 SUB 22           ; 32790
   7: 0x18060000 - 00011000000001100000000000000000 STO 24           ; 24600
   8: 0x68020000 - 01101000000000100000000000000000 LDN 22           ; 16406
   9: 0xe8010000 - 11101000000000010000000000000000 SUB 23           ; 32791
  10: 0x28060000 - 00101000000001100000000000000000 STO 20           ; 24596
  11: 0x28020000 - 00101000000000100000000000000000 LDN 20           ; 16404
  12: 0x68060000 - 01101000000001100000000000000000 STO 22           ; 24598
  13: 0x18020000 - 00011000000000100000000000000000 LDN 24           ; 16408
  14: 0x00030000 - 00000000000000110000000000000000 CMP              ; 49152
  15: 0x98000000 - 10011000000000000000000000000000 JMP 25           ; 25
  16: 0x48000000 - 01001000000000000000000000000000 JMP 18           ; 18
  17: 0x00070000 - 00000000000001110000000000000000 HALT             ; 57344
  18: 0x00000000 - 00000000000000000000000000000000 JMP 0            ; 0
  19: 0xc43fffff - 11000100001111111111111111111111 HALT             ; -989
  20: 0x3bc00000 - 00111011110000000000000000000000 JMP 28           ; 988
  21: 0xbfffffff - 10111111111111111111111111111111 HALT             ; -3
  22: 0x243fffff - 00100100001111111111111111111111 HALT             ; -988
  23: 0x80000000 - 10000000000000000000000000000000 JMP 1            ; 1
  24: 0x00000000 - 00000000000000000000000000000000 JMP 0            ; 0
  25: 0x08000000 - 00001000000000000000000000000000 JMP 16           ; 16
  26: 0x00000000 - 00000000000000000000000000000000 JMP 0            ; 0
  27: 0x00000000 - 00000000000000000000000000000000 JMP 0            ; 0
  28: 0x00000000 - 00000000000000000000000000000000 JMP 0            ; 0
  29: 0x00000000 - 00000000000000000000000000000000 JMP 0            ; 0
  30: 0x00000000 - 00000000000000000000000000000000 JMP 0            ; 0
  31: 0x00000000 - 00000000000000000000000000000000 JMP 0            ; 0

Reading from left to right, the above output shows the following:

  • Store line number (i.e. the memory address) 0:, 1: etc.
  • The hexadecimal representation of the store line contents.
  • Binary representation of the store line contents
  • Disassembled representation of the store line contents JMP 0 etc.
  • Decimal representation of the store line contents

It must be remembered when reading the above that the least significant bit is at the left of the word and the most significant bit is to the right. This is honoured with the hexadecimal and binary components of the above output. The decimal value to the right should be read in the usual way for a base 10 number.

After a short time the contents of the store lines at the end of the run will be displayed:

Program execution complete.
                   00000000001111111111222222222233
                   01234567890123456789012345678901
   0: 0x00000000 - 00000000000000000000000000000000 JMP 0            ; 0
   1: 0x48020000 - 01001000000000100000000000000000 LDN 18           ; 16402
   2: 0xc8020000 - 11001000000000100000000000000000 LDN 19           ; 16403
   3: 0x28010000 - 00101000000000010000000000000000 SUB 20           ; 32788
   4: 0x00030000 - 00000000000000110000000000000000 CMP              ; 49152
   5: 0xa8040000 - 10101000000001000000000000000000 JPR 21           ; 8213
   6: 0x68010000 - 01101000000000010000000000000000 SUB 22           ; 32790
   7: 0x18060000 - 00011000000001100000000000000000 STO 24           ; 24600
   8: 0x68020000 - 01101000000000100000000000000000 LDN 22           ; 16406
   9: 0xe8010000 - 11101000000000010000000000000000 SUB 23           ; 32791
  10: 0x28060000 - 00101000000001100000000000000000 STO 20           ; 24596
  11: 0x28020000 - 00101000000000100000000000000000 LDN 20           ; 16404
  12: 0x68060000 - 01101000000001100000000000000000 STO 22           ; 24598
  13: 0x18020000 - 00011000000000100000000000000000 LDN 24           ; 16408
  14: 0x00030000 - 00000000000000110000000000000000 CMP              ; 49152
  15: 0x98000000 - 10011000000000000000000000000000 JMP 25           ; 25
  16: 0x48000000 - 01001000000000000000000000000000 JMP 18           ; 18
  17: 0x00070000 - 00000000000001110000000000000000 HALT             ; 57344
  18: 0x00000000 - 00000000000000000000000000000000 JMP 0            ; 0
  19: 0xc43fffff - 11000100001111111111111111111111 HALT             ; -989
  20: 0x54000000 - 01010100000000000000000000000000 JMP 10           ; 42
  21: 0xbfffffff - 10111111111111111111111111111111 HALT             ; -3
  22: 0x6bffffff - 01101011111111111111111111111111 HALT             ; -42
  23: 0x80000000 - 10000000000000000000000000000000 JMP 1            ; 1
  24: 0x00000000 - 00000000000000000000000000000000 JMP 0            ; 0
  25: 0x08000000 - 00001000000000000000000000000000 JMP 16           ; 16
  26: 0x00000000 - 00000000000000000000000000000000 JMP 0            ; 0
  27: 0x00000000 - 00000000000000000000000000000000 JMP 0            ; 0
  28: 0x00000000 - 00000000000000000000000000000000 JMP 0            ; 0
  29: 0x00000000 - 00000000000000000000000000000000 JMP 0            ; 0
  30: 0x00000000 - 00000000000000000000000000000000 JMP 0            ; 0
  31: 0x00000000 - 00000000000000000000000000000000 JMP 0            ; 0
Executed 21387 instructions in 30000000 nanoseconds

The original SSEM ran at about 700 instructions per second; modern PCs and even RP2040 processors run the application much faster.

Conclusion

Even small boards (such as the Raspberry Pi Pico) running relatively low power processors can now emulate the Manchester Baby, running applications intended for the SSEM many times faster than the original hardware. The hfr989.ssem application would have run in about 30 seconds in 1948; today we can run it in an emulator in less than 30 ms.

PicoW with SmartFS

Sunday, July 2nd, 2023

Mounting SmartFS

One feature that I want to add to my current project is a small file system with files that have been built into the system at compile time. These files would then be available to the application at run time. Let’s look at how we can do this with NuttX.

This tutorial assumes that you have NuttX cloned and ready to build, if not then you can find out how to do this in the first article in this series.

Adding SmartFS to the Build

NuttX has a built-in configuration for the PicoW with SmartFS already enabled. The first thing we need to do is start with a clean system and then configure the build to include NSH and the flash file system. Start by changing to the NuttX source directory and then executing the following commands:

make distclean
./tools/configure.sh -l raspberrypi-pico-w:nsh-flash

Now we have the system configured we can build the OS and applications by executing the following command:

make -j

This should take a minute or so on a modern machine. Now we can deploy the system to the PicoW either by using openocd or by dragging the uf2 file onto the PicoW drive. Next, connect to the PicoW using a serial application and type help to show the menu of commands. You should see something like the following:

SmartFS Builtin Apps

We can check to see if SmartFS is available by checking the contents of the /dev directory with the command ls /dev. This should result in something like the following if SmartFS has been enabled correctly:

Device Directory Listing

We can mount the file system using the command mount -t smartfs /dev/smart0 /data and then check the contents of the /data directory, where we should find one file, test. Checking the contents of the file with the command cat /data/test should find that it contains a single line of text which should be Hello, world!.

So far, so good, we have built the system and proven that it contains the default file and correct contents.

Adding a New File to the System Image

The next piece of the puzzle is to work out how to add new files to the file system. This took a few hours to figure out, but here goes…

The first attempt led me to search for RP2040_FLASH_FILE_SYSTEM in the source tree (ripgrep is a great tool for doing this). This led to a number of possible files. Maybe we can narrow the search down a little.
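
For reference, that first search is a one-liner from the top of the NuttX source tree (the -l option lists only the names of the matching files):

rg -l RP2040_FLASH_FILE_SYSTEM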

Second attempt, let’s have a look for Hello, world!. This resulted in a smaller number of files, leading to the file arch/arm/src/rp2040/rp2040_flash_initialize.S. This file is well documented and shows how to set up the SmartFS file system and, at the end of the file, how to create an entry for the file we see when we list the mounted directory. Scrolling down to the end of the file we find the following:

    sector      3, dir
    file_entry  0777, 4, 0, "test"

    sector      4, file, used=14
    .ascii      "Hello, world!\n"

    .balign     4096, 0xff
    .global     rp2040_smart_flash_end
rp2040_smart_flash_end:

This looks remarkably familiar. So what happens if we change the above to look like this:

    sector      3, dir
    file_entry  0777, 4, 0, "test"
    file_entry  0777, 5, 0, "test2"

    sector      4, file, used=14
    .ascii      "Hello, world!\n"

    sector      5, file, used=14
    .ascii      "Testing 1 2 3\n"

    .balign     4096, 0xff
    .global     rp2040_smart_flash_end
rp2040_smart_flash_end:

Building the system, deploying the code and executing the following commands:

mount -t smartfs /dev/smart0 /data
cd /data
ls

results in the following:

New file added to SmartFS

If we execute the command cat test2 we are rewarded with the output Testing 1 2 3.

Further testing shows that the file system survives through a reset. We can do the following:

  • echo "My test" > test3
  • rm test
  • reboot

These commands should remove the file test, create a new file test3 and then reboot the system. Checking the file system contents shows that the system persists the changes through a reset.

Conclusion

This experiment was a partial success. A simple file system has been made available to an application and the file system survives a reset. One issue remains: adding new files is a little complex. It also requires changes to the NuttX source tree outside of the applications folder. This could result in changes being lost when a new version of NuttX is released.

There could be a solution, ROMFS, stay tuned for the next episode.

VSCode Debugging with NuttX and Raspberry Pi PicoW

Wednesday, June 14th, 2023

VS Code Halted in NuttX __start

In the previous post, we managed to get GDB working with NuttX running on the Raspberry Pi PicoW. In this post we will look at using VSCode to debug NuttX.

For a large part of this post it was a case of following Shawn Hymel’s guide Raspberry Pi Pico and RP2040 – C/C++ Part 2 Debugging with VS Code. This is an excellent guide and I recommend using it as a companion to this post.

We will start with the assumption that you have followed the previous post in this series and have a working NuttX build for the Raspberry Pi PicoW. We will also assume that you have a working VSCode installation with the Cortex-Debug extension installed.

Configuration Files

We will need to create (or modify) three configuration files to allow VSCode to debug NuttX.

  • launch.json
  • tasks.json
  • settings.json

In the first article of this series we created the directory NuttX-PicoW to hold the apps and nuttx folders holding our NuttX code. This directory is the project directory or, in VS Code parlance, the workspaceFolder. The workspaceFolder should also contain the Raspberry Pi PicoW SDK and the Raspberry Pi specific version of openocd.

We now add a .vscode directory to the workspaceFolder. This directory will hold the three configuration files listed above.

Start by opening VS Code and opening the workspaceFolder. This should show the four folders already in the workspaceFolder. Now create a .vscode directory in the workspaceFolder if it does not already exist.

settings.json

Only one entry is required in the settings.json file and this is the location of the openocd executable. If you have followed these posts so far this will be in the openocd/src folder. Create a settings.json file in the .vscode folder and add the following to the file.

{
    "cortex-debug.openocdPath": "${workspaceFolder}/openocd/src/openocd"
}

Note that the name of the executable may vary depending upon your operating system; this post is being written from a macOS perspective.

tasks.json

The tasks.json file holds the entry that will be used to build NuttX prior to deployment. In Shawn’s document the projects being worked on use the CMake build system. We need to modify this to build NuttX using make. We want VS Code to use the command make -C ${workspaceFolder}/nuttx -j to build NuttX. The build task below will create a task Build NuttX to do just this:

{
    "version": "2.0.0",
    "tasks": [
        {
          "label": "Build NuttX",
          "type": "cppbuild",
          "command": "make",
          "args": [
            "-C",
            "${workspaceFolder}/nuttx",
            "-j"
          ]
        }
      ]  
}

launch.json

Of the three files we are creating, the launch.json file is the most complex. Much of the file remains the same as that presented by Shawn but there are some differences. The file used here looks like this:

{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Pico Debug",
            "cwd": "${workspaceRoot}",
            "executable": "${workspaceFolder}/nuttx/nuttx.elf",
            "request": "launch",
            "type": "cortex-debug",
            "servertype": "openocd",
            "gdbPath" : "arm-none-eabi-gdb",
            "device": "RP2040",
            "configFiles": [
                "interface/cmsis-dap.cfg",
                "target/rp2040.cfg"
            ],
            "svdFile": "${workspaceFolder}/pico-sdk/src/rp2040/hardware_regs/rp2040.svd",
            "runToEntryPoint": "__start",
            "searchDir": ["${workspaceFolder}/openocd/tcl"],
            "openOCDLaunchCommands": ["adapter speed 5000"],
            "preLaunchTask": "Build NuttX"
        }
    ]
}

The following items are the ones that need to be changed:

“executable”: “${workspaceFolder}/nuttx/nuttx.elf”

This is the path to the executable that will be created by the build system. In this case it is the NuttX ELF file. This may need to be changed to “executable”: “${workspaceFolder}/nuttx/nuttx” depending upon the output of the build system, which may simply be nuttx.

“configFiles”

The recent versions of the picoprobe software use a different interface to talk to the picoprobe. For versions 1.01 and above of the picoprobe software this interface changed from picoprobe.cfg to cmsis-dap.cfg.

“svdFile”: “${workspaceFolder}/pico-sdk/src/rp2040/hardware_regs/rp2040.svd”

We have a version of the Pico SDK specifically for our build requirements and so for this reason the location of the file is changed to reference the workspaceFolder rather than the global PICO_SDK_PATH environment variable.

“runToEntryPoint”: “__start”

This entry replaces the “runToMain”: true entry. Unlike conventional C/C++ applications, NuttX does not have a main method. Having the runToMain entry generates an error stating that main cannot be found before stopping at the first executable statement in our code, namely in __start. Replacing runToMain with runToEntryPoint achieves the same thing but does not generate the error.

Doing this also removes the need to have the postRestartCommands specified.

“searchDir”: [“${workspaceFolder}/openocd/tcl”]

This entry is used by openocd to look for the interface and target configuration files specified in the configFiles entry.

“openOCDLaunchCommands”: [“adapter speed 5000”]

These commands are executed by openocd when it starts. The change to the adapter speed is required and is documented in the Getting Started with Raspberry Pi Pico documentation from the Raspberry Pi Foundation.

“preLaunchTask”: “Build NuttX”

The final entry tells Cortex-Debug how to build NuttX. It references the task we previously defined in the tasks.json file.

Testing

Testing this should simply be a case of saving all of the files above and pressing F5 in VS Code. If the changes have been successful then VS Code will first try to build NuttX in a Terminal window and it will then deploy NuttX to the Pico board and start the debugger. VS Code should look something like this:

Debugging PicoW with VSCode

The inclusion of the SVD file allows us to examine the PicoW peripheral registers as well as the core registers:

PicoW Peripheral Registers in VSCode

Pico Cortex Registers in VSCode

Conclusion

GDB is a great debugger but it is often more convenient to use an IDE to debug your code. VS Code with the Cortex-Debug extension allows visual debugging of NuttX with a few nice additional features thrown in:

  • Easily viewed call stacks for both cores.
  • The SVD file allows the peripheral registers to be viewed through VS Code

We should also note that the use of VS Code resolved the issues noted at the end of the previous post as Cortex-Debug is able to deploy a binary using the picoprobe without resorting to the UF2 method of deployment. This results in a seamless build, deploy and debug process.

We have two debug options and it is now down to personal preference as to which one to use.

Design the System

Friday, April 14th, 2017

One of the current projects on the go is a level shifter for the Teensy 3.6 using the TXS0108E chip. The aim is to allow the use of as many of the Teensy’s GPIO pins as possible in support of another project that works at 5V logic levels.

This project reminded me that when putting together a microcontroller project, the system is made up of both hardware and software. Sometimes a design decision made in one element can have an adverse effect on the other.

Design Decisions

From the start it was decided that the GPIO pins would have a one to one mapping from the microcontroller to the external bus. So pin 1 on the microcontroller would map to pin 1 on the external connectors.

This would make coding easy when using the Arduino API. So connecting to the external bus and outputting a digital high signal on pin 1 would become:

pinMode(1, OUTPUT);
digitalWrite(1, HIGH);

Impact

Putting the circuit together in KiCAD resulted in the following design:

PCB Layout

TXS0108E Schematic

As you can see, the Teensy GPIOs (TIO-1…) mapped directly to the external bus (GPIO-1…).

When translated into the ratsnest there were three occurrences of the following:

PCB Layout

Routing this was going to be a nightmare.

Changing the Design

At this point the penny dropped that a small change in the software would make the routing a whole lot easier.

Instead of using the pin numbers directly, a #define could be used for the external bus pin numbers. The above snippet would become:

#define BUS_IO1    30
.
.
.
pinMode(BUS_IO1, OUTPUT);
digitalWrite(BUS_IO1, HIGH);

This small change to the design created a one-off task to create a header file for the board but it made the routing a lot easier.

Conclusion

Sometimes a small change may create a new task (creating the header file) but can save more time elsewhere in the project.

Moral of the story: design the system as a whole, not the individual components.