Archive for the ‘Software Development’ Category

Mac Remote Access

Sunday, October 15th, 2023

SSH Login Command

This blog serves two purposes:

  • Sharing information that I hope is useful to others
  • Aide-Memoire for yours truly

This post falls into the second group: something I’ve done in the past but forgotten.

Background

The current project requires the test environments to be expanded. Several of the environments are running on Raspberry Pi SBCs, which is feasible now that the boards are available in volume once again. There is one exception, a Mac Mini with an M1 processor. This environment allows the usual tests to be run in the same manner as on the Raspberry Pi boards. It also gives the ability to build the code and attach a debugger to the board, which is invaluable for tests that are known to be failing and need to run for an extended period of time.

This sort of setup is ideal for running headless (no monitor, keyboard or mouse); we can just use MacOS screen sharing and ssh.
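
For reference, once remote login is enabled the connection is made with the standard ssh command (the user and host names below are placeholders for your own):

ssh username@macmini.local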

What is Wrong?

Enter a new (well, secondhand) Mac Mini. Setup went well; I attached a keyboard and mouse and ran through the setup process with no issues. Logging on to the Mac confirmed all was well. A few configuration tweaks to enable screen sharing and remote login were required; nothing too complex, just a case of setting the right permissions.

Next step, test the remote connection. Screen sharing started OK and the Mac appeared on the network with file sharing enabled. Time for a reboot.

System rebooted OK, time to browse the network.

The new machine was not showing in the network browser and ssh was unable to establish a connection.

Back to the still-connected keyboard and mouse to log on. Once logged in, the system appeared in the network browser once again, and screen sharing and ssh worked flawlessly.

Time for another reboot, and the same thing happened: the machine booted OK but nothing appeared on the network until a successful login through the attached keyboard and mouse.

The Solution

This is where it gets odd. Apparently, you have to turn FileVault off. That’s right, you have to turn the disc encryption off in order to enable a fully remote logon.

FileVault is turned on automatically during the MacOS installation process, which makes sense: disc encryption makes it harder for a malicious actor to recover sensitive information from a machine, so encryption on modern machines is a good thing. The side effect is that you must log on to the Mac via an attached keyboard before it will turn up on the network.
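
The FileVault state can be checked from a terminal using the fdesetup utility that ships with MacOS (the standard invocation is shown below; the output wording may vary between OS versions):

sudo fdesetup status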

Conclusion

I have a solution of sorts, but I do find it odd that disc encryption must be disabled before remote services can be enabled on the Mac. After all, if you require remote access to a system then you are likely to be putting the physical machine in a location where physical access is going to be difficult.

Raspberry Pi Pico and Pico W Project Templates

Tuesday, September 5th, 2023

Pico Template Build Complete

There are some things in software development that we do not do very often, and because we perform the task infrequently we often forget the nuances (well, I do). One of these tasks is creating a new project.

In this post we will look at one option for automating this process: GitHub templates. The work is partially my own but also a case of bringing together elements of other blog posts and GitHub repositories.

All of the code discussed below can be found in the PicoTemplate repository and this should be used as a reference throughout this post.

Starting a New Project

Several days ago, I was starting a new project for the Raspberry Pi Pico W using picotool. For those who are not familiar with it, this tool is meant to automate the project creation process for you. You simply tell it which features you are going to be using, along with a few other parameters like board type, and it will generate a directory containing all of the necessary code and project files.

Sounds too good to be true. For me it was: no matter which feature list I requested, I could not get an application to deploy to the board and talk to the desktop computer over UART or USB.

At this point I decided to go back to basics and get Blinky up and running. The problem definition became:

  1. C++ application
  2. Blink the onboard LED
  3. Support both Pico and Pico W boards
  4. Standard IO output over UART or USB

It would also be desirable to automate as much of the process as possible.

Blinky

One complication with the Pico boards is that they use different mechanisms to access the onboard LED. The Pico has the LED wired directly to GPIO 25, whilst the Pico W uses a GPIO on the onboard WiFi / Bluetooth chip (hereafter referred to as just wireless). This means we have different versions of Blinky depending upon which board is selected, and that the Pico W needs an additional library to support the wireless chip even if we are not using any wireless features.
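
To illustrate the difference, here is a minimal sketch of the two Blinky variants using the standard pico-sdk calls. The PICO_W macro is purely illustrative; the template copies the correct main.cpp into place rather than using conditional compilation:

#include "pico/stdlib.h"

#ifdef PICO_W
#include "pico/cyw43_arch.h"
#endif

int main()
{
    stdio_init_all();

#ifdef PICO_W
    // Pico W: the LED hangs off the wireless chip, so the cyw43 driver
    // must be initialised even though no wireless features are being used.
    cyw43_arch_init();
    while (true)
    {
        cyw43_arch_gpio_put(CYW43_WL_GPIO_LED_PIN, 1);
        sleep_ms(250);
        cyw43_arch_gpio_put(CYW43_WL_GPIO_LED_PIN, 0);
        sleep_ms(250);
    }
#else
    // Pico: the LED is wired directly to GPIO 25 (PICO_DEFAULT_LED_PIN).
    gpio_init(PICO_DEFAULT_LED_PIN);
    gpio_set_dir(PICO_DEFAULT_LED_PIN, GPIO_OUT);
    while (true)
    {
        gpio_put(PICO_DEFAULT_LED_PIN, 1);
        sleep_ms(250);
        gpio_put(PICO_DEFAULT_LED_PIN, 0);
        sleep_ms(250);
    }
#endif
}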

The simplest way of doing this is to use a template directory that contains the main application code and the project CMakeLists.txt file for each board. We can then copy the template files for the appropriate board type into the sources directory.

Keeping with the theme of forgetting the nuances, we will add some scripts to perform some common tasks:

  1. Configure the system (copying the files to the correct locations)
  2. Build the application
  3. Flash a board using openocd

Our first task will be to set the system up with the correct main.cpp and CMakeLists.txt files and put them in the correct place. Checking the default CMakeLists.txt file we see that the project is named projectname, which is not very informative, so the configuration script will rename the project as well.
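
The rename itself only needs a simple substitution; something along these lines inside the script (illustrative only, the actual script is in the repository, and $PROJECT_NAME stands in for the name passed to the script):

sed -i.bak "s/projectname/$PROJECT_NAME/" CMakeLists.txt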

The purpose of the build script should be obvious: it will build the application code and also offer the ability to perform a full rebuild if necessary.

The final script will flash the board using openocd and another Raspberry Pi Pico configured as a debug probe. This was discussed a few weeks ago in the post PicoDebugger – Bringing Picoprobe and the PicoW Together. This setup also has the advantage of exposing the default UART to the host computer.
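
For reference, the openocd invocation inside flash.sh will look something like the following (the interface and target files are the standard ones for a picoprobe and the RP2040; the ELF path is illustrative):

openocd -f interface/cmsis-dap.cfg -f target/rp2040.cfg -c "program build/TestApplication.elf verify reset exit"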

Using the Scripts

Using the scripts is a three-step process, the first of which is only really performed once.

The first step is to use the configure.sh script to copy the files and rename the project:

./configure.sh -b=picow -n=TestApplication

After running this command the src/main.cpp file will contain the code to blink the LED on a Pico W board and the project will have been renamed TestApplication.

Secondly, we can build the application using the build.sh script:

./build.sh

Finally, the application can be deployed to the board using the flash.sh script.

./flash.sh

The UF2 file (in the case above, TestApplication.uf2) can also be copied to the board if a debug probe is not available.

Scope Creep

The story was supposed to end here but this is not going to be the case.

There are some improvements that we can look at to make the system a little more comprehensive. These include:

  1. Add a testing framework
  2. Create a Docker file for build and testing

Both of these items are currently work in progress.

Acknowledgements

This project has been inspired by several others on GitHub:

  • Raspberry Pi Pico Examples: https://github.com/raspberrypi/pico-examples
  • Raspberry Pi Pico Template: https://github.com/cathiele/raspberrypi-pico-cpp-template

Conclusion

The initial requirement of creating a template for Pico and Pico W development could have been achieved simply by creating two different templates, one for each board type. This would arguably have been quicker and less complex.

The addition of the testing framework and Docker container to the requirements would have resulted in some duplication of work in each template. This made bringing the two templates together in one repository more logical, as fixes or additions for one board type are automatically part of the other.

Getting Started with Ansible

Monday, August 28th, 2023

20x4 LCD Display

Recent work has involved reviewing some test environments for an IoT development board. The aim is to improve some of the components used for testing as well as adding new functionality. The requirements are:

  • Provide an updated version of existing functionality
  • Single board environment with all functionality deployed for quick testing
  • Cluster distributing the test environment for load testing

The most cost-effective way to do this is to use a number of Raspberry Pi single board computers. These boards are now becoming available in quantity after several years of limited availability.

The Problem

How do we set up the environment in such a way that a fresh environment can be created reliably?

Enter ansible.

Ping

First step: try to contact a board, and this is where ansible's ping module comes in. It verifies that ansible can connect to a board. The following command will test the connection to each board:

ansible cluster -m ping -i hosts

This command requires a text file hosts containing the list of boards to be contacted. The file is simple and may contain only two lines:

[cluster]
node

In the above example, the file defines a group of machines to be contacted named cluster; in this case the group contains a single machine named node. The group name cluster is the one used in the ansible command above.

Additional machines can be added under the [cluster] entry by placing each entry on a new line in the file, as shown below.
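
For example, a three-board cluster would look like this (the extra node names are illustrative):

[cluster]
node
node2
node3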

So far this is nothing new and it is covered in the Ansible documentation.

What Happened

The first step was to use the Raspberry Pi Imager application to create a new image on a new SSD. Nothing complex:

  • Raspberry Pi 64-bit Lite OS
  • Set the machine name to be node
  • Enable SSH
  • Set the user name to clusteruser and give the user a secure password

The password was then stored on the local machine in an environment variable CLUSTER_PASSWORD to allow the scripts to be stored in source control without giving away any secrets.

Time to test the connection with the following command:

ansible cluster -m ping -i hosts --extra-vars "ansible_user=clusteruser ansible_password=$CLUSTER_PASSWORD"

Breaking this down, we want to ping all of the machines defined in the cluster group. The group is defined in the file hosts, and we are going to log on to the machines with the user name clusteruser and the password contained in the CLUSTER_PASSWORD environment variable.

Now running the above command results in the following:

node | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}

Conclusion

A good start to the project. Now on to something more complex: time to install and configure some software.
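
As a taste of what comes next, software can be installed with an ad-hoc command using the apt module (the package name is just an example, and depending upon the sudo configuration you may also need to supply the become password):

ansible cluster -m apt -a "name=git state=present" -i hosts --become --extra-vars "ansible_user=clusteruser ansible_password=$CLUSTER_PASSWORD"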

And I can’t believe I’ve missed Ansible for so long.

20×4 LCD Display and NuttX

Sunday, July 23rd, 2023

20x4 LCD Display

Time for some more experimentation with NuttX, today serial LCDs.

Serial LCDs

Small LCD displays can be found in many scientific instruments as they provide a simple way to present a small amount of information to the user. Typically these are 16×2 (2 lines of 16 characters) or 20×4 (4 lines of 20 characters) displays. The header to this article shows part of the output from a 20×4 display.

Communication with these displays is normally through a 4- or 8-bit interface when talking directly to the controller chip for the LCD. Using 4 or 8 data lines for communication is a large burden on a small microcontroller. To overcome this, several vendors have produced a backpack that can be attached to the display. The backpack uses a smaller number of microcontroller lines for communication while still being able to talk to the LCD controller chip.

This post looks at using such an LCD and backpack with NuttX running on the Raspberry Pi Pico W.

NuttX Channel (Youtube)

A great place to start with NuttX is the NuttX Channel on YouTube, which has a number of quick getting-started tutorials covering a variety of subjects. In fact, there is one covering a 16×2 LCD display, which is similar to what will be used in this tutorial, with a small difference.

The video linked above covers a lot of what is needed to get a 16×2 LCD up and running. Some small changes are needed, as NuttX has moved on since the video was released.

Hardware

The major changes compared to the video above are:

  • Microcontroller used will be the Raspberry Pi Pico W
  • LCD display will be 20×4 display

The LCD display used here will be a larger physical display (20×4 instead of 16×2) but it will still use the same interface on the backpack, namely the PCF8574 GPIO expander. This uses I2C as the communication protocol, reducing the number of GPIO lines required from 8 to 2.

There are two I2C buses on the Pico W, I2C0 and I2C1; in this case I2C1 will be the chosen interface. This choice caused some issues, but more on that later.

For now we start with a base configuration with the LCD connected to GPIO6 and GPIO7 on the Pico W.

Configuring NuttX

Configuration followed much of the video linked above enabling the following:

  • Enable I2C master and I2C1 in the System Type menu
  • I2C Driver Support in the Device Drivers menu
  • PCF8574 support through Device Drivers, Alphanumeric Drivers menus
  • Segment LCD CODEC in the Library Routines menu
  • Segment LCD Test in the Application Configuration, Examples menu

Time to build, deploy and run.

  • make clean followed by make -j built the system
  • The application was then deployed to the Pico W and the board was reset
  • The application can be run by connecting a terminal/serial application to the board and running the command slcd

Nothing appears on the display. Time to check the connections and break out the debugger.

Troubleshooting

Checking the connections showed that everything looked to be in order. The display was showing a faint pixel pattern, which is typical of these displays when they are powered correctly but there is no communication. Double-checking the I2C connections showed that, in theory, everything was good.

Over to the software. Running through the configuration again, all looks good here too. So let's try I2C0 instead of I2C1: a quick change of the configuration in the software, moving some cables around, and it works!

So let's go back to the I2C1 configuration, recompile and deploy to the board, and it works. What!

It turns out that I had not moved the connections from I2C0 back to I2C1.

The default application was also displaying only one line of text. Let's expand this to display four lines of text, namely:

  • Hello
  • Line1
  • Line2
  • Line3

Running the application gives only two lines of text:

  • Hello
  • Line3

How odd.

Let's Read the Sources

After a few hours of tracing through the sources, we find ourselves looking at the file rp2040_common_bringup.c, where there is this block of code:

#ifdef CONFIG_LCD_BACKPACK
    /* slcd:0, i2c:0, rows=2, cols=16 */

    ret = board_lcd_backpack_init(0, 0, 2, 16);
    if (ret < 0)
    {
        syslog(LOG_ERR, "Failed to initialize PCF8574 LCD, error %d\n", ret);
        return ret;
    }
#endif

This suggests that the serial LCD test example is always configured to use a 16×2 LCD display on I2C0. This explains why we saw only two lines of output on the display and also why the code did not work on I2C1.

Changing ret = board_lcd_backpack_init(0, 0, 2, 16); to ret = board_lcd_backpack_init(0, 1, 4, 20); and recompiling generated the output we see at the top of this post.
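
After the edit the block reads as follows (with the comment updated to match):

    /* slcd:0, i2c:1, rows=4, cols=20 */

    ret = board_lcd_backpack_init(0, 1, 4, 20);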

Navigating to the System Type menu also allowed the I2C1 pins to be changed to 26 and 27 and the system continued to generate the expected results.

Conclusion

This piece of work took a little more time than expected, and it would have been nice to have options within the configuration system to change the display and I2C parameters. As it stands, using a 20×4 display requires changes to the OS source code. The change is trivial, but it does make merging / rebasing with later versions of the system a little more problematic.

SSEM Program Execution Complete

Wednesday, July 19th, 2023

SSEM Program Execution Complete

A while ago I put together an emulator for the Small Scale Experimental Machine (SSEM), also known as the Manchester Baby. This was a basic console application allowing a program written for the Manchester Baby to be run on a modern computer. As things turned out, I now spend most of my time working in either C or C++. This has left me with a piece of code that is difficult to maintain, as I have to relearn Python every time I want to make any improvements.

Time to rewrite the application in C++.

SSEM Simulator

The simulator provides a number of features:

  • Assembler/compiler to take source files and generate the binary to be executed
  • Console interface to control the execution of the application
  • Simulated display of the registers and memory

More information about the Python version of the simulator can be found in this blog and on the Small Scale Experimental Machine web site with full source code available on GitHub.

Porting the Simulator

The aim of the initial port is to provide the same functionality as the original application, with any changes necessary to provide additional robustness, as we are undoubtedly going to be dealing with pointers in the C++ port.

Where possible, the structure of the original Python code has been maintained to keep a 1:1 mapping with the original code and test suite. This will provide an easy way to validate the unit tests in the port against the original Python code. The original Python code was validated against David Sharp's Java simulator.

The long-term aim of this port is to provide a way of running the application on a Raspberry Pi Pico connected to hardware which will emulate the original SSEM. The application on the Raspberry Pi Pico will target the NuttX RTOS. As we will see later, compiling and running on NuttX will present some interesting issues.

Initial Port

The first stage of the port is to reproduce the core functionality of the SSEM, showing the application output in a console interface, targeting C++17. The only real complication here is ensuring the user interface and platform-specific code are abstracted, keeping as much functionality as possible common to the desktop and NuttX implementations.

The original Python code and the C++ port can be found in the Manchester Baby GitHub repository. A quick check of the source code shows that the 1:1 mapping has been kept where possible. The only significant difference between the two code bases is the separation of the unit tests from the class implementations: the Python code keeps the unit test code in the class definitions themselves, while the C++ code implements the unit tests in their own files.
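
To give a flavour of the port, a store-line register class mirroring the original Python Register class might look like this. This is a hedged sketch rather than the repository code; the names and the range check are illustrative:

#include <cstdint>
#include <stdexcept>

//  A single 32-bit register / store line, mirroring the Python Register class.
class Register
{
public:
    explicit Register(uint32_t value = 0) : _value(value)
    {
    }

    uint32_t Value() const
    {
        return _value;
    }

    //  The Python version checks that the value fits in a 32-bit word;
    //  the C++ port can perform the same check before narrowing.
    void SetValue(uint64_t value)
    {
        if (value > UINT32_MAX)
        {
            throw std::out_of_range("Register value exceeds 32 bits");
        }
        _value = static_cast<uint32_t>(value);
    }

private:
    uint32_t _value;
};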

Memory Checks

The switch from Python to C++ brings a new danger, memory access issues and memory leaks.

One memory issue that we can address relatively easily is memory leaks. If we abstract the core functionality into a self-contained group of files then we can use valgrind to check for leaks. A small glitch is that valgrind is not available for the Mac from the key repositories, although there is an informal project on GitHub.

The issue of valgrind not running on the Mac was resolved by putting together a Dockerfile containing common development tools. The memory check could then be run on the desktop using Docker.

Running the Emulator

The emulator can be run on both a desktop computer as well as a board running NuttX.

Run from the Desktop

Running the application on the desktop is the simplest way to test the emulator:

  • Open a command console and change to the Desktop directory in the repository
  • Build the emulator with the command make
  • Run the emulator with the command ./ssem_main

Run on NuttX

Running on NuttX is a little more complex, as we need to build the application and the operating system and then deploy the binary to a board. The process of adding the SSEM application to a Raspberry Pi PicoW board has already been documented in the article Adding a User Application to NuttX. The first step is to follow that article to add the basic SSEM application.

The next stage is to copy the contents of the NuttX directory over the application directory created in the above article. The code should then be rebuilt with the command make clean && make -j. The application can now be deployed to the board.

Now that we have the OS and the application deployed to the Raspberry Pi (or your board of choice), we can connect a serial adapter to the board and press the enter key twice. This will bring up the NuttX shell. Typing help should show the ssem application deployed to the board. Simply execute this by entering the command ssem.

Application Output

In both cases the emulator should run the hfr989.ssem application (the source for this can be found in the SSEMApps folder in the repository). Both the desktop and the NuttX versions of the emulator will run the SSEM application and will show the start and end state of the SSEM on the console / serial port. The first output will show the SSEM application loaded into the store lines:

NuttShell (NSH) NuttX-10.4.0
nsh> ssem
                   00000000001111111111222222222233
                   01234567890123456789012345678901
   0: 0x00000000 - 00000000000000000000000000000000 JMP 0            ; 0
   1: 0x48020000 - 01001000000000100000000000000000 LDN 18           ; 16402
   2: 0xc8020000 - 11001000000000100000000000000000 LDN 19           ; 16403
   3: 0x28010000 - 00101000000000010000000000000000 SUB 20           ; 32788
   4: 0x00030000 - 00000000000000110000000000000000 CMP              ; 49152
   5: 0xa8040000 - 10101000000001000000000000000000 JPR 21           ; 8213
   6: 0x68010000 - 01101000000000010000000000000000 SUB 22           ; 32790
   7: 0x18060000 - 00011000000001100000000000000000 STO 24           ; 24600
   8: 0x68020000 - 01101000000000100000000000000000 LDN 22           ; 16406
   9: 0xe8010000 - 11101000000000010000000000000000 SUB 23           ; 32791
  10: 0x28060000 - 00101000000001100000000000000000 STO 20           ; 24596
  11: 0x28020000 - 00101000000000100000000000000000 LDN 20           ; 16404
  12: 0x68060000 - 01101000000001100000000000000000 STO 22           ; 24598
  13: 0x18020000 - 00011000000000100000000000000000 LDN 24           ; 16408
  14: 0x00030000 - 00000000000000110000000000000000 CMP              ; 49152
  15: 0x98000000 - 10011000000000000000000000000000 JMP 25           ; 25
  16: 0x48000000 - 01001000000000000000000000000000 JMP 18           ; 18
  17: 0x00070000 - 00000000000001110000000000000000 HALT             ; 57344
  18: 0x00000000 - 00000000000000000000000000000000 JMP 0            ; 0
  19: 0xc43fffff - 11000100001111111111111111111111 HALT             ; -989
  20: 0x3bc00000 - 00111011110000000000000000000000 JMP 28           ; 988
  21: 0xbfffffff - 10111111111111111111111111111111 HALT             ; -3
  22: 0x243fffff - 00100100001111111111111111111111 HALT             ; -988
  23: 0x80000000 - 10000000000000000000000000000000 JMP 1            ; 1
  24: 0x00000000 - 00000000000000000000000000000000 JMP 0            ; 0
  25: 0x08000000 - 00001000000000000000000000000000 JMP 16           ; 16
  26: 0x00000000 - 00000000000000000000000000000000 JMP 0            ; 0
  27: 0x00000000 - 00000000000000000000000000000000 JMP 0            ; 0
  28: 0x00000000 - 00000000000000000000000000000000 JMP 0            ; 0
  29: 0x00000000 - 00000000000000000000000000000000 JMP 0            ; 0
  30: 0x00000000 - 00000000000000000000000000000000 JMP 0            ; 0
  31: 0x00000000 - 00000000000000000000000000000000 JMP 0            ; 0

Reading from left to right, the above output shows the following:

  • Store line number (i.e. the memory address): 0:, 1: etc.
  • Hexadecimal representation of the store line contents
  • Binary representation of the store line contents
  • Disassembled representation of the store line contents: JMP 0 etc.
  • Decimal representation of the store line contents

It must be remembered when reading the above that the least significant bit is at the left of the word and the most significant bit is at the right. This applies to the hexadecimal and binary columns of the above output. The decimal value at the right should be read in the usual way for a base 10 number. Store line 23 is a good example: the binary pattern has its single 1 in the leftmost (least significant) bit, so it represents decimal 1.

After a short time the contents of the store lines at the end of the run will be displayed:

Program execution complete.
                   00000000001111111111222222222233
                   01234567890123456789012345678901
   0: 0x00000000 - 00000000000000000000000000000000 JMP 0            ; 0
   1: 0x48020000 - 01001000000000100000000000000000 LDN 18           ; 16402
   2: 0xc8020000 - 11001000000000100000000000000000 LDN 19           ; 16403
   3: 0x28010000 - 00101000000000010000000000000000 SUB 20           ; 32788
   4: 0x00030000 - 00000000000000110000000000000000 CMP              ; 49152
   5: 0xa8040000 - 10101000000001000000000000000000 JPR 21           ; 8213
   6: 0x68010000 - 01101000000000010000000000000000 SUB 22           ; 32790
   7: 0x18060000 - 00011000000001100000000000000000 STO 24           ; 24600
   8: 0x68020000 - 01101000000000100000000000000000 LDN 22           ; 16406
   9: 0xe8010000 - 11101000000000010000000000000000 SUB 23           ; 32791
  10: 0x28060000 - 00101000000001100000000000000000 STO 20           ; 24596
  11: 0x28020000 - 00101000000000100000000000000000 LDN 20           ; 16404
  12: 0x68060000 - 01101000000001100000000000000000 STO 22           ; 24598
  13: 0x18020000 - 00011000000000100000000000000000 LDN 24           ; 16408
  14: 0x00030000 - 00000000000000110000000000000000 CMP              ; 49152
  15: 0x98000000 - 10011000000000000000000000000000 JMP 25           ; 25
  16: 0x48000000 - 01001000000000000000000000000000 JMP 18           ; 18
  17: 0x00070000 - 00000000000001110000000000000000 HALT             ; 57344
  18: 0x00000000 - 00000000000000000000000000000000 JMP 0            ; 0
  19: 0xc43fffff - 11000100001111111111111111111111 HALT             ; -989
  20: 0x54000000 - 01010100000000000000000000000000 JMP 10           ; 42
  21: 0xbfffffff - 10111111111111111111111111111111 HALT             ; -3
  22: 0x6bffffff - 01101011111111111111111111111111 HALT             ; -42
  23: 0x80000000 - 10000000000000000000000000000000 JMP 1            ; 1
  24: 0x00000000 - 00000000000000000000000000000000 JMP 0            ; 0
  25: 0x08000000 - 00001000000000000000000000000000 JMP 16           ; 16
  26: 0x00000000 - 00000000000000000000000000000000 JMP 0            ; 0
  27: 0x00000000 - 00000000000000000000000000000000 JMP 0            ; 0
  28: 0x00000000 - 00000000000000000000000000000000 JMP 0            ; 0
  29: 0x00000000 - 00000000000000000000000000000000 JMP 0            ; 0
  30: 0x00000000 - 00000000000000000000000000000000 JMP 0            ; 0
  31: 0x00000000 - 00000000000000000000000000000000 JMP 0            ; 0
Executed 21387 instructions in 30000000 nanoseconds

The original SSEM ran at about 700 instructions per second; modern PCs and even RP2040 processors run the application much faster.

Conclusion

Even small boards (such as the Raspberry Pi Pico) with relatively low-power processors can now emulate the Manchester Baby, running applications intended for the SSEM many times faster than the original hardware. The hfr989.ssem application would have run in about 30 seconds in 1948; today we can run it in an emulator in less than 30 ms.

PicoW with SmartFS

Sunday, July 2nd, 2023

Mounting SmartFS

One feature that I want to add to my current project is a small file system with files that are built into the system at compile time. These files would then be available to the application at run time. Let’s look at how we can do this with NuttX.

This tutorial assumes that you have NuttX cloned and ready to build; if not, you can find out how to do this in the first article in this series.

Adding SmartFS to the Build

NuttX has a built-in configuration for the PicoW with SmartFS already configured. The first thing we need to do is to start with a clean system and then configure the build to include NSH and the flash file system. Start by changing to the NuttX source directory and then executing the following commands:

make distclean
./tools/configure.sh -l raspberrypi-pico-w:nsh-flash

Now we have the system configured we can build the OS and applications by executing the following command:

make -j

This should take a minute or so on a modern machine. Now we can deploy the system to the PicoW either by using openocd or by dragging the uf2 file onto the PicoW drive. Next, connect to the PicoW using a serial application and type help to show the menu of commands. You should see something like the following:

SmartFS Builtin Apps

We can check if SmartFS is available by checking the contents of the /dev directory with the command ls /dev. This should result in something like the following if SmartFS has been enabled correctly:

Device Directory Listing

We can mount the file system using the command mount -t smartfs /dev/smart0 /data and then check the contents of the /data directory, where we should find one file, test. Checking the contents of the file with the command cat /data/test should show that it contains a single line of text: Hello, world!.

So far, so good: we have built the system and proven that it contains the default file with the correct contents.

Adding a New File to the System Image

The next piece of the puzzle is to work out how to add new files to the file system. This took a few hours to figure out, but here goes…

The first attempt led to searching for RP2040_FLASH_FILE_SYSTEM in the source tree (ripgrep is a great tool for doing this). This turned up a number of possible files. Maybe we can narrow the search down a little.

Second attempt: let’s have a look for Hello, world!. This resulted in a smaller number of files, leading to arch/arm/src/rp2040/rp2040_flash_initialize.S. This file is well documented and shows how to set up the SmartFS file system, including, at the end, how to create the entry for the file we see when we list the mounted directory. Scrolling down to the end of the file we find the following:

    sector      3, dir
    file_entry  0777, 4, 0, "test"

    sector      4, file, used=14
    .ascii      "Hello, world!\n"

    .balign     4096, 0xff
    .global     rp2040_smart_flash_end
rp2040_smart_flash_end:

This looks remarkably familiar. So what happens if we change the above to look like this:

    sector      3, dir
    file_entry  0777, 4, 0, "test"
    file_entry  0777, 5, 0, "test2"

    sector      4, file, used=14
    .ascii      "Hello, world!\n"

    sector      5, file, used=14
    .ascii      "Testing 1 2 3\n"

    .balign     4096, 0xff
    .global     rp2040_smart_flash_end
rp2040_smart_flash_end:

Building the system, deploying the code and executing the following commands:

mount -t smartfs /dev/smart0 /data
cd /data
ls

results in the following:

New file added to SmartFS

If we execute the command cat test2 we are rewarded with the output Testing 1 2 3.

Further testing shows that the file system survives through a reset. We can do the following:

  • echo "My test" > test3
  • rm test
  • reboot

These commands create a new file test3, remove the file test and then reboot the system. Checking the file system contents after the reboot shows that the system persists the changes through a reset.

Conclusion

This experiment was a partial success. A simple file system has been made available to an application and the file system survives a reset. One issue remains: adding new files is a little complex. It also requires changes to the NuttX source tree outside of the applications folder, which could result in the changes being lost when a new version of NuttX is released.

There could be a solution, ROMFS; stay tuned for the next episode.

VSCode Debugging with NuttX and Raspberry Pi PicoW

Wednesday, June 14th, 2023

VS Code Halted in NuttX __start

In the previous post, we managed to get GDB working with NuttX running on the Raspberry Pi PicoW. In this post we will look at using VSCode to debug NuttX.

For a large part of this post it was a case of following Shawn Hymel's guide Raspberry Pi Pico and RP2040 – C/C++ Part 2 Debugging with VS Code. This is an excellent guide and I recommend using it as a companion to this post.

We will start with the assumption that you have followed the previous post in this series and have a working NuttX build for the Raspberry Pi PicoW. We will also assume that you have a working VSCode installation with the Cortex-Debug extension installed.

Configuration Files

We will need to create (or modify) three configuration files to allow VSCode to debug NuttX.

  • launch.json
  • tasks.json
  • settings.json

In the first article of this series we created the directory NuttX-PicoW to hold the apps and nuttx folders containing our NuttX code. This directory is the project directory or, in VS Code parlance, the workspaceFolder. The workspaceFolder should also contain the Raspberry Pi PicoW SDK and the Raspberry Pi-specific version of openocd.

We now add a .vscode directory to the workspaceFolder. This directory will hold the three configuration files listed above.

Start by opening VS Code and opening the workspaceFolder. This should show the four folders already in the workspaceFolder. Now create a .vscode directory in the workspaceFolder if it does not already exist.

settings.json

Only one entry is required in the settings.json file and this is the location of the openocd executable. If you have followed these posts so far this will be in the openocd/src folder. Create a settings.json file in the .vscode folder and add the following to the file.

{
    "cortex-debug.openocdPath": "${workspaceFolder}/openocd/src/openocd"
}

Note that the name of the executable may vary depending upon your operating system; this post is being written from a MacOS perspective.

tasks.json

The tasks.json file holds the entry that will be used to build NuttX prior to deployment. In Shawn's document the projects being worked on use the CMake system. We need to modify this to build NuttX using make. We want VS Code to use the command make -C ${workspaceFolder}/nuttx -j to build NuttX. The build task below will create a task Build NuttX to do just this:

{
    "version": "2.0.0",
    "tasks": [
        {
          "label": "Build NuttX",
          "type": "cppbuild",
          "command": "make",
          "args": [
            "-C",
            "${workspaceFolder}/nuttx",
            "-j"
          ]
        }
      ]  
}

launch.json

Of the three files we are creating, the launch.json file is the most complex. Much of the file remains the same as that presented by Shawn but there are some differences. The file used here looks like this:

{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Pico Debug",
            "cwd": "${workspaceRoot}",
            "executable": "${workspaceFolder}/nuttx/nuttx.elf",
            "request": "launch",
            "type": "cortex-debug",
            "servertype": "openocd",
            "gdbPath" : "arm-none-eabi-gdb",
            "device": "RP2040",
            "configFiles": [
                "interface/cmsis-dap.cfg",
                "target/rp2040.cfg"
            ],
            "svdFile": "${workspaceFolder}/pico-sdk/src/rp2040/hardware_regs/rp2040.svd",
            "runToEntryPoint": "__start",
            "searchDir": ["${workspaceFolder}/openocd/tcl"],
            "openOCDLaunchCommands": ["adapter speed 5000"],
            "preLaunchTask": "Build NuttX"
        }
    ]
}

The following items are the ones that need to be changed:

“executable”: “${workspaceFolder}/nuttx/nuttx.elf”

This is the path to the executable that will be created by the build system. In this case it is the NuttX ELF file. Depending upon the output of the build system this may simply be nuttx, in which case the entry should be changed to “executable”: “${workspaceFolder}/nuttx/nuttx”.

“configFiles”

Recent versions of the picoprobe software use a different interface to talk to the probe. For versions 1.01 and above of the picoprobe software, the interface configuration file changed from picoprobe.cfg to cmsis-dap.cfg.

“svdFile”: “${workspaceFolder}/pico-sdk/src/rp2040/hardware_regs/rp2040.svd”

We have a version of the Pico SDK specifically for our build requirements, so the location of the file is changed to reference the workspaceFolder rather than the global PICO_SDK_PATH environment variable.

“runToEntryPoint”: “__start”

This entry replaces the “runToMain”: true entry. Unlike conventional C/C++ applications, NuttX does not have a main method. With runToMain set, the debugger generates an error stating that main cannot be found and stops at the first executable statement in our code, namely in __start. Replacing runToMain with runToEntryPoint achieves the same thing without generating the error.

Doing this also removes the need to have the postRestartCommands specified.

“searchDir”: [“${workspaceFolder}/openocd/tcl”]

This entry is used by openocd to look for the interface and target configuration files specified in the configFiles entry.

“openOCDLaunchCommands”: [“adapter speed 5000”]

These commands are executed by openocd when it starts. The change to the adapter speed is required and is documented in the Getting Started with Raspberry Pi Pico documentation from the Raspberry Pi Foundation.

“preLaunchTask”: “Build NuttX”

The final entry tells Cortex-Debug how to build NuttX. It references the task we previously defined in the tasks.json file.

Testing

Testing this should simply be a case of saving all of the files above and pressing F5 in VS Code. If the changes have been successful then VS Code will first try to build NuttX in a Terminal window and it will then deploy NuttX to the Pico board and start the debugger. VS Code should look something like this (click on image for full view):

Debugging PicoW with VSCode

The inclusion of the SVD file allows us to examine the PicoW peripheral registers as well as the core registers:

PicoW Peripheral Registers in VSCode

Pico Cortex Registers in VSCode

Conclusion

GDB is a great debugger but it is often more convenient to use an IDE to debug your code. VS Code with the Cortex-Debug extension allows visual debugging of NuttX with a few nice additional features thrown in:

  • Easily viewed call stacks for both cores
  • Peripheral registers viewable through VS Code thanks to the SVD file

We should also note that the use of VS Code resolved the issues noted at the end of the previous post as Cortex-Debug is able to deploy a binary using the picoprobe without resorting to the UF2 method of deployment. This results in a seamless build, deploy and debug process.

We have two debug options and it is now down to personal preference as to which one to use.

Design the System

Friday, April 14th, 2017

One of the current projects on the go is a level shifter for the Teensy 3.6 using the TXS0108E chip. The aim is to allow the use of as many of the Teensy’s GPIO pins as possible in the development of another project that works at 5V logic levels.

This project reminded me that when putting together a microcontroller project, the system is made up of both hardware and software. Sometimes a design decision made in one element can have an adverse effect on the other.

Design Decisions

From the start it was decided that the GPIO pins would have a one-to-one mapping from the microcontroller to the external bus. So pin 1 on the microcontroller would map to pin 1 on the external connectors.

This would make coding easy when using the Arduino API. So connecting to the external bus and outputting a digital high signal on pin 1 would become:

pinMode(1, OUTPUT);
digitalWrite(1, HIGH);

Impact

Putting the circuit together in KiCAD resulted in the following design:

PCB Layout

TXS0108E Schematic

As you can see, the Teensy GPIOs (TIO-1…) mapped directly to the external bus (GPIO-1…).

When translated into the ratsnest, there were three occurrences of the following:

PCB Layout

Routing this was going to be a nightmare.

Changing the Design

At this point the penny dropped that a small change in the software would make the routing a whole lot easier.

Instead of using the pin numbers directly, a #define could be used for the external bus pin numbers. The above snippet would become:

#define BUS_IO1    30
.
.
.
pinMode(BUS_IO1, OUTPUT);
digitalWrite(BUS_IO1, HIGH);

This small change to the design created a one-off task to create a header file for the board, but it made the routing a lot easier.

Conclusion

Sometimes a small change may create a new task (creating the header file) but can save more time elsewhere in the project.

Moral of the story: design the system as a whole, not the individual components.

Manchester Baby (SSEM) Emulator

Sunday, February 12th, 2017

The Manchester Small Scale Experimental Machine (SSEM), or Manchester Baby as it became known, was the first computer capable of executing a stored program. The Baby was meant as a proving ground for early computer technology, having only 32 words of memory and a small instruction set.

A replica of the original Manchester Baby is currently on show in the Museum of Science and Industry in Castlefield, Manchester.

Manchester Baby at Museum of Science and Industry, August 2012.

As you can see, it is a large machine weighing in at over 1 ton (that’s imperial, not metric, hence the spelling).

Manchester Baby Architecture

A full breakdown of the Manchester Baby's architecture can be found in the Wikipedia article as well as several PDFs available online. The following is a brief overview to give some background to the Python code discussed later.

Storage Lines

Baby used 32-bit words with numbers represented in two's complement form. The words were stored in store lines with the least significant bit (LSB) first (at the left of the word). This is the reverse of most modern computer architectures, where the LSB is held in the rightmost position.

The storage lines are equivalent to memory in modern computers. Each storage line contains a 32-bit value and the value could represent an instruction or data. When used as an instruction, the storage line is interpreted as follows:

Bits    Description
0-5     Storage line number to be operated on
6-12    Not used
13-15   Opcode
16-31   Not used

As already noted, when the storage line is interpreted as data then the storage line contains a 32-bit number stored in two's complement form with the LSB to the left and the most significant bit (MSB) to the right.

This mixing of data and program in the same memory is known as von Neumann architecture named after John von Neumann. For those who are interested, the memory architecture where program and data are stored in separate storage areas is known as a Harvard architecture.

SSEM Instruction Set

Baby used three bits for the instruction opcode, bits 13, 14 and 15 in each 32-bit word. This gave a maximum of 8 possible instructions. In fact, only seven were implemented.

Binary Code   Mnemonic   Description
000           JMP S      Jump to the address obtained from memory address S (absolute jump)
100           JRP S      Add the value in store line S to the program counter (relative jump)
010           LDN S      Load the accumulator with the negated value in store line S
110           STO S      Store the accumulator into store line S
001 or 101    SUB S      Subtract the contents of store line S from the accumulator
011           CMP        If the accumulator is negative then add 1 to the program counter (i.e. skip the next instruction)
111           STOP       Stop the program

When reading the above table it is important to remember that the LSB is to the left.

Registers

Baby had three storage areas within the CPU:

  1. Current Instruction (CI)
  2. Present Instruction (PI)
  3. Accumulator

The current instruction is the modern-day equivalent of the program counter. It contains the address of the instruction currently executing. CI is incremented before the instruction is fetched from memory. At startup, CI is loaded with the value 0. This means that the first instruction fetched from memory for execution is the instruction in storage line 1.

The present instruction (PI) is the actual instruction that is currently being executed.

As with modern architecture, the accumulator is used as a working store containing the results of any calculations.

Python Emulator

A small Python application has been developed in order to verify my understanding of the architecture of the Manchester Baby. The application is a console application with the following brief:

  1. Read a program from a file and setup the store lines
  2. Display the store lines and registers before the program starts
  3. Execute the program displaying the instructions as they are executed
  4. Display the store lines and registers at the end of the program run

The application was broken down into four distinct parts:

  1. Register.py – implement a class containing a value in a register or storage line
  2. StoreLines.py – Number of store lines containing the application and data
  3. CPU.py – Implement the logic of the CPU
  4. ManchesterBaby.py – Main program logic, reading a program from file and executing the program

The source code for the above along with several samples and test programs can be found on GitHub.

Register.py

A register is defined as a 32-bit value. The Register class stores the value and has some logic for checking that the value passed does not exceed the maximum permitted for a 32-bit value. Note that no other checks or interpretation of the value are made.

StoreLines.py

StoreLines holds a number of Registers; the default when created is 32 registers, as this maps onto the number of store lines in the original Manchester Baby. It is possible to have a larger number of store lines.

CPU.py

The CPU class executes the application held in the store lines. It is also responsible for displaying the state of the CPU and the instructions being executed.

ManchesterBaby.py

The main program file contains an assembler (a very primitive one with little error checking) for the Baby’s instruction set. It allows a file to be read and the store lines setup and finally instructs the CPU to execute the program.

SSEM Assembler Files

The assembler provided in the ManchesterBaby.py file is primitive and provides little error checking. It is still useful for loading applications into the store lines ready for execution. The file format is best explained by examining a sample file:

--
--  Test the Load and subtract functions.
--
--  At the end of the program execution the accumulator should hold the value 15.
--
00: NUM 0
01: LDN 20      -- Load accumulator with the contents of line 20 (-A)
02: SUB 21      -- Subtract the contents of line 21 from the accumulator (-A - B)
03: STO 22      -- Store the result in line 22
04: LDN 22      -- Load accumulator (negated) with line 22 (-1 * (-A - B))
05: STOP        -- End of the program.
20: NUM 10      -- A
21: NUM 5       -- B
22: NUM 0       -- Result

Two minus signs indicate an inline comment; everything following them is ignored.

An instruction line has the following form:

Store line number: Instruction Operand

The store line number is the location in the Store that will be used to hold the instruction or data.

Instruction is the mnemonic for the instruction. Some of the instructions have synonyms:

Mnemonic   Synonyms
JRP        JPR, JMR
CMP        SKN
STOP       HLT, STP

As well as instructions, a number may also be given using the NUM mnemonic.

All of the mnemonics requiring a store number (all except STOP and CMP) read the Operand field as the store line number. The NUM mnemonic stores the Operand in the store line as is.

Testing

Testing the application was going to be tricky without a reference. Part of the reason for developing the Python implementation was to check my understanding of the operation of the SSEM. Luckily, David Sharp has developed an emulator written in Java. I can use this to check the results of the Python code.

The original test program run on the Manchester Baby was a calculation of factors. This was used as it would stress the machine. This same application can be found in several online samples and will be used as the test application. The program is as follows:

--
--  Tom Kilburns Highest Common Factor for 989
--
01:   LDN 18
02:   LDN 19
03:   SUB 20
04:   CMP
05:   JRP 21
06:   SUB 22
07:   STO 24
08:   LDN 22
09:   SUB 23
10:   STO 20
11:   LDN 20
12:   STO 22
13:   LDN 24
14:   CMP
15:   JMP 25
16:   JMP 18
17:   STOP
18:   NUM 0
19:   NUM -989
20:   NUM 988
21:   NUM -3
22:   NUM -988
23:   NUM 1
24:   NUM 0
25:   NUM 16

Running on the Java Emulator

Executing the above code in David Sharp's emulator gives the following output on the storage tube:

Result of the HCF application shown on the storage lines of the Java emulator.

and disassembler view:

Result of the program execution disassembled in the Java emulator.

And running on the Python version of the emulator results in the following output in the console:

Final output from the Python emulator.

Examination of the output shows that the two emulators produced the same results. The slight variation in the final output of the Python code is due to the way in which numbers are extracted from the registers; examination of the bit patterns in the store lines shows that the Java and Python emulators arrived at the same values.

Conclusion

The Baby presented an ideal way to start to examine computer architecture due to its primitive nature. The small storage space and simple instruction set allowed emulation in only a few lines of code.

Soak Testing (Minor Update)

Saturday, October 15th, 2016

The weather station project has been struggling for part of September with a random exception in my code. The source of the exception was finally identified, although the exact cause is still a mystery. As a temporary measure, the component and the associated source code were removed from the project; this allowed the project to continue.

Source of the Exception

In the previous post, Define a Minimum Viable Product and Ship it, I discussed a replacement for the clock module. This module was changed from the DS3234 to the DS3231, freeing up four additional pins (the DS3231 is an I2C peripheral and the DS3234 uses SPI). This change seems to have introduced some instability into the system.

Several days of debugging followed, after which I was still unable to isolate the problem. The interim solution is to remove the module and use the Ticker class to trigger an interrupt. Not ideal, but this will allow the rest of the software to be soak tested.

The additional Ticker object fires once per minute to trigger the reading of the sensors. In addition, the interrupt handler checks the current time, and if the hour and minute are both zero the application resets the pluviometer counter.
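
A minimal sketch of this arrangement, assuming the standard ESP8266 Arduino Ticker library (the sensor and clock details are placeholders):

#include <Ticker.h>

Ticker sensorTicker;
volatile bool readingDue = false;

//  Ticker callback - keep the interrupt work short and set a flag
//  so the real work can be done in loop().
void onMinuteTick()
{
    readingDue = true;
}

void setup()
{
    //  Trigger a sensor reading once per minute.
    sensorTicker.attach(60, onMinuteTick);
}

void loop()
{
    if (readingDue)
    {
        readingDue = false;
        //  Read the sensors and log the readings here (placeholder).
        //  When the hour and minute are both zero, reset the
        //  pluviometer counter as described above.
    }
}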

Soak Testing

As mentioned above, one Ticker object is generating an interrupt once per minute. The frequency of the interrupt is higher than will be used in the final project. One minute is more than enough time to perform the sensor readings and log them to the internet, but it is also frequent enough to simulate a prolonged period in the field.

The prototype board was also located close to the final home for the sensors. This hit a problem, as the range of the WiFi network was not sufficient for the ESP8266, which has prevented the prototype from being tested in the field for the moment.

Conclusion

The prototype board is currently sited on a window ledge in the house, taking temperature, humidity and luminosity readings once per minute. This data is logged to the Sparkfun Phant data service. The system has been running continuously for the last 7 days (approximately 10,000 readings).

The software seems stable at the moment. The next problem: extending the range of the WiFi network.