RSS

Archive for the ‘Raspberry Pi’ Category

Raspberry Pi Pico and Pico W Project Templates

Tuesday, September 5th, 2023

Pico Template Build Complete

There are some things in software development that we do not do very often, and because we perform the task infrequently we often forget the nuances (well, I do). One of these tasks is creating a new project.

In this post we will look at one option for automating this process, GitHub Templates. This work is partially my own work but also a case of bringing together elements of other blog posts and GitHub repositories.

All of the code discussed below can be found in the PicoTemplate repository and this should be used as a reference throughout this post.

Starting a New Project

Several days ago, I was in the process of starting a new project for the Raspberry Pi Pico W using the picotool. For those who are not familiar with this tool, it is meant to automate the project creation process for you. You simply tell the tool which features you are going to be using, along with a few other parameters such as the board type, and it will generate a directory containing all of the necessary code and project files.

Sounds too good to be true? For me it was: no matter which feature list I requested, I could not get an application to deploy to the board and talk to the desktop computer over UART or USB.

At this point I decided to take this back to basics and get Blinky up and running. The problem definition became:

  1. C++ application
  2. Blink the onboard LED
  3. Support both Pico and Pico W boards
  4. Standard IO output over UART or USB

It would also be desirable to automate as much of the process as possible.

Blinky

One complication with the Pico boards is that they each use a different mechanism to access the onboard LED. The Pico has the LED wired directly to GPIO 25, whilst the Pico W drives it through a GPIO on the onboard WiFi / Bluetooth chip (hereafter referred to as just wireless). This means we need different versions of Blinky depending upon which board is selected. It also means that the Pico W build needs an additional library to support the wireless chip even if we are not using any wireless features.
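As a rough illustration, and assuming the standard Pico SDK calls rather than the exact code in the PicoTemplate repository, the two main.cpp variants differ only in how the LED is driven:

// Pico variant: the LED is wired directly to GPIO 25 (PICO_DEFAULT_LED_PIN).
#include "pico/stdlib.h"

int main()
{
    stdio_init_all();                              // stdio over UART or USB
    gpio_init(PICO_DEFAULT_LED_PIN);
    gpio_set_dir(PICO_DEFAULT_LED_PIN, GPIO_OUT);
    while (true)
    {
        gpio_put(PICO_DEFAULT_LED_PIN, 1);
        sleep_ms(250);
        gpio_put(PICO_DEFAULT_LED_PIN, 0);
        sleep_ms(250);
    }
}

// Pico W variant: the LED hangs off the wireless chip, so this build also
// links against the pico_cyw43_arch_none library even though no wireless
// features are used.
#include "pico/stdlib.h"
#include "pico/cyw43_arch.h"

int main()
{
    stdio_init_all();
    if (cyw43_arch_init() != 0)                    // bring up the wireless chip
    {
        return -1;
    }
    while (true)
    {
        cyw43_arch_gpio_put(CYW43_WL_GPIO_LED_PIN, 1);
        sleep_ms(250);
        cyw43_arch_gpio_put(CYW43_WL_GPIO_LED_PIN, 0);
        sleep_ms(250);
    }
}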

The simplest way of doing this is to use a template directory containing the main application code and the project CMakeLists.txt file for each board. We can then copy the template files for the appropriate board type into the sources directory.
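A hypothetical layout (the directory and file names here are purely illustrative; the PicoTemplate repository is the reference for the real structure) could look like this:

template/
    pico/
        main.cpp            <- Blinky driving GPIO 25 directly
        CMakeLists.txt
    picow/
        main.cpp            <- Blinky driving the LED through the wireless chip
        CMakeLists.txt
src/                        <- the configure script copies the selected main.cpp here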

Keeping with the theme of forgetting the nuances, we will add some scripts to perform some common tasks:

  1. Configure the system (copying the files to the correct locations)
  2. Build the application
  3. Flash a board using openocd

Our first task will be to set the system up with the correct main.cpp and CMakeLists.txt files and put the files in the correct place. Checking the default CMakeLists.txt file we see that the project is named projectname, which is not very informative, so the configuration script will also rename the project.
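In outline the configuration step only has to copy two files and perform a search and replace. A minimal sketch of what such a script might look like is shown below; the template paths and the projectname placeholder are assumptions for illustration, see the repository for the real configure.sh:

#!/bin/bash
# Usage: ./configure.sh -b=<pico|picow> -n=<ProjectName>
BOARD=pico
NAME=MyProject
for arg in "$@"; do
    case $arg in
        -b=*) BOARD="${arg#*=}" ;;
        -n=*) NAME="${arg#*=}" ;;
    esac
done

# Copy the board specific template files into place...
cp "template/${BOARD}/main.cpp" src/main.cpp
cp "template/${BOARD}/CMakeLists.txt" CMakeLists.txt

# ...and rename the project from the placeholder name.
sed -i.bak "s/projectname/${NAME}/g" CMakeLists.txt && rm CMakeLists.txt.bak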

The purpose of the build script should be obvious: it builds the application code and also offers the ability to perform a full rebuild if necessary.

The final script will flash a board using openocd and another Raspberry Pi Pico configured as a debug probe. This was discussed a few weeks ago in the post PicoDebugger – Bringing Picoprobe and the PicoW Together. This setup also has the advantage of exposing the default UART to the host computer.
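For reference, flashing through a picoprobe style debug probe with openocd typically boils down to a single command along the following lines; the configuration file names match the later posts in this series, while the ELF path and application name are illustrative:

openocd -f interface/cmsis-dap.cfg -f target/rp2040.cfg \
        -c "adapter speed 5000" \
        -c "program build/TestApplication.elf verify reset exit"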

Using the Scripts

Using the scripts is a three-step process, one step of which is only really performed once.

The first step is to use the configure.sh script to copy the files and rename the project:

./configure.sh -b=picow -n=TestApplication

After running this command the src/main.cpp file will contain the code to blink the LED on a Pico W board and the project will have been renamed TestApplication.

Secondly, we can build the application using the build.sh script:

./build.sh

Finally, the application can be deployed to the board using the flash.sh script.

./flash.sh

The UF2 file (in the case above, TestApplication.uf2) can also be copied to the board if a debug probe is not available.

Scope Creep

The story was supposed to end here but this is not going to be the case.

There are some improvements that we can look at to make the system a little more comprehensive. These include:

  1. Add a testing framework
  2. Create a Docker file for build and testing

Both of these items are currently work in progress.

Acknowledgements

This project has been inspired by several others on GitHub:

  • [Raspberry Pi Pico Examples](https://github.com/raspberrypi/pico-examples)
  • [Raspberry Pi Pico Template](https://github.com/cathiele/raspberrypi-pico-cpp-template)

Conclusion

The initial requirement of creating a template for Pico and Pico W development could have been achieved simply by creating two different templates, one for each board type. This would arguably have been quicker and less complex.

The addition of the testing framework and Docker container into the requirements would have resulted in some duplication of work in each template. This made bringing the two templates together in one repository more logical, as errors or additions in one board template automatically become part of the other.

PicoW with SmartFS

Sunday, July 2nd, 2023

Mounting SmartFS

One feature that I want to add to my current project is a small file system containing files that have been built into the system at compile time. These files would then be available to the application at run time. Let’s look at how we can do this with NuttX.

This tutorial assumes that you have NuttX cloned and ready to build, if not then you can find out how to do this in the first article in this series.

Adding SmartFS to the Build

NuttX has a built-in configuration for the PicoW with SmartFS already configured. The first thing we need to do is to start with a clean system and then configure the build to include NSH and the flash file system. Start by changing to the NuttX source directory and then executing the following commands:

make distclean
./tools/configure.sh -l raspberrypi-pico-w:nsh-flash

Now we have the system configured we can build the OS and applications by executing the following command:

make -j

This should take a minute or so on a modern machine. Now we can deploy the system to the PicoW either by using openocd or by dragging the uf2 file onto the PicoW drive. Now connect to the PicoW using a serial application and type help to show the menu of commands. You should see something like the following:

SmartFS Builtin Apps

We can check to see if SmartFS is available by checking the contents of the /dev directory with the command ls /dev. This should result in something like the following if SmartFS has been enabled correctly:

Device Directory Listing

We can mount the file system using the command mount -t smartfs /dev/smart0 /data and then check the contents of the /data directory, where we should find one file, test. Checking the contents of the file with the command cat /data/test should show that it contains a single line of text, Hello, world!.
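In the NuttShell the whole check looks something like this (output abbreviated):

nsh> mount -t smartfs /dev/smart0 /data
nsh> ls /data
/data:
 test
nsh> cat /data/test
Hello, world!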

So far, so good, we have built the system and proven that it contains the default file and correct contents.

Adding a New File to the System Image

The next piece of the puzzle is to work out how to add new files to the file system. This took a few hours to figure out, but here goes…

The first attempt led me to search for RP2040_FLASH_FILE_SYSTEM in the source tree (ripgrep is a great tool for doing this). This led to a number of possible files. Maybe we can narrow the search down a little.

Second attempt: let’s have a look for Hello, world!. This resulted in a smaller number of files, leading to the file arch/arm/src/rp2040/rp2040_flash_initialize.S. This file is well documented and shows how to set up the SmartFS file system, and at the end of the file it shows how to create an entry for the file we see when we list the mounted directory. Scrolling down to the end of the file we find the following:

    sector      3, dir
    file_entry  0777, 4, 0, "test"

    sector      4, file, used=14
    .ascii      "Hello, world!\n"

    .balign     4096, 0xff
    .global     rp2040_smart_flash_end
rp2040_smart_flash_end:

This looks remarkably familiar. So what happens if we change the above to look like this:

    sector      3, dir
    file_entry  0777, 4, 0, "test"
    file_entry  0777, 5, 0, "test2"

    sector      4, file, used=14
    .ascii      "Hello, world!\n"

    sector      5, file, used=14
    .ascii      "Testing 1 2 3\n"

    .balign     4096, 0xff
    .global     rp2040_smart_flash_end
rp2040_smart_flash_end:

Building the system, deploying the code and executing the following commands:

mount -t smartfs /dev/smart0 /data
cd /data
ls

results in the following:

New file added to SmartFS

If we execute the command cat test2 we are rewarded with the output Testing 1 2 3.

Further testing shows that the file system survives through a reset. We can do the following:

  • echo "My test" > test3
  • rm test
  • reboot

These commands should create a new file test3, remove the file test, and then reboot the system. Checking the file system contents afterwards shows that the changes persist through a reset.
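The whole sequence looks something like this (output abbreviated; the file system needs remounting after the reboot):

nsh> cd /data
nsh> echo "My test" > test3
nsh> rm test
nsh> reboot
...
nsh> mount -t smartfs /dev/smart0 /data
nsh> ls /data
/data:
 test2
 test3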

Conclusion

This experiment was a partial success. A simple file system has been made available to an application and the file system survives a reset. One issue remains: adding new files is a little complex. It also requires changes to the NuttX source tree outside of the applications folder. This could result in changes being lost when a new version of NuttX is released.

There could be a solution, ROMFS, stay tuned for the next episode.

VSCode Debugging with NuttX and Raspberry Pi PicoW

Wednesday, June 14th, 2023

VS Code Halted in NuttX __start

In the previous post, we managed to get GDB working with NuttX running on the Raspberry Pi PicoW. In this post we will look at using VSCode to debug NuttX.

For a large part of this post it was a case of following Shawn Hymel’s guide Raspberry Pi Pico and RP2040 – C/C++ Part 2 Debugging with VS Code. This is an excellent guide and I recommend using it as a companion to this post.

We will start with the assumption that you have followed the previous post in this series and have a working NuttX build for the Raspberry Pi PicoW. We will also assume that you have a working VSCode installation with the Cortex-Debug extension installed.

Configuration Files

We will need to create (or modify) three configuration files to allow VSCode to debug NuttX.

  • launch.json
  • tasks.json
  • settings.json

In the first article of this series we created the directory NuttX-PicoW to hold the apps and nuttx folders holding our NuttX code. This directory is the project directory or, in VS Code parlance, the workspaceFolder. The workspaceFolder should also contain the Pico SDK and the Raspberry Pi specific version of openocd.

We now add a .vscode directory to the workspaceFolder. This directory will hold the three configuration files listed above.

Start by opening VS Code and opening the workspaceFolder. This should show the four folders already in the workspaceFolder. Now create a .vscode directory in the workspaceFolder if it does not already exist.

settings.json

Only one entry is required in the settings.json file and this is the location of the openocd executable. If you have followed these posts so far this will be in the openocd/src folder. Create a settings.json file in the .vscode folder and add the following to the file.

{
    "cortex-debug.openocdPath": "${workspaceFolder}/openocd/src/openocd"
}

Note that the name of the executable may vary depending upon your operating system; this post is being written from a macOS perspective.

tasks.json

The tasks.json file holds the entry that will be used to build NuttX prior to deployment. In Shawn’s document the projects being worked on use the CMake build system. We need to modify this to build NuttX using make. We want VS Code to use the command make -C ${workspaceFolder}/nuttx -j to build NuttX. The build task below will create a task named Build NuttX to do just this:

{
    "version": "2.0.0",
    "tasks": [
        {
          "label": "Build NuttX",
          "type": "cppbuild",
          "command": "make",
          "args": [
            "-C",
            "${workspaceFolder}/nuttx",
            "-j"
          ]
        }
      ]  
}

launch.json

Of the three files we are creating, the launch.json file is the most complex. Much of the file remains the same as that presented by Shawn but there are some differences. The file used here looks like this:

{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Pico Debug",
            "cwd": "${workspaceRoot}",
            "executable": "${workspaceFolder}/nuttx/nuttx.elf",
            "request": "launch",
            "type": "cortex-debug",
            "servertype": "openocd",
            "gdbPath" : "arm-none-eabi-gdb",
            "device": "RP2040",
            "configFiles": [
                "interface/cmsis-dap.cfg",
                "target/rp2040.cfg"
            ],
            "svdFile": "${workspaceFolder}/pico-sdk/src/rp2040/hardware_regs/rp2040.svd",
            "runToEntryPoint": "__start",
            "searchDir": ["${workspaceFolder}/openocd/tcl"],
            "openOCDLaunchCommands": ["adapter speed 5000"],
            "preLaunchTask": "Build NuttX"
        }
    ]
}

The following items are the ones that need to be changed:

“executable”: “${workspaceFolder}/nuttx/nuttx.elf”

This is the path to the executable that will be created by the build system, in this case the NuttX ELF file. Depending upon the output of your build system the file may simply be named nuttx, in which case this entry should be changed to “executable”: “${workspaceFolder}/nuttx/nuttx”.

“configFiles”

The recent versions of the picoprobe software use a different interface to talk to the picoprobe. For versions 1.01 and above of the picoprobe software this interface changed from picoprobe.cfg to cmsis-dap.cfg.

“svdFile”: “${workspaceFolder}/pico-sdk/src/rp2040/hardware_regs/rp2040.svd”

We have a version of the Pico SDK specifically for our build requirements and so for this reason the location of the file is changed to reference the workspaceFolder rather than the global PICO_SDK_PATH environment variable.

“runToEntryPoint”: “__start”

This entry replaces the “runToMain”: true entry. Unlike conventional C/C++ applications, NuttX does not have a main function. Having the runToMain entry generates an error stating that main cannot be found; the debugger then stops at the first executable statement in our code, namely in __start. Replacing runToMain with runToEntryPoint achieves the same thing but does not generate the error.

Doing this also removes the need to have the postRestartCommands specified.

“searchDir”: [“${workspaceFolder}/openocd/tcl”]

This entry is used by openocd to look for the interface and target configuration files specified in the configFiles entry.

“openOCDLaunchCommands”: [“adapter speed 5000”]

These commands are executed by openocd when it starts. The change to the adapter speed is required and is documented in Getting Started with Raspberry Pi Pico documentation from the Raspberry Pi Foundation.

“preLaunchTask”: “Build NuttX”

The final entry tells Cortex-Debug how to build NuttX. It references the task we previously defined in the tasks.json file.

Testing

Testing this should simply be a case of saving all of the files above and pressing F5 in VS Code. If the changes have been successful then VS Code will first try to build NuttX in a Terminal window and it will then deploy NuttX to the Pico board and start the debugger. VS Code should look something like this (click on image for full view):

Debugging PicoW with VSCode

The inclusion of the SVD file allows us to examine the PicoW peripheral registers as well as the core registers:

PicoW Peripheral Registers in VSCode

Pico Cortex Registers in VSCode

Conclusion

GDB is a great debugger but it is often more convenient to use an IDE to debug your code. VS Code with the Cortex-Debug extension allows visual debugging of NuttX with a few nice additional features thrown in:

  • Easily viewed call stacks for both cores.
  • The SVD file allows the peripheral registers to be viewed through VS Code.

We should also note that the use of VS Code resolved the issues noted at the end of the previous post as Cortex-Debug is able to deploy a binary using the picoprobe without resorting to the UF2 method of deployment. This results in a seamless build, deploy and debug process.

We have two debug options and it is now down to personal preferences as to which one to use.

Debugging NuttX on the Raspberry Pi Pico

Sunday, June 11th, 2023

GDB showing thread information

At some point in the development lifecycle we will hit some unexpected application behaviour. At this point we reach for the debugger, which allows us to connect to the application and interrogate its state, and hopefully we will be able to determine the cause of the problem. Working with NuttX is no different from any other application. The only complication is that connecting to an application running on the PicoW requires a debug adapter. Luckily, the Raspberry Pi Foundation has an affordable solution to this problem.

Environment

The getting started guide for the Pico has instructions on how to set up the environment for Linux, Mac and Windows. This post will be following the guide for macOS, adding some hardware to allow for a stable connection between the debug probe and the PicoW.

There are two options for the debug probe:

  • The Raspberry Pi debug probe
  • Use a Pico programmed as a debug probe

I have a number of Pico and PicoW boards and so the latter option, using a Pico programmed as a debug probe, is going to be the most cost effective and this is the route we will look at here.

Hardware

For the purpose of discussion we will use the following terms to describe the two boards:

  • Picoprobe – Pico that is programmed to be a debug probe
  • Target – PicoW that is running the application we wish to debug

The getting started guide shows how two Pico boards should be connected to provide both debugging and serial console output through a UART on the target board. The image below shows the two boards on a breadboard setup (image is taken from the Raspberry Pi Pico getting started guide under CC-BY-ND license).

Picoprobe and Pico Fritzing

The same setup can be reproduced on protoboard. This will make the setup a little more robust and permanent. Breaking out the soldering iron and the parts bin yielded the following:

Pico on Protoboard

The flying leads are needed to connect to the SWD debug headers on the PicoW as it was not possible to connect these through to the protoboard. The black (GND) and red (5V) wires provide power to the target board. This means that we only have one connector to the system, namely to the Picoprobe but we must take care not to exceed the power capabilities of the USB connection.

The white (SWCLK) and blue (SWDIO) flying leads provide the debug connection between the two boards. As with the permanent connections above, the black lead is a GND connection.

The blue and yellow leads connected permanently to the protoboard connect the UART from the target board through to the picoprobe. The picoprobe will use the USB connection to present the host computer with a connection through to the UART on the target board.

Software

The first piece of software needed is the Picoprobe software itself. There are instructions on how to build this but I found the simplest thing to do was to download the latest release from the Picoprobe GitHub repository. At the time of writing this is version 1.0.2. The Picoprobe software is deployed to the Pico being used as a Picoprobe in the usual way: reset the Pico whilst holding the BootSel button and then drag and drop the firmware file onto the drive presented to the host computer.

The next thing we need is a version of openocd that can talk to the Picoprobe. The machine I am using is used to develop software for multiple boards and so the preferable solution is to build the Raspberry Pi Pico version of openocd for this setup and then reference this locally rather than install the software globally. This is the same approach taken for the various development environments I use to prevent them interfering with each other.

Following the guide (for Mac) we need to execute the following commands:

cd ~/pico
git clone https://github.com/raspberrypi/openocd.git --branch picoprobe --depth=1
cd openocd
export PATH="/usr/local/opt/texinfo/bin:$PATH"
./bootstrap
./configure --enable-picoprobe --disable-werror
make -j4

Check the latest version of the guide for instructions for your OS.

Now to try openocd… This is where I hit a problem caused by having out of date documentation. Following the latest documentation we need to execute the following command:

src/openocd -f interface/cmsis-dap.cfg -f target/rp2040.cfg -c "adapter speed 5000" -s tcl

Following the out of date documentation I initially hit two errors:

  • Error: Can’t find a picoprobe device! Please check device connections and permissions.
  • Error: CMSIS-DAP command CMD_DAP_SWJ_CLOCK failed.

Both of these errors were corrected by using the command above to connect to the debug probe. Moral of the story, make sure you are using the latest information. If everything goes well then we should be rewarded with something similar to the following output:

Open On-Chip Debugger 0.11.0-g4f2ae61 (2023-06-10-18:20)
Licensed under GNU GPL v2
For bug reports, read
	http://openocd.org/doc/doxygen/bugs.html
Info : auto-selecting first available session transport "swd". To override use 'transport select <transport>'.
Info : Hardware thread awareness created
Info : Hardware thread awareness created
Info : RP2040 Flash Bank Command
adapter speed: 5000 kHz

Info : Listening on port 6666 for tcl connections
Info : Listening on port 4444 for telnet connections
Info : Using CMSIS-DAPv2 interface with VID:PID=0x2e8a:0x000c, serial=E66058388356A232
Info : CMSIS-DAP: SWD  Supported
Info : CMSIS-DAP: FW Version = 2.0.0
Info : CMSIS-DAP: Interface Initialised (SWD)
Info : SWCLK/TCK = 0 SWDIO/TMS = 0 TDI = 0 TDO = 0 nTRST = 0 nRESET = 0
Info : CMSIS-DAP: Interface ready
Info : clock speed 5000 kHz
Info : SWD DPIDR 0x0bc12477
Info : SWD DLPIDR 0x00000001
Info : SWD DPIDR 0x0bc12477
Info : SWD DLPIDR 0x10000001
Info : rp2040.core0: hardware has 4 breakpoints, 2 watchpoints
Info : rp2040.core1: hardware has 4 breakpoints, 2 watchpoints
Info : starting gdb server for rp2040.core0 on 3333
Info : Listening on port 3333 for gdb connections

If we have got here then we have the debug probe programmed and we have a copy of openocd that can communicate with the picoprobe.

Reconfiguring NuttX

In previous posts we used UART over USB to communicate with the host computer. The picoprobe gives us another option, using the UART on the target board which is routed through the picoprobe. We must reconfigure NuttX in order to take advantage of the UART redirection. The configuration we want is raspberrypi-pico-w:nsh, so following the first post in this series and making the change to the configuration we need to execute the following:

make distclean
./tools/configure.sh -l raspberrypi-pico-w:nsh
make -j

At the end of this process we should have a new nuttx.uf2 file configured to use the UART port on the target board to communicate with the NuttX shell. This UF2 file can be deployed to the target board using the usual deployment process (Bootsel and reset followed by dragging and dropping the file).

At this point we should have two Pico boards programmed, one to be the picoprobe and one to be the target board (with NuttX). Now we can connect the boards (in my case using the protoboard shown above) and verify that NuttX is active and communicating through the UART.

Plugging the Pico (picoprobe) and the PicoW (target board) into the protoboard and then connecting the host computer to the picoprobe through USB presents the computer with a new serial port, usbmodem14302. Connecting to this port (see the example below) and hitting enter shows the nsh prompt; asking for help then shows the expected output.
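Any serial terminal will do; on a Mac, screen is the simplest option (the device name is the one that appeared on my machine and will differ on yours):

screen /dev/tty.usbmodem14302 115200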

nsh> help
help usage:  help [-v] [<cmd>]

    .         cat       df        free      mount     rmdir     truncate  
    [         cd        dmesg     help      mv        set       uname     
    ?         cp        echo      hexdump   printf    sleep     umount    
    alias     cmp       env       kill      ps        source    unset     
    unalias   dirname   exec      ls        pwd       test      uptime    
    basename  date      exit      mkdir     reboot    time      usleep    
    break     dd        false     mkrd      rm        true      xd        

Builtin Apps:
    nsh  sh   
nsh>

So we have a good UART connection and openocd can connect to the picoprobe.

Now we need to connect the debugger to openocd.

Debugging

To debug the board we will start with GDB, specifically arm-none-eabi-gdb-py as this has Python scripting enabled. We can use the scripting engine to automate some tasks later.

The first thing to do is add a .gdbinit file to the directory we will be executing GDB from; in this case we will use the nuttx directory. The following commands should set up GDB and openocd ready for debugging. The nuttx ELF file will be loaded for us, so by the time the GDB prompt is shown we will have the symbol file loaded. Our .gdbinit file should look something like this:

set history save on
set history size unlimited
set history remove-duplicates unlimited
set history filename ~/.gdb_history

set output-radix 16
set mem inaccessible-by-default off

set remote hardware-breakpoint-limit 4
set remote hardware-watchpoint-limit 2

set confirm off

file nuttx
add-symbol-file -readnow nuttx

target extended-remote :3333
mon gdb_breakpoint_override hard

To debug we will need two terminal sessions open. In the first session run openocd as described above. In the second session start GDB (simply enter the command arm-none-eabi-gdb-py in the terminal session) making sure that you are in the directory containing the nuttx file and the .gdbinit file. If you are successful then you will see something like this:

GNU gdb (GDB) 13.1
Copyright (C) 2023 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-apple-darwin22.3.0".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<https://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
    <http://www.gnu.org/software/gdb/documentation/>.

For help, type "help".
Type "apropos word" to search for commands related to "word".
add symbol table from file "../nuttx/nuttx"

We can now start to debug the application on the board. A few simple tests, firstly, reset the board with the GDB command mon reset halt. We should see something like this:

target halted due to debug-request, current mode: Thread
xPSR: 0xf1000000 pc: 0x000000ea msp: 0x20041f00
target halted due to debug-request, current mode: Thread
xPSR: 0xf1000000 pc: 0x000000ea msp: 0x20041f00

We can now set a breakpoint, say in nsh_main (b nsh_main) and run the application to the breakpoint (c). If we then ask for information about the threads (info thre) we should see something like this:

  Id   Target Id                                           Frame
* 1    Thread 1 (Name: rp2040.core0, state: breakpoint)    nsh_main (argc=0x1, argv=0x20003170) at nsh_main.c:58
  2    Thread 2 (Name: rp2040.core1, state: debug-request) 0x000000ea in ?? ()

Conclusion

Getting debugging up and running took a little more effort than I expected but was not expensive due to the ability to use a Pico to debug a Pico. There is some excellent debugger documentation on the NuttX web site. This suggests that openocd should support NuttX; however, I was not able to get this working with the Raspberry Pi release of openocd. It was a question of having Pico support or NuttX support. I’m going with the Pico support for the minute.

I was hoping to use the picoprobe board to deploy the application meaning that we only have one connection between the host computer and the development hardware. A little more research is going to be necessary to see if this can be done.

Adding SMP to NuttX on the Raspberry Pi PicoW

Tuesday, June 6th, 2023

ps Command Results without SMP Enabled

So far in this series we have configured NuttX to build C++ applications on the PicoW and we have created our own custom application and successfully deployed this to the board.

In this post we will look at configuring the system to use both cores on the RP2040 processor, using the configuration we have developed over the previous posts. NuttX already includes an SMP configuration, raspberrypi-pico-w:smp, but this uses UART0 for communication rather than USB, hence we will look at adding SMP on top of the USB configuration.

Default Configuration

It appears that the default configuration for NuttX running on the PicoW runs the RP2040 as a single-core processor. Running the ps command in the NuttX shell yields the following output:

   PID GROUP PRI POLICY   TYPE    NPX STATE    EVENT     SIGMASK           STACK COMMAND
    0     0   0 FIFO     Kthread N-- Ready              0000000000000000 001000 Idle Task
    1     1 100 RR       Task    --- Running            0000000000000000 002000 nsh_main   

From the output we see that there are only two tasks running on the system. From this we can deduce that only one of the cores is enabled. If both cores were running then we should see two Idle Tasks running, one on each core plus the nsh_main task. This is confirmed in the SMP Documentation where we find the statement:

Each CPU will have its own IDLE task.

Now we have this confirmed, let’s move on to reconfiguring the system.

Enabling SMP

According to the documentation we need to set three options through the configuration system in order to turn on SMP (the resulting .config entries are shown after the list):

  • CONFIG_SMP
  • CONFIG_SMP_NCPUS
  • CONFIG_IDLETHREAD_STACKSIZE
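Once the configuration steps described below are complete, the relevant entries in the generated .config file should end up looking something like this (2048 is the interrupt stack size chosen later in this post, and the idle thread stack size is left at its default):

CONFIG_SMP=y
CONFIG_SMP_NCPUS=2
CONFIG_ARCH_INTERRUPTSTACK=2048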

Starting the configuration system with the make menuconfig command, we are presented with the top level options screen:

NuttX configuration system

This poses the question “How do I find CONFIG_SMP and the rest of the options?”. Luckily, kconfig has a search option, pressing the / key presents the user with a search dialog box:

Search dialog here.

Typing SMP and pressing return shows a number of results; the longer the search term, the more accurate the results will be. In our case, searching for SMP yields a number of results that are irrelevant to us. However, the top result looks like it is the one we need:

SMP search results.

The results give us a fair amount of information about the configuration options. If we look at the top entry for SMP (this is actually the option that we need to turn on) the system is giving us the following information:

  • The option name (SMP) which will translate to CONFIG_SMP in the various include files
  • The location of the option, in this case RTOS Features -> Tasks and Scheduling
  • In which file (and the line number) the option is defined, sched/Kconfig, line 271
  • Any other options that this option depends upon.
  • Any options that will be turned on as a result of selecting this option

You will also note a number in brackets at the side of Tasks and Scheduling. This is the shortcut key, in this case 1, and pressing 1 will take us to the appropriate place in the configuration system.

RTOS Features options.

Selecting Tasks and Scheduling by pressing enter we are presented with the following:

Tasks and scheduling, no SMP.

Notice that there is no option to turn on SMP. Confused? So was I the first time I saw this. Pressing the ESC key a few times should take us back to the search results. Reading these more carefully we note the following:

Depends on: ARCH_HAVE_MULTICPU [=y] && ARCH_HAVE_TESTSET [=y] && ARCH_INTERRUPTSTACK [=0]!=0

So we have to ensure that three conditions are met in order to enable SMP:

  • ARCH_HAVE_MULTICPU is turned on, and this looks to be the case (ARCH_HAVE_MULTICPU [=y])
  • ARCH_HAVE_TESTSET is turned on, and again this looks to be the case (ARCH_HAVE_TESTSET [=y])
  • ARCH_INTERRUPTSTACK has a non-zero value, and this appears not to be the case (ARCH_INTERRUPTSTACK [=0]!=0)

It appears that we have not met all of the conditions to activate SMP; we need to define a size for the interrupt stack. Using the search system again, but this time for ARCH_INTERRUPTSTACK, we get the following results:

Interrupt stack search results

Pressing the 1 key takes us directly to the entry we need to change. Setting the value to 2048 should be a reasonable starting point. We can always adjust this later if we find it is causing problems.

Exiting this menu and the search for ARCH_INTERRUPTSTACK, we can search for SMP again. Pressing the 1 key in the search results now presents us with a different view: this time we are taken to the SMP option and we can turn it on. Doing so adds some new entries to the menu, one of which defines the number of CPUs available; this is set to 4 and we will need to change it to 2.

SMP Configured image.

We are now ready to build the system.

Build and Deploy NuttX

We should now be ready to build and deploy the system to the PicoW. Executing the following commands will build NuttX and create the UF2 file ready for deployment:

make clean
make -j

At the end of the build process we can deploy the UF2 file to the board by putting the board in bootloader mode and dropping the file onto the drive that is shown on the host computer. See the first article in this series for more information on how to do this.

Time to connect our serial terminal to the PicoW and run the ps command once again:

SMP Enabled ps output.

As you can see from the screenshot above, we now have two idle tasks as expected and the ps command shows some additional information, notably the CPU that each task is allocated to.

Conclusion

Adding SMP to the system was not as simple as it first sounded. The main lesson from this was to check the dependencies in the search results. The requirement for an interrupt stack was not discussed in the SMP documentation and this caused some confusion initially. The help in the configuration system came to our rescue and the situation was easily recoverable.

If you have been following the series, we are now at the point where we have to think about some serious development. We now have space for our own application code, C++ is enabled and we are able to use both cores on the RP2040 microcontroller.

At this point the series will slow down a little as some of the features now need some in-depth investigation, most notably:

  • Using SMP correctly in NuttX
  • Interprocess communication between tasks running on different cores
  • RP2040 PIO and NuttX

Adding a User Application to NuttX

Tuesday, June 6th, 2023

Enable SSEM User Application

So far we have seen how to deploy NuttX to a Raspberry Pi PicoW and enable C++ in NuttX. Next up we will create our own application and deploy it to the PicoW.

Creating a New Application

The build system for NuttX applications uses a makefile that is written in such a way that all we have to do to add a new application is create a directory containing some template files. The easiest way to do this is to use an existing application as a template, so let us start by making a copy of the helloxx application. This can be found in the apps repository in the examples directory. We will start by making a recursive copy of this directory and renaming it ssem.

Looking inside the ssem directory we find a number of files:

helloxx_main.cxx
Make.defs
Kconfig
Makefile

Time to start editing the files.

helloxx_main.cxx

We will start by renaming the file to represent the application name, so we will rename this to ssem_main.cxx. Now editing the file we change all occurrences of helloxx_main to ssem_main. This will allow us to verify that we have indeed deployed the application.
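A heavily stripped down sketch of what the renamed file might contain is shown below; the real helloxx example constructs the class in several different ways, and the messages here simply mirror the output we expect to see later:

//
// ssem_main.cxx - simplified sketch based on the helloxx example
//

#include <cstdio>

class CHelloWorld
{
public:
    bool HelloWorld(void)
    {
        std::printf("CHelloWorld::HelloWorld: Hello, World!!\n");
        return true;
    }
};

// The NuttX build registers this entry point as the built-in command named
// by PROGNAME in the Makefile (ssem in our case).
extern "C" int main(int argc, char *argv[])
{
    std::printf("ssem_main: Saying hello from the dynamically constructed instance\n");
    CHelloWorld *pHello = new CHelloWorld;
    pHello->HelloWorld();
    delete pHello;

    std::printf("ssem_main: Saying hello from the instance constructed on the stack\n");
    CHelloWorld hello;
    hello.HelloWorld();

    return 0;
}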

Makefile

Make a similar change to the Makefile by changing all occurrences of helloxx to ssem. Make changes to the comments to reflect that this application is a new application. Our Makefile should now look something like this:

include $(APPDIR)/Make.defs

# SSEM Application

MAINSRC = ssem_main.cxx

# ssem built-in application info

PROGNAME = ssem
PRIORITY = SCHED_PRIORITY_DEFAULT
STACKSIZE = $(CONFIG_DEFAULT_TASK_STACKSIZE)
MODULE = $(CONFIG_EXAMPLES_SSEM)

include $(APPDIR)/Application.mk

Make.defs

The Make.defs file conditionally adds the new application to the list of configured applications and ensures that the file will be built. This file should be edited to look something like this:

ifneq ($(CONFIG_EXAMPLES_SSEM),)
CONFIGURED_APPS += $(APPDIR)/examples/ssem
endif

Kconfig

The Kconfig file allows us to define and change application parameters, something we may look at later. For the moment it simply defines the application name and title and tells the configuration system how to turn compilation on or off. Making changes similar to the above we will have a file that contains something like this:

config EXAMPLES_SSEM
    tristate "SSEM example"
    default n
    depends on HAVE_CXX
    ---help---
        Enable the SSEM application

Build the SSEM Application

Now we have set up the application we can start to reconfigure the build system and deploy the application. First thing to do is to clean the current build environment in order to make NuttX aware of the presence of the application. Issuing the command make clean will remove the previous build components.

Now we can configure the environment by issuing the make menuconfig command. This will present the configuration menu system we saw in the previous post. Head over to the following menu items, disable the Hello, World example and enable the SSEM example:

  • Main menu -> Application Configuration -> “Hello, World!” C++ example
  • Main menu -> Application Configuration -> SSEM example

We should now be ready to build the system, so execute the command make -j to build the system. The build system should show output similar to the following if the changes to the system have been made correctly:

Create version.h
LN: platform/board to /Projects/NuttX/apps/platform/dummy
Register: ssem
Register: nsh
Register: sh
CPP:  /Projects/NuttX/nuttx/boards/arm/rp2040/raspberrypi-pico-w/scripts/raspberrypi-pico-flash.ld -> /Projects/NuttX/nuttx/boards/arm/rp2040/raspberrypi-pico-w/scripts/raspberrypi-pico-flash.ld
LD: nuttx
Generating: nuttx.uf2
tools/rp2040/elf2uf2 nuttx nuttx.uf2;

Time to deploy NuttX to the board and check to see if the application has built correctly. Copy the nuttx.uf2 file to the board and connect a serial console to the serial port. Press enter a few times until the nsh prompt appears and then type help. The new application should appear in the list of built in applications:

Builtin Apps:
    nsh   sh    ssem  

Executing the ssem application should now present something similar to the following (depending upon how you modified the helloxx_main application code):


nsh> ssem
ssem_main: Saying hello from the dynamically constructed instance
CHelloWorld::HelloWorld: Hello, World!!
ssem_main: Saying hello from the instance constructed on the stack
CHelloWorld::HelloWorld: Hello, World!!
nsh>

Conclusion

The process of creating a new application and having this included in the NuttX build system is not a difficult task, it simply means editing a few files.

One key step is the make clean step: omitting this from a system which already contains build artefacts may mean that the new application is not included in the configuration process and its files are therefore excluded from the build.

In the next post we will add SMP to the configuration.

Enabling NuttX C++ Support on PicoW

Monday, June 5th, 2023

Enable C++ compiler support

In the last article we looked at getting NuttX running on the Raspberry Pi Pico. The system deployed the default applications which are written in C. In this post we will look at enabling C++ support.

The first thing to do is to follow the instructions in the previous post to get the development system downloaded and deployed. Next up we can clean any object files that may be remaining from any previous builds:

make distclean
./tools/configure.sh -l raspberrypi-pico-w:usbnsh

Now we have a clean system we can enable C++ compiler support. This is done through the configuration system which is accessed by executing the command:

make menuconfig

This will show the main menu for the configuration system:

NuttX configuration system

Using the arrow and enter keys we need to navigate the menus and change the following options:

  • Main menu -> Library Routines -> Have C++ compiler
  • Main menu -> Application Configuration -> “Hello, World!” C++ example

Turning on the C++ compiler will show the following additional menu items:

Enable C++ compiler support

The 12.1 release of the PicoW configuration deploys some additional applications which are not needed, so we will turn these off in order to save memory. The two applications to be disabled are:

  • ostest
  • getprime

These applications can be turned off using the configuration system, the menu items can be found here:

  • Application Configuration -> Testing -> OS test example
  • Application Configuration -> Testing -> getprime example

The system should now be ready to be built using the make -j command. Once built, deploy the application to the board by copying the UF2 file to the board as discussed in the previous post. The board should reset automatically, so connect a serial console to the board and hit enter a few times to get to the nsh prompt. Typing help at the command prompt should show the built in commands plus the additional commands that have been deployed:

Builtin Apps:
    helloxx  nsh      sh

Finally we should verify that the C++ application has been built and deployed correctly by running the helloxx application. If all is well then the serial console should show the following output:

helloxx_main: Saying hello from the dynamically constructed instance
CHelloWorld::HelloWorld: Hello, World!!
helloxx_main: Saying hello from the instance constructed on the stack
CHelloWorld::HelloWorld: Hello, World!!

Conclusion

This is a small step forward: C++ support is now enabled, ready for developing a new C++ application.

In the next post we will add a new C++ user application to the system.

Deploying NuttX to the Raspberry Pi PicoW

Sunday, June 4th, 2023

Close up of Raspberry Pi PicoW

I am currently taking a few weeks’ break, and after a few days away from the computer I decided to do a little bit of research into the PicoW and NuttX. So, time for a quick attempt at getting the system up and running.

I have for the last few years been working on the Meadow F7 board for Wilderness Labs. This board aims to provide a full .NET development environment bringing embedded development to a wider audience. There are three main technologies underpinning this board:

I spend much of my time working with NuttX and FreeRTOS so as you can gather, I’m not worried about working with C or C++. This gives a few options for working with the board:

  • Native development using the Pico SDK
  • NuttX
  • FreeRTOS

I decided to start with a look at NuttX as I am currently spending a lot of time working with this RTOS.

Let’s Follow the Guide

The first thing to do is head over to the documentation for NuttX on the RP2040. The getting started guide is really good and I thought I would be able to use the copy of the Pico SDK I already had on my machine.

This is where I hit my first problem. NuttX requires a specific version of the Pico SDK and using this version would impact any other Pico development that I would undertake in the future. To overcome this I decided to take a second copy of the Pico SDK specifically for use when developing with NuttX.

Having said we would follow the guide, we are deviating already.

So first up we create a development environment for the PicoW (all instructions are for working on a Mac but should also work on Linux).

mkdir NuttX-PicoW
cd NuttX-PicoW
git clone -b 1.1.2 https://github.com/raspberrypi/pico-sdk.git
export PICO_SDK_PATH=$PWD/pico-sdk

Now we need to clone the NuttX OS and applications repositories:

git clone https://github.com/apache/nuttx.git nuttx
git clone https://github.com/apache/nuttx-apps.git apps
cd nuttx

Now we have the NuttX components cloned we need to make sure we are on the current releases. This is done using git to check out the correct branches. At the time of writing the current release is 12.1 so the following commands will check out the correct branches:

cd apps
git checkout releases/12.1
cd ../nuttx
git checkout releases/12.1

Now we have the correct branches we need to configure the system to work with the PicoW. There are several different boards and configurations available using the RP2040, not just the Pico. The following command will list all of the available board configurations for the Pico series of boards (not just the PicoW but all of the Pico boards):

./tools/configure.sh -L | grep pico

For NuttX 12.1 this returns the following list:

raspberrypi-pico:lcd1602
raspberrypi-pico:waveshare-lcd-1.3
raspberrypi-pico:usbmsc
raspberrypi-pico:nshsram
raspberrypi-pico:nsh-flash
raspberrypi-pico:nsh
raspberrypi-pico:audiopack
raspberrypi-pico:composite
raspberrypi-pico:st7735
raspberrypi-pico:displaypack
raspberrypi-pico:spisd
raspberrypi-pico:smp
raspberrypi-pico:enc28j60
raspberrypi-pico:ssd1306
raspberrypi-pico:usbnsh
raspberrypi-pico:waveshare-lcd-1.14
raspberrypi-pico-w:lcd1602
raspberrypi-pico-w:waveshare-lcd-1.3
raspberrypi-pico-w:usbmsc
raspberrypi-pico-w:nshsram
raspberrypi-pico-w:nsh-flash
raspberrypi-pico-w:nsh
raspberrypi-pico-w:audiopack
raspberrypi-pico-w:composite
raspberrypi-pico-w:st7735
raspberrypi-pico-w:displaypack
raspberrypi-pico-w:spisd
raspberrypi-pico-w:smp
raspberrypi-pico-w:enc28j60
raspberrypi-pico-w:ssd1306
raspberrypi-pico-w:usbnsh
raspberrypi-pico-w:waveshare-lcd-1.14
raspberrypi-pico-w:telnet

The configuration that is interesting here (to get started) is the raspberrypi-pico-w:usbnsh configuration. This will redirect the console output from NuttX over USB to the host machine using serial communication. This will allow a simple terminal application to connect to the board without the need for any additional hardware. So let’s configure NuttX and build the system.

First up, clean the system and set the configuration:

make distclean
./tools/configure.sh -l raspberrypi-pico-w:usbnsh

This should clear any previous configurations (distclean) and then set the board type and specific configuration to be built.

Now we should be able to build the system.

make

You can also tell make to use as many cores on your system as possible with the command make -j. After a short time the system will compile the NuttX OS and applications and create a UF2 file that can be uploaded to the PicoW.

The board can be programmed by starting the board in bootloader mode (press and hold the BOOTSEL button while plugging in the USB cable). The Pico should now present itself as a USB drive on the host computer. Drag and drop the nuttx.uf2 file onto the Raspberry Pi Pico drive on the host computer. The file will be copied to the board and the board will restart.
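On macOS the drag and drop can also be done from the terminal; the volume name below (RPI-RP2) is the one the RP2040 bootloader normally presents, but check what actually appears on your machine:

cp nuttx.uf2 /Volumes/RPI-RP2/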

Connecting to NuttX

Time to check that the board has been programmed correctly. Open your favourite terminal application, set your connection settings to 115200,n,8,1 and connect to the serial port presented by the PicoW. The device name will be something like usbmodem01. Connect to the PicoW and hit enter a couple of times. On doing this you should be presented with something like this:


NuttShell (NSH) NuttX-12.1.0
nsh>
nsh>

If you have got this far then you have successfully built NuttX and deployed the OS to the PicoW. Typing help in the serial console should reveal the list of deployed applications.

Conclusion

At this point we have NuttX running on the PicoW. This opens up some new channels for investigation:

  • Using both cores on NuttX
  • C++ application development

In the next post we will look at enabling C++ development in NuttX.

PiZero – Is it a Zero or Hero

Sunday, December 6th, 2015

On 26th November the Raspberry Pi foundation announced the release of the PiZero, a $5 computer based upon the Broadcom 2835 processor. Looks like it could be interesting…

First Impressions

The board itself is about 60% of the size of a credit card and has a reduced number of input/output options compared to the other Raspberry Pis in the range, but promises more processing power than the original Raspberry Pi. A quick look at the specifications reveals the following:

  • Broadcom BCM2835 processor (1GHz ARM11 core – 40% faster than the original Raspberry Pi)
  • 512MB of SDRAM
  • A micro-SD card slot
  • A mini-HDMI socket
  • Micro-USB sockets, one for data and one for power
  • An unpopulated 40-pin GPIO header (Identical to the Model A+/B+/2B)

The form factor and price combined are sure to make this ideal for the hobby/hacker market.

Extras

Some hobbyists are sure to have the additional connectors in their parts bucket, but not this one: I have every other cable you can think of but not the OTG or HDMI convertor! Over the years I have collected a fair number of cables and other spare parts but none are suitable for the PiZero. The micro-SD cards I have in abundance but the additional connectors needed to start work are another matter. In hindsight I should have ordered the additional cables when purchasing the PiZero from Pimoroni as these were on offer for £8 at the time of ordering.

Hindsight is a wonderful thing…

The PiZero arrived in short order, the accessories were another matter. It is not often that Amazon let me down but this was one of those times. A small pause while waiting for delivery.

Setting up the PiZero

Setting up the PiZero was as simple as the initial setup for the rest of the Raspberry Pi family. The easiest option is to use NOOBS. This just needs a formatted SD card; 16GB should do it.

Formatting the SD card and dropping NOOBS on the card took about 5 minutes; now time to finally power on the Raspberry Pi. But first a few connections: the HDMI was connected through a standard HDMI cable and a convertor to the mini-HDMI. My USB hub was connected through an ingenious USB OTG convertor. Keyboard, trackball and WiFi dongle were added to the hub. We’re now ready to go.

NOOBS starts to the desktop by default and so the mini-HDMI connection is essential. The board started reasonably quickly although users of the Raspberry Pi B+ etc will find the start up a little slow.

The first thing NOOBS did was to reorganise the file store; this takes about 30 minutes.

The next step was booting to the desktop, and the first things to do were to configure the system to boot to the command line and set up the network settings for the WiFi.

And then reboot.

One hour after starting the process I had a PiZero on the network connecting to the command line via SSH.

Conclusion

The PiZero created a fair amount of excitement at work but for what are probably the wrong reasons. The initial excitement about the $5 price tag soon starts to wane when you start buying the add-ons. Don’t get me wrong, this board has a purpose: it is a fine low cost hacker’s board. My concern is the number of people thinking that this is a low cost home computer for the family. If you want a low cost family Raspberry Pi then buy the Raspberry Pi 2B/B+ as these will be far easier to connect to the kit most families are likely to have in the home.

So where does the PiZero fit in? Well, I think this is really a good fit for the compute module market. The compute module was always going to be a pain to work with for the hobbyist and the full Raspberry Pi is too large for some applications (quadcopters etc). The PiZero fits nicely into this gap. Getting the additional components (OTG cables etc) is a one-off cost for the hobbyist embedding a PiZero into a project. This is where I think this board will fit nicely.

So, in conclusion, great for the hobbyist hacker, but for families wanting to get kids into computing I’d go for the Raspberry Pi 2B/B+ as it’s going to be far easier to get connected.

Visual Studio and the Raspberry Pi

Sunday, April 12th, 2015

The last few months have seen me turn my attention to the Raspberry Pi. The new Raspberry Pi 2 Model B offers some performance and memory improvements over the previous models and it has started to become more attractive as a development platform for embedded projects. Being a long term C/C++ developer I wanted an effective development environment for C++ development on the Raspberry Pi. One of the best and simplest development environments to set up on the PC is Microsoft Visual Studio. A recent article in The MagPi (Introducing the “game changing” Raspberry Pi 2, Ian McAlpine, Issue 30, page 23) commented that the author would be interested in exploring GPIO development using Visual Studio. The good news is that there is no need to wait for Windows 10; you can use Visual Studio to develop C++ applications which will run on the Raspberry Pi today.

Visual Studio

Microsoft have recently released an update to the free versions of Visual Studio. Previous releases required that developers download a different version of Visual Studio depending upon the development language and platform being used. The newer version, Microsoft Visual Studio Community 2013, removes this requirement and offers a single download and installer for a multitude of languages and environments.

Another key change is that Visual Studio Community 2013 supports the installation of add-ons. The previous Express editions did not allow this.

The Visual Studio Community 2013 system requirements (1.6 GHz processor, 1 GB RAM and 20 GB disc space) are high when compared to the Raspberry Pi, but Visual Studio will be running on the PC, where these resources are commonly available.

The first step is to download and install Microsoft Visual Studio Community 2013 on the PC. There is plenty of information on the Visual Studio web site and the process is simple and so I will not go into any further detail here.

VisualGDB

VisualGDB is where all of the magic happens. This add-on developed by SysProgs allows Visual Studio to be used with a wide variety of development boards and systems including the Raspberry Pi. The add-on is installed on the PC and this allows Visual Studio to communicate with the target platform. Supported platforms include:

  1. Raspberry Pi
  2. Beaglebone
  3. MSP430
  4. STM32

amongst others.

VisualGDB is not a free product but the developers do offer a 30-day free trial so you can evaluate the add-on for free. Installation of VisualGDB is as simple as the Visual Studio installation. Full instructions can be found on the downloads page and as such will not be covered here. The full product is not free but there are discounts available for educational users.

Compiling Options

VisualGDB offers two options for compiling your application:

  • Native compilation
  • Cross compilation

Native compilation allows the user to edit the source files in Visual Studio, save them on the Raspberry Pi and then invoke the compiler on the Raspberry Pi.

Cross compilation downloads a compiler onto your PC, this is all setup and configured for you. Your source files are edited on the PC and then the local compiler is used to generate the application executable. The compiled application is then copied to the Raspberry Pi for execution or debugging.

For small applications there are very few advantages in using a cross compiler but for a large application the advantages are obvious. Compiling applications is notoriously memory and processor intensive. Larger applications will therefore benefit from the greater resources available on the PC compared to those available on the Raspberry Pi.

Debugging

Visual Studio connects to the Raspberry Pi using GDB. To the programmer this is transparent and the experience is much the same as when debugging Windows applications on the PC.

The VisualGDB web site contains a number of tutorials all of which are excellent and easy to follow. The first and most basic connects your development environment to the Raspberry Pi and simply outputs the name of the Raspberry Pi to the remote console:

#include <stdio.h>
#include <unistd.h>

using namespace std;

int main(int argc, char *argv[])
{
    char szHost[128];
    gethostname(szHost, sizeof(szHost));
    printf("The host name is %s\n", szHost);
    return 0;
}

The first tutorial uses the C compiler on the Raspberry Pi to do this. The tutorial encourages you to place breakpoints in the code and execute the application in debug mode.

Interface Changes

One thing to note is that VisualGDB adds a new Project Properties entry to the project right-click menu. This menu item, VisualGDB Project Properties, can be seen at the top of the menu:

Visual Studio Project Properties Menu

Selecting this entry determines how your application is compiled and executed. It allows you to change the settings which were selected during the creation of the initial project and its connection to the Raspberry Pi.

Simple so far, no complex installation or configuration required. It just worked.

Talking to Hardware

There are a number of methods for accessing the GPIO pins on the Raspberry Pi. Each method has its own merits, but for this experiment the priority will be speed. This will be achieved by using the Gert and Dom example from the elinux web site as a template.

Starting with the standard headers:

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

A small modification is required for the Raspberry Pi 2:

#ifdef RPiB2
    #define BCM2708_PERI_BASE        0x3F000000
#else
    #define BCM2708_PERI_BASE        0x20000000
#endif
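RPiB2 is not defined anywhere by the system; it is simply a symbol chosen for this example, so it has to be supplied at compile time, either through the project's preprocessor settings or directly on the compiler command line. For example (the source file name here is illustrative):

g++ -DRPiB2 -O2 -o gpio_toggle main.cpp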

From this point on the application follows much the same as that on the elinux web site.

//
//  GPIO Controller.
//
#define GPIO_BASE   (BCM2708_PERI_BASE + 0x200000)

#define PAGE_SIZE   (4 * 1024)
#define BLOCK_SIZE  (4 * 1024)

int  mem_fd;
void *gpio_map;

// I/O access
volatile unsigned *gpio;

// GPIO setup macros. Always use INP_GPIO(x) before using OUT_GPIO(x) or SET_GPIO_ALT(x,y)
#define INP_GPIO(g) *(gpio + ((g) / 10)) &= ~(7 << (((g) % 10) * 3))
#define OUT_GPIO(g) *(gpio + ((g) / 10)) |=  (1 << (((g) % 10) * 3))
#define SET_GPIO_ALT(g,a) *(gpio + (((g) / 10))) |= (((a) <= 3 ? (a) + 4 : (a) == 4 ? 3 : 2) << (((g) % 10) * 3))

#define GPIO_SET *(gpio + 7)  // sets   bits which are 1 ignores bits which are 0
#define GPIO_CLR *(gpio + 10) // clears bits which are 1 ignores bits which are 0

#define GET_GPIO(g) (*(gpio + 13) & (1 << g)) // 0 if LOW, (1<<g) if HIGH

#define GPIO_PULL *(gpio + 37) // Pull up/pull down
#define GPIO_PULLCLK0 *(gpio + 38) // Pull up/pull down clock

//
// Set up a memory regions to access GPIO
//
void setup_io()
{
    if ((mem_fd = open("/dev/mem", O_RDWR | O_SYNC)) < 0)
    {
        printf("can't open /dev/mem \n");
        exit(-1);
    }

    /* mmap GPIO */
    gpio_map = mmap(
        NULL,                   // Any address in our space will do
        BLOCK_SIZE,             // Map length
        PROT_READ | PROT_WRITE, // Enable reading & writing to mapped memory
        MAP_SHARED,             // Shared with other processes
        mem_fd,                 // File to map
        GPIO_BASE               // Offset to GPIO peripheral
        );

    close(mem_fd);              // No need to keep mem_fd open after mmap

    if (gpio_map == MAP_FAILED)
    {
        printf("mmap error %d\n", (int) gpio_map);  // errno also set!
        exit(-1);
    }

    // Always use volatile pointer!
    gpio = (volatile unsigned *) gpio_map;
}

The main application requires modification in order to repeatedly toggle the selected GPIO pin, in this case GPIO3.

int main(int argc, char **argv)
{
    // Set up gpio pointer for direct register access
    setup_io();

    // Set GPIO pin 3 to output
    INP_GPIO(3); // must use INP_GPIO before we can use OUT_GPIO
    OUT_GPIO(3);

    while (1)
    {
        GPIO_SET = 1 << 3;
        GPIO_CLR = 1 << 3;
    }

    return 0;
}

Running this application and connecting an oscilloscope generates a 41 MHz signal with approximately a 50% duty cycle.

Conclusion

Visual Studio is not the only option for developing applications on the Raspberry Pi, you can use a simple text editor on the Raspberry Pi or for something more sophisticated, you can set up Eclipse on the PC and connect to the Raspberry Pi using GDB. Both of these methods are free and have their own merits.

The text editor approach is basically already there on the Raspberry Pi as supplied. The disadvantage is that debugging is not integrated and can be difficult.

Eclipse is a free development environment but it can be difficult to setup although tutorials are available.

As we have seen, Visual Studio is free but requires VisualGDB to be installed and VisualGDB is not a free product although educational discounts are available. On the plus side, VisualGDB is simple to install and configure. It is also supplied with a set of project templates for the Raspberry Pi.