Getting Started with Rust – Ownership, Borrowing and References

October 29th, 2025 • Rust, Software Development

Ownership Banner

In the words of Connor MacLeod from Highlander

There can be only one.

Every value in a Rust application has one owner and only one owner. When the owner of a value goes out of scope, the value is dropped and is no longer valid.

The Rust Programming Language (Chapters 4 and 5)

These chapters cover the following topics:

  • Ownership
  • References and Borrowing
  • Slices
  • Structs and Methods

Ownership, references and borrowing determine how an object can be accessed and potentially modified. The strict rules are designed to reduce the possibility of common C/C++ issues such as use after free and general pointer access violations.

Structs and methods are the start of the journey into object oriented programming.

Scopes

Scoping rules are much the same as in C/C++ and so will feel familiar. Any pair of braces opens and closes a scope, so a function body introduces a new scope, and a new scope can also be created with a pair of braces inside an existing one.
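A minimal sketch of the scope behaviour described above (the variable names are illustrative):

```rust
fn main() {
    let outer = String::from("outer value");
    {
        // The opening brace starts a new scope.
        let inner = String::from("inner value");
        println!("{} / {}", outer, inner); // Both values are in scope here.
    } // `inner` goes out of scope here and its value is dropped.

    // println!("{}", inner); // Uncommenting this fails: `inner` is out of scope.
    println!("{}", outer); // `outer` is still valid.
}
```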

Ownership, References and Borrowing

As Connor MacLeod said, “There can be only one” and in this case, there can be only one owner of a value.

Calling a function can change the ownership of a variable depending upon the type of the variable. Some types implement the Copy trait, which means a copy of the value is made on the stack and passed to the function. Types that do not implement the Copy trait have their ownership moved into the function, and the move means the original variable is no longer valid after the call.

Consider the following code:

fn print_integer(x: i32) {
    println!("Integer value: {}", x);
}

fn print_string(s: String) {
    println!("String value: {}", s);
}

fn main() {
    let x: i32 = 42;
    println!("Original integer: {}", x);
    print_integer(x);
    println!("Original integer after function call: {}", x); // This line works fine since integers implement the Copy trait.

    let s: String = String::from("Hello, world!");
    println!("Original string: {}", s);
    print_string(s);
    println!("Original string after function call: {}", s); // This line will cause a compile-time error due to ownership rules.
}

Compiling the above code generates the following error output:

error[E0382]: borrow of moved value: `s`
  --> src/main.rs:24:57
   |
21 |     let s = String::from("Hello, world!");
   |         - move occurs because `s` has type `String`, which does not implement the `Copy` trait
22 |     println!("Original string: {}", s);
23 |     print_string(s);
   |                  - value moved here
24 |     println!("Original string after function call: {}", s); // This line will cause a compile-time error due to owne...
   |                                                         ^ value borrowed here after move
   |
note: consider changing this parameter type in function `print_string` to borrow instead if owning the value isn't necessary
  --> src/main.rs:16:20
   |
16 | fn print_string(s: String) {
   |    ------------    ^^^^^^ this parameter takes ownership of the value
   |    |
   |    in this function
   = note: this error originates in the macro `$crate::format_args_nl` which comes from the expansion of the macro `println` (in Nightly builds, run with -Z macro-backtrace for more info)
help: consider cloning the value if the performance cost is acceptable
   |
23 |     print_string(s.clone());
   |                   ++++++++

For more information about this error, try `rustc --explain E0382`.
error: could not compile `hello_world` (bin "hello_world") due to 1 previous error

An awful lot of output to digest.

Calls to print_integer work because i32 is a simple type that implements the Copy trait. A copy of the value in x is made on the stack, so print_integer operates on the copy and not on the variable x defined in main.

print_string works differently as it operates on a complex value that does not implement the Copy trait; complex values are usually allocated on the heap. Calling print_string moves ownership of s into the print_string function, the String is dropped at the end of the function, and any later use of s in main is invalid.

References

The compiler suggested one option for the problem in the above code: clone the string and pass the clone to the print_string function. Another solution is to use references. A reference allows a function to use a value without taking ownership, so the value is not dropped at the end of the function.

This is known as borrowing.

The above code can be modified to use references:

fn print_string(s: &String) {
    println!("String value: {}", s);
}

fn main() {
    let s: String = String::from("Hello, world!");
    println!("Original string: {}", s);
    print_string(&s);
    println!("Original string after function call: {}", s); // This line now compiles because print_string only borrows s.
}

Running this code with cargo run generates the following output:

Original string: Hello, world!
String value: Hello, world!
Original string after function call: Hello, world!

No more compiler errors.

Function parameters can also be declared as mutable references, meaning that the original variable can be modified by the function. Modifying the above code to the following:

fn print_string(s: &mut String) {
    println!("String value: {}", s);
    s.push_str(", world!");
}

fn main() {
    let mut s = String::from("Hello");
    println!("Original string: {}", s);
    print_string(&mut s);
    println!("Original string after function call: {}", s);
}

Running the application results in the following output:

Original string: Hello
String value: Hello
Original string after function call: Hello, world!

print_string is now able to mutably borrow the parameter and modify the value.

I suspect that a lot of time (initially) will be spent fighting the borrow checker.
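As a sketch of the kind of rule those fights involve (an illustration, not from the book excerpt above): any number of immutable borrows may exist at once, but a mutable borrow must be exclusive.

```rust
fn main() {
    let mut s = String::from("Hello");

    let r1 = &s; // Multiple immutable borrows are fine...
    let r2 = &s;
    println!("{} and {}", r1, r2); // ...last use of r1 and r2.

    // This mutable borrow is accepted only because r1 and r2 are no
    // longer used after this point.
    let r3 = &mut s;
    r3.push_str(", world!");
    println!("{}", r3);

    // let r4 = &s; // Uncommenting this pair is rejected: `s` cannot be
    // println!("{} and {}", r3, r4); // borrowed immutably while r3 is in use.
}
```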

Structs and Methods

Structs provide the ability to collect related data items together and are similar to structs in C/C++. Methods are functions associated with a struct, giving us the basics of object oriented programming. The methods are implemented against a type:

struct Rectangle {
    width: u32,
    height: u32,
}

impl Rectangle {
    fn area(&self) -> u32 {
        self.width * self.height
    }
}

In the above code, the area method is implemented against the Rectangle structure.
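Putting the struct and its method together in runnable form (the dimensions are arbitrary):

```rust
struct Rectangle {
    width: u32,
    height: u32,
}

impl Rectangle {
    // `&self` borrows the instance, so calling `area` does not take ownership.
    fn area(&self) -> u32 {
        self.width * self.height
    }
}

fn main() {
    let rectangle = Rectangle { width: 30, height: 50 };
    println!("The area of the rectangle is {}.", rectangle.area()); // Prints 1500.
}
```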

Language Highlights

One stand out feature with structs is the ability to easily copy unchanged fields from one structure to a new version of the same type of structure. This is best illustrated with an example.

#[derive(Debug)]
struct Person {
    name: String,
    house_number: u16,
    street: String,         // Rest of address fields omitted for brevity.
    mobile_number: String
}

fn update_mobile(person: Person, new_mobile_number: String) -> Person {
    Person {
        mobile_number: new_mobile_number,
        ..person
    }
}

fn main() {
    let person = Person {
        name: String::from("Fred Smith"),
        house_number: 87,
        street: String::from("Main Street"),
        mobile_number: String::from("+44 7777 777777")
    };
    println!("Before update: {:?}", person);
    let updated_person = update_mobile(person, String::from("+44 8888 888888"));
    println!("After update: {:?}", updated_person);
}

Running this application with cargo run generates the following output:

Before update: Person { name: "Fred Smith", house_number: 87, street: "Main Street", mobile_number: "+44 7777 777777" }
After update: Person { name: "Fred Smith", house_number: 87, street: "Main Street", mobile_number: "+44 8888 888888" }

A contrived example maybe, but it illustrates the use of ..person to copy the unchanged fields into the new Person structure.

Another nice feature is the field init shorthand for structure members. If a field has the same name as the variable being used to initialise it, the field name can be omitted. For instance, the update_mobile function above could be modified to the following:

fn update_mobile(person: Person, mobile_number: String) -> Person {
    Person {
        mobile_number,
        ..person
    }
}

Note the change to the parameter name to mobile_number to match the field name in the Person struct.

Conclusion

The borrow checker is going to be frustrating for a while, with the benefit that code which compiles is much less likely to contain memory safety bugs. The borrow checker also aids the safety of multithreaded applications.

New Links

Came across a new tool for code linting: Clippy.

Consider the following function:

fn print_integer_and_increment(x: &mut i32) {
    println!("Integer value: {}", x);
    *x = *x + 1;
}

This code compiles without error and works as expected. Running the command cargo clippy to lint the code results in the following output:

warning: manual implementation of an assign operation
  --> src/main.rs:18:5
   |
18 |     *x = *x + 1;
   |     ^^^^^^^^^^^ help: replace it with: `*x += 1`
   |
   = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#assign_op_pattern
   = note: `#[warn(clippy::assign_op_pattern)]` on by default

warning: `hello_world` (bin "hello_world") generated 1 warning (run `cargo clippy --fix --bin "hello_world"` to apply 1 suggestion)
    Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.11s

Running the command cargo clippy --fix --bin "hello_world" --allow-dirty makes the suggested change automatically.

Next Up

Enums, Pattern Matching, Modules and Project structure.

Getting Started with Rust – Variables, Functions and Loops

October 15th, 2025 • Rust, Software Development

Getting Started with Rust (Week One) Banner

With the tools installed it is time to start learning some language basics:

  • Variables
  • Functions
  • Control structures (if and loops)

The above are covered in the first three chapters of The Rust Programming Language.

Installing the Tools (Update)

Installation of the tools went pretty smoothly and took only a few hours. The Rust in Visual Studio Code page proved to be a nice addition to the links in the blog post.

The page provides information on:

  • Intellisense
  • Linting
  • Refactoring
  • Debugging

plus more.

The Rust Programming Language (Chapters 1 through 3)

The initial chapters of The Rust Programming Language cover the basics of Rust:

  • Variables
  • Immutability and mutability
  • Functions
  • Control flow

It was interesting to discover that Rust has a greater degree of distinction between expressions and statements:

Function bodies are made up of a series of statements optionally ending in an expression. So far, the functions we’ve covered haven’t included an ending expression, but you have seen an expression as part of a statement. Because Rust is an expression-based language, this is an important distinction to understand. Other languages don’t have the same distinctions, so let’s look at what statements and expressions are and how their differences affect the bodies of functions.

This difference means that there is no equivalent to the following C code:

int x, y;
x = y = 1024;

The basic rule is that statements perform actions and do not produce a value, while expressions evaluate to a result. A function that returns a result must therefore end with an expression. As a general rule a statement ends with a semicolon and an expression does not.
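One consequence of this rule is that a block is itself an expression, with its value given by the final, semicolon-free line. A small sketch:

```rust
fn main() {
    // The block on the right-hand side is an expression; `x + 1` has no
    // trailing semicolon, so it becomes the value of the block.
    let y = {
        let x = 3;
        x + 1
    };
    println!("The value of y is: {}", y); // Prints 4.
}
```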

Now consider this simple (and admittedly contrived) example:

fn add_or_multiply(value : i32) -> i32 {
    if value > 5 {
        return value * 2;
    }
    //  Maybe do some other stuff...
    value + 1
}

fn main() {
    for number in 0..10 {
        let result = add_or_multiply(number);
        println!("Number: {number}, Result: {result}");
    }
}

Running the above generates the following output:

     Running `target/debug/hello_world`
Number: 0, Result: 1
Number: 1, Result: 2
Number: 2, Result: 3
Number: 3, Result: 4
Number: 4, Result: 5
Number: 5, Result: 6
Number: 6, Result: 12
Number: 7, Result: 14
Number: 8, Result: 16
Number: 9, Result: 18

The line return value * 2; can be changed to:

return value * 2

Note the semicolon has been removed. Running the resultant application also generates the same output. Further investigation is required to determine why this works and also what is considered best practice amongst the Rust community.

Language Highlights

From a C/C++ programmer's perspective, two Rust constructs are appealing due to their convenience and ability to make code tidier:

  • Using if statements in assignments
  • Labelled loops

The first construct is alien to the C/C++ developer but should be familiar to Python developers: because if is an expression in Rust, it can be used on the right-hand side of an assignment:

let x = if y <= MAXIMUM { MAXIMUM } else { y };

This means the trivial function add_or_multiply in the above application could have been written as:

const MAXIMUM: i32  = 5;

fn add_or_multiply(value : i32) -> i32 {
    let result = if value > MAXIMUM { value * 2 } else { value + 1 };
    result
}

Nice little feature that can make code more compact and readable.

The second nice feature is the ability to label loops, which allows an inner loop to break out of an enclosing outer loop.

'outer_loop: loop {
    //  Setup for the inner loop...
    loop {
        if remaining == MAXIMUM {
            break;
        }
        if (count % 2) == 0 {
            break 'outer_loop;  // Exit both loops when count is even.
        }
        // More inner loop processing...
    }
    // More outer loop processing...
}

The inner loop may be a contrived version of a while loop but it serves to illustrate the language feature. The break 'outer_loop allows the inner loop to terminate both loops directly, without a flag variable and unnecessary nested if statements.
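Labels also work with continue, which jumps straight to the next iteration of the labelled loop. A runnable sketch (the bounds are arbitrary):

```rust
fn main() {
    let mut sum = 0;
    'outer_loop: for count in 0..10 {
        // Inner loop standing in for some nested processing.
        loop {
            if (count % 2) == 0 {
                continue 'outer_loop; // Skip even values of count entirely.
            }
            break; // Leave only the inner loop.
        }
        sum += count; // Reached for odd values of count only.
    }
    println!("Sum of odd numbers below 10: {}", sum); // Prints 25.
}
```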

Conclusion

A slow start but some interesting language features:

  • Immutability by default
  • Using if statements in assignments
  • Labelled loops

Next up is ownership.

Rust – Installing the Tools

October 5th, 2025 • Rust, Software Development

Bacon running in a terminal

This week was a gentle start with Rust just installing the toolchain and some browsing for possibly useful tools.

Installing Rust

First step is to install the compiler, so let’s head over to the Getting Started page. According to the page we just need to execute the command:

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

Which generates the following output:

info: downloading installer

Welcome to Rust!

This will download and install the official compiler for the Rust
programming language, and its package manager, Cargo.

Rustup metadata and toolchains will be installed into the Rustup
home directory, located at:

  /home/tester/.rustup

This can be modified with the RUSTUP_HOME environment variable.

The Cargo home directory is located at:

  /home/tester/.cargo

This can be modified with the CARGO_HOME environment variable.

The cargo, rustc, rustup and other commands will be added to
Cargo's bin directory, located at:

  /home/tester/.cargo/bin

This path will then be added to your PATH environment variable by
modifying the profile files located at:

  /home/tester/.profile
  /home/tester/.bashrc
  /home/tester/.zshenv

You can uninstall at any time with rustup self uninstall and
these changes will be reverted.

Current installation options:


   default host triple: aarch64-unknown-linux-gnu
     default toolchain: stable (default)
               profile: default
  modify PATH variable: yes

1) Proceed with standard installation (default - just press enter)
2) Customize installation
3) Cancel installation
>

Let’s go with option 1, the default install:

info: profile set to 'default'
info: default host triple is aarch64-unknown-linux-gnu
info: syncing channel updates for 'stable-aarch64-unknown-linux-gnu'
info: latest update on 2025-09-18, rust version 1.90.0 (1159e78c4 2025-09-14)
info: downloading component 'cargo'
info: downloading component 'clippy'
info: downloading component 'rust-docs'
info: downloading component 'rust-std'
info: downloading component 'rustc'
info: downloading component 'rustfmt'
info: installing component 'cargo'
info: installing component 'clippy'
info: installing component 'rust-docs'
 20.5 MiB /  20.5 MiB (100 %)   8.3 MiB/s in  2s         
info: installing component 'rust-std'
 29.1 MiB /  29.1 MiB (100 %)  14.0 MiB/s in  2s         
info: installing component 'rustc'
 58.5 MiB /  58.5 MiB (100 %)  14.1 MiB/s in  4s         
info: installing component 'rustfmt'
info: default toolchain set to 'stable-aarch64-unknown-linux-gnu'

  stable-aarch64-unknown-linux-gnu installed - rustc 1.90.0 (1159e78c4 2025-09-14)


Rust is installed now. Great!

To get started you may need to restart your current shell.
This would reload your PATH environment variable to include
Cargo's bin directory ($HOME/.cargo/bin).

To configure your current shell, you need to source
the corresponding env file under $HOME/.cargo.

This is usually done by running one of the following (note the leading DOT):
. "$HOME/.cargo/env"            # For sh/bash/zsh/ash/dash/pdksh
source "$HOME/.cargo/env.fish"  # For fish
source $"($nu.home-path)/.cargo/env.nu"  # For nushell

Following the instructions to add rust to the PATH:

. "$HOME/.cargo/env"

Checking that the compiler has been installed:

$ rustup --version
rustup 1.28.2 (e4f3ad6f8 2025-04-28)

First Application – Hello, World

The classic way to test a new toolchain is to write Hello, world. The cargo build system has a simple way to do this:

cargo new hello_world

    Creating binary (application) `hello_world` package
note: see more `Cargo.toml` keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

cargo should have created a new directory with the name hello_world along with any necessary support files for a Rust project, including default git files.

cd hello_world
ls -al
total 16
drwxr-xr-x 5 tester tester 4.0K Oct  1 10:11 .
drwxr-xr-x 3 tester tester 4.0K Oct  1 10:10 ..
-rw-r--r-- 1 tester tester   82 Oct  1 10:10 Cargo.toml
drwxr-xr-x 6 tester tester 4.0K Oct  1 10:11 .git
-rw-r--r-- 1 tester tester    8 Oct  1 10:10 .gitignore
drwxr-xr-x 2 tester tester 4.0K Oct  1 10:10 src

The source file for the project is in the src directory with the entry point to the application in the src/main.rs file:

cat src/main.rs

fn main() {
    println!("Hello, world!");
}

The application can be run with the cargo run command:

cargo run

   Compiling hello_world v0.1.0 (/home/tester/Rust101/hello_world)
    Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.24s
     Running `target/debug/hello_world`
Hello, world!

Supplementary Tools

A little web browsing highlighted a couple of tools that might prove useful:

  • Bacon – Background Analyser
  • Visual Studio Code Extension – rust-analyzer

Let’s install these tools.

Bacon – Background Analyser

Bacon runs in a terminal and scans the file system for changes. It then runs cargo to check the project source code for errors, which are displayed in the terminal. This gives the developer fast feedback on any issues throughout the development cycle.

Installation is simple:

cargo install --locked bacon

To run the application simply open a new terminal and run the command:

bacon --all-features

Visual Studio Code: rust-analyzer

Rust-analyzer is a popular extension for Visual Studio Code providing features such as:

  • Syntax highlighting
  • Code completion
  • Hints when hovering over variables, types etc.
  • Goto definition

The extension can be installed from the Visual Studio Marketplace or through Visual Studio Code itself.

Project

The best way to learn a new language is to reproduce an application / project that you have developed before. This makes writing the application a little simpler as the problem is already understood; the only new element in the project is the language.

  • Command line application
  • Process the command line
  • Using a directory passed through the command line, generate a list of all files in the directory
  • If a directory is found, add it to a list and recurse through the directory structure, listing all the files found

Short, simple problem maybe but it should be enough to get started.
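A first sketch of the core of the project using only the standard library (error handling is minimal and the function name is my own; this is a starting point, not a polished implementation):

```rust
use std::env;
use std::fs;
use std::io;
use std::path::Path;

/// Recursively print the path of every file found below `directory`.
fn list_files(directory: &Path) -> io::Result<()> {
    for entry in fs::read_dir(directory)? {
        let path = entry?.path();
        if path.is_dir() {
            list_files(&path)?; // Directory found, recurse into it.
        } else {
            println!("{}", path.display());
        }
    }
    Ok(())
}

fn main() -> io::Result<()> {
    // Use the directory passed on the command line, defaulting to the
    // current directory when no argument is given.
    let directory = env::args().nth(1).unwrap_or_else(|| String::from("."));
    list_files(Path::new(&directory))
}
```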

Getting Started with Rust

October 1st, 2025 • ESP32, Pico, Raspberry Pi, Rust

Rusty Bolts

Last year saw the push towards using safer programming languages: languages such as C# and Rust that help the developer avoid mistakes common in C and C++ (although there is a movement to make C++ safer to use).

It is time to take a look at Rust as a language and, more specifically, how easy it is to develop code that will run on a microcontroller.

General Rust

The usual place to start is The Rust Programming Language (Rust 2021) book.

There are also a number of online resources:

Only time will tell how good these resources are.

Microcontroller Specific

Initial learning will be laptop based as it will be easier to gain familiarity with the language. It will certainly be quicker than the usual develop, deploy and debug cycle that slows down firmware development.

The eventual aim is to move over to development for microcontrollers, likely the ESP32 variants or Raspberry Pi Pico boards. With this in mind the following Rust and microcontroller resources look like they will be useful:

The microcontroller part of the journey will look at using the ESP32-C6 microcontroller as this is on the supported list for both the bare metal and IDF versions of the HAL (see below).

ESP Hardware Abstraction Layer (HAL)

Espressif have released two versions of the HAL: one supporting the IDF framework and one for bare metal applications:

Installation and use is covered in the Rust on the ESP Book.

Let’s get started and see where this takes us.

Linking Local and Remote Repositories

September 2nd, 2025 • Aide-memoir, Software Development

GitHub Repositories Header

Quick memo to self as I always forget how to connect a remote repository to an existing local one.

The scenario here is an idea has been worked on locally and git is being used to track progress. The project eventually reaches the point where it might be useful to others. The most obvious way to do this is to use GitHub to publish the project.

Local Repository

Creating a local repository is usually performed using a git init command:

git init .

which will result in output like:

Initialized empty Git repository in /Users/username/GitHub/ProjectName/.git/

From here on in it is a case of following the normal development methodology for the project in question.

Create the Remote Repository

Next up to create the remote repository:

  • Login to GitHub using your credentials
  • Click on the Repositories link
  • Click on the New button
  • Fill in the details for the repository

At this point we have a new remote repository.

Linking the Local and Remote Repositories

At this point we should have two repositories:

  • Local repository with some work and the associated history
  • Remote repository with a small amount of content (readme, maybe licence etc.)

The two can be linked as follows:

git remote add origin https://github.com/NevynUK/TemporaryTest.git
git branch -M main
git push -u origin main

Conclusion

The two repositories should now be linked and the local content should have been synchronised with the remote repository.

No revelation here but something that is often forgotten.

Installing an Email Server

May 31st, 2025 • Aide-memoir, Raspberry Pi, Software Development

Mailhog Banner

The last post looked at Installing Mosquitto MQTT Server in a test environment, namely a Raspberry Pi. This post looks at adding another server to the same Raspberry Pi environment: an email server.

Lightweight Mail Server

The requirement is to provide a portable SMTP email server to accept email from a test application. It is also desirable to have a mechanism for checking that the email has been sent correctly. A quick search turned up a docker container for Mailhog, a simple SMTP server that also exposes a web interface showing the messages it has received. Let’s give it a go.

Running the server is simple; execute the command:

docker run --platform linux/amd64 -d -p 1025:1025 -p 8025:8025 --rm --name MailhogEmailServer mailhog/mailhog

This should pull the image and start the server.

So far everything looks good. Starting a web browser on the local machine and navigating to localhost:8025 shows a simple web mail page.

Next step, move to a Raspberry Pi and try to start the server and check the service using the same docker command as used above:

docker run --platform linux/amd64 -d -p 1025:1025 -p 8025:8025 --rm --name MailhogEmailServer mailhog/mailhog

This command above results in the following output:

WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
d28ed1f93bdcf656f27843014f37b65ab12b5ad510f8d50faec96a57fc090056

This makes sense: the image is an amd64 image and docker is running on a Raspberry Pi, so arm64. What appears odd is that the original test was performed on an Apple Silicon Mac, which is also arm64 (presumably Docker Desktop on the Mac was emulating the amd64 image). At first glance it does look like the image may be running as we have an ID for the container. Checking the status of the containers with:

docker ps -a

Shows the following information about the container:

CONTAINER ID   IMAGE             COMMAND     CREATED         STATUS                       PORTS     NAMES
d28ed1f93bdc   mailhog/mailhog   "MailHog"   5 seconds ago   Exited (255) 4 seconds ago             mailhog

Looks like we have a problem as the container is not running correctly.

Solution

The solution is to write a new Dockerfile and build a new image. The new Dockerfile is based upon the one on Docker Hub:

FROM --platform=${BUILDPLATFORM} golang:1.18-alpine AS builder

RUN set -x \
  && apk add --update git musl-dev gcc \
  && GOPATH=/tmp/gocode go install github.com/mailhog/MailHog@latest

FROM --platform=${BUILDPLATFORM} alpine:latest
WORKDIR /bin
COPY --from=builder /tmp/gocode/bin/MailHog /bin/MailHog
EXPOSE 1025 8025
ENTRYPOINT ["MailHog"]

First build the image, tagging it mailhog-local, and then run a container from the new image:

docker build -t mailhog-local .
docker run -d -p 1025:1025 -p 8025:8025 --rm --name MailhogEmailServer mailhog-local

No errors, time to check the container status:

docker ps -a

shows:

CONTAINER ID   IMAGE           COMMAND     CREATED         STATUS         PORTS                                            NAMES
dc9e943b7097   mailhog-local   "MailHog"   4 seconds ago   Up 3 seconds   0.0.0.0:1025->1025/tcp, 0.0.0.0:8025->8025/tcp   MailhogEmailServer

Time for some testing.

Testing

Two features of the server need to be checked out:

  • Ability to receive email from a client
  • Verify that the email has been received by the server

The status of the email server can be checked using the web interface by browsing to testserver500.local:8025 (replace testserver500.local with the address/name of your Raspberry Pi). This time we see the simple web interface:

Mailhog webmail interface showing an empty mailbox

Sending email can be tested using telnet, issuing the command:

telnet raspberrypi.local 1025

should result in something like the following response:

Trying fe80::........
Connected to raspberrypi.local.
Escape character is '^]'.
220 mailhog.example ESMTP MailHog

A basic email message can be put together using the following command/response sequence:

Trying 172.17.0.1...
Connected to testserver500.local.
Escape character is '^]'.
220 mailhog.example ESMTP MailHog
HELO testserver500.local
250 Hello testserver500.local
mail from:<tester@testserver500.local> 
250 Sender tester@testserver500.local ok
rcpt to:<user@testserver500.local>
250 Recipient user@testserver500.local ok
data
354 End data with <CR><LF>.<CR><LF>
From: "Tester" <tester@testserver500.local>
To: "User" <user@testserver500.local>
Date: Thu, 1 May 2025 09:45:01 +0100
Subject: Testing the local mail server

Hello,

This is a quick message to test the local SMTP server (Mailhog) running on a Raspberry Pi.

Regards,
Tester
.
250 Ok: queued as mlsU6a9iplWWgg1RILcbGWP6NphswR26_64Pdf98WBo=@mailhog.example
quit
221 Bye
Connection closed by foreign host.

Over to the web interface to see if the email has been received correctly:

Mailhog webmail interface showing one email in the mailbox

And clicking on the message we see the message contents…

Mailhog webmail showing message

The last step is to verify that the container can be stopped and restarted, to allow the system to be automated. The following commands stop and remove the container, allowing the container name MailhogEmailServer to be reused:

docker container stop MailhogEmailServer
docker rm MailhogEmailServer

Following this, the docker run command was executed once more and the container restarted as expected.

Conclusion

There is probably a better way to solve this problem using native docker command line arguments, but a lack of docker knowledge hindered any investigation. However, the solution presented works and allows testing to continue.

The final step will be to automate the deployment of this solution using Ansible.

Installing Mosquitto MQTT Server

April 13th, 2025 • aide-memoire, Software Development, Tips

Steampunk Mosquito

Recently I came across a customer problem which needed access to an MQTT server. Here is how it went.

Requirement

The aim is to provide a cross platform way of providing an MQTT server for testing with the following characteristics:

  • Cross platform, running on Mac and Raspberry Pi
  • Persistence of data is not necessary as the system will be started and stopped as needed
  • Running on a local network with no access to the Internet (reduced need for security)
  • Simple to configure across platforms

It should be stressed that this is a disposable test environment running on a local network. The system will not be exposed to the Internet, so security and robustness are not going to be an issue.

Looking at the above it is apparent that a full installation should not be required; in fact it may be overkill, as would a dedicated server. This use case points in the direction of a docker container with some shared configuration.

Installation and Configuration

Mosquitto is a lightweight MQTT server available as a native application for the target platforms and also as a docker image. Using docker ensures a consistent deployment across multiple platforms, and the Eclipse image is the one used here.

The docker image can be installed with the command:

docker pull eclipse-mosquitto

The download only takes a few seconds with a moderate speed connection.

The Mosquitto client tools are also required for testing. These are installed with the following commands, for Raspberry Pi:

sudo apt update && sudo apt upgrade
sudo apt install mosquitto-clients

and for Mac (assumes Homebrew is already installed, if not, Homebrew installation instructions can be found here):

brew install mosquitto

Note that on a Mac this installs both the client and the server components although we will only need the client applications for testing the system setup.

The Eclipse image page for the docker container describes a simple directory structure for configuration:

/mosquitto/config
/mosquitto/data
/mosquitto/log

This can be replicated by creating a local directory structure and then mapping this when we start the docker container. The local structure will look like this (note the missing / at the start of the directory names):

mosquitto/config
mosquitto/data
mosquitto/log

Time to run and test the server.

Testing

The client tools will allow the installation to be tested. The server will be installed on a Raspberry Pi with the name testserver500.local. The server is started by logging on to the Raspberry Pi and executing the following command:

docker run -it -p 1883:1883 -v "$PWD/mosquitto/config:/mosquitto/config" -v "$PWD/mosquitto/data:/mosquitto/data" -v "$PWD/mosquitto/log:/mosquitto/log" eclipse-mosquitto

This will start the downloaded image and register the config, data and log directories with the image.

First problem: we also need a configuration file (mosquitto.conf) in the mosquitto/config directory. A quick scan suggests that the following is all that is required:

persistence true
persistence_location /mosquitto/data/
log_dest file /mosquitto/log/mosquitto.log

Two open shells are required for testing, in addition to the one running the server: one for receiving MQTT messages (subscribing) and one for sending MQTT messages (publishing). In the first shell we subscribe to notifications with the command:

mosquitto_sub -h testserver500.local -t test/debug

Second problem: we get the error message Error: Connection refused. Some googling suggests that anonymous login also needs to be enabled. The configuration file needs a couple more options added:

persistence true
persistence_location /mosquitto/data/
log_dest file /mosquitto/log/mosquitto.log
allow_anonymous true
listener 1883 0.0.0.0
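For repeat runs, the local directory structure and configuration file can be generated with a short script. A minimal Python sketch (the relative mosquitto/ paths mirror the local layout described above; the options are for an isolated test network only, not production):

```python
from pathlib import Path

# Options suitable only for an isolated test network -- not for production.
MOSQUITTO_CONF = """\
persistence true
persistence_location /mosquitto/data/
log_dest file /mosquitto/log/mosquitto.log
allow_anonymous true
listener 1883 0.0.0.0
"""

# Recreate the local directory structure that is mapped into the container.
for sub in ("config", "data", "log"):
    Path("mosquitto", sub).mkdir(parents=True, exist_ok=True)

# Write the configuration file into the mapped config directory.
Path("mosquitto/config/mosquitto.conf").write_text(MOSQUITTO_CONF)
```

Running this once before the docker run command ensures the container always starts with the same known-good configuration.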

Stopping the docker container and then restarting it applies the changes. Subscribing again is successful.

Time to try and publish some data. In a third terminal enter the command:

mosquitto_pub -h testserver500.local -t test/debug -m "Hello, world" -d

The subscription terminal should now display the message just sent:

clusteruser@TestServer500:~ $ mosquitto_sub -h testserver500.local -t test/debug
Hello, world

Success!

One final test, publish / subscribe from a remote machine.

Conclusion

Using a docker image makes it possible to use a standardised setup on any platform, including Windows (although that is not in scope here). The only additional setup required is to supply any specialised configuration and, should data retention be required, data and log directories.

It should also be remembered that the configuration here is really only suitable for a low risk network (i.e. isolated testing systems) and should not be used in production.

Cheap Yellow Display SPI

March 16th, 2025 • Electronics3 Comments »

Cheap Yellow Display Banner

Cheap Yellow Displays look to be an ideal way of experimenting with the ESP32 ecosystem. They are small, simple and equipped with a number of built-in sensors and modules. They sounded ideal for a project to monitor a test environment, displaying data and sending detailed logs over a WiFi network.

Early tests went well, with each feature prototyped and working in isolation. The problems started when the various components were brought together, at which point the SPI implementation on the board became an issue.

This post runs through the details of the problem and the hardware solution selected.

There are a number of variants of the Cheap Yellow Display and this post suggests a destructive hardware modification to the circuit board to resolve the issue. It is recommended that the schematic is double checked before proceeding with any modifications.

What Are Cheap Yellow Displays?

I first came across the Cheap Yellow Displays when they were mentioned in a Hackaday Post in early January. They are small boards, about the size of a credit card. The boards contain an array of devices including:

  • Screen with touch sensor
  • Ambient light sensor
  • RGB LED
  • Audio amplifier
  • SD Card slot

Cheap Yellow Display

All of this is powered by an ESP32 with 4MB of flash and can be obtained for about £9 when purchased from China. Sounds perfect for a small project that will monitor UART traffic and log / send the data over the network.

There is also an open source repository containing code, schematics and a list of hacks for the board.

The board ordered was the ESP32-2432S028, time to break out VS Code and ESP-IDF.

One feature missing from this package was the ability to natively connect an ESP-PROG programmer to the board. This means that development will have to fall back to using ESP_LOGx or printf statements for debugging. No JTAG here.

Proof of Concept

As mentioned, the board is going to be used to connect to a 3.3V TTL UART and log the information, optionally sending the data to a server on the network. The device has a display with touch screen so this becomes an obvious choice to configure and monitor operation of the device.

The SD Card slot also means that data can be logged to a file for longer term storage and transfer to a host computer.

For the initial proof of concept we need to look at the following tasks:

  • Use LVGL to create a user interface on the LCD screen
  • Respond to touch events
  • Access the SD card reader

The first two tasks were proven to work using the LVGL examples in the open source repository.

Access to the SD card was proven using the SD SPI example in the Espressif repository.

Combining the Examples

Next step was pulling the two pieces of work together; this is where things started to go wrong. Adding the SD Card example code to the LVGL code generated initialisation errors when spi_bus_initialize was called.

The code as written used the following interfaces:

  • SPI2_HOST for LCD display
  • SPI3_HOST for the touch sensor

IDF exposes three SPI hosts, 1 – 3, so the logical step is to assign SPI1_HOST to the SD card interface. This is where the problems started to arise. Using the third SPI interface resulted in the following error:

E (296) spi: spi_bus_initialize(802): SPI bus already initialized

The issue can be distilled down to the following code:

#include "driver/spi_master.h"

void app_main(void)
{
    spi_bus_config_t buscfg_display = {};
    buscfg_display.mosi_io_num = (gpio_num_t) 13;
    buscfg_display.miso_io_num = (gpio_num_t) 12;
    buscfg_display.sclk_io_num = (gpio_num_t) 14;
    buscfg_display.quadwp_io_num = -1;
    buscfg_display.quadhd_io_num = -1;
    buscfg_display.max_transfer_sz = 4000;
    buscfg_display.flags = 0;
    buscfg_display.intr_flags = 0;

    ESP_ERROR_CHECK(spi_bus_initialize(SPI2_HOST, &buscfg_display, SPI_DMA_CH_AUTO));

    spi_bus_config_t buscfg_touch = {};
    buscfg_touch.mosi_io_num = GPIO_NUM_32;
    buscfg_touch.miso_io_num = GPIO_NUM_39;
    buscfg_touch.sclk_io_num = GPIO_NUM_33;
    buscfg_touch.quadwp_io_num = -1;
    buscfg_touch.quadhd_io_num = -1;
    buscfg_touch.data4_io_num = -1;
    buscfg_touch.data5_io_num = -1;
    buscfg_touch.data6_io_num = -1;
    buscfg_touch.data7_io_num = -1;
    buscfg_touch.max_transfer_sz = 4000;
    buscfg_touch.flags = SPICOMMON_BUSFLAG_SCLK | SPICOMMON_BUSFLAG_MISO | SPICOMMON_BUSFLAG_MOSI | SPICOMMON_BUSFLAG_MASTER | SPICOMMON_BUSFLAG_GPIO_PINS;
    buscfg_touch.isr_cpu_id = ESP_INTR_CPU_AFFINITY_AUTO;
    buscfg_touch.intr_flags = ESP_INTR_FLAG_LOWMED | ESP_INTR_FLAG_IRAM;

    ESP_ERROR_CHECK(spi_bus_initialize(SPI3_HOST, &buscfg_touch, SPI_DMA_CH_AUTO));

    spi_bus_config_t buscfg_sdcard = {};
    buscfg_sdcard.mosi_io_num = 23;
    buscfg_sdcard.miso_io_num = 19;
    buscfg_sdcard.sclk_io_num = 18;
    buscfg_sdcard.quadwp_io_num = -1;
    buscfg_sdcard.quadhd_io_num = -1;
    buscfg_sdcard.max_transfer_sz = 4000;

    ESP_ERROR_CHECK(spi_bus_initialize(SPI1_HOST, &buscfg_sdcard, SPI_DMA_CH_AUTO));
}

The traceback for the error is:

0x4008582b: _esp_error_check_failed at /Users/username/esp/esp-idf/components/esp_system/esp_err.c:49
0x400d6859: app_main at /Users/username/Public/spitest/main/spitest.c:49 (discriminator 1)

This highlights the line causing the error as:

ESP_ERROR_CHECK(spi_bus_initialize(SPI1_HOST, &buscfg_sdcard, SPI_DMA_CH_AUTO));

There are some notes in the IDF documentation about using SPI1 and possible issues that can be encountered.

The unanswered question is why do we need three SPI busses?

Brief Recap – SPI

Readers who are experienced with how SPI works can skip this section.

SPI has the concept of a central (master) device and a number of peripheral (slave) devices. The devices communicate using two or three lines with an optional device selection signal (chip select). The four lines are usually labelled as:

  • COPI (MOSI) – data signals from the central controller to the peripherals
  • CIPO (MISO) – data from the peripheral to the central device
  • CLK (SCK) – clock signal generated by the central device
  • CS – chip select, used to identify the peripheral being addressed.

The chip select signal is not always driven by the central device. Take, for example, a circuit with only one SPI peripheral: the chip select signal is not strictly needed, and if the peripheral has a CS line then it can be tied high or low, depending upon the peripheral's requirements. In this way the peripheral is always listening, which is fine as it is the only device on the bus.

For a system with multiple SPI peripherals there are two options:

  • Put each peripheral on its own bus
  • Use the CS line to switch peripherals on / off

The first option means that every peripheral would need three signals: COPI, CIPO and CLK. The CS line could be hard wired as each bus follows the single-peripheral model. This design means that two devices would consume six pins on the microcontroller.

The second option is less costly as you use the same three data and clock pins (COPI, CIPO and CLK) for all of the peripherals. You need an additional CS pin for each peripheral but this still results in a saving in the pin count. So for the two peripheral example, you need the three data and clock pins along with a CS1 and CS2 line, so only 5 pins rather than 6.
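The pin arithmetic above can be checked with a tiny sketch (illustrative only):

```python
def pins_separate_buses(peripherals: int) -> int:
    # Each peripheral gets its own COPI, CIPO and CLK; CS is hard wired.
    return 3 * peripherals

def pins_shared_bus(peripherals: int) -> int:
    # One shared COPI/CIPO/CLK set plus one CS line per peripheral.
    return 3 + peripherals

# Two peripherals: 6 pins on separate buses, 5 on a shared bus.
print(pins_separate_buses(2), pins_shared_bus(2))
```

The saving grows with every peripheral added: a shared bus needs one extra pin per device, while separate buses need three.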

A more detailed description of the history of SPI and other features of the bus, including timing characteristics (modes), can be found on the Serial Peripheral Interface page on Wikipedia.

Cheap Yellow Display SPI Implementation

A quick look at the schematic for the Cheap Yellow Display shows how the three SPI devices have been connected to the ESP32. The pin out is as follows:

Signal Name    LCD    Touch Sensor    SD Card
COPI (MOSI)    13     32              23
CIPO (MISO)    12     39              19
CLK            14     25              18
CS             15     33              5

The use of three distinct sets of pins, one for each device, means that three SPI interfaces are needed, which in turn requires the SPI1_HOST initialisation error to be investigated and resolved.

Hardware Solution

Another option is to look at a hardware solution: moving one of the devices to another bus and using CS to determine the source and destination of the data. The LCD and the SD card are both under the control of the application. The touch screen peripheral reacts to user input, and the application has no control over when this will occur. The touch screen will therefore remain on its own bus while the LCD and SD card share a bus. The wiring becomes:

Signal Name    LCD    Touch Sensor    SD Card
COPI (MOSI)    13     32              13
CIPO (MISO)    12     39              12
CLK            14     25              14
CS             15     33              5

First task, disconnect the data and clock signals for the SD card leaving the CS pin connected. The image below shows the three tracks that need cutting (highlighted in red):

Track cuts

Next up, connect the SD card data and clock pins to the LCD SPI bus. The modified board looks something like this:

Modified Board

Software Modifications

The software modifications should be relatively trivial, a simple change to the MOSI, MISO and CLK pins used for the SD card. The CS pin assignment remains unchanged from the sample application.
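The revised pin assignments can be captured in a quick consistency check. A Python sketch, with pin numbers taken from the tables above (illustrative only, not part of the firmware):

```python
# Pin assignments after the hardware modification.
pins = {
    "lcd":     {"mosi": 13, "miso": 12, "clk": 14, "cs": 15},
    "sd_card": {"mosi": 13, "miso": 12, "clk": 14, "cs": 5},
    "touch":   {"mosi": 32, "miso": 39, "clk": 25, "cs": 33},
}

# The LCD and SD card now share the data and clock lines...
lcd_bus = {k: v for k, v in pins["lcd"].items() if k != "cs"}
sd_bus = {k: v for k, v in pins["sd_card"].items() if k != "cs"}
assert lcd_bus == sd_bus

# ...but keep distinct chip selects so the application can address each one.
assert pins["lcd"]["cs"] != pins["sd_card"]["cs"]
print("pin map consistent")
```

Only two SPI host interfaces are now needed: one for the shared LCD/SD bus and one for the touch sensor.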

Conclusion

Modifying the hardware has reduced the number of SPI host interfaces required bringing the interface count down to two. Early experience has shown that the system starts and runs as expected. The three peripherals, touch, LCD and SD card all respond as expected.

Back to providing UART monitoring functionality.

nRF52840 Does Not Appear as a Wireshark Interface

January 6th, 2025 • ElectronicsComments Off on nRF52840 Does Not Appear as a Wireshark Interface

nRF52840 Banner

Recent work has been heading towards Bluetooth software enhancement on the ESP32. The basic design of the system follows the classic server (central) and peripheral model. The ESP device is acting as a server with connections from peripheral devices such as sensors, or even Bluetooth client applications such as LightBlue.

It would be really useful if, as part of the debugging process, we had some visibility of the Bluetooth traffic in the air. This would allow the diagnosis of software issues. There are two ways we can tackle this problem:

  • Write our own client on a second ESP32
  • Use commercially available software

The first of these will be cheaper initially but will require extra time to develop the software. The second option has a small initial outlay but allows us to build on the experience and expertise of others.

It is this second option that we will examine here.

BLE Sniffer

Nordic offer a range of development platforms for WiFi and Bluetooth. One of these platforms, the nRF52840, has the option for your own software development as well as some custom firmware allowing sniffing of BLE traffic.

Setting up the System

There are some really good instructions over on the Nordic web site discussing Setting up nRF Sniffer for Bluetooth LE. The nRF52840 dongle is installed as an interface for Wireshark. Wireshark then provides the user interface and the ability to decode the Bluetooth packets.

All was fine until section 3 of the guide where we find the following instructions:

3. Enable the nRF Sniffer capture tool in Wireshark.
3.1 Refresh the interfaces in Wireshark by selecting Capture > Refresh Interfaces or pressing F5.
3.2 Select View > Interface Toolbars > nRF Sniffer for Bluetooth LE to enable the nRF Sniffer interface.

The instruction at 3.2 no longer seems to be relevant as Wireshark displays the Interfaces panel when it starts. Time to search for the nRF Sniffer for Bluetooth LE in the list of installed interfaces.

No luck.

Troubleshooting

One of the prerequisite steps (section 1 of the document linked above) is to install the Python requirements with the command:

python3 -m pip install -r requirements.txt

This completed as expected and the command installed the required packages. So let’s run the shell script to check the hardware:

./nrf_sniffer_ble.sh --extcap-interfaces

The output from this script included the following error:

pyserial not found, please run: "/opt/homebrew/opt/python@3.13/bin/python3.13 -m pip install -r requirements.txt" and retry

Double checking with the python3 -m pip install -r requirements.txt command results in the following output:

Requirement already satisfied: pyserial>=3.5 in /Users/user-name/.pyenv/versions/3.12.0/lib/python3.12/site-packages

Looking at the output it appears that there is a conflict with the Python environments. The script is trying to use the homebrew version of Python but the command line is using a Python virtual environment.

Examining the script we find that there is special consideration for macOS:

unamestr=`uname`
if [ "$unamestr" = 'Darwin' ]; then
        hb_x86_py3="/usr/local/bin/python3"
        hb_apple_silicon_py3="/opt/homebrew/bin/python3"
        .
        .
        .

The remainder of the if statement makes some checks on the various versions of Python that could be installed on the system. In this case the script selects /opt/homebrew/opt/python@3.13/bin/python3.13. However, which python shows /Users/username/.pyenv/shims/python as the Python interpreter being used.
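This sort of interpreter mismatch can be spotted quickly from Python itself. A small diagnostic sketch (purely illustrative):

```python
import shutil
import sys

# The interpreter actually running (the one python3 -m pip installed into)...
print("running:", sys.executable)

# ...versus the first python3 a shell script would find on PATH.
print("on PATH:", shutil.which("python3"))

# If these differ, packages installed with `python3 -m pip` may be invisible
# to a script that hard-codes its own interpreter path, as the Nordic
# script does for the Homebrew Python installations.
```

When the two paths disagree, any script that selects its own interpreter will not see packages installed from the command line.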

So this looks to be the problem: the script is using the incorrect version of Python. Editing the script, adding the line py3=python3 towards the end, and re-running the ./nrf_sniffer_ble.sh --extcap-interfaces command results in the following output:

extcap {version=4.1.1}{display=nRF Sniffer for Bluetooth LE}{help=https://www.nordicsemi.com/Software-and-Tools/Development-Tools/nRF-Sniffer-for-Bluetooth-LE}
control {number=0}{type=selector}{display=Device}{tooltip=Device list}
control {number=1}{type=selector}{display=Key}{tooltip=}
control {number=2}{type=string}{display=Value}{tooltip=6 digit passkey or 16 or 32 bytes encryption key in hexadecimal starting with '0x', big endian format.If the entered key is shorter than 16 or 32 bytes, it will be zero-padded in front'}{validation=\b^(([0-9]{6})|(0x[0-9a-fA-F]{1,64})|([0-9A-Fa-f]{2}[:-]){5}([0-9A-Fa-f]{2}) (public|random))$\b}
.
.
.

This is looking more positive. Back to Wireshark and this time the interface is shown in the list of interfaces available.

Conclusion

This problem took a little tracing and appears to have occurred due to two issues:

  • VIRTUAL_ENV was not set by the Python virtual environment being used
  • Several different versions of Python installed on the host system

The Python versions had been installed as pre-requisites for other packages installed by homebrew and as such could result in more issues if uninstalled.

Editing the Nordic script, while not ideal, was probably the most pragmatic solution to this issue.

Repeatable Deployments 5 – NVMe Base Duo

October 21st, 2024 • ElectronicsComments Off on Repeatable Deployments 5 – NVMe Base Duo

NVMe Base Duo Banner

A few weeks ago we looked at using the Pimoroni NVMe Base to add 500 GBytes of storage to a Raspberry Pi, first manually and then using Ansible.

A few days ago I started to look at doing the same but this time with the NVMe Base Duo. This would allow the addition of two drives to the Raspberry Pi. Should be simple, right?

TL;DR The scripts and instructions for running them can be found in the AnsibleNVMe GitHub repository.

Setting up the Hardware

As with the NVMe Base, setting up the NVMe Base Duo was simple, just follow the installation instructions on the product page. Again, the most difficult part was connecting the NVMe Base Duo board to the Raspberry Pi using the flat flex connector.

A quick check of the /dev directory shows the two drives as devices:

clusteruser@TestServer:~ $ ls /dev/nv*
/dev/nvme0  /dev/nvme0n1  /dev/nvme1  /dev/nvme1n1

Setting up the two drives should be a case of running the configuration script from the previous post with two different device / mount point names, namely:

  • nvme0 / nvme0n1
  • nvme1 / nvme1n1

Ansible tasks should allow us to reuse the commands used to mount a single drive on the NVMe Base without repeating the instructions with copy/paste.

Ansible Tasks

First thing to do is identify the tasks that should be executed for both drives. These are found in the ConfigureNVMeBase.yml file, where they make up the bulk of the file. The tasks are:

- name: Format the NVMe drive {{ nvmebase_device_name }}
  command: mkfs.ext4 /dev/{{ nvmebase_device_name }} -L Data
  when: format_nvmebase | bool

- name: Make the mount point for {{ nvmebase_device_name }}
  command: mkdir /mnt/{{ nvmebase_device_name }}

- name: Mount the newly formatted drive ({{ nvmebase_device_name }})
  command: mount /dev/{{ nvmebase_device_name }} /mnt/{{ nvmebase_device_name }}

- name: Make sure that {{ ansible_user }} can read and write to the mount point
  command: chown -R {{ ansible_user }}:{{ ansible_user }} /mnt/{{ nvmebase_device_name }}

- name: Get the UUID of {{ nvmebase_device_name }}
  command: blkid /dev/{{ nvmebase_device_name }}
  register: blkid_output

- name: Extract UUID from blkid output
  set_fact:
    device_uuid: "{{ blkid_output.stdout | regex_search('UUID=\"([^\"]+)\"', '\\1') }}"

- name: Clean the extracted UUID
  set_fact:
    clean_uuid: "{{ device_uuid | regex_replace('\\[', '') | regex_replace(']', '') | regex_replace(\"'\", '') }}"

- name: Add UUID entry for {{ nvmebase_device_name }} to /etc/fstab
  lineinfile:
    path: /etc/fstab
    line: "UUID={{ clean_uuid }} /mnt/{{ nvmebase_device_name }} ext4 defaults,auto,users,rw,nofail,noatime 0 0"
    state: present
    create: yes

Breaking these instructions out into a new file, ConfigureNVMeDriveTasks.yml, gives us the consolidated task list. The device and drive mount point are derived from the Ansible variable nvmebase_device_name, which is set in the calling script as follows:

- include_tasks: ConfigureNVMeDriveTasks.yml
  vars:
    nvmebase_device_name: nvme0n1

The ansible_user variable (used in the chown command in the tasks file) is taken from the group_vars/all.yml file.

And for the second drive we would use:

- include_tasks: ConfigureNVMeDriveTasks.yml
  vars:
    nvmebase_device_name: nvme1n1
  when: nvme_duo == true

Note the addition of the when clause to only execute the tasks in the ConfigureNVMeDriveTasks.yml if the nvme_duo variable is true. This clause will also be used when Samba is configured.

Install Samba (Optional)

The installation of Samba follows similar steps as detailed in the Adding NVMe Base using Ansible post. The only addition is to add the second drive to the configuration section of the script.

Change Hostname (Optional)

One final, optional step is to change the name of the Raspberry Pi. This is useful when a number of devices are being configured. This step requires the hostname variable being set in the group_vars/all.yml file. Execute the ChangeHostname.yml script once the variable has been changed. Note that the script may fail following the reboot step as Ansible tries to reconnect with the Raspberry Pi using the old host name.

Lessons Learned

For some reason, which was never tracked down, the installation sometimes failed on the NVMe Base Duo with access permission issues. The access issue presented itself on the Samba shares and also when attempting to use the drive when logged into the Raspberry Pi. This was resolved by setting the ansible_user variable in the group_vars/all.yml file.

The first installation of the NVMe Base Duo added the PCIe Generation 3 support. This caused some issues accessing the devices in the /dev directory. Removing this support allowed both of the drives to be accessed.

Conclusion

Breaking the drive installation and configuration out into a new tasks file allows Ansible to reuse the same steps for two different devices. Coupling this with when clauses allows the control flow to change when deploying to two different types of device.