Getting Started

We recommend developing the kernel in a Debian 12.0+ or Ubuntu 24.04+ environment to get the best tooling support.

Prepare the basic development environment

repo

We are using repo to manage the kernel project. Please follow https://mirrors.tuna.tsinghua.edu.cn/help/git-repo/ to install repo.
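
If you only need the repo launcher itself, here is a minimal sketch, assuming ~/bin already exists and is in your ${PATH} (the mirror page above describes a China-friendly download source):

mkdir -p ~/bin
curl -o ~/bin/repo https://storage.googleapis.com/git-repo-downloads/repo
chmod a+x ~/bin/repo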

gn

We are using GN to organize and build the BlueOS project, rather than Cargo, the official package manager of the Rust ecosystem. GN offers better multi-language support and faster builds than Cargo. You can download prebuilt gn binaries from https://gn.googlesource.com/gn/#getting-a-binary. Put the downloaded binary in a directory and ensure that directory is in your ${PATH}.
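
For example, a minimal sketch, assuming the downloaded binary sits at ~/Downloads/gn and you keep tools under ~/tools (both paths are placeholders):

mkdir -p ~/tools/gn
mv ~/Downloads/gn ~/tools/gn/
chmod +x ~/tools/gn/gn
export PATH=~/tools/gn:${PATH}
gn --version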

Install packages on Linux

Install packages shipped by the distro.

sudo apt install build-essential cmake ninja-build pkg-config \
                 libssl-dev gdb-multiarch curl git wget \
                 libslirp-dev python3 python3-pip meson \
                 libglib2.0-dev flex bison libfdt-dev \
                 gcc-riscv64-unknown-elf clang llvm lld \
                 python3-kconfiglib python3-tomli

Additionally, download and install the Arm toolchains:

wget https://developer.arm.com/-/media/Files/downloads/gnu/14.3.rel1/binrel/arm-gnu-toolchain-14.3.rel1-x86_64-arm-none-eabi.tar.xz
tar xvf arm-gnu-toolchain-14.3.rel1-x86_64-arm-none-eabi.tar.xz -C <install-path>
wget https://developer.arm.com/-/media/Files/downloads/gnu/14.3.rel1/binrel/arm-gnu-toolchain-14.3.rel1-x86_64-aarch64-none-elf.tar.xz
tar xvf arm-gnu-toolchain-14.3.rel1-x86_64-aarch64-none-elf.tar.xz -C <install-path>

Add <install-path>/arm-gnu-toolchain-14.3.rel1-x86_64-aarch64-none-elf/bin and <install-path>/arm-gnu-toolchain-14.3.rel1-x86_64-arm-none-eabi/bin to your $PATH.
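
For example, in your ~/.bashrc, keeping <install-path> as the prefix you chose above:

export PATH=<install-path>/arm-gnu-toolchain-14.3.rel1-x86_64-arm-none-eabi/bin:${PATH}
export PATH=<install-path>/arm-gnu-toolchain-14.3.rel1-x86_64-aarch64-none-elf/bin:${PATH}
# Verify in a new shell
arm-none-eabi-gcc --version
aarch64-none-elf-gcc --version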

Build and install QEMU on Linux

Download the QEMU source code tarball, then configure, build, and install it:

wget https://download.qemu.org/qemu-10.0.2.tar.xz
tar xvf qemu-10.0.2.tar.xz
cd qemu-10.0.2
mkdir build && cd build
../configure --prefix=<install-path> --enable-slirp && \
    make -j$(nproc) install

Add <install-path>/bin to your $PATH.
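
For example, assuming the default QEMU target list was built, you can verify that the freshly built binaries are picked up:

export PATH=<install-path>/bin:${PATH}
qemu-system-arm --version
qemu-system-aarch64 --version
qemu-system-riscv64 --version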

Install packages on macOS

brew install coreutils llvm@19 lld@19 gcc-arm-embedded cmake ninja qemu
brew tap riscv-software-src/riscv
brew install riscv-tools riscv64-elf-gcc riscv64-elf-binutils riscv64-elf-gdb
python3 -m pip install --user --break-system-packages --upgrade kconfiglib

For the aarch64 toolchain, please refer to arm-gnu-toolchain. It is recommended to download the tarballs rather than the pkg installers. For the RISC-V toolchain on macOS, please refer to homebrew-riscv.

Code formatters

We use a code formatter for each programming language to keep our code style consistent. These formatters can be installed via

# On Linux
sudo apt install clang-format yapf3
# On macOS
brew install clang-format yapf

Here is the table of format commands and their corresponding programming languages.

lang      format command
Rust      rustfmt
C/C++     clang-format
Python    yapf3
GN        gn format
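
For example, the formatters are typically run on individual files like this (the file paths are placeholders):

rustfmt kernel/src/lib.rs
clang-format -i drivers/foo.c
yapf3 -i scripts/gen_config.py
gn format BUILD.gn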

Init and sync the project

Use the following commands to init the project.

mkdir blueos-dev
cd blueos-dev

If you have configured a public SSH key on GitHub, please use the following command

repo init -u git@github.com:vivoblueos/manifests.git -b main -m manifest.xml

otherwise, please try

repo init -u https://github.com/vivoblueos/manifests.git -b main -m manifest.xml

Then sync all repositories in the project.

repo sync

You can accelerate the synchronization by appending -j$(nproc) to the above command.
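
For example, on a multi-core machine:

repo sync -j$(nproc)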

Building the Rust toolchain for BlueOS kernel

We have forked the upstream Rust compiler to support the BlueOS kernel's *-vivo-blueos-* targets and the BlueOS kernel's Rust std.

We will eventually contribute our changes to the upstream repository and make *-vivo-blueos-* officially tiered Rust targets.

Clone the downstream repositories

If you have configured a public SSH key on GitHub, please use the following commands,

git clone git@github.com:vivoblueos/rust.git  
git clone git@github.com:vivoblueos/cc-rs.git  
git clone git@github.com:vivoblueos/libc.git

otherwise, please try

git clone https://github.com/vivoblueos/rust.git  
git clone https://github.com/vivoblueos/cc-rs.git  
git clone https://github.com/vivoblueos/libc.git 

The blueos-dev branch is set as default, so no manual branch switching is required.

Setup Rust mirror site

In China, we recommend using a mirror site for crates.io and rustup. Add the following lines to your ~/.bashrc

export RUSTUP_DIST_SERVER=https://mirrors.ustc.edu.cn/rust-static
export RUSTUP_UPDATE_ROOT=https://mirrors.ustc.edu.cn/rust-static/rustup

and then type

source ~/.bashrc

Install via the x.py script

Run the following commands in your bash shell. These instructions work for both Linux and macOS platforms:

export CARGO_NET_GIT_FETCH_WITH_CLI=true
export DESTDIR=<choose-your-install-prefix>
cd rust
cp config.blueos.toml config.toml
./x.py install -i --stage 1 compiler/rustc
./x.py install -i --stage 1 library/std --target aarch64-vivo-blueos-newlib
./x.py install -i --stage 1 library/std --target thumbv7m-vivo-blueos-newlibeabi
./x.py install -i --stage 1 library/std --target thumbv8m.main-vivo-blueos-newlibeabihf
./x.py install -i --stage 1 library/std --target riscv64-vivo-blueos
./x.py install -i --stage 1 library/std --target riscv32-vivo-blueos
./x.py install -i --stage 1 library/std --target riscv32imc-vivo-blueos
./x.py install -i --stage 0 rustfmt
./x.py install -i --stage 0 rust-analyzer
./x.py install -i --stage 0 clippy

You must also install the host machine’s standard library and LLVM tools.

For Linux:

./x.py install -i --stage 1 library/std --target x86_64-unknown-linux-gnu
cp -rav build/x86_64-unknown-linux-gnu/llvm/{bin,lib} ${DESTDIR}/usr/local

For macOS:

./x.py install -i --stage 1 library/std --target aarch64-apple-darwin
cp -av build/aarch64-apple-darwin/llvm/{bin,lib} ${DESTDIR}/usr/local

To use the kernel toolchain, add the following to your environment:

export PATH=${DESTDIR}/usr/local/bin:${PATH}

Or if you want to manage the BlueOS toolchain using rustup, you can try:

ln -s ${DESTDIR}/usr/local ~/.rustup/toolchains/blueos-dev
rustup default blueos-dev
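
As a quick sanity check (a minimal sketch; the exact version string will differ), confirm that rustup now sees the toolchain:

rustup toolchain list
rustc +blueos-dev --version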

Build kernel image

The kernel currently supports multiple boards.

  • qemu_mps2_an385
  • qemu_mps3_an547
  • qemu_virt64_aarch64
  • qemu_riscv64

To build a kernel image for a specific board, such as qemu_mps2_an385, use the following commands:

gn gen out/qemu_mps2_an385.release/ --args='build_type="release" board="qemu_mps2_an385"'
ninja -C out/qemu_mps2_an385.release

To run tests, type

ninja -C out/qemu_mps2_an385.release check_all

Args and their semantics.

arg         semantics
build_type  Configuration for the build
board       Name of the target board
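
For example, a release build for the qemu_virt64_aarch64 board listed above follows the same pattern:

gn gen out/qemu_virt64_aarch64.release/ --args='build_type="release" board="qemu_virt64_aarch64"'
ninja -C out/qemu_virt64_aarch64.release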

Use rust-analyzer in VSCode

We recommend using the rust-analyzer extension in VSCode to support development of the BlueOS kernel.

However, since we are using GN, rather than Cargo.toml, to manage our project, the rust-analyzer extension in VSCode might not work out of the box. You should use the following commands1 to get the extension working:

gn gen out/qemu_mps2_an385 --export-rust-project
ln -sfn out/qemu_mps2_an385/rust-project.json

  1. https://chromium.googlesource.com/chromium/src/+/refs/heads/main/docs/rust.md#using-vscode

QEMU checker

Most kernel code is tested via QEMU. We introduce the QEMU checker to assist in testing the kernel.

The QEMU checker must be provided with a QEMU runner script and an input file containing check directives. The checker runs the runner script and captures its output lines. If the output lines match the conditions or regexes specified in the check directives, the checker takes the corresponding actions. Directives should be written at the top of the input file.

directive                     action
// CHECK-FAIL: <regex>        Record this line and report at exit.
// CHECK-SUCC: <regex>        Record this line and report at exit.
// ASSERT-FAIL: <regex>       Exit the runner and report failure.
// ASSERT-SUCC: <regex>       Exit the runner and report success.
// NEWLINE-TIMEOUT: <number>  Timeout when checker reads the next line. Report failure if timeout occurs.
// TOTAL-TIMEOUT: <number>    Total timeout for this run.

Example

Suppose we have a kernel image for testing, named blueos_foo_test. First, we have to generate a QEMU runner script for it.

gen_qemu_runner("runner_for_blueos_foo_test") {
  img = ":blueos_foo_test"
  qemu = "$qemu_exe"
  board = "$board"
}

Then, we make a checker for the above runner.

run_qemu_checker("check_blueos_foo_test") {
  img = ":blueos_foo_test"
  runner = ":runner_for_blueos_foo_test"
  checker = "//kernel/kernel/tests/integration_test.rs"
}

Note that the above two targets must be put in the same BUILD.gn file. Check directives should be put at the top of integration_test.rs.

// NEWLINE-TIMEOUT: 15
// ASSERT-SUCC: Kernel test end.
// ASSERT-FAIL: Backtrace in Panic.*

Integrate checks in CI

The CI runner only runs two top-level targets, default and check_all. To run your check during CI, you have to put your checker target in the deps of the check_all group in the top-level BUILD.gn.

group("check_all") {
  deps = [ ":check_kernel" ]
}

Run Coverage

We generate the code needed for coverage statistics by adding -Cinstrument-coverage during compilation, and we generate coverage data through the integration of minicov and semihosting. Finally, we use the grcov tool to produce readable coverage data in HTML format. All of this has been integrated into our build system, and coverage data can be generated using the following commands:

gn gen out/qemu_riscv64.cov/ --args='build_type="coverage" board="qemu_riscv64"'
ninja -C out/qemu_riscv64.cov check_coverage

If you get an error that grcov is not found, you can install it via

cargo install grcov

After building and running, you can find the merged coverage report in the ./out/qemu_riscv64.cov/cov_report directory. Open the index.html file in that directory to view the coverage data.
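
For example, to open the report from the command line:

# On Linux
xdg-open ./out/qemu_riscv64.cov/cov_report/index.html
# On macOS
open ./out/qemu_riscv64.cov/cov_report/index.html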

Add a new syscall

Adding a new syscall in the kernel is easy.

There are 3 steps.

  1. Add a new syscall number for the new syscall in header/src/lib.rs.
pub enum NR {
    Read,
}
  2. Implement the new syscall in kernel/src/syscall_handlers/mod.rs.

This is generally done via define_syscall_handler! if the syscall is trivial. For example

define_syscall_handler!(
read(fd: i32, buf: *mut i8, len: usize) -> c_long {
    // Implement your vfs read.
});
  3. Register the new syscall in kernel/src/syscall_handlers/mod.rs.

Add a new entry in syscall_table!. For example,

(Read, read),

Note that the first operand MUST be the unqualified name of the NR enumeration variant.

Invoke a syscall

The BlueOS kernel offers two modes of invoking syscalls.

  1. Software interrupt

This is the mechanism generally used by OS kernels to switch from user space to kernel space.

  2. Direct invocation

In this mode, syscall handlers are invoked directly via function calls.

In both modes, if you have to invoke a syscall, use the bk_syscall! macro in the blueos_scal crate. For example,

use blueos_scal::bk_syscall;
use blueos_header::syscalls::NR::Read;

// Works the same in both SWI and direct-invocation modes.
fn read_something(fd: i32, buf: *mut i8, len: usize) -> i32 {
    bk_syscall!(Read, fd, buf, len) as i32
}

There is no need to change your code when switching to another mode. By default, the SWI mode is used. If you want to use the direct invocation mode, all you have to do is pass --cfg direct_syscall_handler when building the kernel.

Kernel’s basic data types

Arc

A customized Arc (infra/src/tinyarc.rs) is implemented for the kernel. Compared to alloc::sync::Arc, there is not much difference, except that it only has a strong count, which reduces its size and makes it friendly to embedded devices. Also, its memory layout is known to blueos_infra, so our intrusive list can cooperate with it easily.

Intrusive list

blueos_infra's ilist (infra/src/list/typed_ilist.rs) is typed and unsafe. It is like a C-style intrusive list; however, we recommend that developers not use it directly, but through smart pointers instead. We implement ArcList on top of the typed ilist with the Arc mentioned above, which guarantees safety.