STM32 on Ubuntu Linux step by step

This is a step-by-step tutorial on using an STM32 (an stm32f407vg to be precise) under Linux (Ubuntu 18.10) with the other tools (QtCreator and CMake) that I use in my everyday work. In this article we will compile a simple LED-blinking program and run it on the STM32F4-DISCOVERY. There may be easier ways of accomplishing this though. Sections whose titles end with an asterisk are optional.

First we will try to prepare a project which compiles simply in a terminal using make or ninja. This is very important for me because nowadays every µC vendor seems to provide its own shitty IDE, and their examples tend to compile only under those IDEs, in non-standard ways. This forces you to install and use stuff that you (or is it only me?) don't like, not to mention that most of those tools run only under Windows. Then I'll show you my favorite IDE, which happens to be QtCreator, but any other IDE which can cope with the CMake build system should do.

Source code for this article is here :

Note : do not confuse [Stm32]CubeMX (a desktop configuration tool) with [Stm32]CubeF4, which is an SDK package.

Note to myself : VirtualBox creates way too small disk images by default. I went with 10GB, which proved to be too small after only 1 hour of tweaking. Resizing the VB disk is done via vboxmanage; reviving the dead distro is done as usual by using a LiveCD: you delete the partition using fdisk, create a new one in place of the old with all options set to defaults, and then you use resize2fs.

Note to myself 2 : install the VirtualBox Guest Additions in the guest system for seamless clipboard operation. Install virtualbox-ext-pack on the host system for USB 2.0 forwarding, add yourself to the vboxusers group, read this.

Installing the tools


Install Ubuntu 18.10. I used VirtualBox running on Ubuntu 18.04 for this.

Make sure everything is up to date. Run apt update + upgrade, or use the GUI Ubuntu provides.

We are going to use an excellent program ST provides called STM32CubeMX which lets us configure pins, clock sources and more. But above all it can generate some startup and config code which we'll take as a base for our LED-blinking application. So let's download it, remembering to install Java first. I usually install Oracle's Java JDK, but a JRE will of course do as well (in this case it was jdk-11.0.1_linux-x64_bin.deb). I have no experience with other Java implementations like OpenJDK, but I suspect that they will also work as expected.

After the JDK was installed I added java to the PATH in ~/.profile : PATH="/usr/lib/jvm/jdk-11.0.1/bin:$PATH"

Download Stm32CubeMX and unpack it (login required unfortunately). Run ./SetupSTM32CubeMX-4.27.0.linux . In my case (fresh Ubuntu) it said :

bash: ./SetupSTM32CubeMX-4.27.0.linux: No such file or directory

It is a very unintuitive message, but the cause is missing 32-bit libraries. This post tells us what to do, and to my surprise it discourages installing ia32-libs, which is what I would normally do:

file SetupSTM32CubeMX-4.27.0.linux
SetupSTM32CubeMX-4.27.0.linux: ELF 32-bit LSB executable, Intel 80386, ......
sudo dpkg --add-architecture i386
sudo apt update
sudo apt-get install libc6:i386 libstdc++6:i386

Run the installer and verify that CubeMX works.


Everyone has his/her favorite IDE; mine is QtCreator for various reasons which I'm not going to dive into, but Qt libraries are not one of them. I do not use Qt, I simply tried many IDEs and QtCreator suits me the best. First let's grab an installer.

For this article I downloaded the installer, ran it in the terminal and that's it (login required).


The toolchain can be easily installed from a Launchpad PPA, or can be compiled using an excellent tool called crosstool-ng. Detailed instructions are in one of my previous posts. But for now let's use the easier way:

sudo add-apt-repository ppa:team-gcc-arm-embedded/ppa
sudo apt-get update
sudo apt install gcc-arm-none-eabi binutils-arm-none-eabi libnewlib-arm-none-eabi libstdc++-arm-none-eabi-newlib
sudo apt install cmake ninja-build

Other tools

sudo apt install mc openocd dos2unix gdb-multiarch

The project

Run Stm32CubeMX and start a new project. In the "Part Number Search" box in the top left corner type "stm32f407vg". This is the model of the µC we will be using, which is mounted on the STM32F4-DISCOVERY, a popular evaluation board (you can get it from all major distributors, though availability is not at its best right now).

Click on the blue link-like label in the search result table and start the project. The next thing you'll see is a view of your microcontroller with all the peripherals and GPIOs initialized according to the board's specs. This is because the board itself contains some neat stuff like an accelerometer, a digital-to-analog audio chip with an amplifier and so on. We are focused on PD12 – PD15, which are connected directly to LEDs.

Turn off USB support. It uses some additional files which make compilation a little bit harder. Make sure that USB_OTG_FS Mode is set to "Disable" in the left pane. Also, on the "Configuration" tab make sure that the USB middleware is not used.

Turn off other unnecessary peripherals like SPI, I2C and the USARTs. The more peripherals, the more source files from the SDK we will have to compile, so in my project only SYS, RCC and of course the GPIOs are configured. Please check my blinky.ioc in case of trouble, or simply experiment with CubeMX's output.

Then use "Project -> Generate Code" in the main menu to generate the code. When asked about downloading the SDK called Stm32CubeF4 click YES, and the appropriate SDK will be placed in ~/STM32Cube/Repository. This is important as our code will depend on it (although the SDK can also be downloaded separately from here). Here's what the generated directory structure looks like on my computer:

./EWARM : some stuff for some proprietary IDE, I guess.
./blinky.ioc : CubeMX project file.
./Middlewares/ST : USB library which won't be of interest for us since we only want to blink LEDs.

./Drivers : Parts of the CubeF4 gets copied here. 
./Drivers/STM32F4xx_HAL_Driver : Peripheral library.
./Drivers/CMSIS : CMSIS Library is a low level code for interfacing with the CPU.

./Inc/stm32f4xx_it.h : IRQ handlers declarations used by our app. Not particularly useful.
./Inc/stm32f4xx_hal_conf.h : low level peripheral configuration based on CubeMX options.
./Inc/main.h : another unnecessary file.
./Src/system_stm32f4xx.c : Low level init routines run from the startup code before main.
./Src/stm32f4xx_it.c : IRQ handler definitions.
./Src/stm32f4xx_hal_msp.c : high-level peripheral configuration based on CubeMX options (GPIOs etc).

So as you can see, pretty verbose output gets produced by CubeMX, but we are interested only in the *.h and *.c files. My favorite directory structure, on the other hand, looks like the one below (for completeness' sake, the directory resides in ~/workspace/blinky, but of course dir names are up to you). Create it, and copy the generated sources into the src directory like so:

./build : Compiled files go here. Never commit this dir as its contents are generated.
./stm32f407xx.cmake : The toolchain file, more on it later.
./src : Sources generated by the CubeMX. *.h and *.c together, but it's up to you.
./CMakeLists.txt : The CMake file.

One can argue whether STM32F4xx_HAL_Driver and CMSIS should be inside our project tree or not. I prefer having all parts of the SDK in some external directories, simply because copying them into dozens of projects would be a waste and a pollution of the source tree. On the other hand, packaging all the necessary code into our project makes it self-contained and easier to compile (no external deps; your choice). For this article I assume that the SDK resides in ~/STM32Cube/Repository/STM32Cube_FW_F4_V1.21.0 (the default for CubeMX). So now you have two files missing : stm32f407xx.cmake and CMakeLists.txt. The former looks like this:

# This variable is used later to set a C/C++ macro which tells CubeF4 which µC to use. ST
# drivers, and other code is bloated with all sorts of macros, #defines and #ifdefs.
SET (DEVICE "STM32F407xx")
# This is a variable which is later used here and in the CMakeLists.txt. It simply tells
# where to find the SDK (CubeF4). Please change it accordingly if you have another
# version of CubeF4 installed.
SET (CUBE_ROOT "$ENV{HOME}/STM32Cube/Repository/STM32Cube_FW_F4_V1.21.0")
# Startup code and linker script - more on it later.
SET (STARTUP_CODE "${CUBE_ROOT}/Projects/STM32F4-Discovery/Templates/SW4STM32/startup_stm32f407xx.s")
SET (LINKER_SCRIPT "${CUBE_ROOT}/Projects/STM32F4-Discovery/Templates/SW4STM32/STM32F4-Discovery/STM32F407VGTx_FLASH.ld")
# Magic settings. Without them CMake tries to run test programs on the host platform, which
# fails of course.
SET (CMAKE_SYSTEM_NAME Generic)
SET (CMAKE_SYSTEM_PROCESSOR arm)
# -mcpu tells which CPU to target obviously. -fdata-sections -ffunction-sections tell GCC to
# get rid of unused code in the output binary. -Wall produces verbose warnings.
SET(CMAKE_C_FLAGS "-mcpu=cortex-m4 -std=gnu99 -fdata-sections -ffunction-sections -Wall" CACHE INTERNAL "c compiler flags")
# Flags for g++ are used only when compiling C++ sources (*.cc, *.cpp etc). -std=c++17 turns
# on all the C++17 goodies, -fno-rtti -fno-exceptions turns off RTTI and exceptions.
SET(CMAKE_CXX_FLAGS "-mcpu=cortex-m4 -std=c++17 -fno-rtti -fno-exceptions -Wall -fdata-sections -ffunction-sections -MD -Wall" CACHE INTERNAL "cxx compiler flags")
# Those flags get passed to the linker, which is run by GCC at the end of the process.
# -T tells the linker which LD script to use, -specs=nosys.specs sets the specs which most 
# notably tells the compiler to use libnosys.a which contains all the syscalls like _sbrk,
# _exit and much more. They are more like an interface between our program and the operating system /
# bare metal system we are running it on. You can use rdimon.specs instead, or write the syscalls 
# yourself, which for bare-metal isn't difficult. --gc-sections strips unused code from the
# binaries, I think.
SET (CMAKE_EXE_LINKER_FLAGS "-T ${LINKER_SCRIPT} -specs=nosys.specs -Wl,--gc-sections" CACHE INTERNAL "exe link flags")
# Some directories in the GCC tree (adjust if your toolchain lives elsewhere).
INCLUDE_DIRECTORIES ("/usr/lib/arm-none-eabi/include")
# Macro I wrote about in the first line.
ADD_DEFINITIONS ("-D${DEVICE}")
# Random include paths for CubeF4 peripheral drivers and CMSIS.
INCLUDE_DIRECTORIES ("${CUBE_ROOT}/Drivers/STM32F4xx_HAL_Driver/Inc")
INCLUDE_DIRECTORIES ("${CUBE_ROOT}/Drivers/CMSIS/Include")
INCLUDE_DIRECTORIES ("${CUBE_ROOT}/Drivers/CMSIS/Device/ST/STM32F4xx/Include")

A few more words on the linker script (LD script) and the startup code. I won't dive into detail too much, but the LD script tells the linker how to assemble the final executable from all the object files that were compiled by the compiler. Basically, for every *.c or *.cc file one object (*.o or *.obj) file gets created (you can find them later inside the build/CMakeFiles dir), and every one of them is full of symbols stored in "input" sections. The LD script tells the linker which input symbols from which input sections should be copied into the "output" sections in the final executable (and much more; LD scripts are powerful).

After powering up, the Cortex-M4 CPU looks at memory address 0x0000'0000, which tells it where the stack starts. Then at address 0x0000'0004 it checks where the reset IRQ routine is and runs it. The reset IRQ is the first of the many IRQ handlers whose addresses reside in the IRQ vector defined in the startup (assembly) file. The reset IRQ is called Reset_Handler in ST's assembly startup code and does a few simple things:

  • Copies the .data section from flash to RAM. .data section contains global initialized variables.
  • Clears the .bss RAM section which contains uninitialized static variables.
  • Calls SystemInit which I mentioned previously (basic stuff like external memory controller maybe?)
  • Calls __libc_init_array which among other things that I’m unaware of calls C++ constructors.
  • Calls main function.
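The copy-and-zero part of this sequence can be sketched in plain C. This is a host-runnable simulation, not ST's actual code: the arrays below stand in for the memory regions which, on a real target, would be described by linker-script symbols (commonly named _sidata, _sdata, _edata, _sbss and _ebss in ST's LD scripts):

```c
#include <stdint.h>
#include <string.h>

/* Stand-ins for memory regions. On a real target these would be linker-script
   symbols, not arrays: flashData plays the role of the .data load image stored
   in flash, ramData the .data section in RAM, ramBss the .bss section. */
static const uint32_t flashData[3] = {0xCAFE, 0xBABE, 0xF00D};
static uint32_t ramData[3];
static uint32_t ramBss[4] = {1, 2, 3, 4}; /* simulates post-power-up garbage */

/* What Reset_Handler does before anything else can run. */
static void resetHandlerMemInit (void)
{
        /* 1. Copy .data (initialized globals) from flash to RAM. */
        memcpy (ramData, flashData, sizeof (ramData));
        /* 2. Zero out .bss (uninitialized globals). */
        memset (ramBss, 0, sizeof (ramBss));
        /* 3. The real handler then calls SystemInit (), __libc_init_array ()
           and finally main (). */
}
```

Only after those two loops have run can C code rely on its globals having the values the source promises.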

Writing startup code and an LD script is not trivial (although the startup code can be written in C), so thankfully ST provided us with their own implementations, which we can use. There is an excellent article on the subject for those who are interested. Go to ~/STM32Cube/Repository/STM32Cube_FW_F4_V1.21.0 and issue a find :

find ~/STM32Cube/Repository/STM32Cube_FW_F4_V1.21.0/ -name startup_stm32f407xx.s

There are lots of hits, because the CubeF4 SDK is made so it can be used with all of ST's evaluation boards, of which there are tons. The file we found, though, seems to be prepared for our board (Stm32F4-DISCOVERY, or sometimes Stm32F4-DISCO for short). Secondly, they provide projects for 3 or 4 major (in their opinion) IDEs. Files prepared for TrueStudio and SW4STM32 are the way to go, as those are GCC-based apparently (never used them).

For finding the LD script issue:

find ~/STM32Cube/Repository/STM32Cube_FW_F4_V1.21.0/ -name STM32F407*.ld 

Modify stm32f407xx.cmake according to your findings (the STARTUP_CODE and LINKER_SCRIPT variables), though if you followed my instructions directly, you should be fine without modifications.

Next, the CMakeLists.txt file. This is like a Makefile but at a higher level of abstraction. Mine looks like this:

PROJECT (blinky)
# Startup code is written by ST in assembly, so without this statement there are errors.
ENABLE_LANGUAGE (ASM)
# Resonator used in this project. Stm32F4-DISCO uses 8MHz crystal. I left this definition here
# in the CMakeLists.txt rather than the toolchain file, because it's project dependent, not
# "platform" dependent, where by platform I mean STM32F4.
ADD_DEFINITIONS ("-DHSE_VALUE=8000000")
# All the sources go here. Adding headers isn't obligatory, but since QtCreator treats CMakeLists.txt as
# its "project configuration" it simply makes header files appear in the source tree pane.
ADD_EXECUTABLE (${CMAKE_PROJECT_NAME}.elf ${STARTUP_CODE} src/main.c src/stm32f4xx_it.c src/stm32f4xx_hal_msp.c src/system_stm32f4xx.c)
# Workaround : splitting C and C++ code helps QtCreator parse header files correctly. Without it, QtCreator
# sometimes treats C++ as C and vice versa. EDIT : this comment was written when C++
# files were present in the ADD_EXECUTABLE. The SDK (HAL) sources go into a separate static
# library (add more stm32f4xx_hal_*.c files here if you enable more peripherals).
add_library ("stm" STATIC
    "${CUBE_ROOT}/Drivers/STM32F4xx_HAL_Driver/Src/stm32f4xx_hal.c"
    "${CUBE_ROOT}/Drivers/STM32F4xx_HAL_Driver/Src/stm32f4xx_hal_cortex.c"
    "${CUBE_ROOT}/Drivers/STM32F4xx_HAL_Driver/Src/stm32f4xx_hal_gpio.c"
    "${CUBE_ROOT}/Drivers/STM32F4xx_HAL_Driver/Src/stm32f4xx_hal_rcc.c")
# This links both pieces together.
TARGET_LINK_LIBRARIES (${CMAKE_PROJECT_NAME}.elf -Wl,--whole-archive stm -Wl,--no-whole-archive)
ADD_CUSTOM_TARGET("upload" DEPENDS ${CMAKE_PROJECT_NAME}.elf COMMAND ${OPENOCD} -f /usr/share/openocd/scripts/interface/stlink-v2.cfg -f /usr/share/openocd/scripts/target/stm32f4x.cfg -c 'program ${CMAKE_PROJECT_NAME}.elf verify reset exit')

OK, now that we are set, let's create the build directory if it doesn't exist yet, and build our project:

mkdir build
cd build
cmake -DCMAKE_CXX_COMPILER=arm-none-eabi-g++ \
    -DCMAKE_C_COMPILER=arm-none-eabi-gcc \
    -DCMAKE_TOOLCHAIN_FILE=../stm32f407xx.cmake -GNinja ..
ninja

If all went well, blinky.elf should appear in the build directory. Ninja isn't mandatory (but it's faster, more modern and jazzy), so if you omit the -GNinja part, you will be left with a classic Makefile and you would issue make instead of ninja at the end.

Feel free to post a comment in case of any trouble, we will sort it out.


Wires are for some other project. This board has been through a lot.

I don't know why, but most of the Internet and books say "downloading" to describe the process of transmitting a binary firmware from the host (PC) to the target (µC). I find it very confusing, because when a file is moved from a PC to some remote server, everybody calls it "uploading", not "downloading".

Let's modify our generated code so it actually blinks. Locate the main loop in the main.c file (it will be empty), place this inside, and recompile (simply issue ninja or make inside the build dir):

 HAL_Delay (500);
 HAL_GPIO_WritePin(GPIOD, LD4_Pin|LD3_Pin|LD5_Pin|LD6_Pin, GPIO_PIN_SET);
 HAL_Delay (500);
 HAL_GPIO_WritePin(GPIOD, LD4_Pin|LD3_Pin|LD5_Pin|LD6_Pin, GPIO_PIN_RESET);

In the "Other tools" section we installed openocd, so let's use it now. Connect the STM32F4-DISCO, make sure that the ST-LINK and JP1 jumpers are closed (the default), and:

cd build
openocd -f /usr/share/openocd/scripts/interface/stlink-v2.cfg -f /usr/share/openocd/scripts/target/stm32f4x.cfg -c 'program blinky.elf verify reset exit'

If everything went OK, you should see a couple dozen messages like:

adapter speed: 8000 kHz
** Programming Started **
auto erase enabled
Info : device id = 0x10016413
Info : flash size = 1024kbytes
** Programming Finished **
** Verify Started **
verified 7992 bytes in 0.104675s (74.561 KiB/s)

Let me know in case of any problems, we can probably work it out. Remember that I managed to flash the thing under Ubuntu 18.10 running inside Ubuntu 18.04, so it cannot be that difficult :D. Another way of flashing STM32s under Linux is by using Texane's st-link, but I found openocd to be more reliable and universal.

Oh, and I added an “upload” target in the CMakeLists.txt, so you can simply do “ninja upload” instead of running openocd manually.

The QtCreator IDE*

Now that our project compiles and runs from a console, we can integrate it with QtCreator (or another IDE). Run it, and open Help -> About Plugins. Make sure that the BareMetal (experimental) plugin is active and restart the IDE when asked.

Now open Tools -> Options from the main menu.

In the Devices section go to Bare Metal tab and Add OpenOCD GDB server provider. Defaults are OK, don’t change anything.

Apply changes and move to the Devices tab. Add a new Bare Metal Device, name it accordingly (I named it Stm32F4) and pick the OpenOCD GDB Server provider we created in the previous step.

Go to the Kits section, Compilers tab, and make sure GCC for ARM 32 got auto-detected. If it wasn't (because you installed some other GCC-based toolchain in some non-standard place), add it there using the Add button.

Go to the Debuggers tab and add gdb-multiarch, which we installed previously (remember to Apply after each modification).

Finally go to the Kits tab and add a new one. Pick a name (you can even add an icon), change Device type to Bare Metal Device, and pick the proper one in the combo below it. In the Compiler combos pick the ones we created / found in the Compilers tab, and do the same with the Debugger combo.

Change the CMake Configuration (last row) so it contains the options we passed to CMake using the -D flag. After all this effort you should be left with a new kit.

Now we can open our project. Delete the old build directory to be sure the old config doesn't break anything, and open the project by selecting CMakeLists.txt. If you don't delete the old build directory created by hand in the previous paragraphs, a new "temporary" kit gets created; it can be used after some modifications, but we already have a better one.

You will be presented with Configure Project window where you can pick a kit to be used. Uncheck the Desktop kit, and check our Stm32F4 one:

Click Configure project and hit Ctrl-B to verify that everything compiles.

Debugging in QtCreator*

It’s simple. Open a terminal, and run openocd like so:

openocd -f /usr/share/openocd/scripts/interface/stlink-v2.cfg -f /usr/share/openocd/scripts/target/stm32f4x.cfg

This way a GDB remote protocol server is started, and "normal" GDB can communicate with it. Switch to QtCreator, hit F5 and that's it! Hit Shift-F5 to pause the program, and you will be presented with a call stack, variables, and so on. F10 is for stepping over, F11 for stepping into, and Shift-F11 for stepping out. Happy debugging.

[BlueNRG-2-Android] Source code troubleshooting, bonding and privacy

The objective of the firmware presented in this article is to provide an example which would implement the following functionality:

  • After resetting the BlueNRG-2 (called the “device” or simply BlueNRG later on) it would make itself general discoverable and accept a connection from whatever connects first.
  • It would bond with this first connecting thing (called the “cellphone” later on) and allow it to connect and modify GATT characteristics exclusively.
  • No other central device (cellphone) would be allowed to modify device’s characteristics (or even connect).
  • This BlueNRG device would work with modern iOS and Android phones (it’s 2018 when I’m writing this).

My starting point was an example from STSW-BLUENRG1-DK version 3.0.0 named BLE_Security, which resides in the BlueNRG-1_2-DK-3.0.0/Project/BLE_Examples/BLE_Security/src/peripheral directory. This package comes as an executable if I remember correctly, but most of ST's execs run under wine (wine-3.6 (Ubuntu 3.6-1), Ubuntu version 18.04). Just after incorporating the example source code into my project I was able to pair, bond, connect and read characteristics on the BlueNRG-2 using the nRF Connect app running on Android 8 on a Huawei Honor 9.

The problem arose, though, when I tried to disconnect and then connect again. In that case I would always get "GATT ERROR 0x85" in the nRF Connect logs, and the only way to connect to the BlueNRG again was to reset it (which clears its security database, removing all bond information) and to remove the bond information from the cellphone. After some time I posted a question on the ST forum, but not getting an answer I slowly figured out that there was a problem with the whitelist, because when I turned on "undirected connectable" without the whitelist, everything worked OK. Let me explain. In the file BlueNRG-1_2-DK-3.0.0/Project/BLE_Examples/BLE_Security/src/peripheral/BLE_Security_Peripheral.c there is a Make_Connection function which at first turns on general discoverability :

 ret = aci_gap_set_discoverable (ADV_IND, 0x100, 0x200, PUBLIC_ADDR, filter, sizeof (local_name), local_name, 0, NULL, 0, 0);

with filter == NO_WHITE_LIST_USE, and after bonding it turns on undirected connectable mode like this :

 ret = aci_gap_set_undirected_connectable(0x100,0x200,PUBLIC_ADDR, filter);

with filter == WHITE_LIST_FOR_ALL. The way I see this problem is that modern cellphones use random private resolvable addresses which can change anytime, so it may happen that the address the BlueNRG has on its whitelist is out of date. The solution is to turn on controller privacy (introduced as so-called link layer privacy 1.2 in BLE 4.2), which resolves addresses in the link layer tier, thus allowing whitelisting. There are more reliable sources on the topic, but the way I understand it is that [I may be wrong here] in the 3rd phase of bonding the peripheral and the central exchange IRKs (along with other information), and thus the peer device identity is stored in the BlueNRG. This peer device identity consists of the peer's identity address (the "real" address, which can be public or static random), the local IRK and the peer's IRK. This information is then passed to the controller, letting it resolve random private resolvable addresses from the peer (cellphone) or "randomize / hide" its own address. [/I may be wrong here]

To turn the controller privacy on I modified aci_gap_init as so:

ret = aci_gap_init (GAP_PERIPHERAL_ROLE, 0x02, sizeof (deviceName), &service_handle, &dev_name_char_handle, &appearance_char_handle);

but it returned the error BLE_STATUS_INVALID_PARAMS. The same happened when I wanted to turn on "LE secure connections" instead of "legacy pairing", where I got BLE_ERROR_UNSUPPORTED_FEATURE. It turned out that I needed to modify stack_user_cfg.h and turn the appropriate options on :
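For reference, the two options in question look more or less like this in stack_user_cfg.h (macro names as I recall them from the DK 3.0.0 package; verify against your own copy, as they may differ between versions):

```c
/* stack_user_cfg.h (fragment) -- names taken from BlueNRG-1_2 DK 3.0.0,
   double-check them in your own copy. */
#define CONTROLLER_PRIVACY_ENABLED   (1U) /* link layer privacy 1.2  */
#define SECURE_CONNECTIONS_ENABLED   (1U) /* LE secure connections   */
```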


In addition I added stack_user_cfg.c and libcrypto.a to the source base, and was able to get past the aci_gap_init call. Next I wanted to turn on "LE secure connections", which were introduced in BLE 4.2. This is some fancy option which modifies the way the bonding process goes and introduces even more security, yay. I did:


and got BLE_STATUS_OUT_OF_MEMORY (0x48) when adding my first (and only) custom GATT service. It turns out that when SC is supported, another characteristic gets added to the Generic Access service, called the Central Address Resolution characteristic. So I needed to fire up BLUENRG1_Wizard.exe and generate a new header file with the "privacy – controller" option turned on. This way the ATT attributes number was increased by 1 and the out-of-memory error went away.

The next problem was that aci_gap_set_discoverable started to return BLE_STATUS_INVALID_PARAMS (0x42). It turns out that when host or controller privacy is turned on, the Own_Address_Type parameter can only be RESOLVABLE_PRIVATE_ADDR or NON_RESOLVABLE_PRIVATE_ADDR, and conversely, when privacy is off, Own_Address_Type can only be PUBLIC_ADDR or STATIC_RANDOM_ADDR. So if you want the BlueNRG to resolve private resolvable addresses from the peer, you are forced to use them on the NRG as well.
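So the earlier aci_gap_set_discoverable call needs a different Own_Address_Type argument. A sketch, reusing the parameters shown before (constant names follow the DK headers, so double-check them against your SDK version):

```c
/* Same advertising parameters as before, but with an own-address type
   that is legal once controller privacy is enabled. */
ret = aci_gap_set_discoverable (ADV_IND, 0x100, 0x200, RESOLVABLE_PRIVATE_ADDR,
                                NO_WHITE_LIST_USE, sizeof (local_name), local_name,
                                0, NULL, 0, 0);
```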

This way I was able to run the firmware without any errors, but my cellphone was unable to detect the device 99% of the time. It would detect it only on very rare occasions. The problem was solved after… replacing the 16MHz High Speed oscillator crystal with a 32MHz one. I really do not know where this bug/feature is documented; I found it myself in a script of the BlueNRG GUI application. After the replacement I made a change so the HS_SPEED_XTAL macro would preprocess to 1 and SYSCLK_FREQ to 32000000.

And the last problem I had, which took me almost 2 days to fix, was caused by (I think) an improper capabilities setup:

ret = aci_gap_set_io_capability (IO_CAP_NO_INPUT_NO_OUTPUT);

I wanted to bond using the "Just Works" scheme, but this particular setup made the pairing process behave very oddly. I was able to scan for my device (still using nRF Connect) and to bond (… -> bond), but it would not appear on the "Bonded" list as it usually does:

And although I could connect to such an oddly bonded device and even reconnect to it multiple times, what I could not achieve was reconnecting after the devices disconnected by themselves due to signal loss (i.e. when the cellphone was carried away). In that case, when I brought my cellphone back and turned scanning on, my BlueNRG device would reappear in the "Scanned" list, but with a different address! Trying to connect to it would return GATT ERROR 0x3d in the nRF logs, which means BLE_ERROR_CONNECTION_END_WITH_MIC_FAILURE (the MIC is a little chunk of bytes added to the payload when privacy is turned on, if I remember correctly). To fix this I had to do two things:

  • reset the phone and find out that the "Bonded" list in nRF Connect was all of a sudden populated with ten or so instances of my BlueNRG device, made that day during some of the tests.

And then during pairing I had to enter a PIN (123456), and after that I was able to do whatever I wanted : disconnect, go out of range, reconnect, turn Bluetooth off and on, reset the phone and so on. All worked pretty fine.

Remarks / TODOs:

  • My peripheral's (BlueNRG's) name isn't shown in the "Scanned" list in nRF Connect, and I don't know how to fix this.
  • I don’t know how to enable “Just Works” scheme.
  • Linux (Ubuntu 18.04) uses static random addresses, so the whole address resolving and privacy thing is not a problem.

Self balancing robot improvements

I was dissatisfied with the robot's performance, so I experimented a lot with various aspects of it to improve things. Along the way the robot, however simple in concept, proved to be a pretty complex system which depends on many factors that I didn't anticipate. This variety of factors, which all together influence its movements, makes it very difficult to troubleshoot problems. So now I'll try to recall my adventures with it in order. Oh, and my objective (best case scenario) is to make it stable to the point that it stands completely still. This is what I found the most difficult.

Better motors

Long (tall) version with hard tires.

I decided my first set of motors (Dagu DG02S 48:1) had too low torque at low RPM (driven from a DRV8835 H-bridge, again with PWM). The robot was able to stand, but wiggled (video in the previous post). So I changed them to JK28HS32 stepper motors, which proved to be even weaker. They simply have too low torque across the whole RPM range for the weight of my robot. Other disadvantages:

  • They draw tons of current even when the robot is stationary. This is because un-geared steppers (at least small ones) spin freely when unpowered, and thus need current to hold them in position.
  • More complex electronics (two H-bridges per stepper instead of one per brushed motor), and much more complex programming.
  • Intense vibration, even in half-stepping mode. And I used 200-step ones. The solution for that would be to use smaller wheels with soft tires, or to implement micro-stepping, which would in turn reduce the incremental torque. Every time I use one of these steppers and dig into the subject, I realize how difficult a stupid motor might get.

So, not without disappointment, I moved to another set of motors, which happened to be the Brushless DC Motor with Encoder 12V 159RPM from DFRobot (I found them on Aliexpress BTW). To my consternation it didn't help much. The advantages of the newly mounted motors are their high torque and ease of interfacing (they have a controller stuffed inside), but the cons are:

  • They have quite some backlash.
  • They run on 12V, so I had to rework some of the electronics.
  • They could be a little bit faster, though in the end I was able to stabilize the thing pretty satisfactorily.

Geared BLDC motors

To sum up : high-quality motors eliminate one point of failure and let you focus on other aspects when something is failing. The ideal motors that I would like to have are:

  • Precise in terms of control. A wide range of RPMs, i.e. able to spin super slow or quite fast, with decent torque in all situations. What is a decent torque? Dunno. 2,4 kg*cm?
  • Fast. I would love to have 300RPM.
  • Minimal backlash.
  • Encoders built in.
  • Power efficiency and ease of programming would be nice too, but these are not the most important things.


So then I finally decided that I would stop there with changing motors, and wouldn't replace them for the 4th time, but would try to tune the PID and fiddle with the software instead. I replaced my fusion algorithm, from a simple complementary one to Madgwick's, whose open-source implementation is available online; it is part of the PhD thesis of some clever guy (Sebastian Madgwick). I cannot stress how far superior this algorithm is over my humble thing. But no luck with that either. If I increased Kp and Ki it would oscillate vigorously (Kd was of not much help there), and when I decreased Kp, and especially Ki, the robot would always drift away, gaining speed and tipping over.

Playing with the dreaded thing, which would not stand at all and wiggled like some drunk, I recalled that in many YT videos (user upgrdman among my favorites on the subject) people were driving their robots on soft surfaces like carpets, and that made me wonder. So I put my robot on a blanket, and it helped a little. It reduced both the oscillations and the drifting. I have a theory that a soft surface (and/or soft tires) first damps vibrations, but also, when the tire flattens under the weight, it somewhat blocks further movements. Hard tires do not have this effect, and on hard surfaces the wheels spin whenever the robot is only slightly out of balance, whereas on soft ones this slight error can be counteracted by the deformed wheel to some very small extent. Think of dry sand on a beach and big soft wheels : I think even when unpowered, the robot would have a chance to stand still.

So I thought about changing the wheels (and tires), because at that time I used pretty hard wheels from a hardware store (some furniture ones), and I ordered 70mm and 110mm squishy wheels for model aircraft. And then, while waiting for the wheels, I made a hasty decision to shorten the thing.
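For comparison, the simple complementary filter I started with boils down to a single line. This is a generic sketch (the 0.98 weight and the degree units are illustrative, not my actual tuning):

```c
/* Complementary filter : trust the integrated gyro short-term (it is smooth
   but drifts), and the accelerometer long-term (noisy but drift-free). */
#define COMP_ALPHA 0.98f

/* pitch : previous estimate [deg], gyroRate : gyro reading [deg/s],
   accPitch : pitch derived from the accelerometer vector [deg]. */
static float complementaryFilter (float pitch, float gyroRate, float accPitch, float dt)
{
        return COMP_ALPHA * (pitch + gyroRate * dt) + (1.0f - COMP_ALPHA) * accPitch;
}
```

Madgwick's filter does much more elaborate quaternion-based fusion, but it fills the same role in the control loop.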


Bear in mind that I can be completely wrong here! As far as I understand, with a normal pendulum, the longer the string, the higher the linear velocity of the pendulum bob. I assumed the same would hold for an inverted pendulum: the taller the robot, the higher the velocity of its top end when tipping over. Thus, I concluded, the faster the lower-end movements would have to be to counteract the moving top. At that time I thought my motors were too slow, so it seemed to be a good idea to shorten the body. So I did. And the improvement was negligible, if any. Also, from the pendulum frequency equation I knew that shortening the length increases the pendulum frequency, so I understood that my algorithm would have to react faster from then on.

Mechanical issues

Then I realized that the wheels had gotten loose. I ordered a bunch of adapters and fixed them in place. Still no luck.

Scratching head and tuning PID

I couldn't sleep; I kept visualizing my dangling robot. I tried to tune the PID controller a lot. And still, every time I increased Ki it oscillated, and when I decreased it, the robot drifted away. Too much Ki and it oscillated left and right; too much Kd and it also oscillated, but in only one direction, like a hiccup. The derivative term damps the output when the error decreases, so in theory the robot should decelerate near the sweet spot and come to a full stop, but instead it would stop for a moment (a millisecond moment), start again in the same direction, then stop, and then again, all in rapid sequence.

nRF24L01+ distraction

I was frustrated and sick and tired, so I decided to do something different as a break from the main subject. I got into the nRF24 code and telemetry. I wanted to build full telemetry for the project, as user upgrdman did; I thought it would help me debug. Then I ran into other problems and procrastinated a little more, refactoring my SPI library and so on. I had curious adventures with the nRF too, but that is a completely different story. I use an excellent application called KST2 for plotting the data. From the plots I observed that the robot was drifting due to sudden integral peaks. The robot would balance for a moment, then the pitch plot would show a slight shift in one direction, and just after that the integral part grew, the robot started to move and tried to catch up, the I term increased and increased, the robot sped up, and then the output saturated, and that was it. The only thing which helped a little was to increase Ki (it amplified the reaction to the integral, so the robot drove faster and was able to catch up and straighten itself), but it also increased the oscillations.

Main loop frequency

In an act of desperation, not hoping for any improvement, I increased the main loop frequency from 100Hz to 1kHz. I could do that because I use a very fast STM32F4 (compared to the Arduinos and ATmegas that everybody seems to love so much) with an FPU. And that was it. It calmed the robot so much that I could increase Ki to values not possible before. Then I mounted the 110mm squishy wheels, and it helped even more, for the reasons described above and because bigger wheels give more speed. That's all for now; I'm a little tired of this project, but in the future I plan to:

  • Program encoders.
  • Program remote control (I bought myself nice Devo 7E transmitter).
  • Experiment with all (or most) of the parameters I talked about. Height vs. loop frequency, various wheels and so on.

Wheels galore

Self balancing robot first tests

I have built a self balancing robot, and here I want to post some notes on the problems I encountered. It was (as usual) more difficult than I anticipated, but I guess every hacker knows this feeling.


The frame is made of 8mm aluminium pipes, 2 sheets of 2mm solid transparent polycarbonate, and aluminium angle bars. The pieces are screwed together with threaded rods and self-locking nuts. Making the frame was the easiest part, so I won't write much about it.


The motors are labeled “Dagu DG02S 48:1”, and I am disappointed, because they have very low torque at low RPM and are of low quality, I think (their low price also suggests that). The consequence is that they are unable to move my robot gently and precisely; instead they start all of a sudden at a couple of RPM. At a very low PWM duty cycle they are unable to move the wheels at all, even with no load. The lowest speed I can get out of them is probably a couple of RPM, which sounds OK, but I think many of my problems arise from them being unable to gently move the wheels a few millimeters back and forth. Such gentle movement is, in my opinion, crucial for the steady (nearly stationary) operation that you see on certain YouTube videos. So the next tests (if there are any) will be performed with steppers or geared brushless motors. Oh, and there are no encoders either.


If the motors are the brawn, the electronics are the brain. The main board is an STM32F4-DISCO and is connected to the battery via custom perf-boards with connectors. On top of the robot there is a single 18650 Samsung cell paired with a cheap LiPo charger and a switch. I chose it because it is a lot cheaper per mAh than those silverish ones. The battery sits inside a bracket ordered on AliExpress. As for the accelerometer and gyroscope, the super popular MPU 6050 does the job (also AliExpress). The motors are driven by a Pololu DRV8835, and last but not least, an nRF24L01+ is used for connectivity with the robot, which is crucial for tuning the PID controller without dangling cables, which would degrade stability and shift the center of mass. Like shooting at a moving target.

Oh, and I wanted to control the robot using a cheap CX-10 remote, but it failed completely. I based my code on the Deviation project and others' work, but after many hours I gave up. Then I got a Syma X5HW for Christmas, and with its remote I finally had success (on the first try, after like 10 minutes of coding; not every day does something like that happen. But of course I only had to modify the parameters of my CX-10 code). At first I confused the binding channel number (I set 0 instead of 8) and was still able to receive data, but only from approx 5 cm away. After setting it correctly to channel 8, the range increased dramatically.


With the electronics and motors in place came programming, the most difficult part. First I got the robot's peripherals to work (an Arduino library called i2cdevlib was very helpful), so I was able to read raw data from the MPU 6050, send basic commands via nRF, and spin the motors. Then, the most challenging part was to implement:

  • pitch angle calculation,
  • fusion of gyroscope and accelerometer data,
  • PID tuning (though implementing a PID is dead simple, tuning it is a completely different story). Most helpful for me in this regard was probably the Wikipedia article on the PID controller. In my case, the most (positive) impact came from the integral part, whereas the D part seems to have minimal influence.

I'll try to force myself to go into detail on the topics above, as I have only scratched the surface.

Implementing zoom with moving pivot point using libclutter

I'm making this post because I struggled with this functionality a lot! Implementing it (to the extent I was happy with) took 4 or even 5 days! OK, first goes an animated gif which shows my desired zoom behavior (check out Inkscape or other graphics programs; they all work the same in this regard):

Desired zoom functionality

Basically the idea is that the center of the scale transformation is where the mouse pointer is on the screen, so the object under the cursor does not move while scaling, whereas the objects around it do. My first and most obvious implementation (which didn't work as expected) was as such:

/**
 * Point center is in screen coordinates. This is where the mouse pointer is on the screen.
 * The layout is : GtkWindow has ClutterStage which contains ClutterActor scaleLayer
 * (in this function named self), which contains those blue
 * circles you see on the animated gif.
 * ScaleLayer is simply a huge invisible plane which contains all the objects a user
 * interacts with. The user can pan and zoom it as he wishes.
 */
void ScaleLayer::zoomOut (const Point &center)
{
        double scaleX, scaleY;
        // This gets our actual zoom factor.
        clutter_actor_get_scale (self, &scaleX, &scaleY);
        // We decrease the zoom factor since this is a zoomOut method.
        float newScale = scaleX / 1.1;

        float cx1, cy1;
        // Like I said, 'center' is in screen coords, so we convert it to scaleLayer coords.
        clutter_actor_transform_stage_point (self, center.x, center.y, &cx1, &cy1);

        float scaleLayerNW, scaleLayerNH;
        clutter_actor_get_size (self, &scaleLayerNW, &scaleLayerNH);
        // We set the pivot point.
        clutter_actor_set_pivot_point (self, double(cx1) / scaleLayerNW, double(cy1) / scaleLayerNH);
        // And finally perform the scaling. Fair enough, isn't it?
        clutter_actor_set_scale (self, newScale, newScale);
}

Here is the outcome of the above:

Zoom fail

It is fine until you move the mouse cursor (which changes the pivot point, i.e. the center of the scale transformation) while the scale is not == 1.0. I don't know why this happens; apparently I do not understand affine transformations as well as I thought, or there is a bug in libclutter (I doubt it). The solution is to convert the pivot point from screen to scaleLayer coordinates before scaling (as I did), and again after scaling. The difference is in scaleLayer coordinates, so it must be converted back to screen coordinates, and the result can be used to cancel the offset you see on the second gif. Here is my current implementation:

void ScaleLayer::zoomIn (const Point &center)
{
        double x, y;
        clutter_actor_get_scale (self, &x, &y);

        if (x >= 10) {
                return;
        }

        double newScale = x * 1.1;

        if (newScale >= 10) {
                newScale = 10;
        }

        scale (center, newScale);
}

void ScaleLayer::zoomOut (const Point &center)
{
        ClutterActor *stage = clutter_actor_get_parent (self);

        float stageW, stageH;
        clutter_actor_get_size (stage, &stageW, &stageH);

        float dim = std::max (stageW, stageH);
        double minScale = dim / SCALE_SURFACE_SIZE + 0.05;

        double scaleX, scaleY;
        clutter_actor_get_scale (self, &scaleX, &scaleY);

        if (scaleX <= minScale) {
                return;
        }

        scale (center, scaleX / 1.1);
}

void ScaleLayer::scale (Point const &c, float newScale)
{
        Point center = c;
        float cx1, cy1;

        if (center == Point ()) {
                if (impl->lastCenter == Point ()) {
                        float stageW, stageH;
                        ClutterActor *stage = clutter_actor_get_parent (self);
                        clutter_actor_get_size (stage, &stageW, &stageH);
                        impl->lastCenter = Point (stageW / 2.0, stageH / 2.0);
                }

                center = impl->lastCenter;
        }
        else {
                impl->lastCenter = center;
        }

        clutter_actor_transform_stage_point (self, center.x, center.y, &cx1, &cy1);
        float scaleLayerNW, scaleLayerNH;
        clutter_actor_get_size (self, &scaleLayerNW, &scaleLayerNH);
        clutter_actor_set_pivot_point (self, double(cx1) / scaleLayerNW, double(cy1) / scaleLayerNH);
        clutter_actor_set_scale (self, newScale, newScale);

        // Idea taken from here :
        // Convert the same stage point again, now that the new scale is applied.
        float cx2, cy2;
        clutter_actor_transform_stage_point (self, center.x, center.y, &cx2, &cy2);

        ClutterVertex vi1 = { 0, 0, 0 };
        ClutterVertex vo1;
        clutter_actor_apply_transform_to_point (self, &vi1, &vo1);

        ClutterVertex vi2 = { cx2 - cx1, cy2 - cy1, 0 };
        ClutterVertex vo2;
        clutter_actor_apply_transform_to_point (self, &vi2, &vo2);

        float mx = vo2.x - vo1.x;
        float my = vo2.y - vo1.y;

        clutter_actor_move_by (self, mx, my);
}

The whole project is here :
Here's the thread which pointed me in the right direction :

Cross compilation with GCC & QtCreator for ARM Cortex M

This post is outdated! Please see for more up-to-date instructions.

I used Eclipse CDT for years for C/C++ and was disappointed by its bulkiness, slowness and memory usage. I did mostly embedded work, and sometimes GTK+ desktop apps (even OpenGL once or twice). I looked for a replacement, tried a dozen or more IDEs and editors, and finally settled on QtCreator (my main criteria were: great code navigation (Eclipse often gets confused by serious, templated C++ code), great code completion, and CMake integration). I am satisfied for now (it's been a year), and I use it for everything but Qt. But all of a sudden, when a new version appeared, I came across a minor flaw in the CMake builder, which reported an error like “Can't link a test program”. Obviously it can't, because it used the host compiler instead of the ARM one.

So in versions prior to 4.0.0 I used to configure my project with cmake like:

cd build
cmake ..

Then, from QtCreator, I simply compiled the project, and it worked flawlessly. But since QtCreator 4.0.0 it started to invoke cmake in every possible situation, be it IDE startup or saving a CMakeLists.txt file. And it did so with -DCMAKE_CXX_COMPILER=xyz, where “xyz” was the path configured in Tools -> Options -> Build & Run -> Compilers. If I ran cmake manually, without this CMAKE_CXX_COMPILER variable set, everything was OK. I saw on the Internet that many people had the same problem and used QtCreator for embedded work like I do (see comments here).

So I decided that instead of forcing QtCreator to stop invoking cmake, or invoking it with different parameters, I should fix my CMakeLists.txt so it would work the way QtCreator wants. The solution I found was CMAKE_FORCE_C_COMPILER and CMAKE_FORCE_CXX_COMPILER, documented here. My CMakeLists.txt looks like this:


include (stm32.cmake)

PROJECT (robot1)


LIST (APPEND APP_SOURCES "src/stm32f4xx_it.c")
LIST (APPEND APP_SOURCES "src/syscalls.c")
LIST (APPEND APP_SOURCES "src/system_stm32f0xx.c")
LIST (APPEND APP_SOURCES "src/config.h")
LIST (APPEND APP_SOURCES "src/stm32f0xx_hal_conf.h")




And the “toolchain file” is like this:

SET (TOOLCHAIN_PREFIX "/home/iwasz/local/share/armcortexm0-unknown-eabi" CACHE STRING "")
SET (TARGET_TRIPLET "armcortexm0-unknown-eabi" CACHE STRING "")
SET (CUBE_ROOT "/home/iwasz/workspace/stm32cubef0")
SET (CRYSTAL_HZ 16000000)
SET (STARTUP_CODE "src/startup_stm32f072xb.s")


SET (CMAKE_C_FLAGS "-std=gnu99 -fdata-sections -ffunction-sections -Wall" CACHE INTERNAL "c compiler flags")
SET (CMAKE_CXX_FLAGS "-std=c++11 -Wall -fdata-sections -ffunction-sections -MD -Wall" CACHE INTERNAL "cxx compiler flags")
SET (CMAKE_EXE_LINKER_FLAGS "-T ${LINKER_SCRIPT} -Wl,--gc-sections" CACHE INTERNAL "exe link flags")



Two most important changes were:

  • Using the CMAKE_FORCE_CXX_COMPILER macro instead of simply setting the CMAKE_CXX_COMPILER variable.
  • Including the toolchain file (and thus setting / forcing the compiler) before the PROJECT macro.

My configuration as of writing this:

  • Qt Creator 4.0.2, Based on Qt 5.7.0 (GCC 4.9.1 20140922 (Red Hat 4.9.1-10), 64 bit), Built on Jun 13 2016 01:05:36, From revision 47b4f2c738
  • Host system : Ubuntu 15.10, Linux ingram 4.2.0-38-generic #45-Ubuntu SMP Wed Jun 8 21:21:49 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
  • Host GCC : gcc (Ubuntu 5.2.1-22ubuntu2) 5.2.1 20151010
  • Target GCC : armcortexm0-unknown-eabi-gcc (crosstool-NG crosstool-ng-1.21.0-74-g6ac93ed – iwasz) 5.2.0

BlueNRG + STM32F7-DISCO tests


  • Tested the 32bit cross-compiler from Launchpad. It worked OK, but the binaries were much bigger, and the debugger lacked Python support, which is required by QtCreator, the IDE of my choice (for all C/C++ development, including embedded).
  • Made my own compiler with crosstool-NG (master from Git). I used my own tutorial, which can be found somewhere on this blog.
  • Verified that a “hello world” / “blinky” program works. It did not, because I copied code for the STM32F4 and messed up the clock configuration. Examples from STM32CubeF7 helped.
  • I ported my project which aimed to bring my BlueNRG dev board to life. The project consists of a USB vendor-specific class for debugging (somewhat more advanced, and more tuned for the purpose, than a simple CDC class), slightly adapted BlueNRG example code from ST (for the Nucleo), and some glue code.
  • I switched to the STM32F7-disco because I couldn't get the BlueNRG to work with the STM32F4-disco. The F7 has an Arduino connector on the back, so I thought it would be better than the bunch of loosely connected wires the F4 required. At that time I thought there was something wrong with my wiring.
  • On the STM32F7-disco, BLE still would not work. I hooked up a logic analyzer and noticed that MISO and CLK were silent. It turned out that I had to use the alternative CLK configuration on the BlueNRG board, which required moving a 0R resistor from R10 to R11. Although it was small (0402), at first I had trouble desoldering it, probably because of the lead-free solder used. See image.


  • The next problem I had was with the SPI_CS pin. According to the BlueNRG dev board docs, I wanted to use the D8 pin (this is the PI2 pin on the STM32F7-disco), but it didn't work. A glimpse at the schematics revealed that SPI_CS is connected to A1, which is marked as “alternative” in the docs. So IMHO this is an error in the docs.
  • Final pin configuration that worked was:
// SPI Reset Pin
// MISO (Master Input Slave Output)
// MOSI (Master Output Slave Input)
// IRQ
// !!!
  • The next thing that puzzled me for quite some time was that after the initial version query command, which seemed to work OK (I got some reasonable looking responses), the communication hung. The IRQ pin went high and wouldn't go low, so the µC thought there was data to read and kept reading and reading.
  • Only after I read in the UM1755 user manual that IRQ goes high when there is data from the NRG to the µC, and Hi-Z when there is no data, did I double check the schematics and find that the BlueNRG dev board has no pull-down resistor on it. The ST example code I had also didn't have the IRQ pin configured in the pull-down state, so I wonder how this could ever have worked. Maybe the Nucleo boards have a pull-down resistor soldered there.
  • For now (after 3 or 4 nights, and I consider myself quite experienced) I got this (see photo):

Screenshot from 2016-03-21 02-09-30

  • “Iwona” is probably my neighbor's phone.
  • A quick update made the next morning: my Linux computer is able to connect to the test project and query it (right now it simply counts up):

Screenshot from 2016-03-21 13-05-25

Internet thermal printer part 2

The printer I bought and described in the previous post really disappointed me. I didn't spend a huge amount of time on it (say 3-4 evenings), but I dug into the subject so deep that I couldn't help myself and did some more hacking. First of all I wanted to know whether my cool looking but quite useless printer could be used in some other way (i.e. the printing head), and whether it was the main board that was broken or the head itself. If the former was true and the head was OK, I would try to communicate with the head directly, and thus pretty much reimplement the whole broken main board. But if the head was broken, I could do nothing but abandon the project or find another printer. And it's funny, because, as you may have seen at the end of my previous post, this is exactly what I wrote not to do. But I just love it. When the work you do all day, every day, is stupid and pointless, when you are constantly bothered with more and more irrelevant things, and after all day you are tired and discouraged, what would you do after arriving home (excluding household duties :D)? Grab a beer, sit and watch TV? Hell no! Grab a beer and tinker some more! It calms me down, you know (unless I'm stuck for too long). The printer head used in my Intermec PW40 is a Seiko (SII) LTP3445 (datasheet here) and it is obsolete; new designs are encouraged to use the LTPV445.

So what I did was solder a bunch of wires to the main board to be able to speak directly to the printing head. The resulting wiring looks like this:

Connected directly to the thermal printer head.

Then I grabbed signals with a logic analyzer and an oscilloscope to figure out what was malfunctioning (i.e. while the printer was operating). In my opinion the main board is broken, because printing short strings like 'A\r\n' works OK, and all the signals seem correct (i.e. 832 bits per row are transferred and quite a few rows are present). But when longer strings are submitted, the whole transmission appears to get corrupted at some point; the serial data burst is clearly shorter, as if interrupted. Unfortunately I only made a screenshot of the correct transmission (A\r\n) and don't have the corrupted one now (and the board has not been operational since I removed the FFC socket). Here's the screen:

Correct transmission to the thermal head issued by the original Intermec PW40 main board. Letter ‘A’ is being transmitted.

The next step was to wire up some circuitry to actually drive the head while it was still soldered to the original main board. I didn't want to break it at that point, but later that ceased to be a priority :D My setup consists of:

Breadboard looks like this:

The circuit. You can see that the printer is more or less intact, i.e. the head is mounted on the main board and the plastic frame. Later on I decided to disconnect the head from the original main board.

The shifters are controlled with 3.3V and output 5V for the head's logic. The whole contraption is powered from a laboratory power supply set to 5V with a low current limit, to prevent smoke and fire in case of wiring errors on my side. The setup drew about 0.1A when idle and 2.5A when feeding the paper. Driving the motor was pretty easy; I had done a stepper motor before, so I caught up with this one rather quickly. But the head took more time, and at some (thankfully short) point I was stuck. First, the DST signal (DST is for power and thus temperature) circuitry on the main board is secured with some (I believe) TTL logic. The idea is that if the thermistor tells the µC that the head is overheating, the µC shuts the head down. This is the first protection mechanism, implemented in software (BTW the manual says that when overheated, the head may cause skin burns, smoke and even fire; it is a thermal one after all). But there is another protection mechanism, done in hardware, which shuts down the head if the software one malfunctions. I believe the two mechanisms are AND-ed by some TTLs. The protection mechanisms pull DST down in case of trouble. In my case, when two logic circuits were actually connected to the head, this caused problems, because the original main board, which was not powered, pulled DST down all the time. The solution was to cut the trace, and that was it (if not cut, DST would stay low no matter what level I was trying to drive it at; the oscilloscope showed only 100mV level changes, obviously too small to be useful).

My transmission. A 12*8 pixel wide bar strip. 12 x 0xaa.

But still no luck after the DST problem was resolved, so I decided that something else on the original main board was interfering and that I needed to disconnect the head from it in order to connect to it directly. I didn't have a spare FFC socket though (Molex 25 pin 1.25 mm pitch, rare and obsolete), so after obtaining a wrong one from Farnell (I bought 1 mm pitch instead of 1.25, duh!) I soldered the wires directly to the FFC cable. It looks awful, but is rigid:

Wires soldered directly to the FFC strip.

Still no luck! What the hell! The logic analyzer still happily showed correct bursts of data, so for the third time I rewired the breadboard and checked the levels with an oscilloscope. And a curious thing was revealed: all the levels (shifted up) were 0-4V instead of 0-5V. I had completely no idea why. My power supply is a cheap one, but can 1 or 2 amps of load cause a 1V drop? Must investigate further. EDIT: my cheap counterfeit Saleae logic analyzer must have a somewhat low input impedance, and that was what caused the significant voltage drop on the logic signals. Disappointing. In the picture below you can see (far left) that only after increasing the voltage repeatedly did the printer start to print:

The first successful printout.

I’m excited!

Internet thermal printer

The idea is shamelessly stolen from this project. EDIT: it evolved… What this project is intended to be:

  • A toy printer (for my son) with some light and sound signals, connected to the Internet and accessible via a web interface. Anyone with the password (basic auth configured in .htaccess) could send a graphic and/or text message to the printer, which would immediately flash the light and beep the buzzer as well as print the message. Protocol to the printer (at the network level): whatever, a.k.a. my own & super-simple.

What this project shall not become (note to myself, because I tend to complicate things) EDIT: it evolved…:

  • An over-engineered wireless CUPS compatible Postscript full featured printer which also makes coffee.

After deciding that I would try to make such a thing (which took approx. 1 second after seeing Jim's site), I went to Allegro (the local eBay; BTW we have eBay here in Poland, but Allegro seems to be winning the battle) and found something printer-ish and seemingly broken, with some parts missing. It is an Intermec PW40 mobile printer. Useful links I found on this printer:

  • Manuals – the Intermec site (those are for the PW50, but I assume they are compatible in some way).
  • Intermec community – they even have forum, and some community around the site.

Photos after dismantling the thing:

Looks like it uses ESC/P, like Jim's printer, and a 7.2V battery pack too. Looks promising (at least some standard language). Elements found on the main board of the PW-40:

I've written that the LTC chip looks promising because it connects the printer to the outside world and gives a hint where to start hacking. It translates RS232 high voltage levels to TTL, but since I wanted to drive the printer directly from some µC, I needed to bypass the LTC. After some research I determined the following: the RS232 port (the one with the RJ socket) is connected to pins 14 (232 input) and 15 (232 output). The corresponding TTL pins are pin 13 (logic output) and pin 12 (logic input). So, as far as I am reasoning correctly:

  • Pin 13 is connected to the Toshiba’s RX pin.
  • Pin 12 is connected to the Toshiba’s TX pin.
  • Whole device can be powered from 12V supply (I read that somewhere).
  • Let's try it! Seems to work; at least the PC and the printer are communicating. The wiring looks like this:

Costs so far:

  • Printer : 25PLN ($8).
  • 10 rolls of thermal paper 20PLN ($7)

Intermec provides a CUPS driver for Linux which enables you to use their printers as regular printers in the OS. Apparently the PW40 isn't supported. I successfully compiled and installed the software, but printing a random text file gave me gibberish. After that I tried to communicate with the printer in the ESC/P language directly, but with no luck. I described my problems on the Intermec forums and am still waiting for a reply. In short, the problem is that I don't really know for sure whether it is me doing something wrong or the printer is broken (it was sold at auction as broken, but the seller couldn't tell for sure whether it really is or not). So after two evenings the situation is that I am able to print only one character in a row; if I send more than one character, it hangs. To make matters worse, my printer won't print a self test page as described in the manual: it feeds the paper a little and that's all. On the other hand, I found a datasheet for the print head used in my printer, but using it directly would be a triumph of form over content, I'm afraid, and I don't have enough time for that (i.e. making my own printer from scratch). But I'm overambitious, you know, so who knows…

This is the only thing it can print. If I try to print more than one character in a row, it hangs.

The any key

…which is in fact a one-button HID keyboard which you can reprogram to be any key or combination of keys you wish (open source hardware and software). Links for a start:

And quick video (blurry one shoot):

At some point, after a few battles bravely fought with STM32, I wanted to learn something new. I had been on Texas Instruments' site a few times, because I wanted to learn more about the BeagleBone Black and the Sitara CPU that sits on it, and spotted the TIVA microcontrollers somewhere on the page. After quick research they looked very promising. They had all I needed: they can easily be programmed with the GCC stack under Linux, there is an affordable starting platform (they call them launchpads, and they cost $13 and $20 for the TM4C123 and TM4C129 respectively) and, most important for me, they have well written peripheral libraries and documentation (i.e. at that time I could only rely on opinions from the Web, but after my first project I can definitely confirm that).

My button assembled

So I started a new simple project which I had previously tried to make with STMs, with countless problems (here is the link). I got the EK-TM4C123GXL launchpad and it's great. Somewhere in the near future I'll try to write another post explaining how to start development on Linux with the GCC toolchain and this board, but for now I can only assure you that getting started took as little as one evening (I used my own cross-compiler, which is described in a previous post here). The project aims to construct a one-button USB HID keyboard which can be reprogrammed so that pressing the button sends any key code the user wishes, or even a combination of keys if necessary. I imagined it would be super cool to have something like that on my desk at work, and if someone came by and interrupted my work, I would ostentatiously hit the big red button, stopping the music in my headphones, and ask: “what? once again?”.

TI provides an excellent peripheral library with many examples for many evaluation boards. Furthermore, they have a great USB library which is neatly divided into four tiers, each depending on the one below. At the very top is the highest level tier, called the “Device Class API”, which lets one implement typical devices in just a few lines of code (I mean simple HID, DFU, CDC etc.). ST does not have that! The Device Class API is great, but in fact quite inflexible. For example, a HID keyboard can have only one interface, which is not enough if one wants to implement something more sophisticated. Here are instructions for designing a HID keyboard with additional multimedia capabilities (which I wanted so badly). Microsoft recommends that there be at least two USB interfaces in such a device: one should implement an ordinary keyboard compatible with the BOOT interface, so that the keyboard is operational during system start-up, when there is no OS support yet, and the other implements the rest of the desired keys, such as play/pause, volume up, volume down and so on. I have seen quite a few USB interface layouts conforming to this recommendation over the net, including my own keyboard, connected to my computer as I write this, so I assume this is the right way to do it. And here is an example of a USB interface layout as mentioned earlier; HID reports are also provided. So I moved to the lower level tiers, and it was not so simple. Here you can find all the code that is inside the button. All the magic is done in main.c, which could be split into several smaller files, but who cares. First there are the USB descriptors, standard and HID ones:

const tConfigSection *allConfigSections[] = {

Next come the callbacks. My code is heavily based on the TI examples, but in some places it is simplified where no advanced functionality is needed. Custom requests are handled in onRequest, where you can find the bits responsible for sending and receiving the configuration from the host (using another program running on a PC, linked below). The configuration (i.e. what key combination should be sent to the host when the “any-key” is pressed) is stored in EEPROM (functions readEeprom and saveEeprom). And of course in the main function you will find the main loop with button polling and report sending. After connecting the device to a Linux PC, it introduces itself as a two-interface HID device, which is silently recognized by Linux (and not so silently by Windows, which searches for drivers for it). What distinguishes this HID keyboard from others is that it recognizes two additional control requests from the host PC, which let the user store and retrieve the combination of keys the device sends when pressed. These requests are prepared in a PC application which looks like this:

Any key host app

Every button on the main screen can be toggled (in the picture above the “play/pause” one is turned on), which immediately sends the configuration data, which is then stored in EEPROM. After closing the host application (which releases the USB device back to the OS) the button works as programmed, in the situation depicted above behaving as a play/pause keyboard button. Play/pause was my initial intention and I am using it with this function right now, but a friend of mine used it in a presentation (as arrow down), and I also tested ctrl-alt-del, ctrl-shift-t (Eclipse CDT open element), and power, among others. The maximum number of simultaneously pressed keys that can be simulated is 8 for control keys (like ctrl, shift, alt etc.) and 6 for regular ones.

Any key internals

So there you have it. Feel free to post questions etc. I am also wondering about a “mass production experiment” which would aim to make, say, 10 of those things (with a cheaper micro, of course!) and sell them on Tindie (I have never sold anything made by myself). What do you think? Would you buy one of these? What would be a reasonable price (it is only one button after all… + the PC app)? I made some very rough calculations, and the total cost of one device (assuming a production run of 100 pcs) would be somewhere around $10, using an MSP430 as the µC and importing casings from China. Not to mention boxes to pack the stuff, soldering (probably in some kind of reflow oven) and shipping it all. So for now it seems overwhelming, but who knows.

And for something completely different: here is what happens when you connect a USB device with VBUS and GND shorted:

Jun  4 08:58:52 diora kernel: [  998.928731] hub 2-1:1.0: over-current condition on port 1
Jun  4 08:58:52 diora kernel: [  999.136827] hub 2-1:1.0: over-current condition on port 2
... and you can hear humming in headphones connected to the PC.

EDIT: User jancumps on the EEVblog forums pointed out that there is an ongoing Indiegogo campaign for a similar idea. Looks quite the same as mine :D

EDIT2: Dave did a review of the “serious button”; this is not my design, it only looks the same:

EDIT3 (09/2014) : Another one on indiegogo with goal of $100k!