Internet thermal printer part 2

The printer I bought and described in the previous post really disappointed me. I didn't spend a huge amount of time on it (say 3-4 evenings), but I had dug into the subject so deep that I couldn't help myself and did some more hacking. First of all I wanted to know whether my cool-looking but quite useless printer could be used in some other way (i.e. the printing head), and whether it was the main board that was broken or the head itself. If the former was true, and the head was OK, I would try to communicate with the head directly and thus pretty much reimplement the whole broken main board. But if the head was broken, I could do nothing but abandon the project or find another printer. And it's funny, because, as you may have seen at the end of my previous post, this is exactly what I wrote not to do. But I just love it. When the work you do all day every day is stupid and pointless, when you are constantly bothered with more and more irrelevant things, and after all day you are tired and discouraged, what would you do after arriving home (excluding household duties :D)? Grab a beer, sit and watch TV? Hell no! Grab a beer and tinker some more! It calms me down, you know (unless I'm stuck for too long). The printer head used in my Intermec PW40 is a Seiko (www.sii.co.jp) SII LTP3445 (datasheet here) and it is obsolete. New designs are encouraged to use the LTPV445.

So what I did was solder a bunch of wires onto the main board to be able to talk directly to the printing head. The resulting wiring looks like this:

Connected directly to the thermal printer head.

Then I captured signals with a logic analyzer and an oscilloscope while the printer was operating, to figure out what was malfunctioning. In my opinion the main board is broken, because printing short strings like 'A\r\n' works OK and all signals seem to be correct (i.e. 832 bits per row are transferred and quite a few rows are present). But when longer strings are submitted, the whole transmission appears to get corrupted at some point: the serial data burst is clearly shorter, as if interrupted. Unfortunately I only made a screenshot of the correct transmission (A\r\n) and don't have the corrupted one now (and the board is no longer operational since I removed the FFC socket). Here's the screen:

Correct transmission to the thermal head issued by the original Intermec PW40 main board. Letter ‘A’ is being transmitted.

The next step was to wire up some circuitry to actually drive the head while it was still soldered to the original main board. I didn't want to break it at that point, but later that ceased to be a priority :D My setup consists of:

The breadboard looks like this:

The circuit. You can see that the printer is more or less intact i.e. the head is mounted on the main board and the plastic frame. Later on I decided to disconnect the head from the original main board.

Shifters are controlled with 3.3V and output 5V for the head's logic. The whole contraption is powered from a laboratory power supply set to 5V with a low current limit to prevent smoke and fire in case of wiring errors on my side. The setup drew about 0.1A when idle and 2.5A when feeding the paper. Driving the motor was pretty easy; I had done stepper motors before, so I caught up with this one rather quickly. But the head took more time, and for some (thankfully short) period I was stuck.

First, the DST signal (DST controls power and thus temperature) circuitry on the main board is secured with some (I believe) TTL logic. The idea is that if the thermistor tells the µC that the head is overheating, the µC shuts the head down. This is the first protection mechanism, implemented in software (BTW the manual says that an overheated head may cause skin burns, smoke and even fire. It is a thermal head after all). But there is another protection mechanism, done in hardware, which shuts down the head if the software one malfunctions. I believe the two mechanisms are AND-ed by some TTLs. The protection mechanisms pull DST down in case of trouble. In my case, with two logic circuits effectively connected to the head, this caused problems, because the original main board, which was not powered, pulled DST down all the time. The solution was to cut the trace, and that was it (if not cut, DST would stay low no matter what level I was trying to drive it to; the oscilloscope showed only 100mV level changes, obviously too small to be useful).
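
For reference, this is more or less what driving one row boils down to: the LTP3445 takes 832 dots per row (104 bytes) shifted in serially, latched, and then burned in with the DST strobe. Below is a minimal sketch with hypothetical gpioWrite()/delayUs() helpers; the pin names, polarities and strobe time are illustrative only and must be checked against the LTP3445 datasheet (the strobe time depends on supply voltage and head temperature):

#include <stdint.h>

/* Hypothetical helpers; wire these to your GPIO driver of choice. */
extern void gpioWrite (int pin, int level);
extern void delayUs (uint32_t us);

enum { PIN_DATA, PIN_CLK, PIN_LATCH, PIN_DST };

#define ROW_BYTES 104 /* 832 dots per row / 8 */

/* Shift one row into the head, latch it, then strobe DST to heat the dots. */
void printRow (uint8_t const row[ROW_BYTES], uint32_t strobeUs)
{
        for (int i = 0; i < ROW_BYTES; ++i) {
                for (int bit = 7; bit >= 0; --bit) {
                        gpioWrite (PIN_DATA, (row[i] >> bit) & 1);
                        gpioWrite (PIN_CLK, 1); /* Data is clocked into the shift register. */
                        gpioWrite (PIN_CLK, 0);
                }
        }

        gpioWrite (PIN_LATCH, 0); /* Latch polarity : check the datasheet. */
        gpioWrite (PIN_LATCH, 1);

        gpioWrite (PIN_DST, 1); /* Heat. Too long a strobe burns the paper (or the head). */
        delayUs (strobeUs);
        gpioWrite (PIN_DST, 0);
}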

My transmission. A 12*8 pixel wide bar strip. 12 x 0xaa.

But still no luck after the DST problem was resolved, so I decided that something else on the original main board was interfering and I needed to disconnect the head from it in order to connect to it directly. I didn't have a spare FFC socket though (Molex 25-pin, 1.25 mm pitch: rare and obsolete), so after obtaining a wrong one from Farnell (bought 1 mm pitch instead of 1.25, duh!) I soldered the wires directly to the FFC cable. Looks awful, but is rigid:

Wires soldered directly to the FFC strip.

Still no luck! What the hell! The logic analyzer still happily showed correct bursts of data, so for the third time I rewired the breadboard and checked the levels with an oscilloscope. And a curious thing revealed itself: all levels (shifted up) were 0-4V instead of 0-5V. I had absolutely no idea why. My power supply is a cheap one, but can 1 or 2 amps of load cause a 1V drop? Must investigate further. EDIT: my cheap counterfeit Saleae logic analyzer must have a somewhat low input impedance, and that was what caused the significant voltage drop on the logic signals. Disappointing. In the picture below you can see (far left) that only after increasing the voltage repeatedly did the printer start to print:

The first successful printout.

I’m excited!

Internet thermal printer

The idea is shamelessly stolen from this hackaday.io project. EDIT: It evolved… What this project is intended to be:

  • A toy printer (for my son) with some light and sound signalling, connected to the Internet and accessible through a web interface. Anyone with the password (basic auth configured in .htaccess) could send a graphic and/or text message to the printer, which would immediately flash the light, beep the buzzer and print the message. Protocol to the printer (on the network level): whatever, a.k.a. my own & super-simple.

What this project shall not become (note to myself, because I tend to complicate things) EDIT: It evolved…:

  • An over-engineered wireless CUPS compatible Postscript full featured printer which also makes coffee.

After deciding that I would try to make such a thing (which took approx. 1 second after seeing Jim's site) I went to allegro.pl (the local eBay; BTW we have ebay.com.pl here in Poland, but Allegro seems to be winning the battle) and found something printer-ish and seemingly broken, with some parts missing. It is an Intermec PW40 mobile printer. Useful links I found on this printer:

  • Manuals – Intermec site (those are for PW50, but I assume they are compatible in some way).
  • Intermec community – they even have a forum, and some community around the site.

Photos after dismantling the thing:

Looks like it uses ESC/P like Jim's printer, and a 7.2V battery pack too. Looks promising (at least some standard language). Elements found on the main board of the PW-40:

I've written that the LTC chip looks promising, because it connects the printer to the outside world and gives a hint where to start hacking. It translates RS-232 high voltage levels to TTL, but since I wanted to drive the printer directly from some µC, I needed to bypass the LTC. After some research I determined the following: the RS-232 port (the one with the RJ socket) is connected to pins 14 (232 input) and 15 (232 output). The corresponding TTL pins are: pin 13 (logic output) and pin 12 (logic input). So if I am reasoning correctly:

  • Pin 13 is connected to the Toshiba’s RX pin.
  • Pin 12 is connected to the Toshiba’s TX pin.
  • The whole device can be powered from a 12V supply (I read that somewhere).
  • Let's try it! Seems to work. At least the PC and the printer are communicating. The wiring looks like this:


Costs so far:

  • Printer : 25PLN ($8).
  • 10 rolls of thermal paper: 20PLN ($7).

Intermec provides a CUPS driver for Linux which lets you use their printers as regular printers in the OS. Apparently the PW40 isn't supported. I successfully compiled and installed the software, but printing a random text file gave me gibberish. After that I tried to communicate with the printer in the ESC/P language directly, but with no luck. I described my problems on the Intermec forums and am still waiting for a reply. In short, the problem is that I don't really know for sure whether it is me doing something wrong, or the printer is broken (it was sold at auction as broken, but the seller couldn't tell for sure whether it really is). So after two evenings the situation is that I am able to print only one character in a row. If I'm sending more than one character to print, it hangs. To make matters worse, my printer won't print a self-test page as described in the manual. It feeds the paper a little and that's all. On the other hand I found a datasheet of the printer head used in my printer, but using it directly would be a triumph of form over content I'm afraid, and I don't have enough time for that (i.e. making my own printer from scratch). But I'm overambitious, you know, so who knows…
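
If you want to reproduce my direct ESC/P tests: ESC @ (0x1B 0x40) is the standard ESC/P initialize command, so a minimal smoke test from a Linux box can be as simple as the snippet below. The device path and the 9600 8N1 settings are my assumptions; I haven't found the PW40's serial parameters documented anywhere definitive.

# Assuming the printer hangs off /dev/ttyUSB0 and talks 9600 8N1 (unverified).
stty -F /dev/ttyUSB0 9600 raw -echo
printf '\x1b@Hello printer\r\n' > /dev/ttyUSB0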

This is the only thing it can print. If I try to print more than one character in a row, it hangs.

The any key

…which in fact is a one-button HID keyboard which you can reprogram to be any key or combination of keys you wish (open source hardware and software). Links to start with:

And a quick video (blurry, one shot):

At some point, after a few battles I bravely fought with the STM32, I wanted to learn something new. I had been on Texas Instruments' site a few times because I wanted to learn more about the BeagleBone Black and the Sitara CPU that sits on it, and spotted the TIVA microcontrollers somewhere on the page. After quick research they looked very promising. They had all I needed, that is: they can be easily programmed with a GCC stack under Linux, they have an affordable starting platform (they call them launchpads, and they cost $13 and $20 for the TM4C123 and TM4C129 respectively) and, what is most important for me, they have well-written peripheral libraries and documentation (i.e. at that time I could only rely on opinions from the Web, but after my first project I can definitely confirm that).

My button assembled

So I started a new simple project, which I had previously tried to make with STMs and had countless problems with (here is the link). I've got the EK-TM4C123GXL launchpad and it's great. Somewhere in the near future I'll try to write another post explaining how to start development on Linux with the GCC toolchain on this board, but for now I can only assure you that getting started was as easy and quick as one evening (I used my own cross-compiler which is described in a previous post here). The project aims to construct a one-button USB HID keyboard which can be reprogrammed so that pressing the button sends any key code the user wishes, or even a combination of keys if necessary. I imagined that it would be super cool to have something like that on my desk at work, and if someone came by and interrupted my work, I would ostentatiously hit the big red button, which stops the music in my headphones, and ask: "what? once again?".

TI provides an excellent peripheral library with many examples for many evaluation boards. Furthermore they have a great USB library which is neatly divided into four tiers dependent on each other. At the very top is the highest level tier called the "Device Class API", which enables one to implement typical devices in just a few lines of code (I mean simple HID, DFU, CDC etc.). ST does not have that! The Device Class API is great, but in fact quite inflexible. For example a HID keyboard can have only one interface, which is not enough if one wants to implement something more sophisticated. Here are instructions for designing a HID keyboard with additional multimedia capabilities (which I wanted so badly). Microsoft recommends that there should be at least two USB interfaces in such a device. One should implement an ordinary keyboard compatible with the BOOT interface, so the keyboard is operational during system start-up when there is no OS support yet, and the other should implement the rest of the desired keys such as play/pause, volume up/down and so on. I saw quite a few USB interface layouts conforming to these recommendations over the net, including my own keyboard connected to my computer as I write this, so I assume this is the right way to do it. And here is an example of a USB interface layout as mentioned earlier. HID reports are also provided.

So I moved to the lower level tiers, and it was not so simple. Here you can find all the code that is inside the button. All the magic is done in main.c, which could be split into several smaller files, but who cares. First there are the USB descriptors. Standard and HID ones:

const tConfigSection *allConfigSections[] = {
        &configSection,
        &interfaceSection1,
        &hidKeyboardSection1,
        &endpointSection1,
        &interfaceSection2,
        &hideKeyboardSection2,
        &endpointSection2
};

Next you have callbacks. My code is heavily based on TI examples, but in some places it is simplified where no advanced functionality is needed. Custom requests are handled in onRequest, where you can find the bits responsible for sending and receiving configuration from the host (using another program running on a PC, which is linked below). The configuration (i.e. what key combination should be sent to the host when the "any-key" is pressed) is stored in EEPROM (functions readEeprom and saveEeprom). And of course in the main function you can find the main loop with button polling and report sending.

After connecting the device to a Linux PC it introduces itself as a two-interface HID device which is silently recognized by Linux (and not so silently by Windows, which searches for some drivers for it). What distinguishes this HID keyboard from others is that it recognizes two additional control requests from the host PC, which enable the user to store and retrieve the combination of keys this device sends when pressed. These requests are prepared in a PC application which looks like this:

Any key host app

Every button on the main screen can be toggled (in the picture above the "play/pause" one is turned on), which immediately sends the configuration data, which is stored in EEPROM. After closing the host application (which then releases the USB device to the OS) the button works as programmed, in the situation depicted above behaving as a play/pause keyboard button. Play/pause was my initial intention and I am using it with this function right now, but a friend of mine used it in a presentation (as arrow down), and I also tested ctrl-alt-del, ctrl-shift-t (Eclipse CDT open element), and power among others. The maximum number of simultaneously pressed keys which can be simulated is 8 for control keys (like ctrl, shift, alt etc.) and 6 for regular ones.
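
The host side boils down to two vendor-style control transfers. A sketch of the general shape using libusb-1.0 follows; the VID/PID, request number and payload layout below are made-up placeholders (the real values live in the linked repository), so treat it as an illustration of the mechanism rather than my exact protocol:

#include <libusb-1.0/libusb.h>
#include <stdio.h>
#include <stdint.h>

/* Placeholder IDs; the real ones are defined in the device firmware. */
#define MY_VID 0xffff
#define MY_PID 0xffff
#define REQ_SET_KEYS 0x01

int main (void)
{
        libusb_context *ctx;
        libusb_init (&ctx);

        libusb_device_handle *dev = libusb_open_device_with_vid_pid (ctx, MY_VID, MY_PID);
        if (!dev) {
                fprintf (stderr, "Device not found\n");
                return 1;
        }

        /* Example payload : modifier byte + up to 6 key codes (HID-boot-style layout). */
        uint8_t keys[7] = { 0x00, 0x04, 0, 0, 0, 0, 0 }; /* plain 'a' */

        int r = libusb_control_transfer (dev,
                LIBUSB_REQUEST_TYPE_VENDOR | LIBUSB_RECIPIENT_DEVICE | LIBUSB_ENDPOINT_OUT,
                REQ_SET_KEYS, 0, 0, keys, sizeof (keys), 1000);

        printf ("transferred %d bytes\n", r);
        libusb_close (dev);
        libusb_exit (ctx);
        return 0;
}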

Any key internals

So there you have it. Feel free to post questions etc. I am also wondering about a "mass production experiment" which would aim to make, say, 10 of those things (with a cheaper micro of course!) and sell them on Tindie (I have never sold anything made by myself yet). What do you think? Would you buy one of these? What would be a reasonable price (it is only one button after all… + the PC app)? I made some very rough calculations and the total cost of one device (assuming a production run of 100 pcs) would be somewhere around $10, when using an MSP430 as the µC and importing casings from China. Not to mention boxes to pack the stuff, soldering (probably in some kind of reflow oven) and shipping it all. So for now it seems overwhelming to me, but who knows.

And for something completely different: what happens when you connect a USB device with VBUS and GND shorted:

Jun  4 08:58:52 diora kernel: [  998.928731] hub 2-1:1.0: over-current condition on port 1
Jun  4 08:58:52 diora kernel: [  999.136827] hub 2-1:1.0: over-current condition on port 2
... and you can hear humming in headphones connected to the PC.

EDIT: User jancumps on the EEVBlog forums pointed out that there is an ongoing Indiegogo campaign for a similar idea. Looks quite the same as mine :D

EDIT2: Dave did a review of the "serious button". This is not my design, it only looks the same:

Toolchain for Cortex-M4

Important links:

These are brief instructions for creating your own GCC-based tool-chain for a Cortex-M4 microcontroller, heavily based on this post. I tried a few precompiled ones which I found on the Internet, but always wondered how to make one configured specifically for my micro, not for "ARM" in general. The tool-chain generated by the following method was tested by me on an ST STM32F407 and a Texas Instruments TIVA-C TM4C123 (i.e. one tool-chain for these two µCs, since they both contain the same CPU). My setup as I write this:

  • Host operating system : Ubuntu 14.04 LTS
  • Kernel : 3.13.0-24-generic
  • A few GB of free space on the HD.

Making a tool-chain is hard, therefore wise people over the net developed tools to simplify the process. A few years ago, when I attempted to build a GCC tool-chain, I struggled with lack of information, the complexity of the process, and a variety of recipes, all of which seemed extremely complex, and at some point in the process I was stuck on a problem I couldn't solve. Then I found crosstool-NG. It may seem funny, but all this stuff was new to me, and I was looking for the best way possible to finish the task, some "canonical" way of building a cross-compiler, and for me crosstool-NG is exactly that. Let's grab the newest version from its website and follow the installation instructions (this step will build only crosstool-NG itself):

mkdir my-toolchain
cd my-toolchain
 
# Pay attention which version is the newest. As of writing this, the newest was
# 1.19.0, but at http://crosstool-ng.org/download/crosstool-ng/ the "header-file" 
# incorrectly indicated the 1.18.0 version
wget http://crosstool-ng.org/download/crosstool-ng/crosstool-ng-1.19.0.tar.bz2
 
tar jxvf crosstool-ng-1.19.0.tar.bz2 
cd crosstool-ng-1.19.0/
 
# Resolve some dependencies
sudo apt-get install bison flex gperf texinfo gawk libtool automake libncurses5-dev
 
# Provide a prefix to some destination which PATH points to.
./configure --prefix=/home/iwasz/local/
make
make install

Now we perform some setup. All features of our future tool-chain will be set during this step:

# cd back, so we are in "my-toolchain" directory again.
cd ..
mkdir staging
cd staging
ct-ng  menuconfig

The last command brings up the following menu-config tool:

01-start-screen

Paths and misc options

  • Try features marked as EXPERIMENTAL : Y
  • Prefix : ${HOME}/local/share/${CT_TARGET}. Provide a destination folder that suits your needs; give it a descriptive name if you plan to host more than one cross-compiler.
  • Number of parallel jobs : 8 (depends on host capabilities of course).
  • Check “Debug Crosstool-NG”, “Save intermediate steps”, and “gzip saved states” as described here.

02-paths-and-misc

Target options

  • Target Architecture : arm
  • (cortexm4) Suffix to the arch-part (breaks the build!).
  • Use the MMU : N
  • Architecture level : armv7-m. As you can find here, the ARM architecture for the Cortex-M4 is ARMv7E-M. In the GCC manual (type /, and -march a few times) we can find that, among many others, available values for -march are armv7, armv7-a, armv7-r, armv7-m. Unfortunately armv7e-m is invalid (if someone could elaborate on that, it would be perfect), so I chose the most similar armv7-m option. EDIT: I've found that they added armv7e-m in recent versions of GCC.
  • Emit assembly for CPU : cortex-m4 (full list of available options can be found in GCC manual somewhere near -mcpu phrase). 
  • Tune for CPU : empty (empty because -mcpu was provided. -mtune is similar to -mcpu, but -mcpu restricts us to one CPU only, while -mtune tries to do its best to optimize for a particular CPU while still retaining the ability to compile for other CPUs).
  • Use specific FPU : fpv4-sp-d16. A Cortex-M4 can have an FPU, but not necessarily (with the FPU it is called a Cortex-M4F, without it simply a Cortex-M4). The fact is I found this option somewhere over the net, and I am still a little confused on the topic of FPUs.
  • Floating point : hardware (FPU).
  • Default instruction set mode (thumb).
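
To see where these menu choices end up: they translate more or less directly into the compiler flags baked into the resulting tool-chain. Invoking it by hand with the equivalent explicit flags would look something like this (the file names are just an example):

arm-unknown-eabi-gcc -mcpu=cortex-m4 -mthumb -mfpu=fpv4-sp-d16 -mfloat-abi=hard -c main.c -o main.o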

03-target-options

Toolchain options

  • Add some cool Toolchain ID string.

04-toolchain-options

Operating System

  • Set Target OS to bare-metal.

05-operating-system

Binary utilities

  • Binary format: (Flat)
  • binutils version (2.22) – latest which is not marked as EXPERIMENTAL.

06-binary-utilities

C compiler

  • Show Linaro versions : Y
  • gcc version (linaro-4.8-2013.06-1)
  • C++ : Y

07-c-compiler

C-library

  • C library (newlib)
  • newlib version (2.0.0 (EXPERIMENTAL)) - the newest, and works OK.
  • Disable the syscalls supplied with newlib : Y - I provide my own syscalls in every program.

08-c-library

Debug facilities

  • gdb : Y

09-debug

Then dig into "GDB", check "show Linaro versions", choose the newest one from Linaro, and set "Enable python scripting" to N (it caused build problems for me):

10-gdb-cfg

Exit menu-config (press ESC a few times, and save when prompted) and finally build the tool-chain:

unset LD_LIBRARY_PATH 
ct-ng build
tail -f build.log # in another console (not necessary if debug options were set)

The build process takes some time (30-60 minutes), and if at some point, for some reason, the build fails, the first place to check is the build.log file in the staging directory (that's why I pasted the tail -f command earlier, but of course it does not matter how you display the file). For example, in my case, crosstool-NG decided to fail with this:

... kilobytes, megabytes of logs ....
[ALL  ]    checking whether to use python... yes
[ALL  ]    checking for python... /usr/bin/python
[ALL  ]    checking for python2.7... no
[ERROR]    configure: error: python is missing or unusable
[ERROR]    make[2]: *** [configure-gdb] Error 1
[ALL  ]    make[2]: Leaving directory `/home/iwasz/Documents/my-toolchain/staging/.build/arm-unknown-eabi/build/build-gdb-cross'
[ERROR]    make[1]: *** [all] Error 2
[ALL  ]    make[1]: Leaving directory `/home/iwasz/Documents/my-toolchain/staging/.build/arm-unknown-eabi/build/build-gdb-cross'
[ERROR]  
[ERROR]  >>
[ERROR]  >>  Build failed in step 'Installing cross-gdb'
[ERROR]  >>        called in step '(top-level)'
[ERROR]  >>
[ERROR]  >>  Error happened in: CT_DoExecLog[scripts/functions@257]
[ERROR]  >>        called from: do_debug_gdb_build[scripts/build/debug/300-gdb.sh@170]
[ERROR]  >>        called from: do_debug[scripts/build/debug.sh@35]
[ERROR]  >>        called from: main[scripts/crosstool-NG.sh@632]
[ERROR]  >>
[ERROR]  >>  For more info on this error, look at the file: 'build.log'
[ERROR]  >>  There is a list of known issues, some with workarounds, in:
[ERROR]  >>      '/home/iwasz/local/share/doc/crosstool-ng/ct-ng.1.19.0/B - Known issues.txt'
[ERROR]  
[ERROR]  (elapsed: 58:52.70)

I didn't think long on this one (apt-get install libpython2.7-dev maybe???), but disabled Python support for GDB instead (I modified the instructions above accordingly, so hopefully you won't hit the same error). But in case you do hit an error, you should resolve it (maybe change the configuration using menuconfig, or fix the problem in other ways, depending on the cause) and rerun ct-ng, or refer to this Stack Overflow thread for more info on speeding up the process after a build has failed.
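
Since we checked "Save intermediate steps" earlier, crosstool-NG can resume from the step that failed instead of rebuilding everything. Roughly like this (step names differ between versions, so check the list-steps output on yours):

ct-ng list-steps          # show the available build steps
ct-ng build RESTART=debug # resume from the failed step (here: the debug/gdb step)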

STM32F407 DMA early tests

Research notes. Useful links:

DMA is a peripheral that can copy data between other peripherals and memory, or between memory and memory. In the early days it used to be implemented in the form of a separate IC, but in modern µCs it is of course integrated into the single chip.

STM32 DMA peripherals are able to copy data from memory to a peripheral, from a peripheral to memory, and from memory to another place in memory (for example from RAM to FLASH, as a StdPeriph example shows). There are two DMA controllers, DMA1 and DMA2, and both have 8 streams. I see a stream as some kind of physical, bi-directional connection between the DMA controller and some other peripheral. Those 16 streams cover all (most?) peripherals, meaning that one stream is connected to more than one peripheral. For example if one wants to send data using USART1 he has to use exactly DMA2_Stream7, and if he wants to receive data from SPI3_RX he has to use DMA1_Stream0 or DMA1_Stream2, because apparently SPI3_RX is connected to both of those streams (see tables 43 and 44 in the reference manual of the STM32F407).

DMA works automatically, meaning that if there is some new data, it will be copied without user code (and the CPU) being involved. This is possible thanks to channels, which I imagine as signals (like GUI signals, if you know what I mean) connected between the DMA controller and the peripheral (there is also something called an "arbiter" between them). A peripheral can send a request (signal) to the DMA if it has new data, and the DMA can then process it. At the same time the DMA acknowledges that it has got this new portion of data, and the peripheral releases the request. Each stream can "listen" to its 8 channels, so there are 2 controllers * 8 streams * 8 channels = 128 configuration combinations, and that way every peripheral can have its own communication path to the DMA.

Streams have configurable priorities in case two or more streams request the DMA controller's attention. If two or more streams have the same priority, the stream with the lower number wins. The bit of hardware called the "arbiter" manages those priorities and decides which stream goes first.

So here comes the first DMA test I wrote (tested on STM32F407-DISCOVERY). It writes to USART1:

#include <stm32f4xx.h>
#include "logf.h"
 
/**
 * For printf, and USART1 in general.
 */
void initUsart (void)
{
        RCC_APB2PeriphClockCmd (RCC_APB2Periph_USART1, ENABLE);
        GPIO_InitTypeDef gpioInitStruct;
 
        RCC_AHB1PeriphClockCmd (RCC_AHB1Periph_GPIOB, ENABLE);
        gpioInitStruct.GPIO_Pin = GPIO_Pin_6 | GPIO_Pin_7;
        gpioInitStruct.GPIO_Mode = GPIO_Mode_AF;
        gpioInitStruct.GPIO_Speed = GPIO_High_Speed;
        gpioInitStruct.GPIO_OType = GPIO_OType_PP;
        gpioInitStruct.GPIO_PuPd = GPIO_PuPd_UP;
        GPIO_Init (GPIOB, &gpioInitStruct);
        GPIO_PinAFConfig (GPIOB, GPIO_PinSource6, GPIO_AF_USART1); // TX
        GPIO_PinAFConfig (GPIOB, GPIO_PinSource7, GPIO_AF_USART1); // RX
 
        USART_InitTypeDef usartInitStruct;
        usartInitStruct.USART_BaudRate = 9600;
        usartInitStruct.USART_WordLength = USART_WordLength_8b;
        usartInitStruct.USART_StopBits = USART_StopBits_1;
        usartInitStruct.USART_Parity = USART_Parity_No;
        usartInitStruct.USART_Mode = USART_Mode_Rx | USART_Mode_Tx;
        usartInitStruct.USART_HardwareFlowControl = USART_HardwareFlowControl_None;
        USART_Init (USART1, &usartInitStruct);
        USART_Cmd (USART1, ENABLE);
}
 
uint8_t myStrlen (char const *s)
{
        uint8_t len = 0;
 
        while (*s++) {
                ++len;
        }
 
        return len;
}
 
/**
 * Test1
 */
void initDma (char const *outputBuffer)
{
        /*
         * Reset DMA Stream registers (for debug purposes). For the DMA2_Stream7 explanation read on.
         * It also disables the stream. The stream must be disabled prior to configuring it. Otherwise it can
         * misbehave.
         */
        DMA_DeInit (DMA2_Stream7);
 
        /*
         * Check if the DMA Stream is disabled before enabling it.
         * Note that this step is useful when the same Stream is used multiple times:
         * enabled, then disabled then re-enabled... In this case, the DMA Stream disable
         * will be effective only at the end of the ongoing data transfer and it will
         * not be possible to re-configure it before making sure that the Enable bit
         * has been cleared by hardware. If the Stream is used only once, this step might
         * be bypassed.
         */
        while (DMA_GetCmdStatus (DMA2_Stream7) != DISABLE) {
        }
 
        /* Configure the DMA stream. */
        DMA_InitTypeDef  dmaInitStructure;
 
        /*
         * Possible values for DMA_Channel are DMA_Channel_[0..7]. Refer to table 44 in the reference manual
         * mentioned earlier. USART1_RX communicates with the DMA via streams 2 and 5 (both on channel 4).
         * USART1_TX uses stream 7 / channel 4.
         */
        dmaInitStructure.DMA_Channel = DMA_Channel_4;
 
        /*
         * Possible values : DMA_DIR_PeripheralToMemory, DMA_DIR_MemoryToPeripheral,
         * DMA_DIR_MemoryToMemory.
         */
        dmaInitStructure.DMA_DIR = DMA_DIR_MemoryToPeripheral;
 
        /* Why is DMA_PeripheralBaseAddr of type uint32_t? Shouldn't it be void *? */
        dmaInitStructure.DMA_PeripheralBaseAddr = (uint32_t)&(USART1->DR);
        dmaInitStructure.DMA_Memory0BaseAddr = (uint32_t)outputBuffer;
 
        /*
         * Only valid values here are : DMA_PeripheralDataSize_Byte, DMA_PeripheralDataSize_HalfWord,
         * DMA_PeripheralDataSize_Word
         */
        dmaInitStructure.DMA_PeripheralDataSize = DMA_PeripheralDataSize_Byte;
 
        /*
         * I guess that for memory it is always good to use DMA_MemoryDataSize_Word (32 bits), since this is
         * a 32-bit micro, but I haven't checked that. Here I use Byte instead for easier DMA_BufferSize
         * calculations.
         */
        dmaInitStructure.DMA_MemoryDataSize = DMA_MemoryDataSize_Byte;
 
        /*
         * Length of the data to be transferred by the DMA. The unit of this length is DMA_MemoryDataSize when
         * the direction is from memory to peripheral, or DMA_PeripheralDataSize otherwise. Since I set both
         * sizes to one byte, I simply put strlen here.
         */
        dmaInitStructure.DMA_BufferSize = myStrlen (outputBuffer);
 
        /*
         * DMA_PeripheralInc_Disable means reading from or writing to the same location every time.
         * DMA_MemoryInc_Enable increments the memory or peripheral location after each read/write.
         */
        dmaInitStructure.DMA_PeripheralInc = DMA_PeripheralInc_Disable;
        dmaInitStructure.DMA_MemoryInc = DMA_MemoryInc_Enable;
 
        /* DMA_Mode_Normal or DMA_Mode_Circular here. */
        dmaInitStructure.DMA_Mode = DMA_Mode_Normal;
 
        /* DMA_Priority_Low, DMA_Priority_Medium, DMA_Priority_High or DMA_Priority_VeryHigh */
        dmaInitStructure.DMA_Priority = DMA_Priority_VeryHigh;
 
        /* DMA_FIFOMode_Disable means direct mode, DMA_FIFOMode_Enable means FIFO mode. FIFO is good. */
        dmaInitStructure.DMA_FIFOMode = DMA_FIFOMode_Disable;
 
        /*
         * DMA_FIFOThreshold_1QuarterFull, DMA_FIFOThreshold_HalfFull, DMA_FIFOThreshold_3QuartersFull or
         * DMA_FIFOThreshold_Full.
         */
        dmaInitStructure.DMA_FIFOThreshold = DMA_FIFOThreshold_Full;
 
        /*
         * Specifies whether to use single or burst mode. If burst, then it specifies how many "beats"
         * to use. DMA_MemoryBurst_Single, DMA_MemoryBurst_INC4, DMA_MemoryBurst_INC8 or
         * DMA_MemoryBurst_INC16.
         */
        dmaInitStructure.DMA_MemoryBurst = DMA_MemoryBurst_Single;
        dmaInitStructure.DMA_PeripheralBurst = DMA_PeripheralBurst_Single;
 
        /* Configure DMA, but still leave it turned off. */
        DMA_Init (DMA2_Stream7, &dmaInitStructure);
 
        /* DMA_FlowCtrl_Memory, DMA_FlowCtrl_Peripheral */
        DMA_FlowControllerConfig (DMA2_Stream7, DMA_FlowCtrl_Memory);
 
        /* Enable DMA interrupts. */
        DMA_ITConfig (DMA2_Stream7, DMA_IT_TC | DMA_IT_HT | DMA_IT_TE | DMA_IT_DME | DMA_IT_FE, ENABLE);
 
/*--------------------------------------------------------------------------*/
 
        /* Enable the DMA Stream. */
        DMA_Cmd (DMA2_Stream7, ENABLE);
 
        /*
         * And check if the DMA Stream has been effectively enabled.
         * The DMA Stream Enable bit is cleared immediately by hardware if there is an
         * error in the configuration parameters and the transfer is not started (i.e. when
         * a wrong FIFO threshold is configured ...)
         */
        uint16_t timeout = 10000;
        while ((DMA_GetCmdStatus (DMA2_Stream7) != ENABLE) && (--timeout > 0)) { /* Pre-decrement, so timeout == 0 exactly when it expires. */
        }
 
        /* Check if a timeout condition occurred */
        if (timeout == 0) {
                /* Manage the error: to simplify the code enter an infinite loop */
                while (1) {
                }
        }
}
 
int main (void)
{
        /* This would be a function parameter or something like that. */
        char *outputBufferA = "Ala ma kota, a kot ma ale, to jest taki wierszyk z czytanki dla dzieci, ktora jest tylko w Polsce.\r\n";
        char *outputBufferB = "Wlazl kotek na plotek i mruga. Ladna to piosenka nie dluga. Nie dluga, nie krotka lecz w sam raz.\r\n";
 
        /*
         * Enable the peripheral clock for DMA2. I want to use DMA with USART1, so according to
         * table 44 in the reference manual for the STM32F407 (RM0090) this would be the DMA2 peripheral.
         * The description in stm32f4xx_dma.c advises doing this as the first operation.
         */
 
        /*
         * Spent two fu.king nights on this. The docs say to use RCC_AHB1PeriphResetCmd, but use
         * RCC_AHB1PeriphClockCmd instead!!!
         */
//        RCC_AHB1PeriphResetCmd (RCC_AHB1Periph_DMA2, ENABLE);
        RCC_AHB1PeriphClockCmd (RCC_AHB1Periph_DMA2, ENABLE);
 
/*--------------------------------------------------------------------------*/
 
        /*
         * Enable the USART1 device as usual.
         */
        initUsart ();
        logf("Init\r\n");
 
/*--------------------------------------------------------------------------*/
 
        initDma (outputBufferA);
 
        /*
         * The DMA stream is turned on now and waits for DMA requests. As far as I know, if this
         * were a memory-to-memory transfer, it would start immediately without enabling any
         * channels. But for peripherals one has to enable the channel for requests. After the following
         * statement, you should see data on the serial console.
         *
         * This statement enables the DMA internals in USART (this stuff which communicates with the DMA
         * controller).
         */
        USART_DMACmd (USART1, USART_DMAReq_Tx, ENABLE);
 
        /* Wait for the end of the data transfer. */
        while (USART_GetFlagStatus (USART1, USART_FLAG_TC) == RESET)
                ;
 
        while (DMA_GetFlagStatus (DMA2_Stream7, DMA_FLAG_TCIF7) == RESET)
                ;
 
        logf("It worked, and didn't hanged\r\n");
 
        /* Clear DMA Transfer Complete Flags */
        DMA_ClearFlag (DMA2_Stream7, DMA_FLAG_TCIF7);
 
        /* The DMA stream has to be initialized once again, AFAIK, to send another portion of data. */
        initDma (outputBufferB);
 
        /* Try to start it again */
        USART_DMACmd (USART1, USART_DMAReq_Tx, ENABLE);
 
        /* Wait for the end of the data transfer. */
        while (USART_GetFlagStatus (USART1, USART_FLAG_TC) == RESET)
                ;
 
        logf("It workedagain\r\n");
 
        /* Infinite loop */
        while (1) {
        }
}

 

STM32F407-DISCOVERY SDIO tests.

  • Started a new project (includes StdPeriph 1.3.0). Repository can be found here.
  • First commit makes it simply output a “Init” text on the debug console (i.e. on USART1).
  • Browsing StdPeriph. Seems that stm32f4xx_sdio.[ch] are very low level (I’ve read SD card spec version 2.0).
  • Higher level stuff seems to be in StdPeriph here:
    • Utilities/STM32_EVAL/STM324x7I_EVAL
    • Utilities/STM32_EVAL/STM3240_41_G_EVAL
    • Utilities/STM32_EVAL/STM324x9I_EVAL

But I don't know why there is one version per dev-board. Looks like bad design to me at first glance. Differences between those 3 files:

  • Lines 517/519 : Card presence is detected by different means (different pins are used on these boards). As far as I remember, card presence detection is an optional feature, so maybe it is not even part of the standard. That would explain why different pins are used on the boards (I expect that the standard pins are laid out the same on …?).
  • Lines 1570/1572 : a different parameter is passed to SDIO_ITConfig in the function SD_WriteMultiBlocks. One dev-board uses SDIO_IT_RXOVERR and the other two use SDIO_IT_TXUNDERR (among others; this is a bitmask).
  • Copied STM324x9I_EVAL sdio routines to my source tree.
  • Included code from the SDIO example : Project/STM32F4xx_StdPeriph_Examples/SDIO/SDIO_uSDCard. The serial console went crazy and shows some gibberish, so I can't see my debug messages, but the card, previously filled with random data, now shows:
root@diora:~# hexdump -n 128 /dev/sdc 
0000000 0100 0302 0504 0706 0908 0b0a 0d0c 0f0e
0000010 1110 1312 1514 1716 1918 1b1a 1d1c 1f1e
0000020 2120 2322 2524 2726 2928 2b2a 2d2c 2f2e
0000030 3130 3332 3534 3736 3938 3b3a 3d3c 3f3e
0000040 4140 4342 4544 4746 4948 4b4a 4d4c 4f4e
0000050 5150 5352 5554 5756 5958 5b5a 5d5c 5f5e
0000060 6160 6362 6564 6766 6968 6b6a 6d6c 6f6e
0000070 7170 7372 7574 7776 7978 7b7a 7d7c 7f7e
0000080

Looks less random to me.

  • Had problems with the serial console attached to USART1 after upgrading StdPeriph from 1.1.0 to 1.3.0. The console would speak Chinese from then on, and the logic analyzer showed "framing errors" when attached to the TX pin. There were two problems: in 1.1.0, in the file stm32f4xx.h, the default HSE_VALUE definition was 8MHz. In 1.3.0 ST increased this to 25MHz. In addition, in stm32f4xx_conf.h I had HSE_VALUE redefined, but later on I upgraded this file (got it from some example project from the StdPeriph 1.3.0 version), and it lacked this re-definition. Thus the µC thought it was running at 25MHz, which in turn disrupted the transmission. The re-definition looks as follows:
#if defined  (HSE_VALUE)
/* Redefine the HSE value; it's equal to 8 MHz on the STM32F4-DISCOVERY Kit */
 #undef HSE_VALUE
 #define HSE_VALUE    ((uint32_t)8000000) 
#endif /* HSE_VALUE */
  • I am facing a problem similar to this one. The program hangs after returning from an interrupt routine(?) In fact I don't really know what is happening… The program hangs in SD_WaitReadOperation after successfully returning from SD_ReadMultiBlocks. It idles in a loop (or at first glance it looks like it is iterating the loop forever) which looks like this:
while ((DMAEndOfTransfer == 0x00) && (TransferEnd == 0) && (TransferError == SD_OK) && (timeout > 0)) {
    timeout--;
}

Normally after a successful transfer either DMAEndOfTransfer or TransferEnd would turn 1, but seemingly neither of these happened. The only place TransferEnd is set is SDIO_IRQHandler, so I added logs to check if the µC hits this routine. It does, and it even sets TransferEnd to 1, but it never returns from it. The debugger says that the program hangs in some strange places like WWDG_IRQHandler.

The problem was caused by a missing DMA handler routine, namely DMA2_Stream3_IRQHandler. I made two mistakes. First, I assumed that since I ran the demo with SD_DMA_MODE turned off (undefined) and SD_POLLING_MODE turned on (#defined), the DMA routines were unnecessary. This is not the case; those handlers are required in either case (that's the way the SDIO example is made). So I copied the DMA IRQ handler from the example, where its name was hidden behind the SD_SDIO_DMA_IRQHANDLER macro (but at that point I didn't know this was a macro, and thought it was a regular function name). So secondly, SD_SDIO_DMA_IRQHANDLER was undefined in my stm32fxxx_it.c. It simply was not visible in this translation unit, and I ended up with a function named SD_SDIO_DMA_IRQHANDLER, but without a proper DMA IRQ handler. So the µC jumped to the default handler, which had an infinite loop in it, but for some reason GDB showed the other handler.
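
To make the mistake concrete: in the ST eval BSP headers the macro resolves to the real vector name, roughly along these lines (paraphrased from memory; double-check against your BSP version):

/* In the eval board header (e.g. stm324x9i_eval.h or similar): */
#define SD_SDIO_DMA_STREAM      DMA2_Stream3
#define SD_SDIO_DMA_IRQHANDLER  DMA2_Stream3_IRQHandler

/* In stm32f4xx_it.c : with the #define visible this becomes DMA2_Stream3_IRQHandler;
   without it you get a function literally named SD_SDIO_DMA_IRQHANDLER that nothing
   in the vector table ever calls. */
void SD_SDIO_DMA_IRQHANDLER (void)
{
        SD_ProcessDMAIRQ ();
}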

A chaotic post on a HID keyboard: STM32F407 success, STM32F105 fail

This is a quick dev-log post on my latest design, which was only partially successful. I have an STM32F407-DISCOVERY board on which I successfully implemented a HID keyboard with only one key. At first it reported that the 'a' key was pressed every time the user pressed the blue button; then, according to my plan, I changed this to the play/pause button, which can turn music on and off. It works under Linux and Windows (only the 'a' version was tested under Windows though). Then I decided to make a board for this and, since the F407 is quite expensive, in fact too expensive for a simple one-key keyboard, I decided to use something simpler. The slowest and cheapest micros that support the STM32_USB-Host-Device_Lib are those labeled as "connectivity line", i.e. the STM32F105 and STM32F107. I got myself two STM32F105R8T6s and made a board which fits into a case I also bought. The case is labeled "XB2-ES542":

The board and the case.

The Eagle schematic and board are here, the OSH Park shared project is here. I assumed (wrongly) that porting my program from the F407 dev board to my custom board featuring a different micro would be easy, since they are quite similar. I was wrong. And I don't have a dev board for the F105 or the 107. OK, but first things first. As I mentioned, the program works on the F407, so let me write down some random thoughts which emerged during the process of making this work:

A few facts about HID devices (that I learned)

All data exchanged resides in structures called reports. The host sends and receives data by sending and requesting reports in control or interrupt transfers. The report format is flexible and can handle just about any type of data, but each defined report has a fixed size. The device’s descriptors must include an interface descriptor that specifies the HID class, a HID descriptor, and an interrupt IN endpoint descriptor. The firmware must also contain a report descriptor that contains information about the contents of a HID’s reports.

So there are two additional descriptors compared to the 'vendor specific' device I made recently (there may be a third, optional descriptor as well). The first is the HID class descriptor, and it specifies which other class descriptors are present (for example report descriptors or physical descriptors).

A HID can support one or more reports. The report descriptor specifies the size and contents of the data which the device generates. Physical descriptors, on the other hand, are optional pieces of data which describe the part(s) of the human body used to operate the HID device. The HID class does not use subclasses to define most protocols. Instead, a HID class device identifies its data protocol and the type of data provided within its report descriptor.

Here on page 53 you can find all the key codes defined by the HID spec. The document "Device Class Definition for Human Interface Devices (HID) Version 1.11" on page 62 has very useful information regarding keyboard implementation. Especially crucial are those bits about when to send a data report: "The keyboard must send data reports at the Idle rate or when receiving a Get_Report request, even when there are no new key events." I mixed up the rates and my HID keyboard acted unpredictably. Only after adjusting the wait period to 4 ms * idle rate did things go OK (the idle rate received in a SET_IDLE request is expressed in units of 4 ms).

Then I started getting familiar with the report descriptors, but nah, the more I read the HID specification, the more I realized this was more complexity and more effort than I wanted to put into this project. At first I was like, "OK let's read the whole spec, it has 97 pages, I've read longer specs before, not a problem". But hey, this was meant to be a simple, few-evenings project, and the HID spec turned out to be surprisingly complex (I mean the report descriptors in particular. When I came to Push and Pop items I refused to read further). The better and simpler way of accomplishing this project was to grab some descriptors from the net, and so I did:

  • Here I found a useful report descriptor for regular keyboards (like qwerty ones). Other, special buttons are implemented by other means (other items in reports), as I noted before.
  • Here are some interesting report descriptors which look like something I want to do (multimedia control). It seems to me that all those volume, play/pause and other knobs are implemented by some other means than regular keyboard keys are. There is a different type of "usage" used, i.e. a regular keyboard has 0x09, 0x06 (Usage (Keyboard)), but in the above document 0x09, 0x01 (Usage (Consumer Control)) is used; see the sketch below.
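
For illustration, a minimal Consumer Control report descriptor covering just the three buttons I cared about could look like this. It is my own reading of the HID Usage Tables (Consumer page 0x0C; Play/Pause 0xCD, Volume Up 0xE9, Volume Down 0xEA), not a descriptor copied from the linked documents, so verify it against the spec before use:

static const uint8_t consumerReportDescriptor[] = {
        0x05, 0x0c, /* USAGE_PAGE (Consumer) */
        0x09, 0x01, /* USAGE (Consumer Control) */
        0xa1, 0x01, /* COLLECTION (Application) */
        0x15, 0x00, /*   LOGICAL_MINIMUM (0) */
        0x25, 0x01, /*   LOGICAL_MAXIMUM (1) */
        0x09, 0xcd, /*   USAGE (Play/Pause) */
        0x09, 0xe9, /*   USAGE (Volume Up) */
        0x09, 0xea, /*   USAGE (Volume Down) */
        0x75, 0x01, /*   REPORT_SIZE (1) */
        0x95, 0x03, /*   REPORT_COUNT (3) */
        0x81, 0x02, /*   INPUT (Data,Var,Abs) -- one bit per button */
        0x75, 0x05, /*   REPORT_SIZE (5) */
        0x95, 0x01, /*   REPORT_COUNT (1) */
        0x81, 0x03, /*   INPUT (Cnst,Var,Abs) -- pad to a full byte */
        0xc0        /* END_COLLECTION */
};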

There is also a tool on the USB-IF page which helps to assemble HID report descriptors, and I can confirm that it runs under Wine, but that is all I can say about it. I suppose you still have to know the specs, and know what you are doing, while using this thing. Last but not least, the source code which runs flawlessly on the STM32F407-DISCO:

Failed attempt to port it to STM32F105

Then I started to move to my custom board depicted above, which I didn't manage to accomplish. The current status of the whole device (source code linked below, Eagle files above) is that, after connecting the USB, the device initializes itself (i.e. the USB stack gets initialized), then it gets quite a few reset requests (like 10) and then it hangs. Wireshark + usbmon show a "Malformed packet" when the device tries to send the device descriptor to the host. Random notes from development:

  • Note : the STM32F105 and 107 are called "connectivity line" microcontrollers. It is useful to know that, since there are many resources for the STM32F1x out there tagged like "value line", "connectivity line" etc.
  • I copied my previous project stm32f407-drama-button into new place in my SVN repository : stm32f105-drama-button.
  • Downloaded and unpacked STM32F10x_StdPeriph_Lib_V3.5.0 library from here. Main page for this micro is here. Current version of standard peripheral library for STM32F10x as of writing this is 3.5.0.
  • Replaced /STM32F4xx_StdPeriph_Driver with /STM32F10x_StdPeriph_Driver.
  • Replaced CMSIS folder. Removed Docs folder.
  • Made a new toolchain with crosstool-ng, fine-tuned for the Cortex-M3 µC.
  • Copied and modified stm32f105-crosstool.cmake.
  • StdPeriph comes with ld-scripts for various dev-boards. I figured out that:
  • So the linker script for the STM3210C-EVAL is best suited for me and I will check it first. I have it copied and modified (a minimal MEMORY sketch follows after this list). In the "drama button" project I use the STM32F105R8T6, which has:
    • 64kB of flash,
    • 64kB of RAM
  • Copied stm32f10x_conf.h from the examples into src. CMSIS uses it somehow, and I think this is bad design. A lower level library depends on a higher level header file?
  • Had trouble when using GCC-4.8.0 (Linaro version made with ct-ng 1.19). Works fine with GCC-4.7.0 (Linaro version prepared with ct-ng 1.18). Some strange assembler errors popped up when compiling the core_cm3.c file. Here a guy in the comments had a similar issue, and someone told him to download a fresh CMSIS library, because the one provided with StdPeriph is old. I can believe that, because for example the ST USB OTG library comes with StdPeriph bundled inside, and it is also some old version. So my rule of thumb now is to collect the newest versions of all the individual libraries, even when they are distributed together (i.e. the OTG library has virtually everything required to compile the examples it provides, but now I throw that away and get fresh ones).
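
The MEMORY sketch mentioned above: for the STM32F105R8T6 the regions in the adapted STM3210C-EVAL linker script boil down to something like this (the origins are the standard STM32 addresses; the lengths match the R8 part, so adjust them for a bigger chip):

MEMORY
{
    FLASH (rx) : ORIGIN = 0x08000000, LENGTH = 64K
    RAM (xrw)  : ORIGIN = 0x20000000, LENGTH = 64K
}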

EDIT : when compiling with gcc-4.7.0 with heavy optimizations (-O3), the same assembler error emerges. The message is:

/tmp/ccOfylWN.s: Assembler messages:
/tmp/ccOfylWN.s:646: Error: registers may not be the same -- `strexb r0,r0,[r1]'
/tmp/ccOfylWN.s:675: Error: registers may not be the same -- `strexh r0,r0,[r1]'

Downloaded and upgraded CMSIS from here : https://silver.arm.com/browse/CMSIS# (registration required). Upgraded from version 1.3.0 to 3.2.0. No *.c file this time, only header files.

When ported from the STM32F407-DISCO to my custom STM32F105 board, the program refused to operate. The kernel log says:

Jan  3 00:08:09 diora kernel: [ 4993.895502] usb 7-2: new full-speed USB device number 4 using uhci_hcd
Jan  3 00:08:09 diora kernel: [ 4993.959542] hub 7-0:1.0: unable to enumerate USB device on port 2

And the oscilloscope says that… well… it seems OK to me (compared for example to the Wikipedia article on USB) :)

The debugger shows that the program is running; it correctly invokes Reset_Handler and then hits the main function.

Mistakes that I know I made:

  • Made a circuit with a µC that I don't have a development board for (I assumed that porting a tested program from the STM32F407 to the STM32F105 would be easy. Wrong).
  • Screwed up the SWD port. Issues with signal integrity (probably due to the lack of termination resistors?). This made debugging with GDB harder. It would hang, or disconnect at random points.
  • Forgot to add a serial console output for STDOUT. Terrible mistake. I've wired that up since, but the board looks messy now.
  • Simple delay functions were dependent on optimization switches.

And then, after about 10 evenings / nights I gave up and put this project on the shelf for some time.

Motorcycle black box. A quick update.


Just a quick update for those who are interested in whether this project is still alive. It is. Above you can find an early test video from the application which runs on a PC and does post-processing of the data collected by the Raspberry Pi. More details to come in the next post. All source code for this project is here (work in progress). First post here.

Motorcycle black box. Part 1 : data acquisition with Arduino Mega.

The objective

Make a black box: a device which records short clips from a camera in a loop, overwriting the oldest clips. That produces a constant stream of short movies which, put together, make one long recording containing the last 30 minutes (or more, depending on storage capacity) of my riding. Along with the recording, telemetry data would be collected, such as velocity, RPM, temperature, front and rear brake pressed/released, and turn signals, and maybe more. This data could then be overlaid on the video.

The motorcycle

Yamaha XJ6SA 2010.

Identify by what means communication between the ECU and the dashboard is performed.

Which cables are used, and what is the protocol? After inspecting the service manual for my bike I was able to eliminate the wires which are used for other things (see picture), and I was left with only one, which connects the dash, the ECU and the immobilizer. The other ones were for flashing some warning LEDs, gathering information from the fuel pump, the oil switch and so on. So the yellow-blue wire was my first guess, and it was correct. On the ecuhacking forum, which was a source of very helpful information, guys were talking about something called K-line. I believe it is widely used in the automotive industry, and for certain there is a bunch of ISO standards describing it, but hey, if this is only one wire, and the data flowing inside is some kind of serial communication, I bet I could sniff it and figure it out without any standards, which are hard to find and get (there are so many of them, I got confused after a few minutes of googling).

Dashboard connections

Figure out how to interface this, make some circuit if needed.

Again on the ecuhacking forum I found the information that K-line logic levels are relative to 12V, where 0V is logic 0 and 12V is logic 1. There has to be some voltage level converter if I want to connect TTL stuff to it. On the forum some guys were using an L9637 chip, which is described as an "ISO 9141 INTERFACE" by its datasheet. So I bought a few of these and connected one as follows:

L9637

Later on I removed the 510 ohm pull-up resistor, because after turning the engine on an error showed up on the dash. I guessed it could be this resistor, and removing it helped. Dunno what was wrong.

The most difficult part for me was to find a place in the motorcycle's wiring to connect to. After some time I managed to insert a piece of rigid wire into the back side of the ECU connector, as depicted below:

ECU connector

The 5V supply voltage I drew from a step-down voltage converter which I bought on the Internets. It is a PCB with a few discrete parts and a quite huge heatsink, with some coils inside. There is an [EDIT] label on it. To the output of the L9637 I connected a Saleae logic analyzer, which I also bought. It is quite cheap and has good software for Windows, Linux and Mac OS. I can definitely recommend it, especially if you want to use it with Linux (I use Ubuntu).

Sniff the data, and collect it for further investigation.

My circuit worked the first time. After turning the ignition to the ON state, this is what I got:

Saleae logic analyzer window

The Saleae app, among others, has an "Async serial" analyzer, which I used with the default configuration and the "use autobaud" checkbox checked. This useful option corrected the initial 9600 baud to whatever it thought appropriate, and showed 16064 baud. Pretty odd value, isn't it? Nonetheless it is very useful information if I want to read the data with some AVR or something like that. It turns out that the data comes in packets which are 6 bytes long. The first byte is some kind of command, and the rest is the reply, i.e. the dashboard issues command 0x01 and then the ECU replies with 5 bytes of data. From ecuhacking I knew that the first byte of the reply would be the RPM, the second probably velocity, the third an error code, the fourth the engine temperature (coolant or oil temperature?) and the very last the checksum.

Saleae async serial analyzer setup

Collect the data in some more useful format.

Although Saleae has an option to store the analyzer's data as CSV, sooner or later I will have to make some custom electronics anyway, and I want to make a self-contained logger hidden in a neat case somewhere in the bike. I want to use a Raspberry Pi I own as the main component of the gizmo, but after reading about serial communication with the Pi, I gave up for the time being and connected an Arduino Mega. The main problem with the Pi is that I don't know how to set it up for such an unusual baud rate as 16064 baud (or 15625, as someone suggested). I had limited time, so for now I chose the Arduino. Below is the code I uploaded:

#include <SoftwareSerial.h>
 
SoftwareSerial mySerial(10, 11); // RX, TX
 
void setup() {
    // initialize both serial ports:
    Serial.begin(115200);
 
    // set the data rate for the SoftwareSerial port
    mySerial.begin(16064);
}
 
void loop() {
    static int count = 0;
 
    // read from port 1, send to port 0:
    if (mySerial.available()) {
        int inByte = mySerial.read();
 
        if (inByte == 0x01 && count >= 5) {
            Serial.println(' ');
            count = 0;
        }
 
        Serial.print(inByte, HEX);
        Serial.print (' ');
        ++count;
    }
}

As you can see, RX is set up on pin 10, so the only thing I did was disconnect the logic analyzer and connect Arduino pin 10 instead. But nothing happened. This was because the SoftwareSerial library has only a few baud rates predefined, namely the most usual ones. The solution was to modify arduino-1.0.5/libraries/SoftwareSerial/SoftwareSerial.cpp so it looks like this:

// .... line 59
static const DELAY_TABLE PROGMEM table[] =
{
    // baud rxcenter rxintra rxstop tx
    { 115200, 1, 17, 17, 12, },
    { 57600, 10, 37, 37, 33, },
    { 38400, 25, 57, 57, 54, },
    { 31250, 31, 70, 70, 68, },
    { 28800, 34, 77, 77, 74, },
    { 19200, 54, 117, 117, 114, },
    { 16064, 66, 140, 140, 137, }, // added baud rate
    { 15625, 68, 144, 144, 141, }, // added baud rate 2
    { 14400, 74, 156, 156, 153, },
    { 9600, 114, 236, 236, 233, },
    { 4800, 233, 474, 474, 471, },
    { 2400, 471, 950, 950, 947, },
    { 1200, 947, 1902, 1902, 1899, },
    { 600, 1902, 3804, 3804, 3800, },
    { 300, 3804, 7617, 7617, 7614, },
};
// ....

Here is the link you should follow for more info; I didn't figure it out by myself. After this modification the Arduino happily sent me my precious data, which I observed in the serial monitor.

Serial data as seen in the Arduino serial monitor

This is the data I have collected so far (motorcycle standing on the center stand, back wheel revolving, velocity comes from the back wheel, ABS LED blinking).

Ignition ON, engine stopped.

1 0 0 0 30 30
1 0 0 0 30 30
1 0 0 0 30 30
1 0 0 0 30 30
...

Ignition ON, engine started, no throttle (~1200 RPM), no gear (N) i.e. wheel not revolving, cold engine:

1 24 0 0 30 54
1 24 0 0 30 54
1 24 0 0 30 54
1 24 0 0 30 54
...

Ignition ON, engine started, little throttle (more RPM), no gear (N) i.e. wheel not revolving, cold engine:

1 34 0 0 53 87
1 34 0 0 53 87
1 34 0 0 53 87
1 34 0 0 53 87
...

1st gear, no throttle (engine warmed up thus less RPM). About 10 km/h

1 17 0 0 5C 73
1 17 1 0 5C 74
1 17 1 0 5C 74
1 17 0 0 5C 73
1 17 1 0 5C 74
1 18 1 0 60 79
1 18 1 0 60 79
1 18 0 0 60 78
1 18 1 0 60 79
1 17 1 0 60 78
1 17 0 0 60 77
1 17 1 0 60 78
1 17 1 0 60 78
1 17 0 0 60 77
1 17 1 0 60 78
1 17 1 0 60 78
1 17 0 0 60 77
1 18 1 0 60 79
1 18 1 0 60 79
1 18 1 0 60 79
1 18 0 0 60 78
1 18 1 0 60 79
1 18 1 0 60 79
1 18 0 0 60 78
1 18 1 0 60 79
1 18 1 0 60 79
1 17 0 0 60 77

23 km/h (6th gear)

1 17 2 0 68 81
1 17 2 0 69 82
1 17 2 0 69 82
1 17 2 0 69 82
1 17 1 0 69 81
1 17 2 0 68 81
1 17 2 0 68 81
1 17 2 0 68 81
1 17 2 0 68 81
1 17 1 0 68 80
1 17 2 0 69 82
1 17 2 0 68 81
1 17 2 0 68 81
1 17 1 0 68 80
1 17 2 0 68 81

40 km/h

1 2B 4 0 70 9F
1 2B 3 0 70 9E
1 2B 3 0 72 A0
1 2B 3 0 72 A0
1 2B 4 0 72 A1
1 2B 3 0 72 A0
1 2C 3 0 72 A1
1 2C 4 0 72 A2
1 2C 3 0 72 A1
1 2C 3 0 72 A1
1 2C 4 0 72 A2
1 2C 3 0 72 A1
1 2C 3 0 72 A1
1 2C 4 0 72 A2
1 2C 3 0 72 A1
1 2C 4 0 72 A2
1 2C 3 0 72 A1
1 2C 3 0 72 A1

60 km/h

1 40 5 0 74 B9
1 40 5 0 74 B9
1 40 5 0 74 B9
1 40 5 0 74 B9
1 40 5 0 74 B9
1 40 5 0 74 B9
1 40 5 0 74 B9
1 40 5 0 74 B9
1 41 4 0 74 B9
1 41 6 0 74 BB
1 41 4 0 74 B9
1 40 5 0 74 B9
1 40 5 0 74 B9
1 41 5 0 74 BA
1 40 5 0 74 B9
1 40 5 0 74 B9

80 km/h

1 56 7 0 77 D4
1 55 6 0 77 D2
1 56 7 0 77 D4
1 55 6 0 77 D2
1 56 7 0 77 D4
1 56 6 0 77 D3
1 56 7 0 77 D4
1 56 6 0 77 D3
1 56 7 0 77 D4
1 56 6 0 77 D3
1 56 7 0 77 D4
1 56 7 0 77 D4
1 56 6 0 77 D3
1 56 7 0 77 D4
1 56 7 0 77 D4
1 56 6 0 77 D3

Future

Due to uncertainty about Raspberry Pi serial communication, I plan to make a custom PCB with an AVR, in the form of a shield for the Raspberry Pi. It will have two purposes. First, it will acquire the data in a manner similar to that depicted above and pass it to the Pi over I2C or SPI (probably with the velocity in some more usable form), along with some other data such as brakes and turn lights, and maybe even the outdoor temperature (I always wanted to have this information, especially in spring and autumn). And secondly, it will drive a relay to cut the power to the Pi, which draws 2W even when shut down. Of course it should also send some signal first so the Raspberry shuts down correctly. If someone knows something more about the Pi's UART, and custom baud rates in particular, please let me know. Maybe there is a way to read the K-line directly without an AVR.
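
On the custom baud rate question: Linux can set arbitrary rates through the termios2 interface (the BOTHER flag), which in principle should work on the Pi's UART too, though I haven't tried it there. A generic Linux sketch, not something I have verified on the Pi:

#include <asm/termbits.h>   /* struct termios2, BOTHER, TCGETS2/TCSETS2 */
#include <sys/ioctl.h>
#include <fcntl.h>
#include <stdio.h>

int main (void)
{
        int fd = open ("/dev/ttyAMA0", O_RDWR | O_NOCTTY); /* The Pi's UART device. */
        if (fd < 0) {
                perror ("open");
                return 1;
        }

        struct termios2 tio;
        ioctl (fd, TCGETS2, &tio);
        tio.c_cflag &= ~CBAUD;  /* Clear the standard baud rate bits... */
        tio.c_cflag |= BOTHER;  /* ...and ask for an arbitrary one. */
        tio.c_ispeed = 16064;
        tio.c_ospeed = 16064;
        ioctl (fd, TCSETS2, &tio);
        return 0;
}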

 

Old post about Canon remote

So I came up with the idea of a Canon DSLR remote control. They are relatively cheap to buy on eBay, or other local online auction sites like allegro.pl here in Poland, but I wanted to build something myself. As a complete amateur I wanted to make something small and simple, thus a DIY IR remote control for my camera was born. The protocol was reverse engineered by some smart people over the internet, so all I needed to do was design the PCB, solder the stuff together, write a program and flash it. Below are the links:

My design is based on an ATtiny2313A, which in my opinion is a little too powerful, but first tests with an ATtiny13 revealed some issues with the internal oscillator. The timing of the clock signal is crucial when generating the carrier wave. The carrier should have a 32.6kHz frequency for best results; deviations from this frequency have a significant impact on the IR operational range. Without an oscilloscope I was unable to calibrate the internal oscillator correctly, thus I chose a chip with an external one (in fact the clock frequency was the most difficult issue I had, and I spent most of the time dealing with it). After soldering a 4MHz crystal into place, at first I set it up for divide-by-8 prescaler operation, giving me a 0.5MHz clock signal, but that also failed. I really don't know why, but it seems that the prescaler is somehow unstable, or has some sort of overhead (I'm an amateur after all. If someone could explain it, I would be grateful). Finally, after setting the chip to operate at full speed (prescaler turned off) I was able to trigger the shutter, but the range of operation was still small (circa 3m / 10ft).
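
For the record, with a 4MHz crystal and no prescaler, getting close to the ~32.6kHz carrier is a one-timer job: CTC mode toggling OC0A gives F_CPU / (2 * (OCR0A + 1)), so OCR0A = 60 yields about 32.8kHz. A minimal sketch for the ATtiny2313 (register names per its datasheet; the burst structure of the actual Canon protocol is not shown here):

#include <avr/io.h>

/* Generate a ~32.8kHz square wave on OC0A (PB2 on the ATtiny2313). */
/* f = F_CPU / (2 * N * (1 + OCR0A)) = 4000000 / (2 * 1 * 61) ~= 32787 Hz */
void carrierInit (void)
{
        DDRB |= _BV (PB2);                   /* OC0A as output. */
        TCCR0A = _BV (COM0A0) | _BV (WGM01); /* Toggle OC0A on compare match, CTC mode. */
        TCCR0B = _BV (CS00);                 /* No prescaling. */
        OCR0A = 60;
}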

I chose not to drive my IR LED directly from the AVR pins (like the guy in the aforementioned links did), but rather to use a transistor as a switch. I assume that this way I am able to drive a bigger current through the IR diode, making it produce stronger flashes. Next to the IR diode you can see a status diode, which indicates to the user that the device is sending a signal. I also wanted it to be fully reprogrammable, thus the connector on the board.

The last thing I did was the power-down mode. The circuit, when turned on, draws ~5mA even when idling, so I presume the CR2032 battery would not last long. To prolong battery life I turn the AVR into power-down mode after 3 seconds, and I wake it up via a pin change interrupt when the button is pressed. The AVR in power-down mode draws no current at all according to my multimeter (which simply is not precise enough to detect it), but according to the AVR specs it should draw about 0.1uA.
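
The power-down plus pin-change-wake combination is only a few lines with avr-libc. A minimal sketch, assuming the button sits on PB0/PCINT0 (my actual pin assignment may differ; check the Eagle files, and note the pin change vector name varies between the 2313 and 2313A headers):

#include <avr/io.h>
#include <avr/interrupt.h>
#include <avr/sleep.h>

/* Wake-up source; the handler can be empty, its only job is to get us out of sleep. */
ISR (PCINT_vect)
{
}

void goToSleep (void)
{
        GIMSK |= _BV (PCIE);   /* Enable pin change interrupts... */
        PCMSK |= _BV (PCINT0); /* ...on PB0 only (assumed button pin). */
        sei ();

        set_sleep_mode (SLEEP_MODE_PWR_DOWN);
        sleep_enable ();
        sleep_cpu ();          /* ~0.1uA until the button changes state. */
        sleep_disable ();
}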

The total cost was about $5-7, with the AVR and casing being the most expensive parts.

Below you can find a link to an archive with all the files necessary to duplicate my design. The archive includes the Eagle files (both PCB and schematic), avr-gcc source files with CMake scripts, and the ATtiny2313A binary. The code was developed in Eclipse.

  • LINK

Feel free to post your thoughts on this design. I would greatly appreciate your comments on what I could have done better, or what I've done wrong (the device works well, but there is always room for improvement). For example I would be glad to hear how to extend the range which, as I mentioned, is only about 3m / 10ft.

Links