Monday, December 17, 2012

Tags generation for ASF project

If you use the Atmel Software Framework, you can generate tags (with ctags or GNU Global) for your Vim/Emacs environment in the following way (in a Makefile):
ctags = c:/Program Files/ctags.exe
gtags = c:/global/bin/gtags.exe
sed = c:/MinGW/msys/1.0/bin/sed.exe
find = c:/MinGW/msys/1.0/bin/find.exe

GTAGS_FILES = GTAGS GPATH GRTAGS
# Global cannot parse files outside of c:/prj/PRJ1/src
TAGS_DIRS = 'c:/Program Files/Atmel/Atmel Studio 6.0/extensions/Atmel/AVRGCC/3.3.2.31/AVRToolchain/avr32/include' \
   c:/prj/PRJ1/src/asf-3.0.1 \
   c:/prj/PRJ1/src/PRJ1

tags.files: force
	$(find) $(TAGS_DIRS) -type f \( -name '*.S' -or -name '*.h' -or -name '*.c' \) | $(sed) 's/c:\/prj\/PRJ1\/src/./g' >tags.files

$(GTAGS_FILES): tags.files
 $(gtags) -f tags.files

.PHONY: global_tags
global_tags: $(GTAGS_FILES)

tags: tags.files
 $(ctags) -L tags.files

.PHONY: all_tags
all_tags: tags global_tags

.PHONY: clean_tags
clean_tags:
 rm -f $(GTAGS_FILES) tags
This example assumes you are on Windows and have MinGW (with the find and sed utilities). The project is located in c:\prj and is named PRJ1. To generate tags, use the targets tags, global_tags or all_tags. The tags file will be about 100 MB and the Global tags files about 70 MB, but Global cannot parse files outside the source tree, so avr32/include will be missed - use ctags for it instead.

Calculating timer/clock parameters (prescalers) for 'tick' generation with AVR32

The task is similar to this one (for AVR) but for AVR32, using ASF. If we need to generate ticks, we can use one of the waveform generation modes of the AVR32 TCs. For this we need to choose a value for the RC register and select a clock source (an internal line). We use the PBA clock divided by 2, 8, 32 or 128 (so the sources are TC_CLOCK_SOURCE2..5 in ASF 'terminology' - macros :).
Here is the source of a calculator of RC and the divider:
#include <stdint.h>
#include <limits.h>
#include <stdlib.h> /* labs() */

// max value of the RC register, which has type uint16_t (unsigned short on AVR32 GCC)
#define MAX_RC ((1UL << (sizeof(uint16_t)*CHAR_BIT)) - 1)
int
calc_tc_params(long *freq, long *err, uint16_t *presc, uint16_t *rc)
{
        static const long divs[] = {2, 8, 32, 128};
        static const int ndivs = sizeof divs/sizeof divs[0];
        long _err = 0, _merr = *err;
        long _rc = 0;
        uint16_t _mrc = 0;
        int _idiv = 0, _midiv = 0;

        for (_idiv = 0; _idiv < ndivs; _idiv++) {
                _rc = (FPBA / divs[_idiv]) / (*freq);
                _rc = (_rc==0? 1 : _rc>MAX_RC? MAX_RC : _rc);
                _err = labs((*freq) - (FPBA / divs[_idiv]) / _rc);
                if (_err <= _merr) {
                        _merr = _err;
                        _midiv = _idiv;
                        _mrc = _rc;
                }
                if (!_err) break;
        }

        if (_mrc) {
                *rc = _mrc;
                *err = _merr;
                *presc = divs[_midiv];
                *freq = (FPBA / divs[_midiv]) / _mrc;
                return (1);
        } else
                return (0);
}
NOTE: FPBA is the frequency of the PBA clock. Inputs are:
  • freq - desired frequency in Hz
  • err - maximum allowed absolute error in Hz
Outputs are:
  • freq - real frequency for the calculated parameters
  • err - real absolute error
  • presc - divider value (2..128)
  • rc - value to load into the RC register
The timer clock source can be found this way:
#define PRESC2TCSRC(PRESC) ((PRESC)==2? TC_CLOCK_SOURCE_TC2 \
                            :(PRESC)==8? TC_CLOCK_SOURCE_TC3 \
                            :(PRESC)==32? TC_CLOCK_SOURCE_TC4 \
                            :(PRESC)==128? TC_CLOCK_SOURCE_TC5 \
                            :-1)
and its result should be used as the tcclks member of the tc_waveform_opt_t struct. calc_tc_params() returns 1 on success and 0 otherwise.
For more information on how to start 'tick' generation, see the ASF example TC_EXAMPLE3 of the TC module.
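A usage sketch (my example, not from ASF; it assumes FPBA is defined and that the TC is set up as in TC_EXAMPLE3, so only the tcclks member is shown):
/* Hedged sketch: ask for a 1 kHz tick with up to 5 Hz of absolute error. */
long freq = 1000, err = 5;
uint16_t presc, rc;

if (calc_tc_params(&freq, &err, &presc, &rc)) {
        tc_waveform_opt_t waveform_opt = {
                /* ... other members as in TC_EXAMPLE3 ... */
                .tcclks = PRESC2TCSRC(presc), /* clock source from the divider */
        };
        /* load rc into the RC register (e.g. with tc_write_rc()) and start the TC;
           freq and err now hold the real frequency and the real error */
} else {
        /* no divider/RC pair gives the requested frequency within err */
}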

Saturday, November 24, 2012

Tcl 'regsub' with replacement by a procedure call

This is a procedure like regsub, but the replacement is done by a procedure call - not by text:
proc regsub2 {re inStr by {varName ""}} {
# Simple version of regsub but support substitution by proc:
# regsub2 $re $str $by, where $by can be {proc \1 \2...}
    set result $inStr
    set byCallStr [regsub -all {\\(\d+)} $by {$m\1}]
    set matchedVarsCount [lindex [regexp -about $re] 0]
    set matchedVarNames {all}
    for {set i 1} {$i <= $matchedVarsCount} {incr i} {
        lappend matchedVarNames "m$i"
    }
    set nsubs 0
    foreach $matchedVarNames [regexp -all -inline $re $result] {
        set byRes [eval $byCallStr]
        set result [regsub $re $result $byRes]
        incr nsubs
    }
    if {$varName eq ""} {
        return $result
    } else {
        upvar 1 $varName varName_
        set varName_ $result
        return $nsubs
    }
}
If you call it from IncrTcl, pass the replacement procedure via ::itcl::code.

Saturday, November 3, 2012

New Literate Programming Tool - TCLP 1.0!

I have made a new LP (literate programming) tool: very small (about 700-800 KB!) and flexible!
Main features are:
  1. The document has a 'document header' with fields: TITLE, PROJECT, VERSION, DESCRIPTION, AUTHOR, ADDRESS, TAGS; it is shown at the top of the document.
  2. The user can define values of these fields (and any other field) and use defined fields in the document body (even in code chunks). There are special fields: APPNAME, APPVERSION, TIMESTAMP, DATE, TIME.
  3. The document has an auto-generated TOC (table of contents), IT (index table) and ERT (external references table). The TOC is placed at the start of the document, the IT at the end, and the ERT last.
  4. The TOC is generated automatically for all headers defined with the 'head' command.
  5. The IT is generated automatically for all code chunks and includes the file names of the output chunks and the line numbers where to find them.
  6. The ERT is generated automatically for all external references in the document (defined with the 'ref' command) and includes back-references to their usage in the document body.
  7. Markup commands include commands for bold, underlined, italic and monospaced text. Superscripts and subscripts are supported too. Markup can be nested.
  8. There is a special file with abbreviations to be used in section names and as acronyms.
  9. The user can use an acronym in the text, taken from the abbreviations file or given explicitly with an embedded expansion.
  10. The user can create "vocabularies" - tables of definitions - from the abbreviations file or embedded ones, defined explicitly.
  11. There are named counters (auto-incremented) to use in captions and so on.
  12. Code chunks are specially formatted fragments of source code with line numbering and optional syntax highlighting (keywords only). Lines can be numbered continuously (with the same counter name) or independently. The user can reference code lines by chunk name (section) and line number, or by defining a NAME for a code line and using this name for referencing.
  13. Tables are supported: with attributes (table/row/cell), spanning, nesting.
  14. Lists (ordered/unordered) are also supported, with attributes; they can be nested too.
  15. The user can insert an image with the 'embed' command. Any fragment of the document can be "framed" with the 'frame' command; it creates a frame with a caption (on top or bottom), a name and links (for referencing).
  16. There are 2 kinds of citations: inline and block. A citation can have a linked URL (shown as "More...").
  17. The user can insert into the document text any previously defined field values, and also environment variables. A special environment variable is USER - the name of the user running the tool.
  18. There is a special command 'wrap' for wrapping text with any word or quotation symbols, braces, etc.
  19. There is a command 'ent' to translate special symbols to HTML entities.
  20. There are references:
    • to external URL
    • to header
    • to line of code chunk/named line of code chunk
    • to code chunk
    • to frames
    • to any place of document where user defines own link
  21. There are 2 special commands: 'atstart' and 'atend'. 'atend' is mainly used for code generation from code chunks. The user can define there which code chunks should be written to output source files (by sections, or by a glob pattern of sections). It is also possible to write any free text to the output file.
  22. Most of the created HTML tags have CSS classes. For all available CSS classes, see the 'tclp.css' file. The CSS file to use can be set with a command-line option.
  23. User can add syntax keywords for any other language (for highlighting).
You can try it on Google Code: http://code.google.com/p/tclp/

Automatic generation of proc/method help in Tcl

Often you write not only internal code but also procs for the user (to be used in a console and so on). It's inconvenient to keep the documentation of procedures in a separate file, so let's use doc-strings as in Lisp, Python and so on.
It's not Java, so you don't need a special tool :)
Suppose our procedures/IncrTcl methods have comments like this:
proc f {args} {
## Doc-string for this method.
# Syntax: f ...
##
... CODE ...
}
OR even:
proc f {args} {
## Doc-string for this method.
# Syntax: f ...
# Other help ##
... CODE ...
}
So, the doc-string is at the start of the proc body; it begins with '##' and ends with '##' (on the same line or a later one). Other comments are treated as usual comments and are ignored by the doc-string mechanism. Here is an example of how to parse such doc-strings:
package provide mydoc 1.1

package require Itcl

namespace eval mydoc {
    namespace export doc
}

set _NOHELPMSG "No help."

proc ::mydoc::_docstring {body} {
    set body [string trim $body]
    set docstring ""
    # without 1st '^' will match any FIRST docstring block even after
    # commands!
    if {[regexp {^##\s*([^\n]+\n?)+(##)} $body docstring]} {
        set docstring [regsub -all {\s*#\s?} $docstring \n]
        set docstring [string trim $docstring]
        return $docstring
    }
}

proc ::mydoc::doc args {
    ## Help on command: procedure or class method. Call:
    #   doc some_object some_method
    #   doc some_class some_method
    #   doc some_proc ##
    global _NOHELPMSG
    set found ""
    switch -- [llength $args] {
        1 {
            # args: proc
            set name [lindex $args 0]
            set arguments [info args $name]
            set body [info body $name]
            set found [_docstring $body]
        }
        2 {
            # FIXME not optimal!
            # args: object|class method
            lassign $args cls_obj meth
            set objs [itcl::find objects]
            # cls_obj may be object OR class. What is it?
            if {-1 != [lsearch -regexp $objs :*$cls_obj]} {
                # this is the object
                set arguments [$cls_obj info args $meth]
                set body [$cls_obj info body $meth]
                set found [_docstring $body]
            } else {
                # this is the class
                set arguments [namespace eval ::$cls_obj info args $meth]
                set body [namespace eval ::$cls_obj info body $meth]
                set found [_docstring $body]
            }
        }
        default { error "wrong args: proc | object method | class method" }
    }
    if {$found eq ""} {
        return $_NOHELPMSG
    } else {
        return $found
    }
}
Usage is simple, you can call:
doc object method
doc class method
doc usual-procedure
Here class is an IncrTcl class.

Tuesday, October 30, 2012

Traversing over paths in Tcl

If we have paths - strings like DOS directories - we may want to group them and walk over all of them:
Input:
  a/b/c
  a/y/u
  a/b/d
  a/y/i

Walking:
  a (a)
   b (a/b)
    c (a/b/c)
    d (a/b/d)
   y (a/y)
    u (a/y/u)
    i (a/y/i)
The next procedure does it, but the paths are a list of lists (each item is a list of directory names).
proc forpaths {boundVarNames paths body} {
# walk on paths, each is list of dirs. $body will execute on visit each dir,
# variables bound:
#   $keyName - current path
#   $levelName - current level (0...N)
#   $leafName - is leaf or not (1, 0)
    foreach {keyName levelName leafName} [lrange $boundVarNames 0 2] {}
    set group [dict create]
    foreach path $paths {
        dict set group {*}$path "@LEAF"
    }
    proc _cmp {a b} {
        if {[expr {$a > $b}]} { return 1 } \
        elseif {[expr {$a < $b}]} { return -1 } \
        else { return 0 }
    }
    proc _trackpath {track level dir} {
        if {$track eq ""} { set track {"@DUMMY"} }
        switch -- [_cmp [expr $level+1] [llength $track]] {
            1  { lappend track $dir }
            0  { lset track end $dir }
            -1 { set track [lreplace $track $level end $dir] }
        }
        return $track
    }
    set gtrack {}; # current path when visit each node
    proc _walk {d keyName levelName leafName body {_deep 2}} {
    # here $level is level in tree, not in stack
    # $_deep is level in stack
        upvar $_deep $keyName key $levelName level
        upvar gtrack gtrack
        if {$leafName ne ""} { upvar $_deep $leafName leaf }
        dict for {k v} $d {
            set level [expr {$_deep-2}]
            set gtrack [_trackpath $gtrack $level $k]
            set key $gtrack
            if {$v eq "@LEAF"} {
                set leaf 1
                uplevel $_deep $body
            } else {
            # nested
                set leaf 0
                uplevel $_deep $body
                _walk $v $keyName $levelName $leafName $body [expr {$_deep+1}]
            }
        }
    }
    _walk $group $keyName $levelName $leafName $body
}
It works like foreach:
forpaths {k level leaf} $someDirs {
    puts "key: $k level: $level isLeaf: $leaf"
}
Only the first variable (the key) is mandatory, so it's possible to call:
forpaths {k l f} ...
forpaths {k l} ...
forpaths k ...
So, to traverse directory names, first split them by the separator string into lists!

Wednesday, October 24, 2012

PEG grammars in Tcl - simple example

This is a very simple example of how to create a PEG grammar and parse some text with it using Tcllib (I used Tcllib 1.4).
package require starkit
starkit::startup
package require pt::peg::from::peg
package require pt::peg::container
package require pt::peg::interp

set PEG {
PEG myapp (Exp)
    IfKw       <- 'i' 'f'                               ;
    ThenKw     <- 't' 'h' 'e' 'n'                       ;
    ElseKw     <- 'e' 'l' 's' 'e'                       ;
    OSp        <- ('\t' / '\n' / ' ')?                  ;
    Sp         <- ('\t' / '\n' / ' ')+                  ;
    Block      <- (Exp / 't') &';'                      ;
    Cond       <- 'c' / 'o'                             ;
    If         <- IfKw Sp Cond OSp ':' OSp Block        ;
    Exp        <- If                                    ;
END;
}

set CPEG [pt::peg::from::peg convert $PEG]
pt::peg::container cont
cont deserialize = $CPEG
set src "if c: if o:t;"
pt::peg::interp myint
myint use cont
puts [myint parset $src]
(I use a starkit...)
This grammar is very simple (and unrealistic): it defines rules to parse a nested 'if'-expression like this:
if c: if o: t;
OR
if CONDITION: if CONDITION: BLOCK;
where CONDITION is 'c' or 'o', and BLOCK is another Exp (expression) or 't' followed by ';'; when blocks are nested, only one ';' is needed. The example has bugs (see Sp, OSp - optional spaces), but it shows the basic tricks with PEG: for example, the PEG definition is loaded from a variable, not imported from a file, so your application does not need all the Tcllib packages :) After running, you get:
c:\>tclkitsh.exe main.tcl
Exp 0 11 {If 0 11 {IfKw 0 1} {Sp 2 2} {Cond 3 3} {OSp 4 3} {OSp 5 5} {Block 6 11 {Exp 6 11 {If 6 11 {IfKw 6 7} {Sp 8 8}
{Cond 9 9} {OSp 10 9} {OSp 11 10} {Block 11 11}}}}}

Monday, October 22, 2012

Integer limits in C

They are in limits.h. But when you need them calculated (depending on the type - a signed type, assuming two's complement), you can use:
#define SOME_TYPE_MAX ((SOME_TYPE)(((unsigned long)1 << (sizeof(SOME_TYPE)*8 - 1)) - 1))
#define SOME_TYPE_MIN ((SOME_TYPE)~SOME_TYPE_MAX)
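A quick check of the idea with a concrete type (a sketch; it assumes a signed two's-complement type, and short is chosen just for the example):
#include <stdio.h>
#include <limits.h>

#define SOME_TYPE short /* a concrete signed type for the check */
#define SOME_TYPE_MAX ((SOME_TYPE)(((unsigned long)1 << (sizeof(SOME_TYPE)*8 - 1)) - 1))
#define SOME_TYPE_MIN ((SOME_TYPE)~SOME_TYPE_MAX)

int main(void)
{
        /* should print the same values as the limits.h constants for short */
        printf("%d %d\n", (int)SOME_TYPE_MAX, (int)SHRT_MAX);
        printf("%d %d\n", (int)SOME_TYPE_MIN, (int)SHRT_MIN);
        return 0;
}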

Algorithm generalization in C (trick with auto-codegenerator)

Sometimes we need to code algorithms in C in a very general way; the alternative is many functions for different parameters (for optimization reasons, parametrized code, etc.). If we want to avoid function pointers and other slow methods, we can use heavy macro tricks. One of them is code generation by self-including.
The idea is that the header file has sections:
  • Common, always included
  • Parametrized code-generator sections
  • Start of code generation
and it looks like this (file self.h):
#ifndef _USUAL_H_GUARD
#define _USUAL_H_GUARD

// common code, always included

#endif /*_USUAL_H_GUARD*/

// templates of codegenerators:
#ifdef SOME_CODEGEN_ARG
// some template with DEFINE's substitution
#elif defined(ANOTHER_CODEGEN_ARG0) && defined(ANOTHER_CODEGEN_ARG1)
// another template
#endif

// start of code generating:
// one version of general algorithm:
#ifndef SOME_CODEGEN_ARG /* guard of recursive including! */
#define SOME_CODEGEN_ARG "something"
#include "self.h"
#endif
// another version of the same generalized algorithm:
#ifndef SOME_CODEGEN_ARG /* guard of recursive including! */
#define SOME_CODEGEN_ARG "something else"
#include "self.h"
#endif
You can also have generalized function names! See:
#define __CONC3(X, Y, Z) X ## Y ## Z
#define _CONC3(X, Y, Z) __CONC3(X, Y, Z)
#define FUNC_NAME(N) _CONC3(N, ANOTHER_CODEGEN_ARG0, ANOTHER_CODEGEN_ARG1)
Use this macro in your codegen templates as FUNC_NAME(myfunc_). The resulting name will be myfunc_AB() if you defined the ANOTHER_CODEGEN_ARGs as A and B before starting the code generation (auto-including).
And yes, you can "generalize" your algorithm (name and body!) with previously defined PARAMETERS (before the auto-include).

YOU SHOULD MODIFY THE RECURSION-GUARD LINE WHEN YOU ADD A NEW TEMPLATE! IT IS BETTER TO USE ONE MACRO SYMBOL TO INDICATE 'GENERATING' MODE AND OTHER MACROS AS THE PARAMETERS.

It does the same thing as C++ templates. A better way is to use separate files instead of one big file with embedded "templates" - included .def files.

Generalization includes unique name generation for the different argument types/different sub-algorithms used.
Suppose we have an algorithm which is general but has some parts that depend on different "sub-methods": for example, using MAX or MIN, using some FLAG or not.
Our function can have names like func_MAXFLAG(), func_MINFLAG(), func_MAXNOFLAG(), func_MINNOFLAG(); the general algorithm will use one of them (or all). It is something like overloading C++ methods/functions, but depending not on types but on some parameter value.
First, we should generate a suffix for the function name depending on the parameters. We can use the trick with _CONC3() (see above):
#define ARG1 MAX
#define ARG2 NOFLAG
and in the .def-file:
int FUNC_NAME(func_, ARG1, ARG2)(int a, int b);
to get the name func_MAXNOFLAG. But here is the pitfall. If we decide to use conditional compilation with #if ARG1==MAX, we will get a wrong result! Because MAX is undefined, it evaluates to 0 in the #if, so the test is always TRUE (it becomes #if 0 == 0). So you CANNOT distinguish the MAX and MIN values of ARG1!
So you need to define MAX, MIN, FLAG, NOFLAG as values to be able to test them in #if's!
To minimize code you can use, instead of many parameters, one parameter with bit flags. Test them in #if's and also use them for name generation:
#define MAX (1<<1)
#define MIN (1<<2)
#define FLAG (1<<3)
#define NOFLAG (1<<4)
...
#define ARGS (MIN|NOFLAG)
#include "template.def"
and in template.def something like this:
#if ARGS&MIN
#define N1 MIN
#else
#define N1 MAX
#endif
... and the same for FLAG/NOFLAG

// generator for similar functions with one, common, general algorithm.
int FUNC_NAME(func_, N1, N2)(int a, int b) {
#if ARGS&MIN
... code for min
#else
... code for max
#endif
}

#undef ARGS
#undef N1
#undef N2
It's also easy to split template.def with #ifdef ONLY_DECLARATIONS into 2 parts: declarations (to be included from .h files) and code (to be included from .c files). In this case the checking of ARGS and the "undefs" should be outside of #ifdef ONLY_DECLARATIONS.
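A minimal sketch of such a split (illustrative only: it assumes FUNC_NAME(N, A, B) is _CONC3(N, A, B) from above and that MIN/MAX/FLAG/NOFLAG are the bit values defined earlier; the name fragments Min/Max/Flag/NoFlag are deliberately different tokens so that pasting does not re-expand the value macros):
/* template.def - hypothetical sketch */
#if ARGS & MIN
#define N1 Min
#else
#define N1 Max
#endif
#if ARGS & FLAG
#define N2 Flag
#else
#define N2 NoFlag
#endif

/* declarations: always emitted */
int FUNC_NAME(func_, N1, N2)(int a, int b);

#ifndef ONLY_DECLARATIONS
/* definitions: emitted only when included from a .c file */
int FUNC_NAME(func_, N1, N2)(int a, int b)
{
#if ARGS & MIN
        return (a < b) ? a : b; /* code for the "min" variant */
#else
        return (a > b) ? a : b; /* code for the "max" variant */
#endif
}
#endif /* ONLY_DECLARATIONS */

/* parameter checks and cleanup stay outside of ONLY_DECLARATIONS */
#undef ARGS
#undef N1
#undef N2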
So, what you can do in C++ you can also do in C, except matching on values/types (but you can use static checks in that case). How necessary all this is - that is the big question; for example, many language designers don't like restrictions like types (as in dynamic languages), and some prefer run-time checks (more flexible than only type/inheritance checking) :)

Monday, October 15, 2012

Is a sample located around a periodic peak?

The next pseudo-code (Python) function tests whether a sample is located in a domain around a periodic peak:
def isclose(a, b, lim):
    # is b close to a OR to aliquot of a in lim
    if a == 0:
        return abs(b - a) <= lim
    else:
        n = b/a
        if b%a > a/2:
            n += 1
        c = a*n # closest aliquot to b
        m = abs(b - c)
        return m <= lim
It returns a boolean.
It is useful to test the dispersion of samples around peaks.
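A rough C port of the same check (a sketch; positive integer arguments are assumed, as the Python version also relies on integer division):
#include <stdlib.h> /* labs() */

/* Is b close to a (or to a multiple of a) within lim? */
static int isclose(long a, long b, long lim)
{
        long n, c;

        if (a == 0)
                return labs(b - a) <= lim;
        n = b / a;
        if (b % a > a / 2)
                n++;            /* round to the nearest multiple */
        c = a * n;              /* closest multiple of a to b */
        return labs(b - c) <= lim;
}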

Thursday, October 11, 2012

How to find the fundamental frequency of a wave (auto-detect the signal period)

This is not such a rare task as you might think (for example, you can find it in an oscilloscope's auto mode). There are several methods to do it: in the frequency domain (FD) and in the time domain (TD). The most famous FD method is the auto-correlation function: its maximum lets you find δτ - the period of the signal.
But the main problem with FD methods is their calculation speed. TD methods are faster. They are:
  • Based on zero-crossing counting
  • Peak rate
  • Slope rate and so on...
Here is one of them: peak rate - "monitoring" the max/min of the signal in real time.
The main idea is that the signal has max (and min) values which repeat with dt equal to the period of the signal.
Problems are:
  • there may be several mins/maxes, e.g. due to noise
  • the signal can be a superposition (sum) of signals with different frequencies
  • the signal may consist of "packets" (repeated, also periodic, waves)

[Fig 1, Fig 2, Fig 3: periodic waveforms with period T, illustrating the peak-detection idea, distortions and packets.]
Look at Fig 1 - it illustrates the idea of the algorithm: a periodic wave has periodic maxima and minima. If we can detect them, we can detect the period. But sometimes it's difficult to recognize the minima and sometimes the maxima, so we should use "peaks" - both (mins and maxes) - and select the biggest of them.
So main idea of algorithm is:
PEAKS detector -> MAX -> Result (period)
However, Fig 3 shows the next problem: the signal may not be ideal and will have distortions (horizontal - in time, and vertical - in magnitude), so our algorithm should allow some tolerance on the magnitude values and on the repeated peak distances. These are the equality domains for the peak value and for the possible periods. The next problem is shown in Fig 2: if the signal is a sequence of packets, then we can detect 2 periods: the period of the signal inside a packet or the period of the packets. This algorithm detects the period of the packets (because it is used for an oscilloscope). The last problem is noise; to reduce its effect we use an average of the period maximums. The equality domains also help against noise (as you can see in the examples).
So, final scheme will be:
PEAKS detector -> Packets PROCESSOR -> MAX -> Result (period)
Here is the Python prototype of this algorithm:
import sys
import operator
import math
import random

class AverageDistance:
    # max. distance (may be period)
    max = -sys.maxint - 1
    # enough distances number to report
    difflim = 2
    # proximity of max. distance values (equality domain)
    vallim = 0
    # distance between peak
    d = 0
    # sum of distances between peaks (for average)
    diff = 0
    # number of distances between peaks (for average)
    ndiff = 0

    def dump(self):
        return "d=%d, diff=%d, ndiff=%d"%(self.d,self.diff,self.ndiff)

    def __init__(self, **kw):
        for k,v in kw.items():
            setattr(self, k, v)

    def calc(self, val):
        if abs(val - self.max) <= self.vallim:
            if self.d == 0:
                self.diff += val
            else:
                self.diff += self.d + val
            self.ndiff += 1
            self.d = 0
        elif val > self.max:
            self.max = val
            self.d = 0; self.diff = 0; self.ndiff = 0
        else:
            self.d += val
        if self.ndiff > self.difflim:
            return self.diff/self.ndiff
        else:
            return -1

    def test(self, samples):
        for smp in samples:
            #print "%d:"%smp
            x = self.calc(smp)
            if x != -1:
                return x
            #print "\t%s"%self.dump()
        return x

def DistanceBetweenPeaks_max(**kw):
    return DistanceBetweenPeaks(cmpfunc=operator.gt, max = -sys.maxint-1, **kw)
def DistanceBetweenPeaks_min(**kw):
    return DistanceBetweenPeaks(cmpfunc=operator.lt, max = sys.maxint, **kw)
class DistanceBetweenPeaks:
    ### Next 2 attrs. defines max. based or min. based algorithm,
    ### default is max. based:
    # comparision func
    cmpfunc = operator.gt
    #max. sample
    max = -sys.maxint-1

    # proximity of sample (near to max.)
    vallim = 0
    # peak value if not -1 (to distinguish monotonic growed 2 close samples)
    # it's too not scalar, but domain (vallim is used as limit)
    peak = -1
    # distance between two peaks
    d = 0
    # index of max.
    imax = 0
    # index of sample
    i = 0

    def __init__(self, **kw):
        for k,v in kw.items():
            setattr(self, k, v)

    def calc(self, val):
        ret = -1
        #print "val:",val
        # version with cut 1,2,3,4... samples by limitation of "hole" between peaks
        #if abs(val - self.max) <= self.vallim and (self.i - self.imax)>1:
        if abs(val - self.max) <= self.vallim and \
                ((abs(val - self.peak) <= self.vallim) if self.peak != -1 else True):
            ret = self.i - self.imax
            if ret == 0: ret = 1
            self.imax = self.i
            #print "1:"
        #elif val > self.max:
        elif self.cmpfunc(val, self.max):
            #print "2: val=%d max=%d"%(val,self.max)
            # not neccessary next 2 lines:
            if val > self.peak and self.peak != -1:
                val = self.peak
            self.max = val
            self.imax = self.i
            self.d = 0
        else:
            self.d += 1
        self.i += 1
        return ret

    def test(self, samples):
        for smp in samples:
            x = self.calc(smp)
            if x != -1:
                print x

def test_alg(samples, dbp_kw={}, ad_kw={}):
    dbp_max = DistanceBetweenPeaks_max(**dbp_kw)
    dbp_min = DistanceBetweenPeaks_min(**dbp_kw)
    ad_max = AverageDistance(**ad_kw)
    ad_min = AverageDistance(**ad_kw)
    period_maxalg = -1
    period_minalg = -1
    result = -1
    for smp in samples:
        time_maxalg = dbp_max.calc(smp)
        if time_maxalg != -1:
            period_maxalg = ad_max.calc(time_maxalg)
            if period_maxalg > result:
                result = period_maxalg
        time_minalg = dbp_min.calc(smp)
        if time_minalg != -1:
            period_minalg = ad_min.calc(time_minalg)
            if period_minalg > result:
                result = period_minalg
    return result

# Test samples
################################################################################
# T=7
samples0 = (
0, -10, -20, -10, 0, 10, 20, 0, -10, -20, -10, 0, 10, 20,
0, -10, -20, -10, 0, 10, 20, 0, -10, -20, -10, 0, 10, 20,
0, -10, -20, -10, 0, 10, 20, 0, -10, -20, -10, 0, 10, 20,
)
# samples with not same maximus, T=7
samples1 = (
0, -10, -20, -10, 0, 10, 18, 0, -10, -20, -10, 0, 10, 19,
0, -10, -20, -10, 0, 10, 19, 0, -10, -20, -10, 0, 10, 18,
0, -10, -20, -10, 0, 10, 19, 0, -10, -20, -10, 0, 10, 18,
)
# samples with not same maximus and time between them, T=7
samples2 = (
0, -10, -20, -10, 0, 10, 18, 0, -10, -20, -10, 0, 10, 19,
0, -10, -20, -10, 0, 10, 19, 0, -10, -20, -10, 0, 10, 0, 18,
0, -10, -20, -10, 0, 10, 18, 0, -10, -20, -10, 0, 10, 19,
0, -10, -20, -10, 0, 0, 0, 10, 18, 0, -10, -20, -10, 0, 1, 1, 10, 19,
)
# T=3
samples3 = (
0, 0, -5, 0, 0, -5, 0, 0, -5, 0, 0, -5, 0, 0,
0, 0, -5, 0, 0, -5, 0, 0, -5, 0, 0, -5, 0, 0,
0, 0, -5, 0, 0, -5, 0, 0, -5, 0, 0, -5, 0, 0,
)
# T=3
samples4 = (
-1, -1, -10, -1, -1, -10, -1, -1, -10, -1, -1, -10, -1, -1, -10,
-1, -1, -10, -1, -1, -10, -1, -1, -10, -1, -1, -10, -1, -1, -10,
-1, -1, -10, -1, -1, -10, -1, -1, -10, -1, -1, -10, -1, -1, -10,
)
# T=8
samples5 = (
1,0,1,0,0,0,0,0,1,0,1,0,0,0,0,0,1,0,1,0,0,0,0,0,1,0,1,0,0,0,0,0,1,
# T=12
#1,0,1,0,1,0,1,0,0,0,0,0,1,0,1,0,1,0,1,0,0,0,0,0,
#1,0,1,0,1,0,1,0,0,0,0,0,1,0,1,0,1,0,1,0,0,0,0,0,
#1,0,1,0,1,0,1,0,0,0,0,0,1,0,1,0,1,0,1,0,0,0,0,0,
#1,0,1,0,1,0,1,0,0,0,0,0,1,0,1,0,1,0,1,0,0,0,0,0,
)
# T=10
samples6 = (
0,1,2,3,4,5,4,3,2,1,0,1,2,3,4,5,4,3,2,1,0,1,2,3,4,5,4,3,2,1,
0,1,2,3,4,5,4,3,2,1,0,1,2,3,4,5,4,3,2,1,0,1,2,3,4,5,4,3,2,1,
)
# T=50
samples7 = (
0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,
24,23,22,21,20,19,18,17,16,15,14,13,12,11,10,9,8,7,6,5,4,3,2,1,
0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,26,
24,23,22,21,20,19,18,17,16,15,14,13,12,11,10,9,8,7,6,5,4,3,2,1,
0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,
24,23,22,21,20,19,18,17,16,15,14,13,12,11,10,9,8,7,6,5,4,3,2,1,
0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,26,
24,23,22,21,20,19,18,17,16,15,14,13,12,11,10,9,8,7,6,5,4,3,2,1,
0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,26,
24,23,22,21,20,19,18,17,16,15,14,13,12,11,10,9,8,7,6,5,4,3,2,1,
)
# "square", T=8
samples8 = (
0,0,0,5,5,5,5,5,0,0,0,5,5,5,5,5,0,0,0,5,5,5,5,5,0,0,0,5,5,5,5,5,
0,0,0,5,5,5,5,5,0,0,0,5,5,5,5,5,0,0,0,5,5,5,5,5,0,0,0,5,5,5,5,5,
0,0,0,5,5,5,5,5,0,0,0,5,5,5,5,5,0,0,0,5,5,5,5,5,0,0,0,5,5,5,5,5,
)
# Sin: noise is 1..300. Magnitude is 1000, T=360
# to detect DistanceBetweenPeaks.vallim = 260..300, AverageDistance.vallim=10
samples9 = []
for angle in range(0, 5*360, 1):
    smp = math.sin(math.radians(angle))
    smp += random.uniform(0.001,0.5)
    samples9.append(int(smp*1000))
# Long glitches, T=8
samples10 = (
0,0,0,1000,5,5,1000,5,0,0,0,5,5,1000,5,1000,0,0,0,5,5,5,5,5,0,0,0,100,5,5,5,5,
0,0,0,5,5,5,100,5,0,0,0,100,5,5,5,5,0,0,0,5,5,5,5,100,0,0,0,5,5,5,5,100,
0,0,0,5,5,100,5,5,0,0,0,5,5,100,5,5,0,0,0,5,5,5,100,5,0,0,0,5,5,100,5,5,
)

# Tests
################################################################################
print "samples0:", test_alg(samples0)
print "samples1:", test_alg(samples1)
print "samples2:", test_alg(samples2, dict(vallim=1), dict(vallim=1))
print "samples3:", test_alg(samples3)
print "samples4:", test_alg(samples4)
print "samples5:", test_alg(samples5)
print "samples6:", test_alg(samples6, dict(vallim=0), dict(vallim=1))
print "samples7:", test_alg(samples7, dict(vallim=1), dict(vallim=1))
print "samples8:", test_alg(samples8, dict(vallim=1), dict(vallim=1))
print "samples9:", test_alg(samples9, dict(vallim=260), dict(vallim=100))
print "samples10:", test_alg(samples10, dict(vallim=1), dict(vallim=1))
#print "samples9:", test_alg(samples9, dict(vallim=60, peak=1000), dict(vallim=60))

for i,ss in enumerate((samples0, samples1, samples2, samples3, samples4, samples5, samples6, samples7, samples8, samples9, samples10)):
    with open("data%d.dat"%i, "wt") as f:
        for smp in ss:
            f.write("%d\n"%smp)
DistanceBetweenPeaks is the general form of the PEAK detector. It can be parametrized with cmpfunc and max for 2 different kinds of detection: detecting maxima or detecting minima (there are 2 factory functions: DistanceBetweenPeaks_max, DistanceBetweenPeaks_min). Parameters: vallim - the equality domain (limit) for peak values (an absolute value, not a ratio!), peak - the expected peak value (not required, but in difficult cases, with very noisy signals, it can help).
AverageDistance is the PACKETS PROCESSOR. Its parameters are: vallim - the equality domain for distances (to treat them as equal), difflim - how many repetitions are enough to treat a repeated distance as the period.
test_alg() is the main function - the implementation of the algorithm scheme. It uses two peak detectors as alternatives and selects the maximum period value after processing possible packets.
This code is not very Pythonic, but it is meant to be easy to port to C and its nature is "dataflowable" - the functions have state (a structure) and process only ONE SAMPLE per call (they should become inline functions or macros), so they are well suited for real-time processing. The algorithm is faster than autocorrelation-based, probability-based, etc. methods.
test_alg(), however, takes not one sample but a list of samples - this signal fragment should span several periods; the real length depends on the difflim parameter used.
We see that the algorithm can detect even a very noisy signal (samples9) and signals with strong magnitude and time distortion.
The resulting period is in "sample" units (hops between samples). When you know the sampling frequency, it's very easy to convert it to Hz :)
This algorithm is very simple and is usable in "fast" detection tasks, but when you do offline signal processing, you should use a more accurate algorithm!

Wednesday, October 10, 2012

How to convert accelerometer values to path/coordinates

To get the path or coordinates from accelerometer values you need to integrate them. From physics we know:
v = dr/dt
a = dv/dt = d^2r/dt^2
If a (acceleration) is constant, the motion is called uniformly accelerated:
v(t) = v(0) + a*t
r(t) = r(0) + v(0)*t + 0.5*a*t^2
v(0), r(0) are the initial velocity and displacement. If a != const, these formulas cannot be used. If we know how a depends on t (e.g., a(t) = b + c*t), we can use:
v(t) = v(0) + b*t + 1/2*c*t^2
r(t) = r(0) + v(0)*t + 1/2*b*t^2 + 1/6*c*t^3
But if we don't know a(t), we can only use the general rules:
          t
r(t)=r(0)+∫v(t)dt
          0

          t
v(t)=v(0)+∫a(t)dt
          0
and calculate this with some numerical integration method.
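For instance, a cumulative trapezoid-rule step in C (a minimal sketch of mine, not the SciLab models mentioned below; dt is the sampling period):
/* Integrate acceleration into velocity and displacement with the trapezoid rule. */
typedef struct {
        double v, r;   /* current velocity and displacement */
        double a_prev; /* previous acceleration sample */
} integ_state_t;

static void integ_step(integ_state_t *s, double a, double dt)
{
        double v_new = s->v + 0.5 * (s->a_prev + a) * dt; /* v(t) = v(0) + ∫a dt */
        s->r += 0.5 * (s->v + v_new) * dt;                /* r(t) = r(0) + ∫v dt */
        s->v = v_new;
        s->a_prev = a;
}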
Examples (models with plots) of such methods (in SciLab) can be found there [in the integral.sci file: Simpson's rule, trapezoid rule, Leo Tick's rule]. But we should know about 2 problems:
  1. If there is some constant "component" in the acceleration (or noise), it will cause a BIG error in the result
  2. Each numerical integration method is a filter and has a transfer function, which gives distortions even at 0.2 - 0.4 of the Nyquist frequency [see R. W. Hamming, Digital Filters, 1977]
So you can use smarter methods. One of them is compound (chained) integration with filters. The idea is to filter each data flow: between the integrators, and to filter the output result too:
a(t)->FILT->af(t)->INT->v(t)->FILT->vf(t)->INT->x(t)->FILT->xf(t)
   where FILT is a filter and INT is an integrator.

It can be implemented on a dataflow model (as I did in hmc-demo). You can find an example of such a model in the SciLab source file "path.sci", where you can combine different integration methods with different filters and see the results in plots.
And remember: in the end, you never get a fully correct result. Use velocity sensors instead.

Sunday, September 30, 2012

AVR32: simple C pitfall

Programmers often like to convert a pointer to one type into a pointer to another. It's a usual (and INCORRECT!) "method" of picking an integer out of a byte array. From the C standard we know that a pointer to one type IS NOT the same as a pointer to another. But programmers ignore this fact on x86 :)
They will not be able to ignore it on the AVR/AVR32 architecture :)
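A minimal illustration (my example, not from the post): the cast may fault or misbehave on AVR32 because of alignment, while memcpy() is always safe:
#include <stdint.h>
#include <string.h>

uint32_t read_u32(const uint8_t *buf)
{
        /* WRONG on AVR32: buf may be unaligned, and the type pun also breaks
         * strict aliasing:
         *     return *(const uint32_t *)buf;
         */
        uint32_t v;
        memcpy(&v, buf, sizeof v); /* safe on any architecture */
        return v;
}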

Wednesday, September 19, 2012

Simplest (and fastest) allocator in C

I need to allocate strings (in the UI) of the same length in my MCU program, so I can use: 1) static local buffers in functions, or 2) a common buffer. Cons of 1): implicit memory "eating"; cons of 2): the functions are not reentrant. I use the following simple (simplest!) and fast allocator - it allocates strings of the same length. It has only the two necessary functions in its API: allocate and free. Here is the code:
#include <stdint.h>
#include "bset.h"
/** string count */
#define GUI_TMPSTR_N 3
/** max string length */
#define GUI_TMPSTR_MAXLEN 64
/// block of temporary strings with the same length
typedef struct gui_tmpstr_buf_t {
        bset_t pool;
        char buf[GUI_TMPSTR_N][GUI_TMPSTR_MAXLEN];
} gui_tmpstr_buf_t;

#define GUI_TMPSTR_BUF_INITIALIZER() { \
        .pool = BSET_INITIALIZER(), \
        .buf = {0} \
}

// global blocks for allocation
gui_tmpstr_buf_t _tmpstrbuf = GUI_TMPSTR_BUF_INITIALIZER();

/// allocate temporary string
char*
gui_tmpstr_alloc(void) {
        int i;
        BSET_FORCLR(i, &_tmpstrbuf.pool) {
                if (i >= GUI_TMPSTR_N)
                        break;
                BSET_SET(i, &_tmpstrbuf.pool);
                return (_tmpstrbuf.buf[i]);
        }
        return (NULL);
}

/// free allocated tmp. string
void
gui_tmpstr_free(char* ptr) {
        intptr_t i = ((intptr_t)ptr - (intptr_t)_tmpstrbuf.buf);
        if (i < 0) {
                // ptr is BEFORE base addr (buf)
                return;
        }
        if (i % GUI_TMPSTR_MAXLEN) {
                // ptr is not align on GUI_TMPSTR_MAXLEN
                return;
        }
        i /= GUI_TMPSTR_MAXLEN;
        if (BSET_ISSET(i, &_tmpstrbuf.pool)) {
                // free in pool
                BSET_CLR(i, &_tmpstrbuf.pool);
        }
}
GUI_TMPSTR_MAXLEN defines the length of each string, GUI_TMPSTR_N defines the number of available strings. I use such strings in the UI (cutting strings, modifying them, and so on...).

The bset.h used here is a modified BSD select.h with this API (macros): BSET_SET(), BSET_CLR(), BSET_TGL(), BSET_ISSET(), BSET_FOREACH(), BSET_FORSET(), BSET_FORCLR(), BSET_FOREACHBYTE(), BSET_FOREACHMASK(), BSET_BYTE(), BSET_MASK(), BSET_COPY(), BSET_ZERO().

The most interesting are BSET_CLR() - clear an item in the bit set, BSET_SET() - set an item in the bit set, and BSET_FORCLR() - iterate over the cleared items in the bit set; the latter is used to look for free strings in the pool.
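A plausible minimal version of the few macros used above (this is not the real bset.h, just a sketch over a single machine word to show the idea):
/* Hypothetical minimal bit set over one unsigned long word. */
typedef struct bset_t { unsigned long bits; } bset_t;

#define BSET_INITIALIZER() { .bits = 0 }
#define BSET_SET(i, s)     ((s)->bits |=  (1UL << (i)))
#define BSET_CLR(i, s)     ((s)->bits &= ~(1UL << (i)))
#define BSET_ISSET(i, s)   (((s)->bits >> (i)) & 1UL)
/* iterate i over cleared (free) positions */
#define BSET_FORCLR(i, s) \
        for ((i) = 0; (i) < (int)(8 * sizeof((s)->bits)); (i)++) \
                if (!BSET_ISSET((i), (s)))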

Little example:
char* ptr = gui_tmpstr_alloc();
if (ptr) {
  // USAGE of ptr
  ...
  // free:
  gui_tmpstr_free(ptr);
}

Monday, September 17, 2012

ASF: simple utility for font editing

The fonts used in the Atmel Software Framework are simple: each glyph is described by an array of bytes - each set bit is a pixel, no bit - no pixel :)
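For example, one glyph row expands into pixels like this (a small C sketch of mine, not ASF code):
#include <stdio.h>

/* Expand one 8x8 glyph (8 bytes, one per row) into '#'/' ' pixels. */
static void put_glyph(const unsigned char row[8])
{
        int y;
        unsigned char mask;

        for (y = 0; y < 8; y++) {
                for (mask = 0x80; mask; mask >>= 1)
                        putchar((row[y] & mask) ? '#' : ' ');
                putchar('\n');
        }
}

int main(void)
{
        /* the "$" glyph bytes used in the example below */
        const unsigned char dollar[8] = {0x20,0x78,0xA0,0x70,0x28,0xF0,0x20,0x00};
        put_glyph(dollar);
        return 0;
}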
Below is a very, very simple Tcl utility to convert glyph bytes into a console image of the symbol and vice versa.

# parsing and loading font
#------------------------------------------------------------------------------
proc parsesymb str {
    # Parse one symbol in notation like:
    # "0x06,0x08,0x08,0x00,0x00,0x00,0x00,0x00"

    set bytes [regexp -all -inline {0[xX][0-9a-fA-F]{2}} $str]
    return $bytes
}

proc putsymbline byte {
    # put on screen one line of a symbol

    set mask 0b10000000
    for {set i 0} {$i<8} {incr i} {
        set bit [expr {$byte & $mask}]
        set mask [expr {$mask >> 1}]
        if $bit {
            puts -nonewline "#"
        } else {
            puts -nonewline " "
        }
    }
}

proc putsymb symb_bytes {
    # put on screen all symbol (all it's lines)

    puts "+--------+"
    foreach sb $symb_bytes {
        puts -nonewline "|"
        putsymbline $sb
        puts "|"
    }
    puts "+--------+"
}

# Compile glyphs to font in C format
#-------------------------------------------------------------------------------
proc compsymbline line {
    # Compile line like "# # # # " to 0xAA

    set bits [string map { " " 0 # 1} $line]
    return [format 0x%02X "0b$bits"]
}

proc compsymb symb_lines {
    # Compile lines of symbol (what out 'putsymb')

    set read 0
    set res {}
    foreach line $symb_lines {
        if {0 == $read} {
            if {$line eq "+--------+"} {
                set read 1
            }
        } else {
            if {$line eq "+--------+"} {
                break
            } else {
                set line [string trim $line "|"]
                lappend res [compsymbline $line]
            }
        }
    }
    return [join $res ","]
}

proc getlines {} {
    set lines {}
    while {[gets stdin line] >= 0} {
        lappend lines $line
    }
    return $lines
}

proc main args {
    global argc argv
    if {$argc} {
        set argv0 [lindex $argv 0]
        if {$argv0 eq "comp"} {
            set lines [getlines]
            puts [compsymb $lines]
            return
        } elseif {$argv0 eq "put"} {
            set line [gets stdin]
            puts [putsymb [parsesymb $line]]
            return
        }
    }
    puts {\
Syntax on Linux:
    1. cat file|PRG comp
    2. cat file|PRG put
Syntax on Windows:
    1. more file|PRG comp
    2. more file|PRG put

This means:
    1. Compile glyph image into C hex array
    2. Put glyph hex array on the screen as glyph image

Glyph images and its' hex arrays are inverse}
}

main
Example of usage:
$echo 0x20,0x78,0xA0,0x70,0x28,0xF0,0x20,0x00|tclkitsh font.tcl put
+--------+
|  #     |
| ####   |
|# #     |
| ###    |
|  # #   |
|####    |
|  #     |
|        |
+--------+
$
This is the "$" symbol. I took its bytes from et024006dhu.c (see const unsigned char FONT6x8[97][8]). On the other hand, you can save this glyph (with the ASCII box!) in a file and then call:
$cat somefile.txt|tclkitsh font.tcl comp
0x20,0x78,0xA0,0x70,0x28,0xF0,0x20,0x00
$
to get the bytes of the glyph.

Saturday, August 4, 2012

Converting PDF to JPG/PNG/PPM/TIFF

If you have a big PDF with high resolution, you can convert the PDF to an image file with pdftoppm:
pdftoppm -jpeg a.pdf a.jpg
Or use -png (see the man page for other options) to produce PNG.

Thursday, July 26, 2012

ASF: RGB macros

The famous RGB macro for R5G6B5 for the ET024006 (an analogue of et024006_Color(R, G, B) in the Atmel Software Framework):
#define GUI_RGB(R, G, B) ((((R) & 0xF8)<<8) | (((G) & 0xFC)<<3) | ((B)>>3))
It may be used anywhere you need a constant rather than a function call (initializers of struct members and so on).
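For example (a sketch; the struct and color values are made up for illustration):
/* GUI_RGB() expands to a constant expression, so it can be used in static initializers */
typedef struct { unsigned short bg, fg; } theme_t;

static const theme_t default_theme = {
        .bg = GUI_RGB(0x00, 0x00, 0x00), /* black */
        .fg = GUI_RGB(0xFF, 0xFF, 0xFF), /* white */
};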

Monday, July 16, 2012

ASF, AVR32, ET024006: et024006_PutPixmap() and bitmap format

How to use a bitmap on the ET024006 with ASF (Atmel Software Framework)? The pixel format used by et024006_PutPixmap() is R5G6B5. You can create the image in GIMP, then export it to a 'C source file' (select 'save as RGB 565 16 bits' and uncheck 'Use Glib types') and get something like this:
/* GIMP RGBA C-Source image dump (appicon.c) */
#define GIMP_IMAGE_WIDTH (22)
#define GIMP_IMAGE_HEIGHT (22)
#define GIMP_IMAGE_BYTES_PER_PIXEL (4) /* 3:RGB, 4:RGBA */
#define GIMP_IMAGE_PIXEL_DATA ((unsigned char*) GIMP_IMAGE_pixel_data)
static const unsigned char GIMP_IMAGE_pixel_data[22 * 22 * 4 + 1] =
("\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377"
...
);
I change it to:
const unsigned char appmenu_icon[] =
 ("\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377"
...
);
And then call:
et024006_PutPixmap((et024006_color_t*)appmenu_icon,
     appmenu_icon_width,
     0, 0, x, y,
     appmenu_icon_width,
     appmenu_icon_height);
appmenu_icon_width and appmenu_icon_height may be #defines.

Tuesday, July 10, 2012

VIM: aliasing the vim command on Windows

Suppose we have a folder with a Makefile and other useful files (a tags file, some bat files and so on) and deeply nested folders with sources. We work with make/gmake in the top folder but need to change directory into some deeply nested folder to do something there, for example, run vim... ^) To do it, I use this script:
@echo off
PATH = C:\Program Files\Atmel\Atmel Studio 6.0\avrdbg;C:\Program Files\Atmel\Atmel Studio 6.0\extensions\Atmel\AVRGCC\3.3.2.31\AVRToolchain\bin;C:\papps\PortableApps\MinGWPortable\App\MinGW\msys\1.0\bin;%PATH%
start /D %CD%\deep\deep\src /B cmd.exe
doskey myvim=c:\Vim\vim73\gvim.exe -c ":set tags=../../../tags" $*
You can see there the setting of the PATH variable for the new terminal and the aliasing of a new command myvim with the tags file set automatically. You can also set up an individual _vimrc (per project), if you need it.
Save this script as cli.bat, run it and then call in the terminal:
myvim some_src.c
No more long paths and pushd/popd/cd! :)

Two useful C macros

For me, there are 2 very simple but useful C macros for safely calling a function (or a method - a callback in a struct member):
#define SAFE_CALL(F, ...) do { if ((F)) (*(F))(__VA_ARGS__); } while(0)
#define MCALL(SELF, CB, ...) do { if((SELF)->CB) (*(SELF)->CB)(SELF, ##__VA_ARGS__); } while(0)
Note the ', ##__VA_ARGS__' in MCALL - this is a GNU GCC extension that avoids the trailing-comma problem in variadic macros; with it we may pass variadic arguments or none. Usage is simple too:
MCALL(my_struct_ptr, onpaint, area); /* here area is an area_t* argument */
Something like this... :)
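A self-contained sketch of the idea (the widget struct and the onpaint callback are made-up names):
#include <stdio.h>
#include <stddef.h>

#define SAFE_CALL(F, ...) do { if ((F)) (*(F))(__VA_ARGS__); } while (0)
#define MCALL(SELF, CB, ...) \
        do { if ((SELF)->CB) (*(SELF)->CB)(SELF, ##__VA_ARGS__); } while (0)

typedef struct widget widget_t;
struct widget {
        void (*onpaint)(widget_t *self, int x, int y); /* optional callback */
};

static void paint(widget_t *self, int x, int y)
{
        (void)self;
        printf("paint at %d,%d\n", x, y);
}

int main(void)
{
        widget_t w = { .onpaint = paint };
        MCALL(&w, onpaint, 10, 20); /* calls paint(&w, 10, 20) */
        w.onpaint = NULL;
        MCALL(&w, onpaint, 10, 20); /* silently does nothing */
        return 0;
}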

Friday, April 6, 2012

AVR32: development without IDE - only command line tools

How to install an environment for AVR32 development on Windows.


I downloaded avr-toolchain-installer-3.2.3.579-win32.win32.x86.exe and installed it. Then I downloaded asf-standalone-archive-3.0.1.zip (Atmel Software Framework) and unpacked it into c:\Program Files\Atmel\AVR Tools\. Then I downloaded uname.exe (there is a bug with uname.exe: it returns "windows32" on Windows instead of something like "MINGW32_NT-6.0") and replaced the installed c:\Program Files\Atmel\AVR Tools\AVR Toolchain\bin\uname.exe with it.

To test compiling an ASF application: cd into c:\Program Files\Atmel\AVR Tools\asf-3.0.1\avr32\applications\uc3-dsplib-demo\at32uc3a3256_evk1104\gcc\ and run make:
"c:\Program Files\Atmel\AVR Tools\AVR Toolchain"\bin\make
after that you'll get .hex (and .elf and many other) files there!

Another way is to install AVR Studio and use its console, but AVR Studio is so big :)

Programming a device


There are different ways to do it, for example with avrdude, avr32program or atprogram. avrdude is a free GNU program for Linux, Windows and Mac; it needs libusb to be installed. If you use Windows, get it from http://sourceforge.net/projects/libusb-win32/, attach the device, install the driver (with inf-wizard.exe: select your device from the list, generate the .inf and install it), then run something like this:
avrdude.exe -P usb -c %PROGRAMMATOR% -p %MCU%
The latest (as of today :) version of avrdude for Windows is here: http://download.savannah.gnu.org/releases/avrdude/avrdude-5.11-Patch7610-win32.zip.
If you use AVR Dragon and Windows Vista, you can have problems with this!
For the AVR Dragon, install the latest AVR Studio and use its programming driver: Jungo WinDriver. It's possible to install the Jungo driver without the Studio, but you also need the atprogram.exe or avr32program.exe tools. They need XML files (a database of MCUs) and look for them in the installed Studio path.
The Jungo driver will conflict with the libusb drivers, so remove the libusb drivers first. For Windows: press WinKey+Pause, open the Device Manager and remove (uninstall) the libusb drivers while the device is attached. Then go to C:\Windows\System32 and delete the file libusb0.dll. Then go to C:\Windows\System32\Drivers and delete libusb0.sys. Now detach the device. Install the Jungo drivers (automatically when installing the Studio, or manually), attach the device (the AVR Dragon, I mean) and if it asks you about driver files, point to C:\Program Files\Atmel\Atmel USB (I think Windows can find it automatically...). After that you should see in the Device Manager this:
COMPUTER
  |
  +--Jungo
  |    |
  .    +--AVR Dragon
  .    +--WinDriver
and no libusb32 at all!!! The AVR Dragon and the board should be powered (attached to USB). If Windows finds "Unknown devices" (the board, attached to USB for power) you can delete them from the device list. Now, if you use Studio 6, run in the Atmel Studio Command Prompt (see the Programs menu) something like this:
atprogram -t avrdragon -i jtag -d at32uc3b0512 read -fs -s 4
and you will see data bytes in the console.
It is better to run the Studio and try to open the programming dialog: the Studio will verify the AVR Dragon firmware version and upgrade it (if needed).

Friday, March 16, 2012

High-pass filter

In one of my posts I wrote about a low-pass IIR filter (of 1st order). Here is a simple high-pass filter in the form of a 1st-order IIR. See more info here.
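The difference equation used below is y[i] = a*(y[i-1] + x[i] - x[i-1]); a minimal standalone C sketch (the SWIG-wrapped version actually used in hmc-demo follows later in this post):
/* First-order IIR high-pass filter: y[i] = a * (y[i-1] + x[i] - x[i-1]) */
typedef struct {
        double a;      /* alpha, 0 < a < 1 */
        double y_prev; /* previous output */
        double x_prev; /* previous input */
} hpf_t;

static double hpf_step(hpf_t *f, double x)
{
        double y = f->a * (f->y_prev + x - f->x_prev);
        f->y_prev = y;
        f->x_prev = x;
        return y;
}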
In my application HMC demo I poll the sensor (accelerometer) and filter the Ax signal:
HPFilter hpf
hpf configure -datatype flt -datatypes {phys}
Saver saver1
saver1 listen -on
saver listen -on
listen phvals:hpf
listen hpf:saver1
serial open 4
sensor poll -loop
This will create 2 CSV files, with the native and the filtered signals (after saving we rename them). We can render the signals with the GLE tool:
begin graph
    title "Accelerations"
    xtitle "Time [sec]"
    ytitle "Ax [g]"
    axis grid
    ticks color grey40
    subticks on
    subticks lstyle 2
    set font "texcmr"

    d1 file "c:/tmp/saver.csv,0,5"
    let d2 = d1
    d1 line color red
    d2 deresolve 10 average line color red lwidth 0.1

    d3 file "c:/tmp/saver1.csv,0,5"
    let d4 = d3
    d3 line color blue
    d4 deresolve 10 average line color blue lwidth 0.1
end graph

begin key
    set fill yellow
    line color red text "Native"
    line color blue text "Filtered"
end key
and get this image:
The blue line is the signal after high-pass filtering: it shows the effect of removing the slow drift.
Removing the slow drift is used when estimating the sensor position (from acceleration) in a multi-cascade scheme:
HPF -> INTEGR -> HPF -> INTEGR -> HPF
Here is the C code (for SWIG) of this kind of filter:
double* filter(double *v) {
        int i;
        int vlen;

        vlen = (int)v[0];
        if (vlen != $self->n) {
            return (NULL);
        }

        for (i=1; i <= vlen; i++) {
            $self->v[i] = $self->a * ($self->v[i] + v[i] - $self->x[i]);
            $self->x[i] = v[i];
        }
        return ($self->v);
    }
$self is a pointer to the CHPFilter struct:
typedef struct CHPFilter {
        int n; // length of v
        double a; // alpha
        double *v; // calculated values (keep prev. before filter)
        double *x; // prev. input
} CHPFilter;

Monday, March 12, 2012

Docstrings in Tcl (and IncrTcl)

Docstrings are very popular in Python, Lisp... They are an easy way to include documentation in the implementation (reverse LP). In Tcl they can also be used to generate command help for the user. Here is the listing of one such module (it supports procs and Itcl methods):
package provide mydoc 1.0

namespace eval mydoc {
    namespace export doc
}

set _NOHELPMSG "No help."

proc ::mydoc::_docstring {body} {
    set body [string trim $body]
    set docstring ""
    # without 1st '^' will match any FIRST docstring block even after
    # commands!
    if {[regexp {^#\s*([^\n]+\n)+(\n\n)} $body docstring]} {
        set docstring [regsub -all {\s*#\s?} $docstring \n]
        set docstring [string trim $docstring]
        return $docstring
    }
}

proc ::mydoc::doc args {
    # Help on command: procedure or class method. Call:
    #   doc some_object some_method
    #   doc some_class some_method
    #   doc some_proc


    global _NOHELPMSG
    set found ""
    switch -- [llength $args] {
        1 {
            # args: proc
            set name [lindex $args 0]
            set arguments [info args $name]
            set body [info body $name]
            set found [_docstring $body]
        }
        2 {
            # FIXME not optimal!
            # args: object|class method
            lassign $args cls_obj meth
            set objs [itcl::find objects]
            # cls_obj may be object OR class. What is it?
            if {-1 != [lsearch -regexp $objs :*$cls_obj]} {
                # this is the object
                set arguments [$cls_obj info args $meth]
                set body [$cls_obj info body $meth]
                set found [_docstring $body]
            } else {
                # this is the class
                set arguments [namespace eval ::$cls_obj info args $meth]
                set body [namespace eval ::$cls_obj info body $meth]
                set found [_docstring $body]
            }
        }
        default { error "wrong args: proc | object method | class method" }
    }
    if {$found eq ""} {
        return $_NOHELPMSG
    } else {
        return $found
    }
}

# txt is the string with \n, shifter is like '\t' or '\t\t'..
proc mydoc::_shift_strings {txt shifter} {
    if {$txt ne ""} {
        return "$shifter[regsub -all \n $txt \n$shifter]"
    }
}


# Generate only for documented with docstrings
proc mydoc::_genrst {fname} {
    set result {}
    # Collect help on objects and it's methods
    set clshelp {}
    foreach cls [itcl::find classes] {
        set her [namespace eval $cls "info heritage"]

        set varhelp {}
        foreach v [namespace eval $cls info variable] {
            catch {
                #lappend varhelp [namespace eval $cls info variable $v -protection public]
                if {[string first "::_" $v] == -1} {
                    switch -- [namespace eval $cls info variable $v -protection] {
                        public { set vprot "public" }
                        protected { set vprot "protected" }
                        private { set vprot "private" }
                    }
                    lappend varhelp "- $vprot $v"
                }
            }
        }

        set methelp {}
        foreach func [namespace eval $cls "info function"] {
            catch {
                set body [string trim [namespace eval $cls "info body $func"]]
                if {$body ne ""} {
                    set arguments [namespace eval $cls "info args $func"]
                    if {$arguments eq ""} { set arguments "no args." }
                    set docstring [_shift_strings [_docstring $body] \t]
                    if {$docstring ne ""} {
                        lappend methelp "*${func}*: **${arguments}**"
                        lappend methelp ""
                        lappend methelp "::"
                        lappend methelp ""
                        lappend methelp $docstring
                        lappend methelp ""
                        lappend methelp ""
                    }
                }
            }
        }
        if {$methelp ne ""} {
            # there are methods with docstrings!
            if {[llength $her] > 1} {
                # there are base classes
                set bases [lrange $her 1 end]
                set her "[lindex $her 0] (*extends ${bases}*)"
            }
            lappend clshelp "$her"
            lappend clshelp [string repeat "-" [string length $her]]
            lappend clshelp ""
            lappend clshelp "Variables"
            lappend clshelp "~~~~~~~~~"
            lappend clshelp ""
            if {$varhelp ne ""} {
                lappend clshelp [join $varhelp "\n"]
            } else {
                lappend clshelp "No variables."
            }
            lappend clshelp ""
            lappend clshelp "Methods"
            lappend clshelp "~~~~~~~"
            lappend clshelp ""
            lappend clshelp {*}$methelp
        }
    }
    # Collect procs help
    set prochelp {}
    foreach func [uplevel #0 info procs] {
        catch {
            set body [string trim [uplevel #0 info body $func]]
            if {$body ne ""} {
                set arguments [uplevel #0 info args $func]
                if {$arguments eq ""} { set arguments "no args." }
                set docstring [_shift_strings [_docstring $body] \t]
                if {$docstring ne ""} {
                    lappend prochelp "*${func}*: **${arguments}**"
                    lappend prochelp ""
                    lappend prochelp "::"
                    lappend prochelp ""
                    lappend prochelp $docstring
                    lappend prochelp ""
                    lappend prochelp ""
                }
            }
        }
    }

    if {$clshelp ne ""} {
        lappend result ""
        lappend result "Classes"
        lappend result "======="
        lappend result ""
        lappend result {*}$clshelp
    }
    if {$prochelp ne ""} {
        lappend result ""
        lappend result "Procedures"
        lappend result "=========="
        lappend result ""
        lappend result {*}$prochelp
    }


    set fid [open $fname w]
    puts -nonewline $fid [join $result "\n"]
    close $fid
}
This mechanism supports not only an interactive help system for the user (the doc proc) but also help generation (to reStructuredText format). An example of usage (remember that two empty lines are used to end the docstring section):
proc fun {args} {
  # Do something. Syntax:
  #  -one -- first action
  #  -two -- second action


  .. code ..
}

# OR

itcl::class C {
  method f {args} {
    # My method. Syntax:
    #  f one two -- do something
    #  f three four -- do something else


    .. code ..
  }
}
and then call:
doc fun
doc C f
C cobj
doc cobj f
The _genrst procedure is used to generate the documentation. Its output looks like this:

Classes

::HMC6343Protocol

Variables

  • protected ::HMC6343Protocol::this

Methods

::HMC6343Protocol::get: cmd what
Returns protocol info (what) about command cmd. What is:
  -def - definition of command (Tcl commands list to be execute for request)
  -resplen - length of response in bytes
  -respfmt - format of response
  -all - as fields as list
 
::HMC6343Protocol::unpack: cmd buf
Returns unpacked (as list) values from buf for cmd

::HMC6343Protocol::commands: no args.
Returns known commands names

::HMC6343Protocol::constructor: defs
Creates object.
  - defs is the list of CMD RESPONSE_LEN RESPONSE_FMT { Serial commands }...
  - RESPONSE_FMT is like in binary command
It's easy to use this in a Makefile for help generation:
RST2HTML = C:/Python32/python.exe C:/Python32/Scripts/rst2html.py
hlp:
 tclkit.exe gendoc.tcl
 $(RST2HTML) hlp.rst hlp.html
Use tclkit.exe (instead of tclkitsh.exe) if you have Tk code. Python and the docutils Python package are used to transform .rst into .html. gendoc.tcl looks like this:
# Generate hlp.rst file with references to all classes, its methods
# and procedures with docstrings (see lib/mydoc.tcl)
package require starkit
starkit::startup
starkit::autoextend [file join $starkit::topdir]
package require myutl
set packages [glob -directory lib -tails "*.tcl"]
foreach pkg $packages {
    set pkg [regsub {.tcl$} $pkg ""]
    if {$pkg ne "pkgIndex"} {
        package require $pkg
    }
}
mydoc::_genrst hlp.rst
exit 0
It imports all packages from the lib directory and generates the hlp.rst file for their procs and methods. Makefile magic:
make hlp
:)

Listeners/providers architecture in Tcl (IncrTcl)

This is a very popular model for dataflow (and DSP) applications and needs no introduction. So, only the listing:
package provide mylsn 1.0
package require Itcl

# ListenEvent
# =====================================================================
itcl::class ListenEvent {
    public variable name
    public variable src
    public variable data
    constructor {aname asrc {adata ""}} {
        set name $aname
        set src $asrc
        set data $adata
    }
}

# DataListener 
# =====================================================================
itcl::class DataListener {
    # list of understandable providers' datatypes. Empty - all
    public variable datatypes ""

    protected variable _event {};  # ListenEvent

    # all listeners
    public common all {}

    constructor {} { lappend all $this }

    destructor {
        set i [lsearch -exact $all $this]
        if {$i >= 0} { set all [lreplace $all $i $i] }
    }

    # FSM state: LISTEN|SESSION|OFF. Means:
    #   LISTEN - ready for data from data provider
    #   SESSION - already reading some data (first data packet was received)
    #   OFF - reading of data is disabled
    protected variable _lstate LISTEN

    # values is list of values
    protected method ondata {values} {}

    # on run polling (listening session). columns is the list of columns names,
    # units is the list of unit names
    protected method onrun {columns units} {}

    # on stop listening session
    protected method onstop {} {}

    # on add to data provider
    protected method onadd {provider} {}

    # on delete from data provider
    protected method ondel {provider} {}

    method event {ev src args} {
        # generate event (call callback) for this listener.
        # ev is ListenEvent object, src is the source of event.
        # ev is one of run|stop|data|on|off|add|del:
        #   run - before first packet sent
        #   stop - after last packet sent
        #   data - packet is received
        #   on - enable listening
        #   off - disable listening
        #   add - connect with some data provider
        #   del - disconnect from some data provider


        set _event [ListenEvent #auto $ev $src $args]
        switch -- $_lstate {
            LISTEN {
                switch -- $ev {
                    run  { set _lstate SESSION
                           catch { $this onrun {*}$args } }
                    off  { set _lstate OFF }
                    add  { catch { $this onadd {*}$args } }
                    del  { catch { $this ondel {*}$args } }
                }
            }

            SESSION {
                switch -- $ev {
                    stop { set _lstate LISTEN
                           catch { $this onstop } }
                    data { catch { $this ondata {*}$args } }
                }
            }

            OFF {
                switch -- $ev {
                    on   { set _lstate LISTEN }
                    add  { catch { $this onadd {*}$args } }
                    del  { catch { $this ondel {*}$args } }
                }
            }
        }
    }

    # listen -on|-off
    method listen {what} {
        # listen -on -- turn-on listening
        # listen -off -- turn-off listening


        # event without src ("" - user call is event source)
        switch -- $what {
            -on     { $this event on "" }
            -off    { $this event off "" }
            default { error  "listen can be -on or -off only" }
        }
        return
    }

    method listened {} {
        # Is listening now?


        return [expr {$_lstate ne "OFF"}]
    }

    # join columns and units with delimiter into new list
    method join_columns_units {columns units {delim ","}} {
        set res {}
        foreach c $columns u $units {
            lappend res "$c$delim$u"
        }
        return $res
    }
}

# DataProvider
# =====================================================================
itcl::class DataProvider {
    # static list of all providers
    public common all {}
    public variable datatype ""

    protected variable _listeners

    constructor {} { lappend all $this }

    destructor {
        set i [lsearch -exact $all $this]
        if {$i >= 0} { set all [lreplace $all $i $i] }
    }

    # returns list (pair) of columns (list) and units - if they are
    # fixed all the time (for any session)
    method fixed_columns_units {} {}

    # Normalize name: needed because the user can use a qualified
    # name like ::a instead of a.
    # FIXME: [namespace which -variable] does not work in Itcl, so all
    # ':' are stripped; as a result only listeners in the top-level
    # namespace can be added/deleted
    protected method _normname {name} {
        return [regsub -all ":" $name ""]
    }

    method get_listeners {} {
        # Returns names of all listeners


        return [array names _listeners]
    }

    method add_listener {listener} {
        # Add some listener


        set lsndts [$listener cget -datatypes]; # datatypes expected by listener
        if {[llength $lsndts] && !($datatype in $lsndts)} {
            # if the listener expects certain datatypes and my datatype is
            # not among them, this listener cannot be added
            error "Listener $listener does not understand $this provider"
        }

        set name [_normname $listener]
        if {[itcl::is object -class DataListener $listener]} {
            if {[array get _listeners $name] ne ""} {
                error "Listener $name already exists"
            }
            set _listeners($name) $listener
            $listener event add $this $this
        } else {
            error "listener should be DataListener object"
        }
    }

    method del_listener {listener {stop 1}} {
        # Deletes listener, sends before stop event if needed


        set name [_normname $listener]
        set listener [lindex [array get _listeners $name] 1]
        if {$listener ne ""} {
            if {$stop} { $listener event stop $this }
            array unset _listeners $name
            $listener event del $this $this
        }
        return $listener
    }

    # XXX not efficient
    method del_all_listeners {{stop 1}} {
        # Deletes all listeners, send stop event before, if needed


        foreach name [array names _listeners] {
            del_listener $name $stop
        }
    }

    method notify_all {ev args} {
        # Notify all listeners with event ev and some args


        foreach {name listener} [array get _listeners] {
            $listener event $ev $this {*}$args
        }
    }

    method number {} {
        # Returns number of listeners


        return [array size _listeners]
    }

}

# ProxyListener - a general-purpose object for processing events without
# creating a special class
# =====================================================================
itcl::class ProxyListener {
    inherit DataListener DataProvider

    public variable onrunproc ""
    public variable onstopproc ""
    public variable ondataproc ""
    public variable onaddproc ""
    public variable ondelproc ""

    protected method onrun {columns units} {
        if {$onrunproc eq ""} {
            notify_all run $columns $units
        } else {
            $onrunproc $columns $units
        }
    }

    protected method onstop {} {
        if {$onstopproc eq ""} {
            notify_all stop
        } else {
            $onstopproc
        }
    }

    protected method ondata {values} {
        if {$ondataproc eq ""} {
            notify_all data $values
        } else {
            $ondataproc $values
        }
    }

    protected method onadd {provider} {
        if {$onaddproc eq ""} {
            notify_all add $provider
        } else {
            $onaddproc $provider
        }
    }

    protected method ondel {provider} {
        if {$ondelproc eq ""} {
            notify_all del $provider
        } else {
            $ondelproc $provider
        }
    }
}

# DebugListener - repeater
# =====================================================================
itcl::class DebugListener {
    inherit ProxyListener

    public variable fixed_cu ""

    constructor {} {
        $this configure -datatype phys -datatypes {v s flt raw phys} \
            -onrunproc [itcl::code $this onrunproc] \
            -onstopproc [itcl::code $this onstopproc] \
            -ondataproc [itcl::code $this ondataproc]
    }

    method fixed_columns_units {} {
        return $fixed_cu
    }

    method onrunproc {columns units} {
        notify_all run $columns $units
        puts "*DEBUG_${this}* onrun: $columns, $units"
    }

    method onstopproc {} {
        notify_all stop
        puts "*DEBUG_${this}* onstop"
    }

    method ondataproc {values} {
        notify_all data $values
        puts "*DEBUG_${this}* ondata: $values"
    }
}

# procedures
# =====================================================================

proc debug_listen {d between0 between1} {
    #DebugListener $this.#auto
    $d configure -fixed_cu [$between0 fixed_columns_units]
    listen $between0:$d
    listen $d:$between1
}

proc listen args {
    # Set who listen who:
    #   listen provider...: listener...
    # or
    #   listen prov...: all
    # or
    #   listen -- return list of lists {provider {listeners}}
    # or
    #   listen -txt -- return formatted string (for user)
    # or
    #   listen -p -- returns a formatted string with providers and their datatypes
    # or
    #   listen -l -- returns a formatted string with listeners and their datatypes


    if {[lsearch -exact $args "-txt"] != -1} {
        # if there is "-txt" option, return formatted string
        set res {}
        foreach prov $DataProvider::all {
            set nprov [regsub -all ":" $prov ""]
            lappend res "$nprov: [join [$prov get_listeners] ,\ ]"
        }
        return [join $res "\n"]

    } elseif {[lsearch -exact $args "-p"] != -1} {
        set res {}
        foreach prov $DataProvider::all {
            set nprov [regsub -all ":" $prov ""]
            set dt [$prov cget -datatype]
            lappend res "$nprov: provides '$dt'"
        }
        return [join $res "\n"]

    } elseif {[lsearch -exact $args "-l"] != -1} {
        set res {}
        foreach lsn $DataListener::all {
            set nlsn [regsub -all ":" $lsn ""]
            set dt [join [$lsn cget -datatypes] "','"]
            lappend res "$nlsn: listens '$dt'"
        }
        return [join $res "\n"]

    } elseif {$args eq ""} {
        # if no args, returns listening table (all links)
        set res {}
        foreach prov $DataProvider::all {
            lappend res [list $prov [$prov get_listeners]]
        }
        return $res

    } else {
        # normalize args (make ':' like arg, not part of another arg)
        lassign [split [join $args " "] :] providers listeners

        # if there is 'all' in listeners then it'll be all of them :)
        if {[lsearch -exact $listeners all] != -1} {
            set listeners $DataListener::all
        }

        # delete each listener from ALL known providers then attach
        # to selected
        foreach lsn $listeners {
            foreach prov $DataProvider::all { $prov del_listener $lsn 1 }
            foreach prov $providers { $prov add_listener $lsn }
        }
    }
}
There are classes:
  • ListenEvent
  • DataListener (should be inherited)
  • DataProvider (should be inherited)
  • ProxyListener
  • DebugListener

The first three are clear. ProxyListener avoids creating a new class: callback options are used instead. It also works like a bridge: it both listens and provides. DebugListener is a proxy too, but for debugging purposes (a transparent bridge with console logging). Here is a scheme:


The listen procedure is used to link providers with listeners, to list all providers/listeners in the system, or to show the current link scheme (as a native list or a pretty text representation). The debug_listen procedure inserts a DebugListener between two nodes. A minimal usage sketch follows.
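Below is a usage sketch (assuming the mylsn package above is loaded; the object names, the 'phys' datatype and the sample values are only examples). It uses a bare DataProvider pushed "by hand" with notify_all and a ProxyListener configured with a callback instead of subclassing:
package require mylsn

# a listener without subclassing: ProxyListener with a callback option
proc show_data {values} { puts "got: $values" }
ProxyListener lsn
lsn configure -ondataproc show_data

# a bare provider, driven manually via notify_all
DataProvider prov
prov configure -datatype phys

listen prov : lsn;               # link provider with listener
prov notify_all run {temp} {C};  # start a session (columns, units)
prov notify_all data {21.5};     # push a sample -> prints "got: 21.5"
prov notify_all stop;            # end the session
puts [listen -txt];              # show the current link scheme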

Real time (stream) integration with Simpson's rule

You can read about this method of numerical integration here. To do it in a streaming manner we need a little formal mathematics:

Composite Simpson's rule:

∫[a..b] f(x) dx ≈ (h/3) * Σ_{k=1,3,…}^{N-1} [ f(x_{k-1}) + 4*f(x_k) + f(x_{k+1}) ]

where the sum index k runs from 1 to N-1 with step 2, and N is the (even) number of subintervals.

Denote S_k = f(x_{k-1}) + 4*f(x_k) + f(x_{k+1}), so the sum is I_N = (h/3) * Σ_{k=1,3,…}^{N-1} S_k.

Simpson's method requires at least 3 points, so we can update the integral not on every sample, only on every other one; the neighboring partial integrals are I_{N-2} and I_N. The difference between them is:

I_N - I_{N-2} = (h/3) * ( Σ_{k=1,3,…}^{N-1} S_k - Σ_{k=1,3,…}^{N-3} S_k ) = (h/3) * ( S_{N-1} + Σ_{k=1,3,…}^{N-3} S_k - Σ_{k=1,3,…}^{N-3} S_k )

(there is no S_{N-2} term because k steps by 2).

So the difference is: I_N - I_{N-2} = (h/3) * S_{N-1}

For example, I_4 - I_2 = (h/3) * (y_2 + 4*y_3 + y_4), where I_4 corresponds to the 4th subinterval, i.e. the 5th sample.

The resulting formula for I_i is:

I_i = 0,                                              if i ≤ 1
I_i = (h/3) * (y_{i-2} + 4*y_{i-1} + y_i) + I_{i-1},  if i mod 2 = 0
I_i = I_{i-1},                                        if i mod 2 ≠ 0

where i is the index of the input sample. We do not get a new value of I on every sample, only on every other one (even i, skipping the 0th and 1st); on odd samples the value simply repeats the previous one.
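Before the SWIG version below, here is a minimal pure-Tcl sketch of the same recurrence (hypothetical proc names, global state kept in an array for brevity), handy for sanity-checking the formula:
# streaming Simpson integrator state: I, y[-1], y[-2], sample index i, h/3
proc csintr_reset {h} {
    global cs
    array set cs [list I 0.0 y1 0.0 y2 0.0 i 0 h_d [expr {$h / 3.0}]]
}

proc csintr_calc {y} {
    global cs
    if {$cs(i) <= 1} {
        set cs(I) 0.0
    } elseif {$cs(i) % 2 == 0} {
        # I_i = I_{i-1} + (h/3)*(y_{i-2} + 4*y_{i-1} + y_i) on even samples
        set cs(I) [expr {$cs(I) + $cs(h_d) * ($cs(y2) + 4.0*$cs(y1) + $y)}]
    }
    set cs(y2) $cs(y1)
    set cs(y1) $y
    incr cs(i)
    return $cs(I)
}

csintr_reset 1.0
foreach y {1 2 3 4 5} { puts [csintr_calc $y] };  # prints 0.0 0.0 4.0 4.0 12.0
It gives the same sequence as the SWIG/C version tested below.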

Example in SWIG:
some.h file:
// Simpson's rule integrator
typedef struct CSIntr {
        double I; // last value of integral
        double y_1; // y[-1]
        double y_2; // y[-2]
        double h; // step of subinterval
        double h_d; // step of subinterval/3
        long long i; // iteration
} CSIntr;
some.i file:
// module name and headers below are assumed (needed to build the wrapper)
%module some
%{
#include <stdlib.h>
#include "some.h"
%}
%include "some.h"
%extend CSIntr {
    // FIXME: is there a better way to do this?
    void reset() {
        $self->I = 0.;
        $self->y_1 = $self->y_2 = 0.;
        $self->i = 0;
    }

    void setup(double h) {
        $self->h = h;
        $self->h_d = h/3.;
    }

    CSIntr(double h) {
        CSIntr *obj;

        if (NULL == (obj=(CSIntr*) malloc(sizeof (CSIntr))))
            return (NULL);
        CSIntr_reset(obj);
        CSIntr_setup(obj, h);
        return (obj);
    }

    ~CSIntr() {
        if ($self) free($self);
    }

    double calculate(double y) {
        if ($self->i <= 1)
            $self->I = 0;
        else if (0 == $self->i % 2)
            $self->I += $self->h_d * ($self->y_2 + 4. * $self->y_1 + y);

        $self->y_2 = $self->y_1;
        $self->y_1 = y;
        $self->i++;
        return ($self->I);
    }
};
So, if we use Tcl, we can build and test our class.
Makefile fragment:
some_wrap.c: some.i some.h some.c
 swig -tcl -pkgversion 1.0 some.i

some.dll: some_wrap.c
 gcc -Ic:/tcl/include -I./ -DUSE_TCL_STUBS -c some_wrap.c -o some_wrap.o
 gcc -shared -o some.dll some_wrap.o -L c:/tcl/lib -ltclstub85 -lm
Test on the linear function y = x + 1, i.e. yi = {1, 2, 3, 4, 5}:
% load some
% CSIntr cs 1
_f85f7601_p_CSIntr
% cs calculate 1
0.0
% cs calculate 2
0.0
% cs calculate 3
4.0
% cs calculate 4
4.0
% cs calculate 5
12.0
I4 is really 12.0: for y = x + 1 sampled at x = 0..4 with h = 1, the exact integral ∫ (x + 1) dx from 0 to 4 = 8 + 4 = 12.

Wednesday, February 15, 2012

Gravity compensation of accelerometer measurements

This solution is only for the normal mount of the sensor package (see Fig 1).
When we read accelerometer values, we get measurements that include the Earth's gravity. How do we remove the gravity component?

Gravity is a vector directed toward the center of the Earth. Accelerometer values (for the Ox, Oy, Oz axes) contain the gravity vector's projections (projections, because of the tilt of the sensor: the sensor can have pitch and roll angles not equal to 0!). We need to remove the gravity projections to get clean acceleration values.

Fig 1
Fig 1 shows the normal mount of the sensor and its axes. First let's remove the gravity projection from the accelerometer value for the Ox axis. When there is no tilt (pitch and roll angles are 0), the gravity projections on Ox and Oy are 0. But with some tilt, the accelerometer measurement also contains a "gravity component": Gx. See Fig 2 for the Gx definition:

Fig 2
(Oy points out of the page)
Projection gx = g * cos α. Angle p is the pitch angle. α = 180° - 90° - p = 90° - p.

So,

gx = g * cos (90° - p) = g * sin p.

OK, let's define the gravity component on the Oy axis. See Fig 3:

Fig 3
(Ox points out of the page)

Projection gy = g * cos β. Angle r is the roll angle. β = 180° - 90° - r = 90° - r.

So,

gy = g * cos (90° - r) = g * sin r.

OK, now let's define the gravity component on the vertical Oz axis.

The projection gz involves the unknown angle γ between Oz and the gravity vector g. How do we find γ? It's easy: we know the identity for the direction cosines of a vector (here the angles between g and the axes Ox, Oy, Oz are 90° - p, 90° - r and γ):

cos²(90° - p) + cos²(90° - r) + cos²γ = 1,  i.e.  sin²p + sin²r + cos²γ = 1

And gz = g * cos γ, so we need cos γ. From this identity: cos²γ = 1 - (sin²p + sin²r).

We know that sin²x = 0.5 * (1 - cos 2x), so cos²γ = 1 - (0.5 * (1 - cos 2p) + 0.5 * (1 - cos 2r)) = 0.5 * (cos 2p + cos 2r).

And now:

cos γ = ± SQRT(0.5 * (cos 2p + cos 2r)) = ± 0.70710678118654752440 * SQRT(cos 2p + cos 2r)

So, we will use

gz = g * 0.70710678118654752440 * SQRT(cos 2p + cos 2r)

with correction of the sign (direction of gz) by the pitch/roll values (and with the SQRT argument clamped to non-negative, since measurement noise can make it slightly negative - see fabs() in the C code below).


These formulas are in g units, not in m/sec²! We assume |g| = 1, so you need to multiply by the g constant (9.8 m/sec²). But it may be more correct to use |g| = sqrt(Accx² + Accy² + Accz²) (measured while the sensor is not otherwise accelerating) instead of 1.
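A minimal Tcl sketch of that |g| estimate (the proc name is hypothetical; the readings must be taken while the sensor is not accelerating):
proc g_magnitude {ax ay az} {
    # |g| in sensor units, estimated from a raw accelerometer sample
    return [expr {sqrt($ax*$ax + $ay*$ay + $az*$az)}]
}
Then use this value instead of 1 for g in the gx, gy, gz formulas above.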


Here is the example in C:

#include <math.h>

#define DEG2RAD(D) ((D)*0.01745329251994329576)

typedef struct accv_t {
        double v[3]; // values of 3 axis of accelerometer
} accv_t;

accv_t corr_gravity(double ax, double ay, double az, double pitch, double roll) {
    /* XXX: for normal mount only! Other mounts are not implemented */
    double gx, gy, gz, tmp;

    gx = sin(DEG2RAD(pitch));
    gy = sin(DEG2RAD(roll));
    tmp = cos(DEG2RAD(2*pitch)) + cos(DEG2RAD(2*roll));
    gz = 0.70710678118654752440 * sqrt(fabs(tmp));

    ax += gx;
    ay += gy;
    if (fabs(pitch) > 90.0 || fabs(roll) > 90.0) {
        az -= gz;
    }
    else {
        az += gz;
    }
    return ((accv_t){ .v = {ax, ay, az}});

}
NOTE: the condition abs(pitch) > 90° || abs(roll) > 90° inverts the sign of gz.

On real accelerometer values I got:

sensor:
  Ax =  0.287g
  Ay = -0.094g
  Az = -0.928g
corrected:
  Ax = -0.006g
  Ay = 0.006g
  Az = 0.022g


See also How to convert accelerometer values to path/coordinates

Friday, February 10, 2012

Low-pass filter

One of the software low-pass filter algorithms is described at http://en.wikipedia.org/wiki/Low-pass_filter. Here is an example Python implementation:
# Low-pass filter
import random

RC = 0.05 # time constant (inertia)
DT = 0.02 # sampling time

A = DT/( RC + DT)
#A = 0.45
xk_1 = 0
print "Measured;Filtered"
for i in xrange(0, 200):
    xm = random.randint(0, 2)
    # Both are equals:
    #xk = A*xm + (1-A)*xk_1
    xk = xk_1 + A * (xm - xk_1)
    print "%d;%f" % (xm, xk)
    xk_1 = xk
With RC (tau) we can control the inertia of the filter: for DT = 0.02, RC = 0.5 gives A ≈ 0.04 (strong smoothing), while RC = 0.05 gives A ≈ 0.29 (weaker smoothing). The plots below show the two cases:
RC=0.5
RC=0.05


Here is an example of how the low-pass filter looks with tilt angles (the F-16 model seems more stable):

HMC6343 demo 1 from bapcyk on Vimeo.



If you try to filter tilt angles you can see a funny effect: the 3D model turns around! It happens because of the discontinuity at the scale edges in the sensor data, which the filter tries to smooth, see image:


Near the scale edges the error seems small, but the angles may be, for example, 10° and 350°, and the filter will smooth between them through 10°, 20°, 30°... 340°. That is the rotation of the model you see on the screen.
One approach to avoid this problem is to normalize the angles. For example, 10°, 350° may be converted to 10°, -10°. After such normalization, angles greater than 360° should be brought back to the canonical range. The algorithm is easy:
absdiff = abs(v1 - v0)
if absdiff > 180.:
  d = 360. - absdiff
  if v0 > v1:
    v1 = v0 + d
  else:
    v1 = v0 - d
And after filtering, to avoid values greater than 360°, we can use something like this:
if abs(v) >= 360.:
  v = v % 360.
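The same two steps as a Tcl sketch (hypothetical proc names, same logic as the Python fragments above; note that Tcl's fmod() keeps the sign of its argument, unlike Python's %, which is fine for display purposes):
proc unwrap_angle {v0 v1} {
    # make the new angle v1 continuous with the previous (filtered) angle v0
    set absdiff [expr {abs($v1 - $v0)}]
    if {$absdiff > 180.0} {
        set d [expr {360.0 - $absdiff}]
        if {$v0 > $v1} {
            set v1 [expr {$v0 + $d}]
        } else {
            set v1 [expr {$v0 - $d}]
        }
    }
    return $v1
}

proc wrap_angle {v} {
    # bring a filtered angle back into the canonical range
    if {abs($v) >= 360.0} {
        set v [expr {fmod($v, 360.0)}]
    }
    return $v
}

# unwrap_angle 10.0 350.0  -> -10.0 (instead of jumping to 350.0)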