Tuesday, October 30, 2012

Traversing over paths in Tcl

If we have paths (strings like DOS directory paths), we may want to group them and walk over all of them:
Input:
  a/b/c
  a/y/u
  a/b/d
  a/y/i

Walking:
  a (a)
   b (a/b)
    c (a/b/c)
    d (a/b/d)
   y (a/y)
    u (a/y/u)
    i (a/y/i)
The following procedure does this, but the paths are a list of lists (each item is a list of directory names).
proc forpaths {boundVarNames paths body} {
# Walk over paths, each a list of dirs. $body is executed on visiting each dir,
# with these variables bound:
#   $keyName - current path
#   $levelName - current level (0...N)
#   $leafName - is leaf or not (1, 0)
    foreach {keyName levelName leafName} [lrange $boundVarNames 0 2] {}
    set group [dict create]
    foreach path $paths {
        dict set group {*}$path "@LEAF"
    }
    proc _cmp {a b} {
        if {$a > $b} { return 1 } \
        elseif {$a < $b} { return -1 } \
        else { return 0 }
    }
    proc _trackpath {track level dir} {
        if {$track eq ""} { set track {"@DUMMY"} }
        switch -- [_cmp [expr {$level+1}] [llength $track]] {
            1  { lappend track $dir }
            0  { lset track end $dir }
            -1 { set track [lreplace $track $level end $dir] }
        }
        return $track
    }
    set gtrack {}; # current path when visit each node
    proc _walk {d keyName levelName leafName body {_deep 2}} {
    # here $level is level in tree, not in stack
    # $_deep is level in stack
        upvar $_deep $keyName key $levelName level
        upvar gtrack gtrack
        if {$leafName ne ""} { upvar $_deep $leafName leaf }
        dict for {k v} $d {
            set level [expr {$_deep-2}]
            set gtrack [_trackpath $gtrack $level $k]
            set key $gtrack
            if {$v eq "@LEAF"} {
                set leaf 1
                uplevel $_deep $body
            } else {
            # nested
                set leaf 0
                uplevel $_deep $body
                _walk $v $keyName $levelName $leafName $body [expr {$_deep+1}]
            }
        }
    }
    _walk $group $keyName $levelName $leafName $body
}
It works like foreach:
forpaths {k level leaf} $someDirs {
    puts "key: $k level: $level isLeaf: $leaf"
}
Only the first variable (the key) is mandatory, so it's possible to call:
forpaths {k l f} ...
forpaths {k l} ...
forpaths k ...
So, to traverse directory paths given as strings, first split them into lists on the separator (e.g. with split $path /)!

Wednesday, October 24, 2012

PEG grammars in Tcl - simple example

This is a very simple example of how to create a PEG grammar and parse some text with it using Tcllib (I used Tcllib 1.4).
package require starkit
starkit::startup
package require pt::peg::from::peg
package require pt::peg::container
package require pt::peg::interp

set PEG {
PEG myapp (Exp)
    IfKw       <- 'i' 'f'                               ;
    ThenKw     <- 't' 'h' 'e' 'n'                       ;
    ElseKw     <- 'e' 'l' 's' 'e'                       ;
    OSp        <- ('\t' / '\n' / ' ')?                  ;
    Sp         <- ('\t' / '\n' / ' ')+                  ;
    Block      <- (Exp / 't') &';'                      ;
    Cond       <- 'c' / 'o'                             ;
    If         <- IfKw Sp Cond OSp ':' OSp Block        ;
    Exp        <- If                                    ;
END;
}

set CPEG [pt::peg::from::peg convert $PEG]
pt::peg::container cont
cont deserialize = $CPEG
set src "if c: if o:t;"
pt::peg::interp myint
myint use cont
puts [myint parset $src]
(I use starkit...)
This grammar is very simple (and unrealistic): it defines rules to parse nested 'if'-expressions like this:
if c: if o: t;
OR
if CONDITION: if CONDITION: BLOCK;
where CONDITION is 'c' or 'o', and BLOCK is another Exp (expression) or 't' followed by ';'; when blocks are nested, only one ';' is needed. The example has bugs (see Sp, OSp: optional spaces) but shows the basic tricks with PEG; for example, the PEG definition is loaded from a variable, not imported from a file, so your app does not need all the Tcllib packages :) After running, you get:
c:\>tclkitsh.exe main.tcl
Exp 0 11 {If 0 11 {IfKw 0 1} {Sp 2 2} {Cond 3 3} {OSp 4 3} {OSp 5 5} {Block 6 11 {Exp 6 11 {If 6 11 {IfKw 6 7} {Sp 8 8}
{Cond 9 9} {OSp 10 9} {OSp 11 10} {Block 11 11}}}}}

Monday, October 22, 2012

Integer limits in C

They are in limits.h. But when you need them computed from the type itself, you can use something like:
/* assumes a two's complement signed SOME_TYPE; the 1 is widened before shifting to avoid overflow */
#define SOME_TYPE_MAX ((SOME_TYPE)(((unsigned long long)1 << (sizeof(SOME_TYPE)*8 - 1)) - 1))
#define SOME_TYPE_MIN ((SOME_TYPE)~SOME_TYPE_MAX)

Algorithm generalization in C (trick with auto-codegenerator)

Sometimes we need to code algorithms in C in a very general way; the alternative is many functions for different parameters (for optimization reasons, parametrized code, etc.). If we want to avoid function pointers and other slow methods, we can use heavy macro tricks. One of them is a code generator based on self-inclusion.
The idea is that the header file has these sections:
  • Common, always included
  • Parametrized codegenerator sections
  • Start of code generation
and it looks like this (file self.h):
#ifndef _USUAL_H_GUARD
#define _USUAL_H_GUARD

// common code, always included

#endif /*_USUAL_H_GUARD*/

// templates of codegenerators:
#ifdef SOME_CODEGEN_ARG
// some template with DEFINE's substitution
#elif defined(ANOTHER_CODEGEN_ARG0) && defined(ANOTHER_CODEGEN_ARG1)
// another template
#endif

// start of code generating:
// one version of general algorithm:
#ifndef SOME_CODEGEN_ARG /* guard of recursive including! */
#define SOME_CODEGEN_ARG "something"
#include "self.h"
#endif
// another version of the same generalized algorithm:
#ifndef SOME_CODEGEN_ARG /* guard of recursive including! */
#define SOME_CODEGEN_ARG "something else"
#include "self.h"
#endif
You can also have generalized function names! See:
#define __CONC3(X, Y, Z) X ## Y ## Z
#define _CONC3(X, Y, Z) __CONC3(X, Y, Z)
#define FUNC_NAME(N) _CONC3(N, ANOTHER_CODEGEN_ARG0, ANOTHER_CODEGEN_ARG1)
And use this macro in your codegen templates as FUNC_NAME(myfunc). The resulting name will be myfunc_AB() if you defined ANOTHER_CODEGEN_ARG0 and ANOTHER_CODEGEN_ARG1 as _A and B before starting the code generation (auto-inclusion).
And yes, you can "generalize" your algorithm (name and body!) with previously defined PARAMETERS (defined before the auto-inclusion).

YOU SHOULD MODIFY THE RECURSION-GUARD LINE WHEN YOU ADD A NEW TEMPLATE! IT IS BETTER TO USE ONE MACRO SYMBOL TO INDICATE 'GENERATING' MODE AND OTHERS AS THE PARAMETERS

It does the same thing that C++ templates do. A better way is to use separate included .def files instead of one big file with embedded "templates".

Generalization includes generating unique names for different argument types/different sub-algorithms used.
Suppose we have an algorithm which is general but has some parts that differ depending on "sub-methods": for example, using MAX or MIN, using some FLAG or not.
Our functions can have names like func_MAXFLAG(), func_MINFLAG(), func_MAXNOFLAG(), func_MINNOFLAG(); the general algorithm will use one of them (or all). This is something like overloading C++ methods/functions, but depending not on types, but on any parameter value.
First, we should generate the function suffix depending on the parameters. We can use the _CONC3() trick (see above):
#define ARG1 MAX
#define ARG2 NOFLAG
and in the .def-file:
int FUNC_NAME(func_, ARG1, ARG2)(int a, int b);
to get the name func_MAXNOFLAG (FUNC_NAME here is a 3-argument pasting macro like _CONC3 above). But here is the pitfall. If we decide to use conditional compilation with #if ARG1 == MAX, we'll get an ERROR! Because MAX is undefined, the test will always be TRUE (an undefined identifier evaluates to 0 in #if, so the test reduces to 0 == 0). So you CANNOT distinguish the MAX and MIN values of ARG1!
So, you need to define MAX, MIN, FLAG, NOFLAG to be able to test their values in #ifs!
To minimize code, you can use one parameter with bit flags instead of many parameters; test them in #ifs and also use them for name generation:
#define MAX (1<<1)
#define MIN (1<<2)
#define FLAG (1<<3)
#define NOFLAG (1<<4)
...
#define ARGS (MIN|NOFLAG)
#include "template.def"
and in template.def something like this:
#if ARGS&MIN
#define N1 MIN
#else
#define N1 MAX
#endif
... and the same for FLAG/NOFLAG

// generator for similar functions with one, common, general algorithm.
int FUNC_NAME(func_, N1, N2)(int a, int b) {
#if ARGS&MIN
... code for min
#else
... code for max
#endif
}

#undef ARGS
#undef N1
#undef N2
It's also easy to split template.def into two parts with #ifdef ONLY_DECLARATIONS: declarations (to include from .h files) and code (to include from .c files). In this case the ARGS checks and the #undefs should be outside the #ifdef ONLY_DECLARATIONS.
So, what you can do in C++, you can also do in C, except for matching on values/types (though you can use static checks in that case). How necessary this is remains a big question: many language designers dislike restrictions like types (as in dynamic languages), and some prefer run-time checks (more flexible than type/inheritance checking alone) :)

Monday, October 15, 2012

Is a sample located around a periodic peak?

The following pseudo-code (Python) function tests whether a sample is located in a domain around a periodic peak:
def isclose(a, b, lim):
    # is b close to a, or to a multiple of a, within lim
    if a == 0:
        return abs(b - a) <= lim
    else:
        n = b/a
        if b%a > a/2:
            n += 1
        c = a*n # closest multiple of a to b
        m = abs(b - c)
        return m <= lim
It returns a boolean.
It's useful for testing the dispersion of samples around peaks.

Thursday, October 11, 2012

How to find fundamental frequency of wave (auto-detect signal period)

This is not such a rare task as you might think (for example, you can find it in an oscilloscope's auto mode). There are several methods to do it: in the frequency domain (FD) and in the time domain (TD). The most famous FD method is the auto-correlation function: its maximum lets you find δτ, the period of the signal.
But the main problem with FD methods is calculation speed. TD methods are faster. They are:
  • Based on zero-crossing counting
  • Peak rate
  • Slope rate and so on...
Here is one of them: peak rate, i.e. "monitoring" the max/min of the signal in real time.
The main idea is that the signal has max (and min) values which repeat with a dt equal to the period of the signal.
The problems are:
  • there may be several mins/maxes, e.g. due to noise
  • the signal can be a superposition (sum) of signals with different frequencies
  • the signal may consist of "packets" (also periodically repeated waves)

[Figures 1, 2, 3: periodic waves with period T]
Look at Fig 1: it illustrates the idea of the algorithm: a periodic wave has periodic maxes and mins. If we can detect them, we can detect the period. But sometimes it's difficult to recognize the mins and sometimes the maxes, so we should use "peaks" of both kinds (mins and maxes) and select the biggest of the results.
So main idea of algorithm is:
PEAKS detector -> MAX -> Result (period)
However, Fig 3 shows us the next problem: the signal may not be ideal and can have distortions (horizontal, in time, and vertical, in magnitude), so our algorithm should allow limits on the non-ideal magnitude values and peak-distance repetitions. These are equality domains for the peak value and for the possible periods. OK. The next problem is shown in Fig 2: if the signal is a sequence of packets, then we can detect 2 periods: the period of the signal inside the packet or the period of the packets. This algorithm detects the period of the packets (because it's used for an oscilloscope). And the last problem is noise; to avoid noise effects we should average the period maximums. The equality domains will also help us avoid noise effects (as you can see in the examples).
So, final scheme will be:
PEAKS detector -> Packets PROCESSOR -> MAX -> Result (period)
Here is the Python prototype of this algorithm:
import sys
import operator
import math
import random

class AverageDistance:
    # max. distance (may be period)
    max = -sys.maxint - 1
    # enough distances number to report
    difflim = 2
    # proximity of max. distance values (equality domain)
    vallim = 0
    # distance between peaks
    d = 0
    # sum of distances between peaks (for average)
    diff = 0
    # number of distances between peaks (for average)
    ndiff = 0

    def dump(self):
        return "d=%d, diff=%d, ndiff=%d"%(self.d,self.diff,self.ndiff)

    def __init__(self, **kw):
        for k,v in kw.items():
            setattr(self, k, v)

    def calc(self, val):
        if abs(val - self.max) <= self.vallim:
            if self.d == 0:
                self.diff += val
            else:
                self.diff += self.d + val
            self.ndiff += 1
            self.d = 0
        elif val > self.max:
            self.max = val
            self.d = 0; self.diff = 0; self.ndiff = 0
        else:
            self.d += val
        if self.ndiff > self.difflim:
            return self.diff/self.ndiff
        else:
            return -1

    def test(self, samples):
        for smp in samples:
            #print "%d:"%smp
            x = self.calc(smp)
            if x != -1:
                return x
            #print "\t%s"%self.dump()
        return x

def DistanceBetweenPeaks_max(**kw):
    return DistanceBetweenPeaks(cmpfunc=operator.gt, max = -sys.maxint-1, **kw)
def DistanceBetweenPeaks_min(**kw):
    return DistanceBetweenPeaks(cmpfunc=operator.lt, max = sys.maxint, **kw)
class DistanceBetweenPeaks:
    ### Next 2 attrs. defines max. based or min. based algorithm,
    ### default is max. based:
    # comparison func
    cmpfunc = operator.gt
    #max. sample
    max = -sys.maxint-1

    # proximity of sample (near to max.)
    vallim = 0
    # peak value, if not -1 (to distinguish 2 monotonically grown close samples);
    # it is also not a scalar but a domain (vallim is used as the limit)
    peak = -1
    # distance between two peaks
    d = 0
    # index of max.
    imax = 0
    # index of sample
    i = 0

    def __init__(self, **kw):
        for k,v in kw.items():
            setattr(self, k, v)

    def calc(self, val):
        ret = -1
        #print "val:",val
        # version that cuts 1,2,3,4... samples by limiting the "hole" between peaks
        #if abs(val - self.max) <= self.vallim and (self.i - self.imax)>1:
        if abs(val - self.max) <= self.vallim and \
                ((abs(val - self.peak) <= self.vallim) if self.peak != -1 else True):
            ret = self.i - self.imax
            if ret == 0: ret = 1
            self.imax = self.i
            #print "1:"
        #elif val > self.max:
        elif self.cmpfunc(val, self.max):
            #print "2: val=%d max=%d"%(val,self.max)
            # the next 2 lines are not necessary:
            if val > self.peak and self.peak != -1:
                val = self.peak
            self.max = val
            self.imax = self.i
            self.d = 0
        else:
            self.d += 1
        self.i += 1
        return ret

    def test(self, samples):
        for smp in samples:
            x = self.calc(smp)
            if x != -1:
                print x

def test_alg(samples, dbp_kw={}, ad_kw={}):
    dbp_max = DistanceBetweenPeaks_max(**dbp_kw)
    dbp_min = DistanceBetweenPeaks_min(**dbp_kw)
    ad_max = AverageDistance(**ad_kw)
    ad_min = AverageDistance(**ad_kw)
    period_maxalg = -1
    period_minalg = -1
    result = -1
    for smp in samples:
        time_maxalg = dbp_max.calc(smp)
        if time_maxalg != -1:
            period_maxalg = ad_max.calc(time_maxalg)
            if period_maxalg > result:
                result = period_maxalg
        time_minalg = dbp_min.calc(smp)
        if time_minalg != -1:
            period_minalg = ad_min.calc(time_minalg)
            if period_minalg > result:
                result = period_minalg
    return result

# Test samples
################################################################################
# T=7
samples0 = (
0, -10, -20, -10, 0, 10, 20, 0, -10, -20, -10, 0, 10, 20,
0, -10, -20, -10, 0, 10, 20, 0, -10, -20, -10, 0, 10, 20,
0, -10, -20, -10, 0, 10, 20, 0, -10, -20, -10, 0, 10, 20,
)
# samples with unequal maximums, T=7
samples1 = (
0, -10, -20, -10, 0, 10, 18, 0, -10, -20, -10, 0, 10, 19,
0, -10, -20, -10, 0, 10, 19, 0, -10, -20, -10, 0, 10, 18,
0, -10, -20, -10, 0, 10, 19, 0, -10, -20, -10, 0, 10, 18,
)
# samples with unequal maximums and unequal time between them, T=7
samples2 = (
0, -10, -20, -10, 0, 10, 18, 0, -10, -20, -10, 0, 10, 19,
0, -10, -20, -10, 0, 10, 19, 0, -10, -20, -10, 0, 10, 0, 18,
0, -10, -20, -10, 0, 10, 18, 0, -10, -20, -10, 0, 10, 19,
0, -10, -20, -10, 0, 0, 0, 10, 18, 0, -10, -20, -10, 0, 1, 1, 10, 19,
)
# T=3
samples3 = (
0, 0, -5, 0, 0, -5, 0, 0, -5, 0, 0, -5, 0, 0,
0, 0, -5, 0, 0, -5, 0, 0, -5, 0, 0, -5, 0, 0,
0, 0, -5, 0, 0, -5, 0, 0, -5, 0, 0, -5, 0, 0,
)
# T=3
samples4 = (
-1, -1, -10, -1, -1, -10, -1, -1, -10, -1, -1, -10, -1, -1, -10,
-1, -1, -10, -1, -1, -10, -1, -1, -10, -1, -1, -10, -1, -1, -10,
-1, -1, -10, -1, -1, -10, -1, -1, -10, -1, -1, -10, -1, -1, -10,
)
# T=8
samples5 = (
1,0,1,0,0,0,0,0,1,0,1,0,0,0,0,0,1,0,1,0,0,0,0,0,1,0,1,0,0,0,0,0,1,
# T=12
#1,0,1,0,1,0,1,0,0,0,0,0,1,0,1,0,1,0,1,0,0,0,0,0,
#1,0,1,0,1,0,1,0,0,0,0,0,1,0,1,0,1,0,1,0,0,0,0,0,
#1,0,1,0,1,0,1,0,0,0,0,0,1,0,1,0,1,0,1,0,0,0,0,0,
#1,0,1,0,1,0,1,0,0,0,0,0,1,0,1,0,1,0,1,0,0,0,0,0,
)
# T=10
samples6 = (
0,1,2,3,4,5,4,3,2,1,0,1,2,3,4,5,4,3,2,1,0,1,2,3,4,5,4,3,2,1,
0,1,2,3,4,5,4,3,2,1,0,1,2,3,4,5,4,3,2,1,0,1,2,3,4,5,4,3,2,1,
)
# T=50
samples7 = (
0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,
24,23,22,21,20,19,18,17,16,15,14,13,12,11,10,9,8,7,6,5,4,3,2,1,
0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,26,
24,23,22,21,20,19,18,17,16,15,14,13,12,11,10,9,8,7,6,5,4,3,2,1,
0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,
24,23,22,21,20,19,18,17,16,15,14,13,12,11,10,9,8,7,6,5,4,3,2,1,
0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,26,
24,23,22,21,20,19,18,17,16,15,14,13,12,11,10,9,8,7,6,5,4,3,2,1,
0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,26,
24,23,22,21,20,19,18,17,16,15,14,13,12,11,10,9,8,7,6,5,4,3,2,1,
)
# "square", T=8
samples8 = (
0,0,0,5,5,5,5,5,0,0,0,5,5,5,5,5,0,0,0,5,5,5,5,5,0,0,0,5,5,5,5,5,
0,0,0,5,5,5,5,5,0,0,0,5,5,5,5,5,0,0,0,5,5,5,5,5,0,0,0,5,5,5,5,5,
0,0,0,5,5,5,5,5,0,0,0,5,5,5,5,5,0,0,0,5,5,5,5,5,0,0,0,5,5,5,5,5,
)
# Sin: noise is 1..300. Magnitude is 1000, T=360
# to detect DistanceBetweenPeaks.vallim = 260..300, AverageDistance.vallim=10
samples9 = []
for angle in range(0, 5*360, 1):
    smp = math.sin(math.radians(angle))
    smp += random.uniform(0.001,0.5)
    samples9.append(int(smp*1000))
# Long glitches, T=8
samples10 = (
0,0,0,1000,5,5,1000,5,0,0,0,5,5,1000,5,1000,0,0,0,5,5,5,5,5,0,0,0,100,5,5,5,5,
0,0,0,5,5,5,100,5,0,0,0,100,5,5,5,5,0,0,0,5,5,5,5,100,0,0,0,5,5,5,5,100,
0,0,0,5,5,100,5,5,0,0,0,5,5,100,5,5,0,0,0,5,5,5,100,5,0,0,0,5,5,100,5,5,
)

# Tests
################################################################################
print "samples0:", test_alg(samples0)
print "samples1:", test_alg(samples1)
print "samples2:", test_alg(samples2, dict(vallim=1), dict(vallim=1))
print "samples3:", test_alg(samples3)
print "samples4:", test_alg(samples4)
print "samples5:", test_alg(samples5)
print "samples6:", test_alg(samples6, dict(vallim=0), dict(vallim=1))
print "samples7:", test_alg(samples7, dict(vallim=1), dict(vallim=1))
print "samples8:", test_alg(samples8, dict(vallim=1), dict(vallim=1))
print "samples9:", test_alg(samples9, dict(vallim=260), dict(vallim=100))
print "samples10:", test_alg(samples10, dict(vallim=1), dict(vallim=1))
#print "samples9:", test_alg(samples9, dict(vallim=60, peak=1000), dict(vallim=60))

for i,ss in enumerate((samples0, samples1, samples2, samples3, samples4, samples5, samples6, samples7, samples8, samples9, samples10)):
    with open("data%d.dat"%i, "wt") as f:
        for smp in ss:
            f.write("%d\n"%smp)
Here DistanceBetweenPeaks is the general form of the PEAK detector. It can be parametrized with cmpfunc and max for 2 different kinds of detection: detecting maxes and detecting mins (there are 2 constructors: DistanceBetweenPeaks_max, DistanceBetweenPeaks_min). Parameters: vallim, the equality domain (limit) for peak values (an absolute value, not a ratio!); peak, the value of the peak (not required, but it can help in difficult cases such as very noisy signals).
AverageDistance is the PACKETS PROCESSOR. Its parameters are: vallim, the equality domain for distances (to treat them as equal); difflim, how many repetitions are enough to report a repetition as the period.
test_alg() is the main function, implementing the algorithm scheme. It uses the two peak detectors as alternatives and selects the max period value after processing possible packets.
This code is not very Pythonic, but it is written to be easily ported to C, and its nature is "dataflowable": each object holds state and processes only ONE SAMPLE per call (in C these should be inline functions or macros), so they are well suited to real-time processing. The algorithm is faster than autocorrelation-based, probability-based, etc. methods.
test_alg(), however, takes not one sample but a list of samples; this signal fragment should be several periods long, and the required length depends on the difflim parameter used.
We can see that the algorithm detects even a very noisy signal (samples9) and signals with strong magnitude and time distortion.
The resulting period is in sample units ("hops" between samples). When you know the sampling frequency, it's easy to convert it to Hz :)
This algorithm is very simple and usable for "fast" detection tasks, but for offline signal processing you should use a more accurate algorithm!

Wednesday, October 10, 2012

How to convert accelerometer values to path/coordinates

To get a path or coordinates from accelerometer values you need to integrate them. From physics we know:
v = dr/dt
a = dv/dt = d²r/dt²
If a (acceleration) is constant, the motion is called uniformly accelerated:
v(t) = v(0) + at
r(t) = r(0) + v(0)t + 0.5*at²
v(0), r(0) are the initial velocity and displacement. If a != const, these formulas cannot be used. If we know how a depends on t (for example, a(t) = b + c*t), we can use:
v(t) = v(0) + b*t + 1/2*c*t²
r(t) = r(0) + v(0)*t + 1/2*b*t² + 1/6*c*t³
But if we don't know a(t), we can only use the general rules:
          t
r(t)=r(0)+∫v(t)dt
          0

          t
v(t)=v(0)+∫a(t)dt
          0
and calculate these integrals with some numerical integration method.
You can find examples (models with plots) of such methods (in SciLab) there [in the integral.sci file: Simpson's rule, the trapezoid rule, Leo Tick's rule]. But we should know about 2 problems:
  1. If there is some constant "component" in the acceleration (or noise), it will cause a BIG error in the result
  2. Each numerical integration method is a filter and has a transfer function, which gives distortions even at 0.2-0.4 of the Nyquist frequency [see R. W. Hamming, Digital Filters, 1977]
So, you can use smarter methods. One of them is complex (chained) integration with filters. The idea is to filter each dataflow: between the integrators, and to filter the output result too:
a(t)->FILT->af(t)->INT->v(t)->FILT->vf(t)->INT->x(t)->FILT->xf(t)
   where FILT - is a filter, INT - is an integrator.

It can be implemented on a dataflow model (as I did in hmc-demo). You can find an example of such a model in the SciLab source file "path.sci", where you can combine different integration methods with different filters and see the results in plots.
And remember: eventually, you never get a fully correct result. Use velocity sensors instead.