Koala


Motes ("Telos-B")



Prelim notes

  • FileZilla connection to the Jetos virtual machine
    • Use the IP address as determined as root on the VM
      • Issue ifconfig and use the inet addr for eth0; it should be of the form 192.168.X.Y
      • Rob's x64 work box: FileZilla to Jetos: 192.168.254.128
      • Rob's x86 laptop: FileZilla to Jetos: 192.168.74.129
      • Nick's x86 laptop: 192.168.183.128
    • Username / password / port: tinyos / tinyos / 22
    • /home/tinyos/local/src/Seamonster/ contains SensingNode (+ /FormatFlash) and Basestation


Introduction

The Johns Hopkins University has a soil ecology program called Life Under Your Feet, or LUYF. Their effort to build a data acquisition system on "mote" microcomputers has produced Koala, a sensor mote operational software package. Mike Liang is helping SEAMONSTER implement Koala (2009). The principal advantage of Koala is aggressive duty cycling, which results in very low power consumption.


Like any research program, the work on Koala is constantly evolving. Trying to describe a "SEAMONSTER flavor" of Koala on the same page as a "LUYF flavor" would be overly ambitious. For this reason we have built a second page, Koala LUYF, that describes the latter more precisely. In the sense of an evolutionary tree, the main trunk is LUYF Koala and SEAMONSTER Koala is a small offshoot branch.


Working with Koala

There are three important directories in the virtual machine 'Jetos' for Koala, corresponding respectively to sensor mote programming, clearing flash memory, and base station operation (programming and data recovery).

/home/tinyos/local/src/Seamonster/SensingNode
/home/tinyos/local/src/Seamonster/SensingNode/FormatFlash
/home/tinyos/local/src/Seamonster/Basestation


Very very important conceptual note

In the field we assume the motes are already programmed and operational and that the base station is connected to a VuS (Microserver) via USB cable. From this assumption we define an operational model consisting of a primary function and secondary functions.

  • Primary function
    • Try to download mote data on every VuS operational interval (on-period).
      • This is done by means of the Python program Seamonster.py, which acts as a data gateway between the SBC and the base mote.
  • Secondary functions
    • Move the data files to inbound/expired directories
    • Clear mote flash memory
    • Modify mote operational parameters
    • Synch mote clocks
    • Delete global time synch file
    • Remove redundant data from data files
    • Convert raw data to CSV prior to SendBack
    • Evaluate data for possible "smart" actions

The game plan is to migrate secondary functions into the primary set as the primary functions prove reliable.
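
As a concrete illustration of the primary function, here is a minimal sketch of a wrapper that a cron job on the VuS could invoke once per on-period. The directory, device name, and the run_download helper are illustrative assumptions, not part of the Koala distribution; the essential step is simply running seamonster.py against the base mote.

  #!/usr/bin/env python
  # Hypothetical wrapper for the primary function: attempt one mote data
  # download per VuS on-period.  Paths and behavior here are assumptions.
  import os
  import subprocess

  BASESTATION_DIR = os.path.expanduser("~/local/src/Seamonster/Basestation")
  BASE_MOTE = "/dev/ttyUSB0"   # confirm with motelist

  def run_download():
      """Run seamonster.py against the base mote; return True on success."""
      try:
          subprocess.check_call(["./seamonster.py", BASE_MOTE],
                                cwd=BASESTATION_DIR)
          return True
      except subprocess.CalledProcessError:
          return False

  if __name__ == "__main__":
      if not run_download():
          print("download failed; store_x files left untouched")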


Procedural

Install the Virtual Machine (VM)


Program motes 1: Format the external flash memory

  • Use motelist to determine USB address e.g. /dev/ttyUSB0
  • Format (i.e. clear) the flash memory used for data storage
% cd ~/local/src/Seamonster/SensingNodes/FormatFlash/
% make telosb install bsl,/dev/ttyUSB0

(can use reinstall to skip the compile if this is already done)


Program motes 2: Program a SensingNode

  • Determine whether you wish to include the mote's onboard sensors in your returned data
    • If so: Edit the file SamplerC.nc in the ~/local/src/Seamonster/SensingNodes directory
    • Ensure that the five lines of code referring to TempAndHumid, PhotoTsr, PhotoPar, and BatteryVoltageC are uncommented
  SamplerP.Read[unique(UQ_SAMPLER)] -> TempAndHumid.Temperature;
  SamplerP.Read[unique(UQ_SAMPLER)] -> TempAndHumid.Humidity;

  SamplerP.Read[unique(UQ_SAMPLER)] -> PhotoTsr;
  SamplerP.Read[unique(UQ_SAMPLER)] -> PhotoPar;
  SamplerP.Read[unique(UQ_SAMPLER)] -> BatteryVoltageC;
  • Notice there are no double-slashes in front of these lines: They are active.
    • Save this file.
  • Ensure the mote is visible to the VM (use "motelist") and program the mote:
    • Ensure all LEDs stop blinking (blue remains on-solid)
    • Choose a mote ID number (integer) to assign to this mote; this example assigns ID 2
    • Write the SensingNode program to mote memory:
% cd ~/local/src/Seamonster/SensingNodes/
% make telosb install,2 bsl,/dev/ttyUSB0
  • (You can use reinstall to skip the compile if this is already done.)


Important: To modify the sampling frequency, for example for calibration purposes, please refer to the sensor calibration page.


Program motes 3: Program base station

The mote base station plays the crucial role of waking up the sensing nodes and downloading their data.

  • Do FormatFlash per Step 1 to clear the mote memory
  • Note that the default base station ID is 1: always use this mote ID unless you are prepared to delve into the Koala software.


% cd ~/local/src/Seamonster/Basestation/
% make telosb install,1 bsl,/dev/ttyUSB0

(can use reinstall to skip the compile if this is already done)


Sensor motes should be sampling every 5 minutes and sleeping otherwise. As noted above, to modify this interval please see the mote sensor calibration page. To wake the sensing nodes and download their data, ensure the base station mote is connected to the server and type:


% cd ~/local/src/Seamonster/Basestation/
% ./seamonster.py /dev/ttyUSB0


This produces, or appends to, store_x files in this (VM) directory, where x is the node ID of each radio-available sensing node. The store_x files are binary data files pulled in by the base station and written to disk by the Seamonster.py program. The procedure typically takes a couple of minutes to complete. Actively participating motes will illuminate both blue and red LEDs. After completion the base station returns to a solid red LED and the sensing nodes return to dark (no LEDs on).


It may happen that no store_x file is produced for a given mote by the download operation. This can be checked by immediately (say within a few minutes) re-running the download operation. Assuming that the other motes do download properly, the silent mote should be considered suspect. Owing to the fragile nature of a printed circuit board, it is possible that motes that fail to report are broken and unusable.


Data and date checking on the Jetos virtual machine

Date checking (like "January 19 2009")

To check the date sequence of a particular raw data file store_x:

% cd ~/local/src/Seamonster/Basestation
% rm dates_*.txt
% ./Date.py
% more dates_x.txt

You can visually check that the dates appear in chronological order. To see how Date.py converts epoch times (in seconds since 1970) to calendar dates, see the remarks below.


Data checking on the Virtual Machine

It is essential not to delete the store_x files or the globals file in the Basestation directory: Koala appends to the store_x files and uses the globals file to convert mote local time to global (UNIX) time. To produce comma-separated-value (CSV) files from the binary store_x files:


% cd ~/local/src/Seamonster/Basestation
% rm store_*.csv
% ./Csv.py
% more store_x.csv


This does two things:

  • Produces CSV files for all store_x files.
  • Uses information in the file "globals" to convert local mote times to global unix epoch times.


The time conversion uses linear regression to determine the relationship between the local and global clocks. One caveat: if you ever reboot (reset) a sensor mote you must clear the "globals" file too, since that mote's local clock is reset. The JHU team points out that this may also happen if the mote runs for a while (many days?) without a data download... so there is more to figure out here. For now we do not assume that the time series is correct; we verify.


As the globals file accumulates more data points the estimate becomes increasingly accurate. Hence the same local time may be converted to slightly different global times on successive runs of Csv.py. Note also that the download process tries to fetch only the most recent data, based on the last recorded time in the existing store_x files.
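
For reference, here is a minimal sketch of the idea behind that conversion: an ordinary least-squares line global = a*local + b is fit through accumulated (local, global) reference pairs and then applied to the sample timestamps. The pair values and function names below are illustrative only; in practice Csv.py performs the fit using the contents of the globals file.

  # Sketch of the local-to-global clock conversion idea (illustrative values).
  def fit_clock(pairs):
      """Least-squares fit of global = a*local + b from (local, global) pairs."""
      n = float(len(pairs))
      sx = sum(l for l, g in pairs)
      sy = sum(g for l, g in pairs)
      sxx = sum(l * l for l, g in pairs)
      sxy = sum(l * g for l, g in pairs)
      a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
      b = (sy - a * sx) / n
      return a, b

  def to_global(local_time, a, b):
      return a * local_time + b

  # Two or more sync points; more points give a better estimate.
  pairs = [(1000.0, 1232400000.0), (61000.0, 1232460010.0)]
  a, b = fit_clock(pairs)
  print(to_global(31000.0, a, b))   # interpolated global (epoch) time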


The original CSV converter was called toCsv.py. Rob modified it and renamed it Csv.py; in this version all ADC values are written on a single line in the output (ASCII) file. (See the local time-conversion notes below.) While this simplifies the CSV file format it also has one potential negative consequence: it assumes each ADC value is acquired at the same time. This is not a guaranteed correct assumption, although we expect the samples to happen within a few seconds of one another, and they are separated from the next sampling sequence by a longer interval, typically five minutes.


Data and date checking on a PC

Suppose we wish to analyze and calibrate sensor data. I will outline the procedure for moving the data to a PC for analysis using Microsoft Excel. First establish the IP address of the Jetos virtual machine:

% su
(log in as root)
# ifconfig
(note that the eth0 ip address is, in my case, 192.168.254.128)
  • Start up FileZilla
  • Host: 192.168.254.128
  • Username: tinyos
  • Pass: tinyos
  • Port: 22
  • Click <Quickconnect>
  • Create a destination directory
  • Drag over store_x.csv files
  • Drag over dates_x.txt files if desired
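
The same transfer can be scripted rather than done through the FileZilla GUI. Below is a minimal sketch using the paramiko SSH/SFTP library on the PC; the host address and remote directory are the same values used above, and you would substitute your own VM's IP address.

  # Sketch: pull store_x.csv files from the Jetos VM over SFTP (requires paramiko).
  import paramiko

  HOST = "192.168.254.128"   # your VM's eth0 address, from ifconfig
  REMOTE_DIR = "/home/tinyos/local/src/Seamonster/Basestation"

  client = paramiko.SSHClient()
  client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
  client.connect(HOST, port=22, username="tinyos", password="tinyos")

  sftp = client.open_sftp()
  for name in sftp.listdir(REMOTE_DIR):
      if name.startswith("store_") and name.endswith(".csv"):
          sftp.get(REMOTE_DIR + "/" + name, name)   # copy into current directory
  sftp.close()
  client.close()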


On the PC we can now build charts that show time series data accumulated from the ADC ports.

  • Start Excel
  • Open the .csv file of interest. Excel should recognize the commas as delimiters
    • For example, using 4 ADC channels, there are 9 resulting columns of data
      • Time (seconds, epoch), 0, DN-0, 1, DN-1, 2, DN-2, 3, DN-3
  • Insert an empty column to the right of the A (time) column, populate it with dates
  • Delete column C which is all zeros
  • Copy and paste column B (the dates) into columns D, F, and H.
    • This permits you to select pairs of columns and create scatter plots directly
    • Since the data is all time-synchronized you can also simply...
      • Delete D, F, and H columns.
      • Work from column B as the horizontal axis.
    • Example: Create a comparison plot of two ADC channels
      • Select columns B and C and Insert a scatter chart with points and connector lines
      • Select column G (corresponding to ADC 2) and Copy the contents
      • Select the chart (not the plot) and Paste
      • Format both ADC 0 and ADC 2 data
      • Insert text etc explaining what is going on, as in this example:


Comparison of two LM19 data records
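
The same comparison chart can also be produced with a short script instead of Excel. The sketch below assumes the 9-column layout described above (epoch time followed by channel-number/DN pairs), a hypothetical file name store_2.csv, and the matplotlib package.

  # Sketch: plot two ADC channels from a store_x.csv file against calendar time.
  import csv
  from datetime import datetime
  import matplotlib.pyplot as plt

  times, adc0, adc2 = [], [], []
  with open("store_2.csv") as f:          # hypothetical file for mote ID 2
      for row in csv.reader(f):
          if len(row) < 9:
              continue
          times.append(datetime.utcfromtimestamp(float(row[0])))
          adc0.append(float(row[2]))      # DN for ADC channel 0
          adc2.append(float(row[6]))      # DN for ADC channel 2

  plt.plot(times, adc0, ".-", label="ADC 0")
  plt.plot(times, adc2, ".-", label="ADC 2")
  plt.xlabel("Date (UTC)")
  plt.ylabel("DN")
  plt.legend()
  plt.show()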

Timing

With overhead, the sample timing seems to slip about 20 seconds per day, which is perfectly reasonable.


Time Conversion

Koala writes calculated global time values associated with data samples as "epoch numbers", which are seconds since Jan 1 1970. This leads to conversion questions:

  • How to convert to standard dates?
  • Does this conversion accurately track leap years, ..., leap seconds?
  • Where in the data pipeline (if anywhere!) should this conversion be done?

The first question is addressed pretty well at http://www.epochconverter.com. However, you'll notice that its "Microsoft Excel" entry rather begs the second question, which leads me to believe there must be a better way (in Excel).


Getting to human-readable from epoch time in seconds
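
UNIX epoch time counts leap years correctly but by definition ignores leap seconds, so any conversion based on it does the same. In Excel, one commonly cited approach is the formula =A1/86400 + DATE(1970,1,1) with the cell formatted as a date (this yields UTC). In Python the standard library handles it directly; the sketch below is illustrative, and the sample value happens to correspond to 19 January 2009, 00:00 UTC.

  # Sketch: convert an epoch time (seconds since Jan 1 1970, UTC) to a calendar date.
  import time
  from datetime import datetime, timezone

  epoch_seconds = 1232323200                      # illustrative sample value
  print(datetime.fromtimestamp(epoch_seconds, tz=timezone.utc))
  print(time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime(epoch_seconds)))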


Modifying Koala behavior

Modifying the sampling interval

Please see "Changing the sampling frequency" at the mote sensor calibration page.


Modifying the number of data samples

Expanding sampling to non-ADC methods

The TelosB mote is capable of I2C and serial port communication. These topics would be covered on other pages.


Notes

  • Prep data for inbound transfer:
    • On the VuS cron jobs: Run Csv.py to produce store_x.csv files for all x.
    • mv store_x.csv to /root/brickmove/inbound/store_x_datestring_VuSID.csv, rsync this back
    • move the rsync'd file to expired; but do not delete the store_x file
    • The store_x file will grow progressively larger; this is a redundancy problem
    • Therefore can store_x be tail-truncated periodically????
    • Ask Mike (in February) if a CustomTail trick would work:
% CustomTail store_x > store_x.tail
% mv store_x.tail store_x


    • The evolving globals file must keep pace with what happens to store_x