Fun with Motes, Bricks, and Modems
- Hardware prep: Field Motes and Microservers
- Lemon Creek Watershed: 2008 (Marijke), GIS Coverage Analysis, Foot access to upper Lemon Glacier
- Mendenhall River Watershed:
- Staying Alive: Field procedures, Blackerby Ridge
- 2008: Calendar, Build, Actions, Hardware, Station Overview
- 2007: Deployment notes
This page describes "degree of operation" for Motes, Microservers, and Radio Modem range testing. Format is a Summary at the top followed by log-style notes going back in time...
- WiFi ranging: router and brick sitting together were visible from 8.5 km using both the omni (-22 dBm) and dir (-100 dBm) antennas attached to laptops.
- WiFi ranging: Sea level + 200 ft; Vaio/brick -> Linksys ~8.5 km; pinged Br from Vaio w/o using external antenna; NetStumbler on Vaio gives signal strength (dBm): no external antenna = -76; 2.4 GHz Omni = -63; 2.4 GHz Dir = -55. (These results are questionable, as the Vaio may have utilized the ad-hoc brick-router signal instead of making its own path to the router.)
- Assume 2.4GHz WiFi for Microservers, 2.4GHz 802.15.4 for Motes
- WiFi ranging: Sea level - Laptop - USB Wireless adapter - Parabolic (held overhead) ~ 1 mile ~ short-omni (roof?) - NSRL1 router @ 41% signal strength
- Campbell to VuS: Campbell CR10x - SC929 serial cable - Micro-Innovations Serial-USB - Br43:ttyUSB0: dmesg, cat ttyUSB0; data seems buffer-blocked, no formatting
VuS PCS and SBC Inventory (20080825)
The free-floating PCS and SBC boards at NSRL were inventoried to determine which were properly functioning. The following boards were found at the lab:
- SBC- 7, 9, 20, 30, 31, 32 (note: 20, 30, 31, and 32 were found unlabeled. 20 is an older green board, while the other three are the new red boards)
- PCS- 7, 11, 14
- SBC 30 + PCS 7:
SBC lights remained on, with red LED flashing repeatedly. A ticking noise emanated from the SBC. 12v on JP3, but fluctuating 9 - 10v on JP1. Not pingable. The PCS was switched to 'off' during this test, but powered up immediately when JP1 was plugged in.
- SBC 30 + PCS 14:
12v on JP3, 11.96v on JP1. SBC lights normal. Ping successful, but both ssh and telnet timed out. The SD card was removed and reflashed using the br55 image. The unit was then still pingable, but neither ssh nor telnet connections could be established. The Ethernet lights flashed on the SBC board.
- SBC 31 + PCS 14:
11.96v on JP1. SBC red and green LEDs alternately flashing in perpetuity. Unit not pingable.
- SBC 32 + PCS 14:
LEDs normal and Ethernet lights on. Pingable with a successful ssh. Indication: SBC 32 Functional, PCS 14 Functional
- SBC 9 + PCS 14:
Normal LED and Ethernet. Pingable and successful SSH.
- SBC 20 + PCS 14:
Normal LED and Ethernet. Pingable and successful SSH.
- SBC 7 + PCS 14:
Normal LED and Ethernet. Pingable and successful SSH.
- SBC 7 + PCS 11:
Normal LED and Ethernet. Pingable and successful SSH.
- Broken PCS: 7
- Broken SBC: 30, 31
- Functioning PCS: 11, 14
- Functioning SBC: 7, 9, 20, 32
Brick 54 and 55 Post Lemon Recovery Troubleshooting (20080808)
Bricks 54 and 55 were powered up in the laboratory after recovering 54 from an extended (~1 month) stay at the Upper Lemon MET station and 55 from a one day field trip to the Cairn Relay.
Brick 54: Found dead in the water at ULG MET by Marijke-- none of the SBC lights would turn on when the unit was powered up. Back at the lab the unit was powered up and it was determined that the uC was receiving 12v on JP3; however, JP1--leading to the SBC--showed no voltage. When the PCS board was switched out, the unit worked just fine.
Brick 55: Back at the lab there did not seem to be any problems with this unit. It powered up properly, with all the lights on the SBC and amp flashing in the correct manner.
Bricks Ready for Field Deployment (20080623)
Three uS units are ready to be deployed in the Lemon watershed on the ULG MET, LGL PXD, and ULG Relay stations. The ULG Relay will house uS 53, while ULG MET and LGL PXD will be outfitted with uS 54 and 55, respectively. The Agent software is set up in the '/etc/init.d/Agent' file as:
- uS 53: 1 1 132 10 0 4 2 2 10 (Unit awake every hour, at top of hour, for 10 min, powering amp + router every 4 hours for 10 min, and prepping radio and SBC two minutes prior to the top of the hour.)
- uS 54/55: 1 1 6 10 0 4 2 2 10 (Unit awake every hour, at top of hour, for 10 min, powering amp + bridge every 4 hours for 10 min, and prepping radio and SBC two minutes prior to the top of the hour.)
These settings can be changed by editing the file mentioned above and then rebooting ('shutdown -r now') the uS. The settings are not cast in stone and will need to be modified if they end up being too much of a power burden.
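For example, to move a unit to a different duty cycle (a minimal sketch; the meaning of the individual arguments beyond those noted above is not documented here):

    >> pico /etc/init.d/Agent     (edit the argument string, e.g. '1 1 6 10 0 4 2 2 10')
    >> shutdown -r now            (reboot so the Agent picks up the new settings)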
The housekeeping script, which captures the data from the Campbell data loggers coming in over the serial ports for uS 54 & 55, is set to run at the top of every hour on those two units. The uS_data_routing script is set to run one minute into every fourth hour for uS 54 & 55, and at the second minute of every fourth hour on uS 53. As of this writing, the bricks have been allowed to run for just under two days with this configuration, and a check of the files on the SM server shows that all the data is properly getting there (i.e. no data gaps).
The data routing script was recently modified to be more robust: it now attempts multiple pings and rsyncs if failures occur. It runs in the following manner. The uS first attempts to ping the SM server and, if successful, rsyncs the files over-- all is good. If the initial ping fails, the script sleeps for 60 s and then attempts to ping SM three more times (total ping attempts = 4), sleeping for 60 s between each failed ping. If one of the attempted pings is successful the script moves on to rsync the data, but if the rsync encounters problems, such as a sudden loss of connectivity, it reinitiates the ping cycle until count = 4. The rsync has been set to time out after 60 seconds so that it does not simply lock up as it had previously been observed doing. uS 53 will stop trying to communicate after four failed pings to SM; however, uS 54 & 55 will then start a ping/rsync loop similar to that described above, though this time directed at uS 53. If communication with uS 53 fails, then the two uS units halt communication activity.
The data routing script has been tested to make sure that various aspects properly function. For instance, the script was run on uS 55 and a sudden loss of connectivity after a successful ping on SM was mimicked by yanking the internet Ethernet cable from the SM router. The failed rsync timed out after 60 seconds and then reinitiated its attempts to ping the server. After three failed attempts to ping the server the script switched its attention to uS 53, successfully pinging and moving the files over to the unit. Another scenario was mimicked in which the connection with SM was never established (thus leading to four failed pings) and the connection with uS 53 was suddenly lost after a successful ping and an initiation of rsync. In this case, the rsync once again timed out and then went back into the ping cycle. The Ethernet cable was replugged into the uS 53 router after a few failed pings, thus reenabling the connection and leading to a successful ping and successful rsync. These changes in the data routing script will help make the communication protocol significantly more robust and less prone to failure given spotty wireless connectivity or uS time discontinuities.
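For reference, here is the retry logic described above sketched in shell (the deployed script is the Python uS_data_routing script; the host addresses, paths, and exact rsync flags below are assumptions for illustration only):

    #!/bin/sh
    # Sketch of the ping/rsync retry cycle -- not the deployed script.
    SM=137.229.208.XXX            # SM server (placeholder address)
    US53=192.168.1.53             # brick 53 (address assumed from the brick naming scheme)
    SRC=/root/brickmove/inbound/
    DEST=/home/seamonster/inbound/

    try_push () {                 # ping up to 4 times, sleeping 60 s between failures
        n=0
        while [ $n -lt 4 ]; do
            if ping -c 1 "$1" > /dev/null 2>&1; then
                # the 60 s timeout keeps a dropped link from hanging the rsync
                rsync --timeout=60 -av "$SRC" root@"$1":"$DEST" && return 0
            fi
            n=$((n + 1))
            sleep 60
        done
        return 1                  # four failures: give up on this host
    }

    try_push $SM || try_push $US53   # on uS 53 itself there is no fallback; it just stops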
Brick Count Down (20080613)
The bricks are behaving magnificently. Currently bricks 53, 54, and 55 are set up in the lab, are all communicating wirelessly on the 192 network, and are able to speak and keylessly push files to seamonster @ 137. A CR1000 running the upper Lemon MET station program is streaming into Br55. Bricks 54 and 55, the PXD and MET station respectively, have crontabs set up so that at the first minute of every hour they run the 'housekeeping.sh' script, which captures data coming in over ttyUSB0 and stores it to a file in the /root/brickmove/logs/ directory. Every fourth hour, starting at 0:00, the radios turn on for seven minutes as per the fourHr script (soon to be replaced by a more advanced script that also manages sleep functionality).

When the housekeeping script is reinitiated the files are moved to '../inbound', and two minutes into every fourth hour, starting at 0:00, the 'uS_data_router.py' script is run. This script checks whether there are '.log' files in the inbound directory and, if so, attempts to ping the SM server. If the ping is successful then all of the '.log' files are sent via rsync to '/home/seamonster/inbound/' on the server. For Br 54 and 55, if pinging the server is not possible they then attempt to ping Br 53 and, if successful, push their files via rsync to that machine. After the files are pushed they are moved into the /root/brickmove/expired bin. Nothing is currently done with the files after they are pushed into expired.

While Br 54 and 55 attempt to push their files two minutes into every fourth hour, Br 53 waits an extra minute before attempting to push its files. This asynchronicity will help reduce server loading and gives the log files from Br 54 and 55 two chances to make it down to the server if the first shot does not work. As the production server (184.108.40.206) is currently not quite ready to act as the database, the files are routed from there to the development server (220.127.116.11) using the script 'prod_to_dev.py', which runs via crontab five minutes into every four hours.
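For concreteness, the Br 54/55 crontab presumably looks something like this (the script paths are assumptions; the schedule matches the description above):

    1 * * * *   /root/brickmove/housekeeping.sh               # close out and queue the hourly log
    2 */4 * * * python /root/brickmove/uS_data_router.py      # attempt the push every fourth hour

On Br 53 the second line would run at minute 3 rather than 2, and the server's crontab runs prod_to_dev.py at minute 5 of every fourth hour.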
What needs to be done:
- Install onto the bricks the soon-to-come 'Agent' script that Rob is writing. The script will manage both SBC and radio power cycling.
- Figure out what we want to do with the log files after they are pushed into the expired bin. (push them to a usb drive, or simply delete them perhaps every month?)
- Get the VPN set up so that the server can talk back to the bricks.
- If we are to have an actual sensor web then the bricks need to scan the log files after they come in, look for anything of interest, and then alert the other bricks.
- Set up the ntpdate synchronization.
- Cat the one-hour CR1000 log files into four-hour files before they are sent via rsync off the bricks.
- What am I forgetting?
The Bricks March On (20080611)
The fourHr script was allowed to run overnight on Br53, which is equipped with a router, and this morning the radio turned on at the correct time for the correct interval. After speaking with Matt it was decided that the bricks should all be set to Juneau time using the 'tzconfig' command. Configuring the units to Juneau set the time zone to Alaska Daylight Time (AKDT), which is UTC-8. The time was then set to match my laptop (which runs about 5 s faster than atomic time), plus or minus about 10 s. The date query returned the appropriate time. As the fourHr script defaults to UTC-9, the default was overridden in the /etc/init.d/fourHr file by adding '6 7 0' on the bricks with bridges and '132 7 0' on the brick equipped with a router. These arguments were added just after the '/root/bin/fourHr' section of the file. For the 'x y z' arguments, x = ucInterface value (6 = bridge + amp, 132 = router + amp), y = radio-on minutes, z = UTC offset.
- Setting the Time Zone:
>>tzconfig (then select region and nearest city-- Juneau is among the cities listed)
- Setting the Date and Time:
>>date -s "jun 10 14:00:00 AKDT 2008"
- Query Date-Time:
>>date -u (-u is optional if you want it to display UTC time)
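For reference, after the edit the relevant line in /etc/init.d/fourHr would read something like the following (the surrounding contents of the file are not reproduced here, so treat this as a sketch):

    /root/bin/fourHr 132 7 0    (router + amp, radio on for 7 min, UTC offset 0; bridge-equipped bricks get '6 7 0')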
The housekeeping script was modified so that the files are labeled with the appropriate brick number. The script was also modified so that it stores the incoming serial data in the /root/brickmove/logs directory and then moves the *.log files to the ../incoming directory.
What remains to be done?
- The sleep functionality needs to be enabled on the bricks so that they wake up at one hr and four hr intervals.
- The date/time needs to be set up so that it auto-syncs to some server using a program such as ntpdate (which is already installed on all the bricks)
- Make sure that cron is setup properly on the units.
- After the files have been successfully rsynced off the bricks and pushed into /root/brickmove/expired, they need to be dealt with.
Brick fourHr Script (20080610)
After a long chat with Rob and some quick script editing on Rob's part, the 'fourHr' script meant to replace 'updown' was implemented on two of the bricks. The script was sent to bricks 53 and 55, housing a Linksys router and a bridge, respectively. As per my understanding, the script is meant to turn the radios on for a given period of time every four hours and then to shut both the radio and the brick back down. Rob set the program up so that it automatically turns on PCS JP5 (the bridge) for 7 minutes, though these default settings can be overridden by changing the '/etc/init.d/fourHr' file, as was done with Br53. It appears that the script is functioning properly on both units, though this needs to be tested further. The script was implemented in the following way:
- The fourHr.c and fourHr.h files were sent to the bricks of interest:
>> rsync -rav fourHr.* root@192.168.1.##:/home/brick/src/
- Once on the brick, the script was added to the 'makefile' in the /home/brick/src directory:
>> pico /home/brick/src/makefile
At the bottom of the file there is a section relating to 'updown'. This was simply copied and 'updown' was changed to 'fourHr' (see the sketch after this list).
- The fourHr file was compiled, with the output going to '/home/brick/bin'
>> make fourHr
- The /home/brick/bin/fourHr output was moved to /root/bin:
>> mv fourHr /root/bin/
- The owner of fourHr was set to root (as verified by an 'ls -l' after the command):
>> chown root fourHr
- The 'updown' file was replaced with 'fourHr' in /etc/init.d and the executable was changed from /root/bin/updown to /root/bin/fourHr in the file. For Br53, '132 6 -9' was added as an argument just after /root/bin/fourHr:
>> mv updown fourHr
>> pico fourHr
- A symbolic link was created:
>> ln -s /etc/init.d/fourHr ./S99fourHr
- The date and time were set on the bricks to test the fourHr script's ability to turn the radios on and off. The script is designed so that the brick time is set in UTC, but so that the script runs at midnight (0:00) Alaska Daylight Time (which is 9 hours behind UTC). For some unknown reason the bricks themselves are operating in BRT (Brazilian time), which is three hours behind UTC. For example, using '>>date -s "jun 10 11:50:00 UTC 2008"' to set the date/time would produce a standard output of 'jun 10 8:50:00 UTC 2008' when queried using '>>date'. As the fourHr script was subtracting 9 hours from the query-return time, it was necessary to set the time on both bricks as per above. This made it so that when 0:00 rolled around, ten minutes after the command was issued and the units rebooted, the script would activate and turn the radios on for a given period before shutting back down.
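For step 2 above, the copied makefile stanza presumably ended up looking something like this (the actual compile flags in /home/brick/src/makefile were not recorded, so this is only a guess based on the 'updown' entry):

    fourHr: fourHr.c fourHr.h
            $(CC) $(CFLAGS) -o ../bin/fourHr fourHr.c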
While the script has not been allowed to run through numerous cycles, it does appear to be turning the radios on for given periods of time. The amount of time that the radios stay on needs to be measured. Additionally, while the radios are turned on and off, the script does not seem to be turning the SBCs on and off. Perhaps cycling the SBC power is not part of the script, but I was under the impression that it was. More to come....
Further Brick Success!! (20080609)
Today I set up the CR1000 MET station program to stream values from the data table. After this was working, the data logger was connected to Br55 (RS-232 port --> serial cable --> USB-serial converter --> USB on brick enclosure --> USB cable --> ttyUSB0 on brick). The 'housekeeping' script was initiated (>>./housekeeping), which captured the data flowing from the data logger and stored it to the '/root/brickmove/inbound' directory as a '.log' file. After allowing this to run for a few minutes, the 'uS_data_router.py' script was initiated. The script established communications with the SM server, pushed the recently created file to the server, and moved the file on the brick to '/root/brickmove/expired'.
While this was generally successful, after the file was rsynced from the brick to the server it was pushed into the expired bin. With the current setup this left the data coming in over the serial port not being captured to a file. As such, we need to re-route the incoming data to '/root/brickmove/logs' and allow it to dump into a file that periodically gets closed (such as when the brick wakes up to take an hourly sample). When the file is closed, it should then be moved over to '/root/brickmove/incoming', where it will wait to be rsynced to the server. As the rsync is currently only attempted once every four hours, there should be four files in the 'incoming' directory each time the transmission occurs. As this may change depending on the sampling interval, it may be better to create a script that cats the four files together into one larger file. Once the larger file is sent, it could then be moved to the expired directory. A sketch of this capture-and-rotate idea follows.
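A rough sketch of the idea (directory names from above; the details of cleanly stopping and restarting the capture around each rotation are glossed over here):

    cat /dev/ttyUSB0 >> /root/brickmove/logs/current.log &    # capture the serial stream
    # hourly, from cron: stop the capture, close out the file, and queue it for rsync
    mv /root/brickmove/logs/current.log /root/brickmove/incoming/br55_$(date +%Y%m%d%H).log
    # every fourth hour, the queued hourly files could be catted into one before sending:
    cat /root/brickmove/incoming/br55_*.log > /root/brickmove/incoming/br55_4hr.log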
Brick Configuration 2 (20080605)
Today has been quite productive. Bricks 53, 54, and 55 are all now communicating wirelessly on the 192 network. Brick 53 has been outfitted with a WRT54G router, which turns on automatically when the unit is powered up. All of the bricks are able to ssh and rsync between each other and to seamonster@production without the need for passwords. Keyless rsyncing of files from each machine to every other has been tested and found to work properly. The uS_data_router.py script is installed on each of the bricks and is properly routing data from the bricks to the server, or, if that connection is absent, then from 54/55 to 53 for storage.
What remains to be done (in roughly this order):
- Make sure that cron is properly running the housekeeping and uS_data_routing scripts
- Get the bricks properly power cycling
- Get them set up on the roof
- Get them on the ridge
- Party like a rock star
Brick Configuration (20080605)
Bricks 53, 54, and 55 have been configured in the following manners:
- Set bricks to be on UAS network (137.229.208.XXX).
>> pico /etc/network/interfaces
   address 137.229.208.XXX
   network 18.104.22.168
   netmask 255.255.255.0
   gateway 22.214.171.124
- Edit /etc/resolv.conf so that Domain Name Service (DNS) is setup to convert domain names to IP addresses.
>> pico /etc/resolv.conf
   search jun.alaska.edu
   nameserver 126.96.36.199
   nameserver 188.8.131.52
   nameserver 184.108.40.206
- Update the package lists on the bricks
>> apt-get update
- Install 'python' (br54 already had python)
>> apt-get install python
- Install 'less'
>> apt-get install less
- Install ssh
>> apt-get install ssh
- Install 'rsync', edit the sshd_config file so 'PasswordAuthentication' is 'no', then kill and restart the process.
>> apt-get install rsync
>> pico /etc/ssh/sshd_config
>> ps auxww | grep sshd
>> kill -9 PID    (PID is determined from the previous command)
>> which sshd
>> /usr/sbin/sshd
- Edit the crontab from SVN so that housekeeping runs every day at 0:00 and the uS data router runs every four hours.
- Copy the 'brickmove' folder from the seamonster directory to the bricks (includes the crontab, housekeeping, and uS data routing scripts). The seamonster directory can be obtained by checking it out from the Subversion repository: Subversion must be installed on your computer ('sudo apt-get install subversion') and you must check out the directory from the repository ('svn co http://seamonster.jun.alaska.edu/svn/seamonster')
>> rsync -rav brickmove root@brick-IP:/root/
- Change the hostname of each brick to brick53, brick54, or brick55, then restart.
>> pico /etc/hostname
>> pico /etc/hosts    (on the first line: brick-IP brick##, e.g.: 192.168.1.55 brick55)
>> shutdown -r now
- Setup keyless ssh and rsync. Generate keys if necessary, rename keys, move keys to remote machine, generate 'authorized_keys' file, add keys to 'authorized_keys' file, test, do all of this again but this time sending the remote keys to the local machine. If everything has been done properly the ssh at the last step should not prompt you for a password, nor should future ssh or scp.
root@brick##:/root/.ssh/ >> ssh-keygen -trsa    (after issuing the command, hit enter through all the prompts)
root@brick##:/root/.ssh/ >> cp id_rsa.pub id_rsa_##.pub    (where ## = brick's #)
root@brick##:/root/.ssh/ >> scp id_rsa_##.pub seamonster@production:/home/seamonster/.ssh/
root@brick##:/root/.ssh/ >> ssh seamonster@production
seamonster@production >> cd /home/seamonster/.ssh/
seamonster@production:/home/seamonster/.ssh/ >> cat id_rsa_##.pub >> authorized_keys
Then the other way around:
seamonster@production:.ssh/ >> ssh-keygen -trsa    (after issuing the command, hit enter through all the prompts)
seamonster@production:.ssh/ >> cp id_rsa.pub id_rsa_seamonster.pub
seamonster@production:.ssh/ >> scp id_rsa_seamonster.pub root@brick##:/root/.ssh/
seamonster@production:.ssh/ >> ssh root@brick##
root@brick## >> cd /root/.ssh/
root@brick##:/root/.ssh/ >> cat id_rsa_seamonster.pub >> authorized_keys
Now test out the ssh for both:
root@brick##:/root/.ssh/ >> ssh seamonster@production    (this should not ask for a password anymore)
seamonster@production >> ssh root@brick##    (this should not ask for a password anymore)
- Setup Br53 to power a 12V Linksys WRT54G router and amp upon boot instead of a WET54G bridge and amp. This involves modifying the 'updown' script.
root@brick53:/home/brick/src >> pico updown    (under the -----2----- section, edit the 'ucInterface 2 6' string to read 'ucInterface 2 132')
root@brick53:/home/brick/src >> make updown
root@brick53:/home/brick/src >> cd ../bin
root@brick53:/home/brick/bin >> mv updown /root/bin
root@brick53:~/ >> shutdown -r now
Check that the voltage on JP7 of the PCS shoots up to 12V a little after the unit has restarted.
SBC Linux Boot and Build (20080515)
- All 10 of the new, red SBCs have been set up to boot into the proper Linux kernel. This involved using RedBoot--Red Hat's embedded bootloader--to load the proper kernel and sdcard initrd files from the Seamonster server into RAM and then subsequently onto the flash memory. With the images in flash memory, 'fcon boot_script_data' was configured to read the initrd and kernel files upon boot and thus facilitate the start of Linux.
- Five of the new SBCs have been wired into the new brick enclosures. While the units are booting the proper Linux kernel, the systems still need to be configured for proper networking, power management, and data capture/routing. Matt suggested creating a package that could be downloaded by each unit using the 'apt-get' command. Doing this, instead of flashing each of the individual SD cards, would allow for more dynamic updating capabilities. The bridges on the bricks have also yet to be configured.
New Brick Setup (20080427)
I am in the process of attempting to configure the three bricks that Rob recently sent up. It seems that there are a number of things that need to be done to each unit, including:
- Copy the brickmove dir from SVN into /root
- Install Rsync
- Setup crontab and milo
- Get the brick data router script running on each unit
- Change the IP and gateway addresses to the 192.168 network
- Copy the 'authorized_keys2' and 'id_rsa' files to /root/.ssh to allow keyless ssh
Micro-Server Power Consumption (20080424)
- A uS equipped with a bridge, though lacking an amplifier, has been set on the roof of NSRL and is being powered by two 12v gel cell batteries connected in parallel. The batteries are rated at 26 Ah and 35 Ah, totaling 61 Ah. Using the WattsUp, the power draw was measured to be 7.4 W at an amperage of 0.6 A. Given such a draw, the batteries should theoretically last for about 102 hrs (4.2 days) (Amps x Time = Amp-hrs @ Voltage). The voltage output from the batteries will be periodically checked.
- Over the course of 50 hours, power output by the batteries dropped from 13.02v to 11.29v. Over the subsequent 19 hours the battery voltage dropped to 8.43v.
Wifi Range Test 3 (20080414)
A similar setup was used as during the last range test, though this time the brick was left with the router on Engineers Cutoff Rd, while the laptops were taken up to Mountain Side Estates, 8.5 km away. Using both the hand-held omni and a directional antenna mounted on a tripod connected to a laptop, the router was clearly visible. For some reason, however, the router assumed an IP address that was not '192.168.1.1', which is what it had been configured as. This led to some confusion: while it was possible to connect to the router, it was not possible to either telnet or ping the unit using the 192 address. It took a little while to realize that the router was utilizing a different IP address. After a little while the router began to broadcast on two different IP addresses, one of which was the 192. At that point it was possible to ssh into the 192 address and then telnet into the brick. It was very odd, though both broadcasts could be seen in Net Stumbler. While it seemed that the signal strength was better with the directional than with the omni, as per the number of bars present on the signal strength bar, Net Stumbler reported strengths of -100 dBm and -22 dBm, respectively. Evidently the omni was getting a better signal than the directional, which may have been due to the directional antenna not having been properly aimed. The directional antenna was mounted on a tripod and aimed with the naked eye back at the router, so it was probably shooting pretty close to where it needed to go, though it may still have been somewhat off. Having only one person made it very difficult to position the antennas, operate the computer, and take notes simultaneously.
Wifi Range Test 2 (20080407)
The WiFi link was pushed a little further today using the NSRL1 router located ~8.5 km from Br43 and the Vaio, which were about 200 ft higher in elevation. The router was installed at 2432 Engineers Cutoff Rd (Logan's parents' house), which has a good view of the Mendenhall Wetlands and Twin Lakes area. After installation, Nick and Logan drove to Mountainside Estates, above Twin Lakes, to the highest reach possible. Initially the plan was to mount the Dir antennas onto a tripod; however, we had problems getting the mounting brackets tight enough to hold the antenna properly. As such, the antennas ended up being mounted on the top of Nick's truck. Br43 was powered on and the dir antenna was roughly sighted in on the house. When the Vaio was powered on, NSRL1 was visible without any external antennas attached, and it was possible both to connect to the router and to telnet into Br43. Using Net Stumbler, the signal strengths between the Vaio and the router were assessed for a number of different antennas. Net Stumbler Help says that the units of its 'Signal Strength' measurement are dBm. Here are the numbers:
- 1. no external antenna = -76 dBm
- 2. 2.4 GHz Omni = -63
- 3. 2.4 GHz Dir = -55
- 4. 2.4 GHz Yagi = -88 (though this may have been due to shoddy aiming...)
Wifi Range Test (20080401)
Some basic range testing was done today using the Vaio and a 2.4 GHz directional antenna. Nick and I went a little ways down the Juneau Airport trail to the point where the trail makes a sharp left turn. According to some Google Earth measuring, the distance as the crow flies from NSRL to where we set up shop was 1.00 mile (1.60 km). Holding the antenna overhead it was possible to see the NSRL1 router with 41% signal strength. The NSRL1 Linksys router was hooked into a short, unamplified omni-directional antenna. The Vaio was fitted with a directional antenna by using a USB wireless adapter, and the signal strength was ascertained using a wireless utility employed by the adapter (I forget the name right now, though the icon on the toolbar was a 'Z').
We lugged Br43 and a car battery out to the site as well, though we ended up not being able to use it. A little more forethought would have led us to conclude that it is not possible to have both the bridge and an Ethernet-connected laptop hooked into the brick. As such, we could power the brick up and aim the antenna at NSRL1; however, there was no way of knowing what was happening within the brick. A few things would have allowed us to get around this. First, had we had a cell phone it would have been possible to call Ed back at the lab and have him attempt to ping the brick. The rate of packet loss could then perhaps have told us something about the quality of the link. Additionally, had we had another antenna it would have been possible to use the Vaio to ping the brick through NSRL1.
It would have been better had the directional antenna been mounted on a tripod, thus allowing it to be aimed more accurately. Additionally, it would have been helpful to have had a pair of binoculars on hand to make sighting a little easier. In any event, we at least demonstrated that an unamplified link, using one directional and one omni, could be established at a range of 1.60 km.
The next step is to attempt to push the link a little further out, perhaps heading up to Eagle Crest. Additionally we would like to determine how well a directional will pick up an omni's signal when the omni is placed behind the directional (thus mimicking the potential setup between the lake and Cairn Peak). Another step would involve incorporating an additional brick in to the communication setup and testing a multi-hop data link.
Brick-Datalogger Communications (20080401)
It was possible today to get a Campbell CR10x streaming into Br43 using an SC929 serial cable attached to a Micro-Innovations serial-USB converter. The CR10x was running the Mendenhall met station program. When it was initially plugged into the brick, it was determined (by typing 'dmesg' at the prompt) that the CR10x was communicating with the brick over port ttyUSB0. Using 'cat ttyUSB0' it was possible to watch the data flowing in from the datalogger. The values in the data looked pretty reasonable; however, the data did not appear to be properly formatted and was spit out in long paragraphs.
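The paragraph-style output suggests the port's line settings, rather than the data, may be at fault; something along these lines would be worth trying (9600 baud 8N1 is an assumption -- match whatever the CR10x port is actually set to):

    >> stty -F /dev/ttyUSB0 9600 cs8 -parenb -cstopb
    >> cat /dev/ttyUSB0 | tr '\r' '\n'    (Campbell loggers often terminate lines with a bare CR)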
NetRS File Formatting Fun and Processing (20080331)
There seems to be something wrong with UNAVCO's Auto-GIPSY (AG) service, as it is not able to process the example file provided on their website as a sample run for new users. Marijke tested it last Friday and I attempted it again on Monday; both attempts met with the same error message:
- 'ag was unable to retrieve your file ftp site sideshow.jpl.nasa.gov not available'
Marijke has sent them an email, so we will see where that takes us.
In any regard, Auto-GIPSY will only process files that are in the RINEX format. As such, it was necessary to convert our .T00 files from the NetRS behind the Visitor's Center to that format. This ended up being a two-part process: first the files were converted to the '.DAT' format using the 'runpkr00' program, and then to RINEX using the 'dat2rin' software. Both programs were located on the CD that came with the NetRS, and the processing was done using Linux. A more detailed description of this process can be found under Logan's How To Page and in the NetRS User's Manual. The dat2rin meta-data output for each file looked like this, though obviously with different file names:
DAT2RIN: DAT-to-RINEX Conversion Utility Version 3.46
Copyright (c) Trimble Navigation Limited 1992-2002. All rights reserved.
RINEX Obs file: MGVC207.07O.07o
RINEX Nav file: MGVC207.07O.07n
- No ANTENNA.INI file found in current directory.
Error: Unable to get ANTENNA.INI data for antenna ID=-1
Error: Unable to obtain antenna information for antenna ID=-1
RINEX file creation completed.
As can be seen, each file encountered an error when attempting to retrieve some information about the antenna. Not quite sure what this is about, though hopefully it is not all that important. The 'dat2rin' process created two files, one navigation and one observation, for each input. The name of each output file was appended with either '.07o' or '.07n' for observation and navigation, respectively. As I had previously gone through and renamed the .T00 files to conform to the RINEX naming standard, it was necessary to go through by hand (because I am not so Linux savvy) and delete '.07O' from the name of each output file; I had inserted that bit into the name while making the names conform to the standard. It would be nice if we could automate this process.
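That last cleanup step, at least, is easy to automate; a bash one-liner along these lines (assuming the output files sit in the current directory) would handle the renaming:

    >> for f in *.07O.07?; do mv "$f" "${f/.07O/}"; done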
Automating the entire process would be quite nice, though would involve a few steps:
- 1. Changing the name of each file from that recorded by the NetRS (SystemNameYYYYMMDDHHmmS.ext) to the RINEX standard naming format ('ssssddd.yyt', where s = station name, d = day of year of first record, y = year, t = type of file [O = observation, N = navigation, M = meteorological data, G = GLONASS navigation file]). It seems that the trailing type characters could be omitted, as they will be added by the conversion program during the .dat --> RINEX stage.
- 2. Accessing 'runpkr00' and converting the files from '.T00' to '.dat'
- 3. Accessing 'dat2rin' and converting the files from '.dat' to RINEX
- 4. Uploading the files to SEAMONSTERAK
- 5. Generating an email to UNAVCO
- 6. Retrieving the files after receiving the processing-complete email from UNAVCO.
- 7. FTPing the files from UNAVCO
- 8. Communicating with the database data input script.
This is way beyond my scope of Python programming at this point in time, though it would be a nice thing to have set up.
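That said, steps 1-3 are within reach of a short shell script. A rough sketch (it assumes a 4-character system name in the .T00 file names, and the runpkr00/dat2rin invocations shown are assumptions -- check the NetRS CD documentation for the real flags):

    for t00 in *.T00; do
        stn=${t00:0:4}                        # assumed 4-character system name
        ymd=${t00:4:8}                        # YYYYMMDD portion of the name
        doy=$(date -d "$ymd" +%j)             # day of year (GNU date)
        yy=${ymd:2:2}
        mv "$t00" "${stn}${doy}.${yy}.T00"    # step 1: RINEX-style base name
        runpkr00 -d "${stn}${doy}.${yy}.T00"  # step 2: .T00 -> .dat (flag assumed)
        dat2rin "${stn}${doy}.${yy}.dat"      # step 3: .dat -> RINEX (defaults assumed)
    done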
Troubleshooting Brick 43 (20080326)
For some reason Brick 43 is not functioning properly. It is not possible to connect to the unit wirelessly, nor when plugged directly in using the Ethernet connection. The unit can be neither pinged nor ssh-ed. Two laptops and a number of Ethernet cables were used in the attempt to connect with the unit, however, they did not make any difference.
Acting on Rob's troubleshooting directions, the following steps have been taken to identify the problem. A few terms: SBC = Single Board Computer, SS = Solar Saver, PCS = Power Conditioning Subsystem.
Possible Power Issues
- 1. As there may be an issue with the SS not providing enough power to the SBC, owing to a possible cutoff voltage of 13.2V, the external power has been wired directly into the PCS. Prior to juicing the power, the power pin was removed from the SBC.
- 2. The external power was switched from a car battery to a 13.3V, 0.55A wall-plugged power supply.
- 3. Before plugging the power pin back into the SBC it was determined that 12.01V were running through the pin.
- 4. When the power pin was plugged back into the SBC the red and green LEDs lit up briefly (~2s) and then shut off.
- 5. An Ethernet-wired Telnet session was attempted, though communication was not established... continue with troubleshooting.
SD Card Issues
It turns out that the SD card from Br43 is the issue. This was ascertained in the following way. It was established that both the HyperTerminal-serial cable assemblage and the Ethernet Telnet sessions were properly functioning with Br47. At that point, the SD card from Br43 was inserted into Br47 and the connection processes were reattempted, though they proved unsuccessful. When attempting to connect using Telnet, Putty would almost immediately dump back out to Windows. When using HyperTerminal it was possible to watch the unit boot up to a certain point before encountering an error shortly after dealing with the SD card. The last few lines of the boot-up routine are found below:
- EXT2-fs warning: mounting unchecked fs, running e2fsck is recommended
- VFS: Mounted root (ext2 filesystem).
- Mounted devfs on /dev
- Freeing init memory: 72K
- Using sdcard.o
- sdcard0: Technologic Systems SD card controller, address 0x13000000
- sdcard0: card size 993280 sectors
- Partition check:
- sdcard0a: p1 p2 p3
- EXT2-fs warning: mounting unchecked fs, running e2fsck is recommended
When the SD card from Br47 was inserted into Br43 it was possible to log in to the unit using both the serial and Ethernet connections. Actually, that is not quite true: when attempting to log in over the serial connection, the unit would freeze up upon typing the fourth character of the login name. This occurred, however, with the 'good' SD card inserted into both bricks. Ed is going to bring an SD card reader in from home and we will attempt to determine and fix the error. Now it is time to reconnect all of the wires that were unplugged in the process of ascertaining where the error was arising... good fun.
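Given the 'running e2fsck is recommended' warnings in the boot log, the fix may be as simple as running a forced check from the card reader (the device name here is an assumption -- check dmesg after inserting the card):

    >> e2fsck -fy /dev/sdb1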
Met Sensors --> CR1000 Datalogger --> Brick
Status of Motes (20080227)
All of the motes at NSRL were examined, with the following results:
- M62, V = 2.308, unit is functioning properly.
- M14, V = 2.452. When turned on, the red LED flashes and is followed by a green flash, then a constant red flash while transmitting.
- M61, V = 1.5 - 2.5, Working properly
- M70, V = 2.995 - 3.235, LED waterfall when turned on. Unit not transmitting.
- M67 (mote box says 83), V = 3.114, No LED action upon power cycle; all LEDs flash simultaneously when the unit is reset.
- M66 (mote box says 81), V = 3.11, All LEDs flash together when unit is turned on and when reset.
- M63, V = 3.232, All LEDs flash simultaneously when unit is turned on and when reset.
- M64 (mote box says 82), V = 3.231, No LED action when power cycled or reset.
Brick43 Ready to Rumble (20080226)
Brick43 is now properly communicating with wireless network SEAMONSTER2 (192.168.1.1). The units are ready to be field deployed at any point. We need some more desiccant for the brick, but otherwise things are good.
Brick Data Recovery from NSRL Roof Mote Test (20080212)
The data aggregated from the four motes by the brick was downloaded today. It took a little blundering around to figure out how to do this, but in the end it was possible to simply use FileZilla to FTP into the unit as the 'brick' user. In all, 2355 files totaling 127 MB were downloaded.
Motes, Bricks, and Two Months of Snow (20080211)
The three motes and single brick that had been placed on the rooftop of NSRL back in Nov 2007 were removed from under ~10 cm of snow and brought back into the lab. The units were opened and examined in the comfort of NSRL. Brick 43 was found to contain perhaps 15 mL of water in the bottom of the box and small drops of water around the jacks on the side of the box. The jacks had been facing upward, so it is possible that water leaked through that way, though that cannot be definitively said. The unit was not connected to a battery and thus remained unpowered for some time. It was not possible to Telnet into the brick, nor was the brick pingable using the SEAMONSTER wireless connection. Does this mean that the bridge is offline? It was possible, however, to form a LAN and then Telnet into the unit. The last records were from Nov 26th, 2007.
The three motes were in various working order-
- M65- battery = 2.50v. Red LED flashing every 5 seconds. Antenna found lying near the mote box under the snow. Tiny amount of water present at the bottom of the mote box.
- M11- battery = 8 x 1.53v, 2 x 2.50v. No LEDs. Mote box appeared totally dry.
- M62- battery = 2.45v. Red LED flashing every 5 seconds. Mote box, which had been lodged within a mote boat, was totally dry inside, though water had amassed within the mote boat.
Mote -> Brick -> Bridge -> Wireless Network -> Laptop Success (20071107)
I am happy to report that, with a chunk of luck, it was possible to get the bridge attached to Brick43 to communicate with the 'seamonster' wireless network here at NSRL. After setting my laptop to be on the 'seamonster' network it was possible to use Putty to Telnet directly into the brick and watch as the real-time mote data was logged. The brick is set with a static IP address of 192.168.1.43 and the bridge with an address of 192.168.1.243. Additionally, the bridge is set with an 'infrastructure' network type and an SSID of 'seamonster.' It is such a relief to finally have this up and running, as we have played with it for a chunk of time now.
From here, we need to get the router on the roof of NSRL up and running, set the bridge to that SSID, and get the computer hooked into the router to log the files coming in over the wireless network. These files then need to be uploaded into the database where the digital number data from the motes can be converted into meaningful values. Ah, very soon now a mote -> brick -> database data flow will be a reality.
NSRL Rooftop Mote Deployment Power Consumption (20071106)
Three motes have now been deployed on the roof of NSRL for over a month. Each of the three motes is powered by 10 Duracell D batteries, similar to the picture below.
The battery voltage for each mote has been measured repeatedly over the past 22 days, with the following results.
- m14- V = 2.9970 - 0.0086*X (r2 = 0.9456, P < 0.0001)
- m61- V = 2.9339 - 0.0095*X (r2 = 0.9112, P < 0.0001)
- m62- V = 2.8875 - 0.0043*X (r2 = 0.9962, P < 0.0001)
- V = Battery Voltage
- X = Days running
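(As a worked example of reading these fits: at X = 22 days, the m14 equation predicts V = 2.9970 - 0.0086*22 ≈ 2.81 v.)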
What are possible explanations for the differences in slope among the three lines? Mote 14 hosts no sensors, M61 hosts four sensors, and M62 hosts three sensors and must act as a relay for M61. M61 has the steepest slope, so it is possible that recording and transmitting the greatest amount of data draws the batteries down quicker than on motes hosting fewer sensors. This explanation, however, does not hold when motes 14 and 62 are compared: M14 has a negative slope twice that of M62, despite M62 not only hosting three sensors but also acting as a relay station for data packets coming from M61. M61 does appear to have the nicest soldering job, with M14 appearing to have the shoddiest. They all have the same number of batteries, though M62 did start out with the lowest voltage, and thus it might be possible that M14 and M61 are going to drop off quicker and then plateau. I'll keep up the test for a while longer and we'll see if the slopes change with time.
--Logan 12:48, 13 November 2007 (PST): The issue with M61's voltage dropping off the quickest may result from poor connections between the batteries. When testing the voltages today I found that different batteries yielded different readings, so the batteries in M61 are not all being drained equally; I believe three different voltages were displayed depending on which batteries were tested. I went back and tested the other two motes and found that all of their batteries displayed the same voltages.
Notes on Black Box Industrial Modem FR115 from User Manual
- The master, normally based at the communication endpoint for ease of access, is set to call the slaves in its call book.
- The modems communicate at 902-928 MHz (~0.3 m wavelength) and have a range of ~20 mi.
- The use of a repeater causes a 50% decrease in system transmission capacity, though the use of a second repeater does not lead to a further decrease.
- A Point-to-Multipoint Master is a configuration where a single master communicates with N slaves.
- The Master determines the settings used for all radio transmissions.
- For point-to-point communications, the master's SN must be in the slave's call book, the slave's SN must be in the master's call book, and the master must be programmed to call the slave.
- To call a slave using a master, go to the Call Book Menu and type 'C' followed by the Entry # of the slave.
Fish Creek Communications Mock Setup @ NSRL
A mock setup of the communications equipment used for the Fish Creek MET station was created at NSRL. Two radio modems were set up, one identified as a Point-to-Point Master and the other as a Point-to-Multipoint Slave. The modem hosted at NSRL will be the master, while the one on the MET station will be the slave. The Master (serial # 571-1674) has a baud rate of 9600, an empty call book, one repeater, a network ID of 225, and a modem mode of 2. The slave (serial # 570-1171) also has a baud rate of 9600, a modem call book including (0) 571-3107 and (1) 571-1674, and a modem mode of 3.
Formal Writeup of the Managed Mote Deployment (20071008)
The formal writeup of our managed field mote deployment at Lemon Creek has been concluded and is linked to from here. Mote Writeup
Mote Calibration Round X (20071002)
A day was spent making up electroconductivity (EC) and temperature standards, which were used to calibrate the handful of temp and EC motes we manufactured here at NSRL. Four solutions were prepared with 0, 11.2, 22.4, and 33.6 g/L of NaCl, which were supposed to represent EC standards of 0, 20, 40, and 60 mS/cm. Sea water has between 30 and 40 g TDS/L, which corresponds to an EC of 62.5 mS/cm using the conversion of 1 mg TDS/L per 0.56 uS/cm.

While logging a file using Cygwin, each EC sensor was placed one at a time into each solution and allowed to record for five minutes. After each EC sensor had its turn in the sun, the temp sensors from motes 61 and 62 were plunged into an ice bath along with the DO/Temp YSI. The sensors were allowed to equilibrate for 10 minutes in each solution before a small quantity of hot water was added to raise the temperature. Four temperatures (0.5, 3.7, 4.9, and 6.4 C) were measured using the homemade sensors while a file was logging.

Following the recording of data, the files were brought into Excel, where the irrelevant columns were deleted and the hexadecimal values were converted to digital numbers (DNs). The data was time referenced so that the data from each sensor could be synchronized with the proper EC or temperature solution. Following that, the outliers were removed from the dataset and the data was graphed. The graphs were examined for times during which the sensor recorded somewhat consistent readings for each temperature or EC solution. The values over that period of relative stability were averaged for each sensor and plotted against either the EC standard values or the temperatures as recorded by the YSI probe. Regression equations were determined for each sensor.

After analyzing the data, it was apparent that M62 was not functioning properly during the experiment, as its data appeared totally jumbled and had no semblance of order. As such, only calibration equations for the two temperature and two EC sensors from M61 were determined. The calibration equations were determined to be the following-
- M61_2 T (C) = -0.0531*DN + 163.63
- M61_3 T (C) = -0.0531*DN + 162.78
- M62_1 T (C) = -0.0531*DN + 162.59
- M62_2 T (C) = -0.0531*DN + 161.85
- M70_2 T (C) = -0.0531*DN + 161.38
- M70_3 T (C) = -0.0531*DN + 161.79
Fun with Motes (i.e. Mote Programming)
Working with motes reminds me of trying to put my 4-month-old cat on a leash and take him for a walk. Every once in a while he actually walks in the direction you desire to go, while most of the time he simply sits down and then rolls onto his back when you give him a little tug. Sometimes motes work just fine and follow down the path you are trying to take them, whereas at other points in time they, for no apparent reason, just seize up and stop working. Also like my kitten, they seem to be getting better with time. Here are a number of mote commands which I have picked up during the course of the summer.
- 1. 'motelist' tells the user which computer port the mote is communicating through.
- 2. 'motecom=serial@COM#:tmote java com.moteiv.trawler.Trawler' opens the program Trawler and tells it to listen to port #. Trawler will display the network topology in graphic form. When using this command, the # should be replaced with the port # that the mote is communicating through. Additionally, the user must be in the /opt/moteiv/apps/Delta directory.
- 3. 'motecom=serial@COM#:tmote java net.tinyos.tools.Listen' opens the program Listen and tells it to listen to port #. Listen will display a text readout of the data coming in over the specified port. The user must be in the /opt/moteiv/apps/DSM directory. The incoming data can be saved to a file by appending '> filename.txt' after Listen.
- 4. 'make tmote install,#' will take a mote and reprogram it with a different mote number and program. If the # is 0, the mote will act as a base station, while any other number will allow the mote to act as a field mote. If the user is in the DSM directory, the mote will be programmed with DSM, whereas if the user is in the Delta directory, Delta will be programmed onto the unit.
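Putting items 1 and 3 together, a typical logging session looks something like this (COM4 is just an example port):

    >> motelist                        (note the reported COM port)
    >> cd /opt/moteiv/apps/DSM
    >> motecom=serial@COM4:tmote java net.tinyos.tools.Listen > roof_test.txt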
Mote --> Brick NSRL Rooftop Setup (20071002)
Two motes (M61 and M62) were deployed on the roof of NSRL, with M61 around a corner, thereby forcing it to communicate strictly with M62. Mote 14 was placed in the window to act as a relay between M62 and M0, which was plugged into Brick43 (B43). The system was set up around 15:30 and, according to B43, appears to be logging the DNs of the sensors. The EC probes of both M61 and M62 were buried under the gravel atop the roof and at this time are covered by water. M61 is under the eave of a large vent which was palpably warmer than the surrounding air. One of M61's temp probes was hung over the side of the building, while the other was buried under some gravel in the water. One temp probe from M62 was also hung off the building, while the other was simply set on top of the gravel. The units will be allowed to run for perhaps a week (?) or as long as we see fit.
The files logged to Brick 43 were examined on October 10th, 11th, and 15th. On both the 10th and 11th it appeared that all of the motes were functioning properly. When the files were examined on 10/15, it was apparent that the motes had not been communicating with B43 for quite some time. Tracking back through the files, it became apparent that the motes began having difficulties on the night of Oct 11th, as between 1800 and ~1945 there were no records logged besides those from the base station. The record from 2030 shows that no records were logged during the half-hour period, nor were records logged to the 2130, 2230, or 2330 files. Approximately 8 files between 10/11 @ 2330 and 10/15 were examined, and those also showed that no communication had taken place with any of the three motes.
Upon discovering that the motes had ceased communicating with the base station, all of the power sources were checked, the mote LED patterns noted, and the motes reset. The voltages were ascertained to be M14 = 3.02, M61 = 2.915, M62 = 2.886.
M14 displayed a solid red LED with a blue light that flashed every 3-4 seconds. The mote was powered down and initially failed to display an LED startup pattern when repowered. After toggling the power switch several times the mote began to display a pattern (3 blue --> 1 red --> 2 blue --> 1 red). When 'reset' the LEDs did the blue --> red cascade, then the red stayed on and the blue flashed occasionally. Eventually the red LED began to flash every 5 seconds, and it was observed that the brick was logging the mote's data packets during this period.
M61 displayed a blue --> red --> blue LED pattern with 2 seconds between flashes. When 'reset' the LED flashed red and then did the blue --> red cascade before reverting back to the blue --> red pattern initially observed. It was observed that the brick was logging during the blue --> red pattern.
M62 displayed a similar pattern as M61 and when reset did the cascade followed by the blue --> red pattern. After a short period the mote began to flash only the blue LED, though returned to the blue --> red pattern. During the blue --> red pattern the mote was observed to be broadcasting a signal which was recorded by the brick.
From looking at the files logged to the brick, it appears that after resetting M14, the relay mote, the communication link was reestablished and data from all the motes started being logged to the brick. The motes were all brought inside next to the brick and the base station recorded data from each of them. The motes were redeployed onto the rooftop around 1000, and the file from 10/15 @ 1000 shows that all the motes were functioning properly there. After the motes had been stationed on the roof for a few minutes, it was observed that M14 had once again seized up and was displaying a solid red light. M14 was situated on the windowsill right above a heater, and it is possible that this was causing the internal mote temperature sensor to become too warm, thereby causing the mote to freeze up. When the window was opened and the mote was allowed to cool down, the red light went off and was replaced by the red-blue pattern. As such, M14 was placed outside on the roof and the brick with M0 was moved near the window so that it had a clear line of sight to M14. After the motes had been resituated, the files being logged to B43 were rechecked and were indeed capturing data from each of the motes. The lesson learned: if the internal temperature of the mote gets too high, it will freeze up and cease to pass along data packets. The base station is now located pretty close to the heater, and thus it will need to be checked to ensure that it too does not overheat.
When the window was closed, M0 stopped logging files from the field motes as it was unable to pick up the signal through the window pane. An antenna was fitted to M0 and immediately it began to record data from the field motes. Another lesson learned: Windows will attenuate the signals produced by field motes to the point where an antenna-less base station will not be able to resolve the incoming signal. This problem, however, can be resolved simply by attaching an antenna to the base station.
Mendenhall and Lemon Photosynth (20070928)
The high resolution photosynth photographs of both Lemon and Mendenhall watersheds were placed on nsrl1 at the following site (http://nsrl1.jun.alaska.edu/~seamonster/Photosynth).
Pre-Managed Deployment NSRL Motes Sensor Test (20070920)
Purpose- To test the sensor strings attached to the three field motes which will be used during the managed field deployment.
- Three field motes (#61, 70, 83)
- 70 (Hex#= 46)- 4 x temp sensors
- 61 (Hex#= 3D)- salt and brackish EC sensors, 2 x temp sensors
- 83 (Hex#= 53)- fresh water EC sensor, 2 x temp sensors
- Mote base station (mote0)
- Laptop running Cygwin and Listen
- 2 x 1L beakers
Motes 83 and 70 stopped functioning for some unknown reason. Mote 70 came back online about half an hour later; however, Mote 83 still seems to be down. The sensor string from Mote 83 was therefore tested using Mote 61.
- 1. Prepare 2 beakers, one with cold/fresh water, the other with warm/salty water.
- 2. Open Cygwin and log to file using the Listen subprogram (file name = Abrupt1_T_EC.txt)
- 3. Place Mote 61 sensors (2 x EC, 2 x Temp) into cold beaker and wait 5 min.
- 4. Move sensors into warm beaker.
- 5. Close file.
- 6. Swap sensor string off of Mote 61 and attach sensor string from Mote 83 (EC, 2 x Temp)
- 7. Start logging new file (file name = Abrupt2_T_EC.txt)
- 8. Repeat steps 3-6.
- 9. Repeat steps 3-6 using Mote 70 (4 x temp) (file name = Abrupt3_T.txt)
Results and Conclusions
The original data is stored on the EDGE computer under the .txt file names given in the procedure above.
All of the .txt files were converted to Excel spreadsheets and the hexadecimal values were converted to digital numbers using the command [=HEX2DEC(...)]. All of the DN values were plotted and it was apparent that all of the sensors recorded the change in environmental conditions associated with the two beakers. Each dataset had a small number of abnormally low outliers. Graphs were made both with and without the outliers. The outliers were very obvious when present; for example, while most values were between 2000 and 3500, the outliers would come in around 150. The results of these tests indicate that the sensors are ready to be field tested. The sensors still need to be calibrated; however, that can be conducted post field test.
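(For reference, the hex-to-DN conversion works like =HEX2DEC("0BB8"), which returns 3000 -- squarely within the observed 2000-3500 range.)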
Mote Connectivity and Range Testing (20070726)
- Title- Mote Connectivity and Range Testing
- People- Suzie Teerlink and Logan Berner
- Purpose- Evaluate the range and connectivity of both handheld motes (Mh) and boxed motes (Mb).
Materials-
- 3 x box motes
- 2 x handheld motes
- base station mote
- laptop
- 2 x handheld radios
- 100 m tape measure
- cardboard boxes
- Computer programs- Trawler and Cygwin
Methods:
- I. Using Cygwin, the motes were programmed to run Delta rather than DSM. This was done using the 'make tmote reinstall,#' command while in the '/opt/moteiv/apps/Delta' folder, where # = the desired number of the mote.
- II. Set up a laptop with the base station mote (M0) on a concrete brick approximately 45 cm off the ground.
- III. Created a 100 m transect using the tape measure, running from the brick outward.
- IV. Using 'Trawler,' the network topology was observed while an active handheld mote (Mh) was held at approximately the same height as M0 and walked down the transect till the signal became spotty.
- V. The distance between M0 and Mh was noted both where the signal strength was strong and where it became spotty.
- VI. After moving from down by the Soboloff Annex up to the UAS courtyard, the laptop was placed on a picnic table and set to run 'Trawler.'
- VII. While monitoring Trawler, a Mb was moved away from the base station and the distances at which the signal strength was strong and spotty were recorded.
- VIII. Once the distance over which the signal remained strong had been determined for Mb, two additional box motes were added in series to the mote web. The motes were allowed to run for 7 min and the number of packets received and lost by each mote was recorded.
Mote Type | Strong Signal (m) | Spotty Signal (m)
Handheld | 60 | 85
Box | 90 | 130
Note: The strong and spotty signal distances refer to the distances between the mote and the base station.
Mote | Packets Received | Packets Lost | Percent Loss | Distance between motes (m)
Base | 99 | 0 | 0 | 0
62 | 99 | 0 | 0 | 75
61 | 65 | 35 | 35% | 93
70 | No show | No show | n/a | 75
Start Time: 15:57. End Time: 16:04. Run Time: 7 min. The motes were deployed sequentially in the order listed in the 'Mote' column.
General Notes- The motes must be programmed with Delta to utilize the 'Trawler' program; motes programmed with DSM use a program called 'Sniffer.' Six motes appear to be working: three of them in boxes with antennas, two acting as handheld motes, and the final one being used as a base station. The motes which were working were those which had previously been used. The new motes are not functioning properly and may require some type of initialization programming command before they can be used.

Test Notes- The motes appeared very susceptible to interference stemming from topographic and other obstructions. Additionally, the motes did not always daisy-chain the data back to the nearest mote, but instead sometimes chose to skip motes and transmit data to a mote that was further away.
While the motes show tremendous promise, there are some issues that need to be resolved. In particular, the following questions need to be addressed:
- I. How is the rate of packet loss influenced by the distance between Mb / Mh and M0?
- II. How do elevation differences between Mb/Mh and M0 affect the feasible range of communication?
- III. Are there commands built into DSM which would allow the user to specify which motes communicate with which others (i.e. to force a daisy-chain)?
- IV. How does rainfall influence mote communication?