Friday, December 31, 2010

Moving to Hobart - First few days

For the last few weeks I have been trudging (well, driving) across Australia from Adelaide to Hobart. Before leaving Adelaide we had a small farewell. Even though my Facebook estimate was 12, I invited other people on the side and the total headcount came to 28.
Packed like sardines while boarding the ferry, traffic jam on the pier
The next few days were spent on the road and on the ferry: Adelaide - Melbourne (and environs) - Devonport (and environs) - Hobart. We went to several national parks near Devonport, including Narawntapu along the coast, where we saw crabs swarming along the beach.

Then we had a close-up with a wombat at Cradle Mountain National Park. Finally we got to Hobart and had a "Bah Humbug Christmas" (according to the organizers) at the Waterworks park.

A few days later we watched "Wild Oats" sail into Hobart, marking the beginning of the end of the Sydney to Hobart race. More exciting summer things to do in Hobart are coming up.
Close encounter with the wombat kind, cradle mountain, in the button grass #fb
Watching wild oats come into Hobart, end of Sydney to Hobart race

Wednesday, December 8, 2010

Beagle has 3 eyes - Kinect + Beagleboard

People have hooked up the Kinect to a few embedded platforms - the Intel Atom and the Gumstix Overo have come up. It was time for the other darling of open development, the BeagleBoard, to talk to the Kinect. I have done stereo on the BeagleBoard before; the Kinect makes things a lot easier.



First came the usual step of getting libfreenect going; I ended up using the unstable branch since it includes the OpenCV bindings. The sync API is much more stable than the callbacks and has fewer frame drops. Since I don't have a spare keyboard/mouse set for the Beagle, and Synergy refused to co-operate, I did a demo frame viewer run with OpenCV rendering the frames. I had to upgrade OpenCV, ffmpeg, libusb, CMake and a bunch of other things on Angstrom, then replace the CMake OpenCV detection with pkg-config to get everything working.
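
In case it helps anyone, the viewer boils down to roughly this in Python - a sketch only, assuming the freenect Python wrapper and a recent OpenCV with Python bindings (cv2) are both importable; the depth scaling is just my choice:

import freenect
import cv2
import numpy as np

while True:
    rgb, _ = freenect.sync_get_video()    # (480, 640, 3) uint8, RGB order
    depth, _ = freenect.sync_get_depth()  # (480, 640) uint16, 11-bit values
    cv2.imshow('rgb', rgb[:, :, ::-1])    # OpenCV wants BGR, so swap
    cv2.imshow('depth', (depth >> 3).astype(np.uint8))  # 11 bits -> 8 bits
    if cv2.waitKey(10) == 27:             # Esc quits
        break
freenect.sync_stop()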

#kinect tested to work with #beagleboard and opencv #openkinect
Meanwhile there have been some major changes in the OpenKinect world, with PrimeSense releasing the OpenNI framework. This removes the Microsoft stigma from the Kinect and encourages more open source development of applications. You can grab the OpenNI code from GitHub and build the manufacturer's Kinect driver on your own platform. No stigma about a hacked Kinect (or shall we say PrimeSensor) any more either - it is officially open.

Monday, December 6, 2010

JAXB + Collada 1.4 Schema = Messy method names

Collada Reference Model
NASA WorldWind Java has recently added extensive KML support. They use neither SAX nor DOM, but a very creative solution with an XML pull parser: only the events of interest (i.e. implemented KML features) are responded to, and the corresponding objects are created. The pull based methodology is more flexible - faster than SAX and less memory hungry than DOM. One of the planned features is models, and given the interest in KML shown at the CSIRO CoP meeting, I decided to have a shot at extending the WorldWind KML support.

Collada Box in WorldWind
3D models have been around on the forums in a mega thread for a while now, but have never been truly integrated into the WorldWind core. With KML model support proposed, this might finally be on the way. I got my feet wet in Collada parsing by creating annotated Java classes from the schema using JAXB. I have to admit I made some primitive attempts at DOM + XPath parsing and quickly gave them up in favour of pure JAXB magic. Parsing a simplistic Blender cube took some effort; I have the vertices, faces and normals done, and just need to set up some materials to get it to render.

Wednesday, December 1, 2010

CMAR Community of Practice - day 2

By the 2nd day the concept of divisions in CSIRO had sunk in, and I realized that this is the CSIRO Marine and Atmospheric Research software engineering community. I am over the us-and-them mentality as well and am using the "we" pronoun more often.
Large KML GE

Large KML WW
The talks today started off with a review of XML and why it is not the solution to everything. The second talk explored the REST idiom for web services. There were a couple of talks about IMOS and TERN indexing disparate NetCDF and HDF data and allowing exploration and download of it via a portal. The inevitable scalability question came up and was left as the elephant in the room. A very detailed talk about Oracle databases and optimizations was an eye opener for me - somebody used to the relatively "it should just work" world of PostgreSQL. The last talk felt like a major reimplementation of Google Docs in a semantic space, to easily digest arbitrary data from spreadsheets - the goal is lofty, but the proof is in the pudding.

We tried to muster up some support for a visualization platform around NASA WorldWind for community-wide acceptance, and there was interest from a couple of people. After dinner the talk around visualisation reduced to (or built up to, depending on how you look at it) rendering uncertainty, both in observations and in model output data. Making graphs with regressions for my thesis, this has been a common request from my supervisor; I respond to it with error bars and regression confidence interval curves. How does this propagate to a reasonable metaphor in WorldWind without making every icon hairy with error bars? The KML support in WorldWind seems to have also encouraged people to test what can best be described as Google Earth compatibility, leading to these hairy images.

Monday, November 29, 2010

Openkinect with IR and CSIRO Cop Meeting

Kinect IR image
The IR sensor on the Kinect can now pull images illuminated by the laser. The camera needs to be initialized with a separate command to fall into IR mode, and the search is also on for streams with higher resolution than 640x480. I have made some headway towards a v4l2 wrapper for the Kinect streams using AVLD. This involves capturing data using libfreenect and writing it to the pseudo-device created by AVLD.
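
A sketch of what that bridge looks like - the device node and the raw RGB format AVLD expects are assumptions, so check what the module actually registered (e.g. in dmesg) before trusting this:

import freenect

DEVICE = '/dev/video1'  # hypothetical pseudo-device node created by avld

vdev = open(DEVICE, 'wb')
while True:
    rgb, _ = freenect.sync_get_video()  # (480, 640, 3) uint8 frame
    vdev.write(rgb.tostring())          # push one raw frame downstream
    vdev.flush()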

This week I am also at the CSIRO CoP meeting at ANU in Canberra. There will be 1.5 days of chats and some informal follow-ons. This is mostly a programming and data management community with a pile of Matlab users, Unix scripters, web-service developers and database admins. The first talk was about eResearch and the corporate infrastructure to support it. The second talk was from the legal section, covering open source and open data with terms like encumbrances, freedom to operate (FTO) and due diligence. It covered licences and caused a fair bit of discussion.

The third talk was about OffSiders, which feels like a new object based scripting language. It will require some experimenting to work out what it is good for. The persistence is non-atomic and exists as transactions as supported by the filesystem. It has a pure computer science glow and will definitely trigger some "what is it like?" buttons.

Saturday, November 27, 2010

Joys of cleaning the house - Nuking non-compliant code

The OpenKinect project is nascent, and as such the win32 platform support is still stabilizing. Due to the naughty behaviour of libusb on Windows, several attempts have been made so far to create a common API across platforms, without a simple, maintainable and working version emerging.

GLview on Win32

I decided to learn about the Kinect, coding on GitHub and merging things in general by taking Zephod's code and putting it in a win32 folder under the library source. I got most of the demo code compiling and running with this approach, and took a masterpiece depth image of my Ikea chair on Vista 64, apparently a rather difficult OS to get the Kinect working on. By then my git code tree had become cluttered with lots of merges; I was working off 3 remotes (my own fork, Zephod and mainline). With some nudging from this video, I deleted all copies of my work into oblivion. I can't control the Google cache and such, but it felt liberating.

Tuesday, November 23, 2010

Python Bindings for Accidental API - and vivi.c

I started a couple of Kinect related projects. Firstly, writing a Linux kernel driver based on v4l2, cloning vivi.c to act as a stub. The responses indicate that a gspca based approach might be better, but I had a lot of fun hand hacking the vivi.c code and adding 2 video devices from 1 driver. I will have to wait till someone puts some gspca stub code out.
Secondly, I completed a first working version (as in, it can get some image buffers) of the win32 Python binding from Zephod's code. I followed the SWIG how-tos and had to resort to typemapping to extract the image and depth buffers, but it all worked out in the end. Not quite as well as I had expected on the 1st attempt, but I can now replicate some of the early Linux work on Windows. The colours in the RGB image are flipped, so I will have to change the data order. A lot of frames are dropped, so the response is not as smooth as it could be.
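
The fix should be a one-liner once the buffer is in numpy - a sketch, with buffer_to_rgb being a hypothetical helper name:

import numpy as np

def buffer_to_rgb(buf, width=640, height=480):
    # Interpret the raw bytes from the wrapper as an image, then flip
    # the channel order (the binding appears to hand back BGR).
    frame = np.frombuffer(buf, dtype=np.uint8).reshape(height, width, 3)
    return frame[:, :, ::-1]
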
Kinect Python on Windows

Sunday, November 21, 2010

Hooking Kinect into Languages - Python and Matlab

I spent the morning wading through SWIG to hook up Zephod's win32 Kinect code into Python. After getting the interface file and CMake in place, I got an "unsupported platform type" error from the MSVC compiler.

I will also need to get some Matlab integration via MEX files in place, to allow classic academic usage and to get student projects rolling. Within my budget and time constraints I can offer a small bounty to get this done. Contact me if interested.

Also on the todo list are v4l2 integration and a gstreamer plugin. The principle of operation of these drivers is simple: allocate a buffer, fill it up with data from the Kinect and push it up the chain for further processing, instead of just displaying it as the current demos do.

rainbow depth

The new Python wrappers for libfreenect allow me the luxury of using matplotlib colourmaps to redisplay the depth as I see fit. It introduces lag, but who cares - we have depth in Python for $200.
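
The colourmap part is just a few lines of matplotlib - sketch only, 'jet' being one arbitrary choice:

import freenect
import matplotlib.pyplot as plt
import matplotlib.cm as cm

depth, _ = freenect.sync_get_depth()   # 11-bit raw disparity values
plt.imshow(depth, cmap=cm.jet)         # any matplotlib colourmap works here
plt.colorbar()
plt.show()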

Friday, November 19, 2010

Getting Data from Kinect to OpenCV

Kinect pillow
A fair bit of work has already been done on simple template based object identification with the Kinect, and even skeleton tracking. I finally got my hands on one today from JB Hi-Fi, paying a few bucks extra for it. Harvey Norman finally got them in stock at the end of the day - terrible logistics, favouritism to employees, or something in-between. Anyway, I can now get a 2nd one from the pre-order. Apparently a single USB 2.0 bus is insufficient to handle the stream from 2 Kinects. I did some simple experiments with OpenCV, just adding some hue mapping and swapping the RGB channels. Tomorrow we are having the 12Hr Hackathon at Adelaide Hackerspace - I might get to make a lantern controller then.

Tuesday, November 16, 2010

Preparing to hack the Kinect - OpenKinect on MinGW32

My Kinect is on pre-order and I can only get my hands on it tomorrow. That hasn't stopped me from hacking on the OpenKinect code in my VirtualBox Ubuntu and MinGW. There are multiple APIs for it by now, in C, Python, Java and C#. I chose the C version for easy portability and a common Windows/Linux codebase, as well as the luxury of using CMake.

Here are the steps to get the glview kinect code compiled with MinGW32:
  1. Grab the OpenKinect code from Git, using MsysGit or similar.
  2. Grab the Windows compatible libusb; I got the precompiled version to save time.
  3. You may or may not have GLUT; if you don't, grab the libraries here and use reimp to make them MinGW32 compatible.
  4. Add a win32 specific linker line into CMake which reads: target_link_libraries(glview freenect OpenGL32 glut32)
  5. Configure using CMake, point it to where libusb is, compile, and you are ready to (possibly) enjoy OpenKinect on Windows. I make no guarantees on whether it will actually work yet. If you have Windows and MinGW, try it and let me know.
No Kinect Yet

AR2C Opening Ceremony - TechTalkFest

Yesterday we had the day-long opening ceremony of the Adelaide Radar Research Centre, a self-funded collaborative research centre directed by my PhD supervisor Doug Gray. The talks ranged from highly theoretical, to commercial, to what amounted to a company ad (even though the speaker promised not to make it one).

I found the first talk the most interesting, probably because I was most attentive at the beginning of the day and it presented an irrevocable commercial success of radar research. GroundProbe discussed their mine wall stability radar and how it came into being, from concept to highly user friendly truck mountable versions.

The next talk was from Simon Haykin, dealing with the superbly abstracted Cognitive Radar. He kept drawing parallels with the visual brain, talking about layers of memory and feedback. At the end of the day one of my colleagues asked: if the Cognitive Radar is paralleled by the visual brain, then where is the transmitter? The auditory brains of bats might have been a better analogy.

In the last talk before lunch, Marco Martorella talked about ISAR and how ISAR techniques can be used to sharpen up blurred SAR images of targets under motion, such as moored ships rocking. However, the ISAR results seem to flatten the image to a given projection plane, and issues concerning 3D structure estimation came up. There seems to be no accepted solution to this yet. Maybe some of the work done in computer vision can help in this regard, particularly POSIT and friends.

The energy level dropped a bit after the lunch break. Several people from DSTO took off to get back to work, and some others came in; the mixture in the audience changed slightly. The afternoon talks covered Raytheon, the BoM and the SuperDARN system.

The Raytheon speaker talked about their huge scope in radar systems, from the tiny targeting radars in aircraft to GigaWatt beasts which require their own nuclear power plant, as well as their obsession with Gallium-Nitride and heat dissipation.

The BoM fellow talked about the weather radar network which we have all come to know and love, from its origins in World War II to the networked and highly popular public system it is today. They seem to have a projective next-hour forecast of precipitation along the lines I had imagined before, but only for distribution to air-traffic control and the like.

The last presentation, about SuperDARN, had the most animated sparkle I have seen in a radar presentation. It featured coronal discharges from the sun, interactions with the Earth's magnetic field (without which we would be pretty char-grilled) and the resulting fluctuations in the polar aurora.

Overall it was a pretty interesting and diverse day. At the end of it we were left discussing the vagaries of finishing a PhD and the similarities to Zeno's Paradox.

Wednesday, November 10, 2010

More Lollypop forests - in HFSS

HFSS Leafless Tree
Heath from our lab gave me a primer in Ansoft HFSS, and since then I have been muddling along trying to represent a forest for SAR imaging. I managed to get some of my ngPlant created models into it via STL from Blender, but beyond that HFSS was totally non-cooperative: the model refused to be healed to close any of the perceived holes. So my only option was to make some lollypop trees like before using the supplied primitives, in 3D this time. Fortunately HFSS allows quick duplication, scaling etc. of a single primitive and links the copies with transform nodes. In the future this will allow more stochastic forest generation, rather than a cookie cutter one.
hfss lollypop trees

FEM kicks in beyond this point and calculates the scattered far fields under plane wave excitation. As expected, due to the perfectly conducting ground plane most of the energy radiates away, but there is still a significant return towards the illuminating direction. The nice change from FEKO is that the fields are calculated in all directions, not just the illuminating direction, so this setup can be used for bistatic imaging scenarios as well.
ngPlant palm

Sunday, November 7, 2010

Single Axis Stability in QuadCopter

The Masters student working on the QuadCopter stability system had their final seminar last week. They have been hampered over the past year by an overly complex system design that I put together for them, by language difficulties and by lack of time due to coursework commitments. In spite of all this they managed to get a pretty good understanding of the electromechanical system and produced system models and controllers in Simulink. The basic code is now up in 3 modules on Google Code - adelaide quadcopter project.
Quadcopter is now stable in 1 axis with a pd controller
stability estimated
The final controller they ended up using was a basic Proportional-Derivative (PD) controller. An integral term leads to a tighter approach to the set point, but is not always necessary. With 4 degrees of control, the quadcopter can be stabilized in yaw, pitch, roll and height; a fully stable system should be able to hover. Linear motion in X and Y can be achieved by dropping it into an unstable state. This can be implemented as state-space control, a multidimensional extension of the PD.
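
The PD law itself is tiny. Here is a minimal single-axis sketch with made-up gains and stubbed sensor/actuator calls, not the students' actual values:

import time

def read_pitch():
    return 0.0      # stub: would read the pitch angle (radians) off the IMU

def set_motor_differential(cmd):
    pass            # stub: would speed up/slow down the opposing motor pair

KP, KD = 0.8, 0.25  # hypothetical gains
SETPOINT, DT = 0.0, 0.02
last_error = 0.0
for _ in range(1000):
    error = SETPOINT - read_pitch()
    command = KP * error + KD * (error - last_error) / DT  # no integral term
    set_motor_differential(command)
    last_error = error
    time.sleep(DT)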

The single axis stability as it comes out here is not too bad, though it could be better. There is an unstable region with +/- 18 degrees of oscillation. At times, given the low damping, the oscillations tend to build up.

Wednesday, October 27, 2010

HgSubversion with Ossim-opencv

I have come to really like the flexibility of making changes and saving them locally with Mercurial, before pushing them off to an online repository. It lets me work on a project independently, saving my work while staying up to date with core progress. Ossim-opencv is one of the projects I made some minor contributions to a while back. I wanted to get back into it, but following the incident of losing my last Google account I no longer had commit access. So I checked out the SVN with HgSubversion and started making changes and committing them locally. Finally the organizer switched the project over to my new ID and I tried a push. Things fell apart at this point. I had committed things with my local username, and I usually use my Google Code authentication to push, since these are separate things. In Subversion, however, these need to be the same, and I was stuck with my changes. Happily, Mercurial lets me shelve, make patches, push to another Mercurial repository (sandbox) and do a lot of fancy things. I decided to take the convert route and changed the name of the committer to match my ID.
opencv ossim plugin

After that I pushed it back into SVN, validating the results as I went, of course. Here is the plugin loaded into my latest compile of ImageLinker, showing an OpenCV based Laplacian filter.

Thursday, October 14, 2010

"Lollypop Forest" in Meep

I finally made some semi-realistic (well, only very grossly so) forests in MEEP, with dielectric cylinders as tree trunks topped by dielectric spheres as canopy. Then I bunched a few of them together to make a forest of 5 trees. I could have made much more complex structures, but my knowledge of MEEP primitives and of how to generate more of them automatically in Scheme is lacking. The attempts at replicating the results with python-meep haven't gone so well, so I have to stick with Scheme-MEEP for now. Anyway, even with the simplistic forest the wave propagation quickly becomes rather complex and difficult to handle analytically without gross assumptions. Yay for numerical methods.

Lollypop Forest Start
Other small discoveries this week include the pagesel package in LaTeX, for sending each chapter off to review (of course after you have done the whole document and know where the pages go), and the apparently very dated model selection procedures in R. Dated though they are, they gave me a model with decent regression statistics - now to send off that regression-heavy chapter for review.
Lollypop Forest End

Wednesday, October 6, 2010

Processing Lidar with MPI - Liblas + MPICH2

At my last company I did some work on processing and reclassifying lidar points using Liblas. The first Python implementation had lots of point-in-polygon queries, and even with prepared geometries in GEOS it took a few hours. We improved on this by rasterizing the polygons: while polygons in theory have arbitrary precision and rasters are limited to the spacing they are gridded at, in this case the polygons were derived by classifying a rasterized version of the lidar, so the raster mask has the same degree of precision.
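
For the record, the prepared-geometry pattern looks roughly like this with Shapely on top of GEOS - a sketch, with the polygon loading elided and class_polygons a placeholder:

from shapely.geometry import Point
from shapely.prepared import prep

class_polygons = []  # placeholder: (class_id, shapely polygon) pairs

prepared = [(cls, prep(poly)) for cls, poly in class_polygons]

def classify(x, y):
    pt = Point(x, y)
    for cls, ppoly in prepared:
        if ppoly.contains(pt):  # the prepared test is the fast path
            return cls
    return None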

The mask is overly generous with classes at times, considering each class as a local surface. A piecewise interpolation model can often be used to identify disparate points in context and perform class switching. In unordered lidar data this may require many passes through the data to generate neighbourhood class relationships, if an index (e.g. an octree) has not been built.

I decided to adopt the Google philosophy that scale solves everything, and implemented the lidar processing using MPI (specifically MPICH2). The solution is implemented with task queues:
  1. Read all data on all nodes except the last.
  2. Node 0 makes a pass through all the data, sending each point round-robin to the intermediate nodes.
  3. Each intermediate node receives a point and makes another pass through all the data, identifying nearby points and using them in an IDW2 estimate.
  4. Intermediate nodes send the IDW2 estimate to the last node in the set (in charge of serializing), which makes various comparisons against classification masks (e.g. the SRTM water boundary mask) and geo-statistical surface consistency checks before reclassifying and serializing to an output LAS file.
The task is essentially O(n^2), but with sufficient nodes we can bring it down to O(n*n/M + M*overheads), where M is the number of nodes. If the dataset is large enough and M is suitably chosen, we gain significant speed increases. I got so carried away with lidar + MPI (and since I don't want to mix it with my PhD stuff) that I started a small code dump on Google Code. Use CMake and see if you can build it. There is lots of sample lidar here.
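
The shape of the pipeline, sketched with mpi4py rather than the C I actually used, with the reader, IDW2 and LAS writing stubbed out (assumes at least 3 ranks):

from mpi4py import MPI

def read_all_points():            # stub: would pull points in via liblas
    return [(0.0, 0.0, 0.0)]

def idw2_estimate(point, cloud):  # stub: inverse distance squared estimate
    return point[2]

def write_reclassified(point, estimate):  # stub: LAS serialization
    pass

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
WRITER = size - 1

if rank == 0:
    for i, point in enumerate(read_all_points()):   # feeder node
        comm.send(point, dest=1 + i % (WRITER - 1))
    for dest in range(1, WRITER):
        comm.send(None, dest=dest)                  # shutdown markers
elif rank < WRITER:
    cloud = read_all_points()                       # worker nodes
    while True:
        point = comm.recv(source=0)
        if point is None:
            break
        comm.send((point, idw2_estimate(point, cloud)), dest=WRITER)
    comm.send(None, dest=WRITER)
else:
    done = 0                                        # serializing node
    while done < WRITER - 1:
        msg = comm.recv(source=MPI.ANY_SOURCE)
        if msg is None:
            done += 1
        else:
            write_reclassified(*msg)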

Wednesday, September 29, 2010

Testing a new modem with BeagleBoard - DIGI Net Mobil

I broke my old Prolink modem by walking around while it was plugged in and twisting the USB plug. I got a newer 7.2Mb/s capable modem from Deal Extreme. Plugged into the BeagleBoard it shows up as a CD drive as expected, and I ejected it to get the ttyUSBx devices. Unfortunately, on the first try none of them wanted to respond to AT commands; it only started working after I installed the driver on Windows and connected it up there. A throughput test shows that it is working slightly better than the 3.6Mb/s the Prolink promises.

The usual diagnostics on Linux show:
lsusb - 05c6:0015 Qualcomm, Inc.

The ATQ command works and dialing proceeds, but then I get stuck in an Unsolicited Message Block ("+ZUSIMR:2") using the ModemManager/NetworkManager combo.
#beagleboard making friends with new 3G modem, first outing
ATI gives: 
Manufacturer: BMC INCORPORATED
Model: HD360
Revision: W1MV1.0.0B20100128 W1MV1.0.0B20100128 1  [Sep 4 2008 12:00:00]
IMEI:
+GCAP: +CGSM,+DS,+ES

Not much, but at least a start for digging.

Tuesday, September 28, 2010

Fusing channels - an array of cheap cameras

I decided to do another experiment with my multitude of USB webcams and GStreamer. This time I am capturing 2 shots using the stereo config I used before, but with an infrared filter with an 850nm pass-band over one of the cameras. The rig looks nothing spectacular, but costs only $40 net.
Testing out disparate info fusion using 2 cameras and 1 with an ir filter.

There is some parallax, and neither the camera nor the mount is calibrated; registration of the multi-band images is purely by trial and error at this stage. I have written it up with matplotlib, numpy and PIL to allow more automated transform estimation later.

InfraRed 850nm Image
The overlapped section is cropped using array subsetting, and the channels are reassigned as InfraRed, Red, Green instead of plain RGB to form a False Colour Composite. Some of the misaligned sections can be seen highlighted. I also have 950nm and 750nm filters to test more detail in the spectral response. The compositing will obviously become harder and harder as a manual process until I build a stable mount, but I kind of like the extensibility of clip-on mounting.
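
The compositing step itself is just numpy indexing - a sketch with made-up filenames and offsets (every shot needs its own trial-and-error values with this rig):

import numpy as np
from PIL import Image

ir = np.asarray(Image.open('ir_850nm.jpg').convert('L'))  # assumed filename
rgb = np.asarray(Image.open('rgb.jpg'))                   # assumed filename

dx, dy = 23, 11   # hand-tuned parallax offsets, illustrative only
h, w = ir.shape[0] - dy, ir.shape[1] - dx
fcc = np.dstack((ir[dy:, dx:],    # IR into the red channel
                 rgb[:h, :w, 0],  # red into green
                 rgb[:h, :w, 1])) # green into blue
Image.fromarray(fcc).save('composite.png')
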
RGB Image
The solution is to use the well developed feature based image matching techniques and let feature descriptors like SURF, SIFT or Harris corners pick up the correspondences I picked out by hand. The spectral variation provides an additional challenge.
Composite Image
I finish off with the composite built so far. Low cost systems like this can be used for environmental studies where lugging large cameras and spectrometers around rough terrain is not pleasant.

Monday, September 20, 2010

Processing TerraSAR-X in Python and R

Finally, some 12GB of TerraSAR-X quad-pol data finished downloading. I have essentially 3 PolInSAR capable sets (if the notorious X-band coherence holds up). I can process them in RAT, but that requires converting them to floating point and more bloat; the TSX data comes in CInt32 form. The solution was to use GDAL's TerraSAR-X COSAR support and read subsets directly into Python. I took this opportunity to do some nice unit testing in Python. The result was the simple monstrosity here, which extends the tsx_dual class I had implemented before.
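
The subset reading comes down to a few GDAL calls - the filename is a placeholder and the band/window values are illustrative only:

from osgeo import gdal

ds = gdal.Open('IMAGE_HH_SRA.cos')   # placeholder COSAR filename
band = ds.GetRasterBand(1)           # CInt32 complex samples
# ReadAsArray(xoff, yoff, xsize, ysize) pulls just the window we need,
# so nothing gets converted or bloated up front
subset = band.ReadAsArray(1024, 1024, 512, 512)
print(subset.shape)                  # complex-valued numpy array
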
R von Mises

Then I moved on to some phase difference statistics and encountered, for the n-th time, the nicely non-Gaussian von Mises distribution. Trying to use R's circular statistics from Python became the bane of my existence tonight. The current RPy2 code is unsupported on Windows, so I am lacking the package import. At least I managed to hack it into compiling with the following tricks:
  1. Copied Rinterface.h from the R source distro
  2. Hacked Rinterface.h to remove the uintptr_t typedef
  3. Hacked na_values.c to remove dynamic allocations (compile time non-constants)
  4. Copied the R DLLs from "bin" to "lib" in the R install
This makes rpy2 build and install, but at runtime I am left with - Assertion failed: PyType_IsSubtype(type, &PyLong_Type), file rpy\rinterface\/na_values.c, line 166 - obviously due to my hacks. I am missing this code block in my hand hacked version:

/* on some platforms these are not compile-time constants, so we must fill them at runtime */
+  NAInteger_Type.tp_base = &PyInt_Type;
+  NALogical_Type.tp_base = &PyInt_Type;
+  NAReal_Type.tp_base = &PyFloat_Type;
+  NACharacter_Type.tp_base = &PyString_Type;


Now to figure out where to put it, and then von Mises will be available in Python.
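
If RPy2 stays broken, scipy carries a von Mises distribution too - a different route than the R circular statistics above, but enough for sampling and densities:

import numpy as np
from scipy.stats import vonmises

kappa = 2.0                               # concentration, illustrative value
samples = vonmises.rvs(kappa, size=1000)  # draws on (-pi, pi]
grid = np.linspace(-np.pi, np.pi, 361)
density = vonmises.pdf(grid, kappa)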

PS: It is "Talk like a Pirate Day", so "R - ARRR in".

Monday, September 6, 2010

Waving around in MEEP - Scheme and Python

crops_2d
I finally get to learn some proper functional programming with MEEP and its Scheme based interface. MEEP is an FDTD (Finite Difference Time Domain) simulator for electromagnetic waves, something I am trying to shoehorn into vegetation simulation. Thankfully there are prebuilt cylinder primitives I can use to generate stems. So far I have managed to put them along the Z and X axes, but not the Y axis, which is the one of interest to me. The ring demo runs fine with some modifications and produces some nice and weird looking waves. Scheme is pretty easy to get around as a scripting interface, in spite of some Polish math notation ( (+ (+ (+ air water) earth) fire) = hooray !!)


I also got the Python bindings up and running. Time to see if my Python generated plant geometries and soil surfaces can be plugged straight into a 2D MEEP run. MEEP supports cylinder, ellipse, cone and block primitives, so making plants will be fairly easy; maybe I can make the soil surface with dielectric blocks of random size.

Sunday, September 5, 2010

Building a PID controller for the Quadcopter

My students have finally taken some inspiration from the mechanical engineering folks and started on building a PID controller for the quadcopter, using Simulink blocks and various experiments to measure the angular acceleration produced by the motors. They are still far from having a rigid-body simulation of the system, but apparently the PID controller in Simulink can be stabilized. Another challenge will be to port the PID controller to Python; building a rigid-body simulation can be attempted with PyODE.
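
PyODE makes the rigid-body end compact. Here is a starting-point sketch with placeholder mass properties and a trivial damping controller standing in for the real PID:

import ode

def controller(angular_vel):
    return -0.5 * angular_vel[0]   # stand-in: the ported PID goes here

world = ode.World()
world.setGravity((0.0, -9.81, 0.0))

body = ode.Body(world)
mass = ode.Mass()
mass.setBox(550.0, 0.5, 0.1, 0.5)  # guessed density and craft extents
body.setMass(mass)

DT = 0.02
for _ in range(500):
    body.addTorque((controller(body.getAngularVel()), 0.0, 0.0))
    world.step(DT)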

Quadcopter System Diagram
They had also been asking for a block diagram and/or circuit diagram, so I obliged with an Inkscape block diagram; circuit diagrams are best left to the manufacturing professionals of Arduino and Gumstix.

Quadcopter test rig - not so fancy and brick based, but at least it works
The measurement of angle with time shows non-constant angular acceleration, but the curves can still be fitted with a second order polynomial. The angular rate curves look even more quadratic, indicating acceleration that increases with time.

Quadcopter Magnetometer Y
Extracting the acceleration from this should be pretty straightforward, as long as we fit the polynomial to a relatively safe part of the dataset. I think a fair number of repetitions and logs of the same experiment will be needed to smooth out the wiggles.
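
Pulling the acceleration out of a log is a numpy one-liner once the angle-versus-time series is loaded (the log filename and column layout are assumptions):

import numpy as np

# columns assumed: time (s), angle (rad)
t, theta = np.loadtxt('angle_log.csv', delimiter=',', unpack=True)
a2, a1, a0 = np.polyfit(t, theta, 2)  # theta ~ a2*t^2 + a1*t + a0
alpha = 2.0 * a2                      # angular acceleration, rad/s^2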

Tuesday, August 31, 2010

Wind profiling with Radars - ARRC

We had the inaugural lecture for the ARRC - Adelaide Radar Research Centre. It is trying to create collaboration between the Physics department and the Electronics Engineering department to make - well, more radars of all sorts. The physics department at Adelaide has had several RF greats, including the famous Braggs. Prof. Iain Reid, the Executive Director of ATRAD, showed off the long history of wind profiling arrays at Buckland Park and other sites. They seem to have a huge number of installations at Chinese launch sites to support manned launches of the Long March.

Here are some interesting tidbits:

1) Full atmosphere coupled models work better for weather system modelling than lower-atmosphere-only models.
2) The atmosphere has Inertial, Buoyant and Viscous layers which influence its observability by VHF radar.
3) The region not observable by radar can be observed using lidar at multiple frequencies; there is a sodium band in the upper atmosphere which can be used for measurements. UV lasers have the best response for most of the atmosphere.
4) Networked radars are the big new thing thanks to the popularity of the BOM site (following on from the last post), driving the development of more wind profilers so that the atmospheric radars can be dedicated to precipitation observation.
5) Big radars are hard to build due to power constraints in remote areas like the outback and Antarctica, so they are being forced to become more economical. PANSY, built by Japanese universities in Antarctica, may include solar power or something similar - or a forest of bonsai trees with electrical generation units and antennas plugged into them.

On that happy note we finished the talk on watching winds with electromagnetic waves.

Wednesday, August 25, 2010

Time to Rain with BOM Radar

The BOM radar loop is very handy no matter what you are planning: a bike ride, a dash to the bus stop or a picnic with your girlfriend. The images are nice and you can use your good old brain to work out the windows of rain free time. I would like to convert the rain from spatial views to time slice views at my location, and forward shift them to work out when the rain will hit me, just so that I can time that 10 min slot to go and get some lunch.

This will involve working out the velocity of the rain band and its current location to estimate the time of arrival. The loop provides 4 or more images; these can be used to estimate the direction of travel of the rain band, together with the reported wind velocities.

You can get individual radar frames over Adelaide with the following format:
  • URL Base: http://www.bom.gov.au/radar/IDR642.T [x.T where x is the resolution level - 2,3,4]
  • Date code: YYYYMMDD in UTC
  • Time code: HHMM in UTC 
Only the last 4 frames can be directly obtained from the server. Not to worry, I am sure someone is running a cron job somewhere and backing them up. Once you have sufficient frames to establish a time series, the fun part begins. Here is a Python script to fetch the last 4 frames over Adelaide; be conservative and run it every 5 minutes (just to satisfy Nyquist) to fetch the latest frame.
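
Something like the following - the exact frame naming on the server is an assumption pieced together from the URL base, date and time codes above, as is the 10 minute cadence:

import urllib2
from datetime import datetime, timedelta

BASE = 'http://www.bom.gov.au/radar/'
now = datetime.utcnow()
now -= timedelta(minutes=now.minute % 10,
                 seconds=now.second, microseconds=now.microsecond)

for i in range(4):                        # the last 4 frames the server keeps
    stamp = (now - timedelta(minutes=10 * i)).strftime('%Y%m%d%H%M')
    name = 'IDR642.T.%s.png' % stamp      # assumed naming pattern
    open(name, 'wb').write(urllib2.urlopen(BASE + name).read())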

The rainfall is colour coded, so we will need to perform a look-up to convert from colour to rain density. Next, a quick FFT based convolution over the 4 images estimates the x and y shifts; you can also use a shift-and-difference approach, minimizing the difference, to establish the rain velocity. Statistically test the results to establish constant velocity; if the velocity is not fairly constant, our prediction accuracy falls off. Then the time machine part: propagate the last image through the shift field and estimate the rain intensity at the location of interest for the next 30 minutes to 1 hour, or whatever the horizon of the radar and the rain velocity let you do.
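
The shift estimate via the correlation theorem is short in numpy - the peak of the FFT-based cross-correlation between two frames gives the displacement:

import numpy as np

def estimate_shift(frame_a, frame_b):
    # Cross-correlate via FFT; the peak location is the (dy, dx) shift
    corr = np.fft.ifft2(np.fft.fft2(frame_a) * np.conj(np.fft.fft2(frame_b)))
    dy, dx = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    # Fold shifts past the midpoint back to negative displacements
    if dy > frame_a.shape[0] // 2:
        dy -= frame_a.shape[0]
    if dx > frame_a.shape[1] // 2:
        dx -= frame_a.shape[1]
    return dx, dy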

Obviously all this could be done much more easily from within the BOM, since they have the data at hand, but it is much more fun mashing a separate application together.

Wednesday, August 18, 2010

MPICH2 - Joys of windows MPI

LAM/MPI is showing its age on our Ubuntu cluster. LAMD dies at times, and we always seem to have more Windows machines than Linux PXE-boot boxes. MPICH2 may allow heterogeneous clusters. Here are runtimes with different communication protocols on MPICH2 and LAM, using the trivial prime tutorial.
forest image

MPICH2 : [FAQ]
sock: 4.562287
nemesis: 0.403781
ssm: 0.641682
mt: 4.793552
default: 4.489148
auto: 4.200369


LAM/MPI : [FAQ]
usysv: 0.260094
sysv: 1.899345
tcp: 1.965503
crtcp: 2.089337

The test app was built with CMake and its very handy MPI detection, with options for the MPI choice.

On the other hand, the CMake setup for PolSARProSim is progressing very quickly thanks to the experience with QGIS and OTB, and lots of advice from Norman and Emmanuel. Here is what the automated testing says:

Test project C:/Program Files (x86)/PolSARpro_v4.1.5/Soft/build
    Start 1: compareOutput
1/2 Test #1: compareOutput ....................   Passed    0.07 sec
    Start 2: runSim
2/2 Test #2: runSim ...........................   Passed  3919.94 sec

100% tests passed, 0 tests failed out of 2
Total Test time (real) = 3920.05 sec

Now I need to try some MPI to speed up the simulation time a bit, and put the simulation run and the comparison of its output against a standard in the proper order.

Wednesday, August 11, 2010

Writing PhD thesis and living in a hole

I have finally moved house to crummy student digs to go with the PhD writing mood. Most of my time is spent with WinEdt and LaTeX syntax. I might think of switching allegiance to LyX since Bibus has added LyX support, though I wonder if it will mangle my TeX files into a LyX specific format.

I have also been doing some TerraSAR-X quad-pol data analysis, comparing the decomposition results to the dual-pol case. There is a very nice overlap, but some things seem more coherent when the cross-pol scattering is not taken into account - in my case, tree trunks standing in water.


I did manage to sneak in some time for open source and check out the great work Manuel is doing to incorporate Monteverdi support into QGIS. Monteverdi can pull the layers from the QGIS open files list and process them; the results currently cannot be passed directly from Monteverdi back into QGIS. This issue could be solved with a custom otbprovider which pushes Monteverdi streams into the QGIS view by producing an appropriate QImage.

We have also been wondering whether an application is running inside or outside a NAT, and what the best way to check this would be. Say, ping an external server outside the NAT and see which source address the request came from: if the source is one particular IP (we have a static IP) then we are inside the NAT and need to use internal IPs to connect to certain servers; otherwise we are outside and the public IP is fine.
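
One cheap way to sketch the test in Python: a UDP connect() to any outside host reveals which local address the OS would route from, and comparing that to the known static IP tells us which side of the NAT we are on (the addresses here are placeholders):

import socket

PUBLIC_STATIC_IP = '203.0.113.7'  # our static public IP (placeholder)

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect(('192.0.2.1', 53))      # no packet is actually sent for UDP
local_ip = s.getsockname()[0]     # address the OS would route from
s.close()

print('inside NAT' if local_ip != PUBLIC_STATIC_IP else 'outside NAT')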

Friday, July 30, 2010

Getting the code out - from Academic to Open Source

I am at the IGARSS conference in Waikiki, Hawaii. It has been a busy few days. I am all done with my presentations today; 2 posters side-by-side are rather easy to handle. The last presentation I attended today was from Eric Pottier on PolSARpro - a very brave attempt to bring academic research in polarimetric SAR processing to the non-specialist user community. He mentioned that new routines and improvements get incorporated into the platform every week, so I naturally asked about the release schedule and possible version control access. This is where the academic heritage showed up: the project, in spite of its research brilliance, does not have a hard working software engineer to smooth the getting-it-out-there process. Collation of the work from the various developers and building the final release is, it seems, done manually.

The folks at Rennes 1 should take a leaf out of the OTB (or even RAT) book and make collaboration easier. In the recent release the simulator that I frequently use is broken, and I can't report a bug or track the changes that broke it - version control, please.

PS: I created a version control repository for myself, just for PolSARProSim.

Wednesday, July 28, 2010

Writing Thesis - The easy way and the hard way

I wrote my PhD proposal, some 40 pages with nearly 70 references, in OpenOffice. Outline numbering and Bibus based referencing worked out quite well for me. There were some tricky times with equations and such, but I managed to keep it all together in the end.

Now I am facing the much larger task of writing a few hundred pages. So I started at the obvious point: adding pages to my proposal. Ideally the proposal should form the first, background chapter, before I delve into my core research. After 15 pages or so my outline numbering began to fall apart and I decided to look elsewhere for a more reliable solution.

So far MiKTeX with WinEdt has been doing the job remarkably well. The bibliography comes in from Bibus, as the management tool, in BibTeX format. Terms can be managed using the glossary tools - which I haven't managed to persuade into action so far. Google Docs can be repurposed as an online sandbox for the thesis with LatexLabs. Python code listings can be included using this trick. Overall, all the pieces are in place to write it; now I just need the words and pictures.

Sunday, July 11, 2010

Compiling OTB with CUDA - on Windows 7

I discussed with Emmanuel some OTB-CUDA experiments that had been done. The experiments tell the age old story of the bandwidth versus speed pay-off: if your problem is too simple, the performance gain from CUDA is not going to be much. They did one of the first tests with specifically remote sensing type computations of various complexity. Amdahl's law is fairly intuitive, and even the embarrassingly parallel problems that we encounter in raster processing cannot break the curse of the sequential segments.
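
For reference, with a fraction p of the work parallelisable over N processors, Amdahl's law caps the speedup at:

S(N) = \frac{1}{(1 - p) + p/N}

so even as N grows without bound, the speedup saturates at 1/(1-p).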

In GPGPU computing one of the core tweaks is the kernel size and computational complexity. You don't want to be performing CPU-to-GPU transfers all the time just for the GPU to perform a very simple operation, no matter how fast; on the other hand, GPUs are rather constrained in the complexity of operations they can carry out and will always need assistance from the CPU in loading data from the disk etc.

The OTB GPGPU experiment had not been extensively tested on Windows, so I decided to compile it with the latest offering from Microsoft - MSVC Express 2010. As a penalty for being on the bleeding edge, Nvidia CUDA and OpenCL don't work with 2010, so I had to time warp back to 2008. Even then the OTB-CUDA code needs some porting due to missing timing code; maybe the sample here will provide some inspiration.