Wednesday, September 30, 2009

Drooling on the Arm

ARM-based boards are becoming more and more powerful, complete with nifty peripherals like GPUs, GPS and accelerometers, making them perfect for building autonomous platform brains on a budget.


-- Post From My iPhone

Saturday, September 26, 2009

Playing god - messing with Terrain in WorldWind Java

I wish I could say I did this, but it is clearly in Italian. Very nice to see what can be done with the clean framework WorldWind provides. This video demonstrates modifications to the mesh using a game-sandbox-like set of tools; that feature really should make it into the examples provided with the core library.

Wrangling with Curly Monkeys

I had some files delivered to me via FTP. Sounds simple enough - not really: the FTP server is apparently FTPS. Oh great, they are careful about security. Fire up FileZilla and try to download using FTPS. The nice feature in FileZilla is that you can see the server interaction. The FTPS RFCs define implicit and explicit FTPS, and the implicit version comes first in the FileZilla options. The server threw me out with an EABORTCONNECT on port 990 (the implicit FTPS port). More self-education and I realized that implicit FTPS is deprecated; explicit FTPS runs on port 21 using TLS. So that worked out fine until the server timed out the connection - it was not really fast to start with (6.8 kB/s). So I resorted to using curl; for future reference the command below can handle FTPS/TLS.

curl --ftp-ssl -# -1 --insecure ftp://user:pass@server/file -o outputfile
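
The same explicit-FTPS download can be scripted from a recent Python with ftplib's FTP_TLS; a minimal sketch, where the server name, credentials and file name are placeholders standing in for the real ones:

from ftplib import FTP_TLS

ftps = FTP_TLS("server")          # hypothetical host, connects on the normal port 21
ftps.login("user", "pass")        # explicit FTPS: AUTH TLS before authentication
ftps.prot_p()                     # protect the data channel with TLS as well
with open("outputfile", "wb") as f:
    ftps.retrbinary("RETR file", f.write)
ftps.quit()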

So that was the curly bit; now let's move on to the monkey bit. I have an ATI Radeon HD 3670 card in the laptop, so I would rather use AMD/ATI graphics tools, namely RenderMonkey. I did some simple brightness/contrast shaders in it, looking to implement them in JOGL. That didn't quite work out - so more wrangling with monkeys is needed.
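
The operation those shaders perform is simple enough to prototype on the CPU first; a rough NumPy equivalent (the parameter names and value ranges are my own assumptions, not the RenderMonkey code) looks like this:

import numpy as np

def brightness_contrast(img, brightness=0.0, contrast=1.0):
    """img with values in [0, 1]; scale around mid-grey for contrast, then shift for brightness."""
    out = (img - 0.5) * contrast + 0.5 + brightness
    return np.clip(out, 0.0, 1.0)   # keep the result in the displayable range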

Thursday, September 24, 2009

Giving answers - messing around at Stack Overflow

I got onto SO a couple of days ago. Since then I have learnt an incredible amount of Java and Python via the quickfire question-answer system. There are interesting community bits too. It is almost like an MMO in design; the only monsters to hunt are niggling technical problems. Here is the joke of the board, in my opinion, and a solution to an important user satisfaction issue - load times.
Load times and data streaming are what have made the load-pages-bit-by-bit movement in new internet design so popular. Good design makes the portions of content delivered to the user, and the input from the user, atomic, allowing partial refreshes. People expect instant responses (within 1/16 of a second), and a lot of other factors come into play while delivering this:
  1. Server processing capacity
  2. Network bandwidth
  3. Client machine rendering capacity
  4. Concurrent users
  5. Any other bottleneck you can think about
Optimizing the user experience is what good products are built upon.

Tuesday, September 22, 2009

Economics of Chip Making - cut throat market

There is a very old story I was told once about three chips that could have been - the Zilog Z80, the Motorola 68000 and the prematurely born Intel 8086. The early release, familiar instruction set and price gave the 8086 such a boost that we still have it around.
Given all the publicity about the AMD - Intel rivalry, it is interesting to follow what works in this arena. AMD had to split up design and manufacture, and Intel might be following suit. Motorola spun off Freescale a long, long time ago.
The problem then, as now, is the familiarity and usability of the instruction sets and the chips - the efficiency and transparency with which silicon is exposed to the neurons. I researched some more on the new instruction set on the block, the GPGPU. I wish I could tag-cloud the projects using NVIDIA CUDA vs those using ATI Stream, but here is the googlebattle result: CUDA is used nearly twice as much as ATI Stream.
There are some heavyweights putting their weight behind CUDA: the very hot LAPACK creators making MAGMA, with blessings from MathWorks and Microsoft; Microsoft itself making a GPU computing extension in Windows 7/DX11; not to mention GPULib from Tech-X, which has led to some very interesting experiments in remote sensing image processing. The core of all this comes down to my laptop having only an ATI Radeon and no decent GPGPU capability to play with - best of luck to AMD getting its house in order.

Wednesday, September 16, 2009

Riding around in the rain

Hobart has some pretty good bike tracks, if you can handle them in the wet on an unfamiliar ride. Went up a hill mostly on foot because I could not get the gears figured out - the gears and I haven't been friends this week; same problem in the rented Lancer with its semi-automatic gears, which automatically switch down but not up.

The downhill bits were fun where I had the guts to ride them and not slalom through trees. Had to scrape some concrete on the way back as the brakes would not shed enough speed - hence a grazed elbow and knee, nothing serious.

Need to whip around the hills a bit more when I get back.


-- Post From My iPhone

Saturday, September 12, 2009

Maintaining the fantasy - while the real people sleep

I was just reading about the Warhammer Online server architecture and hosting system in the Intel zine. A fascinating description of what is essentially an AI simulation system with global deployment. The parallelization and maintenance systems are also interesting: WAR uses application-level parallelization and inter-process communication over TCP, forking processes to deal with keep sieges while trying to minimize instancing. The update system uses the hardware-level access provided by blades; I am sure I would be scared if I were instructing the BIOS to eject and load a new CD and install a full OS off it.
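
To make the pattern concrete, here is a toy sketch (definitely not the WAR code) of spawning a worker process for a heavy event and talking to it over a local TCP socket; all names and messages are made up for illustration:

import socket
from multiprocessing import Process

def siege_worker(port):
    # Hypothetical worker: connects back to the zone process and reports its state.
    s = socket.create_connection(("127.0.0.1", port))
    s.sendall(b"siege started\n")
    s.close()

if __name__ == "__main__":
    listener = socket.socket()
    listener.bind(("127.0.0.1", 0))      # pick any free port
    listener.listen(1)
    port = listener.getsockname()[1]

    Process(target=siege_worker, args=(port,)).start()   # "fork" the siege handler
    conn, _ = listener.accept()
    print(conn.recv(1024).decode().strip())              # zone process sees: siege started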

Luckily WAR is not so heavily populated and you can have some great large battles. The quality of the players is decent and the transition between PvE and RvR is smooth.

I wish similar efforts were put into dealing with real-world problems and modelling them while we slept.

Friday, September 11, 2009

Right brain and left brain - CPU and GPGPU

In the good old days the hallmark of a scientific workstation was a math coprocessor; today we have supporting processors everywhere - in the GPU, the network card, the sound card. The purpose of the CPU is to be a jack of all trades, manage all the crazy hardware and execute instructions. CPUs excel at branch prediction and multi-tasking, whereas the GPGPU does floating point arithmetic faster and a lot of it in parallel - just what you need to apply filters to raster data, perform finite element analysis or fit an isosurface to a point cloud using local data. Loading instructions into the GPU has traditionally been the domain of the graphics programmer. The major graphics vendors, NVIDIA and AMD/ATI, are now locking horns on making this just as easily accessible to the CPU user.


The two companies have competing technologies - CUDA and the Stream SDK. The divide between the camps makes the coder's life difficult when using the GPGPU while supporting all platforms. There are people scratching this itch by making APIs to do data-parallel tasks independent of the enabling technology; most are still in their infancy - I wish all the best to Pygwa. AMD is well positioned to eventually integrate the GPGPU architecture from ATI into its mainstream CPUs, and seeing the challenge, Intel is making noises about Larrabee. I would love to get my hands on all three for now so that I can implement a covering Python layer to perform data-parallel eigen decompositions.
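
A CPU-side sketch of the kind of interface I have in mind, in plain NumPy with no GPU involved: one eigen decomposition per small symmetric matrix, over a whole stack of them. A GPGPU-backed layer would farm each decomposition out to the device instead of looping; the shapes and sizes below are just illustrative.

import numpy as np

def batched_eigh(stack):
    """Eigen decomposition of a stack of symmetric matrices, shape (n, m, m)."""
    vals = np.empty(stack.shape[:2])
    vecs = np.empty_like(stack)
    for i, mat in enumerate(stack):
        vals[i], vecs[i] = np.linalg.eigh(mat)   # symmetric solver, one matrix at a time
    return vals, vecs

# Example: 1000 random 6x6 symmetric positive semi-definite matrices
a = np.random.rand(1000, 6, 6)
stack = a @ a.transpose(0, 2, 1)
vals, vecs = batched_eigh(stack)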

Having said all this, I am still not sure which one is the right brain and which one is the left - it seems the one obsessed with doing the same task in parallel and with maths produces most of the art (GPU), while the one that is flexible, predictive and moves around laterally does all the mundane word processing (CPU).

Playing with words - wordle

I saw a post summarizing a conference program via a Wordle word cloud. So I decided to do the same to my old blog, which I can't write to anymore. The results are frankly surprising: the word I use most is apparently beach. Here I was thinking I was writing a mostly scientific and programming blog. I had a relatively short night out tonight; need to pack a bit and get ready to head off to Tasmania on Sunday. I have a sneaking suspicion that Wordle only parsed the recent posts, which are about my trip to Bali, but it is a nifty toy indeed. I will have to throw it at my thesis at some point when it is done.
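
Under the hood a word cloud is essentially word-frequency counting with stop words dropped; a toy sketch of the idea (the stop-word list and input text are placeholders, not what Wordle actually uses):

from collections import Counter
import re

STOPWORDS = {"the", "and", "a", "of", "to", "in", "is", "it"}   # tiny placeholder list

def word_frequencies(text):
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w not in STOPWORDS)

print(word_frequencies("the beach and the beach and more beach").most_common(3))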


Wordle: Whatnickblog

Thursday, September 10, 2009

Fusing information - when means don't cut it look at variance

Interpreting what you see can be a challenging task for the brain. We have a lot of mental resources dedicated to processing visual information, but still a lot of illusions get past us, mostly due to lack of concentration or presumptions which prevent us from paying attention to certain details.

The standard assumption for detecting an oil spill/slick in SAR is that the oil reduces surface tension and dampens capillary waves, flattening the sea surface and reducing backscatter. Other natural phenomena can have the same impact, mainly doldrums and calm sea conditions. The power spectral densities of both of these sea surface anomalies can be very similar. The slicks, however, differ in textural information, since the oil does not completely dampen the natural wind-driven waves and so forms streaks. Instead of the slow-to-compute but detailed texture filter banks or occurrence/co-occurrence measures, simple kernel-based variance measures may help distinguish streaky oil slicks from just plain old calm sea.
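
A minimal sketch of such a kernel-based variance measure using SciPy; the image array and window size are illustrative assumptions, not tuned values:

import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(img, size=7):
    """Per-pixel variance over a size x size window: E[x^2] - E[x]^2."""
    img = img.astype(np.float64)
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img * img, size)
    return mean_sq - mean * mean

# Streaky slicks should show higher local variance than uniformly calm sea
# at the same low mean backscatter.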

With the West Atlas rig leak becoming a long drawn-out affair, and claims and counter-claims everywhere with no material evidence, observing the dynamics of the slick can be important. Not only does the wind play a part, the bathymetry takes a role as well. The Timor Sea holds resources worth billions, shared by Australia and East Timor. The Envisat images show a lot of dark areas in the sea which can only be differentiated by texture. The oil seems to be dispersing at a bathymetric feature, and right at the dispersion point is a QuickBird image, in the middle of the ocean, taken quite a while ago in May. I wonder what is so interesting there.

Wednesday, September 9, 2009

ESA uses NASA Software - to look at oil spills in Australia

Europe and America have always had close technological co-operation, especially in the space sector. While Europe has made major progress in the radar domain, America has stayed behind; but one area where the Americans, namely NASA, made a huge leap is in making an open-source globe visualisation tool, WorldWind. Now the European Space Agency (ESA) is using this as the archive display and ordering backdrop in EOLi, as opposed to the rather dated and hard-to-use 2D map they used to have.
One of the advantages of using WorldWind is that the preview images can be quickly loaded into the globe overview to provide the user context and allow exploration of the data history even before ordering. In the case of planning a future acquisition, the 3D display combined with a gazetteer makes finding the area of interest simple and fun.

Out of curiosity I checked on the ongoing, rather large oil spill near Ashmore Reef. Sure enough ENVISAT-ASAR had collected an image, and I got a quick idea of how large the spill area is without fetching the full image. Detailed images from TerraSAR-X and COSMO-SkyMed can be found here.

Monday, September 7, 2009

Scaling Python applications - Parallel Python easy heterogenous clustering

Python is often accused of being a slow interpreted language, in spite of all the proof that critical sections are easy to accelerate with native code and that large projects such as YouTube are written in Python. Python is a great glue language, holding together disparate bits of code and providing an easy interface to multiple languages - an invaluable prototyping tool.

I wrote some naive inverse distance weighted interpolation for a set of field data and it ran painfully slowly (taking 1 second per interpolated point). So I looked into accelerating it with Parallel Python; it was surprisingly easy to set up and to recode the algorithm in parallel mode. It is embarrassingly parallel, with the same operation being done on each grid point. Extending from 1 laptop to 7 different machines resulted in around a 3x increase in execution speed. Admittedly I ran the job over wireless, and some of the machines were Windows desktops with little dedicated resource while others were servers running Linux. However, the exercise demonstrates the flexibility of Python.
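
For the record, the parallel version was structured roughly like the sketch below. The helper name, grid chunking and data are illustrative assumptions rather than the exact code I ran; the empty ppservers tuple would be filled with the addresses of machines running ppserver.py.

import numpy as np
import pp

def idw_chunk(grid_pts, sample_pts, sample_vals, power=2.0):
    # Naive inverse distance weighting for one chunk of grid points.
    import numpy as np   # make np available inside the worker process
    out = np.empty(len(grid_pts))
    for i, p in enumerate(grid_pts):
        d = np.sqrt(((sample_pts - p) ** 2).sum(axis=1))
        w = 1.0 / (d ** power + 1e-12)
        out[i] = (w * sample_vals).sum() / w.sum()
    return out

# Hypothetical field samples and target grid
samples = np.random.rand(500, 2)
values = np.random.rand(500)
grid = np.random.rand(10000, 2)

job_server = pp.Server(ppservers=())            # empty tuple = local CPUs only
chunks = np.array_split(grid, 8)                # one job per chunk of grid points
jobs = [job_server.submit(idw_chunk, (c, samples, values), modules=("numpy",))
        for c in chunks]
result = np.concatenate([job() for job in jobs])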