In the good old days the hallmark of a scientific workstation was a math coprocessor; today we have supporting processors everywhere - in the GPU, the network card, the sound card. The purpose of the CPU is to be the jack of all trades, managing all the crazy hardware and executing instructions. CPUs excel at branch prediction and multitasking, whereas the GPGPU does floating point arithmetic faster and does a lot of it in parallel - just what you need to apply filters to raster data, perform finite element analysis, or fit an isosurface to a point cloud using local data. Loading instructions into the GPU has traditionally been the domain of the graphics programmer. The major graphics vendors, NVIDIA and AMD/ATI, are now locking horns over making this just as easily accessible to the CPU user.
The two companies have competing technologies - CUDA and Stream SDK. The divergence between the camps makes the coder's life difficult when using the GPGPU while still supporting all platforms. There are people scratching this itch by building APIs that perform data parallel tasks independently of the enabling technology. Most are still in their infancy - I wish Pygwa all the best. AMD is well positioned to eventually integrate the GPGPU architecture from ATI into its mainstream CPUs; seeing the challenge, Intel is making noises about Larrabee. For now I would love to get my hands on all three, so that I can implement a covering python layer to perform data parallel eigendecompositions.
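A minimal sketch of what such a covering layer could look like - the function name `eigh_batch` and the `backend` parameter are my own invention, not an existing API, and only a NumPy (CPU) path is filled in; a CUDA or Stream backend would register under another key:

```python
import numpy as np

def eigh_batch(matrices, backend="cpu"):
    """Hypothetical dispatch layer: one API, pluggable backends.

    Only the CPU path (NumPy) is implemented here. Each symmetric
    matrix in the batch is decomposed independently - exactly the
    embarrassingly parallel workload a GPGPU backend would eat up.
    """
    if backend != "cpu":
        raise NotImplementedError(f"backend {backend!r} not wired up yet")
    # np.linalg.eigh returns (eigenvalues, eigenvectors) per matrix
    return [np.linalg.eigh(m) for m in matrices]

# Usage: a batch of identical 2x2 symmetric matrices
batch = [np.array([[2.0, 1.0], [1.0, 2.0]]) for _ in range(4)]
for vals, vecs in eigh_batch(batch):
    # sanity check: V diag(w) V^T reconstructs the original matrix
    assert np.allclose(vecs @ np.diag(vals) @ vecs.T, batch[0])
```

The point of the dispatch parameter is that caller code never changes when a GPU backend appears - the same batch call would just fan the matrices out across the device instead of looping on the host.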
Having said all this, I am still not sure which one is the right brain and which one is the left - it seems the one obsessed with maths and with doing the same task in parallel produces most of the art (the GPU), while the one that is flexible, predictive and moves around laterally does all the mundane word processing (the CPU).
Friday, September 11, 2009