A new year in life started a couple of days ago. Sharing a birthday with Einstein brings some disadvantages and advantages at the same time. One of the advantages is the constant nagging urge to learn more about the universe and see it the way my illustrious birthday-mate did. The disadvantage is realising that if I were ever to invent a time machine and move in this extra dimension he conjured up, I would have to keep looking for clues that I have left myself - damn you, predestination paradox.
As part of my continuous Brownian motion through life, I started a new job. No organisation (organism) is cool without a scientific name or acronym; the previous one I worked for was a mouthful - CSIRO - while the current one is shorter, just AMX (Aerometrex). The work is multiview geometry mapping and point cloud collection, calibrating lots of cameras in Agisoft, Imageiron, PhotoModeler etc. The approach here is very pragmatic: we go with whatever is available off the shelf to create the product, and my job is to develop an efficient production chain using the right mix of automation and human intervention.
The first order of business was setting up the development environment - the Python 2.7 64-bit goodies - and implementing image-based camera calibration in Python (with some changes to account for the new cv2 API, which treats numpy arrays and OpenCV images as one and the same). I read through Zhang's core paper on simple flat checkerboard-based camera calibration, implemented in OpenCV to estimate 2-3 radial distortion parameters and 2 tangential parameters (Brown's model), as well as the X and Y focal lengths and the principal point (the two focal lengths can differ if the lens has astigmatism). Staring at these calibration targets for a while tends to give you optical illusions, as the eye and the brain aim to iteratively approach a calibrated view of reality.
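Here is a minimal sketch of what that calibration loop looks like in Python with cv2. The file pattern, board size and termination criteria are assumptions for illustration, but the flow (detect corners, refine them, hand everything to cv2.calibrateCamera) is the standard Zhang-style pipeline OpenCV implements.

```python
import glob
import numpy as np
import cv2

# Hypothetical board: 9 x 6 interior corners on a flat checkerboard
pattern_size = (9, 6)

# Planar object points (Z = 0), prepared once in board units
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)

obj_points = []  # 3D points on the flat target
img_points = []  # 2D corner detections per image
image_size = None

for fname in glob.glob('calib_*.jpg'):   # hypothetical file naming
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    image_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        # Sub-pixel refinement of the detected corners
        cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1),
                         (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001))
        obj_points.append(objp)
        img_points.append(corners)

# Zhang's method: recovers fx, fy, cx, cy plus the Brown distortion terms
# dist = [k1, k2, p1, p2, k3] (radial k1-k3, tangential p1, p2)
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)

print "RMS reprojection error:", rms
print "Camera matrix:\n", K
print "Distortion coefficients:", dist.ravel()
```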
OpenCV needs to be told how many corners to expect, so a simple histogram equalisation and mean transition count on the checkerboard does the trick. Then simply populate the camera matrix (remembering the bottom-right element is 1) and the distortion coefficients, and undistort. Writing calibrations and undistortions with higher-order polynomials, and even piecewise linear functions, will be required for wide-angle lenses. Otherwise I am also looking at the 3 rotational degrees of freedom we have in spaceland and their representations via rotation matrices, quaternions and Euler angles. Is there a rotational degree of freedom in space-time land?
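A follow-on sketch, reusing the K, dist and rvecs names from the snippet above (assumed, not a fixed API): undistort an image with the estimated parameters, then hop between rotation representations. OpenCV hands back Rodrigues rotation vectors, cv2.Rodrigues converts them to 3x3 rotation matrices, and a Z-Y-X Euler composition is one of many conventions for building the same matrix from three angles.

```python
import numpy as np
import cv2

# Undistort one image with the K and dist estimated earlier
img = cv2.imread('calib_0001.jpg')   # hypothetical file name
h, w = img.shape[:2]
new_K, roi = cv2.getOptimalNewCameraMatrix(K, dist, (w, h), 1, (w, h))
undistorted = cv2.undistort(img, K, dist, None, new_K)

# Each board pose comes back as a Rodrigues rotation vector;
# cv2.Rodrigues converts it to a 3x3 rotation matrix (and back).
rvec = rvecs[0]
R, _ = cv2.Rodrigues(rvec)

def euler_zyx_to_matrix(yaw, pitch, roll):
    """Compose a rotation matrix from Z-Y-X Euler angles (radians)."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz.dot(Ry).dot(Rx)
```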