Good diagrams in publications are hard to come by, and sometimes already-published ones get re-used precisely because the know-how for reproducing them is missing. A classic example is the Entropy-Alpha space diagram produced by Cloude and Pottier. The diagram is simple enough, and the space limit curves can be computed from the equations, but in the absence of a full recipe it is just complex enough to warrant copying as-is. Copying with attribution is standard practice in science, but it gives no insight into how the diagram was originally produced, short of contacting the authors or doing forensic analysis on the lines. The tell-tale randomness of the curves in this particular one makes me suspect it was hand drawn; I will have to ask Shane Cloude when I meet him over a beer in Munich.
So I decided to try out my Inkscape skills and redraw the diagram from scratch, using some theory and some freehand approximation. Without definitive parametric equations for the bounding curves, it is rather like trying to work out how the pyramids at Giza were built.
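For what it's worth, a parametrization of the boundary curves does circulate in the polarimetry literature (a lower Curve I and a two-segment upper Curve II, each built from a simple three-eigenvalue model). The eigenvalue models and alpha assignments below are my reading of that literature, not the original authors' recipe, so treat this NumPy sketch as a starting point rather than gospel:

```python
import numpy as np

def entropy_alpha(eigenvalues, alphas_deg):
    """Entropy H (log base 3) and mean alpha (degrees) for a 3-eigenvalue model."""
    lam = np.asarray(eigenvalues, dtype=float)
    p = lam / lam.sum()
    # 0 * log(0) is treated as 0 by masking zero probabilities
    safe_p = np.where(p > 0, p, 1.0)
    H = -np.sum(np.where(p > 0, p * np.log(safe_p) / np.log(3), 0.0))
    mean_alpha = float(p @ np.asarray(alphas_deg, dtype=float))
    return float(H), mean_alpha

m = np.linspace(0.0, 1.0, 201)
# Curve I (lower bound): eigenvalues (1, m, m), eigenvector alphas (0, 90, 90) deg
curve_1 = [entropy_alpha((1.0, mi, mi), (0.0, 90.0, 90.0)) for mi in m]
# Curve II (upper bound), two segments with the same alpha assignment:
#   eigenvalues (0, 1, 2m)     for 0   <= m <= 1/2
#   eigenvalues (2m-1, 1, 1)   for 1/2 <= m <= 1
curve_2 = [entropy_alpha((0.0, 1.0, 2 * mi), (0.0, 90.0, 90.0)) for mi in m if mi <= 0.5]
curve_2 += [entropy_alpha((2 * mi - 1, 1.0, 1.0), (0.0, 90.0, 90.0)) for mi in m if mi >= 0.5]
```

Plotting `curve_1` and `curve_2` as alpha against H should trace the familiar feasible region, with both curves meeting at (H = 1, alpha = 60 degrees).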
Working with Structure from Motion and dense matching can be a lot of fun. If the project gets sufficiently big, though, the reconstructed model is broken up into multiple Wavefront OBJ files, which can be difficult to visualise and render. I like doing most of my rendering in Blender, and its handy Python API makes the job of getting all the files in easy.
To grab a bunch of OBJ files from a folder into Blender, simply run this in the Python console:
import bpy  # available inside Blender's Python console
import glob

for path in glob.glob(r"Filepath\*.obj"):
    try:
        bpy.ops.import_scene.obj(filepath=path, axis_forward='X', axis_up='Z')
    except RuntimeError:
        print("Failed " + path)
Just like that you have a scene with hundreds of object tiles. Make sure they are viewed in outline mode so that the display stays zippy while setting up lights, cameras and animation actions. Start off the render and go get a beverage.
One of the major strengths (and sometimes weaknesses) of the human brain is its ability to perform pattern matching - the classic example being that people pay more attention while playing games, so digitising and gamifying airport security leads to fewer false positives. We apply heuristics to everything we see and hear, often coming to conclusions from partial information by extrapolating from past experience. This is why upper-case letters in English are so much easier to read: we have built up heuristics to infer each letter from its top half. It is also why people who write in all capitals seem to be almost shouting - our visual heuristics take the cue.
Machines are getting better at heuristics as well as brute force; we have passed these on to them just the way we pass on our preconceptions of right and wrong, beauty and religion to our children. Armed with this system of thought and accumulated body of knowledge, machines can perform the pattern matching tasks they are programmed for vastly better than humans. All they demand in return is energy, materials to build them, and intelligent human beings to make some long intellectual marches in programming them.
Cat videos are the dominant species
In some ways the rote jobs of pure pattern matching have made us mental Neanderthals, with short neural loops focused on ambush-hunting only the job at hand rather than thinking long-term strategy. The long chase - what remains once all the short, easy pattern matching jobs have been mechanised - is for the more creative, the artists. Yet there is still a need for the engineer to keep the machines functioning. There are always predictions of a race against the machines. We still need to communicate our mental models, imperfectly, via whatever means we have.
Genies are just machines with Genius
Maybe, with exposure to lots of cat videos and artwork, the machine will eventually learn to play with cats while painting, and we will keep building bigger machines, since we can't be born biologically with bigger skulls or wield GeVs with our fingertips.