Many years have passed since David Cox, a Google engineer in Paris, went to Mountain View, where he presented the first sketch of the “Google Cardboard” to Clay Bavor.
His intention was not to invent “Virtual Reality” but “teleportation”: to drive a robot over a long distance from his mobile phone, viewed inside a “Google Cardboard”.
They met with Seitz and the project was discarded due to the high production cost of the robots.
When I found out, I thought it was an exciting idea and built the robot shown in the photo, based on a Texas Instruments robotics board with a “Sitara” processor, Wi-Fi, and USB ports.
When I finished it, I discovered that I had created the perfect camera for stereoscopic panoramas; all I had to do was program it to capture using the “Timelapse” mode of the GoPro Session. I presented it to Kolor, but at the time they were very busy running the Google Jump camera… years later, Karma and other setbacks put an end to Kolor, and Google teamed up with Xiaomi.
The next step was to use professional cameras, for which I had to design a system capable of carrying two DSLRs with 360º freedom of movement on two axes.
I showed the prototype and its results to the London Stereoscopic Company (LSC), who invited me to visit them in London; together we made a stereoscopic panorama of the chapel of King's College London, where Charles Wheatstone presented stereoscopy in 1838.
Denis Pellerin, head of the LSC and Brian May's right hand, gave me a difficult challenge: to build a robot for the Fuji W3 3D camera.
And it worked, but the stitching process was quite complex, long, and tedious: with 40 mm optics I had to program the robot to take 99 MPO photographs, which had to be separated one by one and then stitched with Autopano Giga. That is how I discovered gigapanoramas.
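The separation step can be automated. As a minimal sketch (not the tool I used at the time), an MPO file from the Fuji W3 is essentially two concatenated JPEGs, one per eye, so splitting the file on the JPEG start-of-image marker recovers both frames for stitching:

```python
# Hedged sketch: split an MPO byte stream into its embedded JPEG frames.
# Assumes each frame begins with the JPEG SOI marker followed by an
# Exif APP1 segment, as the Fuji W3 produces.
SOI = b"\xff\xd8\xff\xe1"  # JPEG start-of-image + APP1 marker

def split_mpo(data: bytes) -> list[bytes]:
    """Return the individual JPEG frames embedded in MPO bytes."""
    offsets = []
    pos = data.find(SOI)
    while pos != -1:
        offsets.append(pos)
        pos = data.find(SOI, pos + 1)
    # Each frame runs from its SOI marker to the next frame's SOI (or EOF).
    return [data[start:end]
            for start, end in zip(offsets, offsets[1:] + [len(data)])]
```

Run over the 99 captures, this yields 99 left frames and 99 right frames, each set stitched separately into one eye's panorama.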
I thought robotics was going to be decisive in the creation of stereoscopic panoramas, and I knew that in California they were looking for an app capable of doing it, so I tried it with a Samsung Galaxy and it worked surprisingly well.
I talked to several panoramic photographers and asked them why they had not yet started making stereoscopic panoramas for the “Google Cardboard”. They told me that not everyone has two cameras, so I built “Dekard”, a three-axis robot in which the camera moves its entrance pupil to the right and left of the robot's NPP.
Obviously no one was going to use a LEGO robot, so I made a first semi-industrial model with interchangeable cradles: I could install the cradle for the A6, or another cradle to carry two Sony A5s, or both GoPro Sessions.
All these systems helped me define the extra overlap used in stereoscopic panoramas. It was very easy: I could program steps of 10º, 22.5º, 30º, 45º, or 60º, and thanks to these robots I was able to simplify the capture system and understand that motorized systems have some applications, such as working with poles, but that it was easier to create manual tools.
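Those five steps all share one property: they divide 360º evenly. A minimal sketch (assuming a single horizontal row, not the author's actual firmware) of how a capture program turns an angular step into the yaw positions the robot visits:

```python
# Hedged illustration: list the yaw angles for one full 360-degree row
# at a fixed angular step. Only steps that divide 360 evenly are valid,
# which is why 10, 22.5, 30, 45, and 60 degrees all work.
def yaw_positions(step_deg: float) -> list[float]:
    """Yaw angles (degrees) for a full rotation at the given step."""
    n = round(360 / step_deg)
    if abs(n * step_deg - 360) > 1e-9:
        raise ValueError("step must divide 360 evenly")
    return [i * step_deg for i in range(n)]
```

For example, a 45º step yields 8 shots per row, while 22.5º yields 16; the smaller the step, the larger the overlap available for stereoscopic stitching.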
The robot on the left is the base on which I built NANOMINI; I ran many tests until I found the perfect minimal capture.
NANOMINI Initial versions
When I posted this sketch on the virtual-reality engineering forums it caused great amusement, and colleagues laughed out loud… but over time Insta360 has used this very system in their Insta360 Titan 8K camera.
The minimum number of cameras needed to obtain a stereoscopic panorama is six, just like the Kandao or the Insta360 Pro. And if we extrapolate this technology to photography, the only thing we have to do is take eight photos, offsetting the entrance pupil a few centimeters in front of the NPP of the whole rig.
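The geometry behind that offset can be sketched as follows. This is a hedged illustration under my own naming (`pupil_positions` and `offset_cm` are not from the original text): with the entrance pupil displaced a fixed distance in front of the rig's no-parallax point, rotating through eight evenly spaced yaws places the pupil on a circle, and the horizontal displacement between neighbouring shots is what produces the stereo baseline.

```python
import math

# Hedged sketch: (x, y) position of the entrance pupil at each yaw,
# in centimeters, with the rig's NPP at the origin. The pupil sits
# `offset_cm` in front of the NPP, so it traces a circle of that radius.
def pupil_positions(offset_cm: float, shots: int = 8) -> list[tuple[float, float]]:
    """Entrance-pupil coordinates for each of `shots` evenly spaced yaws."""
    positions = []
    for i in range(shots):
        yaw = 2 * math.pi * i / shots
        positions.append((offset_cm * math.cos(yaw),
                          offset_cm * math.sin(yaw)))
    return positions
```

Every pupil position stays at the chosen radius from the NPP, which is exactly the deliberate parallax that single-NPP panoramic technique normally tries to eliminate.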