5) Research & development
When I started this work I was already an experienced stereographer, but my knowledge of panography was limited: I did not yet know the importance of nodal adjustment. I had two Sony NEX-5 cameras with 16mm optics, with which I had already taken many stereoscopic panoramic photos in DCP/3D format for projection in 3D cinemas, so I thought it was a good starting point.
I immediately realized that if I had to rotate the cameras, it would be better to mount them in portrait orientation to get a larger vertical field of view while keeping the interaxial distance.
My first attempt was to build an automated system based on a consumer robotics kit. I could program the number of photos and trigger the cameras using a servo-driven lever system, and after each attempt I tried to stitch the result in Autopano Giga until I found the optimum number of photos per revolution.
The result was spectacular; I was impressed to be able to view a full 360º rotation in my stereoscope. But the 16mm lenses were too short: there were two black holes at the top and bottom that had to be closed.
I knew what I had to do: I needed an automated system with freedom of movement in two axes, which would allow me to capture the whole sphere.
The robotic kit I used is a toy, not designed to move a heavy load; the Sony cameras with their optics and batteries weighed half a kilo, so direct drive from the servomotors was not possible and I had to add a gearbox to increase the torque.
Once it was finished, I programmed the capture based on the overlap that had worked for the cylindrical panorama, but in three rows.
I used a 30º rotation base, 12 photos around in three rows at 0º and plus/minus 45º, to which I added one photo of the zenith, because a small hole remained to be covered, and two photos of the nadir at 90º.
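Expressed as a shot list, that pattern can be sketched like this (a reconstruction on my part; the angle conventions and the exact yaws chosen for the zenith and nadir shots are assumptions for illustration):

```python
# Sketch of the capture pattern described above, per camera:
# 12 shots on a 30-degree base in three rows (0 and +/-45 degrees),
# plus one zenith shot and two nadir shots.

def shot_list():
    shots = []
    for pitch in (45, 0, -45):            # the three rows
        for yaw in range(0, 360, 30):     # 12 stops per revolution
            shots.append((yaw, pitch))
    shots.append((0, 90))                 # zenith, to close the top hole
    shots += [(0, -90), (180, -90)]       # two nadir shots
    return shots

print(len(shot_list()))  # 39 shots per camera
```

With two cameras rotating together, this pattern doubles into 39 stereo pairs per sphere.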
Then something magical happened! I heard that the director of the London Stereoscopic Company (LSC), the astrophysicist Dr. Brian May, was coming to Tenerife to participate in the STARMUS festival, so I attended. The LSC had a stand in the marketplace of the congress, so it was easy for me to contact them, and it was exciting to see, for the first time and together with Denis Pellerin, a VR 3D 360 photo on the VR OWL. A few weeks later I stayed at Brian's guest house in London and took pictures all over the city centre.
The London Stereoscopic Company asked me to design and build a robot for the Fuji W3 stereoscopic camera. It worked correctly, but its post-production was very slow due to the narrow FOV of its optics, which Autopano read as 40mm. This forced me to write a rather complex robotic capture program, since the robot had to make seven rows… it was a very long capture, lasting nine minutes, due in part to this camera's long recovery time after each shot.
In total there were exactly 99 photos in MPO format that had to be processed one by one. It was a lot of work, but it was worth it: the "Spline 64" interpolator, combined with the 40mm and the high overlap, resulted in a good stereoscopic photo. It was a steep learning curve, and a motive to return later to narrower-FOV optics.
It was very interesting to see the result and to discover how the use of 40mm optics affects volume and dimensions.
When I finished with the Fuji W3, I went back to working with the DSLRs, because I was still not happy with the results. I already had good photos stitched in Autopano Giga, but the parallax errors were visible despite the high overlap I was using… so I understood the importance of respecting the nodal point as much as possible. It was clear: "the greater the interaxial distance, the greater the nodal misalignment", so I decided to reduce this parameter.
I had reached the physical limit of the Sonys: the two lenses could not be brought any closer together. I considered building a nodal beam-splitter, but the fragility of the glass used in that system and the instability of LEGO led me to experiment with another very old solution, the so-called "cha-cha", which consists of taking the two photos with the same camera, moving it to the right and to the left.
A three-axis robotic system is not easy to calculate and build, much less one that must transport a heavy camera like the Sony Alpha 6, which I had acquired for its Wi-Fi function. So before building it, I built an identical system designed for a very light camera: the camera of my Samsung Galaxy S6 edge+.
EV1 robot & Samsung Galaxy S6 edge+ camera.
It was amazing to see how much the stitching errors shrank: by reducing the interaxial distance by half, the errors were reduced in size four times, so I decided to build the robot for the Sony Alpha 6.
I already had stereopanoramas without appreciable stitching errors, but the capture was too slow; I needed higher-FOV optics that would allow a faster capture. But a greater FOV combined with reduced interaxial distances meant a change of direction in the research, and a change of cameras.
I decided to go with the GoPro Session for several reasons, mainly its square design, which I loved because by turning it 90º I could take photos vertically or horizontally. It had a greater FOV than Sony's 16mm optics, its small size allowed me to reduce the interaxial distance, and shutter synchronization was very good using the GoPro remote control.
I was pleasantly surprised to see how well the two GoPros stitched in Autopano, probably because Kolor had been acquired by GoPro.
With the exception of four or five examples, no individual or company had yet talked about stereoscopic panoramas, so I thought I had invented something interesting, and I built a robot specifically for the GoPro. Thanks to the robotics and the extended FOV, I gradually reduced the capture sequence to 8 double photos, taken in just over half a minute.
My proposal was to modify the GoPro Karma gimbal: it was perfectly possible to eliminate the accelerometers and program the gimbal as if it were a robot.
At that time GoPro and Google were very busy trying to launch "JUMP" and its 16 GoPros… so they didn't listen to me!
Thanks to the experience with robotics, I continued to evolve the product to its maximum miniaturization. This rig has a rotator and a photographic nut, so it was only necessary to take six double photos around the horizon, plus zenith and nadir.
But the photographic quality of the GoPros was somewhat short of what I needed, so I replaced them with the Sony AZ1, the best miniature photo camera on the market, and this robot was transformed into what would later become NANOMINI, a rig with which I made hundreds of stereopanoramas.
An acquaintance and dear photographer discovered my work and invited me to meet the panography community. There I learned that almost everyone uses 8mm lenses, so I decided to investigate and bought the then-new Samsung 2016, an exciting 7mm test camera.
My first project was called "The Broken Egg": I tried to separate the two cameras to use them side by side with native genlock, but Samsung had already foreseen this case and fitted flat cables that were impossible to solder.
I measured the distance between their optics and saw that the nodal misalignment of one entrance pupil with respect to the other was 1.8 cm. It was, in effect, a back-to-back stereoscopic camera, so I decided to try a rotational capture, and it worked!
The first Insta360 had just appeared, with a six-lens configuration with which you could make stereoscopic panoramas using the rotational technique. I made the first rotational panorama taking six photos, but as I had a camera on each side, I only had to make three double photos.
Something didn't work right: the Insta360 videos had no stitching/parallax errors, while my capture, made with a much smaller interaxial distance, did. Then I understood that VR video software was designed to work with non-nodal captures, achieving a perfect stitch thanks to interpolation and optical flow.
By chance, I got in touch with the best film software engineer in the world, who curiously lives here in the Canary Islands: Roman Dudek, father of "Jaleo", a software package presented in 1993 for the post-production of analogue 35mm cinema, running on Silicon Graphics computers that at the time cost 250,000 dollars.
Thanks to Roman we were able to carry out several tests. First we imitated the capture of the Insta360, and little by little we searched for a capture that offered the same quality as the side-by-side capture, until we decided that the Samsung 2016 didn't give enough quality. That's when Jürgen Schrader helped us: between the three of us we defined the photographic capture, which Jürgen shot and Roman stitched.
"Thanks to Jürgen's photographic quality and Roman's perfect stitching, we had the first perfect picture in history. I can make this claim because interpolation and optical flow had never before been applied to still photography."
Now I had to start over! The discovery of Mistika VR was an important evolution. From this point I could work with two different techniques, rotational or parallel, and the interaxial distance was no longer a problem, so I went back to working with DSLRs, but this time with the experience I had gained.
I decided that two-axis robotic systems could still be interesting when working with 16mm optics, but with 8mm or 10mm the number of photos required did not justify the additional work of travelling with and operating a robot, so I decided to create two-axis manual nodal heads.
From that moment on, and after presenting the rotational technique in Tokyo with only 8 shots, I began to collaborate with photographers from all over the world whom I had met thanks to the IVRPA. They sent me photos taken with the best sensors and the best optics, and so, little by little over six months, I standardized the capture across all the possible variables.
There are still many tests to be done and a lot of work, because only now are we starting to understand the mechanisms of stereopanoramas and how they are affected by different optics, the interaxial distance, the technique used, or even the height of the tripod. However, when I saw the Nodal Ninja stereo head, I decided to make a small stop along the way to publish and share all these experiences, because now I can offer a standardized workflow thanks to the Nodal Ninja initiative.
PATENT PENDING (PCI)
A stereopanorama at the touch of a button!
I've filed many patents, and every time I file a new one, the old ones become obsolete. With the latest I have managed to reach the limit of both technical and physical miniaturization. In this patent I describe an automated, motorized, non-nodal capture system for taking 360º stereoscopic photographs for display in VR headsets such as the Gear VR.
At first glance it may seem like something that already exists on the market, but its function, objective and operation are completely different, so the patent office has accepted it as a MODEL OF INVENTION.
Motorized system for capturing stereoscopic panoramas.
In any spherical capture there will always be a time offset between the photos, so indoors, where the scene is static, we can capture with a single camera.
Using two servomotors it is very easy to create a pseudonodal motorized system, in which parallax is obtained by offsetting the camera from the axis of the vertical servomotor. In this case we must take into account that the offset used must equal half the interaxial distance calculated for the shot.
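This half-distance rule is simple but easy to get wrong, so here it is as a tiny helper (the function name and units are mine, for illustration only):

```python
# The camera sits on an arm offset from the vertical servo's axis;
# rotating the arm swaps the viewpoint between the two "eyes",
# so the offset is half the desired interaxial distance.

def arm_offset_mm(interaxial_mm: float) -> float:
    return interaxial_mm / 2.0

print(arm_offset_mm(65.0))  # 32.5 -> a 65 mm interaxial needs a 32.5 mm arm
```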
Panning: The horizontal servomotor uses a 60º rotation base, which gives enough overlap even with 12mm optics.
Tilt: The vertical servomotor uses a 90º rotation base, making its four stops at 45º, 135º, 225º and 315º.
The most interesting thing about this capture is that both servomotors always rotate in the same direction. In the graphic above we see how the colour changes depending on whether the camera looks to one side or the other, which means that in each rotation two photos are taken for the right side and two for the left side.
I have designed a broad-spectrum capture system that allows us to work with both APS-C and full-frame sensors, as well as 7.5, 8, 10 and 12mm optics, taking 24 photographs to compose the stereopanorama.
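Putting the panning and tilt rules together, the 24-shot sequence can be sketched like this. It is a simplified sketch: the assignment of tilt stops to the right and left views is my reading of the colour graphic (stops on opposite sides of the arm swap the eye), not a literal transcript of the robot's program:

```python
# Sketch of the 24-shot capture sequence: 6 pan stops x 4 tilt stops.
# Assumption: tilt stops at 45/315 degrees place the offset arm on one
# side (one eye) and 135/225 on the other, giving two photos per side
# at every pan stop, with both servos always turning the same way.

PAN_STEP = 60                      # degrees between horizontal stops
TILT_STOPS = (45, 135, 225, 315)   # stops on the 90-degree tilt base

def capture_sequence():
    shots = []
    for pan in range(0, 360, PAN_STEP):
        for tilt in TILT_STOPS:
            eye = "right" if tilt in (45, 315) else "left"
            shots.append({"pan": pan, "tilt": tilt, "eye": eye})
    return shots

seq = capture_sequence()
print(len(seq))  # 24 photographs in total, 12 per side
```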
And finally I have created a preset for Mistika VR, so post-production consists simply of ingesting the photos and rendering the finished panorama.
To conclude, I will simply say that I use the VR OWL designed by Dr. Brian May because it offers photographic quality without a grid effect or chromatic aberration. It is true that the sensation is not as immersive as with a VR headset or an Oculus Go, but it is very effective for checking stereoscopic quality. It is interesting to add that I don't use the magnetic metal plate; instead, I use a removable sheet of silicone adhesive gel.