It was Pollock who first discovered that if you point a large number of lenses toward a common point and then make a small correction on each lens, you can give a camera capabilities that far surpass existing technologies.
“If you look at high-resolution images taken by satellite or aircraft, the field of view in those photographs is tiny,” he said. “This camera provides anyone with the ability to view the entire scene and, simultaneously, zoom in closely on a certain area with very high resolution in real time.”
Flying at an altitude of 15,000 feet, a developmental version of the camera can see a 21-kilometer-diameter area at a resolution of 0.3 meters. By comparison, most Google Earth imagery has a resolution of 1 meter.
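The viewing geometry implied by those figures can be checked with a quick calculation. The altitude and footprint are taken from the article; the feet-to-meters conversion factor is standard.

```python
import math

altitude_m = 15_000 * 0.3048    # 15,000 ft converted to meters (~4,572 m)
swath_radius_m = 21_000 / 2     # half of the stated 21 km footprint diameter

# Half-angle of the field of view, measured from straight down
half_angle = math.degrees(math.atan(swath_radius_m / altitude_m))
print(f"Field-of-view half-angle: {half_angle:.1f} degrees")  # ~66.5
```

A full field of view of roughly 133 degrees is consistent with the "nearly hemispherical" coverage the article describes later.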
The optics-system patent shared by UAHuntsville and Sony Corp. provides this unique combination of coverage and resolution, according to Pollock.
Images from existing cameras have to be tiled like the pieces of a jigsaw puzzle before a full picture can be seen. This can create problems for security forces, such as the Department of Defense, border or harbor patrol, or homeland security. For example, vehicles can appear more than once if they move from one image to the next between exposures. Errors of this type are common in online mapping tools such as Google Earth or Microsoft’s Virtual Earth, according to Pollock.
That’s where researchers at UAHuntsville stepped in: they configured an array of light-sensitive chips, each one recording a small part of a larger image, and placed them at the focal plane of a large multiple-lens system. The system has the structure of a common kitchen utensil, a colander. The camera would have one-gigapixel resolution and be able to record images at five frames per second.
ArguSight, an Illinois-based company, has signed a licensing agreement with the university and is seeking venture capital to bring the product to the commercial marketplace. CEO Stuart Claggett compares the product to a popular TV device.
“The complete camera system is like a ‘TiVo’ in the sky,” he said. “It captures high-quality imagery and records all the data. A user can request numerous high-definition video windows of live data in real time, or review all of the video on demand on the ground once the aircraft lands.”
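The recording burden behind that "TiVo in the sky" comparison is easy to estimate. The one-gigapixel frame size and five-frames-per-second rate come from the article; the one byte per pixel figure is an assumption for illustration only.

```python
pixels_per_frame = 1_000_000_000  # one gigapixel per frame (from the article)
frames_per_second = 5             # stated frame rate
bytes_per_pixel = 1               # assumption: 8-bit samples, no compression

data_rate_gb_s = pixels_per_frame * frames_per_second * bytes_per_pixel / 1e9
print(f"Raw data rate: {data_rate_gb_s:.0f} GB/s")  # 5 GB/s before compression
```

Even at this conservative assumption, the raw stream is far too large to downlink in full, which is why the system records everything onboard and serves only requested video windows live.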
Ultimately the camera can cover a nearly hemispherical field of view with uniform image quality and sensitivity. The initial camera design constraint was to obtain greater than 10⁹ samples within a 10 x 10 km ground footprint. It was quickly realized that, at 4 megasamples (megapixels) per camera, this would require 271 cameras. The constraint leads to significant sample redundancy, greater than 90 percent.
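The camera count in that design trade can be reproduced from the stated numbers. The sample target and per-camera pixel count come from the article; the ring-packing reading of the 271 figure is an inference, not something the article states.

```python
import math

required_samples = 1e9      # design target from the article: > 10^9 ground samples
samples_per_camera = 4e6    # 4 megapixels per individual camera

# Minimum camera count if no ground sample were ever recorded twice
minimum_cameras = math.ceil(required_samples / samples_per_camera)
print(minimum_cameras)  # 250

# Inference: 271 = 1 + 3*n*(n+1) with n = 9 is the smallest centered
# hexagonal number above 250, consistent with packing the lenses in
# concentric rings around a central camera, colander-style.
rings = 9
hex_count = 1 + 3 * rings * (rings + 1)
print(hex_count)  # 271
```

The overlap between neighboring cameras' footprints needed to cover the ground without gaps is what drives the heavy sample redundancy the article mentions.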
Reducing the redundancy to less than 1 percent significantly expands the field of view, Pollock said. Further, because of the camera's modular nature, the field of view can be configured to suit specific applications.
For example, an arc of cameras, in a configuration one might call a Mohawk, would sample a long, narrow strip. A camera that operates in other spectral regions, with a field-of-view configuration and a sample size to suit the application, is also feasible, according to Pollock. He says software development to fully exploit the camera's data capacity continues.
Pollock said the camera could have far-reaching implications for the military, crime prevention and enforcement, as well as traffic analysis and emergency-response support. The gigapixel camera will fit in a one-meter cube and could be flown on any type of vehicle: airplanes, helicopters, blimps or unmanned aerial vehicles.
UAHuntsville filed the patent for the large-format gigapixel camera and shares that patent on a 50-50 basis with Sony Corp.