Light field photography has been around for a long time. The first analog light field device was invented in 1908 by Gabriel Lippmann who eventually won a Nobel Prize for his work on color photography.
Light field photography is fascinating because it lets you move the focal plane of an image after it has been taken, something that's impossible in normal photography.
So, how does light field photography work? This article will teach you everything you need to know.
What Is Light Field Photography?
Normal photography works much like the human eye. You focus the camera, and the sensor captures a two-dimensional image of three-dimensional space, with a "slice" of that space in focus. Everything in front of or behind the focused area is blurry and out of focus. This is because a normal sensor records only the intensity of the incoming light.
The light field refers to the entirety of all rays of light (every photon) in a scene. The light rays that make up the light field are defined by the plenoptic function (this is why light-field cameras are also called plenoptic cameras). The plenoptic function describes a light ray in five dimensions: its coordinates in 3D space (X, Y, Z) and its direction in 2D space (two angles).
Light field photography captures information from the light field in a particular scene, including both the intensity of the light and the direction of the light rays (according to the plenoptic function).
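To make those five dimensions concrete, here is a minimal sketch of a single plenoptic sample. The class and field names are my own illustrations, not part of any camera's API; it simply pairs a 3D position with the two direction angles described above.

```python
from dataclasses import dataclass
import math

@dataclass
class LightRay:
    """One sample of the 5D plenoptic function: a 3D position plus a 2D direction."""
    x: float      # position of the ray sample in 3D space
    y: float
    z: float
    theta: float  # polar angle of travel, in radians
    phi: float    # azimuthal angle of travel, in radians

    def direction(self):
        """Unit direction vector derived from the two angles."""
        return (
            math.sin(self.theta) * math.cos(self.phi),
            math.sin(self.theta) * math.sin(self.phi),
            math.cos(self.theta),
        )

# A ray at the origin travelling straight along the z-axis:
ray = LightRay(0.0, 0.0, 0.0, theta=0.0, phi=0.0)
```

A conventional sensor pixel collapses all of this down to a single intensity; a light field camera keeps the direction information as well.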
Light-field photography is very different from conventional photography. It allows you to capture a three-dimensional image and choose where the focus will be after the fact. Instead of recording only the intensity of the incoming light, it also records the direction of the light rays.
How Does Light Field Photography Work?
As mentioned, a light field camera captures all of the information about the light field in front of the camera. This information includes the intensity, color, and direction of the light. Because of this, it’s possible to mathematically determine where each ray of light emanated from before it reached the sensor. This means that a three-dimensional model of the scene can be constructed.
There are several techniques for capturing a light field, for instance:
- Using a single camera, moved to capture information about a scene from multiple angles. This method produces a large set of images.
- Multiple-camera arrays. These usually feature dozens of sensors in a broad array that each capture information about a scene from a slightly different angle. This method also produces many images at once.
- Microlens arrays. Having an array of hundreds of microlenses in front of a single digital camera sensor allows for light field information to be captured. This produces an image that is made up of hundreds of sub-images.
Each image or sub-image differs because it captures light rays from a slightly different viewpoint. Since each viewpoint sees the scene from a slightly different angle, information about the direction of each light ray is recorded. This makes it possible to calculate each object's distance from the camera and its position in the scene, and ultimately to build a 3D model of the scene.
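The distance calculation boils down to triangulation: an object shifts slightly between two neighbouring images or sub-images, and the size of that shift (the disparity) reveals its depth. The sketch below assumes a simple pinhole model; the function name and example numbers are illustrative, not from any real camera.

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Triangulate depth from the shift (disparity) of an object
    between two neighbouring viewpoints.

    focal_px     -- focal length expressed in pixels
    baseline_m   -- distance between the two viewpoints, in metres
    disparity_px -- how far the object shifted between the images, in pixels
    """
    if disparity_px <= 0:
        raise ValueError("object at infinity or behind the camera")
    return focal_px * baseline_m / disparity_px

# An object that shifts 4 px between viewpoints 1 mm apart, seen through
# optics with a 2000 px focal length, works out to 0.5 m away:
print(depth_from_disparity(2000, 0.001, 4))  # 0.5
```

Nearby objects shift more between viewpoints than distant ones, which is why the disparity appears in the denominator.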
Applications of Light Field Photography
Light field photography has several potentially powerful applications. Because all of the information about a scene's light field is recorded, light field images can be processed in many ways that aren't possible in normal photography.
Custom Focal Point
The best-known feature of light field photography is the ability to change the focus point after the image has been taken. Because the captured data contains information about every focus distance, sophisticated software can make any distance in the scene the focal point.
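Conceptually, this refocusing is often done by "shift and sum": each captured view is shifted in proportion to its viewpoint offset, then all views are averaged, so objects at the chosen depth line up and come out sharp while everything else blurs. The sketch below is a simplified version of that idea (the function name, integer-pixel shifts, and the assumption that views arrive as NumPy arrays are all mine).

```python
import numpy as np

def refocus(views, offsets, alpha):
    """Synthetic refocusing by shift-and-sum.

    views   -- list of 2D arrays (grayscale sub-aperture views)
    offsets -- list of (du, dv) viewpoint offsets, one per view
    alpha   -- refocus parameter selecting the focal plane (0 = no shift)
    """
    acc = np.zeros_like(views[0], dtype=float)
    for img, (du, dv) in zip(views, offsets):
        # Integer-pixel shift for simplicity; real pipelines interpolate.
        shifted = np.roll(img, (round(alpha * dv), round(alpha * du)), axis=(0, 1))
        acc += shifted
    return acc / len(views)
```

Changing `alpha` moves the focal plane: points whose disparity exactly matches the applied shift stack on top of each other and stay sharp.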
Variable Depth of Field
As with focus, the nature of the recorded information makes it possible to process images with a "synthetic aperture". Aperture is the diameter of the opening in a lens; it determines an image's depth of field (how out of focus the foreground and background are).
Because a light field image includes information at every possible focus distance, it’s possible to create images that have the smallest possible depth of field (only a very small section is in focus). It’s also possible to create an image with infinite depth of field where everything in the image is in focus.
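One way to picture a synthetic aperture: averaging only the central view behaves like a tiny (pinhole-like) aperture with deep focus, while averaging every view behaves like the full, wide aperture with shallow focus. The toy sketch below (names and dummy data are mine) shows the selection mechanism, without the per-view shifting a real refocusing pipeline would also apply.

```python
import numpy as np

def synthetic_aperture(views, keep):
    """Average only the sub-aperture views whose index is in `keep`.
    Fewer views ~ a smaller synthetic aperture ~ a deeper depth of field;
    all views ~ the full aperture ~ the shallowest possible depth of field."""
    chosen = [views[i] for i in keep]
    return np.mean(chosen, axis=0)

views = [np.full((2, 2), float(i)) for i in range(9)]   # nine dummy views
pinhole = synthetic_aperture(views, keep=[4])           # centre view only
full = synthetic_aperture(views, keep=range(9))         # whole aperture
```

With real sub-aperture images, `pinhole` would show everything acceptably sharp and `full` would show a narrow in-focus slice.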
Shift Perspective
Depending on the way the light field is captured, it's possible to produce slightly different viewing angles of the scene. This depends on the width of the system used to take the image: the wider the lens system, the more light is captured from wider angles.
Once the image is taken, it’s possible to change the perspective of the image by a small amount as if you were moving your head around in the actual scene. This is known as a parallax effect. Using the parallax effect, it’s also possible to reconstruct a 3D image.
Measure Distance
Depending on the sensitivity of the light field photography system, and how precisely its optical properties are known, it's possible to calculate the distance from the lens to objects in a scene. One major application is microscopy, where accurately measuring the size of synthetic or biological samples is valuable.
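Once the depth of an object is known, a pinhole-model calculation turns its size in pixels into a physical size. The helper below is a hypothetical illustration with made-up numbers, not a real microscopy routine.

```python
def real_size_m(size_px: float, depth_m: float, focal_px: float) -> float:
    """Pinhole-model estimate of an object's physical size (in metres)
    from its size in pixels, once its depth is known.

    size_px  -- how wide the object appears on the sensor, in pixels
    depth_m  -- distance from the lens to the object, in metres
    focal_px -- focal length expressed in pixels
    """
    return size_px * depth_m / focal_px

# A sample 50 px wide, measured at 2 mm depth through optics with a
# 5000 px focal length, is roughly 20 micrometres across:
print(real_size_m(50, 0.002, 5000))
```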
Change Lighting Conditions
Because so much information about scene depth is recorded in light field photography, it’s possible with post-processing software to accurately reconstruct the lighting in a scene. Since the software knows the relative positions of all the objects in an image, it can convincingly calculate where the shadows would fall.
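At its simplest, recomputing lighting from known geometry comes down to shading models like Lambert's cosine law: a surface's brightness depends on the angle between its normal and the light direction, and a surface facing away from the light goes dark. This toy sketch (my own simplification, nowhere near a full relighting pipeline) shows that core calculation.

```python
import math

def lambert_shade(normal, light_dir):
    """Diffuse (Lambertian) shading: brightness is the cosine of the angle
    between the unit surface normal and the unit light direction, clamped
    at zero for surfaces facing away from the light."""
    dot = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, dot)

# A surface facing straight up, lit from 60 degrees above the horizon,
# comes out at about 0.87 of full brightness:
print(lambert_shade((0, 0, 1), (0, math.cos(math.pi / 3), math.sin(math.pi / 3))))
```

Because light field processing can recover per-object depth, the normals and occlusions that feed a calculation like this can be estimated rather than guessed.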
Create Real-Life VR
Light field photography may change filmmaking and VR forever, because it can be used to create lifelike VR. Google has developed examples of this that can be viewed on Steam.
Using a rotating camera array of 16 GoPros, Google captured thousands of images recording the light field information of a 3D space, then used them to build a three-dimensional, six-degrees-of-freedom virtual reality experience.
Are Light Field Cameras the Future of Photography?
In 2012, the first consumer light field camera was released by the company Lytro. It produced roughly one-megapixel images, had a constant f/2 aperture, and sold for between $400 and $500. Since then, very few consumer-targeted light field cameras have hit the market.
The lack of resolution and image quality meant that light field cameras simply didn’t take off in the consumer market as DSLRs did. In fact, many of the uses of light field technology remain in development.
But there's a reason Google (and now Apple) are investing in this technology: its use in creating 3D experiences for VR is just one example.