Compressive Lightfield Imaging
Patent
Systems and methods for compressive light sensing using multiple spatial light modulators.
Abstract
Systems and methods for capturing light field information including spatial and angular information using an image pickup device that includes an image sensor and at least one spatial light modulator (SLM) take multiple captures of a scene using the at least one SLM to obtain coded projections of a light field of the scene, wherein each capture is taken using at least one pattern on the at least one SLM, and recover light field data using a reconstruction process on the obtained coded projections of the light field.
Personal Notes
This patent resulted from the most fun two weeks of work I had during my summer internship at Canon. I had joined principally to work on optimization theory – specifically on refining the Tikhonov regularization approach to multi-hypothesis prediction in block-based compressed sensing that had defined a large part of my Ph.D. dissertation. That work was slow going, however, and the direction I received on it while at Canon was pretty minimal.
Meanwhile, some of my colleagues had been working on different forms of scattering media for physically realizable imaging systems. Ankit, in particular, had worked on computational imaging topics like Dappled Photography, which at the time was mind-blowing to me – the ability to edit and refocus images after acquisition.
The whole idea behind techniques like this was to bring professional-grade photo editing into the consumer space: fire-and-forget when taking images, then sit down later and decide on the “best” articulation of the image – shift the focus, soften the shadows, enhance the bokeh, and so on.
Many of these computational photography approaches came down to sampling the full, four-dimensional light field. At the time, that information was typically captured with techniques like the microlens arrays in plenoptic cameras. Unfortunately, those approaches always had to trade angular (light-field) resolution against spatial resolution – and spatial resolution always won.
The idea we tried out in this project was to find a way to beat that spatial-vs-angular tradeoff in light-field sampling by paying a different price: computation after acquisition to reconstruct the light field. As shown below, we thought this might be possible with hardware that could selectively block light arriving at different angles.
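To make that concrete, here's a toy sketch of the forward model in NumPy – not code from the project, and the array sizes, the synthetic scene, and the assumption that the masks can modulate every angular sample independently at every pixel are all illustrative:

```python
# Toy forward model (my own sketch, not code from the patent): a 4D light field
# L(x, y, u, v) is angularly coded by a random SLM pattern and then integrated
# over angle at each sensor pixel, giving one coded 2D projection per capture.
import numpy as np

rng = np.random.default_rng(0)

NX, NY = 32, 32        # spatial resolution (sensor pixels) -- assumed
NU, NV = 4, 4          # angular resolution we hope to recover -- assumed
K = 8                  # number of time-multiplexed captures -- assumed

# Synthetic light field: a smooth scene whose intensity drifts slightly with angle.
x = np.linspace(0, 1, NX)[:, None, None, None]
y = np.linspace(0, 1, NY)[None, :, None, None]
u = np.linspace(-1, 1, NU)[None, None, :, None]
v = np.linspace(-1, 1, NV)[None, None, None, :]
L = np.sin(6 * x + 2 * u) * np.cos(6 * y + 2 * v) + 1.0   # shape (NX, NY, NU, NV)

# One random binary blocking pattern per capture. Letting the pattern modulate
# the angular dimensions independently at every pixel is an idealization of
# what a physical SLM stack could actually express.
masks = rng.integers(0, 2, size=(K, NX, NY, NU, NV)).astype(float)

# Each capture: block some angular components, then the sensor sums over angle.
captures = np.einsum("kxyuv,xyuv->kxy", masks, L)   # shape (K, NX, NY)

print("light field:", L.shape, "-> coded captures:", captures.shape)
```

Each capture throws away angular detail in a different coded way, and the hope is that enough captures, plus a good prior, let you undo that.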
There was another weird thing about this setup. Nothing comes for free, so to sample finer angular resolutions in the light field, we'd also need to extend these modulators out in the Z-dimension, away from the imaging sensor.
I don’t think we ever ran the numbers – it’s possible you’d end up with quite the telescope of a setup – but damnit, it was neat.
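Out of curiosity, here is a rough geometric back-of-envelope (entirely my own assumed numbers, nothing we actually computed at the time): with a single SLM plane a distance z in front of a bare sensor and no main lens, two rays hitting the same pixel at angles differing by Δθ only pass through different SLM pixels if z·tan(Δθ) exceeds the SLM pixel pitch.

```python
# Rough back-of-envelope (assumed numbers, not from the project): the SLM
# standoff needed to separate rays whose angles differ by d_theta satisfies
#     z * tan(d_theta) >= slm_pitch   =>   z >= slm_pitch / tan(d_theta)
import math

slm_pitch_um = 8.0                      # assumed SLM pixel pitch (microns)
for d_theta_deg in (1.0, 0.1, 0.01):    # assumed target angular resolutions
    z_mm = (slm_pitch_um * 1e-3) / math.tan(math.radians(d_theta_deg))
    print(f"angular step {d_theta_deg:>5} deg -> SLM standoff >= {z_mm:7.1f} mm")
```

The standoff scales inversely with the angular step, and that's before adding a main lens or stacking multiple modulators, so the telescope intuition isn't crazy.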
Another challenge was that the acquisition would have to be multiplexed in time – compressive acquisition needs multiple captures under different randomized mask patterns. In an ideal world that would be a fast, under-the-hood thing. But paired with the light-blocking nature of the setup we were proposing, it meant any such system would have horrendous low-light performance.
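For completeness, here is what the recovery side of one of those coded pixels could look like – a deliberately crude stand-in using Tikhonov-regularized least squares (the patent only specifies "a reconstruction process", and a real compressive recovery would lean on a sparsity prior instead):

```python
# Crude single-pixel reconstruction sketch (my own stand-in, assuming
# Tikhonov-regularized least squares rather than the patent's unspecified
# reconstruction process). All sizes and the noise level are assumptions.
import numpy as np

rng = np.random.default_rng(1)

A = 16          # unknown angular samples at this pixel (e.g. a 4x4 set of directions)
K = 8           # coded captures; K < A, so the problem is compressive/underdetermined
lam = 1e-2      # Tikhonov regularization weight (assumed)

l_true = rng.random(A)                               # the pixel's true angular profile
M = rng.integers(0, 2, size=(K, A)).astype(float)    # one SLM mask row per capture
y = M @ l_true + 0.01 * rng.standard_normal(K)       # noisy coded measurements

# Tikhonov / ridge solution: argmin_l ||M l - y||^2 + lam * ||l||^2
l_hat = np.linalg.solve(M.T @ M + lam * np.eye(A), M.T @ y)

print("relative error:", np.linalg.norm(l_hat - l_true) / np.linalg.norm(l_true))
```

With fewer captures than angular samples the system is underdetermined, which is exactly where the regularizer (or a proper sparse prior) has to earn its keep.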
In other words, I’m not surprised this methodology didn’t take off! But it was a great project because it taught me some valuable lessons.
Lessons
- You can just do stuff
- You can do a lot more than you think, and a lot quicker than you think.
- Small Teams > Solo » Large Teams
- Attacking a novel topic (even if it’s only novel to you) is an incredible motivator. Even if you don’t know where it will lead, the learning from such experiences is invaluable.
- The bitter lesson exists in every hardware domain.
Related Posts
- CUDA Perceptron in 2008
- Intensity-only Optical Compressive Imaging Using a Multiply Scattering Material: A Double Phase Retrieval System
- Scampi: A Robust Approximate Message-Passing Framework for Compressive Imaging
- Block-based Compressed Sensing of Images and Video
- Distance-Weighted Regularization for Compressed-Sensing Video Recovery and Supervised Hyperspectral Classification