Here at the Imaging Lyceum, we explore several research topics in the areas of computational cameras and visual media. Below is a selection of our current research themes, but we are always open to new and exciting ventures! Please see our Publications page for a comprehensive list.
I. Light Transport
Light travels along many different paths from illumination sources to camera detectors, undergoing multiple bounces, scattering, and absorption events along the way. We have been exploring ways to probe and selectively capture these light transport paths, allowing us to be more judicious about which photons we receive, for various applications in computer graphics and vision. These applications include seeing through skin to visualize blood vessels, optically masking objects out of an image, and rendering light interactions for dynamic, moving scenes.
1. Sreenithy Chandran, Hiroyuki Kubo, Tomoki Ueda, Takuya Funatomi, Yasuhiro Mukaigawa, Suren Jayasuriya, "Slope Disparity Gating: Systems and Applications", IEEE Transactions on Computational Imaging 2022 [pdf] [video] (Best Demo Award at ICCP 2019)
2. X. Liu et al., "Dense Lissajous Sampling and Interpolation for Dynamic Light-Transport", Optics Express 2021 [pdf]
3. K. Henderson et al., "Design and Calibration of a Fast Flying-Dot Projector for Dynamic Light Transport Acquisition", IEEE TCI 2020 [pdf]
4. H. Kubo et al., "Programmable Non-Epipolar Indirect Light Transport: Capture and Analysis", IEEE TVCG 2021 [pdf] (Best Demo at MIRU 2018, Best Presentation at IPSJ SIG-CG 2019, Exhibited at CES 2020, winner of the Image Electronics Technology Excellence Award from The Institute of Image Electronics Engineers of Japan in 2021)
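The transport-matrix view underlying this theme can be sketched in a few lines: an image is the product of a light transport matrix T with the illumination pattern, probing T one column at a time (as a flying-dot projector does) recovers its entries, and selective capture amounts to keeping only some of them, e.g., the direct component. The numbers below are toy values for illustration, not measurements from our systems.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8  # toy scene with n illumination elements and n sensor elements

# Light transport matrix T: image = T @ illumination.
# Direct light lives on the diagonal; indirect bounces fill the off-diagonals.
direct = np.diag(rng.uniform(0.5, 1.0, n))
indirect = 0.05 * rng.uniform(size=(n, n))
np.fill_diagonal(indirect, 0.0)
T = direct + indirect

# Probe T one column at a time with impulse ("flying dot") illumination.
T_measured = np.column_stack([T @ np.eye(n)[:, k] for k in range(n)])

# Selective capture: keep only the direct component of the measured transport.
direct_only = np.diag(np.diag(T_measured))

flood = np.ones(n)
full_image = T @ flood               # conventional photo: direct + indirect light
direct_image = direct_only @ flood   # probed photo: direct paths only
```

Real systems avoid the O(n) impulse scan with structured patterns (e.g., epipolar or Lissajous scanning), but the separation of path contributions is the same idea.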
II. Remote Sensing and Tomography
We are interested in computational imaging and its application to both remote sensing and tomography problems. This includes areas such as sonar imaging (especially synthetic aperture sonar), hyperspectral imaging, long-range imaging through turbulence, and computed tomography for medical imaging. We are currently researching ways to blend the physics of image formation in these modalities with recent advances in supervised, self-supervised, and unsupervised machine learning.
1. Albert Reed, Thomas Blanford, Daniel Brown, Suren Jayasuriya, "SINR: Deconvolving Circular SAS Images Using Implicit Neural Representations" [arXiv link]
2. A. Reed et al. "Dynamic CT Reconstruction from Limited Views with Implicit Neural Representations and Parametric Motion Fields", ICCV 2021 [pdf]
3. J. Janiczek et al. "Differentiable Programming for Hyperspectral Unmixing using a Physics-based Dispersion Model", ECCV 2020 [pdf]
4. A. Reed et al. "Coupling Rendering and Generative Adversarial Networks for Artificial SAS Image Generation", IEEE/MTS OCEANS 2019 [pdf]
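As a toy illustration of blending an image-formation model with a learned scene representation (the idea behind the implicit-neural-representation reconstructions above), the sketch below represents a 1-D scene with a smooth coordinate basis — a linear stand-in for the coordinate MLP of an INR — and fits it *through* a known blur operator that stands in for the SAS/CT forward model. All names and numbers are illustrative.

```python
import numpy as np

n = 64
x_true = np.zeros(n)
x_true[20:28] = 1.0  # toy 1-D "scene": a reflective patch

# Physics of image formation: a known blur (stand-in for a SAS/CT forward model).
kernel = np.exp(-0.5 * (np.arange(-4, 5) / 1.5) ** 2)
kernel /= kernel.sum()
A = np.stack([np.roll(np.pad(kernel, (0, n - 9)), i - 4) for i in range(n)])
y = A @ x_true  # simulated measurements

# Coordinate-based representation: x(t) = Phi(t) @ w. A Fourier basis serves as
# a tiny linear stand-in for the MLP used in implicit neural representations.
t = np.linspace(0, 1, n)
freqs = np.arange(1, 17)
Phi = np.hstack([np.ones((n, 1)),
                 np.sin(2 * np.pi * t[:, None] * freqs),
                 np.cos(2 * np.pi * t[:, None] * freqs)])

# Fit the representation through the forward model: min_w ||A Phi w - y||^2.
w, *_ = np.linalg.lstsq(A @ Phi, y, rcond=None)
x_rec = Phi @ w  # reconstructed scene, deblurred by construction
```

The key design choice is that the loss is computed on the *measurements*, not the scene, so no ground-truth scenes are needed — the same self-supervised structure our INR-based papers exploit, with the linear basis replaced by a neural network.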
III. Software-Defined Imaging
Computer vision and image sensing on embedded devices require significant energy and data bandwidth, which can limit visual computing applications. We have been exploring how image sensors can be made programmable and designed for energy-efficient, domain-specific tasks rather than solely for capturing aesthetically pleasing pictures. This includes new image signal processing pipelines and accelerators for vision algorithms. To validate these ideas, we have focused on hardware acceleration via FPGA platforms as well as exploring CMOS sensor design.
1. Odrika Iqbal, Victor Torres, Sameeksha Katoch, Andreas Spanias, Suren Jayasuriya, "Adaptive Subsampling for ROI-based Visual Tracking: Algorithms and FPGA Implementation" [arXiv link]
2. Buckler et al., "EVA²: Exploiting Temporal Redundancy in Live Computer Vision", ISCA 2018 [pdf]
3. Buckler et al., "Reconfiguring the Imaging Pipeline for Computer Vision", ICCV 2017 [pdf]
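A software-level sketch of the adaptive-subsampling idea, assuming a tracker supplies a region of interest from the previous frame: read the ROI at full resolution and the rest of the frame at a coarse stride, trading background detail for readout bandwidth. The function name, stride, and ROI below are illustrative stand-ins, not our FPGA implementation.

```python
import numpy as np

def adaptive_subsample(frame, roi, stride=4):
    """Read the ROI at full resolution and the background at a coarse stride,
    emulating a programmable sensor that spends bandwidth only where it matters."""
    y0, x0, y1, x1 = roi
    out = np.zeros_like(frame)
    out[::stride, ::stride] = frame[::stride, ::stride]  # coarse background
    out[y0:y1, x0:x1] = frame[y0:y1, x0:x1]              # full-resolution ROI
    # Rough readout count (the few coarse samples inside the ROI count twice).
    pixels_read = frame[::stride, ::stride].size + (y1 - y0) * (x1 - x0)
    return out, pixels_read

frame = np.random.default_rng(2).integers(0, 256, (240, 320), dtype=np.uint8)
roi = (100, 140, 164, 204)  # (y0, x0, y1, x1), e.g., a tracker's last box
sub, n_read = adaptive_subsample(frame, roi)
savings = 1 - n_read / frame.size  # fraction of readout bandwidth saved
```

For this toy frame the sensor reads under 12% of the pixels while keeping the tracked object at full resolution; the interesting systems questions are how the ROI prediction and the sensor readout interact in closed loop.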
IV. Integrated Engineering and Media Arts Education
Integrated engineering and media arts programs offer interdisciplinary experiences for students and teachers, helping to improve student engagement in these topics. We are researching effective new ways to innovate pedagogically in this domain, blending methods from philosophy and qualitative research to improve teaching and learning. In particular, we focus on issues including artificial intelligence and society, visual media, and the diversity, equity, and inclusion of LGBTQIA+ and other underrepresented students in these fields.
1. Joshua Cruz, Noa Bruhis, Nadia Kellam, Suren Jayasuriya, Students’ Implicit Epistemologies when Working at the Intersection of Engineering and the Arts, International Journal of STEM Education 2021 [pdf]
2. Dominique Dredd, Nadia Kellam, Suren Jayasuriya, Zen and the Art of STEAM: Student Knowledge and Experiences in Interdisciplinary and Traditional Engineering Capstone Experiences, IEEE Frontiers in Education (FIE) 2021 [pdf]
3. Jennings et al., A Review of the State of LGBTQIA+ Student Research in STEM and Engineering Education, American Society of Engineering Education (ASEE) Conference 2020 [pdf] (Finalist for the Best Diversity, Equity & Inclusion Paper Award)
V. Angle Sensitive Pixels
Angle Sensitive Pixels (ASPs) are a class of CMOS image sensor that features integrated diffraction gratings above each pixel. We have shown how ASPs sample the plenoptic dimensions of light, including angle and polarization, and can even be used to optically compute the first layer of a convolutional neural network.
1. Chen/Jayasuriya et al., "ASP Vision: Optically Computing the First Layer of Convolutional Neural Networks using Angle Sensitive Pixels", CVPR 2016 (oral presentation)
2. Hirsch et al., "A Switchable Light Field Camera Architecture using Angle Sensitive Pixels and Dictionary-based Sparse Coding", ICCP 2014 (Best Paper Award)
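To make the "optical first layer" idea concrete, the sketch below uses a bank of Gabor-like filters as a rough stand-in for ASP angular responses (the real responses are set by the grating geometry, not these toy parameters) and applies them as a fixed first convolutional layer; everything downstream of these feature maps would then be computed digitally.

```python
import numpy as np

def gabor(size=7, theta=0.0, freq=0.25):
    """Gabor-like kernel: a rough stand-in for an ASP pixel's angular response."""
    r = np.arange(size) - size // 2
    xx, yy = np.meshgrid(r, r)
    xr = xx * np.cos(theta) + yy * np.sin(theta)
    envelope = np.exp(-(xx**2 + yy**2) / (2 * (size / 4) ** 2))
    return envelope * np.cos(2 * np.pi * freq * xr)

def conv2d_valid(img, k):
    """Valid-mode 2-D correlation (what deep-learning 'conv' layers compute);
    in an ASP camera this filtering happens in the optics, for free."""
    kh, kw = k.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

# A fixed bank of oriented filters, realized in hardware by the gratings; only
# the later layers of the network remain to be computed digitally.
filters = [gabor(theta=t) for t in np.linspace(0, np.pi, 4, endpoint=False)]
img = np.random.default_rng(3).random((32, 32))
feature_maps = np.stack([conv2d_valid(img, f) for f in filters])
```

Because the first-layer weights are fixed by the optics, the digital network trains only the subsequent layers, which is what saves sensing energy and bandwidth.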