Pradyumna Reddy

I am a second-year PhD student in the Smart Geometry Group at University College London, where I work on 3D object generation. My advisor is Prof. Niloy J. Mitra.

I completed my B.E. (Hons.) in Computer Science and Engineering at BITS Pilani, Goa, and did my undergraduate thesis at the Laboratory of Mathematics in Imaging, Psychiatry Neuroimaging Laboratory, Harvard Medical School, where I worked with Prof. Yogesh Rathi. After that, I worked as a Statistical Analyst in the Data and Analytics group at Walmart Labs for a couple of years.

Email  /  GitHub  /  Google Scholar  /  LinkedIn



I'm interested in computer graphics, computer vision, machine learning, and optimization.


Publications

These include papers accepted to conferences or journals, as well as pre-prints.


SeeThrough: Finding Objects in Heavily Occluded Indoor Scene Images

Moos Hueting, Pradyumna Reddy, Ersin Yumer, Vladimir G. Kim, Nathan Carr, Niloy J. Mitra
3DV (Oral Presentation), 2018

Discovering 3D arrangements of objects from single indoor images is important given its many applications, including interior design, content creation, etc. Although heavily researched in recent years, existing approaches break down under medium or heavy occlusion as the core object detection module starts failing in the absence of directly visible cues. Instead, we take into account holistic contextual 3D information, exploiting the fact that objects in indoor scenes co-occur mostly in typical near-regular configurations. First, we use a neural network trained on real annotated indoor images to extract 2D keypoints, and feed them to a 3D candidate object generation stage. Then, we solve a global selection problem among these 3D candidates using pairwise co-occurrence statistics discovered from a large 3D scene database. We iterate the process, allowing candidates with low keypoint response to be incrementally detected based on the locations of already discovered nearby objects. Focusing on chairs, we demonstrate significant performance improvements over combinations of state-of-the-art methods, especially for scenes with moderately to severely occluded objects. Paper


Joint Multi-Fiber NODDI Parameter Estimation and Tractography Using the Unscented Information Filter

Pradyumna Reddy*, Yogesh Rathi
Frontiers in Neuroscience, 2016

Tracing white matter fiber bundles is an integral part of analyzing brain connectivity. An accurate estimate of the underlying tissue parameters is also paramount in several neuroscience applications. In this work, we propose a joint fiber model estimation and tractography algorithm that uses the NODDI (neurite orientation dispersion and density imaging) model to estimate fiber orientation dispersion consistently and smoothly along the fiber tracts, along with estimating the intracellular and extracellular volume fractions from the diffusion signal. While the NODDI model has been used in earlier works to estimate the microstructural parameters at each voxel independently, for the first time, we propose to integrate it into a tractography framework. We extend this framework to estimate the NODDI parameters for two crossing fibers, which is imperative to trace fiber bundles through crossings as well as to estimate the microstructural parameters for each fiber bundle separately. We propose to use the unscented information filter (UIF) to accurately estimate the model parameters and perform tractography. The proposed approach has significant computational performance improvements as well as numerical robustness over the unscented Kalman filter (UKF). Our method not only estimates the confidence in the estimated parameters via the covariance matrix, but also provides the Fisher information matrix of the state variables (model parameters), which can be quite useful for measuring model complexity. Results from in-vivo human brain data sets demonstrate the ability of our algorithm to trace through crossing fiber regions, while estimating orientation dispersion and other biophysical model parameters in a consistent manner along the tracts.


A Braille-based mobile communication and translation glove for deaf-blind people

Tanay Choudhary*, Saurabh Kulkarni*, Pradyumna Reddy*
ICPC, 2015

Deafblind people are excluded from most forms of communication and information. This paper suggests a novel approach to support the communication and interaction of deafblind individuals, thus fostering their independence. It includes a smart glove that translates the Braille alphabet, which is used almost universally by the literate deafblind population, into text and vice versa, and communicates the message via SMS to a remote contact. It enables the user to convey simple messages via capacitive touch sensors placed on the palmar side of the glove as input, which is converted to text by a PC/mobile phone. The wearer can perceive and interpret incoming messages through the tactile feedback patterns of mini vibration motors on the dorsal side of the glove. The successful implementation of real-time two-way translation between English and Braille, and communication of the wearable device with a mobile phone/PC, opens up new opportunities for information exchange that were hitherto unavailable to deafblind individuals, such as remote communication as well as parallel one-to-many broadcast. The glove also makes it possible to communicate with laypersons who do not know Braille, without the need for trained interpreters.


Selective Visualization of Anomalies in Fundus Images via Sparse and Low Rank Decomposition

Amol Mahurkar*, Ameya Joshi*, Naren Nallapareddy*, Pradyumna Reddy*, Micha Feigin, Achuta Kadambi, Ramesh Raskar
SIGGRAPH Poster, 2014

Other Projects

These include coursework, side projects, and unpublished research work.

Design and source code from Jon Barron's website