Alexander 'Sasha' Vilesov

I am a PhD student in the Visual Machines Group with Achuta Kadambi at UCLA where I work on computer vision and computational imaging.

Before starting my PhD, I received my bachelor's degree in Electrical and Computer Engineering from the University of Southern California (USC) in 2021. From 2020 to 2021, I worked at NASA JPL on TriG GPS receivers (flown on COSMIC-2) to support tracking of Galileo satellites.

Email  /  CV  /  Twitter  /  Github

Research

Within computer vision and computational imaging, my interests include text-to-3D generation, image diffusion models, and human-centric computer vision. My research aims to develop scalable methods for generating large-scale 3D scenes, with applications in content generation and entertainment. I am also pursuing novel health-sensing techniques that combine unique hardware configurations with new algorithmic approaches.

Papers
CG3D: Compositional Generation for Text-to-3D via Gaussian Splatting
Alexander Vilesov, Pradyumna Chari, Achuta Kadambi
arXiv, 2023  
Project Page / Paper Link

We present a method for generating realistic, multi-object 3D scenes from text by combining text-to-image diffusion models with Gaussian radiance fields. The resulting scenes are decomposable and editable at the object level.

Making thermal imaging more equitable and accurate: resolving solar loading biases
Ellin Zhao, Alexander Vilesov, Shreeram Athreya, Pradyumna Chari, Jeanette Merlos, Kendall Millett, Nia St Cyr, Laleh Jalilian, Achuta Kadambi
arXiv, 2023  
Paper Link

Despite the widespread use of thermal sensors for temperature screening, their estimates are unreliable in uncontrolled scene conditions, such as after sun exposure. We propose a single-shot correction scheme that eliminates solar loading bias within the time of a typical frame exposure (33 ms).

Blending Camera and 77 GHz Radar Sensing for Equitable, Robust Plethysmography
Alexander Vilesov, Pradyumna Chari, Adnan Armouti, Anirudh B. Harish, Kimaya Kulkarni, Ananya Deoghare, Laleh Jalilian, Achuta Kadambi
SIGGRAPH, 2022  
Project Page / Paper Link / Code

To overcome fundamental skin-tone biases in camera-based remote plethysmography, we propose an adversarial learning-based fair fusion method built on a novel hardware setup that combines an RGB camera with FMCW radar.