AMAI Lab @ GSU (amai-gsu.us)

Advanced Mobility & Augmented Intelligence Lab

Research Opportunities

I am currently looking for strong, self-motivated students with an interest in on-device learning, LLMs for edge devices, and 3D vision. If you are interested in my research and in working with me, please send me an email with your CV and transcripts.

Latest News

[Fall 2025] Xiaolong was



Automatically generating a bird's-eye view (BEV) of an object's surrounding environment is critical for applications like autonomous driving and advanced driver-assistance systems. These systems integrate signals from multiple cameras to construct a top-down view of the environment; prominent examples include the BEV systems deployed in Tesla cars. However, many existing methods depend heavily on Transformers, which employ computationally expensive attention mechanisms to learn accurate representations.
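As a point of reference, the sketch below shows the generic cross-attention step that many Transformer-based BEV methods use: a grid of learned BEV queries attends over flattened multi-camera image features. This is a minimal illustration in PyTorch with hypothetical names and sizes, not the lab's implementation; it only makes the cost concrete, since every BEV cell must attend to every image token.

```python
import torch
import torch.nn as nn

class BEVCrossAttention(nn.Module):
    """Generic Transformer-style BEV aggregation (illustrative sketch only)."""

    def __init__(self, embed_dim=256, num_heads=8, bev_h=50, bev_w=50):
        super().__init__()
        # One learned query per BEV grid cell (bev_h * bev_w queries in total).
        self.bev_queries = nn.Parameter(torch.randn(bev_h * bev_w, embed_dim))
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

    def forward(self, cam_feats):
        # cam_feats: (batch, n_cams * H * W, embed_dim) flattened image tokens.
        b = cam_feats.shape[0]
        q = self.bev_queries.unsqueeze(0).expand(b, -1, -1)
        # Every BEV query attends to every camera token, so the cost scales with
        # (number of BEV cells) x (number of image tokens).
        bev, _ = self.attn(q, cam_feats, cam_feats)
        return bev  # (batch, bev_h * bev_w, embed_dim)

# Example: 6 surround-view cameras with 32x88 feature maps each.
feats = torch.randn(2, 6 * 32 * 88, 256)
print(BEVCrossAttention()(feats).shape)  # torch.Size([2, 2500, 256])
```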

In this work, we introduce Spatial Cross Mamba, an innovative approach analogous to standard cross-attention in Transformers. Our method leverages the efficiency of state space models (SSMs) to significantly reduce the computational overhead associated with Transformers, enabling more efficient and scalable BEV systems without compromising representation accuracy.
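For intuition, the toy sketch below shows the basic idea that Mamba-style models build on: a discretized linear state space recurrence scanned over the token sequence in linear time, rather than attention over all query-token pairs. The actual Spatial Cross Mamba operator is more involved (it couples BEV queries with camera features and, like Mamba, uses input-dependent parameters); this hypothetical, diagonal-state example only illustrates the scan itself.

```python
import torch

def ssm_scan(x, A, B, C):
    """Toy diagonal SSM: x (batch, L, d_in), A (d_state,), B (d_state, d_in), C (d_out, d_state)."""
    batch, L, _ = x.shape
    h = torch.zeros(batch, A.shape[0])
    ys = []
    for t in range(L):
        # h_t = A * h_{t-1} + B x_t  (diagonal A kept for simplicity)
        h = A * h + x[:, t] @ B.T
        # y_t = C h_t
        ys.append(h @ C.T)
    # One pass over the sequence: O(L) in sequence length, vs. attention's
    # cost over all (query, token) pairs.
    return torch.stack(ys, dim=1)  # (batch, L, d_out)

x = torch.randn(2, 16, 32)
A = torch.rand(64) * 0.9          # decay factors below 1 keep the state stable
B = torch.randn(64, 32) * 0.1
C = torch.randn(32, 64) * 0.1
print(ssm_scan(x, A, B, C).shape)  # torch.Size([2, 16, 32])
```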