Professor
University of Zurich, Switzerland
Bio: Davide Scaramuzza is a Professor of Robotics and Perception at the University of Zurich. He did his Ph.D. at ETH Zurich, a postdoc at the University of Pennsylvania, and was a visiting professor at Stanford University. His research focuses on autonomous, agile microdrone navigation using standard and event-based cameras. He pioneered autonomous, vision-based navigation of drones, which inspired the navigation algorithm of the NASA Mars helicopter and many drone companies. He contributed significantly to visual-inertial state estimation, vision-based agile navigation of microdrones, and low-latency, robust perception with event cameras, which were transferred to many products, from drones to automobiles, cameras, AR/VR headsets, and mobile devices. In 2022, his team demonstrated that an AI-powered drone could outperform the world champions of drone racing, a result published in Nature and considered the first time an AI defeated a human in the physical world. He is a consultant for the United Nations on disaster response and disarmament. He has won many awards, including an IEEE Technical Field Award, elevation to IEEE Fellow, the IEEE Robotics and Automation Society Early Career Award, a European Research Council Consolidator Grant, a Google Research Award, two NASA TechBrief Awards, and many paper awards. In 2015, he co-founded Zurich-Eye, today Meta Zurich, which developed the world-leading virtual-reality headset Meta Quest. In 2020, he co-founded SUIND, which builds autonomous drones for precision agriculture. Many aspects of his research have been featured in the media, such as The New York Times, The Economist, and Forbes.
Abstract: Event cameras are bio-inspired vision sensors with much lower latency, higher dynamic range, and much lower power consumption than standard cameras. This talk will present current trends and opportunities with event cameras, ranging from robotics to virtual reality and smartphones, as well as open challenges and the road ahead.
Professor
University of Windsor, Canada
Bio: Dr. Jonathan Wu received a PhD in Computer Vision and Intelligent Systems from the University of Wales, UK. Dr. Wu is a Distinguished Professor of Electrical and Computer Engineering and has been a Tier 1 Canada Research Chair in Automotive Sensors and Information Systems since 2005. He is the founding director of the Computer Vision and Sensing Systems Laboratory at the University of Windsor, Canada. Prior to joining the university, Dr. Wu was a senior research officer at the National Research Council of Canada. He has published one book in the area of 3D computer vision and more than 350 peer-reviewed papers, including 200 journal articles, in the areas of computer vision, machine learning, and sensor data fusion. Dr. Wu is/was an associate editor for IEEE Transactions on Cybernetics, IEEE Transactions on Circuits and Systems for Video Technology, and IEEE Transactions on Neural Networks and Learning Systems. He is an elected fellow of the Canadian Academy of Engineering.
Abstract: Advancements in multisensor data fusion—integrating IR imaging, SAR, hyperspectral imaging, and LiDAR—have significantly enhanced object detection, classification, and scene understanding in applications such as urban monitoring, autonomous navigation, industrial diagnostics, and environmental monitoring. Deep learning-based data fusion, leveraging techniques such as convolutional neural networks (CNNs), transformers, and attention mechanisms, has demonstrated superior performance over traditional approaches in integrating heterogeneous data sources. Despite these advancements, challenges such as data heterogeneity, computational complexity, and real-time processing constraints remain. Recent fusion techniques offer unique advantages in feature integration, decision-making, and robustness against sensor failures. Moreover, novel approaches such as selective sensor fusion, interleaved attention fusion, and multi-scale feature fusion have further enhanced the adaptability and accuracy of deep learning-based fusion models. This keynote will explore state-of-the-art fusion methodologies and generalized inverse-based optimization, highlighting key challenges, case studies, and future research directions. The discussion will focus on scalable and efficient perception systems, addressing how deep learning and graph-based fusion strategies are transforming cybersecurity, intelligent transportation, and healthcare, among other fields.