A seminar titled
"Semantic Perception for Intelligent Systems: Going from Robot Manipulators to Autonomous Vehicles" will be given on
Thursday, June 23, 2022, at 10:30 (GMT+3) by Dr. Eren Erdal Aksoy as part of the Game and Interaction Technologies Master Program seminars. Anyone interested is invited to attend.
Title: Semantic Perception for Intelligent Systems: Going from Robot Manipulators to Autonomous Vehicles
Speaker: Dr. Eren Erdal Aksoy
Date: Thursday, June 23, 2022 at 10:30 (GMT+3)
Location: Faculty of Computer and Informatics Engineering, Idris Yamanturk Conference Hall, No: 1304.
Abstract: This talk consists of two parts. In the first part, I will present a new holistic view of manipulation semantics that combines the perception and execution of manipulation actions in a single framework, the so-called “Semantic Event Chain” (SEC). The SEC concept is an implicit spatiotemporal formulation that encodes actions by coupling the observed effects with the roles exhibited by the manipulated objects. I will explain how such semantic action encoding allows robots to link continuous visual sensory signals (e.g., image sequences) to their symbolic descriptions (e.g., action primitives). I will then elaborate on creating a robot-agnostic semantic library of actions that can be further employed to generate complex chained manipulation sequences while grounding high-level symbolic plans in low-level sensory-motor signals.
In the second part of the talk, I will introduce our recent multi-modal domain translation framework, which can, for the first time, synthesize a panoramic color image from a given full 3D LiDAR point cloud by leveraging the underlying semantics of the perceived scene. In contrast to end-to-end approaches, I will argue that a modular generative pipeline that mediates the translation between perceptually different sensor readings via semantic scene information can ease the process to a great extent. I will end my talk by presenting potential applications of our framework in the context of autonomous driving, such as handling sensor failures and generating various labeled RGB-D images without using a camera sensor.
Bio: Eren Erdal Aksoy is an Associate Professor at Halmstad University in Sweden. He obtained his Ph.D. degree in computer science from the University of Göttingen, Germany, in 2012. During his Ph.D. studies, he invented the concept of Semantic Event Chains to encode, learn, and execute human manipulation actions in the context of robot imitation learning. His framework has been used as a technical robot perception-action interface in many EU projects (e.g., IntellACT, Xperience, ACAT). Before moving to Sweden, he spent three years as a postdoctoral research fellow at the Karlsruhe Institute of Technology in the H2T group of Prof. Dr. Tamim Asfour. He has also been a visiting scholar at Volvo GTT and Zenseact AB in Sweden, working on AI-based perception algorithms for autonomous vehicles. He serves as an Associate Editor for several high-ranking robotics journals and conferences (RA-L, IROS, Humanoids, etc.). His research interests include action semantics, computer vision, AI, and cognitive robotics. He has been actively working on creating semantic representations of visual experiences to achieve better environment and action understanding for autonomous systems such as robots and unmanned vehicles. He is the main coordinator of the Horizon Europe project ROADVIEW, which focuses on robust automated driving in extreme weather.