Department of Electrical Engineering and Automation

Mobile Robotics

The Mobile Robotics group focuses on enabling autonomous physical agents (robots) to operate safely, successfully, and legibly in dynamic environments shared with humans. Our research addresses real-world needs through the development of methods for embodied intelligence.

News

Exciting updates from our research group, including upcoming workshops, paper acceptances, and a newly funded proposal!

  • Advancing robotics in harsh environments with the support of Unite! Seed Fund

The project 'Towards fleets of robust and agile mobile robots in harsh environments' is led by Assistant Professor Tomasz Kucner.

  • Multi-Robot Inspection and Monitoring - Real-World Challenge

The IEEE RAS Summer School 2024, hosted at the Czech Technical University (CTU) in Prague, was an incredible event focused on multi-agent systems and swarm robotics. The program offered in-depth insights into cutting-edge algorithms, coordination strategies, and the future of robotics, covering topics such as coordination in challenging environments, localization and planning, drone platforms, and safety considerations. Dr. Stefano V. Albrecht's lectures on multi-agent reinforcement learning were particularly impactful for me. The highlight of the event was the real-world competition, where teams tackled a complex multi-robot inspection and monitoring task. Collaborating with talented peers from various universities, our team developed a solution for assigning predetermined viewpoints and addressing trajectory planning challenges for two UAVs in a 3D environment with obstacles. Our approach focused on optimizing inspection time while maintaining collision-free paths and respecting dynamic constraints. Competing against 37 international teams in both virtual and real-world challenges, we secured 3rd place!
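
As a rough illustration of the first step in such a pipeline, the sketch below (purely illustrative, not our actual competition code) shows one simple way to split predetermined viewpoints between two UAVs: each viewpoint is appended to whichever tour it extends more cheaply, which keeps the two tour lengths, a proxy for inspection time, balanced. Collision checking, trajectory optimization, and dynamic constraints are deliberately left out, and all coordinates and names are made up.

```python
import math

def assign_viewpoints(viewpoints, starts):
    """Greedily split predetermined viewpoints between two UAVs.

    viewpoints: list of (x, y, z) points to inspect.
    starts: the two UAV start positions.
    Each viewpoint is appended to whichever UAV's tour it extends
    more cheaply, which roughly balances the two tour lengths.
    """
    tours = [[s] for s in starts]   # growing tour per UAV
    lengths = [0.0, 0.0]            # current tour length per UAV

    for vp in viewpoints:
        # Cost of appending this viewpoint to each UAV's tour.
        costs = [lengths[i] + math.dist(tours[i][-1], vp) for i in range(2)]
        i = min(range(2), key=lambda k: costs[k])
        tours[i].append(vp)
        lengths[i] = costs[i]
    return tours, lengths

# Toy usage: two UAVs and six viewpoints around a small structure.
starts = [(0.0, 0.0, 1.0), (10.0, 0.0, 1.0)]
viewpoints = [(1, 2, 3), (2, 8, 3), (9, 1, 4), (8, 7, 4), (5, 5, 6), (4, 9, 2)]
tours, lengths = assign_viewpoints(viewpoints, starts)
print(tours, lengths)
```

In the real challenge, an assignment like this would still have to be followed by trajectory planning that enforces obstacle clearance and the UAVs' dynamic limits.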

  • Workshop for IEEE/RSJ IROS 2024

From Learning-based to Foundation Models for Mapping: Challenges and Opportunities (external link)

In recent years, advances in machine learning and computer vision have led to major innovations in scene understanding. However, due to the limited generalization capabilities of these approaches and the lack of standards, only a small fraction of these promising ideas has been widely adopted by the robotics research community. Instead, research on spatial representations continues to be largely shaped by algorithms and methods established before the deep learning revolution. In this workshop, our objective is twofold. First, we seek to explore the opportunities that recent innovations in the machine learning community offer to spatial and semantic representations for robotics. We will focus the discussion on learning-based models, including large language and foundation models, whose exceptional capabilities in comprehending and processing semantic knowledge enable open-vocabulary navigation and promise increased generalization. Second, we aim to identify the barriers hindering the widespread adoption of these technologies within our community. Our goal is to lay the groundwork for a machine learning toolkit for semantic spatial representation, designed specifically for the needs of the autonomous mobile robotics community.

  • Unite! Seed Fund 2024 awards

    Towards fleets of robust and agile mobile robots in harsh environments

Unite! Seed Fund 2024 awards funding to 13 applications with Aalto's involvement

The Unite! Seed Fund aims to stimulate and support bottom-up proposals by teachers, researchers and students for collaborative activities.

  • Paper Accepted in Engineering Applications of Artificial Intelligence

Exploring Contextual Representation and Multi-modality for End-to-end Autonomous Driving (external link)

Our work enhances autonomous vehicle hazard anticipation and decision-making by integrating contextual and spatial environmental representations, inspired by human driving patterns. We introduce a framework utilizing three cameras (left, right, center) and top-down bird's-eye-view data fused via self-attention, with a vision transformer for sequential feature representation. Experimental results show our method reduces displacement error by 0.67 m in open-loop settings on nuScenes and enhances driving performance in CARLA's Town05 Long and Longest6 benchmarks.
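
To make the fusion idea concrete, here is a minimal, hypothetical PyTorch sketch (not the paper's actual architecture): per-view feature vectors for the three cameras and the bird's-eye view are treated as a short token sequence, mixed by a small self-attention encoder, and mapped to a few future waypoints. Image backbones, the sequential vision transformer, temporal modelling, and training objectives are all omitted, and every dimension and name is illustrative.

```python
import torch
import torch.nn as nn

class MultiViewFusion(nn.Module):
    """Toy illustration: fuse per-view features with self-attention.

    Each of the four inputs (left/centre/right camera, bird's-eye view)
    is assumed to be already encoded into a feature vector; the four
    vectors form a token sequence that is mixed by a small transformer
    encoder and pooled into a single driving feature.
    """
    def __init__(self, feat_dim=256, n_heads=4, n_layers=2, n_waypoints=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=feat_dim, nhead=n_heads, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(feat_dim, n_waypoints * 2)  # (x, y) per waypoint

    def forward(self, left, centre, right, bev):
        # Each input: (batch, feat_dim); stack into a sequence of 4 tokens.
        tokens = torch.stack([left, centre, right, bev], dim=1)
        fused = self.fusion(tokens)      # self-attention across the views
        pooled = fused.mean(dim=1)       # simple average pooling
        return self.head(pooled).view(-1, 4, 2)

model = MultiViewFusion()
feats = [torch.randn(2, 256) for _ in range(4)]  # dummy per-view features
waypoints = model(*feats)                         # -> shape (2, 4, 2)
print(waypoints.shape)
```

The point of the self-attention stage is that each view's feature is re-weighted in the context of the others before any driving decision is regressed.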

  • Talk by Tomasz Kucner at the FCAI Machine Learning Coffee Seminar

Anticipating Motion Patterns for Improved Navigation — FCAI (external link)

Autonomous mobile robots are being deployed in more diverse environments than ever before. These include shared spaces, where robots and humans have to coexist and cooperate. To ensure that this coexistence is safe and successful, robots must be able to learn and use information about human motion patterns to improve their performance.

The goal of the talk is to introduce listeners to the field and present existing and potential applications of maps of dynamics. It will also provide insight into open research questions and under-explored research directions.
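
For readers unfamiliar with the term, the sketch below gives a deliberately simplified picture of what a map of dynamics can store (it is not the method presented in the talk): a grid map whose cells accumulate histograms of observed motion directions, from which a planner can read the typical flow of people through a shared space. All parameters, names, and numbers are invented for the example.

```python
import math
from collections import defaultdict

class DirectionGridMap:
    """Illustrative map of dynamics: per-cell histograms of motion directions."""

    def __init__(self, cell_size=1.0, n_bins=8):
        self.cell_size = cell_size
        self.n_bins = n_bins
        self.hist = defaultdict(lambda: [0] * n_bins)  # cell -> direction counts

    def _cell(self, x, y):
        return (int(x // self.cell_size), int(y // self.cell_size))

    def add_observation(self, x, y, heading):
        # Bin the observed heading (radians) into one of n_bins directions.
        b = int(((heading % (2 * math.pi)) / (2 * math.pi)) * self.n_bins) % self.n_bins
        self.hist[self._cell(x, y)][b] += 1

    def dominant_direction(self, x, y):
        counts = self.hist.get(self._cell(x, y))
        if not counts or sum(counts) == 0:
            return None  # no motion observed in this cell yet
        b = max(range(self.n_bins), key=lambda i: counts[i])
        return (b + 0.5) * 2 * math.pi / self.n_bins  # bin centre, in radians

# Toy usage: people mostly walking in the +x direction through one corridor cell.
m = DirectionGridMap()
for _ in range(20):
    m.add_observation(2.3, 1.1, 0.05)
m.add_observation(2.3, 1.1, math.pi)   # one person walking the other way
print(m.dominant_direction(2.3, 1.1))  # ~0.39 rad, i.e. roughly the +x bin
```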
