Taxiarchis Foivos Blounas
Robotic Reconfiguration of Dynamic Indoor Environments
My research focuses on enabling mobile manipulators to autonomously reconfigure dynamic indoor environments from abstract task specifications. This work involves the development of systems that can perceive, reason, and act in complex, evolving settings. A central challenge lies in constructing manipulation-relevant representations of the environment. To address this, semantic mapping techniques are being developed to enable robust localization with respect to immovable features, generate structured scene graphs that encode object relationships, and support the detection of environmental changes over time. Building on these representations, large language models are used to guide symbolic reconfiguration planning, allowing the system to interpret high-level instructions and automatically generate behavior trees for execution. The physical execution of reconfiguration relies on force-aware manipulation strategies that ensure safe and effective interaction with movable objects.
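To make the scene-graph idea concrete, below is a minimal sketch (illustrative names only, not the project's actual interfaces) of object nodes with pairwise spatial relations, plus a naive check for environmental changes between two snapshots of the same scene:

```python
# Minimal scene-graph sketch: object nodes with pairwise spatial relations,
# plus a naive change check between two snapshots of the same scene.
# All names are illustrative; they do not reflect the project's actual code.
from dataclasses import dataclass, field


@dataclass
class ObjectNode:
    name: str            # e.g. "chair_1"
    category: str        # e.g. "chair"
    pose: tuple          # (x, y, z) in the map frame
    movable: bool = True


@dataclass
class SceneGraph:
    nodes: dict = field(default_factory=dict)      # name -> ObjectNode
    relations: set = field(default_factory=set)    # (subject, relation, object)

    def add(self, node: ObjectNode):
        self.nodes[node.name] = node

    def relate(self, subj: str, rel: str, obj: str):
        self.relations.add((subj, rel, obj))

    def changed_relations(self, other: "SceneGraph") -> set:
        """Relations present in one snapshot but not the other."""
        return self.relations ^ other.relations


# Usage: a chair moved from under the table to next to the sofa.
before, after = SceneGraph(), SceneGraph()
for g in (before, after):
    g.add(ObjectNode("table_1", "table", (2.0, 1.0, 0.0), movable=False))
    g.add(ObjectNode("sofa_1", "sofa", (4.0, 0.5, 0.0), movable=False))
    g.add(ObjectNode("chair_1", "chair", (2.1, 1.2, 0.0)))
before.relate("chair_1", "under", "table_1")
after.relate("chair_1", "next_to", "sofa_1")
print(before.changed_relations(after))
```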
These capabilities are integrated on heterogeneous robotic platforms, including ABB’s Mobile YuMi and mobile forklifts. The final goal is to demonstrate full-scene reconfiguration in realistic indoor environments that reflect the capabilities and constraints of the available platforms. By combining semantic perception, high-level planning, and robust whole-body interaction, the research aims to advance the autonomy and adaptability of mobile manipulators in real-world indoor settings.


Jiaolong Li
Advancing Bi-Manual Robotics Through Imitation Learning
My goal is to leverage the power of imitation learning to advance the development of bi-manual robots, enabling these multi-limbed systems to learn policies more effectively and efficiently. By learning complex, coordinated actions directly from human demonstrations, bi-manual robots can reach new levels of dexterity and adaptability. This approach promises to streamline the training process, reduce the need for extensive manual programming, and ultimately lead to more capable and versatile robotic agents for a wide range of applications.
Abhishek Kashyap
Learning Object Affordances for Full-body Mobile Manipulation
A mobile robot manipulator – or, in short, a mobile manipulator – comprises one or more robot arms mounted on a movable base. The mobile base greatly expands the reachable workspace, allowing the manipulator to perform tasks that would otherwise be infeasible due to reachability constraints or insufficient degrees of freedom. To accomplish tasks, the mobile manipulator needs to form a representation of the scene or environment it is operating in. Scene representations like Neural Radiance Fields (NeRFs) and Gaussian Splatting encode scene information in a way that can also render photorealistic images from novel viewpoints. This project explores scene representations suitable for manipulation and navigation in the context of mobile manipulation.
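As a toy illustration of how such a representation could serve navigation, the sketch below assumes a trained volumetric model exposes a density query (the query_density function here is a stand-in, not a real NeRF or Gaussian Splatting API) and uses it to check a base trajectory for collisions:

```python
# Toy sketch: using a learned volumetric scene representation (a stand-in for
# a NeRF/Gaussian-splat density query) to check a base trajectory for
# collisions. `query_density` is a placeholder, not a real library call.
import numpy as np


def query_density(points: np.ndarray) -> np.ndarray:
    """Placeholder for a trained model's density/occupancy at 3D points."""
    # Here: a single box-shaped obstacle centred at (1.0, 0.0, 0.5).
    inside = np.all(np.abs(points - np.array([1.0, 0.0, 0.5])) < 0.3, axis=-1)
    return inside.astype(float)


def trajectory_in_collision(waypoints: np.ndarray, threshold: float = 0.5) -> bool:
    """Sample densities along straight segments between waypoints."""
    samples = []
    for a, b in zip(waypoints[:-1], waypoints[1:]):
        ts = np.linspace(0.0, 1.0, 20)[:, None]
        samples.append(a + ts * (b - a))
    densities = query_density(np.concatenate(samples))
    return bool(np.any(densities > threshold))


path = np.array([[0.0, 0.0, 0.5], [2.0, 0.0, 0.5]])   # passes through the box
print(trajectory_in_collision(path))                   # True
```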
Furthermore, the tasks to be performed by the mobile manipulator may impose motion constraints. For example, navigating through a door requires identifying the door handle and then planning and executing arm and base trajectories that respect the physical constraints of the door, handle, and hinges. Similarly, grabbing an object off a table without stopping the base requires synchronized whole-body control. Devising general-purpose approaches to identify such motion constraints can make a mobile manipulator more efficient in its tasks.
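As a small example of such a constraint, the sketch below (with made-up geometry) computes the arc the door handle traces around its hinge, which the arm and base trajectories would then have to follow:

```python
# Sketch of a door's articulation constraint: the handle is confined to a
# circular arc around the hinge, so arm/base trajectories can be planned
# against these waypoints. Geometry and names are illustrative assumptions.
import numpy as np


def handle_arc(hinge_xy, handle_xy, open_angle_rad, steps=10):
    """Waypoints the handle traces as the door swings open about its hinge."""
    hinge = np.asarray(hinge_xy, dtype=float)
    offset = np.asarray(handle_xy, dtype=float) - hinge
    waypoints = []
    for theta in np.linspace(0.0, open_angle_rad, steps):
        c, s = np.cos(theta), np.sin(theta)
        rot = np.array([[c, -s], [s, c]])
        waypoints.append(hinge + rot @ offset)
    return np.array(waypoints)


# Door hinged at the origin, handle 0.8 m away, opened by 90 degrees.
print(handle_arc((0.0, 0.0), (0.8, 0.0), np.pi / 2))
```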


Esranur Ertürk
“Get a grip” – Accurate robot-human handovers
The goal is to develop a robotic system that can accurately hand over objects to humans, ensuring precise placement without re-gripping. The project addresses three main challenges: achieving effective scene understanding and situation awareness, accurately recognizing user intentions and high-level reasoning, and teaching and executing collaborative tasks. By focusing on these areas, the project seeks to improve intuitive robot instruction and responsive collaboration in both medical and industrial contexts, ultimately contributing to more efficient workflows and addressing labor shortages in sectors like healthcare and manufacturing.
The project plan focuses on developing methods to determine which objects to hand over, how to place them in the user’s hand, and how to execute these tasks effectively. Initially, the project will use tools like a scalpel and a screwdriver. Hand tracking enables the robot to understand and respond to human gestures, ensuring precise object handovers. This involves the robot interpreting the surgeon’s or worker’s gestures and hand poses to accurately place the tool in their hand without the need for re-gripping. Tool tracking, on the other hand, allows the robot to accurately identify and manipulate these specific tools during collaborative tasks, ensuring that each tool is handed over in the correct orientation and position for immediate use.
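As a simple illustration of the geometry involved, the sketch below (with assumed offsets, not the project's calibration) composes a tracked palm pose with a tool-specific placement offset to obtain the handover target pose:

```python
# Hedged sketch: composing a tracked hand pose with a tool-specific placement
# offset to get the handover target pose, so the tool lands ready to use
# without re-gripping. Uses homogeneous transforms; all offsets are made up.
import numpy as np


def make_pose(rotation: np.ndarray, translation) -> np.ndarray:
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T


# Tracked palm pose in the robot's base frame (identity rotation, 0.6 m ahead).
T_base_hand = make_pose(np.eye(3), [0.6, 0.0, 0.9])

# Tool-specific offset: where the scalpel handle should sit relative to the palm.
T_hand_tool = make_pose(np.eye(3), [0.0, 0.0, 0.05])

# Target pose for the tool (and hence the gripper holding it) at handover.
T_base_tool = T_base_hand @ T_hand_tool
print(T_base_tool[:3, 3])   # desired tool position in the base frame
```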
By integrating high-level knowledge and explicit commands, the system can predict what tool to hand over and when, based on the context of the task and the workflow. This involves using symbolic domain knowledge and sensor information to understand the user’s intentions and make informed decisions. The system aims to balance personalization with general solutions, learning individual user preferences for gestures, speed, and force parameters, while also maintaining a level of generality that allows it to adapt to different users and scenarios.
Sebastiano Fregnan
Whole-Body Interactive Mobile Manipulation
The project addresses collision-free trajectory generation, constrained control of the composite platform-manipulator motion, and adaptive whole-body compliant control for interaction with obstacles. We plan to use off-the-shelf visual perception to capture information about the obstacles, and force/torque sensors, tactile skin, and proprioception for whole-body compliance.
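As a rough illustration of the compliance idea, the sketch below shows a minimal admittance law (gains and interfaces are assumptions, not the project's controller) that turns measured contact forces into a velocity offset so the platform yields on contact:

```python
# Minimal admittance-control sketch for whole-body compliance: measured
# contact forces are mapped to a velocity offset so the platform "gives way"
# on contact. Gains and the interface are illustrative assumptions only.
import numpy as np


def compliant_velocity(v_nominal: np.ndarray,
                       f_measured: np.ndarray,
                       admittance_gain: float = 0.02,
                       v_max: float = 0.5) -> np.ndarray:
    """Add a force-proportional offset to the nominal velocity command."""
    v = v_nominal + admittance_gain * f_measured
    norm = np.linalg.norm(v)
    return v if norm <= v_max else v * (v_max / norm)


# Nominal forward motion, 20 N contact force pushing from the left.
print(compliant_velocity(np.array([0.3, 0.0, 0.0]), np.array([0.0, -20.0, 0.0])))
```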


Adam Miksits
Communication-Control for robot localization
The team I work in at Ericsson Research studies communication-control co-design for connected mobile robots. As part of that work, my research investigates the implications of offloading the robot's localization to the edge, which causes network congestion when scaled up to many robots. Sending fewer sensor measurements from the robot to the edge reduces the load on the network, but at the cost of higher localization uncertainty. This can also affect safety if the localization estimate is used to avoid obstacles in the robot's map. The controller on the robot can sometimes handle the uncertainty by driving slower or staying further away from obstacles, but then the robot takes much longer to reach its goal. By looking at the communication and control aspects together, we can find a good trade-off between network load and localization uncertainty while keeping the robot safe.
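The sketch below illustrates this trade-off with a one-dimensional Kalman filter standing in for the actual estimator (all numbers are illustrative): the less often measurements are sent, the larger the covariance at the next update, and the lower the safe speed limit near an obstacle:

```python
# Sketch of the trade-off: when measurements are offloaded less frequently,
# the localization covariance grows between updates, and the controller reacts
# by lowering speed near obstacles. A 1-D Kalman filter stands in for the
# actual estimator; all numbers are illustrative.
import numpy as np

q, r = 0.01, 0.04           # process and measurement noise variances


def propagate(P, steps):
    """Covariance growth while no measurement is sent to the edge."""
    return P + steps * q


def update(P):
    """Covariance reduction when a measurement arrives (scalar Kalman update)."""
    return P - P * P / (P + r)


def speed_limit(P, dist_to_obstacle, v_max=1.0, k=3.0):
    """Slow down when the k-sigma position error eats into the clearance."""
    margin = max(dist_to_obstacle - k * np.sqrt(P), 0.0)
    return min(v_max, margin)


P0 = 0.01
for send_every in (1, 5, 20):          # send a measurement every N steps
    Pk = update(propagate(P0, send_every))
    print(send_every, round(Pk, 4), round(speed_limit(Pk, 0.5), 3))
```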
Matti Vahs
Controllers with safety guarantees under imprecise state knowledge
My research is driven by the principle that safety is not an afterthought: it must be considered carefully when designing robot autonomy. Especially when deploying safety-critical robots in real-world scenarios, we must ensure beforehand that they operate as intended. However, this is challenging under common uncertainties arising from unmodeled dynamics or noisy sensor readings. I am concerned with the question of how to design controllers with rigorous safety guarantees under imprecise state knowledge. To achieve this, I move the design process of robot control from the state space to the belief space. Reasoning about beliefs instead of states allows us to exploit two main properties: risk awareness and information gathering. One of the highlights of my research so far is that we have shown how risk-aware safety guarantees can be enforced on the Mobile YuMi platform at WARA Robotics.
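As a simplified illustration of a risk-aware constraint in belief space (not the controller deployed on the Mobile YuMi), the sketch below tightens a half-space safety constraint by the state uncertainty so that it holds with a prescribed probability under a Gaussian belief:

```python
# Sketch of a risk-aware constraint in belief space: a half-space constraint
# a^T x <= b is tightened by the state uncertainty so that it holds with
# probability at least 1 - delta under a Gaussian belief. All values are
# illustrative assumptions.
import numpy as np
from scipy.stats import norm


def chance_constraint_satisfied(mean, cov, a, b, delta=0.05) -> bool:
    """Check Pr(a^T x <= b) >= 1 - delta for x ~ N(mean, cov)."""
    a = np.asarray(a, dtype=float)
    margin = norm.ppf(1.0 - delta) * np.sqrt(a @ cov @ a)
    return float(a @ mean) + margin <= b


mean = np.array([0.4, 0.0])                 # believed position
cov = np.diag([0.02, 0.02])                 # belief covariance
print(chance_constraint_satisfied(mean, cov, a=[1.0, 0.0], b=0.5))
```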


Faseeh Ahmad
Towards Self-Reliant Robots that Learn, Adapt, and Recover
I am pursuing my PhD at Lund University with a single driving question: how can robots look after themselves when the unexpected happens? My work blends symbolic behavior modelling, data-driven learning, and multimodal perception so that a robot can plan its actions, notice when something is amiss, explain what went wrong, and fix the problem without stopping for a human. I structure skills with Behavior Trees to keep actions modular and interpretable, tune those skills using sample-efficient reinforcement learning, and apply Vision-Language Models to understand scenes, predict risks, and suggest repairs. These ideas are tested in simulators like MuJoCo, AI2-THOR, and Isaac Sim, and on real robots including ABB YuMi, KUKA iiwa, and a UR5e mobile manipulator across tasks such as peg insertion, surface wiping, and small-batch assembly. In these settings, the robot learns to adjust motion parameters, detect failures early, and modify its Behavior Tree on the fly using a reactive planner. My goal is to equip robots with the confidence and transparency needed to support humans in factories, labs, and homes where conditions change quickly and downtime is costly, moving towards collaborative robots that are not only autonomous but also trustworthy companions in everyday settings.
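The sketch below gives a minimal flavour of the behavior-tree structure involved (illustrative node names, and no VLM in this toy version): a fallback node tries the nominal skill and, when it fails, runs a recovery branch instead:

```python
# Minimal behavior-tree sketch: a fallback (selector) tries the nominal skill
# and, on failure, runs a recovery branch, mirroring the idea of modifying the
# tree when something goes wrong. Node names are illustrative, not the
# project's actual skills.
SUCCESS, FAILURE = "SUCCESS", "FAILURE"


class Action:
    def __init__(self, name, fn):
        self.name, self.fn = name, fn

    def tick(self):
        return self.fn()


class Fallback:
    """Return SUCCESS as soon as one child succeeds."""
    def __init__(self, children):
        self.children = children

    def tick(self):
        for child in self.children:
            if child.tick() == SUCCESS:
                return SUCCESS
        return FAILURE


class Sequence:
    """Return FAILURE as soon as one child fails."""
    def __init__(self, children):
        self.children = children

    def tick(self):
        for child in self.children:
            if child.tick() == FAILURE:
                return FAILURE
        return SUCCESS


# Nominal peg insertion fails; the recovery branch re-grasps and retries.
tree = Fallback([
    Action("insert_peg", lambda: FAILURE),
    Sequence([Action("regrasp_peg", lambda: SUCCESS),
              Action("insert_peg_retry", lambda: SUCCESS)]),
])
print(tree.tick())   # SUCCESS via the recovery branch
```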
Jonathan Styrud
Creating Behavior Trees for Robot Automation in Dynamic Environments
In our work, we try to improve the ability of collaborative robots to autonomously learn transparent and modular control architectures such as behavior trees, aiming to take a step towards “show and tell”-like programming that drastically reduces programming times. Specifically, we look at ways to combine different paradigms such as machine learning, planning, and human-robot interfaces. We aim to answer research questions like “How can various methods for generating behavior trees best be combined?”, “What are the advantages and disadvantages of the different methods?”, and “How can we improve on existing methods for these new implementations and combinations?”

Publications
Domínguez, D.C., Iannotta, M., Kashyap, A., Sun, S., Yang, Y., Cella, C., Colombo, M., Pelosi, M., Preziosa, G.F., Tafuro, A., Zappa, I., Busch, F., Dong, Y., Longhini, A., Lu, H., Cabral Muchacho, R.I., Styrud, J., Fregnan, S., Guberina, M., Jia, Z., Carriero, G., Lindqvist, S., Di Castro, S., Iovino, M., (2025). The First WARA Robotics Mobile Manipulation Challenge – Lessons Learned. To be published in the European Conference on Mobile Robots (ECMR), Padova, Italy.
Turcato, N., Iovino, M., Synodinos, A., Libera, A.D., Carli, R. and Falco, P., (2025). Towards Autonomous Reinforcement Learning for Real-World Robotic Manipulation with Large Language Models. To be published in IEEE Robotics and Automation Letters (RA-L).
Ahmad, F., Ismail, H., Styrud, J., Stenmark, M., & Krueger, V. (2025). A Unified Framework for Real-Time Failure Handling in Robotics Using Vision-Language Models, Reactive Planner and Behavior Trees. To be presented at the 2025 IEEE International Conference on Automation Science and Engineering (CASE), Los Angeles, USA.
Ahmad, F., Styrud, J., & Krueger, V. (2024). Addressing Failures in Robotics using Vision-Based Language Models (VLMs) and Behavior Trees (BT). Presented at the 2025 European Robotics Forum (ERF), Stuttgart, Germany.
Miksits, A., Barbosa, F.S., Araújo, J., & Johansson, K. H. (2024, November). Communication and Control Co-Design for Risk-Aware Safety of Mobile Robots with Offloaded Localization. Submitted to the 2025 European Control Conference (ECC).
Styrud, J., Iovino, M., Norrlöf, M., Björkman, M., Smith, C. (2024, September). Automatic Behavior Tree Expansion with LLMs for Robotic Manipulation. To be presented at the 2025 IEEE International Conference on Robotics and Automation (ICRA), Atlanta, USA.
Iovino, M., Förster, J., Falco, P., Chung, J.J., Siegwart, R., & Smith, C. (2024, August). Comparison between Behavior Trees and Finite State Machines. arXiv preprint arXiv:2405.16137
Styrud, J., Mayr, M., Hellsten, E., Krueger, V., & Smith, C. BeBOP–Combining Reactive Planning and Bayesian Optimization to Solve Robotic Manipulation Tasks. 2024 IEEE International Conference on Robotics and Automation (ICRA), Yokohama, Japan, 2024, pp. 16459-16466, doi: 10.1109/ICRA57147.2024.10611468.
Vahs, M., & Tumova, J. Risk-aware control for robots with non-gaussian belief spaces. 2024 IEEE International Conference on Robotics and Automation (ICRA), Yokohama, Japan, 2024, pp. 11661-11667, doi: 10.1109/ICRA57147.2024.10611412.
Miksits, A., Barbosa, F. S., Lindhé, M., Araújo, J., & Johansson, K. H. (2023, December). Safe Navigation of Networked Robots Under Localization Uncertainty Using Robust Control Barrier Functions. In 2023 62nd IEEE Conference on Decision and Control (CDC) (pp. 6064-6071). IEEE.
Iovino, M., Styrud, J., Falco, P., & Smith, C. (2023, August). A Framework for Learning Behavior Trees in Collaborative Robotic Applications. In 2023 IEEE 19th International Conference on Automation Science and Engineering (CASE) (pp. 1-8). IEEE.
Pek, C., Schuppe, G. F., Esposito, F., Tumova, J., & Kragic, D. (2023). SpaTiaL: monitoring and planning of robotic tasks using spatio-temporal logic specifications. Autonomous Robots, 47(8), 1439-1462.