As a seasoned Robotics Engineer with a passion for innovation, I bring a wealth of industry and research experience to the field. My expertise includes Humanoid Robotics, Machine Learning, Image Processing, SLAM, and Navigation with Advanced Control Systems, honed through my Master of Science in Robotics from Middlesex University.

At Expert Hub Robotics, I have served as a Senior Robotics Engineer and Project Lead, showcasing my skills in configuring SLAM and navigation systems on humanoid robots, designing custom attachments, developing AI-based Chatbot applications, and executing specialized software implementations. My experience also extends to the VLSI industry, where I worked as a Corporate Application Engineer and was involved in complex customer feature validation, product development, and hierarchical CDC flow validation.

My commitment to excellence has been recognized through several awards, including the Best Undergraduate Project Award from the IESL and the Best Technical Paper Award from the IET. With a desire to make a meaningful impact, I am eager to bring my industry expertise to the research field and contribute to the ongoing advancements in robotics.

Expertise

  • Computer Science

    • Environment Mapping
    • Graph Construction
    • Range Finder
    • Deep Reinforcement Learning
    • Simultaneous Localization and Mapping (SLAM)
  • Earth and Planetary Sciences

    • Autonomy
    • Cartography
    • Position (Location)

Organisations

Master of Science in Robotics

  • Middlesex University

Bachelor of Electrical and Electronics Engineering

  • University of Peradeniya


Current projects

Towards Autonomous and Real-Time UAV Mapping

UAVs represent one of the most relevant emerging technologies in the remote-sensing domain of the last two decades, having become a valid alternative to traditional acquisition techniques in a wide range of applications. In common practice, UAV flights are pre-planned before the mission starts, while data processing is performed offline after acquisition. These methods, however, limit UAVs' applicability in dynamic and complex contexts. UAVs able to fly autonomously over an area of interest, acquire complete data, understand the scene, and take autonomous decisions are still at a very early stage, yet such solutions promise to open new opportunities for more advanced applications.

Hence, this research addresses the autonomous operation of UAVs in an unknown environment with real-time spatial understanding. Implementing Active SLAM in real time on UAVs is the key focus: beyond exploring an unknown environment, the UAV should be able to build a map including semantic data in real time using edge computing techniques, with the possibility of fine-tuning the map offline. The problem is tackled on three main fronts:

  • Autonomous exploration
  • SLAM with semantic information
  • Real-time implementation on an edge computing platform

The research mainly focuses on Deep Reinforcement Learning (DRL) methods for autonomous robotic exploration, discussing the design of the state, action, and reward spaces and the problems of partial observability, generalization, and real-time implementation. Alongside the exploration agent, deep-learning-based SLAM algorithms will be combined with semantic understanding to extract scene information and support human-like decisions. The map of the explored environment is represented by 3D scene graphs, and the SLAM algorithm will collect and store raw data so that the map can be fine-tuned offline for higher accuracy and resolution.
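The state, action, and reward design mentioned above can be illustrated with a minimal toy sketch. This is a hypothetical example, not the research code: the state is the set of cells observed so far plus the agent position, actions are discrete moves, and the reward is the information gain (newly observed cells), which is a common shaping choice for DRL-based exploration.

```python
class GridExplorationEnv:
    """Toy exploration environment on an N x N grid (illustrative only).

    State : agent position plus the set of cells observed so far
            (a crude stand-in for a partially observed occupancy map).
    Action: 0 = up, 1 = down, 2 = left, 3 = right.
    Reward: +1 for each cell observed for the first time (information
            gain), 0 otherwise, encouraging coverage of unseen space.
    """

    MOVES = {0: (-1, 0), 1: (1, 0), 2: (0, -1), 3: (0, 1)}

    def __init__(self, size=5):
        self.size = size
        self.reset()

    def reset(self):
        self.pos = (0, 0)
        self.visited = {self.pos}
        return self.pos, frozenset(self.visited)

    def step(self, action):
        dr, dc = self.MOVES[action]
        # Clamp the move to the grid boundary.
        r = min(max(self.pos[0] + dr, 0), self.size - 1)
        c = min(max(self.pos[1] + dc, 0), self.size - 1)
        self.pos = (r, c)
        reward = 0.0
        if self.pos not in self.visited:      # information gain
            self.visited.add(self.pos)
            reward = 1.0
        done = len(self.visited) == self.size * self.size  # fully explored
        return (self.pos, frozenset(self.visited)), reward, done
```

A DRL agent trained on such an environment maximizes cumulative information gain; the real research replaces the grid with sensor-derived occupancy and the table of moves with continuous UAV control.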

Finished projects

Mono-Hydra

Mono-Hydra: Real-Time 3D Scene Graph Construction from Monocular Camera Input with IMU

The ability of robots to autonomously navigate through 3D environments depends on their comprehension of spatial concepts, ranging from low-level geometry to high-level semantics such as objects, places, and buildings. To enable such comprehension, 3D scene graphs have emerged as a robust tool for representing the environment as a layered graph of concepts and their relationships. However, building these representations with monocular vision systems in real time remains a difficult task that has not been explored in depth. This paper puts forth Mono-Hydra, a real-time spatial perception system combining a monocular camera and an IMU, focusing on indoor scenarios, although the approach is adaptable to outdoor applications. The system employs a suite of deep learning algorithms to derive depth and semantics, and uses a robocentric visual-inertial odometry (VIO) algorithm based on square-root information, thereby ensuring consistent visual odometry with an IMU and a monocular camera. The system achieves sub-20 cm error in real-time processing at 15 fps, enabling real-time 3D scene graph construction on a laptop GPU (NVIDIA 3080). This enhances decision-making efficiency and effectiveness in simple camera setups, augmenting robotic system agility. We make Mono-Hydra publicly available at: https://github.com/UAV-Centre-ITC/Mono_Hydra.
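The "layered graph of concepts" idea can be sketched as a small data structure. This is an illustrative toy, not the Mono-Hydra implementation; the layer names follow the objects/places/rooms/buildings hierarchy described above, and the edge semantics (containment between layers) are an assumption for the example.

```python
class SceneGraph:
    """Toy layered 3D scene graph: each node belongs to a semantic layer,
    and directed edges link a parent concept to the concepts it contains."""

    LAYERS = ("object", "place", "room", "building")

    def __init__(self):
        self.nodes = {}   # node id -> (layer, attribute dict)
        self.edges = []   # (parent id, child id) containment links

    def add_node(self, node_id, layer, **attrs):
        if layer not in self.LAYERS:
            raise ValueError(f"unknown layer: {layer}")
        self.nodes[node_id] = (layer, attrs)

    def add_edge(self, parent, child):
        self.edges.append((parent, child))

    def children(self, node_id):
        return [c for p, c in self.edges if p == node_id]
```

A perception pipeline would populate the object layer from per-frame detections and cluster places into rooms; here a building-to-object chain can be built by hand, e.g. `building -> room -> place -> object("chair")`.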

Vision-based mobile robot for reconnaissance

The objective was to create a 3D map of an unknown environment. The problem was tackled on three main fronts: a 3D vision system, an intelligent navigation system (INS), and a holonomic robot platform with a localization algorithm. The vision system was developed at a basic level by fashioning two web cameras into a stereo camera pair. Laser ranging was used as the sensing method, and a 2D map of the environment was obtained through the guidance of the intelligent navigation system. A particle filter-based SLAM algorithm was implemented for localization, and a Kinect sensor was interfaced with the system to extend the map to 3D.
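The particle-filter localization step used in such a system follows a standard predict-weight-resample cycle. Below is a minimal 1D sketch, not the project's code: the landmark position, noise level, and Gaussian measurement model are assumptions for illustration.

```python
import math
import random

def pf_step(particles, control, measurement, landmark=10.0, noise=0.5):
    """One predict-weight-resample cycle for 1D robot localization.

    particles   : list of hypothesised robot positions
    control     : commanded displacement (motion model input)
    measurement : measured range to a known landmark (laser-ranging style)
    """
    # Predict: propagate each particle through the motion model with noise.
    predicted = [p + control + random.gauss(0.0, noise) for p in particles]

    # Weight: Gaussian likelihood of the range measurement per particle.
    weights = []
    for p in predicted:
        expected_range = abs(landmark - p)
        err = measurement - expected_range
        weights.append(math.exp(-0.5 * (err / noise) ** 2) + 1e-12)

    # Resample: draw a new particle set proportional to the weights.
    return random.choices(predicted, weights=weights, k=len(particles))
```

Starting from a spread-out particle cloud, a few iterations concentrate the particles at positions consistent with the range measurement; a full SLAM system additionally estimates the landmark/map state alongside the pose.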

Address

University of Twente

Langezijds (building no. 19), room 2305
Hallenweg 8
7522 NH Enschede
Netherlands

