SmartSocks: AI-Based Wearables for Feet Pronation Sensing and Customised Calf Massage

SmartSocks are lightweight, low-energy wearable socks designed to ease gait issues such as pronation and supination, which affect 35% of the elderly. Using soft robotics and smart sensors, they provide gentle, targeted leg muscle relief tailored to each user’s walking pattern. Unlike bulky massage devices or high-voltage wearables, SmartSocks are practical for daily use: no heavy hardware, no frequent charging. An AI-powered app personalises massage settings based on foot shape and movement data. Offered at a competitive price, SmartSocks help elderly users walk more comfortably, reduce the risk of chronic pain, and improve mobility in a convenient, user-friendly way.

Project Innobot: Multi-terrain Hexapod

Traditional wheeled robots often encounter challenges when navigating uneven or complex terrain. To address this real-world issue, our team presents the Multi-terrain Hexapod as a solution. Inspired by the spider’s structure, our Hexapod’s multi-legged design has demonstrated the ability to climb stairs and tackle more intricate terrain. Through this project, we aim to build a comprehensive understanding of legged robots and share our experience in developing the hexapod with the wider robotics community.

Project Innobot: Autonomous Rover

Auto-piloting is an evolving technology that significantly contributes to enhancing production efficiency across industries. The development of navigation rovers enables automated, high-load point-to-point delivery tasks, reducing the reliance on traditional manpower and decreasing the risk of physical injuries among workers. Our team is committed to cultivating expertise in the field of navigation and sharing our experiences in creating navigation robots with the broader robotics community.

Real-time object detection with Grounding Dino

Grounding Dino is an innovative computer vision library designed to enhance object detection and scene understanding through advanced grounding techniques. It combines computer vision and natural language processing techniques to integrate contextual understanding into machine learning models, improving their accuracy and efficiency in real-world applications. We will showcase some of the strengths and weaknesses of this very powerful tool during the upcoming Inno Show!
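
To make the workflow concrete, here is a minimal usage sketch based on the open-source GroundingDINO package from IDEA-Research, not the project's own code: a free-text prompt names the object classes, and the model returns boxes, confidence scores, and the matched phrases. The file paths, prompt, and thresholds below are illustrative assumptions.

# Minimal text-prompted detection sketch using the IDEA-Research GroundingDINO package
# (config path, checkpoint path, image, prompt and thresholds are illustrative placeholders).
import cv2
from groundingdino.util.inference import load_model, load_image, predict, annotate

model = load_model(
    "groundingdino/config/GroundingDINO_SwinT_OGC.py",   # model config shipped with the repo
    "weights/groundingdino_swint_ogc.pth",                # downloaded pretrained checkpoint
)

image_source, image = load_image("demo.jpg")              # any test image

# Object classes are given as a free-text prompt, separated by " . "
boxes, logits, phrases = predict(
    model=model,
    image=image,
    caption="person . chair . dog .",
    box_threshold=0.35,   # minimum box confidence
    text_threshold=0.25,  # minimum phrase-grounding confidence
)

# Draw the predicted boxes and their matched phrases back onto the original image
annotated = annotate(image_source=image_source, boxes=boxes, logits=logits, phrases=phrases)
cv2.imwrite("annotated.jpg", annotated)

Lowering the thresholds improves recall but admits spurious matches, one concrete example of the strength-and-weakness trade-offs mentioned above.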

Armstrong: Robotic Arm Digital Twin

Robotic arms have been widely used in various real-world scenarios, such as automobile manufacturing and housekeeping. However, there is a significant gap at the Innovation Wing, as existing robotics projects there mainly focus on mobile robots or quadrupeds. To address this gap, our project aims to develop an intelligent manipulator capable of performing precise tasks such as maintenance and part sorting in industrial settings.

High-fidelity Stereo Imaging Depth Estimation for Metaverse Devices

This project is about high-fidelity stereo RGBD imaging for Metaverse devices.
This work proposes to design and realize a fast and accurate 3D imaging architecture that Metaverse VR headsets can leverage. Unlike previous imaging frameworks, we use the prior of optical encoding and a joint optimization structure from deep optics to explore a new and improved framework for high-resolution RGBD imaging. A state-of-the-art stereo-matching algorithm is explored together with jointly optimized lenses and an image recovery network. Conventional imaging algorithms consider only the post-processing of captured images, which is less flexible and computationally expensive. Our deep stereo depth estimation integrates optical preprocessing and encoding into the advanced decoding neural networks to achieve higher-accuracy, higher-resolution RGBD imaging. It fully utilizes the information obtained from the stereo camera with a pair of asymmetric lenses in a Metaverse device to achieve a more extended depth range for all-in-focus imaging.
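
As a rough illustration of the joint-optimization idea, and not the project's actual pipeline, the PyTorch sketch below places a learnable optical-encoding stage (here a crude learnable blur kernel standing in for a real differentiable lens model) in front of a small stereo decoder, so that a single depth loss updates both the optics parameters and the network weights. All module names, shapes, and hyperparameters are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableOpticalEncoding(nn.Module):
    # Stand-in for a differentiable lens model: a learnable point-spread function
    # (PSF) blurs each view. A real deep-optics pipeline would simulate the lens
    # phase profile instead; this surrogate only shows where the gradients go.
    def __init__(self, kernel_size=7):
        super().__init__()
        self.kernel = nn.Parameter(0.01 * torch.randn(1, 1, kernel_size, kernel_size))

    def forward(self, x):
        psf = torch.softmax(self.kernel.flatten(), dim=0).view_as(self.kernel)  # non-negative, sums to 1
        b, c, h, w = x.shape
        x = x.reshape(b * c, 1, h, w)
        x = F.conv2d(x, psf, padding=psf.shape[-1] // 2)
        return x.reshape(b, c, h, w)

class TinyStereoDepthNet(nn.Module):
    # Toy decoder: concatenates the encoded left/right views and regresses disparity.
    # Real stereo matching (cost volumes, refinement) is deliberately omitted.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, left, right):
        return self.net(torch.cat([left, right], dim=1))

optics = LearnableOpticalEncoding()
decoder = TinyStereoDepthNet()
optimizer = torch.optim.Adam(list(optics.parameters()) + list(decoder.parameters()), lr=1e-4)

left = torch.rand(2, 3, 64, 64)          # dummy stereo pair
right = torch.rand(2, 3, 64, 64)
gt_disparity = torch.rand(2, 1, 64, 64)  # dummy ground-truth disparity

pred = decoder(optics(left), optics(right))
loss = F.l1_loss(pred, gt_disparity)
loss.backward()                          # one loss updates optics and decoder together
optimizer.step()

The key design point is simply that the optics and the decoder sit in one differentiable graph; everything else (a realistic lens model, cost-volume stereo matching, asymmetric left/right optics) would replace the toy components above.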

Towards Real-time 3D Neural Holography for Near-eye Display

Holography plays a vital role in the advancement of virtual reality (VR) and augmented reality (AR) display technologies. Its ability to create realistic three-dimensional (3D) imagery is crucial for providing immersive experiences to users. However, existing computer-generated holography (CGH) algorithms used in these technologies are either slow or not 3D-compatible. In this project, we explore four inverse neural network architectures to overcome these issues for real-time and 3D applications.
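
For readers new to neural CGH, the sketch below illustrates the general inverse-design idea under simplifying assumptions; it is not one of the four architectures explored in this project. A small network maps a target image to a phase-only hologram, and the prediction is supervised through a differentiable angular-spectrum propagation model so that the reconstructed amplitude matches the target. The wavelength, pixel pitch, propagation distance, and layer sizes are illustrative.

import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def angular_spectrum_propagate(phase, wavelength=520e-9, pitch=8e-6, z=0.1):
    # Propagate a unit-amplitude, phase-only SLM field over distance z metres
    # using the angular spectrum method (all physical values are illustrative).
    n, m = phase.shape[-2:]
    field = torch.exp(1j * phase)
    fx = torch.fft.fftfreq(m, d=pitch)
    fy = torch.fft.fftfreq(n, d=pitch)
    fyy, fxx = torch.meshgrid(fy, fx, indexing="ij")
    arg = 1.0 - (wavelength * fxx) ** 2 - (wavelength * fyy) ** 2
    kz = 2 * math.pi / wavelength * torch.sqrt(torch.clamp(arg, min=0.0))
    transfer = torch.exp(1j * z * kz)          # free-space transfer function
    return torch.fft.ifft2(torch.fft.fft2(field) * transfer)

class PhaseGenerator(nn.Module):
    # Toy inverse network: maps a target amplitude image to an SLM phase map.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, target):
        return math.pi * torch.tanh(self.net(target))   # phase constrained to (-pi, pi)

generator = PhaseGenerator()
optimizer = torch.optim.Adam(generator.parameters(), lr=1e-4)

target = torch.rand(1, 1, 128, 128)                      # dummy target image
phase = generator(target)
recon = angular_spectrum_propagate(phase[0, 0]).abs()    # reconstructed amplitude at the image plane
loss = F.mse_loss(recon, target[0, 0])                   # supervise through the propagation model
loss.backward()
optimizer.step()

In a real pipeline the loss would compare reconstructions at multiple depth planes for true 3D targets, and the network would be trained over a dataset rather than a single image; the point here is only that the propagation model is differentiable, so the hologram generator can be trained end to end.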
