Current surveillance solutions primarily rely on facial biometric identification using RGB videos or images acquired from CCTV cameras. These systems are significantly compromised when faces are fully or partially covered by masks, hijabs, turbans, or beards, and in cases of visual ambiguity due to changes in illumination, smog, dust, or fog. Additionally, these systems are highly dependent on the camera's point of view and do not perform well over long distances. This challenge is prevalent in India and similar regions, especially when surveillance systems need to scale to large populations.
These vulnerabilities undermine the reliability of existing surveillance methods, necessitating the development of a more robust solution. Identifying a person through body shape and walking posture offers an excellent alternative. This approach relies on the fact that each individual has a unique physiological structure, including height, head shape, leg bones, hip extension, musculature, and other factors.
Human gait recognition, based on classifying silhouettes extracted from RGB videos and point clouds from LiDAR sensors, offers a promising alternative. Gait recognition is less affected by occlusions and external visual factors, making it a more robust biometric for surveillance in challenging environments.
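As an illustration of the silhouette side of such a pipeline, below is a minimal sketch (Python with OpenCV) of extracting walking silhouettes from a static-camera RGB video via background subtraction. The file name is a hypothetical placeholder, and the challenge data may instead ship precomputed silhouettes.

import cv2

VIDEO_PATH = "subject_0001_side_view.mp4"  # hypothetical path; actual dataset layout may differ

# MOG2 background subtraction: one common way to derive silhouettes from a
# static-camera RGB video (the dataset may already provide silhouettes directly).
cap = cv2.VideoCapture(VIDEO_PATH)
subtractor = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=25)

silhouettes = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                                # raw foreground mask
    _, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)    # drop soft shadows
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)         # remove speckle noise
    silhouettes.append(mask)

cap.release()
print(f"Extracted {len(silhouettes)} silhouette frames")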
Track Description
Dataset Name: IISERB-PS-G
Dataset Description: The dataset contains round-trip walking sequences of 2055 subjects captured by two cameras at 0° and 90° to the plane of the person's walking trajectory, at a frame rate of 30 fps. This dataset will be used for Track 1.
Another dataset contains round-trip walking sequences of 255 subjects captured by two cameras at 0° and 90° to the plane of the person's walking trajectory at a frame rate of 30 fps, in combination with LiDAR point cloud data captured by a Velodyne VLP-16 at 0° at a frame rate of 10 fps. This dataset will be used for Track 2. The video capture setup is shown below.
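Because the two modalities in Track 2 run at different frame rates (30 fps RGB versus 10 fps LiDAR), one simple way to pair them is to match each point cloud frame to the temporally nearest RGB/silhouette frame. The sketch below assumes only the frame rates stated above; the actual dataset may provide explicit timestamps.

RGB_FPS, LIDAR_FPS = 30, 10   # frame rates stated in the dataset description

def nearest_rgb_index(lidar_idx: int) -> int:
    """Index of the RGB frame closest in time to the given LiDAR frame."""
    timestamp = lidar_idx / LIDAR_FPS        # LiDAR frame time in seconds
    return round(timestamp * RGB_FPS)        # roughly every third RGB frame

pairs = [(i, nearest_rgb_index(i)) for i in range(5)]
print(pairs)  # [(0, 0), (1, 3), (2, 6), (3, 9), (4, 12)]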
Track 1: Multi-view silhouette-based gait recognition
Size:
Example of input data:
Fig 1. Person walking in normal conditions: [a] RGB image captured at 90° to the plane of the person's walking trajectory, [b] silhouette derived from (a), [c] RGB image captured at 0° to the plane of the person's walking trajectory, [d] silhouette derived from (c).
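For Track 1, a classic silhouette-based baseline is the Gait Energy Image (GEI): the average of size-normalised binary silhouettes over a sequence. The sketch below assumes silhouettes are available as binary masks, as in Fig 1 [b] and [d]; the normalisation details are illustrative, not an official protocol.

import numpy as np
import cv2

def gait_energy_image(silhouettes, size=(64, 64)):
    """Average of size-normalised binary silhouettes over one gait sequence."""
    normalised = []
    for mask in silhouettes:
        ys, xs = np.nonzero(mask)
        if len(ys) == 0:
            continue                                   # skip frames with no subject
        crop = mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
        normalised.append(cv2.resize(crop.astype(np.float32), size) / 255.0)
    return np.mean(normalised, axis=0)                 # HxW "energy" image in [0, 1]

# One GEI per camera view (0° and 90°) can then be fed to any classifier,
# from nearest-neighbour matching to a CNN, for subject identification.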
Track 2: Multimodal (silhouette+LiDAR point cloud) gait recognition
Size:
Example of input data:
Fig 2. Person walking in normal conditions: [a] RGB image captured at 90° to the plane of the person's walking trajectory, [b] silhouette derived from (a), [c] RGB image captured at 0° to the plane of the person's walking trajectory, [d] silhouette derived from (c), [e] point cloud frame captured at 0° to the plane of the person's walking trajectory.
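For Track 2, a straightforward starting point is late fusion: compute a descriptor per modality and concatenate them. The sketch below uses a toy height-histogram descriptor for the LiDAR frame and random placeholders standing in for real data; segmenting the subject from the VLP-16 background and choosing stronger point cloud features are left to participants.

import numpy as np

def point_cloud_features(points: np.ndarray, n_bins: int = 16) -> np.ndarray:
    """Toy per-frame descriptor: a normalised histogram of point heights (z).

    `points` is an (N, 3) array of x, y, z coordinates; real VLP-16 frames
    would first need the walking subject segmented from the background.
    """
    z = points[:, 2]
    hist, _ = np.histogram(z, bins=n_bins, range=(z.min(), z.max() + 1e-6))
    return hist / max(hist.sum(), 1)

def fuse(silhouette_feat: np.ndarray, lidar_feat: np.ndarray) -> np.ndarray:
    """Late fusion by simple concatenation of the two modality descriptors."""
    return np.concatenate([silhouette_feat.ravel(), lidar_feat.ravel()])

# Random placeholders standing in for real dataset frames.
gei = np.random.rand(64, 64)          # silhouette-based GEI (see Track 1 sketch)
cloud = np.random.rand(2048, 3)       # one LiDAR frame as (N, 3) points
fused = fuse(gei, point_cloud_features(cloud))
print(fused.shape)                    # (64*64 + 16,) combined feature vector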
Note: The highest performance metric is not the only criterion on which the winners will be decided. Teams will be judged on overall performance, including the novelty of the model architecture and the preprocessing methods used. The final decision on the winners rests with the judges. Prizes will be awarded in two different tracks.
To ensure alignment with real-world impact, additional evaluations of winning submissions may be conducted in realistic surveillance scenarios.
Number of Awards
Winner Prizes (Each Track)
Note: All active participants will receive participation certificates.
Please register for this data challenge using the link below, and we will send out the dataset and submission link.
Registration: https://forms.gle/jYYWZQLDG8G5bc6u8
1) Netweb TECHNOLOGIES https://netwebindia.com/
2) PawScan.AI https://pawscanai-68de4.web.app/