Artificial Intelligence

Smart video tracking

A novel software platform on which apps and algorithms can intelligently track and analyse video feeds from cameras spread across cities

March 14, 2021
The Scitech
 

Researchers at the Indian Institute of Science (IISc) have developed a novel software platform on which apps and algorithms can intelligently track and analyse video feeds from cameras spread across cities. Such analysis is useful not only for tracking missing persons or objects, but also for “smart city” initiatives such as automated traffic control. Many cities worldwide have set up thousands of video cameras. Machine learning models can scour the feeds from these cameras for a specific purpose ‒ tracking a stolen car, for example. These models cannot work by themselves; they have to run on a software platform or “environment” (somewhat similar to a computer’s operating system). “There has been a lot of research on increasing the accuracy of these models, but sufficient attention hasn’t been paid to how you make [the model] work as part of a larger operation,” says Yogesh Simmhan, Associate Professor at the Department of Computational and Data Sciences (CDS).

To address this gap, Simmhan’s lab has developed a software platform called Anveshak. It can not only run these tracking models efficiently, but also plug in advanced computer vision tools and intelligently adjust different parameters ‒ such as a camera network’s search radius ‒ in real time.
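
To make that description concrete, here is a minimal, hypothetical sketch in Python of what such a platform layer might look like: a pluggable vision model, a set of camera feeds, and parameters (such as the search radius) that can be tuned while the pipeline runs. The class and parameter names are illustrative assumptions, not Anveshak’s actual API.

```python
# Hypothetical sketch (not Anveshak's actual API): a platform that hosts a
# pluggable tracking model and exposes parameters, such as the search radius,
# that can be adjusted while the pipeline is running.

from dataclasses import dataclass, field
from typing import Callable, Dict, List

Frame = bytes          # stand-in for a decoded video frame
Detection = dict       # stand-in for a model's output (e.g. bounding boxes)


@dataclass
class CameraFeed:
    camera_id: str
    location: tuple    # (latitude, longitude) of the camera


@dataclass
class TrackingPlatform:
    """Runs a user-supplied model over a chosen subset of camera feeds."""
    model: Callable[[str, Frame], List[Detection]]   # pluggable vision model
    feeds: Dict[str, CameraFeed]
    params: dict = field(default_factory=lambda: {"search_radius_m": 500})

    def set_param(self, name: str, value) -> None:
        # Parameters can be tuned at runtime, e.g. widening the search radius.
        self.params[name] = value

    def process(self, active_cameras: List[str], frames: Dict[str, Frame]):
        # Only feeds from the currently active cameras are analysed.
        results = {}
        for cam_id in active_cameras:
            if cam_id in self.feeds and cam_id in frames:
                results[cam_id] = self.model(cam_id, frames[cam_id])
        return results
```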

The researchers showed how Anveshak can be used to track an object (like a stolen car) across a 1,000-camera network. A key feature of the platform is that it allows a tracking model or algorithm to focus only on feeds from certain cameras along an expected route, and tune out other feeds. It can also automatically increase or decrease the search radius or “spotlight” based on the object’s last known position. The platform also enables the tracking to continue uninterrupted even if the resources ‒ the type and number of computers that analyse the feeds ‒ are limited. For example, if the search radius needs to be increased and the available computers become overwhelmed, the platform automatically starts dropping the video quality to save on bandwidth, while continuing to track the object.
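
The adaptive behaviour described above can be sketched in a few lines of Python. This is an illustrative approximation with made-up names and thresholds, not the lab’s algorithm: the set of cameras to analyse grows with the time since the object was last seen, and the requested video quality is lowered once the number of active feeds exceeds the compute budget.

```python
# Illustrative "spotlight" sketch (hypothetical thresholds, not the paper's
# implementation): widen the set of cameras to analyse as time since the last
# sighting grows, and degrade video quality when compute is saturated.

import math
from typing import Dict, List, Tuple


def distance_m(a: Tuple[float, float], b: Tuple[float, float]) -> float:
    # A planar approximation is enough for a sketch; a real system would use
    # road-network distance rather than straight-line distance.
    return math.hypot(a[0] - b[0], a[1] - b[1])


def spotlight_cameras(cameras: Dict[str, Tuple[float, float]],
                      last_seen_at: Tuple[float, float],
                      seconds_since_seen: float,
                      speed_mps: float = 15.0,
                      base_radius_m: float = 200.0) -> List[str]:
    """Cameras the object could plausibly have reached since it was last seen."""
    radius = base_radius_m + speed_mps * seconds_since_seen
    return [cam_id for cam_id, pos in cameras.items()
            if distance_m(pos, last_seen_at) <= radius]


def choose_quality(active_cameras: int, compute_budget: int) -> str:
    # If the widened spotlight exceeds the compute budget, reduce frame
    # quality instead of dropping cameras, so tracking continues uninterrupted.
    if active_cameras <= compute_budget:
        return "high"
    if active_cameras <= 2 * compute_budget:
        return "medium"
    return "low"
```

The design choice mirrors the trade-off described above: when resources run short, quality is degraded rather than cameras being dropped, so the object is never lost from view.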

Simmhan’s lab also showed how Anveshak could be used to control traffic signals and automatically open up “green routes” for ambulances to move faster. The platform used a machine learning model to track an ambulance on a simulated Bengaluru road network with about 4,000 cameras. It also employed a “spotlight tracking algorithm” to automatically restrict which feeds needed to be analysed based on where the ambulance was expected to go.
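
A toy sketch of the green-route idea, with hypothetical junction IDs and a made-up lookahead parameter: only a short window of cameras ahead of the ambulance’s last confirmed position is analysed, and the junctions inside that window are requested to turn green.

```python
# Minimal sketch (hypothetical interface) of the green-route idea: given the
# ambulance's expected route and its last confirmed position on it, watch only
# a short window of cameras ahead and request green signals at the junctions
# in that window.

from typing import List, Tuple


def green_route_window(route: List[str],
                       last_confirmed_index: int,
                       lookahead: int = 5) -> Tuple[List[str], List[str]]:
    """Return (cameras_to_analyse, junctions_to_turn_green)."""
    window = route[last_confirmed_index: last_confirmed_index + lookahead + 1]
    cameras_to_analyse = window            # feeds outside the window are tuned out
    junctions_to_turn_green = window[1:]   # junctions the ambulance will reach next
    return cameras_to_analyse, junctions_to_turn_green


# Example: a toy route of junction/camera IDs along the expected path.
route = ["J01", "J02", "J03", "J04", "J05", "J06", "J07"]
watch, green = green_route_window(route, last_confirmed_index=2, lookahead=3)
print(watch)   # ['J03', 'J04', 'J05', 'J06']
print(green)   # ['J04', 'J05', 'J06']
```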

Source: IISc press release