Driver activity recognition and alert generation using AI

A large UK-based industrial gas manufacturer wanted to monitor its drivers' activities while driving and classify them into categories such as microsleeps, drowsiness, yawning, looking away from the road, smoking, drinking, and eating. The existing system had cameras installed in each driver's cabin, but the built-in software could not provide the classification the client was looking for; it was also inaccurate and generated many false alerts. The client wanted to reuse the existing cameras for video capture.

Objective

The objective of the project was to build an AI-based driver activity monitoring system that would classify predefined driver activities and generate alerts for them. The new system would integrate with the existing camera system to source the videos. The plan was to choose and train an AI model on the client's data to produce annotation-level classification as accurately as possible, and then develop an activity-detection algorithm for each class. Our goal was 95% accuracy for the algorithms working on top of the AI output.

Challenges

Our first challenge was to identify the characteristics, or identification markers, for each activity class. For example, when do we say that a driver is sleeping? What are the identification markers for microsleeps within a video?

Not all humans behave the same way even when doing the same activity. For example, we found drivers whose eyes were not fully closed even though they were sleeping. Our challenge was to match AI performance with human-level judgment. To address this, we tuned the AI model and the algorithms to cover the widest possible range of human behavior.
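To illustrate the idea of an identification marker, here is a minimal sketch of one common approach to detecting "eyes closed" frames: the eye aspect ratio (EAR) computed from facial landmarks, with a microsleep flagged when the EAR stays low for a sustained run of frames. The threshold and frame counts are illustrative assumptions, not the client's actual parameters, and in practice the landmark coordinates would come from a facial-landmark model.

```python
# Hypothetical marker sketch: eye aspect ratio (EAR) for microsleep detection.
# Landmarks would come from a facial-landmark model; values here are synthetic.
from math import dist

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks around one eye in the usual p1..p6
    ordering, where p1/p4 are the horizontal eye corners."""
    p1, p2, p3, p4, p5, p6 = eye
    vertical = dist(p2, p6) + dist(p3, p5)   # eyelid-to-eyelid distances
    horizontal = 2 * dist(p1, p4)            # eye width (doubled)
    return vertical / horizontal             # small value => eye closed

def microsleep_frames(ear_series, closed_thresh=0.2, min_frames=30):
    """Flag runs of >= min_frames consecutive low-EAR frames
    (e.g. about one second at 30 fps) as candidate microsleeps."""
    runs, start = [], None
    for i, ear in enumerate(ear_series):
        if ear < closed_thresh:
            start = i if start is None else start
        else:
            if start is not None and i - start >= min_frames:
                runs.append((start, i))
            start = None
    if start is not None and len(ear_series) - start >= min_frames:
        runs.append((start, len(ear_series)))
    return runs

# Example: 40 "closed" frames embedded in an open-eye sequence.
ears = [0.3] * 10 + [0.1] * 40 + [0.3] * 10
print(microsleep_frames(ears))  # [(10, 50)]
```

For drivers whose eyes never fully close while sleeping, a per-driver or adaptive `closed_thresh` would be one way to handle the behavioral variation described above.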

We found multiple markers for each class, and designing a general algorithm that would deliver more than 95% accuracy was a challenge.

The client also wanted to adjust each of these algorithms dynamically through configurable parameters. Building a platform that could produce such dynamically parameterized algorithms was another challenge.
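One way such dynamic parameterization could work is a rule engine whose per-activity thresholds live in a config that can be updated at runtime, without redeploying the detection code. The rule names, signals, and values below are illustrative assumptions, not the client's actual configuration.

```python
# Hypothetical sketch: activity rules driven by runtime-updatable parameters.
DEFAULT_RULES = {
    "microsleep": {"below": 0.2, "min_duration_s": 1.0},  # triggers on low EAR
    "yawning":    {"above": 0.6, "min_duration_s": 2.0},  # triggers on wide mouth
}

class RuleEngine:
    def __init__(self, rules=None):
        # Copy so runtime updates never mutate the defaults.
        self.rules = {k: dict(v) for k, v in (rules or DEFAULT_RULES).items()}

    def update(self, activity, **params):
        """Change a rule's parameters on the fly, e.g. from an admin UI."""
        self.rules[activity].update(params)

    def triggers(self, activity, value, duration_s):
        """True if the measured signal breaches the rule's threshold
        for at least the configured duration."""
        r = self.rules[activity]
        if duration_s < r["min_duration_s"]:
            return False
        return value < r["below"] if "below" in r else value > r["above"]

engine = RuleEngine()
print(engine.triggers("microsleep", value=0.15, duration_s=1.5))  # True
engine.update("microsleep", below=0.1)  # tighten the threshold at runtime
print(engine.triggers("microsleep", value=0.15, duration_s=1.5))  # False
```

Keeping the thresholds in data rather than code is what lets each algorithm be changed dynamically.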

How we did it

We studied hundreds of videos to identify and document all the markers for each class, then carefully designed an algorithm for each activity. We collected as much data as possible from the client and labeled those images and videos to achieve higher accuracy on the annotation-level classification.

We held back a large number of videos for testing and tuning the AI model and the algorithms. We used these videos to test the AI model's output and, finally, each algorithm, and continued to tune hyperparameters and algorithm logic until we reached the target accuracy.
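The evaluation loop on the held-out videos can be sketched as a simple accuracy report comparing predicted labels against ground truth, overall and per class. The labels and predictions here are synthetic stand-ins, not the client's data.

```python
# Illustrative holdout evaluation: overall and per-class accuracy
# of predicted activity labels against ground-truth annotations.
from collections import defaultdict

def accuracy_report(y_true, y_pred):
    per_class = defaultdict(lambda: [0, 0])  # label -> [correct, total]
    for t, p in zip(y_true, y_pred):
        per_class[t][1] += 1
        per_class[t][0] += int(t == p)
    overall = sum(c for c, _ in per_class.values()) / len(y_true)
    return overall, {k: c / n for k, (c, n) in per_class.items()}

# Synthetic example: five held-out videos, one misclassified.
y_true = ["yawning", "microsleep", "yawning", "smoking", "yawning"]
y_pred = ["yawning", "microsleep", "drowsy",  "smoking", "yawning"]
overall, per_class = accuracy_report(y_true, y_pred)
print(round(overall, 2))  # 0.8
```

Per-class figures matter here because an overall number can hide a class (say, microsleeps) that lags behind the 95% target.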

Outcome

We successfully trained the AI model to a high accuracy. The algorithms we designed work seamlessly on the AI model's output and consider multiple factors for each activity. The final classification exceeded 95% accuracy within one month of deployment. The result is a driver activity monitoring and alerting system that processes 40,000 to 55,000 videos per day.


©2023 Intelgic Inc. All Rights Reserved.