The National Highway Traffic Safety Administration (NHTSA) found that over half a million large trucks were involved in accidents in 2022. And the average cost per accident, according to Automotive Fleet, is between $16,000 and $75,000. When there’s an injury or fatality, those costs rise significantly.
The transportation industry has traditionally relied on trigger-based camera systems and manual review methods to detect driving violations. But as technology advances, we’ve discovered that trigger-based systems provide limited insights, and manual review is unsustainable as well as unreliable. To address this challenge, some video telematics systems now combine artificial intelligence (AI) with HD cameras to provide smart data analytics—some even capture 100% of drive time instead of recording solely trigger-based events.
This resource examines how AI analyzes and learns from road data, and how it helps fleet managers tackle safety challenges that make companies vulnerable to accidents and costly litigation.
Innovation is transforming every corner of fleet operations, from driver safety and retention to risk reduction and vehicle performance.
Legacy trigger-based camera systems, which respond only to events that have already occurred, are outdated. The rapid development of AI-based technology provides a more advanced way for fleets to track and enforce driving performance and adherence to company policies.
AI involves training computers to perform traditionally “human” tasks faster and more efficiently. For example, outside of transportation, platforms like Spotify and YouTube use AI to recommend content based on users’ previous listening and viewing habits. Advanced chatbots use AI to communicate with humans, and some even serve as handy assistants, like Apple’s Siri, Amazon’s Alexa, or OpenAI’s ChatGPT.
In the wider transportation industry, AI tends to be more functional. For example, Tesla and Waymo vehicles feature driver-assistance and autonomous driving systems that adjust to changing road conditions, stop signs, turns, and other vehicles.
How AI works with HD cameras
On their own, HD cameras capture high-quality video of road conditions and driving events after they’re triggered. This footage can be analyzed manually later.
When combined with vision-based AI, the camera system can recognize objects and generate alerts from vision alone. For example, some AI cameras can read the number on a speed limit sign; by combining that reading with speedometer data and the calculated following distance to another vehicle, the system can determine whether an alert is justified and how severe it should be. Vision-based AI, together with edge computing (immediate data analysis on the camera device), generates audible in-cab alerts about driver behavior and road conditions to mitigate risk in real time.
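As an illustration, the decision logic described above might look like the following sketch. The function name, thresholds, and point scale here are hypothetical, not any vendor's actual implementation.

```python
def alert_severity(posted_limit_mph, speed_mph, following_gap_s):
    """Decide whether an in-cab alert is justified, and how severe,
    from a vision-read speed limit, the speedometer, and the
    calculated following distance (illustrative thresholds)."""
    over_limit = speed_mph - posted_limit_mph
    severity = 0
    if over_limit > 0:
        severity += 1 if over_limit < 10 else 2      # mild vs. serious speeding
    if following_gap_s < 2.0:
        severity += 1 if following_gap_s >= 1.0 else 2  # close vs. dangerous gap
    return min(severity, 3) if severity else None    # None = no alert justified
```

A reading of 54 mph against a 55 mph sign with a 3-second following gap returns no alert, while 70 mph with a half-second gap returns the maximum severity.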
AI is trained through machine learning, a process that allows computers to learn from experience and improve over time, all without being explicitly programmed.
For example, think of a legacy trigger-based system that relies on cameras alone as a pair of human eyes. The eyes see everything that happens on the road and recall those events later. Based on this recollection, the fleet makes improvements to its operations after the fact.
When you layer AI and edge computing on top of camera systems, you’re adding an entire nerve center. Cameras are like human eyes, sensors function like sense organs, and AI acts like a brain. In the human body, when the sense organs detect any form of stimuli, a message along with imagery is sent to the brain. The brain immediately processes this data and comes up with an instant response to stimuli.
It’s the same with artificial intelligence. In an AI camera system, cameras and sensors act as data collectors and messengers that send information to the AI system. This system analyzes the data collected and advises the driver in real time on the best actions to take.
However, since data is the bedrock of all learning processes, AI is only as good as the data it’s trained on. Without a solid data-collection system, functionality is limited. Good data starts with superior hardware: high-definition lenses combined with a high frame rate and a broad field of view. Feeding the machine as much data as possible then enables it to get smarter exponentially faster than machines fed only slices of data based on triggers. Finally, the number of devices in the field, trained on a variety of roadways, affects how quickly the machine learns and therefore how fast the AI’s accuracy advances.
The proliferation of driver management software has significantly improved driving behavior and performance in fleets that use the technology.
Vision-based AI enhances driver safety primarily through advanced analytics. By collecting and analyzing a variety of data as mentioned above, fleets can develop new solutions to driving and safety issues while also identifying and recognizing positive driving behaviors. They can also have more confidence that the behavior analysis is correct, since the vision-based system understands the context surrounding a particular event. For example, a trigger-based system logs a hard brake and docks points from that driver’s performance score. A vision-based system, however, can recognize that the driver was cut off by another vehicle, meaning the hard brake was justified and necessary. That event counts as safe driving, and the driver is rewarded with points on their safety score. This additional context enables driver managers to focus on recognizing good driving rather than relying on punitive behavior modification alone.
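The contextual-scoring difference can be sketched in a few lines. The point values and field names below are invented for illustration; actual scoring systems such as GreenZone are proprietary.

```python
def score_hard_brake(vision_based, context):
    """Score a hard-brake event. A trigger-only system always penalizes;
    a vision-based system checks the surrounding context first
    (illustrative point values)."""
    if vision_based and context.get("cut_off_by_vehicle"):
        return +5   # justified defensive braking: rewarded
    return -5       # hard brake with no exonerating context: penalized
```

The same physical event yields opposite scores depending on whether the system can see why it happened.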
The most notable areas of driving performance and behavior improved by AI are:
Traffic law adherence
Traffic violations including illegal U-turns, lane changes and left turns, and tailgating are leading causes of accidents on U.S. roads. However, speeding remains the most dangerous factor.
In 2022, speeding killed 12,151 people, accounting for 29% of all traffic fatalities, according to NHTSA.
While a policy that calls for strict traffic law compliance is a necessary first step, it can only do so much. Without vision-based AI, it’s impossible to track which drivers run stop signs unobserved, or how severe the behavior is. AI streamlines the process of tracking and understanding driver behavior and makes it sustainable and scalable for fleets of any size.
Distracted and drowsy driving
According to NHTSA, distracted driving accounted for more than 3,000 deaths in 2022. Distracted driving is associated with a range of behaviors, including using cell phones, eating, sleeping, fiddling with objects such as the radio, and even driving while intoxicated.
While using a traditional camera system to detect distracted or drowsy driving might seem like a straightforward solution, legacy systems can only record activity for later review. The footage must then be analyzed manually. While this might be manageable for a time, it quickly becomes unsustainable as the fleet grows.
In large fleets, managing such video playback can lead to hundreds of hours of analysis. Not only is this tedious and inefficient, but it’s also prone to errors, and only allows for a correction after the event has already occurred.
Vision-based AI cameras offer a more efficient and proactive approach by capturing video and analyzing it in real time. In the case of a distracted driver, sophisticated AI can detect eye and head movement, objects in hand, and other indicators of distracted driving and send an in-cab alert instantly. Drowsy indicators can include head movement, yawning, and eye blink rate. Sophisticated drowsy detection systems can even see eye details through sunglasses and at night. The immediate in-cab alerts lessen the likelihood of an accident occurring, create awareness for the driver, and correct the behavior in real time.
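Combining several weak fatigue cues before alerting, rather than firing on any single one, might be sketched like this. The cue names and thresholds are hypothetical, chosen only to show the idea of requiring cues to co-occur.

```python
def drowsiness_alert(blink_rate_per_min, yawns_last_10min, head_nods_last_10min):
    """Fire an in-cab drowsiness alert only when multiple fatigue
    cues co-occur, reducing false alarms (illustrative thresholds)."""
    cues = 0
    if blink_rate_per_min > 25:    cues += 1  # elevated blink rate
    if yawns_last_10min >= 3:      cues += 1  # repeated yawning
    if head_nods_last_10min >= 2:  cues += 1  # head nodding
    return cues >= 2
```

A single yawn or a brief head movement alone doesn’t trigger an alert; two or more sustained cues together do.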
Adjusting to road conditions
Road conditions are constantly changing, and some of these changes can affect how well a driver complies with traffic laws. For example, drivers may follow too closely in the rain, speed in snowy conditions, or struggle to navigate through thick fog.
AI can alert drivers to such hazardous situations and offer guidance in the moment. Having an onboard coach can positively impact a driver’s safety and performance, boosting confidence and improving decision-making.
AI coaches work by collecting real-time data on the surroundings, analyzing it, and determining the best likely course of action. These calculations occur in seconds, thanks to edge computing, which processes data locally on the device for faster decision-making.
With a sophisticated AI-enabled camera and safety system, fleets can:
Significantly reduce the number of accidents and collisions
Signal that driver safety is a priority and help them mitigate risks
Save money from reduced claims and lawsuits
Help determine whether an early settlement is sensible
Save fuel through less aggressive driving practices
Create a driver-friendly environment driven by incentives and rewards
Improve driver retention by boosting morale among drivers through fair driver scores
Accurately target areas where drivers require more training, and customize the training for everyone
Gain more business from customers who request that their partners make safety a priority
Draft more specific company safety policies
Reduce risk with Netradyne’s AI-based fleet safety and management solutions
Netradyne’s vision-based AI camera system, Driver•i, provides comprehensive and reliable data and analytics that help protect drivers and fleets. It provides personalized coaching and in-cab safety warnings, and it captures and analyzes 100% of drive time, not just trigger-based events. Combined with Netradyne’s proprietary GreenZone driver scoring system, which accounts for both positive and negative habits and events, fleets can reduce risk while fostering a culture based on rewards and positive reinforcement.
Netradyne’s research-backed ROI model, based on an analysis of 1.3 billion miles of driving data, correlates safer driving, as measured by higher GreenZone Scores, with fewer accidents. Every 50-point improvement in a fleet’s GreenZone Score correlates to approximately a 13–15% reduction in accidents per million miles (APMM). In a study of two groups of fleets (one of 100 fleets and another of 50 fleets), Netradyne customers improved their GreenZone Scores by an average of 150 points in the first year. In other words, a fleet using the Driver•i platform is likely to experience a 30%+ reduction in APMM in its first year alone.
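Treating the 13–15% figure as compounding per 50-point step, the 150-point first-year average works out as follows. This is a back-of-the-envelope check of the arithmetic, not Netradyne’s actual model.

```python
def apmm_reduction(score_gain, low=0.13, high=0.15):
    """Compound the per-50-point APMM reduction over a total score gain,
    returning the (low, high) estimated overall reduction."""
    steps = score_gain / 50
    return 1 - (1 - low) ** steps, 1 - (1 - high) ** steps

low, high = apmm_reduction(150)  # three compounded 13-15% steps
```

Three compounded 13–15% steps give roughly a 34–39% reduction, consistent with the 30%+ figure quoted above.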
Beyond accident reduction, Driver•i delivers a range of other benefits. For example:
Across the platform’s 10 million most recent stop sign observations, there is a 61% improvement for non-stop events and a 51% improvement for rolling stops.
Some Netradyne customers even integrate the GreenZone Score into their payroll system, giving drivers a bonus for reaching a certain score. The result: safe drivers receive the recognition they deserve, which boosts confidence, raises engagement, and increases overall driver retention. Some Netradyne customers have realized a 15% year-over-year improvement in driver retention.