The two DOT-certified helmets we provide in the helmet case of every electric moped are an important feature that keeps riders safe.
But helmets are only safe if riders wear them for the entire ride.
In our current markets in New York, California, and Washington, D.C., the law requires all moped riders to wear a helmet; in Florida, riders older than 21 are not required to wear one.
It is — and always has been — Revel’s non-negotiable rule that every rider and passenger in every market in which we operate must wear a helmet while riding. We strictly enforce this policy and have suspended thousands of riders for failing to comply.
Nonetheless, we knew we could improve compliance and, in August of 2020, we set out to establish the standard in shared mobility for helmet detection technology.
With thousands of rides per day, we had to overcome some immediate challenges in determining whether a rider had worn a helmet for the duration of the ride.
Our helmet case sensors detect whether the case opened at the start of a ride and closed at the end of it. However, a rider could still open the case without wearing our provided helmet, or could choose to wear their own helmet instead.
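To make the limitation concrete, here is a minimal sketch of how such a case-sensor signal might be checked. The event names, fields, and function are hypothetical, not Revel's actual telemetry schema:

```python
from dataclasses import dataclass

# Hypothetical event shape, for illustration only.
@dataclass
class CaseEvent:
    kind: str         # "opened" or "closed"
    timestamp: float  # seconds since ride start

def case_cycle_completed(events, ride_end):
    """Return True if the helmet case was opened during the ride and
    closed again by its end. This is a necessary but not sufficient
    signal: a rider could open and close the case without ever putting
    the helmet on, which is exactly the gap described above."""
    opened = any(e.kind == "opened" for e in events)
    closed = any(e.kind == "closed" and e.timestamp <= ride_end
                 for e in events)
    return opened and closed
```

Because the sensor only observes the case, not the rider's head, a passing check here still tells you nothing about whether the helmet was actually worn.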
We needed to get creative.
We explored other sensor possibilities in the helmet case and in the helmets themselves, but all of them presented drawbacks.
In the helmet case, we looked into weight and infrared sensors, but both would prevent the common practice of riders storing their belongings during the rental. In the helmets themselves, current sensor options have telematics and power complications when used at scale.
After our analysis of available sensor technology, we determined that an accurate helmet selfie — with the assistance of machine learning — was the best way to ensure greater compliance.
To achieve this, we use convolutional neural network (CNN) models trained on our large proprietary library of human-labeled images of riders with and without helmets. As with many aspects of computing, the CNN approach to recognizing images borrows from a master of the craft: the visual cortex of animals, humans included. Over the past decade, CNNs, which pass images through stacked layers of learned filters, have emerged as the clear leading architecture for most image recognition tasks.
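The core operation inside a CNN layer is a small filter slid across the image. The sketch below shows that operation in plain Python with a hand-picked edge-detecting kernel; in a real CNN the kernel values are learned from the labeled training images rather than chosen by hand:

```python
def conv2d(image, kernel):
    """Valid 2D cross-correlation: slide the kernel over the image and
    sum the elementwise products at each position. A CNN layer applies
    many such kernels, each tuned during training to respond to a
    different local pattern."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# A vertical-edge detector: it responds strongly where brightness
# changes from left to right, e.g. a helmet's outline against the sky.
edge_kernel = [[1, 0, -1],
               [1, 0, -1],
               [1, 0, -1]]
```

Stacking layers of these filters, with nonlinearities in between, is what lets the network build up from edges to shapes to "helmet on head."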
Our models use state-of-the-art architectures from recent research to find the best tradeoff between speed and accuracy. This suits our workload well, because we receive a high volume of similar images: humans taking selfies with a helmet.
Creating and Training Our Model
We had to assemble a range of training images that would give the model the visual variety it needs to refine its identification capabilities. That meant confronting the hard parts of helmet detection: different colors and styles of helmets, unusual angles, varied lighting, and reflections that might fool the model.
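One standard way to expose a model to this kind of variety is data augmentation: randomly perturbing training images so the model sees many lightings and orientations of each example. The transforms and parameters below are illustrative toy choices, not Revel's actual training pipeline:

```python
import random

def augment(image, rng):
    """Toy augmentations mimicking real-world variation in helmet
    selfies. `image` is a nested list of pixel intensities in [0, 1]."""
    out = [row[:] for row in image]
    # Horizontal flip: simulates the selfie being taken from
    # the mirrored angle.
    if rng.random() < 0.5:
        out = [list(reversed(row)) for row in out]
    # Global brightness shift: simulates different lighting,
    # clamped back into the valid [0, 1] range.
    shift = rng.uniform(-0.2, 0.2)
    out = [[min(1.0, max(0.0, p + shift)) for p in row] for row in out]
    return out
```

Real pipelines typically add rotations, crops, and synthetic glare as well, which maps directly onto the reflection and angle challenges listed above.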
We used real Revel employees and riders to create the training set and have achieved high accuracy. For the errant cases, however, human reviewers still check each and every image that fails the model's helmet ID process.
“The human element is still important in these cases because no matter how well a machine learning model performs, it will never be accurate 100 percent of the time,” said Audrey Tkach, Senior Engineer at Revel. “We want to ensure that no one is prevented from riding due to a model’s incorrect result.”
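The routing logic this implies can be sketched in a few lines. The threshold value and labels below are hypothetical stand-ins, chosen only to illustrate the confident-pass / human-review split described above:

```python
# Hypothetical confidence threshold, for illustration only.
APPROVE_THRESHOLD = 0.90

def route_selfie(helmet_probability):
    """Route a helmet selfie based on the model's confidence that a
    helmet is present. Confident predictions pass automatically;
    everything else goes to a human reviewer, so no rider is blocked
    solely by a model error."""
    if helmet_probability >= APPROVE_THRESHOLD:
        return "approved"
    return "human_review"
```

The key design choice is the asymmetry: the model is only trusted to approve, never to reject, which keeps the failure mode on the side of extra human review rather than a wrongly blocked rider.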
As with all of Revel’s safety innovations, we are constantly improving the process. To make our helmet detection technology even more effective, we’re focusing on the most difficult cases:
- Highly reflective helmet selfies
- Nighttime helmet selfies where the person is hard to see
- Two people in the image – with only one wearing a helmet
Shared mobility is the future, and we at the Revel Safety Innovation Lab will continue to do our part to improve the user experience at each step along the way.
Watch this space for more...