Client: Ford Motor Company
Building the AI underpinning Ford's first smart consumer product: next-gen vehicle security.
At the beginning of 2020, I co-founded an AI research and development company called Playground, based in London. We won a 7-figure contract with Ford Motor Company to build them an ML-based threat detection algorithm for a new security offering. This grew into a multi-year relationship where they recognised us as a tier-1 supplier for machine learning & training data collection services.
Challenge
Truck and van owners have seen a dramatic rise in belongings being stolen from their vehicles. To combat these thefts, Ford tasked us with developing an intelligent, real-time threat detection algorithm able to run in power-constrained environments using audio and motion data.
Solution
We delivered a custom threat detection algorithm that combined a robust audio detection machine learning model with a motion detection algorithm. The model was efficiently packaged to run locally on the edge in real time, satisfying Ford's power constraints. We also spent significant time developing custom tooling to make data collection, model training, and evaluation more efficient.
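To give a flavour of what "packaged to run on the edge" involves, the sketch below shows one common approach: post-training int8 quantization with TensorFlow Lite. The framework, model path, sample rate, and window length here are illustrative assumptions rather than the exact stack we shipped.

```python
# Illustrative sketch only: the framework and target hardware are not specified
# in this case study. Post-training int8 quantization with TensorFlow Lite is
# one common way to shrink an audio model for a power-constrained edge device.
import numpy as np
import tensorflow as tf

SAMPLE_RATE = 16_000      # assumed audio sample rate
WINDOW_SECONDS = 1.0      # assumed inference window length

def representative_audio_windows(num_windows: int = 100):
    """Yield calibration inputs for quantization. Real calibration would use
    recorded field audio rather than random noise."""
    for _ in range(num_windows):
        window = np.random.randn(1, int(SAMPLE_RATE * WINDOW_SECONDS)).astype(np.float32)
        yield [window]

converter = tf.lite.TFLiteConverter.from_saved_model("audio_threat_model/")  # hypothetical path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_audio_windows
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("audio_threat_model_int8.tflite", "wb") as f:
    f.write(tflite_model)
```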
Outcomes
Our work culminated in the delivery of a robust threat detection algorithm, to be included in the next generation of Ford vehicles, along with a bespoke dataset of over 100K annotated audio, motion, and image samples.
Our Approach
System Design
We spent considerable time understanding the vehicle security space, learning the techniques used to attack and break into vans and trucks. We also worked out the best combination of sensors and how to apply ML in a pragmatic way. Our research culminated in a comprehensive threat detection design that specified the classes to be detected in the event of a vehicle break-in; a large part of the system accounted for external and environmental factors so that it would be robust against false positives. Because much of the required training data (audio, motion, image) was very specific to vehicle security, we needed to collect and clean it ourselves.
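The exact class list we delivered is Ford-specific, but to illustrate what "specifying the classes" and "accounting for environmental factors" looks like in practice, here is a hypothetical taxonomy that keeps benign environmental sounds alongside threat classes so false positives can be measured and suppressed explicitly.

```python
# Hypothetical illustration of a detection-class spec; the real class list is
# confidential. Threat classes trigger alerts, while environmental classes are
# kept in the taxonomy as hard negatives so false positives can be measured.
from enum import Enum

class ThreatClass(Enum):
    GLASS_BREAK = "glass_break"
    DRILLING = "drilling"
    PRYING = "prying"

class EnvironmentClass(Enum):
    RAIN = "rain"
    WIND = "wind"
    PASSING_TRAFFIC = "passing_traffic"
    NEARBY_DOOR_SLAM = "nearby_door_slam"

ALARM_CLASSES = {c.value for c in ThreatClass}

def should_alert(predicted_label: str, confidence: float, threshold: float = 0.9) -> bool:
    """Alert only on a confident threat prediction; environmental classes never alert."""
    return predicted_label in ALARM_CLASSES and confidence >= threshold
```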
Data Collection
Over the course of our engagement with Ford we created a bespoke data collection function that served as the backbone of our work: full-time specialists simulated vehicle break-ins, recording thousands of audio and motion samples and annotating them for our ML engineers to train with. Consistent communication between the data collection and modelling teams created a feedback loop that was paramount to the model's success, as our data collection teams were able to QA samples, collect new data, and test models on the fly.
Internal Tools
We built a sophisticated set of tools to streamline the loop between data collection, training, and testing. Our tooling allowed data collection engineers to collect and annotate samples on any hardware the customer specified; samples were automatically sent to our backend, where they could be further QA'd, cleaned, and annotated before being ingested into the ML training pipeline (a sketch of this flow follows the tool list below). This workflow gave the entire team a simple way to debug and collaborate on model blind spots and training data quality, and it gave our customers an easy way to understand the status of our work.
The software tools we built consisted of:
Collect & Test App - A mobile app that collectors used in the field to connect to Ford hardware, collect training data, annotate samples, configure metadata, and test the latest models.
Backend Pipeline - A repository of all of the samples and metadata we collected.
Data Dashboard - A web app that let both teams see the quantities and status of the entire dataset at a glance, with sample playback tools and annotation interfaces that allowed our QA team to crop and analyse time-series data.
Embedded Tools - A set of tools that allowed our team to easily collect training data from, and test models on, the specific AI chips and hardware.
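To make the collect, QA, and ingest flow above concrete, here is a minimal sketch of what a collected sample's record and lifecycle might look like. The field names, statuses, and labels are assumptions for illustration; the actual backend schema isn't reproduced here.

```python
# Minimal sketch of the collect -> QA -> ingest lifecycle described above.
# Field names, statuses, and labels are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum
from typing import Optional

class SampleStatus(Enum):
    COLLECTED = "collected"      # uploaded from the Collect & Test App
    QA_APPROVED = "qa_approved"  # cropped/annotated and signed off by QA
    INGESTED = "ingested"        # available to the ML training pipeline
    REJECTED = "rejected"

@dataclass
class Sample:
    sample_id: str
    modality: str                # "audio", "motion", or "image"
    label: str                   # e.g. "glass_break" (hypothetical label)
    device_id: str               # which piece of test hardware recorded it
    collected_at: datetime
    status: SampleStatus = SampleStatus.COLLECTED
    qa_notes: Optional[str] = None

def approve(sample: Sample, notes: str = "") -> Sample:
    """QA sign-off: after this the sample is eligible for training ingest."""
    sample.status = SampleStatus.QA_APPROVED
    sample.qa_notes = notes or None
    return sample

# Example lifecycle for a single field recording
s = Sample("a-0001", "audio", "glass_break", "dev-van-03",
           datetime.now(timezone.utc))
approve(s, "trimmed leading silence")
s.status = SampleStatus.INGESTED
```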
Model Development & Testing
We created a binary audio classifier, a multi-class audio classifier, and a motion detection algorithm that worked together in a chain of events (sketched below), culminating in a robust threat detection algorithm that ran on-device in real time. We quickly discovered that model effectiveness is hard to communicate through a confusion matrix or a single percentage: our customers (and team) needed a real feel for how the model performed against false positives in the wild. So we designed and built a feature into our tooling that let the collection team test models on the fly with highly visual feedback. This feature became a core piece of communication when presenting progress and deliverables to Ford management and executives.
Watch the demo:
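The decision logic we shipped is Ford's, so the sketch below is only an illustration of the general shape of such a chain: a lightweight binary audio gate runs continuously, the multi-class classifier runs only when the gate fires, and a motion check confirms before raising an alert. Thresholds, class names, and model interfaces are all assumptions.

```python
# Hedged sketch of a detection cascade of the kind described above: binary
# audio gate -> multi-class audio classifier -> motion confirmation.
# Thresholds, class names, and interfaces are illustrative assumptions.
from typing import Callable, Dict, Sequence

def cascade_decision(
    audio_window,                                # raw audio for one window
    motion_window,                               # motion samples for the same period
    binary_gate: Callable[[object], float],      # P(suspicious sound)
    multiclass: Callable[[object], Dict[str, float]],  # class -> probability
    motion_score: Callable[[object], float],     # impact/shake intensity in [0, 1]
    threat_classes: Sequence[str] = ("glass_break", "drilling", "prying"),
    gate_threshold: float = 0.5,
    class_threshold: float = 0.9,
    motion_threshold: float = 0.6,
) -> bool:
    """Raise an alert only when all three stages agree, which keeps both false
    positives and average compute low on a power-constrained device."""
    if binary_gate(audio_window) < gate_threshold:
        return False                             # most windows stop at the cheap gate
    probs = multiclass(audio_window)
    top_class = max(probs, key=probs.get)
    if top_class not in threat_classes or probs[top_class] < class_threshold:
        return False
    return motion_score(motion_window) >= motion_threshold
```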
The Team
We built a team of machine learning researchers, software engineers, embedded engineers, and designers all working together to make machine learning more approachable and transparent. Below is a photo of our office located in the heart of London.