Machine learning-enabled; supports remote access through cloud storage
"Thank you for your efforts and work you put into making phase #1 happen! We really appreciate the effort of the whole team!"
The client’s software solutions for the automotive industry are used to build deep learning image datasets and train intelligent visual sensor systems for pedestrian and vehicle detection, traffic sign and light source recognition, people sensing, and more. Each dataset consists of images (video frames) of different road objects, grouped into categories and labeled with the object’s name and additional parameters.
To build a dataset, the client uses an ecosystem of six software products that leverage Artificial Intelligence and Machine Learning algorithms to process the videos and label objects automatically. Where the automated methods cannot reach the required level of detection, the customer’s labeling teams annotate the objects manually.
While working on manual annotation, outsourced labelers from all over the world process huge amounts of media data (images and video) remotely. The data is stored on the customer’s on-premises servers in Germany, which slows down client-server communication, causes regular delays, and makes the whole system less effective.
During the first phase, the service provider was expected to migrate the solution from the on-premises servers in Berlin to the cloud and redesign its architecture. This would enable the labeling teams to access the system on a cloud server and process the media data more productively.
Softeq was chosen as the key service provider thanks to the company’s full-stack development capabilities and uncommon combination of skills: expertise in Machine Learning/Computer Vision, Digital Imaging, and Web Application Development.
The solution is a standalone web application consisting of two modules: Admin Panel and Workspace. The actions a user can perform depend on their role on the project. There are five user roles: Superadmin, Coordinator, Observer, Labeling Manager, and Labeler. The main functional responsibilities are divided among them.
A Labeler is able to:
Superadmin has the authority to:
The Coordinator is responsible for project and task management.
The Observer and the Labeling Manager oversee the labelers’ performance and project quality.
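The role model above can be sketched as a simple permission map. The permission names below are hypothetical (the case study does not enumerate each role’s exact capabilities); only the broad responsibilities mirror the description above.

```python
from enum import Enum, auto


class Role(Enum):
    SUPERADMIN = auto()
    COORDINATOR = auto()
    OBSERVER = auto()
    LABELING_MANAGER = auto()
    LABELER = auto()


# Hypothetical permission names; the real permission set is not
# documented in the case study.
PERMISSIONS = {
    Role.SUPERADMIN: {"*"},  # assumed full access
    Role.COORDINATOR: {"manage_projects", "manage_tasks"},
    Role.OBSERVER: {"view_performance", "view_quality"},
    Role.LABELING_MANAGER: {"view_performance", "view_quality"},
    Role.LABELER: {"annotate_objects"},
}


def is_allowed(role: Role, action: str) -> bool:
    """Check whether a given role may perform a given action."""
    granted = PERMISSIONS[role]
    return "*" in granted or action in granted
```

A check such as `is_allowed(Role.LABELER, "annotate_objects")` would then gate each Workspace or Admin Panel action.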
A labeler receives specific instructions in .xml format. They contain information on the objects that require labeling (vehicles, pedestrians, road markings, road signs, etc.). The files are stored in remote media storage. The labeler manually annotates the objects that have not been identified by the machine learning algorithms.
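The exact schema of the instruction files is not documented in the case study, but the parsing step can be sketched against a hypothetical layout using Python’s standard library:

```python
import xml.etree.ElementTree as ET

# Hypothetical instruction file: the element and attribute names here
# are assumptions, not the client's actual schema.
SAMPLE = """\
<instructions>
  <object name="vehicle" min_size="24"/>
  <object name="pedestrian" min_size="16"/>
  <object name="road_sign" min_size="12"/>
</instructions>
"""


def parse_instructions(xml_text: str) -> list[dict]:
    """Extract the object classes a labeler must annotate."""
    root = ET.fromstring(xml_text)
    return [
        {"name": obj.get("name"), "min_size": int(obj.get("min_size"))}
        for obj in root.findall("object")
    ]


targets = parse_instructions(SAMPLE)
```

The resulting list of target classes could then drive which annotation tools the Workspace enables for the task.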
Currently, the app supports only low-color-depth videos, which means the imported video files must be processed before labeling. To prepare the video frames for manual labeling, the solution uses convolutional neural network-based image preprocessing algorithms.
The system and its media storages used to be located on the local servers in Berlin, while the majority of the labelers worked remotely. This caused regular connection problems and server response delays. To improve the system’s reliability and reduce server response times, Softeq’s team migrated the application and its media storages to the Azure Cloud.
To make the solution more scalable and distribute resource-intensive operations across separate components, the team also redesigned the solution’s architecture. System functions such as caching, message queuing, and containerization were delegated to external tools: Redis, RabbitMQ, and Docker, among others.
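In production, a broker like RabbitMQ distributes resource-heavy jobs across separate worker processes or machines. The same producer/consumer pattern can be illustrated in-process with Python’s standard-library queue (the squaring step below is just a stand-in for real frame processing):

```python
import queue
import threading

tasks = queue.Queue()    # plays the role RabbitMQ plays in production
results = queue.Queue()


def worker():
    """Consume frame IDs and run a (stubbed) resource-heavy step."""
    while True:
        frame_id = tasks.get()
        if frame_id is None:          # sentinel: shut this worker down
            tasks.task_done()
            break
        results.put(frame_id * frame_id)  # stand-in for real processing
        tasks.task_done()


threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()

for frame_id in range(10):
    tasks.put(frame_id)
for _ in threads:
    tasks.put(None)                   # one sentinel per worker

tasks.join()
processed = sorted(results.get() for _ in range(10))
```

With a real broker, the producers and consumers would live in separate containers, so the labeling front end stays responsive while heavy processing happens elsewhere.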
During the first phase, the system and the media storages were successfully migrated to the Azure Cloud and the solution’s architecture was rebuilt.
The QA team performed load testing to ensure the solution remained stable while being accessed by multiple labeling teams simultaneously.
Happy with the results of the first phase, the customer engaged Softeq to develop additional functionality. The team is expected to add support for HDR video and for TIFF/PNG output frames, along with real-time video annotation, among other features.