
Classification of objects with Telraam S2

Our Telraam S2 sensor can differentiate between ten different road user categories:

  1. Bicycle

  2. Bus

  3. Car

  4. Light truck

  5. Motorcycle

  6. Pedestrian

  7. Stroller

  8. Tractor

  9. Trailer

  10. Truck

These ten modes are available to users with a Data or Network subscription. Users without a subscription have free access to counts aggregated into the four classical Telraam classes: pedestrians (pedestrians + strollers), two-wheelers (bicycles + motorcycles), cars, and heavy vehicles (all remaining classes).
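
As a purely illustrative sketch, the grouping described above can be written down as a simple lookup table. The Python snippet below uses hypothetical category names and a made-up aggregate helper; it only mirrors the mapping stated in this article and is not part of any Telraam software or API.

```python
from collections import Counter

# Hypothetical mapping of the ten Telraam S2 categories onto the four
# classical Telraam classes, mirroring the grouping described above.
# The names are illustrative only and are not identifiers used by any
# Telraam software or API.
S2_TO_CLASSIC = {
    "pedestrian": "pedestrians",
    "stroller": "pedestrians",
    "bicycle": "two-wheelers",
    "motorcycle": "two-wheelers",
    "car": "cars",
    "bus": "heavy vehicles",
    "light truck": "heavy vehicles",
    "tractor": "heavy vehicles",
    "trailer": "heavy vehicles",
    "truck": "heavy vehicles",
}

def aggregate(s2_counts: dict) -> Counter:
    """Collapse per-category S2 counts into the four classical classes."""
    classic = Counter()
    for s2_class, count in s2_counts.items():
        classic[S2_TO_CLASSIC[s2_class]] += count
    return classic

# Example with made-up hourly counts:
print(aggregate({"pedestrian": 12, "stroller": 2, "bicycle": 30,
                 "motorcycle": 3, "car": 150, "truck": 4}))
# Counter({'cars': 150, 'two-wheelers': 33, 'pedestrians': 14, 'heavy vehicles': 4})
```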

Unlike our earlier, Raspberry Pi-based Telraam V1 unit, Telraam S2 classifies the detected objects on the device itself, so this no longer needs to happen as a post-processing step on the Telraam servers. In fact, object detection and classification now take place at the same time, and both are performed by an artificial intelligence (AI) chip.

Without going into deep technical detail, here is a simple explanation of what AI is and how it works:

Artificial intelligence is (some level of) intelligence, that is, perceiving, synthesising, and inferring information, demonstrated by machines. The AI in Telraam S2 has been trained to identify different types of road users using a machine learning algorithm that was fed a huge sample of images, each of which had already been categorised by a human. During this process the AI analyses the images and looks for patterns in order to build a concept, or model, that can differentiate between the different input categories. The more training data it is provided with, the more accurately the AI can identify previously unseen road users.

To give a human analogue of this process, you could think of the AI as the brain of a young child. A small child learns about cars by looking at (photos of) objects that are identified as cars by the parents, and as time passes the brain builds a concept of what a car is. This concept consists of colours, shapes, sizes, places of occurrence, and so on. Once this concept has reached a certain maturity, the child will successfully identify an object they have never seen before as a car, because their brain is now capable of recognising patterns or properties that match their concept of a car. AI needs training just like a child does: showing one picture of a car will not teach a child what a car is, but showing more and more cars will make the brain's understanding of the concept better and better. In machine learning this process is called ‘training the AI model with annotated data’.
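
For readers curious what ‘training the AI model with annotated data’ looks like in code, the sketch below shows the general shape of a supervised training loop, written in Python with PyTorch. It is not Telraam's actual model or training pipeline (neither is described in this article); the tiny network, the random placeholder images, and all parameters are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn

NUM_CLASSES = 10  # the ten Telraam S2 road-user categories

# A deliberately tiny placeholder network; the real on-device model is not
# described in this article and will differ.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, NUM_CLASSES),
)

optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in for a real annotated dataset: random "images" paired with
# human-assigned category labels.
images = torch.randn(32, 3, 64, 64)            # a batch of 32 small RGB images
labels = torch.randint(0, NUM_CLASSES, (32,))  # one category index per image

for epoch in range(5):
    optimiser.zero_grad()
    predictions = model(images)           # the model's current guesses
    loss = loss_fn(predictions, labels)   # how far the guesses are from the annotations
    loss.backward()                       # work out how to adjust the internal parameters
    optimiser.step()                      # adjust them slightly (the "learning" step)
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```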

While moving to an AI classifier and raising the number of separate modes from four to ten has numerous benefits, it also presents new challenges. One of these is the ambiguity in classifying road users that fall somewhere between classes, that are atypical variants of a typical class member, or that lie outside the ten classes we have defined. For example, it is very difficult for the AI to decide whether small mopeds or large electric bikes should be classified as bicycles or motorcycles, as the border between these classes is not crystal clear. As for road users that are, strictly speaking, outside our ten classes, chances are high that they will still be classified as something. For example, a person on an electric scooter will most likely be classified as a pedestrian, as their visual appearance is most similar to that of a pedestrian. Some decisions would be hard even for a human: which class should a large rickshaw fall into? Or an electric wheelchair?
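
As a rough illustration of why an out-of-class road user still ends up in one of the ten classes: a typical image classifier spreads its confidence over the classes it knows and reports the closest match. The snippet below assumes, purely for illustration, that the classifier ends in a softmax layer and uses made-up scores for an e-scooter rider; it is not based on the actual S2 model.

```python
import torch

classes = ["bicycle", "bus", "car", "light truck", "motorcycle",
           "pedestrian", "stroller", "tractor", "trailer", "truck"]

# Hypothetical raw scores the network might produce for an e-scooter rider:
# nothing fits perfectly, but "pedestrian" looks the most similar.
scores = torch.tensor([1.1, -0.5, -0.2, -0.8, 0.9, 2.3, 0.4, -1.0, -0.9, -0.3])

probabilities = torch.softmax(scores, dim=0)  # always sums to 1 over the ten known classes
best = int(torch.argmax(probabilities))
print(classes[best], round(float(probabilities[best]), 2))  # -> pedestrian, the closest match
```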

One way to make Telraam S2 even better in the future is to feed the AI more training data, for example to refine the distinction between similar classes. New AI models can be introduced with future firmware updates.


