
March 03, 2022

Training Data for Computer Vision

Author: Artur Haponik, CEO & Co-Founder

Reading time: 10 minutes


Computer vision solutions teach machines to see, understand, and interpret visual data. To do so, they must be powered by effective algorithms. An algorithm is a set of well-defined instructions through which a machine learns to identify and interpret objects of interest. Computer vision models are revolutionizing industries across the globe: facial recognition as a security feature in smartphones, for example, puts computer vision right in your hands[1]. However, what really powers these algorithms goes under the radar. No matter how sophisticated a computer vision model is, it is ineffective without adequate, accurate, and relevant training data.

Read on as we explain everything about creating quality training data to power your computer vision model. But before we get to that, let’s have a look at what training data is.

What is training data?

Training data is a set of samples such as videos and images with assigned labels or tags. It is used to train a computer vision algorithm or model to perform the desired function or make correct predictions. Training data goes by several other names, including learning set, training set, or training data set.

The training set shapes the machine learning model: it shows the model what the desired outcome should look like. The model scrutinizes the dataset repeatedly to understand its traits and fine-tune itself for optimal performance.

Just as human beings learn better from examples, computers need examples to start noticing patterns and relationships in the data. But unlike human beings, computers require plenty of them, because they don't think the way we do. In fact, they don't "see" objects or people in the images at all. It takes substantial work and huge datasets to train a model to, say, recognize different sentiments in videos.
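To make this concrete, here is a minimal sketch, in Python, of what a tiny labeled training set can look like in code. The file paths and class names are hypothetical placeholders, not part of any real dataset.

```python
# A minimal sketch of a labeled training set for image classification.
# File paths and class names below are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class LabeledImage:
    path: str   # where the image file lives
    label: str  # the tag a human annotator assigned

training_set = [
    LabeledImage(path="data/img_0001.jpg", label="cat"),
    LabeledImage(path="data/img_0002.jpg", label="dog"),
    LabeledImage(path="data/img_0003.jpg", label="cat"),
]

# A model never sees the word "cat" directly; labels are mapped to integer targets.
class_to_index = {name: i for i, name in enumerate(sorted({s.label for s in training_set}))}
targets = [class_to_index[s.label] for s in training_set]
print(class_to_index)  # {'cat': 0, 'dog': 1}
print(targets)         # [0, 1, 0]
```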

Read more about Building Computer Vision Applications

Types of training data

Images, videos, and sensor data are commonly used to train machine learning models for computer vision. The types of training data used include:

  • 2D images and videos: Sourced from cameras, scanners, or other imaging technologies.
  • 3D images and videos: Also captured with cameras, scanners, or other imaging devices, but carrying depth information.
  • Sensor data: Captured using remote sensing technology such as satellites.
[Figure: example RGB image with 2D semantic segmentation and depth maps. Source: paperswithcode.com]

How do machines understand the visual world?

Let’s say you’ve collected tons of training data in the form of images, videos, and sensor data for your computer vision model. For the data to be useful, it must be labeled or annotated. This is because machines understand visual data by learning from labeled or tagged examples[2].

The process of identifying an object, a moving image, a text, a sound, or a product requires examples. Toddlers, for example, don't know what a cat looks like until they are told and shown an example. Afterward, they internalize, learn, and extrapolate from it. Machines function in the same manner: they need a large number of examples that have been carefully annotated or labeled.

It might be interesting for you: Computer Vision Case Study: Image Generation Process (Step-By-Step)

Labeled data

To a machine, an image is just a series of pixels. While the pixels carry values that represent colors, they carry nothing that identifies the object. Labeled images teach the machine that specific sets of pixels correspond to specific target objects.

Typically, data labeling is best performed with humans in the loop (HITL). The HITL approach combines machine and human intelligence to build computer vision models: human judgment is applied to teach, fine-tune, and assess the machine learning model.

Having labeled data means your training data is annotated or marked up to show the desired answer or outcome you want your machine learning model to predict. Labels underscore the properties, characteristics, or features of the training data, and these can then be studied for the patterns and relationships that help forecast the outcome.

Let’s use the example of computer vision for autonomous cars. The training data can be labeled using data labeling tools and techniques to point to the location of road signs, road users, or other cars.
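As an illustration of what such labels can look like in practice, here is a rough sketch of one annotated frame from a hypothetical driving dataset. The schema loosely follows the common [x, y, width, height] bounding-box convention; the file names, categories, and coordinates are invented.

```python
# A sketch of how one frame's labels might be stored for a driving dataset.
# File names, categories, and coordinates are invented for illustration.
annotation = {
    "image": "frames/drive_0001/000042.jpg",
    "width": 1920,
    "height": 1080,
    "objects": [
        {"category": "car",        "bbox": [512, 430, 240, 180]},   # [x, y, width, height]
        {"category": "pedestrian", "bbox": [1020, 460, 60, 150]},
        {"category": "road_sign",  "bbox": [1540, 300, 45, 45]},
    ],
}

# During training, each box tells the model which pixels belong to which object class.
for obj in annotation["objects"]:
    x, y, w, h = obj["bbox"]
    print(f'{obj["category"]}: top-left=({x}, {y}), size={w}x{h}')
```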

How to label data

The process of labeling your training data is called data annotation or data labeling. It entails marking a training set with the important features you want your algorithm to learn in a computer vision project. The marked data identifies the features or characteristics you have chosen to highlight in the dataset, and that pattern teaches the algorithm to detect similar patterns in unmarked data.

You can use data annotation tools to enrich your dataset for training. You can either build your own or buy commercial data annotation tools from a third-party vendor.

Types of data labeling tools

  • Commercial data labeling tools: These are sold by third-party developers in the market. They are suited for companies at the enterprise or growth stage.
  • Open-source data labeling tools: You can build your own tools by using or changing the original source code. This gives you better control when it comes to features and integration.
  • Freeware data labeling tools: They can be downloaded and used at no cost. You can also make improvements to the source code as you deem fit.
[Figure: overview of data labeling tools. Source: neptune.ai]

Techniques for data labeling

Image labeling for computer vision involves any of the following techniques, illustrated in the sketch after this list:

  • Bounding box: Draws a box around the object of interest in the image. It works well for fairly symmetrical objects such as pedestrians, cars, and street signs.
  • Transcription: Marks and transcribes the text found in images or videos.
  • Landmarking: Plots key points in the data. In facial recognition, for example, it is used to identify facial features, emotions, and expressions.
  • Masking: Highlights or hides certain areas of an image.
  • Polygon: Marks the vertices of an irregularly shaped object of interest to outline it precisely.
  • Polyline: Creates continuous lines consisting of one or several segments.
  • Tracking: Marks and follows the movement of the target object across several video frames.
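The sketch below shows, in rough terms, the kind of geometry each of these techniques produces for a single image. The coordinates and categories are invented; real annotation tools export similar structures as JSON or XML.

```python
# Rough shapes of the labels each technique produces; all values are invented.
labels = {
    "bounding_box":  {"category": "car", "bbox": [120, 200, 300, 160]},  # x, y, w, h
    "transcription": {"category": "road_sign", "text": "SPEED LIMIT 50"},
    "landmarks":     {"category": "face",
                      "points": {"left_eye": (64, 80), "right_eye": (96, 78), "mouth": (80, 120)}},
    "mask":          {"category": "road", "rle": "<run-length-encoded pixel mask>"},
    "polygon":       {"category": "pedestrian",
                      "vertices": [(400, 210), (430, 205), (445, 330), (395, 335)]},
    "polyline":      {"category": "lane_marking",
                      "points": [(0, 540), (480, 500), (960, 470)]},
    "tracking":      {"category": "car", "track_id": 7,
                      "frames": {41: [120, 200, 300, 160], 42: [128, 202, 300, 160]}},
}
```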

Qualities of good training data

Besides proper labeling, training data must satisfy key requirements in model training. These key requirements include:

High quality

Work with high-resolution images wherever possible: they are easier to mark up or label accurately, which results in better training for your computer vision models. Quality can also mean images free of avoidable defects, such as shadows introduced by poor camera angles.

Variance

Images and videos should contain a variety of target objects, including trucks, SUVs, minivans, sedans, and motorbikes. Equally, the data set can include different types of roadways, such as city streets, highways, or rural roads. Remember, the aim is to imitate the variation of real-life conditions in the data. The higher the diversity, the better.
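A quick, if crude, way to check variance is to count how often each object class appears in your labels. The snippet below is a minimal sketch using made-up annotations; a heavily skewed count signals that the dataset lacks diversity.

```python
# A quick variance check: count how often each object class appears in the labels.
# The annotation list below is made up for illustration.
from collections import Counter

annotations = [
    {"objects": [{"category": "car"}, {"category": "car"}, {"category": "pedestrian"}]},
    {"objects": [{"category": "car"}, {"category": "truck"}]},
]

counts = Counter(obj["category"] for ann in annotations for obj in ann["objects"])
print(counts)  # Counter({'car': 3, 'pedestrian': 1, 'truck': 1}) -> skewed toward cars
```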

Quantity

It's better to work with lots of training data, as more data improves the accuracy of the predictive outcomes generated by your machine learning model. An algorithm trained on 100 images will be dwarfed by one trained on 100,000 images.

Relevance

Training data should be relevant to the job at hand. Let’s say you want to teach an algorithm for autonomous cars. Here, you don’t need images of books and pencils. Rather, your training data should contain photos of cars, sidewalks, roads, pedestrians, and road signs.

What influences the quality of training data?

The key to building high-performance algorithms is quality training data. According to findings from analyst firm Cognilytica, over 80% of the project time in artificial intelligence is spent on data preparation and engineering tasks[3]. Three primary factors affect the quality of your training set:

[Figure: the factors that shape training data quality. Source: cloudfactory.com]

People

The level of experience and training of the workers who handle your training data affects the quality of their output. Assess the skills of your data workers to set a baseline for an acceptable level of quality. Periodic training is also important for continued skill development, resulting in higher-quality work.

Process

The data labeling process should have tight quality controls and precise parameters for job precision. There should also be room for iteration to meet changing business objectives.

Tools

The right data annotation tools improve both the quality of the training data and the performance of the model.

How to acquire training data

Collecting the initial dataset for your computer vision project is the first challenge in creating a successful algorithm. So how can you collect the right dataset? There are three main ways to acquire training datasets.

The first one is to use open data sources found online. These are built by individuals, corporations, and governments. While you can access some for free, others require you to buy a license to access the data. The drawback of open datasets is that they cannot be modified in their published form. So the challenge lies in finding those that are annotated and applicable to your specific use case. Examples of open data sources include:

  • Google Dataset Search
  • Microsoft Research Open Data
  • Public datasets
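If you go this route, pulling a public, pre-labeled dataset can take only a few lines of code. Below is a minimal sketch using torchvision's CIFAR-10 loader; this assumes the torchvision package is installed and is just one of many possible sources.

```python
# A sketch of loading a public, pre-labeled dataset instead of collecting your own.
# Requires `pip install torchvision`; downloads CIFAR-10 on first run.
from torchvision import datasets

train_set = datasets.CIFAR10(root="./data", train=True, download=True)
print(len(train_set))            # 50000 labeled images
image, label = train_set[0]      # a PIL image and an integer class id
print(train_set.classes[label])  # the human-readable class name
```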

You can choose to be proactive and build your own training data from scratch. The advantage of this method is you can build datasets according to your feature specifications. But the process is time-consuming and costly. To gather data, you can use software and devices like web-scraping tools, sensors, and cameras.
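As a rough illustration of the do-it-yourself route, a collection step might start by downloading raw images from a list of URLs. The URLs and output folder below are hypothetical placeholders, and a real pipeline would add licensing checks, deduplication, and error handling.

```python
# A minimal sketch of gathering raw images by downloading them from a URL list.
# The URLs and output folder are hypothetical placeholders.
import pathlib
import requests

image_urls = [
    "https://example.com/images/street_001.jpg",
    "https://example.com/images/street_002.jpg",
]

out_dir = pathlib.Path("raw_images")
out_dir.mkdir(exist_ok=True)

for i, url in enumerate(image_urls):
    response = requests.get(url, timeout=10)
    if response.ok:
        (out_dir / f"img_{i:04d}.jpg").write_bytes(response.content)
```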

The third option is to hire a third-party vendor or organization to perform the data gathering on your behalf. This is an excellent alternative if you need a large dataset but lack the internal resources to handle the work.

Regardless of your preferred method of data acquisition, collect the data in tiers or cycles. This allows you to label each batch before you use it to train your algorithm.

Final thoughts on training data in computer vision

Training data is the lifeblood of your computer vision algorithm or model. Without relevant, labeled data, even the most sophisticated model is useless. The quality of the training data is another important factor to consider when training your model.

Training data is not only used to teach algorithms to make predictions as accurately as possible. It is also used to retrain or update your model after deployment, because real-world conditions change often, so your original training dataset needs to be continually updated.

If you want to know more, see our computer vision solutions.

References

[1] Z. Akhtar and A. Rattani, "A face in any form: new challenges and opportunities for face recognition technology," Computer, vol. 50, no. 4, pp. 80–90, 2017. Accessed February 26, 2022.
[2] Princeton.edu, Scribe Notes. URL: https://www.cs.princeton.edu/courses/archive/spr08/cos511/scribe_notes/0204.pdf. Accessed February 26, 2022.
[3] Cloudfactory.com, Data Engineering, Preparation, and Labeling for AI. URL: https://www.cloudfactory.com/reports/data-engineering-preparation-labeling-for-ai. Accessed February 26, 2022.



Category: Computer Vision