Exploring Automated Data Augmentation

by Vishal Gobin | Feb 2024


Photo by Markus Spiske on Unsplash

Data augmentation is a powerful technique used in machine learning to increase the size and diversity of a training dataset without collecting new data. It involves creating new training samples by applying various transformations to the existing data.

Before starting with data augmentation, it’s crucial to understand the data. Are we working with images, text, audio, or some other type of data? The type of data we have will determine the augmentation techniques we can use. For example, if we’re working with images, we might use techniques like rotation, scaling, and flipping.

Select appropriate augmentation techniques based on the characteristics of the dataset and the task at hand. Common augmentation techniques include the following (a code sketch combining several of them appears after the list):

  • Image Rotation: Rotating images by a certain angle (e.g., 90 degrees clockwise).
  • Horizontal or Vertical Flipping: Mirroring images horizontally or vertically.
  • Zooming: Enlarging or shrinking images.
  • Brightness and Contrast Adjustment: Modifying the brightness and contrast levels of images.
  • Adding Noise: Introducing random noise to images.
  • Random Cropping: Cropping random portions of images.
Image: hue adjustment of -0.5
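To make these concrete, here is a minimal sketch of several of the listed techniques using TensorFlow's tf.image module. The brightness and contrast ranges, the noise level, and the 200x200 crop size are arbitrary values chosen for illustration:

import tensorflow as tf

def augment_image(image):
    """Applies several of the augmentations listed above to an image tensor."""
    image = tf.image.convert_image_dtype(image, tf.float32)
    image = tf.image.rot90(image, k=3)  # rotation: 90 degrees clockwise
    image = tf.image.random_flip_left_right(image)  # horizontal flipping
    image = tf.image.random_brightness(image, max_delta=0.2)  # brightness
    image = tf.image.random_contrast(image, lower=0.8, upper=1.2)  # contrast
    image = image + tf.random.normal(tf.shape(image), stddev=0.05)  # random noise
    image = tf.image.random_crop(image, size=[200, 200, 3])  # random cropping (input must be at least 200x200)
    return tf.clip_by_value(image, 0.0, 1.0)

The hue shift captioned above can be reproduced with tf.image.adjust_hue(image, -0.5).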

After choosing the augmentation techniques, the next step is to implement them. Many machine learning libraries, such as TensorFlow and PyTorch, provide built-in functions for common augmentation techniques. Here’s an example of implementing image augmentation using TensorFlow:

import tensorflow as tf

def shear_image(image, intensity):
    """Randomly shears an image.

    image: a 3D array (height, width, channels) containing the image to shear.
    intensity: the shear intensity, in degrees.
    Returns the sheared image.
    """
    # random_shear is part of the legacy keras.preprocessing API and
    # expects a channels-last, NumPy-style array.
    return tf.keras.preprocessing.image.random_shear(
        image,
        intensity,
        row_axis=0,      # height
        col_axis=1,      # width
        channel_axis=2,  # channels
    )

Image: sheared output of shear_image
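As a quick usage sketch (the file name is a placeholder and the intensity of 50 degrees is an arbitrary choice; since random_shear operates on a NumPy-style array, the loaded image is converted first):

import tensorflow as tf

img = tf.keras.preprocessing.image.load_img("sample.jpg")  # placeholder path
img = tf.keras.preprocessing.image.img_to_array(img)  # (height, width, channels) array

sheared = shear_image(img, intensity=50)
print(sheared.shape)  # same shape as the input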

Now that we’ve implemented the augmentation techniques, we can apply them to the dataset. It’s common to apply augmentation on-the-fly during training, rather than saving the augmented data. This saves storage space and ensures that the model sees a slightly different dataset each epoch.
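A minimal sketch of this on-the-fly pattern with tf.data follows; the random tensors stand in for a real dataset, and the flip and brightness transforms and the batch size of 32 are arbitrary illustrative choices:

import tensorflow as tf

# Stand-in data: 100 random 64x64 RGB "images" with binary labels.
train_images = tf.random.uniform([100, 64, 64, 3])
train_labels = tf.random.uniform([100], maxval=2, dtype=tf.int32)

def augment(image, label):
    # These run every time an example is drawn, so each epoch sees a
    # slightly different version of the same image.
    image = tf.image.random_flip_left_right(image)
    image = tf.image.random_brightness(image, max_delta=0.2)
    return image, label

dataset = (tf.data.Dataset.from_tensor_slices((train_images, train_labels))
           .shuffle(1024)
           .map(augment, num_parallel_calls=tf.data.AUTOTUNE)
           .batch(32)
           .prefetch(tf.data.AUTOTUNE))

Nothing augmented is ever written to disk; the transformed images exist only for the batch being processed.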

With the augmented dataset in place, we're ready to train the model. Because it sees more diverse data, the model should generalize better to unseen examples.

Finally, after training the model, don’t forget to evaluate it on a validation set to see how well it’s performing. If the model is overfitting, we might want to try increasing the diversity of the augmentations. If it’s underfitting, we might want to try decreasing the diversity.
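As a rough sketch of that check, assuming a compiled Keras model (with an "accuracy" metric) and the train and validation datasets from the steps above; the 0.1 gap and 0.7 floor are rules of thumb, not hard thresholds:

history = model.fit(train_dataset, validation_data=val_dataset, epochs=10)

train_acc = history.history["accuracy"][-1]
val_acc = history.history["val_accuracy"][-1]

if train_acc - val_acc > 0.1:
    print("Large train/validation gap: likely overfitting; try more varied augmentation.")
elif train_acc < 0.7:
    print("Low accuracy even on training data: possible underfitting; try milder augmentation.")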


