
Experiment settings: Image metric learning

In addition to the settings it shares with other problem types, an image metric learning experiment has the specific settings listed and described below.

Dataset Settings

Data Folder

Defines the folder location of the images to use for the experiment. When the experiment is running, H2O Hydrogen Torch will load images from this folder.

Data Folder Test

Defines the folder location of the images H2O Hydrogen Torch will use to test the model. This setting is only available when a test dataframe is selected.

Note

The Data Folder Test setting will appear when you specify a test dataframe using the Test Dataframe setting.

Image Settings

Image Width

Defines the width H2O Hydrogen Torch will use to rescale the images for training and predictions.

Note

Depending on the original image size, a larger width can lead to higher accuracy.

Image Height

Defines the height H2O Hydrogen Torch will use to rescale the images for training and predictions.

Note

Depending on the original image size, a larger height can lead to higher accuracy.

Image Channels

Defines the number of channels the train images contain.

Note

  • Typically, images have three input channels (red, green, and blue (RGB)), but grayscale images have only one. When you provide image data in NumPy format, any number of channels is allowed; for this reason, data scientists can specify the number of channels.

  • The defined number of channels also applies to the provided validation and test datasets.
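For illustration (a hedged sketch; the file name and channel count below are made up), a multi-channel NumPy image is simply an array whose last dimension holds the channels, and the Image Channels setting should match it:

```python
import numpy as np

# Hypothetical example: a single 4-channel image (e.g., RGB plus one extra band)
# stored as a NumPy array with shape (height, width, channels).
image = np.random.randint(0, 256, size=(256, 256, 4), dtype=np.uint8)

# Saving it as .npy keeps all four channels; the experiment's Image Channels
# setting would then be set to 4 to match.
np.save("sample_image.npy", image)
print(image.shape)  # (256, 256, 4)
```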

Image Normalization

Grid search hyperparameter

Defines the transformer to normalize the image data before training the model.

Note

Usually, state-of-the-art image models normalize the training images by scaling values of each of the input channels to predefined means and standard deviations.
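For intuition, a minimal sketch of such channel-wise normalization using the Albumentations library and the widely used ImageNet statistics (the exact transformer H2O Hydrogen Torch applies may differ):

```python
import albumentations as A
import numpy as np

# Channel-wise normalization: (pixel / max_pixel_value - mean) / std,
# using the ImageNet statistics that many pretrained backbones expect.
normalize = A.Normalize(
    mean=(0.485, 0.456, 0.406),
    std=(0.229, 0.224, 0.225),
    max_pixel_value=255.0,
)

image = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)
normalized = normalize(image=image)["image"]  # float32, roughly zero-centered per channel
```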

Augmentation settings

Augmentations strategy

Grid search hyperparameter

Defines the augmentation strategy to apply to the input images. Soft, Medium, and Hard values correspond to the strength of the augmentations to apply.

Options
  • Soft: The Soft strategy applies image Resize and random HorizontalFlip during model training while applying image Resize during model inference.

  • Medium: The Medium strategy adds ShiftScaleRotate and CoarseDropout to the list of the train augmentations.

  • Hard: The Hard strategy applies RandomResizedCrop (instead of Resize) during model training while adding RandomBrightnessContrast to the list of train augmentations.

  • Custom: The Custom strategy allows users to use their own augmentations, defined in the following two settings: Custom Train Augmentations and Custom Inference Augmentations.

Note

Augmentations are ways to modify train images while keeping the target values valid, such as flipping the image or adding noise. Distorting training images does not influence the expected prediction of the model but enriches the training data. Augmentations help the model generalize better and improve its accuracy.
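As a rough sketch (not the exact pipeline H2O Hydrogen Torch builds internally), a Medium-style strategy could be expressed in Albumentations as follows, with the image size taken from the Image Width and Image Height settings:

```python
import albumentations as A

IMAGE_HEIGHT, IMAGE_WIDTH = 224, 224  # would come from the experiment configuration

# Approximation of a "Medium" train strategy: Resize + HorizontalFlip (the Soft set)
# plus ShiftScaleRotate and CoarseDropout.
train_augs = A.Compose([
    A.Resize(height=IMAGE_HEIGHT, width=IMAGE_WIDTH),
    A.HorizontalFlip(p=0.5),
    A.ShiftScaleRotate(p=0.5),
    A.CoarseDropout(p=0.5),
])

# Inference only resizes the image.
inference_augs = A.Compose([
    A.Resize(height=IMAGE_HEIGHT, width=IMAGE_WIDTH),
])
```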

Custom Train Augmentations

Defines a list of augmentations to use for the train data. The expected format is the .json output of the albumentations.save() function from the Albumentations library. The IMAGE_HEIGHT and IMAGE_WIDTH placeholders can be used to reference the image dimensions from the experiment configuration.

Note

Augmentations are ways to modify train images while keeping the target values valid, such as flipping the image or adding noise. Distorting training images does not influence the expected prediction of the model but enriches the training data. Augmentations help the model generalize better and improve its accuracy. Augmentations are applied to every image at each epoch with the provided probability.
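A minimal sketch of producing such a .json file (the file name and transform choices are illustrative):

```python
import albumentations as A

# Define a custom train augmentation pipeline and serialize it to JSON with
# albumentations.save(); the resulting file content is what this setting expects.
custom_train_augs = A.Compose([
    A.Resize(height=256, width=256),   # the numeric sizes in the saved JSON could
    A.HorizontalFlip(p=0.5),           # later be replaced by the IMAGE_HEIGHT and
    A.RandomBrightnessContrast(p=0.3), # IMAGE_WIDTH placeholders
])
A.save(custom_train_augs, "custom_train_augs.json")
```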

Custom Inference Augmentations

Defines a list of inference augmentations to apply to the test and validation data. The expected format is the .json output of the albumentations.save() function from the Albumentations library. The IMAGE_HEIGHT and IMAGE_WIDTH placeholders can be used to reference the image dimensions from the experiment configuration.

Note

Inference augmentations serve the same purpose as training augmentations, but the difference is that inference augmentations are applied to validation and test data. Typically, inference augmentations only contain resizing or very simple augmentations.
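Similarly, a sketch of a typical custom inference pipeline (illustrative; usually it contains only resizing):

```python
import albumentations as A

# Inference augmentations are usually just a resize to the training resolution.
custom_inference_augs = A.Compose([
    A.Resize(height=256, width=256),
])
A.save(custom_inference_augs, "custom_inference_augs.json")
```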

Architecture Settings

Embedding Size

Grid search hyperparameter

Defines the dimensionality H2O Hydrogen Torch will use for the embedding vector representing one sample during model training.

Note

  • The embedding size impacts the granularity of the embeddings of individual records (the embedding calculation) and the cosine similarity calculation that follows it.

  • A smaller embedding size typically leads to more general embeddings, while a larger one leads to more specific embeddings.

  • Tuning the size of the embedding can impact overfitting and underfitting.
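For intuition, the sketch below (plain PyTorch, not H2O Hydrogen Torch code) shows how L2-normalized embeddings of a chosen size feed into the cosine similarity calculation; the feature dimension of 2048 is an assumption for illustration:

```python
import torch
import torch.nn.functional as F

embedding_size = 512                        # the Embedding Size setting
backbone_features = torch.randn(4, 2048)    # pooled backbone output for 4 samples

# Project backbone features to the configured embedding size and L2-normalize.
embedding_head = torch.nn.Linear(2048, embedding_size)
embeddings = F.normalize(embedding_head(backbone_features), dim=1)

# Cosine similarity between all pairs of embedded samples.
similarity = embeddings @ embeddings.T      # shape (4, 4), values in [-1, 1]
```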

Backbone

Grid search hyperparameter

Defines the backbone neural network architecture to train the model.

Note

H2O Hydrogen Torch provides several state-of-the-art backbone neural network architectures for model training. H2O Hydrogen Torch accepts backbone neural network architectures from the timm library (enter the architecture name).

Tip

Usually, simpler architectures are good for quicker experiments, while larger models are preferable when aiming for the highest accuracy.
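For example (a sketch of how a timm backbone is typically instantiated by name; not H2O Hydrogen Torch internals):

```python
import timm

# Any architecture name known to timm can be entered as the backbone,
# e.g. "resnet50" or "tf_efficientnet_b0".
backbone = timm.create_model(
    "resnet50",
    pretrained=True,   # corresponds to the Pretrained setting
    num_classes=0,     # drop the classification head; return pooled features
)
```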

Pretrained

Defines whether the neural network should start with pre-trained weights. When this setting is On, training starts from a model pre-trained on a generic task. When it is Off, the initial weights of the neural network are random.

Pool

Grid search hyperparameter

Defines the global pooling method before the final fully connected layer that H2O Hydrogen Torch will use in the model architecture. Instead of adding a fully connected layer on top of the feature maps, global pooling is applied to each feature map beforehand.

Dropout

Grid search hyperparameter

Defines the dropout rate before the final fully connected layer that H2O Hydrogen Torch will apply during model training. The dropout rate will help the model generalize better by randomly dropping a share of the neural network connections.
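A sketch of how the pooling, dropout, and embedding settings typically fit together in a model head (illustrative only, assuming average pooling and a ResNet-sized feature map; not the exact H2O Hydrogen Torch implementation):

```python
import torch
import torch.nn as nn

class MetricLearningHead(nn.Module):
    """Illustrative head: global average pooling -> dropout -> embedding layer."""

    def __init__(self, in_channels: int = 2048, embedding_size: int = 512, dropout: float = 0.1):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)      # the Pool setting (here: average pooling)
        self.dropout = nn.Dropout(dropout)       # the Dropout setting
        self.fc = nn.Linear(in_channels, embedding_size)

    def forward(self, feature_maps: torch.Tensor) -> torch.Tensor:
        x = self.pool(feature_maps).flatten(1)   # (batch, in_channels)
        x = self.dropout(x)
        return self.fc(x)                        # (batch, embedding_size)

head = MetricLearningHead()
features = torch.randn(4, 2048, 7, 7)            # backbone feature maps
embeddings = head(features)
```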

Training Settings

Arcface Margin

Grid search hyperparameter

Defines the margin for ArcFace loss; higher values result in a bigger separation of samples.

Note

  • Tuning this setting can impact the training and quality of embeddings.

  • This setting can be important to tune, and the optimal value depends on the dataset at hand.

Arcface Scale

Grid search hyperparameter

Defines the ArcFace loss scale value that changes the shape of logits and impacts gradients.
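As a rough sketch of how the margin and scale enter an ArcFace-style loss (assuming cosine logits between L2-normalized embeddings and class centers; illustrative, not H2O Hydrogen Torch's exact code):

```python
import torch
import torch.nn.functional as F

def arcface_logits(embeddings, class_centers, labels, margin=0.5, scale=30.0):
    """Apply the ArcFace margin to the target-class angle and rescale the logits."""
    cosine = F.normalize(embeddings, dim=1) @ F.normalize(class_centers, dim=1).T
    theta = torch.acos(cosine.clamp(-1 + 1e-7, 1 - 1e-7))
    # Add the angular margin only to the target class of each sample.
    target_mask = F.one_hot(labels, num_classes=class_centers.shape[0]).bool()
    theta = torch.where(target_mask, theta + margin, theta)
    return scale * torch.cos(theta)   # passed to cross-entropy as usual

embeddings = torch.randn(8, 512)
class_centers = torch.randn(100, 512)
labels = torch.randint(0, 100, (8,))
loss = F.cross_entropy(arcface_logits(embeddings, class_centers, labels), labels)
```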

Prediction Settings

Top K Similar

Defines the number (k) of similar predictions H2O Hydrogen Torch will keep for each record.

Note

This setting impacts output predictions and metrics that rely on a top-k selection, but it does not impact the training process.
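A sketch of how such a top-k selection over cosine similarities can be computed (illustrative, plain PyTorch):

```python
import torch
import torch.nn.functional as F

k = 5                                    # the Top K Similar setting
embeddings = F.normalize(torch.randn(100, 512), dim=1)

similarity = embeddings @ embeddings.T   # pairwise cosine similarity
similarity.fill_diagonal_(-1.0)          # ignore self-matches

top_values, top_indices = torch.topk(similarity, k=k, dim=1)
# top_indices[i] holds the k most similar records for record i.
```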

Test Time Augmentations

Defines the test time augmentation(s) to apply during inference. Test time augmentations are applied when the model makes predictions on new data. The final prediction is an average of the predictions for all the augmented versions of an image.

Note

This technique can improve the model accuracy.
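For intuition, a minimal sketch of the averaging idea, assuming a simple horizontal-flip test time augmentation and a hypothetical model:

```python
import torch

def predict_with_tta(model, images):
    """Average predictions over the original and horizontally flipped images."""
    with torch.no_grad():
        original = model(images)
        flipped = model(torch.flip(images, dims=[-1]))  # flip along the width axis
    return (original + flipped) / 2
```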

Environment Settings

An image metric learning experiment does not have specific environment settings besides those specified in the environment settings section of the common experiment settings page.

Logging Settings

Number of Images

Defines the number of images to show in the experiment Insights tab.

