UpTrain also provides convergence analysis for embeddings: a technique for evaluating the performance of an embedding algorithm by measuring how well the embeddings converge as the algorithm iterates. UpTrain offers several methods for conducting this analysis, including visualization tools and metrics for assessing the quality of the embeddings.

The config to check for convergence statistics can be defined as follows:

convergence_check = {
    "type": uptrain.Statistic.CONVERGENCE_STATS,
    "model_args": [{
        "type": uptrain.MeasurableType.INPUT_FEATURE,
        "feature_name": "model_type",
        "allowed_values": ["batch", "realtime"],
    }],
    "reference": "initial",
    "distance_types": ["cosine_distance", "norm_ratio", "l2_distance"],
}
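
As a rough sketch of how this check could be wired in (assuming UpTrain's dictionary-based configuration, where checks are passed to a Framework object and data is streamed through its log method; the feature and output names below are illustrative, not taken from the original example):

import uptrain

# Hypothetical wiring: pass the convergence check into the framework config.
cfg = {
    "checks": [convergence_check],
    "logging_args": {"st_logging": True},  # stream results to the dashboard
}
framework = uptrain.Framework(cfg_dict=cfg)

# Illustrative logging call: "model_type" matches the input feature named in
# the check, and the outputs stand in for the embeddings produced by the model.
inputs = {"id": [0], "model_type": ["batch"]}
outputs = [[0.1, 0.2, 0.3]]
framework.log(inputs=inputs, outputs=outputs)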

In our dashboard, we observe that at time 100k, the norm ratio of embeddings generated by the batch model is higher than that of the real-time model, implying a greater popularity bias.

Norm ratio of embeddings w.r.t. initial embeddings at time 100k
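
For intuition on the norm-ratio reading above, the distances listed in distance_types can be thought of roughly as follows (a hand-rolled sketch for illustration, not UpTrain's internal implementation): cosine distance captures angular drift from the initial embedding, the norm ratio compares magnitudes, and L2 distance captures absolute displacement.

import numpy as np

def convergence_distances(current: np.ndarray, initial: np.ndarray) -> dict:
    """Rough equivalents of the configured distance types, computed between
    an embedding at time t and its initial reference."""
    cosine_distance = 1 - np.dot(current, initial) / (
        np.linalg.norm(current) * np.linalg.norm(initial)
    )
    norm_ratio = np.linalg.norm(current) / np.linalg.norm(initial)
    l2_distance = np.linalg.norm(current - initial)
    return {
        "cosine_distance": cosine_distance,
        "norm_ratio": norm_ratio,
        "l2_distance": l2_distance,
    }

# A norm ratio well above 1 means the embedding has grown in magnitude
# relative to its initial value, the pattern read as popularity bias above.
print(convergence_distances(np.array([0.4, 0.8, 0.2]), np.array([0.1, 0.2, 0.3])))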