The results of Model2Vec are presented in the following sections:

MTEB Results (English)

Model2Vec is evaluated on MTEB, as well as two additional tasks: PEARL (a phrase representation task) and WordSim (a collection of word similarity tasks). The results are shown in the table below.
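
For reference, the sketch below shows one way to run a Model2Vec model through the mteb package. This is a minimal sketch rather than the harness used for these results: the two task names are illustrative, and the mteb and sentence-transformers APIs vary somewhat between versions.

```python
# Minimal sketch: evaluating a static Model2Vec model via `mteb`.
# The two tasks are illustrative; the full benchmark covers many more.
from mteb import MTEB
from sentence_transformers import SentenceTransformer
from sentence_transformers.models import StaticEmbedding

# Load the static embeddings as a SentenceTransformer so mteb can call encode().
static = StaticEmbedding.from_model2vec("minishlab/potion-base-8M")
model = SentenceTransformer(modules=[static])

evaluation = MTEB(tasks=["STSBenchmark", "Banking77Classification"])
evaluation.run(model, output_folder="results/potion-base-8M")
```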

Note: The potion and M2V models are our static models.

| Model | Avg (All) | Avg (MTEB) | Class | Clust | PairClass | Rank | Ret | STS | Sum | Pearl | WordSim |
|---|---|---|---|---|---|---|---|---|---|---|---|
| all-MiniLM-L6-v2 | 56.08 | 56.09 | 62.62 | 41.94 | 82.37 | 58.04 | 41.95 | 78.90 | 30.81 | 60.83 | 49.91 |
| potion-base-32M | 52.46 | 51.66 | 65.97 | 35.29 | 78.17 | 50.92 | 33.52 | 74.22 | 29.78 | 55.37 | 55.15 |
| potion-base-8M | 50.54 | 50.03 | 64.44 | 32.93 | 76.62 | 49.73 | 31.71 | 73.24 | 29.28 | 53.54 | 50.75 |
| potion-retrieval-32M | 49.73 | 49.76 | 59.56 | 30.55 | 76.38 | 50.05 | 36.35 | 73.22 | 28.85 | 49.31 | 50.02 |
| potion-base-4M | 48.87 | 48.23 | 62.19 | 31.47 | 75.37 | 48.75 | 29.11 | 72.19 | 28.89 | 52.55 | 49.21 |
| static-retrieval-mrl-en-v1 | 48.18 | 48.36 | 57.39 | 28.32 | 75.63 | 49.16 | 35.61 | 72.18 | 28.64 | 49.68 | 44.76 |
| static-similarity-mrl-multilingual-v1 | 48.15 | 47.15 | 59.96 | 24.40 | 79.02 | 48.25 | 29.54 | 74.88 | 30.28 | 51.66 | 51.66 |
| M2V_base_output | 46.79 | 45.34 | 61.25 | 25.58 | 74.90 | 47.63 | 26.14 | 68.58 | 29.20 | 54.02 | 49.18 |
| potion-base-2M | 45.52 | 44.77 | 58.45 | 27.50 | 73.72 | 46.82 | 24.13 | 70.14 | 31.51 | 50.82 | 44.72 |
| GloVe_300d | 42.84 | 42.36 | 57.31 | 27.66 | 72.48 | 43.30 | 22.78 | 61.90 | 28.81 | 45.65 | 43.05 |
| BPEmb_50k_300d | 39.34 | 37.78 | 55.76 | 23.35 | 57.86 | 43.21 | 17.50 | 55.10 | 29.74 | 47.56 | 41.28 |

The results show that potion-base-32M is the most performant static embedding model. It reaches 92.11% of the performance of all-MiniLM-L6-v2 with an average MTEB score of 51.66 while being orders of magnitude faster.

Note: the potion-retrieval-32M, static-retrieval-mrl-en-v1, and static-similarity-mrl-multilingual-v1 models are task-specific models. We’ve included them for completeness, but they should not be compared directly to the other models for tasks that they are not designed for.

The figure below shows the relationship between the number of sentences per second and the average MTEB score. The circle sizes correspond to the number of parameters in the models (larger = more parameters). This plot shows that the potion and M2V models are much faster than the other models, while still being competitive in terms of performance with the all-MiniLM-L6-v2 model. NOTE: for fairness of comparison, we disabled multiprocessing for Model2Vec for this benchmark. All sentence-transformers models are run with the sentence-transformers library’s default settings for encode.

Figure: The average MTEB score plotted against sentences per second. The circle size indicates model size.
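
As a rough illustration of how such a throughput number can be measured, the sketch below times StaticModel.encode on a batch of sentences. This is an assumed setup, not the exact benchmark harness; the use_multiprocessing keyword follows recent model2vec versions and may differ in older releases.

```python
# Rough sketch of the kind of throughput measurement shown in the figure.
import time

from model2vec import StaticModel

sentences = ["This is a benchmark sentence."] * 10_000

model = StaticModel.from_pretrained("minishlab/potion-base-8M")

start = time.perf_counter()
# Multiprocessing disabled to mirror the single-process benchmark setting;
# the keyword name is taken from recent model2vec versions.
model.encode(sentences, use_multiprocessing=False)
elapsed = time.perf_counter() - start

print(f"{len(sentences) / elapsed:,.0f} sentences / second")
```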

MMTEB Results (Multilingual)

The results for the multilingual models are shown in the table below. We compare against the LaBSE model, as well as other multilingual static embedding models.

| Model | Mean (Task) | Mean (TaskType) | BitMining | Class | Clust | InstRet | MultiClass | PairClass | Rank | Ret | STS |
|---|---|---|---|---|---|---|---|---|---|---|---|
| LaBSE | 52.07 | 45.65 | 76.35 | 54.60 | 38.08 | −3.00 | 20.12 | 75.97 | 50.20 | 33.17 | 65.35 |
| potion-multilingual-128M | 47.31 | 40.40 | 40.72 | 52.36 | 38.80 | −2.08 | 15.95 | 71.39 | 47.39 | 37.86 | 61.23 |
| static-similarity-mrl-multilingual-v1 | 47.24 | 41.38 | 50.62 | 48.60 | 30.67 | −1.24 | 14.74 | 74.34 | 49.45 | 41.21 | 64.02 |
| M2V_multilingual_output | 42.13 | 35.89 | 36.88 | 49.75 | 30.09 | −0.07 | 14.34 | 69.74 | 41.51 | 25.42 | 55.33 |

As can be seen, potion-multilingual-128M is the most performant static multilingual model, reaching 90.86% of the performance of LaBSE. There are differences per task. The static-similarity-mrl-multilingual-v1 model is better for retrieval and STS tasks (which can be explained by the fact that it’s trained for STS), while the potion-multilingual-128M model is better for classification and clustering tasks. It is important to note that the potion-multilingual-128M model supports a total of 101 languages, while static-similarity-mrl-multilingual-v1 supports only 50 languages. It is also important to note that MMTEB does not include tasks for every language, and there may be a bias towards larger languages.

Retrieval Results

Some of the models we created, as well as some of the models we compare against, are specifically designed for retrieval tasks. The results are shown in the table below, including two general-purpose models and a transformer model for comparison.

| Model | Retrieval Score |
|---|---|
| all-MiniLM-L6-v2 | 41.95 |
| potion-retrieval-32M | 36.35 |
| static-retrieval-mrl-en-v1 | 35.61 |
| potion-base-32M | 33.52 |
| potion-base-8M | 31.71 |

As can be seen, the potion-retrieval-32M model is the most performant static retrieval model, reaching 86.65% of the performance of all-MiniLM-L6-v2 with a retrieval score of 36.35.

Training Results

The main results for Model2Vec training are outlined in this section.

We compare five different architectures for our main results:

  • model2vec + logreg: A model2vec model with a scikit-learn LogisticRegressionCV on top (see the sketch after this list).
  • model2vec full finetune: A model2vec classifier with the full model finetuned. This uses our StaticModelForClassification.
  • tfidf: A TF-IDF model with a scikit-learn LogisticRegressionCV on top.
  • setfit: A SetFit model trained using all-MiniLM-L6-v2 as a base model.
  • bge-base + logreg: A BGE-base encoder model with a scikit-learn LogisticRegressionCV on top.
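
As a concrete illustration of the first setup, the sketch below wires a Model2Vec encoder into scikit-learn. It is a minimal sketch: the inline four-example dataset is a placeholder for the 1000-example training sets described in the next paragraph.

```python
# Minimal sketch of "model2vec + logreg": static embeddings + LogisticRegressionCV.
from model2vec import StaticModel
from sklearn.linear_model import LogisticRegressionCV

# Placeholder data; the experiments used 1000 training examples per dataset.
train_texts = ["great product", "awful service", "loved it", "waste of money"]
train_labels = [1, 0, 1, 0]

model = StaticModel.from_pretrained("minishlab/potion-base-32M")
X_train = model.encode(train_texts)  # one static vector per text

clf = LogisticRegressionCV(cv=2, max_iter=1000).fit(X_train, train_labels)
print(clf.predict(model.encode(["would buy again"])))
```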

We use 15 classification datasets, using 1000 examples from the train set and the full test set. No parameters were tuned on any validation set. All datasets were taken from the SetFit organization on Hugging Face.

| dataset | tfidf | model2vec + logreg | model2vec full finetune | setfit | bge-base + logreg |
|---|---|---|---|---|---|
| 20_newgroups | 50.71 | 56.24 | 57.94 | 61.29 | 67.39 |
| ade | 71.46 | 79.20 | 79.68 | 83.05 | 86.12 |
| ag_news | 81.68 | 86.70 | 87.20 | 88.01 | 88.95 |
| amazon_counterfactual | 85.18 | 90.96 | 91.93 | 95.51 | 92.74 |
| bbc | 95.09 | 95.80 | 97.21 | 96.60 | 97.50 |
| emotion | 59.28 | 65.57 | 67.11 | 72.86 | 65.63 |
| enron_spam | 96.00 | 96.40 | 96.85 | 97.45 | 97.30 |
| hatespeech_offensive | 66.45 | 83.54 | 85.61 | 87.69 | 84.92 |
| imdb | 80.44 | 85.34 | 85.59 | 86.00 | 92.25 |
| massive_scenario | 77.26 | 82.86 | 84.42 | 83.54 | 87.07 |
| senteval_cr | 65.61 | 77.03 | 79.47 | 86.15 | 90.53 |
| sst5 | 18.52 | 32.34 | 37.95 | 42.31 | 38.49 |
| student | 74.16 | 83.20 | 85.02 | 89.62 | 89.71 |
| subj | 86.39 | 89.20 | 89.85 | 93.80 | 94.55 |
| tweet_sentiment_extraction | 53.20 | 64.96 | 62.65 | 75.15 | 69.48 |
| average | 70.8 | 78.0 | 79.2 | 82.6 | 82.8 |

As can be seen, full fine-tuning brings modest performance improvements in some cases, but very large ones in others, leading to a sizable increase in average score. Our advice is to test both if you can use potion-base-32M, and to use full fine-tuning if you are starting from another base model.
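
For comparison with the sketch above, a minimal sketch of the full fine-tuning route is shown below, assuming the StaticModelForClassification interface from model2vec's training extra (installed via pip install model2vec[train]); the hyperparameters behind the results above are not reproduced here.

```python
# Minimal sketch: full fine-tuning with StaticModelForClassification.
# Placeholder data; the experiments used 1000 training examples per dataset.
from model2vec.train import StaticModelForClassification

train_texts = ["great product", "awful service", "loved it", "waste of money"]
train_labels = ["positive", "negative", "positive", "negative"]

classifier = StaticModelForClassification.from_pretrained(
    model_name="minishlab/potion-base-32M"
)
classifier.fit(train_texts, train_labels)
print(classifier.predict(["would buy again"]))
```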

The speed difference between model2vec and the other models is immense: on CPU, the full finetune is 35x faster than a SetFit model based on all-MiniLM-L6-v2, and 200x faster than the bge-base transformer model.

| | tfidf | model2vec + logreg | model2vec full finetune | setfit | bge-base + logreg |
|---|---|---|---|---|---|
| samples / second | 108434 | 17925 | 24744 | 716 | 118 |

The figure below shows the relationship between the number of sentences per second and the average training score, where we’ve included more transformer-based models for comparison.

Figure: The average training score plotted against sentences per second (log scale).

Ablations

To better understand the factors contributing to the performance of Model2Vec, we conducted a comprehensive set of ablation studies covering various aspects of the model's architecture and preprocessing methods. In these studies, we examined the impact of key elements such as PCA, Zipf weighting, and the use of Sentence Transformers versus regular transformer models. We also compared the performance of input embeddings versus output embeddings, since it seems plausible that input embeddings should also work well. The results are shown in the table below.

| Model | Avg (All) | Avg (MTEB) | Class | Clust | PairClass | Rank | Ret | STS | Sum | Pearl | WordSim |
|---|---|---|---|---|---|---|---|---|---|---|---|
| M2V_base_output | 46.79 | 45.34 | 61.25 | 25.58 | 74.90 | 47.63 | 26.14 | 68.58 | 29.20 | 54.02 | 49.18 |
| M2V_base_output_nopca | 44.04 | 42.31 | 61.42 | 20.15 | 68.21 | 44.67 | 25.25 | 61.87 | 29.85 | 51.02 | 48.96 |
| M2V_base_output_nozipf | 43.61 | 41.52 | 60.44 | 21.62 | 72.15 | 45.57 | 20.35 | 62.71 | 30.66 | 52.28 | 49.17 |
| M2V_base_input_nozipf_nopca | 40.97 | 39.55 | 54.16 | 18.62 | 68.30 | 43.65 | 23.63 | 59.38 | 32.04 | 50.19 | 40.52 |
| M2V_base_output_nozipf_nopca | 40.80 | 38.44 | 59.78 | 19.31 | 62.39 | 42.26 | 19.01 | 55.16 | 30.00 | 49.09 | 48.97 |
| M2V_base_input | 40.74 | 39.93 | 60.35 | 22.66 | 59.63 | 43.02 | 25.47 | 50.05 | 29.35 | 50.61 | 34.47 |
| M2V_bert_output_nozipf_nopca | 35.54 | 34.82 | 55.69 | 15.42 | 58.68 | 39.87 | 12.92 | 55.24 | 30.15 | 46.90 | 26.72 |
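
The variants above amount to toggling steps of the distillation pipeline. The sketch below shows how such variants might be produced with model2vec's distill function. It is a sketch under assumptions: the apply_zipf flag follows older model2vec releases (newer versions expose token weighting through a SIF coefficient instead), and BAAI/bge-base-en-v1.5 is our reading of the 'BGE-base' checkpoint named in finding 1 below.

```python
# Sketch: producing ablation variants via distillation. `apply_zipf` is an
# assumption based on older model2vec releases; recent versions control the
# token weighting via a SIF coefficient instead.
from model2vec.distill import distill

base = "BAAI/bge-base-en-v1.5"  # assumed id for the BGE-base checkpoint

m2v_output = distill(model_name=base, pca_dims=256)                    # PCA + Zipf
m2v_nopca = distill(model_name=base, pca_dims=None)                    # disable PCA
m2v_nozipf = distill(model_name=base, pca_dims=256, apply_zipf=False)  # disable Zipf
```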

There are four main findings in these results:

  1. Non-Sentence Transformers do not work well. This can be seen by comparing M2V_bert_output_nozipf_nopca (which uses BERT, a non-Sentence Transformer) and M2V_base_output_nozipf_nopca (which uses BGE-base, a Sentence Transformer). Using a Sentence Transformer gives a ~5.2% increase in performance.
  2. PCA is crucial for performance. This can be seen by comparing M2V_base_output_nozipf_nopca and M2V_base_output_nozipf, which shows a ~2.8% increase in performance. Furthermore, PCA improves performance on all tasks.
  3. Zipf weighting is crucial for performance. This can be seen by comparing M2V_base_output_nozipf_nopca and M2V_base_output_nopca, which shows a ~3.1% increase in performance.
  4. Output embeddings outperform input embeddings. This can be seen by comparing M2V_base_input and M2V_base_output, which shows a ~6.1% increase in performance. Note that input embeddings do work well for some tasks. We hypothesize that this is because input embeddings are inherently normalized.