Home
💻 We recommend running on a device with at minimum an Intel i3 (2nd gen or newer), 500MB of free RAM, and enough free storage for the models, so they run smoothly; we ran those models under the performance setup described below.
When installing Model 1 or Model 1T, you download the pre-trained transformer all-MiniLM-L6-v2 locally. But when you use Model 2F, 2FT, 2M or 2MT, you are asked which model you want to use, and that model is then downloaded locally. We collect them in this list:
1. static-retrieval-mrl-en-v1
Optimized for English semantic search with static embeddings.
2. static-similarity-mrl-multilingual-v1
Covers 51 languages, strong for cross-lingual similarity.
3. all-MiniLM-L6-v2
Popular, fast, versatile, strong balance of speed and quality.
4. distiluse-base-multilingual-cased-v2
Supports 50 languages, decent accuracy but heavier.
5. paraphrase-MiniLM-L6-v2
Optimized for paraphrase detection and similarity scoring. Performance: Balanced.
6. msmarco-MiniLM-L6-v2
Fine-tuned on MSMARCO passage ranking, excellent for QA retrieval.
7. multi-qa-MiniLM-L6-cos-v1
Trained on 215M QA pairs, optimized for semantic search.
8. all-MiniLM-L6-v1
Early MiniLM version, fast but weaker than v2.
9. distilbert-base-nli-stsb-mean-tokens
Legacy SBERT model, low-quality embeddings, deprecated.
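All of these models turn text into embedding vectors that are compared with cosine similarity. Here is a minimal sketch of that comparison, with toy 3-dimensional vectors standing in for real model output (real all-MiniLM-L6-v2 embeddings, for instance, are 384-dimensional):

```python
# Sketch: how embedding vectors from any of the models above get compared.
# The vectors below are toy stand-ins, not real model output.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings (a real run would get these from the downloaded model):
potato = [0.9, 0.1, 0.0]
vegetable = [0.8, 0.2, 0.1]
disney = [0.0, 0.1, 0.9]

print(cosine_similarity(potato, vegetable))  # high: related topics
print(cosine_similarity(potato, disney))     # low: unrelated topics
```

Whichever model you pick, this scoring step is the same; the models differ only in how good the vectors they produce are.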
-Normally, Linux / macOS comes with a built-in Python, because they are Unix-based OSs; they generally ship with Python 3.11. If you upgrade your OS or VE (Virtual Environment) Python, we recommend upgrading to Python 3.14.
-If you don't know your python version, run:
python3 --version
-Afterwards, run the WikiSpeedrunner Python script like this:
python3 path/to/WikiSpeedrunnertype.py
-First, ensure that you have Python 3.11 or newer on your PC. You can do so by running:
where python
*If you get an output, Python is installed! If you don't have it, get Python from the Microsoft Store.
-Now, if you don't know your python version, run:
python --version
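If you prefer, you can also check the minimum version from inside Python itself; a small sketch (the 3.11 floor matches the requirement above):

```python
# Sketch: verify the running interpreter meets the 3.11 minimum.
import sys

def meets_minimum(required: tuple[int, int] = (3, 11)) -> bool:
    """True if the running interpreter is at least `required` (major, minor)."""
    return sys.version_info[:2] >= required

if meets_minimum():
    print(f"OK: running Python {sys.version_info.major}.{sys.version_info.minor}")
else:
    print("Please upgrade to Python 3.11 or newer.")
```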
-First, when you run the script or the executable, if you did everything correctly*, you should see something like this:
*If you use the Windows models, we recommend running them from a terminal.

*If you run the script, you might need to wait a few seconds or minutes, depending on your device performance and network.
-Then, you should see this:

-Then, input the starting and target Wikipedia articles. In the example, we used:
- Starting: https://en.wikipedia.org/wiki/Potato
- Target: https://en.wikipedia.org/wiki/The_Walt_Disney_Company
-And at the end, you should get the result:
- Target: https://en.wikipedia.org/wiki/The_Walt_Disney_Company
-Run it:

-Then, you will be asked which model you want; see the list above to decide which one is good for your computer and network.
-Afterwards, enter the starting and target articles:
-In the example, we used:
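The run itself can be sketched as a greedy choice: from the current article, follow the link whose embedding is most similar to the target's. The link names and vectors below are illustrative toy values, not real Wikipedia data or the project's actual code:

```python
# Sketch of the greedy idea behind the speedrun: pick, among the links on
# the current article, the one whose embedding is closest to the target's.
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def pick_next(links: dict[str, list[float]], target: list[float]) -> str:
    """Return the link whose embedding scores highest against the target."""
    return max(links, key=lambda name: cosine(links[name], target))

target_emb = [0.1, 0.9]              # e.g. "The Walt Disney Company" (toy value)
links = {                            # links found on the current article (toy values)
    "Starch": [0.9, 0.1],
    "United States": [0.5, 0.5],
    "Animation": [0.2, 0.8],
}
print(pick_next(links, target_emb))  # prints "Animation"
```

Repeating this step article after article is what moves the run from the starting page toward the target.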
-We've done some calculations and statistics: we created Test models that ask how many times to fetch, and at the end you get the average time and steps. Those statistics were calculated under this setup: Surface Pro 3 with Intel(R) Core(TM) i7-4650U CPU @ 1.70GHz (2.30 GHz), 8.00 GB of installed RAM, 64-bit operating system, x64-based processor, Windows 11 Pro, Git shell, and an unstable network (100Kb - 2Mb). The statistics may vary depending on your setup.
-Then, we ran all the models and tested them to get this comparison:
Minimum steps:
model 2Mini > model 2Mini Test > model 2Mini / model 1 > model 2Full Test > model 2Full
Minimum time:
model 2Mini Test > model 2Mini > model 2Full > model 1 Test > model 1 > model 2Full Test
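The averaging that the Test models report can be sketched like this; `run_once` here is a hypothetical stand-in for one real speedrun, not the project's actual function:

```python
# Sketch: repeat a run N times and report average time and steps,
# the way the Test models do. `run_once` is a hypothetical stand-in.
import random
import time

def run_once() -> tuple[float, int]:
    """Stand-in for one speedrun; returns (elapsed_seconds, steps)."""
    start = time.perf_counter()
    steps = random.randint(3, 8)          # pretend path length
    elapsed = time.perf_counter() - start
    return elapsed, steps

def average_stats(n: int) -> tuple[float, float]:
    """Average (time, steps) over n runs."""
    times, steps = [], []
    for _ in range(n):
        t, s = run_once()
        times.append(t)
        steps.append(s)
    return sum(times) / n, sum(steps) / n

avg_time, avg_steps = average_stats(5)
print(f"average time: {avg_time:.4f}s, average steps: {avg_steps:.1f}")
```

Averaging over several runs matters because both the chosen path and the network speed vary between runs, which is why the rankings above compare averages rather than single attempts.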