Spanish word embeddings computed with different methods and from different corpora
Below you will find links to Spanish word embeddings computed with different methods and from different corpora. Whenever possible, each entry includes a description of the parameters used to compute the embeddings, simple statistics of the vectors and vocabulary, and a description of the corpus from which the embeddings were computed. Direct links to the embeddings are provided, but please refer to the original sources for proper citation (see also References). Examples of how to use some of these embeddings can be found here or in this tutorial (both in Spanish).
Summary (and links) for the embeddings on this page:
| # | Corpus | Size | Algorithm | #vectors | vec-dim | Credits |
|---|---|---|---|---|---|---|
| 1 | Spanish Unannotated Corpora | 2.6B | FastText | 1,313,423 | 300 | José Cañete |
| 2 | Spanish Billion Word Corpus | 1.4B | FastText | 855,380 | 300 | Jorge Pérez |
| 3 | Spanish Billion Word Corpus | 1.4B | GloVe | 855,380 | 300 | Jorge Pérez |
| 4 | Spanish Billion Word Corpus | 1.4B | Word2Vec | 1,000,653 | 300 | Cristian Cardellino |
| 5 | Spanish Wikipedia | ??? | FastText | 985,667 | 300 | FastText team |
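Most of the files linked below are distributed in the word2vec text format: a header line with the vector count and dimension, followed by one line per word containing the word and its components. A minimal parsing sketch, using made-up 4-dimensional toy data rather than one of the real 300-dimensional files:

```python
# Parse the word2vec text (.vec) format. The string below is a made-up
# 3-word, 4-dimensional example; real files start with "<#vectors> <dim>".
vec_text = """3 4
hola 0.1 0.2 0.3 0.4
mundo 0.5 0.1 0.0 0.2
chile 0.9 0.3 0.2 0.1
"""

def load_vec(text):
    lines = text.strip().split("\n")
    n_words, dim = map(int, lines[0].split())
    vectors = {}
    for line in lines[1:]:
        parts = line.split()
        # First token is the word, the rest are the vector components.
        vectors[parts[0]] = [float(x) for x in parts[1:]]
    assert len(vectors) == n_words
    return vectors, dim

vectors, dim = load_vec(vec_text)
print(len(vectors), dim)  # → 3 4
```

For the real files, libraries such as gensim (`KeyedVectors.load_word2vec_format`) or the fastText tools can load this format directly, including the binary variants.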
Links to the FastText embeddings from the Spanish Unannotated Corpora (#dimensions=300, #vectors=1,313,423):
More vectors with different dimensions (10, 30, 100, and 300) can be found here.
Links to the FastText embeddings from the Spanish Billion Word Corpus (#dimensions=300, #vectors=855,380):
Links to the GloVe embeddings from the Spanish Billion Word Corpus (#dimensions=300, #vectors=855,380):
Links to the Word2Vec embeddings from the Spanish Billion Word Corpus (#dimensions=300, #vectors=1,000,653):
Links to the FastText embeddings from the Spanish Wikipedia (#dimensions=300, #vectors=985,667):
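A typical use of any of these embeddings is finding similar words by cosine similarity between their vectors. A minimal sketch with made-up toy vectors (the words and values are illustrative stand-ins, not taken from the real 300-dimensional embeddings):

```python
import math

# Toy 4-dimensional "embeddings"; real vectors would be loaded from
# one of the files linked above.
embeddings = {
    "rey":   [0.90, 0.10, 0.00, 0.30],
    "reina": [0.85, 0.15, 0.05, 0.30],
    "perro": [0.00, 0.90, 0.40, 0.10],
}

def cosine(u, v):
    # Cosine similarity: dot product normalized by the vector lengths.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def most_similar(word, k=1):
    query = embeddings[word]
    scored = [(other, cosine(query, vec))
              for other, vec in embeddings.items() if other != word]
    return sorted(scored, key=lambda p: p[1], reverse=True)[:k]

print(most_similar("rey"))  # "reina" ranks above "perro"
```

With the real vectors loaded into the same dictionary shape, the exact same query works unchanged; gensim's `KeyedVectors.most_similar` provides an optimized version of this operation.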