Word embeddings are vector representations of words, where words that appear in similar contexts occupy nearby locations in vector space. First developed by a team of researchers at Google led by Tomas Mikolov, and described in the paper Efficient Estimation of Word Representations in Vector Space, word2vec is a popular group of models that produce word embeddings by training shallow neural networks. In this blog post, we apply a word2vec model to the Alteryx Community texts to develop Alteryx-specific word embeddings.
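For readers who want to experiment with the technique themselves, here is a minimal sketch of training word2vec embeddings with gensim's Word2Vec class. The toy sentences and parameter values below are illustrative placeholders, not the corpus or settings we used for the Community embeddings.

```python
# A minimal word2vec training sketch using gensim (assumed gensim 4.x API).
# The toy corpus below stands in for tokenized Alteryx Community posts.
from gensim.models import Word2Vec

# Each "sentence" is a list of tokens; in practice these would come from
# a tokenization/cleaning pipeline run over the Community texts.
sentences = [
    ["alteryx", "designer", "builds", "workflows"],
    ["the", "join", "tool", "combines", "data", "streams"],
    ["macros", "let", "you", "reuse", "workflows"],
]

# Parameter values here are illustrative defaults, not our actual settings.
model = Word2Vec(
    sentences,
    vector_size=100,  # dimensionality of the word vectors
    window=5,         # context window size around each target word
    min_count=1,      # keep every token in this tiny toy corpus
    sg=1,             # 1 = skip-gram architecture; 0 = CBOW
)

# Words used in similar contexts end up close together in vector space.
print(model.wv.most_similar("workflows", topn=3))
```

Under the hood, the shallow network learns to predict a word from its neighbors (CBOW) or neighbors from a word (skip-gram), and the trained hidden-layer weights become the embeddings.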