Commit a513654

Fix file extension, add redirections
1 parent 96bb2ae commit a513654

2 files changed: 11 additions & 1 deletion

.openpublishing.redirection.json (10 additions & 0 deletions)
@@ -65149,6 +65149,16 @@
       "source_path": "learn-pr/azure/manage-secrets-with-azure-key-vault/7-summary.yml",
       "redirect_url": "/azure/security/",
       "redirect_document_id": false
+    },
+    {
+      "source_path": "learn-pr/tensorflow/intro-natural-language-processing-tensorflow/notebooks/2-represent-text-as-tensors.ipynb",
+      "redirect_url": "/azure/machine-learning/",
+      "redirect_document_id": false
+    },
+    {
+      "source_path": "learn-pr/tensorflow/intro-natural-language-processing-tensorflow/notebooks/3-embeddings.ipynb",
+      "redirect_url": "/azure/machine-learning",
+      "redirect_document_id": false
     }
   ]
 }

learn-pr/tensorflow/intro-natural-language-processing-tensorflow/includes/3-embeddings.md (1 addition & 1 deletion)
@@ -176,7 +176,7 @@ To do that, we need to pretrain our embedding model on a large collection of tex
 
 CBoW is faster, and while skip-gram is slower, it does a better job of representing infrequent words.
 
-![Diagram showing both CBoW and Skip-Gram algorithms to convert words to vectors.](../media/example-algorithms-converting-words-vectors.png)
+![Diagram showing both CBoW and Skip-Gram algorithms to convert words to vectors.](../media/example-algorithms-converting-words-vectors.svg)
 
 To experiment with the Word2Vec embedding pretrained on Google News dataset, we can use the **gensim** library. Below we find the words most similar to 'neural'.
 