{"id":587,"date":"2017-02-26T20:56:20","date_gmt":"2017-02-26T11:56:20","guid":{"rendered":"http:\/\/18.179.189.122\/?p=587"},"modified":"2018-12-28T12:53:44","modified_gmt":"2018-12-28T03:53:44","slug":"papers-and-articles-talk-about-nlp-with-deep-learning","status":"publish","type":"post","link":"https:\/\/wanggengyu.com\/?p=587","title":{"rendered":"Papers and Articles talk about NLP with Deep Learning"},"content":{"rendered":"<p><strong>Background<\/strong><br \/>\nDeep Learning in NLP \uff08\u4e00\uff09\u8bcd\u5411\u91cf\u548c\u8bed\u8a00\u6a21\u578b<\/p>\n<blockquote data-secret=\"JfmQpFTGps\" class=\"wp-embedded-content\"><p><a href=\"http:\/\/licstar.net\/archives\/328\">Deep Learning in NLP \uff08\u4e00\uff09\u8bcd\u5411\u91cf\u548c\u8bed\u8a00\u6a21\u578b<\/a><\/p><\/blockquote>\n<p><iframe loading=\"lazy\" class=\"wp-embedded-content\" sandbox=\"allow-scripts\" security=\"restricted\" style=\"position: absolute; clip: rect(1px, 1px, 1px, 1px);\" src=\"http:\/\/licstar.net\/archives\/328\/embed#?secret=JfmQpFTGps\" data-secret=\"JfmQpFTGps\" width=\"600\" height=\"338\" title=\"\u300aDeep Learning in NLP \uff08\u4e00\uff09\u8bcd\u5411\u91cf\u548c\u8bed\u8a00\u6a21\u578b\u300b\u2014licstar\u7684\u535a\u5ba2\" frameborder=\"0\" marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\"><\/iframe><\/p>\n<p>CS224n: Natural Language Processing with Deep Learning<br \/>\nSchedule and Syllabus<br \/>\nhttp:\/\/web.stanford.edu\/class\/cs224n\/syllabus.html<\/p>\n<p><strong>Skip-Gram Model &#8211; Word2Vec <\/strong><br \/>\nEfficient Estimation of Word Representations in Vector Space<br \/>\nhttps:\/\/arxiv.org\/pdf\/1301.3781.pdf<\/p>\n<p>Distributed Representations of Words and Phrases and their Compositionality<br \/>\nhttps:\/\/arxiv.org\/pdf\/1310.4546.pdf<\/p>\n<p>Word2Vec Tutorial &#8211; The Skip-Gram Model<br \/>\nhttp:\/\/mccormickml.com\/2016\/04\/19\/word2vec-tutorial-the-skip-gram-model\/<\/p>\n<p>Softmax Regression (is used in Skip-Gram model)<br \/>\nhttp:\/\/ufldl.stanford.edu\/tutorial\/supervised\/SoftmaxRegression\/<\/p>\n<p>\u4e2d\u82f1\u6587\u7ef4\u57fa\u767e\u79d1\u8bed\u6599\u4e0a\u7684Word2Vec\u5b9e\u9a8c<\/p>\n<blockquote data-secret=\"SLmHeXe96E\" class=\"wp-embedded-content\"><p><a href=\"http:\/\/www.52nlp.cn\/%e4%b8%ad%e8%8b%b1%e6%96%87%e7%bb%b4%e5%9f%ba%e7%99%be%e7%a7%91%e8%af%ad%e6%96%99%e4%b8%8a%e7%9a%84word2vec%e5%ae%9e%e9%aa%8c\">\u4e2d\u82f1\u6587\u7ef4\u57fa\u767e\u79d1\u8bed\u6599\u4e0a\u7684Word2Vec\u5b9e\u9a8c<\/a><\/p><\/blockquote>\n<p><iframe loading=\"lazy\" class=\"wp-embedded-content\" sandbox=\"allow-scripts\" security=\"restricted\" style=\"position: absolute; clip: rect(1px, 1px, 1px, 1px);\" src=\"http:\/\/www.52nlp.cn\/%e4%b8%ad%e8%8b%b1%e6%96%87%e7%bb%b4%e5%9f%ba%e7%99%be%e7%a7%91%e8%af%ad%e6%96%99%e4%b8%8a%e7%9a%84word2vec%e5%ae%9e%e9%aa%8c\/embed#?secret=SLmHeXe96E\" data-secret=\"SLmHeXe96E\" width=\"600\" height=\"338\" title=\"\u300a\u4e2d\u82f1\u6587\u7ef4\u57fa\u767e\u79d1\u8bed\u6599\u4e0a\u7684Word2Vec\u5b9e\u9a8c\u300b\u2014\u6211\u7231\u81ea\u7136\u8bed\u8a00\u5904\u7406\" frameborder=\"0\" marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\"><\/iframe><\/p>\n<p><strong>GloVe Model<\/strong><br \/>\nGloVe: Global Vectors for Word Representation<br \/>\nhttp:\/\/www-nlp.stanford.edu\/pubs\/glove.pdf<br \/>\nhttp:\/\/nlp.stanford.edu\/projects\/glove\/<br \/>\n<iframe loading=\"lazy\" title=\"Jeffrey Pennington\" width=\"790\" height=\"444\" src=\"https:\/\/www.youtube.com\/embed\/RyTpzZQrHCs?start=103&#038;feature=oembed\" 
frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/p>\n<p>\u65af\u5766\u798f\u5927\u5b66\u6df1\u5ea6\u5b66\u4e60\u4e0e\u81ea\u7136\u8bed\u8a00\u5904\u7406\u7b2c\u4e8c\u8bb2\uff1a\u8bcd\u5411\u91cf<\/p>\n<blockquote data-secret=\"GPVitNRYAJ\" class=\"wp-embedded-content\"><p><a href=\"http:\/\/www.52nlp.cn\/%e6%96%af%e5%9d%a6%e7%a6%8f%e5%a4%a7%e5%ad%a6%e6%b7%b1%e5%ba%a6%e5%ad%a6%e4%b9%a0%e4%b8%8e%e8%87%aa%e7%84%b6%e8%af%ad%e8%a8%80%e5%a4%84%e7%90%86%e7%ac%ac%e4%ba%8c%e8%ae%b2%e8%af%8d%e5%90%91%e9%87%8f\">\u65af\u5766\u798f\u5927\u5b66\u6df1\u5ea6\u5b66\u4e60\u4e0e\u81ea\u7136\u8bed\u8a00\u5904\u7406\u7b2c\u4e8c\u8bb2\uff1a\u8bcd\u5411\u91cf<\/a><\/p><\/blockquote>\n<p><iframe loading=\"lazy\" class=\"wp-embedded-content\" sandbox=\"allow-scripts\" security=\"restricted\" style=\"position: absolute; clip: rect(1px, 1px, 1px, 1px);\" src=\"http:\/\/www.52nlp.cn\/%e6%96%af%e5%9d%a6%e7%a6%8f%e5%a4%a7%e5%ad%a6%e6%b7%b1%e5%ba%a6%e5%ad%a6%e4%b9%a0%e4%b8%8e%e8%87%aa%e7%84%b6%e8%af%ad%e8%a8%80%e5%a4%84%e7%90%86%e7%ac%ac%e4%ba%8c%e8%ae%b2%e8%af%8d%e5%90%91%e9%87%8f\/embed#?secret=GPVitNRYAJ\" data-secret=\"GPVitNRYAJ\" width=\"600\" height=\"338\" title=\"\u300a\u65af\u5766\u798f\u5927\u5b66\u6df1\u5ea6\u5b66\u4e60\u4e0e\u81ea\u7136\u8bed\u8a00\u5904\u7406\u7b2c\u4e8c\u8bb2\uff1a\u8bcd\u5411\u91cf\u300b\u2014\u6211\u7231\u81ea\u7136\u8bed\u8a00\u5904\u7406\" frameborder=\"0\" marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\"><\/iframe><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Background Deep Learning in NLP \uff08\u4e00\uff09\u8bcd\u5411\u91cf\u548c\u8bed\u8a00\u6a21\u578b Deep Learning in NLP \uff08\u4e00\uff09\u8bcd\u5411\u91cf\u548c\u8bed\u8a00\u6a21\u578b CS224n: Natural Language Processing with Deep Learning Schedule and Syllabus http:\/\/web.stanford.edu\/class\/cs224n\/syllabus.html Skip-Gram Model &#8211; Word2Vec Efficient Estimation of Word Representations in Vector Space https:\/\/arxiv.org\/pdf\/1301.3781.pdf Distributed Representations of Words and Phrases and their Compositionality https:\/\/arxiv.org\/pdf\/1310.4546.pdf Word2Vec Tutorial &#8211; The 