
Natural Language Processing

Academic year
Taught in English
Elective course
Delivered in:
3rd year, modules 1-3


Еникеева Екатерина Владимировна

Лепигина Анастасия Анатольевна

Course Syllabus


The course is aimed at mastering the basics of natural language processing (NLP), a vibrant interdisciplinary field. It covers the methods and approaches used in many real-world NLP applications, such as language modeling, text classification, sentiment analysis, summarization, and machine translation. Students taking the course will not only use some of the existing NLP libraries and software packages, but will also learn about the principles behind their design and the mathematical models underlying modern computational linguistics. The course also involves completing practical programming assignments in Python and conducting experiments on texts written in English and Russian. Prerequisites: Python programming skills and a general knowledge of linguistics.
Learning Objectives

  • As a result of mastering the discipline, students will know the structural features of natural-language texts and the principles of their computer processing for extracting linguistic (morphological, syntactic, and semantic) information;
  • Students will be familiar with the methods used to solve complex practical NLP problems, in particular information retrieval, summarization, sentiment analysis, and machine translation;
  • Students will understand the limitations of existing computational models of natural language processing.
Expected Learning Outcomes

  • Understands the difficulties in natural language processing, has an idea of approaches to solve these problems
  • Knows how to preprocess text, knows the syntax of regular expressions, and has an idea of edit distance.
  • Knows why language models are needed, knows how to create language models using n-grams
  • Has an idea of the tagging problem, knows the principle of hidden Markov models and the basic algorithm for implementation
  • Has an idea of different types of summarization and ways to assess the quality of summarization
  • Has an idea of computational semantics, knows the basic approaches, and is able to calculate semantic similarity
  • Has an idea of classical and modern approaches to machine translation
  • Has an idea of dependency and constituency trees and context-free grammars, knows the basic algorithms of syntactic parsing
  • Has an idea of the classification problem and approaches to it, understands the naive Bayesian classifier algorithm
  • Understands the difference between basic classification metrics
Course Contents

  • Introduction to natural language processing
    Structural features of texts in natural language; ambiguity on all levels of language; the main challenges of natural language processing; basic approaches to problem solving: manually written rules and machine learning.
  • Basic text processing and edit distance
    Preprocessing: tokenization and segmentation; normalization of words: stemming, lemmatization, morphological analyzers; regular expressions; edit distance.
  • Language models
    N-grams; perplexity; methods of smoothing; the use of language models: input prediction, error correction, speech recognition, text generation.
  • Tagging problems and hidden Markov models
    POS tagging; named entity recognition as a tagging problem; hidden Markov models, their advantages and disadvantages; the Viterbi algorithm.
  • Text classification and sentiment analysis
    Classification problems; naive Bayes classifier; text classification; sentiment analysis.
  • Evaluation
    Performance measures: accuracy, precision, recall, F-measure; state-of-the-art.
  • Parsing
    Constituency and dependency trees; context-free grammar; probabilistic approach to parsing; lexicalized PCFGs; CKY algorithm.
  • Machine translation
    Classical approaches: direct, transfer-based, interlingual; statistical machine translation; IBM model; alignment; parameter estimation in IBM models; phrase-based translation models.
  • Computational semantics
    Word senses and meanings; WordNet; semantic similarity measures: thesaurus-based and distributional methods.
  • Text summarization
    Extractive and abstractive summarization; multiple-document summarization; query-based summarization; supervised and unsupervised learning; evaluation of summarization systems; ROUGE.
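To give a flavor of the "Basic text processing and edit distance" topic, a minimal sketch of the classic Levenshtein dynamic program in Python (the course's assignment language); the function name and structure are illustrative, not taken from the course materials:

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance: the minimum number of single-character
    insertions, deletions, and substitutions needed to turn a into b."""
    m, n = len(a), len(b)
    # prev[j] holds the distance between a[:i-1] and b[:j]; start with
    # the row for the empty prefix of a (all insertions).
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution (or match)
        prev = curr
    return prev[n]
```

Keeping only two rows of the table gives O(min-row) memory while preserving the O(mn) time of the full dynamic program.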
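For the "Language models" topic, a toy bigram model with add-one smoothing (the simplest of the smoothing methods the syllabus mentions) and sentence perplexity might be sketched as follows; all names here are illustrative:

```python
import math
from collections import Counter

def train_bigram(corpus):
    """Count unigrams and bigrams over tokenized sentences,
    padding each sentence with <s> and </s> markers."""
    uni, bi = Counter(), Counter()
    for sent in corpus:
        toks = ["<s>"] + sent + ["</s>"]
        uni.update(toks)
        bi.update(zip(toks, toks[1:]))
    return uni, bi

def perplexity(sent, uni, bi, vocab_size):
    """Perplexity of one sentence under an add-one-smoothed bigram model:
    exp of the negative average log-probability per predicted token."""
    toks = ["<s>"] + sent + ["</s>"]
    log_prob = 0.0
    for w1, w2 in zip(toks, toks[1:]):
        p = (bi[(w1, w2)] + 1) / (uni[w1] + vocab_size)
        log_prob += math.log(p)
    return math.exp(-log_prob / (len(toks) - 1))
```

A sentence made of frequently observed bigrams receives lower perplexity than one made of unseen bigrams, which is exactly the intuition behind using perplexity as an intrinsic evaluation measure.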
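The "Tagging problems and hidden Markov models" topic centers on the Viterbi algorithm; a compact sketch for a small tag set, with dictionary-based parameters chosen purely for illustration, could look like this:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely state (tag) sequence for an observation sequence
    under an HMM, by dynamic programming over path probabilities."""
    # V[t][s] = (best probability of any path ending in state s at time t,
    #            predecessor state on that path)
    V = [{s: (start_p[s] * emit_p[s].get(obs[0], 0.0), None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                ((V[t - 1][p][0] * trans_p[p][s] * emit_p[s].get(obs[t], 0.0), p)
                 for p in states),
                key=lambda x: x[0])
            V[t][s] = (prob, prev)
    # Pick the best final state and follow predecessor links backwards.
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        last = V[t][last][1]
        path.append(last)
    return list(reversed(path))
```

In practice one works in log-space to avoid underflow on long sentences; raw probabilities are kept here only for readability.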
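For "Text classification and sentiment analysis", the naive Bayes classifier named in the topic can be sketched in a few lines; the training-data shape (a list of token-list/label pairs) and the add-one smoothing are my own simplifying choices:

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (tokens, label) pairs. Returns log-priors and
    add-one-smoothed per-class log-likelihoods over the vocabulary."""
    label_counts = Counter(label for _, label in docs)
    word_counts = defaultdict(Counter)
    vocab = set()
    for toks, label in docs:
        word_counts[label].update(toks)
        vocab.update(toks)
    priors = {c: math.log(n / len(docs)) for c, n in label_counts.items()}
    loglik = {}
    for c in label_counts:
        total = sum(word_counts[c].values()) + len(vocab)
        loglik[c] = {w: math.log((word_counts[c][w] + 1) / total) for w in vocab}
    return priors, loglik, vocab

def classify(toks, priors, loglik, vocab):
    """Pick the class maximizing log P(c) + sum of log P(w|c)."""
    scores = {c: priors[c] + sum(loglik[c][w] for w in toks if w in vocab)
              for c in priors}
    return max(scores, key=scores.get)
```

Words outside the training vocabulary are simply skipped at classification time, a common convention for the multinomial naive Bayes model.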
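The "Evaluation" topic lists accuracy, precision, recall, and F-measure; for a binary setting these reduce to counting true/false positives and negatives, as in this small illustrative helper:

```python
def precision_recall_f1(gold, pred, positive):
    """Precision, recall, and F1 of pred against gold for one target class."""
    tp = sum(1 for g, p in zip(gold, pred) if p == positive and g == positive)
    fp = sum(1 for g, p in zip(gold, pred) if p == positive and g != positive)
    fn = sum(1 for g, p in zip(gold, pred) if p != positive and g == positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

F1 is the harmonic mean of precision and recall, so it stays low unless both are reasonably high; this is why it is preferred over accuracy on imbalanced NLP datasets.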
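Finally, the distributional methods mentioned under "Computational semantics" can be illustrated with raw co-occurrence vectors and cosine similarity; the window size and function names below are illustrative choices, not prescribed by the course:

```python
import math
from collections import Counter, defaultdict

def cooccurrence_vectors(corpus, window=2):
    """Build a sparse co-occurrence vector (a Counter over context words)
    for every word, counting neighbors within the given window."""
    vecs = defaultdict(Counter)
    for sent in corpus:
        for i, w in enumerate(sent):
            for j in range(max(0, i - window), min(len(sent), i + window + 1)):
                if i != j:
                    vecs[w][sent[j]] += 1
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[w] * v[w] for w in u if w in v)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0
```

Words that occur in the same contexts ("cat" and "dog" after sentences about drinking milk) end up with similar vectors, which is the distributional hypothesis the topic builds on.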
Assessment Elements

  • non-blocking Graded tests
  • non-blocking Oral exam
    The exam is held on Zoom (https://zoom.us) or MS Teams (https://teams.microsoft.com). The link will be sent by the instructor three days before the exam.
Interim Assessment

  • Interim assessment (module 3)
    0.5 * Graded tests + 0.5 * Oral exam


Recommended Core Bibliography

  • Perkins, J. Python Text Processing with NLTK 2.0 Cookbook: Use Python NLTK Suite of Libraries to Maximize Your Natural Language Processing Capabilities [Electronic resource] / Jacob Perkins; DB ebrary. – Birmingham: Packt Publishing Ltd, 2010. – 336 p.

Recommended Additional Bibliography

  • The Handbook of Natural Language Processing [Electronic resource] / edited by Robert Dale, Hermann Moisl, Harold Somers; DB ebrary. – New York: Marcel Dekker, Inc., 2010. – XIX, 996 p. – Available at: https://ebookcentral.proquest.com/lib/hselibrary-ebooks/reader.action?docID=216282&query=natural+language+processing+with