Online NLP course

Posted on 27 September 2012



Stanford University is offering a Natural Language Processing course online.
That is, from the page https://class.coursera.org/nlp/lecture/preview/index you can view the recorded material for the course.

Take a look at the video titles.

Week 1 – Course Introduction

  • Course Introduction (14:11)

Week 1 – Basic Text Processing

  • Regular Expressions (11:25)
  • Regular Expressions in Practical NLP (6:04)
  • Word Tokenization (14:26)
  • Word Normalization and Stemming (11:47)
  • Sentence Segmentation (5:31)

Week 1 – Edit Distance

  • Defining Minimum Edit Distance (7:04)
  • Computing Minimum Edit Distance (5:54)
  • Backtrace for Computing Alignments (5:55)
  • Weighted Minimum Edit Distance (2:47)
  • Minimum Edit Distance in Computational Biology (9:29)
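As a taste of what the Edit Distance lectures cover, here is a minimal Python sketch of the classic dynamic-programming algorithm with unit costs (the function name is my own; the lectures also cover weighted costs and backtraces for alignments):

```python
def min_edit_distance(source, target):
    """Minimum edit distance with unit costs for insert/delete/substitute."""
    n, m = len(source), len(target)
    # d[i][j] = cost of turning source[:i] into target[:j]
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = i  # i deletions
    for j in range(1, m + 1):
        d[0][j] = j  # j insertions
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0 if source[i - 1] == target[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # delete
                          d[i][j - 1] + 1,        # insert
                          d[i - 1][j - 1] + sub)  # substitute (or copy)
    return d[n][m]

print(min_edit_distance("intention", "execution"))  # -> 5
```

With unit substitution cost the distance between "intention" and "execution" is 5; the lecture's Levenshtein variant charges 2 per substitution, which would give 8 for the same pair.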

Week 2 – Language Modeling

  • Introduction to N-grams (8:41)
  • Estimating N-gram Probabilities (9:38)
  • Evaluation and Perplexity (11:09)
  • Generalization and Zeros (5:15)
  • Smoothing: Add-One (6:30)
  • Interpolation (10:25)
  • Good-Turing Smoothing (15:35)
  • Kneser-Ney Smoothing (8:59)
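The Language Modeling week builds up from n-gram counts to smoothing. A minimal sketch of a bigram model with add-one (Laplace) smoothing, the simplest technique on the list above (all names here are my own, not from the course code):

```python
from collections import Counter

def train_bigram_addone(sentences):
    """Bigram model with sentence-boundary markers and add-one smoothing."""
    unigrams, bigrams = Counter(), Counter()
    vocab = set()
    for s in sentences:
        toks = ["<s>"] + s.split() + ["</s>"]
        vocab.update(toks)
        unigrams.update(toks[:-1])          # contexts (everything that precedes a word)
        bigrams.update(zip(toks, toks[1:]))
    V = len(vocab)

    def prob(prev, word):
        # Add-one smoothing: (c(prev, word) + 1) / (c(prev) + V)
        return (bigrams[(prev, word)] + 1) / (unigrams[prev] + V)

    return prob

p = train_bigram_addone(["I am Sam", "Sam I am"])
print(p("I", "am"))  # (2 + 1) / (2 + 5) = 3/7
```

Add-one is a baseline; the later videos (Good-Turing, Kneser-Ney) cover the smoothing methods that actually work well in practice.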

Week 2 – Spelling Correction

  • The Spelling Correction Task (5:39)
  • The Noisy Channel Model of Spelling (19:30)
  • Real-Word Spelling Correction (9:19)
  • State of the Art Systems (7:10)

Week 3 – Text Classification

  • What is Text Classification? (8:12)
  • Naive Bayes (3:19)
  • Formalizing the Naive Bayes Classifier (9:28)
  • Naive Bayes: Learning (5:22)
  • Naive Bayes: Relationship to Language Modeling (4:35)
  • Multinomial Naive Bayes: A Worked Example (8:58)
  • Precision, Recall, and the F measure (16:16)
  • Text Classification: Evaluation (7:17)
  • Practical Issues in Text Classification (5:56)
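The Text Classification week centers on multinomial Naive Bayes. A compact sketch with add-one smoothing, run on toy data in the style of the worked-example lecture (the helper names are my own):

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (label, text). Multinomial NB with add-one smoothing."""
    class_docs = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for label, text in docs:
        class_docs[label] += 1
        for w in text.split():
            word_counts[label][w] += 1
            vocab.add(w)
    total = sum(class_docs.values())
    V = len(vocab)

    def classify(text):
        scores = {}
        for c in class_docs:
            # log prior + sum of smoothed log likelihoods
            score = math.log(class_docs[c] / total)
            denom = sum(word_counts[c].values()) + V
            for w in text.split():
                if w in vocab:  # unseen test words are simply ignored
                    score += math.log((word_counts[c][w] + 1) / denom)
            scores[c] = score
        return max(scores, key=scores.get)

    return classify

training = [("c", "Chinese Beijing Chinese"),
            ("c", "Chinese Chinese Shanghai"),
            ("c", "Chinese Macao"),
            ("j", "Tokyo Japan Chinese")]
classify = train_nb(training)
print(classify("Chinese Chinese Chinese Tokyo Japan"))  # -> c
```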

Week 3 – Sentiment Analysis

  • What is Sentiment Analysis? (7:17)
  • Sentiment Analysis: A baseline algorithm (13:27)
  • Sentiment Lexicons (8:37)
  • Learning Sentiment Lexicons (14:45)
  • Other Sentiment Tasks (11:01)

Week 4 – Discriminative classifiers: Maximum Entropy classifiers

  • Generative vs. Discriminative Models (7:49)
  • Making features from text for discriminative NLP models (18:11)
  • Feature-Based Linear Classifiers (13:34)
  • Building a Maxent Model: The Nuts and Bolts (8:04)
  • Generative vs. Discriminative models: The problem of overcounting evidence (12:15)
  • Maximizing the Likelihood (10:29)

Week 4 – Named entity recognition and Maximum Entropy Sequence Models

  • Introduction to Information Extraction (9:18)
  • Evaluation of Named Entity Recognition (6:34)
  • Sequence Models for Named Entity Recognition (15:05)
  • Maximum Entropy Sequence Models (13:01)

Week 4 – Relation Extraction

  • What is Relation Extraction? (9:47)
  • Using Patterns to Extract Relations (6:17)
  • Supervised Relation Extraction (10:51)
  • Semi-Supervised and Unsupervised Relation Extraction (9:53)

Week 5 – Advanced Maximum Entropy Models

  • The Maximum Entropy Model Presentation (12:14)
  • Feature Overlap/Feature Interaction (12:51)
  • Conditional Maxent Models for Classification (4:11)
  • Smoothing/Regularization/Priors for Maxent Models (29:24)

Week 5 – POS Tagging

  • An Intro to Parts of Speech and POS Tagging (13:19)
  • Some Methods and Results on Sequence Models for POS Tagging (13:04)

Week 5 – Parsing Introduction

  • Syntactic Structure: Constituency vs Dependency (8:46)
  • Empirical/Data-Driven Approach to Parsing (7:11)
  • The Exponential Problem in Parsing (14:30)

Week 6 – Probabilistic Parsing

  • CFGs and PCFGs (15:29)
  • Grammar Transforms (12:05)
  • CKY Parsing (23:25)
  • CKY Example (21:52)
  • Constituency Parser Evaluation (9:45)

Week 6 – Lexicalized Parsing

  • Lexicalization of PCFGs (7:03)
  • Charniak’s Model (18:23)
  • PCFG Independence Assumptions (9:44)
  • The Return of Unlexicalized PCFGs (20:53)
  • Latent Variable PCFGs (12:07)

Week 6 – Dependency Parsing (Optional)

  • Dependency Parsing Introduction (10:25)
  • Greedy Transition-Based Parsing (31:05)
  • Dependencies Encode Relational Structure (7:20)

Week 7 – Information Retrieval

  • Introduction to Information Retrieval (9:16)
  • Term-Document Incidence Matrices (8:59)
  • The Inverted Index (10:42)
  • Query Processing with the Inverted Index (6:43)
  • Phrase Queries and Positional Indexes (19:45)
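The core data structure of this week, the inverted index, fits in a few lines of Python: map each term to a sorted postings list of document IDs, then answer AND queries by merging two lists (a sketch with names of my own choosing; the lectures add positional indexes for phrase queries):

```python
from collections import defaultdict

def build_inverted_index(docs):
    """Map each term to the sorted list of document IDs containing it."""
    postings = defaultdict(set)
    for doc_id, text in enumerate(docs):
        for term in text.lower().split():
            postings[term].add(doc_id)
    return {t: sorted(ids) for t, ids in postings.items()}

def intersect(p1, p2):
    """Merge two sorted postings lists to answer an AND query."""
    i = j = 0
    out = []
    while i < len(p1) and j < len(p2):
        if p1[i] == p2[j]:
            out.append(p1[i]); i += 1; j += 1
        elif p1[i] < p2[j]:
            i += 1
        else:
            j += 1
    return out

docs = ["new home sales", "home prices rise", "new car sales"]
index = build_inverted_index(docs)
print(intersect(index["new"], index["sales"]))  # docs containing both terms
```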

Week 7 – Ranked Information Retrieval

  • Introducing Ranked Retrieval (4:27)
  • Scoring with the Jaccard Coefficient (5:06)
  • Term Frequency Weighting (5:59)
  • Inverse Document Frequency Weighting (10:16)
  • TF-IDF Weighting (3:42)
  • The Vector Space Model (16:22)
  • Calculating TF-IDF Cosine Scores (12:47)
  • Evaluating Search Engines (9:02)
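The ranked-retrieval videos combine log-scaled term frequency, inverse document frequency, and cosine similarity. A minimal sketch of that pipeline (function names are mine; real systems normalize and weight queries and documents in various schemes the lectures discuss):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """docs: list of token lists. Returns one sparse tf-idf vector (dict) per doc."""
    N = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # document frequency: in how many docs a term appears
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        # log-scaled term frequency times inverse document frequency
        vec = {w: (1 + math.log10(tf[w])) * math.log10(N / df[w]) for w in tf}
        vectors.append(vec)
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse vectors."""
    dot = sum(u[w] * v.get(w, 0.0) for w in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

docs = [["new", "home"], ["new", "car"], ["home", "car"]]
vecs = tfidf_vectors(docs)
print(round(cosine(vecs[0], vecs[1]), 2))  # one shared, equally weighted term -> 0.5
```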

Week 8 – Semantics

  • Word Senses and Word Relations (11:50)
  • WordNet and Other Online Thesauri (6:23)
  • Word Similarity and Thesaurus Methods (16:17)
  • Word Similarity: Distributional Similarity I (13:14)
  • Word Similarity: Distributional Similarity II (8:15)

Week 8 – Question Answering

  • What is Question Answering? (7:28)
  • Answer Types and Query Formulation (8:47)
  • Passage Retrieval and Answer Extraction (6:38)
  • Using Knowledge in QA (4:25)
  • Advanced: Answering Complex Questions (4:52)

Week 8 – Summarization

  • Introduction to Summarization
  • Generating Snippets
  • Evaluating Summaries: ROUGE
  • Summarizing Multiple Documents
Posted in: Facetiae