Speech and Language Processing

  • Authors: Daniel Jurafsky and James H. Martin
  • Language: English
  • Script: Latin
  • Publisher: University
  • Year: 2026
  • Views: 21
In the first part of the book we introduce the fundamental suite of algorithmic
and linguistic tools that make up the modern neural large language model. We begin
with tokenization and preprocessing, including Unicode, and then introduce many
basic language modeling ideas using simple n-gram language models. We then
introduce the algorithms that form the components of large language models:
logistic regression, embeddings, and feedforward networks. Next we introduce the
principles of large language modeling (encoders, decoders, and pretraining), then
the fundamental transformer architecture, then masked language models and other
architectures like RNNs and LSTMs, information retrieval and retrieval-based
algorithms like RAG, machine translation and the encoder-decoder model, and
finally spoken language modeling, including both ASR and TTS.
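The n-gram language models mentioned above estimate the probability of a word from the words that precede it. A minimal sketch of the bigram (2-gram) case, using a toy corpus of our own invention rather than anything from the book, might look like:

```python
# Sketch of a bigram language model: estimate P(w_i | w_{i-1})
# by maximum likelihood from counts in a toy corpus.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()

# Count how often each word follows each preceding word.
bigram_counts = defaultdict(Counter)
for prev, curr in zip(corpus, corpus[1:]):
    bigram_counts[prev][curr] += 1

def bigram_prob(prev, curr):
    """Maximum-likelihood estimate of P(curr | prev)."""
    total = sum(bigram_counts[prev].values())
    return bigram_counts[prev][curr] / total if total else 0.0

# In this corpus "the" is followed by "cat" 2 of 3 times.
print(bigram_prob("the", "cat"))
```

Real n-gram models add smoothing so that unseen bigrams do not get probability zero, a topic the book treats in detail.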