
Byte Latent Transformer: Patches Scale Better Than Tokens

An LLM seminar event about the paper “Byte Latent Transformer: Patches Scale Better Than Tokens” by Meta.
[Image: presenter, title, time, and place of the event]

Title: Byte Latent Transformer: Patches Scale Better Than Tokens

Presenter: Nicola Dainese

Abstract: The authors introduce the Byte Latent Transformer (BLT), a new byte-level LLM architecture that, for the first time, matches tokenization-based LLM performance at scale, with significant improvements in inference efficiency and robustness. BLT encodes bytes into dynamically sized patches, which serve as the primary units of computation. Patches are segmented based on the entropy of the next byte, allocating more compute and model capacity where increased data complexity demands it. They present the first FLOP-controlled scaling study of byte-level models up to 8B parameters and 4T training bytes. Their results demonstrate the feasibility of scaling models trained on raw bytes without a fixed vocabulary. Both training and inference efficiency improve due to dynamically selecting long patches when data is predictable, along with qualitative improvements in reasoning and long-tail generalization. Overall, for fixed inference costs, BLT shows significantly better scaling than tokenization-based models by simultaneously growing both patch and model size.

Paper link: https://arxiv.org/abs/2412.09871
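
To make the entropy-based patching idea from the abstract more concrete, here is a minimal Python sketch (not the authors' implementation): it assumes a hypothetical `next_byte_entropies` helper standing in for the small byte-level LM used in the paper, and it opens a new patch whenever the predicted next-byte entropy crosses a global threshold, so predictable spans end up in long patches and surprising spans in short ones.

```python
# Illustrative sketch of entropy-based patch segmentation.
# The entropy model is stubbed with a toy heuristic, and the threshold
# value is arbitrary; both are assumptions made for this example only.

from typing import List


def next_byte_entropies(data: bytes) -> List[float]:
    """Stand-in for a small byte-level LM: return one entropy estimate
    (in bits) per byte position. Faked here so the example runs
    without any model weights."""
    ents = []
    for i in range(len(data)):
        prev = data[i - 1] if i > 0 else 0
        # Pretend bytes after spaces, periods, or newlines are "surprising".
        ents.append(4.0 if i == 0 or prev in b" .\n" else 1.0)
    return ents


def segment_into_patches(data: bytes, threshold: float = 3.0) -> List[bytes]:
    """Group bytes into variable-length patches: start a new patch at
    every position whose predicted next-byte entropy exceeds the
    threshold, giving hard-to-predict regions shorter patches (more
    compute) and predictable regions longer ones."""
    ents = next_byte_entropies(data)
    patches: List[bytes] = []
    current = bytearray()
    for byte, entropy in zip(data, ents):
        if entropy > threshold and current:
            patches.append(bytes(current))
            current = bytearray()
        current.append(byte)
    if current:
        patches.append(bytes(current))
    return patches


if __name__ == "__main__":
    text = b"Patches scale better than tokens. Bytes are grouped dynamically."
    for patch in segment_into_patches(text):
        print(patch)
```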

Disclaimer: The presenter is not one of the paper's authors.

LLM seminar