📈 Perplexity Viewer

Visualize per-token perplexity using color gradients.

  • Red: High perplexity (model is uncertain)
  • Green: Low perplexity (model is confident)
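One way to realize this gradient is to interpolate linearly between green and red; a minimal sketch, where the `lo`/`hi` clamping bounds and hex-color output are illustrative assumptions, not the app's actual settings:

```python
def perplexity_to_color(ppl, lo=1.0, hi=100.0):
    """Map a per-token perplexity to an RGB hex color.

    lo maps to pure green (confident), hi and above to pure red
    (uncertain); values in between interpolate linearly.
    The lo/hi bounds here are illustrative, not the app's real values.
    """
    # Clamp to [lo, hi], then normalize to [0, 1].
    t = (min(max(ppl, lo), hi) - lo) / (hi - lo)
    red = int(round(255 * t))
    green = int(round(255 * (1 - t)))
    return f"#{red:02x}{green:02x}00"

print(perplexity_to_color(1.0))    # lowest perplexity: pure green
print(perplexity_to_color(500.0))  # clamped to hi: pure red
```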

Choose between decoder models (like GPT) for true perplexity or encoder models (like BERT) for pseudo-perplexity via MLM.
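For a decoder, the per-token perplexity is simply exp of that token's negative log-likelihood under the model's next-token distribution (i.e. 1/p), and the sequence perplexity is exp of the mean negative log-likelihood. A toy sketch with hand-picked probabilities standing in for softmaxed LM logits:

```python
import math

def per_token_perplexity(probs):
    """Per-token perplexity: exp(-ln p) = 1/p for each predicted token.

    `probs` holds the probability the model assigned to each actual
    next token; in the real app these come from the LM's logits.
    """
    return [math.exp(-math.log(p)) for p in probs]

def sequence_perplexity(probs):
    """Sequence perplexity: exp of the mean negative log-likelihood."""
    nll = [-math.log(p) for p in probs]
    return math.exp(sum(nll) / len(nll))

probs = [0.5, 0.9, 0.1]             # toy next-token probabilities
print(per_token_perplexity(probs))  # [2.0, 1.11..., 10.0]
print(sequence_perplexity(probs))
```

Note that a token's perplexity is high exactly when the model assigned it low probability, which is what drives the red end of the gradient.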

Model Name

Select a model or enter a custom HuggingFace model name

Model Type

Decoder for causal LM, Encoder for masked LM

Detailed Token Results

Click on an example to try it out:

📊 How it works:

  • Decoder Models (GPT, etc.): Calculate true perplexity by measuring how well the model predicts the next token
  • Encoder Models (BERT, etc.): Calculate pseudo-perplexity using masked language modeling (MLM)
  • Color Coding: Red = High perplexity (uncertain), Green = Low perplexity (confident)
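The pseudo-perplexity computation for encoder models can be sketched as follows: mask each position in turn, ask the model for the probability of the true token at that position, then exponentiate the mean negative log-likelihood. The `prob_fn` callback below is a hypothetical stand-in for a real BERT-style forward pass:

```python
import math

def pseudo_perplexity(tokens, prob_fn):
    """Pseudo-perplexity for an encoder (MLM) model.

    For each position i, replace the token with [MASK] and query
    prob_fn(masked_tokens, i, true_token) for the model's probability
    of the true token. prob_fn is a hypothetical interface standing in
    for an actual masked-LM forward pass.
    """
    nll = 0.0
    for i, tok in enumerate(tokens):
        masked = tokens[:i] + ["[MASK]"] + tokens[i + 1:]
        nll += -math.log(prob_fn(masked, i, tok))
    return math.exp(nll / len(tokens))

# Toy stand-in "model" that is uniformly 90% confident.
toy = lambda masked, i, tok: 0.9
print(pseudo_perplexity(["the", "cat", "sat"], toy))  # 1.111... (= 1/0.9)
```

Unlike the decoder case, this requires one forward pass per token (each with a different mask), which is why it is only a "pseudo" perplexity.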

โš ๏ธ Notes:

  • First model load may take some time
  • Models are cached after first use
  • Very long texts are truncated to 512 tokens
  • GPU acceleration is used when available
  • All tokens are analyzed in a single pass for accurate results