Natural Language Processing
Scoring LLM Outputs with Logprobs and Perplexity
In this course, you'll explore how to evaluate the fluency and likelihood of LLM outputs using internal scoring signals such as log probabilities and perplexity. You'll work with OpenAI's completion models to analyze how models "think" under the hood. This course builds on the first two courses in the path by focusing on model-internal evaluation instead of comparison against external references.
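As a taste of what the course covers: perplexity is derived directly from per-token log probabilities, as the exponential of the negative mean log probability. The sketch below (the function name and sample values are illustrative, not from the course materials) computes perplexity from a list of natural-log token probabilities like those the OpenAI API returns:

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp(-average log probability) over the tokens.

    Lower perplexity means the model found the sequence more likely.
    """
    avg_logprob = sum(token_logprobs) / len(token_logprobs)
    return math.exp(-avg_logprob)

# Hypothetical per-token logprobs for a short completion
sample = [-0.1, -0.5, -2.0, -0.05]
print(perplexity(sample))
```

A sequence the model predicts perfectly (every logprob 0.0, i.e. probability 1.0) has a perplexity of exactly 1, the theoretical minimum.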
OpenAI
Python
4 lessons
13 practices
1 hour
Course details
Fixing Token Probability Display Code
Making Token Probabilities Dynamic
Filtering Tokens by Probability Threshold
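The last practice above involves keeping only tokens whose probability clears a cutoff. A minimal sketch of that idea (the function name, threshold value, and sample data are assumptions for illustration): convert each logprob back to a probability with `math.exp` and filter on it.

```python
import math

def filter_tokens(tokens, logprobs, threshold=0.5):
    """Keep tokens whose probability exp(logprob) exceeds the threshold."""
    return [tok for tok, lp in zip(tokens, logprobs) if math.exp(lp) > threshold]

# Hypothetical tokens and their natural-log probabilities
tokens = ["The", "cat", "sat"]
logprobs = [-0.05, -1.2, -0.3]
print(filter_tokens(tokens, logprobs))  # "cat" (prob ~0.30) falls below 0.5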
