AxeCrafted Blog

Wandering thoughts from a wandering mind.


Winning with Multi-Armed Bandits: Smarter Experimentation in Databricks

Posted on August 18th, 2025

Running experiments often feels like gambling. Should you put more volume behind variant A, or give variant B another chance? Traditional A/B testing splits traffic evenly and waits for statistical significance - but what if you could adapt continuously, shifting traffic toward winners as you learn? Enter Multi-Armed Bandits: an elegant blend of probability, statistics, and decision-making that turns experiments into dynamic optimization engines.

Just like choosing the right slot machine at a casino, Multi-Armed Bandits help you decide which option deserves your next coin - except here, the coin is traffic, impressions, or user attention. Let’s explore how they work, why they beat static testing, and how we’ve applied them in Databricks.
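To make the coin-and-slot-machine analogy concrete, here is a minimal sketch of Thompson sampling, one common bandit strategy, in plain Python. The variant names and success/failure counts are made up for illustration; in practice they would come from your experiment's logged outcomes.

```python
import random

# Hypothetical outcome counts for two variants (illustrative only;
# in a real setup these would come from your experiment's event data).
arms = {
    "A": {"successes": 42, "failures": 158},
    "B": {"successes": 51, "failures": 149},
}

def choose_arm(arms):
    """Thompson sampling: draw from each arm's Beta posterior, pick the best draw."""
    samples = {
        name: random.betavariate(s["successes"] + 1, s["failures"] + 1)
        for name, s in arms.items()
    }
    return max(samples, key=samples.get)

def update(arms, arm, reward):
    """Record the outcome (1 = conversion, 0 = no conversion) for the chosen arm."""
    key = "successes" if reward else "failures"
    arms[arm][key] += 1

# Route the next visitor: the better-looking arm wins more draws,
# but the uncertain arm still gets explored.
next_arm = choose_arm(arms)
```

Each incoming visitor triggers one draw per arm, so traffic naturally drifts toward the variant with the stronger posterior while still occasionally testing the other - exactly the continuous adaptation that static A/B splits lack.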

Read more...


Leonardo Machado

Some guy from Brazil. Loves his wife, cats, coffee, and data. Often found trying to make sense of numbers or cooking something questionable.