Friday, February 10, 2006

The experimental approach

One of our responsibilities on the AdSense engineering team is to ensure that you always receive the most relevant and valuable ads on your site. It's exciting because we deal with the entire ad targeting process from analyzing your site's content to selecting ads and determining how to best display the ads on your page. I just celebrated my 2-year AdSense anniversary last week, and it's never ceased to be a fascinating and challenging project.

Prior to launch, any innovation in AdSense goes through three phases: analysis, implementation, and experimentation. When evaluating a new idea that might improve ad targeting, we first analyze all of our existing data. AdSense generates an amazing amount of data every day -- a record of every ad impression and click across every publisher in our network -- so identifying interesting trends is a never-ending challenge, from determining which ads perform best to identifying the most common causes of public service ads in German.

Once we believe a new idea is ready for testing, we implement it in our ad serving code using the very powerful development tools Google has provided for us. Naturally, all code designed to serve ads to our customers must pass a rigorous review and testing process before release to ensure that we continue to offer uninterrupted service. During the testing process, we typically run an experiment on a small percentage of traffic for anywhere from a few days to a few weeks. We select the experiment traffic randomly so that it is as representative as possible of overall traffic; no individual site should ever see more than a small percentage of its traffic involved in an experiment. Because most targeting experiments involve adjustments to the ad-selection algorithms, they are rarely noticed by anyone outside Google. However, we occasionally run experiments that affect the format or display of ads, as this is the only way we can verify whether a potential change will provide an overall benefit to publishers, advertisers, and users.
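To make the idea of random-but-representative experiment traffic concrete, here is a minimal sketch of one common way to bucket traffic for an experiment. The post doesn't describe Google's actual mechanism, so the function names, the hashing scheme, and the 1% split are all assumptions for illustration:

```python
import hashlib

# Hypothetical experiment-bucketing sketch (not Google's actual code).
# Hashing a request ID together with the experiment name gives each
# experiment an independent, roughly uniform random sample of traffic,
# while keeping assignment deterministic and reproducible per request.
EXPERIMENT_PERCENT = 1  # assumed: a small fraction of traffic per experiment


def in_experiment(request_id: str, experiment_name: str) -> bool:
    """Deterministically decide whether a request joins the experiment arm."""
    digest = hashlib.sha256(
        f"{experiment_name}:{request_id}".encode()
    ).hexdigest()
    bucket = int(digest, 16) % 100  # uniform bucket in [0, 100)
    return bucket < EXPERIMENT_PERCENT


# Over many requests, close to EXPERIMENT_PERCENT of them land in the arm.
hits = sum(in_experiment(str(i), "new-targeting-idea") for i in range(100_000))
print(f"{100 * hits / 100_000:.2f}% of traffic in experiment")
```

Because the assignment is a pure function of the request ID and the experiment name, no individual site is singled out, and separate experiments sample traffic independently of one another.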

Throughout an experiment, we track a large number of metrics including clickthrough rate and cost per click, and compare this data to regular traffic collected during the same time period. This prevents external factors unrelated to the experiment -- for instance, a large new advertiser or publisher entering the system, or time considerations such as holidays, weekends, and seasons -- from significantly skewing the results.
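The comparison described above can be sketched in a few lines. This is an illustrative example, not Google's actual analysis code; the metric names and numbers are made up, and the key point is simply that the experiment arm is compared against control traffic from the same time window so external factors affect both arms equally:

```python
from dataclasses import dataclass


@dataclass
class ArmStats:
    """Aggregate metrics for one traffic arm over a fixed time window."""
    impressions: int
    clicks: int
    cost: float  # total advertiser spend in the window

    @property
    def ctr(self) -> float:
        """Clickthrough rate: clicks per impression."""
        return self.clicks / self.impressions

    @property
    def cpc(self) -> float:
        """Average cost per click."""
        return self.cost / self.clicks


def relative_change(experiment: ArmStats, control: ArmStats) -> dict:
    """Percent change in each metric, experiment vs. same-period control."""
    return {
        "ctr": 100 * (experiment.ctr - control.ctr) / control.ctr,
        "cpc": 100 * (experiment.cpc - control.cpc) / control.cpc,
    }


# Hypothetical numbers: both arms cover the same dates, so a holiday or a
# big new advertiser shifts both, and the relative difference still isolates
# the experiment's effect.
control = ArmStats(impressions=1_000_000, clicks=15_000, cost=7_500.0)
experiment = ArmStats(impressions=10_000, clicks=165, cost=85.0)
print(relative_change(experiment, control))
```

Running this prints the experiment arm's CTR up about 10% and CPC up about 3% relative to control, the kind of signal that would then be weighed against the effects on publishers, advertisers, and users.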

After an experiment is complete, we consider the effects on publishers, advertisers, and end users; we only release new features we believe will provide an overall benefit. Of course, once a feature is launched, we continue tracking the same metrics to make sure everything is behaving as expected, while starting work on the next innovations in AdSense targeting.
