What is single-adgroup testing?


A single-adgroup test runs only on the ads within one adgroup, i.e. only the ads within the same adgroup are compared to each other.  This is the most popular ad-testing approach and is applicable to all PPC accounts.  It is referred to as single-adgroup testing to differentiate it from testing ads together across multiple adgroups (referred to in Adalysis as multi-adgroup testing).


Adalysis allows you to test your existing ads using their historical performance data.  There is no need to 'start' tests or create new ads in order to run tests.  Tests in Adalysis run automatically on existing ads wherever ads compete, and any new ads you create are automatically included in the testing.


Adalysis supports single-adgroup testing for both text and image ads.


A single-adgroup test needs a minimum of 2 active ads (and a maximum of 15 active ads) to run.  In the case of image ads, only ads of the same size are split tested, so a minimum of 2 active ads of a given size is needed before those ads are included in the testing.
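
A quick illustration of these eligibility rules (a minimal Python sketch, assuming simple ad records with status, type, and, for image ads, size fields; the names are illustrative, not Adalysis's internal API):

    from collections import defaultdict

    MIN_ADS, MAX_ADS = 2, 15  # eligibility bounds described above

    def eligible_test_groups(ads):
        """Group active ads into testable sets (illustrative logic only)."""
        active = [ad for ad in ads if ad["status"] == "active"]
        groups = defaultdict(list)
        for ad in active:
            # Image ads compete only with image ads of the same size;
            # text ads all share a single bucket.
            key = ("image", ad["size"]) if ad["type"] == "image" else ("text",)
            groups[key].append(ad)
        # Only groups within the min/max bounds can be tested.
        return {key: group for key, group in groups.items()
                if MIN_ADS <= len(group) <= MAX_ADS}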


Tip:  By default, ad tests don't run for paused campaigns, adgroups, or ads (since those ads are not active).  However, you can use this trick to tell Adalysis to include those ads in all tests.



 

How do I run single-adgroup tests?


There is no need to configure or start single-adgroup tests in Adalysis.  You don't need to create new ads, select ads, change URLs, or perform any of the other tedious tasks involved in less flexible testing techniques.  Adalysis automatically tests all your existing ads wherever it finds competing ads.


Tip:  By default, Adalysis tests these 6 popular metrics whenever a test runs. You can, however, customize the metrics that you want to use.


Single-adgroup ad tests run in 2 ways: 



Automatically (run by Adalysis daily)


Your Adalysis account is synchronized with your PPC account at around 4 am daily, after which Adalysis does the following:
  • Scans all your adgroups and, for each adgroup, calculates a common date range during which the active ads within the adgroup were running simultaneously.  Adalysis uses the Enabled Date of each ad to determine this period.
  • Runs a test for those ads using the performance data from that date range.
  • Displays any statistically significant results (meeting your thresholds) in the screen below.  Old test results (from the previous day) are removed.
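
To make the "common date range" step concrete, here is a minimal Python sketch of how such a range could be derived from each ad's Enabled Date.  This is illustrative logic under the stated assumptions, not Adalysis's actual implementation:

    from datetime import date

    def common_date_range(ads, end=None):
        """Period during which all the adgroup's active ads ran together.

        Assumes each ad record carries an `enabled_date`; the range
        starts at the most recent Enabled Date (the last ad to become
        active) and runs to the sync date.  Illustrative only.
        """
        end = end or date.today()
        start = max(ad["enabled_date"] for ad in ads)
        return start, end

    # Example: ads enabled on Jan 5 and Mar 1 only share data from Mar 1 onward.
    ads = [{"enabled_date": date(2024, 1, 5)}, {"enabled_date": date(2024, 3, 1)}]
    print(common_date_range(ads, end=date(2024, 4, 1)))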



Manually (run by you)


You're also able to run single-adgroup tests yourself at any time and as often as you want.  Some reasons you might want to run tests manually:
  • To see how different the test results are when different thresholds are used.
  • To run a test using a date range other than the common one used during the automated tests.
  • To run a test for a subset of ads within the adgroup, e.g. only mobile-preferred ads.
  • To see results that are not statistically significant, i.e. the test data for all adgroups irrespective of their test confidence.

 

You can run manual tests as follows:

1) Specify a date range. Using the "common date range" option mimics how the automated tests run.

2) Choose to see all test results irrespective of confidence level.

3) Optionally, override the current threshold values here.  This affects only this test run and will not change your global threshold values.





Understanding the test result data


The single-adgroup test results show the following:


1) The number of active ads found and included in the test for each adgroup.

2) The date range used for the test.  The automated daily tests use the most appropriate common date range, whereas the manual tests use the date range you specify.

3) The confidence the algorithm has in each metric tested.  Confidence below 90% is displayed as --; the other possible values are 90%, 95%, and 99% (a small sketch of this mapping follows the list below).

4) The aggregate performance of all ads included in the test for the date range used.
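
As an illustration of how a confidence value could map to what the results table displays, here is a small Python sketch (the function name and the bucketing logic are assumptions for illustration, not Adalysis internals):

    def confidence_label(confidence_pct):
        """Map a metric's test confidence to the value shown in the table.

        Results below the 90% level are treated as inconclusive and
        shown as "--"; otherwise confidence is bucketed at 90/95/99.
        """
        for level in (99, 95, 90):
            if confidence_pct >= level:
                return f"{level}%"
        return "--"

    print(confidence_label(97.3))  # "95%"
    print(confidence_label(85.0))  # "--"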





To see the test result details for an adgroup, click the adgroup name, which will show the following:


1) The winner and loser ads for each metric:

  • Only one winner ad (for each metric with enough data).  The winner ad is highlighted in green.  The statistically significant winner can be different for each metric (e.g. one ad might win in CTR but lose in Conv. rate).
  • Only one loser ad (for each metric with enough data).  The loser ad is highlighted in red.
  • Zero or more neutral ads that perform somewhere between the winner and the loser.  These are highlighted in yellow.
  • Metrics without enough data will not have a winner/loser.

2) The confidence the algorithm has in the result (matching the confidence values shown in the previous screen).

3) The projected performance boost should the losing ad be paused (one simple way such a projection could be computed is sketched below).
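
To make the winner/loser/neutral split and the projected boost concrete, here is a simplified Python sketch.  The ranking, the naive averaging, and the field names are all illustrative assumptions; Adalysis's actual projection model may differ:

    def classify_ads(ads, metric):
        """Pick one winner and one loser for a metric; the rest are neutral.

        `ads` is a list of dicts such as {"id": "ad1", "ctr": 0.042}.
        Assumes statistical significance was already established for
        this metric.  Illustrative only.
        """
        ranked = sorted(ads, key=lambda ad: ad[metric], reverse=True)
        return ranked[0], ranked[-1], ranked[1:-1]  # winner, loser, neutral

    def projected_boost(ads, metric):
        """Naive boost estimate: compare the simple average of the metric
        before and after removing the loser (a real projection would
        weight ads by traffic)."""
        winner, loser, neutral = classify_ads(ads, metric)
        remaining = [winner] + neutral
        before = sum(ad[metric] for ad in ads) / len(ads)
        after = sum(ad[metric] for ad in remaining) / len(remaining)
        return (after - before) / before

    ads = [{"id": "a", "ctr": 0.05}, {"id": "b", "ctr": 0.04}, {"id": "c", "ctr": 0.02}]
    print(f"{projected_boost(ads, 'ctr'):.0%}")  # ~23% higher average CTR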






Tip:  You can navigate through the test results using the left/right arrows, or go back to all the test results.







What actions can I take when analyzing a test result?


Actions per adgroup


1) Pause the losing ad. All changes in Adalysis are pushed immediately to Google Ads / Microsoft Ads.
2) Create a new ad to replace the losing one.
3) Once you've finished analyzing a test result, you can either delete it or mark it as 'analyzed'. This changes the color of that specific test result in the main list, giving you a visual indication that you've already looked into it.

Tip: Every time you pause or edit an ad from the above screen, a copy of the test result is archived for future reference.  You can see the history of all your archived test results.  You're also able to archive any test result yourself using the option below.



Actions done in bulk on all test results


You can also pause all losing ads (in any of the metrics) with a single click using the option below.







Automatically pausing (and optionally replacing) losing ads


Automatic Pause & Replacement of Loser Ads



Viewing historical test results


History of Single-Adgroup Test Result Changes



Related Ad Testing Information