Welcome to the first entry of the Ladder 2017 Marketing Plan, a series of articles where we’re revealing our marketing plan for 2017. The series will cover all our plans for 2017, including a weekly journal giving you performance and execution updates.
This article covers an audit of our current pay-per-click ad campaigns. For the sake of transparency, we’re using real performance data drawn straight from our own AdWords and Facebook Ads Manager accounts.
Have any added insights about our PPC campaign audit? Let us know!
Auditing your PPC ad accounts is probably the quickest and easiest way to improve your marketing performance.
Because you’re almost guaranteed to find something.
Because it’s so measurable and changes take place immediately, an audit of your PPC accounts has a direct impact right out of the box.
It’s important to remember that a PPC account is never ‘done’; there’s always more to do and never enough time to do it. You’ll find more dirt if the account is ‘working’; we tend to neglect anything that isn’t currently failing.
The following process covers not just an audit of our own internal PPC ad campaigns, but also a guide on how to run your own audit.
If you’re auditing your own account (like we are), go easy on yourself. If you find issues, that’s good news – you have an opportunity to improve performance. That’s the audit doing its job.
If you’re auditing someone else’s account, try to be humble. You don’t have the full context on why an account is in bad shape and you aren’t guaranteed to be able to do a better job.
It might seem complex to audit a PPC account, but it’s actually a really simple (if tedious) process, repeated systematically until you run out of time / have enough to work with.
Here’s how we’re doing ours:
- Segment the data
- Identify best / worst
- Find differences
- Research context
- Recommend action
And a brief explanation of what each of these means for the audit:
Segment the data – split the data by a variable, i.e. audience, creative, location, device, etc.
Identify best / worst – find the most important segments driving performance up / down.
Find differences – try to spot any differences between the best and worst performers.
Research context – search for any technical or logical reasons that explain this pattern.
Recommend action – offer a solution based on your research that will solve the issue.
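To make the loop concrete, here’s a minimal sketch of the first three steps in Python. The segment names and numbers are made up for illustration; they aren’t drawn from our account:

```python
# Sketch of the audit loop: segment the data, then identify best / worst
# by CPA and quantify the difference. All figures below are illustrative.

def cpa(segment):
    """Cost per acquisition for a segment."""
    return segment["cost"] / segment["conversions"]

# Step 1: segment the data (here, by device).
segments = [
    {"name": "mobile",  "cost": 1200.0, "conversions": 15},
    {"name": "desktop", "cost": 1800.0, "conversions": 60},
]

# Step 2: identify best / worst performers.
ranked = sorted(segments, key=cpa)
best, worst = ranked[0], ranked[-1]

# Step 3: find differences -- here, the CPA gap between best and worst.
gap = cpa(worst) - cpa(best)
print(f"best: {best['name']} (${cpa(best):.0f} CPA), "
      f"worst: {worst['name']} (${cpa(worst):.0f} CPA), gap: ${gap:.0f}")

# Steps 4 and 5 (research context, recommend action) are human work:
# e.g. "mobile converts at a quarter of the desktop rate -- consider a
# negative mobile bid modifier".
```

The same loop repeats for every variable you can segment by: time, location, device, campaign, keyword, and so on.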
This process works because it’s actionable and persuasive. Not only do you find the symptoms, but you also diagnose the underlying issues and recommend treatment.
Conducting analysis that doesn’t lead to a recommended action is a waste of time. Equally, recommending an action without giving context decreases the chance that it’ll actually get implemented, which is also a waste of time.
If you rigorously work through this process for every major way to segment the data, you’re guaranteed to find the vast majority of ways available to improve the account performance.
Once you’ve covered as much ground as you have time for, or you think you’ve found the majority of important issues, you should sort your tasks into the three following buckets:
Quick Wins
Here you’re looking for anything that can have an immediate impact without committing too much resource. Always plan to do these first, as it is a great way to build trust with your boss / the client and give you the momentum needed to tackle bigger challenges.
Business as Usual
Most of your actions will fall into this bucket; tasks that follow best practice, but aren’t significantly going to move the needle. These should be prioritized according to expected impact (growth) vs cost (time + money), and worked into your marketing plan accordingly.
Strategic Bets
Though you will always notice a few exceptions (quick wins), you’ll largely find that you get what you pay for; the actions that provide the biggest expected impact are also very costly, risky or take a long time to pay off. Depending on your resourcing level and timeframe, you’ll want to focus on working 1-3 of these major strategic bets into your plan.
OK, let’s get started.
Ladder Google AdWords Audit
Ladder currently spends 83% of its PPC budget on Google AdWords, so it makes sense to start there. We’ll run through the process we outlined above, adapting it to the types of segments available to look at in Google AdWords.
Segment by Time Period
Typically the first place you want to look is at the trend over the past 30 days. For most accounts this gives you enough data to be useful, but not so much that it becomes incomprehensible. Most people think in terms of monthly marketing budgets, so it aligns well.
For your metrics, you want to see the full marketing funnel from impression through to conversion, as well as the cost and revenue metrics. This will give you an early view into how efficient the campaign is, and if it’s making money. You can select these by clicking ‘columns’ > ‘modify columns’, and you can save it to re-use later.
There isn’t a single view that easily aggregates performance across all campaigns for a thirty day period, so what I usually recommend is going to dimensions > time > year, then selecting ‘last 30 days’ for the date range.
This already tells us quite a bit. We’ve spent $3.7k in the past month; a pretty sizeable budget (for reference, our recommended starting budget for new clients is $3.5k – we now qualify for our own service!).
CPCs at $3.55 are high, but that’s to be expected in B2B and CTR doesn’t give any cause for concern at 1%. A conversion rate of more than 9% however is really good; we normally see more like 4-5% for our B2B clients whose goal is to generate leads.
Cost per conversion is more relative; at Ladder our goal is $60, because that’s how much we’ve calculated a conversion is worth, so at $37 we’re doing well. The trick will be increasing spend from $3.7k to $6k without seeing a corresponding rise in CPA.
In Ladder’s case, you can see we aren’t tracking anything for revenue. This isn’t a huge issue as we’re a B2B business and our conversion doesn’t generate any immediate ‘revenue’. We can just assume each lead is worth our CPA target.
However if we start tracking an estimated lead value we could more easily calculate potential ROI from our campaigns; something to note down for later. For the rest of the screenshots, I’ll remove those columns.
Now I’d like to zoom out to last 90 days, and split spend by week to see what the trend is. Dimensions > time > week, and set the date range (no preset; you have to set it manually).
Great – this isn’t a bad trend at all. Spend has been steadily rising and just topped $1k last week. Though CPA isn’t as low as the $22 we were seeing back in August, it seems to be hovering around $35 which ain’t bad for such a large increase in spend.
Encouragingly we seem to have made huge gains in CTR; it’s doubled since August. This hasn’t been reflected in CPC, but it’s likely what’s allowed us to increase spend so much with minimal impact on CPA. Note that we’ve been able to shoulder a CPM (cost per thousand impressions) that’s double what it was in August.
Segment by Day of Week
While we’re here in the dimensions tab, it’s a good time for us to take a look at which days of the week work best for us. It’s quick to check and addressing any differences can lead to immediate performance gains.
Straight away we can see that Thursday and Friday are undervalued. They have higher than average conversion rates, but are being bid about the same. That leaves CPA at below $30, versus $35 on average; a missed opportunity to push for extra volume.
Now we could dig a little deeper into this and see if the trend holds for each campaign. However, we don’t have enough data to make this worthwhile. As a rule of thumb, you need an average of 30 conversions per segment to draw reliable conclusions. We have that at the current level, but won’t if we drill deeper, so we’ll just apply our learnings across all campaigns.
Segment by Hour of Day
Now let’s conduct the same exercise for Hour of Day.
In this case the data is much sparser; there do appear to be some interesting patterns however, so let’s see if we can chart the data in Excel to make it more meaningful.
OK so it looks like we’re fine between 11am and 4am; CPA is relatively stable. However we do see some crazy activity in the morning; we’ve spent $1,200 between the hours of 4-9am, with only 22 conversions to show for it; a $54 CPA.
Now we do have to be careful here; it could be that these clicks were ‘research’ clicks, and the users tend to come back to sign up later. Luckily we can quickly sense check that in the Path Length report; Tools > Attribution > Paths > Path Length.
A whopping 94% of our conversions happen on the first click. This means we can make adjustments to time and day bid modifiers without much cause for concern.
We can run a quick calculation to compare the 5-9am CPA with the total average, so we know how much to modify our bid by for this time period.
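That calculation is simple enough to sketch out. A minimal version in Python, using the approximate numbers from our account above and assuming CPA scales roughly linearly with bid:

```python
# Back-of-envelope bid modifier for an underperforming time window.
# Figures are the approximate ones from the account above; the formula
# assumes CPA moves roughly in proportion to the bid.

average_cpa = 35.0  # account-wide CPA
window_cpa = 54.0   # CPA in the early-morning window

# To bring the window's CPA in line with the average, scale the bid by
# the ratio of the two, expressed as a percentage modifier.
modifier = (average_cpa / window_cpa) - 1
print(f"Suggested ad schedule bid modifier: {modifier:+.0%}")
```

In this case that works out to a negative modifier of roughly a third, which we’d then enter as an ad schedule bid adjustment for that window.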
Note that we don’t turn the campaigns off during this time frame – this traffic still has a value, it’s just a lower value than other time periods.
To do that, we need to head over to ‘Settings’ and ‘Ad Schedule’, where we can see that we don’t currently have any bid modifiers.
Segment by Location
Another easy win is to check performance by location.
Not much to see from a country / territory point of view. We do have a London office that isn’t currently getting love from our AdWords campaigns, so it might make sense to expand the geography of our campaigns as a test. To find out more info, let’s break down further by pulling region (State) into the mix.
Now if we sort by cost, descending, we can see where our money is going.
We have a great opportunity here: our CPA in New York (where our head office is located) is only $24; we have plenty of room to expand. We should be pushing the bid up here.
I see a big problem though. California has spent over 20% of our budget. In the past 3 months, we’ve signed a total of zero clients in California. Oops.
In fact, after talking to the rest of the team, we very rarely sign any clients in the U.S. outside of New York. The ones we have signed have been relatively harder to service and have seen higher churn. Good to know.
This kind of thing seems obvious in retrospect, but before we started running ads we had no reason to think we’d have a low close rate outside of the East Coast. Looking at conversion data it looked like these leads were relevant. This is why full-funnel analysis is important.
Our action here can be pretty drastic; we could go as far as completely turning off all states apart from New York. We could then rebuild in a more tactical fashion; only running our best performing campaigns in individual startup locations like Austin and Washington D.C. where we’ve managed to pick up clients organically.
Finally on location, let’s do a quick sense check. Even though we were targeting the U.S., AdWords default settings sometimes mean that you pick up additional users outside your targeting range. Dropping the region and switching to the ‘user locations’ report shows that we are indeed seeing this too.
It’s not a huge amount of spend, less than 4%, but it’s still wasted money we can cut out. If we look in campaign settings, we can see the issue with Google’s ‘recommended’ option:
If we jump over to the ‘Settings’ tab, and select ‘Locations’, we can see if we’re currently using any location modifiers. In this case you can see that we’re not.
Segment by Settings
If we roll back to campaign settings, we should check that there isn’t anything weird going on.
The settings are a little inconsistent. Some campaigns are optimized for conversions, with others aiming for clicks. Some use enhanced CPC, while others don’t. Some run against Google Search Partners, while others don’t. Some have device bid modifiers, others don’t. Nothing too troublesome, but we should aim to rationalize our approach here.
Segment by Device
While we’re in campaign settings we can also see the effect of the inconsistent application of bid modifiers by device. For ‘Marketing Strategy’ alone we’re seeing double the CPA on mobile vs desktop, because the conversion rate is roughly half. For the ‘Growth Hacking’ campaign we’re getting a much more favorable CPA because we’ve set mobile bids at 50% of the desktop bid.
Let’s aggregate this data in Excel to see its effect across the account.
This is a great opportunity to improve performance; mobile CPA is just over double what we see on desktop. Applying a bid modifier of -50% seems to be working; we should roll this out further.
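The roll-up we did in Excel can also be sketched in plain Python. The campaign rows below are illustrative, not our real figures; the point is simply summing cost and conversions per device before dividing:

```python
# Aggregate campaign-level device stats account-wide, then compute CPA
# per device -- the same roll-up as the Excel exercise. Rows are
# illustrative, not real account data.

rows = [
    # (campaign, device, cost, conversions)
    ("Marketing Strategy", "desktop", 900.0, 30),
    ("Marketing Strategy", "mobile",  600.0, 10),
    ("Growth Hacking",     "desktop", 700.0, 25),
    ("Growth Hacking",     "mobile",  300.0, 10),
]

totals = {}
for _, device, cost, conversions in rows:
    c, n = totals.get(device, (0.0, 0))
    totals[device] = (c + cost, n + conversions)

for device, (cost, conversions) in totals.items():
    print(f"{device}: ${cost / conversions:.2f} CPA")
```

Summing before dividing matters: averaging each campaign’s CPA directly would weight a tiny campaign the same as a big one.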
Segment by Campaign
If we now navigate to the main campaign screen (the default one you would have seen upon logging in), we can see overall performance segmented by campaign.
It looks like overall Growth Hacking is a great campaign for us; with a $27 CPA we could be spending more. Other campaigns like Marketing Strategy are performing worse, but have a larger share of budget. As is pretty common, our top 3 campaigns make up the vast majority of spend (82%).
One great thing to note is that we’re not hitting budget caps. If we were, Google would be displaying a notification in the status column. If you see this it’s a sign you could be getting more clicks at the same CPC; you should remove the cap so long as your clicks are profitable. If not, it’s an opportunity to drop bids and get the same volume at a more profitable price.
Segment by Landing Page
Next we want to quickly see what the landing page strategy looks like. Checking the ‘Final URL’ report in the ‘Dimensions’ tab helps us identify if we’re sending traffic to a customized landing page, and if any of our landing pages are performing poorly.
Sometimes if auto-tagging or a tracking template isn’t being used, you’ll see the utm tracking parameters in the URL. To get rid of this we can export to Excel and use ‘text-to-columns’ to split this out, then consolidate the data for each landing page.
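The same consolidation can be done in a few lines of Python instead of text-to-columns. This sketch just drops the query string so rows with different utm parameters collapse into one row per landing page (the URLs are made-up examples):

```python
# Equivalent of Excel's text-to-columns step for URLs: strip the
# tracking parameters so spend consolidates under one landing page.
from urllib.parse import urlsplit

def landing_page(url):
    """Drop the query string (utm_* etc.), keep scheme, host and path."""
    parts = urlsplit(url)
    return f"{parts.scheme}://{parts.netloc}{parts.path}"

urls = [
    "https://example.com/?utm_source=google&utm_medium=cpc",
    "https://example.com/?utm_source=google&utm_campaign=brand",
]

# Both rows collapse to the same landing page.
pages = {landing_page(u) for u in urls}
print(pages)
```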
In this case all campaigns are landing on the homepage. This isn’t unusual, and the Ladder homepage is pretty high converting. However we’d definitely want to work some landing page testing into our plan.
Segment by Search Query
Heading back to the dimensions tab, we can take a high level view of what search queries are driving performance. It’s important to look at it this way, as sometimes a poor campaign structure will obfuscate your view of what keywords are important. First we need to remove ‘Added/Excluded’, ‘Match type’ and ‘Keyword’ to give us a consolidated view.
What’s interesting about this view is that it adds more context to what we saw at the campaign level. Marketing Strategy is actually performing much better than we thought; it’s just dragged down by poor performance on searches for “Marketing Strategy Template”.
It also looks like we’re not doing a good job on some searches like ‘advertising agency’, while other searches like ‘growth marketing’ are leading to very cost effective leads.
We should consider addressing this in the account architecture; splitting these keywords out as separate ad groups and bidding them up / down accordingly.
As we scroll down, we can start to identify different keywords here that aren’t relevant, but are costing us money. This is how we identify new negative keywords to add.
Segment by Keyword
Now we’ve looked at all the major topline segments, it’s time to dig a bit deeper into the structure of the account. We’ll start with keywords.
We can see a couple of things here. For one, “growth hackers” on phrase match was paused, despite being a high performing keyword. Marketing strategy on broad and phrase match are bid below the first page bid; an indication we could be pushing a lot more volume if we could afford to bid higher. Also we can see that just over $3k, or close to 30% of our budget, went on the three marketing strategy match types; a sign we can split this out into more granular keywords.
When we have so much data however, it’s hard to see overall trends. We can make use of filters to quickly eyeball certain traits and patterns in the account. First, let’s look at what percentage of our keywords are below first page bid.
Roughly half of our spend goes on keywords that we can’t afford the first page for; this is really good news as it shows we have a lot of headroom if we can improve our quality score or conversion rate and afford a higher ad rank.
Using another quick filter, we can see we don’t have any keywords with critical issues stopping them from running.
How about low quality score?
It looks like we do have some quality issues, but it affects a pretty low volume of spend, and the performance is still really great at $28 CPA.
Zooming out to see keywords that have a quality score of less than 3, we can see that this makes up close to a third of our traffic. Although CPA still isn’t bad here, we have an opportunity to improve performance considerably by pushing up quality past the 5 mark.
Hover over the speech bubble in the status column to see precisely why we’ve got a low quality score on some of these keywords. In our case, it looks like the clickthrough rate is too low, and the landing page isn’t relevant enough; both things we need to work on.
Note: when AdWords says ‘Another creative in the ad group was selected over this one’, it’s a (particularly opaque) way of saying you have ‘cross-matching’ going on; broad or phrase match keywords are taking turns matching to the same query. We’ll cover match types shortly.
Another quick filter tells us we have 5 keywords that are low volume; no big deal, but this has been rumored to affect quality score at an account level, so we may as well remove them.
Now for a great piece of news; we’ve only spent $200 on keywords that haven’t had a conversion. This is extremely unusual; normally we’d expect to see 30-40% of the budget wasted on keywords like this. Not to be too self-congratulatory though; looking at it another way, it can be an indication that we need to take more risks with longer tail keywords.
It’s also important we look at overall match type strategy. If the vast majority of spend is going through broad and phrase match keywords, it’s an indication that we need to do a better job of getting the most important words over to exact match.
That’s exactly what we’re seeing; over 60% of spend is going through broad and phrase match keywords. Ideally it should be less than 20%. Exact match allows us to control bids more effectively, avoid irrelevant matches and ensure users are being served relevant ads. Moving more keywords over to exact match can help us with the quality score issue we saw.
Finally, let’s quickly sense check our negative keywords and make sure we aren’t accidentally limiting our traffic.
Looks pretty sparse; usually these lists are much longer. However we know from the search term report that our matches are pretty relevant, and our account structure seems to have little overlap, so this is totally fine.
There is one more place to check however; account-level shared lists in Shared Library > Campaign Negative Keywords.
Nothing out of the ordinary here; this is good practice as it makes it much easier to expunge bad matches from every campaign at once.
Segment by Ad Group
Now let’s look at the ad group structure. It’s important to take a look at this because we want to spot performance differences; most accounts are bid at the ad group level, so this is really where you would hope to see a consistent CPA across segments. We also want to check for duplication; we don’t want lots of overlapping ad groups with similar keywords on broad match.
As you can see, there’s not much to the account structure; most campaigns have just one ad group matching the campaign name, and most of the major ad groups are sensibly bid.
Performance marketing, Marketing tactics and Email marketing are the exceptions. With such a long date range, it’s perfectly possible bids have been adjusted to accurate levels more recently, so we should just plan to address this as part of our normal optimization work.
Now let’s just export and make the structure easier to see in Excel.
Nothing major to see here; the vast majority of campaigns just have 1 ad group, with brand having 5; none of which spend much.
All in all, not much wrong with the ad group selection; it closely matches the campaign structure, which seems carefully designed to avoid overlap.
Segment by Ad Copy
For ad copy we’re going to take a different approach than normal, as ad copy only really makes sense within the individual ad group it’s a part of. Let’s look at our top spending ad group first; Marketing Strategy. Remember to filter for ‘all’ ads, not just ‘all enabled’ as it’s common to remove old ads.
It looks like a lot of the older ads that are now paused are better performing; don’t worry about this as it’s common when you go from periods of low to high spend like we have.
It’s a good sign that there seems to have been a number of attempts to A/B test ad copy, though a lot of the variations are relatively similar; we’d want more variety in here.
Although this probably won’t be statistically significant, there is a variation that is getting an incredible conversion rate, and therefore CPA, but it isn’t being served very often. We can fix that by switching from ‘optimize to clicks’ to ‘optimize for conversions’ in the ad rotation settings.
Now let’s look at our other top spender, Growth Hackers, to see if we’re getting the same thing.
Yep; almost identical. We can see some evidence of much older ads in here with very different messaging. It looks like conversion rate was much higher on these older ads, so it might be worth revisiting some of these ideas if they’re still within the latest brand guidelines.
Good news so far however; it’s common to see just 1-2 ads per ad group with no evidence of testing or even changing copy over long stretches of time; we’re not seeing that here.
Now let’s zoom back out to the account level and check whether the same 3-4 ads are present across the account. We can do this by exporting and pivoting in Excel.
Note that we had to count the ‘final url’ field as we made the switch from the old text ad format to expanded text ads; final url is the only field that is present in both.
Although there were a couple of ad groups with only 1 or 2 ads, this isn’t too much to worry about as we’ve barely spent anything on these audiences (we’re just showing live ads here, which is why the numbers don’t match up to other screenshots).
We can also check the diversity of the ad copy, typically by reversing the order of the ad copy in a pivot table. This works because most marketers spend the time to customize the title based on keyword, but stick with the same body copy everywhere.
We can see this here also; there are only three versions of the body copy in the entire account. We also see that ‘Plan Your Marketing Tests’ is the default Headline 2 for most ads. We should address both of these in our plan.
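A quick stdlib version of that pivot: count how many distinct bodies exist across all live ads. The ad rows here are invented for illustration; they aren’t our actual copy:

```python
# Count distinct body copy across the account -- a stdlib equivalent of
# the pivot table described above. Ad rows are illustrative only.
from collections import Counter

ads = [
    {"headline": "Marketing Strategy Help", "body": "Data-driven growth for startups."},
    {"headline": "Growth Hacking Experts",  "body": "Data-driven growth for startups."},
    {"headline": "Hire a Growth Team",      "body": "Book a free strategy call today."},
]

body_counts = Counter(ad["body"] for ad in ads)
print(f"{len(body_counts)} distinct bodies across {len(ads)} ads")
for body, count in body_counts.most_common():
    print(f"  {count}x  {body}")
```

A low ratio of distinct bodies to ads is the signal to look for: it means headlines are being customized while the body copy goes untested.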
Finally we should quickly check and see if we have any ads that are disapproved.
Nothing to be concerned about; it looks like we previously paused or removed any that didn’t match Google’s guidelines, which is good to see.
Segment by Extensions
Finally we check for ad extensions. These are really important as a way to improve CTR and therefore quality score. In fact, Google even gives you a small bump in quality score just for the presence of ad extensions.
It looks like these have been added at some point, but aren’t running on any of the campaigns currently live during the time period we have selected. We should definitely consider these as part of our plan.
GDN and Video
It is unusual to see an account of this spend level that hasn’t experimented with Google Display Network (GDN) or Video (YouTube) ads. Both do tend to perform worse than search ads, so it’s not a major deal, but at the very least, retargeting campaigns should be tested.
Ladder Facebook Ads Audit
When auditing another PPC platform, you largely go through the same motions. In the case of Facebook, the major difference is simply that we target audiences instead of keywords. So let’s take a look at the account.
Overall we’ve spent about $2k on Facebook, and the vast majority of campaigns are paused. We achieved a good CPA; ranging between $6 and $17. CTR seems pretty high, so the first thing I’d look at is what network we ran on.
Ah, as I suspected; we were getting a lot of very cheap clicks on the audience network. This is on by default when you start a campaign, and sometimes it can be a source of very cheap traffic. However we should be careful; the quality of these leads will likely be low.
Once you strip out the audience network, the performance in terms of CTR and CPCs was relatively poor. We’d normally expect more like a 1% CTR on timeline ads.
Let’s look into our top spending campaign and see what it was.
A lookalike of homepage visitors is really the key driver here. Let’s eyeball the settings and see how it was set up.
OK so just like our AdWords campaign, we’re targeting the whole U.S. We should consider narrowing it down to just New York. The expand interests option is worth leaving on however; if Facebook can find people who are likely to convert, we’re ok with that.
What creative did we run?
Looks like a clear winner there.
This is a pretty decent ad; it grabs attention with a bold image, has a short but intriguing line of copy but then explains exactly what the offer is in the title.
Let’s go back to the one campaign that was still on and see what it was.
Ah we can see now that it was a retargeting campaign. This makes sense to leave running, as retargeting tends to spend little but get good results.
Again this ad isn’t bad. It’s for people who visited the website already, so it’s great that we’re using an image that matches the homepage, with a large logo. It’s a similar offer again, but this time with more detail to draw people in.
When you click the link, it actually takes you direct to HubSpot to book a meeting; nice!
I’ve been skipping through here, and there isn’t much detail to see in the Ladder Facebook account, but by now you should see that the process really is the same as AdWords.
We go through the account rigorously, segmenting by different variables; campaign, ad set, creative, audience, device… as many options as we have the patience for.
When we find something interesting, we compare and contrast the differences or similarities vs the campaigns that performed well / badly. Over time we build up a picture of what’s working, and what is still left to do.
In this case, there is still a ton to be tested on Facebook. Although we have tested some audiences, we can basically consider starting fresh on this medium; so long as we remember to work a website lookalike audience and retargeting campaign into our plans.
There are endless audiences we could try who fit our user persona, creatives and ad copy tests we could run to improve performance. There are a lot of ad units unique to Facebook that we should try. We’ve seen Facebook work well for some of our B2B clients, so why not us?
While we’re at it, we should also be testing other PPC platforms, like LinkedIn and Twitter; there’s still a big world out there for us to play in!
And that’s it for the Ladder PPC audit! As you can see, the process, while tedious, was relatively straightforward. And it taught us a lot about the performance of our PPC campaigns, enabling us to make smart tweaks to increase performance.
Up next in the Ladder 2017 Marketing Plan series — a full audit of Ladder’s analytics.
Need help auditing your own PPC campaigns? We can do that for you! Talk to a Ladder Strategist to hear how we can help you improve performance for your PPC campaigns.
Review your PPC accounts with a Ladder Strategist: