The bad news? 57% of all the growth tests we’ve run at Ladder have failed.
The good news? The market average for successful tests hovers between 10% and 15%.
Even the experts struggle with growth. Noah Kagan, an early employee at Facebook and founder of AppSumo, only sees a 1 in 8 success rate. An industry study by VWO shows a 1 in 7 success rate. A study by Harvard Business Review pegs the failure rate at 80-90%.
As an agency, we need to be better than average, but getting there wasn’t easy. Our ability to provide a guaranteed path to building scalable, reliable business growth machines hinges on our success – and failure – rate.
We got there by leveraging four core ingredients to drive disruptive growth for businesses:
- People
- Process
- Technology
- and experimentation.
Let’s run through each together.
We’re always improving the Technology we develop, the Process we execute, and the People we employ to power it all.
See how we’re staying on the cutting-edge of growth solutions at our blog:
Nothing matters more than the people powering the technology, executing the process, and acting as experts in creating growth experiments across the full funnel. You can’t get hiring wrong.
And for a long time, we were getting it WAY wrong.
In Ladder’s early days, we thought throwing money at deep experience was the right solution – this was our hiring equivalent of relying on “vanity metrics” and being seduced by what the status quo says is the right thing.
The reality was, we didn’t have a hiring and training process in place that prioritized the right things.
You have to have a good process when you’re hiring a new marketer every month.
So we built one.
I won’t go into heavy detail on the process itself (you can read about it here), and I won’t detail any of the grueling expectations we place on potential hires, or the training they go through while at Ladder.
But, the value of getting it right can’t be overstated:
10 months into our new process, and we’ve still not lost a single employee. Ladder’s services, marketing, design, and strategy teams have grown tremendously while contributing heavily to our $150,000+ and growing monthly recurring revenue (3x higher than our Q1 2016 MRR).
Our clients are happier, our office is more energized and productive, and we’re now able to leverage a scalable, repeatable hiring infrastructure to continue hitting our goals as we grow.
Our process is the linchpin of how we deliver consistent performance. Our Growth Strategy and Client Services directors work hand-in-hand with our strategist team to refine how we work with clients, trim fat from our process, and evolve Ladder’s strategic approach so the clients who partner with us on their growth become even more efficient and effective.
Our full process is detailed in our growth hacking post, but here’s a sneak peek into why it works so well.
First off, we figured out the sweet spot of executing 12 new tests per month. That’s the average across our 35 active clients: 12 new growth experiments each, every month.
Our full-funnel strategy is defined by the tactics in our Growth Playbook, the world’s largest database of growth tactics covering all funnel stages and channels. After testing both lower and higher testing frequencies, we discovered that 12 really is the ideal number of new tests, balancing expected impact, statistical significance, and clean data to analyze and leverage month over month.
Interesting note: this is the same for seed stage startups AND for large enterprises.
Each and every month, our process combines strategy, analysis, reporting, and execution of the 12 growth experiments most likely to succeed and increase ROI across the customer touchpoints with the highest impact on a business’s success.
Never forget the importance of compounding returns – which our process delivers through the aggressive, iterative execution of 12 new experiments each month.
Execution is the multiplier that turns your testing idea into something valuable in reality. Not just getting lucky, but consistently making your own luck, over and over again.
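To make the compounding-returns point concrete, here’s a minimal sketch. The numbers below are purely hypothetical (an assumed 2% average lift per winning test, not Ladder data); only the 12-tests-per-month cadence and the ~43% success rate come from this post.

```python
# Hypothetical compounding model. Only TESTS_PER_MONTH and SUCCESS_RATE
# come from the post; AVG_LIFT is an assumed illustrative figure.
TESTS_PER_MONTH = 12
SUCCESS_RATE = 0.43   # ~43% of tests succeed
AVG_LIFT = 0.02       # assume each winning test lifts the metric by 2%

metric = 1.0          # index the starting metric at 1.0
for month in range(12):
    expected_wins = TESTS_PER_MONTH * SUCCESS_RATE  # expected winners per month
    metric *= (1 + AVG_LIFT) ** expected_wins       # lifts compound multiplicatively

print(round(metric, 2))  # → 3.41, i.e. ~3.4x the baseline after a year
```

Even modest individual wins multiply: roughly five 2% winners a month compounds to more than triple the baseline in a year, which is the whole case for a steady testing cadence over one-off campaigns.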
We’ve made our own luck at Ladder with the process detailed above. Our success rate tells the full story, but we’ve also:
- Gathered millions of dollars’ worth of performance data
- Built the world’s largest tactic database
- Built the Planner, a suite of tactical planning tools that provides full transparency into why our tests succeed (or fail)
To further expand on the third point listed above, Ladder wouldn’t be the agency it is today without our own proprietary technology platform and the brilliant tools built by third parties like Funnel, Sumo, Unbounce, and VWO.
We’ve learned by using the Planner to run our entire growth hacking process that successful marketing only happens when you can see all of the data and provide full transparency.
Built on top of 1,000+ proven growth tactics and powered by machine learning, it’s the technological engine that helps our client services and growth strategy teams operate at the level of excellence that they do. The Planner is purpose-built for creating a test-driven marketing plan with data funneled in from all marketing channels and sources.
Combining proprietary growth technology with the best SaaS tools in the world leads to market-beating performance, whether we’re working with startups or Fortune 500 brands.
I’ve spoken a lot in this post about “marketing tests” but it’s useful to talk a bit about how we define a marketing test beyond the standard industry definition.
We know that growth requires art and science. It’s why we treat marketing tests like scientific experiments.
First, we ask a question: Which funnel stage do we target with a test, and why?
Next, we do our research: We analyze the entire marketing funnel of a business we’re working with in order to identify the biggest growth opportunities. At the start of any relationship, we actually perform a full-funnel growth marketing audit to illuminate the growth levers.
After that, we pick tactics – out of the largest possible array of options from our database – that target those growth opportunities.
For each tactic, we write a hypothesis: What is it that we’re aiming to achieve with the test in a way that will increase ROI and boost a business’s KPIs? Every hypothesis is there to push the needle on a metric to drive growth, not just maintain status quo baselines.
Next, we execute, whether that’s email marketing, conversion rate optimization, or putting together winning combinations of copy, creative, and audiences in a PPC ad.
We check metrics daily to keep track of performance. When things go wrong, we use data-driven insights to make adjustments. Our standard protocol is to let a test run until it reaches statistical significance at 95% confidence.
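For a conversion-style test, the 95% confidence check can be sketched as a two-proportion z-test. This is a generic illustration, not Ladder’s internal tooling, and the sample numbers are hypothetical:

```python
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test for an A/B test.

    Returns (z, p_value). Relies on the normal approximation,
    so it assumes reasonably large sample sizes.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)              # pooled conversion rate
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))          # two-sided p-value
    return z, p_value

# Hypothetical numbers: control converts 100/2000, variant 140/2000.
z, p = two_proportion_z_test(100, 2000, 140, 2000)
significant = p < 0.05  # below 0.05 means significant at 95% confidence
```

Waiting for significance before calling a winner is what keeps a test’s result reusable as a “proven insight” rather than noise.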
This cycle of testing has compounding returns because every experiment – whether it fails or succeeds – generates proven insights into why it performed how it did. This is what drives innovation and improved strategy, month over month.
Ultimately, it’s how we maintain our 43% success rate.
How confident are we about this process? Our Director of Marketing is currently executing our 2017 Marketing Plan and live-blogging our successes and failures, execution processes, and more.
At the moment we’re on track: 16 of the 36 tests we’ve run for Ladder in 2017 resulted in positive growth. That’s a 44.4% success rate.
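The success rates quoted here follow directly from the reported counts (597 winners out of 1,392 tests in the 2016–2017 sample, and 16 of 36 for Ladder’s own 2017 tests):

```python
# Success rates computed from the counts reported in this post.
def success_rate(wins, total):
    return 100 * wins / total

overall = success_rate(597, 1392)   # 2016-2017 sample (see footnote)
ladder_2017 = success_rate(16, 36)  # Ladder's own 2017 tests

print(round(overall, 1))      # → 42.9, quoted as ~43%
print(round(ladder_2017, 1))  # → 44.4
```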
Everyone wants a repeatable, scalable system for growth. Whether it’s growing Ladder or growing our clients’ businesses, we’re being fully transparent in our blog about all our successes and failures, everything we’ve learned along the way, and the proven methods we’ve learned and developed for building and improving growth machines.
If you need something similar for your business, feel free to reach out at [email protected], or hit the big button below:
- We used a sample of 1,392 marketing tests in 2016 and 2017. 795 were either inconclusive (treated as a failure) or failed to drive growth, while 597 had a positive impact. ↩
- Note: Test results were self-reported as having a meaningful outcome for the business. Not all were statistically significant. ↩