I think this is a question everyone in the analytics field asks themselves at one point or another. Unfortunately, the short answer is no: there is no holy grail when it comes to finding “best practices” for analytics testing. Fortunately, there are several great resources out there that can support learning about analytics testing. One thing I would recommend is to seek out trainings and courses that offer guidance on this topic. Here are two great resources that come to mind:
1) New Organizing Institute
The New Organizing Institute’s (NOI) mission is to support and train organizers to build and manage effective movements by integrating tried-and-true community organizing, cutting-edge digital strategy and data-driven decision making. This year, the NOI will be hosting its Culture of Analytics and Optimization training. During this two-day course, you will learn the art and science of online analytics and experimental design. The course is designed for active new media practitioners, but it’s also a great chance for managers and leaders to better understand the work that’s possible when you institute a rigorous culture of testing.
2) Nonprofit Technology Network
The Nonprofit Technology Network (NTEN) aspires to a world where all nonprofit organizations use technology skillfully and confidently to meet community needs and fulfill their missions. NTEN worked with Idealware to survey nonprofits about their relationships with data, and the results revealed a stark dichotomy: organizations were either doing a lot with their metrics or not much at all. The resulting report offers a great view into how organizations collect data and integrate it into their daily decision-making processes. It’s a valuable resource because it provides recommendations you can use to bypass potential barriers as you work toward implementing a culture of measurement, testing and analytics.
One of the reasons there’s no “testing guidebook” is that a result from testing a variable in one instance does not necessarily apply to all instances. Industry type, audience and type of call to action are just a few factors that can influence test results. For example, a new way of asking for donations that works in one sector may not work in another. In addition, ideas that have been tested in the past and come to be thought of as best practices evolve as well. The suggestion is to always go back and re-test the ideas that have worked, because what worked before may not hold in the future.
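To make “testing” concrete: a common proof-of-concept experiment is an A/B email test, where you compare the conversion rate of a new donation ask against the old one and check whether the difference is statistically meaningful. The sketch below (not from any organization mentioned here, and with entirely hypothetical numbers) uses a standard two-proportion z-test, implemented with only the Python standard library.

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates.

    conv_a / n_a: conversions and recipients for the control version.
    conv_b / n_b: conversions and recipients for the variant.
    Returns (rate_a, rate_b, z statistic, two-sided p-value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that both versions convert equally
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via the error function)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical numbers: 120 donations from 4,000 emails with the old ask,
# 156 donations from 4,000 emails with the new ask.
p_a, p_b, z, p = two_proportion_z_test(120, 4000, 156, 4000)
print(f"control {p_a:.1%}, variant {p_b:.1%}, z = {z:.2f}, p = {p:.4f}")
```

A p-value below your chosen threshold (0.05 is conventional) suggests the difference is unlikely to be chance, but as the paragraph above notes, a result like this holds only for that audience, that ask, and that moment, so it should be re-tested rather than treated as a permanent best practice.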
Conversation with Jim Pugh:
I know that for many, not having a reference or best-practices guide makes testing tedious and challenging, so I reached out to Jim Pugh, CEO of ShareProgress and former analytics consultant, to hear more about what people can do when faced with these challenges. Here is his take:
What can we do to learn more about how to test and analyze the results?
As of now, there aren’t a whole lot of tutorials or documents online to learn about testing and optimization. There have been discussions on making resources like these available, but it hasn’t happened yet.
Your best bet right now is probably to either sign up for a training on the topic, or attend one of the big progressive conferences (like RootsCamp or Netroots Nation), where you’ll be able to see presentations on new experimental results and learn about how different organizations run their analytics programs. It’s also a great way to get to know practitioners in the field, who can help answer any questions you might have.
What do you suggest can be done to help support and facilitate testing within organizations?
Something that can make (and has made) a big difference is better information sharing on analytical practices and results among different progressive organizations. The Analyst Group was created for this purpose, with different groups presenting the results of recent tests they’ve run each month via webinar. AnalyzeProgress is a listserv for progressive analytics practitioners, where they can share their ideas, methodologies, and testing results. These communities have helped to start a more free flow of information between groups, but we’ve still got a long ways to go.
What are some barriers that people may face in trying to integrate testing as part of their campaign process?
A common issue for organizations that are just starting with testing and optimization is getting serious buy-in from senior leadership. Analytics has gained much wider acceptance in the last few years, but there are still a lot of people who are hesitant to devote the staff time and resources to do it right. In those cases, showcasing successful results from other organizations and running some simple, proof-of-concept experiments can demonstrate the power of testing, which often helps convince leadership that it’s a worthwhile investment.
What are some effective ways to internally share the testing findings within organizations?
It’s important that you email out the results of your experiments to others in the organization whenever you conclude a test. Experiments only have value if they can help guide the program of your organization, and that can’t happen if the results are stuck in your head. Make sure to clearly explain the motivation for the experiment, what happened, and the implications of those results.
For long-term documentation, I think this varies from organization to organization. While I was working at the Democratic National Committee, we maintained a wiki with the results from all our testing, and that ended up being quite helpful for us. This might be overkill for other groups, though — just archiving the email write-ups you send could be sufficient.
Final thoughts, Jim?
Analytics is developing and growing rapidly in the progressive movement. Keep an eye out, since there will likely be new tools and training resources appearing throughout the coming year. And if you’re not already, start thinking about how to integrate it into your own program!
Jim Pugh is CEO of ShareProgress and a former analytics consultant, working for progressive non-profits and companies. He is the former Chief Technology Officer for Rebuild the Dream, and previously served as the Director of Analytics and Development at Organizing for America and the Democratic National Committee. Jim has a Ph.D. in distributed robotics from the Swiss Federal Institute of Technology in Lausanne, Switzerland.