Improving Response Rates
This is my lightning talk from the Write The Docs 2018 conference in Portland. I talk about using the scientific method to try to improve the response rates in "was this page helpful?" feedback widgets.
On a whim, I decided to give a lightning talk on day 2 of the 2018 Write The Docs conference in Portland. A lightning talk is an informal, 5-minute talk on anything related to the conference. Time permitting, pretty much any attendee can come up on stage and talk.
The main takeaway: if you want to join me on this voyage to learn metrics, email me at firstname.lastname@example.org or tweet at me @kaycebasques.
Here are the slides.
- I'm not an expert in this stuff. Think of this more as field notes from my journey to "get sorta good at" metrics.
- It's not really about metrics. It's about experiments.
- I'm attempting to use experiments to help me provide better answers to 3 important questions that have haunted me through my 7-year career in technical writing: What tasks do users need help with? Are they finding the content that's supposed to help them? Is the content actually helping them?
- The scientific method is our friend. You don't need to be a scientist to use it.
- To review, the scientific method means posing a question, predicting an outcome based on existing knowledge, building an experiment to test the prediction, and then analyzing the results.
- My question is "How can I improve response rates in feedback widgets?"
- My prediction is "If I put the feedback widget directly in the main content, rather than off to the side, response rates will improve."
- My experiment was to build a feedback widget that I could insert into the main content.
- The results seem to confirm the hypothesis. In the 50 or so docs that I've experimented with, I'm seeing about a 10x increase in responses per doc.
- Like I mentioned earlier, I'm not an expert in this stuff. I don't know how to truly analyze the data to calculate "statistically significant differences" or whatever. Maybe my experiment is flawed. But experimentation seems powerful, and the only way I'll learn how to do it properly is to keep doing it.
- Another experiment I'm conducting is whether more personal appeals for responses improve response rates. My hypothesis is that if I craft the request for feedback in a more personal voice, I'll get more responses than with the generic "was this page helpful?" format.
- The results for this second experiment are... to be continued! I just rolled out the experiment last week. Stay tuned to this blog to find out the results.
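As an aside on the "statistically significant differences" point above: one standard way to check whether a change in response rates is more than noise is a two-proportion z-test. Here's a minimal Python sketch using only the standard library. The counts are made up for illustration; they are not my actual data.

```python
import math

def normal_cdf(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def two_proportion_z_test(responses_a, views_a, responses_b, views_b):
    """Return (z, two-sided p-value) for the difference between two
    response rates (responses / page views)."""
    p_a = responses_a / views_a
    p_b = responses_b / views_b
    # Pooled proportion under the null hypothesis that both rates are equal.
    p_pool = (responses_a + responses_b) / (views_a + views_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - normal_cdf(abs(z)))
    return z, p_value

# Hypothetical numbers: a sidebar widget gets 10 responses in 5000 views,
# while an in-content widget gets 100 responses in 5000 views.
z, p = two_proportion_z_test(10, 5000, 100, 5000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A small p-value (conventionally below 0.05) suggests the difference between the two widgets is unlikely to be chance, though a proper analysis would also account for things like differing audiences across docs.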