
As some of you may know, we recently unveiled a daily blog email at InsightSquared, ensuring that our regular readers won’t miss any of our sales and marketing analytics and management content.

The decision to start producing and sending this email was not made lightly. We asked ourselves lots of tough questions, including:

  • What do we want to accomplish by sending this daily blog email?
  • In what format should the content be presented?
  • What kind of design elements do we want in the email?
  • How can we enable organic sharing of both the email and the content in the email?

And the one question we spent the most time discussing:

  • When should we send this daily email?

Instead of debating the anecdotal merits of a morning send versus an evening send – some people like to catch up on news first thing in the morning, others like to save it for bedtime reading – we did what any analytical and data-driven marketing team would – and should – do:

We tested.

A Culture of Experimentation

Before walking through how we tested the best time to send this blog digest email, it’s important to understand why marketers (and other members of any organization) should think about testing. It boils down to one simple concept:

We won’t always get it right. At least, not the first time.

If you’re a marketer or a sales rep who bats 1.000, congratulations! (And please join our team.) For everyone else, perfection doesn’t come easily – but it can be reached through a rigorous process of trial and error: testing, measuring results and then testing again until the results are where you want them to be. It’s okay – commendable, even – to fail, as long as you’re failing fast and learning from your failure. Failing means you’re not satisfied with the status quo, and are willing to take a few steps back in order to take a giant leap forward.

It’s what you do after the failure that counts, and that means having an agile mindset. Agility in the workplace means embracing a culture of knowing what you don’t know, being willing to experiment, measuring the results of all your experiments rigorously and then – most importantly – being adaptable. You can take what you’ve just learned and, because you aren’t married to anything from the past and nothing is set in stone, pivot to do things differently. Whether these pivots and changes are small tweaks or gigantic maneuvers, the point is that you’re willing to change, depending on what the data tells you.

Which brings us to our little blog email experiment.

What We Discovered in Our Experiment

When we first decided to launch this daily blog digest email, we of course wanted it to be a success. The question was: how would we measure that success? After some discussion, it essentially boiled down to: “How can we get as many of the recipients of this new regular email as possible to open it, click through and read our great content?”

We started by formulating a hypothesis, as any scientific or data-backed experiment should. Based on anecdotal evidence and conversations with similarly data-driven marketers in our network, we hypothesized that sending the email in the evening would drive more clicks, opens and reads.

We then determined the relevant marketing metrics for this experiment. The Harvard Business Review had it right in describing the mindset for experimentation metrics as “View many, choose one.” You definitely want to look at a wide swathe of different, possibly relevant marketing metrics…but you eventually want to boil it down to one clear difference-maker, lest you go crazy with analysis paralysis.

For this email experiment, some of the metrics we looked at included:

  • Total sends – the number of emails that were sent out
  • Open rate – the % of sent emails that were opened by the receiver
  • Clickthrough rate – the % of people who were sent the emails and then clicked through to the content within
  • Click-to-open rate – the % of subscribers who opened the email and then clicked on the links within

After considering the viability and importance of these metrics, we settled on open rate as the most important one for this experiment.
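To make these definitions concrete, here’s a quick sketch in Python. The counts below are made up purely for illustration; they are not our actual numbers:

  # Hypothetical counts for a single day's send (illustrative only)
  total_sends = 10_000   # emails that went out
  opens = 2_100          # emails opened by the receiver
  clicks = 450           # recipients who clicked through to the content

  open_rate = opens / total_sends            # opened / sent
  clickthrough_rate = clicks / total_sends   # clicked / sent
  click_to_open_rate = clicks / opens        # clicked / opened

  print(f"Open rate:          {open_rate:.1%}")           # 21.0%
  print(f"Clickthrough rate:  {clickthrough_rate:.1%}")   # 4.5%
  print(f"Click-to-open rate: {click_to_open_rate:.1%}")  # 21.4%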

Time to get out in the field and start testing! For us, this meant sending out the daily email and then measuring the open rates for these emails. We basically had two different ways to test the best time to send, neither of which was perfect:

  • Send the same email at the same time to two different batches of the same overall email segment, before changing the time and measuring the difference. The problem with this tactic was that the two batches could have different email habits, based on time zone, job role or some other variable.
  • Send the same email to the same segment at one time, and then at another time, in regular intervals. This was the tactic we ultimately went with, but it is not without its own faults. For instance, it’s hard to be sure that the segment’s email habits stay consistent from week to week. There are also other noisy variables, such as the holiday season, that might have affected the results one way or the other. (A sketch of this alternating schedule follows below.)
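As a rough illustration of that second tactic, here’s how an alternating weekly schedule might be expressed in Python. The helper and the dates are hypothetical, not pulled from our actual sending system:

  from datetime import date

  # Hypothetical helper: flip the send slot on alternating ISO weeks,
  # so the whole segment sees both conditions over time
  def send_slot(day: date) -> str:
      week = day.isocalendar()[1]      # ISO week number of the year
      return "8:00 am" if week % 2 == 0 else "9:00 pm"

  print(send_slot(date(2014, 1, 6)))   # ISO week 2 (even) -> 8:00 am
  print(send_slot(date(2014, 1, 13)))  # ISO week 3 (odd)  -> 9:00 pm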

The first week, we sent the email to our entire segment at 8:00 am each morning. The next week, we sent the email to that same segment of our entire blog subscriber database at 9:00 pm in the evening. The following week, we went back to the morning. We went back and forth in this same pattern for two months, culminating this past week. With the experimental period concluded, we stepped back, crunched the numbers and made a final determination:

We would be sending our daily blog digest email in the evenings.

The periods in which we sent the email in the evenings saw a more than 1% increase in open rates over the morning sends. While this might seem insignificant, a 1% increase over a segment as big as our email database can actually make a huge impact: potentially thousands of additional content views each day. When put that way, it was ultimately an easy decision.
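For anyone who wants to pressure-test a result like this, one standard approach (though not necessarily the exact analysis we ran) is a two-proportion z-test on the morning and evening open counts, plus some back-of-the-envelope arithmetic on what the lift is worth. Every number below is hypothetical – we’re treating the “1% increase” as roughly one percentage point of open rate and assuming a made-up list size:

  from math import sqrt

  def open_rate_z_score(opens_a, sends_a, opens_b, sends_b):
      """z-score for the difference between two open rates."""
      p_a, p_b = opens_a / sends_a, opens_b / sends_b
      pooled = (opens_a + opens_b) / (sends_a + sends_b)
      se = sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
      return (p_b - p_a) / se

  # Hypothetical totals over the test period
  z = open_rate_z_score(opens_a=4_000, sends_a=20_000,   # morning: 20.0% open
                        opens_b=4_220, sends_b=20_000)   # evening: 21.1% open
  print(round(z, 2))  # ~2.72; |z| > 1.96 is significant at the 95% level

  # What a one-percentage-point lift is worth on a (made-up) list size
  subscribers = 150_000
  extra_opens_per_day = subscribers * 0.01
  print(f"{extra_opens_per_day:,.0f} extra opens per day")          # 1,500
  print(f"{extra_opens_per_day * 365:,.0f} extra opens per year")   # 547,500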

 

But our testing is not over!

This is but one variable in the overall grand scheme of email marketing metrics and A/B testing possibilities. We’re going to continue improving our open rates through subject-line testing. We’re going to see if we can drive the click-to-open rate up with more compelling introductions or more engaging content. We won’t rest on our laurels until we are sure we have the best blog digest email around…and on that day, we’ll probably find something new to test!

That’s just how data-driven marketers work.

 
