The central issue with the “should we” question is that it tests whether mobile users respond in measurable ways to a traditional “desktop” email versus a mobile-optimized version. That may be a flawed test. Desktop email best practices are well established: there is a decade-plus of institutional knowledge available, and consumers know what to expect. Mobile email has no such playbook, so we are pitting a very mature desktop approach against a nascent mobile one.
So I will ask: Do we really need campaign-by-campaign, variant-by-variant data to tell us that mobile optimization is worthwhile? Isn’t it clear that mobile users are such a pervasive influence on interactive marketing that we can justify investment in creativity and experimentation? Aren’t excellent user experiences worth delivering to an important segment of consumers? Can’t we simply declare, confidently, that consumers are re-evaluating their brand affinities every day, consciously or subconsciously marking up the brands that embrace the mobile user experience and marking down those that don’t?
I believe that if we wait for the data to show us the outcomes of mobile email optimization, 10% more clicks here or a few dollars more in average order value (AOV) there, we may be waiting a long time. And while we wait, brands seeking a stronger bond with their consumers may be losing valuable traction to this form of data paralysis.
As marketers, we need to accept that brand perception and user experience are valid sources of motivation, direction, and inspiration. I’ll argue that the answer to the question “should we do it?” is more self-evident than we currently acknowledge. In fact, the more appropriate question to start solving for is “how.”