Over the past few years I’ve been heavily involved in mobile. It started with experiments in HTML5 video 4 years ago, and it’s grown to be my bread and butter.
One of the things I realized early on with any kind of mobile advertising is that fast redirects matter. And DNS latency really matters. Neither of those is a huge deal on desktop… there, it’s more important that the redirect just work than how fast it works.
But mobile internet time isn’t the same as desktop internet time. If a desktop redirect takes 600ms on a fast, dedicated connection, that same redirect might double or triple (or more) over a slower, far less robust mobile connection.
And you don’t have to play the mobile game very long to realize that slow link resolution = lost eyeballs = lost revenue = inefficient ad spend. If you’re running mobile ads, just look at the discrepancy between paid clicks and what registers in your analytics. The breakage can be enormous.
2 years ago I started working on a fast new mobile redirect engine. And it worked really well. Several months ago I had the opportunity to rewrite it. Version 2 is even better. And it’s fast. Really fast.
I won’t go into the technical details of how my mobile traffic director is implemented. But I will share some of the stuff you need to be watching and looking at for optimum speed. And, of course, everything that applies in mobile also applies to regular desktop redirects… just without the urgency.
First, let me give you an idea of the scale I’m working with. Because this level of optimization might not be necessary if you’re running 100 redirects to a mobile offer. But in my business I ran 1,546,261 mobile clicks through my redirector worldwide last Saturday (6/29). And at that scale, details matter.
Also, it should be noted that my traffic director does quite a bit more than just redirect incoming links:

- It handles geo-splitting automatically, so I can run my network links to a single endpoint worldwide and handle the destination page dynamically.
- It allows me to do fairly complex weighted split tests.
- Because it’s specifically built for mobile, it does device and platform detection. I don’t currently use that in the redirect logic… but I do pass it on to my destination pages so they can be personalized, if required.
- I can dynamically add tags to the destination URL, so I don’t have to modify incoming ad network links (which often require re-approval) just to change a campaign, for example.
- Lastly, it generates an analytics data point containing all kinds of information about the click, the source, and the device.
So it’s doing a fair amount of work in the background.
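To give a flavor of the geo-splitting and weighted split-test logic, here’s a minimal sketch. All of the names, URLs, and weights below are illustrative — this isn’t my actual traffic director’s code, just the shape of the idea:

```python
import random

# Hypothetical split table: per-geo lists of (destination URL, weight).
# Weights don't need to sum to 100; random.choices normalizes them.
SPLITS = {
    "US": [("https://offer-a.example.com", 70), ("https://offer-b.example.com", 30)],
    "EU": [("https://offer-c.example.com", 100)],
}

def pick_destination(geo, splits=SPLITS, default_geo="US"):
    """Pick a destination URL for an incoming click, weighted per geo.

    Unknown geos fall back to the default geo's split.
    """
    options = splits.get(geo, splits[default_geo])
    urls = [url for url, _ in options]
    weights = [weight for _, weight in options]
    return random.choices(urls, weights=weights, k=1)[0]
```

In the real thing the geo comes from an IP lookup and the weights live in a config you can change without touching the incoming links — which is the whole point.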
My target is sub-200ms complete redirect negotiation worldwide. And that includes DNS resolution (which ended up being the final piece of the puzzle). In the US I actually average < 75ms. It’s about 150ms in Europe. And Asia is the slowest, clocking in at just over 200ms.
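When I say “complete redirect negotiation,” I mean every phase a cold client pays for: DNS lookup, TCP connect, and the HTTP redirect itself. A simple way to see where the milliseconds go is to time each phase separately. This is a sketch — the phase names are illustrative, and you’d plug in real calls where the lambdas are:

```python
import time

def time_phases(phases):
    """Time each named phase of a redirect transaction.

    `phases` is a list of (name, callable) pairs — e.g. DNS lookup,
    TCP connect, HTTP request. Returns a dict of name -> elapsed
    milliseconds, plus a 'total' spanning all phases.
    """
    timings = {}
    start = time.perf_counter()
    for name, fn in phases:
        t0 = time.perf_counter()
        fn()  # run the phase
        timings[name] = (time.perf_counter() - t0) * 1000.0
    timings["total"] = (time.perf_counter() - start) * 1000.0
    return timings
```

In practice the first phase would be something like `("dns", lambda: socket.getaddrinfo(host, 443))` — which is exactly how you catch DNS eating your budget before the HTTP request even starts.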
In my experience, if you want really fast mobile redirects, here are the keys:
- Proximity. Get your servers as close to your visitors as possible. In our case we have 3 server clusters… one in the central US (I’m considering moving this to a cluster on each coast), one in western Europe, and one in southeast Asia. Each location has at least 2 servers behind a load balancer for improved robustness and uptime. After a fair amount of testing, I’ve opted for more, smaller servers over fewer, bigger servers in each cluster.
- Geo-DNS. Proximate servers aren’t of much use if you don’t also have fast, geo-distributed DNS. As I mentioned, this was the final piece of the puzzle for me. Until I found a really good geo-DNS provider, I just couldn’t get the entire cold redirect transaction reliably under 200ms. In my case, I now have geo-DNS with failover to a different datacenter. That means my links stay active (albeit slower) even if an entire datacenter goes down.
- A Records. Ditch the CNAMEs and use only A (IP address) DNS entries. Frankly, this one surprised me. Even after adding geo-DNS, we still had inconsistent redirect resolution times. And most of the lost time was in the DNS lookup. We switched from CNAMEs to A records and everything fell into place. I’m very open to CNAMEs not being the actual problem and just a symptom. What I know is that A records worked well in our setup, and CNAMEs didn’t. Done deal. We only use A records now.
- Measure Locally. Monitor your redirect times around the world (or wherever you serve links). You can’t just check them from your office and think they’re working right. That was probably the biggest testing difference between v1 and v2 of my traffic director. I assumed v1 was rocking worldwide… and I was wrong. It kicked a** in the US… and sucked everywhere else. Local monitoring is important.
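To make the A-record point concrete, here’s a toy model of resolution. The zone data is entirely made up (documentation-range IPs, hypothetical hostnames), and real resolvers often return the whole chain in one response — which is part of why I hedge on CNAMEs being the cause rather than a symptom. But the worst case is clear: every alias hop is potentially another lookup before the client has an IP, while an A record answers in one.

```python
# Toy DNS zone: name -> (record type, value). Hypothetical names/IPs.
ZONE = {
    "go.example.com": ("CNAME", "lb.cdn.example.net"),
    "lb.cdn.example.net": ("CNAME", "edge7.cdn.example.net"),
    "edge7.cdn.example.net": ("A", "203.0.113.7"),
    "fast.example.com": ("A", "203.0.113.8"),
}

def resolve(name, zone=ZONE):
    """Follow CNAMEs until an A record; return (ip, lookups_needed)."""
    lookups = 0
    while True:
        rtype, value = zone[name]
        lookups += 1
        if rtype == "A":
            return value, lookups
        name = value  # CNAME: chase the alias
```

In this model `fast.example.com` resolves in 1 lookup while the CNAME chain behind `go.example.com` takes 3 — and on a flaky mobile connection, every extra round trip is latency you can feel.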
The most surprising thing to me? That it actually matters. Improving our redirect speed led directly to a noticeable reduction in our ad network breakage. So more of our paid clicks were seen. Beyond that, it also improved our conversion rate because we had a lower rate of page abandonment and bouncing.
Was it a huge improvement? No. If you’ve got other areas where you can get double-digit movement in your conversion process, you should probably focus on those first. If you’re down to dotting your I’s and crossing your T’s, you should take a look at your mobile redirect speeds. They actually matter.
Of course, the biggest downside to all this is the cost. Just to get a bare-bones service in place with no local redundancy would probably cost > $300/mo. My setup is quite a bit more than that. And that’s more than most smaller mobile marketers should be paying for a redirect service. So it kind of creates a catch-22… you need to be profitably running at volume to do it right… but it’s hard to do it right when you don’t have enough volume/revenue to support the infrastructure. Unfortunately, I don’t have an answer for that.
Links & Resources (all naked, no affiliate):
After much experimentation, I ended up choosing DNS Made Easy for my geo-DNS. They’ve been great. And much more wallet-friendly than other options. I never have to worry about them and they just work.
I use Pingdom for my ongoing worldwide monitoring and sometimes for spot-checking link resolution time and flow. I think they’re ok. There are things I wish they did better… but they work. And they send notifications straight to my phone when there are problems.
We originally used Amazon AWS for v1 of the traffic director. After running side-by-side tests, we switched to Microsoft Azure for v2. We’re pleased enough with Azure that we’ve transitioned everything but some CloudFront content off of AWS and onto Azure. We have a *lot* of Azure servers now. Other than a couple of really boneheaded outages on their part, I like them much better than Amazon.
P.S. I’ve considered open-sourcing my traffic director. Let me know if you’d be interested in that. It won’t help at all in the ongoing expense of maintaining a traffic directing service, but it’s a good, bullet-proof, battle-tested solution that works.