The future of media

I just read an awesome blog post written by Albert Wenger from Union Square Ventures. Albert has financed some of the largest consumer internet franchises on the web, including Twitter, Zynga, Foursquare, Kickstarter and Tumblr. He knows what’s what!

A while back I wrote a post called “The future of eCommerce”, which outlined my vision of where online commerce is heading. His recent post bears an uncanny resemblance to what I described back then.

I explained that the future of eCommerce is going to be driven by the following three factors:

  1. Mobile
  2. Frictionless payments
  3. Social platforms

Albert’s post is titled “Attention Scarcity, Transactions and Native (Mobile) Monetization”, and his themes map onto mine:

  1. Attention Scarcity = Frictionless payments
  2. Native (Mobile) = Mobile (Twitter, SMS and email – the lowest common denominator of technologies on smartphones)
  3. His use case is an example of buying via Twitter = Social platforms

What Albert is saying is that we need to capture consumers when their purchase intent is at its highest, while they are engaged with their medium of choice, and make it easy for them to pay.

Marketing 101 is all about having a celebrity endorse a brand (think Nike + Roger Federer), putting the celebrity on TV, broadcasting them to a large and engaged audience, and creating an emotional connection with the viewer. This is how brands are built. Almost all established brands have been built this way. It is what they teach you in marketing kindergarten.

As a cyclist I watch the Tour de France every year. I see these guys pumping up Alpe d’Huez with sweat beading from their foreheads. They are my heroes in that moment, and I watch in awe considering how gruelling the ride must be. If you are a soccer fan, you sit glued to the TV watching your heroes play the game. Every pass of the ball strikes an emotional chord. Fans want to be the players. Fans want to wear what they are wearing. This is why people pay $149 for a pair of Nike shoes that cost $15 to manufacture. This is why we shop at Nike Town, not Joe’s Shoe Store.

The problem, though, is that until now it has been impossible to turn purchase intent into a transaction at the moment when that intent is at its highest. In order to transact, consumers have to move away from the medium they are engaged with, whether it be Twitter, TV, newspapers or magazines, and go to a store or a web browser to buy. There is too much friction in this process.

Albert describes this here:

Because at the moment most routing is still of the disrupting and annoying kind that tries to take your attention and move it somewhere else altogether, such as a different web site altogether. The primary reason behind the need to disrupt and really move you elsewhere is that most web services have not yet found or deployed their native way of making money, which is largely due to the inability to transact within the services themselves.

In Albert’s example, the consumer has to be taken away from the Twitter stream and moved elsewhere on the web, into a poorly converting funnel, before they can buy. In reality, it is much worse than this. While watching soccer, if I want to buy a Barcelona jersey, I have to leave the medium I’m engaged with (TV), leave my house, go and find a store that stocks the product, and then buy it.

My purchase intent is at its highest while I’m watching the TV, but it diminishes over time, so the sooner a transaction can be captured, the higher the chance of a sale.

As time goes on my circumstances change: I get distracted, my financial position might no longer stretch to a jersey, my wife might talk me out of it, or Barcelona might lose the game – i.e. before half time I might have wanted to buy the jersey badly enough to transact on the spot, but by the end of the game I no longer want to. Had I been able to make that purchase instantly, before half time, I’d have done so.

In reality, the transaction point is not just a website away (perhaps it is from Twitter), but if you take traditional marketing methods into consideration, the transaction point is miles away. If you see something you like on TV, hear something on the radio or see something in a magazine or catalogue, you still need to visit a store. Even if you don’t need to visit a store, there is significant friction in buying online. You need to leave the couch, find a computer, search for the product, find a size, add it to the cart, enter your payment details and so forth.

The opportunity therefore lies in bringing the transaction point to the medium – Twitter being one of those mediums.

  • What if you could bring the store to the TV?
  • What if you could transact within the Twitter stream?
  • What if you could buy off the page of a magazine?
  • What if you could buy instantly while listening to radio?

Albert describes this here:

The primary reason behind the need to disrupt and really move you elsewhere is that most web services have not yet found or deployed their native way of making money, which is largely due to the inability to transact within the services themselves.

It is not only web services that have not found a native way of transacting in stream. It is ALL offline mediums that don’t yet have a native way of transacting, including television, radio, magazines, catalogues and so on. Today, we see something we like, then go to a place where a transaction can occur, either in store or on the web. But what if the transaction could occur from the medium itself? What if I could buy Roger Federer’s shirt off his back while I’m watching the game in real time, using NATIVE technologies on my mobile phone (by native I am referring to technologies that don’t require an app download)? What if I could purchase when my purchase intent was at its highest, via a single-click payment mechanism? This is what Albert alludes to in his last paragraph when he talks about Facebook, Apple and Amazon storing credit cards.

The platform we have created (BuyReply) solves this problem, and I look forward to sharing more with you shortly. We enable instant transactions from any online or offline medium, including Twitter, via a secure virtual wallet!

The programmable data center

One of the benefits of starting BuyReply is that I have been able to reconnect with technology. BuyReply is a software business, whereas Lind Golf was a retail business that happened to be online. I don’t view eCommerce businesses as technology ventures. Selling product online is no different to selling product in a store; it’s just a different kind of shop front.

I’ve always enjoyed the technology and systems side of business more than product and trading, and I’m glad that I now have a business where much of its success depends on this.

As we’ve built out BuyReply we’ve had to architect it in such a way that it can scale. One of the challenges of building a system that will generate much of its traffic from television is that television can cause huge spikes. For example, if a call-to-action is displayed on television asking viewers to email or tweet an address, hundreds of thousands of requests could arrive at our servers in a matter of seconds or minutes. If 1m viewers are watching a show like MasterChef and a call-to-action says ‘email dessert to recipes@masterchef.com.au’ to get the dessert recipe, there is a good chance that 20% or more of the audience would participate. That’s 200,000 inbound requests in a matter of seconds. To combat that, we’ve designed an architecture that can scale.
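
To make that concrete, here is a minimal sketch of what the ingestion side could look like, under assumptions of my own: inbound emails or tweets hit a thin web handler that does nothing except drop a small message onto an Amazon SQS queue, so a TV-sized spike is absorbed by the queue rather than by the application servers. The Flask handler, queue name and payload fields below are hypothetical, not BuyReply’s actual code.

```python
# Hypothetical ingestion sketch: accept an inbound call-to-action request and
# buffer it on SQS so a TV spike never hits the application servers directly.
import json

import boto3
from flask import Flask, request

app = Flask(__name__)
sqs = boto3.resource("sqs")
inbound_queue = sqs.get_queue_by_name(QueueName="inbound-replies")  # hypothetical queue name


@app.route("/inbound", methods=["POST"])
def inbound():
    # Store only the minimum needed to process the reply later.
    payload = {
        "channel": request.form.get("channel"),  # e.g. "email", "tweet" or "sms"
        "sender": request.form.get("sender"),    # email address or Twitter handle
        "keyword": request.form.get("keyword"),  # e.g. "dessert"
    }
    inbound_queue.send_message(MessageBody=json.dumps(payload))
    return "", 202  # accepted; the actual processing happens asynchronously
```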

As a startup mentor I try to persuade startups not to spend too much time worrying about scale until they have enough customers, because building for scale is an engineering task in itself. We, however, have had to think about scale from day one. Luckily, Amazon Web Services exists.

What we’ve done is architect the application so that it can scale, even though our staging environment is hosted on a single server. I think it is very important to build your application for scalability early on so that you don’t have to rebuild it later. You don’t have to host on scalable hardware at the beginning, but if your app cannot scale you might be in a bit of trouble down the road when you need it to. We have a sophisticated queueing and queue-processing architecture that is capable of running as a service on multiple instances. This means that we can distribute incoming requests across as many servers as we like should we need to.
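
As an illustration of the worker side (again a sketch under my own assumptions, reusing the hypothetical “inbound-replies” SQS queue above rather than BuyReply’s real code), every instance runs the same loop: long-poll the queue, process whatever arrives, delete the message. Scaling out is simply a matter of running more copies of this script on more servers.

```python
# Hypothetical worker sketch: run one or more copies of this on any number of
# instances; SQS hands each message to only one worker at a time.
import json

import boto3

sqs = boto3.resource("sqs")
queue = sqs.get_queue_by_name(QueueName="inbound-replies")  # hypothetical queue name


def process(reply):
    # Placeholder for the real work: look up the offer for the keyword,
    # charge the stored card, send a confirmation email or tweet, etc.
    print("processing reply from", reply["sender"])


while True:
    # Long polling keeps idle workers cheap while picking up new messages quickly.
    for message in queue.receive_messages(MaxNumberOfMessages=10, WaitTimeSeconds=20):
        process(json.loads(message.body))
        message.delete()  # delete only once the reply has been handled
```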

This all sounds very complicated, but in reality it’s not that complicated or expensive, thanks to the programmable data center.

We use Amazon Web Services to scale our service. Using AWS we can start with a small number of servers and add more as we need them, and we only pay for the servers we use while we use them. For example, if we are expecting a large amount of traffic and need 50 additional servers to handle the load from a TV spike for an hour, we can boot up 50 servers 2-3 minutes before the show starts, run them for the duration of the show and shut them down afterwards. All of that would only cost us $25.
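
As a hedged sketch of what that burst capacity looks like in code (the AMI ID and instance type below are placeholders of mine, not our real configuration), booting and tearing down a fleet is only a few calls with boto3:

```python
# Hypothetical burst-capacity sketch: launch 50 servers shortly before a show
# airs, then terminate them afterwards so we only pay for the hours they ran.
import boto3

ec2 = boto3.resource("ec2")

# Boot the fleet a few minutes before air time.
fleet = ec2.create_instances(
    ImageId="ami-12345678",   # placeholder AMI baked with the application
    InstanceType="m3.large",  # placeholder instance type
    MinCount=50,
    MaxCount=50,
)

# ... the show runs and the extra servers absorb the spike ...

# Shut everything down once the spike has passed.
for instance in fleet:
    instance.terminate()
```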

As an old-school MCSE, I am blown away by how incredibly powerful AWS is. What used to take weeks or months to configure and deploy can now literally be deployed with a line of script.

Our entire BuyReply infrastructure, which is made up of 10 servers, 3 security groups, two autoscaling groups, a load balancer and two mirrored database instances across two availability zones, can be deployed by pasting 10 lines of code into a terminal window. Within seconds the entire infrastructure is booting up, and within 2-3 minutes it’s ready to accept traffic. All we need to do is flip the DNS to point to our load balancer and it starts working. The infrastructure is also highly available, meaning that every aspect of the platform is mirrored and architected for fault tolerance and failover.
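
To give a feel for what those few lines can look like, here is a hedged boto3 sketch that stands up a classic load balancer, a launch configuration, an autoscaling group spread across two availability zones and a Multi-AZ (mirrored) MySQL instance. The names, AMI, instance types, zones and password are placeholders of mine, and a real script would also create the security groups and point DNS at the load balancer.

```python
# Hypothetical infrastructure bring-up sketch: classic ELB + autoscaling group
# across two availability zones, plus a Multi-AZ (mirrored) MySQL database.
import boto3

zones = ["us-east-1a", "us-east-1b"]  # placeholder availability zones

elb = boto3.client("elb")
elb.create_load_balancer(
    LoadBalancerName="web-lb",
    Listeners=[{"Protocol": "HTTP", "LoadBalancerPort": 80,
                "InstanceProtocol": "HTTP", "InstancePort": 80}],
    AvailabilityZones=zones,
)

autoscaling = boto3.client("autoscaling")
autoscaling.create_launch_configuration(
    LaunchConfigurationName="web-lc",
    ImageId="ami-12345678",   # placeholder AMI
    InstanceType="m3.large",  # placeholder instance type
)
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchConfigurationName="web-lc",
    MinSize=2,
    MaxSize=10,
    AvailabilityZones=zones,
    LoadBalancerNames=["web-lb"],
)

rds = boto3.client("rds")
rds.create_db_instance(
    DBInstanceIdentifier="app-db",
    DBInstanceClass="db.m3.medium",  # placeholder database class
    Engine="mysql",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="change-me",  # placeholder only
    MultiAZ=True,                    # mirrored across the two zones
)
```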

If we were to buy the hardware that runs BuyReply, it would cost well over $100,000 and require a team of sysops to manage and deploy, but thanks to AWS we have a programmable data center that is dirt cheap.

Many large web services run on AWS, including Netflix, Dropbox and Heroku. AWS is what allowed Instagram to scale to billions of requests a day with just three engineers.

As a bit of a tech geek, I find this whole AWS thing a lot of fun.