Amazon used re:Invent 2015 to emphasize the growing momentum of its cloud infrastructure business and mark its transition into a data and application platform.
re:Invent is kind of incredible. I don’t think I have ever seen a more committed set of conference delegates in terms of session attendance. Everyone was eager to learn, so pretty much every session was packed. I’ve certainly never been to a show where Adrian Cockcroft was seemingly less of a draw than a nearby vendor session on Big Data.
The hunger for learning shouldn’t surprise us given the velocity of Amazon service delivery – there is an awful lot of stuff to take advantage of. Amazon is a flywheel of new function delivery, and the company’s growing community evidently wants to take advantage of new services as they are delivered. Keeping up with Amazon can be a full-time gig. Just one data point: my friend Ant Stanley saw enough market opportunity to launch A Cloud Guru, a video learning platform dedicated to AWS, running on AWS platform services – for bonus points it’s worth checking out this post about its microservices/serverless architecture.
In terms of the platform play, in case you didn’t get the memo – AWS Lambda is a potential game changer, allowing developers to write stateless application functions in a variety of programming languages, triggered in response to service events, without needing to provision servers. Lambda turns Amazon Web Services into one big event-driven data engine. Events can be triggered by any change within AWS – updates to objects in S3, DynamoDB tables, Kinesis streams or SNS calls. Amazon manages all server deployment and configuration in handling Lambda calls – “serverless” cloud, as Amazon calls it, which maps fairly well to what everyone else calls Platform as a Service (PaaS).
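To make the model concrete, here is a minimal sketch of what a Lambda function looks like: a handler that receives an event document and does its work statelessly. The event shape below follows the S3 object-notification structure, but the bucket, key and "processing" step are all hypothetical – a real function might resize an image, index a document, or fan the event out to Kinesis or SNS.

```python
import json

def handler(event, context):
    """A stateless function invoked by AWS on each S3 event.

    No servers are provisioned by the developer; AWS calls this
    entry point with the event payload and a runtime context.
    """
    results = []
    for record in event.get("Records", []):
        # S3 event notifications carry bucket and object details here
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Placeholder for the actual work (thumbnailing, indexing, ...)
        results.append(f"processed s3://{bucket}/{key}")
    return {"statusCode": 200, "body": json.dumps(results)}
```

The important part isn’t the body – it’s that the unit of deployment is a function, and the trigger is a change somewhere else in AWS.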
So far Lambda adoption has been a little slow, partly because it doesn’t fit into established dev pipelines and toolchains, but also almost certainly because of fears over lock-in. Amazon has historically dominated the cloud market precisely because it was an infrastructure play, rather than a platform services play. As I said back in 2009:
“Amazon isn’t the de facto standard cloud services provider because it is complex – it is the leader because the company understands simplicity at a deep level, and minimum progress to declare victory. Competitors should take note – by the time you have established a once and future Fabric infrastructure Amazon is going to have created a billion dollar market. And what then? It will start offering more and more compelling fabric calls… People will start relying on things like SimpleDB and Simple Queue Service. Will that mean less portability? Sure it will…”
So while we expect Lambda adoption to pick up quickly, it is following a similar trajectory to the broader PaaS market. Amazon will have to do some market-making and hand-holding to encourage adoption of technology that could mean lock-in.
Update: When thinking about fear of lock-in, however, it’s important to note that pragmatism and effective packaging tend to trump openness when it comes to the enterprise, even in the age of open source. Every wave of open technology is eventually pwned, as customers choose what they see as the best packager – think Windows vs NetWare, Red Hat, the Unix wars, the Oracle database and so on. The best packager in any tech wave therefore wins, and wins big, because convenience trumps openness in enterprise decision-making, especially when it’s driven by lines of business. The perceived value of Lambda could begin to erode fears about lock-in. Meanwhile enterprise customers are far less fear-driven when it comes to public cloud than they were even a year ago, and they’re increasingly ready to make strategic commitments.
Lambda will underpin new AWS offerings, initially in areas such as security, compliance and governance, with the AWS Config service and AWS Inspector, given that any system change or API call can be logged, and/or a related message routed and acted upon. With the cloud we have far better observability, and a greater ability to turn assets on or off, than we do with on-prem architectures.
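The pattern is easy to sketch: every resource change becomes an event, and rules evaluate each event for compliance. The code below is a hedged illustration of that idea only – the event shape, rule function and verdict strings are hypothetical, not the actual AWS Config rule API.

```python
def security_group_rule(change):
    """Flag security groups that open SSH (port 22) to the world.

    `change` is a hypothetical change-event document; a real AWS
    Config rule receives a similar configuration snapshot.
    """
    if change["resourceType"] != "AWS::EC2::SecurityGroup":
        return "NOT_APPLICABLE"
    for perm in change["configuration"].get("ipPermissions", []):
        if perm.get("fromPort") == 22 and "0.0.0.0/0" in perm.get("ranges", []):
            return "NON_COMPLIANT"
    return "COMPLIANT"

def evaluate(changes, rules):
    # Run every rule against every logged change and collect verdicts
    return [(c["resourceId"], rule(c)) for c in changes for rule in rules]
```

Because every change in the cloud is observable as an event, compliance becomes a continuously evaluated function rather than a periodic audit.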
I had an interesting chat with Adrian Cockcroft on the subject of security just after the keynote. Long of the opinion that cloud is more secure than on-premise, he said he could easily envisage that within 5 years you won’t be able to get PCI compliance unless you’re in the cloud. It seems Sean Michael Harvell has similar ideas.
The nearest equivalent general-purpose cloud architecture currently being touted is probably Thunder, from Salesforce, announced at Dreamforce last month. Thunder is also event-based, allowing for streaming and rules-based programming, with federated data stores on the back end – notably bridging IoT logs with customer data.
The event-based programmability of Lambda, with a rules engine, is very interesting, but the data management side behind it is kind of stunning.
Amazon is delivering on a federated data store model – once the data is stored in the AWS Cloud, developers can choose their engine of choice to program to, whether that be MySQL, Oracle, SQL Server, MariaDB (announced at re:Invent), or the AWS Redshift data warehouse. The idea that a developer doesn’t need to choose a specific data store, or move data around, before creating or extending an app is very powerful. This is NoSQL on steroids. Most web companies today are building apps composed of multiple data stores, and Amazon is catering to that requirement, readying for a future where enterprises start to make choices that look more like web companies’. There is no single database to rule them all. Amazon, as in other areas, doesn’t try to create a single once-and-future solution, but does a great job of packaging what’s already out there. RedMonk has been writing about heterogeneous federated data stores since forever, so it’s gratifying to see cloud computing finally making them a reality.
In terms of competitive offerings it’s worth mentioning Compose.io, recently acquired by IBM, in this context. Compose also delivers support for multiple federated data stores in the cloud – MongoDB, Elasticsearch, RethinkDB, Redis and so on.
Update: Amazon is also on top of the in-memory cache and queuing pattern, with AWS ElastiCache offering Redis and Memcached as managed services. Oh yeah – Amazon also announced AWS ES – managed Elasticsearch – at re:Invent. Here’s why that’s kind of interesting. Thanks for the reminder @jdub!
But Amazon didn’t just make it easy to write data applications, it also made a great argument-as-code for the new approach with its new QuickSight analytics platform. QuickSight is the new AWS business intelligence and data visualisation platform, built on top of an in-memory query engine Amazon calls Spice (“Super-fast, Parallel, In-memory Calculation Engine”). The core innovation of QuickSight, as I see it, is that queries can be run across the customer’s data estate in AWS, regardless of whether it’s held in the Redshift data warehouse, the Kinesis streaming platform, DynamoDB (Amazon’s managed NoSQL database) or one of the database engines supported by AWS RDS, including Oracle, Microsoft SQL Server and Postgres.
QuickSight discovers the sources where your organisation has stored data in AWS, and makes suggestions about possible relationships. It’s pay-by-the-hour, with no ETL or data movement (cost) overheads. Amazon claims that because it builds its offerings on top of standard open source technology, customers should be less wary of lock-in than with traditional proprietary databases and applications. In summary, data gravity, commonly thought of as a cloud drawback, is now potentially an Amazon advantage.
To that end, my favourite re:Invent announcement was definitely the Heath Robinson contraption otherwise known as Snowball.
How to get large volumes of data into the cloud is a live issue. Even with dedicated pipes it can take weeks or months to upload tens of terabytes of data. Amazon wanted to make this easier, and therefore invented the contraption above: a dedicated encrypted storage appliance, which the customer fills with data, before Amazon manages collection and shipping, uploads the data for the customer, and returns the empty box. Note the onboard Kindle, to prevent label misprinting issues and allow for tracking. Each Snowball can store up to 50TB of data. You have to love a good hack.
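The arithmetic behind Snowball is worth doing once. A back-of-envelope sketch, assuming a perfectly utilised link and decimal units (1 TB = 10^12 bytes) – real transfers do worse:

```python
def upload_days(terabytes, link_mbps):
    """Days needed to push `terabytes` of data up a link of `link_mbps`."""
    bits = terabytes * 1e12 * 8          # total bits to move
    seconds = bits / (link_mbps * 1e6)   # at the stated line rate
    return seconds / 86400               # seconds -> days

# One Snowball's worth of data (50TB) over a 100 Mbps connection:
days_100mbps = upload_days(50, 100)   # roughly a month and a half
# Even on a dedicated 1 Gbps pipe it's the better part of a week:
days_1gbps = upload_days(50, 1000)
```

At 100 Mbps a single 50TB Snowball load works out to about 46 days of sustained transfer, which is why shipping an appliance – the classic "station wagon full of tapes" – still wins on bandwidth.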
Launching the Internet of Things
As mentioned earlier in this post, Amazon and Salesforce are both converging on the next-gen streaming/rules/heterogeneous data store cloud platform. But where Salesforce talked about customers as a way to talk about IoT, Amazon cut straight to the chase with IoT as a set of AWS services – native support for MQTT, and certificate management, for example. One of the most intriguing inventions was the introduction of “shadows” – cloudside models of physical devices in the world, maintaining state, which can also be programmed against. Having a virtual model of every physical device in the network is a very powerful notion.
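The shadow idea is simple to sketch: a cloudside document that tracks both the state the device last reported and the state applications desire, with the delta between them driving updates the next time the device connects. The class and field names below are illustrative only, not the actual AWS IoT shadow document schema.

```python
class DeviceShadow:
    """A cloudside stand-in for a physical device (illustrative sketch)."""

    def __init__(self, thing_name):
        self.thing_name = thing_name
        self.reported = {}   # state last reported by the device
        self.desired = {}    # state requested by applications

    def report(self, **state):
        # Called when the device phones home with its actual state
        self.reported.update(state)

    def desire(self, **state):
        # Called by applications programming against the shadow
        self.desired.update(state)

    def delta(self):
        """Fields where desired differs from reported, i.e. what the
        device should change the next time it connects."""
        return {k: v for k, v in self.desired.items()
                if self.reported.get(k) != v}
```

The power of the pattern is that applications program against the shadow even when the physical device is offline; the device reconciles the delta whenever it next appears.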
Leaving Las Vegas
There is probably plenty more to add, but in summary Amazon is in a very good place now, arguably pulling away from the chasing pack of cloud providers. The company is getting better at selling to the enterprise, and now has a number of new and compelling services to target those billions of dollars of enterprise spend. It is talking up hybrid, though it still has plenty of work to do in that regard. Meanwhile it continues to offer services that startups can and will take advantage of. Amazon is now a platform.