Last December we were facing a good problem to have: a growing customer base that meant we needed to upgrade our infrastructure architecture to speed up the deployment of new features. We pride ourselves on delivering working IoT applications faster than any other platform on the market, and the efficiency of our deployment process had to improve for us to keep meeting our time-to-launch requirements.
We were using a complex mix of Azure IaaS and PaaS services which resulted in a partially manual deployment process. This was further complicated by the fact that some of our customers used our hosting services while others deployed in their own cloud environment.
To solve our deployment growing pains we held a hackfest with Microsoft and Cloud Valley, a Microsoft partner. Some of the pain points we wanted to solve:
- Automation gaps in the build process.
- Complex deployment into existing Staging and Production environments as well as creation of new environments on customers’ Azure subscriptions.
- Challenges in deploying and integrating new Apache Storm topologies into the HDInsight cluster.
- Lack of automated testing.
The hackfest had six distinct steps:
Step 1: Value Stream Mapping
Value stream mapping is about illustrating the entire deployment process step by step, marking which portions are manual and which are automated, and noting the tools, owners, environments, and time required for each step. I admit I was skeptical about the value this would add, but it proved to be a critical step in identifying the core issues and prioritizing the things we could and should change.
Step 2: Defining a Development Environment
Our Staging environment was heavily used and deployed to once a day on average. We added another environment, named Development, which is used for integration tests so we can identify issues sooner and keep the Staging environment more stable.
Step 3: Infrastructure as Code (IaC) using Resource Manager templates
We started using Azure Resource Manager templates to manage deployments. The templates allowed us to automate processes that were previously manual, and they make it much simpler to deploy into customers’ Azure subscriptions. Since Resource Manager templates are JSON files, they can be committed to source control together with the code, which allows us to upgrade and roll back easily.
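As an illustration, a minimal Resource Manager template has the shape below; the storage account resource and parameter name are hypothetical examples for this post, not part of our actual deployment:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountName": { "type": "string" }
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "name": "[parameters('storageAccountName')]",
      "apiVersion": "2016-01-01",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "Storage",
      "properties": {}
    }
  ]
}
```

A template like this can then be deployed from PowerShell with `New-AzureRmResourceGroupDeployment -ResourceGroupName MyGroup -TemplateFile azuredeploy.json`, which makes the whole step scriptable and repeatable instead of a sequence of portal clicks.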
Step 4: Automating Storm topology submission
Our Windows clusters were running .NET Storm topologies that were deployed into the cluster manually and managed using the Storm Dashboard. This manual process had to be automated, and the hackfest team was able to create a three-step automation process using PowerShell scripts.
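The scripts themselves are specific to our topologies, but the flow can be sketched roughly as follows; every name, path, helper script, and endpoint below is illustrative rather than taken from the actual hackfest scripts:

```powershell
# Illustrative sketch only -- solution names, paths, the helper script, and the
# cluster URL are all hypothetical placeholders.

# 1. Build and package the .NET topology.
& msbuild .\StormTopology.sln /p:Configuration=Release

# 2. Copy the topology package to the cluster's head node.
Copy-Item .\bin\Release\* -Destination '\\headnode0\topologies\' -Recurse -Force

# 3. Submit the topology (hypothetical helper script) and confirm it is running
#    via the Storm UI REST API instead of clicking through the dashboard.
& .\Submit-Topology.ps1 -Name 'EventProcessing'
$cred = Get-Credential
Invoke-RestMethod -Uri 'https://mycluster.azurehdinsight.net/stormui/api/v1/topology/summary' -Credential $cred
```

Scripting the submission this way means a topology update can run as one unattended step in the pipeline rather than a manual dashboard session.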
Step 5: Automated builds
This was the fun part. We defined two types of automated builds: a continuous integration build and a nightly build.
The continuous integration build is triggered every time a commit is pushed to the Git integration branch, which happens frequently during development. Its steps include a Visual Studio build, running the unit tests with the NUnit framework, and publishing the build artifacts.
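For illustration, the same three steps the hosted build runs can be reproduced from a local PowerShell prompt; the solution and assembly names here are made up, and the exact tool paths depend on the build agent:

```powershell
# Mirror of the CI build steps -- names and paths are illustrative.

# 1. Compile the solution, as the Visual Studio build step does.
& msbuild .\MySolution.sln /p:Configuration=Release /verbosity:minimal

# 2. Run the NUnit unit tests against the built test assembly.
& nunit3-console .\Tests\bin\Release\MyTests.dll

# 3. Stage the build output that the publish step would upload as artifacts.
Copy-Item .\src\bin\Release\* -Destination .\artifacts\ -Recurse -Force
```

Keeping the build fully scriptable like this also means a failed hosted build can be diagnosed locally with the same commands.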
We also defined a nightly build that deploys to the new Development environment for integration testing.
Step 6: Release management
With continuous deployment in place, we needed to specify the release steps for each phase. We used the VSTS release management capabilities to define release steps both for delivery to the Development environment and for delivery to the Staging and Production environments. (The full details and steps we defined can be seen on the Microsoft technical blog.)
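One way the same Resource Manager template can serve the Development, Staging, and Production release steps is a parameter file per environment, which the release step passes to the deployment. The file below is a hypothetical example; the parameter names and values are not from our actual templates:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "environmentName": { "value": "Staging" },
    "clusterWorkerNodeCount": { "value": 4 }
  }
}
```

With one such file per environment, a release step only differs in which parameter file it supplies (for example, `-TemplateParameterFile azuredeploy.staging.json`), while the template itself stays identical across Development, Staging, and Production.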
The 3-day hackfest had a great outcome for us. We were able to optimize our delivery pipeline and are back to deploying apps for customers quickly, despite continuing to experience rapid growth.
For those of you interested in the full technical details, Microsoft wrote up the complete technical case study.