How we optimized our build and release pipeline in order to deploy live IoT applications faster

Last December we were facing a good problem to have: a growing number of customers. We needed to upgrade our infrastructure architecture so we could speed up deployment of new features. We pride ourselves on delivering working IoT applications faster than any other platform on the market, and the efficiency of our deployment process had to improve in order to keep meeting our time-to-launch requirements.

We were using a complex mix of Azure IaaS and PaaS services, which resulted in a partially manual deployment process. This was further complicated by the fact that some of our customers used our hosting services while others deployed in their own cloud environments.

To solve our deployment growing pains, we held a hackfest with Microsoft and Cloud Valley, a Microsoft partner. These were some of the pain points we wanted to solve:

  • Automation gaps in the build process.
  • Complex deployment into existing Staging and Production environments, as well as creation of new environments on customers’ Azure subscriptions.
  • Challenges in deploying and integrating new Apache Storm topologies into the HDInsight cluster.
  • Lack of automated testing.

The hackfest consisted of six distinct steps.

Step 1: Value Stream Mapping

Value stream mapping illustrates the entire deployment process step by step, marking which portions are manual and which are automated, and capturing the tools, owners, environments, and time involved in each step. I admit I was skeptical about the value this would add, but it proved to be a critical step in identifying the core issues and prioritizing the things we could and should change.

Step 2: Defining a Development Environment

Our Staging environment was heavily used and deployed to once a day on average. We added another environment, named Development, which is used for integration tests so we can identify issues sooner and keep the Staging environment more stable.

Step 3: Infrastructure as Code (IaC) using Resource Manager templates

We started using Azure Resource Manager templates to manage deployments. The templates allowed us to automate processes that were previously manual, and they make it much simpler to deploy into customers’ Azure subscriptions. Since Resource Manager templates are JSON files, they can be committed to source control together with the code, which allows us to upgrade and roll back easily.
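To give a flavor of what these templates look like (the resource, parameter name, and API version below are illustrative assumptions, not taken from our actual templates), a minimal Resource Manager template that parameterizes a storage account might be:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountName": {
      "type": "string"
    }
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "name": "[parameters('storageAccountName')]",
      "apiVersion": "2016-01-01",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "Storage"
    }
  ]
}
```

Because the template is declarative, deploying it into a customer’s subscription becomes a single PowerShell call such as New-AzureRmResourceGroupDeployment, with parameter values supplied per environment.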

Step 4: Automating Storm topology submission

Our Windows clusters were running .NET Storm topologies that were deployed into the cluster manually and managed using the Storm Dashboard. This manual process had to be automated. The hackfest team was able to create a three-step automation process using PowerShell scripts.
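Our actual scripts were PowerShell against HDInsight and used the SCP.NET tooling for .NET topologies; purely as a hedged sketch of the same three-step flow, here it is in terms of the generic Storm CLI, with every name (topology, package, host) hypothetical. The script only prints the commands it would run:

```shell
#!/bin/sh
# Hedged sketch of the three-step Storm topology rollout the hackfest
# automated (the real scripts were PowerShell). All names are hypothetical.

TOPOLOGY="telemetry-processor"          # hypothetical topology name
PACKAGE="out/telemetry-processor.jar"   # hypothetical build artifact
STORM="storm"                           # Storm CLI on the cluster

# Step 1: kill the running topology, waiting 30s for in-flight tuples to drain.
KILL_CMD="$STORM kill $TOPOLOGY -w 30"

# Step 2: copy the freshly built package to the cluster head node.
COPY_CMD="scp $PACKAGE headnode:/tmp/$TOPOLOGY.jar"

# Step 3: submit the new topology under the same name.
SUBMIT_CMD="$STORM jar /tmp/$TOPOLOGY.jar com.example.TelemetryTopology $TOPOLOGY"

# Dry run: print the commands instead of executing them.
printf '%s\n%s\n%s\n' "$KILL_CMD" "$COPY_CMD" "$SUBMIT_CMD"
```

Step 1 waits (`-w 30`) so in-flight tuples drain before the old topology is killed; chaining these steps in the pipeline is what removed the manual Storm Dashboard work.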

Step 5: Automated builds

This was the fun part. We defined two types of automated builds: a continuous integration build and a nightly build.

The continuous integration build is triggered every time a commit is pushed into the Git integration branch, which happens frequently during development. Build steps include a Visual Studio build, running unit tests with the NUnit framework, and publishing the build artifacts.
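At the time these builds were defined through the VSTS web interface; expressed in today’s Azure Pipelines YAML, the continuous integration build described above might look like the following sketch (the branch name and test-assembly pattern are assumptions):

```yaml
# Hypothetical azure-pipelines.yml mirroring the CI build described above.
trigger:
  - integration                    # assumed name of the Git integration branch

pool:
  vmImage: windows-latest

steps:
  - task: NuGetCommand@2           # restore packages before building
    inputs:
      restoreSolution: '**/*.sln'

  - task: VSBuild@1                # Visual Studio build of the solution
    inputs:
      solution: '**/*.sln'
      configuration: Release

  - task: VSTest@2                 # run the NUnit unit tests via the test adapter
    inputs:
      testAssemblyVer2: '**/*Tests*.dll'

  - task: PublishBuildArtifacts@1  # publish build output for release management
    inputs:
      pathToPublish: '$(Build.ArtifactStagingDirectory)'
      artifactName: drop
```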

We also defined a nightly build that deploys into the new Development environment for integration testing.

Step 6: Release management

Since we had defined continuous deployment, we needed to define the release steps for each phase. We used VSTS release management capabilities to define release steps for delivery to the Development environment as well as to the Staging and Production environments. (The full details and steps we defined can be seen on the Microsoft technical blog.)
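Release definitions were likewise configured in the VSTS portal; as a rough sketch only, the Development → Staging → Production flow could be modeled today as multi-stage YAML (the stage names and steps are placeholders, not our actual release definition):

```yaml
# Hypothetical multi-stage release sketch; each stage stands in for the
# deployment steps we defined in VSTS release management.
stages:
  - stage: Development
    jobs:
      - job: DeployDev
        steps:
          - script: echo "deploy build artifacts to the Development environment"
  - stage: Staging
    dependsOn: Development
    jobs:
      - job: DeployStaging
        steps:
          - script: echo "deploy to Staging after Development checks pass"
  - stage: Production
    dependsOn: Staging
    jobs:
      - job: DeployProd
        steps:
          - script: echo "deploy to Production, typically gated by an approval"
```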

The 3-day hackfest had a great outcome for us. We were able to optimize our delivery pipeline and are back to deploying apps for customers quickly, despite continuing to experience rapid growth.

For those of you interested in the full technical details, Microsoft wrote up the complete technical case study.