Over the last few years, cloud platforms have enjoyed spectacular success. They are not only a revolution in infrastructure management; they also bring automation, cost optimization and high availability. Cloud platforms are changing IT as a whole by promoting cultural shifts, such as DevOps, and new approaches to programming, such as serverless.
For these reasons, cloud migration is currently one of the major trends in IT. It is a trend that should only grow in coming years, according to the Cloud Migration Market Forecast published by Mordor Intelligence: “The cloud migration services market was valued at USD 119.13 billion in 2019 and is expected to reach USD 448.34 billion by 2025, at a Compound Annual Growth Rate (CAGR) of 28.89% over the forecast period 2020 – 2025.”
Migration is well covered by technically oriented resources – guides and how-tos. However, there is a significant lack of content about how to structure workflows and ensure efficient governance along the way. These organizational matters are critical to the success of any IT project. This article aims to provide some insight on this topic by sharing experience and lessons learned from the many cloud migration projects managed by Pentalog.
4 Principles for a successful cloud migration project
Pentalog applies different types of Agile frameworks, such as Scrum or Kanban, depending on the project's context. Furthermore, the DevOps collaboration flow can vary. For example, infrastructure engineers can be onboarded along with developers inside multi-disciplinary teams, or they can be grouped by specialty.
So even if we must adapt to different organizational contexts, we also need to ensure consistency. It is for this reason that we have standard and consistent approaches to kicking off, tracking, and validating progress. We can summarize these approaches under four principles:
Focus on teamwork
Plan – and re-plan
Refine the architecture, continuously
Assess quality, regularly
Principle #1: Focus on teamwork
Build one team out of all the teams and individuals involved in the migration
A migration project is often seen as an infrastructure-specific subject; therefore, a common mistake is to make it an operations-only matter. The rise of DevOps has shown that the best value emerges from multi-disciplinary collaboration. Furthermore, any infrastructure migration will impact, among other things, the code setup, security rules and data integrity.
It is crucial that every specialist be involved, so that the effort is conducted as a team with full awareness of every challenge, impediment, and step of progress – a team that works with a collaborative mindset, in the same direction and toward the same goals. In our experience, there is always a direct connection between a successful migration and the level of team coherence.
To achieve such an enlarged team, make sure that you:
Identify the people who should compose it. In the case of larger corporations with many teams, each team could be represented by a technical lead or “ambassador”.
Share a common backlog for the entire team.
Conduct short daily meetings that involve all members of the enlarged team.
Involve the enlarged team in regular organizational ceremonies, such as grooming, planning, reviews and retrospectives.
Ensure that the project is understood, achievable and vouched for by everyone
Clearly, the more motivated your teams are, the better and faster they will work! This is especially true in a context of changes that might be perceived as a threat. You should make sure that all the people involved in the project understand its purpose and its benefits.
First, involve all the people concerned, or at least, some representatives, early in the decision-making and architecting processes. Then, regularly verify that the project is understood and that your team is comfortable with achieving it. It is often helpful to provide mentoring and coaching so that the right level of skillsets and understanding is acquired to embrace and achieve the project.
Principle #2: Plan and re-plan
Fight against uncertainties
A migration effort always involves uncertainties, as new environments result in new behaviors. However, uncertainties are risks, and to ensure success, risks should be mitigated as much as possible.
The first piece of advice would be to aim for a step-by-step process that injects changes little by little, so that it is easier to control each risk that the changes introduce. Two main approaches are possible:
Start with a lift-and-shift, and re-architect iteratively to a cloud-native solution. This is especially advisable for monolithic applications.
Migrate services one by one onto the cloud. This approach is more suitable for SOA or microservices.
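The service-by-service approach can be sketched as a routing table that is flipped one entry at a time, so most of the system keeps running on the legacy platform while a single service is cut over. The service names and backend labels below are hypothetical illustrations, not part of any real platform API.

```python
# Sketch of a service-by-service migration: a routing table maps each
# service to its current backend, and services are cut over one at a time.
# Service names and backend labels are hypothetical illustrations.

LEGACY = "legacy"
CLOUD = "cloud"

routes = {
    "billing": LEGACY,
    "catalog": LEGACY,
    "search": LEGACY,
}

def migrate(service: str) -> None:
    """Cut a single service over to the cloud backend."""
    if service not in routes:
        raise KeyError(f"unknown service: {service}")
    routes[service] = CLOUD

def resolve(service: str) -> str:
    """Return the backend currently serving a service."""
    return routes[service]

# Migrate one service; the others keep running on the legacy platform,
# which limits the blast radius of each change.
migrate("search")
```

The point of this shape is that each cutover is a small, reversible change: if the migrated service misbehaves, the single entry can be flipped back without touching the rest of the system.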
Furthermore, any identified risk should be recorded in writing, along with a severity rating. Ideally, the entire list should be cleared before the final switch. Any remaining risks should include a mitigation plan.
Of course, there will always be risks and uncertainties, but full awareness of them and keeping them under control are necessary steps on the path to a successful migration. We have seen many projects where risks were exposed, then either forgotten or ignored. In most cases, they systematically turned into bugs and issues.
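A written risk register of the kind described above can be as simple as a list of records with a severity rating and a mitigation plan. This is a minimal sketch under assumed field names, not a prescribed schema; the sample risks are invented for illustration.

```python
# Minimal sketch of a written risk register with severity ratings;
# the field names and sample risks are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    severity: int          # e.g. 1 (minor) to 5 (blocking)
    mitigation: str = ""   # a remaining risk should include a mitigation plan
    resolved: bool = False

register = [
    Risk("DNS TTL may delay the final cutover", severity=3, resolved=True),
    Risk("Data sync lag between old and new databases", severity=5,
         mitigation="freeze writes during the switch window"),
]

def blocking(risks):
    """Risks that would block the final switch: unresolved and unmitigated."""
    return [r for r in risks if not r.resolved and not r.mitigation]
```

Before the final switch, the goal is for `blocking(register)` to be empty: every entry is either cleared or covered by a mitigation plan.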
Maintain and regularly revisit an iterative roadmap
No matter what framework is in use to organize the workflow, any IT project should be organized in small delivery chunks, so that it is easier to track progress and adjust to impediments. Whether or not you organize into sprints, you should have a roadmap of deadlines that are not more than a month apart. Each deadline should contain a clear chunk of the backlog to deliver, with realistic estimations based on the team velocity.
As much as possible, track all the features in the scope of work, estimate each of them, and organize them into delivery chunks. As you would in Scrum, revisit the backlog and the roadmap regularly. Update and re-estimate to make certain your timeline remains achievable and to account for any change in the organization. This is how you ensure you react early to any impediment, keep your team from excessive pressure, and avoid increasing the technical debt by forcing unachievable goals.
Principle #3: Refine the architecture – continuously
Ensure a permanent architecture governance
Cloud solutions provide new capabilities that can take time to tune properly. For example, auto-scaling is an awesome feature that sizes your infrastructure to the resource consumption required at any given moment. But such a capability can require some tweaking before it behaves in a timely and effective manner.
An inadequate approach can result in negative effects. For example:
Scaling takes too long (triggers use the wrong metrics, or instance setup time is too long), so your services become slow.
Your infrastructure is overprovisioned too often, which causes unnecessary expense.
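The trade-off behind these two failure modes can be sketched as a simple scaling decision function. The thresholds and capacity bounds below are hypothetical, not platform defaults; real platforms (AWS target tracking, for instance) evaluate metrics over configurable windows and apply cooldowns on top of this basic logic.

```python
# Sketch of a scale-out/scale-in decision from a single averaged CPU metric,
# illustrating why the metric and thresholds matter: a trigger that fires
# too late leaves the service slow, one that never scales in wastes money.
# Thresholds and bounds are hypothetical, not platform defaults.

def desired_capacity(current: int, cpu_pct: float,
                     high: float = 70.0, low: float = 30.0,
                     min_cap: int = 2, max_cap: int = 10) -> int:
    """Return the target instance count for the observed CPU percentage."""
    if cpu_pct > high and current < max_cap:
        return current + 1          # scale out before latency degrades
    if cpu_pct < low and current > min_cap:
        return current - 1          # scale in to avoid unnecessary expense
    return current
```

Tuning `high`, `low` and the metric itself (CPU, request latency, queue depth…) is exactly the kind of adjustment the Architecture Review Board described below is meant to revisit.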
In addition, the many services provided by the major cloud platforms (AWS, Azure, and so on), and the many ways they can interact, offer multiple approaches to address the same need. Sometimes it is advisable to explore several paths to find the optimal one.
Therefore, it is highly advisable to revisit your architecture frequently. At Pentalog, for example, we regularly organize Architecture Review Boards. This ceremony brings lead technicians and architects together to discuss the impediments encountered, review the technical debt, consider newly available or discovered technologies and, if required, refine the target architecture. Whenever possible, the team should build a POC to test and validate a specific solution.
Ensure you have a Solution Architect who masters the targeted technology
Even if your architecture should be dynamic and open, some changes can bring increased costs and longer cycles. Therefore, you should make sure that you have an architect with enough skills, knowledge and experience to advise on the most relevant and efficient approaches as early as possible.
Such specialists can be rare, especially when the project is complex. However, a good way to select the right profile is the certification level. Nowadays, cloud platforms such as AWS or Azure provide a wide range of skillset assessments through dedicated exams and tests. Ideally, anyone recruited to lead an architecting effort should have a Solution Architect certificate (or equivalent) along with certifications on specific relevant domains, such as Big Data or Machine Learning.
Of course, you should involve the architect early in the project to draft the first architecture and migration roadmap. Then ensure their regular presence – for example in the Architecture Review Board – in order to validate the architectural changes.
Principle #4: Assess quality – regularly
Unified Continuous Integration workflows
Cloud platforms offer full Infrastructure as Code capabilities. This offers multiple advantages:
Efficient change tracking, especially when combined with a version control system
Rollback in case of issues (or roll forward)
Safer deployments (blue-green, canary…)
Not only is full infrastructure as code a best practice that should be mandatory, but it also promotes collaboration with development teams – hence DevOps, using shared workflows such as Continuous Integration. In most cases, the infrastructure code will be stored in a dedicated repository and have its own validation pace. But for better consistency and a unified culture, it is strongly advised to use common tools and cycles throughout the project (for example, branching models, code reviews and pull requests, automated testing…).
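One of the advantages listed above – safer blue-green deployments with rollback – can be sketched as a tiny state machine: the new stack is provisioned alongside the old one, and traffic is switched (or switched back) in a single step. The class and stack names are illustrative assumptions, not a real platform API.

```python
# Sketch of the blue-green idea behind safer, code-driven deployments:
# two stacks exist side by side, and traffic is switched atomically.
# Names are illustrative only.

class BlueGreen:
    def __init__(self, live: str = "blue"):
        self.live = live        # stack currently receiving traffic
        self.previous = None    # stack kept around for rollback

    def deploy(self, target: str) -> None:
        """Switch traffic to the freshly provisioned stack."""
        self.previous = self.live
        self.live = target

    def rollback(self) -> None:
        """Point traffic back at the stack that was live before."""
        if self.previous is None:
            raise RuntimeError("nothing to roll back to")
        self.live, self.previous = self.previous, None
```

Because the old stack is still running after a deploy, a rollback is just a pointer flip rather than a re-deployment – which is why this pattern pairs so well with infrastructure as code.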
Track the technical debt
As on any development project, technical debt should be tracked regularly to ensure full visibility over the technical quality of the project. Where possible, use a tool such as SonarQube to provide an automated assessment – especially if you use the AWS Cloud Development Kit, which lets you write infrastructure code in a programming language such as TypeScript, Python or Java.
At Pentalog, we always write down any issue that could alter the quality of the technical outcome. We have a dedicated backlog that is regularly estimated, so that we have awareness and control over the technical debt.
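A dedicated, regularly estimated debt backlog like the one described above can be as simple as a list of items with effort estimates, aggregated into one number whose trend the team watches over time. The items and figures below are invented for illustration.

```python
# Sketch of a technical-debt backlog that is regularly re-estimated so the
# team keeps visibility over the total cost; items and figures are invented.

debt = [
    {"item": "hard-coded region in deployment scripts", "days": 1.0},
    {"item": "missing automated tests on migration scripts", "days": 3.0},
    {"item": "manual TLS certificate renewal", "days": 0.5},
]

def total_debt_days(backlog) -> float:
    """Aggregate estimate used to track the debt trend over time."""
    return sum(entry["days"] for entry in backlog)
```

Whether the total goes up or down between reviews is often more informative than the absolute figure itself.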
Assess your excellence level and always aim for improvement
You should always try to reach the level of technical excellence that fits your timing and financial capabilities. At Pentalog, we use a custom “Maturity Model” to regularly assess and improve a project's level of excellence. We define all the criteria and classify them into four categories, from ‘insufficient’ to ‘best state of practice’. The artifact is usually contextualized to each project to ensure it is properly scaled to its capabilities and business value.
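A four-level scale like the one described can be sketched as a mapping from an averaged criteria score to a category. Only the two outer category names come from the text above; the two intermediate labels and the equal-band thresholds are assumptions for illustration.

```python
# Sketch of a four-level maturity scale; the two intermediate labels and
# the equal-band thresholds are illustrative assumptions.

LEVELS = ["insufficient", "basic", "advanced", "best state of practice"]

def maturity(score: float) -> str:
    """Map an averaged criteria score in [0, 1] to a maturity level."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be between 0 and 1")
    # four equal bands; a score of exactly 1.0 falls in the top band
    return LEVELS[min(int(score * 4), 3)]
```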
Assessing your options to set up a Cloud migration project?
Let’s get in touch to discuss more.