With (1) discovery and assessment outcomes in hand, (2) the migration design finalized, and (3) planning completed, the execution phase put the migration plan into action – focusing on disciplined cutovers, issue resolution, and minimizing business disruption.
Execution Phase: Migration in Action
Objective: Execute the migration waves according to plan – including running the data migrations, performing cutover activities, and resolving issues – while minimizing impact on business operations.
Approach: We kicked off execution with the Pilot migration. For the pilot, we migrated a mix of early adopters from various departments. This allowed us to validate the end-to-end process and tooling on a small scale. The pilot was a success – all pilot users’ mail, files, and Teams content moved over a weekend, and only minor issues surfaced (like a few OneDrive files with very long pathnames failing to migrate), which we addressed by adjusting our scripts and guides. Pilot feedback also helped fine-tune our communications.
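The long-path failures are easy to screen for once you know to look. Here is a minimal sketch of the kind of pre-migration check we mean – an illustration, not our production script. It walks a local sync folder and flags files whose projected destination path would exceed a limit; the 400-character figure reflects SharePoint Online’s commonly documented cap on the full decoded URL path, and the root and prefix shown are hypothetical:

```python
import os

# SharePoint Online / OneDrive enforces a limit on the full decoded
# URL path (commonly documented as 400 characters); adjust as needed.
MAX_PATH_CHARS = 400

def find_long_paths(scan_root, site_prefix=""):
    """Yield (projected_length, local_path) for files whose
    post-migration path would exceed MAX_PATH_CHARS."""
    for dirpath, _dirnames, filenames in os.walk(scan_root):
        for name in filenames:
            local_path = os.path.join(dirpath, name)
            # Project the destination path: destination prefix plus
            # the file's path relative to the sync root.
            relative = os.path.relpath(local_path, scan_root)
            projected = len(site_prefix) + 1 + len(relative)
            if projected > MAX_PATH_CHARS:
                yield projected, local_path

if __name__ == "__main__":
    # Hypothetical sync root and destination prefix for illustration.
    root = r"C:\Users\jdoe\OneDrive"
    prefix = "/personal/jdoe_contoso_us/Documents"
    for length, path in find_long_paths(root, prefix):
        print(f"{length:4d}  {path}")
```

Running a scan like this per user before the Initial Sync turns a weekend surprise into a remediation item on the readiness checklist.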
For each wave, execution followed a well-drilled pattern:
Pre-sync: 2–3 weeks before cutover, we ran an Initial Sync for that wave’s users. This copied the bulk of their content in the background while users were still live on the source, leaving only a small delta of new and changed items to pick up at cutover (see the sketch after this list).
User readiness: The week before cutover, users in the upcoming wave did their UAT/rehearsal – logging into their new account on the new device and confirming they could send/receive email and see their content. We held daily checkpoints during this UAT period to capture any user issues. This step built user confidence and caught anomalies early.
Cutover weekend: On the Friday of cutover, we executed the cutover runbook. We effectively “froze” user activity from Friday 6:00 PM until Monday 6:00 AM – users were instructed not to use their old account during this window. Our team worked in shifts through the weekend, validating each workload’s progress. By Sunday we had completed final spot checks, and early Monday we handed over to the Hypercare team.
Post-cutover: On Monday morning, users began working in the target environment. We had the hypercare support bridge open to handle any issues immediately. Because of the thorough UAT, there were relatively few surprises. Common post-migration issues were minor: users needing to re-favorite documents, a few shared mailboxes that needed re-permissioning, and the like. We tracked these in a ticket log. By Thursday of the cutover week, we closed out the wave’s hypercare (provided all critical issues were resolved or handed off).
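The pre-sync pattern referenced in the list above is conceptually simple and worth sketching for readers who have not run a staged migration. This is our assumption of how such tools behave internally, not the vendor’s actual code: each pass copies only items that are new, or newer, on the source, so the delta left for the cutover weekend keeps shrinking.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Item:
    """A migratable object (message, file, chat) with a stable ID."""
    item_id: str
    modified: datetime

def delta(source: dict[str, Item], target: dict[str, Item]) -> list[Item]:
    """Return source items missing from the target or modified since
    the last sync. The Initial Sync passes an empty target map; each
    later pass copies only what changed in the interim."""
    return [
        item for item_id, item in source.items()
        if item_id not in target or item.modified > target[item_id].modified
    ]
```

At cutover, the same routine runs one final time against the frozen source – which is what makes a single-weekend window achievable even for mailboxes measured in tens of gigabytes.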
Throughout execution, we maintained rigorous status tracking. We updated a Migration Dashboard daily, showing metrics such as the percentage of mailboxes migrated, the volume of data moved, and any failed items. These reports were shared with stakeholders to provide transparency (and celebrate progress).
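The dashboard figures themselves were simple aggregations over per-mailbox status records. A minimal sketch, assuming a hypothetical export in which each record carries a status and a bytes_moved field:

```python
from collections import Counter

def wave_metrics(records):
    """Roll per-mailbox migration records up into dashboard figures.
    Assumes each record has 'status' ('completed', 'in_progress', or
    'failed') and 'bytes_moved'."""
    statuses = Counter(r["status"] for r in records)
    total = len(records)
    return {
        "pct_migrated": round(100 * statuses["completed"] / total, 1) if total else 0.0,
        "gb_moved": round(sum(r["bytes_moved"] for r in records) / 1e9, 2),
        "failed_items": statuses["failed"],
    }

# Example: three mailboxes, one still failing.
print(wave_metrics([
    {"status": "completed", "bytes_moved": 4_200_000_000},
    {"status": "completed", "bytes_moved": 1_100_000_000},
    {"status": "failed", "bytes_moved": 0},
]))
# -> {'pct_migrated': 66.7, 'gb_moved': 5.3, 'failed_items': 1}
```

Automating this from the tool’s daily export meant the dashboard update took minutes, not an analyst’s afternoon.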
Tooling notes: The migration tool proved to be a robust choice, handling our GCC High to GCC High scenario seamlessly. We did have to manage the tool’s performance carefully – we tuned the number of concurrent migrations to avoid hitting Microsoft 365 service throttling limits.
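Concurrency tuning boils down to capping in-flight migrations and backing off when the service signals throttling. The sketch below shows the general pattern under assumed names – migrate_one stands in for the tool’s per-mailbox call and ThrottledError for an HTTP 429 response; real migration tools expose this as a configuration setting rather than code:

```python
import asyncio
import random

class ThrottledError(Exception):
    """Stand-in for an HTTP 429 throttling response."""
    def __init__(self, retry_after: float):
        self.retry_after = retry_after

async def migrate_one(mailbox: str) -> None:
    """Placeholder for the tool's per-mailbox migration call."""
    await asyncio.sleep(0.1)

async def migrate_wave(mailboxes: list[str], max_concurrent: int = 8) -> None:
    """Cap in-flight migrations; on throttling, honor the suggested
    wait (plus jitter) and retry."""
    gate = asyncio.Semaphore(max_concurrent)

    async def run(mailbox: str) -> None:
        async with gate:
            while True:
                try:
                    await migrate_one(mailbox)
                    return
                except ThrottledError as exc:
                    await asyncio.sleep(exc.retry_after + random.uniform(0, 1))

    await asyncio.gather(*(run(m) for m in mailboxes))

asyncio.run(migrate_wave([f"user{i}@contoso.us" for i in range(40)]))
```

Lowering max_concurrent was our main lever: raw throughput drops, but so does the chance of service-side throttling stalling an entire wave overnight.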
Stakeholder Management: During execution, constant communication was essential. We sent a “Migration in Progress” email to IT stakeholders at the start and end of each cutover weekend, confirming whether things were on track or flagging any issues. When a wave was added (to accommodate additional users and some users deferred from earlier waves), we managed stakeholder expectations about the extended timeline, emphasizing that it was done to ensure quality and incorporated lessons from earlier waves. Stakeholders were pleased that we proactively added Wave 4 rather than cramming too much into Wave 3 – it showed we were flexible and risk-aware.
Outcome: We successfully migrated the entire in-scope user population on schedule. The business experienced no major downtime beyond planned cutover windows, and data integrity was maintained throughout. By the final wave, the process had become almost routine.
We delivered the intended outcomes: a unified M365 environment with all acquired users onboarded. As noted in our case study wrap-up, this project delivered a “secure, compliant GCCH-to-GCCH migration with data integrity maintained and minimal disruption to users, leveraging proven migration methodologies and toolsets.” In fact, user satisfaction was high; many commented that aside from the email domain change and new laptop, the transition felt “business as usual” – which is exactly what you want in an M&A IT integration.
Lesson: Execute with discipline but adapt as needed. Having a well-rehearsed runbook for each wave allowed the team to focus on issue resolution rather than figuring out steps on the fly. Phased execution proved its value – each wave got smoother through iterative improvements. For M&A migrations, prioritize quality over speed; it’s better to add an extra wave or take an extra weekend than to jeopardize user trust or business operations. Finally, keep stakeholders informed in real time during execution – nobody should be wondering how the migration is going; they should know it’s in good hands through your updates.