We upgraded to Exchange 2013 this week from Exchange 2007. We have wanted to upgrade almost since the day we installed Exchange 2007. Not because it was bad, but because we installed it only a few months before Exchange 2010 was RTM (long story for another blog post). We are also hoping that the greatly enhanced Outlook Web App will help with our volunteers and BYOD users.
The 2003-to-2007 migration was a big project, and I did it all myself last time. There were many unforeseen issues that took a lot of time to resolve, and it was not something I wanted to repeat without help. This time I enlisted the services of a friend and professional experienced with this kind of thing: Ed Buford, someone I have known for a while, who works for Pinnacle of Indiana.
I did what I could to cut down on the consulting costs: I upgraded our hosts to VMware 5, downloaded and installed Server 2012 Datacenter, updated that server, and downloaded Exchange 2013 CU2. Ed helped with the more technical stuff like prepping Active Directory, checking the configuration on Exchange 2007, and installing and configuring Exchange 2013 using best practices (better than mine, since Exchange 2013 is still very new and some things were kind of buggy).
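For reference, the Active Directory prep is run from the Exchange 2013 setup media in an elevated prompt. A minimal sketch, assuming Schema Admin and Enterprise Admin rights and an existing Exchange organization like ours:

```powershell
# Extend the AD schema for Exchange 2013:
.\Setup.exe /PrepareSchema /IAcceptExchangeServerLicenseTerms

# Prepare the existing Exchange organization:
.\Setup.exe /PrepareAD /IAcceptExchangeServerLicenseTerms

# Prepare every domain that holds mail-enabled objects:
.\Setup.exe /PrepareAllDomains /IAcceptExchangeServerLicenseTerms
```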
Then came the migration. We had planned to do this on a Sunday afternoon because that is our lightest day of the week and would have the least impact on our staff. Ed and I figured it would take 4 to 6 hours to transfer our 260 mailboxes. We were wrong. After transferring just my mailbox as a test (about 4 GB), we saw a problem: it transferred at only about 11 Mbps and took about an hour. The Exchange 2007 server was only pushing a few percent on CPU and network, but the Exchange 2013 server was almost maxing out its two vCPUs. I decided to move it to a different host and try again, this time with 4 vCPUs. For this batch we ran about 10 accounts and found that it would only transfer 4 to 6 at a time, with all 4 vCPUs maxed out. After that batch I shut down and added two more (this host only had 8 cores). The next batch of 50 mailboxes went a little faster, transferring 6 to 8 boxes at a time. We figured this was as good as it would get, queued up a few more batches, and called it a night.
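For anyone curious, the moves themselves are queued as move requests from the Exchange Management Shell. A minimal sketch of kicking off and watching a batch like ours (the database and batch names are placeholders, and your mailbox filter will differ):

```powershell
# Queue a batch of mailbox moves to the new Exchange 2013 database.
Get-Mailbox -Database "MBX2007" |
    Select-Object -First 50 |
    New-MoveRequest -TargetDatabase "MBX2013" -BatchName "SundayBatch2"

# Watch progress; only a handful of moves run at once, which matches
# the 4 to 8 concurrent transfers we saw.
Get-MoveRequest -BatchName "SundayBatch2" |
    Get-MoveRequestStatistics |
    Format-Table DisplayName, StatusDetail, TotalMailboxSize, PercentComplete
```

Worth noting: the Mailbox Replication Service also throttles concurrent moves on its own, so CPU isn't the only limit; more cores help, but only up to a point.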
In the morning we saw that the boxes that had moved were accessible by OWA and ActiveSync. Boxes that hadn't moved yet were only accessible by Outlook, since we had changed our Autodiscover settings and certificates. I queued up the last batch but saw that we had a problem: the mailboxes weren't moving. After some exploring we found that restarting the Exchange Transport service got things going again (the service claimed it was up the whole time). We figured it was a fluke and continued with the migration. By the end of the day, all mailboxes were transferred, ActiveSync devices were syncing, Outlook was connecting, and Outlook Web App was working great.
The next day I discovered that at some point in the night/morning we had stopped getting outside email. Our hosted Barracuda was saying it had delivered a lot of messages, but where did they go? We still had email routing through our old server, so I checked there and saw that we had 1,590 messages waiting to be delivered! We restarted the Exchange Transport service on Exchange 2013 and watched as all of the messages were delivered in about 15 seconds. After this we moved inbound email over to the new server and thought we were done. Nope. This would continue to happen again and again over the next few days, but now messages would spool on the Barracuda (thankful we use it for this reason!) and then deliver once we restarted the Transport service. Ed found that others on the web were experiencing this same issue, outlined here.
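Until we found the real cause, the workaround was simply to check the queues and bounce the service. A minimal sketch from the Exchange Management Shell (the server name is a placeholder):

```powershell
# Inspect the transport queues on the new server; messages piling up
# in a queue was our symptom.
Get-Queue -Server "EX2013" | Format-Table Identity, Status, MessageCount

# The service still reports Running even when mail has stopped flowing,
# so a restart is what actually clears the backlog.
Restart-Service MSExchangeTransport
```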
After deleting the old receive connectors as people in the above TechNet thread suggested, leaving only the default ones, it looks like the issue is gone. The problem is I NEED those connectors. So I'm going to add them back one by one and see what happens.
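To keep the re-add controlled, my plan is to recreate them from the shell one at a time. A minimal sketch, where the connector name, binding, and IP range are placeholders standing in for one of our application relay connectors:

```powershell
# See what's left after the cleanup, scoped to this server.
Get-ReceiveConnector -Server "EX2013" |
    Format-Table Name, Bindings, RemoteIPRanges

# Recreate one custom connector, then watch whether transport stalls again.
New-ReceiveConnector -Name "App Relay" -Server "EX2013" `
    -Usage Custom -TransportRole FrontendTransport `
    -Bindings 0.0.0.0:25 -RemoteIPRanges 192.168.1.0/24
```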
So, in summary, these are the things I learned about migrating to Exchange 2013:
- Communicate well with your users. I'm not sure you can over-communicate, but you want to get as close as you can. No matter what you do, there will still be those saying "Oh, was that today?". Since email would be down for some users during and after the transition, we set up a blog they could go to for updates and directions.
- Allow a lot of time. More than you think.
- Make sure your new Exchange server has plenty of processors, at least for the transition; you can drop a few after that. More processors = faster migration.
- Be prepared for something to go wrong. In our case, we already had outside help queued up. If you’re doing this solo you should definitely do some more tests before the big “Moving Day”.
- Plan for issues with certificates and namespaces if you're changing those in any way with your migration (you probably will); see the sketch after this list. Android devices seemed to have the most trouble with this, since they all handle things a little differently depending on their OS version and device model. iOS devices were pretty predictable: once we knew what change we had to make, they were all the same.
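On the namespace point, the server-side piece that has to line up with the new certificate is the Autodiscover URL plus the external URLs on the virtual directories. A minimal sketch (the server name and hostname are placeholders):

```powershell
# Point Autodiscover at the name that's on the new certificate.
Set-ClientAccessServer -Identity "EX2013" `
    -AutoDiscoverServiceInternalUri "https://mail.example.org/Autodiscover/Autodiscover.xml"

# Make the client-facing URLs match the same name.
Set-OwaVirtualDirectory -Identity "EX2013\owa (Default Web Site)" `
    -ExternalUrl "https://mail.example.org/owa"
Set-ActiveSyncVirtualDirectory -Identity "EX2013\Microsoft-Server-ActiveSync (Default Web Site)" `
    -ExternalUrl "https://mail.example.org/Microsoft-Server-ActiveSync"
```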
I’ll try to update this blog in the coming weeks as we get used to Exchange 2013 and also note any issues and how we resolved them.
Issues
- Users accessing our OWA site via HTTP are not being redirected to HTTPS. We have tried just about everything we can and it still won’t work. If you have figured this out, let me know! Please!
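In case spelling it out helps someone spot what we're missing, this is the commonly suggested IIS setup we've been trying: allow plain HTTP at the site root, redirect the root to /owa, and leave SSL required on the virtual directories underneath. A sketch with the WebAdministration module (clearly not working for us yet, so treat it as a starting point rather than a fix):

```powershell
Import-Module WebAdministration

# Clear "Require SSL" at the Default Web Site root so HTTP requests land.
Set-WebConfigurationProperty -PSPath 'IIS:\Sites\Default Web Site' `
    -Filter 'system.webServer/security/access' -Name sslFlags -Value 'None'

# Redirect root requests to /owa.
Set-WebConfigurationProperty -PSPath 'IIS:\Sites\Default Web Site' `
    -Filter 'system.webServer/httpRedirect' -Name enabled -Value $true
Set-WebConfigurationProperty -PSPath 'IIS:\Sites\Default Web Site' `
    -Filter 'system.webServer/httpRedirect' -Name destination -Value '/owa'
```

The usual gotcha is that the redirect setting inherits to every child virtual directory (owa, ecp, Autodiscover, and so on), so it has to be turned back off on each of those, with Require SSL re-enabled, followed by an iisreset.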