You are correct: if you know the size it used to be, you can probably recreate it without losing data. I don't see any way to find out how close to 500GB the original array is, though. All the online documentation assumes you already know the exact size; nothing shows how to find it if that information isn't around anymore.
I still have access to the drive contents, as they are undamaged. But I cannot boot without the correct settings in the RAID Controller.
Does anyone know if the Intel software keeps a log or record of the array specifications in a file on the server's hard drive? In an event log somewhere?
In the end, I was unable to recover the RAID array itself, but through some good luck and good fortune I was able to recover the server and get it back up and running (just now, one month later). I will write some notes here for others who might stumble into this mess like I did; they may be helpful, but use them at your own risk.
- I attached only the C: drive and its mirror (Disks A & B) in the same SATA slots as before, using the ORIGINAL disks (but after I'd backed them up using Paragon's Hard Disk Manager and confirmed they were good copies).
- In the Server's F2 Setup, I changed from RAID to SATA, removing the RAID feature dependency.
- Advanced Panel
- ATA Controller Configuration
- SATA Mode: [Enhanced]
- AHCI Mode: [Disabled]
- Configure SATA as RAID: [Disabled]
- Note: In this configuration, you can only see 4 of the 6 SATA ports now.
- I was then able to see the machine boot up to the Windows OS, although the boot took quite some time, and there were quite a few warnings and errors about hardware changes. Because of the hardware changes, Windows required me to re-activate the license (Activation), which in turn required that I be on the network.
- I was able to logon with my Domain Admins account (as usual), which updated itself (its password) from the previous month, and likely updated the machine account's secure channel password.
- Unfortunately, the system assigned the two attached disks as C: (Disk A) and D: (Disk B) (virtual mirrors of each other, of course) and booted from D: as the OS partition, while C: held a paging file. This of course would upset Exchange Server, since (a) the information stores were expected on D: and they aren't there, and (b) they are nowhere to be found.
- I could not change the drive letters, because the system depended on these two disks in their current configuration: I could not change "D:" since it was the System partition, and I could not change "C:" because it held a paging file.
- I tried editing the boot.ini file, but that proved to be a bad mistake, so don't try this as a means to boot from the other drive.
- I powered down, switched the disks around (switched SATA ports/cables), and restarted. Since I was now booting from Disk A (not Disk B as in the first go-round), Windows went through similar "hardware changes" and "reactivation" requirements. But I was successful in booting up.
- Then I couldn't logon anymore. Since I'm now booting to Disk A that had the "old machine account password", it could not contact the domain controller to allow me to logon with my Domain Admins account. I had to logon with the local Administrator account to access the system under this partition.
- On another machine, I went to a Domain Controller and "reset" the password for the computer account/machine account.
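For reference, the same machine-account reset can also be done from a command prompt on a Domain Controller instead of through the Active Directory Users and Computers GUI. The distinguished name below is a made-up example; substitute your own server's DN:

```shell
rem Reset the computer account from a DC (Windows Server 2003 DS command-line tools).
rem The DN here is an example only - run: dsquery computer -name YOURSERVER  to find yours.
dsmod computer "CN=MAILSRV,CN=Computers,DC=example,DC=local" -reset
```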
- I disabled the NIC on this Server, and then removed the machine from the domain while it could not contact the domain. It did accept my recent Domain Admins account password for this purpose, luckily. I disabled all of the Exchange Services so they wouldn't start until I was ready.
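If you'd rather disable the Exchange services from a command prompt than from the Services console, something like this should work (these are the standard Exchange 2003 service names; confirm yours first with "sc query"):

```shell
rem Set the main Exchange services to Disabled so they will not start on reboot.
rem Note: the space after "start=" is required by sc.
sc config MSExchangeSA start= disabled
sc config MSExchangeIS start= disabled
sc config MSExchangeMTA start= disabled
```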
- I then restarted the system and it came up cleanly. I logged on with the local Administrator account, re-enabled the NIC, then joined the domain again, which effectively reset the machine account as if the server had been 'recovered' from backups.
- I then confirmed that I was booting to C: (Disk A) and D: (Disk B). I removed the paging file from the D: drive (Computer Properties, Advanced, Performance Settings, Advanced, Virtual memory, Change). This required a restart.
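The paging-file removal can also be scripted with WMIC if you prefer the command line; I did it through the GUI, so treat this as an untested sketch:

```shell
rem Delete the paging file entry on D: so the drive letter can be freed up.
rem Verify the exact entry first with: wmic pagefileset list brief
wmic pagefileset where "name='D:\\pagefile.sys'" delete
```

A restart is still required afterwards, just as with the GUI method.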
- After restarting, I was able to change the drive letter from D: to "T:" or any other letter you like.
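For those who prefer the command line over Disk Management, diskpart can make the same letter change. The volume number below is an example; run "list volume" first to identify the right one:

```shell
rem Save these lines to a file (e.g. change.txt) and run: diskpart /s change.txt
rem "volume 1" is an example - check the "list volume" output for your D: volume.
list volume
select volume 1
assign letter=T
```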
- I could then plug in one of my mirrored drives from the "original D" (Disks C & D). I put Disk C in a USB kit/cradle, and when it powered on as a USB device, Windows assigned it drive letter D: (perfect).
- I could then enable/start the Exchange Services and they could now find what they wanted on my D: drive as before, and like magic, this Exchange Server was working again.
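Re-enabling and starting them from a command prompt looks like this (System Attendant first, then the Information Store; again, these are the standard Exchange 2003 service names):

```shell
rem Re-enable and start the core Exchange services in dependency order.
sc config MSExchangeSA start= auto
sc config MSExchangeIS start= auto
net start MSExchangeSA
net start MSExchangeIS
```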
- This now allows me to extract live copies of all my mailboxes AND my public folders in a more organized way. I can then delete the mailboxes (and remove them from Active Directory knowledge), then uninstall Exchange Server (and remove its data from Active Directory), and remove the server from the domain. Then I can recreate the server from scratch using the RAID features once again.
As a footnote, due to the length of the expected outage, I moved our mail services, as a disaster recovery operation, to Microsoft's Exchange Online service (part of Office 365). I was able to create a new mail system within about 1 hour, populate it with mailboxes in about one to two hours (~60 mailboxes), and make the necessary changes to have a functioning email system the next morning, so that inbound mail started arriving. I then used the "Kernel for Exchange Server" software to extract mailboxes from a copy of the original disks mounted in a USB cradle attached to a PC. It was able to extract all the mailboxes and Public Folders from the EDB files, and I could then manually move this data out of the resulting extracted PST files and into Exchange Online (about 45GB of mailbox data, uploaded one mailbox at a time; loading it all manually took one week).
It is regrettable that such a simple problem, accidentally erasing the RAID array configuration, should be so nearly impossible to recover from, but that seems to be the case. As you can imagine, I will be recording all of the parameters the next time I create a RAID array. This S5000VSA is far more difficult to work with than the S3200SH units.