Saturday, July 16, 2011

Interim Update notes


I installed the Exchange Server 2010 Service Pack 1 Update Rollup 4 Interim Update (KB2575734) this afternoon on all of our Exchange 2010 servers, and I have a couple of thoughts to share.
Regarding ANY Exchange patching:
  • Our Exchange 2010 servers run the Microsoft Forefront Protection for Exchange anti-virus software, which integrates with Exchange and creates dependencies on the Transport and Information Store services. You MUST disable Forefront prior to installing ANY Exchange patches, so it won't interfere with the patch installer's ability to stop and start Exchange services as needed, and then re-enable Forefront after patching. Run the FSCUTILITY command-line utility with either the "/disable" or "/enable" switch to perform those tasks; that command removes or restores the service dependencies.
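As a quick reference, the sequence around a patch window looks something like this (a sketch; the install path below is an assumption for illustration, so adjust it to wherever Forefront lives in your environment):

```powershell
# Before patching: remove Forefront's dependencies from the Transport and
# Information Store services so the installer can stop/start them freely.
cd "C:\Program Files (x86)\Microsoft Forefront Security\Exchange Server"  # assumed install path
.\fscutility /disable

# ...install the Exchange patch, rebooting if required...

# After patching: restore the service dependencies.
.\fscutility /enable
```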
Regarding the interim update:
  • This Interim Update fixes a bug that was recently discovered in Exchange Server 2010 SP1 Update Rollup 4 (http://blogs.technet.com/b/exchange/archive/2011/07/13/exchange-2010-sp1-ru4-removed-from-download-center.aspx), which can cause subfolders and messages to disappear when a folder is copied from one location to another within the Outlook client (all versions). The bug does not affect OWA or ActiveSync clients. The deleted items can be recovered from the dumpster, but most Outlook clients have the dumpster enabled only for the Deleted Items folder.
  • This Interim Update can only be applied to the same Exchange Server 2010 version and/or Rollup target for which it was intended.
  • Only one Interim Update can be installed on a server at a time. You must remove any other Exchange Server 2010 Interim Update before installing this one. If you require multiple Interim Update fixes, they must be requested as a single combined Interim Update package.
  • The Strong Name tool (sn.exe) must be present to configure the system to skip strong name verification before installing the Interim Update package.
  • Strong name signing verification must not be enabled while an Interim Update is present on the server, or some services may crash on startup.
  • This Interim Update package must be removed before installing any Exchange Server 2010 official rollup or alternate IU release.
Steps to apply the Interim Update:
1. Run sn.exe -Vr * to disable strong name verification.
2. Run sn.exe -Vl to verify that strong name verification is disabled.
3. Run the MSP file to install the Interim Update on the system.
Steps to remove the Interim Update (do not perform these steps until the next update rollup or service pack is ready to be installed):
1. Uninstall "Interim Update for Exchange Server 2010 (KB2575734)" from Add/Remove Programs.
2. Run sn.exe -Vu * to re-enable strong name verification.
3. Run sn.exe -Vl to verify that strong name verification is enabled.

Wednesday, June 22, 2011

Mailflow Cutover to Exchange 2010 SP1 (take 1)

Things were looking good. We started the change at 5pm. Our network guy modified the mail flow on the load balancers, then we tested OWA and ActiveSync pointing to both the 2007 and 2010 servers. We modified the external URLs on all the servers and CAS services, swapping the "owa" and "legacy" names as needed. We ran into a strange problem with proxying to our remote sites, but my consultant quickly located an article describing the fix, which worked like a charm. Everything seemed to be working as expected, so we called it a night.

Unrelated to the change, I was awakened at 4am because internet mail had stopped working. The first thing I checked was the Forefront services, and I found them stopped. I couldn't restart them because the WinHTTP Web Proxy Auto-Discovery service was disabled, on both servers. I set its startup type to Manual, then restarted the Transport service, which brought everything up again, restoring mail flow.


By the time I got into the office at 7am, reports had started trickling in that some phones weren't able to connect to Exchange. More investigation revealed a common thread: they were all Android phones. I still have to collect more data, because some Androids are working while others are not. Shortly after, some Mac clients reported being unable to connect. It seems to be those with older Entourage clients. My own Outlook 2011 client works fine, and supposedly the Entourage 2008 client with EWS support will also work. The desktop group is on the job, trying to identify which Macs are having issues and getting them upgraded.


The phones will be a problem. How do you talk several hundred computer-illiterate people through upgrading their phones to a compatible OS version? And to make matters worse, one of our first upgraded phones still won't connect to 2010 and proxy to a 2007 mailbox. It will connect to either 2007 or 2010 directly, but attempting to proxy from a 2010 CAS to a 2007 mailbox won't work.


Tomorrow I'll pull a report of the ActiveSync client information, in order to identify potential upgrade candidates, so we can come up with a plan to move forward.
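The report could be pulled with a one-liner along these lines (a sketch for Exchange 2010 SP1; the DeviceType, DeviceModel, DeviceOS, and DeviceUserAgent properties are what I expect Get-ActiveSyncDeviceStatistics to expose, so spot-check the output before relying on the CSV):

```powershell
# Pull device details for every mailbox with an ActiveSync partnership,
# then export to CSV for sorting by device type and OS version.
Get-CASMailbox -ResultSize Unlimited |
    Where-Object {$_.HasActiveSyncDevicePartnership} |
    ForEach-Object { Get-ActiveSyncDeviceStatistics -Mailbox $_.Identity } |
    Select-Object DeviceType, DeviceModel, DeviceOS, DeviceUserAgent, LastSuccessSync |
    Export-Csv C:\Temp\ActiveSyncDevices.csv -NoTypeInformation
```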

Tuesday, June 7, 2011

Powershell - Enable Domestic Mailboxes for Archiving in Exchange

Configuring mailboxes for archiving is a time-consuming task. Identity management was supposed to handle this automatically, but it does not yet seem to be reliable. After some time, you may discover that a large group of mailboxes is not yet enabled for archiving. The following steps will allow you to enable an entire group of mailboxes for archiving (on the Exchange side, anyway).
 
This particular example queries Exchange for all databases except those specifically excluded (see the WHERE clause), then retrieves all the remaining mailboxes which do not have a ManagedFolderMailboxPolicy configured (field is $null).
 
First, query Exchange for all domestic mailboxes which do not have a ManagedFolderMailboxPolicy set, and assign those objects to the variable $mbx. We do this so that we can use the variable (which contains objects) to pipe those objects to other Exchange commands.
 
$mbx = Get-MailboxDatabase -Server EXCHANGESERVER |
    where {$_.Name -ne "Germany Users" -and $_.Name -ne "Italy Users" -and $_.Name -ne "Sweden Users" -and $_.Name -ne "Canada Users"} |
    Get-Mailbox -ResultSize Unlimited |
    where {$_.ManagedFolderMailboxPolicy -eq $null}
 
Next, set the ManagedFolderMailboxPolicy on each of the mailbox objects.
 
$mbx | set-mailbox -ManagedFolderMailboxPolicy "Delete Anything Over One Year"
 
Next, add all the mailbox objects to the appropriate distribution group (used for provisioning by Enterprise Vault).
 
$mbx | Add-DistributionGroupMember -identity "Email Archiving Group"
 
Finally, pipe the mailbox objects to the Start-ManagedFolderAssistant command. This will force Exchange to create the Managed Folders in each user's mailbox, per the ManagedFolderMailboxPolicy.

$mbx | Start-ManagedFolderAssistant
 
That's it for the Exchange side. All that's left to do is to provision and enable the mailboxes for archiving in Enterprise Vault.
 
Mike
 
 

Thursday, March 17, 2011

@ExchServPro, 3/17/11 5:03 AM

How to Import PST Files into Mailboxes with Exchange 2010 SP1 bit.ly/gjrP5t

Tuesday, March 15, 2011

PSQuickies

if([datetime]::today -eq "3/15/11") {"Beware the ides of march" }

Monday, March 14, 2011

New Home for the Exchange Team Blog

The Exchange team blog was redesigned and moved under the Technet banner.

http://blogs.technet.com/b/exchange/

Sunday, February 6, 2011

How To Move A Storage Group and Database on a CCR Cluster

When I started doing research for this task, I found a lot of conflicting information. It took some time to get all the relevant information together, but the task was completed successfully, so I wanted to take the time to document exactly what I did, in the hopes that it may benefit someone else.


The steps detailed in this article were performed on an Exchange 2007 SP2 (w/ hotfix rollup 4) CCR cluster. My cluster node disks are configured as follows:


C: - System, local drive (RAID1)
D: - Logs, local drive (RAID1)
E: - Applications (Exchange), local drive (RAID1)
H: - Databases 1, SAN drive
I: - Databases 2, SAN drive
J: - Transaction Logs 1, SAN drive
K: - Transaction Logs 2, SAN drive


Our CCR cluster contains 25 databases, spread over the two database drives (H: and I:). In general, for each database on drive H:, its related system and transaction logs are stored on drive J:. For each database on drive I:, its related system and transaction logs are stored on drive K:.


The problem: Over time, the databases on drive H: have grown in size at a faster rate than the databases on drive I:. Although free disk space was not yet an issue, I wanted to take action now, to balance out the disk usage before it became a problem at a more inopportune time. The difference in free space between drives H: and I: was approximately 50 GB. In reviewing the sizes of our databases, I found that the Accounting database, at 22 GB, was the best candidate for the move. This database is in a storage group named "SG Accounting". I should also note that each storage group and database lives in its own subfolder. Therefore, the Accounting storage group lives in the "J:\SG Accounting" subfolder, while the Accounting database lives in the "H:\SG Accounting" subfolder.


The first thing I did was suspend the transaction log shipping between the CCR nodes for the SG Accounting storage group:


Suspend-StorageGroupCopy "MailboxServer\SG Accounting"


At this point, the database is still online and active. All we did was suspend log shipping for that storage group. All other storage groups are still replicating normally.


Next, I have to dismount the database, so the files can be moved. This will take the database offline, so be sure to schedule this task in the wee hours of the morning, when no one from Accounting is likely to be using their mailbox.


Dismount-Database "MailboxServer\SG Accounting\Accounting"


Although only the database drives have significant differences in free disk space, since we're moving the database from drive H: to drive I:, we'll also want to move the storage group system and transaction logs from drive J: to drive K:, in order to maintain the logical layout we described earlier.


To move the storage group system and log file paths, we'll use the following command:


Move-StorageGroupPath "MailboxServer\SG Accounting" -LogFolderPath "K:\SG Accounting" -SystemFolderPath "K:\SG Accounting" -ConfigurationOnly


Red Flag! - The above command only modifies the location of the SystemFolderPath and LogFolder in Active Directory. No files are actually moved on disk. You must manually move the files yourself - on both nodes of the CCR cluster. Personally, I just copy the files to the new location, in case I have to roll back to the old location.
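The manual copy can be done with robocopy (a sketch using the paths from this example; run it on each CCR node, and note that omitting the /MOVE switch leaves the originals in place for rollback):

```powershell
# Copy the storage group's system and transaction log files to the new
# location, including subfolders (/E) and preserving attributes and
# security (/COPYALL). Repeat on the other CCR node.
robocopy "J:\SG Accounting" "K:\SG Accounting" /E /COPYALL
```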


With the storage group files moved, it's time to move the database files. Again, the following command will only modify the location in Active Directory. You will have to move the files yourself.


Move-DatabasePath "MailboxServer\SG Accounting\Accounting" -EdbFilePath "I:\SG Accounting\Accounting.edb" -ConfigurationOnly


Don't forget to move the database files on both nodes of the cluster.


With the storage group files and the databases in their new locations, all that's left is to mount the database and turn log shipping back on.


Mount-Database "MailboxServer\SG Accounting\Accounting"


Resume-StorageGroupCopy "MailboxServer\SG Accounting"


That's it. You can check on the status of the log shipping with the following command:


Get-StorageGroupCopyStatus


In the output, look for "Healthy" under SummaryCopyStatus, and "0" under CopyQueueLength. Depending on how long the database was offline, it could take a few minutes for log shipping to catch up. On my cluster, copying a 22 GB database, search index files, and another 1 GB of transaction logs, the Accounting database was only offline for about 14 minutes. Since there was almost no activity on my system at the time I performed these steps, the databases were back in sync in less than a minute.
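If you want to watch just the one storage group catch up, the same cmdlet can be scoped to it (using the identity format from the earlier commands):

```powershell
# Check replication health for SG Accounting only.
Get-StorageGroupCopyStatus "MailboxServer\SG Accounting" |
    Format-List SummaryCopyStatus, CopyQueueLength, ReplayQueueLength
```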


The last step was to go back and delete the old database and storage group files. In the end, the database drives on my cluster nodes were within 5 GB of each other, which is less than a 1% difference. I know it won't last long, but I won't have to worry about disk space for a little while longer.


Thanks for stopping by!


Mike