You’d think completely losing your ConfigMgr Content Library (no backup) would be quite a dramatic event. I found that it isn’t that traumatic at all. There are only two key activities: the first is some brief file-system jiggery-pokery, and the second is that the network is going to get a bit of a hammering, as all content will need to be resent (not redistributed) out to the DPs to get ConfigMgr to put it back into the Content Library.
If you have a backup, restore that puppy and get out of jail totally. But with no backup, well, read on.
A while back I lost an HDD that was one of a pair of 2 TB disks in a lazily striped RAID set. I was presenting that disk as a shared disk to two Hyper-V VMs running SQL for my High Availability lab.
Losing that RAID set was a bit of a problem. As I said, it was presented as a shared disk to two VMs on the same Hyper-V host, and they both used it to host a clustered SMB share, to which I had moved the Content Library as part of the prep work for switching to HA.
My content library was literally gone!
It’s only a lab, so when things like this happen, hmmm, interesting, time to investigate!
So only recently I put another disk in to replace the faulted one and brought the clustered SMB share back to life. Obviously there was no content there, but the clustered SMB share was back online and writable.
While the Content Library is unavailable no new content can be added to ConfigMgr. Not the end of the world, but worth noting from an HA perspective.
I left the lab alone for a day, came back, and saw that ConfigMgr had attempted, unprompted, to distribute the built-in Configuration Manager Client package to the Content Library (CL), but had failed to complete the task.
It failed because ConfigMgr hadn’t fully recreated the CL’s top-level folder structure: it had created the DataLib folder, but failed to create the FileLib folder and stalled right there.
Here’s the textual transcript:
I created the FileLib and PkgLib folders myself and kept an eye on DistMgr, and magic began to happen. The client source was snapshotted into the CL, and it then went out to the DPs just fine.
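The manual fix really is just ensuring the three top-level Content Library folders exist at the root of the share. Here’s a minimal sketch of that check (the function name and approach are mine, not a ConfigMgr tool; point it at your CL root, e.g. the clustered SMB share):

```python
import os

# The three top-level folders ConfigMgr expects at the root of the
# Content Library. In my case DistMgr recreated DataLib on its own
# but stalled without FileLib, so I created FileLib and PkgLib by hand.
CL_FOLDERS = ("DataLib", "FileLib", "PkgLib")

def ensure_content_library_folders(cl_root):
    """Create any missing top-level Content Library folders.

    Returns the list of folder names that had to be created,
    so you can see which ones DistMgr didn't put back itself.
    """
    created = []
    for name in CL_FOLDERS:
        path = os.path.join(cl_root, name)
        if not os.path.isdir(path):
            os.makedirs(path)
            created.append(name)
    return created
```

Run it (or just do the same by hand in Explorer) against the CL root, then watch distmgr.log; once the folders are present DistMgr carries on by itself.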
From there I needed to find all the content I wanted back in the CL, in my own time, and perform an Update Distribution Points action on it. Content flowed from the source into the CL and then onto the DPs as expected.
In the ‘real world’ you’d just leave the content alone: it’s no doubt already on the DPs and clients can still retrieve it, and new content can be added now that the Content Library is back online. Content only really needs to be resent to the DPs if its source has changed, so the resending of content to the DPs to get it back into the CL doesn’t need to happen until that content has actually been iterated.
So, what can you take away from this? If you lose your Content Library and for some reason find yourself without a backup, the outcome isn’t that bad beyond some controllable network utilisation. As for recreating the folders, I’ve asked the PG to look at the code, as it recreates one folder (DataLib) but fails to create another (FileLib); I didn’t check whether it would also fail to create the last remaining folder (PkgLib). If they can make that piece of code consistent in folder creation, then recovery will focus solely on network utilisation, as long as you haven’t lost your content source as well!
In an ideal world ConfigMgr would recover from this rare and sloppily-managed (no backup) event without resending all that content to the DPs, so that the problem pivots solely around the content source and the Content Library alone. Perhaps that can be made to happen now, I’m not sure. Besides, in a lab this can be brushed off; in production, backups … are … everything.