The CMDB Needs To Die

The Configuration Management Database (CMDB) has a wretched reputation for overpromising and underdelivering. When it was first proposed in the early 21st century, the CMDB promised to let IT service managers anticipate problems and devise solutions by centralizing all the relevant information about the computing environment in one place. But current installations and offerings prove to be highly manual, riddled with inaccurate or incomplete information, and rarely deliver the insights and automations that would justify the high cost of installation and upkeep. Things have gotten so bad that I was laughed out of a strategy meeting with a prospective client (a Fortune 500 clothing retailer) for recommending a new CMDB to replace their three spreadsheets and home-grown request ticket tool!

Can the promise of the CMDB ever truly be realized? I believe it can, because it is already being done; just not by ITSM. Look to the “digital twin” concept promoted by Internet of Things (IoT) technology. A Digital Twin is a virtual model of a product or process, built from real-time data provided by the computerized components of the product or process itself. A ready example of a Digital Twin in action is the modern race car. Engineers in the pit can view telemetry from GPS, tire pressure sensors, g-force meters, oil temperature, fuel consumption, etc., and actively tweak settings to improve performance and win the day.

Digital Twin models are supposed to have the following characteristics:

  • Connectivity — information passes between the physical IoT item and the virtual IoT item so that changes in one are applied to the other.
  • Homogenization — there is no disconnect or knowledge gap between the information presented on the virtual IoT item and the physical IoT item.
  • Re-programmable — adjustments can be made to the physical IoT item that extend its useful life while it is still in the production environment.
  • Digital traces — logs and receipts are generated as both the physical and virtual IoT items progress through their operational lifecycle.
  • Modular — commonality between IoT items allows for one individual item to be replaced by another item of the same type without impacting the overall process or service it supports.

Any IT Asset or Configuration Manager will look at those characteristics and recognize the same capabilities within any decent CMDB product on the market. “Connectivity” is established by way of agent or agent-less discovery/management tools. “Homogenization” relates to the Asset Record and Configuration Item attributes populated by those discovery/management systems. Modern computers, servers (both physical and virtual), network gear, etc., can be reprogrammed with new software, patches, and updates by remote command. Best business practice requires historical data akin to “digital traces” to analyze, troubleshoot, and protect the business, user, and customer. And the “modular” nature of current computing technology means little to no “in field” repair is ever done – swap the old broken unit for a known good one and send the end-user on her way.

If they are so similar, how do we (as good IT Asset and Configuration Managers) restore the functionality, trustworthiness, and usefulness of our CMDBs? In my experience, the technology is fine; it is the execution of the processes that falls short. A relational database is a relational database, regardless of whether it stores “digital twin” or CI information. It is in how the data is recognized, gathered, and organized that we fail our CMDB.

I propose the following:

1) Do not rely on one source of information about the computing environment.

Digital Twins take information from sensors sprinkled throughout the physical device and merge it together. Our aforementioned race car would use GPS telemetry, accelerometers in the chassis, and transmission gear rotations to calculate speed, and it would display an error message to the pit crew should one, two, or all three begin returning conflicting information. What would that look like in a CMDB? Should a warning go up if a laptop login is detected by Active Directory and SCCM, but not by the IT security vulnerability scan?
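That cross-source warning could be a simple reconciliation rule. The sketch below is a hypothetical illustration, not any vendor's API: the source names and asset IDs are invented, and each "source" is just the set of asset IDs it last reported seeing.

```python
# Hypothetical sketch: cross-check an asset's presence across multiple
# discovery sources and flag disagreement, the way a Digital Twin flags
# conflicting sensor readings. All names here are invented examples.

def reconcile(asset_id, sources):
    """Return warnings for an asset that some sources see and others do not.

    `sources` maps a source name (e.g. "active_directory") to the set of
    asset IDs that source last reported.
    """
    seen_in = sorted(name for name, ids in sources.items() if asset_id in ids)
    missing_from = sorted(name for name, ids in sources.items() if asset_id not in ids)
    warnings = []
    if seen_in and missing_from:
        warnings.append(
            f"{asset_id}: reported by {', '.join(seen_in)} "
            f"but missing from {', '.join(missing_from)}"
        )
    return warnings

# The scenario from the text: a laptop known to AD and SCCM that the
# vulnerability scanner has never seen.
sources = {
    "active_directory": {"LAPTOP-042", "LAPTOP-007"},
    "sccm": {"LAPTOP-042", "LAPTOP-007"},
    "vuln_scan": {"LAPTOP-007"},
}
print(reconcile("LAPTOP-042", sources))
```

A real implementation would pull these sets from each tool's export or API on a schedule, but the core check is just this set comparison.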

2) Include “data maps” when modeling service management processes.

We can flow-chart and RACI process designs with the best of them. But how often do we model the data points each step of our service support processes produces? And if a particular data point does not populate the expected CI attribute, do we know how to troubleshoot what went wrong? Digital Twin models track both the item and what the item is doing in the supporting process. An oxygen sensor at the intake manifold is different from the oxygen sensor in the exhaust manifold, and the Digital Twin of our race car knows which is which and when to expect each sensor’s information. Our CMDB should also know that an asset reporting back an IP address and an end-user login is no longer sitting in inventory and, if an approved change request is also associated with that record, change the asset’s life-cycle flag accordingly.
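The life-cycle rule in that last sentence can be sketched as code. This is a minimal illustration under assumed field names (`ip_address`, `last_login`, `change_request`, `lifecycle`, and the `in_stock_review` exception state are all invented for the example, not taken from any CMDB product):

```python
# Hypothetical sketch of the rule described above: an asset in stock that
# starts reporting an IP address and an end-user login is clearly deployed.
# If an approved change request backs that up, advance the life-cycle flag;
# if not, park it in an invented "review" state so someone investigates.

def update_lifecycle(record):
    """Advance an asset record's life-cycle flag when telemetry warrants it."""
    has_telemetry = bool(record.get("ip_address")) and bool(record.get("last_login"))
    change_approved = record.get("change_request", {}).get("status") == "approved"

    if record.get("lifecycle") == "in_stock" and has_telemetry:
        if change_approved:
            record["lifecycle"] = "deployed"
        else:
            # Live telemetry with no approved change is an exception to review.
            record["lifecycle"] = "in_stock_review"
    return record

asset = {
    "asset_tag": "A-1001",
    "lifecycle": "in_stock",
    "ip_address": "10.0.4.17",
    "last_login": "jdoe",
    "change_request": {"id": "CHG-3321", "status": "approved"},
}
print(update_lifecycle(asset)["lifecycle"])  # prints: deployed
```

The point is not the five lines of logic but that the rule is written down at all: a data map tells you which process step should have produced each field, so a record stuck in the review state points directly at the step that failed.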

3) Automate record updates when appropriate.

If the pit crew chief observes worrisome pressure and temperature readings on a tire, she orders a pit stop and her crew replaces the entire wheel. She does not ask the driver to pull over and troubleshoot on the track. Nor does she expect her pit crew to stand there and troubleshoot the old wheel while it is still mounted on the race car. She trusts the Digital Twin to detect the new wheel assembly and baseline the telemetry from the new sensors, all while the driver gets back to the race. The same should be expected of our CMDBs. When the solution to an end-user’s complaint is “swap with like replacement,” why do we expect our Tier 1 and Tier 2 technicians to type out all of their troubleshooting steps before closing the ticket? Program the CMDB to automate the process, the record updates, the system notifications, etc., and let the end-user get back to productive work.
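A "swap with like replacement" automation boils down to one transaction across three records. The sketch below is an assumed, simplified illustration (the record shapes, life-cycle states, and function name are invented for this example, not drawn from any ticketing product):

```python
# Hypothetical sketch: when the resolution is "swap with like replacement,"
# update both asset records and close the ticket in one step, instead of
# having the technician narrate every action by hand. All field names and
# states are invented for illustration.

def swap_like_replacement(ticket, old_asset, new_asset, user):
    """Retire the failed asset, assign the spare to the user, close the ticket."""
    old_asset["lifecycle"] = "awaiting_repair"
    old_asset["assigned_to"] = None
    new_asset["lifecycle"] = "deployed"
    new_asset["assigned_to"] = user
    ticket["status"] = "closed"
    ticket["resolution"] = (
        f"Swapped {old_asset['asset_tag']} with like replacement "
        f"{new_asset['asset_tag']} for {user}"
    )
    return ticket, old_asset, new_asset

ticket = {"id": "INC-5150", "status": "open"}
broken = {"asset_tag": "LAPTOP-042", "lifecycle": "deployed", "assigned_to": "jdoe"}
spare = {"asset_tag": "LAPTOP-108", "lifecycle": "in_stock", "assigned_to": None}

ticket, broken, spare = swap_like_replacement(ticket, broken, spare, "jdoe")
print(ticket["resolution"])
```

The technician's only input is scanning two asset tags; the record updates, the resolution text, and any downstream notifications follow from there, just as the pit crew trusts the twin to baseline the new wheel.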

About Jeremy Boerger

Jeremy Boerger is the Owner and Founder of Boerger Consulting, LLC.