As more and more emphasis is placed on virtual and cloud servers, and less on physical ones, asset managers are hard-pressed to keep up with tracking this rapidly changing environment. As organizations move from all-physical to 50% or more virtual or cloud, we need a better way to manage and track these server instances. The more complex the situation, the harder it is to settle on the “best” way to track them. One size doesn’t fit all, so below I’ll walk through the pros and cons of each method.
1) We could track them like physical servers. There are a few reasons we might do this. Functionally, they’re just like physical servers: they have RAM, disk space, and an OS. They have an IP address; you can ping them, discover them, and install applications on them. You can assign software licenses to them as related assets, perform maintenance on them, and decommission them.
Great! So let’s track them like physical then! But wait a minute. They don’t have a physical location you can assign. I can’t go to a data center and see them. Depending on how your virtual server architecture is arranged, each virtual instance could be running on a different physical server every day, so you can’t even relate your virtual servers to their physical host. Add in cloud platforms like Azure, and now the virtual server isn’t even running on your own hardware. Additionally, if your organization is anything like mine, people are creating virtual servers to use for a few days and then shutting them down. Now I have an asset record created and discovered, but it’s turned off. Maybe forever? Who do I ask?
So hardware doesn’t quite fit. Hm. Well then, what if we…
2) Track them like software. If we track them like software, we can assign them to the virtual controller as sub-assets, or assign them to the primary contact as another user-licensed software title. This might work, especially if your IT department does chargebacks for each instance, because you could treat the virtual server as a monthly subscription or a service. You can still have related assets to this primary asset for any applications or server OS licenses assigned. It might make software normalization a little harder, but it could still work.
So they’re going to work like software then. Cool. Except…how do I know how much disk space I have? Or what software is actually INSTALLED vs. just listed as licensed? It’s almost like these things are software with other software installed on them. So what if we…
3) Do something altogether different. I like this one, because it seems to solve the problems we have with either asset class, but it also requires “reinventing” some processes. Could we have a CI and/or asset identified as virtual, with all the fields a physical server has, but also some of the fields that software records get? For that matter, should a virtual server be an asset, or just a CI? Whatever you decide, you should also update your server commissioning process. You’ll need the commissioning date, estimated duration of use, primary contact, and ownership. You’ll need a process to identify whether an instance is still in use, temporarily offline, or decommissioned. If you’re going to do chargebacks, ensure you have a process to identify who should be charged and how often. What does your IT Security group need in order to protect the data these contain? Ensure that you have a way to track licensing for any installed applications; in an audit situation, virtual servers are fair game!
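To make the hybrid idea concrete, here is a minimal sketch of what such a record might look like as a data model. This is purely illustrative, not any particular CMDB’s schema; all field and class names (`VirtualServerRecord`, `license_gaps`, the status values, and so on) are assumptions chosen to mirror the fields discussed above: hardware-style attributes, software-style license lists, and the commissioning/lifecycle fields.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum
from typing import Optional


class InstanceStatus(Enum):
    """Lifecycle states the commissioning process needs to distinguish."""
    IN_USE = "in use"
    TEMPORARILY_OFFLINE = "temporarily offline"
    DECOMMISSIONED = "decommissioned"


@dataclass
class VirtualServerRecord:
    """Hybrid record: hardware-style fields plus software-style fields."""
    # Hardware-style fields (what a physical server record carries)
    hostname: str
    ip_address: str
    ram_gb: int
    disk_gb: int
    os_name: str
    # Software-style fields (ownership and licensing)
    primary_contact: str
    licensed_titles: list[str] = field(default_factory=list)
    installed_titles: list[str] = field(default_factory=list)
    # Commissioning-process fields from the discussion above
    commissioning_date: Optional[date] = None
    estimated_use_days: Optional[int] = None
    status: InstanceStatus = InstanceStatus.IN_USE
    chargeback_code: Optional[str] = None  # who gets billed, if doing chargebacks

    def license_gaps(self) -> list[str]:
        """Installed titles with no matching license — the audit exposure."""
        return [t for t in self.installed_titles if t not in self.licensed_titles]


# Example: a short-lived instance whose installed software has outrun its licenses
vm = VirtualServerRecord(
    hostname="vm-temp-01",
    ip_address="10.0.0.5",
    ram_gb=8,
    disk_gb=100,
    os_name="Windows Server 2022",
    primary_contact="jdoe",
    licensed_titles=["SQL Server"],
    installed_titles=["SQL Server", "Tableau"],
)
print(vm.license_gaps())  # anything here is a compliance risk in an audit
```

The point of the sketch is the shape, not the names: a single record that discovery can populate (RAM, disk, IP) while the commissioning process fills in the ownership, duration, and chargeback fields that discovery can’t answer on its own.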
Each organization functions differently and has different priorities for tracking. License compliance and security should be at the top of the list, and whatever method you choose should support these critical functions. If you already have a solution to this dilemma, consider yourself ahead of the game! If you’re still trying to get a handle on this, hopefully these considerations lead you to a solution that works for you and your organization.