Planning for Change: The Only Constant in IT Asset Management

Summary: Just when you think you have everything settled and running smoothly in your asset management program, something changes: deployments alter your inventory counts, new software titles are introduced, and upgrades keep coming; manufacturers change license models – including transitioning to the cloud – and policies require updates for changing circumstances. Everything is subject to change. This session will highlight areas to watch, suggest reaction paths, and, better yet, offer ways to prepare proactively to minimize surprises. As always, bring your experiences and join the conversation!

Somewhere around 1987, a company named DTSS Incorporated started working on a tool named SofStore. Based on an engine and database running on an IBM VM/CMS platform, SofStore was designed to be the first tool to provision PC-based software – sell, install, upgrade, whatever was needed.

To make it work, we needed one little utility, something to tell the process whether there was enough disk space to install the software, enough memory to run it, or, if an older version of the application was already installed, to determine whether a new purchase or an upgrade was appropriate. In short, we needed an inventory.

That was the genesis of PCCensus, the first hardware inventory tool, shipped in November 1989. The next year saw the incorporation of Tally Systems, and somewhere around the end of 1991 or the beginning of 1992 we added software inventory – we finally had the complete “utility” that SofStore needed to work. In case you were wondering, we killed SofStore in 1990; at that time, inventory was the business.

At that point, as best as my co-founder of Tally Systems, Tom Cecere, can recall, the Software Publishers Association was still a couple of years away from conducting their first software audit. It would be several more years before Gartner held an asset management show, and 2002 before IAITAM was founded. So there were no SAM programs, or at best they were in their infancies, never mind investing 3-5% of your software spend on a SAM program as has been suggested in some industry writing.

That’s a timeline for fifteen years, arguably the first half of the thirty-year history of IT asset management. Just to be fair and not totally focused on Tally Systems, and with great pains taken not to play favorites, let’s add a few more relevant industry dates:
• Sassafras Software developed KeyServer in 1989
• LANDESK introduced the desktop management category in 1993
• Microsoft Systems Management Server 1.0 (predecessor to SCCM) released in 1994
• Snow Software founded in 1997
• Novell ZENworks version 1 shipped in 1998
• VMware Workstation version 1 released in 1999
• Aspera GmbH founded in 2000
• ISO/IEC 19770-1 first edition published in 2006

So much more has happened in the second fifteen years of the brief history of IT asset management. One reading of Moore’s Law states that “overall processing power for computers will double every two years,” and it can be argued that the same rate of change has been observed in ITAM, including such introductions as:
• Mobile devices, and their extension into BYO-Device
• Core-based and sub-capacity licensing
• HIPAA and Sarbanes-Oxley and their impact on business operations
• Virtualization of everything from individual devices to entire datacenters and applications
• The Cloud

Change comes at the IT Asset Manager from many directions. For the purposes of this discussion, I’ll break the changes up into three areas: the desktop, the datacenter, and the corporation, in what I would suggest is the order of increasing challenge. We’ll look at the types of change that can be observed in each of those areas and then consider ways to prepare for them.

Change on the Desktop – Everything Used to be So Simple

In “the beginning” with Tally Systems, the desktop really was where all the action lay. When someone asked us if the inventory tool would work with servers, our position was that servers really were just expensive desktops.

Clearly there have been many changes in software licensing for the desktop from the days when everything was based on a “per-installation” model. First there were nuanced per-installation models like per-user or per-named-user and per-machine or per-named-machine. There was a period when metered usage was a big deal, and here I mean active control of a fixed number of licenses, not software usage monitoring. But then interest in metering died off, and then it came back; it’s been somewhat cyclic over the last twenty years.

Virtualization kicked in more recently. Instead of installing an application on each device where it was needed, we gained the option to package the application one time and then distribute it as a monolithic object; we thought that a single file to install or uninstall was as easy as it could get. Then came streaming virtualized applications, and we didn’t even have to install anything! And, with the right products, we could create and host the streaming process entirely on our own, without tapping into resources outside of the corporation. Finally, we now have entire desktops that are virtualized; we can encapsulate the entire device environment and send it down.

Until somewhere in the middle stages of virtualizing aspects of the desktop, most of software asset management still came down to counting things. A good inventory / license management tool would have counted the devices and the installed software instances and would have included reconciliation for the slightly more complicated per-user and per-machine variants, as well as the virtualized apps. That good license management tool should also have gathered software usage data, which is essential for analyzing application usage patterns and promoting recovery of unused licenses.
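For illustration only, here is a minimal sketch of that counting-and-reconciliation idea in Python – installed instances compared against entitlements, with stale usage flagged for license recovery. The record layouts, title names, and 90-day threshold are hypothetical stand-ins for whatever your own inventory and license management tool exports.

```python
# Minimal reconciliation sketch: installed counts vs. entitlements, plus
# unused installs flagged for recovery. All field names and data are
# hypothetical examples, not output from any particular tool.
from collections import Counter
from datetime import date, timedelta

installs = [
    {"title": "AcmeCAD", "device": "PC-0012", "last_used": date(2015, 6, 1)},
    {"title": "AcmeCAD", "device": "PC-0044", "last_used": None},
    {"title": "WidgetPro", "device": "PC-0044", "last_used": date(2015, 8, 10)},
]
entitlements = {"AcmeCAD": 1, "WidgetPro": 5}

# Count installed instances per title and compare against what was purchased.
installed_counts = Counter(rec["title"] for rec in installs)
for title, count in installed_counts.items():
    owned = entitlements.get(title, 0)
    status = "OK" if count <= owned else f"shortfall of {count - owned}"
    print(f"{title}: installed={count}, entitled={owned} -> {status}")

# Flag installs with no recorded use in the last 90 days as reclaim candidates.
cutoff = date(2015, 9, 1) - timedelta(days=90)
for rec in installs:
    if rec["last_used"] is None or rec["last_used"] < cutoff:
        print(f"Reclaim candidate: {rec['title']} on {rec['device']}")
```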

Usage data became even more important with streaming virtualized apps, since there’s nothing installed on the device for the inventory tool to lock onto. The move to the VDI model makes the measurements even more complicated: at the end of the user session, the instances of software are gone – back into the generic instance that will be distributed fresh to a user the next day.

And then along came the Cloud, streaming virtual applications packaged and delivered by the licensor, with or without a physical presence on the desktop device. Personally, when some apps started moving into the Cloud I really wondered if the economics made sense. I can certainly see the advantage from the vendor’s maintenance side of things to be able to roll in bug fixes as often as necessary without having to push update packages. And I have to assume that the accountants for Microsoft and Adobe and others made a strong case for the revenue model. I can’t imagine that we’ll see any backing away from the trend – more likely an acceleration into other applications. That will put more stress on the corporate network infrastructure, and that will have to be included in planning.

Change in the Datacenter – Request Permission to Treat the Licensor as Hostile

It’s clear that licensing in the datacenter is hideously complex, with schemes that are nothing less than devious. And I don’t think anyone familiar with the datacenter would blame management for the opinion that the software vendors are out to get you. How did we get to this point?

Characterizing servers as “just expensive desktops” twenty-odd years ago may have been a little unfair, but not that far off the mark. Yes, they had specialized operating systems and performed some processes – storage, email, printing – that were very important, but licensing software for servers wasn’t that far away from the desktop per-installation models.

Client Access Licenses (CALs) began to change all that. Conceptually, they started off pretty simple; you have a desktop that needs to connect to the network for services? You need a CAL. You have an app that connects with Microsoft SQL Server? You need a SQL Server CAL. But right from the get-go CALs were a little bit squirrelly. They existed purely as a paper license – you couldn’t inventory them directly. With a little cleverness you could “refocus” the data in your inventory enough to calculate a “close-enough” CAL position for desktops.
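To show what I mean by “refocusing,” here is a back-of-the-envelope sketch of a per-device CAL estimate. The field names are hypothetical and real CAL rules are far more nuanced; the point is simply that each distinct device touching a licensed server service counts once.

```python
# Rough "close-enough" per-device CAL estimate from inventory data.
# The records and field names below are hypothetical examples.
inventory = [
    {"device": "PC-0012", "services_used": {"file", "print"}},
    {"device": "PC-0044", "services_used": {"file"}},
    {"device": "PC-0107", "services_used": set()},  # standalone, touches no server services
]

# One CAL per distinct device that consumes any licensed server service.
device_cals = sum(1 for rec in inventory if rec["services_used"])
print(f"Estimated device CALs required: {device_cals}")
```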

CALs also seemed to be open for some interpretation on the part of Microsoft. The old TS.Census architecture was classic client-server with a client installed on each device to be inventoried. That client had no idea of where the database lived on the backend, nor whether that database was MS SQL, Oracle, or the built-in database for small customers. But there was a point in time when Microsoft changed their rules of engagement for MS SQL CALs and said that if you didn’t want to do server-based licensing, then you needed to purchase a CAL for every device in the enterprise from which inventory data was being collected. Not just the infrastructure nodes that talked to the database server – every device. Their reasoning was that since there was a program that generated data that – after middle-tier processing – was eventually loaded into the SQL database, then the endpoint devices required CALs. A lot of customers switched to server-based licensing after that.

That’s an old story, but somewhat representative of the changes that the industry has seen when it comes to server-based software licensing. Fifteen years ago, whether you installed a server app on a box with an Intel processor or an AMD chipset was not a problem. Now, if a datacenter architect sees an “opportunity” to make a process run faster by moving it over to a different server, that innocent move could cost you a lot of money. Only want to run that virtualized Oracle environment on some of the cores of that spiffy new server? Well, Oracle would like to talk to you about sub-capacity licensing, because they know you’re dishonest and will really run that process on the full set of cores as soon as you have a chance.
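The arithmetic behind that kind of surprise is easy to sketch. To be clear, the core factors and pack size below are invented for illustration and are not any vendor’s actual terms.

```python
# Illustrative core-based licensing arithmetic only; the core factors and
# the two-cores-per-license pack size are made-up numbers, not real terms.
import math

def licenses_needed(physical_cores: int, core_factor: float, cores_per_license: int = 2) -> int:
    """Round the factored core count up to whole licenses."""
    factored_cores = physical_cores * core_factor
    return math.ceil(factored_cores / cores_per_license)

# Moving a workload from an 8-core host (factor 0.5) to a 32-core host (factor 1.0)
before = licenses_needed(8, 0.5)    # 2 licenses
after = licenses_needed(32, 1.0)    # 16 licenses
print(f"Before the move: {before} licenses; after: {after}")
```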

Also consider configurations where load balancing is automated – additional servers can be “spun up” virtually to cater to increased demand. Your licensing footprint is being changed dynamically by that process. You need to be sure that you have licenses to cover the upper edge of the envelope.
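A running tally like the sketch below – fed from your hypervisor or cloud platform’s scaling logs, shown here as a hypothetical event list – will at least tell you where that upper edge has been.

```python
# Sketch of tracking the peak licensing footprint across auto-scaling events.
# The event list is hypothetical; real data would come from scaling logs.
events = [
    ("2015-09-01T08:00", +8),   # baseline servers online (licensed cores)
    ("2015-09-01T12:30", +16),  # two more VMs spun up for peak demand
    ("2015-09-01T18:00", -16),  # scaled back down
]

running = peak = 0
for timestamp, delta in events:
    running += delta
    peak = max(peak, running)

print(f"Peak concurrent licensed cores: {peak}")  # license to this upper edge
```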

Some of the changes could be a reaction to the increased sophistication of datacenter technology. Concepts such as clustering and moving workloads between devices (physical or virtual) have tended to reduce the number of licenses customers need to buy. In much the same way that increased attention to recycling licenses on the desktop caused software vendors to institute license clauses such as quarantine periods, datacenter vendors have moved to protect their revenue streams through creative new licensing calculations.

Change in the Corporation – You Mean I Have to Deal with People?!

Earlier in this article I put forward the premise that change generated through the organization can present a greater challenge to the IT Asset Manager than changes seen on the desktop or in the datacenter. For both of those segments, change comes mainly through measurable changes to software or hardware. The challenge at the corporate level is that you are dealing with people and decision making.

Changes that originate with people permeate the enterprise from the bottom to the top of the organizational chart. At the desktop level, even the most finely tuned IT department can lose track of hardware if users move devices on their own. With respect to software, despite the overwhelming consensus for locking down machines so that end users can’t perform installations, there are still plenty of companies out there dealing with software purchased outside of the IT or procurement stream and self-installed. The first the ITAM team knows about these items is from a careful read of inventory reports. These instances may not appear important by themselves, but in an audit every instance of every title becomes relevant.

We already looked at the unintended consequences of apparently subtle changes in datacenter configurations that can have serious impacts. Events that are directly caused by human interaction – such as adding another host to a cluster – are more straightforward, and a reevaluation of licensing requirements can be triggered by internal procedures. Equally important are changes staff make to automated processes, which can have similar effects on licensing. Everything needs to be double-checked.

Consider the acquisition of a company whose inventory, licenses and users will be folded into a system that is working well. Regardless of how well the incoming organization manages their IT assets, there inevitably will be changes required to meld the systems. Consider the arrival of a new CxO in a position to have influence on the ITAM process. You probably can’t get away with just saying “this is the way we do things here.”

Those of you who have been following my discussion carefully are probably wondering why I haven’t discussed one of the more recent challenges to confront our industry – mobile device management (MDM). Certainly for the primary toolset I support, MDM fits nicely into the inventory discussion, but only to a degree: the hardware and software. Ahead of those, though, are two factors that – for me – pushed MDM into the corporate section:
• Data security – I think there’s general agreement this is the greatest concern with mobile devices in the workplace. It’s far too easy for employees to lose track of their powerful little phones and tablets (or laptops as well) that are full of sensitive corporate data. It needs to be a matter of corporate policy as to how this will be handled.
• Personal devices – if employees are allowed to bring their own devices, they still want access to corporate data. Accordingly, they still need to be subject to corporate policies on their personal devices, such as full data wipes if they misplace their device.

Together, these two areas have presented one of the largest changes in the corporate IT environment in the last 10 years. Mobile devices continue to evolve at an amazing rate, and we can expect to have to deal with change in this area for a long time to come.

What’s a Poor IT Asset Manager to Do? – Embrace the Change

If change will not be your friend, it certainly will be your constant companion in IT Asset Management. Now is the time to start preparing for the changes ahead – whatever they may turn out to be. We’ll look back at our three segments and discuss some options.

For the desktop, most of the changes you’ll see can be revealed through the use of the right inventory tool. More and more, your chosen tool will need not only to find devices in the environment and detect the hardware and software associated with those devices, but also to measure and report on application usage. This will be increasingly important for software hosted offsite, a.k.a. the Cloud.

When I took the IAITAM Certified Software Asset Manager (CSAM) course, one of my thoughts was “Wow, we’re spending a lot of time on policies and processes.” The more I look at that training in the context of the enterprise, the more I agree with that focus, and the desktop is a prime starting place. With a well-implemented moves-adds-changes (MAC) program, reacting to changes in the desktop environment will be a matter of process. Working hand-in-hand with policies on procurement, the IT Asset Manager will know where all the hardware is, and what software is installed or in use through external services. The inventory tool and the MAC program working in concert provide a continuously updated foundation for much of the rest of the ITAM process.

Integrated into the MAC process is the recovery of assets that are either being retired or reallocated from one situation to another. This is especially important for the recovery of software licenses so they may be added back into the available license pool. Build processes to find and deal with zombie devices and zombie software.
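The check itself can start out very simple, as in the sketch below, which flags devices that have stopped reporting inventory but still hold licensed software. The field names and the 60-day threshold are hypothetical; your own tool’s export and retirement policy would drive the real version.

```python
# Sketch of spotting "zombie" devices: machines that have stopped reporting
# inventory but still hold licensed software. Data and thresholds are
# hypothetical examples.
from datetime import date, timedelta

devices = [
    {"name": "PC-0012", "last_inventory": date(2015, 9, 10), "titles": ["AcmeCAD"]},
    {"name": "PC-0099", "last_inventory": date(2015, 3, 2), "titles": ["AcmeCAD", "WidgetPro"]},
]

cutoff = date(2015, 9, 15) - timedelta(days=60)
for dev in devices:
    if dev["last_inventory"] < cutoff:
        print(f"Zombie device {dev['name']}: reclaim {', '.join(dev['titles'])}")
```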

Moving over towards the datacenter, the data returned by your inventory tool (or tools) will be even more important. As humans, we are good at looking at data and easily recognizing broad changes. But a very small, subtle change is harder to eyeball, and such a change could have a dramatic effect on licensing costs; in the datacenter, it can often be all about the little, nearly invisible changes.

This is a situation where the data requirements really need to dictate the tool you use; indeed, to gather everything you need to adequately monitor your datacenter, you may require multiple tools. Data needs to be gathered from physical and virtual devices, in production and the disaster recovery environment; all contribute to the licensing picture. Available reporting must be able to call attention to changes so that you can’t help but see them.
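One low-tech way to make the small changes impossible to miss is to diff successive snapshots of the attributes that drive licensing. The sketch below assumes a simplified snapshot structure – host name mapped to a handful of attributes – where a real report would be driven from your inventory tool’s database.

```python
# Minimal snapshot diff to surface small datacenter changes between two
# inventory runs. The snapshot structure and attributes are hypothetical.
yesterday = {
    "dbhost01": {"cores": 16, "cluster": "prod-a", "sql_edition": "Standard"},
    "apphost02": {"cores": 8, "cluster": "prod-a", "sql_edition": None},
}
today = {
    "dbhost01": {"cores": 32, "cluster": "prod-a", "sql_edition": "Standard"},  # core count doubled
    "apphost02": {"cores": 8, "cluster": "prod-b", "sql_edition": None},        # moved to another cluster
}

# Report every attribute that changed, appeared, or disappeared per host.
for host in sorted(set(yesterday) | set(today)):
    old, new = yesterday.get(host, {}), today.get(host, {})
    for key in sorted(set(old) | set(new)):
        if old.get(key) != new.get(key):
            print(f"CHANGE {host}.{key}: {old.get(key)!r} -> {new.get(key)!r}")
```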

Beyond the tool set itself, nothing can replace the inclusion of someone who knows the tool, and knows the supported environment intimately. That person will know when things are working the way they should or when they aren’t and can then interpret tool data appropriately.

Finally – and this is important – you need to know what needs to be measured in the datacenter. We’ve seen that the metrics used for license rights change on a whim. You need to know the current state of that whim when you license the software, and you need to reassess the metrics periodically so that you can react to changes in real time instead of at audit time. If you can’t measure what you’re buying, don’t sign the agreement!

Something new I learned about while researching this article was the concept of “directory creep.” In a scenario where a user loses their login credentials, they may be granted a replacement account instead of having the original resurrected. For software that is licensed based on directory users, that user is now consuming two licenses. If multiple domains are in play, with users moving from domain to domain and leaving separate records in each, those records can consume extra licenses even though the actual number of users is smaller. All of this needs to be documented very carefully. Old records that should have been aged out can also present artificially high counts that will work against you in an audit. Once again, changes happening through day-to-day operations can put you in a bind. A better account management process can fix this.
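A first pass at spotting directory creep can be as simple as grouping accounts by a normalized person identifier, as in the sketch below. Matching on mail address is only an assumption for illustration – real matching usually needs employee IDs or HR data – but it shows the shape of the check.

```python
# Sketch of flagging multiple directory accounts that probably belong to one
# person and may each consume a per-user license. Data is hypothetical.
from collections import defaultdict

accounts = [
    {"domain": "CORP", "sam": "jsmith", "mail": "Jane.Smith@example.com"},
    {"domain": "EUROPE", "sam": "jsmith2", "mail": "jane.smith@example.com"},
    {"domain": "CORP", "sam": "bjones", "mail": "bob.jones@example.com"},
]

# Group accounts by lower-cased mail address as a stand-in person identifier.
by_person = defaultdict(list)
for acct in accounts:
    by_person[acct["mail"].lower()].append(f"{acct['domain']}\\{acct['sam']}")

for mail, names in by_person.items():
    if len(names) > 1:
        print(f"Possible duplicate license consumption for {mail}: {', '.join(names)}")
```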

Now let’s move up into the corporate realm and start at home in the IT Asset Management practice itself. An individual ITAM professional can’t be expected to have all-encompassing knowledge of every software license before them. Some of us have taken the time to specialize in one or two manufacturers, and some larger organizations can afford either enough staff to spread the specializations around or to bring in targeted consultants when they’re needed. But you can expect to have to deal with new licensing schemes, and it pays to be ready for them when they arrive.

During the weeks prior to the due date for this article, I attended three web seminars by different software vendors as well as two IAITAM IMUG meetings, all on different topics, all relevant to my work. That’s a typical continuing education load and integrates nicely into the context of my day job. I’m also listening to the ITAM and ITSM podcasts from The ITAM Review, which continue to provide a wealth of ideas. You probably won’t be the first site to be offered some new licensing scheme, and online places such as LinkedIn forums on Microsoft licensing tend to be quite good, as the “nerds” like to ferret out the little nuanced details that others may have missed. Find as many places as you can where good information is being shared and then keep listening.

Next, we’ll assume that you already have a policy in place that, whenever a new software application is brought into the enterprise, the EULA receives a full and formal review before any software is installed and before the EULA is accepted as part of the installation. The next assumption is one you should make – that the EULA will change with every update to that application. You need to create a culture of caution in this area so that someone dashing out to install something for the CIO doesn’t supersede carefully negotiated terms and conditions with a click-through screen.

I think the aspects of the Cloud that are most tenuous for me are the pricing models and perhaps the usage side of the license. It’s pretty clear that the pricing for Cloud applications has provided an inducement to move away from installed software. What’s to keep the software vendors from making dramatic changes to the pricing models in the future once they’ve weaned everyone off the installed software and made us dependent on streamed apps? Could those changes include factoring in usage time or data transmission volume? Software vendors have continuously amazed us with the metrics introduced for licensing and there’s no reason to believe they’ll stop “innovating” in this area. And, as mentioned previously, keep a weather eye on your network traffic as more and more applications move to the Cloud. This may be an area where resources need to be beefed up to keep pace with user requirements.

Conclusion

As we have seen, change is an everyday component of working in IT Asset Management. One way to look at it is that change represents job security – you’ll always need to be there to react to whatever the factors explored above decide to throw at you next. We’ve all heard IT Asset Management described as a process, not a project; if you expect to finish IT Asset Management, you’ve set the wrong goals. Change is the very essence of that process – it will always be there to drag you along.

So work with the changes. Set your expectations and build your processes around the core assumption that whatever you’re doing well today will need to be reevaluated in the future, if not tomorrow. Maintain an open mind, seek out opportunities to learn new techniques and stay abreast of industry news, and keep your ears to the ground internally.

Footnotes

(1) All dates researched through Wikipedia or software websites
(2) From http://www.mooreslaw.org/
(3) ITAM Review podcast #11, 13 August 2015 – An Introduction to Microsoft Virtualisation
(4) ITAM Review Podcasts – https://www.itassetmanagement.net/category/podcasts/

About Bruce McDowell

In 1990, Bruce was a founder of Tally Systems, helping to bring the first hardware / software inventory tool to market and later working with the professional services group, managing on-site inventories for Fortune 1000 companies and product implementations. After Novell acquired Tally Systems in 2005, Bruce worked in a number of roles including Product Management for the inventory, recognition and asset management components of ZENworks. Since 2009, Bruce has been an independent consultant working on configuration and asset management projects mainly based around Novell’s ZENworks product line. He has also developed and presents several courses for Novell Training.