Why should I monitor IT effectiveness and how do I do it?
The two definitive studies by COSO—Internal Control: Integrated Framework and Enterprise Risk Management: Integrated Framework—both identify monitoring as a critical component of internal control and risk management. Monitoring encompasses both ongoing monitoring processes that ensure a system functions as intended (including management supervision and review) and ad hoc special studies and audits (for example, effectiveness reviews) of the system.
The greater the efficacy of the day-to-day monitoring processes, the less need there is for separate effectiveness reviews. Nonetheless, conducting effectiveness reviews from time to time provides additional insights. These reviews may also be appropriate when there is a change of management, of systems, or in the event of an acquisition or divestiture.
Any information technology organization or department is composed of four types of assets or resources, all of which should be considered when carrying out an effectiveness review:
- Processes and organization (structure)
- People (human resources)
- Technology (hardware and system software)
- Applications (application software)
A quality management system (QMS) refers to the structure, procedures and processes that ensure continuous improvement of IT services through ongoing and systematic performance monitoring against specified objectives. In some organizations, the QMS is referred to explicitly; in others, it may be implicit. In any event, organizations need to ensure that they respond effectively to both internal and external customers’ needs. In IT, quality refers to performance against a benchmark, constant improvement and exceeding customers’ expectations.
The International Organization for Standardization (ISO) sets out quality management standards in ISO 9000:2005, Quality management systems: Fundamentals and vocabulary, and ISO 9004:2000, Quality management systems: Guidelines for performance improvements. ISO identifies eight quality management principles, which can provide a roadmap for any monitoring of IT systems:
- Customer focus (i.e., user focus)
- Leadership
- Involvement of people
- Process approach
- System approach to management
- Continual improvement
- Factual approach to decision-making
- Mutually beneficial supplier relationships
Among the questions to be addressed in a formalized effectiveness review are:
- How closely did the resources used correspond to those in the tactical plan?
- Are IT projects being managed with an approach that delivers the projects on time, within budget, and with minimal impact to end-users?
- Does the delivery of IT resources and the support provided by IT staff meet or exceed the requirements and expectations of end users?
Periodic effectiveness reviews will help answer those questions. From time to time, the effectiveness review should also consider the overall effectiveness of the IT organization itself. This is particularly important when IT services are decentralized, for example as a result of expansion by acquisition. In that case, it may be helpful to step back and consider whether there is a better way of organizing the delivery of IT services within the organization.
Objective reviews using appropriate measurements are required to determine and demonstrate that IT activities are, in fact, effective. This refers to a “scientific” approach, or in the ISO terminology “a factual approach to decision-making.”
The following metrics should be established for evaluating IT effectiveness:
- Resource capacity planning compared to actual results
- For over-utilization of a resource, quantify either the impact on users of not supplying the additional capacity required, or the over-budget cost of adding that capacity when needed
- For under-utilization of a resource, quantify the additional cost of supplying more capacity than was required
- Examples of resources for capacity planning include system cycles, server disk space allocations, network responsiveness, and batch job throughput.
- Implementation plans compared to successful and unsuccessful results
- Metrics may be established with regard to implementation timing, costs, and measured impact on users. Unsuccessful implementation attempts that resulted in rollbacks can be measured where quantifiable impacts on users have been established.
- Service level agreements compared to actual service levels delivered
- These metrics will provide an objective indication of the IT service levels delivered compared to the users’ stated requirements in a service level agreement. When these are compared to the corresponding metrics collected from the users’ subjective evaluations, there may be an indication of discrepancies between negotiated service levels and user expectations.
- Problem-handling effectiveness
- Collecting statistics about problems encountered and the time required to bypass and/or resolve them will provide a good indicator of how effectively the technical staff address this important area of user satisfaction.
- User subjective evaluations of IT operations, technical support and functionality
- Surveys collecting user feedback on operations, technical support, and functionality should capture subjective satisfaction ratings on a defined scale.
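The metrics above lend themselves to simple, objective calculations. The following sketch illustrates one way they might be computed; the function names, data structures, and all figures are hypothetical examples chosen for illustration, not prescribed formulas or values.

```python
# Illustrative IT effectiveness metrics. All names, data, and
# scales below are hypothetical examples, not prescribed standards.

def capacity_variance(planned: float, actual: float) -> float:
    """Percentage variance of actual resource use against the capacity
    plan. Positive means over-utilization; negative, under-utilization."""
    return (actual - planned) / planned * 100


def sla_attainment(target_ms: float, samples: list[float]) -> float:
    """Share (%) of response-time samples meeting the agreed service level."""
    met = sum(1 for s in samples if s <= target_ms)
    return met / len(samples) * 100


def mean_time_to_resolve(resolution_hours: list[float]) -> float:
    """Average hours from problem report to resolution."""
    return sum(resolution_hours) / len(resolution_hours)


def satisfaction_score(ratings: list[int], scale_max: int = 5) -> float:
    """Mean user survey rating normalized to a 0-100 scale."""
    return sum(ratings) / len(ratings) / scale_max * 100


# Example usage with made-up figures:
print(capacity_variance(planned=80.0, actual=92.0))           # disk space used vs. plan
print(sla_attainment(target_ms=500.0, samples=[320, 480, 510, 450]))
print(mean_time_to_resolve([2.0, 6.5, 1.5]))
print(satisfaction_score([4, 5, 3, 4]))
```

Comparing the objective service-level figures with the subjective survey scores, as the text suggests, can surface gaps between negotiated service levels and actual user expectations.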
Jeffrey D. Sherman, BComm, MBA, CIM, FCPA, FCA
Author of Information Technology PolicyPro®
Published by First Reference Inc.