
1000+ KPIs

Hey. If you have ever faced the task of building a system of key performance indicators (KPIs) for employees or projects, you will most likely agree that it is not an easy one. Head-on solutions do not work here: many seemingly obvious indicators turn out to be uninformative, give a distorted picture of the situation, or can easily be gamed by employees.
To save you precious time, keep your nerve cells healthy, and offer some inspiration, I suggest getting acquainted with this list of popular KPIs for IT. Perhaps some of them will help you solve your problem.

The indicators are taken from the collection KPI Mega Library: 17,000 Key Performance Indicators (available for purchase on Amazon).




IT


General
# extra months spent for the implementation
# fixed bugs
# of alerts on exceeding system capacity thresholds
# of annual IT service continuity plan testing failures
# of business disruptions caused by (operational) problems
# of changes closed, relative time of change
# of complaints received within the measurement period
# of failures of IT services during so-called critical times
# of incidents closed
# of incidents still opened
# of open incidents older than 15 days relative to all open incidents
# of open problems older than 28 days relative to all open problems
# of open service requests
# of overdue changes relative to # of open changes
# of overdue problems relative to # of open problems
# of requests closed
# of Service Level Agreement (SLA) breaches due to poor performance
# of unmodified / neglected incidents
% accuracy of forecast against current plans
% accuracy of forecast against actuals of expenditure as defined in continuity plan
% applications with adequate user documentation and training
% bugs found in-house
% financial management processes supported electronically
% hosts missing high priority patches
% of (critical) infrastructure components
% of actual equipment uptime (in hours) relative to planned uptime (in hours)
% of application / software development work outsourced
% of backlogged / neglected change requests
% of business process support of applications
% of closed service requests
% of Configuration Items (CIs) included in capacity reviews
% of the configuration items (CIs)
% of delivered
% of efficient and effective technical business process adaptability of applications
% of incidents prior to the lifecycle
% of incidents solved within deadline
% of incidents
% of IT services that are not covered in the continuity plan
% of open service requests worked on
% of overdue incidents
% of overdue service requests
% root problem analysis
% of problems resolved within the required time period
% of problems with a root cause identified for the failure
% of problems with available workaround
% of reopened incidents
% of reopened service requests
% of response-time SLAs
% of reviewed SLAs
% of service requests
% of service requests posted via web (self-help)
% of service requests resolved within an agreed-upon / acceptable period of time
% of SLAs with an assigned account manager
% of SLAs without service level breaches
% of time (in labor hours)
% of unauthorized implemented changes
% of unplanned purchases due to poor performance
% of urgent changes
% of workarounds to service requests applied
ASL applications cycle management
Average delay in SLAs review
Average problem closure duration
Average service request closure duration
Average time (in hours) spent on those changes closed
Average time (in days) between updates of Capacity Plan
Average time (in days) between updates of Continuity Plan
Average time spent keeping Capacity Plans up to date
Average time spent keeping Continuity Plans up to date
Business Value (BV) of application(s)
Change closure duration rate
Customer satisfaction (index)
First line service request closure rate
Gap
Problem queue rate
Ratio of # of incidents versus # of changes
Service request closure duration rate
Technical Value (TV) of application(s)
Time between reviews of IT continuity plan

Project life cycle


Planning and Organization
# of conflicting responsibilities in the view of segregation of duties
% IT staff competent for their roles
% of budget deviation value compared to the total budget
% of IT budget spent on risk management (assessment and mitigation) activities
% of IT functions connected to the business
% of IT initiatives / projects championed by business owners
% of IT objectives that support business objectives
% of IT services whose costs are recorded
% of processes receiving Quality Assurance (QA) review
% of projects meeting stakeholder expectations
% of projects on budget
% of projects on time
% of projects receiving Quality Assurance (QA) review
% of projects with a post-project review
% of projects with the benefit (Return on Investment) defined up front
% of redundant and / or duplicate data elements as exist in the information architecture
% of repeat incidents
% of roles with documented position and authority descriptions
% of sick days (illness rate)
% of software applications
% of stakeholders satisfied with IT quality
% of stakeholders that understand IT policy
% of variation of the annual IT plan
Actual ratio vs. planned ratio of IT contractors to IT personnel
Average # of components
Delay in updates of IT plans after strategic updates
Frequency (in days) of enterprise IT control framework review / update
Frequency (in days) of the IT risk management process
Frequency (in days)
Frequency (in days) of strategy and steering committee meetings
Frequency (in days) of updates to the information architecture
Frequency (in days) of updates to the technology standards
Ratio of IT contractors to IT personnel

Implementation
# of application production problems (per application)
# of bugs or software defects of application
# of critical business processes supported by obsolete infrastructure
# of different technology platforms
# of infrastructure components that are no longer supportable
% of applications with adequate user and operational support training
% of business owners satisfied with application training and support materials
% of delivered projects where stated benefits were not guaranteed assumptions
% of development effort spent maintaining existing applications
% of feasibility studies
% of implemented changes not approved (by management / CAB)
% of infrastructure components acquired outside the acquisition process
% of key stakeholders satisfied with their suppliers
% of procurement requests satisfied by preferred suppliers
% of procurement requests satisfied by the preferred supplier list
% procurements in compliance with standing procurement policies and procedures
% of projects on time and on budget
% of projects with a testing plan
% of request for proposals (rfp)
% of systems
% of users satisfied with the functionality delivered
Request for Proposals (RFP)
Average rework per change after implementation of changes
Average time to configure infrastructure components
Cost to produce / maintain user documentation, operational procedures and training materials
Satisfaction scores for training
Software average time to procure
Time lag between procedures and documentation materials
Total rework (in FTE) after implementation of changes

Monitoring and control
# of (critical) non-compliance issues identified
# of (major) internal control breaches, within measurement period
# of improvement actions driven by monitoring activities
# of IT policy violations
# of non-compliance issues
# of recurrent IT issues on board agendas
# of weaknesses identified by qualification and qualification reports
% maturity of board reporting
% maturity of reporting from IT to the board
% of critical processes monitored
% of metrics that can be benchmarked to (industry) standards and set targets
Age (days) of agreed-upon recommendations
Reflectance performance
Amount of effort required to gather measurement data
Average time lag
Average time lag between publication of law
Cost of non-compliance, including settlements and fines
Frequency (in days)
Frequency (in days) of compliance reviews
Frequency (in days)
Frequency of independent reviews of IT compliance
Frequency of IT steering / strategy meetings
Stakeholder satisfaction with the measuring process
Time between internal control deficiency occurrence and reporting

Support
# of business compliance issues caused by improper configuration of assets
# of deviations identified between the configuration and actual asset configuration
# of formal disputes with suppliers
# of incidents due to physical security breaches or failures
# of incidents
# of incidents of unauthorized access to computer facilities
# of incidents where sensitive data were retrieved
# of SLAs without service level breaches
# of training hours divided by # of employees (in FTE)
# of violations in segregation of duties
# critical time outage
# devices per FTE
# incidents per PC
# incidents processed per service desk workstation
# IT service desk availability
# mean time to repair (MTTR)
# of complaints
# of training calls handled by the service desk
# of un-responded emails
% of (major) suppliers subject to monitoring
% of applications that are not capable of meeting
% of availability Service Level Agreements (SLAs) met
% of budget deviation relative to total budget
% of the defined service availability plan
% of delivered services that are not included in the service catalog
% of disputed IT costs by the business
% of IT service bills accepted / paid by business management
% of licenses purchased and not accounted for in the configuration repository
% of outage due to incidents (unplanned unavailability)
% of personnel trained in safety, security and facilities
% of scheduled work not completed on time
% of service levels (in Service Level Agreements) reported in an automated way
% of service levels (in Service Level Agreements) that are actually measured
% of successful data restorations
% of systems where security requirements are not met
% of telephone calls abandoned by caller
% of transactions executed
% of user complaints on contracted services
% of users who do not comply with password standards
% incidents resolved remotely
% incidents solved by first point of contact
% incidents solved within SLA time
% incidents which changed priority during the life-cycle
% IT incidents fixed before users notice
% IT incidents solved within agreed response time
% neglected incidents
% of (re-)assignments of service requests
% of calls transferred within measurement period
% of customer issues that were solved by the first phone call
% of first-line resolution of service requests
% of incorrectly assigned incidents
% of incorrectly assigned service requests
% of terminal response time
% service requests posted via web (self-help)
Actual budget (costs) relative to the established budget
Amount of downtime arising from physical environment incidents
Average # of training days per operations personnel
Average time (in hours) for data restoration
Average time period (lag)
Average # of (re-)assignments of closed incidents within measurement period
Average # of calls / service request per handler
Call center / service desk within measurement period
Average after call work time
Time of registration
Average days for lease / refresh / upgrade fulfillment
Average days for software request fulfillment
Average incident response time
Average overdue time of service requests
Average problem closure duration
Average TCP round-trip time
Downtime caused by deviating from operations procedures
Downtime caused by inadequate procedures
Time before help calls are answered
Total service delivery penalties paid
Frequency (in days) of physical risk assessment and reviews
Frequency (in days) of review of IT cost allocation model
Frequency (in days) of testing of backup media
Frequency (in days) of updates to operational procedures
Frequency of review of IT continuity plan
Unit costs of IT service(s) within measurement period
User satisfaction with availability of data

Business


Service
# e-mail backlog
# of alerts on exceeding system capacity thresholds
# of transactions executed within response time threshold
% delivered services not in the service catalog
% fully patched hosts
% of "dead" servers
% of (assigned) disk space quota used
% of disk space used
% of dropped telephone calls
% of failed transactions
% of network bandwidth used
% of network packet loss
% of transactions executed during the response time threshold
Adoption rate
Application performance index
Average # of virtual images per administrator
Average % of CPU utilization
Average % of memory utilization
Average network throughput
Average response time of transactions
Average retransmissions of network packets
Average size of email boxes / storage
Corporate average data efficiency (CADE)
Datacenter power usage effectiveness
Maximum CPU usage
Maximum memory usage
Maximum response time of transactions
Mean opinion score (MOS)
Mean time to provision
Mean-time between failure (MTBF)
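
Several of the figures above (MTBF, MTTR, availability) are simple ratios over an incident log. A minimal sketch of how they might be computed, assuming a hypothetical list of (failure start, service restored) timestamps and a fixed observation window:

```python
from datetime import datetime

# Hypothetical incident log: (failure start, service restored)
incidents = [
    (datetime(2024, 1, 3, 10, 0), datetime(2024, 1, 3, 11, 30)),
    (datetime(2024, 1, 9, 2, 15), datetime(2024, 1, 9, 2, 45)),
    (datetime(2024, 1, 20, 16, 0), datetime(2024, 1, 20, 20, 0)),
]
window_hours = 31 * 24  # observation period: January

downtime_hours = sum((end - start).total_seconds() / 3600 for start, end in incidents)
uptime_hours = window_hours - downtime_hours

mttr = downtime_hours / len(incidents)            # Mean Time To Repair
mtbf = uptime_hours / len(incidents)              # Mean Time Between Failures
availability = uptime_hours / window_hours * 100  # % of system availability

print(f"MTTR = {mttr:.2f} h, MTBF = {mtbf:.1f} h, availability = {availability:.2f} %")
# → MTTR = 2.00 h, MTBF = 246.0 h, availability = 99.19 %
```

Definitions vary between sources (some divide MTBF by failures minus one, or measure failure-start to failure-start), so pin down the formula before reporting the KPI.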

Service availability
# of developed new systems without downtime issues
# of integrated IT systems
# of outage due to incidents (unplanned unavailability)
# of reviews of management information systems (MIS)
% downtime (hours)
% effective usage of IT systems
% improvement of capacity of current systems
% mainframe availability
% of outage (unavailability)
% of outage due to changes (planned unavailability)
% of system availability
% of unplanned outage / unavailability due to changes
% suitability of IT Systems
Customer database availability
Total outage from IT services

Expenses
# of maintenance contracts
% cost adherence
% hardware asset value to total IT value
% IT security budget
Average age of hardware assets
Average cost to solve a problem
Average cost to solve an incident
Average costs of a release
Average costs of change implementation
Average penalty costs per SLA
Average costs of penalties paid on Service Level Agreements (SLAs)
Cost of CMDB reconciliation
Cost of consumable items such as ink, cartridges, CDs, etc.
Cost of delivery
Cost of digital storage media
Cost of Infrastructure
Cost of leased equipment
Cost of maintenance per 1000 lines of code
Cost of producing plans
Cost of Continuity Plans
Cost of purchase
Cost of security incidents
Cost of security incidents due to unauthorized access to systems
Cost of spares
Cost per device
Cost per PC
Cost per stored terabyte
Cost per terabyte transmitted
Costs associated with a call center / service desk, usually for a specific period, such as month or quarter
Costs of operating call center / service desk
Costs savings from service reuse
Cost of cleanup virus / spyware incidents
Cost of finding and hiring one staff member
Cost of managing processes
Cost of patches
Cost of producing capacity plans
Cost of producing continuity plans
Cost of professional certifications necessary
Cost of service delivery
Cost of skilled labor for support of IT assets
Cost per trouble report (man-hours)
Domain registrations costs
Facilities costs such as a dedicated server room with fire and air control systems
Financing costs
Hardware asset value
IT spending per employee
Labor cost for technical and user support
Net Present Value (NPV) of investment
Network costs (determined by network demand)
Total cost of change implementation
Total cost of ownership
Total cost of release
Total cost to solve all incidents
Total cost to solve all problems
Time for maintenance scheduled and unscheduled
Time assets are used for unrelated activities such as gaming or chatting
Training costs of both IT staff and end users
Unit cost of IT services
Unit costs of IT service(s)
Use of assets for non-business purposes
Voice network - cost per minute

Efficiency
# frequency of IT reporting to the board
# of capabilities (services that can be rendered)
# of people working
# of services delivered on time
# of Service Level Agreement (SLA) breaches due to poor performance
# terabyte managed by one Full Time Equivalent (FTE)
# unique requirements
# watts per active port
% facility efficiency (FE)
% growth in business profits
% growth in market share
% growth in sales
% improved SLAs
% of projects with a testing plan
% of Service Level Agreements (SLAs) reviewed
% SLAs without service level breaches
% stock price appreciation
% time coordinating changes
% IT budget of total revenues
% IT capital spending of total investment
% of current initiatives driven by IT
% of current initiatives driven by the business
% of growth of IT budget
% of IT contribution in ROTA
% of IT costs associated to IT investment
% of IT costs associated to IT maintenance
% of IT labor outsourced
% of IT time associated to IT investment
% of IT training on IT operational costs
% of business projects
Average IT-related costs per customer
IT to total employees ratio
Ratio of % growth of IT budget versus % growth of revenues
Ratio of fixed price projects cost versus T & M projects cost
Actual capacity (# of people available & avoid new project traps)
Technology effectiveness index

Ecology
% of energy used from renewable sources ("green energy")
% of recycled printer paper
% of servers located in data centers
Corporate average data efficiency (CADE)
Datacenter power usage effectiveness (PUE)
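
PUE and its inverse, data center infrastructure efficiency, are single divisions over two energy meters. A sketch with made-up readings:

```python
# Hypothetical monthly meter readings (kWh)
total_facility_energy = 180_000  # IT load plus cooling, lighting, UPS losses
it_equipment_energy = 120_000    # servers, storage and network gear only

pue = total_facility_energy / it_equipment_energy  # Power Usage Effectiveness
dcie = 1 / pue * 100                               # Data Center infrastructure Efficiency, %

print(f"PUE = {pue:.2f}, DCiE = {dcie:.1f} %")
# → PUE = 1.50, DCiE = 66.7 %
```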

Infrastructure
# maximum memory usage
# of compliments received
# of incidents caused by changes vs. total # of incidents
# of incidents caused by inadequate capacity
# of open IT Infrastructure incidents older than 28 days relative to all open incidents
# of open IT Infrastructure problems older than 28 days relative to all open problems
# of open service requests older than 28 days
# of outstanding actions against last SLA review
# of printers divided by # of staff
# of problems closed
# of repeated incidents
# of untested releases
# of urgent releases
# power usage effectiveness
# propagation delay
% availability (excluding planned downtime)
% data center infrastructure efficiency
% disk space quota used
% incidents solved within SLA time
% of audited Configuration Items (CIs)
% of changes closed before deadline
% of closed service requests
% of Configuration Items (CIs) mapped to IT services in the CMDB
% of Configuration Items (CIs) monitored for performance
% of Configuration Items (CIs) under maintenance contract
% of Configuration Items (CIs) with under-capacity
% of customers given satisfaction surveys
% of delivered services not in the service catalog
% of end user computers
% of end user printers
% of escalated service requests
% of fully documented SLAs
% of implemented changes without impact analysis
% of inaccurately registered Configuration Items (CIs) in the CMDB
% of incidents configuration data
% of incidents which change classification during the lifecycle
% of incidents which change priority during the lifecycle
% of internal hosts which are centrally managed & protected
% of IT staff that are ITIL trained
% of IT staff with (advanced) ITIL certification
% of money spent on IT infrastructure versus the total IT spent
% of money spent on IT
% of open service requests
% of open service requests unmodified / neglected
% of overdue changes
% of overdue problems
% of project files containing cost- / benefit estimates
% of refused changes
% of routine changes (indicates the maturity level of the process)
% of security-related service calls
% of Service Level Agreements (SLAs) in renegotiation relative to all SLAs that are in production
% of Service Level Agreements (SLAs)
% of service requests closed before deadline
% of services covered by SLA
% of SLA breaches caused by underpinning contracts
% of SLA reviews conducted on-time
% of software licenses used
% of successful software installations
% of successful software upgrades
% of time coordinating changes
% of unmodified / neglected incidents
% of unmodified / neglected problems
% of unregistered changes
% of vendor services delivered without agreed service targets
% on-time service level changes
% reduction of IPCSs (Incident, Problem, Change, Service Request)
Average # of (re-)assignments of closed incidents
Average # of (re-)assignments of closed service requests within measurement period
Average change closure duration
Average rework (in FTE)
Average order of items
Average time between configuring items (CIs) as residing in CMDB
Average time between CMDB reconciliation
Average time between urgent releases of software
Average time spent on CMDB reconciliation
Average time to procure an item
Balance of problems solved
Change queue rate
Delay in production of financial reports
First-call resolution rate
Forecast accuracy of budget
Growth of the CMDB
Incident impact rate due to incomplete CMDB
Mean Time To Detect (MTTD)
Overall cost of IT delivery per customer
Ratio of # of incidents versus # of problems
Service call abandoned rate
Service request backlog
Service request queue rate
Support costs for software contracts
Costs of an activity
Time lag between request for purchase and contracting
Total critical-time outage
Total rework after implementation of changes
Total service delivery penalties paid within a period

Data backup
# applications data transfer time
# data center infrastructure efficiency
# deviations between configuration and actual configurations
# time for configuration management database (CMDB) reconciliation
% corporate average data efficiency
% data redundancy
% of backup operations that are successful
% of changes that required restoration of backup
% of changes that required restoration of backup during the implementation
% of physical backup / archive media that are fully encrypted
% of test backup restores that are successful
Age of backup
Average time between tests of backup
Average time to restore backup
Average time to restore off-site backup

Network
# link transmission time
# network latency
# of bytes received since the system started
# of bytes sent out to connections
# of commands sent since the system started
# of connections currently in queue to be processed
# of connections that have failed to complete successfully
# of connections that successfully completed transfer and confirmation
# of messages received by the system
# of currently active connections
# retransmission delay
# voice network minutes per FTE
% Internal servers centrally managed
% network bandwidth used
% network packet loss
% utilization of data network
Accuracy rate
Average connection time
Average network round trip latency
Average response speed
Connections per customer
Cost per byte
Total amount of time the system has been running (in milliseconds)
Total time since the system started (in days)

Operations
# of business disruptions caused by problems
# of compliments
# of deviations between configuration and actual configurations
# of incidents first month
# of outstanding actions of last SLA review
# of overdue changes
# of overdue incidents
# of overdue problems
# of overdue service requests
# of problems in queue
# of problems with available workaround
# of reopened incidents
# of reopened service requests
# of repeat incidents
# of reviewed SLAs
# of service requests posted via web (self-help)
# of SLA breaches due to poor performance
# of SLAs with an assigned account manager
# of SLAs without service level breaches
# of software licenses used
# of time coordinating changes
# of unauthorized implemented changes
# of unplanned purchases due to poor performance
# of unregistered changes
# of untested releases
# of urgent changes
% growth of the CMDB
% incidents assigned to a level of support
% incidents closed unsatisfactorily
% incidents resolved using a change
% incidents resolved with workaround
% of audited Configuration Items (CIs)
% of availability SLAs met
% of backed out changes
% of calls transferred
% of Configuration Items (CIs) included in capacity reviews
% of escalated service requests
% of implemented changes not approved by management
% of incidents classified as 'major'
% of incident impact rate incomplete
% of incidents bypassing the support desk
% of incidents caused by a workaround
% of incidents closed by service provider
% of incidents closed satisfactorily
% of incidents expected to close next period by scheduled workaround or change
% of incidents
% of incidents for which entitlement is unconfirmed
% of incidents inbound versus outbound
% of incidents incorrectly classified
% of incidents incorrectly prioritized
% of incidents involving third-party agreement
% of incidents recorded 'after the fact'
% of incidents rejected for reassignment
% of incidents resolved with non-approved workaround
% of incidents resulting from a service request
% of incidents resulting from previous incidents
% of incidents solved within deadline
% of incidents which change during the lifecycle
% of incidents with unmatched agreements
% of licenses purchased and not accounted for in configuration repository
% of obsolete user accounts
% of open service requests worked on
% of problems with a root cause analysis
% of problems with a root cause identified
% of response-time SLAs not met
% of service requests due to poor performance
% of service requests resolved within an agreed period of time
% of services not covered in Continuity Plan
% of un-owned open service requests
% of unplanned outage / unavailability due to changes
% of workarounds to service requests applied
Accuracy of expenditure as defined in Capacity Plan
Accuracy of expenditure as defined in Continuity Plan
Availability
Availability (excluding planned downtime)
Average # of (re-)assignments of incidents
Average # of (re-)assignments of service requests
Average audit cycle of Configuration Items (CIs)
Average change closure duration
Average cycle time between urgent releases
Average incident closure duration
Average service request closure duration
Average time between same reconciliations
Average time between updates of capacity plan
Average time between updates
Average time between identifying and rectifying a discrepancy
Average time spent on continuity plans
Change closure duration rate
Change queue rate
Critical-time failures
Critical-time outage
Deviation of planned budget for SLA
Email backlog
First line service request closure rate
First-call resolution rate
Frequency of review of IT continuity plan
Incident backlog
Incident queue rate
IT service continuity plan testing failures
Mean time in postmortem
Mean time in queue
Mean Time to Action (MTTA)
Mean Time to Escalation (MTTE)
Mean time to repair
Mean Time to ticket (MTTT)
Total changes after implementation
Total rework after implementation of changes
Total time in postmortem
Total time in queue
Total time spent on CMDB reconciliation
Total time to action (TTTA)
Total time to escalation (TTTE)
Total time to ticket (TTTT)
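
Many of the queue and closure figures in this section reduce to simple ratios over a ticket export. A minimal sketch, assuming a hypothetical list of ticket records (the field names are invented for the example):

```python
# Hypothetical service-desk export
tickets = [
    {"id": 1, "status": "closed", "reassignments": 0, "first_call": True},
    {"id": 2, "status": "closed", "reassignments": 2, "first_call": False},
    {"id": 3, "status": "open",   "reassignments": 1, "first_call": False},
    {"id": 4, "status": "closed", "reassignments": 0, "first_call": True},
]

closed = [t for t in tickets if t["status"] == "closed"]
first_call_resolution = 100 * sum(t["first_call"] for t in closed) / len(closed)
avg_reassignments = sum(t["reassignments"] for t in closed) / len(closed)
incident_backlog = sum(t["status"] == "open" for t in tickets)

print(f"FCR = {first_call_resolution:.0f} %, "
      f"avg (re-)assignments = {avg_reassignments:.2f}, backlog = {incident_backlog}")
# → FCR = 67 %, avg (re-)assignments = 0.67, backlog = 1
```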

Quality control
# incident efficiency
# missing patches
# of backups & tests of computer systems
# of changes after the program is coded
# of changes to customer requirements
# of coding errors found during formal testing
# of cost estimates revised
# of defects found over a period of time
# of documentation errors
# of error-free programs delivered to customer
# of errors found after formal test
# of keypunch errors per day
# of process steps
# of reruns caused by operator error
# of revisions to checkpoint plan
# of revisions to plan
# of revisions to program objectives
# of test case errors
# of test case runs before success
# untested releases
% assignment content adherence
% availability errors
% change in customer satisfaction survey
% compliance issues caused by improper configuration of assets
% critical processes monitored
% critical time failures
% error in forecast
% error in lines of code required
% failed system transactions
% false detection rate
% fault slip through
% hours used for fixing bugs
% incidents after patching
% incidents backlog
% incidents queue rate
% of changes caused by a workaround
% of changes classified as miscellaneous
% of changes incorrectly classified
% of changes initiated by customers
% of changes insufficiently resourced
% of changes internal versus external
% of changes matched to scheduled changes
% of changes recorded 'after the fact'
% of changes rejected for reassignment
% of changes scheduled outside maintenance window
% of changes subject to schedule adjustment
% of changes that cause incidents
% of changes that were insufficiently documented
% of changes with associated proposal statement
% of customer problems corrected per schedule
% of defect-free artwork
% of input correction on data entry
% of problems uncovered before design release
% of programs not flow-diagrammed
% of reported bugs that have been fixed when going live
% of reports delivered on schedule
% of time required to debug programs
% of unit tests covering software code
Errors per thousand lines of code
Mean time between system IPL
Mean time between system repairs
QA personnel as a % of # of application developers
Time taken for completing a software application
Total rework costs resulting from computer program
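
"Errors per thousand lines of code" above is the classic defect-density ratio. A one-line sketch with invented numbers:

```python
defects_found = 42      # hypothetical defects logged against a release
lines_of_code = 15_500  # logical lines of code in that release

defects_per_kloc = defects_found / (lines_of_code / 1000)
print(f"{defects_per_kloc:.2f} defects per KLOC")
# → 2.71 defects per KLOC
```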

Security
# detected network attacks
# exceeding alerts
# of detected network attacks
# of occurrences of loss of strategic data
# of outgoing viruses / spyware caught
# password policy violations
# security control
# time to detect incident
# unauthorized changes
# viruses detected in user files
% compliance to password policy
% computer diffusion rate
% downtime due to security incidents
% e-mail spam messages stopped
% employees with own ID and password for internal systems
% host scan frequency
% intrusion success
% IT security policy compliance
% IT security staff
% IT systems monitored by anti-virus software
% licenses purchased and not accounted for in repository
% modules that contain vulnerabilities
% of downtime due to security incidents
% of email spam messages stopped / detected
% of email spam messages unstopped / undetected
% of incidents
% of patches applied outside of maintenance window
% of spam false positives
% of systems covered by antivirus / antispyware software
% of systems
% of systems with latest antivirus / antispyware signatures
% of virus incidents requiring manual cleanup
% of viruses & spyware detected in email
% overdue incidents
% repeated IT incidents
% security awareness
% security incidents
% security intrusions detection rate
% servers located in data centers
% spam not detected
% trouble report closure rate
% virus driven e-mail incidents
% viruses detected in e-mail messages
Distribution cycle of patches
Latency of unapplied patches
Spam detection failure %
Time lag between detection, reporting and acting
Weighted security vulnerability per unit of code

Software development


General
# of bugs per release
# of critical bugs compared to # of bugs
# of function points (FP)
# of defects per function point
# of defects per line of code
# of defects per use case point
# of escaped defects
# of realized features compared to # of planned features
# of software defects in production
# of successful prototypes
# unapplied patch latency
% critical patch coverage
% defects reopened
% of application development work outsourced
% of bugs found in-house
% of hours used for fixing bugs
% of overdue software requirements
% of software build failures
% of software code check-ins without comment
% of software code merge conflicts
% of time lost due to code loss
% of user requested features
% on time completion (software applications)
% overdue changes
% patch success rate
% routine changes
% schedule adherence in software development
% software licenses in use
% software upgrades completed successfully
% unauthorized software licenses used
% unique requirements to be reworked
Average # of defects created per man-month
Average number of software versions released
Average progress rates (time versus results obtained)
Cyclomatic software code complexity
Halstead complexity
Lines of code per day
Rate of knowledge acquisition (progress within the research)
Rate of successful knowledge representation
System usability scale
Time ratio design to development
Time-to-market changes to existing products / services
Time-to-market of new products / services
Work plan variance
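The "Cyclomatic software code complexity" entry above has a concrete formula: one plus the number of decision points in a procedure. A minimal sketch for Python source, assuming a simplified set of decision-node types (a full McCabe counter would also handle comprehensions and other constructs):

```python
import ast

# Decision-point node types counted here are a simplification of
# McCabe's definition, for illustration only.
DECISION_NODES = (ast.If, ast.For, ast.While, ast.IfExp, ast.ExceptHandler)

def cyclomatic_complexity(source: str) -> int:
    """Return 1 + number of decision points in the given Python source."""
    complexity = 1
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, DECISION_NODES):
            complexity += 1
        elif isinstance(node, ast.BoolOp):
            # each extra and/or operand adds a branch
            complexity += len(node.values) - 1
    return complexity

sample = """
def classify(n):
    if n < 0:
        return "negative"
    for d in range(2, n):
        if n % d == 0:
            return "composite"
    return "prime-ish"
"""
print(cyclomatic_complexity(sample))  # 1 + if + for + if = 4
```

A common rule of thumb is to flag procedures scoring above 10 for refactoring, though the threshold is a team convention, not part of the metric.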

Classes
# of logical code lines
# of all statements
# of ancestor classes
# of classes
# of comment lines
# of constructors defined by class
# of control statements
# of declarative statements (procedure headers, variable declarations, declarations, all statements outside procedures)
# of events defined by class (This metric counts the event definitions)
# of executable statements
# of immediate sub-classes that inherit from a class
# of interfaces implemented by class
# of logical lines of whitespace
# of non-control statements (executable statements that are neither control nor declarative statements)
# of non-private variables defined by class (excluding private variables)
# of physical source lines (Including code, comments, empty comments and empty lines)
# of calls to each class
# of subs, functions and properties in class
# of variables defined and inherited by class
# of variables defined by class (does not include inherited variables)
Size of class (# of methods and variables)
Size of class interface (# of non-private methods and variables)
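The "size of class" KPI at the end of this list (# of methods and variables) can be computed by introspection. A sketch with a hypothetical `Account` class; as a simplification it counts only class-level variables, not instance variables created in `__init__`:

```python
import inspect

# Hypothetical sample class; "size of class" = # of methods + # of variables.
class Account:
    rate = 0.05                  # class variable
    def __init__(self):
        self.balance = 0         # instance variables are ignored here
    def deposit(self, amount):
        self.balance += amount
    def withdraw(self, amount):
        self.balance -= amount

methods = [name for name, _ in inspect.getmembers(Account, inspect.isfunction)]
variables = [name for name in vars(Account)
             if not name.startswith("__") and name not in methods]
print(len(methods) + len(variables))  # 3 methods (incl. __init__) + 1 variable = 4
```

The "size of class interface" variant would additionally filter out private members, keeping only what other classes can see.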

Procedures
# of distinct procedures in the call tree of a procedure
# of execution paths through a procedure (Cyclomatic complexity)
# of formal parameters defined in procedure header
# of global and module-level variables accessed by a procedure
# of variables for a procedure (including parameters and function return value)
# of parameters set by procedure (output parameters)
# of procedure local variables and arrays (excluding parameters)
# of procedures that a procedure calls
# of procedures that call a procedure
% complexity inside procedures and between them
% external complexity of a procedure (# of other procedures called squared)
% internal complexity of a procedure (# of input / output variables)
% of Cyclomatic complexity without cases
Code lines count
Comment lines count
Fan-in multiplied by fan-out multiplied by procedure length (logical lines of code)
Length of procedure name in characters
Logical lines of code
Logical lines of whitespace
Maximum # of nested conditional statements in a procedure
Maximum # of nested loop statements in a procedure
Maximum # of nested procedure calls from a procedure
Physical source lines (including code, comments, empty comments and empty lines)
Total amount of data read (procedures called + parameters read + global variables read)
Total amount of data written
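One entry above is a composite formula: fan-in multiplied by fan-out multiplied by procedure length in logical lines of code. A sketch over a hand-written call graph; the procedure names and line counts are illustrative assumptions, not output of a real parser:

```python
# Hypothetical call graph: caller -> procedures it calls (fan-out edges).
calls = {
    "main":   ["load", "report"],
    "load":   ["parse"],
    "report": ["parse"],
    "parse":  [],
}
# Hypothetical logical-lines-of-code counts per procedure.
lloc = {"main": 10, "load": 6, "report": 8, "parse": 12}

def sysc(proc: str) -> int:
    """Fan-in x fan-out x procedure length, per the KPI listed above."""
    fan_out = len(calls[proc])
    fan_in = sum(proc in callees for callees in calls.values())
    return fan_in * fan_out * lloc[proc]

print(sysc("load"), sysc("parse"))  # 6 0
```

Note a quirk of the multiplicative form: leaf procedures (fan-out of zero) and entry points (fan-in of zero) score zero regardless of length, so this metric highlights heavily-wired middle layers rather than absolute size.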

Variables
# of data flows into and out of a variable
# of modules that use a variable
# of read instructions from variable
# of reads and writes of variable
# of write instructions to variable
Length of variable name in characters

Project
# of abstract classes defined in project
# of possible couplings
# of class attributes (variables) hidden from other classes
# of class methods hidden from other classes
# of classes defined in project
# of concrete classes
# of days passed between versions
# of enumeration constant names
# of enumeration names
# of files in project
# of global variables and module-level variables and arrays
# of interfaces defined in project
# of leaf classes (a leaf class has no descendants)
# of physical lines in dead procedures
# of procedure call statements (including calls to subs, functions and declares, accesses to properties)
# of read instructions from global and module-level variables
# of variables
# of real forms excluding any user controls
# of root classes defined in project
# of standard modules: bas files and Module blocks
# of unique names divided by # of names
# of unused constants
# of unused procedures
# of unused variables
# of user-defined types (or structure statements)
# of write instructions to global and module-level variables
% comment density (meaningful comments divided by # of logical lines of code)
% of actual polymorphic definitions of all possible polymorphic definitions
% of code lines counted from logical lines
% of enum constants among all constants
% of parameterized classes (generic classes)
% of reuse benefit (reuse of procedures)
Global flow variables versus procedure parameters
Average # of calls for a code line (measures the modularity or structuredness)
Average # of constants in an Enum block
Average # of variable access instructions
Average file date of VB files
Average system complexity among procedures
Classes that do access attributes
Classes that do access operations / Classes that can access operations
Date of newest file in project
Deadness index
Density of decision statements in the code
Length of names
Length of procedure names
Maximum depth of call tree
Maximum depth of inheritance tree
Maximum size of call tree
Project size in kilobytes (includes all source files)
Reuse ratio for classes (a class is reused if it has descendants)
Specialization ratio for classes
Sum of SYSC (system complexity) over all procedures
# of times constants and enum constants are reused
Internal inheritance (inheritance from classes defined within the project)
The sum of inherited methods divided by # of methods in a project
The sum of inherited variables divided by # of variables in a project
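The "reuse ratio for classes" entry above comes with its own definition: a class is reused if it has descendants. A sketch over a tiny, purely illustrative class hierarchy:

```python
# Hypothetical project classes; Shape is the only one with descendants.
class Shape: pass
class Circle(Shape): pass
class Square(Shape): pass
class Animal: pass

project_classes = [Shape, Circle, Square, Animal]

# A class is "reused" if any other project class inherits from it.
reused = [c for c in project_classes
          if any(issubclass(d, c) and d is not c for d in project_classes)]
ratio = len(reused) / len(project_classes)
print(ratio)  # only Shape is reused: 1/4 = 0.25
```

The companion "specialization ratio" looks at the same hierarchy from the other side, comparing subclasses against the classes they inherit from.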

Project files
# of code lines
# of constants (excluding enum constants)
# of control statements divided by # of all executable statements
# of files that a file uses
# of files that use a file
# of logical source lines
# of procedures (including subs, functions, property blocks, API declarations and events)
# of variables, including arrays, parameters and local variables
% of comment lines counted as full-line comments per logical lines
% of whitespace lines counted from logical lines
File size in kilobytes
End-of-line comments that have meaningful content
Meaningful comments divided by # of logical lines of code
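The comment-density KPI above is an explicit formula: meaningful comments divided by logical lines of code. A sketch for hash-style comments; what counts as "meaningful" is an assumption here (a comment of at least three words), since the list does not define it:

```python
def comment_density(source: str) -> float:
    """% comment density: meaningful comments / logical lines of code.

    "Meaningful" is an illustrative heuristic: >= 3 words in the comment.
    """
    logical, meaningful = 0, 0
    for raw in source.splitlines():
        line = raw.strip()
        if not line:
            continue                      # whitespace lines count as neither
        if line.startswith("#"):
            if len(line.lstrip("#").split()) >= 3:
                meaningful += 1
        else:
            logical += 1
    return 100.0 * meaningful / logical if logical else 0.0

sample = "\n".join([
    "# compute the running total",      # meaningful (4 words)
    "total = 0",
    "# loop",                           # too short to count
    "for x in items:",
    "    total += x",
])
print(round(comment_density(sample), 1))  # 1 comment / 3 logical lines = 33.3
```

Filtering out short or empty comments is what separates this KPI from a raw comment-line count: it resists being gamed by padding the code with `#` lines.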

Web


Server
Buffer size of router
Host latency
# of domain name (and aliases)
# of files on server
# of geographical locations
# of internet nodes mapped to same domain name
# of sub-sites
# of Web pages on server
# of sessions refused by server
Server connection time
Server response time
Files by traffic % (e.g., % of files for % of traffic)
HTTP node classification (inaccessible, redirection, accessible; these classifications will be time-sensitive; see volatility metric below)
Internet node identification (IP address and port)
Pages by traffic % (e.g., % of pages for % of traffic)
Ratio of explicit clicks to implicit clicks for server
Server-side filtering (robots.txt, firewalls)
Top-level domain (com, edu)
Volatility level (summarized during the given time period)

User
# of files transferred per user
# of pages transferred per user
# of unique files transferred per user
# of unique pages transferred per user
# of unique Web sites visited per user
Type of user access method (ISP, dial-up modem, wireless network, etc.)
# of Web sites visited per user
Data filtering imposed by user (which client filters have been activated by the user)
Inter-request time per user (request to request time)
Inter-session time per user (session to session time)
Intra-request time per user (request to render time)
Path length of sessions per user
Path length of visit per site per user
Ratio of embedded clicks
Ratio of explicit clicks to implicit clicks
Reoccurrence rates for files, pages, and sites
Sessions per user per time period
Stack distance per user
Temporal length of sessions per user
Temporal length of visit per site per user
User classification (adult, child, professional user, casual user, etc)
User response rate and attrition rate

Site
# of bytes
# of cookies supplied
# of levels in site's internal link structure (depth)
# of pages served per time period
# of search engines indexing the site
Type of Web collections
# of unique Web sites (filter out Web sites located at multiple IP addresses)
# of user web page requests per time period
# of Web collections
# of Web pages
# of Web servers
# of Web site publishers
# of Web sites
% breakdown of protocols across the periphery
% of site devoted to CGI / dynamic content
% of textual description of site's content
Byte latency
Bytes transferred per time period
Network traffic (bytes transferred, Web pages accessed)
Ratio of size of periphery

Pages
# and type of embedded non-text objects (images, video, streaming data, applets)
Type of content access scheme (free, pay-per-view, subscription)
Type of collection (online journal, photo gallery)
# of Web pages in collection
% breakdown of mime types in hyperlinks
% breakdown of protocols in hyperlinks
% of textual description of page's content
Aggregate size of constituent Web resources (in bytes)
Average # of hyperlinks per page
Birth and modification history (major revisions of content — from HTTP header)
Ratio of internal to external links on page
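The last entry, the ratio of internal to external links on a page, is straightforward to compute from the page markup. A sketch using only the standard library; the page HTML and the site host are illustrative assumptions:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkCounter(HTMLParser):
    """Count internal vs external <a href> links for one site host."""
    def __init__(self, site_host: str):
        super().__init__()
        self.site_host = site_host
        self.internal = 0
        self.external = 0

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href") or ""
        host = urlparse(href).netloc
        if host and host != self.site_host:
            self.external += 1
        else:
            self.internal += 1   # relative links count as internal

# Hypothetical page fragment for example.com.
page = ('<a href="/about">About</a>'
        '<a href="https://example.com/faq">FAQ</a>'
        '<a href="https://other.org/">Elsewhere</a>')
counter = LinkCounter("example.com")
counter.feed(page)
print(counter.internal, counter.external)  # 2 1
```

A real crawler would also normalize subdomains (`www.example.com` vs `example.com`) before comparing hosts; that is omitted here for brevity.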

Training


General
# of attendees at user training sessions
# of hours users have spent on training services
# of incidents caused by deficient user and operational documentation and training
# of incidents caused by deficient user training
# of users trained successfully
Hours of user training
Ratio of IT investment to IT staff training
Satisfaction scores for training and documentation
Time lag between changes and updates of documentation and training material

Source: https://habr.com/ru/post/214477/

