Saturday, 2 December 2017

LSIAgent Monitoring software

SysTrack LSIAgent - collects performance data

Writes a 130 MB Access database file at:
C:\Program Files (x86)\SysTrack\LsiAgent\MDB\collect.mdb



DESCRIPTION
Checks for CPU being overrun by one or more processes resulting in CPU performance issues.
Checks for unusual CPU behavior indicative of hardware failure, HAL mismatch, or other driver anomalies.
Checks most commonly used applications for slow startup issues.
Checks for applications running a high number of times per day for the 3 previous days.
Checks for high gateway latency.
Checks for the number of times the system has experienced a blue screen in the previous 24 hours.
Checks for high network latency when accessing remote drives.
Checks for any network connection that appears to be dropped.
Checks for high network latency on the application dependency server.
Checks that VMware Tools are installed.
Checks RDP session latency during the past 3 hours.
The Operational Alarm indicates whether the Analysis Tools found a new record in the SysTrack system database table for the system being monitored. Under normal (default) conditions, a new record should be found during each sampling interval. If no record is found, an alarm is triggered.
Monitors the percentage of elapsed time that the processors on the system are busy executing non-idle threads. On a multiprocessor system, if all processors are always busy, the % Total CPU Time is 100%. It can be viewed as the fraction of the total time spent doing useful work. This value is a good indicator of the demand for and efficiency of a microprocessor. A lower percentage is better.
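The busy fraction described above can be derived from two samples of per-processor idle and total tick counters. A minimal sketch (the function name and tuple layout are illustrative, not the SysTrack API):

```python
# Hedged sketch: compute % Total CPU Time from two samples of
# (idle_ticks, total_ticks) per logical processor.

def total_cpu_percent(sample_a, sample_b):
    """Each sample is a list of (idle_ticks, total_ticks), one tuple
    per logical processor. Returns busy time as a percentage of the
    interval, aggregated across all processors."""
    idle_delta = sum(b[0] - a[0] for a, b in zip(sample_a, sample_b))
    total_delta = sum(b[1] - a[1] for a, b in zip(sample_a, sample_b))
    if total_delta == 0:
        return 0.0
    return 100.0 * (1 - idle_delta / total_delta)

# Two processors: one fully busy, one fully idle over the interval
before = [(1000, 5000), (4000, 5000)]
after = [(1000, 6000), (5000, 6000)]
print(total_cpu_percent(before, after))  # 50.0
```

With one processor fully busy and one fully idle, the aggregate is 50%, matching the "all processors always busy = 100%" definition above.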
Monitors the percentage of elapsed time that processors on the system spent handling hardware interrupts. When a hardware device interrupts the microprocessor, the interrupt handler manages the condition, usually by stopping the device from interrupting and scheduling a deferred procedure call (DPC) to service the device. Time spent in DPCs is not counted as time in interrupts. Device problems can sometimes make the system appear CPU-bound. This happens when the processing of interrupts caused by serial port, disk, network, timer, and other hardware device activities consume the processor. A malfunctioning hardware component or device driver can cause many interrupts, causing the processor to spend too much time in interrupt service routines. High-speed UARTs used in non-intelligent serial ports can also cause large numbers of interrupts. The processor should spend a relatively small amount of time servicing interrupts. A lower percentage is better.
Monitors the percentage of elapsed time that processors on the system spend servicing deferred procedure calls (DPCs). The Windows operating system interrupt architecture permits the bulk of the work normally done in an interrupt handler to be handled instead at the DPC level, between interrupt priority and normal application processing priority. A device driver's interrupt handler puts DPC packets into a queue that describes the work to be done and then exits. When there are no more interrupts to be serviced, the system checks the queue for DPCs that need to be executed. A DPC executes below interrupt priority and thus permits other interrupts to occur. No application thread executes any code until all the pending DPCs execute. If a busy processor is spending a large amount of time servicing interrupts and DPCs, it is likely the processor cannot function effectively and a processor bottleneck will probably develop. A malfunctioning hardware component or device driver can erroneously cause many DPCs to be queued, causing the processor to spend too much time servicing DPCs. The processor should spend a relatively small amount of time servicing DPCs. A lower percentage is better.
Monitors the rate of switches from one thread to another. Thread switches can occur either inside a single process or across processes. A thread switch may be caused either by one thread asking another for information or by a thread being preempted by a higher priority thread becoming ready to run. A high context switch rate can also be caused by a problem with a network card or device driver. A lower rate is better.
Monitors the rate that the system is issuing read and write operations to file system devices. It does not include File Control Operations. The design characteristic 'locality of reference' that improves an application's use of cache and memory reduces transfers to/from the disk. When a program is designed so that data references are localized, the data the program needs is more likely to be in its working set, or at least in the cache, and is less likely to be paged out when it is needed. A lower rate is better.
Monitors the aggregate rate (per second) of file system operations that are neither a read nor a write. These operations usually include file system control requests or requests for information about device characteristics or status. A malfunctioning device driver or hardware component can report status changes very frequently, causing a high file control operation rate. A poorly designed application can also cause a high file control operation rate. Heavy serial port usage can also cause a high file control operation rate because status operations are counted as control operations. A lower rate is better.
Monitors the number of threads in the processor queue. Unlike the disk counters, this counter shows ready threads only, not threads that are running. There is a single queue for processor time even on computers with multiple processors. Therefore, if a computer has multiple processors, you need to divide this value by the number of processors servicing the workload. A sustained processor queue of fewer than 10 threads per processor is normally acceptable, depending on the workload.
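The per-processor normalization above can be sketched as follows; the 10-threads-per-processor guideline is the one stated in the text, and is workload dependent:

```python
# Hedged sketch: normalize the (system-wide) processor queue length by
# the number of logical processors before comparing it against the
# ~10-per-processor guideline. Tune the limit for your workload.

def queue_per_processor(queue_length, processor_count):
    return queue_length / processor_count

def queue_is_acceptable(queue_length, processor_count, limit=10):
    return queue_per_processor(queue_length, processor_count) < limit

print(queue_per_processor(24, 4))   # 6.0
print(queue_is_acceptable(24, 4))   # True
print(queue_is_acceptable(48, 4))   # False
```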
Monitors the rate at which deferred procedure calls (DPCs) were added to the processors' DPC queues between the timer ticks of the processor clock. DPCs are interrupts that run at a lower priority than standard interrupts. Each processor has its own DPC queue. This counter measures the rate at which DPCs were added to the queue, not the number of DPCs in the queue. This counter displays the last observed value only; it is not an average.
Monitors the number of hardware device interrupts that the system is experiencing per second. A device interrupts the microprocessor when it has completed a task or when it requires attention. Normal thread execution is suspended during interrupts. An interrupt may cause the microprocessor to switch to another, higher priority thread. The devices that generate interrupts are the system timer, the mouse, data communication lines, network interface cards, and other peripheral devices. Timer interrupts are frequent and periodic and create a background level of interrupt activity. A failing hardware component can erroneously generate interrupts, producing a very high rate. A faulty device driver can fail to properly service its device, also leading to high interrupt rates. Non-intelligent serial ports can generate an interrupt for every one to sixteen characters sent and received, leading to high interrupt rates. A lower rate is better.
Monitors the rate (per second) at which pages are read from the disk to resolve memory references to pages that were not in memory at the time of the reference. This counter includes paging traffic on behalf of the system cache to access file data for applications. This is an important statistic to monitor if you are concerned about excessive memory pressure ('thrashing'), and the excessive paging that may result. Pages Input/sec, however, also accounts for such activity as the sequential reading of memory mapped files, whether cached or not. A lower rate is better.
Monitors the rate (per second) at which pages are written to disk to free up space in physical memory. Pages are written back to disk only if they are changed in physical memory, so they are likely to hold data (which is frequently modified by processes), not code (which is usually not modified). A high Page Output rate often indicates a memory shortage. The Windows operating system writes more pages back to disk to free up space when physical memory is in short supply (in order to free up space for faulted pages that must be paged in). A high Page Output rate indicates that most faulting activity is for data pages and that memory is becoming scarce. If memory is available, changed pages are retained in a list in memory and written to disk in batches. A lower rate is better.
Monitors the rate (per second) at which read operations from disk are initiated to resolve hard page faults. Hard page faults occur when a process requires code or data that is not in its working set or elsewhere in physical memory, and the code or data must be retrieved from disk. This statistic is a primary indicator of the kinds of faults that cause system-wide bottlenecks. It includes reads to satisfy faults in the file system cache (usually requested by applications) and in non-cached mapped memory files. This statistic is a measure of the rate of read operations, without regard to the numbers of pages retrieved by each operation. A lower rate is better.
Monitors the rate (per second) at which write operations to the disk are initiated to free up space in physical memory. Pages are written to disk only if they are changed while in physical memory, so they are likely to hold data (which is frequently modified by processes), not code (which is usually not modified). This statistic is a measure of the write operations rate, without regard to the number of pages written in each operation. A high page output rate often indicates a memory shortage. The Windows operating system writes more pages back to disk to free up space when physical memory is in short supply (in order to free up space for faulted pages that must be paged in). A high Page Write rate indicates that most faulting activity is for data pages and that memory is becoming scarce. If memory is available, changed pages are retained in a list in memory and written to disk in batches. A lower rate is better.
Monitors the overall rate (per second) at which faulted pages are handled by the processor. A page fault occurs when a process requires code or data that is not in its working set (its space in physical memory) in main memory. The Page Fault rate includes both hard faults (those that require disk access) and soft faults (where the faulted page is found elsewhere in physical memory). A soft fault occurs when the needed page is on the standby list, already in main memory, or is in use by another process that shares the page. Most processors can handle large numbers of soft faults without consequence. Hard faults, however, can cause significant delays. A lower rate is better.
Monitors the ratio of Transition Faults/sec to Page Faults/sec. The Transition Fault Rate is the rate at which page faults are resolved by recovering pages that were in transition. As the memory manager attempts to retrieve space from running processes, it trims some pages from their working sets and, if the pages have been modified, the memory manager puts them on the modified page list. When memory is scarce, it starts to write them to disk right away in hopes of freeing the frames, holding them, and satisfying the backlog of page faults. Once a write starts on a page, it becomes a 'transition' page. However, if the page is re-referenced by the process, a page fault occurs because the page was trimmed from the working set. The page is found by the memory manager on the transition list, and placed back in the process' working set. The modified page writer tries to delay the writes so that it doesn't have to rewrite the pages, but when there is little free memory, it has no chance to delay. It must write pages to free up space as quickly as possible. Thus, a high transition fault ratio usually indicates a memory shortage. A lower rate is better.
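The ratio described above is a simple quotient of the two underlying counters. A minimal sketch (a sustained high ratio suggests memory pressure; any alerting threshold would be an assumption, so none is shown):

```python
# Hedged sketch: transition fault ratio from the two underlying
# per-second counters. Guards against a zero page fault rate.

def transition_fault_ratio(transition_faults_per_sec, page_faults_per_sec):
    if page_faults_per_sec == 0:
        return 0.0
    return transition_faults_per_sec / page_faults_per_sec

print(transition_fault_ratio(300, 1000))  # 0.3
```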
Monitors the percentage of the page file(s) that is/are in use. Paging files are used to store pages of memory used by processes that are not contained in other files when they are swapped out to disk. Pages are swapped out to disk when they are not being used in order to make maximum use of physical memory. All processes share paging files, so lack of space in paging files can prevent other processes from allocating memory. The operating system can be set to automatically increase the size of the paging file as needed (disk space permitting), or can restrict the page file size to a specific limit. Increasing the size of the paging file or adding more memory can decrease Page File Usage. Application programs that allocate memory but fail to free it ('leaking memory') consume page file space until they are terminated. Such faulty applications should be repaired. A lower % is better.
Monitors the number of available bytes - this is a measure of free memory. Free memory is the amount of physical memory available to processes running on the system. It is calculated by summing space on the Zeroed, Free, and Standby memory lists. Free memory is ready for use. Zeroed memory is memory filled with zeros to prevent later processes from seeing data used by a previous process. Standby memory is memory removed from a process' working set (its physical memory) en route to disk, but is still available to be recalled. The Virtual Memory Manager continually adjusts the space used in physical memory to maintain a minimum number of available bytes for the operating system and processes. When available bytes are plentiful, the Virtual Memory Manager lets the working sets of processes grow, or keeps them stable by removing an old page for each new page added. When available bytes are few, the Virtual Memory Manager must trim the working sets of processes to maintain the minimum required. The memory manager seeks to maintain at least 4 MB of free space. More available memory is better.
Monitors the number of times per second that the Cache Manager fails to find a file's page in the immediate cache. When this occurs, the Cache Manager must ask the Memory Manager to locate the page elsewhere in memory or on the disk so that it can be loaded into the immediate cache. If the page must be brought in from disk, this results in disk activity and a delay to the process that needed the page (while the page is brought in). The file system cache is an area of physical memory that stores recently used pages of data retrieved from disk or network file systems for applications. The system Cache uses memory not in use by active processes in the computer. The Windows operating system continually adjusts the size of the cache, making it as large as it can while still preserving the minimum required number of available bytes for processes. It is less likely that the pages needed by an application will be found in the cache as it becomes smaller. This can reduce application performance by requiring more disk and file system accesses. Lower is better.
Monitors the size of virtual memory that has been committed (as opposed to simply reserved) as a percentage of the total virtual memory that can be committed without having to extend the paging file(s). Committed memory is physical memory for which space has been reserved on the disk paging files. There can be one paging file on each logical drive. If the paging file(s) are expanded, the Commit Limit increases accordingly. You can also increase the commit limit by increasing the amount of memory in the system. For a given level of virtual memory usage (Commit Bytes), increasing the Commit Limit reduces the Commit Ratio. When the Commit Ratio reaches 100%, all available virtual memory has been consumed and additional memory cannot be allocated by applications. This may cause application failures. Lower is better.
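The Commit Ratio calculation described above can be sketched directly; the counter values below are made-up illustrations, not real measurements:

```python
# Hedged sketch: Commit Ratio as a percentage of the commit limit.
# In practice the inputs would come from the Memory\Committed Bytes
# and Memory\Commit Limit performance counters.

def commit_ratio_percent(committed_bytes, commit_limit_bytes):
    return 100.0 * committed_bytes / commit_limit_bytes

GB = 1024 ** 3
print(commit_ratio_percent(12 * GB, 16 * GB))  # 75.0

# Raising the limit (bigger paging file or more RAM) lowers the ratio
# for the same committed bytes:
print(commit_ratio_percent(12 * GB, 24 * GB))  # 50.0
```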
Monitors the number of processes in the system at the time of data collection. This is an instantaneous count, not an average over a time interval. Each process represents the running of a program or a service. The operating system uses certain processes to provide its functionality. A large number of processes may indicate overuse of the system. Lower is better.
Monitors the number of threads currently active in the process. An instruction is the basic unit of execution in a processor, and a thread is the object that executes instructions. Every running process has at least one thread.
Monitors the number of clients currently connected to a terminal server. This is an instantaneous count and not averaged over time, except in the summary records. This counter is important to terminal server monitoring as it can show the time periods when sessions are at a peak. The load on a server at the time of the peak number of active connections can be used as a sizing measure to gauge if you need more (or better) servers to sustain a concurrent number of applications and users in your environment. This counter should not be confused with the number of licenses in use (unless you only have one terminal server). Licenses are based on seats, not sessions.
Monitors the number of server session connections. This is an instant count and not averaged over time, except in the summary records. A server connection is any connection to a server, including mapped drives, database connections, service connections, etc. This is not just the number of user connections, but each instance of connection to the server. When watched over time, the number of server connections should fluctuate as the number of non-persistent connections start and end. The server connection count is useful on specialized servers (e.g. SQL Server, Terminal Server, etc.) to give an indication of client activity. It can also be an indicator of server-client licensing compliance when the server is licensed on a per connection basis. Lower is better.
Monitors the ability to send email notifications for triggered alarms.
Monitors the system agent's ability to condense and send data to the Master System before the attempt times out.
Monitors the percentage of elapsed time that the selected disk drive was busy servicing read or write requests.
Monitors the rate (per second) of read operations on the disk. A lower rate is better.
Monitors the rate (per second) of write operations on the disk. A lower rate is better.
Monitors the length of time (in milliseconds) of an average disk transfer. This statistic measures the average time of each data transfer, regardless of the number of bytes read or written. It shows the total time of the read or the write. Average Disk sec/Transfer is usually in multiples of the time it takes for one rotation of the disk. Any variation usually represents time for driver processing or time in the queue. For example, on a 7200-RPM disk, the actual Average Disk sec/Transfer would be in multiples of approximately 8.3 milliseconds (60,000 ms ÷ 7200 rotations per minute). Lower is better.
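The rotation-time unit mentioned above follows directly from the spindle speed, e.g. roughly 8.3 ms per rotation at 7200 RPM:

```python
# Hedged sketch: milliseconds per platter rotation at a given spindle
# speed -- the unit in which Average Disk sec/Transfer tends to cluster
# on rotating disks.

def rotation_ms(rpm):
    return 60_000 / rpm  # 60,000 ms per minute / rotations per minute

print(round(rotation_ms(7200), 1))   # 8.3
print(round(rotation_ms(15000), 1))  # 4.0
```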
Monitors the current length of the server work queue for the CPU. A sustained queue length greater than four might indicate processor congestion. This is an instantaneous count, not an average over time.
Monitors the percentage of total usable space on the selected logical disk drive that is free.
Monitors the rate at which I/O requests to the disk are split into multiple requests. A split I/O may result from a request too large to fit into a single I/O, or from disk fragmentation on single-disk systems. Disks with a high % Disk I/O Fragmented may need to be defragmented to improve performance. It is important to note that you may have a high % of Disk I/O Fragmentation and yet the defrag utility reports that the drive does not need to be defragmented. The defrag utility calculates fragmented files across the whole volume; there could be a segment of the volume where frequently accessed files are heavily fragmented and impacting performance. A Solid-State Drive (SSD) may display a high % Fragmented I/O with no performance impact. Lower is better.
Monitors the number of bytes per second retrieved from the disk (read).
Monitors the number of bytes per second sent to the disk (write).
Monitors the percentage of network bandwidth in use on the network interface. It is important to note that most networks are fully saturated at less than 100% utilization. For example, as Ethernet networks become congested, collisions occur far more frequently, degrading performance and causing rapid saturation. A lower % is better.
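The utilization percentage above can be computed from a byte-counter delta and the interface's nominal speed; the figures below are illustrative, and (as noted) real saturation sets in well below 100% on shared-media Ethernet:

```python
# Hedged sketch: interface utilization from bytes moved over a sampling
# interval versus the interface's nominal bandwidth in bits per second.

def utilization_percent(bytes_delta, interval_s, nominal_bits_per_sec):
    bits_per_sec = bytes_delta * 8 / interval_s
    return 100.0 * bits_per_sec / nominal_bits_per_sec

# 60 MB moved in 10 s on a 100 Mbit/s link
print(utilization_percent(60_000_000, 10, 100_000_000))  # 48.0
```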
Monitors the total number of packets received per second on a network interface. Each physical packet counts as one frame, independent of the number of bytes contained in the packets. Network packets per second are useful in measuring the traffic load on a network interface. A lower rate is better.
Monitors the total number of bytes received per second on a network interface. The total received bytes are counted independent of the number of packets in which the data is received. Network bytes per second are useful in measuring the traffic load on a network interface. A lower rate is better.
Monitors the rate of receipt of network packets whose destination address is a broadcast address. All nodes on the network segment receive packets sent to the broadcast address. Communications between network routers regarding the network topology, advertisements from various systems, and certain protocols used in translating names to addresses are all examples of broadcast packet use. High broadcast packet rates are especially troublesome because they consume network bandwidth and resources (CPU, memory, and I/O) on all systems on the network segment (since they all receive such packets). A lower rate is better.
Monitors the length of the network output queue in packets. This is an instantaneous count, not an average over time.
Monitors the percentage of previously transmitted bytes that were retransmitted. A lower % is better.
Monitors the estimated current bandwidth of the network interface measured in megabits per second. For interfaces that do not vary in bandwidth or for those where no accurate estimation can be made, this value is the nominal bandwidth.
Monitors the CPU usage for services and processes. The rule will have a critical or warning level when a service or application's CPU usage is above the CPU High Limit defined in the Deployment Tool.
Monitors the CPU usage for services and processes. The rule will have a critical or warning level when a service or application's CPU usage is below the CPU Low Limit defined in the Deployment Tool.
Monitors the working set usage for services and processes. The rule will have a critical or warning level when a service or application's working set usage is above the Working Set High Limit defined in the Deployment Tool.
Monitors the working set usage for services and processes. The rule will have a critical or warning level when a service or application's working set usage is below the Working Set Low Limit defined in the Deployment Tool.
Monitors the virtual memory usage for services and processes. The rule will have a critical or warning level when a service or application's virtual memory usage is above the Virtual Size High Limit defined in the Deployment Tool.
Monitors the virtual memory usage for services and processes. The rule will have a critical or warning level when a service or application's virtual memory usage is below the Virtual Size Low Limit defined in the Deployment Tool.
Monitors the I/O rate for services and processes. The rule will have a critical or warning level when a service or application's I/O rate is above the I/O Rate High Limit defined in the Deployment Tool.
Monitors the I/O rate for services and processes. The rule will have a critical or warning level when a service or application's I/O rate is below the I/O Rate Low Limit defined in the Deployment Tool.
Checks if services are responding. The rule will have a critical or warning level when a service has not responded for the length of time (in seconds) defined in the Deployment Tool.
Checks if applications are responding. The rule will have a critical or warning level if an application has not responded for the length of time (in seconds) defined in the Deployment Tool.
Monitors for unexpected service or application statuses.
Checks for the number of distinct times an exe (either an application or service exe) is running concurrently on the system. If the rule has a critical or warning level, this indicates that the number of instances is outside the Deployment Tool's defined upper or lower limit.
Checks for detected memory leaks. Memory leaks are failures to release unused memory by a computer program. It is unnecessary memory consumption. A memory leak occurs when the program loses the ability to free the memory. A memory leak diminishes the performance of the computer, as it becomes unable to use all its available memory. Excessive memory leaks can lead to program failure after a sufficiently long period of time. Smaller is better.
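One common way to quantify a suspected leak like those described above is to fit a trend line to periodic memory samples and treat the slope as a leak rate. This is only a sketch of that idea, not the agent's actual detection algorithm:

```python
# Hedged sketch: estimate a leak rate (bytes per minute) as the
# least-squares slope of periodic working-set samples. A sustained
# positive slope across many samples suggests memory is never freed.

def leak_rate_bytes_per_min(samples):
    """samples: list of (minutes_elapsed, bytes_in_use) measurements."""
    n = len(samples)
    mean_t = sum(t for t, _ in samples) / n
    mean_b = sum(b for _, b in samples) / n
    num = sum((t - mean_t) * (b - mean_b) for t, b in samples)
    den = sum((t - mean_t) ** 2 for t, _ in samples)
    return num / den

# Working set growing steadily by ~1 MB per minute (made-up data)
samples = [(0, 100_000_000), (1, 101_000_000),
           (2, 102_000_000), (3, 103_000_000)]
print(round(leak_rate_bytes_per_min(samples)))  # 1000000
```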
Checks for detected non-paged pool leaks. These are a measurement, in bytes per minute, of memory leaking non-paged or non-paging data on any Windows-based application. Non-Paged Pool Memory is typically a smaller pool of memory and cannot be paged out to disk. The nonpaged pool is a set of virtual memory pages that always remain resident in RAM. Device drivers and the OS use the nonpaged pool to store data structures that must stay in physical memory and can never be paged out to disk. (For example, the TCP/IP driver must allocate some amount of nonpaged memory for every TCP/IP connection that is active on the computer, for data structures that are required during processing of network adapter interrupts when page faults cannot be tolerated.) A device driver with a memory leak will eventually exhaust the supply of Nonpaged Pool Memory, which will cause subsequent allocations that request the nonpaged pool to fail. Running out of space in the nonpaged pool almost always results in a blue screen. Smaller is better.
Checks for detected paged pool leaks. These are a measurement, in bytes per minute, of memory leaking paged data on any Windows-based application. Paged Pool Memory can be paged out to disk. Virtual memory for various system functions, including shared memory files (like DLLs), is allocated from the Paged Pool, which is an area of the system's virtual memory that is limited in size. A program with a memory leak that is allocating, but never freeing memory from the Paged Pool will eventually exhaust the Paged Pool. Subsequent allocations that request the Paged Pool will then fail, with unpredictable results causing applications to act erratically and in some cases failing. Smaller is better.
Checks for detected handle leaks. A handle leak occurs when a computer program asks for a handle to a resource but does not free the handle when it is no longer used. If this occurs frequently or repeatedly over an extended period of time, a large volume of handles may be marked in-use and as a result, unavailable. This can cause performance problems or a crash. Smaller is better.
Checks for detected GDI object leaks. A GDI Object is an object from the Graphics Device Interface (GDI) library of application programming interfaces (APIs) for graphics output devices. This is the measurement, in kilobytes per minute, of GDI object leaks on any Windows-based application. No leaks are preferred.
Checks for detected user object leaks. A User Object is an object from Windows Manager which includes windows, menus, cursors, icons, hooks, accelerators, monitors, keyboard layouts, and other internal objects. This is the measurement, in kilobytes per minute, of user object leaks on any Windows-based application. No leaks are preferred.
Checks for any changes in change items being monitored. The change items are selected for monitoring via the Deployment Tool.
Checks for visits to unauthorized web sites. If the rule does not pass, this indicates that at least one web site has been visited that is specified as a web site that may not be visited (specified via the Deployment Tool).
If a system has more than 300 active event log alarms, an Event Log Processing Paused alarm is triggered. Alarms will not be triggered for the following Event Logs until the currently triggered alarms are manually cleared:
Application Event Log
System Event Log
Security Event Log
Other Event Log
Checks for active Application Event Log entries.
Checks for active System Event Log entries.
Checks for active Security Event Log entries.
Checks for active Event Log entries other than Application, System, or Security Event Log entries.
Checks for attempts to use an application that has been locked via Deployment Tool's SysLock Application Locking feature. A triggered alarm indicates that an attempt was made.
Checks to see if the Master System has received an SNMP trap from a source other than a Child System. For example, if there is an SNMP router enabled on your network, the router can be configured to send its SNMP traps to the Master System. You can use Deployment Tool's SNMP Monitoring feature to add an SNMP alarming rule that allows SysTrack to respond to specified object identifiers (OIDs) received from this router.
Checks for application errors.
Checks for application hangs.
Checks for blue screens.
Checks for attempts to visit a website that has been blocked via Deployment Tool's SysLock Web Locking feature.
Checks for systems added to the SysTrack deployment tree during the last auto-deployment.
Checks for systems scheduled to be uninstalled from the SysTrack deployment tree during the last auto-deployment.
Checks for systems that would have been added to the SysTrack deployment tree (during the last auto-deployment) had the Deployment Tool not been running in test mode.
Checks for systems that would have been scheduled to be uninstalled from the SysTrack deployment tree (during the last auto-deployment) had the Deployment Tool not been running in test mode.
Deployment Tool Scripting and Response Time custom alarms are used to provide application-specific monitoring. Each alarm is based on execution of a named script or program. This rule will have a critical or warning level when the time required for the script to complete exceeds the specified limit for the specified time period.
Deployment Tool Scripting and Response Time custom alarms are used to provide application-specific monitoring. Each alarm is based on execution of a named script or program. This rule will have a critical or warning level when an attempt to run the script fails.
Deployment Tool Scripting and Response Time custom alarms are used to provide application-specific monitoring. Each alarm is based on execution of a named script or program. This rule will have a critical or warning level when the value returned by the script exceeds the specified limit for the specified time period.
In addition to the provided performance counters, the Deployment Tool allows you to add custom performance counters that can be monitored through alarms. This rule will have a critical or warning level alarm when the specified parameters are not met.
Checks for COM+ bounds created during the last measurement period. A bound object is one that has been created, including both active and pooled objects. The 'binding' process, which can be either early or late, is the act of connecting a client process to a serving process.
Checks for COM+ calls completed during the last measurement period.
Checks for COM+ calls with unhandled exceptions during the last measurement period.
Checks for COM+ component instances for the component in an instance container that are currently performing a method call.
Checks for COM+ objects that are active in a pool, ready to be used by any client that requests the component.
Checks for COM+ references (open handles).
Checks the average call duration of COM+ method calls.
Checks the amount of guest physical memory (in megabytes) in use by the virtual machine (VM). Active memory is estimated by VMkernel statistical sampling and represents the actual amount of memory the VM needs. The value is based on the current workload of the virtual machine.
Checks for the amount of guest physical memory (in megabytes) reclaimed from the virtual machine (VM) by the balloon driver.
Checks for virtual machine (VM) 'physical' memory (in megabytes) that is mapped to machine memory. Mapped memory includes shared memory amount, but does not include memory overhead.
Checks for machine memory (in megabytes) allocated to a virtual machine (VM) beyond its reserved amount.
Checks for guest 'physical' memory shared (in megabytes) with other virtual machines (through the VMkernel's transparent page-sharing mechanism, a RAM deduplication technique).
Checks for the total machine memory saved (in megabytes) due to memory sharing between virtual machines (VMs) on a common host.
Checks for guest physical memory swapped out to the virtual machine's (VM's) swap file by the VMkernel (in megabytes). Swapped memory stays on disk until the VM needs it. This statistic refers to VMkernel swapping and not to guest OS swapping.
Checks for the balloon target memory (in megabytes) estimated by the VMkernel. This is the desired amount of virtual machine (VM) balloon memory. If the balloon target amount is greater than the balloon amount, the VMkernel inflates the balloon amount, which reclaims more VM memory. If the balloon target amount is less than the balloon amount, the VMkernel deflates the balloon, which allows the VM to reallocate memory when needed.
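The inflate/deflate decision described above can be sketched as a small comparison. This is an illustrative helper, not a VMware API; the function name and units are assumptions:

```python
def balloon_action(balloon_target_mb: float, balloon_mb: float) -> str:
    """Sketch of the VMkernel balloon decision described above.

    If the target exceeds the current balloon size, the VMkernel inflates
    the balloon (reclaiming VM memory); if the target is smaller, it
    deflates (letting the VM reallocate memory).
    """
    if balloon_target_mb > balloon_mb:
        return "inflate"
    if balloon_target_mb < balloon_mb:
        return "deflate"
    return "hold"
```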
Checks the percentage of time that the virtual machine (VM) was ready, but could not get scheduled to run on the physical CPU. CPU ready time is dependent on the number of VMs on the host and their CPU loads.
Checks amount of time that it takes data to traverse a network connection (a measure of latency).
It is possible to override the SysTrack Agent's suspend command, meaning that the default scheduling is not used. This rule checks for those override exceptions.
Checks the percentage of total usable space on the selected logical disk drive that was free.
Monitors the CPU usage in megahertz during the interval. This rule uses a VMware host side counter.
Monitors the actively used CPU, as a percentage of the total available CPU, for each physical CPU on the host. This rule uses a VMware host side counter.
Monitors the total time elapsed, in seconds, since last system startup. This rule uses a VMware host side counter.
Monitors the memory usage as percentage of available machine memory. For a host, the percentage is calculated as follows: consumed memory ÷ machine memory size. This rule uses a VMware host side counter.
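The consumed ÷ machine memory calculation above is a simple ratio; a minimal sketch with a hypothetical helper name:

```python
def host_memory_usage_pct(consumed_kb: float, machine_memory_kb: float) -> float:
    """Host memory usage as a percentage: consumed memory / machine memory size."""
    return 100.0 * consumed_kb / machine_memory_kb
```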
Monitors the aggregated disk I/O rate in kilobytes per second. For hosts, this metric includes the rates for all virtual machines (VMs) running on the host during the collection interval. This rule uses a VMware host side counter.
Monitors the average number of kilobytes per second read from the disk during the collection interval. This rule uses a VMware host side counter.
Monitors the average number of kilobytes per second written to disk during the collection interval. This rule uses a VMware host side counter.
Monitors the average rate (in kilobytes per second) at which data was transmitted across each physical network interface controller (NIC) instance on the host during the interval. This represents the bandwidth of the network. This rule uses a VMware host side counter.
Monitors the average rate (in kilobytes per second) at which data was received across each physical network interface controller (NIC) instance on the host during the interval. This represents the bandwidth of the network. This rule uses a VMware host side counter.
Monitors the average rate (in megabits per second) at which data is transmitted and received across all network interface controller (NIC) instances connected to the host. This rule uses a VMware host side counter.
Checks the host server CPU core consumption. This is indicative of the level of demand placed on the system by the virtual machines (VMs) that it hosts. This rule uses a VMware host side counter.
Checks for the percentage of time virtual machines (VMs) on the host system are unable to complete processor transactions due to contention over physical CPUs. This rule uses a VMware host side counter.
Checks the percentage of time that the virtual machine (VM) was ready, but could not get scheduled to run on the physical CPU. CPU ready time is dependent on the number of virtual machines on the host and their CPU loads. This rule uses a VMware host side counter.
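A summed CPU-ready value reported in milliseconds is commonly converted to a percentage of the sampling interval. A sketch under that assumption (the 20-second interval in the test is illustrative, not a SysTrack default):

```python
def cpu_ready_pct(ready_ms: float, interval_s: float) -> float:
    """Convert summed CPU-ready milliseconds into a percentage of the
    real-time sampling interval."""
    return 100.0 * ready_ms / (interval_s * 1000.0)
```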
Checks the amount of time in milliseconds the virtual machine (VM) is waiting for swap page-ins. CPU Swap Wait is included in CPU Wait. This rule uses a VMware host side counter.
Checks the total time in milliseconds that a virtual CPU is not runnable. It could be idle (halted) or waiting for an external event such as I/O. This rule uses a VMware host side counter.
Checks for the highest latency value (in milliseconds) across all datastores used by the host. This rule uses a VMware host side counter.
Checks for the highest latency value (in milliseconds) across all disks used by the host. Latency measures the time taken to process a SCSI command issued by the guest OS to the virtual machine (VM). The kernel latency is the time VMkernel takes to process an IO request. The device latency is the time it takes the hardware to handle the request. This rule uses a VMware host side counter.
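Assuming the decomposition described above, total command latency is the kernel latency plus the device latency; a minimal sketch with illustrative names:

```python
def total_scsi_latency_ms(kernel_ms: float, device_ms: float) -> float:
    """Total time to process a SCSI command: VMkernel processing time
    plus the time the hardware takes to handle the request."""
    return kernel_ms + device_ms
```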
Monitors the amount of memory compressed by ESX (in kilobytes). This rule uses a VMware host side counter.
Monitors the memory compression rate (in kilobytes per second). This rule uses a VMware host side counter.
Checks for the amount of machine memory (in kilobytes) used on the host. Consumed memory includes memory used by the Service Console, the VMkernel, vSphere services, plus the total consumed metrics for all running virtual machines (VMs). This rule uses a VMware host side counter.
Monitors the memory decompression rate (in kilobytes per second). This rule uses a VMware host side counter.
Monitors the percentage of time virtual machines (VMs) on the host system are unable to complete memory transactions due to contention. This rule uses a VMware host side counter.
Monitors the average rate (in megabytes per second) at which memory is swapped out to the host swap file. This rule uses a VMware host side counter.
Monitors the average rate (in megabytes per second) at which memory is swapped in from the host swap file. This rule uses a VMware host side counter.
Monitors the sum of all shared metrics for all powered-on virtual machines (VMs), plus the amount for vSphere services on the host (in kilobytes). The host's shared memory may be larger than the amount of machine memory if memory is overcommitted (the aggregate virtual machine configured memory is much greater than machine memory). The value of this statistic reflects how effective transparent page sharing and memory overcommitment are for saving machine memory. This rule uses a VMware host side counter.
Monitors the amount of machine memory that is shared by all powered-on virtual machines (VMs) and vSphere services on the host (in kilobytes). Subtract this metric from the shared metric to gauge how much machine memory is saved due to sharing (shared - shared common = machine memory (host memory) savings (KB)). This rule uses a VMware host side counter.
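The subtraction described above (shared - shared common = savings) can be expressed directly; the helper name is illustrative:

```python
def shared_memory_savings_kb(shared_kb: float, shared_common_kb: float) -> float:
    """Machine memory saved by transparent page sharing:
    shared metric minus shared-common metric, in kilobytes."""
    return shared_kb - shared_common_kb
```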
Checks for one of four threshold levels representing the percentage of free memory on the host. The counter value determines swapping and ballooning behavior for memory reclamation.
0 (high): Free memory >= 6% of machine memory minus Service Console memory.
1 (soft): 4%
2 (hard): 2%
3 (low): 1%
At levels 0 (high) and 1 (soft), swapping is favored over ballooning; at levels 2 (hard) and 3 (low), ballooning is favored over swapping.
This rule uses a VMware host side counter.
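The four threshold levels can be sketched as a mapping from free-memory percentage to state, assuming the default 6%/4%/2%/1% boundaries listed above (illustrative only; actual thresholds are computed by the VMkernel):

```python
def memory_state(free_pct: float) -> int:
    """Map host free-memory percentage to the four threshold levels:
    0 = high, 1 = soft, 2 = hard, 3 = low."""
    if free_pct >= 6.0:
        return 0  # high
    if free_pct >= 4.0:
        return 1  # soft
    if free_pct >= 2.0:
        return 2  # hard
    return 3      # low
```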
Monitors the total amount of memory (in kilobytes) allocated by the virtual machine (VM) memory control driver (vmmemctl) for all powered-on virtual machines, plus vSphere services on the host. The vmmemctl driver is a VMware-exclusive memory-management driver that controls ballooning; it is installed with VMware Tools. This rule uses a VMware host side counter.
Monitors the number of packets received on all virtual machines (VMs) running on the host. This rule uses a VMware host side counter.
Monitors the number of packets transmitted across each physical NIC instance on the host. This rule uses a VMware host side counter.
Monitors the power consumption in watts. This rule uses a VMware host side counter.
Monitors the amount of time an SMP virtual machine (VM) was ready to run, but was delayed due to co-CPU scheduling contention. This rule uses a VMware host side counter.
Monitors the total number of audio bytes that have been received since the PCoIP session started. This rule uses a VMware counter.
Monitors the total number of audio bytes that have been sent since the PCoIP session started. This rule uses a VMware counter.
Monitors the bandwidth (measured in kilobits per second) used for incoming audio packets. This rule uses a VMware counter.
Monitors the bandwidth (measured in kilobits per second) used for outgoing audio packets. This rule uses a VMware counter.
Monitors the transmit bandwidth limit (measured in kilobits per second) used for outgoing audio packets as defined by the GPO setting. This rule uses a VMware counter.
Monitors the number of bytes received over the PCoIP session. This rule uses a VMware counter.
Monitors the number of bytes sent over the PCoIP session. This rule uses a VMware counter.
Monitors the total number of packets that have been received since the PCoIP session started. Note that not all packets are the same size. This rule uses a VMware counter.
Monitors the total number of packets that have been transmitted since the PCoIP session started. Note that not all packets are the same size. This rule uses a VMware counter.
Monitors the total number of receive packets that have been lost since the PCoIP session started. This rule uses a VMware counter.
Monitors the incrementing number that represents the total number of seconds the PCoIP session has been open. This rule uses a VMware counter.
Monitors the total number of transmit packets that have been lost since the PCoIP session started. This rule uses a VMware counter.
Monitors the lowest encoded quality, which is updated every second. Not to be confused with the GPO setting. This rule uses a VMware counter.
Monitors the total number of imaging bytes that have been received since the PCoIP session started. This rule uses a VMware counter.
Monitors the total number of imaging bytes that have been sent since the PCoIP session started. This rule uses a VMware counter.
Monitors the current estimate of the decoder processing capability measured in kilobits per second. 0 means unlimited. This rule uses a VMware counter.
Monitors the number of imaging frames which were encoded over a one second sampling period. This rule uses a VMware counter.
Monitors the bandwidth (measured in kilobits per second) used for incoming imaging packets. This rule uses a VMware counter.
Monitors the bandwidth (measured in kilobits per second) used for outgoing imaging packets. This rule uses a VMware counter.
Monitors the round trip latency (measured in milliseconds) between server and client. This rule uses a VMware counter.
Monitors the overall bandwidth (measured in kilobits per second) used for incoming PCoIP packets. This rule uses a VMware counter.
Monitors the peak bandwidth (measured in kilobits per second) used for incoming PCoIP packets. This rule uses a VMware counter.
Monitors the percentage of received packets lost during a one second sampling period. This rule uses a VMware counter.
Monitors the current estimate of the available outgoing network bandwidth (measured in kilobits per second). This rule uses a VMware counter.
Monitors the overall bandwidth (measured in kilobits per second) used for outgoing PCoIP packets. This rule uses a VMware counter.
Monitors the transmit bandwidth limit (measured in kilobits per second) for outgoing packets, as defined by the GPO setting and constrained by the network. This value may be lower than the configured GPO setting. This rule uses a VMware counter.
Monitors the percentage of transmitted packets lost during a one second sampling period. This rule uses a VMware counter.
Checks the total number of USB bytes that have been received since the PCoIP session started. This rule uses a VMware counter.
Checks the total number of USB bytes that have been sent since the PCoIP session started. This rule uses a VMware counter.
Checks the bandwidth (measured in kilobits per second) used for incoming USB packets. This rule uses a VMware counter.
Checks the bandwidth (measured in kilobits per second) used for outgoing USB packets. This rule uses a VMware counter.
Monitors the bandwidth used (measured in bits per second) when playing sound in an ICA session. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) when performing clipboard operations such as cut-and-paste between the ICA session and the local window. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) when routing a print job through an ICA session that does not support a spooler to a client printer attached to the client COM 1 port. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) when routing a print job through an ICA session that does not support a spooler to a client printer attached to the client COM 2 port. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) when sending data to the COM port. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) when executing LongCommandLine parameters of a published application. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) when performing file operations between the client and server drives during an ICA session. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) when streaming Flash data in an HDX-enabled session. This rule uses a Citrix counter.
Monitors the bandwidth used on the virtual channel that prints to a client printer attached to the client LPT 1 port through an ICA session that does not support a spooler. This is measured in bits per second. This rule uses a Citrix counter.
Monitors the bandwidth used on the virtual channel that prints to a client printer attached to the client LPT 2 port through an ICA session that does not support a spooler. This is measured in bits per second. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) when printing to a client printer through a client that has print spooler support enabled. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) from client to Citrix session device (e.g., a VM) for a session. This rule uses a Citrix counter.
Monitors the compression ratio used from client to Citrix session device (e.g., a VM) for a session. Higher is better. This rule uses a Citrix counter.
Monitors the upload speed (measured in bits per second) from the client to the Citrix session device. This rule uses a Citrix counter.
Monitors the bandwidth used from client to Citrix session device (e.g., VM) used by a redirected Smart Card. This is measured in bits per second. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) from client to server for data channel traffic. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) when streaming data in a SpeedScreen Multimedia Acceleration enabled session. This rule uses a Citrix counter.
Monitors the line speed bandwidth used (measured in bits per second) from client to server for ThinWire traffic. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) by a redirected USB port device. This rule uses a Citrix counter.
Checks the last recorded latency measurement for the session. This rule uses a Citrix counter.
Checks the average client latency over the life of a session. This rule uses a Citrix counter.
Checks the session deviation for latency. This represents the difference between the minimum and maximum measured latency values for a session. This rule uses a Citrix counter.
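The deviation described above is simply the spread of the measured latency samples; a minimal sketch:

```python
def latency_deviation(samples):
    """Session latency deviation: the difference between the maximum
    and minimum measured latency values for a session."""
    return max(samples) - min(samples)
```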
Monitors the bandwidth used (measured in bits per second) for playing sound in an ICA session. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) for clipboard operations such as cut-and-paste between the ICA session and the local window. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) when routing a print job through an ICA session that does not support a spooler to a client printer attached to the client COM 1 port. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) when routing a print job through an ICA session that does not support a spooler to a client printer attached to the client COM 2 port. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) when receiving data from the client COM port. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) when executing LongCommandLine parameters of a published application. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) when performing file operations between the client and server drives during an ICA session. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) when streaming Flash data in an HDX-enabled session. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) when routing a print job through an ICA session that does not support a spooler to a client printer attached to the client LPT 1 port. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) when routing a print job through an ICA session that does not support a spooler to a client printer attached to the client LPT 2 port. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) when printing to a client printer through a client that has print spooler support enabled. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) from Citrix session device (e.g., a VM) to client for a session. This rule uses a Citrix counter.
Monitors the compression ratio used from server to client for a session. This rule uses a Citrix counter.
Monitors the download speed (measured in bits per second) from the Citrix session device to the client. This rule uses a Citrix counter.
Monitors the bandwidth from Citrix session device (e.g., VM) to client used by a redirected Smart Card. This is measured in bits per second. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) from server to client for data channel traffic within an ICA session. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) when streaming data in a SpeedScreen Multimedia Acceleration enabled session. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) from server to client for ThinWire traffic. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) by a redirected USB port device. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) when performing management functions. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) when performing management functions. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) to negotiate licensing during the session establishment phase. Often, no data for this counter is available, as this negotiation takes place before logon. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) to negotiate licensing during the session establishment phase. Often, no data for this counter is available, as this negotiation takes place before logon. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) for delivering video frames. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) for delivering video frames. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) by Program Neighborhood (PN) to obtain application set details. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) by Program Neighborhood to obtain application set details. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) for published applications that are not embedded in a session window. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) for published applications that are not embedded in a session window. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) when initiating font changes within a SpeedScreen-enabled ICA session. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) when initiating font changes within a SpeedScreen-enabled ICA session. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) for text echoing. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) for text echoing. This rule uses a Citrix counter.
Monitors the total number of shares used by the session. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) when streaming Flash v2 data in an HDX-enabled session. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) when streaming Flash v2 data in an HDX-enabled session. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) when scanning an image into an application. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) when scanning an image into an application. This rule uses a Citrix counter.
Monitors the total amount of time that elapses between the time the animated Windows logo first appears on the screen and the time you can actually begin using the system.
Monitors the amount of time that it takes to complete group policy processing during system boot.
Monitors the amount of time that elapses between the time the animated Windows logo first appears on the screen and the time that the desktop appears. Even though the system is usable at this point, Windows is still working in the background loading low-priority tasks.
Monitors the amount of time it takes to complete user profile processing during system boot.
Monitors the amount of time that elapses between the time that the desktop appears and the time that you can actually begin using the system.
Monitors the rotating speed (measured in RPM) of the GPU fan.
Monitors the operational health level of the GPU fan:
0 = unknown
1 = normal
2 = warning
3 = critical
Monitors the power state of the graphics processing unit (GPU). A GPU can be in one of 16 power states (but not all cards support all 16 states). Values are on a scale in which 0 indicates using the most power and 15 indicates using the least power.
Monitors the temperature (in degrees Celsius) of the GPU.
Monitors the temperature (in degrees Celsius) of the GPU memory chip.
Monitors the temperature (in degrees Celsius) of the GPU power supply.
Monitors the temperature (in degrees Celsius) of the GPU mother board.
Monitors the health level of the GPU temperature:
0 = unknown
1 = normal
2 = warning
3 = critical
Monitors the GPU usage percentage (between 0 and 100%).
Monitors the GPU frame buffer usage percentage (between 0 and 100%). The frame buffer is an area of memory used to hold the frame of data that is continuously being sent to the screen.
Monitors the GPU video usage percentage (between 0 and 100%).
Monitors the GPU bus usage percentage (between 0 and 100%).
Monitors the number of bytes of memory the GPU card is using.
Monitors the percentage of available memory the GPU card is using.
Monitors the number of applications currently using the GPU.
