I asked my ISP to remove my fixed IP address, as a firmware bug in their supplied router makes it incompatible with fast broadband: instead of the 200Mbps I am paying for, I was getting 20-50Mbps.
This left me with a problem: how do I address my network now? I didn't particularly want to sign up to DynDNS or an equivalent, but I've got an Azure subscription, so can I do anything with that?
This post gives a beautifully elegant solution. Create an Azure DNS zone for your domain. Create an Azure Function that exposes an HTTP API, and use that function to update the DNS record via PowerShell.
Expose the API and call it regularly (from a scheduled task on a Raspberry Pi, for example).
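For reference, the record update the Function performs (via PowerShell in the linked post) can be sketched with the Azure CLI. The resource group, zone and record set names below are placeholders, and note that `add-record` appends an A record rather than replacing one, so a real updater would also remove the stale record:

```shell
# Hypothetical names throughout; this illustrates the underlying DNS
# operation, not the exact PowerShell the linked post uses.
az network dns record-set a add-record \
  --resource-group MyDnsResourceGroup \
  --zone-name example.com \
  --record-set-name home \
  --ipv4-address 203.0.113.10
```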
Notes:
When I ran the function for the first time it failed with a 401. Running the PowerShell line by line, I noticed the Azure login was erroring with AADSTS50055: Force Change Password.
Logging in with the newly created Azure AD account required an interactive login so that the password could be changed. I solved this by navigating to the Azure portal, logging in as the AD user I had created for the DNS Contributor role, changing the password, and then updating the Function's password variable to match.
You can run the updater on a low-power Raspberry Pi with the following commands:
curl ifconfig.me
curl "https://<your function app>.azurewebsites.net/api/<your function name>?code=<your function key>&ipaddr=<ip address determined>"
Piping the two commands together, you can update the IP address from a single shell command:
curl -s ifconfig.me | curl -s "https://<your function app>.azurewebsites.net/api/<your function name>?code=<your function key>&ipaddr=$(cat -)"
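The `$(cat -)` substitution works because it is expanded in the second command's position in the pipeline, so its standard input is the output of the first curl. A minimal, network-free demonstration of the mechanism:

```shell
# Simulate the pipeline: the piped value is read by $(cat -) and lands in
# the argument, just as the detected IP lands in the query string above.
echo "203.0.113.10" | sh -c 'echo "ipaddr=$(cat -)"'
```

This prints `ipaddr=203.0.113.10`.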
That command can be scheduled to run with crontab, according to your preference.
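As a minimal sketch, a crontab entry (added via crontab -e) that refreshes the record every 15 minutes might look like this; the function app name, function name and key are placeholders for your own values:

```shell
# Placeholder URL and key. Cron provides a minimal environment, so you may
# prefer curl's full path (e.g. /usr/bin/curl).
*/15 * * * * curl -s ifconfig.me | curl -s "https://myapp.azurewebsites.net/api/UpdateDns?code=MYKEY&ipaddr=$(cat -)"
```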
PS - Another option worth considering is not needing to pass the IP address to the web service at all. Can the Azure Function get the caller's IP address automatically?
Sunday, 31 December 2017
Tuesday, 26 December 2017
Assigning Windows+M to minimise all windows in Ubuntu
Edit /usr/share/unity/scopes/music.scope and change the shortcut=m line, which frees Super+M for reassignment.
https://askubuntu.com/questions/142119/how-can-i-assign-the-key-superm-to-minimize-all-windows-in-ubuntu-12-04
Changing the Bose Soundlink battery
https://www.ifixit.com/Guide/Bose+SoundLink+Mini+Battery+Replacement/51738
Saturday, 23 December 2017
Falcon 4 is available
Falcon 4 can be bought from Steam and the BMS updates are found here: https://www.bmsforum.org/
The manuals can be downloaded from here: https://sites.google.com/site/falcon4history/files
Interview with Kevin Klemmick – Lead Software Engineer for Falcon 4.0
I found an old interview with one of the engineers for Falcon 4, an old flight sim.
Interview with Kevin Klemmick – Lead Software Engineer for Falcon 4.0
- March 12th, 2011
- Posted in Falcon . Interviews
- By Giorgio Bertolone
Falcon is, without any doubt, the most ambitious and realistic Air Combat Simulation ever created and, for this reason, many simmers all over the world still fly it regularly despite its aging graphics. Because of this success, I always hoped to have one day the opportunity to ask specific questions about its development.
I can’t thank Kevin enough for agreeing to this interview and for answering in such an honest and professional way. This is a long interview that will reveal a lot of things that many simmers probably didn’t know. I’m proud to make it available for everyone who, like me, considers Falcon one of the best simulators out there. There is a lot to read so sit comfortably and enjoy!
GENERAL NOTES FROM KEVIN: Keep in mind that all this happened about 15 years ago and my memory is definitely fuzzy. I pulled up what I could from there, but I will be the first to admit my memory isn’t always accurate. I did no actual fact-checking on myself, so take all this with a grain of salt.
_______________________________________________________________________________
Let’s start from the beginning. What can you tell us about your background and how did you find yourself working for MicroProse?
I had been studying Aerospace Engineering at Cal Poly when an opportunity came up to take an Internship at MicroProse (which was still Spectrum HoloByte at the time). Back in the 80s I had written several multiplayer games for a gaming BBS I ran in high school, and I found the job opportunity through those contacts. Because of my background with both gaming and aerospace it seemed like a good fit.
Could you describe your roles and responsibilities during those years?
Initially I was hired as an intern and asked to design and develop a dynamic campaign. For better or worse there wasn’t a lot of direction on what that would entail – the directive was mostly to make something that would be a persistent world and generate dynamic missions instead of the pre-scripted model which was the norm. I’d written a few simple strategy games prior to this, so I approached it as designing and writing a strategy game. This was obviously a much bigger job than an intern could handle in a summer, so I eventually signed on full time. By the end of the project I’d ended up taking on much more, and was eventually lead programmer on the localization projects.
How many people were in the development team of Falcon 4.0?
I honestly couldn’t give you an exact number. Probably 50 or so over the course of the project. The thing is, the entire team turned over twice due to people leaving, layoffs or terminations. For a brief period of time I was the only programmer on the team. So, depending on how you count it this number can vary widely. For most of the project we had about 6 engineers and maybe the same number of artists.
Working on a simulator like Falcon 4.0 must have been an incredibly exciting and stressful job. How was the general atmosphere in the team?
This was my first job in the industry so I didn’t really have anything to compare it to. At the time it seemed like we were really excited to build something cool, but in retrospect I realize there was a lot of stress, conflict and tension in the team. I take responsibility for a portion of that too. We all had strong opinions about what we wanted to do and there wasn’t a strong management presence until the end (when Gilman Louie came in and filled this role personally), so things definitely went off the rails regularly.
Falcon’s real-time Dynamic Campaign is one of the most impressive engines ever created in a sim. Could you talk specifically about its design, challenges and implementation?
I was given a pretty blank check in designing the Dynamic Campaign, so I approached it as I would a strategy game. The idea being that this game would be running in the background whether or not the player flew any missions. In fact, it could be played as a strategy game from the tool I wrote to monitor it. The AI was broken into three tiers, a strategic level, operational level and tactical level. Yet another level of AI would operate in the Simulation itself to drive the vehicles or aircraft.
The missions were generated as a byproduct of this AI, and in fact used real world planning techniques. For example, once a priority list of targets was determined, a package would be put together to time suppression of air defense, air superiority, refueling, AWACS, etc. All these missions would be timed out and planned much like a real world commander would, but were generated as a response to decisions made by the campaign’s AI.
While my primary goal was to make something fun to play, we were very fortunate to get a lot of advice from military sources about how things work in the real world and I tried to match that as closely as possible while keeping the game play elements that I felt were important. However, all of this had to work within a very tiny slice of the CPU, which was a huge limitation given all the AI/planning work that was going on. That was probably the biggest challenge of this system.
How did you design and code the multiple scenarios? How did you manage to work on them without feeling overwhelmed by the immense scale of these virtual conflicts?
We talked a lot about what theatres we wanted to use. I did some research about what was at the time thought to be the most likely future conflict zones. In the end we went with Korea because of a number of factors. I pushed hard to focus on one theatre in depth rather than do multiple theatres poorly, so we decided to do multiple scenarios in a single theater instead. The scenarios were based somewhat on historical situations in the Korean War, but also what could be likely situations given deployments at the time. The biggest problem with the scenarios wasn’t feeling overwhelmed by them, it was testing them enough to feel comfortable that completely non-scripted AI would be able to play through them realistically.
What were the biggest technical problems that you had to face and solve in the other areas engineered by you (AI, Multiplayer, Comms, etc)?
The biggest technical challenge for me was doing everything I wanted to with the Dynamic Campaign in the CPU slice we budgeted, which I believe was something like 5% of the CPU. To really get AI to work well you need to do a lot of pathfinding and data crunching, all of which is CPU intensive. So there was definitely a lot of compromise in AI quality because of this.
Comms was the other big challenge I was a part of. We developed a very low-cost protocol and spent a lot of time on the whole “player bubble” concept, meaning mostly that events happening near the player were sent more often and with a higher level of detail than those far away. Outside of this bubble we updated very infrequently and with units in aggregate. For example, an entire battalion would pass a bitwise array of active vehicles, a formation and a location. All of which was just a few bytes of data.
Of course, there were plenty of other challenges as well, including simply organization of the various components. We had largely developed the various modules in isolation and when it came time to put them together this turned out to be the source of a lot of problems.
What part of your specific work on Falcon 4.0 are you most proud of and why?
Definitely the Dynamic Campaign. It’s the first and last time I was able to design and code a part of a game pretty much on my own, which had been my experience doing games as a hobby up until then. In the rest of the gaming industry you really don’t have very much input on the design of a game as a programmer. I was still pretty green at the time though and looking back I can see so much that could have been done better, but I am still quite proud of that.
Some parts of Falcon seem to have a modular design. Was this planned? Were you guys thinking ahead to future aircraft and terrain expansions?
Absolutely. In fact, we had different aircraft working in house very early on, but doing other aircraft to the level of detail we did for the F16 just wasn’t possible given our resources. The Dynamic Campaign was initially designed to be able to be played as a separate game entirely, but in the end became very heavily intertwined with the rest of the game. Terrain sets and theatres were designed to be easily swapped out for future expansions (we had planned for an Iraq theatre). On the other hand, part of this modularization was due to different engineers working in isolation and became a problem later on. For example, three different modules ended up using 3 completely different coordinate systems, so communication between them required conversions.
It seems like the first release of Falcon 4.0 was rushed to the market in order to sell during the Christmas holidays. Was the code mature enough for this initial release?
I’d agree that the product was shipped in a pretty buggy state, but I couldn’t honestly say the first release of Falcon 4.0 was rushed. It took about 5 years to build and for the last 9 months we were working 12-16 hour days (they had a hotel booked across the street, so my wife ended up staying there so that we could even see each other). It was a huge challenge just to finish the thing; this was an incredibly complex product that really wasn’t planned out or managed well at all. Because of the complexity and lack of central design it became really difficult to find and fix the many, many bugs in the program. In the end we could have taken another year and still had open bugs, but eventually you’ve got to get it out there. MicroProse was bleeding money at the time and Falcon already had the stigma of vaporware, so at some point we had to determine that it was good enough and then work hard on patching the problems.
You also worked as Lead Programmer for the post-release patch projects. What were your main priorities and which particular areas had to be fixed or improved?
I was actually a Lead Programmer on the localization projects, but I was involved in the patching process. The priorities there, to be perfectly honest, were to fix the problems that should have been fixed prior to shipping it. As I said, there were far more issues there than we had the resources to fix, but we tackled those that impacted the most users first and kept reducing the list. When I left I was still not happy with the stability of the game, which made it hard to leave it feeling unfinished, but I realized that to MicroProse this probably looked like a money pit.
Ironically, moving to SEGA was exactly the opposite environment. We were doing arcade games which had to run with absolutely zero crashes and which were burned onto write-once EPROM chips. It was a completely different challenge to develop software that worked out of the box and that was unpatchable.
The last official patch was 1.08. Was there any plan for future patches after that? If yes, what would they have addressed?
I had left for SEGA prior to then, so I don’t know what the state of things was at that point.
Did the layoff of the entire development team come as a surprise or was it a predictable event after the acquisition of MicroProse by Hasbro?
Well, the development team had been hit with layoffs multiple times previously so the concept was pretty familiar by then. I saw the acquisition by Hasbro as a pretty negative thing, so left for SEGA prior to the layoffs. I don’t think anyone was taken by surprise by that though.
Did you ever find out the cost of development of Falcon 4.0 (approximately) ?
I honestly don’t know. I could make a guess given my industry knowledge but it would only be a guess. I suspect that in the end MicroProse did not make money on Falcon 4.0 however. This is not to say that flight simulators are entirely unprofitable, it’s just that this one in particular had a much higher than average development cost.
You worked at MicroProse for a long time (almost five years). What are your best and worst memories?
I really liked the range of creative input I was allowed there. In retrospect maybe some of this wasn’t so much allowed as assumed since I’d come from doing games as a hobby, but in any event it allowed me to go off and build something I thought was really cool. Unfortunately, it was exactly this approach that caused the development to take so long and coordination between engineers to be so difficult.
My worst memories are mostly of conflict between the team and the long hours. I was very young at the time (the youngest programmer on the staff) and was opinionated, overworked and had a fragile ego, so things got pretty tense at times.
Many recent simulators are released without even trying to code a Dynamic Campaign engine. Why do you think today’s sim developers are so scared of what you guys were able to create more than a decade ago?
Well, it’s just really hard to do. Looking back on it, I think the only reason we took on what we did is because we were too inexperienced to know better. Knowing what I do now, even given my experience on Falcon, the cost to develop such an engine would be substantial. Since flight sims don’t bring in that kind of revenue companies look at it from a cost to benefit standpoint and Dynamic Campaigns score pretty low in that regard. There is also the argument that scripted missions are more interesting which has some merit. I think if I were to do it over I would do a mix of scripted/generated missions, so that the player still feels like they’re involved in the world, but there is also some variety thrown in to keep things interesting.
In 2000 the source code of Falcon 4.0 leaked out and after that groups of volunteers were able to make fixes and enhancements that assured the longevity of this sim. Do you see the source code leak as a good or bad event?
Absolutely a good event. In fact I wish I’d known who did it so I could thank them. I honestly think this should be standard procedure for companies that decide not to continue to support a code base.
I know that after MicroProse you moved on to other opportunities and important roles. But if asked, would you still consider working on a modern combat simulation?
I’d been approached about that a while back and expressed interest, but the team was being put together in Colorado I believe and I’m pretty tied to the San Francisco Bay Area at this point. I’m not interested from a flight sim perspective (I actually don’t play them), but I would be from a Dynamic Campaign perspective. I’m much more interested in the strategy/persistent world aspect of it all.
You currently work as Technical Director for Gravity Bear and you are developing very interesting applications. Could you talk about your work on current and future projects?
I’m actually now working as a Technical Director for Electronic Arts, doing Sims projects (that’s “The Sims”, not flight sims). Gravity Bear is a small company we created to do Facebook games, and I worked there for about 2 years. The entire company consisted of only 6 people and for much of that time I was the only programmer. I’ve been involved in a couple other startup attempts, largely because I would love to work on games I would actually enjoy playing and so much of what the big companies are doing these days is just remakes of old concepts, but I also have a family to support, so the stability of a company like EA and solid titles like “The Sims” is very attractive.
Wednesday, 13 December 2017
Sandisk Micro SD range
The Sandisk Micro SD range is ordered as follows, from highest performance to lowest:
Extreme Pro > Extreme Plus > Extreme > Ultra > Standard
Tuesday, 5 December 2017
Removing Mobile Device Management (MDM) from Office 365 Exchange
My Android phone was asking me to allow Outlook to become a Device Administrator in order to receive my corporate emails.
I followed all the articles on removing MDM from Office 365, but nothing seemed to work. In the end, following a fantastic support call, I edited the user in the Admin centre and removed all group memberships.
This seemed to trigger it and got it working again.
Sunday, 3 December 2017
Moving files from one computer to a NAS
ROBOCOPY "Source" "Dest" /MOVE /E /DCOPY:DAT
(/E copies all subdirectories including empty ones, /MOVE deletes the source after copying, and /DCOPY:DAT preserves directory data, attributes and timestamps.)
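If you want to preview the move first, robocopy's /L switch lists what would be copied without touching any files:

```shell
ROBOCOPY "Source" "Dest" /MOVE /E /DCOPY:DAT /L
```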
Saturday, 2 December 2017
root crontab doesn't work on Raspberry Pi Zero
Due to the hardware bug where the Pi Zero loses its network connection, it is necessary to restart either the network or the Pi.
I thought I had done this by logging in as root, running crontab -e and then adding a script to reboot the Pi. However I noticed it wasn't working.
In the end I found that cron was reporting "Command not found" for shutdown.
cron runs jobs with a minimal $PATH, so bare command names may not resolve.
The solution was to invoke shutdown by its full path:
/sbin/shutdown -r
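For reference, a sketch of the root crontab entry (crontab -e as root), assuming a nightly reboot at 03:00 is acceptable:

```shell
# The absolute path matters because cron provides a minimal $PATH.
0 3 * * * /sbin/shutdown -r now
```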
LSIAgent Monitoring software
SysTrack LSIAgent - collects performance data
Writes a 130MB Access Database file at:
C:\Program Files (x86)\SysTrack\LsiAgent\MDB\collect.mdb
The agent performs the following checks:
- Checks for CPU being overrun by one or more processes resulting in CPU performance issues.
- Checks for unusual CPU behavior indicative of hardware failure, HAL mismatch, or other driver anomalies.
- Checks most commonly used applications for slow startup issues.
- Checks for applications running a high number of times per day for the 3 previous days.
- Checks for the number of times the system has experienced a blue screen in the previous 24 hours.
- Checks for high gateway latency.
- Checks for high network latency when accessing remote drives.
- Checks for any network connection that appears to be dropped.
- Checks for high network latency on the application dependency server.
- Checks that VMware Tools are installed.
- Checks RDP session latency during the past 3 hours.
The Operational Alarm indicates whether the Analysis Tools found a new record in the SysTrack system database table for the system being monitored. Under normal (default) conditions, a new record should be found during each sampling interval. If no record is found, an alarm is triggered.
Monitors the percentage of elapsed time that the processors on the system are busy executing non-idle threads. On a multiprocessor system, if all processors are always busy, the % Total CPU Time is 100%. It can be viewed as the fraction of the total time spent doing useful work. This value is a good indicator of the demand for and efficiency of a microprocessor. A lower percentage is better.
Monitors the percentage of elapsed time that processors on the system spent handling hardware interrupts. When a hardware device interrupts the microprocessor, the interrupt handler manages the condition, usually by stopping the device from interrupting and scheduling a deferred procedure call (DPC) to service the device. Time spent in DPCs is not counted as time in interrupts. Device problems can sometimes make the system appear CPU-bound. This happens when the processing of interrupts caused by serial port, disk, network, timer, and other hardware device activities consumes the processor. A malfunctioning hardware component or device driver can cause many interrupts, causing the processor to spend too much time in interrupt service routines. High-speed UARTs used in non-intelligent serial ports can also cause large numbers of interrupts. The processor should spend a relatively small amount of time servicing interrupts. A lower percentage is better.
Monitors the percentage of elapsed time that processors on the system spend servicing deferred procedure calls (DPCs). The Windows operating system interrupt architecture permits the bulk of the work normally done in an interrupt handler to be handled instead at the DPC level, between interrupt priority and normal application processing priority. A device driver's interrupt handler puts DPC packets into a queue that describes the work to be done and then exits. When there are no more interrupts to be serviced, the system checks for DPCs that need to be executed. A DPC executes below interrupt priority and thus permits other interrupts to occur. No application thread executes any code until all the pending DPCs execute. If a busy processor is spending a large amount of time servicing interrupts and DPCs, it is likely the processor cannot function effectively and a probable processor bottleneck will develop. A malfunctioning hardware component or device driver can erroneously cause many DPCs to be queued, causing the processor to spend too much time servicing DPCs. The processor should spend a relatively small amount of time servicing DPCs. A lower percentage is better.
Monitors the rate of switches from one thread to another. Thread switches can occur either inside a single process or across processes. A thread switch may be caused either by one thread asking another for information or by a thread being preempted by a higher priority thread becoming ready to run. A high context switch rate can also be caused by a problem with a network card or device driver. A lower rate is better.
Monitors the rate that the system is issuing read and write operations to file system devices. It does not include File Control Operations. The design characteristic 'locality of reference' that improves an application's use of cache and memory reduces transfers to/from the disk. When a program is designed so that data references are localized, the data the program needs is more likely to be in its working set, or at least in the cache, and is less likely to be paged out when it is needed. A lower rate is better.
Monitors the aggregate rate (per second) of file system operations that are neither a read nor a write. These operations usually include file system control requests or requests for information about device characteristics or status. A malfunctioning device driver or hardware component can report status changes very frequently, causing a high file control operation rate. A poorly designed application can also cause a high file control operation rate. Heavy serial port usage can also cause a high file control operation rate because status operations are counted as control operations. A lower rate is better.
Monitors the number of threads in the processor queue. Unlike the disk counters, this counter shows ready threads only, not threads that are running. There is a single queue for processor time even on computers with multiple processors. Therefore, if a computer has multiple processors, you need to divide this value by the number of processors servicing the workload. A sustained processor queue of less than 10 threads per processor is normally acceptable, depending on the workload.
Monitors the rate at which deferred procedure calls (DPCs) were added to the processors' DPC queues between the timer ticks of the processor clock. DPCs are interrupts that run at a lower priority than standard interrupts. Each processor has its own DPC queue. This counter measures the rate that DPCs were added to the queue, not the number of DPCs in the queue. This counter displays the last observed value only; it is not an average.
Monitors the number of hardware device interrupts that the system is experiencing per second. A device interrupts the microprocessor when it has completed a task or when it requires attention. Normal thread execution is suspended during interrupts. An interrupt may cause the microprocessor to switch to another, higher priority thread. The devices that generate interrupts are the system timer, the mouse, data communication lines, network interface cards, and other peripheral devices. Timer interrupts are frequent and periodic and create a background level of interrupt activity. A failing hardware component can erroneously generate interrupts, producing a very high rate. A faulty device driver can fail to properly service its device, also leading to high interrupt rates. Non-intelligent serial ports can generate an interrupt for every one to sixteen characters sent and received, leading to high interrupt rates. A lower rate is better. |
Monitors the rate (per second) at which pages are read from the disk to resolve memory references to pages that were not in memory at the time of the reference. This counter includes paging traffic on behalf of the system cache to access file data for applications. This is an important statistic to monitor if you are concerned about excessive memory pressure ('thrashing'), and the excessive paging that may result. Pages Input/sec, however, also accounts for such activity as the sequential reading of memory mapped files, whether cached or not. A lower rate is better. |
Monitors the rate (per second) at which pages are written to disk to free up space in physical memory. Pages are written back to disk only if they are changed in physical memory, so they are likely to hold data (which is frequently modified by processes), not code (which is usually not modified). A high Page Output rate often indicates a memory shortage. The Windows operating system writes more pages back to disk to free up space when physical memory is in short supply (in order to free up space for faulted pages that must be paged in). A high Page Output rate indicates that most faulting activity is for data pages and that memory is becoming scarce. If memory is available, changed pages are retained in a list in memory and written to disk in batches. A lower rate is better. |
Monitors the rate (per second) at which read operations from disk are initiated to resolve hard page faults. Hard page faults occur when a process requires code or data that is not in its working set or elsewhere in physical memory, and the code or data must be retrieved from disk. This statistic is a primary indicator of the kinds of faults that cause system-wide bottlenecks. It includes reads to satisfy faults in the file system cache (usually requested by applications) and in non-cached mapped memory files. This statistic is a measure of the rate of read operations, without regard to the numbers of pages retrieved by each operation. A lower rate is better. |
Monitors the rate (per second) at which write operations to the disk are initiated to free up space in physical memory. Pages are written to disk only if they are changed while in physical memory, so they are likely to hold data (which is frequently modified by processes), not code (which is usually not modified). This statistic is a measure of the write operations rate, without regard to the number of pages written in each operation. A high page output rate often indicates a memory shortage. The Windows operating system writes more pages back to disk to free up space when physical memory is in short supply (in order to free up space for faulted pages that must be paged in). A high Page Write rate indicates that most faulting activity is for data pages and that memory is becoming scarce. If memory is available, changed pages are retained in a list in memory and written to disk in batches. A lower rate is better. |
Monitors the overall rate (per second) at which faulted pages are handled by the processor. A page fault occurs when a process requires code or data that is not in its working set (its space in physical memory) in main memory. The Page Fault rate includes both hard faults (those that require disk access) and soft faults (where the faulted page is found elsewhere in physical memory). A soft fault occurs when the needed page is on the standby list, already in main memory, or is in use by another process that shares the page. Most processors can handle large numbers of soft faults without consequence. Hard faults, however, can cause significant delays. A lower rate is better. |
Monitors the ratio of Transition Faults/sec to Page Faults/sec. The Transition Fault Rate is the rate at which page faults are resolved by recovering pages that were in transition. As the memory manager attempts to retrieve space from running processes, it trims some pages from their working sets and, if the pages have been modified, the memory manager puts them on the modified page list. When memory is scarce, it starts to write them to disk right away in hopes of freeing the frames, holding them, and satisfying the backlog of page faults. Once a write starts on a page, it becomes a 'transition' page. However, if the page is re-referenced by the process, a page fault occurs because the page was trimmed from the working set. The page is found by the memory manager on the transition list, and placed back in the process' working set. The modified page writer tries to delay the writes so that it doesn't have to rewrite the pages, but when there is little free memory, it has no chance to delay. It must write pages to free up space as quickly as possible. Thus, a high transition fault ratio usually indicates a memory shortage. A lower rate is better. |
Monitors the percentage of the page file(s) that is/are in use. Paging files are used to store pages of memory used by processes that are not contained in other files when they are swapped out to disk. Pages are swapped out to disk when they are not being used in order to make maximum use of physical memory. All processes share paging files, so lack of space in paging files can prevent other processes from allocating memory. The operating system can be set to automatically increase the size of the paging file as needed (disk space permitting), or can restrict the page file size to a specific limit. Increasing the size of the paging file or adding more memory can decrease Page File Usage. Application programs that allocate memory but fail to free it ('leaking memory') consume page file space until they are terminated. Such faulty applications should be repaired. A lower % is better. |
Monitors the number of available bytes - this is a measure of free memory. Free memory is the amount of physical memory available to processes running on the system. It is calculated by summing space on the Zeroed, Free, and Standby memory lists. Free memory is ready for use. Zeroed memory is memory filled with zeros to prevent later processes from seeing data used by a previous process. Standby memory is memory removed from a process' working set (its physical memory) en route to disk, but is still available to be recalled. The Virtual Memory Manager continually adjusts the space used in physical memory to maintain a minimum number of available bytes for the operating system and processes. When available bytes are plentiful, the Virtual Memory Manager lets the working sets of processes grow, or keeps them stable by removing an old page for each new page added. When available bytes are few, the Virtual Memory Manager must trim the working sets of processes to maintain the minimum required. The memory manager seeks to maintain at least 4 MB of free space. More available memory is better. |
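The composition of "available bytes" described above (Zeroed + Free + Standby lists, with a roughly 4 MB floor maintained by the memory manager) can be sketched as follows; the list sizes are illustrative inputs, not a real Windows API:

```python
MIN_AVAILABLE = 4 * 1024 * 1024  # the ~4 MB floor the memory manager maintains

def available_bytes(zeroed: int, free: int, standby: int) -> int:
    """Available memory is the sum of the Zeroed, Free, and Standby lists."""
    return zeroed + free + standby

def must_trim_working_sets(zeroed: int, free: int, standby: int) -> bool:
    """The VMM trims process working sets when availability nears the floor."""
    return available_bytes(zeroed, free, standby) < MIN_AVAILABLE
```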
Monitors the number of times per second that the Cache Manager fails to find a file's page in the immediate cache. When this occurs, the Cache Manager must ask the Memory Manager to locate the page elsewhere in memory or on the disk so that it can be loaded into the immediate cache. If the page must be brought in from disk, this results in disk activity and a delay to the process that needed the page (while the page is brought in). The file system cache is an area of physical memory that stores recently used pages of data retrieved from disk or network file systems for applications. The system Cache uses memory not in use by active processes in the computer. The Windows operating system continually adjusts the size of the cache, making it as large as it can while still preserving the minimum required number of available bytes for processes. It is less likely that the pages needed by an application will be found in the cache as it becomes smaller. This can reduce application performance by requiring more disk and file system accesses. Lower is better. |
Monitors the size of virtual memory that has been committed (as opposed to simply reserved) as a percentage of the total virtual memory that can be committed without having to extend the paging file(s). Committed memory is physical memory for which space has been reserved on the disk paging files. There can be one paging file on each logical drive. If the paging file(s) are expanded, the Commit Limit increases accordingly. You can also increase the commit limit by increasing the amount of memory in the system. For a given level of virtual memory usage (Commit Bytes), increasing the Commit Limit reduces the Commit Ratio. When the Commit Ratio reaches 100%, all available virtual memory has been consumed and additional memory cannot be allocated by applications. This may cause application failures. Lower is better. |
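The commit ratio described above reduces to committed bytes over the commit limit. A minimal sketch, with assumed names:

```python
def commit_ratio_percent(commit_bytes: int, commit_limit: int) -> float:
    """Committed virtual memory as a percentage of the commit limit."""
    return 100.0 * commit_bytes / commit_limit
```

At 100%, further allocations fail until the paging file is extended or memory is added, which raises `commit_limit` and lowers the ratio for the same `commit_bytes`.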
Monitors the number of processes in the system at the time of data collection. This is an instantaneous count, not an average over a time interval. Each process represents the running of a program or a service. The operating system uses certain processes to provide its functionality. A large number of processes may indicate overuse of the system. Lower is better. |
Monitors the number of threads currently active in the process. An instruction is the basic unit of execution in a processor, and a thread is the object that executes instructions. Every running process has at least one thread. |
Monitors the number of clients currently connected to a terminal server. This is an instantaneous count and not averaged over time, except in the summary records. This counter is important to terminal server monitoring as it can show the time periods when sessions are at a peak. The load on a server at the time of the peak number of active connections can be used as a sizing measure to gauge if you need more (or better) servers to sustain a concurrent number of applications and users in your environment. This counter should not be confused with the number of licenses in use (unless you only have one terminal server). Licenses are based on seats, not sessions. |
Monitors the number of server session connections. This is an instantaneous count and not averaged over time, except in the summary records. A server connection is any connection to a server, including mapped drives, database connections, service connections, etc. This is not just the number of user connections, but each instance of connection to the server. When watched over time, the number of server connections should fluctuate as the number of non-persistent connections start and end. The server connection count is useful on specialized servers (e.g. SQL Server, Terminal Server, etc.) to give an indication of client activity. It can also be an indicator of server-client licensing compliance when the server is licensed on a per connection basis. Lower is better. |
Monitors the ability to send email notifications for triggered alarms. |
Monitors the system agent's ability to condense and send data to the Master System before the attempt times out. |
Monitors the percentage of elapsed time that the selected disk drive was busy servicing read or write requests. |
Monitors the rate (per second) of read operations on the disk. A lower rate is better. |
Monitors the rate (per second) of write operations on the disk. A lower rate is better. |
Monitors the length of time (in milliseconds) for an average disk transfer. This statistic measures the average time of each data transfer, regardless of the number of bytes read or written. It shows the total time of the read or the write. Average Disk sec/Transfer is usually in multiples of the time it takes for one rotation of the disk. Any variation usually represents time for driver processing or time in the queue. For example, on a 7200-RPM disk, the actual Average Disk sec/Transfer would be in multiples of roughly 8.3 milliseconds. Lower is better. |
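The "multiples of one rotation" rule of thumb above is simple arithmetic: one full platter rotation is 60,000 ms divided by the spindle speed in RPM.

```python
def rotation_time_ms(rpm: int) -> float:
    """Time for one full platter rotation, in milliseconds."""
    return 60_000 / rpm
```

So a 7,200-RPM disk rotates once every ~8.3 ms, and a 6,000-RPM disk every 10 ms; transfer times well above a small multiple of this suggest queuing or driver overhead.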
Monitors the current length of the server work queue for the CPU. A sustained queue length greater than four might indicate processor congestion. This is an instantaneous count, not an average over time. |
Monitors the percentage of total usable space on the selected logical disk drive that is free. |
Monitors the rate at which I/O requests to the disk are split into multiple requests. A split I/O may result from requesting data in a size too large to fit into a single I/O, or from disk fragmentation on single-disk systems. Disks that have a high % Disk I/O Fragmented may need to be defragmented to improve performance. It is important to note that you may have a high % of Disk I/O Fragmentation and yet the defrag utility reports that the drive does not need to be defragmented: the defrag utility calculates fragmentation across the whole volume, but there could be a segment of the volume where frequently accessed files are heavily fragmented and impacting performance. A Solid-State Drive (SSD) may display a high % Fragmented I/O with no performance impact. Lower is better. |
Monitors the number of bytes per second retrieved from the disk (read). |
Monitors the number of bytes per second sent to the disk (write). |
Monitors the percentage of network bandwidth in use on the network interface. It is important to note that most networks become effectively saturated well below 100% utilization. For example, as Ethernet networks become congested, collisions occur far more frequently, degrading performance and causing rapid saturation. A lower % is better. |
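Interface utilization is throughput (converted from bytes to bits) over the nominal link speed. A minimal sketch with assumed parameter names:

```python
def utilization_percent(bytes_per_sec: float, link_speed_bps: float) -> float:
    """Current throughput, in bits, as a percentage of nominal link bandwidth."""
    return 100.0 * (bytes_per_sec * 8) / link_speed_bps
```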
Monitors the total number of packets received per second on a network interface. Each physical packet counts as one frame, independent of the number of bytes contained in the packets. Network packets per second are useful in measuring the traffic load on a network interface. A lower rate is better. |
Monitors the total number of bytes received per second on a network interface. The total received bytes are counted independent of the number of packets in which the data is received. Network bytes per second are useful in measuring the traffic load on a network interface. A lower rate is better. |
Monitors the rate of receipt of network packets whose destination address is a broadcast address. All nodes on the network segment receive packets sent to the broadcast address. Communications between network routers regarding the network topology, advertisements from various systems, and certain protocols used in translating names to addresses are all examples of broadcast packet use. High broadcast packet rates are especially troublesome because they consume network bandwidth and resources (CPU, memory, and I/O) on all systems on the network segment (since they all receive such packets). A lower rate is better. |
Monitors the length of the network output queue in packets. This is an instantaneous count, not an average over time. |
Monitors the percentage of previously transmitted bytes that were retransmitted. |
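The retransmission percentage above is the ratio of retransmitted bytes to total transmitted bytes; a sketch with illustrative names:

```python
def retransmit_percent(retransmitted: int, transmitted: int) -> float:
    """Retransmitted bytes as a percentage of all transmitted bytes."""
    if transmitted == 0:
        return 0.0  # nothing sent, so nothing to retransmit
    return 100.0 * retransmitted / transmitted
```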
Monitors the estimated current bandwidth of the network interface measured in megabits per second. For interfaces that do not vary in bandwidth or for those where no accurate estimation can be made, this value is the nominal bandwidth. |
Monitors the CPU usage for services and processes. The rule will have a critical or warning level when a service or application's CPU usage is above the CPU High Limit defined in the Deployment Tool. |
Monitors the CPU usage for services and processes. The rule will have a critical or warning level when a service or application's CPU usage is below the CPU Low Limit defined in the Deployment Tool. |
Monitors the working set usage for services and processes. The rule will have a critical or warning level when a service or application's working set usage is above the Working Set High Limit defined in the Deployment Tool. |
Monitors the working set usage for services and processes. The rule will have a critical or warning level when a service or application's working set usage is below the Working Set Low Limit defined in the Deployment Tool. |
Monitors the virtual memory usage for services and processes. The rule will have a critical or warning level when a service or application's virtual memory usage is above the Virtual Size High Limit defined in the Deployment Tool. |
Monitors the virtual memory usage for services and processes. The rule will have a critical or warning level when a service or application's virtual memory usage is below the Virtual Size Low Limit defined in the Deployment Tool. |
Monitors the I/O rate for services and processes. The rule will have a critical or warning level when a service or application's I/O rate is above the I/O Rate High Limit defined in the Deployment Tool. |
Monitors the I/O rate for services and processes. The rule will have a critical or warning level when a service or application's I/O rate is below the I/O Rate Low Limit defined in the Deployment Tool. |
Checks if services are responding. The rule will have a critical or warning level when a service has not responded for the length of time (in seconds) defined in the Deployment Tool. |
Checks if applications are responding. The rule will have a critical or warning level if an application has not responded for the length of time (in seconds) defined in the Deployment Tool. |
Monitors for unexpected service or application statuses. |
Checks the number of concurrent instances of an executable (either an application or a service exe) running on the system. If the rule has a critical or warning level, this indicates that the number of instances is outside the Deployment Tool's defined upper or lower limit. |
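The check above can be sketched as counting matching process names and testing the count against the configured band. The names are illustrative; in the real product the limits come from the Deployment Tool:

```python
def instance_count(process_names, exe):
    """Count how many running processes share the given executable name."""
    return sum(1 for name in process_names if name == exe)

def within_limits(count, lower, upper):
    """True when the concurrent instance count is inside the configured band."""
    return lower <= count <= upper
```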
Checks for detected memory leaks. A memory leak occurs when a computer program loses the ability to free memory it no longer needs, resulting in unnecessary memory consumption. A memory leak diminishes the performance of the computer, as it becomes unable to use all of its available memory. Excessive memory leaks can lead to program failure after a sufficiently long period of time. Smaller is better. |
Checks for detected non-paged pool leaks. These are a measurement, in bytes per minute, of memory leaking non-paged or non-paging data on any Windows-based application. Non-Paged Pool Memory is typically a smaller pool of memory and cannot be paged out to disk. The non-paged pool is a set of virtual memory pages that always remain resident in RAM. Device drivers and the OS use the nonpaged pool to store data structures that must stay in physical memory and can never be paged out to disk. (For example, the TCP/IP driver must allocate some amount of nonpaged memory for every TCP/IP connection that is active on the computer for data structures that are required during processing of network adapter interrupts when page faults cannot be tolerated.) A device driver with a memory leak will eventually exhaust the supply of Nonpaged Pool Memory, which will cause subsequent allocations that request the nonpaged pool to fail. Running out of space in the nonpaged pool almost always results in a Blue Screen. Smaller is better. |
Checks for detected paged pool leaks. These are a measurement, in bytes per minute, of memory leaking paged data on any Windows-based application. Paged Pool Memory can be paged out to disk. Virtual memory for various system functions, including shared memory files (like DLLs), is allocated from the Paged Pool, which is an area of the system's virtual memory that is limited in size. A program with a memory leak that is allocating, but never freeing memory from the Paged Pool will eventually exhaust the Paged Pool. Subsequent allocations that request the Paged Pool will then fail, with unpredictable results causing applications to act erratically and in some cases failing. Smaller is better. |
Checks for detected handle leaks. A handle leak occurs when a computer program asks for a handle to a resource but does not free the handle when it is no longer used. If this occurs frequently or repeatedly over an extended period of time, a large volume of handles may be marked in-use and as a result, unavailable. This can cause performance problems or a crash. Smaller is better. |
Checks for detected GDI object leaks. A GDI Object is an object from the Graphics Device Interface (GDI) library of application programming interfaces (APIs) for graphics output devices. This is the measurement, in kilobytes per minute, of GDI object leaks on any Windows-based application. No leaks are preferred. |
Checks for detected user object leaks. A User Object is an object from Windows Manager which includes windows, menus, cursors, icons, hooks, accelerators, monitors, keyboard layouts, and other internal objects. This is the measurement, in kilobytes per minute, of user object leaks on any Windows-based application. No leaks are preferred. |
Checks for any changes in change items being monitored. The change items are selected for monitoring via the Deployment Tool. |
Checks for visits to unauthorized web sites. If the rule does not pass, this indicates that at least one web site has been visited that is specified as a web site that may not be visited (specified via the Deployment Tool). |
If a system has more than 300 active event log alarms, an Event Log Processing Paused alarm is triggered. Alarms will not be triggered for the following Event Logs until the currently triggered alarms are manually cleared:
Application Event Log
System Event Log
Security Event Log
Other Event Log |
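The pause condition above is a simple threshold test. A sketch with assumed names (the 300-alarm limit is taken from the rule description):

```python
ALARM_PAUSE_THRESHOLD = 300  # active alarm count beyond which processing pauses

def event_log_processing_paused(active_alarms: int) -> bool:
    """True once more than the threshold of event log alarms are active."""
    return active_alarms > ALARM_PAUSE_THRESHOLD
```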
Checks for active Application Event Log entries. |
Checks for active System Event Log entries. |
Checks for active Security Event Log entries. |
Checks for active Event Log entries other than Application, System, or Security Event Log entries. |
Checks for attempts to use an application that has been locked via Deployment Tool's SysLock Application Locking feature. A triggered alarm indicates that an attempt was made. |
Checks to see if the Master System has received an SNMP trap from a source other than a Child System. For example, if there is an SNMP router enabled on your network, the router can be configured to send its SNMP traps to the Master System. You can use Deployment Tool's SNMP Monitoring feature to add an SNMP alarming rule that allows SysTrack to respond to specified object identifiers (OIDs) received from this router. |
Checks for application errors. |
Checks for application hangs. |
Checks for blue screens. |
Checks for attempts to visit a website that has been blocked via Deployment Tool's SysLock Web Locking feature. |
Checks for systems added to the SysTrack deployment tree during the last auto-deployment. |
Checks for systems scheduled to be uninstalled from the SysTrack deployment tree during the last auto-deployment. |
Checks for systems that would have been added to the SysTrack deployment tree (during the last auto-deployment) had the Deployment Tool not been running in test mode. |
Checks for systems that would have been scheduled to be uninstalled from the SysTrack deployment tree (during the last auto-deployment) had the Deployment Tool not been running in test mode. |
Deployment Tool Scripting and Response Time custom alarms are used to provide application-specific monitoring. Each alarm is based on execution of a named script or program. This rule will have a critical or warning level when the time required for the script to complete exceeds the specified limit for the specified time period. |
Deployment Tool Scripting and Response Time custom alarms are used to provide application-specific monitoring. Each alarm is based on execution of a named script or program. This rule will have a critical or warning level when an attempt to run the script fails. |
Deployment Tool Scripting and Response Time custom alarms are used to provide application-specific monitoring. Each alarm is based on execution of a named script or program. This rule will have a critical or warning level when the value returned by the script exceeds the specified limit for the specified time period. |
In addition to the provided performance counters, the Deployment Tool allows you to add custom performance counters that can be monitored through alarms. This rule will have a critical or warning level alarm when the specified parameters are not met. |
Checks for COM+ bounds created during the last measurement period. A bound is an object that has been created, including those that are active and pooled. Binding is the act of connecting a client process to a serving process; it can occur either early or late. |
Checks for COM+ calls completed during the last measurement period. |
Checks for COM+ calls with unhandled exceptions during the last measurement period. |
Checks for COM+ component instances for the component in an instance container that are currently performing a method call. |
Checks for COM+ objects that are active in a pool, ready to be used by any client that requests the component. |
Checks for COM+ references (open handles). |
Checks the average call duration of COM+ method calls. |
Checks the amount of guest physical memory (in megabytes) in use by the virtual machine (VM). Active memory is estimated by VMkernel statistical sampling and represents the actual amount of memory the VM needs. The value is based on the current workload of the virtual machine. |
Checks for the amount of guest physical memory (in megabytes) reclaimed from the virtual machine (VM) by the balloon driver. |
Checks for virtual machine (VM) 'physical' memory (in megabytes) that is mapped to machine memory. Mapped memory includes shared memory amount, but does not include memory overhead. |
Checks for machine memory (in megabytes) allocated to a virtual machine (VM) beyond its reserved amount. |
Checks for guest 'physical' memory shared (in megabytes) with other virtual machines (through the VMkernel's transparent page-sharing mechanism, a RAM deduplication technique). |
Checks for the total machine memory saved (in megabytes) due to memory sharing between virtual machines (VMs) on a common host. |
Checks for guest physical memory swapped out to the virtual machine's (VM's) swap file by the VMkernel (in megabytes). Swapped memory stays on disk until the VM needs it. This statistic refers to VMkernel swapping and not to guest OS swapping. |
Checks for the balloon target memory (in megabytes) estimated by the VMkernel. This is the desired amount of virtual machine (VM) balloon memory. If the balloon target amount is greater than the balloon amount, the VMkernel inflates the balloon amount, which reclaims more VM memory. If the balloon target amount is less than the balloon amount, the VMkernel deflates the balloon, which allows the VM to reallocate memory when needed. |
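The inflate/deflate behavior described above is a comparison between the target and the current balloon size. A minimal sketch (function and parameter names are assumptions):

```python
def balloon_action(balloon_target_mb: float, balloon_mb: float) -> str:
    """Decide the balloon adjustment: inflate to reclaim VM memory when the
    target exceeds the current balloon size, deflate to return memory when
    the target is below it, otherwise hold steady."""
    if balloon_target_mb > balloon_mb:
        return "inflate"
    if balloon_target_mb < balloon_mb:
        return "deflate"
    return "hold"
```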
Checks the percentage of time that the virtual machine (VM) was ready, but could not get scheduled to run on the physical CPU. CPU ready time is dependent on the number of VMs on the host and their CPU loads. |
Checks the amount of time it takes data to traverse a network connection (a measure of latency). |
It is possible to override the SysTrack Agent's suspend command, meaning that the default scheduling is not used. This rule checks for those override exceptions. |
Checks the percentage of total usable space on the selected logical disk drive that was free. |
Monitors the CPU usage in megahertz during the interval. This rule uses a VMware host side counter. |
Monitors the actively used CPU, as a percentage of the total available CPU, for each physical CPU on the host. This rule uses a VMware host side counter. |
Monitors the total time elapsed, in seconds, since last system startup. This rule uses a VMware host side counter. |
Monitors the memory usage as a percentage of available machine memory. For a host, the percentage is calculated as follows: consumed memory ÷ machine memory size. This rule uses a VMware host side counter. |
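The formula above yields a fraction; expressed as a percentage it can be sketched as follows (names are illustrative):

```python
def host_memory_usage_percent(consumed_mb: float, machine_mb: float) -> float:
    """Consumed host memory as a percentage of total machine memory."""
    return 100.0 * consumed_mb / machine_mb
```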
Monitors the aggregated disk I/O rate in kilobytes per second. For hosts, this metric includes the rates for all virtual machines (VMs) running on the host during the collection interval. This rule uses a VMware host side counter. |
Monitors the average number of kilobytes per second read from the disk during the collection interval. This rule uses a VMware host side counter. |
Monitors the average number of kilobytes per second written to disk during the collection interval. This rule uses a VMware host side counter. |
Monitors the average rate (in kilobytes per second) at which data was transmitted across each physical network interface controller (NIC) instance on the host during the interval. This represents the bandwidth of the network. This rule uses a VMware host side counter. |
Monitors the average rate (in kilobytes per second) at which data was received across each physical network interface controller (NIC) instance on the host during the interval. This represents the bandwidth of the network. This rule uses a VMware host side counter. |
Monitors the average rate (in megabits per second) at which data is transmitted and received across all network interface controller (NIC) instances connected to the host. This rule uses a VMware host side counter. |
Checks the host server CPU core consumption. This is indicative of the level of demand placed on the system by the virtual machines (VMs) that it hosts. This rule uses a VMware host side counter. |
Checks for the percentage of time virtual machines (VMs) on the host system are unable to complete processor transactions due to contention over physical CPUs. This rule uses a VMware host side counter. |
Checks the percentage of time that the virtual machine (VM) was ready, but could not get scheduled to run on the physical CPU. CPU ready time is dependent on the number of virtual machines on the host and their CPU loads. This rule uses a VMware host side counter. |
Checks the amount of time in milliseconds the virtual machine (VM) is waiting for swap page-ins. CPU Swap Wait is included in CPU Wait. This rule uses a VMware host side counter. |
Checks the total time in milliseconds that a virtual CPU is not runnable. It could be idle (halted) or waiting for an external event such as I/O. This rule uses a VMware host side counter. |
Checks for the highest latency value (in milliseconds) across all datastores used by the host. This rule uses a VMware host side counter. |
Checks for the highest latency value (in milliseconds) across all disks used by the host. Latency measures the time taken to process a SCSI command issued by the guest OS to the virtual machine (VM). The kernel latency is the time VMkernel takes to process an IO request. The device latency is the time it takes the hardware to handle the request. This rule uses a VMware host side counter. |
Monitors the amount of memory compressed by ESX (in kilobytes). This rule uses a VMware host side counter. |
Monitors the memory compression rate (in kilobytes per second). This rule uses a VMware host side counter. |
Checks for the amount of machine memory (in kilobytes) used on the host. Consumed memory includes memory used by the Service Console, the VMkernel, vSphere services, plus the total consumed metrics for all running virtual machines (VMs). This rule uses a VMware host side counter. |
Monitors the memory decompression rate (in kilobytes per second). This rule uses a VMware host side counter. |
Monitors the percentage of time virtual machines (VMs) on the host system are unable to complete memory transactions due to contention. This rule uses a VMware host side counter. |
Monitors the average rate (in megabytes per second) at which memory is swapped out to the host swap file. This rule uses a VMware host side counter. |
Monitors the average rate (in megabytes per second) at which memory is swapped in from the host swap file. This rule uses a VMware host side counter. |
Monitors the sum of all shared metrics for all powered-on virtual machines (VMs), plus amount for vSphere services on the host (in kilobytes). The host's shared memory may be larger than the amount of machine memory if memory is overcommitted (the aggregate virtual machine configured memory is much greater than machine memory). The value of this statistic reflects how effective transparent page sharing and memory overcommitment are for saving machine memory. This rule uses a VMware host side counter. |
Monitors the amount of machine memory that is shared by all powered-on virtual machines (VMs) and vSphere services on the host (in kilobytes). Subtract this metric from the shared metric to gauge how much machine memory is saved due to sharing (shared - shared common = machine memory (host memory) savings (KB)). This rule uses a VMware host side counter. |
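The savings calculation spelled out above (shared - shared common = machine memory saved) can be sketched directly; names are illustrative:

```python
def shared_savings_kb(shared_kb: int, shared_common_kb: int) -> int:
    """Machine memory (in KB) saved by transparent page sharing."""
    return shared_kb - shared_common_kb
```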
Checks for one of four threshold levels representing the percentage of free memory on the host. The counter value determines swapping and ballooning behavior for memory reclamation.
0 (high): Free memory >= 6% of machine memory minus Service Console memory.
1 (soft): 4%
2 (hard): 2%
3 (low): 1%
At levels 0 (high) and 1 (soft), swapping is favored over ballooning. At levels 2 (hard) and 3 (low), ballooning is favored over swapping.
This rule uses a VMware host side counter. |
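The four threshold levels described above map a free-memory percentage (measured against machine memory minus Service Console memory, per the text) to a state code. A sketch with assumed names:

```python
def mem_state(free_pct: float) -> int:
    """Map free-memory percentage to the four VMkernel threshold levels:
    0 (high), 1 (soft), 2 (hard), 3 (low)."""
    if free_pct >= 6.0:
        return 0  # high
    if free_pct >= 4.0:
        return 1  # soft
    if free_pct >= 2.0:
        return 2  # hard
    return 3      # low
```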
Monitors the total amount of memory (in kilobytes) allocated by the virtual machine (VM) memory control driver (vmmemctl) for all powered-on virtual machines, plus vSphere services on the host. The vmmemctl driver is a VMware exclusive memory-management driver that controls ballooning; it is installed with VMware Tools. This rule uses a VMware host side counter. |
Monitors the number of packets received on all virtual machines (VMs) running on the host. This rule uses a VMware host side counter. |
Monitors the number of packets transmitted across each physical NIC instance on the host. This rule uses a VMware host side counter. |
Monitors the power consumption in watts. This rule uses a VMware host side counter. |
Monitors the amount of time an SMP virtual machine (VM) was ready to run, but was delayed due to co-CPU scheduling contention. This rule uses a VMware host side counter. |
Monitors the total number of audio bytes that have been received since the PCoIP session started. This rule uses a VMware counter. |
Monitors the total number of audio bytes that have been sent since the PCoIP session started. This rule uses a VMware counter. |
Monitors the bandwidth (measured in kilobits per second) used for incoming audio packets. This rule uses a VMware counter. |
Monitors the bandwidth (measured in kilobits per second) used for outgoing audio packets. This rule uses a VMware counter. |
Monitors the transmit bandwidth limit (measured in kilobits per second) used for outgoing audio packets as defined by the GPO setting. This rule uses a VMware counter. |
Monitors the number of bytes received over the PCoIP session. This rule uses a VMware counter. |
Monitors the number of bytes sent over the PCoIP session. This rule uses a VMware counter. |
Monitors the total number of packets that have been received since the PCoIP session started. Note that not all packets are the same size. This rule uses a VMware counter. |
Monitors the total number of packets that have been transmitted since the PCoIP session started. Note that not all packets are the same size. This rule uses a VMware counter. |
Monitors the total number of receive packets that have been lost since the PCoIP session started. This rule uses a VMware counter. |
Monitors the total number of seconds the PCoIP session has been open (an incrementing counter). This rule uses a VMware counter. |
Monitors the total number of transmit packets that have been lost since the PCoIP session started. This rule uses a VMware counter. |
Monitors the lowest encoded quality, which is updated every second. Not to be confused with the GPO setting. This rule uses a VMware counter. |
Monitors the total number of imaging bytes that have been received since the PCoIP session started. This rule uses a VMware counter. |
Monitors the total number of imaging bytes that have been sent since the PCoIP session started. This rule uses a VMware counter. |
Monitors the current estimate of the decoder processing capability measured in kilobits per second. 0 means unlimited. This rule uses a VMware counter. |
Monitors the number of imaging frames which were encoded over a one second sampling period. This rule uses a VMware counter. |
Monitors the bandwidth (measured in kilobits per second) used for incoming imaging packets. This rule uses a VMware counter. |
Monitors the bandwidth (measured in kilobits per second) used for outgoing imaging packets. This rule uses a VMware counter. |
Monitors the round trip latency (measured in milliseconds) between server and client. This rule uses a VMware counter. |
Monitors the overall bandwidth (measured in kilobits per second) used for incoming PCoIP packets. This rule uses a VMware counter. |
Monitors the peak bandwidth (measured in kilobits per second) used for incoming PCoIP packets. This rule uses a VMware counter. |
Monitors the percentage of received packets lost during a one second sampling period. This rule uses a VMware counter. |
Monitors the current estimate of the available outgoing network bandwidth (measured in kilobits per second). This rule uses a VMware counter. |
Monitors the overall bandwidth (measured in kilobits per second) used for outgoing PCoIP packets. This rule uses a VMware counter. |
Monitors the transmit bandwidth limit (measured in kilobits per second) used for outgoing packets as defined by the GPO setting, and the network. This value may be lower than what is entered as GPO setting. This rule uses a VMware counter. |
Monitors the percentage of transmitted packets lost during a one second sampling period. This rule uses a VMware counter. |
Checks the total number of USB bytes that have been received since the PCoIP session started. This rule uses a VMware counter. |
Checks the total number of USB bytes that have been sent since the PCoIP session started. This rule uses a VMware counter. |
Checks the bandwidth (measured in kilobits per second) used for incoming USB packets. This rule uses a VMware counter.
Checks the bandwidth (measured in kilobits per second) used for outgoing USB packets. This rule uses a VMware counter.
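Several of the PCoIP counters above are cumulative totals since session start, while the loss counters are expressed as a percentage over a one-second sampling period; the percentage form can be derived from two successive cumulative readings. A minimal Python sketch of that derivation (the function name and sample values are illustrative, not part of any collection API):

```python
def loss_percentage(prev_lost, prev_received, lost, received):
    """Percentage of packets lost over one sampling interval, computed
    from two successive cumulative (lost, received) counter readings."""
    interval_lost = lost - prev_lost
    interval_total = (received - prev_received) + interval_lost
    if interval_total == 0:
        return 0.0  # no traffic in the interval
    return 100.0 * interval_lost / interval_total

# 5 packets lost out of 100 sent during the interval -> 5.0%
print(loss_percentage(10, 1000, 15, 1095))
```

The same subtraction-of-successive-samples approach applies to any of the "total since session started" counters when a per-interval rate is wanted.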
Monitors the bandwidth used (measured in bits per second) when playing sound in an ICA session. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) when performing clipboard operations such as cut-and-paste between the ICA session and the local window. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) when routing a print job through an ICA session that does not support a spooler to a client printer attached to the client COM 1 port. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) when routing a print job through an ICA session that does not support a spooler to a client printer attached to the client COM 2 port. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) when sending data to the COM port. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) when executing LongCommandLine parameters of a published application. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) when performing file operations between the client and server drives during an ICA session. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) when streaming Flash data in an HDX-enabled session. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) on the virtual channel that prints to a client printer attached to the client LPT 1 port through an ICA session that does not support a spooler. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) on the virtual channel that prints to a client printer attached to the client LPT 2 port through an ICA session that does not support a spooler. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) when printing to a client printer through a client that has print spooler support enabled. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) from client to Citrix session device (e.g., a VM) for a session. This rule uses a Citrix counter.
Monitors the compression ratio used from client to Citrix session device (e.g., a VM) for a session. Higher is better. This rule uses a Citrix counter.
Monitors the upload speed (measured in bits per second) from the client to the Citrix session device. This rule uses a Citrix counter.
Monitors the bandwidth (measured in bits per second) from client to Citrix session device (e.g., a VM) used by a redirected Smart Card. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) from client to server for data channel traffic. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) when streaming data in a SpeedScreen Multimedia Acceleration enabled session. This rule uses a Citrix counter.
Monitors the line speed bandwidth (measured in bits per second) from client to server for ThinWire traffic. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) by a redirected USB port device. This rule uses a Citrix counter.
Checks the last recorded latency measurement for the session. This rule uses a Citrix counter.
Checks the average client latency over the life of a session. This rule uses a Citrix counter.
Checks the session deviation for latency. This represents the difference between the minimum and maximum measured latency values for a session. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) for playing sound in an ICA session. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) for clipboard operations such as cut-and-paste between the ICA session and the local window. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) when routing a print job through an ICA session that does not support a spooler to a client printer attached to the client COM 1 port. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) when routing a print job through an ICA session that does not support a spooler to a client printer attached to the client COM 2 port. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) when receiving data from the client COM port. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) when executing LongCommandLine parameters of a published application. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) when performing file operations between the client and server drives during an ICA session. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) when streaming Flash data in an HDX-enabled session. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) when routing a print job through an ICA session that does not support a spooler to a client printer attached to the client LPT 1 port. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) when routing a print job through an ICA session that does not support a spooler to a client printer attached to the client LPT 2 port. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) when printing to a client printer through a client that has print spooler support enabled. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) from Citrix session device (e.g., a VM) to client for a session. This rule uses a Citrix counter.
Monitors the compression ratio used from server to client for a session. This rule uses a Citrix counter.
Monitors the download speed (measured in bits per second) from the Citrix session device to the client. This rule uses a Citrix counter.
Monitors the bandwidth (measured in bits per second) from Citrix session device (e.g., a VM) to client used by a redirected Smart Card. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) from server to client for data channel traffic within an ICA session. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) when streaming data in a SpeedScreen Multimedia Acceleration enabled session. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) from server to client for ThinWire traffic. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) by a redirected USB port device. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) when performing management functions. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) when performing management functions. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) to negotiate licensing during the session establishment phase. Often, no data for this counter is available, as this negotiation takes place before logon. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) to negotiate licensing during the session establishment phase. Often, no data for this counter is available, as this negotiation takes place before logon. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) for delivering video frames. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) for delivering video frames. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) by Program Neighborhood (PN) to obtain application set details. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) by Program Neighborhood to obtain application set details. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) for published applications that are not embedded in a session window. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) for published applications that are not embedded in a session window. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) when initiating font changes within a SpeedScreen-enabled ICA session. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) when initiating font changes within a SpeedScreen-enabled ICA session. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) for text echoing. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) for text echoing. This rule uses a Citrix counter.
Monitors the total number of shares used by the session. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) when streaming Flash v2 data in an HDX-enabled session. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) when streaming Flash v2 data in an HDX-enabled session. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) when scanning an image into an application. This rule uses a Citrix counter.
Monitors the bandwidth used (measured in bits per second) when scanning an image into an application. This rule uses a Citrix counter.
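The Citrix latency counters described above (last recorded value, session average, and deviation, where deviation is the difference between the minimum and maximum measured values) are simple aggregates over the session's latency samples. A small Python sketch using hypothetical per-poll readings (the function names are illustrative, not Citrix APIs):

```python
def latency_deviation(samples_ms):
    """Session latency deviation: difference between the maximum and
    minimum latency measurements recorded for the session (ms)."""
    return max(samples_ms) - min(samples_ms)

def average_latency(samples_ms):
    """Average client latency over the life of the session (ms)."""
    return sum(samples_ms) / len(samples_ms)

samples = [42, 38, 55, 40, 61]      # hypothetical per-poll latency readings
print(latency_deviation(samples))   # 61 - 38 = 23
print(average_latency(samples))     # 47.2
```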
Monitors the total amount of time that elapses between the time the animated Windows logo first appears on the screen and the time you can actually begin using the system. |
Monitors the amount of time that it takes to complete group policy processing during system boot. |
Monitors the amount of time that elapses between the time the animated Windows logo first appears on the screen and the time that the desktop appears. Even though the system is usable at this point, Windows is still working in the background loading low-priority tasks. |
Monitors the amount of time it takes to complete user profile processing during system boot.
Monitors the amount of time that elapses between the time that the desktop appears and the time that you can actually begin using the system. |
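The boot-time measurements above decompose as: total boot time = (logo to desktop) + (desktop to usable), with group policy and profile processing occurring within the first interval. A trivial Python sketch of that relationship (function and field names are illustrative, not tied to any monitoring product):

```python
def boot_breakdown(logo_to_desktop_s, desktop_to_usable_s):
    """Split total boot time into the two intervals described above:
    logo-to-desktop (desktop appears, background loading continues)
    plus desktop-to-usable (system actually responsive)."""
    return {
        "time_to_desktop": logo_to_desktop_s,
        "post_desktop": desktop_to_usable_s,
        "total_boot": logo_to_desktop_s + desktop_to_usable_s,
    }

print(boot_breakdown(28.5, 12.0))  # total_boot = 40.5
```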
Monitors the rotating speed (measured in RPM) of the GPU fan. |
Monitors the operational health level of the GPU fan: 0 = unknown, 1 = normal, 2 = warning, 3 = critical.
Monitors the power state of the graphics processing unit (GPU). A GPU can be in one of 16 power states (but not all cards support all 16 states). Values are on a scale in which 0 indicates using the most power and 15 indicates using the least power. |
Monitors the temperature (in degrees Celsius) of the GPU. |
Monitors the temperature (in degrees Celsius) of the GPU memory chip. |
Monitors the temperature (in degrees Celsius) of the GPU power supply. |
Monitors the temperature (in degrees Celsius) of the GPU motherboard.
Monitors the health level of the GPU temperature: 0 = unknown, 1 = normal, 2 = warning, 3 = critical.
Monitors the GPU usage percentage (between 0 and 100%). |
Monitors the GPU frame buffer usage percentage (between 0 and 100%). The frame buffer is an area of memory used to hold the frame of data that is continuously being sent to the screen. |
Monitors the GPU video usage percentage (between 0 and 100%). |
Monitors the GPU bus usage percentage (between 0 and 100%). |
Monitors the number of bytes of memory the GPU card is using. |
Monitors the percentage of available memory the GPU card is using. |
Monitors the number of applications currently using the GPU. |
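On NVIDIA hardware, counters like these can be sampled with nvidia-smi's query interface, e.g. `nvidia-smi --query-gpu=temperature.gpu,utilization.gpu,memory.used,memory.total --format=csv,noheader,nounits`. The sketch below parses a captured output line rather than invoking the tool; the field names reflect my understanding of the CLI and should be verified against your driver version:

```python
def parse_gpu_csv(line):
    """Parse one CSV line of nvidia-smi query output into the metrics
    described above: temperature, GPU usage %, and memory usage %."""
    temp, util, mem_used, mem_total = (float(f) for f in line.split(","))
    return {
        "temperature_c": temp,
        "gpu_usage_pct": util,
        "memory_used_pct": 100.0 * mem_used / mem_total,
    }

# Hypothetical captured sample line (values in C, %, MiB, MiB)
print(parse_gpu_csv("63, 41, 2048, 8192"))
```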