I find this a highly interesting problem. How do you measure how much CPU time a program needs? The OS has ceded its control of the CPU to the program. Does it just look at the clock after it’s in charge again to derive a program’s load?
While we’re at it: How does the OS even yank the CPU away from the currently running process?
Timer-based interrupts are the foundation of pre-emptive multitasking operating systems.
You set up a timer to run every N milliseconds and generate an interrupt. The interrupt handler, the scheduler, decides what process will run during the next time slice (the time between these interrupts), and handles the task of saving the current process’ state and restoring the next process’ state.
To do that, it saves all the CPU registers (including the stack pointer, instruction pointer, etc.), updates the state of the outgoing process (runnable, running, blocked), restores the registers of the next process, changes its state to running, and then returns; the CPU resumes wherever that next process left off the last time it was running.
While it does that switcheroo, it can also tally up how long the previous process got to run; that's where per-process CPU time, and therefore CPU usage, comes from.
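If it helps, here's a toy user-space sketch of that bookkeeping. All the names (`struct task`, `scheduler_tick`, `TICK_MS`) are made up for illustration; a real kernel does this inside an interrupt handler and also saves/restores the registers, which this sketch skips:

```c
/* Toy round-robin "scheduler": a fake timer tick fires every TICK_MS,
 * charges the slice that just ended to whichever task was running,
 * and picks the next runnable task. Purely illustrative, not kernel code. */
#include <stdio.h>

#define TICK_MS 10
#define NTASKS  3

enum state { RUNNABLE, RUNNING, BLOCKED };

struct task {
    const char *name;
    enum state  state;
    unsigned    cpu_ms;   /* accumulated CPU time: the number tools like top report */
};

static struct task tasks[NTASKS] = {
    { "editor",   RUNNABLE, 0 },
    { "compiler", RUNNABLE, 0 },
    { "music",    BLOCKED,  0 },   /* waiting on I/O, never gets the CPU here */
};

static int current = 0;

/* What the timer interrupt handler conceptually does on every tick. */
static void scheduler_tick(void)
{
    /* 1. charge the slice that just ended to the outgoing task */
    tasks[current].cpu_ms += TICK_MS;
    tasks[current].state = RUNNABLE;

    /* 2. pick the next runnable task (simple round robin) */
    int next = current;
    do {
        next = (next + 1) % NTASKS;
    } while (tasks[next].state != RUNNABLE);

    /* 3. "context switch": a real kernel would save/restore registers here */
    current = next;
    tasks[current].state = RUNNING;
}

int main(void)
{
    tasks[current].state = RUNNING;
    for (int tick = 0; tick < 100; tick++)   /* simulate one second of ticks */
        scheduler_tick();

    for (int i = 0; i < NTASKS; i++)
        printf("%-9s got %3u ms of CPU in the simulated second\n",
               tasks[i].name, tasks[i].cpu_ms);
    return 0;
}
```

The two runnable tasks end up with roughly 500 ms each, and the blocked one with 0, which is exactly the per-process accounting the question was about.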
The other thing that can cause a process to change state is when it asks for a resource that will take a while to access, like waiting for keyboard input, reading from the disk, or waiting on a TCP connection. The long and short of it is that the kernel puts the process in a blocked state and waits for the appropriate I/O interrupt before making the process runnable again.
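A quick way to see that a blocked process isn't burning any CPU while it waits: time a blocking `read()` against both the wall clock and the process CPU clock. This is just my own rough sketch, assuming Linux/POSIX:

```c
/* While the process sits in a blocking read() waiting for input, wall-clock
 * time keeps passing but its CPU time barely moves, because the kernel has
 * parked it in a blocked/sleeping state and isn't scheduling it at all. */
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static double seconds(clockid_t clk)
{
    struct timespec ts;
    clock_gettime(clk, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
    char buf[64];
    double wall0 = seconds(CLOCK_MONOTONIC);
    double cpu0  = seconds(CLOCK_PROCESS_CPUTIME_ID);

    printf("type something and press enter...\n");
    ssize_t n = read(STDIN_FILENO, buf, sizeof buf);   /* blocked until input arrives */
    (void)n;

    printf("wall time: %.3f s, CPU time: %.6f s\n",
           seconds(CLOCK_MONOTONIC) - wall0,
           seconds(CLOCK_PROCESS_CPUTIME_ID) - cpu0);
    return 0;
}
```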
Or something along those lines. It’s been ages since I took an OS class and maybe I don’t have the details perfect but hopefully that gives you the gist of it.
As a non-computer scientist who programs multitasking applications, I think you did a good job of explaining context switching :)
Lmao at RAM chips and CPU cola
221% CPU usage?
100% equals one full core. Higher numbers are possible for multithreaded processes.
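For example, this little pthread demo (my own sketch, assuming Linux/POSIX; build with `cc demo.c -pthread`) keeps two threads busy for a couple of seconds. The process accumulates roughly two seconds of CPU time per second of wall time, which is what top shows as ~200%:

```c
/* Two busy threads make one process rack up CPU time about twice as fast as
 * wall-clock time passes; that ratio is exactly what a ~200% reading means. */
#include <pthread.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static volatile unsigned long sink;   /* keeps the busy loops from being optimized away */
static volatile int stop;

static void *spin(void *arg)
{
    (void)arg;
    while (!stop)
        sink++;
    return NULL;
}

static double seconds(clockid_t clk)
{
    struct timespec ts;
    clock_gettime(clk, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
    pthread_t t1, t2;
    double wall0 = seconds(CLOCK_MONOTONIC);
    double cpu0  = seconds(CLOCK_PROCESS_CPUTIME_ID);  /* CPU time charged to the whole process */

    pthread_create(&t1, NULL, spin, NULL);
    pthread_create(&t2, NULL, spin, NULL);
    sleep(2);                      /* let both threads burn CPU for ~2 s */
    stop = 1;
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    double wall = seconds(CLOCK_MONOTONIC) - wall0;
    double cpu  = seconds(CLOCK_PROCESS_CPUTIME_ID) - cpu0;
    printf("wall: %.2f s, cpu: %.2f s  ->  ~%.0f%% CPU\n",
           wall, cpu, 100.0 * cpu / wall);
    return 0;
}
```

Something like 221% just means the process's threads together used about 2.2 cores' worth of CPU over the sampling interval.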