A computer’s main processor (CPU) is designed to do essentially one thing at a time: execute the current instruction or operation. However, because of the way we humans work and use computers, we require the CPU to do many things at once (or at least to seem to be doing many things simultaneously). It is this impression that the CPU is performing many tasks simultaneously that is known as “multitasking”.
Multitasking – In reality there is no single technology that can be designated as solely responsible for a computer system’s multitasking capabilities. Multitasking is in fact the product of a collection of interrelated and complementary technologies, including dedicated system controllers, operating system support, supporting software, chipset support, drivers, BIOS support and device logic (logic boards and firmware).
Multitasking CPUs – Modern CPUs contain multiple processing pipelines, and the newest CPUs have multiple processing cores, each with its own full complement of pipelines. It is the development of CPUs with multiple, largely independent cores that has truly given the modern CPU the ability to perform several tasks simultaneously, rather than merely seeming to do so.
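The difference between one core switching tasks and several cores genuinely working at once can be sketched with a toy model. This is purely illustrative (the class and method names are my own, not a real CPU simulator): each “core” holds its own instruction stream and advances independently every cycle, so in any one cycle several instructions complete at the same time.

```python
# A toy model of multiple independent cores: each core advances its own
# instruction stream, so several instructions complete in the same cycle.

class Core:
    def __init__(self, name, program):
        self.name = name
        self.program = list(program)   # this core's private instruction stream
        self.completed = []

    def step(self):
        """Advance one cycle: retire the next instruction, if any remain."""
        if self.program:
            self.completed.append(self.program.pop(0))

def run_cycles(cores, cycles):
    trace = []
    for _ in range(cycles):
        done_this_cycle = []
        for core in cores:
            before = len(core.completed)
            core.step()                # every core steps in every cycle
            if len(core.completed) > before:
                done_this_cycle.append((core.name, core.completed[-1]))
        trace.append(done_this_cycle)
    return trace

cores = [Core("core0", ["add", "load"]), Core("core1", ["mul", "store"])]
trace = run_cycles(cores, 2)
print(trace[0])  # [('core0', 'add'), ('core1', 'mul')] — two instructions, one cycle
```

With a single core, each cycle could retire at most one instruction; with two cores, cycle 0 retires both an `add` and a `mul` at once, which is the essence of true hardware parallelism.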
For quite some time now we have had computer systems with multiple CPUs. The most extreme examples of systems composed of multiple CPUs can be found in supercomputers where massive parallelism is a key player in the overall processing prowess of these high-end systems.
Importantly, the way in which processing tasks are managed and distributed among the multiple pipelines of the multiple cores is still much the same as it has always been.
Multitasking Operating System – One mandatory prerequisite for multitasking, and for the system performance it delivers, is an operating system that supports it.
When using a multitasking operating system (such as Windows, Mac OS X or Linux), users tend to have multiple programs, utilities and applications running concurrently. For example, you may be editing a word-processing document, downloading a file from the Internet and listening to music, all at the same time.
To accomplish this, the CPU has traditionally shared its processing time among the tasks requiring its attention: user-initiated tasks, the operating system itself, programs, utilities, memory management, automatic updates and quite a few “background” services and routines. Yet this is still an illusion. It only appears that the CPU is doing many things at once because of the incredible speed at which a modern CPU can switch between tasks.
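This time-sharing idea can be sketched as a minimal round-robin scheduler (the task names and time-slice model are illustrative, not how any particular operating system implements it): a single “CPU” runs each task for one slice, then switches, so every task makes steady progress even though only one runs at any instant.

```python
# A minimal round-robin time-slicing sketch: one CPU, many tasks, each task
# gets one slice in turn until its work is done.

from collections import deque

def round_robin(tasks, slice_units=1):
    """tasks: dict of task name -> units of work remaining."""
    queue = deque(tasks.items())
    timeline = []                            # which task held the CPU each slice
    while queue:
        name, remaining = queue.popleft()
        timeline.append(name)                # the CPU attends to this task now
        remaining -= slice_units
        if remaining > 0:
            queue.append((name, remaining))  # not finished: back of the queue
    return timeline

timeline = round_robin({"editor": 2, "download": 3, "music": 1})
print(timeline)  # ['editor', 'download', 'music', 'editor', 'download', 'download']
```

The timeline shows the illusion at work: read slowly, the CPU is plainly doing one thing per slice; switch fast enough and the editor, the download and the music all appear to run simultaneously.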
CPU and System/Device Communications – The majority of a PC’s subsystems need to send information to and receive information from both the CPU and system memory (RAM). Most also expect to be able to get the CPU’s attention when they do so.
Preferential Treatment – Some computer subsystems, such as input/output (I/O) devices and human-interface devices (keyboard, mouse, etc.), tend to require “special” attention. That is to say, they want the CPU’s attention now. This class of devices also expects the CPU to drop whatever it is doing and attend to them, regardless of the importance of the CPU’s current processing task.
Variable CPU Processing Time Requirements – The various system, subsystem and peripheral devices that comprise the modern computer all require different amounts of CPU time, at irregular intervals.
The mouse, for example, needs far less attention than a hard disk transferring a multi-gigabyte file. Thus, in the interest of more efficient use of a computer’s finite resources, it is most beneficial if the amount of CPU time assigned to each device reflects the type of device and the nature of the operation and processing tasks involved.
In the above example, more resources can be allocated (even dedicated) to the hard drive for the duration of its current operations, while the mouse gets a smaller amount of CPU time sufficient for its needs.
When the hard drive has finished its current tasks, it may not be required to perform any transactions for some irregular period of time. The system will then reassign the resources that were being used by the hard drive to other devices and processes as and when required.
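One simple way to express “CPU time in proportion to need” is demand-proportional allocation. The sketch below is illustrative only (the demand figures and function name are made up for the example): each device declares how much work it has pending, and time slices are divided in proportion, so the busy hard drive receives far more CPU time than the mouse.

```python
# A sketch of demand-proportional CPU time allocation: each device gets a
# share of the available slices proportional to its declared workload.

def allocate_slices(demands, total_slices):
    """demands: dict of device name -> units of pending work."""
    total_demand = sum(demands.values())
    shares = {}
    for device, demand in demands.items():
        # Each device's slice count reflects its fraction of total demand.
        shares[device] = round(total_slices * demand / total_demand)
    return shares

shares = allocate_slices({"mouse": 1, "hard_drive": 9}, total_slices=100)
print(shares)  # {'mouse': 10, 'hard_drive': 90}
```

When the hard drive’s transfer completes, its demand drops and a fresh call to the allocator would hand those slices back to whichever devices need them, which mirrors the reassignment described above.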
Managing Processes – The computer (via the CPU) must also ensure that all active (running) processes and tasks are managed as efficiently as possible. There are basically two ways in which this can be done: CPU polling and device-initiated interrupts.
Polling – Polling is the process whereby the CPU systematically asks each device in turn whether it requires any help or CPU processing time. Polling is very inefficient because most of these queries are wasted on devices that need nothing, squandering finite resources.
Interrupting – The other approach is to have a device issue a request for the CPU’s attention as and when it requires it. This is the basic concept of the interrupt request.
Interrupt Request Prioritization – Because the CPU might receive interrupt requests (IRQs) from multiple devices concurrently, engineers created an IRQ priority table. When concurrent IRQs arrive, the CPU simply looks up their rankings in its IRQ priority table and services the device with the highest priority first.
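The lookup-and-rank behaviour can be sketched with a priority queue. The priority numbers below are loosely modelled on the classic PC scheme (lower number = higher priority) but are illustrative, not a definitive IRQ map:

```python
# A sketch of IRQ prioritization: pending requests are ranked against a
# priority table and serviced highest-priority first (lower number wins).

import heapq

IRQ_PRIORITY = {"system_timer": 0, "keyboard": 1, "sound_card": 5, "disk": 14}

def service_order(pending):
    """Return the order in which the CPU services the pending devices."""
    heap = [(IRQ_PRIORITY[device], device) for device in pending]
    heapq.heapify(heap)
    order = []
    while heap:
        _, device = heapq.heappop(heap)  # always the highest-priority request
        order.append(device)
    return order

order = service_order(["disk", "keyboard", "sound_card"])
print(order)  # ['keyboard', 'sound_card', 'disk']
```

Even though the disk raised its request at the same moment, the keyboard is serviced first because its table entry outranks the others.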
Data Transfers – To improve a computer’s overall efficiency, the CPU also needs to balance the data transfers between itself and the machine’s various other subsystems, including system memory (RAM). Most transactions performed by a computer use system memory (RAM) as a “middle-man”: for example, CPU to memory to printer, or hard drive to memory to CPU. Originally these data transfers were under the direct control of the CPU.
Direct Memory Access (DMA) – Direct Memory Access (DMA) technologies enable a device to write to or read from memory without any assistance from the CPU, freeing the CPU to perform other tasks. Once the data transfer is complete, the CPU is notified and then initiates whatever actions are required of it. Because the CPU can attend to other matters while a DMA-controlled transfer is taking place, the system truly is performing multiple tasks concurrently, even though the CPU itself is not.
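The shape of a DMA transaction can be sketched using a background thread as a stand-in for the DMA controller (everything here is an analogy, not real hardware behaviour): the transfer proceeds without the “CPU” (the main thread), which keeps doing other work and only reacts once it is notified of completion.

```python
# A DMA-style sketch: a background thread plays the DMA controller, copying
# data into "memory" while the main thread (the CPU) does unrelated work.

import threading

memory = []
transfer_done = threading.Event()    # stands in for the completion interrupt

def dma_transfer(source):
    memory.extend(source)            # device-to-memory copy, no CPU involvement
    transfer_done.set()              # notify the CPU: transfer complete

controller = threading.Thread(target=dma_transfer, args=([1, 2, 3],))
controller.start()

other_work = sum(range(1000))        # the CPU attends to other matters meanwhile

transfer_done.wait()                 # the completion notification arrives
controller.join()
print(memory, other_work)  # [1, 2, 3] 499500
```

The key point mirrors the text: the copy into `memory` and the computation of `other_work` proceed independently, and the CPU only gets involved again when the completion event fires.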
Other Technologies – Other technologies that play a role in multitasking include hyper-threading, bus mastering, the BIOS, memory technologies (cache, flash, DRAM, DDR, etc.), the system chipset, I/O controllers, the Extensible Firmware Interface (EFI), device logic (“smarter” devices) and the various system buses, along with their associated supporting software and the operating system itself through better memory management and support.