Parallel computing

Parallel computing is a type of computation in which many calculations or the execution of processes are carried out simultaneously. Large problems can often be divided into smaller ones, which can then be solved at the same time. There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism. Parallelism has long been employed in high-performance computing, but it is gaining broader interest due to the physical constraints preventing frequency scaling. As power consumption (and consequently heat generation) by computers has become a concern in recent years, parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multi-core processors.

Conventional programs are written assuming a sequential model of execution. Under this model, instructions execute one after another, atomically (i.e., at any given point in time only one instruction is executed) and in the order specified by the program.

However, dependencies among statements or instructions may hinder parallelism, that is, the parallel execution of multiple instructions, whether by a parallelizing compiler or by a processor exploiting instruction-level parallelism. Recklessly executing multiple instructions without accounting for such dependences risks producing wrong results, namely race conditions.
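The danger can be made concrete with a small sketch (not from the article; Python and the variable names are illustrative). Two threads each increment a shared counter, but the increment is a separate read and write, so updates from one thread can silently overwrite updates from the other:

```python
import threading

counter = 0
N = 100_000

def worker():
    global counter
    for _ in range(N):
        tmp = counter        # read the shared value
        counter = tmp + 1    # write it back; another thread's update
                             # made in between is lost

t1 = threading.Thread(target=worker)
t2 = threading.Thread(target=worker)
t1.start(); t2.start()
t1.join(); t2.join()

# Two threads each performed N increments, yet counter is typically
# well below 2 * N because of lost updates; the exact result varies
# from run to run, which is the hallmark of a race condition.
print(counter)
```

Running this repeatedly gives different answers, illustrating why dependences between instructions must be respected before they are executed in parallel.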

Communication and synchronization between the different subtasks are typically some of the greatest obstacles to getting good parallel program performance.

Subtasks in a parallel program are often called threads. Some parallel computer architectures use smaller, lightweight versions of threads known as fibers, while others use bigger versions known as processes. However, "threads" is generally accepted as a generic term for subtasks. Threads will often need synchronized access to an object or other resource, for example when they must update a variable that is shared between them.
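One common way to provide the synchronized access described above is a mutual-exclusion lock. The following sketch (Python; names are illustrative) guards the shared variable so that only one thread may update it at a time:

```python
import threading

counter = 0
lock = threading.Lock()
N = 100_000

def worker():
    global counter
    for _ in range(N):
        with lock:        # only one thread may hold the lock,
            counter += 1  # so no update can be lost

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # always 200000: every increment is preserved
```

The lock serializes the critical section, trading some parallelism for correctness, which is why excessive synchronization is one of the obstacles to good parallel performance mentioned earlier.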

One way to sidestep the need for synchronized access is to make data immutable: the program can create a variable with any value it wants, but it can never overwrite that value. Whenever a calculation is done, it must create an entirely new variable.
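A brief sketch of this immutable style (Python; the function and names are illustrative, not from the article). Instead of modifying a value in place, each calculation builds a new value from the old one, so concurrent readers can never observe a half-finished update:

```python
from typing import Tuple

def append_value(values: Tuple[int, ...], x: int) -> Tuple[int, ...]:
    # Returns a NEW tuple; the original is never overwritten.
    return values + (x,)

v1 = (1, 2, 3)
v2 = append_value(v1, 4)

print(v1)  # (1, 2, 3) -- unchanged, safe to share between threads
print(v2)  # (1, 2, 3, 4) -- the result lives in a new variable
```

Because no thread ever mutates a value another thread might be reading, this style removes the class of race conditions shown earlier, at the cost of allocating new values for each result.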