The HPF INDEPENDENT directive allows the programmer to give information to the compiler concerning opportunities for parallel execution. The user can assert that no data object is defined by one iteration of a DO loop and used (read or written) by another; similar information can be provided about the combinations of index values in a FORALL statement. Such information is sometimes valuable to enable compiler optimization, but may require knowledge of the application that is available only to the programmer. HPF therefore allows a user to make these assertions, and the compiler may rely on them in its translation process. If the assertion is true, the semantics of the program are not changed; if it is false, the program is not HPF-conforming and has no defined meaning.
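As a simple illustration (an assumed example, not drawn from the specification), a loop whose iterations touch disjoint array elements can carry the assertion directly:

      !HPF$ INDEPENDENT
      DO I = 1, N
         A(I) = B(I) + C(I)
      END DO

Here each iteration reads and writes only element I, so no data object is defined by one iteration and used by another, and the compiler may execute the iterations in any order or in parallel.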
In contrast to HPF 1.0, the INDEPENDENT assertion of HPF 2.0 allows reductions to be performed in INDEPENDENT loops, provided the reduction operator is a built-in, associative and commutative Fortran operator (such as .AND.) or function (such as MAX). It is often the case that a data parallel computation cannot be expressed in HPF 1.0 as an INDEPENDENT loop because several loop iterations update one or more variables. In such cases parallelism may be possible and desirable because the order of updates is immaterial to the final result. This is most often the case with accumulations, such as the following loop:
      DO I = 1, 1000000000
         X = X + COMPLICATED_FUNCTION(I)
      END DO
This loop can run in parallel as long as its iterations make their modifications to the shared variable X in an atomic manner. Alternatively, the loop can be run in parallel by making updates to temporary local accumulator variables, with a (short) final phase to merge the values of these variables with the initial value of X. In either case, the computation is conceptually parallel, but it cannot be asserted to be INDEPENDENT by the strict definition found in HPF 1.0.
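Under the relaxed HPF 2.0 rules, the accumulation above can be asserted independent by naming X in a REDUCTION clause (a sketch of the HPF 2.0 form; COMPLICATED_FUNCTION is assumed to be a pure function of I):

      !HPF$ INDEPENDENT, REDUCTION(X)
      DO I = 1, 1000000000
         X = X + COMPLICATED_FUNCTION(I)
      END DO

The clause tells the compiler that X is updated only through the associative, commutative operation +, leaving it free to choose either implementation strategy described above: atomic updates to the shared X, or per-processor partial sums merged with the initial value of X in a short final phase.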
It is worth mentioning that Fortran now includes several means to express data parallel computation: