
This is not your father's Fortran

An introduction to modern Fortran programming

In online programming discussions, Fortran seems to have gotten an undeserved bad reputation. Most of the time, comments about it are based on the Fortran 77 version of the language and fail to take into account that the language has seen four major revisions since then.

In my opinion, Fortran is a nice language for numerical programming. On this page, I will try to dispel myths about the language and demonstrate features that people do not seem to know about. The goal is to convince people that Fortran is not a horrible language to code in, and that you are not necessarily crazy for choosing Fortran for a new project in 2011.

This introduction is divided into the following sections:

  • Old Fortran
  • Arrays, modules and functions
  • Parallel programming
  • Derived types and OOP
  • Which is faster, Fortran, C or C++?
  • Fortran in the programming ecosystem
  • What's not so great about Fortran?
  • Additional resources

All code snippets can be found in the repository.

This is a work in progress, so I would appreciate any suggestions or feedback!

Old Fortran

Here are some features from Fortran 77 that people seem to think still apply (a short fixed-form example follows the list):

  • All upper case.
  • Fixed column format.
  • Short variable names. In Fortran 77, names cannot be longer than 6 characters!
  • Implicit variable declarations. Variables starting with a-h and o-z are real (float) and variables starting with i-n are integer. This easily leads to unpredictable behavior because of typos. It also leads to silly jokes like: "GOD is REAL, unless declared INTEGER"...
  • Extensive use of goto statements.
  • Static memory allocation.
  • No recursion. The standard does not allow recursive procedures, and since typical implementations store local variables and return addresses statically rather than on a stack, attempts at recursion fail to return correctly beyond one level.
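
To make the contrast concrete, here is a short snippet in that old fixed-form style (a made-up example, not from the repository): comments start with a C in column 1, labels live in columns 1-5, statements begin in column 7, and I and TOTAL are never declared but get their types from the implicit rules.

C     OLD-STYLE FORTRAN 77: FIXED FORM, IMPLICIT TYPING, GOTO
      PROGRAM OLDSUM
      DIMENSION X(10)
      TOTAL = 0.0
      DO 10 I = 1, 10
         X(I) = I
         TOTAL = TOTAL + X(I)
   10 CONTINUE
      IF (TOTAL .GT. 50.0) GOTO 20
      WRITE(*,*) 'SMALL TOTAL', TOTAL
      GOTO 30
   20 WRITE(*,*) 'BIG TOTAL', TOTAL
   30 CONTINUE
      END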

Fortunately, newer language revisions have addressed all these issues!

Arrays, modules and functions

The main feature of Fortran is the excellent support for multi-dimensional arrays. Here is a short program demonstrating some of these features. It can be found in the arrays folder in the repository. The program also demonstrates the module system and some of the features of functions. Some things to notice about this program:

  • Modules are a nice way of organizing code, and no header files are necessary!
  • A subroutine or function can take functions as arguments. We can specify what such a function must look like using an interface block; trying to pass a function that does not match the interface gives a compile error.
  • The pure keyword means that the function is free of side effects, i.e. it cannot perform I/O or modify its arguments. This makes the code easier to reason about.
  • Function arguments can be named, and we can have optional arguments.
  • Arrays know their own size, and accessing values out of bounds gives a compile error when the bounds are known at compile time and a runtime error otherwise. Of course, runtime bounds checking can be expensive, so it has to be explicitly turned on using compiler flags (-fbounds-check for gfortran).
  • Array slicing is extremely convenient for performing complex array operations.
  • All common operations on arrays, plus several exotic ones, are supported, and linear algebra operations are built in.

module array_operations

private
public :: foo

contains

    subroutine foo(func,a,b)
        ! Only functions matching the interface are valid
        interface
            pure function func(x) result(y)
                real, dimension(:), intent(in) :: x
                real, dimension(size(x)) :: y
            end function func
        end interface
        real, dimension(:),   intent(inout) :: a
        real, dimension(:,:), intent(inout) :: b
        ! An array knows its size
        real, dimension(size(a)) :: c

        c = func(a)

        ! Built-in linear algebra functions
        a = matmul(b,c)
        b = matmul(b,b)

    end subroutine foo

end module array_operations

module functions

contains

    ! The pure keyword means that the function can not perform IO 
    ! and can not modify its input.
    pure function transform(x) result(y)
        real, dimension(:), intent(in) :: x
        real, dimension(size(x)) :: y

        y = 3*x + 1

    end function transform

    real function cylinder_volume(radius,height,pi)
        real, intent(in) :: radius, height
        real, intent(in), optional :: pi
        real, parameter :: pi_exact=4*atan(1.0)

        if (present(pi)) then
            cylinder_volume = pi*radius**2*height
        else
            cylinder_volume = pi_exact*radius**2*height
        endif

     end function cylinder_volume

end module functions

program arrays
    use array_operations, only: foo
    use functions
    integer, parameter :: n = 3
    real, dimension(n) :: a = (/ 2,4,6 /)
    real, dimension(:,:), allocatable :: b
    integer :: i,j

    ! Dynamic memory allocation
    allocate(b(n,n))

    do j=1,n
        do i=1,n
            b(i,j) = i*j
        enddo
    enddo

    call foo(transform,a,b)

    write(*,*) 'a',a
    write(*,*) 'b'
    do i=1,n
        write(*,*) b(i,:)
    enddo

    ! Named function arguments and optional arguments
    write(*,*) 'Accurate cylinder  ',cylinder_volume(radius=3.0,height=4.0)
    write(*,*) 'Inaccurate cylinder',cylinder_volume(radius=3.0,height=4.0,pi=22.0/7.0)

    ! Array slicing
    write(*,*) 'Slice', a(2:n)-a(1:n-1)

    ! Some other built-in functions
    write(*,*) 'Sum', sum(a)
    write(*,*) 'Values larger than 3?', any(a>3)
    where (a>3)
        a=0.0
    end where
    write(*,*) 'Values larger than 3?', any(a>3)

    ! This gives a compiler error
    ! write(*,*) a(7)

    ! This gives a runtime error if compiled with -fbounds-check
    ! write(*,*) b(7,7)

    deallocate(b)

end program arrays

Parallel programming

In this age of multi-core computers, support for parallel computing is very important. Fortunately, Fortran has many options for parallel programming.

  • For distributed memory, Fortran can use the MPI API.
  • For shared memory, Fortran can use the OpenMP API (a short sketch follows this list).
  • For GPUs, Fortran supports the CUDA architecture using CUDA Fortran from the Portland Group.
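
As a taste of the shared-memory option, here is a minimal OpenMP sketch (a made-up example, not from the repository); with gfortran it is compiled with the -fopenmp flag:

program openmp_sum
    implicit none
    integer, parameter :: n = 1000000
    integer :: i
    real :: total

    total = 0.0
    ! Split the loop over the available threads; the reduction clause
    ! combines the per-thread partial sums into total.
    !$omp parallel do reduction(+:total)
    do i = 1, n
        total = total + 1.0/real(i)**2
    enddo
    !$omp end parallel do

    print *, 'Sum of 1/i**2:', total   ! approaches pi**2/6
end program openmp_sum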

However, here I want to demonstrate co-arrays, which is Fortran's built-in support for parallelism. This was introduced in the latest revision, Fortran 2008, so compiler support is not great at the moment. The G95 compiler has the best support among the free compilers.

So what are co-arrays? According to John Reid, "Coarrays were designed to answer the question: What is the smallest change required to convert Fortran into a robust and efficient parallel language? The answer: a simple syntactic extension. It looks and feels like Fortran and requires Fortran programmers to learn only a few new rules."

So let's dive in and look at an example, numerical integration using the trapezoidal rule. This example can be found in the coarrays folder in the repository.
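
For reference: with n subintervals of width dx = (b-a)/n, the composite trapezoidal rule approximates the integral of f over [a,b] as

    dx * ( 0.5*f(a) + f(a+dx) + f(a+2*dx) + ... + f(a+(n-1)*dx) + 0.5*f(b) )

which is exactly what the trap function below computes.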

real pure function f(x)
  real, intent(in) :: x

  f = x**2*cos(x)*exp(-x**3)+exp(-1./x**2)*cosh(x)     &
    - (sin(10*x))**3/x - log(x) + 1.0-exp(-sin(x)/2.0) &
    - (cos(10*x))**2/sqrt(exp(x) - log(x))

end function

real pure function trap(f,a,b,n,dx)
    interface
        real pure function f(x)
            real, intent(in) :: x
        end function f
    end interface        
    real, intent(in) :: a,b,dx
    integer, intent(in) :: n
    !
    real :: x
    integer :: i

    trap=0.5*(f(a) + f(b))
    x = a
    do i=1,n-1
        x = x+dx
        trap = trap+f(x)
    enddo

    trap=trap*dx

end function trap

program trapezoid

    real :: a,b,dx,local_a,local_b,res
    real :: local_result[*]
    integer :: n,local_n,rank,nproc
    external :: f

    nproc = num_images()
    rank = this_image()

    a=1.0 
    b=10.0 
    n=2e7
    dx = (b-a)/n

    if( .not. mod(n,nproc)==0) stop "n must be divisible by nproc"
    local_n=n/nproc
    local_a=a+(rank-1)*local_n*dx
    local_b=local_a+local_n*dx

    local_result = trap(f,local_a,local_b,local_n,dx)

    ! Ensure that all processes are finished computing
    sync all
    ! Now, assemble total results on image 1
    if (rank==1) then

        res=local_result
        do i=2,nproc
            res = res+local_result[i]
        enddo

        print *, 'Total result',res

    endif

end program trapezoid

The new syntactic extension is the brackets. Declaring local_result[*] makes it a coarray: every parallel process (called an image) gets its own copy of the variable, and any image can read or write another image's copy simply by giving the image index in brackets, as in local_result[i]. The same holds for coarray arrays: each image keeps its own complete copy of the whole array.
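
Here is a minimal sketch of a coarray array (a made-up example, not part of the repository code): each image fills its own copy of x, and image 1 then reads the copy belonging to the last image.

program coarray_demo
    implicit none
    real    :: x(4)[*]     ! every image gets its own 4-element array
    integer :: me

    me = this_image()
    x = real(me)           ! each image writes only to its own copy

    sync all               ! wait until every image has written x

    if (me == 1) then
        print *, 'Image', num_images(), 'holds', x(:)[num_images()]
    endif

end program coarray_demo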

I tested the code on an Intel i5 quad core; the timing results are shown in timing.png.

So the code runs about 3 times faster with all four processes, not too shabby for such a simple change. In principle, this will work on both shared memory and distributed memory architectures.

Derived types and OOP

Derived types are essentially like C structs. Here's one example with a particle in the 2D plane having a position, a velocity and a mass:

type particle
    real,dimension(2) :: pos,vel
    real :: mass
end type particle

Support for object-oriented programming was added in the 2003 revision, and compiler support is still spotty. The standard includes most OOP features: inheritance, interfaces, abstract classes, polymorphism, etc. Here I extend the previous particle type with a procedure for calculating the momentum. I also create a subclass, charged_particle, which has an extra variable. The example has been tested with the Intel Fortran compiler, which is free on Linux for non-commercial use, and it also runs with the latest gfortran version. This example is a slightly modified version of an example from the book Fortran 95/2003 Explained, and can be found in the OOP folder in the repository.

module particle_m
  implicit none

  type particle
    real,dimension(2) :: pos,vel
    real :: mass
  contains
    procedure :: momentum => particle_momentum
  end type particle
   
  type, extends(particle) :: charged_particle
    real :: charge
  end type charged_particle
 
contains

  function particle_momentum(this) result(pm)
    implicit none
    class(particle) :: this 
    real, dimension(2) :: pm
   
    pm = this%mass*this%vel
   
  end function particle_momentum

end module particle_m

program particle_test
use particle_m
type(particle) :: p
type(charged_particle) :: pc

p=particle( (/1.0,2.0/), (/3.0,4.0/), 5.0 )
pc=charged_particle( (/2.0,1.0/), (/4.0,3.0/), 8.0, 6.0 )

print *, 'Particle:'
print *, 'Position: ', p%pos
print *, 'Velocity: ', p%vel
print *, 'Momentum: ', p%momentum()

print *, 'Charged particle:'
print *, 'Position: ', pc%pos
print *, 'Momentum: ', pc%momentum()
print *, 'Charge:   ', pc%charge

end program particle_test
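
The example above shows inheritance and type-bound procedures. Runtime polymorphism works through the class keyword and select type; here is a minimal sketch reusing the particle_m module (a made-up program, not part of the repository example):

program poly_demo
    use particle_m
    implicit none
    class(particle), allocatable :: p

    ! The declared type is particle, but the dynamic type is decided
    ! at runtime; here we allocate a charged_particle.
    allocate(p, source=charged_particle( (/0.0,0.0/), (/1.0,1.0/), 2.0, -1.0 ))

    ! Type-bound procedures can be called on the polymorphic variable
    print *, 'Momentum:', p%momentum()

    ! select type recovers the dynamic type when we need the extra fields
    select type (p)
    type is (charged_particle)
        print *, 'Charge:', p%charge
    type is (particle)
        print *, 'No charge'
    end select

end program poly_demo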

Which is faster, Fortran, C or C++?

A common answer to the question of why Fortran is still used is that it is faster than C and C++, especially for linear algebra. In my opinion, Fortran is just a nice language to use, even without the speed, but let us look at the infamous Computer Language Benchmarks Game. The benchmark most relevant to what Fortran is used for is the spectral-norm benchmark. Here, Fortran is significantly faster than both C and C++. And more importantly, the code is about half the size! If you look at the Fortran source code, it is mostly straightforward Fortran, while the C++ code is a lot longer and more obscure. There is a rather nice, straightforward C++ version, but that one is almost 3 times slower than the Fortran.

In other words, it seems to be easier to write performant code in Fortran, which I believe is a big reason why it is still popular among scientists.

Fortran in the programming ecosystem

Here I will discuss some aspects of programming in Fortran. First of all, compilers. The obvious choice is gfortran, which works on all major platforms, is reasonably full-featured and generates decent machine code. Intel's ifort compiler is generally faster, but it is only free for non-commercial use. As far as I know, the only compiler with full support for the 2003 revision is the Portland Group compiler, which is also commercial.

Another important aspect is integration with other languages. These days, a lot of libraries are written in C, since this makes it easy for other languages to use them. It used to be difficult to link to C libraries from Fortran, but fortunately this has changed with the 2003 revision. Let us say we have a super-advanced C function that we want to call from Fortran:

void cylinder_volume(float pi, float r, float h, float* vol){

    *vol = pi*r*r*h;
}

With the new features from Fortran 2003, this is how we call this function from Fortran:

program interop
    use iso_c_binding
    interface
        subroutine cylinder_volume(pi,r,h,vol) bind(C, name="cylinder_volume")
            use iso_c_binding
            real(c_float), value :: pi
            real(c_float), value :: r
            real(c_float), value :: h
            real(c_float) :: vol
        end subroutine
    end interface
    real :: pi,r,h,vol

    pi = 4*atan(1.0)
    r = 1.0
    h = 2.0

    call cylinder_volume(pi,r,h,vol)
    print *, 'Volume:', vol

end program interop

The module iso_c_binding contains constants which ensure correct interoperability between Fortran types and C types. Here, we use the c_float constant to specify the real precision. The bind attribute declares that this function is interoperable with C. By default, Fortran passes all arguments by reference, so we need to specify the "value" attribute to pass arguments by value. This program is available in the c_interop folder of the repository.
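
Calling Fortran from C is also made easier by the iso_c_binding module: bind(C) fixes the symbol name and the calling convention, so a Fortran routine can be called from C like any other C function. As a minimal sketch (the name sphere_volume and its arguments are made up for illustration), the following subroutine would be declared on the C side simply as void sphere_volume(float r, float *vol):

subroutine sphere_volume(r, vol) bind(C, name="sphere_volume")
    use iso_c_binding, only: c_float
    implicit none
    ! r is received by value, like a plain C float;
    ! vol is passed by reference, i.e. as a float* on the C side.
    real(c_float), value       :: r
    real(c_float), intent(out) :: vol

    vol = 4.0/3.0*4*atan(1.0)*r**3
end subroutine sphere_volume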

Let us now look at how to call Fortran from Python. It is easy to imagine having a "hot spot" function that you want to speed up when programming in Python. For instance, here is the trapezoidal rule example from earlier written in Python, using the implementation found on Wikipedia:

def f(x):
  from math import sin,cos,exp,cosh,log,sqrt

  return x**2*cos(x)*exp(-x**3)+exp(-1./x**2)*cosh(x)  \
    - (sin(10*x))**3/x - log(x) + 1.0-exp(-sin(x)/2.0) \
    - (cos(10*x))**2/sqrt(exp(x) - log(x))

def trapezoidal_rule(a,b,n):

    return (b-a) * ( 0.5*(f(a) + f(b)) + sum([f(a + (b-a)*k/n) for k in xrange(1,n)]) ) / n

if __name__ == "__main__":
    a=1.0
    b=10.0
    n=20000000
    print trapezoidal_rule(a,b,n)

This implementation is extremely slow; on my machine it takes 1m45s to finish! Now, we could probably get some more speed out of it using NumPy, but let us use Fortran instead. To do this, we can use the f2py tool, which is part of NumPy. Here is the Fortran code:

subroutine trap(a,b,n,res)
    implicit none
    real (kind=8), intent(in) :: a,b
    integer, intent(in):: n
    real (kind=8), intent(out) :: res
    !f2py intent(in) :: a,b,n
    !f2py intent(out) :: res
    integer :: i
    real(kind=8) :: dx

    dx = (b-a)/n
    res=0.5*(f(a) + f(b))
    do i=1,n-1
        res = res+f(a+i*dx)
    enddo
    res=res*dx

contains
    real(kind=8) pure function f(x)
        real(kind=8), intent(in) :: x

        f = x**2*cos(x)*exp(-x**3)+exp(-1./x**2)*cosh(x)   &
        - (sin(10*x))**3/x - log(x) + 1.0-exp(-sin(x)/2.0) &
        - (cos(10*x))**2/sqrt(exp(x) - log(x))

    end function
end subroutine

For simplicity, I have hard-coded the integrand as a contained function here. We see that the only difference from normal Fortran is the comment hints for f2py. We can now use f2py to generate a Python module as follows:

f2py -c trapezoidal.f90 -m fortran_module

We use the module like any other Python module:

def trapezoidal_rule_fortran(a,b,n):
    import fortran_module

    return fortran_module.trap(a,b,n)

In the py_interop folder in the repository, I have written a Python program that compiles the Fortran code and then compares the speed of the two functions. On this particular example, the Fortran code is about 25 times faster!

What's not so great about Fortran?

So now you are probably convinced that Fortran is an excellent language and can't wait to start your next project in it, right? Hold on for just one second; here are a couple of things Fortran doesn't do very well:

  • Manipulating strings is painful.
  • While array support is great, there is little support for common data structures and algorithms; there is nothing like C++'s STL.
  • Fortran actually has a kind of generics, but the implementation is not great: you still have to write code for every type you want to support. It is nice to have for mathematical functions like abs() and min(), though. Here's a quick generics example; it can be found in the generic folder in the repository.

module foo_m
    
    interface foo
        module procedure foo_r
        module procedure foo_i
        module procedure foo_c
        module procedure foo_2i
    end interface

contains

    subroutine foo_r(f)
        real, intent(in) :: f
        print *, 'This is a real ',f
    end subroutine foo_r

    subroutine foo_i(f)
        integer, intent(in) :: f
        print *, 'This is an int ',f
    end subroutine foo_i

    subroutine foo_c(f)
        character(len=*), intent(in) :: f
        print *, 'This is a string ',f
    end subroutine foo_c

    subroutine foo_2i(f,g)
        integer, intent(in) :: f,g
        print *, 'This is two ints',f,g
    end subroutine foo_2i

end module foo_m

program generics
    use foo_m

    call foo(2.71828183)
    call foo(3)
    call foo('Euler number')
    call foo(3,4)

end program generics
  • Probably more; any suggestions?

Additional resources
