The U.S. Department of Energy’s National Nuclear Security Administration (NNSA) and its three national labs this week announced an agreement to develop an open-source Fortran front-end for High Performance Computing (HPC). The agreement is with IBM? Microsoft? Google? Nope, the agreement is with NVIDIA, a company known for making graphics cards for gamers.
The heart of a graphics card is the graphics processing unit (GPU), an extremely powerful computing engine. It has far more raw number-crunching horsepower than the host CPU, though not as much as some claims suggest. A number of years ago NVIDIA branched into providing compiler toolsets for their GPUs, with the obvious goal of driving sales. NVIDIA will use their existing Fortran compiler as a starting point and integrate it with the existing LLVM compiler infrastructure. That Fortran, it just keeps chugging along.
You can try out GPU programming on your Raspberry Pi. Yup! Even it has one, a Broadcom. Just follow the directions from Raspberry Pi Playground. You’re going to get your hands dirty with assembly language, so this is not for the faint-hearted. One of the big challenges with GPUs is exchanging data with them, which gets into DMA processing. You could also take a look at [Pete Warden’s] work on using the Pi’s GPU.
Still wondering about the performance of CPU vs GPU? Here’s Adam Savage taking a look…
21 thoughts on “DOE Announces A High Performance Computing Fortran Compiler Agreement”
I’m not sure what this means. An open source frontend for Fortran?
Does this mean the CUDA extensions will be made available through API calls from an open-source front-end Fortran compiler?
You’ve got the 3D turned on in the video, there. The YouTube original doesn’t have it, or at least it doesn’t show on this 2D monitor I’ve got. Maybe check the link for a 3D flag.
Why Fortran? It’s a pretty old language and inefficient compared to what we can use now. The only reason I could see for keeping it alive is to maintain communication with ancient probes that were launched when I was still in diapers.
inefficient compared to what we can use 
Fortran is still a great language for scientific computing. It handles arrays and math very well, and in my opinion is less cumbersome than C/C++ for those applications. It’s always been just as present as C in parallel frameworks like MPI/OpenMP, and in fact they’re a little more native in Fortran than C, if I remember right. The scientific computing community still uses Fortran as well, so it’s not just about legacy code. It will be good to leverage GPUs with Fortran.
I think it’s apposite to put a great quote about C in here…
“All the advantages of assembly language, with all the disadvantages of assembly language”.
Efficient coding is more a job for compilers nowadays, even for stuff like highly graphical 3D games running on commodity hardware. Understanding the compiler, and the machine underneath, can guide you to write C that runs a bit smoother, but there’s been plenty of examples of compiled C matching hand-written assembler. Indeed a compiler can sometimes do better. Particularly if the Cybermen gestalt who wrote the compiler understand low-level stuff, and algorithms, better than you do.
Other thing is, it’s so easy to make a mistake in C and not notice it. You’re often blind to your own errors: seeing what’s supposed to be there and missing a semicolon, misunderstanding levels of pointer indirection, or not seeing that the actual logic isn’t quite the intended logic, even though it looks a lot like it!
That’s what higher-level languages are for, as a tool to mediate between human ideas and dumb transistors. Although AIUI FORTRAN isn’t too bad for efficiency, it relates pretty well to the low-level stuff it ends up as.
Even if it ran slower, which isn’t really the case, it’s better to lose 10% of speed, than the whole program run, because you got the program wrong so the results are useless. And even supercomputers are benefitting from the increased processing power we all have now, that’s making low-level development less popular.
I’ve never tried to translate a formula into it, but I’d guess the people who wrote Fortran had its intended uses in mind pretty strongly. C is much more the other direction, taking machine code and trying to make it more manageable. C is the language I use most, but really it’s pretty terrible in a lot of ways. It’s not designed around humans, even ones with stack-based brains.
There’s also the large amount of existing, standard, well-tested Fortran code, for scientists and mathematicians.
As an aside, Fortran is a lot of what BASIC evolved from, and that’s supposed to be one of the most usable languages for beginners.
Oh, and, as far as compilers go, computers can handle an endless number of lists of data, shift them about as needed, and never forget. That’s a big advantage they have over us, which means meeting us halfway with higher-level languages leaves each side, the computer and the programmer, to do what it’s best at.
Just like with other languages, it’s all about libraries, frameworks, programs, machine-generated code, verified/proven/legacy code, etc. Outside the computing circle, few are interested in the language of the week.
Nuclear Engineer here. Why FORTRAN?
FORTRAN was used by nuclear engineers from the start, and still is today, for a couple big reasons:
1. It’s EXTREMELY expensive (in engineering resources) to translate safety-related software from FORTRAN to something newer.
2. FORTRAN (like C) can run very quickly, whereas C++ or newer languages might take 4x as much work to perform the same task. When you are counting neutrons, gammas, etc., the more processing power and CPU time you have, the better your statistics will be on your calculated solutions.
There are some space nuclear power and propulsion people out there, and for them sure you have to continue to use what you started with. But for most of us terrestrial (and I’d imagine even the naval variety too), it’s really about cost (for safety reasons though), and speed.
This is really exciting news because the calculations we often do involve 3D models of reactors, rooms, or sometimes medical patients for diagnostics and/or treatments, and GPUs are of course much better than most CPUs at specifically this type of rendering and modeling, and associated calculations.
So, it ultimately means safer, cheaper, and quicker — a rare feat.
Please read my columns on Embedding C++. You’ll find that C++ is not slower. Also look back at the other FORTRAN articles I’ve written. C++ and FORTRAN are being used in combination for newer work.
A big argument for FORTRAN is that the compiler knows that, say, a function call cannot modify memory through some pointer… FORTRAN does not hand out pointers freely. This means that generating code for a GPU becomes much easier: the compiler can safely issue the processing of, say, 1024 array elements at once.
Actually, Fortran pointers have been around for years. Cray Fortran had them for years before Fortran 90 added them to the standard. Traditionally Fortran was faster due to the orderly code (compiler optimizers could generate better code, and pointers made code harder to predict). Now Fortran/C/C++ are similar in performance; user coding is the primary issue (like: do not do system calls, I/O, or inter-process communication if you can avoid it, and other basic parallel performance issues). For GPUs you need to write with CUDA or OpenCL, so it is basically a rewrite for GPUs. Java was slow and a memory hog and has also improved. However, most nuclear codes were written decades ago, and most of the engineers are more comfortable in Fortran; those are the main reasons for sticking with it. Mixing C and Fortran has been done for over a decade, and C++ became more mature around 1997-2000 as it got standardized. There has been work in Java also (even if I have not been a fan :) ).
I like how, in Mathematics and supercomputer programming in general, it’s “codes” and not “code”. A bit of etymology I’d like to know about.
Half-way there. Fortran does have pointers and functions can have side effects, but they have to be explicitly declared every single time, so the compiler can optimize in every situation where there aren’t pointers or side effects. Conversely, C allows pointers to anything, with no restriction on how functions treat their arguments, so the compiler has to account for that unless you explicitly tell it *not* to, if that’s even possible; I think it is, but I don’t remember. I just went on and happily used Fortran.
Fortran has been used in science/engineering for decades, and has had pointers for decades on Crays, officially since Fortran 90. C++ is relatively new by comparison: in 1997 the upcoming standard was still changing almost daily, the first standard landed in 1998, and the major C++11 revision wasn’t finished until 2011. Most of the codes mentioned are in Fortran and have been for many decades. C++ has developed, as has Java; both used to be slow with many problems, and both have stabilized over time. And yes, many of the science codes mix Fortran/C or have moved to C++ or even Java. However, it is often a pain to mix Fortran/C codes, especially these codes, which may be written in very archaic versions of Fortran. For GPUs this compiler would likely be used to offload a portion of the code to a GPU rather than re-write millions of lines. Other Fortran codes have been re-written in C/CUDA for things like seismic processing with great results (SPECFEM3D).
As long as you don’t try to pass strings, mixing C & ancient FORTRAN code is quite simple. Actually no different than making a regular function call in C. Just recognize that the argument passing is different.
For this to actually be significant it must be possible to run unmodified FORTRAN on the GPU. When the VAX 11/780 was the dominant workhorse, it was typical to have an FPS-120 array processor attached to the system. However, it was VERY painful to rewrite code to use the AP because of the need to explicitly transfer data between main memory and the AP. Since the ’80s array processors have come and gone several times. This is just the latest incarnation. The source code for major seismic processing packages like DISCO/Focus is littered with now-obsolete sections that set up to pass data to an AP. The actual AP calls have been replaced with new code to use whatever is currently available. I’m sure in some cases the old AP code has been rewritten to use a GPU, but in many cases it’s not worth the memory transfer overhead.
As for the obsolescence of FORTRAN, that is mostly the view of people that don’t write scientific codes and don’t have to deal with indexing through very large, multidimensional arrays. Good programmers use the language appropriate to the task at hand and are fluent in several.
“Good programmers use the language appropriate to the task at hand and are fluent in several.”
Has nobody thought “we are all screwed” when reading that Nvidia will supply stuff to nuclear power plants? Does nobody remember what Linus said some years back?
I don’t! What did he say?
Euphoria, the best programming language no-one seems to know about, open source, but needs developers to bring it into mainstream.
FORTRAN is far from dead. It is still one of the primary high-level high performance computing (HPC) languages for numerical analysis, simulation, finite-element analysis, etc. Much of what exists today in terms of optimized numerical and statistical libraries (regardless of host language) has its roots firmly embedded in Fortran (e.g., IMSL and SPSS, to name just two from memory). If there’s a brand new supercomputer whirring away near you, you can bet Fortran jobs are running on it.