Have you ever wondered if there is a correlation between a computer’s energy consumption and the choice of programming languages? Well, a group of Portuguese university researchers did and set out to quantify it. Their 2017 research paper entitled Energy Efficiency across Programming Languages / How Do Energy, Time, and Memory Relate? may have escaped your attention, as it did ours.
Abstract: This paper presents a study of the runtime, memory usage and energy consumption of twenty-seven well-known software languages. We monitor the performance of such languages using ten different programming problems, expressed in each of the languages. Our results show interesting findings, such as, slower/faster languages consuming less/more energy, and how memory usage influences energy consumption. We show how to use our results to provide software engineers support to decide which language to use when energy efficiency is a concern.
While we might take issue with some of the programming languages selected as being “well known”, the project was very thorough and quite well documented. Most people would take for granted that a computer program which runs faster will consume less energy. But this might not always be true, as other factors enter into the power consumption equation besides speed. The team used a collection of ten standard algorithms from the Computer Language Benchmarks Game project (formerly known as The Great Computer Language Shootout) as the basis for their evaluations.
Last year they updated the functional language results, and all the setups, benchmarks, and collected data can be found here. Check out the paper for more details. Has your choice of programming language ever been influenced by energy consumption?
And where is FORTH?
on the stack, i suppose ;-)
Combined with Go,
as in Go Forth.
Now I want to make a programming language named Prosper which is that combination.
Or matlab, forth and go.
Then you can “go forth and multiply”
That would be the forth joke on this thread.
dup 1 + though it should be included.
Go Forth, be Conquered
https://www.youtube.com/watch?v=UiJk_RcGblU
Personally I was looking, knowing I wouldn’t see it, for something like Brainf#ck. It would be interesting to see where some of the esoteric languages fare on this sort of ranking. Perhaps one could even bracket Conway’s game of life into that category, or dare I suggest, even Minecraft, for the more Rube Goldbergian solution.
In 4th place, of course.
May the Forth be with you.
This is extremely interesting to me.
That C wins is no big surprise. That Python really sucked is also no surprise.
But I am very surprised that Rust came so close to C. Equally surprised that Rust beat Go by as much as it did.
Any test of numerical algorithms written in ‘pure Python’ will probably always show that it is slower than the same code written in C. But a lot of the time the code doing the hard work in Python scripts isn’t ‘pure Python’ but uses a library like NumPy or wrappers for OpenCV or whatever, and I thought these generally used things like C or Fortran, on top of the MKL etc., to do the actual hard work?
Rust was basically designed to replace C with a memory safe language, so why would it surprise you that it’s pretty comparable? The relationship between C, Rust, and Go is exactly what I expected to see and falls right in line with each of their design philosophies.
Rust is a low level systems programming language with no runtime and small standard library (and lots of zero-cost abstractions making it feel high level). It’s meant to be a direct competitor to C. Go has a runtime and garbage collection; I’m impressed that Go does as well as it does against other garbage collected languages.
Rust was a few percent behind C 4 years ago; at the latest version of the benchmarking game, Rust is beating C gcc in 7 out of 10 benchmarks and beating C clang in 6 out of 10 benchmarks (looking at time only).
That said, the individual benchmark results vary so much from problem to problem that there’s likely a significant effect from the code quality of each solution.
Probably between C# and Javascript.
I’m having trouble believing this.
I see LUA, known to be a very fast language, at the bottom of the “time” list.
“I see LUA, known to be a very fast language”
Source required. This might be true but, putting my finger in the wind, I’d say as an interpreted language this is unlikely to be the case. From what I’ve found so far, Lua seems to be a fast language… among interpreted ones. So it kinda makes sense that it’s faster/cheaper than Python and slower/more expensive than C. If you look closely, most of the worst-performing languages are interpreted. No magic happening there.
Almost as fast as pure C.
https://www.quora.com/Why-is-Lua-so-fast
You are confusing LuaJIT and normal Lua. LuaJIT compiles to native code and is not representative of 1) the general Lua use case or 2) the performance of the Lua interpreter (but it’s a nice perf boost to have a JIT version).
Actually LuaJIT is increasingly becoming the general Lua usecase. A lot of software that embeds Lua is using LuaJIT as the default, and adoption of new versions of mainline Lua is pretty low.
Just to contribute to the discussion, there is also a language called Luau, which is closer to the original mainline Lua in that it’s not a JIT-compiled language, but it still manages to have very solid performance (they claim, I never tested), which sort of gives credit to the thought that the wasted energy is largely more a result of bad engineering than language design or whether it’s interpreted or compiled.
The question is what people usually do with LUA. In most cases, isn’t it used to invoke parts of C++ (or other compiled) classes in a script fashion? If you instead try to make it sort arrays within the language constructs itself, things change a lot.
I had a similar surprise with Perl, which I used a lot at one time to automate some functions where I needed max reactivity (find a document containing a keyword, add a comment to it, etc.). Java was terrible at the task because it took up to 5 seconds just to get the JVM in place and start calling ‘main()’. But that wouldn’t come into the equation if you’re running a benchmark like “how much time does the language take to execute that algorithm sorting 1000000 words?”
No, the AV firm I used to work for uses Lua heavily to parse/analyze in-memory structures,
with real-time constraints. It uses LuaJIT for C-speed. Lua is cool for allowing you to *ensure* your code has no access to specific system resources — malware-proofing.
By curious coincidence, the same company was an offspring of ActiveState, and also used Perl heavily. Perl5 performance and libraries were all you could ask for, and reasonably fast (not C-fast).
I remember a 200k-line perl subsystem for monitoring client pc’s was all highly readable.
As PypeBros indicated, Lua is a lightweight niche language designed specifically to be embedded in other software (usually C or C++ etc). The main goals were a very small memory footprint for the runtime and for scripts. Absolute speed efficiency will not be of the same order as languages such as C.
LuaJIT is very close to the same speed as C. Why anyone would use Lua without LuaJIT is anyone’s guess.
You can’t always use a JIT as there are restrictions around using JITs on certain platforms (iOS, consoles, etc), or needing to be compatible with latest features in Lua versions newer than 5.1
You might use Lua without LuaJit in an embedded system where you don’t have enough memory to embed the jit compiler, which appears to require around 1 Mb as opposed to 1K without
It’s because they are not using LuaJIT.
We have LUA but no BASIC or batch.
I wonder how human-written asm would fare, performing the same computations as the tests; probably worse, for the average programmer.
Yes, that’s a big miss on the part of the researchers. Also, I wonder what gauge was used to tally the results; probably C, their favorite language.
Depends on the complexity of the problem, I would think; when it’s simple the human is bound to turn out faster code – as they don’t need to worry about all the odd edge cases the compiler has to be sure to avoid. But as soon as it starts getting complex I’d expect even a master of asm to start losing, or taking significantly longer to create the program…
Lua is a proper noun not an acronym. It is not Lua Uppercase Accident.
As others have stated, LuaJIT is very fast; I get about a 10-50x performance improvement between LuaJIT and Lua in our game code. The above tests look like they were run with the standard Lua interpreter.
Guess they don’t teach assembly in university anymore, because that would beat C
ASM can beat C in performance if your CPU is simple enough, but given how much paper I covered with assembly program printouts, it sure wasn’t green.
(I’m not sure they still teach much of C at the University either, btw)
I graduated with a software engineering bachelor’s in 2018, where we started with learning C, then were recommended to use C# for most assignments after that, but we briefly touched upon most popular languages and what their pros and cons were in regard to computational efficiency, maintainability and initial development costs.
> were recommended to use C#
Wow, that is terrible. A proprietary Microsoft only language? I wouldn’t have even been able to do the coursework. Hopefully they have rectified this, and recommend / use portable non-proprietary languages today.
At my university, there wasn’t a single computer in any computer lab running any Microsoft software (nor were there any x86 machines). It was a bit longer ago, though. We had Sun workstations in nearly every lab, with the art department using SGIs, and a few 68K Macs in the library running A/UX for gopher, web browsing and telnet. And a couple Crays and an Intel Paragon for the fun stuff.
The main languages used in instruction were C, C++ and some Sparc assembly in the compiler design classes. I was able to do 99% of my coursework on a 486 running Slackware. Mac, NeXT, Coherent, etc. all worked fine for the majority of coursework (anything where gcc ran was going to work great). And, if someone were wedded to Microsoft, they could have struggled through their work there (unless using C++ templates, since MS compilers were kinda shite and took years to catch up to the AT&T and gcc compilers’ support for C++ features).
C# and .NET are not proprietary any more. It’s fully open source and free software. It’s not Microsoft-only anymore, it’s fully cross platform.
Lol, C# is one of the most used languages, and it isn’t “proprietary Microsoft”, as it’s standardized by both ECMA (ECMA-334) and ISO (ISO/IEC 23270). You’re entitled to your opinions, but not based on false information.
“C# is one of the most used languages”?? Hahahahahaha, thanks for the laugh.
Ah trigger word, and corrections in one day. There’s just something about the word Microsoft that brings out the “back in the day Steve Ballmer threw a chair at me” in people.
““C# is one of the most used languages”?? Hahahahahaha, thanks for the laugh.”
5th overall on the TIOBE index, at around 5% (with #1 being C at ~15%).
@[Pat]
And there are so many languages now! Probably more languages than architectures. It’s just mind blowing and I seriously wonder why that is.
Back in my day, my university taught concepts of programming and not popular languages. They sometimes purposely used obscure languages for the heck of it. We learned the programming part in tutorials and in the course of doing assignments. Languages rise and become obsolete, but the concepts remain. It is relatively easy to pick up similar languages. There were some odd languages, e.g. Prolog, Lisp, APL.
(Community) Colleges on the other hand teach programming for the hottest languages targeting the job market.
Came here to say the same. We learned Ada (~1990), one of the reasons being that no-one would have used it before, so there would be no learned bad habits. Despite using it for several real-time-critical situations, I’m surprised to see it so far up this list, as it’s (or seemed at the time) quite heavyweight.
The downside of using obscure abstract languages is that most people, over half of the students, learn things mostly by rote memory so the education is lost on them. It’s surprising how many people simply can’t put two and two together, and instead learn everything as if it was a completely new topic.
If you teach them C, they can then program an Arduino. If you teach them Python, they can kinda-sorta program a Raspberry Pi. If you teach them Logo, they won’t touch programming ever again because nothing uses Logo.
It depends a lot on your college and major, nowadays.
Computer Science still teaches Prolog, lisp, and whatever pet languages the professor has authored.
Software Engineering mostly teaches concrete languages in common usage. You learn how to design software programs in the real world in a team.
And so on and so forth. Computer Engineering teaches C, VHDL and assembly, IT learns stuff used in system administration, Web Development learns web languages.
with any modern CPU, the compiler will beat the crap out of you optimization wise (in 99.9% of the cases)
If you are a “clean” assembly language programmer, a good compiler will beat you. It does not have to make readable code and can use tricks you would likely never try.
It would be interesting to see, but I’m not so sure that it would beat C. Modern CPUs with pipelines, out-of-order executions, multiple cores and other bells and whistles are hard to optimize manually. On the other hand, modern compilers have a lot optimizations for modern CPUs.
Depends on the hardware, surely. The classic Microchip PIC16c84 required its own C dialect, because it was very hard to use standard C with it, while the ATmega328 was designed to be programmed in C from the ground up, and there is little advantage to be gained in programming it in assembler (but you need to test to verify that)!
There are plenty of cases where the Atmega family still does better in ASM. Plenty of libraries have blobs of ASM in them because, in things where timing matters, you can’t get the timing right in C. A popular one would probably be the various libraries for controlling WS2812 LEDs; most of the communication protocol for them is in ASM because the LEDs have fairly strict timing on the data pulses you have to bit-bang out of the microcontroller, and it just can’t be done reliably in C.
Atmega is quite an old processor, so it doesn’t really have the hardware to drive neopixels neatly, at least if you want to do anything else. For driving neopixels, I would suggest moving to another processor family such as STM32, where you can drive multiple neopixel streams with very low overhead using a timer and DMA straight from C. No asm required, and you will have most of your processing power still available for other tasks. Off topic, but I would guess the RP2040 would also do a nice job using its programmable PIO and 2 cores!
On the last AVR project I worked on, the C optimizer was awful: if we removed an “if (true)” statement that literally always executed, the code wouldn’t fit in the code space. Looking at the assembly language it output, there were lots of clear optimizations that it missed.
You don’t say what compiler you used. Anyway, I always found avr-gcc pretty good, though I mainly use it with C++ (with exceptions and RTTI disabled), and in fact for quite a while now avr-g++ has supported C++11 and beyond, which allows a lot of constexpr optimisations and other C++11 goodies such as lambdas etc. In fact C++11 is also usable in the later versions of the Arduino IDE, which allows lots of external libraries to be used.
Overall I found that if I am having issues with GCC or need to start writing lots of assembler then it is probably time to move to a new processor
> Microchip PIC16c84 required its own C dialect
No, it didn’t.
One of the popular toolchains *implemented* its own dialect, but at least one close-to-ANSI-C compiler existed.
Wasn’t Microchip the company that basically said “if you want to use our proprietary libraries, you need to use our compiler” when that compiler was actually GCC and if you wanted to compile with optimizations on you had to pay through the nose for it?
If that is Hi-Tech C, I believe we tried it, but found that code was considerably larger and slower than our existing code with the MPASM assembler. Not a fault of the compiler, but of the PIC architecture. Meanwhile we were also testing other MCUs to switch to C, and we ended up switching to AVR MCUs, which seemed to support C much more easily and compactly, at least for our purposes.
As soon as a CPU becomes pipelined and gets an instruction cache, it becomes quite hard to handwrite efficient assembly. So I guess this was only true until the 8086/68000 era.
Hey, the 8086/88 had a cache, all 4 bytes of it, lol. Though it really didn’t help at all in most cases, since it took something like 4 clock cycles to even read anything from RAM, so it couldn’t be filled up fast enough. The CPU tended to empty out those 4 bytes faster than new data could be put into it from RAM.
C is often called machine independent assembly because, as Linus pointed out, by looking at the code, you can get a pretty good idea what the hardware is doing.
After a decade of coding in assembly, I was more than happy to switch to c for microcontrollers.
I used assembler on every system I had from the mid-’70s to the 68k days (which was before x86 in my timeline).
Coding in those days really was like: you knew what your compiler would generate as machine code while you typed in C.
Fun days!
Sure it was possible to do it better in assembler. You always found some tricks to beat your compiler. You knew when and how often loop unrolling would make sense. In C (see Duff’s device) and in assembler! But at what cost? Assembler often was not even more than twice as fast as “Optimal C”. You sure would not code a REPL that way just to wait faster for user input.
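Since Duff’s device came up, here is the classic form in C, purely as a loop-unrolling curiosity rather than a recommendation (a plain loop or memcpy plus a modern optimizer is usually the better choice):

/* Classic Duff's device: copy `count` bytes, unrolled 8x, with the
   remainder handled by jumping into the middle of the loop. */
void duff_copy(char *dst, const char *src, int count)
{
    if (count <= 0)
        return;
    int n = (count + 7) / 8;
    switch (count % 8) {
    case 0: do { *dst++ = *src++;
    case 7:      *dst++ = *src++;
    case 6:      *dst++ = *src++;
    case 5:      *dst++ = *src++;
    case 4:      *dst++ = *src++;
    case 3:      *dst++ = *src++;
    case 2:      *dst++ = *src++;
    case 1:      *dst++ = *src++;
            } while (--n > 0);
    }
}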
I heavily dislike today’s complexity.
We are driving full throttle against a complexity wall.
But that’s a topic for a different paper…
So, Xilinx has this crappy tiny softcore processor for their FPGAs (PicoBlaze). It’s super-tiny, but while there *exist* C compilers for it, they’re mostly disasters because the architecture is so barebones: there’s no hardware stack, and even a software stack is hard to create because there are no instructions to add/subtract without screwing with flags.
It drove other collaborators nuts because they’re so far removed from low-level design that even though it’s practically pseudocode (add, sub, fetch, store, input, output, call, return, compare, test, jump with condition, etc.) they just threw up their hands and said “whatever.”
Then I found a “pseudo-compiler” for it that someone created in Python. It *looks* like C. It has C syntax. Brackets for surrounding if/while clauses and functions, functions with parentheses (although essentially no arguments, except for fetch/store/input/output functions), and standard bitwise operators.
But it’s totally not C at all. The Python script literally just translates it flat-out into assembly. You can’t have “a = b + c,” you have to write “a=b” followed by “a += c”. No variables: you’ve got r0-r31, and that’s it. (plus other restrictions, obviously).
Suddenly, all the complaints vanished, and people were easily able to find dumb logic bugs in stuff I wrote.
The difficulty with assembly isn’t that it’s hard. It’s that the language constructs are sooo far away from what people have grown to understand.
Many people who are taught programming do not learn to program – they learn to “code”. They learn a system of rigid rules and symbols instead of the principle, because it takes less energy to memorize and repeat than to understand and apply. Most people have enough trouble just remembering the syntax.
Yeah, I seriously disagree. Even people seriously steeped in computer science and able to “program” versus “code” would find it easier to read familiar syntax than unfamiliar. Learning the ideas doesn’t make you able to understand some random language with esoteric crazy syntax quickly.
The vast majority of programming languages use similar syntax for a reason. For some reason, no one’s bothered to create an assembly language that looks like it. There’s no reason you couldn’t.
Assembly isn’t a language. It’s a class of languages. It’d be like calling “interpret” a language and lumping literally every single interpreted language into it.
Even if you restrict yourself to a single architecture, ARM/x86 for instance both have essentially orthogonal instruction sets, so there are literally multiple assembly languages for one machine. Choosing those poorly can easily lead to worse performance than a compiled language.
Now, if you say “well, yes, but I mean they should compare it against the super-awesome best optimized version of the program” – at that point it’s not a language test anymore, it’s a hardware test. Which could be interesting somewhat as a baseline, but *even then* there’s a difference between “super-awesome best optimized which can run in an OS” and “super-awesome best optimized on bare metal.”
no, it wouldn’t. i come from a world where people still program big applications in assembly out of inertia because their IT infrastructure dates to the 60s (with constant updates obviously). if you’re writing a videogame like quake, you can take the tiny part of it that is in charge of texture rasterizing and focus on optimizing it and do better than even a modern compiler that’s seen a lot of tuning. but if you’re writing the whole thing in assembly, then you soon have to make compromises to readability, maintainability, and the hugeness of the labor task (I.e., more work means more low-skilled contributors). it gets to be a nightmare for all the reasons you’d assume, coming from a high-level language, and because of that nightmare there are inefficiencies *everywhere*.
for example, since the 1960s, they’ve changed the calling convention to pass values in registers instead of on the stack (this has happened on almost every platform). no one even notices, you just recompile your app to the modern ABI and you get this advantage. but the assembly code at a shop that has built up a huge app? no way, they don’t change. they’re still passing values on the stack. and it’s that way for every advance…if your platform added an add-immediate instruction encoding, *every* compiler will support that within 1-10 years, but the cost of maintaining assembly code is so high that it’s still loading out-of-line literals.
if you start writing a big program in asm today, it’ll take you long enough that simply by the time you get to 1.0, you’ll have started to accumulate these cruft problems. and then it just gets worse from there.
a big assembly program simply isn’t finely-tuned even to the level of a bad compiler. there isn’t an extant example of such a beast.
It’s difficult for humans to reason about how modern CPUs behave, such as branch prediction. In practice, it’s very difficult to beat a C compiler these days.
Indeed, one can write reasonable assembly for older/smaller architectures, but not the latest high-end CPUs. Just with the hundreds (thousands?) of instructions available, there are many ways to just add or multiply, but modern compiler optimizers “know” the fastest and/or most compact ways to do things. Check out Godbolt’s Compiler Explorer and his YouTube videos about compilers. I recall something using a load-effective-address instruction to do a calculation. An “average” assembly programmer can’t compete with an average C/C++ programmer and a modern compiler.
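For instance, compilers commonly lower something as plain as the function below to a single lea on x86-64 (roughly lea eax, [rdi+rdi*4+3]) instead of a separate multiply and add; Compiler Explorer makes that easy to check, and the exact output of course depends on compiler and flags:

/* The multiply-and-add is typically folded into one lea by x86-64
   compilers at -O2; paste into Compiler Explorer to see the assembly. */
int scale_and_offset(int x)
{
    return x * 5 + 3;
}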
I find it interesting that people think the compiler has better control or understanding of the behavior and execution flow of the CPU. A considerable amount of engineering effort is spent on microarchitecture improvements with the sole aim of predicting program behavior and adapting to it on-the-fly. No compiler (nor programmer) has absolute control over that.
Too-large software modules are an issue too, no? Hard to trace issues. Hard to fix bugs … without causing more bugs?
This probably highlights the real questions here.
What is the function of a language:
1) Abstract the hardware.
2) Provide namespace management for human readable code.
3) In some cases, provide a (basic) operating system or core services.
1) Most hardware abstraction is done reasonably well as hardware is very accurately specified and the best optimization is often obvious.
2) Namespace management (variable names and function/class/attribute names) not only helps with readability, it can make or break a language, as the whole point is to be able to “write” code (language) that fits closely with human cognitive processes. At the end of the day, the use of namespace should make the code seem to flow in a natural, human manner.
3) And this is the deal breaker for efficiency. What does the program do? Different platforms are better for different things. At the core of this is procedural programming (synchronous) ‘vs’ event-driven programming (asynchronous). The differences here are handled at the very core of a language. Unfortunately most programming platforms started as synchronous and then tried to add on event-driven support, and that doesn’t work well. So now efficiency has more to do with choosing the best language for the intended purpose than the perceived efficiency of individual languages.
Each language has its reason, purpose and goal; assembly is low level, C isn’t. When it comes to programming it is not a boxing match over who will beat whom, but rather which language is more suitable for a specific task. BASIC and COBOL are good languages, but each is suited to its own variety of tasks.
Except for very small programs (or device drivers), writing more efficient code than an optimizing C compiler produces is very difficult.
Instead of speaking of languages, wouldn’t it be more accurate to speak about compilers?
Not really. People often talk about languages in terms of compiler maturity, but some languages have built-in features that make them inherently slower or faster. For example, it’s no surprise seeing JavaScript so far down on the list. This is mostly because of the JIT compiler process and the GC; and a lot of the languages here suffer from similar slow-downs. C is at the top of the list because it’s basically a cross platform assembly language, so it’s going to be more nimble than most other languages. Rust is slightly slower than this due to its runtime guarantees. And, on top of all of this, many of these languages are using the LLVM compiler.
If they used the Tiny C Compiler instead of GCC, I bet C would produce much worse results. Java has both JIT and GC. Implementation does matter.
Languages limit what optimizations are available to you. For example, dynamic languages are harder to optimize because they give fewer guarantees, so it’s harder for the compiler to perform optimizations such as inlining and constant propagation, because things may be changed dynamically. JITs adapt to runtime behavior, but that’s additional overhead.
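A small C illustration of that point (just a sketch; a sufficiently clever whole-program optimizer may still see through the second call):

/* With a direct call and a constant argument, the compiler can inline
   square() and fold the result to 25 at compile time. Through the global
   function pointer it generally has to emit a real indirect call, much as
   a dynamic language must dispatch at runtime, because the target may
   have been reassigned. */
#include <stdio.h>

static int square(int x) { return x * x; }

int (*op)(int) = square;        /* could be reassigned elsewhere at runtime */

int main(void)
{
    printf("%d\n", square(5));  /* candidate for inlining + constant folding */
    printf("%d\n", op(5));      /* usually stays an indirect call */
    return 0;
}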
The above results are only valid for “Linux Ubuntu Server 16.10 operating system, kernel version 4.8.0-22-generic 16GB of RAM, a (four core 4 thread) Intel i5-4460 CPU @ 3.20GHz”
Because a different CPU architecture will produce a different set of results; even different chips with the same ISA will produce different results. Usually the biggest impact on performance is the number of CPU registers, followed by the size of the L1 and L2 cache on the chip. Even changing the order of the LD_LIBRARY_PATH environment variable can modify performance. I would be interested in the output of “cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor” for the machine before each test. Intel chips when powered up remain in “performance” for about 60 seconds (for a faster bootup time) and then typically drop to powersave. After boot, the CPU governor can typically be modified by the “sudo cpufreq-set” command. So if some of the performance tests were run in the first 60 seconds after powerup, they will yield better performance results and use more energy than if run later.
the compiler certainly has an influence. there are definitely a variety of very different javascript compilers available. but a lot of the languages really only have one compiler or interpreter. and the mature languages (by which I mean, C), all the compilers are pretty good. there are definitely some compilers that are much advanced beyond others, but the fact of the matter is that even a relatively simple C compiler, like a gcc from the 90s, can do a pretty good job on most code. and a lot of the basic things like register allocation and pipeline optimization can actually be handled in a pretty simple and general way, you don’t need to invest a bunch of effort tuning to the hardware to do a “pretty good” job. putting together a good optimizer is definitely one of those things where 90% of the work only gives 10% improvement.
To add to what Greg A said, compilers usually employ clever strategies such as loop unrolling and function inlining, among others, to boil away abstractions and syntactic sugar, thereby speeding up your code. This is done by pretty much every language compiler out there. However, for garbage collected languages, the compiler can’t simply optimize its way around the GC – that’s an inherent part of the language and a necessary component of the runtime. Moreover, languages that do not provide the facilities to manually free unused memory result in programs that are more prone to higher memory usage, since there is a delay between when the memory is no longer needed and when it’s actually freed.
Rust is slightly anomalous here, since it has a zero-cost abstraction for memory management, meaning that 99% of the time you don’t need to manually free memory, since it’s automatically freed when it’s no longer needed. This is possible because the compiler performs static analysis on your code to determine where memory is no longer needed and inserts the appropriate calls at that point in your code, before it’s compiled. So you end up with code that is very similar to what would have been written in C. This, of course, comes at the cost of verbosity. And Rust introduces quite a few novel concepts and extra syntax to facilitate this type of static analysis; as far as I’m aware, it’s the only language that has the necessary features to implement this strategy.
This argument could also be made for other features found in other languages. So if you can appreciate the performance difference that comes with certain language features, you’d see that it becomes a language vs language discussion.
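For contrast, a minimal C sketch of the manual pattern being described (names made up, error handling kept to a bare minimum): the scratch buffer is handed back the moment it is no longer needed, rather than whenever a collector eventually runs.

/* Allocate a scratch copy, use it, and free it immediately afterwards. */
#include <stdlib.h>
#include <string.h>

size_t shout_length(const char *msg)
{
    size_t n = strlen(msg);
    char *tmp = malloc(n + 2);      /* scratch copy, needed only briefly */
    if (!tmp)
        return 0;

    memcpy(tmp, msg, n);
    tmp[n] = '!';
    tmp[n + 1] = '\0';

    size_t result = strlen(tmp);    /* n + 1 */
    free(tmp);                      /* released right here, no GC delay */
    return result;
}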
This one definitely calculates runtime-only. To be fair, this rating should also consider development expenses – time, developer machine power consumption and compilation.
To be fair on what metrics? The researchers set out to calculate runtime performance and that’s what they did. Besides, including development cycle would make this study near impossible to conduct, since that would need to consider a lot of other factors not confined to the language, such as: tooling, developer experience and the nature of the software being developed. It’s much easier to build web services in Golang than in Rust; but some companies do use Rust for hot/mission critical components of their web service.
You need to remember that more time is spent running software than developing it, and this is generally true regardless of the language. Even if development cycles across languages differ by months, that would be minuscule if the software is shipped to millions of customers around the world, or if the software sits in a hot region of a high-traffic web service.
And then there’s the consideration of maintenance cycles, which may differ from the initial development cycles.
Not to mention, if you’re programming embedded systems, the device running the code probably has way less available energy, memory, and processing power than what you’re using to write the code for it.
Also, if the CPU is the main power draw in your system, you’re probably either running something that doesn’t have very much in the way of outputs, or are running something VERY computation-heavy.
A very interesting and relevant post in these environmentally conscious times. Though there is a ranking provided, it is of course highly dependent on the input factors, so it needs an in-depth read rather than jumping to conclusions, which is of course what will happen nevertheless. I see, for example, that the functional community has already been provided with updated results, putting a functional language at the top by virtue of excluding all other languages (https://sites.google.com/view/energy-efficiency-languages/updated-functional-results-2020), so the results should be treated with caution!
Broadly, for efficiency:
Don’t use an interpreted language ( but read up on TypeScript and lexical analysis)
Don’t use garbage collection
The papers mention hardware only in passing (regarding mobile vs desktop applications), but power efficiency of the language you use is perhaps even more relevant on a microcontroller, so it would be interesting to perform the tests (or maybe a different set of tests) on various microcontrollers:
Raspberry Pi
STM32
PIC
classic Arduino hardware
etc
:%s/retults/results/g
+1
I typically use:
:%s#retults#results#g
Mostly because I always forget if it’s a forward slash or a backslash.
‘C’ rules, then. 😁
K&R smile and nod knowingly…
I love K&R C for its simplicity. 💛
It has to be the first edition though! :-D
The more interpreted a language is when it comes to execution, the worse it tends to be in terms of performance and energy efficiency.
As an example, if we ran C in an interpreter instead of compiling it, it wouldn’t really fare any better than Java. Likewise, we can compile Java for the machine we intend to run it on before runtime starts; this can greatly improve its performance and efficiency.
Likewise, the more detours a program takes along the route to doing its task, the worse its performance and energy efficiency will be, but this is more often due to poor code rather than something inherent in the language.
As an example, if we just have to add two variables together, practically all languages can do it fairly trivially. But that doesn’t stop a programmer from calling a far more complex math library every time they need to add two integers together, even if this might require them to convert the integers to something else before running them through the library. (Since a library typically doesn’t contain the utterly trivial stuff one should be able to do already.)
In short, it isn’t really so much the language that matters, but how it is used and executed.
Though, I do find some concepts of higher level programming languages as a bit weird when viewed from the hardware perspective. And in interpreted languages this can lead to a fairly decent performance hit.
But, viewing things from the hardware perspective isn’t always that great; you can’t create an array of strings without pondering how it indexes its contents and how that indexing affects end performance and memory utilization. (There are many ways to skin that cat, so what solution did your language use?)
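For instance, here are two of the layouts a language might hide behind “array of strings”, sketched in C (purely illustrative); indexing cost and memory locality differ between them, which is the point:

/* (1) An array of pointers to separate strings, versus
   (2) one flat buffer plus an offset table. */
#include <stdio.h>

int main(void)
{
    const char *ptrs[] = { "red", "green", "blue" };    /* layout 1 */

    const char flat[] = "red\0green\0blue";             /* layout 2 */
    const unsigned offsets[] = { 0, 4, 10 };

    printf("%s %s\n", ptrs[1], flat + offsets[1]);      /* prints: green green */
    return 0;
}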
Then there is the arguments around security when it comes to different languages. But that is a can of worms for another day…
you make a valid point but only to a point. you can make a java compiler that does a good job compared to a java interpreter. because of the semantic requirements of the language, you can’t make a java compiler that competes with a C compiler. the language actually does more work. i’m not saying the work isn’t worth it (in fact, i’d probably say the opposite in many cases), but it does a fundamentally different task and creating an optimizer that turns even efficient java code (like using byte[] instead of java.lang.String) into the implied equivalent C form is a monstrous task that no one has succeeded at yet beyond a few cherry picked cases.
I never stated that a Java compiler would compete with a C compiler. But it would partly depend on the end platform one wants to run the code on.
“creating an optimizer that turns even efficient java code into the implied equivalent C form is a monstrous task”
I don’t know why, but a lot of programmers seem stuck in the belief that everything has to be compiled through C, or made to look similar to C, for it to supposedly run on hardware efficiently. (Similar to how a lot of language-translating programs are stuck in the belief that everything has to be translated through English.)
I should also clarify that when I say “run on hardware” I do not exclusively mean x86 or ARM. But rather any architecture.
All programming languages make assumptions about the hardware and OS environment they will run on; these assumptions can be more or less correct depending on the situation.
But C at least makes exceptionally few assumptions, and the ones it does make are broadly applicable across most architectures. OS support only really becomes a question when dealing with libraries. Optimizing for a specific architecture is typically done by the compiler when it comes to C, since C itself isn’t particularly optimized for anything. (It is, in short, a bit like Java, but without features. Since features bring dependencies/assumptions.)
Java on the other hand makes the huge assumption that it will run in its own feature-rich emulated environment. So compiling Java code to run on something else is going to be a hassle, since all those features of the environment need to be recreated anew.
“I don’t know why, but a lot of programmers seems stuck in the belief that everything has to be compiled through C, or made to look similar to C for it to supposedly run on hardware efficiently.”
I mean, the reasons why are pretty obvious…C is a readable expression of the hardware operations. Compilers make something run efficiently by converting it to a series of hardware operations and when we want to reason about that, it’s inevitably going to look “similar to C”, because that’s what it looks like when it’s readable.
Just for a stupid example, you can code up
char s[100]; int accum=0; for (int i = 0; i < 100; i++) { accum+=s[i]; }
in C or Java, I think you can actually use identical syntax to state it. If you care about efficiency, you are probably seeing the Java version as implicitly the same as the C version. But it isn't, because every time you reference s[i], Java semantics demand a check for out-of-range. In order for your Java compiler to actually get to the same efficiency as the implied C implementation, it needs to prove that the range test will always be satisfied so it can eliminate that overhead. That's not actually *hard*, but it's not easy either, and in practice it turns out to be super limited…it's real easy to throw off an optimizer like that and so even if your optimizer is very good, for a lot of code the Java is not going to actually translate to the implied C version even if you were careful to write your program with C-like idioms.
That's just an example so you can see what I was trying to say and why the implied C implementation is interesting.
That Java as a language has additional fail-safes (or rather error checks) and that C generally doesn’t is not really a fair comparison.
Yes, if all we care about is the utmost in performance and/or efficiency, then yes, this check is a waste of resources and time.
But I will reiterate: “I never stated that a Java compiler would compete with a C compiler.”
The actual statement I made were: “Likewise we can compile java for the machine we intend to run it on before runtime starts, this can greatly improve its performance and efficiency.” Compared to running Java in its runtime environment.
In regards to C.
It really isn’t a good candidate for explaining computer architectures, since it obfuscates a lot of the inner workings, as any other high-level language does.
If you want to see machine code instructions converted to readable text, then look at assembly language, since that is a carbon copy of what actually happens on an instruction level. So your “it’s inevitably going to look ‘similar to C'” is fairly far from the truth.
Assembly language is however an abhorrent view for seeing larger program structures. If one wants a better view of the overall program, then C is a lot more applicable.
I’m showing my age, but I recall using a then-new Microsoft product named QuickC. My memory’s hazy from back then but I think it was a JIT compiler, though it was in an IDE and “felt” like an interpreter.
QuickC was a nice little IDE but only compiled C as far as I remember, and compiled to static binaries. Maybe the IDE did stuff like code completion, but you still had to press the make button to get it to produce runnable code as far as I remember. I still miss Windows 3.1. What happened to quiet, unassuming operating systems?
You almost certainly won’t improve Java by compiling it. There have been attempts at this, and they tend not to be any better than a good JIT runtime. They may be worse.
Compiling to assembly isn’t a magic bullet. The language semantics matter. When a method is looked up, it has to be done the Java way. When you index an array, it has to be done the Java way. These things have rules in Java that can’t be optimized away.
Yes, Java isn’t really a strong contender to be compiled.
Mainly since its whole point is cross-platform compatibility for the code, something that tends to cost both performance and power efficiency along the way.
This however doesn’t mean that one can’t improve it by compiling it. With a suitable compiler it would always see an improvement (since one won’t also have to interpret the code), but how large is up for debate and will depend on the platform one tries to compile it for.
It is possible for a JIT to be faster than a straight compiler, because it can use information at runtime to optimize on the fly. For example, it might see that there’s a “for” loop like this:
for( int i = 0; i < some_input_var; i++ ) …
And sees that "some_input_var" is currently a very large value. It can unroll that loop right on the spot to the exact number of times it's going to loop. Conversely, if "some_input_var" is on the smaller side, maybe it's not worth the effort.
When taking cache into consideration, we don’t want to unroll a loop fully, since then our unrolled code won’t really fit in cache.
And dynamically switching between an instance that does the loop once, and an instance that does it x times isn’t outside of what can be done with fully compiled code. And some compilers do this already. It is however a bit less memory efficient to have multiple copies of the same code.
So that advantage isn’t exclusive to JIT.
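As a minimal C sketch of that “multiple compiled variants, picked at runtime” idea (the threshold and unroll factor here are arbitrary):

/* Two ahead-of-time compiled variants of the same loop, chosen per call
   based on the runtime size, similar in spirit to what a JIT does on the fly. */
#include <stddef.h>

static long sum_simple(const int *a, size_t n)
{
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}

static long sum_unrolled4(const int *a, size_t n)
{
    long s = 0;
    size_t i = 0;
    for (; i + 4 <= n; i += 4)                    /* 4x unrolled main loop */
        s += a[i] + a[i + 1] + a[i + 2] + a[i + 3];
    for (; i < n; i++)                            /* remainder */
        s += a[i];
    return s;
}

long sum(const int *a, size_t n)
{
    return (n < 16) ? sum_simple(a, n) : sum_unrolled4(a, n);
}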
Then some architectures don’t have a performance hit from executing loops or conditional jumps, making this optimization pointless in those cases.
(A hobby architecture of mine does conditional decisions as part of out of order execution, this means that the decoder simply prepares both sides of all branches, and then the out of order system handles picking the correct side depending on the answer returned from checking the condition. And this works fine since the decoder isn’t usually the bottleneck in that architecture. And this condition checking is free as long as it has all the requisite registers ready that are part of the condition, while it is doing that it could still be scheduling instructions that are part of the prior cycle of the loop. (I am however oversimplifying here.))
Isn’t most Java compiled these days? https://docs.oracle.com/javase/7/docs/technotes/tools/windows/javac.html
Not that it seems to do much good, if Eclipse IDE or Android Studio startup times are anything to go by!
Heh, Java wins over Swift in all categories. Objective-C is basically C, but with a whole message-passing framework. I wonder how it actually fares compared to Swift. One of the objectives of Swift is to reduce an app’s power use. At first glance, it seems like Swift is failing in that respect.
I’m an iOS developer. ;)
I agree about C, however although they’re relatively new and lesser known to the masses, the list should have included Nim (formerly known as Nimrod) and Crystal for being very fast and efficient as well.
Also glad to see Pascal ranking quite well considering its age, so it may be worth a look at Lazarus, which runs on ARM hardware as well.
https://nim-lang.org/
https://crystal-lang.org/
https://www.lazarus-ide.org/
Lazarus is an IDE.
FreePascal is the excellent open source Pascal compiler used by Lazarus.
https://www.freepascal.org/
Or Delphi, which has a free Community Edition for non-commercial (or limited commercial) use.
https://www.embarcadero.com/products/delphi/starter/free-download/
Please add Crystal (if we have Go in the table).
I have a 200,000+ line real-time java application. Surprised me as it’s crazy fast. Completing its core tasks so fast, they’re a relatively trivial load for the hardware, so it was dialing back CPU speed and turning cores off. Had to turn those ‘savings’ off, all cores always ‘on’ and bump the clock to always be at turbo speed. Made it even faster with extensive multi-threading, and again when I restructured it as Reactive. With all of that, completing its real-time work went from ~22 ms to typically under 3 ms and often under 1 ms. This is running on a hex-core that’s over ten years old.
Side benefit – it’s a real cozy space heater.
Great now, but not so great in summer…
Curious to see what it would do under a modern CPU, and with sixteen cores.
Glad to see computing efficiency brought to light. When someone says just use python and buy 10X the processor you were using it makes me unhappy.
Yes eventually power used for computing will be a thing and code efficiency will be a thing. It takes longer to develop but then code up a solid product and don’t fck with it every month to put in useless bells and whistles
I agree, it annoys me no end how much power and load simple tasks in some programs use – when you know there is no reason for it to be that way but laziness, poor language choice, or really poor programming practices. About the only speed/efficiency flaw common to most programs I can accept easily is poor or no multithreading – that adds lots of complexity to do well, so it is quite possibly not worth it.
However, if you wish to start the journey, Python looks like a good, approachable choice, and having one dev machine with greater-than-really-needed specs is a good place to start too. (Not that familiar with Python myself, but it reads very clearly when looking at somebody else’s code.)
When you start wanting to run code fast on that tiny micro, or get better performance on your embedded SBC then you start learning about the less newcomer friendly but much more efficient compiled languages.
(Also, I can see the merits of a ‘poor’ language choice in some places, because half the point is to make it simple for others to modify or improve the code – the benefits of keeping that simpler, and iterations fast, by using an interpreted language might well outweigh the performance loss.)
“Glad to see computing efficiency brought to light.”
Yeah, totally. It’s amazing how efficient nowadays technology has become.
Nowadays systems are almost on par with an Amiga or Atari ST from 1985,
when it comes to relative power efficiency and snappiness.
I believe, with the fastest SSDs, hexa cores and highest C code optimization, they almost rival an Atari ST running TOS off an EEPROM. 😂
Hey, back in the day we could “race the beam”.
Would be interesting to see what the optimizations and speed improvements in .NET 6 would do to those numbers.
A better benchmark would be to put in a room a developer for each language and assign a given task. Then measure code execution time/energy and also add an energy metric based on the time and resources spent by each developer.
For Python it seems that everything was benchmarked in pure Python, while libraries (implemented in C) or the use of Cython greatly help improve performance in production.
This! I had the task of verifying a 100Kloc C rule-based event-handling program, and was asked to use Haskell to do so. It took me six months of book learning / exercises / application familiarization / brain reorientation to develop the required competency in Haskell and understand the key functionality of the C program. Eventually, I wrote an 11-line Haskell program that demonstrated that the C program was functioning just fine, but that it was the large and varied team of people that were defining the rules and events that were not doing so consistently.
Now, buried under those 11 lines of Haskell code was a compiler that understood set theory, formal logic, and abstract algebra. What performance overhead that imposed, I never measured.
fyi “Plat_Forms — a contest: The web development platform comparison” 2008
“Plat_Forms” is a competition in which top-class teams of three programmers compete to implement the same requirements for a web-based system within 30 hours, each team using a different technology platform (Java EE, .NET, PHP, Perl, Python, or Ruby on Rails).”
https://www.researchgate.net/publication/1922338_Plat_Forms_–_a_contest_The_web_development_platform_comparison
And that would be the proper way to benchmark Python. If you’re using Python to do plumbing work between a bunch of libraries implemented in C, then you’re mostly bench marking C not Python.
You’re going to have to reanimate a COBOL programmer unless you can find a live one.
Eh, it depends. Even if a C program takes 10x, 100x as long to write… If hundreds of thousands of people use the software, the energy cost of a higher-level language would dwarf the energy cost to develop it. That one-time cost is nothing compared to the users.
But companies don’t really care about the energy cost they care about the salaries they’re paying. So high level poor performance software is what we get!
This discussion sounds irrelevant.
How much of the world’s generated electricity goes into keeping the digital infrastructure powered up 24/7/365 ? How much of that electrical energy ends up as waste heat?
Billions of cell phones charging overnight, server farms devoted to mining bitcoins or serving cat or food pictures on social media.
In the grand scheme of things, the language we choose to provide our digital infrastructure probably gets lost in the noise.
no. i mean, there are valid reasons for using just about any language. but if you are going to be doing a huge amount of computing with a relatively task-constrained program that won’t need to change too much (i.e., like a database backend running across 10,000,000 cores at google), the language you chose can easily make a real order of magnitude difference in final energy consumption. the thing that takes 1GW today could easily take 10GW if it had been written in python.
mysql is slow, but imagine how much slower it would be if it was written in python. then, everyone realizes SQL itself is slow, so big cloud apps are all using some ‘nosql’ alternative. the language you pick really matters, *especially* at large scale. big numbers don’t mean efficiency gets lost in the noise, they mean the exact opposite.
While I agree, it’s also irrelevant for many folks too – shaving 10% or even 90% of the energy used to run code that only I run (or perhaps a tiny number of folks replicating one of my projects), and that runs perhaps a few times a day, will get lost in the noise of the rest of the world.
It’s just not at all irrelevant when the frequency that code is called, or the number of concurrent users of that code, is really high, so all your network gear, servers, databases, phone and desktop OS’s with so many users really need to care.
No, it’s a huge problem. Running and cooling datacenters has a big power requirement, and the combined total of all of them already surpasses some industrialized nation states.
https://www.nature.com/articles/d41586-018-06610-y
That said, this study is only covering a datastructure problem, and doesn’t consider IO overhead, interfacing libraries written in C, or the steady move to computation on GPUs.
If those billions of cell phones have their O/S written in ‘C’ or (heaven forbid) Perl that makes a big difference too. All those billions of single watts add up. This applies to the popular apps too, if they are used at scale. Native ‘C’ is more efficient than Java.
It’s more complicated than that. Power draw on phones comes in no small part from the screen and the various radios. Ever notice how hot your phone gets when using GPS? Consumption from the choice of language is a secondary problem.
You don’t need to tell me, but actually one of the single biggest factors is the paging rate setting in the network, and various other network configuration parameters, which determines (to a great extent) the standby time. The tradeoff being against the time for the phone to start ringing.
Shhhh. We have to keep a lid on this. If we tell people that they have to reduce their consumption then there will be mayhem!
Pretty much a useless measure then. The power consumed by the compiler will be pretty much dwarfed by the runtime application most of the time. The ease of use, time to debug and developer productivity are all very much more important and will affect power usage rather more also (a slow-to-develop, slow-to-debug language will end up using lots of power at the wall, even if the compiler is quick and efficient).
This is why slow languages exist. The likes of Python. They are easy to work with and devs will need fewer hours to develop the product. So their computers will be on for fewer hours. So more energy efficiency. But this isn’t true for performance-critical domains. Like video games. I guess CoD Warzone would run at 1-2 frames per second if it was written in Python rather than C++ lol.
The responses here are often funny and predictable. “My interpreted language counting integers, or acting to just orchestrate fast C/C++ code is almost as fast as C when using a JIT compiler!!!!” Oh man…
The one thing I take issue with is the C++ code used in the study uses the C++ standard library. Every time I have benchmarked that, I have been sorely disappointed, so I gave up on it years ago. One of the biggest issues was the number of calls to new/delete of tiny sizes. It seemed like the programmers were going too abstract, writing O((some good value for the algorithm in question)) algorithms, but forgetting that the hardware/software underneath have restrictions as well. I’d say maybe it has improved in the past years, but the benchmarks speak for themselves. Use a better container set, and you’ll see the C++ within 10% of C. There is a reason that just about every toolkit library out there has included its own template container classes… It will still be slower as all virtual function calls do incur some overhead, but the access pattern of a vtable is very predictable (unless the vtable is so large as to fill the entire cache, but then, you’re doing it wrong!), and thus will be cached basically 100% of the time. So we shouldn’t be talking a 34% difference. The standard C++ library is great for convenience, and when you’re looking to do something with minimal external code, but not when you want something to perform.
Notice you don’t mention which libraries you like to use instead of the std library.
Depends on the platform and the container required. For a linked list on a microcontroller I’ll just write my own; for a full large app I’ll use Qt, MFC, or write my own for some things (basic string operations, for example: I’ll write my own class with memory pooling, copy-on-write, etc. if I cannot use a big library for some reason).
What I won’t use is the std library… unless I’m truly desperate! (or don’t need performant code).
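For readers wondering what the “memory pooling” mentioned above looks like in practice, here is a minimal, illustrative sketch (not taken from the study or from either commenter) using C++17’s &lt;memory_resource&gt;. It keeps the familiar std container interface while taking the tiny per-node new/delete calls, the complaint a few comments up, out of the hot path.

```cpp
// Minimal sketch (illustrative only): serving many tiny node allocations from
// one up-front slab via C++17 polymorphic memory resources.
#include <cstddef>
#include <list>
#include <memory_resource>

int main() {
    // One 64 KiB buffer provided once; node allocations become bump-pointer cheap.
    // If the slab runs out, the resource quietly falls back to the default heap.
    std::byte slab[64 * 1024];
    std::pmr::monotonic_buffer_resource pool(slab, sizeof(slab));

    // Same interface as std::list, but every node comes out of the slab
    // instead of triggering its own new/delete pair.
    std::pmr::list<int> numbers(&pool);
    for (int i = 0; i < 1000; ++i) {
        numbers.push_back(i);
    }

    // Per-node deallocation is a no-op for a monotonic resource; everything is
    // reclaimed at once when `pool` goes out of scope.
    return 0;
}
```

Whether this closes the gap to C in any given benchmark is, of course, something you would have to measure.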
Where is KOTLIN?
Good point. Whether it is slow/fast, efficient or not, I really enjoy programming in Kotlin!
This is the last straw. No more Python programming for me!
And not a Visual Basic program in sight
True. It was *the* RAD development tool of the 90s/early 2000s!
Before it, there was HyperCard.
And after VB1, there were also CA dBFast, MS Visual FoxPro and Borland Delphi. Among others.
I’ve been really digging in and practicing my Python skills lately, trying to adjust to their coding-style guidelines, documenting things their way and all. I know Python ain’t the most performant thing in the world, let alone the lightest one, but I like using an interpreted language.
That said, I’ll grow bored soon enough and I should probably teach myself Rust next. Go looks like a pretty competitive language, but it just doesn’t tickle my fancy. Too bad I don’t really have any interesting project in mind that would benefit from Rust over Python, since all of my projects tend to be aimed at simple, non-time-critical tasks.
The “flaw” in this analysis is that they’re looking at implementations from the CLBG rather than anything specific about emitted machine code instructions. The CLBG results have often been hyper-tuned to emit efficient code from the compiler even beyond simple algorithmic optimization. That is, the game part of the CLBG benchmarks is what makes the results somewhat unrepresentative. For example, on the n-body benchmark, all the top performers are basically piles of CPU specific vector intrinsic calls. This is totally apropos for the benchmark game. Is it representative of a real program or indicative of the speed of the language? I dunno!
The researchers do *somewhat* address this in the External Validity section, but I’m not really satisfied with the extent to which they’ve considered whether the analysis holds for the language’s performance in the general case.
> … on the n-body benchmark, all the top performers are…
And yet the benchmarks game also includes simpler C programs like this —
https://benchmarksgame-team.pages.debian.net/benchmarksgame/program/nbody-gcc-1.html
Not sure I follow your point? That version is 3x slower than the carefully optimized pile of intrinsics solution.
The researchers explicitly say, “The obtained solutions were the best performing ones at the time we set up the study,” so they wouldn’t be looking at that version for C (for example).
C is only the most energy efficient if you look at the system in isolation. Now add in all the infrastructure needed to analyze, track, and patch all of the security vulnerabilities due to its unsafe memory model, and these tables will probably look pretty different.
‘C’ isn’t the problem here; what you’re poking at is the O/S, the CPU architecture, or how the memory management is(n’t) used (or a combination of the three). If the O/S partitions and ring-fences memory between different security levels, that solves the problem. Granted, there may be some overhead in doing this (including scrapping all x86-architecture CPUs and replacing every instance of Windows).
“If it is not programmed in C/C++, it is no good.” That has been the attitude from about 1992 onward. Is programming in C/C++ required to make a living as a programmer? C/C++: 1) buggy [update], 2) malware-vulnerable [security update], and 3) may contain software modules greater than one page of code, in violation of Boeing hardware engineers’ software standards? Standards in place from before 1966 to after 1980. Does Boeing 737 MAX avionics software contain Linux Mint? http://www.prosefights.org/irp2020/windscammers14.htm
Speaking of the ‘effectiveness’ and resource-friendliness of C…
Is the effect on the programmer taken into account, too?
I mean, the extra cups of coffee required when programming in C, the antidepressant medicine, the sessions at the psychiatrist, the alcohol therapy, the countless pills of Aspirin..? 😁
+1 for the alcohol therapy, but I view that as a positive!
Hate to be one more person dunking on the study, as I *do* find the approach interesting. But in my opinion it doesn’t have much real-world application, given that the vast majority of programming involves stitching together libraries. It’s the technology stack that’s going to get you.
Since I’ve written C++ code that compiled to exactly the same assembly as the C variant, I find these kinds of papers highly suspect (also knowing the prevalent aversion to C++ in those institutions).
It is possible if you leave out all the C++ features, explicitly tell the compiler to leave out all the runtime features, don’t use any IO libraries, and just compile your C code with a C++ compiler. Otherwise I have a very hard time believing your claim.
It surprised me that someone would actually put out the ‘energy’ to do a ‘report’ on such a thing in the first place :rolleyes:. I guess it shouldn’t, as universities tend to head off on useless tangents…
Use the language that fits the application and that is in your knowledge base :) . For me that is usually C/C++ or Python at this time for production work. And yes, ‘way back’ in my CS college days we did touch COBOL, Fortran, Pascal, assembly, Lisp, BASIC, and probably a few others, but C has always been the backbone of programming.
I think it would be really valuable to compare different HW architectures and even ISAs, as they have a considerable impact on power consumption.
The paper uses binary trees as a baseline, which is mostly a matter of moving data structures around in memory. Most programs aren’t like that, and tend to wait on IO a lot. Less so with SSDs, but the factor is still there. A 1 GHz CPU can do a million cycles’ worth of instructions in the time it takes a 1 ms access to come back, and that’s a CPU we would attach to a low-end smartphone.
Async IO can help with this, but it tends to work better with closures, and C doesn’t really do those (a sketch of what closures buy you here follows below).
The huge datacenters that are sucking up lots of power are often run by the Facebooks and Amazons of the world, which tend to have exactly this kind of IO bottleneck. Also, a lot of the computational workload they do have is increasingly going to GPUs, where this study doesn’t apply.
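To illustrate the closures point, here is a small editorial sketch (nothing from the paper): `read_file_async` is a hypothetical stand-in for a real async IO facility, but it shows how a capturing lambda lets the “rest of the work” travel with the request while the CPU is free to do something else.

```cpp
// Editorial sketch of callback-style IO using C++ lambdas (closures).
// `read_file_async` is a hypothetical stand-in for a real async IO facility:
// here it just runs the blocking read on another thread and invokes the
// caller-supplied closure when the data is available.
#include <fstream>
#include <functional>
#include <future>
#include <iostream>
#include <sstream>
#include <string>

std::future<void> read_file_async(const std::string& path,
                                  std::function<void(std::string)> on_done) {
    return std::async(std::launch::async, [path, on_done] {
        std::ifstream in(path);
        std::ostringstream buf;
        buf << in.rdbuf();      // the slow part: waiting on the storage device
        on_done(buf.str());     // hand the result to the captured closure
    });
}

int main() {
    int newline_count = 0;      // state the closure captures by reference

    // The lambda closes over `newline_count`; plain C callbacks can only
    // imitate this with an explicit void* context argument threaded through.
    auto pending = read_file_async("/etc/hostname",
                                   [&newline_count](std::string text) {
        for (char c : text) newline_count += (c == '\n');
    });

    // ...the main thread could queue more IO or idle here...
    pending.wait();             // completion synchronizes access to newline_count
    std::cout << "newlines: " << newline_count << "\n";
    return 0;
}
```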
Just want to say that Ken Thompson and Dennis Ritchie were absolute geniuses to make a product that performs so well – even, and especially, on metrics they likely would never have considered. RIP Dennis 2011.
Some results were VERY suspicious, so I took a look at a few “tests” for the few languages I know, and I can honestly say this test has LITTLE to NO VALUE. Implementations are so different and suboptimal that we’re comparing apples with plastic bananas. A waste of time.
Yeah this was discussed at the time and it’s basically worthless. Look at how different the results for Typescript and JavaScript are. They should be identical.
CO2 production is in the production of hardware, not in the amount of electricity consumed by the software.
So the greenness of software is barely relevant here, except insofar as it helps reduce the amount of hardware you need, for example the number of machines to achieve the same result.
Which is why electric cars are a joke, along with wind energy, to a point. The procurement of raw materials, the production and refining of those, and the actual machining and assembly are by far the largest culprits in energy use and pollution over the lifetime of a product. That is not to say a poorly written power management system can’t contribute to the waste; it’s just that they never seem to include the “extras” in the equation. However, the necessary progression to something greener with newer technology requires going through these baby steps.
I was surprised by the comments on assembly above. Any decent developer who has been coding in assembly for more than 5 years knows the architecture well enough to know the time-consuming instructions and the shortcuts, like segment boundary alignment, etc. Even using xor ax,ax instead of mov ax,0 can save a clock cycle, depending on the processor. The biggest no-no inherited from C-like languages is the stack usage C has for interfacing each routine; even optimized stack operations are among the slowest things to do.
Also, assembly developers usually have their own hand-written libraries already optimized for use: serial handling, video, sorting, etc. Things like sorting in C that use recursion are really slow. People who didn’t come up from that development training most likely have no idea how the routines they are calling work; they have become managers of libraries, not developers.
No, it’s pretty big. Data center power consumption has become a major issue. This study has other problems, though.
LuaJIT is kind of a miracle, but it’s stuck at Lua 5.1 with a couple of features backported from 5.2. The author also stopped developing it a few years ago with no clear successor to pick it up again (bus factor: 0). There have been a few forks (RaptorJIT, MoonJIT) trying to gain traction, and Roblox’s Luau language is an attempt to give it a TypeScript equivalent, but so far none of them have taken off like LuaJIT did.
First line: 3 of 4 times it’s C & Pascal.
1. How come no one is talking about Pascal?
2. How about creating a ‘PaC’ language? Get the best parts of each?
I get it; people are ‘afraid’ of C,
but Pascal is easy and logical…
so… ??
Compilers have optimisation modes for speed and size. For small machines like microcontrollers with little or no system overhead, someone should add an optimisation mode for energy efficiency.
This would know the energy cost of each instruction and generate the code with the lowest total energy cost.
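As a purely illustrative sketch of that idea (the mnemonics and energy figures below are invented, not from any real compiler or datasheet), such a pass might compare candidate instruction sequences against a per-instruction energy table and keep the cheapest:

```cpp
// Toy illustration only: how an "optimize for energy" mode might pick between
// candidate instruction sequences using a per-instruction energy table.
// The mnemonics and nanojoule figures are invented for the example.
#include <iostream>
#include <map>
#include <string>
#include <vector>

using Sequence = std::vector<std::string>;

double energy_cost(const Sequence& seq,
                   const std::map<std::string, double>& cost_nj) {
    double total = 0.0;
    for (const auto& insn : seq) total += cost_nj.at(insn);
    return total;
}

int main() {
    // Hypothetical cost model: energy per instruction, in nanojoules.
    const std::map<std::string, double> cost_nj = {
        {"mul", 3.0}, {"shl", 0.8}, {"add", 0.9},
    };

    // Two ways to compute x * 10: one multiply, or shift-and-add.
    const Sequence with_mul   = {"mul"};
    const Sequence with_shift = {"shl", "add", "shl"};  // ((x << 2) + x) << 1

    const bool mul_cheaper =
        energy_cost(with_mul, cost_nj) <= energy_cost(with_shift, cost_nj);
    std::cout << "cheaper on this made-up model: "
              << (mul_cheaper ? "mul" : "shift+add") << "\n";
    return 0;
}
```

A real compiler would need measured per-instruction figures for the specific part, which is exactly the kind of hardware-specific effort the next comment points out.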
It is a system-level design issue, not a language issue. You can save power by putting the CPU into sleep or even halt mode, waking up on either a pin change or a preset timer, doing some processing, and going back to sleep. Also disable peripherals you are not using, power-gate external circuits when not needed, etc. All of that requires effort on your part and is very hardware specific.
There is not much the compiler can do to help if you insist on using the Arduino framework’s busy-wait loops and polling, a power-hungry 7805 regulator, and a 9V battery.
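To make that concrete, here is a minimal sketch of the sleep-until-interrupt pattern, assuming an ATmega-class AVR built with avr-g++ and avr-libc (register and pin names would need adjusting for your particular part):

```cpp
// Minimal sketch, assuming an ATmega328P-class AVR with avr-libc: spend most
// of the time in power-down sleep and wake only on a pin-change interrupt.
#include <avr/interrupt.h>
#include <avr/io.h>
#include <avr/power.h>
#include <avr/sleep.h>

volatile bool woke_up = false;

ISR(PCINT0_vect) {              // the pin-change interrupt wakes the CPU
    woke_up = true;
}

int main() {
    power_adc_disable();         // switch off peripherals you are not using
    power_spi_disable();

    PCICR  |= _BV(PCIE0);        // enable pin-change interrupt group 0...
    PCMSK0 |= _BV(PCINT0);       // ...for one specific pin
    sei();

    for (;;) {
        if (woke_up) {
            woke_up = false;
            // do the actual processing here, then drop back into sleep below
        }

        set_sleep_mode(SLEEP_MODE_PWR_DOWN);
        cli();                   // avoid racing the interrupt between the
        if (!woke_up) {          // check and the sleep instruction
            sleep_enable();
            sei();
            sleep_cpu();         // CPU halts here until the interrupt fires
            sleep_disable();
        }
        sei();
    }
}
```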
Rust’s also pretty good if you want a powerful type system to enforce your invariants without turning to Haskell.
Don’t underestimate how useful it can be for “simple, non-time-critical tasks” to have a language where nullability and error returns are handled via sum types/data-bearing enums/tagged unions, where whether an object is thread-safe is tracked as part of its type, where the language is expressive enough to check state machines at compile time via the “typestate pattern”, and where you’ve got powerful compile-time metaprogramming and optimizers enabling really nice APIs like Serde and StructOpt that are still very fast.
Ugh. I forgot WordPress lies to you about whether it’s going to post your reply AS a reply if you have JavaScript disabled.
That was meant for WereCatf.
C is really the only programming language worth using. It’s the SIMPLEST, FASTEST and MOST PORTABLE of all programming languages. Why people find it hard to learn and use is beyond me.
Simplest, fastest, and most portable doesn’t speak to every use case. Besides, C only has the ABILITY to be the fastest, but in practice this may not always be the case.
The problem with C, is the difficulty in using it correctly and always avoiding common pitfalls. C and its compilers are also notorious for their plethora of undefined behaviour and edge cases. This is what makes it so difficult to fully learn C, despite it having a relatively small syntax. A language is more than just its syntax.
You also have to remember that C was designed to provide higher-level constructs than those available in assembly. Similarly, other languages were created to provide higher-level constructs than those available in C. It’s easier to build a GUI framework in C++ because the language has built-in constructs to encapsulate functionality and express relationships among types.
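A tiny, purely illustrative sketch of that last point: C++ lets a toolkit say “a Button is a Widget” and hide each widget’s details behind a uniform interface, something you would otherwise emulate in C with structs of function pointers.

```cpp
// Illustrative sketch: the kind of type relationship a GUI toolkit leans on.
#include <iostream>
#include <memory>
#include <string>
#include <vector>

class Widget {                       // common interface for everything drawable
public:
    virtual ~Widget() = default;
    virtual void draw() const = 0;   // each widget knows how to draw itself
};

class Button : public Widget {       // "a Button is a Widget"
public:
    explicit Button(std::string label) : label_(std::move(label)) {}
    void draw() const override { std::cout << "[ " << label_ << " ]\n"; }
private:
    std::string label_;              // encapsulated: callers never touch it
};

class Label : public Widget {
public:
    explicit Label(std::string text) : text_(std::move(text)) {}
    void draw() const override { std::cout << text_ << "\n"; }
private:
    std::string text_;
};

int main() {
    // A window holds heterogeneous widgets but treats them uniformly.
    std::vector<std::unique_ptr<Widget>> window;
    window.push_back(std::make_unique<Label>("Energy report"));
    window.push_back(std::make_unique<Button>("OK"));
    for (const auto& w : window) w->draw();   // dynamic dispatch via vtable
    return 0;
}
```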
As someone who works closely with embedded developers I will say that, even with C, one can make code significantly more power-hungry by prioritizing code portability. In other words, implementing an embedded OS.
The influence of agile development and web/app processes on the embedded world has not been a good one, imo.
I noticed Python wasn’t listed here. It seemed weird for the world’s most popular programming language to be missing, so I looked at the original paper to see why they left it out.
The answer is that they didn’t. Overall it was a little worse than Ruby.
Such a weird thing to remove that here. Framing Ruby in the mysterious case of the planet on fire.
As one who chose to learn Ada over Pascal and Fortran, I’m happy to see that, despite all the naysayers, the design-by-committee language actually holds up with the best.
And Rust seems to sit between C/C++ and Java/Ada. That’s very promising. Learning Rust.
Yup, at execution time C is very fast. But it has a big cost at compile time, and C++ a very big one. I bet green software should consider energy spent both at build time and at runtime, because developing optimized software requires many dev computers/tests/CI builds.
This is from 2017. I am sure all of the programming languages have progressed since, and some might have better performance now.
I also don’t think this represents the real world. Some are saying it does not matter, but as in any other comparison regarding “the greenest”, you need to take everything into account. Theoretically you could write the greenest programs in C, but only a few people are experienced enough to know all the tips and tricks to do so. I think it is important to measure what percentage of a programming language’s community is able to write green code. If a language is very hard to master but easy to start with, I doubt there are many people who can really write such code. I can write C code myself, but I am not experienced enough, so I wouldn’t claim the code I write performs very well or has a green footprint.
Could some languages be disadvantaged because they are running on a platform they aren’t optimized for? That would be the case with Swift.
what about julia?
what about toit, zig, odin, jai, beef, vlang, d?
What I find funny is that digital signal processing is usually done in C and Python/Matlab, which are polar opposites in speed and energy use. I guess ease of use for prototyping trumps everything else.
I am using C only for my game development. I could use assembly but it’s not necessary at this time. I reject C++ because it’s a horrible programming language. I did not know about the green advantage of C until I read this post. I do appreciate knowing about it.
Care to give vlang and Chez Scheme a benchmark also?