Simple Tricks To Make Your Python Code Faster

Python has become one of the most popular programming languages out there, particularly for beginners and those new to the hacker/maker world. Unfortunately, while it’s easy to get something up and running in Python, its performance compared to other languages is generally lacking. Often, when starting out, we’re just happy to have our code run successfully. Eventually, though, performance always becomes a priority. When that happens for you, you might like to check out the nifty tips from [Evgenia Verbina] on how to make your Python code faster.

Many of the tricks are simple common sense. For example, it’s useful to avoid creating duplicates of large objects in memory, so altering an object instead of copying it can save a lot of processing time. Another easy win is using the Python math module instead of using the exponent (**) operator since math calls some C code that runs super fast. Others may be unfamiliar to new coders—like the benefits of using sets instead of lists for faster lookups, particularly when it comes to working with larger datasets. These sorts of efficiency gains might be merely useful, or they might be a critical part of making sure your project is actually practical and fit for purpose.
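As a rough sketch of that last point (the numbers are illustrative only and don’t come from the linked article), a membership test against a set beats the same test against a list by a wide margin once the data gets big:

import timeit

# A list membership test scans elements one by one; a set hashes straight to the entry.
data_list = list(range(100_000))
data_set = set(data_list)

# Search for a value near the end so the list has to do nearly the full scan.
needle = 99_999

list_time = timeit.timeit(lambda: needle in data_list, number=1_000)
set_time = timeit.timeit(lambda: needle in data_set, number=1_000)
print(f"list: {list_time:.3f}s   set: {set_time:.3f}s")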

It’s worth looking over the whole list, even if you’re an intermediate coder. You might find some easy wins that drastically improve your code for minimal effort. We’ve explored similar tricks for speeding up code on embedded platforms like Arduino, too. If you’ve got your own nifty Python speed hacks, don’t hesitate to drop them in the tips line!

22 thoughts on “Simple Tricks To Make Your Python Code Faster”

  1. The best trick to make your Python code thousands of times faster is to write it in C/C++, Pascal, Rust or any other compiled language…

    1. Aw, you beat me to it. This is the answer though.

      Python is a reasonable tool and all but sometimes I do wonder if I will ever reach for it outside of a scripting context.

      1. It really depends. Most of the time the code will only be run once or twice, in which case the dev time cost of writing C/Rust will massively outweigh the gain.

        So long as the heavy lifting is in some library like Numpy.

        If it’ll be running repeatedly or in an ongoing fashion, then it’s deffo worth the dev time cost.

    2. That will make your program run faster, but not your python code as you didn’t modify your python code or the way it is interpreted.
      Python has many advantages over compiled languages. You can test things in real time using interactive terminal and Jupyter notebooks. There are many libraries available that are easy to integrate. Python has many optimized libraries. So even if the interpreted code is slow it can delegate performance critical parts to those libraries.
      Porting python code to C/C++ can take a few hours to a few months.

      I use both C/C++ and python. C/C++ for embedded devices. Python for PC tools (code analysis, processing log files, generating PDFs and XLSX documents, regression analysis, code generation, web scraping, build scripts, etc.).
      I have used micropython on embedded devices, but I’ve never used that in any serious project.

      1. Every advantage you just wrote also applies to Julia, which is JIT-compiled and much faster than Python. Give it a try some time, it’s a really nice language. :)

      2. That whole thing about how python can be high performance because the libraries are written in C++ is pure BS. I’ve experienced it first hand. Yes it can be fast, but if any real python code touches your data the whole thing grinds to a halt. 1. It takes an insane amount of care and experimentation to avoid this, and 2. If you can’t use any python operators or data-structures, what is the point of using python at all.
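For what it’s worth, here is a minimal sketch of the effect being described (assuming NumPy is installed; the sizes are arbitrary): the same reduction is quick while it stays inside NumPy’s C code, and slows right down the moment plain Python touches each element.

import timeit
import numpy as np

arr = np.arange(100_000, dtype=np.float64)

# Stays inside NumPy's compiled code for the whole reduction.
vectorized = timeit.timeit(lambda: arr.sum(), number=100)

# "Real Python code touches the data": every element crosses the C/Python boundary.
looped = timeit.timeit(lambda: sum(float(x) for x in arr), number=100)

print(f"vectorized: {vectorized:.3f}s   python loop: {looped:.3f}s")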

    3. This is the answer. I built a whole computer vision app with python and later rewrote it in C++ for performance reasons. Best decision I ever made. Trying to write high performance code in python is an absolute nightmare.

    4. Or use pypy to compile the Python code.

      The advice in the article seems useless: either the savings are minimal, or they are generic programming concepts like hash lookup being faster than linear lookup.

  2. I checked out the list, and two things are clear to me:

    It’s a JetBrains ad, hawking their AI “tools”
    Calling time.sleep one time instead of one thousand makes the code sleep one thousandth as much

    For a real Python performance tip: Every time you use the . operator to look up a member of an object, that’s a hash table lookup. Make local references to repeatedly-accessed members before loops, don’t do the hash table lookup more than you need to. This can make a big difference, and is used all over the place in the standard library.
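A minimal sketch of that trick (the function names here are made up for illustration):

import timeit

data = list(range(100_000))

def attribute_lookup_in_loop():
    out = []
    for x in data:
        out.append(x * 2)   # 'out.append' is re-resolved on every iteration
    return out

def local_reference_before_loop():
    out = []
    append = out.append     # resolve the bound method once, up front
    for x in data:
        append(x * 2)
    return out

print(timeit.timeit(attribute_lookup_in_loop, number=100))
print(timeit.timeit(local_reference_before_loop, number=100))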

  3. Most of these results are marginal, and since time.time() - start was used to measure run times, the measurements are highly inaccurate. Use timeit to get results that are believable.
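For anyone following along, timeit handles the repetition and clock choice for you; a minimal sketch:

import timeit

# timeit runs the statement many times on a high-resolution clock, so the result
# isn't swamped by startup cost or scheduler noise the way time.time() - start is.
setup = "data = list(range(10_000)); s = set(data)"

print(timeit.timeit("9_999 in data", setup=setup, number=10_000))
print(timeit.timeit("9_999 in s", setup=setup, number=10_000))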

  4. “Another easy win is using the Python math module instead of using the exponent (**) operator since math calls some C code that runs super fast.”

    all the operators “call some C code”
    presumably you meant using math.pow instead of ** since “the math module” can’t be used in the same way as an operator
    math.pow and ** do NOT do the same thing and are not universally interchangeable. “Unlike the built-in ** operator, math.pow() converts both its arguments to type float. Use ** or the built-in pow() function for computing exact integer powers.”
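A quick demonstration of that last point, straight from the interpreter:

import math

print(2 ** 100)          # exact integer: 1267650600228229401496703205376
print(pow(2, 100))       # built-in pow() is exact for integers too
print(math.pow(2, 100))  # 1.2676506002282294e+30 -- a float, precision already lost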

  5. I don’t have much antipathy towards python because i never have to use it. i don’t hate it the way i hate C++, for example :)

    But i have an unusually low amount of respect for it. It’s unusually slow even for an interpreted language. It’s also unusually unreadable. I haven’t yet run into a bit of python code, no matter how simple, that doesn’t start out with a lecture on how unsuitable its object model is for the way it’s actually used. Especially for a ‘beginner’ language, i can’t believe people jump through the hoops of its object model just for hello worlds! But mostly, it’s just dog slow because its implementation is slow and then the idioms that people actually use in it are dog slow idioms in any language.

    I do generally believe you can improve performance within any language by actually learning how to use it. For example, java is faster if you use byte[] or char[] for string parsing instead of java.lang.String, right? Languages with garbage collection are faster if you give just a little mind to when you are doing your allocations. People have made a lot of noise about the fact that clean languages that lean towards functional programming, like Scheme and OCaml, can be just as fast as any other language. To my surprise, the straightforward functional way to use OCaml was very fast (faster than C, because there was a mistake in a part of my C code that was completely elided by OCaml) in the one experiment i did.

    And that’s where Python fails…anyone who knows better isn’t using Python. Python is only used by people who don’t know any better, who get suckered in by the slowest idioms. By contrast to Perl, which is nearly as slow but much more expressive and much more suited to one-off hacks, i see larger programs written in Python. To where you ‘apt install xxx’ and then use some unusably slow piece of garbage and discover it was written in Python. In Perl, everything i write is so simple that i don’t mind that it’s slow.

    You can’t improve Python’s performance by improving the way you use it because the culture of how it’s used is its problem. And that’s why “just use a different language” is the effective advice, because anyone who isn’t willing to use a different language won’t be willing to confront that culture of using it for its weaknesses either.

    1. As someone who does almost zero programming and has been meaning to get into it for like 30 years this was a really insightful comment and probably saved me a lot of time and frustration. Thanks.

  6. “anyone who knows better isn’t using Python.” Nonsense. Python is a fine solution for many problems. Yes, it is slow. But in many cases, it doesn’t matter. As long as the solution is fast enough, I don’t need to invest the considerable extra effort to use C++. Processor cycles are cheap, programmer cycles are expensive.

    1. Totally agree. Our company uses Python for lots of tasks. Slow is relative. If your task takes a second to run in Python, and a tenth of a second in C… so what? Fast ‘enough’ is the key…. I use it a lot at home too. Again, it is fast enough for (throwing out a number) 90% of all the tasks that we do. The other beauty is that the source is the runtime, so there’s fast turn-around on changes, and everyone from engineers to software people can understand and make changes to the code. Fast development also. Wins all around.

      A speed-up tip: at work I had a situation where I had a list of files that I needed to sort by file modified time. My original solution was to ‘glob’ it (get the files), then go get the modified time on each file, and finally sort it. With a lot of files this was ‘slow’. So I found a solution using os.scandir(). This is much, much faster, as the file info is returned with the file name. Just an FYI…
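A minimal sketch of the difference (the directory and the .log pattern are made up for illustration):

import glob
import os

log_dir = "/var/log/myapp"   # hypothetical directory

# Original approach: glob for the names, then a separate getmtime() stat per file.
files = glob.glob(os.path.join(log_dir, "*.log"))
by_mtime_slow = sorted(files, key=os.path.getmtime)

# os.scandir() yields DirEntry objects that cache their stat() results
# (and on Windows the metadata comes straight from the directory listing),
# so listing and sorting happen in one pass over the directory.
with os.scandir(log_dir) as entries:
    by_mtime_fast = sorted(
        (e for e in entries if e.is_file() and e.name.endswith(".log")),
        key=lambda e: e.stat().st_mtime,
    )

print([e.name for e in by_mtime_fast])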

  7. The comments section looks like I expected when I read the title of the article.

    “I would like my bicycle to go faster.”
    “Get a car.”


