As most of you already know, Python is a general-purpose programming language optimized for simplicity and ease of use. While it is an ideal tool for lightweight tasks, execution speed can quickly become a serious bottleneck in your applications.
In this article, we'll discuss why Python is so slow compared to other programming languages. Then, we'll see how to write a basic Rust extension for Python and compare its performance to a native Python implementation.
Why Python is slow
Before we start, I'd like to point out that programming languages aren't inherently fast or slow: their implementations are. If you want to learn more about the difference between a language and its implementation, check out this article:
First of all, Python is dynamically typed, meaning that variable types are only known at runtime, not at compile time. While this design choice allows for more flexible code, the Python interpreter cannot make assumptions about what your variables are or how much memory they occupy. As a result, it cannot perform the optimizations a static compiler would.
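As a minimal sketch of what dynamic typing means in practice, the same name can be rebound to values of different types, so even a simple addition has to dispatch on the operands' runtime types:

```python
# The same variable can hold values of different types; the
# interpreter must check types at runtime, not at compile time.
x = 42
assert type(x) is int

x = "forty-two"          # rebinding to a different type is perfectly legal
assert type(x) is str

# A single function body can mean integer addition, float addition,
# string concatenation, ... depending on what it receives at runtime:
def add(a, b):
    return a + b

print(add(1, 2))         # → 3
print(add("1", "2"))     # → "12" (string concatenation)
```

A static compiler, by contrast, knows the operand types ahead of time and can emit a single machine instruction for the addition.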
Another design choice that makes Python slower than some alternatives is the infamous GIL. The Global Interpreter Lock is a mutex that allows only one thread to execute Python bytecode at any point in time. The GIL was originally meant to guarantee thread safety, but it has drawn considerable backlash from developers of multi-threaded applications.
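A rough sketch (not a rigorous benchmark) of the GIL's effect: running two CPU-bound tasks on two threads takes roughly as long as running them one after the other, because only one thread can hold the GIL and execute bytecode at a time.

```python
import threading
import time

def count_down(n):
    # Pure-Python, CPU-bound work that never releases the GIL for long.
    while n > 0:
        n -= 1

N = 5_000_000

# Run both tasks sequentially on the main thread.
start = time.perf_counter()
count_down(N)
count_down(N)
sequential = time.perf_counter() - start

# Run the same two tasks on two threads.
start = time.perf_counter()
t1 = threading.Thread(target=count_down, args=(N,))
t2 = threading.Thread(target=count_down, args=(N,))
t1.start(); t2.start()
t1.join(); t2.join()
threaded = time.perf_counter() - start

# On CPython the two timings are typically close, despite two threads.
print(f"sequential: {sequential:.2f}s, threaded: {threaded:.2f}s")
```

Note that the GIL only serializes Python bytecode: threads blocked on I/O, or running inside C extensions that release the GIL, can still overlap.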
On top of that, Python code is executed by a virtual machine instead of running directly on the CPU. This extra layer of abstraction adds significant execution overhead compared to statically compiled languages.
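You can see this virtual machine at work with the standard-library `dis` module: Python source is first compiled to bytecode, and the CPython VM then interprets those instructions one by one.

```python
import dis

def add(a, b):
    return a + b

# Prints the bytecode the CPython VM interprets, e.g. LOAD_FAST and a
# binary-add instruction (exact opcode names vary between CPython versions).
dis.dis(add)
```

Each of those bytecode instructions is dispatched through the interpreter loop, which is a large part of the overhead compared to code compiled straight to machine instructions.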
Moreover, Python objects internally store their attributes in a dictionary (a hash map), and those attributes (properties and methods, accessed via the dot operator) are usually resolved by a hash lookup rather than a fixed memory offset, but…
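A small sketch of this: regular instance attributes live in a per-instance `__dict__`, so `p.x` is effectively a dictionary lookup, not an offset read like a C struct field access.

```python
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(1, 2)

# Instance attributes are stored in a plain dictionary:
print(p.__dict__)              # → {'x': 1, 'y': 2}

# The dot operator resolves through that dictionary:
assert p.x == p.__dict__["x"]
```

(Classes that define `__slots__` are a partial exception, trading this flexibility for a more compact, fixed layout.)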