

Image by Author | Ideogram
Let's be honest. When you're learning Python, you're probably not thinking about performance. You're just trying to get your code to work! But here's the thing: making your Python code faster doesn't require you to become an expert programmer overnight.
With a few simple techniques that I'll show you today, you can improve your code's speed and memory usage significantly.
In this article, we'll walk through five practical, beginner-friendly optimization techniques together. For each one, I'll show you the "before" code (the way many beginners write it), the "after" code (the optimized version), and explain exactly why the improvement works and how much faster it gets.
🔗 Link to the code on GitHub
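A quick note on measurement before we start: the examples below use time.time() for simplicity, and the exact numbers will vary from machine to machine. If you want more stable numbers, the standard library's timeit module runs a snippet many times and reports the total elapsed time; here's a minimal sketch:

import timeit

# Run the statement 1,000 times and report the total elapsed time
elapsed = timeit.timeit("[n ** 2 for n in range(1000)]", number=1000)
print(f"1,000 runs: {elapsed:.4f} seconds")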
1. Replace Loops with List Comprehensions
Let's start with something you probably do all the time: creating new lists by transforming existing ones. Most beginners reach for a for loop, but Python has a much faster way to do this.
Before Optimization
Here's how most beginners would square a list of numbers:
import time

def square_numbers_loop(numbers):
    result = []
    for num in numbers:
        result.append(num ** 2)
    return result

# Let's test this with 1,000,000 numbers to see the performance
test_numbers = list(range(1000000))

start_time = time.time()
squared_loop = square_numbers_loop(test_numbers)
loop_time = time.time() - start_time
print(f"Loop time: {loop_time:.4f} seconds")
This code creates an empty list called result, then loops through each number in our input list, squares it, and appends it to the result list. Pretty straightforward, right?
After Optimization
Now let's rewrite this using a list comprehension:

def square_numbers_comprehension(numbers):
    return [num ** 2 for num in numbers]  # Create the entire list in one line

start_time = time.time()
squared_comprehension = square_numbers_comprehension(test_numbers)
comprehension_time = time.time() - start_time
print(f"Comprehension time: {comprehension_time:.4f} seconds")
print(f"Improvement: {loop_time / comprehension_time:.2f}x faster")
This single line, [num ** 2 for num in numbers], does exactly the same thing as our loop: it tells Python to "create a list where each element is the square of the corresponding element in numbers."
Output:
Loop time: 0.0840 seconds
Comprehension time: 0.0736 seconds
Improvement: 1.14x faster
Performance improvement: list comprehensions are typically 30-50% faster than equivalent loops, and the improvement is more noticeable when you work with very large iterables.
Why does this work? List comprehensions are implemented in C under the hood, so they avoid much of the overhead that comes with explicit Python loops: things like repeated variable lookups and function calls that happen behind the scenes.
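One more trick while we're here: comprehensions can also filter as they transform by adding an if clause at the end, which saves you a nested if inside an explicit loop. A small example:

def square_even_numbers(numbers):
    # The trailing if clause keeps only the even numbers before squaring
    return [num ** 2 for num in numbers if num % 2 == 0]

print(square_even_numbers([1, 2, 3, 4, 5]))  # [4, 16]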
2. Choose the Right Data Structure for the Job
This one's huge, and it's something that can make your code hundreds of times faster with just a small change. The key is knowing when to use lists versus sets versus dictionaries.
Before Optimization
Say you want to find the common elements between two lists. Here's the intuitive approach:
def find_common_elements_list(list1, list2):
    common = []
    for item in list1:  # Go through each item in the first list
        if item in list2:  # Check if it exists in the second list
            common.append(item)  # If yes, add it to our common list
    return common

# Test with reasonably large lists
large_list1 = list(range(10000))
large_list2 = list(range(5000, 15000))

start_time = time.time()
common_list = find_common_elements_list(large_list1, large_list2)
list_time = time.time() - start_time
print(f"List approach time: {list_time:.4f} seconds")
This code loops through the first list and, for each item, checks whether that item exists in the second list using if item in list2. The problem? When you do item in list2, Python has to search through the entire second list until it finds the item. That's slow!
After Optimization
Here's the same logic, but using a set for faster lookups:
def find_common_elements_set(list1, list2):
    set2 = set(list2)  # Convert the list to a set (one-time cost)
    return [item for item in list1 if item in set2]  # Check membership in the set

start_time = time.time()
common_set = find_common_elements_set(large_list1, large_list2)
set_time = time.time() - start_time
print(f"Set approach time: {set_time:.4f} seconds")
print(f"Improvement: {list_time / set_time:.2f}x faster")
First, we convert the second list to a set (a one-time cost). Then, instead of checking if item in list2, we check if item in set2. This tiny change makes membership testing nearly instantaneous.
Output:
List approach time: 0.8478 seconds
Set approach time: 0.0010 seconds
Improvement: 863.53x faster
Performance improvement: the speedup can be on the order of 100x, or more as shown here, for large datasets.
Why does this work? Sets use hash tables under the hood. When you check whether an item is in a set, Python doesn't search through every element; it uses the item's hash to jump directly to where it would be stored. It's like using a book's index instead of reading every page to find what you want.
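As a side note, for this particular task you can let sets do all the work with the intersection operator. A minimal sketch; note that, unlike the comprehension version above, it preserves neither the original order nor duplicates:

def find_common_elements_intersection(list1, list2):
    # & intersects two sets using the same hash-based lookups
    return list(set(list1) & set(list2))

print(find_common_elements_intersection([1, 2, 2, 3], [2, 3, 4]))  # [2, 3], order not guaranteed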
3. Use Python's Built-in Functions Whenever Possible
Python comes with tons of built-in functions that are heavily optimized. Before you write your own loop or custom function to do something, check whether Python already has a function for it.
Before Optimization
Here's how you might calculate the sum and maximum of a list if you didn't know about built-ins:
def calculate_sum_manual(numbers):
    total = 0
    for num in numbers:
        total += num
    return total

def find_max_manual(numbers):
    max_val = numbers[0]
    for num in numbers[1:]:
        if num > max_val:
            max_val = num
    return max_val

test_numbers = list(range(1000000))

start_time = time.time()
manual_sum = calculate_sum_manual(test_numbers)
manual_max = find_max_manual(test_numbers)
manual_time = time.time() - start_time
print(f"Manual approach time: {manual_time:.4f} seconds")
The sum function starts with a total of 0, then adds each number to that total. The max function starts by assuming the first number is the maximum, then compares every other number against it to see if it's bigger.
After Optimization
Here's the same thing using Python's built-in functions:
start_time = time.time()
builtin_sum = sum(test_numbers)
builtin_max = max(test_numbers)
builtin_time = time.time() - start_time
print(f"Built-in approach time: {builtin_time:.4f} seconds")
print(f"Improvement: {manual_time / builtin_time:.2f}x faster")
That's it! sum() gives the total of all the numbers in the list, and max() returns the largest number. Same result, much faster.
Output:
Manual approach time: 0.0805 seconds
Built-in approach time: 0.0413 seconds
Improvement: 1.95x faster
Performance improvement: built-in functions are typically noticeably faster than hand-written loops, often by around 2x, as here.
Why does this work? Python's built-in functions are written in C and heavily optimized, so each element is processed without the overhead of the interpreter's loop machinery.
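The same principle applies well beyond sum() and max(). Built-ins like min(), any(), all(), and sorted() also run their loops in C, so it's worth checking for one before writing your own. A few quick examples:

numbers = [3, -1, 4, -1, 5]

print(min(numbers))                  # -1: the smallest value
print(any(n < 0 for n in numbers))   # True: at least one negative number
print(all(n < 10 for n in numbers))  # True: every value is below 10
print(sorted(numbers))               # [-1, -1, 3, 4, 5]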
4. Perform Efficient String Operations with Join
String concatenation is something every programmer does, but most beginners do it in a way that gets quadratically slower as strings grow longer.
Before Optimization
Here's how you might build a CSV string by concatenating with the + operator:
def create_csv_plus(data):
    result = ""  # Start with an empty string
    for row in data:  # Go through each row of data
        for i, item in enumerate(row):  # Go through each item in the row
            result += str(item)  # Add the item to our result string
            if i < len(row) - 1:  # Add a comma between items
                result += ","
        result += "\n"  # Add a newline after each row
    return result

# Sample data: 1,000 rows of three numbers (the exact test data isn't shown in the original)
test_data = [[i, i * 2, i * 3] for i in range(1000)]

start_time = time.time()
csv_plus = create_csv_plus(test_data)
plus_time = time.time() - start_time
print(f"String concatenation time: {plus_time:.4f} seconds")
This code builds our CSV string piece by piece. For each row, it goes through each item, converts it to a string, and adds it to the result, with commas between items and newlines between rows.
After Optimization
Here's the same code using the join method:
def create_csv_join(data):
    # For each row, join the items with commas, then join all rows with newlines
    return "\n".join(",".join(str(item) for item in row) for row in data)

start_time = time.time()
csv_join = create_csv_join(test_data)
join_time = time.time() - start_time
print(f"Join method time: {join_time:.4f} seconds")
print(f"Improvement: {plus_time / join_time:.2f}x faster")
This single line does a lot! The inner part, ",".join(str(item) for item in row), joins the items of each row with commas. The outer part, "\n".join(...), joins all those comma-separated rows with newlines.
Output:
String concatenation time: 0.0043 seconds
Join method time: 0.0022 seconds
Improvement: 1.94x faster
Performance improvement: string joining is much faster than repeated concatenation for large strings, and the gap grows with string size.
Why does this work? When you use += to concatenate strings, Python creates a brand-new string object each time because strings are immutable. With large strings, this becomes incredibly wasteful. The join method calculates exactly how much memory it needs upfront and builds the string just once.
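One last note: if you're generating real CSV files rather than benchmarking string building, the standard library's csv module is worth reaching for, since it also handles the quoting and escaping that a manual join misses. A minimal sketch using an in-memory buffer:

import csv
import io

def create_csv_stdlib(data):
    buffer = io.StringIO()  # In-memory text buffer to write into
    csv.writer(buffer).writerows(data)  # Handles quoting and escaping for us
    return buffer.getvalue()

print(create_csv_stdlib([[1, "a,b"], [2, "c"]]))  # "a,b" gets quoted automatically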
5. Use Generators for Memory-Efficient Processing
Sometimes you don't need to keep all of your data in memory at once. Generators let you produce data on demand, which can save huge amounts of memory.
Before Optimization
Here's how you might process a large dataset by storing everything in a list:
import sys

def process_large_dataset_list(n):
    processed_data = []
    for i in range(n):
        # Simulate some data processing
        processed_value = i ** 2 + i * 3 + 42
        processed_data.append(processed_value)  # Store every processed value
    return processed_data

# Test with 100,000 items
n = 100000
list_result = process_large_dataset_list(n)
list_memory = sys.getsizeof(list_result)
print(f"List memory usage: {list_memory:,} bytes")
This function processes the numbers from 0 to n-1, applies a calculation to each one (squaring it, multiplying by 3, and adding 42), and stores all the results in a list. The problem is that we're holding all 100,000 processed values in memory at once.
After Optimization
Here's the same processing using a generator:
def process_large_dataset_generator(n):
    for i in range(n):
        # Simulate some data processing
        processed_value = i ** 2 + i * 3 + 42
        yield processed_value  # Yield each value instead of storing it

# Create the generator (this doesn't process anything yet!)
gen_result = process_large_dataset_generator(n)
gen_memory = sys.getsizeof(gen_result)
print(f"Generator memory usage: {gen_memory:,} bytes")
print(f"Memory improvement: {list_memory / gen_memory:.0f}x less memory")

# Now we can process items one at a time
total = 0
for value in process_large_dataset_generator(n):
    total += value  # Each value is produced on demand and can be garbage collected
The key difference is yield instead of append. The yield keyword makes this a generator function: it produces values one at a time instead of creating them all at once.
Output:
List memory usage: 800,984 bytes
Generator memory usage: 224 bytes
Memory improvement: 3576x less memory
Performance improvement: generators can use thousands of times less memory for large datasets, since they never materialize the full result.
Why does this work? Generators use lazy evaluation: they only compute values when you ask for them. The generator object itself is tiny; it just remembers where it is in the computation.
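Generator expressions give you the same laziness without even defining a function: they look just like list comprehensions but use parentheses instead of brackets, and you can feed them straight into functions like sum():

n = 100000

# Parentheses instead of brackets: no intermediate list is ever built
total = sum(i ** 2 + i * 3 + 42 for i in range(n))
print(f"Total: {total:,}")

# You can also pull values on demand with next()
gen = (i ** 2 + i * 3 + 42 for i in range(n))
print(next(gen))  # 42 (the value for i = 0)
print(next(gen))  # 46 (the value for i = 1)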
Conclusion
Optimizing Python code doesn't have to be intimidating. As we've seen, small changes in how you approach common programming tasks can yield dramatic improvements in both speed and memory usage. The key is developing an intuition for picking the right tool for each job.
Remember these core principles: use built-in functions when they exist, choose appropriate data structures for your use case, avoid unnecessary repeated work, and be mindful of how Python handles memory. List comprehensions, sets for membership testing, string joining, and generators for large datasets are all tools that should be in every beginner Python programmer's toolkit. Keep learning, keep coding!
Bala Priya C is a developer and technical writer from India. She likes working at the intersection of math, programming, data science, and content creation. Her areas of interest and expertise include DevOps, data science, and natural language processing. She enjoys reading, writing, coding, and coffee! Currently, she's working on learning and sharing her knowledge with the developer community by authoring tutorials, how-to guides, opinion pieces, and more. Bala also creates engaging resource overviews and coding tutorials.