
10 Helpful Python One-Liners for CSV Processing


Image by Editor

 

Introduction

 
CSV files are everywhere in data workflows, from database exports to API responses to spreadsheet downloads. While pandas works great, sometimes you need quick solutions you can write in plain Python without having to install pandas.

Python's built-in csv module, combined with list comprehensions and generator expressions, can handle most common CSV tasks in a single line of code. These one-liners are great for quick data exploration, ETL debugging, or when you're working in constrained environments where external libraries aren't available.

Let's use a sample business dataset with 50 records, data.csv, and get started!

🔗 Link to the code on GitHub

 

1. Find Column Sum

 

Calculate the total of any numeric column across all rows.

print(f"Total: ${sum(float(r[3]) for r in __import__('csv').reader(open(path)) if r[0] != 'transaction_id'):,.2f}")

 

Here, path is the variable holding the path to the sample CSV file. For this example, in Google Colab, it's path = "/content/data.csv".

Output:

 

Here, __import__('csv') imports the built-in csv module inline. The generator expression skips the header row, converts column values to floats, sums them, and formats the total with currency notation. Adjust the column index (3) and the header check as needed.
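For anything beyond a throwaway snippet, the same sum reads better spelled out over a few lines. A minimal sketch, using a hypothetical inline sample in place of the article's data.csv (only the transaction_id header and the numeric column at index 3 come from the article; the other header names are assumptions):

```python
import csv
import io

# Hypothetical inline sample standing in for data.csv; header names
# beyond transaction_id are assumptions.
SAMPLE = """transaction_id,company,category,amount
T001,Acme Corp,Software,45000.00
T002,Beta LLC,Hardware,12500.50
T003,Gamma Solutions,Services,78900.00
"""

reader = csv.reader(io.StringIO(SAMPLE))
next(reader)  # consume the header once instead of testing r[0] per row
total = sum(float(row[3]) for row in reader)

print(f"Total: ${total:,.2f}")
```

Skipping the header with next(reader) is also slightly cheaper than comparing r[0] against 'transaction_id' on every row.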

 

2. Group By Maximum

 

Find which group has the highest aggregate value across your dataset.

print(max({r[5]: sum(float(row[3]) for row in __import__('csv').reader(open(path)) if row[5] == r[5] and row[0] != 'transaction_id') for r in __import__('csv').reader(open(path)) if r[0] != 'transaction_id'}.items(), key=lambda x: x[1]))

 

Output:

('Mike Rodriguez', 502252.0)

 

The dictionary comprehension groups by column 5, summing column 3 values for each group. One pass collects the group keys, and a second pass per group does the aggregation, so the file is re-read once for every key. max() with a lambda finds the highest total. Adjust the column indices for different group-by operations.
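The same group-by maximum can be done in a single pass with collections.defaultdict, which avoids re-reading the file for every group. A sketch against a hypothetical three-row sample, since the article's data.csv isn't reproduced here (header names beyond transaction_id are assumptions):

```python
import csv
import io
from collections import defaultdict

# Hypothetical inline sample; columns 3 (amount) and 5 (salesperson)
# match the article's indices, the header names are assumptions.
SAMPLE = """transaction_id,company,category,amount,date,salesperson
T001,Acme Corp,Software,45000.00,2024-01-02,Mike Rodriguez
T002,Beta LLC,Hardware,12500.50,2024-01-03,Jane Doe
T003,Gamma Solutions,Services,78900.00,2024-01-04,Mike Rodriguez
"""

totals = defaultdict(float)
reader = csv.reader(io.StringIO(SAMPLE))
next(reader)  # skip header
for row in reader:
    totals[row[5]] += float(row[3])  # accumulate per salesperson

top = max(totals.items(), key=lambda kv: kv[1])
print(top)
```

This reads the data once regardless of how many groups there are, instead of once per group.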

 

3. Filter and Display a Subset of Rows

 

Show only rows that match a specific condition, with formatted output.

print("\n".join(f"{r[1]}: ${float(r[3]):,.2f}" for r in __import__('csv').reader(open(path)) if r[7] == 'Enterprise' and r[0] != 'transaction_id'))

 

Output:

Acme Corp: $45,000.00
Gamma Solutions: $78,900.00
Zeta Systems: $156,000.00
Iota Industries: $67,500.25
Kappa LLC: $91,200.75
Nu Technologies: $76,800.25
Omicron LLC: $128,900.00
Sigma Corp: $89,700.75
Phi Corp: $176,500.25
Omega Technologies: $134,600.50
Alpha Solutions: $71,200.25
Matrix Systems: $105,600.25

 

The generator expression filters rows where column 7 equals Enterprise, then formats columns 1 and 3. Using "\n".join(...) returns a single string, which avoids printing a list of None values (the result you'd get from a comprehension that calls print() itself).
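If you'd rather filter by column name than by index, csv.DictReader handles the header for you. A sketch under assumed header names ('company', 'amount', and 'tier' are guesses; the article only shows indices 1, 3, and 7):

```python
import csv
import io

# Hypothetical inline sample; the header names are assumptions.
SAMPLE = """transaction_id,company,category,amount,date,salesperson,region,tier
T001,Acme Corp,Software,45000.00,2024-01-02,Mike Rodriguez,North America,Enterprise
T002,Beta LLC,Hardware,12500.50,2024-01-03,Jane Doe,Europe,SMB
T003,Zeta Systems,Software,156000.00,2024-01-04,Mike Rodriguez,North America,Enterprise
"""

# DictReader consumes the header itself, so no manual header check is needed
lines = [
    f"{row['company']}: ${float(row['amount']):,.2f}"
    for row in csv.DictReader(io.StringIO(SAMPLE))
    if row['tier'] == 'Enterprise'
]
print("\n".join(lines))
```

Name-based access also keeps the filter readable if the column order ever changes.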

 

4. Group By Sum Distribution

 
Get totals for each unique value in a grouping column.

print({g: f"${sum(float(row[3]) for row in __import__('csv').reader(open(path)) if row[6] == g and row[0] != 'transaction_id'):,.2f}" for g in set(row[6] for row in __import__('csv').reader(open(path)) if row[0] != 'transaction_id')})

 

Output:

{'Asia Pacific': '$326,551.75', 'Europe': '$502,252.00', 'North America': '$985,556.00'}

 

The dictionary comprehension first extracts the unique values from column 6 using a set comprehension, then calculates the sum of column 3 for each group. Generator expressions keep memory usage low, though the file is re-read once per group. Change the column indices to group by different fields.

 

5. Threshold Filter with Sort

 

Find and rank all records above a certain numeric threshold.

print([(n, f"${v:,.2f}") for n, v in sorted([(r[1], float(r[3])) for r in list(__import__('csv').reader(open(path)))[1:] if float(r[3]) > 100000], key=lambda x: x[1], reverse=True)])

 

Output:

[('Phi Corp', '$176,500.25'), ('Zeta Systems', '$156,000.00'), ('Omega Technologies', '$134,600.50'), ('Omicron LLC', '$128,900.00'), ('Matrix Systems', '$105,600.25')]

 

This filters rows where column 3 exceeds 100000, creates tuples of name and numeric value, sorts by the numeric value, and then formats the values as currency for display. Adjust the threshold and columns as needed.
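Unpacked into a few lines, the same threshold-and-rank logic is easier to tweak. A minimal sketch over a hypothetical inline sample (header names are assumptions):

```python
import csv
import io

# Hypothetical inline sample; columns 1 (name) and 3 (amount) match the
# article's indices, the header names are assumptions.
SAMPLE = """transaction_id,company,category,amount
T001,Acme Corp,Software,45000.00
T002,Phi Corp,Software,176500.25
T003,Zeta Systems,Software,156000.00
"""

reader = csv.reader(io.StringIO(SAMPLE))
next(reader)  # skip header
big = [(row[1], float(row[3])) for row in reader if float(row[3]) > 100000]
big.sort(key=lambda pair: pair[1], reverse=True)  # rank high to low

ranked = [(name, f"${value:,.2f}") for name, value in big]
print(ranked)
```

Sorting on the raw float before formatting keeps the ranking numeric rather than lexicographic.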

 

6. Count Unique Values

 

Quickly determine how many distinct values exist in any column.

print(len(set(r[2] for r in __import__('csv').reader(open(path)) if r[0] != 'transaction_id')))

 

Output:

 

Here, the set comprehension extracts the unique values from column 2, and len() counts them. This is useful for checking data variety or finding distinct categories.
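One step further, collections.Counter gives you the frequency of each distinct value, not just how many there are. A sketch over a hypothetical inline sample (header names are assumptions):

```python
import csv
import io
from collections import Counter

# Hypothetical inline sample; column 2 (category) matches the article's index.
SAMPLE = """transaction_id,company,category
T001,Acme Corp,Software
T002,Beta LLC,Hardware
T003,Zeta Systems,Software
"""

reader = csv.reader(io.StringIO(SAMPLE))
next(reader)  # skip header
counts = Counter(row[2] for row in reader)

print(len(counts))           # number of distinct categories
print(counts.most_common())  # frequency of each one
```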

 

7. Conditional Aggregation

 
Calculate averages or other statistics for specific subsets of your data.

print(f"Average: ${sum(float(r[3]) for r in __import__('csv').reader(open(path)) if r[6] == 'North America' and r[0] != 'transaction_id') / sum(1 for r in __import__('csv').reader(open(path)) if r[6] == 'North America' and r[0] != 'transaction_id'):,.2f}")

 

Output:

 

This one-liner calculates the average of column 3 for rows matching the condition in column 6: a sum divided by a count, each computed with a generator expression. It reads the file twice but keeps memory usage low.
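Reading the file once and keeping the matching values in a list halves the I/O at the cost of some memory. A sketch over a hypothetical inline sample (header names are assumptions):

```python
import csv
import io

# Hypothetical inline sample; columns 3 (amount) and 6 (region) match the
# article's indices, the header names are assumptions.
SAMPLE = """transaction_id,company,category,amount,date,salesperson,region
T001,Acme Corp,Software,45000.00,2024-01-02,Mike Rodriguez,North America
T002,Beta LLC,Hardware,12500.50,2024-01-03,Jane Doe,Europe
T003,Zeta Systems,Software,156000.00,2024-01-04,Mike Rodriguez,North America
"""

reader = csv.reader(io.StringIO(SAMPLE))
next(reader)  # skip header
values = [float(row[3]) for row in reader if row[6] == 'North America']
average = sum(values) / len(values)  # one read instead of two

print(f"Average: ${average:,.2f}")
```

If the subset can be empty, guard the division with a len(values) check first.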

 

8. Multi-Column Filter

 
Apply multiple filter conditions simultaneously across different columns.

print("\n".join(f"{r[1]} | {r[2]} | ${float(r[3]):,.2f}" for r in __import__('csv').reader(open(path)) if r[2] == 'Software' and float(r[3]) > 50000 and r[0] != 'transaction_id'))

 

Output:

Zeta Systems | Software | $156,000.00
Iota Industries | Software | $67,500.25
Omicron LLC | Software | $128,900.00
Sigma Corp | Software | $89,700.75
Phi Corp | Software | $176,500.25
Omega Technologies | Software | $134,600.50
Nexus Corp | Software | $92,300.75
Apex Industries | Software | $57,800.00

 

It combines multiple filter conditions with and operators, mixing string equality and numeric comparisons, and formats the output with pipe separators for a clean display.

 

9. Compute Column Statistics

 
Generate min, max, and average statistics for numeric columns in one shot.

vals = [float(r[3]) for r in __import__('csv').reader(open(path)) if r[0] != 'transaction_id']; print(f"Min: ${min(vals):,.2f} | Max: ${max(vals):,.2f} | Avg: ${sum(vals)/len(vals):,.2f}"); print(vals)

 

Output:

Min: $8,750.25 | Max: $176,500.25 | Avg: $62,564.13
[45000.0, 12500.5, 78900.0, 23400.75, 8750.25, 156000.0, 34500.5, 19800.0, 67500.25, 91200.75, 28750.0, 43200.5, 76800.25, 15600.75, 128900.0, 52300.5, 31200.25, 89700.75, 64800.0, 22450.5, 176500.25, 38900.75, 27300.0, 134600.5, 71200.25, 92300.75, 18900.5, 105600.25, 57800.0]

 

This builds a list of the numeric values from column 3, then calculates min, max, and average in a single line. The semicolons separate the statements. It's more memory intensive than streaming, but faster than multiple file reads when you need several statistics at once.
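The standard library's statistics module can replace the manual sum(vals)/len(vals). A sketch over a hypothetical inline sample (statistics.fmean requires Python 3.8+; header names are assumptions):

```python
import csv
import io
import statistics

# Hypothetical inline sample; only the numeric column at index 3 matters here.
SAMPLE = """transaction_id,company,category,amount
T001,Acme Corp,Software,45000.00
T002,Beta LLC,Hardware,12500.50
T003,Zeta Systems,Software,156000.00
"""

reader = csv.reader(io.StringIO(SAMPLE))
next(reader)  # skip header
vals = [float(row[3]) for row in reader]

# statistics.fmean computes the mean in fast floating-point arithmetic
print(f"Min: ${min(vals):,.2f} | Max: ${max(vals):,.2f} | Avg: ${statistics.fmean(vals):,.2f}")
```

statistics.median and statistics.stdev drop in the same way if you need more than the three basics.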

 

10. Export Filtered Data

 

Create a new CSV file containing only the rows that meet your criteria.

__import__('csv').writer(open('filtered.csv','w',newline="")).writerows([r for r in list(__import__('csv').reader(open(path)))[1:] if float(r[3]) > 75000])

 

This reads the CSV, filters rows based on a condition, and writes them to a new file. The newline="" argument prevents extra blank lines on Windows. Note that this example skips the header (it uses [1:]), so include it explicitly if you need a header in the output.
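A more robust version keeps the header and closes both files deterministically with context managers. A sketch writing to a temporary directory, with a hypothetical inline sample in place of data.csv (header names are assumptions):

```python
import csv
import io
import os
import tempfile

# Hypothetical inline sample standing in for data.csv; header names
# beyond transaction_id are assumptions.
SAMPLE = """transaction_id,company,category,amount
T001,Acme Corp,Software,45000.00
T002,Beta LLC,Hardware,12500.50
T003,Phi Corp,Software,176500.25
"""

out_path = os.path.join(tempfile.mkdtemp(), "filtered.csv")

reader = csv.reader(io.StringIO(SAMPLE))
header = next(reader)  # keep the header row this time

# 'with' flushes and closes the output deterministically; the one-liner
# leaves that to the garbage collector
with open(out_path, "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(header)
    writer.writerows(row for row in reader if float(row[3]) > 75000)
```

Writing the header first means downstream tools (including these one-liners) can read the result without guessing column positions.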

 

Wrapping Up

 
I hope you find these one-liners for CSV processing useful.

Such one-liners are handy for:

  • Quick data exploration and validation
  • Simple data transformations
  • Prototyping before writing full scripts

But you should avoid them for:

  • Production data processing
  • Files requiring complex error handling
  • Multi-step transformations

These techniques work with Python's built-in csv module when you need quick solutions without setup overhead. Happy analyzing!
 
 

Bala Priya C is a developer and technical writer from India. She likes working at the intersection of math, programming, data science, and content creation. Her areas of interest and expertise include DevOps, data science, and natural language processing. She enjoys reading, writing, coding, and coffee! Currently, she's working on learning and sharing her knowledge with the developer community by authoring tutorials, how-to guides, opinion pieces, and more. Bala also creates engaging resource overviews and coding tutorials.


