Python pandas DataFrame from first and last row of csv

All -

I am looking to create a pandas DataFrame from only the first and last lines of a very large csv. The purpose of this exercise is to be able to easily grab some attributes from the first and last entries in these csv files. I have no problem grabbing the first line of the csv using:

pd.read_csv(filename, nrows=1)

I also have no problem grabbing the last row of a text file in various ways, such as:

with open(filename) as f:
    last_line = f.readlines()[-1]

However, getting these two things into a single DataFrame has thrown me for a loop. Any insight into how best to achieve this goal?

EDIT NOTE: I am trying to achieve this task without loading all of the data into a single DataFrame first, as I am dealing with pretty large (>15MM rows) csv files.

Thanks!



Solution 1:[1]

Just use head and tail and concat. You can even adjust the number of rows.

import pandas as pd

df = pd.read_csv("flu.csv")
top = df.head(1)        # first row
bottom = df.tail(1)     # last row
concatenated = pd.concat([top, bottom])

print(concatenated)

Result:

           Date  Cases
0      9/1/2014     45
121  12/31/2014     97

Adjusting head and tail to take in 5 rows from top and 10 from bottom...
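
For example:

top = df.head(5)
bottom = df.tail(10)
concatenated = pd.concat([top, bottom])

print(concatenated)

Result: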

           Date  Cases
0      9/1/2014     45
1      9/2/2014    104
2      9/3/2014     47
3      9/4/2014    108
4      9/5/2014     49
112  12/22/2014     30
113  12/23/2014     81
114  12/24/2014     99
115  12/25/2014     85
116  12/26/2014     55
117  12/27/2014     91
118  12/28/2014     68
119  12/29/2014    109
120  12/30/2014     55
121  12/31/2014     97

One possible approach, if you don't want to load the whole CSV file into a DataFrame, is to process the first and last lines as plain CSV. The following code is similar to your approach.

import pandas as pd
import csv

top = pd.read_csv("flu.csv", nrows=1)
headers = top.columns.values

with open("flu.csv", "r") as f, open("flu2.csv","w") as g:
    last_line = f.readlines()[-1].strip().split(",")
    c = csv.writer(g)
    c.writerow(headers)
    c.writerow(last_line)

bottom = pd.read_csv("flu2.csv")
concatenated = pd.concat([top, bottom])
concatenated.reset_index(inplace=True, drop=True)

print(concatenated)

The result is the same, except for the index. Tested against a million rows, it was processed in about a second.

        Date  Cases
0   9/1/2014     45
1  7/25/4885     99
[Finished in 0.9s]

As for how it scales to 15 million rows: I tested it against exactly 15,728,626 rows and the results seem good enough.

        Date  Cases
0   9/1/2014     45
1  7/25/4885     99
[Finished in 3.3s]

Solution 2:[2]

The way to do this without reading the whole file into Python first is to grab the first line, then iterate through the file to the last line. Then use StringIO to pull them into pandas. Maybe something like this:

import pandas as pd
from io import StringIO

with open('tst.csv') as f:
    first_line = f.readline()
    for line in f:
        pass  # iterate to the end
    last_line = line

# parse each line as a one-row frame, then stack the two
mydf = pd.concat([
    pd.read_csv(StringIO(first_line), header=None),
    pd.read_csv(StringIO(last_line), header=None),
])

Solution 3:[3]

This is the best solution I found:

import pandas as pd

count = len(open(filename).readlines())  # total number of lines in the file

# keep line 1 (the header), line 2 (the first data row), and the last line
df = pd.read_csv(filename, skiprows=range(2, count - 1), header=0)
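
Note that readlines() builds a list of every line in memory. A minimal variant (my own sketch, not from the original answer) that counts the lines lazily instead:

with open(filename) as f:
    count = sum(1 for _ in f)  # count lines without holding them all in memory

df = pd.read_csv(filename, skiprows=range(2, count - 1), header=0)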

Solution 4:[4]

You want this answer: https://stackoverflow.com/a/18603065/4226476 - not the accepted answer, but the best one, because it seeks backwards from the end of the file for the last newline instead of guessing.

Then wrap the two lines in a StringIO:

from io import StringIO
import pandas as pd

# grab the lines as per first-and-last-line question
truncated_input = StringIO(the_two_lines)
truncated_input.seek(0) # need to rewind
df = pd.read_csv(truncated_input)
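
For reference, a minimal sketch of the seek-backwards idea itself (my own illustration - the function name is hypothetical, it assumes the file ends with a newline and has at least two lines, and the linked answer handles the edge cases more carefully):

import os

def first_and_last_lines(filename):
    with open(filename, 'rb') as f:
        first = f.readline()         # read the first line normally
        f.seek(-2, os.SEEK_END)      # jump to just before the trailing newline
        while f.read(1) != b'\n':    # scan backwards one byte at a time
            f.seek(-2, os.SEEK_CUR)  # until the newline before the last line
        last = f.readline()          # now read the last line forwards
    return first.decode() + last.decode()

the_two_lines = first_and_last_lines(filename)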

Solution 5:[5]

I had this problem too and went searching for a better solution.

The suggestion by Stefan Manole above is better than reading in the whole csv: in my testing it was about 2x faster than reading in the whole csv file.

Using a csv writer, as suggested above, was faster again at about 5x.

The best method would surely be to use the head, tail, and sed Unix commands. Tested over 20x faster!

import pandas as pd
import subprocess

filename = "csv_file.csv"

# header (line 1)
csv_header_str = subprocess.check_output(f"head -1 {filename}", shell=True).decode("utf-8").strip()
csv_header = csv_header_str.split(",")

# first data row (line 2)
csv_head = subprocess.check_output(f"sed -n '2p' {filename}", shell=True).decode("utf-8").strip()
head = csv_head.split(",")

# last data row
csv_tail = subprocess.check_output(f"tail -1 {filename}", shell=True).decode("utf-8").strip()
tail = csv_tail.split(",")

df = pd.DataFrame([head,tail], columns=csv_header)
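
One design note (my own suggestion, not from the original answer): passing each command as an argument list avoids shell=True, which is safer if the filename can contain spaces or shell metacharacters:

# same commands without invoking a shell
csv_header_str = subprocess.check_output(["head", "-1", filename]).decode("utf-8").strip()
csv_head = subprocess.check_output(["sed", "-n", "2p", filename]).decode("utf-8").strip()
csv_tail = subprocess.check_output(["tail", "-1", filename]).decode("utf-8").strip()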

I have a Github repo for this here, with more functionality like reading n lines from a csv into a DataFrame and handling data with/without headers: https://github.com/donjor/read-csv-turbo

I created a Python module, readcsvturbo (mainly just to try it out):

pip install readcsvturbo

import pandas as pd
import readcsvturbo as rct

filename = "csv_file.csv"
df = rct.read_csv_headtail(filename)

Hope this helps others who are in the same boat.

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1
Solution 2 JD Long
Solution 3
Solution 4 Community
Solution 5