Unable to resolve pandas encoding error by changing encoding

I'm having trouble resolving an encoding error when reading a csv file using the pandas library.

import pandas as pd
filepath = "D:\Datasets\2019HighwayBridgeInventory"
pd.read_csv(filepath + '\2019HwyBridgesDelimitedUtah.csv')

This returns a UnicodeDecodeError:

UnicodeDecodeError: 'utf-8' codec can't decode byte 0x81 in position 11: invalid start byte

After reviewing a related thread, I understand this error occurs when a non-UTF-8 character exists in the data. However, my attempts to resolve the error have been unsuccessful.

At first I tried opening the file and saving it with UTF-8 encoding in Sublime Text, but I received the same error message.

I have also tried specifying the encoding in the read_csv statement. I tried

pd.read_csv(filepath + '\2019HwyBridgesDelimitedUtah.csv', encoding = "ISO-8859-1")
pd.read_csv(filepath + '\2019HwyBridgesDelimitedUtah.csv', encoding = "us-ascii")
pd.read_csv(filepath + '\2019HwyBridgesDelimitedUtah.csv', encoding = "latin1")

but I seem to receive the same UTF-8 decode error every time. Is it possible that this error is not related to the read_csv statement? Why does the error still say the utf-8 codec can't decode something even when I change the encoding to something else?

Full error text:

UnicodeDecodeError                        Traceback (most recent call last)
<ipython-input-9-1e8cb6445435> in <module>
----> 1 pd.read_csv(filepath + '\2019HwyBridgesDelimitedUtah.csv', encoding = "iso-8859-1")

~\Anaconda3\lib\site-packages\pandas\io\parsers.py in parser_f(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, skipfooter, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, doublequote, escapechar, comment, encoding, dialect, tupleize_cols, error_bad_lines, warn_bad_lines, delim_whitespace, low_memory, memory_map, float_precision)
    700                     skip_blank_lines=skip_blank_lines)
    701 
--> 702         return _read(filepath_or_buffer, kwds)
    703 
    704     parser_f.__name__ = name

~\Anaconda3\lib\site-packages\pandas\io\parsers.py in _read(filepath_or_buffer, kwds)
    427 
    428     # Create the parser.
--> 429     parser = TextFileReader(filepath_or_buffer, **kwds)
    430 
    431     if chunksize or iterator:

~\Anaconda3\lib\site-packages\pandas\io\parsers.py in __init__(self, f, engine, **kwds)
    893             self.options['has_index_names'] = kwds['has_index_names']
    894 
--> 895         self._make_engine(self.engine)
    896 
    897     def close(self):

~\Anaconda3\lib\site-packages\pandas\io\parsers.py in _make_engine(self, engine)
   1120     def _make_engine(self, engine='c'):
   1121         if engine == 'c':
-> 1122             self._engine = CParserWrapper(self.f, **self.options)
   1123         else:
   1124             if engine == 'python':

~\Anaconda3\lib\site-packages\pandas\io\parsers.py in __init__(self, src, **kwds)
   1851         kwds['usecols'] = self.usecols
   1852 
-> 1853         self._reader = parsers.TextReader(src, **kwds)
   1854         self.unnamed_cols = self._reader.unnamed_cols
   1855 

pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader.__cinit__()

pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._setup_parser_source()

~\Anaconda3\lib\genericpath.py in exists(path)
     17     """Test whether a path exists.  Returns False for broken symbolic links"""
     18     try:
---> 19         os.stat(path)
     20     except OSError:
     21         return False

UnicodeDecodeError: 'utf-8' codec can't decode byte 0x81 in position 11: invalid start byte
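One detail worth noting in the traceback above: it fails inside os.stat(path), which operates on the path string rather than on the file contents, so the path string itself, not the CSV data, may be what fails to decode. The reported byte 0x81 at position 11 is also exactly what a non-raw Python string literal produces for the characters \201 (an octal escape). A minimal diagnostic sketch, reusing the path literals from the question:

filepath = "D:\Datasets\2019HighwayBridgeInventory"
path = filepath + '\2019HwyBridgesDelimitedUtah.csv'

print(repr(path))           # shows '\x81' wherever '\2019' was typed
print(hex(ord(path[11])))   # 0x81, matching the byte and position in the error message

# A raw string (r prefix) or forward slashes keep the backslashes literal:
raw_path = r"D:\Datasets\2019HighwayBridgeInventory" + r"\2019HwyBridgesDelimitedUtah.csv"
print(repr(raw_path))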


Solution 1:[1]

A suggestion would be to check which encoding the file actually has. Do it this way:

with open('filename.csv') as f:  ### or whatever your extension is
    print(f)

from that you'll obtain the encoding. Then,

df = pd.read_csv('filename.csv', encoding="the encoding that was returned")
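Note that print(f) reports the encoding Python chose when opening the file (usually the platform default), rather than analysing the file's bytes. If that value does not work, one common alternative is to guess the encoding from a sample of raw bytes with the third-party chardet package. A minimal sketch, with 'filename.csv' as a placeholder path and chardet assumed to be installed:

import chardet
import pandas as pd

# Read a sample of raw bytes (no decoding happens in 'rb' mode).
with open('filename.csv', 'rb') as f:
    raw = f.read(100_000)

guess = chardet.detect(raw)
print(guess)   # e.g. {'encoding': 'Windows-1252', 'confidence': 0.73, 'language': ''}

df = pd.read_csv('filename.csv', encoding=guess['encoding'])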

Solution 2:[2]

If you use Python 3.0 or higher, you can write it like this, for example:

df = pd.read_csv(f'E:\\??????????\\11???????\\{name}', encoding='ISO-8859-1')

If you do this, you will find that ISO-8859-1 can decode almost any file. Thanks for reading.
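The reason ISO-8859-1 can decode almost any file is that it maps every byte value from 0x00 to 0xFF to a character, so decoding with it never raises an error; the trade-off is that the decoded characters may be wrong if the file was really written in another encoding. A short illustration:

# ISO-8859-1 (latin-1) assigns a character to every byte value 0x00 to 0xFF,
# so decoding with it cannot fail, even on bytes that are invalid UTF-8.
data = bytes(range(256))
text = data.decode('iso-8859-1')
print(len(text))             # 256: every byte decoded to a character

try:
    b'\x81'.decode('utf-8')  # the byte from the error in the question
except UnicodeDecodeError as err:
    print(err)               # 'utf-8' codec can't decode byte 0x81 ...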

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

[1] Solution 1
[2] Solution 2: biao zhu