Tips on Working with Datetime Index in pandas
Source: http://sergilehkyi.com/tips-on-working-with-datetime-index-in-pandas/
As you may guess from the title, this is not a complete guide to Time Series or the Datetime data type in Python. So if you expect an in-depth explanation from A to Z, you're in the wrong place. Seriously. There is a fantastic article on this topic, well explained, detailed and quite straightforward. Don't waste your time on this one.
For those who have reached this part, I'll say that you will find something useful here for sure. Again, seriously. I found my notes on Time Series and decided to organize them into a little article with general tips that are applicable, I guess, in 80 to 90% of the cases where you work with dates. So it's worth sharing, isn't it?
I have a dataset with air pollutant measurements for every hour since 2016 in Madrid, so I will use it as an example.

Importing data

By default pandas builds a plain RangeIndex when importing a CSV file with read_csv(), so to use your datetime column as the index you need to specify it explicitly: index_col='date'.
The beauty of pandas is that it can preprocess your datetime data during import. By specifying parse_dates=True, pandas will try parsing the index. We can also pass it a list of ints or names, e.g. [1, 2, 3], to parse columns 1, 2 and 3 each as a separate date column; a list of lists, e.g. [[1, 3]], to combine columns 1 and 3 and parse them as a single date column; or a dict, e.g. {'foo': [1, 3]}, to parse columns 1 and 3 as a date and call the result 'foo'. If you are using another method to import data, you can always apply pd.to_datetime afterwards.
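A minimal sketch of these options (the toy file and column names here are made up for illustration; note that the list-of-lists and dict forms of parse_dates have been deprecated in recent pandas versions, so combining columns by hand with pd.to_datetime is the safer route):

import pandas as pd
from io import StringIO

# Toy CSV (hypothetical data) with the date split across two columns.
raw = StringIO('year_month,day,O_3\n2016-11,1,4.0\n2016-11,2,5.0')

# Simplest case: parse an existing datetime column straight into the index.
# df = pd.read_csv('data/madrid_2016.csv', index_col='date', parse_dates=True)

# Combining split columns by hand with pd.to_datetime:
df = pd.read_csv(raw)
df['date'] = pd.to_datetime(df['year_month'] + '-' + df['day'].astype(str))
df = df.set_index('date').drop(columns=['year_month', 'day'])
print(df.index)  # DatetimeIndex(['2016-11-01', '2016-11-02'], ...)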
I have imported my data using the following code:
import pandas as pd
import glob

# Read every monthly CSV and concatenate them into one frame,
# parsing the 'date' column into a DatetimeIndex on the way in.
pattern = 'data/madrid*.csv'
csv_files = glob.glob(pattern)

frames = []
for csv in csv_files:
    df = pd.read_csv(csv, index_col='date', parse_dates=True)
    frames.append(df)

df = pd.concat(frames)
df.head()
Out[4]:
                     BEN  CH4   CO  EBE  NMHC     NO   NO_2  NOx  O_3  PM10
date
2016-11-01 01:00:00  NaN  NaN  0.7  NaN   NaN  153.0   77.0  NaN  NaN   NaN
2016-11-01 01:00:00  3.1  NaN  1.1  2.0  0.53  260.0  144.0  NaN  4.0  46.0
2016-11-01 01:00:00  5.9  NaN  NaN  7.5   NaN  297.0  139.0  NaN  NaN   NaN
2016-11-01 01:00:00  NaN  NaN  1.0  NaN   NaN  154.0  113.0  NaN  2.0   NaN
2016-11-01 01:00:00  NaN  NaN  NaN  NaN   NaN  275.0  127.0  NaN  2.0   NaN
The data is gathered from 24 different stations and covers 14 different pollutants. We are not going to analyze all of it, so to make things a little bit simpler we will choose only one station and two pollutants, and remove all NaN values (DANGER! please, do not repeat this at home).
df_time = df[['O_3', 'PM10']][df['station'] == 28079008].dropna()
df_time.head()

Out[9]:
                     O_3  PM10
date
2016-11-01 01:00:00  4.0  46.0
2016-11-01 02:00:00  4.0  37.0
2016-11-01 03:00:00  4.0  31.0
2016-11-01 04:00:00  5.0  31.0
2016-11-01 05:00:00  6.0  27.0
Now that we have our data prepared, we can play with the Datetime Index.
Selecting values
Although the default pandas datetime format is ISO 8601 ('yyyy-mm-dd hh:mm:ss'), when selecting data using partial string indexing it understands many other formats. For example:
df_time.loc['2016-11-01'].head()
Out[17]:
                     O_3  PM10
date
2016-11-01 01:00:00  4.0  46.0
2016-11-01 02:00:00  4.0  37.0
2016-11-01 03:00:00  4.0  31.0
2016-11-01 04:00:00  5.0  31.0
2016-11-01 05:00:00  6.0  27.0
df_time.loc['November 1, 2016'].head()
Out[18]:
                     O_3  PM10
date
2016-11-01 01:00:00  4.0  46.0
2016-11-01 02:00:00  4.0  37.0
2016-11-01 03:00:00  4.0  31.0
2016-11-01 04:00:00  5.0  31.0
2016-11-01 05:00:00  6.0  27.0
df_time.loc['2016-Nov-1'].head()
Out[19]:
                     O_3  PM10
date
2016-11-01 01:00:00  4.0  46.0
2016-11-01 02:00:00  4.0  37.0
2016-11-01 03:00:00  4.0  31.0
2016-11-01 04:00:00  5.0  31.0
2016-11-01 05:00:00  6.0  27.0
All of them produce the same output, so we are free to use whichever is most comfortable for us. We can also select data for an entire month:
df_time.loc['2016-11'].head()
Out[23]:
                     O_3  PM10
date
2016-11-01 01:00:00  4.0  46.0
2016-11-01 02:00:00  4.0  37.0
2016-11-01 03:00:00  4.0  31.0
2016-11-01 04:00:00  5.0  31.0
2016-11-01 05:00:00  6.0  27.0
df_time.loc['2016-11'].count()
Out[24]:
O_3     715
PM10    715
dtype: int64
The same works if we want to select an entire year:
df_time.loc['2016'].head()
Out[31]:
                     O_3  PM10
date
2016-11-01 01:00:00  4.0  46.0
2016-11-01 02:00:00  4.0  37.0
2016-11-01 03:00:00  4.0  31.0
2016-11-01 04:00:00  5.0  31.0
2016-11-01 05:00:00  6.0  27.0
df_time.loc['2016'].count()
Out[32]:
O_3     8720
PM10    8720
dtype: int64
If we want to slice the data and find records for some specific period of time, we continue to use the loc accessor; all the rules are the same as for a regular index:
df_time.loc['2017-11-02 23:00' : '2017-12-01'].head()

Out[34]:
                     O_3  PM10
date
2017-11-02 23:00:00  5.0  30.0
2017-11-03 00:00:00  5.0  25.0
2017-11-03 01:00:00  5.0  12.0
2017-11-03 02:00:00  6.0   8.0
2017-11-03 03:00:00  7.0  14.0

df_time.loc['2017-11-02 23:00' : '2017-12-01'].count()

Out[35]:
O_3     690
PM10    690
dtype: int64
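Beyond .loc slicing, two helpers are handy here (a small sketch on the df_time frame, my addition rather than the original article's): truncate() keeps only the rows between two index bounds, and between_time() filters by time of day across all dates.

# Same result as the .loc slice above, written as explicit bounds.
subset = df_time.truncate(before='2017-11-02 23:00', after='2017-12-01')

# Keep only rows whose time of day falls between 08:00 and 10:00,
# regardless of the date (requires a DatetimeIndex).
mornings = df_time.between_time('08:00', '10:00')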
Resampling
Pandas has simple, powerful and efficient functionality for performing resampling operations during frequency conversion (e.g., converting secondly data into 5-minutely data). This is extremely common in, but not limited to, financial applications.
resample() is a time-based groupby, followed by a reduction method on each of its groups.
The resample function is very flexible and allows us to specify many different parameters to control the frequency conversion and resampling operation. sum, mean, std, sem, max, min, median, first, last and ohlc are all available as methods on the object returned by resample().
- Converting hourly data into monthly data
df_time.resample('M').mean().head()

Out[46]:
                  O_3       PM10
date
2016-01-31  21.871622  19.990541
2016-02-29  32.241679  25.853835
2016-03-31  44.234014  16.952381
2016-04-30  46.845938  12.189076
2016-05-31  53.136671  14.671177
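Because resample() is a time-based groupby, the monthly mean above can also be spelled out with pd.Grouper, and .agg lets us apply several reductions in one pass. A small sketch (my addition, not from the original article; newer pandas versions prefer the 'ME' alias over 'M' for month-end):

# Equivalent to df_time.resample('M').mean():
monthly = df_time.groupby(pd.Grouper(freq='M')).mean()

# Several reductions per month at once; yields MultiIndex columns
# like ('O_3', 'mean'), ('O_3', 'max'), ('PM10', 'min'), ...
stats = df_time.resample('M').agg(['mean', 'max', 'min'])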
For upsampling, we can specify a way to fill or interpolate over the gaps that are created:
- Converting hourly data into 10-minutely data
df_time.resample('10Min').mean().head()

Out[46]:
                     O_3  PM10
date
2016-01-01 01:00:00  8.0  17.0
2016-01-01 01:10:00  NaN   NaN
2016-01-01 01:20:00  NaN   NaN
2016-01-01 01:30:00  NaN   NaN
2016-01-01 01:40:00  NaN   NaN

df_time.resample('10Min').mean().ffill().head()

Out[47]:
                     O_3  PM10
date
2016-01-01 01:00:00  8.0  17.0
2016-01-01 01:10:00  8.0  17.0
2016-01-01 01:20:00  8.0  17.0
2016-01-01 01:30:00  8.0  17.0
2016-01-01 01:40:00  8.0  17.0
We can use the following methods to fill the NaN values: 'pad', 'backfill', 'ffill', 'bfill', 'nearest'. More details on this can be found in the documentation. Or we can do it via interpolation with the following methods: 'linear', 'time', 'index', 'values', 'nearest', 'zero', 'slinear', 'quadratic', 'cubic', 'barycentric', 'krogh', 'polynomial', 'spline', 'piecewise_polynomial', 'from_derivatives', 'pchip', 'akima'. And again, a deeper explanation of these can be found in the pandas docs.
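A quick sketch of the difference between the two approaches (assuming the hourly df_time frame from above): ffill() repeats the last observed value into each new 10-minute bin, while interpolate(method='time') draws a time-weighted line between observations.

up = df_time.resample('10Min').mean()

padded = up.ffill()                     # step-wise: repeat last known value
smooth = up.interpolate(method='time')  # time-weighted linear interpolation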
And a table of resampling frequencies:

Input     Description
min, T    minute
H         hour
D         day
B         business day
W         week
M         month
Q         quarter
A         year

Visualization
Another awesome feature of the Datetime Index is the simplicity of plotting, as matplotlib will automatically treat it as the x axis, so we don't need to specify anything explicitly.
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()

# Monthly means plot much cleaner than raw hourly data.
df_plot = df_time.resample('M').mean()
plt.plot(df_plot)
plt.title('Air pollution by O3 and PM10')
plt.ylabel('micrograms per cubic meter (µg/m3)')
plt.xticks(rotation=45)
plt.show()
As promised in the beginning: a few tips that help in the majority of situations when working with datetime data. For me it was one more refresher and organizer of thoughts that converts into knowledge. All win. Someone will find it useful, someone might not (I warned you in the first paragraph :D), so actually I expect everyone reading this to find it useful.
This is the most exciting feature of knowledge: when you share it, you don't lose anything, you only gain. Writing an article requires some research, some verification, some learning; basically, you end up with even more knowledge.
Knowledge is just a tool. And it's your responsibility to apply it or not. At the end of the day, it doesn't matter how much you know; it's about how you use that knowledge. But that's already another story…
Thank you for reading, have an incredible week, learn, spread the knowledge, use it wisely and use it for good deeds 🙂