10 Minutes to Pandas - Interactive Tutorial

Last updated: November 23rd, 2018

This is a short introduction to pandas, geared mainly for new users. You can see more complex recipes in the Cookbook.

In [1]:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

%matplotlib inline

Currently installed library versions (you can upgrade with !pip install pandas==X.Y.Z):

In [2]:
print(pd.__version__)
print(np.__version__)
0.23.0
1.14.5
In [3]:
np.random.seed(10)

Object Creation

See the Data Structure Intro section.

Creating a Series by passing a list of values, letting pandas create a default integer index:

In [4]:
s = pd.Series([1,3,5,np.nan,6,8])
In [5]:
s
Out[5]:
0    1.0
1    3.0
2    5.0
3    NaN
4    6.0
5    8.0
dtype: float64

Creating a DataFrame by passing a NumPy array, with a datetime index and labeled columns:

In [6]:
dates = pd.date_range('20130101', periods=6)
dates
Out[6]:
DatetimeIndex(['2013-01-01', '2013-01-02', '2013-01-03', '2013-01-04',
               '2013-01-05', '2013-01-06'],
              dtype='datetime64[ns]', freq='D')
In [7]:
df = pd.DataFrame(np.random.randn(6,4), index=dates, columns=list('ABCD'))
In [8]:
df
Out[8]:
A B C D
2013-01-01 1.331587 0.715279 -1.545400 -0.008384
2013-01-02 0.621336 -0.720086 0.265512 0.108549
2013-01-03 0.004291 -0.174600 0.433026 1.203037
2013-01-04 -0.965066 1.028274 0.228630 0.445138
2013-01-05 -1.136602 0.135137 1.484537 -1.079805
2013-01-06 -1.977728 -1.743372 0.266070 2.384967

Creating a DataFrame by passing a dict of objects that can be converted to series-like.

In [9]:
df2 = pd.DataFrame({
    'A' : 1.,
    'B' : pd.Timestamp('20130102'),
    'C' : pd.Series(1,index=list(range(4)),dtype='float32'),
    'D' : np.array([3] * 4,dtype='int32'),
    'E' : pd.Categorical(["test","train","test","train"]),
    'F' : 'foo'
})
In [10]:
df2
Out[10]:
A B C D E F
0 1.0 2013-01-02 1.0 3 test foo
1 1.0 2013-01-02 1.0 3 train foo
2 1.0 2013-01-02 1.0 3 test foo
3 1.0 2013-01-02 1.0 3 train foo

The columns of the resulting DataFrame have different dtypes.

In [11]:
df2.dtypes
Out[11]:
A           float64
B    datetime64[ns]
C           float32
D             int32
E          category
F            object
dtype: object

If you’re using IPython, tab completion for column names (as well as public attributes) is automatically enabled. Here’s a subset of the attributes that will be completed:

In [12]:
# Place the cursor after the dot and press Tab a couple of times
df2.

You should see something like this:

[image: tab-completion dropdown listing the attributes of df2]

As you can see, the columns A, B, C, and D are automatically tab completed. E is there as well; the rest of the attributes have been truncated for brevity.

Viewing Data

See the Basics section.

Here is how to view the top and bottom rows of the frame:

In [13]:
df.head()
Out[13]:
A B C D
2013-01-01 1.331587 0.715279 -1.545400 -0.008384
2013-01-02 0.621336 -0.720086 0.265512 0.108549
2013-01-03 0.004291 -0.174600 0.433026 1.203037
2013-01-04 -0.965066 1.028274 0.228630 0.445138
2013-01-05 -1.136602 0.135137 1.484537 -1.079805
In [14]:
df.tail(3)
Out[14]:
A B C D
2013-01-04 -0.965066 1.028274 0.228630 0.445138
2013-01-05 -1.136602 0.135137 1.484537 -1.079805
2013-01-06 -1.977728 -1.743372 0.266070 2.384967

Display the index, columns, and the underlying NumPy data:

In [15]:
df.index
Out[15]:
DatetimeIndex(['2013-01-01', '2013-01-02', '2013-01-03', '2013-01-04',
               '2013-01-05', '2013-01-06'],
              dtype='datetime64[ns]', freq='D')
In [16]:
df.columns
Out[16]:
Index(['A', 'B', 'C', 'D'], dtype='object')
In [17]:
df.values
Out[17]:
array([[ 1.3315865 ,  0.71527897, -1.54540029, -0.00838385],
       [ 0.62133597, -0.72008556,  0.26551159,  0.10854853],
       [ 0.00429143, -0.17460021,  0.43302619,  1.20303737],
       [-0.96506567,  1.02827408,  0.22863013,  0.44513761],
       [-1.13660221,  0.13513688,  1.484537  , -1.07980489],
       [-1.97772828, -1.7433723 ,  0.26607016,  2.38496733]])

describe() shows a quick statistic summary of your data:

In [18]:
df.describe()
Out[18]:
A B C D
count 6.000000 6.000000 6.000000 6.000000
mean -0.353697 -0.126561 0.188729 0.508917
std 1.228268 1.007917 0.975651 1.179607
min -1.977728 -1.743372 -1.545400 -1.079805
25% -1.093718 -0.583714 0.237850 0.020849
50% -0.480387 -0.019732 0.265791 0.276843
75% 0.467075 0.570243 0.391287 1.013562
max 1.331587 1.028274 1.484537 2.384967

Transposing your data:

In [19]:
df.T
Out[19]:
2013-01-01 00:00:00 2013-01-02 00:00:00 2013-01-03 00:00:00 2013-01-04 00:00:00 2013-01-05 00:00:00 2013-01-06 00:00:00
A 1.331587 0.621336 0.004291 -0.965066 -1.136602 -1.977728
B 0.715279 -0.720086 -0.174600 1.028274 0.135137 -1.743372
C -1.545400 0.265512 0.433026 0.228630 1.484537 0.266070
D -0.008384 0.108549 1.203037 0.445138 -1.079805 2.384967

Sorting by an axis:

In [20]:
df.sort_index(axis=1, ascending=False)
Out[20]:
D C B A
2013-01-01 -0.008384 -1.545400 0.715279 1.331587
2013-01-02 0.108549 0.265512 -0.720086 0.621336
2013-01-03 1.203037 0.433026 -0.174600 0.004291
2013-01-04 0.445138 0.228630 1.028274 -0.965066
2013-01-05 -1.079805 1.484537 0.135137 -1.136602
2013-01-06 2.384967 0.266070 -1.743372 -1.977728

Sorting by values:

In [21]:
df.sort_values(by='B')
Out[21]:
A B C D
2013-01-06 -1.977728 -1.743372 0.266070 2.384967
2013-01-02 0.621336 -0.720086 0.265512 0.108549
2013-01-03 0.004291 -0.174600 0.433026 1.203037
2013-01-05 -1.136602 0.135137 1.484537 -1.079805
2013-01-01 1.331587 0.715279 -1.545400 -0.008384
2013-01-04 -0.965066 1.028274 0.228630 0.445138

Selection

Note: While standard Python/NumPy expressions for selecting and setting are intuitive and come in handy for interactive work, for production code we recommend the optimized pandas data access methods, .at, .iat, .loc and .iloc.

See the indexing documentation Indexing and Selecting Data and MultiIndex / Advanced Indexing.
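For example, here is a quick sketch (using the df defined above) contrasting chained indexing with the recommended scalar accessor; both return the same value, but .at resolves it in a single optimized lookup:

# Chained indexing: two separate lookups, fine for interactive work
df['A'][dates[0]]

# Recommended for production code: one optimized lookup
df.at[dates[0], 'A']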

Getting

Selecting a single column, which yields a Series, equivalent to df.A:

In [22]:
df['A']
Out[22]:
2013-01-01    1.331587
2013-01-02    0.621336
2013-01-03    0.004291
2013-01-04   -0.965066
2013-01-05   -1.136602
2013-01-06   -1.977728
Freq: D, Name: A, dtype: float64
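As a quick check, attribute access returns the same Series (this works whenever the column name is a valid Python identifier that doesn't clash with an existing DataFrame attribute):

df.A.equals(df['A'])  # True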

Selecting via [], which slices the rows.

In [23]:
df[0:3]
Out[23]:
A B C D
2013-01-01 1.331587 0.715279 -1.545400 -0.008384
2013-01-02 0.621336 -0.720086 0.265512 0.108549
2013-01-03 0.004291 -0.174600 0.433026 1.203037
In [24]:
df['20130102':'20130104']
Out[24]:
A B C D
2013-01-02 0.621336 -0.720086 0.265512 0.108549
2013-01-03 0.004291 -0.174600 0.433026 1.203037
2013-01-04 -0.965066 1.028274 0.228630 0.445138

Selection by Label

See more in Selection by Label.

For getting a cross section using a label:

In [25]:
df.loc[dates[0]]
Out[25]:
A    1.331587
B    0.715279
C   -1.545400
D   -0.008384
Name: 2013-01-01 00:00:00, dtype: float64

Selecting on a multi-axis by label:

In [26]:
df.loc[:,['A','B']]
Out[26]:
A B
2013-01-01 1.331587 0.715279
2013-01-02 0.621336 -0.720086
2013-01-03 0.004291 -0.174600
2013-01-04 -0.965066 1.028274
2013-01-05 -1.136602 0.135137
2013-01-06 -1.977728 -1.743372

Showing label slicing, both endpoints are included:

In [27]:
df.loc['20130102':'20130104',['A','B']]
Out[27]:
A B
2013-01-02 0.621336 -0.720086
2013-01-03 0.004291 -0.174600
2013-01-04 -0.965066 1.028274

Reduction in the dimensions of the returned object:

In [28]:
df.loc['20130102',['A','B']]
Out[28]:
A    0.621336
B   -0.720086
Name: 2013-01-02 00:00:00, dtype: float64

For getting a scalar value:

In [29]:
df.loc[dates[0],'A']
Out[29]:
1.331586504129518

For getting fast access to a scalar (equivalent to the prior method):

In [30]:
df.at[dates[0],'A']
Out[30]:
1.331586504129518

Selection by Position

See more in Selection by Position.

Select via the position of the passed integers:

In [31]:
df.iloc[3]
Out[31]:
A   -0.965066
B    1.028274
C    0.228630
D    0.445138
Name: 2013-01-04 00:00:00, dtype: float64

By integer slices, acting similarly to NumPy/Python:

In [32]:
df.iloc[3:5, 0:2]
Out[32]:
A B
2013-01-04 -0.965066 1.028274
2013-01-05 -1.136602 0.135137

By lists of integer position locations, similar to the NumPy/Python style:

In [33]:
df.iloc[[1,2,4],[0,2]]
Out[33]:
A C
2013-01-02 0.621336 0.265512
2013-01-03 0.004291 0.433026
2013-01-05 -1.136602 1.484537

For slicing rows explicitly:

In [34]:
df.iloc[1:3,:]
Out[34]:
A B C D
2013-01-02 0.621336 -0.720086 0.265512 0.108549
2013-01-03 0.004291 -0.174600 0.433026 1.203037

For slicing columns explicitly:

In [35]:
df.iloc[:,1:3]
Out[35]:
B C
2013-01-01 0.715279 -1.545400
2013-01-02 -0.720086 0.265512
2013-01-03 -0.174600 0.433026
2013-01-04 1.028274 0.228630
2013-01-05 0.135137 1.484537
2013-01-06 -1.743372 0.266070

For getting a value explicitly:

In [36]:
df.iloc[1,1]
Out[36]:
-0.7200855607188968

For getting fast access to a scalar (equivalent to the prior method):

In [37]:
df.iat[1,1]
Out[37]:
-0.7200855607188968

Boolean Indexing

Using a single column's values to select data.

In [38]:
df[df.A > 0]
Out[38]:
A B C D
2013-01-01 1.331587 0.715279 -1.545400 -0.008384
2013-01-02 0.621336 -0.720086 0.265512 0.108549
2013-01-03 0.004291 -0.174600 0.433026 1.203037

Selecting values from a DataFrame where a boolean condition is met.

In [39]:
df[df > 0]
Out[39]:
A B C D
2013-01-01 1.331587 0.715279 NaN NaN
2013-01-02 0.621336 NaN 0.265512 0.108549
2013-01-03 0.004291 NaN 0.433026 1.203037
2013-01-04 NaN 1.028274 0.228630 0.445138
2013-01-05 NaN 0.135137 1.484537 NaN
2013-01-06 NaN NaN 0.266070 2.384967

Using the isin() method for filtering:

In [40]:
df2 = df.copy()
In [41]:
df2['E'] = ['one', 'one','two','three','four','three']
In [42]:
df2
Out[42]:
A B C D E
2013-01-01 1.331587 0.715279 -1.545400 -0.008384 one
2013-01-02 0.621336 -0.720086 0.265512 0.108549 one
2013-01-03 0.004291 -0.174600 0.433026 1.203037 two
2013-01-04 -0.965066 1.028274 0.228630 0.445138 three
2013-01-05 -1.136602 0.135137 1.484537 -1.079805 four
2013-01-06 -1.977728 -1.743372 0.266070 2.384967 three
In [43]:
df2[df2['E'].isin(['two','four'])]
Out[43]:
A B C D E
2013-01-03 0.004291 -0.174600 0.433026 1.203037 two
2013-01-05 -1.136602 0.135137 1.484537 -1.079805 four

Setting

Setting a new column automatically aligns the data by the indexes.

In [44]:
s1 = pd.Series([1,2,3,4,5,6], index=pd.date_range('20130102', periods=6))
In [45]:
s1
Out[45]:
2013-01-02    1
2013-01-03    2
2013-01-04    3
2013-01-05    4
2013-01-06    5
2013-01-07    6
Freq: D, dtype: int64
In [46]:
df['F'] = s1

Setting values by label:

In [47]:
df.at[dates[0],'A'] = 0

Setting values by position:

In [48]:
df.iat[0,1] = 0

Setting by assigning with a NumPy array:

In [49]:
df.loc[:,'D'] = np.array([5] * len(df))

The result of the prior setting operations.

In [50]:
df
Out[50]:
A B C D F
2013-01-01 0.000000 0.000000 -1.545400 5 NaN
2013-01-02 0.621336 -0.720086 0.265512 5 1.0
2013-01-03 0.004291 -0.174600 0.433026 5 2.0
2013-01-04 -0.965066 1.028274 0.228630 5 3.0
2013-01-05 -1.136602 0.135137 1.484537 5 4.0
2013-01-06 -1.977728 -1.743372 0.266070 5 5.0

A where operation with setting.

In [51]:
df2 = df.copy()
In [52]:
df2[df2 > 0] = -df2
In [53]:
df2
Out[53]:
A B C D F
2013-01-01 0.000000 0.000000 -1.545400 -5 NaN
2013-01-02 -0.621336 -0.720086 -0.265512 -5 -1.0
2013-01-03 -0.004291 -0.174600 -0.433026 -5 -2.0
2013-01-04 -0.965066 -1.028274 -0.228630 -5 -3.0
2013-01-05 -1.136602 -0.135137 -1.484537 -5 -4.0
2013-01-06 -1.977728 -1.743372 -0.266070 -5 -5.0

Missing Data

pandas primarily uses the value np.nan to represent missing data. It is by default not included in computations. See the Missing Data section.
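As a minimal illustration of that default (s_na here is just a throwaway example Series):

s_na = pd.Series([1.0, np.nan, 3.0])
s_na.sum()    # 4.0 -- the NaN is skipped
s_na.count()  # 2   -- only non-missing values are counted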

Reindexing allows you to change/add/delete the index on a specified axis. This returns a copy of the data.

In [54]:
df1 = df.reindex(index=dates[0:4], columns=list(df.columns) + ['E'])
In [55]:
df1.loc[dates[0]:dates[1],'E'] = 1
In [56]:
df1
Out[56]:
A B C D F E
2013-01-01 0.000000 0.000000 -1.545400 5 NaN 1.0
2013-01-02 0.621336 -0.720086 0.265512 5 1.0 1.0
2013-01-03 0.004291 -0.174600 0.433026 5 2.0 NaN
2013-01-04 -0.965066 1.028274 0.228630 5 3.0 NaN

To drop any rows that have missing data.

In [57]:
df1.dropna(how='any')
Out[57]:
A B C D F E
2013-01-02 0.621336 -0.720086 0.265512 5 1.0 1.0

Filling missing data.

In [58]:
df1.fillna(value=5)
Out[58]:
A B C D F E
2013-01-01 0.000000 0.000000 -1.545400 5 5.0 1.0
2013-01-02 0.621336 -0.720086 0.265512 5 1.0 1.0
2013-01-03 0.004291 -0.174600 0.433026 5 2.0 5.0
2013-01-04 -0.965066 1.028274 0.228630 5 3.0 5.0

To get the boolean mask where values are NaN:

In [60]:
pd.isna(df1)
Out[60]:
A B C D F E
2013-01-01 False False False False True False
2013-01-02 False False False False False False
2013-01-03 False False False False False True
2013-01-04 False False False False False True

Operations

See the Basic section on Binary Ops.

Stats

Operations in general exclude missing data.

Performing a descriptive statistic:

In [61]:
df.mean()
Out[61]:
A   -0.575628
B   -0.245775
C    0.188729
D    5.000000
F    3.000000
dtype: float64
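Note that F above was averaged over its five non-missing values. If you want NaN to propagate instead, most reductions accept skipna=False:

df['F'].mean()              # 3.0, the NaN is excluded by default
df['F'].mean(skipna=False)  # nan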

Same operation on the other axis:

In [62]:
df.mean(1)
Out[62]:
2013-01-01    0.863650
2013-01-02    1.233352
2013-01-03    1.452543
2013-01-04    1.658368
2013-01-05    1.896614
2013-01-06    1.308994
Freq: D, dtype: float64

Operating with objects that have different dimensionality and need alignment. In addition, pandas automatically broadcasts along the specified dimension.

In [64]:
s = pd.Series([1,3,5,np.nan,6,8], index=dates).shift(2)
In [65]:
s
Out[65]:
2013-01-01    NaN
2013-01-02    NaN
2013-01-03    1.0
2013-01-04    3.0
2013-01-05    5.0
2013-01-06    NaN
Freq: D, dtype: float64
In [66]:
df.sub(s, axis='index')
Out[66]:
A B C D F
2013-01-01 NaN NaN NaN NaN NaN
2013-01-02 NaN NaN NaN NaN NaN
2013-01-03 -0.995709 -1.174600 -0.566974 4.0 1.0
2013-01-04 -3.965066 -1.971726 -2.771370 2.0 0.0
2013-01-05 -6.136602 -4.864863 -3.515463 0.0 -1.0
2013-01-06 NaN NaN NaN NaN NaN

Apply

Applying functions to the data:

In [67]:
df.apply(np.cumsum)
Out[67]:
A B C D F
2013-01-01 0.000000 0.000000 -1.545400 5 NaN
2013-01-02 0.621336 -0.720086 -1.279889 10 1.0
2013-01-03 0.625627 -0.894686 -0.846863 15 3.0
2013-01-04 -0.339438 0.133588 -0.618232 20 6.0
2013-01-05 -1.476040 0.268725 0.866305 25 10.0
2013-01-06 -3.453769 -1.474647 1.132375 30 15.0
In [68]:
df.apply(lambda x: x.max() - x.min())
Out[68]:
A    2.599064
B    2.771646
C    3.029937
D    0.000000
F    4.000000
dtype: float64

Histogramming

See more at Histogramming and Discretization.

In [69]:
s = pd.Series(np.random.randint(0, 7, size=10))
In [70]:
s
Out[70]:
0    2
1    5
2    5
3    6
4    6
5    0
6    3
7    4
8    2
9    0
dtype: int64
In [71]:
s.value_counts()
Out[71]:
6    2
5    2
2    2
0    2
4    1
3    1
dtype: int64

String Methods

Series is equipped with a set of string processing methods in the str attribute that make it easy to operate on each element of the array, as in the code snippet below. Note that pattern-matching in str generally uses regular expressions by default (and in some cases always uses them). See more at Vectorized String Methods.

In [72]:
s = pd.Series(['A', 'B', 'C', 'Aaba', 'Baca', np.nan, 'CABA', 'dog', 'cat'])
In [73]:
s.str.lower()
Out[73]:
0       a
1       b
2       c
3    aaba
4    baca
5     NaN
6    caba
7     dog
8     cat
dtype: object
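Since pattern matching defaults to regular expressions, a pattern like the one below is interpreted as a regex (a small sketch using the same s; the NaN entry stays NaN in the result):

s.str.contains('^[AB]')  # True for strings starting with 'A' or 'B'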

Merge

Concat

pandas provides various facilities for easily combining Series, DataFrame, and Panel objects with various kinds of set logic for the indexes and relational algebra functionality in the case of join / merge-type operations.

See the Merging section.

Concatenating pandas objects together with concat():

In [74]:
df = pd.DataFrame(np.random.randn(10, 4))
In [75]:
df
Out[75]:
0 1 2 3
0 -1.926579 0.698584 -0.746201 -0.156627
1 -0.193636 1.139125 0.362218 -0.590913
2 0.389363 -0.526996 -0.514308 0.897609
3 0.989252 0.259225 -0.024193 0.358497
4 -0.377546 0.695631 1.024845 1.707755
5 -1.967117 -0.870957 -1.106825 0.275124
6 -0.364456 -1.165153 -1.343070 1.453393
7 0.001169 -0.011920 0.715883 -1.185769
8 -0.636328 -0.610814 -1.319391 -0.561728
9 0.205385 -0.953361 0.321518 -1.520939
In [76]:
pieces = [df[:3], df[3:7], df[7:]]
In [77]:
pd.concat(pieces)
Out[77]:
0 1 2 3
0 -1.926579 0.698584 -0.746201 -0.156627
1 -0.193636 1.139125 0.362218 -0.590913
2 0.389363 -0.526996 -0.514308 0.897609
3 0.989252 0.259225 -0.024193 0.358497
4 -0.377546 0.695631 1.024845 1.707755
5 -1.967117 -0.870957 -1.106825 0.275124
6 -0.364456 -1.165153 -1.343070 1.453393
7 0.001169 -0.011920 0.715883 -1.185769
8 -0.636328 -0.610814 -1.319391 -0.561728
9 0.205385 -0.953361 0.321518 -1.520939

Join

SQL style merges. See the Database style joining section.

In [78]:
left = pd.DataFrame({'key': ['foo', 'foo'], 'lval': [1, 2]})
In [79]:
right = pd.DataFrame({'key': ['foo', 'foo'], 'rval': [4, 5]})
In [80]:
left
Out[80]:
key lval
0 foo 1
1 foo 2
In [81]:
right
Out[81]:
key rval
0 foo 4
1 foo 5
In [82]:
pd.merge(left, right, on='key')
Out[82]:
key lval rval
0 foo 1 4
1 foo 1 5
2 foo 2 4
3 foo 2 5

Another example, this time with unique keys:

In [83]:
left = pd.DataFrame({'key': ['foo', 'bar'], 'lval': [1, 2]})
In [84]:
right = pd.DataFrame({'key': ['foo', 'bar'], 'rval': [4, 5]})
In [85]:
left
Out[85]:
key lval
0 foo 1
1 bar 2
In [86]:
right
Out[86]:
key rval
0 foo 4
1 bar 5
In [87]:
pd.merge(left, right, on='key')
Out[87]:
key lval rval
0 foo 1 4
1 bar 2 5

Append

Append rows to a dataframe. See the Appending section.

In [88]:
df = pd.DataFrame(np.random.randn(8, 4), columns=['A','B','C','D'])
In [89]:
df
Out[89]:
A B C D
0 -2.161453 0.345313 0.878969 0.562763
1 0.288435 -1.006482 0.053978 -1.826723
2 -0.091915 -0.253183 1.380594 -0.683429
3 -0.316851 -1.017270 -0.974124 -0.033239
4 0.982883 -0.468529 -0.895339 -0.075996
5 -0.415207 0.904939 -0.902895 0.247432
6 1.752237 0.348548 -0.022903 -0.961702
7 0.079236 -0.393272 -0.600994 -0.594842
In [90]:
s = df.iloc[3]
In [91]:
df.append(s, ignore_index=True)
Out[91]:
A B C D
0 -2.161453 0.345313 0.878969 0.562763
1 0.288435 -1.006482 0.053978 -1.826723
2 -0.091915 -0.253183 1.380594 -0.683429
3 -0.316851 -1.017270 -0.974124 -0.033239
4 0.982883 -0.468529 -0.895339 -0.075996
5 -0.415207 0.904939 -0.902895 0.247432
6 1.752237 0.348548 -0.022903 -0.961702
7 0.079236 -0.393272 -0.600994 -0.594842
8 -0.316851 -1.017270 -0.974124 -0.033239
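Note that DataFrame.append was later deprecated (pandas 1.4) and removed (pandas 2.0); on current pandas versions, a concat-based equivalent of the cell above would be:

pd.concat([df, s.to_frame().T], ignore_index=True)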

Grouping

By “group by” we are referring to a process involving one or more of the following steps:

  • Splitting the data into groups based on some criteria
  • Applying a function to each group independently
  • Combining the results into a data structure

See the Grouping section.

In [92]:
df = pd.DataFrame({
    'A' : ['foo', 'bar', 'foo', 'bar',
    'foo', 'bar', 'foo', 'foo'],
    'B' : ['one', 'one', 'two', 'three',
    'two', 'two', 'one', 'three'],
    'C' : np.random.randn(8),
    'D' : np.random.randn(8)
})
In [93]:
df
Out[93]:
A B C D
0 foo one -1.657737 0.586605
1 bar one -0.005070 0.284085
2 foo two 0.793057 -1.331690
3 bar three -0.889118 1.273106
4 foo two -0.997817 0.409245
5 bar two 0.669894 -0.607473
6 foo one 0.648512 0.934010
7 foo three 0.025735 -1.843736

Grouping and then applying the sum() function to the resulting groups.

In [94]:
df.groupby('A').sum()
Out[94]:
C D
A
bar -0.224294 0.949718
foo -1.188251 -1.245566
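This is the split-apply-combine process described above made explicit; a rough manual equivalent (for illustration only, using only the numeric columns) would be:

# Split into groups, apply sum() to each, combine the results
pieces = {key: group for key, group in df.groupby('A')}
sums = {key: group[['C', 'D']].sum() for key, group in pieces.items()}
pd.DataFrame(sums).T  # same values as df.groupby('A').sum()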

Grouping by multiple columns forms a hierarchical index, and again we can apply the sum function.

In [96]:
df.groupby(['A','B']).sum()
Out[96]:
C D
A B
bar one -0.005070 0.284085
three -0.889118 1.273106
two 0.669894 -0.607473
foo one -1.009225 1.520615
three 0.025735 -1.843736
two -0.204760 -0.922445

Reshaping

See the sections on Hierarchical Indexing and Reshaping.

Stack

In [97]:
tuples = list(zip(*[
    ['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'],
    ['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two']]))
In [98]:
index = pd.MultiIndex.from_tuples(tuples, names=['first', 'second'])
In [99]:
df = pd.DataFrame(np.random.randn(8, 2), index=index, columns=['A', 'B'])
In [100]:
df2 = df[:4]
In [101]:
df2
Out[101]:
A B
first second
bar one 0.478052 0.485261
two -0.474997 -1.165866
baz one -0.755630 0.588104
two -1.439245 -0.461221

The stack() method “compresses” a level in the DataFrame’s columns.

In [102]:
stacked = df2.stack()
In [103]:
stacked
Out[103]:
first  second   
bar    one     A    0.478052
               B    0.485261
       two     A   -0.474997
               B   -1.165866
baz    one     A   -0.755630
               B    0.588104
       two     A   -1.439245
               B   -0.461221
dtype: float64

With a “stacked” DataFrame or Series (having a MultiIndex as the index), the inverse operation of stack() is unstack(), which by default unstacks the last level:

In [104]:
stacked.unstack()
Out[104]:
A B
first second
bar one 0.478052 0.485261
two -0.474997 -1.165866
baz one -0.755630 0.588104
two -1.439245 -0.461221
In [105]:
stacked.unstack(1)
Out[105]:
second one two
first
bar A 0.478052 -0.474997
B 0.485261 -1.165866
baz A -0.755630 -1.439245
B 0.588104 -0.461221
In [106]:
stacked.unstack(0)
Out[106]:
first bar baz
second
one A 0.478052 -0.755630
B 0.485261 0.588104
two A -0.474997 -1.439245
B -1.165866 -0.461221

Pivot Tables

See the section on Pivot Tables.

In [107]:
df = pd.DataFrame({
    'A' : ['one', 'one', 'two', 'three'] * 3,
    'B' : ['A', 'B', 'C'] * 4,
    'C' : ['foo', 'foo', 'foo', 'bar', 'bar', 'bar'] * 2,
    'D' : np.random.randn(12),
    'E' : np.random.randn(12)
})
In [108]:
df
Out[108]:
A B C D E
0 one A foo -0.557044 -1.121932
1 one B foo 0.227126 1.202484
2 two C foo -0.740886 1.511334
3 three A bar -1.283116 -1.872146
4 one B bar 0.364895 -0.649122
5 one C bar -1.380255 1.394916
6 two A foo -1.465143 1.049797
7 three B foo 0.446281 -0.389249
8 one C foo 2.853460 -0.646174
9 one A bar -1.032414 0.501322
10 two B bar 0.161776 0.551917
11 three C bar 0.225552 1.461271

We can produce pivot tables from this data very easily:

In [109]:
pd.pivot_table(df, values='D', index=['A', 'B'], columns=['C'])
Out[109]:
C bar foo
A B
one A -1.032414 -0.557044
B 0.364895 0.227126
C -1.380255 2.853460
three A -1.283116 NaN
B NaN 0.446281
C 0.225552 NaN
two A NaN -1.465143
B 0.161776 NaN
C NaN -0.740886
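The default aggregation is the mean; other reducers can be passed through the aggfunc parameter, e.g.:

pd.pivot_table(df, values='D', index=['A', 'B'], columns=['C'], aggfunc=np.sum)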

Time Series

pandas has simple, powerful, and efficient functionality for performing resampling operations during frequency conversion (e.g., converting secondly data into 5-minutely data). This is extremely common in, but not limited to, financial applications. See the Time Series section.

In [110]:
rng = pd.date_range('1/1/2012', periods=100, freq='S')
In [111]:
ts = pd.Series(np.random.randint(0, 500, len(rng)), index=rng)
In [112]:
ts
Out[112]:
2012-01-01 00:00:00    165
2012-01-01 00:00:01    207
2012-01-01 00:00:02    187
2012-01-01 00:00:03    359
2012-01-01 00:00:04     20
2012-01-01 00:00:05    415
2012-01-01 00:00:06     89
2012-01-01 00:00:07    124
2012-01-01 00:00:08    256
2012-01-01 00:00:09    352
2012-01-01 00:00:10    281
2012-01-01 00:00:11     63
2012-01-01 00:00:12    498
2012-01-01 00:00:13    315
2012-01-01 00:00:14    258
2012-01-01 00:00:15     30
2012-01-01 00:00:16     17
2012-01-01 00:00:17    315
2012-01-01 00:00:18    198
2012-01-01 00:00:19    447
2012-01-01 00:00:20     86
2012-01-01 00:00:21     29
2012-01-01 00:00:22    106
2012-01-01 00:00:23    199
2012-01-01 00:00:24    348
2012-01-01 00:00:25    275
2012-01-01 00:00:26    128
2012-01-01 00:00:27    391
2012-01-01 00:00:28    251
2012-01-01 00:00:29    477
                      ... 
2012-01-01 00:01:10    308
2012-01-01 00:01:11    346
2012-01-01 00:01:12    313
2012-01-01 00:01:13    432
2012-01-01 00:01:14    113
2012-01-01 00:01:15    362
2012-01-01 00:01:16     23
2012-01-01 00:01:17    409
2012-01-01 00:01:18    147
2012-01-01 00:01:19     48
2012-01-01 00:01:20    488
2012-01-01 00:01:21    225
2012-01-01 00:01:22    332
2012-01-01 00:01:23     75
2012-01-01 00:01:24     93
2012-01-01 00:01:25    286
2012-01-01 00:01:26    287
2012-01-01 00:01:27    156
2012-01-01 00:01:28    101
2012-01-01 00:01:29    463
2012-01-01 00:01:30    212
2012-01-01 00:01:31    149
2012-01-01 00:01:32    212
2012-01-01 00:01:33    227
2012-01-01 00:01:34    174
2012-01-01 00:01:35    430
2012-01-01 00:01:36    301
2012-01-01 00:01:37    478
2012-01-01 00:01:38    340
2012-01-01 00:01:39    148
Freq: S, Length: 100, dtype: int64
In [113]:
ts.resample('5Min').sum()
Out[113]:
2012-01-01    24548
Freq: 5T, dtype: int64

Time zone representation:

In [114]:
rng = pd.date_range('3/6/2012 00:00', periods=5, freq='D')
In [115]:
ts = pd.Series(np.random.randn(len(rng)), rng)
In [116]:
ts
Out[116]:
2012-03-06   -1.547305
2012-03-07    1.344807
2012-03-08    0.503189
2012-03-09    1.194351
2012-03-10   -0.563506
Freq: D, dtype: float64
In [117]:
ts_utc = ts.tz_localize('UTC')
In [118]:
ts_utc
Out[118]:
2012-03-06 00:00:00+00:00   -1.547305
2012-03-07 00:00:00+00:00    1.344807
2012-03-08 00:00:00+00:00    0.503189
2012-03-09 00:00:00+00:00    1.194351
2012-03-10 00:00:00+00:00   -0.563506
Freq: D, dtype: float64

Converting to another time zone:

In [119]:
ts_utc.tz_convert('US/Eastern')
Out[119]:
2012-03-05 19:00:00-05:00   -1.547305
2012-03-06 19:00:00-05:00    1.344807
2012-03-07 19:00:00-05:00    0.503189
2012-03-08 19:00:00-05:00    1.194351
2012-03-09 19:00:00-05:00   -0.563506
Freq: D, dtype: float64

Converting between time span representations:

In [120]:
rng = pd.date_range('1/1/2012', periods=5, freq='M')
In [121]:
ts = pd.Series(np.random.randn(len(rng)), index=rng)
In [122]:
ts
Out[122]:
2012-01-31    0.854829
2012-02-29    0.687905
2012-03-31   -1.533069
2012-04-30    0.148961
2012-05-31    0.315111
Freq: M, dtype: float64
In [123]:
ps = ts.to_period()
In [124]:
ps
Out[124]:
2012-01    0.854829
2012-02    0.687905
2012-03   -1.533069
2012-04    0.148961
2012-05    0.315111
Freq: M, dtype: float64
In [125]:
ps.to_timestamp()
Out[125]:
2012-01-01    0.854829
2012-02-01    0.687905
2012-03-01   -1.533069
2012-04-01    0.148961
2012-05-01    0.315111
Freq: MS, dtype: float64

Converting between period and timestamp enables some convenient arithmetic functions to be used. In the following example, we convert a quarterly frequency with year ending in November to 9am of the end of the month following the quarter end:

In [126]:
prng = pd.period_range('1990Q1', '2000Q4', freq='Q-NOV')
In [127]:
ts = pd.Series(np.random.randn(len(prng)), prng)
In [128]:
# convert each quarter end to the following month, at 9am of its first day
ts.index = (prng.asfreq('M', 'e') + 1).asfreq('H', 's') + 9
In [129]:
ts.head()
Out[129]:
1990-03-01 09:00   -0.574918
1990-06-01 09:00   -0.360171
1990-09-01 09:00    0.436185
1990-12-01 09:00   -0.037918
1991-03-01 09:00   -1.004716
Freq: H, dtype: float64

Categoricals

pandas can include categorical data in a DataFrame. For full docs, see the categorical introduction and the API documentation.

In [130]:
df = pd.DataFrame({"id":[1,2,3,4,5,6], "raw_grade":['a', 'b', 'b', 'a', 'a', 'e']})

Convert the raw grades to a categorical data type.

In [131]:
df["grade"] = df["raw_grade"].astype("category")
In [132]:
df["grade"]
Out[132]:
0    a
1    b
2    b
3    a
4    a
5    e
Name: grade, dtype: category
Categories (3, object): [a, b, e]

Rename the categories to more meaningful names (assigning to Series.cat.categories is inplace!).

In [133]:
df["grade"].cat.categories = ["very good", "good", "very bad"]

Reorder the categories and simultaneously add the missing categories (methods under Series.cat return a new Series by default).

In [134]:
df["grade"] = df["grade"].cat.set_categories(["very bad", "bad", "medium", "good", "very good"])
In [135]:
df["grade"]
Out[135]:
0    very good
1         good
2         good
3    very good
4    very good
5     very bad
Name: grade, dtype: category
Categories (5, object): [very bad, bad, medium, good, very good]

Sorting is per order in the categories, not lexical order.

In [136]:
df.sort_values(by="grade")
Out[136]:
id raw_grade grade
5 6 e very bad
1 2 b good
2 3 b good
0 1 a very good
3 4 a very good
4 5 a very good
In [137]:
df.groupby("grade").size()
Out[137]:
grade
very bad     1
bad          0
medium       0
good         2
very good    3
dtype: int64

Plotting

See the Plotting docs.

In [138]:
ts = pd.Series(np.random.randn(1000), index=pd.date_range('1/1/2000', periods=1000))
In [139]:
ts = ts.cumsum()
In [140]:
ts.plot(figsize=(15, 7))
Out[140]:
<matplotlib.axes._subplots.AxesSubplot at 0x7fc366fcd400>

On a DataFrame, the plot() method is a convenience to plot all of the columns with labels:

In [141]:
df = pd.DataFrame(
    np.random.randn(1000, 4), index=ts.index,
    columns=['A', 'B', 'C', 'D'])
In [142]:
df = df.cumsum()
In [144]:
plt.figure(); df.plot(figsize=(15, 7)); plt.legend(loc='best')
Out[144]:
<matplotlib.legend.Legend at 0x7fc364d49f28>
<Figure size 432x288 with 0 Axes>

Getting Data In/Out

CSV

Writing to a CSV file.

In [145]:
df.to_csv('foo.csv')
In [146]:
pd.read_csv('foo.csv')
Out[146]:
Unnamed: 0 A B C D
0 2000-01-01 0.232796 0.299175 1.621098 -0.836301
1 2000-01-02 -1.141489 0.723000 2.123504 -0.224878
2 2000-01-03 -1.420463 1.059972 1.590400 -1.626455
3 2000-01-04 -0.697959 1.176849 0.247666 -3.094220
4 2000-01-05 -1.837010 1.546245 -0.679068 -2.800726
5 2000-01-06 -3.212793 1.289629 -0.267142 -3.664913
6 2000-01-07 -3.906949 -0.051253 0.772836 -4.155552
7 2000-01-08 -3.876541 -1.830087 0.170127 -4.103286
8 2000-01-09 -3.169982 -1.753049 -1.600213 -4.153426
9 2000-01-10 -3.186105 -1.360824 -1.098576 -5.080789
10 2000-01-11 -3.257805 -0.728985 -0.328157 -6.008696
11 2000-01-12 -4.016530 -0.483654 -0.695272 -6.296281
12 2000-01-13 -3.169856 -0.334199 -0.112377 -5.682387
13 2000-01-14 -4.241703 0.404195 -0.158822 -7.291888
14 2000-01-15 -4.004658 0.640639 0.200434 -7.808418
15 2000-01-16 -3.019631 -0.138619 -0.195451 -6.725233
16 2000-01-17 -2.526337 0.093638 1.532938 -6.958976
17 2000-01-18 -4.373418 0.311532 1.132170 -7.302990
18 2000-01-19 -5.273040 -0.717768 0.497616 -6.911126
19 2000-01-20 -6.448850 -0.224653 -0.046340 -6.758269
20 2000-01-21 -4.943787 -0.698704 0.163211 -6.893747
21 2000-01-22 -3.952354 -2.204695 0.558461 -7.387826
22 2000-01-23 -5.304680 -3.134514 0.021907 -7.226565
23 2000-01-24 -5.031701 -2.328006 -0.860458 -7.306846
24 2000-01-25 -4.896881 -1.583543 -1.869415 -7.453436
25 2000-01-26 -6.391101 -1.879440 -2.990488 -8.301841
26 2000-01-27 -6.121248 -2.036508 -3.717953 -7.029116
27 2000-01-28 -5.195141 -2.084565 -4.404227 -6.450158
28 2000-01-29 -5.660434 -2.255982 -3.892638 -6.683686
29 2000-01-30 -7.905991 -2.137240 -2.710869 -6.353077
... ... ... ... ... ...
970 2002-08-28 -28.390474 -19.783973 -36.249024 -80.524933
971 2002-08-29 -27.914406 -19.708396 -35.141058 -80.900598
972 2002-08-30 -24.721035 -17.702092 -35.045412 -80.167698
973 2002-08-31 -24.028938 -18.607195 -35.387397 -79.570238
974 2002-09-01 -24.009956 -19.631794 -34.888249 -78.985944
975 2002-09-02 -23.898960 -20.413293 -36.001419 -77.613580
976 2002-09-03 -22.907406 -20.596000 -35.239997 -76.348259
977 2002-09-04 -22.712333 -19.955995 -35.620827 -77.238011
978 2002-09-05 -23.519035 -19.364869 -35.814106 -76.645591
979 2002-09-06 -22.718371 -21.255241 -36.034339 -77.210565
980 2002-09-07 -22.307577 -20.132891 -35.793113 -76.714027
981 2002-09-08 -21.650785 -20.220993 -33.704326 -75.940266
982 2002-09-09 -20.926967 -19.749027 -33.700124 -76.323981
983 2002-09-10 -21.846174 -19.018332 -33.099587 -75.310443
984 2002-09-11 -19.501036 -19.933442 -32.431522 -74.287285
985 2002-09-12 -19.553611 -20.412938 -30.490885 -75.329411
986 2002-09-13 -20.185181 -20.188889 -30.885516 -75.923923
987 2002-09-14 -21.904924 -20.065591 -29.455811 -77.189176
988 2002-09-15 -21.565201 -19.652966 -29.695901 -75.278708
989 2002-09-16 -22.297790 -18.310904 -28.670178 -73.564076
990 2002-09-17 -23.974133 -18.572065 -28.444068 -73.247689
991 2002-09-18 -23.960098 -17.501733 -28.688742 -72.517694
992 2002-09-19 -24.455757 -17.695024 -27.874703 -71.885050
993 2002-09-20 -25.454776 -17.199965 -28.099897 -71.716991
994 2002-09-21 -25.888726 -18.274632 -27.933352 -71.547674
995 2002-09-22 -25.895522 -18.258257 -27.961358 -70.378859
996 2002-09-23 -24.579641 -17.517173 -28.735203 -69.626137
997 2002-09-24 -25.891772 -16.750934 -29.931475 -68.980136
998 2002-09-25 -26.511951 -17.871739 -30.520565 -68.626549
999 2002-09-26 -26.888491 -16.791559 -32.415504 -68.502537

1000 rows × 5 columns
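Note that the index came back as an unnamed column (Unnamed: 0), since to_csv wrote it out as ordinary data. To restore it as the index on the way in, you can pass index_col (and parse_dates for the dates):

pd.read_csv('foo.csv', index_col=0, parse_dates=True)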

HDF5

HDF5 support requires the PyTables package (the tables module), which is already installed in this notebook.

Reading and writing to HDFStores.

Writing to an HDF5 store.

In [148]:
df.to_hdf('foo.h5','df')

Reading from an HDF5 store.

In [149]:
pd.read_hdf('foo.h5','df')
Out[149]:
A B C D
2000-01-01 0.232796 0.299175 1.621098 -0.836301
2000-01-02 -1.141489 0.723000 2.123504 -0.224878
2000-01-03 -1.420463 1.059972 1.590400 -1.626455
2000-01-04 -0.697959 1.176849 0.247666 -3.094220
2000-01-05 -1.837010 1.546245 -0.679068 -2.800726
2000-01-06 -3.212793 1.289629 -0.267142 -3.664913
2000-01-07 -3.906949 -0.051253 0.772836 -4.155552
2000-01-08 -3.876541 -1.830087 0.170127 -4.103286
2000-01-09 -3.169982 -1.753049 -1.600213 -4.153426
2000-01-10 -3.186105 -1.360824 -1.098576 -5.080789
2000-01-11 -3.257805 -0.728985 -0.328157 -6.008696
2000-01-12 -4.016530 -0.483654 -0.695272 -6.296281
2000-01-13 -3.169856 -0.334199 -0.112377 -5.682387
2000-01-14 -4.241703 0.404195 -0.158822 -7.291888
2000-01-15 -4.004658 0.640639 0.200434 -7.808418
2000-01-16 -3.019631 -0.138619 -0.195451 -6.725233
2000-01-17 -2.526337 0.093638 1.532938 -6.958976
2000-01-18 -4.373418 0.311532 1.132170 -7.302990
2000-01-19 -5.273040 -0.717768 0.497616 -6.911126
2000-01-20 -6.448850 -0.224653 -0.046340 -6.758269
2000-01-21 -4.943787 -0.698704 0.163211 -6.893747
2000-01-22 -3.952354 -2.204695 0.558461 -7.387826
2000-01-23 -5.304680 -3.134514 0.021907 -7.226565
2000-01-24 -5.031701 -2.328006 -0.860458 -7.306846
2000-01-25 -4.896881 -1.583543 -1.869415 -7.453436
2000-01-26 -6.391101 -1.879440 -2.990488 -8.301841
2000-01-27 -6.121248 -2.036508 -3.717953 -7.029116
2000-01-28 -5.195141 -2.084565 -4.404227 -6.450158
2000-01-29 -5.660434 -2.255982 -3.892638 -6.683686
2000-01-30 -7.905991 -2.137240 -2.710869 -6.353077
... ... ... ... ...
2002-08-28 -28.390474 -19.783973 -36.249024 -80.524933
2002-08-29 -27.914406 -19.708396 -35.141058 -80.900598
2002-08-30 -24.721035 -17.702092 -35.045412 -80.167698
2002-08-31 -24.028938 -18.607195 -35.387397 -79.570238
2002-09-01 -24.009956 -19.631794 -34.888249 -78.985944
2002-09-02 -23.898960 -20.413293 -36.001419 -77.613580
2002-09-03 -22.907406 -20.596000 -35.239997 -76.348259
2002-09-04 -22.712333 -19.955995 -35.620827 -77.238011
2002-09-05 -23.519035 -19.364869 -35.814106 -76.645591
2002-09-06 -22.718371 -21.255241 -36.034339 -77.210565
2002-09-07 -22.307577 -20.132891 -35.793113 -76.714027
2002-09-08 -21.650785 -20.220993 -33.704326 -75.940266
2002-09-09 -20.926967 -19.749027 -33.700124 -76.323981
2002-09-10 -21.846174 -19.018332 -33.099587 -75.310443
2002-09-11 -19.501036 -19.933442 -32.431522 -74.287285
2002-09-12 -19.553611 -20.412938 -30.490885 -75.329411
2002-09-13 -20.185181 -20.188889 -30.885516 -75.923923
2002-09-14 -21.904924 -20.065591 -29.455811 -77.189176
2002-09-15 -21.565201 -19.652966 -29.695901 -75.278708
2002-09-16 -22.297790 -18.310904 -28.670178 -73.564076
2002-09-17 -23.974133 -18.572065 -28.444068 -73.247689
2002-09-18 -23.960098 -17.501733 -28.688742 -72.517694
2002-09-19 -24.455757 -17.695024 -27.874703 -71.885050
2002-09-20 -25.454776 -17.199965 -28.099897 -71.716991
2002-09-21 -25.888726 -18.274632 -27.933352 -71.547674
2002-09-22 -25.895522 -18.258257 -27.961358 -70.378859
2002-09-23 -24.579641 -17.517173 -28.735203 -69.626137
2002-09-24 -25.891772 -16.750934 -29.931475 -68.980136
2002-09-25 -26.511951 -17.871739 -30.520565 -68.626549
2002-09-26 -26.888491 -16.791559 -32.415504 -68.502537

1000 rows × 4 columns

Excel

Reading and writing to MS Excel.

Writing to an Excel file.

In [150]:
df.to_excel('foo.xlsx', sheet_name='Sheet1')

Reading from an Excel file.

In [151]:
pd.read_excel('foo.xlsx', 'Sheet1', index_col=None, na_values=['NA'])
Out[151]:
A B C D
2000-01-01 0.232796 0.299175 1.621098 -0.836301
2000-01-02 -1.141489 0.723000 2.123504 -0.224878
2000-01-03 -1.420463 1.059972 1.590400 -1.626455
2000-01-04 -0.697959 1.176849 0.247666 -3.094220
2000-01-05 -1.837010 1.546245 -0.679068 -2.800726
2000-01-06 -3.212793 1.289629 -0.267142 -3.664913
2000-01-07 -3.906949 -0.051253 0.772836 -4.155552
2000-01-08 -3.876541 -1.830087 0.170127 -4.103286
2000-01-09 -3.169982 -1.753049 -1.600213 -4.153426
2000-01-10 -3.186105 -1.360824 -1.098576 -5.080789
2000-01-11 -3.257805 -0.728985 -0.328157 -6.008696
2000-01-12 -4.016530 -0.483654 -0.695272 -6.296281
2000-01-13 -3.169856 -0.334199 -0.112377 -5.682387
2000-01-14 -4.241703 0.404195 -0.158822 -7.291888
2000-01-15 -4.004658 0.640639 0.200434 -7.808418
2000-01-16 -3.019631 -0.138619 -0.195451 -6.725233
2000-01-17 -2.526337 0.093638 1.532938 -6.958976
2000-01-18 -4.373418 0.311532 1.132170 -7.302990
2000-01-19 -5.273040 -0.717768 0.497616 -6.911126
2000-01-20 -6.448850 -0.224653 -0.046340 -6.758269
2000-01-21 -4.943787 -0.698704 0.163211 -6.893747
2000-01-22 -3.952354 -2.204695 0.558461 -7.387826
2000-01-23 -5.304680 -3.134514 0.021907 -7.226565
2000-01-24 -5.031701 -2.328006 -0.860458 -7.306846
2000-01-25 -4.896881 -1.583543 -1.869415 -7.453436
2000-01-26 -6.391101 -1.879440 -2.990488 -8.301841
2000-01-27 -6.121248 -2.036508 -3.717953 -7.029116
2000-01-28 -5.195141 -2.084565 -4.404227 -6.450158
2000-01-29 -5.660434 -2.255982 -3.892638 -6.683686
2000-01-30 -7.905991 -2.137240 -2.710869 -6.353077
... ... ... ... ...
2002-08-28 -28.390474 -19.783973 -36.249024 -80.524933
2002-08-29 -27.914406 -19.708396 -35.141058 -80.900598
2002-08-30 -24.721035 -17.702092 -35.045412 -80.167698
2002-08-31 -24.028938 -18.607195 -35.387397 -79.570238
2002-09-01 -24.009956 -19.631794 -34.888249 -78.985944
2002-09-02 -23.898960 -20.413293 -36.001419 -77.613580
2002-09-03 -22.907406 -20.596000 -35.239997 -76.348259
2002-09-04 -22.712333 -19.955995 -35.620827 -77.238011
2002-09-05 -23.519035 -19.364869 -35.814106 -76.645591
2002-09-06 -22.718371 -21.255241 -36.034339 -77.210565
2002-09-07 -22.307577 -20.132891 -35.793113 -76.714027
2002-09-08 -21.650785 -20.220993 -33.704326 -75.940266
2002-09-09 -20.926967 -19.749027 -33.700124 -76.323981
2002-09-10 -21.846174 -19.018332 -33.099587 -75.310443
2002-09-11 -19.501036 -19.933442 -32.431522 -74.287285
2002-09-12 -19.553611 -20.412938 -30.490885 -75.329411
2002-09-13 -20.185181 -20.188889 -30.885516 -75.923923
2002-09-14 -21.904924 -20.065591 -29.455811 -77.189176
2002-09-15 -21.565201 -19.652966 -29.695901 -75.278708
2002-09-16 -22.297790 -18.310904 -28.670178 -73.564076
2002-09-17 -23.974133 -18.572065 -28.444068 -73.247689
2002-09-18 -23.960098 -17.501733 -28.688742 -72.517694
2002-09-19 -24.455757 -17.695024 -27.874703 -71.885050
2002-09-20 -25.454776 -17.199965 -28.099897 -71.716991
2002-09-21 -25.888726 -18.274632 -27.933352 -71.547674
2002-09-22 -25.895522 -18.258257 -27.961358 -70.378859
2002-09-23 -24.579641 -17.517173 -28.735203 -69.626137
2002-09-24 -25.891772 -16.750934 -29.931475 -68.980136
2002-09-25 -26.511951 -17.871739 -30.520565 -68.626549
2002-09-26 -26.888491 -16.791559 -32.415504 -68.502537

1000 rows × 4 columns
