pyspark.pandas.Series.ffill
- Series.ffill(axis=None, inplace=False, limit=None)
Synonym for DataFrame.fillna() or Series.fillna() with method=`ffill`.

Note
The current implementation of 'ffill' uses Spark's Window without specifying a partition specification. This leads to moving all data into a single partition on a single machine and could cause serious performance degradation. Avoid this method with very large datasets.
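If the data has a natural grouping key, one possible mitigation is to forward-fill within each group via GroupBy.ffill(), so the fill is scoped per group. The sketch below uses hypothetical column names and is only an illustration, not an official recommendation; whether it avoids the global Window depends on the data and the implementation.

>>> import pyspark.pandas as ps
>>> psdf = ps.DataFrame({'device': ['a', 'a', 'b', 'b'],      # hypothetical grouping key
...                      'reading': [1.0, None, 5.0, None]})  # values to forward-fill
>>> # Fill gaps only within each device group rather than across the whole dataset.
>>> filled = psdf.groupby('device')['reading'].ffill()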
- Parameters
- axis: {0 or `index`}
1 and columns are not supported.
- inplace: boolean, default False
Fill in place (do not create a new object)
- limit: int, default None
If method is specified, this is the maximum number of consecutive NaN values to forward/backward fill. In other words, if there is a gap with more than this number of consecutive NaNs, it will only be partially filled. If method is not specified, this is the maximum number of entries along the entire axis where NaNs will be filled. Must be greater than 0 if not None. A sketch of limit in action appears at the end of the Examples below.
- Returns
- DataFrame or Series
DataFrame or Series with NA entries filled.
Examples
>>> psdf = ps.DataFrame({
...     'A': [None, 3, None, None],
...     'B': [2, 4, None, 3],
...     'C': [None, None, None, 1],
...     'D': [0, 1, 5, 4]
...     },
...     columns=['A', 'B', 'C', 'D'])
>>> psdf
     A    B    C  D
0  NaN  2.0  NaN  0
1  3.0  4.0  NaN  1
2  NaN  NaN  NaN  5
3  NaN  3.0  1.0  4
Propagate non-null values forward.
>>> psdf.ffill()
     A    B    C  D
0  NaN  2.0  NaN  0
1  3.0  4.0  NaN  1
2  3.0  4.0  NaN  5
3  3.0  3.0  1.0  4
For Series
>>> psser = ps.Series([2, 4, None, 3])
>>> psser
0    2.0
1    4.0
2    NaN
3    3.0
dtype: float64

>>> psser.ffill()
0    2.0
1    4.0
2    4.0
3    3.0
dtype: float64
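The limit parameter caps how many consecutive missing values are forward-filled. The following is a hedged sketch assuming pandas-style semantics for limit: with limit=1, only the first NaN of the two-NaN gap is filled.

>>> import pyspark.pandas as ps
>>> psser = ps.Series([1.0, None, None, 3.0])
>>> psser.ffill(limit=1)
0    1.0
1    1.0
2    NaN
3    3.0
dtype: float64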