Commit d271812

MNT: Rerender doc/examples with updated jupytext

1 parent f762c5b

5 files changed: +98 −72 lines

doc/examples/Multiple Time Frames.py
14 additions, 12 deletions
@@ -3,15 +3,16 @@
 # jupytext:
 #   text_representation:
 #     extension: .py
-#     format_name: light
-#     format_version: '1.5'
-#     jupytext_version: 1.5.1
+#     format_name: percent
+#     format_version: '1.3'
+#     jupytext_version: 1.17.1
 # kernelspec:
 #   display_name: Python 3
 #   language: python
 #   name: python3
 # ---

+# %% [markdown]
 # Multiple Time Frames
 # ============
 #
@@ -32,7 +33,7 @@
 # [Tulipy](https://tulipindicators.org),
 # but among us, let's introduce the two indicators we'll be using.

-# +
+# %%
 import pandas as pd

@@ -52,8 +53,7 @@ def RSI(array, n):
     return 100 - 100 / (1 + rs)


-# -
-
+# %% [markdown]
 # The strategy roughly goes like this:
 #
 # Buy a position when:
@@ -66,7 +66,7 @@ def RSI(array, n):
 #
 # We need to provide bars data in the _lowest time frame_ (i.e. daily) and resample it to any higher time frame (i.e. weekly) that our strategy requires.

-# +
+# %%
 from backtesting import Strategy, Backtest
 from backtesting.lib import resample_apply

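The hunk above restates the core idea of the example: provide bars in the lowest time frame and resample up to whatever the strategy needs. What `resample_apply()` does can be approximated with plain pandas (a sketch with synthetic data, not the library's actual implementation, which also takes care of details like avoiding lookahead):

```python
import numpy as np
import pandas as pd

# Thirty days of synthetic daily closes (stand-in data for illustration).
idx = pd.date_range('2024-01-01', periods=30, freq='D')
close = pd.Series(np.arange(30, dtype=float), index=idx)

# Downsample to weekly bars, compute the indicator there (here just a
# mean), then project back onto the daily index, forward-filling so each
# day sees the value of its most recent weekly bar.
weekly = close.resample('W-FRI').mean()
daily_view = weekly.reindex(idx, method='ffill')
```

Days before the first completed week come back as NaN, which is the sensible behavior for an indicator that has not yet accumulated a full period.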
@@ -112,34 +112,36 @@ def next(self):
         # close the position, if any.
         elif price < .98 * self.ma10[-1]:
             self.position.close()
-# -

+# %% [markdown]
 # Let's see how our strategy fares replayed on nine years of Google stock data.

-# +
+# %%
 from backtesting.test import GOOG

 backtest = Backtest(GOOG, System, commission=.002)
 backtest.run()
-# -

+# %% [markdown]
 # Meager four trades in the span of nine years and with zero return? How about if we optimize the parameters a bit?

-# +
+# %%
 # %%time

 backtest.optimize(d_rsi=range(10, 35, 5),
                   w_rsi=range(10, 35, 5),
                   level=range(30, 80, 10))
-# -

+# %%
 backtest.plot()

+# %% [markdown]
 # Better. While the strategy doesn't perform as well as simple buy & hold, it does so with significantly lower exposure (time in market).
 #
 # In conclusion, to test strategies on multiple time frames, you need to pass in OHLC data in the lowest time frame, then resample it to higher time frames, apply the indicators, then resample back to the lower time frame, filling in the in-betweens.
 # Which is what the function [`backtesting.lib.resample_apply()`](https://kernc.github.io/backtesting.py/doc/backtesting/lib.html#backtesting.lib.resample_apply) does for you.

+# %% [markdown]
 # Learn more by exploring further
 # [examples](https://kernc.github.io/backtesting.py/doc/backtesting/index.html#tutorials)
 # or find more framework options in the

doc/examples/Parameter Heatmap & Optimization.py
27 additions, 18 deletions
@@ -4,15 +4,16 @@
 # jupytext:
 #   text_representation:
 #     extension: .py
-#     format_name: light
-#     format_version: '1.5'
-#     jupytext_version: 1.16.6
+#     format_name: percent
+#     format_version: '1.3'
+#     jupytext_version: 1.17.1
 # kernelspec:
 #   display_name: Python 3 (ipykernel)
 #   language: python
 #   name: python3
 # ---

+# %% [markdown]
 # Parameter Heatmap
 # ==========
 #
@@ -25,16 +26,18 @@
 # [TA-Lib](https://github.com/mrjbq7/ta-lib) or
 # [Tulipy](https://tulipindicators.org).

+# %%
 from backtesting.test import SMA

+# %% [markdown]
 # Our strategy will be a similar moving average cross-over strategy to the one in
 # [Quick Start User Guide](https://kernc.github.io/backtesting.py/doc/examples/Quick%20Start%20User%20Guide.html),
 # but we will use four moving averages in total:
 # two moving averages whose relationship determines a general trend
 # (we only trade long when the shorter MA is above the longer one, and vice versa),
 # and two moving averages whose cross-over with daily _close_ prices determine the signal to enter or exit the position.

-# +
+# %%
 from backtesting import Strategy
 from backtesting.lib import crossover

@@ -84,16 +87,15 @@ def next(self):
             self.position.close()


-# -
-
+# %% [markdown]
 # It's not a robust strategy, but we can optimize it.
 #
 # [Grid search](https://en.wikipedia.org/wiki/Hyperparameter_optimization#Grid_search)
 # is an exhaustive search through a set of specified sets of values of hyperparameters. One evaluates the performance for each set of parameters and finally selects the combination that performs best.
 #
 # Let's optimize our strategy on Google stock data using _randomized_ grid search over the parameter space, evaluating at most (approximately) 200 randomly chosen combinations:

-# +
+# %%
 # %%time

 from backtesting import Backtest
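The randomized grid search this hunk describes — exhaustively enumerate the parameter grid, but evaluate only a random subset of it — can be sketched in a few lines. The objective below is a toy stand-in (the real one would be a `Backtest.run()` statistic), and all names are this example's own, not a backtesting.py API:

```python
import random
from itertools import product

# Toy objective mapping a parameter combination to a score.
def score(n1, n2, n_enter, n_exit):
    return -(n1 - 30) ** 2 - (n2 - 100) ** 2 - n_enter - n_exit

# Full cartesian grid of candidate parameter values.
grid = list(product(range(10, 60, 10),    # n1
                    range(60, 140, 20),   # n2
                    range(5, 20, 5),      # n_enter
                    range(5, 20, 5)))     # n_exit

# Randomized grid search: score only a fixed-size random sample of the
# grid instead of every combination, then keep the best of the sample.
random.seed(0)
sample = random.sample(grid, 20)
best = max(sample, key=lambda p: score(*p))
```

This trades a guaranteed optimum for a large cut in evaluations, which is exactly the bargain `max_tries=200` makes over the full parameter space.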
@@ -112,32 +114,38 @@ def next(self):
     max_tries=200,
     random_state=0,
     return_heatmap=True)
-# -

+# %% [markdown]
 # Notice `return_heatmap=True` parameter passed to
 # [`Backtest.optimize()`](https://kernc.github.io/backtesting.py/doc/backtesting/backtesting.html#backtesting.backtesting.Backtest.optimize).
 # It makes the function return a heatmap series along with the usual stats of the best run.
 # `heatmap` is a pandas Series indexed with a MultiIndex, a cartesian product of all permissible (tried) parameter values.
 # The series values are from the `maximize=` argument we provided.

+# %%
 heatmap

+# %% [markdown]
 # This heatmap contains the results of all the runs,
 # making it very easy to obtain parameter combinations for e.g. three best runs:

+# %%
 heatmap.sort_values().iloc[-3:]

+# %% [markdown]
 # But we use vision to make judgements on larger data sets much faster.
 # Let's plot the whole heatmap by projecting it on two chosen dimensions.
 # Say we're mostly interested in how parameters `n1` and `n2`, on average, affect the outcome.

+# %%
 hm = heatmap.groupby(['n1', 'n2']).mean().unstack()
 hm = hm[::-1]
 hm

+# %% [markdown]
 # Let's plot this table as a heatmap:

-# +
+# %%
 # %matplotlib inline

 import matplotlib.pyplot as plt
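The `groupby(...).mean().unstack()` projection in the hunk above is worth seeing on concrete data: start from a MultiIndex Series like the optimizer's `heatmap`, average away the dimensions you don't care about, and pivot one of the rest into columns. A miniature sketch with made-up parameter values:

```python
import pandas as pd

# Miniature heatmap stand-in: a float Series indexed by a MultiIndex over
# tried parameter values (names and values are illustrative only).
idx = pd.MultiIndex.from_product([[10, 20], [30, 40], [1, 2]],
                                 names=['n1', 'n2', 'n_enter'])
heatmap = pd.Series(range(8), index=idx, dtype=float)

# Project onto (n1, n2): average over the remaining dimension(s), then
# pivot n2 into columns -- the same groupby/mean/unstack as in the diff.
hm = heatmap.groupby(['n1', 'n2']).mean().unstack()
```

The result is a 2-D table (rows `n1`, columns `n2`) ready to be drawn with `imshow`, as the following cells of the notebook do.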
@@ -151,8 +159,8 @@ def next(self):
     ax.set_ylabel('n1'),
     ax.figure.colorbar(im, ax=ax),
 )
-# -

+# %% [markdown]
 # We see that, on average, we obtain the highest result using trend-determining parameters `n1=30` and `n2=100` or `n1=70` and `n2=80`,
 # and it's not like other nearby combinations work similarly well — for our particular strategy, these combinations really stand out.
 #
@@ -163,13 +171,13 @@ def next(self):
 #
 # <a id=plot-heatmaps></a>

-# +
+# %%
 from backtesting.lib import plot_heatmaps


 plot_heatmaps(heatmap, agg='mean')
-# -

+# %% [markdown]
 # ## Model-based optimization
 #
 # Above, we used _randomized grid search_ optimization method. Any kind of grid search, however, might be computationally expensive for large data sets. In the follwing example, we will use
@@ -179,12 +187,12 @@ def next(self):
 #
 # So, with `method="sambo"`:

-# +
+# %%
 # %%capture

 # ! pip install sambo # This is a run-time dependency

-# +
+# %%
 # #%%time

 stats, heatmap, optimize_result = backtest.optimize(
@@ -199,10 +207,11 @@ def next(self):
     random_state=0,
     return_heatmap=True,
     return_optimization=True)
-# -

+# %%
 heatmap.sort_values().iloc[-3:]

+# %% [markdown]
 # Notice how the optimization runs somewhat slower even though `max_tries=` is lower. This is due to the sequential nature of the algorithm and should actually perform quite comparably even in cases of _much larger parameter spaces_ where grid search would effectively blow up, likely reaching a better optimum than a simple randomized search would.
 # A note of warning, again, to take steps to avoid
 # [overfitting](https://en.wikipedia.org/wiki/Overfitting)
@@ -214,18 +223,18 @@ def next(self):
 #
 # Note, because SAMBO internally only does _minimization_, the values in `optimize_result` are negated (less is better).

-# +
+# %%
 from sambo.plot import plot_objective

 names = ['n1', 'n2', 'n_enter', 'n_exit']
 _ = plot_objective(optimize_result, names=names, estimator='et')

-# +
+# %%
 from sambo.plot import plot_evaluations

 _ = plot_evaluations(optimize_result, names=names)
-# -

+# %% [markdown]
 # Learn more by exploring further
 # [examples](https://kernc.github.io/backtesting.py/doc/backtesting/index.html#tutorials)
 # or find more framework options in the
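The sign convention noted in the last hunk — SAMBO only minimizes, so `optimize_result` holds negated values — follows the usual bridge between a maximize-style objective and a minimize-only optimizer. A self-contained sketch (all names here are the example's own, not SAMBO's or backtesting.py's API):

```python
# Score we want to maximize; a stand-in for "run the backtest and read
# the maximized statistic".
def maximized_stat(n1, n2):
    return -(n1 - 30) ** 2 - (n2 - 100) ** 2 + 5.0

# What a minimize-only optimizer sees: the negation, so "less is better".
def minimizer_objective(params):
    n1, n2 = params
    return -maximized_stat(n1, n2)

# At the true optimum the minimizer reports the negated value, which is
# why the values stored in a result object like optimize_result come out
# negated.
reported = minimizer_objective((30, 100))
```

Reading such a result therefore means flipping the sign back (or just remembering that the smallest stored value marks the best run).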

0 commit comments
