Behind the science of neural networks lies craft: by trial and error, the network and its environment must be massaged until they show some promise. Here's what I have come up with so far.
- Start with a simple non-NN entry strategy that is not profitable but yields a large number of trades, then use the NN to filter these in much the same way as a more traditional long-term moving average would be used. This idea came from http://articles.mql4.com/777. Right now I am using a cross of the 4-bar high for long entries and a cross of the 4-bar low for short entries, with both stop-loss and take-profit set to 100 pips.
- Use as inputs to the NN the differences between the close prices at bar shifts 1, 2, 3, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121, 144, 169, 196, 225, 256, 289 and 324, so the interval between successive closes lengthens gradually. Divide each difference by the square root of the time difference between the two bars. This idea comes from Mark Jurik.
- Normalise the input vector
- Use a standard fully connected 19-10-1 neural network
- Here's the key new idea: early stopping. Run the backpropagation learning against factset 1 (say three months of 15-minute data), and select the NN which delivers the best equity curve on the following three months of data. Typically, and in agreement with many NN texts, the NN will continue to train well past the point at which it generalises.
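To make the input construction concrete, here is a minimal sketch in Python/numpy of how I read the recipe above: take the closes at the listed shifts, difference successive pairs (20 shifts give the 19 inputs of the 19-10-1 net), scale each by the square root of the bar gap per Jurik, then normalise the vector to unit length. The unit-length normalisation and the consecutive-pair differencing are my assumptions about the exact scheme.

```python
import numpy as np

# Bar shifts whose closes feed the input vector (from the post).
SHIFTS = [1, 2, 3, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121, 144,
          169, 196, 225, 256, 289, 324]

def make_inputs(closes):
    """Build the 19-element NN input vector.

    `closes[s]` is assumed to be the close of the bar s bars ago
    (MetaTrader-style shift indexing). Each input is the difference
    between closes at consecutive shifts, scaled by the square root
    of the bar gap (the Jurik idea); the whole vector is then
    normalised to unit length (my assumption about "normalise").
    """
    diffs = []
    for a, b in zip(SHIFTS[:-1], SHIFTS[1:]):
        diffs.append((closes[a] - closes[b]) / np.sqrt(b - a))
    v = np.asarray(diffs)
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v
```

Since the largest shift is 324, the strategy needs at least 325 bars of history before it can produce an input vector.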
Update, 26 August: I'm now running a forward test on my new virtual host.
How do you decide on the ideal number of hidden nodes?
Hi skajake, I mention in my post that the use of neural networks is a craft, and the number of hidden nodes, and whether one hidden layer or two is better, is a matter of trial and error. Using 10 hidden nodes seems to produce a network which is just capable of training on a reasonably sized factset. I intend to optimise this when I can get a backtestable version working.