The most practical way to do this is to sort by the entry condition in the code and replace strategies in the databank that have the same entry condition with ones that have a higher fitness/net profit (user-configurable replacement filter). Any strategies with the same entry condition but a lower fitness (or user-configurable filter) would not be added.
This would require code that compares what's in the current generation to what's already in the databank, so I didn't think the feature itself would be simple and wouldn't expect it for a while. However, if it's practical, I am definitely all for this.
Even slightly different SL, TP, or trailing stops could lead to a different strategy.
Take three major values that could lead to the conclusion that it's a duplicate:
- net profit
- number of trades
- ret/DD ratio
Set the threshold to ±5%; if the databank already contains a strategy whose values all match within the threshold, keep only the one with the highest ret/DD.
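A minimal sketch of this threshold-based duplicate check. All field names (`net_profit`, `trades`, `ret_dd`) and the databank's shape are assumptions for illustration; they are not SQ's actual API.

```python
def within_threshold(a, b, threshold=0.05):
    """True if b is within +-threshold (relative) of a."""
    if a == 0:
        return b == 0
    return abs(a - b) / abs(a) <= threshold

def is_duplicate(candidate, existing, threshold=0.05):
    """A candidate counts as a duplicate if all three stats match within the threshold."""
    return (within_threshold(existing["net_profit"], candidate["net_profit"], threshold)
            and within_threshold(existing["trades"], candidate["trades"], threshold)
            and within_threshold(existing["ret_dd"], candidate["ret_dd"], threshold))

def add_to_databank(databank, candidate, threshold=0.05):
    """Keep only the strategy with the highest ret/DD among near-duplicates."""
    for i, existing in enumerate(databank):
        if is_duplicate(candidate, existing, threshold):
            if candidate["ret_dd"] > existing["ret_dd"]:
                databank[i] = candidate  # replace the weaker near-duplicate
            return databank              # either way, don't store a second copy
    databank.append(candidate)
    return databank
```

Note the linear scan: each insertion compares against every stored strategy, which is cheap for three scalar stats but becomes the bottleneck argument against full equity-curve correlation later in this thread.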
Attachment STR.jpg added
HA - heiken ashi
HH - highest
LD - low daily
HD - high daily
But they are not similar at all and are not correlated, because of the filtering conditions, because of the money management they use, etc.
Attachment dupl.jpg added
In the old task we discussed this, and I think there is some "automatic dismissal" done, but in 108 I can't see it.
But really, it is not so simple to filter similar strategies. We shouldn't do it by source code, because a change in just one parameter could mean very different results.
Computing correlation is possible, but it would be extremely slow. Imagine you already have 1000 strategies in the databank and you have to compute the correlation with all of them every time you add a new strategy.
What I see as a possible option is to take a "fingerprint" using some of the stats values, like what hankeys suggested:
- net profit
- number of trades
- ret/DD ratio
and use a 5% threshold. I think these three numbers are a good approximation of the real equity curve.
Basically, SQ generates strategies and puts the good ones (those that pass all the dismissal filters) into GPU memory instead of putting them into the databank directly; then a dedicated piece of source code, totally detached from SQ (but triggered by SQ), invokes the GPU kernel to perform a reduction and generate the correlation matrix. The uncorrelated strategies go back into the databank at the end of the generations.
I have just created something similar for SQ 3.8.2, but it works in post-production; it would be more elegant to do it on the fly in SQX.
I invite you to think seriously about it. If you are interested, we can talk about it more technically. There are some conditions that make the GPU computation advantageous: few memory exchanges between RAM and VRAM (so basically you should transfer a big pool of strategies at once) and, of course, a minimum pool size.
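The batched idea above can be sketched as follows: pool many candidate equity curves, compute the whole N×N correlation matrix in one pass (the matrix product is the part a GPU kernel would accelerate; NumPy stands in for it here), then keep a mutually uncorrelated subset. The pool shape, the 0.9 cutoff, and the greedy selection are all illustrative assumptions, not the poster's actual implementation.

```python
import numpy as np

def correlation_matrix(pool):
    """Full N x N Pearson correlation matrix of the pooled equity curves."""
    z = pool - pool.mean(axis=1, keepdims=True)
    z /= np.linalg.norm(z, axis=1, keepdims=True)
    return z @ z.T   # one big matrix product - the batch-friendly, GPU-friendly step

def select_uncorrelated(pool, max_corr=0.9):
    """Greedy pass: keep a strategy only if it stays below max_corr with all kept ones."""
    corr = correlation_matrix(pool)
    kept = []
    for i in range(pool.shape[0]):
        if all(abs(corr[i, j]) < max_corr for j in kept):
            kept.append(i)
    return kept
```

Batching is exactly what satisfies the stated conditions: one large RAM-to-VRAM transfer of the pooled curves, one kernel launch for the matrix product, one transfer back.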
- make sure that 100% identical strategies don't go to the databank - same net profit, same number of trades, same ret/DD - because this doesn't work in 108
- after that, add some settings with the threshold I wrote about above
- decide what could be done next
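The first step above (rejecting 100% identical strategies) needs no threshold logic at all; a minimal sketch, assuming the same three illustrative stat fields as before, is an exact-match fingerprint kept in a set:

```python
def stats_key(strategy):
    """Exact-match fingerprint: (net profit, number of trades, ret/DD)."""
    return (strategy["net_profit"], strategy["trades"], strategy["ret_dd"])

class Databank:
    def __init__(self):
        self.strategies = []
        self._seen = set()          # fingerprints of already-stored strategies

    def add(self, strategy):
        key = stats_key(strategy)
        if key in self._seen:       # 100% duplicate - dismiss it
            return False
        self._seen.add(key)
        self.strategies.append(strategy)
        return True
```

Unlike the threshold check, this is O(1) per insertion, so it can run on every candidate with no noticeable cost; the ±5% settings can then be layered on top.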
Status changed from New to Fixed
We can maybe later improve it by checking correlation, etc.
rng - I understand that a GPU could be used to compute correlation; however, without filters, strategies can theoretically be saved to the databank every few milliseconds, not just once in a while. So I'm not sure it's possible to use the GPU and keep it synchronized with the actual databank state. But let's discuss it further. I'll contact you directly; I have your email.