GPU acceleration feature

Recently, AI, machine learning, deep learning, and reinforcement learning have been very hot, and Nvidia's stock price has soared as a result. It is commonly believed that for massively parallel workloads such as machine learning, GPUs can be roughly 10x more cost-effective than CPUs, sometimes more, and NPUs and similar accelerators are coming in the future. As far as I know, some strategy researchers at hedge funds are using Python, CUDA, and TensorFlow on Nvidia RTX 4090 clusters, DGX A100, and DGX H100 systems for massively parallel computing.

In an interview on the Ali Caisy channel, you mentioned that you are considering some AI feature plans. Would it be possible to support Nvidia CUDA, and frameworks like TensorFlow or PyTorch, in SQ? Then we could buy an RTX 4090, A100, or H100 instead of a Threadripper 5995WX or two EPYCs, and GPUs leave far more headroom for scaling up hardware. This would let us develop smaller-timeframe strategies in SQ, make stricter ranking filters feasible, and speed up optimization. Everything is about finding more robust strategies more efficiently. Implementing this would be a major milestone for SQ, and we would no longer be at a disadvantage against strategy developers who are good at programming.
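To illustrate why strategy generation suits GPUs: a parameter sweep runs many independent backtests, which is an embarrassingly parallel workload. The sketch below is purely hypothetical (synthetic prices and a toy SMA-crossover rule; the names and logic are made up and have nothing to do with SQ's internal engine). It runs on CPU with NumPy, but because every step is array math, the same code could in principle be ported to GPU by swapping NumPy for CuPy or PyTorch tensors.

```python
import numpy as np

# Synthetic price series for the sketch (not real market data)
rng = np.random.default_rng(0)
prices = 100 + np.cumsum(rng.normal(0, 1, 1000))
returns = np.diff(prices) / prices[:-1]

def sma_signal(prices, window):
    """+1 when price is above its simple moving average, else 0 (flat)."""
    kernel = np.ones(window) / window
    sma = np.convolve(prices, kernel, mode="valid")
    aligned = prices[window - 1:]          # align prices with the SMA
    return (aligned > sma).astype(float)

def backtest(window):
    """Total return from holding long whenever yesterday's signal is +1."""
    sig = sma_signal(prices, window)[:-1]  # trade on the previous bar's signal
    strat_returns = sig * returns[window - 1:]
    return float(strat_returns.sum())

# Sweep many parameter values; each backtest is independent of the others,
# which is exactly the shape of work that maps onto thousands of GPU cores.
windows = range(5, 200, 5)
results = {w: backtest(w) for w in windows}
best = max(results, key=results.get)
```

The point of the sketch is the structure, not the numbers: each of the 39 backtests touches no shared state, so on a GPU they could all run at once instead of one after another.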

Attachments
No attachments
  • Votes +8
  • Project StrategyQuant X
  • Type Feature
  • Status New
  • Priority Normal

History

#1

binhsir

20.06.2023 14:44

Task created

#2

binhsir

20.06.2023 14:45
Voted for this task.
#3

Emmanuel

21.06.2023 01:19
Voted for this task.
#4

Alex

21.06.2023 08:59
Voted for this task.
#5

Chris G

21.06.2023 17:12
Voted for this task.
#6

mentaledge

21.06.2023 17:35
Voted for this task.
#7

Tim

22.06.2023 00:58
Voted for this task.
#8

Jabezz

25.06.2023 06:47
Voted for this task.
#9

binhsir

27.06.2023 15:02

Description changed:


#10

hydorh

28.06.2023 02:10
Voted for this task.
