Russell 3000 Constituent List Not Correct...
plieberman
Posts: 10
I ran the code below to write the Russell 3000 constituent symbols to a file. When I run it from June 23, 2016 to June 27, 2016 I get the same list of 1846 symbols each day. Definitely not right. How far back and how well maintained are the index constituent files? Alternatively, am I doing something wrong in my code? Thanks!
from cloudquant.interfaces import Strategy

class R3000_GUID(Strategy):

    # called when the strategy starts (aka before anything else)
    @classmethod
    def on_strategy_start(cls, md, service, account):
        cls.td = service.time_to_string(service.system_time, '%y-%m-%d')

    @classmethod
    def is_symbol_qualified(cls, symbol, md, service, account):
        R3000 = 'cc3ab113-f1e0-46a6-9a5a-ee3615f7600f'
        handle_list = map(service.symbol_list.get_handle, [R3000])
        for this_name, this_handle in zip(['R3000'], handle_list):
            if service.symbol_list.in_list(this_handle, symbol):
                service.write_file('R3000_%s.txt' % cls.td, symbol, end='\n', mode='append')
                return True
            else:
                return False

    # called when the strategy finishes (aka after everything else has stopped)
    @classmethod
    def on_strategy_finish(cls, md, service, account):
        pass
Comments
Why are you using brackets?
To preserve the index as a list in case one wanted to add other indices later. But the real problem is that historical constituents aren't being maintained or preserved. Try running this for yesterday's date and then for a date three months or a year ago: it seems to work for very recent dates, but the number of constituents drops off far too sharply as you go back in time. Not good.
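For what it's worth, here is a minimal sketch of that multi-index pattern, using only the service.symbol_list calls from the post above; the second GUID is a made-up placeholder, not a real list ID:

    @classmethod
    def is_symbol_qualified(cls, symbol, md, service, account):
        # map each index name to its symbol-list GUID; the second GUID is a placeholder
        index_guids = {
            'R3000': 'cc3ab113-f1e0-46a6-9a5a-ee3615f7600f',
            'OTHER_INDEX': '00000000-0000-0000-0000-000000000000',
        }
        for name, guid in index_guids.items():
            handle = service.symbol_list.get_handle(guid)
            if service.symbol_list.in_list(handle, symbol):
                # record the symbol under whichever index it matched first
                service.write_file('%s_%s.txt' % (name, cls.td), symbol, end='\n', mode='append')
                return True
        return False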
I've found the simulations are reliable going back only about 8 months.
For my own intellectual curiosity, I'm going to develop a sentiment strategy, but then probably cut my losses. The posts go back to July 2016, but this still feels very much in its infancy.
I've changed my view on this. I can appreciate why CloudQuant wouldn't want to devote significant resources to the Russell 3000, as models based on it could well carry liquidity premia from an overweight in small-cap stocks, premia that turn out to be illusory when trading in institutional size.
Hi @aj165602,
We plan to announce new information on the news sentiment data shortly. We have several strategies with allocations going into live trading in the next two weeks. Integrating new datasets at scale takes a large effort, and much of that effort has gone into creating re-usable patterns for onboarding new datasets. Stay tuned.
Hi @aj165602,
We are in the process of creating custom CloudQuant universes based on market cap. It may take a few months, though hopefully sooner, before we have a large number of market-cap-based universes to work with, along with additional data fields that will enable users to define their own universe queries. We'll keep you posted.
Hi @plieberman,
We unfortunately no longer have a vendor relationship for updating this list. We are migrating to a system of custom universes and adding fundamental fields that will enable users to build their own universe definitions. Thanks for the feedback; we appreciate it.
Thanks for the information, Superquant.
Hopefully, the work I've put into using the existing lists can still be put to good use.
Best,
Antony
Edited. Please see previous post!
For what it's worth, the approach I'm taking going forward is to use a universe of the top 1500 stocks. After running many tests, this seems to be at the limit of capturing liquid stocks, while providing the breadth I need for my own "basket" approach.
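As a rough illustration of how that could be approximated inside is_symbol_qualified (which sees one symbol at a time, so a liquidity threshold stands in for a true top-1500 ranking), here is a minimal sketch. The md.stat.avol and md.stat.prev_close fields are assumptions about what the market-data object exposes, and the $5M cutoff would need to be calibrated to land near 1500 names:

    @classmethod
    def is_symbol_qualified(cls, symbol, md, service, account):
        # Assumed fields: md.stat.avol (average daily volume) and md.stat.prev_close
        # (prior close) are placeholders for whatever liquidity statistics the API provides.
        avg_dollar_volume = md.stat.avol * md.stat.prev_close
        # Keep only names trading roughly $5M+ per day; tune the threshold until the
        # resulting universe is close to the desired ~1500 symbols.
        return avg_dollar_volume >= 5e6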
I'm looking forward to seeing how things develop on the data front. It would be great to be able to access alternative data sets from the md object.