
See also CS 461 Winter 2010


The market can stay irrational longer than you can stay solvent.
— John Maynard Keynes

An economist and his friend come upon a $100 bill. As the friend reaches to pick it up, the economist says, "Don’t bother; if it were a genuine $100 bill, someone would already have picked it up."
— Andrew Lo, The efficient market hypothesis

Economics … has not … come to grips with … the inordinate practical importance of a few extreme events.
— Benoit Mandelbrot

Anything that can't go on forever won't.
— Herb Stein

The efficient market hypothesis says that the stock market is always fairly priced, that the price of a stock is the best guess about its value given current information. But too many traders and speculators make money in the stock market based only on market fluctuations that are relatively independent of economic news. A sentence from a review of Animal Spirits, a recent book by Akerlof and Shiller, suggests one reason.

Animal spirits are human emotions; they can’t be turned off. Unchecked, they drive the economy into misbegotten booms and disastrous busts.

Stock prices are driven not only by economic news but also by human emotion. It seems reasonable, then, that it should be possible to write software that can notice some of those human effects and profit from them.

Besides this abstract rationale, there are good empirical reasons to think that successful speculation software can be written. Below are a number of references (mostly available online) that explore quantitative prediction strategies.


Agent based modeling

Basic information

Data: prices, charts, etc.

http://finance.yahoo.com/q/hp?s=StockSymbol 
Replace StockSymbol with the symbol of the stock you are looking for.
  • To get the S&P500, use ^GSPC—or SPY for the ETF.
  • To get the NASDAQ, use ^IXIC—or QQQQ for the ETF.
  • To get the Russell 2000, use ^RUT—or IWM for the ETF.
Set the time period you want. Then click Download to Spreadsheet at the bottom of the price listing.
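For programmatic use, the downloaded spreadsheet is an ordinary CSV file with columns Date, Open, High, Low, Close, Volume, and Adj Close. Here is a minimal Python sketch for loading such a file (the file name is hypothetical; the column layout is assumed to match the standard Yahoo download):

<python>
import csv

def load_yahoo_csv(path):
    """Load a Yahoo Finance historical-prices CSV.

    Assumes the standard download columns:
    Date, Open, High, Low, Close, Volume, Adj Close.
    """
    rows = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            rows.append({"date": row["Date"],
                         "close": float(row["Close"]),
                         "adj_close": float(row["Adj Close"])})
    rows.reverse()  # Yahoo lists newest first; put in chronological order
    return rows

# Hypothetical usage:
# prices = load_yahoo_csv("gspc.csv")
# closes = [p["adj_close"] for p in prices]
</python>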

Exchanges

Basics

General information

Special data reports

  • Commitments of Traders (COT) reports. It appears that these reports (from the U.S. Commodity Futures Trading Commission) can be used for timing purposes.

Basic "Trading Indicators" (technical analysis)

  • Catallacticanalysis provides full reports on some basic strategies as well as a few spreadsheets.
  • Correlations (a rolling-correlation sketch follows this list).
    • To quantify "divergences."
    • To find hedging pairs.
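One simple way to act on both bullets is a rolling correlation of two return series: a persistently high value suggests a candidate hedging pair, and a sudden drop flags a divergence. A Python sketch (the default window length is an arbitrary choice, not a recommendation):

<python>
import math

def rolling_correlation(xs, ys, window=20):
    """Pearson correlation of two equal-length return series over a
    sliding window; returns one value per window position.
    Assumes each window has nonzero variance."""
    corrs = []
    for i in range(window, len(xs) + 1):
        x, y = xs[i - window:i], ys[i - window:i]
        mx, my = sum(x) / window, sum(y) / window
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        var_x = sum((a - mx) ** 2 for a in x)
        var_y = sum((b - my) ** 2 for b in y)
        corrs.append(cov / math.sqrt(var_x * var_y))
    return corrs
</python>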

Blogs, etc.

Bespoke Investment Group

CSS Analytics by David Varadi

"Consumer Confidence is a major driver of relative strength strategy returns. When the consumer is optimistic, the winners beat the losers in grand fashion, when the consumer is pessimistic the reverse happens. … Standard relative strength strategies perform best once a rally is well underway, and poorly at extreme low points in the market. …
"It therefore stands to reason that the only real inefficiencies in the stock market are driven by the persistency of human beings to behave the same way over time and across economic regimes. We are the real constant force in the markets, and while its nearly impossible to outsmart each other in rational times (or rational subject areas),we behave like lemming fools at irrational times–long enough to be taken advantage of by our more emotionally-balanced brethren. As human beings our collective moods are like the waves of the sea: building slowly and reliably over long periods of time until reaching peak height in a few short yet powerful moments, only to collapse under its own unstable weight.The constant economic/stock market patterns of boom and bust will likely always persist until scientists can figure out how to change our primitive brains."
  • In this post David Varadi notes that VIX and S&P changes can be useful when they are both extreme. This illustrates how combining two measures sometimes provides information that each one individually doesn't provide.
The AggZ strategy returned a 20% CAGR over the last 2000 bars using dividend-adjusted data on the SPY. The calculation is dead simple:
AggZ = (-1 × (10-day z-score) + (200-day z-score)) / 2
where z-score = (close − SMA(close, n)) / StdDev(close, n)
Buy above 0, sell below 0 as a basic strategy.
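A minimal Python sketch of the AggZ rule as stated above (the close series is assumed chronological with at least 200 entries; the post does not say whether the sample or population standard deviation is meant, so the sample version is used here):

<python>
import statistics

def z_score(closes, n):
    """Z-score of the latest close against the last n closes."""
    window = closes[-n:]
    return (closes[-1] - statistics.mean(window)) / statistics.stdev(window)

def agg_z(closes):
    """AggZ = (-1 * (10-day z-score) + (200-day z-score)) / 2."""
    return (-z_score(closes, 10) + z_score(closes, 200)) / 2

def basic_signal(closes):
    """Basic strategy: buy above 0, sell below 0."""
    return "buy" if agg_z(closes) > 0 else "sell"
</python>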

Not all markets are the same

There is truly a great deal of idiosyncrasy in how different vehicles behave. Gold does not behave like the S&P500—it never has, and the differences are substantial. Oil does not behave like gold or the S&P500; again, substantial differences. This leads to the interesting conclusion that if such large markets can behave so differently despite sharing common sources of investment flows, it stands to reason that smaller markets or individual stocks may have even more divergent behavior. In fact this is actually true—the stock market is like a great rainforest of different species in this respect. Smaller-cap stocks have such unique and bizarre behavior that it is unfair to even place them in a sector or category.
This has some interesting implications: 1) a lot of the "noise" we see when searching for robust effects across stocks or markets is actually caused by real systematic differences between markets, and therefore many ideas that we discard would in fact show significant promise after controlling for those differences.
The Livermore Index is but one simple example—it is NOT just a list of high-momentum stocks; it is also a list of stocks that have a historical tendency to trend in a meaningful way. That is the secret sauce, not the relative strength algorithm, which is elegant but ultimately fairly simple. In fact, if you were to test DV2 or RSI2 on the top 10 Livermore stocks, you would find that you lose money over a 10-year period! Surprisingly, this same factor works across global markets and stocks—it is a universal factor used to identify a very specific idiosyncrasy. This means that if you wanted to trade these stocks using a 5-day or 10-day breakout, that would be a winning system. If you have a trend-trader mentality, then this is your dream situation in an increasingly mean-reversion-dominated marketplace. Heck, even a 1-day breakout on these stocks is highly profitable with an exit on the first down day.
Now here is where it gets interesting, and a little like a puzzle in astrophysics: if in a large index like the Nasdaq 100 we can identify the stocks most likely to follow through, then we should also be able to identify the stocks most likely to mean-revert. Therefore, given the current relative proportion of follow-through stocks that have risen or fallen today and the relative proportion of mean-reverting stocks that have risen or fallen today, we can predict more accurately whether the index itself is likely to follow through or mean-revert. If we can do this for the Nasdaq 100, we can do this for any aggregate index, whether it is the S&P500 or ANY sector or country ETF.
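As a sketch of that idea, suppose each index member has already been classified as a follow-through or a mean-reverting stock (building that classification is the hard part, and nothing here is Varadi's actual method). A day's index prediction could then be a simple vote (all names hypothetical):

<python>
def predict_index(members):
    """Toy vote on tomorrow's index direction.

    members: list of (style, rose_today) pairs, where style is
    "follow" or "revert" and rose_today is True if the stock rose.
    A follow-through stock that rose votes for more upside; a
    mean-reverting stock that rose votes for a pullback, and the
    reverse for decliners.
    """
    score = 0
    for style, rose_today in members:
        vote = 1 if rose_today else -1
        score += vote if style == "follow" else -vote
    return "follow through (up)" if score > 0 else "mean revert (down)"
</python>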
Markets within the market with more statistics—and still more.
As you can see, when the long-term trend is DOWN, the standard reverse daily follow-through approach is vastly superior in absolute and risk-adjusted returns to the two-day follow-through strategy. However, when the long-term trend is UP, the two-day reverse follow-through strategy is superior in both respects. Note that most combinations of longer reverse follow-through lengths and/or indicators have performed more consistently when the long-term trend is UP. The only logical explanation for this is that the cycle lengths increase as a function of lower volatility and more consistent momentum/trend. The solution is clearly to shift to longer cycles in this environment.

Trend Strength Index (TSI)

See also Engineering Returns. Formula.

Speculative Demand Ratio

Nice first anniversary post

History and credits.

DVI formula at MarketSci

CXO Advisory Group

I finally realized what "CXO" means. It stands for "C_O" (Chief X Officer of a corporation), where the missing middle character "X" might be "E" (Chief Executive Officer), "F" (Chief Financial Officer), "O" (Chief Operating Officer), "I" (Chief Information Officer), etc.

Steve LeCompte publishes studies on various investing strategies. This blog has a broader view than many of the others. It publishes many more studies over a broader range of issues. It also is less useful as a guide to day-to-day speculation. But the results it publishes are often quite interesting. Look, for example, at the trading indicators, momentum investing, and Purifying Stock Market Sentiment Indicators studies.

Also, look at the following overview pages.

  • This fairly long page discusses whether or not it is reasonable to imagine that one can take advantage of inefficiencies in the market. Steve uses an analogy to Maxwell's Demon to question whether it is ever possible to separate good assets from bad ones—as Maxwell's demon was supposed to separate fast-moving molecules from slow-moving ones, thereby creating a temperature difference between hot and cold chambers that would provide a source of free energy.
  • This page lists all the CXO technical studies. It seems to be updated when new studies are done.

Engineering returns (Frank Hassler)

Trend Strength Index (TSI)

Formula. <java>

/*
   TSI = Average(Average(Ratio, 10), 100)
      where Ratio = Abs(Today's Close - Close 10 Days Ago) / (10-Day Average True Range)

   AmiBroker code
*/

function TSI() {
    // 10-day price change, normalized by the 10-day average true range
    Ratio = abs(Close - Ref(Close, -10)) / ATR(10);
    // Smooth twice: a 10-day MA of the ratio, then a 100-day MA of that
    return MA(MA(Ratio, 10), 100);
}

</java>
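For readers without AmiBroker, here is the same formula as a Python sketch. One caveat: AmiBroker's ATR uses Wilder smoothing, while this sketch uses a simple average of the true range, so the numbers will differ slightly.

<python>
def sma(values, n):
    """Simple moving average; entry i covers values[i .. i+n-1]."""
    return [sum(values[i:i + n]) / n for i in range(len(values) - n + 1)]

def true_range(high, low, close):
    """Daily true range from high/low/close series."""
    tr = [high[0] - low[0]]
    for i in range(1, len(close)):
        tr.append(max(high[i] - low[i],
                      abs(high[i] - close[i - 1]),
                      abs(low[i] - close[i - 1])))
    return tr

def tsi(high, low, close):
    """TSI = SMA(SMA(Ratio, 10), 100),
    Ratio = |Close - Close 10 bars ago| / ATR(10)."""
    atr10 = sma(true_range(high, low, close), 10)  # atr10[k] covers bars k..k+9
    ratio = [abs(close[i] - close[i - 10]) / atr10[i - 9]
             for i in range(10, len(close))]
    return sma(sma(ratio, 10), 100)
</python>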

ETF Screen

MarketSci

Michael Stokes of MarketSci blogs about patterns he explores in stock prices. This, in my opinion, is one of the best blogs. See his about page for an overview. Look also at some of his top posts to get a feeling for his approach. Besides the blog, MarketSci offers a for-fee service. What is unusual about this service is that its results are audited by an external auditor. I believe the results, which are quite impressive. The YK(B) strategy in particular has done amazingly well. All of the MarketSci strategies take positions in market indices (or leveraged mutual fund proxies for them) at the end of the trading day. Positions are adjusted daily. In other words, the MarketSci strategies attempt to predict one day in advance. And they seem to do quite well.

Strategy rules: if the VIX will close at least 5% higher than the 63-day (1 quarter) moving average of the VIX at today’s close, go long the S&P 500 and short the Russell 2000 at the close. If the VIX will close at least 5% lower than the 63-day average, reverse the pair and go long the Russell 2000 and short the S&P 500 at the close. If the VIX will close within this 5% band, the pair is unpredictable so move to cash.
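A minimal Python sketch of the rule as stated (chronological VIX closes, today last; the return strings are just labels):

<python>
def vix_pairs_position(vix_closes):
    """MarketSci VIX-based pairs rule, per the description above.

    vix_closes: at least 63 chronological VIX closing values.
    """
    sma63 = sum(vix_closes[-63:]) / 63.0
    today = vix_closes[-1]
    if today >= sma63 * 1.05:
        return "long S&P 500, short Russell 2000"
    if today <= sma63 * 0.95:
        return "long Russell 2000, short S&P 500"
    return "cash"  # inside the 5% band: unpredictable
</python>

Note that the rule as stated depends on today's close while the trade is also placed at the close, so any real implementation has to estimate the closing value shortly before the bell.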


Images from MarketSci: VixPairsGraph.gif and VixPairs.gif.

VIX-based Pairs (Russell 2000 and the S&P 500) Trading (Market Neutral) Strategy.

Very impressive! Are there other pairs that might do even better?
This strategy appears in the following places.
My first reaction was Fantastic! But looking at the graph, it appears that most of the gain came from 1991 – 1992 and 1998 – 2003. (These are rough eyeball estimates.) Most of the rest of the time, the curve is relatively flat. Of course that’s better than losing money, but it’s not a steady significant winner.
Michael Stokes answers (sort of) those concerns in this post.

PuppetMaster trading

Quantifiable Edges

Rob Hanna of Quantifiable Edges also looks for patterns. His approach is to look for sequences of days (called setups) that in the past were statistically significant in predicting future days. To get a better feel for his work look at his blog. He too sells a service. As far as I know it isn't audited. But based on his blog entries, it appears that his studies have some value.

Quantum financier

Trading the odds suspended

Frank (last name not given) at Trading the Odds takes an approach similar to Rob Hanna's but both more ad hoc (each day is unique) and more focused on day-to-day results. See his About page for more about his approach. Also read some of his blog pieces (most of which resemble each other in their basic structure) for some examples. Frank does not sell any services. Based on the results I've observed informally, Frank's next-day predictions are quite often on the mark.

He is beginning to look at more system-like approaches.

Misc. blogs

This commentary was released mid-day 1/8/10 after the December jobs report showed a loss of 85,000 jobs. It is very sober with respect to the market and its likely overvaluation. Nonetheless, the market was higher that day. Why was that? What does that suggest about future market movement?
  • Cobra of Cobra's Market View does a much less formal version of the kind of analysis previous blogs do more formally.
  • Mark Hulbert does some interesting longer term analysis, especially with respect to contrary opinions.
  • The last shall be first. Consider the results of a hypothetical model portfolio that each year exactly mimicked the newsletter model portfolio that had the best return in the previous calendar year, according to the Hulbert Financial Digest. Over the last 19 years, this portfolio produced a 21.7% annualized loss.
  • Brian Rich of Weiss Research. Velocity of money leads market.


Books

Be sure this isn't just a narrative of the melt-down.
  • Gintis, Herbert. Almost any book that he likes or wrote.
Before spending too much time on this book make sure it isn't just a narrative of the melt-down and that it includes enough information about the actual "quant" models that one might be able to implement them.
A copy is available online.

Bubbles and crashes

Sornette, Didier

We introduce the concept of “negative bubbles” as the mirror image of standard financial bubbles, in which positive feedback mechanisms may lead to transient accelerating price falls. To model these negative bubbles, we adapt the Johansen-Ledoit-Sornette (JLS) model of rational expectation bubbles with a hazard rate describing the collective buying pressure of noise traders. The price fall occurring during a transient negative bubble can be interpreted as an effective random downpayment that rational agents accept to pay in the hope of profiting from the expected occurrence of a possible rally. We validate the model by showing that it has significant predictive power in identifying the times of major market rebounds. This result is obtained by using a general pattern recognition method which combines the information obtained at multiple times from a dynamical calibration of the JLS model. Error diagrams, Bayesian inference and trading strategies suggest that one can extract genuine information and obtain real skill from the calibration of negative bubbles with the JLS model. We conclude that negative bubbles are in general predictably associated with large rebounds or rallies, which are the mirror images of the crashes terminating standard bubbles.
  • (Video) "Financial crises and risk management". The scientific study of complex systems has transformed a wide range of disciplines in recent years, enabling researchers in both the natural and social sciences to model and predict phenomena as diverse as the failure of materials, earthquakes, global warming, demographic patterns, and financial crises. In this talk, Didier Sornette describes a simple, powerful, and general theory of how, why, and when stock markets crash. Most attempts to explain market failures seek to pinpoint triggering mechanisms that occur hours, days, or weeks before the collapse. Sornette proposes a radically different view: the underlying cause can be sought months and even years before the abrupt, catastrophic event in the build-up of cooperative speculation, into an accelerating rise of the market price, otherwise known as a "bubble." This view implies the possibility of predicting such events and Sornette will describe the current status of predictions that he and his collaborators have made for events in various markets.
  • Scientific Commons entries
  • "The Financial Bubble Experiment: advanced diagnostics and forecasts of bubble terminations"
On 2 November 2009, the Financial Bubble Experiment was launched within the Financial Crisis Observatory (FCO) at ETH Zurich. In that initial report, we diagnosed and announced three bubbles on three different assets. In this latest release of 23 December 2009 in this ongoing experiment, we add a diagnostic of a new bubble developing on a fourth asset.

Market Bubbles in the Laboratory

Abstract: Trading at prices above the fundamental value of an asset, i.e. a bubble, has been verified and replicated in laboratory asset markets for the past seven years. To date, only common group experience provides minimal conditions for common investor sentiment and trading at fundamental value. Rational expectations models do not predict the bubble and crash phenomena found in these experimental markets; such models yield only equilibrium predictions and do not articulate a dynamic process that converges to fundamental value with experience. The dynamic models proposed by Caginalp et al. do an excellent job of predicting price patterns after calibration with a previous experimental bubble, given the initial conditions for a new bubble and its controlled fundamental value. Several extensions of this basic laboratory asset market have recently been undertaken which allow for margin buying, short selling, futures contracting, limit price change rules and a host of other changes that could affect price formation in these asset markets. This paper reviews the results of 72 laboratory asset market experiments which include experimental treatments for dampening bubbles that are suggested by rational expectations theory or popular policy prescriptions.
Abstract: Spot asset trading is studied in an environment in which all investors receive the same dividend from a known probability distribution at the end of each of T = 15 (or 30) trading periods. Fourteen of twenty-two experiments exhibit price bubbles followed by crashes relative to intrinsic dividend value. When traders are experienced this reduces, but does not eliminate, the probability of a bubble.
WSJ Report on the paper.

Cautions

David Varadi

Great post on fooling oneself. (Richard Feynman said that science is what we have learned about how not to fool ourselves.)

Readers are strongly encouraged to read the following links–I don’t want anyone to miss these very eloquent and thought provoking discussions on the topic:

What are "degrees of freedom"? This is a very important and highly overlooked topic, and Brenda Jubin, a true market scholar, of Reading the Markets does the best job of explaining this concept that I have seen in print: http://readingthemarkets.blogspot.com/2010/01/degrees-of-freedom-kiss.html

Jez Liberty of Automated Trading Systems, who has been posting some very good work lately, also highlights two very subtle points: 1) there is no magic metric for measuring performance; 2) traders are better off tailoring these metrics to their own personality and preferences, just as they would a trading system. http://www.automated-trading-system.com/bliss-function-quantify-trading-system-objective/

Last but not least, TopTick, a frequent poster in the community section of my forums at DV Indicators, is a quant par excellence and I must say is in very good company (thanks to all the great posts out there!); I plan to link to many of his posts in the future. Here TopTick produces a great breakdown of many of the ideas to consider for system testing and performance metrics: http://www.dvindicators.com/community/forum/?vasthtmlaction=viewtopic&t=42.0

Here is my humble take on the topic:

1) Everyone wants to know the secret to finding the perfect metric that will allow you to extrapolate the future reliably from the past.

2) Everyone wants to know how to figure out exactly when a system has broken down so that they can exit reliably with minimal drawdown.

These are million dollar questions, and those who realistically expect concrete answers in the form of one simple equation are living in a utopian trading universe. System performance metrics ultimately boil down to measuring the equity curve and/or the profitability and variability of the individual trades. After finding a great result in backtesting, the sad truth is that you cannot necessarily extrapolate. Neither a simple nor a complex analysis using the most advanced statistics will help you separate the bogus systems from the true winners. Equity curves are simply output; they depend on what factors/systems go into them. Hence GIGO: garbage in, garbage out. If you want to combine 50 variables and test long-parameter moving average sets over short periods–guess what? Degrees of freedom strike thee down! Your system is bogus, and I don't care if it has a Sharpe Ratio of 4 and a 2% drawdown. Understanding the inputs and the possible methodological flaws in trading system design is more important than the output—if you just want to find great numbers, genetic algorithms and optimizers will give you what you are looking for in seconds.

Every equity curve contains a story behind the numbers, and asking good questions is the real key to finding the truth. If I looked at Madoff's equity curve, any metric I used would have assured me of continued performance. Behind the curtain of this pretty performance was a fiction that no statistics could reveal. The only person who seemed to catch on was a fellow hedge fund manager who used, ahem, "common sense" to point out that no strategy being described for Madoff's fund, or any other for that matter, could possibly produce returns that consistent.

The same applies outside the realm of pure fraud: a 100- or 200-day system test of any mean-reversion indicator during 2008 would have looked flawless and similarly invulnerable. Guess what: the real story behind the curve was that the indicators preyed ruthlessly on tremendous uncertainty and a credit crisis that drove mass margin calls. This created unprecedented volatility and unusually chaotic correlations that are predictably cyclic—i.e., they won't last forever! When credit and confidence were restored, profitability returned to more normal levels. Was the system breaking down? Or was it just experiencing supernormal profitability? If we knew what drove profitability—i.e., volatility, which was a consequence of credit issues—we could use either one as a better means of figuring out when to cash in our chips than observing the system output.

Try to think of yourself more as an experimenter or a scientist: What am I trying to measure here? Which independent variables are responsible for moderating the profitability of my system (volatility, etc.)? Am I potentially measuring a spurious correlation? That is, perhaps I am taking an indirect measure of something and should search for a clearer proxy (credit spreads?). Keep drilling down and connecting the dots, and even if your initial efforts are not rewarded, eventually you will be better and more consistently profitable as a result. I could go on forever, but if I can implore all of the people who "only care that something works and not why it works" to take a step back and try to understand their systems in logical terms for a change, they will get much closer to the answers they truly seek. The best physicists—far more gifted in math than you or I—are great abstract thinkers who first seek to understand our universe with sound theory and how it fits together. In an increasingly uncertain world, it is not likely that reality will ever have the consistency and symmetry of a good theory, but the ideas help inspire better systems that are more likely to survive in the future.

Great post on gaming the market

Organizations

Papers

  • Almgren, Robert is seen as one of the major theoreticians of algorithmic trading. He is Adjunct Professor in Financial Mathematics, Courant Institute of Mathematical Sciences, New York University. His work is quite mathematical. Many (most?) of his papers are available on his web site.
Abstract: We analyze the informational content of more than 1.2 million stock picks provided by more than 60,000 individuals from November 1, 2006 to October 31, 2007 on the CAPS open access website created by the Motley Fool company (www.caps.fool.com). On average, an individual pick in CAPS outperformed the S&P 500 index by 4 percentage points in the twelve months after the pick. We use a four-factor regression framework to estimate the excess returns associated with portfolios that aggregate these picks; a portfolio of the most popular CAPS stocks yielded excess returns of more than 18 percentage points annually relative to the portfolio of the least popular stocks.
Most of the software that looks for patterns in the market does not produce a model of why those patterns exist. The software just finds patterns. In some sense that's very risky: patterns may be like faces in clouds, purely accidental and unlikely to occur again. One would really like models that explain why the patterns are produced. But this paper argues that we are now collecting so much data that models aren't necessary.
"Petabytes allow us to say: ‘Correlation is enough.’ We can stop looking for models. We can analyze the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot."
What do you think of that argument?
  • Chris Donnan's list of "seminal algorithmic trading papers."
We construct a financial "Turing test" to determine whether human subjects can differentiate between actual vs. randomized financial returns. The experiment consists of an online video-game (ARORA) where players are challenged to distinguish actual financial market returns from random temporal permutations of those returns. We find overwhelming statistical evidence (p-values no greater than 0.5%) that subjects can consistently distinguish between the two types of time series, thereby refuting the widespread belief that financial markets "look random". A key feature of the experiment is that subjects are given immediate feedback regarding the validity of their choices, allowing them to learn and adapt. We suggest that such novel interfaces can harness human capabilities to process and extract information from financial data in ways that computers cannot.
  • Huang, Zhijian (James) (2009) "Real-Time Profitability of Published Anomalies: An Out-of-Sample Test," SSRN. (Also, see summary at CXO Advisory.)
Previous studies show mixed results about the out-of-sample performance of various asset-pricing anomalies. To reduce data-snooping bias, this paper simulates a real-time trader who chooses among all asset-pricing anomalies published prior to that time using only non-forward-looking filters. I find that a trader can outperform the market by recursively picking the best past performer among published anomalies even after transaction costs are taken into account. The excess return tends to be highest when the trader looks at past performances between two years and five years and when the trader considers more anomalies. For published anomalies, their excess returns over benchmark as well as relative ranks among contemporaneous anomalies do not decrease over time, indicating a relatively stable performance once being published. Relying only on the then-available anomaly literature and historical data, the overall result shows a possible way to beat the market in real time.
  • LeCompte, Steve, Investing Demons, CXO Advisory Group. This is a long blog page that provides an overall perspective—with many references—on whether it is possible to beat the market.
    • Blog entries that analyze various academic papers, typically one paper per blog entry.
    • This page lists studies on (generally successful) momentum strategies.
These notes discuss several topics in neoclassical economics and alternatives, with an aim of reviewing fundamental issues in modeling economic markets. I start with a brief, non-rigorous summary of the basic Arrow-Debreu model of general equilibrium, as well as its extensions to include time and contingency. I then argue that symmetries due to similarly endowed individuals and similar products are generically broken by the constraints of scarcity, leading to the existence of multiple equilibria.
This is followed by an evaluation of the strengths and weaknesses of the model generally. Several of the weaknesses are concerned with the treatments of time and contingency. To address these we discuss a class of agent based models[3].
Another set of issues has to do with the fundamental meaning of prices and the related question of what the observables of a non-equilibrium, dynamic model of an economic market should be. We argue that these issues are addressed by formulating economics in the language of a gauge theory, as proposed originally by Malaney and Weinstein[8]. We review some of their work and provide a sketch of how gauge invariance can be incorporated into the formulation of agent based models.

Papers on Leigh Tesfatsion's page on Financial Economics

Quantivity's list of algorithmic trading papers

Platforms

ECJ

Eclipse Trader

Eureqa.

Eureqa does general curve fitting, intended for fitting scientific experimental data, not stock market data. Applied to market prices, this kind of fitting is sometimes called "data snooping": it predicts the past very well, but does it provide any insight into the future? See Sullivan et al. (1999), "Data-Snooping, Technical Trading Rule Performance, and the Bootstrap" (which also contains some basic trading rules).

One way to ask about data snooping is to measure how much compression a result gives you. If the result can be expressed in significantly fewer symbols than the data, one can be reasonably confident that the result expresses a pattern in the data. Of course, that doesn't tell you whether the pattern itself is reliable. The best way to build a case for the reliability of a pattern is to construct a model that will produce that pattern and see if the model predicts the pattern in places other than where the sample data is found. A model might be represented analytically by equations or as an agent-based model.
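As a toy illustration of the compression test, one can compare the compressed size of the raw data against the compressed size of the rule's description plus its residual errors; if the latter is much smaller, the rule captures real structure. A sketch using zlib as a stand-in compressor (purely illustrative, not a rigorous minimum-description-length test):

<python>
import zlib

def compressed_size(data: bytes) -> int:
    """Size of the data after maximum zlib compression."""
    return len(zlib.compress(data, 9))

def description_length_gain(data: bytes, rule: bytes, residuals: bytes) -> int:
    """Bytes saved by describing the data as rule + residuals.

    A clearly positive value suggests the rule expresses a real
    pattern; near zero or negative suggests it does not actually
    compress the data.
    """
    return compressed_size(data) - (compressed_size(rule) +
                                    compressed_size(residuals))
</python>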

Java Traders Google group

Learning Classifier Systems (LCS)

  • Butz, Martin V. (2007) Java
  • Report, University of Illinois.
The XCS Library (XCSLib) is an open source C++ library for genetics-based machine learning and learning classifier systems. It provides (i) several reusable components that can be employed to design new learning paradigms inspired by learning classifier system principles; and (ii) the implementation of two well-known and widely used models of learning classifier systems.
  • Schulenburg, Sonia (2002) "Explorations in LCS Models of Stock Trading," in Pier Luca Lanzi, Wolfgang Stolzmann, and Stewart W. Wilson, editors, Advances in Learning Classifier Systems, volume 2321 of Lecture Notes in Artificial Intelligence, pages 151-180. Springer-Verlag, Berlin, 2002.

MatLab

QuantLib

From QuantLib.org

The QuantLib project is aimed at providing a comprehensive software framework for quantitative finance. QuantLib is a free/open-source library for modeling, trading, and risk management in real-life.
QuantLib is written in C++ with a clean object model, and is then exported to different languages such as C#, Objective Caml, Java, Perl, Python, GNU R, Ruby, and Scheme. The QuantLibAddin/QuantLibXL project uses ObjectHandler to export an object-oriented QuantLib interface to a variety of end-user platforms including Microsoft Excel and OpenOffice.org Calc. Bindings to other languages and porting to Gnumeric, Matlab/Octave, S-PLUS/R, Mathematica, COM/CORBA/SOAP architectures, FpML, are under consideration. See the extensions page for details.
Appreciated by quantitative analysts and developers, it is intended for academics and practitioners alike, eventually promoting a stronger interaction between them. QuantLib offers tools that are useful both for practical implementation and for advanced modeling, with features such as market conventions, yield curve models, solvers, PDEs, Monte Carlo (low-discrepancy included), exotic options, VAR, and so on.

R System statistical software.

Repast Simphony

A pure Java point-and-click model execution environment that includes built-in results logging and graphing tools, as well as automated connections to a variety of optional external tools, including the R statistics environment, the *ORA and Pajek network analysis plugins, a live-agent SQL query tool plugin, the VisAD scientific visualization package, the Weka data mining platform, many popular spreadsheets, the MATLAB computational mathematics environment, and the iReport visual report designer.

Weka

List of other possibilities

Seasonality

Statistics

Trading Systems

  • Collective2 acts as a middleman between system developers and system users.
  • Free Trading System. A simple and free trading system based on trend-following ideas.
  • Model validation. "[The] model does well when the prices move in a directional way. [It doesn't] perform very well in a mean reverting environment." (Duh!)