Kamis, 29 Desember 2011

‘Different Sectoral Contexts’ approach is vital for IAM


… noted Jeremy in the last post to IP Finance. SMEs in the automotive sector may be interested in a recently completed study which maps the IP business models behind ten cases of US patent, trade secret and trade mark litigation brought by the most prolific US patent applicants in the field of braking technology. The study’s author runs the Engineering Intellectual Property Research Unit at Cranfield University’s School of Engineering.

The study shows that the predominant IP monetisation mechanism in the field of braking is that of “monopoly provision”, i.e. using IP to exclude competing suppliers. There is only one instance of a patent licensing relationship, that being with a company already related to the patentee, suggesting that SME developers of braking technology may struggle to license into prolific US patent applicant companies.

As regards the IP mix that such SME developers might employ, the study confirms that trade secret protection is not used above raw material / component-level manufacture. It also illustrates the risk of third party patent infringement associated with trade secret protection.

The greatest threat of infringement litigation appears to come from competitors at a similar level on the value chain, the study also showing that neither the small size of an SME nor the large size of a company’s patent portfolio can guarantee immunity from suit.

Senin, 26 Desember 2011

Intellectual assets and innovation: a sector-by-sector study on SMEs

The IP Finance weblog is grateful to Chris Torrero for drawing its attention to a recent study, Intellectual Assets and Innovation: the SME Dimension, which has been published by OECD Publishing in its OECD Studies on SMEs and Entrepreneurship series.
"Intellectual Property Rights can be instrumental for SMEs to protect and build on their innovations; position themselves competitively vis-à-vis larger enterprises in global markets; gain access to revenues; signal current and prospective value to investors, competitors and partners; access knowledge markets and networks; open up new commercial pathways; or segment existing markets ['can be' -- but the same rights 'can be' a means by which SMEs bankrupt themselves though over-expenditure in acquiring them and over-extending themselves when seeking to enforce them. That's why it's a shame that ...] ....  while there is increasing recognition of their significance, as well as the need for appropriate intellectual asset management for SMEs across OECD countries, there are few regulatory frameworks or specific instruments directed to SMEs. This is in part due the pace of technological innovation, which often exceeds the time it takes for policy makers to create appropriate responses to the changing landscape of intellectual property. 
This study explores the relations between SME intellectual asset management, innovation and competitiveness in different national and sectoral contexts [the 'different sectoral contexts' approach is vital, since IPRs behave so differently in their respective contexts and we have suffered too long from 'one size fits all' prescriptive analyses of the IP needs of SMEs]. It provides insights on the ability of SMEs to access and utilise the protection systems available to them and identifies key challenges for SMEs in appropriating full value from IPRs. It also investigates effectiveness of regulatory frameworks and policy measures to support SME access to IPRs, identifying best practices and proposing policy recommendations".

The study is available in print and pdf formats. Full details are available from the OECD's bookshop website here.

Minggu, 25 Desember 2011

The New York Times and Diminishing Derivative Works


Most readers of this blog are surely familiar with the struggle, some say existential, of print newspapers to find a viable business model in today's increasingly online world. The issue, as framed, is simple: having given away substantial content for free online, without a revenue stream parallel to the want-ads cum advertisements of the print edition, can such newspapers now find a long-term model based on charging for online content? All this while the newspapers continue to try to salvage something from their print product.

Against this print v online morality play, a little-mentioned event occurred late last week that highlights another aspect of the search for commercially viable platforms for delivering copyright content: the announcement by the New York Times that it is discontinuing its podcasts. First, a confession: I am a podcast freak. For over two hours each day, starting with my one-hour constitutional in the morning and extending to the 45-minute bus commute to and from work, I combine walking or bus travel with listening to a series of informative podcasts by the newspaper on business, politics, science and music.

Without a doubt, the New York Times is well ahead of the class in the content of these podcasts (and probably in every area it covered in podcast form). It is not an exaggeration to say that these podcasts have helped frame the terms of my thinking in recent years. In effect, the New York Times podcasts are a unique form of derivative work, whereby journalists are called upon to adapt their print content to the quite different medium of the spoken word. Unlike the mere reproduction of audio content broadcast on radio and then reworked as a podcast in MP3 format (as with Bloomberg, the BBC or National Public Radio), the New York Times had succeeded in creating a distinct content form elegantly adapted to an aural, non-visual platform. And yet this same newspaper, owner of one of the most august names in the journalistic world, has decided to shut down one (albeit small) aspect of its journalistic excellence.

While no specific reason was given in the announcement of the discontinuance, a strong hint came near the end, where the presenter urged listeners either to take up an offer to subscribe to the online version at an introductory low price or to continue with the print edition. This suggests that the podcasts were an expense the newspaper was no longer interested in bearing, that they had not succeeded in driving listeners to subscribe to the online or print edition, or that they were merely cannibalizing potential revenue-generating customers. In shutting down its podcast service, the newspaper is apparently prepared to risk losing the goodwill accruing to its name by virtue of the podcasts or, worse, engendering a certain feeling of betrayal: content available via the iPod over a number of years has been suddenly yanked from the listening public.

Whatever the reason, the decision impoverishes the content available for a medium (MP3 players) that is uniquely placed to optimize the multi-tasking experience, something neither the online nor the print experience can offer. After all, one can engage in various forms of physical activity while also listening to a favourite podcast. Nothing of the kind can be achieved with online or print content, which demands the reader's complete attention.

It has been noted more than once that the burgeoning of derivative works, and the success of their commercial exploitation, has been one of the main drivers of the expansion of copyright over the last several decades. Both commercially and artistically, therefore, the creation and exploitation of derivative works is one of the success stories of modern copyright. When it comes to MP3 content, however, that is no longer the case, at least for the New York Times: a major copyright owner has apparently decided to cut back on the production of quality derivative works. That is a result to be lamented.

Kamis, 22 Desember 2011

The Higgs Boson -- and the value of money

What does the (tentatively discovered) Higgs Boson have to do with finance? Nothing. At least nothing obvious. Still, it's a fascinating topic with grand sweeping themes. I've written an essay on it for Bloomberg View, which will appear later tonight or tomorrow. I'll add the link when it does.

The interesting thing to me about the Higgs Boson, and the associated Higgs field (the boson is an elementary excitation of this field), is the intellectual or conceptual history of the idea. It seems crazy to think that as the universe cooled (very shortly after the big bang) a new field, the Higgs field, suddenly appeared, filling all space, and giving particles mass in proportion to how strongly they interact with that field. It would be a crazy idea if it were just a proposal pulled out of thin air. But the history is that Higgs' work (and the work of many others at the same time, the early 1960s) had very strong stimulation from the BCS theory of superconductivity of ordinary metals, which appeared in 1957.

That theory explained how superconductivity originates through the emergence below a critical temperature of a condensate of paired electrons (hence, bosons) which acts as an extremely sensitive electromagnetic medium. Try to impose a magnetic field inside a superconductor (by bringing a magnet close, for example) and this condensate or field will respond by stirring up currents which act precisely to cancel the field inside the superconductor. This is the essence of superconductivity -- its appearance changes physics inside the superconductor in such a way that electromagnetic fields cannot propagate. In quantum terms (from quantum electrodynamics), this is equivalent to saying that the photon -- the carrier of the electromagnetic fields -- comes to have a mass. It does so because it interacts very strongly with the condensate.

This idea from superconductivity is pretty much identical to the Higgs mechanism for giving the W and Z particles (the carriers of the weak force) mass. This is what I think is fascinating. The Higgs prediction arose not so much from complex mathematics, but from the use of analogy and metaphor -- I wonder if the universe is in some ways like a superconductor? If we're living in a superconductor (not for ordinary electrical charge, but for a different kind of charge of the electroweak field), then it's easy to understand why the W and Z particles have big masses (more than 100 times the mass of the proton). They're just like photons traveling inside an ordinary superconductor -- inside an ordinary metal, lead or tin or aluminum, cooled down to low temperatures.

I think it's fitting that a physics theory so celebrated for bewildering mathematics and abstraction beyond ordinary imagination actually has its roots in the understanding of grubby things like magnets and metals. That's where the essential ideas were born and found their initial value.

Having said that none of this has anything to do with finance, I should nevertheless mention a fascinating proposal from 2000 by Per Bak, Simon Nørrelykke and Martin Shubik, which draws a very close analogy between the process that determines the value of money and a Higgs-like mechanism. They made the observation that the absolute value of money is essentially undetermined:
The value of money represents a “continuous symmetry”. If, at some point, the value of money was globally redefined by a certain factor, this would have no consequences whatsoever. Thus, in order to arrive at a specific value of money, the continuous symmetry must be broken.
In other words, a loaf of bread could be worth $1, $10, or $100 -- it doesn't matter. But here and now in the real world it does have one specific value. The symmetry is broken.

This idea of continuous symmetry is something that arises frequently in physics. And it is indeed the breaking of a continuous symmetry that underlies the onset of superconductivity. The mathematics of field theory shows that, any time a continuous symmetry is broken (so that some variable comes to take on one specific value), there appears in the theory a new dynamical mode -- a so-called Goldstone mode -- corresponding to fluctuations along the direction of the continuous symmetry. This isn't quite the appearance of mass -- that takes another step in the mathematics -- but this Goldstone business is part of the Higgs mechanism.

I'll try to return to this paper again. It offers a seemingly plausible dynamical model for how a value of money can emerge in an economy, and also why it should be subject to strong inherent fluctuations (because of the Goldstone mode). None of this comes out of equilibrium theory, nor should one expect it to, as money is an inherently dynamical thing -- we use it as a tool to manage activities through time, selling our services today to buy food next week, for example.

Selasa, 20 Desember 2011

Economics and IP: the Katonomics posts

After posting this item on the IPKat weblog, it occurred to me that there are probably a good many readers of the IP Finance weblog who are interested in the point at which economics intersects with IP but who do not read the IPKat. Accordingly I've listed the titles of a series of six posts on economics and IP, written by economist Dr Nicola Searle and published together under the term "Katonomics". A further series by the same author will follow in the New Year.
  • No.1: The social contract theory of IP
  • No.2: The economics of trade marks
  • No.3: Evidence-based policy: the challenge of data
  • No.4: Where to look for an IP-oriented economist
  • No.5: The paradox of fashion
  • No.6: The economics of IP in pharmaceuticals.

Kamis, 15 Desember 2011

Can Policy Right the Science Ship? The Case of Argentina


With another Nobel Prize season behind us after the winners picked up their prizes last weekend, it is worthwhile to consider the state of research and development in developing countries (or at the "low end" of developed countries). Once again, and for the fourth time in less than a decade, an Israeli was awarded the Nobel Prize for Chemistry, this time Professor Dan Shechtman of the Technion--Israel Institute of Technology, here. However, apart from the outsized success of Israelis, especially in Chemistry, the track record of similarly placed countries in Nobel Prize awards has grown more and more sparse over the years.

Against that backdrop, The Economist published an interesting article in its November 5th issue. Entitled "Cristina the Alchemist: Science in Argentina", the article discusses the multi-faceted attempts of the Argentine government under successive presidents, first the late Néstor Kirchner and more recently his widow, Cristina Fernández, to upgrade the state of science and technological research in the country.

Argentina is hardly without a respectable past in this regard. Using Nobel Prize awards as a rough proxy, the country has seen three winners in science. However, the last of these winners received the award in 1984 and the country has been witness to a precipitous decline since then. In response, the government has adopted a series of measures designed to right the sinking ship of Argentine science. Based on the article, the leading aspects of this push can be summarized as follows:

1. R&D expenditure has risen from 0.41% to 0.64% of GDP (although this is still well below the 1.18% of GDP that Brazil spent on R&D in 2009).

2. Under President Kirchner, researchers' salaries were increased, an organized scheme was put into place to repatriate Argentine scientists from abroad, and tax breaks were given to the software industry. Under President Fernández, a new science ministry was created and grants for new product development have been increased.

3. The state is financing the cost of registering patents in jurisdictions outside Argentina, along with the lawyers' fees incurred in defending those patents. It is also supporting the placement of PhDs with employers in the IT sector, including partial support of their salaries.

4. Perhaps the most interesting result of the foregoing is that 854 scientists have returned to Argentina, lured by new labs and increased compensation. In turn, researchers have increased their presence in the leading scientific journals to 179 published articles during the past decade, compared to only 30 articles published in the 1990s. As well, there seem to be particularly noteworthy developments in agriculture and horticulture.

5. That said, reservations have been raised about the extent to which these scientists are engaged in industry-oriented enterprises. Moreover, the article is stone-silent on the extent of patent activity as a result of these efforts. At the macro level, it remains to be seen whether the Argentine government will stay the political course and maintain these policies over an extended period of time, much less whether these efforts will bear fruit at the level of Nobel prizes and similar achievements 15-30 years down the line.

The example of Israel should show Argentina how a mix of public and private activity can enable a marginally developed country to reach world-class accomplishments in science. The Israeli situation should also serve as a caution that continued vigilance is essential. While the country basked last week in the Nobel Prize granted to Professor Shechtman, the professor himself sent a pointed and clear message to the country's leaders: unless a wide-reaching set of changes in the approach to and support of science is adopted, starting with primary education, there will not be another generation of Nobel Prize recipients. The implications for Argentina are clear.

Selasa, 13 Desember 2011

a little more on power laws

I wanted to respond to several insightful comments on my recent post on power laws in finance. And, after that, pose a question on the economics/finance history of financial time series that I hope someone out there might be able to help me with.

First, comments:

ivansml said...
Why exactly is power-law distribution for asset returns inconsistent with EMH? It is trivial to write "standard" economic model where returns have fat tails, e.g. if we assume that stochastic process for dividends / firm profits has fat tails. That of course may not be very satisfactory explanation, but it still shows that EMH != normal distribution. In fact, Fama wrote about non-gaussian returns back in 1960's (and Mandelbrot before him), so the idea is not exactly new. The work you describe here is certainly useful and interesting, but pure patterns in data (or "stylized facts", as economists would call them) by themselves are not enough - we need some theory to make sense of them, and it would be interesting to hear more about contributions from econophysics in that area.
James Picerno said...
It's also worth pointing out that EMH, as I understand it, doesn't assume or dismiss that returns follow some specific distribution. Rather, EMH simply posits that prices reflect known information. For many years, analysts presumed that EMH implies a random distribution, but the empirical record says otherwise. But the random walk isn't a condition of EMH. Andrew Lo of MIT has discussed this point at length. The market may or may not be efficient, but it's not conditional on random price fluctuations. Separately, ivansmi makes a good point about models. You need a model to reject EMH. But that only brings you so far. Let's say we have a model of asset pricing that rejects EMH. Then the question is whether EMH or the model is wrong? That requires another model. In short, it's ultimately impossible to reject or accept EMH, unless of course you completely trust a given model. But that brings us back to square one. Welcome to economics.
I actually agree with these statements. Let me try to clarify. In my post I said, referring to the fat tails in returns and 1/t decay of volatility correlations, that  "None of these patterns can be explained by anything in the standard economic theories of markets (the EMH etc)." The key word is of course "explained."

The EMH has so much flexibility and is so loosely linked to real data that it is indeed consistent with these observations, as Ivansml (Mark) and James rightly point out. I think it is probably consistent with any conceivable time series of prices. But "being consistent with" isn't a very strong claim, especially if the consistency comes from making further subsidiary assumptions about how these fat tails might come from fluctuations in fundamental values. This seems like a "just so" story (even if the idea that fluctuations in fundamental values could have fat tails is not at all preposterous).

The point I wanted to make is that nothing (that I know of) in traditional economics/finance (i.e. coming out of the EMH paradigm) gives a natural and convincing explanation of these statistical regularities. Such an explanation would start from simple well accepted facts about the behaviour of individuals, firms, etc., market structures and so on, and then demonstrate how -- because of certain logical consequences following from these facts and their interactions -- we should actually expect to find just these kinds of power laws, with the same exponents, etc., and in many different markets. Reading such an explanation, you would say "Oh, now I see where it comes from and how it works!"

To illustrate some possibilities, one class of proposed explanations sees large market movements as having inherently collective origins, i.e. as reflecting large avalanches of trading behaviour coming out of the interactions of market participants. Early models in this class include the famous Santa Fe Institute Stock Market model developed in the mid 1990s. This nice historical summary by Blake LeBaron explores the motivations of this early agent-based model, the first of which was to include a focus on the interactions among market participants, and so go beyond the usual simplifying assumptions of standard theories which assume interactions can be ignored. As LeBaron notes, this work began in part...
... from a desire to understand the impact of agent interactions and group learning dynamics in a financial setting. While agent-based markets have many goals, I see their first scientific use as a tool for understanding the dynamics in relatively traditional economic models. It is these models for which economists often invoke the heroic assumption of convergence to rational expectations equilibrium where agents’ beliefs and behavior have converged to a self-consistent world view. Obviously, this would be a nice place to get to, but the dynamics of this journey are rarely spelled out. Given that financial markets appear to thrive on diverse opinions and behavior, a first level test of rational expectations from a heterogeneous learning perspective was always needed.   
I'm going to write posts on this kind of work soon looking in much more detail. This early model has been greatly extended and had many diverse offspring; a more recent review by LeBaron gives an updated view. In many such models one finds the natural emergence of power law distributions for returns, and also long-term correlations in volatility. These appear to be linked to various kinds of interactions between participants. Essentially, the market is an ecology of interacting trading strategies, and it has naturally rich dynamics as new strategies invade and old strategies, which had been successful, fall into disuse. The market never settles into an equilibrium, but has continuous ongoing fluctuations.

Now, these various models haven't yet explained anything, but they do pose potentially explanatory mechanisms, which need to be tested in detail. Just because these mechanisms CAN produce the right numbers doesn't mean this is really how it works in markets. Indeed, some physicists and economists working together have proposed a very different kind of explanation for the power law with exponent 3 for the (cumulative) distribution of returns which links it to the known power law distribution of the wealth of investors (and hence the size of the trades they can make). This model sees large movements as arising in the large actions of very wealthy market participants. However, this is more than merely attributing the effect to unknown fat tails in fundamentals, as would be the case with EMH based explanations. It starts with empirical observations of tail behaviour in several market quantities and argues that these together imply what we see for market returns.

There are more models and proposed explanations, and I hope to get into all this in some detail soon. But I hope this explains a little why I don't find the EMH based ideas very interesting. Being consistent with these statistical regularities is not as interesting as suggesting clear paths by which they arise.

Of course, I might make one other point too, and maybe this is, deep down, what I find most empty about the EMH paradigm. It essentially assumes away any dynamics in the market. Fundamentals get changed by external forces, and the theory supposes that the great complex mass of heterogeneous humanity which is the market responds instantaneously to find the new equilibrium that incorporates all information correctly. So it treats the non-market part of the world -- the weather, politics, business, technology and so on -- as a rich thing with potentially complicated dynamics, and then treats the market as a really simple dynamical thing which just gets driven, in slave fashion, from the outside. This seems to me perversely unnatural and impossible to take seriously. But it is indeed very difficult to rule out with hard data: the idea can always be contorted to remain consistent with observations.

Finally, another valuable comment:
David K. Waltz said...
In one of Taleeb's books, didn't he make mention that something cannot be proven true, only disproven? I think it was the whole swan thing - if you have an appropriate sample and count 100% white swans does not prove there are ONLY white swans, while a sample that has a black one proves that there are not ONLY white swans.
Again, I agree completely. This is a basic point about science. We don't ever prove a theory, only disprove it. And the best science works by trying to find data to disprove a hypothesis, not by trying to prove it.

I assume David is referring to my discussion of the empirical cubic power law for market returns. This is indeed a tentative stylized fact which seems to hold with appreciable accuracy in many markets, but there may well be markets in which it doesn't hold (or periods in which the exponent changes). Finding such deviations  would be very interesting as it might offer further clues as to the mechanism behind this phenomenon.

NOW, for the question I wanted to pose. I've been doing some research on the history of finance, and there's something I can't quite understand. Here's the problem:

1. Mandelbrot in the early 1960s showed that market returns had fat tails; he conjectured that they fit the so-called Stable Paretian (now called Stable Levy) distributions which have power law tails. These have the nice property (like the Gaussian) that the composition of the returns for longer intervals, built up from component Stable Paretian distributions, also has the same form. The market looks the same at different time scales.
2. However, Mandelbrot noted in that same paper a shortcoming of his proposal. You can't think of returns as being independent and identically distributed (i.i.d.) over different time intervals because the volatility clusters -- high volatility predicts more to follow, and vice versa. We don't just have an i.i.d. process.
3. Lots of people documented volatility clustering over the next few decades, and in the 1980s Robert Engle and others introduced ARCH/GARCH and all that -- simple time series models able to reproduce the realistic properties of financial time series, including volatility clustering.
4. But today I found several papers from the 1990s (and later) still discussing the Stable Paretian distribution as a plausible model for financial time series.

My question is simply -- why was anyone even 20 years ago still writing about the Stable Paretian distribution when the reality of volatility clustering was so well known? My understanding is that this distribution was proposed as a way to save the i.i.d. property (by showing that such a process can still create market fluctuations having similar character on all time scales). But volatility clustering is enough on its own to rule out any i.i.d. process.
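To make the point concrete, here is a minimal sketch (my own illustration, not taken from any of the papers discussed) of why volatility clustering rules out any i.i.d. process: a GARCH(1,1) simulation produces returns that are serially uncorrelated, yet whose squared returns are strongly autocorrelated -- a dependence structure no i.i.d. model, Stable Paretian or otherwise, can reproduce. The parameter values are arbitrary choices for illustration.

```python
import random
import statistics

def simulate_garch(n, omega=1e-5, alpha=0.1, beta=0.85, seed=42):
    """Simulate GARCH(1,1): r_t = sigma_t * z_t, with
    sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2."""
    rng = random.Random(seed)
    var = omega / (1.0 - alpha - beta)  # start at the unconditional variance
    returns = []
    for _ in range(n):
        r = rng.gauss(0.0, 1.0) * var ** 0.5
        returns.append(r)
        var = omega + alpha * r * r + beta * var  # today's shock feeds tomorrow's volatility
    return returns

def lag1_autocorr(xs):
    """Sample autocorrelation at lag 1."""
    m = statistics.fmean(xs)
    num = sum((a - m) * (b - m) for a, b in zip(xs, xs[1:]))
    den = sum((x - m) ** 2 for x in xs)
    return num / den

rets = simulate_garch(50_000)
sq = [r * r for r in rets]
print(f"lag-1 autocorrelation of returns:         {lag1_autocorr(rets):+.3f}")
print(f"lag-1 autocorrelation of squared returns: {lag1_autocorr(sq):+.3f}")
```

With these illustrative parameters the first number comes out near zero while the second is clearly positive: the returns themselves are unpredictable, but their magnitude is not.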

Of course, the Stable Paretian business has by now been completely ruled out by empirical work establishing the value of the exponent for returns, which is too large to be consistent with such distributions. I just can't see why it wasn't relegated to the history books long before.

The only possibility, it just dawns on me, is that people may have thought that some minor variation of the original Mandelbrot view might work best. That is, let the distribution over any interval be Stable Paretian, but let the parameters vary a little from one moment to the next. You give up the i.i.d. but might still get some kind of nice stability properties as short intervals get put together into longer ones. You could put Mandelbrot's distribution into ARCH/GARCH rather than the Gaussian. But this is only a guess. Does anyone know?

Jumat, 09 Desember 2011

Prosecuting Wall St.

By way of Simolean Sense:
The following is a script of "Prosecuting Wall Street" (CBS) which aired on Dec. 4, 2011. Steve Kroft is correspondent, James Jacoby, producer.

It's been three years since the financial crisis crippled the American economy, and much to the consternation of the general public and the demonstrators on Wall Street, there has not been a single prosecution of a high-ranking Wall Street executive or major financial firm even though fraud and financial misrepresentations played a significant role in the meltdown. We wanted to know why, so nine months ago we began looking for cases that might have prosecutorial merit. Tonight you'll hear about two of them. We begin with a woman named Eileen Foster, a senior executive at Countrywide Financial, one of the epicenters of the crisis.

Steve Kroft: Do you believe that there are people at Countrywide who belong behind bars?

Eileen Foster: Yes.

Kroft: Do you want to give me their names?

Foster: No.

Kroft: Would you give their names to a grand jury if you were asked?

Foster: Yes.

But Eileen Foster has never been asked - and has never spoken to the Justice Department - even though she was Countrywide's executive vice president in charge of fraud investigations...
See the video and transcript here.

Kamis, 08 Desember 2011

Patents and standards again: a valuable study

This weblog has focused a good deal in recent weeks on standards and patents. In this context, the Study on the Interplay between Standards and Intellectual Property Rights (IPRs), April 2011, is highly relevant. Commissioned and financed by the Directorate General for Enterprise and Industry of the European Commission, this study was produced by the Fraunhofer Institute for Communication Systems and Dialogic, in collaboration with the School of Innovation Sciences at Eindhoven University of Technology, and enjoyed the support of two legal consultants.

Ruben Schellingerhout, who kindly drew the attention of the IP Finance weblog to this study, explains a bit about it:
"The study shows that distribution of patents in standards is very skewed, both in terms of standards and in terms of owners. A few standards cover a large number of patents while most standards include only a few patents, or no patents at all [I had no idea that this was the case]. And a relatively small group of companies own a large number of essential patents in standards, while most companies own only a few or none of these patents. 

In the telecommunications and the consumer electronics market, implementers ensure access to essential IPRs most often via cross-licensing and - to a lesser extent - via general licensing-in and patent pools.

Legal uncertainty can still arise on the obligation to disclose, the irrevocability and the geographic scope of the licensing commitment and in cases of transfer of IPRs if they are still subject to a FRAND licensing commitment. Companies expect standard setting organisations to improve transparency on essential IPRs".
Thanks, Ruben, for your kind assistance.  Readers can access the report in full here.

Selasa, 06 Desember 2011

Power laws in finance

My latest column in Bloomberg looks very briefly at some of the basic mathematical patterns we know about in finance. Science has a long tradition of putting data and observation first. Look very carefully at what needs to be explained -- mathematical patterns that show up consistently in the data -- and then try to build simple models able to reproduce those patterns in a natural way.

This path has great promise in finance and economics, although it hasn't been pursued very far until recently. My Bloomberg column gives a sketch of what is going on, but I'd like to give a few more details here, and some links.

The patterns we find in finance are statistical regularities -- broad patterns which show up in all markets studied, with an impressive similarity across markets in different countries and for markets in different instruments. The first regularity is the distribution of returns over various time intervals, which has been found generically to have broad power-law tails -- "fat tails" -- implying that large fluctuations up or down are much more likely than they would be if markets fluctuated in keeping with normal Gaussian statistics. Anyone who has read The Black Swan knows this.
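To get a feel for what "fat tails" means numerically, here is a small sketch comparing the chance of extreme moves under a Gaussian and under a power-law tail with exponent 3 (the splice point and normalisation below are illustrative choices, not fitted values):

```python
import math

def gaussian_tail(x):
    # Two-sided Gaussian tail probability P(|r| > x), x in standard deviations
    return math.erfc(x / math.sqrt(2))

def power_law_tail(x, alpha=3.0, x_min=2.0):
    # A pure power-law tail beyond x_min, spliced to match the Gaussian
    # at x_min (a rough illustration, not a fitted market model)
    return gaussian_tail(x_min) * (x / x_min) ** (-alpha)

for x in (2, 5, 10):
    print(x, gaussian_tail(x), power_law_tail(x))
```

At ten standard deviations the Gaussian probability is of order 10^-23, effectively "never", while the power-law tail still gives a few chances in ten thousand -- large enough that such moves turn up routinely in long records of minute-by-minute returns.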

This pattern has been established in a number of studies over the past 15 years or so, mostly by physicist Eugene Stanley of Boston University and colleagues. This paper from 1999 is perhaps the most notable, as it used enormous volumes of historical data to establish the fat-tailed pattern for returns over times ranging from one minute up to about four days. One of the most powerful things about this approach is that it doesn't begin with any far-reaching assumptions about human behaviour, the structure of financial markets or anything else, but only asks: are there patterns in the data? As the authors note:
The most challenging difficulty in the study of a financial market is that the nature of the interactions between the different elements comprising the system is unknown, as is the way in which external factors affect it. Therefore, as a starting point, one may resort to empirical studies to help uncover the regularities or “empirical laws” that may govern financial markets.    
This strategy seems promising to physicists because it has worked in building theories of complex physical systems -- liquids, gases, magnets, superconductors -- for which it is also often impossible to know anything in great detail about the interactions between the molecules and atoms within. This hasn't prevented the development of powerful theories because, as it turns out, many of the precise details at the microscopic level DO NOT influence the large scale collective properties of the system. This has inspired physicists to think that the same may be true in financial markets -- at least some of the collective behaviour we see in markets, their macroscopic behaviour, may be quite insensitive to details about human decision making, market structure and so on.

The authors of this 1999 study summarized their findings as follows:


Several points of clarification. First, the result for the power law with exponent close to 3 is a result for the cumulative distribution -- that is, the probability that a return will be greater than a certain value (not just equal to that value). Second, the fact that this value lies outside the range [0,2] means that the process generating these fluctuations isn't a simple stationary random process with independent and identically distributed returns in each time period. That was the idea initially proposed by Benoit Mandelbrot on the basis of the so-called Lévy stable distributions. This study and others have established that this idea can't work -- something more complicated is going on.
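A standard tool for measuring such a cumulative tail exponent from data is the Hill estimator. Here is a minimal sketch on synthetic data with a known exponent of 3 (the sample size and cutoff k are arbitrary illustrative choices):

```python
import math
import random

def hill_estimator(data, k):
    # Hill estimator: average log-spacing of the k largest observations
    # above the (k+1)-th largest, inverted to give the tail exponent
    tail = sorted(data, reverse=True)[: k + 1]
    x_k = tail[-1]
    return k / sum(math.log(x / x_k) for x in tail[:-1])

random.seed(42)
# Synthetic magnitudes with a true cumulative tail exponent of 3
sample = [random.paretovariate(3.0) for _ in range(100_000)]
print(hill_estimator(sample, k=2_000))   # should come out near 3
```

Applied to real return data, the same estimator is what yields the "exponent close to 3" result, with the usual caveat that the answer depends somewhat on how deep into the tail (the choice of k) one looks.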

That complication is also referred to in the second paragraph above. If you take the data on returns at the one-minute level and randomize the order in which it appears, you still get the same power-law tails in the distribution of returns over one minute -- it's the same data. But this new time series has different returns over longer times, generated by combining sequences of the one-minute returns. The distribution over longer and longer times turns out to converge slowly to a Gaussian for the randomized data, meaning that the true fat-tailed distribution over longer times has its origin in rich and complex correlations between market movements at different times (which get wiped out by the randomization). Again, we're not just dealing with a fixed probability distribution and independent changes over different intervals.
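This shuffling experiment is easy to reproduce on synthetic data. The sketch below uses a GARCH(1,1) process as a stand-in for one-minute returns (all parameters are illustrative): shuffling preserves the one-minute distribution exactly, but the 60-minute aggregates of the shuffled series lose their fat tails because the volatility clustering has been destroyed:

```python
import math
import random
import statistics

random.seed(7)

def garch_returns(n, omega=0.05, alpha=0.12, beta=0.85):
    # GARCH(1,1): a simple process with volatility clustering, used here
    # only as a stand-in for real one-minute market returns
    var = omega / (1 - alpha - beta)        # start at unconditional variance
    out = []
    for _ in range(n):
        r = random.gauss(0, 1) * math.sqrt(var)
        out.append(r)
        var = omega + alpha * r * r + beta * var
    return out

def excess_kurtosis(xs):
    # Positive excess kurtosis means fatter tails than a Gaussian
    m = statistics.fmean(xs)
    sq = [(x - m) ** 2 for x in xs]
    v = statistics.fmean(sq)
    return statistics.fmean([s * s for s in sq]) / (v * v) - 3

def aggregate(xs, k=60):
    # Non-overlapping k-minute returns built from one-minute returns
    return [sum(xs[i : i + k]) for i in range(0, len(xs) - k + 1, k)]

one_min = garch_returns(200_000)
shuffled = one_min[:]
random.shuffle(shuffled)                     # same data, random order

print(excess_kurtosis(one_min))              # clearly positive: fat tails
print(excess_kurtosis(aggregate(one_min)))   # typically stays elevated
print(excess_kurtosis(aggregate(shuffled)))  # near zero: back toward Gaussian
```

The shuffled series and the original contain exactly the same one-minute returns; only the temporal correlations differ, and that alone decides whether the fat tails survive aggregation.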

To read more about this, see this nice review by Xavier Gabaix of MIT. It covers this and many other power laws in finance and economics.

Now, the story gets even more interesting if you look past the mere distribution of returns and study the correlations between market movements at different times. Market movements are, of course, extremely hard to predict. But it is very interesting where the unpredictability comes in.

The so-called autocorrelation of the time series of market returns decays to zero after a few minutes. This is essentially a measure of how much the return now can be used to predict a return in the future. After a few minutes, there's nothing. This is the sense in which the markets are unpredictable. However, there are levels of predictability. It was discovered in the early 1990s, and has been confirmed many times since in different markets, that the time series of volatility -- the absolute value of the market return -- has long-term correlations, a kind of long-term memory. Technically, the autocorrelation of this time series only decays to zero very slowly.
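The contrast is easy to see in a toy model. Below, log-volatility follows a slow AR(1), so the *size* of moves is persistent while their *sign* is not (a stand-in for market data; every parameter is illustrative, not fitted to any market):

```python
import math
import random
import statistics

random.seed(1)

def acf(xs, lag):
    # Sample autocorrelation of the series at the given lag
    m = statistics.fmean(xs)
    var = statistics.fmean([(x - m) ** 2 for x in xs])
    cov = sum((xs[i] - m) * (xs[i + lag] - m)
              for i in range(len(xs) - lag)) / (len(xs) - lag)
    return cov / var

# Toy stochastic-volatility returns: slow AR(1) log-volatility
n, phi = 100_000, 0.98
log_vol, returns = 0.0, []
for _ in range(n):
    log_vol = phi * log_vol + random.gauss(0, 0.1)
    returns.append(math.exp(log_vol) * random.gauss(0, 1))

print(acf(returns, 5))                    # near zero: returns unpredictable
print(acf([abs(r) for r in returns], 5))  # clearly positive: volatility memory
```

The returns themselves show essentially no autocorrelation beyond sampling noise, while their absolute values remain strongly correlated -- the same qualitative signature found in real market data.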

This is shown below in the following figure (from a representative paper, again from the Boston University group) which shows the autocorrelation of the return time series g(t) and also of the volatility, which is the absolute value of g(t):



Clearly, whereas the first signal shows no correlations after about 10 minutes, the second shows correlations and predictability persisting out to times as long as 10,000 minutes, which is on the order of 10 days or so.

So it's the directionality of price movements which has very little predictability, whereas the magnitude of changes follows a process with much more interesting structure. It is in the record of this volatility that one sees potentially deep links to other physical processes, including earthquakes. A particularly interesting paper is this one, again by the Boston group, quantifying the ways in which market volatility obeys several quantitative laws known from earthquake science, especially the Omori law describing how the probability of aftershocks decays following a main earthquake. This probability decays quite simply in proportion to 1/time since the main quake, meaning that aftershocks are most likely immediately afterward and become progressively less likely with time. Episodes of high market volatility appear to follow similar behaviour quite closely.
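The 1/t decay has a simple quantitative consequence worth noting: if the aftershock rate falls off as 1/t, the cumulative number of aftershocks grows only logarithmically with time since the shock. A toy calculation (the constants k and c are arbitrary illustrative choices):

```python
import math

def omori_rate(t, k=100.0, c=1.0, p=1.0):
    # Modified Omori law: aftershock rate(t) = k / (c + t)**p,
    # with the exponent p close to 1 both for earthquakes and,
    # per the studies discussed here, for market volatility
    return k / (c + t) ** p

# With p = 1 the rate falls as 1/t, so the cumulative count of
# aftershocks over a window grows only like log(t)
cumulative = sum(omori_rate(t) for t in range(1, 1001))
print(cumulative)                 # roughly k * log(window length)
print(100.0 * math.log(1000))     # logarithmic benchmark for comparison
```

Most of the aftershocks (or high-volatility episodes) therefore arrive in the period immediately after the main shock, with a long, slowly thinning tail afterwards.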

Perhaps even better is another study, which looks at the link to earthquakes with a somewhat tighter focus. The abstract captures the content quite well:
We analyze the memory in volatility by studying volatility return intervals, defined as the time between two consecutive fluctuations larger than a given threshold, in time periods following stock market crashes. Such an aftercrash period is characterized by the Omori law, which describes the decay in the rate of aftershocks of a given size with time t by a power law with exponent close to 1. A shock followed by such a power law decay in the rate is here called Omori process. We find self-similar features in the volatility. Specifically, within the aftercrash period there are smaller shocks that themselves constitute Omori processes on smaller scales, similar to the Omori process after the large crash. We call these smaller shocks subcrashes, which are followed by their own aftershocks. We also show that the Omori law holds not only after significant market crashes as shown by Lillo and Mantegna [Phys. Rev. E 68, 016119 2003], but also after “intermediate shocks.” ...
These are only a few of the power-law regularities now known to hold for most markets, with only very minor differences between markets. An important effort is to find ways to explain these regularities in simple and plausible market models. None of these patterns can be explained by anything in the standard economic theories of markets (the EMH and so on). They can of course be reproduced by suitably generating time series using various methods, but that hardly counts as explanation -- that's just using time series generators to reproduce certain kinds of data.

The promise of finding these kinds of patterns is that they may strongly constrain the types of theories to be considered for markets, by ruling out all those which do not naturally give rise to this kind of statistical behaviour. This is where data matters most in science -- by proving that certain ideas, no matter how plausible they seem, don't work. This data has already stimulated the development of a number of different avenues for building market theories which can explain the basic statistics of markets, and in so doing go well beyond the achievements of traditional economics.

I'll have more to say on that in the near future.

Jumat, 02 Desember 2011

Consumable IP

Some OEMs derive a significant proportion of their profits from the sale of consumables (a recent article in The Times recalled the 2002 assertion by the Consumers Association that the ink in Hewlett-Packard’s printers “was more expensive, per millilitre, than Dom Perignon champagne”). Others make no attempt to prevent aftermarket suppliers, instead making their profit on the sale of original equipment.

Now there would appear to be a third way: in the field of aircraft brakes, Nasco has announced an “innovative alternative” approach to providing [corporate] customers with new brake designs involving a fixed-price design, development and production contract that includes re-procurement data rights. Such data are believed to include manufacturing drawings and material specifications.

According to Nasco’s website, “customers pay the development costs up front but reap the long-term benefits of lower cost spare parts through competitive sourcing.” Contrast this with the use of trade secrets in manufacturing drawings and material specifications to exclude competitors as previously reported here.

Interview with Dave Cliff

Dave Cliff of the University of Bristol is someone whose work I've been meaning to look at much more closely for a long time. Essentially he's an artificial intelligence expert, but he has devoted some of his work to developing trading algorithms. He suggests that many of these algorithms, even ones working on extremely simple rules, consistently outperform human beings, which rather undermines the common economic view that people are highly sophisticated rational agents.

I just noticed that Moneyscience is beginning a several-part interview with Cliff, the first part having just appeared. I'm looking forward to the rest. Some highlights from Part I, beginning with Cliff's early work, in the mid-1990s, on writing algorithms for trading:
I wrote this piece of software called ZIP, Zero Intelligence Plus. The intention was for it to be as minimal as possible, so it is a ridiculously simple algorithm, almost embarrassingly so. It’s essentially some nested if-then rules, the kind of thing that you might type into an Excel spreadsheet macro. And this set of decisions determines whether the trader should increase or decrease a margin. For each unit it trades, it has some notion of the price below which it shouldn’t sell or above which it shouldn’t buy and that is its limit price. However, the price that it actually quotes into the market as a bid or an offer is different from the limit price because obviously, if you’ve been told you can buy something and spend no more than ten quid, you want to start low and you might be bidding just one or two pounds. Then gradually, you’ll approach towards the ten quid point in order to get the deal, so with each quote you’re reducing the margin on the trade. The key innovation I introduced in my ZIP algorithm was that it learned from its experience. So if it made a mistake, it would recognize that mistake and be better the next time it was in the same situation.

HFTR: When was this exactly?

DC: I did the research in 1996 and HP published the results, and the ZIP program code, in 1997. I then went on to do some other things, like DJ-ing and producing algorithmic dance music (but that’s another story!)

Fast-forward to 2001, when I started to get a bunch of calls because a team at IBM’s Research Labs in the US had just completed the first ever systematic experimental tests of human traders competing against automated, adaptive trading systems. Although IBM had developed their own algorithm called MGD, (Modified Gjerstad Dickhaut), it did the same kind of thing as my ZIP algorithm, using different methods. They had tested out both their MGD and my ZIP against human traders under rigorous experimental conditions and found that both algorithms consistently beat humans, regardless of whether the humans or robots were buyers or sellers. The robots always out-performed the humans.

IBM published their findings at the 2001 IJCAI conference (the International Joint Conference on AI) and although IBM are a pretty conservative company, in the opening paragraphs of this paper they said that this was a result that could have financial implications measured in billions of dollars. I think that implicitly what they were saying was there will always be financial markets and there will always be the institutions (i.e. hedge funds, pension management funds, banks, etc). But the traders that do the business on behalf of those institutions would cease to be human at some point in the future and start to be machines. 
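For readers curious what "nested if-then rules adjusting a margin" might look like in code, here is a deliberately crude sketch in the spirit of what Cliff describes. The real ZIP update is a momentum-smoothed Widrow-Hoff learning rule; the class name, constants and price targets below are all illustrative inventions, not HP's published code:

```python
# A minimal ZIP-flavoured seller: it quotes its limit price plus a
# margin, and nudges that margin up or down in response to trades.

class ZipishSeller:
    def __init__(self, limit_price, margin=0.5, learning_rate=0.1):
        self.limit = limit_price    # never sell below this price
        self.margin = margin        # fractional markup over the limit
        self.rate = learning_rate   # how fast the margin adapts

    def quote(self):
        return self.limit * (1 + self.margin)

    def observe(self, last_trade_price):
        # Nested if-then rules: if the market traded at or above our
        # quote we could have asked for more, so raise the margin;
        # if it traded below, we are being undercut, so shave it.
        if last_trade_price >= self.quote():
            target = last_trade_price * 1.05
        else:
            target = last_trade_price * 0.95
        desired_margin = target / self.limit - 1
        self.margin += self.rate * (desired_margin - self.margin)
        self.margin = max(self.margin, 0.0)   # never quote below limit

seller = ZipishSeller(limit_price=10.0)
print(seller.quote())                   # initial ask: 15.0
seller.observe(last_trade_price=14.0)
print(seller.quote())                   # margin shaved toward the market
```

Even this caricature captures the point Cliff makes: the decision logic fits in a handful of if-then rules plus a one-line learning update, yet rules of this family held their own against human traders in the IBM experiments.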
Personally, I think there are two important things here. One is that, yes, trading will probably soon become almost all algorithmic. This may tend to make you think the markets will become more mechanical, their collective behaviour emerging out of the very simple actions of so many crude programs.

But the second thing is what this tells us about people -- that traders and investors and people in general aren't so clever or rational, and most of them have probably been following fairly simple rules all along, rules that machines can easily beat. So there's really no reason to think the markets should become more mechanical as they become more algorithmic. They've probably been quite mechanical all along, and algorithmic too -- it's just that non-rational zero intelligence automatons running the algorithms were called people. 

Kamis, 01 Desember 2011

The Missing IP Narrative

It remains my most vexing professional challenge. The "it" is how to integrate IP/IC into management education. The vexation comes from the seeming paradox that, while intellectual property and intellectual capital are routinely described as cornerstones of innovation, if not of modern business itself, their systematic presence in MBA curricula remains sporadic at best. I was reminded of this in connection with two quite different experiences that I had during the week.

In the first, I had occasion to spend some time with the dean of a local business school. Recently appointed, he was taking bold action to modify the school's MBA program to make it more appropriate for today's student body. In that connection, he wanted to hear more about my class on IP and Management that I teach elsewhere. His question, half "devil's advocate", half an expression of curricular skepticism, was simply this: "I have space for 20 or so courses in the program. Why should a course such as yours be part of the curriculum?"

The case in favour of inclusion is not simple. In the face of multiple courses in strategy, finance, marketing, and operations, the role of a course focusing on IP is difficult to explain. The uneven diffusion of IP subject matter throughout an organization, the origin of IP as a branch of legal practice and its intangible character all give IP a bit of orphan status within the school's curriculum.

The Dean pushed me for examples of how the course works in practice. A pregnant pause ensued, finally punctuated by several examples of IP and management that seemed to pique his interest. All the while I stressed that one can look at MBA education as a platform for imparting relevant narratives to the students. Taken from this perspective, the ultimate justification for the course is that it highlights the IP narrative in a manner that is front and centre: "Can you imagine a manager who does not have the ability to apply the IP narrative to his daily business?", I asked. I am not sure that I convinced him. If I failed, cohort after cohort of young managers will be trained at his school without receiving any systematic training in this field. The managerial narrative for these students will simply lack a meaningful consideration of IP.

This absence of a narrative for IP was reinforced in listening to a podcast that featured a well-known venture capitalist describing the foundations of the VC world. The speaker did not disappoint. He described the flow of foundation money from university and similar endowments as the turning point for VCs to attract substantial investment capital. He emphasized the importance of the human dimension in any investment, and observed that any prospective company that puts special emphasis on an exit strategy for the company lacks the necessary patience. He distinguished between great innovative ideas and market potential. There are a lot more of the former than the latter.

These multiple narratives about the VC enterprise were interesting and instructive, except for one thing: the speaker mentioned IP only in passing. Based on his words, IP was not a central part of the VC narrative. In follow-up correspondence, he replied briefly that his firm "of course" takes an interest in a prospective company's IP, i.e., "FTO and patentability". In his view, IP is largely limited to patents, and the work required is the purview of patent technocrats, far removed from most of the company's managers.


This podcast and email correspondence reinforced the sense of frustration that I had felt in my meeting with the dean, namely that IP is not part of the mainstream MBA narrative for most students. The upshot is that most MBA students will continue to go through their programs with scant or simply no attention paid to IP. Is there a price to be paid for this? Perhaps. It is frequently observed that innovation has materially declined over the last few years. There are no doubt a number of reasons for this troubling state of affairs. Against that backdrop, one wonders whether the absence of a meaningful IP narrative within most MBA programs is another source of the innovative malaise. This is at least narrative food for thought.
 

http://financetook.blogspot.com/ Copyright © 2012 -- Powered by Blogger