Friday, 30 September 2011

Lobbying pays off handsomely -- visual proof

From an article in The Economist, a graph showing the performance of the "Lobbying Index" versus the S&P 500 over the past decade. The Lobbying Index is an average over the 50 firms within the S&P 500 that lobby most intensely. It's pretty clear that lobbying -- a rather less than honourable profession in my book -- pays off:
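
For anyone curious to reproduce this kind of comparison, a minimal sketch is below. It is my own illustration, not The Economist's methodology: it assumes you already have a CSV of adjusted closing prices (here called prices.csv, a hypothetical file) with one column per lobbying firm plus a column named SP500 for the index, and it builds an equal-weighted "lobbying index" from those firms.

```python
import pandas as pd

# Hypothetical input: a CSV of daily adjusted closes, one column per lobbying
# firm, plus a column "SP500" for the index itself.
prices = pd.read_csv("prices.csv", index_col=0, parse_dates=True)

lobbying_tickers = [c for c in prices.columns if c != "SP500"]   # the 50 firms

# Daily returns, then an equal-weighted "Lobbying Index": average the firms'
# returns each day and compound them into a cumulative performance series.
returns = prices.pct_change().dropna()
lobbying_index = (1 + returns[lobbying_tickers].mean(axis=1)).cumprod()
sp500 = (1 + returns["SP500"]).cumprod()

comparison = pd.DataFrame({"Lobbying Index": lobbying_index, "S&P 500": sp500})
print(comparison.tail())
# comparison.plot()  # gives the kind of chart shown in the article
```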


The Fetish of Rationality

I'm currently reading Jonathan Aldred's book The Skeptical Economist. It's a brilliant exploration of how economic theory is run through at every level with hidden value judgments which often go a long way to determining its character. For example, the theory generally assumes that more choice always has to be better. This follows more or less automatically from the view that people are rational "utility maximizers" (a phrase that should really be banned for ugliness alone). After all, more available choices can only give a "consumer" the ability to meet their desires more effectively, and can never have negative consequences. Add extra choices and the consumer can always simply ignore them.

As Aldred points out, however, this just isn't how people work. One of the problems is that more choice means more thinking and struggling to decide what to do. As a result, adding more options often has the effect of inhibiting people from choosing anything. In one study he cites, doctors were presented with the case history of a man suffering from osteoarthritis and asked if they would A. refer him to a specialist or B. prescribe a new experimental medicine. Other doctors were presented with the same choice, except they could choose between two experimental medicines. Doctors in the second group made twice as many referrals to a specialist, apparently shying away from the psychological burden of having to deal with the extra choice between medicines.

I'm sure everyone can think of similar examples from their own lives in which too much choice becomes annihilating. Several years ago my wife and I were traveling in Nevada and stopped in for an ice cream at a place offering 200+ flavours and a variety of extra toppings, etc. There were an astronomical number of potential combinations. After thinking for ten minutes, and letting lots of people pass by us in the line, I finally just ordered a mint chocolate chip cone -- to end the suffering, as it were. My wife decided it was all too overwhelming and in the end didn't want anything! If there had only been vanilla and chocolate we'd have ordered in 5 seconds and been very happy with the result.

In discussing this problem of choice, Aldred refers to a beautiful paper I read a few years ago by economist John Conlisk entitled Why Bounded Rationality? The paper gives many reasons why economic theory would be greatly improved if it modeled individuals as having finite rather than infinite mental capacities. But one of the things he considers is a paradoxical contradiction at the very heart of the notion of rational behaviour. A rational person facing any problem will work out the optimal way to solve that problem. However, there are costs associated with deliberation and calculation. The optimal solution to the ice cream choice problem isn't to stand in the shop for 6 years while calculating how to maximize expected utility over all the possible choices. Faced with a difficult problem, therefore, a rational person first has to solve another problem -- for how long should I deliberate before it becomes advantageous to just take a guess?

This is a preliminary problem -- call it P1 -- which has to be solved before the real deliberation over the choice can begin. But, Conlisk pointed out, P1 is itself a difficult problem and a rational individual doesn't want to waste lots of resources thinking about that one too long either. Hence, before working on P1, the rational person first has to decide what is the optimal amount of time to spend on solving P1. This is another problem P2, which is also hard. Of course, it never ends. Take rationality to its logical conclusion and it ends up destroying itself -- it's simply an inconsistent idea.
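
To see how quickly the regress runs away, here is a toy illustration in code -- my own sketch of Conlisk's point, not a model from his paper. It assumes only that even a meta-decision about how long to think costs some minimum amount of thought.

```python
MAX_LEVELS = 20      # cap the regress so this toy example terminates
MIN_THOUGHT = 0.1    # assume even a meta-decision costs some minimum thought

def optimal_deliberation_time(level=1, cost_so_far=0.0):
    """To deliberate optimally on problem P(level), you must first decide how
    long to deliberate -- which is problem P(level + 1), and so on forever."""
    cost_so_far += MIN_THOUGHT
    if level > MAX_LEVELS:
        raise RuntimeError(
            f"gave up after {MAX_LEVELS} meta-levels and {cost_so_far:.1f} "
            f"units of thought, without ever returning to the original choice"
        )
    return optimal_deliberation_time(level + 1, cost_so_far)

try:
    optimal_deliberation_time()
except RuntimeError as regress:
    print(regress)
```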

Anyone who is not an economist might be quite amazed by Conlisk's paper. It's a great read, but it will dawn on the reader that in a sane world it simply wouldn't be necessary. It's arguing for the obvious and is only required because economic theory has made such a fetish of rationality. The assumption of rationality may in some cases have made it possible to prove theorems by turning the consideration of human behaviour into a mathematical problem. But it has tied the hands of economic theorists in a thousand ways.

Thursday, 29 September 2011

Economists on the way to being a "religious cult"...

A short, seven-minute video produced by the Institute for New Economic Thinking offers the views (very briefly) of a number of economists on modeling and its purposes (h/t to Moneyscience). Two things of note:

1. Along the way, Brad DeLong mentions Milton Friedman's famous claim that a model is better the more unrealistic its assumptions, and that the sole measure of a theory is whether it makes accurate predictions. I'd really like to know what DeLong thinks about this, but his views aren't in the interview. He mentions Friedman's idea without defending or attacking it -- just a nod, I guess, to one of the most influential ideas on the topic. It shows how much Friedman's view is still in play.

In my view (some not very well organized thoughts here) the core problem with Friedman's argument is that a theory with perfect predictions and perfectly unrealistic assumptions simply doesn't teach you anything -- you're left just as mystified by how the model can possibly work (give the right predictions) as you were with the original phenomena you set out to explain. It's like a miracle. Such a model might of course be valuable as a starting point, and in stimulating the invention of further models with more realistic assumptions which then -- if they give the same predictions -- may indeed teach you something about how certain kinds of interactions, behaviours, etc (in the assumptions) can lead to observed consequences.

But then -- it's the models with the more realistic assumptions that are superior. (It's worth remembering that Friedman liked to say provocative things even if he didn't quite believe them.)

2. An interesting quote from economist James Galbraith, with which I couldn't agree more:
Modeling is not the end-all and the be-all of economics... The notion that the qualities of an economist should be defined by the modeling style that they adopt [is a disaster]. There is a group of people who say that if you're not doing dynamic stochastic general equilibrium modeling then you're not really a modern economist... that's a preposterous position which is going to lead to the reduction of economics to the equivalent of a small religious cult working on issues of interest to no one else in the world.

Basel III -- Taking away Jamie Dimon's Toys

Most people have by now heard the ridiculous claim by Jamie Dimon, CEO of JPMorgan Chase, that the new Basel III rules are "anti-American." The New York Times has an interesting set of contributions by various people on whether Dimon's claim has any merit. You'll all be shocked to learn that Steve Bartlett, president of the Financial Services Roundtable -- we can assume he's not biased, right? -- thinks that Dimon is largely correct. Personally, I tend to agree more with the views of Russell Roberts of George Mason University:

Who really writes the latest financial regulations, where the devil is in the details? Who has a bigger incentive to pay attention to their content — financial insiders such as the executives of large financial institutions or you and me, the outsiders? Why would you ever think that the regulations that emerge would be designed to promote international stability and growth rather than the naked self-interest of the financial community?

I do not believe it’s a coincidence that Basel I and II blew up in a way that enriched insiders at the expense of outsiders. To expect Basel III to yield a better result (now that we've supposedly learned so much) is to ignore the way the financial game is played. Until public policy stops subsidizing leverage (bailouts going back to 1984 make it easier for large financial institutions to fund each other’s activities using debt), it is just a matter of time before any financial system is gamed by the insiders.

Jamie Dimon is a crony capitalist. Don’t confuse that with the real kind. If he says Basel III is bad for America, you can bet that he means "bad for JPMorgan Chase." Either way, he’ll have a slightly larger say in the ultimate outcome than the wisest economist or outsider looking in.
Sadly, this is the truth, even though many people still cling to the hope that there are good people out there somewhere looking after the welfare of the overall system. Ultimately, I think, the cause of financial crises isn't to be found in the science of finance or of economics, but of politics. There is no way to prevent them as long as powerful individuals can game the system to their own advantage, privatizing the gains, as they say, and socializing the losses.

But not everyone is convinced of this by a long shot. Just after the crisis I wrote a feature article for Nature looking at new thinking about modeling economic systems and financial markets in particular. Researching the article, I came across lots of good new thinking about ways to model markets and go beyond the standard framework of economics. That all went into the article. I also suggested to my editor that we had to at least raise at the end of the article the nexus of influence between Wall St and the political system, and I proposed in particular to write a little about the famous paper by Romer and Akerlof, Looting: The Economic Underworld of Bankruptcy for Profit, which gives a simple and convincing argument in essence about how corporate managers (not only in finance) can engineer vast personal profits by running companies into the ground. Oddly, my editor in effect said "No, we can't include that because it's not science."

But that doesn't mean it's not important.

But back to Basel III. I had an article exploring this in some detail in Physics World in August. It is not available online. As a demonstration of my still lagging Blogger skills, I've captured images of the 4 pages and put them below. Not the best picture quality, I'm afraid.

Wednesday, 28 September 2011

Financial Times numeracy check

This article from the Financial Times is unfortunately quite typical of the financial press (and yes, not only the financial press). Just ponder the plausibility of what is reported in the following paragraph, commenting on a proposal by José Manuel Barroso, European Commission president, to put a tax on financial transactions:
Mr Barroso did not release details of his plan, except to say it could raise some €55bn a year. However, a study carried out by the Commission has found that the tax could also dent long-term economic growth in the region by between 0.53 per cent and 1.76 per cent of gross domestic product.
The article doesn't mention who did the study, or give a link to it. But there's worse. If reported accurately, it seems the European Commission's economists -- or whoever they had do the study mentioned -- actually think that the "3" in 0.53 and the "6" in 1.76 mean something. That's quite impressive accuracy when talking about economic growth. In a time of great uncertainty.

I would bet a great deal that a more accurate statement of the confidence of their results would be, say, between 0 and 2 percent crudely, or maybe even -1 and 3. But that would be admitting that no one has any certainty about what's coming next, and that's not part of the usual practice.
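
To make the point concrete, here is a trivial bit of arithmetic -- entirely my own illustration, with made-up numbers -- showing that if the multiplier behind such an estimate is uncertain by even a modest factor, the honest summary is a coarse range rather than figures quoted to two decimal places.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up illustration: treat the reported figures as midpoints and suppose
# the multiplier behind them is uncertain by +/- 50 per cent.
low_estimate, high_estimate = 0.53, 1.76           # reported GDP impact, per cent
uncertainty = rng.uniform(0.5, 1.5, size=100_000)  # multiplicative model uncertainty

simulated_low = low_estimate * uncertainty
simulated_high = high_estimate * uncertainty

print(f"plausible 'low' figure:  {simulated_low.min():.1f} to {simulated_low.max():.1f} per cent")
print(f"plausible 'high' figure: {simulated_high.min():.1f} to {simulated_high.max():.1f} per cent")
# With that much model uncertainty the second decimal place is pure noise:
# an honest summary is a coarse range, not figures quoted to the nearest
# hundredth of a percentage point.
```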

High-frequency trading: taming the chaos

I have an opinion piece that will be published in the next few days in Bloomberg Views. It is really just my attempt to bring attention to some very good points made in a recent speech by Andrew Haldane of the Bank of England. For anyone interested in further details, you can read the original speech (I highly recommend this) or two brief discussions I've given here looking at the first third of the speech and the second third of the speech.

I may not get around to writing a detailed analysis of the third part, which focuses on possible regulatory measures to lessen the chance of catastrophic Flash Crash-type events in the future. But the ideas raised in this part are fairly standard -- a speed limit on trading, rules which would force market makers to participate even in volatile times (as was formerly required of them) and so on. I think the most interesting part by far is the analysis of the recent increase in the frequency of abrupt market jumps (fat-tail events) over very short times, and of the risks facing market makers and how they respond as volatility increases. I think this should all help to frame the debate over HFT -- which seems extremely volatile itself -- in somewhat more scientific terms.

I also suggest that anyone who finds any of this interesting should go to the Bank of England website and read some of Andrew Haldane's other speeches. Every one is brilliant and highly illuminating.

Monday, 26 September 2011

Linking legal and marketing theories regarding secondary pharma patents

Although the seminar which takes place on the afternoon of Thursday 3 November, 5.00pm to 6.30pm, is officially an IPKat event, its subject matter is one which may appeal to many readers of this weblog too. The speaker is Dr Galit Gonen (head of European patent litigation at Teva Pharmaceuticals) and the title of her paper is "Linkages between legal and marketing theories regarding secondary patents for pharmaceuticals". The venue is the London office of Olswang LLP, 90 High Holborn, where incidentally the IP Finance weblog held its first meeting in January 2008.

A panel of experts will comment briefly on the paper (which is based on Galit’s PhD thesis) before it’s thrown open to the floor for general discussion. Mr Justice Arnold (Patents Court, England and Wales), Professor Jo Gibson (Intellectual Property Institute and Queen Mary Intellectual Property Research Institute) and Chris Stothers (IBIL and Arnold & Porter) will be there and it is hoped that the Intellectual Property Institute's Economics Unit will also be represented.

Refreshments will be provided and registration is free. If you'd like to attend, please email Jeremy at the IPKat here and tell him. He will acknowledge your email when he can.

Overconfidence is adaptive?

A fascinating paper in Nature from last week suggests that overconfidence may actually be an adaptive trait. This is interesting as it strikes at one of the most pervasive assumptions in all of economics -- the idea of human rationality, and the conviction that being rational must always be more adaptive than being irrational. Quite possibly not:

Humans show many psychological biases, but one of the most consistent, powerful and widespread is overconfidence. Most people show a bias towards exaggerated personal qualities and capabilities, an illusion of control over events, and invulnerability to risk (three phenomena collectively known as ‘positive illusions’) [2, 3, 4, 14]. Overconfidence amounts to an ‘error’ of judgement or decision-making, because it leads to overestimating one’s capabilities and/or underestimating an opponent, the difficulty of a task, or possible risks. It is therefore no surprise that overconfidence has been blamed throughout history for high-profile disasters such as the First World War, the Vietnam war, the war in Iraq, the 2008 financial crisis and the ill-preparedness for environmental phenomena such as Hurricane Katrina and climate change [9, 12, 13, 15, 16].

If overconfidence is both a widespread feature of human psychology and causes costly mistakes, we are faced with an evolutionary puzzle as to why humans should have evolved or maintained such an apparently damaging bias. One possible solution is that overconfidence can actually be advantageous on average (even if costly at times), because it boosts ambition, morale, resolve, persistence or the credibility of bluffing. If such features increased net payoffs in competition or conflict over the course of human evolutionary history, then overconfidence may have been favoured by natural selection [5, 6, 7, 8].

However, it is unclear whether such a bias can evolve in realistic competition with alternative strategies. The null hypothesis is that biases would die out, because they lead to faulty assessments and suboptimal behaviour. In fact, a large class of economic models depend on the assumption that biases in beliefs do not exist [17]. Underlying this assumption is the idea that there must be some evolutionary or learning process that causes individuals with correct beliefs to be rewarded (and thus to spread at the expense of individuals with incorrect beliefs). However, unbiased decisions are not necessarily the best strategy for maximizing benefits over costs, especially under conditions of competition, uncertainty and asymmetric costs of different types of error [8, 18, 19, 20, 21]. Whereas economists tend to posit the notion of human brains as general-purpose utility maximizing machines that evaluate the costs, benefits and probabilities of different options on a case-by-case basis, natural selection may have favoured the development of simple heuristic biases (such as overconfidence) in a given domain because they were more economical, available or faster.
 The paper studies this question in a simple analytical model of an evolutionary environment in which individuals compete for resources. If the resources are sufficiently valuable, the authors find, overconfidence can indeed be adaptive:
Here we present a model showing that, under plausible conditions for the value of rewards, the cost of conflict, and uncertainty about the capability of competitors, there can be material rewards for holding incorrect beliefs about one’s own capability. These adaptive advantages of overconfidence may explain its emergence and spread in humans, other animals or indeed any interacting entities, whether by a process of trial and error, imitation, learning or selection. The situation we model—a competition for resources—is simple but general, thereby capturing the essence of a broad range of competitive interactions including animal conflict, strategic decision-making, market competition, litigation, finance and war.
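
Here is a stripped-down toy version of that kind of model -- a sketch under my own simplifying assumptions, not the authors' actual formulation. Two agents compete pairwise for a resource of value r; each claims it if its (possibly biased) view of its own capability beats a noisy estimate of its opponent; if both claim, they fight at a cost c each and the truly stronger one wins. Comparing an overconfident type against an unbiased population shows the overconfident type pulling ahead as r/c grows.

```python
import numpy as np

rng = np.random.default_rng(42)

def mean_payoff(bias_a, bias_b, r=2.0, c=1.0, noise=1.0, n=200_000):
    """Average payoff to type A (self-assessment bias `bias_a`) in pairwise
    contests against type B, for a resource worth r and a fight costing c each."""
    cap_a = rng.normal(size=n)            # true capabilities
    cap_b = rng.normal(size=n)

    # Each side claims the resource if its (biased) view of itself beats a
    # noisy estimate of the opponent's true capability.
    claim_a = (cap_a + bias_a) > (cap_b + rng.normal(scale=noise, size=n))
    claim_b = (cap_b + bias_b) > (cap_a + rng.normal(scale=noise, size=n))

    payoff_a = np.zeros(n)
    both = claim_a & claim_b
    only_a = claim_a & ~claim_b
    payoff_a[only_a] = r                                               # unopposed claim
    payoff_a[both] = np.where(cap_a[both] > cap_b[both], r, 0.0) - c   # costly fight

    return payoff_a.mean()

for r in (1.0, 2.0, 4.0):
    resident = mean_payoff(0.0, 0.0, r=r)    # unbiased vs unbiased population
    invader = mean_payoff(1.0, 0.0, r=r)     # overconfident vs unbiased
    print(f"r/c = {r:.0f}: unbiased earns {resident:.3f}, overconfident earns {invader:.3f}")
```
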
Very interesting. But I just had a thought -- perhaps this may also explain why many economists seem to exhibit such irrational exuberance over the value of neo-classical theory itself?

High-frequency trading, the downside -- Part II

In this post I'm going to look a little further at Andrew Haldane's recent Bank of England speech on high-frequency trading. In Part I of this post I explored the first part of the speech which looked at evidence that HFT has indeed lowered bid-ask spreads over the past decade, but also seems to have brought about an increase in volatility. Not surprisingly, one measure doesn't even begin to tell the story of how HFT is changing the markets. Haldane explores this further in the second part of the speech, but also considers in a little more detail where this volatility comes from.

In a well-known study back in 1999, physicist Parameswaran Gopikrishnan and colleagues (from Gene Stanley's group in Boston) undertook what was then the most detailed look at market fluctuations (using data from the S&P Index in this case) over periods ranging from 1 minute up to 1 month. This early study established a finding which (I believe) has now been replicated across many markets -- market returns over timescales from 1 minute up to about 4 days all followed a fat-tailed power law distribution with exponent α close to 3. This study found that the return distribution became more Gaussian for times longer than about 4 days. Hence, there seems to be rich self-similarity and fractal structure to market returns on times down to around 1 second.
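
For anyone who wants to see how such a tail exponent is estimated in practice, here is a minimal sketch using the standard Hill estimator on synthetic data -- a Student-t distribution with three degrees of freedom, whose tail exponent is 3 by construction. On real high-frequency returns the same recipe gives the α close to 3 that Gopikrishnan and colleagues reported.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "returns": a Student-t with 3 degrees of freedom has a power-law
# tail with exponent alpha = 3, mimicking the empirical finding.
returns = rng.standard_t(df=3, size=100_000)

def hill_estimator(x, tail_fraction=0.01):
    """Hill estimate of the tail exponent alpha from the largest |x| values."""
    x = np.sort(np.abs(x))[::-1]        # descending order statistics
    k = int(len(x) * tail_fraction)     # number of tail observations used
    return k / np.log(x[:k] / x[k]).sum()

print(f"Hill estimate of the tail exponent: {hill_estimator(returns):.2f}")  # roughly 3
```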

What about shorter times? I haven't followed this story for a few years. It turns out that in 2007, Eisler and Kertesz looked at a different set of data -- for total transactions on the NYSE between 2000 and 2002 -- and found that the behaviour at short times (less than 60 minutes) was more Gaussian. This is reflected in the so-called Hurst exponent H having an estimated value close to 0.5. Roughly speaking, the Hurst exponent describes -- based on empirical estimates -- how rapidly a time series tends to wander away from its current value with increasing time. Calculate the root-mean-square deviation over a time interval T; for a Gaussian random walk (Brownian motion) this grows in proportion to T to the power H = 1/2. A Hurst exponent higher than 1/2 indicates some kind of interesting persistent correlations in movements.
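
As a concrete illustration of what that means, here is a minimal sketch -- my own, not Eisler and Kertesz's procedure -- which estimates H from the scaling of the root-mean-square deviation with the interval T. For a plain Gaussian random walk it recovers H close to 0.5.

```python
import numpy as np

rng = np.random.default_rng(2)

# A plain Gaussian random walk (Brownian motion), for which H should be ~0.5.
path = np.cumsum(rng.normal(size=200_000))

def hurst_exponent(path, lags=range(10, 1000, 10)):
    """Estimate H from the scaling of RMS displacement with the lag T:
    rms(T) ~ T**H, so H is the slope of log(rms) against log(T)."""
    lags = np.array(list(lags))
    rms = np.array([np.sqrt(np.mean((path[lag:] - path[:-lag]) ** 2)) for lag in lags])
    slope, _ = np.polyfit(np.log(lags), np.log(rms), 1)
    return slope

print(f"Estimated Hurst exponent: {hurst_exponent(path):.2f}")  # close to 0.5
```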

However, as Haldane notes, Reginald Smith last year showed that stock movements over short times have, since around 2005, begun showing more fat-tailed behaviour with H above 0.5. That paper includes a number of figures showing H rising gradually over the period 2002-2009 from 0.5 to around 0.6 (with considerable fluctuation on top of the trend). This rise means that the market on short times has increasingly violent excursions, as Haldane's chart 11 below illustrates with several simulations of time series having different Hurst exponents:
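
The kind of comparison in chart 11 is easy to reproduce with fractional Brownian motion. The sketch below uses one standard recipe (the Cholesky method, chosen for simplicity rather than speed, with H values of 0.5 and 0.7 as my own arbitrary choices) so the wilder excursions of the higher-H series can be seen directly.

```python
import numpy as np

def fractional_brownian_motion(n, hurst, seed=0):
    """One fBm path of length n with the given Hurst exponent, generated via
    Cholesky factorisation of the fractional-Gaussian-noise covariance."""
    rng = np.random.default_rng(seed)
    k = np.arange(n)
    # Autocovariance of fractional Gaussian noise (the increments of fBm)
    gamma = 0.5 * (np.abs(k + 1) ** (2 * hurst)
                   - 2 * np.abs(k) ** (2 * hurst)
                   + np.abs(k - 1) ** (2 * hurst))
    cov = gamma[np.abs(k[:, None] - k[None, :])]        # Toeplitz covariance
    increments = np.linalg.cholesky(cov) @ rng.normal(size=n)
    return np.cumsum(increments)

calm = fractional_brownian_motion(1000, hurst=0.5)   # an ordinary random walk
wild = fractional_brownian_motion(1000, hurst=0.7)   # persistent, wilder path

print("largest excursion with H = 0.5:", round(np.abs(calm).max(), 1))
print("largest excursion with H = 0.7:", round(np.abs(wild).max(), 1))
```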


The increasing wildness of market movements has direct implications for the risks facing HFT market makers, and hence, the size of the bid-ask spread reflecting the premium they charge. As Haldane notes, the risk a market maker faces -- in holding stocks which may lose value or in encountering counterparties with superior information about true prices -- grows with the likely size of price excursions over any time period. And this size is directly linked to the Hurst exponent.

Hence, in increasingly volatile markets, HFTs become less able to provide liquidity to the market precisely because they have to protect themselves:
This has implications for the dynamics of bid-ask spreads, and hence liquidity, among HFT firms. During a market crash, the volatility of prices (σ) is likely to spike. From equation (1), fractality heightens the risk sensitivity of HFT bid-ask spreads to such a volatility event. In other words, liquidity under stress is likely to prove less resilient. This is because one extreme event, one flood or drought on the Nile, is more likely to be followed by a second, a third and a fourth. Recognising that greater risk, market makers’ insurance premium will rise accordingly.

This is the HFT inventory problem. But the information problem for HFT market-makers in situations of stress is in many ways even more acute. Price dynamics are the fruits of trader interaction or, more accurately, algorithmic interaction. These interactions will be close to impossible for an individual trader to observe or understand. This algorithmic risk is not new. In 2003, a US trading firm became insolvent in 16 seconds when an employee inadvertently turned an algorithm on. It took the company 47 minutes to realise it had gone bust.

Since then, things have stepped up several gears. For a 14-second period during the Flash Crash, algorithmic interactions caused 27,000 contracts of the S&P 500 E-mini futures contracts to change hands. Yet, in net terms, only 200 contracts were purchased. HFT algorithms were automatically offloading contracts in a frenetic, and in net terms fruitless, game of pass-the-parcel. The result was a magnification of the fat tail in stock prices due to fire-sale forced machine selling.

These algorithmic interactions, and the uncertainty they create, will magnify the effect on spreads of a market event. Pricing becomes near-impossible and with it the making of markets. During the Flash Crash, Accenture shares traded at 1 cent, and Sotheby’s at $99,999.99, because these were the lowest and highest quotes admissible by HFT market-makers consistent with fulfilling their obligations. Bid-ask spreads did not just widen, they ballooned. Liquidity entered a void. That trades were executed at these “stub quotes” demonstrated algorithms were running on autopilot with liquidity spent. Prices were not just information inefficient; they were dislocated to the point where they had no information content whatsoever.
This simply follows from the natural dynamics of the market, and the situation market makers find themselves in. If they want to profit, if they want to survive, they need to manage their risks, and these risks grow rapidly in times of high volatility. Their response is quite understandable -- to leave the market, or at least charge much more for their service.

Individually this is all quite rational, yet the systemic effects aren't likely to benefit anyone. The situation, Haldane notes, resembles a Tragedy of the Commons in which individually rational actions lead to a collective disaster, fantasies about the Invisible Hand notwithstanding:
If the way to make money is to make markets, and the way to make markets is to make haste, the result is likely to be a race – an arms race to zero latency. Competitive forces will generate incentives to break the speed barrier, as this is the passport to lower spreads which is in turn the passport to making markets. This arms race to zero is precisely what has played out in financial markets over the past few years.

Arms races rarely have a winner. This one may be no exception. In the trading sphere, there is a risk the individually optimising actions of participants generate an outcome for the system which benefits no-one – a latter-day “tragedy of the commons”. How so? Because speed increases the risk of feasts and famines in market liquidity. HFT contribute to the feast through lower bid-ask spreads. But they also contribute to the famine if their liquidity provision is fickle in situations of stress.
Haldane then goes on to explore what might be done to counter these trends. I'll finish with a third post on this part of the speech very soon. 

But what is perhaps most interesting in all this is how much of Haldane's speech refers to recent work done by physicists -- Janos Kertesz, Jean-Philippe Bouchaud, Gene Stanley, Doyne Farmer and others -- rather than studies more in the style of neo-classical efficiency theory. It's encouraging to see that at least one very senior banking authority is taking this stuff seriously.

Saturday, 24 September 2011

Chinese Brands: Does Privatisation Matter?


The challenge of creating durable brands, especially those with traction outside of one's home territory, is not unique to Chinese companies. But the sheer size and potential international reach of Chinese companies make their branding potential a matter of particular interest. It is against this backdrop that I found some intriguing insights in an article that recently appeared in the September 3rd issue of The Economist ("Privatisation in China: Capitalism Confined") here. The focus of the article, based on a study by Professors Jie Gan, Yan Guo and Chenggang Xu, is a typology of privatisation of Chinese companies. The first category contains massive infrastructure and utility providers (such as banks, transport, energy and telecoms). In effect, these companies still remain largely within the purview of government ministries. Branding appears to be a minor or non-existent consideration.

Of more interest are two other categories: (i) joint ventures, comprised of a private (usually a foreign entity) together with a firm backed by the Chinese government; and (ii) companies that are largely in private ownership, but over which the government still exercises various forms of influence. At the risk of generalization, it appears that the second category of company is more attuned to branding matters than the first category. Even with that distinction, certain types of industries appear more likely to be concerned with branding issues than others, for instance, the automobile industry. Let's expand those thoughts.

Joint ventures--As has been often described (and sometimes decried), in the joint venture arrangement, the private, usually Western partner, seeks to gain access to the Chinese market in exchange for sharing its know-how with its Chinese partner. Criticism of this arrangement has centred on the charge that either by premeditated design or by later developments, the foreign partner is pushed aside or even squeezed out.

With respect to branding, most attention has been drawn to the car industry. As attributed to Michael Dunn, a car-industry consultant, the Chinese government has pushed the foreign company "to form 'indigenous brand' joint ventures with intellectual-property and export rights." However, the article goes on to observe that "the efforts of the Chinese joint-venture partners to develop their own brands have yet to produce much success, despite their access to Western technology, vast resources and political pull."

The reason seems to be that, although the Chinese partner is interested in the economic well-being of the company, there is an absence of the long-term commitment that is required to build a brand. In particular, the Chinese representative is more likely to be tied to the government (indeed, that may well be the reason that he was chosen) and therefore it is also likely that he will return to a politically-related position. Under such circumstances, the chances that a joint-venture arrangement will successfully develop a strong brand appear weak.

Largely Private Company--Here the Chinese government appears to have less, or no direct involvement (indirect involvement and financial incentives are a different matter, but perhaps not so different than the situation with Western car companies as well). Again, focusing on the automobile industry, it is here that Chinese car companies have been most successful in brand development, pointing to the BYD, Chery and Geely brands. Further afield, the same situation is said to apply to ZTE and Huawei in the telecoms industry, Lenovo, the PC maker, and TCL, an electronics manufacturer. The common denominator for this has been ascribed to the different type of Chinese management in such companies--"[t]he bosses are not political appointees but charismatic businessmen in pursuit of commercial goals."

There is a potential darker side to these developments. The article goes on to describe other types of "largely private" companies, most of which are in industries that are characterized as "strategic", such as energy, medical equipment and drugs. Here, industrial policy is more blatant, with protection against foreign challengers, liberal R&D support, and subsidized government purchasers. The jury is still out about whether such companies will be able to develop their brands overseas successfully, once they venture out of their supportive local environment.

In this context, it would be instructive to learn whether any research has compared the trajectory of these companies with the success of both Japanese and Korean companies to create world-famous companies with powerful brands spanning the globe. More generally, it will be interesting to track the success of Chinese brands as a function of the degree that such companies are more, or less, privatised.

Yahoo's Patent Bag

There's a little article over on the New York Times about potential buyers renewing their interest in Yahoo. The company's investments in the Chinese e-commerce group Alibaba as well as its 35% stake in Yahoo Japan are often seen as potentially valuable assets. Indeed an investment group has already begun a USD 1.6 billion tender offer for shares in Alibaba (see here) which would value that company at USD 32 billion and Yahoo's stake at around USD 13 billion.

Nobody has yet focussed on the IP rights in Yahoo. Thomson Innovation is today recording 3051 individual patent families and currently 657 granted US patents, as well as a huge number of patent applications currently in process. The range of patent rights is fairly wide and a brief review shows that it covers many aspects of Internet technology. This author has not yet reviewed the portfolio in any detail, but given its volume, it would be surprising if there were not at least some golden nuggets in the bag.

The recent Google/Motorola Mobility and Nortel deals showed the value of patents in the telecommunications sector. Much of their value has been due to the development of standards using patented technology. This has been encouraged by the telecommunications standards bodies, which accept that stakeholders in the standards development process want to receive rewards based on licensing of their patents. On the other hand, the Internet community has been much more reluctant to adopt standards built on patented technology requiring payment of licenses. There's still nothing to stop a company from patenting its technology, but the W3C wants to see royalty-free licenses, as its patent policy clearly states. This means that the patents may have a lower value than otherwise (as there is no mechanism to obtain royalties).

Friday, 23 September 2011

Brouwer's fixed point theorem...why mathematics is fun

**UPDATE BELOW**

I'm not going to post very frequently on Brouwer's fixed point theorem, but I had to look into it a little today. A version of it was famously used by Ken Arrow and Gerard Debreu in their 1954 proof that general equilibrium models in economics (models of a certain kind which require about 13 assumptions to define) do indeed have an equilibrium set of prices which makes supply equal demand for all goods. There's a nice review article on that here for anyone who cares.

Brouwer's theorem essentially says that when you take a convex set (a disk, say, including both the interior and the boundary) and map it into itself in some continuous way, there has to be at least one point which stays fixed, i.e. is mapped into itself. This has some interesting and counter-intuitive implications, as some contributor to Wikipedia has pointed out:
The theorem has several "real world" illustrations. For example: take two sheets of graph paper of equal size with coordinate systems on them, lay one flat on the table and crumple up (without ripping or tearing) the other one and place it, in any fashion, on top of the first so that the crumpled paper does not reach outside the flat one. There will then be at least one point of the crumpled sheet that lies directly above its corresponding point (i.e. the point with the same coordinates) of the flat sheet. This is a consequence of the n = 2 case of Brouwer's theorem applied to the continuous map that assigns to the coordinates of every point of the crumpled sheet the coordinates of the point of the flat sheet immediately beneath it.

Similarly: Take an ordinary map of a country, and suppose that that map is laid out on a table inside that country. There will always be a "You are Here" point on the map which represents that same point in the country.
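
Just for fun, here is a tiny numerical illustration -- my own example, with an arbitrarily chosen map. Brouwer guarantees a fixed point for any continuous map of the square into itself, though it doesn't tell you how to find one; this particular map happens to be a contraction, so simple iteration converges to it.

```python
import numpy as np

def f(p):
    """An arbitrary continuous map of the unit square [0, 1]^2 into itself."""
    x, y = p
    return np.array([0.5 + 0.3 * np.sin(np.pi * y),
                     0.5 + 0.3 * np.cos(np.pi * x)])

# Start anywhere in the square and iterate; because this particular map is a
# contraction, the iterates converge to the fixed point Brouwer promises.
p = np.array([0.0, 0.0])
for _ in range(100):
    p = f(p)

print("fixed point:", p)
print("image of it:", f(p))   # the same point, to numerical precision
```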

**UPDATE**

In comments, "computers can be gamed" rightly points out that the theorem only works if one considers a continuous mapping of a set into itself. This is very important.

Indeed, go to the Wikipedia page for Brouwer's theorem and in addition to the examples I mentioned above, they also give a three dimensional example -- the liquid in a cup. Stir that liquid, they suggest, and -- since the initial volume of liquid simply gets mapped into the same volume, with elements rearranged -- there must be one point somewhere which has not moved. But this is a mistake unless you carry out the stirring with extreme care -- or use a high viscosity liquid such as oil or glycerine.

Ordinary stirring of water creates fluid turbulence -- disorganized flow in which eddies create smaller eddies and you quickly get discontinuities in the flow down to the smallest molecular scales. In this case -- the ordinary case -- the mapping from the liquid's initial position to its later position is NOT continuous, and the theorem doesn't apply.

Class warfare and public goods

I think this is about the best short description I've heard yet of why wealth isn't created by heroic individuals (a la Ayn Rand's most potent fantasies). I just wish Elizabeth Warren had been appointed head of the new Consumer Financial Protection Bureau. Based on the words below, I can see why there was intense opposition from Wall St. She's obviously not a Randroid:

I hear all this, you know, “Well, this is class warfare, this is whatever.”—No!

There is nobody in this country who got rich on his own. Nobody.

You built a factory out there—good for you! But I want to be clear.

You moved your goods to market on the roads the rest of us paid for.

You hired workers the rest of us paid to educate.

You were safe in your factory because of police forces and fire forces that the
rest of us paid for.

You didn’t have to worry that marauding bands would come and seize everything at your factory, and hire someone to protect against this, because of the work the rest of us did.

Now look, you built a factory and it turned into something terrific, or a great idea—God bless. Keep a big hunk of it.

But part of the underlying social contract is you take a hunk of that and pay forward for the next kid who comes along.

Thinking about thinking

Psychologist Daniel Kahneman has a book coming out in November, Thinking, Fast and Slow. It's all about mental heuristics and the two different functional levels of the brain -- the fast, instinctive part, which is effortless but prone to errors, and the slow, rational part, which takes effort to use but which can (in some cases) correct some of the errors of the first part. His Nobel Prize Lecture from 2002 is a fascinating read, so I'm looking forward to the book.

But meanwhile, edge.org has some videos and text of a series of very informal talks Kahneman recently gave. These give some fascinating insight into the origins of some of his thinking on decision theory, prospect theory (why we value gains and losses relative to our own current position, rather than judge outcomes in terms of total wealth), why corporations make bad decisions and don't work too hard to improve their ability to make better ones, and so on. Here's one nice example of many:

The question I'd like to raise is something that I'm deeply curious about, which is what should organizations do to improve the quality of their decision-making? And I'll tell you what it looks like, from my point of view.

I have never tried very hard, but I am in a way surprised by the ambivalence about it that you encounter in organizations. My sense is that by and large there isn't a huge wish to improve decision-making—there is a lot of talk about doing so, but it is a topic that is considered dangerous by the people in the organization and by the leadership of the organization. I'll give you a couple of examples. I taught a seminar to the top executives of a very large corporation that I cannot name and asked them, would you invest one percent of your annual profits into improving your decision-making? They looked at me as if I was crazy; it was too much.

I'll give you another example. There is an intelligence agency, and the CIA, and a lot of activity, and there are academics involved, and there is a CIA university. I was approached by someone there who said, will you come and help us out, we need help to improve our analysis. I said, I will come, but on one condition, and I know it will not be met. The condition is: if you can get a workshop where you get one of the ten top people in the organization to spend an entire day, I will come. If you can't, I won't. I never heard from them again.

What you can do is have them organize a conference where some really important people will come for three-quarters of an hour and give a talk about how important it is to improve the analysis. But when it comes to, are you willing to invest time in doing this, the seriousness just vanishes. That's been my experience, and I'm puzzled by it.

Wednesday, 21 September 2011

High-frequency trading -- the downside, Part I

Andrew Haldane of the Bank of England has given a stream of recent speeches -- more like detailed research reports -- offering deep insight into various pressing issues in finance. One of his most recent speeches looks at high-frequency trading (HFT), noting its positive aspects as well as its potential negative consequences. Importantly, he has tried to do this in non-ideological fashion, always looking to the data to back up any perspective.

The speech is wide-ranging and I want to explore its points in some detail, so I'm going to break this post into three (I think) parts looking at different aspects of his argument. This is number one; the others will arrive shortly.

To begin with, Haldane notes that in the last decade as HFT has become prominent trading volumes have soared, and, as they have, the time over which stocks are held before being traded again has fallen:
... at the end of the Second World War, the average US share was held by the average investor for around four years. By the start of this century, that had fallen to around eight months. And by 2008, it had fallen to around two months.
It was about a decade ago that trading execution times on some electronic trading platforms fell below the one second barrier. But the steady march to ever faster trading goes on:
As recently as a few years ago, trade execution times reached “blink speed” – as fast as the blink of an eye. At the time that seemed eye-watering, at around 300-400 milli-seconds or less than a third of a second. But more recently the speed limit has shifted from milli-seconds to micro-seconds – millionths of a second. Several trading platforms now offer trade execution measured in micro-seconds (Table 1).

As of today, the lower limit for trade execution appears to be around 10 micro-seconds. This means it would in principle be possible to execute around 40,000 back-to-back trades in the blink of an eye. If supermarkets ran HFT programmes, the average household could complete its shopping for a lifetime in under a second.

It is clear from these trends that trading technologists are involved in an arms race. And it is far from over. The new trading frontier is nano-seconds – billionths of a second. And the twinkle in technologists’ (unblinking) eye is pico-seconds – trillionths of a second. HFT firms talk of a “race to zero”.
Haldane then goes on to consider what effect this trend has had so far on the nature of trading, looking in particular at market makers.

First, he offers a useful clarification of why the bid-ask spread is normally taken as a useful measure of market liquidity (or more correctly, the inverse of market liquidity). As he points out, the profits market makers earn from the bid-ask spread represent a fee they require for taking risks that grow more serious with lower liquidity:
The market-maker faces two types of problem. One is an inventory-management problem – how much stock to hold and at what price to buy and sell. The market-maker earns a bid-ask spread in return for solving this problem since they bear the risk that their inventory loses value. ...Market-makers face a second, information-management problem. This arises from the possibility of trading with someone better informed about true prices than themselves – an adverse selection risk. Again, the market-maker earns a bid-ask spread to protect against this informational risk.

The bid-ask spread, then, is the market-makers’ insurance premium. It provides protection against risks from a depreciating or mis-priced inventory. As such, it also proxies the “liquidity” of the market – that is, its ability to absorb buy and sell orders and execute them without an impact on price. A wider bid-ask spread implies greater risk in the sense of the market’s ability to absorb volume without affecting prices.
The above offers no new insights, but it explains the relationship in a very clear way.
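
To make the "insurance premium" idea concrete, here is a stylized back-of-envelope sketch -- my own numbers and functional form, not Haldane's equation (1). The half-spread a market maker needs to break even is the sum of an adverse-selection term, compensating for occasionally trading against someone better informed, and an inventory term that grows with volatility and with how long a position is likely to be held.

```python
def break_even_half_spread(p_informed, jump, sigma, holding_time,
                           hurst=0.5, risk_charge=0.5):
    """Stylized half-spread = adverse-selection compensation + inventory risk.

    p_informed   : probability the counterparty is better informed
    jump         : size of the price move an informed trader exploits
    sigma        : price volatility per unit time
    holding_time : expected time the inventory is held
    hurst        : scaling exponent of price excursions with time
    risk_charge  : compensation demanded per unit of price risk
    """
    adverse_selection = p_informed * jump
    inventory_risk = risk_charge * sigma * holding_time ** hurst
    return adverse_selection + inventory_risk

# Calm market versus stressed market: more informed trading, volatility
# triples, and price excursions become more persistent (higher H), so the
# quoted spread widens several-fold.
calm = break_even_half_spread(p_informed=0.05, jump=0.5, sigma=0.1, holding_time=10)
stressed = break_even_half_spread(p_informed=0.20, jump=0.5, sigma=0.3,
                                  holding_time=10, hurst=0.7)
print(f"calm half-spread:     {calm:.3f}")
print(f"stressed half-spread: {stressed:.3f}")
```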

Next comes the question of whether HFT has made markets work more efficiently, and here things become more interesting. First, there is a great deal of evidence (some I've written about here earlier) showing that the rise of HFT has caused a decrease in bid-ask spreads, and hence an improvement in market liquidity. Haldane cites several studies:
For example, Brogaard (2010) analyses the effects of HFT on 26 NASDAQ-listed stocks. HFT is estimated to have reduced the price impact of a 100-share trade by $0.022. For a 1000-share trade, the price impact is reduced by $0.083. In other words, HFT boosts the market’s absorptive capacity. Consistent with that, Hendershott et al (2010) and Hasbrouck and Saar (2011) find evidence of algorithmic trading and HFT having narrowed bid-ask spreads.
His Chart 8 (reproduced below) shows a measure of bid-ask spreads on UK equities over the past decade, the data having been normalised by a measure of market volatility to "strip out volatility spikes."


It's hard to be precise, but the figure shows something like a ten-fold reduction in bid-ask spreads over the past decade. Hence, by this metric, HFT really does appear to have "greased the wheels of modern finance."

But there's also more to the story. Even if bid-ask spreads may have generally fallen, it's possible that other measures of market function have also changed, and not in a good way. Haldane moves on to another set of data, his Chart 9 (below), which shows data on volatility vs correlation for components of the S&P 500 since 1990. This chart indicates that there has been a general link between volatility and correlation -- in times of high market volatility, stock movements tend to be more correlated. Importantly, the link has grown increasingly strong in the latter period 2005-2010.
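
For readers with the underlying data to hand, the relationship behind Chart 9 is straightforward to reproduce. The sketch below assumes you have a DataFrame of daily returns for the index constituents (the variable constituent_returns in the usage comment is hypothetical) and computes rolling index volatility alongside the rolling average pairwise correlation of the constituents.

```python
import numpy as np
import pandas as pd

def volatility_and_correlation(returns, window=60):
    """returns: DataFrame of daily returns, one column per index constituent.
    Gives the rolling annualised volatility of the equal-weighted index and
    the rolling average pairwise correlation across constituents (slow but
    simple: a full correlation matrix per window)."""
    index_returns = returns.mean(axis=1)                     # equal-weighted index
    volatility = index_returns.rolling(window).std() * np.sqrt(252)

    def mean_pairwise_corr(frame):
        corr = frame.corr().to_numpy()
        off_diag = corr[np.triu_indices_from(corr, k=1)]     # drop self-correlations
        return np.nanmean(off_diag)

    correlation = pd.Series(
        [mean_pairwise_corr(returns.iloc[i - window:i]) if i >= window else np.nan
         for i in range(1, len(returns) + 1)],
        index=returns.index,
    )
    return volatility, correlation

# Hypothetical usage, given a DataFrame `constituent_returns` of daily returns:
# vol, corr = volatility_and_correlation(constituent_returns)
# vol.corr(corr)   # the volatility/correlation link Haldane highlights
```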


What this implies, Haldane suggests, is that HFT has driven this increasing link, with consequences.
Two things have happened since 2005, coincident with the emergence of trading platform fragmentation and HFT. First, both volatility and correlation have been somewhat higher. Volatility is around 10 percentage points higher than in the earlier sample, while correlation is around 8 percentage points higher. Second, the slope of the volatility / correlation curve is steeper. Any rise in volatility now has a more pronounced cross-market effect than in the past.... Taken together, this evidence points towards market volatility being both higher and propagating further than in the past.
 
This interpretation is as interesting as it is perhaps obvious in retrospect. Markets have calmer periods and stormier periods. HFT seems to have reduced bid-ask spreads in the calmer times, making markets work more smoothly. But it appears to have done just the opposite in stormy times:
Far from solving the liquidity problem in situations of stress, HFT firms appear to have added to it. And far from mitigating market stress, HFT appears to have amplified it. HFT liquidity, evident in sharply lower peacetime bid-ask spreads, may be illusory. In wartime, it disappears. This disappearing act, and the resulting liquidity void, is widely believed to have amplified the price discontinuities evident during the Flash Crash [13]. HFT liquidity proved fickle under stress, as flood turned to drought.
This is an interesting point, and shows how easy it is to jump to comforting but possibly incorrect conclusions by looking at just one measure of market function, or by focusing on "normal" times as opposed to the non-normal times which are nevertheless a real part of market history.

As I said, the speech goes on to explore some other related arguments touching on other deep aspects of market behaviour. I hope to explore these in some detail soon.

Tuesday, 20 September 2011

Software Patents: a Convenient Misnomer for those who Seek to Expropriate IP

In this, the seventh in his series of posts for IP Finance, Keith Mallinson (WiseHarbor) reviews the recent history of software patent protection and the challenges made against it, concluding that the patent system exists to encourage investment in innovation by helping inventors to make a return on their risky investments, and arguing that there is no evidence that patent systems are stifling innovation where inventions are implemented in software.

You can follow Keith on Twitter @WiseHarbor.
Software Patents – a Convenient Misnomer for those who Seek to Expropriate IP 
It makes no sense to disqualify innovative technologies from patentability or limit the rights and remedies associated with those patents on the basis they can be implemented in software on general purpose processors rather than only on dedicated hardware. The “software patent” debate is largely a battle of ideology and business models between those who develop patented technologies that can be implemented in software and implementers who would rather not pay for the privilege of using others’ IP. I focus exclusively on technologies in this article because a large and rising proportion of manufactured products are increasingly software defined. Patentability for “business methods”, such as financial trading algorithms, while also contentious, is an entirely different matter. 
Generosity at others’ expense 
Google has made itself popular from the promise of free software with its Android smartphone operating system (OS) and WebM project with VP8 coder-decoder (codec) for video and Vorbis codec for audio. This promise is as in free beer (i.e., something for no payment) rather than merely free speech (i.e., being allowed to say what you like). The proposition obviously seems very appealing to many implementers, including software developers and device manufacturers, who like the idea of getting something for nothing. 
However, this proposition is tricky because many software programs infringe the unencumbered rights of IP owners who justifiably do not wish to give away the fruits of their labour for nothing. In Free and Open Source Software (FOSS) the “free” refers to the freedom to copy and re-use the software, rather than to the price of the software. A fundamental requirement with Open-Source Software (OSS) is that “licenses shall not require a royalty or other fee”. 
Whereas these licences generally require licensees to contribute their patented and copyrightable works royalty free, that is far from sufficient to ensure (F)OSS implementations will actually be completely free of charge to licensees. FOSS licenses are private contractual orderings that have no impact on the obligations of those IP holders outside any given contract’s reach.  Many IP owners decide not to sign away their rights in (F)OSS licenses and others may be oblivious for a long time that specific (F)OSS software programs are infringing their rights. Despite efforts to prevent (F)OSS programs infringing un-liberated IP (that is, IP held by third parties outside the reach of the FOSS license), it is impossible to ensure this will not occur – particularly with respect to patents. 
(F)OSS licensees may be found by courts of law or agencies such as the U.S. International Trade Commission (ITC) to be wilfully or otherwise infringing IP, with resulting legal costs, financial damages awards and even injunctions or exclusion orders preventing them from selling their products. Some of these licensees might not have expected this due to misleading statements from (F)OSS proponents and given that patent infringement was typically not a problem with packaged software, sold under license from the likes of Microsoft, that has prevailed for decades on PCs and elsewhere. Indemnities – derived from cross-licensing among various IP owners and commonly provided to licensees of proprietary software – are rarely available or as extensive with (F)OSS. In fact, attempts by either IP owners or FOSS distributors to enter into license agreements with third party IP holders have often been deemed antithetical to the FOSS movement (or even in conflict with the terms of FOSS licenses) and so they have, until recent months, been the exception rather than the rule.
Until very recently, Google appears to have provided little or nothing more than rhetorical support for its beleaguered Android licensees, who are signing patent licenses or being sued for infringement by proprietary software providers such as Apple with its iOS, Microsoft with Windows Phone and others. On the receiving end of the onslaught are HTC, Samsung and others implementing this open source OS. Perhaps Google will assist in various counter-suits following its recent purchase of 1,000 patents from IBM and acquisition of Motorola Mobility with a trove of 17,000 patents.
 Free riders infringe 
Tensions are running high between IP owners and those who shun paying patent fees for anything implemented in software including standards-based technologies. Already 12 patent owners have joined discussions to create a pool to collect royalties from those that implement the VP8 video codec standard. VP8 is based on technology developed by the Google acquisition On2 for its WebM project. This is purported to be completely royalty free (5th September 2011):

“Some video codecs require content distributors and manufacturers to pay patent royalties to use the intellectual property within the codec. WebM and the codecs it supports (VP8 video and Vorbis audio) require no royalty payments of any kind. You can do whatever you want with the WebM code without owing money to anybody.”
Whereas there is no reason to prevent VP8 being developed free of any copyright or patent fees to any of its developers who agree to such terms, the codec is most likely infringing the patents of these 12 and many others. Non-assert provisions in VP8 licensing anticipate that Google has essential patents --and licensees might too. Different, independently developed, programs will likely not infringe software copyrights, where code is not copied, but all codecs implementing a given standard will infringe the same set of patents that are essential to that standard. Software developers, by definition, cannot design around essential patents when implementing a standard. Similar (or “competing”) standards may well have common technologies among them which are also covered by the same patents. This is particularly the case in Codec algorithms, which represent cumulative technological developments made over many years, including many players and at substantial costs. Different codec standards setting organisations (SSOs) can try to design around patents in formulating their standards. While this is possible to some extent, it is difficult, and impossible to eliminate all infringements while also seeking to achieve high-performance functionality exploiting latest technologies. In some cases, SSOs might not even be aware of some patents their standards are infringing.
 MPEG LA licenses the H.264 video codec extensively. More than one thousand licensees have agreed royalty terms compensating 28 different licensors through a patent pool. These fees are due even where the software program implementing the codec is subject to royalty free copyright licensing, as is the case with the x264 – “a free software library and application for encoding video streams into the H.264/MPEG-4 AVC format, and is released under the terms of the GNU GPL [a royalty free licensing agreement]”.
 
 With other codecs reading on hundreds of patents and significant similarities among codecs, it is also most likely VP8 infringes some of the patents that are also infringed by other video standards including H.264. The question is simply how many patents and which of them are infringed? 
 Changing the rules  
Meanwhile, the patentability of any technologies and algorithms implemented in software are being significantly challenged with lobbying to policy makers around the world. 
Those who argue against “software patents”, including some absurd and unsubstantiated claims, seek to invalidate issued and pending patents associated with, for example, smartphone features and video codec standards. Others have suggested that the perceived problems with “software patents” could be remedied by requiring that those patents be licensed on a royalty free basis in certain contexts (i.e., in standards). The fact that many standards-essential and other technologies implemented in software infringe numerous different patents, rather than typically just a few patents in a drug or simple mechanical device, is no justification to deny any patent rights at all. A combination of bilateral (i.e., cross licensing) and multilateral arrangements (i.e., with patent pools) can be used to negotiate rates and collect payments efficiently. The average aggregate royalty for video codecs on a DVD player is just a few dollars and aggregate standards-essential patent licensing on mobile phones rarely costs more than 10% of the wholesale product price. Moreover, the unsubstantiated claim that FOSS developers are prohibited by the terms of FOSS licenses from paying these royalties has been debunked and shown to be little more than an attempt by certain implementers to gain business model advantage. 
Processors and software in everything  
The products and services we all use every day are increasingly software-defined and computer-intensive, as microprocessors are built into many different manufactured items. Software predominantly implements the innovative algorithms for a wide variety of technological functions, from touch-screen scrolling and bar-code reading to turn-by-turn navigation. Further examples among many include anti-lock brakes, eco-friendly air conditioners, medical equipment, programmable lathes and toys.
The existence of microprocessors and computers over the last 30 years has fostered a marketplace for downstream development of computer programs performing a wide variety of functions with relatively low barriers to entry. For example, there are thousands and thousands of smartphone application developers.  Many of these set themselves up with just a computer and a few software tools in their sitting rooms or dormitories. 
Computer technologies with general-purpose processors are increasingly substituting for application-specific designs. In some cases, state-of-the-art general-purpose processors make it possible to implement technologies (e.g., radio interference reduction, video compression or touch-screen gesture recognition) largely in software, compared with the more hardware-specific implementations, such as Application-Specific Integrated Circuits (ASICs), that were once required. Mobile communications protocols including GSM, HSPA and LTE can now be implemented in Software-Defined Radios (SDRs). SDRs are already commonplace in network equipment and increasingly in terminal devices such as phones and dongles. Similarly, whereas older codec implementations relied significantly on hardware, with dedicated signal processors and hardware accelerators, it is now possible to implement codecs on general-purpose processors, with customised hardware and accelerators used mostly for high-end devices.
Substituting software for hardware implementations of a given radio or codec technology is a design decision driven by considerations of feature performance, power consumption, heat dissipation, semiconductor die size, time-to-market and fixed-versus-variable manufacturing cost structure.
The speed, ease and low cost of coding in software, rather than having to design and fabricate dedicated hardware, do not negate the innovative steps, substantial costs and risks entailed in developing new ideas and technologies, regardless of their means of implementation. For example, development of anti-lock brakes requires lab work and drive testing under various conditions, and medical instrumentation techniques (e.g., measurement of oxygen saturation in the blood) require lab work and extensive clinical trials. Algorithms are first conceived, then modified and refined to improve performance, reliability and safety on the basis of this work. Software just happens to be an efficient and effective way to implement them.
 
What is patentable?  
So-called "software patents" do not actually claim software per se: instead they describe algorithms and processes that can be performed by a programmed computer. It is such computer-implemented techniques, not the software itself, that can be eligible for patent protection.
In Information and Communications Technology (ICT), it is the underlying useful, novel and non-obvious techniques, which can be implemented in hardware or software to perform real-world functionality such as radio communication, audio noise reduction, video encoding and touch-screen operation, that are potentially patentable. To be patent-eligible in the U.S., a claimed method must generally involve a machine or a transformation of an article; that is, it must describe a series of steps that use physical means to produce a result or effect in the physical world. All the above examples, and many other technical processes, do just that, whether they are, or could be, implemented in hardware or software.
  In 2002, the European Commission proposed a Directive on the patentability of computer-implemented inventions, but the European Parliament rejected the final draft with the result that national laws were not harmonised. The European Patent Office, which generally adapts its regulations to new EU law, has no reason or incentive to modify its practice of granting patents on certain computer-implemented inventions, according to its interpretation of the European Patent Convention and its implementing regulations. 
Copyrights protect software owners from having their programs duplicated, but they do not prevent reverse-engineering of software-implemented innovations. Similarly, it is increasingly possible to implement previously hardware-based functions, such as radio modems and video codecs, on more general processors such as SDRs, and with software-based rather than hardware-based graphics acceleration. It would be nonsensical to disqualify patented innovations from protection simply because independent advances in processor and software technology make them implementable on general-purpose processors as well as in dedicated hardware.
 Openness and patents in standards 
Whereas some assert that open standards should be royalty free, the International Telecommunication Union defines open standards, among other factors, as follows:
"Open Standards" are standards made available to the general public and are developed (or approved) and maintained via a collaborative and consensus driven process. "Open Standards" facilitate interoperability and data exchange among different products or services and are intended for widespread adoption. 
Intellectual property rights (IPRs) – IPRs essential to implement the standard to be licensed to all applicants on a worldwide, non-discriminatory basis, either (1) for free and under other reasonable terms and conditions or (2) on reasonable terms and conditions (which may include monetary compensation). Negotiations are left to the parties concerned and are performed outside the SDO [standards development organisation].
There are numerous open standards. However, IP policies differ widely among SSOs. A relatively small number of SSOs have IPR policies that require participants to license essential patent claims on a royalty-free basis, but these policies can only bind those who elect to join such organisations, so standards implementers can still be exposed to IP infringement claims by non-members. Most SSOs, including those for mobile communications and for video and audio codecs, accept that patent owners can license their IP on a (Fair), Reasonable and Non-Discriminatory basis, including a royalty.
For example, H.264 is open in the sense that the specifications are freely available from a copyright perspective. One can distribute an implementation of H.264 freely as long as one abides by certain terms. However, implementers of the H.264 standard are required to pay patent royalties. 
Software is no exception 
There is no good reason to abandon the widespread practice of allowing patents on technologies that are implemented in software. The patent system exists to encourage investment in innovation by enabling inventors to make a return on their risky investments. There is no evidence that patent systems are stifling innovation where inventions are implemented in software. On the contrary, innovation continues apace in ICT, as illustrated by the rapid development and extensive adoption of smartphones and video encoding technologies, to name just two of numerous examples, as I have explained in my previous articles with IP Finance.

A bleak perspective... but probably true

I try not to say too much about our global economic and environmental future as I have zero claim to any special insight. I do have a fairly pessimistic view, which is reinforced every year or so when I read in Nature or Science the latest bleak assessment of the rapid and likely irreversible decline of marine ecosystems. I simply can't see humans on a global scale changing their ways very significantly until some truly dreadful catastrophes strike.

Combine environmental issues with dwindling resources and the global economic crisis, and the near-term future really doesn't look so rosy. On this topic, I have been enjoying an interview at Naked Capitalism with Satyajit Das (part 1, part 2, with part 3 I think still to come), who has worked for more than 30 years in the finance industry. I'm looking forward to reading his new book "Extreme Money: Masters of the Universe and the Cult of Risk." Here's an excerpt from the interview which, as much as any analysis I've read, seems like a plausible picture for our world over the next few decades:
There are problems to which there are no answers, no easy solutions. Human beings are not all powerful creatures. There are limits to our powers, our knowledge and our understanding.

The modern world has been built on an ethos of growth, improving living standards and growing prosperity. Growth has been our answer to everything. This is what drove us to the world of ‘extreme money’ and financialisation in the first place. Now three things are coming together to bring that period of history to a conclusion – the end of financialisation, environmental concerns and limits to certain essential natural resources like oil and water. Environmental advocate Edward Abbey put it bluntly: “Growth for the sake of growth is the ideology of a cancer cell.”

We are reaching the end of a period of growth, expansion and, maybe, optimism. Increased government spending or income redistribution, even if it is implemented (which I doubt), may not necessarily work. Living standards will have to fall. Competition between countries for growth will trigger currency and trade wars – we are seeing that already with the Swiss intervening to lower their currency and emerging markets putting in place capital controls. All this will further crimp growth. Social cohesion and order may break down. Extreme political views might become popular and powerful. Xenophobia and nationalism will become more prominent as people look for scapegoats.

People draw comparisons to what happened in Japan. But Japan had significant advantages – the world’s largest savings pool, global growth which allowed its exporters to prosper, a homogeneous, stoic population who were willing to bear the pain of the adjustment. Do those conditions exist everywhere?

We will be caught in the ruins of this collapsed Ponzi scheme for a long time, while we try to rediscover more traditional sources of growth like innovation and productivity improvements – real engineering rather than financial engineering. But we will still have to pay for the cost of our past mistakes which will complicate the process.

Fyodor Dostoevsky wrote in The Possessed: “It is hard to change gods.” It seems to me that that’s what we are trying to do. It may be possible but it won’t be simple or easy. It will also take a long, long time and entail a lot of pain.

Monday, 19 September 2011

The Elusive Search for an E-Book Pricing Model


Perhaps it was appropriate that, shortly after buying a Kindle last week, I settled down into a transatlantic flight home, with the 12 September edition of the Wall Street Journal in hand. And there it was, staring me in the face on page 1 of the Marketplace section, an article entitled "e-Book Prices Prop Up Print Siblings." Now that I have a vested interest in the e-reader platform, the question of how e-book pricing differs from that of print books has become a matter of intense interest. The facts and figures as set out in the article make for interesting reading.

First, let's make a comparison between a hypothetical print book retailing at $26.00 and an e-book offering retailing at $12.99. Taking the print book first, from the $26.00 price one subtracts $13.00 for the retailer, $3.90 in royalty payments to the author and $3.25 for shipping and other handling, leaving a gross amount (don't forget returns) per unit sold of $5.85. By comparison, from the $12.99 retail price, one subtracts $3.90 for the retailer, $2.27 in royalty payments to the author, and $0.90 for digital rights management, warehousing and production/distribution, leaving an amount per unit sold of $5.92 (returns are not a likely problem here).
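
For readers who like to check the arithmetic, here is a minimal sketch in Python of the margin calculation above. The dollar figures are the article's; the 30% retailer share anticipates the "agency pricing" split discussed below; the function and variable names are simply my own illustration.

def publisher_margin(retail_price, retailer_cut, author_royalty, other_costs):
    """What's left for the publisher per unit sold, before returns."""
    return retail_price - retailer_cut - author_royalty - other_costs

# Hardcover: $26.00 list, $13.00 to the retailer, $3.90 author royalty,
# $3.25 shipping and other handling -> $5.85 per unit.
print_margin = publisher_margin(26.00, 13.00, 3.90, 3.25)

# E-book: $12.99 list, 30% (about $3.90) to the retailer under agency pricing,
# $2.27 author royalty, $0.90 for DRM, warehousing and production/distribution
# -> roughly $5.92 per unit.
ebook_margin = publisher_margin(12.99, 0.30 * 12.99, 2.27, 0.90)

print(f"print: ${print_margin:.2f}, e-book: ${ebook_margin:.2f}")

Roughly the same money flows to the publisher per unit, from a sticker price that is half as high, which is exactly why the split matters so much.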

These figures show that the more the e-book publisher can increase the retail price, the greater its ultimate margin will be. Wait a minute, however! Wasn't the whole idea of the e-reader to offer the ubiquitous price of $9.99 per book? Raising the price from $9.99 seems antithetical to that pricing nirvana. What's the story here?

To appreciate these figures better, consider the major change that has taken place in the e-book industry. The starting point is described as "the wild days" of using the most popular titles as a loss leader (i.e., $9.99 or less), which days "are mostly over." In its place is an elevated e-reader price scheme anchored in what the article described as "agency pricing." As adopted by six major publishers and championed by Apple, "[p]ublishers worried about the deeply discounted $9.99 digital best-sellers promoted by Amazon.com Inc. agreed to set the consumer price of their digital titles. Under this model, retailers act as the agent for each sale and take 30%, returning 70% to the publisher."

The article then goes on to state that "[t]he major significance of agency pricing was that it made it impossible for a retailer to discount the price without the approval of the publisher." For discounting, read Amazon and its widespread $9.99 per book pricing policy, described as a means for building market share, even if "it actually lost money on the sale of many of the book industry's most popular titles."

Personally, I am a bit disheartened, because I had dreamed of using the Kindle platform to purchase book after book at that magic price of $9.99. Those dreams have been shattered. Standing back, however, from my disappointment, this apparently steady increase in the price of e-books is a fascinating example of a nascent industry seeking to find a workable pricing model.

On the one hand, we have the comment from an unidentified publishing executive that "[i]f e-book prices land at 99 cents in the future we're not going to be in good shape." Certainly, the e-book platform carries with it the potential for downward pricing of books.

On the other hand, when is the difference in cost between the e-book and print versions of a book small enough to induce me to factor in the non-quantifiable tactile benefits of embracing a print version, the better to dog-ear, highlight and ultimately to place on the top row of my bookshelf? The problem is that, when I find out the answer to that question, the print version alternative may no longer be available. If so, then what will be the pressures on e-book pricing that will prevent an ever-increasing sticker price?

Don't get me wrong. As a published author, I am the last person to begrudge my publisher's bottom line. That said, as a reading consumer, I want to enjoy the benefits of the e-book platform at a reasonable (whatever that means) cost. Finding that balance remains an elusive goal. Something tells me that this so-called "agency pricing" model will not be the last word on the topic.
 
