Holographic Universe

To see a World in a grain of sand, and a Heaven in a wild flower, Hold Infinity in the palm of your hand, and Eternity in an hour.
— William Blake (“Auguries of Innocence”, 1803)

To efficiently search the computational universe, the market needs to be organized for effective information reusability. Traditionally, the role of reusing information is played by financial intermediaries such as investment banks, brokers, and traders. They are the information processing and storage elements of the financial universe. At a very high level, they differ simply in this: investment bankers have capital and clients, and can act as principals in trades; brokers have information and clients, and can act as agents representing clients in trades; and proprietary traders have capital but no clients, and must find information to process in order to generate profitable trades.

Information has a peculiar property: its use does not result in its consumption. Most goods and services are transformed into waste as a result of being used. But this is not the case with information. Properly reusing information can result in substantial savings. To appreciate the power of information reusability, consider the following “wildcatting” metaphor from the book “Contemporary Financial Intermediation”:

Think of a very large geographic grid in which each intersection represents a potential oil well. Now suppose there are many oil drilling entrepreneurs, and further suppose that after drilling a dry hole the law requires that the landscape be restored to its initial condition. Thus, there is no way to know whether a particular location has been drilled unless there is an operating well at that location. If a broker simply collects and disseminates information about the drilling activities of each explorer, the cost of redrilling dry holes can be eliminated. Without the broker, society will bear the unnecessary cost of searching for oil in locations known to be unproductive.

Go Wild Cats! Massive oil fields discovered in 1911, South Belridge, California. (Image Credit: Edward Burtynsky).


One can see that the larger the grid, the more compelling the need for a broker. For a given grid size, the more complex the object of search (e.g., one whose attributes are less easily observable or more subject to judgment), the more important the skills and reputation of the broker. An important aspect of brokerage is that it can be performed without taking on substantial risk. In other words, brokerage services can, in principle, be produced risklessly; the processing of risk is not central to their production. But this is not the case with trading services built around asset transformation.

So what exactly do traders do? Traders look for mismatches in asset attributes across the financial universe, and seek to transform them. Common asset attributes include duration (or term to maturity), divisibility (or unit size), liquidity, credit risk, and numeraire (i.e., which currency). Typically, traders can choose to shorten duration by holding assets of longer duration than their own liabilities; reduce unit size by holding assets of larger unit size than their liabilities; enhance liquidity by holding assets that are less liquid than their liabilities; or reduce credit risk by holding assets that are more likely to default than their liabilities. By holding assets denominated in a different currency than their liabilities, they can also alter the numeraire of the assets. Traders collect a premium for carrying out these asset transformation services in the financial markets, which they then share with the investors with whom they trade and who become the asset owners.
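A minimal sketch of how these attributes might be represented in code, assuming purely hypothetical field names and made-up attribute values (an illustration only, not the MVP's actual data model):

```python
# Sketch: represent the asset attributes named above and report which
# transformations a book performs, given its assets and its liabilities.
from dataclasses import dataclass

@dataclass
class Claim:
    duration_years: float     # term to maturity
    unit_size: float          # divisibility
    liquidity_score: float    # higher = more liquid
    default_prob: float       # credit risk
    currency: str             # numeraire

def transformations(assets: Claim, liabilities: Claim) -> list[str]:
    """List the transformations implied by holding `assets` against `liabilities`."""
    performed = []
    if assets.duration_years > liabilities.duration_years:
        performed.append("duration shortening")
    if assets.unit_size > liabilities.unit_size:
        performed.append("unit-size reduction")
    if assets.liquidity_score < liabilities.liquidity_score:
        performed.append("liquidity enhancement")
    if assets.default_prob > liabilities.default_prob:
        performed.append("credit-risk reduction")
    if assets.currency != liabilities.currency:
        performed.append("numeraire change")
    return performed

# Long, lumpy, illiquid, risky assets funded by short, small, liquid, safe liabilities:
print(transformations(
    Claim(10.0, 1_000_000, 0.2, 0.05, "USD"),
    Claim(0.25, 1_000, 0.9, 0.001, "EUR"),
))
```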

The case of duration transformation is particularly illustrative. The yield curve is thought to be a “biased predictor” of future spot interest rates owing to a liquidity premium attached to longer duration claims. Typically, borrowers prefer to borrow long-term, and lenders prefer to lend short-term. This is the theory of the term structure of interest rates according to Sir John Hicks, the British economist who coined the term “forward interest rates” around 1925 in a London café when considering this matter. When traders are introduced into such a financial world, they can finance the purchase of long-term assets with short-term liabilities, and profit from doing so. In fact, traders will continue to perform this transformation until the liquidity premium is competed down to the marginal cost of intermediation. According to Hicks, if the yield curve were an unbiased predictor of future spot interest rates, there would be no profit in performing duration transformation. The liquidity premium is what keeps traders in the game.
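To make the duration example concrete, here is a small worked sketch with hypothetical spot rates: it derives the forward rate implied by the curve, the liquidity premium over an assumed expected future spot rate, and the expected carry from financing a two-year asset with one-year borrowing rolled over once.

```python
# Sketch of duration transformation under a liquidity premium.
# All rates are hypothetical; annual compounding, two-period world.

def implied_forward_rate(y1: float, y2: float) -> float:
    """One-year rate, one year forward, implied by the 1y and 2y spot rates."""
    return (1 + y2) ** 2 / (1 + y1) - 1

y1, y2 = 0.02, 0.03              # hypothetical 1-year and 2-year spot rates
forward = implied_forward_rate(y1, y2)

expected_future_spot = 0.035     # assumed market expectation of next year's 1y rate
liquidity_premium = forward - expected_future_spot

# Finance a 2-year asset with 1-year borrowing rolled over once:
asset_growth = (1 + y2) ** 2
funding_cost = (1 + y1) * (1 + expected_future_spot)
expected_carry = asset_growth / funding_cost - 1

print(f"implied forward rate : {forward:.4%}")            # ~4.01%
print(f"liquidity premium    : {liquidity_premium:.4%}")  # ~0.51%
print(f"expected carry       : {expected_carry:.4%}")     # ~0.49%
```

If the curve were an unbiased predictor, expected_future_spot would equal the implied forward rate and the expected carry would vanish, which is exactly Hicks’s point.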

One can say that the regenerative power of the world’s economy is derived from the redistributive property of trading in the global financial markets. Trading reallocates capital resources to where they are most needed, diversifies or shifts risks, unifies markets, and improves liquidity. Using a biological metaphor, we can consider trading as the “lifeblood” of an organism, giving it strength and energy by circulating nutrients to the right places in the body, shifting and distributing weight across the body, and making different body parts work together to improve overall mobility. Without financial trading, the world’s economy would fragment into a thousand uncoordinated pools of activity, operate at a greatly reduced level of efficiency, and yield poor economic utility for all of its market participants. In short, a rather unhappy lump of organic matter, totally stressed out and unable to move at all.

So we seek to identify and occupy a sustainable trading niche that lies somewhere along the spectrum between the extremes of arbitrage (riskless and transitory, but requiring high speed) and speculation on future asset prices (high risk, long holding periods, and massive memory requirements). Arbitrages of the non-statistical kind are rare, and we cannot rely on them to build a steady trading business. Speculative opportunities, on the other hand, abound, but they carry high risks, and we won’t last long in the trading business without a reliable means of hedging them. What we are looking for would be classified as “risk-controlled arbitrage” or “limited-risk speculation”. Therefore, we should learn to design trades that hedge out risks, and find trade combinations that work well without tying up capital beyond the duration required to execute the trades. A candidate niche that might work for us is arbitrage trading based on speculative statistical models. We don’t necessarily have the fastest speed, but we are adequately provisioned when it comes to organizing storage memory in the cloud. We can work with lots of bits.

We have a hunch that information reusability is a key piece of the puzzle when it comes to assembling the parts of the MVP. We will be sure to examine this aspect of information a little closer, from both the cross-sectional angle (the same information reused across instruments or markets) and the intertemporal angle (reused through time). We know that redundant searches can be expensive, both computationally and budget-wise, and a well-designed MVP should provide architectural support for this type of saving. It’s a giant grid to explore, after all, and we do not have a lot of computational resources (neither bits nor time) to waste.
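One concrete way the MVP could support this is to memoize expensive searches so the same dry hole is never drilled twice. A minimal sketch, with the signal name, instrument, and scoring logic all purely hypothetical:

```python
# Sketch: cache an expensive statistical search so identical queries are
# answered from memory, cross-sectionally (per instrument) and
# intertemporally (per as-of date).
import functools

@functools.lru_cache(maxsize=None)
def search_signal(signal_name: str, instrument: str, as_of: str) -> float:
    print(f"running expensive search: {signal_name} / {instrument} / {as_of}")
    return (hash((signal_name, instrument, as_of)) % 100) / 100.0  # placeholder score

search_signal("momentum_12m", "ES", "2015-06-30")   # computed (a "drill")
search_signal("momentum_12m", "ES", "2015-06-30")   # served from cache (no redrilling)
```

In a real system the cache would live in shared cloud storage rather than in one process, so that the whole exploration grid is visible to every searcher; the principle is the same.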

"It from Bit": A Holographic Universe for Computation. (Image Credit: Ars Technica).

"It from Bit": A Holographic Universe for Computation. (Image Credit: Ars Technica).

It is natural to ask: how might the universe be finite in computational resources? For instance, how many storage bits are there in the entire universe? Let’s start with the concept of “entropy” which, when considered as information (aka Shannon entropy), is measured in bits. The total quantity of bits is related to the total degrees of freedom of matter and energy. For a given energy in a given volume, there is an upper limit to the density of information, aka the Bekenstein bound, about the whereabouts of all the particles that compose matter in that volume. This suggests that matter itself cannot be subdivided infinitely many times; there has to be some ultimate limit on the number of fundamental particles. The reason is that if a particle could be subdivided infinitely into sub-particles, then the degrees of freedom of the original particle would be infinite, since the degrees of freedom of a particle are the product of the degrees of freedom of its sub-particles. This would violate the maximal entropy density calculated by Jacob Bekenstein.
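For reference, the bound can be stated compactly: the maximum information I, in bits, that fits inside a sphere of radius R enclosing total energy E is

```latex
% Bekenstein bound, expressed in bits
\[
  I \;\le\; \frac{2\pi R E}{\hbar c \ln 2}
\]
% Finite R and E therefore imply finitely many degrees of freedom in the region.
```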

We can thus consider a fundamental particle as represented by a bit (i.e., “zero” or “one”) of information. What matters for computational purposes is not the spatial extent of the universe, but the number of physical degrees of freedom located in a causally connected region. In the standard cosmological models, the region of the universe to which we have causal access at this time is limited by the finite speed of light and the finite age of the universe since the Big Bang. In 2002, Seth Lloyd calculated the total number of bits available to us in the universe for computation to be around 10^122. This upper bound is not fixed, but grows with time as the horizon expands and encompasses more particles.

Similarly, one might consider upper bounds on the computational speed of the universe. In fact, Hans-Joachim Bremermann suggested back in 1962 that there is a maximum information-processing rate, which we now call Bremermann’s limit, for a physical system of finite size and energy. In other words, a self-contained system in the material universe has a maximum computational speed. For example, a computing device with the mass of the entire Earth operating at Bremermann’s limit could perform approximately 10^75 computations per second. This is important when designing cryptographic algorithms. We only hope that we’ll never run into Bremermann in computational finance.
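The figure above follows from taking the system’s mass-energy as its budget for distinguishing states, roughly m c^2 / h operations per second; a quick sanity check of the Earth-mass number:

```python
# Sketch: Bremermann's limit, ~ m * c^2 / h bit-operations per second.
C = 2.998e8           # speed of light, m/s
H = 6.626e-34         # Planck constant, J*s
EARTH_MASS = 5.97e24  # kg

ops_per_second = EARTH_MASS * C ** 2 / H
print(f"Earth-mass computer at Bremermann's limit: {ops_per_second:.1e} ops/s")
# -> roughly 8e74, i.e. on the order of 10^75 operations per second
```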

“Is she real? Or are we but shadows in Plato’s cave?” C3PO wondered.


The Holographic Principle postulates that the total information content of a region of space cannot exceed one quarter of the area of its encompassing surface, measured in Planck units; in the cosmological setting, that surface is the event horizon. The principle was first proposed by Gerard ‘t Hooft in 1993 and further elaborated by Leonard Susskind. A simple calculation of the size of our universe’s event horizon today, based on the event horizon created by the measured value of dark energy, gives an information bound of 10^122 bits, the same figure Lloyd found using the “particle horizon”. The event horizon also expands with time, and at present is roughly comparable in radius to the particle horizon. Regardless of which method is used as the basis for the calculation, they agree on an upper bound for the information content of a causal region of the universe.
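A back-of-the-envelope version of that number, assuming a cosmological event horizon of radius on the order of R ≈ 1.6 × 10^26 m (set by the measured dark-energy density) and a Planck length l_p ≈ 1.6 × 10^-35 m:

```latex
% Holographic bound: one quarter of the horizon area in Planck units, in bits
\[
  I_{\max} \;\approx\; \frac{A}{4\, l_p^{2}\,\ln 2}
           \;=\; \frac{4\pi R^{2}}{4\, l_p^{2}\,\ln 2}
           \;\approx\; \frac{\pi\,(1.6\times 10^{26})^{2}}{(1.6\times 10^{-35})^{2}\,\ln 2}
           \;\sim\; 10^{122}\ \text{bits}
\]
```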

It was the theoretical physicist John A. Wheeler who first considered “the physical world as made of information, with energy and matter as incidentals.” Could our universe, in all its richness and diversity, really be just a bunch of bits? To some, an ultimate theory of reality must be concerned less with fields or spacetime than with information exchange among physical processes. That sounds a lot like computational finance; think, for example, of how today’s term structure of interest rates already encodes forward interest rates of every duration projecting into the future. It is quite possible that our physical universe is a hologram after all.

In a universe limited in resources and time — concepts like real numbers, infinitely precise parameter values, differentiable functions, the unitary evolution of a wave function – are a fiction: a useful fiction to be sure, but a fiction nevertheless, and with the potential to mislead. It then follows that the laws of physics, cast as idealized infinitely precise mathematical relationships inhabiting a Platonic heaven, are also a fiction when it comes to applications to the real universe.
— P.C.W. Davies (2007)

References:

  1. Greenbaum, Stuart I. and Thakor, Anjan V. (2007). Contemporary Financial Intermediation (Second Edition). Academic Press.
  2. Davies, P.C.W. (2007, March 6). The Implications of a Cosmological Information Bound for Complexity, Quantum, Information and the Nature of Physical Law. Retrieved from: http://arxiv.org/abs/quant-ph/0703041
  3. Lloyd, Seth (2002). Computational Capacity of the Universe. Physical Review Letters, 88, 237901. Retrieved from: http://cds.cern.ch/record/524220/files/0110141.pdf
  4. Bekenstein, Jacob D. (2003, August). Information in the Holographic Universe — Theoretical results about black holes suggest that the universe could be like a gigantic hologram. Scientific American, 289, pp. 58-65. Retrieved from: http://www.nature.com/scientificamerican/journal/v289/n2/pdf/scientificamerican0803-58.pdf
  5. Bremermann, Hans J. (1962). Optimization through Evolution and Recombination. In: Self Organizing Systems (Editors: M.C. Yovits, G.T. Jacobi, and G.D. Goldstein), Spartan Books, pp. 93-106.

Computationally Efficient Markets

Four centuries ago, telescopes were turned to the sky for the first time – and what they saw ultimately launched much of modern science. Over the past twenty years I have begun to explore a new universe – the computational universe – made visible not by telescopes but by computers.
— Stephen Wolfram ("A New Kind of Science", 2002)

So here are some questions of interest to quantitative finance: What do we expect to see when peering into the computational universe? What do "market imperfections" or "market inefficiencies" look like, exactly, from a computational viewpoint? More practically, how do we go about finding them?

The classical view of market efficiency, which can be traced to the work of Eugene Fama and Paul Samuelson in the 1960s, is centered on the notion of how well market prices reflect all available information. Specifically, in an informationally efficient market, price changes must be unforecastable if they fully incorporate the information and expectations of all market participants. In other words, one can’t routinely profit from information that is already out there. Keep in mind, though, that the process of impounding information into market prices might not be instantaneous. The resulting proposition, called the Efficient Market Hypothesis (or EMH), is “disarmingly simple to state, has far-reaching consequences for academic theories and business practice, and yet is surprisingly resilient to empirical proof or refutation,” according to Andrew Lo. We won’t be joining the ongoing academic debate here. Instead, we look to discover what is deep and enduringly true about the markets, but from a practical computational point of view.
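One standard way to make “unforecastable” operational is to check whether return autocorrelations are statistically indistinguishable from zero. A minimal sketch on synthetic data (a real test would substitute actual return series and a more careful statistic):

```python
# Sketch: weak-form efficiency check via return autocorrelations.
import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(0, 0.01, size=2500)   # stand-in for ~10 years of daily returns

def autocorr(x: np.ndarray, lag: int) -> float:
    x = x - x.mean()
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

n = len(returns)
for lag in (1, 2, 5, 10):
    rho = autocorr(returns, lag)
    # Under the null of no predictability, rho is approximately N(0, 1/n).
    print(f"lag {lag:2d}: rho = {rho:+.4f}  ({abs(rho) * np.sqrt(n):.2f} sigma)")
```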

For perspective, let's recall two interesting bits of history. The first happened at a time just before the telegraph, in the 1840s, when Paul Reuter started a financial information service using carrier pigeons to carry messages between Aachen and Brussels, the missing link connecting Berlin and Paris. The carrier pigeons were much faster than the post train, giving Reuter faster access to stock news from the Paris stock exchange. The carrier pigeon was a brilliant idea: a superior means of transport that got information from point A to point B, making markets more efficient in the process. It remained an advantageous technology until the telegraph came along.

The second bit of history occurred in 1960, when the Ford Foundation provided a grant for the University of Chicago to create the Center for Research in Security Prices (CRSP) tapes. These were Univac computer tapes that held stock price data from the stock exchanges, and they were given away at cost to anybody who cared to study them. This was a historic turning point, because before then nobody had ready access to organized stock price data. The CRSP tapes contained stock price data going all the way back to 1926, and the database was analyzed extensively. As recounted by Robert Shiller, the CRSP tapes were “a breakthrough of science, of computers, of someone getting the data organized, and getting it available.”

The carrier pigeons and the CRSP tapes: both are technologies of their times. They are the underlying computational resources, for communicating and storing data, that enable market data to be processed into knowledge. The view of the markets as seen by carrier pigeons (if only they could read the tiny strips of paper tied to their feet) is likely very different from that of any man on the street in Paris or Berlin. Similarly, amidst the vast featureless landscape of seemingly random fluctuations in stock prices on the CRSP tapes, remnants of statistical memory persist as market anomalies when, and only when, processed through a FORTRAN program running on a computer. It is as though we cannot simply talk about markets being informationally efficient, or inefficient, without also considering the bounds of the underlying computational resources required to establish the truth of such a statement.

Such a computational view of market efficiency was proposed in 2009. It recognized that markets may be efficient for some investors, but not for others, based solely on differences in their computational capabilities. In particular, it is plausible that a high-memory strategy may “feed off” low-memory ones. In other words, it is precisely the presence of the low-memory strategies that creates opportunities, not present initially, for high-memory strategies to exploit. Our understanding here is that “high” and “low” can refer either to memory span (long or short) or to data resolution (high or low). This is certainly a piece of news worth knowing. We now realize that not only should the MVP be built with copious working memory and generous data storage, it must also learn to use them well so as not to get picked off easily by others.
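The following toy simulation is not the construction used in the 2009 paper, but it illustrates the flavor of the argument: a crowd of memory-1 chasers impacts prices and thereby induces autocorrelation that was absent from the underlying news, and a strategy with enough memory to estimate that autocorrelation then has a positive expected edge.

```python
# Toy sketch: low-memory behaviour creating structure for a higher-memory
# strategy to detect. Impact size and noise scale are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
news = rng.normal(0, 1, size=n)       # unpredictable fundamental shocks

returns = np.zeros(n)
impact = 0.3                          # hypothetical impact of the chasing crowd
for t in range(1, n):
    chase = np.sign(returns[t - 1])             # memory-1: chase yesterday's move
    returns[t] = news[t] + impact * chase       # their trading moves today's price

rho1 = np.corrcoef(returns[:-1], returns[1:])[0, 1]
edge = (np.sign(returns[:-1]) * returns[1:]).mean()   # follow-the-last-move edge
print(f"induced lag-1 autocorrelation: {rho1:.3f}")
print(f"expected profit per period from exploiting it: {edge:.3f}")
```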

More recently, a surprising connection linking market efficiency in finance to computational efficiency in computer science was identified. Philip Maymin showed that markets are efficient if and only if P = NP, which is shorthand for the claim that “any computational problem whose solution can be verified quickly (namely, in polynomial time) can also be solved quickly.” While the question of whether P = NP remains an open problem (with a million-dollar reward for anyone who can resolve it in this new millennium), the overwhelming consensus among computer scientists today is that P ≠ NP. Following Maymin’s result, we may therefore expect to observe greater inefficiency in the market, especially as data grows, given the exponentially increasing complexity of checking all possible strategies. One cannot help but wonder whether the observed anomalies of value, growth, size, and momentum, among others, are just expressions of this computational phenomenon. After all, everything else being equal, more data should lead to more anomalies, or otherwise make existing anomalies more profitable. That would be good news for traders, if true.
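A back-of-the-envelope sense of why “checking all possible strategies” blows up, using a binary toy in which a strategy is any rule mapping the last k daily up/down moves to a buy/sell decision (only the logarithm of the rule count is reported, since the counts themselves are astronomical):

```python
# Sketch: verifying one rule against n days of data is cheap (linear in n),
# but the number of distinct rules grows doubly exponentially in lookback k.
import math

for k in (5, 10, 20, 30):
    n_patterns = 2 ** k                        # possible up/down histories of length k
    log10_rules = n_patterns * math.log10(2)   # number of rules = 2 ** n_patterns
    print(f"lookback {k:2d}: {n_patterns:>13,d} patterns, ~10^{log10_rules:,.0f} rules")
```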

In the final analysis, one might simply argue that perfectly efficient markets are impossible. If markets were perfectly efficient, both informationally and computationally, there would be no profit in gathering information or conducting searches. As a result, there would no longer be a reason to trade, and markets would eventually collapse. This is also known as the Grossman-Stiglitz paradox. Therefore, a degree of market inefficiency is necessary; it determines the amount of effort traders are willing to expend to gather, process, and trade on information using the computational resources available to them. In other words, traders need to be compensated for their efforts with sufficient profit opportunities, i.e., market inefficiencies. Put differently, a trader’s livelihood depends upon market imperfections, just as a market’s normal functioning depends upon traders plying their trade.

Consider how markets impound information into prices, as described in the paper “How Markets Slowly Digest Changes in Supply and Demand”: “Because the outstanding liquidity of markets is always very small, trading is inherently an incremental process, and prices cannot be instantaneously in equilibrium and cannot instantaneously reflect all available information. There is nearly always a substantial offset between latent offer and latent demand, which only slowly gets incorporated in prices.” Under this view, market impact is a mechanical (or rather, statistical) phenomenon: the net market impact of all trades is not only much the same, but also history-dependent. It is surprising to us that long memory in order flow can be compatible with the unpredictability of asset returns; they seem an incongruous mix. We propose to apply machine learning techniques to filter out market noise, and to trade on the resulting signals in a statistical manner. While Mr. Market may appear characteristically unhurried these days, his mood swings have become more frequent, and he behaves more erratically as well. Only a machine can figure him out.
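A toy sketch of the “long memory in order flow” idea, assuming (hypothetically) that parent orders of random sign are split into a heavy-tailed number of child orders; the sign series then stays correlated over long lags even though each parent’s direction is a coin flip:

```python
# Toy sketch: order splitting produces persistent order-flow signs.
import numpy as np

rng = np.random.default_rng(2)
signs = []
while len(signs) < 200_000:
    parent_sign = rng.choice([-1.0, 1.0])        # each parent's direction is random
    n_children = 1 + int(10 * rng.pareto(1.5))   # heavy-tailed parent size (arbitrary)
    signs.extend([parent_sign] * n_children)
sign_series = np.array(signs[:200_000])

for lag in (1, 10, 100, 1_000):
    rho = np.corrcoef(sign_series[:-lag], sign_series[lag:])[0, 1]
    print(f"lag {lag:5d}: sign autocorrelation = {rho:+.3f}")
```

Reconciling such persistent order flow with nearly uncorrelated returns is precisely the puzzle the paper addresses.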

Therefore, we think that having the right architecture for Big Data statistical search baked into the MVP is essential for success in finding both types of inefficiencies, informational and computational. A basic source of information comprising only past prices, coupled with a superior statistical package, might still reveal a few tradable patterns. A richer source of information, e.g., economic news or government statistics in addition to past prices, should provide many more tradable patterns, especially if generous computational resources are available to process and refine them. However, a retail-level trading setup with just historical price data and entry-level computing capabilities (e.g., basic technical analysis) is unlikely to survive today’s super-competitive and hyper-efficient markets, and may end up as fodder for faster or higher-memory strategies.

It is important that MVP be defined right from the get-go to be able to fully explore the opportunity frontier of market imperfections, both informationally and computationally. Remember: Big Data is key, so computation cannot be all about speed; memory and storage matter quite a bit, too. In fact, lots of bits.

The carrier pigeons, the telegraph, the ticker tape, the telephone, the teletypes, all the way to modern-day fiber optics and microwave links, are but computational resources for transmitting raw bits. In our view, computational resources for transforming bits to yield statistical trading insights are equally, if not more, important. They define the essence of trading, which many career pigeons today sadly don’t get.

But this pigeon is grounded...


What we observe is not Nature itself but Nature exposed to our method of questioning.
— Werner Heisenberg (1962)

References:

  1. Patterson, Scott (2010). The Quants: How a New Breed of Math Whizzes Conquered Wall Street and Nearly Destroyed It (First Edition). Crown Business.
  2. Fama, Eugene F. (1970, May). Efficient Capital Markets: A Review of Theory and Empirical Work. Journal of Finance. Vol. 25, Issue 2, pp. 383-417. Retrieved from: http://www.e-m-h.org/Fama70.pdf
  3. Lo, Andrew (2007). Efficient Markets Hypothesis. In: The New Palgrave: A Dictionary of Economics (2nd Edition). Retrieved from: http://web.mit.edu/alo/www/Papers/EMH_Final.pdf
  4. Hasanhodzic, Jasmina and Lo, Andrew W. and Viola, Emanuele (2009, August 31). A Computational View of Market Efficiency. Retrieved from: http://lfe.mit.edu/wp-content/uploads/2014/03/Computational-View-of-Market-Efficiency.pdf
  5. Maymin, Philip Z. (2013). A New Kind of Finance. In: Irreducibility and Computational Equivalence: 10 Years After Wolfram’s A New Kind of Science. pp. 89-99. Hector Zenil, ed., Springer Verlag. Retrieved from: http://arxiv.org/pdf/1210.1588.pdf
  6. Maymin, Philip Z. (2011, February 28). Markets are Efficient If and Only If P=NP. Algorithmic Finance 1:1, 1-11. Retrieved from: http://ssrn.com/abstract=1773169
  7. Grossman, Sanford J. and Stiglitz, Joseph E. (1980, June). On the Impossibility of Informationally Efficient Markets. The American Economic Review, 70, pp. 393-408. Retrieved from: https://www.aeaweb.org/aer/top20/70.3.393-408.pdf
  8. Bouchaud, Jean-Philippe and Farmer, J. Doyne and Lillo, Fabrizio (2008, September 11). How Markets Slowly Digest Changes in Supply and Demand. Available at SSRN: http://ssrn.com/abstract=1266681 or http://dx.doi.org/10.2139/ssrn.1266681

The Limits of Reason

Our brains are wired for narrative, not statistical uncertainty. And so we tell ourselves simple stories to explain complex things we don’t — and, most importantly, can’t — know. The truth is that we have no idea why stock markets go up or down on any given day, and whatever reason we give is sure to be grossly simplified, if not flat out wrong.
— Nassim Nicholas Taleb (“The Black Swan”)

Cognitive biases aside, are there real, physical limits to what we'll ever know? How do we go about finding out about them? And where do we start?

We start in 1686 with Gottfried Leibniz, who stated that “a theory has to be simpler than the data it explains; otherwise it does not explain anything.” This makes perfect sense, because one can always construct an arbitrarily complex mathematical law to describe random, pattern-less data. Conversely, if the only law that describes some data is arbitrarily complicated, then the data is actually random and contains no patterns. In the same vein, William of Occam made a very good point with his metaphorical razor.

Gregory Chaitin published a delightful book in 2005, called “Meta Math!: The Quest for Omega”, which is “based on measuring information and showing that some mathematical facts cannot be compressed into a theory because they are too complicated.” Now, that is a piece of news worth knowing before we embark on our otherwise quixotic quest for data, statistics, and ultimately, knowledge. And what is knowledge for us but a neat summary of why and how things work (or don’t work!) in the market. Nobody wants to be running a fool’s errand!

Perhaps that piece of news shouldn’t have come as a surprise in the first place. After all, we are already familiar with Kurt Gödel’s incompleteness proof in mathematical logic. Alan Turing, too, famously confronted us with his Halting Problem in every computation theory class ever taught. Apparently, there is no shortage of things that man or machine can’t do, in both mathematics and computer science. It’s a sobering thought in this modern era of Big Data before Skynet.

As Chaitin pointed out, comprehension is compression; a useful theory is a compression of the data. The implication for quantitative finance is clear: a useful theory of the market is a statistical compression of the market data. The simpler the statistics, the better we understand the market. This is a comforting thought for us. Complexity can be overwhelming, always putting us at a loss for words to explain it. Simplicity is good.
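A crude way to see this in action is to use an off-the-shelf compressor as a stand-in for a statistical model: a structured (here, strongly trending) sign series compresses well, while a pattern-less one barely compresses at all. A minimal sketch on synthetic data:

```python
# Sketch: compression ratio as a crude proxy for "how much theory" the data admits.
import zlib
import numpy as np

rng = np.random.default_rng(3)
random_bits = rng.integers(0, 2, size=800_000, dtype=np.uint8)                   # coin flips
trending_bits = np.repeat(rng.integers(0, 2, size=8_000, dtype=np.uint8), 100)   # runs of 100

for name, bits in [("random signs", random_bits), ("trending signs", trending_bits)]:
    raw = np.packbits(bits).tobytes()            # 100,000 bytes each
    ratio = len(zlib.compress(raw, 9)) / len(raw)
    print(f"{name:>14s}: compressed to {ratio:.1%} of original size")
```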

We always assume things work a certain way for a reason. That “know-why” is called knowledge. The ancient Greeks called it epistêmê. Furthermore, if something is true, it must be true for a reason. This is the principle of sufficient reason that Leibniz, and the ancient Greeks, believed in. Mathematicians believe it, too. They demand proofs for everything. Mere instances will not do, not even after seeing millions and millions of concrete data points. Irrefutable proofs are required for absolute conviction.

Are the methods of quantitative finance more like physics, math, or computer science? (Image Credit: Sci. Am.).


So it came as a big surprise to everyone when it was revealed that there are mathematical facts that are true for no reason. In fact, Chaitin showed that there are infinitely many mathematical facts that have no theories to explain them. He called these facts irreducible, both computationally and logically. In the context of markets, that means there are infinitely many market truths that cannot be neatly summarized in a statistic. These market truths could be anomalies, stylized facts, or regularities, depending on whom you ask. That’s good news for us. The opportunity frontier is boundless! And they are very complicated. But how could this be?

It turns out that there is a very neat construct called the omega number, a very long string of zeroes and ones, which Chaitin invented to help us understand this ultimate limit on logic and reason. We won’t go into the details here, except to note that omega is a specific, well-defined number that cannot be calculated by any computer program. Interested readers are referred to Chaitin’s highly readable article, “The Limits of Reason”. The implication for mathematics is immediately clear: there can never be a “theory of everything” for all of mathematics. Similarly, trying to find a reason behind every market behavior may be a fool’s errand. In many cases, we should be happy to simply note their axiomatic existence, beyond logic, reason, or proof. The problem is: we can’t really tell when they are irreducible and when they are not! And what if we are wrong?

Incompleteness tells us that extensive statistical analysis can be extremely convincing, even if there is no underlying reason that we can find to explain the observed market phenomenon. So, what does this mean for our Zeroth Rule of Trading: “We do not trade what we don’t understand”? Do we still insist on understanding, or do we give up on opportunities that we can’t explain? How do we decide? And what do we do the next time the cartographer presses her new treasure map into our hands and whispers knowingly, “Trust me, it’s all good”? Do we still strive to first understand how it works before sending the MVP, risking life and limb, on a perilous journey across the seven seas?

It seems we have arrived at an impasse. We cannot abandon reason; and yet reason eludes us when we try to understand every market truth that we encounter. In the next post, we shall take a closer look at the definition of market imperfections and market inefficiencies specifically, and see if we can find some clues there to resolve this tricky situation.

References:

  1. Taleb, Nassim Nicholas (2010). The Black Swan: The Impact of the Highly Improbable (2nd Edition). Random House.
  2. Chaitin, Gregory (2005). Meta Math! The Quest for Omega. Pantheon Books, New York.
  3. Chaitin, Gregory (2006, March). The Limits of Reason. Scientific American, pp. 74-81. Retrieved from: http://www.umcs.maine.edu/~chaitin/sciamer3.pdf
  4. Yanofsky, Noson S. (2013, August). The Outer Limits of Reason: What Science, Mathematics, and Logic Cannot Tell Us. MIT Press.
  5. Dvorsky, George (2013, April 26). How Skynet might Emerge from Simple Physics. Io9: We Come from the Future. Retrieved from: http://io9.com/how-skynet-might-emerge-from-simple-physics-482402911

Why Computational Finance?

Why computational finance? A friend asked recently. The surface reason, as I have told myself, is that computational finance is a natural extension of computer science and computing, for which I have been adequately trained and at which I am reasonably good. But surely it can’t be for the pleasure of hunting down market inefficiencies, day in and day out? My friend insisted, having examined certain elements of my business model canvas posted earlier. Now I have to search a little deeper within myself to understand my own recent career choice.

It rewards me to be truthful. The freedom to discover market truths any which way I choose, unbeholden to any entity or anyone. Anomalies, stylized facts, regularities; these are all synonyms of market truths. The ability to trade in amounts small or large as the opportunities and risks reveal themselves under dispassionate analysis, stripped of all earthly disguises and human biases. It's the joy of listening for and finding heavenly music amidst the market cacophonies; it's the thrill of discovering a grain of gold among sands. It's not for everyone. It carves out a place for me in this world and that suits me well.

It reminds me of my childhood years collecting and trading stamps with my neighbors and classmates. I seek beautiful stamps. I enjoy organizing them and rearranging them endlessly for display in my album. I trade for stamps that I want with my circle of friends, and dispose of duplicates. I marvel at beautiful graphic art the size of a thumbnail, and learn a lot about countries that I have never visited. Everyone’s stamp collection becomes more complete and beautiful from trading with each other. For kids at that age, discovering that individual differences can become the sole reason for trade, and that changing minds encourages even more trading, is a revelation. Kids who speak a different language or dialect at home, and have different ideas of aesthetics, become my best trading buddies; they are an excellent source of coveted foreign stamps, and they trade generously for what I have aplenty. Nobody minds at all that some kids keep changing their minds from one day to the next. All that can be solved by trade-backs. We don’t fight over stamps, we simply trade them. And everyone goes home happy. Augmented by contributions from aunts and uncles (I have 8 in total, and that’s just from my mother’s side), plus occasional trips to my dad’s office (which receives plenty of foreign correspondence), my stamp collection grows steadily and seemingly without effort. I love it. Just reminiscing about childhood makes me want to relive that time again.

A beautiful stamp album (but not mine).


Pursuit of freedom, truth, and the simple joys of daily life. In contrast to being enslaved, body and mind, within my own company, beholden to clients with their whims and fancies, a cog in the supply chain, with no life outside of work. It took me a while to actually realize this, having spent a good part of my career as a struggling entrepreneur. But the choice is starkly simpler now. Some call it the benefit of hindsight. Others say wisdom comes naturally with middle age, but mostly on birthdays.

I may be idealistic, for I believe trading can still be fair and honest. Like it once was, for me. We just have to program it right. Truth and reward must be aligned. Traders need to be rewarded not just for discovering market truths, but also for being truthful in the process of discovery. They cannot manufacture or manipulate truths. That's how we weed out charlatans and pretenders in other scientific pursuits, and we can do the same here. The market never lies, once you get to know it better. It might try to mislead you, feign a zigzag right in front of your eyes, or throw a head fake at you. But it will never lie to you.

So I'd rather be punished daily, for the mistakes that I made within the last 24 hours, and learn to trade better the next day, than to hold my breath in suspense for long periods of toil awaiting an uncertain outcome. Only entrepreneurs are crazy enough to do that. So I salute the Steve Jobs'es of the world, big or small, for they are the real heroes of our economy. They create the beautiful stamps that make collections possible, and keep the rest of us in business with our modest role trading them. And everyone goes home happy.