A.I.’s Beautiful Mind

Analogies between the financial markets and games are a dime a dozen. But can Go, chess and shogi really teach us anything substantive about trading strategies? The principles of the markets are, of course, more qualitative and mysterious than the most complex and strategic board game, Go.

Still, the markets have rules. And parameters. And these can be learned.

A.I. technology owned by Google has recently achieved “superhuman performance” in the games of Go, chess and shogi. Using a combination of “deep convolutional neural networks” and “a tabula rasa reinforcement learning algorithm,” DeepMind’s AlphaGo, AlphaGo Zero and AlphaZero have now mastered Go, chess and shogi through self-play.
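The self-play setup DeepMind describes can be caricatured in a few lines: a program that knows only the rules plays against itself and learns a value for each position from the outcomes. The toy below is purely illustrative — a tabular value function on a trivial Nim game, not DeepMind's deep-network method — but it shows the tabula rasa idea: no human strategy goes in, only the rules.

```python
import random

# Illustrative stand-in for tabula-rasa self-play learning: a tabular
# value function on Nim (remove 1 or 2 stones; taking the last stone wins),
# trained purely from games the program plays against itself.

PILE = 10
MOVES = (1, 2)

value = {}  # stones left (as seen by the player to move) -> value estimate


def estimate(stones):
    return value.get(stones, 0.0)


def choose(stones, explore=0.1):
    # Greedily leave the opponent the worst position, with some exploration.
    legal = [m for m in MOVES if m <= stones]
    if random.random() < explore:
        return random.choice(legal)
    return min(legal, key=lambda m: estimate(stones - m))


def self_play_game(lr=0.5):
    stones, history = PILE, []
    while stones > 0:
        history.append(stones)
        stones -= choose(stones)
    # The player who took the last stone won: propagate the result back,
    # flipping sign each ply because the players alternate.
    result = 1.0
    for s in reversed(history):
        value[s] = estimate(s) + lr * (result - estimate(s))
        result = -result


random.seed(0)
for _ in range(5000):
    self_play_game()

# With enough self-play, positions where stones % 3 == 0 tend to learn
# negative values (they are theoretically losing for the player to move).
print("learned losing positions:", sorted(s for s in value if value[s] < 0))
```

The point of the sketch is the shape of the loop, not the game: play, score, update, repeat — knowledge emerges from the rules alone.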

Is it possible to imagine a future where A.I. is able to take on the market? Is this a good thing?

Not yet, perhaps. But, until very recently, they said the same about Go.

Man versus Machine (II): The Google DeepMind Challenge

Four Seasons Hotel, Seoul, South Korea, 9th–15th March, 2016: Lee Sedol, a human, is challenged by AlphaGo, a computer, to a head-to-head at the ancient Chinese board game, Go.

Man against machine. Five games. For the winner: $1,000,000.

But Lee Sedol is not interested in the money. Nor is he an ordinary man. An 18-time world champion, Lee Sedol is a Go player of mythic proportions: “He carries the hopes of a nation—not to mention a species—on his shoulders.” He expects, he says in the press conference, to win by a “landslide.” So do most of the 100 million viewers tuning in to the live TV coverage in Korea, Japan and China and watching worldwide.

With hindsight, such confidence smacks of hubris. But the idea of A.I. defeating a human at the world’s most complex board game has always been deemed a significant challenge. Back in 1965, I.J. Good—a mathematician and author of Speculations Concerning the First Ultraintelligent Machine—considered the proposition of “Go on a computer”:

The principles are more qualitative and mysterious than in chess, and depend more on judgment. So I think it will be even more difficult to programme a computer to play a reasonable game of Go than of chess.

Lee Sedol’s press-conference responses to his losses in the first (“I’m in shock”), the second (“I am speechless”), and then the third game (“I don’t know what to say today, but I think I will have to express my apologies first”), demonstrate strikingly, tragically even, that DeepMind have programmed a computer to play more than “a reasonable game of Go”.

Everyone is humbled. Even Elon Musk, an early investor in DeepMind, tweets his surprise: “Experts in the field thought A.I. was 10 years away from achieving this.”

AlphaGo’s crushing defeat of Lee Sedol—more brutal even than IBM Deep Blue’s defeat of world chess champion Garry Kasparov in 1997—depicts the heart-breaking moment when hubris is replaced by anagnorisis.

It’s a critical moment. One which rewards analysis.

Lee Sedol reacts to his critical moment with a touch of brilliance, “God’s touch” according to Go players: Move 78 in Game Four, a “wedge” play, is creative, beautiful and as unexpected by AlphaGo as Move 37 in Game Two was unexpected by any human Go player. There was, AlphaGo computes, only a 1-in-10,000 chance of a human playing either move.

“So beautiful,” utters Fan Hui, 3-time European Go champion.

With God’s touch, Lee Sedol salvages not just a game, but his raison d’être. Existential despair is replaced with a seed of hope. For all of us. Lee Sedol is, after all, not just carrying the hopes of a nation; he’s carrying the hopes of a species on his shoulders.

So far, so impressive. But the firmament is still shuddering with the quakes of DeepMind’s recent victories, the implications of which are, ultimately, unknown. DeepMind have just shot up the S-curve, and no one knows where the plateau lies.

Machine versus Machine

The devil, so the saying goes, is in the detail. And in A.I., there is a lot of detail. Inconceivable amounts, literally. But the detail does not attract over 100 million viewers. Particularly when the human element is removed: “Mastering the game of Go without human knowledge”. The detail gets published in the academic science journal Nature (19 October 2017) and on arXiv (5 December 2017).

AlphaGo Zero defeats AlphaGo—the vanquisher of Lee Sedol—100 games to 0. Less than three months later, AlphaZero achieves “a superhuman level of play in the games of chess and shogi, as well as Go”. These are highly impressive achievements in their own right. But when you realise that AlphaZero’s general-purpose reinforcement learning algorithm can, tabula rasa and within 24 hours, convincingly defeat “a world-champion program in each case”, it really blows the mind.

As if this weren’t enough, AlphaZero achieves this with “arguably, a more ‘human-like’ approach to search”: more qualitative than quantitative. Where AlphaGo used 48 tensor processing units (TPUs) and separate policy and value neural networks, AlphaGo Zero and AlphaZero use only 4 TPUs and a single, deeper neural network:

AlphaZero searches just 80 thousand positions per second in chess and 40 thousand in shogi, compared to 70 million for Stockfish and 35 million for Elmo.

This is not simply about big data and quantitative analysis. There is a much thicker, more complex irony at play here. Given a more generically applicable algorithm, with a deeper neural network and a more selective approach to search, A.I. is able to sidestep human experience completely and achieve its own level of superhuman knowledge.
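The difference between examining every move and examining only promising ones can be made concrete with a toy game tree. The sketch below is a deliberately crude caricature — the “prior” is a made-up heuristic, standing in for the learned network that guides AlphaZero’s Monte Carlo tree search — but it shows why a selective searcher visits orders of magnitude fewer positions than an exhaustive one.

```python
import random

# Toy contrast between exhaustive search and selective, prior-guided search.
# The "prior" here is an arbitrary heuristic; in AlphaZero it is a learned
# neural network. All names and numbers are illustrative.

BRANCH, DEPTH = 8, 4


def leaf_value(path):
    # Deterministic pseudo-random evaluation for a toy game position.
    return random.Random(hash(path)).uniform(-1, 1)


def prior(path, move):
    # Stand-in for a policy prior scoring how promising each move looks.
    return leaf_value(path + (move,))


def search(path, depth, top_k, counter):
    counter[0] += 1  # count every position examined
    if depth == 0:
        return leaf_value(path)
    # Selective step: only expand the top_k most promising moves.
    moves = sorted(range(BRANCH), key=lambda m: -prior(path, m))[:top_k]
    # Negamax: the best move maximises the negated value of the child.
    return max(-search(path + (m,), depth - 1, top_k, counter) for m in moves)


exhaustive, selective = [0], [0]
v_full = search((), DEPTH, BRANCH, exhaustive)  # examine every move
v_sel = search((), DEPTH, 2, selective)         # only the 2 most promising

print("positions examined:", exhaustive[0], "vs", selective[0])
```

With branching factor 8 and depth 4, the exhaustive search examines 4,681 positions while the selective one examines 31 — the same shape of gap, in miniature, as 70 million positions per second versus 80 thousand.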

Machine versus Market

Within 24 hours and without human input, AlphaZero wipes out 3,000 years of human knowledge. Its only requirements: the rules of the games. Not just any games, either. Chess, shogi and Go are the most complex, qualitative and mysterious board games out there. In many ways, it is the stuff of nightmares. And yet, amidst the terror there is beauty, too.

For DeepMind, A.I. is “a multiplier for human ingenuity.” All of a sudden, then, there is much to be learned from these purer, more brutal machine-learned strategies of play, with their unusual blend of aggression and creativity. In games, yes, but this technology might also be applied “to other structured problems, like protein folding, reducing energy consumption or searching for revolutionary new materials.”

Perhaps, even, to the financial markets.

Is the key to “better algorithmic recipes” required by “The Warped Recipes of Algorithms” hidden within AlphaZero’s general reinforcement learning algorithm and neural network? Can such technology help markets avoid Quant Quakes? Is there a conceivable future where markets, run via a purer form of machine-learned knowledge, search for strategies not based upon greater margins of win or loss, but upon a more democratic, gradual and continuous winning curve?
