Tuesday, August 11, 2015

Your Money is Safe with Us.

Famous last words are two a billion. Or maybe even a trillion. Pennies are long past, as are my old days when enough money to pay off the robbers could be carried in a small pouch while riding through the woods.

Now the robbers all live in the deep darkness of Bretton Woods, which was planted in neat rows to enable transparency.

How's that Revolution going?

The banking system is safe. Change just one small letter and it's Barking. As we were all shown in the P&B this morning by a fella, Tyler Durden, who has been in the woods and managed to get out alive with his pouch intact. He regaled us thus:
The $12 Trillion Fat Finger: 
How A "Glitch" Nearly Crashed The Global Financial System 
- A True Story
For all the talk of how the financial world nearly ended in the aftermath of first the Lehman bankruptcy, then the money market freeze, and culminating with the AIG bailout, and how bubble after Fed bubble has made the entire financial system as brittle as the weakest counterparty in the collateral chain of some $100 trillion in shadow liabilities, the truth is that despite all the "macroprudential" planning and preparations, all the world's credit, housing, stock, and illiquidity bubbles may be nothing when compared to the oldest "glitch" in the book: a simple cascading error which ends up taking down the entire system.
Like what happened in the great quant blow-up of August 2007.
Before we continue, would you all mind, please, dropping a few coins in the Tavern-Keeper's Holiday Fund donations bag on the bar. Thanks. Just being cautious.

For those who may not recall the specific details of how the "quant crash" nearly wiped out all algo and quant trading hedge funds and strats in a matter of hours if not minutes, leading to tens of billions in capital losses, here is a reminder, and a warning that the official goalseeked crisis narrative "after" the fact is merely there to hide the embarrassment of just how close to total collapse the global financial system is at any given moment.
The following is a true story from the archives, going all the way back to 2007:
I.T. is a minefield for expensive mistakes.
There are so many different ways to screw up. The best you can hope for in a support role is to be invisible. 
If anyone notices your support team at all, you can rest assured it's because someone has made a mistake. I've worked for three major investment banks, but at the first place I witnessed one of the most impressive mistakes I'm ever likely to see in my career. I was part of the sales and trading production support team, but thankfully it wasn't me who made this grave error of judgement...
(I'll delve into obnoxious levels of detail here to add color and context if you're interested. If not, just skip to the next chunk, you impatient git)
OK, chaps and chapesses. Just be patient. He's getting there.  
This bank had pioneered a process called straight-through processing (STP) which removes the normal manual processes of placement, checking, settling and clearing of trades. Trades done in the global marketplace typically have a 5-day clearing period to allow for all the paperwork and book-keeping to be done.  
This elaborate system allowed same-day settlement, something never previously possible. The bank had achieved this over a period of six years by developing a computer system with a degree of complexity that rivalled SkyNet. 
By 2006 it also probably had enough processing power to become self-aware, and the storage requirements were absolutely colossal. It consisted of hundreds of bleeding edge compute-farm blade servers, several £multi-million top-end database servers and the project had over 300 staff just to keep it running. 
To put that into perspective, the storage for this one system (one of about 500 major trading systems at the bank) represented over 80% of the total storage used within the company. The equivalent of 100 DVDs' worth of raw data entered the databases each day as it handled over a million inter-bank trades, each ranging in value from a few hundred thousand dollars to multi-billion dollar equity deals. 
This thing was BIG.
You'd think such a critically important and expensive system would run on the finest, fault-tolerant hardware and software.
Hahahahahahahahaha. As if there were such a reliable horse.
Unfortunately, it had grown somewhat organically over the years, with bits being added here, there and everywhere. There were parts of this system that no-one understood any more, as the original, lazy developers had moved company, emigrated or *died* without documenting their work. I doubt they ever predicted the monster it would eventually become.
A colleague of mine one day decided to perform a change during the day without authorisation, which was foolish, but not uncommon. 
It was a trivial change to add yet more storage and he'd done it many times before so he was confident about it. 
The guy was only trying to be helpful (famous last words, eh) to the besieged developers, who were constantly under pressure to keep the wretched thing moving as it got more bloated each day, like an electronic ‘Mr Creosote’.
As my friend applied his change that morning, he triggered a bug in a notoriously crap script responsible for bringing new data disks online. The script had been coded in-house as this saved the bank about £300 per year on licensing fees for the official ‘storage agents’ provided by the vendor. Money that, in hindsight, would perhaps have been better spent instead of pocketed. 
The homebrew code took one look at the new configuration and immediately spazzed out. 
This monged scrap of pisspoor geek-scribble had decided the best course of action was to bring down the production end of the system and bring online the disaster recovery (DR) end, which is normal behaviour when it detects a catastrophic 'failure'. 
It’s designed to bring up the working side of the setup as quickly as possible. Sadly, what with this system being fully-replicated at both sites (to [cough] ensure seamless recovery), the exact same bug was almost instantly triggered on the DR end, so in under a minute, the hateful script had taken offline the entire system in much the same manner as chucking a spanner into a running engine might stop a car. 
The databases, as always, were flushing their precious data onto many different disks as this happened, so massive, irreversible data corruption occurred. That was it, the biggest computer system in the bank, maybe even the world, was down.
And it wasn't coming back up again quickly.
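For the technically inclined among the patrons, the cascade can be sketched in miniature. This is a hypothetical toy, not the bank's actual script; the names and the "new_layout" flag are invented. The point it illustrates is the one in the tale: when both sites run identical replicated code, the same bad input trips the same bug at each end, and the "recovery" mechanism itself becomes the outage.

```python
# Toy model of a cascading failover. Both sites run the identical
# (buggy) in-house disk-onlining script, so the failover path dies
# exactly the same way the primary did.

class Site:
    def __init__(self, name):
        self.name = name
        self.online = True

    def bring_disk_online(self, disk_config):
        # The homebrew script chokes on the new configuration (the latent bug)
        # and declares a catastrophic 'failure', downing the whole site.
        if disk_config.get("new_layout"):
            self.online = False
            raise RuntimeError(f"{self.name}: catastrophic 'failure' detected")


def apply_change(primary, dr, disk_config):
    try:
        primary.bring_disk_online(disk_config)
    except RuntimeError:
        # Normal behaviour: bring up the replicated DR end as fast as possible.
        # Same code, same config, same bug -- the cascade.
        try:
            dr.bring_disk_online(disk_config)
        except RuntimeError:
            pass
    return primary.online or dr.online


primary, dr = Site("production"), Site("DR")
system_up = apply_change(primary, dr, {"new_layout": True})
print(system_up)  # False: both ends offline in under a minute
```

The fix for this class of failure is well known, and is exactly what the replication defeated here: the standby must not be a bit-for-bit copy of the thing that just failed, or it inherits the failure.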
(OK, detail over. Calm down)
At the time this failure occurred there was more than $12 TRILLION of trades at various stages of the settlement process in the system. This represented around 20% of ALL trades on the global stock market, as other banks had started to plug into this behemoth and use its capabilities themselves. If those trades were not settled within the agreed timeframe, the bank would be liable for penalties on each and every one, the resulting fines would eclipse the market capital of the company, and so it would go out of business. Just like that.
My team dropped everything it was doing and spent 4 solid, brutal hours recovering each component of the system in a desperate effort to coax the stubborn silicon back online. After a short time, the head of the European Central Bank (ECB) was on a crisis call with our company CEO, demanding status updates as to why so many trades were failing that day. 
Allegedly (as we were later told), the volume of financial goodies contained within this beast was so great that failure to clear the trades would have had a significant negative effect on the value of the Euro currency. 
This one f-up almost started a global economic crisis on a scale similar to the recent (and ongoing) sub-prime credit crash. With two hours to spare before the ECB would be forced to go public by adjusting the Euro exchange rate to compensate, the system was up and running, but barely. 
We each manned a critical sub-component and diverted all resources into the clearing engines. The developers set the system to prioritise trades on value. Everything else on those servers was switched off to ensure every available CPU cycle and disk operation could be utilised. It saturated those machines with processing while we watched in silence, unable to influence the outcome at all.
Incredibly, the largest proportion of the high-value transactions had cleared by the close of business deadline, and disaster was averted by the most "wafer-thin" margin. 
Despite this, the outstanding lower-value trades still cost the bank more than $100m in fines.
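The triage the developers applied, clearing the highest-value trades first until the deadline, amounts to a simple greedy sort. A minimal sketch, with invented trade values and a made-up time budget (the real system obviously did nothing this crude):

```python
# Greedy triage: settle the highest-value trades first until time runs out.
# All figures are invented for illustration.

def settle_by_value(trades, slots_available):
    """Settle as many trades as the remaining capacity allows, biggest first."""
    ranked = sorted(trades, reverse=True)          # highest value first
    settled = ranked[:slots_available]
    remaining = ranked[slots_available:]           # these attract the fines
    return settled, remaining


trades = [2_000_000_000, 350_000, 75_000_000, 900_000, 4_100_000_000]
settled, unsettled = settle_by_value(trades, slots_available=3)
# The three largest clear; the low-value stragglers are left to the penalty box.
```

Sorting by value is the obvious choice when penalties scale with the trade, which matches the outcome in the tale: the big fish cleared, and the leftover minnows still cost nine figures.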
Which was no doubt recovered through small, constant, niggling fees on millions of individuals' bank accounts. 
Amazingly, to this day only a handful of people actually understand the true source of those penalties on the end-of-year shareholder report. Reputation is king in the world of banking and all concerned --including me-- were instructed quite explicitly to keep schtum. Naturally, I *can’t* identify the bank in question, but if you’re still curious, gaz me and I’ll point you in the right direction…
Epilogue… The bank stumped up for proper scripts pretty quickly, but the poor sap who started this ball of shit rolling was fired in a pompous ceremony of blame the next day, which was rather unfair, as it was the dodgy coding which had really caused the problem. 
The company rationale was that every blaze needs a spark to start it, and he was going to be the one they would scapegoat. 
That was one of the major reasons I chose to leave the company (but not before giving the global head of technology a dressing down at our Christmas party… that’s another QOTW altogether). 
Even today my errant mate is one of the only people who properly understands most of that preposterous computer system, so he had his job back within six months -- but at a higher rate than before :-)

Most banks are insane, and they never do anything to fix problems until *after* it costs them uber-money. Did I hear you mention length? 100 million dollar bills in fines laid end-to-end is about 9,500 miles long, according to Google calculator.
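That back-of-the-envelope figure holds up. Taking a US bill as 6.14 inches long (the standard dimension), the sum comes out in the right ballpark of the quoted 9,500 miles:

```python
# Sanity-check the end-to-end length of 100 million dollar bills.
BILL_LENGTH_IN = 6.14            # a US bill is about 6.14 inches long
INCHES_PER_MILE = 63_360         # 5,280 ft * 12 in

miles = 100_000_000 * BILL_LENGTH_IN / INCHES_PER_MILE
print(round(miles))              # roughly 9,700 miles -- near the quoted figure
```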

I pulled young Tyler a pint and charged him for the next one as well.



  1. I'm going to donate to the bar fund in bitcoins, do you mind?


    The phrase "penny wise and pound foolish" comes to mind, reading this post.

    Otherwise, you're preaching to the choir, brother Amfortas. Don't know if you've seen this recent outcome on other aspects of 2007 banking insanity?

    The sad thing is that the only "police" we have on the block for banking practices are our governments, God help us, and they only seem to make things worse (primarily because most of our politicians here, at least, were heavily assisted into office by PACs or SuperPACs of which these banks are members, regardless of party, and who apparently just lack basic common sense).

    Though both parties have several hypocrisies, the only two that really bother me are the faux moral superiority complex of the right's politicians (not everyday people on the right, of course) and the wealth and greed of the left's politicians, who pretend they're all about the "everyday poor man" when they have no idea how everyday poor people live.

    (Sorry to end on a negative note, there, Amfortas )


    1. If you go down to Bretton Woods today, you are in for a big surprise. Yes, our finance system is 'bespoke' by Govmunt, but with an inordinate amount of corruption involved. Hedging is, to me, a sensible matter, but derivatives are a huge con. The gambling and speculation issues are corrupting. Selling debt is worrisome. Hiding bum debt in bundles of 'sound' investment is criminal - or should be.

      In the tale above, though, none of these issues is aired; just the technical flaws that are bound to arise from simple neglect in planning what a system should do and how, and from the 'dominance' of the system. Anything we can 'make', we can break.

    2. Bit notes would be easier to get through customs. :)


  2. Saw it, but this is the first time I've read it. I wonder if the big one will come this way - a computer glitch.

    1. If a collapse is not 'brought about', it is more likely to happen through sheer incompetence, as the tale tells. Wheels fall off, even on BMWs, and most computer systems are a hodge-podge of bolted-on bits, barely engineered to the crudest tolerances.


Ne meias in stragulo aut pueros circummittam. (Do not piss on the rug, or I shall send the boys round.)

Our Bouncer is a gentleman of muscle and guile. His patience has limits. He will check you at the door.

The Tavern gets rowdy visitors from time to time. Some are brain dead and some soul dead. They attack customers and the bar staff and piss on the carpets. Those people will not be allowed in anymore. So... Be Nice..