Author: Humberto Velleca

A primitive digital native

The three forces of the Long Tail

{Infinite Loop} Begin;

“Force 1: Democratize Production (the tools of production)

Force 2: Democratize Distribution (the tools of distribution)

Force 3: Connect Supply and Demand”

“Bottom line: A Long Tail is just culture unfiltered by economic scarcity”

“The theory of the Long Tail can be boiled down to this: Our culture and economy are increasingly shifting away from a focus on a relatively small number of hits at the head of the demand curve, and moving toward a huge number of niches in the tail. In an era without the constraints of physical shelf space and other bottlenecks of distribution, narrowly targeted goods and services can be as economically attractive as mainstream fare.”

The Long Tail, Chris Anderson


A sick world means


Unfortunately…

My condolences and respect to the victims of the Orlando shooting and their families.

Andrey Nikolayevich’s Rule

There is another, even deeper reason for our inclination to narrate, and it is not psychological. It has to do with the effect of order on information storage and retrieval in any system, and it’s worth explaining because of what is considered the central problem of probability and information theory.

The first problem is that information is costly to obtain.

The second problem is that information is also costly to store, like real estate in NYC. The more orderly, less random, patterned, and narratized a series of words and symbols, the easier it is to store that series in one’s mind or jot it down in a book so your grandchildren can read it someday.

Finally, information is costly to manipulate and retrieve.

With so many brain cells–one hundred billion–the attic is quite large, so the difficulties probably do not arise from storage-capacity limitations, but maybe just from indexing problems. Your conscious, or working, memory, the one you’re using to read this line and make sense of its meaning, is considerably smaller than the attic. Consider that your memory has difficulty holding a mere seven-digit phone number.

Consider a collection of words glued together to constitute a 500-page book. If the words are purely random, picked from the dictionary in an unpredictable way, you will not be able to summarize, transfer, or reduce the dimensions of that book without losing something significant from it. You need 100,000 words to carry the exact message of 100,000 random words with you on your next trip to Siberia.

Now consider the opposite: a book filled with the repetition of the following sentence: “The chairman of [insert here your company name] is a lucky fellow who happened to be in the right place at the right time and claims credit for the company’s success, without making a single allowance for luck.” The entire book can be accurately compressed, as I just did, into 34 words, out of 100,000; you could reproduce it with total fidelity from such a kernel.

By finding the pattern, the logic of the series, you no longer need to memorize it all. You just store the pattern. And, as we can see here, the pattern is obviously more compact than the raw information. You looked at the book and found a rule. It is along these lines that the great probabilist Andrey Nikolayevich Kolmogorov defined the degree of randomness; it is called “Kolmogorov complexity”.

— Nassim N. Taleb, The Black Swan
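The compressibility intuition in the passage can be made concrete with an off-the-shelf compressor. A minimal sketch, with zlib as a stand-in for Kolmogorov complexity (which is uncomputable); the sentence, the sizes, and the thresholds are just illustrative choices of mine:

```python
import random
import string
import zlib

# The repeated-sentence "book" from the passage: highly patterned.
sentence = ("The chairman is a lucky fellow who happened to be in the "
            "right place at the right time and claims credit for the "
            "company's success, without making a single allowance for luck. ")
patterned = (sentence * 2000).encode()

# A "book" of the same length made of random characters: no pattern to find.
random.seed(42)
rand_text = "".join(
    random.choices(string.ascii_lowercase + " ", k=len(patterned))
).encode()

def ratio(data: bytes) -> float:
    """Compressed size over original size: lower means more pattern."""
    return len(zlib.compress(data)) / len(data)

print(f"patterned book: {ratio(patterned):.4f}")  # tiny: the pattern is the whole story
print(f"random book:    {ratio(rand_text):.4f}")  # near 1: nothing to summarize
```

The repeated book collapses to a fraction of a percent of its size, while the random one barely shrinks at all: exactly Taleb’s point that the pattern, not the raw text, is what you need to store.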

The Narrative Fallacy

“You were able to see luck and separate cause and effect because of your Eastern Orthodox Mediterranean heritage.” And he was so convincing that, for a minute, I agreed with his interpretation.

We like stories, we like to summarize, and we like to simplify, i.e., to reduce the dimension of matters. The first of the problems of human nature that we examine… is what I call the narrative fallacy. The fallacy is associated with our vulnerability to overinterpretation and our predilection for compact stories over raw truths. It severely distorts our mental representation of the world; it is particularly acute when it comes to the rare event.

The narrative fallacy addresses our limited ability to look at sequences of facts without weaving an explanation into them, or, equivalently, forcing a logical link, an arrow of relationship, upon them. Explanations bind facts together. They make them all more easily remembered; they help them make more sense.
Where this propensity can go wrong is when it increases our impression of understanding.

The problem of narrative, although extensively studied in one of its versions by psychologists, is not so “psychological”: something about the way disciplines are designed masks the point that it is more generally a problem of information. While narrative comes from an ingrained biological need to reduce dimensionality, robots would be prone to the same process of reduction.

Information wants to be reduced.

— Nassim N. Taleb, The Black Swan


It is interesting to see that simplification leads to misunderstanding. Though it makes a lot of sense, it is so common and usual that, for some time during my teenage years, I thought some of the inefficiencies found in my country were there due to a lack of sophistication in Portuguese speakers’ thinking. For instance, Portuguese-speaking countries are not the most developed, innovative places. They may not be the worst of all, but they still lack a lot of true good will, fair judgement and excellence in execution, especially in the public sector. We’re lazy, and we’re recognized for that (I feel ashamed of it, for the record). But it probably has little to do with the language per se, and more with the structure and organization of the human brain, apart from the organization of society and government. You can change laws quickly, but it takes a lot more to change culture and collective behavior.

After all, we tend to create comfort zones; we tend to overvalue simplifications instead of true, deep and complex understanding. Again, I thought it was an underdeveloped societies’ trait. Not the case.

What really drew my attention was the fact that information wants to be reduced. As a general observation, it makes sense. It makes even more sense considering all the abstraction we carry every day without questioning: car engines and urban pollution, water usage and scarcity, waste disposal and public health, government spending and citizens’ real needs. If we were able to track all this information at the lowest level of granularity, life would be so complicated, so complex, that we would end up the way… we are. This considering the current mindset.

To understand which engine pollutes less, and to choose it despite its higher price, sounds logical but not practical. To take care of water usage and consumption while not under severe water rationing is too much effort for everyday tasks. To carefully sort waste and dispose of it correctly, ensuring its destination far from our home, sounds like too much of a task.
After all, what are governments there for?

But I don’t mean to be politically correct or fair; I’m interested in the fact that we carry forward small mistakes bound to (wrong) facts and preconceived interpretations, and we do it for the sake of simplification, of easy understanding, of massification. It sounds like a huge opening to be exploited by a neutral, objective and powerful Artificial Intelligence, as we have sounded very obsolete for such a long time.

Triplet of Opacity

History is opaque. You see what comes out, not the script that produces events, the generator of history. There is a fundamental incompleteness in your grasp of such events, since you do not see wh…

Source: Triplet of Opacity

On the Plumage of birds

Before the discovery of Australia, people in the Old World were convinced that all swans were white, an unassailable belief as it seemed completely confirmed by empirical evidence.

The sighting of the first black swan might have been an interesting surprise for a few ornithologists, but that is not where the significance of the story lies. It illustrates a severe limitation to our learning from observation. One single observation can invalidate a general statement derived from millennia of confirmatory sightings of millions of swans.

All you need is one single black bird.

…What we call a black swan is an event with the following three attributes.

First, it is an outlier, as it lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility. Second, it carries an extreme impact. Third, in spite of its outlier status, human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable.

— The Black Swan, Nassim Nicholas Taleb

It’s been quite some time since my last post, and here I am again. This time, a book of economics once more showed me a property of technology. The unpredictability of the future is caused in part by the fast pace of change, and by the fact that it gets faster every year.

Our mind (the physical part), as well as our mentality (the intellectual side), needs time to adapt to change, and from a certain point on, we tend to believe that the old is better (in other words, we get old), ignoring the evolutionary nature of change.
We are always adapting, and it has been like that for centuries. In fact, our capacity to adapt is vital to our survival as a species in such a hostile environment.

And from our evolutionary capacity, technology grows and evolves.

But I’d rather say that the point here is predictability. The capacity to foresee near-term events with accuracy. The capacity to use it to promote even further evolution, breaking paradigms while at the same time creating conditions for improvement.

Flying cars, cyborg humans, artificial intelligence, ubiquitous connection, instant response: these are still part of our dream of a future, though much more reachable than ever. What limits our predictive capacity are the limitations of data processing, and the uncertainty factor. At some point we have to abstract. We have to assume premises to make valid conclusions. And reality is a dynamic environment, moving in every direction all the time. An uncontrollable number of variables is in play.

Soon the artificial intelligence processing capacity of a single device will surpass the human brain at making effective, good decisions on complex problems. In fact, machines will be better decision makers and risk takers than a regular person like you and me.

In another time span, the same device will have processing power equivalent to all human brains combined, taking the decision-making process to another level, along with our beliefs and our understanding of our own nature.
Are you ready for that?



Roubini, Basel and beyond

This post is one more in a series I write whenever I read a good book in Portuguese. The book in question is A Economia das Crises (Crisis Economics, 2010), by Nouriel Roubini and Stephen Mihm. It describes flaws in the first version of the Basel Accord that led to the 2008 financial collapse in the US and around the world; this post will be followed by a complement pointing out the flaws in the second version of the same accord.

Enjoy!

 

Consider, for example, two hypothetical banks that each invest USD 1 billion taken from other sources. One invests in super-safe US Treasury bonds; the other invests in high-risk bonds issued by corporations. Under Basel I, the two banks would assign a different risk factor (a percentage) to these different assets.

That, in turn, would determine the capital each bank had to hold against those assets and their associated risk. In practice, the bank holding super-safe government debt would need less capital than the bank holding high-risk debt.

Basel I contained still other clauses. Banks operating in multiple countries had to maintain capital equivalent to 8% of their risk-weighted assets. In addition, technical rules specified the form that capital or equity could take: common stock, preferred stock and other high-quality capital, which was called Tier 1, and then everything else, Tier 2.

The first Basel Accord came into force in the 1980s, and most G-10 countries had adopted its measures by 1992. Many emerging economies also adopted these rules voluntarily, which contributed to the unraveling of emerging markets; standards that made sense for advanced industrial economies proved harder to apply in emerging economies, especially in times of crisis.

No less troubling, it also became clear that banks had found ways to hide risks that Basel I had not anticipated, for example by securitizing assets. These tricks gave banks’ balance sheets an apparent, but not real, stability. Banks had found a way to obey the letter, but not the spirit, of the rules.

These gaps led to Basel II.

While the first accord ran to only 37 pages, the new one was ten times as voluminous. It created more precise technical rules on how to gauge the relative risk of various assets; suggested methods for making such calculations; broadened the definition of risk to cover new dangers, such as the probability of assets losing value on the open market; sought to close the various loopholes through which banks had hidden risks; required regulators to monitor compliance with capital requirements more vigorously; and enumerated the ways in which banks would publish their financial statements.

G-10 members ratified the final version of Basel II in 2006.

They then approached nations individually to implement it, a process that was under way when the crisis erupted. It immediately became evident that, for all its specifications, Basel II had serious flaws. Although many of the revisions were a response to the crises of the 1990s, the accord did not protect the big banks from the turmoil caused by a major financial crisis.

In short, Basel II assumed the world financial system was more stable than it actually was. That was a grave mistake.

— Nouriel Roubini
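The risk-weighting arithmetic in the quoted passage can be sketched in a few lines of Python. A minimal sketch, assuming the broad Basel I buckets (0% for OECD sovereign debt, 100% for corporate debt); the asset names and the holdings are illustrative stand-ins of mine, not the accord’s actual schedule:

```python
# Illustrative Basel I risk weights; the accord's actual buckets were
# broader (0% OECD sovereigns, 20% banks, 50% mortgages, 100% corporates).
RISK_WEIGHTS = {
    "us_treasury": 0.00,          # super-safe sovereign debt
    "corporate_high_risk": 1.00,  # high-risk corporate bonds
}
MIN_CAPITAL_RATIO = 0.08  # capital must be >= 8% of risk-weighted assets

def required_capital(holdings: dict[str, float]) -> float:
    """Minimum capital for a book of {asset_class: exposure} positions."""
    rwa = sum(exposure * RISK_WEIGHTS[asset]
              for asset, exposure in holdings.items())
    return rwa * MIN_CAPITAL_RATIO

bank_a = {"us_treasury": 1_000_000_000}          # USD 1 bn in Treasuries
bank_b = {"corporate_high_risk": 1_000_000_000}  # USD 1 bn in risky bonds

print(required_capital(bank_a))  # 0.0 -- no capital charge at all
print(required_capital(bank_b))  # 80000000.0
```

The same exposure, USD 1 billion, demands 80 million in capital for one bank and zero for the other, which is exactly why banks had every incentive to hold assets whose risk weight understated their true risk.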

Uncertainty

Uncertainty is a situation involving imperfect and/or unknown information. In other words, it is a term used in subtly different ways in a number of fields, including insurance, philosophy, physics, statistics, economics, finance, psychology, sociology, engineering, metrology, and information science. It applies to predictions of future events, to physical measurements already made, or to the unknown. Uncertainty arises in partially observable and/or stochastic environments, as well as from ignorance and/or indolence.

Urban Trench

Strange is our situation here on Earth. Each of us comes for a short visit, not knowing why, yet sometimes seeming to divine a purpose.

From the standpoint of daily life, however, there is one thing we do know: that man is here for the sake of other men — above all for those upon whose smiles and well-being our own happiness depends.

— Albert Einstein


The God Hypothesis

The God Hypothesis suggests that the reality we inhabit also contains a supernatural agent who designed the universe and maintains it and even intervenes in it with miracles, which are temporary violations of his own otherwise grandly immutable laws.

Richard Swinburne, in his book Is There a God?:

What the theist claims about God is that he does have a power to create, conserve, or annihilate anything, big or small. And he can also make objects move or do anything else… He can make the planets move in the way Kepler discovered that they move, or make gunpowder explode when we set a match to it; or he can make planets move in quite different ways, and chemical substances explode or not explode under quite different conditions from those which now govern their behaviour. God is not limited by the laws of nature; he makes them and he can change or suspend them, if he chooses.

Did Jesus have a human father, or was his mother a virgin at the time of his birth? Whether or not there is enough surviving evidence to decide it, this is still a strictly scientific question with a definite answer in principle: yes or no.

Did Jesus raise Lazarus from the dead? Did he himself come alive again, three days after being crucified? There is an answer to every such question, whether or not we can discover it in practice, and it is a strictly scientific answer.

— Richard Dawkins, The God Delusion


“Success depends upon previous preparation, and without such preparation there is sure to be failure.”

— Confucius

In my mind I’ve always heard a voice saying: “There’s no random achievement; you know exactly what you’re doing, or else you’re not going to reach plenitude, the intended target.”
