The Symbiocene Age – part Deux


This is the second part of a three-part series on the new Symbiocene age.

Digital Pangaea.  The previous article concluded with the notion of a digital Pangaea made up of four plates: artificial intelligence (AI), the blockchain, the energy lattice, and the digital-industrial (also known as the IIoT – the industrial internet of things).  In this second article, each of the four plates is surveyed, starting with artificial intelligence (AI), the “it” business trend of the moment.

Nature of the beast.  When Google decided to reorganize itself, it did so with artificial intelligence (AI) as its anchor.  This corporate change, more than anything else seen or heard since, was a turning point in the history of the new digital Pangaea.  It announced to the world that the world was changing permanently.  It laid the cornerstone of a new business foundation whereby every company must become a software company.  Since then, A.I. has been everywhere: in the news, on the roads, in the living room.  Self-driving cars, voice recognition, image identification, computerized medical diagnostics, car maintenance, network routing, sales projections and trend forecasting have all been forced to pay homage to their computing overlord.  A.I. has indeed become so prevalent that the acronym is becoming the name.

The science of “artificial intelligence” began in the 1950s and has evolved by fits and starts along dramatically different theoretical directions.  It is quintessentially a software application powered by astounding hardware.  AI does not run on a laptop or a smartphone, at least not yet.  When a task can be automated and parsed into a set of fixed instructions, we call this programming.  That is why Google Maps, for example, is not A.I.  But IBM’s Watson machine is, because its algorithmic approach isn’t a rigid set of coded instructions but a fluid set of closed processing loops that allows Watson to solve a problem by itself.

The current state of the art in AI relies on the concept of machine learning to devise these algorithms.  Machine learning is another one of those terms that are bandied about with wanton abandon.  Everyone has already been exposed to the rudiments of machine learning through popular applications like Apple’s Siri, Amazon’s Echo or Facebook’s M.  Machine learning is the outcome; neural networks are the algorithms.  These networks mimic, in limited fashion, the workings of the brain.  The brain comprises 100 billion or so neurons, with each neuron connected to perhaps 10 000 others.  That adds up to an astounding quadrillion (10^15) connections!  The artificial neural network, as it is commonly known in the industry, has proven capable of operating on the same basis, albeit at a lesser scale (for now), through a series of processing layers, each assigned a limited range of decision parameters.  Layers communicate with each other in feedback loops until they converge upon an answer with a high probability of correctness.  This is how cars are learning to drive by themselves.
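The layered, feedback-driven convergence described above can be sketched in a few dozen lines.  Below is a minimal, purely illustrative toy: a two-layer network trained on the XOR truth table with plain NumPy (the network size, learning rate and iteration count are arbitrary choices, not any production recipe).

```python
import numpy as np

# A minimal two-layer neural network trained on XOR with plain NumPy.
# Everything here is illustrative; real systems use far larger networks,
# but the principle is the same: layers pass signals forward, errors flow
# backward, and the loop repeats until the network converges on answers
# with a high probability of correctness.

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR truth table

W1 = rng.normal(size=(2, 4))   # input layer  -> hidden layer (4 neurons)
W2 = rng.normal(size=(4, 1))   # hidden layer -> output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1)        # hidden-layer activations
    return h, sigmoid(h @ W2)  # network output

_, out = forward(X)
initial_loss = float(np.mean((out - y) ** 2))

for _ in range(5000):          # the feedback loop: forward, backward, adjust
    h, out = forward(X)
    err = out - y
    grad2 = h.T @ (err * out * (1 - out))
    grad1 = X.T @ (((err * out * (1 - out)) @ W2.T) * h * (1 - h))
    W2 -= 2.0 * grad2
    W1 -= 2.0 * grad1

_, out = forward(X)
final_loss = float(np.mean((out - y) ** 2))
print(initial_loss, final_loss)  # the loss shrinks as the network learns
```

The point is not the particular numbers but the loop itself: forward pass, error measurement, backward correction, repeat, which is the "fluid set of closed processing loops" that separates this from fixed-instruction programming.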

The niche plays.  Three application subsets are associated with artificial intelligence: big data, quantum computing, and additive manufacturing.  Big data, as the name indicates, is concerned with very large sets of data (structured and unstructured).  The expression has already become ubiquitous.  Big data sets are nothing more than exceedingly large collections of numeric records, but their scale can be staggering.  To illustrate, consider the Large Hadron Collider (LHC) operated by CERN in Europe.  An experiment can yield 1 GB of raw data per second.  Each year, the experiments accumulate 50 petabytes of raw data (corresponding to 50 000 hard disks of 1 TB capacity each, or some 10 million DVDs stacked 12 km high).  Big data is the sine qua non of neural networks, whose commercial value is zero without sufficient dataset size.
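The LHC figures above can be sanity-checked with a few lines of arithmetic.  The DVD capacity (4.7 GB single-layer) and thickness (1.2 mm) are standard reference values assumed here; the rest follows from the article's 50 PB/year figure.

```python
# Back-of-the-envelope check of the LHC data-volume figures quoted above.
PB = 1e15          # petabyte, in bytes
TB = 1e12          # terabyte, in bytes
GB = 1e9           # gigabyte, in bytes

yearly_raw = 50 * PB

disks = yearly_raw / TB               # 1 TB hard disks per year
dvds = yearly_raw / (4.7 * GB)        # single-layer DVDs per year
stack_km = dvds * 1.2e-3 / 1000       # 1.2 mm per DVD, stacked

print(f"{disks:,.0f} disks, {dvds / 1e6:.1f} million DVDs, {stack_km:.1f} km")
# -> 50,000 disks, 10.6 million DVDs, 12.8 km
```

The stack works out to just under 13 km, in line with the "10 million DVDs stacked 12 km high" image in the text.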

The second niche play is quantum computing, which is the “nichest” of them all.  Quantum computing is to A.I. what fusion is to nuclear energy: the moonshot that always seems to be ten years away from reality.  The field is a rarefied one and not one for the neophyte.  The concept itself is complicated and exploits the weirdness of quantum mechanics (namely superposition and entanglement) to solve problems that are practically out of reach of classic binary computers.  Classic computers utilize transistors which operate as on-off switches (1 or 0) to encode electrical signals into binary numbers (our common bits and bytes).  Quantum computers do not.  The encoding of information is done with qubits representing superimposed quantum states, which is the key to their computing prowess.  In effect, the qubit holds all candidate solutions to a problem simultaneously.  Specialized algorithms exploit this phenomenon, and have already been shown to solve certain problems (integer factoring being the celebrated example) faster than any known classical algorithm run on a binary machine.  The field has been around for decades but has only recently seen successful demonstration experiments.  The state of the art, however, remains far from commercialization.
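The two quantum effects named above, superposition and entanglement, can be simulated classically at tiny scale.  The sketch below is ordinary linear algebra, not quantum hardware: a Hadamard gate puts one qubit into an equal superposition, and a CNOT gate then entangles it with a second qubit, producing the classic Bell state.

```python
import numpy as np

# Toy state-vector simulation of superposition and entanglement.
# A real quantum computer is needed only when the state vector grows
# too large to store classically (2^n amplitudes for n qubits).

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],                 # flips qubit 2 when qubit 1 is |1>
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

state = np.array([1, 0, 0, 0], dtype=float)    # both qubits start in |00>
state = np.kron(H, I) @ state                  # superpose the first qubit
state = CNOT @ state                           # entangle the two qubits

probs = state ** 2                             # measurement probabilities
print(probs)                                   # [0.5, 0, 0, 0.5]: |00> or |11>
```

Measuring the pair yields |00> or |11> with equal probability and never |01> or |10>: the two qubits are correlated no matter how far apart they are, which is the "weirdness" the prose alludes to.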

The last niche play is additive manufacturing, colloquially known as 3D printing.  Its inclusion under the A.I. banner may surprise; however, the magic of the process lies in the software powering the hardware, especially when the machine is hooked up to a full-field scanner capable of mapping numerically the contours of an object.  Conceptually, the process creates an object by adding successive layers of material.  Production-level parts are now routinely fabricated in a variety of shapes, sizes and materials, often in geometries that are impossible to achieve with traditional methods (milling, turning, machining, casting, forging, stamping, etc.).  The spectacular results obtained by additive manufacturing belie a more profound impact on the big manufacturing picture.  It has reportedly led to 50% reductions in manufacturing energy consumption, and 90% reductions in source material costs.  The prospect of manufacturing in situ and on demand in an operational setting is just around the corner.  This will bring about a profound paradigm shift in the business world.  Autonomous decision-making, self-monitoring and seamless collaboration with other devices will dramatically change how companies organize their daily operations.
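The layer-by-layer principle can be shown in miniature with a toy "slicer".  The sketch below (the function name and parameters are illustrative, not any real slicer's API) cuts a sphere into horizontal layers and reports the cross-section radius the print head would trace at each height; real slicers do the same geometry on arbitrary triangle meshes.

```python
import math

def slice_sphere(R: float, layer_height: float):
    """Return (z, cross-section radius) for each printed layer of a sphere."""
    n = round(2 * R / layer_height)         # number of layers top to bottom
    layers = []
    for i in range(n):
        z = -R + i * layer_height           # height of this layer
        r = math.sqrt(max(R * R - z * z, 0.0))  # circle radius at height z
        layers.append((z, r))
    return layers

# A 10 mm-radius sphere printed at a 0.2 mm layer height:
layers = slice_sphere(R=10.0, layer_height=0.2)
print(len(layers))                          # 100 layers of material
```

Each tuple is one pass of the machine; stacking the hundred circles reconstitutes the sphere, which is all "adding successive layers of material" means in practice.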

The blockchain.  This is the second digital tectonic plate.  The year 2008 is forever associated with the global financial meltdown.  It was also the year that gave the world two of its more recent and famous neologisms: bitcoin and block chain (usually written as a single word, blockchain).  The blockchain is the keystone of the entire system.  It is poised to revolutionize the way people, organizations and governments establish the legitimacy of transactions.  Conceptually, a blockchain is constituted as a distributed database which contains and controls – continuously, in real time – the records associated with a transaction.  The records are the blocks; each one includes a timestamp and a link to a previous block.  The transaction takes place between two parties, without any intermediaries (like bankers, lawyers, or managers).  The record of this transaction is written to the distributed database (which is operated as a peer-to-peer network, again without interference by middlemen or third parties).  Over time, these records form a chain, hence the name; no record can be modified without altering all subsequent blocks.  This property is the key to the entire edifice: the blockchain is, by design, practically impossible to hack by intent or by collusion.  If a change is made, that change is seen by everyone else whose name appears in at least one block.  In accounting parlance, the blockchain is said to be an open, transparent and shared ledger that documents the existence of transactions in a permanent, verifiable manner.
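The "altering one block breaks every later block" property can be demonstrated in a few lines.  The sketch below is a minimal illustration, not any real blockchain's format: each block carries a timestamp, a payload, and the SHA-256 hash of the previous block, so tampering with a middle record immediately fails verification.

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 fingerprint of a block's full contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, timestamp, record):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"timestamp": timestamp, "record": record, "prev": prev})

def verify(chain):
    """True if every block's 'prev' field matches the prior block's hash."""
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
add_block(chain, "2008-10-31T00:00", "Alice pays Bob 10")
add_block(chain, "2008-11-01T00:00", "Bob pays Carol 4")
add_block(chain, "2008-11-02T00:00", "Carol pays Dave 1")

ok_before = verify(chain)                    # True: the ledger is consistent
chain[1]["record"] = "Bob pays Carol 400"    # tamper with a middle block
ok_after = verify(chain)                     # False: every later link breaks
print(ok_before, ok_after)
```

Because each block's fingerprint depends on its predecessor, a forger would have to recompute every subsequent block, on every copy of the distributed database at once, which is what makes tampering impractical.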

The incorruptible ledger.  Fundamentally, the blockchain is a ledger that is operated as a highly distributed computing platform and characterized by “Byzantine fault tolerance”.  It gives rise to a decentralized consensus that is always up to date, acting in an evidentiary capacity regarding the legitimacy of its records.  All records are seen by all, at all times; none is hidden, nor can any be hidden.  Therein lies the fantastic potential of the system: it is the only system capable of maintaining record legitimacy in the digital space.  The blockchain did for digital record management what double-entry bookkeeping did for accounting in the fourteenth century: it revolutionized the practice.  The great corporate malfeasance scandals of Enron and Bernie Madoff would have been far harder to perpetrate had a blockchain been in use at the time.
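"Byzantine fault tolerance" deserves a cartoon of its own.  The classical result is that n replicas can tolerate at most f faulty (lying) ones when n ≥ 3f + 1, and a value is accepted only when at least 2f + 1 replicas agree.  The sketch below is a toy quorum check, not a real consensus protocol (no rounds, no signatures, no leader election).

```python
from collections import Counter

def decide(votes, f):
    """Return the agreed value, or None if no quorum of 2f+1 replicas exists."""
    value, count = Counter(votes).most_common(1)[0]
    return value if count >= 2 * f + 1 else None

# Four replicas, at most one of which may be Byzantine (n = 4, f = 1).
# Three matching votes meet the 2f+1 = 3 quorum despite one liar:
print(decide(["commit", "commit", "commit", "abort"], f=1))  # 'commit'
# A 2-2 split gives no quorum, so no decision is reached:
print(decide(["commit", "commit", "abort", "abort"], f=1))   # None
```

The ledger stays trustworthy not because no node can lie, but because no minority of liars can ever assemble a quorum.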

The energy lattice.  The third tectonic plate is associated with energy (production and consumption).  The world has gotten on board the renewable energy train.  It is no longer a question of if or when, but how fast and how far.  Clean energy technologies are spreading across the world faster than fake news.  Batteries are changing the economics of solar and wind power generation, and are enabling electric motors to drive down the highways.  Traditional energy sources (oil and gas, coal, hydroelectric and nuclear) will not fade out anytime soon (except for coal, perhaps), which means that the planet will exist in a state of power generation flux for decades to come.  The utopia of 100% renewable power is just that, a utopia, which stands zero chance of becoming reality.  But the possibility of 100% clean energy fares better.  Within two generations, the world will have de-carbonized its energy diet.

The clean energy impetus is irreversibly launched, which will have profound ramifications on power distribution on this planet.  Batteries are unlikely to be the dominant form of storage and will remain a transient evolutionary step toward the likelier dominance of stored hydrogen (in gaseous and liquid forms).  The reason is logistically simple: energy density.  Batteries cannot compete with chemical fuels as stores of potential energy.  Hydrogen is the unassailable champion outside of nuclear sources.  The world’s energy production profile bears this out factually.  The top ten electric power plants in the world are either hydro-electric or nuclear.  According to the U.S. Energy Information Administration, typical capacity factors for North American power plants in 2016 were 92% for nuclear plants, 55% for combined-cycle natural gas plants, 38% for hydro-electric plants, 35% for utility-scale wind farms, and 27% for utility-scale solar photovoltaic installations.  By way of comparison, the world’s biggest solar array is the Kurnool Ultra Mega Solar Park (India).  It produces 2 billion kWh per year (with a footprint of 24 square kilometers).  This is a hundred times less than the top producing plant.  The largest wind farm is found in Gansu, China.  It produced 24 billion kWh of electricity in 2016 (over a footprint of 50 square kilometers).  Finally, remark that the top ten plants produced a total of 0.53 exajoules, which represents roughly one-thousandth of total world energy consumption in 2016 (560 exajoules – see my previous article entitled Hocus Pocus).  Those two numbers explain why the utopian dream of going 100% renewable in energy production is just that, utopia.  It is physically impossible to achieve.
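The scale argument above can be checked with the article's own figures, using only the standard conversion 1 kWh = 3.6 million joules.

```python
# Converting the article's output figures to a common unit (exajoules).
KWH = 3.6e6                      # joules per kilowatt-hour
EJ = 1e18                        # joules per exajoule

kurnool_ej = 2e9 * KWH / EJ      # Kurnool solar park: 2 billion kWh/year
gansu_ej = 24e9 * KWH / EJ       # Gansu wind farm: 24 billion kWh in 2016

top_ten_ej = 0.53                # top ten power plants, per the article
world_ej = 560                   # world energy consumption, 2016

print(f"Kurnool {kurnool_ej:.4f} EJ, Gansu {gansu_ej:.4f} EJ, "
      f"top ten = 1/{world_ej / top_ten_ej:.0f} of world demand")
```

The world's biggest solar array thus delivers about 0.007 EJ a year, and even the ten largest plants on Earth combined cover roughly one part in a thousand of world demand, which is the whole point of the paragraph.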

Moreover, let us not forget the other investment consequence of a renewable paradigm: the capital expenses meant for resource exploration (hydrocarbons and water) are not eliminated but merely transferred to mining operations that produce the raw materials of renewable technologies – indium, tellurium and gallium for solar panels; lithium, cobalt and manganese for batteries; and rare earths for the high-performance magnets in motors – along with the environmental ramifications that they entail.

What will come out of this hegemonic undercurrent?  We can draw an interesting parallel between it and the blockchain.  Power distribution has hitherto been a centralized affair, with a few large power plants producing all the electricity for large regions.  Some European countries, Germany notably, have forced their power grids to open up to locally-produced electricity from small-scale solar panels.  Their grids have slowly begun to transform into de-centralized distribution networks.  If renewable energies are to flourish, centralized networks must devolve into de-centralized ones.  And this is indeed what can be witnessed on all continents.  Power grids are loosening their grip on power (pun intended…).

The end game, however, is further out.  It will come when those grids are fully distributed, with power flowing back and forth between local producers/consumers.  The localization of the energy nexus (supply AND demand) will dramatically alter the technological landscape as well, for all the variables (inputs and outputs) that go into the operation of a large power plant also appear at the smallest of scales.  Things like emissions, heat rejection and recovery, noise, local storage, in situ datum monitoring, internal spot performance tracking, and autonomous machine decision-making will become routine, nay necessary, components of localized power installations.  Large industrial facilities will in fact become self-administered power networks in their own right, superimposed in real time onto the regional grid beyond the fences.  These myriad concepts are the reason why we speak of the energy lattice rather than the grid.  The grid limits the perspective to the hardware involved in power distribution.  The lattice injects a level of conceptual abstraction into the picture, elevating the inner workings of the grid to those of a living organism, with all the extra dimensions that entails.

The fourth plate.  The foregoing has laid the groundwork for a segue towards the fourth and last digital tectonic plate: the digital-industrial revolution, which will be the subject of the next article in this series.  Until then, enjoy the iconoclastic ramifications!

