Presearch

Photographer: Federico Pompei | Source: Unsplash

 

One of my favorite phrases these days is “adjacent possible,” coined by Stuart Kauffman while exploring the origins of biological complexity. It’s like moving one seat over in a movie theater so that another couple can also sit next to each other. Or Amazon muscling into the movie market after mastering the art of selling books. Or bacteria that used to gobble soap also discovering a taste for bees and causing the end of civilization. In other words, what isn’t real yet but can become so by making a plausible shift.

As they say in Kannada, “solpa adjust maadi” (adjust a little).

Photographer: Adam Sherez | Source: Unsplash

 

Design makes the adjacent possible in the worlds of engineering and commerce and that, over time, leads to substantially new patterns of behavior. Consider how web pages were first designed to replicate the physical page but once scrolling became an accepted and intuitive gesture, designers started creating websites with infinite scrolling. Which can never happen in the physical world.

I find it revealing that the business world supports several professional classes – various types of designers, architects etc – that look for the adjacent possible as a matter of course. In contrast, academia has a very unprofessional approach to the adjacent possible. Not only is there no academic cadre of professional “knowledge designers,” the people tasked with doing research are rarely taught how to arrive at new research questions and ideas – neither so outlandish as to be unacceptable nor so derivative as to be boring. There’s no knowledge studio in which more experienced researchers critique the creative ideas of students. Consider how research seminars critique the rigor of experimental design and test whether alternative hypotheses might explain a phenomenon. But there’s never a research seminar that subjects the ideas themselves to an evaluation of novelty.

What kind of innovation is it where the innovations aren’t systematically judged for their innovativeness?

Perhaps you think my emphasis on novelty is itself a sign of capitalist indoctrination. Who cares about novelty besides tenure seeking professors? School teachers aren’t expected to be novel, and aren’t they the most common face of knowledge? Yes and No. School teachers are the visible face of the industrial approach to knowledge, but as an institution, the profession of teaching isn’t really geared for the knowledge economy.

Meanwhile, the “higher knowledge” industries still pretend as if they are artisanal traditions. Which is why it’s possible for professors to rail against the evils of capitalism while belonging to organizations that are 75% adjunct, i.e., the profession with the largest percentage of precarious labor. We live in a knowledge society but we don’t have a universal class of knowledge professionals and we certainly lack the further distinction between knowledge designers and knowledge engineers.

Photographer: Sven Mieke | Source: Unsplash

 

What I am looking for is a new creative profession, comparable to architecture and design.

Every profession deemed universal is represented throughout society. Doctors ply their wares in rural clinics, small town hospitals and the Harvard Medical School. Lawyers occupy the White House every four years. Engineers and architects work for the department of transport, the local real estate contractor and Google. There’s a teacher in every village.

The only knowledge professionals we have are found in universities, where they’re typically called professors. Even there, professors aren’t certified as knowledge professionals but as bearers of some specialized body of knowledge. There’s nothing that makes a professor into a professor; there are only professors of history and chemistry. That’s strange, for lawyers can’t be lawyers without passing the bar, engineers need to be certified and teachers need a degree in education. We mark our respect for a profession by declaring a badge that certifies entry into that profession.

That certificate also universalizes the profession, so that it can take root in every nook and corner of modern society. You might say that a PhD is the certificate for professors. It’s partly true, but most PhDs aren’t professors and never will be. Most PhDs leave the profession of professing, or worse, languish as adjunct faculty. If the certification is a signal of respectable livelihood, then a PhD is a very poor guarantee. Imagine the heartburn that would ensue if 70% of those with a law or medical degree had a position that paid close to minimum wage and no hope of getting a better job. Every startup has a CEO, a CTO and a COO. They don’t have CKOs. The ivory tower has prestige, but intellectually, it’s as much a ghetto as it’s a beacon.

In any case, a PhD is a certification of specialized knowledge, not of knowledge as such. A knowledge bearer should be closer to a philosopher, a practical philosopher, than a possessor of arcane information. Socrates thought his role was to be the midwife of wisdom. I believe that role is far more important today than it was in Athens in 399 BCE. We are deluged by information on the one hand and plagued by uncertainty about the future on the other. The information deluge and uncertainty aren’t unrelated; the world is changing quickly, which leads to more information — both signal and noise — and more uncertainty.

In times of knowledge scarcity, knowledge professions are gatekeepers of access — which is why we have priesthoods and ivory towers. We have moved far from those times. Knowledge is no longer about access but about value: what trends are important and what are fads? What’s worth learning and why? In the future, every individual, every company and every society will rise or fall on the basis of its understanding of value. We need a new category of professionals who will act as weather vanes for the new winds that are blowing; people who understand data making and meaning making.

Photographer: Hal Gatewood | Source: Unsplash

 

Back to the adjacent possible. I have been thinking that what research needs is an adjacent possible I am going to call presearch, a design wing next to the engineering floor. I am inspired by initiatives such as the Near Future Laboratory and the push towards “design fiction,” i.e., the creation of speculative documents and artifacts that don’t exist today but could exist in the near future. In other words, the adjacent possible of design.

I really enjoyed reading “Speculative Everything,” one of the founding documents of the design fiction movement. Its tagline: “how to use design as a tool to create not only things but ideas, to speculate about possible futures.” As designers, the authors of Speculative Everything embody their ideas in artifacts, but there’s nothing stopping us from expanding that repertoire to imagine speculative theories and experiments and knowledge traditions, i.e., the full panoply of knowledge production. So let me end with a definition:

Presearch is the use of design as a tool to create ideas, theories and more generally, to prototype instruments of knowledge.

Which brings me to a final question:

What do we need to presearch? What are our most pressing knowledge needs?

Here’s an obvious one for me:

The primary task of presearch in the anthropocene is to figure out how to run the earth. Just as economics (more generally, political economy) arose as the discipline that inquired into the wealth and poverty of nations, we need a new discipline that inquires into the flourishing of the planet as a whole.

Like every good beginning, the governance of the earth starts with naming the task ahead. I have one: Geocracy.

Smelly Science

What do I smell?
Photographer: Tadeusz Lakota | Source: Unsplash

Philosophy has a vision bias. The Sanskrit name for philosophical activity, ‘Darsana,’ means ‘vision.’ Intriguingly, across Indo-European cultures, knowledge at a distance is closely related to blindness. Homer was supposedly blind, and the great war in the Mahabharata was relayed to a blind king by an assistant given divine vision for that very purpose. One might say long distance vision leads to short distance blindness.

Perhaps God is blind seeing as he has the longest of long distance visions. Would explain a lot.

Coming back to philosophy, consider the classic treatment of illusions where someone mistakes a rope for a snake or an iPhone for an Android. It’s a visual illusion that serves double duty as a metaphor for all of knowledge. When the Vedantins argue that all of our perception is like that, they are saying that every rope you seem to be seeing is Brahman in disguise, which is the only true predicate of our perceptions. In contrast, the enlightened being sees Brahman everywhere. A subtler (even more enlightened?) view might involve seeing the rope as Brahman while also seeing how it might be seen as a rope if you aren’t enlightened.

Many scientific paradigm shifts have a rope and snake quality to them. For example — the most important scientific shift of all, the Copernican revolution, was about mistaking the heliocentric rope for a geocentric snake. Or several hundred years later, Einstein saw the relativistic rope being mistaken for the Newtonian snake and corrected our distance vision.

To cut a long story short, we have become good at mapping the errors of the visual system on to the furniture of the universe. Fantastic, but we are missing out on all the other senses. What happens to science if vision is replaced by other sensory modes?

Take smell for example. Let’s try to imagine an intellectual history in which smell replaces vision as the most important sense, which would have been the history of science if dogs ran labs. What would a paradigm shift look like in smelly science? What are the chances of a Canine Copernicus? It’s hard for me to imagine, let alone convince another person that my imagination is on the right track.

Photographer: Luke Chesser | Source: Unsplash

The first thing that strikes me is that smell doesn’t lend itself to the rope and the snake. Sure, I can mistake a sprayed-on perfume for a flower, but is that an illusion of smell or is it a conceptual illusion that happens to be clothed in smell? For example, not every visual act of subterfuge is an illusion. I can fake your signature and withdraw money from your bank account, but that isn’t a visual illusion, is it? It’s a social illusion with a visual signature. Literally 🙂 What’s interesting about sight is that there are illusions internal to vision itself, where it feels like both the illusion and its removal are part of the inner workings of the sense organ. The rope looks like a snake but then reveals itself to be a rope when you peer closer.

What’s the counterpart in the realm of smells? I am not sure if there’s any. Or at least any for the human smeller; it’s quite possible that dogs have smelly ropes and smelly snakes. Part of the puzzle is that smell is fundamentally a continuous sense. While we are used to seeing the world in terms of discrete objects — ropes and snakes and cars and trees — smells shade off into each other, like colors. In fact, the phrase “furniture of the universe” is well suited for visual philosophy but doesn’t make as much sense for smelly science. Smells aren’t spatially localized in the way shapes are. The table in front of me ends abruptly at its edges. The smell from the cup of coffee on the table isn’t as digital — I can smell it distinctly from a foot away, but I can still sense its aroma from across the room. Let’s say that I inverted the relationship between the visual object and its accompanying smell so that it was a cloud of coffee-smell with a cup in the middle. What kind of object is it? What would it be like to live one’s life by smelling things that way?

Anthias are pretty fish which school in large numbers over tropical coral reefs.
Photographer: David Clode | Source: Unsplash

Smelly science will have to be comfortable with much more ambiguity than visual science. Which might be a real problem for the doctrinaire (visual) scientist, for what is science without precision? But think about it another way: if deep sea fishes were scientists, they would have to be smelly scientists, since there’s no light at the bottom of the ocean. I bet they inhabit a smellscape worth understanding, and in order for us to understand it, we too will have to recreate some of its imprecisions.

The Skin of the World

A Philodendron climbs up the trunks of a Rubber tree, Ficus elastica.
Photographer: David Clode | Source: Unsplash

I am taking a step back from writing about contentious topics — authoritarian politics, climate change, approaching extinction and animal rights. Not forever, but for a few weeks. It feels like every conversation about those topics increases the fear and anxiety of everyone in the room and tilts the scales in favor of those who traffic in fear and anxiety, i.e., the very people we should be opposing. Therefore silence until I learn how to talk about our common future with imagination.

That frees me to write about a much older problem:

Why does the tree look just so? What’s the nature of experience? Why does the world appear the way it does?

Thinking about such questions is a relief after disputations about democracy and capitalism, for they are purer questions, in both senses of that term, i.e.,

  1. Pure rather than applied in the sense of pure mathematics versus applied mathematics, so that one can consider it abstractly. A metaphysical problem.
  2. Pure rather than impure in the sense of being free of politics and therefore amenable to unbiased inquiry.

Unlike some other pure problems, this one is easy to understand. Some scientific questions take a lot of technical preparation — if you were to ask a layperson why gravity isn’t reconciled with quantum mechanics, they wouldn’t know where to begin. The nature of experience and the appearance of the world are at the other extreme of familiarity. Every single one of us has intimate acquaintance with the matter under discussion, and if you haven’t been corrupted by texts that question your basic instincts, your gut’s likely telling you:

There’s a world out there of which you’re a part; it exists whether you believe in it or not; sometimes it hits you on the head but mostly it helps you get what you want.

The world just is. We can take the world for granted. Even those who question the solidity of the world for a living — scientists, philosophers, priests and poets — still conduct their lives as if it’s just there. Even questioning the world assumes a stable reality, so we are left with this intriguing question:

How do we probe an entity that’s presupposed by the probe?

I don’t have an answer to that query and it’s not a question that can be addressed directly like a nutcracker approaching a nut. Instead, we need to circle the question like a mountain peak along a hundred different trails, picking up insights along the way and hoping that immersion in the problem enables a shift in perspective.

Which is what many have done over the centuries. In the Indic sphere, both Vedantic and Buddhist traditions pay a lot of attention to the nature of experience. Then there’s the modern philosophical school that calls itself Phenomenology with a capital P. I am inspired by bearded men East and West, but I also want to keep my distance. For one, these traditions tend to be anthropocentric, while I want a method that works as well for octopuses as it does for people. The second is that I don’t want to be responsible for being “true” to these traditions — if a reading of some dead man is mistaken, so be it; what’s more important is whether that reading illuminates a problem we care about.

Photographer: K. Mitch Hodge | Source: Unsplash

The First Trail

Let’s start at the surface, the skin, which is both an organ and the organ. All of us have a skin. It’s the interface between the outside world and us, the spatial marker of things that are mostly me, even if some of those things are on the way out such as breath and excrement, and things that are mostly not me, even if some of those are on the way in — breath and food. Sensation also begins with the skin. Every sensory receptor we have is part of the skin. Some of these receptors are mechanical, others are photosensitive, but there’s nothing that comes into our minds that isn’t mediated by the skin.

But my skin isn’t alone in the world, for it is one surface among many. As you walk around a room, what do you perceive?

  • You see a view of the world that consists of surfaces arrayed in space.
  • You hear the vibrations of surfaces.
  • You touch the texture of surfaces.

And so on, I hope you get my point. We live in a layout of surfaces. The surfaces we perceive are not abstract geometric surfaces. These are physical surfaces, with texture and toughness. These surfaces also have solidity, which takes us towards their mass, but should be distinguished from it. From our organismic perspective, mass, temperature, shape etc don’t really exist. Those quantities are useful surrogates, but they are not really real.

The layout of the world is mediated by the skin. We don’t have access to the world except through the receptors in our skin. The topography of the world — its layout — is mapped on to the topography of the skin and then transformed.

Is the unity of the experience due to the continuity of the skin?

If so, without the skin, the world would be a blooming, buzzing confusion; but because the skin is continuous, because the different senses are naturally integrated in the skin, and because registration on the skin proceeds naturally from one sense to another, we have a seed that helps integrate the world.

Photographer: Mehendi Training Center | Source: Unsplash

Normally, we think of the brain as the mediator, the place where sensation is transformed into perception and cognition. That may be, though there are reasons to disbelieve such a simple story. But the point I am trying to make is that whatever the brain does, whether that’s information processing or just registration, is in the service of the skin. It’s the skin’s view of the world that’s important to us.

The most important consequence of the skin’s view of the world: we see the skin of the world, not its volume. It’s surfaces that matter, not the interior. No wonder we see and hear and touch surfaces while the volumes bounded by those surfaces are rather more mysterious entities. For example, looking at the person sitting across the table from me, I notice the succession of emotions fleeting across his face, but what is he really thinking? It seems as if my neighbor’s mind is hidden beneath the skin, his intentions opaque to the observer.

What if the most fundamental distinction of all was between skin and body?

Before heaven and earth, before idealism and materialism, is there a primordial distinction between skin and body? When I said earlier that our gut instinct is to trust the world out there, that trust is felt on the surface of our bodies. If I say the world is unreal and you take a stone and crack my head open with it to show how reality intrudes on my illusion, the demonstration assumes the bleeding skin is the boundary of the real interior.

Yet, all of virtual reality depends on that bleeding skin being successfully faked by the impact of a virtual stone. So what happens when that circle of trust is broken, when the skin is no longer an indicator of the underlying body? To put it another way:

If “normal” reality assumes a tight link between the skin and the body, what happens when that link is severed?

And we come to a deep cut:

  1. Either the skin is separate from the body and one is no indicator of the other. I can transport myself from skin to skin without affecting the body. Or as the Buddhist might say, there’s no body at all and I am transported from one empty skin to another.
  2. Or, there’s a deep and intrinsic relationship between the skin and the body. I am trapped in one because I am trapped in the other.

Which one of the two is it?

Navigating Higher Education

Note: this is the first post in a series that looks at how higher education needs to change in response to the wicked problems we face in the 21st century.

Photo by Nathan Dumlao on Unsplash

I think a lot about my daughter’s prospects. Her generation will inherit some of the greatest challenges that humanity has ever faced. Climate change. Economic turmoil. Flesh eating robots. OK, maybe not the third. Are we preparing her for these challenges? Is our system even capable of doing so? Where do we even look for an answer?

If you are like me, you think education is an important pathway to solving the world’s wicked problems. Education isn’t a panacea, but it sure creates opportunities for the enterprising and the diligent amongst us. As a student, I led a student group that funded primary schools in the most underprivileged areas of India. As a faculty member, I have helped start several educational initiatives. When MOOCs and digital learning arrived on the scene, I jumped on their possibilities on day one.

Now I think my assumptions were flawed. Not because education can’t change the world, but because we don’t understand what education is and what it needs to be. Especially not for the problems that will dominate the news in the decades to come.

Audience

This post is the first in a series devoted to a systemic engagement with the future of learning. My main audience is the experienced professional — someone who’s been out of school for a decade or more, who has seen firsthand where their formal education helped them succeed and where it hindered their progress.

I believe this group is the most under-served market for higher education, both in its traditional and its digital avatars. If you are younger, you’re in school or considering going back for graduate school. If you’re older and retired, you can reawaken dormant interests, but what if you’re at the height of your capabilities and:

  • Want to keep abreast of emerging ideas and techniques but can’t go back to school?
  • See the potential for a new technology in your domain but don’t have the expertise?
  • Want to shift into a career that aligns your head and your heart? A career that makes the world a better place?

If so, there isn’t much for you out there. University career offices don’t care for alums ten years into their post-college lives. Online education platforms, like their physical ancestors, cater to the beginning student. I have met several professionals who want something different, but don’t know where to go. These notes are a response to that need.

Where are we today?

Higher education has come under severe criticism in the United States and elsewhere. The criticisms mostly fall into one of three camps:

  1. It’s too expensive, burdening students and their families with unsustainable debt.
  2. It’s not useful, i.e., not leading toward gainful employment.
  3. It’s not enough: an education that ends at 21 isn’t useful at 51.

There’s truth to all three. Certainly college fees have outstripped inflation for most of the last three decades, but federal and state support for higher education has also withered. As for gainful employment, it’s not clear if increasing the number of STEM graduates will improve employment statistics. If anything, it might depress wages for those who have a STEM background. And the fifty-one-year-old isn’t looking for the same education as the twenty-one-year-old; the system simply isn’t designed to educate older people.

What is to be done?

Photo by Emily Morter on Unsplash

Higher education keeps getting costlier and more and more people feel it’s not useful. Despite those problems, most people assume that the Harvards and MITs provide the right education; if only their teachings were available to everyone, anywhere, at any time.

Wasn’t that the premise behind the MOOC revolution?

The MOOC party claimed that streaming knowledge from the great institutions of higher learning would unleash the genius of underprivileged and/or remote students everywhere in the world. Several years into the revolution, it’s clear that it has ended with a whimper. There have been many criticisms of the MOOC platforms, such as:

  1. Completion rates are low.
  2. They are only for the already rich and well-connected.
  3. They are too focused on narrow skills.

Again, these are valid criticisms, but they miss a fundamental philosophical question:

What is the purpose of education?

Photo by Robina Weermeijer on Unsplash

There’s no general answer, but let me answer that question with a hypothetical 30–50 something person in mind. Someone trained in a discipline and with experience under their belt. For that person, the value of further education is to serve as a navigator, i.e.,

To reveal where the world is going, to give access to the tools and techniques that will help you get there and to (re)embed you in a community where that knowledge has value.

Does a MOOC help you navigate?

Answer: Only indirectly; if you skim through a hundred MOOCs you might get a sense for where the field is going.

For example, consider a hot new field such as data science. While a twenty-something might consider training as a data scientist, a forty-something is unlikely to do so. Yet they need to understand how data-driven techniques will change their work and where (or to whom) they need to look to add that capability to their own workflows.

While MOOCs are available to the forty-something, they don’t address her needs — they’re abstract presentations of general purpose material rather than knowledge tied to the contexts and circumstances that interest the older learner.

Situated Learning

The current state of digital education mimics the state of Artificial Intelligence in 1965. At that time, people thought that when a computer got smart, it would play chess better than grandmasters. Chess turned out to be the easy problem; teaching computers how to see is by far the harder challenge.

Chess is an abstraction; it doesn’t depend on the shape of the pieces or the size of the board. Sight is the exact opposite — it depends entirely on the shape and size of objects.

Learning data science in graduate school is like the computer playing chess — it’s very useful, especially if it leads to high paying jobs — but it’s not the same as knowing how to apply those concepts in a newsroom or classroom. Instead of chess, the experienced learner needs the counterpart of the computer vision system, the techniques that will help them see their own world with new eyes.

Both the chess playing machine and the mechanical learner are throwbacks to an industrial era that continues to this day. That era will culminate with humans being replaced by machines (or worse, with humans becoming machines). We don’t want humans to become machines; we want machines to help augment our capacities. That can only happen when situated learning and digital technology come together.

Many Mes

Photographer: Andrew Seaman | Source: Unsplash

MeMe and YouMe

The Buddha, peace be unto him, is famous for declaring there’s no self. Strictly speaking, he denied the existence of an abiding, permanent self, especially the metaphysical Atman of Brahmanical Hinduism. We are born, we grow into adulthood and then we pass away. Some think we restart that process in the next life. The Buddha says: one life or many, there’s no rock to tether the ship of existence.

The Buddha left out space in his calculations. Sure, there’s no single self over time, but what about having the same self in space? Are we the same person in every direction?

Perhaps not.

Every one of us experiences ourselves from the inside-out. We refer to ourselves as “I.” It’s commonly believed that we have unique access to that self, an experience of being me that no one else has, that there’s an inner door to a secret chamber that can only be opened by one key. Who else can tell me that I am in pain besides myself?

But there’s another self (or many selves) of which I am only partially aware. That’s the self others see and experience. Why do we assume these two selves to be the same? When my daughter asks me not to be upset with her, and I reply that I am not upset at all, is it possible that both are right? Is it possible there’s a MeMe that’s fully transparent to me and a YouMe that’s fully transparent to others and the two aren’t the same Me’s?

It’s much more likely that the two are somewhat consistent but far from identical. Which poses a problem for any autobiographical effort, because a recounting of MeMe can’t pass as a recounting of Me in general. The rich and the powerful have always had alternatives — they can hire people to write about their YouMe or, even better, if they are famous enough, others want to write about them of their own volition.

The rest of us have to try hard to get others to talk to us for a few minutes, let alone write our praises. But even the most avid biographer doesn’t have access to my daily routine. In fact, I am too absorbed or distracted to fully grasp what I am doing. The wake of my passage is invisible to me. Fortunately, that data is being scooped up by our friendly neighborhood tech giant. If my data across various websites, social media properties and calendars were aggregated and made available to an automated story generation system such as Narrative Science, I might receive a half decent autobiography in the mail every morning.

“Rajesh left home early yesterday morning. He caught the first train to South Station where he waited for the Acela for a full thirty minutes, during which he flipped between his kindle and his phone. On the train he worked on the Acme report for the third time in as many days, changing most of the ten pages that he had written the day before.”

More suspense than my real life for sure. I might even pay for that service. But why stick to the real world? Why not probe lives I have never lived and don’t plan on living? Technology comes to the rescue once again. After all, most of my online explorations are funded by personalized ads trying to sell a different, future me. The same as every advertisement in the history of marketing, but personalization brings new opportunities to the creative autobiographer.

Paths not taken

Forking forest path
Photographer: Jens Lelie | Source: Unsplash

Who does Facebook think I am?

In an attempt to understand myself through the eyes of Skynet, I have decided to take a screenshot of the first ad that Facebook inserts into my newsfeed every time I log in.

Hypothesis: If I take a screenshot every day for a hundred days I will learn more about who I am than a hundred years of Vipassana.

Just kidding, but I bet I will learn something. Don’t ask me what though, I am only on day 2.

Day 1: Today’s ad wants me to read like a CEO. Which is to say, not read at all but to get my staff to summarize it for me. Hey, at least I am better than Trump who doesn’t even read his summaries.

Sadly, I am going to pass. No $7 a month summary of business books for me. But the exercise frees up the imagination. Who is this CEO Rajesh? I’m thinking he wears a black suit every day. Except for Saturday, when he changes into a silk kurta to celebrate his pride for Mother India.

Day 2: Life is a roller coaster. Having rejected the offer to have summaries of business successes sent to my inbox, I must have missed a major opportunity while my competitors were making detailed notes. End result: I have been fired and my wife has left me.

Not to worry: DreamBuilder is here to rescue me from the jaws of failure.

It turns out that one in five men is utterly alone, without a friend in the world. Am I one of them? Facebook thinks so, at least today. How can I fulfill my dreams if I don’t have a warm community? Dreamers of the world unite.

The story is still being written. Facebook is going to help me discover myself. And me_2, me_3 and every self that could be me.

Subversive Intelligence

Photographer: James Lee | Source: Unsplash

If you read or watch any mainstream media source that deals with facts instead of imaginary threats, you will notice the constant invocation of two civilizational threats: automation and climate change. This is mainstream media btw, not leftie radical sources; you know we are in a genuine crisis when hunks on TV look you in the eye and say we are all going to die.

AI and climate change: one economic, the other ecological. One taking our jobs and the other destroying our home. I believe the two are actually the same, the worldly reflection of the platonic duality between information and energy. Unfortunately, while the mainstream is beginning to recognize the seriousness of our situation, they aren’t willing to take the necessary steps to adapt and flourish in the new world that’s being born.

The threat is recognized by the radicals knocking on the mainstream’s door: it’s increasingly common to say we need systems change. But who is going to do it and what skills are needed to do so? I find that even the most trenchant critics of the current system have conventional views on how it needs to be transformed: they say we need a radical transformation of our societies, but they assume that we already have the skills to do so; we only need the powerful obstacles to get out of the way and the innate intelligence of people, especially young people, will emerge out of the shadows.

That’s a romantic thought but a false one. We need to cultivate a form of subversive intelligence that is attuned to the changing conditions. That cultivation needs conscious, collective effort.

What form should that subversive intelligence take?

I have some thoughts on that matter and I have even written about it on other occasions though it’s only today that I am using the term “subversive intelligence” to describe the mindsets we need to cultivate. Here are a couple:

  1. The Design of Knowledge
  2. The Software Eaten World

Here are some more readings — that’s a continuously updated list.

Take those writings and readings with a grain of salt though; chances are much of what we read today will be flawed in its presentation of the world to come, just as the writers of the early industrial era couldn’t have predicted our capacity to order a computer from China with a click or two.

Take that uber-pinko Karl Marx. He started writing his famous book in the early days of capitalism. According to that canonical source of truth, i.e., Wikipedia, James Watt’s steam engine was developed between 1763 and 1775. Marx and Engels wrote the Communist Manifesto in 1848. In other words, somewhere between 73 and 85 years after the steam engine. Meanwhile, the first functioning electronic computer, i.e., ENIAC, was completed in 1945, so we are about 73 years past the deployment of that technology. Why am I saying this? When Marx and Engels wrote their pamphlet, industrial capitalism was just beginning to show its impact on England and Europe. 1848 was also the year of social unrest across Europe, but it was still a long way from two world wars, several revolutions, decolonization and all the other consequences of the mechanical age. Nevertheless, they were right in pointing out that industrial capitalism was a really big deal and that it would change the world.

Similarly, we are at a relatively early stage in the development of intelligent capitalism, i.e., capitalism powered by information and machine learning. Not so coincidentally, we are also at an early stage of panic over climate change and ecological collapse more broadly. The two go together. We may or may not agree with Marx’s vision, but he was absolutely right (and he wasn’t alone in saying so) in pointing out that the real impact of industrial capitalism wasn’t in the new gadgets and gizmos that enter our lives but in the social relations transformed through this influx. Global capitalist society is nothing like the pre-industrial societies it has replaced.

Intelligent capital will cause an equally dramatic shift in life, liberty and the pursuit of happiness, even as individual gadgets come and go. Some of the symptoms of this shift are already upon us: we know surveillance is going to be big, automation bigger and climate change is going to be huge.

What else?

For one, it’s not just social relations that will change in this time. Natural relations, i.e., the relationships between humans and other beings on earth, and also the relations among those other beings themselves, will also change. Actually, natural relations have already changed. What else do we mean by the anthropocene? What does it mean when roughly half of the world’s habitable land is used for agriculture?

I think it’s only a matter of time before we consider all earthly activities as part of the human system, which is to say that the earth system and the human system are increasingly going to merge. Is this a good thing? A bad thing? Before we rush to judgment, let’s first try to understand the levers that control these systemic changes.

Really. I have resolutely left-wing sympathies, but the honest thing is to understand this new condition before passing judgment, especially if our long-term goal is to peer into the future and, in doing so, unleash genuinely transformative forces. But that’s a long way off.

Some more snippets:

– Life in knowledge societies is mediated by flows of information and the networks that host those flows. It’s impossible to imagine making a simple widget without information mediation, let alone a complex product like a phone or an airplane. It’s equally impossible to imagine life without constant sharing of personal data and constant surveillance by corporations and nation states. Information technologies are technologies of living par excellence.

– In fact, no Stalinist state has ever had the level of intrusion into people’s lives that we see today, with personal data willingly shared and aggregated via social media. Informational life spawns many worries, such as:

  1. The Future of Work: some are worried that robots will take our jobs. Others are worried that capitalists will use the threat of automation to reduce wages in the few jobs that remain.
  2. Full Spectrum Surveillance: our lives are monitored and monetized second by second; further, surveillance fragments our working lives so that we work for Uber in the morning and Walmart in the afternoon.
  3. Inequality Amplification: we are less likely to have data about the needs of underprivileged and marginal communities and people in those communities are even less likely to have the skills to make use of that data. Data poverty threatens to combine with larger concerns over automation to increase inequality.

Let’s not forget the utopian imaginations of abundance, of a life devoted to creation and enjoyment as machines perform all the drudgery. We can’t discount the power of this artificial city on the hill. If AI and data spawn apocalyptic and utopian visions, we need a liberation theology to bring those visions to the people. That’s the driving ambition of subversive intelligence.

The Great Unsettling

https://www.flickr.com/photos/jayjayrobertson/5724336908

Introduction

I have written a few hundred essays over the last five years, with a year and a half in the middle being devoted to a single text, the Mahabharata. I might start the Jayary again this fall, prompted by a seminar I am organizing this semester.

The Mahabharata is unique in that it starts with a post-apocalyptic scenario: a great war has ended, killing everyone except for seven survivors, a death toll of millions. There’s recognition that the old order has ended, that it was unsustainable and that its end came despite the societies of that time being led by people considered “good” by the standards of the time.

Perhaps we too are such a society, led by regimes with some legitimacy but collectively heading towards a transition that we can’t plan for or avoid. What form will that transition take? What will be washed away? Those are the questions that I keep returning to, provoking a meandering journey through the forty two gates of knowledge.

To put it simply, there’s an itch I want to scratch, but each time I scratch one spot, it starts itching in another. I don’t want the itch to go away, but I would like to know the source of the irritation.

Mission accomplished last week: I found the source. I bet you’re itching to learn what I found. Here’s a clue:

Elrond: “This peril belongs to all Middle-Earth. They must decide now how to end it.”

Elrond: “The time of the Elves is over — my people are leaving these shores. Who will you look to when we’ve gone? The Dwarves?”

Gandalf: “It is in Men that we must place our hope.”

One of my (many) favorite lines in the Lord of the Rings, describing a world that’s about to pass. If you haven’t read the books, here’s the premise: the dark lord has emerged from his hideout and is gathering his forces. If he wins, game over: everyone’s dead or his slave. If the good guys manage to defeat him, the dharma of the elves is fulfilled and they have to fade away.

Either way, one yuga is over and another is about to begin.

Fast forward a few thousand (or is it million?) years and we are at a crossroads once again. Just as the elves had to fade away after Sauron’s defeat, we might have to fade away too: except that we are both the good guys and the bad guys, so if our bad guys win, we are all done for, and if our good guys win, we will have to make way for something else.

What I don’t know is whether the future will transform the world we have created over the last seventy-five years, since the end of the second world war, or the world we have created as a settled species over the last ten thousand years. I tend towards the latter, hence the sensationalist headline:

The age of men is ending.

Whether it’s a rejection of the last 75 years or of the last 7500, we are in for a great unsettling.

Climate Reality
Photographer: Patrick Hendry | Source: Unsplash

Apocalypse?

Let’s get rid of the Sauron scenario first: I am not thinking apocalyptically. Major violence is almost certain, but I am skeptical of futures in which humanity vanishes altogether.

Let’s say all of Eurasia outside Siberia becomes uninhabitable because of climate change and Russia refuses to change its immigration policy in response, leading to pitched battles over migration and settlement. How many people do you think will die? A few hundred million? A billion? Even a billion deaths out of the ten billion or so people projected for 2100 would be a loss of roughly ten percent: still smaller, relative to population, than what happened in China in the 13th century after the Mongol invasions, when a population of 120 million collapsed to 60 million over six decades.

Even if we are left with a human population of 4 billion in 2100 — undoubtedly the outcome of the greatest disasters in the history of humanity — that would still be enough humans that the species is not threatened. In other words, whatever happens, there are going to be people left on earth. Lots of people.

The real question is: who will they be and how will they live?

Settler Humanity

For most of human history, we were a mobile species. It’s only with settled agriculture, sometime in the last ten thousand odd years, that we became “rooted.” It’s fair to say that what we call history is nothing but the chronicles of settler humanity; even when settlers were conquered by nomadic tribes — the Mongol invasions, for example — it was in order to loot or skim off the wealth created by settler humanity.

In fact, the concept of wealth is a settler concept; a mobile species has no reason to accumulate.

Settler humanity has won to such a large extent that for most people including me, the only ways of life are rural or urban, i.e., agricultural settler humanity and industrial/post-industrial settler humanity. Non-settler humanities — often captured by the blanket term “indigenous peoples” — are barely 5% of the human population and every single one of them lives at the mercy of settlers.

I will not recall the long and torturous expansion of settler humanity across the globe, the waxing and waning of agrarian and industrial civilizations. We can say that all of that came to a head in the second world war, at the end of which were two “final settlements” that vied for support across the world: communist society and liberal democracy.

In the history we have written so far, one of them won and the other lost. I am thinking that’s not the final verdict, for they were both heading towards the wrong finish line.

When the Soviet Union fell, scholars such as Fukuyama thought we had arrived at a secular, this-worldly end of all times. In that moment of triumph, liberal capitalist democracy represented the end of history, a city on the hill that approximates the universal ideals of settler humanity. Which is to say that after an agonizing journey filled with disease and violence and predatory social relations that extracted wealth from the majority of toiling humans, we had created the institutional framework that made most people happy most of the time.

I think Fukuyama was right, in that all that toil and struggle produced a brief period under US hegemony when it seemed like the global settler human would become the universal ideal. Unfortunately, that ride into the sunset turned out to be a short stroll to the edge of the abyss. Instead of a final settlement, we are at the beginning of a great unsettling, where every idea, ideal and institution of ours will be questioned, rejected, transformed or destroyed.

To give just one example: do you think we can continue to live in a world of sovereign nation states when hundreds of millions of people are desperate to migrate with their lands running out of water and their oceans frosting their fields with salt?

I have a hard time believing in that settled future.

There are many many other unsettlings waiting to happen and I hope to chronicle some of them. While there are many objects that catch my fancy, ultimately, my essays are a diary of the great unsettling. And when I turn to the Mahabharata, I read another era’s retelling of their great unsettling and the painful recreation of a world worth living in. That’s the source of my itch.

There’s still uncertainty over what will be unsettled: will it be the post-war liberal order or will it be all of human history? I tend towards the latter, which is what I mean by the claim “the age of men is ending,” but even a rejection of the last seventy-five years will be a great unsettling.

Caveat: Dramatic claims need extraordinary evidence. I am not arrogant enough to think one essay is enough evidence in the court of the cosmos. I am arrogant enough to think that an essay can make that claim vivid enough that further evidence will make it plausible and ninety three volumes later, lay the foundation of a cult in my name.

Meanwhile, as an educator, the great unsettling prompts some questions about learning to live through the shift –

  • how to imagine life as we enter that phase?
  • what skills will help us navigate its uncertainties?
  • and most importantly for me professionally — how will the world of knowledge be unsettled?

I will leave you with a diagram that captures my answer to that question.

Startup Thinking

Photo by Clark Tibbs on Unsplash


I have one leg firmly planted in the doom and gloom camp. My friends alternate between sending images of the burning Amazon and pictures of Amazon — the company, not the lungs of the planet — replacing all jobs with robots. Not that I mind; all that misery gives my optimist brain something to push against.

So when someone asks me if startups can save the world, my first response is: of course. Five seconds later, I change my mind to: you must be kidding! Back and forth, here’s how I argue with myself:

Me: How can startups change the world? They are tiny and the world’s problems are huge!

Myself: They start small, but they don’t have to stay small! Remember what Steve Blank said: “a startup is an organization formed to search for a repeatable and scalable business model.” Repeat after me: scalable, scalable, scalable.

Me: Yes, but when they become bigger, they are no different from the other Death Star corporations sucking the life out of the planet.

Myself: But we can invent a new type of startup that’s less Death Star and more Jedi Knight. Startups don’t have to be about profits any more than factories have to manufacture cloth and nothing else.

and so it goes.

There’s something about how startups harness psychological energy and navigate uncertainty that appeals to me, and increasingly, we have data that tells us what works when people come together for a common purpose. Why not use it to make the world a better place?

To paraphrase a man who first gave me hope and then disappointed me: yes we can.

It’s never been easier to go from idea to implementation to profit. Everything from Y-Combinator to my neighborhood angel investor is waiting to turn blood, sweat and tears into 💵. Unfortunately, saving the world is mostly not a matter of 💵. It’s about putting people and planet before profits. Here are three of my favorite world-savers:

Not a single businessman in that panorama. They were all politicians. That’s cuz politics is the most important method through which we have saved the world in the last two hundred years — both liberators and dictators have been politicians.

Note: none of them is a woman. I apologize for that snub to half the earth’s population. Patriarchy inserts itself into the STW business.

While the US doesn’t encourage political startups, i.e., new political parties, it’s common in other parts of the world for new parties to be formed, especially when there are classes of people whose needs aren’t being met by any of their current choices. Sometimes those new parties win elections and become political corporations and even one-party states. Political monopolies suck even more than business monopolies.

The good news is there are more forms of political entrepreneurship than we can imagine, so why only political parties, why not other political startups? What about international political associations around global topics such as climate change — do we really think such wicked problems can be solved by middle-aged women and men sitting in the U.N. General Assembly? If you believe so, I have a couple of bridges I want to sell you.

I will tell you why I like startups. They are the most robust institution we have devised for collective action amidst uncertainty. Corporations and governments are good at delivering solutions that work, but only startups are good at finding out what works in the first place. How will Indian farmers handle shorter, more intense monsoons? I don’t know, but I bet there’s a startup somewhere that will come up with a good idea.

Please stop thinking a startup is only about money.

We should divorce the idea of the startup from its capitalist origins, just as factories arose in the capitalist world but spread to societies where the government ran all forms of production. I am not saying one’s better than the other, just that startups and factories are flexible institutions capable of doing any number of different things.

I am 100% certain that the future of the human species is bursting with uncertainty — pun intended. We have to become adept at navigating chaos, and for that, “startup thinking” is an essential quality. Only if it combines politics, engineering and design though.

Forestree

I went to an engineering school but I didn’t study engineering. In fact, I have stayed away from engineering my entire life despite being a geek. Cooking, yes. Sports, yes. Maker spaces: of course. But not engineering.

That’s because engineering struck me — perhaps wrongly — as being focused on the small picture, of making this widget in front of me work without caring about its connection to the wider world and damn the consequences. It wasn’t a discipline that encouraged a philosophical bent of mind. Engineering has undoubtedly changed the world, but it has done so without taking responsibility for that change.

In contrast, politics always struck me as being closely tied to philosophy; or, to paraphrase Marx, we don’t want to study the world, but to change it. And there’s no shortage of political writing as to why we should change the world this way rather than that way. Politics, however disagreeable, takes responsibility for changing the world, which is why the metaphorical levers of politics were better suited to my theory of change than the mechanical levers of engineering.

Of late, I have been feeling that the division between politics and engineering is disappearing. Both are technologies that create human artifacts in response to individual and collective needs. Until recently, it was easier to encode big-picture goals in political technology than in engineering technology. For example:

Do you think democracies should have a separation of powers between those creating policy and those enacting it? Solution: create separate institutions for the two purposes.

Today, similar decisions are being made in engineering technology. Suppose you want to create a platform in which the platform owner doesn’t have an undue advantage vis-a-vis other participants. Make sure its business development wing only has access to the same data as any other business using the site. API design can encode ethical features and value judgments in a manner unthinkable fifty years ago.
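To make that concrete, here is a minimal sketch in Python of what such a level-playing-field rule could look like when it is written into the API layer itself rather than into a policy memo. The role and scope names are hypothetical, invented for illustration; this is not any real platform’s implementation.

```python
# A sketch of encoding a value judgment in API design: the platform owner's
# business-development role is granted exactly the same data scopes as an
# external partner, so the "no undue advantage" rule holds by construction.

from dataclasses import dataclass

# Scopes an external partner can request through the public API (hypothetical names).
PARTNER_SCOPES = frozenset({"listings:read", "orders:read:own", "analytics:aggregate"})

ROLE_SCOPES = {
    "external_partner": PARTNER_SCOPES,
    "owner_business_dev": PARTNER_SCOPES,                       # identical by construction
    "platform_operations": PARTNER_SCOPES | {"payments:settle"},  # infrastructure-only extra
}

@dataclass
class Caller:
    role: str

def authorize(caller: Caller, scope: str) -> bool:
    """Return True only if the caller's role includes the requested scope."""
    return scope in ROLE_SCOPES.get(caller.role, frozenset())

# The owner's business-development wing cannot read other businesses' order data,
# for the same reason an external partner cannot: the scope simply does not exist.
assert authorize(Caller("owner_business_dev"), "orders:read:own") is True
assert authorize(Caller("owner_business_dev"), "orders:read:all") is False
```

The point of the sketch is that the constraint lives in the same place as the capability: anyone reading the scope table can audit the value judgment directly.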

Technology, old and new

The reason politics and engineering are coming together is code — and I use that computing term in the broadest sense. Political technology has always been based on text: constitutions, policy briefs, white papers and such. Engineering technology has been based on things: steam engines and marble tiles. Code functions both as text and as thing. That’s a huge transition in how we change the world. We are just scratching the surface of that revolution.

That realization got me thinking about problems that should be solved simultaneously as engineering products and political policy, with solutions exhibiting a combination of good design, good data and deep concern for social implications. Technology that pays attention to the forest and the trees. I am on the lookout for such “forestrees.”

Here’s the first.

Photographer: Ben Hershey | Source: Unsplash

Concussion Detection

Halfway through the first game of the season on Saturday, my daughter took a soccer ball to the face. She continued playing for a couple more minutes before her nose started bleeding, at which point she had to leave the field and couldn’t play the rest of the game. This being the United States, a trainer checked her out and suggested she might have the mildest of concussions, which meant no more games that weekend. Fortunately, she was fine the next morning and ended up playing on Sunday.

I love my daughter more than anything in the universe but this essay isn’t about her. It’s about how she was assessed for a potential concussion. She was checked by trainers three times on Saturday and Sunday. I noticed that on all three occasions, the trainer whipped out her phone and used the phone camera to make an assessment.

There are a couple of concussion assessment apps in the iOS app store but none of them are fancy — they are just a list of protocols to follow, including making the player stand on one foot, move their arms in set patterns and so on. Looks quite crude if you ask me, though arguably optimized for assessment by a young person with little experience.

A friend.
Photographer: Ishan @seefromthesky | Source: Unsplash

Eye Movements

I asked myself if we have better signals for concussion.

What about eye movements or other neuromuscular signatures? A quick Google search led me to this paper, which says that disconjugate eye movements (i.e., when the two eyes don’t move in synchrony) are present in more than 90% of concussion and blast victims. I am not sure if trainers have the medical training to detect disconjugate eyes, especially if lighting conditions aren’t good. Disconjugation detection (DCD for short) might be too hard for untrained human beings.

But we are forgetting that camera. It seems underutilized — I saw the trainers shining it into my daughter's eyes one eye at a time. DCD needs to process the signal from both eyes at once for it to work — after all, we want to find out if they are moving in unison.

Let's eliminate what I think of as the easy case — in the case of severe concussion, the two eyes are more likely to be completely out of sync. A severe concussion is likely to be the result of a major collision either on a sports field or in an automobile. Those obvious cases aren't the ones I am thinking about, since they will be referred to emergency care right away. Concussions from a minor incident on a children's playground or from an elderly person falling in a bathroom are harder problems to solve, and the solution has to be in your pocket.

Phone is all we got.

Photographer: Alice Donovan Rouse | Source: Unsplash

Optical Solutions

An optical problem has to be solved first: a robust method for detecting eye movements from both eyes. There has to be a way of sweeping a phone camera in front of someone's eyes so that it picks up the eye movement signals from both eyes at once. It's a technical challenge because the signal is masked by an enormous amount of noise: jitter because of shaky hands, changing reflection patterns because of blinking eyes and head movements, changes in light sources if clouds block the sun and so on.

Fortunately, we have a clean separation of movements:

  • There’s the relatively smooth movement of my arm as I scan the camera in front of your eyes. Assuming that the light source from my phone camera is the only light that’s changing in intensity — ambient light from the sun or artificial lighting being assumed constant — the light reflected back from your eyes is going to be a smooth function of my hand movements. Further, smartphones now have motion sensors so we can use hardware to detect and filter out movements initiated by the person holding the camera.
  • There's the jerky movement of your eyes as they saccade and change focus, and every time that happens, there's a sharp change in light intensity. There are also the jerky movements of blinking eyelids, but blinks happen at a slower rate than eye movements and are an up-and-down movement, while saccades have two degrees of freedom.

I am betting on a relatively clean separation of signal (eye movements) from noise (the camera movement, head movements, blinks etc). In short, while there are genuine technical difficulties, I am reasonably confident that the signal detection problem can be solved. But once we have the two-channel signal — one channel for each eye — we are left with an inference problem: how do I know when a signal indicates concussion?
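Here is a minimal sketch of what that separation step might look like, assuming we already have a per-eye pupil-position trace and the phone's gyroscope readings resampled to the camera's frame rate; the filter cutoff is an illustrative guess, not a validated parameter:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def isolate_saccades(eye_trace, gyro_angle, fs=60.0, cutoff_hz=2.0):
    """Rough separation of fast eye movements from slow hand/camera motion.

    eye_trace  : 1-D array, horizontal pupil position for one eye (arbitrary units)
    gyro_angle : 1-D array, phone rotation integrated from the gyroscope,
                 resampled to the camera frame rate
    fs         : camera frame rate in Hz
    cutoff_hz  : anything slower than this is treated as the examiner's arm sweep
    """
    # 1. Subtract the motion the phone itself reports (sweep, jitter).
    motion_compensated = np.asarray(eye_trace) - np.asarray(gyro_angle)

    # 2. High-pass filter: the arm sweep is smooth and slow, saccades are fast jumps.
    b, a = butter(2, cutoff_hz / (fs / 2), btype="highpass")
    return filtfilt(b, a, motion_compensated)

# One call per eye gives the two channels used in the inference step below:
# left_fast  = isolate_saccades(left_eye, gyro_angle)
# right_fast = isolate_saccades(right_eye, gyro_angle)
```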

Inferring Concussion

The simplest kind of processing that can be done on the two channel signal is a summary statistic, such as the correlation between the two channels. Disconjugate eyes will have lower correlation between the two channels than normal ones. If we are happy with a simple diagnostic, this is all we need to do: set a concussion threshold and slot anyone who meets that threshold for medical intervention.
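In code, that simple diagnostic is barely a few lines; the threshold below is a placeholder, not a clinically validated number:

```python
import numpy as np

def simple_concussion_flag(left_fast, right_fast, threshold=0.6):
    """Flag for medical follow-up when the left/right channels decorrelate.

    threshold=0.6 is a made-up cutoff for illustration; a real one would
    have to come from clinical data.
    """
    r = float(np.corrcoef(left_fast, right_fast)[0, 1])
    return r < threshold, r
```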

That, by the way, is the nature of most medical interventions based on bodily indicators. If I go to my doctor's office with a test result and my blood pressure, blood sugar or cholesterol is above a certain threshold, they will likely talk to me about further testing. If the statistic is in a band that's not too low or too high, they will talk to me about my diet and exercise regime and suggest changes if necessary. Otherwise, the machine's working as it should and I go home happy.

But we can do better than that today, can’t we?

There are several problems with the simple diagnostic. Let me mention three:

  1. It might lead to false positives if the threshold is too low and false negatives if it’s too high. For example, if I have strabismus, I am likely to trigger a false positive.
  2. It’s not personalized: My body might disconjunct at a lower contact threshold than yours. Even if there’s momentary disconjunction, your body might recover more quickly from it than mine. If disconjunction is a transient signal, how do we know when it’s a reliable indicator of concussion?
  3. More generally, signs of concussion might be hidden in higher-order statistics instead of simple correlations. If so, noise will prevent us from extracting those higher-order statistics from a single observation.

The alternative is to go for a top-down approach based on extensive data collection. If I collect my eye movement data over time, the system will learn the typical conjunction between my eyes and how that changes with exertion, time of the day and other variables. With a robust data set like that in the background, we can be much more confident about when a genuine concussion is the cause of disconjunction. Instead of creating a simple summary statistic and basing our diagnosis on that alone, we can create a Bayesian concussion detector that answers the question:

How likely is it that X has a concussion based on the record of her eye-movements?
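A toy version of such a detector, assuming we summarize each assessment as a single left/right correlation and model the person's own history as a Gaussian baseline; the prior, the shift and every number here are invented for illustration:

```python
from scipy.stats import norm

def p_concussion(observed_r, baseline_mean, baseline_std,
                 concussed_shift=-0.3, prior=0.05):
    """Toy Bayesian update: P(concussion | today's left/right correlation).

    baseline_mean and baseline_std come from this person's own longitudinal
    record (exertion, time of day, etc. folded in upstream). The -0.3 shift
    and the 5% prior are illustrative guesses, not clinical values.
    """
    like_healthy = norm.pdf(observed_r, baseline_mean, baseline_std)
    like_concussed = norm.pdf(observed_r, baseline_mean + concussed_shift, baseline_std)
    evidence = prior * like_concussed + (1 - prior) * like_healthy
    return prior * like_concussed / evidence

# e.g. p_concussion(0.45, baseline_mean=0.85, baseline_std=0.1) is close to 1,
# while p_concussion(0.80, baseline_mean=0.85, baseline_std=0.1) stays low.
```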

Detection accuracy will obviously improve if the system has access to eye movement data of thousands of soccer playing children. Having that data in the background will also help diagnose whether my child’s post-concussion recovery is on track.

  • Where is her disconjunction one week post concussion relative to the population average?
  • Should we be looking at a more intensive check-up?

Every trip to the emergency room costs money and leads to higher insurance premiums. We want to base any decision to send a child to an emergency room on the most robust data we have on hand. Longitudinal data is better than sporadic data.

Even better, longitudinal collection of eye movements across a population will be helpful in any number of other situations. Dyslexia immediately comes to mind. We know the eye movements of dyslexic children are different from those of children who read normally. The earlier we detect those pattern differences the better, especially if dyslexia is (partially) a learned eye-movement pattern rather than a higher-order cognitive disorder.

Not that you need convincing, but there's no shortage of advances in health and wellbeing that need repositories of biological data, from eye movements to cholesterol levels. But, and there's a HUGE but: the possibility of exploitation, control and oppression is so much greater when data are collected and made available to corporations and governments. In order to avoid big brother, platform design should encode a "fair use" policy with respect to all the data hosted on the platform's premises.

To put it in crude terms, whose data is it?

First: who will create such a data set, and if I create that set, do I own it? Let's start with the latter — that the creator of the data set is the owner, which is the current default. Since data is supposedly the new oil, it's no surprise there's a rush to capture as many valuable data sets as possible, leading to all kinds of problems. Search monopolies are bad enough, but we certainly don't want health data monopolies.

Let's say Startup A raises a ton of VC money and creates a comprehensive eye-movement database whose API is used by Startup B for concussion detection and Startup C for dyslexia monitoring. Two years down the road, Startup A launches its own concussion detection app, competing directly with Startup B. What's B to do? How does an application company compete with a platform company?

There might be a programmatic solution — as I mentioned in the introduction, we can design APIs that give the business development side of Startup A no more access to the data than its users (Startups B and C in this case) have. But can API modularity be enforced without regulation?

I doubt it.

Also, platforms keep evolving. Imagine that Startup A discovers that while the market for concussion and dyslexia apps is individual parents and teams, hospitals and HMOs are an ideal market for the platform as a whole. What does A do? Make an offer to B and C that they can’t refuse? Enter into a complicated revenue sharing model?

Platform monopolies are even more entrenched than widget monopolies — the dominance of the FAANG platforms being a case in point. Despite the popular slogan, data isn’t oil; it’s not a resource that disappears after being used once. Instead, it gets more valuable with time and accretes more uses.

Which makes data prone to platform monopolies since platforms are designed for current as well as future uses — once you list all the books in the world, you can sell them yourself, offer space for others to sell, convert them into ebooks sold by your company or direct the customer to a competing book that has a higher rating on your own system of ratings. I can’t see a future in which privatized data is good for anyone besides the monopolist.

How will that work out in healthcare? If we want to avoid monopolization, we should keep the data open, say through the creation of a platform commons. That leads to another challenge: who is going to pay for such a platform? It's not like creating an open database of cat videos — the regulatory demands of collecting and storing biological data will make such platforms prohibitively expensive for your typical non-profit.

Is the only financially and politically viable solution to socialize the data? Which is to say, governments pay for the creation and maintenance of health data repositories and own the platform. Governments having ultimate ownership has its own challenges, especially in countries where citizens don't have political control over what happens to their data. Which, to be honest, is the case in most liberal democracies, let alone authoritarian regimes.

Plus, what do you think are the chances of a government creating a high-quality platform? It might be possible in a small and rich country like Sweden, but the U.S. health care debacle suggests that creating universal health data systems in a large and diverse nation is an incredibly hard problem to solve.

What is to be done?

I have a utopian answer: data should be a universal resource like time. Clocks became important after the industrial revolution and transcontinental railroads, but there were thousands of time-measures in the early days of mechanical time-keeping. As this article says,

When the Union Pacific and the Central Pacific Railroad formed the Pacific Railroad, later called the transcontinental railroad, more than 8,000 towns were using their own local time and over 53,000 miles of track had been laid across the United States. Railroad managers and supervisors well understood the problems caused by so many discrepancies in time keeping.

There could have been many ways of solving the problem of time standardization:

  • Let railroad mergers dictate time mergers so that at the end of the process there are a few private time companies in the world that own my time and your time.
  • Let the government own the time — and tax you for owning a watch😏
  • Create an international standard for time that no one owns or even thinks of owning.

Aren’t we glad we chose the third option — can you imagine time being owned by Acme corp instead of being an international standard? Why can’t we do that with data? Why can’t we have universal, secure data systems that aren’t owned by anyone and enable any number of products?

One international data standard that stores and maintains all the important data in the world. Cue photos of smiling children and dogs playing in the sunshine.

You may not agree with my solution — feel free to leave yours in the comments, but I hope you’re convinced that:

  1. It’s hard to design platforms that serve our needs today and in the future.
  2. We need both engineering and political technologies to design such platforms.

Now for the philosophical climax of today’s program…

Octopus Politics

I can't end an essay about creating data-driven systems without a nod to data dystopias. There are the obvious dangers: hackers stealing your medical records and blackmailing you, insurers refusing service because of genetic predispositions, governments denying treatment to political dissidents and so on. While they are important worries — it would be a disaster if a hacker changed your baseline heart rate during a cardiac treatment — in my view they are problems of the past, based on the model of the "all-seeing eye" or the panopticon.

There’s a difference between the world of meager data and the world of rich data.

A world of meager data is one where I don't know what's going on in your head. We are atomic individuals separated by infinite mental space. The all-seeing eye assumes a uniform space occupied by atomic souls who are mostly like each other. In that world, the panopticon appears either as a blessing or as a nightmare — a blessing if you're the religious type and like the idea of a divinity knowing every thought that crosses your mind, and a nightmare if you're the cynical type that doesn't want god or the government having access to your desires.

Note how both the blessings and the curses arise from the act of being “truly seen.”

In response, we created liberal democracies where the government knows some things about you but not too much, where insurance companies write policies based on the normal individual and where you have to be in prison or a totalitarian state to be completely exposed to the authorities. To summarize: meager data, health insurance and liberal democracy are a package that pleases many people much of the time.

I believe the world of rich data will be substantially different. Its prime worry (or blessing!) will not be whether I am being truly seen, but whether there's an I at all. There's no reason why the Snapchat self and the iHealth self and the iVote self are the same self or even feel like the same person. What if the experience of a unified self is an artifact of history?

What if the reason you feel that you vote, you work and you play is because you live in a time when you have privileged access to your internal states — as Descartes famously thought — while others have limited and indirect access to those states?

A useful way of thinking about technology is as an extension of our mental apparatus. Glasses are extensions of our visual system, hearing aids of the auditory system and equations of our conceptual capacities. But none of these do much computing. Imagine instead a third prosthetic arm that has as much computing power as your peripheral nervous system — what do you think it will do to your experience of the world? Or, for that matter, what happens when data platforms are as good at predicting what you will do next as you are?

In that world we might feel less like human beings of the past and more like octopi with eight arms, each of which has a mind of its own. Those arms usually act in unison but they don't have to. Sometimes they clearly don't. I don't know what it feels like when arm 7 and arm 3 go to war against each other and I have no way of stopping the fight. I certainly don't know what it's like when arm 6 votes for Trump and arm 1 votes for Hillary.

Let me leave you with a final question: what will it be like to build a global society around a multiple selved creature?

I am not sure if there’s a startup creating the Octopus empire, but there should be one.

The Failing State

Photographer: Jason Leung | Source: Unsplash

The nation state is the most successful and important social institution in the world. Anything larger — the EU for example — tends to be a technocratic exercise without emotional pull. Anything smaller lives at the mercy of the nation; Kashmir being this week’s illustration of that general principle. For most of us, a world map is a map of countries.

What else could it be?

Yet, the nation state is a relative newcomer in the history of the world. There were only 77 sovereign states in 1900 while there are 195 today. Most of the new entrants came during the era of decolonization with a few more thrown in when the Soviet Union and Yugoslavia collapsed.

As far as I can tell, there's no general principle that answers the question: "what's the basis for creating a state?" Geographical contiguity is a major plus (Pakistan at independence and the US today being prominent exceptions) but it's not a sufficient condition, for most national boundaries are between neighbors. Religion and language also help, but not always. The philosopher Ludwig Wittgenstein claimed that linguistic concepts were often family resemblance categories. His famous example was that of games: some are competitive, some are team sports and some are board games. There's no single definition that covers all games.

The concept of nationhood is also a family resemblance category, except that it's not just a concept. It's out there in the world, with real consequences. Close to the Wagah border, it's not clear to me whether I should draw the boundary between India and Pakistan — the current situation — or between Punjab and non-Punjab (which is how some people might want it). Identities aren't etched in stone even if national boundaries are.

Despite these philosophical conundrums, the list of states is relatively stable. We haven’t added many in the last few decades except for those that took on a sovereign identity after the Soviet collapse. In fact, the stability of nation states is so important that even when foreign powers meddle in their affairs, they don’t change territorial boundaries. Coups yes, Operation Iraqi Freedom yes, but don’t redraw the map please.

Stable boundaries are a good thing I suppose, especially when there’s a global consensus that we don’t use violence to change those boundaries. Unfortunately, this rigid designator is inadequate for many of our challenges.

Some challenges arise at the sub-national level, where the national identity has a hard time co-existing with sub-national identities. I don’t mean the liberal complaint that the nation can’t handle multiple identities, that I can’t be Tamil and Indian at the same time (yes, I can!). That challenge also exists but I am thinking of a different problem: the violence of the state and/or its citizens when they detect what they believe to be a betrayal of national loyalty. Why can’t we be better about juggling allegiances? Why can’t I be Indian while cheering on another country’s cricket team? There’s a strange residue of monotheism in the way we calculate national loyalties.

Consider Kashmir. I am betraying no confidence when I say that most valley Kashmiris want out of India while most of their Ladakhi and Jammu counterparts want to stay. It seems like the only way to “solve” that problem is by breaking the state apart and intensifying the military presence in the valley.

Nation states being what they are today, there’s no chance that India will allow the creation of an independent Kashmiri nation. But why is independence the only option? Or unending repression? We don’t have good models for the sharing of power, of multiple sovereignties.

Then there are the problems that arise at the supranational level because the reach of the sovereign nation is partial. While people have to be content with a primary sovereign, capital has no such allegiance. I have to apply for a visa to go to China, but my money can go there in a few seconds and come back a few weeks later as a computer. That works for me as a consumer, but creates real challenges for me as a worker, doesn't it? I can't switch jobs from one country to another while my employer can switch factories far more easily: Bangladesh today, Vietnam tomorrow and back to the US only when protectionist sentiment is high.

The mobility of capital (and the capitalist) vis-a-vis the immobility of labor is both a cause of the close relationship between the lords of industry and political elites across the world and caused by that relationship. Those cozy relationships at the top are the basis of a global cybernetic system in which finance plays the role of a controller (in the way that switches and steering wheels are controllers in a mechanical system) that sends signals to labor, which in turn moves matter from place A to place B upon receiving them.

Photographer: Andy Watkins | Source: Unsplash

Which leads to another problem. The further matter moves, the more energy it consumes. It doesn't take me any more effort to click a button and buy a widget from China than from the town next door, but the carbon footprint of the former is so much greater. Here the nation state conveniently plays exactly the opposite role that it plays in the labor-capital nexus. It enables the liberation and movement of carbon but prevents signals opposing that movement (international pressure, for example) from coming in. As the Amazon burns, the Brazilian government claims its sovereign right to do what it pleases with the lungs of the earth while criticizing international NGOs for creating trouble.

I am not saying the state will go away or even that it should, but it sure looks incapable of being the institutional form in which the wickedest problems of the times will be solved.

Money can’t buy me love

Photographer: Fabian Blank | Source: Unsplash

Man is the measure of all things. So sayeth Protagoras, ancient scientist. If you’re the religious kind you might condemn Protagoras for idolatry, for only God has the measure of all things. Or if you’re William Blake, you might condemn Isaac Newton for succeeding at the task.

Newton By William Blake — The William Blake Archive, Public Domain, https://commons.wikimedia.org/w/index.php?curid=198284

Before we get to Newton and Blake, let me make an important distinction between two kinds of measurements:

  1. Objective measurements. These are measurements of entities out there in the real world, where despite the possibility of error, there’s an underlying quantity being measured. My height is an example of an objective quantity; you will measure my height wrong if you have the wrong tape measure and I might add a couple of inches to it while creating a profile on a dating site, but we can all agree that there’s such a quantity as my height.
  2. Measurements of Exchange. Money is the best example. Let's say I want to appear taller than I am and I go out to buy a pair of platform shoes. How much should you charge me? Should a man who is 5'4'' pay the same amount of money to add 2'' to his height as a man who is 5'6''? If not, whom should you charge more? Height's objective, the increase in height is objective, but the money you charge for it isn't objective. The measurement of exchange value is variable by design.

The measurement of objective quantities is closely tied to precision calculations and mechanization. I had better measure the distance between my landing gear and the ground if I want my spaceship to land gently on the Moon's surface instead of crashing into it. The flip side of precision is a dismissal of quantities that can't be measured accurately.

Perhaps they don’t even exist!

In contrast, the measurement of exchangeable commodities is tied to assessments of value. Why does gold cost more than iron? Objective explanations only go so far. Is it scarcity? Not really, because my childhood drawings are scarcer and I bet you won’t pay any money for them. Is it because gold is hyper malleable and a good conductor? I am sure that plays a role, but advertisements have beautiful women wearing gold necklaces rather than highlighting the conductance of gold wires.

Exchange value can never be reduced to objective quantity.

That’s my reading of Blake, i.e., that measurement leaves out what’s most important about us. Perhaps, but there’s good reason to try measuring the most obdurate phenomena.

Most people living in modern societies work for money. How to value their labor? It's a real challenge, for strictly speaking, we are trying to compare apples and oranges. Material inputs and labor go in and widgets come out. Labor is nothing like the widgets it produces, and yet there must be a way to turn widget numbers into wages for labor, for without that conversion, we have no way of keeping the factory going.

If our factory is a cooperative, we might say:

  • we produced X widgets that are sold at Y dollars each;
  • it cost us Z dollars to buy the inputs and maintain the equipment and we need to carry another W dollars for future investments, insurance etc.
  • Since there are A of us, we will each get (XY-Z-W)/A.
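With made-up numbers plugged into that formula, the arithmetic is trivial:

```python
# Purely illustrative numbers for the cooperative's payout.
X, Y = 10_000, 5.0      # widgets sold, price per widget in dollars
Z, W = 20_000, 10_000   # input/maintenance costs, reserve for future investment
A = 20                  # members of the cooperative

payout_per_member = (X * Y - Z - W) / A
print(payout_per_member)  # 1000.0 dollars each
```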

That seems relatively easy. But what if there's one owner and A employees? How much should the owner get, and how much should the employees? Should the employees be paid a fixed salary, with all the profits going to the owner? If so, why?

What's the value of labor? What's a fair wage? We don't have unique answers to these questions because the measurement of exchange can't be reduced to the measurement of objective quantities. However, we have to live with an uneasy merger of facts and values because the alternative is even worse. To understand why, let's explore that age-old question:

Can money buy me love?

One of the universal myths of the modern world is that subjective qualities, emotions in particular, are immeasurable in both senses of that term, i.e.,

  • there’s no objective way of getting to my feelings and
  • there’s no price to be put on them.

In fact, so powerful is the myth that love's immeasurable that it sparked one of the most successful pricing campaigns in the history of modern advertising. Some relationships and feelings are beyond the reach of the accountant, but for everything else there's MasterCard.

The immeasurability of love reveals itself in all three spheres of human relationships:

  • Romantic love
  • Family relations
  • Friendships

Why does A fall in love with B? The myth comes with an answer: chemistry, “love at first sight.” Of course there’s something special about catching the eye of a person across a room and feeling a knot in your stomach when they look back with doubled energy. But who is likely to evoke that zing in the first place?

If you take romance novels as your guide, the answer is pretty clear: love at first sight is a lot easier if the other person is a born aristocrat with charming manners and the flawless skin that comes from a worry-less life. Money may not be able to buy love directly, but it sure tilts the scales in favor of the rich. In that, love is a lot like "merit," where entrance to Ivy League schools is theoretically open to the deserving of all races and classes but in practice favors the graduates of Phillips Andover.

The romance of familial relations is equally suspect. A mother’s love is supposed to be infinite and unquantifiable but in practice it means that women labor long hours to keep a family going without compensation. How can you charge for the immeasurable?

Even friendship isn’t immune to the pressures of the market, for we treat friends differently based on how much money they have. There’s a reason why Drona was deeply offended when Drupada treated him like a servant. It’s much easier to raise money for my next startup if I am rich and my friends are rich and they know even richer people.

My point is that the lack of measurement often leads to injustices of value. Every parent of multiple children has been told at some point or other that he loves child A more than child B. But what does more love mean exactly? If I say I love my children equally and you (i.e., one of my children) say that I love Jimmy more than I love you, how exactly can we resolve this problem? There isn't a final answer to this question, but we can all agree it's unfair if I will 80% of my wealth to Jimmy and only 20% to you for no other reason than that I love him more.

Photographer: George Pagan III | Source: Unsplash

All of this would be moot if love simply can’t be measured, but this is where abstract philosophical and scientific questions about the theory of measurement meet changing technological resources.

Until recently, emotion measurement was a rare affair. I knew how you were feeling only when I saw you or heard about you from a common friend. Aggregate data didn’t exist — there was no way even the richest advertiser could have gauged the feelings of her customers on a daily basis.

All of that has changed dramatically. We reveal our emotional states to platform companies and governments several times a day, perhaps several hundred times a day. As a consequence, they have excellent models of our emotional state and wellbeing. Instagram and Snapchat probably know when my daughter is going to have a fight with one of her friends even before she does.

That degree of access to emotions is clearly worth money and it’s reflected in the valuation of Facebook and other corporations. In fact, whether money buys love or not, it’s been able to buy hate at scale — and the electoral fortunes of Trump, Bolsonaro and Modi are testament to that success. The only way to counter that wave of hatred is if the measurement of love expands at a faster rate than the measurement of anger and if emotions more generally are made into a public resource rather than the property of private corporations.