The Buddha, peace be unto him, is famous for declaring there’s no self. Strictly speaking, he denied the existence of an abiding, permanent self, especially the metaphysical Atman of Brahmanical Hinduism. We are born, we grow into adulthood and then we pass away. Some think we restart that process in the next life. The Buddha says: one life or many, there’s no rock to tether the ship of existence.
The Buddha left out space in his calculations. Sure, there’s no single self over time, but what about having the same self in space? Are we the same person in every direction?
Every one of us experiences ourselves from the inside-out. We refer to ourselves as “I.” It’s commonly believed that we have unique access to that self, an experience of being me that no one else has, that there’s an inner door to a secret chamber that can only be opened by one key. Who else can tell me that I am in pain besides myself?
But there’s another self (or many selves) of which I am only partially aware. That’s the self others see and experience. Why do we assume these two selves to be the same? When my daughter asks me not to be upset with her, and I reply that I am not upset at all, is it possible that both are right? Is it possible there’s a MeMe that’s fully transparent to me and a YouMe that’s fully transparent to others and the two aren’t the same Me’s?
It’s much more likely that the two are somewhat consistent but far from identical. Which poses a problem for any autobiographical effort, because a recounting of MeMe can’t be passed off as a recounting of Me in general. The rich and the powerful have always had alternatives — they can hire people to write about their YouMe or, even better, if they are famous enough, others will write about them of their own volition.
The rest of us have to try hard to get others to talk to us for a few minutes, let alone write our praises. But even the most avid biographer doesn’t have access to my daily routine. In fact, I am too absorbed or distracted to fully grasp what I am doing. The wake of my passage is invisible to me. Fortunately, that data is being scooped up by our friendly neighborhood tech giant. If my data across various websites, social media properties and calendars were aggregated and fed to an automated story generation system such as Narrative Science, I might receive a half-decent autobiography in the mail every morning.
“Rajesh left home early yesterday morning. He caught the first train to South Station where he waited for the Acela for a full thirty minutes, during which he flipped between his Kindle and his phone. On the train he worked on the Acme report for the third time in as many days, changing most of the ten pages that he had written the day before.”
More suspense than my real life, for sure. I might even pay for that service. But why stick to the real world? Why not probe lives I have never lived and don’t plan to? Technology comes to the rescue once again. After all, most of my online explorations are funded by personalized ads trying to sell me a different future me. The same as every advertisement in the history of marketing, but personalization brings new opportunities to the creative autobiographer.
Paths not taken
Who does Facebook think I am?
In an attempt to understand myself through the eyes of Skynet, I have decided to take a screenshot of the first ad that Facebook inserts into my newsfeed every time I log in.
Hypothesis: If I take a screenshot every day for a hundred days I will learn more about who I am than a hundred years of Vipassana.
Just kidding, but I bet I will learn something. Don’t ask me what though, I am only on day 2.
Day 1: Today’s ad wants me to read like a CEO. Which is to say, not read at all but to get my staff to summarize it for me. Hey, at least I am better than Trump who doesn’t even read his summaries.
Sadly, I am going to pass. No $7 a month summary of business books for me. But the exercise frees up the imagination. Who is this CEO Rajesh? I’m thinking he wears a black suit every day. Except for Saturday, when he changes into a silk kurta to celebrate his pride for Mother India.
Day 2: Life is a roller coaster. Having rejected the offer to have summaries of business successes sent to my inbox, I must have missed a major opportunity while my competitors were making detailed notes. End result: I have been fired and my wife has left me.
Not to worry: DreamBuilder is here to rescue me from the jaws of failure.
If you read or watch any mainstream media source that deals with facts instead of imaginary threats, you will notice the constant invocation of two civilizational threats: automation and climate change. This is mainstream media btw, not leftie radical sources; you know we are in a genuine crisis when hunks on TV look you in the eye and say we are all going to die.
AI and climate change: one economic, the other ecological. One taking our jobs and the other destroying our home. I believe the two are actually the same, the worldly reflection of the platonic duality between information and energy. Unfortunately, while the mainstream is beginning to recognize the seriousness of our situation, they aren’t willing to take the necessary steps to adapt and flourish in the new world that’s being born.
The threat is recognized by the radicals knocking on the mainstream’s door: it’s increasingly common to say we need systems change. But who is going to do it, and what skills are needed to do so? I find that even the most trenchant critics of the current system have conventional views on how it needs to be transformed: they say we need a radical transformation of our societies, but they assume we already have the skills to do so; we only need the powerful obstacles to get out of the way and let the innate intelligence of people, especially young people, emerge from the shadows.
That’s a romantic thought but a false one. We need to cultivate a form of subversive intelligence that is attuned to the changing conditions. That cultivation needs conscious, collective effort.
What form should that subversive intelligence take?
I have some thoughts on that matter and I have even written about it on other occasions though it’s only today that I am using the term “subversive intelligence” to describe the mindsets we need to cultivate. Here are a couple:
Take those writings and readings with a grain of salt though; chances are much of what we read today will be flawed in its presentation of the world to come, just as the writers of the early industrial era couldn’t have predicted our capacity to order a computer from China with a click or two.
Take that uber-pinko Karl Marx. He started writing his famous book in the early days of capitalism. According to that canonical source of truth, i.e., Wikipedia, James Watt’s steam engine was invented between 1763 and 1775. Marx and Engels wrote the Communist Manifesto in 1848. In other words, somewhere between 73 and 85 years after the steam engine. Meanwhile, the first functioning electronic computer, i.e., ENIAC, was completed in 1945, so we are 73 years past the deployment of that technology. Why am I saying this? When Marx and Engels wrote their pamphlet, industrial capitalism was just beginning to show its impact on England and Europe. 1848 was also the year of social unrest across Europe, but it was a long way from two world wars, several revolutions, decolonization and all the other consequences of the mechanical age. Nevertheless, they were right in pointing out that industrial capitalism was a really big deal and that it would change the world.
Similarly, we are at a relatively early stage in the development of intelligent capitalism, i.e., capitalism powered by information and machine learning. Not so coincidentally, we are also at an early stage of panic over climate change and ecological collapse more broadly. The two go together. We may or may not agree with Marx’s vision, but he was absolutely right (and he wasn’t alone in saying so) in pointing out that the real impact of industrial capitalism wasn’t in the new gadgets and gizmos that enter our lives but in the social relations transformed through this influx. Global capitalist society is nothing like the pre-industrial societies it has replaced.
Intelligent capital will cause an equally dramatic shift in life, liberty and the pursuit of happiness, even as individual gadgets come and go. Some of the symptoms of this shift are already upon us: we know surveillance is going to be big, automation bigger and climate change is going to be huge.
For one, it’s not just social relations that will change this time. Natural relations, i.e., the relationship between humans and other beings on earth, and the relations among those other beings themselves, will also change. Actually, natural relations have already changed. What else do we mean by the Anthropocene? What does it mean when nearly half of the world’s habitable land is used for agriculture?
I think it’s only a matter of time before we consider all earthly activities as part of the human system, which is to say that the earth system and the human system are increasingly going to merge. Is this a good thing? A bad thing? Before we rush to judgment, let’s first try to understand the levers that control these systemic changes.
Really. I have resolutely left-wing sympathies, but the honest thing is to understand this new condition before passing judgment, especially if our long-term goal is to gaze into the crystal ball and, in doing so, unleash genuinely transformative forces. But that’s a ways away.
Some more snippets:
– Life in knowledge societies is mediated by flows of information and the networks that host those flows. It’s impossible to imagine making a simple widget without information mediation, let alone a complex product like a phone or an airplane. It’s equally impossible to imagine life without constant sharing of personal data and constant surveillance by corporations and nation states. Information technologies are technologies of living par excellence.
– In fact, no Stalinist state ever achieved the level of intrusion into people’s lives that we see today, with personal data willingly shared and aggregated via social media. Informational life spawns many worries, such as:
The Future of Work: some are worried that robots will take our jobs. Others are worried that capitalists will use the threat of automation to reduce wages in the few jobs that remain.
Full Spectrum Surveillance: our lives are monitored and monetized second by second; further, surveillance fragments our working lives so that we can work for Uber in the morning and Walmart in the afternoon.
Inequality Amplification: we are less likely to have data about the needs of underprivileged and marginal communities and people in those communities are even less likely to have the skills to make use of that data. Data poverty threatens to combine with larger concerns over automation to increase inequality.
Let’s not forget the utopian imaginations of abundance, of a life devoted to creation and enjoyment as machines perform all the drudgery. We can’t discount the power of this artificial city on the hill. If AI and Data spawn apocalyptic and utopian visions, we need a liberation theology to bring that vision to the people. That’s the driving ambition of subversive intelligence.
I have a firm leg in the doom and gloom camp. My friends alternate between sending images of the burning Amazon and pictures of Amazon — the company, not the lungs of the planet — replacing all jobs with robots. Not that I mind; all that misery gives my optimist brain something to push against.
So when someone asks me if startups can save the world, my first response is: of course. Five seconds later, I change my mind to: you must be kidding! Back and forth, here’s how I argue with myself:
Me: How can startups change the world? They are tiny and the world’s problems are huge!
Me: Yes, but when they become bigger, they are no different from the other Death Star corporations sucking the life out of the planet.
Myself: but we can invent a new type of startup that’s less Death Star and more Jedi Knight. Startups don’t have to be about profits any more than factories have to manufacture cloth and nothing else.
and so it goes.
There’s something about how startups harness psychological energy and navigate uncertainty that appeals to me, and increasingly, we have data that tells us what works when people come together for a common purpose. Why not use it to make the world a better place?
To paraphrase a man who first gave me hope and then disappointed me: yes we can.
It’s never been easier to go from idea to implementation to profit. Everyone from Y Combinator to my neighborhood angel investor is waiting to turn blood, sweat and tears into 💵. Unfortunately, saving the world is mostly not a matter of 💵. It’s about putting people and planet before profits. Here are three of my favorite world-savers:
Not a single businessman in that panorama. They were all politicians. That’s cuz politics is the most important method through which we have saved the world in the last two hundred years — both liberators and dictators have been politicians.
Note: none of them is a woman. I apologize for that snub to half the earth’s population. Patriarchy inserts itself into the STW business.
While the US doesn’t encourage political startups, i.e., new political parties, it’s common in other parts of the world for parties to be formed, especially when there are classes of people whose needs aren’t being met by any of their current choices. Sometimes those new parties win elections and become political corporations or even one-party states. Political monopolies suck even more than business monopolies.
The good news is there are more forms of political entrepreneurship than we can imagine, so why only political parties, why not other political startups? What about international political associations around global topics such as climate change — do we really think such wicked problems can be solved by middle-aged women and men sitting in the U.N. General Assembly? If you believe so, I have a couple of bridges I want to sell you.
I will tell you why I like startups. They are the most robust institution we have devised for collective action amidst uncertainty. Corporations and governments are good at delivering solutions that work, but only startups are good at finding out what works in the first place. How will Indian farmers handle shorter, more intense monsoons? I don’t know, but I bet there’s a startup somewhere that will come up with a good idea.
Please stop thinking a startup is only about money.
We should divorce the idea of the startup from its capitalist origins, just as factories arose in the capitalist world but spread to societies where the government ran all forms of production. I am not saying one’s better than the other, just that startups and factories are flexible institutions capable of doing any number of different things.
I am 100% certain that the future of the human species is bursting with uncertainty — pun intended. We have to become adept at navigating chaos, and for that, “startup thinking” is an essential quality. Only if it combines politics, engineering and design, though.
I went to an engineering school but I didn’t study engineering. In fact, I have stayed away from engineering my entire life despite being a geek. Cooking, yes. Sports, yes. Maker spaces: of course. But not engineering.
That’s because engineering struck me — perhaps wrongly — as being focused on the small picture, of making this widget in front of me work without caring about its connection to the wider world and damn the consequences. It wasn’t a discipline that encouraged a philosophical bent of mind. Engineering has undoubtedly changed the world, but it has done so without taking responsibility for that change.
In contrast, politics always struck me as closely tied to philosophy; to paraphrase Marx, the point is not to study the world but to change it. And there’s no shortage of political writing as to why we should change the world this way rather than that way. Politics, however disagreeable, takes responsibility for changing the world, which is why the metaphorical levers of politics were better suited to my theory of change than the mechanical levers of engineering.
Of late, I have been feeling that the division between politics and engineering is disappearing. Both are technologies that create human artifacts in response to individual and collective needs. Until recently, it was easier to encode big-picture goals in political technology than in engineering technology. For example:
Do you think democracies should have a separation of powers between those creating policy and those enacting it? Solution: create separate institutions for the two purposes.
Today, similar decisions are being made in engineering technology. Want to create a platform in which the platform owner doesn’t have an undue advantage over other participants? Make sure their business development wing has access only to the same data as any other business using the site. API design can encode ethical features and value judgments in a manner unthinkable fifty years ago.
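To make that concrete, here is a minimal sketch of what encoding such a value judgment in an API could look like. Everything here is hypothetical: the class, the field names and the client IDs are invented for illustration. The design choice is that a single gateway mediates all data access, and the owner’s business development team goes through the same gate as every external client.

```python
# Hypothetical sketch: one access path for everyone, so the platform owner's
# business development team sees exactly what external clients see.
PUBLIC_FIELDS = {"aggregate_usage", "category_trends"}

class DataGateway:
    def __init__(self, records):
        # Raw per-user records never leave this class.
        self._records = records

    def query(self, client_id, field):
        # No special branch for the owner's own team: every client,
        # internal or external, is limited to PUBLIC_FIELDS.
        if field not in PUBLIC_FIELDS:
            raise PermissionError(f"'{field}' is not exposed to any client")
        return self._records[field]

gw = DataGateway({
    "aggregate_usage": 1234,
    "category_trends": ["apps", "games"],
    "per_user_history": {"user_1": ["login", "purchase"]},
})
print(gw.query("external_startup", "aggregate_usage"))  # 1234
try:
    # The owner's BD team gets the same denial as anyone else.
    gw.query("platform_bd_team", "per_user_history")
except PermissionError as err:
    print(err)
```

The value judgment lives in the code path itself: there is simply no API through which privileged access could be requested.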
The reason politics and engineering are coming together is code — and I use that term in the broadest sense. Political technology has always been based on text: constitutions, policy briefs, white papers and such. Engineering technology has been based on things: steam engines and marble tiles. Code functions both as text and as thing. That’s a huge transition in how we change the world. We are just scratching the surface of that revolution.
That realization got me thinking about problems that should be solved simultaneously as engineering products and political policy, with solutions exhibiting a combination of good design, good data and deep concern for social implications. Technology that pays attention to the forest and the trees. I am on the lookout for such “forestrees.”
Here’s the first.
Halfway through the first game of the season on Saturday, my daughter took a soccer ball to the face. She continued playing for a couple more minutes before her nose started bleeding, at which point she had to leave the field and sit out the rest of the game. This being the United States, a trainer checked her out and suggested she might have the mildest of concussions, which meant no more games that weekend. Fortunately, she was fine the next morning and ended up playing on Sunday.
I love my daughter more than anything in the universe but this essay isn’t about her. It’s about how she was assessed for a potential concussion. She was checked by trainers three times on Saturday and Sunday. I noticed that on all three occasions, the trainer whipped out her phone and used the phone camera to make an assessment.
There are a couple of concussion assessment apps in the iOS App Store but none of them are fancy — they are just a list of protocols to follow, including making the player stand on one foot, move their arms in set patterns and so on. It looks quite crude if you ask me, though arguably optimized for assessment by a young person with little experience.
I asked myself if we have better signals for concussion.
What about eye movements or other neuromuscular signatures? A quick Google search led me to this paper which says that disconjugate eye movements (i.e., when the two eyes don’t move in synchrony) are present in more than 90% of concussion and blast victims. I am not sure trainers have the medical training to detect disconjugate eyes, especially if lighting conditions aren’t good. Disconjugation detection (DCD for short) might be too hard for untrained human beings.
But we are forgetting that camera. It seems underutilized — I saw the trainers shining it into my daughter’s eyes one eye at a time. DCD needs to process the signal from both eyes at once for it to work — after all, we want to find out if they are moving in unison.
Let’s eliminate what I think of as the easy case: in a severe concussion, the two eyes are more likely to be completely out of sync. A severe concussion is likely to result from a major collision, either on a sports field or in an automobile. Those aren’t the cases I am thinking about, since they will be referred to emergency care right away. Concussions from a minor incident on a children’s playground, or from an elderly person falling in a bathroom, are harder problems to solve, and the solution has to be in your pocket.
Phone is all we got.
An optical problem has to be solved first: a robust method for detecting eye movements from both eyes. There has to be a way of sweeping a phone camera in front of someone’s eyes so that it picks up the eye movement signals from both eyes at once. It’s a technical challenge because the signal is masked by an enormous amount of noise: jitter because of shaky hands, changing reflection patterns because of blinking eyes and head movements, changes in light sources if clouds block the sun and so on.
Fortunately, we have a clean separation of movements:
There’s the relatively smooth movement of my arm as I scan the camera in front of your eyes. Assuming that the light source from my phone camera is the only light that’s changing in intensity — ambient light from the sun or artificial lighting being assumed constant — the light reflected back from your eyes is going to be a smooth function of my hand movements. Further, smartphones now have motion sensors so we can use hardware to detect and filter out movements initiated by the person holding the camera.
There’s the jerky movement of your eyes as they saccade and change focus, and every time that happens, there’s a sharp change in light intensity. There are also the jerky movements of your eyelids blinking, but blinks happen at a slower rate than eye movements and are an up-and-down movement, while saccades have two degrees of freedom.
I am betting on a relatively clean separation of signal (eye movements) from noise (camera movement, head movements, blinks etc.). In short, while there are genuine technical difficulties, I am reasonably confident that the signal detection problem can be solved. But once we have the two-channel signal — one channel for each eye — we are left with an inference problem: how do I know when a signal indicates concussion?
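Here is a toy illustration of that separation, under the assumptions above: slow hand drift is roughly linear over a short window, while a saccade is an abrupt jump. First-differencing the trace, a crude high-pass filter, turns the drift into a small constant and the saccade into a single spike. The numbers are synthetic, not real eye-tracking data.

```python
def detrend(signal):
    """First-difference the signal: a crude high-pass filter that
    suppresses slow drift (the hand sweep) and preserves jerky jumps
    (the saccades)."""
    return [b - a for a, b in zip(signal, signal[1:])]

# Synthetic trace: a slow linear drift from the hand sweep, plus an
# abrupt jump between samples 49 and 50 standing in for a saccade.
n = 100
drift = [0.05 * t for t in range(n)]     # smooth hand/camera motion
saccade = [0.0] * 50 + [2.0] * 50        # sharp jump mid-trace
trace = [d + s for d, s in zip(drift, saccade)]

diffs = detrend(trace)
# After differencing, the drift contributes a constant 0.05 everywhere,
# while the saccade shows up as one large spike at index 49.
spike = max(range(len(diffs)), key=lambda i: abs(diffs[i]))
print(spike)                  # 49: the saccade's location
print(round(max(diffs), 2))   # 2.05: the saccade's magnitude plus drift
```

Real traces would of course need more than differencing (the phone’s motion sensors, blink rejection), but the principle is the same: smooth components die under a high-pass filter, jerky ones survive.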
The simplest kind of processing that can be done on the two-channel signal is a summary statistic, such as the correlation between the two channels. Disconjugate eyes will have lower correlation between the two channels than normal ones. If we are happy with a simple diagnostic, this is all we need to do: set a correlation threshold and slot anyone who falls below it for medical intervention.
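As a sketch, that diagnostic amounts to a few lines. The threshold value and the traces below are made up for illustration; any real cutoff would have to come from clinical data.

```python
from statistics import mean

def correlation(x, y):
    """Pearson correlation between two equal-length channels."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

CORRELATION_THRESHOLD = 0.6   # made-up cutoff, purely illustrative

def flag_for_intervention(left_eye, right_eye):
    """Conjugate eyes track each other, so low correlation is the red flag."""
    return correlation(left_eye, right_eye) < CORRELATION_THRESHOLD

# Conjugate eyes: the two channels move together.
left  = [0.0, 0.5, 1.0, 0.4, -0.2, 0.1, 0.9, 1.2]
right = [0.1, 0.6, 0.9, 0.5, -0.1, 0.0, 1.0, 1.1]
print(flag_for_intervention(left, right))      # False

# Disconjugate eyes: the right channel wanders independently.
right_bad = [1.0, -0.5, 0.2, 1.3, -0.9, 0.8, -0.4, 0.3]
print(flag_for_intervention(left, right_bad))  # True
```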
That, by the way, is the nature of most medical interventions based on bodily indicators. If I go to my doctor’s office with a test result and my blood pressure, blood sugar or cholesterol is above a certain threshold, they will likely talk to me about further testing. If the statistic is in a band that’s not too low or too high, they will talk to me about my diet and exercise regime and suggest changes if necessary. Otherwise, the machine’s working as it should and I go home happy.
But we can do better than that today, can’t we?
There are several problems with the simple diagnostic. Let me mention two:
It’s not personalized: my body might disconjugate at a lower contact threshold than yours. Even if there’s momentary disconjunction, your body might recover more quickly from it than mine. If disconjunction is a transient signal, how do we know when it’s a reliable indicator of concussion?
More generally, signs of concussion might be hidden in higher-order statistics instead of simple correlations. If so, noise will prevent us from extracting those higher-order statistics from a single observation.
The alternative is to go for a top-down approach based on extensive data collection. If I collect my eye movement data over time, the system will learn the typical conjugation between my eyes and how it changes with exertion, time of day and other variables. With a robust data set like that in the background, we can be much more confident about when a genuine concussion is the cause of disconjunction. Instead of creating a simple summary statistic and basing our diagnosis on that alone, we can create a Bayesian concussion detector that answers the question:
How likely is it that X has a concussion based on the record of her eye-movements?
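Here is a minimal sketch of that update for a single binarized observation: was the measured conjugacy below this person’s baseline band? The prior and the false-positive rate are invented for illustration; only the 90% detection figure echoes the paper cited above.

```python
def posterior_concussion(low_conjugacy_observed, prior=0.05,
                         p_low_if_concussed=0.90,  # from the paper's >90% figure
                         p_low_if_healthy=0.08):   # invented false-positive rate
    """Bayes' rule on one binary observation: conjugacy below baseline?"""
    if low_conjugacy_observed:
        like_c, like_h = p_low_if_concussed, p_low_if_healthy
    else:
        like_c, like_h = 1 - p_low_if_concussed, 1 - p_low_if_healthy
    num = like_c * prior
    return num / (num + like_h * (1 - prior))

# One low-conjugacy reading after a hit moves a 5% prior to roughly 37%.
print(round(posterior_concussion(True), 2))   # 0.37
# A normal reading pushes the probability below 1%.
print(round(posterior_concussion(False), 3))  # 0.006
```

With a longitudinal record, the likelihoods would come from the person’s own history rather than population guesses, which is exactly the point of collecting the data.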
Detection accuracy will obviously improve if the system has access to eye movement data of thousands of soccer playing children. Having that data in the background will also help diagnose whether my child’s post-concussion recovery is on track.
Where is her disconjunction one week post concussion relative to the population average?
Should we be looking at a more intensive check-up?
Every trip to the emergency room costs money and leads to higher insurance premiums. We want to base any decision to send a child to an emergency room on the most robust data we have on hand. Longitudinal data is better than sporadic data.
Not that you need convincing, but there’s no shortage of advances in health and wellbeing that need repositories of biological data, from eye movements to cholesterol levels. But, and it’s a HUGE but: the possibility of exploitation, control and oppression is so much greater when data are collected and made available to corporations and governments. In order to avoid Big Brother, platform design should encode a “fair use” policy with respect to all the data hosted on its premises.
To put it in crude terms: whose data is it?
First: who will create such a data set, and if I create it, do I own it? Let’s start with the latter — the creator of the data set is the owner, which is the current default. Since data is supposedly the new oil, it’s no surprise there’s a rush to grab as many valuable data sets as possible, leading to all kinds of problems. Search monopolies are bad enough, but we certainly don’t want health data monopolies.
Let’s say Startup A raises a ton of VC money and creates a comprehensive eye-movement database whose API is used by Startup B for concussion detection and Startup C for dyslexia monitoring. Two years down the road, Startup A releases its own concussion detection app, competing directly with Startup B. What’s B to do? How does an application company compete with a platform company?
There might be a programmatic solution — as I mentioned in the introduction, we can design APIs that prevent the business development side of Startup A from having access to any data beyond what its users (Startups B and C in this case) have access to. But can API modularity be enforced without regulation?
I doubt it.
Also, platforms keep evolving. Imagine that Startup A discovers that while the market for concussion and dyslexia apps is individual parents and teams, hospitals and HMOs are an ideal market for the platform as a whole. What does A do? Make an offer to B and C that they can’t refuse? Enter into a complicated revenue sharing model?
Platform monopolies are even more entrenched than widget monopolies — the dominance of the FAANG platforms being a case in point. Despite the popular slogan, data isn’t oil; it’s not a resource that disappears after being used once. Instead, it gets more valuable with time and accretes more uses.
Which makes data prone to platform monopolies since platforms are designed for current as well as future uses — once you list all the books in the world, you can sell them yourself, offer space for others to sell, convert them into ebooks sold by your company or direct the customer to a competing book that has a higher rating on your own system of ratings. I can’t see a future in which privatized data is good for anyone besides the monopolist.
How will that work out in healthcare? If we want to avoid monopolization, we should keep the data open, say through the creation of a platform commons. That leads to another challenge: who is going to pay for such a platform? It’s not like creating an open database of cat videos — the regulatory demands of collecting and storing biological data will make such platforms prohibitively expensive for your typical non-profit.
Is the only financially and politically viable solution to socialize the data? Which is to say, governments pay for the creation and maintenance of health data repositories and own the platform. Government ownership has its own challenges, especially in countries where citizens don’t have political control over what happens to their data. Which, to be honest, is the case in most liberal democracies, let alone authoritarian regimes.
Plus, what do you think are the chances of a government creating a high-quality platform? It might be possible in a small and rich country like Sweden, but the U.S. health care debacle suggests that creating universal health data systems in a large and diverse nation is an incredibly hard problem to solve.
What is to be done?
I have a utopian answer: data should be a universal resource like time. Clocks became important after the industrial revolution and transcontinental railroads, but there were thousands of time-measures in the early days of mechanical time-keeping. As this article says,
When the Union Pacific and the Central Pacific Railroad formed the Pacific Railroad, later called the transcontinental railroad, more than 8,000 towns were using their own local time and over 53,000 miles of track had been laid across the United States. Railroad managers and supervisors well understood the problems caused by so many discrepancies in time keeping.
There could have been many ways of solving the problem of time standardization:
Let railroad mergers dictate time mergers so that at the end of the process there are a few private time companies in the world that own my time and your time.
Let the government own the time — and tax you for owning a watch😏
Create an international standard for time that no one owns or even thinks of owning.
Aren’t we glad we chose the third option — can you imagine time being owned by Acme corp instead of being an international standard? Why can’t we do that with data? Why can’t we have universal, secure data systems that aren’t owned by anyone and enable any number of products?
One international data standard that stores and maintains all the important data in the world. Cue photos of smiling children and dogs playing in the sunshine.
You may not agree with my solution — feel free to leave yours in the comments, but I hope you’re convinced that:
It’s hard to design platforms that serve our needs today and in the future.
We need both engineering and political technologies to design such platforms.
Now for the philosophical climax of today’s program…
I can’t end an essay about creating data-driven systems without a nod to data dystopias. There are the obvious dangers: hackers stealing your medical records and blackmailing you, insurers refusing service because of genetic predispositions, governments denying treatment to political dissidents and so on. These are important worries — it would be a disaster if a hacker changed your baseline heart rate during a cardiac treatment — but in my view they are problems of the past, based on the model of the “all-seeing eye” or the panopticon.
There’s a difference between the world of meager data and the world of rich data.
A world of meager data is one where I don’t know what’s going on in your head. We are atomic individuals separated by infinite mental space. The all-seeing eye assumes a uniform space occupied by atomic souls who are mostly like each other. In that world, the panopticon appears either as a blessing or as a nightmare — a blessing if you’re the religious type and like the idea of a divinity knowing every thought that crosses your mind, a nightmare if you’re the cynical type that doesn’t want god or the government having access to your desires.
Note how both the blessings and the curses arise from the act of being “truly seen.”
In response, we created liberal democracies where the government knows some things about you but not too much, where insurance companies write policies based on the normal individual and where you have to be in prison or a totalitarian state to be completely exposed to the authorities. To summarize: meager data, health insurance and liberal democracy are a package that pleases many people much of the time.
I believe the world of rich data will be substantially different. Its prime worry (or blessing!) will not be whether I am being truly seen, but whether there’s an I at all. There’s no reason why the Snapchat self and the iHealth self and the iVote self are the same self or even feel like the same person. What if the experience of a unified self is an artifact of history?
What if the reason you feel that you vote, you work and you play is because you live in a time when you have privileged access to your internal states — as Descartes famously thought — while others have limited and indirect access to those states?
A useful way of thinking about technology is as an extension of our mental apparatus. Glasses are extensions of our visual system, hearing aids of the auditory system and equations of our conceptual capacities. But none of these do much computing. Imagine instead a third prosthetic arm that has as much computing power as your peripheral nervous system: what do you think it will do to your experience of the world? Or, for that matter, what happens when data platforms are as good at predicting what you will do next as you are?
In that world we might feel less like human beings of the past and more like octopuses with eight arms, each of which has a mind of its own. Those arms usually act in unison but they don’t have to. Sometimes they clearly don’t. I don’t know what it feels like when arm 7 and arm 3 go to war against each other and I have no way of stopping the fight. I certainly don’t know what it’s like when arm 6 votes for Trump and arm 1 votes for Hillary.
Let me leave you with a final question: what will it be like to build a global society around a multiple selved creature?
I am not sure if there’s a startup creating the Octopus empire, but there should be one.
Man is the measure of all things. So sayeth Protagoras, ancient scientist. If you’re the religious kind you might condemn Protagoras for idolatry, for only God has the measure of all things. Or if you’re William Blake, you might condemn Isaac Newton for succeeding at the task.
Before we get to Newton and Blake, let me make an important distinction between two kinds of measurements:
Objective measurements. These are measurements of entities out there in the real world, where despite the possibility of error, there’s an underlying quantity being measured. My height is an example of an objective quantity; you will measure my height wrong if you have the wrong tape measure and I might add a couple of inches to it while creating a profile on a dating site, but we can all agree that there’s such a quantity as my height.
Measurements of Exchange. Money is the best example. Let’s say I want to appear taller than I am and I go out to buy a pair of platform shoes. How much should you charge me? Should a man who is 5’4’’ pay the same amount of money to add 2’’ to his height as a man who is 5’6’’? If not, which one should you charge more? Height is objective, the increase in height is objective, but the money you charge for it isn’t. The measurement of exchange value is variable by design.
The measurement of objective quantities is closely tied to precision calculations and mechanization. I better measure the distance between my landing gear and the ground if I want my spaceship to land gently on the Moon’s surface instead of crashing into it. The flip side of precision is a dismissal of quantities that can’t be measured accurately.
Perhaps they don’t even exist!
In contrast, the measurement of exchangeable commodities is tied to assessments of value. Why does gold cost more than iron? Objective explanations only go so far. Is it scarcity? Not really, because my childhood drawings are scarcer and I bet you won’t pay any money for them. Is it because gold is hyper malleable and a good conductor? I am sure that plays a role, but advertisements have beautiful women wearing gold necklaces rather than highlighting the conductance of gold wires.
Exchange value can never be reduced to objective quantity.
That’s my reading of Blake, i.e., that measurement leaves out what’s most important about us. Perhaps, but there’s good reason to try measuring the most obdurate phenomena.
Most people living in modern societies work for money. How to value their labor? It’s a real challenge, for strictly speaking, we are trying to compare apples and oranges. Material inputs and labor go in and widgets come out. Labor is nothing like the widgets it produces, yet there must be a way to turn widget numbers into wages for labor, for without that conversion, we have no way of keeping the factory going.
If our factory is a cooperative, we might say:
we produced X widgets that are sold at Y dollars each;
it cost us Z dollars to buy the inputs and maintain the equipment and we need to carry another W dollars for future investments, insurance etc.
Since there are A of us, we will each get (XY-Z-W)/A.
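The cooperative’s arithmetic can be sketched in a few lines of Python. All the numbers below are hypothetical, chosen only to make the formula concrete:

```python
def member_share(x_widgets, y_price, z_costs, w_reserve, a_members):
    """Each member's equal share of net revenue: (X*Y - Z - W) / A."""
    return (x_widgets * y_price - z_costs - w_reserve) / a_members

# Hypothetical numbers: 1000 widgets at $5 each, $2000 in costs,
# $500 held back for future investments, split among 10 members.
share = member_share(x_widgets=1000, y_price=5.0,
                     z_costs=2000.0, w_reserve=500.0, a_members=10)
print(share)  # 250.0
```

The formula only settles the question for a cooperative, where equal shares are assumed; as the next paragraph notes, nothing in the arithmetic tells you how to split the surplus between an owner and employees.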
That seems relatively easy. But what if there’s one owner and A employees? How much should the owner get, and how much the employees? Should the employees be paid a fixed salary, with all the profits going to the owner? If so, why?
What’s the value of labor? What’s a fair wage? We don’t have unique answers to these questions because the measurement of exchange can’t be reduced to the measurement of objective quantities. However, we have to live with an uneasy merger of facts and values because the alternative is even worse. To understand why, let’s explore that age old question:
Can money buy me love?
One of the universal myths of the modern world is that subjective qualities, emotions in particular, are immeasurable in both senses of that term, i.e.,
there’s no objective way of getting to my feelings and
there’s no price to be put on them.
In fact, so powerful is the myth that love’s immeasurable that it sparked one of the most successful pricing campaigns in the history of modern advertising. Some relationships and feelings are beyond the reach of the accountant, but for everything else there’s MasterCard.
The immeasurability of love reveals itself in all three sectors of human relationships:
Why does A fall in love with B? The myth comes with an answer: chemistry, “love at first sight.” Of course there’s something special about catching the eye of a person across a room and feeling a knot in your stomach when they look back with doubled energy. But who is likely to evoke that zing in the first place?
If you take romance novels as your guide, the answer is pretty clear: love at first sight is a lot easier if the other person is a born aristocrat with charming manners and the flawless skin that comes from a worry-less life. Money may not be able to buy love directly, but it sure tilts the scales in favor of the rich. In that, love is a lot like “merit,” where entrance to Ivy League schools is theoretically open to the deserving of all races and classes but in practice favors the graduates of Phillips Andover.
The romance of familial relations is equally suspect. A mother’s love is supposed to be infinite and unquantifiable but in practice it means that women labor long hours to keep a family going without compensation. How can you charge for the immeasurable?
Even friendship isn’t immune to the pressures of the market, for we treat friends differently based on how much money they have. There’s a reason why Drona was deeply offended when Drupada treated him like a servant. It’s much easier to raise money for my next startup if I am rich and my friends are rich and they know even richer people.
My point is that the lack of measurement often leads to injustices of value. Every parent of multiple children has been told at some point or another that he loves child A more than child B. But what does “more love” mean exactly? If I say I love my children equally and you (i.e., one of my children) say that I love Jimmy more than I love you, how exactly can we resolve this problem? There isn’t a final answer to this question, but we can all agree it’s unfair if I will 80% of my wealth to Jimmy and only 20% to you for no other reason than that I love him more.
All of this would be moot if love simply can’t be measured, but this is where abstract philosophical and scientific questions about the theory of measurement meet changing technological resources.
Until recently, emotion measurement was a rare affair. I knew how you were feeling only when I saw you or heard about you from a common friend. Aggregate data didn’t exist — there was no way even the richest advertiser could have gauged the feelings of her customers on a daily basis.
All of that has changed dramatically. We reveal our emotional states to platform companies and governments several times a day, perhaps several hundred times a day. As a consequence, they have excellent models of our emotional state and wellbeing. Instagram and Snapchat probably know when my daughter is going to have a fight with one of her friends even before she does.
That degree of access to emotions is clearly worth money and it’s reflected in the valuation of Facebook and other corporations. In fact, whether money buys love or not, it’s been able to buy hate at scale — and the electoral fortunes of Trump, Bolsonaro and Modi are testament to that success. The only way to counter that wave of hatred is if the measurement of love expands at a faster rate than the measurement of anger and if emotions more generally are made into a public resource rather than the property of private corporations.
I remember Reagan saying to Gorbachev, “Tear down this wall.” Sorry, that’s fake news. Or at least white-lie news. There’s no way I could have heard a live speech in West Berlin in 1987. It was probably past my bedtime in Delhi. I also have a memory of reading about it in some magazine or the other. Perhaps Time. Perhaps Newsweek. Or because it was international news, I might have even read it in an Indian magazine like India Today. Frankly, since the speech gained its fame posthumously, after the wall actually fell, there’s a good chance that all my memories are from reading about the event years later. When I say “I remember Reagan saying…,” I mean that the perceived importance of the event, combined with my imagination, has created a vivid “memory” of it.
Well, most memory is like that. We don’t store the facts as is; instead we compress and transform every event to suit our needs. Selective understanding is crucial to living a sane life today, when we are deluged with information 24/7.
So what is a true memory?
There’s a famous thought experiment in epistemology called the Gettier paradox. Here’s a version I like:
Imagine you’re watching the 1984 Wimbledon finals with McEnroe facing Connors. Unfortunately, the broadcaster has lost contact with his TV van and doesn’t have a live feed anymore. Someone has a clever idea: why not broadcast a recording of the 1982 final instead which had the same cast?
So you’re watching the 1982 final while thinking you’re watching the 1984 final. In this version Connors wins. You go to sleep thinking Connors has won. Let’s say that Connors won the 1984 final (actually, McEnroe won in 1984; for the record, I supported Connors) and when you open the newspaper in the morning, you read the headline “Connors defeats McEnroe again.”
Your belief that Connors has won is a true belief despite being arrived at via a flawed route. Something is wrong when you can arrive at true beliefs through mistaken means, isn’t it? Of course, Gettier’s thought experiment is a contrived situation. How likely is it that exactly the same type of prior event is available as a substitute for an actual one?
Tennis match twins might be hard to find, but the use of memories as evidence is all too common — in testimony, in arguments between spouses, in storytelling. When I tell the jury that I saw that man pull the trigger, what if I never saw him shoot the victim? What if I am combining the knowledge that the man is a known hoodlum, the actual experience of shots being fired and the headlines in the local newspaper?
Here’s the question: even if the man was the murderer, is my testimony valid? Further, if much testimony is confabulation, is any testimony valid? Especially in a murder trial where the jury is one color and the defendant another? And the final dystopian possibility: what if our social media feeds are full of posts that prime our memories to be one way rather than another? Can we trust our own minds?
I want to explore that internal dystopia in future essays. For example:
Can technology help us certify memories? What would a process of certification look like? Let’s say it takes the form of “bitcoin meets the brain.” Is that a techno-utopia or a techno-dystopia?
But we aren’t there yet. I am still a few decades behind that brave new world. But it does seem as if every utopia becomes a dystopia sooner or later, only to be replaced by the next utopia. Let’s start with 1945. The second world war had just ended. Tens of millions dead, entire populations genocided, atom bombs burst.
Never again, they said. Let’s form the United Nations and give a seat at the table to everyone — some more prominently than others, i.e., those who were on the winning side of WWII. Decolonization started in earnest; India and Pakistan became independent in 1947, though that utopian moment happened in parallel with its own dystopian partition, whose effects we feel to this day.
Anyway, the European powers who brought us two world wars lay defeated; even the victors. In their stead were two confident new powers: the United States and the Soviet Union. Each had its theory of progress, of delivering material prosperity to its citizens and eventually the world. When Atomic Energy Commission chairman Lewis Strauss said energy would become too cheap to meter, we believed him. Unfortunately, that energy can flow smoothly out of an outlet or burn the sky. Even more so if you have ten thousand warheads. That’s what led to the era of mutually assured destruction.
I can’t believe how close the US and the USSR brought us to the end of times, but we were lucky; the nuclear winter never came despite several close calls. And then Reagan came to Berlin and asked that the wall come down. And it did, a couple of years after he asked!
When I first came to the US in the nineties it was an unrivaled power. For twenty plus years, it ruled the world, the most powerful country that has ever existed. It expanded market capitalism everywhere, most prominently in China but also in India. Globalization as we know it is a product of American power. I owe the writing of this essay in a cafe in Bangalore to the fall of the Berlin wall. Yes Brandenburg Gate, No Foxconn.
When 9/11 happened, the headlines across the world were “we are all Americans.” While that headline was meant as a mark of solidarity, it was truer than we think: the world of startups and markets, of Hollywood storytelling, the possibility of progress backed by global networks of influence and immense military power — who doesn’t want that in some form?
So much so that it became possible to write a book called “The End of History” which claimed that market driven liberal democracy is the definitive solution to the problem of political order. In this reading, human history is a series of attempts at prosperity that collapse in violence (Rome, Han China, Gupta India), and we continue to look for a solution that combines peace and power in a manner acceptable to most.
Fukuyama thought that solution was found in 1989. Let’s call it EOH (End of History) liberalism. That we can all ride into the sunset in our Cadillacs.
Who would have thought in 1992 that the most powerful nation in history would elect Trump in 2016, that EOH liberalism would be replaced by ethno-nationalism in every major country in the world? Or that it would be possible for Vladimir Putin to declare in a recent interview that liberalism has “become obsolete”?
Why did that happen? Is there an intrinsic tendency for a utopian bubble to be succeeded by a dystopian abyss?
I don’t know if there’s a universal principle of that kind, but I believe it’s important to understand the internal and external contradictions that are bursting the EOH bubble. Of which two are the most important:
EOH Liberalism was deployed on networks — of goods and information — and these networks became instruments of concentration and inequality instead of the decentralization and democratization we were promised. Why?
EOH Liberalism hastened the exploitation of the nonhuman world that supports all human life and economic activity. If I may say so, it is a UX designed for easy extraction.
Could we have predicted the two? Yes, and many did, but they weren’t heard loudly enough. Perhaps because we didn’t want to hear what they were saying or perhaps because they weren’t saying it the right way.
Everyone says we live in a knowledge society. What they really mean is we live in a knowledge economy, where knowledge is a source of profit.
You can see the workings of the knowledge economy in the ubiquity of two terms: innovation and intellectual property.
What does it even mean when someone says XX is the most innovative company in YY industry? Do they stack the brains of the employees and see which one is highest? How can knowledge even qualify as property? Isn’t that term restricted to resources like land or water or basmati rice that can be used by only one person or group at a time and often never again?
Fair enough, but you’re naive to the ways of capital if you succumb to these doubts. The magic of capitalism is in changing the ways of the world first and then giving it a name. Property expands to include algorithms and experiences while lexicographers and lawmakers struggle to catch up. That’s why companies and countries protect their IP ferociously, for everything from war to surveillance to profit depends on restricting access to their epistemic possessions.
Like it or not, this new era of knowledge is here to stay; our current worries about robots taking our jobs are only the tip of the knowledge iceberg. Of course we should worry about robots taking our jobs, just as the weavers of Dhaka rightly worried about the mills in Manchester taking theirs; the state will support the usurpers this time around too.
However, in the long run, the impact of manufacturing-based capitalism was much bigger than the pre-industrial economies it disrupted — its impact includes everything from the colonization of Asia and Africa to the communist revolutions, the great depression, the rise of fascism and the two world wars, all the way to climate change and the potential end of human life on earth as we know it.
Not that I know what knowledge-based capital will do, but if its impact is even 10% of manufacturing-based capital’s, we need to pay close attention to it.
I am not advocating a utopia, not yet anyway; adults need to contribute to society but what do we do when all our current contributions are commodities? Of course we need a new politics for this new era, but I have a counterintuitive suggestion: set aside mass protests, labor regulations and progressive political parties for a moment and pay attention to kindergarten.
Because it’s not school. School is where we are disciplined into responsible citizens, people who know how to read and write and solve equations and set one widget on top of another until it’s ready to roll down the line to the next widget stacker.
Everything a responsible citizen can do, a robot can do better. If not now, five years from now.
Kindergarten (assuming you went to a decent one — not guaranteed by any stretch of the imagination) is the last time we weren’t “schooled,” when learning and play merged into an experience that taught you without teaching you. After that it’s a lifetime of monitoring, tests and judgment.
If we extend school and college to a lifetime of getting degrees and certificates, we are only going to extend the surveillance society to every corner of our lives. In fact, I wouldn’t be surprised if educational metrics became the liberal way of disciplining society.
Surveillance fascism is easy to understand: every move you make, I am watching you. Surveillance liberalism, by contrast, is about self-disciplining: notice how some of the most privileged kids in the world, in liberal enclaves, accumulate social work opportunities, internships and SAT scores as if their life depended on it — and it does!
It’s easy to imagine a future in which continuous scoring is the norm — why wouldn’t it be? After all, if you are going to be a knowledge laborer when you’re 60, why would I believe a certificate you received when you were 22?
The brave new world of EdX and Coursera and other venues for lifelong learning is a market driven response to a knowledge economy; it works well enough for someone like me but it also prepares the ground for lifelong disciplining.
What’s the alternative?
As I said: preschool everyone. Give people a space to play without judgment, where exploration is more important than achievement.
Academia plays several roles in modern society. Professors educate the not-so young, advance the frontiers of knowledge and act as critical mirrors of the larger culture. Neoliberal academia has reduced all of these functions to the furtherance of economic activity. That new goal is enforced with metrics that track those narrow concerns. In this new avatar, education is education for livelihoods’ sake and research is research that has impact. The critical function has been excised altogether.
I understand the compulsions that lead to this sorry state. Taxpayers are upset about the rising costs of public education. With white collar jobs being automated and outsourced there’s genuine concern that today’s college debts will not lead to tomorrow’s prosperity.
Let’s set aside the real reasons behind the current state of affairs, namely, a deliberate attempt by those in power to defund public education and to end the sixties and seventies’ era of student and faculty radicalism. Let’s turn a blind eye to the new goal of academia, which is to buttress the interests of the system and to create a population that’s narrowly skilled, economically insecure and politically docile. Instead, let’s ask whether the job and grant obsessed, impact factor optimizing academic is any good at serving his new masters.
I believe the neo-professor is terrible at her new job, both in ‘skilling’ young people and at creating ‘impactful’ new knowledge. If knowledge consists of measurable and learnable skills, universities are exceptionally inefficient and expensive at imparting them. If the post-industrial economy needs knowledge labourers, then it’s much better off adopting an apprentice system; that way, we will be sure the apprentice learns a bankable skill and the costs will be borne partly by the employer.
Measurement indices such as impact factors lead to a problem that’s well known in primary and secondary education, where measurement and accountability have ruled for decades. Once you have a metric, teachers start teaching to the test. Those who succeed at it are certifiably good at optimizing the chosen metric, but there’s a dubious relation between “impact” and impact. Optimizing for impact leads to the research version of grade inflation.
Academic communities are small. All of us know who is influential and what they like to hear. It’s easy to write papers reflecting the prejudices of those who control the purse strings. In this milieu we can’t expect epistemic advances that upset the apple cart.
It’s likely that new knowledge cannot be produced by human beings at an industrial scale. Industrial knowledge cannot be produced by pre-industrial craftsmen; it needs machines. The AI robots of the future might be better at it, but that’s not our problem. Meanwhile consider this possibility: without the promise of genuinely new insights, there’s no reason for supporting academia at all. The alternatives are cheaper and better.