CARTHA

   


    PROLOGUE


    Editorial

    CARTHA

    This year’s cycle of Cartha is exploring the pre-organized structures that the contemporary individual is facing—the foundational, yet imperceptible, forces that shape society and emerging environments.

    Major shifts in the recent global narrative are less an exception to the norm than they are the heightened exposure of prescribed systems of social organization. These sprawling micro/macro symbiotic systems have always existed in the cyclical attempt to order the world and the social conditioning that returns from it: between bodies and technology, wherein we simultaneously produce the apparatus and are produced by it; between the virtual representations of the self and their deep reverberations within the psyche; or between the environments we create and their resulting authority over our habits and routines. The ways in which things produce each other can be inevitable, but where do the biases lie, considering the multitude of stakeholders participating in creating new forms of organization? What sort of narratives emerge as a consequence? The interest lies in the infrastructures of taxonomies: not only who writes the script, but how you read it, and in what space it needs to perform.

    BELIEF IN MYTH AS HISTORY

    At one point, cosmology, as the study of and attempt to understand the universe as a whole through systems of inherited belief, structured our understanding of societies, cities and nations. It is described by Walter Benjamin in his Theses on the Philosophy of History as messianic time, or a simultaneity.1 Prior to the Reformation and print capitalism2, cosmology and history could be viewed as one; the origins of the world and the origins of man as identical. Time, events, history and present society were linked by the idea of an ultimate condition, a highest metaphysical state of history, appearing as an imminent state of perfection able to manifest itself anywhere and at any time. This pre-Reformation cosmology required an unquestioning belief in the power of a certain script-language to offer access to higher truths, and a trust in the natural organisation of societies into hierarchical structures. A slow change in the perceived validity of underlying cosmological facts through systemic economic changes, the invention of societal and scientific disciplines, and the increase in global communications led to a split, and finally a chasm, between history and our understanding of the universe as a whole. New ways to understand and structure the world through categorisation and classification became a necessity and a power.
    The quintessential book of taxonomy, Systema Naturae, was published in 1735 by Carl Linnaeus, organizing the ‘entirety’ of the natural world into a hierarchical classification system with binomial nomenclature, under what became known as the Linnaean taxonomy.

    Linnaeus, Regnum Animale in Systema Naturae, 1735.
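    To make concrete what a hierarchical classification with binomial nomenclature amounts to in practice, here is a minimal sketch in code. The ranks and the single example lineage are illustrative choices, not a reproduction of Linnaeus’s 1735 tables.

```python
# A minimal sketch of a Linnaean-style hierarchy: fixed ranks, two-part species names.
# Ranks and the example lineage are illustrative, not Linnaeus's own 1735 table.
RANKS = ["kingdom", "class", "order", "genus", "species"]

lineage = {
    "kingdom": "Animalia",
    "class": "Mammalia",
    "order": "Primates",
    "genus": "Homo",
    "species": "sapiens",
}

def binomial(entry):
    """The binomial name is simply genus plus species epithet."""
    return f"{entry['genus']} {entry['species']}"

def classification_path(entry):
    """Walk the fixed rank order from the most general to the most specific."""
    return " > ".join(entry[rank] for rank in RANKS)

print(binomial(lineage))             # Homo sapiens
print(classification_path(lineage))  # Animalia > Mammalia > Primates > Homo > sapiens
```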

    CLASSIFICATION AS A POLITICAL ACT

    These fixed systems were questioned in the twentieth century for being too reductive to organize an ever-changing world. Michel Foucault published The Order of Things, building an entire discourse out of an invented system of taxonomy by Jorge Luis Borges, wherein the seemingly comical groupings of animals with loose visual resemblances exposed humanity’s irrefutable trust in the facts of science.3 However, these dichotomous categorization strategies continue to govern the way reality is understood, through endless classifications in binary thinking or opposition logic, relying heavily on the human eye’s authority over the other senses to structure the world.

    In recent times, the same stereotypical, culturally arbitrary and ethically exempt strategies are once again being employed to organize society, in the development of taxonomies involved in training AI systems. Trevor Paglen’s studio exposed this major hole in computer processing in the 2019 project ImageNet Roulette, which employed the same data set used to train basic image-search algorithms to categorize people. The background processes are similar to those of the Linnaean taxonomy, equally unaccommodating of subtle difference and adaptive change, simply grouping people with similar visual traits and expressions as being cut from the same cloth. The over-categorization of everything continues, once again directing the evolving understanding of society and culture towards eugenics and biopolitics, but now differing in opacity: what once was clearly a celebration of man over nature is now cloaked in a ‘black box’, where the daily impacts of these deeply biased invisible structures go generally unnoticed.

    ImageNet taxonomy tree showing images classified
    as ‘second-rater, mediocrity’. ©Stanford Vision Lab 2010.
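    To show the bare logic behind ‘grouping people with similar visual traits’, here is a deliberately simplified, hypothetical sketch: each image is reduced to a feature vector and assigned whatever label sits nearest in feature space, regardless of whether the label itself is meaningful or fair. This is not the ImageNet or ImageNet Roulette pipeline, only the skeleton of classification by visual proximity.

```python
import numpy as np

# Purely illustrative: each "image" is a feature vector, each category a labelled centroid.
# Real systems use learned embeddings, but the final step is the same idea:
# assign whatever label sits nearest in feature space.
centroids = {
    "category_a": np.array([0.9, 0.1, 0.2]),
    "category_b": np.array([0.1, 0.8, 0.7]),
}

def classify(image_features):
    # Nearest-centroid assignment: no notion of context, consent or ambiguity.
    distances = {label: np.linalg.norm(image_features - c) for label, c in centroids.items()}
    return min(distances, key=distances.get)

print(classify(np.array([0.85, 0.2, 0.15])))  # -> "category_a"
```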

    Cartha asked specialists in computation, history, the arts, sciences and economics to contemplate the ways that these types of systems participate in their research. Associate Professor in Architecture Curtis Roth, Reader in History of Art and Design Dr. Annebella Pollen, Emeritus Professor in Economics Herman Daly, and Max Planck fellow Dr. Meritxell Huch in conversation with artist Alex Thake make up the Prologue of Invisible Structures, sprawling out from the discipline of architecture and provoking a series of questions for reflection in our next Open Call for Submissions.

    1 Benjamin, Walter. Illuminations (1968), 263.
    2 The relative ease of mass book printing towards the end of the 15th century led to an increase in the number of publishers, who quickly saturated the limited market for Latin texts. The desire to bring the teachings of the Bible closer to the masses, combined with the search for a new mass market for books, empowered individual relationships with religion. Febvre and Martin, The Coming of the Book (1923), 271.
    3 Referring to Borges’ Celestial Emporium of Benevolent Knowledge from 1942, which Foucault used in 1966 to introduce The Order of Things.
    1 – 00
    Editorial

    Tissue Architecture and Organogenesis

    Alex Thake and Dr Meritxell Huch

    Dr. Meritxell Huch’s research into tissue regeneration and carcinogenesis, and its application to the expansion of tissues outside the body, is a significant advancement in developmental biology and in the possibility of truly personalised medicine. Organoids are created by cultivating adult stem cells into microscopic, functioning 3D models that recapitulate cellular differentiation and collaborative function. This technology is currently being used to test the efficacy of drugs for the treatment of congenital and acquired diseases and, potentially in the future, as a grafting technique for the repair of dysfunctional organs.

    Language being a technology in and of itself, Dr. Huch’s use of figurative speech disrupts standard conceptions of taxonomy and gains agency as a determinant of practical dissemination. As late capital and globalisation have ostensibly undergone reification from analogical virus to reality, ex vivo tissue development brings Deleuze and Guattari’s concept of the BwO (body without organs) from metaphor to flesh. The description proposes a touchable condition, a surreal horizon of shifting bodies and migratory organs.

    AT
    What is your specific study and application in the field of tissue regeneration and carcinogenesis? 

    MH
    In a nutshell, it is how tissues develop, how they are maintained during adulthood and how our tissues repair – how they regenerate, basically. We try to understand that at a mechanistic level, and to do so we use the liver as a model organ because of its huge regenerative capacity.

    You can chop off up to 70% of the liver and it will regrow from the remaining tissue; very much as when you amputate the limb of a salamander it regrows. The liver is the only one of our organs that can do this proficiently.

    We try to understand the mechanism: how the cells know that they have been damaged, how they know they have to react and when their regeneration is terminated. There are other tissues that regenerate, but not like the liver. The stomach or the intestine, in that sense, are constantly doing it: automatic proliferation; but you cannot amputate a part and expect the remainder to grow. The liver, on the other hand, does not do that except when it is damaged.

    These questions have fascinated scientists for centuries, and model organisms have helped us gain further understanding of the process; however, we still do not know how humans regenerate their livers. As you can imagine, we cannot study that regeneration in humans because we cannot damage people and check what happens. Because of that, we opted to first establish a system ex vivo which recapitulates what happens in the organ in vivo, in the person, and now, with the models in place, we are in a better position to ask that question. Yet we must remember that organoids are not the organ, nor the organism per se, so we still have to improve them to be able to get a more holistic understanding of the problem.

    AT
    What’s the general size of an organoid? When does an organoid classify as an organ, beyond self-organisation and cell determination?

    MH
    One definition is a three-dimensional structure of cells that has the capacity to organise itself, generating the structure and function, in many aspects – but not all – of the organ that it is mimicking.

    AT
    What aspects would it not recapitulate? 

    MH
    It will not recapitulate, for instance, vascularization. You cannot recapitulate inter-organ interactions, because you only have one tissue.
    Sometimes it cannot recapitulate the whole function; sometimes it’s more like a premature, pickle-like organ; sometimes it’s more like an adult organ but missing some function of the tissue, in the case of the liver, for instance… So it’s not a complete organ, but the cells that compose that organ.

    Love is Not Enough, Video, Alex Thake, 2020.

    AT
    How does the lab determine when growth begins and ends? Could you elaborate on what the growth culture consists of, and on how organoids are preserved and regulated? How does the lab determine spatial constraints?

    MH
    We understand growth as cellular division: when we can see that one cell becomes more than one, from one to two, two to four, four to eight.
    In culture, the organoids divide at a rate depending on the tissue – for instance, the stem cells in the intestine divide every 24 hours.
    In the case of the liver, the cells hardly divide; they might divide every three weeks, every three months, sometimes down to a single cellular division. But in culture they divide much faster, because the culture recapitulates regeneration more than a homeostatic state, more than the natural state. As for how it is being regulated, the scientific community has gained some knowledge of the mechanism, but we are far from a full understanding of it. This is a question that we still don’t know much about, but we are working on it.
    In other words, one of the things we know is that the culture is, in a way, activating the regenerative program.
    That’s why understanding regeneration mechanisms can help us understand how organoids grow, but also how the tissue grows; and understanding how the tissue grows can allow us to understand how it is regulated, and also how organoids regulate growth. It’s a kind of feedback loop.
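    As a back-of-the-envelope illustration of why the division interval matters so much in culture, the toy calculation below (assumed pure doubling, no cell death) compares a 24-hour interval, like intestinal stem cells, with a three-week one.

```python
# Toy model: pure doubling, no cell death, no spatial constraint.
# Numbers are assumed for illustration only.
def cells_after(days, division_interval_days, start=1):
    divisions = days // division_interval_days
    return start * 2 ** divisions

print(cells_after(21, 1))   # dividing every 24 hours: 2_097_152 cells after three weeks
print(cells_after(21, 21))  # dividing every three weeks: 2 cells after three weeks
```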

    AT
    What are the specifics of controlling cell fate determination, and what ability do you have to control it?

    MH
    This is a very important question that we do not have answers to. I mean, there are no answers yet to how you control cell fates.
    We know genetics is very important, but probably the cellular environment is equally important.

    These are indeed very important questions which we, as scientists, are asking ourselves as well and trying to find answers to. But we don’t have them yet.

    AT
    So that’s the whole process in itself, that’s what you’re trying to determine? 

    MH
    Indeed. So, a cell state or a cell fate is determined; a cell is born with that. But what we start to see is that cell fates are plastic. There is a kind of predetermination, but this predetermination, or likelihood, is not a fixed fate.
    There are fates that are fixed: a neuron can hardly become any other thing; neurons are pretty fixed.
    There are other cells that are more plastic; we call that concept cellular plasticity.
    And this concept, this ability to become something different, means not having a fixed fate but having the capacity to convert to other fates, within the same tissue.

    AT
    That said, can an organoid be rerouted to become another organ? 

    MH
    I like this question. Well, I would rephrase it slightly differently: can you make one cell become something completely different? Going back to the example of the liver: a hepatocyte makes a hepatocyte, a ductal cell makes a ductal cell, and in some instances an adult ductal cell can become a hepatocyte and vice versa… However, what can never happen is that an adult ductal cell becomes an insulin-producing pancreas cell.
    That would mean crossing a moment during development that has already passed – we start from the zygote, from one single cell, and this single cell makes many cells during embryo development.
    And as the tissues specify, becoming brain, becoming liver, becoming pancreas, some genes are shut off forever and some might be opened to allow that cellular fate.
    That means that a human pancreas cell cannot become an intestinal cell, or a neuron. It seems that, the way our development happens, it puts a block in the expression pattern of the genome which is never going to go back to that past state. Yet in the liver – within the tissue – if you need to go from one fate to the other, you still can, but of course depending on the conditions.

    AT
    That’s due to proximity or environment? 

    MH
    It has to do with the way the genome organizes itself; in a sense, we still don’t understand it at the molecular level, basically. How these fates are being closed during development is a matter of huge investigation at the moment.
    Imagine it like this: you have five doors and you’re going to take door number three, which is going to make you become a hepatocyte.
    Once you “cross” door number three, you cannot go back; the door closes forever. So at a certain moment there are no doors anymore and you arrive at your final fate. Yet, depending on the tissue, there is some kind of plasticity, let’s call it flexibility, and you can become another cell type within the same tissue. But you cannot go back to something completely different, from another tissue, unless you genetically manipulate the cell.
    To genetically manipulate it, you have to induce the expression of genes that were not there, by introducing the genes. This is how Yamanaka made fibroblasts that can now become pluripotent. The fibroblasts can now become a zygote-like type of cell. Why? Because he found factors that are essential for that – for that state of pluripotency, for that state of being able to become anything. He found the key to go back, unlock all these doors and return to the original state. But you have to genetically manipulate it; it’s not natural.
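    Dr. Huch’s image of doors that close behind you can be sketched as a small state model: lineage branch points are one-way unless the cell is deliberately reprogrammed (the Yamanaka-style reset). The tree and the allowed transitions below are schematic assumptions for illustration, not a real lineage map.

```python
# Schematic only: a toy lineage tree with one-way commitment.
# Within a tissue, limited plasticity; across tissues, no natural route back.
LINEAGE = {
    "zygote": ["liver_progenitor", "pancreas_progenitor", "neural_progenitor"],
    "liver_progenitor": ["hepatocyte", "ductal_cell"],
    "pancreas_progenitor": ["insulin_cell"],
    "neural_progenitor": ["neuron"],
}

# Assumed plasticity within the liver only (ductal cell <-> hepatocyte).
WITHIN_TISSUE_PLASTICITY = {("ductal_cell", "hepatocyte"), ("hepatocyte", "ductal_cell")}

def can_become(current, target, reprogrammed=False):
    if reprogrammed:                               # Yamanaka-style reset reopens every door
        return True
    if (current, target) in WITHIN_TISSUE_PLASTICITY:
        return True
    return target in LINEAGE.get(current, [])      # otherwise only forward, never back

print(can_become("ductal_cell", "hepatocyte"))           # True  (within-tissue plasticity)
print(can_become("ductal_cell", "insulin_cell"))         # False (that door is already closed)
print(can_become("ductal_cell", "insulin_cell", True))   # True  (only via reprogramming)
```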

    AT
    Could you say that an organoid produced from a particular subject is directly representational of the organ at the time of collection, or would an organoid need the “memory” of developmental function in coordination with other systems (vascular, for instance) to be considered a replica?

    MH
    Ah, that’s a very interesting point. If we take it directly from a patient – I mean directly from a particular subject – we know that it retains many, many features, from transcriptional to a kind of epigenetic memory of what it was. So, somehow, it does indeed represent the organ at collection time. Whether the organ in vivo and the organoid ex vivo would evolve in the same way, we do not know.

    The coordination aspect is interesting. We actually do not know whether coordination with other systems is required to maintain this original status of the patient; basically, we don’t really know because we cannot put other systems in there yet. But it is a future avenue some of us are indeed exploring.

    On the other hand, if instead of making organoids directly from the tissue we use cells from this patient and we do induce pluripotency, like what Yamanaka did, then we induce full reprogramming, and for this reprogramming to a pluripotent state you need to lose this memory, or at least a great part of it.

    AT
    How is interorgan communication established between organoids? What is the most comprehensive inter-organoid system created thus far? 

    MH
    This is not yet there; it has not happened yet. The most comprehensive “inter-organoid system” comes from a group at Cincinnati Children’s Hospital, where they showed that they can recapitulate the liver, pancreas and biliary duct structure – from development, from the original organ primordium.
    They can recapitulate the development from the progenitor that generates these three organs to the generation of the three primordium structures.
    Not the organs themselves, but the primordium structures. It is a fantastic achievement indeed.
    Recapitulating this from adult tissue has not been attempted yet.

    AT
    So in terms of personalised medicine and developing therapies would you be able to apply the research from a single subject to a body of people with similar genetic disposition? Or would this technology be distributed on an exclusive basis? Could there be hypothetical libraries of organoid information?

    Arcosanti Arch, phosphorescent pigment, binder, UV charge, Alex Thake, 2020, Image: Ainsley
    Johnston.

    MH
    Let’s say personalised medicine goes in that direction. The long-term vision is that you could take a tissue that is diseased, correct it ex vivo and put it back; it will be from the same patient but corrected, so there will be no rejection because it comes from the same person. However, we are not there yet.

    You can also think that there is a big shortage of donors in general, even for the liver. Now we can expand pancreas tissue and liver tissue ex vivo, so we could actually generate a source of human liver or human pancreas tissue that we could put back into a patient instead of needing a full organ transplant.

    We would infuse cells that are functional, with the aim of the cells being grafted into the tissue and becoming functional parts of it. In the liver, using mouse models, we demonstrated that this is possible with mouse liver organoids. You can take liver from one mouse, grow it in culture and transfer it to 20 mice. And we saw the cells got into the liver and became functional cells. Human into human is going to be a long road.

    Another thing that we’ve done, also going in the direction of personalised medicine: imagine one of these patients is diagnosed with liver cancer. We can take a biopsy of the cancer and expand it in culture, as if the cells were still in the patient.

    Now, imagine you have a patient who has a tumour. You can propagate that tumour ex vivo, enough to generate material on which you can test as many drugs as you want, and these are drugs specific for this particular patient.

    So this is what true personalised medicine is.

    So you could have the disease itself in culture, ex vivo. Now test ten drugs, twenty drugs, thirty drugs and say: drugs A, B and F are the good ones that will work for that patient, and the others don’t. But they might work for a second patient, or a third patient.

    That’s why personalised medicine could not be done before, because we could not grow cells directly from the patient without manipulating them first. Now, that’s history. 

    We can grow tumour cells from the patient directly, without manipulating them, and these tumour cells recapitulate many aspects of the tumour that the patient has inside the body. Which means it’s a better model for predicting what would be the best treatment for our patient.
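    The workflow described here (expand a patient’s tumour ex vivo, test many drugs on it, keep the ones that work for that particular patient) can be sketched as a simple screen. The response values and the threshold below are invented for illustration.

```python
# Hypothetical screen: fraction of tumour-organoid cells killed by each drug.
# Values and the 0.5 threshold are invented for illustration.
patient_organoid_response = {
    "drug_A": 0.72, "drug_B": 0.61, "drug_C": 0.08,
    "drug_D": 0.15, "drug_E": 0.22, "drug_F": 0.66,
}

def effective_drugs(responses, threshold=0.5):
    # Keep only drugs that clear the response threshold for this particular patient.
    return sorted(d for d, kill_fraction in responses.items() if kill_fraction >= threshold)

print(effective_drugs(patient_organoid_response))  # ['drug_A', 'drug_B', 'drug_F']
```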

    AT
    Are these therapies competing or in congruence with CRISPR gene editing technologies? 

    MH
    They aren’t parallel, but they can be complementary. CRISPR editing just manipulates the genome.
    Imagine a page of a book where you are missing a critical word, and anything could be that word. Now imagine you have a genome and it has a defect: gene editing can correct that defect.

    Okay, this is what you do in the genome, but then the important question is: which cell are you going to edit? In the book example, if you put the word on the wrong page, it still has no comprehensible meaning, right?

    Now go back to the example of liver disease: in a patient who has a liver disease, the disease is in a subset of the liver’s cell population, not in the whole liver. So we need to correct the right cell, and the organoids give us the cell types.
    Gene editing is a tool to correct any defect in the genome, in whatever cell.
    Organoids give you the specific cell where you are going to make the correction in a meaningful manner.

    AT
    Could you comment on contemporary applications of ex vivo expansion in regards to COVID-19? 

    MH
    It is an application of the technology: because now we can grow human tissue in a dish, we can ask the question of whether a pathogen, in this case the COVID-19 virus, affects a particular tissue.

    Early on in the outbreak we learned that some patients had liver failure; their livers were collapsing. The reason for that was unknown: it could be either that the liver failure was caused by a secondary reaction to the inflammation everywhere in the body, or that it was caused by the virus entering the blood and targeting the liver. However, that question you cannot ask in the patient, since you cannot monitor where the virus has entered, nor take biopsies (especially from such sick patients) just for the sake of knowing whether the cells have viral particles. Here is where organoids, being an ex vivo organ-like culture, could help.

    Since we knew that our organoid culture system expresses the receptor for the virus, we collaborated with a group in Heidelberg who had patients with the virus. Because we had human liver organoids derived from healthy human donors, kept in a bank in my lab, we sent [the organoids] to Heidelberg for them to infect our cultures and answer the question of whether the liver cells are actually targeted by the virus.

    So there were two options: either the virus directly enters the liver cells and kills them, or the virus kills many other cells, which makes the liver very unhappy because it has to detoxify so much that this also results in liver failure.

    So we can confirm, yes – the liver cells are targeted directly by the virus.

    Detail, newspaper, silicon tube, ants, Picnic at Hanging Rock, Alex Thake, 2020, Image: Ainsley
    Johnston.

    AT
    Do you have a timeline for when you anticipate predictive and transplanted organoid technologies to be widely implemented? 

    MH
    Yeah, I mean, we have to remember that all of this is relatively new in terms of human publication. It was only five years ago that we managed to grow human tissue ex vivo, so it’s actually a field that is really expanding at the moment. For cancer, for instance, it’s already been used for predictive medicine and is starting to be used as a tool to tell which drug would be better for a particular cancer patient. It’s really early stages, because it has only been done at research scale; now it’s starting to be embraced a bit by pharmaceutical companies. To reach the general public and make an impact it needs to be implemented not by little research labs like ours but at the government level.

    I see it as a future in the next five to ten years.
    If we can do that and it proves to have good predictive value, maybe it will be implemented widely.

    The transplant experiments we were talking about at the beginning – the capacity to expand human cells and then put them back into patients – I think that is more the long-term future, because there is still quite some improvement that needs to be done on the systems, and this improvement can only come from basic research.

    Industry implements these things when they are past the development stage and there is little development left to be done.

    For transplantation, cell therapy transplantation, I still envision maybe 15 more years. But who knows: 15 years ago nobody (or very few) knew the word “organoid”, nor used it as we do now… Even us, in our first manuscripts 10 years ago, we were calling them “3D adult stem cell cultures”. So the growth of this field, and probably of all fields, is exponential, and it only takes one or two breakthroughs to get to the next level. Only time will tell…

    Alex Thake received her BFA from Hunter College in New York. She currently resides in Frankfurt, attending the Städelschule. Her installation work considers and conflates preconceptions of technology, architecture and biology. Her most recent works investigate biological radicalism as a response to repressive systems, and the internal and external manifestations of failed utopias.

    Dr Meritxell Huch obtained her PhD at the Center for Genomic Regulation in Barcelona. In 2008 she moved to the Netherlands, to the Hubrecht Institute, where she studied stem cells. She made the ground-breaking discoveries that liver and pancreas cells can be expanded as organoids ex vivo, and that these recapitulate many aspects of tissue regeneration in a dish. In 2019 she was awarded the Lise Meitner excellence research programme fellowship from the Max Planck Society and moved her lab to the Max Planck Institute of Molecular Cell Biology and Genetics in Dresden, where she focuses her research on understanding tissue regeneration, organ formation and their deregulation in disease.

    1 – 01
    Interview

    The Cosmic Macro-Economy

    Herman Daly

    Is there an economic structure or fact that governs our lives, but is so large and all-encompassing that it remains nearly invisible? Many of us have failed to see the overarching fact that all life and all wealth is maintained by an entropic flow of matter and energy through an economy that is a subsystem of a finite and non-growing Earth. Consequently, the scale of the economic subsystem cannot exceed that of the total Earth system. In other words, physical growth of the economy is limited.

    To be sure, we learn that we depend on the sun’s life-giving support as well as the photosynthesizing organisms that make it available to us. Yet we quickly forget, and are left to wonder what it is about sunlight that supports life, given that energy is neither created nor destroyed. And besides, doesn’t life require matter as well as energy? Then we learn about entropy. Living and producing both require “sucking low entropy from the environment,” as physicist Erwin Schrödinger aptly put it. Entropy is the qualitative difference between equal quantities of useful matter-energy and waste matter-energy. The difference between useful resources and useless waste seems a very basic fact for economics, but economists seldom learn about the laws of thermodynamics.

    An exception was Nicholas Georgescu-Roegen, whose magisterial book The Entropy Law and the Economic Process tried to correct this defect, but met with limited success in convincing his fellow economists. But if we look again at his work we can get a picture of the too-large-to-see cosmic structure that ultimately governs the maintenance of life and wealth. From Georgescu-Roegen we learn that there are two sources of the low-entropy flow that sustains our lives: the solar and the terrestrial. They differ significantly in their patterns of scarcity. The solar source is only energy, no materials, and is practically infinite in its stock dimension, but finite and dispersed in its flow rate of arrival to earth. The terrestrial source of low entropy consists of both matter and energy – concentrated deposits of minerals in the earth’s crust, including fossil fuels, which are ancient solar energy accumulated over billions of years. Terrestrial low entropy is limited in its stock dimension, but can be used up at a flow rate of our own choosing. We cannot mine the sun to use tomorrow’s solar energy today; we must wait for it to arrive tomorrow. We can, however, mine and use up today the accumulated solar energy of Paleozoic summers, and have chosen to use it rapidly, at least during the past two centuries. We have thereby become more dependent on the scarcer terrestrial source, rather than the abundant solar source, than we were in pre-industrial times. We prefer the concentrated terrestrial source, and we are impatient to use it to grow. We, especially economists, think that thanks to growth the future will be richer than the present, and, therefore, the (growth-inflicted) costs of depletion and pollution will be easier to bear.

    Solar energy is abundant and renewed every day. To capture its flow requires extended space covered by a “net” made out of highly structured materials. These structures wear out over time and need maintenance, as well as replacement, and of course require initial construction. These needs must be largely met out of our diminishing terrestrial stock of low-entropy matter-energy. Current sunlight and terrestrial material collectors are complementary factors. The one in short supply is therefore limiting. The limiting factor is terrestrial low-entropy, concentrated materials in the earth’s crust, including fossil fuels. To see how useless abundant solar energy would be without material structures capable of capturing it, one need only look at the barren moon, Mars, etc. 
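    One way to state the limiting-factor argument compactly (a toy formalisation with notation assumed here, not Daly’s own): let Φ be the incident solar flux, S(t) the remaining terrestrial stock of low-entropy matter-energy, d(t) its depletion rate, K(t) the stock of collectors built and maintained out of part of that depletion, and κ the capture rate per unit of collector. Then, roughly,

\[
E(t) \;=\; \min\!\bigl(\Phi,\ \kappa K(t)\bigr) \;\approx\; \kappa K(t),
\qquad
\frac{dS}{dt} \;=\; -\,d(t),
\]

    so long as κK(t) remains far below Φ, the captured flow is governed by the depleting terrestrial side of the ledger, not by the abundance of sunlight.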

    The economic question then is, how best to use the limiting factor? We should focus our attention on how to allocate our scarce dowry of terrestrial low entropy. We have two general alternatives. We can consume it directly in building cruise ships, jetliners, rockets to Mars, and Cadillacs—or we can invest it in structures that tap into our more abundant solar source of low entropy. We collect solar energy in two basic ways. The first way is indirectly through the photosynthesis of plants in agriculture, forestry, ranching, hunting, fishing, etc. Other species concentrate, to our benefit, the solar energy captured in the process of photosynthesis. And we exploit their population growth, either by taking only a sustainable yield or by taking a greater than sustainable yield and thereby converting a renewable resource into a nonrenewable one. The other basic mode of capture is by investing in direct solar collection by modern technologies such as photovoltaics and concentrating solar-thermal power. 

    Our human lives require the conversion of incoming solar energy by photosynthesizing plants and thenceforth other species at lower trophic levels into food and fiber above their own maintenance requirements. Given sufficient bounty from these other species, sustainably exploited, we can then invest resources beyond our own mere maintenance. Investing terrestrial low entropy in a plow, for example, increases our ability to tap incoming sunlight for vital purposes. Spending it on a Cadillac, on the other hand, is not a vital purpose but rather a luxury expenditure of our limiting factor. This led Georgescu-Roegen to a rather dramatic conclusion: “The upshot is clear. Every time we produce a Cadillac, we irrevocably destroy an amount of low entropy that could otherwise be used for producing a plow or a spade. In other words, every time we produce a Cadillac, we do it at the cost of decreasing the number of human lives in the future.” 

    It seems that in spending our limiting factor we face a tradeoff. Using it up on present luxury has the opportunity cost of fewer lives in the future. Saving it for future plows has the opportunity cost of less luxury in the present. This basic tradeoff exists regardless of how efficient the solar collectors may be.

    Georgescu-Roegen’s argument was anticipated by Henry David Thoreau’s oft-quoted insight that “the cost of a thing is the amount of what I will call life which is required to be exchanged for it, immediately or in the long run.” Or as John Ruskin put it, “There is no wealth but life. Life, including all its powers of love, of joy, and of admiration. That country is the richest which nourishes the greatest [cumulative] number of noble and happy human beings.” Life requires current sunlight, and the most vital use of accumulated Paleozoic sunlight is to build or preserve material structures capable of increasing our ability to capture current sunlight. 

    The realization that the cost of present luxury is foregone future lives is dramatic and sobering. However, life at a mere basic subsistence does not offer much enjoyment, and most people are certainly not willing to live that way. Yet extravagant luxury and gross inequality become less tolerable when the same reasonable people recognize the opportunity cost in terms of even “good life” foregone. So, we are forced to ponder a big question: should we not strive to maximize cumulative lives ever to be lived over time by depleting terrestrial low-entropy stocks at a flow rate that is low, but sufficient for a “good life”? There is no point in maximizing years lived in misery, so the qualification “sufficient for a good life” is important. And there remains the question of how much life of other species is necessary for a good world.
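    The question can be put as a toy optimisation (again with assumed notation, not Daly’s): choose a depletion rate r(t) and population N(t) to maximise cumulative lives lived at or above a “good life” consumption floor c_good, given the finite terrestrial stock S_0:

\[
\max_{N(t),\,r(t)} \int_0^T N(t)\,dt
\quad\text{subject to}\quad
\int_0^T r(t)\,dt \le S_0,
\qquad
\frac{r(t)}{N(t)} \ge c_{\text{good}},
\]

    where T is however long the stock, plus the sunlight it lets us capture, can be stretched; raising per-capita use above c_good shortens T, which is exactly the tradeoff described above.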

    Even with careful use, the scarce terrestrial stocks eventually will be gone, even as the sun continues to shine. Mankind will revert to what Georgescu-Roegen called “a berry-picking economy” until the sun burns out—if not driven to extinction sooner by some other event, as seems increasingly likely. But in the meantime, striving for a steady state with a rate of resource use sufficient for a good (not luxurious) life, and sustainable for a long (not infinite) future, seems to be a worthy goal. It’s a goal of maximizing the cumulative life satisfaction possible under finite and depleting terrestrial resource constraints.

    Does this cosmic invisible structure, once recognized, raise any questions for current practical economic policy?
    Consider: 

    – How much resource use per capita is sufficient for a good life? 

    – How do we ensure that everyone gets that amount? 

    – How large a population can a viable technology support at that standard of consumption? 

    – How much of the scarce terrestrial stock of low entropy can be economically invested in further tapping the abundant solar flow? In other words, which direct solar technologies actually have a positive net energy yield? 

    – Is indirect or direct collection of solar energy a more economic investment at the present margin (i.e., more reforestation and conservation of ecosystems, or more photovoltaic collectors and windmills)?

    – What is the best policy sequence—efficiency first to make frugality less necessary? Or frugality first to make efficiency more necessary? 

    These questions have not been central to modern growthist economics—indeed, not even peripheral! But a cosmic macro-economics puts them front and center. 

    Herman Daly is Emeritus Professor at the University of Maryland, School of Public Affairs. From 1988 to 1994 he was Senior Economist in the Environment Department of the World Bank. Prior to 1988 he was Alumni Professor of Economics at Louisiana State University. His books include: Steady-State Economics (1977; 1991); For the Common Good (with John Cobb, 1989; 1994); Beyond Growth (1996); Ecological Economics and Sustainable Development (2007); and From Uneconomic Growth to a Steady-State Economy (2014). In 1996 he received Sweden’s Honorary Right Livelihood Award, and the Heineken Prize for Environmental Science awarded by the Royal Netherlands Academy of Arts and Sciences. In 2001 he received the Leontief Prize for Advancing the Frontiers of Economic Thought, and in 2002 he was awarded the Medal of the Presidency of the Italian Republic. In 2010 the U.S. National Council for Science and the Environment gave him its Lifetime Achievement Award. In 2014 he received the Blue Planet Prize, awarded by the Asahi Glass Foundation of Japan.

    1 – 02
    Essay

    I’d Prefer To Be Too Many To Name

    Curtis Roth

    More and more I imagine the next thing will set everything right. Like faster-acting melatonin gummies, or a mail-order mattress, or timing my daily internet intake – all to assuage a growing unease that remains difficult to specify. If I’m not alone in this sentiment, then perhaps it could be said that never before have we had so many specific solutions for such general problems. And at no other time has this been more apparent than during the rolling quarantines of our present moment; where the cruel indifference of our public institutions is offset by the obligation to maintain an endless array of self-care regimens, from starting sourdough to learning Mandarin. Informed by earlier self-actualization movements like Quantified Self (QS) or Neuro-linguistic Programming (NLP), the contemporary economy of self-care regards life as the confluence of so many discrete signals. This cybernetic understanding of being suggests that our futures might be positively steered by meticulously managing the flows from which our lives are constituted. We’re told our futures now depend on the constant interrogation of these signals, in other words: self-care entails the responsibility to relentlessly self-profile. Today, many are offered the ability to manage the minutiae of their lives at an unimaginably fine resolution. But like melatonin gummies on the deck of the Titanic, the responsibility to self-profile grows increasingly perverse amidst the increasing uninhabitability of reality itself.

    While I might not be alone in my impulse to neurotically profile my own life, this impulse is far from universally accessible. I write this text from the United States, following weeks of public protests against state-sponsored racial violence and the uneven death-toll of a viral pandemic that’s normalized by the powerful as the cost of doing business while disproportionately killing the poor. If the responsibility to continuously manage one’s life can be understood as a technique for directing my future, such events remind us that one of the ways in which power remains powerful is by unevenly distributing such techniques of living. I’m compelled to profile myself while others are brutally profiled. 

    Such incongruities between techniques of the self are also present in the ways in which life is captured by contemporary online surveillance. Until recently, the most common way to profile an internet user was through Challenge-Response Authentication. These profiling processes are typically used to differentiate human beings from bots, and to allocate a user’s privileges appropriately. Challenge-Response Authentication is usually encountered as annoying JavaScript CAPTCHA apps, requiring users to retype distorted lines of text, or select all of the images containing traffic signals from a nine-square grid. CAPTCHAs entail a sensory-cognitive task presumed easy for humans and difficult for computers. Importantly, these tests don’t care which particular user you might be, only whether the user in question is a human being or not. 
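    As a minimal sketch of the challenge-response pattern itself (generic, not any particular CAPTCHA service), the server issues a challenge whose answer it already knows and grants privileges only if the response matches; which human happens to be answering is irrelevant.

```python
import secrets

# Generic challenge-response flow, not a real CAPTCHA implementation.
# The "challenge" here is a stand-in for the distorted text shown to the user:
# the server stores the expected answer and only checks for a match.
CHALLENGES = {}

def issue_challenge(session_id):
    answer = secrets.token_hex(3)          # stand-in for the distorted text
    CHALLENGES[session_id] = answer
    return f"please retype: {answer}"      # in practice rendered as a distorted image

def verify(session_id, response):
    expected = CHALLENGES.pop(session_id, None)
    return expected is not None and response.strip() == expected

prompt = issue_challenge("session-42")
# The test cares only that *something* can solve the perceptual task,
# not which particular human is answering.
print(verify("session-42", prompt.split(": ")[1]))  # True
```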

    Problematically however, CAPTCHAs profile this human being through the narrow threshold of specific abilities that are far from universally human. For example, a bot and the visually impaired are equally unable to select all of the images containing traffic signals from a nine-square grid. In order to expand this overly-narrow circumscription of the human, in 2014, Google unveiled an application called “no CAPTCHA reCAPTCHA.”1 The unwieldy name signifies a comparatively painless process: a small check box accompanied by the succinct assertion “I’m not a robot.” A user agrees simply by checking the box and is immediately authenticated. But while reCAPTCHA was unveiled through a narrative of increased accessibility, it simultaneously facilitated a new regime of surveillance built atop a radically different conception of life itself. Unlike previous Challenge-Response tests, reCAPTCHA isn’t strictly interested in whether the user is a human being, but in registering the user as a specific human being in the process. 

    Clicking a reCAPTCHA doesn’t confirm your humanity through a test, rather it infers it from your ability to enter into a legal agreement with Google. By clicking “I’m not a robot” the user submits to a process of continual surveillance designed to calculate their humanity in perpetuity. After accepting the agreement, each user is saddled with a tracking cookie and assigned a ‘risk-score’ indicating a live calculation of their potential for malicious activity while using a site.2 While Google refuses to indicate what factors comprise users’ risk-scores, security researchers have theorized that it is derived through a combination of hardware and software fingerprinting, as well as the live tracking of the cursor gestures of individual users.3 Today, these two models of user authentication exist in an uneven patchwork of surveillance across the web. But critically, CAPTCHA and reCAPTCHA are not only competing models of authentication, but competing techniques for exerting power. 

    Theorist Byung-Chul Han differentiates the techniques implicit in CAPTCHA and reCAPTCHA by drawing a contrast between the biopolitics of the industrial state and the psychopolitics employed under neoliberalism.4 Like CAPTCHAs, biopolitics exerts power over life by construing it through systems of norms, such as the cognitive-perceptual criterion tested by a user’s selection of traffic signals from a nine-square grid. For Byung-Chul Han, while norms such as citizenship, gender, or physical ability have proved useful for calibrating the productivity of bodies, they prove less useful in conscripting the psyche upon which neoliberal production increasingly depends.5 While CAPTCHAs differentiate humans from bots through binary categorization, reCAPTCHAs regard life as the ever-changing aggregate of probabilities processed from a user’s behavior. Such systems are psychopolitical, in that they allocate freedom by modeling a user’s cognitive states such as their attention, arousal or ennui.

    Whether biometrically or psychometrically, such attempts at profiling are invariably directed toward the monetization of users’ futures. It’s of no real interest on the back-end whether a user is a human or a bot in any ontological sense, rather what is at stake is the probability of a user behaving in ways that are reliably profitable. Crucially, CAPTCHA and reCAPTCHA, along with the bio and psychopolitical techniques that underwrite them, project the future through two distinct regimes of probability. 

    CAPTCHAs rely on a mathematical method known as frequentism, the dominant technique for statistical analysis prior to the 21st century. According to Justin Joque, “[frequentism] defines probability as the long-run frequency of a system.”6 Through frequentism, a static prediction is made and then proven or disproven based on the frequency of its occurrence over a series of instances. Like many biopolitical demographic techniques, frequentism works at the level of total systems over long runs. The assertion that a human can complete a CAPTCHA while a bot cannot depends on a static and universal conception of the capacities of all humans and all bots for all time.
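    Frequentism, in the sense quoted from Joque, can be shown in a few lines: fix a hypothesis in advance and judge it by how often it holds over a long run of identical trials. The pass rates below are invented for illustration.

```python
import random

# Frequentist reading: probability = long-run frequency over many identical trials.
# Pass rates are assumed, invented numbers purely for illustration.
HUMAN_PASS_RATE, BOT_PASS_RATE = 0.92, 0.01

def trial(pass_rate):
    return random.random() < pass_rate

n = 100_000
human_freq = sum(trial(HUMAN_PASS_RATE) for _ in range(n)) / n
bot_freq = sum(trial(BOT_PASS_RATE) for _ in range(n)) / n

# The hypothesis "humans pass, bots fail" is judged by these long-run frequencies:
# one static claim about all humans and all bots, for all time.
print(round(human_freq, 3), round(bot_freq, 3))
```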

    ReCAPTCHA, on the other hand, relies on an alternative predictive technique known as Bayesian probability. While first theorized in the 18th century, most Bayesian methods remained prohibitively inefficient until recent advances in computation. Rather than a stable prediction, Bayesian probability allows a prediction to be updated after each discrete event.7 Instead of static hypotheses, Bayesianism can establish probabilities for individual events. ReCAPTCHA doesn’t require any preexisting definition of what constitutes a human user, only that the behaviors of a particular user presumed to be human continue to be similar to the behaviors of other presumably human users. My surfing behaviors, recorded by Google’s tracking cookies, inform predictive models of a general human user that eventually determine the risk scores of others. Crucially, this flexibility is afforded by the Bayesian method’s ability to perpetually incorporate new inputs. While subjects modeled through CAPTCHAs are what they will always be, reCAPTCHA regards the user as an evolving confluence of signals amongst a spectrum of similarly evolving users. 
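    The Bayesian alternative can be sketched with a Beta-Bernoulli update, a standard textbook device rather than Google’s disclosed method: the estimate that a given user behaves like a human is revised after every observed event instead of being fixed once and for all.

```python
# Beta-Bernoulli updating: a running belief revised after each discrete event.
# The events and their mapping to "human-like" behaviour are hypothetical.
alpha, beta = 1.0, 1.0          # uniform prior over "this user behaves like a human"

events = [1, 1, 0, 1, 1, 1]     # 1 = human-like signal (e.g. irregular cursor path), 0 = bot-like

for e in events:
    alpha += e
    beta += 1 - e
    estimate = alpha / (alpha + beta)   # posterior mean, updated event by event
    print(f"after event {e}: P(human-like) ~ {estimate:.2f}")
```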

    In this sense, today’s economies of self-care rely on a model of life in which the future is realized through ad hoc Bayesian principles. I don’t need to know the precise ways in which my melatonin intake and mattress type contribute toward my personal fulfillment, only that by fine-tuning such inputs I am more likely to eventually find fulfillment. The connection between Bayesian statistics, self-care and economic privilege is made explicit in organizations like the Silicon Valley-based Less Wrong group. Founded by artificial intelligence researcher Eliezer Yudkowsky in 2009, and supported by radical libertarian financier Peter Thiel, Less Wrong is a techno-utopian doomsday cult.8 The organization employs Bayesian statistical methods to maximize its members’ pleasure as they collectively hurtle toward the technological singularity, and the end of human life as we understand it. The forward-looking nature of such groups, along with the myriad ways in which self-care is now expected to substitute for state-care would seem to confirm the growing sense that the present moment constitutes some sort of epochal shift. One in which frequentism is supplanted by Bayesianism, biopolitics by psychopolitics, and Keynesianism by neoliberalism.

    Instead, I would argue that the disproportionate suffering made explicit over the last several months suggests otherwise. Like the internet’s uneven muck of CAPTCHAs and reCAPTCHAs, today we occupy a moment in which life itself is a wildly unstable concept. Less one thing following another than every past turned productive by living-on in simultaneity. Where the responsibilities of governance are outsourced to the psyches of some as the obligation of self-care; even while others are murdered through much cruder techniques of population management. This isn’t to call for a more equitable distribution of suffering, but rather to suggest that any model of life precipitates the possibility of another future. And that if design has something to offer the present moment, it is our ability to make new configurations of life real. To offer the present muck ways to be that allow for a more just future. One in which the capturing of life as information, implicit in all contemporary profiling, is no longer merely the raw material means to others’ ends, but a form of self-determination.

     

    1,2,3 Schwab, Katharine. “Google’s new reCAPTCHA has a dark side” Fast Company, June 19, 2019.
    4,5 Han, Byung-Chul. Psychopolitics: Neoliberalism and New Technologies of Power. Verso, London, 2017.
    6,7 Joque, Justin. “Chances Are” Real Life Magazine, March 28, 2019.
    8 Tiku, Nitasha. “Faith, Hope and Singularity: Entering the Matrix with New York’s Futurist Set” Observer, July 25, 2012.

     

    Curtis Roth is an Associate Professor at the Knowlton School of Architecture at The Ohio State University. His work examines new formations of subjectivities within networks of computation, labor and distance.

     

     

     

    1 – 03
    Essay

    Classifying the Déclassé: A Non-Methodical Methodology

    Dr Annebella Pollen

    Dewey Decimal schemes, archival fonds and sub-fonds. Acid-free boxes, Secol sleeves and white cotton gloves. As a historian of material culture, working with texts, images, artefacts and collections, my practice may seem to be formally organised and performed via recognisable systems. It is underpinned by scientific coordinates, disciplinary apparatus and proprietary products. These are the tangible tools with which I work; my visible structures, if you like. Neon Post-It notes and highlighter pens make the see-able and know-able world even more hi-vis. 

    How ordered it all sounds! I train my PhD students in how to organise their data, code their interpretations and structure their chapters. I evaluate research proposals on their logical design and developing arguments against specified criteria. I earn my academic keep by balancing budgets and populating spreadsheets. Yet my office is a mess. I learned recently that there are two types of hoarding, horizontal and vertical: piles on the floor and piles up the walls. I’m giving them both a try. I variously group my books by colour; into themes according to what I’m working on; by their proximity to my desk; by how much I can bear to look at them or not look at them. I’ve discovered that this kind of emotionally reactive and mostly productive non-method has a nickname: procrastivation. It avoids the centre by working at the margins. 

    The subjective selections that underpin classification structures fascinate me and I’ve long been attracted to research material that eludes easy categorisation. Photographs, for example, seem to offer a straightforward window on the world but they are constantly disruptive of the boxes into which they are placed. They slip between truth and lies, science and art, documents and pictures. They are never simple illustrations of the visible and there are far too many of them to know where to stop. Their excess, in terms of what they picture and their quantities, is a key characteristic. Their captions and their storage and display locations tether them to a certain extent but they always exceed their parameters. Their character is complex and their meanings are ever multiple. 

    Sample hanging file. Former History of Photography collection, University of Brighton slide library.
    Photograph by Richard Boll, 2019.

    The second-hand marketplace is another site where objects’ relative fortunes are made and unmade, where treasure and trash are bargained over, where narrative and context variously add and subtract cultural value. The dealers at dawn haggling for house clearance cast-offs may or may not have read Pierre Bourdieu’s famous 1960s study, Distinction: A Social Critique of the Judgement of Taste, but they live it out daily as they move goods from unwanted to wanted and rate them accordingly. A sign at the door of a local flea market frames these shifts playfully: We buy junk and sell antiques. Pricing seems to add objectivity but the rules are mostly unwritten and get renegotiated with each transaction. Taste classifies, and it classifies the classifier, as Bourdieu famously put it. We are what we value; we are what we throw away.

    These subjective selections and mutable taxonomies are the subject of my research as well as the operating systems through which I receive and interpret my information. Most of my projects concern what I call non-canonical material: the overlooked, troublesome and unwanted. I relish the challenge of the wild things, the unwieldy. I’ve cherished orphaned family albums, unsaleable garments at the dump, photo competition rejects, deaccessioned museum collections and archival boxes marked ‘Miscellaneous’. These are difficult objects that create disorder and that reveal the inadequacies of the systems that are meant to provide meaning and certainty. 

    Broken slide. William Henry Fox Talbot, Study of Leaves, 1839. Former History of Photography
    collection, University of Brighton slide library.
    Photograph by Richard Boll, 2019.

    A pertinent example of such a project is the study I have made of art school slide libraries or, rather, their destruction. Once the core site of the visual aids through which histories of art and design were taught, the 35mm slides and their analogue projection equipment have recently been deemed obsolete. Tiny, carefully-labelled squares of glass have been superseded by vast virtual image databases and speedy digital display mechanisms. The labour of generations of slide librarians in photographing, mounting and processing the hardware of visual information has been decimated, in some cases literally ground to dust, over the past decade or so. In my institution, as with many others, hundreds of thousands of images were dismantled and dispersed. Artists and art historians, however, sentimentally rescued what they could. To me, they contained the historiography of a discipline and its technology; its archaeology of knowledge, to borrow a phrase from Foucault. 

    Two filing cabinets in my office are stuffed with my university’s former history of photography collection. Some slides are shattered, faded to pink; others stick to their decomposing storage systems. They speak of order and chaos, the enduring and the fragile, the changing materiality of photography and its ever proliferating scale. Slides of the very earliest of photographs from the 1830s, taken and displayed with 1960s technology now abandoned, seem hopelessly poetic. I was taught my subject via these transparencies when I looked through them as images rather than at them as objects. I now see them as structures, as indexes of cultural flux. Their fallibilities are writ large in their bulky forms, their peeling stickers, handwritten labels and yellowing cases. They seem clunky and clumsy in a friction-free world of easy image supply. Their slowness and messiness, however, reveal complexities and show how the visual world is ordered and disordered. 

    How to manage the unmanageable? How to tie this turmoil down into a visible and readable outcome? Words, my other tools, share a mercurial character with my research materials; they are similarly slippery and similarly spatial. They pile up and spill over. I scribble them on scraps and pour them onto screens. I do it in the middle of the night or when the mood takes me, on a run or at the kitchen sink. I scroll up and down Word documents and iPhone photo feeds, adding hearts and asterisks. I circle, underline, fire arrows and shout capitals until patterns emerge at the edges. I need to keep rummaging through the clutter, the overflowing folders and the teetering towers of boxes. The disorder is essential and the process is never truly structureless. Perhaps my methodology comes closest in practice to Walter Benjamin’s ragpicker, in turn borrowed from the nineteenth-century poetry of Charles Baudelaire. I assemble wholes from individually unpromising parts. I rehabilitate rubbish.

     

    Dr Annebella Pollen is Reader in History of Art and Design at University of Brighton, UK. Her research interests include popular image culture, especially in relation to the uses and expectations made of photography. Publications include Mass Photography: Collective Histories of Everyday Life (2015) and Photography Reframed: New Visions in Contemporary Photographic Culture (2018, co-edited with Ben Burbridge). Her other books include Dress History: New Directions in Theory and Practice (2015, co-edited with Charlotte Nicklas), The Kindred of the Kibbo Kift: Intellectual Barbarians (2015), a visual history of a utopian interwar youth group, and the forthcoming Art without Frontiers, on British art in international cultural relations. 

     

    1 – 04
    Essay