Friday, December 25, 2009

You have the formula. Give it to me now.

Longevity has been one of the main goals of transhumanism. We all want to live long enough to see the Future. But what if someone told you that you could live longer without waiting for gene mods, nanobots, or symbiotes? Too easy, right? No, really: we don't need technology to live longer, better lives. It can be as simple as changing your diet, and we can do that right now.

I'm not trying to sell any diets; there's too much controversy over which ones work and which are pseudoscientific bullshit. I don't even buy the natural, traditional diet that Michael Pollan pushes: "Eat food. Not too much. Mostly plants." I know most processed foods have all kinds of icky preservatives in them, and I know soda will rot you from the inside. But why worship the traditional diet? Traditionally, the average lifespan was shorter than it is today. Show me a culture that lives to 120 and I will eat whatever they are having.

Aside: lobsters show negligible senescence and can live to well over 100 years. Too bad for them that they are so delicious. Too bad for us that we don't gain their longevity by eating them.

A 140-year-old lobster about to be interrogated on the secrets of its longevity.

The point remains: whether you are anti-fat, pro-carb, or pro-starving-yourself, you can live better just by tweaking your lifestyle. A bit of exercise is good. Learn to cook, or dance, or play Guitar Hero. Be happy and free yourself from stress. Hang out with friends. Fall in love. It could work. And even if these things don't make you live any longer, well, at least you had fun living.

Merry Christmas. See you in the Future.

Wednesday, December 16, 2009

Apocalypse!

As civilization grows toward dizzying heights, we become increasingly aware of how far we have to fall. We dwell on collapse, destruction, and survival because of the uneasiness we feel living in a world too complex to understand. Its machinery has evolved beyond our control, and we are all trapped in lives that we call normal. The sub-genre of science fiction that deals with apocalyptic (and post-apocalyptic) scenarios appeals to this dread, and offers a second chance - to blow away the fluff, to reduce civilization to its imaginary core values, and, maybe, to design and rebuild it better than before.

The futurist is concerned with rebounding from such a collapse and rebuilding a technological society. Once survival is assured, we'd like to skip past all the grime and pollution that came with the Industrial Revolution and go directly to clean, renewable infrastructure. At the last H+ meeting, there was a great deal of discussion about these two ends of the spectrum: basic survival needs (agriculture, medicine, metal-working, tool-making) and sustainable solutions (solar and wind power).

But how do we connect the two? How do we go from subsistence to prosperity and beyond? Are there any shortcuts? There are, but they have less to do with technology than with the propagation of ideas and social structures (at the meeting, interesting analogies were drawn to the role of technology in developing countries). We'd have to consider what resources are available, and how to take best advantage of them. Let's talk about the end of the world:

Scenario: Biological catastrophe



This broad category includes scenarios of destruction targeted at the human population: any disaster that kills off most of the people without damaging the things we built. It includes pandemics (The Stand), reproductive failure (Children of Men), gendercide (Y: The Last Man), zombies (28 Days Later, I Am Legend), famine due to climate change, ecological collapse, gamma-ray bursts, and ozone depletion.

Though most of the infrastructure will deteriorate in a decade or less, our most precious resource can last much longer: the wealth of knowledge stored in books, inactive hard drives, and optical media. The first priority will be to protect repositories of knowledge - libraries, datacenters, and universities - against looting and environmental damage.

As people begin to repopulate and rebuild, we'll need to distribute knowledge to far-away communities. The people with access to life-saving information (for example, how to grow food and maintain greenhouses, how to survive cholera or perform an appendectomy) have a responsibility to share it with everyone. Without people operating the network of communication satellites, the easiest way to send information around the world is shortwave radio (around 10 MHz). These radios can be operated by individuals, and many run independent of the electric grid.
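
For a rough sense of scale - a back-of-the-envelope sketch, not radio engineering advice, taking the 10 MHz figure above and a half-wave dipole as an assumed example setup:

```python
# Rough numbers for an assumed shortwave setup around 10 MHz.
# A half-wave dipole is one common, simple wire antenna; the point is
# only that the hardware is on a scale individuals can build.
c = 3.0e8            # speed of light, m/s
f = 10e6             # assumed frequency, Hz (middle of the shortwave band)

wavelength = c / f        # ~30 m
dipole = wavelength / 2   # half-wave dipole: ~15 m of wire

print(f"wavelength ~{wavelength:.0f} m, half-wave dipole ~{dipole:.0f} m")
```

Fifteen meters of wire strung between two trees is well within the reach of a small group of survivors.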

Education should be a prime focus. When people become scarce, each person must be encouraged to develop to his or her fullest potential. The problems that caused the disaster, whether pandemic or famine, may prevent any progress until we can find a fix. And that can't be done until we train enough engineers, researchers, and scientists to replace the ones lost in the catastrophe.

The technology level follows a standard decay curve - crumbling quickly while nobody is around to maintain it, but eventually leveling off so that all that's left are Twinkies, Spam, and those plastic things that hold six-packs together. The population level, after the initial collapse, will slowly grow to a ceiling set by the available technology (agriculture, infrastructure, medicine, etc.) - a logistic curve. Units in the following graphs are arbitrary.
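
Here's a minimal sketch of that baseline in code - the parameters are made up and the units arbitrary, so only the shapes of the curves matter: exponential decay toward a durable-leftovers floor for technology, logistic growth toward a tech-limited ceiling for population.

```python
import numpy as np
import matplotlib.pyplot as plt

years = np.arange(0, 100)

# Technology crumbles quickly, then levels off at the durable leftovers
# (the Twinkies-and-Spam floor). All values are in arbitrary units.
tech_floor = 0.1
tech = tech_floor + (1.0 - tech_floor) * np.exp(-0.2 * years)

# Population crashes, then grows logistically toward a carrying
# capacity capped by the surviving technology level.
pop = np.empty(len(years))
pop[0] = 0.05        # survivors, as a fraction of the pre-collapse population
growth = 0.1
for t in range(1, len(years)):
    capacity = tech[t]
    pop[t] = pop[t - 1] + growth * pop[t - 1] * (1 - pop[t - 1] / capacity)

plt.plot(years, tech, label="technology")
plt.plot(years, pop, label="population")
plt.xlabel("years after collapse")
plt.ylabel("arbitrary units")
plt.legend()
plt.show()
```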



Here is a model of the worst-case scenario: nobody bothers to rebuild or maintain the tech, so people keep exploiting what was left behind until everything runs down. Buildings crumble to dust, cities turn into wilderness, forests spread across the land. The survivors band together in small tribes and live a hand-to-mouth existence. Even worse: there is no guarantee that we would ever return to a pre-collapse level of civilization.



The recovery model: we preserve enough knowledge to rebuild and use the existing structures to speed up the process. Note that technological decline is reversed quickly and the population limit rises as a result. There's a feedback process here: the more technology and infrastructure we save, the faster people can recover their former lives and become productive (and reproductive!). Optimistically, we'll restore the electric grid in 10 years and networks in 30. Recovery is a race against time - the longer we wait, the more we lose to decay, and the harder it will be to fix things up.
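
The same toy model captures that feedback if we let the surviving population rebuild technology in proportion to its size, and let the rebuilt technology raise the population ceiling in turn. Again, the parameter values below are invented; only the qualitative shape is the point.

```python
import numpy as np

years = np.arange(0, 100)
tech = np.empty(len(years))
pop = np.empty(len(years))
tech[0], pop[0] = 1.0, 0.05    # arbitrary units, as before

decay, rebuild, growth = 0.2, 0.5, 0.1
tech_floor, tech_max = 0.1, 2.0

for t in range(1, len(years)):
    # Tech still decays toward the floor, but people repair and extend
    # it in proportion to how many of them there are - the feedback loop.
    tech[t] = (tech[t - 1]
               - decay * (tech[t - 1] - tech_floor)
               + rebuild * pop[t - 1] * (1 - tech[t - 1] / tech_max))
    # Population grows logistically toward the current tech level,
    # so every bit of rebuilt tech raises the ceiling.
    pop[t] = pop[t - 1] + growth * pop[t - 1] * (1 - pop[t - 1] / tech[t])
```

Run side by side with the no-recovery version, the technology curve bottoms out and turns upward instead of flatlining at the leftovers floor, and the population ceiling climbs with it.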

Resources for survivors:
Download the offline version of Wikipedia
Library of Congress in Washington D.C.
National Library of Medicine at Bethesda, Maryland
Research libraries at Harvard, MIT, UC Berkeley, Stanford, etc.

Tuesday, October 20, 2009

Too cynical

You know this quote from Niemöller?
First they came for the communists, and I did not speak out—because I was not a communist;
Then they came for the socialists, and I did not speak out—because I was not a socialist;
Then they came for the trade unionists, and I did not speak out—because I was not a trade unionist;
Then they came for the Jews, and I did not speak out—because I was not a Jew;
Then they came for me—and there was no one left to speak out for me.
Of course. Every high school teaches it. It's supposed to be a call to action, asking us to speak out and defend our fellow human beings. But according to this poem, why should I care about the communists and the unionists and the Jews? Because when they come for me, when it's my own butt on the line, I want the communists and the socialists and the Jews to save me. Not exactly altruistic.

My interpretation sounds terribly cynical and selfish (or, as I like to call it, "rational self-interest"). Why do I care about research on Parkinson's or blindness or aging? Is it because I want to reduce the suffering of other people, or because these bad things could one day happen to me?

Tuesday, October 13, 2009

Morality of genetic enhancement: How will it change relationships?

With nature: Sandel's main objection revolves around humility and what he calls "the gifted aspect of life". If we can't accept who we are and what we are given, if we seek to control our own nature, then we lose that appreciation. He bases this argument on the ancient concept of hubris: we shouldn't play god. I will give the standard transhumanist response: people don't build skyscrapers and bridges and dams out of humility. We didn't take flight and land on the Moon out of humility. We didn't split atoms and peer into the infinity of space out of humility. We did those and other things because we wish to better ourselves, and because we can. That's who we are. To ignore this restless impulse would be to deny our very nature. Throughout human history, we have been striving to increase our dominion over the world around us. It is only logical to extend that will inward, toward our own bodies and minds.

With our bodies: How do you see yourself? Is there a sacred boundary around our bodies and minds, one that makes them off-limits to our willful control? The boundary is getting fuzzier and fuzzier: most people accept vaccinations and nutritional supplements, contact lenses and pacemakers. We drink coffee and alcohol and take other mind-altering substances. We read and learn and expand our minds. While these modifications are not as permanent as genetic enhancements, they already cross any such bodily boundary.

With our children: "We do not choose our children." Sandel sees having children as an unbidden, surprising, humbling experience. Some parents choose not to learn the sex of the child before birth, because knowing would ruin that experience. Children are gifts (he shied away from using the religious term "blessing"). A parent's love should not depend on the attributes of the child. But it is because of that parental love that some parents wish to give the best to their child. The proper analogy is not ordering up specific traits for an offspring the way one would order a Whopper at Burger King (no mayo, no tomatoes, extra lettuce), but the three wise men bringing gifts to a newborn child.

With our parents: Will genetic enhancement burden parents and children with too much responsibility for their choices? A short child today can only wish that he were taller, but in the future, he might grow to resent his parents for not making him taller, or better looking, or a better fit for whatever fad the culture dictates. A child who is engineered to be perfect might feel inadequate if she does not succeed. We are not wholly responsible for the way we are, Sandel says, and that relieves us of some pressure. To me, that's a childish attitude. As adults, we take responsibility for the choices we make, for what we learn or fail to learn, for how we behave. People today already deal with the choices their parents made for them, like circumcision, home-schooling, and religious practice. We can blame our personal problems on how we were born, or we can learn to overcome them.

In the end, these considerations should be worked out on a personal level. Each of us learns and grows and figures out our own identity. The government should not make policies based on these objections.

Next post will deal with how enhancement will change society as a whole.

Morality of genetic enhancement (part 1)



"What's the right thing to do?" That's the question Professor Sandel used to open his Justice lectures at Harvard. He served on President Bush's bioethics committee back in 2002 and provided a liberal voice for the ethics of human genetic research and policy, liberal compared to the rest of the Bush-appointed panel. Recently, he gave a lecture for BBC about his moral objections to genetic enhancement of children. I'll summarize his points and inject some of my own thoughts, along with ideas gleamed from last Friday's Seattle H+ meeting.

Before I can start, I need to address two important distinctions. Genetic enhancement is not eugenics. Eugenics seeks to improve the human population by selective breeding, either through incentives or limitations on reproduction. Not only is it associated with racism and tainted by the Nazi ideas of racial purity, but it also infringes on the basic human right of reproduction:
Reproductive rights rest on the recognition of the basic right of all couples and individuals to decide freely and responsibly the number, spacing and timing of their children - World Health Organization
Sterilizing a racial group or restricting its reproduction is comparable to genocide. But the non-coercive side of eugenics tends to slip under our moral radar: many religions encourage practitioners to marry inside the group and to have more children. Countries like Singapore have paid certain classes of citizens to have more children, and paid others to be sterilized. These practices fall into the vast gray area between right and wrong.

Genetic enhancement gives couples the opportunity to select specific traits for their unborn children, or to modify genes in ways that are impossible in nature. Unlike eugenics, it favors quality over quantity. It is not coercive, and it does not inherently discriminate against any group. At first sight, it violates no human rights. To justify any limitation, Sandel and other bio-conservatives must weigh the harms of genetic enhancement against what seems to be a reproductive right (though not yet mentioned by the WHO) for parents to decide what's best for their own children. The burden of proof falls on the conservatives.

Enhancement is not therapy. This is a contentious distinction, because there exists a wide middle ground where cases can be interpreted in different ways in different contexts. Screening out Down syndrome and muscular dystrophy falls under preventative therapy, but engineering the perfect muscles for a future Olympic swimmer is clearly enhancement. More ambiguously, a treatment for Alzheimer's disease or for ADHD may also enhance memory and learning in regular people. The NSF's report on ethics defines enhancement as rising above the "species-normal" range - but even that can lead to fuzzy scenarios. In the end, the distinction relies on our intuition.

Tuesday, October 6, 2009

Data's struggle


He is wondering: "Why is she holding my arm?"

Until recently, I didn't understand why Data wanted to be more human. His human-like appearance only magnifies the differences: he takes words literally; humor eludes him; he is deprived of pain, pleasure, and emotions; he does not understand human behavior. When put in such negative terms, these qualities make him seem defective and incomplete. It's as if his creator didn't have time to finish the job. Maybe Data feels insecure about these inadequacies. But he can't feel insecure; that's a human emotion!

Data tries to remedy his condition by putting himself in social situations, usually with awkward results. In the episode "In Theory", he engages in a romantic relationship with crewmember Jenna, but never feels any real connection with her. "In regards to romantic relationships, there is no real me," he admits - only a compilation of different cultural sources.

Why does he even try? He goes around to various colleagues for relationship advice. He devotes many processing cycles to writing himself a romantic subroutine (which he promptly deletes after Jenna breaks up with him). In the bigger scheme, he struggles to be more human. But there are many other fictional androids who are perfectly comfortable being who they are: the replicants in Blade Runner, the cyborgs in Terminator, the robots in Asimov's stories. Why can't Data accept who he is, and treat his emotionless rationality as a personality quirk rather than a deficiency?

Data's real struggle, I realized, isn't to become more human; he aspires to become more than he is. The goal is merely a product of the culture he lives in. He wants to exceed the limitations that he was created with. Ironically, that restless longing makes him more human than any emotion can.

When we look at our own lives, and ask what motivates us, maybe the destinations don't matter as much as the journey itself.

Friday, October 2, 2009

A healthy dose of doubt

I haven't updated in a while; I think I needed a break from the ever-quickening stream of information about the singularity and transhumanism. Too much immersion can lead to a false sense of certainty, of inevitability, and I don't want to hold any delusions. I've been thinking about the philosophy of knowledge lately:

"Philosophy, though unable to tell us with certainty what is the true answer to the doubts it raises, is able to suggest many possibilities which enlarge our thoughts and free them from the tyranny of custom. Thus, while diminishing our feeling of certainty as to what things are, it greatly increases our knowledge as to what they may be; it removes the somewhat arrogant dogmatism of those who have never traveled into the region of liberating doubt, and it keeps alive our sense of wonder by showing familiar things in an unfamiliar aspect."  -Bertrand Russell

Someone on Reddit recently asked for arguments against the singularity. I think apathy, laziness, conservatism, and a lack of imagination can hold back human potential more effectively than any technological limitation. Many obstacles originate from well-meaning suggestions: "be careful experimenting with things you don't understand", "let's solve the current problems first, like poverty, famine, and war", "the life we have is good enough; let's keep it that way". While these are all good ideas, they should not be twisted to hinder progress.

More about conservatism this weekend.