TransHumanism

Transhumanism is the idea that technology should be embraced, and we’ll be okay even though humanity will change into something that people of the 1500s may not recognize as human.

This is in contrast to the idea that what is natural is superior. It is also in contrast to the idea that technology, discovery, and construction should be limited and controlled because we will not be human if these things happen. Transhumanists aren’t necessarily against such controls, but they are against controls justified by reasoning like “we won’t be human anymore” or “we won’t be our traditional selves anymore.” Several transhumanists do recommend controlling nanotech and biotech research, though, until safety procedures are found and implemented (e.g., Bill Joy).

Discussion

Transhumanism has been a major theme in my thought the last few months.

I just don’t know if we want to talk about it, that’s all.

I have plenty to say, plenty of thoughts, that go beyond futures analysis. I almost just submitted a page, “ZeroLaws?”. It was about fundamental laws (the right to Think, the right to Communicate, the right to Desire) that are not in the US Constitution, because we never even thought they could come under threat. Well, all but the 2nd.

But I refrained, because I frankly don’t know if they’re of general interest on the wiki. I know two others on this wiki who are clearly interested in this sort of thing (Bayle, Emile), but I don’t know about everyone else.

I don’t even know if it’s a direction I want to take on the wiki. My work is on SocialSoftware, and a lot of the time, I think reflecting on future implications encroaches on my work on Social Software. I don’t want to distract us from our SuperProjects.

I just did a keyword search for “Transhumanism” on both MeatballWiki and here. The only uses of the phrase here are by me, in the last month or so.

On Meatball, it’s just MeatballWiki:CyberPunk and MeatballWiki:InterMapDiscussion.

I’m interested because I think that wiki is 70% a social and only 30% a technical phenomenon. Apart from that, it would help if you explained what transhumanism is meant to mean.

I’m not yet sure where you want to take it, but from what I see on Wikipedia (“Transhumanism is an emergent school of speculative philosophy analysing or favouring the use of science and technology, especially neurotechnology, biotechnology, and nanotechnology, to overcome human limitations and improve the human condition.” – Wikipedia:Transhumanism) it seems to be both on-topic and interesting. :)

I agree that there is something, that there will be something, and that the transition from what will be to what is is getting faster and faster (Lion on MoonEdit: I thought we’d not have that before 2006). It’s a little frightening, I agree on that too. Thinking about it, not avoiding thinking about it, might help. Whatever you call it, the title of the story, the book’s title, the name of the song, the page’s title, TransHumanism or NotTransHumanism?, comes last anyhow (therefore, editable titles). First talk about it. Think about it. Good idea.

Glancing over the WP page, it sounds like technocratic hype. Of course, why not discuss it. But those who push these issues usually have something to gain: selling books, getting grants (eternal youth, cyborg technologies). We are not the winners, so why add to the hype? If it should become available - which I doubt - it will be technology for the privileged. So I stay with the critics.

Technocratic hype is Helmut’s polite term. Life extension seemingly is a central issue in transhumanism. Taking a little surf round wikilandia, I recently dropped into a survival/outdoor wiki. On their forum they linked to a nice ‘n entertaining little test for figuring out how sustainable your way of living is (it is a very simplified and not too precise test, but ok). You take it, and at the end it comes up with a surprising result (I was told we’d need 3.1 planet Earths if everyone lived the way I do). Try it, it doesn’t take long [1] (in German, sorry). I wonder how the result would change if this question were added: Do you plan to take life-extending measures? - See, I’m really interested in thinking about possible consequences of a technical explosion - since the industrial revolution we have increasingly been in one, and we can all feel the immense pressure on our ears sometimes. I was born in a country where, 27 to 15 years before I was born, far too many people were pretty convinced they were better than all other people by birth - with the known consequences. The idea of a wealthy elderly lady from Florida having a cup of tea with her girlfriends, who are all also more than 200 years old, is ok. It might be interesting to listen or to talk to them; it might even be enlightening. But what if some global sucker somewhere had to die earlier - maybe at 13 instead of 15 years of age - to make this tea possible? How many resources does it take to make the elderly lady’s afternoon tea possible? On the other hand: downloading the .mp3 of Mozart’s latest symphony No. 75 wouldn’t be too bad either, eh? Medicine generally is about life-extending measures; doctors do their best at it, everywhere. Still, it makes a difference whether I have a bad malaria outbreak while I’m still in Kenya or when I’m already back in Germany. The life-extending measures stuff differs from place to place. Anyhow. What I wanna say, I guess: I’m still new to the topic, gimme some time, please.

I believe that we will share in the winnings, because we are sharing in the winnings right now, as we type to each other.

That said, we may not. Technology has done a good job of concentrating wealth, too. But this perspective doesn’t necessarily make you a non-transhumanist. Many transhumanists warn of mass job loss due to robotics. I am very sympathetic to this view. They argue that new jobs will be complex, and that people won’t have the money to support themselves as they learn new jobs. (Marshall Brain argues that if technology is a concentrator of wealth, then we should support a basic income, where everyone is paid an amount X yearly. And further, that people should self-organize, in preparation. Specifically, he points to the Free Software ethic, and describes a society run something like Wikipedia is run.)

I think the common basic idea is that technology should be embraced, and that we’ll be okay even though humanity will change. This is in contrast to the idea that what is natural is superior, that technology and discovery and construction should be limited and controlled, because we will not be human if these things happen.

Of course, if you don’t think that the technologies are likely, or even possible, transhumanism makes near-zero sense, except on grounds of general humanism. There is a lot of technology hype in transhumanist circles, and I rely on your criticism to help calm it and point out flaws. I believe in many outlandish things, though, and I study to see if they are realistic; I do so at the TaoRiver futures wiki. I don’t think we can dismiss a lot of “outlandish” things outright, especially looking ahead to my daughter’s generation. I think Wikipedia is an “outlandish thing,” but we would have been wrong to dismiss it. “Britannica? Going out of business? Because people will make their own encyclopedia on the Internet? Who has time? Who would make sure it was any good?” We have experienced this one before.

Transhumanism isn’t just about immortality. (What’s more, as far as I can tell, advances in medicine have led to longer lives for most people, not just the wealthy few.) For a more down-to-earth example, check out this guy (warning: slightly gruesome pictures), who put a magnet in his finger that allows him to perceive magnetic fields.

Though I guess that here on CW we’d be more interested in the impact on society than on individuals. Something like instant messaging across continents is as much a transformation of human nature as more blatant things like brain-computer interfaces, extended senses, or robotic limbs. So discussing the interaction of communities and technology fits into transhumanism, because society has a huge part in what makes us human.

Maybe some of what we talk about here could be called “transhumanism-lite”, because, well, it doesn’t freak people out as much as the cyborg stuff. But I also think that it’s more important than the cyborg stuff.

(By the way, I copied what I thought was a pretty good summary by Lion to the top of the page.)

Dude, Rainbow Dash is way ahead of that guy. ;)

As an experienced father of a little girl, I can tell you: My Little Ponies have freaking huge magnets built into their feet. See? You don’t believe me. Says right there: “Pony’s foot contains magnet.”

What’s more, Magneto-plastic G3 Cyborg Ponies use their subcutaneous cybernetic enhancements to operate cash registers, automatically open doors, and stick to the refrigerator. They work seamlessly atop existing infrastructure; you don’t have to do a thing, but keep your hard drives away from the play area.

“Cyborg Pony” makes me laugh. :-D

To answer MattisManzel’s question (Is there a better wiki for talking about it?): “figuring out how sustainable your way of living is … I wonder how the result would change if this question were added: Do you plan to take life-extending measures? … girlfriends, who are all also more than 200 years old … It might be interesting to listen or to talk to them; it might even be enlightening. But what if some global sucker somewhere had to die earlier - maybe at 13 instead of 15 years of age - to make this tea possible?”

The result would be the same. Sucking years of life out of one person and putting them into another is something I only see in horror movies.

Yes, life extension is a big deal in transhumanism. I think Anders Sandberg (or was it some other transhumanist?) said something like “If you don’t have the brains to pick the better path, pick the path that leads to better brains”. One thing that almost always improves brains is simply living longer and gaining more experience.

All the real life extending measures that we already use, and all potential life extending measures I’ve heard about, extend the life of individual people without shortening the lives of others.

Some people, when they think about “living much longer”, jump to the conclusion that Earth would rapidly fill up with people, and we would run out of resources.

That sort of thinking is exactly backwards. Running out of resources is a serious problem, and we should be worried about it. But as far as I can tell, killing people prematurely only makes things worse.

Imagine an alternative history, one where Johnny Appleseed discovers a kind of fruit that makes people live twice as long (but they have the same number of children, in the same years, as they did in our history).

People living twice as long would lead to (very roughly) twice as many people alive at any one time.

In particular, by 1900, there would have been (very roughly) 3 000 000 000 people alive, instead of the 1 650 000 000 alive in our history.

3 000 000 000 people! Three Giga-humans! Can you imagine!

But I’ve lived on a planet with over 6 000 000 000 people. As far as I can tell, farmers are growing plenty of food to feed all of them.

I only see 2 possibilities:

  • We never figure out how to avoid running out of resources. So eventually we run out of resources, and Bad Things Happen. In the hypothetical alternative history, this happens a few years earlier – is that any worse?
  • We do figure out how to avoid running out of resources. In the hypothetical alternative history, the extra years of life give people more time to worry about things like this. So in the hypothetical alternative history, they probably figure it out a few years earlier – is that any worse?

I don’t see how it is any worse, and so I think doubling human lifespan makes no difference one way or the other in the sustainability of a way of living. Am I missing something obvious?

Guess what I think the effects of quadrupling human lifespan would be on sustainability?
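
In case it helps to see that back-of-the-envelope arithmetic spelled out, here is a minimal sketch in Python (purely illustrative, resting on the same assumption as the Johnny Appleseed story: the number and timing of births stays the same, only lifespans are multiplied):

    # Back-of-the-envelope sketch, not a demographic model.
    # Assumption from the story above: births per year are unchanged,
    # lifespans are multiplied by some factor k.  At a rough steady state,
    #     population ≈ births_per_year × average_lifespan,
    # so multiplying lifespan by k multiplies the standing population by about k.

    def scaled_population(baseline_population: float, lifespan_factor: float) -> float:
        """Rough estimate: same birth schedule, lifespans multiplied by lifespan_factor."""
        return baseline_population * lifespan_factor

    world_1900 = 1_650_000_000  # the historical 1900 estimate used above

    print(scaled_population(world_1900, 2))  # ~3.3e9 -- the "very roughly 3 000 000 000" above
    print(scaled_population(world_1900, 4))  # ~6.6e9 -- about the 6-billion world we already feed

That is all the doubling argument amounts to: roughly linear scaling of the standing population, which is why quadrupling looks no different in kind.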

