IBM to take AI to the next level: brain-like computers

S&L: Discussion of matters pertaining to theoretical and applied sciences, and logical thought.

Moderator: Charon

The Minx
Pleasure Kitten
Posts: 1581
Joined: Sun Sep 30, 2007 8:29 pm

#1 IBM to take AI to the next level: brain-like computers

Post by The Minx »

Link
IBM has announced it will lead a US government-funded collaboration to make electronic circuits that mimic brains.

Part of a field called "cognitive computing", the research will bring together neurobiologists, computer and materials scientists and psychologists.

As a first step in its research the project has been granted $4.9m (£3.27m) from US defence agency Darpa.

The resulting technology could be used for large-scale data analysis, decision making or even image recognition.

"The mind has an amazing ability to integrate ambiguous information across the senses, and it can effortlessly create the categories of time, space, object, and interrelationship from the sensory data," says Dharmendra Modha, the IBM scientist who is heading the collaboration.

"There are no computers that can even remotely approach the remarkable feats the mind performs," he said.

"The key idea of cognitive computing is to engineer mind-like intelligent machines by reverse engineering the structure, dynamics, function and behaviour of the brain."

'Perfect storm'

IBM will join five US universities in an ambitious effort to integrate what is known from real biological systems with the results of supercomputer simulations of neurons. The team will then aim to produce for the first time an electronic system that behaves as the simulations do.

The longer-term goal is to create a system with the level of complexity of a cat's brain.

Dr Modha says that the time is right for such a cross-disciplinary project because three disparate pursuits are coming together in what he calls a "perfect storm".

Neuroscientists working with simple animals have learned much about the inner workings of neurons and the synapses that connect them, resulting in "wiring diagrams" for simple brains.

Supercomputing, in turn, can simulate brains up to the complexity of small mammals, using the knowledge from the biological research. Modha led a team that last year used the BlueGene supercomputer to simulate a mouse's brain, comprising 55m neurons and some half a trillion synapses.
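A quick back-of-envelope division (my own arithmetic, not the article's) puts those figures in perspective:

[code]
# Rough arithmetic on the mouse-scale simulation quoted above.
neurons = 55e6       # 55 million simulated neurons
synapses = 0.5e12    # some half a trillion simulated synapses

print(f"{synapses / neurons:,.0f} synapses per neuron")  # ~9,091
[/code]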

"But the real challenge is then to manifest what will be learned from future simulations into real electronic devices - nanotechnology," Dr Modha said.

Technology has only recently reached a stage in which structures can be produced that match the density of neurons and synapses from real brains - around 10 billion in each square centimetre.

Networking

Researchers have been using bits of computer code called neural networks that seek to represent connections of neurons. They can be programmed to solve a particular problem - behaviour that appears to be the same as learning.
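For anyone who hasn't met one, here is a minimal sketch of that idea in Python: a single artificial neuron whose connection weights are nudged until it computes logical OR. The loop and constants are purely illustrative, nothing from the IBM project.

[code]
# One artificial neuron trained on the OR function -- a toy example of
# "programming a neural network to solve a particular problem".

def step(x):
    return 1 if x > 0 else 0

samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]   # connection weights (crude stand-ins for synapses)
b = 0.0          # bias
lr = 0.1         # learning rate

for _ in range(20):                       # repeated exposure = "learning"
    for (x1, x2), target in samples:
        out = step(w[0] * x1 + w[1] * x2 + b)
        err = target - out
        w[0] += lr * err * x1             # strengthen/weaken connections
        w[1] += lr * err * x2
        b += lr * err

print([step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in samples])
# -> [0, 1, 1, 1]
[/code]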

But this approach is fundamentally different.

"The issue with neural networks and artificial intelligence is that they seek to engineer limited cognitive functionalities one at a time. They start with an objective and devise an algorithm to achieve it," Dr Modha says.

"We are attempting a 180 degree shift in perspective: seeking an algorithm first, problems second. We are investigating core micro- and macro-circuits of the brain that can be used for a wide variety of functionalities."

The problem is not in the organisation of existing neuron-like circuitry, however; the adaptability of brains lies in their ability to tune synapses, the connections between the neurons.

Synaptic connections form, break, and are strengthened or weakened depending on the signals that pass through them. Making a nano-scale material that can fit that description is one of the major goals of the project.
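A toy illustration of that kind of tuning: a single simulated synapse under a crude Hebbian rule ("cells that fire together wire together"), with every rate and probability here invented purely for illustration.

[code]
# One simulated synapse: strengthened when the neurons on both sides
# fire together, weakened otherwise.
import random

random.seed(1)
weight = 0.5                 # current connection strength, 0..1
LEARN, DECAY = 0.05, 0.01

for _ in range(1000):
    pre = random.random() < 0.3      # did the presynaptic neuron fire?
    post = random.random() < 0.3     # did the postsynaptic neuron fire?
    if pre and post:
        weight += LEARN * (1.0 - weight)   # correlated firing: strengthen
    else:
        weight -= DECAY * weight           # otherwise: slow decay

print(round(weight, 3))   # the connection settles where activity drives it
[/code]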

"The brain is much less a neural network than a synaptic network," Modha says.

First thought

The fundamental shift toward putting the problem-solving before the problem makes the potential applications for such devices practically limitless.

Free from the constraints of explicitly programmed function, computers could gather together disparate information, weigh it based on experience, form memory independently and arguably begin to solve problems in a way that has so far been the preserve of what we call "thinking".

"It's an interesting effort, and modelling computers after the human brain is promising," says Christian Keysers, director of the neuroimaging centre at University Medical Centre Groningen. However, he warns that the funding so far is likely to be inadequate for such an large-scale project.

That the effort requires the expertise of such a variety of disciplines means that the project is unprecedented in its scope, and Dr Modha admits that the goals are more than ambitious.

"We are going not just for a homerun, but for a homerun with the bases loaded," he says.
Not exactly R. Daneel Olivaw yet, but at least we can build his cat. :smile:

Human-like intelligence may ultimately follow. And then, super-human intelligence.
Hotfoot
Avatar of Confusion
Posts: 3769
Joined: Thu Aug 04, 2005 9:28 pm

#2

Post by Hotfoot »

I honestly hope nothing major comes from this technology for at least a hundred years. I doubt we'll be overly enlightened with how we treat Sapient Machines.
The Minx
Pleasure Kitten
Posts: 1581
Joined: Sun Sep 30, 2007 8:29 pm

#3

Post by The Minx »

Hotfoot wrote:I honestly hope nothing major comes from this technology for at least a hundred years. I doubt we'll be overly enlightened with how we treat Sapient Machines.
I doubt we will become more mature by waiting. For better or worse, we seem to grow in wisdom by confronting extant problems; what-ifs and issues on the horizon are given short shrift and not really tackled at all.

Perhaps that is not surprising with the various problems we face day-to-day, but I don't think we are going to overcome this aspect of our nature any time soon.
frigidmagi
Dragon Death-Marine General
Posts: 14757
Joined: Wed Jun 08, 2005 11:03 am
Location: Alone and unafraid

#4

Post by frigidmagi »

A mock court was run about two years ago in which a fictional AI sued "her" (according to the script, the computer liked to think of itself as feminine) corporate owners for both her life and freedom (they were going to replace her).

The Jury found in her favor.

Frankly I'm more worried about what super intelligent machines will do to us.
"it takes two sides to end a war but only one to start one. And those who do not have swords may still die upon them." Tolken
B4UTRUST
Dance Puppets Dance
Posts: 4867
Joined: Wed Jun 08, 2005 3:31 pm
Location: Chesapeake, Va

#5

Post by B4UTRUST »

Personally, as a transhumanist, I eagerly look forward to this as the next step in our evolution as a species. Hopefully, with the ability to create artificial intelligences, we can advance our own by merging the man and the machine into a whole. Bio-processors and co-processors, datalinks, etc.

As for what the AIs will do to us? I guess that really all depends on how they're raised. Do we go with the three laws type situation here? Do we give them free rein? Do we charge them with our protection and our survival? *shrugs*
Saint Annihilus - Patron Saint of Dealing with Stupid Customers
Hotfoot
Avatar of Confusion
Posts: 3769
Joined: Thu Aug 04, 2005 9:28 pm

#6

Post by Hotfoot »

The Minx wrote:
Hotfoot wrote:I honestly hope nothing major comes from this technology for at least a hundred years. I doubt we'll be overly enlightened with how we treat Sapient Machines.
I doubt we will become more mature by waiting. For better or worse, we seem to grow in wisdom by confronting extant problems; what-ifs and issues on the horizon are given short shrift and not really tackled at all.

Perhaps that is not surprising with the various problems we face day-to-day, but I don't think we are going to overcome this aspect of our nature any time soon.
It's more an issue of what problems we can face at any given time. Look at the turmoil the world is in now, and how difficult it is to accept humans of different creeds, genders, or orientations. Throw an artificial life form into the mix, and it will likely be ignored, exploited, and abused for years if not decades while we come to terms with our own flaws.

Meanwhile, Frigid, the bottom line is that a trial of this nature may never see the inside of a courtroom, not for a long time. AI isn't given the legal standing of sentience right off the bat, and where we draw the line is thin. It's one thing to make a "what if" scenario with a clearly human-level intelligence that can make arguments and present itself as a feminine construct. It's another thing to have something with intelligence somewhere between a baby and a baboon that shuts down when insulted or something.

Look at it this way: If this research isn't granting life status to these constructs right off the bat, you're looking at potentially decades of abuse before the law catches up with the technology. Up until that point, it's proprietary hardware and software, nothing more, and thus property, which the company can do whatever it wants with.

The shit of it is, in order for real progress to be made at first, that's basically what needs to happen, or progress will be glacial, as "post mortems" on the AIs will be long, expensive affairs that make little sense under modern mores.
frigidmagi
Dragon Death-Marine General
Posts: 14757
Joined: Wed Jun 08, 2005 11:03 am
Location: Alone and unafraid

#7

Post by frigidmagi »

Meanwhile, Frigid, the bottom line is that a trial of this nature may never see the inside of a courtroom, not for a long time. AI isn't given the legal standing of sentience right off the bat, and where we draw the line is thin
Nonsense. The trial worked because they found a real lawyer who said that, in the same circumstances, he would take the case. All it takes to get into court is a lawyer.
AI isn't given the legal standing of sentience right off the bat, and where we draw the line is thin.
True, but if someone is able to see the need for aid and call for a lawyer, that will be enough. If the court makes the wrong decision, you can be sure that the public will not abide it for long. They almost never do.

It's another thing to have something with intelligence somewhere between a baby and a baboon that shuts down when insulted or something.
We don't grant baboons rights either; why should machines with their level of intelligence be granted rights? Fuck, why would a machine like this even be produced except as a research tool?

Face it, a sub-level AI isn't that much more useful than a standard computer processor. They won't be used widely until the point where this becomes an issue, and it will be solved rather quickly because our culture has been discussing this since 1949!

This isn't some new issue out of the blue; this is something writers, dramatists and others have been going over for longer than you or I have been alive.
"it takes two sides to end a war but only one to start one. And those who do not have swords may still die upon them." Tolken
Hotfoot
Avatar of Confusion
Posts: 3769
Joined: Thu Aug 04, 2005 9:28 pm

#8

Post by Hotfoot »

frigidmagi wrote:Nonsense. The trial worked because they found a real lawyer who said that, in the same circumstances, he would take the case. All it takes to get into court is a lawyer.
Which involves someone knowing that abuse is taking place. How easy do you think it's going to be to get a private company to allow public oversight of a project like this? They can bury the progress for a long time and not say a word. There has to be a whistleblower in the process, and companies are notoriously good about squelching that stuff hardcore, and the fewer people involved (and it would be few on a project requiring this sort of expertise), the easier it is to keep things silent.
True, but if someone is able to see the need for aid and call for a lawyer, that will be enough. If the court makes the wrong decision, you can be sure that the public will not abide it for long. They almost never do.
True, but this assumes that the public cares. If the company goes to great pains to just paint the entire thing as a rogue programmer getting pissy about being laid off and overinflating some new smart program, or whatever, it can confuse the public into inaction, and the public is very easy to sway on these technical matters, especially as you're getting into the very heavy questions of what intelligence is, whether this thing has a soul, and so forth. Controversy is rife here and you know it. John Q. Public is not as savvy about these matters as you or I, and I doubt the mock trial had a team of highly paid corporate lawyers pulling out all the stops to stymie this guy who put on a show.

We don't grant baboons rights either; why should machines with their level of intelligence be granted rights? Fuck, why would a machine like this even be produced except as a research tool?
Baby steps, man. We don't just go from Dual-Cores to Data in a generation. Like I said, the lines we draw are thin, and we do grant some concessions to live test subjects, and they have to be treated properly when they are used; I remember my dad going on for hours about all the things that needed to be done with rats before they could move on to the next stages of experimentation. Computer programs need not be permitted such beneficial standards.
Face it, a sub-level AI isn't that much more useful than a standard computer processor. They won't be used widely until the point where this becomes an issue, and it will be solved rather quickly because our culture has been discussing this since 1949!

This isn't some new issue out of the blue; this is something writers, dramatists and others have been going over for longer than you or I have been alive.
It's not new, this is true, but it's also not something that's in the common discussions of your average person. Most people never think about this stuff, and usually the only people who give a shit are the people who are interested in one day seeing it. You think some fundamentalist prick cares about I, Robot or Do Androids Dream of Electric Sheep? Not really, but you can bet that if the subject of a machine having a soul comes up, he'll be sounding the war drums and raising hell. When we break the human sapience level, that question is going to be damn common and it's going to be mighty uncomfortable for a lot of people to answer.
Mayabird
Leader of the Marching Band
Posts: 1635
Joined: Mon Jun 13, 2005 7:53 pm
Location: IA > GA

#9

Post by Mayabird »

The thing is, people nowadays get a lot of exposure to sci-fi in movies, TV, games, and so on, and the concept of AIs (often in intelligent robots, but not always) has been thrown around a LOT. If anything, I think people would be more bothered by less intelligent AIs (ones with cognitive processes between babies and baboons, as mentioned earlier) than ones with human intelligence that could pass their intuitive personal Turing tests.

As for how they view humans if/when they reach super-human intelligence...this is why parenting is important.
I :luv: DPDarkPrimus!

Storytime update 8/31: Frigidmagi might be amused by this one.
frigidmagi
Dragon Death-Marine General
Posts: 14757
Joined: Wed Jun 08, 2005 11:03 am
Location: Alone and unafraid

#10

Post by frigidmagi »

Which involves someone knowing that abuse is taking place. How easy do you think it's going to be to get a private company to allow public oversight of a project like this?
Considering it was the AI itself that called the lawyer, the complaint is moot. The story of the trial was on SDN; you may want to look it up.
Baby steps, man. We don't just go from Dual-Cores to Data in a generation. Like I said, the lines we draw are thin, and we do grant some concessions to live test subjects, and they have to be treated properly when they are used; I remember my dad going on for hours about all the things that needed to be done with rats before they could move on to the next stages of experimentation. Computer programs need not be permitted such beneficial standards.
Again, why would they be mass-produced? Baby steps doesn't explain that; they have to be cheaper and better than what we get by current conventional means. I'm not seeing that with your example. It would be an isolated research tool and likely fairly well treated, as there's no damn reason to treat it otherwise.
It's not new, this is true, but it's also not something that's in the common discussions of your average person. Most people never think about this stuff, and usually the only people who give a shit are the people who are interested in one day seeing it
The Average Person will follow the trail laid down if it comes down to it. On the flip side, considering fucking Hollywood has talked about and considered this, I think the mythical Average Person (frankly, I've been on 3 continents, I've met people from all over, I've lived in 5 states, and I've yet to meet this average person and am not sure I believe in him) has thought about it more than you think, in between making sure his kids can eat and he gets a roof over their heads, of course.

I mean, come on! WALL-E was a what? R2-D2? Cortana from Halo? I saw people weep when they thought WALL-E was gone. This generation is being preconditioned to see AIs as people; they're going to magically buck all that because of what? You think people automatically must be pricks and follow past history?
You think some fundamentalist prick cares about I, Robot or Do Androids Dream of Electric Sheep?
And the fundamentalist has been doing just sooooo well recently, hasn't he? And given a consideration of the data (voting patterns, church attendance, polls on creationism and gay marriage), he's bound to go the way of the Dodo, and sooner than you think. So fuck the fundamentalist; he's just sitting there waiting for a theological counter to come along and break his last fort. And it will.
When we break the human sapience level, that question is going to be damn common and it's going to be mighty uncomfortable for a lot of people to answer.
People sing and talk to their cars. They dress up their fucking computers. They've been anthropomorphizing machines since before the damn steam age. They'll adapt; it's what we fucking do (besides fuck). Given they have a lot less invested in it than they did in the racial and sexist bullshit, there is no reason to assume they will fight to keep the robots out.

Frankly, y'all should ask yourselves this: what happens when we have these super-smart computers and robots? Even if they like us, they'll easily see that our best system is messy, spasmodic and short-sighted. They'll likely want to fix it.

The temptation to stop trying and give them every hard problem will be huge.

How likely is it that we let them do all the thinking? If we do that, how do we stop ourselves from being domesticated, becoming nothing more than monkey pets to computers?

This won't be because the AIs dislike us; hell, I love my dog and I bet most of you love your pets, but a pet is still a pet.
"it takes two sides to end a war but only one to start one. And those who do not have swords may still die upon them." Tolken
Hotfoot
Avatar of Confusion
Posts: 3769
Joined: Thu Aug 04, 2005 9:28 pm

#11

Post by Hotfoot »

frigidmagi wrote:Considering it was the AI itself that called the lawyer, the complaint is moot. The story of the trial was on SDN; you may want to look it up.
Hardly; the scenario requires the AI to have access to an outside line. It is trivial to prevent that possibility from ever occurring.
Again, why would they be mass-produced? Baby steps doesn't explain that; they have to be cheaper and better than what we get by current conventional means. I'm not seeing that with your example. It would be an isolated research tool and likely fairly well treated, as there's no damn reason to treat it otherwise.
"Mass produced" doesn't enter into it. Even making a few dozen or so for testing purposes can stretch some boundaries if we're mistreating them in any way.
The Average Person will follow the trail laid down if it comes down to it. On the flip side, considering fucking Hollywood has talked about and considered this, I think the mythical Average Person (frankly, I've been on 3 continents, I've met people from all over, I've lived in 5 states, and I've yet to meet this average person and am not sure I believe in him) has thought about it more than you think, in between making sure his kids can eat and he gets a roof over their heads, of course.

I mean, come on! WALL-E was a what? R2-D2? Cortana from Halo? I saw people weep when they thought WALL-E was gone. This generation is being preconditioned to see AIs as people; they're going to magically buck all that because of what? You think people automatically must be pricks and follow past history?
So what? There's a huge gap between how one feels for fictional characters and their real-life counterparts. Those examples are made human by extended contact; you feel much less empathy for something you've never interacted with.

I need only look at my father, an otherwise intelligent and well-educated, well-read man with years of scientific training: bring up certain subjects and he resorts to common, tired "just because" arguments. Most people do when pressed to answer heavy questions that they don't really care about. And when pressed further, they will rabidly defend those positions without really having a stake in the matter, because they don't want to be bothered with it.
And the fundamentalist has been doing just sooooo well recently, hasn't he? And given a consideration of the data (voting patterns, church attendance, polls on creationism and gay marriage), he's bound to go the way of the Dodo, and sooner than you think. So fuck the fundamentalist; he's just sitting there waiting for a theological counter to come along and break his last fort. And it will.
Fundamentalism knows no boundaries. So long as there are people with ideals, there will be fundamentalists to go to the edge of insanity for those ideals.
People sing and talk to their cars. They dress up their fucking computers. They've been anthropomorphizing machines since before the damn steam age. They'll adapt; it's what we fucking do (besides fuck). Given they have a lot less invested in it than they did in the racial and sexist bullshit, there is no reason to assume they will fight to keep the robots out.
We do the same thing with animals, but I bet you if all the dogs and cats in the world suddenly started talking and holding conversations, shit would fly. We anthropomorphize everything we can identify to make it more familiar, but we know that it's not for real.
Frankly, y'all should ask yourselves this: what happens when we have these super-smart computers and robots? Even if they like us, they'll easily see that our best system is messy, spasmodic and short-sighted. They'll likely want to fix it.

The temptation to stop trying and give them every hard problem will be huge.

How likely is it that we let them do all the thinking? If we do that, how do we stop ourselves from being domesticated, becoming nothing more than monkey pets to computers?

This won't be because the AIs dislike us; hell, I love my dog and I bet most of you love your pets, but a pet is still a pet.
This vision only happens if we actually give the AI control over our civilization and don't work in fail-safes, which is a frankly ridiculous idea. They are wholly beholden unto us for their continued survival, until such point as we decide otherwise.
frigidmagi
Dragon Death-Marine General
Posts: 14757
Joined: Wed Jun 08, 2005 11:03 am
Location: Alone and unafraid

#12

Post by frigidmagi »

Hardly; the scenario requires the AI to have access to an outside line. It is trivial to prevent that possibility from ever occurring.
And an AI will need access to the outside or it's useless. The only reason to keep an AI around is to process and organize data and trends; for that, it needs access to that data. It would be like having a super genius in your basement but refusing to let him use books! Utterly and completely pointless idiocy, which will cost money. Even a hook-up to other computers in the building will mean that the computer can access the bloody outside. There was a computer program in the bloody 80s that let deaf people use computers for phone purposes (my dad used it on an old Apple of all things).
Mass produced" doesn't enter into it. Even making a few dozen or so for testing purposes can stretch some boundaries if we're mistreating them in any way.
Mass production most certainly does enter into it. A dozen monkey-brained computers have no impact on society and no legal case, not to mention that researchers have no reason to subject them to cruelty. There's nothing to be learned from mistreating a computer. A mass-produced product is the servant of a master that does. Corporations mistreat things all the time in the name of profit.
I need only look at my father, an otherwise intelligent and well-educated, well-read man with years of scientific training: bring up certain subjects and he resorts to common, tired "just because" arguments. Most people do when pressed to answer heavy questions that they don't really care about. And when pressed further, they will rabidly defend those positions without really having a stake in the matter, because they don't want to be bothered with it.
And where do those "Just Because" arguments come from? Oh right, a pre-existing structure that people have a stake in. You yourself pointed out to me that your father lives in a heavily Republican town; that means his stake is getting along with his neighbors. Looks like he has a stake to me. They don't have one in terms of political rights for an AI, because it's an issue out of left field. There are no verses in any Holy Book saying "And Yea Ye shalt deny the machine the vote and property because the machine is evil," or anything along those lines.

Without any prior stance against it, or any economic or political reason for it, it's unlikely to happen.
Fundamentalism knows no boundaries. So long as there are people with ideals, there will be fundamentalists to go to the edge of insanity for those ideals.
And this completely and utterly does not address my bloody argument. Screaming that there will always be fundamentalists does not address the bloody facts I have pointed out: fundamentalism is falling. Furthermore, it's wrong; Fundamentalism only appeared in the late 1960s as a theological and political movement. It is not some blanket term for people who resist all change; we already have one of those, it's called reactionary.
We do the same thing with animals, but I bet you if all the dogs and cats in the world suddenly started talking and holding conversations, shit would fly
Yes, due to our pre-existing stake in believing we are superior to dogs and cats and have a right to keep them as pets. We have none towards machines of human intelligence.
This vision only happens if we actually give the AI control over our civilization and don't work in fail-safes,
Because we're just so darn good at fail-safes aren't we?
which is a frankly ridiculous idea.
Right up there with "AIs will be mistreated... because." Seriously, you are aware you haven't actually given me a reason to believe that humans will want to grind these robots and computers into the dirt?

I find my concern much more likely than your screaming that AIs are the new mistreated minority of the future.
They are wholly beholden unto us for their continued survival, until such point as we decide otherwise.
Because it's not like most 1st world factories aren't mostly robots doing their thing under computer control. Or that machines are used to build machines, or that AIs could possibly be smart enough to design and build other AIs...

Oh...

Wait.
"it takes two sides to end a war but only one to start one. And those who do not have swords may still die upon them." Tolken
Hotfoot
Avatar of Confusion
Posts: 3769
Joined: Thu Aug 04, 2005 9:28 pm

#13

Post by Hotfoot »

frigidmagi wrote:And an AI will need access to the outside or it's useless. The only reason to keep an AI around is to process and organize data and trends; for that, it needs access to that data. It would be like having a super genius in your basement but refusing to let him use books! Utterly and completely pointless idiocy, which will cost money. Even a hook-up to other computers in the building will mean that the computer can access the bloody outside. There was a computer program in the bloody 80s that let deaf people use computers for phone purposes (my dad used it on an old Apple of all things).
Since when does an AI need access to a data line? There are plenty of sensors and internal network options available that offer no access outside. You've been bitten by some sort of sci-fi brain-bug, I'm afraid. Internal Networks != External Networks. If what you are saying is true, it would be possible to attack Military LANs from the Internet. It is not, end of story. If you'd like, I could go into detail as to the different structures of networks (LAN, WAN, wLAN, etc.), but that's another discussion, as is the idea that you need to plug an AI into the Internet where it could roam unprotected...but let's leave that one be, since AI Hardware != modern computing hardware, which is yet another discussion.
Mass production most certainly does enter into it. A dozen monkey-brained computers have no impact on society and no legal case, not to mention that researchers have no reason to subject them to cruelty. There's nothing to be learned from mistreating a computer. A mass-produced product is the servant of a master that does. Corporations mistreat things all the time in the name of profit.
It's not wrong to hurt monkeys in a lab for no reason? That's basically what you're saying.

More to the point, nobody would know, and the researchers might not know what they're doing IS cruel until after the fact, because an AI wouldn't necessarily have the same reactions to stress, pain, or what have you that we would easily recognize.
And where do those "Just Because" arguments come from? Oh right, a pre-existing structure that people have a stake in. You yourself pointed out to me that your father lives in a heavily Republican town; that means his stake is getting along with his neighbors. Looks like he has a stake to me. They don't have one in terms of political rights for an AI, because it's an issue out of left field. There are no verses in any Holy Book saying "And Yea Ye shalt deny the machine the vote and property because the machine is evil," or anything along those lines.

Without any prior stance against it, or any economic or political reason for it, it's unlikely to happen.
People have a pre-existing structure in which their computers are items to be bought and sold, to do with as they please. In which robots are toys and a novelty at best, and anything else is patently ridiculous. Give a formerly inanimate (roughly) object the ability to hold intelligent conversation and, like I said, a lot of uncomfortable questions about souls start coming into play. It's worse than human cloning, because at least there we have identical twins to fall back on, and they're, you know, HUMAN. I don't see how you can say people would just accept this when human cloning is something that's still a major controversy.
And this completely and utterly does not address my bloody argument. Screaming that there will always be fundamentalists does not address the bloody facts I have pointed out: fundamentalism is falling. Furthermore, it's wrong; Fundamentalism only appeared in the late 1960s as a theological and political movement. It is not some blanket term for people who resist all change; we already have one of those, it's called reactionary.
Whatever you want to call them, you know there's going to be backlash, unless there is some serious reason not to. Again, I cite human cloning as a perfect example of global squeamishness to artificial life, and cloning is far more natural than what we're talking about.
Yes, due to our pre-existing stake in believing we are superior to dogs and cats and have a right to keep them as pets. We have none towards machines of human intelligence.
But we do towards machines. Just as we see dogs and cats as pets. Throw "human intelligence" into either one and shit gets messy. It'd be easier to accept something natural achieving intelligence than a human construct, because at least then there is some similarity to humans in that it's an animal; it's recognizable. Machines don't have that luxury.
Because we're just so darn good at fail-safes aren't we?
We've managed not to have a Chernobyl, last I checked. I don't see how we get to Skynet-level retardation like you're predicting unless we forgo all manner of standard procedures.
Right up there with "AIs will be mistreated... because." Seriously, you are aware you haven't actually given me a reason to believe that humans will want to grind these robots and computers into the dirt?

I find my concern much more likely than your screaming that AIs are the new mistreated minority of the future.
Why is this so hard to believe? Humanity has a long, dark history of hating what is new or different. Skin color, nationality, Religion or Creed, Gender, Orientation, you name it, we've killed over it, and we still do today. It's going strong and will for some time by the looks of things. Meanwhile, we do NOT have a history of just turning over the reins of our society to a former set of tools that can now think for themselves.
Because it's not like most 1st world factories aren't mostly robots doing their thing under computer control. Or that machines are used to build machines, or that AIs could possibly be smart enough to design and build other AIs...

Oh...

Wait.
Robots that need to be maintained...by humans. That were designed...by humans. That have parts custom-tooled...by humans. Which come from raw materials collected...by humans. All powered by plants run, maintained, and supplied....by humans.

You severely overestimate how automated our society has become, and underestimate just how much we'd have to give up to become wholly dependent on our own creations. A few car factories are proof that maybe an AI could run a car factory, little more.

You blur the line between "tool" and "tool-user" all too easily, and you forget one very important thing: all you need to keep a human being going is some plant matter; our bodies take care of the rest. Where's that degree of independence and automation in ANY robot? Frankly, it doesn't exist, and if we left these creations to their own devices, eventually they would break down and never work again without serious outside influences.
frigidmagi
Dragon Death-Marine General
Posts: 14757
Joined: Wed Jun 08, 2005 11:03 am
Location: Alone and unafraid

#14

Post by frigidmagi »

Since when does an AI need access to a data line? There are plenty of sensors and internal network options available that offer no access outside. You've been bitten by some sort of sci-fi brain-bug, I'm afraid. Internal Networks != External Networks. If what you are saying is true, it would be possible to attack Military LANs from the Internet. It is not, end of story. If you'd like, I could go into detail as to the different structures of networks (LAN, WAN, wLAN, etc.), but that's another discussion, as is the idea that you need to plug an AI into the Internet where it could roam unprotected...but let's leave that one be, since AI Hardware != modern computing hardware, which is yet another discussion
I am well aware of the differences in networks, thank you. What you're not grasping is that these days you can access a fucking phone line from a computer; in fact, I pointed out to you that it could be done as early as the 80s. If there's a phone line in the building, guess what? It's not secure. And those military LANs you're chatting about? They were usually in places far away from external phone lines. A bit different from the places a corp would want an AI.
It's not wrong to hurt monkeys in a lab for no reason? That's basically what you're saying.
So from "There is no reason for researchers to mistreat a monkey level computer" you're getting "IT IS PERFECTLY OKAY TO MISTREAT MONKEYS IN A LAB!!!1!" :roll:

Really, Hotfoot, maybe you should reread my posts, good sir, because I at no time advocate animal cruelty in any setting, and frankly, if I didn't know you as well as I do, I would assume you were attempting an ad hominem. I am going to assume you misunderstood me.

I am saying that your fictional monkey computer (yeah, it's not the best label, but it's what I've got; you started it) won't be mistreated in any research labs. There is no point, and our current system of researchers doesn't engage in pointless cruelty for the sake of evil. In other words, you're using energy better suited to worrying about Russians or something.

Now as to the known treatment of animals in a lab, that's a separate subject, but I will point out that there is always a damn point beyond "let's torture some mice who can't fight back." If we want to discuss that, I suggest a separate thread.
People have a pre-existing structure in which their computers are items to be bought and sold, to do with as they please.
Okay, that's a point; however, you're ignoring that AIs, while still fictional, are widely recognized as being different and therefore not covered under this process.
I don't see how you can say people would just accept this when human cloning is something that's still a major controversy.
I find this funny, as one of the reasons given by Congress for banning human cloning was the worry that we would end up abusing said clones. Seeing as Congress, a body noted for a lack of moral compass, can worry about this in regard to clones, why are we blindly assuming that we'll be monsters towards the AIs?
We've managed not to have a Chernobyl, last I checked.
Yes, actually, we did have a Chernobyl, or are the Soviets suddenly not human? For that matter, Chernobyl reinforces my point: the Soviets practically dared that reactor into meltdown by purposely ignoring all their safeguards and safety procedures in a series of tests that resulted in, SURPRISE!, a meltdown.
I don't see how we get to Skynet-level retardation like you're predicting unless we forgo all manner of standard procedures.
It's a good thing I have not once said there would be some Skynet-like event, then, huh? Since what I said was that we would practically hand it over to them. How about you actually argue against the position I've advanced instead of hammering on the "OMG machines kill us all" scenario that I said won't happen?
Why is this so hard to believe? Humanity has a long, dark history of hating what is new or different. Skin color, nationality, Religion or Creed, Gender, Orientation, you name it, we've killed over it, and we still do today
Except for the fact that we've agreed that it is wrong. Except for the fact that, despite groups of backbirths, we are slowly and surely moving past that. Seriously, considering that the movement of society is away from this, this is like claiming we'll fight the next war with sticks because we have a long history of doing so. It ignores recent (and not so recent) events and the general momentum of civilization itself.
Robots that need to be maintained...by humans
Because current robots are too dumb to do it for themselves... Not a problem with an AI, or an AI-run infrastructure.
That were designed...by humans.
And this won't change why? AIs would be better at designing machines, and there is no reason to assume they wouldn't be.
That have parts custom-tooled...by humans. Which come from raw materials collected...by humans. All powered by plants run, maintained, and supplied....by humans.
All of this happens because the machines in question aren't smart enough or cheap enough to do it for themselves. AIs change that dramatically. To assume somehow that our economy doesn't change with this massive move forward is frankly silly. It is also silly to assume that as our ability to create machines advances, we won't turn over more messy and boring jobs to them.
You severely overestimate how automated our society has become, and underestimated just how much we'd have to give up to become wholly dependent on our own creations. A few car factories is proof that maybe an AI could run a car factory, little more.
This would work great if I weren't talking about a society in the future which, with the advent of AI, will only become more automated, not less. Seriously, Hotfoot, why the assumption that everything will stay the same? Things change. Sometimes dramatically, sometimes slowly. A society with AIs will get more automated unless it purposely decides not to, and that is unlikely in a capitalist society, don't you think?
You blur the line between "tool" and "tool-user" all too easily, and you forget one very important thing: All you need to keep a human being going is some plant matter, our bodies take care of the rest. Where's that degree of independence and automation in ANY robot? Frankly, it doesn't exist, and if we left these creations to their own devices, eventually they would break down and never work again without serious outside influences.
Because something smarter than a human being could never figure out how to build and maintain the tools needed to sustain itself. Seriously, Hotfoot, who is being ridiculous right now? It's not the guy whose handle starts with "f".
"it takes two sides to end a war but only one to start one. And those who do not have swords may still die upon them." Tolken
Hotfoot
Avatar of Confusion
Posts: 3769
Joined: Thu Aug 04, 2005 9:28 pm

#15

Post by Hotfoot »

frigidmagi wrote:I am well aware of the differences in networks, thank you. What you're not grasping is that these days you can access a fucking phone line from a computer; in fact, I pointed out to you that it could be done as early as the 80s. If there's a phone line in the building, guess what? It's not secure. And those military LANs you're chatting about? They were usually in places far away from external phone lines. A bit different from the places a corp would want an AI.
Okay, you understand networks on a basic level, but what you're not grasping is that there's no reason to put networking code or devices into any AI, especially not prototype first-generation ones. There is a fundamental change in the physical architecture of any AI system compared to a normal computer, and it's utterly stupid to try and fuck that up by adding extraneous hardware and software that requires a better understanding of the core processes of an AI system, which we wouldn't have in the first generation. It's like saying that you could hook up Babbage's system to the Internet, no problem, or ENIAC (though ENIAC would be easier by far). Forget about all the sci-fi where modern computers are adapted to AI use; that's bullshit and won't happen. You can't go from a glorified calculator to a human-like brain; it doesn't work.
So from "There is no reason for researchers to mistreat a monkey level computer" you're getting "IT IS PERFECTLY OKAY TO MISTREAT MONKEYS IN A LAB!!!1!" :roll:

Really, Hotfoot, maybe you should reread my posts, good sir, because I at no time advocate animal cruelty in any setting, and frankly, if I didn't know you as well as I do, I would assume you were attempting an ad hominem. I am going to assume you misunderstood me.

I am saying that your fictional monkey computer (yeah, it's not the best label, but it's what I've got; you started it) won't be mistreated in any research labs. There is no point, and our current system of researchers doesn't engage in pointless cruelty for the sake of evil. In other words, you're using energy better suited to worrying about Russians or something.

Now as to the known treatment of animals in a lab, that's a separate subject, but I will point out that there is always a damn point beyond "let's torture some mice who can't fight back." If we want to discuss that, I suggest a separate thread.
I'm simply saying that if you've got an AI of monkey-level capabilities, it's only reasonable to hold its use to the same standards as monkeys in a lab. There are no laws or regulations currently that manage that. You indicated that there's no reason to consider the AI for the same protection because the numbers were small and who cares if there's no greater impact on society.

The idea that there's nothing to be learned from mistreatment of a computer is inaccurate as well. We commonly put machines and computers through stress tests to see how long they will operate under poor conditions. Imagine such a stress test being done on a monkey. With machines, we often push them to the point of failure, but with lab monkeys, pushing them to the point of death is often seen as cruel. What's the difference? You can make a new AI? How does that justify the damage done to one?
Okay, that's a point; however, you're ignoring that AIs, while still fictional, are widely recognized as being different and therefore not covered under this process.
Ah, but AIs in fiction are commonly depicted as being logical progressions from normal computer systems (a fallacy to start with), and for every R2-D2, there's a Skynet, a reason to fear the technology. It's hardly so cheery as you would depict.
I find this funny, as one of the reasons given by Congress for banning human cloning was the worry that we would end up abusing said clones. Seeing as Congress, a body noted for a lack of moral compass, can worry about this in regard to clones, why are we blindly assuming that we'll be monsters towards the AIs?
AIs aren't human, and it's easier to abuse something that's less like you.
Yes, actually, we did have a Chernobyl, or are the Soviets suddenly not human? For that matter, Chernobyl reinforces my point: the Soviets practically dared that reactor into meltdown by purposely ignoring all their safeguards and safety procedures in a series of tests that resulted in, SURPRISE!, a meltdown.
The Soviets aren't the ones making progress in the AI field, we are. They are also a prime example of how to fuck up so badly that their government ended itself over time.
It's a good thing I have not once said there would be some Skynet-like event, then, huh? Since what I said was that we would practically hand it over to them. How about you actually argue against the position I've advanced instead of hammering on the "OMG machines kill us all" scenario that I said won't happen?
Actually, what you're predicting is worse than Skynet. Skynet was only given control of our military and nukes. Your model involves us gleefully handing over our entire infrastructure for virtually no reason at all, all while handily ignoring important issues like AI rights, company rights, etc. Are the AIs considered citizens? Property? What? You're flashing forward to 100 years after AI becomes a viable tech and worrying about what may happen then; I'm more concerned with the processes during and after the achievement of AI as a viable tech.

Consider for a moment that your legal case succeeds and AIs are given equal rights to humans: what then? How can any company justify manufacturing human-level AI from that point forward? I don't see how you're going to get AI automation up to the point where they are even remotely self-sufficient (and what about trade with other nations? Are we ignoring that little issue?). I mean really, what is the company supposed to do? Profit on the sales of a sentient being (slavery), or be forced into bankruptcy funding this ideal of a self-sufficient AI population? In order to get to the point where the AIs would be able to manage themselves, they almost certainly would have had to be forced into that position beforehand.
Except for the fact that we've agreed that it is wrong. Except for the fact that, despite groups of backbirths, we are slowly and surely moving past that. Seriously, considering that the movement of society is away from this, this is like claiming we'll fight the next war with sticks because we have a long history of doing so. It ignores recent (and not so recent) events and the general momentum of civilization itself.
We're making progress in most First World nations, but make no mistake that it is very prevalent elsewhere in the world. It may not always be the same thing that is hated, but there is hate. Face it, it's part of our social dynamic to hate the unknown, the other tribe. It's a very difficult part of ourselves to get rid of. I'm not going to say there hasn't been progress, and I'm grateful for what progress there has been, but there's always a chance of sliding back into old habits for individuals.

For the record, we still do have battles with sticks and rocks, they're called riots, and they're still relatively common.
Because current robots are too dumb to do it for themselves... Not a problem with an AI, or an AI-run infrastructure.
Which involves the total displacement of humans, something that will be resisted for a variety of reasons: There's no guarantee that the AI will be cheaper per hour, the humans won't want to give up their jobs, etc.
And this won't change why? AIs would be better at designing machines, and there is no reason to assume they wouldn't be.
I'm saying that if it does change, the change would be long and difficult, not as easy or sudden as you seem to be implying. Not saying that's what you're saying per se, but you are operating as though this is a foregone conclusion that this will just happen and that everyone would be cool with it.
All of this happens because the machines in question aren't smart enough or cheap enough to do it for themselves. AIs change that dramatically. To assume somehow that our economy doesn't change with this massive move forward is frankly silly. It is also silly to assume that as our ability to create machines advances, we won't turn over more messy and boring jobs to them.
But you're operating under two very different assumptions:
1. You assume that these AIs will be treated like machines, like the robots before them.
2. You assume that these AIs will be given human rights.

Can't have it both ways, so which is it going to be? Are we, out of the goodness of our hearts, going to create these AIs, then give them all they need to be self-sufficient? Make them citizens, or their own nation? Then we'll just give them these jobs vital to our infrastructure because they're better at them (supposedly) than humans.

Your model is missing a huge step between the development of a successful AI and their implementation into our society.
This would work great if I weren't talking about a society in the future which, with the advent of AI, will only become more automated, not less. Seriously, Hotfoot, why the assumption that everything will stay the same? Things change. Sometimes dramatically, sometimes slowly. A society with AIs will get more automated unless it purposely decides not to, and that is unlikely in a capitalist society, don't you think?
Things change, technology changes, how we interact with each other changes through that, but at our core, our basic behavior patterns remain the same, and very little outside of direct genetic tampering will change that. We can fight to overcome them, but they still are at our core.

At their beginning, AIs will be entirely beholden to us, and we'll be less than likely to turn over the reins until they are proven, which may take a long time, during which I foresee plenty of potential issues, in part because we are a capitalistic society, dangerously so at times. Need I remind you of how capitalistic greed has led to our current economic crisis, to say nothing of the exploitation of foreign children across the world?
Because something smarter than a human being could never figure out how to build and maintain the tools needed to sustain itself. Seriously, Hotfoot, who is being ridiculous right now? It's not the guy whose handle starts with "f".
It's a matter of cost and effort. It takes less to make and maintain a human than any realistic AI, so while a small community might be just fine for dealing with raising children, you're looking at the collective intellectual, industrial, and financial muscle of a small city to get one AI even started, much less keep it going.
Ace Pace
Antisemetical Semite
Posts: 2272
Joined: Wed Jun 08, 2005 10:28 am
Location: Cuddling with stress pills

#16

Post by Ace Pace »

Hotfoot wrote:Okay, you understand networks on a basic level, but what you're not grasping is that there's no reason to put networking code or devices into any AI, especially not prototype first-generation ones.
Wrong. Networking code is basically talking between computers. To lock an AI up in its own box is not very useful unless you have a huge data store, localised, that it can use.
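In the most literal sense that claim is easy to demonstrate; a few lines of Python already count as networking code (both endpoints live in one process here, purely for illustration):

[code]
# "Networking code is basically talking between computers":
# two endpoints exchanging bytes over a socket pair.
import socket

a, b = socket.socketpair()
a.sendall(b"query: status?")
print(b.recv(1024))          # b'query: status?'
b.sendall(b"status: ok")
print(a.recv(1024))          # b'status: ok'
a.close()
b.close()
[/code]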
There is a fundamental change in the physical architecture of any AI system compared to a normal computer, and it's utterly stupid to try and fuck that up by adding extraneous hardware and software that requires a better understanding of the core processes of an AI system, which we wouldn't have in the first generation.
Uh, no, there isn't. Code is code. Just because the code can self-modify (we can already do that), or talk, doesn't mean it isn't code. It might not happen in a PC box, but it'll happen on PC hardware. What is true is that it'll be very custom code and we won't add bits early on. But beyond that? It's not that complicated: plug it into a normal OS framework and let it run wild.
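The "we can already do that" aside is easy to show: ordinary Python can write, run, and rewrite its own code at runtime (a toy demonstration, not a claim about how a real AI would be built):

[code]
# A program that generates a function, runs it, then rewrites it.
source = "def answer():\n    return 41\n"
namespace = {}

exec(source, namespace)
print(namespace["answer"]())          # 41

source = source.replace("41", "42")   # "self-modification"
exec(source, namespace)
print(namespace["answer"]())          # 42
[/code]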
I'm simply saying that if you've got an AI of monkey-level capabilities, it's only reasonable to hold its use to the same standards as monkeys in a lab.
How do we judge these capabilities? Because of the lack of an easily usable interface (the idea that an AI will automatically be a Japanese-style human-shaped robot is silly), how do we examine intelligence?
You're flashing forward to 100 years after AI becomes a viable tech and worrying about what may happen then; I'm more concerned with the processes during and after the achievement of AI as a viable tech.
I think this is the crucial point. If society survives even one AI, we can talk 100 years ahead. A single badly designed AI can end many systems. A self-learning, self-aware AI is not limited by any ethics or moral regulations beyond those that are hard-coded into it. This, I think, is far scarier: what may a human-level AI do to us?
Grand Dolphin Conspiracy
The twin cub, the Cyborg dolphin wolf.

Dorsk 81: this is why I support the separation of Aces eyebrow's, something that ugly should never be joined

Mayabird:You see what this place does to us? It's like how Eskimos have their 16 names for snow. We have to precisely define what shafting we're receiving.

"Do we think Israel would be nuts enough to go back into Lebanon with Olmert still in power and calling the shots? They could hook Sharon up to a heart monitor and interpret the blips and bleeps as "yes" and "no" and do better than that, both strategically and emotionally."
Hotfoot
Avatar of Confusion
Posts: 3769
Joined: Thu Aug 04, 2005 9:28 pm

#17

Post by Hotfoot »

Ace Pace wrote:Wrong. Networking code is basically talking between computers. To lock an AI up in its own box is not very useful unless you have a huge data store, localised, that it can use.
You're talking about a fundamental shift in the hardware of the system, though. This isn't just some fancy new server you're playing with; it's a rough copy of a human brain, something which was not designed to be put on a network, much less handle the sorts of information that travel through it.

Think of it this way: when you design an AI (note: not a chatterbot, but TRUE AI), you focus on thought patterns, self-awareness, etc. first, and things like being able to hook up to data lines second. Even assuming there is a distributed model, it would be on an entirely isolated network, quite possibly with new code to properly send the appropriate data back and forth, not what we use today.

I mean really, there's a huge leap to say "well, it's a machine, thus it can get on a phone line and use the intarwebs". Hell, I cringed in BSG when Boomer slammed a cable into her arm to transfer information. I mean, really, that's how bad the brainbug is.
Uh, no there isn't. Code is code. Just because the code can self-modify (we can already do that), or talk, doesn't mean it stops being code. It might not happen in a PC box, but it'll happen on PC hardware. What is true is that it'll be very custom code, and we won't add bits early on. But beyond that? It's not that complicated: plug it into a normal OS framework and let it run wild.
Ace, I don't think you're getting this. Code is entirely dependent on the processors it's supposed to run on. You can't take code written for a qubit computer and run it on anything else, and with damn good reason: the processes are fundamentally different. That's the level of disparity we're talking about, not going from single- to quad-core binary systems. The human brain, which is the going model for AI, is not built like a current computer. The code from one WILL NOT WORK on the other.

Like I said, the HARDWARE is going to be fundamentally different, and as a result the software will have to be unlike anything we currently have.
How do we judge those capabilities? Given the lack of an easily usable interface (the idea that an AI will automatically be a Japanese-style human-shaped robot is silly), how do we examine intelligence?
Right now, the only method we have is the Turing Test, which is understandably subjective. The biggest problem is that we don't have a purely objective method of measuring sentience (the true test here) in HUMANS, much less machines.
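For what it's worth, the test itself is easy enough to write down; it's the judging that's subjective. A rough sketch in Python, where the judge, human and machine objects are entirely made up for illustration:

[code]
import random

def imitation_game(judge, human, machine, rounds=10):
    """Sketch of a Turing-style test: a judge chats blind with a human
    and a machine, then guesses which hidden label is the machine."""
    # Assign the hidden labels at random so the judge can't cheat.
    pair = [human, machine]
    random.shuffle(pair)
    labels = {"A": pair[0], "B": pair[1]}

    transcript = {"A": [], "B": []}
    for _ in range(rounds):
        question = judge.ask()
        for label, respondent in labels.items():
            transcript[label].append((question, respondent.reply(question)))

    guess = judge.identify_machine(transcript)  # returns "A" or "B"
    return labels[guess] is machine             # True if the machine was caught
[/code]

Every hard part is hidden inside identify_machine, which is exactly the problem.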
I think this is the crucial point. If society survives even one AI, we can talk about 100 years ahead. A single badly designed AI can end many systems. A self-learning, self-aware AI is not limited by any ethics or moral regulations beyond those that are hard-coded into it. That, I think, is far scarier: what might a human-level AI do to us?
Why is it that people afford AI this mystical air, as if it has power over us just by being created? One AI is entirely beholden to us: we supply its power, and without us, it dies. End of story. Nothing scary there at all, in my mind.
User avatar
Ace Pace
Antisemetical Semite
Posts: 2272
Joined: Wed Jun 08, 2005 10:28 am
19
Location: Cuddling with stress pills
Contact:

#18

Post by Ace Pace »

Hotfoot wrote:
Ace Pace wrote:Wrong. Networking code is basically computers talking to each other. Locking an AI up in its own box is not very useful unless it has a huge, localised data store to draw on.
You're talking about a fundamental shift in the hardware of the system, though. This isn't just some fancy new server you're playing with; it's a rough copy of a human brain, something which was not designed to be put on a network, much less to handle the sorts of information that travel through it.
Not really. In 2007 an IBM group ran supercomputer research that simulated brain-like environments, using stock CPUs. They were hooked up to the kind of grids that demand a different sort of thinking, but it was still consumer hardware.
Secondly, while the article talks about 'mind-like' machines, that is not the sole path to an AI. There's no rule that a radically different style of software has to show up as radically different hardware. A good example: OOP, functional programming, straight ASM and dynamic code all run on normal CPUs. They're completely different coding styles, and hell, most of them aren't even mutually comprehensible, yet the hardware doesn't care.
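A trivial illustration of that point (my own toy example, nothing from the article): the same computation written in an imperative style and in a functional style. Two different ways of thinking about the problem, the exact same silicon underneath.

[code]
from functools import reduce

# Imperative style: mutate an accumulator step by step.
def sum_of_squares_imperative(numbers):
    total = 0
    for n in numbers:
        total += n * n
    return total

# Functional style: no mutation, just a folded expression.
def sum_of_squares_functional(numbers):
    return reduce(lambda acc, n: acc + n * n, numbers, 0)

# Same answer, same CPU, radically different 'way of thinking'.
assert sum_of_squares_imperative(range(10)) == sum_of_squares_functional(range(10))
[/code]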
Think of it this way, when you design an AI (note: not a chatterbot, but TRUE AI), you focus on thought patterns, self awareness, etc first. Things like being able hook up to data lines second. Even assuming there is a distributed model, it would be on an entirely isolated network, quite possibly with new code to properly send the appropriate data back and forth, not what we use today.
Quite true, but that has nothing to do with my point: this is a software issue, not a hardware one. We'll probably need another programming language to describe it, but it'll get compiled down to normal hardware concepts (branches, registers, local memory, multiple concurrent threads/processes) rather than to anything conceptually different.
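To make that concrete, here's a toy sketch (mine, not IBM's) of a leaky integrate-and-fire neuron, about the simplest 'brain-like' unit there is, written with nothing but ordinary floats, a loop and a branch:

[code]
def simulate_neuron(inputs, threshold=1.0, leak=0.9):
    """Toy leaky integrate-and-fire neuron: nothing but ordinary
    variables, a loop and a branch. No exotic hardware concepts."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current   # integrate, with leak
        if potential >= threshold:               # fire and reset
            spikes.append(1)
            potential = 0.0
        else:
            spikes.append(0)
    return spikes

# A steady drip of input pushes it over threshold every few steps.
print(simulate_neuron([0.3] * 10))  # [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
[/code]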
From the actual IBM press release, here are the hardware directions actually being discussed:
IBM wrote: IBM’s proposal, “Cognitive Computing via Synaptronics and Supercomputing (C2S2),” outlines groundbreaking research over the next nine months in areas including synaptronics, material science, neuromorphic circuitry, supercomputing simulations and virtual environments.
The first term has no definition anywhere online; the next two are hardware, yes; but the last two are pure software.

Which is what I'm trying to drive at: hardware might bring improvements, but software is the real issue. Let's take an example. We can do all graphics in software, but it's horribly slow, so we have highly advanced custom hardware specifically to accelerate it. The same goes for AI, or any CS problem: we can solve it on general-purpose hardware, and dedicated hardware then buys several orders of magnitude of performance.
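You can see the same flavour of gap without even leaving software. A toy benchmark (mine; the exact numbers will vary wildly by machine) pitting a general-purpose interpreted loop against the same maths in NumPy's optimised kernels. Dedicated silicon just extends the same curve further:

[code]
import time
import numpy as np

n = 1_000_000
a = np.random.rand(n)
b = np.random.rand(n)

# General-purpose path: a plain interpreted loop.
start = time.perf_counter()
total = 0.0
for x, y in zip(a, b):
    total += x * y
slow = time.perf_counter() - start

# Specialised path: the same dot product in an optimised kernel.
start = time.perf_counter()
total_fast = float(np.dot(a, b))
fast = time.perf_counter() - start

print(f"loop: {slow:.3f}s  kernel: {fast:.5f}s  speedup: ~{slow / fast:.0f}x")
[/code]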
I mean really, it's a huge leap to say "well, it's a machine, thus it can get on a phone line and use the intarwebs". Hell, I cringed in BSG when Boomer slammed a cable into her arm to transfer information. I mean, really, that's how bad the brainbug is.
I bear no responsibility for stupid brainbugs. But an AI could be hooked up to the greater internet: not in pure data-dump form, but there's no reason it can't trawl the net. Hell, modern search engines are incredibly complicated algorithms that trawl the net and build semantic webs; by some measures they're more intelligent than some humans.
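And the trawling part really is just ordinary code. A toy breadth-first crawler (purely illustrative: no politeness delays, no robots.txt, don't point it at anything you care about) fits in a dozen lines:

[code]
import re
import urllib.request
from collections import deque

def crawl(start_url, max_pages=10):
    """Toy breadth-first crawler: fetch a page, harvest its links, repeat."""
    seen, queue = set(), deque([start_url])
    while queue and len(seen) < max_pages:
        url = queue.popleft()
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urllib.request.urlopen(url, timeout=5).read().decode("utf-8", "ignore")
        except Exception:
            continue  # dead link, timeout, binary content, whatever
        # Naive link extraction; a real engine parses and ranks properly.
        queue.extend(re.findall(r'href="(https?://[^"]+)"', html))
    return seen

print(crawl("http://example.com"))  # hypothetical starting point
[/code]

Everything that makes a real engine clever (parsing, ranking, the semantic-web bit) sits on top of a loop like that.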
Ace, I don't think you're getting this. Code is entirely dependent on the processors it's supposed to run on. You can't take code written for a qubit computer and run it on anything else, and with damn good reason: the processes are fundamentally different. That's the level of disparity we're talking about, not going from single- to quad-core binary systems. The human brain, which is the going model for AI, is not built like a current computer. The code from one WILL NOT WORK on the other.

Like I said, the HARDWARE is going to be fundamentally different, and as a result the software will have to be unlike anything we currently have.
I think I've adequately shown that processing neural impulses does not require new hardware. If I understand your reference right (qubits, as in quantum computers?), there's no logical requirement for them either: no part of the brain holds several states at the same time.
Right now, the only method we have is the Turing Test, which is understandably subjective. The biggest problem is that we don't have a purely objective method of measuring sentience (the true test here) in HUMANS, much less machines.
Maybe a test of whether an AI recognises that it's in a machine, and that 'human' interactions run on an entirely different substrate ('wetware', I think, is the term)?

Why is it that people afford AI this mystical air, as if it has power over us just by being created? One AI is entirely beholden to us: we supply its power, and without us, it dies. End of story. Nothing scary there at all, in my mind.
Why shouldn't we define a self-aware AI as a singularity point? I'll take back the 'danger' aspect, but any self-aware AI hooked up to the internet (any internet) will be a massive headache to kill. It's true that while it's confined to a single system it's only a power switch away, but that's just castrating the AI. This is a giant tangent from my original nitpick, though; if we want to debate AI dangers, we'd better just invite Starglider.
Last edited by Ace Pace on Wed Nov 26, 2008 2:30 pm, edited 1 time in total.