frigidmagi wrote:I am well aware of the differences in networks, thank you. What you're not grasping is that these days you can access a fucking phone line from a computer; in fact, I pointed out to you that it could be done as early as the 80s. If there's a phone line in the building, guess what? It's not secure. And those military LANs you're chatting about? Were usually in places far away from external phone lines. A big difference from the places a corp would want an AI.
Okay, you understand networks on a basic level, but what you're not grasping is that there's no reason to put networking code or devices into any AI, especially not prototype first-generation ones. There is a fundamental change in the physical architecture of any AI system compared to a normal computer, and it's utterly stupid to try and fuck that up by adding extraneous hardware and software that requires a better understanding of the core processes of an AI system than we would have in the first generation. It's like saying you could hook up Babbage's system to the Internet, no problem, or ENIAC (though ENIAC would be easier by far). Forget about all the sci-fi where modern computers are adapted to AI use; that's bullshit and won't happen. You can't go from a glorified calculator to a human-like brain. It doesn't work.
So from "There is no reason for researchers to mistreat a monkey level computer" you're getting "IT IS PERFECTLY OKAY TO MISTREAT MONKEYS IN A LAB!!!1!"
Really, Hotfoot, maybe you should reread my posts, good sir, because at no time do I advocate animal cruelty in any setting, and frankly, if I didn't know you as well as I do, I would assume you were attempting an ad hominem. I am going to assume you misunderstood me.
I am saying that your fictional monkey computer (yeah, it's not the best label, but it's what I've got; you started it) won't be mistreated in any research labs. There is no point, and our current research establishment doesn't engage in pointless cruelty for the sake of evil. In other words, you're using energy better suited to worrying about Russians or something.
Now as to the known treatment of animals in a lab, that's a separate subject, but I will point out that there is always a damn point beyond "let's torture some mice who can't fight back." If we want to discuss that, I suggest a separate thread.
I'm simply saying that if you've got an AI of monkey-level capabilities, it's only reasonable to hold its use to the same standards as monkeys in a lab. There are no laws or regulations currently that cover that. You indicated that there's no reason to consider the AI for the same protection because the numbers would be small, and who cares if there's no greater impact on society.
The idea that there's nothing to be learned from the mistreatment of a computer is inaccurate as well. We commonly put machines and computers through stress tests to see how long they will operate under poor conditions. Imagine such a stress test being done on a monkey. With machines, we often push them to the point of failure, but with lab monkeys, pushing them to the point of death is often seen as cruel. What's the difference? That you can make a new AI? How does that justify the damage done to one?
Okay, that's a point; however, you're ignoring that AIs, while still fictional, are widely recognized as being different and therefore not covered under this process.
Ah, but AI in fiction are commonly depicted as logical progressions from normal computer systems (a fallacy to start with), and for every R2-D2, there's a Skynet, a reason to fear the technology. It's hardly as cheery as you depict.
I find this funny, as one of the reasons given by Congress for banning human cloning was the worry that we would end up abusing said clones. Seeing as Congress, a body noted for its lack of moral compass, can worry about this in regards to clones, why are we blindly assuming that we'll be monsters towards the AIs?
AIs aren't human, and it's easier to abuse something that's less like you.
Yes, actually, we did have a Chernobyl, or are the Soviets suddenly not human? For that matter, Chernobyl reinforces my point: the Soviets practically dared that reactor into meltdown by purposefully ignoring all their safeguards and safety procedures in a series of tests that resulted in, SURPRISE!, a meltdown.
The Soviets aren't the ones making progress in the AI field, we are. They are also a prime example of how to fuck up so badly that their government ended itself over time.
It's a good thing I have not once said there would be some Skynet-like event then, huh? Since what I said was that we would practically hand it over to them. How about you actually argue against the position I've advanced instead of hammering on the "OMG machines kill us all" scenario that I said won't happen?
Actually, what you're predicting is worse than Skynet. Skynet was only given control of our military and nukes. Your model involves us gleefully handing over our entire infrastructure for virtually no reason at all, all while handily ignoring important issues like AI rights, company rights, etc. Are the AIs considered citizens? Property? What? You're flashing forward to 100 years after AI becomes a viable tech and worrying about what may happen then; I'm more concerned with the processes during and after the achievement of AI as a viable tech.
Consider for a moment that your legal case succeeds and AI are given equal rights to humans. What then? How can any company justify manufacturing human-level AI from that point forward? I don't see how you're going to get AI automation up to the point where they are even remotely self-sufficient (and what about trade with other nations, are we ignoring that little issue?). I mean really, what is the company supposed to do? Profit from the sale of sentient beings (slavery), or be forced into bankruptcy funding this ideal of a self-sufficient AI population? In order to get to the point where the AIs would be able to manage themselves, they almost certainly would have had to be forced into that position beforehand.
Except for the fact that we've agreed that it is wrong. Except for the fact that, despite groups of backbirths, we are slowly and surely moving past that. Seriously, considering that the movement of society is away from this, this is like claiming we'll fight the next war with sticks because we have a long history of doing so. It ignores recent (and not so recent) events and the general momentum of civilization itself.
We're making progress in most First World nations, but make no mistake that it is very prevalent elsewhere in the world. It may not always be the same thing that is hated, but there is hate. Face it, it's part of our social dynamic to hate the unknown, the other tribe. It's a very difficult part of ourselves to get rid of. I'm not going to say there hasn't been progress, and I'm grateful for what progress there has been, but there's always a chance of sliding back into old habits for individuals.
For the record, we still do have battles with sticks and rocks, they're called riots, and they're still relatively common.
Because current robots are too dumb to do it for themselves... Not a problem with an AI, or an AI-run infrastructure.
Which involves the total displacement of humans, something that will be resisted for a variety of reasons: there's no guarantee that the AI will be cheaper per hour, the humans won't want to give up their jobs, etc.
And this won't change why? AIs would be better at designing machines, and there is no reason to assume they wouldn't be.
I'm saying that if it does change, the change will be long and difficult, not as easy or sudden as you seem to be implying. Not saying that's what you're saying per se, but you are operating as though it's a foregone conclusion that this will just happen and that everyone will be cool with it.
All of this happens because the machines in question aren't smart enough or cheap enough to do it for themselves. AIs change that dramatically. To assume that our economy somehow doesn't change with this massive move forward is frankly silly. It is also silly to assume that, as our ability to create machines advances, we won't turn over more messy and boring jobs to them.
But you're operating under two very different assumptions:
1. You assume that these AIs will be treated like machines, like the robots before them.
2. You assume that these AIs will be given human rights.
Can't have it both ways, so which is it going to be? Are we, out of the goodness of our hearts, going to create these AIs, then give them all they need to be self-sufficient? Make them citizens, or their own nation? Then we'll just give them these jobs vital to our infrastructure because they're better at them (supposedly) than humans.
Your model is missing a huge step between the development of a successful AI and their implementation into our society.
This would work great if I weren't talking about a future society which, with the advent of AI, will only become more automated, not less. Seriously, Hotfoot, why the assumption that everything will stay the same? Things change. Sometimes dramatically, sometimes slowly. A society with AIs will get more automated unless it purposefully decides not to, and that's unlikely in a capitalist society, don't you think?
Things change, technology changes, how we interact with each other changes through that, but at our core, our basic behavior patterns remain the same, and very little outside of direct genetic tampering will change that. We can fight to overcome those patterns, but they're still there.
At their beginning, AI will be entirely beholden to us, and we'll be less than likely to turn over the reins until they are proven, which may take a long time, during which I foresee plenty of potential issues, in part because we are a capitalistic society, dangerously so at times. Need I remind you of how capitalistic greed has led to our current economic crisis, to say nothing of the exploitation of foreign children across the world?
Because something smarter than a human being could never figure out how to build and maintain the tools needed to sustain itself. Seriously, Hotfoot, who is being ridiculous right now? It's not the guy whose handle starts with "f".
It's a matter of cost and effort. It takes less to make and maintain a human than any realistic AI, so while a small community might be just fine for dealing with raising children, you're looking at the collective intellectual, industrial, and financial muscle of a small city to get one AI even started, much less keep it going.