Androids Versus Humans


Neon Genesis

What if, sometime far off in the future, human technology became so advanced that we were able to create androids so similar to humans you could barely tell them apart from the real thing? If there is no soul or anything supernatural that separates humans from animals, what would make a soulless human born by natural means different from a soulless android nearly identical to that human, except that it was created artificially? If there is no soul to distinguish humans, would androids count as humans? What if these androids became so advanced and perfect that they were able to create their own androids without human guidance? If natural selection is all about survival of the fittest, and androids became so advanced they could build other androids without needing humans, would humans eventually go extinct while the androids continued to evolve past their creators on their own? If humans are able to create such advanced androids, and there is no soul that makes humans different from androids, is it inevitable that the human race will be replaced by androids, and that androids will become the new, evolved, superior humans? Or am I just too obsessed with cyberpunk sci-fi and out of my mind here?


Neon,

 

I would not say that natural selection is all about survival of the fittest. This seems to be an assumption of Social Darwinism, rooted in the notion that competition is the only way in which species adapt. If perfection (whatever that is) is the standard, then your androids might learn that cooperation is more effective in the long run.

 

Concerning the notion that there is a sharp distinction between humans and animals: this I do not believe. I think it is an assumption inherited from traditional Christianity that is no longer useful.

 

Myron


For artificial intelligence to be counted as truly sentient, it would have to be able to feel and to find meaning in things (the ability to experience and to find oneself implicated by what it experiences). These are not abilities that can merely be built out of hardware, even if the behavior of androids mimicked our behavior to some arbitrarily high degree.

 

I don't think it is impossible for sentience to arise. The thing is, when it does arise (as it does in us), it could not be any form of "artificial intelligence," but just plain intelligence. Ironically, even if we somehow managed to build a creature that turned out to be sentient (as opposed to just behaving as if it were), we still wouldn't understand why this was so. The construction of a sentient being would have to be an accident, not something based on any pre-existing theory; therefore, its sentience would probably be just as much a mystery as our own.


Very interesting, Mike,

 

The challenge of building sense organs for sight, touch, smell, and so on, and integrating them into a computer brain, would be quite complex indeed, but certainly not impossible. The programming would, in my view, be of such a scope that it would take decades to even come close to mimicking humans. As you say, for sentience to arise of itself doesn't seem possible with hardware alone; software could do as much, but in my estimation and software experience (computer programmer here: FORTRAN, C, Basic, and machine language), we would have to understand our own consciousness in extraordinary detail to even begin to write such a program.

 

Joseph


Joseph,

 

If I understand Neon's scenario correctly, the androids would have to understand the symbols their programs manipulate and then take over control of their own programming. In other words, they would have to be conscious, and if their consciousness is anything like human consciousness, they would then be vulnerable to errors of symbolic reference.

 

Myron


Myron,

Yes, I was responding to Mike's input, not Neon's OP. But to take control of their own programming, consciousness would of course have to be programmed into them, since even their emotions and reactions to events would require software programming at this point. We just don't yet have the technology and understanding to hardwire all the genetic programming without software doing most of the job. At least I think so, but I forget that Neon was talking about the far future, so I guess anything is possible and I am speaking from an insufficient comprehension of the OP and from old computer technology. It's a sign of old age. :lol:

Joseph

 

PS Neon, perhaps I can easily be replaced by an android in the future, but it hurts my head to look more than a few months ahead at a time. :D So I don't usually think on such things.


For artificial intelligence to be counted as truly sentient, it would have to be able to feel and to find meaning in things (the ability to experience and to find oneself implicated by what it experiences). These are not abilities that can merely be built out of hardware, even if the behavior of androids mimicked our behavior to some arbitrarily high degree.

What if we created androids who could simulate feelings and emotions based on the software they were running? What would then be the difference between androids being programmed to "feel" emotions by software and humans being "programmed" by genetics?

Hi Neongenesis,

 

What if we created androids who could simulate feelings and emotions based on the software they were running? What would then be the difference between androids being programmed to "feel" emotions by software and humans being "programmed" by genetics?

 

I think the difference would, again, be actual feeling and meaning. To my knowledge, no one has ever come close, even theoretically/conceptually, to "programming" feeling or meaning. Computers are in principle mindless, no matter how anthropomorphically we talk about them. Whatever sentience is, it is not reducible to the principles of computation.

 

There tends to be a lot of talk about the hardware/software distinction, but I at least don't really understand how that distinction makes much of a difference. "Software" is a set of instructions (programmed operations) taking place in the hardware; it is not something other than the hardware itself. Fundamentally, there is nothing more going on in a computer than what goes on in the flip of a light switch, or in any machine, even a train or a car. It's all engineering from the bottom up. I think the ghost has yet to be exorcised from the machine...

 

Peace,

Mike


Well well, Mike, I think you will find that software is capable of whatever the mind can imagine. I can write a program now to manipulate a mechanical hand, grab an object, and sense the pressure it is applying. I can also write a program that connects to a material and measures the pressure you apply to it. I can write a program to apply meaning to that pressure. It starts to get complicated, but analog and digital work together both in the body and in robotics. While software uses the hardware's memory, it is capable of manipulating analog hardware and peripherals to do anything you can imagine, as long as the peripherals exist with a digital connection to memory and the software is sophisticated enough to interpret the data. Well, that's my experience, except that I don't know enough about the human mind and thinking to write such a complex program, or whether the memory and hardware required would be technologically feasible in anything smaller than a building-sized android at this time. But I would imagine it possible in the future.
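To make the pressure example concrete, here is a minimal sketch in C of the kind of loop I mean. The read_pressure_adc() routine is a made-up stand-in for whatever analog-to-digital peripheral a real hand would use, and the thresholds are invented for illustration:

#include <stdio.h>

/* Made-up peripheral read: returns a pressure value in raw ADC units.
   A real system would poll a memory-mapped analog-to-digital converter
   wired to the hand's pressure sensor. */
static int read_pressure_adc(void)
{
    return 512; /* placeholder reading for illustration */
}

/* The "meaning" the programmer assigns to raw pressure readings. */
static const char *interpret_pressure(int adc)
{
    if (adc < 100) return "no contact";
    if (adc < 600) return "gentle grip";
    if (adc < 900) return "firm grip";
    return "crushing - release!";
}

int main(void)
{
    int adc = read_pressure_adc();
    printf("ADC=%d -> %s\n", adc, interpret_pressure(adc));
    return 0;
}

Everything the program "knows" about pressure lives in those thresholds; scale that up far enough and you have the interpretation layer I am describing.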

 

Joseph


Joseph,

 

Right now I am imagining a purple elephant in a pink tutu, wearing dark-rimmed glasses, giving a lecture on intentionality to graduate students at USC. I am aware that it is imagination. How would an android know the difference?

 

Myron


Hi Joseph,

 

I certainly do not doubt that computers may well be able to perform many functions that humans can do, or even functions we can't. But I think it's very easy, given the universality of computation as a physical process, to equivocate between function/behavior and first-hand experience. That a computer can respond to pressure (as described in physics) is not the same as its experiencing pressure (as a conscious, qualitative feel). If computers can feel such things, it would mean to me not that feeling is programmable, but that feeling is in the very nature of reality itself (which may very well be the case). The same is the case with meaning. 'Interpretation' for computers is really engineering. Meaningful, intentional states are not at all what's going on inside a computer when it 'interprets' its data according to a formal symbolic logic or rigid syntax. Computers don't pause to think 'this hurts' or 'what is the meaning of my existence?'

 

 

Peace,

Mike


Joseph,

 

Right now I am imagining a purple elephant in a pink tutu, wearing dark-rimmed glasses, giving a lecture on intentionality to graduate students at USC. I am aware that it is imagination. How would an android know the difference?

 

Myron

A computer can do anything the mind can do. It takes a lot of IF, AND, OR, compare, and branch instructions, but everything can be digitized. Go to the links I referenced above, where IBM is doing similar work with advanced hardware. All I know is that when I was a programmer, it was possible, with even very simple machines and a lot of software code, to teach a computer to learn how to play chess. Now, I agree that assigning meaning is more difficult, but if I could do it with my mind and knew how my mind works (which I don't), I could program a computer to work similarly.
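As a taste of what all those compares and branches look like in a chess program, here is a toy C sketch of the game-tree search involved. Position, legal_moves(), and evaluate() are made-up stand-ins for a real engine's board representation, move generator, and scoring:

#include <stdio.h>
#include <limits.h>

typedef int Position; /* toy stand-in for a chess board state */

/* Made-up evaluation: a real engine would score material, mobility, etc. */
static int evaluate(Position p)
{
    return p;
}

/* Made-up move generator: a real engine would produce legal chess moves. */
static int legal_moves(Position p, Position out[])
{
    out[0] = p + 1;
    out[1] = p - 1;
    return 2;
}

/* Minimax search: compare-and-branch logic that looks ahead and picks
   the line of play with the best guaranteed score. */
static int minimax(Position p, int depth, int maximizing)
{
    if (depth == 0)
        return evaluate(p);

    Position moves[64];
    int n = legal_moves(p, moves);
    int best = maximizing ? INT_MIN : INT_MAX;
    for (int i = 0; i < n; i++) {
        int score = minimax(moves[i], depth - 1, !maximizing);
        if (maximizing ? score > best : score < best)
            best = score;
    }
    return best;
}

int main(void)
{
    printf("best score looking 4 moves ahead: %d\n", minimax(0, 4, 1));
    return 0;
}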


Hi Joseph,

 

I certainly do not doubt that computers may well be able to perform many functions that humans can do, or even functions we can't. But I think it's very easy, given the universality of computation as a physical process, to equivocate between function/behavior and first-hand experience. That a computer can respond to pressure (as described in physics) is not the same as its experiencing pressure (as a conscious, qualitative feel). If computers can feel such things, it would mean to me not that feeling is programmable, but that feeling is in the very nature of reality itself (which may very well be the case). The same is the case with meaning. 'Interpretation' for computers is really engineering. Meaningful, intentional states are not at all what's going on inside a computer when it 'interprets' its data according to a formal symbolic logic or rigid syntax. Computers don't pause to think 'this hurts' or 'what is the meaning of my existence?'

 

 

Peace,

Mike

 

Computers can be programmed to learn and to apply meaning that the programmer assigns, whether simple or complex. It is then a small step to give a computer some rules and let it learn new things for itself. It has to be programmed on what data to process, how to process it, and how to apply meaning, but with some genetic-type programming it may not come to the same conclusions as the original programmer: it can learn and apply meaning in accordance with its programming as modified by its experience. Check out the link in my other posts and see what IBM is doing. Perhaps a long way to go, but not, in my view, impossible.
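As a crude sketch of what I mean by programming modified by experience, here is a toy C example in which each rule carries a weight the machine adjusts after seeing how that rule worked out. The rule names and the size of the adjustment are invented for illustration:

#include <stdio.h>

#define NRULES 3

/* A small rule base with weights the machine tunes from experience. */
static const char *rule_name[NRULES] = { "grip harder", "grip softer", "release" };
static double weight[NRULES] = { 1.0, 1.0, 1.0 };

/* Simple reinforcement: strengthen a rule that led to success,
   weaken one that led to failure. */
static void learn(int rule, int success)
{
    weight[rule] += success ? 0.1 : -0.1;
    if (weight[rule] < 0.0)
        weight[rule] = 0.0;
}

int main(void)
{
    learn(0, 1); /* "grip harder" worked this time */
    learn(2, 0); /* "release" did not */
    for (int i = 0; i < NRULES; i++)
        printf("%-12s weight = %.2f\n", rule_name[i], weight[i]);
    return 0;
}

Two machines with the same rules but different histories end up with different weights, which is the sense in which a machine's conclusions can diverge from its programmer's.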


A computer can do anything the mind can do. It takes a lot of IF, AND, OR, compare, and branch instructions, but everything can be digitized. Go to the links I referenced above, where IBM is doing similar work with advanced hardware. All I know is that when I was a programmer, it was possible, with even very simple machines and a lot of software code, to teach a computer to learn how to play chess. Now, I agree that assigning meaning is more difficult, but if I could do it with my mind and knew how my mind works (which I don't), I could program a computer to work similarly.

 

Yes, but can a computer become depressed concerning its past mistakes and anxious about future mistakes?

 

Myron


 

Yes, but can a computer become depressed concerning its past mistakes and anxious about future mistakes?

 

Myron

Only if it is programmed to analyse its behavior in accordance with its programming and respond accordingly. It can be programmed to be depressed, or to act like a depressed person and slow down its processing rate, but it would be to the computer's advantage not to get depressed. Being anxious or depressed is a trait that could be avoided in computers, but if you wanted it, you could program it in. But why would you want to?
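If for some reason you did want it, a toy version might look like this C sketch, where the machine tallies its past errors and throttles its own work rate accordingly. The counts and the scaling are invented for illustration:

#include <stdio.h>

static int past_errors = 0;

/* "Depressed" behavior programmed in: the more past mistakes the machine
   has tallied, the fewer tasks it lets itself attempt per cycle. */
static int tasks_per_cycle(void)
{
    int rate = 100 - 10 * past_errors;
    return rate > 10 ? rate : 10; /* it never quite stops working */
}

int main(void)
{
    for (int cycle = 0; cycle < 5; cycle++) {
        printf("cycle %d: %d tasks\n", cycle, tasks_per_cycle());
        past_errors++; /* another regret accumulates */
    }
    return 0;
}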



Hi Joseph,

 

I think the problem here is that we're using two different meanings of the words 'meaning' and 'experience' as if they were the same. There's a difference between, say, programming a robot to process patterns of information about wavelengths of light, and programming a robot to subjectively experience the beauty and inexpressible meaning of a sunset. Sure, you can program a computer to mindlessly spit out this output when it registers on its machinery a certain pattern corresponding to a sunset: "My, how moving! It makes me contemplate the cosmos and wonder if there's more to my life than meets the eye." But the computer will not be thinking that, feeling that, or finding any meaning whatsoever in that pattern. The meaning is there only at the other end, when we find something meaningful in the output.

 

I would think that we have an ontological leap to make before we can create a mind. That is, we have to make a transition from mechanism to intentional states, and to my knowledge no one has ever given even a theoretical account of how such a feat would be possible. I don't think 'meaning' is what computers work with, but quantitative (measurable, external, objective) values. In other words, it is all mechanical. Likewise, they do not experience anything; they respond mechanically according to quantitative processes. There is no theoretical point at which a computer acquires meaning or experience. A computer can 'call it' damage if it is programmed to do so while avoiding some objectively measurable force. But at no point does it have the subjective experience of saying 'I want to avoid pain.' At no point is it in an internal intentional state of not wanting to get hurt.

 

Peace,

Mike


Only if it is programmed to analyse its behavior in accordance with its programming and respond accordingly. It can be programmed to be depressed, or to act like a depressed person and slow down its processing rate, but it would be to the computer's advantage not to get depressed. Being anxious or depressed is a trait that could be avoided in computers, but if you wanted it, you could program it in. But why would you want to?


 

But the evidence is that depressed people see reality more accurately. That is why they are depressed.

 

Myron

