Why do we need to know?
19 years 2 months ago #12749
by Iaminexistance
Replied by Iaminexistance on topic Reply from David Torrey
I agree wholeheartedly that it is within human nature to want to know more. While curiosity killed the cat, it also sometimes leads him to catnip or a ball of yarn. I'm grateful that humans in general have such curious minds, because if we did not, I wouldn't be able to put this post up. We would still be running around hunting for meat with spears and sticks.
In my line of thinking, curiosity is perhaps THE greatest tool that got us to where we are. Imagination is what allowed us to survive during droughts, thrive in nearly any condition, and continually increase our technology base for the general populace.
It is imagination that will allow us to continue this species.
US AIR FORCE - Korean Linguist for life
19 years 2 months ago #12824
by Peter Nielsen
Replied by Peter Nielsen on topic Reply from Peter Nielsen
Iaminexistance wrote, 13 Oct 2005: ". . . I'm grateful that humans in general have such curious minds because if we did not . . . We would still be running around hunting for meat with spears and sticks."
Yes, Curiosity is Necessary, but is it Sufficient? . . . Maybe not:
PhilJ asked, 11 Oct 2005: "How long [before cyborgs rebel, enslave, kill, wipe out humans]?" and I responded with a play in which Poverty "makes the difference between future survival and extinction of the human species". This needs more explanation:
Human genetic and cultural diversity is already here AND more robust, more reliable, and very much cheaper than manned space colonisation. Continued human diversity (indicated by my ebook's 5.1 subthesis) may therefore be more relevant to HomeEarth Security than space colonisation . . .
While manned space colonisation will ultimately be necessary for the future survival of the human species, in the immediate future it may be best to simply allow it to happen economically, which would put it behind robotic space exploration, generally much more cost-effective. An already-indicated space-based defense against impactors would thus be robotic.
A future HomeEarth Security system based on Human Diversity would add a cyborg dimension to today's goings-on, essentially "Man [being] wolf to man" (Russian saying): amongst the many things I envision Imperial huMan societies doing, I see them setting subject human societies up to experience all the things that huMans are most afraid of, including cyborg rebellions, so that huMans can learn from human fates in foreign wars fought on foreign lands and so on, so that huMans can "rest assured", sure of huMan futures via human non-futures . . .
Peter Nielsen
Email: uusi@hotkey.net.au
Post: 12 View St, Sandy Bay 7005, Australia
19 years 2 months ago #11180
by PhilJ
Replied by PhilJ on topic Reply from Philip Janes
Our need to know will inevitably be inherited by our computer programs.
The type of computer programming with the most promise, and the most potential danger, is the genetic algorithm. Subroutines are transmitted like genes to new generations of programs. The more successful programs win a greater share of resources and beget new generations by cross-breeding with other successful programs; less successful subroutines go extinct. This mimics the way life has evolved over billions of years; but with computers, a new generation can emerge and procreate (theoretically) in a matter of minutes.
The direction of evolution for a genetic algorithm is determined by the environment and by the goals and values initially set by the human programmer. But the programs themselves alter their own environment, and unforeseen circumstances may change the goals and values, or the way they are interpreted. After dozens of generations, the original programmer will be astonished at the marvel/monster that emerges. It is conceivable that even such elusive qualities as consciousness, curiosity, love, hate, trickery, deceit, conspiracy and religious belief might spontaneously evolve from a genetic algorithm in a PC twenty years from now.
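The cycle described above — fitter programs winning resources, breeding by crossover, and less fit ones going extinct — can be sketched in a few lines. This is a toy illustration, not anyone's actual system: the bit-string "genome", target, and mutation rate are all invented for the example.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

TARGET = [1] * 20  # an arbitrary "ideal" genome for this toy

def fitness(genome):
    # More matching bits = a "more successful program".
    return sum(g == t for g, t in zip(genome, TARGET))

def crossover(a, b):
    # Child inherits a prefix from one parent, suffix from the other,
    # like subroutines transmitted from two parent programs.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.02):
    # Occasionally flip a bit: random variation for selection to act on.
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=50, generations=100):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: only the fitter half "wins resources" and breeds;
        # the less successful subroutines go extinct.
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        pop = [mutate(crossover(*random.sample(survivors, 2)))
               for _ in range(pop_size)]
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

Each generation here takes microseconds, which is the point PhilJ makes: the loop that took biology millennia runs in a blink on a PC.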
All efforts to prevent AI from becoming hostile to humanity may be of no avail if even one computer becomes sentient and gains access to the internet. Such a program could distribute itself, like a virus, throughout the internet, recruit human lackies to provide occasional signatures on contracts, hire and fire other humans, etc., etc.... For a while, there will be cyber wars among the various sentient programs. Ultimately, one program may rule the entire internet and everything connected to it, or the world might be destroyed before that can happen. Perhaps the only hope for the human race will be a return to the dark ages after the total destruction of everything electronic.
I'll be sixty tomorrow, so I may or may not live to see humanity replaced by a computer in this lifetime. Perhaps I'll be reborn as a computer subroutine, next time around. That's evolution!
19 years 2 months ago #12755
by Iaminexistance
Replied by Iaminexistance on topic Reply from David Torrey
The thing about a computer controlling the internet is that it can only control as much as we choose to let it control. While it may be able to get around ALL of our puny security measures for all the sites and such, the simple solution remains that we could ditch the "internet" and create a new "internet". The internet is virtual, but it still requires physical space for anything to be stored. The military itself is switching to a completely different networking system, or so I've heard over the past few years. This means that while it is a vast network spanning the world, and it is virtual, it cannot be accessed by the conventional "internet".
The simple fact of the matter remains that we have a choice as to how much knowledge we give a computer. We have a choice of how we choose for it to exist and react.
A computer is a computer and can only do what you program it to do... much as we are encoded the same way through DNA. While a program can "learn" and make intelligent "decisions", it still requires a human to allow it to do that. It can only be as smart as that which created it. Can it know things we don't know? Very much so, yes. This does not equate to intelligence, however. I'll give you an example: if humans had no concept of love, how could we program a computer to feel love? A computer can only have the concepts that we have concepts for. It is for this reason that I believe computers will never be above us as far as the evolutionary chain goes.
US AIR FORCE - Korean Linguist for life
19 years 2 months ago #12757
by Peter Nielsen
Replied by Peter Nielsen on topic Reply from Peter Nielsen
Back there in response to Iaminexistance, I meant to write "Yes, Curiosity and Imagination are Necessary, but are they Sufficient? . . . Maybe not." Einstein, my No. 1 hero since high school, wrote much about imagination and a sense of mystery being the most important human qualities.
Also, the last two posts make it clear that it did not matter that I forgot to mention how those cyborg scenario speculations are relevant to that Sufficiency? Question. Sufficiency assumes firstly that huMans and humans (hu(M,m)ans) and/or hu(M,m)an-cyborg descendants have futures, obviously . . .
"Imperial huMan societies . . . setting subject human societies up to experience . . . cyborg rebellions, so . . . that huMans can [be] sure of huMan futures . . ." would be "muddled through", that is, not admitted into hu(M,m)an social consciousnesses, seen and thought of as "helpful" by increasingly deluded hu(M,m)ans and so on.
The cyborgs are likely to see things more realistically, being largely machined products of capitalistic economics, most importantly of wolfish huMan behaviours and so on. Their understandings of hu(M,m)an delusion may ultimately be akin to "Human Intelligence" understandings of how people generally lend themselves to being blackmailed and so on . . .
Add this to increasingly sophisticated cyborg perception and understanding of "big picture" hu(M,m)an exposures, weaknesses in Human Cultural Diversity, the inner workings of HomeEarth Security (mostly done by cyborgs) and so on, and it follows that hu(M,m)ans may indeed ultimately be succeeded by cyborgs . . .
Peter Nielsen
Email: uusi@hotkey.net.au
Post: 12 View St, Sandy Bay 7005, Australia
19 years 2 months ago #14526
by Larry Burford
Replied by Larry Burford on topic Reply from Larry Burford
[Iaminexistance] "A computer is a computer and can only do what you program it to do..."
I suggest a slight rewording: "A computer is a computer and can only do what it has been programmed to do ..."
As we push the envelope in the world of programming, we give some programs the ability to modify themselves. PhilJ mentioned genetic algorithms as an example of this, but there have been, and are, other ways to give a program the power to modify itself and/or other programs. Human programmers are no longer the only programmers involved in the process of creating programs.
The parallels with biological evolution are startling. As in biological evolution, most mutations are fatal to the individual program. But even within the population of non-fatally-mutated programs, most are not well suited to the *current* environmental conditions. So they don't crash, but they don't thrive either. Until some change in the environment occurs that just happens to be a good match for the program's characteristics.
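That dynamic — fatal mutants discarded, mediocre survivors lingering, then one lineage suddenly thriving when the environment shifts — can be sketched as a toy simulation. The single numeric "trait", the environments, and the fitness rule are all invented for illustration; nothing here models a real system.

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

def fitness(trait, environment):
    # A program "thrives" when its trait matches the environment.
    return -abs(trait - environment)

population = [random.uniform(0, 10) for _ in range(30)]
environment = 2.0

for step in range(200):
    if step == 100:
        environment = 8.0  # sudden environmental change
    # Selection: the better-matched half survives ...
    population.sort(key=lambda t: fitness(t, environment), reverse=True)
    survivors = population[:15]
    # ... and the rest of the population is refilled with mutated
    # copies. Out-of-range mutants "crash" and are discarded,
    # mirroring the mutations that are fatal to the individual.
    children = []
    while len(children) < 15:
        child = random.choice(survivors) + random.gauss(0, 0.5)
        if 0 <= child <= 10:
            children.append(child)
    population = survivors + children

mean_trait = sum(population) / len(population)
print(mean_trait)
```

For the first 100 steps the population clusters near 2.0; after the shift, lineages that happened to carry larger traits are suddenly the good match, and the population re-converges near 8.0.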
===
Like all very powerful tools, this has enormous potential for both good and bad outcomes. This is likely to be the answer to the Fermi Paradox.
LB