The following discussion presumes three things: that conscious minds function in accord with the laws of physics and can operate on substrates other than the neurological meat found between our ears; that conscious artificial devices can therefore be constructed that will become even more self-aware and intelligent than humans; and that the minds operating in human brains will then become correspondingly obsolete, unable to compete with the new mental uberpowers.
This particular primate-level thinking biomachine tends to think that the development of artificial superminds is a good idea. Although our species has its positives, Homo sapiens is obviously a severely limited, badly "designed" (by bioevolution) system that is doing grave damage to the wee planet it inhabits, even as the planet does grave damage in return—e.g., diseases have slaughtered about half of the some 100 billion children born so far. Attempting to preserve humans much as they currently are, indefinitely into the future, is a static conservation project that flies in the face of evolutionary processes in which species come and species go in a continual turnover. There is no a priori reason to presume that H. sapiens is so very special that it deserves exceptional protection, particularly if its successors are capable of self-aware conscious thought.
But, to be blunt, what we think about these matters probably does not matter all that much. That's because humanity as a whole is not really in charge of the situation. Once upon a time—the year 1901, when my grandmother was born—building flying machines was so hard that no one could yet do it. Now the necessary technology is so readily available that you can build an airplane in your garage. Once upon a time—shortly before I was born—we did not understand the structure of DNA. Now grade school kids do DNA experiments. Currently the technologies needed to generate nonbiological conscious minds are not on hand. Eventually, commonly available information-processing technology will probably become so sophisticated that making thinking machines will not be all that hard to do. And lots of people will want to create and/or become cyberminds no matter what others might think, and despite what laws and regulations governments may pass in futile efforts to prevent the onset of the new minds.
In the end, all the contemporary chit-chat about the cyberrevolution often called The Singularity is so much venting and opinionating, not all that different from the subsequently pretty useless discussions back in the 1800s about the feasibility, advisability, and ultimate meaning of the coming of powered flying machines. What we say now does not count for much: if the technology never works, then superminds will never be a problem or a benefit, and if the technology does work, then one way or another the new thinking machines will be devised and they will take over the planet whether we like it or not.
If so, then the important question will not be what we think about thinking machines; it will be what they think about old-fashioned human minds. One thing there is no need to fear is hapless humans being enslaved by their cybersuperiors: people are too inept and inefficient for smart robots to bother exploiting—even now, corporations are trying to minimize the labor they have to pull out of pesky people. The way for human minds to avoid becoming uselessly obsolete is to join the cyber civilization, by uploading out of growth-limited biobrains into rapidly improving cyberbrains. That could be for the best. If high-level intelligence can get out of the billions of human bodies that are weighing down on the planetary ecosystem, then the biosphere will have the potential to return to its prehuman vitality.