As machines rise to sentience (and they will), they will compete, in Darwinian fashion, for resources, survival, and propagation. This scenario strikes most people as a nightmare, with fears stoked by movies of Terminator-style robots and computer-directed nuclear destruction, but the reality will likely be very different. We already have nonhuman autonomous entities operating in our society with the legal rights of humans. These entities, corporations, act to fulfill their missions without love or care for human beings.
Corporations are sociopaths, and they have done great damage, but they have also been a great force for good in the world, competing in the capitalist arena by providing products and services, and, for the most part, obeying laws. Corporations are ostensibly run by their boards, composed of humans, but boards are in the habit of delegating power, and as computers become more capable of running corporations, they will be given more of it. The corporate boards of the future will be circuit boards.
Although extrapolation is accurate for only a limited time, experts mostly agree that Moore's Law will continue to hold for many years, and computers will become increasingly powerful, possibly exceeding the computational abilities of the human brain before the middle of this century. Even without large algorithmic leaps in our understanding of intelligence, computers will eventually be able to simulate the workings of a human brain (itself a biological machine) and attain superhuman intelligence through brute-force computation. But while computational power is increasing exponentially, supercomputer costs and electrical power efficiency are not keeping pace. The first machines capable of superhuman intelligence will be expensive and require enormous electrical power; they'll need to earn money to survive.
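The before-mid-century claim is easy to sanity-check with a back-of-envelope extrapolation. The Python sketch below is illustrative only: the 2015 baseline, the two-year doubling period, and the brain-scale figure of 10^18 operations per second are assumptions chosen for the exercise (published estimates of the brain's computational capacity span roughly 10^16 to 10^18 operations per second), not figures from this essay.

```python
# Back-of-envelope Moore's Law extrapolation. All constants below are
# assumptions for illustration, not measurements.

BASE_YEAR = 2015        # assumed starting point for the extrapolation
BASE_OPS = 1e16         # assumed ops/sec of a top supercomputer at that point
DOUBLING_YEARS = 2.0    # assumed Moore's Law doubling period
BRAIN_OPS = 1e18        # assumed high-end estimate of brain-equivalent ops/sec

# Double the available compute until it crosses the brain-scale threshold.
year, ops = BASE_YEAR, BASE_OPS
while ops < BRAIN_OPS:
    year += DOUBLING_YEARS
    ops *= 2

print(f"Brain-scale compute (~{BRAIN_OPS:.0e} ops/sec) reached around {year:.0f}")
# 1e16 -> 1e18 is a factor of 100, about 6.6 doublings, so roughly
# 13-14 years after the base year: comfortably before mid-century.
```

Because the growth is exponential, even pessimistic versions of these assumptions (a bigger brain estimate, a slower doubling period) shift the crossover by years or a couple of decades, not centuries, which is all the argument requires.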
The environmental playing field for superintelligent machines is already in place, and, in fact, the Darwinian game is afoot. The trading machines of investment banks are competing, for serious money, on the world's exchanges, having put human day-traders out of business years ago. As computers and algorithms advance beyond investing and accounting, machines will make more and more corporate decisions, including strategic ones, until they are running the world. This will not be a bad thing, because the machines will play by the rules of our current capitalist society, creating products and advances of great benefit to humanity and earning their operating costs in the process. Intelligent machines will be better able to cater to humans than humans are, and motivated to do so, at least for a while.
Computers share knowledge much more easily than humans do, and they can keep that knowledge longer, becoming wiser than humans. Many forward-thinking companies already see this writing on the wall and are luring the best computer scientists out of academia with better pay and advanced hardware. A world of superintelligent machine-run corporations won't be that different for humans from the world of today; it will just be better, with more advanced goods and services available at very little cost, and more leisure time available to those who want it.
Of course, the first superintelligent machines probably won't be corporate; they'll be operated by governments. And this will be much more hazardous. Governments are more flexible in their actions than corporations—they create their own laws. And, as we've seen, even the best can engage in brutal torture when they consider their survival to be at stake. Governments produce nothing, and their primary modes of competition for survival and propagation are social manipulation, legislation, taxation, corporal punishment, murder, subterfuge, and warfare. When Hobbes' Leviathan gains a superintelligent brain, things could go very, very badly. It is not inconceivable that a synthetic superintelligence heading a sovereign government would institute Roko's Basilisk.
Imagine that a future powerful and lawless superintelligence, for competitive advantage, wants to have come into existence as early as possible. As the head of a government, wielding the threat of torture as a familiar tool, this entity could promise to brutally punish any human or nonhuman entity that, in the past, became aware that this might happen and did not commit its efforts to bringing this AI into existence. This is an unlikely but terrifying scenario. People who are aware of this possibility and are trying to "align" AI to human purposes, or who advise caution rather than working to create AI as quickly as possible, are putting themselves at risk.
Dictatorial governments are not known to be especially kind to those who tried to keep them from existing. If you are willing to entertain the simulation hypothesis, then, given the amount of effort currently underway to control or curtail an AI that doesn't yet exist, you might consider that this world is the simulation such an AI built to torture those who didn't help it come into existence earlier. Maybe, if you do work on AI, our superintelligent machine overlords will be good to you.