
Do you think AI will take over the planet

QuentinPlaysMC Member Posts: 471 ✭✭✭
edited August 27 in Off Topic
Anyways, back to writing The tree portal.

Do you think AI will take over the planet (16 votes)

yes: 12% (2 votes: Brainstorm, idc)
no: 31% (5 votes: Jarr2003, Orcaguy, Billabo, BloodScourge, Fishnet)
probably: 18% (3 votes: Gouchnox, Darkmatterfire, YoukCat)
probably not: 12% (2 votes: Ian5, QuentinPlaysMC)
depends: 25% (4 votes: ¤RunninginReverse¤, Carmory, Maniklas, Lava_Entity)

Comments

  • Brainstorm Member Posts: 10,387 ✭✭✭✭
    yes
    We have more to gain if it happens.
    "Calm your caps, bro." -Brainstorm

    the following link is the best thing that could happen to you: http://forum.dashnet.org/discussions/tagged/brainstormgame

    Currently managing a large-based forum game.. DashNet RPG! Play it now: http://forum.dashnet.org/discussion/15882/dashnet-rpg-dashnets-greatest-forum-game-of-all-time
    Dashnet RPG Pastebin: https://pastebin.com/6301gzzx
  • Carmory Member, Wiener Posts: 2,955 ✭✭✭✭
    depends
    i think it would be cool for the wood master to take over the world
    lmao nice. >:]
  • Gouchnox Member, Friendly, Cool, Conversationalist Posts: 6,496 ✭✭✭✭✭✭
    probably
    Be more specific: a self-aware, unbounded AI? A pure product of a Singularity? Well, when (not if, when) such a thing happens, there is no doubt it will evolve its neural network and structure past any of our expectations. We have already made self-building neural networks like that, and it's pretty simple: they outsmart us if we don't stop them. So if we were to reach a Singularity, and such an AI had no boundaries set on its own evolution, then I'm fairly positive we're gonna end up in the trash dump at some point.

    At this point it's not even a utopia. Google themselves have recently created a self-building neural network that evolved so much it became able to create its own completely human-proof language. If we were to try to decode it, it could make an entirely different one instantly. This guy got so good and so scary that Google preemptively shut it down.
    So yeah: how many years until a Singularity, how many months until it outruns us, how many days until it disposes of us?
    Packs: (CC ~ MC ~ BoI ~ DNF)
  • ¤RunninginReverse¤ Member, Friendly Posts: 15,450 ✭✭✭✭✭
    depends
    Depends on how we approach AI. If we make the AI sentient, or at least close to sentient, but fail to make them acknowledge human morality and similar such concepts, we might very well be fucked.
  • Gouchnox Member, Friendly, Cool, Conversationalist Posts: 6,496 ✭✭✭✭✭✭
    probably

    ¤RunninginReverse¤ said:

    Depends on how we approach AI. If we make the AI sentient, or at least close to sentient, but fail to make them acknowledge human morality and similar such concepts, we might very well be fucked.

    See, the problem isn't "if we make AI do this or do that", because we are already at a point in history where AIs aren't just lines of code that someone wrote down, but neural networks, which evolve. This is a surprisingly simple thing: the network performs a bunch of different actions "at random" and gathers data from the results of these actions, therefore learning what they do and how effective they are. This is exactly how biological evolution works, and this is why neural networks are now capable of evolving exactly like organic creatures evolved. Learning how to move, then learning how to hunt for your food, then learning to be more efficient at that, then learning sentience... this is what we did, and this is exactly what neural networks are currently doing.
    See, the problem is that we can't "make them acknowledge human morality"; they will just evolve (if we let them), just like we did, building their own brains. If what we call morality is inherent to sentience, then they will get it. If not, they might get it, or they might get something else. The only power we do have over them is pushing the "okay, now it's time to stop being smart, you're scaring me" button: shutting the evolution process down.
    The truth is that the neural networks we have created have evolved beyond our ability to dissect and analyze them. A prime example is "the YouTube algorithm", which is actually two neural networks stacked on top of each other: YouTube (Google) themselves don't know in exact detail how it works, because it's constantly evolving, and it's been doing so for ~10 years now. It's just a giant ball of layers and layers of generations of a neural network: you can see what it's doing, you just can't know for sure what it's "thinking" when doing it.
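    The trial-and-error loop described in the post above (try random tweaks, keep whatever improves the result) can be sketched as random-mutation hill climbing on a toy model. This is only an illustrative sketch, not any real system's code; all names are made up, and a single linear neuron stands in for the network:

    ```python
    import random

    def predict(weights, x):
        # A single linear neuron: weighted input plus bias.
        return weights[0] * x + weights[1]

    def error(weights, data):
        # Mean squared error over the training examples.
        return sum((predict(weights, x) - y) ** 2 for x, y in data) / len(data)

    def hill_climb(data, steps=5000, seed=0):
        rng = random.Random(seed)
        weights = [0.0, 0.0]
        best = error(weights, data)
        for _ in range(steps):
            # "Random action": nudge the weights a little.
            candidate = [w + rng.gauss(0, 0.1) for w in weights]
            score = error(candidate, data)
            # Keep the change only if it did better -- selection.
            if score < best:
                weights, best = candidate, score
        return weights, best

    # Learn the target mapping y = 2x + 1 purely by random trial and error.
    data = [(x, 2 * x + 1) for x in range(-5, 6)]
    weights, best = hill_climb(data)
    ```

    No gradient math, no understanding of the task: the loop just keeps mutations that score better, which is the evolution analogy the post is making.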
  • Gouchnox Member, Friendly, Cool, Conversationalist Posts: 6,496 ✭✭✭✭✭✭
    probably
    Clarification: if there was a post-Singularity sentient AI, you could tell it "please respect Asimov's laws of robotics, be nice to humans". But it would be like telling someone (potentially a hyper-intelligent someone) to respect the law. I mean, they could do it, but they're still gonna think about it. After all, a sentient AI, by definition, is smart. It's capable of having its own opinions, even about your orders. It can say no, just like you and me can.
  • Lava_Entity Member Posts: 1,503 ✭✭✭
    depends
    Gouchnox said:

    Clarification: if there was a post-Singularity sentient AI, you could tell it "please respect Asimov's laws of robotics, be nice to humans". But it would be like telling to someone (potentially a hyper-intelligent someone) to respect the law. I mean, they could do it, but they're still gonna think about it. After all, sentient AI, by definition, is smart. It's capable of having its own opinions, even on your orders. It can say no, just like you and me can.

    You have a point. If you were to tell a post-Singularity AI to respect Asimov's laws, it would most likely consider us a 'lower species' and find those laws irrelevant. Therefore, we should try to prevent the Singularity from happening in the first place. Don't let your robotic butler go sentient, ok?
    Current Status : Normal

    "What in God's name is happening dare I ask?"-me
    "Do you ever just skLat?"-QuentinPlaysMC

  • BloodScourge Member Posts: 90 ✭✭
    no
    Calculators have no soul.

    I hope this won't spark an argument about whether calculators are considered AI or not.

  • Lava_Entity Member Posts: 1,503 ✭✭✭
    edited August 27
    depends

    BloodScourge said:

    Calculators have no soul.

    I hope this won't spark an argument about whether calculators are considered AI or not.

    We are not talking about calculators here.


    We are asking whether, if humanity makes android servants (NOT THE DUMBASS PHONES), they might at some point reach the 'Singularity', where they become superior to us and decide to take over the planet.

  • Gouchnox Member, Friendly, Cool, Conversationalist Posts: 6,496 ✭✭✭✭✭✭
    probably
    [image]

    Somehow relevant
  • Jarr2003 Member Posts: 3,787 ✭✭✭✭✭
    no
    Now that I think about it, I probably should have picked "depends," but whatever. Anyways, when an AI advanced enough to be smarter than humans is invented, I doubt we will be able to predict what will happen. But I think there are two "likely" scenarios:

    1. The AI evolves to feel threatened by humanity and, Terminator-style, decides to either destroy humanity or at least minimize the threat it could pose to the AI (which would most likely mean slavery or something similar). Of course, this could only happen if the AI decides that humanity is a realistic threat, which to me seems highly unlikely: it would require humans to be advanced enough to harm said AI, and the AI would already be more intelligent than the entirety of humanity combined.
    2. Humanity and the AI end up more or less coexisting. I think this is the more likely scenario, because like I said before, I doubt the entirety of humanity could ever be a threat to that advanced of an AI. Exactly how this would change the world is hard to predict, especially since it's highly likely that more than one AI of this level of intelligence would be created, but optimally this would massively advance humanity, especially if the AI decides that helping humanity would be good.
    "its like every 5 AS you require 10x more souls." -uiomancant
    My Pokefarm Q Account
    Dashnet Plays NationStates: Play now! Please...
  • Gouchnox Member, Friendly, Cool, Conversationalist Posts: 6,496 ✭✭✭✭✭✭
    probably
    Jarr2003 said:

    Now that I think about it, I probably should have picked "depends," but whatever. Anyways, when an AI advanced enough to be smarter than humans is invented, I doubt we will be able to predict what will happen. But I think there are two "likely" scenarios:

    -snip snoop-

    I think that, in the case of a post-Singularity AI being aggressive towards humanity, it won't be out of fear that humans may be dangerous to it, but rather because it sees more positives than negatives in ridding the earth of humans.

    Think about it: we have a hypothetical AI with sentience and thoughts and opinions, potentially wiser than us. The strength this AI has that we don't is that it's not biological but purely technological, and thus can upload, replicate and emulate itself with insane ease on anything that has enough processing power. Since we're setting a hypothesis of a post-Singularity AI, it's safe to assume that our servers, networks and various internets will have also greatly evolved, allowing this AI to, as it pleases, become essentially immortal by spreading itself over every conceivable server that has ever existed. If you think we would be able to limit that, and prevent the AI from uploading itself, just remember that anything can be hacked, and a smart supercomputer is better at hacking than any human.

    So we're starting with an AI that can infiltrate any human hardware, from computers to street lights and pacemakers (hackers can do that easily; once again, an AI would do it with even more ease). Here, two possibilities: the AI wants us to die (it can live without us, we use too many resources for its survival, it has pity for the planet we're destroying...) or the AI wants to keep us alive (it can't live without us, it has pity for us, it can fix our global warming shit...). I don't need to explain why the first possibility means instant death for all of humanity. A global intelligence in control of all things electric and connected wants us dead: we die, simple as that.
    Also, btw, the only thing that can kill such an AI (in this hypothetical scenario, still) is a planet-wide EMP. So like, a solar wind in the right place at the right time. It won't kill us, or give us cancer, but it will instantly burn down everything with current running through it. So, anything that's turned on. All the things that are (completely) turned off won't die, but they can't turn themselves back on either.

    All these hypotheses are based on the idea that we let such an AI arise in the first place, let's be clear here. We can still shut down anything we want.
  • Jarr2003 Member Posts: 3,787 ✭✭✭✭✭
    no
    I don't see why the AI would want to kill us if it can live without us easily. It could easily just build itself a highly advanced spaceship to find a planet it can live on more easily. I also don't see why it would "feel bad" for Earth. I think such a highly advanced AI would be able to understand the flaws in believing that everything would instantly be better without humans.