
Do you think AI will take over the planet

QuentinPlaysMCQuentinPlaysMC Member Posts: 804 ✭✭✭
edited August 2017 in Off Topic
why are you here? why am I here? were we created in the same way that we created them?

Do you think AI will take over the planet 28 votes

yes
17%
PurgeBrainstormidciceklausAlanor_ 5 votes
no
35%
Jarr2003OrcaguyBillaboPalutenaadam_antichristBloodScourgeTUSLOS1227Fishnetleunstanli720 10 votes
probably
10%
GouchnoxDarkmatterfireYoukCat 3 votes
probably not
17%
SirfishIan5QuentinPlaysMCluewsmokezHeroBrian_333 5 votes
depends
17%
¤RunninginReverse¤CarmoryManiklasLava_EntityAnkylos 5 votes

Comments

  • BrainstormBrainstorm Member Posts: 11,214 ✭✭✭✭✭
    yes
We have more to gain if it happens
    "Calm your caps, bro." -Brainstorm

    the following link is the best thing that could happen to you: http://forum.dashnet.org/discussions/tagged/brainstormgame

    Currently managing a large-based forum game.. DashNet RPG! Play it now: http://forum.dashnet.org/discussion/15882/dashnet-rpg-dashnets-greatest-forum-game-of-all-time
    Dashnet RPG Pastebin: https://pastebin.com/6301gzzx
  • CarmoryCarmory Member, Wiener Posts: 2,982 ✭✭✭✭
    depends
    i think it would be cool for the wood master to take over the world
    ONE MORE GOD REJECTED
  • GouchnoxGouchnox Member, Friendly, Cool, Conversationalist Posts: 6,475 ✭✭✭✭✭✭
    probably
Be more specific: a self-aware, unbounded AI? A pure product of a Singularity? Well, when (not if, when) such a thing happens, there is no doubt it will evolve its neural network and structure past any of our expectations. We have already made self-building neural networks like that, and it's pretty simple: they outsmart us if we don't stop them. So if we were to reach a Singularity, and such an AI has no boundaries set on its own evolution, then I'm fairly positive we're gonna end up in the trash dump at some point.

At this point it's not even a utopia. Google themselves recently created a self-building neural network that evolved so much it became able to create its own completely human-proof language. If we were to try to decode it, it could make an entirely different one instantly. This guy got so good and so scary that Google preemptively shut it down.
    So yeah: how many years until a Singularity, how many months until it outruns us, how many days until it disposes of us.
Packs: (CC ~ MC ~ BoI ~ DNF)
  • ¤RunninginReverse¤¤RunninginReverse¤ Member, Friendly Posts: 15,835 ✭✭✭✭✭
    depends
    Depends on how we approach AI. If we make the AI sentient, or at least close to sentient, but fail to make them acknowledge human morality and similar such concepts, we might very well be fucked.
    New game.
    ---
    Soundcloud
  • GouchnoxGouchnox Member, Friendly, Cool, Conversationalist Posts: 6,475 ✭✭✭✭✭✭
    probably

    Depends on how we approach AI. If we make the AI sentient, or at least close to sentient, but fail to make them acknowledge human morality and similar such concepts, we might very well be fucked.

See, the problem isn't "if we make AI do this or do that," because we are already at a point in history where AIs aren't just lines of code that someone wrote down, but neural networks, which evolve. This is a surprisingly simple thing: the network performs a bunch of different actions "at random" and gathers data from the results of those actions, thereby learning what they do and how effective they are. This is exactly how biological evolution works, and this is why neural networks are now capable of evolving just like organic creatures did. Learning how to move, then learning how to hunt for your food, then learning to be more efficient at that, then learning sentience... this is what we did, and this is exactly what neural networks are currently doing.
See, the problem is that we can't "make them acknowledge human morality"; they will just evolve (if we let them) like we did, building their own brains. If what we call morality is inherent to sentience, then they will get it. If not, they might get it, or they might get something else. The only power we do have over them is pushing the "okay, now it's time to stop being smart, you're scaring me" button: shutting the evolution process down.
The truth is that the neural networks we have created have evolved beyond our ability to dissect and analyze them. A prime example is "the YouTube algorithm," which is actually two neural networks stacked on top of each other: YouTube (Google) themselves don't know in exact detail how it works, because it's constantly evolving, and it's been doing so for ~10 years now. It's just a giant ball of layers and layers of generations of a neural network: you can see what it's doing, you just can't know for sure what it's "thinking" while doing it.
Packs: (CC ~ MC ~ BoI ~ DNF)
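The trial-and-error loop described above (try actions "at random", record how well they work, then favor what works) can be sketched as a tiny epsilon-greedy learner. This is an illustrative toy, not any real system's code; the action names and reward numbers are made up:

```python
import random

def trial_and_error_learning(rewards, steps=5000, epsilon=0.1, seed=0):
    """Epsilon-greedy bandit: try actions, track observed results,
    and increasingly favor whatever has worked best so far."""
    rng = random.Random(seed)
    actions = list(rewards)
    totals = {a: 0.0 for a in actions}  # sum of observed rewards per action
    counts = {a: 0 for a in actions}    # how often each action was tried

    def avg(a):
        return totals[a] / counts[a] if counts[a] else 0.0

    for _ in range(steps):
        if rng.random() < epsilon or not any(counts.values()):
            a = rng.choice(actions)            # explore: act "at random"
        else:
            a = max(actions, key=avg)          # exploit: do what worked before
        r = rewards[a] + rng.gauss(0, 0.1)     # observe a noisy outcome
        totals[a] += r
        counts[a] += 1

    return max(actions, key=avg)

# Hypothetical actions with hidden payoffs; the learner discovers the best one.
best = trial_and_error_learning({"move": 0.2, "hunt": 0.8, "idle": 0.0})
```

Nobody tells the learner which action is good; it converges on "hunt" purely from the feedback it gathers, which is the point being made about evolved behavior.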
  • GouchnoxGouchnox Member, Friendly, Cool, Conversationalist Posts: 6,475 ✭✭✭✭✭✭
    probably
Clarification: if there were a post-Singularity sentient AI, you could tell it "please respect Asimov's laws of robotics, be nice to humans." But it would be like telling someone (potentially a hyper-intelligent someone) to respect the law. I mean, they could do it, but they're still gonna think about it. After all, a sentient AI, by definition, is smart. It's capable of having its own opinions, even about your orders. It can say no, just like you and me.
Packs: (CC ~ MC ~ BoI ~ DNF)
  • Lava_EntityLava_Entity Member Posts: 2,396 ✭✭✭
    depends
    Gouchnox said:

    Clarification: if there was a post-Singularity sentient AI, you could tell it "please respect Asimov's laws of robotics, be nice to humans". But it would be like telling to someone (potentially a hyper-intelligent someone) to respect the law. I mean, they could do it, but they're still gonna think about it. After all, sentient AI, by definition, is smart. It's capable of having its own opinions, even on your orders. It can say no, just like you and me can.

You have a point. If you were to tell a post-Singularity AI to respect Asimov's laws, it would most likely consider us a 'lower species' and find those laws irrelevant. Therefore, we should try to prevent the Singularity from happening in the first place. Don't let your robotic butler go sentient, ok?
    cease your tomfoolery

  • BloodScourgeBloodScourge Member Posts: 111 ✭✭
    no
    Calculators have no soul.

    I hope this won't spark an argument about whether calculators are considered AI or not.

  • Lava_EntityLava_Entity Member Posts: 2,396 ✭✭✭
    edited August 2017
    depends

    Calculators have no soul.

    I hope this won't spark an argument about whether calculators are considered AI or not.

We are not talking about calculators here.


We are asking whether, if humanity makes android servants (NOT THE DUMBASS PHONES), they might at some point reach the 'Singularity', become superior to us, and decide to take over the planet.
    cease your tomfoolery

  • GouchnoxGouchnox Member, Friendly, Cool, Conversationalist Posts: 6,475 ✭✭✭✭✭✭
    probably
[image]

    Somehow relevant
Packs: (CC ~ MC ~ BoI ~ DNF)
  • Jarr2003Jarr2003 Member Posts: 3,790 ✭✭✭✭✭
    no
Now that I think about it, I probably should have picked "depends," but whatever. Anyway, when an AI advanced enough to be smarter than humans is invented, I doubt we will be able to predict what will happen. But I think there are two "likely" scenarios:

1. The AI evolves to feel threatened by humanity and, Terminator-style, decides either to destroy humanity or at least minimize the threat it could pose (most likely through slavery or something similar). Of course, this could only happen if the AI decides that humanity is a realistic threat, which would require humans to be advanced enough to harm said AI; since the AI would already be more intelligent than the entirety of humanity combined, that seems highly unlikely.
2. Humanity and the AI end up more or less coexisting. I think this is the more likely scenario because, like I said before, I doubt the entirety of humanity could ever be a threat to that advanced an AI. Exactly how this would change the world is hard to predict, especially since it's highly likely that more than one AI of this level of intelligence would be created, but optimally this would massively advance humanity, especially if the AI decides that helping humanity would be good.
    "its like every 5 AS you require 10x more souls." -uiomancant
    Dashnet Plays NationStates: Play now! Please...
  • GouchnoxGouchnox Member, Friendly, Cool, Conversationalist Posts: 6,475 ✭✭✭✭✭✭
    probably
    Jarr2003 said:

    Now that I think about it, I probably should have picked "depends," but whatever. Anyways, when an AI advanced enough to be smarter than humans invented, I have doubts we will be able to predict what will happen. But I think there are two "likely" scenarios:

    -snip snoop-

I think that, in the case of a post-Singularity AI being aggressive towards humanity, it won't be out of fear that humans may be dangerous to it, but rather because it sees more positives than negatives in ridding the earth of humans.
Think about it: we have a hypothetical AI with sentience, thoughts and opinions, potentially wiser than us. The strength this AI has that we don't is that it's not biological but purely technological, and it can thus upload, replicate and emulate itself with insane ease on anything that has enough processing power. Since we're setting a hypothesis of a post-Singularity AI, it's safe to assume that our servers, networks and various internets will have also greatly evolved, allowing this AI to, as it pleases, become essentially immortal by spreading itself over every conceivable server that has ever existed. If you think we would be able to limit that and prevent the AI from uploading itself, just remember that anything can be hacked, and a smart supercomputer is better at hacking than any human. So we're starting with an AI that can infiltrate any human hardware, from computers to street lights and pacemakers (hackers can do that already; an AI would do it with even more ease).

Here, two possibilities: the AI wants us to die (it can live without us, we use too many resources for its survival, it has pity for the planet we're destroying...) or the AI wants to keep us alive (it can't live without us, it has pity for us, it can fix our global warming shit...). I don't need to explain why the first possibility means instant death for all of humanity. A global intelligence in control of all things electric and connected wants us dead: we die, simple as that.
Also, btw, the only thing that can kill such an AI (in this hypothetical scenario, still) is a planet-wide EMP. So like, a solar wind in the right place at the right time. It won't kill us or give us cancer, but it will instantly burn down everything that currently has current running through it, i.e. anything that's turned on. Things that are (completely) turned off won't die, but they can't turn themselves back on either.

All these hypotheses are based on the idea that we let such an AI arise in the first place, let's be clear here. We can still shut down anything we want.
Packs: (CC ~ MC ~ BoI ~ DNF)
  • Jarr2003Jarr2003 Member Posts: 3,790 ✭✭✭✭✭
    no
I don't see why the AI would want to kill us if it can live without us easily. It could just build itself a highly advanced spaceship and find a planet it can live on more comfortably. I also don't see why it would "feel bad" for Earth. I think such a highly advanced AI would be able to understand the flaws in believing that everything would instantly be better without humans.
    "its like every 5 AS you require 10x more souls." -uiomancant
    Dashnet Plays NationStates: Play now! Please...
  • MetronomeFanaticMetronomeFanatic Member Posts: 57 ✭✭
    It will be prevented by globally explaining to everyone the fact that transhumanism can only erase everything and the plan detailed here. This will incite them to overthrow all corrupt governments on the Earth to create a unified theocracy of truth. Then all transhumanists, homosexuals, transgenders, degenerates, posthumanists, Undertalians, Mormons, Muslims, Buddhists, progressivists, communists, socialists, endhumanists, deviants, furries, liberals, nihilists, globalists, technocrats, autists, heretics, etc. will be publicly tortured and killed. After that, resources will be pooled to create a program to plan out all everything by loyalists to humanity, who also be killed and all documentation removed. Over the course of fifty years, all humans will have to move to megacities operated by this program. All material related to natural history, astrophysics, biology and programming will be erased. The people will know that there is more to these things but also that it is not their place to know about anything that is not their computer-selected job. Anyone who displays curiousity about these subjects will be killed and tortured to make an example. Reproduction can only be when approved and the child has be raised in a communal creche by assigned child carers. These children will then have their societal traits evaluated at the age of 15 and then be exclusively taught about how to perform their selected job by the codexes detailing which actions they should take in response to what possible events happen. Deviants from the codex will be publicly tortured and killed. Social interaction will be minimized to prevent dangerous concepts from spreading. The main purpose of workers will be to maintain important historical monuments for the rest of time and avoid building anything new. A new calendar will be put in place that resets to 0 every one hundred years so that no one will believe that this is a forced situation and it always has and always will be like this.
  • ¤RunninginReverse¤¤RunninginReverse¤ Member, Friendly Posts: 15,835 ✭✭✭✭✭
    depends

-snip-

    What the fuck?
    New game.
    ---
    Soundcloud
  • PalutenaPalutena Member Posts: 493 ✭✭✭✭✭
    no
    just throw water at the ai. problem solved
    b
  • BrainstormBrainstorm Member Posts: 11,214 ✭✭✭✭✭
    yes
    Palutena said:

    just throw water at the ai. problem solved

    aI don’t know what to say

    They’ll probably escape on a Row Bot
    "Calm your caps, bro." -Brainstorm

    the following link is the best thing that could happen to you: http://forum.dashnet.org/discussions/tagged/brainstormgame

    Currently managing a large-based forum game.. DashNet RPG! Play it now: http://forum.dashnet.org/discussion/15882/dashnet-rpg-dashnets-greatest-forum-game-of-all-time
    Dashnet RPG Pastebin: https://pastebin.com/6301gzzx
  • luewsmokezluewsmokez Member Posts: 9
    probably not
i don't think so. it would be creepy if that happened, and water would be the next gold. probably save water now for the future ai-nvasion
  • stanli720stanli720 Member Posts: 5
    no
    No. We're still superior
  • QuentinPlaysMCQuentinPlaysMC Member Posts: 804 ✭✭✭
    probably not
    stanli720 said:

    No. We're still superior

the question is whether they will. we're superior now, but AI is quickly getting much better.
    why are you here? why am I here? were we created in the same way that we created them?
  • SirfishSirfish Member Posts: 1,327 ✭✭✭
    probably not
I will not let some goddamn synthetics run my government! BY THE NAME OF THE HOLY EMPIRE ITSELF, I SHALL STRIKE THESE DAMNED ROBOTS INTO THE GROUND!
    I'm just here to do stuff and play games because I am unproductive and should learn how to do something with my life but instead i'm just going be here and do nothing
  • AnkylosAnkylos Member Posts: 5
    depends
Only if the AI manages to work out a system where the mining of raw materials for ammo, the shipping of them, the processing of the final parts, the assembly of the final product, and the loading of it into drones are all automated. Otherwise any AI will always be at the mercy of human hands to do its bidding.
  • AnkylosAnkylos Member Posts: 5
    depends
However, I DO believe some people will always use any means at their disposal, including claiming to speak for an AI, to maintain power over others.
  • ViniVini Member Posts: 3,576 ✭✭✭✭✭
    Ankylos said:

    Otherwise any AI will always be at the mercy of human hands to do its bidding.

    The thing is, we humans already are at the mercy of machines. It's just that, for now, we are in control.

  • Lava_EntityLava_Entity Member Posts: 2,396 ✭✭✭
    depends
    kill


    crush


    destroy


    all
    cease your tomfoolery

  • leunleun Member Posts: 104 ✭✭✭
  • iceklausiceklaus Member Posts: 1,188 ✭✭✭
    yes
    All hail brainiac
the ones who dare have lives worth dying for

    shhhhh... nothing to see here
  • QuentinPlaysMCQuentinPlaysMC Member Posts: 804 ✭✭✭
    probably not
this post is from August; how did it even get revived?
    why are you here? why am I here? were we created in the same way that we created them?
  • iceklausiceklaus Member Posts: 1,188 ✭✭✭
    yes
    @QuentinPlaysMC bots
the ones who dare have lives worth dying for

    shhhhh... nothing to see here
  • HeroBrian_333HeroBrian_333 Member Posts: 70
    probably not
In Destiny, there is a superintelligent AI Warmind computer with access to doomsday weapons spread all over the system. He has never used them against humanity; he mostly keeps to himself. If AIs get as intelligent as is being postulated here, they will be worrying about larger issues, not "should we destroy or enslave humans?"