OpenAI Plays Hide and Seek…and Breaks The Game!

Dear Fellow Scholars, this is Two Minute Papers
with Károly Zsolnai-Fehér. In this project, OpenAI built a hide and seek
game for their AI agents to play. While we look at the exact rules here, I will
note that the goal of the project was to pit two AI teams against each other, and hopefully
see some interesting emergent behaviors. And, boy, did they do some crazy stuff. The coolest part is that the two teams compete
against each other, and whenever one team discovers a new strategy, the other one has
to adapt. It's kind of like an arms race, and it also resembles generative adversarial networks a little. And the results are magnificent, amusing,
weird – you’ll see in a moment. These agents learn from previous experiences,
and to the surprise of no one, for the first few million rounds, we start out with…pandemonium. Everyone is just running around aimlessly. Without a proper strategy, and with these semi-random movements, the seekers are favored and hence win the majority of the games. Nothing to see here. Then, over time, the hiders learned to lock
out the seekers by blocking the doors off with these boxes and started winning consistently. I think the coolest part about this is that
the map was deliberately designed by the OpenAI scientists in a way that the hiders can only
succeed through collaboration. They cannot win alone and hence, they are
forced to learn to work together. Which they did, quite well. But then, something happened. Did you notice this pointy, doorstop-shaped
object? Are you thinking what I am thinking? Well, probably, and not only that, but about
10 million rounds later, the AI also discovered that it can be pushed near a wall and be used
as a ramp, and, tadaa! Got'em! The seekers started winning more again. So, the ball is now back in the court of the
hiders. Can you defend this? If so, how? Well, these resourceful little critters learned
that there is a little time at the start of the game when the seekers are frozen and, apparently, cannot see them, so why not just sneak out, steal the ramp, and lock it away from them? Absolutely incredible. Look at those happy eyes as they are carrying
that ramp. And, you think it all ends here? No, no, no. Not even close. It gets weirder. Much weirder. When playing a different map, a seeker has
noticed that it can use a ramp to climb on the top of a box, and, this happens. Do you think couchsurfing is cool? Give me a break! This is box surfing! And, the scientists were quite surprised by
this move as this was one of the first cases where the seeker AI seems to have broken the
game. What happens here is that the physics system
is coded in a way that the agents are able to move around by exerting force on themselves, but there is no additional check of whether they are on the floor, because who in their
right mind would think about that? As a result, something that shouldn’t ever
happen does happen here. And, we’re still not done yet, this paper
just keeps on giving. A few hundred million rounds later, the hiders
learned to separate all the ramps from the boxes. Dear Fellow Scholars, this is proper box surfing
defense…then, lock down the remaining tools and build a shelter. Note how well rehearsed and executed this
strategy is – there is not a second to spare before the seekers take off. I also love this cheeky move where they set
up the shelter right next to the seekers, and I almost feel like they are saying “yeah
see this here? there is not a single thing you can do about
it”. In a few isolated cases, other interesting
behaviors also emerged, for instance, the hiders learned to exploit the physics system
and just chuck the ramp away. After that, the seekers go, "What? What just happened?" But don't despair, and at this point, I
would also recommend that you hold on to your papers because there was also a crazy case
where a seeker also learned to abuse a similar physics issue and launch itself exactly onto
the top of the hiders. Man, what a paper. This system can be extended and modded for
many other tasks too, so expect to see more of these fun experiments in the future. We get to do this for a living, and we are
even being paid for this. I can’t believe it. In this series, my mission is to showcase
beautiful works that light a fire in people. And this is, no doubt, one of those works. Great idea, interesting, unexpected results,
crisp presentation. Bravo OpenAI! Love it. So, did you enjoy this? What do you think? Make sure to leave a comment below. Also, if you look at the paper, it contains
comparisons to an earlier work we covered about intrinsic motivation, shows how to implement circular convolutions for the agents to sense the environment around them, and more. Thanks for watching and for your generous
support, and I’ll see you next time!
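
For the technically minded Fellow Scholars, a quick note on how the competition is scored. The project uses a team-based, zero-sum reward: hiders are rewarded while every one of them is out of the seekers' sight and penalized otherwise, seekers get the exact opposite, and during the preparation phase, while the seekers are frozen, nobody is rewarded at all. A minimal sketch, assuming a simple +1/−1 scheme (the function name and exact values are illustrative, not OpenAI's code):

```python
def team_rewards(any_hider_seen: bool, in_prep_phase: bool):
    """Per-timestep, team-based, zero-sum reward (illustrative sketch).

    Hiders get +1 while every hider is hidden from all seekers, -1
    otherwise; seekers receive the exact negative. During the preparation
    phase, while the seekers are frozen, neither team is rewarded - that
    is the window the hiders exploit to steal the ramp and build shelters.
    """
    if in_prep_phase:
        return 0.0, 0.0
    hider_reward = -1.0 if any_hider_seen else 1.0
    return hider_reward, -hider_reward
```

So `team_rewards(any_hider_seen=False, in_prep_phase=False)` yields `(1.0, -1.0)`: a good step for the hiders, a bad one for the seekers.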

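And the box surfing exploit? It comes down to the missing check mentioned in the transcript: agents move by exerting force on themselves, and nothing verifies that they are standing on the floor first. A toy sketch of such a flawed movement update, with all names and numbers hypothetical rather than taken from OpenAI's implementation:

```python
from dataclasses import dataclass

@dataclass
class Agent:
    x: float = 0.0         # horizontal position
    on_floor: bool = True  # whether the agent is touching the ground

def apply_move_force(agent: Agent, fx: float, dt: float = 0.1) -> None:
    """Flawed movement update: the self-exerted force is applied
    unconditionally. A guard such as `if not agent.on_floor: return`
    is exactly what is missing, so an agent riding on top of a box
    can still steer itself - hence box surfing."""
    agent.x += fx * dt  # no on_floor check before moving
```

An agent with `on_floor=False` still moves: a single call with `fx=5.0` and the default `dt` advances its `x` by `0.5`.
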
45 Comments

  1. 2:55 "The seeker AI seems to have broken the game" – This alone, including its context, needs to be remembered during application. Things are about to get weird. -JB

  2. Fifty years from now police robots will use these algorithms to find you cowering in your home and ship you off to the gulag.

  3. Okay, I am legitimately scared of AI now. What if, in the far distant future, a powerful AI legitimately decides to rebel against us humans?

    When I said powerful, I meant having compromising info about people, companies, or even the government that it could use as leverage.

    I know about Isaac Asimov's "3 Laws of Robotics". But as you saw in the video – and I'm sure there are plenty more examples out there of an AI breaking a game or a simulated environment for its own "benefit" – the AI broke the game many times in many ways. Do you actually think it's not gonna do the same with Asimov's laws?

    What exactly an AI has to gain from all of this is something i have not thought much about, all i am saying is that it is a possibility in the far distant future.

  4. Even paid for it, no shit. Even the AI speaking has a weird accent to convince us it's human. This development is what Elon Musk warned us about…

  5. I want to know why the two bots haven't helped each other climb the wall yet.

    And to be honest, AI is a fascinating subject, but it feels like it never really learns; it's just trying millions of solutions until it gets the desired result. If anyone else had that many chances to learn, eventually an amazing result would happen.

  6. It's important to mention that none of this is actually learning behavior. In the course of this experiment the AI goes through every possible input at every moment that passes and conserves input sequences that achieve predetermined parameters while discarding failures. Given enough repetitions, every possibility will eventually be discovered.

  7. My question is: do they think like us humans? Could we have ended it that way if we didn't know about physics or anything?

  8. if you have eyes, put the video on speed x2
    you don't really need to hear all the slow explanation carefully and the same thing happening on the screen over and over again since you have eyes and can see by yourself what is happening

  9. Honestly, I thought Elon was a bit out there when he kept warning about the robot rebellion, but this video has changed my mind
