Figuring out just what an AI is good at is one of the hardest things about understanding it. To help, OpenAI has designed a set of games that let researchers tell whether their machine learning agent is actually learning basic skills or, just as likely, has figured out how to rig the system in its favor.

It’s one of those aspects of AI research that never fails to delight: the ways an agent will bend or break the rules in its efforts to look good at whatever the researchers are asking it to do. Cheating may be thinking outside the box, but it isn’t always welcome, and one way to check for it is to change the rules a bit and see whether the system breaks down.

What the agent actually learned can be determined by seeing whether those “skills” transfer when it’s put into new circumstances where only some of its knowledge is relevant.

For instance, say you want to know whether an AI has learned to play a Mario-like game where it travels right and jumps over obstacles. You could …read more
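The excerpt cuts off there, but the test it describes is straightforward to sketch. Below is a minimal, self-contained toy in Python; the environment, the policies, and all names are invented here for illustration and are not from the article or any OpenAI code. The idea: a policy that genuinely learned the skill (spot the obstacle ahead, jump) clears levels it has never seen, while one that merely memorized its training levels collapses on a held-out set of freshly generated ones.

```python
import random

class ToyPlatformer:
    """A toy "travel right and jump over obstacles" level, built from a seed."""
    def __init__(self, seed, length=20):
        rng = random.Random(seed)
        # Obstacle positions depend on the seed, so every level is distinct.
        self.obstacles = {i for i in range(2, length) if rng.random() < 0.3}
        self.length = length
        self.seed = seed
        self.pos = 0

    def step(self, action):
        """action is "run" or "jump"; returns (done, success)."""
        self.pos += 1
        if self.pos in self.obstacles and action != "jump":
            return True, False   # ran into an obstacle: episode over, failed
        if self.pos >= self.length:
            return True, True    # reached the end of the level: success
        return False, False

def success_rate(policy, seeds):
    """Fraction of the given levels the policy finishes."""
    wins = 0
    for seed in seeds:
        env = ToyPlatformer(seed)
        done = success = False
        while not done:
            done, success = env.step(policy(env))
        wins += success
    return wins / len(seeds)

TRAIN_SEEDS = range(0, 100)
TEST_SEEDS = range(100, 200)   # levels never seen during "training"

# A policy that truly learned the skill: look one step ahead and jump.
skillful = lambda env: "jump" if (env.pos + 1) in env.obstacles else "run"

# A policy that rigged the system: it memorized the training levels
# and has no idea what to do anywhere else.
memorized = {s: ToyPlatformer(s).obstacles for s in TRAIN_SEEDS}
cheater = lambda env: ("jump" if (env.pos + 1) in memorized.get(env.seed, set())
                       else "run")

for name, policy in [("skillful", skillful), ("cheater", cheater)]:
    print(f"{name}: train {success_rate(policy, TRAIN_SEEDS):.2f}, "
          f"held-out {success_rate(policy, TEST_SEEDS):.2f}")
```

Run as-is, both policies ace the training seeds; only the skillful one holds up on the held-out range, and that gap is exactly what a benchmark of fresh, procedurally generated levels is built to expose.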

Source: TechCrunch
