John Schulman: OpenAI and recent advances in Artificial Intelligence – #16

John Schulman is a research scientist at OpenAI. He co-leads the Reinforcement Learning group and works on agent learning in virtual game worlds (e.g., Dota) as well as in robotics. John, Corey, and Steve talk about AI, AGI (Artificial General Intelligence), the Singularity (self-reinforcing advances in AI that lead to runaway behavior incomprehensible to humans), and the creation and goals of OpenAI. They discuss recent advances in language models (GPT-2) and whether these results cast doubt on the usefulness of linguistic research over the past 60 years. Does GPT-2 imply that neural networks trained on large amounts of human-generated text can encode “common sense” knowledge about the world? They also discuss what humans are still better at than current AI systems, and near-term examples of what is already feasible: for example, using AI drones to kill people.

Transcript: John Schulman: OpenAI and recent advances in Artificial Intelligence – #16