John Schulman: OpenAI and recent advances in Artificial Intelligence – #16

John Schulman is a research scientist at OpenAI. He co-leads the Reinforcement Learning group and works on agent learning in virtual game worlds (e.g., Dota) as well as in robotics. John, Corey, and Steve talk about AI, AGI (Artificial General Intelligence), the Singularity (self-reinforcing advances in AI which lead to runaway behavior that is incomprehensible to humans), and the creation and goals of OpenAI. They discuss recent advances in language models (GPT-2) and whether these results raise doubts about the usefulness of linguistic research over the past 60 years. Does GPT-2 imply that neural networks trained using large amounts of human-generated text can encode “common sense” knowledge about the world? They also discuss what humans are better at than current AI systems, and near-term examples of what is already feasible: for example, using AI drones to kill people.

Transcript: John Schulman: OpenAI and recent advances in Artificial Intelligence – #16

Joe Cesario on Political Bias and Problematic Research Methods in Social Psychology – #13

Corey and Steve continue their discussion with Joe Cesario and examine methodological biases in the design and conduct of experiments in social psychology, as well as ideological bias in the interpretation of the findings. Joe argues that experiments in his field are designed to be simple, but that in making experimental setups simple, researchers remove critical factors that actually matter for a police officer making a decision in the real world. In consequence, he argues that the results cannot be taken to show anything about actual police behavior. Joe maintains that social psychology as a whole is biased toward the left politically and that this affects how courses are taught and research is conducted. Steve points out that university faculty on the whole tend to be shifted left relative to the general population. Joe, Corey, and Steve discuss the current ideological situation on campus and how it can be alienating for students from conservative backgrounds.

James Cham on Venture Capital, Risk Taking, and the Future Impacts of AI – Episode #12

James Cham is a partner at Bloomberg Beta, a venture capital firm focused on the future of work. James invests in companies applying machine intelligence to businesses and society. Prior to Bloomberg Beta, James was a Principal at Trinity Ventures and a VP at Bessemer Venture Partners. He was educated in computer science at Harvard and at the MIT Sloan School of Business.

Transcript: James Cham on Venture Capital, Risk Taking, and the Future Impacts of AI – Episode #12

Steve: Hi, this is Steve Hsu, and this is Manifold. Our guest today is James Cham, a venture capitalist at Bloomberg Beta. Corey couldn’t make it today, so it’s just me and James. We got to know each other, I think, starting many years ago through a kind of unstructured Silicon Valley meeting that has […]

Transcript: Bobby Kasthuri & Brain Mapping – Episode #2

Corey: All right. Okay. Welcome to our podcast. My name’s Corey Washington, and this is my co-host Steve Hsu. And we’re going to tell you a little bit about how we hope our next episode should be going and some of the plans we’ve got for the show, but we’d like to lay out our general […]

Bobby Kasthuri & Brain Mapping – Episode #2

Corey and Steve are joined by Bobby Kasthuri, a Neuroscientist at Argonne National Laboratory and the University of Chicago. Bobby specializes in nanoscale mapping of brains using automated fine slicing followed by electron microscopy. Among the topics covered: Brain mapping, the nature of scientific progress (philosophy of science), Biology vs Physics, Is the brain too complex to be understood by our brains? AlphaGo, the Turing Test, and wiring diagrams, Are scientists underpaid? The future of Neuroscience.
