This document is a biographical profile of researcher Anca Dragan, appearing on page 97 of a larger House Oversight collection. It details her work at the InterACT Laboratory at Berkeley focusing on human-robot interaction and AI safety, and highlights her collaboration with mentor Stuart Russell. The text includes quotes from an interview she gave to the Future of Life Institute regarding the risks of AI agents producing unintended behaviors.
| Name | Role | Context |
|---|---|---|
| Anca Dragan | Researcher / Head of InterACT Laboratory | Subject of the biography; researches algorithms for human-robot interaction. |
| Stuart Russell | Professor / Mentor | Veteran Berkeley colleague and mentor to Anca Dragan; co-author of papers on machine learning and value alignment. |
| Name | Type | Context |
|---|---|---|
| InterACT Laboratory | Laboratory | Lab run by Anca Dragan at Berkeley. |
| Berkeley | University | Academic institution (University of California, Berkeley) where Dragan and Russell work. |
| Future of Life Institute | Organization | Organization that interviewed Dragan regarding AI safety. |
| Location | Context |
|---|---|
| Berkeley | Location of the InterACT Laboratory. |
"An immediate risk is agents producing unwanted, surprising behavior"

"Even if we plan to use AI for good, things can go wrong, precisely because we are bad at specifying objectives and constraints for AI agents."

"Their solutions are often not what we had in mind."

"unexpected side effects."