HOUSE_OVERSIGHT_016900.jpg

1.09 MB

Extraction Summary

People: 2
Organizations: 3
Locations: 1
Events: 0
Relationships: 1
Quotes: 4

Document Information

Type: Biography / profile (part of a larger report)
File Size: 1.09 MB
Summary

This document is a biographical profile of researcher Anca Dragan, appearing on page 97 of a larger House Oversight collection. It details her work at the InterACT Laboratory at Berkeley focusing on human-robot interaction and AI safety, and highlights her collaboration with mentor Stuart Russell. The text includes quotes from an interview she gave to the Future of Life Institute regarding the risks of AI agents producing unintended behaviors.

People (2)

Name | Role | Context
Anca Dragan | Researcher / Head of InterACT Laboratory | Subject of the biography; researches algorithms for human-robot interaction.
Stuart Russell | Professor / Mentor | Veteran Berkeley colleague and mentor to Anca Dragan; co-author of papers on machine learning and value alignment.

Organizations (3)

Name | Type | Context
InterACT Laboratory | Research laboratory | Lab run by Anca Dragan at Berkeley.
Berkeley | University | Academic institution (University of California, Berkeley) where Dragan and Russell work.
Future of Life Institute | Nonprofit institute | Organization that interviewed Dragan regarding AI safety.

Locations (1)

Location | Context
Berkeley | Location of the InterACT Laboratory.

Relationships (1)

Subject | Relationship | Object
Anca Dragan | Colleague/Mentor | Stuart Russell
Evidence: "she has co-authored a number of papers with her veteran Berkeley colleague and mentor Stuart Russell"

Key Quotes (4)

Quote #1: "An immediate risk is agents producing unwanted, surprising behavior"
Quote #2: "Even if we plan to use AI for good, things can go wrong, precisely because we are bad at specifying objectives and constraints for AI agents."
Quote #3: "Their solutions are often not what we had in mind."
Quote #4: "unexpected side effects."
Source (all quotes): HOUSE_OVERSIGHT_016900.jpg

Full Extracted Text

Complete text extracted from the document (1,433 characters)

Romanian-born Anca Dragan’s research focuses on algorithms that will enable robots
to work with, around, and in support of people. She runs the InterACT Laboratory at
Berkeley, where her students work across different applications, from assistive robots to
manufacturing to autonomous cars, and draw from optimal control, planning, estimation,
learning, and cognitive science. Barely into her thirties herself, she has co-authored a
number of papers with her veteran Berkeley colleague and mentor Stuart Russell which
address various aspects of machine learning and the knotty problems of value alignment.
She shares Stuart’s preoccupation with AI safety: “An immediate risk is agents
producing unwanted, surprising behavior,” she told an interviewer from the Future of
Life Institute. “Even if we plan to use AI for good, things can go wrong, precisely
because we are bad at specifying objectives and constraints for AI agents. Their
solutions are often not what we had in mind.”
Her principal goal is therefore to help robots and programmers alike to overcome
the many conflicts that arise because of a lack of transparency about each other’s
intentions. Robots, she says, need to ask us questions. They should wonder about their
assignments, and they should pester their human programmers until everybody is on the
same page—so as to avoid what she has euphemistically called “unexpected side
effects.”
