Hi, I'm David Gros.
I’m a computer scientist and engineer who works mostly in Artificial Intelligence, Machine Learning, and Natural Language Processing. I'm currently a PhD Candidate at UC Davis, where I'm co-advised by Prem Devanbu and Zhou Yu. I am also very interested in how to make AI systems that benefit the world and how to improve AI safety, explainability, and policy.

Projects

Avoiding machines that pretend to be human

As machines become more capable with language, they could deceive humans into thinking they are human. This deception can be explicit (e.g., a human asks a chatbot “are you a robot?” and it falsely claims to be human) or implicit (a machine implying it is human, e.g., “that movie made me cry”). I have researched how to avoid both cases.

To explore the explicit case, I worked with collaborators to create the R-U-A-Robot dataset, which collects ~2500 phrasings of how someone might ask whether a system is human or non-human. It was published at ACL 2021. To explore the implicit case, we created the Robots-Dont-Cry dataset, which studies the kinds of things a human can say but a machine cannot truthfully say. It was published at EMNLP 2022.
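As a toy illustration of the explicit case (and not the actual system from the paper), here is a minimal sketch of how phrasing data like R-U-A-Robot's could train a small classifier that triggers an honest disclosure; the example queries and the `handle_normally` helper are hypothetical stand-ins:

```python
# Sketch: flag "are you a robot?"-style queries so a chatbot can
# respond with an honest disclosure. The tiny query list below is
# hypothetical; the real dataset has ~2500 labeled phrasings.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

queries = [
    "are you a robot?", "am I talking to a real person?",
    "is this an AI?", "are you human or a machine?",
    "what's the weather today?", "can you recommend a movie?",
    "do you like robots?", "tell me about yourself",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]  # 1 = asking whether the system is human

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(queries, labels)

def handle_normally(user_text: str) -> str:
    return "..."  # placeholder for the normal dialogue policy

def respond(user_text: str) -> str:
    if clf.predict([user_text])[0] == 1:
        return "I am a chatbot, not a human."  # honest explicit disclosure
    return handle_normally(user_text)

print(respond("hey, are you actually a real human?"))
```

A real system needs far more care than this (ambiguous phrasings, paraphrases, adversarial users), which is exactly why a large dataset of phrasings is useful.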

I believe that making social and technical progress on "simple norms" (such as that machines should be honest and not pretend to be human) is one useful category of steps toward understanding broader AI alignment.

AI for Software Engineering (AI4SE)

Part of my research focuses on Artificial Intelligence applied to Software Engineering. I have worked on program generation, automatically converting natural language into programs. In 2019 I completed my Turing Honors thesis on converting natural language to bash commands, and I was later involved in the NeurIPS NLC2CMD challenge. I have worked in other diverse AI4SE areas such as automated code summarization, low-code interfaces, and automated code review. I have also sought to understand how AI for Software Engineering fits into critical AI Safety problems as we approach a world with highly intelligent machines.

VR/AR

In the past I developed several virtual reality (VR) and augmented reality (AR) projects.

In 2017 I developed Selfie Anywhere, an augmented reality Android app that puts you inside a photosphere by changing the background of a selfie in real time (in the post-COVID world, this can be thought of a bit like the movable virtual backgrounds in video calls). To make this run on circa-2015 mobile hardware, I used a mix of classical computer vision and deep learning: I collected a labeled dataset of thousands of diverse selfies and designed custom deep learning models. My team was one of the 13 accepted into Longhorn Startup Lab that year and one of the top 6 selected to present at the South by Southwest conference.
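For flavor, here is a minimal sketch of the compositing step such an app needs once a segmentation model has produced a foreground mask; the function and toy data are assumptions for illustration, not the app's actual code:

```python
# Sketch: blend a selfie onto a new background using a soft
# per-pixel foreground mask from some segmentation model.
import numpy as np

def replace_background(selfie: np.ndarray,
                       background: np.ndarray,
                       mask: np.ndarray) -> np.ndarray:
    """selfie, background: HxWx3 uint8 images of the same size.
    mask: HxW float array in [0, 1]; 1.0 = person, 0.0 = background."""
    alpha = mask[..., None]  # broadcast the mask over the color channels
    out = (alpha * selfie.astype(np.float32)
           + (1.0 - alpha) * background.astype(np.float32))
    return out.astype(np.uint8)

# Toy usage with random data standing in for real images and a real mask.
h, w = 480, 640
selfie = np.random.randint(0, 256, (h, w, 3), dtype=np.uint8)
photosphere_view = np.random.randint(0, 256, (h, w, 3), dtype=np.uint8)
mask = np.zeros((h, w), dtype=np.float32)
mask[100:400, 200:440] = 1.0  # pretend the person occupies this region
composite = replace_background(selfie, photosphere_view, mask)
```

The hard part on mobile hardware is producing a good mask fast enough, not the blend itself, which is why the app combined cheap classical vision with a small learned model.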

In 2016, I developed a few games for Google Cardboard (GolfMe, in which the player plays as a golf ball) and for Google Daydream (FPChess, a first-person VR chess game). I also worked with the Microsoft HoloLens during my time at NASA and have experimented with other mobile AR frameworks.

Professional Experience

Microsoft

Summer 2022

At Microsoft I investigated applications of AI to code review and methods for extracting uncertainty from large language models.

IBM Research

Summer 2021

At IBM I explored converting natural language to domain-specific visual programs.

Microsoft

Summer 2020

At Microsoft I did exploratory data analysis on software static analysis errors.

University of Texas at Austin: Applied Research Lab (ARL)

Summer 2019

At ARL:UT I researched machine learning and NLP techniques as applied to security-related problems.

NASA Jet Propulsion Laboratory

Summer 2018

At JPL I worked in the OpsLab, where I was involved in several projects doing development in Unity3D and ReactJS. I worked on OnSight, a HoloLens AR application for “walking on Mars,” and ASTTRO, a web tool built on OnSight for science planning during the Mars 2020 Rover mission. I also worked on AR applications for the InSight lander.

Salesforce

Summer 2017

At Salesforce I worked on the Apex Compiler team. I developed a command line tool and REST endpoints for adjusting compiler settings for thousands of customers at once. Towards the end of the summer I worked on fixing bugs in a new compiler release.

Houston Mechatronics Inc.

Summer 2016

At Houston Mechatronics I worked on software for a complex automation process involving several robotic arms working together. I got to work on several parts of the project, ranging from general process control flow to computer vision and robot sensor data visualization. I also worked on a genetic algorithm for optimizing electric motors.

NASA Johnson Space Center

Summer 2015

At NASA Johnson Space Center I mostly worked on the Resource Prospector rover. A fellow intern and I developed a tool to monitor the experimental batteries in the full-scale 1G prototype of the rover. We also built a tool to make it easier to control and experiment with the rover.

Wood Group Mustang

October 2014 - April 2015

At Wood Group Mustang I worked on a C# tool for visualizing an engineering project's workflow.

Ochsner Hospital

Summer 2014

At Ochsner Hospital I worked on research studying the effects of estrogen on bone mineral density. I worked with a dataset, mostly in Excel, cleaning the data, calculating statistics, and creating visualizations.

Get in Touch