Hi, I'm David Gros.
I am a computer scientist and engineer interested in AI, NLP, ML, VR/AR, and UI/UX. Currently I'm a 2nd-year PhD student at UC Davis, where I'm co-advised by Prem Devanbu and Zhou Yu. I am also very interested in how to make AI systems that benefit the world and how to improve AI safety, explainability, and policy.



AInix

In May 2019 I completed my Turing Honors thesis on AInix: An open platform for natural language interfaces to shell commands. It is intended to be a free and open platform for AI-assisted computing. It enables aish, a command-line shell that understands both natural language and normal shell commands. While it initially focuses on making it easier to use traditional Unix-like commands, it aspires to eventually power language understanding for more general tasks like checking the weather or ordering something online. Eventually it is intended to support a wide variety of language-driven user interfaces ("shells") such as voice assistants, chatbots, robots, video game characters, and more.

My work on AInix has been temporarily on hold while I start my PhD. If you want to find out more, the project has a site at AInix.org (or see the thesis writeup).

Selfie Anywhere

Selfie Anywhere is an augmented reality (AR) app that puts you inside a photosphere by changing the background of a selfie in realtime. It's like a green screen putting you on a relaxing beach, in front of a famous monument, or on the surface of Mars, but without requiring you to have a green screen in your living room.

Automatically figuring out what counts as the background of a selfie is a surprisingly complex task. People strike many different poses, a selfie can contain any number of people, and the setting and clothing vary enormously. The problem is made even harder by the need to run on a phone in near-realtime.

To solve the problem I explored a variety of classical computer vision approaches before moving to a custom deep learning solution. State-of-the-art "off the shelf" pretrained DL models at the time were too large to run quickly on a phone and were not tuned specifically for selfies. Instead, I experimented with dozens of different models in Keras and TensorFlow (#Selfienet) and created a custom dataset of thousands of labeled selfies (#Selfieset) using Amazon Mechanical Turk. Two models ship with the app: one for realtime segmentation, and a larger one for refining the picture after you take it.
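Once a segmentation model produces a per-pixel foreground mask, the compositing step itself is conceptually simple. Here is a minimal illustrative sketch in plain Python (hypothetical names; the app actually runs its models and compositing on-device with optimized code):

```python
def composite(selfie, background, mask, threshold=0.5):
    """Replace pixels the segmentation model labels as background.

    selfie, background: lists of rows of (r, g, b) tuples, same shape.
    mask: same shape, each entry the model's probability that the
    pixel belongs to a person (foreground).
    """
    out = []
    for y, row in enumerate(selfie):
        out_row = []
        for x, pixel in enumerate(row):
            # Keep the person; swap everything else for the new scene.
            keep = mask[y][x] >= threshold
            out_row.append(pixel if keep else background[y][x])
        out.append(out_row)
    return out
```

A real pipeline would blend softly near mask edges rather than thresholding, but the hard part is producing a good mask in the first place.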

The backend for the app uses AWS Lambda and S3 to store the locations where you can take a selfie. Selfie Anywhere supports photospheres and panoramas that pan and tilt in sync with gyro and accelerometer readings from your phone, which was also quite a challenge to get working. There is also basic functionality for a gallery of pictures you have taken in the app and for sharing your selfies to other apps.
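At its core, keeping a photosphere in sync with device orientation means mapping the view's yaw and pitch angles onto texture coordinates of an equirectangular image. A rough sketch, assuming a hypothetical helper (the app's actual sensor fusion and rendering are more involved):

```python
import math

def equirect_uv(yaw, pitch):
    """Map a view direction (in radians) to (u, v) coordinates in an
    equirectangular photosphere texture, both in [0, 1]. yaw is rotation
    about the vertical axis; pitch is elevation, clamped to +/- 90 deg."""
    pitch = max(-math.pi / 2, min(math.pi / 2, pitch))
    u = (yaw / (2 * math.pi)) % 1.0   # wraps around horizontally
    v = 0.5 - pitch / math.pi         # pitch of +90 deg looks at the top
    return u, v
```

The fiddly part in practice is fusing noisy gyro and accelerometer readings into stable yaw/pitch values before this mapping is applied.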

This started as a hackathon project in October 2016 and was released in May 2017. During the spring 2017 semester I applied to the Longhorn Startup Lab and was one of about 13 teams to get in. There I learned a lot about startups, and was selected as one of roughly the top six teams to present at South by Southwest 2017.

After I released it I wasn't quite sure how to get people to use it, but you can check it out on the Play Store. (There is also a simple splash page for it at SelfieAnywhere.com.)

GolfMe VR

GolfMe VR is a Google Cardboard VR game where you play as a golf ball and launch yourself toward the hole. I made the game in Unity 3D and modeled nine low-polygon holes in Blender. The holes range from fairly normal golf holes to multi-tiered floating islands that you have to hop between. It is fully playable and fairly fun and challenging (though it's probably not for the easily motion sick). It was a cool first venture into VR development that I worked on from October 2015 to January 2016.


FPChess

FPChess (or first-person chess) is a VR chess game for Daydream where you play as if you were the king piece. It offers a different perspective on the game, since you can't view the whole board at once and have to be constantly looking around and monitoring the game.

It was made in Unity 3D, and I modeled low-poly chess pieces in Blender. I experimented with writing my own engine, but ultimately decided to implement a UCI (universal chess interface) bridge so it can hook in with just about any engine (I ended up using Stockfish).
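UCI is a line-based text protocol, which is what makes such a bridge engine-agnostic. A minimal sketch of the message formatting and reply parsing (hypothetical helper functions; in the game these lines would go over a pipe to the engine process):

```python
def uci_go_command(moves, movetime_ms=1000):
    """Build the UCI lines asking an engine for its move, given the
    game so far as long-algebraic moves like 'e2e4'."""
    position = "position startpos"
    if moves:
        position += " moves " + " ".join(moves)
    return [position, "go movetime %d" % movetime_ms]

def parse_bestmove(line):
    """Extract the move from an engine reply such as
    'bestmove e7e5 ponder g1f3'; returns None for other output lines."""
    parts = line.split()
    if len(parts) >= 2 and parts[0] == "bestmove":
        return parts[1]
    return None
```

Because any UCI engine speaks these same commands, swapping Stockfish for another engine is just a matter of launching a different executable.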

I started it in November 2016 just to be able to play around with the then recently released Daydream headset; then in July 2017 I made it more playable.


Stratlines

Stratlines is a real-time multiplayer strategy game I wrote in ActionScript 3 for the frontend and C# for the server. I wrote most of it when I was in 9th grade, then finally decided to clean it up a little and publish it a few years later. I learned a lot from it, since it was one of my first large programming projects in an object-oriented language, and my first time dealing with networking or complex graphics.

In fall 2016 I revisited it and wrote a simple AI player using neural nets and genetic algorithms with self-play for a "Computational Intelligence in Game Design" class I took. It was pretty fun to dust off code I had written years ago. (I didn't end up deploying the AI player to production, though; it was pretty bad, and a lot more work would have been needed to make it a competitive opponent.)
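The general approach can be sketched as a toy evolution strategy: treat a small policy network's parameters as a flat weight vector, mutate copies of the better candidates, and keep whichever score higher when evaluated. This is purely illustrative (the class project's actual network and self-play game loop were more involved, and in self-play the fitness function would pit candidates against each other):

```python
import random

def evolve(fitness, n_weights=8, generations=50, pop_size=10,
           sigma=0.3, seed=0):
    """Evolve a weight vector (e.g. a tiny policy network's parameters)
    to maximize `fitness` via selection plus Gaussian mutation."""
    rng = random.Random(seed)
    pop = [[rng.gauss(0, 1) for _ in range(n_weights)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]  # keep the better half
        # Refill the population with mutated copies of survivors.
        pop = survivors + [
            [w + rng.gauss(0, sigma) for w in rng.choice(survivors)]
            for _ in range(pop_size - len(survivors))
        ]
    return max(pop, key=fitness)

# Stand-in fitness for demonstration: prefer weights near 1.0.
best = evolve(lambda ws: -sum((w - 1.0) ** 2 for w in ws))
```

With a real game, each fitness evaluation is a batch of matches, which is exactly why training a competitive opponent this way gets expensive.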

If you are willing to wrangle with Flash, you can still play Stratlines here.

FRC Scouting Database

The FRC Scouting Database is a tool I built for keeping track of robot statistics while I was involved with my high school robotics team, CRyptonite.

In the FIRST Robotics Competition, events have around 50 robot teams. You are randomly paired with teams in the qualification rounds, and then may have to pick your top choices of alliance partners going into the elimination rounds. This requires keeping track of the abilities of all the robots.

I made the first version of it in 2009 when I was in 6th grade. This was years before I was on the team, but while watching my older brother compete I wanted desperately to get involved. I originally used my knowledge of VBA programming and Excel to make a very rudimentary simulator, which grew into an Excel database and visualization tool. Year after year it became more complex and more people got involved. By 2015 it was a fairly complex web app with a MySQL/PHP backend for entering and exploring data on the go from a mobile device.

FreeDestroy (Unity 3D)

For a final project in an honors computer graphics class I took, a partner and I decided to create a Unity 3D plugin for destroying and fracturing objects. The plugin allows the user to generate a 3D fracturing of a mesh without leaving the Unity engine.


DataVR

DataVR was a simple 24-hour hack done at HackRice. Basically, you could upload a CSV file to a simple web interface, and it would render a scatter plot, bar graph, or pie chart of it in VR. It was cool because we used WebVR and got to play around with WebGL/Three.js, which I hadn't done before. We also got an award for "Best Data Hack".
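The core of a hack like this is just turning CSV rows into coordinates a 3D scene can place. A rough sketch of that transformation (hypothetical helper shown in Python; the hack itself did the equivalent in JavaScript before handing points to Three.js):

```python
import csv
import io

def csv_to_points(csv_text, cols=(0, 1, 2)):
    """Parse numeric CSV columns into (x, y, z) tuples scaled to the
    unit cube, ready to hand to a 3D scatter-plot renderer."""
    rows = [[float(r[c]) for c in cols]
            for r in csv.reader(io.StringIO(csv_text))]
    lo = [min(r[i] for r in rows) for i in range(3)]
    hi = [max(r[i] for r in rows) for i in range(3)]
    # Avoid divide-by-zero when a column is constant.
    span = [(h - l) or 1.0 for l, h in zip(lo, hi)]
    return [tuple((r[i] - lo[i]) / span[i] for i in range(3))
            for r in rows]
```

Normalizing to a unit cube keeps arbitrary datasets at a comfortable scale in the VR scene regardless of their original units.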

Professional Experience


Microsoft

Summer 2020

At Microsoft I explored data mining of software repositories.

University of Texas at Austin: Applied Research Laboratories (ARL:UT)

Summer 2019

At ARL:UT I researched machine learning and NLP techniques as applied to security-related problems.

NASA Jet Propulsion Laboratory

Summer 2018

At JPL I worked in the OpsLab. There I was involved in several projects doing development in Unity 3D and ReactJS. I worked on OnSight, a HoloLens AR application for "walking on Mars," and ASTTRO, a web tool built off of OnSight for science planning during the Mars 2020 rover mission. I also worked on AR applications for the InSight lander.


Salesforce

Summer 2017

At Salesforce I worked on the Apex Compiler team. I developed a command line tool and REST endpoints for adjusting compiler settings for thousands of customers at once. Towards the end of the summer I worked on fixing bugs in a new compiler release.

Houston Mechatronics Inc.

Summer 2016

At Houston Mechatronics I worked on software for a complex automation process involving several robotic arms working together. I got to work on several parts of the project, ranging from general process control flow to computer vision and robot sensor data visualization. I also worked on a genetic algorithm for optimizing electric motors.

NASA Johnson Space Center

Summer 2015

At NASA Johnson Space Center I mostly worked on the Resource Prospector rover. A fellow intern and I developed a tool to monitor the experimental batteries in the 1G full-scale prototype of the rover. We also worked on a tool to make it easier to control and experiment with the rover.

Wood Group Mustang

October 2014 - April 2015

At Wood Group Mustang I worked on a C# tool for visualizing an engineering project's workflow.

Ochsner Hospital

Summer 2014

At Ochsner Hospital I worked on research studying the effects of estrogen on bone mineral density. I worked with a dataset mostly in Excel, cleaning the data, calculating statistics, and creating visualizations.

Get in Touch