Growing Solver-Aided Languages with Rosette
Abstract
Decision procedures have helped automate key aspects of programming: coming up with a code fragment that implements a desired behavior (synthesis); establishing that an implementation satisfies a desired property (code checking); locating code fragments that cause an undesirable behavior (fault localization); and running a partially implemented program to test its existing behaviors (angelic execution). Each of these aspects is supported, at least in part, by a family of formal tools. Most such tools are built on infrastructures that are tailored for a particular language or purpose, e.g., Boogie for verification and Sketch for synthesis of C-like languages. But so far, no single infrastructure provides a platform for automating the full spectrum of programming activities, or for making such automation easily available in new languages and programming models.
This talk introduces Rosette, a new shared infrastructure for designing and growing solver-aided languages. The Rosette system is itself a solver-aided EDSL (embedded domain-specific language), which inherits and exposes extensive support for metaprogramming from its host language, Racket. The system works through a combination of program transformation (enabled by Racket’s metaprogramming facilities) and symbolic execution, in which programming constructs and operators accept both concrete and symbolic values. With our framework, a designer defines a new solver-aided language by simply writing an interpreter for it in Rosette. Such a language will allow some constructs to be underspecified---that is, to produce symbolic "holes" rather than just concrete values. When a program with holes and the language interpreter are compiled with Rosette, the result is a custom symbolic execution engine for the given program and language. Executing this engine yields an encoding of the program’s semantics as a set of constraints, which are then solved by an off-the-shelf decision procedure to discharge synthesis, verification, and other queries about the program. We show how to use Rosette to prototype a tiny DSL with a program checker, a fault localizer, an angelic oracle, and an inductive synthesizer.
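The workflow the abstract describes — write an interpreter for a tiny DSL, leave a symbolic "hole" in a program, and let a search procedure fill the hole so that the program meets a specification — can be illustrated with a minimal sketch. Rosette itself is a Racket EDSL backed by an SMT solver; the Python analogue below replaces the solver with brute-force enumeration purely for illustration, and every name in it (the tuple-based DSL, `interpret`, `synthesize`) is hypothetical rather than part of Rosette's actual API.

```python
# Illustrative sketch only: Rosette is a Racket EDSL that encodes program
# semantics as constraints for an SMT solver. This Python analogue swaps the
# solver for brute-force enumeration to show the same workflow: an
# interpreter for a tiny DSL, a program with a "hole", and a search for a
# hole value satisfying input/output examples (inductive synthesis).

def interpret(expr, env, hole):
    """Interpreter for a tiny arithmetic DSL.

    Expressions are nested tuples: ('add', e1, e2), ('mul', e1, e2),
    ('var', name), ('const', n), or ('hole',) -- an unknown constant.
    """
    op = expr[0]
    if op == 'const':
        return expr[1]
    if op == 'var':
        return env[expr[1]]
    if op == 'hole':
        return hole
    if op == 'add':
        return interpret(expr[1], env, hole) + interpret(expr[2], env, hole)
    if op == 'mul':
        return interpret(expr[1], env, hole) * interpret(expr[2], env, hole)
    raise ValueError(f"unknown op: {op}")

def synthesize(program, examples, candidates=range(-10, 11)):
    """Return a hole value making program match all (env, output) examples,
    or None if no candidate works."""
    for h in candidates:
        if all(interpret(program, env, h) == out for env, out in examples):
            return h
    return None

# Find ?? such that  x * ?? + 1  maps x=2 -> 7 and x=5 -> 16.
program = ('add', ('mul', ('var', 'x'), ('hole',)), ('const', 1))
examples = [({'x': 2}, 7), ({'x': 5}, 16)]
print(synthesize(program, examples))  # -> 3
```

In Rosette proper, the enumeration loop disappears: the interpreter runs symbolically over a fresh symbolic constant in place of the hole, producing constraints that a decision procedure solves directly, which is what makes the same interpreter reusable for verification, fault localization, and angelic execution.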
There’s a Game for That
With new technologies and interfaces, the gaming world is pushing the boundaries of what it means to play. Today computer games aim to do much more than offer players the chance to shoot the bad guy. We can now use them to learn how to read, how to make environmentally conscious decisions, and even how to become a better actress, all without sacrificing the fun.
Playing a game inherently requires a certain amount of learning, according to Casper Harteveld, assistant professor of art and design. For example, you need to learn how the game space works and how you can level up your character within it. So while it was perhaps an unconscious development, games have become an ideal educational platform for teaching a broad spectrum of topics.
At the fourth Pop Up Open Lab Experience and Reception, held in the Digital Media Commons on Monday, researchers from across the university came together to demonstrate how they are both utilizing and optimizing games to address a variety of problems. The event was hosted by the Office of the Provost, the College of Arts, Media and Design and the College of Computer and Information Science.
Many of the games on display focus on health-related challenges or explore back-end methods for making those games more engaging and effective in their educational goals.
A capstone team comprising four physical therapy students and one neuroscience student is exploring how a robotic smart glove for stroke survivors can more effectively help patients regain their motor skills. The team believes that if the user’s hand motions control a virtual environment instead of an image of a hand on the computer screen, she will be more likely to return to the device repeatedly, said team member Jacob Watterson. Making that virtual environment part of a game should only increase this likelihood, he said.
The issue of repeatability seemed to be on many of the researchers’ minds. For example, Gillian Smith, an assistant professor in the College of Computer and Information Science and College of Arts, Media and Design, is exploring how automatic content generation can expand the game space to make it more dynamic for the user. Professor Magy Seif El-Nasr, director of the game design program with joint appointments in the College of Arts, Media and Design and the College of Computer and Information Science, and Russell Pensyl, a professor of interactive media, are working on incorporating emotion recognition into the gaming experience. The goal of the Affective Media project is to allow games to respond to a user’s experience in order to generate content that will be more likely to keep players engaged.
Alessandro Canossa, associate professor of game design, is developing tools for designers to help them make better games for their users. Using the Google Maps API, his G-Player tool maps the virtual space of a game and shows designers the areas players most often populate. If they see that an entire area of the game is never used, they might either expand the area’s accessibility or cut it out completely. This way the designers can help promote greater interest and usability, he said.
Other games on display explored a variety of challenges. Some aim to promote healthy behaviors while others explore the use of interactive storytelling to promote engagement. The diversity of projects showed that gaming has clearly reached its tentacles into a variety of disciplines. What was once a tool merely for fun is now a fun tool for education and learning across a spectrum of topics.
Hurricane Sandy – Professor David Lazer Studies Cell Phone Use in Natural Disasters
My lab, with the support of the NSF, is launching a crowd-sourced study of Hurricane Sandy, so as to better understand how people react in emergencies. If you were affected by Hurricane Sandy and use an Android phone, I hope you will be willing to help out. This will take 10-15 minutes of your time. And if you weren’t, I hope you can pass this post on to someone who was affected by Sandy.
How do people respond in large-scale emergency situations, like earthquakes and hurricanes? Understanding this should inform more effective responses to save lives and reduce hardships. Getting hard behavioral data in the moment and aftermath is difficult–because people have better things to do than to participate in a study. There is quite a bit of valuable research based on interviews after the fact, but such research necessarily relies on reconstructed memories of behavior.
There is another path–which is to study the data passively collected about people by the sociotechnical systems relied upon during emergencies. An outstanding example of this is the paper by Bagrow et al. that examined behavior as captured by mobile phones during a set of emergencies. The power of this approach is that it offers hard behavioral data at massive scale. The shortcoming, however, is that it cannot contextualize the data beyond geography. Who, exactly, are people calling? Their spouses? Friends? What are they communicating–the need for help, reassurances that they are ok?
Here we are launching a study that sits between these two approaches. Essentially, we are asking people to load an app on their Android phones (iPhone users: sorry, but for now we could only afford to develop for one platform). The app will ask about their situations during Hurricane Sandy, examine their calling and texting behaviors, and ask them about their relationships with the people they contacted. We will therefore get a precise record of behaviors before/during/after Hurricane Sandy, and contextualize it within personal circumstances and particular relationships.
My motivation here is scientific and personal. I think there is the possibility to do great science here that is potentially consequential for people’s lives, that can inform interventions that will help people. And, having grown up on Long Island, and spent the early part of my career in Red Bank, New Jersey –near the shore (“shaw”)– I saw a lot of suffering among my friends and family in the aftermath, and there was very little I could do. But this study is at least something good that I can make out of a terrible thing.
We have posted more information about the study on our newly launched crowd-sourced science website, Volunteer Science, or you could go directly to the Google Play store.
