Program

9:00 – 9:05 : Welcome

9:05 – 10:00 : Session 1 (Chair: Ranjit Jhala)

10:00 – 10:30 : Break

10:30 – 11:30 : Session 2 (Chair: Peter Thiemann)

11:30 – 12:00 : Session 3 (Chair: Peter Thiemann)

12:00 – 2:30 : Lunch

2:30 – 3:20 : Session 4 (Chair: David Walker)

3:20 – 4:00 : Session 5 (Chair: Saurabh Srivastava)

4:00 – 4:30 : Break

4:30 – 5:10 : Session 6 (Chair: Kathleen Fisher)

Abstracts

Probabilistic Programming

Authors: Andrew D. Gordon, Microsoft Research and University of Edinburgh

The promise of probabilistic programming is that not just machine learning researchers, but also data scientists, domain experts, and general developers can specify custom machine learning solutions by authoring small probabilistic programs, while leaving the inference mechanisms to a compiler. Delivering on this promise would democratize machine learning, by empowering orders of magnitude more users to apply it to their problems. My talk introduces probabilistic programming by example, and identifies some challenges to which programming language researchers can contribute.
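The flavor of probabilistic programming can be conveyed with a small, hypothetical example (written here in plain Python rather than any particular probabilistic language): the model is a few lines of ordinary code, and inference, with naive rejection sampling standing in for a real compiler's inference engine, is kept entirely separate from it.

```python
import random

random.seed(0)

def flip(p):
    """Bernoulli primitive: True with probability p."""
    return random.random() < p

def model():
    """A tiny probabilistic program: draw a coin bias from a uniform
    prior, flip the coin three times, and record whether the flips
    match the observed data (heads, heads, tails)."""
    bias = random.random()                    # prior: bias ~ Uniform(0, 1)
    flips = [flip(bias) for _ in range(3)]
    observed = flips == [True, True, False]
    return bias, observed

def infer(num_samples=100_000):
    """Rejection sampling: keep only the runs consistent with the data
    and average the sampled bias over them."""
    accepted = [bias for bias, ok in (model() for _ in range(num_samples)) if ok]
    return sum(accepted) / len(accepted)      # posterior mean of the bias

print(round(infer(), 2))  # roughly 0.6; the analytic posterior mean is 3/5
```

The point of the example is the separation of concerns: the author writes only `model`, and a probabilistic programming system would supply something far better than `infer`.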

Tilting At Windmills

Authors: Suresh Jagannathan and Jan Vitek

Harnessing wind energy is one of the pressing challenges of our time. The scale, complexity, and robustness of wind power systems present compelling cyber-physical system design issues. To address these challenges, we aim to develop a comprehensive computational infrastructure for programming distributed real-time embedded devices that collaborate to perform complex, time-critical tasks amenable to formal verification. In contrast to traditional efforts that focus on programming-in-the-small, we emphasize declarative specifications, robustness, longevity, and assurance.

We have embarked on the construction of Ensembles, a computational infrastructure for coordinating complex behavior in heterogeneous distributed real-time embedded (DRE) systems, and we are using this infrastructure to develop an automated wind farm control and sensing software system. A key component of this infrastructure is a programming model that includes mechanisms for declaring real-time and macroprogrammed constraints and performance requirements, mechanisms for distributed coordination among single-node virtual machines, and support for the highly heterogeneous compute environments encountered in our testbed.

Traité de lutherie, informatisé

Authors: Harry Mairson (Brandeis)

How to build a historically accurate software system for eighteenth-century string instrument design using principles of functional programming.

In 2006, a French luthier (string instrument maker) from Angers, named François Denis, published a book called “Traité de lutherie”, based on his research into the history of string instrument design. Modern luthiers make copies (with modifications, certainly) of famous instruments: for example, the renowned violoncellos of Montagnana, Gofriller, the Guarneri family, and the primus inter pares Stradivari. But these famous makers were not copying someone else: they had a way of designing their instrument outlines. How did they do it?

Denis proposed an organized, rational reconstruction of what these luthiers did, based on ruler and compass constructions. His book is brilliant, but not that easy to read—these directions are not easy to write down, even when the author understands them totally. The language is sometimes imprecise and awkward (or… was that my translation?!), and there is not always a hierarchical sense of design.

I have implemented a set of graphics primitives, on top of Scheme, with the goal of making Denis’s constructions explicit, clear to a human, nonexpert reader (read: luthier), and to facilitate their generation by a graphics package instead of (as he describes) drafting them by hand. The graphics primitives are based on what I call “functional geometry” (to reuse a term that was used by Peter Henderson in his 1980s-era graphics language), where ruler and compass constructions are described without any mathematics—in effect, going back from the Cartesian (which is the way we think—and program—about nearly everything) to the Euclidean. And it had to be without math: luthiers did not know any.
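As a hypothetical illustration of this coordinate-free style (these are not Mairson's actual Scheme primitives), all Cartesian arithmetic can be hidden inside two primitives, a compass and a ruler, after which Euclid's midpoint construction can be stated without mentioning a single coordinate:

```python
import math

# A point is just a pair; the constructions below never touch
# coordinates except inside these two primitives.

def circle_circle(c1, r1, c2, r2):
    """Compass primitive: the two intersection points of two circles."""
    (x1, y1), (x2, y2) = c1, c2
    d = math.dist(c1, c2)
    a = (r1**2 - r2**2 + d**2) / (2 * d)   # distance from c1 to the chord
    h = math.sqrt(r1**2 - a**2)            # half-length of the chord
    mx, my = x1 + a * (x2 - x1) / d, y1 + a * (y2 - y1) / d
    ox, oy = h * (y2 - y1) / d, h * (x2 - x1) / d
    return (mx + ox, my - oy), (mx - ox, my + oy)

def line_line(p1, p2, p3, p4):
    """Ruler primitive: intersection of line p1p2 with line p3p4."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / den
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

def midpoint(a, b):
    """Euclid I.10, coordinate-free: draw equal circles about A and B
    through each other; the line through their two intersections cuts
    AB at its midpoint."""
    r = math.dist(a, b)
    p, q = circle_circle(a, r, b, r)
    return line_line(p, q, a, b)

m = midpoint((0.0, 0.0), (4.0, 2.0))
print(round(m[0], 6), round(m[1], 6))  # 2.0 1.0
```

The client code in `midpoint` reads like the prose of a construction; a full functional-geometry layer would name and compose such constructions rather than expose the arithmetic.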

My goal is that this work could serve as a foundation for instrument design, where designs can be modified and experimented with, easily and rapidly, by modern makers. Moreover, it would facilitate a digital repository of classical designs that can be easily modified by luthiers. Finally and perhaps most interesting, the approach could serve as a tool in what I call “computational art history”, facilitating the identification of instruments and the history of their evolution.

Increasing Human Compiler Interaction

Authors: Thomas Ball, Peli de Halleux, Daan Leijen and Nikhil Swamy (MSR)

Programming language researchers can accelerate their ability to learn and explore new language ideas by exposing their new languages to users via web technologies, to observe and test the interactions between humans and compilers.

VisualizeSLE: A Visual Editor for Separation Logic Entailments

Authors: Aquinas Hobor, Lin Myat Soe and Bimlesh Wadhwa (NUS)

We propose VisualizeSLE, a visual proof editor for separation logic entailments.

Probabilistic Inference Using Program Analysis

Authors: Andy Gordon, Aditya Nori and Sriram Rajamani (MSR)

Probabilistic model checkers such as PRISM can analyze a variety of probabilistic models, including discrete- and continuous-time Markov chains, but they primarily check whether these models satisfy probabilistic temporal logic formulas (such as PCTL or PLTL); this limits their application to the formal methods community and to the analysis of system reliability and performance. In contrast, the machine learning community at large has focused on computing quantities such as maximum likelihood estimates or posterior probabilities for models, and such inferences have applications in a variety of domains, including information retrieval, speech recognition, computer vision, coding theory, and biology. In this paper, we argue that static and dynamic program analysis techniques can indeed be used to infer such quantities of interest to the machine learning community, thereby providing a new and interesting application domain for program analysis.
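A hypothetical example of the kind of quantity in question (not the authors' technique): for a small discrete model, a posterior probability can be computed exactly by enumerating every execution path of the program and summing the probability of the paths consistent with the observed evidence, precisely the sort of enumeration a program analysis could carry out symbolically.

```python
from itertools import product

# A discrete model, written as an explicit joint over program paths:
# rain is 0.2 likely; the sprinkler runs depending on rain; the grass
# is wet with a probability depending on both.
def prior(rain, sprinkler, wet):
    p = 0.2 if rain else 0.8
    p *= (0.01 if sprinkler else 0.99) if rain else (0.4 if sprinkler else 0.6)
    p_wet = 0.99 if (rain and sprinkler) else 0.9 if rain else 0.8 if sprinkler else 0.0
    return p * (p_wet if wet else 1 - p_wet)

def posterior(query, evidence):
    """Exact inference by exhaustive path enumeration: sum the
    probability of every world consistent with the evidence."""
    num = den = 0.0
    for rain, sprinkler, wet in product([False, True], repeat=3):
        world = {"rain": rain, "sprinkler": sprinkler, "wet": wet}
        if all(world[k] == v for k, v in evidence.items()):
            p = prior(rain, sprinkler, wet)
            den += p
            if world[query]:
                num += p
    return num / den

print(round(posterior("rain", {"wet": True}), 3))  # 0.413
```

Enumeration is exponential in the number of choices, which is exactly why smarter analyses (symbolic, abstraction-based, or approximate) are worth pursuing.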

Language Support for Efficient Dynamic Computation

Authors: Umut Acar, Ezgi Cicek and Deepak Garg (CMU & MPI-SWS)

While programming language researchers have the techniques and tools to design and develop programming languages that can improve the algorithmic (not just run-time) efficiency of computations, they usually avoid doing so, leaving this problem instead to algorithm designers. This approach results in a clean division of labor: programming language researchers focus on correctness and semantics, while algorithms researchers focus on algorithmic efficiency. It can, however, lead to undesirable outcomes such as impractical, overly complicated algorithms and inefficient programming languages. We believe that integrating concerns of algorithmic efficiency into language design can improve the impact of programming languages research. In this paper, we propose to study a notion of continuity, which we call dynamic optimality, for the purpose of improving the efficiency of computations operating on dynamically changing data.

Making proof-based verified computation almost practical: progress and next steps

Authors: Michael Walfish (UT Austin)

How can a machine specify a computation to another one and then, without executing the computation, check that the other machine carried it out correctly? And how can this be done without assumptions about the performer (replication, trusted hardware, etc.) or restrictions on the computation? This is the problem of verified computation, and it is motivated by the cloud and other third-party computing models. It has long been known that (1) this problem can be solved in theory using probabilistically checkable proofs (PCPs) coupled with cryptographic tools, and (2) the performance of these solutions is wildly impractical (trillions of CPU-years or more to verify simple computations).

I will describe a project at UT Austin that challenges the second piece of this folklore. We have developed an interactive protocol that begins with an efficient argument [IKO CCC 2007] and incorporates new theoretical work to improve performance by 20+ orders of magnitude. In addition, we have broadened the computational model from Boolean circuits to a general-purpose model (though there is much room for improvement here, and this would be a wonderful place to focus PL attention). We have fully implemented the system, accelerated it with GPUs, and developed a compiler that transforms computations expressed in a high-level language to executables that implement the protocol entities.

The resulting system, while not quite ready for the big leagues, is close enough to practicality to suggest that in the near future, PCPs could be a real tool for building actual systems.

Migration to the Cloud made Safe and Secure

Authors: Ken Eguro, Kaushik Rajan, Ravi Ramamurthy, Kapil Vaswani and Ramarathnam Venkatesan (MSR)

In the last few years, cloud computing has evolved from a buzzword to a critical infrastructure component of many enterprise and consumer services. The cloud provides virtually limitless compute, storage, and network resources at low cost, allowing services to scale on demand. However, the benefits of cloud computing do not come for free; building and running applications for the cloud comes with significant challenges, including the security of sensitive data. While mechanisms such as encryption offer a means to secure data at rest, these mechanisms have a huge bearing on the correctness, security, and performance of the rest of the application stack, making the process of migrating applications to the cloud extremely challenging. In this paper, we discuss some of these challenges and argue that for the full potential of the cloud to be realized, developers must have access to tools that enable secure and transparent migration of applications. We then highlight the critical role of programming languages research in this space.

Crossing the Software-Hardware Chasm With DSLs

Authors: Arvind Sujeeth et al. (Stanford & EPFL)

Due to the proliferation of heterogeneous processing elements (e.g. multicore, GPUs, DSPs, FPGAs), modern computer systems have more raw computing power than ever. However, writing software to efficiently utilize these processors is challenging; each hardware accelerator typically has its own programming model and requires careful optimization to obtain good performance. Domain-specific languages (DSLs) have been proposed as one way to target heterogeneous processors from high-level abstractions. However, a number of practical problems related to the adoption of high-performance DSLs remain wholly or partially unsolved, including integration, debugging, prototyping, and performance tuning. In this talk, we discuss different approaches to implementing high-performance DSLs, from purely embedded to stand-alone languages. We briefly present Delite [1], a Scala framework for rapidly developing performance-oriented embedded DSLs, and highlight Delite’s strengths and weaknesses as a DSL platform. The latter part of the talk is devoted to imagining what the next generation of a high-performance DSL platform should look like, along with very rough, off-the-wall ideas of how we might get from here to there.

Tempo Support in Programming Languages

Authors: Yu David Liu (SUNY Binghamton)

In this short essay, we explore programming language support for execution paces, which we term tempo. In particular, the essay illustrates its applications in energy-efficient programming and multi-threaded programming.

Biochemistry as a Programming Language

Authors: Saurabh Srivastava et al. (UC Berkeley)

Synthetic biology makes biology engineerable. One objective of this engineering is to modify the chemical reactions within the cell, i.e., the biochemistry, to produce non-native compounds of commercial interest. To do this at scale, ideas from language design, verification, and synthesis will be useful. In this talk, we present our lessons learned, future avenues, and open problems in formalizing biochemistry.

Fast Analysis of Biological Models Using Open-World Logic Programming

Authors: Benjamin Hall, Ethan Jackson and Jasmin Fisher (MSR)

Executable biology describes biological systems as formal models, which can be explored through simulation and model checking. Computational models of biology support explicit representations of biological phenomena, even in the presence of unknown initial conditions and system parameters. In this work we apply open-world logic programming to axiomatize biological systems in the presence of unknowns. Unknowns are treated as open program relations in the logic program. Scientifically relevant questions can be phrased as queries on open-world programs: “Are there closures of the program where the biological system stabilizes in configuration X?” We show that this approach is capable of answering queries several orders of magnitude faster than traditional model checking techniques. We validated our approach on executable models of C. elegans (a nematode worm).

To solve these problems we apply the open-world LP language FORMULA. It implements an open-world query operation via symbolic execution, and produces formulas solvable by modern SMT solvers. FORMULA reduces query operations to the Z3 SMT solver. Using this approach we can explore the consequences of different mutations 1-2 orders of magnitude faster than previous approaches, allowing us to more quickly develop and refine models of biological signalling.
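The flavor of an open-world query can be sketched with a toy example (plain brute-force enumeration here; this does not attempt to reproduce FORMULA's symbolic execution or its reduction to Z3). The unknowns are two hypothetical gene-knockout flags, and the query asks which closures, i.e. which assignments to the flags, make the network stabilize in a given configuration from every starting state.

```python
from itertools import product

# Toy two-gene Boolean network: A is activated by B, B is repressed
# by A. The "knocked_out" flags are the unknowns (open relations).
def step(state, knocked_out):
    a, b = state
    na = False if knocked_out[0] else b        # A's update rule
    nb = False if knocked_out[1] else not a    # B's update rule
    return (na, nb)

def stabilizes(initial, knocked_out, max_steps=16):
    """Return the fixed point reached from this start, or None."""
    state = initial
    for _ in range(max_steps):
        nxt = step(state, knocked_out)
        if nxt == state:
            return state
        state = nxt
    return None

def closures_stabilizing_at(target):
    """Open-world query by brute force: which assignments to the
    unknown knockout flags make every initial state stabilize at
    the target configuration?"""
    answers = []
    for knocked_out in product([False, True], repeat=2):
        if all(stabilizes(init, knocked_out) == target
               for init in product([False, True], repeat=2)):
            answers.append(knocked_out)
    return answers

print(closures_stabilizing_at((False, False)))
# [(False, True), (True, True)]: knocking out B (alone or with A)
# forces the system to the all-off state; the unmutated network
# oscillates and never stabilizes.
```

The brute-force loop explodes with the number of unknowns; handing the same query to an SMT solver, as FORMULA does, is what makes realistically sized models tractable.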

Off The Beaten Track 2013
New Frontiers for Programming Languages Research