SPLASH 2017
Sun 22 - Fri 27 October 2017 Vancouver, Canada

Slack will be used during the workshop. Details to follow!

Live programming systems abandon the traditional edit-compile-run cycle in favor of fluid user experiences that encourage powerful new ways of “thinking to code” and enable programmers to see and understand their program executions. Programming today requires much mental effort with broken, stuttering feedback loops: programmers carefully plan their abstractions, simulating program execution in their heads; the computer is merely a receptacle for the resulting code and a means of executing it. Live programming aims to create a tighter, more fluid feedback loop between the programmer and computer, allowing the computer to augment more of the programming process by, for example, letting programmers progressively mine abstractions from concrete examples and providing continuous feedback about how their code will execute. Meanwhile, under the radar of the PL community at large, a nascent community has formed around the related idea of “live coding”: live audiovisual performances that use computers and algorithms as instruments and include live audiences in their programming experiences. This workshop focuses on exploring notions and degrees of live programming as they relate to development, creative activities, learning, and performance. We are interested in methodologies, tools, demos, infrastructures, language designs, and questions that stimulate interest in and understanding of live programming.

Following up on the success of the LIVE workshops at ECOOP 2016 and ICSE 2013, LIVE 2017 solicits high-quality submissions on live programming and will discuss how to move forward with this topic to enable better programming experiences.

Tue 24 Oct

08:30 - 10:00: LIVE 2017 - Spring at Room 2
  08:30 - 09:20  Other
  09:20 - 09:40  Talk
  09:40 - 10:00  Talk

10:30 - 12:00: LIVE 2017 - Summer at Room 2
  10:30 - 11:20  Other
  11:20 - 11:40  Talk
  11:40 - 12:00  Talk (Media Attached)

13:30 - 15:00: LIVE 2017 - Fall at Room 2
  13:30 - 14:20  Other
  14:20 - 14:40  Talk
  14:40 - 15:00  Talk

15:30 - 17:00: LIVE 2017 - Winter at Room 2
  15:30 - 15:50  Talk
  15:50 - 16:10  Talk
  16:10 - 17:00  Other

Call for Papers

LIVE 2017 aims to bring together people who are interested in live programming. Live programming systems abandon the traditional edit-compile-run cycle in favor of fluid user experiences that encourage powerful new ways of “thinking to code” and enable programmers to see and understand their program executions. Programming today requires much mental effort with broken, stuttering feedback loops: programmers carefully plan their abstractions, simulating program execution in their heads; the computer is merely a receptacle for the resulting code and a means of executing it. Live programming aims to create a tighter, more fluid feedback loop between the programmer and computer, allowing the computer to augment more of the programming process by, for example, letting programmers progressively mine abstractions from concrete examples and providing continuous feedback about how their code will execute. Meanwhile, under the radar of the PL community at large, a nascent community has formed around the related idea of “live coding”: live audiovisual performances that use computers and algorithms as instruments and include live audiences in their programming experiences.

We encourage short research papers, position papers, web essays, tool demonstrations (as videos), and performance proposals in areas such as:

  • Recent work in REPLs, language environments, code playgrounds, and interactive notebooks.
  • Live visual programming.
  • Programming by example.
  • Programming tools for creative experiences and interactive audiovisual performances.
  • Live programming as a learning aid.
  • Fluid debugging experiences.
  • Language design in support of the above.

Submissions are due on August 8th and will go through HotCRP at https://live17.hotcrp.com/paper/new. The workshop is open to various kinds of media: you can write a traditional short paper (PDF), a web essay with embedded videos, a narrated video, or whatever else you think explains your work best! Content should be consumable in 30 minutes to an hour of a casual reader’s time: around 5-10 pages for a paper, 10-20 minutes for a video (assuming the viewer would need to pause to contemplate), or a few pages for an essay. Video and other non-paper submissions can be listed as URLs (e.g. to a web page, file locker, or streaming site) in the submission’s abstract.

For any questions or trouble with submitting, please contact mcdirmid@outlook.com. This CFP is hosted at https://2017.splashcon.org/track/live-2017#Call-for-Papers.

You can submit any kind of media that you feel is appropriate (video, paper, web page, etc.). The submission should be in a short form that can be digested by a reader in less than an hour. For papers, please limit your content to 5 pages; for videos, 5-10 minutes is sufficient; web pages should be around 5 pages printed out.

Format

If you are submitting a paper, please use the ACM SIGPLAN Conference acmart format with the ‘sigplan’ subformat and 10-point Times New Roman. All submissions should be in PDF format. If you use LaTeX or Word, please use the ACM SIGPLAN acmart templates provided here. Otherwise, follow the author instructions.

If you are formatting your paper using LaTeX, you will need to set the 10pt option in the \documentclass command. If you are formatting your paper using Word, you may wish to use the provided Word template that supports this font size. Please include page numbers in your submission with the LaTeX \settopmatter{printfolios=true} command. Please also ensure that your submission is legible when printed on a black and white printer. In particular, please check that colors remain distinct and font sizes are legible.

Publication (Digital Library Early Access Warning)

AUTHORS TAKE NOTE: The official publication date is the date the proceedings are made available in the ACM Digital Library. This date may be up to two weeks prior to the first day of the conference. The official publication date affects the deadline for any patent filings related to published work.

In the movie Iron Man, Tony Stark uses an interactive holographic environment that reifies a model of his power suit, which he manipulates directly to express its design. Watching someone really program is comparatively boring, since most of the “action” occurs in the programmer’s head, where they solve problems, perform abstraction, and reason about code. Live programming aims to change this by moving more thinking from the programmer’s head into the computer. Live programming first eliminates the phase distinction between code editing and debugging, creating a more fluid experience where observed program behavior can immediately inform the edits made by the programmer. A basic live programming environment can be built simply by continuously re-executing whole programs as they are being edited. The technical challenge is then to “scale up” the implementation to handle larger programs, e.g. by using incremental program re-execution. However, the success of live programming depends much more on the design of its programming experience than on the sophistication of its implementation.
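The basic environment described above can be sketched in a few lines: re-execute the whole program text on every edit and show the fresh output. This is an illustrative toy, not any particular system’s implementation:

```python
import io
import contextlib

def run(source):
    """Execute the program text from scratch and return its printed output."""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        try:
            exec(source, {})  # fresh environment on every edit
        except Exception as e:
            print(f"<error: {e}>")  # surface errors as feedback, don't crash
    return buf.getvalue()

# Simulate a sequence of edits; each edit triggers full re-execution.
edits = [
    "x = 2\nprint(x * 10)",
    "x = 2\nprint(x * 100)",
    "x = 3\nprint(x * 100)",
]
for source in edits:
    print(run(source).strip())
```

A real environment would debounce keystrokes and re-execute incrementally rather than from scratch; this sketch only shows the feedback loop itself.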

Although live programming continuously provides feedback about program execution, the form that this feedback should take is far from obvious. In the naive case, feedback just takes the form of the program’s user-oriented output, changing in sync with code edits. However, such output can reveal little about the program’s behavior; consider, e.g., a program that just prints the final answer of a complex computation. Behavior relevant to a programming task must then be made visible before live programming can be useful. This can be accomplished either through explicit instrumentation, e.g. by adding printf calls to the code, or through pervasive visualization of statement execution results, e.g. by listing state changes for each statement. The two approaches trade off convenience (pervasive visualization) against conciseness that does not overwhelm with too many details (explicit instrumentation). As an example, the DejaVu system [1] brings live programming to Kinect with frame-by-frame visualization of how input is processed by code, in a programmer-configured way. Bret Victor also describes many ways to make state, data, and program flow visible in his Learnable Programming essay [2].
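The pervasive-visualization end of this trade-off can be sketched with a tracer that records every local variable after each executed line, so state is visible without manual printf calls. This is a toy using Python’s tracing hook, not DejaVu’s actual mechanism:

```python
import sys

def traced(fn, *args):
    """Run fn and log (line number, locals snapshot) at every executed line."""
    log = []
    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is fn.__code__:
            log.append((frame.f_lineno, dict(frame.f_locals)))
        return tracer
    sys.settrace(tracer)
    try:
        result = fn(*args)
    finally:
        sys.settrace(None)
    return result, log

def total(n):
    s = 0
    for i in range(n):
        s += i
    return s

result, log = traced(total, 4)
# result is 6; log holds a per-line history of s and i that an
# environment could render alongside each statement.
```

An environment built this way must then filter or summarize the log, or the programmer is overwhelmed with exactly the detail that explicit instrumentation avoids.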

Even if relevant, the way output changes during live programming can be very “jerky,” jumping around drastically on each edit. Consider changing a value from 200 to 175 with a sequence of keystrokes that yields 20, 2, 1, 17, and finally 175. If this value were used as a box position, the box would jump to seemingly random locations, making the live programming feedback more distracting than useful. Chris Hancock [3] suggests that live programming should be like hitting a target with a water hose. Unlike using a bow and arrow, hitting a target with a water hose is easy: the aimer merely sees where the water is going and makes a series of small adjustments to their aim until the target is hit. For live programming, small edits to code would ideally lead to small changes in the program’s output, as if programming itself were a continuous function. One way this can occur is by editing values through scrubbing; e.g. the programmer selects 200 to bring up a slider that is then moved down to 175. All intermediate values (199-176) in this edit are smoothly correlated with continuous movement from position 200 to 175.
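The scrubbing idea can be sketched as interpolating the literal, so that every intermediate program is meaningful rather than passing through keystroke garbage like 20, 2, 1, 17 (a minimal sketch; the function name is illustrative):

```python
def scrub(start, end, steps):
    """Yield the sequence of values a slider drag would feed into the program."""
    for k in range(steps + 1):
        yield round(start + (end - start) * k / steps)

# Dragging from 200 down to 175 in 5 steps: each intermediate value is a
# sensible box position, so the output moves smoothly instead of jumping.
positions = list(scrub(200, 175, 5))
# positions == [200, 195, 190, 185, 180, 175]
```

Each yielded value would be spliced into the source and re-executed, giving the water-hose experience for numeric edits.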

Unfortunately, live programming’s water hose principle is difficult to realize in general: the impact of adding a procedure call to a program can be drastic and, in any case, does not exist in a continuum of other calls that would enable small execution-correlated adjustments. If output cannot be updated smoothly with respect to code edits, perhaps code could instead be updated with respect to changes in the output. The direct manipulation of output is intrinsically live because edit actions immediately update the objects they are applied to. Program output must then be tailored to direct manipulation. For a graphical program, this could mean drawing and modifying shapes (see “create by reacting” in [2]), but it could also involve manipulating programmer-oriented output such as traces of expected program behavior.
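A minimal sketch of direct manipulation flowing back into code: dragging a box in the output rewrites the numeric literal in the source. The `draw_rect` and `drag_box` names are hypothetical, and a real system would edit an AST rather than pattern-match text:

```python
import re

source = "box = draw_rect(x=200, y=40)"

def drag_box(source, new_x):
    """Reflect an output-side drag as an edit to the x= literal in the code."""
    return re.sub(r"x=\d+", f"x={new_x}", source)

print(drag_box(source, 175))
# -> box = draw_rect(x=175, y=40)
```

Because the manipulation happens on the output, the feedback is immediate by construction; the interesting work is keeping the rewritten code readable.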

Direct manipulation is followed by the gradual abstraction of the manipulated output into general programs, similar to programming-by-demonstration systems like Pygmalion and Tinker [4] and to “create by abstracting” in [2]. For example, after direct manipulation has aligned two boxes in the horizontal dimension, gradual abstraction can unify their horizontal coordinates with a single variable, ensuring that the boxes remain aligned. A live programming system can then use preservation of the directly specified output as a guide, suggesting to the programmer how abstractions can be added to the program correctly. In a live programming environment that can only be imagined today, one would manipulate output, abstract, manipulate again, abstract again, and so on. This process allows much more of the programmer’s thinking to be offloaded to the computer in small chunks, reducing programming’s cognitive load. It would also be more exciting to watch!
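The alignment example can be sketched as a tiny refactoring that hoists the shared literal into one variable, so the alignment survives future edits. This is a toy on text lines with invented names (`rect`, `top`); a real system would refactor an AST and check that output is preserved:

```python
source = ["a = rect(x=10, y=50)", "b = rect(x=120, y=50)"]

def abstract_common(lines, literal, var):
    """Hoist a literal repeated across lines into a single named variable."""
    new = [f"{var} = {literal}"]                            # bind once
    new += [line.replace(str(literal), var) for line in lines]  # reuse it
    return new

for line in abstract_common(source, 50, "top"):
    print(line)
# The shared y=50 becomes y=top in both rects; moving `top` moves both
# boxes together, so the alignment is now guaranteed by the program.
```

The environment would propose this rewrite only after verifying it reproduces the directly specified output, which is what makes the abstraction step safe.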

Live programming is a movement to make programming a more augmented experience by eliminating the phase distinction between programming and execution. This involves many programmer experience (PX) challenges, including how program output can be made relevant as feedback, how to smoothly correlate code changes with execution changes, and how to directly manipulate and gradually abstract output into code. But in solving these challenges, the science fiction of programming in the movies could very well become reality.

Citations:

[1] Jun Kato, Sean McDirmid, and Xiang Cao. DejaVu: Integrated Support for Developing Interactive Camera-Based Programs. UIST 2012.

[2] Bret Victor (2012). Learnable Programming. http://worrydream.com/LearnableProgramming/

[3] Chris Hancock (2003). Real-Time Programming and the Big Ideas of Computational Literacy. Doctoral dissertation, MIT Media Lab, Cambridge, MA.

[4] Watch What I Do: Programming by Demonstration (1993), edited by Allen Cypher. MIT Press, Cambridge, MA.