US 11,809,302 B2
Automated program repair using stack traces and back translations
Colin Bruce Clement, Seattle, WA (US); Dawn Drain, Bellevue, WA (US); Guillermo Serrato Castilla, Redmond, WA (US); and Neelakantan Sundaresan, Bellevue, WA (US)
Assigned to MICROSOFT TECHNOLOGY LICENSING, LLC, Redmond, WA (US)
Filed by MICROSOFT TECHNOLOGY LICENSING, LLC, Redmond, WA (US)
Filed on Feb. 16, 2023, as Appl. No. 18/110,873.
Application 18/110,873 is a continuation of application No. 17/213,193, filed on Mar. 25, 2021, granted, now Pat. No. 11,604,719.
Claims priority of provisional application 63/144,259, filed on Feb. 1, 2021.
Prior Publication US 2023/0195600 A1, Jun. 22, 2023
Int. Cl. G06F 11/36 (2006.01); G06N 3/045 (2023.01); G06N 3/08 (2023.01)
CPC G06F 11/3636 (2013.01) [G06F 11/3688 (2013.01); G06N 3/045 (2023.01); G06N 3/08 (2013.01)]
20 Claims
OG exemplary drawing
 
1. A system comprising:
one or more processors; and
a memory that stores one or more programs that are configured to be executed by the one or more processors, the one or more programs including instructions to perform acts that:
receive a request to generate repair source code for a source code program with a source code bug;
obtain a stack trace from execution of the source code program with the source code bug;
access a neural transformer model with attention, wherein the neural transformer model with attention is associated with a vocabulary of tokens, each token having a token embedding;
transform the source code program with the source code bug and the stack trace into a context tensor, wherein the context tensor represents the source code program with the source code bug and stack trace as a sequence of token embeddings based on the token embeddings of the neural transformer model with attention;
perform a beam search to generate at least one repair code candidate for the source code program with the source code bug,
wherein the beam search generates the at least one repair code candidate one token at each time step by utilizing the neural transformer model with attention to generate a probability, at each time step, for each token of the vocabulary of the neural transformer model with attention given the context tensor,
wherein the probability represents a likelihood of a token to expand one or more partial candidate sequences,
wherein the beam search expands the one or more partial candidate sequences at each time step based on the probabilities, until a termination condition is reached; and
output the at least one repair code candidate.
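The beam-search decoding recited in the claim can be sketched as follows. This is a minimal illustration only, not the patented implementation: the `score_next_tokens` function, the toy vocabulary, and all identifiers are hypothetical stand-ins for the neural transformer model with attention, which in the claimed system would produce a probability for every vocabulary token conditioned on the context tensor built from the buggy source code and its stack trace.

```python
import math

# Hypothetical toy vocabulary; in the claimed system this would be the
# neural transformer model's token vocabulary.
VOCAB = ["fix", "null", "check", "<eos>"]

def score_next_tokens(context, partial):
    """Hypothetical stand-in for the transformer with attention: returns a
    probability for every vocabulary token given the context (buggy code +
    stack trace) and a partial candidate sequence."""
    weights = []
    for tok in VOCAB:
        if tok == "<eos>":
            weights.append(1.0 + len(partial))  # <eos> grows more likely
        elif tok in partial:
            weights.append(0.1)                 # discourage repeats
        else:
            weights.append(2.0)
    total = sum(weights)
    return [w / total for w in weights]

def beam_search(context, beam_width=2, max_steps=5):
    """Expand partial candidate sequences one token per time step, keeping
    only the beam_width highest-probability sequences, until every surviving
    candidate ends with <eos> (the termination condition) or max_steps is
    reached."""
    beams = [([], 0.0)]  # (token sequence, cumulative log-probability)
    for _ in range(max_steps):
        expanded = []
        for seq, logp in beams:
            if seq and seq[-1] == "<eos>":       # finished candidate
                expanded.append((seq, logp))
                continue
            probs = score_next_tokens(context, seq)
            for tok, p in zip(VOCAB, probs):
                expanded.append((seq + [tok], logp + math.log(p)))
        beams = sorted(expanded, key=lambda b: b[1], reverse=True)[:beam_width]
        if all(seq[-1] == "<eos>" for seq, _ in beams):
            break
    return [seq for seq, _ in beams]

candidates = beam_search(context="buggy code + stack trace")
```

Each returned sequence corresponds to one repair code candidate; the real system would detokenize the sequence back into source code before outputting it.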