US 11,809,842 B2
Multi-lingual line-of-code completion system
Alexey Svyatkovskiy, Bellevue, WA (US); Shengyu Fu, Redmond, WA (US); Neelakantan Sundaresan, Bellevue, WA (US); and Shao Kun Deng, Bellevue, WA (US)
Assigned to MICROSOFT TECHNOLOGY LICENSING, LLC, Redmond, WA (US)
Filed by MICROSOFT TECHNOLOGY LICENSING, LLC, Redmond, WA (US)
Filed on Jan. 20, 2022, as Appl. No. 17/580,609.
Application 17/580,609 is a continuation of application No. 16/680,328, filed on Nov. 11, 2019, granted, now 11,262,984.
Claims priority of provisional application 62/881,736, filed on Aug. 1, 2019.
Prior Publication US 2022/0147321 A1, May 12, 2022
This patent is subject to a terminal disclaimer.
Int. Cl. G06F 8/33 (2018.01); G06N 3/08 (2023.01); G06N 3/04 (2023.01); G06F 16/901 (2019.01); G06F 17/16 (2006.01); G06F 17/18 (2006.01); G06N 3/088 (2023.01); G06N 5/04 (2023.01); G06N 3/084 (2023.01); G06F 8/35 (2018.01); G06F 8/71 (2018.01); G06F 8/75 (2018.01); G06N 3/045 (2023.01)
CPC G06F 8/33 (2013.01) [G06F 16/9027 (2019.01); G06F 17/16 (2013.01); G06F 17/18 (2013.01); G06N 3/088 (2013.01); G06N 5/04 (2013.01); G06F 8/35 (2013.01); G06F 8/71 (2013.01); G06F 8/75 (2013.01); G06N 3/045 (2023.01); G06N 3/084 (2013.01)] 12 Claims
OG exemplary drawing
 
1. A computer-implemented method, comprising:
extracting a plurality of ordered sequences of tokens from a plurality of source code programs, wherein an ordered sequence of tokens represents a context of a segment of source code from a select one of the plurality of source code programs; and
utilizing the plurality of ordered sequences of tokens to train a decoder-only neural transformer model with attention to learn to predict a next token to complete a partial sequence of tokens, wherein the partial sequence of tokens completes a line-of-code in a target source code program, wherein the decoder-only neural transformer model with attention includes one or more decoder blocks, each decoder block including an attention layer and a neural network layer.
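The following Python sketch (using PyTorch) illustrates one plausible reading of the claimed pipeline: token sequences extracted from source code train a decoder-only transformer, built from stacked decoder blocks that each pair a masked self-attention layer with a feed-forward neural network layer, to predict the next token of a partial line of code. All model dimensions, the whitespace tokenizer, the toy corpus, and every identifier below are illustrative assumptions, not the patented implementation.

    # Minimal sketch of a decoder-only transformer for next-token code
    # completion. Sizes, tokenizer, and corpus are illustrative only.
    import torch
    import torch.nn as nn

    class DecoderBlock(nn.Module):
        """One decoder block: masked self-attention + feed-forward layer."""
        def __init__(self, d_model, n_heads):
            super().__init__()
            self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.norm1 = nn.LayerNorm(d_model)
            self.ff = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                    nn.Linear(4 * d_model, d_model))
            self.norm2 = nn.LayerNorm(d_model)

        def forward(self, x, causal_mask):
            # Causal mask keeps each position from attending to later tokens.
            a, _ = self.attn(x, x, x, attn_mask=causal_mask, need_weights=False)
            x = self.norm1(x + a)
            return self.norm2(x + self.ff(x))

    class DecoderOnlyLM(nn.Module):
        def __init__(self, vocab_size, d_model=128, n_heads=4, n_blocks=2, max_len=64):
            super().__init__()
            self.tok = nn.Embedding(vocab_size, d_model)
            self.pos = nn.Embedding(max_len, d_model)
            self.blocks = nn.ModuleList(DecoderBlock(d_model, n_heads)
                                        for _ in range(n_blocks))
            self.out = nn.Linear(d_model, vocab_size)

        def forward(self, ids):
            t = ids.size(1)
            mask = torch.triu(torch.full((t, t), float('-inf'),
                                         device=ids.device), diagonal=1)
            x = self.tok(ids) + self.pos(torch.arange(t, device=ids.device))
            for blk in self.blocks:
                x = blk(x, mask)
            return self.out(x)  # logits over the next token at each position

    # Toy "corpus" standing in for token sequences extracted from many
    # source code programs; a real system would tokenize whole repositories.
    corpus = ["x = foo ( a , b )", "y = bar ( x )"]
    vocab = {tok: i for i, tok in
             enumerate(sorted({t for s in corpus for t in s.split()}))}
    seqs = [torch.tensor([vocab[t] for t in s.split()]) for s in corpus]

    model = DecoderOnlyLM(len(vocab))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(50):
        for ids in seqs:
            # Next-token objective: inputs and targets are shifted by one.
            logits = model(ids[:-1].unsqueeze(0))
            loss = loss_fn(logits.squeeze(0), ids[1:])
            opt.zero_grad()
            loss.backward()
            opt.step()

    # Inference sketch: greedily extend a partial line of code.
    ids = torch.tensor([[vocab["y"], vocab["="]]])
    for _ in range(5):
        next_id = model(ids)[0, -1].argmax()
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

At inference time a beam or greedy decoder such as the loop above repeatedly feeds the growing partial sequence back through the model until the line of code is complete; the patent's own decoding and tokenization strategies may differ.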