CPC G06F 8/33 (2013.01) [G06F 16/9027 (2019.01); G06F 17/16 (2013.01); G06F 17/18 (2013.01); G06N 3/088 (2013.01); G06N 5/04 (2013.01); G06F 8/35 (2013.01); G06F 8/71 (2013.01); G06F 8/75 (2013.01); G06N 3/045 (2023.01); G06N 3/084 (2013.01)]. 12 Claims.
1. A computer-implemented method, comprising:
extracting a plurality of ordered sequences of tokens from a plurality of source code programs, wherein an ordered sequence of tokens represents a context of a segment of source code from a select one of the plurality of source code programs; and
utilizing the plurality of ordered sequences of tokens to train a decoder-only neural transformer model with attention to learn to predict a next token to complete a partial sequence of tokens, wherein the partial sequence of tokens completes a line-of-code in a target source code program, wherein the decoder-only neural transformer model with attention includes one or more decoder blocks, each decoder block including an attention layer and a neural network layer.
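The claimed architecture, a decoder-only transformer whose decoder blocks each pair an attention layer with a neural network layer, can be illustrated with a minimal sketch. This is not the patented implementation: the single-head attention, the absence of layer normalization, the dimension names (`d_model`, `vocab`), and the weight-tied output projection are all simplifying assumptions for illustration only.

```python
import numpy as np

# Hypothetical sketch of one decoder block: causal self-attention
# followed by a feed-forward neural network layer, with residual
# connections. All dimensions and weights are illustrative.
rng = np.random.default_rng(0)
d_model, vocab = 16, 32

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def causal_attention(x, Wq, Wk, Wv):
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(d_model)
    # Causal mask: each position attends only to itself and earlier tokens,
    # so the model can be trained to predict the next token.
    mask = np.triu(np.ones((len(x), len(x)), dtype=bool), k=1)
    scores[mask] = -1e9
    return softmax(scores) @ v

def decoder_block(x, params):
    Wq, Wk, Wv, W1, W2 = params
    x = x + causal_attention(x, Wq, Wk, Wv)   # attention layer
    x = x + np.maximum(x @ W1, 0) @ W2        # neural network (feed-forward) layer
    return x

# An ordered sequence of token ids representing a source-code context.
tokens = np.array([3, 7, 1, 9, 4])
embed = rng.normal(size=(vocab, d_model))
params = [rng.normal(size=(d_model, d_model)) for _ in range(3)] + [
    rng.normal(size=(d_model, 4 * d_model)),
    rng.normal(size=(4 * d_model, d_model)),
]

h = decoder_block(embed[tokens], params)
logits = h @ embed.T                      # tied output projection (assumption)
next_token = int(np.argmax(logits[-1]))   # predicted next token id
```

At inference time, the predicted token would be appended to the partial sequence and the model applied again, repeating until the line-of-code is complete.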