OpenAI Codex

OpenAI Codex is an artificial intelligence model developed by OpenAI. It parses natural language and generates code in response. It powers GitHub Copilot, a programming autocompletion tool for select IDEs such as Visual Studio Code and Neovim.[1] Codex is a descendant of OpenAI's GPT-3 model, fine-tuned for use in programming applications. OpenAI released an API for Codex in closed beta.[1] In March 2023, OpenAI shut down access to Codex.[2] Following public appeals from researchers, OpenAI reversed course,[3] and the model remains available to researchers in the OpenAI Research Access Program.[4]

Capabilities

Based on GPT-3, a neural network trained on text, Codex was additionally trained on 159 gigabytes of Python code from 54 million GitHub repositories.[5][6] A typical use case of Codex is for a user to type a comment, such as "// compute the moving average of an array for a given window size", and have the model suggest a block of code that satisfies the prompt. OpenAI claims that Codex can create code in over a dozen programming languages, including Go, JavaScript, Perl, PHP, Ruby, Shell, Swift, and TypeScript, though it is most effective in Python.[1] According to VentureBeat, demonstrations uploaded by OpenAI showed impressive coreference resolution capabilities, with demonstrators creating a browser game in JavaScript and generating data science charts using matplotlib.[9]

Codex can parse natural-language queries, provide code completions, and assist with debugging across a wide range of programming languages and libraries, making it useful to developers who want to streamline their coding workflows.[12] OpenAI showed that Codex can interface with services and apps such as Mailchimp, Microsoft Word, Spotify, and Google Calendar.[9][13] Microsoft has reportedly expressed interest in Codex's capabilities.[13]
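
As an illustration of the comment-to-code workflow described above, the sketch below shows how the closed-beta API was commonly called from the pre-1.0 openai Python library; code-davinci-002 was one of the Codex model names. The completion shown in the trailing comment is representative of the model's behavior, not an actual transcript of model output.

    import openai  # the pre-1.0 client used during the Codex beta

    openai.api_key = "sk-..."  # key elided

    # The prompt is just a natural-language comment describing the goal.
    prompt = "# Compute the moving average of an array for a given window size\n"

    response = openai.Completion.create(
        engine="code-davinci-002",  # one of the Codex models
        prompt=prompt,
        max_tokens=128,
        temperature=0,  # deterministic output suits code generation
    )

    print(response["choices"][0]["text"])
    # A representative completion:
    # def moving_average(values, window):
    #     return [sum(values[i:i + window]) / window
    #             for i in range(len(values) - window + 1)]
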
Issues

OpenAI demonstrations showcased flaws such as inefficient code and one-off quirks in code samples.[9] In an interview with The Verge, OpenAI chief technology officer Greg Brockman said that "sometimes [Codex] doesn't quite know exactly what you're asking" and that it can require some trial and error.[13] OpenAI researchers found that Codex struggles with prompts that chain together multiple operations or describe a task at a higher level of abstraction, often failing outright or yielding counter-intuitive behavior. Additionally, they raised several safety issues, such as over-reliance by novice programmers, biases inherited from the training data, and security impacts due to vulnerable code.[11]

VentureBeat stated that because Codex is trained on public data, it could be vulnerable to "data poisoning" via intentional uploads of malicious code.[9] According to a study by researchers from New York University, approximately 40% of code generated by GitHub Copilot (which uses Codex) in scenarios relevant to high-risk CWEs (Common Weakness Enumeration categories) included glitches or other exploitable design flaws.[14]
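
A minimal, hypothetical example of the kind of weakness the New York University study looked for is CWE-89 (SQL injection): a completion that interpolates user input directly into a query string, contrasted with the parameterized form a careful developer would write.

    import sqlite3

    def find_user_unsafe(conn, username):
        # Vulnerable pattern (CWE-89): the input is spliced into the SQL
        # string, so a value like "x' OR '1'='1" changes the query's logic.
        cursor = conn.execute(
            f"SELECT * FROM users WHERE name = '{username}'"
        )
        return cursor.fetchall()

    def find_user_safe(conn, username):
        # Parameterized query: the database driver handles escaping.
        cursor = conn.execute(
            "SELECT * FROM users WHERE name = ?", (username,)
        )
        return cursor.fetchall()
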
Copyright

The Free Software Foundation expressed concerns that code snippets generated by Copilot and Codex could violate copyright, in particular the condition of the GPL that requires derivative works to be licensed under equivalent terms.[15] Issues they raised include whether training on public repositories falls under fair use, how developers could discover infringing generated code, whether trained machine learning models could be considered modifiable source code or a compilation of the training data, and whether machine learning models could themselves be copyrighted, and by whom.[15][16]

An internal GitHub study found that approximately 0.1% of generated code contained direct copies from the training data. In one example, the model output the training-data code implementing the fast inverse square root algorithm, including its comments and an incorrect copyright notice; the original snippet is reproduced below.[7] In response, OpenAI stated that "legal uncertainty on the copyright implications of training AI systems imposes substantial costs on AI developers and so should be authoritatively resolved."[7]

The copyright issues with Codex have been compared to the Authors Guild, Inc. v. Google, Inc. court case, in which judges ruled that Google Books's use of text snippets from millions of scanned books constituted fair use.[7][17] However, text snippets quoted from a book can be reliably attributed to the copyright owner, whereas output generated from a trained model is produced without any such reference to the works it was derived from.
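
For reference, the fast inverse square root routine mentioned above is one of the most recognizable snippets in the training data; the widely circulated original, from Quake III Arena, reads (comments included):

    float Q_rsqrt( float number )
    {
        long i;
        float x2, y;
        const float threehalfs = 1.5F;

        x2 = number * 0.5F;
        y  = number;
        i  = * ( long * ) &y;                       // evil floating point bit level hacking
        i  = 0x5f3759df - ( i >> 1 );               // what the fuck?
        y  = * ( float * ) &i;
        y  = y * ( threehalfs - ( y * x2 ) );       // 1st iteration
    //	y  = y * ( threehalfs - ( y * x2 ) );       // 2nd iteration, this can be removed

        return y;
    }

Its idiosyncratic comments and the magic constant 0x5f3759df make a verbatim reproduction easy to recognize.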