Category Archives: software citation

ChatGPT and AI-generated Code: The Impact of Natural Language Models on Software Creation and Sharing

The following guest post is by John Wallin, the Director of the Computational and Data Science Ph.D. Program and Professor of Physics and Astronomy at Middle Tennessee State University

Dr. John Wallin

Since the 1960s, scientific software has undergone repeated innovation cycles in languages, hardware capabilities, and programming paradigms. We have gone from Fortran IV to C++ to Python. We moved from punch cards and video terminals to laptops and massively parallel computers with hundreds to millions of processors. Complex numerical and scientific libraries and the ability to immediately seek support for these libraries through web searches have unlocked new ways for us to do our jobs. Neural networks are commonly used to classify massive data sets in our field. All these changes have impacted the way we create software.

In the last year, large language models (LLMs) have been created to respond to natural language questions. The underlying architecture of these models is complex, but the current generation is based on generative pre-trained transformers (GPT). Beyond the base architecture, they have recently incorporated supervised learning and reinforcement learning to improve their responses. These efforts have resulted in flexible artificial intelligence systems that can help solve routine problems. Although the primary purpose of these large language models was to generate text, it became apparent that they could also generate code. These models are in their infancy, but they have already been very successful in helping programmers create code snippets useful in a wide range of applications. I want to focus on two applications of transformer-based LLMs: ChatGPT by OpenAI and GitHub Copilot.

ChatGPT is perhaps the best-known and most widely used LLM. The underlying GPT model was released about a year ago, but an interactive version was made publicly available in November 2022. The user base exceeded a million within five days and has since grown to over 100 million. Unfortunately, most of the discussion about this model has been either dismissive or apocalyptic. Some scholars have posted comments along these lines:

“I wanted to see what the fuss is about this new ChatGPT thing, so I gave it a problem from my advanced quantum mechanics course. It got a few concepts right, but the math was completely wrong. The fact that it can’t do a simple quantum renormalization problem is astonishing, and I am not impressed. It isn’t a very good “artificial intelligence” if it makes these sorts of mistakes!”

The other response that comes from some academics:

“I gave ChatGPT an essay problem that I typically give my college class. It wrote a PERFECT essay! All the students are going to use this to write their essays! Higher education is done for! I am going to retire this spring and move to a survival cabin in Montana to escape the cities before the machine uprising occurs.”

Of course, neither view is entirely correct. My reaction to the first viewpoint is, “Have you met any real people?” It turns out that not every person you meet has advanced academic knowledge of your subdiscipline. ChatGPT was never designed to replace grad students. A future version of the software may incorporate deeper domain-specific knowledge, but for now, think of the current generation of AIs as your cousin Alex. They took a bunch of college courses and got a solid B- in most of them. They are very employable as an administrative assistant, but you won’t see them publish any of their work in Nature in the next year or two. Hiring Alex will improve your workflow, even if they can’t do much physics.

The apocalyptic view also misses the mark, even if the survival cabin in Montana sounds nice. Higher education will need to adapt to these new technologies. We must move toward more formal proctored evaluations for many of our courses. Experiential and hands-on learning will need to be emphasized, and we will probably need to reconsider (yet again) what we expect students to take away from our classes. Jobs will change because of these technologies, and our educational system needs to adapt.

Despite these divergent and extreme views, generative AI is here to stay. Moreover, its capabilities will improve rapidly over the next few years. These changes are likely to include:

  • Access to live web data and current events. Microsoft’s Bing (currently in limited release) already has this capability, and similar engines are likely to become widely available in the next few months.
  • Improved mathematical abilities via links to other software systems such as Wolfram Alpha. ChatGPT routinely makes mathematical errors because it does math via language processing. Connecting it to symbolic processing will be challenging, but there have already been a few preliminary attempts.
  • Increased ability to analyze graphics and diagrams. Identifying images is already routine, so moving to understanding and explaining diagrams is not an impossible extension. This type of expansion would change how the system analyzes physics problems.
  • Access to specialized data sets such as arXiv, ADS, and even astronomical archives. It would be trivial to train GPT-3.5 on these data sets and give it domain-specific knowledge.
  • Integration of the ability to create and run software tools within the environment. GitHub Copilot already offers part of this capability, and the ability to read online data and immediately run customized analyses on it is not out of reach for other implementations.

Even without these additions, writing code with GitHub Copilot is a fantastic experience. Based on what you are working on, your comments, and its training data, it attempts to anticipate your next line or lines of code. Sometimes it will try to write an entire function for you based on a comment or the name of the last function. I’ve been using it for about five months, and I find it particularly useful when working with library functions that are a bit unfamiliar. For example, instead of googling how to add a window with a pulldown menu in Python, you write a comment explaining what you want to do, and the code is created below your comment. It also works exceptionally well at solving simple programming tasks such as creating a Mandelbrot set or downloading and processing data. I estimate that my coding speed for solving real-world problems with this interface has tripled.
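To make the comment-driven workflow concrete, here is a minimal sketch of the kind of function Copilot might produce from a single descriptive comment. The function name, parameters, and defaults are my own illustrative choices, not actual Copilot output.

    import numpy as np
    import matplotlib.pyplot as plt

    # Compute an escape-time image of the Mandelbrot set over a region of
    # the complex plane. A comment like this one, plus the function
    # signature, is typically all Copilot needs to propose the body.
    def mandelbrot(xmin=-2.0, xmax=0.6, ymin=-1.3, ymax=1.3,
                   width=800, height=800, max_iter=100):
        x = np.linspace(xmin, xmax, width)
        y = np.linspace(ymin, ymax, height)
        c = x[np.newaxis, :] + 1j * y[:, np.newaxis]  # grid of complex starting points
        z = np.zeros_like(c)
        counts = np.zeros(c.shape, dtype=int)         # iterations before each point escapes
        for i in range(max_iter):
            mask = np.abs(z) <= 2.0                   # points that have not yet escaped
            z[mask] = z[mask] ** 2 + c[mask]
            counts[mask] = i
        return counts

    plt.imshow(mandelbrot(), cmap="hot", origin="lower")
    plt.show()

Whether the tool reproduces something this close to a textbook implementation, or something subtly different, is exactly the authorship and reliability question discussed below.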

However, two key issues need to be addressed when using the code: authorship and reliability.

When you create code using an AI, it draws on millions of lines of publicly available code to find matches to your current coding. It predicts what you might be trying to do based on what others have done. For simple tasks, like creating a call to a known function in a Python library, this is not likely to infringe on the intellectual property in someone’s code. However, when you ask it to create whole functions, it is likely to draw on other codes that accomplish the task you want to complete. For example, there are perhaps thousands of examples of ODE integrators in open-source codes. Asking the AI to create such a routine for you will likely result in inadvertently using one of those codes without knowing its origin.

The only thing of value we produce in science is ideas. Using someone else’s thoughts or ideas without attribution can cross into plagiarism, even if that action is unintentional. Code reuse and online forums are regularly part of our programming process, but we have a higher level of awareness of what is and isn’t allowed when we are the ones googling the answer. Licensing and attribution become problematic even in a research setting: there may be problems claiming a code as our intellectual property if it incorporates a public code base. Some major companies have banned the use of ChatGPT for this reason. At the very least, acknowledging that you used an AI to help create the code seems like an appropriate response to this challenge. Only you can take responsibility for your code, but explaining how it was developed will help others understand its origin.

The second issue for the new generation of AI assistants is reliability. When I asked ChatGPT to write a short biographical sketch for “John Wallin, a professor at Middle Tennessee State University,” I found that I had received my Ph.D. from Emory University and studied Civil War and Reconstruction era history. It confidently cited two books that I had authored about the Civil War. All of this was nonsense: a model generating text it thought I wanted to read.

It is tempting to assume that AI-generated code will produce correct results. However, I have regularly seen major and minor bugs in the code it generates. Some of the mistakes are subtle and could lead to erroneous results. Therefore, no matter how the code is generated, we must continue to apply verification and validation: verification to confirm the code correctly implements our algorithms, and validation to confirm it is the right code to solve our scientific problem.
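As a sketch of what verification can look like for a (possibly AI-generated) numerical routine, the test below checks a Runge-Kutta integrator against a problem with a known analytic solution. The function names and tolerance are my own illustrative choices.

    import math

    def rk4_step(f, t, y, h):
        # One classical fourth-order Runge-Kutta step for dy/dt = f(t, y).
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

    def test_exponential_decay():
        # Verify against dy/dt = -y with y(0) = 1, whose solution is exp(-t).
        f = lambda t, y: -y
        t, y, h, steps = 0.0, 1.0, 0.01, 100
        for _ in range(steps):
            y = rk4_step(f, t, y, h)
            t += h
        assert abs(y - math.exp(-1.0)) < 1e-8, "integrator disagrees with analytic solution"

    test_exponential_decay()
    print("verification test passed")

Validation, confirming that the algorithm is the right one for the scientific problem in the first place, still requires domain judgment that no code generator supplies.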

Both authorship and reliability will continue to be issues when we teach our students about software development in our fields. At the beginning of the semester, I had ChatGPT generate “five group coding challenges that would take about 30 minutes for graduate students in a Computational Science Capstone course.” When I gave them to my students, it took them about 30 minutes to complete. I created solutions for ALL of them using GitHub Copilot in under ten minutes. Specifying when students can and can’t use these tools is critical, along with developing appropriate metrics for evaluating their work when using these new tools. We also need to push students toward better practices in testing their software, including making testing data sets available when the code is distributed.

Sharing your software has never been more important, given these challenges. Although we can generate code faster than ever, the reproducibility of our results still matters. The only accurate description of your methodology is the code you used to create the results. Publishing your code when you publish your results will increase the value of your work to others. However much the abilities of artificial intelligence improve, the core issues of authorship and reliability will still have to be addressed by human intelligence.

Addendum: The Impact of GPT-4 on Coding and Domain-Specific Knowledge
Written with the help of GPT-4; added March 20, 2023

Since the publication of the original blog post, there have been significant advancements in the capabilities of AI-generated code with the introduction of GPT-4. This next-generation language model continues to build on the successes of its predecessors while addressing some of the limitations that were previously observed.

One of the areas where GPT-4 has shown promise is in its ability to better understand domain-specific knowledge. While it is true that GPT-4 doesn’t inherently have access to specialized online resources like arXiv, its advanced learning capabilities can be utilized to incorporate domain-specific knowledge more effectively when trained with a more specialized dataset.

Users can help GPT-4 better understand domain-specific knowledge by training it on a dataset that includes examples from specialized sources. For instance, if researchers collect a dataset of scientific papers, code snippets, or other relevant materials from their specific domain and train GPT-4 with that data, the AI-generated code would become more accurate and domain-specific. The responsibility lies with the users to curate and provide these specialized datasets to make the most of GPT-4’s advanced learning capabilities.

By tailoring GPT-4’s training data to be more suited to their specific needs and requirements, users can address the challenges of authorship and reliability more effectively. This, in turn, can lead to more efficient and accurate AI-generated code, which can be particularly valuable in specialized fields.

In addition to advancements in domain-specific knowledge and coding, GPT-4 is also set to make strides in image analysis. Although not directly related to coding, these enhancements highlight the growing versatility of the AI engine. The image analysis feature is not yet publicly available but is expected to be released soon. It will enable GPT-4 to understand and interpret images, diagrams, and other visual data, which could have far-reaching implications for many industries and applications. As GPT-4 continues to evolve, it is important to recognize and adapt to the expanding range of possibilities these AI engines offer, ensuring that users can leverage their full potential in diverse fields.

With the rapid advancements in AI capabilities, it is essential for researchers, educators, and developers to stay informed and adapt to the changes that GPT-4 and future models bring. As AI-generated code becomes more accurate and domain-specific, the importance of understanding the potential benefits, limitations, and ethical considerations of using these tools will continue to grow.

ASCL poster on software citation at AAS 241

All posters at the 241st meeting of the American Astronomical Society were iPosters: displayed on a screen instead of printed on paper or fabric. The ASCL’s iPoster is available for viewing in the iPoster Gallery; the image below is a static screenshot.

Why others might not be citing your astronomy software

Screenshot of ASCL iPoster at AAS 241

Your codes can themselves be cited, and you can choose your preferred citation method! So why aren’t people citing your code? Come find out, and also learn what five steps you can take to improve citation of the software you write.

In the past decade, software citation has accelerated in astrophysics, resulting in the field now having multiple ways to cite computational methods. Adding software metadata files, such as a CITATION.cff or a codemeta.json file, to the root directory of a GitHub repo (or other code storage site) lets others know how they should cite that software. Yet most software authors do not specify how they would like their code to be cited, while others specify a citation method that is not easily tracked (or tracked at all) by most indexers. In 2020, the Astrophysics Source Code Library (ASCL, ascl.net) sent the authors of 135 codes software metadata files (CITATION.cff and codemeta.json), tailored to each computational method, and suggested that one of these files be edited as needed and included on their code sites. In early 2021, we examined the code sites for these 135 entries and found that only 41% had citation information in any form available. In mid-2021, GitHub announced the integration of CITATION.cff into its service, making it easier to add this metadata file to one’s repo. Even so, as of January 2023, 54% of the codes registered in the ASCL do not specify how to cite use of the software. The lack of citation information creates an obstacle for article authors who want to provide credit to software creators, thus hindering citation of and recognition for computational contributions to research and for the scientists who develop and maintain software.
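For reference, a minimal CITATION.cff looks something like the sketch below; every value here is a placeholder to be replaced with your code’s actual metadata.

    # CITATION.cff -- placeholder values; edit before adding to your repo
    cff-version: 1.2.0
    message: "If you use this software, please cite it as below."
    title: "ExampleCode: a hypothetical astronomy package"
    authors:
      - family-names: "Researcher"
        given-names: "A. N."
    version: "1.0.0"
    doi: "10.5281/zenodo.0000000"
    date-released: "2023-01-15"
    repository-code: "https://github.com/example/examplecode"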

#AAS241

ASCL poster on software citation at ADASS XXXII


Are others using software you’ve written in their research and citing it as you want it to be cited? If not, this poster will help! Software can be cited in different ways, some good, and some not good at all for tracking and counting citations in indexers such as ADS and Google Scholar. Generally, indexers need to match citations to resources, such as journal articles, they ingest. There are several reasons why your code might not be cited well (in a trackable/countable way). One common reason is the lack of clear and explicit instructions on a code’s download site. Most astro code sites don’t list a preferred citation method! Make it easy for people to cite your software by listing a (good! trackable!) preferred citation method where others can easily find it. Creating a standard software metadata file, such as a CITATION.cff or codemeta.json, and adding it to the root of your code repo is easy to do with the ASCL’s metadata file creation overlay (see handout below), and will help out anyone wanting to give you credit for your computational method, whether it’s a huge carefully-written and tested package, or a short quick-and-dirty-but-oh-so-useful code.
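A codemeta.json file carries the same information in JSON form; a minimal sketch with placeholder values (the ASCL’s overlay will generate a fuller, entry-specific version for you) looks like this:

    {
      "@context": "https://doi.org/10.5063/schema/codemeta-2.0",
      "@type": "SoftwareSourceCode",
      "name": "ExampleCode",
      "description": "A hypothetical astronomy package (placeholder).",
      "author": [
        {
          "@type": "Person",
          "givenName": "A. N.",
          "familyName": "Researcher"
        }
      ],
      "codeRepository": "https://github.com/example/examplecode",
      "license": "https://spdx.org/licenses/MIT",
      "version": "1.0.0"
    }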

#ADASSXXXII

Using the Astrophysics Source Code Library: Find, cite, download, parse, study, and submit

This morning, I gave a tutorial on the ASCL at ADASS XXXII, which is being held virtually from the University of Toronto and the University of Victoria. I’ll write more extensively about ADASS later this week; it is, as always, a fabulous conference with a lot of great work, talks, software, data, discussion, posters, chats, demos, tutorials… well, a lot! It’s my favorite astro conference.

But for now, slides from the tutorial and a link to the recording are below. Thanks to ADASS for accepting the proposal and to the participants for attending and for all the interesting (and occasionally scary!) comments and questions!

Slides (PDF)
Session (video)

Citation method, please? A case study in astrophysics

I did an experiment last year to see whether I could influence software authors to add either CITATION.cff or codemeta.json files to their repos to make clear how the software should be cited. It mostly didn’t work, but was still a useful exercise. I’ve written a short paper about it, which will appear on arXiv tonight (ETA: here) and is available now at the link below.


Abstract: Software citation has accelerated in astrophysics in the past decade, resulting in the field now having multiple trackable ways to cite computational methods. Yet most software authors do not specify how they would like their code to be cited, while others specify a citation method that is not easily tracked (or tracked at all) by most indexers. Two metadata file formats, codemeta.json and CITATION.cff, developed in 2016 and 2017 respectively, are useful for specifying how software should be cited. In 2020, the Astrophysics Source Code Library (ASCL, ascl.net) undertook a year-long effort to generate and send these software metadata files, specific to each computational method, to code authors for editing and inclusion on their code sites. We wanted to answer the question, “Would sending these files to software authors increase adoption of one, the other, or both of these metadata files?” The answer in this case was no. Furthermore, only 41% of the 135 code sites examined for use of these files had citation information in any form available. The lack of such information creates an obstacle for article authors to provide credit to software creators, thus hindering citation of and recognition for computational contributions to research and the scientists who develop and maintain software.

Citation method, please? A case study in astrophysics (PDF)

ADASS 2020 in the time of pandemic

Astronomical Data Analysis Software and Systems (ADASS), which was to have been held in Granada, Spain this year, kicked off the fully online ADASS XXX meeting yesterday with four tutorials, as is usually done, though not quite as it was done this year. The Program Organizing Committee and especially the Local Organizing Committee had to convert a conference that had been two years in the planning into a virtual meeting, which offered numerous challenges and learning opportunities! One challenge is that the conference is international; scheduling sessions for access to all participants couldn’t have been easy, but with hard work and the technology stack they chose, which includes the conference website, Zoom, YouTube, and Discord, all of ADASS’s resources are available to all participants. One might have to get up early or stay up late to hear all of the talks live (the sleep-deprived author of this post awoke at 12:15 AM today to catch the opening sessions), but there are asynchronous options available, so groggy stumbling as one makes her way to the computer is a choice, not a requirement.

The ASCL has several presentations and activities this year. Today, ASCL Chair Peter Teuben, ASCL Advisory Committee member Bruce Berriman, and I organized a Birds of a Feather (BoF) session on how to better describe software for discovery and citation. We have organized software-focused BoFs before, and, as in the past, this one offered a number of very short presentations followed by open discussion.

The BoF session focused on software metadata and on improving how software is described, discovered, and cited. After Teuben opened the session, Berriman presented his experience using CiteAs to see how it suggested his software Montage be cited. CiteAs uses numerous methods to find a code’s citation method, including looking for metadata files (files that describe the software in a standard format) on the code’s website and/or GitHub repository. Montage does not currently have a metadata file on its sites, so the citation method CiteAs suggested was not as robust as it could have been. The results of the search and their provenance are shown in the BoF’s slides, which can be downloaded at the link below.
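CiteAs also exposes a public API at api.citeas.org; a minimal query, assuming the endpoint and parameters as documented on the CiteAs site at the time of writing, looks something like this:

    import requests

    # Ask CiteAs how to cite a software product by name, DOI, or URL.
    # The response JSON includes the suggested citation(s) and the
    # provenance of how CiteAs found them; inspect the raw output
    # before relying on specific fields.
    resp = requests.get(
        "https://api.citeas.org/product/montage",
        params={"email": "you@example.edu"},  # identifies the requester to the service
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json())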

This led nicely into my short talk on metadata files and how the ASCL can create a metadata file from an ASCL entry. The files the ASCL creates programmatically, codemeta.json and CITATION.cff, are intended as starting points and contain placeholders for data the ASCL does not capture but which we feel should be included in the metadata file; we encourage software authors to edit these files before placing them on their code sites.

Yan Grange, who had organized an earlier BoF on best licensing practices, presented a summary of that session and the results of two of the several polls taken during it. Providing a license for your software is vitally important, as it lets others know what they can and cannot do with your software. Resources and other information from the earlier BoF are available online, and Grange’s summary slides for our software metadata BoF are included in the slides file below.

Teuben presented on several related topics: expanding or deepening a codemeta file with “API” information, the Unified Astronomy Thesaurus (UAT) and keywords, and the possibility of taking a software census at a niche science meeting. For the latter, he would like to take a well-defined field in astrophysics and have members of that community take an inventory of the software used and categorize it. He thinks a conference would be an ideal event for getting all the stakeholders together, and has identified a possible candidate conference for this activity.

The floor, if there can be a floor in a virtual meeting, was then open for comments, questions, answers and ideas, though discussion had already started in the Discord channel. One outcome of this session was that before the end of it, several participants had added metadata files to code repositories!

All slides for this session are in the PDF file below. If you would like more information about the session, please let us know in the comments section below, by pinging us at ADASS if you are participating in the meeting, or by emailing me at editor@ascl.net.

Slides (PDF)

ASCL poster at AAS235


Abstract: Software citation is good for research transparency and reproducibility, and maybe, if you work it right, for your CV, too. You can get credit and recognition through citations for your code! This presentation highlights several powerful methods for increasing the probability that use of your research software will be cited, and cited correctly. The presentation covers how to create codemeta.json and CITATION.cff automagically from Astrophysics Source Code Library (ASCL ascl.net) entries, edit, and use these files, the value of including such files on your code site(s), and efforts underway in astronomy and other fields to improve software citation and credit.

Authors: A. Allen (1,2), R. Nemiroff (3), P. Ryan (1), J. Schmidt (1), P. Teuben (2)
(1) Astrophysics Source Code Library
(2) Astronomy Department, University of Maryland, College Park, MD
(3) Michigan Technological University, Houghton, MI

Download (PDF)

The ASCL at AAS 235

The ASCL is participating in the American Astronomical Society (AAS) meeting that started yesterday in Honolulu, Hawai’i. We have two events, both on Sunday, January 5:

Best ways to let others know how to cite your research software
January 5; Poster 109.12
Software citation is good for research transparency and reproducibility, and maybe, if you work it right, for your CV, too. You can get credit and recognition through citations for your code! This presentation highlights several powerful methods for increasing the probability that use of your research software will be cited, and cited correctly. The presentation covers how to create codemeta.json and CITATION.cff automagically from Astrophysics Source Code Library (ASCL ascl.net) entries, edit, and use these files, the value of including such files on your code site(s), and efforts underway in astronomy and other fields to improve software citation and credit.

The Future and Future Governance of the Astrophysics Source Code Library
January 5, 2:00 PM – 3:30 PM; HCC – Room 301B
Over the past ten years, the Astrophysics Source Code Library (ASCL, ascl.net) has grown from a small repository holding about 40 codes, with hand-coded HTML pages maintained by one person, to a resource with citable entries on over 2000 codes and a modern, user- and editor-friendly database structure maintained by a small group of volunteers. With its 20th anniversary now behind it, it’s time to look at the resource and its governance and management. Does its current structure best serve the astro community? What changes would you like to see in its governance? We don’t know the answers to these and other questions! Please join us for an open discussion on the resource and what a new governance model for the ASCL might be.

A workshop for scientific software registries and repositories

I am involved in several efforts, in addition to the ASCL, to improve recognition and credit for software authors; one such effort is the FORCE11 Software Citation Implementation Working Group (SCIWG), in which several software registries and repositories are involved. These resources, along with others not part of the SCIWG, have formed a Repository Best Practice Task Force, which has held monthly conference calls this year to collaboratively develop a list of best practices for such resources. This has also been an excellent vehicle for enabling people who run these resources to share information about managing software registries and working with software authors, researchers, and journal editors to improve software citation.

Thanks to funding from the Sloan Foundation, members of this Task Force and other software resources are coming together in a Scientific Software Registry Collaboration Workshop to demonstrate unique aspects of our respective services, discuss challenges and share solutions to common issues that arise in managing our resources, finalize a list of best practices for our resources, and work cooperatively to speed adoption of the CodeMeta and/or Citation File Format standards. The workshop has been organized by the Caltech Library and ASCL, and takes place at the University of Maryland (College Park) this coming Wednesday and Thursday (November 13-14). It includes presentations by software registry managers and subject matter experts, break-out sessions for collaborative work, and group discussion.

I’m happy to say we are able to provide remote access to most of the plenary portions of the workshop through Webex; links on the workshop agenda identify the sessions available over Webex. As the workshop has an element of unconferencing, it’s possible that additional portions of the workshop will be suitable for Webex and if so, we will update the agenda accordingly. In addition, we will have someone live-scribing the event; a link to the Google Doc for these notes will be added to the agenda webpage before the workshop begins.

A major focus of this workshop is to discuss and finalize the best practices that have been identified so far in our monthly conference calls. A draft list of the practices (PDF) is available for download below; these are the practices we will be working on in break-out groups during the workshop. Links to the Google Docs we will be using for these breakout sessions are listed on the agenda; this offers another way for anyone interested to see the work being done in this meeting.

For a long time, I have wanted to meet with others doing work similar to what I do on the ASCL, and am very grateful to Tom Morrell, Mike Hucka, and Stephen Davison from Caltech Libraries for partnering with me to organize this workshop, and to Josh Greenberg at the Sloan Foundation for thinking this workshop was a good idea and funding the project. My thanks to all of them!

Draft list of Best Practices for research software registries (pdf)

(per apparent established practice)

I’ve set a goal of bringing the number of entries missing preferred citation information to under 1000, though that might be just beyond possible. When I started this process, there were 1284 entries without a preferred citation; I’ve examined the software sites and documentation of 150+ of these codes so far and have found explicit citation information for just over 14% of these.

In general, we include a preferred citation in an ASCL record when a code’s site or documentation explicitly states what should be cited (“cite [code] with this [ASCL entry/article/DOI/etc.]”). We don’t assume a paper listed under “References” or “Articles” is intended to be for citation, though that may be the intent of some authors listing them, as some list these papers because a code is built upon others’ work, or these papers include research that used the software.

In some cases, a particular software has no citations to the ASCL record and numerous citations (> 25, let’s say) to a code description paper even though the download site or repo does not specify how the software should be cited. Allowing this “apparent established practice” of citation to substitute for an explicit statement and listing the description paper as the preferred citation seems fair to me, and valuable to those who want to do the right thing by citing a software package but don’t find guidance for how to do so on the code’s site.

We very much prefer that authors provide explicit information on their preferred citation for their programming work, but where they don’t, and where there is an apparent established practice of citation, we will now list that citation method as the preferred citation in the ASCL entry. So far, this inferred information has been added to 15 ASCL entries.
Partial screenshot showing location of link to suggest a change or addition to an ASCL entry

Do you want to discuss different software citation methods before selecting a preferred method? Did I get your software’s preferred citation wrong or miss it entirely? If so, please let me know via email or the Suggest a change link at the bottom of your code’s ASCL entry.