ChatGPT Gets Dartmouth Talking

Advances in artificial intelligence spark enthusiasm, and some caution.

This illustration was created using Midjourney artificial intelligence when senior graphic designer Richard Clark asked for a “vector illustration depicting 2 students working on laptops in the Baker Library, Dartmouth College.” (Graphic by Midjourney artificial intelligence)

ChatGPT, OpenAI’s trending chatbot that generates conversational responses to user prompts through advanced artificial intelligence, has been busy since its launch in late November.

It has composed reams of poetry, written essays and code, debated with users on a dizzying breadth of subjects, earned co-author credits in peer-reviewed publications, passed business and law school exams, and come close to cracking the notoriously difficult U.S. medical licensing exam.

Given its capabilities, it is no surprise that ChatGPT has drawn the attention of students, faculty, and administrators at Dartmouth and elsewhere.

Many on campus have taken the software for a spin—simply engaging it in enjoyable banter, challenging its prowess, or discovering how to put its abilities to use. Meanwhile, faculty and administrators are also grappling with questions of how the new technology will impact teaching, learning, and assessment of student work.

There is optimism and enthusiasm, tinged with caution.

“ChatGPT and other generative AI technologies have huge potential for—and will have huge effects on—education,” says Provost David Kotz ’86, the Pat and John Rosenwald Professor in the Department of Computer Science. “My hope is to provide immediate support to faculty and instructors to become familiar with the technology and its impacts, and then look further down the road to consider how we can leverage it as a pedagogical tool, recognizing that it will be part of the future of teaching, learning, scholarship, and work.”

What’s so special about ChatGPT?

ChatGPT is the most proficient version yet of a large language model, an advanced AI system that is trained to mirror human language. It crunched 570 gigabytes of data—millions of existing web texts totaling more than 3 billion words—in its quest to master the nuances of conversation and writing in an estimated 95 languages.

“The technology behind ChatGPT has been available for some time now, but OpenAI has now made it available to the general public with a very easy-to-use interface,” says Soroush Vosoughi, an assistant professor of computer science who studies large language models and their potential impacts.

Soroush Vosoughi, assistant professor of computer science. (Photo by Katie Lenhart)

Access to previous models was limited and users needed at least some coding expertise, if not some understanding of machine learning, to use them effectively. ChatGPT has changed all that.

What’s more, says Vosoughi, the creators have come up with a method of refining their model through human feedback to make it more socially acceptable, steering it away from using offensive language or engaging in unethical exchanges.

Carly Bobak, a biomedical informatics specialist at Research Computing, experienced this feature firsthand. After toying with ChatGPT to create Christmas gifts for her family, Bobak was keen to direct the bot’s talents to support her research.

One research project she was involved in examined data collected in medical clinics from the early 1900s to the 1950s. Race and ethnicity had been recorded as freeform text and needed to be updated to modern terminology, which often meant correcting typos and rephrasing colloquialisms.

“I was trying to use ChatGPT to help me bin the races appropriately, but because some of the terms are no longer considered politically correct, it wouldn’t do it for me; it argued with me about it,” she says. Bobak had to work hard to convince the bot by explaining her motivations before it would help her with the task.
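
In script form, the kind of normalization Bobak describes might look something like the sketch below. It is a minimal illustration, not her actual workflow: it assumes the openai Python package’s chat-completion interface, an API key in the environment, a placeholder model name, and hypothetical category bins.

    # Illustrative sketch only, not Bobak's actual workflow. Assumes the openai
    # package's ChatCompletion interface and that OPENAI_API_KEY is set; the
    # model name and category list below are placeholders.
    import openai

    MODERN_CATEGORIES = ["<modern category 1>", "<modern category 2>", "Unknown"]

    def normalize_entry(raw_entry: str) -> str:
        """Map one freeform race/ethnicity entry from a historical record
        to one of the study's modern categories."""
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",  # assumed model; any chat-capable model would do
            temperature=0,          # keep the mapping as repeatable as possible
            messages=[
                {"role": "system",
                 "content": ("You are helping a medical-history research project "
                             "standardize archaic, freeform race and ethnicity "
                             "entries. Reply with exactly one of: "
                             + ", ".join(MODERN_CATEGORIES))},
                {"role": "user", "content": "Entry from the original record: " + raw_entry},
            ],
        )
        return response["choices"][0]["message"]["content"].strip()

The research motivation Bobak had to explain interactively becomes, in this form, part of the system prompt.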

The fact that the AI is language-agnostic is a huge plus, says Christian Darabos, the senior director of Research Computing. He cites the example of a project that involved extracting names of ships from 18th-century French-language books.

“Not only can you converse with ChatGPT in one of many languages, but you can explain in English what you want it to do on a French text and it will do a pretty decent job,” he says.
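
A rough sketch of that multilingual workflow appears below. It is hypothetical, not the project’s actual pipeline, and makes the same assumptions as the previous sketch about the openai package and model name.

    # Hypothetical illustration of the multilingual use Darabos describes, not
    # the project's actual pipeline. Same assumptions as above: the openai
    # package's ChatCompletion interface and a placeholder model name.
    import openai

    def extract_ship_names(french_passage: str) -> list[str]:
        """Instruct the model in English to pull ship names out of a French passage."""
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            temperature=0,
            messages=[{
                "role": "user",
                "content": ("The passage below is from an 18th-century French book. "
                            "List every ship name it mentions, one per line, "
                            "or reply NONE.\n\n" + french_passage),
            }],
        )
        answer = response["choices"][0]["message"]["content"].strip()
        return [] if answer.upper() == "NONE" else answer.splitlines()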

The genie is out of the bottle

Even as they open the doors of novel possibilities, new technologies present new challenges. There is palpable concern about the potential for misuse of generative AI tools like ChatGPT in assignments and exams.

The following, for example, is the first paragraph of a three-paragraph response ChatGPT gave when asked to describe Dartmouth’s history with AI; the response incorrectly gives all of the named researchers a Dartmouth affiliation.

Dartmouth College, located in Hanover, New Hampshire, has a rich history of research in artificial intelligence (AI). In the mid-1950s, a group of Dartmouth computer science researchers, including John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon, held a conference on “The Dartmouth Conference on Artificial Intelligence,” which is widely considered to be the birthplace of AI as a field of study. The conference marked the beginning of AI as a new and separate field of research, distinct from cybernetics and computer science.

“Whenever academic communities begin to discuss how to interact with—whether to partition off or invite—new technology into classrooms, it opens up the question of how they talk about academic integrity and academic dishonesty around it,” says Katherine Maguire, director of Dartmouth’s Office of Community Standards and Accountability.

Faculty across departments are having conversations about how to deal with honor code violations facilitated by ChatGPT, says Vosoughi. Many have broached the issue in their classes, but a coherent policy is not yet in place.

“We cannot put the genie back in the bottle,” says Vosoughi; the answer instead is to update the curriculum and any relevant policies. Like others in his field, he is working to develop tools that can detect whether a piece of writing was generated using ChatGPT.

The larger, and more interesting, conversation runs deeper, according to Maguire. It questions the very nature of learning, the purpose of teaching, and how new technologies support or detract from that.

“It’s a multilayered conversation that, because of the speed with which ChatGPT has entered the foray, is an accelerated conversation,” says Maguire. It is crucial, she adds, to help students understand the updated expectations and the principles behind them.

As institutions move to create policies, James Dobson, assistant professor of English and creative writing and director of the Institute for Writing and Rhetoric, cautions against overreaction. “I am especially concerned with not treating our students with suspicion,” he says.

To inform the College’s response, the Dartmouth Center for the Advancement of Learning will host a facilitated discussion this Friday for faculty and staff who are thinking about issues of artificial intelligence in the Dartmouth teaching and learning context.

Susan Brison, Eunice and Julian Cohen Professor for the Study of Ethics and Human Values. (Photo by Rob Strong ’04)

Plagiarism is another issue to consider, says Susan Brison, Eunice and Julian Cohen Professor for the Study of Ethics and Human Values. She predicts that issues of copyright will become more salient. “ChatGPT is trained on massive amounts of data from the internet, including works of literature and scholarship, but the original sources aren’t credited,” she says.

Another pitfall to contend with is misinformation. AI tools draw their knowledge from online sources, which are peppered with false information.

In a recent interview with Time magazine, OpenAI Chief Technology Officer Mira Murati, Thayer ’12, said ChatGPT has “immense potential” to help with personalized education, among other fields, but acknowledged there are also many challenges.

“This is a unique moment in time where we do have agency in how it shapes society. And it goes both ways: the technology shapes us and we shape it. There are a lot of hard problems to figure out,” Murati told the magazine.

Harnessing the potential

Despite its many flaws, AI is clearly poised to revolutionize research and education.

Brison and other Dartmouth faculty are already pondering ways to bring ChatGPT to their classrooms. “It could be useful to get students to think in new ways about writing and examine what goes into writing a good essay, whatever the genre is,” Brison says.

“We’ve had people teach courses on the creative use of computing in the past and I could certainly see that coming back with renewed interest now,” says Dobson.

AI systems are already hard at work on campus. Marvis, an AI-driven network assistant that diagnoses and responds to problems in Dartmouth’s wireless networks, is one example.

Mitchel Davis, vice president for Information, Technology, and Consulting and chief information officer, expects the new technology to free people from many mundane tasks, allowing them to focus their energies on higher-end projects. Members of his team have used it to review code, assist in crafting staff reviews, and set career goals for the year.

“It’s like a partner that works with you. It learns how you work, learns your environment, and can go ahead and provide support; but it’s learning not just from you, it’s learning from everybody,” says Davis.

There are many ways in which generative AI tools can speed up research, says Darabos. They can summarize academic articles and surface relevant sources, making literature reviews significantly more efficient, and they can improve technical writing, making text smoother and more readable. As long as the assistance is credited, these are acceptable uses of the technology, he says.
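
As a concrete, and entirely hypothetical, sketch of that literature-review speedup, a short script could feed saved abstracts to a chat model and collect two-sentence summaries; the directory layout and model name below are assumptions.

    # Hypothetical sketch of the literature-review use Darabos mentions. Assumes
    # one plain-text abstract per file in ./abstracts/ and the openai package's
    # ChatCompletion interface; the model name is a placeholder.
    import glob
    import openai

    def summarize(abstract: str) -> str:
        """Ask the model for a two-sentence summary of one abstract."""
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            temperature=0,
            messages=[{"role": "user",
                       "content": "Summarize this abstract in two sentences for a "
                                  "literature review:\n\n" + abstract}],
        )
        return response["choices"][0]["message"]["content"].strip()

    for path in glob.glob("abstracts/*.txt"):
        with open(path, encoding="utf-8") as f:
            print(path, "->", summarize(f.read()))

Any summaries produced this way would, as Darabos notes, need the AI assistance credited.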

At an upcoming biocomputing symposium, Darabos and Bobak are proposing a workshop on how AI-based software can be used in biomedical data science.

When asked how the chatbot was likely to shift the landscape of research, Bobak’s response, using ChatGPT itself to clean up her thoughts, was: “ChatGPT continues to surprise and inspire me as I see new ways people are harnessing its capabilities. Its use as a real-time, on-demand ideation partner is particularly fascinating. With the ability to receive immediate feedback and generate actionable plans, it streamlines the problem-solving process and optimizes our face-to-face time with colleagues.”

Everyone in the field, in academia and industry alike, has been taken aback by the widespread and overwhelming response to the tool, says Vosoughi, the computer science professor.

“Thanks to ChatGPT, people are now more aware of machine learning,” he says. “The last time I spoke to my grandma, she asked me what the buzz was all about.”