Ä¢¹½ÊÓƵ to Host AI Symposium

image of a woman in profile, with tech symbols and a cityscape

Adobe Stock image generated by AI.

Hands-on workshops, presentations explore AI use in education, business

The Dickinson community is gearing up for a symposium focused on artificial intelligence (AI). To be held Monday, April 14, the inaugural Dickinson AI Symposium will provide a platform for discussions about the implications of this transformative technology, along with practical tips for engaging with it responsibly and effectively.

The daylong event will include presentations and hands-on workshops highlighting innovative ways AI is being used in education, business and philanthropy; best practices for teaching, learning and organizational leadership; ethics and equity in AI; alumni and student perspectives; and more. Some of the hands-on workshops and presentations will also be available via livestream. All members of the global Dickinson community are invited to take part, but registration is required.

The symposium will kick off with an opening keynote address by Mike Capone, CEO of Qlik, on the importance of effective implementation, high-quality data and proper human oversight in AI use, as well as the need for professionals across all fields to grasp AI's potential and limitations. Afternoon keynote speaker Seth Hain, senior vice president for research and development at Epic, will provide insights into AI's role in health care.

Hands-on workshops will provide opportunities to learn how to:

  • craft AI-informed policies and syllabi
  • maximize results through improved prompt engineering
  • query your own text datasets with AI
  • help students practice world languages with chatbots
  • improve machine-translation literacy
  • and conduct deep research

Faculty and staff presentations will explore:

  • equitable AI systems and use
  • AI’s role in modern philanthropy
  • the role of human evaluation of AI calculations and the importance of critical thinking
  • alumni views of AI
  • creating generative art
  • and using AI to channel fictional characters in psychology education.

Students will also weigh in with an insider's look at how Dickinsonians use LLMs in class and beyond.

Registration is open through Thursday, April 10, 2025.
Attendance is limited for in-person presentations and workshops, and seating is available on a first-come, first-served basis.

If your selected presentation or workshop is full, you will be placed on a waiting list and notified before the symposium if space becomes available.

Dickinson AI Symposium Schedule: Monday, April 14

9:30–10:20 a.m.

Opening Keynote: Mike Capone, CEO, Qlik
Beyond Ethics: The Real Responsibility of AI

HUB Social Hall (virtual option is available)

AI responsibility isn’t just about ethics—it’s about diligence. Being responsible with AI means understanding how to implement it effectively, ensuring that it operates on high-quality, unbiased data and is integrated with the right human oversight. Just as learning to drive requires more than good intentions—it demands knowledge of the mechanics, road conditions, and handling unexpected events—AI requires the same rigor. In this keynote, we’ll explore why every professional, not just technologists, must grasp AI’s operational foundations to harness its full potential. The businesses and individuals that master this balance—between governance and speed, oversight and automation—will gain the true competitive edge in an AI-driven world.


10:30–11:10 a.m.

Workshop: AI as a Partner: Using Microsoft Copilot

Andrew Connell, Director of User Services

HUB Side Room 204

Discover how Microsoft Copilot can enhance productivity and streamline administrative tasks. This session will explore the differences between the Copilot available through the college’s Microsoft 365 license and the premium Copilot for Office 365. Through live demonstrations, attendees will learn practical ways to leverage AI for document drafting, data formatting, meeting summaries, and more. Whether you're new to AI-powered tools or looking to maximize their potential, this session will provide practical insights into making Copilot your digital assistant.

Attendees are encouraged to bring a laptop for active participation. (A limited number of loaner laptops are available through the ITS Help Desk. To reserve one, please email helpdesk@dickinson.edu or indicate your need when registering for the AI Symposium.)

Writing Workshop: Generative AI Policies and Syllabus Statements

Lucy McInerney, Assistant Director of the Writing Program

HUB Side Room 205

Tutors from the Writing Center will offer a hands-on workshop for faculty and administrative staff on writing policies that address generative AI in the classroom or on campus. They will offer their expertise as writing partners, as well as their perspectives as students bringing a critical eye to classroom policies.

Attendees are encouraged to bring a laptop for active participation. (A limited number of loaner laptops will be available through the ITS Help Desk. To reserve one, please email helpdesk@dickinson.edu or indicate your need when registering for the AI Symposium.)

How Dickinson Students Use LLMs: AI in the Classroom and Beyond

Hemanth Surya Ganesh Kapa '27 

Library Classroom 1

This presentation investigates how Dickinson students use large language models (LLMs) in their academic work, comparing PingPong (a specialized AI tool recommended in one class) with widely accessible LLMs like ChatGPT. Focusing on classes that explicitly allow LLM use, this session highlights common applications such as spellchecking, brainstorming, summarizing texts and solving problems. It also explores student perspectives on the benefits (e.g., time-saving, improved understanding) and drawbacks (e.g., dependency risks) of using AI. Additionally, the research compares LLM usage in classes where instructors permit AI tools versus those with restrictions or no clear guidelines. By analyzing these differences, the study aims to reveal how instructor policies shape student behavior, encourage experimentation or promote responsible use. This work provides actionable insights into the role of AI in education.

Machines Calculate but Humans Evaluate

Steve Erfle, Professor of International Business & Management

Library Classroom 2 (virtual option is available)

Generative art will necessarily maintain a humanistic element even in the face of AI advances because machines calculate, but humans evaluate. Electronic string art is a simple form of generative art that allows users to create artistic images by playing with four parameters. The Sequence Player mode increases the dynamism involved in such play and leads to learning, even if that learning is informal in nature. This presentation provides examples of this humanistic interaction.
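
As a rough illustration of the idea (the presenter's Electronic String Art tool and its four parameters are not detailed in this abstract), a minimal string-art sketch might connect points on a circle according to a step rule and leave the evaluation of the result to the human:

```python
# Minimal string-art sketch (illustrative only; the workshop's own tool and its
# four parameters are not specified here -- n, step, rotation and lines are
# assumptions chosen for demonstration).
import math
import matplotlib.pyplot as plt

def string_art(n=240, step=73, rotation=0.0, lines=240):
    """Connect point i to point (i * step) mod n on a unit circle."""
    angles = [2 * math.pi * k / n + rotation for k in range(n)]
    points = [(math.cos(a), math.sin(a)) for a in angles]
    fig, ax = plt.subplots(figsize=(5, 5))
    for i in range(lines):
        x1, y1 = points[i % n]
        x2, y2 = points[(i * step) % n]
        ax.plot([x1, x2], [y1, y2], linewidth=0.4, color="black")
    ax.set_aspect("equal")
    ax.axis("off")
    return fig

# The machine calculates the lines; the human judges which parameter
# combinations are worth keeping.
string_art().savefig("string_art.png", dpi=200)
```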

11:30 a.m.–12:10 p.m. 

Workshop: AI as a Partner: Prompt Engineering

James D’Annibale, Director of Academic Technology

HUB Side Room 204

"Prompt engineering" sounds so fancy and formal. In this workshop we'll demystify prompting and show that anyone can do it. You'll learn a prompting structure that will work for most use cases and will even significantly decrease the likelihood of "hallucinations." We'll also demonstrate how our prompting structure promotes using AI as a partner rather than a replacement, increasing critical thinking and efficiency together.

Attendees are encouraged to bring a laptop for active participation. (A limited number of loaner laptops are available through the ITS Help Desk. To reserve one, please email helpdesk@dickinson.edu or indicate your need when registering for the AI Symposium.)
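
The workshop's own prompting structure is not spelled out in this description, but the general pattern of stating role, context, task, constraints and output format can be sketched as follows (the field names and example text below are illustrative assumptions, not the workshop's template):

```python
# Illustrative prompt template (an assumption for demonstration; not the
# structure taught in the workshop).
def build_prompt(role, context, task, constraints, output_format):
    """Assemble a structured prompt; giving the model explicit grounding and
    constraints tends to reduce vague or fabricated ("hallucinated") answers."""
    return "\n\n".join([
        f"Role: {role}",
        f"Context: {context}",
        f"Task: {task}",
        f"Constraints: {constraints}",
        f"Output format: {output_format}",
    ])

print(build_prompt(
    role="You are a writing tutor at a liberal-arts college.",
    context="A first-year student has drafted a 500-word response paper.",
    task="Suggest three concrete revisions, citing the relevant sentences.",
    constraints="Do not rewrite the paper; if unsure about a fact, say so.",
    output_format="A numbered list with one suggestion per item.",
))
```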

Workshop: Chatbots in the Foreign Language Classroom

Todd Bryant, Language Technology Specialist, Academic Technology 

HUB Side Room 205

The academic technology team partnered with Dickinson computer science students to develop chatbots that can be customized to simulate a person or scenario in any language and embedded in Moodle. Improvements have been made since the project was announced this past fall. I will walk participants through adding their own customized chatbot to their Moodle course or website and discuss future improvements.

Attendees are encouraged to bring a laptop for active participation. (A limited number of loaner laptops are available through the ITS Help Desk. To reserve one, please email helpdesk@dickinson.edu or indicate your need when registering for the AI Symposium.)

AI Artistry: An Intro to Creating Original Generative Images

William Milberry, Computing Specialist, User Services

Library Classroom 1

This presentation will introduce generative AI prompting for the creation of original images. I'll give an overview of key concepts, including how to structure prompts, and shine a light on creative options that newcomers might not realize are available. I'll also touch on the tools' inner workings and how they affect results. The goal is to remove some of the mystery and give attendees a starting point for using AI to create their own custom images.

The presentation will be applicable to most generative AI tools, with examples given using Microsoft Copilot, Adobe Firefly and Stable Diffusion.
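
As a hedged example of what "structuring a prompt" can mean for image generation (the component labels and wording below are assumptions for illustration, not the presenter's framework), a prompt can be assembled from a few deliberate parts:

```python
# Illustrative image-prompt builder (component names and example wording are
# assumptions for demonstration, not the presenter's method).
parts = {
    "subject": "a limestone college building at dusk",
    "style": "watercolor illustration",
    "lighting": "warm lamplight, long shadows",
    "composition": "wide shot, low angle",
    "details": "students crossing a leaf-covered quad",
}

# Most text-to-image tools accept a single comma-separated description.
prompt = ", ".join(parts.values())
print(prompt)
```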

Alumni Views of AI

MaryAlice Bitts-Jackson, Assistant Director of Editorial Services

Library Classroom 2 (virtual option is available)

Last year, nearly 200 Dickinson alumni, representing different generations and occupations, shared their views and experiences through a newsletter poll. They revealed what excited them about AI technologies, what gave them pause, how they use AI and more. How do our alumni think about and engage with these potentially transformative technologies, and have their views changed in the year since? What insights did Dickinson AI experts share? And what lesson did the presenter learn when she asked an LLM to write a poll about AI?

1:30–2:20 p.m.

Afternoon Keynote: Seth Hain, Epic SVP, R&D

Virtual (HUB Social Hall will be available for viewing)

Seth Hain, senior vice president for research and development at Epic, will deliver the afternoon keynote. In that role, he focuses on advancing the use of analytics and AI across health care to improve care and efficiency. During his 19 years at Epic, he has worked on enhancing foundational technologies, including AI, across Epic's platform and their applications in the clinical domain. He has also led the system and performance team, with a focus on database performance and architecture. A native of Seward, Neb., he received a B.S. in mathematics from the University of Nebraska and an M.S. in mathematics from the University of Wisconsin.

2:30–3:10 p.m.

GenAI & Bias: Bridging Technology and Humanities for Ethical AI

Pasquale Cascarano, Assistant Professor, Department of the Arts, University of Bologna

HUB Social Hall

Generative artificial intelligence (GenAI) is increasingly shaping how we interact with information, create multimedia content and perceive the world. However, GenAI systems often reflect and perpetuate societal biases. This session will explore the origins of bias in GenAI models, its impact on marginalized and discriminated-against communities, and strategies for developing more equitable AI systems. We will examine real-world examples of bias in open-source GenAI tools across various tasks, including text-to-text, text-to-image and text-to-3D object generation. Additionally, we will discuss ethical frameworks and policy considerations to mitigate these biases.

Workshop: Targeting AI: Using Your Own Text Dataset for Querying with AI—NotebookLM

Todd Bryant, Language Technology Specialist, Academic Technology

HUB Side Room 204

Two common limitations of most AI tools are that their responses are frequently vague and too often unreliable. One way to improve results is to have the AI ground its response in a specific collection of texts, a technique known as retrieval-augmented generation (RAG). It is useful when you already have a large collection of texts such as articles, textbooks or your own notes. Perhaps you are looking for patterns or occurrences in an area with which you are very familiar. Alternatively, imagine you're a student looking for additional examples, alternative explanations or even assessments based on a textbook or series of articles. We will use NotebookLM as the starting point for our introduction to RAG. Participants are encouraged to come with a Google account they would like to use, along with a collection of texts with which they are familiar.
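
NotebookLM handles retrieval behind the scenes, but the retrieve-then-prompt idea can be sketched in a few self-contained lines (TF-IDF retrieval below stands in for the embedding-based search a production RAG system would likely use; the documents and query are invented for illustration):

```python
# Minimal retrieval-augmented-generation sketch (illustrative; NotebookLM's own
# pipeline is not described here, and TF-IDF stands in for a real embedding model).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Lecture notes: retrieval-augmented generation grounds answers in sources.",
    "Article: hallucinations drop when models cite retrieved passages.",
    "Textbook chapter: TF-IDF scores terms by frequency and rarity.",
]
query = "How does grounding a model in retrieved sources help?"

# 1. Retrieve: rank the collection against the query.
vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)
query_vector = vectorizer.transform([query])
scores = cosine_similarity(query_vector, doc_vectors)[0]
top_docs = [documents[i] for i in scores.argsort()[::-1][:2]]

# 2. Augment: stuff the retrieved passages into the prompt an LLM would see.
prompt = (
    "Answer the question using only the sources below; cite them.\n\n"
    + "\n".join(f"Source {i + 1}: {d}" for i, d in enumerate(top_docs))
    + f"\n\nQuestion: {query}"
)
print(prompt)  # 3. Generate: this prompt would then be sent to the LLM.
```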

Attendees are encouraged to bring a laptop for active participation. (A limited number of loaner laptops are available through the ITS Help Desk. To reserve one, please email helpdesk@dickinson.edu or indicate your need when registering for the AI Symposium.)

Writing Workshop: Generative AI Policies and Syllabus Statements

Lucy McInerney, Assistant Director of the Writing Program

HUB Side Room 205

Tutors from the Writing Center will offer a hands-on workshop for faculty and administrative staff on writing policies that address generative AI in the classroom or on campus. They will offer their expertise as writing partners, as well as their perspectives as students bringing a critical eye to classroom policies.

Attendees are encouraged to bring a laptop for active participation. (A limited number of loaner laptops are available through the ITS Help Desk. To reserve one, please email helpdesk@dickinson.edu or indicate your need when registering for the AI Symposium.)

AI as a Thinking Partner in Philanthropy: Harnessing Insights Without Losing the Human Touch

Nicole Simmons, Executive Director of Research & Strategy, and Carlo Robustelli, Vice President for College Advancement

Library Classroom 1

Artificial intelligence is transforming fundraising, prospect research and donor engagement—but how do we ensure it enhances, rather than replaces, human intelligence? In this session, we’ll explore the role of AI as a thinking partner in philanthropy, examining its potential to reveal new patterns and trends while challenging the biases that shape donor outreach. We’ll discuss the intersection of AI, storytelling, and emotional intelligence, and ask: Is AI expanding our vision or just reinforcing existing perceptions?

Through real-world examples and an interactive discussion, we’ll tackle questions like: how do we ensure AI-driven insights lead to meaningful donor relationships and institutional growth? Where should we trust AI, and where should human experience take the lead? Join us to explore how AI can serve philanthropy best—not by replacing human judgment, but by making it sharper, more inclusive, and more effective.

When AI Gives Bad Advice: Critical Thinking in Human-AI Collaborations

Eren Bilen, Assistant Professor of Data Analytics

Library Classroom 2 (virtual option is available)

As human-AI collaborations become increasingly prevalent in organizations, this research investigates whether limited critical thinking might explain observed poor performance of human-AI collaborations in problem solving tasks at the frontier of AI capacities. We conducted experiments to assess whether (i) individuals critically engage with generative AI output, (ii) whether this critical engagement is detectable by potential interlocutors/team members, and (iii) whether simple interventions promoting critical evaluations of AI output could improve output accuracy in organizational contexts. In the first experiment, we examine participants' critical evaluation of AI-generated output using a simple problem. Participants were randomly assigned to groups receiving no help (Control), a partially correct suggestion from ChatGPT-4o, or a fully correct suggestion from ChatGPT-4o. Correct response rates were 15% in the Control group, 3% in the partially correct ChatGPT group, and 94% in the fully correct ChatGPT group. The significant difference in accuracy between the partially correct and fully correct ChatGPT groups, along with the decline in correctness in the partially correct group below that of the Control, suggests deficiencies in critical reasoning when interacting with the chatbot. In the second experiment, participants identified whether an email was written by a human or a chatbot, providing further evidence that individuals can detect AI-generated content in their interactions. This highlights the potential impact of limited critical evaluation of AI output on team dynamics. Finally, we show that a simple recommendation to carefully examine ChatGPT's output before providing an answer can significantly enhance output accuracy when AI suggestions are incorrect.

3:30–4:10 p.m.

Workshop: AI as a Partner: Prompt Engineering

James D’Annibale, Director of Academic Technology

HUB Side Room 204

"Prompt engineering" sounds so fancy and formal. In this workshop we'll demystify prompting and show that anyone can do it. You'll learn a prompting structure that will work for most use-cases and will even significantly decrease the likelihood of "hallucinations." We'll also demonstrate how our prompting structure promotes using AI as a partner rather than a replacement, increasing critical thinking and efficiency together.

Attendees are encouraged to bring a laptop for active participation. (A limited number of loaner laptops are available through the ITS Help Desk. To reserve one, please email helpdesk@dickinson.edu or indicate your need when registering for the AI Symposium.)

Workshop: AI for Research: Specific Tools, Reasoning, and "Deep Research"

Todd Bryant, Language Technology Specialist, Academic Technology

HUB Side Room 205

We will introduce several tools, starting with those that expand familiar functionality, such as network visualizations of research papers based on common content and citations. We will then look at the latest models' "reasoning" functionality. Previously, AI tools would take a single prompt and use an algorithm to create a response. With reasoning, the models can perform multistep processes for more complicated tasks. One use case is "Deep Research," which follows a multistep process of gathering sources, adding data and then producing the response. Google and OpenAI have recently released versions that apply this reasoning functionality. Participants are encouraged to come with a Google account they would like to use and at least one research question. One possibility would be to place yourself in a Dickinson student's shoes when considering a topic for undergraduate research: which questions do you ask them to consider when first discussing a topic?
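
The vendors' "Deep Research" pipelines are proprietary, but the multistep pattern described above (plan, gather, then synthesize) can be sketched with placeholder functions; every function body below is a hypothetical stub standing in for a real search or model call:

```python
# Skeleton of a multistep "deep research" loop (illustrative only; the stubbed
# functions stand in for real model and search calls, which are not shown).
def plan_subquestions(question):
    # Hypothetical stub: a reasoning model would break the question into steps.
    return [f"Background on: {question}", f"Recent findings on: {question}"]

def gather_sources(subquestion):
    # Hypothetical stub: a real implementation would call a search API here.
    return [{"title": f"Placeholder source for '{subquestion}'", "notes": "..."}]

def synthesize(question, evidence):
    # Hypothetical stub: a real implementation would prompt an LLM with the evidence.
    lines = [f"Draft answer to: {question}"]
    lines += [f"- {item['title']}" for item in evidence]
    return "\n".join(lines)

def deep_research(question):
    evidence = []
    for sub in plan_subquestions(question):      # step 1: plan
        evidence.extend(gather_sources(sub))     # step 2: gather
    return synthesize(question, evidence)        # step 3: synthesize

print(deep_research("How are small colleges adopting AI in teaching?"))
```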

Attendees are encouraged to bring a laptop for active participation. (A limited number of loaner laptops are available through the ITS Help Desk. To reserve one, please email helpdesk@dickinson.edu or indicate your need when registering for the AI Symposium.)

Workshop: Machine Translation Literacy and Critical Thinking: A Toolkit

Amélie Josselin-Leray, Professor of Linguistics and Translation Studies, University of Toulouse Jean Jaurès, France    

Library Classroom 1

Every day, Machine Translation (MT) systems such as Google Translate, DeepL (neural machine translation systems) and ChatGPT (generative AI systems) are used to perform a wide range of translation tasks with seemingly equal ease. It is important to distinguish critically between their strengths and limitations to maximize their usefulness.

In combination with the April 15 Clarke Forum event, What Does It Mean to Be a Human Translator in the Age of AI?, this workshop addresses the issue of machine translation literacy and aims to provide a set of tools to anyone, particularly students and educators, who uses these systems in their daily work.

The term "machine translation literacy," coined after the concept of digital literacy, was first introduced by Bowker and Buitrago Ciro (2019) to describe the core skills needed by users of MT systems. These include the ability to understand the basics of how MT engines process texts, to appreciate the difference from other tools with which they are often confused, to understand the wider implications associated with the use of MT, to create or alter a text so that it can be translated more easily by an MT system (pre-editing), and to modify the output of an MT system to improve its accuracy and readability (post-editing). The critical thinking skills needed to assess whether, when and why to use MT and how to interact with it will be presented to the audience together with concrete examples of texts that have been automatically translated into English using various MT systems.

Attendees are encouraged to bring a laptop for active participation. (A limited number of loaner laptops are available through the ITS Help Desk. To reserve one, please email helpdesk@dickinson.edu or indicate your need when registering for the AI Symposium.)

Fuhgeddaboudit! Using AI and HBO's The Sopranos Characters to Teach Forensic Psychology

Howard Rosen, Adjunct Faculty, Psychology

Library Classroom 2 (virtual option is available)

Students in an advanced seminar on forensic psychology (Anna Choudary '25, Christina DiGiorgio '27, Abigail Foster '26 and Kaitlyn Meneely '25) are conducting real-time forensic evaluations with AI characters responding in the role of Tony Soprano and other series characters, providing high-quality, high-interest, realistic simulations that otherwise would be impossible to explore in an undergraduate classroom. Not only do students develop evaluation knowledge and skill, but through AI they also get a useful introduction to a wide array of test instruments and their interpretation, reducing the need to devote valuable class time to this component. In this 30-minute presentation, the class structure will be described, and students will reflect on their learning experience with AI.

4:30–5:30 p.m.

Closing Reception

HUB Social Hall


Beyond the Symposium

The AI Symposium is just one part of a broader series of events at Ä¢¹½ÊÓƵ focused on artificial intelligence. MaryAlice Bitts-Jackson noted that several other AI-related events are scheduled for April, further enriching the campus-wide dialogue. These include:

  • Thursday, April 3, 2025 – Joseph Priestley Award Celebration Lecture | Clarke Forum for Contemporary Issues: Misinformation in the Age of AI: This Clarke Forum event will explore the critical issue of misinformation in the context of advancing AI technologies.
  • Tuesday, April 15, 2025 | Clarke Forum for Contemporary Issues: What Does It Mean to Be a Human Translator in the Age of AI?: This event will focus on the evolving role of human translators in an AI-driven world.
  • April 16 & 18, Hartman House, hosted by the Presidential Working Group on AI: AI - Continue the Conversation (Employees Only): These sessions will provide Dickinson employees with dedicated time and space to further discuss AI-related topics.


Published March 26, 2025