When new technology emerges, it will always be met with some form of criticism and caution, but also hope for the opportunities that can arise. Historically, this was found during the introduction of automobiles as people expressed safety concerns for its users and pedestrians, many almost viewing it as a toy. But as more safety measures were implemented and the threshold for its tolerance expanded, its convenience for travel and gaining access to the world has made the automobile more dependable than ever before.
Time and time again, technology’s label of good or bad is nuanced and based on context: Who is using it, what is it being used for, how is it being used, and just how much are humans involved? Artificial intelligence (AI) is no different, and it’s the current technological advancement to face the chopping block of judgment.
At Old Dominion University, artificial intelligence is being explored and used to the fullest extent with ethics being a top priority. In February, ODU released MonarchMind, a generative AI platform that is intended for the needs of staff and faculty in higher education. “We are not just adopting AI; we are shaping how it can be responsibly integrated into teaching, research and University operations. MonarchMind provides a structured, ethical AI environment that allows faculty and staff to innovate confidently,” said Dr. Chrysoula Malogianni, the Associate Vice President for Digital Innovation.
From March 3-16, ODU’s School of Cybersecurity held their first AI competition to raise awareness about security vulnerabilities in Large Language Models (LLMs). Participants were required to craft prompts to uncover hidden “flags” embedded within a pre-trained ChatGPT model in what’s called AI red-teaming, according to Dr. Mohammad GhasemiGol, a research assistant professor at the university’s School of Cybersecurity.
AI red-teaming is a simulated process of testing AI systems for exploitations and flaws that can uncover security risks. “This exercise not only highlighted the importance of safeguarding sensitive data in LLMs but also allowed participants to better understand the tactics and techniques used in probing AI systems for vulnerabilities,” said GhasemiGol. The winners have yet to be officially announced; however, with 94 participants submitting over 21,000 prompts, the competition indicates a strong presence of enthusiasm for learning the ropes of AI.
These are only the most recent implementations of AI at ODU. Multiple projects are in development such as a Help Desk service for IT and staff, a Canvas 24/7 course assistant, and an avatar-based tool to help with virtual learning. In October, the university announced that it will recruit 25 faculty members across multiple disciplines that have expertise in AI to expand research in the field. From an instructional standpoint, 10 faculty members were selected across various colleges and schools at the University to expand courses that will cover the ethics, development, and usages of AI. ODUGlobal also offers two certifications, AI in Data Science and Trustworthy AI, in an accelerated asynchronous format.
ODU is embracing the fast-paced evolving world of AI to advance academia and research in an ethical and responsible manner, but the line for using AI as a tool versus a cheat code is drawn where human involvement ends. Generative AI only works as well as the prompts it is given, but even as AI becomes more accessible, versatile, accurate, and efficient, it is critical for its users to remain aware of the ethical implications that arise.
A 2024 paper co-authored by Lisa Messeri, a Yale anthropologist, and M.J. Crockett, a Princeton University cognitive scientist, proposed a scientific process that would use AI to replace essentially any human role. The purpose was to advocate for AI as a tool and warn against using it as a partner. The proposal consisted of the four primary research stages:
- Design stage: Using AI as an oracle to search, evaluate, and analyze substantial literature and also formulate research questions.
- Data collection: When data cannot be obtained, AI would be used to generate accurate data, which would lack human study participants.
- Data analysis and conclusions: AI used to analyze large datasets and interpret data, narrative, and the decision-making process.
- Peer-review process: AI objectively evaluates scientific studies for replicability.
AI is not fluent in recognizing nuance and gray area from data the way that humans are, let alone a disciplinary researcher. On top of this, algorithms trained from biased data can produce biased and discriminatory outcomes. Academia and research are built on the diversity, collaboration, and creativity of human minds. “Acknowledging that science is a social practice that benefits from including diverse standpoints will help us realize its full potential,” says Crockett. “Replacing diverse standpoints with AI tools will set back the clock on the progress we’ve made toward including more perspectives in scientific work.”
If society becomes too reliant on AI for objectivity, then AI will become more likely to be treated as an authoritative figure of knowledge while we use less of our evolutionary brain power, thus understanding less. AI tools are not a collective representation of all perspectives, nor is it the removal of any perspective that is believed to be the pinnacle of objectivity.
Today, we see AI being used everywhere, even in places we would least expect. Perhaps AI is acceptable doing a quick, low-stakes Google search, but no, I do not need an AI summary of two text messages; AI cannot replicate the tone and manner of the person I’m receiving texts from. Some have expressed annoyance for AI in their daily lives, so much so that the acronym “AI” has become a sort of buzz word, in the way that “NFTs” or “crypto” did. AI is becoming the default for applications, software, and search engines to the point that one has to go in to turn it off, rather than AI being something to toggle on.
The implementation of AI ingrained into our daily lives begs the question: Will AI eventually become normalized to the point where our psychology evolves to be desensitized to its safety concerns, or would we become more vigilant? The Mere Exposure Effect suggests that the more exposure we have to AI to learn about it and its capabilities, the more positive a perception we can have. This can be an issue, however, because if safety and risks (LLM hallucinations and biases) are not addressed seriously or properly enough, then society becomes complacent and desensitized to the risks.
On the other hand, the Black Box Effect takes skepticism and the lack of total understanding about AI and induces a sense of unease or danger. As the general public, who are not experts of AI, becomes more accepting of AI, those who understand the complexities and uncertainties that come with AI remain cautious. The term “black box” implies that what goes in and what comes out are understood, but what happens in between is not necessarily understood.
AI is critically taught with the fundamentals of ethics, responsibility, transparency, and safety. These fundamentals are not boxes that are checked off only by one, but they’re engrained in the teaching and application of AI. This ensures that we keep AI regulated, use it strictly as a tool without becoming reliant upon it, do not become desensitized but stay vigilant of what it can be capable of, and protect our role as humans in society.
This story was originally published in the Mace and Crown’s Spring 2025 magazine.