
Adapting to generative AI in education, business, and beyond

W. P. Carey faculty are integrating generative AI into their classrooms, research, and university initiatives.

Molly Loonam

Higher education is learning to harness evolving technology's influence and potential in modern academic and business environments. But as students engage with generative artificial intelligence (AI) tools — applications that use machine learning algorithms to create new content like text, images, artwork, and video — how are W. P. Carey faculty integrating generative AI into their classroom policies, research, and university initiatives while addressing issues like equity, privacy, and overdependence on technology?

Across the university, classroom technology expectations differ between instructors and departments, but faculty members are encouraged to be transparent about technology usage. Daniel Mazzola, faculty director of W. P. Carey's Master of Science in Information Systems Management (MS-ISM), says students and faculty can benefit from classroom policies that set clear expectations about coursework citation requirements and define how AI aligns with the course's goals and values.

As AI continues to evolve, faculty need to stay abreast of these changes and update their teaching methods accordingly. Including AI guidelines in syllabi reflects this commitment to contemporary education.

— Daniel Mazzola, assistant chair and clinical professor of information systems (IS)

Syllabus guidelines can include acknowledging AI-assisted contributions to assignments, citing the specific generative AI applications and prompts used to generate information, and requiring students to take full responsibility for the accuracy and appropriateness of AI-generated content. The use of AI tools must not breach academic integrity policies, be unprofessional, or undermine the spirit of the assessment, and AI detection software is employed to verify compliance.

When it comes to generative AI large language models like ChatGPT, Matt Sopha, clinical associate professor of information systems, sees potential for reinforcing positive habits like asking good questions and recognizing good answers. Generative AI can also automate repetitive, low-level tasks like coding, summarizing ideas, and creating teaching or presentation outlines. Sopha says technology can help students become better consumers of their disciplines. Still, it's important that they continue to hone foundational skills in the classroom — like constructing logical arguments, forming coherent thoughts, and solving problems logically.

I love the idea of freeing up human beings to lean into the things that make us unique compared to technology, but there is a danger of overreliance and forgetting the human aspect of learning. I worry about an overdependence and over-utilization of technology without a more effective understanding of what we're doing with it.

— Matt Sopha, assistant chair and clinical associate professor of information systems

Faculty are also investigating how technology can make them better instructors. Clinical Assistant Professor of Information Systems Geoff Pofahl explains that essay assignments help check student understanding but are time-consuming to grade for instructors teaching large classes. Pofahl has been experimenting with generative AI's ability to understand and summarize language to evaluate coursework. So far, he has used generative AI only to review participation-only assignments — where students are graded solely on completion — but he is testing AI's ability to award points based on more complex rubrics. He plans to document his research in an upcoming paper.

I've seen some promising results from this AI. In theory, it's possible to use it to assist with grading coursework, but there's a lot of testing that still needs to be done.

— Geoff Pofahl, clinical assistant professor of information systems

As generative AI becomes a general-purpose technology in business, students are seeking out technology-focused degrees. W. P. Carey is integrating AI and data analytics into its curriculum to meet these needs: the school is adding deep technology courses to the Master of Science in Business Analytics degree program and introducing AI specializations into its MBA programs to equip the next generation with the tools and knowledge necessary to lead in an AI-driven business world.

During the 12-month MS-ISM program, students can choose among four distinct tracks — AI, data analytics, cybersecurity, and cloud computing — to customize their education to align with the evolving demands of the business sector.

"These tracks target a unique segment of information systems and business and enable students to hone their skills in a specific domain that matches their interests and career aspirations," says Mazzola. "Whether students prefer the data-driven aspects of business, the protective measures of cybersecurity, the innovation of AI, or the scalability provided by cloud services, they can steer their learning journey to best suit the current and future market landscapes."

Mindfully motivated technology

As AI technologies become more prevalent in education, Mazzola emphasizes the importance of educators imparting mindful approaches to AI usage among students. In the classroom, mindful AI means understanding academic integrity, recognizing original authorship, safeguarding personal and others' data privacy, and acknowledging AI-assisted contributions within coursework.

Outside the classroom, applications like ChatGPT have the potential to positively transform content creation in business and education, but they also pose risks like the spread of biased or polarizing information. To address these concerns and businesses' increasing need to organize large amounts of data, ASU's recently approved Center for AI and Data Analytics (AIDA) for Business and Society, also known as the Mindful AI center, provides mindful data and AI support and education to prepare for the risks and challenges posed by AI while shaping a desirable and sustainable future. The initiative also connects corporate partners needing AI and data analytics support with experts across the university.

"The Mindful AI center is driven by the recognition of the immense potential and profound impact AI and data analytics technologies can have on our lives and society," says Pei-yu "Sharon" Chen, IS chair and Red Avenue Foundation Professor and AIDA Center director. "We want to ensure AI can better our lives, society, and the world."

AIDA is taking a mindful approach to data and AI analytics governance through research, education, and innovative solutions to real-world AI problems: developing frameworks, metrics, and solutions to assess mindful AI; educating leaders and the workforce in a mindful AI mindset; and preparing for the evolving landscape of AI and advanced analytics tools to shape a sustainable future.

Chen explains that mindful AI encompasses responsible AI, which governs how people design, develop, and deploy technology and tools, and ethical AI, which concerns values and beliefs. Ultimately, mindful AI "attempts to bring a holistic view to understand its broader impact" on different types of users, society, and humanity in relation to ethics, bias, fairness, equity, privacy, security, and well-being.

Chen describes AI as algorithms that learn from data to make inferences and predictions. ChatGPT's release in November 2022 exemplified how powerful these algorithms have become, while the application's low cost and accessibility have made it easy for the public to adopt. Its prevalence has also sparked debates about the mindful and ethical uses of the technology.

OpenAI, a nonprofit AI research organization whose mission is to build AI that is safe and beneficial for all people, recently made headlines when the company fired and then rehired its CEO in a matter of days. The episode has furthered discussions about how quickly AI technology should be developed and implemented, and about the regulations needed to ensure ethical use and governance.

This drama surfaces the concerns of the future of AI, as well as the need for AI governance. There is much we do not know. All we can do now is our best to ensure mindful governance and process.

— Pei-yu "Sharon" Chen, IS chair, Red Avenue Foundation Professor, and co-director of the Mindful AI center

Chen likens an AI algorithm's ability to learn from data to a child learning about the world unsupervised. The child could be successful, but there is a high probability the child will learn unethical tricks or shortcuts to get what they want. Chen says that just as society and the education system help young people shape their values, the AI world needs the same guidance.

"As AI and data analytics tools become easily accessible, and individuals and organizations eagerly embrace AI, they are prone to be misused, leading to unintended consequences and potentially serious risks to human beings, society, and the planet," says Chen. "We believe in taking preventative measures now, emphasizing the urgency to instill mindful and responsible practices before potential issues escalate and require expensive corrective actions at a later stage."

Linking business and academia

In 2019, Ben Shao, professor and associate dean for Asia-Pacific programs and initiatives, organized an Industry Partners Conference to address AI best practices and the technology's significance in business as a general-purpose technology. On Sept. 29, the AIDA Center partnered with the Department of Information Systems for the third Industry Partners Conference, themed AI: Getting the Value and Tackling the Challenges. Held in McCord Hall on the ASU Tempe campus, the conference invited industry leaders and ASU experts to discuss AI issues related to decision-making, ethics, bias mitigation, data privacy, inclusive excellence, and innovative problem-solving in business and education.

Ben Shao presenting at the Industry Partners Conference.

"AI's impact has become more significant in the past four years. Almost every company has initiatives to incorporate AI into how they use technology not just to operate but to compete with other businesses," says Shao, conference organizer and co-director of the Digital Society Initiative (DSI), a W. P. Carey research lab affiliated with AIDA that aims to understand the roles digital technologies play in the transformation of consumers, businesses, and society.

DSI is just one of many university initiatives dedicated to technology research and implementation. The Actionable Analytics Lab is bridging the gap between academic research and applied execution of machine learning methodology, deep data analytics, computational linguistics, and social network analysis. The Blockchain Research Lab, made up of ASU students, faculty, and staff who partner with industry leaders, provides blockchain-backed solutions for real-world problems in business, finance, economics, mathematics, and computer science.

"We wanted DSI to be a bridge between people in academia and people in industry," says Michael Shi, associate professor, DSI co-director, and conference organizer. "This event is an exchange of ideas between people from two different sides. It can inspire research ideas for university faculty as they learn what people in industry are working on in terms of analytics and AI."

The event included three panel discussions moderated by ASU faculty, covering the use of AI in different industries, how the next generation of AI will be implemented and its challenges addressed, and networking and communication surrounding technology funding.

"We all know that AI is changing rapidly," says Shao. "Through collaboration, dialogue, and the sharing of insights, we hope to exploit AI's opportunities and address its challenges effectively."
