'One on One': Ian McClure on CATS AI

Marketing shoot on October 25, 2019. Photo by Mark Cornelison | UKphoto

Last week, the University of Kentucky announced its "Commonwealth AI Transdisciplinary Strategy," or CATS AI, a framework built to support university-wide integration of AI tools and technology. WUKY's Clay Wallace speaks with one of the program's interim co-directors, Ian McClure, about what he hopes CATS AI will accomplish and how questions of data safety, accountability, and intellectual property are being addressed.

This interview transcript has been edited for length and clarity.

Clay Wallace, WUKY

"AI" covers a wide range of technologies. When we talk about AI and implementing AI tools, what are the range of technologies that we're looking at?

Ian McClure, CATS AI co-director

One of the main [misconceptions] about AI is that it's brand new. In fact, it's been in development for decades; the foundational elements of AI, like automation, have been in use for many years.

AI came into mainstream public use in November 2022, when ChatGPT surfaced. That's when everyone realized generative AI is something that can help them quickly get to things they can touch and feel and see and use.

In the context of our institution, we think about AI advancing the possibilities of research, for example. Think about inferences and discovery that can now happen at a much more rapid pace than we could ever have imagined. We're doing things here at the University of Kentucky like discovering new drugs and therapies for cancer and developing new devices that can deliver care to patients in rural communities, things that are truly transformational, life-changing. And now we can discover things like that with a few months of testing and scaling rather than perhaps years.

Clay

I remember, I think last year, a team of UK researchers used some sort of AI technology to decode a Pompeian scroll without unrolling it.

Ian

That's right. The amount of information that has to be digested to do something like read scrolls whose text you cannot see with the naked eye is huge, right? But now, with AI, you can scour information, analyze it, and process it at an accelerated pace that we could not manage before.

Clay

I understand there are already existing ways that AI is being implemented at UK. What role does CATS AI fill that wasn't already filled by those efforts?

Ian

AI is happening everywhere, not just here at the university. But, as you can imagine, at a knowledge-driven institution, a research institution, a flagship with 30,000-plus employees and students, ideas are everywhere, and the use of these kinds of tools is everywhere.

When we started designing CATS AI and thinking about this vision of how we are going to support our people through this transformational AI era, we started by taking inventory. We counted over 150 AI projects and uses across campus, then we stopped counting because we realized, okay, it's happening everywhere.

So: how can we do this in a way that ensures responsible adoption, that shares costs and benefits, and that keeps these efforts from being siloed in one college, one unit or department, or with one faculty member?

There was an empirical study back in the spring of this year: EDUCAUSE polled about 750 university leaders and found that, while a large majority have AI activity going on, only one in five has a holistic strategy in place to support, resource, and carefully adopt it. CATS AI provides that foundation.

Another purpose for CATS AI is to balance risk with innovation. A goal is to ensure that across our institution we are advancing responsible AI and AI for good.

Clay

While we're talking about computing and storage responsibility: what responsibilities does UK take on when it's thinking about how to protect student data and student anonymity?

Ian

That's top of mind for us, but it's also not new for us. We've always had creative mechanisms to ensure that we are protecting and safeguarding data, ideas, and our intellectual property.

As an example, one of the main concerns with generative AI tools is that you put an idea in and you don't know where it goes. Large language models typically draw on public information. Small language models operate the same way, except the information they draw from, and where your idea or query goes, can be more private. "Private enclave" is a term in the AI space: you can create a private enclave with a small language model that is safeguarded and private to the institution.

Clay

CATS AI is accepting nominations to five subcommittees [health care, students, research, education, and administration and service]. How much influence do they have over policy decisions?

Ian

Those subcommittees won't be making policy, but they will be the tip of the spear for ideas, proposals, identifying challenges, and resources and solutions that can meet those challenges within those five areas.

We created those because it's tough to have a conversation about AI at the institutional level in only general terms; there are health-care-specific needs, just as there are education- and student-success-specific needs. Proposals and ideas will flow up to a CATS AI operations group that will act on them with our leadership council.

Clay

In Dr. Capilouto's email, two specific technology uses were named within these five categories: for students, a digital assistant, and for health care, some sort of listening software.

Ian

Ambient listening is something we're already piloting over at UK HealthCare. Think about how that saves our doctors and nurses time in the exam room, where they're speaking one-on-one with a patient: now they don't have to turn to their computer, away from the patient, to type something, wasting the patient's time, and then come back. We can do real-time ambient listening notes, and physicians can go straight into analysis.

Another use that was mentioned was a digital assistant. The low-hanging fruit right now for generative AI is creating chatbots and digital assistants that can be personalized, that can curate information for the user based on user behavior.

In education, think about digital assistants that can support our faculty in their teaching and help students with course scheduling and time management.

Clay

A major concern about AI implementation is accountability. If UK is responsible for the implementation of these technologies, who is accountable when a person acts on incorrect guidance?

Ian

That's a really good question. It's something that CATS AI is going to have to focus on. Balancing the risks of AI with the potential is something that everybody, globally, is thinking about.

Certainly, no one knows what all those risks currently are. This is a fast-moving space, right? Governance will be really important in how we identify things and react to them, and that is one of the main purposes for CATS AI.

Clay

Tell me more about the CATS AI subcommittees.

Ian

We intentionally opened that nomination process campus-wide because we want CATS AI to be truly participatory. These subcommittees won't just be administrative leadership; they will include faculty and staff and maybe even students, to ensure we have perspectives from all people on what the use cases are and what being a user means. And also, you know, not everyone is at the same level of inclination to adopt, right?

Clay

So, these subcommittees… Are people with reservations about adopting AI also welcome?

Ian

Yeah, absolutely. We understand that, again, not everyone is at the same level of inclination to adopt. Some people are very excited about AI. Others feel anxious about it. Our goal is not to push everyone to adopt AI tomorrow. Our goal is to bring everyone along in different ways, to show them what it can do and what it means, along with AI fundamentals and AI ethics. We need to respond to very different levels of excitement and anxiety about AI, including in the subcommittees.

Clay

Dr. Capilouto's email last week said that AI concepts will be integrated into courses and career development programs. Will students, faculty, and staff be able to opt out of using AI tools?

Ian

I think we've come to terms with the fact that there's an inevitability to this: the world is going to be an AI world. It already is. What we're trying to do is ensure that everyone who is inevitably going through this transformational period can do so with a higher level of understanding and awareness, and with access to those learning opportunities and tools. When we talk about providing an AI sandbox, for example, which is just a suite of resources and tools, that's access, but that's not necessarily a mandate.

Clay

Studies have shown that some ways students use AI amount to outsourcing their learning and result in lower performance on tests and exams. How can a framework like CATS AI make sure students are equipped to learn rather than outsource their learning?

Ian

That is top of mind for almost everyone within a post-secondary institution. How do we ensure that AI is not just giving answers? Some of the same concerns came up when calculators were introduced, for example.

But, in the end, we learned how to work the calculator into the process of understanding something. AI is going to be no different. Soon we will have education tools whose target audience is students: not just generative AI where you type in your query and get the answer, but an understanding tool that helps you through a process of learning to get to an answer.

Clay

A corporate partner has been announced but not named. What can you tell us right now about that partnership?

Ian

This is going to be pretty game-changing, to be honest. More information about this partnership will come soon.

If CATS AI is the framework, the order and structure, this partnership is the resources. It is going to create the access we really need and, frankly, it's going to make us a leader in this space. We will be one of the first institutions to have access to certain resources and tools that most institutions don't yet have.

Clay

Will academic work be used to train AI models?

Ian

A lot of research today, and much more coming, will be focused on training and building models around particular data sets. Some of the tools we will be building and offering to our students and staff here, like digital assistants, will be trained on certain data. Think about all the public information currently spread across the University of Kentucky's web infrastructure, distilled into FAQs on hundreds of different webpages that you then have to sort through to find. Think about a digital assistant providing any of that information to you in one place with one query. You can't build that without training the model and making sure its responses are accurate.

Clay

What is UK considering when it's making decisions about the level of access AI tools have to students, staff, and faculty?

Ian

It goes back to the purpose of CATS AI. That type of activity is going to happen, so it's important that CATS AI creates an assessment framework: should we build that thing? Should we train that model? What's the ROI? What are the risks? As AI models get better, we see new models being announced and new abilities to secure information. I just saw this week that some of the large corporate tech companies in this model race are creating new private enclave solutions for institutions like universities, which have proprietary data, confidential data, and research data that need to be kept private but can still be used to train models against and analyze.

Clay

Probably the questions I heard the most from people were about the AI-integrated residence hall. What's that?

Ian

I can't say a whole lot about it right now because it's still early-stage vision and discussion. What I can say is: it's pretty exciting, to be honest. Think of it like an immersive environment where students can opt into really neat experiential initiatives: hackathons, on-premises events, agent-builder training sessions, and really cool things.

Clay

When will we see some of these tools being rolled out?

Ian

I think we'll see these things materialize in 2026. First, we'll be announcing our partnership, and the CATS AI subcommittees need to be populated. That cadence needs to start flowing, and then resources will start to come online in 2026.

Clay

Will costs to students increase as a result of implementing AI?

Ian

I can't answer that question.

Clay

Looking forward, what will success for the CATS AI program look like?

Ian

Success will look like an institution that has embraced and carefully adopted AI as a true AI university, one where all students have access to courses, non-credit-bearing training opportunities, and professional development opportunities. I would see employers and our industry partners coming to Kentucky because of CATS AI, because they know that students are graduating with certain skill sets, a level of understanding and awareness, and pre-employment access to tools, so that they're ready for the workforce. I think it would be a research enterprise humming along at a rapid pace, with adoption of AI supporting discoveries that can happen in days instead of years. And it would be the University of Kentucky recognized nationwide, and even globally, as a place that is leading in new and emerging markets and a new and emerging economy for this digital transformation.