The University of Chicago’s CIO On Why Universities Make Ideal AI Test Beds

Flight Path

Flight Path is a Wing series for AI startup founders that draws on candid conversations with C-suite executives at leading corporations and unpacks what it takes to become a trusted AI partner across a variety of industries.

When Kevin Boyd talks about generative AI tools, he’s thinking in terms of a wide, complex, and thriving garden of possible applications. As CIO of The University of Chicago, Boyd recently launched a secure, campus-wide AI platform designed not around a single use case, but around open experimentation. In this Q&A, he shares why higher ed is a uniquely fertile ground for AI innovation, how his team evaluates early-stage tools, and what startup founders can learn from an environment that treats experimentation as strategy—not risk.

Q: You recently rolled out a generative AI tool across the entire university. What makes a university environment such a good testbed for AI development?

Kevin Boyd: We have lots of different pilots and lots of different groups that are interested in trying things, whether that's building chatbots, agents, or other products. And we're really excited about where that's going over the next year.

AI is really important for us right now. We're spending a lot of time talking about it and working on it. The platform uses OpenAI models but runs on Microsoft Azure, so we can control the security, the privacy, and the accessibility. We have literally hundreds of use cases that we're experimenting with and more that we want to experiment with across research, teaching, and administrative computing.

Q: What are some of those AI use cases you’re most excited about across research and teaching?

A: AI has proven to be very useful in radiology. The ability to train a model on radiology images by telling it, “This is what cancer looks like,” and then use it to actually find cancer — that’s something that we're excited about doing more of in the future.

There are similar opportunities in ancient languages. Researchers studying hieroglyphics, cuneiform, or ancient languages like Akkadian, or how language evolves over time, can train a model on texts found on walls, tablets, or pillars and use it to help with translation. There are use cases like that in almost every area of the university.

Q: How does the university evaluate AI tools, especially from startups? What advice would you give to founders trying to get on your radar?

A: We have a fairly standard process for evaluating any tool, whether it's a large enterprise tool or something coming from a startup. What's the business problem that we're trying to solve? What's causing us pain? What's making things more difficult or inefficient? And then what are the business parameters? In order to solve that business problem, we need to know how much it would make sense to spend. Is it a little problem or a big problem? How many resources is it impacting?

Then typically we're going to look at multiple options that would allow us to solve that problem. And that might be multiple small startups, or it might be multiple enterprise solutions. So what we're looking for is the maturity of the solution. What's the existing customer base? Who are the players in the company? Do they have a track record? 

And then we're obviously going to try it. We're going to pilot it. We're going to kick the tires and see whether we think it’s going to meet that business problem, and whether it fits within the business parameters that we've defined.

Too often what we end up with are solutions in search of a problem: “I have this great idea, and I'm going to go build it, now let me go find someone who needs it.” And we really try to encourage people to flip that. Go understand what business problems people have and then build a solution, because then you have a market that's ready for you.

Q: You’ve emphasized that the platform was designed to be secure and accessible. What’s your approach to data privacy and safety in this AI rollout?

A: Data security and data privacy are huge for us, and we think about that in multiple different dimensions. Some of our research and some of the work that we do, as you might imagine, is covered by regulations like HIPAA or regulations that are specific to higher ed, like FERPA. There's a fair amount of sensitive data that we work with in the HR space and other areas. So there has been plenty of concern about that data ending up in a public model or being used to train the model.

So it's been very important for us to create an environment where people working with sensitive data can interact with the AI, but do it safely and securely within that researcher’s walled garden. Being able to build that kind of environment has been essential.

Q: Ethics in AI is a huge concern in higher ed. How are you navigating that space?

A: We work with governance boards made up of faculty and, in some cases, students. The ethical questions come up in multiple different areas.

The first one that we dealt with is really, “When is it OK to use AI in academic work, and when is it not?” When you're rolling this out to 18,000 students, the first question from the faculty is, “Well, are they going to use it to write their papers? Because that would be a problem.” And our answer generally has been, “We have an academic code of conduct, and that largely covers the fact that you do your own work.”

Where I think it gets a lot more gray is that some of these generative AI tools can be incredibly useful in certain areas, like for individuals for whom English is not their first language: helping them understand an assignment, helping them work through an assignment. But sometimes it gets very gray around when it’s helping and when it’s doing the work. Those are the things that I think we're going to continue to wrestle with.

In the HR and hiring space, AI could be really useful in going through and looking at resumes: show me all the resumes that meet certain criteria. Is that OK, or is it not? Another area that's been very controversial in higher ed is the use of AI in admissions. You get thousands of students applying, so is it OK to use AI to filter them? Those are all things that universities are wrestling with right now.

Q: Finally, what’s one problem you wish AI could solve right now?

A: I would love it if we could use AI to simplify the HR open enrollment process. This is something that we are going to work on. But if you think about the process, you have multiple different benefits plans, and the cost to you may be different depending on your salary, your partnership status, and your number of dependents.

I would love it if we could build an AI tool that would accurately and reliably help me say, “All right, I am someone who has a partner and two kids, and I make this amount of money. Help me compare these plans. Which plan is going to cost me more each month? Which will cost me more if I have X number of medical conditions?” I think that would be really amazing for our faculty and staff, and that's something that we'd like to do.
