Future in Focus: Dr. Mahesh Nagarajan on Bias, Hallucinations, and Building Better AI
As AI becomes ubiquitous, business leaders face a choice: ignore it as a passing fad or examine it closely to seize opportunities while staying alert to its gaps.
Dr. Mahesh Nagarajan wants to prepare leaders in the second category. He is the Senior Associate Dean at the UBC Sauder School of Business, and his areas of applied research include data analytics and mathematical modelling of business systems. Drawing on his love for literature and film, Dr. Nagarajan approaches AI as both art and science—mapping its central narrative (how it works) while exposing the subplots that may mislead (how it falls short).
He recently presented at the third session of the Future in Focus series - an exclusive, invitation-only speaker series delivered in partnership with the Montalbano Centre for Responsible Leadership Development. The talk—Harnessing the AI Boom: Understanding and Utilizing Large Language Models—was both a crash course on AI design and a reminder for leaders to understand its inherent biases.
The hierarchy of AI—and how LLMs actually work
The first step is understanding AI’s building blocks beyond buzzwords: Artificial Intelligence (AI) → Machine Learning (ML) → Deep Learning → Large Language Models (LLMs).
“AI is the big mothership,” explained Dr. Nagarajan. “Think of intelligent machines like robots. Within AI is Machine Learning, within that is Deep Learning… and within that is a Large Language Model, like ChatGPT.” He broke them down further.
- Machine Learning: According to Dr. Nagarajan, ML's goal is "to discover patterns in data and then to say something smart about the relationship between inputs and outcomes."
- Deep Learning: This is a code word for neural networks: mathematical models that loosely imitate the brain, which allows them to make sense of unstructured data such as images, text, and audio.
- Large Language Models: These are giant predictive engines that generate text by predicting the next word. They are trained on massive internet datasets and powered by the 'transformer' architecture, the 'T' in GPT.
Ever wondered why it's called 'GPT'? Dr. Nagarajan explained: "There are three parts: It's 'generative', meaning it predicts the next word. It's 'pre-trained', meaning the engine that's inside has been trained on the World Wide Web. The 'transformer' is the neural network's sophisticated core, which makes predictions feel natural, coherent, and fast."
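To make the 'predict the next word' idea concrete, here is a minimal Python sketch (an illustration only, built on a toy corpus invented for this article): it counts which word tends to follow which, then generates text by always picking the most common continuation. GPT itself does this with a neural transformer trained on vastly more data, but the generate-one-word-at-a-time loop is the same in spirit.

```python
from collections import Counter, defaultdict

# Toy illustration of "predict the next word": a bigram frequency model.
# Real LLMs use a neural transformer trained on vast text corpora, but the
# core loop is the same idea: pick a likely next word, append it, repeat.

corpus = (
    "the model predicts the next word and the next word after that "
    "the model is trained on text so the model learns word patterns"
).split()

# Count how often each word follows each other word.
next_word_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1

def generate(start, length=8):
    """Greedily extend a sentence by always choosing the most common next word."""
    words = [start]
    for _ in range(length):
        candidates = next_word_counts.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

# With such a tiny corpus the output quickly loops, e.g.
# "the model predicts the model predicts ..."
print(generate("the"))
```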
If AI is so smart, can it go wrong?
As it turns out - yes.
Dr. Nagarajan demonstrated this with the example of predicting heart disease in a population. In this case, a model’s usefulness depends on whether it works broadly across different populations (generalization), or only for the specific dataset it was trained on.
The biggest caveat in applying AI models is how well a model fits its training data. The two common pitfalls are overfitting and underfitting.
If the model learns the training dataset too exactly, it can capture random quirks or noise in that specific group of patients instead of the true patterns that predict heart disease. That is 'overfitting,' and its cost shows up later: the model performs poorly on new patients because its predictions do not generalize.
Borrowing from books and films, Dr. Nagarajan said: "Overfitting follows the story a little too closely. When it does that, it misses out on the nuances and subplots of the story. You want to think about the plot rather than get everything right.”
The opposite problem also exists. If the model is too simple, it fails to capture the patterns in the data that accurately predict heart disease. That's called 'underfitting': high-risk patients can go unnoticed because the model is too rudimentary to account for markers like cholesterol levels or age.
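To see the two failure modes side by side, here is a small sketch using synthetic data (invented numbers, not the heart-disease model from the talk): it fits polynomials of increasing flexibility to noisy points and compares the error on the data the model was trained on against the error on fresh data it has never seen.

```python
import numpy as np
from numpy.polynomial import Polynomial

# Illustrative sketch of underfitting vs. overfitting on synthetic data.
rng = np.random.default_rng(0)

def make_data(n=30):
    """Noisy samples of a smooth underlying signal."""
    x = np.linspace(0, 1, n)
    y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=n)
    return x, y

x_train, y_train = make_data()   # the data the model is trained on
x_test, y_test = make_data()     # fresh data the model has never seen

def fit_and_score(degree):
    """Fit a polynomial of the given flexibility; return train and test error."""
    model = Polynomial.fit(x_train, y_train, degree)
    train_err = np.mean((model(x_train) - y_train) ** 2)
    test_err = np.mean((model(x_test) - y_test) ** 2)
    return train_err, test_err

for degree in (1, 4, 15):
    train_err, test_err = fit_and_score(degree)
    print(f"degree {degree:2d}: train error {train_err:.3f} | test error {test_err:.3f}")

# A typical run shows:
#   degree 1  -> underfits: too simple, high error on both sets
#   degree 4  -> about right: low error on both sets
#   degree 15 -> overfits: very low train error, noticeably worse test error
```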
Bias: The hidden cost for organizations
AI models can be riddled with errors if they are trained on false, incomplete, or homogeneous data, and that data often mirrors the biases of the humans who assembled it.
"AI bias, machine learning bias, or algorithmic bias … refers to the occurrence of biased results due to human biases that skew the training data, or the algorithm itself,” said Dr. Nagarajan.
Continuing with the healthcare example, Dr. Nagarajan said that using AI to predict heart disease becomes even more error-prone if data on gender and ethnicity are left out when the model is trained. "One of the things I notice in healthcare work is that it was never actually trained on women. So it's obviously going to have terrible output," said Dr. Nagarajan.
“Bias can lead to wrong outcomes. Beyond the fact that it's simply unfair, it can be massively costly.”
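As a concrete, purely hypothetical illustration of that cost, the sketch below trains a simple threshold rule on one synthetic patient group and then applies it to a second group whose marker profile the rule never saw, roughly the 'never trained on women' scenario Dr. Nagarajan described.

```python
import numpy as np

# Purely synthetic illustration of training-data bias (invented numbers,
# not clinical data). One risk marker relates to disease differently in
# two groups, and the model is trained on only one of them.
rng = np.random.default_rng(1)

def make_group(n, healthy_mean, sick_mean):
    """Simulate a marker value and a disease label for n patients."""
    labels = rng.integers(0, 2, size=n)                     # 0 = healthy, 1 = disease
    marker = np.where(labels == 1,
                      rng.normal(sick_mean, 1.0, size=n),
                      rng.normal(healthy_mean, 1.0, size=n))
    return marker, labels

# Group A: disease shows up as a high marker value.
# Group B: the same disease shows up at a lower marker level.
marker_a, labels_a = make_group(1000, healthy_mean=4.0, sick_mean=7.0)
marker_b, labels_b = make_group(1000, healthy_mean=2.0, sick_mean=4.5)

# "Train" on group A only: flag anyone above the midpoint of its two class means.
threshold = (marker_a[labels_a == 0].mean() + marker_a[labels_a == 1].mean()) / 2

def detection_rate(marker, labels):
    """Share of true disease cases the rule actually flags as high-risk."""
    flagged = marker > threshold
    return flagged[labels == 1].mean()

print(f"disease cases caught in group A (in the training data): {detection_rate(marker_a, labels_a):.0%}")
print(f"disease cases caught in group B (left out of training):  {detection_rate(marker_b, labels_b):.0%}")
# A typical run catches roughly 90% of cases in group A but only a small
# fraction in group B: the model looks accurate on the data it saw while
# quietly failing the group that was missing from training.
```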
Hallucinations: When AI lies … convincingly!
Did dinosaurs build a civilization? When Dr. Nagarajan asked ChatGPT, it responded with an emphatic 'yes.'
"AI hallucination is also called confabulation. In plain English, it just lies. Estimates are different, but … over a third of complicated reports that any one of these LLMs write is garbage.”
The tricky part is that some hallucinations sound smart and convincing. Dr. Nagarajan urged leaders to insist on verification and not accept AI output at face value. “You should request sources or evidence. If it says it got the data from the journal Science, ask: 'When was it in Science? Show me the paper.' And then it will say: 'Okay, I was wrong,'” said Dr. Nagarajan.
“Keep pushing. Also ask for explanations or reasoning. You could say: 'How can a dinosaur design tools? It doesn't have fingers.' And it will stop very quickly.”
AI & Leadership: 'We can make it better'
Dr. Nagarajan urged the leaders in the room to take these lessons on AI back to their workplaces. His message was clear: focus on how AI can enhance productivity and decision-making, while preparing teams for change. He also discouraged the doom-and-gloom thinking usually associated with AI.
“People say there can be dire job losses. That's true. I think it's probably already happening. But until AI figures out these hallucinations, the higher-end jobs are not going to get replaced.”
“Automation in clinical work, healthcare, insurance - you're seeing that really fast. That’s actually good news for us.”
By the end, attendees were ready to guide AI toward improving both business and society.
“In critical areas like healthcare, AI can actually help a physician. You don’t want it to replace them, you want it to improve their productivity. While there is distrust, there's also promise. We often say, ‘algorithms are bad.’ What you want to ask is: 'How can we improve the system?'"
"Errors are going to continue to happen. But we can make it better.”
Dr. Mahesh Nagarajan spoke on the topic "Bias, Hallucinations, and Building Better AI" as part of the Future in Focus series, hosted by Professional Growth and supported by the Montalbano Centre for Responsible Leadership Development.