The boom in machine learning (ML) is transforming the tools used across industries, forcing companies to adapt to an ever-evolving economy where agility and adaptability are key to survival.
The global ML market was estimated at approximately US$38.11 billion in 2022 and is projected to reach US$771.38 billion by 2032.
The widespread adoption of ML across many fields is due to its “seemingly limitless ability to discover complex patterns in big data and effectively solve a wide variety of problems,” says Sun Jun, Professor of Computer Science at SMU.
But the power of ML is limited by the complexity of the model: as the demands of the task increase, the number of dials to adjust to fine-tune the algorithm explodes.
For example, the language model behind ChatGPT requires calibrating 175 billion weights, while the weather forecasting model Pangu-Weather has 256 million parameters.
To bridge the gap between human understanding and decision-making by sophisticated ML models, we need a simple approach to quantify the difficulty of interpreting these models.
In his paper, “Which Neural Networks Make More Explainable Decisions? An Approach Towards Measuring Explainability,” Professor Sun, who is also co-director of the Intelligent Software Engineering Research Centre (RISE), introduces a functional paradigm that organizations can leverage to choose the right model for their business.
Machine learning: the good and the bad
In this digital age, the vast amounts of data collected from millions of individuals represent a valuable resource that businesses can leverage.
However, processing this massive data set and translating it into an operational strategy requires technical expertise and a significant investment of time.
According to cognitive psychologist George A. Miller, the average number of objects an individual can hold in working memory (short-term memory) is about seven, a hard limit on a human worker’s capacity.
Overcoming this limitation in human capability is the true power of ML models: their ability to process big data, find subtle patterns, and solve difficult tasks allows companies to allocate resources more effectively.
“ML models and techniques are increasingly being used to guide all kinds of decisions, including predictive analytics, pricing strategies, hiring and other business and management-related decisions,” says Professor Sun.
Commercial implementations of ML models are built around neural networks, algorithms that mimic the structure of the human brain.
With large numbers of “neurons” woven into vast interconnected structures, these models can quickly accumulate millions of parameters as more neurons are added.
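As a rough sense of scale, consider a fully connected network: its parameter count grows with the product of adjacent layer widths. The sketch below is purely illustrative; the layer sizes are hypothetical and not drawn from Professor Sun’s paper.

```python
# Illustrative sketch (hypothetical layer sizes): counting the weights and
# biases of a fully connected neural network.
def mlp_param_count(layer_sizes):
    """Each layer of width n fed by a layer of width m adds m*n weights + n biases."""
    return sum(m * n + n for m, n in zip(layer_sizes, layer_sizes[1:]))

# A modest network: 1,000 inputs, three hidden layers of 2,048 neurons, 10 outputs.
print(mlp_param_count([1000, 2048, 2048, 2048, 10]))  # about 10.5 million parameters
```

Even this small architecture already holds roughly 10.5 million parameters, far beyond what any person could inspect by hand.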
Recent developments in fast, self-training algorithms have given companies greater access to state-of-the-art models, enabling these models to be deployed in many end-user applications without a comprehensive understanding of their internal logic.
However, in sensitive and niche applications, decisions made by these “black box” algorithms need to be justified.
For example, the General Data Protection Regulation (GDPR), in the context of Article 22, addresses concerns surrounding automated personal data processing by giving European Union citizens the right to obtain a contextual explanation of decisions made by automated means.
Similarly, when a customer is denied credit, the U.S. Equal Credit Opportunity Act (ECOA) requires creditors to provide an explanation.
Beyond the legal implications, Professor Sun also explains the need for explainability in building trust and assurance between customers and businesses deploying machine learning algorithms.
“Over time, users will have more trust in these technologies and systems if they know that most of the decisions can actually be explained to them in a language they understand.”
Measures of explainability
For an intangible concept like explainability, designing a consistent and universal metric is not easy.
On the surface, the task seems impossible because explainability is subjective to each individual. Professor Sun cuts straight to a practical approach, stating:
“Fundamentally, we aim to answer one question: If you have a choice between multiple neural network models and have reasons to require a certain level of explainability, how do you choose?”
Professor Sun and his team chose to measure the explainability of neural networks in the form of decision trees, another common ML algorithm.
In this model, the computer starts at the base of the tree and asks “yes” or “no” questions as it moves up.
The answers collected allow the computer to trace a path to a specific branch and tell it what action to take.
The more questions that must be asked, the taller the tree needed to make a decision.
Compared to the inherent complexity of neural networks, decision trees are closer to how humans evaluate situations and make choices.
By decomposing the choices made by complex neural networks into decision trees and measuring the height of the trees, we can determine the explainability of ML algorithms.
For example, an algorithm that decides whether to take an umbrella with you when you go out (Is the sky overcast? Did it rain yesterday?) has a smaller decision tree than an algorithm that determines whether you qualify for a bank loan (What is your annual income? What is your credit rating? Do you have any existing loans?).
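To make the idea concrete, here is a minimal sketch of one common way to realise it: train a decision tree to mimic a neural network’s predictions (sometimes called a surrogate or distilled model) and read off the fitted tree’s depth as a rough proxy for explainability. This illustrates the general approach, not the specific method of Professor Sun’s paper; the data, model sizes, and scikit-learn usage are all assumptions.

```python
# Minimal sketch: approximate a "black box" neural network with a decision
# tree and use the tree's depth as a rough proxy for explainability.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 10))                # hypothetical feature data
y = (X[:, 0] + X[:, 1] ** 2 > 1).astype(int)   # hypothetical labels

# Train a small neural network to stand in for the complex model.
black_box = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                          random_state=0).fit(X, y)

# Fit a surrogate decision tree to the network's own predictions.
surrogate = DecisionTreeClassifier(random_state=0)
surrogate.fit(X, black_box.predict(X))

# A shallower surrogate suggests decisions that are easier to walk through.
print("Surrogate tree depth:", surrogate.get_depth())
```

In practice one would also cap the tree’s depth and check how faithfully the surrogate reproduces the network’s answers, trading fidelity against how short an explanation the tree can give.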
This new paradigm for quantifying explainability bridges the gap between human understanding and machine decision-making when enterprises put cutting-edge ML models into operation.
“Our approach helps business owners choose the right neural network model,” highlights Professor Sun.
The research team plans to use their findings to further explore the practical applications of ML models, including reliability, safety, security, and ethics.
Professor Sun wants to develop practical techniques and tools that leverage machine learning to make the world a better place.
Professor Sun Jun teaches CS612 AI Safety: Assessment and Mitigation in SMU’s Master of Information Technology in Business (MITB) programme. This course systematically covers the practical aspects of deploying ML models, focusing on safety and security concerns along with risk assessment and mitigation methodologies.
Applications are now open for January 2025 entry to SMU’s Master of Information Technology in Business (MITB). Learn more about the programme here, or contact us for more information.