Thursday, 10 November 2022

Driving Machine Learning Solutions to Success Through Model Interpretability


Despite the improvements the field of data science (DS) has made in the last decade, Gartner has estimated that almost 85 percent of all data science projects fail, and only 4 percent are considered “very successful.” Among the major drivers of failure are poor data quality, lack of technical skill or business acumen, lack of deployment infrastructure and lack of adoption.

The last of these, model adoption by users, can “make or break” the entire project, yet it is often overlooked in project planning under the assumption that adoption will follow as long as the model helps the business. Unfortunately, the reality on the ground is not that simple: the key reasons for low adoption of data science models are a lack of trust in, and understanding of, the model output.

Many machine learning models operate as a “black box”: they take a series of inputs and produce outputs, whether classifications or regression estimates, but offer no insight into which input factors drove those outputs. Nor do they provide any rationale for how an undesired output could be changed to a desired one for a similar case in the future by adjusting the inputs.
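For illustration, the sketch below shows the kind of question a black box leaves unanswered and one simple way to probe it: given a case that received an undesired prediction, sweep a single input to see whether any value would flip the outcome. The dataset, model and chosen feature are assumptions for demonstration only, not part of the Dell use case.

```python
# Minimal "what would need to change?" probe for a black-box classifier.
# Illustrative only: public scikit-learn dataset, arbitrary model and feature.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Train a simple "black-box" classifier.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Take the first case the model assigns the undesired class (label 0 here).
case = X.iloc[[int(np.argmax(model.predict(X) == 0))]].copy()

# Probe one input: how far would this feature have to move to flip the prediction?
feature = "worst radius"
for new_value in np.linspace(case[feature].iloc[0], X[feature].min(), num=50):
    candidate = case.copy()
    candidate[feature] = new_value
    if model.predict(candidate)[0] == 1:
        print(f"Prediction flips when {feature} is reduced to about {new_value:.2f}")
        break
else:
    print(f"Changing {feature} alone does not flip this prediction.")
```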

Explanations of which input variables affected the output, and in what manner, are critical for efforts to influence the key underlying metrics being tracked for that product or process. The success of a data science model largely depends on how well it is adopted and used by these consumers of its outputs.
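As a concrete illustration, a tool such as the open-source SHAP library can attribute a single prediction to the input features that pushed it above or below the model’s baseline. The sketch below is a minimal example on a public scikit-learn dataset under those assumptions, not the model or data from the Dell use case.

```python
# Minimal per-prediction feature attribution with SHAP (illustrative only).
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train a "black-box" regressor on a public dataset.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Explain one prediction: SHAP values plus the baseline sum to the prediction,
# so large absolute values identify the main drivers of this specific output.
explainer = shap.TreeExplainer(model)
row = X.iloc[[0]]
contributions = explainer.shap_values(row)[0]  # one value per input feature

baseline = float(np.ravel(explainer.expected_value)[0])
print(f"Prediction: {model.predict(row)[0]:.1f} (baseline {baseline:.1f})")
for feature, value in sorted(
    zip(X.columns, contributions), key=lambda kv: abs(kv[1]), reverse=True
)[:5]:
    print(f"  {feature}: {value:+.1f}")
```

Surfacing the top few drivers alongside each prediction in this way gives end users a concrete answer to “why did the model say that?” rather than leaving them to guess.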

Frequently, adoption fails to gain traction because end users do not understand why the model generated a given prediction. In most cases, the responsibility for identifying the drivers of a prediction falls on product owners or business analysts, who use their experience and tribal knowledge to make assumptions about the reasons behind it. This relies on subjectivity and human bias, and may or may not align with the true underlying data patterns the model uses to make its prediction. The problem is particularly acute when the model’s predictions conflict with end users’ tribal knowledge or gut instincts.

Likewise, user trust suffers when the model produces an incorrect output. If end users can see why the model made a particular decision, the ensuing erosion of trust can be mitigated, confidence restored, and feedback elicited for improving the model. Without that restoration of trust, users may gradually fall back to the old way of doing things, and the DS project fails without clear feedback to the developers about why the model was not adopted.

Adding interpretability and explanations for predictions can increase user confidence in a data science solution and drive its adoption by end users. A key learning from our work in increasing and maintaining data science adoption is that explainability and interpretability are significant factors in driving the success of data science solutions.

Even as machine learning solutions are touted as the next best thing for making better, quicker decisions, the human component of these systems is still what ultimately determines their success or failure. As advances in artificial intelligence arrive ever faster, the solutions that pair cutting-edge algorithms with this human component will rise to the top, while those that ignore it do so at their peril and will be left behind.

Not sure where to start? A successful use case detailing how explainable AI was used in a real-world ML product at Dell can be found in a whitepaper here.

Source: dell.com
