By definition, artificial intelligence (AI) has the capability to introduce bias, a false sense of reality, into the enterprise. A common belief today is that data-driven decision making must lead to the right conclusions, and that the more data points we gather, the better founded in reality our decisions are. This is true only as long as we carefully create, secure, and manage our AI models so that they remain directly tied to reality. Here are my six rules to prevent AI bias in a nutshell. For examples and more detail, please read my article on this topic at TechTarget's SearchSoftwareQuality site.

1. AI Model Transparency: Clearly document what goes into the AI model, how its decision process works, and what the model's limitations are.

2. Validate the Training Data: Always talk to subject matter experts to fully understand the business background of the input variables. This will also help tune and test the model in the end.

3. Carefully Evaluate Commercial Data Sets: While this rule applies to all data sets, it is even more important for commercially purchased ones. Always keep in mind any potential bias that could have been introduced through "cutting corners" or through the commercial interests of the vendor.

4. Dictionaries: Dictionaries are the "connector" between the real world and your AI model. If dictionaries are incomplete, biased, or inaccurate, your model will fail to recognize the relevant input variables and will arrive at invalid conclusions.

5. Transfer Learning: When using an existing AI model to solve a related but different problem, it is crucial to carefully test your assumptions about the original model's ability to "grasp" the new task.

6. Feedback Loop: Modern reinforcement learning requires well-designed feedback loops to continuously tune the AI model based on the results of its output. Identifying these results is often tricky, as the environment can contain an infinite number of confounding variables.
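To make rule 2 concrete, here is a minimal sketch (not from the article; the function name, skewed labels, and tolerance threshold are all hypothetical) of one of the simplest training-data checks: flagging classes whose share of the data deviates sharply from an even split, which is often the first hint of sampling bias.

```python
from collections import Counter

def audit_label_balance(labels, tolerance=0.2):
    """Flag classes whose share of the data deviates from a uniform split
    by more than `tolerance` (absolute). A crude first check for sampling
    bias; a real audit would also compare against known population rates."""
    counts = Counter(labels)
    n = len(labels)
    expected = 1 / len(counts)  # share each class would have if balanced
    report = {}
    for cls, count in counts.items():
        share = count / n
        report[cls] = (share, abs(share - expected) > tolerance)
    return report

# Hypothetical loan-decision training set, heavily skewed toward "approve"
labels = ["approve"] * 90 + ["deny"] * 10
report = audit_label_balance(labels)
# Both classes are flagged: "approve" is over-represented, "deny" under-represented
```

A check like this is deliberately dumb on purpose: it gives the subject matter experts from rule 2 a concrete number to react to ("is 90% approvals realistic for this business?") rather than a vague assurance that the data "looks fine."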
Read my piece at TechTarget on the same topic: Prevent AI bias in predictive applications before it starts
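Rule 4's warning about incomplete dictionaries can also be checked mechanically. The sketch below (hypothetical names and data; not from the article) measures what fraction of incoming tokens the model's dictionary actually recognizes, since everything outside the dictionary is signal the model silently drops.

```python
def dictionary_coverage(tokens, dictionary):
    """Fraction of input tokens present in the model's dictionary.
    Low coverage means the model cannot 'see' much of the real-world
    input and its conclusions rest on a distorted picture."""
    if not tokens:
        return 0.0
    known = sum(1 for t in tokens if t in dictionary)
    return known / len(tokens)

# Hypothetical support-ticket vocabulary vs. a stale dictionary
dictionary = {"outage", "latency", "refund"}
tokens = ["outage", "latency", "throttling", "refund", "jitter", "jitter"]
coverage = dictionary_coverage(tokens, dictionary)  # 3 of 6 tokens known
```

Tracking this number over time is the useful part: coverage that decays as real-world vocabulary drifts is exactly the kind of slowly accumulating bias the rule warns about.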
4 Comments
Jamba123
9/7/2018 11:23:23 am
This makes sense, but then is it feasible to safely leverage AI within the devops pipeline or am I opening up my teams for these types of bias? Is there software out there that checks for AI bias and shows me a report?
Reply