Monday, August 24, 2020

Responsible AI news from Build

Microsoft Build took place more than a month ago, but only now am I writing about it. I’m facing the danger of being nicknamed after a famous Brazilian Formula 1 driver (an internal Brazilian joke).

Build set a historical mark among community events. It used many new techniques to make an online event feel closer to an in-person one – and the result was great. Three days, almost non-stop, full of technical knowledge.

It’s impossible to write a single post about Build without missing a lot of information. Even I, crazy enough to follow the event for the entire three days, will miss details from sessions I didn’t attend.

You can watch the sessions on demand at https://mybuild.microsoft.com/ but I hope that, with the help of the many posts I will write about Build, you will be able to make good choices about which sessions to watch.

Responsible AI

The idea of an AI which makes the decisions for you, so that you are not responsible for them, is very tempting. However, this can lead to the worst Ayn Rand scenarios, where no one is responsible for anything, perhaps even evolving into a Skynet scenario.

We need to be responsible. If the AI denies a loan, insurance coverage, or a payment, we need to be responsible for the result.

Nevertheless, we also need to remember that AI depends on data. If we have only a small set of data about some groups in our society because our society isn’t fair, this can lead badly trained AI models to exhibit “racist” behaviour.

In order to be responsible, we need tools which are able to “open” AI models and explain to us why the model made one decision instead of another.

Let’s check some tools and sources of information that help us achieve this.

Fairlearn

https://fairlearn.github.io/

This website provides tools to measure the fairness of your AI models using metrics computed over your models’ predictions. You can read a step-by-step article about how to use the Fairlearn APIs with Azure Machine Learning here: https://opendatascience.com/how-to-assess-ai-systems-fairness-and-mitigate-any-observed-unfairness-issues/
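To make this concrete, here is a minimal sketch of how Fairlearn can compare a metric across groups. The data and the “sex” grouping column are made up for illustration, and the MetricFrame API shown assumes a recent Fairlearn release (0.6 or later):

# A minimal sketch of measuring fairness with Fairlearn.
# Assumes fairlearn >= 0.6; the data and column values are hypothetical.
from fairlearn.metrics import MetricFrame, selection_rate
from sklearn.metrics import accuracy_score

# y_true: real outcomes, y_pred: model predictions,
# sex: the sensitive feature we want to group the metrics by
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 1, 0, 0]
sex = ["F", "F", "F", "F", "M", "M", "M", "M"]

# MetricFrame computes each metric overall and per group,
# making differences in model behaviour between groups visible
mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=sex,
)

print(mf.overall)   # metrics over the whole dataset
print(mf.by_group)  # the same metrics broken down by sex

The by_group output makes any gap between the groups visible at a glance, which is the first step before trying one of Fairlearn’s mitigation algorithms.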

There is also a video on Channel 9 about how to do the same thing: https://channel9.msdn.com/Shows/AI-Show/How-to-Test-Models-for-Fairness-with-Fairlearn-Deep-Dive?term=build2020%20responsibleml&lang-en=true

There are also articles on the Microsoft Research blog about responsible AI: https://www.microsoft.com/en-us/research/blog/machine-learning-for-fair-decisions/

To go deeper into the concept, there is this study by Microsoft Research, the University of Maryland, and Carnegie Mellon University: https://arxiv.org/pdf/1812.05239.pdf

This other study explains the math behind an approach for fair classification: http://proceedings.mlr.press/v80/agarwal18a/agarwal18a.pdf

You can find more in-depth details and many links in this document: https://sites.google.com/view/fairness-tutorial

Webinar about developing AI responsibly: https://info.microsoft.com/ww-ondemand-develop-ai-responsibly.html?lcid=en-us

InterpretML

InterpretML is a framework used to explain the decisions made by AI models, allowing you to check whether a decision is fair.

The framework is available on GitHub: https://github.com/interpretml/interpret
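As a quick illustration, here is a minimal sketch of training one of InterpretML’s “glassbox” models and asking it to explain itself. The dataset is scikit-learn’s built-in breast cancer data, used here only as an example:

# A minimal sketch of explaining a model with InterpretML.
# Assumes the 'interpret' package is installed; the dataset is just an example.
from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# The Explainable Boosting Machine is a "glassbox" model:
# competitive accuracy, but every prediction can be explained
ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)

# Global explanation: which features matter overall
show(ebm.explain_global())

# Local explanation: why the model decided each individual case
show(ebm.explain_local(X_test[:5], y_test[:5]))

The show() call opens an interactive dashboard; explain_global() tells you which features matter overall, while explain_local() explains individual predictions, which is exactly the kind of “opening” of the model discussed above.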

Here you can watch a video about how to use InterpretML: https://channel9.msdn.com/Shows/AI-Show/How-to-Explain-Models-with-IntepretML-Deep-Dive?term=build2020%20responsibleml&lang-en=true

Here you can watch a video about a use case of InterpretML at Scandinavian Airlines: https://channel9.msdn.com/Shows/AI-Show/InterpretML-in-Practice-at-Scandinavian-Airlines?term=build2020%20responsibleml&lang-en=true

DiCE

DiCE (Diverse Counterfactual Explanations) is a Microsoft Research project used to generate counterfactual explanations for ML model decisions.

For example, let’s say you had a loan rejected. DiCE will provide you with alternative scenarios which would have resulted in the loan being approved, such as “You would have received the loan if your income had been higher by $10k”.
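Here is a minimal sketch of what that looks like in code, based on the examples in the DiCE repository. The tiny loan dataset, the feature names, and the classifier are all made up for illustration, and the code assumes the dice-ml package with its scikit-learn backend:

# A minimal sketch of generating counterfactuals with DiCE (dice-ml).
# The loan dataset below is synthetic and the feature names are hypothetical.
import pandas as pd
import dice_ml
from sklearn.ensemble import RandomForestClassifier

# A tiny, made-up loan dataset: income (in $k), age, and the decision
df = pd.DataFrame({
    "income": [30, 45, 60, 80, 25, 90, 55, 70],
    "age": [25, 40, 35, 50, 22, 45, 30, 38],
    "loan_approved": [0, 0, 1, 1, 0, 1, 1, 1],
})

# Train any scikit-learn classifier on the data
model = RandomForestClassifier(random_state=0)
model.fit(df[["income", "age"]], df["loan_approved"])

# Tell DiCE about the data and the model
d = dice_ml.Data(dataframe=df, continuous_features=["income", "age"],
                 outcome_name="loan_approved")
m = dice_ml.Model(model=model, backend="sklearn")
exp = dice_ml.Dice(d, m, method="random")

# A rejected application: which small changes would flip the decision?
rejected = pd.DataFrame({"income": [28], "age": [24]})
cfs = exp.generate_counterfactuals(rejected, total_CFs=3,
                                   desired_class="opposite")
cfs.visualize_as_dataframe()

The last call prints the rejected application next to the generated counterfactuals, so you can see which feature changes would have flipped the decision.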

You can read more about DiCE and get the framework at this link: https://www.microsoft.com/en-us/research/project/dice/

Responsible AI resources from Microsoft

This link is a central place for responsible AI with Microsoft technology, with many videos, articles, frameworks, and guidelines about how to ensure responsible AI while building your models:

https://www.microsoft.com/en-us/ai/responsible-ai-resources?activetab=pivot1%3aprimaryr4

Conclusion

Considering all these tools to ensure the quality of AI results, movie characters such as HAL or Skynet are even further from reality than before.

