(A modified copy of this post first appeared on Our Community’s Innovation Lab page)

At Our Community (my current employer), we do a lot of thinking about the present status and future directions of grantmaking. One important focus is the potential use of predictive algorithms. How can artificial intelligence benefit grantmakers and grantseekers? What data can responsibly be collected to build these algorithms? How do we make algorithmic decisions and inner workings transparent to the user? Should we be using predictive algorithms in the first place? How do we ensure an algorithm is fair?

We explore the last question, in particular, in our latest white paper: The bias trade-off for grantmaking algorithms. Using a practical example of what a grantmaking algorithm might one day look like, we examine different types of bias. We demonstrate that some degree of unfairness is unavoidable, which highlights how important it is to make the algorithm transparent to those affected and to mitigate biases where possible.
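
To give a rough feel for the trade-off, here is a toy sketch (not code from the white paper; the groups, scores, and thresholds are all invented for illustration). It simulates two applicant groups with different underlying success rates and shows that a shared score threshold equalises false positive rates but not selection rates, while a group-specific threshold does the reverse:

```python
# Toy illustration: with unequal base rates, a scoring algorithm cannot
# equalise selection rates (demographic parity) and false positive rates
# (error-rate balance) at the same time. All data below is synthetic.
import numpy as np

rng = np.random.default_rng(seed=42)

def simulate_group(n, base_rate):
    """Synthetic applicants: a hidden success label and a noisy score."""
    success = rng.random(n) < base_rate            # would the grant succeed?
    score = success * 0.3 + rng.random(n) * 0.7    # imperfect proxy in [0, 1]
    return success, score

def report(name, success, score, threshold):
    funded = score >= threshold
    selection_rate = funded.mean()                 # share of group funded
    fpr = funded[~success].mean()                  # funded, but would fail
    print(f"{name} (threshold {threshold:.2f}): "
          f"selection rate = {selection_rate:.2f}, FPR = {fpr:.2f}")

# Group A has a higher underlying success rate than group B.
success_a, score_a = simulate_group(100_000, base_rate=0.6)
success_b, score_b = simulate_group(100_000, base_rate=0.3)

# One shared threshold: false positive rates match, selection rates don't.
report("Group A", success_a, score_a, threshold=0.50)
report("Group B", success_b, score_b, threshold=0.50)

# Lowering group B's threshold (hand-tuned here) equalises the selection
# rates instead -- but now the false positive rates diverge.
report("Group B", success_b, score_b, threshold=0.41)
```

Whichever threshold is chosen, one fairness criterion is violated; the white paper discusses how to weigh these biases and be transparent about the choice.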

Other considerations

The Berkeley Artificial Intelligence Research lab published a study that looks at a related issue: Delayed Impact of Fair Machine Learning (May 17, 2018).