October 22, 2018
topic: Innovation
tags: #Matthias Spielkamp, #AlgorithmWatch, #human rights
located: Germany
by: Frank Odenthal
FairPlanet spoke with Matthias Spielkamp, one of the founders of AlgorithmWatch.
FairPlanet: How did you come up with the idea to start AlgorithmWatch? Was there a triggering moment, an event?
I had already spent four years trying to raise money for workshops exploring the role journalists should play in algorithmic accountability, that is, the question of whether and how algorithms and automated systems can be held to account. That sounds negative at first, but ultimately it's about democratic control. Unfortunately, I did not succeed; all of the donors I approached were sceptical and could not see what it was all about.
Then, together with two colleagues, Lorenz Matzat and Katharina Zweig, I applied for funding from the Volkswagen Foundation, including a project on predictive policing systems. That application was rejected as well. Still, the three of us were convinced that we absolutely had to do something about this issue.
At some point I got an email from Lorenz Matzat telling me that he had reserved the Twitter name and the domain 'AlgorithmWatch'. That was the moment when I said to myself: okay, now we know what we have to do. We sat down and wrote and published a mission statement and a manifesto. We wanted to turn the tables: to show that there is a social need to talk about these issues and to know more, and then, hopefully, the money would follow. The plan worked out; we got our first funding a year later.
What are the goals of AlgorithmWatch?
There are more and more systems of automated decision-making or decision support in our society, and they are becoming more and more complex. A good example is credit scoring: a lot of data is collected about citizens, scores are calculated in a procedure that is more or less transparent, and on the basis of those scores very important decisions are made. For example: Do I get a loan to build a house? Can I buy a car? Do I get a mobile phone contract or internet access?
These are decisions that matter for an individual's participation in society, and they are made on the basis of these automated systems. Not always fully automatically, but with great influence on the final decision. And as such automated decisions become more frequent, we decided we have to pay attention to them. It is by no means a completely new phenomenon; automation has been around for many hundreds of years, and what counts as automation is ultimately a matter of interpretation. One could say that an aqueduct in ancient Rome was already a form of automation.
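To make the credit-scoring example above concrete, here is a purely hypothetical sketch of how such a system might turn collected data into a score, and the score into a decision. The features, weights and threshold are all invented for illustration; real scoring models, such as Schufa's, are not public, which is exactly the transparency problem discussed here.

```python
# Purely illustrative credit-scoring sketch. Features, weights and the
# approval threshold are invented; no real scoring model is depicted.

WEIGHTS = {
    "years_at_address": 2.0,     # stability proxy
    "on_time_payments": 5.0,     # payment history
    "open_credit_lines": -3.0,   # existing obligations count against you
}

def score(applicant: dict) -> float:
    """Combine collected data points into a single number."""
    base = 50.0
    return base + sum(WEIGHTS[k] * applicant.get(k, 0) for k in WEIGHTS)

def decide(applicant: dict, threshold: float = 70.0) -> str:
    # A single cutoff turns the opaque score into a life-shaping yes/no.
    return "approve" if score(applicant) >= threshold else "reject"

print(decide({"years_at_address": 4, "on_time_payments": 3, "open_credit_lines": 1}))
# score = 50 + 8 + 15 - 3 = 70 -> "approve"
```

The point of the sketch is the structure, not the numbers: a handful of weights and a threshold, invisible to the person being scored, separate "approve" from "reject".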
However, we think we are now at a point where we need to take a closer look. And the public seems to agree; we get a lot of encouragement, which shows us that this is an issue that concerns people. So our job is first of all to point out what happens in such processes. Then we want to be involved in the discussion of how these things should be shaped in the future, to suggest solutions and to do research. We therefore call ourselves an advocacy organisation, but we also want to work evidence-based: we do not just want to promote an ideology, we want to deliver facts, and we have to gather some of those ourselves.
So you are not against technology, as one might think, but you want such processes to be democratic and understandable ...
Exactly. The use of such systems must be legitimised, and there are different standards for that. We all know those unwieldy 60-page privacy statements that we have to accept in order to use a particular service. That is, at first sight, a very high degree of transparency: they explain in great detail what happens with our data. But the consequence for the user is overload and helplessness.
And in such cases there is often no alternative. It's not always like that, though. If two competing services offer me the same features, and one has a privacy policy stating that all my data is permanently collected, shared with third parties and used for advertising, while the second service says it doesn't do that, then it is relatively clear what I might do: choose the second service. Or maybe I choose the first one because I think personalised advertising is a good thing; that is also conceivable. But for many offers the choice is very limited.
I cannot use an alternative to Facebook because that alternative simply does not exist. I can only participate or stay away, and that is a very limited choice. Here you can see that transparency can only ever be the basis, not a goal in itself. Transparency may enable me to make autonomous decisions, but it is not the same thing.
Speaking of the lack of alternatives you mention: I'm thinking of the cookie notices that now constantly pop up while surfing the internet. You can either just agree or click 'Help' or 'Learn more'; but usually there's no option to say 'No' …
This is an interesting example, and it also shows that there are very different approaches on the providers' side. You have quite a choice if you pay close attention. Basically, there are three types of cookies: those that guarantee the functioning of the website; those used to record user behaviour; and those used to serve advertising.
There are providers who say: we are transparent about our cookies, but if you want to use the website, you have to accept all of them. There are other providers who leave every detail up to you; that is more for users who have time and some basic knowledge. And finally there are, though very rarely, providers who ask whether you want only the cookies that are essential to the functioning of the site, or the others as well.
That last option is great, of course; every user could choose only the essential cookies. However, this penalises the provider that gives users the best choice, because it does not get the data needed for commercialisation, in contrast to the others who say: take it or leave it. This is a fundamentally messed-up system.
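A minimal sketch of the three-category model and the opt-in check described above might look as follows; the category names and the consent format are invented for illustration, not taken from any real consent tool.

```python
# Illustrative cookie-consent sketch. Category names and consent format
# are invented; only the three-tier structure follows the text above.

ESSENTIAL = {"functional"}                # required for the site to work
OPTIONAL = {"analytics", "advertising"}   # usage tracking and advertising

def allowed_categories(consent):
    """Essential cookies are always set; everything else needs opt-in."""
    return ESSENTIAL | {c for c in OPTIONAL if consent.get(c, False)}

# The rarely offered "essential only" choice:
print(allowed_categories({}))                   # {'functional'}
# A user who opts into analytics but not advertising:
print(allowed_categories({"analytics": True}))  # {'functional', 'analytics'}
```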
We now know that algorithms are becoming ever more important in much of our daily lives, so in a sense they are a public concern. On the other hand, companies can say that they developed their algorithms and their software and do not want to open them up to the public; they claim these are trade secrets. What are your thoughts on that?
Well, there are activists, but also serious scientists, who say we need neither patents nor business or trade secrets, and that innovation in the economy works without them. That is a minority opinion, however. It is actually a highly complex discussion. I have also dealt with intellectual property and copyright, but I've never been able to fully make up my mind on it. In any case, our society has written the possibility of patents and trade and business secrets into law, so there is a social consensus.
But then the question is: when are we entitled to know them? Are we entitled to know them in every case? These are discussions we are already having, and they are likely to intensify in the coming years. So the question is: where can such secrets legitimately be kept? Take, for example, the case of racial bias in the COMPAS system in the USA, an automated risk assessment tool for offenders. Depending on its risk scores, judges decide whether to release offenders on probation, release them under conditions, or keep them in jail.
The research team of the US-based investigative journalism platform ProPublica found that this system discriminates against Black defendants. And this algorithm, which leads to such serious decisions, is unknown to the public; it is kept secret by the company that developed it. There is now a dispute over whether this violates constitutional principles, namely the right of the accused to know what is being brought against them.
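ProPublica's central finding was a disparity in error rates: among people who did not go on to reoffend, Black defendants were roughly twice as likely to have been flagged as high risk. A minimal sketch of that kind of false-positive-rate comparison, with invented sample records rather than the real data, might look like this:

```python
# Minimal sketch of the false-positive-rate comparison ProPublica ran on
# COMPAS data. Group labels and records below are invented; the real
# analysis used two-year recidivism records from Broward County, Florida.

# Each record: (group, flagged_high_risk, reoffended_within_two_years)
records = [
    ("A", True,  False), ("A", True,  False), ("A", False, False), ("A", True,  True),
    ("B", True,  False), ("B", False, False), ("B", False, False), ("B", True,  True),
]

def false_positive_rate(rows):
    """Share of people who did NOT reoffend but were flagged high risk."""
    non_reoffenders = [r for r in rows if not r[2]]
    flagged = sum(1 for r in non_reoffenders if r[1])
    return flagged / len(non_reoffenders) if non_reoffenders else float("nan")

for group in ("A", "B"):
    rows = [r for r in records if r[0] == group]
    print(f"group {group}: FPR = {false_positive_rate(rows):.2f}")
# A gap like FPR(A) = 0.67 vs FPR(B) = 0.33 is the kind of disparity
# ProPublica reported between Black and white defendants.
```

Note that this check needs outcome data gathered after the fact; it works even when the scoring algorithm itself stays secret, which is why journalists could run it at all.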
Many such individual cases will come before us in the future, and we will have to decide whether we want things that way or not. That is why we need more clarity about how such decisions are made; and as a public we have a right to that. It was similar with the German credit-scoring company Schufa. Here, too, a woman took her case to the Federal Court of Justice, and the ruling was that she had no right to know how the decision about her came about.
There will always be struggles to challenge this practice and to argue that decisions with such relevant and far-reaching consequences for us as a society and as individuals cannot be kept secret. Ultimately it comes down to the question of to whom such an algorithm must be disclosed. Does it have to be disclosed to the entire public, by publishing it on a website? Or should there be a state or quasi-state institution with the right to review these procedures? In Schufa's case, data protection officers can in principle fully review those procedures; but we at AlgorithmWatch say this oversight falls short, because those officers do not have the expertise to do it properly.
Does the government stand a chance, in the long term, of forcing companies to disclose algorithms at all? Or will the big tech giants like Google become so powerful that no government can effectively control them any longer?
The sovereignty of the state, i.e. of the citizens, will always remain contestable to some extent. That is already the case today. Take the financial markets: would you say that the government, the sovereign, prevailed over the financial institutions? Or would you say that it capitulated? Or that they found a good compromise? There is no simple yes or no to such questions. It is a matter of constantly struggling to supervise such actors; and how successful we are depends on how hard we work for it.
In an essay published by AlgorithmWatch, a distinction is made between "my data" and "data about me". Can you explain that?
The terms data sovereignty and data ownership are used very often at the moment. The question is: what does "my data" actually mean? The whole concept is highly problematic, because "my data" does not really exist. What would "my data" be? My date of birth? Is that mine? I have to state it all the time, and a lot of people know it, for example my parents. I cannot exclude all of them from knowing my date of birth. What about postings on Facebook?
Many people say: yes, that's my data; but how do you intend to enforce that claim of ownership against others with whom you have shared it? Basically, all you could say is that only what I myself communicate is my data. Ownership implies sole power of disposal over data, and that is a conceptually untenable basis for discussions about data ownership. "Data about me", on the other hand, could mean, for example, that my neighbours know when I leave home in the morning.
Or that I took the tram, or at what time I arrived at the office. All of this is "data about me" over which I cannot meaningfully claim power of disposal; it is not my property. And that is just how it is when I move through the digital world. One can, however, object to being constantly tracked. I want to be able to really turn off my phone without Google knowing where I am. Such data should not even exist; it should not be collected in the first place, so that we never have to argue about who it belongs to.
It is often demanded that such intelligent systems, in a very general sense, should always be created participatorily; otherwise they become a kind of black box in which the user has no chance to understand what is actually happening. Do you share this view?
On the one hand, I think it would be good if as many systems as possible were created participatorily. This also has a great advantage for the creators of such systems, because they will recognise things they otherwise might not have. But ultimately it is a matter of detail: what exactly does "participatory" mean? In Germany we now have automated tax assessment. The tax authority does not provide any information on how it works; that secrecy is even a legal requirement.
The point is to prevent those who are well informed about the subject from manipulating the process. Nevertheless, one could rightly demand democratic control, in whatever form. And one could also demand that such a system should have been created participatorily. But what would that look like? That I can apply to join a committee that assists in its creation? Or should civil society organisations, which may know a bit more about the matter, send someone? Should someone from the Ministry of Finance be there too? And from the tax advisers' association? At the end of the day, there will always be gradations of participation.
The former German Minister of Justice Heiko Maas suggested creating a digital agency, presumably a kind of consumer protection authority for the internet. What do you think of that?
I don't like the impression this creates, namely that a single institution could take care of everything out there; in a sense it is a populist demand. I do agree that public institutions should be responsible. It's just a little more complicated. The German Federal Financial Supervisory Authority, BaFin, already checks whether everything is in order in high-frequency trading on the stock exchange.
Whether it does that well or not is another question. But why should you hand this over to a digital agency that has no idea about it? Take another example: Germany has a very complex system for approving medical technology. If automated diagnostic systems come along, why shouldn't those experts continue to handle them? Putting all of this into a single digital agency is simply unrealistic.
In the Netherlands, there are already municipalities that use algorithms to flag potential welfare fraudsters. How do you view this development?
I think this development will intensify, and I feel uncomfortable with it. At the same time, I do not want welfare fraud to take place on a large scale, because ultimately society pays the price for it, and above all the poor are more likely to pay than the rich. So it is a question of justice. But there are so many "ifs" in this matter that one can only look at each individual case on its own. What is the modelling? Which data are collected? And are such technologies, as the American political scientist Virginia Eubanks argues in her book "Automating Inequality", always tried out first on the most vulnerable in society? They look for welfare fraud by poor people, not for people who move money abroad or evade taxes.
In one of your projects you ask citizens to send you their Schufa scores, i.e. their credit records. What is that all about? And how is the project going?
It's called OpenSCHUFA, and it's a joint project of the Open Knowledge Foundation Germany and AlgorithmWatch. The idea was to ask people to send us their Schufa credit scores so that we can analyse what kind of algorithm lies behind the system. The development of the open-source software and the implementation of the project were funded through a crowdfunding campaign by more than 1,800 supporters, which brought in nearly 44,000 euros.
The response has been great: more than 27,000 people have so far requested their self-disclosure report from Schufa, and around 2,900 of those credit reports have been uploaded to us. That is over ten percent of all requests made; not a bad rate. However, the demographics are relatively skewed: many participants come from our own "bubble", well-off people without a bad Schufa score. That makes it a great challenge to prove that Schufa's scoring is systematically flawed. But it is still too early to say; we are in the middle of the data analysis. Our partners Spiegel Online and Bayerischer Rundfunk, who also received data, are still evaluating as well.
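A concrete difficulty with such a crowdsourced sample is the selection bias he describes: if mostly people with good scores donate their reports, conclusions about how the model treats everyone else rest on thin evidence. Here is an entirely hypothetical sketch of how one might quantify that skew; the scores and the population reference figure are invented, not OpenSCHUFA data.

```python
# Hypothetical sketch of checking a donated sample for the "bubble" skew
# described above. All numbers are invented; the real OpenSCHUFA data is
# not public in this form.

donated_scores = [97.5, 95.0, 92.3, 98.1, 96.7, 89.2, 94.4, 91.8]

sample_share_low = sum(s < 90 for s in donated_scores) / len(donated_scores)
population_share_low = 0.20  # assumed share of low scores in the population

print(f"low-score share in sample: {sample_share_low:.1%}")
if sample_share_low < population_share_low:
    # Low scores are under-represented among the donors, so findings
    # about how the model treats low-score groups would be unreliable.
    print("sample appears skewed toward good scores")
```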
How is AlgorithmWatch financed?
Mainly through project financing, and partly through donations. The OpenSCHUFA project, for example, is completely donation-funded, and we want to use crowdfunding again in the future where we think it could work. Otherwise we are currently financed exclusively by grants from private foundations, specifically the Bertelsmann Foundation, the Hans Böckler Foundation and the Open Society Foundations.
Do you cooperate with other NGOs and with the media?
With Spiegel Online we have an ongoing media partnership, and we work closely with the Open Knowledge Foundation Germany. We also cooperate with other NGOs, as well as with institutions such as TU Braunschweig and the University of Passau on research projects.
Can you already reveal details about future projects?
We will be releasing a report in December that will bring together examples from different European countries to show the breadth of how automated decision-making systems are already being used in the European Union. This report will be presented in the European Parliament.
MATTHIAS SPIELKAMP is founder and executive director of AlgorithmWatch. He is co-founder and publisher of the online magazine iRights.info (Grimme Online Award 2006). He has testified before several committees of the German Bundestag, e.g. on AI and robotics. Matthias serves on the governing board of the German section of Reporters Without Borders and on the advisory councils of Stiftung Warentest and the Whistleblower Network. On the steering committee of the German Internet Governance Forum (IGF-D), he acts as co-chair for the academia and civil society stakeholder groups. He has been a fellow of ZEIT Stiftung, Stiftung Mercator and the American Council on Germany. Matthias has written and edited books on digital journalism and internet governance and was named one of 15 architects building the data-driven future by Silicon Republic in 2017. He holds master's degrees in journalism from the University of Colorado at Boulder and in philosophy from the Free University of Berlin.