Dutch Algorithmic Transparency Standard


Print Form: Use of Algorithm (EN)

Form for inventory purposes. Under development; suggestions are welcome.


BASIC INFORMATION
Name (name)

What is the colloquial name used to identify this algorithm? Example: 'WMO prediction Rotterdam' or 'Crowd-monitoring Enschede'.

BASIC INFORMATION
Organization (organization)

What is the full name of the organization used to identify responsibility for use of the model, algorithm or AI? Example: 'City of Amsterdam' or 'Water Authority Limburg'.

BASIC INFORMATION
Department (department)

What is the full name of the department or division used to specify responsibility for use of the model, algorithm or AI? Example: 'District Segbroek' or 'Traffic and transport'.

BASIC INFORMATION
Short description (description_short)

Please give a short description of no more than 150 characters to provide a quick overview of the purpose of the model, algorithm or AI. Example: 'The traffic light priority algorithm prioritises traffic modalities based on applicable law and local regulations'.

BASIC INFORMATION
Type of algorithm (type)

Please indicate whether the model, algorithm or AI is descriptive, diagnostic, predictive or prescriptive. This information can be used as a possible risk indicator.

BASIC INFORMATION
Category (category)

Please provide keywords to facilitate search. Examples: 'traffic', 'transport', 'social security', 'crowd-monitoring', 'facial recognition', 'camera surveillance'.

BASIC INFORMATION
URL of the website (website)

What is the URL reference to the landing page with further information about the model, algorithm or AI and its use? This helps users who are looking for more in-depth information about its practical use or technical details.

BASIC INFORMATION
Status (status)

Please indicate whether the model, algorithm or AI is in development, in use, or archived.
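
The basic-information fields above map one-to-one onto keys in a register entry. As a minimal sketch, assuming a Python representation of such an entry, the fragment below uses the identifiers given in parentheses as keys; every value is a made-up example (some borrowed from the examples in this form).

```python
# Hypothetical sketch of the BASIC INFORMATION block of a register entry.
# Keys follow the identifiers above; all values are invented examples.
basic_information = {
    "name": "Crowd-monitoring Enschede",
    "organization": "City of Enschede",
    "department": "Traffic and transport",
    "description_short": (
        "Camera-based algorithm that estimates crowd density "
        "in the city centre to support event management."
    ),
    "type": "predictive",   # descriptive | diagnostic | predictive | prescriptive
    "category": ["crowd-monitoring", "camera surveillance"],
    "website": "https://example.org/crowd-monitoring",  # placeholder URL
    "status": "in use",     # in development | in use | archived
}

# Simple sanity check on the short description length (maximum 150 characters).
assert len(basic_information["description_short"]) <= 150
```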

USE CASE
Goal (goal)

Please describe the goal of the policy for which the model, algorithm or AI was developed and how the technology contributes to reaching that goal. It should become clear why it is likely that this specific technology will help reach the goal.

USE CASE
Impact (impact)

Please describe in what way citizens come into contact with the effects of the model, algorithm or AI. It should become clear under what specific circumstances this happens and what the expected consequences are on an individual and/or societal level.

USE CASE
Proportionality (proportionality)

Please describe why use of the model, algorithm or AI by the competent authority is reasonably necessary. It should explain why the expected benefit outweighs any potential expected harm.

USE CASE
Decision-making process (decision_making_process)

What is the official process within the organisation in which the model, algorithm or AI is involved? It should refer to concrete laws, regulations or policies, as published in publicly available sources.

USE CASE
Documentation (documentation)

Please provide a URL reference to any extended information about the use of the model, algorithm or AI within this specific use case.

APPLICATION
Long description (description)

Please provide an extensive description of between 500 and 10,000 characters in which the inner workings of the model, algorithm or AI are explained. It should detail all relevant aspects needed to understand how the model, algorithm or AI processes data and feeds into decision-making.

APPLICATION
Application URL (application_url)

Please provide the URL reference to the algorithmic application or code base. Examples are links to a Github repository, the Common Ground Component Catalogue or supplier documentation.

APPLICATION
URL of PublicCode.yml (publiccode)

What is the URL reference to the PublicCode.yml standard, if available?

APPLICATION
Connection to Municipal Personal Records Database (MPRD)

Please indicate whether a connection is being made to the Municipal Personal Records Database.

APPLICATION
Source data (source_data)

Please give an overview of the data that is processed by the model, algorithm or AI. It should describe the purpose for which each data source is added and any dependencies that result from this.

APPLICATION
Methods and models (methods_and_models)

Please indicate which standard methods or models the algorithm uses. Examples: ROC curve, confusion matrix.
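
To illustrate the kind of standard evaluation methods mentioned above (ROC curve, confusion matrix), the sketch below computes both for a hypothetical binary classifier using scikit-learn; the labels, scores and threshold are invented, and scikit-learn is just one possible tooling choice.

```python
# Hypothetical sketch: computing a confusion matrix and ROC curve for a
# binary classifier with scikit-learn. Labels and scores are invented.
from sklearn.metrics import auc, confusion_matrix, roc_curve

y_true = [0, 0, 1, 1, 1, 0, 1, 0]                     # observed outcomes
y_score = [0.1, 0.4, 0.35, 0.8, 0.7, 0.2, 0.9, 0.6]   # model scores
y_pred = [1 if s >= 0.5 else 0 for s in y_score]      # 0.5 decision threshold

cm = confusion_matrix(y_true, y_pred)   # rows: true class, columns: predicted class
fpr, tpr, thresholds = roc_curve(y_true, y_score)
roc_auc = auc(fpr, tpr)

print("confusion matrix:")
print(cm)
print("AUC:", round(roc_auc, 3))
```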

OVERSIGHT
Monitoring (monitoring)

Please give a general overview of how the competent authority monitors the implementation of the model, algorithm or AI.

OVERSIGHT
Human intervention (human_intervention)

Please describe how humans can intervene in the outcomes of the model, algorithm or AI. It should detail how responsibility for possible human intervention is secured, so it is clear who can and may act.

OVERSIGHT
Risks (risks)

Please provide an overview of the outcome of the internal risk analysis. It can also refer to available online documentation. We currently refer to the assessment framework for algorithms by the Netherlands Court of Audit and the Regulation on a European Approach for Artificial Intelligence.

OVERSIGHT
Performance standard (performance_standard)

Please describe what the expected performance of the model, algorithm or AI is and how it is measured. It should detail which criteria are used and the frequency with which the performance is monitored.
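
A performance standard only becomes testable once a criterion, a threshold and a measurement frequency have been fixed. The sketch below assumes precision as the criterion and 0.80 as the agreed threshold; both are illustrative assumptions, not values prescribed by this standard.

```python
# Hypothetical sketch: checking a measured metric against an agreed
# performance standard. Metric, threshold and frequency are assumptions.
from sklearn.metrics import precision_score

PERFORMANCE_THRESHOLD = 0.80  # assumed minimum precision agreed for this use case

def meets_performance_standard(y_true, y_pred, threshold=PERFORMANCE_THRESHOLD):
    """Return True if the measured precision meets the agreed threshold."""
    return precision_score(y_true, y_pred) >= threshold

# Example of a periodic (e.g. monthly) check on recent outcomes:
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 1, 0, 0, 1]
print("meets performance standard:", meets_performance_standard(y_true, y_pred))
```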

LEGAL
Competent authority (competent_authority)

What is the legal entity responsible for deployment of the model, algorithm or AI?

LEGAL
Lawful basis (lawful_basis)

Please provide a link to the administrative act that makes the use of the model, algorithm or AI legitimate.

LEGAL
Data protection impact assessment (DPIA)

Has a data protection impact assessment been carried out?

LEGAL
Description of the DPIA (DPIA_description)

Please give an overview of the key points from the data protection impact assessment. It should explain how discrimination is prevented when (proxies of) ethnicity, sex or zipcode are being used. If available it can reference the URL to the full DPIA documentation.
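
One simple check that can feed into such an assessment is comparing outcome rates across groups defined by a potential proxy variable such as zip code. The sketch below uses invented data and is only an illustration, not a substitute for a full DPIA.

```python
# Hypothetical sketch: comparing positive-outcome rates across groups defined
# by a potential proxy variable (here: zip-code area). All data is invented.
from collections import defaultdict

records = [
    {"zipcode_area": "1011", "selected": True},
    {"zipcode_area": "1011", "selected": False},
    {"zipcode_area": "7511", "selected": True},
    {"zipcode_area": "7511", "selected": True},
]

counts = defaultdict(lambda: {"selected": 0, "total": 0})
for record in records:
    group = counts[record["zipcode_area"]]
    group["total"] += 1
    group["selected"] += int(record["selected"])

for area, group in counts.items():
    rate = group["selected"] / group["total"]
    print(f"area {area}: selection rate {rate:.2f}")
# Large differences between areas would warrant further investigation in the DPIA.
```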

LEGAL
Objection procedure (objection_procedure)

Please describe in what way citizens can object to the use or outcome of the model, algorithm or AI.

METADATA
Schema (schema)

This is the schema used for this entry.

METADATA
Identifier (id)

This is the Universally Unique Identifier (UUID) for this entry.

METADATA
URL (url)

This is the URL for this entry.

METADATA
Contact person e-mail (contact_email)

This is the e-mail address of the organisation or contact person for this entry.

METADATA
Geographical area (area)

This is the geographical area to which this entry applies.

METADATA
Language (lang)

This is the language in which this entry was filled.

METADATA
Revision date (revision_date)

This is the date before which this entry has to be revised.
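
By way of illustration, the metadata fields can be sketched in the same way as the basic-information block above; every value below is a placeholder invented for this example.

```python
# Hypothetical sketch of the METADATA block of a register entry.
# Every value, including the schema reference and the UUID, is a placeholder.
import uuid
from datetime import date

metadata = {
    "schema": "https://example.org/schema/v1",  # placeholder schema reference
    "id": str(uuid.uuid4()),                    # Universally Unique Identifier
    "url": "https://example.org/register/crowd-monitoring-enschede",
    "contact_email": "algorithms@example.org",
    "area": "Enschede",
    "lang": "en",
    "revision_date": date(2026, 1, 1).isoformat(),
}

# The entry should be reviewed before the revision date has passed.
needs_revision = date.fromisoformat(metadata["revision_date"]) <= date.today()
print("needs revision:", needs_revision)
```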