By LegalEdge News

Using Profiling and Automated Decision Making


Do you use (or are you thinking of using) profiling or automated decision making in your online business practices?  If so, are you compliant with data protection legislation? 

What is it and how does it work?   

Profiling is the automated processing of personal information to evaluate certain things about a person, such as analysing an individual’s personality, behaviour, interests and habits through their online activity. The results are then used to make predictions or decisions about them. For example, an online retailer may use profiling of your past purchases to offer you similar goods. Less positively, an insurance company may use profiling of your health or driving history to refuse you cover.

Automated decision-making is just that: making a decision by solely automated means, without any human involvement, such as an online system deciding whether to award a loan. Profiling can be part of an automated decision-making process.

Both processes collect and analyse large amounts of personal data using algorithms, artificial intelligence (AI) and/or machine learning to find correlations between data sets and then build links between different behaviours and attributes. They then create profiles and predict behaviours based on those profiles.
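
To make that concrete, here is a minimal sketch in Python of the retailer example above. The products, categories and scoring rule are all invented for illustration; a real system would use far larger data sets and machine-learned models, but the structure is the same: aggregate past behaviour into a profile, then use the profile to predict what the person is likely to want.

```python
from collections import Counter

# Hypothetical purchase history for one customer; all products and
# categories here are invented for illustration.
purchase_history = ["running shoes", "sports watch", "trail socks", "yoga mat"]

category_of = {
    "running shoes": "fitness",
    "sports watch": "fitness",
    "trail socks": "fitness",
    "yoga mat": "wellness",
}

catalogue = {
    "foam roller": "fitness",
    "cookbook": "food",
    "resistance bands": "fitness",
}

# Step 1 - profile: aggregate past behaviour into category counts.
profile = Counter(category_of[item] for item in purchase_history)

# Step 2 - predict: score each catalogue item by how strongly its
# category features in the profile. This ranking is the 'prediction
# about the person' that makes the processing profiling.
def interest_score(product: str) -> int:
    return profile.get(catalogue[product], 0)

recommendations = sorted(catalogue, key=interest_score, reverse=True)
print(recommendations)  # ['foam roller', 'resistance bands', 'cookbook']
```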

Does GDPR regulate these activities and how? 

Yes. The General Data Protection Regulation (GDPR) applies to all automated individual decision-making and profiling. It distinguishes between automated decision-making that produces decisions with a ‘legal or similarly significant effect’ and automated decision-making that doesn’t. Decisions with a legal effect are those that adversely affect someone’s legal rights, such as a decision not to pay housing benefit to which they may be entitled by law. Decisions with a similarly significant adverse effect include the refusal of an online credit application, or the rejection of a candidate in an e-recruiting process based on an online aptitude test with no human intervention.

Under the GDPR, if you are using profiling or automated decision-making to make decisions that have a legal or similarly significant effect, you can only do so if:

  • it’s necessary for entering into, or the performance of, a contract;
  • it’s authorised by law, for example for fraud or tax evasion monitoring; or
  • you have obtained the person’s explicit consent.

And if you can rely on one of the bases above, you must:

  • tell people you are doing it; 
  • take steps to prevent errors, bias and discrimination and carry out regular checks to ensure your systems are working properly; and
  • give people simple ways to ask for human intervention or to challenge a decision (see the sketch below).
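
As a purely illustrative example of that last point, here is a hypothetical Python sketch (the rule, threshold and field names are all invented): a loan decision is made by solely automated means, the reasons are recorded so they can be explained to the applicant, and a separate route lets the applicant escalate the case to a human reviewer rather than simply re-running the same algorithm.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Decision:
    approved: bool
    reasons: List[str]
    handled_by: str = "automated system"

def automated_loan_decision(income: float, existing_debt: float) -> Decision:
    """A solely automated rule: no human sees the application at this stage."""
    reasons = []
    if existing_debt > income * 0.5:  # invented threshold, for illustration
        reasons.append("existing debt exceeds 50% of income")
    return Decision(approved=not reasons, reasons=reasons)

def request_human_review(decision: Decision) -> Decision:
    """The route the GDPR requires: the applicant can ask for human
    intervention, so the case is passed to a person rather than the
    same algorithm being run again."""
    # In a real system this would enqueue the case for a human case-worker.
    return Decision(
        approved=decision.approved,
        reasons=decision.reasons + ["queued for human review"],
        handled_by="human review queue",
    )

decision = automated_loan_decision(income=30_000, existing_debt=20_000)
if not decision.approved:
    decision = request_human_review(decision)  # applicant challenges the outcome
print(decision)
```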

If you use special category personal data (i.e. personal data that needs more protection because it is sensitive), the rules are stricter: you can only do so with explicit consent or where necessary to protect someone’s vital interests. The processing of health data related to COVID-19, for example, may be allowed in order to protect people’s vital interests, but the measures taken will still need to be fair and non-discriminatory.

If you are not using profiling or automated decision-making for decisions with a legal or similarly significant effect, you can do so without meeting the conditions above, but you still have to comply with the GDPR principles: record your lawful basis, have processes in place so people can exercise their rights, and tell people about their right to object and how to exercise it.

Have a look at the following short video from Aphaia (the data protection experts), which provides more detail.

What else should you consider?

Because profiling and automated decision-making are considered high risk under the GDPR, you should run a Data Protection Impact Assessment and a Legitimate Interests Assessment to show you’ve identified and assessed the risks and how you will address them. Where the processing involves AI, an AI Ethics Assessment should also be run; click here for more detail.

The ICO has published detailed guidance here.

It’s worth noting that other laws may also apply. One example is recruitment, where there is growing use of AI, algorithms and chatbots to target, screen and even help interview candidates. Compliance with relevant employment and equality laws will also be important to avoid discrimination and bias.

And if you’re still unsure about your use of profiling and/or automated decision making please get in touch by emailing info@legaledge.co.uk for further help. 
