Bavaria DPA Dashboard on inspections (planned, ongoing, completed)
incl. a completed online inspection of 172 WordPress websites; planned: e.g. inspections around data deletion in SAP, questionnaires, detailed expectations on controls, ..
FTC basic cybersecurity guidance for small businesses
GSMA Privacy Design Guidelines for mobile app development
NHS mobile app assessment questions
Examples from Opinion 2/2017 on data processing at work
Opinion 2/2017 on data processing at work
http://ec.europa.eu/newsroom/article29/item-detail.cfm?item_id=610169
Example
During the recruitment of new staff, an employer checks the profiles of the candidates on various social networks and includes information from these networks (and any other information available on the internet) in the screening process.
The employer may have a legal basis under Article 7(f) to review publicly available information about candidates only if such a review is necessary for the job, for example in order to assess specific risks regarding candidates for a specific function, and only if the candidates are correctly informed (for example, in the text of the job advert).
Example
An employer monitors the LinkedIn profiles of former employees for the duration of their non-compete clauses. The purpose of this monitoring is to verify compliance with those clauses. The monitoring is limited to these former employees.
As long as the employer can prove that such monitoring is necessary to protect his legitimate interests, that there are no other, less invasive means available, and that the former employees have been adequately informed about the extent of the regular observation of their public communications, the employer may be able to rely on the legal basis of Article 7(f) of the DPD.
Example
An employer intends to deploy a TLS inspection appliance to decrypt and inspect secure traffic, with the purpose of detecting anything malicious. The appliance is also able to record and analyse the entirety of an employee’s online activity on the organisation’s network.
Use of encrypted communications protocols is increasingly being implemented to protect online data flows involving personal data against interception. However, this can also present issues, as the encryption makes it impossible to monitor incoming and outgoing data. TLS inspection equipment decrypts the data stream, analyses the content for security purposes and then re-encrypts the stream afterwards.
In this example, the employer relies upon legitimate interests—the necessity to protect the network, and the personal data of employees and customers held within that network, against unauthorised access or data leakage. However, monitoring every online activity of the employees is a disproportionate response and an interference with the right to secrecy of communications. The employer should first investigate other, less invasive, means to protect the confidentiality of customer data and the security of the network.
To the extent that some interception of TLS traffic can be qualified as strictly necessary, the appliance should be configured in a way to prevent permanent logging of employee activity, for example by blocking suspicious incoming or outgoing traffic and redirecting the user to an information portal where he or she may ask for review of such an automated decision. If some general logging would nonetheless be deemed strictly necessary, the appliance may also be configured not to store log data unless the appliance signals the occurrence of an incident, with a minimization of the information collected.
As a good practice, the employer could offer alternative unmonitored access for employees. This could be done by offering free WiFi, or stand-alone devices or terminals (with appropriate safeguards to ensure confidentiality of the communications) where employees can exercise their legitimate right to use work facilities for some private usage. Moreover, employers should consider certain types of traffic whose interception endangers the proper balance between their legitimate interests and employees’ privacy (such as the use of private webmail, visits to online banking and health websites), with the aim of appropriately configuring the appliance so as not to proceed with interception of communications in circumstances that are not compliant with proportionality. Information on the type of communications that the appliance is monitoring should be specified to the employees.
A policy specifying when, for what purposes, and by whom suspicious log data can be accessed should be developed and made easily and permanently accessible to all employees, in order also to guide them about acceptable and unacceptable use of the network and facilities. This allows employees to adapt their behaviour to prevent being monitored when they legitimately use IT work facilities for private use. As good practice, such a policy should be evaluated, at least annually, to assess whether the chosen monitoring solution delivers the intended results, and whether there are other, less invasive tools or means available to achieve the same purposes.
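As an illustration of the incident-gated configuration described above, here is a minimal Python sketch; the no-inspect domains, buffer size, portal URL and scanner verdict are invented assumptions, not anything specified in the Opinion.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Connection:
    user: str
    host: str          # destination hostname only, no URLs or payloads (data minimisation)
    suspicious: bool   # verdict from the security scanner (assumed to exist)

# Categories of traffic that are never decrypted, per the proportionality
# considerations above; the domains are placeholders.
NO_INSPECT = {"webmail.example", "bank.example", "health.example"}
INFO_PORTAL = "https://intranet.example/review-request"  # assumed internal URL

@dataclass
class InspectionPolicy:
    buffer: List[Connection] = field(default_factory=list)  # transient, in-memory only

    def handle(self, conn: Connection) -> str:
        if conn.host in NO_INSPECT:
            return "passed through without decryption"
        if conn.suspicious:
            # Incident: block the traffic, persist only this event, and point
            # the user to a portal where the automated decision can be reviewed.
            self.log_incident(conn)
            return f"blocked; request review at {INFO_PORTAL}"
        # No incident: nothing is written to permanent storage; a small
        # rolling buffer is kept in memory and then discarded.
        self.buffer.append(conn)
        if len(self.buffer) > 1000:
            self.buffer.pop(0)
        return "allowed"

    def log_incident(self, conn: Connection) -> None:
        # Placeholder for the access-restricted incident log; only the
        # minimal metadata of the flagged connection is recorded.
        print(f"INCIDENT: {conn.user} -> {conn.host}")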
Example
An employer deploys a Data Loss Prevention (DLP) tool to monitor outgoing e-mails automatically, for the purpose of preventing unauthorised transmission of proprietary data (e.g. customers’ personal data), regardless of whether such an action is intentional. Once an e-mail is flagged as a potential source of a data breach, further investigation is performed.
Again, the employer relies upon the necessity of its legitimate interest to protect the personal data of customers as well as its assets against unauthorised access or data leakage. However, such a DLP tool may involve unnecessary processing of personal data; for example, a “false positive” alert might result in unauthorised access to legitimate e-mails sent by employees (which may be, for instance, personal e-mails).
Therefore, the necessity of the DLP tool and its deployment should be fully justified so as to strike the proper balance between the employer’s legitimate interests and the fundamental right to the protection of employees’ personal data. In order for the legitimate interests of the employer to be relied upon, certain measures should be taken to mitigate the risks. For example, the rules the system follows to characterise an e-mail as a potential data breach should be fully transparent to the users, and in cases where the tool flags an outgoing e-mail as a possible data breach, a warning message should inform the sender prior to transmission, giving the sender the option to cancel it.
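A minimal Python sketch of that mitigation, assuming a simple pattern-based rule set (the patterns and rule names are illustrative placeholders): the published rules make the tool’s behaviour transparent, and a flagged e-mail produces a pre-transmission warning with the option to cancel.

import re

# The rule set is published to employees (transparency), and each rule has a
# human-readable name so warnings are understandable; the patterns are
# illustrative placeholders, not real detection rules.
RULES = {
    "payment card number": re.compile(r"\b\d{4}[ -]?\d{4}[ -]?\d{4}[ -]?\d{4}\b"),
    "internal customer ID": re.compile(r"\bCUST-\d{6}\b"),  # assumed ID format
}

def matched_rules(body):
    # Return the names of any published rules the outgoing e-mail matches.
    return [name for name, pattern in RULES.items() if pattern.search(body)]

def send_email(body, sender_confirmed=False):
    hits = matched_rules(body)
    if hits and not sender_confirmed:
        # Warn the sender *before* transmission and offer cancellation,
        # instead of silently copying the mail for later investigation.
        return "WARNING: possible data breach (" + ", ".join(hits) + "); confirm to send or cancel"
    return "sent"

print(send_email("Please pay card 4111 1111 1111 1111"))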
Example
An organisation offers fitness monitoring devices to its employees as a general gift. The devices count the number of steps employees take, and register their heartbeats and sleeping patterns over time.
The resulting health data should only be accessible to the employee and not the employer. Any data transferred between the employee (as data subject) and the device/service provider (as data controller) is a matter for those parties.
Because the health data could also be processed by the commercial party that manufactured the devices or offers the service, the employer should, when choosing the device or service, evaluate the privacy policy of the manufacturer and/or service provider to ensure that it does not result in unlawful processing of health data on employees.
Example
An employer maintains a server room in which business-sensitive data, personal data relating to employees and personal data relating to customers is stored in digital form. In order to comply with legal obligations to secure the data against unauthorised access, the employer has installed an access control system that records the entrance and exit of employees who have appropriate permission to enter the room. Should any item of equipment go missing, or if any data is subject to unauthorised access, loss or theft, the records maintained by the employer allow them to determine who had access to the room at that time.
Given that the processing is necessary and does not override the right to private life of the employees, it can be based on the legitimate interest ground under Article 7(f), provided the employees have been adequately informed about the processing operation. However, the continuous monitoring of the frequency and exact entrance and exit times of the employees cannot be justified if these data are also used for another purpose, such as employee performance evaluation.
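As a sketch of the purpose limitation this implies, the following hypothetical Python fragment records entries and exits but only allows queries tied to a documented incident, deliberately exposing no per-employee frequency or working-time aggregation; the incident-ticket gate and badge format are assumptions.

from datetime import datetime

ACCESS_LOG = []  # tuples of (badge_id, direction, timestamp); append-only

def record(badge_id, direction):
    # Called by the door controller on every badge swipe.
    assert direction in ("enter", "exit")
    ACCESS_LOG.append((badge_id, direction, datetime.now()))

def who_had_access(start, end, incident_ticket):
    # The only query the API offers: who entered during the window of a
    # concrete incident. No per-employee aggregation is exposed, so the
    # records cannot quietly become a performance-evaluation tool.
    if not incident_ticket:
        raise PermissionError("records may only be consulted for a documented incident")
    return {badge for badge, direction, t in ACCESS_LOG
            if direction == "enter" and start <= t <= end}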
Example
A transport company equips all of its vehicles with a video camera inside the cabin which records sound and video. The purpose of processing these data is to improve the driving skills of the employees. The cameras are configured to retain recordings whenever incidents such as sudden braking or abrupt directional changes take place. The company assumes it has a legal ground for the processing in its legitimate interest under Article 7(f) of the Directive, namely to protect the safety of its employees and of other drivers.
However, the legitimate interest of the company to monitor the drivers does not prevail over the rights of those drivers to the protection of their personal data. The continuous monitoring of employees with such cameras constitutes a serious interference with their right to privacy. There are other methods (e.g. the installation of equipment that prevents the use of mobile phones), as well as other safety systems such as an advanced emergency braking system or a lane departure warning system, that can be used for the prevention of vehicle accidents and may be more appropriate. Furthermore, such video recording is highly likely to result in the processing of personal data of third parties (such as pedestrians), and the legitimate interest of the company is not sufficient to justify such processing.
Example
A delivery company sends its customers an e-mail with a link to the name and the location of the deliverer (employee). The company also intended to provide a passport photo of the deliverer. The company assumed it would have a legal ground for the processing in its legitimate interest (Article 7(f) of the Directive), allowing the customer to check if the deliverer is indeed the right person.
However, it is not necessary to provide the name and the photo of the deliverer to the customers. Since there is no other legitimate ground for this processing, the delivery company is not allowed to provide these personal data to customers.
Examples from Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679
Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679
http://ec.europa.eu/newsroom/just/document.cfm?doc_id=47963
Example
A data broker collects data from different public and private sources, either on behalf of its clients or for its own purposes. The data broker compiles the data to develop profiles on the individuals and places them into segments. It sells this information to companies who wish to improve the targeting of their goods and services. The data broker carries out profiling by placing a person into a certain category according to their interests.
Whether or not there is automated decision-making as defined in Article 22(1) will depend upon the circumstances.
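A minimal Python sketch of the segmentation step described in the example; the segment names and their defining interests are invented for illustration.

SEGMENTS = {
    "outdoor enthusiasts": {"hiking", "camping", "cycling"},
    "home cooks": {"recipes", "kitchenware", "baking"},
}

def assign_segments(interests):
    # A person is placed in every segment whose defining interests overlap
    # with the interests compiled about them from the various sources.
    return [name for name, keys in SEGMENTS.items() if interests & keys]

print(assign_segments({"hiking", "recipes"}))  # ['outdoor enthusiasts', 'home cooks']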
Example
Imposing speeding fines purely on the basis of evidence from speed cameras is an automated decision-making process that does not necessarily involve profiling.
It would, however, become a decision based on profiling if the driving habits of the individual were monitored over time and, for example, the amount of the fine imposed is the outcome of an assessment involving other factors, such as whether the speeding is a repeat offence or whether the driver has had other recent traffic violations.
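To make the distinction concrete, here is a small Python sketch contrasting the two cases; the thresholds, amounts and uplift factors are invented for illustration, not taken from the guidelines.

def camera_fine(speed_kmh, limit_kmh):
    # Automated decision without profiling: the fine depends only on the
    # single measured event.
    excess = speed_kmh - limit_kmh
    return 0.0 if excess <= 0 else 50.0 + 5.0 * excess

def profiled_fine(speed_kmh, limit_kmh, repeat_offence, recent_violations):
    # Decision based on profiling: the outcome also reflects the driver's
    # monitored behaviour over time.
    fine = camera_fine(speed_kmh, limit_kmh)
    if fine and repeat_offence:
        fine *= 1.5                       # uplift for repeat speeding
    fine += 25.0 * recent_violations      # other recent traffic violations
    return fine

print(camera_fine(68, 50))             # 140.0
print(profiled_fine(68, 50, True, 2))  # 260.0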
Example
An automated process produces what is in effect a recommendation concerning a data subject. If a human being reviews and takes account of other factors in making the final decision, that decision would not be ‘based solely’ on automated processing.
Example
Hypothetically, a credit card company might reduce a customer’s card limit, based not on that customer’s own repayment history, but on non-traditional credit criteria, such as an analysis of other customers living in the same area who shop at the same stores.
This could mean that someone is deprived of opportunities based on the actions of others.
In a different context, using these types of characteristics might have the advantage of extending credit to those without a conventional credit history, who would otherwise have been denied.
Example
A controller uses credit scoring to assess and reject an individual’s loan application. The score may have been provided by a credit reference agency, or calculated directly based on information held by the controller.
Regardless of the source (and information on the source must be provided to the data subject under Article 14(2)(f) where the personal data have not been obtained from the data subject), if the controller is reliant upon this score it must be able to explain the score and the rationale behind it to the data subject.
The controller explains that this process helps them make fair and responsible lending decisions. It provides details of the main characteristics considered in reaching the decision, the source of this information and its relevance. This may include, for example:
• the information provided by the data subject on the application form;
• information about previous account conduct, including any payment arrears; and
• official public records information such as fraud record information and insolvency records.
The controller also includes information to advise the data subject that the credit scoring methods used are regularly tested to ensure they remain fair, effective and unbiased.
The controller provides contact details for the data subject to request that any declined decision is reconsidered, in line with the provisions of Article 22(3).
Example
An insurance company uses an automated decision-making process to set motor insurance premiums based on monitoring customers’ driving behaviour. To illustrate the significance and envisaged consequences of the processing, it explains that dangerous driving may result in higher insurance payments and provides an app comparing fictional drivers, including one with dangerous driving habits such as fast acceleration and last-minute braking.
It uses graphics to give tips on how to improve these habits and consequently how to lower insurance premiums.
Example
Some insurers offer insurance rates and services based on an individual’s driving behaviour. Elements taken into account in these cases could include the distance travelled, the time spent driving and the journey undertaken, as well as predictions based on other data collected by the sensors in a (smart) car. The data collected is used for profiling to identify bad driving behaviour (such as fast acceleration, sudden braking, and speeding). This information can be cross-referenced with other sources (for example the weather, traffic, or type of road) to better understand the driver’s behaviour.
The controller must ensure that they have a lawful basis for this type of processing. The controller must also provide the data subject with information about the collected data, the existence of automated decision-making, the logic involved, and the significance and envisaged consequences of such processing.
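A Python sketch of how such behaviour profiling could feed a premium, assuming invented event types, penalty weights and a capped uplift; the cross-referencing with weather is reduced to a single flag for illustration.

from dataclasses import dataclass

@dataclass
class DrivingEvent:
    kind: str        # "fast_acceleration", "sudden_braking" or "speeding"
    raining: bool    # contextual flag cross-referenced from a weather source

PENALTIES = {"fast_acceleration": 1.0, "sudden_braking": 2.0, "speeding": 3.0}

def risk_score(events, km_driven):
    # Score telematics events, adjusting for context, and normalise per 100 km.
    raw = 0.0
    for e in events:
        penalty = PENALTIES.get(e.kind, 0.0)
        if e.kind == "sudden_braking" and e.raining:
            penalty *= 0.5  # hard braking in rain judged less indicative of bad driving
        raw += penalty
    return raw / max(km_driven / 100.0, 1.0)

def monthly_premium(base, score):
    # Capped uplift so a single bad month cannot raise the premium unboundedly.
    return base * (1.0 + min(score, 5.0) * 0.1)

events = [DrivingEvent("speeding", False), DrivingEvent("sudden_braking", True)]
print(monthly_premium(80.0, risk_score(events, 400)))  # 88.0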
Example
A data broker sells consumer profiles to financial companies without consumer permission or knowledge of the underlying data. The profiles define consumers into categories (carrying titles such as “Rural and Barely Making It,” “Ethnic Second-City Strugglers,” and “Tough Start: Young Single Parents”) or “score” them, focusing on consumers’ financial vulnerability. The financial companies offer these consumers payday loans and other “non-traditional” financial services (high-cost loans and other financially risky products).
Example
Some mobile applications provide location services allowing the user to find nearby restaurants offering discounts. However, the data collected is also used to build a profile on the data subject for marketing purposes – to identify their food preferences, or lifestyle in general. The data subject expects their data will be used to find restaurants, but not to receive adverts for pizza delivery just because the app has identified that they arrive home late. This further use of the location data may not be compatible with the purposes for which it was collected in the first place, and may thus require the consent of the individual concerned.
Example
A user buys some items from an on-line retailer. In order to fulfil the contract, the retailer must process the user’s credit card information for payment purposes and the user’s address to deliver the goods. Completion of the contract is not dependent upon building a profile of the user’s tastes and lifestyle choices based on his or her visits to the website. Even if profiling is specifically mentioned in the small print of the contract, this fact alone does not make it ‘necessary’ for the performance of the contract.
Example
A data broker undertakes profiling of personal data. In line with their Article 13 and 14 obligations, the data broker should inform the individual about the processing, including whether it intends to share the profile with any other organisations. The data broker should also present separately details of the right to object under Article 21(1).
The data broker shares the profile with another company. This company uses the profile to send the individual direct marketing.
The company should inform the individual (Article 14(1)(c)) about the purposes for using this profile, and from what source they obtained the information (Article 14(2)(f)). The company must also advise the data subject about their right to object to processing, including profiling, for direct marketing purposes (Article 21(2)).
The data broker and the company should allow the data subject the right to access the information used (Article 15), to correct any erroneous information (Article 16), and in certain circumstances to erase the profile or personal data used to create it (Article 17). The data subject should also be given information about their profile, for example in which ‘segments’ or ‘categories’ they are placed.
If the company uses the profile as part of a solely automated decision-making process with legal or similarly significant effects on the data subject, the company is the controller subject to the Article 22 provisions. (This does not exclude the data broker from Article 22 if the processing meets the relevant threshold.)
Example
A local surgery’s computer system places an individual into a group that is most likely to get heart disease. This ‘profile’ is not necessarily inaccurate even if he or she never suffers from heart disease. The profile merely states that he or she is more likely to get it. That may be factually correct as a matter of statistics.
Nevertheless, the data subject has the right, taking into account the purpose of the processing, to provide a supplementary statement. In the above scenario, this could be based, for example, on a more advanced medical computer system (and statistical model) carrying out more detailed examinations and factoring in additional data than the one at the local surgery with more limited capabilities.
EU commission response on contractual form of data processing agreements
“The GDPR further provides that such contract or legal act shall be in writing, including in electronic form. [..] In principle, automated contract processes are lawful. It is not necessary to append an electronic signature to contracts for them to have legal effects. E-signatures are one of several means to prove their conclusion and terms.[..]”
Full text:
http://www.europarl.europa.eu/sides/getAllAnswers.do?reference=E-2018-003163&language=EN
Belgium: new Belgian Data Protection Act (September 5, 2018)
The new Belgian Data Protection Act
http://www.ejustice.just.fgov.be/eli/wet/2018/07/30/2018040581/staatsblad
Sidley has an article on it here:
https://datamatters.sidley.com/new-belgian-data-protection-act-takes-effect/
“Genetic, Biometric and Health-Related Data Processing
Additional organizational and security measures must be put in place by data controllers and/or processors that process genetic, biometric or health-related data. On the basis of the Belgian Act, they must designate specific personnel authorized to access such data, and identify their capacity in relation to the data processing. A list with this information should be compiled and kept at the disposal of the competent Supervisory Authority. In addition, they must ensure that these individuals are bound by confidentiality with regard to this data on the basis of either statutory or contractual requirements.”
Sidley article on the new privacy law in California
On June 28, 2018, California Gov. Jerry Brown signed into law the California Consumer Privacy Act of 2018 (AB 375).
AB 375 will go into effect on Jan. 1, 2020, unless changed in the interim.
While it has been compared with the GDPR in news articles, there are significant differences.
https://datamatters.sidley.com/california-enacts-broad-privacy-protections-modeled-on-gdpr/