Best Practices – Privacy Design® [protecting people by good design, solid security, efficient processes and trusted services]

ULD SH: Länderübergreifende Datenschutz-Prüfung von Medien-Webseiten: Nachbesserungen nötig (cross-state data protection review of media websites: improvements needed) /2021/06/30/uld-sh-landerubergreifende-datenschutz-prufung-von-medien-webseiten-nachbesserungen-notig/ Wed, 30 Jun 2021

https://www.datenschutzzentrum.de/artikel/1377-Laenderuebergreifende-Datenschutz-Pruefung-von-Medien-Webseiten-Nachbesserungen-noetig.html

Coordinated cookie practice review by German DPAs.

Observations:

  • Wrong order – cookies/trackers loaded prior to consent
  • Wrong information – insufficient or incorrect information on user tracking at the first level of the consent banner
  • Wrong consent – insufficient scope of consent; many cookies/trackers remain active even if users deny consent to “All” at the first level of the banner
  • No easy consent denial/revocation – often no easy way to deny consent at the first level of the consent banner, or to close the banner without making a decision
  • Manipulation of users – dark design patterns, nudging, etc.
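The first observation (trackers set before consent) is straightforward to test: fetch the page with a fresh session and list every cookie that arrives before any consent has been given. A minimal sketch in Python, using hypothetical response headers in place of a live request:

```python
# Sketch of a pre-consent cookie check. In a real audit you would fetch the
# page with a fresh session (no consent cookie present) and inspect the
# Set-Cookie response headers; the headers below are a made-up example.

from http.cookies import SimpleCookie

def cookies_before_consent(set_cookie_headers):
    """Return the names of all cookies set on the very first, consent-free request."""
    names = []
    for header in set_cookie_headers:
        cookie = SimpleCookie()
        cookie.load(header)
        names.extend(cookie.keys())
    return names

# Hypothetical first-response headers from a media site:
headers = [
    "session_id=abc123; Path=/; HttpOnly",
    "_tracking_uid=xyz; Domain=.example-ads.net; Max-Age=31536000",
]
print(cookies_before_consent(headers))  # any non-essential name here is a finding
```

Any cookie in that list that is not strictly necessary for the service would be a "wrong order" finding in the sense of the review above.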
Fraunhofer: Die Datenschutz-Folgenabschätzung nach Art. 35 DSGVO – Ein Handbuch für die Praxis (the data protection impact assessment under Art. 35 GDPR – a handbook for practice) /2021/04/20/fraunhofer-die-datenschutz-folgenabschatzung-nach-art-35-dsgvo-ein-handbuch-fur-die-praxis/ Tue, 20 Apr 2021

http://publica.fraunhofer.de/starweb/servlet.starweb?path=urn.web&search=urn:nbn:de:0011-n-586394-15

Full text (online):
http://publica.fraunhofer.de/eprints/urn_nbn_de_0011-n-586394-15.pdf

Switzerland: Minimal security standards and assessment tool /2021/01/04/switzerland-minimal-security-standards-and-assesment-tool/ Mon, 04 Jan 2021 From 2018 (modelled on the NIST Cybersecurity Framework, with some modifications)

https://www.bwl.admin.ch/bwl/en/home/themen/ikt/ikt_minimalstandard.html

Article: Johner Institut on meeting German DiGA requirements /2020/11/30/article-johner-institut-on-meeting-german-diga-requirements/ Mon, 30 Nov 2020

https://www.johner-institut.de/blog/regulatory-affairs/datensicherheit-und-datenschutz-fuer-diga/

Includes an overview of regulatory requirements:

  • MDR
  • DVG
  • DIGAV
  • BSI-Standard 200-1, Managementsysteme für die Informationssicherheit (information security management systems)
  • BSI-Standard 200-2, IT-Grundschutz-Methodik (IT-Grundschutz methodology)
  • BSI TR-03161, Sicherheitsanforderungen an digitale Gesundheitsanwendungen (security requirements for digital health applications)
  • ISO 27001:2017
  • ISO/IEC 82304-1, Gesundheitssoftware – Teil 1: Allgemeine Anforderungen für die Produktsicherheit (health software – part 1: general requirements for product safety)
  • ISO/IEC 82304-2, Health software – Part 2: Health and wellness apps – Quality and reliability [future – includes a “seal”]
  • IEC 80001-5-1, Safety, security and effectiveness in the implementation and use of connected medical devices or connected health software – Part 5-1: Security – Activities in the product lifecycle
Germany: DPAs (DSK) Paper on Microsoft Windows 10 Telemetry Functions (with BSI input) /2020/11/30/germany-dpas-dsk-paper-on-microsoft-windows-10-telemetry-functions-with-bsi-input/ Mon, 30 Nov 2020 Telemetriefunktionen und Datenschutz beim Einsatz von Windows 10 Enterprise (telemetry functions and data protection when using Windows 10 Enterprise)

https://www.datenschutzkonferenz-online.de/media/dskb/TOP_30_Beschluss_Windows_10_mit_Anlagen.pdf

UKANON: second edition of the Anonymisation Decision-making Framework /2020/11/17/ukanon-second-edition-of-the-anonymisation-decision-making-framework/ Tue, 17 Nov 2020 The Framework has been given a significant overhaul, and for the first time there is a systematic method for evaluating your data environment.

https://ukanon.net/framework/

BSI TR-03161 Sicherheitsanforderungen an digitale Gesundheitsanwendungen /2020/11/11/bsi-tr-03161-sicherheitsanforderungen-an-digitale-gesundheitsanwendungen/ Wed, 11 Nov 2020 Germany: BSI – Security requirements for digital health applications
https://www.bsi.bund.de/DE/Themen/Unternehmen-und-Organisationen/Standards-und-Zertifizierung/Technische-Richtlinien/TR-nach-Thema-sortiert/tr03161/tr-03161.html

English version: https://www.bsi.bund.de/EN/Publications/TechnicalGuidelines/TR03161/TechnicalGuidelines_03161_node.html
with direct link: https://www.bsi.bund.de/SharedDocs/Downloads/EN/BSI/Publications/TechGuidelines/TR03161/TR-03161.pdf?__blob=publicationFile&v=2

AEPD: verification list for Privacy by Design audits /2020/10/23/aepd-verification-list-for-privacy-by-design-audits/ Fri, 23 Oct 2020 The AEPD provides a non-exhaustive verification list for Privacy by Design audits in chapter VIII of its guidance (in English!)

https://www.aepd.es/sites/default/files/2020-10/guia-proteccion-datos-por-defecto-en.pdf

Bavaria: Data Protection Checklists (incl. Guidance on TOMs) /2020/10/14/bavaria-data-protection-checklists/ Wed, 14 Oct 2020

The DPA of Bavaria has published a set of checklists (in German)
at https://www.lda.bayern.de/de/checklisten.html.

Paper: Bitkom: Anonymisierung und Pseudonymisierung von Daten für Projekte des maschinellen Lernens /2020/10/09/paper-bitkom-anonymisierung-und-pseudonymisierung-von-daten-fur-projekte-des-maschinellen-lernens/ Fri, 09 Oct 2020

Anonymization and Pseudonymization of data used in Machine Learning Projects

https://www.bitkom.org/sites/default/files/2020-10/201002_lf_anonymisierung-und-pseudonymisierung-von-daten.pdf

Examples given:

  • Processing of geolocation profiles (movements)
  • Google’s COVID-19 Community Mobility Reports
  • De-coupled pseudonyms, e.g. for manufacturers remotely monitoring machine performance at customer sites
  • Speech recognition as an example of federated learning
  • Anonymization and pseudonymization of medical text data using Natural Language Processing
  • Semantic anonymization of sensitive data with inference-based AI and active ontologies in the financial industry

Key words:

    • Anonymization of structured data
        • Approaches
        • Aggregation approach
          • Generalization, Microaggregation
          • k-anonymity, l-diversity, t-closeness
          • Mondrian algorithm, MDAV method (Maximum Distance to Average Vector)
        • Randomization approach
          • Adding noise
        • Synthetic approach
          • (Creating a synthetic model based on original data to generate “matching” synthetic data)
      • Attacks
        • Was personal data of a known person used to generate the anonymous data?
        • Which data in the anonymous data relates to personal data of a known person?
        • Predicting attributes of a known person
      • Static anonymization, Dynamic anonymization, Interactive anonymization
      • Pseudonymization
        • Format preserving encryption, Tokenization, Trusted third party, Pseudonymous Authentication (PAUTH), Oblivious transfer
      • Anonymization of texts
        • Ensure that free text includes no identifying terms (e.g. via organizational measures)
        • Masking of identifying terms as part of post-processing
        • Structuring via Natural Language Processing
        • Caveat: Author might be identifiable based on writing style
      • Anonymization of multimedia data
      • Privacy via on-prem analysis and decentralization (see also: federated learning)
        • Homomorphic encryption: fully homomorphic, partially homomorphic, somewhat homomorphic
        • Secure multi-party computation
        • Garbled circuits
      • Privacy risks related to machine learning and controls
        • Identification of persons
        • De-anonymization of data (e.g. of blurred images)
        • Membership inference
        • Model inversion
        • Defeating noise, and others
    • Federated learning
      • (Moving algorithms to the local data – instead of moving data to central algorithm)
      • (Local data doesn’t leave device)
      • AI models as personal data
      • Legal advantages of federated learning
    • Attacks and controls
      • Model inversion
        • Querying the trained AI model to reconstruct its training data
      • Membership inference
        • Was a given data point used to train the model?
      • Model extraction
        • “Stealing” the trained model – by cloning the behaviour and predictive capabilities of a given AI model
      • Adversarial examples (creating inputs that trigger unintended responses)
      • Countermeasures
        • Restrictions on outputs
        • Adversarial Regularization
        • Distillation
        • Differential Privacy
        • Cryptography
        • Secure multi-party computation (MPC)
        • Federated machine learning
        • Differentially Private Data Synthesis (DIPS) (e.g. via copula functions, Generative Adversarial Networks)
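The "adding noise" randomization approach, and the Differential Privacy countermeasure listed above, can be illustrated with the Laplace mechanism applied to a counting query. This is a minimal sketch, not taken from the Bitkom paper; the dataset and epsilon value are illustrative assumptions:

```python
# Laplace mechanism sketch: perturb a count with Laplace noise so that the
# released number is epsilon-differentially private.

import random

def laplace_noise(scale):
    """Laplace(0, scale) sample, drawn as the difference of two exponentials."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_count(values, predicate, epsilon):
    """Differentially private count. A counting query has sensitivity 1,
    so Laplace noise with scale 1/epsilon yields epsilon-DP."""
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)

ages = [34, 45, 29, 61, 52, 38, 47]  # hypothetical attribute values
print(dp_count(ages, lambda a: a > 40, epsilon=0.5))  # near the true count 4, but randomized
```

Smaller epsilon means more noise and stronger privacy; the analyst trades accuracy of the released count against the privacy guarantee.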
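The federated learning idea from the outline (move the algorithm to the local data, so raw records never leave the device) can be shown with a toy federated-averaging round over a one-parameter linear model. All data and parameters here are made up for illustration:

```python
# Toy federated averaging: each client computes a gradient on its private
# data locally; only the gradients (model updates) are sent to the server,
# never the raw (x, y) records.

def local_gradient(w, data):
    """One MSE gradient for the model y ≈ w*x, computed on-device."""
    return sum(2 * (w * x - y) * x for x, y in data) / len(data)

def federated_round(w, clients, lr=0.01):
    """Server step: average the clients' gradients and update the shared model."""
    grads = [local_gradient(w, data) for data in clients]
    return w - lr * sum(grads) / len(grads)

# Two hypothetical devices, each holding private samples of y = 3x:
clients = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0)]]
w = 0.0
for _ in range(200):
    w = federated_round(w, clients, lr=0.02)
print(round(w, 2))  # converges to the true slope 3.0
```

This is also where the attacks listed above bite: the shared model (or its updates) can itself leak training data, which is why the countermeasures pair federated learning with differential privacy or secure aggregation.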