Leverage Personalised Recommendations for Enhanced User Management of Privacy Settings

Problem Summary

Users face difficulties in effectively managing their privacy settings when sharing content online, leading to potential privacy breaches. The texts and images shared online often contain sensitive personal information, and default or poorly configured privacy settings can inadvertently expose this information. These difficulties stem from the complexity of privacy settings and the sheer volume of content shared online, necessitating greater user awareness and understanding of privacy implications. Manually configuring privacy settings for each piece of shared content, whether text-based posts or images, is challenging. Current systems are inadequate in providing personalised, dynamic privacy management solutions that align with users' evolving privacy needs and preferences.

Rationale

Automating privacy settings recommendations aims to enhance user privacy protection, reduce the risk of accidental data exposure, and alleviate the burden on users to understand and navigate complex privacy settings interfaces.

Solution

The development of advanced, user-friendly systems leveraging machine learning, data mining, and semantic analysis to predict and recommend personalised privacy settings for users.

Chen et al. [1] proposed a model to predict and recommend privacy settings for text-based social media posts using a multi-class classifier and crowdsourcing. The solution analyses historical posts, social context, and keywords to recommend privacy settings. For new users or those with insufficient data (cold-start problem), it integrates crowdsourcing with machine learning, utilising data from other users to predict privacy settings.
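To illustrate the classification step (this is a minimal sketch, not the authors' actual model), the following code scores a new post against per-class keyword centroids built from a user's historical posts. All posts, labels, and features below are invented for illustration:

```python
from collections import Counter

# Hypothetical training data: post text -> privacy level the user chose.
HISTORY = [
    ("family dinner photos at home", "friends"),
    ("my new phone number and address", "only_me"),
    ("great article on machine learning", "public"),
    ("kids first day at school", "friends"),
    ("conference talk slides are online", "public"),
    ("medical appointment tomorrow", "only_me"),
]

def bag_of_words(text):
    return Counter(text.lower().split())

def train_centroids(history):
    """Average the bag-of-words vectors per privacy level (class centroids)."""
    centroids, counts = {}, Counter()
    for text, label in history:
        counts[label] += 1
        centroids.setdefault(label, Counter()).update(bag_of_words(text))
    return {lbl: {w: n / counts[lbl] for w, n in c.items()}
            for lbl, c in centroids.items()}

def recommend_setting(text, centroids):
    """Recommend the privacy level whose centroid best overlaps the post."""
    words = bag_of_words(text)
    def score(centroid):
        return sum(centroid.get(w, 0.0) * n for w, n in words.items())
    return max(centroids, key=lambda lbl: score(centroids[lbl]))

centroids = train_centroids(HISTORY)
print(recommend_setting("sharing my home address with movers", centroids))
```

In a cold-start scenario, the same scoring could be run against centroids built from crowdsourced posts of other users rather than the user's own history.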

Nakamura et al. [2] proposed a prediction engine for default privacy settings that uses Support Vector Machines (SVMs) trained on minimal user information collected at registration. The solution uses attributes such as age, gender, and mobile phone type to predict suitable default privacy settings, improving the user experience during account setup.
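A minimal sketch of this idea, with a Pegasos-style sub-gradient linear SVM standing in for a full SVM library; the attribute encoding, data, and labels are invented, not taken from the paper:

```python
import random

# Toy, hand-made data: [age/100, is_smartphone, shares_location] -> label.
# +1 = permissive defaults acceptable, -1 = restrictive defaults preferred.
DATA = [
    ([0.18, 1.0, 1.0], -1), ([0.22, 1.0, 1.0], -1),
    ([0.55, 0.0, 0.0], +1), ([0.61, 0.0, 0.0], +1),
    ([0.25, 1.0, 0.0], -1), ([0.58, 0.0, 1.0], +1),
]

def train_svm(data, lam=0.01, epochs=200, seed=0):
    """Pegasos-style sub-gradient training of a linear SVM (no bias term)."""
    rng = random.Random(seed)
    w = [0.0] * len(data[0][0])
    t = 0
    for _ in range(epochs):
        for x, y in rng.sample(data, len(data)):
            t += 1
            eta = 1.0 / (lam * t)
            margin = y * sum(wi * xi for wi, xi in zip(w, x))
            w = [(1 - eta * lam) * wi for wi in w]   # regularisation shrink
            if margin < 1:                           # hinge-loss violation
                w = [wi + eta * y * xi for wi, xi in zip(w, x)]
    return w

def predict(w, x):
    return +1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1

w = train_svm(DATA)
print(predict(w, [0.60, 0.0, 0.0]))   # an older, feature-phone user
```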

Rosni et al. [3] proposed a system using Factorization Machines (FMs) to model and predict user consent preferences for LinkedIn accounts. The solution predicts privacy settings based on user attributes and behaviours, reducing consent fatigue and improving privacy management.

Orekondy, Schiele, and Fritz [4] proposed the Visual Privacy Advisor, a system that predicts privacy risks in images shared on social media. It employs Convolutional Neural Networks (CNNs) such as CaffeNet, GoogLeNet, and ResNet-50 in a multi-label classification setting to detect privacy-sensitive attributes in images and to derive privacy risk scores from them, helping users manage the privacy of their visual content. The approach is validated through a user study assessing the alignment between users' privacy preferences and the model's risk predictions, with the goal of providing personalised privacy risk assessments for images. The dataset and other materials are available at https://tribhuvanesh.github.io/vpa/.
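The step from multi-label attribute predictions to a personalised risk score can be sketched as below. This is a simplified aggregation, not the paper's exact scoring function; the attribute names, probabilities, and sensitivity ratings are invented:

```python
# Hypothetical probabilities as a multi-label CNN head might output them.
predicted_attributes = {
    "face": 0.97, "licence_plate": 0.10, "home_address": 0.72, "passport": 0.01,
}

# User-specific sensitivity ratings, e.g. 1 (harmless) to 5 (severe).
user_preferences = {
    "face": 2, "licence_plate": 4, "home_address": 5, "passport": 5,
}

def privacy_risk(probs, prefs, threshold=0.5):
    """Sum the user's sensitivity over attributes predicted to be present."""
    present = [a for a, p in probs.items() if p >= threshold]
    return sum(prefs[a] for a in present), present

score, present = privacy_risk(predicted_attributes, user_preferences)
print(score, present)
```

Because the sensitivity ratings are per-user, the same predicted attributes yield different risk scores for different users, which is the personalisation the paper aims for.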

Lee and Kobsa [5] focused on privacy preference prediction in IoT, proposing a model for predicting user privacy preferences using contextual factors and clustering. The solution utilises factors like location, monitoring entity, and purpose to provide personalised privacy recommendations for IoT devices.

Squicciarini et al. [6] presented an Adaptive Privacy Policy Prediction (A3P) system for images on social media. The system uses image content, metadata, and social context to generate personalised privacy policies automatically. When a user uploads an image, A3P first classifies the image using content and metadata, then predicts a privacy policy using historical data. If the user lacks history or exhibits changes in privacy preferences, A3P consults social context to align recommendations with similar users' practices. This dual approach ensures tailored privacy settings, addressing individual preferences and evolving social dynamics, simplifying privacy management in complex social media environments.

Albertini, Carminati and Ferrari [7] focused on a recommender system for privacy settings in social networks that uses relationship-based Access Control (ReBAC) and association rules mining. The system analyses user behaviour and relationship data to recommend personalised access control policies. It dynamically suggests access control policies when new resources are uploaded, using learned associations to predict the most appropriate privacy settings. This model aims to alleviate the complexity of manually setting privacy policies by providing automated, context-sensitive recommendations tailored to individual user habits and preferences.

Squicciarini et al. [8] presented a tag-driven policy recommender (T2P) for image sharing on social networks. The solution correlates image tags with privacy preferences, offering personalised recommendations based on tag semantics and co-presence. For new images with unfamiliar tags, T2P employs a cold-start approach using semantic analysis to recommend privacy settings, ensuring relevance even when historical data is lacking.

Mondal et al. [9] proposed a semi-automated privacy management approach to update outdated privacy settings on social media, particularly Facebook, using user studies and machine learning. The system identifies and corrects potentially incorrect privacy settings based on user interactions and dynamic behaviour. The study considered the longitudinal aspect of privacy management, highlighting the challenges users face in maintaining appropriate privacy settings as their life circumstances and social networks evolve.

Sanchez et al. [10] first defined a data model of privacy preferences for the fitness domain using the PPIoT ontology (Privacy Preference for IoT Ontology). This model includes permissions based on the most popular fitness trackers and GDPR requirements. They used a crowdsourced dataset to identify privacy profiles via machine learning clustering algorithms. They also developed a tree-based classifier to recommend privacy profiles based on user traits, such as privacy attitudes, social behaviour, and demographics. The recommendation strategies were integrated into a personal data manager framework, aiming to simplify privacy settings while maintaining GDPR compliance and user control over their preferences.
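The profile-identification step can be sketched with a plain k-means over users' permission vectors; the permissions, answers, and cluster count below are invented, and the rounded centroids act as recommendable privacy profiles:

```python
def kmeans(points, k=2, iters=20):
    """Plain k-means with deterministic init (evenly spaced seed points)."""
    centroids = [list(points[i * len(points) // k]) for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            groups[dists.index(min(dists))].append(p)
        for i, g in enumerate(groups):
            if g:
                centroids[i] = [sum(col) / len(g) for col in zip(*g)]
    return centroids

# Rows: one user's grant/deny answers to four hypothetical permissions,
# e.g. [share_steps, share_heart_rate, share_location, share_with_advertisers].
answers = [
    [1, 1, 1, 1], [1, 1, 1, 0], [1, 1, 0, 1],   # mostly permissive users
    [0, 0, 0, 0], [1, 0, 0, 0], [0, 0, 1, 0],   # mostly protective users
]

profiles = kmeans(answers, k=2)
# Round centroids into recommendable 0/1 privacy profiles.
print([[round(v) for v in c] for c in profiles])
```

A tree-based classifier, as in the paper, would then map user traits (privacy attitudes, demographics) to one of these profiles for new users.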

Kelley et al. [11] presented an incremental policy learning approach where users provide feedback to refine privacy settings. The solution uses user feedback to improve privacy policies incrementally, ensuring settings align with user preferences. A key component is the neighbourhood search mechanism, which explores close variations of the current policy to suggest incremental improvements. These suggestions are then presented to the user, who decides which, if any, to accept, ensuring the user's control over the policy evolution. This approach combines machine learning with user insight by allowing users to interact directly with and refine their preferences based on system suggestions, offering a tailored and user-friendly method for managing privacy settings. To demonstrate the approach's effectiveness, they developed a prototype application called PeopleFinder, in which users share their locations with others based on refined privacy configurations.
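The neighbourhood search can be sketched as follows: hypothetical policy rules map requester groups to allow/deny, candidate policies differing in exactly one rule are scored against audited feedback, and the best improvement is suggested to the user (all names and data are invented):

```python
def accuracy(policy, feedback):
    """Fraction of audited requests where the policy matched the user's wish."""
    return sum(policy[g] == wanted for g, wanted in feedback) / len(feedback)

def neighbours(policy):
    """All policies that differ from the current one in exactly one rule."""
    for g in policy:
        flipped = dict(policy)
        flipped[g] = not flipped[g]
        yield flipped

def suggest(policy, feedback):
    """Return the best one-step change, or None if the current policy wins."""
    best, best_acc = None, accuracy(policy, feedback)
    for cand in neighbours(policy):
        acc = accuracy(cand, feedback)
        if acc > best_acc:
            best, best_acc = cand, acc
    return best

# Current rules: which requester groups may see the user's location.
policy = {"family": True, "colleagues": True, "strangers": False}
# Audit-style feedback: (requester group, decision the user actually wanted).
feedback = [("family", True), ("colleagues", False),
            ("colleagues", False), ("strangers", False)]

print(suggest(policy, feedback))
```

Crucially, the suggestion is only a proposal: the user accepts or rejects it, keeping control over how the policy evolves.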

Misra and Such [12] proposed PACMAN, an agent that recommends personalised access control decisions on social media based on social context and network structure. The agent analyses relationships and content to make dynamic privacy recommendations, ensuring settings match user interactions.

Munemasa and Iwaihara [13] proposed a recommendation system for social networking services based on trend analysis and privacy scores. The solution uses privacy settings data and user trends to provide personalised privacy recommendations, helping users understand common privacy practices. These recommendations are visualised on the user's SNS privacy settings page, aiding users in adjusting their settings based on the analysis of large-scale user data and specific attribute co-occurrences, ultimately helping users manage their online privacy better.

Villarán and Beltrán [14] proposed a Privacy Advisor for federated identity systems that provides personalised recommendations to assist users in managing privacy settings. The solution provides real-time, personalised privacy advice based on user profiles and service provider practices, ensuring compliance with regulations like GDPR.

Shanmugarasa et al. [15] present a Privacy Preference Recommender System (PPRS) designed for smart home environments, integrating Personal Data Stores (PDSs) to assist users in making data-sharing decisions. The PPRS uses machine learning models to recommend privacy settings dynamically based on contextual factors and user preferences. The system operates within the PDS architecture, ensuring that all data processing and analytics remain under the user’s control. It automates privacy decisions by suggesting appropriate actions for data-sharing requests, reducing the burden on users to configure their privacy settings manually.

Bernsmed, Tøndel, and Nyre [16] presented the Privacy Advisor, a software tool that uses Case-Based Reasoning (CBR) to assist users in making informed privacy decisions by providing personalised recommendations based on past experiences. The Privacy Advisor retrieves and adapts solutions from similar past cases to recommend privacy settings. It learns from user feedback to improve future recommendations, ensuring that privacy policies align with users' preferences and contexts.

Bilogrevic et al. [17] proposed the Smart Privacy-aware Information Sharing Mechanism (SPISM), which uses machine-learning techniques to make semi-automatic decisions about whether to share information and at what level of detail. This system adapts to each user's behaviour and predicts the level of detail for each sharing decision based on personal and contextual features. By learning from user behaviours and preferences over time, SPISM offers personalised recommendations for information sharing, ensuring that the recommendations align with individual privacy preferences and contexts.

Alom et al. [18] proposed two innovative techniques: Knapsack Privacy Checking (KPC) and Knapsack Graph-Based Privacy Checking (KPC-G). These techniques frame the privacy-checking problem as a knapsack problem, optimising the selection of services based on user-specified privacy preferences and tolerance levels. The KPC method matches services to user preferences based on a preset tolerance level. In contrast, the KPC-G method dynamically adjusts these tolerance values by leveraging a similarity graph of user privacy preferences. The latter maximises user satisfaction by suggesting services that balance privacy concerns with benefits.
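The knapsack framing can be illustrated with a standard 0/1 dynamic program that selects services maximising perceived benefit under a privacy-cost tolerance. The services, benefits, and costs below are invented, and this is the generic knapsack step only, not the paper's graph-based tolerance adjustment:

```python
def select_services(services, tolerance):
    """0/1 knapsack: maximise benefit with total privacy cost <= tolerance.

    services: list of (name, benefit, privacy_cost); costs are integers.
    """
    # dp[c] = (best_benefit, chosen_names) achievable with cost budget c.
    dp = [(0, [])] * (tolerance + 1)
    for name, benefit, cost in services:
        new_dp = list(dp)
        for c in range(cost, tolerance + 1):
            cand = (dp[c - cost][0] + benefit, dp[c - cost][1] + [name])
            if cand[0] > new_dp[c][0]:
                new_dp[c] = cand
        dp = new_dp                      # commit item (each used at most once)
    return dp[tolerance]

# Invented example: (service, perceived benefit, privacy cost on a 1-10 scale).
services = [("maps", 8, 6), ("fitness", 6, 4), ("weather", 3, 1), ("ads", 1, 5)]
best_benefit, chosen = select_services(services, tolerance=7)
print(best_benefit, chosen)
```

KPC-G would additionally adjust `tolerance` per user by consulting a similarity graph of privacy preferences rather than using a preset value.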

Couto and Zorzo [19] present a privacy negotiation mechanism for Internet of Things (IoT) environments. The proposed mechanism mediates the exchange of information between data producers (users) and data consumers (services), allowing users to control their data disclosure more effectively. The mechanism leverages a Multilayer Perceptron (MLP) neural network to predict user privacy preferences, enabling automatic responses to data requests based on learned patterns. Users can set a confidence level for these predictions, ensuring the mechanism's responses align with their privacy expectations.
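The confidence-gated behaviour (answer a request automatically only when the predictor is sufficiently sure, otherwise defer to the user) can be sketched as below. A hand-tuned toy scorer stands in for the trained MLP, and all request fields and thresholds are invented:

```python
def auto_respond(request, predict_allow_prob, confidence=0.8):
    """Answer a data request automatically only when the model is confident.

    predict_allow_prob: callable returning P(user would allow) for a request.
    Returns "allow"/"deny" when confident, otherwise "ask_user".
    """
    p = predict_allow_prob(request)
    if p >= confidence:
        return "allow"
    if p <= 1 - confidence:
        return "deny"
    return "ask_user"

# Stand-in for the trained MLP: a hand-tuned score over request fields.
def toy_model(request):
    score = 0.5
    if request["purpose"] == "service_improvement":
        score += 0.4
    if request["data"] == "location" and request["consumer"] == "advertiser":
        score -= 0.45
    return min(max(score, 0.0), 1.0)

print(auto_respond({"data": "steps", "purpose": "service_improvement",
                    "consumer": "physician"}, toy_model))
print(auto_respond({"data": "location", "purpose": "marketing",
                    "consumer": "advertiser"}, toy_model))
```

Raising `confidence` trades automation for safety: more requests fall through to the user, but fewer are answered against their actual preference.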

Filipczuk et al. [20] propose an innovative agent-based negotiation framework to manage privacy permissions between users and service providers. This framework leverages a multi-issue alternating-offer protocol that accommodates partial and complete offers, aiming to automate privacy negotiations and reduce user burden. The framework uses autonomous agents to negotiate on behalf of users, presenting a potential agreement that the user can accept, override, or continue to negotiate. It introduces methods for implicitly learning user preferences through feedback on offers, either by accepting or rejecting them.
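A much-simplified sketch of alternating-offer negotiation over permission bundles: each agent proposes its best remaining offer and accepts when the opponent's standing offer is at least as good as its own next proposal. The permissions and utility functions are invented, and the real protocol additionally handles partial offers and implicit preference learning:

```python
from itertools import combinations

PERMISSIONS = ("location", "contacts", "usage_stats")

def all_offers():
    """Every subset of permissions is a possible (complete) offer."""
    return [frozenset(c) for r in range(len(PERMISSIONS) + 1)
            for c in combinations(PERMISSIONS, r)]

def negotiate(u_user, u_provider, rounds=10):
    """Alternating offers: accept when the opponent's standing offer is at
    least as good (for you) as your own next proposal; otherwise counter."""
    queues = [sorted(all_offers(), key=u_user, reverse=True),
              sorted(all_offers(), key=u_provider, reverse=True)]
    utils = [u_user, u_provider]
    pending = None
    for t in range(rounds):
        side = t % 2
        mine = queues[side][0]
        if pending is not None and utils[side](pending) >= utils[side](mine):
            return pending                      # accept opponent's offer
        pending = queues[side].pop(0)           # make a counter-offer
    return None                                 # no agreement reached

# Invented utilities: the user dislikes sharing, the provider profits from it.
cost = {"location": 3, "contacts": 4, "usage_stats": 1}
gain = {"location": 2, "contacts": 1, "usage_stats": 2}
u_user = lambda o: 5 - sum(cost[p] for p in o)       # the service is worth 5
u_provider = lambda o: sum(gain[p] for p in o)

deal = negotiate(u_user, u_provider)
print(sorted(deal))
```

In the paper's framework, the agreement found by the agents is only presented to the user, who may accept it, override it, or continue negotiating.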

VeilMe [21] reduces user effort in configuring privacy settings by offering initial recommendations generated through rule-based and prediction-based approaches. These initial settings give users a reasonable starting point, tailored to their personality and common sharing preferences, from which to adjust otherwise complex privacy configurations.

Platforms: personal computers, mobile devices, smart devices

Related guidelines: Implement Contextual Privacy Controls for Enhanced User Data Protection, Provide Users with User-Friendly Tools to Manage Their Privacy Settings

Example

Screenshot of the PeopleFinder prototype interface for feedback and location <a href="#section11">[11]</a>.


From top to bottom right: (a) dialogue before sending profiles and privacy settings; (b) privacy score diagnosis; (c) recommendation by attribute co-occurrence <a href="#section13">[13]</a>.


Privacy Advisor GUI <a href="#section16">[16]</a>.


The SPISM mobile application interfaces allow users to register and log in, check other users’ current locations, view nearby devices and their availability, and access features such as past activity records and contacts lists <a href="#section17">[17]</a>.

PrivacyApplication interface <a href="#section19">[19]</a>.


Screenshots of the negotiation prototype as in <a href="#section20">[20]</a>.


Interface for users setting their privacy preferences according to their choices as provided in <a href="#section3">[3]</a>.


Screenshot of the VeilMe interface <a href="#section21">[21]</a>. Panels A and B: the user’s Twitter profile and latest tweets; C: portrait exploration panel; D: privacy setting panel. Users can click to expand traits into sub-traits, and hovering over a social distance knob shows the audience names entered for that group.

Use cases
  • Visualising privacy settings distributions among similar users to facilitate informed privacy settings adjustments by users.
  • Adapting users' privacy settings to changes in their social context or privacy preferences over time.
  • Offering privacy setting recommendations to users new to a platform, leveraging crowd-sourced data and machine learning.
  • Helping users automatically configure privacy settings for their social media posts based on content analysis and historical behaviour.
  • Educating users about the privacy implications of their settings through visual feedback and recommendations, promoting more privacy-aware behaviour online.
  • Leveraging machine learning and artificial intelligence to create personalised privacy advisors or agents that can automatically configure privacy settings based on the user's past decisions, preferences, and context.
Pros

  • Systems like the Privacy Advisor [16] and those employing human-in-the-loop approaches [9] emphasise user-centred design, keeping users in control and making privacy decisions less mentally demanding. They communicate privacy settings effectively through visualisations and recommendation messages validated in field studies and user feedback [13][14]. Extending privacy management to visual content [4] and leveraging AI for scalable, automatic analysis [7] demonstrate clear advances in privacy management tools, in some cases outperforming human judgment in identifying privacy risks [4]. The simplicity and generalisability of privacy profiles [10] and incremental policy suggestions [11] facilitate user understanding and decision-making, aligning privacy choices with user preferences and contexts. These approaches underscore the systems' ability to adapt to changing user needs and to improve the relevance and effectiveness of privacy policies over time.
  • Many proposals effectively address the cold start problem by leveraging machine learning, crowdsourcing, and contextual information to recommend privacy policies for new users. These systems, such as T2P [8] and A3P [6], show promising results without prior user data, providing utility across various scenarios. They incorporate user feedback to refine recommendations, enhancing accuracy over time. The use of diverse datasets and human evaluators, as seen in the studies [4][7], adds real-world applicability, demonstrating significant potential for practical implementation in online social networks and other platforms. These systems' adaptability and multifaceted feature selection, combining text, sentiment, and keyword analysis [1], enhance their performance and align well with user expectations and privacy preferences.
  • The various proposed solutions demonstrated high accuracy and effectiveness in predicting and managing user privacy preferences across different contexts and platforms. PACMAN [12] achieves an average accuracy of 91.8% by considering relationship types and content information and making precise access control recommendations for social media. The consent recommender system [3] shows an accuracy of 87% for users with no prior data, performing even better for users with existing data. Decision tree models achieve 77% accuracy in predicting privacy decisions for IoT scenarios [5], while the PPRS [15] reaches 92.62% accuracy in data-sharing decisions, effectively addressing the cold start problem. SPISM [17] outperforms individual privacy policies, achieving up to 90% correct sharing decisions with limited user setup. The KPC-G [18] model and other mechanisms [19] show high satisfaction and accuracy rates (up to 88%), validating their effectiveness in aligning user privacy choices with their preferences and reducing the mental demand of decision-making processes.

Cons

  • Several proposals encounter scalability issues and difficulties in accounting for user-specific differences. For example, the complexity and cost of the A3P-Social [6] component pose challenges for efficient implementation, and systems like PACMAN [12] may not fully capture the complexity of social relationships. Furthermore, one-size-fits-all approaches may not satisfy unique privacy needs [8], and reliance on user-supplied content information can limit effectiveness in scenarios lacking sufficient metadata [12]. Over-reliance on technology for privacy management could lead to user complacency, potentially overlooking non-visual privacy risks [4], and users' capacity to review suggestions may decrease as the number and complexity of policies and configurations grow [11]. Additionally, advanced privacy management tools often require significant computational resources and technical knowledge, which could limit accessibility for average users or small-scale platforms [4].
  • Many of the proposals face challenges in model adaptability and user experience. For instance, while combining text, sentiment, and keyword features achieves high prediction accuracy, it may limit adaptability to posts with less indicative features [1]. Similarly, although effective for new users, global classifiers do not match the performance of personalised classifiers, leaving room for improvement in handling new user data [1]. Systems also struggle with real-world applicability due to biases in participant demographics and expertise, survey scenarios, assumptions about the availability of dynamic features on mobile phones, and specific web application scenarios [3][5][9][10][11][13][14][15][16].

Privacy Choices

This guideline discusses solutions that contribute to the design space of privacy choices [22]. They address key aspects of how users interact with privacy settings, make privacy decisions, and manage their personal data across various platforms. Considering the design space for privacy choices, this guideline can be applied in the following dimensions:

  • Binary choices
    The solutions discussed in this guideline can provide users with a straightforward, binary choice regarding their privacy settings, such as opt-in or opt-out of data collection and processing.
  • Multiple choices
    The solutions discussed in this guideline offer users various privacy settings options, allowing for more granular control over their data. These choices enable users to tailor their privacy preferences more precisely, beyond the simple binary options.
  • Contextualised
    The discussed solutions in this guideline adopt privacy settings recommendations based on the context of data collection or user activity. These solutions provide contextualised privacy choices by analysing user behaviour or leveraging machine learning to understand user preferences in different situations. This approach aligns with the contextual integrity framework, recognising that privacy preferences may vary significantly depending on the context, such as time, location, or purpose of data collection.
  • Privacy rights-based choices
    Recommendations can assist users in exercising their privacy rights (e.g., access, rectification, erasure) by suggesting appropriate actions based on their data and usage patterns.

  • On-demand
    With the solutions discussed in this guideline, users can actively seek recommendations for privacy settings or modifications based on their current concerns or changes in their privacy preferences. This empowers users to take control of their privacy settings whenever they need to review or change them.
  • Personalised
    The solutions discussed in this guideline offer personalised privacy settings recommendations based on the users' behaviours, preferences, and previous decisions. This approach acknowledges the diversity in users' privacy preferences and provides tailored suggestions that align with individual privacy needs and expectations.
  • Just in time
    The discussed solutions in this guideline provide privacy settings recommendations and adjustments when the user is about to share personal data or when a specific data practice is imminent. This is in line with presenting privacy choices at the moment they are most relevant to the user's current actions, enhancing the decision-making process by making it contextually appropriate.
  • Context-aware
    Recommendations can be tailored to specific contexts, such as location or activity, making privacy choices more relevant and effective.

  • Auditory
    For users who prefer auditory inputs, recommendations can be provided through spoken words or alerts.
  • Combined
    Utilising multiple modalities ensures that personalised recommendations are accessible and comprehensible through various means, enhancing user engagement and understanding.
  • Machine-readable
    Personalised privacy settings can be encoded in a machine-readable format, enabling software agents to manage privacy settings on behalf of users.
  • Visual
    Personalised privacy recommendations can be presented visually through text, images, or icons, making them easy to understand and act upon.

  • Enforcement
    The solutions discussed in this guideline propose or imply using automated systems to enforce privacy choices.
  • Feedback
    The discussed solutions also involve providing users with confirmation or updates regarding their privacy settings, enhancing transparency and user confidence in the system's respect for their decisions.
  • Presentation
    Privacy choices always involve a presentation component: the system must give users clear, easily understandable information about potential data practices, the available options, and how to communicate their privacy decisions. Presentation often incorporates multiple components, integrates with related privacy notices, and requires careful consideration of design dimensions such as timing, channel, and modality [10].

  • Secondary
    When primary channels are not available or suitable, secondary channels (e.g., mobile apps and websites) can provide personalised privacy recommendations.
  • Primary
    Personalised recommendations can be integrated directly into the primary channel (e.g., website, app) where the user interacts with the system, ensuring seamless and contextually relevant privacy choices.

Control

This guideline focuses on enabling users to manage their privacy settings effectively through personalised recommendations. It aligns with the Control attribute [23], which emphasises users' ability to make informed decisions about their data and to influence how their data is handled. Personalised recommendations enhance users' control over their privacy settings, ensuring they can opt in or opt out according to their preferences and empowering them to play an active role in managing their data. Other related privacy attributes:

The discussed solutions in this guideline could influence data collection minimisation by recommending settings limiting data sharing.

By enabling users to set appropriate privacy settings, the discussed solutions in this guideline indirectly contribute to protecting personal data against unauthorised access.

Personalised recommendations often involve providing users with clear and accessible information about their privacy settings and how their data is being managed. This helps users make informed decisions, thereby enhancing transparency.

The solutions discussed in this guideline touch upon the trade-off between functionality and privacy, aiming to offer users the ability to maintain privacy without sacrificing service utility.

Personalised recommendations can help identify and correct mismatches or inaccuracies in privacy settings. By continuously adapting to user feedback and behaviour, the system ensures that privacy settings accurately reflect the user's preferences, thereby helping to maintain data correctness.


References

[1] Lijun Chen, Ming Xu, Xue Yang, Ning Zheng, Yiming Wu, Jian Xu, Tong Qiao, and Hongbin Liu. A Privacy Settings Prediction Model for Textual Posts on Social Networks. In: Romdhani, I., Shu, L., Takahiro, H., Zhou, Z., Gordon, T., Zeng, D. (eds) Collaborative Computing: Networking, Applications and Worksharing. CollaborateCom 2017. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol 252. Springer, Cham. https://doi.org/10.1007/978-3-030-00916-8_53

[2] Toru Nakamura, Welderufael B. Tesfay, Shinsaku Kiyomoto, and Jetzabel Serna (2017). Default privacy setting prediction by grouping user’s attributes and settings preferences. In Data Privacy Management, Cryptocurrencies and Blockchain Technology: ESORICS 2017 International Workshops, DPM 2017 and CBT 2017, Oslo, Norway, September 14-15, 2017, Proceedings (pp. 107-123). Springer International Publishing. https://doi.org/10.1007/978-3-319-67816-0_7

[3] Rosni K V, Manish Shukla, Vijayanand Banahatti, and Sachin Lodha (2019). Consent recommender system: A case study on LinkedIn settings. In Central Europe Workshop Proceedings https://ceur-ws.org/Vol-2335/1st_PAL_paper_12.pdf

[4] Tribhuvanesh Orekondy, Bernt Schiele, and Mario Fritz (2017). Towards a Visual Privacy Advisor: Understanding and Predicting Privacy Risks in Images. 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, pp. 3706-3715 https://doi.org/10.1109/ICCV.2017.398

[5] Hosub Lee and Alfred Kobsa. (2017). Privacy preference modeling and prediction in a simulated campuswide IoT environment. In IEEE International Conference on Pervasive Computing and Communications (PerCom), Kona, HI, USA, 2017, pp. 276-285 https://doi.org/10.1109/PERCOM.2017.7917874

[6] Anna Cinzia Squicciarini, Dan Lin, Smitha Sundareswaran, and Joshua Wede. (2014). Privacy Policy Inference of User-Uploaded Images on Content Sharing Sites. IEEE transactions on knowledge and data engineering 27, no. 1, 2014, 193-206. https://doi.org/10.1109/TKDE.2014.2320729

[7] Davide Alberto Albertini, Barbara Carminati, and Elena Ferrari (2016). Privacy Settings Recommender for Online Social Network. In 2016 IEEE 2nd international conference on collaboration and internet computing (CIC), 2016, 514-521. https://doi.org/10.1109/CIC.2016.079

[8] Anna Cinzia Squicciarini, Andrea Novelli, Dan Lin, Cornelia Caragea, and Haoti Zhong (2017). From Tag to Protect: A Tag-Driven Policy Recommender System for Image Sharing. In 2017 15th Annual Conference on Privacy, Security and Trust (PST), 2017, 337-33709. https://doi.org/10.1109/PST.2017.00047

[9] Mainack Mondal, Günce Su Yilmaz, Noah Hirsch, Mohammad Taha Khan, Michael Tang, Christopher Tran, Chris Kanich, Blase Ur, and Elena Zheleva. Moving Beyond Set-It-And-Forget-It Privacy Settings on Social Media. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security (CCS '19). Association for Computing Machinery, New York, NY, USA, 2019, 991–1008. https://doi.org/10.1145/3319535.3354202

[10] Odnan Ref Sanchez, Ilaria Torre, Yangyang He, and Bart P. Knijnenburg (2020). A recommendation approach for user privacy preferences in the fitness domain. User Modeling and User-Adapted Interaction, 30, pp.513-565. https://doi.org/10.1007/s11257-019-09246-3

[11] Patrick Gage Kelley, Paul Hankes Drielsma, Norman Sadeh, and Lorrie Faith Cranor (2008). User-controllable learning of security and privacy policies. In Proceedings of the 1st ACM workshop on Workshop on AISec (AISec '08). Association for Computing Machinery, New York, NY, USA, 2008, 11–18. https://doi.org/10.1145/1456377.1456380

[12] Gaurav Misra and Jose M. Such (2017). PACMAN: Personal Agent for Access Control in Social Media. In IEEE Internet Computing, vol. 21, no. 6, pp. 18-26, November/December 2017. https://doi.org/10.1109/MIC.2017.4180831

[13] Toshikazu Munemasa and Mizuho Iwaihara (2011). Trend Analysis and Recommendation of Users’ Privacy Settings on Social Networking Services. In International conference on social informatics, pp. 184-197. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. https://doi.org/10.1007/978-3-642-24704-0_23

[14] Carlos Villarán and Marta Beltrán. (2022). User-Centric Privacy for Identity Federations Based on a Recommendation System. Electronics, 11(8), 1238. https://doi.org/10.3390/electronics11081238

[15] Yashothara Shanmugarasa, Hye-young Paik, Salil S. Kanhere, Liming Zhu (2022). Automated Privacy Preferences for Smart Home Data Sharing Using Personal Data Stores. In IEEE Security & Privacy, vol. 20, no. 1, pp. 12-22, Jan.-Feb. 2022 https://doi.org/10.1109/MSEC.2021.3106056

[16] Karin Bernsmed, Inger Anne Tøndel and Åsmund Ahlmann Nyre. Design and Implementation of a CBR-based Privacy Agent. In: Seventh International Conference on Availability, Reliability and Security, Prague, Czech Republic, 2012, 317-326. https://doi.org/10.1109/ARES.2012.60

[17] Igor Bilogrevic, Kévin Huguenin, Berker Agir, Murtuza Jadliwala, Maria Gazaki and Jean-Pierre Hubaux (2016). A machine-learning based approach to privacy-aware information-sharing in mobile social networks. Pervasive and Mobile Computing, 25, 125-142. https://doi.org/10.1016/j.pmcj.2015.01.006

[18] Zulfikar Alom, Bikash Chandra Singh, Zeyar Aung, and Mohammad Abdul Azim. Knapsack graph-based privacy checking for smart environments. Computers & Security, vol. 105, 2021, 102240. https://doi.org/10.1016/j.cose.2021.102240

[19] Fagner Roger Pereira Couto and Sergio Donizetti Zorzo (2018). Privacy Negotiation Mechanism in Internet of Things Environments. In Proceedings of the Twenty-fourth Americas Conference on Information Systems, New Orleans, AMCIS 2018. https://aisel.aisnet.org/amcis2018/Security/Presentations/33

[20] Dorota Filipczuk, Tim Baarslag, Enrico H. Gerding, and m. c. schraefel (2022). Automated privacy negotiations with preference uncertainty. Autonomous Agents and Multi-Agent Systems, 36(2), p.49. https://doi.org/10.1007/s10458-022-09579-1

[21] Yang Wang, Liang Gou, Anbang Xu, Michelle X. Zhou, Huahai Yang, and Hernan Badenes (2015). VeilMe: An Interactive Visualization Tool for Privacy Configuration of Using Personality Traits. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI '15). Association for Computing Machinery, New York, NY, USA, 2015, 817–826. https://doi.org/10.1145/2702123.2702293

[22] Yuanyuan Feng, Yaxing Yao, and Norman Sadeh (2021). A Design Space for Privacy Choices: Towards Meaningful Privacy Control in the Internet of Things. In CHI Conference on Human Factors in Computing Systems (CHI ’21), May 8–13, 2021, Yokohama, Japan. ACM, New York, NY, USA, 16 pages. https://doi.org/10.1145/3411764.3445148

[23] Susanne Barth, Dan Ionita, and Pieter Hartel (2022). Understanding Online Privacy — A Systematic Review of Privacy Visualizations and Privacy by Design Guidelines. ACM Comput. Surv. 55, 3, Article 63 (February 2022), 37 pages. https://doi.org/10.1145/3502288