Encourage Users to Consider Privacy Implications Before Sharing Online

Problem Summary

The online privacy decision-making process is complex, and users may not fully understand the audience and potential risks associated with sharing information online. Users often unknowingly disclose sensitive information on online platforms such as Online Social Networks (OSNs), which can lead to increased social and financial risks.

Rationale

The goal is to prompt users to evaluate potential audience reach and information-spreading risks before they share. By employing scoring frameworks, privacy-awareness models, visual cues, and similar techniques, this approach aims to help individuals understand their privacy status and encourages them to consider the privacy implications of their actions.

Solution

Implement a mechanism that nudges or guides users to reconsider their disclosure actions before performing them. This could include visual cues, warnings, or interactive prompts that encourage users to think about the privacy implications of their intended sharing. The papers presented below address the complexity of online privacy decision-making by developing tools and frameworks that prompt users to reconsider their sharing actions, raising awareness of potential risks and audience reach before information is disclosed on platforms such as OSNs. Through visual cues, interactive tools, and real-time feedback, they help users understand the sensitivity of the information they are about to share and its potential consequences. This proactive approach enables users to make more informed decisions, reducing the likelihood of unknowingly disclosing sensitive information and mitigating social and financial risks.
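As an illustrative sketch only (not taken from any of the cited papers), such a pre-posting hook might scan a draft for sensitive keywords and return a warning to display instead of publishing immediately. The keyword list, function name, and message wording below are hypothetical:

```python
from typing import Optional

# Hypothetical keyword list; a real system would use richer classifiers.
SENSITIVE_TERMS = {"address", "phone", "passport", "salary", "medical"}

def nudge_message(post_text: str, audience_size: int) -> Optional[str]:
    """Return a warning to show before posting, or None if no nudge is needed."""
    hits = sorted(t for t in SENSITIVE_TERMS if t in post_text.lower())
    if not hits:
        return None
    return (f"This post mentions {', '.join(hits)} and may reach about "
            f"{audience_size} people. Share anyway?")
```

A client would call `nudge_message` when the user presses "share" and, if a warning is returned, display it as an interactive prompt before completing the action.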

Alemany et al. [1] address teenagers' vulnerability to privacy risks on social networks by introducing soft-paternalism mechanisms, known as nudges, that influence user decision-making without restricting freedom of choice. Two mechanisms are proposed: the Picture Nudge, which displays profile images of potential post viewers together with a risk-level alert, and the Number Nudge, which shows the number of potential viewers with a similar alert. The aim is to enhance users' awareness of the privacy implications before they post content. The privacy risk is computed as a Privacy Risk Score (PRS), which evaluates the diffusion risk of a message based on the user's position in the network and the novelty of the message. The value reflects the percentage of network members who might view a message from the user at any point.
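A minimal sketch of a reachability-based score in this spirit, assuming the network is a mapping from each user to the users who can see or reshare their posts. This is a simplification of the PRS in [1], which also accounts for message novelty; the function and parameter names are illustrative:

```python
from collections import deque

def privacy_risk_score(network: dict, user: str) -> float:
    """Fraction of other network members reachable from `user` via sharing edges."""
    seen, queue = {user}, deque([user])
    while queue:  # breadth-first traversal of who could eventually see the post
        for neighbour in network.get(queue.popleft(), ()):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    others = len(network) - 1
    return (len(seen) - 1) / others if others else 0.0
```

For a four-member network in which a post by `a` can reach `b` and, through `b`, also `c` but not `d`, the score is 2/3.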

Ferreyra et al. [2] introduced an approach to improve online privacy decisions on online social networks by implementing risk scenarios and personalised nudges. This strategy is designed to enhance users' awareness of potential privacy threats and guide them towards making safer privacy choices. The solution leverages users' perceptions of risk to tailor preventative nudges, encouraging a more cautious approach to disclosing personal information.

Dang, Dang and Küng [3] present a visual model aimed at enhancing user privacy on OSNs through improved interaction and visualisation design. The authors propose a model comprising a privacy object and a privacy controller to help users better understand and manage the privacy options for the content to be shared. The privacy object quantifies information exposure across five dimensions: Who, What, When, Where, and Whom. This model allows users to visualise the level of exposure associated with each data-sharing action on OSNs. The controller uses a soft paternalistic approach, making it easier to reduce privacy exposure while requiring more effort to increase it. It utilises visual aids such as radar charts to help users intuitively understand the exposure of the content to be shared.
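The controller's asymmetry (easy to reduce exposure, harder to increase it) can be sketched as follows. The 0–1 exposure scale, method names, and confirmation flag are assumptions for illustration, not the model from [3]:

```python
DIMENSIONS = ("who", "what", "when", "where", "whom")

class PrivacyObject:
    """Per-dimension exposure on a 0.0 (least exposed) to 1.0 (fully exposed) scale."""

    def __init__(self):
        # Default to the least-exposed settings, as the controller in [3] does.
        self.exposure = {d: 0.0 for d in DIMENSIONS}

    def decrease(self, dim: str, amount: float) -> None:
        """Reducing exposure is a single unconditional step."""
        self.exposure[dim] = max(0.0, self.exposure[dim] - amount)

    def increase(self, dim: str, amount: float, confirmed: bool = False) -> None:
        """Raising exposure requires explicit confirmation (the soft-paternalistic asymmetry)."""
        if not confirmed:
            raise PermissionError(f"Raising '{dim}' exposure requires confirmation")
        self.exposure[dim] = min(1.0, self.exposure[dim] + amount)
```

The exposure values per dimension could then feed directly into a radar-chart rendering of the privacy object.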

Vishwamitra et al. [4] proposed a novel system called AutoPri to address the issue of inadvertent privacy breaches through photo sharing on online social networks (OSNs). AutoPri employs a multimodal variational autoencoder to automatically detect private photos in a user-specific manner, learning the joint representation of user information and photo content. This system also utilises explainable deep-learning techniques to pinpoint sensitive regions within photos, enabling fine-grained privacy control. By providing real-time feedback and visual cues, AutoPri helps users make more informed decisions about sharing their photos, thereby enhancing their privacy awareness and reducing the risk of disclosing sensitive information unintentionally.
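Downstream of the model, the gating step of such a system might look like the following sketch. The per-region sensitivity scores would come from a classifier such as AutoPri's; the threshold, names, and return format here are hypothetical:

```python
def review_photo(region_scores: dict, user_threshold: float = 0.5) -> dict:
    """Decide whether to warn before sharing and which photo regions to highlight.

    `region_scores` maps region labels to sensitivity scores in [0, 1] produced
    by some upstream model (not implemented here); `user_threshold` stands in
    for a user-specific sensitivity setting.
    """
    sensitive = [r for r, s in region_scores.items() if s >= user_threshold]
    return {"warn": bool(sensitive), "highlight": sensitive}
```

A sharing interface would call this before upload and, when `warn` is true, overlay the highlighted regions as visual cues in the confirmation prompt.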

Platforms: personal computers, mobile devices

Related guidelines: Promote User Awareness and Decision-Making on Permission/Authorisation Requests, Enhance Privacy Awareness by Communicating Privacy Risks

Example

Picture and Number Nudge <a href="#section1">[1]</a>.

Picture (left) and Number (right) nudges notify users of the privacy risk linked to their intended action [1].

Envisioned interface for a preventative nudge as in <a href="#section2">[2]</a>.

Envisioned interface for a preventative nudge [2].

The privacy object <a href="#section3">[3]</a>.

The privacy object is presented as a filled radar chart. A change in any dimension (in the example, the WHO) is displayed in the interface [3].

Use cases

  • Encouraging users to be more aware of and cautious about privacy implications when sharing information online.

Pros

  • Encourages more careful consideration of privacy settings without limiting freedom of choice. Experiments showed significant changes in users' privacy behaviour for posting actions when the nudging mechanisms were activated, independently of the mechanism used (picture or number nudge) [1].
  • The study's findings indicate that the proposed privacy measurement framework can positively influence users' perceptions, enabling a more detailed examination of the information they intend to share in the future [2].
  • When the user has not specified any privacy preferences, the privacy controller defaults the five dimensions of the privacy object to the least-exposed settings [3].
  • The evaluation of AutoPri showed that it detects user-specific private photos with high accuracy and low performance overhead. Additionally, the explainable model accurately pinpoints sensitive regions in private photos, enabling effective, fine-grained privacy control [4].

Cons

  • Some users found the nudge irritating, highlighting the necessity for a long-term assessment of its usage and its influence on behaviours [1].
  • The usability of the privacy controller has not been fully evaluated in real-world applications, which may include additional components and more complex user interactions. This could affect the model's effectiveness and user experience [3].
  • The study relies on hypothetical self-disclosure scenarios to assess user behaviour, which may introduce discrepancies between reported and actual behaviour and could limit the applicability of the findings [2].
  • The dataset may be limited, as very privacy-sensitive participants might have avoided sharing their most sensitive photos, and the current binary privacy settings (private and public) lack granularity. Future work aims to incorporate more detailed privacy categories, such as friends, close friends, family, and colleagues, or user-defined settings. Although the paper highlights the potential of leveraging the face-detection algorithms already used by platforms like Facebook and Instagram to identify users in photos and suggest appropriate privacy settings, it also notes challenges such as assisting new users with limited photo data and handling users with imbalanced private/public photo sets. To mitigate these issues, the authors propose initially relying on general privacy concerns and augmenting datasets with photos from close connections [4].

Privacy Notices

Such solutions aim to communicate the privacy risks of personal data disclosure through privacy notices. Considering the design space for privacy notices [5], this guideline can be applied along the following dimensions:

  • Just in time
    According to [5], just-in-time notices precede data collection and often appear near input fields or as summary dialogues, reducing user interruptions. This is the case for the solutions discussed in this guideline.

  • Non-blocking
    This guideline can be coupled with non-blocking controls (privacy choices [5]), providing control options without forcing user interaction.

  • Blocking
    This guideline is primarily intended for non-blocking contexts but can also be applied to a privacy notice that provides control options requiring user interaction.

  • Visual
    This guideline uses visual resources such as colours, text and icons for a visual notice.

  • Primary
    This guideline is primarily applied to the platform or device the user interacts with.

Control

Providing users with comprehensive insights into the privacy threats arising from data disclosure increases their privacy awareness and enhances control [7] by allowing them to make informed decisions about sharing their data. Other related privacy attributes:

Awareness mechanisms increase Transparency [6] as users are given information about potential privacy risks and the impact of their sharing decisions.


References

[1] José Alemany, Elena Del Val, Juan Alberola, and Ana García-Fornes (2019). Enhancing the privacy risk awareness of teenagers in online social networks through soft-paternalism mechanisms. International Journal of Human-Computer Studies, 129 (2019): 27–40. https://doi.org/10.1016/j.ijhcs.2019.03.008

[2] Nicolás E. Díaz Ferreyra, Tobias Kroll, Esma Aïmeur, Stefan Stieglitz, and Maritta Heisel (2020). Preventative Nudges: Introducing Risk Cues for Supporting Online Self-Disclosure Decisions. Information 11, no. 8 (2020): 399. https://doi.org/10.3390/info11080399

[3] Tri Tran Dang, Khanh Tran Dang and Josef Küng (2020). Interaction and Visualization Design for User Privacy Interface on Online Social Networks. SN Computer Science 1, no. 5 (2020): 297. https://doi.org/10.1007/s42979-020-00314-9

[4] Nishant Vishwamitra, Yifang Li, Hongxin Hu, Kelly Caine, Long Cheng, Ziming Zhao, and Gail-Joon Ahn (2022). Towards Automated Content-based Photo Privacy Control in User-Centered Social Networks. In Proceedings of the Twelfth ACM Conference on Data and Application Security and Privacy (CODASPY '22). Association for Computing Machinery, New York, NY, USA, 65–76. https://doi.org/10.1145/3508398.3511517

[5] Florian Schaub, Rebecca Balebako, Adam L Durity, and Lorrie Faith Cranor (2015). A Design Space for Effective Privacy Notices. In: Symposium on Usable Privacy and Security (SOUPS 2015). [S.l.: s.n.], p. 1–17. https://www.usenix.org/system/files/conference/soups2015/soups15-paper-schaub.pdf

[6] Yuanyuan Feng, Yaxing Yao, and Norman Sadeh (2021). A Design Space for Privacy Choices: Towards Meaningful Privacy Control in the Internet of Things. In CHI Conference on Human Factors in Computing Systems (CHI ’21), May 8–13, 2021, Yokohama, Japan. ACM, New York, NY, USA, 16 pages. https://doi.org/10.1145/3411764.3445148

[7] Susanne Barth, Dan Ionita, and Pieter Hartel (2022). Understanding Online Privacy — A Systematic Review of Privacy Visualizations and Privacy by Design Guidelines. ACM Comput. Surv. 55, 3, Article 63 (February 2022), 37 pages. https://doi.org/10.1145/3502288