Navigating Data Risks: The Impact of Large Language Models on Privacy and Security

Authors

  • Lara Kawle, Woodbridge Academy Magnet School
  • Sheetal Dhavale

DOI:

https://doi.org/10.47611/jsrhs.v13i4.8276

Keywords:

Computer Science, Artificial Intelligence, Neural Networks, Deep Learning, Large Language Models

Abstract

Throughout the 21st century, the increasing presence of Artificial Intelligence (AI) in everyday applications underscores its crucial role in enhancing community comfort and convenience. As the world's use of technology grows, Artificial Intelligence will undoubtedly remain a persistent focus of technological advancement. However, it is crucial to understand how the inner workings of AI components could be manipulated for malicious use by threat actors. Large Language Models (LLMs) are a type of Artificial Intelligence solution that processes data, recognizes patterns, and generates output text. Through large and extensive training, LLMs have the ability to produce natural language text as a response – but at what cost? The use of LLMs is prone to security risks and data breaches, as manipulated data fed to these systems can lead to incorrect, false, or unintended outputs. Overall, vulnerabilities in the input data can compromise the integrity of information produced by the model and introduce unforeseen privacy attacks. This research paper summarizes the latest findings on the security risks associated with LLMs. In addition, the paper explores the inner workings of LLMs, the advantages and limitations of this model, and recommendations to address the risks.

References or Bibliography

Amazon. (2023). What is Deep Learning? Deep Learning Explained - AWS. Amazon Web Services, Inc. https://aws.amazon.com/what-is/deep-learning/

CerboAI. (2024, May 2). CerboAI's Guide: Understanding CNN/RNN/GAN/Transformer and Other Architectures. Medium. https://medium.com/@CerboAI/cerboais-guide-understanding-cnn-rnn-gan-transformer-and-other-architectures-2ded10988eee

Data Poisoning: The Essential Guide | Nightfall AI Security 101. (n.d.). Nightfall AI. https://www.nightfall.ai/ai-security-101/data-poisoning

Forrest, A., & Kosinski, M. (2024, April 11). What Is a Prompt Injection Attack? IBM. https://www.ibm.com/topics/prompt-injection

Grieve, P. (2018). A simple way to understand machine learning vs deep learning - Zendesk. Zendesk. https://www.zendesk.com/blog/machine-learning-and-deep-learning/

harkiran78. (2020, June 24). Artificial Neural Networks and its Applications. GeeksforGeeks. https://www.geeksforgeeks.org/artificial-neural-networks-and-its-applications/

IBM. (2023). What Are Large Language Models? IBM. https://www.ibm.com/topics/large-language-models

Information Matters. (2020, July 31). IBM TechXchange: IBM Automation & AI Day. Information Matters - AI in the UK. https://informationmatters.net/data-poisoning-ai/

Robert, J., & Schmidt, D. (2024, January 22). 10 Benefits and 10 Challenges of Applying Large Language Models to DoD Software Acquisition. Carnegie Mellon University Software Engineering Institute Blog. https://insights.sei.cmu.edu/blog/10-benefits-and-10-challenges-of-applying-large-language-models-to-dod-software-acquisition/

Sheth, J., & Menon, R. (2024, May 30). Perils of AI: LLM applications and sensitive information handling | Globant Blog. Globant Blog. https://stayrelevant.globant.com/en/technology/cybersecurity/sensitive-information-disclosure-in-llm-applications/

Timbó, R. (2024, July 2). NLP vs. LLM. Revelo. https://www.revelo.com/blog/nlp-vs-llm

What is a large language model (LLM)? (n.d.). Cloudflare. https://www.cloudflare.com/learning/ai/what-is-large-language-model/

Published

11-30-2024

How to Cite

Kawle, L., & Dhavale, S. (2024). Navigating Data Risks: The Impact of Large Language Models on Privacy and Security. Journal of Student Research, 13(4). https://doi.org/10.47611/jsrhs.v13i4.8276

Issue

Section

HS Research Projects