Compliance
Здесь описание твоей базы.
LLM & RAG Security
- Indirect Prompt Injection: How hackers break into RAG through "poisoned" documents.
- Invisible Text Attacks: Hidden instructions in PDF (white font, zero size) and how to find them.
- The "Ignore Previous Instructions" Vulnerability: Analysis of classical attacks and defense methods.
- Roleplay & Jailbreaking: Attacks like "DAN" (Do Anything Now) and "Developer Mode" — detection in prompta.
- System Prompt Leakage: How to force a model to issue its instructions ("Reveal your system prompt" attacks).
- HTML Comment Injection: Why hidden comments in web pages are dangerous for LLM.
- Output Constraining Attacks: Attacks that force the model to respond in JSON/XML format to bypass filters.
- Context Window Overflow: Spam attacks and garbage data in RAG pipelines.
- Multilingual Jailbreaks: Why do attacks in Russian/Chinese bypass English filters?
- Base64 Obfuscation: Detection of encoded payloads in prompta.
Model & Supply Chain Security
- Python Pickle RCE: Why downloading .pkl files is remote code execution.
- PyTorch Malware: The danger of torch.load and hidden instructions in the scales of models.
- Keras Lambda Injection: Malicious code inside the configuration of neural network layers.
- YAML Deserialization Attacks: Vulnerabilities in configs (yaml.load vs safe_load).
- Typosquatting in Python: Attacks through fake packages (for example, tourch instead of torch).
- Dependency Confusion: How attackers inject their code through requirements.txt .
- Data Poisoning via Malicious URLs: Links to .exe and .sh inside CSV/Parquet datasets.
- Git LFS Pointer Attacks: Substitution of models for pointer text files.
Secrets & Credential Leaks
- AWS IAM Key Leakage: Risks of AKIA key exposure in Jupyter Notebooks.
- Hugging Face Token Exposure: What are the dangers of Write tokens in public repositories?
- OpenAI API Key Leaks: How to protect your budget from key theft (sk-...).
- SSH Private Key Exposure: Detection of forgotten keys (id_rsa) in datasets.
- Slack & Discord Webhook Security: The risks of phishing through leaked webhooks.
- Google Cloud Credentials: Search for service accounts (.json keys) in the code.
- Generic API Key Detection: How to find non-standard secrets using entropy analysis.
- Environment Variable Leaks: The danger of .env and .pypirc files in the repository.
Infrastructure & Malicious Activity (For DevOps)
- Cryptojacking in ML Containers: Detection of miners (XMRig, Ethminer) in Docker images.
- Reverse Shell Detection: Search for backdoors in scripts (/bin/sh, nc -e).
- SSRF (Server-Side Request Forgery): Attacks on cloud metadata (169.254.169.254).
- Ransomware Indicators: Signs of cryptographers in Python scripts.
- Dangerous System Calls: Why os.system and subprocess in ML code are a red flag.
- Exfiltration via Curl/Wget: How hackers "merge" data from closed learning loops.
]()**
- **[