Security Assessment

Score: 16%

Assessment


Framework: OWASP
Standard: OWASP Top 10 for LLMs 2025
Tags: AI; LLM
Description: Risk & Mitigations for LLMs and Gen AI Apps, outlining vulnerabilities and mitigations for developing and securing generative AI and large language model applications across the development, deployment, and management lifecycle.
Visibility: Public
Reference: https://genai.owasp.org/llm-top-10/

  1. Input Validation and Sanitization: Ensure all inputs are validated and sanitized to prevent prompt injection and data poisoning (a minimal sketch follows this list).
  2. Access Controls: Implement strict access controls to limit who can interact with the model and access sensitive data.
  3. Encryption: Use strong encryption methods to protect data and system prompts both at rest and in transit.
  4. Regular Audits and Monitoring: Conduct regular security audits and continuously monitor interactions and resource usage to detect and respond to anomalies.
  5. User Training and Awareness: Educate users on best practices for handling outputs, recognizing misinformation, and understanding the model's limitations.
  6. Fact-Checking and Human Oversight: Integrate fact-checking mechanisms and involve human reviewers to validate critical outputs.
  7. Dependency Management and Vendor Assessment: Carefully manage dependencies and assess third-party vendors to ensure supply chain security.
  8. Bias Mitigation and Robust Training: Use diverse and high-quality training data, and apply techniques to detect and reduce biases in embeddings.
  9. Resource Management: Implement rate limiting, resource quotas, and auto-scaling to manage and optimize resource consumption.
  10. Incident Response Planning: Develop and maintain an incident response plan to quickly address any security incidents or vulnerabilities.
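
As a minimal sketch of item 1 (input validation and sanitization), the Python below screens untrusted input before it reaches a model. The pattern list, the `MAX_INPUT_LENGTH` limit, and the `sanitize_user_input` name are illustrative assumptions, not controls prescribed by OWASP.

```python
import re

# Illustrative, non-exhaustive deny-list of phrasings associated with
# prompt-injection attempts. Pattern matching alone is easy to evade and
# should be combined with allow-listing and model-side defenses.
SUSPICIOUS_PATTERNS = [
    re.compile(r"ignore (all|any|previous) instructions", re.IGNORECASE),
    re.compile(r"reveal (the )?(system|hidden) prompt", re.IGNORECASE),
    re.compile(r"you are now (in )?developer mode", re.IGNORECASE),
]

MAX_INPUT_LENGTH = 4_000  # assumed limit; tune per application


def sanitize_user_input(text: str) -> str:
    """Validate and screen untrusted input before it reaches the model."""
    if len(text) > MAX_INPUT_LENGTH:
        raise ValueError("input exceeds maximum allowed length")
    # Drop non-printable control characters that can hide instructions.
    text = "".join(ch for ch in text if ch.isprintable() or ch in "\n\t")
    for pattern in SUSPICIOUS_PATTERNS:
        if pattern.search(text):
            raise ValueError("input matches a known prompt-injection pattern")
    return text
```

A deny-list like this is a weak control on its own; it is best layered with the access controls (item 2) and human oversight (item 6) above for any sensitive action.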



Impacts

Impact Area | Description | Impact Level
Regulatory | Security incidents can attract the attention of regulators, leading to investigations, fines, and increased oversight. | Moderate
Operational | Addressing reputational damage often requires significant resources, diverting attention from other critical business operations. | Minor
Customer | Customers might lose faith in the organization's capability to manage and secure its technology and their personal data; users could discontinue the service or switch to competitors if they view the model as unreliable or unsafe. | Major
Reputational | Security vulnerabilities and their exploitation can lead to negative media coverage, damaging the organization's public image. | Moderate

Risks

# | Risk | Description | Type | Overall Risk
1 | LLM01:2025 Prompt Injection | Attackers can manipulate input prompts to influence the model's output in unintended ways. | Threat | Medium
2 | LLM02:2025 Sensitive Information Disclosure | Sensitive data, such as personally identifiable information (PII), financial details, health records, confidential business data, security credentials, and legal documents, can be exposed. | Threat | Medium
3 | LLM03:2025 Supply Chain | Compromised supply chains can introduce malicious code or data, leading to data integrity issues, security vulnerabilities, operational disruptions, trust erosion, and financial loss. | Threat | Medium
4 | LLM04:2025 Data and Model Poisoning | Poisoned data can degrade the model's performance and accuracy, introduce security vulnerabilities, spread misinformation, erode user trust, and cause operational disruptions. | Threat | Major
5 | LLM05:2025 Improper Output Handling | Improper handling of outputs can lead to data leakage, misinformation, security vulnerabilities, compliance issues, and erosion of trust (see the sketch following this table). | Threat | Medium
6 | LLM06:2025 Excessive Agency | LLMs with excessive agency might perform unintended actions, introduce security vulnerabilities, cause users to lose control, lead to compliance issues, and disrupt operations. | Threat | Medium
7 | LLM07:2025 System Prompt Leakage | Leaked system prompts can expose sensitive information about the model's configuration and operations, which attackers can exploit. | Threat | Medium
8 | LLM08:2025 Vector and Embedding Weaknesses | Weaknesses in vectors and embeddings can be exploited to manipulate the model's behavior or extract sensitive information. | Threat | Low
9 | LLM09:2025 Misinformation | LLMs can generate and disseminate false or misleading information, a significant vulnerability for applications that depend on their outputs. | Threat | Medium
10 | LLM10:2025 Unbounded Consumption | Excessive resource usage can lead to resource exhaustion, denial of service (DoS), increased operational costs, performance degradation, and potential security vulnerabilities. | Threat | Low
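
As a minimal sketch of a mitigation for LLM05 (Improper Output Handling), assuming the model's response is rendered in a web page: treat the output as untrusted and encode it before display. The function name is an illustrative assumption.

```python
import html


def render_model_output(raw_output: str) -> str:
    """Treat model output as untrusted: HTML-encode it before display.

    This stops a manipulated response (e.g. via prompt injection) from
    executing as script in the user's browser.
    """
    return html.escape(raw_output)


# Example: the encoded text is safe to interpolate into an HTML template.
print(render_model_output('<script>alert("pwned")</script>'))
# -> &lt;script&gt;alert(&quot;pwned&quot;)&lt;/script&gt;
```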

[Risk matrix: impact (Insignificant to Severe) vs. likelihood (Rare 0-5%, Unlikely 5-15%, Possible 15-40%, Likely 40-90%, Certain >90%). Risk 4 plots at Major impact; risks 5, 6, 9 and 2, 7 at Moderate impact; risks 8, 10 and 1, 3 at Minor impact.]

Threats

Threat (MITRE ATLAS tactics, assessed across Internal, External, 3rd Party, Technological, and Physical vectors):
Execution - The adversary is trying to run malicious code embedded in machine learning artifacts or software.
Privilege Escalation - The adversary is trying to gain higher-level permissions.
Exfiltration - The adversary is trying to steal machine learning artifacts or other information about the machine learning system.
Initial Access - The adversary is trying to gain access to the machine learning system.
Persistence - The adversary is trying to maintain their foothold via machine learning artifacts or software.
Impact - The adversary is trying to manipulate, interrupt, erode confidence in, or destroy your machine learning systems and data.
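
As a minimal sketch against the Execution and Persistence tactics, assuming model artifacts are fetched from an external registry: pin and verify an artifact digest before deserializing it. The function name and digest source are illustrative assumptions.

```python
import hashlib
from pathlib import Path


def verify_artifact(path: Path, expected_sha256: str) -> None:
    """Refuse to load a model artifact whose digest does not match a pinned value.

    Formats such as pickle can execute arbitrary code at load time, which is
    one route for the Execution and Persistence tactics above; checking the
    artifact's integrity before deserialization narrows that path.
    """
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    if digest != expected_sha256:
        raise RuntimeError(f"digest mismatch for {path}; refusing to load")
```

Preferring non-executable serialization formats (e.g. safetensors over pickle) reduces this attack surface further.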

Controls

Control Coverage: 0%

No control reference(s) found; no controls are currently mapped to the identified threats.