© 2026 OSOS/Omega. All rights reserved.
Omega version: 0.1.0
Product: OSOS / Omega
Provider: Osos AI GmbH, Cosimastraße 121, 81925 Munich, Germany
Effective Date: 30.04.2026
Version: 1.0
Relationship to other documents: This AI Data Policy is an annex to the EULA. It takes precedence over the general provisions of the EULA on AI-specific matters to the extent it contains more specific rules. For the processing of personal data, the Data Processing Agreement (DPA) additionally applies.
OSOS / Omega ("Software") uses AI components — in particular Large Language Models (LLM) and downstream model components — to support engineering workflows (e.g. requirements analysis, generation, traceability, review). This AI Data Policy transparently describes how these AI components process Customer Data, AI Inputs, and AI Outputs.
This policy applies to all deployment models (Cloud Software / Self-Hosted / Air-Gapped); model-specific differences are noted in the respective section.
The Licensor does not use the Licensee's Customer Data, AI Inputs, or AI Outputs for training, fine-tuning, reinforcement learning, or any other improvement of Baseline Models.
Data of different Licensees is processed and stored in logically separate form. AI Inputs of one Licensee do not flow into responses to other Licensees. Embeddings, vector indexes, and caches are separated per tenant.
Within a tenant, the AI component respects the access and permission rules configured in the Licensee's system. Authorized Users do not receive AI outputs from projects or documents to which they have no access according to the permission system.
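For illustration only, the permission rule above can be sketched as a pre-retrieval filter (the `filter_ai_context` helper and its field names are hypothetical, not the Software's actual API): before any document chunk reaches the model, it is checked against the set of projects the requesting user may access.

```python
def filter_ai_context(user_projects: set[str], candidate_chunks: list[dict]) -> list[dict]:
    """Drop every retrieval chunk whose source project lies outside the
    requesting user's permissions, so the model never sees, and thus can
    never echo, content the user could not open directly."""
    return [chunk for chunk in candidate_chunks if chunk["project"] in user_projects]
```

Filtering before inference (rather than filtering the model's answer afterwards) is the safer design, since content excluded at this stage cannot influence the generated output at all.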
AI Inputs are processed exclusively to generate the requested AI Output, to provide the Software, to ensure its security, and to fulfill statutory obligations.
The Software labels AI-generated content and provides — where technically meaningful — references to the model or model type used (Art. 50 EU AI Act).
The Software uses the following types of models:
The current list of AI subprocessors (incl. hosting location, model type, and processing purpose) is published at [Link Subprocessor List]. The list is also maintained as an annex to the DPA.
If the list is changed, the notification and objection procedure pursuant to the DPA applies (generally at least 30 days' lead time, right of objection in case of material changes).
Where available, the Licensee may choose the processing region (e.g. EU, EU-only mode) in the Order Form or in the platform settings. The default for customers headquartered in the EEA is processing in EU data centers.
For Self-Hosted or Air-Gapped deployments, inference runs in the environment controlled by the Licensee; AI subprocessors are involved in such cases only by separate agreement.
AI Inputs are processed exclusively to:
The AI processing pipeline may include in particular the following categories:
The Licensee will not introduce particularly sensitive data (special categories of data under Art. 9 GDPR, classified information, payment data, health data) into the AI components without the Licensor's express prior agreement. If such data is introduced without agreement, the Licensor may refuse to process it and/or automatically suppress the incoming content.
AI Inputs are transmitted to the respective model environment for the purpose of inference. For third-party LLMs — where available — the "Zero Data Retention" or "No-Train" variant of the API is used. With these APIs, the third-party provider does not store inputs/outputs beyond the immediate inference, with the exception of short-term storage for security purposes (typically ≤ 30 days).
To optimize performance, the Software may cache AI Inputs/Outputs per tenant. Cache contents are logically isolated and deleted after a configurable period. Caches contain no cross-tenant accessible data.
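The tenant-isolated cache described above can be sketched minimally as follows (the `TenantCache` class is an assumption for illustration, not the Software's actual implementation): every cache key is scoped by tenant identifier, and entries expire after a configurable period.

```python
import time
from typing import Optional


class TenantCache:
    """Per-tenant cache: keys are scoped by tenant id, so an entry written
    for one tenant can never be read by another, and entries expire after
    a configurable TTL (time to live)."""

    def __init__(self, ttl_seconds: float) -> None:
        self.ttl = ttl_seconds
        # Key is the pair (tenant_id, key); value is (creation time, payload).
        self._store: dict = {}

    def put(self, tenant_id: str, key: str, value: str) -> None:
        self._store[(tenant_id, key)] = (time.monotonic(), value)

    def get(self, tenant_id: str, key: str) -> Optional[str]:
        entry = self._store.get((tenant_id, key))
        if entry is None:
            return None
        created, value = entry
        if time.monotonic() - created > self.ttl:
            del self._store[(tenant_id, key)]  # expired: delete on access
            return None
        return value
```

Because the tenant id is part of the key itself, cross-tenant leakage would require an explicit lookup under a foreign tenant id; there is no shared namespace in which entries could collide.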
The Software stores operational log data (timestamps, model, token counts, status codes, errors). Substantive AI Inputs/Outputs are not logged in plain text unless necessary for diagnostics; where unavoidable, they are stored only in a tenant-specific, access-restricted logging environment with a separate retention period (default: 30 days, unless otherwise agreed).
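A content-free log entry of the kind described above could look like the following sketch (the `make_log_record` helper and its field names are hypothetical): only metadata is retained, and the prompt text is replaced by a one-way digest.

```python
import hashlib
import time


def make_log_record(tenant_id: str, model: str, prompt: str, status_code: int) -> dict:
    """Build a log entry containing only operational metadata.
    The prompt itself is replaced by a SHA-256 digest, so substantive
    content never enters the general log stream in plain text, while
    duplicate requests remain correlatable via the digest."""
    return {
        "timestamp": time.time(),
        "tenant": tenant_id,
        "model": model,
        "approx_tokens": len(prompt.split()),  # rough size metric, not content
        "prompt_digest": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "status": status_code,
    }
```

The digest supports diagnostics (e.g. spotting repeated identical requests) without making the original input recoverable from the log.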
Aggregated, anonymized telemetry data (e.g. number of requests, average latency) may be used to improve the platform. This data contains no Customer Data and no personal data.
Customer-Specific Models, fine-tunings, retrieval indexes, or similar customer-specific adaptations are created only upon the express, documented instruction of the Licensee ("opt-in") and only using explicitly approved data sets.
Customer-Specific Models remain tenant-specific and are not fed back into Baseline Models. There is no "backflow" into other tenants.
At the end of the Subscription Term, Customer-Specific Models are, at the Licensee's choice, either delivered (where technically and legally possible) or irrevocably deleted; the default is deletion in accordance with the DPA.
The Licensee is responsible for:
AI Outputs are suggestions, not authoritative decisions. The Licensee shall ensure adequate human oversight within the meaning of Art. 14 EU AI Act and shall implement documented review processes for safety-relevant or regulated use cases.
Generative AI models can produce erroneous, incomplete, outdated, or misleading content ("hallucinations"). The Licensor gives no warranty as to the correctness, completeness, currency, freedom from third-party rights, or suitability of AI Outputs for a particular purpose.
The Licensee shall not circumvent, disable, or attempt to circumvent technical or organizational protective and security mechanisms (in particular safety or compliance filters).
The Licensee shall not use AI Outputs of the Software to train, fine-tune, or evaluate competing AI models.
The Licensor implements in particular the following measures:
Detailed information can be found in the certificates and security notes and in the TOM annex to the DPA.
OSOS / Omega is to be classified as an AI system under Regulation (EU) 2024/1689 (EU AI Act). According to the current assessment, the standard scope of functionality does not constitute a "high-risk AI system" within the meaning of Annex III of the regulation. If the Licensee deploys the Software in a use case that qualifies as high-risk on its side, it shall inform the Licensor in good time. In that case, the Licensor will, against remuneration, provide the documentation required to fulfill provider/operator obligations (in particular Art. 11, 13, 14, and 26 EU AI Act), to the extent this is consistent with its role as a provider within the meaning of the regulation.
The Licensor strives to maintain conformity of its AI management processes with ISO/IEC 42001 ("AI Management System"). The current status is published in the certificate notes.
AI-generated content is labeled as such (Art. 50 EU AI Act). When interacting with AI functions, users are informed of the AI nature.
The Licensor makes reasonable efforts to detect and mitigate bias and discriminatory effects in the models used. Complete freedom from bias, however, cannot be guaranteed for generative models.
Subject to statutory provisions and the terms of the relevant AI subprocessor, AI Outputs are available to the Licensee for free use within its business operations. Due to the probabilistic nature of generative AI models, exclusivity of outputs cannot be assured; similar outputs may also be produced for other users.
The Licensee is obligated to verify whether AI Outputs infringe third-party rights (in particular copyright, trademark, patent, and personality rights) before deploying them productively.
The Licensee may direct inquiries, incidents, and complaints regarding AI processing to:
[ai-policy@provider.com]
For security-relevant incidents or data breaches, the reporting channels pursuant to the DPA apply (in particular immediate notification within the deadlines stated therein).
This policy may be adapted in case of material changes to the models or subprocessors used, or to regulatory requirements. Material changes will be announced with reasonable notice. Otherwise, the change provisions of the EULA apply accordingly.
In case of conflicts, this policy takes precedence on AI-specific matters over the general provisions of the EULA; DPA provisions for personal data take precedence over this policy.