© 2026 OSOS/Omega. All rights reserved.

Omega version: 0.1.0

AI Data Policy

Product: OSOS / Omega

Provider: Osos AI GmbH, Cosimastraße 121, 81925 Munich, Germany

Effective Date: 30.04.2026

Version: 1.0

Relationship to other documents: This AI Data Policy is an annex to the EULA. It takes precedence over the general provisions of the EULA on AI-specific matters to the extent it contains more specific rules. For the processing of personal data, the Data Processing Agreement (DPA) additionally applies.


1. Purpose and Scope

OSOS / Omega ("Software") uses AI components — in particular Large Language Models (LLMs) and downstream model components — to support engineering workflows (e.g. requirements analysis, generation, traceability, review). This AI Data Policy transparently describes:

  • which data ("AI Inputs") goes into the AI components,
  • which AI subprocessors and models are used,
  • whether, when, and how data is used to train or improve models,
  • which protective mechanisms, controls, and choices the Licensee has, and
  • which obligations regarding human oversight and verification of AI outputs apply.

This policy applies to all deployment models (Cloud Software / Self-Hosted / Air-Gapped); model-specific differences are noted in the respective section.


2. Definitions

  • AI Input: Data, inputs, documents, requirements, code snippets, metadata, or prompts introduced by the Licensee or its Authorized Users into the AI components of the Software.
  • AI Output: Content, suggestions, classifications, embeddings, or summaries generated by the AI system.
  • Baseline Model: A foundation or standard model provided by the Licensor or an AI subprocessor and made available to multiple customers on a shared basis.
  • Customer-Specific Model / Tenant-Specific Model: A model trained or fine-tuned for a single Licensee that remains exclusively assigned to that Licensee.
  • No-Train Mode: Processing with active technical and contractual exclusion of the use of AI Inputs/Outputs for model training.
  • Subprocessor / AI Subprocessor: A third-party provider that delivers AI model or inference services for the Software (e.g. hyperscaler AI services, model hosting providers).

3. Core Principles

3.1 No Training on Customer Data

The Licensor does not use Customer Data, AI Inputs, or AI Outputs of the Licensee for the training, fine-tuning, reinforcement learning, or other improvement of Baseline Models.

3.2 Tenant Isolation ("Data Isolation by Design")

Data of different Licensees is processed and stored in logically separate form. AI Inputs of one Licensee do not flow into responses to other Licensees. Embeddings, vector indexes, and caches are separated per tenant.

3.3 Respect for Internal Permissions

Within a tenant, the AI component respects the access and permission rules configured in the Licensee's system. Authorized Users do not receive AI Outputs from projects or documents to which they have no access according to the permission system.

3.4 Purpose Limitation

AI Inputs are processed exclusively to generate the requested AI Output, to provide the Software, to ensure its security, and to fulfill statutory obligations.

3.5 Transparency

The Software labels AI-generated content and provides — where technically meaningful — references to the model or model type used (Art. 50 EU AI Act).


4. AI Components and Subprocessors

4.1 Models in Use

The Software uses the following types of models:

  • proprietary models provided or curated by the Licensor (e.g. domain-specific classifiers, embedding models);
  • foundation models from third parties (LLMs, vision models) accessed via API or dedicated hosting environments;
  • optionally: models provided or licensed by the Licensee itself ("Bring-your-own-Model"), configured in the Software.

4.2 AI Subprocessors

The current list of AI subprocessors (incl. hosting location, model type, and processing purpose) is published at [Link Subprocessor List]. The list is also maintained as an annex to the DPA.

If the list is changed, the notification and objection procedure pursuant to the DPA applies (generally at least 30 days' lead time, right of objection in case of material changes).

4.3 Choice of Processing Region

Where available, the Licensee may choose the processing region (e.g. EU, EU-only mode) in the Order Form or in the platform settings. The default for customers headquartered in the EEA is processing in EU data centers.

4.4 Air-Gapped / On-Premise

For Self-Hosted or Air-Gapped deployments, inference runs in the environment controlled by the Licensee; AI subprocessors are involved in such cases only by separate agreement.


5. Processing Purposes and Data Categories

5.1 Processing Purposes

AI Inputs are processed exclusively to:

  • generate the AI Outputs requested by the Licensee (generation, classification, search, embeddings, summarization);
  • ensure functionality, stability, and security (e.g. abuse detection, rate limiting);
  • improve the platform (but not the models) on the basis of aggregated and anonymized telemetry;
  • fulfill statutory obligations.

5.2 Data Categories

The AI processing pipeline may include in particular the following categories:

  • requirements texts, specifications, glossaries, policies of the Licensee;
  • code and documentation excerpts;
  • metadata (project ID, module, status, timestamps);
  • where applicable, personal data (e.g. author/editor names, comments). The DPA applies to the processing of such data.

5.3 Particularly Sensitive Data

The Licensee will not introduce particularly sensitive data (special categories of data under Art. 9 GDPR, classified information, payment data, health data) into the AI components without express prior agreement. If such data is introduced without agreement, the Licensor may refuse to process it and/or automatically filter the incoming content.


6. Retention, Caching, and Logging

6.1 Inference

AI Inputs are transmitted to the respective model environment for the purpose of inference. For third-party LLMs — where available — the "Zero Data Retention" or "No-Train" variant of the API is used. With these APIs, the third-party provider does not store inputs/outputs beyond the immediate inference, with the exception of short-term storage for security purposes (typically ≤ 30 days).

6.2 Platform-Side Caching

To optimize performance, the Software may cache AI Inputs/Outputs per tenant. Cache contents are logically isolated and deleted after a configurable period. Caches contain no data accessible across tenants.

6.3 Logging

Operational log data is stored (timestamps, model, token counts, status codes, errors). Substantive AI Inputs/Outputs are — to the extent not necessary for diagnostics — not logged in plain text; where logging is unavoidable, storage takes place only in a tenant-specific, access-restricted logging environment with a separate retention period (default: 30 days, unless otherwise agreed).

6.4 Telemetry

Aggregated, anonymized telemetry data (e.g. number of requests, average latency) may be used to improve the platform. This data contains no Customer Data and no personal references.


7. Customer-Specific Customizations

7.1 Opt-in

Customer-Specific Models, fine-tunings, retrieval indexes, or similar customer-specific adaptations are created only upon the express, documented instruction of the Licensee ("opt-in") and only using explicitly approved data sets.

7.2 Isolation

Customer-Specific Models remain tenant-specific and are not fed back into Baseline Models. There is no "backflow" into other tenants.

7.3 Termination

At the end of the Subscription Term, Customer-Specific Models are, at the Licensee's choice, either delivered (where technically and legally possible) or irrevocably deleted; the default is deletion in accordance with the DPA.


8. Obligations and Responsibilities

8.1 Responsibility of the Licensee

The Licensee is responsible for:

  • the lawfulness, accuracy, and suitability of the data introduced into the AI components;
  • obtaining required consents or legal bases for affected individuals;
  • the final substantive review, verification, and approval of all AI Outputs prior to productive use;
  • the configuration of internal access rights and the training of its Authorized Users in handling the Software.

8.2 Human Oversight

AI Outputs are suggestions, not authoritative decisions. The Licensee shall ensure adequate human oversight within the meaning of Art. 14 EU AI Act and shall implement documented review processes for safety-relevant or regulated use cases.

8.3 Output Disclaimer

Generative AI models can produce erroneous, incomplete, outdated, or misleading content ("hallucinations"). The Licensor gives no warranty as to the correctness, completeness, currency, freedom from third-party rights, or suitability of AI Outputs for a particular purpose.

8.4 No Circumvention of Protective Mechanisms

The Licensee shall not circumvent, disable, or attempt to circumvent technical or organizational protective and security mechanisms (in particular safety or compliance filters).

8.5 No Competing Models

The Licensee shall not use AI Outputs of the Software to train, fine-tune, or evaluate competing AI models.


9. Security of AI Components

The Licensor implements in particular the following measures:

  • encryption in transit (TLS 1.2+) and at rest (AES-256 or equivalent);
  • authentication against AI subprocessors using rotating, hardened credentials;
  • tenant-specific separation of vector stores, caches, and logs;
  • protective mechanisms against prompt injection attacks (input sanitization, output filtering, allow/deny lists);
  • monitoring for anomalies and abusive use;
  • regular penetration tests and red-teaming of AI interfaces.

Detailed information can be found in the certificates and security notes and in the TOM annex to the DPA.


10. EU AI Act, ISO 42001, and Regulatory Classification

10.1 Classification

OSOS / Omega is to be classified as an AI system under Regulation (EU) 2024/1689 (EU AI Act). According to the current assessment, the standard scope of functionality is not a "high-risk AI system" within the meaning of Annex III of the regulation. If the Licensee deploys the Software in a use case that qualifies as high-risk on its side, it shall inform the Licensor in good time; in this case the Licensor will provide, against remuneration, the documentation required to fulfill provider/operator obligations (in particular Art. 11, 13, 14, 26 EU AI Act), to the extent consistent with its role as a provider within the meaning of the regulation.

10.2 ISO 42001

The Licensor strives to maintain conformity of its AI management processes with ISO/IEC 42001 ("AI Management System"). The current status is published in the certificate notes.

10.3 Transparency and Labeling

AI-generated content is labeled as such (Art. 50 EU AI Act). When interacting with AI functions, users are informed of the AI nature.

10.4 Bias, Discrimination, and Fairness

The Licensor undertakes reasonable efforts to detect and mitigate bias and discriminatory effects in the models used. However, complete freedom from bias cannot be guaranteed for generative models.


11. IP Rights in AI Output

Subject to mandatory statutory provisions and the terms of the applicable AI subprocessor, AI Outputs are available to the Licensee for free use within its business operations. Due to the probabilistic nature of generative AI models, exclusivity of outputs cannot be assured; similar outputs may also be generated for other users.

The Licensee is obligated to verify whether AI Outputs infringe third-party rights (in particular copyright, trademark, patent, and personality rights) before deploying them productively.


12. Complaints, Incidents, Contact

The Licensee may direct inquiries, incidents, and complaints regarding AI processing to:

  • Email: [ai-policy@provider.com]
  • Postal address: [Provider GmbH, Address]

For security-relevant incidents or data breaches, the reporting channels pursuant to the DPA apply (in particular immediate notification within the deadlines stated therein).


13. Changes to this AI Data Policy

This policy may be adapted in case of material changes to the models or subprocessors used, or to regulatory requirements. Material changes will be announced with reasonable notice. Otherwise, the change provisions of the EULA apply accordingly.


14. Relationship to Other Documents

In case of conflicts, this policy takes precedence on AI-specific matters over the general provisions of the EULA; DPA provisions for personal data take precedence over this policy.