EU AI Act Compliance
Risk Assessment, Transparency Statement, and Compliance Declaration
Last updated: March 3, 2026
Executive Summary
faktry is committed to compliance with the European Union Artificial Intelligence Act (Regulation (EU) 2024/1689). This document provides a comprehensive risk assessment of our AI systems, our compliance measures, and transparency obligations under the EU AI Act.
Overall Risk Classification: Limited Risk
The majority of our AI systems fall under the "limited risk" category, requiring transparency obligations. Certain features may qualify as "high-risk" depending on use case, and we have implemented appropriate safeguards.
1. About the EU AI Act
The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive legal framework for artificial intelligence. It establishes a risk-based approach to AI regulation, categorizing AI systems into four risk levels:
- Unacceptable Risk: Banned AI practices (e.g., social scoring, real-time remote biometric identification in publicly accessible spaces)
- High Risk: AI systems with significant potential harm, requiring strict compliance (e.g., medical devices, recruitment tools)
- Limited Risk: AI systems requiring transparency (e.g., chatbots, content generation)
- Minimal Risk: AI systems with no specific obligations (e.g., spam filters, games)
2. AI Systems Inventory
faktry utilizes the following AI systems in its platform:
2.1 Image Generation
| Feature | Models Used | Provider | Risk Level |
|---|---|---|---|
| Text-to-Image | DALL-E 3, Flux, Ideogram, Midjourney, etc. | OpenAI, Black Forest Labs, Ideogram, Midjourney | Limited |
| Image Editing | Various | Multiple providers | Minimal |
| Upscaling | Various | Multiple providers | Minimal |
2.2 Video Generation
| Feature | Models Used | Provider | Risk Level |
|---|---|---|---|
| Text-to-Video | Runway, Kling, Luma, etc. | Runway, Kling, Luma AI | Limited |
| Image-to-Video | Various | Multiple providers | Limited |
| AI Video Editor | Integrated pipeline | Multiple providers | Limited |
2.3 Audio Processing
| Feature | Models Used | Provider | Risk Level |
|---|---|---|---|
| Speech-to-Text | Whisper | OpenAI | Minimal |
| Text-to-Speech | ElevenLabs, etc. | ElevenLabs | Limited |
2.4 Text & Content Generation
| Feature | Models Used | Provider | Risk Level |
|---|---|---|---|
| Script Generation | GPT-4, Claude, Gemini | OpenAI, Anthropic, Google | Limited |
| Prompt Enhancement | GPT-4, Claude, Gemini | OpenAI, Anthropic, Google | Minimal |
| Brand Content | GPT-4, Claude, Gemini | OpenAI, Anthropic, Google | Minimal |
3. Detailed Risk Assessment
3.1 Limited Risk Systems
The following AI systems are classified as "limited risk" under Article 50 of the EU AI Act:
Image & Video Generation Systems
Deployers of AI systems that generate or manipulate image, audio, or video content constituting a "deep fake" must disclose that the content has been artificially generated or manipulated.
Our Compliance: All AI-generated content is watermarked or accompanied by clear disclosure that it was generated by AI. Users are informed via our interface when content is AI-generated.
Text Generation Systems (Chatbots)
AI systems intended to interact with natural persons must inform those persons that they are interacting with an AI system.
Our Compliance: Our text generation features are clearly labeled as AI-powered. Generated text is displayed with clear indication of its AI origin.
Emotion Recognition & Biometric Categorization
AI systems performing emotion recognition or biometric categorization must inform natural persons exposed to such systems.
Our Compliance: We do not deploy emotion recognition or biometric categorization systems. Our service does not process biometric data for identification purposes.
3.2 High-Risk Assessment
We have evaluated our AI systems against the high-risk criteria in Annex III of the EU AI Act:
| High-Risk Category (Annex III) | Assessment |
|---|---|
| Biometric identification & categorization | ✓ Not applicable |
| Critical infrastructure management | ✓ Not applicable |
| Education & vocational training | ✓ Not applicable |
| Employment, worker management, access to self-employment | ✓ Not applicable |
| Access to essential services (credit, insurance, benefits) | ✓ Not applicable |
| Law enforcement | ✓ Not applicable |
| Migration, asylum, and border control | ✓ Not applicable |
| Administration of justice and democratic processes | ✓ Not applicable |
Assessment Result: No High-Risk Systems
Based on our analysis, faktry does not currently operate AI systems that fall under the high-risk categories defined in Annex III of the EU AI Act. However, we recognize that use cases may vary, and users deploying our tools in high-risk contexts are responsible for their own compliance.
3.3 Prohibited Practices
Under Article 5 of the EU AI Act, certain AI practices are prohibited. We confirm that faktry does not engage in any of the following prohibited practices:
- Subliminal, manipulative, or deceptive techniques that materially distort a person's behaviour
- Exploitation of vulnerabilities due to age, disability, or social or economic situation
- Social scoring that leads to detrimental or disproportionate treatment of persons
- Predictive policing based solely on profiling or personality traits
- Untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases
- Emotion recognition in workplace and education contexts
- Biometric categorization inferring sensitive attributes such as race, political opinions, or sexual orientation
- Real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (with narrow exceptions)
4. Transparency Obligations
4.1 AI-Generated Content Disclosure
In compliance with Article 50, we implement the following transparency measures:
- Content Labels: All AI-generated content is clearly labeled as such in our interface
- Export Disclosure: Downloaded content includes metadata indicating AI generation
- API Responses: API responses include headers indicating AI-generated content
- Public Gallery: Content in the Prompt Gallery displays AI model attribution
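The export and API disclosure measures above can be sketched as follows. This is an illustrative example only: the header names (`X-AI-Generated`, etc.) and metadata fields are hypothetical placeholders, not faktry's actual API contract.

```python
# Illustrative sketch of AI-generation disclosure metadata.
# All header and field names below are hypothetical examples.

def disclosure_headers(model: str, provider: str) -> dict:
    """Build HTTP response headers flagging AI-generated content."""
    return {
        "X-AI-Generated": "true",
        "X-AI-Model": model,
        "X-AI-Provider": provider,
    }

def export_metadata(model: str, created_at: str) -> dict:
    """Metadata embedded in downloaded content indicating AI generation."""
    return {
        "ai_generated": True,
        "generator": model,
        "created_at": created_at,
        "disclosure": "This content was generated by an AI system.",
    }
```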
4.2 Deep Fake Disclosure
Under Article 50(4), when our AI systems generate or manipulate content that constitutes a "deep fake" (image, audio, or video content that appreciably resembles existing persons, objects, places, entities, or events and would falsely appear to a person to be authentic or truthful), we ensure:
- Clear disclosure that the content has been artificially generated or manipulated
- Technical means to detect that the content is AI-generated
- User acknowledgment of disclosure requirements before generating content
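As one illustration of a "technical means to detect" AI generation, a provenance record can bind a hash of the content to a statement of its AI origin, loosely in the spirit of C2PA-style manifests. This is a minimal sketch under that assumption, not our production implementation; the field names are hypothetical.

```python
import hashlib

def provenance_manifest(content: bytes, model: str) -> dict:
    """Minimal provenance record binding a content hash to its AI origin.
    Field names are illustrative placeholders."""
    return {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "ai_generated": True,
        "model": model,
    }

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check that a manifest matches the content it claims to describe."""
    return manifest["content_sha256"] == hashlib.sha256(content).hexdigest()
```

Because the manifest records a hash, any modification of the content invalidates it, which is what makes the disclosure machine-verifiable.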
4.3 Copyright Disclosure
Under Article 53(1) of the EU AI Act, providers of general-purpose AI models must adopt a copyright compliance policy and publish a sufficiently detailed summary of the content used for training. As a deployer of third-party models, we maintain records of the models we use and make available to rightsholders the training-content information our providers publish. We rely on AI providers who maintain compliance with the EU AI Act and copyright regulations.
5. AI Governance Framework
5.1 Human Oversight
Our AI systems include human oversight measures:
- All AI outputs are reviewed by users before use
- Users can reject, modify, or regenerate AI outputs
- Content moderation policies with human review for public content
- Ability to flag inappropriate or harmful AI outputs
- Regular human review of AI system performance
5.2 Data Governance
We implement robust data governance practices:
- Training data for our AI systems is sourced from reputable providers with proper licensing
- We do not train foundation models on user data
- User content is processed only for service provision and is not used for AI training without explicit consent
- Data quality controls and bias mitigation measures
- Regular audits of data practices
5.3 Technical Documentation
We maintain documentation for our AI systems including:
- System purpose and intended use cases
- Model specifications and capabilities
- Training methodologies and data sources (where applicable)
- Performance metrics and limitations
- Risk mitigation measures
- Human oversight mechanisms
5.4 Quality Management System
Our quality management system for AI includes:
- Regular testing and validation of AI outputs
- Performance monitoring and incident logging
- User feedback integration
- Continuous improvement processes
- Version control and change management
6. Risk Mitigation Measures
6.1 Content Moderation
We implement content moderation to prevent harmful outputs:
- Input filtering to prevent generation of illegal or harmful content
- Output filtering for inappropriate content
- Prohibited content categories enforced (violence, CSAM, hate speech, etc.)
- User reporting mechanisms
- Human review of flagged content
6.2 Bias Mitigation
We work to mitigate bias in AI outputs:
- Selection of AI providers with bias mitigation measures
- Regular evaluation of AI outputs for biased patterns
- User controls to report biased outputs
- Diverse model options to reduce single-provider bias
6.3 Security Measures
Our AI systems are secured against attacks:
- Input validation and sanitization
- Prompt injection protection
- Rate limiting and abuse prevention
- Secure API key management
- Regular security assessments
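Rate limiting, one of the abuse-prevention measures listed above, is commonly implemented as a token bucket. The sketch below is illustrative only and makes no claim about faktry's actual limits or implementation.

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: requests spend tokens, which refill
    at a fixed rate up to a capacity."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate              # tokens refilled per second
        self.capacity = capacity      # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Consume one token if available; otherwise reject the request."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A bucket per user (or per API key) lets short bursts through while capping sustained request volume.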
7. User Obligations
Users of faktry have certain obligations under the EU AI Act:
- Transparency: When deploying AI-generated content, users must disclose its artificial origin as required by Article 50
- Deep Fake Disclosure: Content that resembles real persons must include clear disclosure of AI generation
- High-Risk Use Cases: Users deploying our tools in high-risk contexts must conduct their own conformity assessment
- Legal Compliance: Users are responsible for ensuring their use of AI complies with applicable laws
- Content Responsibility: Users are responsible for the content they create and how it is used
8. Third-Party AI Providers
faktry uses AI services from third-party providers. We select providers who:
- Are committed to EU AI Act compliance
- Provide transparency about their AI models
- Implement appropriate safety measures
- Have robust data governance practices
Key AI providers we use:
| Provider | Services Used | Compliance Status |
|---|---|---|
| OpenAI | GPT-4, DALL-E, Whisper | EU AI Act compliant |
| Anthropic | Claude | EU AI Act compliant |
| Google | Gemini | EU AI Act compliant |
| ElevenLabs | Voice synthesis | EU AI Act compliant |
| Others | Various AI services | Evaluated for compliance |
9. Incident Management
We maintain an incident management system for AI-related issues:
- Incident Logging: All AI-related incidents are logged and tracked
- Response Procedures: Defined processes for addressing incidents
- User Reporting: Users can report AI incidents via support channels
- Authority Notification: Serious incidents are reported to competent authorities as required
- Corrective Actions: Post-incident reviews and improvements
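An incident record along these lines could look as follows. This is a hypothetical sketch: the field names and the severity threshold triggering authority notification are illustrative assumptions, not our internal schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIIncident:
    """Illustrative AI-incident record; fields are hypothetical."""
    description: str
    severity: str          # e.g. "low" | "serious" (assumed scale)
    reported_via: str      # e.g. "support", "user_report"
    opened_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    corrective_actions: list = field(default_factory=list)

    @property
    def requires_authority_notification(self) -> bool:
        # Serious incidents are reported to competent authorities.
        return self.severity == "serious"
```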
10. Contact & Compliance
For questions about our EU AI Act compliance or to report AI-related incidents:
AI Compliance Officer: ai-compliance@faktry.ai
Incident Reporting: ai-incidents@faktry.ai
General Inquiries: legal@faktry.ai
11. Updates to This Statement
This EU AI Act compliance statement will be updated as:
- Our AI systems evolve or new systems are deployed
- New guidance on EU AI Act implementation is published
- Regulatory requirements change
- Our risk assessment identifies new considerations
This EU AI Act compliance statement was last updated on March 3, 2026. We are committed to ongoing compliance with the EU AI Act and will update this document as regulations and our practices evolve.
