ACE Platform - EU AI Act Compliance Statement

Effective Date: December 1, 2025 · Version: 1.0.0
Regulation: EU AI Act (Regulation 2024/1689)


1. Executive Summary

This document describes how the ACE (Agentic Context Engineering) platform complies with the European Union Artificial Intelligence Act (EU AI Act, Regulation 2024/1689).

Key Points:

  • ACE is classified as a LIMITED RISK AI system
  • ACE is NOT intended for any HIGH-RISK applications under Annex III
  • ACE complies with transparency obligations under Article 50
  • Users (deployers) maintain full oversight and control

2. System Description

2.1 What is ACE?

ACE is an AI-powered pattern learning platform for software developers that:

  1. Receives execution traces (coding task descriptions, steps taken, results)
  2. Analyzes traces using AI (Anthropic Claude) to identify patterns
  3. Stores learned patterns in a structured "playbook"
  4. Retrieves relevant patterns for future coding tasks
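The four-step loop above can be sketched as follows. All class and function names are illustrative, and the Reflector step (Claude Sonnet in production) is stubbed out; real retrieval uses embedding similarity rather than keyword overlap:

```python
from dataclasses import dataclass, field

@dataclass
class Pattern:
    text: str
    confidence: float = 0.5  # starts neutral; refined by user feedback

@dataclass
class Playbook:
    patterns: list = field(default_factory=list)

    def store(self, pattern: Pattern) -> None:
        self.patterns.append(pattern)

    def retrieve(self, task: str) -> list:
        # Production retrieval uses semantic embeddings; word overlap stands in here.
        words = set(task.lower().split())
        return [p for p in self.patterns if words & set(p.text.lower().split())]

def reflect(trace: dict) -> Pattern:
    # Stub for the Reflector step: distill an execution trace into a pattern.
    return Pattern(text=f"When {trace['task']}, {trace['lesson']}")

# 1. Receive a trace  2. Analyze it  3. Store the pattern  4. Retrieve it later
playbook = Playbook()
trace = {"task": "parsing JSON config", "lesson": "validate keys before use"}
playbook.store(reflect(trace))
matches = playbook.retrieve("parsing a JSON file")
```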

2.2 AI Components

Component   AI Model              Function                              Provider
---------   --------              --------                              --------
Reflector   Claude Sonnet 4.5     Analyzes traces, identifies patterns  Anthropic
Curator     Claude Haiku 4.5      Merges, deduplicates patterns         Anthropic
Embeddings  Jina Code Embeddings  Semantic similarity matching          Sentence Transformers

2.3 Value Chain Position

┌─────────────────────────────────────────────────────────────────┐
│                    EU AI ACT VALUE CHAIN                        │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  GPAI MODEL PROVIDER     AI SYSTEM PROVIDER      DEPLOYER      │
│  (Anthropic)             (Code Engine/ACE)       (Users)       │
│       │                        │                    │          │
│       │  Claude models         │  ACE platform      │          │
│       │  (Sonnet, Haiku)       │  (API + SDK)       │          │
│       │                        │                    │          │
│       ▼                        ▼                    ▼          │
│  ┌─────────┐            ┌───────────┐        ┌─────────┐       │
│  │ GPAI    │ ─────────▶ │ ACE       │ ──────▶│ End     │       │
│  │ Models  │ integrated │ Platform  │  used  │ Users   │       │
│  └─────────┘   into     └───────────┘   by   └─────────┘       │
│                                                                 │
│  Anthropic's       Code Engine's        Customer's             │
│  obligations       obligations          obligations            │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘

3. Risk Classification

3.1 Classification Analysis

Under Article 6 of the EU AI Act, AI systems are classified based on their intended purpose and potential impact.

Risk Level              Criteria                                      ACE Status
----------              --------                                      ----------
Unacceptable (Art. 5)   Social scoring, manipulation, exploitation    NOT APPLICABLE
High-Risk (Annex III)   Employment, education, credit, justice, etc.  NOT INTENDED
Limited Risk (Art. 50)  AI systems requiring transparency             APPLICABLE
Minimal Risk            General AI systems                            APPLICABLE

3.2 Why ACE is NOT High-Risk

ACE does not fall under Annex III high-risk categories because:

Annex III Category       ACE Analysis
------------------       ------------
Biometrics               ACE does not process biometric data
Critical Infrastructure  ACE is not used for infrastructure management
Education                ACE does not determine educational access or outcomes
Employment               ACE is explicitly prohibited from employment decisions
Essential Services       ACE is not used for credit, insurance, or benefits
Law Enforcement          ACE has no law enforcement applications
Migration                ACE has no immigration/asylum applications
Justice                  ACE has no legal/judicial applications

3.3 Safeguards Against High-Risk Use

To ensure ACE is not used for high-risk purposes, we implement:

  1. Acceptable Use Policy - Explicitly prohibits Annex III uses
  2. Terms of Service - Legally binding prohibition on high-risk use
  3. Technical Controls - No features designed for HR/employment use
  4. Monitoring - Usage patterns reviewed for policy compliance

4. Transparency Compliance (Article 50)

4.1 AI Disclosure

Users are informed that they are interacting with an AI system through:

Disclosure Point   How Implemented
----------------   ---------------
Website            Clear statement that ACE uses AI (Claude by Anthropic)
API Documentation  AI processing described in SDK docs
Terms of Service   Section 2 explicitly describes AI processing
Privacy Policy     Section 3.2 details AI components and purposes
In-App             Management interface shows AI-generated patterns

4.2 AI Processing Information

Users are informed about:

  • Which AI models process their data (Claude Sonnet, Claude Haiku)
  • What decisions AI makes (pattern retention, similarity matching)
  • How to override AI decisions (management interface)
  • How to disable AI processing (configuration settings)

5. Human Oversight (Article 14)

5.1 Oversight Mechanisms

Users maintain oversight through:

Mechanism         Description
---------         -----------
Pattern Review    All AI-generated patterns visible in management interface
Manual Override   Users can modify, delete, or add patterns manually
Voting System     Upvote/downvote to influence pattern confidence
Learning Control  Automatic learning can be disabled entirely
Configuration     Adjustable thresholds for AI decisions
Data Export       Full export of all data at any time
Data Deletion     Complete deletion of projects/accounts

5.2 Effective Oversight Design

The system is designed so that:

  • No AI decision is irreversible
  • Users can inspect all AI reasoning (patterns include evidence)
  • AI suggestions can be ignored or overridden
  • Human judgment is final in all cases

6. Data Governance (Article 10)

6.1 Training Data

ACE does NOT train AI models. We use pre-trained models from Anthropic (Claude). Anthropic is responsible for their training data governance.

6.2 User Data Processing

For user-submitted data, we ensure:

Requirement   Implementation
-----------   --------------
Relevance     Only data the user explicitly submits is processed
Quality       Validation of input format and structure
Rights        User owns their data; we process under contract
Minimization  Only necessary data retained
Security      Encryption in transit and at rest

7. Technical Documentation (Article 11)

We maintain documentation covering:

  • System architecture and data flows
  • AI component specifications
  • Security measures and controls
  • API specifications
  • Operational procedures

8. Record-Keeping (Article 12)

We maintain comprehensive logs:

Log Type         Contents                             Retention
--------         --------                             ---------
API Access Logs  Requests, responses, timestamps      90 days
Audit Logs       Administrative actions, token views  1 year
Learning Logs    Patterns created, updated, deleted   1 year
Error Logs       System errors and exceptions         90 days

9. Accuracy and Robustness (Article 15)

Accuracy Measures

  • Confidence scoring based on helpful/harmful feedback
  • 85% similarity threshold prevents redundant patterns
  • Low-confidence patterns (<30%) automatically pruned
  • User feedback refines pattern quality
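The scoring and pruning rules above can be sketched as follows. The document specifies only the feedback-driven scoring and the 30% pruning floor, not the exact formula, so the smoothed ratio below is an illustrative assumption:

```python
from dataclasses import dataclass

@dataclass
class Pattern:
    text: str
    helpful: int = 0
    harmful: int = 0

    @property
    def confidence(self) -> float:
        # Laplace-smoothed ratio of helpful to total feedback (illustrative).
        return (self.helpful + 1) / (self.helpful + self.harmful + 2)

def prune(patterns: list, floor: float = 0.30) -> list:
    # Patterns whose confidence falls below 30% are automatically removed.
    return [p for p in patterns if p.confidence >= floor]

good = Pattern("pin dependency versions", helpful=8, harmful=1)  # high confidence
bad = Pattern("ignore test failures", helpful=0, harmful=5)      # low confidence
kept = prune([good, bad])
```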

Robustness Measures

  • Pydantic schema validation on all inputs
  • Rate limiting prevents system overload
  • Graceful degradation on AI failures
  • System functions without AI if needed
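A minimal sketch of the validation-plus-degradation behavior described above. The platform uses Pydantic schemas; stdlib checks stand in here so the example is self-contained, and all names are illustrative:

```python
def validate_trace(trace: dict) -> dict:
    # Stand-in for Pydantic schema validation: required fields, correct types.
    if not isinstance(trace.get("task"), str) or not trace["task"]:
        raise ValueError("trace.task must be a non-empty string")
    if not isinstance(trace.get("steps"), list):
        raise ValueError("trace.steps must be a list")
    return trace

def analyze_with_fallback(trace: dict, ai_call) -> str:
    # Graceful degradation: if the AI backend fails, the trace is still
    # accepted and retained without a generated pattern.
    try:
        return ai_call(trace)
    except Exception:
        return "stored-without-analysis"

trace = validate_trace({"task": "fix flaky test", "steps": ["rerun", "isolate"]})

def failing_ai(_trace):
    raise TimeoutError("model unavailable")

result = analyze_with_fallback(trace, failing_ai)
```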

10. Cybersecurity (Article 15)

Control         Implementation
-------         --------------
Authentication  Token-based API authentication
Authorization   Multi-tenant isolation, role-based access
Encryption      TLS 1.3 in transit, AES-256 at rest
Token Security  SHA-256 hashing, encrypted storage via Clerk
Audit Trail     Token view logging with IP, user agent
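The token-security control above (SHA-256 hashing so raw tokens are never stored) can be sketched as follows; function names are illustrative, not the actual platform API:

```python
import hashlib
import hmac
import secrets

def issue_token() -> tuple:
    # The raw token is shown to the user once; only its SHA-256 digest is stored.
    raw = secrets.token_urlsafe(32)
    return raw, hashlib.sha256(raw.encode()).hexdigest()

def verify_token(presented: str, stored_digest: str) -> bool:
    digest = hashlib.sha256(presented.encode()).hexdigest()
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(digest, stored_digest)

raw, digest = issue_token()
```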

11. Provider Obligations Summary

As an AI System Provider under the EU AI Act, Code Engine:

Obligation               Status    Evidence
----------               ------    --------
Risk Classification      Complete  This document, Section 3
Transparency             Complete  ToS, Privacy Policy, Documentation
Human Oversight Design   Complete  Management interface, configuration API
Data Governance          Complete  Privacy Policy, data handling procedures
Technical Documentation  Complete  Architecture docs, API specs
Record-Keeping           Complete  Logging infrastructure, Logfire
Accuracy Measures        Complete  Confidence scoring, pruning, feedback
Cybersecurity            Complete  Security audit, encryption, access controls

12. Deployer Guidance

If you integrate ACE into your own products, you may be a "deployer" under the EU AI Act with your own obligations:

12.1 Your Potential Obligations

If You...                          You May Need To...
---------                          ------------------
Integrate ACE into a product       Assess whether YOUR product is high-risk
Process end-user data through ACE  Conduct a data protection impact assessment
Operate in the EU                  Ensure compliance with EU AI Act deployer rules
Use ACE for HR purposes            STOP - This is prohibited

12.2 Resources

13. Contact for Compliance Inquiries

For questions about EU AI Act compliance, including compliance documentation requests (enterprise customers), contact: legal@code-engine.app


This document demonstrates Code Engine's commitment to responsible AI development and regulatory compliance.