
EU AI Act Compliance

UltimateIntel is committed to transparency and compliance with the EU Artificial Intelligence Act (Regulation 2024/1689). As AI regulation evolves, we proactively implement measures that meet and exceed requirements applicable to our risk classification. This page details our compliance approach across the key obligations of the EU AI Act.

Risk Classification

Our AI system is classified as limited risk under the EU AI Act. UltimateIntel processes business operational data to generate analytical insights and does not operate in high-risk domains such as credit scoring, employment decisions, law enforcement, or critical infrastructure management. However, we choose to implement transparency and oversight measures that exceed the minimum requirements for limited-risk systems because we believe transparency builds trust.

Our risk assessment considers the nature of data processed (business operational data, not biometric or sensitive personal data), the purpose of AI use (analytical insights and recommendations, not autonomous decisions), the human oversight model (all actions require explicit human approval), and the impact scope (business efficiency improvements, not fundamental rights decisions).

Transparency Measures

Every AI-generated insight includes evidence citations that trace back to source data in your connected systems. Users can verify any claim by following the citation chain to the original records. Our evidence tier system clearly labels the methodology used for each insight, distinguishing between pattern correlation (Tier 1), temporal causal analysis (Tier 2), and more advanced analytical methods. Confidence intervals are provided where applicable, and the system explicitly states when evidence is limited or when conclusions carry uncertainty.

We do not present AI output as ground truth. Every response includes context about how the answer was derived, what data sources were consulted, and what limitations apply. When the system cannot provide a confident answer, it says so rather than generating speculative content.

Risk Assessment

We conduct regular AI risk assessments covering:

- accuracy and reliability of generated insights
- potential for bias in pattern detection and analysis
- failure modes and their impact on business decisions
- data quality dependencies and their effects on output quality
- adversarial robustness against prompt manipulation

Assessment results inform our model selection, prompt engineering, and output validation processes.

Technical Documentation

We maintain comprehensive technical documentation of our AI systems as required by the EU AI Act. This documentation covers:

- system architecture and data flow diagrams
- model selection criteria and performance benchmarks
- training data characteristics and quality measures
- input validation and output filtering procedures
- monitoring and logging specifications
- version control and change management processes

Documentation is reviewed and updated with each significant system change.

Human Oversight Details

All autonomous actions (email drafts, task creation, CRM updates, Slack messages, alert creation) require explicit human approval before execution. Users can review the proposed action, the evidence supporting it, and the expected outcome before approving or rejecting. The approval workflow includes a clear explanation of what will happen, an undo mechanism for approved actions where technically feasible, and logging of all approval and rejection decisions. Users can disable AI-powered features at any time through their account settings. Manual override is available for all automated processes including morning briefs, anomaly detection, and query routing.
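The approval workflow just described can be sketched as a small state machine: a proposed action starts pending, a human explicitly approves or rejects it, every decision is logged, and undo is offered only for approved actions. The class and field names below (`ProposedAction`, `ActionStatus`, `audit_log`) are illustrative assumptions, not the actual implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class ActionStatus(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    REJECTED = "rejected"
    UNDONE = "undone"

@dataclass
class ProposedAction:
    kind: str                 # e.g. "email_draft", "crm_update"
    explanation: str          # what will happen, shown to the reviewer
    evidence: list[str]       # citations supporting the proposal
    status: ActionStatus = ActionStatus.PENDING
    audit_log: list = field(default_factory=list)

    def _log(self, event: str, user: str) -> None:
        # Every decision is recorded with a UTC timestamp.
        self.audit_log.append((datetime.now(timezone.utc), event, user))

    def approve(self, user: str) -> None:
        if self.status is not ActionStatus.PENDING:
            raise ValueError("only pending actions can be approved")
        self.status = ActionStatus.APPROVED
        self._log("approved", user)

    def reject(self, user: str) -> None:
        if self.status is not ActionStatus.PENDING:
            raise ValueError("only pending actions can be rejected")
        self.status = ActionStatus.REJECTED
        self._log("rejected", user)

    def undo(self, user: str) -> None:
        # Offered only where technically feasible, and only after approval.
        if self.status is not ActionStatus.APPROVED:
            raise ValueError("only approved actions can be undone")
        self.status = ActionStatus.UNDONE
        self._log("undone", user)
```

Nothing executes unless `approve` is called by a named user, and the audit log preserves who decided what and when.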

Transparency Report Format

We publish quarterly transparency reports covering:

- system performance metrics, including accuracy, reliability, and availability
- a summary of AI-related incidents and their resolution
- updates to models, algorithms, or processing methods
- changes to data sources or processing scope
- results of bias and fairness assessments
- user feedback and satisfaction metrics

Reports are available to all customers through their account dashboard. Aggregate anonymized reports will be published publicly when we reach general availability.
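For readers who want to consume these reports programmatically, the sections listed above could map onto a record like the following. This schema is a hypothetical sketch for illustration only; field names, units, and the example values are assumptions, not the published report format.

```python
from dataclasses import dataclass

@dataclass
class QuarterlyTransparencyReport:
    quarter: str                    # e.g. "2025-Q1"
    accuracy: float                 # share of sampled insights verified correct (0..1)
    availability: float             # uptime fraction for the quarter (0..1)
    incidents: list[str]            # AI-related incidents and their resolution
    model_updates: list[str]        # changes to models, algorithms, or processing
    data_source_changes: list[str]  # changes to data sources or processing scope
    bias_assessment: str            # summary of bias and fairness assessment results
    satisfaction_score: float       # aggregate user feedback metric

report = QuarterlyTransparencyReport(
    quarter="2025-Q1",
    accuracy=0.97,
    availability=0.999,
    incidents=[],
    model_updates=["Upgraded anomaly-detection model"],
    data_source_changes=[],
    bias_assessment="No statistically significant disparities detected in sampled outputs",
    satisfaction_score=4.5,
)
```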