AI actions are never executed without human review and adherence to business rules. Approval processes are clearly defined and formalized.
Frequently Asked Questions (FAQ)
This section brings together the most frequently asked questions related to data protection, model explainability, bias mitigation, and human oversight.
Governance
No. Our AI is supervised: no action occurs without human validation. In certain highly specific use cases, such as predictive analysis, the AI has some autonomy to produce projections, but these never translate into actual actions. In all cases, outputs are delivered as reports for review and approval.
Yes. Responsibilities cover the provision, integration/deployment, operation, and auditing of AI. Security and data teams define the scope, controls, and traceability at each stage.
Yes. Methodology documentation outlining the data preparation pipeline, controls, validation metrics, and operational rules is available on request in the event of an audit.
Input data, privacy & learning
By default, direct identifiers (such as first name, last name, email, etc.) are excluded from training processes. We apply data minimization, anonymization/pseudonymization, and masking as required.
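As a rough illustration of what pseudonymization and masking can look like before data enters a training pipeline (the function and field names below are hypothetical, not our production code):

```python
import hashlib

# Hypothetical salt; in practice this is managed per environment and rotated.
SALT = "rotate-me-per-environment"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a salted, irreversible token."""
    return hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()[:16]

def mask_email(email: str) -> str:
    """Keep only enough of the address for support/debugging workflows."""
    local, _, domain = email.partition("@")
    return f"{local[:1]}***@{domain}"

user_record = {"first_name": "Ada", "last_name": "Lovelace", "email": "ada@example.com"}
prepared = {
    "user_id": pseudonymize(user_record["email"]),  # stable join key, no direct identifier
    "email": mask_email(user_record["email"]),      # masked value for display only
}
print(prepared)
```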
No. AI outputs are not reused for training. Outputs are watermarked (visibly or via metadata) to ensure origin and integrity.
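For illustration, a minimal sketch of attaching a metadata watermark to an output so its origin and integrity can be verified later (the function name and field layout are assumptions, not the actual format we ship):

```python
import hashlib
import json
from datetime import datetime, timezone

def watermark_output(payload: dict, model_version: str) -> dict:
    """Attach provenance metadata so the output can be audited later."""
    body = json.dumps(payload, sort_keys=True)
    return {
        "payload": payload,
        "watermark": {
            "generated_by": model_version,
            "generated_at": datetime.now(timezone.utc).isoformat(),
            "content_hash": hashlib.sha256(body.encode("utf-8")).hexdigest(),
        },
    }

report = watermark_output({"forecast": [120, 134, 129]}, model_version="demand-model-1.4")
print(json.dumps(report, indent=2))
```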
We use a mix tailored to each case:
- Rule-based techniques for formats, missing values, and outliers (see the rule-based sketch after this list)
- Machine learning for duplicates, invalid reference data, time-series anomalies, and predictive analytics
- NLP for content consistency
- Recommendation systems for corrective actions
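As a rough sketch of the rule-based layer only, assuming hypothetical column names and business thresholds:

```python
import pandas as pd

# Hypothetical schema; the real pipeline works on customer-specific data.
df = pd.DataFrame({
    "order_id": ["A-001", "A-002", None, "A-004"],
    "amount": [120.0, 95.5, 3_000_000.0, 80.0],
})

issues = []

# Format rule: order IDs must match a fixed pattern.
bad_format = df["order_id"].isna() | ~df["order_id"].fillna("").str.match(r"^A-\d{3}$")
issues += [("format", i) for i in df.index[bad_format]]

# Missing-value rule: amount must be present.
issues += [("missing", i) for i in df.index[df["amount"].isna()]]

# Outlier rule: flag amounts outside a business-defined range.
issues += [("outlier", i) for i in df.index[(df["amount"] <= 0) | (df["amount"] > 100_000)]]

print(issues)  # findings go into a review report, never an automatic correction
```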
Data is hosted in the selected region, with options for replication and logical separation of environments (dev/test/qa/prod).
Model and AI operations security
We use RBAC, MFA, admin bastions, logging, network segmentation (WAF, IPS, NSG/VNet), and perform adversarial tests to protect against malicious inputs and data poisoning.
Yes. Mechanisms include SIEM correlation, monitoring for unusual usage and outbound traffic, and alerts for mass data exports.
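A simplified sketch of the kind of threshold logic behind a mass-export alert; in practice this correlation runs in the SIEM, and the field names and threshold below are illustrative only:

```python
# Illustrative threshold; real values depend on the environment and data classification.
EXPORT_ROW_THRESHOLD = 50_000

def check_export_event(event: dict):
    """Return an alert message when an export looks abnormally large."""
    if event.get("action") == "data_export" and event.get("row_count", 0) > EXPORT_ROW_THRESHOLD:
        return f"ALERT: user {event['user']} exported {event['row_count']} rows"
    return None

print(check_export_event({"action": "data_export", "user": "svc-report", "row_count": 250_000}))
```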
No. We use synthetic or anonymized data, with masking or pseudonymization when needed.
Yes. Detailed logs include access, API calls, infrastructure, resource usage, training/inference events, and performance metrics. Logs are retained for up to six months and are available upon request.
Quality, robustness, bias & metrics
Systematic human validation, error analysis, thresholds/business rules, false alert control, and cross-review by data and business teams.
Depending on the specifics of the use case: precision, recall, F1-score, stability indicators, MAE, and RMSE.
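For example, these metrics can be computed with standard tooling such as scikit-learn (the labels and forecasts below are made-up values, purely for illustration):

```python
from sklearn.metrics import precision_score, recall_score, f1_score, mean_absolute_error, mean_squared_error

# Classification-style check (e.g., duplicate detection): hypothetical labels.
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 1]
print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
print("F1:       ", f1_score(y_true, y_pred))

# Regression-style check (e.g., predictive analysis): hypothetical forecasts.
actual = [100.0, 120.0, 90.0]
forecast = [104.0, 117.0, 95.0]
print("MAE: ", mean_absolute_error(actual, forecast))
print("RMSE:", mean_squared_error(actual, forecast) ** 0.5)
```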
Yes. Adversarial scenarios are integrated into the testing cycle, including input validation, anomaly detectors, and robustness against targeted manipulations.
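A minimal sketch of pre-inference input validation, assuming hypothetical field names and bounds:

```python
# Hypothetical expected fields and ranges; real schemas are use-case specific.
EXPECTED_FIELDS = {"amount": (0.0, 100_000.0), "quantity": (0, 10_000)}

def validate_inference_input(record: dict) -> list:
    """Reject malformed or out-of-range values before they reach the model."""
    errors = []
    for field, (low, high) in EXPECTED_FIELDS.items():
        value = record.get(field)
        if not isinstance(value, (int, float)):
            errors.append(f"{field}: missing or non-numeric")
        elif not (low <= value <= high):
            errors.append(f"{field}: {value} outside [{low}, {high}]")
    return errors

print(validate_inference_input({"amount": -5, "quantity": "12"}))
```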
We use regularization, cross-validation, diverse datasets, production monitoring, and periodic model reevaluation to prevent overfitting and underfitting.
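A small sketch of how regularization and cross-validation can be combined during evaluation, using a synthetic dataset as a stand-in for project data:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in dataset; real evaluations run on project-specific data.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# L2 regularization (strength controlled by C) combined with 5-fold cross-validation.
model = LogisticRegression(C=0.5, max_iter=1000)
scores = cross_val_score(model, X, y, cv=5, scoring="f1")

# Large gaps between folds are an over/underfitting signal that triggers reevaluation.
print(scores.mean(), scores.std())
```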
Explainability, transparency & human oversight
Yes. We provide the influencing factors, context, and rules behind the results; decisions remain human and fully traceable.
Yes. AI outputs include visible watermarks or metadata, allowing them to be audited and traced back to their inputs and processing pipelines.
AI data confidentiality, retention, and end-of-contract procedures
Retention is based on need. Logs and traces of performance and inference are kept for up to six months, and outputs are never reused for model training.
At contract end, disk keys are rotated and the full environment—including data, backups, and logs—is securely deleted following a formal process. Inactive environments are removed within 90 days.
Yes. Selective export and deletion are possible, with full traceability and logging.
Compliance, privacy by design & training
Data minimization, anonymization/pseudonymization, masking, RBAC, controlled retention; developer guidelines and incident response plan.
Teams receive regular training on security and AI risks, stay updated on threats such as bias and adversarial attacks, participate in practical workshops, and follow internal charters and policies.
Yes. Internal Responsible AI policies cover fairness, transparency, privacy, human oversight, traceability, and non-reuse of outputs.
AI integration in IT systems & operation
RBAC and MFA for access to AI modules; IdP federation (SAML/OIDC); SIEM logging for AI APIs and pipelines.
WAF enabled, IPS on firewalls, segmentation via NSG/VNet, DDoS protection; admin access through a bastion; 24/7 monitoring.
Models undergo periodic evaluations, controlled updates in QA or staging, and require approval before going into production. Outputs are never reused for training.
AI onboarding (standard workflow)
Secure AI onboarding typically includes:
- AI and security workshop covering data residency, classification, and risks
- IAM setup with RBAC/MFA and IdP federation
- SIEM monitoring and alerting for AI-related signals
- Data handling with minimization and anonymization
- AI testing for quality, adversarial robustness, and metrics
- Go-live with bastion access, network segmentation, WAF/IPS, and continuous monitoring
Use cases for each approach:
- Rules: ensure format integrity, detect simple anomalies, handle specific outliers
- ML: detect duplicates, invalid reference data, temporal irregularities, and complex correlations (see the anomaly-detection sketch after this list)
- NLP: check semantic consistency, identify incoherent content, and classify text
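As a rough illustration of the machine-learning layer, a sketch of flagging a temporal irregularity with an isolation forest (the synthetic series and parameters are illustrative only):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic daily volumes with one injected irregularity; real detectors are
# trained on each customer's historical series.
rng = np.random.default_rng(0)
volumes = rng.normal(loc=1000, scale=50, size=60)
volumes[45] = 2500  # simulated spike

detector = IsolationForest(contamination=0.02, random_state=0)
flags = detector.fit_predict(volumes.reshape(-1, 1))  # -1 marks suspected anomalies

print(np.where(flags == -1)[0])  # flagged days go into a report for human review
```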
Explanations are provided with key factors and thresholds, context, justifications, correlation ID logs, and actionable operational recommendations.
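For illustration, an explanation delivered alongside a result could look like the following; the field names, weights, and recommendation are hypothetical, not a documented schema:

```python
import json
import uuid

explanation = {
    "correlation_id": str(uuid.uuid4()),  # links the result to its log entries
    "result": "invoice flagged as probable duplicate",
    "key_factors": [
        {"factor": "same supplier and amount", "weight": 0.61},
        {"factor": "issue dates 2 days apart", "weight": 0.27},
    ],
    "threshold": 0.80,
    "score": 0.88,
    "recommendation": "hold payment and route to accounts-payable review",
}
print(json.dumps(explanation, indent=2))
```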