
AI GOVERNANCE POLICY

Introduction 

Pisano is committed to the ethical and responsible use of Artificial Intelligence (AI) in its products and services. This AI Governance Policy outlines how Pisano designs, deploys, and manages AI systems in alignment with emerging regulations and best practices. In particular, we adhere to the principles of the EU AI Act – the first comprehensive AI framework aimed at fostering trustworthy AI in Europe[1] – as well as guidelines from international bodies (e.g. the OECD and NIST) that emphasize transparency, accountability, fairness, and security in AI[2]. Our goal is to ensure that partners, customers, and regulators have confidence in Pisano’s AI capabilities, knowing they are used in a safe, transparent, and compliant manner. 

Guiding Principles 

Pisano’s approach to AI is governed by key principles that guide every stage of development and deployment. These guiding principles reflect global standards for trustworthy AI and underpin our commitment to ethical AI use: 

  • Accountability: We maintain clear oversight and responsibility for our AI systems. Pisano’s internal governance defines roles for monitoring AI performance and compliance. We regularly review AI outcomes and processes, holding ourselves accountable to this policy and applicable laws, and we maintain mechanisms to audit AI decisions and address any issues proactively. 
  • Transparency: We strive to make our AI systems understandable and open to scrutiny. We openly communicate the purpose and capabilities of our AI features to stakeholders. In practice, this means providing documentation on how our AI models function and are trained, and clearly informing users when they are interacting with an AI system[3]. We also offer tools (such as dashboards and reports) that shed light on AI operations, so that customers and regulators can observe how the AI is working and gain insight into its decision-making processes. 
  • Fairness: We are dedicated to ensuring our AI treats users and data impartially and avoids bias. Pisano uses carefully curated and synthetic data for training our general models to prevent the introduction of biased or discriminatory patterns. We do not use one client’s data to influence models for another client, preserving fairness and respecting boundaries between datasets. Additionally, we continuously monitor AI outputs for any unintended bias or unfair outcomes, and we have processes to mitigate and correct any disparities that are identified. 
  • Privacy & Security: Protecting user privacy and data security is paramount in all AI initiatives. All personal or sensitive data processed by Pisano’s AI is first anonymized or masked to safeguard individual privacy. Our systems are designed so that no personally identifiable information is exposed in AI processing or outputs. We enforce strict data access controls and security measures to prevent unauthorized access or data leaks. Pisano’s AI infrastructure runs in a secure cloud environment with robust cybersecurity protections (encryption, access control, network isolation), and we comply with data protection regulations like GDPR in all AI operations. 

Data Governance 

Effective data governance is the foundation of Pisano’s AI strategy. We ensure that all data used in AI model training and operation is handled with the highest standards of privacy, quality, and integrity. 

Training Data Sourcing and Isolation: Pisano follows a strict data isolation policy for model training. For any customer-specific AI models (often referred to as fine-tuned models), the training data comes exclusively from that same customer’s data. We never mix or use one client’s data to train models for another client, and we do not incorporate public or third-party personal data into customer-specific models. For our general-purpose AI offerings – including core models and industry-specific “embedded” models – Pisano uses synthetic data generated internally for training. This approach ensures that no real customer data is used across clients, and each organization’s data remains fully isolated and protected from misuse. By maintaining these boundaries, Pisano guarantees that your data stays yours and is never utilized to benefit or inform other clients’ AI systems. 
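For illustration, the brief sketch below shows one way such a tenant filter can be enforced in code before a fine-tuning job runs. The record structure and helper are hypothetical and simplified, not Pisano’s production implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FeedbackRecord:
    tenant_id: str  # identifies the customer the record belongs to
    text: str       # feedback content (already anonymized upstream)

def select_training_records(records: list, tenant_id: str) -> list:
    """Return only the records owned by `tenant_id`.

    A fine-tuning (Edge model) job for one customer must never see
    another customer's data, so the filter is applied unconditionally
    and verified before training starts.
    """
    selected = [r for r in records if r.tenant_id == tenant_id]
    # Defense in depth: fail loudly if a foreign record slipped through.
    assert all(r.tenant_id == tenant_id for r in selected)
    return selected

# Only tenant "acme" records reach acme's fine-tuning job.
pool = [FeedbackRecord("acme", "Great service"), FeedbackRecord("globex", "Slow reply")]
print(select_training_records(pool, "acme"))
```

The redundant assertion reflects the defense-in-depth posture described above: isolation is both applied and verified before any training begins.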

Privacy Protection and Anonymization: Privacy-by-design is embedded in Pisano’s AI workflows. Prior to any data being processed by our AI algorithms, we perform rigorous anonymization and masking of personal information. Identifiers such as names, contact details, or any personally identifiable information (PII) are removed or obfuscated. This means that even as the AI model learns from feedback or generates analytics, it does so without ever exposing sensitive personal data. In cases where aggregated feedback data is used to improve our core models, we only use it in a masked form, ensuring individuals cannot be re-identified and no private content is revealed. These measures align with data protection best practices and regulations, allowing us to continually refine our AI capabilities without compromising user privacy. 
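As a simplified illustration of this masking step, the sketch below replaces detected identifiers with typed placeholders before any text reaches an AI component. The two patterns shown are deliberately minimal; a real anonymizer would combine many more detectors (including name recognition) and locale-aware rules.

```python
import re

# Illustrative patterns only; production anonymization needs a far
# broader detector set (names, addresses, national IDs, and so on).
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def mask_pii(text: str) -> str:
    """Replace detected PII spans with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(mask_pii("Reach me at jane.doe@example.com or +1 555 123 4567."))
# -> "Reach me at [EMAIL] or [PHONE]."
```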

Access Control and Data Isolation: We have implemented strict controls to govern how data is accessed and utilized within our AI systems. Direct access to raw data by end-users or unauthorized personnel is prohibited. Instead, all queries and interactions with the data must go through Pisano’s managed AI interface. Users of Pisano’s applications cannot perform unrestricted or arbitrary queries on underlying datasets; the AI will only provide information that the user is authorized to receive. This access control mechanism, enforced at the application and AI level, ensures that an AI-generated output never includes data that the requesting user should not see. Furthermore, our integration methods guarantee complete data isolation at a technical level. Each client’s data is segregated using unique identifiers rather than personal details, so the AI has no persistent link to actual identities or accounts. This prevents any accidental cross-over of data between clients or any trace of data that could lead back to an individual or a source system. In summary, through robust access controls and isolation techniques, Pisano’s AI platform makes certain that data remains compartmentalized and secure within its proper context. 
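The sketch below illustrates the shape of such a managed interface: every request passes an authorization check before any data is fetched on the AI’s behalf. The permission map and function are hypothetical stand-ins for the platform’s real access layer.

```python
# Hypothetical permission map: which datasets each user may query.
PERMISSIONS = {
    "analyst_42": {"acme_feedback"},
}

def run_ai_query(user_id: str, dataset: str, question: str) -> str:
    """Managed entry point for all AI data access: the model is never
    handed a dataset the requesting user is not entitled to see."""
    if dataset not in PERMISSIONS.get(user_id, set()):
        raise PermissionError(f"{user_id} may not query {dataset}")
    # ...fetch only the authorized dataset, mask it, query the model...
    return f"answer derived from {dataset} only"

print(run_ai_query("analyst_42", "acme_feedback", "Top complaints this week?"))
```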

Output Handling and Validation: All outputs generated by Pisano’s AI undergo careful handling to ensure they meet our privacy and security standards. We classify and manage AI outputs under Pisano’s control before they are delivered to end-users. In practice, this means the AI’s responses or analytical results are checked against our strict data privacy protocols and content guidelines. Any output that could contain sensitive information or otherwise violate our policies is filtered or blocked. By having this extra layer of validation, Pisano ensures that the information provided by the AI is not only relevant and accurate but also appropriate and safe for the intended audience. This controlled output process upholds our commitment that the AI will not inadvertently reveal confidential data or produce biased/harmful content. All AI responses remain consistent with the permissions and privacy expectations set for each user and dataset. 
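One way such an output gate can look is sketched below: each response is screened against blocklist patterns before delivery, and anything suspicious is withheld. The patterns are illustrative; a production filter would also cover policy and toxicity checks.

```python
import re

BLOCKED_OUTPUT_PATTERNS = [
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # e-mail leaked into output
    re.compile(r"\b\d{16}\b"),                   # 16-digit run (possible card number)
]

def validate_output(text: str) -> str:
    """Screen a model response before it is delivered to the user."""
    for pattern in BLOCKED_OUTPUT_PATTERNS:
        if pattern.search(text):
            return "[response withheld: failed privacy screening]"
    return text

print(validate_output("Overall sentiment is positive."))           # passes
print(validate_output("Reach the customer at a@b.com for more."))  # blocked
```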

Model Governance 

Pisano maintains a disciplined approach to how AI models are developed, deployed, and monitored throughout their lifecycle. We differentiate between various model types and apply governance measures appropriate to each. 

AI Model Categories: To serve different customer needs while maintaining control and quality, Pisano supports three categories of AI models: 

  • Core Models: These are general-purpose AI models developed and maintained by Pisano. Core models provide base capabilities (such as language understanding or text analysis) that are broadly applicable across use cases. They are trained on large datasets (synthetic and curated data) and serve as the foundation of our AI services. 
  • Embedded Models: Embedded models are industry-specific AI models created by Pisano to address common patterns or requirements in particular sectors (for example, tailored language models for retail, finance, etc.). They leverage Pisano’s domain knowledge and are trained on domain-relevant synthetic data to ensure effectiveness. Like core models, embedded models are managed by Pisano and offered as part of our platform’s out-of-the-box intelligence for various industries. 
  • Edge Models: Edge models are custom AI models fine-tuned for individual customers’ unique use cases. Authorized customer administrators can create and train Edge models using their own organization’s data, via Pisano’s platform tools, without requiring any coding or IT support. Edge models enable personalization and fine-grained tuning – for example, analyzing company-specific jargon or specialized feedback – while still operating within Pisano’s governance framework. Importantly, even when customers fine-tune these models, the training occurs in isolation with that customer’s data (per our data governance policies above), and other clients cannot access or benefit from those custom models. 

All three types of models are subject to Pisano’s overarching governance. Core and Embedded models are created and updated by Pisano’s AI team, who ensure they meet our quality, fairness, and compliance standards. Edge models, while configurable by clients, are built within Pisano’s platform sandbox, meaning Pisano’s safeguards (such as data masking, bias checks, and output filters) automatically apply to them as well. 
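For illustration, these categories can be thought of as a simple registry record whose visibility rule follows the policy above: Core and Embedded models are shared, while an Edge model is visible only to the tenant that trained it. The sketch below is hypothetical, not the platform’s actual data model.

```python
from dataclasses import dataclass
from enum import Enum

class ModelCategory(Enum):
    CORE = "core"          # general-purpose, Pisano-managed
    EMBEDDED = "embedded"  # industry-specific, Pisano-managed
    EDGE = "edge"          # customer fine-tuned, tenant-scoped

@dataclass(frozen=True)
class ModelRecord:
    name: str
    category: ModelCategory
    tenant_id: str | None  # set only for Edge models

    def visible_to(self, tenant: str) -> bool:
        """Edge models are scoped to their owning tenant; Core and
        Embedded models are available to everyone."""
        if self.category is ModelCategory.EDGE:
            return self.tenant_id == tenant
        return True

edge = ModelRecord("acme-jargon-v1", ModelCategory.EDGE, "acme")
print(edge.visible_to("acme"), edge.visible_to("globex"))  # True False
```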

Model Training and Updates: When developing or updating AI models, Pisano follows rigorous procedures. We perform risk assessments and testing before any new model or model update is released into production. In alignment with regulatory expectations for high-stakes AI, Pisano emphasizes dataset quality and bias mitigation in training – for example, using high-quality, representative data and avoiding features that could lead to discriminatory outcomes[11]. Models are evaluated for accuracy, robustness, and fairness prior to deployment. We also implement a process for continuous learning that respects data privacy: if our AI models learn from new data, they do so using anonymized and aggregated inputs, ensuring that no private information is incorporated into the learned patterns. Pisano does not use customers’ raw data to globally improve our models without permission; any general improvements rely on either synthetic data or broad insights derived from masked data, thus protecting customer confidentiality. 
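A pre-deployment gate of this kind can be reduced to a simple predicate, sketched below with hypothetical thresholds: a model update ships only if overall accuracy clears a floor and the accuracy gap across demographic or segment groups stays within bounds.

```python
# Hypothetical release thresholds; real values would be set per use case.
MIN_ACCURACY = 0.90
MAX_DISPARITY = 0.05  # largest allowed accuracy gap between groups

def release_gate(accuracy: float, group_accuracies: dict[str, float]) -> bool:
    """Approve a model update only if it is accurate overall and no
    group fares markedly worse than another (a basic fairness check)."""
    disparity = max(group_accuracies.values()) - min(group_accuracies.values())
    return accuracy >= MIN_ACCURACY and disparity <= MAX_DISPARITY

print(release_gate(0.93, {"group_a": 0.94, "group_b": 0.92}))  # True
print(release_gate(0.93, {"group_a": 0.97, "group_b": 0.88}))  # False
```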

Output Quality and Oversight: Pisano fully manages and oversees the outputs of all AI models to ensure they are reliable and compliant. We have established criteria for acceptable AI outputs and put in place monitoring to catch any anomalies. As noted in Data Governance, every AI-generated output is automatically screened through Pisano’s privacy and security filters. In addition, we log AI decisions and important metadata about model outputs. This logging creates an audit trail for traceability and helps our team and our customers trace how an AI result was produced, contributing to both transparency and accountability. If an AI output is found to be problematic (e.g., incorrect, biased, or inappropriate), Pisano’s team will investigate and take corrective action, which may include refining the model, adjusting its training data, or improving the output filters. We also enable human oversight in the loop where necessary: for certain sensitive applications, Pisano or the customer can require that AI outputs are reviewed by a human before any final action is taken. By combining automated safeguards with human review and robust logging, Pisano’s model governance ensures that AI behavior remains under careful supervision and control at all times. 
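The sketch below shows one possible shape for such an audit record: each AI decision is logged with a hash of the input (so no raw data or PII enters the logs), the output, and a flag marking results routed for human review. The schema is illustrative, not Pisano’s actual log format.

```python
import json
import time

def log_ai_decision(model: str, input_hash: str, output: str, flagged: bool) -> None:
    """Append a structured audit record so any AI result can later be
    traced to the model version and input that produced it."""
    record = {
        "ts": time.time(),
        "model": model,
        "input_hash": input_hash,  # hash, not raw input: keeps PII out of logs
        "output": output,
        "needs_human_review": flagged,
    }
    print(json.dumps(record))  # in practice: append to a durable audit store

log_ai_decision("core-sentiment-v3", "b1946ac9", "negative", flagged=True)
```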

Security and Compliance 

Pisano’s AI infrastructure and operations adhere to strict security standards and regulatory requirements. We recognize that AI systems must not only be innovative, but also safe, secure, and lawful in their design and use. 

Secure Infrastructure: All of Pisano’s AI systems are deployed within secure cloud environments with strong protections. Pisano leverages the Amazon Web Services (AWS) cloud, utilizing its enterprise-grade security features including encryption of data at rest and in transit, network firewalls, and continuous monitoring. Our AI processing, particularly for advanced functions like natural language understanding, integrates with LLM platforms. We ensure that this integration is done in a secure and privacy-preserving manner: data sent to LLM platforms is processed on EU servers to keep it within Europe’s jurisdiction, and no personal data is ever transmitted in the process. All sensitive fields are masked prior to sending requests to the AI model, so the external AI service never sees real personal identifiers. We have also put robust technical measures in place to prevent any data from persisting on external systems beyond what is necessary to generate the AI output. By confining AI operations to trusted environments (Pisano’s AWS infrastructure and EU data centers) and minimizing data exposure, we drastically reduce security risks and ensure compliance with data residency requirements. 
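The round trip described above can be sketched as follows: sensitive spans are replaced with opaque tokens before the prompt leaves the trusted environment, the token-to-value mapping never leaves Pisano’s side, and real values are restored only after the external response returns. The `call_llm` endpoint named in the comments is a hypothetical placeholder for an EU-hosted model service.

```python
import re

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")  # one detector for brevity

def mask_before_send(prompt: str) -> tuple[str, dict[str, str]]:
    """Swap sensitive spans for opaque tokens; keep the mapping locally
    so the external service never sees real identifiers."""
    mapping: dict[str, str] = {}

    def substitute(match: re.Match) -> str:
        token = f"<MASK_{len(mapping)}>"
        mapping[token] = match.group(0)
        return token

    return EMAIL.sub(substitute, prompt), mapping

def unmask(text: str, mapping: dict[str, str]) -> str:
    """Restore real values after the response comes back in-house."""
    for token, value in mapping.items():
        text = text.replace(token, value)
    return text

masked, mapping = mask_before_send("Summarize the complaint from a@b.com.")
print(masked)  # "Summarize the complaint from <MASK_0>."
# reply = call_llm(masked)       # hypothetical EU-hosted endpoint
# print(unmask(reply, mapping))  # values restored only inside Pisano
```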

Regulatory Compliance: Pisano is fully committed to complying with all applicable AI regulations, data protection laws, and industry standards. We have executed Data Processing Agreements (DPAs) with our AI technology providers and abide by the EU’s General Data Protection Regulation (GDPR) for all personal data handling. This means that we uphold principles of lawfulness, necessity, and data minimization in our AI data processing. Our practice of masking personal data before AI processing, for instance, is a direct measure to protect privacy and meet GDPR obligations. We closely follow developments in AI-specific regulations such as the EU AI Act and proactively align our policies with their requirements. For example, the EU AI Act calls for transparency, risk management, and human oversight in high-risk AI systems[12][13] – Pisano’s governance framework already incorporates these elements through documented procedures, bias and risk assessments, and the oversight mechanisms described in this policy. We also ensure that any AI features that may interact with end-users meet the Act’s transparency obligations (like user notifications and documentation) as part of our compliance (see the Observability and Transparency section). Pisano’s legal and compliance teams review our AI offerings against regional regulations (EU, and other jurisdictions as relevant) and international guidelines to ensure continuous adherence. Non-compliance is not only a legal risk but also an erosion of user trust; hence, compliance is a core pillar of our AI strategy. 

Access Controls and Employee Training: Security isn’t only about technology – it’s also about people. Pisano restricts access to AI systems and data strictly on a need-to-know basis. Only authorized personnel (such as specific engineers or support staff) have access to the AI model environments and logs, and even then, their access is limited to the minimum necessary to perform their roles. For example, if a support engineer needs to investigate an issue for a customer, they will be granted temporary access to the relevant data or logs, and only to that which is required to resolve the issue. Our support staff do not have open-ended access to customer data or AI outputs – all such access is logged and audited. We enforce the principle of least privilege across our AI development and operations teams. Moreover, Pisano provides regular training to employees and contractors on data security, privacy, and AI ethics. Everyone involved in building or supporting our AI features is educated about this AI Governance Policy and the obligations it carries. By combining strict technical access controls with workforce training and awareness, we ensure that human access to AI systems is secure, appropriate, and accountable. 
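As a simplified illustration of time-boxed, audited access, the sketch below grants an engineer access to one named resource with an expiry, logs the grant, and denies everything else by default. The grant store and names are hypothetical.

```python
import time

# Hypothetical grant store: (user, resource) -> expiry timestamp.
GRANTS: dict[tuple[str, str], float] = {}

def grant_temporary_access(user: str, resource: str, minutes: int = 60) -> None:
    """Time-boxed, least-privilege grant; every grant is audit-logged."""
    GRANTS[(user, resource)] = time.time() + minutes * 60
    print(f"AUDIT: granted {user} -> {resource} for {minutes} min")

def has_access(user: str, resource: str) -> bool:
    """Deny by default; allow only unexpired grants for this resource."""
    expiry = GRANTS.get((user, resource))
    return expiry is not None and time.time() < expiry

grant_temporary_access("support_7", "acme_logs", minutes=30)
print(has_access("support_7", "acme_logs"))    # True (within the window)
print(has_access("support_7", "globex_logs"))  # False (never granted)
```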

Observability and Transparency 

Pisano believes that continuous observability of AI systems and transparency about their functioning are crucial for trust. We have put in place tools and practices to make the inner workings and the output of our AI as visible and understandable as possible, both to our internal teams and to our customers. 

System Monitoring and Dashboards: To facilitate real-time observability, Pisano’s AI platform includes a dedicated AI Dashboard that provides insight into the AI’s activities and performance. This dashboard is a part of our Advanced Text Analytics module and is designed with transparency in mind. It displays a clear view of the inputs received by the AI and the outputs generated. For example, if the AI module analyzes a batch of customer feedback, authorized users can see summary information about what was processed and the results (such as identified sentiment or categories), without exposing any sensitive raw data. The dashboard and associated logs allow us to determine the functional state of the AI at any time – we can observe whether it’s operating within expected parameters and detect anomalies by watching its input-output behavior. This level of observability gives both Pisano and our clients operational insight into the AI’s functioning, acting as an early warning system for issues and a validation that the AI is doing what it is intended to do. Additionally, we maintain automated alerting on key metrics (like error rates, response times, unusual output patterns), so that if something goes wrong or drifts out of bounds, our teams are immediately notified and can take action. All these measures ensure that Pisano’s AI does not operate as a “black box” – instead, it is continuously watched and transparently reported on. 
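The alerting described here can be illustrated with a minimal threshold check; the metric names and limits below are hypothetical examples, not Pisano’s operational values.

```python
# Illustrative alert thresholds; real values are tuned per metric.
THRESHOLDS = {"error_rate": 0.02, "p95_latency_ms": 1500.0}

def check_metrics(metrics: dict[str, float]) -> list[str]:
    """Compare live metrics against thresholds and return an alert for
    anything out of bounds, so on-call staff are notified immediately."""
    return [
        f"ALERT: {name}={value} exceeds {THRESHOLDS[name]}"
        for name, value in metrics.items()
        if name in THRESHOLDS and value > THRESHOLDS[name]
    ]

print(check_metrics({"error_rate": 0.035, "p95_latency_ms": 900.0}))
# -> ['ALERT: error_rate=0.035 exceeds 0.02']
```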

Transparency to Stakeholders: Beyond technical monitoring, Pisano is committed to external transparency about our AI systems. We provide clear documentation and communication to our customers and users regarding how our AI features work. This includes publishing explainers or whitepapers on our AI model design, intended use cases, and limitations. For instance, we disclose the types of data our models are trained on (e.g., synthetic data for core models, client data for their own edge models) and the evaluation processes we use. In line with forthcoming regulatory requirements, when an AI system is interacting directly with users or making automated decisions that affect them, Pisano will inform those users that AI is involved[3]. If, for example, we deploy a conversational AI assistant in a product interface, we will make it clear (through on-screen messages or documentation) that responses are generated by AI, thereby allowing users to make informed decisions in their engagement with the system[14]. We also ensure that users have avenues to ask for clarification or contest an AI-driven decision, which is part of being transparent and fair. Pisano’s transparency efforts aim to meet and exceed the “transparency and explainability” principles advocated by international AI ethics guidelines[2]. By openly sharing information about our AI and ensuring its operations are visible, we build trust with our clients and end-users. Transparency is not a one-time effort but an ongoing commitment – as our AI systems evolve, we will continue updating our communications and tools to keep all stakeholders informed. 

Versioning and Update History 

This AI Governance Policy is a living document and will be periodically updated to reflect new practices, technologies, or regulatory requirements. Below is the version history of the policy: 

  • Version 1.0 – January 2025: Initial release of Pisano’s AI Governance Policy (internal use). Established the foundational principles, data governance approach, model categories, and compliance measures for AI at Pisano. 
  • Version 1.1 – September 2025: First public publication of the AI Governance Policy. Revised for clarity and external transparency, with updates aligning the policy to the EU AI Act and other global AI governance standards. Added details on guiding principles, enhanced transparency commitments, and included this versioning history section. 

Citations

[1] [11] AI Act | Shaping Europe’s digital future 

https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai 

[2] Artificial Intelligence Risk Management Framework (AI RMF 1.0) 

https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf 

[3] [12] [13] [14] Everything You Need To Know (So Far) About The EU AI Act – ISMS.online 

https://www.isms.online/iso-42001/everything-you-need-to-know-so-far-about-the-eu-ai-act/