Understanding the EU AI Act: Key regulations for AI compliance
Vertex AI Foundations offers a robust solution that empowers businesses to meet AI compliance requirements while optimising system performance. As the demand for transparency and accountability in AI systems grows, it provides essential tools for traceability, security, and operational quality, and aligns with key regulations. This post explores how Vertex AI Foundations can help your business build compliant, high-performing AI systems.
The AI Act can be broken down into these core sections:
- Banned applications: These applications pose a threat to the rights of EU citizens. Examples include creating scraped facial recognition databases. Another example is categorising individuals based on sensitive attributes like race or sexual orientation. Such systems can have detrimental effects on specific groups. Consequently, they are prohibited except in limited circumstances, such as law enforcement.
- High-risk systems: Certain AI applications are classified as high-risk. This is due to their potential impact on individuals’ lives. These applications encompass various domains. Examples include recruitment, healthcare, and the operation of critical infrastructure. They also include systems designed to influence elections. Because of their significant impact, these systems must adhere to stringent guidelines.
- General AI systems: Generative AI models are rapidly gaining prominence. These models are undoubtedly powerful. However, they are prone to generating inaccurate outputs, sometimes referred to as “hallucinations.” Some models have raised concerns regarding copyright infringement. The potential for misinformation poses another challenge. Therefore, generative AI applications must comply with specific requirements. These include producing comprehensive technical documentation and adhering to EU copyright law. They must also provide detailed summaries of the data used for training.
How can Devoteam’s Vertex AI Foundations help you comply?
Devoteam’s Vertex AI Foundations is an MLOps accelerator. It facilitates rapid deployment and utilisation of Google’s Vertex AI product suite, combining Google’s technology with Devoteam’s extensive experience in implementing AI systems. This section explains how Vertex AI Foundations addresses the requirements set out by the AI Act.
Traceability
The EU AI Act grants individuals a significant right: they can demand an explanation for decisions made by AI systems. This underscores the critical role of traceability, which entails the ability to explain an AI model’s output by examining all factors that influenced its training and prediction processes. These factors encompass the training data, any data transformations applied, the hyperparameters used during training, and the data used at runtime to generate predictions.
This challenge can be divided into two distinct aspects: training and inference.
Traceability in training
Ensuring traceability in training requires a robust mechanism for tracing the lineage of your model artefact back to the source data, along with any modifications made along the way. To achieve this, Vertex AI Foundations employs Vertex AI Pipelines, an MLOps tool built on the open-source framework Kubeflow.
Vertex AI Pipelines comprises components and artefacts: components execute specific code, while artefacts represent the outputs those components generate. Crucially, each pipeline execution bundles all artefacts together and assigns them a version, so you can take a specific model version and trace it back to the corresponding version of the input data used for training.
Vertex AI Pipelines offers another advantage: by tracking lineage, you can identify and address potential biases present in the training data, which can otherwise lead to discriminatory outcomes. This proactive approach to bias mitigation aligns with the AI Act’s emphasis on fairness and accountability in AI development.
Image 1: An example of a Vertex AI Pipeline
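Conceptually, the versioned bundling described above can be reduced to a few lines of plain Python. The sketch below is an illustration of the lineage idea only, not the Vertex AI SDK: the `Artefact` and `PipelineRun` classes are hypothetical, using a content hash as the artefact version.

```python
import hashlib
import json
from dataclasses import dataclass, field


@dataclass
class Artefact:
    """An output produced by a pipeline component, identified by a content hash."""
    name: str
    content: str

    @property
    def version(self) -> str:
        return hashlib.sha256(self.content.encode()).hexdigest()[:12]


@dataclass
class PipelineRun:
    """Bundles every artefact of one execution under a single run ID."""
    run_id: str
    artefacts: dict = field(default_factory=dict)

    def record(self, artefact: Artefact) -> None:
        self.artefacts[artefact.name] = artefact.version

    def lineage(self) -> str:
        return json.dumps({"run_id": self.run_id, "artefacts": self.artefacts})


# One training run: the dataset and the model it produced share a run ID,
# so a given model version can always be traced back to its input data.
run = PipelineRun(run_id="train-2024-06-01")
data = Artefact("training_data", "age,income\n34,51000\n29,48000")
model = Artefact("model", "coefficients=[0.42, 1.7]")
run.record(data)
run.record(model)
print(run.lineage())
```

Because the version is derived from the content, any change to the training data yields a new version, and the run record ties that version to the resulting model.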
Traceability at inference time
During inference, predictions often incorporate real-time data. If this data is not properly logged, it can be challenging to understand why a specific output was generated. Vertex AI Foundations addresses this challenge through tight integration with Google’s Monitoring suite. This enables comprehensive tracking of all predictions made by the model.
Moreover, Vertex AI provides an Explainable AI feature. This offers explanations for individual predictions. It achieves this by analysing the input features provided to the model. This information can be combined with traceability data from the training phase. This allows you to pinpoint potential anomalies within the training dataset.
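To make the inference-time requirement concrete, the sketch below shows the kind of audit record worth capturing with every prediction. It is a stdlib-only illustration: `predict_with_audit` and its scoring rule are hypothetical stand-ins for a call to a deployed model endpoint.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("prediction-audit")


def predict_with_audit(model_version: str, features: dict) -> float:
    """Score a request and log everything needed to explain the output later."""
    # Placeholder scoring rule; a real system would call the deployed model.
    score = 0.3 * features["income"] / 100_000 + 0.1 * (features["age"] > 30)
    logger.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "features": features,  # the runtime inputs behind this output
        "prediction": score,
    }))
    return score


result = predict_with_audit("model-v3", {"age": 34, "income": 51_000})
```

Logging the model version alongside the runtime features is what later lets you join an individual prediction back to the training lineage of that model version.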
Operational performance quality
The AI Act mandates the monitoring of operational performance. For AI systems, this means companies must diligently track predictions. They must also compare these predictions against the actual observations. Furthermore, monitoring the input data is essential to detect data drift. Data drift occurs when the distribution of input data changes significantly. This can adversely impact the accuracy of the model’s output.
Vertex AI Foundations supports these monitoring requirements through integrations with Google’s Monitoring suite and BigQuery. This allows you to effectively detect model degradation. Because Vertex AI Foundations utilises Vertex AI Pipelines, it can automatically trigger a new training pipeline when degradation is detected. Alternatively, it offers the option of human intervention if necessary.
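One common way to quantify the drift described above is the Population Stability Index (PSI), which compares a feature's distribution at training time against its distribution at runtime. Below is a minimal stdlib-only sketch; the 0.2 threshold is a commonly used rule of thumb, not a value prescribed by the AI Act or by Vertex AI.

```python
import math


def psi(expected: list[float], actual: list[float], bins: int = 4) -> float:
    """Population Stability Index between a baseline and a live feature sample."""
    lo = min(expected + actual)
    hi = max(expected + actual)
    width = (hi - lo) / bins or 1.0

    def shares(values: list[float]) -> list[float]:
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / width), bins - 1)
            counts[idx] += 1
        # Small floor avoids division by zero for empty bins.
        return [max(c / len(values), 1e-4) for c in counts]

    e, a = shares(expected), shares(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))


baseline = [20.0, 25.0, 30.0, 35.0, 40.0, 45.0, 50.0, 55.0]  # training-time ages
live = [45.0, 50.0, 55.0, 60.0, 60.0, 65.0, 65.0, 70.0]      # skewed runtime ages
drifted = psi(baseline, live) > 0.2
if drifted:
    print("Drift detected: trigger a retraining pipeline or alert a human.")
```

In a production setup, a check like this would run on a schedule over logged inputs, and a positive result would kick off the retraining pipeline or an alert, mirroring the automatic-versus-human-intervention choice described above.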
Cybersecurity
Security is paramount at Devoteam, and this commitment extends to Vertex AI Foundations. It adheres to security best practices and leverages our extensive expertise in Google Cloud to ensure a secure setup. Vertex AI Foundations manages permissions, service accounts, and networking through infrastructure as code, using Terraform, which enforces the principle of least-privilege access.
Furthermore, Vertex AI Foundations supports clear environment separation. This ensures that sensitive production data is not replicated outside the production environment.
Technical documentation
Another requirement of the AI Act is comprehensive technical documentation. AI systems deployed in production environments must be properly documented. When using Vertex AI Foundations, customers benefit from Devoteam’s extensive documentation. We provide detailed information on how the Foundations are set up and the rationale behind specific design decisions.
Vertex AI Pipelines further streamlines the documentation process. It provides comprehensive templates. This allows you to focus on documenting content specific to your unique use cases. This significantly reduces the time spent on documentation. As a result, you can allocate resources more effectively.
Data quality
The AI Act also mandates the use of high-quality data. This is a broad subject with various facets. In this context, we will concentrate on aspects specific to machine learning. A future blog post will delve deeper into data quality and GDPR compliance.
Assuming your organisation has access to high-quality data, a feature store is crucial. A feature store serves as a central repository for the input features used by your models and provides a place to standardise them. It also acts as a governance gate: if sensitive data is not available in the feature store, models cannot use it for training or prediction.
Beyond governance, a feature store offers a historical view of features over time. For example, consider customer behaviour. This can change significantly. A feature store allows you to link events to features at specific points in time.
Lastly, a feature store is an excellent tool for data drift detection. You can use this to automatically trigger the re-training of models that use these features.
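The point-in-time idea behind this historical view can be sketched with a minimal in-memory store. This is illustrative only; a managed product such as Vertex AI Feature Store handles the same concern at scale.

```python
import bisect
from collections import defaultdict


class FeatureStore:
    """Minimal sketch: feature values kept with timestamps per entity, so
    training examples can be joined to features as of the event time."""

    def __init__(self) -> None:
        self._history = defaultdict(list)  # (entity, feature) -> [(ts, value)]

    def write(self, entity: str, feature: str, ts: int, value: float) -> None:
        bisect.insort(self._history[(entity, feature)], (ts, value))

    def read_as_of(self, entity: str, feature: str, ts: int) -> float:
        """Return the latest value at or before ts (point-in-time correctness)."""
        rows = self._history[(entity, feature)]
        idx = bisect.bisect_right(rows, (ts, float("inf"))) - 1
        if idx < 0:
            raise KeyError(f"No value for {entity}/{feature} before t={ts}")
        return rows[idx][1]


store = FeatureStore()
store.write("customer-1", "avg_basket", ts=10, value=42.0)
store.write("customer-1", "avg_basket", ts=20, value=55.0)

# A label observed at t=15 must use the feature value known at that time,
# not the later one, or the training set leaks future information.
print(store.read_as_of("customer-1", "avg_basket", ts=15))  # 42.0
```

The same timestamped history is what makes drift detection possible: comparing recent feature values against an older window reveals distribution shifts.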
How Devoteam’s Vertex AI Foundations can help you work with generative AI
Vertex AI Foundations also includes modules that support the use of Google’s Generative AI products. These products primarily utilise Google’s own generative AI models or open-source models. This means it is the responsibility of those foundational model providers to comply with disclosure requirements regarding training datasets.
Fine-tuning presents a unique scenario. Fine-tuning involves taking a foundational model and further training it with your own data. In this case, your organisation must disclose the training data used. Vertex AI seamlessly integrates fine-tuning with its existing pipeline infrastructure. This ensures that model lineage is preserved throughout the fine-tuning process. This allows you to trace the model’s output back to the specific training data used. This enables the identification and correction of any biases that may arise due to changes in the data.
Vertex AI Foundations simplifies the use of these products. It provides templates for common use cases. One such example is Retrieval Augmented Generation (RAG). This technique involves retrieving data from external sources to augment the prompt given to a generative AI model. It is crucial to log all of this information to ensure explainability. You need to understand why the model produced a specific output. This is based on the model version and the supplementary data used. Vertex AI Foundations achieves this through a combination of Cloud Monitoring and Vertex AI Pipelines.
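The RAG logging requirement can be illustrated with a minimal sketch. Everything here is a hypothetical stand-in: the in-memory document source replaces a vector database, the keyword `retrieve` replaces semantic search, and the model call is stubbed out, but the audit record captures the same information a real setup would need.

```python
import json
from datetime import datetime, timezone

# Hypothetical in-memory document source standing in for a vector database.
DOCUMENTS = {
    "returns-policy": "Items can be returned within 30 days of delivery.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

AUDIT_LOG: list[dict] = []


def retrieve(query: str) -> list[str]:
    """Naive keyword retrieval; a real system would query a vector store."""
    return [text for key, text in DOCUMENTS.items()
            if key.split("-")[0] in query.lower()]


def rag_answer(query: str, model_version: str) -> str:
    context = retrieve(query)
    prompt = f"Context: {' '.join(context)}\nQuestion: {query}"
    answer = f"[{model_version} response to augmented prompt]"  # model call stubbed out
    # Log everything needed to explain this output later: the model version,
    # the retrieved context, and the final augmented prompt.
    AUDIT_LOG.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "retrieved_context": context,
        "prompt": prompt,
        "answer": answer,
    })
    return answer


rag_answer("What is your returns policy?", model_version="gemini-stub-v1")
print(json.dumps(AUDIT_LOG[-1], indent=2))
```

With the model version and the retrieved context logged per request, you can reconstruct why a given output was produced, which is exactly the explainability property the text above calls for.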
Copyright
Google offers indemnity for a large portion of their Generative AI stack. Provided you use the product suite responsibly and without malicious intent, Google will indemnify you against third-party IP claims. This offers valuable protection for customers using generative AI. Such lawsuits can be costly and time-consuming.
EU AI Act: Building a solid basis for any ML system
The EU AI Act establishes crucial requirements for the responsible development and deployment of AI. This benefits society as a whole. It ensures a fairer future for everyone. While meeting these regulations may initially appear challenging, solutions are available to support you on this journey.
Devoteam’s Vertex AI Foundations can help. It allows you to quickly implement best practices. This ensures your ML system is built on a solid foundation. This foundation will fully comply with the EU AI Act and other relevant regulations.
Lead in AI Compliance & Performance with Vertex AI
Leverage Vertex AI for top-tier compliance and performance. Contact our experts for tailored AI solutions.