While most countries are still working out how to regulate artificial intelligence, the European Union has taken the lead with a risk-based framework passed earlier this year. The law came into force in August and is still being fine-tuned, but its provisions will soon apply to makers of AI apps and models, making compliance a top priority.
One of the biggest open questions now is how to evaluate whether AI models are meeting their legal obligations. This is where LatticeFlow AI, a spinout from ETH Zurich focused on AI risk management and compliance, comes in. The company recently released what it describes as the first technical interpretation of the EU AI Act, mapping regulatory requirements to technical specifications, alongside Compl-AI, an open-source LLM validation framework that helps model makers assess their compliance with the law.

In collaboration with ETH Zurich and Bulgaria’s INSAIT, LatticeFlow has created a platform where AI model makers can request evaluations of their technology’s compliance with the EU AI Act. The team has also evaluated popular LLMs, ranking them by how well they meet the law’s requirements. Performance varied across the benchmarks, highlighting strengths and weaknesses in different model capabilities and underscoring the need for a more balanced approach to model development.
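To make the idea of mapping legal requirements to technical benchmarks concrete, here is a minimal sketch of how per-benchmark scores could be aggregated into per-requirement compliance scores. The requirement names, benchmark names, and scores below are purely illustrative assumptions, not LatticeFlow's actual data or the Compl-AI API.

```python
from statistics import mean

# Hypothetical mapping from EU AI Act-style requirements to benchmark
# scores in [0.0, 1.0]. All names and numbers here are invented for
# illustration; Compl-AI defines its own requirements and benchmarks.
REQUIREMENT_TO_BENCHMARKS = {
    "robustness": {"adversarial_qa": 0.82, "perturbed_mmlu": 0.74},
    "fairness": {"bias_probe": 0.61, "stereotype_probe": 0.58},
    "safety": {"harmful_request_refusal": 0.93},
}

def compliance_report(mapping):
    """Average the benchmark scores under each requirement."""
    return {
        requirement: round(mean(scores.values()), 2)
        for requirement, scores in mapping.items()
    }

if __name__ == "__main__":
    for requirement, score in sorted(compliance_report(REQUIREMENT_TO_BENCHMARKS).items()):
        print(f"{requirement}: {score}")
```

A real framework would weight benchmarks differently, handle missing evaluations, and tie each requirement back to specific articles of the law, but the core pattern, many technical signals rolled up per legal obligation, is the same.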
Looking ahead, LatticeFlow says the framework will continue to evolve alongside updates to the EU AI Act and advances in AI. The goal is a comprehensive assessment platform that promotes compliance and safety in AI development, and the team is inviting the community to contribute to the open-source project.