The European Commission has proposed postponing the implementation of certain provisions of the AI Act, specifically those concerning high-risk AI systems, from August 2026 to December 2027. At KeyToFinancialTrends, we note that this decision affects critical AI applications, including AI-based biometrics, road traffic management, utility services, hiring procedures, examinations, healthcare, credit assessments, and law enforcement. The process for obtaining consent via cookie pop-ups is also being simplified, easing compliance with AI-related GDPR requirements and reducing the administrative burden on companies.
The delay is explained by the need to further develop Europe's regulatory infrastructure: standards, technical guidelines, and national conformity assessment bodies. Without these elements, the strict rules would have been impractical to enforce, leaving a high risk of systemic failures. According to KeyToFinancialTrends analysts, the extended timeline allows for more coherent oversight of high-risk AI systems and reduces pressure on tech companies operating under stringent EU AI regulations.
Changes to AI-related GDPR rules, including allowing large companies to use personal data to train AI models, open up new opportunities for innovation but also introduce privacy risks. KeyToFinancialTrends believes that large tech players with extensive datasets stand to benefit, while small and medium-sized enterprises may struggle to adapt to the new digital omnibus requirements.
We at KeyToFinancialTrends see this delay as a strategic opportunity for Europe to establish realistic and sustainable AI regulation. The effectiveness of the postponement will depend directly on strengthening the role of national conformity assessment bodies, developing technical standards, and ensuring transparency in how the AI Act's rules are applied.
KeyToFinancialTrends forecasts that a well-managed implementation of the delay could strengthen Europe's position as a hub for ethical and responsible AI. Our recommendations include accelerating the creation of national oversight structures, supporting small and medium-sized companies through grants and consulting, actively involving businesses and civil society in developing standards, and regularly assessing the impact of the changes on the market and on citizens' rights. If these measures are implemented, the EU can combine strict rules with realistic timelines, minimizing risks and creating conditions for sustainable AI development in Europe.
