Essential Insights: Automated Decisions, GDPR & the EU AI Act
- Author: Arno Schlösser, DP-Dock GmbH
- Last updated: November 2025
- Category: Enforcement, Data Security, General obligations
Hamburg sets a precedent for automated decisions.
A Hamburg company was fined €492,000 for rejecting credit applications through fully automated algorithms without adequate explanation or human review. When applicants asked why their applications had been rejected, the company failed to provide meaningful information about the logic involved. Regulators are signaling that opaque AI-driven decisions will no longer be accepted.
What This Means for All Companies Using Fully Automated Algorithms
Even though the Hamburg case arose in the financial sector, the implications extend far beyond credit scoring. Any company that relies on fully automated algorithms to make decisions affecting individuals must treat this case as a clear warning signal.
1. GDPR rules apply broadly
Under Article 22 GDPR, individuals subject to fully automated decisions with legal or similarly significant effects are entitled to:
- transparency about the logic behind the decision,
- meaningful explanations on request,
- human intervention on demand,
- the ability to contest decisions.
2. The AI Act will tighten expectations further
This enforcement action underlines the increasing regulatory scrutiny of algorithmic decision-making and the critical importance of transparency and accountability in AI-driven processes. In addition to the GDPR, the EU AI Act will also become relevant, as it contains further requirements, particularly regarding the use of high-risk AI systems, which complement the obligations of the GDPR.
Bottom Line
The Hamburg case is not a blanket precedent, but it is a clear regulatory signal.
Any company using automated processing, especially for decisions that significantly affect individuals, must ensure that transparency, explainability, and human oversight are firmly in place.
If you have any questions or would like further guidance regarding the Hamburg case or automated decision-making compliance, please do not hesitate to contact us at any time.