Growing need for ethics in embedded systems: Can we trust AI-powered decisions?
Author: IntaPeople | Date published: 23/06/25


From adaptive cruise control to battery-management algorithms, embedded electronics are now making (or heavily influencing) split-second safety decisions. As hiring managers in Europe race to add AI-ready skills to their teams, an uncomfortable question keeps surfacing:
Can we really trust these systems to act fairly and safely?
And if something goes wrong, who carries the blame: the developer, the company, or the code itself?
Trust is wobbling
A 2025 Deloitte study found that roughly one in four drivers in major EU markets remains sceptical about AI in cars, citing worries over data privacy and real-time decision-making. German respondents were near the top of the mistrust league at 25 per cent.
Regulation is catching up, quickly
- EU AI Act: Now in force, the Act classifies AI “safety components in transport” as high-risk and insists on risk mitigation, human oversight, and exhaustive logging. Fines for non-compliance can reach 7 per cent of global turnover.
- Product liability shake-up: Proposed updates place explicit responsibility on manufacturers for harm caused by software faults, including over-the-air updates.
- Safety standards: Safety of the Intended Functionality (SOTIF, ISO 21448) and the related functional safety standard ISO 26262 are fast becoming essential for embedded engineering teams.
In short, the machine is never legally “at fault”. The organisation deploying it is.
Designing for ethics, not just function
1. Human-in-the-loop by design – The EU AI Act demands “effective human oversight”. UK guidance refers to this as a Safe & Ethical Operating Concept.
2. Bias and edge-case testing – Diverse sensor data and adversarial scenarios help reduce automation bias.
3. Explainability tooling – Black-box neural nets are losing favour. Traceable decision logs are now the expectation.
4. Fail-safe defaults – If model confidence drops below a threshold, the system must degrade gracefully rather than act on an uncertain inference (see the sketch after this list).
5. Ethical review gates – Formal checkpoints (similar to code reviews) where safety and fairness are signed off by cross-functional teams.
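To make items 3 and 4 concrete, here is a minimal sketch in C of a confidence-gated decision path. The threshold value, the function names (`select_action`, `log_decision`) and the plain-`printf` logging sink are illustrative assumptions, not a prescribed implementation; a production system would route logs to tamper-evident storage and define its fallback behaviour in the safety concept.

```c
#include <stdio.h>
#include <stdint.h>

/* Illustrative threshold: the real value comes from your safety analysis. */
#define CONFIDENCE_FLOOR 0.85f

typedef enum {
    ACTION_AUTONOMOUS,   /* model output is trusted */
    ACTION_DEGRADED      /* fall back to a conservative default */
} action_mode_t;

/* Log every decision with enough context to trace it later:
   timestamp, model confidence, and the mode that was chosen. */
static void log_decision(uint32_t timestamp_ms, float confidence,
                         action_mode_t mode)
{
    printf("[%u ms] confidence=%.3f mode=%s\n",
           (unsigned)timestamp_ms, (double)confidence,
           mode == ACTION_AUTONOMOUS ? "AUTONOMOUS" : "DEGRADED");
}

/* Gate the model's output behind a confidence check: below the floor,
   degrade gracefully instead of acting on an uncertain inference. */
static action_mode_t select_action(uint32_t timestamp_ms,
                                   float model_confidence)
{
    action_mode_t mode = (model_confidence >= CONFIDENCE_FLOOR)
                             ? ACTION_AUTONOMOUS
                             : ACTION_DEGRADED;
    log_decision(timestamp_ms, model_confidence, mode);
    return mode;
}

int main(void)
{
    select_action(1000, 0.97f);  /* confident: act autonomously */
    select_action(1020, 0.61f);  /* uncertain: degrade to safe default */
    return 0;
}
```

The point is structural rather than clever: every decision, autonomous or degraded, leaves a traceable record, and low confidence never translates directly into action.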
Are ethical reviews part of your process?
Ask yourself:
- Do you run a structured ethics review before every major firmware release?
- Can your engineers map a failure back to a dataset, line of code, and design decision?
- Is accountability clear in your supply chain contracts, especially with freelance or third-party contributors?
If the answer to any of these is “not sure,” you may have a skills gap as well as a compliance risk.
The talent pinch
Engineers who understand both deep-learning pipelines and ISO 21448 are scarce. Many teams in Europe are now working with IntaPeople to bridge the gap with contract specialists in functional safety, SOTIF and AI assurance. This keeps projects on track while permanent headcount approvals work their way through internal processes.
How IntaPeople can help
At IntaPeople, we maintain a vetted network of embedded-systems contractors who combine AI expertise with a deep understanding of safety and ethics. This is ideal for projects where compliance and trust are just as important as delivery. Whether you need a short-term audit lead or a full project team, we’ll connect you with talent who can deliver and de-risk your next release.
Ready to embed ethics into your embedded systems? Get in touch with our team today.