Building Trustworthy Robotics and Cobotic Systems
From Isolated Machines to Human-centric Systems
Industrial robots have traditionally operated behind fences—fast, powerful, and isolated from people. Today, this paradigm is changing rapidly. Collaborative robots (cobots), mobile platforms, and even humanoid systems are increasingly designed to work alongside humans, not separated from them.
This shift fundamentally changes the requirements for embedded systems. Once robots leave their cages, trustworthiness becomes a non-negotiable property. Trustworthiness is not achieved through a single feature but must be designed into the system from the very beginning—across safety, security, architecture, and lifecycle processes.
The Growing Role of Robotics in Society
Several macro trends are accelerating the adoption of robotics:
- Demographic Change: Aging populations are increasing demand for automation in healthcare, logistics, and daily assistance.
- Productivity Pressure: Industry seeks higher efficiency, flexibility, and adaptability beyond traditional automation.
- Technological Convergence: Robotics, AI, high-performance computing, and low-latency connectivity are increasingly interdependent.
While these drivers unlock new opportunities, they also raise fundamental concerns. Systems that interact directly with humans must be not only intelligent and efficient, but also predictable, safe, and secure under all conditions.
Safety Is Not Optional, and Security Is a Prerequisite
A key misconception in embedded systems is that functional safety and cybersecurity are independent disciplines. In reality, they are tightly coupled.
A system that is functionally safe but vulnerable to attack is, by definition, not safe. An attacker who can manipulate software behavior, sensor data, or communication channels can effectively bypass safety mechanisms.
Modern robotic systems therefore require:
- Functional safety compliance (e.g., IEC 61508 up to the higher Safety Integrity Levels)
- Strong security guarantees, including protection against unauthorized access, manipulation, and inter-component interference
This convergence makes security-by-design a foundational principle rather than an add-on.
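To make this coupling concrete, the sketch below shows how a safety function might authenticate motion commands before acting on them, so that a manipulated channel fails safe rather than silently. It is a minimal illustration, not a reference design: hmac_sha256() stands in for a vetted crypto library, and the message layout, field names, and replay check are assumptions made for this example.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define MAC_LEN 32

typedef struct {
    uint32_t seq;          /* monotonic counter, rejects replays          */
    int32_t  target_mNm;   /* commanded joint torque, milli-newton-metres */
    uint8_t  mac[MAC_LEN]; /* HMAC over the fields above                  */
} motion_cmd_t;

/* Assumed to be provided by a certified crypto implementation. */
extern void hmac_sha256(const uint8_t *key, size_t key_len,
                        const uint8_t *msg, size_t msg_len,
                        uint8_t out[MAC_LEN]);

/* Constant-time comparison avoids leaking MAC bytes via timing. */
static bool mac_equal(const uint8_t *a, const uint8_t *b)
{
    uint8_t diff = 0;
    for (size_t i = 0; i < MAC_LEN; i++)
        diff |= (uint8_t)(a[i] ^ b[i]);
    return diff == 0;
}

bool accept_command(const motion_cmd_t *cmd,
                    const uint8_t *key, size_t key_len,
                    uint32_t *last_seq)
{
    uint8_t expected[MAC_LEN];

    /* Authenticate before any safety-relevant state is touched. */
    hmac_sha256(key, key_len, (const uint8_t *)cmd,
                offsetof(motion_cmd_t, mac), expected);
    if (!mac_equal(expected, cmd->mac))
        return false;               /* forged or corrupted */
    if (cmd->seq <= *last_seq)
        return false;               /* replayed            */

    *last_seq = cmd->seq;
    return true;
}
```

The constant-time comparison matters: a naive memcmp() can reveal, byte by byte, how much of a forged MAC was correct.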
Mixed-Criticality: One System, Multiple Trust Levels
Robots and cobots are no longer single-purpose machines. They integrate:
- Safety-critical motion control
- Real-time sensor processing
- AI-based perception and decision-making
- Connectivity, visualization, and user interfaces
Not all these functions share the same criticality level. The challenge lies in enabling mixed-criticality systems on shared hardware—without allowing non-critical software to interfere with safety-relevant functionality.
This is where strong isolation mechanisms become essential.
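To illustrate temporal partitioning (the idea, not any particular RTOS's configuration format), the following C sketch models a fixed major frame in which every partition owns a guaranteed CPU window. All partition names and budgets are invented for illustration.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    const char *name;
    uint32_t window_us;       /* guaranteed budget per major frame */
    int      safety_critical; /* 1 = certified partition           */
} partition_t;

/* Fixed at integration time; non-critical partitions can never
 * consume the budget reserved for the safety partitions. */
static const partition_t schedule[] = {
    { "motion_control",  2000, 1 },  /* certified, runs first        */
    { "sensor_fusion",   1500, 1 },
    { "ai_perception",   5000, 0 },  /* best-effort, strictly bounded */
    { "ui_connectivity", 1500, 0 },
};

int main(void)
{
    uint32_t frame_us = 0;

    /* The major frame is the sum of all windows; because it is fixed,
     * a runaway AI workload can never starve the safety partition. */
    for (size_t i = 0; i < sizeof schedule / sizeof schedule[0]; i++) {
        printf("%-16s %5u us %s\n", schedule[i].name,
               (unsigned)schedule[i].window_us,
               schedule[i].safety_critical ? "(safety)" : "");
        frame_us += schedule[i].window_us;
    }
    printf("major frame length: %u us\n", (unsigned)frame_us);
    return 0;
}
```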
Partitioning and Microkernel Architectures
A microkernel-based operating system with spatial and temporal partitioning provides a robust architectural foundation:
- Safety-critical functions are isolated from non-safety software
- Faults or compromises in one partition cannot propagate to others
- Attack surfaces are reduced by design
Such architectures enable scenarios where, for example:
- A safety-certified real-time partition controls motion and emergency functions
- AI or robotics frameworks run in separate, non-safety partitions
- Communication between partitions is strictly controlled and validated
This architectural separation is a key enabler for certification, security evaluation, and long-term system evolution.
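As a hedged sketch of the third point, the snippet below shows a safety partition treating everything that arrives from a non-safety partition as untrusted input. ipc_receive() is a placeholder for the microkernel's actual IPC primitive, and the message format and limits are assumptions made for this example.

```c
#include <stdbool.h>
#include <stdint.h>

#define MSG_MAGIC  0x434D4431u  /* "CMD1": layout/version check      */
#define V_MAX_MM_S 250          /* max allowed speed request [mm/s] */

typedef struct {
    uint32_t magic;
    int32_t  velocity_mm_s;     /* requested tool speed */
} partition_msg_t;

/* Assumed kernel primitive: copies at most 'len' bytes from the
 * sending partition's queue, returns bytes received or -1. */
extern int ipc_receive(int port, void *buf, uint32_t len);

bool next_velocity_request(int port, int32_t *out_mm_s)
{
    partition_msg_t msg;

    if (ipc_receive(port, &msg, sizeof msg) != (int)sizeof msg)
        return false;           /* short or missing message */
    if (msg.magic != MSG_MAGIC)
        return false;           /* wrong protocol version   */
    if (msg.velocity_mm_s < 0 || msg.velocity_mm_s > V_MAX_MM_S)
        return false;           /* outside the safe envelope */

    *out_mm_s = msg.velocity_mm_s;  /* accepted, bounded value */
    return true;
}
```

The key design choice is that validation happens on the receiving, certified side: even a fully compromised non-safety partition can only ever submit well-formed, range-checked requests.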
Formal Methods and Process-Driven Safety
Functional safety standards emphasize systematic fault avoidance, which is why development is highly process-driven. Formal methods add another dimension to this approach.
Formal verification allows developers to mathematically prove that certain classes of errors cannot occur. However, this is not something that can be applied retroactively. To be effective, formal methods must be embedded into the design philosophy from the start.
In practice:
- Some components are designed specifically to enable formal verification
- Formal proofs complement—not replace—process-based certification
- The result is often higher confidence and, in some cases, greater development efficiency
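As a small illustration of the first point, here is what a component designed for verification can look like, using ACSL contracts of the kind checked by tools such as Frama-C. The function, its name, and its limits are invented for this example.

```c
#include <stdint.h>

/*@ requires lo <= hi;
    assigns \nothing;
    ensures lo <= \result && \result <= hi;
    ensures (lo <= v && v <= hi) ==> \result == v;
*/
int32_t clamp_torque(int32_t v, int32_t lo, int32_t hi)
{
    /* A prover can show the postconditions hold on every path, so
     * "output outside [lo, hi]" becomes a class of error that
     * provably cannot occur. */
    if (v < lo) return lo;
    if (v > hi) return hi;
    return v;
}
```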
Human–Robot Interaction: No Compromise on Safety
When humans and robots share physical space, safety requirements always take priority. Productivity, speed, and adaptability are important—but never at the expense of human safety.
System designers must:
- Define clear safety goals based on context, motion, force, and reach
- Use sensors, force feedback, vision, and low-latency communication to enforce safe behavior
- Architect systems so that safety mechanisms cannot be bypassed or weakened
There is no “balancing” of safety against other objectives. Safety defines the boundary conditions within which optimization can occur.
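As one concrete illustration of such a safety goal, the sketch below computes a simplified protective separation distance in the spirit of ISO/TS 15066 speed-and-separation monitoring. The formula is deliberately reduced and the parameters are placeholders; a real system derives them from a validated risk assessment.

```c
#include <stdbool.h>

typedef struct {
    double v_human;   /* assumed human approach speed   [m/s] */
    double v_robot;   /* robot speed toward the human   [m/s] */
    double t_react;   /* detection + reaction time      [s]   */
    double t_stop;    /* robot stopping time            [s]   */
    double margin;    /* measurement/intrusion margin   [m]   */
} ssm_params_t;

/* Minimum separation that must hold at every control cycle:
 * human travel during reaction and stopping, plus robot travel
 * during reaction, plus a simplified deceleration ramp. */
double protective_distance(const ssm_params_t *p)
{
    return p->v_human * (p->t_react + p->t_stop)
         + p->v_robot * p->t_react
         + 0.5 * p->v_robot * p->t_stop
         + p->margin;
}

bool must_slow_or_stop(double measured_m, const ssm_params_t *p)
{
    return measured_m <= protective_distance(p);
}
```

Note the direction of the logic: the threshold is derived from worst-case assumptions, and optimization happens only inside the region the check declares safe.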
AI in Safety-Critical Robotics: Promise and Complexity
Artificial intelligence is transforming robotics, particularly in perception, autonomy, and sensor fusion. However, integrating AI into safety-certified systems introduces significant challenges.
Two aspects must be distinguished:
- Functional Safety: Concerns failures of components (e.g., sensors, processors) and how they are monitored and mitigated.
- Safety of the Intended Functionality (SOTIF, cf. ISO 21448): Addresses whether a system behaves safely even when all components function correctly—especially relevant for AI-based perception and decision-making.
AI models depend on training data, assumptions, and probabilistic behavior. Defining acceptable safety metrics for such systems remains an open challenge across industries.
Architectural isolation again plays a key role: AI-based functions can be separated from safety-critical control logic, enabling innovation without undermining certification goals.
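One common pattern for this separation is a doer/checker split: the AI partition proposes motions, and a small, certifiable monitor in the safety partition accepts or rejects them. The sketch below uses invented limits and type names.

```c
#include <math.h>
#include <stdbool.h>

typedef struct {
    double vx, vy, vz;   /* AI-proposed Cartesian velocity [m/s] */
} ai_proposal_t;

/* Illustrative collaborative speed limit near a human [m/s]. */
#define V_LIMIT_HUMAN_NEAR 0.25

bool check_proposal(const ai_proposal_t *p, bool human_in_workspace)
{
    /* Magnitude of the proposed velocity vector. */
    double speed = sqrt(p->vx * p->vx + p->vy * p->vy + p->vz * p->vz);

    /* The checker stays simple enough to certify, and the AI
     * partition has no path to the actuators that bypasses it. */
    if (human_in_workspace && speed > V_LIMIT_HUMAN_NEAR)
        return false;    /* reject: fall back to a safe stop */

    return true;
}
```

The AI model can then evolve, be retrained, or even fail in unforeseen ways without touching the certified argument, which rests entirely on the checker.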
Connectivity Expands the Attack Surface
As robots become more connected—via Ethernet, 5G, Wi-Fi, and cloud services—their attack surface grows significantly:
- Mobile robots and cobots communicate wirelessly
- Remote updates and diagnostics become common
- Home and public environments are far less controlled than factory floors
Strong separation, minimized interfaces, and certified security mechanisms are essential to prevent scenarios where large fleets of robots could be compromised simultaneously.
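One building block for safe remote updates is sketched below: the robot accepts an image only if it verifies against a vendor key held in immutable storage, and never downgrades. verify_ed25519() stands in for a vetted crypto library; no specific vendor API is implied.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define SIG_LEN 64
#define KEY_LEN 32

/* Assumed to come from a certified crypto implementation. */
extern bool verify_ed25519(const uint8_t pubkey[KEY_LEN],
                           const uint8_t *msg, size_t msg_len,
                           const uint8_t sig[SIG_LEN]);

/* Baked into ROM or fuses at manufacturing time (illustrative). */
extern const uint8_t vendor_pubkey[KEY_LEN];

bool apply_update(const uint8_t *image, size_t image_len,
                  const uint8_t sig[SIG_LEN],
                  uint32_t image_version, uint32_t installed_version)
{
    /* Reject downgrades so a fleet cannot be rolled back to a
     * known-vulnerable release. */
    if (image_version <= installed_version)
        return false;

    /* Verify authenticity before a single byte is executed. */
    if (!verify_ed25519(vendor_pubkey, image, image_len, sig))
        return false;

    /* ... write to the inactive slot, re-verify, then switch over ... */
    return true;
}
```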
Looking Ahead: Regulation, Responsibility, and Design Discipline
Robotics and AI will undoubtedly reshape industry and society over the next decade. The challenge is not whether these technologies will advance—but how responsibly they are integrated.
Key principles will define success:
- Safety and security must be designed in, not added later
- Architecture matters as much as algorithms
- Certification, regulation, and engineering discipline remain essential—even in the age of AI
Trustworthy robotic systems are not created by chance. They are the result of deliberate design choices made at the very foundation of the system.