Nikolakis et al. (2019) conducted research explaining that modern manufacturing systems need high levels of automation for low-cost, fast production, but also increased adaptability and flexibility to meet dynamic production requirements. According to the authors, human-robot interaction enables closer collaboration in assembly tasks that demand significant productivity, by combining robot performance with the dexterity and flexibility of human workers. The key problem, however, is ensuring human safety while working near robots. To establish safe collaboration between robots and humans, several software and physical systems have to be integrated. The research describes a cyber-physical system for enabling and controlling such assembly functions (Nikolakis et al., 2019).
According to the authors, the approach considers a fenceless workplace in which automated guided vehicles, moving objects, industrial robots, and humans may operate, with optical sensors monitoring the working space. Human safety in this context is a significant concern. The paper focuses on a cyber-physical system that permits human-robot collaboration based on real-time analysis of a safety distance within a closed-loop control scheme, triggering preventive actions when that distance is violated. A preliminary implementation is analyzed in a laboratory machine-shop use case with an industrial robot, and its outcomes are compared for the use of a single sensor versus several. The comparison is carried out in terms of the system's response time in detecting human presence within a predefined safety zone (Nikolakis et al., 2019).
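A minimal sketch of that closed-loop safety check, assuming hypothetical robot and sensor interfaces and an illustrative threshold (the original paper does not publish code):

```python
import math

SAFETY_DISTANCE_M = 1.2  # hypothetical safety-zone radius; set by risk assessment

def min_human_distance(robot_pos, human_positions):
    """Smallest Euclidean distance between the robot and any tracked human."""
    return min(math.dist(robot_pos, h) for h in human_positions)

def control_step(robot, sensor):
    """One iteration of the closed loop: measure, compare, act."""
    humans = sensor.tracked_humans()   # human positions from the optical sensors
    if humans and min_human_distance(robot.position(), humans) < SAFETY_DISTANCE_M:
        robot.stop()                   # preventive action: halt motion in the zone
    else:
        robot.resume()                 # zone is clear: continue the assembly task
```

In such a scheme the loop rate, and hence the response time the study measures, is bounded by how quickly the sensors can report fresh human positions.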
Most current studies seek to improve safety in specific interactive situations, particularly failure prevention or collision avoidance. Collision-avoidance techniques aim to prevent robots from contacting nearby objects. Avoiding contact with humans, especially in cooperative situations, has been shown to require a higher level of perception than avoiding other static or dynamic bodies (Heinzmann and Zelinsky, 2003). This has led authors to suggest that HRI safety requires both fast reaction times and highly precise sensory information when working with machines (Kulic and Croft, 2007; Giuliani et al., 2010).
Alexander et al. (2009) suggest that for autonomous systems to support humans as partners while maintaining strong safety, robot actions may have to be limited, which prevents ideal performance and flexibility. Beyond collision avoidance, plans have to be made for integrating post-contact mitigation into the avoidance scheme. Ikuta et al. (2003) indicated that when developing a robot and designing its control systems, it is important to recognize the safety implications of the robot's physical components. Among the requirements for autonomous robots such as those discussed in the study is a certain standard of robustness: the capability to handle errors and continue operating under unusual conditions.
To achieve this, the system must be capable of supporting changes to the task specification (Bonasso and Kortenkamp, 1996). Such changes are important because a robot in a dynamic environment will often find itself in unique situations. Much of the research has addressed this problem with learning algorithms, typically implemented as artificial neural networks (ANNs) (Larsen and Hansen, 2005). However, as Nehmzow et al. (2004) identify, these implementations are difficult to analyze because of the inherent opacity of connectionist algorithms, which makes it hard to build a model of the system's structure that can be used in safety analysis.
Bensalem et al. (2009) indicated that a hierarchical approach to system safety, with separate control layers providing safety supervision, task execution, and planning, can improve the reliability and dependability of an autonomous robot system. This work draws broadly on the behaviour-based techniques developed by Rodney Brooks, who introduced the subsumption architecture (Brooks, 1999) to demonstrate how simple behaviours can be combined to produce complex behaviour. In addition, abstracting the functional elements of the robot control software enables modules to be added, amended, and removed while retaining the robot's control functionality. A notable example of this approach is the LAAS architecture (Bensalem et al., 2009).
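A minimal sketch of the subsumption idea, with hypothetical behaviours not taken from the cited works: behaviours are stacked in priority order, and a higher layer suppresses, or subsumes, the output of the layers below it.

```python
class Behaviour:
    """A behaviour proposes a motor command, or None if it is inactive."""
    def act(self, sensors):
        raise NotImplementedError

class Wander(Behaviour):
    def act(self, sensors):
        return "move_forward"  # lowest layer: always has something to do

class AvoidObstacle(Behaviour):
    def act(self, sensors):
        return "turn_left" if sensors.get("obstacle_ahead") else None

class EmergencyStop(Behaviour):
    def act(self, sensors):
        return "stop" if sensors.get("human_too_close") else None

# Highest-priority layer first; the first active layer subsumes those below.
LAYERS = [EmergencyStop(), AvoidObstacle(), Wander()]

def arbitrate(sensors):
    for layer in LAYERS:
        command = layer.act(sensors)
        if command is not None:
            return command

print(arbitrate({"human_too_close": True}))  # -> "stop"
print(arbitrate({}))                         # -> "move_forward"
```

Because each behaviour is a self-contained module, one can be added or removed without rewriting the others, which is the modularity property noted above.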
The control software is divided into three levels: functional, execution, and decisional. A distinctive feature of the LAAS architecture is the functional level, which encapsulates the sensor groups into modules that can interact with other modules through a service link. Another significant feature of this architecture is the execution-control level (Ingrand and Py, 2002). This level executes synchronously and is responsible for identifying faults that might arise from executing a request; when a potential fault is detected, the request is not passed down to the functional level.
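A minimal sketch of that gating step, assuming hypothetical request fields and constraints (the actual LAAS execution controller enforces formally specified constraints):

```python
from dataclasses import dataclass

@dataclass
class Request:
    action: str     # e.g. "move", issued by the decisional level
    speed: float    # commanded speed in m/s

def violates_constraints(req, state):
    """Hypothetical safety checks applied before execution."""
    if req.action == "move" and state["human_in_workspace"]:
        return True                        # would risk a collision
    return req.speed > state["max_safe_speed"]

def execution_control(req, state, functional_level):
    """Synchronous gate between the decisional and functional levels."""
    if violates_constraints(req, state):
        return "rejected"                  # potential fault: not passed down
    functional_level.execute(req)          # safe: forward to the modules
    return "forwarded"
```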
Alami et al. (2006) identify several European robot manufacturers that have added safety modules which monitor, via external sensing, the space around the robot for potential dangers. This kind of system is referred to as a safety protection system (Fuller and Vassie, 2004). Wozniak et al. (2007) implemented a practical example for handling a high-powered laser, and the study determined that a design which segregated safety from control made the system more configurable and allowed the safety parameters to be extended to meet future requirements.
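A minimal sketch of that separation, with hypothetical names and limits not taken from the cited study: the safety monitor owns its configurable parameters and can veto commands, while the controller contains no safety rules of its own.

```python
from dataclasses import dataclass

@dataclass
class SafetyParams:
    """Configurable limits, kept apart from the control logic."""
    max_power_w: float = 50.0
    max_exposure_s: float = 2.0

class SafetyMonitor:
    def __init__(self, params):
        self.params = params  # re-tunable without touching the controller

    def permits(self, power_w, exposure_s):
        return (power_w <= self.params.max_power_w
                and exposure_s <= self.params.max_exposure_s)

class LaserController:
    """Control logic: issues commands but holds no safety rules itself."""
    def __init__(self, monitor):
        self.monitor = monitor

    def fire(self, power_w, exposure_s):
        if not self.monitor.permits(power_w, exposure_s):
            return "blocked by safety monitor"
        return f"firing at {power_w} W for {exposure_s} s"

controller = LaserController(SafetyMonitor(SafetyParams()))
print(controller.fire(power_w=40.0, exposure_s=1.0))  # permitted
print(controller.fire(power_w=80.0, exposure_s=1.0))  # vetoed by the monitor
```

Extending the safety envelope then only means changing SafetyParams, which is the configurability benefit the study reports.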