Abstract
Future technological advancements will see robotics as a leading driver of change. Since the 1960s, robots have been used predominantly in manufacturing, with the automotive industry as the main customer. Today, their applications span various fields, from factories, storage facilities, transportation and households to underwater and even outer-space environments, where they perform a multitude of tasks. Even though these tasks are combinations of basic operations such as inspection, grasping, lifting, catching and carrying, complex and unstructured environments and diverse robot structures introduce multiple challenges. Autonomous operation requires addressing these challenges reliably, accurately and efficiently for successful task completion. Sensing plays a crucial role in achieving autonomy. Among the various sensor modalities, vision is the most vital, as it provides rich information about the environment a robot perceives. The introduction of visual feedback from perceived images gives a robot a powerful means to perform accurate positioning tasks. Vision-based autonomous control of robotic systems has been a field of interest in robotics for many years. Image-Based Visual Servoing (IBVS) is one such method; it autonomously drives the robot to the desired pose and provides a more robust grasp than conventional control schemes. Robots with diverse architectures pose different challenges for visual servoing and require attention. These robotic systems include industrial robots, mobile robots, underactuated systems such as quadrotors, and redundant systems such as humanoids and multi-arm robots. Many application scenarios involve tasks that cannot be executed by a single-arm robot and demand coordinated control strategies. For such tasks beyond the capabilities of a single arm, multi-arm robotic systems with enhanced control strategies become indispensable. Hence, multi-arm robotic systems are receiving increasing attention for performing multiple tasks and dextrous operations, making them the primary focus of this study.
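For context, the classical IBVS formulation (quoted here in its standard textbook form, not as the specific design of this thesis) regulates an image feature error to zero:
\[
\mathbf{e}(t) = \mathbf{s}(t) - \mathbf{s}^{*}, \qquad
\dot{\mathbf{e}} = \mathbf{L}_{\mathbf{s}}\,\mathbf{v}_{c}, \qquad
\mathbf{v}_{c} = -\lambda\,\widehat{\mathbf{L}}_{\mathbf{s}}^{+}\,\mathbf{e},
\]
where $\mathbf{s}$ collects the measured image features, $\mathbf{s}^{*}$ their desired values, $\mathbf{L}_{\mathbf{s}}$ is the interaction matrix relating feature velocities to the camera velocity $\mathbf{v}_{c}$, $\widehat{\mathbf{L}}_{\mathbf{s}}^{+}$ is the pseudo-inverse of its estimate, and $\lambda > 0$ is a control gain.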
Even though various solutions have been proposed for vision-based control, significant challenges remain for multi-arm robotic systems operating in complex environments. Driven by this motivation, this work attempts to develop a generic framework for visual servoing of a multi-arm robotic system in the presence of task constraints. An analytical framework for the kinematics of fixed-base and free-floating multi-arm robotic systems is proposed for executing IBVS. It is worth mentioning that, while designing IBVS for multi-arm robotic systems, a novel concept of reactionless visual servoing is introduced, which exploits the system's redundancy to perform multiple tasks.
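As an illustrative sketch only (assuming a free-floating base with zero initial momentum, and following the well-known reaction null-space idea rather than the exact formulation developed in this work), momentum conservation couples base and manipulator motion as
\[
\mathbf{H}_{b}\,\dot{\mathbf{x}}_{b} + \mathbf{H}_{bm}\,\dot{\boldsymbol{\phi}} = \mathbf{0},
\]
so the base remains undisturbed whenever the joint velocities satisfy $\mathbf{H}_{bm}\dot{\boldsymbol{\phi}} = \mathbf{0}$, i.e.
\[
\dot{\boldsymbol{\phi}} = \left(\mathbf{I} - \mathbf{H}_{bm}^{+}\mathbf{H}_{bm}\right)\boldsymbol{\zeta},
\]
where $\mathbf{H}_{b}$ and $\mathbf{H}_{bm}$ denote the base and coupling inertia matrices, $\dot{\mathbf{x}}_{b}$ the base velocity, $\dot{\boldsymbol{\phi}}$ the joint velocities, and the free vector $\boldsymbol{\zeta}$ represents the redundancy that can be assigned to the visual servoing task.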
One of the primary concerns in any IBVS scheme is the uncertainty in acquiring precise knowledge of the environment due to unreliable sensor inputs, noise, varying lighting conditions and occlusions. Existing solutions rely on filtering, feature extraction, matching and real-time tracking to alleviate these concerns. However, incorporating multiple such methods and their additional computation imposes extra overhead on visual servoing. In many computer vision applications, probabilistic techniques help achieve desirable performance under similar conditions. Therefore, an effort has been made to investigate visual servoing through this paradigm. A novel approach to visual servoing is proposed using Student's t-distribution based mixture models that treat the whole image as a probability distribution. The introduced probabilistic model-based approach provides efficient servoing and performs favourably compared with many commonly used dense visual servoing methods reported in the literature.
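For reference, a generic Student's t mixture model over $d$-dimensional image measurements $\mathbf{x}$ (written here only as a hedged sketch of the family of models involved, not as the exact model of this work) has the density
\[
p(\mathbf{x}) = \sum_{k=1}^{K} \pi_{k}\, \mathrm{St}\!\left(\mathbf{x} \mid \boldsymbol{\mu}_{k}, \boldsymbol{\Sigma}_{k}, \nu_{k}\right),
\qquad
\mathrm{St}\!\left(\mathbf{x} \mid \boldsymbol{\mu}, \boldsymbol{\Sigma}, \nu\right)
= \frac{\Gamma\!\left(\frac{\nu+d}{2}\right)}
{\Gamma\!\left(\frac{\nu}{2}\right)(\nu\pi)^{d/2}\,|\boldsymbol{\Sigma}|^{1/2}}
\left[1 + \frac{\delta(\mathbf{x})}{\nu}\right]^{-\frac{\nu+d}{2}},
\]
with $\delta(\mathbf{x}) = (\mathbf{x}-\boldsymbol{\mu})^{\top}\boldsymbol{\Sigma}^{-1}(\mathbf{x}-\boldsymbol{\mu})$, mixing weights $\pi_{k}$ and degrees of freedom $\nu_{k}$; the heavier tails, compared with a Gaussian mixture, are what make such models attractive under noise and outliers.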
Another contribution is the development of an approach for servoing towards a non-cooperative tumbling object. Although strategies for capturing fixed and moving objects are well studied, existing methods for capturing tumbling objects rely on complex reconstruction and motion estimation techniques. Visual servoing, on the other hand, is less complicated and provides efficient autonomous motion of a robot towards a fixed object. These benefits motivated the choice of IBVS for capturing tumbling objects. Unlike standard approaches, the proposed technique does not require estimation of the inertia, centroid, angular velocity or orientation of the unknown tumbling object. The elliptical motion exhibited by features on a tumbling object is exploited to develop an enhanced IBVS framework. The approach is generic enough to be extended to any robot servoing towards a tumbling object, and space debris capture is one of the potential areas of application. The proposed visual servoing framework is demonstrated using both manipulator and mobile robotic systems.
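A minimal sketch of how elliptical feature motion can be parameterised (an illustrative form, assuming an approximately periodic tumble as seen in the image; the exact construction in this thesis may differ):
\[
\mathbf{s}(t) = \mathbf{c} + a\cos(\omega t + \varphi)\,\mathbf{u} + b\sin(\omega t + \varphi)\,\mathbf{w},
\]
where $\mathbf{c}$ is the ellipse centre in the image plane, $\mathbf{u}$ and $\mathbf{w}$ are unit vectors along its axes, $a$ and $b$ the semi-axes, and $\omega$ the apparent rate of the motion; fitting such a conic to the observed feature trajectory is one way to anticipate where the features will be without reconstructing the object's three-dimensional state.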
Task constraints, such as image plane limits, joint limits, kinodynamic limits of motors, kinema